diff --git a/README.md b/README.md index 968c8434a1..7fbc7a5f36 100644 --- a/README.md +++ b/README.md @@ -51,113 +51,117 @@ LCTT的组成 * 2014/12/25 提升runningwater为Core Translators成员。 * 2015/04/19 发起 LFS-BOOK-7.7-systemd 项目。 * 2015/06/09 提升ictlyh和dongfengweixiao为Core Translators成员。 +* 2015/11/10 提升strugglingyouth、FSSlc、Vic020、alim0x为Core Translators成员。 活跃成员 ------------------------------- 目前 TP 活跃成员有: - CORE @wxy, -- CORE @carolinewuyan, - CORE @DeadFire, - CORE @geekpi, - CORE @GOLinux, -- CORE @reinoir, -- CORE @bazz2, -- CORE @zpl1025, - CORE @ictlyh, -- CORE @dongfengweixiao +- CORE @carolinewuyan, +- CORE @strugglingyouth, +- CORE @FSSlc +- CORE @zpl1025, +- CORE @runningwater, +- CORE @bazz2, +- CORE @Vic020, +- CORE @dongfengweixiao, +- CORE @alim0x, +- Senior @reinoir, - Senior @tinyeyeser, - Senior @vito-L, - Senior @jasminepeng, - Senior @willqian, - Senior @vizv, -- @ZTinoZ, -- @Vic020, -- @runningwater, -- @KayGuoWhu, -- @luoxcat, -- @alim0x, -- @2q1w2007, -- @theo-l, -- @FSSlc, -- @su-kaiyao, -- @blueabysm, -- @flsf, -- @martin2011qi, -- @SPccman, -- @wi-cuckoo, -- @Linchenguang, -- @linuhap, -- @crowner, -- @Linux-pdz, -- @H-mudcup, -- @yechunxiao19, -- @woodboow, -- @Stevearzh, -- @disylee, -- @cvsher, -- @wwy-hust, -- @johnhoow, -- @felixonmars, -- @TxmszLou, -- @shipsw, -- @scusjs, -- @wangjiezhe, -- @hyaocuk, -- @MikeCoder, -- @ZhouJ-sh, -- @boredivan, -- @goreliu, -- @l3b2w1, -- @JonathanKang, -- @NearTan, -- @jiajia9linuxer, -- @Love-xuan, -- @coloka, -- @owen-carter, -- @luoyutiantang, -- @JeffDing, -- @icybreaker, -- @tenght, -- @liuaiping, -- @mtunique, -- @rogetfan, -- @nd0104, -- @mr-ping, -- @szrlee, -- @lfzark, -- @CNprober, -- @DongShuaike, -- @ggaaooppeenngg, -- @haimingfg, -- @213edu, -- @Tanete, -- @guodongxiaren, -- @zzlyzq, -- @FineFan, -- @yujianxuechuan, -- @Medusar, -- @shaohaolin, -- @ailurus1991, -- @liaoishere, -- @CHINAANSHE, -- @stduolc, -- @yupmoon, -- @tomatoKiller, -- @zhangboyue, -- @kingname, -- @KevinSJ, -- @zsJacky, -- 
@willqian, -- @Hao-Ding, -- @JygjHappy, -- @Maclauring, -- @small-Wood, -- @cereuz, -- @fbigun, -- @lijhg, -- @soooogreen, +- ZTinoZ, +- theo-l, +- luoxcat, +- disylee, +- wi-cuckoo, +- haimingfg, +- KayGuoWhu, +- wwy-hust, +- martin2011qi, +- cvsher, +- su-kaiyao, +- flsf, +- SPccman, +- Stevearzh +- Linchenguang, +- oska874 +- Linux-pdz, +- 2q1w2007, +- felixonmars, +- wyangsun, +- MikeCoder, +- mr-ping, +- xiqingongzi +- H-mudcup, +- zhangboyue, +- goreliu, +- DongShuaike, +- TxmszLou, +- ZhouJ-sh, +- wangjiezhe, +- NearTan, +- icybreaker, +- shipsw, +- johnhoow, +- linuhap, +- boredivan, +- blueabysm, +- liaoishere, +- yechunxiao19, +- l3b2w1, +- XLCYun, +- KevinSJ, +- tenght, +- coloka, +- luoyutiantang, +- yupmoon, +- jiajia9linuxer, +- scusjs, +- tnuoccalanosrep, +- woodboow, +- 1w2b3l, +- crowner, +- mtunique, +- dingdongnigetou, +- CNprober, +- JonathanKang, +- Medusar, +- hyaocuk, +- szrlee, +- Xuanwo, +- nd0104, +- xiaoyu33, +- guodongxiaren, +- zzlyzq, +- yujianxuechuan, +- ailurus1991, +- ggaaooppeenngg, +- Ricky-Gong, +- lfzark, +- 213edu, +- Tanete, +- liuaiping, +- jerryling315, +- tomatoKiller, +- stduolc, +- shaohaolin, +- Timeszoro, +- rogetfan, +- FineFan, +- kingname, +- jasminepeng, +- JeffDing, +- CHINAANSHE, +(按提交行数排名前百) LFS 项目活跃成员有: @@ -169,7 +173,7 @@ LFS 项目活跃成员有: - @KevinSJ - @Yuking-net -(更新于2015/06/09,以Github contributors列表排名) +(更新于2015/11/29) 谢谢大家的支持! 
diff --git a/published/201407/Encrypting Your Cat Photos.md b/published/201407/Encrypting Your Cat Photos.md old mode 100755 new mode 100644 diff --git a/published/201505/20150326 How to set up server monitoring system with Monit.md b/published/201505/20150326 How to set up server monitoring system with Monit.md old mode 100755 new mode 100644 diff --git a/published/20150921 14 tips for teaching open source development.md b/published/20150921 14 tips for teaching open source development.md new file mode 100644 index 0000000000..2d5cfaf302 --- /dev/null +++ b/published/20150921 14 tips for teaching open source development.md @@ -0,0 +1,88 @@ +在大学培养学生们参与开源代码开发的十四个技巧 +================================================================================ + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDU_TeachingOS.png) + +学术界是培养和塑造未来的开源开发者的最佳平台。研究中发现,我们偶尔会开源自己编写的软件。这样做有两个理由,一是为了推广自己编写的工具的使用,二是为了了解人们使用这些工具时会遇到哪些问题。在这样一个编写研究软件的背景下,我的任务就是为 Bradford 大学重新设计二年级的本科软件工程课程。 + +这是一个挑战,因为我所面对的 80 个学生是来自不同专业的,包括 IT、商务计算和软件工程,这些学生将要在一起上课。最有难度的是,需要和这些编程经验差距很大的学生一起编写代码。按照传统,该课程允许学生选择自己的小组,然后给他们布置构建一个加油站数据库系统的任务,最后提交报告作为评估的一部分。 + +而我决定重新设计课程,让学生了解现实中的软件团队是如何协作的过程。根据学生的专业和编程技能,我将他们分为五、六个人一组。这是为了确保每个小组的整体水平相当,避免小组之间的不等。 + +### 核心课程 ### + +课程的形式改为讲座和实践课两项结合在一起。然而实践课作为指导过程,主要是老师监督各个小组的实践进度以及他们如何处理客户和产品之间的关系。而传统的教学方式由项目管理、软件测试、工程需求分析以及类似主题的讲座组成,再辅以实践和导师会议。这些会议可以很好的考核学生的水平以及检测出他们是否可以跟得上我们在讲座部分中的软件工程方法。本年的教学主题包括以下内容: + +- 工程需求分析 +- 如何与客户及其他团队成员互动 +- 程序设计方法,如敏捷和极限编程方法 +- 如何通过学习不同的软件工程方法进行短期的水平提高 +- 小组会议及文档编写 +- 项目管理及项目进展图表(甘特图) +- UML 图表及系统描述 +- 使用 Git 来进行代码的版本控制 +- 软件测试及 BUG 跟踪 +- 使用开源库 +- 开源代码许可及其选择 +- 软件交付 + +在这些讲座之后,会有一些来自世界各地的嘉宾为我们说说他们在软件交付过程中的经验。我们也设法请来大学里知识产权律师谈关于软件在英国的知识产权问题,以及如何处理软件的知识产权问题。 + +### 协作工具 ### + +为了让上述教学内容的顺利进行,我们将会引入一些工具,并训练学生在他们的项目中使用这些工具。如下: + +- Google Drive:团队与导师之间进行共享的工具,暂时存储用于描述项目的文档和图表、需求收集、会议纪要以及项目时间跟踪等信息。采取这样一个方式来监控并提供直接反馈到每个团队,是非常有效的。 +- [Basecamp][1]:同样是用于分享文档,在随后的课程中,我们可能会考虑用它取代 Google Drive。 +- BUG 报告工具,如 
[Mantis][2]:只能让有限的用户免费提交 BUG。稍后我们提到的 Git 可以让小组内的所有人员用做 BUG 提交。 +- 远程视频会议工具:在人员不在校内,甚至去了其他城市的情况下使用。学生们可以定期通过 Skype 来交流并记录会议内容或者进行录音以作今后他用。 +- 同时,学生们的项目中还会用到大量的开源工具包。他们可以根据自己小组的项目需求来选择自己使用的工具包和编程语言。唯一的条件是,这些项目必须开源,最后成果可以安装到大学里的实验室,并且大多的研究人员都非常支持这个条件。 +- 最后,所有团队必须向客户交付他们的项目,包括完整的可以工作的软件版本、文档和他们自己选择的开放源码许可。大多数的团队选择了 GPLv3 许可证。 + +### 技巧和经验教训 ### + +最后,这一年过得很愉快,并且所有学生的项目都做得非常棒。这里有一些我学到的经验教训,可能有助于提高明年的课程质量: + +1. 提供各种各样有趣的项目给学生选择。比如说,游戏开发或者移动应用开发以及完成各种目标的项目等。建立普通的数据库系统已经不能提起学生的兴趣了,而参与到有趣的项目中去,学生本身就是自学者,同时可以帮助解决小组成员和小组之间的常见问题。再通过一个消息列表,学生们发表他们在测试中遇到的任何问题,以寻求其他人的帮助建议。然而,这种方法有一个缺点。外部考官建议我们使用同一种类型的项目和统一的编程语言,以帮助统一对学生的评估标准。 + +2. 定期对学生在每一个阶段的表现进行反馈。比方说,可以在和各个小组开指导会议的时候,或者每个阶段结束时进行反馈,以帮助他们在接下来的工作中自我改进。 + +3. 学生更加愿意与校外的客户一起协作。他们期待着与外部公司代表或校外人员协作,哪怕只是为了获得新体验。与导师进行交流时,他们都能够表现得很专业,这样使得老师非常放心。 + +4. 很多团队将开发单元测试的部分放到项目结束之后,从极限编程方法的角度来说,这是一个严重的禁忌。也许测试应包括在不同阶段的评估中,来提醒他们需要并行开展软件开发和单元测试。 + +5. 在这个班的 80 个人里边,仅有 4 个女生,每个女生都分在不同的小组里边。我观察到,男生们总是充分准备好来承担起领队角色,并将最有趣的代码部分留给他们自己来编写,女生则大多遵循安排或者是编写文档。出于某种原因,女生选择不出头,即使在女性辅导员鼓励下,她们也不愿编写代码。这仍然是一个需要解决的主要问题。 + +6. 允许不同风格的项目文档,比方说,UML 图表、状态图或其他形式的。让学生学习这些并与其他课程融会贯通,以提高他们的学习经验。 + +7. 学生里边,有些是很好的开发人员,有些做商务计算的则没有多少编程经验。我们要鼓励团队共同努力,避免产生开发人员比那些只做会议记录或文档的其他成员做得更好的错误认知。我们常在辅导课程中鼓励角色转换,让每个人都有机会学习如何编程。 + +8. 小组与导师每周见面沟通是非常重要的,可以有效监督各个小组进展情况,还可以了解是谁做了大部分工作。通常,没来参加会议的小组成员基本就是没有参与到他们的团队工作中去的,并且通过其他成员所提交的工作报告也可以确定哪些人不活跃。 + +9. 我们鼓励学生们把许可证附加到项目中去,在使用外部库以及和客户协作的时候要明确知识产权问题。这样可以让他们打破陈规,开拓思维,并了解真实的软件交付问题。 + +10. 给学生们自己选择所用技术的空间。 + +11. 助教是关键。同时管理 80 个学生显然很有难度,特别是需要对他们进行评估的那几周。明年我一定会找个助教来帮我一起管理各个小组。 + +12. 实验室的技术支持是非常重要的。大学里的技术支持人员对于本课程是非常赞同的。他们正在考虑明年将虚拟机分配给每个团队,这样每个团队可以根据需要自行在虚拟机中安装任何软件。 + +13. 团队合作,相互帮助。大多数团队自然而然地支持其他团队成员,同时指导员在中间也帮助了不少。 + +14. 
来自其他同事的帮助会锦上添花。作为一名新的大学导师,我需要从经验中学习,如果我想了解如何管理某些学生和团队,或者对如何让学生适应课程感到困惑时,我会通过多个方面来寻求建议。来自资深同事的支持对我来说是一种极大的鼓励。 + +最后,对于作为导师的我以及所有的学生来说,这都是个有趣的课程。在学习目标和传统评分方案上还有一些问题需要解决,以减少教师的工作量。明年,我计划保留这种教学模式,并希望能够提出更好的评分方案,以及引入更多的软件来帮助监督项目和控制代码版本。 + +-------------------------------------------------------------------------------- + +via: http://opensource.com/education/15/9/teaching-open-source-development-undergraduates + +作者:[Mariam Kiran][a] +译者:[GHLandy](https://github.com/GHLandy) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://opensource.com/users/mariamkiran +[1]:https://basecamp.com/ +[2]:https://www.mantisbt.org/ diff --git a/published/20151012 Remember sed and awk All Linux admins should.md b/published/20151012 Remember sed and awk All Linux admins should.md new file mode 100644 index 0000000000..346557684b --- /dev/null +++ b/published/20151012 Remember sed and awk All Linux admins should.md @@ -0,0 +1,66 @@ +sed 和 awk,所有的 Linux 管理员都应该会的技能! +========================== + +![](http://images.techhive.com/images/article/2015/03/linux-100573790-primary.idge.jpg) + +*图片来源: Shutterstock* + +**我们不要让下一代 Linux 和 Unix 的管理员忘记初始化脚本和基本工具的好处** + +我曾经有一次在 Reddit 看到一个帖子,“[请问如何操作文本文件][1]”。这是一个很简单的需求,就像我们常用 Unix 的人每天遇到的一样。他的问题是,如何删除文件中的重复行,只保留不重复的。这听起来似乎很简单,但是当文件足够大时,就会有些复杂。 + +这个问题有很多种不同的答案。你可以使用几乎任何一种语言来写这样的一个脚本,只是时间的投入和代码的复杂性不同罢了。根据你的个人水平,它大概会花费 20-60 分钟。但是如果你使用了 Perl、Python、Ruby 中的一种,你可能很快实现它。 + +或者你可以使用下面这个让你无比暖心的方法:只用 awk。 + +这个答案是迄今为止最简明、最简单的解决问题的方法。它只要一行! 
+ +``` +awk '!seen[$0]++' +``` + +让我们来看看发生了什么: + +在这个命令中,其实隐藏了很多代码。awk 是一种文本处理语言,并且它内部有很多预设。首先,你看到的实际上是一个 for 循环的结果。awk 假定你想通过循环处理输入文件的每一行,所以你不需要明确地去指定它。awk 还假定了你需要打印输出处理后的数据,所以你也不需要去指定它。最后,awk 假定循环在最后一句指令执行完结束,这一块也不再需要你去指定它。 + +这个例子中的字符串 seen 是一个关联数组的名字。$0 是一个变量,表示整个当前行。所以,这个命令翻译成人类语言就是“对这个文件的每一行进行检查,如果你之前没有见过它,就打印出来。”如果该关联数组的键名还不存在就添加到数组,并增加其取值,这样 awk 下次遇到同样的行时就会不匹配(条件判断为“假”),从而不打印出来。 + +一些人认为这样很优雅,另一些人则认为这可能会造成混淆。任何在日常工作中使用 awk 的人都是第一类人。awk 就是设计用来做这个的。在 awk 中,你可以写多行代码。你甚至可以[用 awk 写一些让人不安的复杂功能][2]。但终究来说,awk 还是一个进行文本处理的程序,一般是通过管道。去掉(没必要的)循环定义是很常见的快捷用法,不过如果你乐意,你也可以用下面的代码做同样的事情: + + +``` +awk '{ if (!seen[$0]) print $0; seen[$0]++ }' +``` + +这会产生相同的结果。 + +awk 是完成这项工作的完美工具。不过,我相信很多管理员——特别是新管理员——会转而使用 [Bash][3] 或 Python 来完成这一任务,因为对 awk 的知识和对它的能力的了解看起来随着时间而慢慢被人淡忘。我认为这标志着一个问题:由于对之前的解决方案缺乏了解,那些已经解决了几十年的问题又突然出现了。 + +shell、grep、sed 和 awk 是 Unix 的基础。如果你不能非常轻松地使用它们,你将会被自己束缚住,因为它们构成了通过命令行和脚本与 Unix 系统交互的基础。学习这些工具如何工作最好的方法之一就是观察真实的例子和实验,你可以在各种 Unix 衍生系统的初始化系统中找到很多,但在 Linux 发行版中它们已经被 [systemd][4] 取代了。 + +数以百万计的 Unix 管理员了解 Shell 脚本和 Unix 工具如何读、写、修改和用在初始化脚本上。不同系统的初始化脚本有很大不同,甚至是不同的 Linux 发行版也不同。但是它们都源自 sh,而且它们都用像 sed、awk 还有 grep 这样的核心的命令行工具。 + +我每天都会听到很多人抱怨初始化脚本太“古老”而且很“难”。但是实际上,初始化脚本和 Unix 管理员每天使用的工具一样,还提供了一个非常好的方式来更加熟悉和习惯这些工具。说初始化脚本难于阅读和难于使用,实际上是承认你缺乏对 Unix 基础工具的熟悉。 + +说起在 Reddit 上看到的内容,我还碰到过这样一个帖子,来自一个新入行的 Linux 系统管理员,他“[问是否应该还要去学老式的初始化系统 sysvinit][5]”。这个帖子的大多数答案都是正面的——是的,sysvinit 和 systemd 两个都应该学。一位评论者甚至指出,初始化脚本是学习 Bash 的好方法。另一个消息是,Fortune 50 强的公司还没有计划迁移到以 systemd 为基础的发行版上。 + +但是,这提醒了我这确实是一个问题。如果我们继续沿着消除脚本和脱离操作系统核心组件的方式发展下去,由于疏于接触,我们将会不经意间使新管理员难于学习基本的 Unix 工具。 + +我不知道为什么有些人想用一层又一层的抽象来掩盖 Unix 内部,但是这样发展下去可能会让新一代的系统管理员们变成只会按下按钮的工人。我觉得这不是一件好事情。 + +------ + +via: http://www.infoworld.com/article/2985804/linux/remember-sed-awk-linux-admins-should.html + +作者:[Paul Venezia][a] +译者:[Bestony](https://github.com/Bestony) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: 
http://www.infoworld.com/author/Paul-Venezia/ +[1]: https://www.reddit.com/r/linuxadmin/comments/3lwyko/how_do_i_remove_every_occurence_of_duplicate_line/ +[2]: http://intro-to-awk.blogspot.com/2008/08/awk-more-complex-examples.html +[3]: http://www.infoworld.com/article/2613338/linux/linux-how-to-script-a-bash-crash-course.html +[4]: http://www.infoworld.com/article/2608798/data-center/systemd--harbinger-of-the-linux-apocalypse.html +[5]: https://www.reddit.com/r/linuxadmin/comments/3ltq2y/when_i_start_learning_about_linux_administration/ diff --git a/published/20151013 DFileManager--Cover Flow File Manager.md b/published/20151013 DFileManager--Cover Flow File Manager.md new file mode 100644 index 0000000000..31e953c8da --- /dev/null +++ b/published/20151013 DFileManager--Cover Flow File Manager.md @@ -0,0 +1,64 @@ +DFileManager:封面流(CoverFlow)文件管理器 +================================================================================ + +这是一个 Ubuntu 标准软件仓库中缺失的、像宝石般有着独特功能的文件管理器。这是 DFileManager 在推特上的自我宣传。 + +有一个不好回答的问题:如何知道到底有多少个 Linux 开源软件?好奇的话,你可以在 Shell 里输入如下命令: + + ~$ for f in /var/lib/apt/lists/*Packages; do printf '%5d %s\n' $(grep '^Package: ' "$f" | wc -l) ${f##*/}; done | sort -rn + +在我的 Ubuntu 15.04 系统上,产生结果如下: + +![Ubuntu 15.04 Packages](http://www.linuxlinks.com/portal/content/reviews/FileManagers/UbuntuPackages.png) + +正如上面的截图所示,在 Universe 仓库中,大约有 39000 个包,在 main 仓库中大约有 8500 个包。这听起来很多。但是这些包括了开源应用、工具、库,有很多不是由 Ubuntu 开发者打包的。更重要的是,有很多重要的软件不在库中,只能通过源代码编译。DFileManager 就是这样一个软件。它是仍处在开发早期的一个基于 Qt 的跨平台文件管理器。Qt 提供单一源码下的跨平台可移植性。 + +现在还没有二进制文件包,用户需要编译源代码才行。对于一些工具来说,这可能会产生很大的问题,特别是当这个应用依赖于某个复杂的库,或者需要某个与系统中已安装软件不兼容的版本时。 + +### 安装 ### + +幸运的是,DFileManager 非常容易编译。对于我的老 Ubuntu 机器来说,开发者网站上的安装介绍提供了大部分的重要步骤,不过少量的基础包没有列出(为什么总是这样?虽然许多库会让文件系统变得一团糟!)。在我的系统上,从 GitHub 下载源代码并且编译这个软件,我在 Shell 里输入了以下命令: + + ~$ sudo apt-get install qt5-default qt5-qmake libqt5x11extras5-dev + ~$ git clone git://git.code.sf.net/p/dfilemanager/code dfilemanager-code + ~$ cd dfilemanager-code + ~$ mkdir 
build + ~$ cd build + ~$ cmake ../ -DCMAKE_INSTALL_PREFIX=/usr + ~$ make + ~$ sudo make install + +你可以通过在shell中输入如下命令来启动它: + + ~$ dfm + +下面是运行中的 DFileManager,完全展示了其最吸引人的地方:封面流(Cover Flow)视图。可以在当前文件夹的项目间滑动,提供了一个相当有吸引力的体验。这是看图片的理想选择。这个文件管理器酷似 Finder(苹果操作系统下的默认文件管理器),可能会吸引你。 + +![DFileManager in action](http://www.linuxlinks.com/portal/content/reviews/FileManagers/Screenshot-dfm.png) + +### 特点: ### + +- 4种视图:图标、详情、列视图和封面流 +- 按位置和设备归类书签 +- 标签页 +- 简单的搜索和过滤 +- 自定义文件类型的缩略图,包括多媒体文件 +- 信息栏可以移走 +- 单击打开文件和目录 +- 可以排队 IO 操作 +- 记住每个文件夹的视图属性 +- 显示隐藏文件 + +DFileManager 不是 KDE 的 Dolphin 的替代品,但是能做相同的事情。这个是一个真正能够帮助人们的浏览文件的文件管理器。还有,别忘了反馈信息给开发者,任何人都可以做出这样的贡献。 + +-------------------------------------------------------------------------------- + +via: http://gofk.tumblr.com/post/131014089537/dfilemanager-cover-flow-file-manager-a-real-gem + +作者:[gofk][a] +译者:[bestony](https://github.com/bestony) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://gofk.tumblr.com/ diff --git a/published/20150823 How learning data structures and algorithms make you a better developer.md b/published/201511/20150823 How learning data structures and algorithms make you a better developer.md similarity index 100% rename from published/20150823 How learning data structures and algorithms make you a better developer.md rename to published/201511/20150823 How learning data structures and algorithms make you a better developer.md diff --git a/published/20150827 The Strangest Most Unique Linux Distros.md b/published/201511/20150827 The Strangest Most Unique Linux Distros.md similarity index 100% rename from published/20150827 The Strangest Most Unique Linux Distros.md rename to published/201511/20150827 The Strangest Most Unique Linux Distros.md diff --git a/published/201511/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md b/published/201511/20150831 How to switch from NetworkManager to 
systemd-networkd on Linux.md new file mode 100644 index 0000000000..658d6c033d --- /dev/null +++ b/published/201511/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md @@ -0,0 +1,165 @@ +如何在 Linux 上从 NetworkManager 切换为 systemd-networkd +================================================================================ +在 Linux 世界里,对 [systemd][1] 的采用一直是激烈争论的主题,它的支持者和反对者之间的战火仍然在燃烧。到了今天,大部分主流 Linux 发行版都已经采用了 systemd 作为默认的初始化(init)系统。 + +正如其作者所说,作为一个“从未完成、从未完善、但一直追随技术进步”的系统,systemd 已经不只是一个初始化进程,它被设计为一个更广泛的系统以及服务管理平台,这个平台是一个包含了不断增长的核心系统进程、库和工具的生态系统。 + +**systemd** 的其中一部分是 **systemd-networkd**,它负责 systemd 生态中的网络配置。使用 systemd-networkd,你可以为网络设备配置基础的 DHCP/静态 IP 网络。它还可以配置虚拟网络功能,例如网桥、隧道和 VLAN。systemd-networkd 目前还不能直接支持无线网络,但你可以使用 wpa_supplicant 服务配置无线适配器,然后把它和 **systemd-networkd** 联系起来。 + +在很多 Linux 发行版中,NetworkManager 仍然作为默认的网络配置管理器。和 NetworkManager 相比,**systemd-networkd** 仍处于积极的开发状态,还缺少一些功能。例如,它还不能像 NetworkManager 那样让你的计算机在任何时候通过多种接口保持连接。它还没有为更高层面的脚本编程提供 ifup/ifdown 钩子函数。但是,systemd-networkd 和其它 systemd 组件(例如用于域名解析的 **resolved**、用于 NTP 的 **timesyncd**、用于命名的 udevd)结合得非常好。随着时间增长,**systemd-networkd** 只会在 systemd 环境中扮演越来越重要的角色。 + +如果你对 **systemd-networkd** 的进步感到高兴,从 NetworkManager 切换到 systemd-networkd 是值得你考虑的一件事。如果你强烈反对 systemd,对 NetworkManager 或[基础网络服务][2]感到很满意,那也很好。 + +但对于那些想尝试 systemd-networkd 的人,可以继续看下去,在这篇指南中学会在 Linux 中怎么从 NetworkManager 切换到 systemd-networkd。 + +### 需求 ### + +systemd 210 及其更高版本提供了 systemd-networkd。因此诸如 Debian 8 "Jessie" (systemd 215)、 Fedora 21 (systemd 217)、 Ubuntu 15.04 (systemd 219) 或更高版本的 Linux 发行版和 systemd-networkd 兼容。 + +对于其它发行版,在开始下一步之前先检查一下你的 systemd 版本。 + + $ systemctl --version + +### 从 NetworkManager 切换到 systemd-networkd ### + +从 NetworkManager 切换到 systemd-networkd 其实非常简单(反过来也一样)。 + +首先,按照下面这样先停用 NetworkManager 服务,然后启用 systemd-networkd。 + + $ sudo systemctl disable NetworkManager + $ sudo systemctl enable systemd-networkd + +你还要启用 **systemd-resolved** 服务,systemd-networkd 用它来进行域名解析。该服务还实现了一个缓存式 DNS 服务器。 + + $ sudo systemctl enable 
systemd-resolved + $ sudo systemctl start systemd-resolved + +当启动后,**systemd-resolved** 就会在 /run/systemd 目录下某个地方创建它自己的 resolv.conf。但是,把 DNS 解析信息存放在 /etc/resolv.conf 是更普遍的做法,很多应用程序也会依赖于 /etc/resolv.conf。因此为了兼容性,按照下面的方式创建一个到 /etc/resolv.conf 的符号链接。 + + $ sudo rm /etc/resolv.conf + $ sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf + +### 用 systemd-networkd 配置网络连接 ### + +要用 systemd-networkd 配置网络服务,你必须指定带 .network 扩展名的配置信息文本文件。这些网络配置文件保存在 /etc/systemd/network 并从这里加载。当有多个文件时,systemd-networkd 会按照字母顺序一个个加载并处理。 + +首先创建 /etc/systemd/network 目录。 + + $ sudo mkdir /etc/systemd/network + +#### DHCP 网络 #### + +首先来配置 DHCP 网络。为此,先要创建下面的配置文件。文件名可以任意,但记住文件是按照字母顺序处理的。 + + $ sudo vi /etc/systemd/network/20-dhcp.network + +---------- + + [Match] + Name=enp3* + + [Network] + DHCP=yes + +正如你上面看到的,每个网络配置文件包括了一个或多个 “section”,每个 “section” 都用 [XXX] 开头。每个 section 包括了一个或多个键值对。`[Match]` 部分决定这个配置文件配置哪个(些)网络设备。例如,这个文件匹配所有名称以 enp3 开头的网络设备(例如 enp3s0、 enp3s1、 enp3s2 等等)。对于匹配的接口,会启用 [Network] 部分指定的 DHCP 网络配置。 + +#### 静态 IP 网络 #### + +如果你想给网络设备分配一个静态 IP 地址,那就新建下面的配置文件。 + + $ sudo vi /etc/systemd/network/10-static-enp3s0.network + +---------- + + [Match] + Name=enp3s0 + + [Network] + Address=192.168.10.50/24 + Gateway=192.168.10.1 + DNS=8.8.8.8 + +正如你猜测的,enp3s0 接口的地址会被指定为 192.168.10.50/24,默认网关是 192.168.10.1,DNS 服务器是 8.8.8.8。这里微妙的一点是,接口名 enp3s0 事实上也匹配了之前 DHCP 配置中定义的模式规则。但是,根据字母顺序,文件 "10-static-enp3s0.network" 在 "20-dhcp.network" 之前被处理,因此对于 enp3s0 接口,静态配置比 DHCP 配置有更高的优先级。 + +一旦你完成了配置文件的创建,重启 systemd-networkd 服务或者重启机器。 + + $ sudo systemctl restart systemd-networkd + +运行以下命令检查服务状态: + + $ systemctl status systemd-networkd + $ systemctl status systemd-resolved + +![](https://farm1.staticflickr.com/719/21010813392_76abe123ed_c.jpg) + +### 用 systemd-networkd 配置虚拟网络设备 ### + +**systemd-networkd** 同样允许你配置虚拟网络设备,例如网桥、VLAN、隧道、VXLAN、绑定等。你必须在用 .netdev 作为扩展名的文件中配置这些虚拟设备。 + +这里我展示了如何配置一个桥接接口。 + +#### Linux 网桥 #### + +如果你想创建一个 Linux 网桥(br0)并把物理接口(eth1)添加到网桥,你可以新建下面的配置。 + + $ sudo vi 
/etc/systemd/network/bridge-br0.netdev + +---------- + + [NetDev] + Name=br0 + Kind=bridge + +然后按照下面这样用 .network 文件配置网桥接口 br0 和从接口 eth1。 + + $ sudo vi /etc/systemd/network/bridge-br0-slave.network + +---------- + + [Match] + Name=eth1 + + [Network] + Bridge=br0 + +---------- + + $ sudo vi /etc/systemd/network/bridge-br0.network + +---------- + + [Match] + Name=br0 + + [Network] + Address=192.168.10.100/24 + Gateway=192.168.10.1 + DNS=8.8.8.8 + +最后,重启 systemd-networkd。 + + $ sudo systemctl restart systemd-networkd + +你可以用 [brctl 工具][3] 来验证是否创建好了网桥 br0。 + +### 总结 ### + +当 systemd 誓言成为 Linux 的系统管理器时,有类似 systemd-networkd 的东西来管理网络配置也就不足为奇。但是在现阶段,systemd-networkd 看起来更适合于网络配置相对稳定的服务器环境。对于桌面/笔记本环境,它们有多种临时有线/无线接口,NetworkManager 仍然是比较好的选择。 + +对于想进一步了解 systemd-networkd 的人,可以参考官方[man 手册][4]了解完整的支持列表和关键点。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/switch-from-networkmanager-to-systemd-networkd.html + +作者:[Dan Nanni][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/use-systemd-system-administration-debian.html +[2]:http://xmodulo.com/disable-network-manager-linux.html +[3]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html +[4]:http://www.freedesktop.org/software/systemd/man/systemd.network.html diff --git a/published/201511/20150909 Superclass--15 of the world's best living programmers.md b/published/201511/20150909 Superclass--15 of the world's best living programmers.md new file mode 100644 index 0000000000..89a42d29d7 --- /dev/null +++ b/published/201511/20150909 Superclass--15 of the world's best living programmers.md @@ -0,0 +1,427 @@ +超神们:15 位健在的世界级程序员! +================================================================================ + +当开发人员说起世界顶级程序员时,他们的名字往往会被提及。 + +好像现在程序员有很多,其中不乏有许多优秀的程序员。但是哪些程序员更好呢? 
+ +虽然这很难客观评价,不过在这个话题确实是开发者们津津乐道的。ITworld 深入程序员社区,避开四溅的争执口水,试图找出可能存在的所谓共识。事实证明,屈指可数的某些名字经常是讨论的焦点。 + +![](http://images.techhive.com/images/article/2015/09/superman-620x465-100611650-orig.jpg) + +*图片来源: [tom_bullock CC BY 2.0][1]* + +下面就让我们来看看这些世界顶级的程序员吧! + +### 玛格丽特·汉密尔顿(Margaret Hamilton) ### + +![](http://images.techhive.com/images/article/2015/09/margaret_hamilton-620x465-100611764-orig.jpg) + +*图片来源: [NASA][2]* + +**成就: 阿波罗飞行控制软件背后的大脑** + +生平: 查尔斯·斯塔克·德雷珀实验室(Charles Stark Draper Laboratory)软件工程部的主任,以她为首的团队负责设计和打造 NASA 的阿波罗的舰载飞行控制器软件和空间实验室(Skylab)的任务。基于阿波罗这段的工作经历,她又后续开发了[通用系统语言(Universal Systems Language)][5]和[开发先于事实( Development Before the Fact)][6]的范例。开创了[异步软件、优先调度和超可靠的软件设计][7]理念。被认为发明了“[软件工程( software engineering)][8]”一词。1986年获[奥古斯塔·埃达·洛夫莱斯奖(Augusta Ada Lovelace Award)][9],2003年获 [NASA 杰出太空行动奖(Exceptional Space Act Award)][10]。 + +评论: + +> “汉密尔顿发明了测试,使美国计算机工程规范了很多” —— [ford_beeblebrox][11] + +> “我认为在她之前(不敬地说,包括高德纳(Knuth)在内的)计算机编程是(另一种形式上留存的)数学分支。然而这个宇宙飞船的飞行控制系统明确地将编程带入了一个崭新的领域。” —— [Dan Allen][12] + +> “... 她引入了‘软件工程’这个术语 — 并作出了最好的示范。” —— [David Hamilton][13] + +> “真是个坏家伙” [Drukered][14] + + +### 唐纳德·克努斯(Donald Knuth),即 高德纳 ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_donald_knuth-620x465-100502872-orig.jpg) + +*图片来源: [vonguard CC BY-SA 2.0][15]* + +**成就: 《计算机程序设计艺术(The Art of Computer Programming,TAOCP)》 作者** + +生平: 撰写了[编程理论的权威书籍][16]。发明了数字排版系统 Tex。1971年,[ACM(美国计算机协会)葛丽丝·穆雷·霍普奖(Grace Murray Hopper Award)][17] 的首位获奖者。1974年获 ACM [图灵奖(A. M. Turing)][18],1979年获[美国国家科学奖章(National Medal of Science)][19],1995年获IEEE[约翰·冯·诺依曼奖章(John von Neumann Medal)][20]。1998年入选[计算机历史博物馆(Computer History Museum)名人录(Hall of Fellows)][21]。 + +评论: + +> “... 
写的计算机编程艺术(The Art of Computer Programming,TAOCP)可能是有史以来计算机编程方面最大的贡献。”—— [佚名][22] + +> “唐·克努斯的 TeX 是我所用过的计算机程序中唯一一个几乎没有 bug 的。真是让人印象深刻!”—— [Jaap Weel][23] + +> “如果你要问我的话,我只能说太棒了!” —— [Mitch Rees-Jones][24] + +### 肯·汤普逊(Ken Thompson) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_ken-thompson-620x465-100502874-orig.jpg) + +*图片来源: [Association for Computing Machinery][25]* + +**成就: Unix 之父** + +生平:与[丹尼斯·里奇(Dennis Ritchie)][26]共同创造了 Unix。创造了 [B 语言][27]、[UTF-8 字符编码方案][28]、[ed 文本编辑器][29],同时也是 Go 语言的共同开发者。(和里奇)共同获得1983年的[图灵奖(A.M. Turing Award )][30],1994年获 [IEEE 计算机先驱奖( IEEE Computer Pioneer Award)][31],1998年获颁[美国国家科技奖章( National Medal of Technology )][32]。在1997年入选[计算机历史博物馆(Computer History Museum)名人录(Hall of Fellows)][33]。 + +评论: + +> “... 可能是有史以来最能成事的程序员了。Unix 内核,Unix 工具,国际象棋程序世界冠军 Belle,Plan 9,Go 语言。” —— [Pete Prokopowicz][34] + +> “肯所做出的贡献,据我所知无人能及,是如此的根本、实用、经得住时间的考验,时至今日仍在使用。” —— [Jan Jannink][35] + + +### 理查德·斯托曼(Richard Stallman) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_richard_stallman-620x465-100502868-orig.jpg) + +*图片来源: [Jiel Beaumadier CC BY-SA 3.0][135]* + +**成就: Emacs 和 GCC 缔造者** + +生平: 成立了 [GNU 工程(GNU Project)] [36],并创造了它的许多核心工具,如 [Emacs、GCC、GDB][37] 和 [GNU Make][38]。还创办了[自由软件基金会(Free Software Foundation)] [39]。1990年荣获 ACM 的[葛丽丝·穆雷·霍普奖( Grace Murray Hopper Award)][40],1998年获 [EFF 先驱奖(Pioneer Award)][41]. + +评论: + +> “... 
在 Symbolics 对阵 LMI 的战斗中,独自一人与一众 Lisp 黑客好手对码。” —— [Srinivasan Krishnan][42] + +> “通过他在编程上的精湛造诣与强大信念,开辟了一整套编程与计算机的亚文化。” —— [Dan Dunay][43] + +> “我可以不赞同这位伟人的很多方面,不必盖棺论定,他不可否认都已经是一位伟大的程序员了。” —— [Marko Poutiainen][44] + +> “试想 Linux 如果没有 GNU 工程的前期工作会怎么样。(多亏了)斯托曼的炸弹!” —— [John Burnette][45] + +### 安德斯·海尔斯伯格(Anders Hejlsberg) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_anders_hejlsberg-620x465-100502873-orig.jpg) + +*图片来源: [D.Begley CC BY 2.0][46]* + +**成就: 创造了Turbo Pascal** + +生平: [Turbo Pascal 的原作者][47],是最流行的 Pascal 编译器和第一个集成开发环境。而后,[领导了 Turbo Pascal 的继任者 Delphi][48] 的构建。[C# 的主要设计师和架构师][49]。2001年荣获[ Dr. Dobb 的杰出编程奖(Dr. Dobb's Excellence in Programming Award )][50]。 + +评论: + +> “他用汇编语言为当时两个主流的 PC 操作系统(DOS 和 CPM)编写了 [Pascal] 编译器。用它来编译、链接并运行仅需几秒钟而不是几分钟。” —— [Steve Wood][51] + +> “我佩服他 - 他创造了我最喜欢的开发工具,陪伴着我度过了三个关键的时期直至我成为一位专业的软件工程师。” —— [Stefan Kiryazov][52] + +### Doug Cutting ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_doug_cutting-620x465-100502871-orig.jpg) + +图片来源: [vonguard CC BY-SA 2.0][53] + +**成就: 创造了 Lucene** + +生平: [开发了 Lucene 搜索引擎以及 Web 爬虫 Nutch][54] 和用于大型数据集的分布式处理套件 [Hadoop][55]。一位强有力的开源支持者(Lucene、Nutch 以及 Hadoop 都是开源的)。前 [Apache 软件基金(Apache Software Foundation)的理事][56]。 + +评论: + + +> “...他就是那个既写出了优秀搜索框架(lucene/solr),又为世界开启大数据之门(hadoop)的男人。” —— [Rajesh Rao][57] + +> “他在 Lucene 和 Hadoop(及其它工程)的创造/工作中为世界创造了巨大的财富和就业...” —— [Amit Nithianandan][58] + +### Sanjay Ghemawat ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_sanjay_ghemawat-620x465-100502876-orig.jpg) + +*图片来源: [Association for Computing Machinery][59]* + +**成就: 谷歌核心架构师** + +生平: [协助设计和实现了一些谷歌大型分布式系统的功能][60],包括 MapReduce、BigTable、Spanner 和谷歌文件系统(Google File System)。[创造了 Unix 的 ical ][61]日历系统。2009年入选[美国国家工程院(National Academy of Engineering)][62]。2012年荣获 [ACM-Infosys 基金计算机科学奖( ACM-Infosys Foundation Award in the Computing Sciences)][63]。 + +评论: + + +> “Jeff Dean的僚机。” —— [Ahmet Alp 
Balkan][64] + +### Jeff Dean ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jeff_dean-620x465-100502866-orig.jpg) + +*图片来源: [Google][65]* + +**成就: 谷歌搜索索引背后的大脑** + +生平:协助设计和实现了[许多谷歌大型分布式系统的功能][66],包括网页爬虫,索引搜索,AdSense,MapReduce,BigTable 和 Spanner。2009年入选[美国国家工程院( National Academy of Engineering)][67]。2012年荣获ACM 的[SIGOPS 马克·维瑟奖( SIGOPS Mark Weiser Award)][68]及[ACM-Infosys基金计算机科学奖( ACM-Infosys Foundation Award in the Computing Sciences)][69]。 + +评论: + +> “... 带来了在数据挖掘(GFS、MapReduce、BigTable)上的突破。” —— [Natu Lauchande][70] + +> “... 设计、构建并部署 MapReduce 和 BigTable,和以及数不清的其它东西” —— [Erik Goldman][71] + +### 林纳斯·托瓦兹(Linus Torvalds) ### + +![](http://images.techhive.com/images/article/2015/09/linus_torvalds-620x465-100611765-orig.jpg) + +*图片来源: [Krd CC BY-SA 4.0][72]* + +**成就: Linux缔造者** + +生平:创造了 [Linux 内核][73]与[开源的版本控制系统 Git][74]。收获了许多奖项和荣誉,包括有1998年的 [EFF 先驱奖(EFF Pioneer Award)][75],2000年荣获[英国电脑学会(British Computer Society)授予的洛夫莱斯勋章(Lovelace Medal)][76],2012年荣获[千禧技术奖(Millenium Technology Prize)][77]还有2014年[IEEE计算机学会( IEEE Computer Society)授予的计算机先驱奖(Computer Pioneer Award)][78]。同样入选了2008年的[计算机历史博物馆( Computer History Museum)名人录(Hall of Fellows)][79]与2012年的[互联网名人堂(Internet Hall of Fame )][80]。 + +评论: + +> “他只用了几年的时间就写出了 Linux 内核,而 GNU Hurd(GNU 开发的内核)历经25年的开发却丝毫没有准备发布的意思。他的成就就是带来了希望。” —— [Erich Ficker][81] + +> “托沃兹可能是程序员的程序员。” —— [Dan Allen][82] + +> “他真的很棒。” —— [Alok Tripathy][83] + +### 约翰·卡马克(John Carmack) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_john_carmack-620x465-100502867-orig.jpg) + +*图片来源: [QuakeCon CC BY 2.0][84]* + +**成就: 毁灭战士的缔造者** + +生平: ID 社联合创始人,打造了德军总部3D(Wolfenstein 3D)、毁灭战士(Doom)和雷神之锤(Quake)等所谓的即时 FPS 游戏。引领了[切片适配刷新(adaptive tile refresh)][86], [二叉空间分割(binary space partitioning)][87],表面缓存(surface caching)等开创性的计算机图像技术。2001年入选[互动艺术与科学学会名人堂(Academy of Interactive Arts and Sciences Hall of Fame)][88],2007年和2008年荣获工程技术类[艾美奖(Emmy awards)][89]并于2010年由[游戏开发者甄选奖( Game Developers 
Choice Awards)][90]授予终生成就奖。 + +评论: + +> “他在写第一个渲染引擎的时候不到20岁。这家伙这是个天才。我若有他四分之一的天赋便心满意足了。” —— [Alex Dolinsky][91] + +> “... 德军总部3D(Wolfenstein 3D)、毁灭战士(Doom)还有雷神之锤(Quake)在那时都是革命性的,影响了一代游戏设计师。” —— [dniblock][92] + +> “一个周末他几乎可以写出任何东西....” —— [Greg Naughton][93] + +> “他是编程界的莫扎特... ” —— [Chris Morris][94] + +### 法布里斯·贝拉(Fabrice Bellard) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_fabrice_bellard-620x465-100502870-orig.jpg) + +*图片来源: [Duff][95]* + +**成就: 创造了 QEMU** + +生平: 创造了[一系列耳熟能详的开源软件][96],其中包括硬件模拟和虚拟化的平台 QEMU,用于处理多媒体数据的 FFmpeg,微型C编译器(Tiny C Compiler)和 一个可执行文件压缩软件 LZEXE。2000年和2001年[C语言混乱代码大赛(Obfuscated C Code Contest)的获胜者][97]并在2011年荣获[Google-O'Reilly 开源奖(Google-O'Reilly Open Source Award )][98]。[计算 Pi 最多位数][99]的前世界纪录保持着。 + +评论: + + +> “我觉得法布里斯·贝拉做的每一件事都是那么显著而又震撼。” —— [raphinou][100] + +> “法布里斯·贝拉是世界上最高产的程序员...” —— [Pavan Yara][101] + +> “他就像软件工程界的尼古拉·特斯拉(Nikola Tesla)。” —— [Michael Valladolid][102] + +> “自80年代以来,他一直高产出一系列的成功作品。” —— [Michael Biggins][103] + +### Jon Skeet ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jon_skeet-620x465-100502863-orig.jpg) + +*图片来源: [Craig Murphy CC BY 2.0][104]* + +**成就: Stack Overflow 的传说级贡献者** + +生平: Google 工程师,[深入解析C#(C# in Depth)][105]的作者。保持着[有史以来在 Stack Overflow 上最高的声誉][106],平均每月解答390个问题。 + +评论: + + +> “他根本不需要调试器,只要他盯一下代码,错误之处自会原形毕露。” —— [Steven A. 
Lowe][107] + +> “如果他的代码没有通过编译,那编译器应该道歉。” —— [Dan Dyer][108] + +> “他根本不需要什么编程规范,他的代码就是编程规范。” —— [佚名][109] + +### 亚当·安捷罗(Adam D'Angelo) ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_image_adam_dangelo-620x465-100502875-orig.jpg) + +*图片来源: [Philip Neustrom CC BY 2.0][110]* + +**成就: Quora 的创办人之一** + +生平: 还是 Facebook 工程师时,[为其搭建了 news feed 功能的基础][111]。直至其离开并联合创始了 Quora,已经成为了 Facebook 的CTO和工程 VP。2001年以高中生的身份在[美国计算机奥林匹克(USA Computing Olympiad)上第八位完成比赛][112]。2004年ACM国际大学生编程大赛(International Collegiate Programming Contest)[获得银牌的团队 - 加利福尼亚技术研究所( California Institute of Technology)][113]的成员。2005年入围 Topcoder 大学生[算法编程挑战赛(Algorithm Coding Competition)][114]。 + +评论: + +> “一位程序设计全才。” —— [佚名][115] + +> "我做的每个好东西,他都已有了六个。" —— [马克.扎克伯格(Mark Zuckerberg)][116] + +### Petr Mitrechev ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_petr_mitrichev-620x465-100502869-orig.jpg) + +*图片来源: [Facebook][117]* + +**成就: 有史以来最具竞技能力的程序员之一** + +生平: 在国际信息学奥林匹克(International Olympiad in Informatics)中[两次获得金牌][118](2000,2002)。在2006,[赢得 Google Code Jam][119] 同时也是[TopCoder Open 算法大赛冠军][120]。也同样,两次赢得 Facebook黑客杯(Facebook Hacker Cup)([2011][121],[2013][122])。写这篇文章的时候,[TopCoder 榜中排第二][123] (即:Petr)、在 [Codeforces 榜同样排第二][124]。 + +评论: + +> “他是竞技程序员的偶像,即使在印度也是如此...” —— [Kavish Dwivedi][125] + +### Gennady Korotkevich ### + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_gennady_korot-620x465-100502864-orig.jpg) + +*图片来源: [Ishandutta2007 CC BY-SA 3.0][126]* + +**成就: 竞技编程小神童** + +生平: 国际信息学奥林匹克(International Olympiad in Informatics)中最小参赛者(11岁),[6次获得金牌][127] (2007-2012)。2013年 ACM 国际大学生编程大赛(International Collegiate Programming Contest)[获胜队伍][128]成员及[2014 Facebook 黑客杯(Facebook Hacker Cup)][129]获胜者。写这篇文章的时候,[Codeforces 榜排名第一][130] (即:Tourist)、[TopCoder榜第一][131]。 + +评论: + +> “一个编程神童!” —— [Prateek Joshi][132] + +> “Gennady 真是棒,也是为什么我在白俄罗斯拥有一个强大开发团队的例证。” —— [Chris Howard][133] + +> “Tourist 真是天才” —— [Nuka 
Shrinivas Rao][134] + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2823547/enterprise-software/158256-superclass-14-of-the-world-s-best-living-programmers.html#slide1 + +作者:[Phil Johnson][a] +译者:[martin2011qi](https://github.com/martin2011qi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Phil-Johnson/ +[1]:https://www.flickr.com/photos/tombullock/15713223772 +[2]:https://commons.wikimedia.org/wiki/File:Margaret_Hamilton_in_action.jpg +[3]:http://klabs.org/home_page/hamilton.htm +[4]:https://www.youtube.com/watch?v=DWcITjqZtpU&feature=youtu.be&t=3m12s +[5]:http://www.htius.com/Articles/r12ham.pdf +[6]:http://www.htius.com/Articles/Inside_DBTF.htm +[7]:http://www.nasa.gov/home/hqnews/2003/sep/HQ_03281_Hamilton_Honor.html +[8]:http://www.nasa.gov/50th/50th_magazine/scientists.html +[9]:https://books.google.com/books?id=JcmV0wfQEoYC&pg=PA321&lpg=PA321&dq=ada+lovelace+award+1986&source=bl&ots=qGdBKsUa3G&sig=bkTftPAhM1vZ_3VgPcv-38ggSNo&hl=en&sa=X&ved=0CDkQ6AEwBGoVChMI3paoxJHWxwIVA3I-Ch1whwPn#v=onepage&q=ada%20lovelace%20award%201986&f=false +[10]:http://history.nasa.gov/alsj/a11/a11Hamilton.html +[11]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrswof +[12]:http://qr.ae/RFEZLk +[13]:http://qr.ae/RFEZUn +[14]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrv9u9 +[15]:https://www.flickr.com/photos/44451574@N00/5347112697 +[16]:http://cs.stanford.edu/~uno/taocp.html +[17]:http://awards.acm.org/award_winners/knuth_1013846.cfm +[18]:http://amturing.acm.org/award_winners/knuth_1013846.cfm +[19]:http://www.nsf.gov/od/nms/recip_details.jsp?recip_id=198 +[20]:http://www.ieee.org/documents/von_neumann_rl.pdf +[21]:http://www.computerhistory.org/fellowawards/hall/bios/Donald,Knuth/ 
+[22]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answers/3063 +[23]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Jaap-Weel +[24]:http://qr.ae/RFE94x +[25]:http://amturing.acm.org/photo/thompson_4588371.cfm +[26]:https://www.youtube.com/watch?v=JoVQTPbD6UY +[27]:https://www.bell-labs.com/usr/dmr/www/bintro.html +[28]:http://doc.cat-v.org/bell_labs/utf-8_history +[29]:http://c2.com/cgi/wiki?EdIsTheStandardTextEditor +[30]:http://amturing.acm.org/award_winners/thompson_4588371.cfm +[31]:http://www.computer.org/portal/web/awards/cp-thompson +[32]:http://www.uspto.gov/about/nmti/recipients/1998.jsp +[33]:http://www.computerhistory.org/fellowawards/hall/bios/Ken,Thompson/ +[34]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Pete-Prokopowicz-1 +[35]:http://qr.ae/RFEWBY +[36]:https://groups.google.com/forum/#!msg/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J +[37]:http://www.emacswiki.org/emacs/RichardStallman +[38]:https://www.gnu.org/gnu/thegnuproject.html +[39]:http://www.emacswiki.org/emacs/FreeSoftwareFoundation +[40]:http://awards.acm.org/award_winners/stallman_9380313.cfm +[41]:https://w2.eff.org/awards/pioneer/1998.php +[42]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton/comment/4146397 +[43]:http://qr.ae/RFEaib +[44]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Marko-Poutiainen +[45]:http://qr.ae/RFEUqp +[46]:https://www.flickr.com/photos/begley/2979906130 +[47]:http://www.taoyue.com/tutorials/pascal/history.html +[48]:http://c2.com/cgi/wiki?AndersHejlsberg +[49]:http://www.microsoft.com/about/technicalrecognition/anders-hejlsberg.aspx +[50]:http://www.drdobbs.com/windows/dr-dobbs-excellence-in-programming-award/184404602 +[51]:http://qr.ae/RFEZrv 
+[52]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Stefan-Kiryazov +[53]:https://www.flickr.com/photos/vonguard/4076389963/ +[54]:http://www.wizards-of-os.org/archiv/sprecher/a_c/doug_cutting.html +[55]:http://hadoop.apache.org/ +[56]:https://www.linkedin.com/in/cutting +[57]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Shalin-Shekhar-Mangar/comment/2293071 +[58]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answer/Amit-Nithianandan +[59]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm +[60]:http://research.google.com/pubs/SanjayGhemawat.html +[61]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat +[62]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 +[63]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm +[64]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat/answer/Ahmet-Alp-Balkan +[65]:http://research.google.com/people/jeff/index.html +[66]:http://research.google.com/people/jeff/index.html +[67]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 +[68]:http://news.cs.washington.edu/2012/10/10/uw-cse-ph-d-alum-jeff-dean-wins-2012-sigops-mark-weiser-award/ +[69]:http://awards.acm.org/award_winners/dean_2879385.cfm +[70]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Natu-Lauchande +[71]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Cosmin-Negruseri/comment/28399 +[72]:https://commons.wikimedia.org/wiki/File:LinuxCon_Europe_Linus_Torvalds_05.jpg +[73]:http://www.linuxfoundation.org/about/staff#torvalds +[74]:http://git-scm.com/book/en/Getting-Started-A-Short-History-of-Git +[75]:https://w2.eff.org/awards/pioneer/1998.php +[76]:http://www.bcs.org/content/ConWebDoc/14769 
+[77]:http://www.zdnet.com/blog/open-source/linus-torvalds-wins-the-tech-equivalent-of-a-nobel-prize-the-millennium-technology-prize/10789 +[78]:http://www.computer.org/portal/web/pressroom/Linus-Torvalds-Named-Recipient-of-the-2014-IEEE-Computer-Society-Computer-Pioneer-Award +[79]:http://www.computerhistory.org/fellowawards/hall/bios/Linus,Torvalds/ +[80]:http://www.internethalloffame.org/inductees/linus-torvalds +[81]:http://qr.ae/RFEeeo +[82]:http://qr.ae/RFEZLk +[83]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Alok-Tripathy-1 +[84]:https://www.flickr.com/photos/quakecon/9434713998 +[85]:http://doom.wikia.com/wiki/John_Carmack +[86]:http://thegamershub.net/2012/04/gaming-gods-john-carmack/ +[87]:http://www.shamusyoung.com/twentysidedtale/?p=4759 +[88]:http://www.interactive.org/special_awards/details.asp?idSpecialAwards=6 +[89]:http://www.itworld.com/article/2951105/it-management/a-fly-named-for-bill-gates-and-9-other-unusual-honors-for-tech-s-elite.html#slide8 +[90]:http://www.gamechoiceawards.com/archive/lifetime.html +[91]:http://qr.ae/RFEEgr +[92]:http://www.itworld.com/answers/topic/software/question/whos-best-living-programmer#comment-424562 +[93]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton +[94]:http://money.cnn.com/2003/08/21/commentary/game_over/column_gaming/ +[95]:http://dufoli.wordpress.com/2007/06/23/ammmmaaaazing-night/ +[96]:http://bellard.org/ +[97]:http://www.ioccc.org/winners.html#B +[98]:http://www.oscon.com/oscon2011/public/schedule/detail/21161 +[99]:http://bellard.org/pi/pi2700e9/ +[100]:https://news.ycombinator.com/item?id=7850797 +[101]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/1718701 
+[102]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/2454450 +[103]:http://qr.ae/RFEjhZ +[104]:https://www.flickr.com/photos/craigmurphy/4325516497 +[105]:http://www.amazon.co.uk/gp/product/1935182471?ie=UTF8&tag=developetutor-21&linkCode=as2&camp=1634&creative=19450&creativeASIN=1935182471 +[106]:http://stackexchange.com/leagues/1/alltime/stackoverflow +[107]:http://meta.stackexchange.com/a/9156 +[108]:http://meta.stackexchange.com/a/9138 +[109]:http://meta.stackexchange.com/a/9182 +[110]:https://www.flickr.com/photos/philipn/5326344032 +[111]:http://www.crunchbase.com/person/adam-d-angelo +[112]:http://www.exeter.edu/documents/Exeter_Bulletin/fall_01/oncampus.html +[113]:http://icpc.baylor.edu/community/results-2004 +[114]:https://www.topcoder.com/tc?module=Static&d1=pressroom&d2=pr_022205 +[115]:http://qr.ae/RFfOfe +[116]:http://www.businessinsider.com/in-new-alleged-ims-mark-zuckerberg-talks-about-adam-dangelo-2012-9#ixzz369FcQoLB +[117]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 +[118]:http://stats.ioinformatics.org/people/1849 +[119]:http://googlepress.blogspot.com/2006/10/google-announces-winner-of-global-code_27.html +[120]:http://community.topcoder.com/tc?module=SimpleStats&c=coder_achievements&d1=statistics&d2=coderAchievements&cr=10574855 +[121]:https://www.facebook.com/notes/facebook-hacker-cup/facebook-hacker-cup-finals/208549245827651 +[122]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 +[123]:http://community.topcoder.com/tc?module=AlgoRank +[124]:http://codeforces.com/ratings +[125]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Venkateswaran-Vicky/comment/1960855 +[126]:http://commons.wikimedia.org/wiki/File:Gennady_Korot.jpg +[127]:http://stats.ioinformatics.org/people/804 
+[128]:http://icpc.baylor.edu/regionals/finder/world-finals-2013/standings +[129]:https://www.facebook.com/hackercup/posts/10152022955628845 +[130]:http://codeforces.com/ratings +[131]:http://community.topcoder.com/tc?module=AlgoRank +[132]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi +[133]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4720779 +[134]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4880549 +[135]:http://commons.wikimedia.org/wiki/File:Jielbeaumadier_richard_stallman_2010.jpg \ No newline at end of file diff --git a/published/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md b/published/201511/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md similarity index 100% rename from published/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md rename to published/201511/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md diff --git a/published/20150921 Configure PXE Server In Ubuntu 14.04.md b/published/201511/20150921 Configure PXE Server In Ubuntu 14.04.md similarity index 100% rename from published/20150921 Configure PXE Server In Ubuntu 14.04.md rename to published/201511/20150921 Configure PXE Server In Ubuntu 14.04.md diff --git a/published/20150929 A Developer's Journey into Linux Containers.md b/published/201511/20150929 A Developer's Journey into Linux Containers.md similarity index 100% rename from published/20150929 A Developer's Journey into Linux Containers.md rename to published/201511/20150929 A Developer's Journey into Linux Containers.md diff --git a/translated/tech/20151007-Fix-Shell-Script-Opens-In-Text Editor In 
Ubuntu.md b/published/201511/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md similarity index 52% rename from translated/tech/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md rename to published/201511/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md index f1d9f7253f..da44814e11 100644 --- a/translated/tech/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md +++ b/published/201511/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md @@ -1,26 +1,26 @@ -修复Sheell脚本在Ubuntu中用文本编辑器打开的方式 +修复 Shell 脚本在 Ubuntu 中的默认打开方式 ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Run-Shell-Script-on-Double-Click.jpg) -当你双击一个脚本(.sh文件)的时候,你想要做的是什么?通常的想法是执行它。但是在Ubuntu下面却不是这样,或者我应该更确切地说是在Files(Nautilus)中。你可能会疯狂地大叫“运行文件,运行文件”,但是文件没有运行而是用Gedit打开了。 +当你双击一个脚本(.sh文件)的时候,你想要做的是什么?通常的想法是执行它。但是在Ubuntu下面却不是这样,或者我应该更确切地说是在Files(Nautilus)中。你可能会疯狂地大叫“运行文件,运行文件”,但是文件没有运行而是用Gedit打开了。 -我知道你也许会说文件有可执行权限么?我会说是的。脚本有可执行权限但是当我双击它的时候,它还是用文本编辑器打开了。我不希望这样如果你遇到了同样的问题,我想你也许也不需要这样。 +我知道你也许会说文件有可执行权限么?我会说是的。脚本有可执行权限但是当我双击它的时候,它还是用文本编辑器打开了。我不希望这样,如果你遇到了同样的问题,我想你也许也不希望这样。 -我知道你或许已经被建议在终端下面运行,我知道这个可行但是这不是一个在GUI下不能运行的借口是么? +我知道你或许已经被建议在终端下面执行,我知道这个可行,但是这不是一个在GUI下不能运行的借口是么? 
这篇教程中,我们会看到**如何在双击后运行shell脚本。** #### 修复在Ubuntu中shell脚本用文本编辑器打开的方式 #### -shell脚本用文件编辑器打开的原因是Files(Ubuntu中的文件管理器)中的默认行为设置。在更早的版本中,它或许会询问你是否运行文件或者用编辑器打开。默认的行位在新的版本中被修改了。 +shell脚本用文本编辑器打开的原因是Files(Ubuntu中的文件管理器)中的默认行为设置。在更早的版本中,它或许会询问你是否运行文件或者用编辑器打开。默认的行为在新的版本中被修改了。 要修复这个,进入文件管理器,并在菜单中点击**选项**: ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/execute-shell-program-ubuntu-1.png) -接下来在**文件选项**中进入**行为**标签中,你会看到**文本文件执行**选项。 +接下来在**文件选项(Files Preferences)**中进入**行为(Behavior)**标签中,你会看到**可执行的文本文件(Executable Text Files)**选项。 -默认情况下,它被设置成“在打开是显示文本文件”。我建议你把它改成“每次询问”,这样你可以选择是执行还是编辑了,当然了你也可以选择默认执行。你可以自行选择。 +默认情况下,它被设置成“在打开时显示文本文件(View executable text files when they are opened)”。我建议你把它改成“每次询问(Ask each time)”,这样你可以选择是执行还是编辑了,当然了你也可以选择“在打开时运行可执行文本文件(Run executable text files when they are opened)”。你可以自行选择。 ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/execute-shell-program-ubuntu-2.png) @@ -32,7 +32,7 @@ via: http://itsfoss.com/shell-script-opens-text-editor/ 作者:[Abhishek][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/201511/20151012 Curious about Linux Try Linux Desktop on the Cloud.md b/published/201511/20151012 Curious about Linux Try Linux Desktop on the Cloud.md new file mode 100644 index 0000000000..2d2985bc34 --- /dev/null +++ b/published/201511/20151012 Curious about Linux Try Linux Desktop on the Cloud.md @@ -0,0 +1,44 @@ +好奇 Linux?试试云端的 Linux 桌面 +================================================================================ +Linux 在桌面操作系统市场上只占据了非常小的份额,从目前的调查结果来看,估计只有2%的市场份额;对比来看,丰富多变的 Windows 系统占据了接近90%的市场份额。对于 Linux 来说,要挑战 Windows 在桌面操作系统市场的垄断,需要有一个让用户学习不同的操作系统的简单方式。如果你相信传统的 Windows 用户会再买一台机器来使用 Linux,那你就太天真了。我们只能去试想用户重新分区,设置引导程序来使用双系统,或者跳过所有步骤回到一个最简单的方法。 + +![](http://www.linuxlinks.com/portal/content/reviews/Cloud/CloudComputing.png) + 
+我们实验过一系列让用户试操作 Linux 的无风险的使用方法,不涉及任何分区管理,包括 CD/DVD 光盘、USB 存储棒和桌面虚拟化软件等等。通过实验,我强烈推荐使用 VMware 的 VMware Player 或者 Oracle VirtualBox 虚拟机,对于桌面操作系统或者便携式电脑的用户,这是一种安装运行多操作系统的相对简单而且免费的方法。每一台虚拟机和其他虚拟机相隔离,但是共享 CPU、内存、网络接口等等。虚拟机仍需要一定的资源来安装运行 Linux,也需要一台相当强劲的主机。但对于一个好奇心不大的人,这样做实在是太麻烦了。 + +要打破用户传统的使用观念是非常困难的。很多 Windows 用户可以尝试使用 Linux 提供的自由软件,但也有太多要学习的 Linux 系统知识。这会花掉他们相当一部分时间才能习惯 Linux 的工作方式。 + +当然了,对于一个第一次在 Linux 上操作的新手,有没有一个更高效的方法呢?答案是肯定的,接着往下看看云实验平台。 + +### LabxNow ### + +![LabxNow](http://www.linuxlinks.com/portal/content/reviews/Cloud/Screenshot-LabxNow.png) + +LabxNow 提供了一个免费服务,方便广大用户通过浏览器来访问远程 Linux 桌面。开发者将其加强为一个用户个人远程实验室(用户可以在系统里运行、开发任何程序),用户可以在任何地方通过互联网登入远程实验室。 + +这项服务现在可以为个人用户提供2核处理器,4GB RAM和10GB的固态硬盘,运行在 128GB RAM 的 4 颗 AMD 6272 处理器上。 + +#### 配置参数: #### + +- 系统镜像:基于 Ubuntu 14.04 的 Xfce 4.10,RHEL 6.5,CentOS(Gnome桌面),Oracle +- 硬件: CPU - 1核或者2核;内存: 512MB, 1GB, 2GB 或 4GB +- 超快的网络数据传输 +- 可以运行在所有流行的浏览器上 +- 可以安装任意程序,可以运行任何程序 – 这是一个非常棒的方法,可以随意做实验学习你想学的任何知识,没有一点风险 +- 添加、删除、管理、定制虚拟机非常方便 +- 支持虚拟机共享,远程桌面 + +你所需要的只是一台有稳定网络的设备。不用担心虚拟专用系统(VPS)、域名、或者硬件带来的高费用。LabxNow提供了一个在 Ubuntu、RHEL 和 CentOS 上实验的非常好的方法。它给 Windows 用户提供一个极好的环境,让他们探索美妙的 Linux 世界。说得深入一点,它可以让用户随时随地在里面工作,而没有了要在每台设备上安装 Linux 的压力。点击下面这个链接进入 [www.labxnow.org/labxweb/][1]。 + +另外还有一些其它服务(大部分是收费服务)可以让用户使用 Linux,包括 Cloudsigma 环境的7天使用权和Icebergs.io (通过HTML5实现root权限)。但是现在,我推荐 LabxNow。 + +-------------------------------------------------------------------------------- + +来自: http://www.linuxlinks.com/article/20151003095334682/LinuxCloud.html + +译者:[sevenot](https://github.com/sevenot) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.labxnow.org/labxweb/ diff --git a/published/20151012 How To Use iPhone In Antergos Linux.md b/published/201511/20151012 How To Use iPhone In Antergos Linux.md similarity index 100% rename from published/20151012 How To Use iPhone In Antergos Linux.md rename to published/201511/20151012 How To Use iPhone In 
Antergos Linux.md diff --git a/translated/tech/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md b/published/201511/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md similarity index 50% rename from translated/tech/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md rename to published/201511/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md index a5207c3813..50e64ca9ad 100644 --- a/translated/tech/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md +++ b/published/201511/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md @@ -1,22 +1,25 @@ -命令行下使用Mop 监视股票价格 +命令行下使用 Mop 监视股票价格 ================================================================================ -有一份隐性收入通常很不错,特别是当你可以轻松的协调业余和全职工作。如果你的日常工作使用了联网的电脑,交易股票是一个很流行的选项来获取额外收入。 +![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-featured-new.jpg) + +有一份副业收入通常很不错,特别是当你可以轻松地协调业余和全职工作。如果你的日常工作使用了联网的电脑,交易股票就是一个获取额外收入的很流行的选项。 + +但是目前只有很少的股票监视软件可以运行在 Linux 上,其中大多数还是基于图形界面的。如果你是一个 Linux 专家,并且大量的工作时间是在没有图形界面的电脑上呢?你是不是就没办法了?不,还是有一些命令行下的股票追踪工具,包括Mop,也就是本文要聊一聊的工具。 -但是目前只有很少的股票监视软件可以用在linux 上,其中大多数还是基于图形界面的。如果你是一个Linux 专家,并且大量的工作时间是在没有图形界面的电脑上呢?你是不是就没办法了?不,这里还有一个命令行下的股票追踪工具,包括Mop,也就是本文要聊一聊的工具。 ### Mop ### -Mop,如上所述,是一个命令行下连续显示和更新美股和独立股票信息的工具。使用GO 实现的,是Michael Dvorkin 大脑的产物。 +Mop,如上所述,是一个命令行下连续显示和更新美股和独立股票信息的工具。它使用 GO 语言实现,是 Michael Dvorkin 的智慧结晶。 + ### 下载安装 ### - -因为这个工程使用GO 实现的,所以你要做的第一步是在你的计算机上安装这种编程语言,下面就是在Debian 系系统,比如Ubuntu上安装GO的步骤: +因为这个项目是使用 GO 实现的,所以你要做的第一步是在你的计算机上安装这种编程语言,下面就是在 Debian 系的系统,比如 Ubuntu 上安装 GO 的步骤: sudo apt-get install golang mkdir ~/workspace echo 'export GOPATH="$HOME/workspace"' >> ~/.bashrc source ~/.bashrc -GO 安装好后的下一步是安装Mop 工具和配置环境,你要做的是运行下面的命令: +GO 安装好后的下一步是安装 Mop 工具和配置环境,你要做的是运行下面的命令: sudo apt-get install git go get github.com/michaeldv/mop @@ -24,12 +27,13 @@ GO 安装好后的下一步是安装Mop 工具和配置环境,你要做的是 make install export PATH="$PATH:$GOPATH/bin" 
-完成之后就可以运行下面的命令执行Mop: +完成之后就可以运行下面的命令执行 Mop: + cmd ### 特性 ### -当你第一次运行Mop 时,你会看到类似下面的输出信息: +当你第一次运行 Mop 时,你会看到类似下面的输出信息: ![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-first-run.jpg) @@ -37,19 +41,19 @@ GO 安装好后的下一步是安装Mop 工具和配置环境,你要做的是 ### 添加删除股票 ### -Mop 允许你轻松的从输出列表上添加/删除个股信息。要添加,你全部要做的是按”+“和输入股票名称。举个例子,下图就是添加Facebook (FB) 到列表里。 +Mop 允许你轻松地从输出列表上添加/删除个股信息。要添加,你所要做的只是按下“+”键并输入股票名称。举个例子,下图就是添加 Facebook (FB) 到列表里。 ![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-add-stock.png) -因为我按下了”+“键,一列包含文本”Add tickers:“出现了,提示我添加股票名称—— 我添加了FB 然后按下回车。输出列表更新了,我添加的新股票也出现在列表了: +我按下了“+”键,就出现了包含文本“Add tickers:”的一行,提示我添加股票名称—— 我添加了 FB 然后按下回车。输出列表更新了,我添加的新股票也出现在列表了: ![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-stock-added.png) -类似的,你可以使用”-“ 键和提供股票名称删除一个股票。 +类似的,你可以使用“-”键和提供股票名称删除一个股票。 #### 根据价格分组 #### -还有一个把股票分组的办法:依据他们的股价升跌,你索要做的就是按下”g“ 键。接下来,股票会分组显示:升的在一起使用绿色字体显示,而下跌的股票会黑色字体显示。 +还有一个把股票分组的办法:依据它们的股价升跌,你所要做的就是按下“g”键。接下来,股票会分组显示:上涨的在一起使用绿色字体显示,而下跌的股票会用黑色字体显示。 如下所示: @@ -57,7 +61,7 @@ Mop 允许你轻松的从输出列表上添加/删除个股信息。要添加, #### 列排序 #### -Mop 同时也允许你根据不同的列类型改变排序规则。这种用法需要你按下”o“(这个命令默认使用第一列的值来排序),然后使用左右键来选择你要使用的列。完成之后按下回车对内容重新排序。 +Mop 同时也允许你根据不同的列类型改变排序规则。这种用法需要你按下“o”(这个命令默认使用第一列的值来排序),然后使用左右键来选择你要排序的列。完成之后按下回车对内容重新排序。 举个例子,下面的截图就是根据输出内容的第一列、按照字母表排序之后的结果。 @@ -67,12 +71,13 @@ Mop 同时也允许你根据不同的列类型改变排序规则。这种用法 #### 其他选项 #### -其它的可用选项包括”p“:暂停市场和股票信息更新,”q“ 或者”esc“ 来退出命令行程序,”?“ 显示帮助页。 +其它的可用选项包括“p”:暂停市场和股票信息更新,“q”或者“esc” 来退出命令行程序,“?”显示帮助页。 + ![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-help.png) ### 结论 ### -Mop 是一个基础的股票监控工具,并没有提供太多的特性,只提供了他声称的功能。很明显,这个工具并不是为专业股票交易者提供的,而仅仅为你在只有命令行的机器上得体的提供了一个跟踪股票信息的选择。 +Mop 是一个基础的股票监控工具,并没有提供太多的特性,只提供了它所声称的功能。很明显,这个工具并不是为专业股票交易者提供的,而仅仅是为你在只有命令行的机器上得体地提供了一个跟踪股票信息的选择。 -------------------------------------------------------------------------------- @@ -80,7 +85,7 @@ via: https://www.maketecheasier.com/monitor-stock-prices-ubuntu-command-line/ 作者:[Himanshu Arora][a] 译者:[oska874](https://github.com/oska874) 
-校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201511/20151012 How to Setup DockerUI--a Web Interface for Docker.md b/published/201511/20151012 How to Setup DockerUI--a Web Interface for Docker.md new file mode 100644 index 0000000000..10ead7542e --- /dev/null +++ b/published/201511/20151012 How to Setup DockerUI--a Web Interface for Docker.md @@ -0,0 +1,113 @@ +用浏览器管理 Docker +================================================================================ +Docker 越来越流行了。在一个容器里面而不是虚拟机里运行一个完整的操作系统是一种非常棒的技术和想法。docker 已经通过节省工作时间来拯救了成千上万的系统管理员和开发人员。这是一个开源技术,提供一个平台来把应用程序当作容器来打包、分发、共享和运行,而不用关注主机上运行的操作系统是什么。它没有开发语言、框架或打包系统的限制,并且可以在任何时间、任何地点运行,从小型计算机到高端服务器都可以。运行 docker 容器和管理它们可能会花费一点点努力和时间,所以现在有一款基于 web 的应用程序 DockerUI,可以让管理和运行容器变得很简单。DockerUI 是一个对那些不熟悉 Linux 命令行,但又很想运行容器化程序的人很有帮助的工具。DockerUI 是一个开源的基于 web 的应用程序,它最值得称道的是它华丽的设计和用来运行和管理 docker 的简洁的操作界面。 + +下面会介绍如何在 Linux 上安装配置 DockerUI。 + +### 1. 安装 docker ### + +首先,我们需要安装 docker。我们得感谢 docker 的开发者,让我们可以简单地在主流 Linux 发行版上安装 docker。为了安装 docker,我们得在对应的发行版上使用下面的命令。 + +#### Ubuntu/Fedora/CentOS/RHEL/Debian #### + +docker 维护者已经写了一个非常棒的脚本,用它可以在 Ubuntu 15.04/14.10/14.04、 CentOS 6.x/7、 Fedora 22、 RHEL 7 和 Debian 8.x 这几个 Linux 发行版上安装 docker。这个脚本可以识别出我们的机器上运行的 Linux 发行版本,然后将需要的源库添加到文件系统、并更新本地的安装源目录,最后安装 docker 及其依赖库。要使用这个脚本安装 docker,我们需要在 root 用户或者 sudo 权限下运行如下的命令: + + # curl -sSL https://get.docker.com/ | sh + +#### OpenSuse/SUSE Linux 企业版 #### + +要在运行了 OpenSuse 13.1/13.2 或者 SUSE Linux Enterprise Server 12 的机器上安装 docker,我们只需要简单地执行 zypper 命令。运行下面的命令就可以安装最新版本的 docker: + + # zypper in docker + +#### ArchLinux #### + +docker 在 ArchLinux 的官方源和社区维护的 AUR 库中可以找到。所以在 ArchLinux 上我们有两种方式来安装 docker。使用官方源安装,需要执行下面的 pacman 命令: + + # pacman -S docker + +如果要从社区源 AUR 安装 docker,需要执行下面的命令: + + # yaourt -S docker-git + +### 2. 
启动 ### + +安装好 docker 之后,我们需要运行 docker 守护进程,然后才能运行并管理 docker 容器。我们需要使用下列命令来确认 docker 守护进程已经安装并运行了。 + +#### 在 SysVinit 上 #### + + # service docker start + +#### 在 Systemd 上 #### + + # systemctl start docker + +### 3. 安装 DockerUI ### + +安装 DockerUI 比安装 docker 要简单很多。我们仅仅需要从 docker 注册库上拉取 dockerui,然后在容器里面运行。要完成这些,我们只需要简单地执行下面的命令: + + # docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui + +![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png) + +在上面的命令里,dockerui 使用的默认端口是9000,我们需要使用`-p` 选项映射默认端口。使用`-v` 标志我们可以指定 docker 的 socket。如果主机使用了 SELinux 那么就得使用`--privileged` 标志。 + +执行完上面的命令后,我们要检查 DockerUI 容器是否运行了,可以使用下面的命令检查: + + # docker ps + +![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png) + +### 4. 拉取 docker 镜像 ### + +现在我们还不能直接使用 DockerUI 拉取镜像,所以我们需要在命令行下拉取 docker 镜像。要完成这些我们需要执行下面的命令。 + + # docker pull ubuntu + +![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png) + +上面的命令将会从 docker 官方源 [Docker Hub][1]拉取一个标签为 ubuntu 的镜像。类似地,我们可以从 Hub 拉取需要的其它镜像。 + +### 5. 
管理 ### + +启动了 DockerUI 容器之后,我们可以用它来执行启动、暂停、终止、删除以及 DockerUI 提供的其它操作 docker 容器的命令。 + +首先,我们需要在 web 浏览器里面打开 dockerui:在浏览器里面输入 http://ip-address:9000 或者 http://mydomain.com:9000,具体要根据你的系统配置。默认情况下登录不需要认证,但是可以配置我们的 web 服务器来要求登录认证。要启动一个容器,我们需要有包含我们要运行的程序的镜像。 + +#### 创建 #### + +创建容器我们需要在 Images 页面里,点击我们想创建的容器的镜像 id。然后点击 `Create` 按钮,接下来我们就会被要求输入创建容器所需要的属性。这些都完成之后,我们需要点击 `Create` 按钮完成最终的创建。 + +![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png) + +#### 停止 #### + +要停止一个容器,我们只需要跳转到 `Containers` 页面,然后选取要停止的容器。然后在 Action 的子菜单里面按下 Stop 就行了。 + +![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png) + +#### 暂停与恢复 #### + +要暂停一个容器,只需要简单地选取目标容器,然后点击 Pause 就行了。恢复一个容器只需要在 Actions 的子菜单里面点击 Unpause 就行了。 + +#### 删除 #### + +类似于我们上面完成的任务,杀掉或者删除一个容器或镜像也是很简单的。只需要选中容器或镜像,然后点击 Kill 或者 Remove 就行了。 + +### 结论 ### + +DockerUI 使用了 docker 远程 API 提供了一个很棒的管理 docker 容器的 web 界面。它的开发者们完全使用 HTML 和 JS 设计、开发了这个应用。目前这个程序还处于开发中,并且还有大量的工作要完成,所以我们并不推荐将它应用在生产环境。它可以帮助用户简单地完成容器和镜像的管理,而且只需要一点点工作。如果想要为 DockerUI 做贡献,可以访问它们的 [Github 仓库][2]。如果有问题、建议、反馈,请写在下面的评论框,这样我们就可以修改或者更新我们的内容。谢谢。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/ + +作者:[Arun Pyasi][a] +译者:[oska874](https://github.com/oska874) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:https://hub.docker.com/ +[2]:https://github.com/crosbymichael/dockerui/ diff --git a/translated/tech/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md b/published/201511/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md similarity index 81% rename from translated/tech/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md rename to published/201511/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md index 
7e6ed0d2c2..4f00be2f90 100644 --- a/translated/tech/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md +++ b/published/201511/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md @@ -1,9 +1,8 @@ 如何在 CentOS 7.0 上配置 Ceph 存储 -How to Setup Red Hat Ceph Storage on CentOS 7.0 ================================================================================ -Ceph 是一个将数据存储在单一分布式计算机集群上的开源软件平台。当你计划构建一个云时,你首先需要决定如何实现你的存储。开源的 CEPH 是红帽原生技术之一,它基于称为 RADOS 的对象存储系统,用一组网关 API 表示块、文件、和对象模式中的数据。由于它自身开源的特性,这种便携存储平台能在公有和私有云上安装和使用。Ceph 集群的拓扑结构是按照备份和信息分布设计的,这内在设计能提供数据完整性。它的设计目标就是容错、通过正确配置能运行于商业硬件和一些更高级的系统。 +Ceph 是一个将数据存储在单一分布式计算机集群上的开源软件平台。当你计划构建一个云时,你首先需要决定如何实现你的存储。开源的 Ceph 是红帽原生技术之一,它基于称为 RADOS 的对象存储系统,用一组网关 API 表示块、文件、和对象模式中的数据。由于它自身开源的特性,这种便携存储平台能在公有云和私有云上安装和使用。Ceph 集群的拓扑结构是按照备份和信息分布设计的,这种内在设计能提供数据完整性。它的设计目标就是容错、通过正确配置能运行于商业硬件和一些更高级的系统。 -Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要求最近的内核以及其它最新的库。在这篇指南中,我们会使用最小化安装的 CentOS-7.0。 +Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它需要最近的内核以及其它最新的库。在这篇指南中,我们会使用最小化安装的 CentOS-7.0。 ### 系统资源 ### @@ -25,11 +24,11 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 ### 安装前的配置 ### -在安装 CEPH 存储之前,我们要在每个节点上完成一些步骤。第一件事情就是确保每个节点的网络已经配置好并且能相互访问。 +在安装 Ceph 存储之前,我们要在每个节点上完成一些步骤。第一件事情就是确保每个节点的网络已经配置好并且能相互访问。 **配置 Hosts** -要在每个节点上配置 hosts 条目,要像下面这样打开默认的 hosts 配置文件。 +要在每个节点上配置 hosts 条目,要像下面这样打开默认的 hosts 配置文件(LCTT 译注:或者做相应的 DNS 解析)。 # vi /etc/hosts @@ -46,9 +45,9 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 **配置防火墙** -如果你正在使用启用了防火墙的限制性环境,确保在你的 CEPH 存储管理节点和客户端节点中开放了以下的端口。 +如果你正在使用启用了防火墙的限制性环境,确保在你的 Ceph 存储管理节点和客户端节点中开放了以下的端口。 -你必须在你的 Admin Calamari 节点开放 80、2003、以及4505-4506 端口,并且允许通过 80 号端口到 CEPH 或 Calamari 管理节点,以便你网络中的客户端能访问 Calamari web 用户界面。 +你必须在你的 Admin Calamari 节点开放 80、2003、以及4505-4506 端口,并且允许通过 80 号端口到 Ceph 或 Calamari 管理节点,以便你网络中的客户端能访问 Calamari web 用户界面。 你可以使用下面的命令在 CentOS 7 中启动并启用防火墙。 @@ -62,7 +61,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 #firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent #firewall-cmd --reload -在 CEPH Monitor 节点,你要在防火墙中允许通过以下端口。 +在 Ceph Monitor 
节点,你要在防火墙中允许通过以下端口。 #firewall-cmd --zone=public --add-port=6789/tcp --permanent @@ -82,9 +81,9 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 #yum update #shutdown -r 0 -### 设置 CEPH 用户 ### +### 设置 Ceph 用户 ### -现在我们会新建一个单独的 sudo 用户用于在每个节点安装 ceph-deploy工具,并允许该用户无密码访问每个节点,因为它需要在 CEPH 节点上安装软件和配置文件而不会有输入密码提示。 +现在我们会新建一个单独的 sudo 用户用于在每个节点安装 ceph-deploy 工具,并允许该用户无密码访问每个节点,因为它需要在 Ceph 节点上安装软件和配置文件而不会有输入密码提示。 运行下面的命令在 ceph-storage 主机上新建有独立 home 目录的新用户。 @@ -100,7 +99,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 ### 设置 SSH 密钥 ### -现在我们会在 ceph 管理节点生成 SSH 密钥并把密钥复制到每个 Ceph 集群节点。 +现在我们会在 Ceph 管理节点生成 SSH 密钥并把密钥复制到每个 Ceph 集群节点。 在 ceph-node 运行下面的命令复制它的 ssh 密钥到 ceph-storage。 @@ -125,7 +124,8 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 ### 配置 PID 数目 ### -要配置 PID 数目的值,我们会使用下面的命令检查默认的内核值。默认情况下,是一个小的最大线程数 32768. +要配置 PID 数目的值,我们会使用下面的命令检查默认的内核值。默认情况下,这是一个较小的最大线程数 32768。 + 如下图所示通过编辑系统配置文件配置该值为一个更大的数。 ![更改 PID 值](http://blog.linoxide.com/wp-content/uploads/2015/10/3-PID-value.png) @@ -142,9 +142,9 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 #rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm -![添加 EPEL](http://blog.linoxide.com/wp-content/uploads/2015/10/k1.png) +![添加 Ceph 仓库](http://blog.linoxide.com/wp-content/uploads/2015/10/k1.png) -或者创建一个新文件并更新 CEPH 库参数,别忘了替换你当前的 Release 和版本号。 +或者创建一个新文件并更新 Ceph 库参数,别忘了替换你当前的 Release 和版本号。 [root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo @@ -160,7 +160,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 之后更新你的系统并安装 ceph-deploy 软件包。 -### 安装 CEPH-Deploy 软件包 ### +### 安装 ceph-deploy 软件包 ### 我们运行下面的命令以及 ceph-deploy 安装命令来更新系统以及最新的 ceph 库和其它软件包。 @@ -181,15 +181,16 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 ![设置 ceph 集群](http://blog.linoxide.com/wp-content/uploads/2015/10/k4.png) 如果成功执行了上面的命令,你会看到它新建了配置文件。 -现在配置 CEPH 默认的配置文件,用任意编辑器打开它并在会影响你公共网络的 global 参数下面添加以下两行。 + +现在配置 Ceph 默认的配置文件,用任意编辑器打开它并在会影响你公共网络的 global 参数下面添加以下两行。 #vim ceph.conf osd pool default size = 1 public network = 45.79.0.0/16 -### 安装 CEPH ### +### 安装 Ceph ### -现在我们准备在和 CEPH 集群相关的每个节点上安装 CEPH。我们使用下面的命令在 
ceph-storage 和 ceph-node 上安装 CEPH。 +现在我们准备在和 Ceph 集群相关的每个节点上安装 Ceph。我们使用下面的命令在 ceph-storage 和 ceph-node 上安装 Ceph。 #ceph-deploy install ceph-node ceph-storage @@ -201,7 +202,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 #ceph-deploy mon create-initial -![CEPH 初始化监视器](http://blog.linoxide.com/wp-content/uploads/2015/10/k6.png) +![Ceph 初始化监视器](http://blog.linoxide.com/wp-content/uploads/2015/10/k6.png) ### 设置 OSDs 和 OSD 守护进程 ### @@ -223,9 +224,9 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 #ceph-deploy admin ceph-node ceph-storage -### 测试 CEPH ### +### 测试 Ceph ### -我们几乎完成了 CEPH 集群设置,让我们在 ceph 管理节点上运行下面的命令检查正在运行的 ceph 状态。 +我们快完成了 Ceph 集群设置,让我们在 ceph 管理节点上运行下面的命令检查正在运行的 ceph 状态。 #ceph status #ceph health @@ -235,7 +236,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要 ### 总结 ### -在这篇详细的文章中我们学习了如何使用两台安装了 CentOS 7 的虚拟机设置 CEPH 存储集群,这能用于备份或者作为用于处理其它虚拟机的本地存储。我们希望这篇文章能对你有所帮助。当你试着安装的时候记得分享你的经验。 +在这篇详细的文章中我们学习了如何使用两台安装了 CentOS 7 的虚拟机设置 Ceph 存储集群,这能用于备份或者作为用于处理其它虚拟机的本地存储。我们希望这篇文章能对你有所帮助。当你试着安装的时候记得分享你的经验。 -------------------------------------------------------------------------------- @@ -243,7 +244,7 @@ via: http://linoxide.com/storage/setup-red-hat-ceph-storage-centos-7-0/ 作者:[Kashif Siddique][a] 译者:[ictlyh](http://mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md b/published/201511/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md similarity index 100% rename from published/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md rename to published/201511/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md diff --git a/published/20151012 What is a good IDE for R on Linux.md b/published/201511/20151012 
What is a good IDE for R on Linux.md similarity index 100% rename from published/20151012 What is a good IDE for R on Linux.md rename to published/201511/20151012 What is a good IDE for R on Linux.md diff --git a/published/20151019 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md b/published/201511/20151019 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md similarity index 100% rename from published/20151019 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md rename to published/201511/20151019 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md diff --git a/translated/talk/20151019 Nautilus File Search Is About To Get A Big Power Up.md b/published/201511/20151019 Nautilus File Search Is About To Get A Big Power Up.md similarity index 61% rename from translated/talk/20151019 Nautilus File Search Is About To Get A Big Power Up.md rename to published/201511/20151019 Nautilus File Search Is About To Get A Big Power Up.md index b9f0762cbe..b38d77c28e 100644 --- a/translated/talk/20151019 Nautilus File Search Is About To Get A Big Power Up.md +++ b/published/201511/20151019 Nautilus File Search Is About To Get A Big Power Up.md @@ -1,24 +1,24 @@ -Nautilus的文件搜索将迎来巨大提升 +Nautilus 的文件搜索将迎来巨大提升 ================================================================================ ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/10/nautilus-new-search-filters.jpg) -**在Nautilus中搜索零散文件和文件夹将会将会变得相当简单。** +*在Nautilus中搜索零散文件和文件夹将会变得相当简单。* -[GNOME文件管理器][1]中一个新的**搜索过滤器**正在开发中。它大量使用 GNOME 漂亮的弹出式菜单努力提供一个简单的方法缩小搜索结果并精确找到你需要的。 +[GNOME文件管理器][1]中正在开发一个新的**搜索过滤器**。它大量使用 GNOME 漂亮的弹出式菜单,以通过简单的方法来缩小搜索结果并精确地找到你所需要的。 -开发者Georges Stavracas正致力于新的UI并[描述][2]新的编辑器为“更干净、更合理、更直观”。 +开发者Georges Stavracas正致力于开发新的UI,他[说][2]这个新的界面“更干净、更合理、更直观”。 -根据他[上传到Youtube][3]的视频来展示新的方式-他还没有嵌入它-他没有错。 +从他[上传到Youtube][3]的视频来看(他还没有把视频嵌入博客),他说得没错。 + -> 
他在他的博客中写到:“ Nautilus 有非常复杂但是强大的内部组成,它允许我们做很多事情。事实上在代码上存在各种可能。那么,为何它曾经看上去这么糟糕?” -问题有部分比较夸张;新的搜索过滤器界面向用户展示了“强大的内部组成”。搜索结果可以根据类型、名字或者日期范围来进行过滤。 +这个问题的部分原因比较令人吃惊:新的搜索过滤器界面向用户展示了“强大的内部组成”。搜索结果可以根据类型、名字或者日期范围来进行过滤。 对于像 Nautilus 这类 app 的任何修改有可能让一些用户不安,因此像这样帮助性的、直接的新UI会带来一些争议。 -虽然对于不满的担心貌似会影响进度(毫无疑问,虽然像[移除类型优先搜索][4]的争议自2014年以来一直在争论)。GNOME 3.18 在[上个月发布了][5],给 Nautilus 引入了新的文件进度对话框,以及远程共享的更好整合,包括 Google Drive。 +虽然对于不满的担心貌似会影响进度(毫无疑问,虽然像[移除输入优先搜索][4]的争议自2014年以来一直在争论)。GNOME 3.18 在[上个月发布了][5],给 Nautilus 引入了新的文件进度对话框,以及远程共享的更好整合,包括 Google Drive。 -Stavracas 的搜索过滤器还没被合并进 Files 的 trunk,但是重做的搜索 UI 已经初步计划在明年春天的 GNOME 3.20 中实现。 +Stavracas 的搜索过滤器还没被合并进 Files 的 trunk 中,但是复刻的搜索 UI 已经初步计划在明年春天的 GNOME 3.20 中实现。 -------------------------------------------------------------------------------- diff --git a/published/20151027 How To Install Retro Terminal In Linux.md b/published/201511/20151027 How To Install Retro Terminal In Linux.md similarity index 100% rename from published/20151027 How To Install Retro Terminal In Linux.md rename to published/201511/20151027 How To Install Retro Terminal In Linux.md diff --git a/published/20151027 How To Show Desktop In GNOME 3.md b/published/201511/20151027 How To Show Desktop In GNOME 3.md similarity index 100% rename from published/20151027 How To Show Desktop In GNOME 3.md rename to published/201511/20151027 How To Show Desktop In GNOME 3.md diff --git a/published/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md b/published/201511/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md similarity index 100% rename from published/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md rename to published/201511/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md diff --git a/translated/tech/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md b/published/201511/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md similarity index 74% rename 
from translated/tech/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md rename to published/201511/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md index 2948e8de61..cce93c0d02 100644 --- a/translated/tech/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md +++ b/published/201511/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md @@ -1,4 +1,3 @@ - 如何在 Linux 终端下创建新的文件系统/分区 ================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-feature-image.png) @@ -13,8 +12,7 @@ ![cfdisk-lsblk](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-lsblk.png) - -一旦你运行了 `lsblk`,你应该会看到当前系统上每个磁盘的详细列表。看看这个列表,然后找出你想要使用的磁盘。在本文中,我将使用 `sdb` 来进行演示。 +当你运行了 `lsblk`,你应该会看到当前系统上每个磁盘的详细列表。看看这个列表,然后找出你想要使用的磁盘。在本文中,我将使用 `sdb` 来进行演示。 在终端输入这个命令。它会显示一个功能强大的基于终端的分区编辑程序。 @@ -26,9 +24,7 @@ 当输入此命令后,你将进入分区编辑器中,然后访问你想改变的磁盘。 -Since hard drive partitions are different, depending on a user’s needs, this part of the guide will go over **how to set up a split Linux home/root system layout**. 
- -由于磁盘分区的不同,这取决于用户的需求,这部分的指南将在 **如何建立一个分布的 Linux home/root 文件分区**。 +由于磁盘分区方案因用户的需求而异,这部分的指南将介绍**如何建立一个分离的 Linux home/root 分区布局**。 首先,需要创建根分区。这需要根据磁盘的字节数来进行分割。我测试的磁盘是 32 GB。 @@ -38,7 +34,7 @@ 该程序会要求你输入分区大小。一旦你指定好大小后,按 Enter 键。这将被称为根分区(或 /dev/sdb1)。 -接下来该创建用户分区(/dev/sdb2)了。你需要在 CFdisk 中再选择一些空闲分区。使用箭头选择 [ NEW ] 选项,然后按 Enter 键。输入你用户分区的大小,然后按 Enter 键来创建它。 +接下来该创建 home 分区(/dev/sdb2)了。你需要在 CFdisk 中再选择一些空闲分区。使用箭头选择 [ NEW ] 选项,然后按 Enter 键。输入你的 home 分区的大小,然后按 Enter 键来创建它。 ![cfdisk-create-home-partition](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-create-home-partition.png) @@ -48,7 +44,7 @@ ![cfdisk-specify-partition-type-swap](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-specify-partition-type-swap.png) -现在,交换分区被创建了,该指定其类型。使用上下箭头来选择它。之后,使用左右箭头选择 [ TYPE ] 。找到 Linux swap 选项,然后按 Enter 键。 +现在,交换分区创建好了,该指定其类型了。使用上下箭头来选择它。之后,使用左右箭头选择 [ TYPE ] 。找到 Linux swap 选项,然后按 Enter 键。 ![cfdisk-write-partition-table](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-write-partition-table.jpg) @@ -56,13 +52,13 @@ ### 使用 mkfs 创建文件系统 ### -有时候,你并不需要一个完整的分区,你只想要创建一个文件系统而已。你可以在终端直接使用 `mkfs` 命令来实现。 +有时候,你并不需要重新进行整个分区,你只想要创建一个文件系统而已。你可以在终端直接使用 `mkfs` 命令来实现。 ![cfdisk-mkfs-list-partitions-lsblk](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-list-partitions-lsblk.png) -首先,找出你要使用的磁盘。在终端输入 `lsblk` 找出来。它会打印出列表,之后只要找到你想制作文件系统的分区或盘符。 +首先,找出你要使用的磁盘。在终端输入 `lsblk` 找出来。它会打印出列表,之后只要找到你想创建文件系统的分区或盘符。 -在这个例子中,我将使用 `/dev/sdb1` 的第一个分区。只对 `/dev/sdb` 使用 mkfs(将会使用整个分区)。 +在这个例子中,我将使用第二个硬盘的第一个分区 `/dev/sdb1`。可以对 `/dev/sdb` 使用 mkfs(这将会使用整个磁盘)。 ![cfdisk-mkfs-make-file-system-ext4](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-make-file-system-ext4.png) @@ -70,13 +66,13 @@ Since hard drive partitions are 
different, depending on a user’s needs, this p sudo mkfs.ext4 /dev/sdb1 -在终端。应当指出的是,`mkfs.ext4` 可以将你指定的任何文件系统改变。 +在终端中运行上面的命令。应当指出的是,`mkfs.ext4` 可以换成任何你想要使用的文件系统。 ### 结论 ### 虽然使用图形工具编辑文件系统和分区更容易,但终端可以说是更有效的。终端的加载速度更快,点击几个按钮即可。GParted 和其它工具一样,它也是一个完整的工具。我希望在本教程的帮助下,你会明白如何在终端中高效的编辑文件系统。 -你是否更喜欢使用基于终端的方法在 Linux 上编辑分区?为什么或为什么不?在下面告诉我们! +你是否更喜欢使用基于终端的方法在 Linux 上编辑分区?不管是不是,请在下面告诉我们。 -------------------------------------------------------------------------------- @@ -84,7 +80,7 @@ via: https://www.maketecheasier.com/create-file-systems-partitions-terminal-linu 作者:[Derrik Diener][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md b/published/201511/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md similarity index 100% rename from published/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md rename to published/201511/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md diff --git a/translated/tech/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md b/published/201511/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md similarity index 91% rename from translated/tech/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md rename to published/201511/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md index 9671ff2ecd..81a298a227 100644 --- a/translated/tech/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md +++ b/published/201511/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md @@ -1,4 +1,4 @@ -如何在 Ubuntu 上用 Go For It 管理您的待办清单 (To-Do Lists) +如何在 Ubuntu 上用 Go For It 管理您的待办清单 
================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-featured1.jpg) @@ -8,7 +8,7 @@ ### Go For It ### -[Go For It][1] (GFI) 由 Manuel Kehl 开发,他声称:“这是款简单易用且时尚优雅的生产力软件,以待办清单(To-Do List)为主打特色,并整合了一个能让你专注于当前事务的定时器。”这款软件的定时器功能尤其有趣,它还可以确保您在继续工作之前暂停下来,放松一段时间。 +[Go For It][1] (GFI) 由 Manuel Kehl 开发,他声称:“这是款简单易用且时尚优雅的生产力软件,以待办清单(To-Do List)为主打特色,并整合了一个能让你专注于当前事务的定时器。”这款软件的定时器功能尤其有趣,它还可以让您在继续工作之前暂停下来,放松一段时间。 ### 下载并安装 ### @@ -67,7 +67,7 @@ GFI 也能让您稍微调整一些它的设置。例如,下图所示的设置 ### 结论### -正如您所看到的,GFI 是一款简洁明了且易于使用的任务管理软件。虽然它不提供非常丰富的功能,但它实现了它的承诺,定时器的整合特别有用。如果您正在寻找一款实现了基础功能,并且开源的 Linux 任务管理软件,Go For It 值得您一试。 +正如您所看到的,GFI 是一款简洁明了且易于使用的任务管理软件。虽然它没有提供非常丰富的功能,但它实现了它的承诺,定时器的整合特别有用。如果您正在寻找一款实现了基础功能,并且开源的 Linux 任务管理软件,Go For It 值得您一试。 -------------------------------------------------------------------------------- diff --git a/published/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md b/published/201511/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md similarity index 100% rename from published/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md rename to published/201511/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md diff --git a/translated/tech/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md b/published/201511/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md similarity index 66% rename from translated/tech/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md rename to published/201511/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md index 675ef43d94..e9e3aeabcc 100644 --- a/translated/tech/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md +++ b/published/201511/20151105 Linux FAQs with Answers--How to find which shell I am 
using on Linux.md @@ -1,5 +1,4 @@ - -Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell +Linux 有问必答:如何知道当前正在使用的 shell 是哪个? ================================================================================ > **问题**: 我经常在命令行中切换 shell。是否有一个快速简便的方法来找出我当前正在使用的 shell 呢?此外,我怎么能找到当前 shell 的版本? @@ -7,36 +6,30 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell 有多种方式可以查看你目前在使用什么 shell,最简单的方法就是通过使用 shell 的特殊参数。 -其一,[一个名为 "$$" 的特殊参数][1] 表示当前你正在运行的 shell 的 PID。此参数是只读的,不能被修改。所以,下面的命令也将显示你正在运行的 shell 的名字: +其一,[一个名为 "$$" 的特殊参数][1] 表示当前你正在运行的 shell 实例的 PID。此参数是只读的,不能被修改。所以,下面的命令也将显示你正在运行的 shell 的名字: $ ps -p $$ ----------- - PID TTY TIME CMD 21666 pts/4 00:00:00 bash 上述命令可在所有可用的 shell 中工作。 -如果你不使用 csh,使用 shell 的特殊参数 “$$” 可以找出当前的 shell,这表示当前正在运行的 shell 或 shell 脚本的名称。这是 Bash 的一个特殊参数,但也可用在其他 shells 中,如 sh, zsh, tcsh or dash。使用 echo 命令也可以查看你目前正在使用的 shell 的名称。 +如果你不使用 csh,找到当前使用的 shell 的另外一个办法是使用特殊参数 “$0” ,它表示当前正在运行的 shell 或 shell 脚本的名称。这是 Bash 的一个特殊参数,但也可用在其他 shell 中,如 sh、zsh、tcsh 或 dash。使用 echo 命令可以查看你目前正在使用的 shell 的名称。 $ echo $0 ----------- - bash -不要将 $SHELL 看成是一个单独的环境变量,它被设置为整个路径下的默认 shell。因此,这个变量并不一定指向你当前使用的 shell。例如,即使你在终端中调用不同的 shell,$SHELL 也保持不变。 +不要被一个叫做 $SHELL 的单独的环境变量所迷惑,它被设置为你的默认 shell 的完整路径。因此,这个变量并不一定指向你当前使用的 shell。例如,即使你在终端中调用不同的 shell,$SHELL 也保持不变。 $ echo $SHELL ----------- - /bin/shell ![](https://c2.staticflickr.com/6/5688/22544087680_4a9c180485_c.jpg) -因此,找出当前的shell,你应该使用 $$ 或 $0,但不是 $ SHELL。 +因此,找出当前的shell,你应该使用 $$ 或 $0,但不是 $SHELL。 ### 找出当前 Shell 的版本 ### @@ -46,8 +39,6 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell $ bash --version ----------- - GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu) Copyright (C) 2013 Free Software Foundation, Inc. 
License GPLv3+: GNU GPL version 3 or later @@ -59,23 +50,17 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell $ zsh --version ----------- - zsh 5.0.7 (x86_64-pc-linux-gnu) **对于** tcsh **shell**: $ tcsh --version ----------- - tcsh 6.18.01 (Astron) 2012-02-14 (x86_64-unknown-linux) options wide,nls,dl,al,kan,rh,nd,color,filec -对于一些 shells,你还可以使用 shell 特定的变量(例如,$ BASH_VERSION 或 $ ZSH_VERSION)。 +对于某些 shell,你还可以使用 shell 特定的变量(例如,$BASH_VERSION 或 $ZSH_VERSION)。 $ echo $BASH_VERSION ----------- - 4.3.8(1)-release -------------------------------------------------------------------------------- @@ -84,7 +69,7 @@ via: http://ask.xmodulo.com/which-shell-am-i-using.html 作者:[Dan Nanni][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20151109 Open Source Alternatives to LastPass.md b/published/201511/20151109 Open Source Alternatives to LastPass.md similarity index 100% rename from published/20151109 Open Source Alternatives to LastPass.md rename to published/201511/20151109 Open Source Alternatives to LastPass.md diff --git a/published/20151116 Linux FAQs with Answers--How to set JAVA_HOME environment variable automatically on Linux.md b/published/201511/20151116 Linux FAQs with Answers--How to set JAVA_HOME environment variable automatically on Linux.md similarity index 100% rename from published/20151116 Linux FAQs with Answers--How to set JAVA_HOME environment variable automatically on Linux.md rename to published/201511/20151116 Linux FAQs with Answers--How to set JAVA_HOME environment variable automatically on Linux.md diff --git a/translated/share/20151117 N1--The Next Generation Open Source Email Client.md b/published/201511/20151117 N1--The Next Generation Open Source Email Client.md similarity index 77% rename from translated/share/20151117 N1--The Next Generation Open Source 
Email Client.md rename to published/201511/20151117 N1--The Next Generation Open Source Email Client.md index 6ffe067ef6..b2cbb4c4ea 100644 --- a/translated/share/20151117 N1--The Next Generation Open Source Email Client.md +++ b/published/201511/20151117 N1--The Next Generation Open Source Email Client.md @@ -2,22 +2,21 @@ N1:下一代开源邮件客户端 ================================================================================ ![N1 Open Source email client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/N1-email-client.png) -当我们谈论到Linux中的邮件客户端,通常上 Thunderbird、Geary 和 [Evolution][3] 会出现在我们的脑海。作为对这些大咖们的挑战,一款新的开源邮件客户端正在涌入市场。 - +当我们谈论到Linux中的邮件客户端,通常 Thunderbird、Geary 和 [Evolution][3] 就会出现在我们的脑海。作为对这些大咖们的挑战,一款新的开源邮件客户端正在涌入市场。 ### 设计和功能 ### -[N1][4]是一个同时聚焦设计和功能的下一代开源邮件客户端。作为一个开源软件,N1目前支持 Linux 和 Mac OS X,Windows的版本还在开发中。 +[N1][4]是一个设计与功能并重的新一代开源邮件客户端。作为一个开源软件,N1目前支持 Linux 和 Mac OS X,Windows的版本还在开发中。 -N1宣传它自己为“可扩展的开源邮件客户端”,因为它包含了 Javascript 插件架构,任何人都可以为它创建强大的新功能。可扩展是一个非常流行的功能,它帮助[开源编辑器Atom][5]变得流行。N1同样把重点放在了可扩展上面。 +N1宣传它自己为“可扩展的开源邮件客户端”,因为它包含了 Javascript 插件框架,任何人都可以为它创建强大的新功能。可扩展是一个非常流行的功能,它帮助[开源编辑器Atom][5]变得流行。N1同样把重点放在了可扩展上面。 除了可扩展性,N1同样着重设计了程序的外观。下面N1的截图就是个很好的例子: ![N1 Open Source email client on Mac OS X](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/N1-email-client-1.jpeg) -Mac OS X上的N1客户端。图片来自:N1 +*Mac OS X上的N1客户端。图片来自:N1* -除了这个功能,N1兼容上百的邮件提供商包括Gmail、Yahoo、iCloud、Microsoft Exchange等等,桌面应用提供离线功能。 +除了这个功能,N1兼容上百个邮件服务提供商,包括Gmail、Yahoo、iCloud、Microsoft Exchange等等,这个桌面应用提供了离线功能。 ### 目前只能邀请使用 ### diff --git a/published/201511/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md b/published/201511/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md new file mode 100644 index 0000000000..6ec1bdc1ec --- /dev/null +++ b/published/201511/20151123 How to Install NVIDIA 358.16 Driver in Ubuntu 15.10 or 14.04.md @@ -0,0 +1,68 @@ +如何在 Ubuntu 15.10,14.04 中安装 NVIDIA 358.16 驱动程序 
+================================================================================ +![nvidia-logo-1](http://ubuntuhandbook.org/wp-content/uploads/2015/06/nvidia-logo-1.png) + +[NVIDIA 358.16][1] —— NVIDIA 358 系列的第一个稳定版本已经发布,并对 358.09(测试版)中的问题做了一些修正,以及一些小的改进。 + +NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块,可以配合 nvidia.ko 内核模块工作来调用 GPU 显示引擎。在以后发布版本中,**nvidia-modeset.ko** 内核驱动程序将被用于模式设置接口的基础,该接口由内核的直接渲染管理器(DRM)所提供。 + +新的驱动程序也有新的 GLX 协议扩展,以及在 OpenGL 驱动中分配大量内存的系统内存分配新机制。新的 GPU **GeForce 805A** 和 **GeForce GTX 960A** 都支持。NVIDIA 358.16 也支持 X.Org 1.18 服务器和 OpenGL 4.3。 + +### 如何在 Ubuntu 中安装 NVIDIA 358.16 : ### + +> **请不要在生产设备上安装,除非你知道自己在做什么以及如何才能恢复。** + +对于官方的二进制文件,请到 [nvidia.com/object/unix.html][2] 查看。 + +对于那些喜欢 Ubuntu PPA 的,我建议你使用 [显卡驱动 PPA][3]。到目前为止,支持 Ubuntu 16.04, Ubuntu 15.10, Ubuntu 15.04, Ubuntu 14.04。 + +**1. 添加 PPA.** + +通过按 `Ctrl+Alt+T` 快捷键来从 Unity 桌面打开终端。终端启动后,粘贴下面的命令并按回车键: + + sudo add-apt-repository ppa:graphics-drivers/ppa + +![nvidia-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/08/nvidia-ppa.jpg) + +它会要求你输入密码。输入密码后,密码不会显示在屏幕上,按 Enter 继续。 + +**2. 
刷新并安装新的驱动程序** + +添加 PPA 后,逐一运行下面的命令刷新软件库并安装新的驱动程序: + + sudo apt-get update + + sudo apt-get install nvidia-358 nvidia-settings + +### (如果需要的话,) 卸载: ### + +开机从 GRUB 菜单进入恢复模式,进入根控制台。然后逐一运行下面的命令: + +重新挂载文件系统为可写: + + mount -o remount,rw / + +删除所有的 nvidia 包: + + apt-get purge nvidia* + +最后返回菜单并重新启动: + + reboot + +要禁用/删除显卡驱动 PPA,点击系统设置下的**软件和更新**,然后导航到**其他软件**标签。 + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/11/install-nvidia-358-16-driver-ubuntu-15-10/ + +作者:[Ji m][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:http://www.nvidia.com/Download/driverResults.aspx/95921/en-us +[2]:http://www.nvidia.com/object/unix.html +[3]:https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa diff --git a/published/201511/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md b/published/201511/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md new file mode 100644 index 0000000000..bf6b5c3b11 --- /dev/null +++ b/published/201511/20151123 Install Intel Graphics Installer in Ubuntu 15.10.md @@ -0,0 +1,46 @@ +在 Ubuntu 15.10 上安装 Intel Graphics 安装器 +================================================================================ +![Intel graphics installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel_logo.jpg) + +Intel 最近发布了一个新版本的 Linux Graphics 安装器。在新版本中,将不支持 Ubuntu 15.04,而必须用 Ubuntu 15.10 Wily。 + +> Linux 版 Intel® Graphics 安装器可以让你很容易的为你的 Intel Graphics 硬件安装最新版的图形与视频驱动。它能保证你一直使用最新的增强与优化功能,并能够安装到 Intel Graphics Stack 中,来保证你在你的 Intel 图形硬件下,享受到最佳的用户体验。*现在 Linux 版的 Intel® Graphics 安装器支持最新版的 Ubuntu。* + +![intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel-graphics-installer.jpg) + +### 安装 ### + +**1.** 从[这个链接页面][1]中下载该安装器。当前支持 Ubuntu 15.10 
的版本是1.2.1版。你可以在**系统设置 -> 详细信息**中检查你的操作系统(32位或64位)的类型。 + +![download-intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/download-intel-graphics-installer.jpg) + +**2.** 一旦下载完成,到下载目录中点击 .deb 安装包,用 Ubuntu 软件中心打开它,最后点击“安装”按钮。 + +![install-via-software-center](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-via-software-center.jpg) + +**3.** 为了让系统信任 Intel Graphics 安装器,你需要通过下面的命令来为它添加密钥。 + +用快捷键`Ctrl+Alt+T`或者在 Unity Dash 中的“应用程序启动器”中打开终端。依次粘贴运行下面的命令。 + + wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add - + + wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add - + +![trust-intel](http://ubuntuhandbook.org/wp-content/uploads/2015/11/trust-intel.jpg) + +注意:在运行第一个命令的过程中,如果密钥下载完成后,光标停住不动并且一直闪烁的话,就像上面图片显示的那样,输入你的密码(输入时不会看到什么有变化)然后回车就行了。 + +最后通过 Unity Dash 或应用程序启动器打开 Intel Graphics 安装器。 + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/11/install-intel-graphics-installer-in-ubuntu-15-10/ + +作者:[Ji m][a] +译者:[XLCYun](https://github.com/XLCYun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://01.org/linuxgraphics/downloads diff --git a/published/Learn with Linux--Master Your Math with These Linux Apps.md b/published/201511/Learn with Linux--Master Your Math with These Linux Apps.md similarity index 100% rename from published/Learn with Linux--Master Your Math with These Linux Apps.md rename to published/201511/Learn with Linux--Master Your Math with These Linux Apps.md diff --git a/published/LetsEncrypt.md b/published/201511/LetsEncrypt.md similarity index 100% rename from published/LetsEncrypt.md rename to published/201511/LetsEncrypt.md diff --git a/published/20151107 Hackers Successfully Install Linux on a Potato.md 
b/published/20151107 Hackers Successfully Install Linux on a Potato.md new file mode 100644 index 0000000000..b35220730b --- /dev/null +++ b/published/20151107 Hackers Successfully Install Linux on a Potato.md @@ -0,0 +1,32 @@ +黑客们成功地在土豆上安装了 Linux ! +================================================================================ + +来自荷兰阿姆斯特丹的消息称,LinuxOnAnything.nl 网站的黑客们成功地在土豆上安装了 Linux!这是该操作系统第一次在根用蔬菜(root vegetable)上安装成功(LCTT 译注:root vetetable,一语双关,root 在 Linux 是指超级用户)。 + +![Linux Potato](http://www.bbspot.com/Images/News_Features/2008/12/linux-potato.jpg) + +“土豆没有 CPU,内存和存储器,这真的是个挑战,” Linux On Anything (LOA) 小组的 Johan Piest 说。“显然我们不能使用一个像 Fedora 或 Ubuntu 这些体量较大的发行版,所以我们用的是 Damn Small Linux。” + +在尝试了几周之后,LOA 小组的的同学们弄出了一个适合土豆的 Linux 内核,这玩艺儿上面可以用 vi 来编辑小的文本文件。这个 Linux 通过一个小型的 U 盘加载到土豆上,并通过一组红黑线以二进制的方式向这个土豆发送命令。 + +LOA 小组是一个不断壮大的黑客组织的分支;这个组织致力于将 Linux 安装到所有物体上;他们先是将 Linux 装到Gameboy 和 iPod 等电子产品上,不过最近他们在挑战一些高难度的东西,譬如将Linux安装到灯泡和小狗身上! + +LOA 小组在与另一个黑客小组 Stuttering Monarchs 竞赛,看谁先拿到土豆这一分。“土豆是一种每个人都会接触到的蔬菜,它的用途就像 Linux 一样极其广泛。无论你是想煮捣烹炸还是别的都可以” Piest 说道,“你也许认为我们完成这个挑战是为了获得某些好处,而我们只是追求逼格而已。” + +LOA 是第一个将 Linux 安装到一匹设德兰矮种马上的小组,但这五年来竞争愈演愈烈,其它黑客小组的进度已经反超了他们。 + +“我们本来可以成为在饼干上面安装 Linux 的第一个小组,但是那群来自挪威的混蛋把我们击败了。” Piest 说。 + +第一个成功安装了 Linux 的蔬菜是一头卷心菜,它是由一个土耳其的一个黑客小组完成的。 + +(好啦——是不是已经目瞪口呆,事实上,这是一篇好几年前的恶搞文,你看出来了吗?哈哈哈哈) + +-------------------------------------------------------------------------------- + +via: http://www.bbspot.com/news/2008/12/linux-on-a-potato.html + +作者:[Brian Briggs](briggsb@bbspot.com) +译者:[StdioA](https://github.com/StdioA), [hittlle](https://github.com/hittlle) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20151109 How to Install GitLab on Ubuntu or Fedora or Debian.md b/published/20151109 How to Install GitLab on Ubuntu or Fedora or Debian.md similarity index 62% rename from translated/tech/20151109 How to Install GitLab on Ubuntu or Fedora or 
Debian.md rename to published/20151109 How to Install GitLab on Ubuntu or Fedora or Debian.md index 524fc1e2c1..7dbeb72f95 100644 --- a/translated/tech/20151109 How to Install GitLab on Ubuntu or Fedora or Debian.md +++ b/published/20151109 How to Install GitLab on Ubuntu or Fedora or Debian.md @@ -1,12 +1,13 @@ -如何在 Ubuntu / Fedora / Debian 中安装 GitLab +如何在 Ubuntu/Fedora/Debian 中安装 GitLab ================================================================================ -在 Git 问世之前,分布式版本控制从来都不是一件简单的事。Git 是一个免费、开源的软件,旨在轻松且快速地对从小规模到非常巨大的项目进行管理。Git 最开始由 Linus Torvalds 开发,他同时也是著名的 Linux 内核的创建者。在 git 和分布式版本控制系统领域中,[GitLab][1] 是一个极棒的新产品。它是一个基于 web 的 Git 仓库管理应用,包含代码审查、wiki、问题跟踪等诸多功能。使用 GitLab 可以很方便、快速地创建、审查、部署及托管代码。与 Github 类似,尽管它也提供在其官方的服务器托管免费的代码仓库,但它也可以运行在我们自己的服务器上。GitLab 有两个不同的版本:社区版(Community Edition)和企业版(Enterprise Edition)。社区本完全免费且开源,遵循 MIT 协议;而企业版则遵循一个专有的协议,包含一些社区版中没有的功能。下面介绍的是有关如何在我们自己的运行着 Ubuntu、Fedora 或 Debian 操作系统的机子上安装 GitLab 社区版的简单步骤。 + +在 Git 问世之前,分布式版本控制从来都不是一件简单的事。Git 是一个自由开源的软件,旨在轻松且快速地对从小规模到非常巨大的项目进行管理。Git 最开始由 Linus Torvalds 开发,他同时也是著名的 Linux 内核的创建者。在 git 和分布式版本控制系统领域中,[GitLab][1] 是一个极棒的新产品。它是一个基于 web 的 Git 仓库管理应用,包含代码审查、wiki、问题跟踪等诸多功能。使用 GitLab 可以很方便、快速地创建、审查、部署及托管代码。尽管它在其官方的服务器提供了与 Github 类似的免费托管的代码仓库,但它也可以运行在我们自己的服务器上。GitLab 有两个不同的版本:社区版(Community Edition)和企业版(Enterprise Edition)。社区版本完全免费且开源,遵循 MIT 协议;而企业版则遵循一个专有的协议,包含一些社区版中没有的功能。下面介绍的是有关如何在我们自己的运行着 Ubuntu、Fedora 或 Debian 操作系统的机器上安装 GitLab 社区版的简单步骤。 ### 1. 
安装先决条件 ### -首先,我们需要安装 GitLab 所依赖的软件包。我们将安装 `curl`,用以下载我们所需的文件;安装`openssh-server` ,以此来通过 ssh 协议登陆到我们的机子上;安装`ca-certificates`,用它来添加 CA 认证;以及 `postfix`,把它作为一个 MTA(Mail Transfer Agent,邮件传输代理)。 +首先,我们需要安装 GitLab 所依赖的软件包。我们将安装 `curl`,用以下载我们所需的文件;安装`openssh-server` ,以此来通过 ssh 协议登录到我们的机器上;安装`ca-certificates`,用它来添加 CA 认证;以及 `postfix`,把它作为一个 MTA(Mail Transfer Agent,邮件传输代理)。 -注: 若要安装 GitLab 社区版,我们需要一个至少包含 2 GB 内存和 2 核 CPU 的 linux 机子。 +注: 若要安装 GitLab 社区版,我们需要一个至少包含 2 GB 内存和 2 核 CPU 的 linux 机器。 #### 在 Ubuntu 14 .04/Debian 8.x 中 #### @@ -18,7 +19,7 @@ #### 在 Fedora 22 中 #### -在 Fedora 22 中,由于 `yum` 已经被弃用了,所以默认的包管理器是 `dnf`。为了安装上面那些需要的软件包,我们只需运行下面的 dnf 命令: +在 Fedora 22 中,由于 `yum` 已经被弃用了,默认的包管理器是 `dnf`。为了安装上面那些需要的软件包,我们只需运行下面的 dnf 命令: # dnf install curl openssh-server postfix @@ -26,11 +27,11 @@ ### 2. 打开并开启服务 ### -现在,我们将使用我们默认的 init 系统来打开 sshd 和 postfix 服务。并且我们将使得它们在每次系统启动时被自动开启。 +现在,我们将使用我们默认的初始化系统来打开 sshd 和 postfix 服务。并且我们将使得它们在每次系统启动时被自动开启。 #### 在 Ubuntu 14.04 中 #### -由于 SysVinit 在 Ubuntu 14.04 中作为 init 系统被安装,我们将使用 service 命令来开启 sshd 和 postfix 守护进程: +由于在 Ubuntu 14.04 中安装的是 SysVinit 初始化系统,我们将使用 service 命令来开启 sshd 和 postfix 守护进程: # service sshd start # service postfix start @@ -42,24 +43,24 @@ #### 在 Fedora 22/Debian 8.x 中 #### -鉴于 Fedora 22 和 Debi 8.x 已经用 Systemd 代替了 SysVinit 来作为默认的 init 系统,我们只需运行下面的命令来开启 sshd 和 postfix 服务: +鉴于 Fedora 22 和 Debian 8.x 已经用 Systemd 代替了 SysVinit 来作为默认的初始化系统,我们只需运行下面的命令来开启 sshd 和 postfix 服务: # systemctl start sshd postfix -现在,为了使得它们在每次开机启动时被自动地开启,我们需要运行下面的 systemctl 命令: +现在,为了使得它们在每次开机启动时可以自动运行,我们需要运行下面的 systemctl 命令: # systemctl enable sshd postfix - 从 /etc/systemd/system/multi-user.target.wants/sshd.service 建立软链接到 /usr/lib/systemd/system/sshd.service. - 从 /etc/systemd/system/multi-user.target.wants/postfix.service 建立软链接到 /usr/lib/systemd/system/postfix.service. + Created symlink from /etc/systemd/system/multi-user.target.wants/sshd.service to /usr/lib/systemd/system/sshd.service. 
+ Created symlink from /etc/systemd/system/multi-user.target.wants/postfix.service to /usr/lib/systemd/system/postfix.service. ### 3. 下载 GitLab ### -现在,我们将使用 curl 从官方的 GitLab 社区版仓库下载二进制安装文件。首先,为了得到所需文件的下载链接,我们需要浏览到该软件仓库的页面。为此,我们需要在运行着相应操作系统的 linux 机子上运行下面的命令。 +现在,我们将使用 curl 从官方的 GitLab 社区版仓库下载二进制安装文件。首先,为了得到所需文件的下载链接,我们需要浏览到该软件仓库的页面。为此,我们需要在运行着相应操作系统的 linux 机器上运行下面的命令。 #### 在 Ubuntu 14.04 中 #### -由于 Ubuntu 和 Debian 使用相同格式的 debian 文件,我们需要在 [https://packages.gitlab.com/gitlab/gitlab-ce?filter=debs][2] 下搜索所需版本的 GitLab,然后点击有着 ubuntu/trusty 标签的链接,这是因为我们运作着 Ubuntu 14.04。接着一个新的页面将会出现,我们将看到一个下载按钮,然后我们在它的上面右击,得到文件的链接,然后像下面这样使用 curl 来下载它。 +由于 Ubuntu 和 Debian 使用相同的 debian 格式的安装包,我们需要在 [https://packages.gitlab.com/gitlab/gitlab-ce?filter=debs][2] 下搜索所需版本的 GitLab,然后点击有着 ubuntu/trusty 标签的链接,即我们运行着的 Ubuntu 14.04。接着一个新的页面将会出现,我们将看到一个下载按钮,然后我们在它的上面右击,得到文件的链接,然后像下面这样使用 curl 来下载它。 # curl https://packages.gitlab.com/gitlab/gitlab-ce/packages/ubuntu/trusty/gitlab-ce_8.1.2-ce.0_amd64.deb @@ -67,7 +68,7 @@ #### 在 Debian 8.x 中 #### -与 Ubuntu 类似,我们需要在 [https://packages.gitlab.com/gitlab/gitlab-ce?filter=debs][3] 页面中搜索所需版本的 GitLab,然后点击带有 debian/jessie 标签的链接,这是因为我们运行的是 Debian 8.x。接着,一个新的页面将会出现,然后我们在下载按钮上右击,得到文件的下载链接。最后我们像下面这样使用 curl 来下载该文件。 +与 Ubuntu 类似,我们需要在 [https://packages.gitlab.com/gitlab/gitlab-ce?filter=debs][3] 页面中搜索所需版本的 GitLab,然后点击带有 debian/jessie 标签的链接,即我们运行着的 Debian 8.x。接着,一个新的页面将会出现,然后我们在下载按钮上右击,得到文件的下载链接。最后我们像下面这样使用 curl 来下载该文件。 # curl https://packages.gitlab.com/gitlab/gitlab-ce/packages/debian/jessie/gitlab-ce_8.1.2-ce.0_amd64.deb/download @@ -83,11 +84,11 @@ ### 4. 
安装 GitLab ### -在相应的软件源被添加到我们的 linux 机子上之后,现在我们将使用相应 linux 发行版本中的默认包管理器来安装 GitLab 社区版。 +在相应的软件源被添加到我们的 linux 机器上之后,现在我们将使用相应 linux 发行版本中的默认包管理器来安装 GitLab 社区版。 #### 在 Ubuntu 14.04/Debian 8.x 中 #### -要在运行着 Ubuntu 14.04 或 Debian 8.x linux 发行版本的机子上安装 GitLab 社区版,我们只需运行如下的命令: +要在运行着 Ubuntu 14.04 或 Debian 8.x linux 发行版本的机器上安装 GitLab 社区版,我们只需运行如下的命令: # dpkg -i gitlab-ce_8.1.2-ce.0_amd64.deb @@ -95,7 +96,7 @@ #### 在 Fedora 22 中 #### -我们只需执行下面的 dnf 命令来在我们的 Fedora 22 机子上安装 GitLab。 +我们只需执行下面的 dnf 命令来在我们的 Fedora 22 机器上安装 GitLab。 # dnf install gitlab-ce-8.1.2-ce.0.el7.x86_64.rpm @@ -103,7 +104,7 @@ ### 5. 配置和开启 GitLab ### -由于 GitLab 社区版已经成功地安装在我们的 linux 系统中了,接下来我们将要配置和开启它了。为此,我们需要运行下面的命令,这在 Ubuntu、Debian 和 Fedora 发行版本上都一样: +GitLab 社区版已经成功地安装在我们的 linux 系统中了,接下来我们将要配置和开启它了。为此,我们需要运行下面的命令,这在 Ubuntu、Debian 和 Fedora 发行版本上都一样: # gitlab-ctl reconfigure @@ -111,19 +112,19 @@ ### 6. 允许通过防火墙 ### -假如在我们的 linux 机子中已经启用了防火墙程序,为了使得 GitLab 社区版的 web 界面可以通过网络进行访问,我们需要允许 80 端口通过防火墙,这个端口是 GitLab 社区版的默认端口。为此,我们需要运行下面的命令。 +假如在我们的 linux 机器中已经启用了防火墙程序,为了使得 GitLab 社区版的 web 界面可以通过网络进行访问,我们需要允许 80 端口通过防火墙,这个端口是 GitLab 社区版的默认端口。为此,我们需要运行下面的命令。 -#### 在 Iptables 中 #### +#### 在 iptables 中 #### -Ubuntu 14.04 默认安装和使用 Iptables。所以,我们将运行下面的 iptables 命令来打开 80 端口: +Ubuntu 14.04 默认安装和使用的是 iptables。所以,我们将运行下面的 iptables 命令来打开 80 端口: # iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT # /etc/init.d/iptables save -#### 在 Firewalld 中 #### +#### 在 firewalld 中 #### -由于 Fedora 22 和 Debian 8.x 默认安装了 systemd,它包含了作为防火墙程序的 firewalld。为了使得 80 端口(http 服务) 能够通过 firewalld,我们需要执行下面的命令。 +由于 Fedora 22 和 Debian 8.x 默认安装了 systemd,它包含了作为防火墙程序的 firewalld。为了使得 80 端口(http 服务) 能够通过 firewalld,我们需要执行下面的命令。 # firewall-cmd --permanent --add-service=http @@ -139,13 +140,13 @@ Ubuntu 14.04 默认安装和使用 Iptables。所以,我们将运行下面的 ![Gitlab Login Screen](http://blog.linoxide.com/wp-content/uploads/2015/10/gitlab-login-screen.png) -现在,为了登陆进面板,我们需要点击登陆按钮,它将询问我们的用户名和密码。然后我们将输入默认的用户名和密码,即 **root** 和 **5iveL!fe** 。在登陆进控制面板后,我们将被强制要求为我们的 GitLab root 用户输入新的密码。 
+现在,为了登录进面板,我们需要点击登录按钮,它将询问我们的用户名和密码。然后我们将输入默认的用户名和密码,即 **root** 和 **5iveL!fe** 。在登录进控制面板后,我们将被强制要求为我们的 GitLab root 用户输入新的密码。 ![Setting New Password Gitlab](http://blog.linoxide.com/wp-content/uploads/2015/10/setting-new-password-gitlab.png) ### 8. 创建仓库 ### -在我们成功地更改密码并登陆到我们的控制面板之后,现在,我们将为我们的新项目创建一个新的仓库。为此,我们需要来到项目栏,然后点击 **新项目** 绿色按钮。 +在我们成功地更改密码并登录到我们的控制面板之后,现在,我们将为我们的新项目创建一个新的仓库。为此,我们需要来到项目栏,然后点击 **新项目** 绿色按钮。 ![Creating New Projects](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-new-projects.png) @@ -153,13 +154,15 @@ Ubuntu 14.04 默认安装和使用 Iptables。所以,我们将运行下面的 ![Creating New Project](http://blog.linoxide.com/wp-content/uploads/2015/10/configuring-git-project.png) -做完这些后,我们将能够使用任何包含基本 git 命令行的 Git 客户端来访问我们的 Git 仓库。我们可以看到在仓库中进行的任何活动,例如创建一个里程碑,管理 issue,合并请求,管理成员,便签,Wiki 等。 +做完这些后,我们将能够使用任何包含基本 git 命令行的 Git 客户端来访问我们的 Git 仓库。我们可以看到在仓库中进行的任何活动,例如创建一个里程碑,管理问题,合并请求,管理成员,便签,Wiki 等。 ![Gitlab Menu](http://blog.linoxide.com/wp-content/uploads/2015/10/gitlab-menu.png) ### 总结 ### -GitLab 是一个用来管理 git 仓库的很棒的开源 web 应用。它有着漂亮,响应式的带有诸多酷炫功能的界面。它还打包有许多酷炫功能,例如管理群组,分发密钥,连续集成,查看日志,广播消息,钩子,系统 OAuth 应用,模板等。(注:OAuth 是一个开放标准,允许用户让第三方应用访问该用户在某一网站上存储的私密的资源(如照片,视频,联系人列表),而无需将用户名和密码提供给第三方应用。--- 摘取自 [维基百科上的 OAuth 词条](https://zh.wikipedia.org/wiki/OAuth)) 它还可以和大量的工具进行交互如 Slack,Hipchat,LDAP,JIRA,Jenkins,很多类型的钩子和一个完整的 API。它至少需要 2 GB 的内存和 2 核 CPU 来流畅运行,支持多达 500 个用户,但它也可以被扩展到多个活动的服务器上。假如你有任何的问题,建议,回馈,请将它们写在下面的评论框中,以便我们可以提升或更新我们的内容。谢谢! +GitLab 是一个用来管理 git 仓库的很棒的开源 web 应用。它有着漂亮的带有诸多酷炫功能的响应式界面。它还打包有许多酷炫功能,例如管理群组,分发密钥,持续集成,查看日志,广播消息,钩子,系统 OAuth 应用,模板等。(注:OAuth 是一个开放标准,允许用户让第三方应用访问该用户在某一网站上存储的私密的资源(如照片,视频,联系人列表),而无需将用户名和密码提供给第三方应用。--- 摘取自 [维基百科上的 OAuth 词条](https://zh.wikipedia.org/wiki/OAuth)) 它还可以和大量的工具进行交互如 Slack,Hipchat,LDAP,JIRA,Jenkins,有很多类型的钩子和完整的 API。它至少需要 2 GB 的内存和 2 核 CPU 来流畅运行,支持多达 500 个用户,但它也可以被扩展到多个工作服务器上。 + +假如你有任何的问题,建议,回馈,请将它们写在下面的评论框中,以便我们可以提升或更新我们的内容。谢谢! 
-------------------------------------------------------------------------------- @@ -167,7 +170,7 @@ via: http://linoxide.com/linux-how-to/install-gitlab-on-ubuntu-fedora-debian/ 作者:[Arun Pyasi][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20151117 Linux 101--Get the most out of Systemd.md b/published/20151117 Linux 101--Get the most out of Systemd.md new file mode 100644 index 0000000000..b0df8fdfb9 --- /dev/null +++ b/published/20151117 Linux 101--Get the most out of Systemd.md @@ -0,0 +1,171 @@ +Linux 101:最有效地使用 Systemd +================================================================================ +干嘛要这么做? + +- 理解现代 Linux 发行版中的显著变化; +- 看看 Systemd 是如何取代 SysVinit 的; +- 搞定单元(unit)和新的 journal 日志。 + +吐槽邮件、人身攻击、死亡威胁——Lennart Poettering,Systemd 的作者,对收到这些东西早就习以为常了。这位 Red Hat 公司的员工之前在 Google+ 上怒斥 FOSS 社区([http://tinyurl.com/poorlennart][1])的本质,悲痛且失望地表示:“那真是个令人恶心的地方”。他着重指出 Linus Torvalds 在邮件列表上言辞极其刻薄的帖子,并谴责这位内核的领导者为在线讨论定下基调,并使得人身攻击及贬抑之辞成为常态。 + +但为何 Poettering 会遭受如此多的憎恨?为何就这么个搞搞开源软件的人要忍受这等愤怒?答案就在于他的软件的重要性。如今大多数发行版中,Systemd 是 Linux 内核发起的第一个程序,并且它还扮演多种角色。它会启动系统服务、处理用户登录,每隔特定的时间执行一些任务,还有很多很多。它在不断地成长,并逐渐成为 Linux 的某种“基础系统”——提供系统启动和发行版维护所需的所有工具。 + +如今,在以下几点上 Systemd 颇具争议:它逃避了一些已经确立的 Unix 传统,例如纯文本的日志文件;它被看成是个“大一统”的项目,试图接管一切;它还是我们这个操作系统的支柱的重要革新。然而大多数主流发行版已经接受了(或即将接受)它,因此它就活了下来。而且它确实是有好处的:更快地启动,更简单地管理那些有依赖的服务程序,提供强大且安全的日志系统等。 + +因此在这篇教程中,我们将探索 Systemd 的特性,并向您展示如何最有效地利用这些特性。即便您此刻并不是这款软件的粉丝,读完本文后您至少可以更加了解和适应它。 + +![](http://narf-archive.com/pix/bd0fb252416206158627fb0b1bff9b4779dca13f.gif) + +*这部没正经的动画片来自[http://tinyurl.com/m2e7mv8][2],它把 Systemd 塑造成一只狂暴的动物,吞噬它路过的一切。大多数批评者的言辞可不像这只公仔一样柔软。* + +### 启动及服务 ### + +大多数主流发行版要么已经采用 Systemd,要么即将在下个发布中采用(如 Debian 和 Ubuntu)。在本教程中,我们使用 Fedora 21(该发行版已经是 Systemd 的优秀实验场地)的一个预览版进行演示,但不论您用哪个发行版,要用到的命令和注意事项都应该是一样的。这是 Systemd 的一个加分点:它消除了不同发行版之间许多细微且琐碎的区别。 + +在终端中输入 `ps 
ax | grep systemd`,看到第一行,其中的数字 **1** 表示它的进程号是1,也就是说它是 Linux 内核发起的第一个程序。因此,内核一旦检测完硬件并组织好了内存,就会运行 `/usr/lib/systemd/systemd` 可执行程序,这个程序会按顺序依次发起其他程序。(在还没有 Systemd 的日子里,内核会去运行 `/sbin/init`,随后这个程序会在名为 SysVinit 的系统中运行其余的各种启动脚本。) + +Systemd 的核心是一个叫*单元* (unit)的概念,它是一些存有关于服务(service)(在运行在后台的程序)、设备、挂载点、和操作系统其他方面信息的配置文件。Systemd 的其中一个目标就是简化这些事物之间的相互作用,因此如果你有程序需要在某个挂载点被创建或某个设备被接入后开始运行,Systemd 可以让这一切正常运作起来变得相当容易。(在没有 Systemd 的日子里,要使用脚本来把这些事情调配好,那可是相当丑陋的。)要列出您 Linux 系统上的所有单元,输入以下命令: + + systemctl list-unit-files + +现在,`systemctl` 是与 Systemd 交互的主要工具,它有不少选项。在单元列表中,您会注意到这儿有一些格式化:被使能(enabled)的单元显示为绿色,被禁用(disabled)的显示为红色。标记为“static”的单元不能直接启用,它们是其他单元所依赖的对象。若要限制输出列表只包含服务,使用以下命令: + + systemctl list-unit-files --type=service + +注意,一个单元显示为“enabled”,并不等于对应的服务正在运行,而只能说明它可以被开启。要获得某个特定服务的信息,以 GDM (Gnome Display Manager) 为例,输入以下命令: + + systemctl status gdm.service + +这条命令提供了许多有用的信息:一段给人看的服务描述、单元配置文件的位置、启动的时间、进程号,以及它所从属的 CGroups(用以限制各组进程的资源开销)。 + +如果您去查看位于 `/usr/lib/systemd/system/gdm.service` 的单元配置文件,您可以看到各种选项,包括要被运行的二进制文件(“ExecStart”那一行),相冲突的其他单元(即不能同时进入运行的单元),以及需要在本单元执行前进入运行的单元(“After”那一行)。一些单元有附加的依赖选项,例如“Requires”(必要的依赖)和“Wants”(可选的依赖)。 + +此处另一个有趣的选项是: + + Alias=display-manager.service + +当您启动 **gdm.service** 后,您将可以通过 `systemctl status display-manager.service` 来查看它的状态。当您知道有*显示管理程序* (display manager)在运行并想对它做点什么,但您不关心那究竟是 GDM,KDM,XDM 还是什么别的显示管理程序时,这个选项会非常有用。 + +![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/status-large.jpg) + +*使用 systemctl status 命令后面跟一个单元名,来查看对应的服务有什么情况。* + +### “目标(target)”锁定 ### + +如果您在 `/usr/lib/systemd/system` 目录中输入 `ls` 命令,您将看到各种以 `.target` 结尾的文件。*启动目标* (target)是一种将多个单元聚合在一起以致于将它们同时启动的方式。例如,对大多数类 Unix 操作系统而言有一种“多用户(multi-user)”状态,意思是系统已被成功启动,后台服务正在运行,并且已准备好让一个或多个用户登录并工作——至少在文本模式下。(其他状态包括用于进行管理工作的单用户(single-user)状态,以及用于机器关机的重启(reboot)状态。) + +如果您打开 **multi-user.target** 文件一探究竟,您可能期待看到的是一个要被启动的单元列表。但您会发现这个文件内部几乎空空如也——其实,一个服务会通过 **WantedBy** 选项让自己成为启动目标的依赖。因此如果您去打开 **avahi-daemon.service**, **NetworkManager.service** 及其他 **.service** 文件看看,您将在 Install 段看到这一行: 
+ + WantedBy=multi-user.target + +因此,切换到多用户启动目标会使能(enable)那些包含上述语句的单元。还有其他一些启动目标可用(例如 **emergency.target** 提供一个紧急情况使用的 shell,以及 **halt.target** 用于机器关机),您可以用以下方式轻松地在它们之间切换: + + systemctl isolate emergency.target + +在许多方面,这些都很像 SysVinit 中的*运行级* (runlevel),如文本模式的 **multi-user.target** 类似于第3运行级,**graphical.target** 类似于第5运行级,**reboot.target** 类似于第6运行级,诸如此类。 + +![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/unit-large.jpg) + +**与传统的脚本相比,单元配置文件也许看起来很陌生,但并不难以理解。** + +### 开启与停止 ### + +现在您也许陷入了沉思:我们已经看了这么多,但仍没看到如何停止和开启服务!这其实是有原因的。从外部看,Systemd 也许很复杂,像野兽一般难以驾驭。因此在您开始摆弄它之前,有必要从宏观的角度看看它是如何工作的。实际用来管理服务的命令非常简单: + + systemctl stop cups.service + systemctl start cups.service + +(若某个单元被禁用了,您可以先通过 `systemctl enable` 加上该单元名的方式将其使能。这种做法会为该单元创建一个符号链接,并将其放置在当前启动目标的 `.wants` 目录下,这些 `.wants` 目录在`/etc/systemd/system` 文件夹中。) + +还有两个有用的命令是 `systemctl restart` 和 `systemctl reload`,后面接单元名。后者用于让单元重新加载它的配置文件。Systemd 的绝大部分都有良好的文档,因此您可以查看手册 (`man systemctl`) 了解每条命令的细节。 + +### 定时器单元:取代 Cron ### + +除了系统初始化和服务管理,Systemd 还染指了其他方面。在很大程度上,它能够完成 **cron** 的工作,而且可以说是以更灵活的方式(并带有更易读的语法)。**cron** 是一个以规定时间间隔执行任务的程序——例如清除临时文件,刷新缓存等。 + +如果您再次进入 `/usr/lib/systemd/system` 目录,您会看到那儿有多个 `.timer` 文件。用 `less` 来查看这些文件,您会发现它们与 `.service` 和 `.target` 文件有着相似的结构,而区别在于 `[Timer]` 段。举个例子: + + [Timer] + OnBootSec=1h + OnUnitActiveSec=1w + +**OnBootSec** 选项告诉 Systemd 在系统启动一小时后启动这个单元。第二个选项的意思是:自那以后每周启动这个单元一次。关于定时器有大量选项您可以设置,输入 `man systemd.time` 查看完整列表。 + +Systemd 的时间精度默认为一分钟。也就是说,它会在设定时刻的一分钟内运行单元,但不一定精确到那一秒。这么做是基于电源管理方面的原因,但如果您需要一个没有任何延时且精确到毫秒的定时器,您可以添加以下一行: + + AccuracySec=1us + +另外, **WakeSystem** 选项(可以被设置为 true 或 false)决定了定时器是否可以唤醒处于休眠状态的机器。 + +![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/systemd_gui-large.jpg) + +*有一个 Systemd 的图形界面程序,即便它已有多年未被积极维护。* + +### 日志文件:向 journald 问声好 ### + +Systemd 的第二个主要部分是 journal 。这是个日志系统,类似于 syslog 但也有些显著区别。如果您是个 Unix 日志管理模式的粉丝,准备好出离愤怒吧:这是个二进制日志,因此您不能使用常规的命令行文本处理工具来解析它。这个设计决定不出意料地在网上引起了激烈的争论,但它的确有些优点。例如,日志可以被更系统地组织,带有更多的元数据,因此可以更容易地根据可执行文件名和进程号等过滤出信息。 + +要查看整个 
journal,输入以下命令: + + journalctl + +像许多其他的 Systemd 命令一样,该命令将输出通过管道的方式引向 `less` 程序,因此您可以使用空格键向下滚动,键入`/`(斜杠)查找,以及其他熟悉的快捷键。您也能在此看到少许颜色,像红色的警告及错误信息。 + +以上命令会输出很多信息。为了限制其只输出本次启动的消息,使用如下命令: + + journalctl -b + +这就是 Systemd 大放异彩的地方!您想查看自上次启动以来的全部消息吗?试试 **journalctl -b -1** 吧。再上一次的?用 **-2** 替换 **-1** 吧。那自某个具体时间,例如2014年10月24日16:38以来的呢? + + journalctl -b --since=”2014-10-24 16:38” + +即便您对二进制日志感到遗憾,那依然是个有用的特性,并且对许多系统管理员来说,构建类似的过滤器比起写正则表达式而言容易多了。 + +我们已经可以根据特定的时间来准确查找日志了,那可以根据特定程序吗?对单元而言,试试这个: + + journalctl -u gdm.service + +(注意:这是个查看 X server 产生的日志的好办法。)那根据特定的进程号? + + journalctl _PID=890 + +您甚至可以请求只看某个可执行文件产生的消息: + + journalctl /usr/bin/pulseaudio + +若您想将输出的消息限制在某个优先级,可以使用 **-p** 选项。该选项参数为 0 的话只会显示紧急消息(也就是说,是时候向 **$DEITY** 祈求保佑了)(LCTT 译注: $DEITY 是一个计算机方面的幽默,DEITY 是指广义上的“神”,$前缀表示这是一个变量),为 7 的话会显示所有消息,包括调试消息。请查看手册 (`man journalctl`) 获取更多关于优先级的信息。 + +值得指出的是,您也可以将多个选项结合在一起,若想查看在当前启动中由 GDM 服务输出的优先级数小于等于 3 的消息,请使用下述命令: + + journalctl -u gdm.service -p 3 -b + +最后,如果您仅仅想打开一个随 journal 持续更新的终端窗口,就像在没有 Systemd 时使用 `tail` 命令实现的那样,输入 `journalctl -f` 就好了。 + +![](http://www.linuxvoice.com/wp-content/uploads/2015/10/journal-large.jpg) + +*二进制日志并不流行,但 journal 的确有它的优点,如非常方便的信息查找及过滤。* + +### 没有 Systemd 的生活?### + +如果您就是完全不能接受 Systemd,您仍然有一些主流发行版中的选择。尤其是 Slackware,作为历史最为悠久的发行版,目前还没有做出改变,但它的主要开发者并没有将其从未来规划中移除。一些不出名的发行版也在坚持使用 SysVinit 。 + +但这又将持续多久呢?Gnome 正越来越依赖于 Systemd,其他的主流桌面环境也会步其后尘。这也是引起 BSD 社区一阵恐慌的原因:Systemd 与 Linux 内核紧密相连,导致在某种程度上,桌面环境正变得越来越不可移植。一种折衷的解决方案也许会以 Uselessd ([http://uselessd.darknedgy.net][3]) 的形式到来:一种裁剪版的 Systemd,纯粹专注于启动和监控进程,而不消耗整个基础系统。 + +![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/gentoo-large.jpg) + +若您不喜欢 Systemd,可以尝试一下 Gentoo 发行版,它将 Systemd 作为初始化工具的一种选择,但并不强制用户使用 Systemd。 + +-------------------------------------------------------------------------------- + +via: http://www.linuxvoice.com/linux-101-get-the-most-out-of-systemd/ + +作者:[Mike Saunders][a] +译者:[Ricky-Gong](https://github.com/Ricky-Gong) +校对:[wxy](https://github.com/wxy) + +本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxvoice.com/author/mike/ +[1]:http://tinyurl.com/poorlennart +[2]:http://tinyurl.com/m2e7mv8 +[3]:http://uselessd.darknedgy.net/ diff --git a/published/20151123 Assign Multiple IP Addresses To One Interface On Ubuntu 15.10.md b/published/20151123 Assign Multiple IP Addresses To One Interface On Ubuntu 15.10.md new file mode 100644 index 0000000000..9c1465d255 --- /dev/null +++ b/published/20151123 Assign Multiple IP Addresses To One Interface On Ubuntu 15.10.md @@ -0,0 +1,236 @@ +在 Ubuntu 15.10 上为单个网卡设置多个 IP 地址 +================================================================================ + +有时候你可能想在你的网卡上使用多个 IP 地址。遇到这种情况你会怎么办呢?买一个新的网卡并分配一个新的 IP?不,没有这个必要(至少在小型网络中)。现在我们可以在 Ubuntu 系统中为一个网卡分配多个 IP 地址。想知道怎么做到的?跟着我往下看,其实并不难。 + +这个方法也适用于 Debian 以及它的衍生版本。 + +### 临时添加 IP 地址 ### + +首先,让我们找到网卡的 IP 地址。在我的 Ubuntu 15.10 服务器版中,我只使用了一个网卡。 + +运行下面的命令找到 IP 地址: + + sudo ip addr + +**样例输出:** + + 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever + 2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 + link/ether 08:00:27:2a:03:4b brd ff:ff:ff:ff:ff:ff + inet 192.168.1.103/24 brd 192.168.1.255 scope global enp0s3 + valid_lft forever preferred_lft forever + inet6 fe80::a00:27ff:fe2a:34e/64 scope link + valid_lft forever preferred_lft forever + +或 + + sudo ifconfig + +**样例输出:** + + enp0s3 Link encap:Ethernet HWaddr 08:00:27:2a:03:4b + inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0 + inet6 addr: fe80::a00:27ff:fe2a:34e/64 Scope:Link + UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 + RX packets:186 errors:0 dropped:0 overruns:0 frame:0 + TX packets:70 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:21872 
(21.8 KB) TX bytes:9666 (9.6 KB) + lo Link encap:Local Loopback + inet addr:127.0.0.1 Mask:255.0.0.0 + inet6 addr: ::1/128 Scope:Host + UP LOOPBACK RUNNING MTU:65536 Metric:1 + RX packets:217 errors:0 dropped:0 overruns:0 frame:0 + TX packets:217 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:0 + RX bytes:38793 (38.7 KB) TX bytes:38793 (38.7 KB) + +正如你在上面输出中看到的,我的网卡名称是 **enp0s3**,它的 IP 地址是 **192.168.1.103**。 + +现在让我们来为网卡添加一个新的 IP 地址,例如说 **192.168.1.104**。 + +打开你的终端并运行下面的命令添加额外的 IP。 + + sudo ip addr add 192.168.1.104/24 dev enp0s3 + +用命令检查是否启用了新的 IP: + + sudo ip address show enp0s3 + +**样例输出:** + + 2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 + link/ether 08:00:27:2a:03:4e brd ff:ff:ff:ff:ff:ff + inet 192.168.1.103/24 brd 192.168.1.255 scope global enp0s3 + valid_lft forever preferred_lft forever + inet 192.168.1.104/24 scope global secondary enp0s3 + valid_lft forever preferred_lft forever + inet6 fe80::a00:27ff:fe2a:34e/64 scope link + valid_lft forever preferred_lft forever + +类似地,你可以添加任意数量的 IP 地址,只要你想要。 + +让我们 ping 一下这个 IP 地址验证一下。 + + sudo ping 192.168.1.104 + +**样例输出** + + PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data. + 64 bytes from 192.168.1.104: icmp_seq=1 ttl=64 time=0.901 ms + 64 bytes from 192.168.1.104: icmp_seq=2 ttl=64 time=0.571 ms + 64 bytes from 192.168.1.104: icmp_seq=3 ttl=64 time=0.521 ms + 64 bytes from 192.168.1.104: icmp_seq=4 ttl=64 time=0.524 ms + +好极了,它能工作! + +要删除 IP,只需要运行: + + sudo ip addr del 192.168.1.104/24 dev enp0s3 + +再检查一下是否删除了 IP。 + + sudo ip address show enp0s3 + +**样例输出:** + + 2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 + link/ether 08:00:27:2a:03:4e brd ff:ff:ff:ff:ff:ff + inet 192.168.1.103/24 brd 192.168.1.255 scope global enp0s3 + valid_lft forever preferred_lft forever + inet6 fe80::a00:27ff:fe2a:34e/64 scope link + valid_lft forever preferred_lft forever + +可以看到已经没有了!! 
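正文提到,按同样的方式可以添加任意数量的临时 IP。下面是一个批量生成 `ip addr add` 命令的 shell 脚本草稿(其中网卡名 enp0s3 和地址段 192.168.1.110-112 均为示例假设,请按实际环境修改)。脚本只用 echo 打印出将要执行的命令,确认无误后去掉 echo 并以 root 权限运行即可:

```shell
#!/bin/sh
# 草稿:为同一块网卡批量生成添加临时 IP 的命令
# 注意:网卡名与地址段均为示例假设,请按实际环境修改
IFACE=enp0s3
for LAST in 110 111 112; do
    # 先用 echo 打印命令;确认无误后去掉 echo 并以 root 权限执行
    echo "ip addr add 192.168.1.${LAST}/24 dev ${IFACE}"
done
```

同样地,把 add 换成 del 即可批量删除这些临时地址。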
+ +正如你所知,重启系统后这些设置会失效。那么怎么设置才能永久有效呢?这也很简单。 + +### 添加永久 IP 地址 ### + +Ubuntu 系统的网卡配置文件是 **/etc/network/interfaces**。 + +让我们来看看上面文件的具体内容。 + + sudo cat /etc/network/interfaces + +**输出样例:** + + # This file describes the network interfaces available on your system + # and how to activate them. For more information, see interfaces(5). + source /etc/network/interfaces.d/* + # The loopback network interface + auto lo + iface lo inet loopback + # The primary network interface + auto enp0s3 + iface enp0s3 inet dhcp + +正如你在上面输出中看到的,网卡启用了 DHCP。 + +现在,让我们来分配一个额外的地址,例如 **192.168.1.104/24**。 + +编辑 **/etc/network/interfaces**: + + sudo nano /etc/network/interfaces + +如下添加额外的 IP 地址。 + + # This file describes the network interfaces available on your system + # and how to activate them. For more information, see interfaces(5). + source /etc/network/interfaces.d/* + # The loopback network interface + auto lo + iface lo inet loopback + # The primary network interface + auto enp0s3 + iface enp0s3 inet dhcp + iface enp0s3 inet static + address 192.168.1.104/24 + +保存并关闭文件。 + +运行下面的命令使更改无需重启即生效。 + + sudo ifdown enp0s3 && sudo ifup enp0s3 + +**样例输出:** + + Killed old client process + Internet Systems Consortium DHCP Client 4.3.1 + Copyright 2004-2014 Internet Systems Consortium. + All rights reserved. + For info, please visit https://www.isc.org/software/dhcp/ + Listening on LPF/enp0s3/08:00:27:2a:03:4e + Sending on LPF/enp0s3/08:00:27:2a:03:4e + Sending on Socket/fallback + DHCPRELEASE on enp0s3 to 192.168.1.1 port 67 (xid=0x225f35) + Internet Systems Consortium DHCP Client 4.3.1 + Copyright 2004-2014 Internet Systems Consortium. + All rights reserved. 
+ For info, please visit https://www.isc.org/software/dhcp/ + Listening on LPF/enp0s3/08:00:27:2a:03:4e + Sending on LPF/enp0s3/08:00:27:2a:03:4e + Sending on Socket/fallback + DHCPDISCOVER on enp0s3 to 255.255.255.255 port 67 interval 3 (xid=0xdfb94764) + DHCPREQUEST of 192.168.1.103 on enp0s3 to 255.255.255.255 port 67 (xid=0x6447b9df) + DHCPOFFER of 192.168.1.103 from 192.168.1.1 + DHCPACK of 192.168.1.103 from 192.168.1.1 + bound to 192.168.1.103 -- renewal in 35146 seconds. + +**注意**:如果你从远程连接到服务器,把上面的两个命令放到**一行**中**非常重要**,因为第一个命令会断掉你的连接。而采用这种方式可以保留你的 ssh 会话。 + +现在,让我们用下面的命令来检查一下是否添加了新的 IP: + + sudo ip address show enp0s3 + +**输出样例:** + + 2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 + link/ether 08:00:27:2a:03:4e brd ff:ff:ff:ff:ff:ff + inet 192.168.1.103/24 brd 192.168.1.255 scope global enp0s3 + valid_lft forever preferred_lft forever + inet 192.168.1.104/24 brd 192.168.1.255 scope global secondary enp0s3 + valid_lft forever preferred_lft forever + inet6 fe80::a00:27ff:fe2a:34e/64 scope link + valid_lft forever preferred_lft forever + +很好!我们已经添加了额外的 IP。 + +再次 ping IP 地址进行验证。 + + sudo ping 192.168.1.104 + +**样例输出:** + + PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data. + 64 bytes from 192.168.1.104: icmp_seq=1 ttl=64 time=0.137 ms + 64 bytes from 192.168.1.104: icmp_seq=2 ttl=64 time=0.050 ms + 64 bytes from 192.168.1.104: icmp_seq=3 ttl=64 time=0.054 ms + 64 bytes from 192.168.1.104: icmp_seq=4 ttl=64 time=0.067 ms + +好极了!它能正常工作。就是这样。 + +想知道怎么给 CentOS/RHEL/Scientific Linux/Fedora 系统添加额外的 IP 地址,可以点击下面的链接。 + +- [在CentOS 7上给一个网卡分配多个IP地址][1] + +工作愉快! 
+ +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/assign-multiple-ip-addresses-to-one-interface-on-ubuntu-15-10/ + +作者:[SK][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/sk/ +[1]:https://linux.cn/article-5127-1.html diff --git a/published/20151123 How to Configure Apache Solr on Ubuntu 14 or 15.md b/published/20151123 How to Configure Apache Solr on Ubuntu 14 or 15.md new file mode 100644 index 0000000000..2f934be732 --- /dev/null +++ b/published/20151123 How to Configure Apache Solr on Ubuntu 14 or 15.md @@ -0,0 +1,131 @@ +如何在 Ubuntu 14/15 上配置 Apache Solr +================================================================================ + +大家好,欢迎来阅读我们今天这篇 Apache Solr 的文章。简单的来说,Apache Solr 是一个最负盛名的开源搜索平台,配合运行在网站后端的 Apache Lucene,能够让你轻松创建搜索引擎来搜索网站、数据库和文件。它能够索引和搜索多个网站并根据搜索文本的相关内容返回搜索建议。 + +Solr 使用 HTTP 可扩展标记语言(XML),可以为 JSON、Python 和 Ruby 等提供应用程序接口(API)。根据Apache Lucene 项目所述,Solr 提供了非常多的功能,很受管理员们的欢迎: + +- 全文检索 +- 分面导航(Faceted Navigation) +- 拼写建议/自动完成 +- 自定义文档排序/排列 + +#### 前提条件: #### + +在一个使用最小化安装包的全新 Ubuntu 14/15 系统上,你仅仅需要少量的准备,就开始安装 Apache Solor. + +### 1)System Update 系统更新### + +使用一个具有 sudo 权限的非 root 用户登录你的 Ubuntu 服务器,在接下来的所有安装和使用 Solr 的步骤中都会使用它。 + +登录成功后,使用下面的命令,升级你的系统到最新的更新及补丁: + + $ sudo apt-get update + +### 2) 安装 JRE### + +要安装 Solr,首先需要安装 JRE(Java Runtime Environment)作为基础环境,因为 solr 和 tomcat 都是基于Java.所以,我们需要安装最新版的 Java 并配置 Java 本地环境. 
+
+要想安装最新版的 Java 8,我们需要通过以下命令安装 Python Software Properties 工具包:
+
+    $ sudo apt-get install python-software-properties
+
+完成后,配置最新版 Java 8 的仓库:
+
+    $ sudo add-apt-repository ppa:webupd8team/java
+
+现在你可以通过以下命令更新包源列表,使用 'apt-get' 来安装最新版本的 Oracle Java 8。
+
+    $ sudo apt-get update
+
+    $ sudo apt-get install oracle-java8-installer
+
+在安装和配置过程中,点击 'OK' 按钮接受 Java SE Platform 和 JavaFX 的 Oracle 二进制代码许可协议(Oracle Binary Code License Agreement)。
+
+在安装完成后,运行下面的命令,检查是否安装成功以及查看安装的版本。
+
+    kash@solr:~$ java -version
+    java version "1.8.0_66"
+    Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
+    Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
+
+执行结果表明我们已经成功安装了 Java,达到了安装 Solr 的最基本要求,接着我们进行下一步。
+
+### 3) 安装 Solr ###
+
+有两种不同的方式可以在 Ubuntu 上安装 Solr,在本文中我们只用最新的源码包来演示源码安装。
+
+要使用源码安装 Solr,先要从[官网][1]下载最新的可用安装包。复制以下链接,然后使用 'wget' 命令来下载:
+
+    $ wget http://www.us.apache.org/dist/lucene/solr/5.3.1/solr-5.3.1.tgz
+
+运行下面的命令,从归档包中解压出安装脚本:
+
+    $ tar -xzf solr-5.3.1.tgz solr-5.3.1/bin/install_solr_service.sh --strip-components=2
+
+运行该脚本来启动 Solr 服务,这将会先创建一个 solr 用户,然后将 Solr 安装成服务:
+
+    $ sudo bash ./install_solr_service.sh solr-5.3.1.tgz
+
+![Solr 安装](http://blog.linoxide.com/wp-content/uploads/2015/11/12.png)
+
+使用下面的命令来检查 Solr 服务的状态:
+
+    $ service solr status
+
+![Solr 状态](http://blog.linoxide.com/wp-content/uploads/2015/11/22.png)
+
+### 创建 Solr 集合 ###
+
+我们现在可以使用 solr 用户添加多个集合。就像下图所示的那样,我们只需要在命令行中指定集合名称及其配置集就可以创建多个集合了:
+
+    $ sudo su - solr -c "/opt/solr/bin/solr create -c myfirstcollection -n data_driven_schema_configs"
+
+![创建集合](http://blog.linoxide.com/wp-content/uploads/2015/11/32.png)
+
+我们已经成功地为我们的第一个集合创建了新核心实例目录,并可以将数据添加到里面。要查看库中的默认模式文件,可以在这里找到:'/opt/solr/server/solr/configsets/data_driven_schema_configs/conf'。
+
+### 使用 Solr Web ###
+
+可以使用默认的端口 8983 连接 Apache Solr。打开浏览器,输入 http://your\_server\_ip:8983/solr 或者 http://your-domain.com:8983/solr。确保你的防火墙允许 8983 端口。
+ + http://172.25.10.171:8983/solr/ + +![Web访问Solr](http://blog.linoxide.com/wp-content/uploads/2015/11/42.png) + +在 Solr 的 Web 控制台左侧菜单点击 'Core Admin' 按钮,你将会看见我们之前使用命令行方式创建的集合。你可以点击 'Add Core' 按钮来创建新的核心。 + +![添加核心](http://blog.linoxide.com/wp-content/uploads/2015/11/52.png) + +就像下图中所示,你可以选择某个集合并指向文档来向里面添加内容或从文档中查询数据。如下显示的那样添加指定格式的数据。 + + { + "number": 1, + "Name": "George Washington", + "birth_year": 1989, + "Starting_Job": 2002, + "End_Job": "2009-04-30", + "Qualification": "Graduation", + "skills": "Linux and Virtualization" + } + +添加文件后点击 'Submit Document'按钮. + +![添加文档](http://blog.linoxide.com/wp-content/uploads/2015/11/62.png) + +### 总结### + +在 Ubuntu 上安装成功后,你就可以使用 Solr Web 接口插入或查询数据。如果你想通过 Solr 来管理更多的数据和文件,可以创建更多的集合。希望你能喜欢这篇文章并且希望它能够帮到你。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/ubuntu-how-to/configure-apache-solr-ubuntu-14-15/ + +作者:[Kashif][a] +译者:[taichirain](https://github.com/taichirain) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ +[1]:http://lucene.apache.org/solr/ diff --git a/published/20151125 Running a mainline kernel on a cellphone.md b/published/20151125 Running a mainline kernel on a cellphone.md new file mode 100644 index 0000000000..0de1daf0c4 --- /dev/null +++ b/published/20151125 Running a mainline kernel on a cellphone.md @@ -0,0 +1,45 @@ +为什么主线内核不能运行在我的手机上? 
+================== + +对于自由软件来说,其最大的自由之一就是能够用一个更新或修改的版本来替换原始版本的程序。尽管如此,数千万使用那些手机里面装着所谓 Linux 的用户却很少能够在他们的手机上运行主线内核(mainline kernel),即使他们拥有替换内核代码的专业技能。可悲的是,我们必须承认目前仍然没有可以运行主线内核的主流手机。在由 Rob Herring 主持的2015届内核峰会(Kernel Summit)上,与会人员共同探讨了这个问题,并进一步谈论了他们应该怎么做才能解决这个问题。 + +当主持人提问的时候,在座的大多数开发人员都表示他们更乐意在他们的手机上面运行主线内核,然而也有少数人持相反的看法。在 Project Ara 的支持下,Rob 在这个问题上已经研究了近一年半的时间(参见:https://lwn.net/Articles/648400/ )。但是最新的研究成果并不理想。 + +Rob 表示,通常手机上运行了太多的过期(out-of-tree)代码;主线内核只是缺少能使手机正常运行所必须的驱动。每台常规的手机都在运行着100万行到300万行的过期(out-of-tree)代码。几乎所有的这些手机的内核版本都不超过3.10,有一些甚至更加古老。造成这种情况的原因有很多,但是有一点是很清楚的,在手机的世界里,一切都变化的太快以至于无法跟上内核社区的步伐。如果真是那样,他问到,我们还担心什么呢? + +Tim Bird 指出,第一台 Android 手机 Nexus 1 从来没有运行过任何一个主线内核,并且以后也不会。它打破了开源的承诺,也使得用户不可能做到将一个新的内核放到手机中。从这一点上来说,没有任何一款手机支持这种能力。Peter Zijlstra 想知道从一台手机到另一台手机到底复制了多少能够工作的过期代码;Rob表示,迄今为止,他已经见到了三个独立开发的热插拔 [Governors][1]。 + +Dirk Hohndel 提出了很少有人注意到的建议。他说,对于世界上的数以亿计的手机,大约只有他们27个人关心他们的手机是否运行着主线内核。剩下的用户仅仅只是想让他们的手机正常工作。或许那些关注手机是否在运行主线内核的开发者正在努力去解决这个令人不解的问题。 + +Chris Mason 说,那些手机厂商当前正面临着相同类型的问题,而这些问题也是那些 Linux 发行版过去所面临过的问题。他们疲于应付大量的无效且重复和能被复用的工作。一旦这些发行版决定将他们的工作配合主线内核而不是使用自己维护的内核,那么问题将会变得好解决的多。解决问题的关键就是去帮助手机制造商们认识到他们可以通过同样的方式获得便利,形成这种认识的关键并不是通过来自用户的压力。这样一来,问题就可以解决了。 + +Grant Likely 提出了对于安全问题的担忧,这种担忧来自于那些不能升级他们的手机系统的 android 设备。他说,我们需要的是一个真正专为手机设立的发行版。但是,只要手机厂商仍然掌控着手机中的应用软件,那么手机的同步更新将无法实现。我们接下来将面临一个很大的安全难题。Peter 补充说,随着 [Stagefright 漏洞][2]的出现,难题已经出现在我们面前了。 + +Ted Ts'o 说,运行主线内核并不是他的主要关注点。他很乐于见到这个假期中所售卖的手机能够运行3.18或者4.1的内核,而不是继续停留在3.10。他认为这是一个更可能被解决的问题。Steve Rostedt 认为,按照 Ted Ts'o 所说的那样去做并不能解决手机的安全问题,但是,Ted 认为使用一个更新一些的内核至少可以让漏洞修复变得更加容易。Grant 对此回应说,接下来的一年里,这一切都将再次发生。过渡到更新的内核也是一个渐进式的对系统的完善。Kees Cook 补充说,我们无法从修复旧版本的内核漏洞的过程中得到太多的益处,真正的问题是我们没有对 bug 的应对措施(他会在今天的另外一个对话中讲到这个话题)。 + +Rob 说,任何一种解决方案都需要得到当前市场上的手机供应商的支持。否则,由于厂商对安装到他们生产的手机上的操作系统的封锁,运行主线内核的策略将会陷入麻烦。Paolo Bonzini 提问说是否可以因为那些没有修复的安全漏洞而控告手机厂商,尤其当手机仍然处于保修期内。Grant 认为对于手机的可更新能力(upgradeability)的保证必须来源于市场需求,否则是无法实现的。而促使它实现的原因可能会是一个严重的安全问题,然后用户开始对手机的可更新能力提出要求。同时,内核开发人员必须不断朝着这个方向努力。Rob 
表示,除了到目前为止指出的所有优点之外,运行主线内核也能帮助开发者对安卓设备上的新特性进行测试和验证。 + +Josh Triplett 提问说,如果手机厂商提出对主线内核提供支持的想法,那么内核社区又将采取什么措施呢?那样将会针对手机各方面的特性要求对内核进行大量的测试和验证;[Android 的兼容性测试套件][3]中出现的失败将不得不被再次回归到内核。Rob 提议这个问题可以在明年讨论,即先将最基本的功能做好。但是,Josh 强调说,如果这个需求出现了,我们就应该能够给出一个好的答案。 + +Tim 认为,当前,我们和厂商之间存在很大的脱节。厂商根本不会主动报告或者贡献任何反馈给社区。他们之间完全脱节了,这样的话永远不会有进步。Josh 表示,当厂商们开始报告他们正在使用的旧内核的相关 bug 时,双方之间的接受度将变得更加友好。Arnd Bergmann 认为,我们需要的是得到一个大芯片厂商对使用主线内核的认可,并且将该厂商的硬件提升到能够支持主线内核的运行的这样一个水平,而这样将会在其他方面增加负担。但是,他补充说,实现这个目标要求存在一个跟随硬件一起分发的自由 GPU 驱动程序——然而这种程序当前并不存在。 + +Rob 给存在问题的领域列了一个清单,但是现在已经没有太多的时间去讨论其中的细节了。WiFi 驱动仍然是一个问题,尤其是当这个新特性被添加到 Android 设备上的时候。Johannes Berg 对新特性仍然存在问题表示赞同;Android 的开发人员甚至在这些新特性被应用到 Android 设备上之前都不会去谈论它们是否存在问题。然而,对这些特性中的大多数的技术支持最终都会落实在主线内核中。 + +随着会议逐渐接近尾声,Ben Herrenschmidt 再次重申:实现在 Android 手机上运行主线内核的关键还是在于让厂商认识到使用主线内核是它们获得最大利润的最好选择。从长远看,使用主线内核能节省大量的工作。Mark Brown 认为,以前,当搭载在 Android 设备上的内核版本以更稳定的方式向前推进的时候,上游工作的好处对运营商来说更加明显。以现在的情况来看,手机上的内核版本似乎停留在了3.10,那种压力是不一样的。 + +这次谈话以开发者决定进一步改善当前的状况而结束,但是却并没有对如何改善提出一个明确的计划。 + +--------------------------------------------------------------------------------- + +via: https://lwn.net/Articles/662147/ + +作者:[Jonathan Corbet][a] +译者:[kylepeng93](https://github.com/kylepeng93) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:https://lwn.net/Articles/KernelSummit2015/ +[1]:http://androidmodguide.blogspot.com/p/blog-page.html +[2]:https://lwn.net/Articles/652728/ +[3]:https://source.android.com/compatibility/cts/index.html diff --git a/published/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md b/published/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md new file mode 100644 index 0000000000..bf97fc16d7 --- /dev/null +++ b/published/20151126 How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2.md @@ -0,0 +1,327 @@ +如何在 FreeBSD 10.2 上安装 Nginx 作为 Apache 的反向代理 
+================================================================================ + +Nginx 是一款自由开源的 HTTP 和反向代理服务器,也可以用作 POP3/IMAP 的邮件代理服务器。Nginx 是一款高性能的 web 服务器,其特点是功能丰富,结构简单以及内存占用低。 第一个版本由 Igor Sysoev 发布于2002年,到现在有很多大型科技公司在使用,包括 Netflix、 Github、 Cloudflare、 WordPress.com 等等。 + +在这篇教程里我们会“**在 freebsd 10.2 系统上,安装和配置 Nginx 网络服务器作为 Apache 的反向代理**”。 Apache 将在8080端口上运行 PHP ,而我们会配置 Nginx 运行在80端口以接收用户/访问者的请求。如果80端口接收到用户浏览器的网页请求,那么 Nginx 会将该请求传递给运行在8080端口上的 Apache 网络服务器和 PHP。 + +#### 前提条件 #### + +- FreeBSD 10.2 +- Root 权限 + +### 步骤 1 - 更新系统 ### + +使用 SSH 认证方式登录到你的 FreeBSD 服务器,使用下面命令来更新你的系统: + + freebsd-update fetch + freebsd-update install + +### 步骤 2 - 安装 Apache ### + +Apache 是开源的、使用范围最广的 web 服务器。在 FreeBSD 里默认没有安装 Apache, 但是我们可以直接通过 /usr/ports/www/apache24 下的 ports 或软件包来安装,也可以直接使用 pkg 命令从 FreeBSD 软件库中安装。在本教程中,我们将使用 pkg 命令从 FreeBSD 软件库中安装: + + pkg install apache24 + +### 步骤 3 - 安装 PHP ### + +一旦成功安装 Apache,接着将会安装 PHP ,它来负责处理用户对 PHP 文件的请求。我们将会用到如下的 pkg 命令来安装 PHP: + + pkg install php56 mod_php56 php56-mysql php56-mysqli + +### 步骤 4 - 配置 Apache 和 PHP ### + +一旦所有都安装好了,我们将会配置 Apache 运行在8080端口上, 并让 PHP 与 Apache 一同工作。 要想配置Apache,我们可以编辑“httpd.conf”这个配置文件, 对于 PHP 我们只需要复制 “/usr/local/etc/”目录下的 PHP 配置文件 php.ini。 + +进入到“/usr/local/etc/”目录,并且复制 php.ini-production 文件到 php.ini : + + cd /usr/local/etc/ + cp php.ini-production php.ini + +下一步,在 Apache 目录下通过编辑“httpd.conf”文件来配置 Apache: + + cd /usr/local/etc/apache24 + nano -c httpd.conf + +端口配置在第**52**行 : + + Listen 8080 + +服务器名称配置在第**219**行: + + ServerName 127.0.0.1:8080 + +在第**277**行,添加 DirectoryIndex 文件,Apache 将用它来服务对目录的请求: + + DirectoryIndex index.php index.html + +在第**287**行下,配置 Apache ,添加脚本支持: + + + SetHandler application/x-httpd-php + + + SetHandler application/x-httpd-php-source + + +保存并退出。 + +现在用 sysrc 命令,来添加 Apache 为开机启动项目: + + sysrc apache24_enable=yes + +然后用下面的命令测试 Apache 的配置: + + apachectl configtest + +如果到这里都没有问题的话,那么就启动 Apache 吧: + + service apache24 start + +如果全部完毕,在“/usr/local/www/apache24/data”目录下创建一个 phpinfo 文件来验证 PHP 在 
Apache 下顺利运行: + + cd /usr/local/www/apache24/data + echo "" > info.php + +现在就可以访问 freebsd 的服务器 IP : 192.168.1.123:8080/info.php 。 + +![Apache and PHP on Port 8080](http://blog.linoxide.com/wp-content/uploads/2015/11/Apache-and-PHP-on-Port-8080.png) + +Apache 及 PHP 运行在 8080 端口。 + +### 步骤 5 - 安装 Nginx ### + +Nginx 可以以较低内存占用提供高性能的 Web 服务器和反向代理服务器。在这个步骤里,我们将会使用 Nginx 作为Apache 的反向代理,因此让我们用 pkg 命令来安装它吧: + + pkg install nginx + +### 步骤 6 - 配置 Nginx ### + +一旦 Nginx 安装完毕,在“**nginx.conf**”文件里,我们需要做一个新的配置文件来替换掉原来的 nginx 配置文件。切换到“/usr/local/etc/nginx/”目录下,并且备份默认 nginx.conf 文件: + + cd /usr/local/etc/nginx/ + mv nginx.conf nginx.conf.oroginal + +现在就可以创建一个新的 nginx 配置文件了: + + nano -c nginx.conf + +然后粘贴下面的配置: + + user www; + worker_processes 1; + error_log /var/log/nginx/error.log; + + events { + worker_connections 1024; + } + + http { + include mime.types; + default_type application/octet-stream; + + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + access_log /var/log/nginx/access.log; + + sendfile on; + keepalive_timeout 65; + + # Nginx cache configuration + proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m; + proxy_temp_path /var/nginx/cache/tmp; + proxy_cache_key "$scheme$host$request_uri"; + + gzip on; + + server { + #listen 80; + server_name _; + + location /nginx_status { + + stub_status on; + access_log off; + } + + # redirect server error pages to the static page /50x.html + # + error_page 500 502 503 504 /50x.html; + location = /50x.html { + root /usr/local/www/nginx-dist; + } + + # proxy the PHP scripts to Apache listening on 127.0.0.1:8080 + # + location ~ \.php$ { + proxy_pass http://127.0.0.1:8080; + include /usr/local/etc/nginx/proxy.conf; + } + } + + include /usr/local/etc/nginx/vhost/*; + + } + +保存并退出。 + +下一步,在 nginx 目录下面,创建一个 **proxy.conf** 文件,使其作为反向代理 : + + cd /usr/local/etc/nginx/ + nano -c 
proxy.conf + +粘贴如下配置: + + proxy_buffering on; + proxy_redirect off; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + client_max_body_size 10m; + client_body_buffer_size 128k; + proxy_connect_timeout 90; + proxy_send_timeout 90; + proxy_read_timeout 90; + proxy_buffers 100 8k; + add_header X-Cache $upstream_cache_status; + +保存并退出。 + +最后一步,为 nginx 的高速缓存创建一个“/var/nginx/cache”的新目录: + + mkdir -p /var/nginx/cache + +### 步骤 7 - 配置 Nginx 的虚拟主机 ### + +在这个步骤里面,我们需要创建一个新的虚拟主机域“saitama.me”,其文档根目录为“/usr/local/www/saitama.me”,日志文件放在“/var/log/nginx”目录下。 + +我们必须做的第一件事情就是创建新的目录来存放虚拟主机配置文件,我们创建的新目录名为“**vhost**”。创建它: + + cd /usr/local/etc/nginx/ + mkdir vhost + +创建好 vhost 目录,然后我们就进入这个目录并创建一个新的虚拟主机文件。这里我取名为“**saitama.conf**”: + + cd vhost/ + nano -c saitama.conf + +粘贴如下虚拟主机的配置: + + server { + # Replace with your freebsd IP + listen 192.168.1.123:80; + + # Document Root + root /usr/local/www/saitama.me; + index index.php index.html index.htm; + + # Domain + server_name www.saitama.me saitama.me; + + # Error and Access log file + error_log /var/log/nginx/saitama-error.log; + access_log /var/log/nginx/saitama-access.log main; + + # Reverse Proxy Configuration + location ~ \.php$ { + proxy_pass http://127.0.0.1:8080; + include /usr/local/etc/nginx/proxy.conf; + + # Cache configuration + proxy_cache my-cache; + proxy_cache_valid 10s; + proxy_no_cache $cookie_PHPSESSID; + proxy_cache_bypass $cookie_PHPSESSID; + proxy_cache_key "$scheme$host$request_uri"; + + } + + # Disable Cache for the file type html, json + location ~* .(?:manifest|appcache|html?|xml|json)$ { + expires -1; + } + + # Enable Cache the file 30 days + location ~* .(jpg|png|gif|jpeg|css|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ { + proxy_cache_valid 200 120m; + expires 30d; + proxy_cache my-cache; + access_log off; + } + + } + +保存并退出。 + +下一步,为 nginx 和虚拟主机创建一个新的日志目录“/var/log/”: + + mkdir -p /var/log/nginx/ + +如果一切顺利,在文件的根目录下创建目录 
saitama.me 用作文档根: + + cd /usr/local/www/ + mkdir saitama.me + +### 步骤 8 - 测试 ### + +在这个步骤里面,我们只是测试我们的 nginx 和虚拟主机的配置。 + +用如下命令测试 nginx 的配置: + + nginx -t + +如果一切都没有问题,用 sysrc 命令添加 nginx 为开机启动项,并且启动 nginx 和重启 apache: + + sysrc nginx_enable=yes + service nginx start + service apache24 restart + +一切完毕后,在 saitama.me 目录下,添加一个新的 phpinfo 文件来验证 php 的正常运行: + + cd /usr/local/www/saitama.me + echo "" > info.php + +然后访问这个域名: **www.saitama.me/info.php**。 + +![Virtualhost Configured saitamame](http://blog.linoxide.com/wp-content/uploads/2015/11/Virtualhost-Configured-saitamame.png) + +Nginx 作为 Apache 的反向代理运行了,PHP 也同样工作了。 + +这是另一个结果: + +测试无缓存的 .html 文件。 + + curl -I www.saitama.me + +![html with no-cache](http://blog.linoxide.com/wp-content/uploads/2015/11/html-with-no-cache.png) + +测试有三十天缓存的 .css 文件。 + + curl -I www.saitama.me/test.css + +![css file 30day cache](http://blog.linoxide.com/wp-content/uploads/2015/11/css-file-30day-cache.png) + +测试缓存的 .php 文件: + + curl -I www.saitama.me/info.php + +![PHP file cached](http://blog.linoxide.com/wp-content/uploads/2015/11/PHP-file-cached.png) + +全部搞定。 + +### 总结 ### + +Nginx 是最受欢迎的 HTTP 和反向代理服务器,拥有丰富的功能、高性能、低内存/RAM 占用。Nginx 也用于缓存, 我们可以在网络上缓存静态文件使得网页加速,并且缓存用户请求的 php 文件。 Nginx 容易配置和使用,可以将它用作 HTTP 服务器或者 apache 的反向代理。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-nginx-reverse-proxy-apache-freebsd-10-2/ + +作者:[Arul][a] +译者:[KnightJoker](https://github.com/KnightJoker) +校对:[Caroline](https://github.com/carolinewuyan),[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arulm/ diff --git a/published/201512/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md b/published/201512/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md new file mode 100644 index 0000000000..32ce669b8c --- /dev/null +++ 
b/published/201512/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md @@ -0,0 +1,162 @@ +在 Debian Linux 上安装配置 ISC DHCP 服务器 +================================================================================ + +动态主机控制协议(Dynamic Host Control Protocol,DHCP)给网络管理员提供了一种便捷的方式,为不断变化的网络主机或是动态网络提供网络层地址。其中最常用的 DHCP 服务工具是 ISC DHCP Server。DHCP 服务的目的是给主机提供必要的网络信息以便能够和其他连接在网络中的主机互相通信。DHCP 服务提供的信息包括:DNS 服务器信息,网络地址(IP),子网掩码,默认网关信息,主机名等等。 + +本教程介绍运行在 Debian 7.7 上 4.2.4 版的 ISC-DHCP-Server 如何管理多个虚拟局域网(VLAN),也可以非常容易应用到单一网络上。 + +测试用的网络是通过思科路由器使用传统的方式来管理 DHCP 租约地址的。目前有 12 个 VLAN 需要通过集中式服务器来管理。把 DHCP 的任务转移到一个专用的服务器上,路由器可以收回相应的资源,把资源用到更重要的任务上,比如路由寻址,访问控制列表,流量监测以及网络地址转换等。 + +另一个将 DHCP 服务转移到专用服务器的好处,以后会讲到,它可以建立动态域名服务器(DDNS),这样当主机从服务器请求 DHCP 地址的时候,这样新主机的主机名就会被添加到 DNS 系统里面。 + +### 安装和配置 ISC DHCP 服务器### + +1、使用 apt 工具用来安装 Debian 软件仓库中的 ISC 软件,来创建这个多宿主服务器。与其他教程一样需要使用 root 或者 sudo 访问权限。请适当的修改,以便使用下面的命令。(LCTT 译注:下面中括号里面是注释,使用的时候请删除,#表示使用的 root 权限) + + # apt-get install isc-dhcp-server [安装 the ISC DHCP Server 软件] + # dpkg --get-selections isc-dhcp-server [确认软件已经成功安装] + # dpkg -s isc-dhcp-server [用另一种方式确认成功安装] + +![Install ISC DHCP Server in Debian](http://www.tecmint.com/wp-content/uploads/2015/04/Install-ISC-DHCP-Server.jpg) + +2、 确认服务软件已经安装完成,现在需要提供网络信息来配置服务器,这样服务器才能够根据我们的需要来分发网络信息。作为管理员最起码需要了解的 DHCP 信息如下: + +- 网络地址 +- 子网掩码 +- 动态分配的地址范围 + +其他一些服务器动态分配的有用信息包括: + +- 默认网关 +- DNS 服务器 IP 地址 +- 域名 +- 主机名 +- 网络广播地址 + +这只是能让 ISC DHCP 服务器处理的选项中非常少的一部分。如果你想查看所有选项及其描述需要在安装好软件后输入以下命令: + + # man dhcpd.conf + +3、 一旦管理员已经确定了这台服务器分发的所有必要信息,那么是时候配置服务器并且分配必要的地址池了。在配置任何地址池或服务器配置之前,必须配置 DHCP 服务器侦听这台服务器上面的一个接口。 + +在这台特定的服务器上,设置好网卡后,DHCP 会侦听名称名为`'bond0'`的接口。请适根据你的实际情况来更改服务器以及网络环境。下面的配置都是针对本教程的。 + +![Configure ISC DHCP Network](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DHCP-Network.jpg) + +这行指定的是 DHCP 服务侦听接口(一个或多个)上的 DHCP 流量。修改主配置文件,分配适合的 DHCP 地址池到所需要的网络上。主配置文件在 /etc/dhcp/dhcpd.conf。用文本编辑器打开这个文件 + + # nano /etc/dhcp/dhcpd.conf + +这个配置文件可以配置我们所需要的地址池/主机。文件顶部有 
‘ddns-update-style‘ 这样一句,在本教程中它设置为 ‘none‘。在以后的教程中会讲到动态 DNS,ISC-DHCP-Server 将会与 BIND9 集成,它能够使主机名更新指向到 IP 地址。 + +4、 接下来的部分是管理员配置全局网络设置,如 DNS 域名,默认的租约时间,IP 地址,子网掩码,以及其它。如果你想了解所有的选项,请阅读 man 手册中的 dhcpd.conf 文件,命令如下: + + # man dhcpd.conf + +对于这台服务器,我们需要在配置文件顶部配置一些全局网络设置,这样就不用到每个地址池中去单独设置了。 + +![Configure ISC DDNS](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DDNS.png) + +我们花一点时间来解释一下这些选项,在本教程中虽然它们是一些全局设置,但是也可以单独的为某一个地址池进行配置。 + +- option domain-name “comptech.local”; – 所有使用这台 DHCP 服务器的主机,都将成为 DNS 域 “comptech.local” 的一员 + +- option domain-name-servers 172.27.10.6; – DHCP 向所有配置这台 DHCP 服务器的网络主机分发 DNS 服务器地址 172.27.10.6 + +- option subnet-mask 255.255.255.0; – 每个网络设备都分配子网掩码 255.255.255.0 或 /24 + +- default-lease-time 3600; – 默认有效的地址租约时间(单位是秒)。如果租约时间耗尽,那么主机可以重新申请租约。如果租约完成,那么相应的地址也将被尽快回收。 + +- max-lease-time 86400; – 这是一台主机所能租用的最大的租约时间(单位为秒)。 + +- ping-check true; – 这是一个额外的测试,以确保服务器分发出的网络地址不是当前网络中另一台主机已使用的网络地址。 + +- ping-timeout; – 服务器在判定一个地址此前未被使用前,等待 ping 响应的秒数。 + +- ignore client-updates; – 现在这个选项是可以忽略的,因为 DDNS 在前面已在配置文件中已经被禁用,但是当 DDNS 运行时,这个选项会忽略主机更新其 DNS 主机名的请求。 + +5、 文件中下面一行是权威 DHCP 所在行。这行的意义是如果服务器是为文件中所配置的网络分发地址的服务器,那么取消对该权威关键字(authoritative stanza) 的注释。 + +通过去掉关键字 authoritative 前面的 ‘#’,取消注释全局权威关键字。这台服务器将是它所管理网络里面的唯一权威。 + +![Enable ISC Authoritative](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-authoritative.png) + +默认情况下服务器被假定为**不是**网络上的权威服务器。之所以这样做是出于安全考虑。如果有人因为不了解 DHCP 服务的配置,导致配置不当或配置到一个不该出现的网络里面,这都将带来非常严重的连接问题。这行还可用在每个网络中单独配置使用。也就是说如果这台服务器不是整个网络的 DHCP 服务器,authoritative 行可以用在每个单独的网络中,而不是像上面截图中那样的全局配置。 + +6、 这一步是配置服务器将要管理的所有 DHCP 地址池/网络。简短起见,本教程只讲到配置的地址池之一。作为管理员需要收集一些必要的网络信息(比如域名,网络地址,有多少地址能够被分发等等) + +以下这个地址池所用到的信息都是管理员收集整理的:网络 ID 172.27.60.0,子网掩码 255.255.255.0 或 /24,默认子网网关 172.27.60.1,广播地址 172.27.60.255。 + +以上这些信息对于构建 dhcpd.conf 文件中新网络非常重要。使用文本编辑器修改配置文件添加新网络进去,这里我们需要使用 root 或 sudo 访问权限。 + + # nano /etc/dhcp/dhcpd.conf + +![Configure DHCP Pools and Networks](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-network.png) + +当前这个例子是给用
VMWare 创建的虚拟服务器分配 IP 地址。第一行显示是该网络的子网掩码。括号里面的内容是 DHCP 服务器应该提供给网络上面主机的所有选项。 + +第一行, range 172.27.60.50 172.27.60.254; 这一行显示的是,DHCP 服务在这个网络上能够给主机动态分发的地址范围。 + +第二行,option routers 172.27.60.1; 这里显示的是给网络里面所有的主机分发的默认网关地址。 + +最后一行, option broadcast-address 172.27.60.255; 显示当前网络的广播地址。这个地址不能被包含在要分发放的地址范围内,因为广播地址不能分配到一个主机上面。 + +必须要强调的是每行的结尾必须要用(;)来结束,所有创建的网络必须要在 {} 里面。 + +7、 如果要创建多个网络,继续创建完它们的相应选项后保存文本文件即可。配置完成以后如果有更改,ISC-DHCP-Server 进程需要重启来使新的更改生效。重启进程可以通过下面的命令来完成: + + # service isc-dhcp-server restart + +这条命令将重启 DHCP 服务,管理员能够使用几种不同的方式来检查服务器是否已经可以处理 dhcp 请求。最简单的方法是通过 [lsof 命令][1]来查看服务器是否在侦听67端口,命令如下: + + # lsof -i :67 + +![Check DHCP Listening Port](http://www.tecmint.com/wp-content/uploads/2015/04/lsof.png) + +这里输出的结果表明 dhcpd(DHCP 服务守护进程)正在运行并且侦听67端口。由于在 /etc/services 文件中67端口的映射,所以输出中的67端口实际上被转换成了 “bootps”。 + +在大多数的系统中这是非常常见的,现在服务器应该已经为网络连接做好准备,我们可以将一台主机接入网络请求DHCP地址来验证服务是否正常。 + +### 测试客户端连接 ### + +8、 现在许多系统使用网络管理器来维护网络连接状态,因此这个设备应该预先配置好的,只要对应的接口处于活跃状态就能够获取 DHCP。 + +然而当一台设备无法使用网络管理器时,它可能需要手动获取 DHCP 地址。下面的几步将演示怎样手动获取以及如何查看服务器是否已经按需要分发地址。 + +‘[ifconfig][2]‘工具能够用来检查接口的配置。这台被用来测试的 DHCP 服务器的设备,它只有一个网络适配器(网卡),这块网卡被命名为 ‘eth0‘。 + + # ifconfig eth0 + +![Check Network Interface IP Address](http://www.tecmint.com/wp-content/uploads/2015/04/No-ip.png) + +从输出结果上看,这台设备目前没有 IPv4 地址,这样很便于测试。我们把这台设备连接到 DHCP 服务器并发出一个请求。这台设备上已经安装了一个名为 ‘dhclient‘ 的DHCP客户端工具。因为操作系统各不相同,所以这个客户端软件也是互不一样的。 + + # dhclient eth0 + +![Request IP Address from DHCP](http://www.tecmint.com/wp-content/uploads/2015/04/IP.png) + +当前 `'inet addr:'` 字段中显示了属于 172.27.60.0 网络地址范围内的 IPv4 地址。值得欣慰的是当前网络还配置了正确的子网掩码并且分发了广播地址。 + +到这里看起来还都不错,让我们来测试一下,看看这台设备收到新 IP 地址是不是由服务器发出的。这里我们参照服务器的日志文件来完成这个任务。虽然这个日志的内容有几十万条,但是里面只有几条是用来确定服务器是否正常工作的。这里我们使用一个工具 ‘tail’,它只显示日志文件的最后几行,这样我们就可以不用拿一个文本编辑器去查看所有的日志文件了。命令如下: + + # tail /var/log/syslog + +![Check DHCP Logs](http://www.tecmint.com/wp-content/uploads/2015/04/DHCP-Log.png) + +OK!服务器记录表明它分发了一个地址给这台主机 (HRTDEBXENSRV)。服务器按预期运行,给它充当权威服务器的网络分发了适合的网络地址。至此 DHCP 服务器搭建成功并且运行。如果有需要你可以继续配置其他的网络,排查故障,确保安全。 + 
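回顾一下第 6 步:截图中展示的那个地址池,整理成 dhcpd.conf 中的文本大致如下(网络参数取自本教程的示例,仅供参考,请按你的实际网络修改):

```
subnet 172.27.60.0 netmask 255.255.255.0 {
    range 172.27.60.50 172.27.60.254;        # 动态分发的地址范围
    option routers 172.27.60.1;              # 默认网关
    option broadcast-address 172.27.60.255;  # 广播地址,不能分配给主机
}
```

注意每行以分号(;)结尾,整个网络定义包含在大括号 {} 内,这与正文中的要求一致。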
+在以后的 Debian 教程中我会讲一些新的 ISC-DHCP-Server 功能。有时间的话我将写一篇关于 Bind9 和 DDNS 的教程,融入到这篇文章里面。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/install-and-configure-multihomed-isc-dhcp-server-on-debian-linux/ + +作者:[Rob Turner][a] +译者:[ivo-wang](https://github.com/ivo-wang) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/robturner/ +[1]:http://www.tecmint.com/10-lsof-command-examples-in-linux/ +[2]:http://www.tecmint.com/ifconfig-command-examples/ diff --git a/published/201512/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/published/201512/20150806 Installation Guide for Puppet on Ubuntu 15.04.md new file mode 100644 index 0000000000..67ae06db9b --- /dev/null +++ b/published/201512/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -0,0 +1,435 @@ +如何在 Ubuntu 15.04 中安装 puppet +================================================================================ + +大家好,本教程将学习如何在 ubuntu 15.04 上面安装 puppet,它可以用来管理你的服务器基础环境。puppet 是由 puppet 实验室(Puppet Labs)开发并维护的一款开源的配置管理软件,它能够帮我们自动化供给、配置和管理服务器的基础环境。不管我们管理的是几个服务器还是数以千计的计算机组成的业务体系,puppet 都能够使管理员从繁琐的手动配置调整中解放出来,腾出时间和精力去提升系统的整体效率。它能够确保所有自动化流程作业的一致性、可靠性以及稳定性。它让管理员和开发者更紧密的联系在一起,使开发者更容易产出设计良好、简洁清晰的代码。puppet 提供了配置管理和数据中心自动化的两个解决方案。这两个解决方案分别是 **puppet 开源版** 和 **puppet 企业版**。puppet 开源版以 Apache 2.0 许可证发布,它是一个非常灵活、可定制的解决方案,设计初衷是帮助管理员去完成那些重复性操作工作。puppet 企业版是一个全平台复杂 IT 环境下的成熟解决方案,它除了拥有开源版本所有优势以外还有移动端 apps、只有商业版才有的加强支持,以及模块化和集成管理等。Puppet 使用 SSL 证书来认证主控服务器与代理节点之间的通信。 + +本教程将要介绍如何在运行 ubuntu 15.04 的主控服务器和代理节点上面安装开源版的 puppet。在这里,我们用一台服务器做主控服务器(master),管理和控制剩余的当作 puppet 代理节点(agent node)的服务器,这些代理节点将依据主控服务器来进行配置。在 ubuntu 15.04 上只需要简单的几步就能安装配置好 puppet,用它来管理我们的服务器基础环境非常的方便。(LCTT 译注:puppet 采用 C/S 架构,所以必须至少有一台作为服务器,其他作为客户端处理) + +### 1.设置主机文件 ### + +在本教程里,我们将使用 2 台运行 ubuntu 15.04 “Vivid Vervet" 的主机,一台作为主控服务器,另一台作为 puppet 的代理节点。下面是我们将用到的服务器的基础信息。 + +- puppet
主控服务器 IP:45.55.88.6,主机名:puppetmaster +- puppet 代理节点 IP:45.55.86.39,主机名:puppetnode + +我们要在代理节点和服务器这两台机器的 hosts 文件里面都添加上相应的条目,使用 root 或是 sudo 访问权限来编辑 /etc/hosts 文件,命令如下: + + # nano /etc/hosts + + 45.55.88.6 puppetmaster.example.com puppetmaster + 45.55.86.39 puppetnode.example.com puppetnode + +注意,puppet 主控服务器必须使用 8140 端口来运行,所以请务必保证开启 8140 端口。 + +### 2. 用 NTP 更新时间 ### + +puppet 代理节点所使用的系统时间必须要准确,这样可以避免代理证书出现问题。如果有时间差异,那么证书将过期失效,所以服务器与代理节点的系统时间必须互相同步。我们使用 NTP(Network Time Protocol,网络时间协议)来同步时间。**在服务器与代理节点上面分别**运行以下命令来同步时间。 + + # ntpdate pool.ntp.org + + 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec + +(LCTT 译注:显示类似的输出结果表示运行正常) + +如果没有安装 ntp,请使用下面的命令更新你的软件仓库,安装并运行 ntp 服务: + + # apt-get update && sudo apt-get -y install ntp ; service ntp restart + +### 3. 安装主控服务器软件 ### + +安装开源版本的 puppet 有很多的方法。在本教程中我们从 puppet 实验室官网下载一个名为 puppetlabs-release 的软件包,安装后它将为我们在软件源里面添加 puppetmaster-passenger。puppetmaster-passenger 包括带有 apache 的 puppet 主控服务器。我们开始下载这个软件包: + + # cd /tmp/ + # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + + --2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d + Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. + HTTP request sent, awaiting response... 200 OK + Length: 7384 (7.2K) [application/x-debian-package] + Saving to: ‘puppetlabs-release-trusty.deb’ + + puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s + + 2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] + +下载完成,我们来安装它: + + # dpkg -i puppetlabs-release-trusty.deb + + Selecting previously unselected package puppetlabs-release. + (Reading database ... 85899 files and directories currently installed.) + Preparing to unpack puppetlabs-release-trusty.deb ... + Unpacking puppetlabs-release (1.0-11) ...
+ Setting up puppetlabs-release (1.0-11) ... + +使用 apt 包管理命令更新一下本地的软件源: + + # apt-get update + +现在我们就可以安装 puppetmaster-passenger 了 + + # apt-get install puppetmaster-passenger + +**提示**: 在安装的时候可能会报错: + + Warning: Setting templatedir is deprecated.see http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning') + +不过不用担心,忽略掉它就好,我们只需要在设置配置文件的时候把这一项禁用就行了。 + +如何来查看puppet 主控服务器是否已经安装成功了呢?非常简单,只需要使用下面的命令查看它的版本就可以了。 + + # puppet --version + + 3.8.1 + +现在我们已经安装好了 puppet 主控服务器。因为我们使用的是配合 apache 的 passenger,由 apache 来控制 puppet 主控服务器,当 apache 运行时 puppet 主控才运行。 + +在开始之前,我们需要通过停止 apache 服务来让 puppet 主控服务器停止运行。 + + # systemctl stop apache2 + +### 4. 使用 Apt 工具锁定主控服务器的版本 ### + +现在已经安装了 3.8.1 版的 puppet,我们锁定这个版本不让它随意升级,因为升级会造成配置文件混乱。 使用 apt 工具来锁定它,这里我们需要使用文本编辑器来创建一个新的文件 **/etc/apt/preferences.d/00-puppet.pref** + + # nano /etc/apt/preferences.d/00-puppet.pref + +在新创建的文件里面添加以下内容: + + # /etc/apt/preferences.d/00-puppet.pref + Package: puppet puppet-common puppetmaster-passenger + Pin: version 3.8* + Pin-Priority: 501 + +这样在以后的系统软件升级中, puppet 主控服务器将不会跟随系统软件一起升级。 + +### 5. 配置 Puppet 主控服务器### + +Puppet 主控服务器作为一个证书发行机构,需要生成它自己的证书,用于签署所有代理的证书的请求。首先我们要删除所有在该软件包安装过程中创建出来的 ssl 证书。本地默认的 puppet 证书放在 /var/lib/puppet/ssl。因此我们只需要使用 rm 命令来整个移除这些证书就可以了。 + + # rm -rf /var/lib/puppet/ssl + +现在来配置该证书,在创建 puppet 主控服务器证书时,我们需要包括代理节点与主控服务器沟通所用的每个 DNS 名称。使用文本编辑器来修改服务器的配置文件 puppet.conf + + # nano /etc/puppet/puppet.conf + +输出的结果像下面这样 + + [main] + logdir=/var/log/puppet + vardir=/var/lib/puppet + ssldir=/var/lib/puppet/ssl + rundir=/var/run/puppet + factpath=$vardir/lib/facter + templatedir=$confdir/templates + + [master] + # These are needed when the puppetmaster is run by passenger + # and can safely be removed if webrick is used. 
+ ssl_client_header = SSL_CLIENT_S_DN + ssl_client_verify_header = SSL_CLIENT_VERIFY + +在这我们需要注释掉 templatedir 这行使它失效。然后在文件的 `[main]` 小节的结尾添加下面的信息。 + + server = puppetmaster + environment = production + runinterval = 1h + strict_variables = true + certname = puppetmaster + dns_alt_names = puppetmaster, puppetmaster.example.com + +还有很多你可能用的到的配置选项。 如果你有需要,在 Puppet 实验室有一份详细的描述文件供你阅读: [Main Config File (puppet.conf)][1]。 + +编辑完成后保存退出。 + +使用下面的命令来生成一个新的证书。 + + # puppet master --verbose --no-daemonize + + Info: Creating a new SSL key for ca + Info: Creating a new SSL certificate request for ca + Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78 + ... + Notice: puppetmaster has a waiting certificate request + Notice: Signed certificate request for puppetmaster + Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem' + Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem' + Notice: Starting Puppet master version 3.8.1 + ^CNotice: Caught INT; storing stop + Notice: Processing stop + +至此,证书已经生成。一旦我们看到 **Notice: Starting Puppet master version 3.8.1**,就表明证书就已经制作好了。我们按下 CTRL-C 回到 shell 命令行。 + +查看新生成证书的信息,可以使用下面的命令。 + + # puppet cert list -all + + + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") + +### 6. 
创建一个 Puppet 清单 ### + +默认的主要清单(Manifest)是 /etc/puppet/manifests/site.pp。 这个主要清单文件包括了用于在代理节点执行的配置定义。现在我们来创建一个清单文件: + + # nano /etc/puppet/manifests/site.pp + +在刚打开的文件里面添加下面这几行: + + # execute 'apt-get update' + exec { 'apt-update': # exec resource named 'apt-update' + command => '/usr/bin/apt-get update' # command this resource will run + } + + # install apache2 package + package { 'apache2': + require => Exec['apt-update'], # require 'apt-update' before installing + ensure => installed, + } + + # ensure apache2 service is running + service { 'apache2': + ensure => running, + } + +以上这几行的意思是给代理节点部署 apache web 服务。 + +### 7. 运行 puppet 主控服务 ### + +已经准备好运行 puppet 主控服务器了,那么开启 apache 服务来启动它: + + # systemctl start apache2 + +我们的 puppet 主控服务器已经运行,不过它还不能管理任何代理节点。现在我们给 puppet 主控服务器添加代理节点。 + +**提示**: 如果报错: + + Job for apache2.service failed. see "systemctl status apache2.service" and "journalctl -xe" for details. + +肯定是 apache 服务器有一些问题,我们可以使用 root 或是 sudo 访问权限来运行 **apachectl start** 查看它输出的日志。在本教程执行过程中,我们发现一个 **/etc/apache2/sites-enabled/puppetmaster.conf** 的证书配置问题。修改其中的 **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem** 为 **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem**,然后注释掉后面这行 **SSLCertificateKeyFile**。然后在命令行重新启动 apache。 + +### 8. 安装 Puppet 代理节点的软件包 ### + +我们已经准备好了 puppet 的服务器,现在需要一个可以管理的代理节点,我们将安装 puppet 代理软件到节点上去。这里我们要给每一个需要管理的节点安装代理软件,并且确保这些节点能够通过 DNS 查询到服务器主机。下面将安装最新的代理软件到节点 puppetnode.example.com 上。 + +在代理节点上使用下面的命令下载 puppet 实验室提供的软件包: + + # cd /tmp/ + # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + + --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d + Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. + HTTP request sent, awaiting response...
200 OK + Length: 7384 (7.2K) [application/x-debian-package] + Saving to: ‘puppetlabs-release-trusty.deb’ + + puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s + + 2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] + +在 ubuntu 15.04 上我们使用debian包管理系统来安装它,命令如下: + + # dpkg -i puppetlabs-release-trusty.deb + +使用 apt 包管理命令更新一下本地的软件源: + + # apt-get update + +通过远程仓库安装: + + # apt-get install puppet + +Puppet 代理默认是不启动的。这里我们需要使用文本编辑器修改 /etc/default/puppet 文件,使它正常工作: + + # nano /etc/default/puppet + +更改 **START** 的值改成 "yes" 。 + + START=yes + +最后保存并退出。 + +### 9. 使用 Apt 工具锁定代理软件的版本 ### + +和上面的步骤一样为防止随意升级造成的配置文件混乱,我们要使用 apt 工具来把它锁定。具体做法是使用文本编辑器创建一个文件 **/etc/apt/preferences.d/00-puppet.pref** + + # nano /etc/apt/preferences.d/00-puppet.pref + +在新建的文件里面加入如下内容 + + # /etc/apt/preferences.d/00-puppet.pref + Package: puppet puppet-common + Pin: version 3.8* + Pin-Priority: 501 + +这样 puppet 就不会随着系统软件升级而随意升级了。 + +### 10. 配置 puppet 代理节点 ### + +我们需要编辑一下代理节点的 puppet.conf 文件,来使它运行。 + + # nano /etc/puppet/puppet.conf + +它看起来和服务器的配置文件完全一样。同样注释掉**templatedir**这行。不同的是在这里我们需要删除掉所有关于`[master]` 的部分。 + +假定主控服务器可以通过名字“puppet-master”访问,我们的客户端应该可以和它相互连接通信。如果不行的话,我们需要使用完整的主机域名 puppetmaster.example.com + + [agent] + server = puppetmaster.example.com + certname = puppetnode.example.com + +在文件的结尾增加上面3行,增加之后文件内容像下面这样: + + [main] + logdir=/var/log/puppet + vardir=/var/lib/puppet + ssldir=/var/lib/puppet/ssl + rundir=/var/run/puppet + factpath=$vardir/lib/facter + #templatedir=$confdir/templates + + [agent] + server = puppetmaster.example.com + certname = puppetnode.example.com + +最后保存并退出。 + +使用下面的命令来启动客户端软件: + + # systemctl start puppet + +如果一切顺利的话,我们不会看到命令行有任何输出。 第一次运行的时候,代理节点会生成一个 ssl 证书并且给服务器发送一个请求,经过签名确认后,两台机器就可以互相通信了。 + +**提示**: 如果这是你添加的第一个代理节点,建议你在添加其他节点前先给这个证书签名。一旦能够通过并正常运行,回过头来再添加其他代理节点。 + +### 11. 
在主控服务器上对证书请求进行签名 ### + +第一次运行的时候,代理节点会生成一个 ssl 证书并且给服务器发送一个签名请求。在主控服务器给代理节点服务器证书签名之后,主服务器才能和代理服务器通信并且控制代理服务器。 + +在主控服务器上使用下面的命令来列出当前的证书请求: + + # puppet cert list + "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2 + +因为只设置了一台代理节点服务器,所以我们将只看到一个请求。看起来类似如上,代理节点的完整域名即其主机名。 + +注意有没有“+”号在前面,代表这个证书有没有被签名。 + +使用带有主机名的**puppet cert sign**这个命令来签署这个签名请求,如下: + + # puppet cert sign puppetnode.example.com + Notice: Signed certificate request for puppetnode.example.com + Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem' + +主控服务器现在可以通讯和控制它签名过的代理节点了。 + +如果想签署所有的当前请求,可以使用 -all 选项,如下所示: + + # puppet cert sign --all + +### 12. 删除一个 Puppet 证书 ### + +如果我们想移除一个主机,或者想重建一个主机然后再添加它。下面的例子里我们将展示如何删除 puppet 主控服务器上面的一个证书。使用的命令如下: + + # puppet cert clean hostname + Notice: Revoked certificate with serial 5 + Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem' + Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem' + +如果我们想查看所有的签署和未签署的请求,使用下面这条命令: + + # puppet cert list --all + + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") + + +### 13. 
部署 Puppet 清单 ### + +当完成 puppet 清单的配置后,现在我们需要部署清单到代理节点服务器上。要应用并加载主 puppet 清单,我们可以在代理节点服务器上面使用下面的命令: + + # puppet agent --test + + Info: Retrieving pluginfacts + Info: Retrieving plugin + Info: Caching catalog for puppetnode.example.com + Info: Applying configuration version '1434563858' + Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully + Notice: Finished catalog run in 10.53 seconds + +这里向我们展示了主清单如何立即影响到了一个单一的服务器。 + +如果我们打算运行的 puppet 清单与主清单没有什么关联,我们可以简单使用 puppet apply 带上相应的清单文件的路径即可。它仅将清单应用到我们运行该清单的代理节点上。 + + # puppet apply /etc/puppet/manifests/test.pp + +### 14. 为特定节点配置清单 ### + +如果我们想部署一个清单到某个特定的节点,我们需要如下配置清单。 + +在主控服务器上面使用文本编辑器编辑 /etc/puppet/manifests/site.pp: + + # nano /etc/puppet/manifests/site.pp + +添加下面的内容进去: + + node 'puppetnode', 'puppetnode1' { + # execute 'apt-get update' + exec { 'apt-update': # exec resource named 'apt-update' + command => '/usr/bin/apt-get update' # command this resource will run + } + + # install apache2 package + package { 'apache2': + require => Exec['apt-update'], # require 'apt-update' before installing + ensure => installed, + } + + # ensure apache2 service is running + service { 'apache2': + ensure => running, + } + } + +这里的配置显示我们将在名为 puppetnode 和 puppetnode1 的 2 个指定节点上面安装 apache 服务。这里可以添加其他我们需要安装部署的具体节点进去。 + +### 15.
配置清单模块 ### + +模块对于组合任务是非常有用的,在 Puppet 社区有很多人贡献了自己的模块组件。 + +在主控服务器上,我们将使用 puppet module 命令来安装 **puppetlabs-apache** 模块。 + + # puppet module install puppetlabs-apache + +**警告**: 千万不要在一个已经部署 apache 环境的机器上面使用这个模块,否则它将清空你的未被 puppet 管理的 apache 配置。 + +现在用文本编辑器来修改 **site.pp**: + + # nano /etc/puppet/manifests/site.pp + +添加下面的内容进去,在 puppetnode 上面安装 apache 服务: + + node 'puppetnode' { + class { 'apache': } # use apache module + apache::vhost { 'example.com': # define vhost resource + port => '80', + docroot => '/var/www/html' + } + } + +保存退出。然后重新运行该清单来为我们的代理节点部署 apache 配置。 + +### 总结 ### + +现在我们已经成功的在 ubuntu 15.04 上面部署并运行 puppet 来管理代理节点服务器的基础运行环境。我们学习了 puppet 的工作原理、如何编写清单文件,以及节点与主控服务器间使用 ssl 证书认证的过程。使用 puppet 开源软件配置管理工具在众多的代理节点上来控制、管理和配置重复性任务是非常容易的。如果你有任何问题、建议或反馈,请与我们取得联系,我们将第一时间完善更新,谢谢。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/ + +作者:[Arun Pyasi][a] +译者:[ivo-wang](https://github.com/ivo-wang) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html diff --git a/translated/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md b/published/201512/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md similarity index 87% rename from translated/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md rename to published/201512/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md index 7de8349b9c..4380346dc9 100644 --- a/translated/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md +++ b/published/201512/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md @@ -1,6 +1,7 @@ 如何在 CentOS 7.x 上安装 Zephyr 测试管理工具 ================================================================================
-测试管理工具包括作为测试人员需要的任何东西。测试管理工具用来记录测试执行的结果、计划测试活动以及报告质量保证活动的情况。在这篇文章中我们会向你介绍如何配置 Zephyr 测试管理工具,它包括了管理测试活动需要的所有东西,不需要单独安装测试活动所需要的应用程序从而降低测试人员不必要的麻烦。一旦你安装完它,你就看可以用它跟踪 bug、缺陷,和你的团队成员协作项目任务,因为你可以轻松地共享和访问测试过程中多个项目团队的数据。 + +测试管理(Test Management)指测试人员所需要的任何的所有东西。测试管理工具用来记录测试执行的结果、计划测试活动以及汇报质量控制活动的情况。在这篇文章中我们会向你介绍如何配置 Zephyr 测试管理工具,它包括了管理测试活动需要的所有东西,不需要单独安装测试活动所需要的应用程序从而降低测试人员不必要的麻烦。一旦你安装完它,你就看可以用它跟踪 bug 和缺陷,和你的团队成员协作项目任务,因为你可以轻松地共享和访问测试过程中多个项目团队的数据。 ### Zephyr 要求 ### @@ -19,21 +20,21 @@ Packages -JDK 7 or above ,  Oracle JDK 6 update -No Prior Tomcat, MySQL installed +JDK 7 或更高 ,  Oracle JDK 6 update +没有事先安装的 Tomcat 和 MySQL RAM 4 GB -Preferred 8 GB +推荐 8 GB CPU -2.0 GHZ or Higher +2.0 GHZ 或更高 Hard Disk -30 GB , Atleast 5GB must be free +30 GB , 至少 5GB @@ -48,8 +49,6 @@ [root@centos-007 ~]# yum install java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1 ----------- - [root@centos-007 ~]# yum install java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.2.el7_1.x86_64 安装完 java 和它的所有依赖后,运行下面的命令设置 JAVA_HOME 环境变量。 @@ -61,8 +60,6 @@ [root@centos-007 ~]# java –version ----------- - java version "1.7.0_79" OpenJDK Runtime Environment (rhel-2.5.5.2.el7_1-x86_64 u79-b14) OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode) @@ -71,7 +68,7 @@ ### 安装 MySQL 5.6.x ### -如果的机器上有其它的 MySQL,建议你先卸载它们并安装这个版本,或者升级它们的模式到指定的版本。因为 Zephyr 前提要求这个指定的主要/最小 MySQL (5.6.x)版本要有 root 用户名。 +如果的机器上有其它的 MySQL,建议你先卸载它们并安装这个版本,或者升级它们的模式(schemas)到指定的版本。因为 Zephyr 前提要求这个指定的 5.6.x 版本的 MySQL ,要有 root 用户名。 可以按照下面的步骤在 CentOS-7.1 上安装 MySQL 5.6 : @@ -93,10 +90,7 @@ [root@centos-007 ~]# service mysqld start [root@centos-007 ~]# service mysqld status -对于全新安装的 MySQL 服务器,MySQL root 用户的密码为空。 -为了安全起见,我们应该重置 MySQL root 用户的密码。 - -用自动生成的空密码连接到 MySQL 并更改 root 用户密码。 +对于全新安装的 MySQL 服务器,MySQL root 用户的密码为空。为了安全起见,我们应该重置 MySQL root 用户的密码。用自动生成的空密码连接到 MySQL 并更改 root 用户密码。 [root@centos-007 ~]# mysql mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password'); @@ -224,7 +218,7 @@ via: http://linoxide.com/linux-how-to/setup-zephyr-tool-centos-7-x/ 
作者:[Kashif Siddique][a] 译者:[ictlyh](http://mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201512/20150831 Linux workstation security checklist.md b/published/201512/20150831 Linux workstation security checklist.md new file mode 100644 index 0000000000..15daaa5382 --- /dev/null +++ b/published/201512/20150831 Linux workstation security checklist.md @@ -0,0 +1,509 @@ +来自 Linux 基金会内部的《Linux 工作站安全检查清单》 +================================================================================ + +### 目标受众 + + 这是一套 Linux 基金会为其系统管理员提供的推荐规范。 + +这个文档用于帮助那些使用 Linux 工作站来访问和管理项目的 IT 设施的系统管理员团队。 + +如果你的系统管理员是远程员工,你也许可以使用这套指导方针确保系统管理员的系统可以通过核心安全需求,降低你的IT 平台成为攻击目标的风险。 + +即使你的系统管理员不是远程员工,很多人也会在工作环境中通过便携笔记本完成工作,或者在家中设置系统以便在业余时间或紧急时刻访问工作平台。不论发生何种情况,你都能调整这个推荐规范来适应你的环境。 + + +### 限制 + +但是,这并不是一个详细的“工作站加固”文档,可以说这是一个努力避免大多数明显安全错误而不会导致太多不便的一组推荐基线(baseline)。你也许阅读这个文档后会认为它的方法太偏执,而另一些人也许会认为这仅仅是一些肤浅的研究。安全就像在高速公路上开车 -- 任何比你开的慢的都是一个傻瓜,然而任何比你开的快的人都是疯子。这个指南仅仅是一些列核心安全规则,既不详细又不能替代经验、警惕和常识。 + +我们分享这篇文档是为了[将开源协作的优势带到 IT 策略文献资料中][18]。如果你发现它有用,我们希望你可以将它用到你自己团体中,并分享你的改进,对它的完善做出你的贡献。 + +### 结构 + +每一节都分为两个部分: + +- 核对适合你项目的需求 +- 形式不定的提示内容,解释了为什么这么做 + +#### 严重级别 + +在清单的每一个项目都包括严重级别,我们希望这些能帮助指导你的决定: + +- **关键(ESSENTIAL)** 该项应该在考虑列表上被明确的重视。如果不采取措施,将会导致你的平台安全出现高风险。 +- **中等(NICE)** 该项将改善你的安全形势,但是会影响到你的工作环境的流程,可能会要求养成新的习惯,改掉旧的习惯。 +- **低等(PARANOID)** 留作感觉会明显完善我们平台安全、但是可能会需要大量调整与操作系统交互的方式的项目。 + +记住,这些只是参考。如果你觉得这些严重级别不能反映你的工程对安全的承诺,你应该调整它们为你所合适的。 + +## 选择正确的硬件 + +我们并不会要求管理员使用一个特殊供应商或者一个特殊的型号,所以这一节提供的是选择工作系统时的核心注意事项。 + +### 检查清单 + +- [ ] 系统支持安全启动(SecureBoot) _(关键)_ +- [ ] 系统没有火线(Firewire),雷电(thunderbolt)或者扩展卡(ExpressCard)接口 _(中等)_ +- [ ] 系统有 TPM 芯片 _(中等)_ + +### 注意事项 + +#### 安全启动(SecureBoot) + +尽管它还有争议,但是安全引导能够预防很多针对工作站的攻击(Rootkits、“Evil Maid”,等等),而没有太多额外的麻烦。它并不能阻止真正专门的攻击者,加上在很大程度上,国家安全机构有办法应对它(可能是通过设计),但是有安全引导总比什么都没有强。 + +作为选择,你也许可以部署 [Anti Evil Maid][1] 
提供更多健全的保护,以对抗安全引导所需要阻止的攻击类型,但是它需要更多部署和维护的工作。 + +#### 系统没有火线(Firewire),雷电(thunderbolt)或者扩展卡(ExpressCard)接口 + +火线是一个标准,其设计上允许任何连接的设备能够完全地直接访问你的系统内存(参见[维基百科][2])。雷电接口和扩展卡同样有问题,虽然一些后来部署的雷电接口试图限制内存访问的范围。如果你没有这些系统端口,那是最好的,但是它并不严重,它们通常可以通过 UEFI 关闭或内核本身禁用。 + +#### TPM 芯片 + +可信平台模块(Trusted Platform Module ,TPM)是主板上的一个与核心处理器单独分开的加密芯片,它可以用来增加平台的安全性(比如存储全盘加密的密钥),不过通常不会用于日常的平台操作。充其量,这个是一个有则更好的东西,除非你有特殊需求,需要使用 TPM 增加你的工作站安全性。 + +## 预引导环境 + +这是你开始安装操作系统前的一系列推荐规范。 + +### 检查清单 + +- [ ] 使用 UEFI 引导模式(不是传统 BIOS)_(关键)_ +- [ ] 进入 UEFI 配置需要使用密码 _(关键)_ +- [ ] 使用安全引导 _(关键)_ +- [ ] 启动系统需要 UEFI 级别密码 _(中等)_ + +### 注意事项 + +#### UEFI 和安全引导 + +UEFI 尽管有缺点,还是提供了很多传统 BIOS 没有的好功能,比如安全引导。大多数现代的系统都默认使用 UEFI 模式。 + +确保进入 UEFI 配置模式要使用高强度密码。注意,很多厂商默默地限制了你使用密码长度,所以相比长口令你也许应该选择高熵值的短密码(关于密码短语请参考下面内容)。 + +基于你选择的 Linux 发行版,你也许需要、也许不需要按照 UEFI 的要求,来导入你的发行版的安全引导密钥,从而允许你启动该发行版。很多发行版已经与微软合作,用大多数厂商所支持的密钥给它们已发布的内核签名,因此避免了你必须处理密钥导入的麻烦。 + +作为一个额外的措施,在允许某人访问引导分区然后尝试做一些不好的事之前,让他们输入密码。为了防止肩窥(shoulder-surfing),这个密码应该跟你的 UEFI 管理密码不同。如果你经常关闭和启动,你也许不想这么麻烦,因为你已经必须输入 LUKS 密码了(LUKS 参见下面内容),这样会让你您减少一些额外的键盘输入。 + +## 发行版选择注意事项 + +很有可能你会坚持一个广泛使用的发行版如 Fedora,Ubuntu,Arch,Debian,或它们的一个类似发行版。无论如何,以下是你选择使用发行版应该考虑的。 + +### 检查清单 + +- [ ] 拥有一个强健的 MAC/RBAC 系统(SELinux/AppArmor/Grsecurity) _(关键)_ +- [ ] 发布安全公告 _(关键)_ +- [ ] 提供及时的安全补丁 _(关键)_ +- [ ] 提供软件包的加密验证 _(关键)_ +- [ ] 完全支持 UEFI 和安全引导 _(关键)_ +- [ ] 拥有健壮的原生全磁盘加密支持 _(关键)_ + +### 注意事项 + +#### SELinux,AppArmor,和 GrSecurity/PaX + +强制访问控制(Mandatory Access Controls,MAC)或者基于角色的访问控制(Role-Based Access Controls,RBAC)是一个用在老式 POSIX 系统的基于用户或组的安全机制扩展。现在大多数发行版已经捆绑了 MAC/RBAC 系统(Fedora,Ubuntu),或通过提供一种机制一个可选的安装后步骤来添加它(Gentoo,Arch,Debian)。显然,强烈建议您选择一个预装 MAC/RBAC 系统的发行版,但是如果你对某个没有默认启用它的发行版情有独钟,装完系统后应计划配置安装它。 + +应该坚决避免使用不带任何 MAC/RBAC 机制的发行版,像传统的 POSIX 基于用户和组的安全在当今时代应该算是考虑不足。如果你想建立一个 MAC/RBAC 工作站,通常认为 AppArmor 和 PaX 比 SELinux 更容易掌握。此外,在工作站上,很少有或者根本没有对外监听的守护进程,而针对用户运行的应用造成的最高风险,GrSecurity/PaX _可能_ 会比SELinux 提供更多的安全便利。 + +#### 发行版安全公告 + 
+大多数广泛使用的发行版都有一个给它们的用户发送安全公告的机制,但是如果你对一些机密感兴趣,去看看开发人员是否有见于文档的提醒用户安全漏洞和补丁的机制。缺乏这样的机制是一个重要的警告信号,说明这个发行版不够成熟,不能被用作主要管理员的工作站。 + +#### 及时和可靠的安全更新 + +多数常用的发行版提供定期安全更新,但应该经常检查以确保及时提供关键包更新。因此应避免使用附属发行版(spin-offs)和“社区重构”,因为它们必须等待上游发行版先发布,它们经常延迟发布安全更新。 + +现在,很难找到一个不使用加密签名、更新元数据或二者都不使用的发行版。如此说来,常用的发行版在引入这个基本安全机制就已经知道这些很多年了(Arch,说你呢),所以这也是值得检查的。 + +#### 发行版支持 UEFI 和安全引导 + +检查发行版是否支持 UEFI 和安全引导。查明它是否需要导入额外的密钥或是否要求启动内核有一个已经被系统厂商信任的密钥签名(例如跟微软达成合作)。一些发行版不支持 UEFI 或安全启动,但是提供了替代品来确保防篡改(tamper-proof)或防破坏(tamper-evident)引导环境([Qubes-OS][3] 使用 Anti Evil Maid,前面提到的)。如果一个发行版不支持安全引导,也没有防止引导级别攻击的机制,还是看看别的吧。 + +#### 全磁盘加密 + +全磁盘加密是保护静止数据的要求,大多数发行版都支持。作为一个选择方案,带有自加密硬盘的系统也可以用(通常通过主板 TPM 芯片实现),并提供了类似安全级别而且操作更快,但是花费也更高。 + +## 发行版安装指南 + +所有发行版都是不同的,但是也有一些一般原则: + +### 检查清单 + +- [ ] 使用健壮的密码全磁盘加密(LUKS) _(关键)_ +- [ ] 确保交换分区也加密了 _(关键)_ +- [ ] 确保引导程序设置了密码(可以和LUKS一样) _(关键)_ +- [ ] 设置健壮的 root 密码(可以和LUKS一样) _(关键)_ +- [ ] 使用无特权账户登录,作为管理员组的一部分 _(关键)_ +- [ ] 设置健壮的用户登录密码,不同于 root 密码 _(关键)_ + +### 注意事项 + +#### 全磁盘加密 + +除非你正在使用自加密硬盘,配置你的安装程序完整地加密所有存储你的数据与系统文件的磁盘很重要。简单地通过自动挂载的 cryptfs 环(loop)文件加密用户目录还不够(说你呢,旧版 Ubuntu),这并没有给系统二进制文件或交换分区提供保护,它可能包含大量的敏感数据。推荐的加密策略是加密 LVM 设备,以便在启动过程中只需要一个密码。 + +`/boot`分区将一直保持非加密,因为引导程序需要在调用 LUKS/dm-crypt 前能引导内核自身。一些发行版支持加密的`/boot`分区,比如 [Arch][16],可能别的发行版也支持,但是似乎这样增加了系统更新的复杂度。如果你的发行版并没有原生支持加密`/boot`也不用太在意,内核镜像本身并没有什么隐私数据,它会通过安全引导的加密签名检查来防止被篡改。 + +#### 选择一个好密码 + +现代的 Linux 系统没有限制密码口令长度,所以唯一的限制是你的偏执和倔强。如果你要启动你的系统,你将大概至少要输入两个不同的密码:一个解锁 LUKS ,另一个登录,所以长密码将会使你老的更快。最好从丰富或混合的词汇中选择2-3个单词长度,容易输入的密码。 + +优秀密码例子(是的,你可以使用空格): + +- nature abhors roombas +- 12 in-flight Jebediahs +- perdon, tengo flatulence + +如果你喜欢输入可以在公开场合和你生活中能见到的句子,比如: + +- Mary had a little lamb +- you're a wizard, Harry +- to infinity and beyond + +如果你愿意的话,你也应该带上最少要 10-12个字符长度的非词汇的密码。 + +除非你担心物理安全,你可以写下你的密码,并保存在一个远离你办公桌的安全的地方。 + +#### Root,用户密码和管理组 + +我们建议,你的 root 密码和你的 LUKS 加密使用同样的密码(除非你共享你的笔记本给信任的人,让他应该能解锁设备,但是不应该能成为 root 用户)。如果你是笔记本电脑的唯一用户,那么你的 root 密码与你的 LUKS 密码不同是没有安全优势上的意义的。通常,你可以使用同样的密码在你的 UEFI 管理,磁盘加密,和 root 登录中 -- 
知道这些任意一个都会让攻击者完全控制您的系统,在单用户工作站上使这些密码不同,没有任何安全益处。 + +你应该有一个不同的,但同样强健的常规用户帐户密码用来日常工作。这个用户应该是管理组用户(例如`wheel`或者类似,根据发行版不同),允许你执行`sudo`来提升权限。 + +换句话说,如果在你的工作站只有你一个用户,你应该有两个独特的、强健(robust)而强壮(strong)的密码需要记住: + +**管理级别**,用在以下方面: + +- UEFI 管理 +- 引导程序(GRUB) +- 磁盘加密(LUKS) +- 工作站管理(root 用户) + +**用户级别**,用在以下: + +- 用户登录和 sudo +- 密码管理器的主密码 + +很明显,如果有一个令人信服的理由的话,它们全都可以不同。 + +## 安装后的加固 + +安装后的安全加固在很大程度上取决于你选择的发行版,所以在一个像这样的通用文档中提供详细说明是徒劳的。然而,这里有一些你应该采取的步骤: + +### 检查清单 + +- [ ] 在全局范围内禁用火线和雷电模块 _(关键)_ +- [ ] 检查你的防火墙,确保过滤所有传入端口 _(关键)_ +- [ ] 确保 root 邮件转发到一个你可以收到的账户 _(关键)_ +- [ ] 建立一个系统自动更新任务,或更新提醒 _(中等)_ +- [ ] 检查以确保 sshd 服务默认情况下是禁用的 _(中等)_ +- [ ] 配置屏幕保护程序在一段时间的不活动后自动锁定 _(中等)_ +- [ ] 设置 logwatch _(中等)_ +- [ ] 安装使用 rkhunter _(中等)_ +- [ ] 安装一个入侵检测系统(Intrusion Detection System) _(中等)_ + +### 注意事项 + +#### 将模块列入黑名单 + +将火线和雷电模块列入黑名单,增加一行到`/etc/modprobe.d/blacklist-dma.conf`文件: + + blacklist firewire-core + blacklist thunderbolt + +重启后的这些模块将被列入黑名单。这样做是无害的,即使你没有这些端口(但也不做任何事)。 + +#### Root 邮件 + +默认的 root 邮件只是存储在系统基本上没人读过。确保你设置了你的`/etc/aliases`来转发 root 邮件到你确实能读取的邮箱,否则你也许错过了重要的系统通知和报告: + + # Person who should get root's mail + root: bob@example.com + +编辑后这些后运行`newaliases`,然后测试它确保能投递到,像一些邮件供应商将拒绝来自不存在的域名或者不可达的域名的邮件。如果是这个原因,你需要配置邮件转发直到确实可用。 + +#### 防火墙,sshd,和监听进程 + +默认的防火墙设置将取决于您的发行版,但是大多数都允许`sshd`端口连入。除非你有一个令人信服的合理理由允许连入 ssh,你应该过滤掉它,并禁用 sshd 守护进程。 + + systemctl disable sshd.service + systemctl stop sshd.service + +如果你需要使用它,你也可以临时启动它。 + +通常,你的系统不应该有任何侦听端口,除了响应 ping 之外。这将有助于你对抗网络级的零日漏洞利用。 + +#### 自动更新或通知 + +建议打开自动更新,除非你有一个非常好的理由不这么做,如果担心自动更新将使您的系统无法使用(以前发生过,所以这种担心并非杞人忧天)。至少,你应该启用自动通知可用的更新。大多数发行版已经有这个服务自动运行,所以你不需要做任何事。查阅你的发行版文档了解更多。 + +你应该尽快应用所有明显的勘误,即使这些不是特别贴上“安全更新”或有关联的 CVE 编号。所有的问题都有潜在的安全漏洞和新的错误,比起停留在旧的、已知的问题上,未知问题通常是更安全的策略。 + +#### 监控日志 + +你应该会对你的系统上发生了什么很感兴趣。出于这个原因,你应该安装`logwatch`然后配置它每夜发送在你的系统上发生的任何事情的活动报告。这不会预防一个专业的攻击者,但是一个不错的安全网络功能。 + +注意,许多 systemd 发行版将不再自动安装一个“logwatch”所需的 syslog 服务(因为 systemd 会放到它自己的日志中),所以你需要安装和启用“rsyslog”来确保在使用 logwatch 之前你的 /var/log 不是空的。 + +#### Rkhunter 和 IDS + 
+安装`rkhunter`和一个类似`aide`或者`tripwire`入侵检测系统(IDS)并不是那么有用,除非你确实理解它们如何工作,并采取必要的步骤来设置正确(例如,保证数据库在外部介质,从可信的环境运行检测,记住执行系统更新和配置更改后要刷新散列数据库,等等)。如果你不愿在你的工作站执行这些步骤,并调整你如何工作的方式,这些工具只能带来麻烦而没有任何实在的安全益处。 + +我们建议你安装`rkhunter`并每晚运行它。它相当易于学习和使用,虽然它不会阻止一个复杂的攻击者,它也能帮助你捕获你自己的错误。 + +## 个人工作站备份 + +工作站备份往往被忽视,或偶尔才做一次,这常常是不安全的方式。 + +### 检查清单 + +- [ ] 设置加密备份工作站到外部存储 _(关键)_ +- [ ] 使用零认知(zero-knowledge)备份工具备份到站外或云上 _(中等)_ + +### 注意事项 + +#### 全加密的备份存到外部存储 + +把全部备份放到一个移动磁盘中比较方便,不用担心带宽和上行网速(在这个时代,大多数供应商仍然提供显著的不对称的上传/下载速度)。不用说,这个移动硬盘本身需要加密(再说一次,通过 LUKS),或者你应该使用一个备份工具建立加密备份,例如`duplicity`或者它的 GUI 版本 `deja-dup`。我建议使用后者并使用随机生成的密码,保存到离线的安全地方。如果你带上笔记本去旅行,把这个磁盘留在家,以防你的笔记本丢失或被窃时可以找回备份。 + +除了你的家目录外,你还应该备份`/etc`目录和出于取证目的的`/var/log`目录。 + +尤其重要的是,避免拷贝你的家目录到任何非加密存储上,即使是需要快速的在两个系统上移动文件时,一旦完成你肯定会忘了清除它,从而暴露个人隐私或者安全信息到监听者手中 -- 尤其是把这个存储介质跟你的笔记本放到同一个包里。 + +#### 有选择的零认知站外备份 + +站外备份(Off-site backup)也是相当重要的,是否可以做到要么需要你的老板提供空间,要么找一家云服务商。你可以建一个单独的 duplicity/deja-dup 配置,只包括重要的文件,以免传输大量你不想备份的数据(网络缓存、音乐、下载等等)。 + +作为选择,你可以使用零认知(zero-knowledge)备份工具,例如 [SpiderOak][5],它提供一个卓越的 Linux GUI工具还有更多的实用特性,例如在多个系统或平台间同步内容。 + +## 最佳实践 + +下面是我们认为你应该采用的最佳实践列表。它当然不是非常详细的,而是试图提供实用的建议,来做到可行的整体安全性和可用性之间的平衡。 + +### 浏览 + +毫无疑问, web 浏览器将是你的系统上最大、最容易暴露的面临攻击的软件。它是专门下载和执行不可信、甚至是恶意代码的一个工具。它试图采用沙箱和代码清洁(code sanitization)等多种机制保护你免受这种危险,但是在之前它们都被击败了多次。你应该知道,在任何时候浏览网站都是你做的最不安全的活动。 + +有几种方法可以减少浏览器的影响,但这些真实有效的方法需要你明显改变操作您的工作站的方式。 + +#### 1: 使用两个不同的浏览器 _(关键)_ + +这很容易做到,但是只有很少的安全效益。并不是所有浏览器都可以让攻击者完全自由访问您的系统 -- 有时它们只能允许某人读取本地浏览器存储,窃取其它标签的活动会话,捕获浏览器的输入等。使用两个不同的浏览器,一个用在工作/高安全站点,另一个用在其它方面,有助于防止攻击者请求整个 cookie 存储的小问题。主要的不便是两个不同的浏览器会消耗大量内存。 + +我们建议: + +##### 火狐用来访问工作和高安全站点 + +使用火狐登录工作有关的站点,应该额外关心的是确保数据如 cookies,会话,登录信息,击键等等,明显不应该落入攻击者手中。除了少数的几个网站,你不应该用这个浏览器访问其它网站。 + +你应该安装下面的火狐扩展: + +- [ ] NoScript _(关键)_ + - NoScript 阻止活动内容加载,除非是在用户白名单里的域名。如果用于默认浏览器它会很麻烦(可是提供了真正好的安全效益),所以我们建议只在访问与工作相关的网站的浏览器上开启它。 + +- [ ] Privacy Badger _(关键)_ + - EFF 的 Privacy Badger 将在页面加载时阻止大多数外部追踪器和广告平台,有助于在这些追踪站点影响你的浏览器时避免跪了(追踪器和广告站点通常会成为攻击者的目标,因为它们能会迅速影响世界各地成千上万的系统)。 + +- [ ] HTTPS 
Everywhere _(关键)_ + - 这个 EFF 开发的扩展将确保你访问的大多数站点都使用安全连接,甚至你点击的连接使用的是 http://(可以有效的避免大多数的攻击,例如[SSL-strip][7])。 + +- [ ] Certificate Patrol _(中等)_ + - 如果你正在访问的站点最近改变了它们的 TLS 证书,这个工具将会警告你 -- 特别是如果不是接近失效期或者现在使用不同的证书颁发机构。它有助于警告你是否有人正尝试中间人攻击你的连接,不过它会产生很多误报。 + +你应该让火狐成为你打开连接时的默认浏览器,因为 NoScript 将在加载或者执行时阻止大多数活动内容。 + +##### 其它一切都用 Chrome/Chromium + +Chromium 开发者在增加很多很好的安全特性方面走在了火狐前面(至少[在 Linux 上][6]),例如 seccomp 沙箱,内核用户空间等等,这会成为一个你访问的网站与你其它系统之间的额外隔离层。Chromium 是上游开源项目,Chrome 是 Google 基于它构建的专有二进制包(加一句偏执的提醒,如果你有任何不想让谷歌知道的事情都不要使用它)。 + +推荐你在 Chrome 上也安装**Privacy Badger** 和 **HTTPS Everywhere** 扩展,然后给它一个与火狐不同的主题,以让它告诉你这是你的“不可信站点”浏览器。 + +#### 2: 使用两个不同浏览器,一个在专用的虚拟机里 _(中等)_ + +这有点像上面建议的做法,除了您将添加一个通过快速访问协议运行在专用虚拟机内部 Chrome 的额外步骤,它允许你共享剪贴板和转发声音事件(如,Spice 或 RDP)。这将在不可信浏览器和你其它的工作环境之间添加一个优秀的隔离层,确保攻击者完全危害你的浏览器将必须另外打破 VM 隔离层,才能达到系统的其余部分。 + +这是一个鲜为人知的可行方式,但是需要大量的 RAM 和高速的处理器来处理多增加的负载。这要求作为管理员的你需要相应地调整自己的工作实践而付出辛苦。 + +#### 3: 通过虚拟化完全隔离你的工作和娱乐环境 _(低等)_ + +了解下 [Qubes-OS 项目][3],它致力于通过划分你的应用到完全隔离的 VM 中来提供高度安全的工作环境。 + +### 密码管理器 + +#### 检查清单 + +- [ ] 使用密码管理器 _(关键)_ +- [ ] 不相关的站点使用不同的密码 _(关键)_ +- [ ] 使用支持团队共享的密码管理器 _(中等)_ +- [ ] 给非网站类账户使用一个单独的密码管理器 _(低等)_ + +#### 注意事项 + +使用好的、唯一的密码对你的团队成员来说应该是非常关键的需求。凭证(credential)盗取一直在发生 — 通过被攻破的计算机、盗取数据库备份、远程站点利用、以及任何其它的方式。凭证绝不应该跨站点重用,尤其是关键的应用。 + +##### 浏览器中的密码管理器 + +每个浏览器有一个比较安全的保存密码机制,可以同步到供应商维护的,并使用用户的密码保证数据加密。然而,这个机制有严重的劣势: + +1. 不能跨浏览器工作 +2. 
不提供任何与团队成员共享凭证的方法
+
+也有一些支持良好、免费或便宜的密码管理器,可以很好地融合到多个浏览器中,跨平台工作,并提供小组共享(通常是付费服务)。可以很容易地通过搜索引擎找到解决方案。
+
+##### 独立的密码管理器
+
+任何与浏览器结合的密码管理器都有一个主要的缺点:它实际上是应用的一部分,这样最有可能被入侵者攻击。如果这让你不放心(应该这样),你应该选择两个不同的密码管理器 -- 一个集成在浏览器中用来保存网站密码,一个作为独立运行的应用。后者可用于存储高风险凭证,如 root 密码、数据库密码、其它 shell 账户凭证等。
+
+这样的工具在团队成员间共享超级用户的凭据方面特别有用(服务器 root 密码、ILO 密码、数据库管理密码、引导程序密码等等)。
+
+这几个工具可以帮助你:
+
+- [KeePassX][8],在第 2 版中改进了团队共享
+- [Pass][9],它使用了文本文件和 PGP,并与 git 结合
+- [Django-Pstore][10],它使用 GPG 在管理员之间共享凭据
+- [Hiera-Eyaml][11],如果你已经在你的平台中使用了 Puppet,可以把你的服务器/服务凭证作为 Hiera 加密数据的一部分,便捷地进行管理。
+
+### 加固 SSH 与 PGP 的私钥
+
+个人加密密钥,包括 SSH 和 PGP 私钥,都是你工作站中最重要的物品 -- 这是攻击者最想得到的东西,有了它们就可以进一步攻击你的平台,或在其它管理员面前冒充你。你应该采取额外的步骤,确保你的私钥免遭盗窃。
+
+#### 检查清单
+
+- [ ] 用强健的密码来保护私钥 _(关键)_
+- [ ] PGP 的主密钥保存在移动存储中 _(中等)_
+- [ ] 用于身份验证、签名和加密的子密钥存储在智能卡设备中 _(中等)_
+- [ ] SSH 配置为以 PGP 认证密钥作为 ssh 私钥 _(中等)_
+
+#### 注意事项
+
+防止私钥被盗的最好方式是使用一个智能卡存储你的加密私钥,绝不要拷贝到工作站上。有几个厂商提供支持 OpenPGP 的设备:
+
+- [Kernel Concepts][12],在这里可以采购支持 OpenPGP 的智能卡和 USB 读取器,你会需要一个。
+- [Yubikey NEO][13],它除了提供 OpenPGP 智能卡功能外,还提供很多很酷的特性(U2F、PIV、HOTP 等等)。
+
+确保 PGP 主密钥没有存储在工作站上也很重要,日常仅使用子密钥。主密钥只在签名其它密钥和创建新的子密钥时使用 — 这些操作并不经常发生。你可以照着 [Debian 的子密钥][14]向导来学习如何将你的主密钥移动到移动存储上并创建子密钥。
+
+你应该配置你的 gnupg 代理作为 ssh 代理,然后使用智能卡上的 PGP 认证密钥作为你的 ssh 私钥。我们发布了一个[详尽的指导][15],介绍如何使用智能卡读取器或 Yubikey NEO 做到这一点。
+
+如果你不想那么麻烦,最少要确保你的 PGP 私钥和你的 SSH 私钥有个强健的密码,这将让攻击者很难盗取使用它们。
+
+### 休眠或关机,不要挂起
+
+当系统挂起时,内存中的内容仍然保留在内存芯片中,可能会被攻击者读取到(这叫做冷启动攻击(Cold Boot Attack))。如果你离开你的系统的时间较长,比如每天下班结束时,最好关机或者休眠,而不是挂起它或者就那么开着。
+
+### 工作站上的 SELinux
+
+如果你使用捆绑了 SELinux 的发行版(如 Fedora),这里有一些如何使用它的建议,让你的工作站达到最大限度的安全。
+
+#### 检查清单
+
+- [ ] 确保你的工作站强制(enforcing)使用 SELinux _(关键)_
+- [ ] 不要盲目地执行`audit2allow -M`,应该经常检查 _(关键)_
+- [ ] 绝不要 `setenforce 0` _(中等)_
+- [ ] 切换你的用户到 SELinux 用户`staff_u` _(中等)_
+
+#### 注意事项
+
+SELinux 是一种强制访问控制(Mandatory Access Controls,MAC)机制,是 POSIX 权限核心功能的扩展。它成熟而强健,自推出以来已经有了长足的发展。不管怎样,许多系统管理员现在仍旧重复着过时的口头禅“关掉它就行”。
+
+话虽如此,在工作站上 SELinux 带来的安全效益有限,因为大多数你想运行的应用都是可以自由运行的。开启它有益于给网络服务提供足够的保护,也有可能有助于防止攻击者通过脆弱的后台服务把权限提升到 root 级别。
+
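前面的检查清单要求确保工作站强制(enforcing)使用 SELinux。下面是一个示意脚本(并非原文内容,仅作演示;假设系统提供了 `getenforce` 命令,或者挂载了 `/sys/fs/selinux`),可以用来快速确认当前模式:

```shell
# 示意脚本:检查 SELinux 当前模式
# 假设:系统提供 getenforce 命令,或挂载了 /sys/fs/selinux;
# 两者都不存在时,视为本机未启用 SELinux
check_selinux_mode() {
    if command -v getenforce >/dev/null 2>&1; then
        getenforce
    elif [ -r /sys/fs/selinux/enforce ]; then
        # /sys/fs/selinux/enforce 为 1 表示 enforcing,0 表示 permissive
        if [ "$(cat /sys/fs/selinux/enforce)" = "1" ]; then
            echo "Enforcing"
        else
            echo "Permissive"
        fi
    else
        echo "Disabled"
    fi
}

check_selinux_mode
```

如果输出不是 `Enforcing`,通常需要检查 `/etc/selinux/config` 中的 `SELINUX=` 设置,并在修改后重启系统。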
+我们的建议是开启它并强制使用(enforcing)。
+
+##### 绝不`setenforce 0`
+
+使用`setenforce 0`临时把 SELinux 设置为许可(permissive)模式很有诱惑力,但是你应该避免这样做。当你只是想查找某个特定应用或者程序的问题时,这样做实际上把整个系统的 SELinux 给关闭了。
+
+你应该使用`semanage permissive -a [somedomain_t]`替换`setenforce 0`,只把这个程序的域放入许可模式。首先运行`ausearch`查看哪个程序发生了问题:
+
+    ausearch -ts recent -m avc
+
+然后看下`scontext=`(SELinux 源上下文)行,像这样:
+
+    scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023
+                             ^^^^^^^^^^^^^^
+
+这告诉你被拒绝的程序运行在`gpg_pinentry_t`域中,如果你想排查这个应用的故障,应该把它加入许可域:
+
+    semanage permissive -a gpg_pinentry_t
+
+这将允许你使用该应用并收集其它的 AVC 数据,你可以结合`audit2allow`来写一个本地策略。一旦确认不再出现新的 AVC 拒绝消息,你就可以通过运行以下命令把该程序从许可域中删除:
+
+    semanage permissive -d gpg_pinentry_t
+
+##### 用 SELinux 的角色 staff_r 使用你的工作站
+
+SELinux 带有角色(role)的原生实现,基于用户帐户相关的角色来禁止或授予某些特权。作为一个管理员,你应该使用`staff_r`角色,它会限制对很多配置文件和其它安全敏感文件的访问,除非你先执行`sudo`。
+
+默认情况下,用户以`unconfined_r`创建,可以自由运行大多数应用,没有任何(或只有一点)SELinux 约束。要转换你的用户到`staff_r`角色,运行下面的命令:
+
+    usermod -Z staff_u [username]
+
+你应该退出,然后以新的角色重新登录,届时如果你运行`id -Z`,你将会看到:
+
+    staff_u:staff_r:staff_t:s0-s0:c0.c1023
+
+在执行`sudo`时,你应该记住增加一个额外标志,告诉 SELinux 转换到“sysadmin”角色。你需要用的命令是:
+
+    sudo -i -r sysadm_r
+
+然后`id -Z`将会显示:
+
+    staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023
+
+**警告**:在进行这个切换前你应该能很顺畅地使用`ausearch`和`audit2allow`,因为当你以`staff_r`角色运行时,你的应用有可能不再工作了。在写作本文时,已知以下流行的应用在`staff_r`下没有做策略调整就不会工作:
+
+- Chrome/Chromium
+- Skype
+- VirtualBox
+
+要切换回`unconfined_r`,运行下面的命令:
+
+    usermod -Z unconfined_u [username]
+
+然后注销,再重新回到舒适区。
+
+## 延伸阅读
+
+IT 安全的世界是一个没有底的兔子洞。如果你想深入,或者想了解你的具体发行版更多的安全特性,请查看下面这些链接:
+
+- [Fedora 安全指南](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html)
+- [CESG Ubuntu 安全指南](https://www.gov.uk/government/publications/end-user-devices-security-guidance-ubuntu-1404-lts)
+- [Debian 安全手册](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html)
+- [Arch Linux 安全维基](https://wiki.archlinux.org/index.php/Security)
+- [Mac OSX 安全](https://www.apple.com/support/security/guides/)
+
+## 许可
+
+这项工作在[创作共用授权 4.0 国际许可证][0]许可下发布。
+
+-------------------------------------------------------------------------------- + +via: https://github.com/lfit/itpol/blob/bbc17d8c69cb8eee07ec41f8fbf8ba32fdb4301b/linux-workstation-security.md + +作者:[mricon][a] +译者:[wyangsun](https://github.com/wyangsun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://github.com/mricon +[0]: http://creativecommons.org/licenses/by-sa/4.0/ +[1]: https://github.com/QubesOS/qubes-antievilmaid +[2]: https://en.wikipedia.org/wiki/IEEE_1394#Security_issues +[3]: https://qubes-os.org/ +[4]: https://xkcd.com/936/ +[5]: https://spideroak.com/ +[6]: https://code.google.com/p/chromium/wiki/LinuxSandboxing +[7]: http://www.thoughtcrime.org/software/sslstrip/ +[8]: https://keepassx.org/ +[9]: http://www.passwordstore.org/ +[10]: https://pypi.python.org/pypi/django-pstore +[11]: https://github.com/TomPoulton/hiera-eyaml +[12]: http://shop.kernelconcepts.de/ +[13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/ +[14]: https://wiki.debian.org/Subkeys +[15]: https://github.com/lfit/ssh-gpg-smartcard-config +[16]: http://www.pavelkogan.com/2014/05/23/luks-full-disk-encryption/ +[17]: https://en.wikipedia.org/wiki/Cold_boot_attack +[18]: http://www.linux.com/news/featured-blogs/167-amanda-mcpherson/850607-linux-foundation-sysadmins-open-source-their-it-policies \ No newline at end of file diff --git a/published/201512/20150917 A Repository with 44 Years of Unix Evolution.md b/published/201512/20150917 A Repository with 44 Years of Unix Evolution.md new file mode 100755 index 0000000000..5c19f0180f --- /dev/null +++ b/published/201512/20150917 A Repository with 44 Years of Unix Evolution.md @@ -0,0 +1,220 @@ +一个涵盖 Unix 44 年进化史的版本仓库 +============================================================================= + +http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html + +This is an HTML rendering of a working paper draft 
that led to a publication. The publication should always be cited in preference to this draft using the following reference: + +- **Diomidis Spinellis**. [A repository with 44 years of Unix evolution](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html). In MSR '15: Proceedings of the 12th Working Conference on Mining Software Repositories, pages 13-16. IEEE, 2015. Best Data Showcase Award. ([doi:10.1109/MSR.2015.6](http://dx.doi.org/10.1109/MSR.2015.6)) + +This document is also available in [PDF format](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.pdf). + +The document's metadata is available in [BibTeX format](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c-bibtex.html). + +This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder. + +[Diomidis Spinellis Publications](http://www.dmst.aueb.gr/dds/pubs/) + +© 2015 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. 
+ +### 摘要 ### + +Unix 操作系统的进化历史,可以从一个版本控制仓库中窥见,时间跨度从 1972 年的 5000 行内核代码开始,到 2015 年成为一个含有 26,000,000 行代码的被广泛使用的系统。该仓库包含 659,000 条提交,和 2306 次合并。仓库部署了被普遍采用的 Git 系统用于储存其代码,并且在时下流行的 GitHub 上建立了存档。它由来自贝尔实验室(Bell Labs),伯克利大学(Berkeley University),386BSD 团队所开发的系统软件的 24 个快照综合定制而成,这包括两个老式仓库和一个开源 FreeBSD 系统的仓库。总的来说,可以确认其中的 850 位个人贡献者,更早些时候的一批人主要做基础研究。这些数据可以用于一些经验性的研究,在软件工程,信息系统和软件考古学领域。 + +### 1、介绍 ### + +Unix 操作系统作为一个主要的工程上的突破而脱颖而出,得益于其模范的设计、大量的技术贡献、它的开发模型及广泛的使用。Unix 编程环境的设计已经被视为一个提供非常简洁、强大而优雅的设计 [[1][1]] 。在技术方面,许多对 Unix 有直接贡献的,或者因 Unix 而流行的特性就包括 [[2][2]] :用高级语言编写的可移植部署的内核;一个分层式设计的文件系统;兼容的文件,设备,网络和进程间 I/O;管道和过滤架构;虚拟文件系统;和作为普通进程的可由用户选择的不同 shell。很早的时候,就有一个庞大的社区为 Unix 贡献软件 [[3][3]] ,[[4][4],pp. 65-72] 。随时间流逝,这个社区不断壮大,并且以现在称为开源软件开发的方式在工作着 [[5][5],pp. 440-442] 。Unix 和其睿智的晚辈们也将 C 和 C++ 编程语言、分析程序和词法分析生成器(*yacc*,*lex*)、文档编制工具(*troff*,*eqn*,*tbl*)、脚本语言(*awk*,*sed*,*Perl*)、TCP/IP 网络、和配置管理系统(configuration management system)(*SCSS*,*RCS*,*Subversion*,*Git*)发扬广大了,同时也形成了现代互联网基础设施和网络的最大的部分。 + +幸运的是,一些重要的具有历史意义的 Unix 材料已经保存下来了,现在保持对外开放。尽管 Unix 最初是由相对严格的协议发行,但在早期的开发中,很多重要的部分是通过 Unix 的版权拥有者之一(Caldera International) (LCTT 译注:2002年改名为 SCO Group)以一个自由的协议发行。通过将这些部分再结合上由加州大学伯克利分校(University of California, Berkeley)和 FreeBSD 项目组开发或发布的开源软件,贯穿了从 1972 年六月二十日开始到现在的整个系统的开发。 + +通过规划和处理这些可用的快照以及或旧或新的配置管理仓库,将这些可用数据的大部分重建到一个新合成的 Git 仓库之中。这个仓库以数字的形式记录了过去44年来最重要的数字时代产物的详细的进化。下列章节描述了该仓库的结构和内容(第[2][6]节)、创建方法(第[3][7]节)和该如何使用(第[4][8]节)。 + +### 2、数据概览 ### + +这 1GB 的 Unix 历史仓库可以从 [GitHub][9] 上克隆^[1][10] 。如今^[2][11] ,这个仓库包含来自 850 个贡献者的 659,000 个提交和 2,306 个合并。贡献者有来自贝尔实验室(Bell Labs)的 23 个员工,伯克利大学(Berkeley University)的计算机系统研究组(Computer Systems Research Group)(CSRG)的 158 个人,和 FreeBSD 项目的 660 个成员。 + +这个仓库的生命始于一个 *Epoch* 的标签,这里面只包含了证书信息和现在的 README 文件。其后各种各样的标签和分支记录了很多重要的时刻。 + +- *Research-VX* 标签对应来自贝尔实验室(Bell Labs)六个研究版本。从 *Research-V1* (4768 行 PDP-11 汇编代码)开始,到以 *Research-V7* (大约 324,000 行代码,1820 个 C 文件)结束。 +- *Bell-32V* 是第七个版本 Unix 在 DEC/VAX 架构上的移植。 +- *BSD-X* 标签对应伯克利大学(Berkeley University)释出的 15 个快照。 +- *386BSD-X* 
标签对应该系统的两个开源版本,主要是 Lynne 和 William Jolitz 写的适用于 Intel 386 架构的内核代码。 +- *FreeBSD-release/X* 标签和分支标记了来自 FreeBSD 项目的 116 个发行版。 + +另外,以 *-Snapshot-Development* 为后缀的分支,表示该提交由来自一个以时间排序的快照文件序列而合成;而以一个 *-VCS-Development* 为后缀的标签,标记了有特定发行版出现的历史分支的时刻。 + +仓库的历史包含从系统开发早期的一些提交,比如下面这些。 + + commit c9f643f59434f14f774d61ee3856972b8c3905b1 + Author: Dennis Ritchie + Date: Mon Dec 2 18:18:02 1974 -0500 + Research V5 development + Work on file usr/sys/dmr/kl.c + +两个发布之间的合并代表着系统发生了进化,比如 BSD 3 的开发来自 BSD2 和 Unix 32/V,它在 Git 仓库里正是被表示为带两个父节点的图形节点。 + +更为重要的是,以这种方式构造的仓库允许 **git blame**,就是可以给源代码行加上注释,如版本、日期和它们第一次出现相关联的作者,这样可以知道任何代码的起源。比如说,检出 **BSD-4** 这个标签,并在内核的 *pipe.c* 文件上运行一下 git blame,就会显示出由 Ken Thompson 写于 1974,1975 和 1979年的代码行,和 Bill Joy 写于 1980 年的。这就可以自动(尽管计算上比较费事)检测出任何时刻出现的代码。 + +![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/provenance.png) + +*图1:各个重大 Unix 发行版的代码来源* + +如[上图][12]所示,现代版本的 Unix(FreeBSD 9)依然有相当部分的来自 BSD 4.3,BSD 4.3 Net/2 和 BSD 2.0 的代码块。有趣的是,这图片显示有部分代码好像没有保留下来,当时激进地要创造一个脱离于伯克利(386BSD 和 FreeBSD 1.0)所释出代码的开源操作系统。FreeBSD 9 中最古老的代码是一个 18 行的队列,在 C 库里面的 timezone.c 文件里,该文件也可以在第七版的 Unix 文件里找到,同样的名字,时间戳是 1979 年一月十日 - 36 年前。 + +### 3、数据收集和处理 ### + +这个项目的目的是以某种方式巩固从数据方面说明 Unix 的进化,通过将其并入一个现代的版本仓库,帮助人们对系统进化的研究。项目工作包括收录数据,分类并综合到一个单独的 Git 仓库里。 + +![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/branches.png) + +*图2:导入 Unix 快照、仓库及其合并* + +项目以三种数据类型为基础(见[图2][13])。首先,早期发布版本的快照,获取自 [Unix 遗产社会归档(Unix Heritage Society archive)][14]^[3][15] 、包括了 CSRG 全部的源代码归档的 [CD-ROM 镜像][16]^[4][17] , [Oldlinux 网站][18]^[5][19] 和 [FreeBSD 归档][20]^[6][21] 。 其次,以前的和现在的仓库,即 CSRG SCCS [[6][22]] 仓库,FreeBSD 1 CVS 仓库,和[现代 FreeBSD 开发的 Git 镜像][23]^[7][24] 。前两个都是从和快照相同的来源获得的。 + +最后,也是最费力的数据源是 **初步研究(primary research)**。释出的快照并没有提供关于它们的源头和每个文件贡献者的信息。因此,这些信息片段需要通过初步研究(primary research)验证。至于作者信息主要通过作者的自传,研究论文,内部备忘录和旧文档扫描件;通过阅读并且自动处理源代码和帮助页面补充;通过与那个年代的人用电子邮件交流;在 *StackExchange* 网站上贴出疑问;查看文件的位置(在早期的内核版本的源代码,分为 `usr/sys/dmr` 和 `/usr/sys/ken` 
两个位置);从研究论文和帮助手册披露的作者找到源代码,从一个又一个的发行版中获取。(有趣的是,第一和第二的研究版(Research Edition)帮助页面都有一个 “owner” 部分,列出了作者(比如,*Ken*)及对应的系统命令、文件、系统调用或库函数。在第四版中这个部分就没了,而在 BSD 发行版中又浮现了 “Author” 部分。)关于作者信息更为详细地写在了项目的文件中,这些文件被用于匹配源代码文件和它们的作者和对应的提交信息。最后,关于源代码库之间的合并信息是获取自[ NetBSD 项目所维护的 BSD 家族树][25]^[8][26] 。 + +作为本项目的一部分而开发的软件和数据文件,现在可以[在线获取][27]^[9][28] ,并且,如果有合适的网络环境,CPU 和磁盘资源,可以用来从头构建这样一个仓库。关于主要发行版的作者信息,都存储在本项目的 `author-path` 目录下的文件里。它们的内容中带有正则表达式的文件路径后面指出了相符的作者。可以指定多个作者。正则表达式是按线性处理的,所以一个文件末尾的匹配一切的表达式可以指定一个发行版的默认作者。为避免重复,一个以 `.au` 后缀的独立文件专门用于映射作者的识别号(identifier)和他们的名字及 email。这样一个文件为每个与该系统进化相关的社区都建立了一个:贝尔实验室(Bell Labs),伯克利大学(Berkeley University),386BSD 和 FreeBSD。为了真实性的需要,早期贝尔实验室(Bell Labs)发行版的 emails 都以 UUCP 注释(UUCP notation)方式列出(例如, `research!ken`)。FreeBSD 作者的识别映射,需要导入早期的 CVS 仓库,通过从如今项目的 Git 仓库里拆解对应的数据构建。总的来说,由 1107 行构成了注释作者信息的文件(828 个规则),并且另有 640 行用于映射作者的识别号到名字。 + +现在项目的数据源被编码成了一个 168 行的 `Makefile`。它包括下面的步骤。 + +**Fetching** 从远程站点复制和克隆大约 11GB 的镜像、归档和仓库。 + +**Tooling** 从 2.9 BSD 中为旧的 PDP-11 归档获取一个归档器,并调整它以在现代的 Unix 版本下编译;编译 4.3 BSD 的 *compress* 程序来解压 386BSD 发行版,这个程序不再是现代 Unix 系统的组成部分了。 + +**Organizing** 用 *tar* 和 *cpio* 解压缩包;合并第六个研究版的三个目录;用旧的 PDP-11 归档器解压全部一个 BSD 归档;挂载 CD-ROM 镜像,这样可以作为文件系统处理;合并第 8 和 62 的 386BSD 磁盘镜像为两个独立的文件。 + +**Cleaning** 恢复第一个研究版的内核源代码文件,这个可以通过 OCR 从打印件上得到近似其原始状态的的格式;给第七个研究版的源代码文件打补丁;移除发行后被添加进来的元数据和其他文件,为避免得到错误的时间戳信息;修复毁坏的 SCCS 文件;用一个定制的 Perl 脚本移除指定到多个版本的 CVS 符号、删除与现在冲突的 CVS *Attr* 文件、用 *cvs2svn* 将 CVS 仓库转换为 Git 仓库,以处理早期的 FreeBSD CVS 仓库。 + +在仓库再现(representation)中有一个很有意思的部分就是,如何导入那些快照,并以一种方式联系起来,使得 *git blame* 可以发挥它的魔力。快照导入到仓库是基于每个文件的时间戳作为一系列的提交实现的。当所有文件导入后,就被用对应发行版的名字给标记了。然后,可以删除那些文件,并开始导入下一个快照。注意 *git blame* 命令是通过回溯一个仓库的历史来工作的,并使用启发法(heuristics)来检测文件之间或文件内的代码移动和复制。因此,删除掉的快照间会产生中断,以防止它们之间的代码被追踪。 + +相反,在下一个快照导入之前,之前快照的所有文件都被移动到了一个隐藏的后备目录里,叫做 `.ref`(引用)。它们保存在那,直到下个快照的所有文件都被导入了,这时候它们就会被删掉。因为 `.ref` 目录下的每个文件都精确对应一个原始文件,*git blame* 可以知道多少源代码通过 `.ref` 文件从一个版本移到了下一个,而不用显示出 `.ref` 
文件。为了更进一步帮助检测代码起源,同时增加再现(representation)的真实性,每个发行版都被再现(represented)为一个有增量文件的分支(*-Development*)与之前发行版之间的合并。 + +上世纪 80 年代时期,只有伯克利(Berkeley) 开发的文件的一个子集是用 SCCS 版本控制的。在那个期间,我们的统一仓库里包含了来自 SCCS 的提交和快照的增量文件的导入数据。对于每个发行版,可用最近的时间戳找到该 SCCS 提交,并被标记为一个与发行版增量导入分支的合并。这些合并可以在[图2][29] 的中间看到。 + +将各种数据资源综合到一个仓库的工作,主要是用两个脚本来完成的。一个 780 行的 Perl 脚本(`import-dir.pl`)可以从一个单独的数据源(快照目录、SCCS 仓库,或者 Git 仓库)中,以 *Git fast export* 格式导出(真实的或者综合的)提交历史。输出是一个简单的文本格式,Git 工具用这个来导入和导出提交。其他方面,这个脚本以一些东西为参数,如文件到贡献者的映射、贡献者登录名和他们的全名间的映射、哪个导入的提交会被合并、哪些文件要处理和忽略、以及“引用”文件的处理。一个 450 行的 Shell 脚本创建 Git 仓库,并调用带适当参数的 Perl 脚本,来导入 27 个可用的历史数据资源。Shell 脚本也会运行 30 个测试,比较特定标签的仓库和对应的数据源,核对查看的目录中出现的和没出现的,并回溯查看分支树和合并的数量,*git blame* 和 *git log* 的输出。最后,调用 *git* 作垃圾收集和仓库压缩,从最初的 6GB 降到分发的 1GB 大小。 + +### 4、数据使用 ### + +该数据可以用于软件工程、信息系统和软件考古学(software archeology)领域的经验性研究。鉴于它从不间断而独一无二的存在了超过了 40 年,可以供软件进化和跨代更迭参考。从那时以来,处理速度已经成千倍地增长、存储容量扩大了百万倍,该数据同样可以用于软件和硬件技术交叉进化(co-evolution)的研究。软件开发从研究中心到大学,到开源社区的转移,可以用来研究组织文化对于软件开发的影响。该仓库也可以用于学习著名人物的实际编程,比如 Turing 奖获得者(Dennis Ritchie 和 Ken Thompson)和 IT 产业的大佬(Bill Joy 和 Eric Schmidt)。另一个值得学习的现象是代码的长寿,无论是单行的水平,或是作为那个时代随 Unix 发布的完整的系统(Ingres、 Lisp、 Pascal、 Ratfor、 Snobol、 TMP),和导致代码存活或消亡的因素。最后,因为该数据让 Git 感到了压力,底层的软件仓库存储技术达到了其极限,这会推动版本管理系统领域的工程进度。 + +![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/metrics.png) + +*图3:Unix 发行版的代码风格进化* + +[图3][30] 根据 36 个主要 Unix 发行版描述了一些有趣的代码统计的趋势线(用 R 语言的局部多项式回归拟合函数生成),验证了代码风格和编程语言的使用在很长的时间尺度上的进化。这种进化是软硬件技术的需求和支持、软件构筑理论,甚至社会力量所驱动的。图片中的日期计算了出现在一个给定发行版中的所有文件的平均日期。正如可以从中看到,在过去的 40 年中,标示符和文件名字的长度已经稳步从 4 到 6 个字符增长到 7 到 11 个字符。我们也可以看到注释数量的少量稳步增加,以及 *goto* 语句的使用量减少,同时 *register* 这个类型修饰符的消失。 + +### 5、未来的工作 ### + +可以做很多事情去提高仓库的正确性和有效性。创建过程以开源代码共享了,通过 GitHub 的拉取请求(pull request),可以很容易地贡献更多代码和修复。最有用的社区贡献将使得导入的快照文件的覆盖面增长,以便归属于某个具体的作者。现在,大约 90,000 个文件(在 160,000 总量之外)通过默认规则指定了作者。类似地,大约有 250 个作者(最初 FreeBSD 那些)仅知道其识别号。两个都列在了 build 仓库的 unmatched 目录里,欢迎贡献数据。进一步,BSD SCCS 和 FreeBSD CVS 的提交共享相同的作者和时间戳,这些可以结合成一个单独的 Git 提交。导入 SCCS 
文件提交的支持会被添加进来,以便引入仓库对应的元数据。最后,也是最重要的,开源系统的更多分支会添加进来,比如 NetBSD、 OpenBSD、DragonFlyBSD 和 *illumos*。理想情况下,其他历史上重要的 Unix 发行版,如 System III、System V、 NeXTSTEP 和 SunOS 等的当前版权拥有者,也会在一个允许他们的合作伙伴使用仓库用于研究的协议下释出他们的系统。 + +### 鸣谢 ### + +本文作者感谢很多付出努力的人们。 Brian W. Kernighan, Doug McIlroy 和 Arnold D. Robbins 在贝尔实验室(Bell Labs)的登录识别号方面提供了帮助。 Clem Cole, Era Erikson, Mary Ann Horton, Kirk McKusick, Jeremy C. Reed, Ingo Schwarze 和 Anatole Shaw 在 BSD 的登录识别号方面提供了帮助。BSD SCCS 的导入代码是基于 H. Merijn Brand 和 Jonathan Gray 的工作。 + +这次研究由欧盟 ( 欧洲社会基金(European Social Fund,ESF)) 和 希腊国家基金(Greek national funds)通过国家战略参考框架( National Strategic Reference Framework ,NSRF) 的 Operational Program " Education and Lifelong Learning" - Research Funding Program: Thalis - Athens University of Economics and Business - Software Engineering Research Platform ,共同出资赞助。 + +### 引用 ### + +[[1]][31] + M. D. McIlroy, E. N. Pinson, and B. A. Tague, "UNIX time-sharing system: Foreword," *The Bell System Technical Journal*, vol. 57, no. 6, pp. 1899-1904, July-August 1978. + +[[2]][32] + D. M. Ritchie and K. Thompson, "The UNIX time-sharing system," *Bell System Technical Journal*, vol. 57, no. 6, pp. 1905-1929, July-August 1978. + +[[3]][33] + D. M. Ritchie, "The evolution of the UNIX time-sharing system," *AT&T Bell Laboratories Technical Journal*, vol. 63, no. 8, pp. 1577-1593, Oct. 1984. + +[[4]][34] + P. H. Salus, *A Quarter Century of UNIX*. Boston, MA: Addison-Wesley, 1994. + +[[5]][35] + E. S. Raymond, *The Art of Unix Programming*. Addison-Wesley, 2003. + +[[6]][36] + M. J. Rochkind, "The source code control system," *IEEE Transactions on Software Engineering*, vol. SE-1, no. 4, pp. 255-265, 1975. + +---------- + +#### 脚注 #### + +[1][37] - [https://github.com/dspinellis/unix-history-repo][38] + +[2][39] - Updates may add or modify material. To ensure replicability the repository's users are encouraged to fork it or archive it. 
+ +[3][40] - [http://www.tuhs.org/archive_sites.html][41] + +[4][42] - [https://www.mckusick.com/csrg/][43] + +[5][44] - [http://www.oldlinux.org/Linux.old/distributions/386BSD][45] + +[6][46] - [http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/][47] + +[7][48] - [https://github.com/freebsd/freebsd][49] + +[8][50] - [http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree][51] + +[9][52] - [https://github.com/dspinellis/unix-history-make][53] + +-------------------------------------------------------------------------------- + +via: http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html + +作者:Diomidis Spinellis +译者:[wi-cuckoo](https://github.com/wi-cuckoo) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#MPT78 +[2]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#RT78 +[3]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Rit84 +[4]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Sal94 +[5]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Ray03 +[6]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:data +[7]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:dev +[8]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:use +[9]:https://github.com/dspinellis/unix-history-repo +[10]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAB +[11]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAC +[12]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:provenance 
+[13]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches +[14]:http://www.tuhs.org/archive_sites.html +[15]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAD +[16]:https://www.mckusick.com/csrg/ +[17]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAE +[18]:http://www.oldlinux.org/Linux.old/distributions/386BSD +[19]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAF +[20]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/ +[21]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAG +[22]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#SCCS +[23]:https://github.com/freebsd/freebsd +[24]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAH +[25]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree +[26]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAI +[27]:https://github.com/dspinellis/unix-history-make +[28]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAJ +[29]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches +[30]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:metrics +[31]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITEMPT78 +[32]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERT78 +[33]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERit84 +[34]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESal94 +[35]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERay03 +[36]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESCCS 
+[37]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAB +[38]:https://github.com/dspinellis/unix-history-repo +[39]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAC +[40]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAD +[41]:http://www.tuhs.org/archive_sites.html +[42]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAE +[43]:https://www.mckusick.com/csrg/ +[44]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAF +[45]:http://www.oldlinux.org/Linux.old/distributions/386BSD +[46]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAG +[47]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/ +[48]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAH +[49]:https://github.com/freebsd/freebsd +[50]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAI +[51]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree +[52]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAJ +[53]:https://github.com/dspinellis/unix-history-make diff --git a/published/201512/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md b/published/201512/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md new file mode 100644 index 0000000000..2f6780cdc2 --- /dev/null +++ b/published/201512/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md @@ -0,0 +1,101 @@ +UNIX 家族小史 +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png) + +要记住,当一扇门在你面前关闭的时候,另一扇门就会打开。肯·汤普森([Ken Thompson][1]) 和丹尼斯·里奇([Dennis Richie][2]) 两个人就是这句名言很好的实例。他们俩是**20世纪**最优秀的信息技术专家之二,因为他们创造了最具影响力和创新性的软件之一: **UNIX**。 + +### UNIX 系统诞生于贝尔实验室 
### + +**UNIX** 最开始的名字是 **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice),它有一个大家庭,并不是从石头缝里蹦出来的。UNIX的祖父是 **CTSS** (**C**ompatible **T**ime **S**haring **S**ystem),它的父亲是 **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice),这个系统能支持大量用户通过交互式分时(timesharing)的方式使用大型机。 + +UNIX 诞生于 **1969** 年,由**肯·汤普森**以及后来加入的**丹尼斯·里奇**共同完成。这两位优秀的研究员和科学家在一个**通用电器 GE**和**麻省理工学院**的合作项目里工作,项目目标是开发一个叫 Multics 的交互式分时系统。 + +Multics 的目标是整合分时技术以及当时其他先进技术,允许用户在远程终端通过电话(拨号)登录到主机,然后可以编辑文档,阅读电子邮件,运行计算器,等等。 + +在之后的五年里,AT&T 公司为 Multics 项目投入了数百万美元。他们购买了 GE-645 大型机,聚集了贝尔实验室的顶级研究人员,例如肯·汤普森、 Stuart Feldman、丹尼斯·里奇、道格拉斯·麦克罗伊(M. Douglas McIlroy)、 Joseph F. Ossanna 以及 Robert Morris。但是项目目标太过激进,进度严重滞后。最后,AT&T 高层决定放弃这个项目。 + +贝尔实验室的管理层决定停止这个让许多研究人员无比纠结的操作系统上的所有遗留工作。不过要感谢汤普森,里奇和一些其他研究员,他们把老板的命令丢到一边,并继续在实验室里满怀热心地忘我工作,最终孵化出前无古人后无来者的 UNIX。 + +UNIX 的第一声啼哭是在一台 PDP-7 微型机上,它是汤普森测试自己在操作系统设计上的点子的机器,也是汤普森和 里奇一起玩 Space and Travel 游戏的模拟器。 + +> “我们想要的不仅是一个优秀的编程环境,而是能围绕这个系统形成团体。按我们自己的经验,通过远程访问和分时主机实现的公共计算,本质上不只是用终端输入程序代替打孔机而已,而是鼓励密切沟通。”丹尼斯·里奇说。 + +UNIX 是第一个靠近理想的系统,在这里程序员可以坐在机器前自由摆弄程序,探索各种可能性并随手测试。在 UNIX 整个生命周期里,它吸引了大量因其他操作系统限制而投身过来的高手做出无私贡献,因此它的功能模型一直保持上升趋势。 + +UNIX 在 1970 年因为 PDP-11/20 获得了首次资金注入,之后正式更名为 UNIX 并支持在 PDP-11/20 上运行。UNIX 带来的第一次用于实际场景中是在 1971 年,贝尔实验室的专利部门配备来做文字处理。 + +### UNIX 上的 C 语言革命 ### + +丹尼斯·里奇在 1972 年发明了一种叫 “**C**” 的高级编程语言 ,之后他和肯·汤普森决定用 “C” 重写 UNIX 系统,来支持更好的移植性。他们在那一年里编写和调试了差不多 100,000 行代码。在迁移到 “C” 语言后,系统可移植性非常好,只需要修改一小部分机器相关的代码就可以将 UNIX 移植到其他计算机平台上。 + +UNIX 第一次公开露面是 1973 年丹尼斯·里奇和肯·汤普森在操作系统原理(Operating Systems Principles)上发表的一篇论文,然后 AT&T 发布了 UNIX 系统第 5 版,并授权给教育机构使用,之后在 1975 年第一次以 **$20.000** 的价格授权企业使用 UNIX 第 6 版。应用最广泛的是 1980 年发布的 UNIX 第 7 版,任何人都可以购买授权,只是授权条款非常严格。授权内容包括源代码,以及用 PDP-11 汇编语言写的及其相关内核。反正,各种版本 UNIX 系统完全由它的用户手册确定。 + +### AIX 系统 ### + +在 **1983** 年,**微软**计划开发 **Xenix** 作为 MS-DOS 的多用户版继任者,他们在那一年花了 $8,000 搭建了一台拥有 **512 KB** 内存以及 **10 MB**硬盘并运行 Xenix 的 Altos 586。而到 1984 年为止,全世界 UNIX System V 第二版的安装数量已经超过了 100,000 。在 1986 年发布了包含因特网域名服务的 4.3BSD,而且 **IBM** 宣布 **AIX 
系统**的安装数已经超过 250,000。AIX 基于 Unix System V 开发,这套系统拥有 BSD 风格的根文件系统,是两者的结合。 + +AIX 第一次引入了 **日志文件系统 (JFS)** 以及集成逻辑卷管理器 (Logical Volume Manager ,LVM)。IBM 在 1989 年将 AIX 移植到自己的 RS/6000 平台。2001 年发布的 5L 版是一个突破性的版本,提供了 Linux 友好性以及支持 Power4 服务器的逻辑分区。 + +在 2004 年发布的 AIX 5.3 引入了支持高级电源虚拟化( Advanced Power Virtualization,APV)的虚拟化技术,支持对称多线程、微分区,以及共享处理器池。 + +在 2007 年,IBM 同时发布 AIX 6.1 和 Power6 架构,开始加强自己的虚拟化产品。他们还将高级电源虚拟化重新包装成 PowerVM。 + +这次改进包括被称为 WPARs 的负载分区形式,类似于 Solaris 的 zones/Containers,但是功能更强。 + +### HP-UX 系统 ### + +**惠普 UNIX (Hewlett-Packard’s UNIX,HP-UX)** 源于 System V 第 3 版。这套系统一开始只支持 PA-RISC HP 9000 平台。HP-UX 第 1 版发布于 1984 年。 + +HP-UX 第 9 版引入了 SAM,一个基于字符的图形用户界面 (GUI),用户可以用来管理整个系统。在 1995 年发布的第 10 版,调整了系统文件分布以及目录结构,变得有点类似 AT&T SVR4。 + +第 11 版发布于 1997 年。这是 HP 第一个支持 64 位寻址的版本。不过在 2000 年重新发布成 11i,因为 HP 为特定的信息技术用途,引入了操作环境(operating environments)和分级应用(layered applications)的捆绑组(bundled groups)。 + +在 2001 年发布的 11.20 版宣称支持安腾(Itanium)系统。HP-UX 是第一个使用 ACLs(访问控制列表,Access Control Lists)管理文件权限的 UNIX 系统,也是首先支持内建逻辑卷管理器(Logical Volume Manager)的系统之一。 + +如今,HP-UX 因为 HP 和 Veritas 的合作关系使用了 Veritas 作为主文件系统。 + +HP-UX 目前的最新版本是 11iv3, update 4。 + +### Solaris 系统 ### + +Sun 的 UNIX 版本是 **Solaris**,用来接替 1992 年创建的 **SunOS**。SunOS 一开始基于 BSD(伯克利软件发行版,Berkeley Software Distribution)风格的 UNIX,但是 SunOS 5.0 版以及之后的版本都是基于重新包装为 Solaris 的 Unix System V 第 4 版。 + +SunOS 1.0 版于 1983 年发布,用于支持 Sun-1 和 Sun-2 平台。随后在 1985 年发布了 2.0 版。在 1987 年,Sun 和 AT&T 宣布合作一个项目以 SVR4 为基础将 System V 和 BSD 合并成一个版本。 + +Solaris 2.4 是 Sun 发布的第一个 Sparc/x86 版本。1994 年 11 月份发布的 SunOS 4.1.4 版是最后一个版本。Solaris 7 是首个 64 位 Ultra Sparc 版本,加入了对文件系统元数据记录的原生支持。 + +Solaris 9 发布于 2002 年,支持 Linux 特性以及 Solaris 卷管理器(Solaris Volume Manager)。之后,2005 年发布了 Solaris 10,带来许多创新,比如支持 Solaris Containers,新的 ZFS 文件系统,以及逻辑域(Logical Domains)。 + +目前 Solaris 最新的版本是 第 10 版,最后的更新发布于 2008 年。 + +### Linux ### + +到了 1991 年,用来替代商业操作系统的自由(free)操作系统的需求日渐高涨。因此,**Linus Torvalds** 开始构建一个自由的操作系统,最终成为 **Linux**。Linux 最开始只有一些 “C” 文件,并且使用了阻止商业发行的授权。Linux 是一个类 UNIX 系统但又不尽相同。 + +2015 年发布了基于 GNU 
Public License (GPL)授权的 3.18 版。IBM 声称有超过 1800 万行开源代码开源给开发者。 + +如今 GNU Public License 是应用最广泛的自由软件授权方式。根据开源软件原则,这份授权允许个人和企业自由分发、运行、通过拷贝共享、学习,以及修改软件源码。 + +### UNIX vs. Linux:技术概要 ### + +- Linux 鼓励多样性,Linux 的开发人员来自各种背景,有更多不同经验和意见。 +- Linux 比 UNIX 支持更多的平台和架构。 +- UNIX 商业版本的开发人员针对特定目标平台以及用户设计他们的操作系统。 +- **Linux 比 UNIX 有更好的安全性**,更少受病毒或恶意软件攻击。截止到现在,Linux 上大约有 60-100 种病毒,但是没有任何一种还在传播。另一方面,UNIX 上大约有 85-120 种病毒,但是其中有一些还在传播中。 +- 由于 UNIX 命令、工具和元素很少改变,甚至很多接口和命令行参数在后续 UNIX 版本中一直沿用。 +- 有些 Linux 开发项目以自愿为基础进行资助,比如 Debian。其他项目会维护一个和商业 Linux 的社区版,比如 SUSE 的 openSUSE 以及红帽的 Fedora。 +- 传统 UNIX 是纵向扩展,而另一方面 Linux 是横向扩展。 + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/ + +作者:[M.el Khamlichi][a] +译者:[zpl1025](https://github.com/zpl1025) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/pirat9/ +[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/ +[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/ diff --git a/translated/tech/20151020 how to h2 in apache.md b/published/201512/20151020 how to h2 in apache.md similarity index 55% rename from translated/tech/20151020 how to h2 in apache.md rename to published/201512/20151020 how to h2 in apache.md index 32420d5bf4..add5bb7560 100644 --- a/translated/tech/20151020 how to h2 in apache.md +++ b/published/201512/20151020 how to h2 in apache.md @@ -8,45 +8,44 @@ Copyright (C) 2015 greenbytes GmbH ### 源码 ### -你可以从[这里][1]得到 Apache 发行版。Apache 2.4.17 及其更高版本都支持 HTTP/2。我不会再重复介绍如何构建服务器的指令。在很多地方有很好的指南,例如[这里][2]。 +你可以从[这里][1]得到 Apache 版本。Apache 2.4.17 及其更高版本都支持 HTTP/2。我不会再重复介绍如何构建该服务器的指令。在很多地方有很好的指南,例如[这里][2]。 -(有任何试验的链接?在 Twitter 上告诉我吧 @icing) +(有任何这个试验性软件包的相关链接?在 Twitter 上告诉我吧 @icing) -#### 编译支持 HTTP/2 #### +#### 编译支持 HTTP/2 #### -在你编译发行版之前,你要进行一些**配置**。这里有成千上万的选项。和 HTTP/2 相关的是: 
+在你编译版本之前,你要进行一些**配置**。这里有成千上万的选项。和 HTTP/2 相关的是: - **--enable-http2** - 启用在 Apache 服务器内部实现协议的 ‘http2’ 模块。 + 启用在 Apache 服务器内部实现该协议的 ‘http2’ 模块。 -- **--with-nghttp2=** +- **--with-nghttp2=\** 指定 http2 模块需要的 libnghttp2 模块的非默认位置。如果 nghttp2 是在默认的位置,配置过程会自动采用。 - **--enable-nghttp2-staticlib-deps** - 很少用到的选项,你可能用来静态链接 nghttp2 库到服务器。在大部分平台上,只有在找不到共享 nghttp2 库时才有效。 + 很少用到的选项,你可能想将 nghttp2 库静态链接到服务器里。在大部分平台上,只有在找不到共享 nghttp2 库时才有用。 -如果你想自己编译 nghttp2,你可以到 [nghttp2.org][3] 查看文档。最新的 Fedora 以及其它发行版已经附带了这个库。 +如果你想自己编译 nghttp2,你可以到 [nghttp2.org][3] 查看文档。最新的 Fedora 以及其它版本已经附带了这个库。 #### TLS 支持 #### -大部分人想在浏览器上使用 HTTP/2, 而浏览器只在 TLS 连接(**https:// 开头的 url)时支持它。你需要一些我下面介绍的配置。但首先你需要的是支持 ALPN 扩展的 TLS 库。 +大部分人想在浏览器上使用 HTTP/2, 而浏览器只在使用 TLS 连接(**https:// 开头的 url)时才支持 HTTP/2。你需要一些我下面介绍的配置。但首先你需要的是支持 ALPN 扩展的 TLS 库。 +ALPN 用来协商(negotiate)服务器和客户端之间的协议。如果你服务器上 TLS 库还没有实现 ALPN,客户端只能通过 HTTP/1.1 通信。那么,可以和 Apache 链接并支持它的是什么库呢? -ALPN 用来屏蔽服务器和客户端之间的协议。如果你服务器上 TLS 库还没有实现 ALPN,客户端只能通过 HTTP/1.1 通信。那么,和 Apache 连接的到底是什么?又是什么支持它呢? +- **OpenSSL 1.0.2** 及其以后。 +- ??? (别的我也不知道了) -- **OpenSSL 1.0.2** 即将到来。 -- ??? - -如果你的 OpenSSL 库是 Linux 发行版自带的,这里使用的版本号可能和官方 OpenSSL 发行版的不同。如果不确定的话检查一下你的 Linux 发行版吧。 +如果你的 OpenSSL 库是 Linux 版本自带的,这里使用的版本号可能和官方 OpenSSL 版本的不同。如果不确定的话检查一下你的 Linux 版本吧。 ### 配置 ### 另一个给服务器的好建议是为 http2 模块设置合适的日志等级。添加下面的配置: - # 某个地方有这样一行 + # 放在某个地方的这样一行 LoadModule http2_module modules/mod_http2.so @@ -62,38 +61,37 @@ ALPN 用来屏蔽服务器和客户端之间的协议。如果你服务器上 TL 那么,假设你已经编译部署好了服务器, TLS 库也是最新的,你启动了你的服务器,打开了浏览器。。。你怎么知道它在工作呢? -如果除此之外你没有添加其它到服务器配置,很可能它没有工作。 +如果除此之外你没有添加其它的服务器配置,很可能它没有工作。 -你需要告诉服务器在哪里使用协议。默认情况下,你的服务器并没有启动 HTTP/2 协议。因为这是安全路由,你可能要有一套部署了才能继续。 +你需要告诉服务器在哪里使用该协议。默认情况下,你的服务器并没有启动 HTTP/2 协议。因为这样比较安全,也许才能让你已有的部署可以继续工作。 -你用 **Protocols** 命令启用 HTTP/2 协议: +你可以用新的 **Protocols** 指令启用 HTTP/2 协议: - # for a https server + # 对于 https 服务器 Protocols h2 http/1.1 ... 
- # for a http server + # 对于 http 服务器 Protocols h2c http/1.1 -你可以给一般服务器或者指定的 **vhosts** 添加这个配置。 +你可以给整个服务器或者指定的 **vhosts** 添加这个配置。 #### SSL 参数 #### -对于 TLS (SSL),HTTP/2 有一些特殊的要求。阅读 [https:// 连接][4]了解更详细的信息。 +对于 TLS (SSL),HTTP/2 有一些特殊的要求。阅读下面的“ https:// 连接”一节了解更详细的信息。 ### http:// 连接 (h2c) ### -尽管现在还没有浏览器支持 HTTP/2 协议, http:// 这样的 url 也能正常工作, 因为有 mod_h[ttp]2 的支持。启用它你只需要做的一件事是在 **httpd.conf** 配置 Protocols : +尽管现在还没有浏览器支持,但是 HTTP/2 协议也工作在 http:// 这样的 url 上, 而且 mod_h[ttp]2 也支持。启用它你唯一所要做的是在 Protocols 配置中启用它: - # for a http server + # 对于 http 服务器 Protocols h2c http/1.1 - 这里有一些支持 **h2c** 的客户端(和客户端库)。我会在下面介绍: #### curl #### -Daniel Stenberg 维护的网络资源命令行客户端 curl 当然支持。如果你的系统上有 curl,有一个简单的方法检查它是否支持 http/2: +Daniel Stenberg 维护的用于访问网络资源的命令行客户端 curl 当然支持。如果你的系统上有 curl,有一个简单的方法检查它是否支持 http/2: sh> curl -V curl 7.43.0 (x86_64-apple-darwin15.0) libcurl/7.43.0 SecureTransport zlib/1.2.5 @@ -126,11 +124,11 @@ Daniel Stenberg 维护的网络资源命令行客户端 curl 当然支持。如 恭喜,如果看到了有 **...101 Switching...** 的行就表示它正在工作! -有一些情况不会发生到 HTTP/2 的 Upgrade 。如果你的第一个请求没有内容,例如你上传一个文件,就不会触发 Upgrade。[h2c 限制][5]部分有详细的解释。 +有一些情况不会发生 HTTP/2 的升级切换(Upgrade)。如果你的第一个请求有内容数据(body),例如你上传一个文件时,就不会触发升级切换。[h2c 限制][5]部分有详细的解释。 #### nghttp #### -nghttp2 有能一起编译的客户端和服务器。如果你的系统中有客户端,你可以简单地通过获取资源验证你的安装: +nghttp2 可以一同编译它自己的客户端和服务器。如果你的系统中有该客户端,你可以简单地通过获取一个资源来验证你的安装: sh> nghttp -uv http:/// [ 0.001] Connected @@ -151,7 +149,7 @@ nghttp2 有能一起编译的客户端和服务器。如果你的系统中有客 这和我们上面 **curl** 例子中看到的 Upgrade 输出很相似。 -在命令行参数中隐藏着一种可以使用 **h2c**:的参数:**-u**。这会指示 **nghttp** 进行 HTTP/1 Upgrade 过程。但如果我们不使用呢? +有另外一种在命令行参数中不用 **-u** 参数而使用 **h2c** 的方法。这个参数会指示 **nghttp** 进行 HTTP/1 升级切换过程。但如果我们不使用呢? sh> nghttp -v http:/// [ 0.002] Connected @@ -166,36 +164,33 @@ nghttp2 有能一起编译的客户端和服务器。如果你的系统中有客 :scheme: http ... 
-连接马上显示出了 HTTP/2!这就是协议中所谓的直接模式,当客户端发送一些特殊的 24 字节到服务器时就会发生: +连接马上使用了 HTTP/2!这就是协议中所谓的直接(direct)模式,当客户端发送一些特殊的 24 字节到服务器时就会发生: 0x505249202a20485454502f322e300d0a0d0a534d0d0a0d0a - or in ASCII: PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n + +用 ASCII 表示是: + + PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n 支持 **h2c** 的服务器在一个新的连接中看到这些信息就会马上切换到 HTTP/2。HTTP/1.1 服务器则认为是一个可笑的请求,响应并关闭连接。 -因此 **直接** 模式只适合于那些确定服务器支持 HTTP/2 的客户端。例如,前一个 Upgrade 过程是成功的。 +因此,**直接**模式只适合于那些确定服务器支持 HTTP/2 的客户端。例如,当前一个升级切换过程成功了的时候。 -**直接** 模式的魅力是零开销,它支持所有请求,即使没有 body 部分(查看[h2c 限制][6])。任何支持 h2c 协议的服务器默认启用了直接模式。如果你想停用它,可以添加下面的配置指令到你的服务器: +**直接**模式的魅力是零开销,它支持所有请求,即使带有请求数据部分(查看[h2c 限制][6])。 -注:下面这行打删除线 - - H2Direct off - -注:下面这行打删除线 - -对于 2.4.17 发行版,默认明文连接时启用 **H2Direct** 。但是有一些模块和这不兼容。因此,在下一发行版中,默认会设置为**off**,如果你希望你的服务器支持它,你需要设置它为: +对于 2.4.17 版本,明文连接时默认启用 **H2Direct** 。但是有一些模块和这不兼容。因此,在下一版本中,默认会设置为**off**,如果你希望你的服务器支持它,你需要设置它为: H2Direct on ### https:// 连接 (h2) ### -一旦你的 mod_h[ttp]2 支持 h2c 连接,就是时候一同启用 **h2**,因为现在的浏览器支持它和 **https:** 一同使用。 +当你的 mod_h[ttp]2 可以支持 h2c 连接时,那就可以一同启用 **h2** 兄弟了,现在的浏览器仅支持它和 **https:** 一同使用。 -HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已经提到了 ALNP 扩展。另外的一个要求是不会使用特定[黑名单][7]中的密码。 +HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已经提到了 ALNP 扩展。另外的一个要求是不能使用特定[黑名单][7]中的加密算法。 -尽管现在版本的 **mod_h[ttp]2** 不增强这些密码(以后可能会),大部分客户端会这么做。如果你用不切当的密码在浏览器中打开 **h2** 服务器,你会看到模糊警告**INADEQUATE_SECURITY**,浏览器会拒接连接。 +尽管现在版本的 **mod_h[ttp]2** 不增强这些算法(以后可能会),但大部分客户端会这么做。如果让你的浏览器使用不恰当的算法打开 **h2** 服务器,你会看到不明确的警告**INADEQUATE_SECURITY**,浏览器会拒接连接。 -一个可接受的 Apache SSL 配置类似: +一个可行的 Apache SSL 配置类似: SSLCipherSuite 
ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK SSLProtocol All -SSLv2 -SSLv3 @@ -203,11 +198,11 @@ HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已 (是的,这确实很长。) -这里还有一些应该调整的 SSL 配置参数,但不是必须:**SSLSessionCache**, **SSLUseStapling** 等,其它地方也有介绍这些。例如 Ilya Grigorik 写的一篇博客 [高性能浏览器网络][8]。 +这里还有一些应该调整,但不是必须调整的 SSL 配置参数:**SSLSessionCache**, **SSLUseStapling** 等,其它地方也有介绍这些。例如 Ilya Grigorik 写的一篇超赞的博客: [高性能浏览器网络][8]。 #### curl #### -再次回到 shell 并使用 curl(查看 [curl h2c 章节][9] 了解要求)你也可以通过 curl 用简单的命令检测你的服务器: +再次回到 shell 使用 curl(查看上面的“curl h2c”章节了解要求),你也可以通过 curl 用简单的命令检测你的服务器: sh> curl -v --http2 https:/// ... @@ -220,9 +215,9 @@ HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已 恭喜你,能正常工作啦!如果还不能,可能原因是: -- 你的 curl 不支持 HTTP/2。查看[检测][10]。 +- 你的 curl 不支持 HTTP/2。查看上面的“检测 curl”一节。 - 你的 openssl 版本太低不支持 ALPN。 -- 不能验证你的证书,或者不接受你的密码配置。尝试添加命令行选项 -k 停用 curl 中的检查。如果那能工作,还要重新配置你的 SSL 和证书。 +- 不能验证你的证书,或者不接受你的算法配置。尝试添加命令行选项 -k 停用 curl 中的这些检查。如果可以工作,就重新配置你的 SSL 和证书。 #### nghttp #### @@ -246,11 +241,11 @@ HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已 The negotiated protocol: http/1.1 [ERROR] HTTP/2 protocol was not selected. 
(nghttp2 expects h2) -这表示 ALPN 能正常工作,但并没有用 h2 协议。你需要像上面介绍的那样在服务器上选中那个协议。如果一开始在 vhost 部分选中不能正常工作,试着在通用部分选中它。 +这表示 ALPN 能正常工作,但并没有用 h2 协议。你需要像上面介绍的那样检查你服务器上的 Protocols 配置。如果一开始在 vhost 部分设置不能正常工作,试着在通用部分设置它。 #### Firefox #### -Update: [Apache Lounge][11] 的 Steffen Land 告诉我 [Firefox HTTP/2 指示插件][12]。你可以看到有多少地方用到了 h2(提示:Apache Lounge 用 h2 已经有一段时间了。。。) +更新: [Apache Lounge][11] 的 Steffen Land 告诉我 [Firefox 上有个 HTTP/2 指示插件][12]。你可以看到有多少地方用到了 h2(提示:Apache Lounge 用 h2 已经有一段时间了。。。) 你可以在 Firefox 浏览器中打开开发者工具,在那里的网络标签页查看 HTTP/2 连接。当你打开了 HTTP/2 并重新刷新 html 页面时,你会看到类似下面的东西: @@ -260,9 +255,9 @@ Update: [Apache Lounge][11] 的 Steffen Land 告诉我 [Firefox HTTP/2 指示 #### Google Chrome #### -在 Google Chrome 中,你在开发者工具中看不到 HTTP/2 指示器。相反,Chrome 用特殊的地址 **chrome://net-internals/#http2** 给出了相关信息。 +在 Google Chrome 中,你在开发者工具中看不到 HTTP/2 指示器。相反,Chrome 用特殊的地址 **chrome://net-internals/#http2** 给出了相关信息。(LCTT 译注:Chrome 已经有一个 “HTTP/2 and SPDY indicator” 可以很好的在地址栏识别 HTTP/2 连接) -如果你在服务器中打开了一个页面并在 Chrome 那个页面查看,你可以看到类似下面这样: +如果你打开了一个服务器的页面,可以在 Chrome 中查看那个 net-internals 页面,你可以看到类似下面这样: ![](https://icing.github.io/mod_h2/images/chrome-h2.png) @@ -276,21 +271,21 @@ Windows 10 中 Internet Explorer 的继任者 Edge 也支持 HTTP/2。你也可 #### Safari #### -在 Apple 的 Safari 中,打开开发者工具,那里有个网络标签页。重新加载你的服务器页面并在开发者工具中选择显示了加载的行。如果你启用了在右边显示详细试图,看 **状态** 部分。那里显示了 **HTTP/2.0 200**,类似: +在 Apple 的 Safari 中,打开开发者工具,那里有个网络标签页。重新加载你的服务器上的页面,并在开发者工具中选择显示了加载的那行。如果你启用了在右边显示详细视图,看 **Status** 部分。那里显示了 **HTTP/2.0 200**,像这样: ![](https://icing.github.io/mod_h2/images/safari-h2.png) #### 重新协商 #### -https: 连接重新协商是指正在运行的连接中特定的 TLS 参数会发生变化。在 Apache httpd 中,你可以通过目录中的配置文件修改 TLS 参数。如果一个要获取特定位置资源的请求到来,配置的 TLS 参数会和当前的 TLS 参数进行对比。如果它们不相同,就会触发重新协商。 +https: 连接重新协商是指正在运行的连接中特定的 TLS 参数会发生变化。在 Apache httpd 中,你可以在 directory 配置中改变 TLS 参数。如果进来一个获取特定位置资源的请求,配置的 TLS 参数会和当前的 TLS 参数进行对比。如果它们不相同,就会触发重新协商。 -这种最常见的情形是密码变化和客户端验证。你可以要求客户访问特定位置时需要通过验证,或者对于特定资源,你可以使用更安全的, CPU 敏感的密码。 +这种最常见的情形是算法变化和客户端证书。你可以要求客户访问特定位置时需要通过验证,或者对于特定资源,你可以使用更安全的、对 CPU 压力更大的算法。 -不管你的想法有多么好,HTTP/2 
中都**不可以**发生重新协商。如果有 100 多个请求到同一个地方,什么时候哪个会发生重新协商呢? +但不管你的想法有多么好,HTTP/2 中都**不可以**发生重新协商。在同一个连接上会有 100 多个请求,那么重新协商该什么时候做呢? -对于这种配置,现有的 **mod_h[ttp]2** 还不能保证你的安全。如果你有一个站点使用了 TLS 重新协商,别在上面启用 h2! +对于这种配置,现有的 **mod_h[ttp]2** 还没有办法。如果你有一个站点使用了 TLS 重新协商,别在上面启用 h2! -当然,我们会在后面的发行版中解决这个问题然后你就可以安全地启用了。 +当然,我们会在后面的版本中解决这个问题,然后你就可以安全地启用了。 ### 限制 ### @@ -298,45 +293,45 @@ https: 连接重新协商是指正在运行的连接中特定的 TLS 参数会 实现除 HTTP 之外协议的模块可能和 **mod_http2** 不兼容。这在其它协议要求服务器首先发送数据时无疑会发生。 -**NNTP** 就是这种协议的一个例子。如果你在服务器中配置了 **mod_nntp_like_ssl**,甚至都不要加载 mod_http2。等待下一个发行版。 +**NNTP** 就是这种协议的一个例子。如果你在服务器中配置了 **mod\_nntp\_like\_ssl**,那么就不要加载 mod_http2。等待下一个版本。 #### h2c 限制 #### **h2c** 的实现还有一些限制,你应该注意: -#### 在虚拟主机中拒绝 h2c #### +##### 在虚拟主机中拒绝 h2c ##### 你不能对指定的虚拟主机拒绝 **h2c 直连**。连接建立而没有看到请求时会触发**直连**,这使得不可能预先知道 Apache 需要查找哪个虚拟主机。 -#### 升级请求体 #### +##### 有请求数据时的升级切换 ##### -对于有 body 部分的请求,**h2c** 升级不能正常工作。那些是 PUT 和 POST 请求(用于提交和上传)。如果你写了一个客户端,你可能会用一个简单的 GET 去处理请求或者用选项 * 去触发升级。 +对于有数据的请求,**h2c** 升级切换不能正常工作。那些是 PUT 和 POST 请求(用于提交和上传)。如果你写了一个客户端,你可能会用一个简单的 GET 或者 OPTIONS * 来处理那些请求以触发升级切换。 -原因从技术层面来看显而易见,但如果你想知道:升级过程中,连接处于半疯状态。请求按照 HTTP/1.1 的格式,而响应使用 HTTP/2。如果请求有一个 body 部分,服务器在发送响应之前需要读取整个 body。因为响应可能需要从客户端处得到应答用于流控制。但如果仍在发送 HTTP/1.1 请求,客户端就还不能处理 HTTP/2 连接。 +原因从技术层面来看显而易见,但如果你想知道:在升级切换过程中,连接处于半疯状态。请求按照 HTTP/1.1 的格式,而响应使用 HTTP/2 帧。如果请求有一个数据部分,服务器在发送响应之前需要读取整个数据。因为响应可能需要从客户端处得到应答用于流控制及其它东西。但如果仍在发送 HTTP/1.1 请求,客户端就仍然不能以 HTTP/2 连接。 -为了使行为可预测,几个服务器实现商决定不要在任何请求体中进行升级,即使 body 很小。 +为了使行为可预测,几个服务器在实现上决定不在任何带有请求数据的请求中进行升级切换,即使请求数据很小。 -#### 升级 302s #### +##### 302 时的升级切换 ##### -有重定向发生时当前 h2c 升级也不能工作。看起来 mod_http2 之前的重写有可能发生。这当然不会导致断路,但你测试这样的站点也许会让你迷惑。 +有重定向发生时,当前的 h2c 升级切换也不能工作。看起来 mod_http2 之前的重写有可能发生。这当然不会导致断路,但你测试这样的站点也许会让你迷惑。 #### h2 限制 #### 这里有一些你应该意识到的 h2 实现限制: -#### 连接重用 #### +##### 连接重用 ##### HTTP/2 协议允许在特定条件下重用 TLS 连接:如果你有带通配符的证书或者多个 AltSubject 名称,浏览器可能会重用现有的连接。例如: -你有一个 **a.example.org** 的证书,它还有另外一个名称 **b.example.org**。你在浏览器中打开 url **https://a.example.org/**,用另一个标签页加载 **https://b.example.org/**。 +你有一个 
**a.example.org** 的证书,它还有另外一个名称 **b.example.org**。你在浏览器中打开 URL **https://a.example.org/**,用另一个标签页加载 **https://b.example.org/**。 -在重新打开一个新的连接之前,浏览器看到它有一个到 **a.example.org** 的连接并且证书对于 **b.example.org** 也可用。因此,它在第一个连接上面向第二个标签页发送请求。 +在重新打开一个新的连接之前,浏览器看到它有一个到 **a.example.org** 的连接并且证书对于 **b.example.org** 也可用。因此,它在第一个连接上面发送第二个标签页的请求。 -这种连接重用是刻意设计的,它使得致力于 HTTP/1 切分效率的站点能够不需要太多变化就能利用 HTTP/2。 +这种连接重用是刻意设计的,它使得使用了 HTTP/1 切分(sharding)来提高效率的站点能够不需要太多变化就能利用 HTTP/2。 -Apache **mod_h[ttp]2** 还没有完全实现这点。如果 **a.example.org** 和 **b.example.org** 是不同的虚拟主机, Apache 不会允许这样的连接重用,并会告知浏览器状态码**421 错误请求**。浏览器会意识到它需要重新打开一个到 **b.example.org** 的连接。这仍然能工作,只是会降低一些效率。 +Apache **mod_h[ttp]2** 还没有完全实现这点。如果 **a.example.org** 和 **b.example.org** 是不同的虚拟主机, Apache 不会允许这样的连接重用,并会告知浏览器状态码 **421 Misdirected Request**。浏览器会意识到它需要重新打开一个到 **b.example.org** 的连接。这仍然能工作,只是会降低一些效率。 -我们期望下一次的发布中能有切当的检查。 +我们期望下一次的发布中能有合适的检查。 Münster, 12.10.2015, @@ -355,7 +350,7 @@ via: https://icing.github.io/mod_h2/howto.html 作者:[icing][a] 译者:[ictlyh](http://mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20151022 9 Tips for Improving WordPress Performance.md b/published/201512/20151022 9 Tips for Improving WordPress Performance.md similarity index 52% rename from translated/tech/20151022 9 Tips for Improving WordPress Performance.md rename to published/201512/20151022 9 Tips for Improving WordPress Performance.md index 9c105df42a..ea1a76200d 100644 --- a/translated/tech/20151022 9 Tips for Improving WordPress Performance.md +++ b/published/201512/20151022 9 Tips for Improving WordPress Performance.md @@ -1,19 +1,18 @@ - -提高 WordPress 性能的9个技巧 +深入浅出讲述提升 WordPress 性能的九大秘笈 ================================================================================ -关于建站和 web 应用程序交付,WordPress 是全球最大的一个平台。全球大约 [四分之一][1] 的站点现在正在使用开源 WordPress 软件,包括 eBay, Mozilla, RackSpace, TechCrunch, 
CNN, MTV,纽约时报,华尔街日报。 +在建站和 web 应用程序交付方面,WordPress 是全球最大的一个平台。全球大约[四分之一][1] 的站点现在正在使用开源 WordPress 软件,包括 eBay、 Mozilla、 RackSpace、 TechCrunch、 CNN、 MTV、纽约时报、华尔街日报 等等。 -WordPress.com,对于用户创建博客平台是最流行的,其也运行在WordPress 开源软件上。[NGINX powers WordPress.com][2]。许多 WordPress 用户刚开始在 WordPress.com 上建站,然后移动到搭载着 WordPress 开源软件的托管主机上;其中大多数站点都使用 NGINX 软件。 +最流行的个人博客平台 WordPress.com,其也运行在 WordPress 开源软件上。[而 NGINX 则为 WordPress.com 提供了动力][2]。在 WordPress.com 的用户当中,许多站点起步于 WordPress.com,然后换成了自己运行 WordPress 开源软件;它们中越来越多的站点也使用了 NGINX 软件。 -WordPress 的吸引力是它的简单性,无论是安装启动或者对于终端用户的使用。然而,当使用量不断增长时,WordPress 站点的体系结构也存在一定的问题 - 这里几个方法,包括使用缓存以及组合 WordPress 和 NGINX,可以解决这些问题。 +WordPress 的吸引力源于其简单性,无论是对于最终用户还是安装架设。然而,当使用量不断增长时,WordPress 站点的体系结构也存在一定的问题 - 这里有几个方法,包括使用缓存,以及将 WordPress 和 NGINX 组合起来,可以解决这些问题。 -在这篇博客中,我们提供了9个技巧来进行优化,以帮助你解决 WordPress 中一些常见的性能问题: +在这篇博客中,我们提供了九个提速技巧来帮助你解决 WordPress 中一些常见的性能问题: - [缓存静态资源][3] - [缓存动态文件][4] -- [使用 NGINX][5] -- [添加支持 NGINX 的链接][6] +- [迁移到 NGINX][5] +- [添加 NGINX 静态链接支持][6] - [为 NGINX 配置 FastCGI][7] - [为 NGINX 配置 W3_Total_Cache][8] - [为 NGINX 配置 WP-Super-Cache][9] @@ -22,39 +21,39 @@ WordPress 的吸引力是它的简单性,无论是安装启动或者对于终 ### 在 LAMP 架构下 WordPress 的性能 ### -大多数 WordPress 站点都运行在传统的 LAMP 架构下:Linux 操作系统,Apache Web 服务器软件,MySQL 数据库软件 - 通常是一个单独的数据库服务器 - 和 PHP 编程语言。这些都是非常著名的,广泛应用的开源工具。大多数人都将 WordPress “称为” LAMP,并且很容易寻求帮助和支持。 +大多数 WordPress 站点都运行在传统的 LAMP 架构下:Linux 操作系统,Apache Web 服务器软件,MySQL 数据库软件(通常是一个单独的数据库服务器)和 PHP 编程语言。这些都是非常著名的,广泛应用的开源工具。在 WordPress 世界里,很多人都用的是 LAMP,所以很容易寻求帮助和支持。 -当用户访问 WordPress 站点时,浏览器为每个用户创建六到八个连接来运行 Linux/Apache 的组合。当用户请求连接时,每个页面的 PHP 文件开始飞速的从 MySQL 数据库争夺资源来响应请求。 +当用户访问 WordPress 站点时,浏览器为每个用户创建六到八个连接来连接到 Linux/Apache 上。当用户请求连接时,PHP 即时生成每个页面,从 MySQL 数据库获取资源来响应请求。 -LAMP 对于数百个并发用户依然能照常工作。然而,流量突然增加是常见的并且 - 通常是 - 一件好事。 +LAMP 或许对于数百个并发用户依然能照常工作。然而,流量突然增加是常见的,并且通常这应该算是一件好事。 但是,当 LAMP 站点变得繁忙时,当同时在线的用户达到数千个时,它的瓶颈就会被暴露出来。瓶颈存在主要是两个原因: -1. 
Apache Web 服务器 - Apache 为每一个连接需要消耗大量资源。如果 Apache 接受了太多的并发连接,内存可能会耗尽,性能急剧降低,因为数据必须使用磁盘进行交换。如果以限制连接数来提高响应时间,新的连接必须等待,这也导致了用户体验变得很差。 +1. Apache Web 服务器 - Apache 的每个/每次连接需要消耗大量资源。如果 Apache 接受了太多的并发连接,内存可能会耗尽,从而导致性能急剧降低,因为数据必须交换到磁盘了。如果以限制连接数来提高响应时间,新的连接必须等待,这也导致了用户体验变得很差。 -1. PHP/MySQL 的交互 - 总之,一个运行 PHP 和 MySQL 数据库服务器的应用服务器上每秒的请求量不能超过最大限制。当请求的数量超过最大连接数时,用户必须等待。超过最大连接数时也会增加所有用户的响应时间。超过其两倍以上时会出现明显的性能问题。 +1. PHP/MySQL 的交互 - 一个运行 PHP 和 MySQL 数据库服务器的应用服务器上每秒的请求量有一个最大限制。当请求的数量超过这个最大限制时,用户必须等待。超过这个最大限制时也会增加所有用户的响应时间。超过其两倍以上时会出现明显的性能问题。 - LAMP 架构的网站一般都会出现性能瓶颈,这时就需要升级硬件了 - 加 CPU,扩大磁盘空间等等。当 Apache 和 PHP/MySQL 的架构负载运行后,在硬件上不断的提升无法保证对系统资源指数增长的需求。 +LAMP 架构的网站出现性能瓶颈是常见的情况,这时就需要升级硬件了 - 增加 CPU,扩大磁盘空间等等。当 Apache 和 PHP/MySQL 的架构超载后,在硬件上不断的提升却跟不上系统资源指数增长的需求。 -最先取代 LAMP 架构的是 LEMP 架构 – Linux, NGINX, MySQL, 和 PHP。 (这是 LEMP 的缩写,E 代表着 “engine-x.” 的发音。) 我们在 [技巧 3][12] 中会描述 LEMP 架构。 +首选替代 LAMP 架构的是 LEMP 架构 – Linux, NGINX, MySQL, 和 PHP。 (这是 LEMP 的缩写,E 代表着 “engine-x.” 的发音。) 我们在 [技巧 3][12] 中会描述 LEMP 架构。 ### 技巧 1. 缓存静态资源 ### -静态资源是指不变的文件,像 CSS,JavaScript 和图片。这些文件往往在网页的数据中占半数以上。页面的其余部分是动态生成的,像在论坛中评论,仪表盘的性能,或个性化的内容(可以看看Amazon.com 产品)。 +静态资源是指不变的文件,像 CSS,JavaScript 和图片。这些文件往往在网页的数据中占半数以上。页面的其余部分是动态生成的,像在论坛中评论,性能仪表盘,或个性化的内容(可以看看 Amazon.com 产品)。 缓存静态资源有两大好处: -- 更快的交付给用户 - 用户从他们浏览器的缓存或者从互联网上离他们最近的缓存服务器获取静态文件。有时候文件较大,因此减少等待时间对他们来说帮助很大。 +- 更快的交付给用户 - 用户可以从它们浏览器的缓存或者从互联网上离它们最近的缓存服务器获取静态文件。有时候文件较大,因此减少等待时间对它们来说帮助很大。 - 减少应用服务器的负载 - 从缓存中检索到的每个文件会让 web 服务器少处理一个请求。你的缓存越多,用户等待的时间越短。 -要让浏览器缓存文件,需要早在静态文件中设置正确的 HTTP 首部。当看到 HTTP Cache-Control 首部时,特别设置了 max-age,Expires 首部,以及 Entity 标记。[这里][13] 有详细的介绍。 +要让浏览器缓存文件,需要在静态文件中设置正确的 HTTP 首部。看看 HTTP Cache-Control 首部,特别是设置了 max-age 参数,Expires 首部,以及 Entity 标记。[这里][13] 有详细的介绍。 -当启用本地缓存然后用户请求以前访问过的文件时,浏览器首先检查该文件是否在缓存中。如果在,它会询问 Web 服务器该文件是否改变过。如果该文件没有改变,Web 服务器将立即响应一个304状态码(未改变),这意味着该文件没有改变,而不是返回状态码200 OK,然后继续检索并发送已改变的文件。 +当启用本地缓存,然后用户请求以前访问过的文件时,浏览器首先检查该文件是否在缓存中。如果在,它会询问 Web 服务器该文件是否改变过。如果该文件没有改变,Web 服务器将立即响应一个304状态码(未改变),这意味着该文件没有改变,而不是返回状态码200 OK 并检索和发送已改变的文件。 
-为了支持浏览器以外的缓存,可以考虑下面的方法,内容分发网络(CDN)。CDN 是一​​种流行且​​强大的缓存工具,但我们在这里不详细描述它。可以想一下 CDN 背后的支撑技术的实现。此外,当你的站点从 HTTP/1.x 过渡到 HTTP/2 协议时,CDN 的用处可能不太大;根据需要调查和测试,找到你网站需要的正确方法。 +要在浏览器之外支持缓存,可以考虑下面讲到的技巧,以及考虑使用内容分发网络(CDN)。CDN 是一​​种流行且​​强大的缓存工具,但我们在这里不详细描述它。在你实现了这里讲到的其它技术之后可以考虑 CDN。此外,当你的站点从 HTTP/1.x 过渡到 HTTP/2 协议时,CDN 的用处可能不太大;根据需要调查和测试,找到你网站需要的正确方法。 -如果你转向 NGINX Plus 或开源的 NGINX 软件作为架构的一部分,建议你考虑 [技巧 3][14],然后配置 NGINX 缓存静态资源。使用下面的配置,用你 Web 服务器的 URL 替换 www.example.com。 +如果你转向 NGINX Plus 或将开源的 NGINX 软件作为架构的一部分,建议你考虑 [技巧 3][14],然后配置 NGINX 缓存静态资源。使用下面的配置,用你 Web 服务器的 URL 替换 www.example.com。 server { # substitute your web server's URL for www.example.com @@ -86,63 +85,63 @@ LAMP 对于数百个并发用户依然能照常工作。然而,流量突然增 ### 技巧 2. 缓存动态文件 ### -WordPress 是动态生成的网页,这意味着每次请求时它都要生成一个给定的网页(即使和前一次的结果相同)。这意味着用户随时获得的是最新内容。 +WordPress 动态地生成网页,这意味着每次请求时它都要生成一个给定的网页(即使和前一次的结果相同)。这意味着用户随时获得的是最新内容。 想一下,当用户访问一个帖子时,并在文章底部有用户的评论时。你希望用户能够看到所有的评论 - 即使评论刚刚发布。动态内容就是处理这种情况的。 -但现在,当帖子每秒出现十几二十几个请求时。应用服务器可能每秒需要频繁生成页面导致其压力过大,造成延误。为了给用户提供最新的内容,每个访问理论上都是新的请求,因此他们也不得不在首页等待。 +但现在,当帖子每秒出现十几二十几个请求时。应用服务器可能每秒需要频繁生成页面导致其压力过大,造成延误。为了给用户提供最新的内容,每个访问理论上都是新的请求,因此它们不得不在原始出处等待很长时间。 -为了防止页面由于负载过大变得缓慢,需要缓存动态文件。这需要减少文件的动态内容来提高整个系统的响应速度。 +为了防止页面由于不断提升的负载而变得缓慢,需要缓存动态文件。这需要减少文件的动态内容来提高整个系统的响应速度。 -要在 WordPress 中启用缓存中,需要使用一些流行的插件 - 如下所述。WordPress 的缓存插件需要刷新页面,然后将其缓存短暂时间 - 也许只有几秒钟。因此,如果该网站每秒中只有几个请求,那大多数用户获得的页面都是缓存的副本。这也有助于提高所有用户的检索时间: +要在 WordPress 中启用缓存中,需要使用一些流行的插件 - 如下所述。WordPress 的缓存插件会请求最新的页面,然后将其缓存短暂时间 - 也许只有几秒钟。因此,如果该网站每秒中会有几个请求,那大多数用户获得的页面都是缓存的副本。这也有助于提高所有用户的检索时间: - 大多数用户获得页面的缓存副本。应用服务器没有做任何工作。 -- 用户很快会得到一个新的副本。应用服务器只需每隔一段时间刷新页面。当服务器产生一个新的页面(对于第一个用户访问后,缓存页过期),它这样做要快得多,因为它的请求不会超载。 +- 用户会得到一个之前的崭新副本。应用服务器只需每隔一段时间生成一个崭新页面。当服务器产生一个崭新页面(对于缓存过期后的第一个用户访问),它这样做要快得多,因为它的请求并没有超载。 -你可以缓存运行在 LAMP 架构或者 [LEMP 架构][15] 上 WordPress 的动态文件(在 [技巧 3][16] 中说明了)。有几个缓存插件,你可以在 WordPress 中使用。这里有最流行的缓存插件和缓存技术,从最简单到最强大的: +你可以缓存运行在 LAMP 架构或者 [LEMP 架构][15] 上 WordPress 的动态文件(在 [技巧 3][16] 中说明了)。有几个缓存插件,你可以在 WordPress 中使用。运用到了最流行的缓存插件和缓存技术,从最简单到最强大的: -- [Hyper-Cache][17] 和 
[Quick-Cache][18] – 这两个插件为每个 WordPress 页面创建单个 PHP 文件。它支持的一些动态函数会绕过多个 WordPress 与数据库的连接核心处理,创建一个更快的用户体验。他们不会绕过所有的 PHP 处理,所以使用以下选项他们不能给出相同的性能提升。他们也不需要修改 NGINX 的配置。 +- [Hyper-Cache][17] 和 [Quick-Cache][18] – 这两个插件为每个 WordPress 页面创建单个 PHP 文件。它支持绕过多个 WordPress 与数据库的连接核心处理的一些动态功能,创建一个更快的用户体验。它们不会绕过所有的 PHP 处理,所以并不会如下面那些取得同样的性能提升。它们也不需要修改 NGINX 的配置。 -- [WP Super Cache][19] – 最流行的 WordPress 缓存插件。它有许多功能,它的界面非常简洁,如下图所示。我们展示了 NGINX 一个简单的配置实例在 [技巧 7][20] 中。 +- [WP Super Cache][19] – 最流行的 WordPress 缓存插件。在它易用的界面易用上提供了许多功能,如下所示。我们在 [技巧 7][20] 中展示了一个简单的 NGINX 配置实例。 -- [W3 Total Cache][21] – 这是第二大最受欢迎的 WordPress 缓存插件。它比 WP Super Cache 的功能更强大,但它有些配置选项比较复杂。一个 NGINX 的简单配置,请看 [技巧 6][22]。 +- [W3 Total Cache][21] – 这是第二流行的 WordPress 缓存插件。它比 WP Super Cache 的功能更强大,但它有些配置选项比较复杂。样例 NGINX 配置,请看 [技巧 6][22]。 -- [FastCGI][23] – CGI 代表通用网关接口,在因特网上发送请求和接收文件。它不是一个插件只是一种能直接使用缓存的方法。FastCGI 可以被用在 Apache 和 Nginx 上,它也是最流行的动态缓存方法;我们在 [技巧 5][24] 中描述了如何配置 NGINX 来使用它。 +- [FastCGI][23] – CGI 的意思是通用网关接口( Common Gateway Interface),在因特网上发送请求和接收文件的一种通用方式。它不是一个插件,而是一种与缓存交互缓存的方法。FastCGI 可以被用在 Apache 和 Nginx 上,它也是最流行的动态缓存方法;我们在 [技巧 5][24] 中描述了如何配置 NGINX 来使用它。 -这些插件的技术文档解释了如何在 LAMP 架构中配置它们。配置选项包括数据库和对象缓存;也包括使用 HTML,CSS 和 JavaScript 来构建 CDN 集成环境。对于 NGINX 的配置,请看列表中的提示技巧。 +这些插件和技术的文档解释了如何在典型的 LAMP 架构中配置它们。配置方式包括数据库和对象缓存;最小化 HTML、CSS 和 JavaScript;集成流行的 CDN 集成环境。对于 NGINX 的配置,请看列表中的提示技巧。 -**注意**:WordPress 不能缓存用户的登录信息,因为它们的 WordPress 页面都是不同的。(对于大多数网站来说,只有一小部分用户可能会登录),大多数缓存不会对刚刚评论过的用户显示缓存页面,只有当用户刷新页面时才会看到他们的评论。若要缓存页面的非个性化内容,如果它对整体性能来说很重要,可以使用一种称为 [fragment caching][25] 的技术。 +**注意**:缓存不会用于已经登录的 WordPress 用户,因为他们的 WordPress 页面都是不同的。(对于大多数网站来说,只有一小部分用户可能会登录)此外,大多数缓存不会对刚刚评论过的用户显示缓存页面,因为当用户刷新页面时希望看到他们的评论。若要缓存页面的非个性化内容,如果它对整体性能来说很重要,可以使用一种称为 [碎片缓存(fragment caching)][25] 的技术。 ### 技巧 3. 
使用 NGINX ### -如上所述,当并发用户数超过某一值时 Apache 会导致性能问题 – 可能数百个用户同时使用。Apache 对于每一个连接会消耗大量的资源,因而容易耗尽内存。Apache 可以配置连接数的值来避免耗尽内存,但是这意味着,超过限制时,新的连接请求必须等待。 +如上所述,当并发用户数超过某一数量时 Apache 会导致性能问题 – 可能是数百个用户同时使用。Apache 对于每一个连接会消耗大量的资源,因而容易耗尽内存。Apache 可以配置连接数的值来避免耗尽内存,但是这意味着,超过限制时,新的连接请求必须等待。 -此外,Apache 使用 mod_php 模块将每一个连接加载到内存中,即使只有静态文件(图片,CSS,JavaScript 等)。这使得每个连接消耗更多的资源,从而限制了服务器的性能。 +此外,Apache 为每个连接加载一个 mod_php 模块副本到内存中,即使只有服务于静态文件(图片,CSS,JavaScript 等)。这使得每个连接消耗更多的资源,从而限制了服务器的性能。 -开始解决这些问题吧,从 LAMP 架构迁到 LEMP 架构 – 使用 NGINX 取代 Apache 。NGINX 仅消耗很少量的内存就能处理成千上万的并发连接数,所以你不必经历颠簸,也不必限制并发连接数。 +要解决这些问题,从 LAMP 架构迁到 LEMP 架构 – 使用 NGINX 取代 Apache 。NGINX 在一定的内存之下就能处理成千上万的并发连接数,所以你不必经历颠簸,也不必限制并发连接数到很小的数量。 -NGINX 处理静态文件的性能也较好,它有内置的,简单的 [缓存][26] 控制策略。减少应用服务器的负载,你的网站的访问速度会更快,用户体验更好。 +NGINX 处理静态文件的性能也较好,它有内置的,容易调整的 [缓存][26] 控制策略。减少应用服务器的负载,你的网站的访问速度会更快,用户体验更好。 -你可以在部署的所有 Web 服务器上使用 NGINX,或者你可以把一个 NGINX 服务器作为 Apache 的“前端”来进行反向代理 - NGINX 服务器接收客户端请求,将请求的静态文件直接返回,将 PHP 请求转发到 Apache 上进行处理。 +你可以在部署环境的所有 Web 服务器上使用 NGINX,或者你可以把一个 NGINX 服务器作为 Apache 的“前端”来进行反向代理 - NGINX 服务器接收客户端请求,将请求的静态文件直接返回,将 PHP 请求转发到 Apache 上进行处理。 -对于动态页面的生成 - WordPress 核心体验 - 选择一个缓存工具,如 [技巧 2][27] 中描述的。在下面的技巧中,你可以看到 FastCGI,W3_Total_Cache 和 WP-Super-Cache 在 NGINX 上的配置示例。 (Hyper-Cache 和 Quick-Cache 不需要改变 NGINX 的配置。) +对于动态页面的生成,这是 WordPress 核心体验,可以选择一个缓存工具,如 [技巧 2][27] 中描述的。在下面的技巧中,你可以看到 FastCGI,W3\_Total\_Cache 和 WP-Super-Cache 在 NGINX 上的配置示例。 (Hyper-Cache 和 Quick-Cache 不需要改变 NGINX 的配置。) **技巧** 缓存通常会被保存到磁盘上,但你可以用 [tmpfs][28] 将缓存放在内存中来提高性能。 -为 WordPress 配置 NGINX 很容易。按照这四个步骤,其详细的描述在指定的技巧中: +为 WordPress 配置 NGINX 很容易。仅需四步,其详细的描述在指定的技巧中: -1.添加永久的支持 - 添加对 NGINX 的永久支持。此步消除了对 **.htaccess** 配置文件的依赖,这是 Apache 特有的。参见 [技巧 4][29] -2.配置缓存 - 选择一个缓存工具并安装好它。可选择的有 FastCGI cache,W3 Total Cache, WP Super Cache, Hyper Cache, 和 Quick Cache。请看技巧 [5][30], [6][31], 和 [7][32]. -3.落实安全防范措施 - 在 NGINX 上采用对 WordPress 最佳安全的做法。参见 [技巧 8][33]。 -4.配置 WordPress 多站点 - 如果你使用 WordPress 多站点,在 NGINX 下配置子目录,子域,或多个域的结构。见 [技巧9][34]。 +1. 
添加永久链接的支持 - 让 NGINX 支持永久链接。此步消除了对 **.htaccess** 配置文件的依赖,这是 Apache 特有的。参见 [技巧 4][29]。 +2. 配置缓存 - 选择一个缓存工具并安装好它。可选择的有 FastCGI cache,W3 Total Cache, WP Super Cache, Hyper Cache, 和 Quick Cache。请看技巧 [5][30]、 [6][31] 和 [7][32]。 +3. 落实安全防范措施 - 在 NGINX 上采用对 WordPress 最佳安全的做法。参见 [技巧 8][33]。 +4. 配置 WordPress 多站点 - 如果你使用 WordPress 多站点,在 NGINX 下配置子目录,子域,或多域名架构。见 [技巧9][34]。 -### 技巧 4. 添加支持 NGINX 的链接 ### +### 技巧 4. 让 NGINX 支持永久链接 ### -许多 WordPress 网站依靠 **.htaccess** 文件,此文件依赖 WordPress 的多个功能,包括永久支持,插件和文件缓存。NGINX 不支持 **.htaccess** 文件。幸运的是,你可以使用 NGINX 的简单而全面的配置文件来实现大部分相同的功能。 +许多 WordPress 网站依赖于 **.htaccess** 文件,此文件为 WordPress 的多个功能所需要,包括永久链接支持、插件和文件缓存。NGINX 不支持 **.htaccess** 文件。幸运的是,你可以使用 NGINX 的简单而全面的配置文件来实现大部分相同的功能。 -你可以在使用 NGINX 的 WordPress 中通过在主 [server][36] 块下添加下面的 location 块中启用 [永久链接][35]。(此 location 块在其他代码示例中也会被包括)。 +你可以在你的主 [server][36] 块下添加下面的 location 块中为使用 NGINX 的 WordPress 启用 [永久链接][35]。(此 location 块在其它代码示例中也会被包括)。 -**try_files** 指令告诉 NGINX 检查请求的 URL 在根目录下是作为文件(**$uri**)还是目录(**$uri/**),**/var/www/example.com/htdocs**。如果都不是,NGINX 将重定向到 **/index.php**,通过查询字符串参数判断是否作为参数。 +**try_files** 指令告诉 NGINX 检查请求的 URL 在文档根目录(**/var/www/example.com/htdocs**)下是作为文件(**$uri**)还是目录(**$uri/**) 存在的。如果都不是,NGINX 将重定向到 **/index.php**,并传递查询字符串参数作为参数。 server { server_name example.com www.example.com; @@ -159,17 +158,17 @@ NGINX 处理静态文件的性能也较好,它有内置的,简单的 [缓存 ### 技巧 5. 
在 NGINX 中配置 FastCGI ### -NGINX 可以从 FastCGI 应用程序中缓存响应,如 PHP 响应。此方法可提供最佳的性能。 +NGINX 可以缓存来自 FastCGI 应用程序的响应,如 PHP 响应。此方法可提供最佳的性能。 -对于开源的 NGINX,第三方模块 [ngx_cache_purge][37] 提供了缓存清除能力,需要手动编译,配置代码如下所示。NGINX Plus 已经包含了此代码的实现。 +对于开源的 NGINX,编译入第三方模块 [ngx\_cache\_purge][37] 可以提供缓存清除能力,配置代码如下所示。NGINX Plus 已经包含了它自己实现此代码。 -当使用 FastCGI 时,我们建议你安装 [NGINX 辅助插件][38] 并使用下面的配置文件,尤其是要使用 **fastcgi_cache_key** 并且 location 块下要包括 **fastcgi_cache_purge**。当页面被发布或有改变时,甚至有新评论被发布时,该插件会自动清除你的缓存,你也可以从 WordPress 管理控制台手动清除。 +当使用 FastCGI 时,我们建议你安装 [NGINX 辅助插件][38] 并使用下面的配置文件,尤其是要注意 **fastcgi\_cache\_key** 的使用和包括 **fastcgi\_cache\_purge** 的 location 块。当页面发布或有改变时,有新评论被发布时,该插件会自动清除你的缓存,你也可以从 WordPress 管理控制台手动清除。 -NGINX 的辅助插件还可以添加一个简短的 HTML 代码到你网页的底部,确认缓存是否正常并显示一些统计工作。(你也可以使用 [$upstream_cache_status][39] 确认缓存功能是否正常。) +NGINX 的辅助插件还可以在你网页的底部添加一个简短的 HTML 代码,以确认缓存是否正常并显示一些统计数据。(你也可以使用 [$upstream\_cache\_status][39] 确认缓存功能是否正常。) -fastcgi_cache_path /var/run/nginx-cache levels=1:2 + fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m; -fastcgi_cache_key "$scheme$request_method$host$request_uri"; + fastcgi_cache_key "$scheme$request_method$host$request_uri"; server { server_name example.com www.example.com; @@ -181,7 +180,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri"; set $skip_cache 0; - # POST 请求和查询网址的字符串应该交给 PHP + # POST 请求和带有查询参数的网址应该交给 PHP if ($request_method = POST) { set $skip_cache 1; } @@ -196,7 +195,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri"; set $skip_cache 1; } - #用户不能使用缓存登录或缓存最近的评论 + #不要为登录用户或最近的评论者进行缓存 if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass |wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; @@ -240,13 +239,13 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri"; } } -### 技巧 6. 为 NGINX 配置 W3_Total_Cache ### +### 技巧 6. 
为 NGINX 配置 W3\_Total\_Cache ### -[W3 Total Cache][40], 是 Frederick Townes 的 [W3-Edge][41] 下的, 是一个支持 NGINX 的 WordPress 缓存框架。其有众多选项配置,可以替代 FastCGI 缓存。 +[W3 Total Cache][40], 是 [W3-Edge][41] 的 Frederick Townes 出品的, 是一个支持 NGINX 的 WordPress 缓存框架。其有众多选项配置,可以替代 FastCGI 缓存。 -缓存插件提供了各种缓存配置,还包括数据库和对象的缓存,对 HTML,CSS 和 JavaScript,可选择性的与流行的 CDN 整合。 +这个缓存插件提供了各种缓存配置,还包括数据库和对象的缓存,最小化 HTML、CSS 和 JavaScript,并可选与流行的 CDN 整合。 -使用插件时,需要将其配置信息写入位于你的域的根目录的 NGINX 配置文件中。 +这个插件会通过写入一个位于你的域的根目录的 NGINX 配置文件来控制 NGINX。 server { server_name example.com www.example.com; @@ -271,11 +270,11 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri"; ### 技巧 7. 为 NGINX 配置 WP Super Cache ### -[WP Super Cache][42] 是由 Donncha O Caoimh 完成的, [Automattic][43] 上的一个 WordPress 开发者, 这是一个 WordPress 缓存引擎,它可以将 WordPress 的动态页面转变成静态 HTML 文件,以使 NGINX 可以很快的提供服务。它是第一个 WordPress 缓存插件,和其他的相比,它更专注于某一特定的领域。 +[WP Super Cache][42] 是由 Donncha O Caoimh 开发的, 他是 [Automattic][43] 的一个 WordPress 开发者, 这是一个 WordPress 缓存引擎,它可以将 WordPress 的动态页面转变成静态 HTML 文件,以使 NGINX 可以很快的提供服务。它是第一个 WordPress 缓存插件,和其它的相比,它更专注于某一特定的领域。 配置 NGINX 使用 WP Super Cache 可以根据你的喜好而进行不同的配置。以下是一个示例配置。 -在下面的配置中,location 块中使用了名为 WP Super Cache 的超级缓存中部分配置来工作。代码的其余部分是根据 WordPress 的规则不缓存用户登录信息,不缓存 POST 请求,并对静态资源设置过期首部,再加上标准的 PHP 实现;这部分可以进行定制,来满足你的需求。 +在下面的配置中,带有名为 supercache 的 location 块是 WP Super Cache 特有的部分。 WordPress 规则的其余代码用于不缓存已登录用户的信息,不缓存 POST 请求,并对静态资源设置过期首部,再加上标准的 PHP 处理;这部分可以根据你的需求进行定制。 server { @@ -288,7 +287,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri"; set $cache_uri $request_uri; - # POST 请求和查询网址的字符串应该交给 PHP + # POST 请求和带有查询字符串的网址应该交给 PHP if ($request_method = POST) { set $cache_uri 'null cache'; } @@ -305,13 +304,13 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri"; set $cache_uri 'null cache'; } - #用户不能使用缓存登录或缓存最近的评论 + #不对已登录用户和最近的评论者使用缓存 if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+ |wp-postpass|wordpress_logged_in") { set $cache_uri 'null cache'; } - #当请求的文件存在时使用缓存,否则将请求转发给WordPress + #当请求的文件存在时使用缓存,否则将请求转发给 
WordPress location / { try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html $uri $uri/ /index.php; @@ -346,7 +345,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri"; ### 技巧 8. 为 NGINX 配置安全防范措施 ### -为了防止攻击,可以控制对关键资源的访问以及当机器超载时进行登录限制。 +为了防止攻击,可以控制对关键资源的访问并限制机器人对登录功能的过量攻击。 只允许特定的 IP 地址访问 WordPress 的仪表盘。 @@ -365,14 +364,14 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri"; deny all; } -拒绝其他人访问 WordPress 的配置文件 **wp-config.php**。拒绝其他人访问的另一种方法是将该文件的一个目录移到域的根目录下。 +拒绝其它人访问 WordPress 的配置文件 **wp-config.php**。拒绝其它人访问的另一种方法是将该文件的一个目录移到域的根目录之上的目录。 - # 拒绝其他人访问 wp-config.php + # 拒绝其它人访问 wp-config.php location ~* wp-config.php { deny all; } -对 **wp-login.php** 进行限速来防止暴力攻击。 +对 **wp-login.php** 进行限速来防止暴力破解。 # 拒绝访问 wp-login.php location = /wp-login.php { @@ -383,27 +382,27 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri"; ### 技巧 9. 配置 NGINX 支持 WordPress 多站点 ### -WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单个实例中允许你管理两个或多个网站。[WordPress.com][44] 运行的就是 WordPress 多站点,其主机为成千上万的用户提供博客服务。 +WordPress 多站点(WordPress Multisite),顾名思义,这个版本 WordPress 可以让你以单个实例管理两个或多个网站。[WordPress.com][44] 运行的就是 WordPress 多站点,其主机为成千上万的用户提供博客服务。 你可以从单个域的任何子目录或从不同的子域来运行独立的网站。 使用此代码块添加对子目录的支持。 - # 在 WordPress 中添加支持子目录结构的多站点 + # 在 WordPress 多站点中添加对子目录结构的支持 if (!-e $request_filename) { rewrite /wp-admin$ $scheme://$host$uri/ permanent; rewrite ^(/[^/]+)?(/wp-.*) $2 last; rewrite ^(/[^/]+)?(/.*\.php) $2 last; } -使用此代码块来替换上面的代码块以添加对子目录结构的支持,子目录名自定义。 +使用此代码块来替换上面的代码块以添加对子目录结构的支持,替换为你自己的子目录名。 # 添加支持子域名 server_name example.com *.example.com; 旧版本(3.4以前)的 WordPress 多站点使用 readfile() 来提供静态内容。然而,readfile() 是 PHP 代码,它会导致在执行时性能会显著降低。我们可以用 NGINX 来绕过这个非必要的 PHP 处理。该代码片段在下面被(==============)线分割出来了。 - # 避免 PHP readfile() 在 /blogs.dir/structure 子目录中 + # 避免对子目录中 /blogs.dir/ 结构执行 PHP readfile() location ^~ /blogs.dir { internal; alias /var/www/example.com/htdocs/wp-content/blogs.dir; @@ -414,8 +413,8 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单 
============================================================ - # 避免 PHP readfile() 在 /files/structure 子目录中 - location ~ ^(/[^/]+/)?files/(?.+) { + # 避免对子目录中 /files/ 结构执行 PHP readfile() + location ~ ^(/[^/]+/)?files/(?.+) { try_files /wp-content/blogs.dir/$blogid/files/$rt_file /wp-includes/ms-files.php?file=$rt_file; access_log off; log_not_found off; @@ -424,7 +423,7 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单 ============================================================ - # WPMU 文件结构的子域路径 + # 子域路径的WPMU 文件结构 location ~ ^/files/(.*)$ { try_files /wp-includes/ms-files.php?file=$1 =404; access_log off; @@ -434,7 +433,7 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单 ============================================================ - # 地图博客 ID 在特定的目录下 + # 映射博客 ID 到特定的目录 map $http_host $blogid { default 0; example.com 1; @@ -444,15 +443,15 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单 ### 结论 ### -可扩展性对许多站点的开发者来说是一项挑战,因为这会让他们在 WordPress 站点中取得成功。(对于那些想要跨越 WordPress 性能问题的新站点。)为 WordPress 添加缓存,并将 WordPress 和 NGINX 结合,是不错的答案。 +可扩展性对许多要让他们的 WordPress 站点取得成功的开发者来说是一项挑战。(对于那些想要跨越 WordPress 性能门槛的新站点而言。)为 WordPress 添加缓存,并将 WordPress 和 NGINX 结合,是不错的答案。 -NGINX 不仅对 WordPress 网站是有用的。世界上排名前 1000,10,000和100,000网站中 NGINX 也是作为 [领先的 web 服务器][45] 被使用。 +NGINX 不仅用于 WordPress 网站。世界上排名前 1000、10000 和 100000 网站中 NGINX 也是 [遥遥领先的 web 服务器][45]。 -欲了解更多有关 NGINX 的性能,请看我们最近的博客,[关于 10x 应用程序的 10 个技巧][46]。 +欲了解更多有关 NGINX 的性能,请看我们最近的博客,[让应用性能提升 10 倍的 10 个技巧][46]。 NGINX 软件有两个版本: -- NGINX 开源的软件 - 像 WordPress 一样,此软件你可以自行下载,配置和编译。 +- NGINX 开源软件 - 像 WordPress 一样,此软件你可以自行下载,配置和编译。 - NGINX Plus - NGINX Plus 包括一个预构建的参考版本的软件,以及服务和技术支持。 想要开始,先到 [nginx.org][47] 下载开源软件并了解下 [NGINX Plus][48]。 @@ -463,7 +462,7 @@ via: https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with- 作者:[Floyd Smith][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 
diff --git a/translated/tech/20151030 How To Install FreeBSD on Raspberry Pi 2 Model B.md b/published/201512/20151030 How To Install FreeBSD on Raspberry Pi 2 Model B.md similarity index 65% rename from translated/tech/20151030 How To Install FreeBSD on Raspberry Pi 2 Model B.md rename to published/201512/20151030 How To Install FreeBSD on Raspberry Pi 2 Model B.md index 941d3a2b72..6057efd5fb 100644 --- a/translated/tech/20151030 How To Install FreeBSD on Raspberry Pi 2 Model B.md +++ b/published/201512/20151030 How To Install FreeBSD on Raspberry Pi 2 Model B.md @@ -1,16 +1,15 @@ -如何在树莓派2 B型上安装 FreeBSD +如何在树莓派 2B 上安装 FreeBSD ================================================================================ -在树莓派2 B型上如何安装 FreeBSD 10 或 FreeBSD 11(current)?怎么在 Linux,OS X,FreeBSD 或类 Unix 操作系统上烧录 SD 卡? +在树莓派 2B 上如何安装 FreeBSD 10 或 FreeBSD 11(current)?怎么在 Linux,OS X,FreeBSD 或类 Unix 操作系统上烧录 SD 卡? -在树莓派2 B型上安装 FreeBSD 10或 FreeBSD 11(current)很容易。使用 FreeBSD 操作系统可以打造一个非常易用的 Unix 服务器。FreeBSD-CURRENT 自2012年十一月以来一直支持树莓派,2015年三月份后也开始支持树莓派2了。在这个快速教程中我将介绍如何在 RPI2 上安装 FreeBSD 11 current arm 版。 +在树莓派 2B 上安装 FreeBSD 10 或 FreeBSD 11(current)很容易。使用 FreeBSD 操作系统可以打造一个非常易用的 Unix 服务器。FreeBSD-CURRENT 自2012年十一月以来一直支持树莓派,2015年三月份后也开始支持树莓派2了。在这个快速教程中我将介绍如何在树莓派 2B 上安装 FreeBSD 11 current arm 版。 ### 1. 下载 FreeBSD-current 的 arm 镜像 ### 你可以 [访问这个页面来下载][1] 树莓派2的镜像。使用 wget 或 curl 命令来下载镜像: - $ wget ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/arm/armv6/ISO-IMAGES/11.0/FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img.xz 或 @@ -45,52 +44,51 @@ 1024+0 records out 1073741824 bytes transferred in 661.669584 secs (1622776 bytes/sec) -#### 使用 Linux/FreeBSD 或者 类 Unix 系统来烧录 FreeBSD-current #### +#### 使用 Linux/FreeBSD 或者类 Unix 系统来烧录 FreeBSD-current #### 语法是这样: $ dd if=FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img of=/dev/sdb bs=1M -确保使用实际 SD 卡的设备名称来替换 /dev/sdb 。 +**确保使用实际的 SD 卡的设备名称来替换 /dev/sdb**(LCTT 译注:千万注意不要写错了)。 ### 4. 
引导 FreeBSD ### -在树莓派2 B型上插入 SD 卡。你需要连接键盘,鼠标和显示器。我使用的是 USB 转串口线来连接显示器的: +在树莓派 2B 上插入 SD 卡。你需要连接键盘,鼠标和显示器。我使用的是 USB 转串口线来连接显示器的: ![Fig.01 RPi USB based serial connection](http://s0.cyberciti.org/uploads/faq/2015/10/Raspberry-Pi-2-Model-B.pin-out.jpg) - -图01 RPI 基于 USB 的串行连接 +*图01 基于树莓派 USB 的串行连接* 在下面的例子中,我使用 screen 命令来连接我的 RPI: - ## Linux version ## + ## Linux 上 ## screen /dev/tty.USB0 115200 - ## OS X version ## + ## OS X 上 ## screen /dev/cu.usbserial 115200 - ## Windows user use Putty.exe ## + ## Windows 请使用 Putty.exe ## FreeBSD RPI 启动输出样例: ![Gif 01: Booting FreeBSD-current on RPi 2](http://s0.cyberciti.org/uploads/faq/2015/10/freebsd-current-rpi.gif) -图01: 在 RPi 2上引导 FreeBSD-current +*图02: 在树莓派 2上引导 FreeBSD-current* ### 5. FreeBSD 在 RPi 2上的用户名和密码 ### 默认的密码是 freebsd/freebsd 和 root/root。 -到此为止, FreeBSD-current 已经安装并运行在 RPi 2上。 +到此为止, FreeBSD-current 已经安装并运行在树莓派 2上。 -------------------------------------------------------------------------------- via: http://www.cyberciti.biz/faq/how-to-install-freebsd-on-raspberry-pi-2-model-b/ 作者:[Vivek Gite][a] -译者:[译者ID](https://github.com/译者ID) -校对:[strugglingyouth](https://github.com/strugglingyouth) +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201512/20151104 How to Install Redis Server on CentOS 7.md b/published/201512/20151104 How to Install Redis Server on CentOS 7.md new file mode 100644 index 0000000000..90e090ef39 --- /dev/null +++ b/published/201512/20151104 How to Install Redis Server on CentOS 7.md @@ -0,0 +1,239 @@ +如何在 CentOS 7 上安装 Redis 服务器 +================================================================================ + +大家好,本文的主题是 Redis,我们将要在 CentOS 7 上安装它。编译源代码,安装二进制文件,创建、安装文件。在安装了它的组件之后,我们还会配置 redis ,就像配置操作系统参数一样,目标就是让 redis 运行的更加可靠和快速。 + +![Runnins Redis](http://blog.linoxide.com/wp-content/uploads/2015/10/run-redis-standalone.jpg) + +*Redis 服务器* + 
+Redis 是一个开源的多平台数据存储软件,使用 ANSI C 编写,直接在内存使用数据集,这使得它得以实现非常高的效率。Redis 支持多种编程语言,包括 Lua, C, Java, Python, Perl, PHP 和其他很多语言。redis 的代码量很小,只有约3万行,它只做“很少”的事,但是做的很好。尽管是在内存里工作,但是数据持久化的保存还是有的,而redis 的可靠性就很高,同时也支持集群,这些可以很好的保证你的数据安全。 + +### 构建 Redis ### + +redis 目前没有官方 RPM 安装包,我们需要从源代码编译,而为了要编译就需要安装 Make 和 GCC。 + +如果没有安装过 GCC 和 Make,那么就使用 yum 安装。 + + yum install gcc make + +从[官网][1]下载 tar 压缩包。 + + curl http://download.redis.io/releases/redis-3.0.4.tar.gz -o redis-3.0.4.tar.gz + +解压缩。 + + tar zxvf redis-3.0.4.tar.gz + +进入解压后的目录。 + + cd redis-3.0.4 + +使用Make 编译源文件。 + + make + +### 安装 ### + +进入源文件的目录。 + + cd src + +复制 Redis 的服务器和客户端到 /usr/local/bin。 + + cp redis-server redis-cli /usr/local/bin + +最好也把 sentinel,benchmark 和 check 复制过去。 + + cp redis-sentinel redis-benchmark redis-check-aof redis-check-dump /usr/local/bin + +创建redis 配置文件夹。 + + mkdir /etc/redis + +在`/var/lib/redis` 下创建有效的保存数据的目录 + + mkdir -p /var/lib/redis/6379 + +#### 系统参数 #### + +为了让 redis 正常工作需要配置一些内核参数。 + +配置 `vm.overcommit_memory` 为1,这可以避免数据被截断,详情[见此][2]。 + + sysctl -w vm.overcommit_memory=1 + +修改 backlog 连接数的最大值超过 redis.conf 中的 `tcp-backlog` 值,即默认值511。你可以在[kernel.org][3] 找到更多有关基于 sysctl 的 ip 网络隧道的信息。 + + sysctl -w net.core.somaxconn=512 + +取消对透明巨页内存(transparent huge pages)的支持,因为这会造成 redis 使用过程产生延时和内存访问问题。 + + echo never > /sys/kernel/mm/transparent_hugepage/enabled + +### redis.conf ### + +redis.conf 是 redis 的配置文件,然而你会看到这个文件的名字是 6379.conf ,而这个数字就是 redis 监听的网络端口。如果你想要运行超过一个的 redis 实例,推荐用这样的名字。 + +复制示例的 redis.conf 到 **/etc/redis/6379.conf**。 + + cp redis.conf /etc/redis/6379.conf + +现在编辑这个文件并且配置参数。 + + vi /etc/redis/6379.conf + +#### daemonize #### + +设置 `daemonize` 为 no,systemd 需要它运行在前台,否则 redis 会突然挂掉。 + + daemonize no + +#### pidfile #### + +设置 `pidfile` 为 /var/run/redis_6379.pid。 + + pidfile /var/run/redis_6379.pid + +#### port #### + +如果不准备用默认端口,可以修改。 + + port 6379 + +#### loglevel #### + +设置日志级别。 + + loglevel notice + +#### logfile #### + +修改日志文件路径。 + + logfile /var/log/redis_6379.log + +#### dir #### + 
+设置目录为 /var/lib/redis/6379 + + dir /var/lib/redis/6379 + +### 安全 ### + +下面有几个可以提高安全性的操作。 + +#### Unix sockets #### + +在很多情况下,客户端程序和服务器端程序运行在同一个机器上,所以不需要监听网络上的 socket。如果这和你的使用情况类似,你就可以使用 unix socket 替代网络 socket,为此你需要配置 `port` 为0,然后配置下面的选项来启用 unix socket。 + +设置 unix socket 的套接字文件。 + + unixsocket /tmp/redis.sock + +限制 socket 文件的权限。 + + unixsocketperm 700 + +现在为了让 redis-cli 可以访问,应该使用 -s 参数指向该 socket 文件。 + + redis-cli -s /tmp/redis.sock + +#### requirepass #### + +你可能需要远程访问,如果是,那么你应该设置密码,这样子每次操作之前要求输入密码。 + + requirepass "bTFBx1NYYWRMTUEyNHhsCg" + +#### rename-command #### + +想象一下如下指令的输出。是的,这会输出服务器的配置,所以你应该在任何可能的情况下拒绝这种访问。 + + CONFIG GET * + +为了限制甚至禁止这条或者其他指令可以使用 `rename-command` 命令。你必须提供一个命令名和替代的名字。要禁止的话需要设置替代的名字为空字符串,这样禁止任何人猜测命令的名字会比较安全。 + + rename-command FLUSHDB "FLUSHDB_MY_SALT_G0ES_HERE09u09u" + rename-command FLUSHALL "" + rename-command CONFIG "CONFIG_MY_S4LT_GO3S_HERE09u09u" + +![Access Redis through unix with password and command changes](http://blog.linoxide.com/wp-content/uploads/2015/10/redis-security-test.jpg) + +*使用密码通过 unix socket 访问,和修改命令* + +#### 快照 #### + +默认情况下,redis 会周期性的将数据集转储到我们设置的目录下的 **dump.rdb** 文件。你可以使用 `save` 命令配置转储的频率,它的第一个参数是以秒为单位的时间帧,第二个参数是在数据文件上进行修改的数量。 + +每隔15分钟并且最少修改过一次键。 + + save 900 1 + +每隔5分钟并且最少修改过10次键。 + + save 300 10 + +每隔1分钟并且最少修改过10000次键。 + + save 60 10000 + +文件 `/var/lib/redis/6379/dump.rdb` 包含了从上次保存以来内存里数据集的转储数据。因为它先创建临时文件然后替换之前的转储文件,这里不存在数据破坏的问题,你不用担心,可以直接复制这个文件。 + +### 开机时启动 ### + +你可以使用 systemd 将 redis 添加到系统开机启动列表。 + +复制示例的 init_script 文件到 `/etc/init.d`,注意脚本名所代表的端口号。 + + cp utils/redis_init_script /etc/init.d/redis_6379 + +现在我们要使用 systemd,所以在 `/etc/systems/system` 下创建一个单位文件名字为 `redis_6379.service`。 + + vi /etc/systemd/system/redis_6379.service + +填写下面的内容,详情可见 systemd.service。 + + [Unit] + Description=Redis on port 6379 + + [Service] + Type=forking + ExecStart=/etc/init.d/redis_6379 start + ExecStop=/etc/init.d/redis_6379 stop + + [Install] + WantedBy=multi-user.target + +现在添加我之前在 `/etc/sysctl.conf` 里面修改过的内存过量使用和 backlog 
最大值的选项。 + + vm.overcommit_memory = 1 + + net.core.somaxconn=512 + +对于透明巨页内存支持,并没有直接 sysctl 命令可以控制,所以需要将下面的命令放到 `/etc/rc.local` 的结尾。 + + echo never > /sys/kernel/mm/transparent_hugepage/enabled + +### 总结 ### + +这样就可以启动了,通过设置这些选项你就可以部署 redis 服务到很多简单的场景,然而在 redis.conf 还有很多为复杂环境准备的 redis 选项。在一些情况下,你可以使用 [replication][4] 和 [Sentinel][5] 来提高可用性,或者[将数据分散][6]在多个服务器上,创建服务器集群。 + +谢谢阅读。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/storage/install-redis-server-centos-7/ + +作者:[Carlos Alberto][a] +译者:[ezio](https://github.com/oska874) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/carlosal/ +[1]:http://redis.io/download +[2]:https://www.kernel.org/doc/Documentation/vm/overcommit-accounting +[3]:https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt +[4]:http://redis.io/topics/replication +[5]:http://redis.io/topics/sentinel +[6]:http://redis.io/topics/partitioning diff --git a/published/201512/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md b/published/201512/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md new file mode 100644 index 0000000000..964a9774ca --- /dev/null +++ b/published/201512/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md @@ -0,0 +1,35 @@ +开源开发者提交不安全代码,遭 Linus 炮轰 +================================================================================ +![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg) + +Linus 上个月骂了一个 Linux 开发者,原因是他向 kernel 提交了一份不安全的代码。 + +Linus 是个 Linux 内核项目非官方的“仁慈的独裁者(benevolent dictator)”(LCTT译注:英国《卫报》曾将乔布斯评价为‘仁慈的独裁者’),这意味着他有权决定将哪些代码合入内核,哪些代码直接丢掉。 + +在10月28号,一个开源开发者提交的代码未能符合 Torvalds 的要求,于是遭来了[一顿臭骂][1]。Torvalds 在他提交的代码下评论道:“你提交的是什么东西。” + +接着他说这个开发者是“毫无能力的神经病”。 + +Torvalds 
为什么会这么生气?他觉得那段代码可以写得更有效率一点,可读性更强一点,编译器编译后跑得更好一点(编译器的作用就是将让人看的代码翻译成让电脑看的代码)。 + +Torvalds 重新写了一版代码将原来的那份替换掉,并建议所有开发者应该像他那种风格来写代码。 + +Torvalds 一直在嘲讽那些不符合他观点的人。早在1991年他就攻击过 [Andrew Tanenbaum][2]——那个 Minix 操作系统的作者,而那个 Minix 操作系统被 Torvalds 描述为“脑残”。 + +但是 Torvalds 在这次嘲讽中表现得更有战略性了:“我想让*每个人*都知道,像他这种代码是完全不能被接收的。”他说他的目的是提醒每个 Linux 开发者,而不是针对那个开发者。 + +Torvalds 也用这个机会强调了烂代码的安全问题。现在的企业对安全问题很重视,所以安全问题需要在开源开发者心中得到足够重视,甚至需要在代码中表现为最高等级(LCTT 译注:操作系统必须权衡许多因素:安全、处理速度、灵活性、易用性等,而这里 Torvalds 将安全提升为最高优先级了)。骂一下那些提交不安全代码的开发者可以帮助提高 Linux 系统的安全性。 + +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse + +作者:[Christopher Tozzi][a] +译者:[bazz2](https://github.com/bazz2) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html +[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate diff --git a/published/201512/20151109 How to Monitor the Progress of a Linux Command Line Operation Using PV Command.md b/published/201512/20151109 How to Monitor the Progress of a Linux Command Line Operation Using PV Command.md new file mode 100644 index 0000000000..37e17b4266 --- /dev/null +++ b/published/201512/20151109 How to Monitor the Progress of a Linux Command Line Operation Using PV Command.md @@ -0,0 +1,80 @@ +如何使用 pv 命令监控 linux 命令的执行进度 +================================================================================ + +![](https://www.maketecheasier.com/assets/uploads/2015/11/pv-featured-1.jpg) + +如果你是一个 linux 系统管理员,那么毫无疑问你必须花费大量的工作时间在命令行上:安装和卸载软件,监视系统状态,复制、移动、删除文件,查错,等等。很多时候都是你输入一个命令,然后等待很长时间直到执行完成。也有的时候你执行的命令挂起了,而你只能猜测命令执行的实际情况。 + +通常 linux 
命令不提供和进度相关的信息,而这些信息特别重要,尤其当你只有有限的时间时。然而这并不意味着你是无助的——现在有一个命令,pv,它会显示当前在命令行执行的命令的进度信息。在本文我们会讨论它并用几个简单的例子说明其特性。 + +### PV 命令 ### + +[PV][1] 由Andrew Wood 开发,是 Pipe Viewer 的简称,意思是通过管道显示数据处理进度的信息。这些信息包括已经耗费的时间,完成的百分比(通过进度条显示),当前的速度,全部传输的数据,以及估计剩余的时间。 + +> "要使用 PV,需要配合合适的选项,把它放置在两个进程之间的管道。命令的标准输入将会通过标准输出传进来的,而进度会被输出到标准错误输出。” + +上述解释来自该命令的帮助页。 + +### 下载和安装 ### + +Debian 系的操作系统,如 Ubuntu,可以简单的使用下面的命令安装 PV: + + sudo apt-get install pv + +如果你使用了其他发行版本,你可以使用各自的包管理软件在你的系统上安装 PV。一旦 PV 安装好了你就可以在各种场合使用它(详见下文)。需要注意的是下面所有例子都使用的是 pv 1.2.0。 + +### 特性和用法 ### + +我们(在 linux 上使用命令行的用户)的大多数使用场景都会用到的命令是从一个 USB 驱动器拷贝电影文件到你的电脑。如果你使用 cp 来完成上面的任务,你会什么情况都不清楚,直到整个复制过程结束或者出错。 + +然而pv 命令在这种情景下很有帮助。比如: + + pv /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv + +输出如下: + +![pv-copy](https://www.maketecheasier.com/assets/uploads/2015/10/pv-copy.png) + +所以,如你所见,这个命令显示了很多和操作有关的有用信息,包括已经传输了的数据量,花费的时间,传输速率,进度条,进度的百分比,以及剩余的时间。 + +`pv` 命令提供了多种显示选项开关。比如,你可以使用`-p` 来显示百分比,`-t` 来显示时间,`-r` 表示传输速率,`-e` 代表eta(LCTT 译注:估计剩余的时间)。好事是你不必记住某一个选项,因为默认这几个选项都是启用的。但是,如果你只要其中某一个信息,那么可以通过控制这几个选项来完成任务。 + +这里还有一个`-n` 选项来允许 pv 命令显示整数百分比,在标准错误输出上每行显示一个数字,用来替代通常的可视进度条。下面是一个例子: + + pv -n /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv + +![pv-numeric](https://www.maketecheasier.com/assets/uploads/2015/10/pv-numeric.png) + +这个特殊的选项非常合适某些情境下的需求,如你想把用管道把输出传给[dialog][2] 命令。 + +接下来还有一个命令行选项,`-L` 可以让你修改 pv 命令的传输速率。举个例子,使用 -L 选项来限制传输速率为2MB/s。 + + pv -L 2m /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv + +![pv-ratelimit](https://www.maketecheasier.com/assets/uploads/2015/10/pv-ratelimit.png) + +如上图所见,数据传输速度按照我们的要求被限制了。 + +另一个pv 可以帮上忙的情景是压缩文件。这里有一个例子可以向你解释如何与压缩软件Gzip 一起工作。 + + pv /media/himanshu/1AC2-A8E3/fnf.mkv | gzip > ./Desktop/fnf.log.gz + +![pv-gzip](https://www.maketecheasier.com/assets/uploads/2015/10/pv-gzip.png) + +### 结论 ### + +如上所述,pv 是一个非常有用的小工具,它可以在命令没有按照预期执行的情况下帮你节省你宝贵的时间。而且这些显示的信息还可以用在 shell 脚本里。我强烈的推荐你使用这个命令,它值得你一试。 + +-------------------------------------------------------------------------------- + +via: 
https://www.maketecheasier.com/monitor-progress-linux-command-line-operation/ + +作者:[Himanshu Arora][a] +译者:[ezio](https://github.com/oska874) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/himanshu/ +[1]:http://linux.die.net/man/1/pv +[2]:http://linux.die.net/man/1/dialog diff --git a/translated/tech/20151109 How to Set Up AWStats On Ubuntu Server.md b/published/201512/20151109 How to Set Up AWStats On Ubuntu Server.md similarity index 84% rename from translated/tech/20151109 How to Set Up AWStats On Ubuntu Server.md rename to published/201512/20151109 How to Set Up AWStats On Ubuntu Server.md index 11bfdde3ab..7bea4e40d8 100644 --- a/translated/tech/20151109 How to Set Up AWStats On Ubuntu Server.md +++ b/published/201512/20151109 How to Set Up AWStats On Ubuntu Server.md @@ -1,16 +1,14 @@ - 如何在 Ubuntu 服务器中配置 AWStats ================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/10/Apache_awstats_featured.jpg) - -AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FTP 或邮件服务器统计图。此日志分析器以 CGI 或命令行方式进行工作,并在网页中以图表的形式尽可能的显示你日志中所有的信息。它采用的是部分信息文件,以便能够频繁并快速处理大量的日志文件。它支持绝大多数 Web 服务器日志文件格式,包括 Apache,IIS 等。 +AWStats 是一个开源的网站分析报告工具,可以生成强大的网站、流媒体、FTP 或邮件服务器的访问统计图。此日志分析器以 CGI 或命令行方式进行工作,并在网页中以图表的形式尽可能的显示你日志中所有的信息。它可以“部分”读取信息文件,以便能够频繁并快速处理大量的日志文件。它支持绝大多数 Web 服务器日志文件格式,包括 Apache,IIS 等。 本文将帮助你在 Ubuntu 上安装配置 AWStats。 ### 安装 AWStats 包 ### -默认情况下,AWStats 的包在 Ubuntu 仓库中。 +默认情况下,AWStats 的包可以在 Ubuntu 仓库中找到。 可以通过运行下面的命令来安装: @@ -18,7 +16,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FT 接下来,你需要启用 Apache 的 CGI 模块。 -运行以下命令来启动: +运行以下命令来启动 CGI: sudo a2enmod cgi @@ -38,7 +36,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FT sudo nano /etc/awstats/awstats.test.com.conf -像下面这样修改下: +像下面这样修改一下: # Change to Apache log file, by default it's /var/log/apache2/access.log LogFile="/var/log/apache2/access.log" @@ -73,6 +71,7 @@ AWStats 
是一个开源的网站分析报告工具,自带网络,流媒体,FT ### 测试 AWStats ### 现在,您可以通过访问 url “http://your-server-ip/cgi-bin/awstats.pl?config=test.com.” 来查看 AWStats 的页面。 + 它的页面像下面这样: ![awstats_page](https://www.maketecheasier.com/assets/uploads/2015/10/awstats_page.jpg) @@ -101,7 +100,7 @@ via: https://www.maketecheasier.com/set-up-awstats-ubuntu/ 作者:[Hitesh Jethva][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201512/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md b/published/201512/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md new file mode 100644 index 0000000000..040fb40302 --- /dev/null +++ b/published/201512/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md @@ -0,0 +1,313 @@ +在 Ubuntu 上安装世界上最先进的开源数据库 PostgreSQL 9.4 和 phpPgAdmin +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/05/postgresql.png) + +### 简介 ### + +[PostgreSQL][1] 是一款强大的,开源的,对象关系型数据库系统。它支持所有的主流操作系统,包括 Linux、Unix(AIX、BSD、HP-UX,SGI IRIX、Mac OS、Solaris、Tru64) 以及 Windows 操作系统。 + +下面是 **Ubuntu** 发起者 **Mark Shuttleworth** 对 PostgreSQL 的一段评价。 + +> PostgreSQL 是一款极赞的数据库系统。刚开始我们在 Launchpad 上使用它的时候,并不确定它能否胜任工作。但我是错了。它很强壮、快速,在各个方面都很专业。 +> +> — Mark Shuttleworth. 
+ +在这篇简短的指南中,让我们来看看如何在 Ubuntu 15.10 服务器中安装 PostgreSQL 9.4。 + +### 安装 PostgreSQL ### + +默认仓库中就有可用的 PostgreSQL。在终端中输入下面的命令安装它。 + + sudo apt-get install postgresql postgresql-contrib + +如果你需要其它的版本,按照下面那样先添加 PostgreSQL 仓库然后再安装。 + +**PostgreSQL apt 仓库** 支持 amd64 和 i386 架构的 Ubuntu 长期支持版(10.04、12.04 和 14.04),以及非长期支持版(14.04)。对于其它非长期支持版,该软件包虽然没有完全支持,但使用和 LTS 版本近似的也能正常工作。 + +#### Ubuntu 14.10 系统: #### + +新建文件**/etc/apt/sources.list.d/pgdg.list**; + + sudo vi /etc/apt/sources.list.d/pgdg.list + +用下面一行添加仓库: + + deb http://apt.postgresql.org/pub/repos/apt/ utopic-pgdg main + +**注意**: 上面的库只能用于 Ubuntu 14.10。还没有升级到 Ubuntu 15.04 和 15.10。 + +对于 **Ubuntu 14.04**,添加下面一行: + + deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main + +对于 **Ubuntu 12.04**,添加下面一行: + + deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main + +导入库签名密钥: + + wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc + + sudo apt-key add - + +更新软件包列表: + + sudo apt-get update + +然后安装需要的版本。 + + sudo apt-get install postgresql-9.4 + +### 访问 PostgreSQL 命令窗口 ### + +默认的数据库名称和数据库用户名称都是 “**postgres**”。切换到 postgres 用户进行 postgresql 相关的操作: + + sudo -u postgres psql postgres + +#### 示例输出: #### + + psql (9.4.5) + Type "help" for help. 
+ postgres=# + +要退出 postgresql 窗口,在 **psql** 窗口输入 **\q** 退出到终端。 + +### 设置 “postgres” 用户密码 ### + +登录到 postgresql 窗口, + + sudo -u postgres psql postgres + +用下面的命令为用户 postgres 设置密码: + + postgres=# \password postgres + Enter new password: + Enter it again: + postgres=# \q + +要安装 PostgreSQL Adminpack 扩展,在 postgresql 窗口输入下面的命令: + + sudo -u postgres psql postgres + +---------- + + postgres=# CREATE EXTENSION adminpack; + CREATE EXTENSION + +在 **psql** 窗口输入 **\q** 从 postgresql 窗口退回到终端。 + +### 创建新用户和数据库 ### + +例如,让我们创建一个新的用户,名为 “**senthil**”,密码是 “**ubuntu**”,以及名为 “**mydb**” 的数据库。 + + sudo -u postgres createuser -D -A -P senthil + +---------- + + sudo -u postgres createdb -O senthil mydb + +### 删除用户和数据库 ### + +要删除数据库,首先切换到 postgres 用户: + + sudo -u postgres psql postgres + +输入命令: + + $ drop database + +要删除一个用户,输入下面的命令: + + $ drop user + +### 配置 PostgreSQL-MD5 验证 ### + +**MD5 验证** 要求用户提供一个 MD5 加密的密码用于认证。首先编辑 **/etc/postgresql/9.4/main/pg_hba.conf** 文件: + + sudo vi /etc/postgresql/9.4/main/pg_hba.conf + +按照下面所示添加或修改行 + + [...] + # TYPE DATABASE USER ADDRESS METHOD + # "local" is for Unix domain socket connections only + local all all md5 + # IPv4 local connections: + host all all 127.0.0.1/32 md5 + host all all 192.168.1.0/24 md5 + # IPv6 local connections: + host all all ::1/128 md5 + [...] + +其中, 192.168.1.0/24 是我的本地网络 IP 地址。用你自己的地址替换。 + +重启 postgresql 服务以使更改生效: + + sudo systemctl restart postgresql + +或者, + + sudo service postgresql restart + +### 配置 PostgreSQL TCP/IP 配置 ### + +默认情况下,没有启用 TCP/IP 连接,因此其它计算机的用户不能访问 postgresql。为了允许其它计算机的用户访问,编辑文件 **/etc/postgresql/9.4/main/postgresql.conf:** + + sudo vi /etc/postgresql/9.4/main/postgresql.conf + +找到下面一行: + + [...] + #listen_addresses = 'localhost' + [...] + #port = 5432 + [...] + +取消该行的注释,然后设置你 postgresql 服务器的 IP 地址,或者设置为 ‘*’ 监听所有用户。你应该谨慎设置所有远程用户都可以访问 PostgreSQL。 + + [...] + listen_addresses = '*' + [...] + port = 5432 + [...] 
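上面这几处对 postgresql.conf 的修改也可以写成脚本自动完成。下面是一个示意性的小例子:为了便于演示,它先生成一个与默认配置格式相同的临时示例文件,再用 sed 取消注释并改写取值。注意 `/tmp/postgresql.conf.sample` 只是演示用的假设路径,实际操作时请换成你自己的 `/etc/postgresql/9.4/main/postgresql.conf`,并事先做好备份:

```shell
# 构造一个与默认 postgresql.conf 片段格式相同的示例文件(仅作演示)
cat > /tmp/postgresql.conf.sample <<'EOF'
#listen_addresses = 'localhost'
#port = 5432
EOF

# 取消注释,并把 listen_addresses 设置为 '*'(监听所有地址)
sed -i "s/^#listen_addresses = 'localhost'/listen_addresses = '*'/" /tmp/postgresql.conf.sample
sed -i "s/^#port = 5432/port = 5432/" /tmp/postgresql.conf.sample

cat /tmp/postgresql.conf.sample
# 输出:
# listen_addresses = '*'
# port = 5432
```

和手工编辑相比,这种脚本化的改法在批量部署多台服务器时更不容易漏改或改错。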
+ +重启 postgresql 服务保存更改: + + sudo systemctl restart postgresql + +或者, + + sudo service postgresql restart + +### 用 phpPgAdmin 管理 PostgreSQL ### + +[**phpPgAdmin**][2] 是基于 web 用 PHP 写的 PostgreSQL 管理工具。 + +默认仓库中有可用的 phpPgAdmin。用下面的命令安装 phpPgAdmin: + + sudo apt-get install phppgadmin + +默认情况下,你可以在本地系统的 web 浏览器用 **http://localhost/phppgadmin** 访问 phppgadmin。 + +要访问远程系统,在 Ubuntu 15.10 上做如下操作: + +编辑文件 **/etc/apache2/conf-available/phppgadmin.conf**, + + sudo vi /etc/apache2/conf-available/phppgadmin.conf + +找到 **Require local** 的一行在这行前面添加 **#** 注释掉它。 + + #Require local + +添加下面的一行: + + allow from all + +保存并退出文件。 + +然后重启 apache 服务。 + + sudo systemctl restart apache2 + +对于 Ubuntu 14.10 及之前版本: + +编辑 **/etc/apache2/conf.d/phppgadmin**: + + sudo nano /etc/apache2/conf.d/phppgadmin + +注释掉下面一行: + + [...] + #allow from 127.0.0.0/255.0.0.0 ::1/128 + +取消下面一行的注释使所有系统都可以访问 phppgadmin。 + + allow from all + +编辑 **/etc/apache2/apache2.conf**: + + sudo vi /etc/apache2/apache2.conf + +添加下面一行: + + Include /etc/apache2/conf.d/phppgadmin + +然后重启 apache 服务。 + + sudo service apache2 restart + +### 配置 phpPgAdmin ### + +编辑文件 **/etc/phppgadmin/config.inc.php**, 做以下更改。下面大部分选项都带有解释。认真阅读以便了解为什么要更改这些值。 + + sudo nano /etc/phppgadmin/config.inc.php + +找到下面一行: + + $conf['servers'][0]['host'] = ''; + +按照下面这样更改: + + $conf['servers'][0]['host'] = 'localhost'; + +找到这一行: + + $conf['extra_login_security'] = true; + +更改值为 **false**。 + + $conf['extra_login_security'] = false; + +找到这一行: + + $conf['owned_only'] = false; + +更改值为 **true**。 + + $conf['owned_only'] = true; + +保存并关闭文件。重启 postgresql 服务和 Apache 服务。 + + sudo systemctl restart postgresql + + sudo systemctl restart apache2 + +或者, + + sudo service postgresql restart + + sudo service apache2 restart + +现在打开你的浏览器并导航到 **http://ip-address/phppgadmin**。你会看到以下截图。 + +![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_001.jpg) + +用你之前创建的用户登录。我之前已经创建了一个名为 “**senthil**” 的用户,密码是 “**ubuntu**”,因此我以 “senthil” 用户登录。 + 
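顺带一提,前面对 config.inc.php 的三处修改同样可以用一条 sed 命令一次完成。下面是一个示意性片段,在一个临时示例文件(`/tmp/config.inc.php.sample`,仅为演示而假设的路径)上重现这三处改动;实际使用时请换成 `/etc/phppgadmin/config.inc.php` 并先备份:

```shell
# 构造一个包含默认取值的 config.inc.php 片段(仅作演示)
cat > /tmp/config.inc.php.sample <<'EOF'
$conf['servers'][0]['host'] = '';
$conf['extra_login_security'] = true;
$conf['owned_only'] = false;
EOF

# 按文中要求一次性完成三处修改
sed -i \
    -e "s/\['host'\] = '';/['host'] = 'localhost';/" \
    -e "s/\['extra_login_security'\] = true;/['extra_login_security'] = false;/" \
    -e "s/\['owned_only'\] = false;/['owned_only'] = true;/" \
    /tmp/config.inc.php.sample

cat /tmp/config.inc.php.sample
# 输出:
# $conf['servers'][0]['host'] = 'localhost';
# $conf['extra_login_security'] = false;
# $conf['owned_only'] = true;
```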
+![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_002.jpg) + +然后你就可以访问 phppgadmin 面板了。 + +![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_003.jpg) + +用 postgres 用户登录: + +![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_004.jpg) + +就是这样。现在你可以用 phppgadmin 可视化创建、删除或者更改数据库了。 + +加油! + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/install-postgresql-9-4-and-phppgadmin-on-ubuntu-15-10/ + +作者:[SK][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.twitter.com/ostechnix +[1]:http://www.postgresql.org/ +[2]:http://phppgadmin.sourceforge.net/doku.php \ No newline at end of file diff --git a/published/201512/20151123 7 ways hackers can use Wi-Fi against you.md b/published/201512/20151123 7 ways hackers can use Wi-Fi against you.md new file mode 100644 index 0000000000..d8794383c7 --- /dev/null +++ b/published/201512/20151123 7 ways hackers can use Wi-Fi against you.md @@ -0,0 +1,71 @@ +黑客利用 Wi-Fi 攻击你的七种方法 +================================================================================ +![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/intro_title-100626673-orig.jpg) + +### 黑客利用 Wi-Fi 侵犯你隐私的七种方法 ### + +Wi-Fi — 啊,你是如此的方便,却又如此的危险! 
+ +这里给大家介绍一下通过Wi-Fi连接“慷慨捐赠”你的身份信息的七种方法和反制措施。 + +![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/1_free-hotspots-100626674-orig.jpg) + +### 利用免费热点 ### + +它们似乎无处不在,而且它们的数量会在[接下来四年里增加三倍][1]。但是它们当中很多都是不值得信任的,从你的登录凭证、email 甚至更加敏感的账户,都能被黑客用“嗅探器(sniffers)”软件截获 — 这种软件能截获到任何你通过该连接提交的信息。防止被黑客盯上的最好办法就是使用VPN(虚拟私有网virtual private network),它加密了你所输入的信息,因此能够保护你的数据隐私。 + +![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/2_online-banking-100626675-orig.jpg) + +### 网上银行 ### + +你可能认为没有人需要被提醒不要使用免费 Wi-Fi 来操作网上银行, 但网络安全厂商卡巴斯基实验室表示**[全球超过100家银行因为网络黑客而损失9亿美元][2]**,由此可见还是有很多人因此受害。如果你确信一家咖啡店的免费 Wi-Fi 是正规的,想要连接它,那么你应该向服务员确认网络名称。[其他人在店里用路由器设置一个开放的无线连接][3],并将它的网络名称设置成店名是一件相当简单的事。 + +![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/3_keeping-wifi-on-100626676-orig.jpg) + +### 始终开着 Wi-Fi 开关 ### + +如果你手机的 Wi-Fi 开关一直开着的,你会自动被连接到一个不安全的网络中去,你甚至都没有意识到。你可以利用你手机中[基于位置的 Wi-Fi 功能][4],如果有这种功能的话,那它会在你离开你所保存的网络范围后自动关闭你的 Wi-Fi 开关并在你回去之后再次开启。 + +![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/4_not-using-firewall-100626677-orig.jpg) + +### 不使用防火墙 ### + +防火墙是你的第一道抵御恶意入侵的防线,它能有效地让你的电脑网络保持通畅并阻挡黑客和恶意软件。你应该时刻开启它除非你的杀毒软件有它自己的防火墙。 + +![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/5_browsing-unencrypted-sites-100626678-orig.jpg) + +### 浏览非加密网页 ### + +说起来很难过,**[世界上排名前100万个网站中55%是不加密的][5]**,一个未加密的网站会让一切传输数据暴露在黑客的眼中。如果一个网页是安全的,你的浏览器则会有标明(比如说火狐浏览器是一把灰色的挂锁,Chrome 浏览器则是个绿锁图标)。但是即使是安全的网站不能让你免于被劫持的风险,他们能通过公共网络从你访问过的网站上窃取 cookies,无论是不是正规网站。 + +![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/6_updating-security-software-100626679-orig.jpg) + +### 不更新你的安全防护软件 ### + +如果你想要确保你自己的网络是受保护的,就更新路由器固件。你要做的就是进入你的路由器管理页面去检查,通常你能在厂商的官方网页上下载到最新的固件版本。 + +![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/7_securing-home-wifi-100626680-orig.jpg) + +### 不保护你的家用 Wi-Fi ### + +不用说,设置一个复杂的密码和更改无线连接的默认名都是非常重要的。你还可以过滤你的 MAC 
地址来让你的路由器只识别那些确认过的设备。 + +本文作者 **Josh Althuser** 是一个开源支持者、网络架构师和科技企业家。在过去12年里,他花了很多时间去倡导使用开源软件来管理团队和项目,同时为网络应用程序提供企业级咨询并帮助它们把产品推向市场。你可以通过[他的推特][6]联系他。 + +-------------------------------------------------------------------------------- + +via: http://www.networkworld.com/article/3003170/mobile-security/7-ways-hackers-can-use-wi-fi-against-you.html + +作者:[Josh Althuser][a] +译者:[ZTinoZ](https://github.com/ZTinoZ) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://twitter.com/JoshAlthuser +[1]:http://www.pcworld.com/article/243464/number_of_wifi_hotspots_to_quadruple_by_2015_says_study.html +[2]:http://www.nytimes.com/2015/02/15/world/bank-hackers-steal-millions-via-malware.html?hp&action=click&pgtype=Homepage&module=first-column-region%C2%AEion=top-news&WT.nav=top-news&_r=3 +[3]:http://news.yahoo.com/blogs/upgrade-your-life/banking-online-not-hacked-182159934.html +[4]:http://pocketnow.com/2014/10/15/should-you-leave-your-smartphones-wifi-on-or-turn-it-off +[5]:http://www.cnet.com/news/chrome-becoming-tool-in-googles-push-for-encrypted-web/ +[6]:https://twitter.com/JoshAlthuser diff --git a/published/201512/20151123 How to access Dropbox from the command line in Linux.md b/published/201512/20151123 How to access Dropbox from the command line in Linux.md new file mode 100644 index 0000000000..3c452ac4f6 --- /dev/null +++ b/published/201512/20151123 How to access Dropbox from the command line in Linux.md @@ -0,0 +1,97 @@ +Linux 中如何通过命令行访问 Dropbox +================================================================================ +在当今这个多设备的环境下,云存储无处不在。无论身处何方,人们都想通过多种设备来从云存储中获取所需的内容。由于拥有漂亮的 UI 和完美的跨平台兼容性,Dropbox 已成为最为广泛使用的云存储服务。 Dropbox 的流行已引发了一系列官方或非官方 Dropbox 客户端的出现,它们支持不同的操作系统平台。 + +当然 Linux 平台下也有着自己的 Dropbox 客户端: 既有命令行的,也有图形界面客户端。[Dropbox Uploader][1] 是一个简单易用的 Dropbox 命令行客户端,它是用 Bash 脚本语言所编写的(LCTT 译注:对,你没看错, 就是 Bash)。在这篇教程中,我将描述 **在 Linux 中如何使用 Dropbox Uploader 
通过命令行来访问 Dropbox**。 + +### Linux 中安装和配置 Dropbox Uploader ### + +要使用 Dropbox Uploader,你需要下载该脚本并使其可被执行。 + + $ wget https://raw.github.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh + $ chmod +x dropbox_uploader.sh + +请确保你已经在系统中安装了 `curl`,因为 Dropbox Uploader 通过 curl 来运行 Dropbox 的 API。 + +要配置 Dropbox Uploader,只需运行 dropbox_uploader.sh 即可。当你第一次运行这个脚本时,它将请求得到授权以使得脚本可以访问你的 Dropbox 账户。 + + $ ./dropbox_uploader.sh + +![](https://c2.staticflickr.com/6/5739/22860931599_10c08ff15f_c.jpg) + +如上图所指示的那样,你需要通过浏览器访问 [https://www.dropbox.com/developers/apps][2] 页面,并创建一个新的 Dropbox app。接着像下图那样填入新 app 的相关信息,并输入 app 的名称,它与 Dropbox Uploader 所生成的 app 名称类似。 + +![](https://c2.staticflickr.com/6/5745/22932921350_4123d2dbee_c.jpg) + +在你创建好一个新的 app 之后,你将在下一个页面看到 app key 和 app secret。请记住它们。 + +![](https://c1.staticflickr.com/1/736/22932962610_7db51aa718_c.jpg) + +然后在正运行着 dropbox_uploader.sh 的终端窗口中输入 app key 和 app secret。然后 dropbox_uploader.sh 将产生一个 oAUTH 网址(例如,https://www.dropbox.com/1/oauth/authorize?oauth_token=XXXXXXXXXXXX)。 + +![](https://c1.staticflickr.com/1/563/22601635533_423738baed_c.jpg) + +接着通过浏览器访问那个 oAUTH 网址,并同意访问你的 Dropbox 账户。 + +![](https://c1.staticflickr.com/1/675/23202598606_6110c1a31b_c.jpg) + +这便完成了 Dropbox Uploader 的配置。若要确认 Dropbox Uploader 是否真的被成功地认证了,可以运行下面的命令。 + + $ ./dropbox_uploader.sh info + +---------- + + Dropbox Uploader v0.12 + + > Getting info... 
+ + Name: Dan Nanni + UID: XXXXXXXXXX + Email: my@email_address + Quota: 2048 Mb + Used: 13 Mb + Free: 2034 Mb + +### Dropbox Uploader 示例 ### + +要显示根目录中的所有内容,运行: + + $ ./dropbox_uploader.sh list + +要列出某个特定文件夹中的所有内容,运行: + + $ ./dropbox_uploader.sh list Documents/manuals + +要上传一个本地文件到一个远程的 Dropbox 文件夹,使用: + + $ ./dropbox_uploader.sh upload snort.pdf Documents/manuals + +要从 Dropbox 下载一个远程的文件到本地,使用: + + $ ./dropbox_uploader.sh download Documents/manuals/mysql.pdf ./mysql.pdf + +要从 Dropbox 下载一个完整的远程文件夹到一个本地的文件夹,运行: + + $ ./dropbox_uploader.sh download Documents/manuals ./manuals + +要在 Dropbox 上创建一个新的远程文件夹,使用: + + $ ./dropbox_uploader.sh mkdir Documents/whitepapers + +要完全删除 Dropbox 中某个远程的文件夹(包括它所含的所有内容),运行: + + $ ./dropbox_uploader.sh delete Documents/manuals + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/access-dropbox-command-line-linux.html + +作者:[Dan Nanni][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://www.andreafabrizi.it/?dropbox_uploader +[2]:https://www.dropbox.com/developers/apps diff --git a/published/201512/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md b/published/201512/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md new file mode 100644 index 0000000000..c851ce0da2 --- /dev/null +++ b/published/201512/20151123 How to install Android Studio on Ubuntu 15.04 or CentOS 7.md @@ -0,0 +1,129 @@ +如何在 Ubuntu 15.04 / CentOS 7 上安装 Android Studio +================================================================================ +随着最近几年智能手机的进步,安卓成为了最大的手机平台之一,在开发安卓应用中所用到的所有工具也都可以免费得到。Android Studio 是基于 [IntelliJ IDEA][1] 用于开发安卓应用的集成开发环境(IDE)。它是 Google 2014 年发布的免费开源软件,继 Eclipse 之后成为主要的 IDE。 + +在这篇文章,我们一起来学习如何在 Ubuntu 15.04 和 CentOS 7 上安装 Android Studio。 + +### 在 Ubuntu 15.04 上安装 
### + +我们可以用两种方式安装 Android Studio。第一种是配置所需的库然后再安装它;另一种是从 Android 官方网站下载然后在本地编译安装。在下面的例子中,我们会使用命令行设置库并安装它。在继续下一步之前,我们需要确保我们已经安装了 JDK 1.6 或者更新版本。 + +这里,我打算安装 JDK 1.8。 + + $ sudo add-apt-repository ppa:webupd8team/java + $ sudo apt-get update + $ sudo apt-get install oracle-java8-installer oracle-java8-set-default + +验证 java 是否安装成功: + + poornima@poornima-Lenovo:~$ java -version + +现在,设置安装 Android Studio 需要的库 + + $ sudo apt-add-repository ppa:paolorotolo/android-studio + +![Android-Studio-repo](http://blog.linoxide.com/wp-content/uploads/2015/11/Android-studio-repo.png) + + $ sudo apt-get update + $ sudo apt-get install android-studio + +上面的安装命令会在 /opt 目录下面安装 Android Studio。 + +现在,运行下面的命令启动安装向导: + + $ /opt/android-studio/bin/studio.sh + +这会激活安装窗口。下面的截图展示了安装 Android Studio 的过程。 + +![安装 Android Studio](http://blog.linoxide.com/wp-content/uploads/2015/11/Studio-setup.png) + +![安装类型](http://blog.linoxide.com/wp-content/uploads/2015/11/Install-type.png) + +![设置模拟器](http://blog.linoxide.com/wp-content/uploads/2015/11/Emulator-settings.png) + +你点击了 Finish 按钮之后,就会显示同意协议页面。当你接受协议之后,它就开始下载需要的组件。 + +![下载组件](http://blog.linoxide.com/wp-content/uploads/2015/11/Download.png) + +这一步完成之后就结束了 Android Studio 的安装。当你重启 Android Studio 时,你会看到下面的欢迎界面,从这里你可以开始用 Android Studio 工作了。 + +![欢迎界面](http://blog.linoxide.com/wp-content/uploads/2015/11/Welcome-screen.png) + +### 在 CentOS 7 上安装 ### + +现在再让我们来看看如何在 CentOS 7 上安装 Android Studio。这里你同样需要安装 JDK 1.6 或者更新版本。如果你不是 root 用户,记得在命令前面使用 ‘sudo’。你可以下载[最新版本][2]的 JDK。如果你已经安装了一个比较旧的版本,在安装新的版本之前你需要先卸载旧版本。在下面的例子中,我会通过下载需要的 rpm 包安装 JDK 1.8.0_65。 + + [root@li1260-39 ~]# rpm -ivh jdk-8u65-linux-x64.rpm + Preparing... ################################# [100%] + Updating / installing... + 1:jdk1.8.0_65-2000:1.8.0_65-fcs ################################# [100%] + Unpacking JAR files... + tools.jar... + plugin.jar... + javaws.jar... + deploy.jar... + rt.jar... + jsse.jar... + charsets.jar... + localedata.jar... + jfxrt.jar... 
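RPM 安装完成后,还需要让 shell 能找到这个 JDK。下面是一个示意性片段,演示设置 JAVA_HOME 并把它的 bin 子目录加入 PATH 的常见写法。为了让例子可以独立运行,这里用临时目录模拟 JDK 的安装位置,实际请换成类似 `/usr/java/jdk1.8.0_65` 的真实目录:

```shell
# 用临时目录模拟 JDK 的安装位置(仅作演示,实际应为真实的 JDK 目录)
JDK_DIR=$(mktemp -d)
mkdir -p "$JDK_DIR/bin"

# 常见写法:JAVA_HOME 指向 JDK 根目录,PATH 中追加的是其 bin 子目录
export JAVA_HOME="$JDK_DIR"
export PATH="$JAVA_HOME/bin:$PATH"

echo "JAVA_HOME=$JAVA_HOME"
```

值得注意的是,加入 PATH 的应该是 `$JAVA_HOME/bin` 而不是 `$JAVA_HOME` 本身,因为 java、javac 等可执行文件都位于 bin 子目录下;把这两行 export 写进 `/etc/profile` 或 `~/.bashrc`,可以让设置在重新登录后依然生效。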
+ +如果没有正确设置 Java 路径,你会看到错误信息。因此,设置正确的路径: + + export JAVA_HOME=/usr/java/jdk1.8.0_25/ + export PATH=$PATH:$JAVA_HOME + +检查是否安装了正确的版本: + + [root@li1260-39 ~]# java -version + java version "1.8.0_65" + Java(TM) SE Runtime Environment (build 1.8.0_65-b17) + Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode) + +如果你安装 Android Studio 的时候看到任何类似 “unable-to-run-mksdcard-sdk-tool:” 的错误信息,你可能要在 CentOS 7 64 位系统中安装以下软件包: + + - glibc.i686 + - glibc-devel.i686 + - libstdc++.i686 + - zlib-devel.i686 + - ncurses-devel.i686 + - libX11-devel.i686 + - libXrender.i686 + - libXrandr.i686 + +通过从 [Android 网站][3] 下载 IDE 文件然后解压安装 studio 也是一样的。 + + [root@li1260-39 tmp]# unzip android-studio-ide-141.2343393-linux.zip + +移动 android-studio 目录到 /opt 目录 + + [root@li1260-39 tmp]# mv /tmp/android-studio/ /opt/ + +需要的话你可以创建一个到 studio 可执行文件的符号链接用于快速启动。 + + [root@li1260-39 tmp]# ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/android-studio + +现在在终端中启动 studio: + + [root@localhost ~]#studio + +之后用于完成安装的截图和前面 Ubuntu 安装过程中的是一样的。安装完成后,你就可以开始开发你自己的 Android 应用了。 + +### 总结 ### + +虽然发布不到一年,但是 Android Studio 已经替代 Eclipse 成为了 Android 的开发最主要的 IDE。它是唯一能支持 Google 之后将要提供的 Android SDK 和其它 Android 特性的官方 IDE 工具。那么,你还在等什么呢?赶快安装 Android Studio 来体验开发 Android 应用的乐趣吧。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/tools/install-android-studio-ubuntu-15-04-centos-7/ + +作者:[B N Poornima][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/bnpoornima/ +[1]:https://www.jetbrains.com/idea/ +[2]:http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html +[3]:http://developer.android.com/sdk/index.html diff --git a/published/201512/20151123 LNAV--Ncurses based log file viewer.md b/published/201512/20151123 LNAV--Ncurses based log file viewer.md new file 
mode 100644 index 0000000000..d51ebe8e76 --- /dev/null +++ b/published/201512/20151123 LNAV--Ncurses based log file viewer.md @@ -0,0 +1,81 @@ +LNAV:基于 Ncurses 的日志文件阅读器 +================================================================================ +日志文件导航器(Logfile Navigator,简称 lnav),是一个基于 curses 的,用于查看和分析日志文件的工具。和文本阅读器/编辑器相比, lnav 的好处是它充分利用了可以从日志文件中获取的语义信息,例如时间戳和日志等级。利用这些额外的语义信息, lnav 可以处理像这样的事情:来自不同文件的交错的信息;按照时间生成信息直方图;支持在文件中导航的快捷键。它希望使用这些功能可以使得用户可以快速有效地定位和解决问题。 + +### lnav 功能 ### + +#### 支持以下日志文件格式: #### + +Syslog、Apache 访问日志、strace、tcsh 历史以及常见的带时间戳的日志文件。读入文件的时候回自动检测文件格式。 + +#### 直方图视图: #### + +以时间区划来显示日志信息数量。这对于大概了解在一长段时间内发生了什么非常有用。 + +#### 过滤器: #### + +只显示那些匹配或不匹配一些正则表达式的行。对于移除大量你不感兴趣的日志行非常有用。 + +#### 即时操作: #### + +在你输入到时候会同时完成检索;当添加了新日志行的时候会自动加载和搜索;加载行的时候会应用过滤器;另外,还会在你输入 SQL 查询的时候检查其正确性。 + +#### 自动显示后文: #### + +日志文件视图会自动往下滚动到新添加到文件中的行。只需要向上滚动就可以锁定当前视图,然后向下滚动到底部恢复显示后文。 + +#### 按照日期顺序排序行: #### + +从所有文件中加载的日志行会按照日期进行排序。使得你不需要手动从不同文件中收集日志信息。 + +#### 语法高亮: #### + +错误和警告会用红色和黄色显示。高亮还可用于: SQL 关键字、XML 标签、Java 文件行号和括起来的字符串。 + +#### 导航: #### + +有快捷键用于跳转到下一个或上一个错误或警告,按照指定的时间向后或向前翻页。 + +#### 用 SQL 查询日志: #### + +每个日志文件行都相当于数据库中的一行,可以使用 SQL 进行查询。可以使用的列取决于查看的日志文件类型。 + +#### 命令和搜索历史: #### + +会自动保存你之前输入的命令和搜素,因此你可以在会话之间使用它们。 + +#### 压缩文件: #### + +会实时自动检测和解压压缩的日志文件。 + +### 在 ubuntu 15.10 上安装 lnav #### + +打开终端运行下面的命令 + + sudo apt-get install lnav + +### 使用 lnav ### + +如果你想使用 lnav 查看日志,你可以使用下面的命令,默认它会显示 syslogs + + lnav + +![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/51.png) + +如果你想查看特定的日志,那么需要指定路径。如果你想看 CPU 日志,在你的终端里运行下面的命令 + + lnav /var/log/cups + +![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/6.png) + +-------------------------------------------------------------------------------- + +via: http://www.ubuntugeek.com/lnav-ncurses-based-log-file-viewer.html + +作者:[ruchi][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + 
+[a]:http://www.ubuntugeek.com/author/ubuntufix diff --git a/published/201512/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md b/published/201512/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md new file mode 100644 index 0000000000..7c2e304403 --- /dev/null +++ b/published/201512/20151125 How to Install GIMP 2.8.16 in Ubuntu 16.04 or 15.10 or 14.04.md @@ -0,0 +1,59 @@ +如何在 Ubuntu 16.04,15.10,14.04 中安装 GIMP 2.8.16 +================================================================================ +![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png) + +GIMP 图像编辑器 2.8.16 版本在其20岁生日时发布了。下面是如何安装或升级 GIMP 在 Ubuntu 16.04, Ubuntu 15.10, Ubuntu 14.04, Ubuntu 12.04 及其衍生版本中,如 Linux Mint 17.x/13, Elementary OS Freya。 + +GIMP 2.8.16 支持 OpenRaster 文件中的层组,修复了 PSD 中的层组支持以及各种用户界面改进,修复了 OSX 上的构建系统,以及更多新的变化。请阅读 [官方声明][1]。 + +![GIMP image editor 2.8,16](http://ubuntuhandbook.org/wp-content/uploads/2014/08/gimp-2-8-14.jpg) + +### 如何安装或升级: ### + +多亏了 Otto Meier,[Ubuntu PPA][2] 中最新的 GIMP 包可用于当前所有的 Ubuntu 版本和其衍生版。 + +**1. 添加 GIMP PPA** + +从 Unity Dash 中打开终端,或通过 Ctrl+Alt+T 快捷键打开。在它打开它后,粘贴下面的命令并回车: + + sudo add-apt-repository ppa:otto-kesselgulasch/gimp + +![add GIMP PPA](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-ppa.jpg) + +输入你的密码,密码不会在终端显示,然后回车继续。 + +**2. 安装或升级编辑器** + +在添加了 PPA 后,启动 **Software Updater**(在 Mint 中是 Software Manager)。检查更新后,你将看到 GIMP 的更新列表。点击 “Install Now” 进行升级。 + +![upgrade-gimp2816](http://ubuntuhandbook.org/wp-content/uploads/2015/11/upgrade-gimp2816.jpg) + +对于那些喜欢 Linux 命令的,按顺序执行下面的命令,刷新仓库的缓存然后安装 GIMP: + + sudo apt-get update + + sudo apt-get install gimp + +**3. (可选的) 卸载** + +如果你想卸载或降级 GIMP 图像编辑器。从软件中心直接删除它,或者按顺序运行下面的命令来将 PPA 清除并降级软件: + + sudo apt-get install ppa-purge + + sudo ppa-purge ppa:otto-kesselgulasch/gimp + +就这样。玩的愉快! 
+ +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-ubuntu-16-04-15-10-14-04/ + +作者:[Ji m][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/ +[2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp diff --git a/published/201512/20151125 The tar command explained.md b/published/201512/20151125 The tar command explained.md new file mode 100644 index 0000000000..22244bf89c --- /dev/null +++ b/published/201512/20151125 The tar command explained.md @@ -0,0 +1,143 @@ +tar 命令使用介绍 +================================================================================ +Linux [tar][1] 命令是归档或分发文件时的强大武器。GNU tar 归档包可以包含多个文件和目录,还能保留其文件权限,它还支持多种压缩格式。Tar 表示 "**T**ape **Ar**chiver",这种格式是 POSIX 标准。 + +### Tar 文件格式 ### + +tar 压缩格式简介: + +- **无压缩** 没有压缩的文件用 .tar 结尾。 +- **Gzip 压缩** Gzip 格式是 tar 使用最广泛的压缩格式,它能快速压缩和提取文件。用 gzip 压缩的文件通常用 .tar.gz 或 .tgz 结尾。这里有一些如何[创建][2]和[解压][3] tar.gz 文件的例子。 +- **Bzip2 压缩** 和 Gzip 格式相比 Bzip2 提供了更好的压缩比。创建压缩文件也比较慢,通常采用 .tar.bz2 结尾。 +- **Lzip(LZMA)压缩** Lzip 压缩结合了 Gzip 快速的优势,以及和 Bzip2 类似(甚至更好)的压缩率。尽管有这些好处,这个格式并没有得到广泛使用。 +- **Lzop 压缩** 这个压缩选项也许是 tar 最快的压缩格式,它的压缩率和 gzip 类似,但也没有广泛使用。 + +常见的格式是 tar.gz 和 tar.bz2。如果你想快速压缩,那么就使用 gzip。如果归档文件大小比较重要,就使用 tar.bz2。 + +### tar 命令用来干什么?
### + +下面是一些使用 tar 命令的常见情形。 + +- 备份服务器或桌面系统 +- 文档归档 +- 软件分发 + +### 安装 tar ### + +大部分 Linux 系统默认都安装了 tar。如果没有,这里有安装 tar 的命令。 + +#### CentOS #### + +在 CentOS 中,以 root 用户在 shell 中执行下面的命令安装 tar。 + + yum install tar + +#### Ubuntu #### + +下面的命令会在 Ubuntu 上安装 tar。“sudo” 命令确保 apt 命令是以 root 权限运行的。 + + sudo apt-get install tar + +#### Debian #### + +下面的 apt 命令在 Debian 上安装 tar。 + + apt-get install tar + +#### Windows #### + +tar 命令在 Windows 也可以使用,你可以从 GnuWin 项目[http://gnuwin32.sourceforge.net/packages/gtar.htm][4]中下载它。 + +### 创建 tar.gz 文件 ### + +下面是在 shell 中运行 [tar 命令][5] 的一些例子。下面我会解释这些命令行选项。 + + tar pczf myarchive.tar.gz /home/till/mydocuments + +这个命令会创建归档文件 myarchive.tar.gz,其中包括了路径 /home/till/mydocuments 中的文件和目录。**命令行选项解释**: + +- **[p]** 这个选项表示 “preserve”,它指示 tar 在归档文件中保留文件属主和权限信息。 +- **[c]** 表示创建。要创建文件时不能缺少这个选项。 +- **[z]** z 选项启用 gzip 压缩。 +- **[f]** file 选项告诉 tar 创建一个归档文件。如果没有这个选项 tar 会把输出发送到标准输出( LCTT 译注:如果没有指定,标准输出默认是屏幕,显然你不会想在屏幕上显示一堆乱码,通常你可以用管道符号送到其它程序去)。 + +#### Tar 命令示例 #### + +**示例 1: 备份 /etc 目录** + +创建 /etc 配置目录的一个备份。备份保存在 root 目录。 + + tar pczvf /root/etc.tar.gz /etc + +![用 tar 备份 /etc 目录](https://www.howtoforge.com/images/linux-tar-command/big/create-tar.png) + +要以 root 用户运行命令,以确保 /etc 中的所有文件都会被包含在备份中。这次,我在命令中添加了 [v] 选项。这个选项表示 verbose,它告诉 tar 显示所有被包含到归档文件中的文件名。 + +**示例 2: 备份你的 /home 目录** + +创建你的 home 目录的备份。备份会被保存到 /backup 目录。 + + tar czf /backup/myuser.tar.gz /home/myuser + +用你的用户名替换 myuser。这个命令中,我省略了 [p] 选项,也就不会保存权限。 + +**示例 3: 基于文件的 MySQL 数据库备份** + +在大部分 Linux 发行版中,MySQL 数据库保存在 /var/lib/mysql。你可以使用下面的命令来查看: + + ls /var/lib/mysql + +![使用 tar 基于文件备份 MySQL](https://www.howtoforge.com/images/linux-tar-command/big/tar_backup_mysql.png) + +用 tar 备份 MySQL 数据文件时为了保持数据一致性,首先停用数据库服务器。备份会被写到 /backup 目录。 + +1) 创建 backup 目录 + + mkdir /backup + chmod 600 /backup + +2) 停止 MySQL,用 tar 进行备份并重新启动数据库。 + + service mysql stop + tar pczf /backup/mysql.tar.gz /var/lib/mysql + service mysql start + ls -lah /backup + +![基于文件的 MySQL
备份](https://www.howtoforge.com/images/linux-tar-command/big/tar-backup-mysql2.png) + +### 提取 tar.gz 文件 ### + +提取 tar.gz 文件的命令是: + + tar xzf myarchive.tar.gz + +#### tar 命令选项解释 #### + +- **[x]** x 表示提取,提取 tar 文件时这个命令不可缺少。 +- **[z]** z 选项告诉 tar 要解压的归档文件是 gzip 格式。 +- **[f]** 该选项告诉 tar 从一个文件中读取归档内容,本例中是 myarchive.tar.gz。 + +上面的 tar 命令会安静地提取 tar.gz 文件,除非有错误信息。如果你想要看提取了哪些文件,那么添加 “v” 选项。 + + tar xzvf myarchive.tar.gz + +**[v]** 选项表示 verbose,它会向你显示解压的文件名。 + +![提取 tar.gz 文件](https://www.howtoforge.com/images/linux-tar-command/big/tar-xfz.png) + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/tutorial/linux-tar-command/ + +作者:[howtoforge][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.howtoforge.com/ +[1]:https://en.wikipedia.org/wiki/Tar_(computing) +[2]:http://www.faqforge.com/linux/create-tar-gz/ +[3]:http://www.faqforge.com/linux/extract-tar-gz/ +[4]:http://gnuwin32.sourceforge.net/packages/gtar.htm +[5]:http://www.faqforge.com/linux/tar-command/ \ No newline at end of file diff --git a/published/201512/20151130 eSpeak--Text To Speech Tool For Linux.md b/published/201512/20151130 eSpeak--Text To Speech Tool For Linux.md new file mode 100644 index 0000000000..e968be847b --- /dev/null +++ b/published/201512/20151130 eSpeak--Text To Speech Tool For Linux.md @@ -0,0 +1,64 @@ +eSpeak: Linux 文本转语音工具 +================================================================================ +![Text to speech tool in Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Text-to-speech-Linux.jpg) + +[eSpeak][1] 是一款 Linux 命令行工具,能把文本转换成语音。它是一款简洁的语音合成器,用 C 语言编写而成,它支持英语和其它多种语言。 + +eSpeak 从标准输入或者输入文件中读取文本。虽然语音输出与真人声音相去甚远,但是,在你项目需要的时候,eSpeak 仍不失为一个简便快捷的工具。 + +eSpeak 部分主要特性如下: + +- 提供给 Linux 和 Windows 的命令行工具 +- 从文件或者标准输入中把文本读出来 +- 提供给其它程序使用的共享库版本 +- 为 Windows 提供 SAPI5
版本,所以它能用于 screen-readers 或者其它支持 Windows SAPI5 接口的程序 +- 可移植到其它平台,包括安卓,OSX等 +- 提供多种声音特性选择 +- 语音输出可保存为 [.WAV][2] 格式的文件 +- 配合 HTML 部分可支持 SSML(语音合成标记语言,[Speech Synthesis Markup Language][3]) +- 体积小巧,整个程序连同语言支持等占用小于2MB +- 可以实现文本到音素编码(phoneme code)的转化,因此可以作为其它语音合成引擎的前端工具 +- 开发工具可用于生产和调整音素数据 + +### 安装 eSpeak ### + +基于 Ubuntu 的系统中,在终端运行以下命令安装 eSpeak: + + sudo apt-get install espeak + +eSpeak 是一个古老的工具,我推测它应该能在其它众多 Linux 发行版中运行,比如 Arch,Fedora。使用 dnf,pacman 等命令就能轻松安装。 + +eSpeak 用法如下:输入 espeak 运行程序。输入字符按 enter 转换为语音输出(LCTT 译注:补充)。使用 Ctrl+C 来关闭运行中的程序。 + +![eSpeak command line](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-example.png) + +还有一些其他的选项可用,可以通过程序帮助进行查看。 + +### GUI 版本:Gespeaker ### + +如果你更倾向于使用 GUI 版本,可以安装 Gespeaker,它为 eSpeak 提供了 GTK 界面。 + +使用以下命令来安装 Gespeaker: + + sudo apt-get install gespeaker + +操作界面简明易用,你完全可以自行探索。 + +![eSpeak GUI tool for text to speech in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-GUI.png) + +虽然这些工具在大多数计算任务下用不到,但是当你的项目需要把文本转换成语音时,使用 espeak 还是挺方便的。是否使用 espeak 这款语音合成器,选择权就交给你们啦。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/espeak-text-speech-linux/ + +作者:[Abhishek][a] +译者:[soooogreen](https://github.com/soooogreen) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://espeak.sourceforge.net/ +[2]:http://en.wikipedia.org/wiki/WAV +[3]:http://en.wikipedia.org/wiki/Speech_Synthesis_Markup_Language diff --git a/published/201512/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md b/published/201512/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md new file mode 100644 index 0000000000..27160257ad --- /dev/null +++ b/published/201512/20151201 How to Install The Latest Arduino IDE 1.6.6 in Ubuntu.md @@ -0,0 +1,73 @@ +如何在 Ubuntu 中安装最新的 Arduino IDE 1.6.6 
+================================================================================ +![Install latest Arduino in Ubuntu](http://ubuntuhandbook.org/wp-content/uploads/2015/11/arduino-icon.png) + +> 本篇教程会教你如何在当前的 Ubuntu 发行版中安装最新的 Arduino IDE 1.6.6。 + +开源的 Arduino IDE 发布了1.6.6,并带来了很多的改变。新的发布已经切换到 Java 8,它与 IDE 绑定并且用于编译所需。具体见 [发布说明][1]。 + +![Arduino 1.6.6 in Ubuntu 15.10](http://ubuntuhandbook.org/wp-content/uploads/2015/11/arduino-ubuntu.jpg) + +对于那些不想使用软件中心的 1.0.5 旧版本的人而言,你可以使用下面的步骤在所有的 Ubuntu 发行版中安装 Arduino。 + +> **请用正确版本号替换软件包的版本号** + +**1、** 从下面的官方链接下载最新的包 **Linux 32-bit 或者 Linux 64-bit**。 + +- [https://www.arduino.cc/en/Main/Software][2] + +如果不知道你系统的类型?进入系统设置->详细->概览。 + +**2、** 从Unity Dash、App Launcher 或者使用 Ctrl+Alt+T 打开终端。打开后,一个个运行下面的命令: + +进入下载文件夹: + + cd ~/Downloads + +![navigate-downloads](http://ubuntuhandbook.org/wp-content/uploads/2015/11/navigate-downloads.jpg) + +使用 tar 命令解压: + + tar -xvf arduino-1.6.6-*.tar.xz + +![extract-archive](http://ubuntuhandbook.org/wp-content/uploads/2015/11/extract-archive.jpg) + +将解压后的文件移动到**/opt/**下: + + sudo mv arduino-1.6.6 /opt + +![move-opt](http://ubuntuhandbook.org/wp-content/uploads/2015/11/move-opt.jpg) + +**3、** 现在 IDE 已经与最新的 Java 绑定使用了。但是最好为程序设置一个桌面图标/启动方式: + +进入安装目录: + + cd /opt/arduino-1.6.6/ + +在这个目录给 install.sh 可执行权限 + + chmod +x install.sh + +最后运行脚本同时安装桌面快捷方式和启动图标: + + ./install.sh + +下图中我用“&&”同时运行这三个命令: + +![install-desktop-icon](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-desktop-icon.jpg) + +最后从 Unity Dash、程序启动器或者桌面快捷方式运行 Arduino IDE。 + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/11/install-arduino-ide-1-6-6-ubuntu/ + +作者:[Ji m][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://www.arduino.cc/en/Main/ReleaseNotes 
+[2]:https://www.arduino.cc/en/Main/Software diff --git a/published/201512/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md b/published/201512/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md new file mode 100644 index 0000000000..f6d69b0cf5 --- /dev/null +++ b/published/201512/20151201 Linux and Unix Port Scanning With netcat [nc] Command.md @@ -0,0 +1,95 @@ +使用 netcat [nc] 命令对 Linux 和 Unix 进行端口扫描 +================================================================================ + +我如何在自己的服务器上找出哪些端口是开放的?如何使用 nc 命令进行端口扫描来替换 [Linux 或类 Unix 中的 nmap 命令][1]? + +nmap (“Network Mapper”)是一个用于网络探测和安全审核的开源工具。如果 nmap 没有安装或者你不希望使用 nmap,那你可以用 netcat/nc 命令进行端口扫描。它对于查看目标计算机上哪些端口是开放的、运行着哪些服务是非常有用的。你也可以使用 [nmap 命令进行端口扫描][2] 。 + +### 如何使用 nc 来扫描 Linux,UNIX 和 Windows 服务器的端口呢? ### + +如果未安装 nmap,试试 nc/netcat 命令,如下所示。-z 参数用来告诉 nc 报告开放的端口,而不是启动连接。在 nc 命令中使用 -z 参数时,你需要在主机名/IP 后面指定端口范围,以加快其运行: + + ### 语法 ### + ### nc -z -v {host-name-here} {port-range-here} + nc -z -v host-name-here ssh + nc -z -v host-name-here 22 + nc -w 1 -z -v server-name-here port-Number-here + + ### 扫描 1 到 1023 端口 ### + nc -zv vip-1.vsnl.nixcraft.in 1-1023 + +输出示例: + + Connection to localhost 25 port [tcp/smtp] succeeded! + Connection to vip-1.vsnl.nixcraft.in 25 port [tcp/smtp] succeeded! + Connection to vip-1.vsnl.nixcraft.in 80 port [tcp/http] succeeded! + Connection to vip-1.vsnl.nixcraft.in 143 port [tcp/imap] succeeded! + Connection to vip-1.vsnl.nixcraft.in 199 port [tcp/smux] succeeded! + Connection to vip-1.vsnl.nixcraft.in 783 port [tcp/*] succeeded! + Connection to vip-1.vsnl.nixcraft.in 904 port [tcp/vmware-authd] succeeded! + Connection to vip-1.vsnl.nixcraft.in 993 port [tcp/imaps] succeeded!
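上面的端口范围扫描,也可以用一个简单的 shell 循环对每个端口单独调用 nc 来实现,便于在脚本中进一步处理结果(示意脚本,非原文内容;主机 localhost 与端口范围 20-25 仅为示例):

```shell
#!/bin/sh
# 示意:用 nc 的 -z(零 I/O 模式)和 -w 1(1 秒超时)逐个探测端口
scan_ports() {
    host="$1"; first="$2"; last="$3"
    port="$first"
    while [ "$port" -le "$last" ]; do
        if nc -z -w 1 "$host" "$port" 2>/dev/null; then
            echo "$host:$port open"
        else
            echo "$host:$port closed"
        fi
        port=$((port + 1))
    done
}

scan_ports localhost 20 25
```

效果与 `nc -zv localhost 20-25` 类似,只是把每个端口的结果都显式打印出来,方便后续用 grep 等工具过滤。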
+ +你也可以扫描单个端口: + + nc -zv v.txvip1 443 + nc -zv v.txvip1 80 + nc -zv v.txvip1 22 + nc -zv v.txvip1 21 + nc -zv v.txvip1 smtp + nc -zvn v.txvip1 ftp + + ### 使用 1 秒的超时值来更快地扫描 ### + netcat -v -z -n -w 1 v.txvip1 1-1023 + +输出示例: + +![Fig.01: Linux/Unix: Use Netcat to Establish and Test TCP and UDP Connections on a Server](http://s0.cyberciti.org/uploads/faq/2007/07/scan-with-nc.jpg) + +*图01:Linux/Unix:使用 Netcat 在服务器上建立和测试 TCP 和 UDP 连接* + +1. -z : 端口扫描模式即零 I/O 模式。 +1. -v : 显示详细信息 [使用 -vv 来输出更详细的信息]。 +1. -n : 使用纯数字 IP 地址,即不用 DNS 来解析 IP 地址。 +1. -w 1 : 将超时值设置为 1 秒。 + +更多例子: + + $ netcat -z -vv www.cyberciti.biz http + www.cyberciti.biz [75.126.153.206] 80 (http) open + sent 0, rcvd 0 + $ netcat -z -vv google.com https + DNS fwd/rev mismatch: google.com != maa03s16-in-f2.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f6.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f5.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f3.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f8.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f0.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f7.1e100.net + DNS fwd/rev mismatch: google.com != maa03s16-in-f4.1e100.net + google.com [74.125.236.162] 443 (https) open + sent 0, rcvd 0 + $ netcat -v -z -n -w 1 192.168.1.254 1-1023 + (UNKNOWN) [192.168.1.254] 989 (ftps-data) open + (UNKNOWN) [192.168.1.254] 443 (https) open + (UNKNOWN) [192.168.1.254] 53 (domain) open + +也可以看看 : + +- [使用 nmap 命令扫描网络中开放的端口][3]。 +- 手册页 - [nc(1)][4], [nmap(1)][5] + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/linux-port-scanning/ + +作者:Vivek Gite +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://linux.cn/article-2561-1.html +[2]:https://linux.cn/article-2561-1.html
+[3]:https://linux.cn/article-2561-1.html +[4]:http://www.manpager.com/linux/man1/nc.1.html +[5]:http://www.manpager.com/linux/man1/nmap.1.html diff --git a/published/201512/20151202 How to use the Linux ftp command to up- and download files on the shell.md b/published/201512/20151202 How to use the Linux ftp command to up- and download files on the shell.md new file mode 100644 index 0000000000..dbb7e9d189 --- /dev/null +++ b/published/201512/20151202 How to use the Linux ftp command to up- and download files on the shell.md @@ -0,0 +1,146 @@ +如何在命令行中使用 ftp 命令上传和下载文件 +================================================================================ +本文中,介绍在 Linux shell 中如何使用 ftp 命令。包括如何连接 FTP 服务器,上传或下载文件以及创建文件夹。尽管现在有许多不错的 FTP 桌面应用,但是在服务器、SSH、远程会话中命令行 ftp 命令还是有很多应用的。比如,需要服务器从 ftp 仓库拉取备份。 + +### 步骤 1: 建立 FTP 连接 ### + +想要连接 FTP 服务器,在命令行中先输入`ftp`,然后空格跟上 FTP 服务器的域名 'domain.com' 或者 IP 地址 + +#### 例如: #### + + ftp domain.com + + ftp 192.168.0.1 + + ftp user@ftpdomain.com + +**注意: 本例中使用匿名服务器。** + +替换下面例子中 IP 或域名为你的服务器地址。 + +![FTP 登录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/ftpanonymous.png) + +### 步骤 2: 使用用户名密码登录 ### + +绝大多数的 FTP 服务器是使用密码保护的,因此这些 FTP 服务器会询问'**username**'和'**password**'。 + +如果你连接到被称作匿名 FTP 服务器(LCTT 译注:即,并不需要你有真实的用户信息即可使用的 FTP 服务器称之为匿名 FTP 服务器),可以尝试`anonymous`作为用户名以及使用空密码: + + Name: anonymous + + Password: + +之后,终端会返回如下的信息: + + 230 Login successful. + Remote system type is UNIX. + Using binary mode to transfer files. + ftp> + +登录成功。 + +![FTP 登录成功](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/login.png) + +### 步骤 3: 目录操作 ### + +FTP 命令可以列出、移动和创建文件夹,如同我们在本地使用我们的电脑一样。`ls`可以打印目录列表,`cd`可以改变目录,`mkdir`可以创建文件夹。 + +#### 使用安全设置列出目录 #### + + ftp> ls + +服务器将返回: + + 200 PORT command successful. Consider using PASV. + 150 Here comes the directory listing. + directory list + .... + .... + 226 Directory send OK.
+ +![打印目录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/listing.png) + +#### 改变目录: #### + +改变目录可以输入: + + ftp> cd directory + +服务器将会返回: + + 250 Directory successfully changed. + +![FTP 中改变目录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/directory.png) + +### 步骤 4: 使用 FTP 下载文件 ### + +在下载一个文件之前,我们首先需要使用`lcd`命令设定本地接收目录位置。 + + lcd /home/user/yourdirectoryname + +如果你不指定下载目录,文件将会下载到你登录 FTP 时候的工作目录。 + +现在,我们可以使用命令 get 来下载文件,比如: + + get file + +文件会保存在使用 lcd 命令设置的目录位置。 + +服务器返回消息: + + local: file remote: file + 200 PORT command successful. Consider using PASV. + 150 Opening BINARY mode data connection for file (xxx bytes). + 226 File send OK. + XXX bytes received in x.xx secs (x.xxx MB/s). + +![使用 FTP 下载文件](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/gettingfile.png) + +下载多个文件可以使用通配符及 `mget` 命令。例如,下面这个例子我打算下载所有以 .xls 结尾的文件。 + + mget *.xls + +### 步骤 5: 使用 FTP 上传文件 ### + +建立 FTP 连接后,同样可以上传文件 + +使用 `put`命令上传文件: + + put file + +当文件不在当前本地目录下的时候,可以使用绝对路径: + + put /path/file + +同样,可以上传多个文件: + + mput *.xls + +### 步骤 6: 关闭 FTP 连接 ### + +完成 FTP 工作后,为了安全起见需要关闭连接。有三个命令可以关闭连接: + + bye + + exit + + quit + +任意一个命令可以断开 FTP 服务器连接并返回: + + 221 Goodbye + +![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/goodbye.png) + +需要更多帮助,在使用 ftp 命令连接到服务器后,可以使用`help`获得更多帮助。 + +![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/helpwindow.png) + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/tutorial/how-to-use-ftp-on-the-linux-shell/ + +译者:[VicYu](http://vicyu.net) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201512/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md b/published/201512/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md new file mode 100644 index
0000000000..a527982122 --- /dev/null +++ b/published/201512/20151204 How to Remove Banned IP from Fail2ban on CentOS 6 or CentOS 7.md @@ -0,0 +1,60 @@ +如何在 CentOS 6/7 上移除被 Fail2ban 禁止的 IP +================================================================================ +![](http://www.ehowstuff.com/wp-content/uploads/2015/12/security-265130_1280.jpg) + +[fail2ban][1] 是一款用于保护你的服务器免于暴力攻击的入侵保护软件。fail2ban 用 Python 写成,并广泛用于很多服务器上。fail2ban 会扫描日志文件,找出诸如过多的密码失败尝试、web 服务器漏洞利用、wordpress 插件攻击等恶意迹象,并禁止相应的 IP。如果你已经安装并使用了 fail2ban 来保护你的 web 服务器,你也许会想知道如何在 CentOS 6、CentOS 7、RHEL 6、RHEL 7 和 Oracle Linux 6/7 中找到被 fail2ban 阻止的 IP,或者你想将 IP 从 fail2ban 监狱中移除。 + +### 如何列出被禁止的 IP ### + +要查看所有被禁止的 ip 地址,运行下面的命令: + + # iptables -L + Chain INPUT (policy ACCEPT) + target prot opt source destination + f2b-AccessForbidden tcp -- anywhere anywhere tcp dpt:http + f2b-WPLogin tcp -- anywhere anywhere tcp dpt:http + f2b-ConnLimit tcp -- anywhere anywhere tcp dpt:http + f2b-ReqLimit tcp -- anywhere anywhere tcp dpt:http + f2b-NoAuthFailures tcp -- anywhere anywhere tcp dpt:http + f2b-SSH tcp -- anywhere anywhere tcp dpt:ssh + f2b-php-url-open tcp -- anywhere anywhere tcp dpt:http + f2b-nginx-http-auth tcp -- anywhere anywhere multiport dports http,https + ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED + ACCEPT icmp -- anywhere anywhere + ACCEPT all -- anywhere anywhere + ACCEPT tcp -- anywhere anywhere tcp dpt:EtherNet/IP-1 + ACCEPT tcp -- anywhere anywhere tcp dpt:http + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain FORWARD (policy ACCEPT) + target prot opt source destination + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain OUTPUT (policy ACCEPT) + target prot opt source destination + + + Chain f2b-NoAuthFailures (1 references) + target prot opt source destination + REJECT all -- 64.68.50.128 anywhere reject-with icmp-port-unreachable + REJECT all -- 104.194.26.205 anywhere reject-with icmp-port-unreachable + RETURN all -- anywhere
anywhere + +### 如何从 Fail2ban 中移除 IP ### + + # iptables -D f2b-NoAuthFailures -s banned_ip -j REJECT + +我希望这篇教程可以在 CentOS 6、CentOS 7、RHEL 6、RHEL 7 和 Oracle Linux 6/7 中移除被禁止的 IP 方面给你一些指导。 + +-------------------------------------------------------------------------------- + +via: http://www.ehowstuff.com/how-to-remove-banned-ip-from-fail2ban-on-centos/ + +作者:[skytech][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.ehowstuff.com/author/skytech/ +[1]:http://www.fail2ban.org/wiki/index.php/Main_Page diff --git a/published/201512/20151208 Apple Swift Programming Language Comes To Linux.md b/published/201512/20151208 Apple Swift Programming Language Comes To Linux.md new file mode 100644 index 0000000000..1eb1de02fc --- /dev/null +++ b/published/201512/20151208 Apple Swift Programming Language Comes To Linux.md @@ -0,0 +1,41 @@ +可以在 Linux 下试试苹果编程语言 Swift +================================================================================ +![](http://itsfoss.com/wp-content/uploads/2015/12/Apple-Swift-Open-Source.jpg) + +是的,你知道的,苹果编程语言 Swift 已经开源了。其实我们并不应该感到意外,因为[在六个月以前苹果就已经宣布了这个消息][1]。 + +苹果宣布推出开源 Swift 社区。一个专用于开源 Swift 社区的[新网站][2]已经就位,网站首页显示以下信息: + +> 我们对 Swift 开源感到兴奋。在苹果推出了编程语言 Swift 之后,它很快成为历史上增长最快的语言之一。Swift 可以编写出难以置信的又快又安全的软件。目前,Swift 是开源的,你可以将这个最好的通用编程语言用在各种地方。 + +[swift.org][2] 这个网站将会作为一站式网站,它会提供各种资料的下载,包括各种平台,社区指南,最新消息,入门教程,为开源 Swift 做贡献的说明,文件和一些其他的指南。如果你正期待着学习 Swift,那么必须收藏这个网站。 + +在苹果的这次宣布中,一个用于方便分享和构建代码的包管理器已经可用了。 + +对于所有的 Linux 使用者来说,最重要的是,源代码已经可以从 [Github][3]获得了。你可以从以下链接检出(checkout)它: + +- [苹果 Swift 源代码][3] + +除此之外,对于 Ubuntu 14.04 和 15.10 版本还有预编译的二进制文件。 + +- [Ubuntu 系统的 Swift 二进制文件][4] + +不要急着在产品环境中使用它们,因为这些都是开发分支而不适合于产品环境。一旦发布了 Linux 下 Swift 的稳定版本,我希望 Ubuntu 会把它包含在 [umake][5] 中,和 [Visual Studio Code][6] 放一起。 + +-------------------------------------------------------------------------------- + +via:
http://itsfoss.com/swift-open-source-linux/ + +作者:[Abhishek][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://itsfoss.com/apple-open-sources-swift-programming-language-linux/ +[2]:https://swift.org/ +[3]:https://github.com/apple +[4]:https://swift.org/download/#latest-development-snapshots +[5]:https://wiki.ubuntu.com/ubuntu-make +[6]:http://itsfoss.com/install-visual-studio-code-ubuntu/ diff --git a/published/201512/20151208 How to Customize Time and Date Format in Ubuntu Panel.md b/published/201512/20151208 How to Customize Time and Date Format in Ubuntu Panel.md new file mode 100644 index 0000000000..4b20628048 --- /dev/null +++ b/published/201512/20151208 How to Customize Time and Date Format in Ubuntu Panel.md @@ -0,0 +1,66 @@ +如何深度定制 Ubuntu 面板的时间日期显示格式 +================================================================================ +![时间日期格式](http://ubuntuhandbook.org/wp-content/uploads/2015/08/ubuntu_tips1.png) + +尽管设置页面里已经有一些选项可以用了,这个快速教程会向你展示如何更加深入地自定义 Ubuntu 面板上的时间和日期指示器。 + +![自定义世间日期](http://ubuntuhandbook.org/wp-content/uploads/2015/12/custom-timedate.jpg) + +在开始之前,在 Ubuntu 软件中心搜索并安装 **dconf Editor**。然后启动该软件并按以下步骤执行: + +**1、** 当 dconf Editor 启动后,导航至 **com -> canonical -> indicator -> datetime**。将 **time-format** 的值设置为 **custom**。 + +![自定义时间格式](http://ubuntuhandbook.org/wp-content/uploads/2015/12/time-format.jpg) + +你也可以通过终端里的命令完成以上操作: + + gsettings set com.canonical.indicator.datetime time-format 'custom' + +**2、** 现在你可以通过编辑 **custom-time-format** 的值来自定义时间和日期的格式。 + +![自定义-时间格式](http://ubuntuhandbook.org/wp-content/uploads/2015/12/customize-timeformat.jpg) + +你也可以通过命令完成:(LCTT 译注:将 FORMAT_VALUE_HERE 替换为所需要的格式值) + + gsettings set com.canonical.indicator.datetime custom-time-format 'FORMAT_VALUE_HERE' + +以下是参数含义: + +- %a = 星期名缩写 +- %A = 星期名完整拼写 +- %b = 月份名缩写 +- %B = 月份名完整拼写 +- 
%d = 每月的日期 +- %l = 小时 ( 1..12), %I = 小时 (01..12) +- %k = 小时 ( 1..23), %H = 小时 (01..23) +- %M = 分钟 (00..59) +- %p = 午别,AM 或 PM, %P = am 或 pm. +- %S = 秒 (00..59) + +可以打开终端键入命令 `man date` 并执行以了解更多细节。 + +一些自定义时间日期显示格式值的例子: + +**%a %H:%M %m/%d/%Y** + +![%a %H:%M %m/%d/%Y](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-1.jpg) + +**%a %r %b %d or %a %I:%M:%S %p %b %d** + +![%a %r %b %d or %a %I:%M:%S %p %b %d](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-2.jpg) + +**%a %-d %b %l:%M %P %z** + +![%a %-d %b %l:%M %P %z](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-3.jpg) + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/12/time-date-format-ubuntu-panel/ + +作者:[Ji m][a] +译者:[alim0x](https://github.com/alim0x) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ diff --git a/published/201512/20151208 Install Wetty on Centos or RHEL 6.X.md b/published/201512/20151208 Install Wetty on Centos or RHEL 6.X.md new file mode 100644 index 0000000000..c0f9f503b5 --- /dev/null +++ b/published/201512/20151208 Install Wetty on Centos or RHEL 6.X.md @@ -0,0 +1,72 @@ +在 Centos/RHEL 6.X 上安装 Wetty +================================================================================ +![](http://www.unixmen.com/wp-content/uploads/2015/11/Terminal.png) + +**Wetty 是什么?** + +Wetty = Web + tty + +作为系统管理员,如果你是在 Linux 桌面下,你可以用它像一个 GNOME 终端(或类似的)一样来连接远程服务器;如果你是在 Windows 下,你可以用它像使用 Putty 这样的 SSH 客户端一样来连接远程,然后同时可以在浏览器中上网并查收邮件等其它事情。 + +(LCTT 译注:简而言之,这是一个基于 Web 浏览器的远程终端) + +![](https://github.com/krishnasrinivas/wetty/raw/master/terminal.png) + +### 第1步: 安装 epel 源 ### + + # wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm + # rpm -ivh epel-release-6-8.noarch.rpm + +### 第2步:安装依赖 ### + + # yum install epel-release git nodejs npm -y + 
+(LCTT 译注:对,没错,是用 node.js 编写的) + +### 第3步:在安装完依赖后,克隆 GitHub 仓库 ### + + # git clone https://github.com/krishnasrinivas/wetty + +### 第4步:运行 Wetty ### + + # cd wetty + # npm install + +### 第5步:从 Web 浏览器启动 Wetty 并访问 Linux 终端 ### + + # node app.js -p 8080 + +### 第6步:为 Wetty 安装 HTTPS 证书 ### + + # openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes + +(等待完成) + +### 第7步:通过 HTTPS 来使用 Wetty ### + + # nohup node app.js --sslkey key.pem --sslcert cert.pem -p 8080 & + +### 第8步:为 wetty 添加一个用户 ### + + # useradd + # passwd + +### 第9步:访问 wetty ### + + http://Your_IP-Address:8080 + +输入你之前为 wetty 创建的用户名和密码,然后访问。 + +到此结束! + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/install-wetty-centosrhel-6-x/ + +作者:[Debojyoti Das][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/debjyoti/ diff --git a/published/201512/20151215 How to enable Software Collections (SCL) on CentOS.md b/published/201512/20151215 How to enable Software Collections (SCL) on CentOS.md new file mode 100644 index 0000000000..2c0b835026 --- /dev/null +++ b/published/201512/20151215 How to enable Software Collections (SCL) on CentOS.md @@ -0,0 +1,100 @@ +如何在 CentOS 上启用 软件集 Software Collections(SCL) +================================================================================ + +红帽企业版 Linux(RHEL)和它的社区版分支——CentOS,提供10年的生命周期,这意味着 RHEL/CentOS 的每个版本会提供长达10年的安全更新。虽然这么长的生命周期为企业用户提供了迫切需要的系统兼容性和可靠性,但也存在一个缺点:随着底层的 RHEL/CentOS 版本接近生命周期的结束,核心应用和运行时环境变得陈旧过时。例如 CentOS 6.5,它的生命周期结束时间是2020年11月30日,其所携带的 Python 2.6.6和 MySQL 5.1.73,以今天的标准来看已经非常古老了。 + +另一方面,在 RHEL/CentOS 上试图手动升级开发工具链和运行时环境存在使系统崩溃的潜在可能,除非所有依赖都被正确解决。通常情况下,手动升级都是不推荐的,除非你知道你在干什么。 + +[软件集(Software Collections)][1](SCL)源出现了,以帮助解决 RHEL/CentOS 下的这种问题。SCL 的创建就是为了给 RHEL/CentOS
用户提供一种以方便、安全地安装和使用应用程序和运行时环境的多个(而且可能是更新的)版本的方式,同时避免把系统搞乱。与之相对的是第三方源,它们可能会在已安装的包之间引起冲突。 + +最新的 SCL 提供了: + +- Python 3.3 和 2.7 +- PHP 5.4 +- Node.js 0.10 +- Ruby 1.9.3 +- Perl 5.16.3 +- MariaDB 和 MySQL 5.5 +- Apache httpd 2.4.6 + +在这篇教程的剩余部分,我会展示一下如何配置 SCL 源,以及如何安装和启用 SCL 中的包。 + +### 配置 SCL 源 + +SCL 可用于 CentOS 6.5 及更新的版本。要配置 SCL 源,只需执行: + + $ sudo yum install centos-release-SCL + +要启用和运行 SCL 中的应用,你还需要安装下列包: + + $ sudo yum install scl-utils-build + +执行下面的命令可以查看 SCL 中可用包的完整列表: + + $ yum --disablerepo="*" --enablerepo="scl" list available + +![](https://c2.staticflickr.com/6/5730/23304424250_f5c8a09584_c.jpg) + +### 从 SCL 中安装和启用包 + +既然你已配置好了 SCL,你可以继续并从 SCL 中安装包了。 + +你可以搜索 SCL 中的包: + + $ yum --disablerepo="*" --enablerepo="scl" search + +我们假设你要安装 Python 3.3。 + +继续,就像通常安装包那样使用 yum 安装: + + $ sudo yum install python33 + +任何时候你都可以查看从 SCL 中安装的包的列表,只需执行: + + $ scl --list + + python33 + +SCL 的优点之一是安装其中的包不会覆盖任何系统文件,并且保证不会引起与系统中其它库和应用的冲突。 + +例如,如果在安装 python33 包后检查默认的 python 版本,你会发现默认的版本并没有改变: + + $ python --version + + Python 2.6.6 + +如果想使用一个已经安装的 SCL 包,你需要在每个命令中使用 `scl` 命令显式启用它(LCTT 译注:即想在哪条命令中使用 SCL 中的包,就得通过`scl`命令执行该命令) + + $ scl enable + +例如,要针对`python`命令启用 python33 包: + + $ scl enable python33 'python --version' + + Python 3.3.2 + +如果想在启用 python33 包时执行多条命令,你可以像下面那样创建一个启用 SCL 的 bash 会话: + + $ scl enable python33 bash + +在这个 bash 会话中,默认的 python 会被切换为3.3版本,直到你输入`exit`,退出会话。 + +![](https://c2.staticflickr.com/6/5642/23491549632_1d08e163cc_c.jpg) + +简而言之,SCL 有几分像 Python 的虚拟环境,但更通用,因为你可以为远比 Python 更多的应用启用/禁用 SCL 会话。 + +更详细的 SCL 指南,参考官方的[快速入门指南][2] + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/enable-software-collections-centos.html + +作者:[Dan Nanni][a] +译者:[bianjp](https://github.com/bianjp) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:https://www.softwarecollections.org/ 
+[2]:https://www.softwarecollections.org/docs/ diff --git a/published/201512/20151215 Linux or UNIX Desktop Fun--Let it Snow On Your Desktop.md b/published/201512/20151215 Linux or UNIX Desktop Fun--Let it Snow On Your Desktop.md new file mode 100644 index 0000000000..29b9b1a77f --- /dev/null +++ b/published/201512/20151215 Linux or UNIX Desktop Fun--Let it Snow On Your Desktop.md @@ -0,0 +1,76 @@ +Linux/Unix 桌面趣事:让桌面下雪 +================================================================================ + +在这个节日里感到孤独么?试一下 Xsnow 吧。它是一个可以在 Unix/Linux 桌面下下雪的应用。圣诞老人和他的驯鹿会在屏幕中奔跑,伴随着雪片让你感受到节日的感觉。 + +我第一次安装它还是在 13、4 年前。它最初是在 1984 年 Macintosh 系统中创造的。你可以用下面的方法来安装: + +### 安装 xsnow ### + +Debian/Ubuntu/Mint 用户用下面的命令: + + $ sudo apt-get install xsnow + +Freebsd 用户输入下面的命令: + + # cd /usr/ports/x11/xsnow/ + # make install clean + +或者尝试添加包: + + # pkg_add -r xsnow + +#### 其他发行版的方法 #### + +1. Fedora/RHEL/CentOS 在 [rpmfusion][1] 仓库中找找。 +2. Gentoo 用户试下 Gentoo portage,也就是[emerge -p xsnow][2] +3. Opensuse 用户使用 yast 搜索 xsnow + +### 我该如何使用 xsnow? 
### + +打开终端(程序 > 附件 > 终端),输入下面的命令启动 xsnow: + + $ xsnow + +示例输出: + +![Fig.01: Snow for your Linux and Unix desktop systems](http://files.cyberciti.biz/uploads/tips/2011/12/application-to-bring-snow-to-desktop_small.png) + +*图01: 在 Linux 和 Unix 桌面中显示雪花* + +你可以设置背景为蓝色,并让它下白雪,输入: + + $ xsnow -bg blue -sc snow + +设置最大的雪片数量,并让它尽可能快地掉下,输入: + + $ xsnow -snowflakes 10000 -delay 0 + +不要显示圣诞树和圣诞老人满屏幕地跑,输入: + + $ xsnow -notrees -nosanta + +关于 xsnow 更多的信息和选项,在命令行下输入 man xsnow 查看手册: + + $ man xsnow + +建议阅读 + +- 官网[下载 Xsnow][3] +- 注意 [MS-Windows][4] 和 [Mac OS X][5] 版本有一次性的共享软件费用。 + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/tips/linux-unix-xsnow.html + +作者:Vivek Gite +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://rpmfusion.org/Configuration +[2]:http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?part=2&chap=1 +[3]:http://dropmix.xs4all.nl/rick/Xsnow/ +[4]:http://dropmix.xs4all.nl/rick/WinSnow/ +[5]:http://dropmix.xs4all.nl/rick/MacOSXSnow/ diff --git a/published/201512/20151215 Linux or UNIX Desktop Fun--Steam Locomotive.md b/published/201512/20151215 Linux or UNIX Desktop Fun--Steam Locomotive.md new file mode 100644 index 0000000000..45a56d6ddc --- /dev/null +++ b/published/201512/20151215 Linux or UNIX Desktop Fun--Steam Locomotive.md @@ -0,0 +1,41 @@ +Linux/Unix 桌面趣事:蒸汽火车 +================================================================================ +一个你[经常犯的错误][1]是把 ls 输入成了 sl。我已经设置了[一个别名][2],也就是 `alias sl=ls`。但是这样你也许就错过了这辆带汽笛的蒸汽小火车了。 + +sl 是一个搞笑软件,也是一个 Unix 游戏。它会在你错误地把“ls”输入成“sl”(Steam Locomotive)后出现一辆蒸汽火车穿过你的屏幕。 + +### 安装 sl ### + +在 Debian/Ubuntu 下输入下面的命令: + + # apt-get install sl + +它同样也在 Freebsd 和其他类 Unix 的操作系统上存在。 + +下面,让我们把 ls 输错成 sl: + + $ sl + +![Fig.01: Run steam locomotive across the screen if you type "sl" instead of
"ls"](http://files.cyberciti.biz/uploads/tips/2011/05/sl_command_steam_locomotive.png) + +*图01: 如果你把 “ls” 输入成 “sl” ,蒸汽火车会穿过你的屏幕。* + +它同样支持下面的选项: + +- **-a** : 似乎发生了意外。你会为那些哭喊求助的人们感到难过。 +- **-l** : 显示小一点的火车 +- **-F** : 它居然飞走了 +- **-e** : 允许被 Ctrl+C 中断 + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/tips/displays-animations-when-accidentally-you-type-sl-instead-of-ls.html + +作者:Vivek Gite +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.cyberciti.biz/tips/my-10-unix-command-line-mistakes.html +[2]:http://bash.cyberciti.biz/guide/Create_and_use_aliases diff --git a/published/201512/20151215 Linux or UNIX Desktop Fun--Terminal ASCII Aquarium.md b/published/201512/20151215 Linux or UNIX Desktop Fun--Terminal ASCII Aquarium.md new file mode 100644 index 0000000000..e24c2f381e --- /dev/null +++ b/published/201512/20151215 Linux or UNIX Desktop Fun--Terminal ASCII Aquarium.md @@ -0,0 +1,67 @@ +Linux/Unix 桌面趣事:终端 ASCII 水族箱 +================================================================================ + +你可以在你的终端中使用 ASCIIQuarium 安全地欣赏海洋的神秘了。它是一个用 perl 写的 ASCII 艺术水族箱/海洋动画。 + +### 安装 Term::Animation ### + +首先你需要安装名为 Term-Animation 的perl模块。打开终端(选择程序 > 附件 > 终端),并输入: + + $ sudo apt-get install libcurses-perl + $ cd /tmp + $ wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz + $ tar -zxvf Term-Animation-2.4.tar.gz + $ cd Term-Animation-2.4/ + $ perl Makefile.PL && make && make test + $ sudo make install + +### 下载安装 ASCIIQuarium ### + +接着在终端中输入: + + $ cd /tmp + $ wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz + $ tar -zxvf asciiquarium.tar.gz + $ cd asciiquarium_1.0/ + $ sudo cp asciiquarium /usr/local/bin + $ sudo chmod 0755 /usr/local/bin/asciiquarium + +### 我怎么观赏 ASCII 水族箱? 
### + +输入下面的命令: + + $ /usr/local/bin/asciiquarium + +或者 + + $ perl /usr/local/bin/asciiquarium + +![Fig.01: ASCII Aquarium](http://s0.cyberciti.org/uploads/tips/2011/01/screenshot-ASCIIQuarium.png) + +*ASCII 水族箱* + +### 相关媒体 ### + +注:youtube 视频 + + +[视频01: ASCIIQuarium - Linux/Unix桌面上的海洋动画][1] + +### 下载:ASCII Aquarium 的 KDE 和 Mac OS X 版本 ### + +[点此下载 asciiquarium][2]。如果你运行的是 Mac OS X,试下这个可以直接使用的已经打包好的[版本][3]。对于 KDE 用户,试试基于 Asciiquarium 的 [KDE 屏幕保护程序][4]。 + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/tips/linux-unix-apple-osx-terminal-ascii-aquarium.html + +作者:Vivek Gite +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://youtu.be/MzatWgu67ok +[2]:http://www.robobunny.com/projects/asciiquarium/html/ +[3]:http://habilis.net/macasciiquarium/ +[4]:http://kde-look.org/content/show.php?content=29207 diff --git a/published/201512/20151215 Linux or Unix Desktop Fun--Cat And Mouse Chase All Over Your Screen.md b/published/201512/20151215 Linux or Unix Desktop Fun--Cat And Mouse Chase All Over Your Screen.md new file mode 100644 index 0000000000..1994d7b797 --- /dev/null +++ b/published/201512/20151215 Linux or Unix Desktop Fun--Cat And Mouse Chase All Over Your Screen.md @@ -0,0 +1,89 @@ +Linux/Unix桌面趣事:显示器里的猫和老鼠 +================================================================================ +Oneko 是一个有趣的应用。它会把你的光标变成一只老鼠,并在后面创建一个可爱的小猫,并且始终追逐着老鼠光标。单词“neko”在日语中的意思是“猫”。它最初是一位日本人开发的 Macintosh 桌面附件。 + +### 安装 oneko ### + +试下下面的命令: + + $ sudo apt-get install oneko + +示例输出: + + [sudo] password for vivek: + Reading package lists... Done + Building dependency tree + Reading state information... Done + The following NEW packages will be installed: + oneko + 0 upgraded, 1 newly installed, 0 to remove and 10 not upgraded. + Need to get 38.6 kB of archives. 
+ After this operation, 168 kB of additional disk space will be used. + Get:1 http://debian.osuosl.org/debian/ squeeze/main oneko amd64 1.2.sakura.6-7 [38.6 kB] + Fetched 38.6 kB in 1s (25.9 kB/s) + Selecting previously deselected package oneko. + (Reading database ... 274152 files and directories currently installed.) + Unpacking oneko (from .../oneko_1.2.sakura.6-7_amd64.deb) ... + Processing triggers for menu ... + Processing triggers for man-db ... + Setting up oneko (1.2.sakura.6-7) ... + Processing triggers for menu ... + +FreeBSD 用户输入下面的命令安装 oneko: + + # cd /usr/ports/games/oneko + # make install clean + +### 我该如何使用 oneko? ### + +输入下面的命令: + + $ oneko + +你可以把猫变成 “tora-neko”,一只像白老虎条纹的猫: + + $ oneko -tora + +### 不喜欢猫? ### + +你可以用狗代替猫: + + $ oneko -dog + +下面可以用樱花代替猫: + + $ oneko -sakura + +用大道寺代替猫: + + $ oneko -tomoyo + +### 查看相关媒体 ### + +这个教程同样也有视频格式: + +注:youtube 视频 + + +(Video.01: 示例 - 在 Linux 下安装和使用 oneko) + +### 其他选项 ### + +你可以传入下面的选项 + +1. **-tofocus**:让猫在获得焦点的窗口顶部奔跑。当获得焦点的窗口不在视野中时,猫像平常那样追逐老鼠。 +2. **-position 坐标** :指定X和Y来调整猫相对老鼠的位置 +3. **-rv**:将前景色和背景色对调 +4. **-fg 颜色** : 前景色 (比如 oneko -dog -fg red)。 +5. **-bg 颜色** : 背景色 (比如 oneko -dog -bg green)。 +6. 
查看 oneko 的手册获取更多信息。 + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/open-source/oneko-app-creates-cute-cat-chasing-around-your-mouse/ + +作者:Vivek Gite +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201512/20151229 Watch Star Wars In Linux Terminal.md b/published/201512/20151229 Watch Star Wars In Linux Terminal.md new file mode 100644 index 0000000000..b92a5b8b61 --- /dev/null +++ b/published/201512/20151229 Watch Star Wars In Linux Terminal.md @@ -0,0 +1,55 @@ +在 Linux 终端下看《星球大战》 +================================================================================ +![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-2.png) + +《星球大战(Star Wars)》已经席卷世界。最新一部的[《星球大战》系列电影《星球大战7:原力觉醒》打破了有史以来的票房记录][1]。 + +虽然我不能帮你得到一张最新的《星球大战》的电影票,但我可以提供给你一种方式,看[星球大战第四集][2],它是最早上映的《星球大战》电影(1977 年)。 + + +不,它不会是高清,也不是蓝光版。相反,它将是 ASCII 版的《星球大战》第四集,你可以在 Linux 终端看它,这才是真正的极客的方式 :) + +### 在 Linux 终端看星球大战 ### + +打开一个终端,使用以下命令: + + telnet towel.blinkenlights.nl + +等待几秒钟,你可以在终端看到类似于以下这样的 ASCII 艺术动画: + +(LCTT 译注:有时候会解析到效果更好的 IPv6 版本上,如果你没有 IPv6 地址,可以重新连接试试;另外似乎线路不稳定,出现卡顿时稍等。) + +![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal.png) + +它将继续播映…… + +![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-1.png) + +![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-2.png) + +![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-3.png) + +![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-5.png) + +要停止动画,按 `Ctrl+]`,在这之后输入 quit 来退出 telnet 程序。 + +### 更多有趣的终端 ### + +事实上,看《星球大战》并不是你在 Linux 终端下唯一能做的有趣的事情。您还可以运行[终端里的列车][3]或[通过 ASCII 艺术得到 Linux 标志][4]。 + +希望你能享受在 Linux 下看《星球大战》。 + +-------------------------------------------------------------------------------- + +via: 
http://itsfoss.com/star-wars-linux/ + +作者:[Abhishek][a] +译者:[zky001](https://github.com/zky001) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.gamespot.com/articles/star-wars-7-breaks-thursday-night-movie-opening-re/1100-6433246/ +[2]:http://www.imdb.com/title/tt0076759/ +[3]:http://itsfoss.com/ubuntu-terminal-train/ +[4]:http://itsfoss.com/display-linux-logo-in-ascii/ diff --git a/published/20151204 How to Install Laravel PHP Framework on CentOS 7 or Ubuntu 15.04.md b/published/20151204 How to Install Laravel PHP Framework on CentOS 7 or Ubuntu 15.04.md new file mode 100644 index 0000000000..5b60ec5f1a --- /dev/null +++ b/published/20151204 How to Install Laravel PHP Framework on CentOS 7 or Ubuntu 15.04.md @@ -0,0 +1,164 @@ +如何在 CentOS 7 / Ubuntu 15.04 上安装 PHP 框架 Laravel +================================================================================ + +大家好,这篇文章将要讲述如何在 CentOS 7 / Ubuntu 15.04 上安装 Laravel。如果你是一名 PHP Web 开发者,你不必再为如何在琳琅满目的现代 PHP 框架中做选择而发愁,Laravel 是其中最容易上手和运行的,它省时省力,能让你享受到 web 开发的乐趣。Laravel 信奉一个普适的开发哲学:把通过简明准则创建出可维护的代码放在最高优先级,这样你既能保持高速的开发效率,又能随时毫不畏惧地更改代码来改进现有功能。 + +Laravel 安装并不繁琐,你只要跟着本文一步步操作,就能在 CentOS 7 或者 Ubuntu 15 服务器上安装好它。 + +### 1) 服务器要求 ### + +在安装 Laravel 前需要先满足一些前提条件,主要是一些基本的调整,比如升级系统到最新版本、取得 sudo 权限以及安装依赖包。 + +连接到你的服务器后,请确保能通过以下命令成功启用 EPEL 仓库,并升级你的服务器。 + +#### CentOS-7 #### + + # yum install epel-release + + # rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm + # rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm + + # yum update + +#### Ubuntu #### + + # apt-get install python-software-properties + # add-apt-repository ppa:ondrej/php5 + + # apt-get update + + # apt-get install -y php5 mcrypt php5-mcrypt php5-gd + +### 2) 防火墙设置 ### + +系统防火墙和 SELinux 设置对于生产环境应用的安全来说非常重要,当你使用测试服务器的时候可以关闭防火墙,并用以下命令把 SELinux 设置成宽容模式(permissive),来保证安装程序不受它们的影响。 + + # setenforce 0 + +### 3) Apache, MariaDB, PHP 安装 ### + 
+Laravel 的安装需要完整的 LAMP 环境,还需要额外安装 OpenSSL、PDO、Mbstring 和 Tokenizer 等 PHP 扩展。如果 LAMP 已经运行在你的服务器上,你可以跳过这一步,直接确认一些必要的 PHP 扩展是否安装好。 + +要安装完整的 LAMP 环境,你需要在自己的服务器上运行以下命令。 + +#### CentOS #### + + # yum install httpd mariadb-server php56w php56w-mysql php56w-mcrypt php56w-dom php56w-mbstring + +要在 CentOS 7 上启动 Apache 和 MySQL / MariaDB 服务,并设置它们开机自动启动,你需要运行以下命令。 + + # systemctl start httpd + # systemctl enable httpd + + # systemctl start mysqld + # systemctl enable mysqld + +在启动 MariaDB 服务之后,你需要运行以下命令配置一个足够安全的密码。 + + # mysql_secure_installation + +#### Ubuntu #### + + # apt-get install mysql-server apache2 libapache2-mod-php5 php5-mysql + +### 4) 安装 Composer ### + +在我们安装 Laravel 前,先让我们开始安装 composer。安装 composer 是安装 Laravel 的最重要步骤之一,因为 composer 能帮我们安装 Laravel 的各种依赖。 + +#### CentOS/Ubuntu #### + +在 CentOS / Ubuntu 下运行以下命令来配置 composer。 + + # curl -sS https://getcomposer.org/installer | php + # mv composer.phar /usr/local/bin/composer + # chmod +x /usr/local/bin/composer + +![composer installation](http://blog.linoxide.com/wp-content/uploads/2015/11/14.png) + +### 5) 安装 Laravel ### + +我们可以运行以下命令从 github 上下载 Laravel 的安装包。 + + # wget https://github.com/laravel/laravel/archive/develop.zip + +运行以下命令解压安装包,并将其移动到网站文档根目录下。 + + # unzip develop.zip + + # mv laravel-develop /var/www/ + +现在使用 composer 命令来安装目录下所有 Laravel 所需要的依赖。 + + # cd /var/www/laravel-develop/ + # composer install + +![compose laravel](http://blog.linoxide.com/wp-content/uploads/2015/11/25.png) + +### 6) 密钥 ### + +为了加密 web 应用,我们使用以下命令来生成一个 32 位的加密密钥。 + + # php artisan key:generate + + Application key [Lf54qK56s3qDh0ywgf9JdRxO2N0oV9qI] set successfully + +现在把这个密钥放到 'app.php' 文件,如以下所示。 + + # vim /var/www/laravel-develop/config/app.php + +![Key encryption](http://blog.linoxide.com/wp-content/uploads/2015/11/45.png) + +### 7) 虚拟主机和所属用户 ### + +在 composer 安装好后,为文档根目录分配权限和所属用户,如下所示。 + + # chmod 775 /var/www/laravel-develop/app/storage + + # chown -R apache:apache /var/www/laravel-develop + +用任意一款编辑器打开 apache 服务器的默认配置文件,在文件最后加上虚拟主机配置。 + + # 
vim /etc/httpd/conf/httpd.conf + +---------- + + <VirtualHost *:80> + ServerName laravel-develop + DocumentRoot /var/www/laravel/public + + <Directory /var/www/laravel> + AllowOverride All + </Directory> + </VirtualHost> + +现在我们用以下命令重启 apache 服务器,然后打开浏览器查看 localhost 页面。 + +#### CentOS #### + + # systemctl restart httpd + +#### Ubuntu #### + + # service apache2 restart + +### 8) Laravel 5 网络访问 ### + +打开浏览器,然后输入你配置的 IP 地址或者完整域名(fully qualified domain name),你将会看到 Laravel 5 的默认页面。 + +![Laravel Default](http://blog.linoxide.com/wp-content/uploads/2015/11/35.png) + +### 总结 ### + +Laravel 框架对于开发网页应用来说是一个绝好的工具。看了这篇文章,你将学会在 Ubuntu 15 和 CentOS 7 上安装 Laravel,之后你就可以使用这个超棒的 PHP 框架提供的各种功能和舒适便捷性来进行你的开发工作。 + +如果您有什么意见或者建议,请在以下评论区中回复,我们将根据您宝贵的反馈来使我们的文章更加浅显易懂。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-laravel-php-centos-7-ubuntu-15-04/ + +作者:[Kashif][a] +译者:[NearTan](https://github.com/NearTan) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ diff --git a/published/20151204 Linux or Unix--jobs Command Examples.md b/published/20151204 Linux or Unix--jobs Command Examples.md new file mode 100644 index 0000000000..70e5138862 --- /dev/null +++ b/published/20151204 Linux or Unix--jobs Command Examples.md @@ -0,0 +1,172 @@ +jobs 命令示例 +================================================================================ + +我是个新的 Linux/Unix 用户。我该如何在 Linux 或类 Unix 系统中使用 BASH/KSH/TCSH 或者基于 POSIX 的 shell 来查看当前正在进行的作业(job)?在 Unix/Linux 上怎样显示当前作业的状态?(LCTT 译注:job,也常称为“任务”) + +作业控制是一种能力,可以停止/暂停进程(命令)的执行并按你的要求继续/恢复它们的执行。这是通过你的操作系统和诸如 bash/ksh 或 POSIX shell 等 shell 来实现的。 + +shell 会将当前所执行的作业保存在一个表中,可以用 jobs 命令来显示。 + +### 用途 ### + +> 在当前 shell 会话中显示作业的状态。 + +### 语法 ### + +其基本语法如下: + + jobs + +或 + + jobs jobID + +或者 + + jobs [options] jobID + +### 启动一些作业来进行示范 ### + +在开始使用 jobs 命令前,你需要在系统上先启动多个作业。执行以下命令来启动作业: + 
### 启动 xeyes, calculator, 和 gedit 文本编辑器 ### + xeyes & + gnome-calculator & + gedit fetch-stock-prices.py & + +最后,在前台运行 ping 命令: + + ping www.cyberciti.biz + +按 **Ctrl-Z** 键来挂起(suspend) ping 命令的作业。 + +### jobs 命令示例 ### + +要在当前 shell 显示作业的状态,请输入: + + $ jobs + +输出示例: + + [1] 7895 Running gpass & + [2] 7906 Running gnome-calculator & + [3]- 7910 Running gedit fetch-stock-prices.py & + [4]+ 7946 Stopped ping cyberciti.biz + +要显示名字以“p”开头的作业的进程 ID 或作业名称,输入: + + $ jobs -p %p + +或者 + + $ jobs %p + +输出示例: + + [4]- Stopped ping cyberciti.biz + +字符 % 是一种指定作业的方法。在这个例子中,你可以用作业名称的开头字符串来指代它,如 %ping。 + +### 如何在显示常规信息之外再显示进程 ID? ### + +通过 jobs 命令的 -l(小写的 L)选项列出每个作业的详细信息,运行: + + $ jobs -l + +示例输出: + +![Fig.01: Displaying the status of jobs in the shell](http://s0.cyberciti.org/uploads/faq/2013/02/jobs-command-output.jpg) + +*Fig.01: 在 shell 中显示 jobs 的状态* + +### 如何只列出最近一次状态改变的进程? ### + +首先,启动一个新的作业如下所示: + + $ sleep 100 & + +现在,只显示自从上次提示过停止或退出之后的作业,输入: + + $ jobs -n + +示例输出: + + [5]- Running sleep 100 & + +### 仅显示进程 ID(PID) ### + +通过 jobs 命令的 -p 选项仅显示 PID: + + $ jobs -p + +示例输出: + + 7895 + 7906 + 7910 + 7946 + 7949 + +### 怎样只显示正在运行的作业呢? ### + +通过 jobs 命令的 -r 选项只显示正在运行的作业,输入: + + $ jobs -r + +示例输出: + + [1] Running gpass & + [2] Running gnome-calculator & + [3]- Running gedit fetch-stock-prices.py & + +### 怎样只显示已经停止工作的作业? 
### + +通过 jobs 命令的 -s 选项只显示停止工作的作业,输入: + + $ jobs -s + +示例输出: + + [4]+ Stopped ping cyberciti.biz + +要继续执行 ping cyberciti.biz 作业,输入以下 bg 命令: + + $ bg %4 + +### jobs 命令选项 ### + +摘自 [bash(1)][1] 命令 man 手册页: + +|选项|描述| +|---|------------------| +|`-l`| 列出进程 ID 及其它信息。| +|`-p`| 仅列出进程 ID。| +|`-n`| 仅列出自从上次输出了状态变化提示(比如显示有进程退出)后的发生了状态变化的进程。| +|`-r`| 仅显示运行中的作业。| +|`-s`| 仅显示停止的作业。| +|`-x`| 运行命令及其参数,并用新的命令的进程 ID 替代所匹配的原有作业的进程组 ID。| + +### 关于 /usr/bin/jobs 和 shell 内建的说明 ### + +输入以下 type 命令,确认 jobs 命令是 shell 的内建命令、外部命令,还是两者皆有: + + $ type -a jobs + +输出示例: + + jobs is a shell builtin + jobs is /usr/bin/jobs + +在几乎所有情况下,你都需要使用 BASH/KSH/POSIX shell 内建的 jobs 命令。/usr/bin/jobs 命令不能被用在当前 shell 中。/usr/bin/jobs 命令工作在不同的环境中,并不共享其父 bash/ksh 的 shell 作业。 + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/unix-linux-jobs-command-examples-usage-syntax/ + +作者:Vivek Gite +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.manpager.com/linux/man1/bash.1.html diff --git a/published/20151208 How to renew the ISPConfig 3 SSL Certificate.md b/published/20151208 How to renew the ISPConfig 3 SSL Certificate.md new file mode 100644 index 0000000000..ea132d4185 --- /dev/null +++ b/published/20151208 How to renew the ISPConfig 3 SSL Certificate.md @@ -0,0 +1,58 @@ +如何更新 ISPConfig 3 SSL 证书 +================================================================================ +本教程描述了如何在 ISPConfig 3 控制面板中更新 SSL 证书。有两个可选的方法: + +- 用 OpenSSL 创建一个新的 OpenSSL 证书和 CSR。 +- 用 ISPConfig updater 更新 SSL 证书 + +我将从手工更新 SSL 证书的方法开始。 + +### 1)用 OpenSSL 创建一个新的 ISPConfig 3 SSL 证书 ### + +用 root 用户登录你的服务器。在创建一个新的 SSL 证书之前,先备份现有的。SSL 证书是安全敏感的,因此我将它存储在 /root/ 目录下。 + + tar pcfz /root/ispconfig_ssl_backup.tar.gz /usr/local/ispconfig/interface/ssl + chmod 600 /root/ispconfig_ssl_backup.tar.gz + +现在创建一个新的 SSL 
证书密钥,证书请求(CSR)和自签发证书。 + + cd /usr/local/ispconfig/interface/ssl + openssl genrsa -des3 -out ispserver.key 4096 + openssl req -new -key ispserver.key -out ispserver.csr + openssl x509 -req -days 3650 -in ispserver.csr \ + -signkey ispserver.key -out ispserver.crt + openssl rsa -in ispserver.key -out ispserver.key.insecure + mv ispserver.key ispserver.key.secure + mv ispserver.key.insecure ispserver.key + +重启 apache 来加载新的 SSL 证书: + + service apache2 restart + +### 2)用 ISPConfig 安装器来更新 SSL 证书 ### + +另一个获取新的 SSL 证书的替代方案是使用 ISPConfig 更新脚本。下载 ISPConfig 到 /tmp 目录下,解压包并运行脚本。 + + cd /tmp + wget http://www.ispconfig.org/downloads/ISPConfig-3-stable.tar.gz + tar xvfz ISPConfig-3-stable.tar.gz + cd ispconfig3_install/install + php -q update.php + +更新脚本会在更新时询问下面的问题: + + Create new ISPConfig SSL certificate (yes,no) [no]: + +这里回答“yes”,SSL 证书创建对话框就会启动。 + +-------------------------------------------------------------------------------- + +via: http://www.faqforge.com/linux/how-to-renew-the-ispconfig-3-ssl-certificate/ + +作者:[Till][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.faqforge.com/author/till/ diff --git a/published/20151208 Top 5 open source community metrics to track.md b/published/20151208 Top 5 open source community metrics to track.md new file mode 100644 index 0000000000..5b616f8116 --- /dev/null +++ b/published/20151208 Top 5 open source community metrics to track.md @@ -0,0 +1,80 @@ +衡量开源社区的五大指标 +================================================================================ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/yearbook2015-osdc-lead-1.png) + +如果你想要使用指标来追踪你的自由开源软件(FOSS)社区,那么现在就面临着一个问题:我应该去追踪哪些指标呢?
+ +要回答这个问题,你必须知道你需要什么信息。比如,你可能想要知道一个项目社区的可持续性。一个社区对问题的应对速度有多快。一个社区怎么吸引、维护或者流失贡献者。一旦你知道需要哪类信息,你就可以找出哪些社区活动可以提供你想要知道的内容。幸运的是,自由开源软件(FOSS)遵从开放式开发模型,在其软件开发仓库里留下了大量的公共数据,我们可以对这些数据进行分析,并从中收集到一些有用的数据。 + +在这篇文章中,我会介绍一些指标,从而为你的项目社区提供一个多方位的视角分析。 + +### 1. 社区活动(Activity) ### + +一个社区的总体活动和这个社区怎样随着时间演变,是度量所有社区好坏的非常有用的指标。社区活动是评价一个社区工作量的第一印象,也可以用来追踪不同种类的活动。比如,提交次数,给人的第一印象就是跟开发工作量挂钩。通过提出的问题(tickets opened)我们可以大概知道提交了多少 bug 或者又提出了多少新特性。邮件列表中的邮件数量或者论坛帖子的数量可以让我们了解到有过多少次公开讨论。 + +![Activity metrics chart](https://opensource.com/sites/default/files/images/business-uploads/activity-metrics.png) + +[OpenStack 活动看板][1]上面显示的项目代码提交次数和代码评审之后代码合并次数随时间变化的趋势图(周数据)。 + + +### 2. 社区规模(Size) ### + +社区的规模指的是参与到这个社区的人数,但是,基于不同形式的参与人数也有很大的差别。好消息是,通常你只对积极活跃的贡献者比较感兴趣。活跃的贡献者会在项目的仓库留下一些线索。这意味着你可以通过查看git仓库存放的代码中**author**字段来统计积极贡献代码的人数,或者通过看积极参与问题解决的人数来统计活跃人数。 + +所谓活动(某些人做了某些事)可以扩展到很多方面。一种常见的跟踪活动的方式是看有多少人做了工作量相当可观的任务。比如,通常一个项目代码的贡献者是来自这个项目社区的一小部分人。了解了这一小部分人,就对核心的工作组(比如,领导这个社区的人)有一个基本的认识了。 + +![Size metrics chart](https://opensource.com/sites/default/files/images/business-uploads/size-metrics.png) + +[Xen 项目开发看板][2]上展示的该项目邮件列表上作者人数和提交人数随时间的变化趋势(每月数据) + +### 3. 社区表现(Performance) ### + +到目前为止,关注点主要集中在活动数量和贡献者数量的统计上了。你也可以分析流程还有用户的表现如何。比如,你可以测量某流程需要多久才能执行完成。解决或者关闭问题的时间可以表明一个需要及时响应的项目对新信息的应对如何,比如修复一个报告过来的 bug 或者实现一个新需求。代码评审花费的时间,即从代码修改提交到被通过的时间,可以看出更新一个提出的改变要达到社区期望的标准需要多久。 + +其他的一些指标主要与项目处理挂起的工作表现如何有关,比如新的和被关闭问题的比例,或者仍然没有完成的代码评审的队列。这些参数能告诉我们像投入到解决这些问题的资源是否充足这样的一些信息。 + +![Efficiency metrics chart](https://opensource.com/sites/default/files/images/business-uploads/efficiency-metrics.png) + +在[2015第三季度 OpenStack 开发报告][3]上显示的,每季度关闭与打开状态的问题数之比,接受与放弃的改变提案与最新的改变提案之比。 + +### 4. 
社区人口特征(Demographics) ### + +随着贡献者的参与或者退出,社区也在不断改变。随着人们加入和退出社区,社区成员的会龄(从社区成员加入时算起)也各异。[社区会龄统计图表][4]很直观地展现了这些改变随时间的变化。图表是由一系列的水平条组成,每两条水平条代表加入到社区的一代人。对于每一代,吸引力(Attracted)水平条表示在相应的时间里有多少人加入到了社区。活跃度(Retained)水平条表示有多少人目前仍然活跃在社区。 + +代表一代人的两个水平条的关系就是滞留比例:依然留在这个项目中的那一代人所占的比例。吸引力(Attracted)水平条的完整集合表示这个项目在过去有多么受欢迎。活跃度(Retained)水平条的完整集合则表示社区目前的会龄结构。 + +![Demographics metrics chart](https://opensource.com/sites/default/files/images/business-uploads/demography-metrics.png) + +[Eclipse 开发看板][5]上显示的 Eclipse 社区的社区会龄表,每六个月划分一代。 + +### 5. 社区多样性(Diversity) ### + +多样性是一个社区保持弹性的非常关键的因素。通常来说,一个社区越具有多样性(人或者组织参与的多元化),那么这个社区的弹性也就越大。比如,如果一家公司决定离开某个自由开源社区,其员工的贡献占 5% 时引起的潜在问题,要远比占 85% 时小得多。 + +[小马因素(Pony Factor)][6]是 [Daniel Gruno][7] 定义的术语,指“贡献了 50% 代码提交量的最少开发者数量”。在小马因素的基础上,大象因素(Elephant Factor)则是指“其员工贡献了 50% 代码提交量的最少公司数量”。这两个数据提供了一种指示,即这个社区依赖多少人或者公司。 + +![Diversity metrics chart](https://opensource.com/sites/default/files/images/business-uploads/diversity-metrics.png) + +[2015 开放云数量状态统计][8]显示的在云计算领域的几个自由开源社区项目的小马和大象因素。 + +还有许多其他的指标来衡量一个社区。在决定收集哪些指标时,可以考虑一下社区的目标,还有哪些指标能帮到你。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/business/15/12/top-5-open-source-community-metrics-track + +作者:[Jesus M. 
Gonzalez-Barahona][a] +译者:[sonofelice](https://github.com/sonofelice) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jgbarah +[1]:http://activity.openstack.org/ +[2]:http://projects.bitergia.com/xen-project-dashboard/ +[3]:http://activity.openstack.org/dash/reports/2015-q3/pdf/2015-q3_OpenStack_report.pdf +[4]:http://radar.oreilly.com/2014/10/measure-your-open-source-communitys-age-to-keep-it-healthy.html +[5]:http://dashboard.eclipse.org/demographics.html +[6]:https://ke4qqq.wordpress.com/2015/02/08/pony-factor-math/ +[7]:https://twitter.com/humbedooh +[8]:https://speakerdeck.com/jgbarah/the-quantitative-state-of-the-open-cloud-2015-edition \ No newline at end of file diff --git a/published/20151215 Fix--Cannot establish FTP connection to an SFTP server.md b/published/20151215 Fix--Cannot establish FTP connection to an SFTP server.md new file mode 100644 index 0000000000..02ce43fcf4 --- /dev/null +++ b/published/20151215 Fix--Cannot establish FTP connection to an SFTP server.md @@ -0,0 +1,50 @@ +错误:无法与 SFTP 服务器建立 FTP 连接 +================================================================================ + +### 问题 ### + +有一天我要连接到我的 web 服务器。我使用 [FileZilla][1] 连接到 FTP 服务器。当我输入主机名和密码连接服务器后,我得到了下面的错误。 + +> Error: Cannot establish FTP connection to an SFTP server. Please select proper protocol. 
+> +> Error: Critical error: Could not connect to server + +![FileZilla Cannot establish FTP connection to an SFTP server](http://itsfoss.com/wp-content/uploads/2015/12/FileZilla_FTP_SFTP_Problem_1.jpeg) + +### 原因 ### + +看见错误信息后我意识到了我的错误是什么。我尝试与一台 **SFTP** 服务器建立一个 **[FTP][2]** 连接。很明显我没有使用一个正确的协议(应该是 SFTP 而不是 FTP)。 + +如你在上图所见,FileZilla 默认使用的是 FTP 协议。 + +### 解决 “Cannot establish FTP connection to an SFTP server” 的方案 ### + +解决方案很简单:使用 SFTP 协议而不是 FTP。你要做的就是把连接协议修改成 SFTP,下面我来演示怎么改。 + +在 FileZilla 菜单中,进入 **文件 -> 站点管理器**。 + +![FileZilla Site Manager](http://itsfoss.com/wp-content/uploads/2015/12/FileZilla_FTP_SFTP_Problem_2.jpeg) + +在站点管理器中,进入通用选项并选择 SFTP 协议。同样填上主机、端口号、用户密码等。 + +![Cannot establish FTP connection to an SFTP server](http://itsfoss.com/wp-content/uploads/2015/12/FileZilla_FTP_SFTP_Problem_3.png) + +这样你应该就可以正常连接了。 + +我希望本篇教程可以帮助你修复 “Cannot establish FTP connection to an SFTP server. Please select proper protocol.” 这个问题。作为相关阅读,你可以看看[了解在 Linux 中如何设置 FTP][4]。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/fix-establish-ftp-connection-sftp-server/ + +作者:[Abhishek][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:https://filezilla-project.org/ +[2]:https://en.wikipedia.org/wiki/File_Transfer_Protocol +[3]:https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol +[4]:http://itsfoss.com/set-ftp-server-linux/ diff --git a/published/20151215 How to block network traffic by country on Linux.md b/published/20151215 How to block network traffic by country on Linux.md new file mode 100644 index 0000000000..0a70e7d49b --- /dev/null +++ b/published/20151215 How to block network traffic by country on Linux.md @@ -0,0 +1,111 @@ +如何在 Linux 中根据国家位置来阻断网络流量 +================================================================================ + +作为一名维护 Linux 
生产服务器的系统管理员,你可能会遇到这样一些情形:你需要**根据地理位置,选择性地阻断或允许网络流量通过。** 例如你正经历一次由注册在某个特定国家的 IP 发起的 DoS 攻击;或者基于安全考虑,你想阻止来自未知国家的 SSH 登录请求;又或者你的公司对某些在线视频有分销权,它要求只能在特定的国家内合法发行;抑或是由于公司的政策,你需要阻止某个本地主机将文件上传至任意一个非美国的远程云端存储。 + +所有的上述情形都需要设置防火墙,使之具有**基于国家位置过滤流量**的功能。有几个方法可以做到这一点,其中之一是你可以使用 TCP wrappers 来为某个应用(例如 SSH、NFS、httpd)设置条件阻塞。但其缺点是你想要保护的那个应用必须以支持 TCP wrappers 的方式构建。另外,TCP wrappers 并不总是能够在各个平台中获取到(例如,Arch Linux [放弃了][1]对它的支持)。另一种方式是结合基于国家的 GeoIP 信息,设置 [ipset][2],并将它应用到 iptables 的规则中。后一种方式看起来更有希望一些,因为基于 iptables 的过滤器是与应用无关的,且容易设置。 + +在本教程中,我将展示 **另一个基于 iptables 的 GeoIP 过滤器,它由 xtables-addons 来实现**。对于那些不熟悉它的人来说,xtables-addons 是用于 netfilter/iptables 的一系列扩展。一个包含在 xtables-addons 中的名为 xt\_geoip 的模块扩展了 netfilter/iptables 的功能,使得它可以根据流量来自或流向的国家来进行过滤、IP 掩蔽(NAT)或丢包。若你想使用 xt\_geoip,你不必重新编译内核或 iptables,你只需要使用当前的内核构建环境(/lib/modules/\`uname -r\`/build)以模块的形式构建 xtables-addons。同时也不需要进行重启。只要你构建并安装了 xtables-addons,xt\_geoip 便能够配合 iptables 使用。 + +至于 xt\_geoip 和 ipset 之间的比较,[xtables-addons 的官方网站][3] 上是这么说的:相比于 ipset,xt\_geoip 在内存占用上更胜一筹,但对于匹配速度,基于哈希的 ipset 可能更有优势。 + +在教程的余下部分,我将展示**如何使用 iptables/xt\_geoip 来根据流量的来源地或流入的国家阻断网络流量**。 + +### 在 Linux 中安装 xtables-addons ### + +下面介绍如何在各种 Linux 平台中编译和安装 xtables-addons。 + +为了编译 xtables-addons,首先你需要安装一些依赖软件包。 + +#### 在 Debian,Ubuntu 或 Linux Mint 中安装依赖 #### + + $ sudo apt-get install iptables-dev xtables-addons-common libtext-csv-xs-perl pkg-config + +#### 在 CentOS,RHEL 或 Fedora 中安装依赖 #### + +CentOS/RHEL 6 需要事先设置好 EPEL 仓库(为 perl-Text-CSV\_XS 所需要)。 + + $ sudo yum install gcc-c++ make automake kernel-devel-`uname -r` wget unzip iptables-devel perl-Text-CSV_XS + +#### 编译并安装 xtables-addons #### + +从 `xtables-addons` 的[官方网站][4] 下载源码包,然后按照下面的指令编译安装它。 + + $ wget http://downloads.sourceforge.net/project/xtables-addons/Xtables-addons/xtables-addons-2.10.tar.xz + $ tar xf xtables-addons-2.10.tar.xz + $ cd xtables-addons-2.10 + $ ./configure + $ make + $ sudo make install + +需要注意的是,对于基于红帽的系统(CentOS、RHEL、Fedora),它们默认开启了 SELinux,所以有必要像下面这样调整 SELinux 的策略。否则,SELinux 将阻止 iptables 
加载 xt\_geoip 模块。 + + $ sudo chcon -vR --user=system_u /lib/modules/$(uname -r)/extra/*.ko + $ sudo chcon -vR --type=lib_t /lib64/xtables/*.so + +### 为 xtables-addons 安装 GeoIP 数据库 ### + +下一步是安装 GeoIP 数据库,它将被 xt\_geoip 用来查询 IP 地址与国家地区之间的对应关系。方便的是,`xtables-addons` 的源码包中带有两个帮助脚本,它们被用来从 MaxMind 下载 GeoIP 数据库并将它转化为 xt\_geoip 可识别的二进制形式文件;它们可以在源码包中的 geoip 目录下找到。请遵循下面的指导来在你的系统中构建和安装 GeoIP 数据库。 + + $ cd geoip + $ ./xt_geoip_dl + $ ./xt_geoip_build GeoIPCountryWhois.csv + $ sudo mkdir -p /usr/share/xt_geoip + $ sudo cp -r {BE,LE} /usr/share/xt_geoip + +根据 [MaxMind][5] 的说明,他们的 GeoIP 数据库能够以 99.8% 的准确率识别出 IP 所对应的国家,并且每月这个数据库将进行更新。为了使得本地安装的 GeoIP 数据是最新的,或许你需要设置一个按月执行的 [cron job][6] 来时常更新你本地的 GeoIP 数据库。 + +### 阻断来自或流向某个国家的网络流量 ### + +一旦 xt\_geoip 模块和 GeoIP 数据库安装好后,你就可以在 iptables 命令中使用 geoip 的匹配选项。 + + $ sudo iptables -m geoip --src-cc country[,country...] --dst-cc country[,country...] + +你想要阻断流量的那些国家是使用[2个字母的 ISO3166 代码][7] 来指定的(例如 US(美国)、CN(中国)、IN(印度)、FR(法国))。 + +例如,假如你想阻断来自也门(YE)和赞比亚(ZM)的流量,下面的 iptables 命令便可以达到此目的。 + + $ sudo iptables -I INPUT -m geoip --src-cc YE,ZM -j DROP + +假如你想阻断流向中国(CN)的流量,可以运行下面的命令: + + $ sudo iptables -A OUTPUT -m geoip --dst-cc CN -j DROP + +匹配条件也可以通过在 `--src-cc` 或 `--dst-cc` 选项前加 `!` 来达到相反的目的: + +假如你想在你的服务器上阻断来自所有非美国的流量,可以运行: + + $ sudo iptables -I INPUT -m geoip ! --src-cc US -j DROP + +![](https://c2.staticflickr.com/6/5654/23665427845_050241b03f_c.jpg) + +#### 对于使用 Firewall-cmd 的用户 #### + +某些发行版例如 CentOS/RHEL 7 或 Fedora 已经用 firewalld 替代了 iptables 来作为默认的防火墙服务。在这些系统中,你可以类似使用 xt\_geoip 那样,使用 firewall-cmd 来阻断流量。利用 firewall-cmd 命令,上面的三个例子可被重新写为: + + $ sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -m geoip --src-cc YE,ZM -j DROP + $ sudo firewall-cmd --direct --add-rule ipv4 filter OUTPUT 0 -m geoip --dst-cc CN -j DROP + $ sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -m geoip ! 
--src-cc US -j DROP + +### 总结 ### + +在本教程中,我展示了使用 iptables/xt\_geoip 来根据流量的来源地或流入的国家轻松地阻断网络流量。假如你有这方面的需求,把它部署到你的防火墙系统中可以使之成为一个实用的办法。作为最后的警告,我应该提醒你的是:在你的服务器上通过基于 GeoIP 的流量过滤来禁止特定国家的流量并不总是万无一失的。GeoIP 数据库本身就不是很准确或齐全,且流量的来源或目的地可以轻易地通过使用 VPN、Tor 或其他任意易受攻击的中继主机来达到欺骗的目的。基于地理位置的过滤器甚至可能会阻止本不该阻止的合法网络流量。在你决定把它部署到你的生产环境之前请仔细考虑这个限制。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/block-network-traffic-by-country-linux.html + +作者:[Dan Nanni][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:https://www.archlinux.org/news/dropping-tcp_wrappers-support/ +[2]:http://xmodulo.com/block-unwanted-ip-addresses-linux.html +[3]:http://xtables-addons.sourceforge.net/geoip.php +[4]:http://xtables-addons.sourceforge.net/ +[5]:https://support.maxmind.com/geoip-faq/geoip2-and-geoip-legacy-databases/how-accurate-are-your-geoip2-and-geoip-legacy-databases/ +[6]:http://ask.xmodulo.com/add-cron-job-linux.html +[7]:https://en.wikipedia.org/wiki/ISO_3166-1 diff --git a/published/20151222 Turn Tor socks to http.md b/published/20151222 Turn Tor socks to http.md new file mode 100644 index 0000000000..77415b8e82 --- /dev/null +++ b/published/20151222 Turn Tor socks to http.md @@ -0,0 +1,92 @@ +将 Tor socks 转换成 http 代理 +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/12/tor-593x445.jpg) + +你可以通过不同的 Tor 工具来使用 Tor 服务,如 Tor 浏览器、Foxyproxy 和其它东西,像 wget 和 aria2 这样的下载管理器不能直接使用 Tor socks 开始匿名下载,因此我们需要一些工具来将 Tor socks 转换成 http 代理,这样就能用它来下载了。 + +**注意**:本教程基于 Debian ,其他发行版会有些不同,因此如果你的发行版是基于 Debian 的,就可以直接使用下面的配置了。 + +### Polipo + +这个服务会使用 8123 端口和 127.0.0.1 的 IP 地址,使用下面的命令来在计算机上安装 Polipo: + + sudo apt install polipo + +现在使用如下命令打开 Polipo 的配置文件: + + sudo nano /etc/polipo/config + +在文件最后加入下面的行: + + 
proxyAddress = "::0" + allowedClients = 192.168.1.0/24 + socksParentProxy = "localhost:9050" + socksProxyType = socks5 + +用如下的命令来重启 Polipo: + + sudo service polipo restart + +现在 Polipo 已经安装好了!在匿名的世界里做你想做的吧!下面是使用的例子: + + pdmt -l "link" -i 127.0.0.1 -p 8123 + +通过上面的命令 PDMT(Persian 下载器终端)会匿名地下载你的文件。 + +### Proxychains + +在此服务中你可以设置使用 Tor 或者 Lantern 代理,但是在使用上它和 Polipo 和 Privoxy 有点不同,它不需要使用任何端口!使用下面的命令来安装: + + sudo apt install proxychains + +用这条命令来打开配置文件: + + sudo nano /etc/proxychains.conf + +现在添加下面的代码到文件底部,这里是 Tor 的端口和 IP: + + socks5 127.0.0.1 9050 + +如果你在命令的前面加上“proxychains”并运行,它就能通过 Tor 代理来运行: + + proxychains firefox + proxychains aria2c + proxychains wget + +### Privoxy + +Privoxy 使用 8118 端口,可以很轻松地通过 privoxy 包来安装: + + sudo apt install privoxy + +我们现在要修改配置文件: + + sudo nano /etc/privoxy/config + +在文件底部加入下面的行: + + forward-socks5 / 127.0.0.1:9050 . + forward-socks4a / 127.0.0.1:9050 . + forward-socks5t / 127.0.0.1:9050 . + forward 192.168.*.*/ . + forward 10.*.*.*/ . + forward 127.*.*.*/ . + forward localhost/ . + +重启服务: + + sudo service privoxy restart + +服务已经好了!端口是 8118,IP 是 127.0.0.1,就尽情使用吧!
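上面三个服务各自监听一个本地端口:Tor 是 9050,Polipo 是 8123,Privoxy 是 8118。下面是一个假设性的 bash 小脚本(端口号取自上文配置,`check_port` 是本文虚构的函数名),利用 bash 内建的 /dev/tcp 快速检查这些端口是否已在本机监听,便于配置完后做个快速自检:

```shell
#!/bin/bash
# 假设性示例:检查 Tor(9050)、Polipo(8123)、Privoxy(8118)是否在本机监听
# 依赖 bash 的 /dev/tcp 内建特性和 coreutils 的 timeout,无需安装 nc

# 端口上有服务监听时返回 0,否则返回非 0
check_port() {
    timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/$1" 2>/dev/null
}

for port in 9050 8123 8118; do
    if check_port "$port"; then
        echo "port $port: listening"
    else
        echo "port $port: not listening"
    fi
done
```

如果某个端口显示 not listening,先检查对应的服务是否已经启动(例如 `sudo service polipo restart`)。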
+ +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/turn-tor-socks-http/ + +作者:[Hossein heydari][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/hossein/ diff --git a/published/20151223 How to Setup SSH Login Without Password CentOS or RHEL.md b/published/20151223 How to Setup SSH Login Without Password CentOS or RHEL.md new file mode 100644 index 0000000000..6f9cb3e793 --- /dev/null +++ b/published/20151223 How to Setup SSH Login Without Password CentOS or RHEL.md @@ -0,0 +1,105 @@ +如何在 CentOS / RHEL 上设置 SSH 免密码登录 +================================================================================ +![](http://www.ehowstuff.com/wp-content/uploads/2015/12/notebook-1071774_1280.jpg) + +作为系统管理员,你计划在 Linux 上使用 OpenSSH,完成日常工作的自动化,比如文件传输、备份数据库转储文件到另一台服务器等。为实现该目标,你需要从主机 A 能自动登录到主机 B。自动登录也就是说,要在 shell 脚本中使用 ssh,而无需输入任何密码。 + +本文会告诉你怎样在 CentOS/RHEL 上设置 SSH 免密码登录。自动登录配置好以后,你可以通过它使用 SSH(Secure Shell)和安全复制(SCP)来移动文件。 + +SSH 是开源的,是用于远程登录的最为可靠的网络协议。系统管理员用它来执行命令,以及通过 SCP 协议在网络上向另一台电脑传输文件。 + +通过配置 SSH 免密码登录,你可以享受到如下的便利: + +- 用脚本实现日常工作的自动化。 +- 增强 Linux 服务器的安全性。这是防范虚拟专用服务器(VPS)遭受暴力破解攻击的一个推荐的方法,SSH 密钥单凭暴力破解是几乎不可攻破的。 + +### 什么是 ssh-keygen ### + +ssh-keygen 是一个用来生成、创建和管理 SSH 认证用的公私钥的工具。通过 ssh-keygen 命令,用户可以创建支持 SSH1 和 SSH2 两个协议的密钥。ssh-keygen 为 SSH1 协议创建 RSA 密钥,SSH2 则可以是 RSA 或 DSA。 + +### 什么是 ssh-copy-id ### + +ssh-copy-id 是用来将本地公钥拷贝到远程的 authorized_keys 文件的脚本命令,它还会将身份标识文件追加到远程机器的 ~/.ssh/authorized_keys 文件中,并给远程主机的用户主目录适当的权限。 + +### SSH 密钥 ### + +SSH 密钥为登录 Linux 服务器提供了更好且安全的机制。运行 ssh-keygen 后,将会生成公私密钥对。你可以将公钥放置到任意服务器,从持有私钥的客户端连接到服务器时,会用它来解锁。两者匹配时,系统无需密码就能解除锁定。 + +### 在 CentOS 和 RHEL 上设置免密码登录 SSH ### + +以下步骤在 CentOS 5/6/7、RHEL 5/6/7 和 Oracle Linux 6/7 上测试通过。 + +节点1 : 192.168.0.9 +节点2 : 192.168.0.10 + +#### 步骤一: #### + +测试节点1到节点2的连接和访问: + + [root@node1 ~]# ssh 
root@192.168.0.10 + The authenticity of host '192.168.0.10 (192.168.0.10)' can't be established. + RSA key fingerprint is 6d:8f:63:9b:3b:63:e1:72:b3:06:a4:e4:f4:37:21:42. + Are you sure you want to continue connecting (yes/no)? yes + Warning: Permanently added '192.168.0.10' (RSA) to the list of known hosts. + root@192.168.0.10's password: + Last login: Thu Dec 10 22:04:55 2015 from 192.168.0.1 + [root@node2 ~]# + +#### 步骤二: #### + +使用 ssh-keygen 命令生成公钥和私钥,这里要注意的是可以对私钥进行加密保护以增强安全性。 + + [root@node1 ~]# ssh-keygen + Generating public/private rsa key pair. + Enter file in which to save the key (/root/.ssh/id_rsa): + Enter passphrase (empty for no passphrase): + Enter same passphrase again: + Your identification has been saved in /root/.ssh/id_rsa. + Your public key has been saved in /root/.ssh/id_rsa.pub. + The key fingerprint is: + b4:51:7e:1e:52:61:cd:fb:b2:98:4b:ad:a1:8b:31:6d root@node1.ehowstuff.local + The key's randomart image is: + +--[ RSA 2048]----+ + | . ++ | + | o o o | + | o o o . | + | . o + .. | + | S . . | + | . .. .| + | o E oo.o | + | = ooo. | + | . o.o. | + +-----------------+ + +#### 步骤三: #### + +用 ssh-copy-id 命令将公钥复制或上传到远程主机,并将身份标识文件追加到节点2的 ~/.ssh/authorized_keys 中: + + [root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.0.10 + root@192.168.0.10's password: + Now try logging into the machine, with "ssh '192.168.0.10'", and check in: + + .ssh/authorized_keys + + to make sure we haven't added extra keys that you weren't expecting. 
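
ssh-copy-id 做的事情,本质上就是把公钥追加到对方的 ~/.ssh/authorized_keys 并设置好权限。为了更直观地理解这一步,下面用一个本地临时目录来模拟整个过程(目录结构只是演示用的假设,实际操作时对应的是远程主机上的路径):

```shell
# 在本地临时目录中模拟“生成密钥对 + 追加公钥 + 设置权限”的流程
demo=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -q -f "$demo/id_rsa"   # 生成无口令的密钥对
mkdir -p "$demo/remote/.ssh"                           # 模拟远程主机的 ~/.ssh
cat "$demo/id_rsa.pub" >> "$demo/remote/.ssh/authorized_keys"
chmod 700 "$demo/remote/.ssh"                          # .ssh 目录权限应为 700
chmod 600 "$demo/remote/.ssh/authorized_keys"          # authorized_keys 应为 600
ls -l "$demo/remote/.ssh/authorized_keys"
```

后两条 chmod 对应的正是 ssh-copy-id 在远程主机上替你完成的工作:权限过于宽松时,sshd 在默认的 StrictModes 下会拒绝使用该密钥。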
+ +#### 步骤四: #### + +验证免密码 SSH 登录节点2: + + [root@node1 ~]# ssh root@192.168.0.10 + Last login: Sun Dec 13 14:03:20 2015 from www.ehowstuff.local + +我希望这篇文章能帮助到你,为你提供 SSH 免密码登录 CentOS / RHEL 的基本认知和快速指南。 + +-------------------------------------------------------------------------------- + +via: http://www.ehowstuff.com/ssh-login-without-password-centos/ + +作者:[skytech][a] +译者:[fw8899](https://github.com/fw8899) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.ehowstuff.com/author/skytech/ diff --git a/published/20151223 How to Use Glances to Monitor System on Ubuntu.md b/published/20151223 How to Use Glances to Monitor System on Ubuntu.md new file mode 100644 index 0000000000..08c03efb93 --- /dev/null +++ b/published/20151223 How to Use Glances to Monitor System on Ubuntu.md @@ -0,0 +1,109 @@ +如何在 Ubuntu 上使用 Glances 监控系统 +================================================================================ + +![](https://www.maketecheasier.com/assets/uploads/2015/12/glances_featured.jpg) + +Glances 是一个用于监控系统的跨平台、基于文本模式的命令行工具。它是用 Python 编写的,使用 `psutil` 库从系统获取信息。你可以用它来监控 CPU、平均负载、内存、网络接口、磁盘 I/O,文件系统空间利用率、挂载的设备、所有活动进程以及消耗资源最多的进程。Glances 有很多有趣的选项。它的主要特性之一是可以在配置文件中设置阀值(careful[小心]、warning[警告]、critical[致命]),然后它会用不同颜色显示信息以表明系统的瓶颈。 + +### Glances 的功能 + +- CPU 平均负载 +- 不同状态(如活动、休眠)进程的数量 +- 所有内存信息,如物理内存、交换空间、空闲内存 +- CPU 信息 +- 网络连接的上行/下行速度 +- 磁盘 I/O 读/写速度详细信息 +- 当前挂载设备的磁盘使用情况 +- 消耗资源最多的进程和他们的 CPU/内存使用情况 + +### 安装 Glances + +Glances 在 Ubuntu 的软件仓库中,所以安装很简单。执行下面的命令安装 Glances: + + sudo apt-get install glances + +(LCTT 译注:若安装后无法正常使用,可考虑使用 pip 安装/升级 glances:`sudo pip install --upgrade glances`) + +### Glances 使用方法 + +安装完成后,可以执行下面的命令启动 Glances: + + glances + +你将看到类似下图的输出: + +![glances monitor system output](https://www.maketecheasier.com/assets/uploads/2015/12/glances_output1.png) + +要退出 Glances 终端,按 ESC 键或 `Ctrl + C`。 + +默认情况下,时间间隔(LCTT 译注:显示数据刷新的时间间隔)是 1s,不过你可以在从终端启动 Glances 时自定义时间间隔。 + 
+要把时间间隔设为 5s,执行下面的命令: + + glances -t 5 + +### Glances 中不同颜色含义 + +Glances 中不同颜色的含义: + +- `绿色`:正常(OK) +- `蓝色`:小心(careful) +- `紫色`:警告(warning) +- `红色`:致命(critical) + +默认设置下,Glances 的阀值设置是:careful=50,warning=70,critical=90。你可以通过 “/etc/glances/” 目录下的默认配置文件 glances.conf 来自定义这些阀值。 + +### Glances 的选项 + +Glances 提供了很多快捷键,可以在它运行时用来查找输出信息。 + +下面是一些常用的热键列表: + +- `m` : 按内存占用排序进程 +- `p` : 按进程名称排序进程 +- `c` : 按 CPU 占用率排序进程 +- `i` : 按 I/O 频率排序进程 +- `a` : 自动排序进程 +- `d` : 显示/隐藏磁盘 I/O 统计信息 +- `f` : 显示/隐藏文件系统统计信息 +- `s` : 显示/隐藏传感器统计信息 +- `y` : 显示/隐藏硬盘温度统计信息 +- `l` : 显示/隐藏日志 +- `n` : 显示/隐藏网络统计信息 +- `x` : 删除警告和严重日志 +- `h` : 显示/隐藏帮助界面 +- `q` : 退出 +- `w` : 删除警告记录 + +### 使用 Glances 监控远程系统 + +你也可以使用 Glances 监控远程系统。要在远程系统上使用它,使用下面的命令: + + glances -s + +你会看到类似下面的输出: + +![glances monitor remote system server](https://www.maketecheasier.com/assets/uploads/2015/12/glances_server.png) + +如你所见,Glances 运行在 61209 端口。 + +现在,到远程机器上执行下面的命令以连接到指定 IP 地址的 Glances 服务器上。假设 192.168.1.10 是你的 Glances 服务器 IP 地址。 + + glances -c -P 192.168.1.10 + +### 结论 + +对于每个 Linux 系统管理员来说,Glances 都是一个非常有用的工具。使用它,你可以轻松、高效地监控 Linux 系统。如果你有什么问题,自由地评论吧。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/glances-monitor-system-ubuntu/ + +作者:[Hitesh Jethva][a] +译者:[bianjp](https://github.com/bianjp) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/hiteshjethva/ + diff --git a/published/20151223 Monitor Linux System Performance Using Nmon.md b/published/20151223 Monitor Linux System Performance Using Nmon.md new file mode 100644 index 0000000000..26f3f3f1e2 --- /dev/null +++ b/published/20151223 Monitor Linux System Performance Using Nmon.md @@ -0,0 +1,87 @@ +使用 Nmon 监控 Linux 的系统性能 +================================================================================ +Nmon(得名于 Nigel 的监控器)是IBM的员工 Nigel Griffiths 为 AIX 和 Linux 
系统开发的一款计算机性能系统监控工具。Nmon 可以把操作系统的统计数据展示在屏幕上或者存储到一份数据文件里,来帮助了解计算机资源的使用情况、调整方向和系统瓶颈。这个系统基准测试工具只需要使用一条命令就能得到大量重要的性能数据。使用 Nmon 可以很轻松的监控系统的 CPU、内存、网络、硬盘、文件系统、NFS、高耗进程、资源和 IBM Power 系统的微分区的信息。 + +### Nmon 安装 ### + +Nmon 默认是存在于 Ubuntu 的仓库中的。你可以通过下面的命令安装 Nmon: + + sudo apt-get install nmon + +### 怎么使用Nmon来监控Linux的性能 ### + +安装完成后,通过在终端输入`nmon` 命令来启动 Nmon + + nmon + +你会看到下面的输出: + +![nmon-output](https://www.maketecheasier.com/assets/uploads/2015/12/nmon-output.png) + +从上面的截图可以看到 nmon 命令行工具完全是交互式运行的,你可以使用快捷键来轻松查看对应的统计数据。你可以使用下面的 nmon 快捷键来显示不同的系统统计数据: + +- `q` : 停止并退出 Nmon +- `h` : 查看帮助 +- `c` : 查看 CPU 统计数据 +- `m` : 查看内存统计数据 +- `d` : 查看硬盘统计数据 +- `k` : 查看内核统计数据 +- `n` : 查看网络统计数据 +- `N` : 查看 NFS 统计数据 +- `j` : 查看文件系统统计数据 +- `t` : 查看高耗进程 +- `V` : 查看虚拟内存统计数据 +- `v` : 详细模式 + +### 核查 CPU 处理器 ### + +如果你想收集关于 CPU 性能相关的统计数据,你应该按下键盘上的`c`键,之后你将会看到下面的输出: + +![nmon_cpu_output](https://www.maketecheasier.com/assets/uploads/2015/12/nmon_cpu_output.png) + +### 核查高耗进程统计数据 ### + +如果想收集系统正在运行的高耗进程的统计数据,按键盘上的`t`键,之后你将会看到下面的输出: + +![nmon_process_output](https://www.maketecheasier.com/assets/uploads/2015/12/nmon_process_output.jpg) + +### 核查网络统计数据 ### + +如果想收集 Linux 系统的网络统计数据,按下`n`键,你将会看到下面输出: + +![n_network_output](https://www.maketecheasier.com/assets/uploads/2015/12/nmon_network_output.png) + +### 硬盘 I/O 图表 ### + +使用`d` 键获取硬盘相关的信息,你会看到下面输出: + +![nmon_disk_output](https://www.maketecheasier.com/assets/uploads/2015/12/nmon_disk_output.png) + +### 核查内核信息 ### + +Nmon 一个非常重要的快捷键是`k`键,用来显示系统内核相关的概要信息。按下`k`键之后,会看到下面输出: + +![nmon_kernel_output](https://www.maketecheasier.com/assets/uploads/2015/12/nmon_kernel_output.png) + +### 获取系统信息 ### + +对每个系统管理员来说一个非常有用的快捷键是`r`键,可以用来显示计算机的系统结构、操作系统版本号和 CPU 等不同资源的信息。按下`r`键之后会看到下面输出: + +![nmon_system_output](https://www.maketecheasier.com/assets/uploads/2015/12/nmon_system_output.png) + +### 总结 ### + +还有许多其他的工具做的和 Nmon 同样的工作,不过 Nmon 对一个 Linux 新手来说还是很友好的。如果你有什么问题,尽管评论。 + +-------------------------------------------------------------------------------- + +via: 
https://www.maketecheasier.com/monitor-linux-system-performance/ + +作者:[Hitesh Jethva][a] +译者:[sonofelice](https://github.com/sonofelice) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/hiteshjethva/ + diff --git a/published/20151225 What are the best plugins to increase productivity on Emacs.md b/published/20151225 What are the best plugins to increase productivity on Emacs.md new file mode 100644 index 0000000000..8e063d3693 --- /dev/null +++ b/published/20151225 What are the best plugins to increase productivity on Emacs.md @@ -0,0 +1,79 @@ +提升 emacs 生产力的十大最佳插件 +================================================================================ + +一年前的这个时候,我想要寻找[将 Vim 打造成一个成熟的全功能的 IDE][1] 的最好插件。有趣的是,那篇文章的很多评论提到了 Emacs 已经大部分有了这些内置插件,已经是一个很棒的 IDE 了。尽管我对 Emacs 的难以置信的多样化表示赞同,它依旧不是一个可以开箱即用的高级编辑器。还好,其庞大的插件库可以解决这个问题。但在过多的选择中,有时很难弄清该如何入手。因此,现在让我试着收集一个不可或缺的插件的简短列表,来提升你使用 Emacs 时的工作效率。 虽然我主要侧重于与编程相关的生产力提升,但是这些插件对所有人或不同用途都是有用的。 + +### 1. Ido-mode ### + +![](https://c2.staticflickr.com/6/5718/23311895573_c1fb34337c_c.jpg) + +Ido 或许是对新手最有用的一个插件,Ido 的意思是交互式工作(interactively do)。它取代了大部分的用花哨字符匹配菜单的枯燥提示。好比说,它用列出了当前目录所有文件的列表来取代了常规的打开文件提示符。输入一些字符,Ido 将尝试匹配最合适的文件。它通过可视化让你的操作变得更容易,这也是一个快速遍历所有文件都有相同前缀的文件夹的方法。 + +### 2. Smex ### + +![](https://c2.staticflickr.com/2/1517/23310442314_2a22a60c34_c.jpg) + +它不算最著名的一个、但却是一个替代 Ido-mode 的好选择:Smex 可以优雅的替代普通的`M-x`提示符,灵感大部分来自于 Ido-mode。它也给调用`M-x`后输入的命令带来了同样的交互搜索能力。它简单而有效,是一个为常用操作提升效率的最好方法。 + +### 3. Auto Complete ### + +![](https://c2.staticflickr.com/6/5794/23643004900_3042f77952_c.jpg) + +知道这个插件的存在之前,我在 Emacs 里面有一半的时间花在敲击 `M-/` 来补完单词上。现在,我有一个漂亮的弹出菜单可以为我做自动补全。无须多说,我们都需要它。 + +### 4. 
YASnippet ### + +![](https://c2.staticflickr.com/2/1688/23830403072_0d8df6ef4c_b.jpg) + +这是真正的程序员必备利器。总有一些代码片段会让我们觉得我们一辈子都在写它。对我来说,就是调试 PHP 时不断输入的 `var_dump(...);exit;`。经过一段时间一遍又一遍的输入`var_dump(...);exit;`,我觉得我可以预先把其做成录制好的、方便用到的代码片段。使用 YASnippets,可以很容易导入代码片段文件或者自己做个。之后,只要按下一个 tab 键,就可以将一个小的关键词扩展成一大段预先写好的代码,然后可以很方便地在里面修改。 + +### 5. Org-mode ### + +![](https://c2.staticflickr.com/6/5687/23570808789_d683c949e4.jpg) + +免责声明,我最近才开始使用 Org-mode,但它已经深深的吸引了我。从我看过数以百计的文章来说,Org-mode 可以改变你的生活。它背后的想法很简单:它是一种用普通文本做简单备注的模式,可以很容易地在任务列表和各种数据中转来转去,并进行一些比如按优先级或到期日期的过滤,或设置一个重复日期。然而,虽然思路简单,但你可以做到很多,用各种方法用于各种用途。与其去看一个长长的介绍,我觉得你可以去读读[现有教程][2],有很多视频可以看,自己去体验一下 Org-mode 是多么强大。 + +### 6. Helm ### + +![](https://c2.staticflickr.com/2/1489/23310442334_5e6db22b79_c.jpg) + +一些使用者喜欢它,但是其他人没有这么大的使用热情。我是后者的一部分。但在拥有这样一个庞大的追随者的情况下,是不能不提到它的。Helm 旨在完全变换你的 Emacs 使用体验。简单来说,Helm 是一个在 Emacs 中帮助你快速找到一个文件或命令的框架。根据你的输入,它将尝试使用词语自动完成来引导你将大脑的念头变为行动。起初感觉有点奇怪,但对一些人来说,Helm 本身就是一个信仰。虽然我不是 Helm 的粉丝,我欣赏 helm-occur 这一个伟大的工具可以在一个大文档搜索字符串并且在一个单独的缓冲区显示所有匹配结果,以便很容易在它们之间跳转。如果你正在寻找一个快速演示来了解 Helm 能做什么,我推荐[这篇文章][3]。 + +### 7. ace-jump-mode ### + +![](https://c2.staticflickr.com/2/1710/23856168871_6df1faa565_c.jpg) + +这是另一个有一大群追随者的插件,我正在试图成为 ace-jump-mode 的粉丝。掌握这个插件,你会体验到超越鼠标感受。简单描述一下,通过你选择的快捷方式触发 ace-jump-mode 后,你会被提示输入字符。输入一个字符,所有以该字符开头的单词中的那个字符就会替换成一个唯一字符并被高亮。输入一个屏幕上的高亮字符,你的光标会直接跳转到高亮显示的那个词。我不得不承认,这让我使用它时有点反应不过来,但是,一旦你掌握它,它将显著提升你在一个文档里的移动速度。(LCTT 译注:用文字描述比较困难,如截图中,你输入的是一个“i”,然后屏幕中所有以“i”开头的单词中的那个“i”都被替换成了从 a 到 z 的字符,并高亮;你可以输入这些高亮的字符直接跳转到那个位置。) + +### 8. find-file-in-project ### + +![](https://c2.staticflickr.com/2/1492/23570808809_96ec8454a9_c.jpg) + +如果你喜欢 Sublime text 以及它可以用非常方便的`Ctrl-p`模糊搜索来打开一个项目中的任何文件的功能,你将会喜欢上 find-file-in-project (简称 ffip)的。使用设置指定了您的版本控制的根文件夹后,您可以轻松地调出一个很酷的文本条,通过快速扫描和搜索你的代码,来根据你输入的名称找到匹配的文件。我喜欢把它绑定到键盘上的 F6 键。如果你不知道整个目录从上到下的复杂结构,这很简单,而且非常易用。 + +### 9. 
Flymake ### + +![](https://c2.staticflickr.com/6/5708/23310442354_cbba657ed3.jpg) + +对 IDE 的爱好者来说,我认为语法检查器是 IDE 最强大的特性之一,它非常适合初学者和方便了那些疲惫的程序员。感谢 Flymake,Emacs 用户也可以享受到了语法检查器。因为我工作中用 PHP 很多,Flymake 就不需要任何额外的配置。当我写代码的时候,它会自动检查我的代码和高亮任何一个包含问题的行。对于编译语言,Flymake 将寻找一个用于检查你的代码的 Makefile。真神奇。 + +### 10. electric-pair ### + +最后,但并非最不重要,在我看来,electric-pair 是最简单但最强大的插件之一。它会自动关闭你输入的括号。它起初看起来并不是很有用,但相信我,在被寻找配对括号折磨几百次之后,你会很高兴有这么一个插件,可以确保你所有的表达式的括号都是一一对应的。 + +总结一下,Emacs 是一个奇妙的工具。这可不是一个令人惊讶的说法。试试这些插件,看着你的效率直线飙升吧。这个列表当然不是详尽的列表。如果你想贡献你的建议,请在评论中这样做。我自己一直在寻找新的插件来试着发现 Emacs 的新体验。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/best-plugins-to-increase-productivity-on-emacs.html + +作者:[Adrien Brochard][a] +译者:[zky001](https://github.com/zky001) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/adrien +[1]:http://xmodulo.com/turn-vim-full-fledged-ide.html +[2]:http://orgmode.org/worg/org-tutorials/ +[3]:http://tuhdo.github.io/helm-intro.html diff --git a/published/20151229 Grub 2--Heal your bootloader.md b/published/20151229 Grub 2--Heal your bootloader.md new file mode 100644 index 0000000000..ef246a203a --- /dev/null +++ b/published/20151229 Grub 2--Heal your bootloader.md @@ -0,0 +1,236 @@ +Grub 2:拯救你的 bootloader +================================================================================ + +**没有什么事情比 bootloader 坏掉更让气人的了。充分发挥 Grub 2 的作用,让 bootloader 安分工作吧。** + +为什么这么说? 
+ +- Grub 2 是最受欢迎的 bootloader ,几乎用在所有 Linux 发行版上。 +- bootloader 是一个至关重要的软件,但是非常容易损坏。 +- Grub 2 是兼具扩展性和灵活性的一款引导加载程序,提供了大量可定制选项。 + +Grub 2 是一款精彩而功能强大的软件。它不是 bootloader 界的一枝独秀,但却最受欢迎,几乎所有主要的桌面发行版都在使用它。 Grub 的工作有两个。首先,它用一个菜单展示计算机上所有已经安装的操作系统供你选择。其次,当你从启动菜单中选择了一个 Linux 操作系统, Grub 便加载这个 Linux 的内核。 + +你知道,如果使用 Linux ,你就离不开 bootloader 。然而它却是 Linux 发行版内部最鲜为人知的部分。在这篇文章里,我们将带你熟悉 Grub 2 一些著名的特性,强化你相关技能,使你在 bootloader 跑飞的时候能够自行处理。 + +Grub 2 最重要的部分是一堆文本文件和两个脚本文件。首先需要了解的是 `/etc/default/grub` 。这是一个文本文件,你可以在里面设置通用配置变量和 Grub 2 菜单(见下方 “常见用户设置” )的其它特性。 + +Grub 2 另一个重要的部分是 `/etc/grub.d` 文件夹。定义每个菜单项的所有脚本都放置在这里。这些脚本的名称必须有两位的数字前缀。其目的是,在构建 Grub 2 菜单时定义脚本的执行顺序以及相应菜单项的顺序。文件 `00_header` 首先被读取,负责解析 `/etc/default/grub` 配置文件。然后是 Linux 内核的菜单项,位于 `10_linux` 文件中。这个脚本在默认的 `/boot` 分区为每个内核创建一个正规菜单项和一个恢复菜单项。 + +紧接着的是为第三方应用所用的脚本,如 `30_os-prober` 和 `40_custom` 。 **os-prober** 脚本为内核和其它分区里的操作系统创建菜单项。它能识别安装的 Linux、 Windows、 BSD 以及 Mac OS X 。 如果你的硬盘布局比较独特,使得 **os-prober** 无法找到已经安装的发行版,你可以在 `40_custom` 文件(见下方 “添加自定义菜单项”)中添加菜单项。 + +**Grub** 2 不需要你手动维护你的启动选项的配置文件:取而代之的是使用 `grub2-mkconfig` 命令产生 `/boot/grub/grub.cfg` 文件。这个功能会解析 `/etc/grub.d` 目录中的脚本以及 `/etc/default/grub` 设置文件来定义你的设置情况。 + +### 图形化的引导修复 ### + +多亏了 Boot Repair 应用,只需要点击按钮,Grub 2 许许多多的问题都能轻易解决。这个漂亮小巧的应用有一个直观的用户界面,可以扫描并识别多种硬盘布局和分区方案,还能发现并正确识别安装在其中的操作系统。这个应用可以处理传统计算机里的主引导记录(Master Boot Record) (MBR),也可以处理新型 UEFI 计算机中的 GUID 分区表(GUID Partition Table)(GPT)。 + +Boot Repair 最简单的使用方式是安装到 Live Ubuntu 会话中。在一个 bootloader 损坏的机器上启动 Ubuntu Live 发行版,先通过添加它的 PPA 版本库来安装 Boot Repair ,命令如下: + + sudo add-apt-repository ppa:yannubuntu/boot-repair + +然后刷新版本库列表: + + sudo apt-get update + +安装应用,如下: + + sudo apt-get install -y boot-repair + +安装完毕后就启动应用。在显示它的界面(由一对按钮组成)之前将会扫描你的硬盘。根据工具的指示,只需按下 Recommended Repair(推荐的修复)按钮,即可修复大部分坏掉的 bootloader 。修复 bootloader 之后,这个工具会输出一个短小的 URL ,你应该把它记录下来。这个 URL 包含了硬盘详尽的信息:分区信息以及重要的 Grub 2 文件(如 `/etc/default/grub` 和 `/boot/grub/grub.cfg` )的内容。如果工具不能解决 bootloader 的问题,可以把你这个 URL 共享在你的发行版的论坛上,让其他人可以分析你的硬盘布局以便给你建议。 + 
+![](http://www.linuxvoice.com/wp-content/uploads/2015/10/boot-repair-large.jpg) + +*Boot Repair 也可以让你定制 Grub 2 的选项。* + +### Bootloader 急救 ### + +Grub 2 引导问题会让系统处于几种不同状态。屏幕(如你所想,本该显示 bootloader 菜单的地方)所展示的文本会指示出系统的当前状态。如果系统中止于 **grub>** 提示符,表明 Grub 2 模块已经被加载,但是找不到 **grub.cfg** 文件。当前是完全版的 Grub 2 命令行 shell,你可以通过多种方式解决此问题。如果你看到的是 **grub rescue>** 提示符,表明 bootloader 不能找到 Grub 2 模块或者找不到任何引导文件( boot files )。然而,如果你的屏幕只显示 ‘GRUB’ 一词,表明 bootloader 找不到通常位于主引导记录( Master Boot Record )里的最基本的信息。 + +你可以通过使用 live CD 或者在 Grub 2 shell 中修正此类错误。如果你够幸运, bootloader 出现了 **grub>** 提示符,你就能获得 Grub 2 shell 的支配权,来帮助你排错。 + +接下来几个命令工作在 **grub>** 和 **grub rescue>** 提示符下。 **set pager=1** 命令设置显示分页( pager ),防止文本在屏幕上一滚而过。你还可以使用 **ls** 命令列出 Grub 识别出的所有分区,如下: + + grub> ls + (hd0) (hd0,msdos5) (hd0,msdos6) (hd1,msdos1) + +如你所见,这个命令列出分区的同时一并列出了分区表方案(即 msdos)。 + +你还可以在每个分区上面使用 **ls** 来查找你的根文件系统: + + grub> ls (hd0,5)/ + lost+found/ var/ etc/ media/ bin/ initrd.gz + boot/ dev/ home/ selinux/ srv/ tmp/ vmlinuz + +你可以不写上分区名的 **msdos** 部分。同样,如果你忘记了尾部的斜杠( trailing slash )只输入 `ls (hd0,5)` ,那你将获得分区的信息,比如文件系统类型、总体大小和最后修改时间。如果你有多个分区,可以使用 `cat` 读取 `/etc/issue` 文件中的内容,来确定发行版,格式如 `cat (hd0,5)/etc/issue` 。 + +假设你在 **(hd0,5)** 中找到根文件系统,请确保它包含 `/boot/grub` 目录,以及你想引导进入的内核镜像,如 **vmlinuz-3.13.0-24-generic** 。此时输入以下命令: + + grub> set root=(hd0,5) + grub> linux /boot/vmlinuz-3.13.0-24-generic root=/dev/sda5 + grub> initrd /boot/initrd.img-3.13.0-24-generic + +第一个命令把 Grub 指向我们想引导进入的发行版所在的分区。接着第二个命令告知 Grub 内核镜像在分区中的位置,以及根文件系统的位置。最后一行设置虚拟文件系统( initial ramdisk )文件的位置。你可以使用 tab 补全功能补全内核名字和虚拟文件系统( initrd: initial ramdisk )的名字,节省时间和精力。 + +输入完毕,在下一个 **grub>** 提示符后输入 `boot` , Grub 将会引导进入指定的操作系统。 + +如果你在 **grub rescue>** 提示符下,情况会有些许不同。因为 bootloader 未能够找到并加载任何必需的模块,你需要手动添加这些模块: + + grub rescue> set root=(hd0,5) + grub rescue> insmod (hd0,5)/boot/grub/normal.mod + grub rescue> normal + grub> insmod linux + +如上所示,跟之前一样,使用 `ls` 命令列出所有分区之后,使用 `set` 命令标记起来。然后添加 **normal** 模块,此模块激活时将会恢复到标准 **grub>** 模式。如果 linux 
模块没加载,接下来的命令会进行添加。如果这个模块已经加载,你可以跟之前一样,把引导加载程序指向内核镜像和虚拟文件系统( initrd )文件,然后使用 `boot` 启动发行版,完美收官。 + +一旦成功启动发行版,别忘了为 Grub 重新产生新的配置文件,使用 + + grub-mkconfig -o /boot/grub/grub.cfg + +命令。你还需要往 MBR 里安装一份 bootloader 的拷贝,使用 + + sudo grub2-install /dev/sda + +命令。 + +![](http://www.linuxvoice.com/wp-content/uploads/2015/10/grub2-cfg-large.jpg) + +*想要禁用 `/etc/grub.d` 目录下的脚本,你只需移除其可执行位,比如使用 `chmod -x /etc/grub.d/20_memtest86+` 就能将 ‘Memory Test’ 选项从菜单中移除。* + +### Grub 2 和 UEFI ### + +在支持 UEFI 的机器(最近几年上市的机器大部分都是)调试坏掉的 Grub 2 增加了另一复杂的层次。恢复安装在 UEFI 机器上的 **Grub 2** 的和安装在非 UEFI 机器上的并没多大区别,只是新的固件处理方式不一样,从而导致了很多种恢复结果。 + +对于基于 UEFI 的系统,不要在 MBR 上安装任何东西。相反,你要在 EFI 系统分区(EFI System Partition)( ESP )里安装 Linux EFI bootloader,并且借助工具把它设置为 EFI 的默认启动程序,这个工具对于 Linux 用户是 `efibootmgr` ,对于 window 用户则是 `bcdedit` 。 + +照目前情况看,在安装任何与 Windows 8 兼容的主流桌面 Linux 发行版前,应该正确安装好 Grub 2。然而,如果 bootloader 损坏,你可以使用 live 发行版修复机器。在启动 live 介质之时,请确保是以 UEFI 模式启动。计算机每个可移动驱动器的启动菜单将会有两个: 一个普通的和一个以 EFI 标记的。使用后者会用到 **/sys/firmware/efi/** 文件中的 EFI 变量。 + +在 live 环境中,挂载教程前面所提的安装挂掉系统的根文件系统。除此之外,还需要挂载 ESP 分区。假设分区是 **/dev/sda1** ,你可以如下所示挂载: + + sudo mount /dev/sda1 /mnt/boot/efi + +接着在 chroot 到安装完毕的发行版前之前,使用 `modprobe efivars` 加载 **efivars** 模块。 + +在这里, Fedora 用户可以使用如下命令重新安装 bootloader + + yum reinstall grub2-efi shim + +但在此之前,需要使用 + + grub2-mkconfig -o /boot/grub2/grub.cfg + +来产生新的配置文件。 Ubuntu 用户则改用以下命令 + + apt-get install --reinstall grub-efi-amd64 + +一旦 bootloader 正确就位,退出 chroot ,卸载所有分区,重启到 Grub 2 菜单。 + +### 伙计,我的 Grub 哪去了? 
### + +Grub 2 最好的特性是可以随时重新安装。因此,当其它像 Windows 之类的系统用它们自己的 bootloader 替换后,导致 Grub 2 丢失,你可以使用 live 发行版,寥寥数步即可重装 Grub 。假设你在 `/dev/sda5` 安装了一个发行版,若要重装 Grub ,你只需首先使用以下命令为发行版创建一个挂载目录: + + sudo mkdir -p /mnt/distro + +然后挂载分区,如下: + + mount /dev/sda5 /mnt/distro + +接着就能重装 Grub 了,如下: + + grub2-install --root-directory=/mnt/distro /dev/sda + +这个命令会改写 `/dev/sda` 设备上的 MBR 信息,指向当前 Linux 系统,并重写一些 Grub 2 文件,如 **grubenv** 和 **device.map** 。 + +另一个问题常见于装有多个发行版的计算机上。当你安装了新的 Linux 发行版,它的 bootloader 应当要能找到所有已经安装的发行版。一旦不行,只要引导进入新安装的发行版,并运行 + + grub2-mkconfig + +在运行这个命令之前,请确保启动菜单中缺失的发行版的 root 分区已经挂载。如果你想添加的发行版有单独的 `/root` 和 `/home` 分区,在运行 `grub2-mkconfig` 之前,只需挂载包含 `/root` 的分区。 + +虽然 Grub 2 能够找到大部分发行版,但是在 Ubuntu 中尝试添加安装的 Fedora 系统需要额外的一个步骤。如果你以默认设置安装了 Fedora ,则发行版的安装器已经创建了 LVM 分区。此时你需要使用发行版的包管理系统安装 **lvm2** 驱动,如下 + + sudo apt-get install lvm2 + +才能使得 Grub 2 的 `os-prober` 脚本能够找到并将 Fedora 添加进启动菜单。 + +### 常见用户设置 ### + +Grub 2 有很多可配置变量。 这里有一些 `/etc/default/grub` 文件中你最可能会修改到的常见变量。 **GRUB_DEFAULT** 变量指定默认的启动项,可以设置为数字值,比如 0 ,表示第一个菜单项,或者设置为 “saved” ,将指向上一次启动时选中的菜单项。 **GRUB\_TIMEOUT** 变量指定启动默认菜单项之前的停留时间。 **GRUB\_CMDLINE\_LINUX** 列出了要传递给所有 Linux 菜单项的内核命令行参数。 + +如果 **GRUB\_DISABLE\_RECOVERY** 变量设置为 **true** ,那么将不生成恢复模式菜单项。这些菜单项会以单用户模式启动发行版,这种模式下允许你利用命令行工具修复系统。 **GRUB_GFXMODE** 变量同样有用,它指定了菜单上文本显示的分辨率,它可以设置为你的显卡所支持的任何数值。 + +![](http://www.linuxvoice.com/wp-content/uploads/2015/10/grub2-cli-large.jpg) + +*Grub 2 有个命令行模式,通过在 bootloader 菜单上按 C 进入。* + +### 彻底的修复 ### + +如果 `grub2-install` 命令不能正常运作,使得你无法引导进入 Linux ,你需要完整地重装以及重新配置 bootloader 。为此目的,需要用到强大的 **chroot** 功能将运行环境从 live CD 环境切换至我们想修复的 Linux 的安装位置。任何拥有 **chroot** 工具的 Linux live CD 都可以实现这个目的。不过需要确保 live 介质的系统架构和硬盘上系统的架构一致。因此当你希望 **chroot** 到 64 位系统,你必须使用 amd64 live 发行版。 + +启动进入 live 发行版之后,首先需要检查机器上的分区。使用 `fdisk -l` 列出磁盘上所有分区,记录你想修复的 Grub 2 系统所在的分区。 + +假设我们希望从安装在 `/dev/sda5` 中的发行版中恢复 bootloader 。启动终端使用如下命令挂载分区: + + sudo mount /dev/sda5 /mnt + +此时需要绑定(bind)Grub 2 bootloader 需要进入的目录,以便检测其它操作系统: + + $ sudo mount --bind /dev /mnt/dev + $ sudo mount --bind 
/dev/pts /mnt/dev/pts + $ sudo mount --bind /proc /mnt/proc + $ sudo mount --bind /sys /mnt/sys + +此时可以离开 live 环境进入安装在 **/dev/sda5** 分区中的发行版了,通过 **chroot** : + + $ sudo chroot /mnt /bin/bash + +现在可以安装、检测、以及升级 Grub 了,跟之前一样,使用 + + sudo grub2-install /dev/sda + +命令来重装 bootloader 。因为 **grub2-install** 命令不能创建**grub.cfg** 文件,需要手动创建,如下 + + sudo grub-mkconfig -o /boot/grub/grub.cfg + +这样应该就可以了。现在你就有了 Grub 2 的一份全新拷贝,罗列了机器上所有的操作系统和发行版。在重启电脑之前,你需要依次退出 chroot 系统,卸载所有分区,如下所示: + + $ exit + $ sudo umount /mnt/sys + $ sudo umount /mnt/proc + $ sudo umount /mnt/dev/pts + $ sudo umount /mnt/dev + $ sudo umount /mnt + +现在你可以安全地重启电脑了,而它应该会回退到 Grub 2 的控制之中,你已经修好了这个 bootloader。 + +### 添加自定义菜单项 ### + +如果希望往 bootloader 菜单里添加菜单项,你需要在 **40_custom** 文件里添加一个启动段( boot stanza )。例如,你可以使用它展示一个菜单项来启动安装在可移动 USB 驱动里的 Linux 发行版。假设你的 USB 驱动器是 **sdb1** ,并且 vmlinuz 内核镜像和虚拟文件系统( initrd )都位于根 (/)目录下,在 **40_custom** 文件中添加以下内容: + + menuentry “Linux on USB” { + set root=(hd1,1) + linux /vmlinuz root=/dev/sdb1 ro quiet splash + initrd /initrd.img + } + +相比使用设备和分区名,使用它们的 UUID 可以获得更精确结果,比如 + + set root=UUID=54f22dd7-eabe + +使用 + + sudo blkid + +来获得所有已连接的驱动器和分区的 UUID 。你还可以为你磁盘上没被 os-prober 脚本找到的发行版添加菜单项,只要你知道该发行版的安装位置以及其内核和虚拟文件系统( initrd )的位置即可。 + +-------------------------------------------------------------------------------- + +via: https://www.linuxvoice.com/grub-2-heal-your-bootloader/ + +作者:[Mayank Sharma][a] +译者:[soooogreen](https://github.com/soooogreen) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linuxvoice.com/author/mayank/ diff --git a/published/20160111 Open Source DJ Software Mixxx Version 2.0 Released.md b/published/20160111 Open Source DJ Software Mixxx Version 2.0 Released.md new file mode 100644 index 0000000000..972d52e035 --- /dev/null +++ b/published/20160111 Open Source DJ Software Mixxx Version 2.0 Released.md @@ -0,0 +1,75 @@ +开源 DJ 软件 Mixxx 2.0 版发布 
+================================================================================ +![](http://itsfoss.com/wp-content/uploads/2016/01/DJ-Software-Mixxx-2-Released.jpg) + +时隔三年,开源 DJ 混音软件 [Mixxx][1] 再度发布一个大的版本更新----Mixxx 2.0。 + +Mixxx 是一个跨平台的自由、开源的 DJ 混音软件,它几乎提供了当你想自己混音时需要的一切功能。Mixxx 近几年在专业人士以及业余爱好者中都很火。 + +甚至在 Mixxx 中你能使用你的 iTunes 音乐库。它的强有力的引擎使它支持多种文件格式。Mixxx 默认即支持超过 85 种 MIDI DJ 调节器以及少部分 HID 调节器。它也包含一个自动选项,可以让你在混音时休息一下。 + +Mixxx 的完整功能列表可以在[这里][2]找到。在查看完整列表之前,让我们看看最新版有何更新。 + +### Mixxx 2.0 更新 ### + +- 可动态调整大小的外观 +- 4 轨道混音并且和主轨道同步 +- 内置特效 +- 谐波混频(Harmonic Mixing)与音乐调性检测 +- RGB 音频波形 +- 4 个麦克风输入和 4 个音频输入,麦克风音量可调 +- 黑胶音源输入、输出 +- 支持自定义封面 +- 核心混音引擎改进 +- 更新的音乐库 +- 改进增强了 DJ 调节器 + +你可以在[这里][3]看到所有的新功能。 + +### 在基于 Ubuntu 的发行版中安装 Mixxx 2.0 ### + +Mixxx 提供了他们自己的 PPA 源,这使得在基于 Ubuntu 的发行版,如 Linux Mint、elementary OS、 Zorin OS 上安装 Mixxx 2.0 变得十分简单。 + +打开终端,并输入如下命令: + + sudo add-apt-repository ppa:mixxx/mixxx + sudo apt-get update + sudo apt-get install mixxx + +使用如下命令卸载 Mixxx: + + sudo apt-get remove mixxx + sudo add-apt-repository --remove ppa:mixxx/mixxx + +如果你已经在使用旧版本的 Mixxx,它将很快升级到 2.0 版。 + +### 在其他发行版中安装 Mixxx 2.0 ### + +Arch Linux: + + sudo pacman -S mixxx + +对于其他发行版,你还可以从源码编译安装 Mixxx。从下列地址下载源代码: + +- [源码地址][4] + +由于 Mixxx 是个跨平台的应用,你也可以下载它的 Windows 版或者 Mac OS 版,请访问 Mixxx 下载页面找到对应的下载链接: + +- [下载地址][5] + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/dj-mixxx-2/ + +作者:[Abhishek][a] +译者:[name1e5s](https://github.com/name1e5s) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://mixxx.org/ +[2]:http://mixxx.org/features/ +[3]:http://mixxx.org/whats-new-in-mixxx-2-0/ +[4]:http://downloads.mixxx.org/mixxx-2.0.0/mixxx-2.0.0-src.tar.gz +[5]:http://mixxx.org/download/ diff --git a/published/20160111 
Ubuntu-‘Spyware’-Will-Be-Disabled-In-Ubuntu-16.04-LTS.md new file mode 100644 index 0000000000..e1824fe0e6 --- /dev/null +++ b/published/20160111 Ubuntu-‘Spyware’-Will-Be-Disabled-In-Ubuntu-16.04-LTS.md @@ -0,0 +1,80 @@ +Ubuntu 里的“间谍软件”将在 Ubuntu 16.04 LTS 中被禁用 +================================================================================ + +出于用户隐私的考虑,Ubuntu 阉割了一个有争议的功能。 + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2013/09/as2.jpg) + +**Unity 中有争议的在线搜索功能将在今年四月份发布的 Ubuntu 16.04 LTS 中被默认禁用** + +用户在 Unity 7 的 Dash 搜索栏里将**只能搜索到本地文件、文件夹以及应用**。这样,用户输入的关键词将不会被发送到 Canonical 或任何第三方内容提供商的服务器里。 + +> “现在,Unity 的在线搜索在默认状况下是关闭的” + +在目前 ubuntu 的支持版本中,Dash 栏会将用户搜索的关键词发送到 Canonical 运营的远程服务器中。它发送这些数据以用于从50多家在线服务获取搜索结果,这些服务包括维基百科、YouTube 和 The Weather Channel 等。 + +我们可以选择去**系统设置 > 隐私控制**关闭这项功能。但是,一些开源社区针对的是默认打开这个事情。 + +### Ubuntu 在线搜索引发的争议 ### + +> “Richard Stallman 将这个功能描述为 ‘间谍软件’” + +早在2012年,在 Ubuntu 搜索中整合了来自亚马逊的内容之后,开源社区就表示为其用户的隐私感到担忧。在2013年,“Smart Scopes 服务”全面推出后,开源社区再度表示担忧. + +风波如此之大,以至于开源界大神 [Richard Stallman 都称 Ubuntu 为"间谍软件"][1]。 + +[电子前哨基金会 (EFF)][2]也在一系列博文中表达出对此的关注,并且建议 Canonical 将这个功能做成用户自由选择是否开启的功能。Privacy International 比其他的组织走的更远,对于 Ubuntu 的工作,他们给 Ubuntu 的缔造者发了一个“[老大哥奖][3]”。 + +[Canonical][4] 坚称 Unity 的在线搜索功能所收集的数据是匿名的以及“不可识别是来自哪个用户的”。 + +在[2013年 Canoical 发布的博文中][5]他们解释道:“**(我们)会使用户了解我们收集哪些信息以及哪些第三方服务商将会在他们搜索时从 Dash 栏中给出结果。我们只会收集能够提升用户体验的信息。**” + +### Ubuntu 开始严肃对待的用户数据隐私### + +Canonical 在给新安装的 Ubuntu 14.04 LTS 以及以上版本中禁用了来自亚马逊的产品搜索结果(尽管来自其他服务商的搜索结果仍然在出现,直到你关闭这个选项) + +在下一个LTS(长期支持)版,也就是 Ubuntu 16.04 中,Canonical 完全关闭了这个有争议的在线搜索功能,这个功能在用户安装完后就是关闭的。就如同 EFF 在2012年建议他们做的那样。 + +“你搜索的关键词将不会逃出你的计算机。” [Ubuntu 桌面主管 Will Cooke][6]解释道,“对于搜索结果的更精细的控制”和 Unity 8 所提供的“更有针对性的结果添加不到 Unity 7 里”。 + +这也就是“[Unity 7]的在线搜索功能将会退役”的原因。 + +这个变化也会降低对 Unity 7 的支持以及对 Canonical 基础设施的压力。Unity 提供的搜索结果越少,Canonical 就能把时间和工程师放到更加振奋人心的地方,比如更早的发布 Unity 8 桌面环境。 + +### 在 Ubuntu 16.04 中你需要自己开启在线搜索功能 ### + +![Privacy settings in Ubuntu let you opt in to seeing online 
results](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/privacy.jpg) + +*在 Ubuntu 隐私设置中你可以打开在线搜索功能* + +禁用 Ubuntu 桌面的在线搜索功能的决定将获得众多开源/自由软件社区的欢呼。但是并不是每一个人都对 Dash 提供的语义搜索功能反感,如果你认为你失去了在搜索时预览天气、查看新闻或其他来自 Dash 在线搜索提供的内容所带来的效率的话,你只需要简单地点几下鼠标就可以**再次打开这个功能**,定位到 Ubuntu 的**系统设置 > 隐私控制 > 搜索**然后将选项调至“**开启**”。 + +这个选项不会自动把亚马逊的产品信息加入到搜索结果中。如果你想看产品信息的话,需要打开第二个可选项“shopping lens”才能看到来自 Amazon (和 Skimlinks)的内容。 + +### 总结 ### + + +- 默认情况下,Ubuntu 16.04 LTS 的 Dash 栏将不会搜索到在线结果 +- 可以手动打开在线搜索 +- **系统设置 > 隐私控制 > 搜索**中的第二个可选项允许你看到亚马逊的产品信息 +- 这个变动只会影响新安装的系统。从老版本升级的将会保留用户的喜好 + +你同意这个决定吗?抑或是 Canonical 可能降低了新用户的体验?在评论中告诉我们。 + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2016/01/ubuntu-online-search-feature-disabled-16-04 + +作者:[Joey-Elijah Sneddon][a] +译者:[name1e5s](https://github.com/name1e5s) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:http://arstechnica.com/information-technology/2012/12/richard-stallman-calls-ubuntu-spyware-because-it-tracks-searches/?utm_source=omgubuntu +[2]:https://www.eff.org/deeplinks/2012/10/privacy-ubuntu-1210-amazon-ads-and-data-leaks?utm_source=omgubuntu +[3]:http://www.omgubuntu.co.uk/2013/10/ubuntu-wins-big-brother-austria-privacy-award +[4]:http://blog.canonical.com/2012/12/07/searching-in-the-dash-in-ubuntu-13-04/ +[5]:http://blog.canonical.com/2012/12/07/searching-in-the-dash-in-ubuntu-13-04/?utm_source=omgubuntu +[6]:http://www.whizzy.org/2015/12/online-searches-in-the-dash-to-be-off-by-default?utm_source=omgubuntu diff --git a/published/Learn with Linux--Learning to Type.md b/published/Learn with Linux--Learning to Type.md new file mode 100644 index 0000000000..01c2dbc70d --- /dev/null +++ b/published/Learn with Linux--Learning to Type.md @@ -0,0 +1,119 @@ +与 Linux 一起学习:学习打字 
+================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-featured.png) + +[与 Linux 一起学习][1]的所有文章: + +- [与 Linux 一起学习:学习打字][2] +- [与 Linux 一起学习:物理模拟][3] +- [与 Linux 一起学习:玩音乐][4] +- [与 Linux 一起学习:两款地理软件][5] +- [与 Linux 一起学习: 使用这些 Linux 应用来征服你的数学学习][6] + +Linux 提供大量的教学软件和工具,面向各个年级以及不同年龄段,提供大量学科的练习实践,其中大多数是可以与用户进行交互的。本“与 Linux 一起学习”系列就来介绍一些教学软件。 + +很多人都要打字,操作键盘已经成为他们的第二天性。但是这些人中有多少是依然使用两个手指头来快速地按键盘的?即使学校有教我们使用键盘的方法,我们也会慢慢地抛弃正确的打字姿势,养成只用两个大拇指玩键盘的习惯。(LCTT 译注:呃,你确认是拇指而不是食指?) + +下面要介绍的两款软件可以帮你掌控你的键盘,然后你就可以让你的手指跟上你的思维,这样你的思维就不会被打断了。当然,还有很多更炫更酷的软件可供选择,但本文所选的这两款是最简单、最容易上手的。 + +### TuxType (或者叫 TuxTyping) ### + +TuxType 是给小孩子玩的。在一些有趣的游戏中,小学生们可以通过完成一些简单的练习来 Get “双手打字以示清白”的新技能。 + +Debian 及其衍生版本(包含所有 Ubuntu 衍生版本)的标准软件仓库都有 TuxType,使用下面的命令安装: + + sudo apt-get install tuxtype + +软件开始时有一个简单的 Tux 界面和一段难听的 midi 音乐,幸运的是你可以通过右下角的喇叭按钮把声音调低了。(LCTT 译注:Tux 就是那只 Linux 吉祥物,Linus 说它的表情被设计成刚喝完啤酒后的满足感,见《Just For Fun》。) + +![learntotype-tuxtyping-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-main.jpg) + +最开始处的两个选项“Fish Cascade”和“Comet Zap”是打字游戏,当你开始游戏时,你就投入到了这个课程。 + +第3个选项为“Lessions”,提供40多个简单的课程,每个课程会增加一个字母让你来练习,练习过程中会给出一些提示,比如应该用哪个手指按键盘上的字母。 + +![learntotype-tuxtyping-exd1](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd1.jpg) + +![learntotype-tuxtyping-exd2](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd2.jpg) + +更高级点的,你可以练习输入句子。不知道为什么,句子练习被放在“Options”选项里。(LCTT 译注:句子练习第一句是“The quick brown fox jumps over the lazy dog”,包含了26个英文字母,可用于检测键盘是否坏键,也是练习英文打字的必备良药啊。) + +![learntotype-tuxtyping-phrase](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-phrase.jpg) + +这个游戏让玩家打出单词来帮助 Tux 吃到小鱼或者干掉掉下来的流星,训练速度和精确度。 + +![learntotype-tuxtyping-fish](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-fish.jpg) + 
+![learntotype-tuxtyping-zap](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-zap.jpg) + +除了练习有趣外,这些游戏还可以训练玩家的拼写、速度、手眼配合能力,因为你如果认真在玩的话,必须盯着屏幕,不看键盘打字。 + +### GNU typist (gtype) ### + +对于成年人或有打字经验的人来说,GNU Typist 可能更合适,它是一个 GNU 项目,基于控制台操作。 + +GNU Typist 也在大多数 Debian 衍生版本的软件库中,运行下面的命令来安装: + + sudo apt-get install gtype + +你估计不能在应用菜单里找到它,只能在终端界面上执行下面的命令来启动: + + gtype + +界面简单,没有废话,直接提供课程内容,玩家选择就是了。 + +![learntotype-gtype-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-main.png) + +课程直截了当,内容详细。 + +![learntotype-gtype-lesson](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-lesson.png) + +在交互练习的过程中,如果你输入错误,会将错误位置高亮显示。不会像其他漂亮界面分散你的注意力,你可以专注于练习。每个课程的右下角都有一组统计数据来展示你的表现,如果你犯了很多错误,就可能无法通过关卡了。 + +![learntotype-gtype-mistake](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-mistake.png) + +简单练习只需要你重复输入一些字符,而高阶练习需要你输入整个句子。 + +![learntotype-gtype-warmup](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmup.png) + +下图的错误已经超过 3%,错误率太高了,你得降低些。 + +![learntotype-gtype-warmupfail](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmupfail.png) + +一些训练用于完成特殊目标,比如“平衡键盘训练(LCTT 译注:感觉是用来练习手感的)”。 + +![learntotype-gtype-balanceddrill](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-balanceddrill.png) + +下图是速度练习。 + +![learntotype-gtype-speed-simple](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-simple.png) + +下图是要你输入一段经典文章。 + +![learntotype-gtype-speed-advanced](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-advanced.png) + +如果你想练习其他语种,操作一下命令行参数就行。 + +![learntotype-gtype-more-lessons](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-more-lessons.png) + +### 总结 ### + +如果你想练练自己的打字水平,Linux 上有很多软件给你用。本文介绍的两款软件界面简单但内容丰富,能满足绝大多数打字爱好者的需求。如果你正在使用、或者听说过其他的优秀打字练习软件,请在评论栏贴出来,让我们长长姿势。 + 
+-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/learn-to-type-in-linux/ + +作者:[Attila Orosz][a] +译者:[bazz2](https://github.com/bazz2) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:https://www.maketecheasier.com/series/learn-with-linux/ +[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ +[3]:https://www.maketecheasier.com/linux-physics-simulation/ +[4]:https://www.maketecheasier.com/linux-learning-music/ +[5]:https://www.maketecheasier.com/linux-geography-apps/ +[6]:https://linux.cn/article-6546-1.html \ No newline at end of file diff --git a/published/Learn with Linux--Physics Simulation.md b/published/Learn with Linux--Physics Simulation.md new file mode 100644 index 0000000000..452ec3a9eb --- /dev/null +++ b/published/Learn with Linux--Physics Simulation.md @@ -0,0 +1,112 @@ +与 Linux 一起学习:物理模拟 +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/physics-fetured.jpg) + +[与 Linux 一起学习][1]的所有文章: + +- [与 Linux 一起学习:学习打字][2] +- [与 Linux 一起学习:物理模拟][3] +- [与 Linux 一起学习:玩音乐][4] +- [与 Linux 一起学习:两款地理软件][5] +- [与 Linux 一起学习:掌握数学][6] + +Linux 提供大量的教学软件和工具,面向各个年级段以及不同年龄段,提供大量学科的练习实践,其中大多数是可以与用户进行交互的。本“与 Linux 一起学习”系列就来介绍一些教学软件。 + +物理是一个有趣的课题,证据就是任何物理课程都可以用具体的图片演示给你看。能看到物理变化过程是一个很妙的体验,特别是你不需要到教室就能体验到。Linux 上有很多很好的科学软件来为你提供这种美妙感觉,本篇文章只着重介绍其中几种。 + +### 1. 
Step ### + +[Step][7] 是一个交互型物理模拟器,属于 [KDEEdu][8](KDE 教育)项目的一部分。没人会比它的作者更了解它的作用。在项目官网主页上写着“[Step] 是这样玩的:你放点东西进来,添加一些力(地心引力或者弹簧),然后点击‘模拟(Simulate)’按钮,这款软件就会为你模拟这个物体在真实世界的物理定律影响下的运动状态。你可以改变物体或力的属性(允许在模拟过程中进行修改),然后观察不同属性下产生的现象。Step 可以让你从体验中学习物理!” + +Step 依赖 Qt 以及其他一些 KDE 所依赖的软件,正是由于像 KDEEdu 之类的项目存在,才使得 KDE 变得如此强大,当然,你可能需要忍受由此带来的庞大的桌面系统。 + +Debian 的源中包含了 step 软件,终端下运行以下命令安装: + + sudo apt-get install step + +在 KDE 环境下,它只需要很少的依赖,几秒钟就能安装完成。 + +Step 有个简单的交互界面,你进去后直接可以进行模拟操作。 + +![physics-step-main](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-main.png) + +你会发现所有物品在屏幕左边,包括不同的质点,空气,不同形状的物体,弹簧,以及不同的力(见区域1) 。如果你选中一个物体,屏幕右边会出现简短的描述信息(见区域2),以及你创造的世界的介绍(主要介绍这个世界中包含的物体)(见区域3),以及你当前选中的物体的属性(见区域4),以及你的操作历史(见区域5)。 + +![physics-step-parts](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-parts.png) + +一旦你放好了所有物体,点击下“模拟”按钮,可以看到物体与物体之间的相互作用。 + +![physics-step-simulate1](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate1.png) + +![physics-step-simulate2](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate2.png) + +![physics-step-simulate3](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate3.png) + +想要更多了解 Step,按 F1 键,KDE 帮助中心会显示出详细的软件操作手册。 + +### 2. 
Lightspeed ### + +Lightspeed 是一个简单的基于 GTK+ 和 OpenGL 的模拟器,可以模拟一个高速移动的物体被观测到的现象。这个模拟器的理论基础是爱因斯坦的狭义相对论,在 Lightspeed 的 [sourceforge 页面][9]上,他们这样介绍:当一个物体被加速到几千公里每秒,它就会表现得扭曲和褪色;当物体被不断加速到接近光速(299,792,458 m/s)时,这个现象会越来越明显,并且在不同方向观察这个物体的扭曲方式,会得到完全不一样的结果。 + +受到相对速度影响的现象如下(LCTT 译注:都可以从“光速不变”理论推导出来): + +- **洛伦兹收缩(The Lorentz contraction)** —— 物体看起来变短了 +- **多普勒红移/蓝移(The Doppler red/blue shift)** —— 物体的颜色变了 +- **前灯效应(The headlight effect)** —— 物体的明暗变化(LCTT 译注:当物体接近光速移动时,会在它前进的方向强烈地辐射光子,从这个角度看,物体会变得很亮,相反,从物体背后观察,会发现它很暗) +- **光行差效应(Optical aberration)** —— 物体扭曲变形了 + +Lightspeed 有 Debian 的源,执行下面的命令来安装: + + sudo apt-get install lightspeed + +用户界面非常简单,里边有一个物体(你可以从 sourceforge 下载更多形状的物体)沿着 x 轴运动(按下 A 键或在菜单栏 object 项目的 Animation 选项设置,物体就会开始运动)。 + +![physics-lightspeed](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed.png) + +你可以滑动右边的滑动条来控制物体移动的速度。 + +![physics-lightspeed-deform](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-deform.png) + +其他一些简单的控制器可以让你获得更多的视觉效果。 + +![physics-lightspeed-visual](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-visual.png) + +点击界面并拖动鼠标可以改变物体视角,在 Camera 菜单下可以修改背景颜色或者物体的图形模式,以及其他效果。 + +### 特别推荐: Physion ### + +Physion 是个非常有趣并且美观的物理模拟软件,比上面介绍的两款软件都好玩好看。 + +可以从它的[官网][10]下载: + +- [Linux][11] + +从他们放在 Youtube 上的视频来看,Physion 还是值得我们下载下来玩玩的。 + +注:youtube 视频 + + +你有其他 Linux 下的好玩的物理模拟、演示、教学软件吗?如果有,请在评论处分享给我们。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/linux-physics-simulation/ + +作者:[Attila Orosz][a] +译者:[bazz2](https://github.com/bazz2) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:https://www.maketecheasier.com/series/learn-with-linux/ +[2]:https://linux.cn/article-6902-1.html +[3]:https://www.maketecheasier.com/linux-physics-simulation/ 
+[4]:https://www.maketecheasier.com/linux-learning-music/ +[5]:https://www.maketecheasier.com/linux-geography-apps/ +[6]:https://linux.cn/article-6546-1.html +[7]:https://edu.kde.org/applications/all/step +[8]:https://edu.kde.org/ +[9]:http://lightspeed.sourceforge.net/ +[10]:http://www.physion.net/ +[11]:http://physion.net/en/downloads/linux/13-physion-linux-x8664/download \ No newline at end of file diff --git a/translated/tech/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md b/published/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md similarity index 50% rename from translated/tech/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md rename to published/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md index 54c4330ae2..404ad003fd 100644 --- a/translated/tech/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md +++ b/published/RHCE/Part 10 - Setting Up 'NTP (Network Time Protocol) Server' in RHEL or CentOS 7.md @@ -1,12 +1,13 @@ -第 10 部分:在 RHEL/CentOS 7 中设置 “NTP(网络时间协议) 服务器” +RHCE 系列(十):在 RHEL/CentOS 7 中设置 NTP(网络时间协议)服务器 ================================================================================ -网络时间协议 - NTP - 是运行在传输层 123 号端口允许计算机通过网络同步准确时间的协议。随着时间的流逝,计算机内部时间会出现漂移,这会导致时间不一致问题,尤其是对于服务器和客户端日志文件,或者你想要备份服务器资源或数据库。 + +网络时间协议 - NTP - 是运行在传输层 123 号端口的 UDP 协议,它允许计算机通过网络同步准确时间。随着时间的流逝,计算机内部时间会出现漂移,这会导致时间不一致问题,尤其是对于服务器和客户端日志文件,或者你想要复制服务器的资源或数据库。 ![在 CentOS 上安装 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Server-Install-in-CentOS.png) -在 CentOS 和 RHEL 7 上安装 NTP 服务器 +*在 CentOS 和 RHEL 7 上安装 NTP 服务器* -#### 要求: #### +#### 前置要求: #### - [CentOS 7 安装过程][1] - [RHEL 安装过程][2] @@ -17,62 +18,62 @@ - [在 CentOS/RHCE 7 上配置静态 IP][4] - [在 CentOS/RHEL 7 上停用并移除不需要的服务][5] -这篇指南会告诉你如何在 CentOS/RHCE 7 上安装和配置 NTP 服务器,并使用 NTP 公共时间服务器池列表中和你服务器地理位置最近的可用节点中同步时间。 +这篇指南会告诉你如何在 CentOS/RHCE 7 上安装和配置 NTP 
服务器,并使用 NTP 公共时间服务器池(NTP Public Pool Time Servers)列表中和你服务器地理位置最近的可用节点中同步时间。 #### 步骤一:安装和配置 NTP 守护进程 #### -1. 官方 CentOS /RHEL 7 库默认提供 NTP 服务器安装包,可以通过使用下面的命令安装。 +1、 官方 CentOS /RHEL 7 库默认提供 NTP 服务器安装包,可以通过使用下面的命令安装。 # yum install ntp ![在 CentOS 上安装 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/Install-NTP-in-CentOS.png) -安装 NTP 服务器 +*安装 NTP 服务器* -2. 安装完服务器之后,首先到官方 [NTP 公共时间服务器池][6],选择你服务器物理位置所在的洲,然后搜索你的国家位置,然后会出现 NTP 服务器列表。 +2、 安装完服务器之后,首先到官方 [NTP 公共时间服务器池(NTP Public Pool Time Servers)][6],选择你服务器物理位置所在的洲,然后搜索你的国家位置,然后会出现 NTP 服务器列表。 ![NTP 服务器池](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Pool-Server.png) -NTP 服务器池 +*NTP 服务器池* -3. 然后打开编辑 NTP 守护进程主要配置文件,从 pool.ntp.org 中注释掉默认的公共服务器列表并用类似下面截图提供给你国家的列表替换。 +3、 然后打开编辑 NTP 守护进程的主配置文件,注释掉来自 pool.ntp.org 项目的公共服务器默认列表,并用类似下面截图中提供给你所在国家的列表替换。(LCTT 译注:中国使用 0.cn.pool.ntp.org 等) ![在 CentOS 中配置 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/Configure-NTP-Server.png) -配置 NTP 服务器 +*配置 NTP 服务器* -4. 下一步,你需要允许客户端从你的网络中和这台服务器同步时间。为了做到这点,添加下面一行到 NTP 配置文件,其中限制语句控制允许哪些网络查询和同步时间 - 根据需要替换网络 IP。 +4、 下一步,你需要允许来自你的网络的客户端和这台服务器同步时间。为了做到这点,添加下面一行到 NTP 配置文件,其中 **restrict** 语句控制允许哪些网络查询和同步时间 - 请根据需要替换网络 IP。 restrict 192.168.1.0 netmask 255.255.255.0 nomodify notrap nomodify notrap 语句意味着不允许你的客户端配置服务器或者作为同步时间的节点。 -5. 如果你需要额外的信息用于错误处理,以防你的 NTP 守护进程出现问题,添加一个 logfile 语句,用于记录所有 NTP 服务器问题到一个指定的日志文件。 +5、 如果你需要用于错误处理的额外信息,以防你的 NTP 守护进程出现问题,添加一个 logfile 语句,用于记录所有 NTP 服务器问题到一个指定的日志文件。 logfile /var/log/ntp.log ![在 CentOS 中启用 NTP 日志](http://www.tecmint.com/wp-content/uploads/2014/09/Enable-NTP-Log.png) -启用 NTP 日志 +*启用 NTP 日志* -6. 你编辑完所有上面解释的配置并保存关闭 ntp.conf 文件后,你最终的配置看起来像下面的截图。 +6、 在你编辑完所有上面解释的配置并保存关闭 ntp.conf 文件后,你最终的配置看起来像下面的截图。 ![CentOS 中 NTP 服务器的配置](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Server-Configuration.png) -NTP 服务器配置 +*NTP 服务器配置* ### 步骤二:添加防火墙规则并启动 NTP 守护进程 ### -7. 
NTP 服务在传输层(第四层)使用 123 号 UDP 端口。它是针对限制可变延迟的影响特别设计的。要在 RHEL/CentOS 7 中开放这个端口,可以对 Firewalld 服务使用下面的命令。 +7、 NTP 服务使用 OSI 传输层(第四层)的 123 号 UDP 端口。它是为了避免可变延迟的影响所特别设计的。要在 RHEL/CentOS 7 中开放这个端口,可以对 Firewalld 服务使用下面的命令。 # firewall-cmd --add-service=ntp --permanent # firewall-cmd --reload ![在 Firewall 中开放 NTP 端口](http://www.tecmint.com/wp-content/uploads/2014/09/Open-NTP-Port.png) -在 Firewall 中开放 NTP 端口 +*在 Firewall 中开放 NTP 端口* -8. 你在防火墙中开放了 123 号端口之后,启动 NTP 服务器并确保系统范围内可用。用下面的命令管理服务。 +8、 你在防火墙中开放了 123 号端口之后,启动 NTP 服务器并确保系统范围内可用。用下面的命令管理服务。 # systemctl start ntpd # systemctl enable ntpd @@ -80,34 +81,34 @@ NTP 服务器配置 ![启动 NTP 服务](http://www.tecmint.com/wp-content/uploads/2014/09/Start-NTP-Service.png) -启动 NTP 服务 +*启动 NTP 服务* ### 步骤三:验证服务器时间同步 ### -9. 启动了 NTP 守护进程后,用几分钟等服务器和它的服务器池列表同步时间,然后运行下面的命令验证 NTP 节点同步状态和你的系统时间。 +9、 启动了 NTP 守护进程后,用几分钟等服务器和它的服务器池列表同步时间,然后运行下面的命令验证 NTP 节点同步状态和你的系统时间。 # ntpq -p # date -R ![验证 NTP 服务器时间](http://www.tecmint.com/wp-content/uploads/2014/09/Verify-NTP-Time-Sync.png) -验证 NTP 时间同步 +*验证 NTP 时间同步* -10. 如果你想查询或者和你选择的服务器池同步,你可以使用 ntpdate 命令,后面跟服务器名或服务器地址,类似下面建议的命令行事例。 +10、 如果你想查询或者和你选择的服务器池同步,你可以使用 ntpdate 命令,后面跟服务器名或服务器地址,类似下面建议的命令行示例。 # ntpdate -q 0.ro.pool.ntp.org 1.ro.pool.ntp.org ![同步 NTP 同步](http://www.tecmint.com/wp-content/uploads/2014/09/Synchronize-NTP-Time.png) -同步 NTP 时间 +*同步 NTP 时间* ### 步骤四:设置 Windows NTP 客户端 ### -11. 
如果你的 windows 机器不是域名控制器的一部分,你可以配置 Windows 和你的 NTP服务器同步时间。在任务栏右边 -> 时间 -> 更改日期和时间设置 -> 网络时间标签 -> 更改设置 -> 和一个网络时间服务器检查同步 -> 在 Server 空格输入服务器 IP 或 FQDN -> 马上更新 -> OK。 +11、 如果你的 windows 机器不是域名控制器的一部分,你可以配置 Windows 和你的 NTP服务器同步时间。在任务栏右边 -> 时间 -> 更改日期和时间设置 -> 网络时间标签 -> 更改设置 -> 和一个网络时间服务器检查同步 -> 在 Server 空格输入服务器 IP 或 FQDN -> 马上更新 -> OK。 ![和 NTP 同步 Windows 时间](http://www.tecmint.com/wp-content/uploads/2014/09/Synchronize-Windows-Time-with-NTP.png) -和 NTP 同步 Windows 时间 +*和 NTP 同步 Windows 时间* 就是这些。在你的网络中配置一个本地 NTP 服务器能确保你所有的服务器和客户端有相同的时间设置,以防出现网络连接失败,并且它们彼此都相互同步。 @@ -117,7 +118,7 @@ via: http://www.tecmint.com/install-ntp-server-in-centos/ 作者:[Matei Cezar][a] 译者:[ictlyh](http://motouxiaogui.cn/blog) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md b/published/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md similarity index 62% rename from translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md rename to published/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md index 5ff1f9fe65..574e9dc594 100644 --- a/translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md +++ b/published/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md @@ -1,11 +1,13 @@ -RHCE 系列: 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS +RHCE 系列(八):在 Apache 上使用网络安全服务(NSS)实现 HTTPS ================================================================================ -如果你是一个负责维护和确保 web 服务器安全的系统管理员,你不能不花费最大的精力确保服务器中处理和通过的数据任何时候都受到保护。 + +如果你是一个负责维护和确保 web 
服务器安全的系统管理员,你需要花费最大的精力确保服务器中处理和通过的数据任何时候都受到保护。 + ![使用 SSL/TLS 设置 Apache HTTPS](http://www.tecmint.com/wp-content/uploads/2015/09/Setup-Apache-SSL-TLS-Server.png) -RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS +*RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS* -为了在客户端和服务器之间提供更安全的连接,作为 HTTP 和 SSL(安全套接层)或者最近称为 TLS(传输层安全)的组合,产生了 HTTPS 协议。 +为了在客户端和服务器之间提供更安全的连接,作为 HTTP 和 SSL(Secure Sockets Layer,安全套接层)或者最近称为 TLS(Transport Layer Security,传输层安全)的组合,产生了 HTTPS 协议。 由于一些严重的安全漏洞,SSL 已经被更健壮的 TLS 替代。由于这个原因,在这篇文章中我们会解析如何通过 TLS 实现你 web 服务器和客户端之间的安全连接。 @@ -22,11 +24,11 @@ RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过 # firewall-cmd --permanent –-add-service=http # firewall-cmd --permanent –-add-service=https -然后安装一些必须软件包: +然后安装一些必需的软件包: # yum update && yum install openssl mod_nss crypto-utils -**重要**:请注意如果你想使用 OpenSSL 库而不是 NSS(网络安全服务)实现 TLS,你可以在上面的命令中用 mod\_ssl 替换 mod\_nss(使用哪一个取决于你,但在这篇文章中由于更加健壮我们会使用 NSS;例如,它支持最新的加密标准,比如 PKCS #11)。 +**重要**:请注意如果你想使用 OpenSSL 库而不是 NSS(Network Security Service,网络安全服务)实现 TLS,你可以在上面的命令中用 mod\_ssl 替换 mod\_nss(使用哪一个取决于你,但在这篇文章中我们会使用 NSS,因为它更加安全,比如说,它支持最新的加密标准,比如 PKCS #11)。 如果你使用 mod\_nss,首先要卸载 mod\_ssl,反之如此。 @@ -54,15 +56,15 @@ nss.conf – 配置文件 下一步,在 `/etc/httpd/conf.d/nss.conf` 配置文件中做以下更改: -1. 指定 NSS 数据库目录。你可以使用默认的目录或者新建一个。本文中我们使用默认的: +1、 指定 NSS 数据库目录。你可以使用默认的目录或者新建一个。本文中我们使用默认的: NSSCertificateDatabase /etc/httpd/alias -2. 通过保存密码到数据库目录中的 /etc/httpd/nss-db-password.conf 文件避免每次系统启动时要手动输入密码: +2、 通过保存密码到数据库目录中的 `/etc/httpd/nss-db-password.conf` 文件来避免每次系统启动时要手动输入密码: NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf -其中 /etc/httpd/nss-db-password.conf 只包含以下一行,其中 mypassword 是后面你为 NSS 数据库设置的密码: +其中 `/etc/httpd/nss-db-password.conf` 只包含以下一行,其中 mypassword 是后面你为 NSS 数据库设置的密码: internal:mypassword @@ -71,27 +73,27 @@ nss.conf – 配置文件 # chmod 640 /etc/httpd/nss-db-password.conf # chgrp apache /etc/httpd/nss-db-password.conf -3. 
由于 POODLE SSLv3 漏洞,红帽建议停用 SSL 和 TLSv1.0 之前所有版本的 TLS(更多信息可以查看[这里][2])。 +3、 由于 POODLE SSLv3 漏洞,红帽建议停用 SSL 和 TLSv1.0 之前所有版本的 TLS(更多信息可以查看[这里][2])。 确保 NSSProtocol 指令的每个实例都类似下面一样(如果你没有托管其它虚拟主机,很可能只有一条): NSSProtocol TLSv1.0,TLSv1.1 -4. 由于这是一个自签名证书,Apache 会拒绝重启,并不会识别为有效发行人。由于这个原因,对于这种特殊情况我们还需要添加: +4、 由于这是一个自签名证书,Apache 会拒绝重启,并不会识别为有效发行人。由于这个原因,对于这种特殊情况我们还需要添加: NSSEnforceValidCerts off -5. 虽然并不是严格要求,为 NSS 数据库设置一个密码同样很重要: +5、 虽然并不是严格要求,为 NSS 数据库设置一个密码同样很重要: # certutil -W -d /etc/httpd/alias ![为 NSS 数据库设置密码](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Password-for-NSS-Database.png) -为 NSS 数据库设置密码 +*为 NSS 数据库设置密码* ### 创建一个 Apache SSL 自签名证书 ### -下一步,我们会创建一个自签名证书为我们的客户机识别服务器(请注意这个方法对于生产环境并不是最好的选择;对于生产环境你应该考虑购买第三方可信证书机构验证的证书,例如 DigiCert)。 +下一步,我们会创建一个自签名证书来让我们的客户机可以识别服务器(请注意这个方法对于生产环境并不是最好的选择;对于生产环境你应该考虑购买第三方可信证书机构验证的证书,例如 DigiCert)。 我们用 genkey 命令为 box1 创建有效期为 365 天的 NSS 兼容证书。完成这一步后: @@ -101,19 +103,19 @@ nss.conf – 配置文件 ![创建 Apache SSL 密钥](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Apache-SSL-Key.png) -创建 Apache SSL 密钥 +*创建 Apache SSL 密钥* 你可以使用默认的密钥大小(2048),然后再次选择 Next: ![选择 Apache SSL 密钥大小](http://www.tecmint.com/wp-content/uploads/2015/09/Select-Apache-SSL-Key-Size.png) -选择 Apache SSL 密钥大小 +*选择 Apache SSL 密钥大小* 等待系统生成随机比特: ![生成随机密钥比特](http://www.tecmint.com/wp-content/uploads/2015/09/Generating-Random-Bits.png) -生成随机密钥比特 +*生成随机密钥比特* 为了加快速度,会提示你在控制台输入随机字符,正如下面的截图所示。请注意当没有从键盘接收到输入时进度条是如何停止的。然后,会让你选择: @@ -124,35 +126,35 @@ nss.conf – 配置文件 注:youtube 视频 -最后,会提示你输入之前设置的密码到 NSS 证书: +最后,会提示你输入之前给 NSS 证书设置的密码: # genkey --nss --days 365 box1 ![Apache NSS 证书密码](http://www.tecmint.com/wp-content/uploads/2015/09/Apache-NSS-Password.png) -Apache NSS 证书密码 +*Apache NSS 证书密码* -在任何时候你都可以用以下命令列出现有的证书: +需要的话,你可以用以下命令列出现有的证书: # certutil –L –d /etc/httpd/alias ![列出 Apache NSS 证书](http://www.tecmint.com/wp-content/uploads/2015/09/List-Apache-Certificates.png) -列出 Apache NSS 证书 +*列出 Apache NSS 证书* -然后通过名字删除(除非严格要求,用你自己的证书名称替换 box1): +然后通过名字删除(如果你真的需要删除的,用你自己的证书名称替换 box1): # 
certutil -d /etc/httpd/alias -D -n "box1" -如果你需要继续的话: +如果你需要继续进行的话,请继续阅读。 ### 测试 Apache SSL HTTPS 连接 ### -最后,是时候测试到我们服务器的安全连接了。当你用浏览器打开 https://,你会看到著名的信息 “This connection is untrusted”: +最后,是时候测试到我们服务器的安全连接了。当你用浏览器打开 https://\,你会看到著名的信息 “This connection is untrusted”: ![检查 Apache SSL 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Connection.png) -检查 Apache SSL 连接 +*检查 Apache SSL 连接* 在上面的情况中,你可以点击添加例外(Add Exception) 然后确认安全例外(Confirm Security Exception) - 但先不要这么做。让我们首先来看看证书看它的信息是否和我们之前输入的相符(如截图所示)。 @@ -160,37 +162,37 @@ Apache NSS 证书密码 ![确认 Apache SSL 证书详情](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Certificate-Details.png) -确认 Apache SSL 证书详情 +*确认 Apache SSL 证书详情* -现在你继续,确认例外(限于此次或永久),然后会通过 https 把你带到你 web 服务器的 DocumentRoot 目录,在这里你可以使用你浏览器自带的开发者工具检查连接详情: +现在你可以继续,确认例外(限于此次或永久),然后会通过 https 把你带到你 web 服务器的 DocumentRoot 目录,在这里你可以使用你浏览器自带的开发者工具检查连接详情: -在火狐浏览器中,你可以通过在屏幕中右击然后从上下文菜单中选择检查元素(Inspect Element)启动,尤其是通过网络选项卡: +在火狐浏览器中,你可以通过在屏幕中右击,然后从上下文菜单中选择检查元素(Inspect Element)启动开发者工具,尤其要看“网络”选项卡: ![检查 Apache HTTPS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Inspect-Apache-HTTPS-Connection.png) -检查 Apache HTTPS 连接 +*检查 Apache HTTPS 连接* 请注意这和之前显示的在验证过程中输入的信息一致。还有一种方式通过使用命令行工具测试连接: -左边(测试 SSLv3): +左图(测试 SSLv3): # openssl s_client -connect localhost:443 -ssl3 -右边(测试 TLS): +右图(测试 TLS): # openssl s_client -connect localhost:443 -tls1 ![测试 Apache SSL 和 TLS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Apache-SSL-and-TLS.png) -测试 Apache SSL 和 TLS 连接 +*测试 Apache SSL 和 TLS 连接* -参考上面的截图了解更相信信息。 +参考上面的截图了解更详细信息。 ### 总结 ### -我确信你已经知道,使用 HTTPS 会增加会在你站点中输入个人信息的访客的信任(从用户名和密码到任何商业/银行账户信息)。 +我想你已经知道,使用 HTTPS 会增加会在你站点中输入个人信息的访客的信任(从用户名和密码到任何商业/银行账户信息)。 -在那种情况下,你会希望获得由可信验证机构签名的证书,正如我们之前解释的(启用的步骤和发送 CSR 到 CA 然后获得签名证书的例子相同);另外的情况,就是像我们的例子中一样使用自签名证书。 +在那种情况下,你会希望获得由可信验证机构签名的证书,正如我们之前解释的(步骤和设置需要启用例外的证书的步骤相同,发送 CSR 到 CA 然后获得返回的签名证书);否则,就像我们的例子中一样使用自签名证书即可。 要获取更多关于使用 NSS 的详情,可以参考关于 [mod-nss][3] 的在线帮助。如果你有任何疑问或评论,请告诉我们。 @@ -200,11 
+202,11 @@ via: http://www.tecmint.com/create-apache-https-self-signed-certificate-using-ns 作者:[Gabriel Cánepa][a] 译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.tecmint.com/install-lamp-in-centos-7/ -[1]:http://www.tecmint.com/author/gacanepa/ +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://linux.cn/article-5789-1.html [2]:https://access.redhat.com/articles/1232123 [3]:https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html \ No newline at end of file diff --git a/translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md b/published/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md similarity index 55% rename from translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md rename to published/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md index ccc67dbb30..59d6d9de0c 100644 --- a/translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md +++ b/published/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md @@ -1,25 +1,25 @@ -第九部分 - 如果使用零客户端配置 Postfix 邮件服务器(SMTP) +RHCE 系列(九):如何使用无客户端配置 Postfix 邮件服务器(SMTP) ================================================================================ -尽管现在有很多在线联系方式,邮件仍然是一个人传递信息给远在世界尽头或办公室里坐在我们旁边的另一个人的有效方式。 +尽管现在有很多在线联系方式,电子邮件仍然是一个人传递信息给远在世界尽头或办公室里坐在我们旁边的另一个人的有效方式。 -下面的图描述了邮件从发送者发出直到信息到达接收者收件箱的传递过程。 +下面的图描述了电子邮件从发送者发出直到信息到达接收者收件箱的传递过程。 -![邮件如何工作](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png) +![电子邮件如何工作](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png) -邮件如何工作 +*电子邮件如何工作* -要使这成为可能,背后发生了好多事情。为了使邮件信息从一个客户端应用程序(例如 
[Thunderbird][1]、Outlook,或者网络邮件服务,例如 Gmail 或 Yahoo 邮件)到一个邮件服务器,并从其到目标服务器并最终到目标接收人,每个服务器上都必须有 SMTP(简单邮件传输协议)服务。 +要实现这一切,背后发生了好多事情。为了使电子邮件信息从一个客户端应用程序(例如 [Thunderbird][1]、Outlook,或者 web 邮件服务,例如 Gmail 或 Yahoo 邮件)投递到一个邮件服务器,并从其投递到目标服务器并最终到目标接收人,每个服务器上都必须有 SMTP(简单邮件传输协议)服务。 -这就是为什么我们要在这篇博文中介绍如何在 RHEL 7 中设置 SMTP 服务器,从中本地用户发送的邮件(甚至发送到本地用户)被转发到一个中央邮件服务器以便于访问。 +这就是为什么我们要在这篇博文中介绍如何在 RHEL 7 中设置 SMTP 服务器,从本地用户发送的邮件(甚至发送到另外一个本地用户)被转发(forward)到一个中央邮件服务器以便于访问。 -在实际需求中这称为零客户端安装。 +在这个考试的要求中这称为无客户端(null-client)安装。 -在我们的测试环境中将包括一个原始邮件服务器和一个中央服务器或中继主机。 +在我们的测试环境中将包括一个起源(originating)邮件服务器和一个中央服务器或中继主机(relayhost)。 - 原始邮件服务器: (主机名: box1.mydomain.com / IP: 192.168.0.18) - 中央邮件服务器: (主机名: mail.mydomain.com / IP: 192.168.0.20) +- 起源邮件服务器: (主机名: box1.mydomain.com / IP: 192.168.0.18) +- 中央邮件服务器: (主机名: mail.mydomain.com / IP: 192.168.0.20) -为了域名解析我们在两台机器中都会使用有名的 /etc/hosts 文件: +我们在两台机器中都会使用你熟知的 `/etc/hosts` 文件做名字解析: 192.168.0.18 box1.mydomain.com box1 192.168.0.20 mail.mydomain.com mail @@ -28,34 +28,29 @@ 首先,我们需要(在两台机器上): -**1. 安装 Postfix:** +**1、 安装 Postfix:** # yum update && yum install postfix -**2. 启动服务并启用开机自动启动:** +**2、 启动服务并启用开机自动启动:** # systemctl start postfix # systemctl enable postfix -**3. 允许邮件流量通过防火墙:** +**3、 允许邮件流量通过防火墙:** # firewall-cmd --permanent --add-service=smtp # firewall-cmd --add-service=smtp - ![在防火墙中开通邮件服务器端口](http://www.tecmint.com/wp-content/uploads/2015/09/Allow-Traffic-through-Firewall.png) -在防火墙中开通邮件服务器端口 +*在防火墙中开通邮件服务器端口* -**4. 
在 box1.mydomain.com 配置 Postfix** +**4、 在 box1.mydomain.com 配置 Postfix** -Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一个很大的文本,因为其中包含的注释解析了程序设置的目的。 +Postfix 的主要配置文件是 `/etc/postfix/main.cf`。这个文件本身是一个很大的文本文件,因为其中包含了解释程序设置的用途的注释。 -为了简洁,我们只显示了需要编辑的行(是的,在原始服务器中你需要保留 mydestination 为空;否则邮件会被保存到本地而不是我们实际想要的中央邮件服务器): - -**在 box1.mydomain.com 配置 Postfix** - ----------- +为了简洁,我们只显示了需要编辑的行(没错,在起源服务器中你需要保留 `mydestination` 为空;否则邮件会被存储到本地,而不是我们实际想要发往的中央邮件服务器): myhostname = box1.mydomain.com mydomain = mydomain.com @@ -64,11 +59,7 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一 mydestination = relayhost = 192.168.0.20 -**5. 在 mail.mydomain.com 配置 Postfix** - -** 在 mail.mydomain.com 配置 Postfix ** - ----------- +**5、 在 mail.mydomain.com 配置 Postfix** myhostname = mail.mydomain.com mydomain = mydomain.com @@ -83,23 +74,23 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一 ![设置 Postfix SELinux 权限](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Postfix-SELinux-Permission.png) -设置 Postfix SELinux 权限 +*设置 Postfix SELinux 权限* -上面的 SELinux 布尔值会允许 Postfix 在中央服务器写入邮件池。 +上面的 SELinux 布尔值会允许中央服务器上的 Postfix 可以写入邮件池(mail spool)。 -**6. 
在两台机子上重启服务以使更改生效:** +**6、 在两台机子上重启服务以使更改生效:** # systemctl restart postfix 如果 Postfix 没有正确启动,你可以使用下面的命令进行错误处理。 - # systemctl –l status postfix - # journalctl –xn - # postconf –n + # systemctl -l status postfix + # journalctl -xn + # postconf -n ### 测试 Postfix 邮件服务 ### -为了测试邮件服务器,你可以使用任何邮件用户代理(最常见的简称为 MUA)例如 [mail 或 mutt][2]。 +要测试邮件服务器,你可以使用任何邮件用户代理(Mail User Agent,常简称为 MUA),例如 [mail 或 mutt][2]。 由于我个人喜欢 mutt,我会在 box1 中使用它发送邮件给用户 tecmint,并把现有文件(mailbody.txt)作为信息内容: @@ -107,7 +98,7 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一 ![测试 Postfix 邮件服务器](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Postfix-Mail-Server.png) -测试 Postfix 邮件服务器 +*测试 Postfix 邮件服务器* 现在到中央邮件服务器(mail.mydomain.com)以 tecmint 用户登录,并检查是否收到了邮件: @@ -116,15 +107,15 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一 ![检查 Postfix 邮件服务器发送](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Postfix-Mail-Server-Delivery.png) -检查 Postfix 邮件服务器发送 +*检查 Postfix 邮件服务器发送* -如果没有收到邮件,检查 root 用户的邮件池查看警告或者错误提示。你也需要使用 [nmap 命令][3]确保两台服务器运行了 SMTP 服务,并在中央邮件服务器中 打开了 25 号端口: +如果没有收到邮件,检查 root 用户的邮件池看看是否有警告或者错误提示。你也许需要使用 [nmap 命令][3]确保两台服务器运行了 SMTP 服务,并在中央邮件服务器中打开了 25 号端口: # nmap -PN 192.168.0.20 ![Postfix 邮件服务器错误处理](http://www.tecmint.com/wp-content/uploads/2015/09/Troubleshoot-Postfix-Mail-Server.png) -Postfix 邮件服务器错误处理 +*Postfix 邮件服务器错误处理* ### 总结 ### @@ -134,7 +125,7 @@ Postfix 邮件服务器错误处理 - [在 CentOS/RHEL 07 上配置仅缓存的 DNS 服务器][4] -最后,我强烈建议你熟悉 Postfix 的配置文件(main.cf)和这个程序的帮助手册。如果有任何疑问,别犹豫,使用下面的评论框或者我们的论坛 Linuxsay.com 告诉我们吧,你会从世界各地的 Linux 高手中获得几乎及时的帮助。 +最后,我强烈建议你熟悉 Postfix 的配置文件(main.cf)和这个程序的帮助手册。如果有任何疑问,别犹豫,使用下面的评论框或者我们的论坛 Linuxsay.com 告诉我们吧,你会从世界各地的 Linux 高手中获得几乎是及时的帮助。 -------------------------------------------------------------------------------- @@ -142,7 +133,7 @@ via: http://www.tecmint.com/setup-postfix-mail-server-smtp-using-null-client-on- 作者:[Gabriel Cánepa][a] 译者:[ictlyh](https//www.mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/sign.md b/sign.md index 1c413aba40..960d450a96 100644 --- a/sign.md +++ b/sign.md @@ -19,4 +19,4 @@ via:来源链接 [6]: [7]: [8]: -[9]: \ No newline at end of file +[9]: diff --git a/sources/share/20150824 Great Open Source Collaborative Editing Tools.md b/sources/share/20150824 Great Open Source Collaborative Editing Tools.md deleted file mode 100644 index c4746bc482..0000000000 --- a/sources/share/20150824 Great Open Source Collaborative Editing Tools.md +++ /dev/null @@ -1,228 +0,0 @@ -Great Open Source Collaborative Editing Tools -================================================================================ -In a nutshell, collaborative writing is writing done by more than one person. There are benefits and risks of collaborative working. Some of the benefits include a more integrated / co-ordinated approach, better use of existing resources, and a stronger, united voice. For me, the greatest advantage is one of the most transparent. That's when I need to take colleagues' views. Sending files back and forth between colleagues is inefficient, causes unnecessary delays and leaves people (i.e. me) unhappy with the whole notion of collaboration. With good collaborative software, I can share notes, data and files, and use comments to share thoughts in real-time or asynchronously. Working together on documents, images, video, presentations, and tasks is made less of a chore. - -There are many ways to collaborate online, and it has never been easier. This article highlights my favourite open source tools to collaborate on documents in real time. - -Google Docs is an excellent productivity application with most of the features I need. It serves as a collaborative tool for editing documents in real time. Documents can be shared, opened, and edited by multiple users simultaneously and users can see character-by-character changes as other collaborators make edits. 
While Google Docs is free for individuals, it is not open source. - -Here is my take on the finest open source collaborative editors which help you focus on writing without interruption, yet work mutually with others. - ----------- - -### Hackpad ### - -![Hackpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Hackpad.png) - -Hackpad is an open source web-based realtime wiki, based on the open source EtherPad collaborative document editor. - -Hackpad allows users to share your docs realtime and it uses color coding to show which authors have contributed to which content. It also allows in line photos, checklists and can also be used for coding as it offers syntax highlighting. - -While Dropbox acquired Hackpad in April 2014, it is only this month that the software has been released under an open source license. It has been worth the wait. - -Features include: - -- Very rich set of functions, similar to those offered by wikis -- Take collaborative notes, share data and files, and use comments to share your thoughts in real-time or asynchronously -- Granular privacy permissions enable you to invite a single friend, a dozen teammates, or thousands of Twitter followers -- Intelligent execution -- Directly embed videos from popular video sharing sites -- Tables -- Syntax highlighting for most common programming languages including C, C#, CSS, CoffeeScript, Java, and HTML - -- Website: [hackpad.com][1] -- Source code: [github.com/dropbox/hackpad][2] -- Developer: [Contributors][3] -- License: Apache License, Version 2.0 -- Version Number: - - ----------- - -### Etherpad ### - -![Etherpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Etherpad.png) - -Etherpad is an open source web-based collaborative real-time editor, allowing authors to simultaneously edit a text document leave comments, and interact with others using an integrated chat. 
- -Etherpad is implemented in JavaScript, on top of the AppJet platform, with the real-time functionality achieved using Comet streaming. - -Features include: - -- Well designed spartan interface -- Simple text formatting features -- "Time slider" - explore the history of a pad -- Download documents in plain text, PDF, Microsoft Word, Open Document, and HTML -- Auto-saves the document at regular, short intervals -- Highly customizable -- Client side plugins extend the editor functionality -- Hundreds of plugins extend Etherpad including support for email notifications, pad management, authentication -- Accessibility enabled -- Interact with Pad contents in real time from within Node and from your CLI - -- Website: [etherpad.org][4] -- Source code: [github.com/ether/etherpad-lite][5] -- Developer: David Greenspan, Aaron Iba, J.D. Zamfiresc, Daniel Clemens, David Cole -- License: Apache License Version 2.0 -- Version Number: 1.5.7 - ----------- - -### Firepad ### - -![Firepad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Firepad.png) - -Firepad is an open source, collaborative text editor. It is designed to be embedded inside larger web applications with collaborative code editing added in only a few days. - -Firepad is a full-featured text editor, with capabilities like conflict resolution, cursor synchronization, user attribution, and user presence detection. It uses Firebase as a backend, and doesn't need any server-side code. It can be added to any web app. Firepad can use either the CodeMirror editor or the Ace editor to render documents, and its operational transform code borrows from ot.js. - -If you want to extend your web application capabilities by adding the simple document and code editor, Firepad is perfect. - -Firepad is used by several editors, including the Atlassian Stash Realtime Editor, Nitrous.IO, LiveMinutes, and Koding. 
- -Features include: - -- True collaborative editing -- Intelligent OT-based merging and conflict resolution -- Support for both rich text and code editing -- Cursor position synchronization -- Undo / redo -- Text highlighting -- User attribution -- Presence detection -- Version checkpointing -- Images -- Extend Firepad through its API -- Supports all modern browsers: Chrome, Safari, Opera 11+, IE8+, Firefox 3.6+ - -- Website: [www.firepad.io][6] -- Source code: [github.com/firebase/firepad][7] -- Developer: Michael Lehenbauer and the team at Firebase -- License: MIT -- Version Number: 1.1.1 - ----------- - -### OwnCloud Documents ### - -![ownCloud Documents in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-ownCloud.png) - -ownCloud Documents is an ownCloud app to work with office documents alone and/or collaboratively. It allows up to 5 individuals to collaborate editing .odt and .doc files in a web browser. - -ownCloud is a self-hosted file sync and share server. It provides access to your data through a web interface, sync clients or WebDAV while providing a platform to view, sync and share across devices easily. - -Features include: - -- Cooperative edit, with multiple users editing files simultaneously -- Document creation within ownCloud -- Document upload -- Share and edit files in the browser, and then share them inside ownCloud or through a public link -- ownCloud features like versioning, local syncing, encryption, undelete -- Seamless support for Microsoft Word documents by way of transparent conversion of file formats - -- Website: [owncloud.org][8] -- Source code: [github.com/owncloud/documents][9] -- Developer: OwnCloud Inc. -- License: AGPLv3 -- Version Number: 8.1.1 - ----------- - -### Gobby ### - -![Gobby in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Gobby.png) - -Gobby is a collaborative editor supporting multiple documents in one session and a multi-user chat. 
All users could work on the file simultaneously without the need to lock it. The parts the various users write are highlighted in different colours and it supports syntax highlighting of various programming and markup languages. - -Gobby allows multiple users to edit the same document together over the internet in real-time. It integrates well with the GNOME environment. It features a client-server architecture which supports multiple documents in one session, document synchronisation on request, password protection and an IRC-like chat for communication out of band. Users can choose a colour to highlight the text they have written in a document. - -A dedicated server called infinoted is also provided. - -Features include: - -- Full-fledged text editing capabilities including syntax highlighting using GtkSourceView -- Real-time, lock-free collaborative text editing through encrypted connections (including PFS) -- Integrated group chat -- Local group undo: Undo does not affect changes of remote users -- Shows cursors and selections of remote users -- Highlights text written by different users with different colors -- Syntax highlighting for most programming languages, auto indentation, configurable tab width -- Zeroconf support -- Encrypted data transfer including perfect forward secrecy (PFS) -- Sessions can be password-protected -- Sophisticated access control with Access Control Lists (ACLs) -- Highly configurable dedicated server -- Automatic saving of documents -- Advanced search and replace options -- Internationalisation -- Full Unicode support - -- Website: [gobby.github.io][10] -- Source code: [github.com/gobby][11] -- Developer: Armin Burgmeier, Philipp Kern and contributors -- License: GNU GPLv2+ and ISC -- Version Number: 0.5.0 - ----------- - -### OnlyOffice ### - -![OnlyOffice in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-OnlyOffice.png) - -ONLYOFFICE (formerly known as Teamlab Office) is a multifunctional cloud online 
office suite integrated with a CRM system, document and project management toolset, Gantt chart and email aggregator. - -It allows you to organize business tasks and milestones, store and share your corporate or personal documents, use social networking tools such as blogs and forums, as well as communicate with your team members via corporate IM. - -Manage documents, projects, team and customer relations in one place. OnlyOffice combines text, spreadsheet and presentation editors that include features similar to the Microsoft desktop editors (Word, Excel and PowerPoint), but also allow you to co-edit, comment and chat in real time. - -OnlyOffice is written in ASP.NET, based on the HTML5 Canvas element, and translated into 21 languages. - -Features include: - -- As powerful as a desktop editor when working with large documents, paging and zooming -- Document sharing in view / edit modes -- Document embedding -- Spreadsheet and presentation editors -- Co-editing -- Commenting -- Integrated chat -- Mobile applications -- Gantt charts -- Time management -- Access right management -- Invoicing system -- Calendar -- Integration with file storage systems: Google Drive, Box, OneDrive, Dropbox, OwnCloud -- Integration with CRM, email aggregator and project management module -- Mail server -- Mail aggregator -- Edit documents, spreadsheets and presentations of the most popular formats: DOC, DOCX, ODT, RTF, TXT, XLS, XLSX, ODS, CSV, PPTX, PPT, ODP - -- Website: [www.onlyoffice.com][12] -- Source code: [github.com/ONLYOFFICE/DocumentServer][13] -- Developer: Ascensio System SIA -- License: GNU GPL v3 -- Version Number: 7.7 - --------------------------------------------------------------------------------- - -via: http://www.linuxlinks.com/article/20150823085112605/CollaborativeEditing.html - -作者:Frazer Kline -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - 
-[1]:https://hackpad.com/ -[2]:https://github.com/dropbox/hackpad -[3]:https://github.com/dropbox/hackpad/blob/master/CONTRIBUTORS -[4]:http://etherpad.org/ -[5]:https://github.com/ether/etherpad-lite -[6]:http://www.firepad.io/ -[7]:https://github.com/firebase/firepad -[8]:https://owncloud.org/ -[9]:http://github.com/owncloud/documents/ -[10]:https://gobby.github.io/ -[11]:https://github.com/gobby -[12]:https://www.onlyoffice.com/free-edition.aspx -[13]:https://github.com/ONLYOFFICE/DocumentServer diff --git a/sources/share/20150901 5 best open source board games to play online.md b/sources/share/20150901 5 best open source board games to play online.md index c14fecc697..eee49289e0 100644 --- a/sources/share/20150901 5 best open source board games to play online.md +++ b/sources/share/20150901 5 best open source board games to play online.md @@ -1,3 +1,4 @@ +translating by tastynoodle 5 best open source board games to play online ================================================================================ I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. In my misspent youth, a group of friends and I gathered together to escape the horrors of the classroom and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy and how to make and break alliances, bring families and friends together, and impart valuable lessons.
diff --git a/sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md b/sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md deleted file mode 100644 index f9384d4635..0000000000 --- a/sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md +++ /dev/null @@ -1,605 +0,0 @@ - -translation by strugglingyouth -80 Linux Monitoring Tools for SysAdmins -================================================================================ -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-monitoring.jpg) - -The industry is hotting up at the moment, and there are more tools than you can shake a stick at. Here lies the most comprehensive list on the Internet (of Tools). Featuring over 80 ways to monitor your machines. Within this article we outline: - -- Command line tools -- Network related -- System related monitoring -- Log monitoring tools -- Infrastructure monitoring tools - -It’s hard work monitoring and debugging performance problems, but it’s easier with the right tools at the right time. Here are some tools you’ve probably heard of, some you probably haven’t – and when to use them: - -### Top 10 System Monitoring Tools ### - -#### 1. Top #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/top.jpg) - -This is a small tool which is pre-installed on many unix systems. When you want an overview of all the processes or threads running in the system, top is a good tool. You can order these processes by different criteria; the default criterion is CPU. - -#### 2. [htop][1] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/htop.jpg) - -Htop is essentially an enhanced version of top. It’s easier to sort by processes. It’s visually easier to understand and has built-in commands for common things you would like to do. Plus it’s fully interactive. - -#### 3.
[atop][2] #### - -Atop monitors all processes much like top and htop; unlike top and htop, however, it has daily logging of the processes for long-term analysis. It also shows resource consumption by all processes. It will also highlight resources that have reached a critical load. - -#### 4. [apachetop][3] #### - -Apachetop monitors the overall performance of your apache webserver. It’s largely based on mytop. It displays the current number of reads and writes and the overall number of requests processed. - -#### 5. [ftptop][4] #### - -ftptop gives you basic information about all the current ftp connections to your server, such as the total number of sessions, how many are uploading and downloading and who the client is. - -#### 6. [mytop][5] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mytop.jpg) - -mytop is a neat tool for monitoring threads and performance of mysql. It gives you a live look into the database and what queries it’s processing in real time. - -#### 7. [powertop][6] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/powertop.jpg) - -powertop helps you diagnose issues that have to do with power consumption and power management. It can also help you experiment with power management settings to achieve the most efficient settings for your server. You switch tabs with the tab key. - -#### 8. [iotop][7] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iotop.jpg) - -iotop checks the I/O usage information and gives you a top-like interface to it. It displays columns for read and write and each row represents a process. It also displays the percentage of time the process spent while swapping in and while waiting on I/O. - -### Network related monitoring ### - -#### 9.
[ntopng][8] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ntopng.jpg) - -ntopng is the next generation of ntop and the tool provides a graphical user interface via the browser for network monitoring. It can do things such as geolocate hosts, get network traffic, and show and analyze IP traffic distribution. - -#### 10. [iftop][9] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iftop.jpg) - -iftop is similar to top, but instead of mainly checking for cpu usage it listens to network traffic on selected network interfaces and displays a table of current usage. It can be handy for answering questions such as “Why on earth is my internet connection so slow?!”. - -#### 11. [jnettop][10] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/jnettop.jpg) - -jnettop visualises network traffic in much the same way as iftop does. It also supports customizable text output and a machine-friendly mode to support further analysis. - -#### 12. [bandwidthd][11] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bandwidthd.jpg) - -BandwidthD tracks usage of TCP/IP network subnets and visualises that in the browser by building an HTML page with graphs in PNG. There is a database-driven system that supports searching, filtering, multiple sensors and custom reports. - -#### 13. [EtherApe][12] #### - -EtherApe displays network traffic graphically: the more talkative a host, the bigger its node. It either captures live traffic or can read it from a tcpdump file. The display can also be refined using a network filter with pcap syntax. - -#### 14. [ethtool][13] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ethtool.jpg) - -ethtool is used for displaying and modifying some parameters of the network interface controllers. It can also be used to diagnose Ethernet devices and get more statistics from the devices. - -#### 15.
[NetHogs][14] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nethogs.jpg) - -NetHogs breaks down network traffic per protocol or per subnet. It then groups it by process. So if there’s a surge in network traffic you can fire up NetHogs and see which process is causing it. - -#### 16. [iptraf][15] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iptraf.jpg) - -iptraf gathers a variety of metrics such as TCP connection packet and byte counts, interface statistics and activity indicators, TCP/UDP traffic breakdowns and station packet and byte counts. - -#### 17. [ngrep][16] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ngrep.jpg) - -ngrep is grep but for the network layer. It’s pcap-aware and will allow you to specify extended regular or hexadecimal expressions to match against the data payloads of packets. - -#### 18. [MRTG][17] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mrtg.jpg) - -MRTG was originally developed to monitor router traffic, but now it’s able to monitor other network related things as well. It typically collects data every five minutes and then generates an HTML page. It also has the capability of sending warning emails. - -#### 19. [bmon][18] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bmon.jpg) - -Bmon monitors and helps you debug networks. It captures network related statistics and presents them in a human friendly way. You can also interact with bmon through curses or through scripting. - -#### 20. traceroute #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/traceroute.jpg) - -Traceroute is a built-in tool for displaying the route and measuring the delay of packets across a network. - -#### 21. [IPTState][19] #### - -IPTState allows you to watch where traffic that crosses your iptables is going and then sort it by different criteria as you please.
The tool also allows you to delete states from the table. - -#### 22. [darkstat][20] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/darkstat.jpg) - -Darkstat captures network traffic and calculates statistics about usage. The reports are served over a simple HTTP server and give you a nice graphical user interface for the graphs. - -#### 23. [vnStat][21] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vnstat.jpg) - -vnStat is a network traffic monitor that uses statistics provided by the kernel, which ensures light use of system resources. The gathered statistics persist through system reboots. It has color options for the artistic sysadmins. - -#### 24. netstat #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/netstat.jpg) - -Netstat is a built-in tool that displays TCP network connections, routing tables and a number of network interface statistics. It’s used to find problems in the network. - -#### 25. ss #### - -Instead of using netstat, however, it’s preferable to use ss. The ss command is capable of showing more information than netstat and is actually faster. If you want summary statistics you can use the command `ss -s`. - -#### 26. [nmap][22] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmap.jpg) - -Nmap allows you to scan your server for open ports or detect which OS is being used. But you could also use it for finding SQL injection vulnerabilities, network discovery and other means related to penetration testing. - -#### 27. [MTR][23] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mtr.jpg) - -MTR combines the functionality of traceroute and the ping tool into a single network diagnostic tool. When using the tool it will limit the number of hops individual packets have to travel while also listening to their expiry. It then repeats this every second. - -#### 28.
[Tcpdump][24] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/tcpdump.jpg) - -Tcpdump will output a description of the contents of the packets it just captured that match the expression you provided in the command. You can also save this data for further analysis. - -#### 29. [Justniffer][25] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/justniffer.jpg) - -Justniffer is a tcp packet sniffer. You can choose whether you would like to collect low-level or high-level data with this sniffer. It also allows you to generate logs in a customizable way. You could for instance mimic the access log that apache has. - -### System related monitoring ### - -#### 30. [nmon][26] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmon.jpg) - -nmon either outputs the data on screen or saves it in a comma-separated file. You can display CPU, memory, network, filesystems and top processes. The data can also be added to an RRD database for further analysis. - -#### 31. [conky][27] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cpulimit.jpg) - -Conky monitors a plethora of different OS stats. It has support for IMAP and POP3 and even support for many popular music players! For the handy person, you could extend it with your own scripts or programs using Lua. - -#### 32. [Glances][28] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/glances.jpg) - -Glances monitors your system and aims to present a maximum amount of information in a minimum amount of space. It has the capability to function in a client/server mode as well as monitoring remotely. It also has a web interface. - -#### 33. [saidar][29] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/saidar.jpg) - -Saidar is a very small tool that gives you basic information about your system resources.
It displays a full screen of the standard system resources. The emphasis for saidar is being as simple as possible. - -#### 34. [RRDtool][30] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/rrdtool.jpg) - -RRDtool is a tool developed to handle round-robin databases, or RRDs. RRDs aim to handle time-series data like CPU load, temperatures etc. This tool provides a way to extract RRD data in a graphical format. - -#### 35. [monit][31] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/monit.jpg) - -Monit has the capability of sending you alerts as well as restarting services if they run into trouble. It’s possible to perform any type of check you could write a script for with monit, and it has a web user interface to ease your eyes. - -#### 36. [Linux process explorer][32] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-process-monitor.jpg) - -Linux process explorer is akin to the activity monitor for OS X or the Windows equivalent. It aims to be more usable than top or ps. You can view each process and see how much memory or CPU it uses. - -#### 37. df #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/df.jpg) - -df is an abbreviation for disk free and is a pre-installed program on all unix systems used to display the amount of available disk space for filesystems which the user has access to. - -#### 38. [discus][33] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/discus.jpg) - -Discus is similar to df; however, it aims to improve on df by making it prettier, using fancy features such as colors, graphs and smart formatting of numbers. - -#### 39. [xosview][34] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/xosview.jpg) - -xosview is a classic system monitoring tool and it gives you a simple overview of all the different parts of the system, including IRQ.
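As a quick illustration of the disk-space checks that df performs, here is a minimal sketch (assuming a GNU coreutils `df`; the `--output` flag is GNU-specific, so a plain fallback is included):

```shell
# Show free space for all mounted filesystems in human-readable units
df -h

# Restrict the report to the filesystem holding /, printing only the
# filesystem type and percentage used; fall back to plain df -h if
# this df does not support --output
df -h --output=fstype,pcent / 2>/dev/null || df -h /
```

Tools like discus and xosview present roughly the same underlying numbers, just with friendlier formatting.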
- -#### 40. [Dstat][35] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dstat.jpg) - -Dstat aims to be a replacement for vmstat, iostat, netstat and ifstat. It allows you to view all of your system resources in real-time. The data can then be exported into csv. Most importantly dstat allows for plugins and could thus be extended into areas not yet known to mankind. - -#### 41. [Net-SNMP][36] #### - -SNMP is the ‘simple network management protocol’ and the Net-SNMP tool suite helps you collect accurate information about your servers using this protocol. - -#### 42. [incron][37] #### - -Incron allows you to monitor a directory tree and then take action on those changes. If you wanted to copy files to directory ‘b’ once new files appeared in directory ‘a’, that’s exactly what incron does. - -#### 43. [monitorix][38] #### - -Monitorix is a lightweight system monitoring tool. It helps you monitor a single machine and gives you a wealth of metrics. It also has a built-in HTTP server to view graphs and a reporting mechanism for all metrics. - -#### 44. vmstat #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vmstat.jpg) - -vmstat, or virtual memory statistics, is a small built-in tool that monitors and displays a summary of the memory in the machine. - -#### 45. uptime #### - -This small command quickly gives you information about how long the machine has been running, how many users are currently logged on and the system load average for the past 1, 5 and 15 minutes. - -#### 46. mpstat #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mpstat.jpg) - -mpstat is a built-in tool that monitors cpu usage. The most common invocation is `mpstat -P ALL`, which gives you the usage of all the cores. You can also get an interval update of the CPU usage. - -#### 47.
pmap #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pmap.jpg) - -pmap is a built-in tool that reports the memory map of a process. You can use this command to find out causes of memory bottlenecks. - -#### 48. ps #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ps.jpg) - -The ps command will give you an overview of all the current processes. You can easily select all processes using the command `ps -A`. - -#### 49. [sar][39] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sar.jpg) - -sar is a part of the sysstat package and helps you to collect, report and save different system metrics. With different commands it will give you CPU, memory and I/O usage among other things. - -#### 50. [collectl][40] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/collectl.jpg) - -Similar to sar, collectl collects performance metrics for your machine. By default it shows cpu, network and disk stats but it collects a lot more. The difference from sar is that collectl is able to deal with times below one second, it can be fed into a plotting tool directly, and collectl monitors processes more extensively. - -#### 51. [iostat][41] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iostat.jpg) - -iostat is also part of the sysstat package. This command is used for monitoring system input/output. The reports themselves can be used to change system configurations to better balance input/output load between hard drives in your machine. - -#### 52. free #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/free.jpg) - -This is a built-in command that displays the total amount of free and used physical memory on your machine. It also displays the buffers used by the kernel at that given moment. - -#### 53.
/Proc file system #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/procfile.jpg) - -The proc file system gives you a peek into kernel statistics. From these statistics you can get detailed information about the different hardware devices on your machine. Take a look at the [full list of the proc file statistics][42]. - -#### 54. [GKrellM][43] #### - -GKrellM is a GUI application that monitors the status of your hardware, such as CPU, main memory, hard disks, network interfaces and many other things. It can also monitor and launch a mail reader of your choice. - -#### 55. [Gnome system monitor][44] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/gnome-system-monitor.jpg) - -Gnome system monitor is a basic system monitoring tool that lets you view process dependencies in a tree view, kill or renice processes, and graph all server metrics. - -### Log monitoring tools ### - -#### 56. [GoAccess][45] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/goaccess.jpg) - -GoAccess is a real-time web log analyzer which analyzes the access log from apache, nginx or amazon cloudfront. It’s also possible to output the data into HTML, JSON or CSV. It will give you general statistics, top visitors, 404s, geolocation and many other things. - -#### 57. [Logwatch][46] #### - -Logwatch is a log analysis system. It parses through your system’s logs and creates a report analyzing the areas that you specify. It can give you daily reports with short digests of the activities taking place on your machine. - -#### 58. [Swatch][47] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/swatch.jpg) - -Much like Logwatch, Swatch also monitors your logs, but instead of giving reports it watches for regular expressions and notifies you via mail or the console when there is a match. It could be used for intruder detection, for example. - -#### 59.
[MultiTail][48] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/multitail.jpg) - -MultiTail helps you monitor logfiles in multiple windows. You can merge two or more of these logfiles into one. It will also use colors to display the logfiles for easier reading, with the help of regular expressions. - -#### System tools #### - -#### 60. [acct or psacct][49] #### - -acct or psacct (depending on if you use apt-get or yum) allows you to monitor all the commands a user executes inside the system, including CPU and memory time. Once installed you get that summary with the command ‘sa’. - -#### 61. [whowatch][50] #### - -Similar to acct, this tool monitors users on your system and allows you to see in real time what commands and processes they are using. It gives you a tree structure of all the processes so you can see exactly what’s happening. - -#### 62. [strace][51] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/strace.jpg) - -strace is used to diagnose, debug and monitor interactions between processes. The most common thing to do is to make strace print a list of system calls made by the program, which is useful if the program does not behave as expected. - -#### 63. [DTrace][52] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dtrace.jpg) - -DTrace is the big brother of strace. It dynamically patches live running instructions with instrumentation code. This allows you to do in-depth performance analysis and troubleshooting. However, it’s not for the faint of heart, as there is a 1,200-page book written on the topic. - -#### 64. [webmin][53] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/webmin.jpg) - -Webmin is a web-based system administration tool. It removes the need to manually edit unix configuration files and lets you manage the system remotely if need be. It has a couple of monitoring modules that you can attach to it.
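The strace usage described above — printing the system calls a program makes — can be sketched in two lines (assuming strace is installed; `true` stands in for whatever program you want to trace):

```shell
# Write every system call made by `true` to a trace file for later inspection
strace -o /tmp/true.trace true

# Or just summarise syscall counts and time spent per call with -c
strace -c true
```

The `-c` summary is often the better starting point: it shows at a glance where a misbehaving program spends its time before you dig into the full trace.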
- -#### 65. stat #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/stat.jpg) - -Stat is a built-in tool for displaying status information of files and file systems. It will give you information such as when a file was modified, accessed or changed. - -#### 66. ifconfig #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ifconfig.jpg) - -ifconfig is a built-in tool used to configure the network interfaces. Behind the scenes, network monitoring tools use ifconfig to set an interface into promiscuous mode to capture all packets. You can do it yourself with `ifconfig eth0 promisc` and return to normal mode with `ifconfig eth0 -promisc`. - -#### 67. [ulimit][54] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/unlimit.jpg) - -ulimit is a built-in tool that monitors system resources and keeps a limit so any of the monitored resources don’t go overboard. For instance, a fork bomb on a system where a properly configured ulimit is in place would be totally fine. - -#### 68. [cpulimit][55] #### - -CPUlimit is a small tool that monitors and then limits the CPU usage of a process. It’s particularly useful for making sure batch jobs don’t eat up too many CPU cycles. - -#### 69. lshw #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lshw.jpg) - -lshw is a small built-in tool that extracts detailed information about the hardware configuration of the machine. It can output everything from CPU version and speed to mainboard configuration. - -#### 70. w #### - -w is a built-in command that displays information about the users currently using the machine and their processes. - -#### 71. lsof #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lsof.jpg) - -lsof is a built-in tool that gives you a list of all open files and network connections.
From there you can narrow it down to files opened by processes, based on the process name, by a specific user, or perhaps kill all processes that belong to a specific user. - -### Infrastructure monitoring tools ### - -#### 72. Server Density #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/server-density-monitoring.png) - -Our [server monitoring tool][56]! It has a web interface that allows you to set alerts and view graphs for all system and network metrics. You can also set up monitoring of websites to see whether they are up or down. Server Density allows you to set permissions for users and you can extend your monitoring with our plugin infrastructure or API. The service already supports Nagios plugins. - -#### 73. [OpenNMS][57] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/opennms.jpg) - -OpenNMS has four main functional areas: event management and notifications; discovery and provisioning; service monitoring; and data collection. It’s designed to be customizable to work in a variety of network environments. - -#### 74. [SysUsage][58] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sysusage.jpg) - -SysUsage monitors your system continuously via Sar and other system commands. It also allows notifications to alert you once a threshold is reached. SysUsage itself can be run from a centralized place where all the collected statistics are also being stored. It has a web interface where you can view all the stats. - -#### 75. [brainypdm][59] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/brainypdm.jpg) - -brainypdm is a data management and monitoring tool that has the capability to gather data from nagios or another generic source to make graphs. It’s cross-platform, has custom graphs and is web based. - -#### 76.
[PCP][60] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pcp.jpg) - -PCP has the capability of collating metrics from multiple hosts and does so efficiently. It also has a plugin framework so you can make it collect specific metrics that are important to you. You can access graph data through either a web interface or a GUI. Good for monitoring large systems. - -#### 77. [KDE system guard][61] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/kdesystemguard.jpg) - -This tool is both a system monitor and task manager. You can view server metrics from several machines through the worksheet, and if a process needs to be killed or if you need to start a process it can be done within KDE system guard. - -#### 78. [Munin][62] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/munin.jpg) - -Munin is both a network and a system monitoring tool which offers alerts for when metrics go beyond a given threshold. It uses RRDtool to create the graphs and it has a web interface to display them. Its emphasis is on plug and play capabilities with a number of plugins available. - -#### 79. [Nagios][63] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nagios.jpg) - -Nagios is a system and network monitoring tool that helps you monitor your many servers. It has support for alerting when things go wrong. It also has many plugins written for the platform. - -#### 80. [Zenoss][64] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zenoss.jpg) - -Zenoss provides a web interface that allows you to monitor all system and network metrics. Moreover, it discovers network resources and changes in network configurations. It has alerts for you to take action on and it supports the Nagios plugins. - -#### 81.
[Cacti][65] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cacti.jpg) - -(And one for luck!) Cacti is a network graphing solution that uses RRDtool for data storage. It allows a user to poll services at predetermined intervals and graph the results. Cacti can be extended to monitor a source of your choice through shell scripts. - -#### 82. [Zabbix][66] #### - -![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zabbix-monitoring.png) - -Zabbix is an open source infrastructure monitoring solution. It can use most databases out there to store the monitoring statistics. The core is written in C and has a frontend in PHP. If you don’t like installing an agent, Zabbix might be an option for you. - -### Bonus section: ### - -Thanks for your suggestions. It’s an oversight on our part that we’ll have to go back through and renumber all the headings. In light of that, here’s a short section at the end for some of the Linux monitoring tools recommended by you: - -#### 83. [collectd][67] #### - -Collectd is a Unix daemon that collects all your monitoring statistics. It uses a modular design and plugins to fill in any niche monitoring. This way collectd stays as lightweight and customizable as possible. - -#### 84. [Observium][68] #### - -Observium is an auto-discovering network monitoring platform supporting a wide range of hardware platforms and operating systems. Observium focuses on providing a beautiful and powerful yet simple and intuitive interface to the health and status of your network. - -#### 85. Nload #### - -It’s a command line tool that monitors network throughput. It’s neat because it visualizes the incoming and outgoing traffic using two graphs and some additional useful data like the total amount of transferred data. You can install it with - - yum install nload - -or - - sudo apt-get install nload - -#### 84.
[SmokePing][69] #### - -SmokePing keeps track of the network latencies of your network and it visualises them too. There are a wide range of latency measurement plugins developed for SmokePing. If a GUI is important to you it’s there is an ongoing development to make that happen. - -#### 85. [MobaXterm][70] #### - -If you’re working in windows environment day in and day out. You may feel limited by the terminal Windows provides. MobaXterm comes to the rescue and allows you to use many of the terminal commands commonly found in Linux. Which will help you tremendously in your monitoring needs! - -#### 86. [Shinken monitoring][71] #### - -Shinken is a monitoring framework which is a total rewrite of Nagios in python. It aims to enhance flexibility and managing a large environment. While still keeping all your nagios configuration and plugins. - --------------------------------------------------------------------------------- - -via: https://blog.serverdensity.com/80-linux-monitoring-tools-know/ - -作者:[Jonathan Sundqvist][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - -[a]:https://www.serverdensity.com/ -[1]:http://hisham.hm/htop/ -[2]:http://www.atoptool.nl/ -[3]:https://github.com/JeremyJones/Apachetop -[4]:http://www.proftpd.org/docs/howto/Scoreboard.html -[5]:http://jeremy.zawodny.com/mysql/mytop/ -[6]:https://01.org/powertop -[7]:http://guichaz.free.fr/iotop/ -[8]:http://www.ntop.org/products/ntop/ -[9]:http://www.ex-parrot.com/pdw/iftop/ -[10]:http://jnettop.kubs.info/wiki/ -[11]:http://bandwidthd.sourceforge.net/ -[12]:http://etherape.sourceforge.net/ -[13]:https://www.kernel.org/pub/software/network/ethtool/ -[14]:http://nethogs.sourceforge.net/ -[15]:http://iptraf.seul.org/ -[16]:http://ngrep.sourceforge.net/ -[17]:http://oss.oetiker.ch/mrtg/ -[18]:https://github.com/tgraf/bmon/ -[19]:http://www.phildev.net/iptstate/index.shtml 
-[20]:https://unix4lyfe.org/darkstat/ -[21]:http://humdi.net/vnstat/ -[22]:http://nmap.org/ -[23]:http://www.bitwizard.nl/mtr/ -[24]:http://www.tcpdump.org/ -[25]:http://justniffer.sourceforge.net/ -[26]:http://nmon.sourceforge.net/pmwiki.php -[27]:http://conky.sourceforge.net/ -[28]:https://github.com/nicolargo/glances -[29]:https://packages.debian.org/sid/utils/saidar -[30]:http://oss.oetiker.ch/rrdtool/ -[31]:http://mmonit.com/monit -[32]:http://sourceforge.net/projects/procexp/ -[33]:http://packages.ubuntu.com/lucid/utils/discus -[34]:http://www.pogo.org.uk/~mark/xosview/ -[35]:http://dag.wiee.rs/home-made/dstat/ -[36]:http://www.net-snmp.org/ -[37]:http://inotify.aiken.cz/?section=incron&page=about&lang=en -[38]:http://www.monitorix.org/ -[39]:http://sebastien.godard.pagesperso-orange.fr/ -[40]:http://collectl.sourceforge.net/ -[41]:http://sebastien.godard.pagesperso-orange.fr/ -[42]:http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html -[43]:http://members.dslextreme.com/users/billw/gkrellm/gkrellm.html -[44]:http://freecode.com/projects/gnome-system-monitor -[45]:http://goaccess.io/ -[46]:http://sourceforge.net/projects/logwatch/ -[47]:http://sourceforge.net/projects/swatch/ -[48]:http://www.vanheusden.com/multitail/ -[49]:http://www.gnu.org/software/acct/ -[50]:http://whowatch.sourceforge.net/ -[51]:http://sourceforge.net/projects/strace/ -[52]:http://dtrace.org/blogs/about/ -[53]:http://www.webmin.com/ -[54]:http://ss64.com/bash/ulimit.html -[55]:https://github.com/opsengine/cpulimit -[56]:https://www.serverdensity.com/server-monitoring/ -[57]:http://www.opennms.org/ -[58]:http://sysusage.darold.net/ -[59]:http://sourceforge.net/projects/brainypdm/ -[60]:http://www.pcp.io/ -[61]:https://userbase.kde.org/KSysGuard -[62]:http://munin-monitoring.org/ -[63]:http://www.nagios.org/ -[64]:http://www.zenoss.com/ -[65]:http://www.cacti.net/ -[66]:http://www.zabbix.com/ -[67]:https://collectd.org/ -[68]:http://www.observium.org/ 
-[69]:http://oss.oetiker.ch/smokeping/ -[70]:http://mobaxterm.mobatek.net/ -[71]:http://www.shinken-monitoring.org/ diff --git a/sources/share/20151104 Optimize Web Delivery with these Open Source Tools.md b/sources/share/20151104 Optimize Web Delivery with these Open Source Tools.md deleted file mode 100644 index aaf8a7292d..0000000000 --- a/sources/share/20151104 Optimize Web Delivery with these Open Source Tools.md +++ /dev/null @@ -1,195 +0,0 @@ -Optimize Web Delivery with these Open Source Tools -================================================================================ -Web proxy software forwards HTTP requests without modifying the traffic in any way. It can be configured as a transparent proxy with no client-side configuration required. It can also be used as a reverse proxy front-end to websites; here the cache serves an unlimited number of clients for one or more web servers. - -Web proxies are versatile tools. They have a wide variety of uses, from caching web, DNS and other lookups, to speeding up the delivery of a web server and reducing bandwidth consumption. Web proxy software can also harden security by filtering traffic and anonymizing connections, and offer media-range limitations. This software is used by high-profile, high-traffic websites such as The New York Times, The Guardian, and social media and content sites such as Twitter, Facebook, and Wikipedia. - -Web caches have become a vital mechanism for optimising the amount of data that is delivered in a given period of time. Good web caches also help to minimise latency, serving pages as quickly as possible. This helps to prevent the end user from becoming impatient while waiting for content to be delivered. Web caches optimise the data flow between client and server. They also help to conserve bandwidth by caching frequently-delivered content. 
If you need to reduce server load and improve the delivery speed of your content, it is definitely worth exploring the benefits offered by web cache software. - -To provide an insight into the quality of software available for Linux, I feature below 5 excellent open source web proxy tools. Some of them are full-featured; a couple of them have very modest resource needs. - -### Squid ### - -Squid is a high-performance open source proxy caching server and web cache daemon. It supports FTP, Internet Gopher, HTTPS, TLS, and SSL. It handles all requests in a single, non-blocking, I/O-driven process over IPv4 or IPv6. - -Squid consists of a main server program squid, a Domain Name System lookup program dnsserver, some optional programs for rewriting requests and performing authentication, together with some management and client tools. - -Squid offers a rich access control, authorization and logging environment to develop web proxy and content serving applications. - -Features include: - -- Web proxy: - - Caching to reduce access time and bandwidth use - - Keeps metadata and especially hot objects cached in RAM - - Caches DNS lookups - - Supports non-blocking DNS lookups - - Implements negative caching of failed requests -- Squid caches can be arranged in a hierarchy or mesh for additional bandwidth savings -- Enforce site-usage policies with extensive access controls -- Anonymize connections, such as disabling or changing specific header fields in a client's HTTP request -- Reverse proxy -- Media-range limitations -- Supports SSL -- Support for IPv6 -- Error Page Localization - error pages presented by Squid may now be localized per-request to match the visitor's local preferred language -- Connection Pinning (for NTLM Auth Passthrough) - a workaround which permits Web servers to use Microsoft NTLM Authentication instead of HTTP standard authentication through a web proxy -- Quality of Service (QoS) Flow support - - Select a TOS/Diffserv value to mark local hits - - 
Select a TOS/Diffserv value to mark peer hits - - Selectively mark only sibling or parent requests - - Allows any HTTP response towards clients to have the TOS value of the response coming from the remote server preserved - - Mask certain bits in the TOS received from the remote server, before copying the value to the TOS sent towards clients -- SSL Bump (for HTTPS Filtering and Adaptation) - Squid-in-the-middle decryption and encryption of CONNECT tunneled SSL traffic, using configurable client- and server-side certificates -- eCAP Adaptation Module support -- ICAP Bypass and Retry enhancements - ICAP is now extended with full bypass and dynamic chain routing to handle multiple adaptation services. -- ICY streaming protocol support - commonly known as SHOUTcast multimedia streams -- Dynamic SSL Certificate Generation -- Support for the Internet Content Adaptation Protocol (ICAP) -- Full request logging -- Anonymize connections - -- Website: [www.squid-cache.org][1] -- Developer: National Laboratory for Applied Networking Research (NLANR) and Internet volunteers -- License: GNU GPL v2 -- Version Number: 4.0.1 - -### Privoxy ### - -Privoxy (Privacy Enhancing Proxy) is a non-caching Web proxy with advanced filtering capabilities for enhancing privacy, modifying web page data and HTTP headers, controlling access, and removing ads and other obnoxious Internet junk. Privoxy has a flexible configuration and can be customized to suit individual needs and tastes. It supports both stand-alone systems and multi-user networks. - -Privoxy uses the concept of actions in order to manipulate the data stream between the browser and remote sites. - -Features include: - -- Highly configurable - completely personalize your installation -- Ad blocking -- Cookie management -- Supports "Connection: keep-alive". 
Outgoing connections can be kept alive independently from the client -- Supports IPv6 -- Tagging, which allows changing the behaviour based on client and server headers -- Run as an "intercepting" proxy -- Sophisticated actions and filters for manipulating both server and client headers -- Can be chained with other proxies -- Integrated browser-based configuration and control utility. Browser-based tracing of rule and filter effects. Remote toggling -- Web page filtering (text replacements, removal of banners based on size, invisible "web-bugs", HTML annoyances, etc.) -- Modularized configuration that allows for standard settings and user settings to reside in separate files, so that installing updated actions files won’t overwrite individual user settings -- Support for Perl Compatible Regular Expressions in the configuration files, and a more sophisticated and flexible configuration syntax -- GIF de-animation -- Bypass many click-tracking scripts (avoids script redirection) -- User-customizable HTML templates for most proxy-generated pages (e.g. "blocked" page) -- Auto-detection and re-reading of config file changes -- Most features are controllable on a per-site or per-location basis - -- Website: [www.privoxy.org][2] -- Developer: Fabian Keil (lead developer), David Schmidt, and many other contributors -- License: GNU GPL v2 -- Version Number: 3.4.2 - -### Varnish Cache ### - -Varnish Cache is a web accelerator written with performance and flexibility in mind. Its modern architecture offers significantly better performance. It typically speeds up delivery by a factor of 300 - 1000x, depending on your architecture. Varnish stores web pages in memory so the web servers do not have to create the same web page repeatedly. The web server only recreates a page when it is changed. When content is served from memory this happens a lot faster than anything else. 
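Varnish's caching behaviour is driven by its configuration language, VCL. As a brief illustrative sketch (the backend address and the two-minute TTL are assumptions for the example, not values from this article):

```vcl
# Minimal VCL 4.0 sketch — backend host/port and TTL are hypothetical
vcl 4.0;

backend default {
    .host = "127.0.0.1";   # the web server Varnish sits in front of
    .port = "8080";
}

sub vcl_backend_response {
    # Keep fetched pages in memory for two minutes, so the backend
    # does not have to rebuild the same page repeatedly.
    set beresp.ttl = 120s;
}
```

Because VCL is translated to C and compiled, logic like this runs at native speed rather than being interpreted.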
- -Additionally, Varnish can serve web pages much faster than any application server is capable of - giving the website a significant speed enhancement. - -For a cost-effective configuration, Varnish Cache uses between 1-16GB of RAM and an SSD disk. - -Features include: - -- Modern design -- VCL - a very flexible configuration language. The VCL configuration is translated to C, compiled, loaded and executed giving flexibility and speed -- Load balancing using both a round-robin and a random director, both with a per-backend weighting -- DNS, Random, Hashing and Client IP based Directors -- Load balance between multiple backends -- Support for Edge Side Includes including stitching together compressed ESI fragments -- Heavily threaded -- URL rewriting -- Cache multiple vhosts with a single Varnish -- Log data is stored in shared memory -- Basic health-checking of backends -- Graceful handling of "dead" backends -- Administered by a command line interface -- Use In-line C to extend Varnish -- Can be used on the same system as Apache -- Run multiple Varnish on the same system -- Support for HAProxy's PROXY protocol. This protocol adds a small header on each incoming TCP connection that describes who the real client is, added by (for example) an SSL terminating process -- Warm and cold VCL states -- Plugin support with Varnish Modules, called VMODs -- Backends defined through VMODs -- Gzip Compression and Decompression -- HTTP Streaming Pass & Fetch -- Saint and Grace mode. Saint Mode allows for unhealthy backends to be blacklisted for a period of time, preventing them from serving traffic when using Varnish as a load balancer. 
Grace mode allows Varnish to serve an expired version of a page or other asset in cases where Varnish is unable to retrieve a healthy response from the backend -- Experimental support for Persistent Storage, without LRU eviction - -- Website: [www.varnish-cache.org][3] -- Developer: Varnish Software -- License: FreeBSD -- Version Number: 4.1.0 - -### Polipo ### - -Polipo is an open source caching HTTP proxy which has modest resource needs. - -It listens to requests for web pages from your browser and forwards them to web servers, and forwards the servers’ replies to your browser. In the process, it optimises and cleans up the network traffic. It is similar in spirit to WWWOFFLE, but the implementation techniques are more like the ones used by Squid. - -Polipo aims to be a compliant HTTP/1.1 proxy. It should work with any web site that complies with either HTTP/1.1 or the older HTTP/1.0. - -Features include: - -- HTTP 1.1, IPv4 & IPv6, traffic filtering and privacy-enhancement -- Uses HTTP/1.1 pipelining if it believes that the remote server supports it, whether the incoming requests are pipelined or come in simultaneously on multiple connections -- Cache the initial segment of an instance if the download has been interrupted, and, if necessary, complete it later using Range requests -- Upgrade client requests to HTTP/1.1 even if they come in as HTTP/1.0, and up- or downgrade server replies to the client's capabilities -- Complete support for IPv6 (except for scoped (link-local) addresses) -- Use as a bridge between the IPv4 and IPv6 Internets -- Content-filtering -- Can use a technique known as Poor Man's Multiplexing to reduce latency -- SOCKS 4 and SOCKS 5 protocol support -- HTTPS proxying -- Behaves as a transparent proxy -- Run Polipo together with Privoxy or Tor - -- Website: [www.pps.univ-paris-diderot.fr/~jch/software/polipo/][4] -- Developer: Juliusz Chroboczek, Christopher Davis -- License: MIT License -- Version Number: 1.1.1 - -### Tinyproxy ### 
- -Tinyproxy is a lightweight open source web proxy daemon. It is designed to be fast and yet small. It is useful for cases such as embedded deployments where a full featured HTTP proxy is required, but the system resources for a larger proxy are unavailable. - -Tinyproxy is very useful in a small network setting, where a larger proxy would either be too resource intensive, or a security risk. One of the key features of Tinyproxy is the buffering connection concept. In effect, Tinyproxy will buffer a high speed response from a server, and then relay it to a client at the highest speed the client will accept. This feature greatly reduces the problems with sluggishness on the net. - -Features: - -- Easy to modify -- Anonymous mode - allows specification of individual HTTP headers that should be allowed through, and which should be blocked -- HTTPS support - Tinyproxy allows forwarding of HTTPS connections without modifying traffic in any way through the CONNECT method -- Remote monitoring - access proxy statistics from afar, letting you know exactly how busy the proxy is -- Load average monitoring - configure software to refuse connections after the server load reaches a certain point -- Access control - configure to only allow connections from certain subnets or IP addresses -- Secure - run without any special privileges, thus minimizing the chance of system compromise -- URL based filtering - allows domain and URL-based black- and whitelisting -- Transparent proxying - configure as a transparent proxy, so that a proxy can be used without any client-side configuration -- Proxy chaining - use an upstream proxy server for outbound connections, instead of direct connections to the target server, creating a so-called proxy chain -- Privacy features - restrict both what data comes to your web browser from the HTTP server (e.g., cookies), and to restrict what data is allowed through from your web browser to the HTTP server (e.g., version information) -- Small footprint - 
the memory footprint is about 2MB with glibc, and the CPU load increases linearly with the number of simultaneous connections (depending on the speed of the connection). Tinyproxy can be run on an old machine without affecting performance - -- Website: [banu.com/tinyproxy][5] -- Developer: Robert James Kaes and contributors -- License: GNU GPL v2 -- Version Number: 1.8.3 - --------------------------------------------------------------------------------- - -via: http://www.linuxlinks.com/article/20151101020309690/WebDelivery.html - -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:http://www.squid-cache.org/ -[2]:http://www.privoxy.org/ -[3]:https://www.varnish-cache.org/ -[4]:http://www.pps.univ-paris-diderot.fr/%7Ejch/software/polipo/ -[5]:https://banu.com/tinyproxy/ \ No newline at end of file diff --git a/sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md b/sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md new file mode 100644 index 0000000000..f6e40d4286 --- /dev/null +++ b/sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md @@ -0,0 +1,66 @@ +bazz2222222222222222222222222222222222222222222 +Review EXT4 vs. Btrfs vs. XFS +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/09/1385698302_funny_linux_wallpapers-593x445.jpg) + +To be honest, one of the last things people think about is which file system their PC is using. Windows users as well as Mac OS X users have even less reason to look, as they really have only one choice for their operating system: NTFS and HFS+, respectively. Linux, on the other hand, has plenty of file system options, with the widely used ext4 as the current default. 
However, there is now a push to change the file system to something else: btrfs. But what makes btrfs better, what are the other file systems, and when will we see distributions making the change? + +Let’s first have a general look at file systems and what they really do, then we will make a small comparison between the most popular ones. + +### So, What Do File Systems Do? ### + +In case you are unfamiliar with what file systems really do, it is actually simple when summarized. File systems are mainly used to control how data is stored after a program is no longer using it, how access to the data is controlled, and what other information (metadata) is attached to the data itself. That does not sound like an easy thing to program, and it is definitely not. File systems are continually being revised to include more functionality while becoming more efficient at what they need to do. So, while they are a basic need of all computers, they are not quite as basic as they sound. + +### Why Partitioning? ### + +Many people have only a vague knowledge of what partitions are, since every operating system has the ability to create or remove them. It can seem strange that Linux uses more than one partition on the same disk, even while using the standard installation procedure, so a few words of explanation are called for. One of the main goals of having different partitions is to achieve higher data security in case of disaster. + +By dividing your hard disk into partitions, data may be grouped and separated. When an accident occurs, only the data stored in the partition which took the hit will be damaged, while the data on other partitions will most likely survive. These principles date from the days when Linux didn’t have a journaled file system and any power failure might have led to a disaster. 
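The grouping and separation described above can be made concrete with a sketch of an /etc/fstab for such a layout (the device names and the choice of mount points are illustrative assumptions, not a recommendation):

```
# /etc/fstab — hypothetical layout keeping user data and logs
# separate from the root file system
# <device>   <mount point>  <type>  <options>  <dump>  <pass>
/dev/sda1    /              ext4    defaults   0       1
/dev/sda2    /home          ext4    defaults   0       2
/dev/sda3    /var           ext4    defaults   0       2
/dev/sda4    none           swap    sw         0       0
```

With a layout like this, a runaway process filling /var cannot exhaust the root partition, and /home can survive a reinstall of / untouched.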
+ +Partitioning remains in use for security and robustness reasons, so that a breach in one part of the operating system does not automatically mean that the whole computer is at risk. This is currently the most important factor in partitioning. For example, users create scripts, programs or web applications which start filling up the disk. If that disk contains only one big partition, the entire system may stop functioning once the disk is full. If users store data on separate partitions, then only that data partition will be affected, while the system partitions and any other data partitions keep functioning. + +Mind that having a journaled file system only provides data security in case of a power failure or the sudden disconnection of a storage device. It does not protect the data against bad blocks or logical errors in the file system. In such cases, the user should use a Redundant Array of Inexpensive Disks (RAID) solution. + +### Why Switch File Systems? ### + +The ext4 file system is an improvement on ext3, which was itself an improvement over ext2. While ext4 is a very solid file system which has been the default choice of almost all distributions for the past few years, it is built on an aging code base. Additionally, Linux users are seeking many new features in file systems which ext4 does not handle on its own. There is software which takes care of some of these needs, but performance-wise, being able to do such things at the file system level could be faster. + +### Ext4 File System ### + +Ext4 has some limits which are still quite impressive. The maximum file size is 16 tebibytes (roughly 17.6 terabytes), which is much bigger than any hard drive a regular consumer can currently buy. 
Meanwhile, the largest volume/partition you can make with ext4 is 1 exbibyte (roughly 1,152,921.5 terabytes). Ext4 is known to bring speed improvements over ext3 through various techniques. Like most modern file systems, it is a journaling file system, which means that it keeps a journal of where files are located on the disk and of any other changes that happen to the disk. Regardless of all its features, it doesn’t support transparent compression, data deduplication, or transparent encryption. Snapshots are technically supported, but the feature is experimental at best. + +### Btrfs File System ### + +Btrfs is pronounced many different ways, for example Better FS, Butter FS, or B-Tree FS. It is a file system built completely from scratch. Btrfs exists because its developers wanted to expand file system functionality to include snapshots, pooling and checksums, among other things. While it is independent of ext4, it also aims to build on the ideas in ext4 that are great for consumers and businesses alike, and to incorporate additional features that will benefit everybody, but especially enterprises. For enterprises running very large programs with very large databases, having a seemingly continuous file system across multiple hard drives can be very beneficial, as it makes consolidation of the data much easier. Data deduplication can reduce the actual space data occupies, and data mirroring becomes easier with btrfs as well, since there is a single broad file system which needs to be mirrored. + +The user can certainly still choose to create multiple partitions so that he does not need to mirror everything. 
Considering that btrfs is able to span multiple hard drives, it is a very good thing that it can support 16 times more drive space than ext4: the maximum partition size of a btrfs file system is 16 exbibytes, and the maximum file size is 16 exbibytes too. + +### XFS File System ### + +XFS is a high-performance 64-bit journaling file system, an extension of the extent file system. XFS support was merged into the Linux kernel around 2002, and in 2009 Red Hat Enterprise Linux 5.4 added support for the XFS file system. XFS supports a maximum file system size of 8 exbibytes. On the downside, an XFS file system can’t be shrunk, and it performs poorly when deleting large numbers of files. RHEL 7.0 now uses XFS as its default file system. + +### Final Thoughts ### + +Unfortunately, the arrival date for btrfs is not quite known. Officially, the next-generation file system is still classified as “unstable”, but if the user downloads the latest version of Ubuntu, he will be able to choose to install on a btrfs partition. When btrfs will actually be classified as “stable” is still a mystery, but users shouldn’t expect Ubuntu to use btrfs by default until it is indeed considered “stable”. It has been reported that Fedora 18 will use btrfs as its default file system, as by the time of its release a file system checker for btrfs should exist. There is a good amount of work still left for btrfs, as not all the features are implemented yet and the performance is a little sluggish compared to ext4. + +So, which is better to use? For now, ext4 is the winner despite the nearly identical performance. Why? The answer is convenience and ubiquity. Ext4 is still an excellent file system for desktop or workstation use. 
It is provided by default, so the user can install the operating system on it. Also, ext4 supports volumes up to 1 exabyte and files up to 16 terabytes in size, so there’s still plenty of room for growth where space is concerned. + +Btrfs might offer larger volumes of up to 16 exabytes and improved fault tolerance, but, for now, it feels more like an add-on file system than one integrated into the Linux operating system. For example, the btrfs-tools have to be present before a drive can be formatted with btrfs, which means that btrfs is not an option during the Linux installation, though that can vary with the distribution. + +Even though transfer rates are important, there’s more to a file system than just the speed of file transfers. Btrfs has many useful features such as Copy-on-Write (CoW), extensive checksums, snapshots, scrubbing, self-healing data, deduplication, and many more good improvements that ensure data integrity. Btrfs lacks the RAID-Z features of ZFS, so RAID is still in an experimental state with btrfs. For pure data storage, however, btrfs is the winner over ext4, but time will tell. + +For the moment, ext4 seems to be the better choice on desktop systems, since it comes as the default file system and is faster than btrfs when transferring files. Btrfs is definitely worth looking into, but a complete switch away from ext4 on desktop Linux might still be a few years off. Data farms and large storage pools could tell a different story and reveal the real differences between ext4, XFS, and btrfs. + +If you have a different or additional opinion, kindly let us know by commenting on this article. 
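The sizes quoted in this review mix binary units (tebibytes, exbibytes) with decimal ones (terabytes). A quick shell sketch, assuming standard GNU df and awk and using the root mount point only as an example, can show which file system a machine actually uses and sanity-check the conversions:

```shell
# Which file system is the root volume using?
df -T / | awk 'NR==2 { print $2 }'

# ext4's 16 TiB maximum file size, expressed in decimal terabytes
awk 'BEGIN { printf "%.1f\n", 16 * 2^40 / 10^12 }'    # prints 17.6

# ext4's 1 EiB maximum volume size, in decimal terabytes
awk 'BEGIN { printf "%.1f\n", 2^60 / 10^12 }'         # prints 1152921.5
```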
+ +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/review-ext4-vs-btrfs-vs-xfs/ + +作者:[M.el Khamlichi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/pirat9/ diff --git a/sources/talk/20101020 19 Years of KDE History--Step by Step.md b/sources/talk/20101020 19 Years of KDE History--Step by Step.md deleted file mode 100644 index b5abb96572..0000000000 --- a/sources/talk/20101020 19 Years of KDE History--Step by Step.md +++ /dev/null @@ -1,220 +0,0 @@ -19 Years of KDE History: Step by Step -================================================================================ -注:youtube 视频 - - -### Introduction ### - -KDE – one of the most functional desktop environments ever. It’s open source and free to use. 19 years ago, on 14 October 1996, German programmer Matthias Ettrich started the development of this beautiful environment. KDE provides the shell and many applications for everyday use. Today KDE is used by hundreds of thousands of people all over the world on Unix and Windows operating systems. 19 years – a serious age for a software project. Time to look back and see how it began. - -K Desktop Environment brought some new aspects: new design, a good look & feel, consistency, ease of use, and powerful applications for typical desktop work and special use cases. The name “KDE” is an easy word hack on “Common Desktop Environment”, with “K” for “Cool”. The first KDE version used Trolltech’s proprietary Qt framework with dual licensing: the open source QPL (Q Public License) and a proprietary commercial license. In 2000 Trolltech released some Qt libraries under the GPL; Qt 4.5 was released under the LGPL 2.1. Since 2009 KDE comprises three products: Plasma Workspaces (the shell), KDE Applications, and KDE Platform, together known as the KDE Software Compilation. 
- -### Releases ### - -#### Pre-Release – 14 October 1996 #### - -![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/0b3.png) - -Kool Desktop Environment. The word “Kool” would be dropped in the future. In the beginning, all components were released to the developer community separately, without any coordinated timeframe throughout the overall project. KDE’s first communication channel was a mailing list called kde@fiwi02.wiwi.uni-Tubingen.de. - -#### KDE 1.0 – July 12, 1998 #### - -![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/10.png) - -This version received a mixed reception. Many criticized the use of the Qt software framework – back then under the FreeQt license, which was claimed not to be compatible with free software – and advised the use of Motif or LessTif instead. Despite that criticism, KDE was well received by many users and made its way into the first Linux distributions. - -![28 January 1999](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/11.png) - -28 January 1999 - -An update, **K Desktop Environment 1.1**, was faster, more stable and included many small improvements. It also included a new set of icons, backgrounds and textures. Among this overhauled artwork was a new KDE logo by Torsten Rahn, consisting of the letter K in front of a gear, which is used in revised form to this day. - -#### KDE 2.0 – October 23, 2000 #### - -![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/20.png) - -Major updates: * DCOP (Desktop COmmunication Protocol), a client-to-client communications protocol * KIO, an application I/O library * KParts, a component object model * KHTML, an HTML 4.0 compliant rendering and drawing engine - -![26 February 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/21.png) - -26 February 2001 - -The **K Desktop Environment 2.1** release inaugurated the media player noatun, which used a modular, plugin design. 
For development, K Desktop Environment 2.1 was bundled with KDevelop. - -![15 August 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/22.png) - -15 August 2001 - -The **KDE 2.2** release featured up to a 50% improvement in application startup time on GNU/Linux systems and increased stability and capabilities for HTML rendering and JavaScript; some new features in KMail. - -#### KDE 3.0 – April 3, 2002 #### - -![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/30.png) - -K Desktop Environment 3.0 introduced better support for restricted usage, a feature demanded by certain environments such as kiosks, Internet cafes and enterprise deployments, which disallows the user from having full access to all capabilities of a piece of software. - -![28 January 2003](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/31.png) - -28 January 2003 - -**K Desktop Environment 3.1** introduced new default window (Keramik) and icon (Crystal) styles as well as several feature enhancements. - -![3 February 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/32.png) - -3 February 2004 - -**K Desktop Environment 3.2** included new features, such as inline spell checking for web forms and emails, improved e-mail and calendaring support, tabs in Konqueror and support for Microsoft Windows desktop sharing protocol (RDP). - -![19 August 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/33.png) - -19 August 2004 - -**K Desktop Environment 3.3** focused on integrating different desktop components. Kontact was integrated with Kolab, a groupware application, and Kpilot. Konqueror was given better support for instant messaging contacts, with the capability to send files to IM contacts and support for IM protocols (e.g., IRC). 
- -![16 March 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/34.png) - -16 March 2005 - -**K Desktop Environment 3.4** focused on improving accessibility. The update added a text-to-speech system with support for Konqueror, Kate, KPDF, the standalone application KSayIt and text-to-speech synthesis on the desktop. - -![29 November 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/35.png) - -29 November 2005 - -The **K Desktop Environment 3.5** release added SuperKaramba, which provides integrated and simple-to-install widgets to the desktop. Konqueror was given an ad-block feature and became the second web browser to pass the Acid2 CSS test. - -#### KDE SC 4.0 – January 11, 2008 #### - -![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/400.png) - -The majority of development went into implementing most of the new technologies and frameworks of KDE 4. Plasma and the Oxygen style were two of the biggest user-facing changes. Dolphin replaced Konqueror as the default file manager, and Okular became the default document viewer. - -![29 July 2008](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/401.png) - -29 July 2008 - -**KDE 4.1** includes a shared emoticon theming system which is used in PIM and Kopete, and DXS, a service that lets applications download and install data from the Internet with one click. Also introduced are GStreamer, QuickTime 7, and DirectShow 9 Phonon backends. New applications: * Dragon Player * Kontact * Skanlite – software for scanners * Step – physics simulator * New games: Kdiamond, Kollision, KBreakout and others - -![27 January 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/402.png) - -27 January 2009 - -**KDE 4.2** is considered a significant improvement beyond KDE 4.1 in nearly all aspects, and a suitable replacement for KDE 3.5 for most users. 
- -![4 August 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/403.png) - -4 August 2009 - -**KDE 4.3** fixed over 10,000 bugs and implemented almost 2,000 feature requests. Integration with other technologies, such as PolicyKit, NetworkManager & Geolocation services, was another focus of this release. - -![9 February 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/404.png) - -9 February 2010 - -**KDE SC 4.4** is based on version 4.6 of the Qt 4 toolkit. New application – KAddressBook, first release of Kopete. - -![10 August 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/405.png) - -10 August 2010 - -**KDE SC 4.5** has some new features: integration of the WebKit library, an open-source web browser engine, which is used in major browsers such as Apple Safari and Google Chrome. KPackageKit replaced KPackage. - -![26 January 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/406.png) - -26 January 2011 - -**KDE SC 4.6** has better OpenGL compositing along with the usual myriad of fixes and features. - -![27 July 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/407.png) - -27 July 2011 - -**KDE SC 4.7** updated KWin with OpenGL ES 2.0 compatibility and Qt Quick, and brought a Plasma Desktop with many enhancements and a lot of new functions in general applications. About 12k bugs were fixed. - -![25 January 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/408.png) - -25 January 2012 - -**KDE SC 4.8**: better KWin performance and Wayland support, new design of Dolphin. - -![1 August 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/409.png) - -1 August 2012 - -**KDE SC 4.9**: several improvements to the Dolphin file manager, including the reintroduction of in-line file renaming, back and forward mouse buttons, improvement of the places panel and better usage of file metadata. 
- -![6 February 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/410.png) - -6 February 2013 - -**KDE SC 4.10**: many of the default Plasma widgets were rewritten in QML, and Nepomuk, Kontact and Okular received significant speed improvements. - -![14 August 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/411.png) - -14 August 2013 - -**KDE SC 4.11**: Kontact and Nepomuk received many optimizations. The first generation Plasma Workspaces entered maintenance-only development mode. - -![18 December 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/412.png) - -18 December 2013 - -**KDE SC 4.12**: Kontact received substantial improvements, along with many other small improvements. - -![16 April 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/413.png) - -16 April 2014 - -**KDE SC 4.13**: Nepomuk semantic desktop search was replaced with KDE’s in-house Baloo. KDE SC 4.13 was released in 53 different translations. - -![20 August 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/414.png) - -20 August 2014 - -**KDE SC 4.14**: The release primarily focused on stability, with numerous bugs fixed and few new features added. This was the final KDE SC 4 release. - -#### KDE Plasma 5.0 – July 15, 2014 #### - -![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/500.png) - -KDE Plasma 5 – the 5th generation of KDE. Massive improvements in design and under the hood, a new default theme – Breeze, a complete migration to QML, better performance with OpenGL, and better support for HiDPI displays. - -![11 November 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/501.png) - -11 November 2014 - -**KDE Plasma 5.1**: Ported missing features from Plasma 4. 
- -![27 January 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/502.png) - -27 January 2015 - -**KDE Plasma 5.2**: New components: BlueDevil, KSSHAskPass, Muon, SDDM theme configuration, KScreen, GTK+ style configuration and KDecoration. - -![28 April 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/503.png) - -28 April 2015 - -**KDE Plasma 5.3**: Tech preview of Plasma Media Center. New Bluetooth and touchpad applets. Enhanced power management. - -![25 August 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/504.png) - -25 August 2015 - -**KDE Plasma 5.4**: Initial Wayland session, new QML-based audio volume applet, and alternative full-screen application launcher. - -Big thanks to the [KDE][1] developers and community, to Wikipedia for the [descriptions][2], and to all my readers. Be free and use open source software like KDE. - --------------------------------------------------------------------------------- - -via: https://tlhp.cf/kde-history/ - -作者:[Pavlo Rudyi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://tlhp.cf/author/paul/ -[1]:https://www.kde.org/ -[2]:https://en.wikipedia.org/wiki/KDE_Plasma_5 \ No newline at end of file diff --git a/sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md b/sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md deleted file mode 100644 index 22a0acdbf1..0000000000 --- a/sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md +++ /dev/null @@ -1,345 +0,0 @@ -sevenot translating -A Linux User Using ‘Windows 10′ After More than 8 Years – See Comparison -================================================================================ -Windows 10 is the newest member of the Windows NT family, whose 
general availability came on July 29, 2015. It is the successor of Windows 8.1. Windows 10 is supported on Intel Architecture 32-bit, AMD64 and ARMv7 processors. - -![Windows 10 and Linux Comparison](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-vs-Linux.jpg) - -Windows 10 and Linux Comparison - -As a Linux user for more than 8 continuous years, I decided to test Windows 10, as it has been making a lot of news these days. This article is a rundown of my observations. I will be seeing everything from the perspective of a Linux user, so you may find it a bit biased towards Linux, but with absolutely no false information. - -1. I searched Google with the text “download windows 10” and clicked the first link. - -![Search Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Windows-10.jpg) - -Search Windows 10 - -You may directly go to the link: [https://www.microsoft.com/en-us/software-download/windows10ISO][1] - -2. I was supposed to select an edition from ‘windows 10‘, ‘windows 10 KN‘, ‘windows 10 N‘ and ‘windows 10 single language‘. - -![Select Windows 10 Edition](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Windows-10-Edition.jpg) - -Select Windows 10 Edition - -For those who want to know the details of the different editions of Windows 10, here are brief descriptions of them. - -- Windows 10 – Contains everything offered by Microsoft for this OS. -- Windows 10N – This edition comes without Media Player. -- Windows 10KN – This edition comes without media playing capabilities. -- Windows 10 Single Language – Only one language pre-installed. - -3. I selected the first option, ‘Windows 10‘, and clicked ‘Confirm‘. Then I was supposed to select a product language. I chose ‘English‘. - -I was provided with two download links, one for 32-bit and the other for 64-bit. I clicked 64-bit, as per my architecture. 
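As an aside, this is the point where a Linux user would normally verify the download against a published checksum before writing it anywhere. A minimal sketch of that workflow with coreutils (the file names below are stand-ins; as the article notes later, Microsoft publishes no hash alongside this ISO):

```shell
# Sketch of the checksum workflow a Linux user would expect after a download.
# The "ISO" here is a small stand-in file, and the .sha256 file plays the
# role of a vendor-published checksum.
iso=/tmp/Win10_demo.iso
printf 'stand-in for a 3.8 GB image' > "$iso"

# What a vendor would publish: "<sha256>  <filename>"
sha256sum "$iso" > "$iso.sha256"

# After downloading, re-checking reports OK only if the file is intact.
sha256sum -c "$iso.sha256"
```

If the image were corrupted in transit, `sha256sum -c` would report a failure immediately, saving the write-to-USB, reboot and fingers-crossed cycle described below.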
- -![Download Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Download-Windows-10.jpg) - -Download Windows 10 - -With my download speed (15Mbps), it took me 3 long hours to download it. Unfortunately there was no torrent file for the OS, which could otherwise have made the overall process smoother. The OS ISO image size is 3.8 GB. - -I could not find an image of smaller size, but then the truth is that net-installer-like images don’t exist for Windows. Also there is no published hash value to verify the ISO image against after it has been downloaded. - -I wonder why Windows is so ignorant about such issues. To verify that the ISO downloaded correctly, I need to write the image to a disk or to a USB flash drive, then boot my system and keep my fingers crossed till the setup is finished. - -Let’s start. I made my USB flash drive bootable with the Windows 10 ISO using the dd command (note that the image must be written to the whole device, here /dev/sdb, and not to a partition such as /dev/sdb1): - - # dd if=/home/avi/Downloads/Win10_English_x64.iso of=/dev/sdb bs=4M; sync - -It took a few minutes to complete the process. I then rebooted the system and chose to boot from the USB flash drive in my UEFI (BIOS) settings. - -#### System Requirements #### - -If you are upgrading - -- Upgrade supported only from Windows 7 SP1 or Windows 8.1 - -If you are installing fresh - -- Processor: 1GHz or faster -- RAM : 1GB and above (32-bit), 2GB and above (64-bit) -- HDD: 16GB and above (32-bit), 20GB and above (64-bit) -- Graphics card: DirectX 9 or later + WDDM 1.0 driver - -### Installation of Windows 10 ### - -1. Windows 10 boots. Yet again they changed the logo. Also no information on what’s going on. - -![Windows 10 Logo](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Logo.jpg) - -Windows 10 Logo - -2. I selected the language to install, time & currency format, and keyboard & input methods before clicking Next. - -![Select Language and Time](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Language-and-Time.jpg) - -Select Language and Time - -3. 
And then the ‘Install Now‘ menu. - -![Install Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Windows-10.jpg) - -Install Windows 10 - -4. The next screen asks for a product key. I clicked ‘Skip’. - -![Windows 10 Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Product-Key.jpg) - -Windows 10 Product Key - -5. Choose from the listed OS editions. I chose ‘Windows 10 Pro‘. - -![Select Install Operating System](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Operating-System.jpg) - -Select Install Operating System - -6. Oh yes, the license agreement. Put a check mark against ‘I accept the license terms‘ and click Next. - -![Accept License](http://www.tecmint.com/wp-content/uploads/2015/08/Accept-License.jpg) - -Accept License - -7. Next was the choice between upgrading (to Windows 10 from a previous version of Windows) and installing Windows. I don’t know why ‘Custom: Install Windows only’ is labelled as advanced by Windows. Anyway, I chose to install Windows only. - -![Select Installation Type](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Installation-Type.jpg) - -Select Installation Type - -8. I selected the file-system and clicked ‘Next’. - -![Select Install Drive](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Drive.jpg) - -Select Install Drive - -9. The installer started copying files, getting files ready for installation, installing features, installing updates and finishing up. It would have been better if the installer had shown verbose output on the actions it was taking. - -![Installing Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Installing-Windows.jpg) - -Installing Windows - -10. And then Windows restarted. It said a reboot was needed to continue. - -![Windows Installation Process](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Installation-Process.jpg) - -Windows Installation Process - -11. And then all I got was the below screen, which reads “Getting Ready”. 
It took 5+ minutes at this point. No idea what was going on. No output. - -![Windows Getting Ready](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Getting-Ready.jpg) - -Windows Getting Ready - -12. Yet again, it was time to “Enter Product Key”. I clicked “Do this later” and then used express settings. - -![Enter Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Enter-Product-Key.jpg) - -Enter Product Key - -![Select Express Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Express-Settings.jpg) - -Select Express Settings - -14. And then three more output screens, where I as a Linux user expected the installer to tell me what it was doing, but all in vain. - -![Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Loading-Windows.jpg) - -Loading Windows - -![Getting Updates](http://www.tecmint.com/wp-content/uploads/2015/08/Getting-Updates.jpg) - -Getting Updates - -![Still Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Still-Loading-Windows.jpg) - -Still Loading Windows - -15. And then the installer wanted to know who owns this machine, “My organization” or I myself. I chose “I own it” and then Next. - -![Select Organization](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Organization.jpg) - -Select Organization - -16. The installer prompted me to join “Azure AD” or “Join a domain” before I could click ‘Continue’. I chose the latter option. - -![Connect Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Connect-Windows.jpg) - -Connect Windows - -17. The installer wanted me to create an account. So I entered a user name and clicked ‘Next‘; I was expecting an error message that I must enter a password. - -![Create Account](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Account.jpg) - -Create Account - -18. To my surprise, Windows didn’t even show a warning/notification that I must create a password. Such negligence. Anyway, I got my desktop. 
- -![Windows 10 Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Desktop.jpg) - -Windows 10 Desktop - -#### Experience of a Linux-user (Myself) till now #### - -- No net-installer image -- Image size too heavy -- No way to check the integrity of the downloaded ISO (no hash check) -- The booting and installation remain the same as they were in XP, Windows 7 and 8. -- As usual, no output on what the Windows installer is doing – which file it is copying or which package it is installing. -- Installation was straightforward and easy compared to the installation of a Linux distribution. - -### Windows 10 Testing ### - -19. The default desktop is clean. It has a Recycle Bin icon on it. You can search the web directly from the desktop itself. Additionally, icons for task viewing, Internet browsing, folder browsing and the Microsoft Store are there. As usual, a notification bar is present on the bottom right to sum up the desktop. - -![Desktop Shortcut Icons](http://www.tecmint.com/wp-content/uploads/2015/08/Deskop-Shortcut-icons.jpg) - -Desktop Shortcut Icons - -20. Internet Explorer replaced with Microsoft Edge. Windows 10 has replaced the legacy web browser Internet Explorer, also known as IE, with Edge, aka Project Spartan. - -![Microsoft Edge Browser](http://www.tecmint.com/wp-content/uploads/2015/08/Edge-browser.jpg) - -Microsoft Edge Browser - -It is fast, at least compared to IE (or so it seems in testing). Familiar user interface. The home screen contains news feed updates. There is also a search bar titled ‘Where to next?‘. The browser’s load time is considerably low, which results in improved overall speed and performance. The memory usage of Edge seems normal. - -![Windows Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Performance.jpg) - -Windows Performance - -Edge has got Cortana – an intelligent personal assistant, support for Chrome extensions, Web Note – take notes while browsing, and Share – right from the tab without opening any other tab. 
- -#### Experience of a Linux-user (Myself) on this point #### - -21. Microsoft has really improved web browsing. Let’s see how stable and fine it remains. It doesn’t lag as of now. - -22. Though RAM usage by Edge was fine for me, a lot of users are complaining that Edge is notorious for excessive RAM usage. - -23. Difficult to say at this point if Edge is ready to compete with Chrome and/or Firefox. Let’s see what the future unfolds. - -#### A few more Virtual Tours #### - -24. Start Menu redesigned – seems clear and effective. Metro icons make it lively. It is populated with the most commonly used applications, viz., Calendar, Mail, Edge, Photos, Contacts, Temperature, Companion suite, OneNote, Store, Xbox, Music, Movies & TV, Money, News, etc. - -![Windows Look and Feel](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Look.jpg) - -Windows Look and Feel - -In Linux, on the GNOME desktop environment, I search for a required application simply by pressing the Windows key and then typing the name of the application. - -![Search Within Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Within-Desktop.jpg) - -Search Within Desktop - -25. File Explorer – seems cleanly designed. Edges are sharp. In the left pane there are links to quick-access folders. - -![Windows File Explorer](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-File-Explorer.jpg) - -Windows File Explorer - -The file explorer of the GNOME desktop environment on Linux is equally clear and effective. Removing unnecessary graphics and images from icons is a plus point. - -![File Browser on Gnome](http://www.tecmint.com/wp-content/uploads/2015/08/File-Browser.jpg) - -File Browser on Gnome - -26. Settings – Though the settings are a bit refined on Windows 10, you may compare them with the settings on a Linux box. 
- -**Settings on Windows** - -![Windows 10 Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Settings.jpg) - -Windows 10 Settings - -**Settings on Linux Gnome** - -![Gnome Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Settings.jpg) - -Gnome Settings - -27. List of applications – The application list on Windows is better than what they used to provide (going by my memory of when I was a regular Windows user), but it still falls short of the way GNOME 3 lists applications. - -**Application Listed by Windows** - -![Application List on Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Application-List-on-Windows-10.jpg) - -Application List on Windows 10 - -**Application Listed by Gnome3 on Linux** - -![Gnome Application List on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Application-List-on-Linux.jpg) - -Gnome Application List on Linux - -28. Virtual desktops – The virtual desktop feature of Windows 10 is one of those topics which are very much talked about these days. - -Here is the virtual desktop in Windows 10. - -![Windows Virtual Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Virtual-Desktop.jpg) - -Windows Virtual Desktop - -and the virtual desktop on Linux, which we have been using for more than two decades. - -![Virtual Desktop on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Virtual-Desktop-on-Linux.jpg) - -Virtual Desktop on Linux - -#### A few other features of Windows 10 #### - -29. Windows 10 comes with Wi-Fi Sense. It shares your password with others. Anyone who is in range of your Wi-Fi and connected to you over Skype, Outlook, Hotmail or Facebook can be granted access to your Wi-Fi network. And mind it, this feature has been added by Microsoft to save time and allow hassle-free connections. - -In a reply to a question raised by Tecmint, Microsoft said – the user has to agree to enable Wi-Fi Sense every time on a new network. Oh! 
What pathetic taste as far as security is concerned. I am not convinced. - -30. Upgrading from Windows 7 and Windows 8.1 is free, though the retail cost of the Home and Pro editions is approximately $119 and $199 respectively. - -31. Microsoft released the first cumulative update for Windows 10, which is said to put the system into an endless crash loop for a few people. Microsoft perhaps doesn’t understand the problem, or doesn’t want to work on that part – I don’t know why. - -32. Microsoft’s inbuilt utility to block/hide unwanted updates doesn’t work in my case. This means that if an update is there, there is no way to block/hide it. Sorry, Windows users! - -#### A few features native to Linux that Windows 10 has #### - -Windows 10 has a lot of features that were taken directly from Linux. If Linux had not been released under the GNU license, perhaps Microsoft would never have had the below features. - -33. Command-line package management – Yup! You heard it right. Windows 10 has built-in package management. It works only in Windows PowerShell. OneGet is the official package manager for Windows. Windows package manager in action: - -![Windows 10 Package Manager](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Package-Manager.jpg) - -Windows 10 Package Manager - -- Border-less windows -- Flat icons -- Virtual desktops -- One search for online+offline search -- Convergence of mobile and desktop OS - -### Overall Conclusion ### - -- Improved responsiveness -- Well-implemented animation -- Low on resources -- Improved battery life -- The Microsoft Edge web browser is rock solid -- Supported on Raspberry Pi 2 -- It is good because Windows 8/8.1 was not up to the mark and really bad. -- It is the same old wine in a new bottle – almost the same things with brushed-up icons. - -What my testing suggests is that Windows 10 has improved on a few things, like look and feel (as Windows always does); +1 for Project Spartan, virtual desktops, command-line package management, and one search for online and offline content. 
It is overall an improved product, but those who think that Windows 10 will prove to be the last nail in the coffin of Linux are mistaken. - -Linux is years ahead of Windows. Their approaches are different. In the near future Windows won’t stand anywhere near Linux, and there is nothing for which a Linux user needs to go to Windows 10. - -That’s all for now. Hope you liked the post. I will be here again with another interesting post you people will love to read. Provide us with your valuable feedback in the comments below. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/a-linux-user-using-windows-10-after-more-than-8-years-see-comparison/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:https://www.microsoft.com/en-us/software-download/windows10ISO diff --git a/sources/talk/20150820 Why did you start using Linux.md b/sources/talk/20150820 Why did you start using Linux.md deleted file mode 100644 index 5fb6a8d4fe..0000000000 --- a/sources/talk/20150820 Why did you start using Linux.md +++ /dev/null @@ -1,147 +0,0 @@ -Why did you start using Linux? -================================================================================ -> In today's open source roundup: What got you started with Linux? Plus: IBM's Linux-only mainframe. And why you should skip Windows 10 and go with Linux - -### Why did you start using Linux? ### - -Linux has become quite popular over the years, with many users defecting to it from OS X or Windows. But have you ever wondered what got people started with Linux? A redditor asked that question and got some very interesting answers. 
- -SilverKnight asked his question on the Linux subreddit: - -> I know this has been asked before, but I wanted to hear more from the younger generation why it is that they started using linux and what keeps them here. -> -> I dont want to discourage others from giving their linux origin stories, because those are usually pretty good, but I was mostly curious about our younger population since there isn't much out there from them yet. -> -> I myself am 27 and am a linux dabbler. I have installed quite a few different distros over the years but I haven't made the plunge to full time linux. I guess I am looking for some more reasons/inspiration to jump on the bandwagon. -> -> [More at Reddit][1] - -Fellow redditors in the Linux subreddit responded with their thoughts: - -> **DoublePlusGood**: "I started using Backtrack Linux (now Kali) at 12 because I wanted to be a "1337 haxor". I've stayed with Linux (Archlinux currently) because it lets me have the endless freedom to make my computer do what I want." -> -> **Zack**: "I'm a Linux user since, I think, the age of 12 or 13, I'm 15 now. -> -> It started when I got tired with Windows XP at 11 and the waiting, dammit am I impatient sometimes, but waiting for a basic task such as shutting down just made me tired of Windows all together. -> -> A few months previously I had started participating in discussions in a channel on the freenode IRC network which was about a game, and as freenode usually goes, it was open source and most of the users used Linux. -> -> I kept on hearing about this Linux but wasn't that interested in it at the time. However, because the channel (and most of freenode) involved quite a bit of programming I started learning Python. 
-> -> A year passed and I was attempting to install GNU/Linux (specifically Ubuntu) on my new (technically old, but I had just got it for my birthday) PC, unfortunately it continually froze, for reasons unknown (probably a bad hard drive, or a lot of dust or something else...). -> -> Back then I was the type to give up on things, so I just continually nagged my dad to try and install Ubuntu, he couldn't do it for the same reasons. -> -> After wanting Linux for a while I became determined to get Linux and ditch windows for good. So instead of Ubuntu I tried Linux Mint, being a derivative of Ubuntu(?) I didn't have high hopes, but it worked! -> -> I continued using it for another 6 months. -> -> During that time a friend on IRC gave me a virtual machine (which ran Ubuntu) on their server, I kept it for a year a bit until my dad got me my own server. -> -> After the 6 months I got a new PC (which I still use!) I wanted to try something different. -> -> I decided to install openSUSE. -> -> I liked it a lot, and on the same Christmas I obtained a Raspberry Pi, and stuck with Debian on it for a while due to the lack of support other distros had for it." -> -> **Cqz**: "Was about 9 when the Windows 98 machine handed down to me stopped working for reasons unknown. We had no Windows install disk, but Dad had one of those magazines that comes with demo programs and stuff on CDs. This one happened to have install media for Mandrake Linux, and so suddenly I was a Linux user. Had no idea what I was doing but had a lot of fun doing it, and although in following years I often dual booted with various Windows versions, the FLOSS world always felt like home. Currently only have one Windows installation, which is a virtual machine for games." -> -> **Tosmarcel**: "I was 15 and was really curious about this new concept called 'programming' and then I stumbled upon this Harvard course, CS50. They told users to install a Linux vm to use the command line. 
But then I asked myself: "Why doesn't windows have this command line?!". I googled 'linux' and Ubuntu was the top result -Ended up installing Ubuntu and deleted the windows partition accidentally... It was really hard to adapt because I knew nothing about linux. Now I'm 16 and running arch linux, never looked back and I love it!" -> -> **Micioonthet**: "First heard about Linux in the 5th grade when I went over to a friend's house and his laptop was running MEPIS (an old fork of Debian) instead of Windows XP. -> -> Turns out his dad was a socialist (in America) and their family didn't trust Microsoft. This was completely foreign to me, and I was confused as to why he would bother using an operating system that didn't support the majority of software that I knew. -> -> Fast forward to when I was 13 and without a laptop. Another friend of mine was complaining about how slow his laptop was, so I offered to buy it off of him so I could fix it up and use it for myself. I paid $20 and got a virus filled, unusable HP Pavilion with Windows Vista. Instead of trying to clean up the disgusting Windows install, I remembered that Linux was a thing and that it was free. I burned an Ubuntu 12.04 disc and installed it right away, and was absolutely astonished by the performance. -> -> Minecraft (one of the few early Linux games because it ran on Java), which could barely run at 5 FPS on Vista, ran at an entirely playable 25 FPS on a clean install of Ubuntu. -> -> I actually still have that old laptop and use it occasionally, because why not? Linux doesn't care how old your hardware is. -> -> I since converted my dad to Linux and we buy old computers at lawn sales and thrift stores for pennies and throw Linux Mint or some other lightweight distros on them." -> -> **Webtm**: "My dad had every computer in the house with some distribution on it, I think a couple with OpenSUSE and Debian, and his personal computer had Slackware on it. 
So I remember being little and playing around with Debian and not really getting into it much. So I had a Windows laptop for a few years and my dad asked me if I wanted to try out Debian. It was a fun experience and ever since then I've been using Debian and trying out distributions. I currently moved away from Linux and have been using FreeBSD for around 5 months now, and I am absolutely happy with it. -> -> The control over your system is fantastic. There are a lot of cool open source projects. I guess a lot of the fun was figuring out how to do the things I want by myself and tweaking those things in ways to make them do something else. Stability and performance is also a HUGE plus. Not to mention the level of privacy when switching." -> -> **Wyronaut**: "I'm currently 18, but I first started using Linux when I was 13. Back then my first distro was Ubuntu. The reason why I wanted to check out Linux, was because I was hosting little Minecraft game servers for myself and a couple of friends, back then Minecraft was pretty new-ish. I read that the defacto operating system for hosting servers was Linux. -> -> I was a big newbie when it came to command line work, so Linux scared me a little, because I had to take care of a lot of things myself. But thanks to google and a few wiki pages I managed to get up a couple of simple servers running on a few older PC's I had lying around. Great use for all that older hardware no one in the house ever uses. -> -> After running a few game servers I started running a few web servers as well. Experimenting with HTML, CSS and PHP. I worked with those for a year or two. Afterwards, took a look at Java. I made the terrible mistake of watching TheNewBoston video's. -> -> So after like a week I gave up on Java and went to pick up a book on Python instead. That book was Learn Python The Hard Way by Zed A. Shaw. 
After I finished that at the fast pace of two weeks, I picked up the book C++ Primer, because at the time I wanted to become a game developer. Went trough about half of the book (~500 pages) and burned out on learning. At that point I was spending a sickening amount of time behind my computer. -> -> After taking a bit of a break, I decided to pick up JavaScript. Read like 2 books, made like 4 different platformers and called it a day. -> -> Now we're arriving at the present. I had to go through the horrendous process of finding a school and deciding what job I wanted to strive for when I graduated. I ruled out anything in the gaming sector as I didn't want anything to do with graphics programming anymore, I also got completely sick of drawing and modelling. And I found this bachelor that had something to do with netsec and I instantly fell in love. I picked up a couple books on C to shred this vacation period and brushed up on some maths and I'm now waiting for the new school year to commence. -> -> Right now, I am having loads of fun with Arch Linux, made couple of different arrangements on different PC's and it's going great! -> -> In a sense Linux is what also got me into programming and ultimately into what I'm going to study in college starting this september. I probably have my future life to thank for it." -> -> **Linuxllc**: "You also can learn from old farts like me. -> -> The crutch, The crutch, The crutch. Getting rid of the crutch will inspired you and have good reason to stick with Linux. -> -> I got rid of my crutch(Windows XP) back in 2003. Took me only 5 days to get all my computer task back and running at a 100% workflow. Including all my peripheral devices. Minus any Windows games. I just play native Linux games." -> -> **Highclass**: "Hey I'm 28 not sure if this is the age group you are looking for. 
-> -> To be honest, I was always interested in computers and the thought of a free operating system was intriguing even though at the time I didn't fully grasp the free software philosophy, to me it was free as in no cost. I also did not find the CLI too intimidating as from an early age I had exposure to DOS. -> -> I believe my first distro was Mandrake, I was 11 or 12, I messed up the family computer on several occasions.... I ended up sticking with it always trying to push myself to the next level. Now I work in the industry with Linux everyday. -> -> /shrug" -> -> **Matto**: "My computer couldn't run fast enough for XP (got it at a garage sale), so I started looking for alternatives. Ubuntu came up in Google. I was maybe 15 or 16 at the time. Now I'm 23 and have a job working on a product that uses Linux internally." -> -> [More at Reddit][2] - -### IBM's Linux-only Mainframe ### - -IBM has a long history with Linux, and now the company has created a mainframe that features Ubuntu Linux. The new machine is named LinuxONE. - -Ron Miller reports for TechCrunch: - -> The new mainframes come in two flavors, named for penguins (Linux — penguins — get it?). The first is called Emperor and runs on the IBM z13, which we wrote about in January. The other is a smaller mainframe called the Rockhopper designed for a more “entry level” mainframe buyer. -> -> You may have thought that mainframes went the way of the dinosaur, but they are still alive and well and running in large institutions throughout the world. IBM as part of its broader strategy to promote the cloud, analytics and security is hoping to expand the potential market for mainframes by running Ubuntu Linux and supporting a range of popular open source enterprise software such as Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL and Chef.
-> -> The metered mainframe will still sit inside the customer’s on-premises data center, but billing will be based on how much the customer uses the system, much like a cloud model, Mauri explained. -> -> ...IBM is looking for ways to increase those sales. Partnering with Canonical and encouraging use of open source tools on a mainframe gives the company a new way to attract customers to a small, but lucrative market. -> -> [More at TechCrunch][3] - -### Why you should skip Windows 10 and opt for Linux ### - -Since Windows 10 has been released there has been quite a bit of media coverage about its potential to spy on users. ZDNet has listed some reasons why you should skip Windows 10 and opt for Linux instead on your computer. - -SJVN reports for ZDNet: - -> You can try to turn Windows 10's data-sharing ways off, but, bad news: Windows 10 will keep sharing some of your data with Microsoft anyway. There is an alternative: Desktop Linux. -> -> You can do a lot to keep Windows 10 from blabbing, but you can't always stop it from talking. Cortana, Windows 10's voice activated assistant, for example, will share some data with Microsoft, even when it's disabled. That data includes a persistent computer ID to identify your PC to Microsoft. -> -> So, if that gives you a privacy panic attack, you can either stick with your old operating system, which is likely Windows 7, or move to Linux. Eventually, when Windows 7 is no longer supported, if you want privacy you'll have no other viable choice but Linux. -> -> There are other, more obscure desktop operating systems that are also desktop-based and private. These include the BSD Unix family such as FreeBSD, PCBSD, and NetBSD and eComStation, OS/2 for the 21st century. Your best choice, though, is a desktop-based Linux with a low learning curve. 
-> -> [More at ZDNet][4] - --------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.html - -作者:[Jim Lynch][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Jim-Lynch/ -[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ -[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ -[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/ -[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/ diff --git a/sources/talk/20150921 14 tips for teaching open source development.md b/sources/talk/20150921 14 tips for teaching open source development.md deleted file mode 100644 index bf8212da70..0000000000 --- a/sources/talk/20150921 14 tips for teaching open source development.md +++ /dev/null @@ -1,73 +0,0 @@ -icybreaker translating... -14 tips for teaching open source development -================================================================================ -Academia is an excellent platform for training and preparing the open source developers of tomorrow. In research, we occasionally open source software we write. We do this for two reasons. One, to promote the use of the tools we produce. And two, to learn more about the impact and issues other people face when using them. With this background of writing research software, I was tasked with redesigning the undergraduate software engineering course for second-year students at the University of Bradford. - -It was a challenge, as I was faced with 80 students coming for different degrees, including IT, business computing, and software engineering, all in the same course. 
The hardest part was working with students with a wide range of programming experience levels. Traditionally, the course had involved allowing students to choose their own teams, tasking them with building a garage database system and then submitting a report at the end as part of the assessment. - -I decided to redesign the course to give students insight into the process of working on real-world software teams. I divided the students into teams of five or six, based on their degrees and programming skills. The aim was to have an equal distribution of skills across the teams to prevent any unfair advantage of one team over another. - -### The core lessons ### - -The course format was updated to have both lectures and lab sessions. However, the lab sessions functioned as mentoring sessions, where instructors visited each team to ask for updates and see how the teams were progressing with the clients and the products. There were traditional lectures on project management, software testing, requirements engineering, and similar topics, supplemented by lab sessions and mentor meetings. These meetings allowed us to check up on students' progress and monitor whether they were following the software engineering methodologies taught in the lecture portion. Topics we taught this year included: - -- Requirements engineering -- How to interact with clients and other team members -- Software methodologies, such as agile and extreme programming approaches -- How to use different software engineering approaches and work through sprints -- Team meetings and documentation -- Project management and Gantt charts -- UML diagrams and system descriptions -- Revision control using Git -- Software testing and bug tracking -- Using open source libraries for their tools -- Open source licenses and which one to use -- Software delivery - -Along with these lectures, we had a few guest speakers from the corporate world talk about their practices in software product delivery.
We also managed to get the university’s intellectual property lawyer to come and talk about IP issues surrounding software in the UK, and how to handle any intellectual property issues in software. - -### Collaboration tools ### - -To make all of the above possible, a number of tools were introduced. Students were trained on how to use them for their projects. These included: - -- Google Drive folders shared within the team and the tutor, to maintain documents and spreadsheets for project descriptions, requirements gathering, meeting minutes, and time tracking of the project. This was an extremely efficient way to monitor and also provide feedback straight into the folders for each team. -- [Basecamp][1] for document sharing as well, and later in the course we considered this as a possible replacement for Google Drive. -- Bug reporting tools such as [Mantis][2], which again allows only a limited number of users on its free tier. Later in the course, the testers in the teams used Git itself to track bugs. -- Remote videoconferencing tools were used as a number of clients were off-campus, and sometimes not even in the same city. The students were regularly using Skype to communicate with them, documenting their meetings and sometimes even recording them for later use. -- A number of open source tool kits were also used for students' projects. The students were allowed to choose their own tool kits and languages based on the requirements of the projects. The only condition was that these had to be open source and could be installed in the university labs, which the technical staff was extremely supportive of. -- In the end all teams had to deliver their projects to the client, including a complete working version of the software, documentation, and open source licenses of their own choosing. Most of the teams chose the GPL version 3 license. - -### Tips and lessons learned ### - -In the end, it was a fun year and nearly all students did very well.
Here are some of the lessons I learned which may help improve the course next year: - -1. Give the students a wide variety of choices in projects that are interesting, such as game development or mobile application development, and projects with clear goals. Working with mundane database systems is not going to keep most students interested. When working on interesting projects, most students became self-learners, and were also helping others in their teams and outside to solve some common issues. The course also had a message list, where students were posting any issues they were encountering, in hopes of receiving advice from others. However, there was a drawback to this approach. The external examiners advised us to go back to a style of one type of project, and one type of language, to help narrow the assessment criteria for the students. -1. Give students regular feedback on their performance at every stage. This could be done during the mentoring meetings with the teams, or at other stages, to help them improve the work for next time. -1. Students are more than willing to work with clients from outside the university! They look forward to working with external company representatives or people outside the university, just because of the new experience. They were all able to display professional behavior when interacting with their mentors, which put the instructors at ease. -1. A lot of teams left developing unit testing until the end of the project, which from an extreme programming methodology standpoint was a serious no-no. Maybe testing should be included in the assessments at the various stages to help remind students that they need to be developing unit tests in parallel with the software. -1. In the class of 80, there were only four girls, each working in different teams.
I observed that boys were very ready to take on roles as team leads, assigning the most interesting code pieces to themselves, while the girls were mostly following instructions or doing documentation. For some reason, the girls chose not to show authority or preferred not to code even when they were encouraged by a female instructor. This is still a major issue that needs to be addressed. -1. There are different styles of documentation such as using UML, state diagrams, and others. Allow students to learn them all and merge with other courses during the year to improve their learning experience. -1. Some students were very good developers, but some doing business computing had very little coding experience. The teams were encouraged to work together to prevent the idea that a developer would get better marks than other team members who were only doing meeting minutes or documentation. Roles were also encouraged to be rotated during mentoring sessions to see that everyone was getting a chance to learn how to program. -1. Allowing the team to meet with the mentor every week was helpful in monitoring team activities. It also showed who was doing the most work. Usually students who were not participating in their groups would not come to meetings, and could be identified by the work being presented by other members every week. -1. We encouraged students to attach licenses to their work and identify intellectual property issues when working with external libraries and clients. This allowed students to think out of the box and learn about real-world software delivery problems. -1. Give students room to choose their own technologies. -1. Having teaching assistants is key. Managing 80 students was very difficult, especially on the weeks when they were being assessed. Next year I would definitely have teaching assistants helping me with the teams. -1. Supportive tech support for the lab is very important.
The university tech support was extremely supportive of the course. Next year, they are talking about having virtual machines assigned to teams, so the teams can install any software on their own virtual machine as needed. -1. Teamwork helps. Most teams exhibited a supportive nature to other team members, and mentoring also helped. -1. Additional support from other staff members is a plus. As a new academic, I needed to learn from experience and also seek advice at multiple points on how to handle certain students and teams if I was confused on how to engage them with the course. Support from senior staff members was very encouraging to me. - -In the end, it was a fun course—not only for me as an instructor, but for the students as well. There were some issues with learning objectives and traditional grading schemes that still need to be ironed out to reduce the workload they place on the instructors. For next year, I plan to keep this same format, but hope to come up with a better grading scheme and introduce more software tools that can help monitor project activities and code revisions.
- 
-------------------------------------------------------------------------------- - -via: http://opensource.com/education/15/9/teaching-open-source-development-undergraduates - -作者:[Mariam Kiran][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://opensource.com/users/mariamkiran -[1]:https://basecamp.com/ -[2]:https://www.mantisbt.org/ diff --git a/sources/talk/20151020 18 Years of GNOME Design and Software Evolution--Step by Step.md b/sources/talk/20151020 18 Years of GNOME Design and Software Evolution--Step by Step.md deleted file mode 100644 index 174fc55262..0000000000 --- a/sources/talk/20151020 18 Years of GNOME Design and Software Evolution--Step by Step.md +++ /dev/null @@ -1,199 +0,0 @@ -18 Years of GNOME Design and Software Evolution: Step by Step -================================================================================ -注:youtube 视频 - - -[GNOME][1] (GNU Network Object Model Environment) was started on August 15th, 1997 by two Mexican programmers, Miguel de Icaza and Federico Mena. GNOME is a free software project that develops a desktop environment and applications, driven by volunteers and paid full-time developers. The entire GNOME desktop environment is open source software and supports Linux, FreeBSD, OpenBSD, and other systems.
- -Now we move to 1997 and see the first version of GNOME: - -### GNOME 1 ### - -![GNOME 1.0 - First major GNOME release](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.0/gnome.png) - -**GNOME 1.0** (1997) – First major GNOME release - -![GNOME 1.2 Bongo](https://raw.githubusercontent.com/paulcarroty/Articles/master/GNOME_History/1.2/1361441938.or.86429.png) - -**GNOME 1.2** “Bongo”, 2000 - -![GNOME 1.4 Tranquility](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.4/1.png) - -**GNOME 1.4** “Tranquility”, 2001 - -### GNOME 2 ### - -![GNOME 2.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.0/1.png) - -**GNOME 2.0**, 2002 - -Major upgrade based on GTK+2. Introduction of the Human Interface Guidelines. - -![GNOME 2.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.2/GNOME_2.2_catala.png) - -**GNOME 2.2**, 2003 - -Multimedia and file manager improvements. - -![GNOME 2.4 Temujin](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.4/gnome-desktop.png) - -**GNOME 2.4** “Temujin”, 2003 - -First release of Epiphany Browser, accessibility support. - -![GNOME 2.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.6/Adam_Hooper.png) - -**GNOME 2.6**, 2004 - -Nautilus changes to a spatial file manager, and a new GTK+ file dialog is introduced. A short-lived fork of GNOME, GoneME, is created as a response to the changes in this version. - -![GNOME 2.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.8/3.png) - -**GNOME 2.8**, 2004 - -Improved removable device support, adds Evolution - -![GNOME 2.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.10/GNOME-Screenshot-2.10-FC4.png) - -**GNOME 2.10**, 2005 - -Lower memory requirements and performance improvements. Adds: new panel applets (modem control, drive mounter and trashcan); and the Totem and Sound Juicer applications. 
- -![GNOME 2.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.12/gnome-livecd.jpg) - -**GNOME 2.12**, 2005 - -Nautilus improvements; improvements in cut/paste between applications and freedesktop.org integration. Adds: Evince PDF viewer; New default theme: Clearlooks; menu editor; keyring manager and admin tools. Based on GTK+ 2.8 with cairo support - -![GNOME 2.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.14/debian4-stable.jpg) - -**GNOME 2.14**, 2006 - -Performance improvements (over 100% in some cases); usability improvements in user preferences; GStreamer 0.10 multimedia framework. Adds: Ekiga video conferencing application; Deskbar search tool; Pessulus lockdown editor; Fast user switching; Sabayon system administration tool. - -![GNOME 2.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.16/Gnome-2.16-screenshot.png) - -**GNOME 2.16**, 2006 - -Performance improvements. Adds: Tomboy notetaking application; Baobab disk usage analyser; Orca screen reader; GNOME Power Manager (improving laptop battery life); improvements to Totem, Nautilus; compositing support for Metacity; new icon theme. Based on GTK+ 2.10 with new print dialog - -![GNOME 2.18](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.18/Gnome-2.18.1.png) - -**GNOME 2.18**, 2007 - -Performance improvements. Adds: Seahorse GPG security application, allowing encryption of emails and local files; Baobab disk usage analyser improved to support ring chart view; Orca screen reader; improvements to Evince, Epiphany and GNOME Power Manager, Volume control; two new games, GNOME Sudoku and glChess. MP3 and AAC audio encoding. - -![GNOME 2.20](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.20/rnintroduction-screenshot.png) - -**GNOME 2.20**, 2007 - -Tenth anniversary release. Evolution backup functionality; improvements in Epiphany, EOG, GNOME Power Manager; password keyring management in Seahorse. 
Adds: PDF forms editing in Evince; integrated search in the file manager dialogs; automatic multimedia codec installer. - -![GNOME 2.22, 2008](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.22/GNOME-2-22-2-Released-2.png) - -**GNOME 2.22**, 2008 - -Addition of Cheese, a tool for taking photos from webcams and Remote Desktop Viewer; basic window compositing support in Metacity; introduction of GVFS; improved playback support for DVDs and YouTube, MythTV support in Totem; internationalised clock applet; Google Calendar support and message tagging in Evolution; improvements in Evince, Tomboy, Sound Juicer and Calculator. - -![GNOME 2.24](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.24/gnome-224.jpg) - -**GNOME 2.24**, 2008 - -Addition of the Empathy instant messenger client, Ekiga 3.0, tabbed browsing in Nautilus, better multiple screens support and improved digital TV support. - -![GNOME 2.26](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.26/gnome226-large_001.jpg) - -**GNOME 2.26**, 2009 - -New optical disc recording application Brasero, simpler file sharing, media player improvements, support for multiple monitors and fingerprint reader support. - -![GNOME 2.28](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.28/1.png) - -**GNOME 2.28**, 2009 - -Addition of GNOME Bluetooth module. Improvements to Epiphany web browser, Empathy instant messenger client, Time Tracker, and accessibility. Upgrade to GTK+ version 2.18. - -![GNOME 2.30](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.30/GNOME2.30.png) - -**GNOME 2.30**, 2010 - -Improvements to Nautilus file manager, Empathy instant messenger client, Tomboy, Evince, Time Tracker, Epiphany, and Vinagre. iPod and iPod Touch devices are now partially supported via GVFS through libimobiledevice. Uses GTK+ 2.20. 
- -![GNOME 2.32](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.32/gnome-2-32.png.en_GB.png) - -**GNOME 2.32**, 2010 - -Addition of Rygel and GNOME Color Manager. Improvements to Empathy instant messenger client, Evince, Nautilus file manager and others. 3.0 was intended to be released in September 2010, so a large part of the development effort since 2.30 went towards 3.0. - -### GNOME 3 ### - -![GNOME 3.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.0/chat-3-0.png) - -**GNOME 3.0**, 2011 - -Introduction of GNOME Shell. A redesigned settings framework with fewer, more focused options. Topic-oriented help based on the Mallard markup language. Side-by-side window tiling. A new visual theme and default font. Adoption of GTK+ 3.0 with its improved language bindings, themes, touch, and multiplatform support. Removal of long-deprecated development APIs.[73] - -![GNOME 3.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.2/gdm.png) - -**GNOME 3.2**, 2011 - -Online accounts support; Web applications support; contacts manager; documents and files manager; quick preview of files in the File Manager; greater integration; better documentation; enhanced looks and various performance improvements. - -![GNOME 3.4](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.4/application-view.png) - -**GNOME 3.4**, 2012 - -New Look for GNOME 3 Applications: Documents, Epiphany (now called Web), and GNOME Contacts. Search for documents from the Activities overview. Application menus support. Refreshed interface components: New color picker, redesigned scrollbars, easier to use spin buttons, and hideable title bars. Smooth scrolling support. New animated backgrounds. Improved system settings with new Wacom panel. Easier extensions management. Better hardware support. Topic-oriented documentation. Video calling and Live Messenger support in Empathy. 
Better accessibility: Improved Orca integration, better high contrast mode, and new zoom settings. Plus many other application enhancements and smaller details. - -![GNOME 3.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.6/gnome-3-6.png) - -**GNOME 3.6**, 2012 - -Refreshed Core components: New applications button and improved layout in the Activities Overview. A new login and lock screen. Redesigned Message Tray. Notifications are now smarter, more noticeable, easier to dismiss. Improved interface and settings for System Settings. The user menu now shows Power Off by default. Integrated Input Methods. Accessibility is always on. New applications: Boxes, that was introduced as a preview version in GNOME 3.4, and Clocks, an application to handle world times. Updated looks for Disk Usage Analyzer, Empathy and Font Viewer. Improved braille support in Orca. In Web, the previously blank start page was replaced by a grid that holds your most visited pages, plus better full screen mode and a beta of WebKit2. Evolution renders email using WebKit. Major improvements to Disks. Revamped Files application (also known as Nautilus), with new features like Recent files and search. - -![GNOME 3.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.8/applications-view.png) - -**GNOME 3.8**, 2013 - -Refreshed Core components: A new applications view with frequently used and all apps. An overhauled window layout. New input methods OSD switcher. The Notifications & Messaging tray now react to the force with which the pointer is pressed against the screen edge. Added Classic mode for those who prefer a more traditional desktop experience. The GNOME Settings application features an updated toolbar design. New Initial Setup assistant. GNOME Online Accounts integrates with more services. Web has been upgraded to use the WebKit2 engine. Web has a new private browsing mode. Documents has gained a new dual page mode & Google Documents integration. 
Improved user interface of Contacts. GNOME Files, GNOME Boxes and GNOME Disks have received a number of improvements. Integration of ownCloud. New GNOME Core Applications: GNOME Clocks and GNOME Weather. - -![GNOME 3.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.10/GNOME-3-10-Release-Schedule-2.png) - -**GNOME 3.10**, 2013 - -A reworked system status area, which gives a more focused overview of the system. A collection of new applications, including GNOME Maps, GNOME Notes, GNOME Music and GNOME Photos. New geolocation features, such as automatic time zones and world clocks. HiDPI support[75] and smart card support. D-Bus activation made possible with GLib 2.38 - -![GNOME 3.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.12/app-folders.png) - -**GNOME 3.12**, 2014 - -Improved keyboard navigation and window selection in the Overview. Revamped first set-up utility based on usability tests. Wired networking re-added to the system status area. Customizable application folders in the Applications view. Introduction of new GTK+ widgets such as popovers in many applications. New tab style in GTK+. GNOME Videos GNOME Terminal and gedit were given a fresh look, more consistent with the HIG. A search provider for the terminal emulator is included in GNOME Shell. Improvements to GNOME Software and high-density display support. A new sound recorder application. New desktop notifications API. Progress in the Wayland port has reached a usable state that can be optionally previewed. - -![GNOME 3.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.14/Top-Features-of-GNOME-3-14-Gallery-459893-2.jpg) - -**GNOME 3.14**, 2014 - -Improved desktop environment animations. Improved touchscreen support. GNOME Software supports managing installed add-ons. GNOME Photos adds support for Google. Redesigned UI for Evince, Sudoku, Mines and Weather. Hitori is added as part of GNOME Games. 
- -![GNOME 3.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.16/preview-apps.png) - -**GNOME 3.16**, 2015 - -33,000 changes. Major changes include a UI color scheme that goes from black to charcoal. Overlay scroll bars added. Improvements to notifications, including integration with the Calendar applet. Tweaks to various apps, including Files, Image Viewer, and Maps. Access to Preview Apps. Continued porting from X11 to Wayland. - -Thanks to [Wikipedia][2] for the short changelog summaries, and another big thanks to the GNOME Project! Stay tuned! - - -------------------------------------------------------------------------------- - -via: https://tlhp.cf/18-years-of-gnome-evolution/ - -作者:[Pavlo Rudyi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://tlhp.cf/author/paul/ -[1]:https://www.gnome.org/ -[2]:https://en.wikipedia.org/wiki/GNOME \ No newline at end of file diff --git a/sources/talk/20151020 30 Years of Free Software Foundation--Best Quotes of Richard Stallman.md b/sources/talk/20151020 30 Years of Free Software Foundation--Best Quotes of Richard Stallman.md index af8bb311db..e7dbefff62 100644 --- a/sources/talk/20151020 30 Years of Free Software Foundation--Best Quotes of Richard Stallman.md +++ b/sources/talk/20151020 30 Years of Free Software Foundation--Best Quotes of Richard Stallman.md @@ -1,3 +1,5 @@ +For my dear RMS + 30 Years of Free Software Foundation: Best Quotes of Richard Stallman ================================================================================ 注:youtube 视频 @@ -167,4 +169,4 @@ via: https://tlhp.cf/fsf-richard-stallman/ [a]:https://tlhp.cf/author/paul/ [1]:http://www.gnu.org/ -[2]:http://www.fsf.org/ \ No newline at end of file +[2]:http://www.fsf.org/ diff --git a/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md b/sources/talk/20151105 Linus
Torvalds Lambasts Open Source Programmers over Insecure Code.md deleted file mode 100644 index 1e37549646..0000000000 --- a/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md +++ /dev/null @@ -1,35 +0,0 @@ -Linus Torvalds Lambasts Open Source Programmers over Insecure Code -================================================================================ -![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg) - -Linus Torvalds's latest rant underscores the high expectations the Linux developer places on open source programmers—as well the importance of security for Linux kernel code. - -Torvalds is the unofficial "benevolent dictator" of the Linux kernel project. That means he gets to decide which code contributions go into the kernel, and which ones land in the reject pile. - -On Oct. 28, open source coders whose work did not meet Torvalds's expectations faced an [angry rant][1]. "Christ people," Torvalds wrote about the code. "This is just sh*t." - -He went on to call the coders "just incompetent and out to lunch." - -What made Torvalds so angry? He believed the code could have been written more efficiently. It could have been easier for other programmers to understand and would run better through a compiler, the program that translates human-readable code into the binaries that computers understand. - -Torvalds posted his own substitution for the code in question and suggested that the programmers should have written it his way. - -Torvalds has a history of lashing out against people with whom he disagrees. It stretches back to 1991, when he famously [flamed Andrew Tanenbaum][2]—whose Minix operating system he later described as a series of "brain-damages." No doubt this latest criticism of fellow open source coders will go down as another example of Torvalds's confrontational personality. 
- -But Torvalds may also have been acting strategically during this latest rant. "I want to make it clear to *everybody* that code like this is completely unacceptable," he wrote, suggesting that his goal was to send a message to all Linux programmers, not just vent his anger at particular ones. - -Torvalds also used the incident as an opportunity to highlight the security concerns that arise from poorly written code. Those are issues dear to open source programmers' hearts in an age when enterprises are finally taking software security seriously, and demanding top-notch performance from their code in this regard. Lambasting open source programmers who write insecure code thus helps Linux's image. - --------------------------------------------------------------------------------- - -via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse - -作者:[Christopher Tozzi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://thevarguy.com/author/christopher-tozzi -[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html -[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate \ No newline at end of file diff --git a/sources/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md b/sources/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md new file mode 100644 index 0000000000..c9d08b73bf --- /dev/null +++ b/sources/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md @@ -0,0 +1,53 @@ +GHLandy Translating + +Linux Foundation Explains a "World without Linux" and Open Source +================================================================================ +> The Linux Foundation responds to questions about its "World without Linux" movies, including what the 
Internet would be like without Linux and other open source software. + +![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/hey_22.png) + +Would the world really be tremendously different if Linux, the open source operating system kernel, did not exist? Would there be no Internet or movies? Those are the questions some viewers of the [Linux Foundation's][1] ongoing "[World without Linux][2]" video series are asking. Here are some answers. + +In case you've missed it, the "World without Linux" series is a collection of quirky short films that depict, well, a world without Linux (and open source software more generally). They have emphasized themes like [Linux's role in movie-making][3] and in [serving the Internet][4]. + +To offer perspective on the series's claims, direction and hidden symbols, Jennifer Cloer, vice president of communications at The Linux Foundation, recently sent The VAR Guy responses to some common queries about the movies. Below are the answers, in her own words. + +### The latest episode takes Sam and Annie to the movies. Would today's graphics really be that much different without Linux? ### + +In episode #4, we do a bit of a parody on "Avatar." Love it or hate it, the graphics in the real "Avatar" are pretty impressive. In a world without Linux, the graphics would be horrible but we wouldn't even know it because we wouldn't know any better. But in fact, "Avatar" was created using Linux. Weta Digital used one of the world's largest Linux clusters to render the film and do 3D modeling. It's also been reported that "Lord of the Rings," "Fantastic Four" and "King Kong," among others, have used Linux. We hope this episode can bring attention to that work, which hasn't been widely reported. + +### Some people criticized the original episode for concluding there would be no Internet without Linux. What's your reaction? ### + +We enjoyed the debate that resulted from the debut episode. 
With more than 100,000 views to date of that episode alone, it brought awareness to the role that Linux plays in society and to the worldwide community of contributors and supporters. Of course, the Internet would exist without Linux, but it wouldn't be the Internet we know today, and it wouldn't have matured at the pace it has. Each episode makes a bold and fun statement about Linux's role in our everyday lives. We hope this can help extend the story of Linux to more people around the world. + +### Why is Sam and Annie's cat named String? ### + +Nothing in the series is a coincidence. Look closely and you'll find all kinds of inside jokes for Linux fans and geeks. String is named after string theory and was named by our Linux.com Editor Libby Clark. In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. Kind of like Sam, Annie and String in a World Without Linux. + +### What can we expect from the next two episodes and, in particular, the finale? When will it air? ### + +In episode #5, we'll go to space and experience what a world without Linux would mean to exploration. It's a wild ride. In the finale, we finally get to see Linus in a world without Linux. There have been clues throughout the series as to what this finale will include, but I can't give more than that away since there are ongoing contests to find the clues. And I can't give away the air date for the finale! You'll have to follow #WorldWithoutLinux to learn more. + +### Can you give us a hint on the clues in episode #4? ### + +There is another reference to the Free Burger Restaurant in this episode. Linux also actually does appear in this world without Linux, but in a very covert way; you could say it's like reading Linux in another language. And, of course, just for fun, String makes another appearance.
+ +### Is the series achieving what you hoped? ### + +Yes. We're really happy to see people share and engage with these stories. We hope that it's reaching people who might not otherwise know the story of Linux or understand its pervasiveness in the world today. It's really about surfacing this to a broader audience and giving thanks to the worldwide community of developers and companies that support Linux and all the things it makes possible. + +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/linux-foundation-explains-world-without-linux-and-open-so + +作者:[Christopher Tozzi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://linuxfoundation.org/ +[2]:http://www.linuxfoundation.org/world-without-linux +[3]:http://thevarguy.com/open-source-application-software-companies/new-linux-foundation-video-highlights-role-open-source-3d +[4]:http://thevarguy.com/open-source-application-software-companies/100715/would-internet-exist-without-linux-yes-without-open-sourc diff --git a/sources/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md b/sources/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md new file mode 100644 index 0000000000..92705b4b5c --- /dev/null +++ b/sources/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md @@ -0,0 +1,77 @@ +Microsoft and Linux: True Romance or Toxic Love? +================================================================================ +Every now and then, you come across a news story that makes you choke on your coffee or splutter hot latte all over your monitor. Microsoft's recent proclamations of love for Linux are an outstanding example of such a story.
+ +Common sense says that Microsoft and the FOSS movement should be perpetual enemies. In the eyes of many, Microsoft embodies most of the greedy excesses that the Free Software movement rejects. In addition, Microsoft previously has labeled Linux as a cancer and the FOSS community as a "pack of thieves". + +We can understand why Microsoft has been afraid of a free operating system. When combined with open-source applications that challenge Microsoft's core line, it threatens Microsoft's grip on the desktop/laptop market. + +In spite of Microsoft's fears over its desktop dominance, the Web server marketplace is one arena where Linux has had the greatest impact. Today, the majority of Web servers are Linux boxes. This includes most of the world's busiest sites. The sight of so much unclaimed licensing revenue must be painful indeed for Microsoft. + +Handheld devices are another realm where Microsoft has lost ground to free software. At one point, its Windows CE and Pocket PC operating systems were at the forefront of mobile computing. Windows-powered PDA devices were the shiniest and flashiest gadgets around. But, that all ended when Apple released its iPhone. Since then, Android has stepped into the limelight, with Windows Mobile largely ignored and forgotten. The Android platform is built on free and open-source components. + +The rapid expansion in Android's market share is due to the open nature of the platform. Unlike with iOS, any phone manufacturer can release an Android handset. And, unlike with Windows Mobile, there are no licensing fees. This has been really good news for consumers. It has led to lots of powerful and cheap handsets appearing from manufacturers all over the world. It's a very definite vindication of the value of FOSS software. + +Losing the battle for the Web and mobile computing is a brutal loss for Microsoft. When you consider the size of those two markets combined, the desktop market seems like a stagnant backwater. 
Nobody likes to lose, especially when money is on the line. And, Microsoft does have a lot to lose. You would expect Microsoft to be bitter about it. And in the past, it has been. + +Microsoft has fought back against Linux and FOSS using every weapon at its disposal, from propaganda to patent threats, and although these attacks have slowed the adoption of Linux, they haven't stopped it. + +So, you can forgive us for being shocked when Microsoft starts handing out t-shirts and badges that say "Microsoft Loves Linux" at open-source conferences and events. Could it be true? Does Microsoft really love Linux? + +Of course, PR slogans and free t-shirts do not equal truth. Actions speak louder than words. And when you consider Microsoft's actions, Microsoft's stance becomes a little more ambiguous. + +On the one hand, Microsoft is recruiting hundreds of Linux developers and sysadmins. It's releasing its .NET Core framework as an open-source project with cross-platform support (so that .NET apps can run on OS X and Linux). And, it is partnering with Linux companies to bring popular distros to its Azure platform. In fact, Microsoft even has gone so far as to create its own Linux distro for its Azure data center. + +On the other hand, Microsoft continues to launch legal attacks on open-source projects directly and through puppet corporations. It's clear that Microsoft hasn't had some big moral change of heart over proprietary vs. free software, so why the public declarations of adoration? + +To state the obvious, Microsoft is a profit-making entity. It's an investment vehicle for its shareholders and a source of income for its employees. Everything it does has a single ultimate goal: revenue. Microsoft doesn't act out of love or even hate (although that's a common accusation). + +So the question shouldn't be "does Microsoft really love Linux?" Instead, we should ask how Microsoft is going to profit from all this. + +Let's take the open-source release of .NET Core. 
This move makes it easy to port the .NET runtime to any platform. That extends the reach of Microsoft's .NET framework far beyond the Windows platform. + +Opening .NET Core ultimately will make it possible for .NET developers to produce cross-platform apps for OS X, Linux, iOS and even Android--all from a single codebase. + +From a developer's perspective, this makes the .NET framework much more attractive than before. Being able to reach many platforms from a single codebase dramatically increases the potential target market for any app developed using the .NET framework. + +What's more, a strong Open Source community would provide developers with lots of code to reuse in their own projects. So, the availability of open-source projects would make the .NET framework even more appealing to developers. + +On the plus side, opening .NET Core reduces fragmentation across different platforms and means a wider choice of apps for consumers. That means more choice, both in terms of open-source software and proprietary apps. + +From Microsoft's point of view, it would gain a huge army of developers. Microsoft profits by selling training, certification, technical support, development tools (including Visual Studio) and proprietary extensions. + +The question we should ask ourselves is: does this benefit or hurt the Free Software community? + +Widespread adoption of the .NET framework could mean the eventual death of competing open-source projects, forcing us all to dance to Microsoft's tune. + +Moving beyond .NET, Microsoft is drawing a lot of attention to its Linux support on its Azure cloud computing platform. Remember, Azure originally was Windows Azure. That's because Windows Server was the only supported operating system. Today, Azure offers support for a number of Linux distros too. + +There's one reason for this: paying customers who need and want Linux services. If Microsoft didn't offer Linux virtual machines, those customers would do business with someone else.
+ +It looks like Microsoft is waking up to the fact that Linux is here to stay. Microsoft cannot feasibly wipe it out, so it has to embrace it. + +This brings us back to the question of why there is so much buzz about Microsoft and Linux. We're all talking about it, because Microsoft wants us to think about it. After all, all these stories trace back to Microsoft, whether it's through press releases, blog posts or public announcements at conferences. The company is working hard to draw attention to its Linux expertise. + +What other possible purpose could be behind Chief Architect Kamala Subramaniam's blog post announcing Azure Cloud Switch? ACS is a custom Linux distro that Microsoft uses to automate the configuration of its switch hardware in the Azure data centers. + +ACS is not publicly available. It's intended for internal use in the Azure data center, and it's unlikely that anyone else would be able to find a use for it. In fact, Subramaniam states the same thing herself in her post. + +So, Microsoft won't be making any money from selling ACS, and it won't attract a user base by giving it away. Instead, Microsoft gets to draw attention to Linux and Azure, strengthening its position as a Linux cloud computing platform. + +Is Microsoft's new-found love for Linux good news for the community? + +We shouldn't forget Microsoft's old mantra of Embrace, Extend and Extinguish. Right now, Microsoft is very much in the early stages of embracing Linux. Will Microsoft seek to splinter the community through custom extensions and proprietary "standards"? + +Let us know what you think in the comments below.
+ +-------------------------------------------------------------------------------- + +via: http://www.linuxjournal.com/content/microsoft-and-linux-true-romance-or-toxic-love-0 + +作者:[James Darvell][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxjournal.com/users/james-darvell \ No newline at end of file diff --git a/sources/talk/20151201 Cinnamon 2.8 Review.md b/sources/talk/20151201 Cinnamon 2.8 Review.md new file mode 100644 index 0000000000..0c44eba14f --- /dev/null +++ b/sources/talk/20151201 Cinnamon 2.8 Review.md @@ -0,0 +1,87 @@ +Cinnamon 2.8 Review +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2-8-featured.jpg) + +Other than Gnome and KDE, Cinnamon is another desktop environment that is used by many people. It is made by the same team that produces Linux Mint (and ships with Linux Mint) and can also be installed on several other distributions. The latest version of this DE – Cinnamon 2.8 – was released earlier this month, and it brings a host of bug fixes and improvements as well as some new features. + +I’m going to go over the major improvements made in this release as well as how to update to Cinnamon 2.8 or install it for the first time. + +### Improvements to Applets ### + +There are several improvements to already existing applets for the panel. + +#### Sound Applet #### + +![cinnamon-28-sound-applet](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-sound-applet.jpg) + +The Sound applet was revamped and now displays track information as well as the media controls on top of the cover art of the audio file. For music players with seeking support (such as Banshee), a progress bar will be displayed in the same region which you can use to change the position of the audio track. 
Right-clicking on the applet in the panel will display the options to mute input and output devices. + +#### Power Applet #### + +The Power applet now displays the status of each of the connected batteries and devices using the manufacturer’s data instead of generic names. + +#### Window Thumbnails #### + +![cinnamon-2.8-window-thumbnails](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-window-thumbnails.png) + +Cinnamon 2.8 brings the option to show window thumbnails when hovering over the window list in the panel. You can turn it off if you don’t like it, though. + +#### Workspace Switcher Applet #### + +![cinnamon-2.8-workspace-switcher](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-workspace-switcher.png) + +Adding the Workspace switcher applet to your panel will show you a visual representation of your workspaces with little rectangles embedded inside to show the position of your windows. + +#### System Tray #### + +Cinnamon 2.8 brings support for app indicators in the system tray. You can easily disable this in the settings which will force affected apps to fall back to using status icons instead. + +### Visual Improvements ### + +A host of visual improvements were made in Cinnamon 2.8. The classic and preview Alt + Tab switchers were polished with noticeable improvements, while the Alt + F2 dialog received bug fixes and better auto completion for commands. + +Also, the issue with the traditional animation effect for minimizing windows is now sorted and works with multiple panels. + +### Nemo Improvements ### + +![cinnamon-2.8-nemo](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-nemo.jpg) + +The default file manager for Cinnamon also received several bug fixes and has a new “Quick-rename” feature for renaming files and directories. This works by clicking the file or directory twice with a short pause in between to rename the files. 
+ +Nemo also detects issues with thumbnails automatically and prompts you to quickly fix them. + +### Other Notable Improvements ### + +- Applets now reload themselves automatically once they are updated. +- Support for multiple monitors was improved significantly. +- Dialog windows have been improved and now attach themselves to their parent windows. +- HiDPI detection has been improved. +- Qt 5 applications now look more native and use the default GTK theme. +- Window management and rendering performance has been improved. +- There are various bug fixes. + +### How to Get Cinnamon 2.8 ### + +If you’re running Linux Mint you will get Cinnamon 2.8 as part of the upgrade to Linux Mint 17.3 “Rosa” Cinnamon Edition. The BETA release is already out, so you can grab that if you’d like to get your hands on the new software immediately. + +For Arch users, Cinnamon 2.8 is already in the official Arch repositories, so you can just update your packages and do a system-wide upgrade to get the latest version. + +Finally, for Ubuntu users, you can install or upgrade to Cinnamon 2.8 by running the following commands in turn: + + sudo add-apt-repository -y ppa:moorkai/cinnamon + sudo apt-get update + sudo apt-get install cinnamon + +Have you tried Cinnamon 2.8? What do you think of it?
+ +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/cinnamon-2-8-review/ + +作者:[Ayo Isaiah][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/ayoisaiah/ \ No newline at end of file diff --git a/sources/talk/20151227 Upheaval in the Debian Live project.md b/sources/talk/20151227 Upheaval in the Debian Live project.md new file mode 100644 index 0000000000..d663d09b17 --- /dev/null +++ b/sources/talk/20151227 Upheaval in the Debian Live project.md @@ -0,0 +1,66 @@ +While the event had a certain amount of drama surrounding it, the [announcement][1] of the end for the [Debian Live project][2] seems likely to have less of an impact than it first appeared. The loss of the lead developer will certainly be felt—and the treatment he and the project received seems rather baffling—but the project looks like it will continue in some form. So Debian will still have tools to create live CDs and other media going forward, but what appears to be a long-simmering dispute between project founder and leader Daniel Baumann and the Debian CD and installer teams has been "resolved", albeit in an unfortunate fashion. + +The November 9 announcement from Baumann was titled "An abrupt End to Debian Live". In that message, he pointed to a number of different events over the nearly ten years since the [project was founded][3] that indicated to him that his efforts on Debian Live were not being valued, at least by some. The final straw, it seems, was an "intent to package" (ITP) bug [filed][4] by Iain R. Learmonth that impinged on the namespace used by Debian Live. + +Given that one of the main Debian Live packages is called "live-build", the new package's name, "live-build-ng", was fairly confrontational in and of itself. 
Live-build-ng is meant to be a wrapper around the [vmdebootstrap][5] tool for creating live media (CDs and USB sticks), which is precisely the role Debian Live is filling. But when Baumann [asked][6] Learmonth to choose a different name for his package, he got an "interesting" [reply][7]: + +``` +It is worth noting that live-build is not a Debian project, it is an external project that claims to be an official Debian project. This is something that needs to be fixed. +There is no namespace issue, we are building on the existing live-config and live-boot packages that are maintained and bringing these into Debian as native projects. If necessary, these will be forks, but I'm hoping that won't have to happen and that we can integrate these packages into Debian and continue development in a collaborative manner. +live-build has been deprecated by debian-cd, and live-build-ng is replacing it. In a purely Debian context at least, live-build is deprecated. live-build-ng is being developed in collaboration with debian-cd and D-I [Debian Installer]. +``` + +Whether or not Debian Live is an "official" Debian project (or even what "official" means in this context) has been disputed in the thread. Beyond that, though, Neil Williams (who is the maintainer of vmdebootstrap) [provided some][8] explanation for the switch away from Debian Live: + +``` +vmdebootstrap is being extended explicitly to provide support for a replacement for live-build. This work is happening within the debian-cd team to be able to solve the existing problems with live-build. These problems include reliability issues, lack of multiple architecture support and lack of UEFI support. vmdebootstrap has all of these, we do use support from live-boot and live-config as these are out of the scope for vmdebootstrap. +``` + +Those seem like legitimate complaints, but ones that could have been fixed within the existing project. 
Instead, though, something of a stealth project was evidently undertaken to replace live-build. As Baumann [pointed out][9], nothing was posted to the debian-live mailing list about the plans. The ITP was the first notice that anyone from the Debian Live project got about the plans, so it all looks like a "secret plan"—something that doesn't sit well in a project like Debian. + +As might be guessed, there were multiple postings that supported Baumann's request to rename "live-build-ng", followed by many that expressed dismay at his decision to stop working on Debian Live. But Learmonth and Williams were adamant that replacing live-build is needed. Learmonth did [rename][10] live-build-ng to a perhaps less confrontational name: live-wrapper. He noted that his aim had been to add the new tool to the Debian Live project (and "bring the Debian Live project into Debian"), but things did not play out that way. + +``` +I apologise to everyone that has been upset by the ITP bug. The software is not yet ready for use as a full replacement for live-build, and it was filed to let people know that the work was ongoing and to collect feedback. This sort of worked, but the feedback wasn't the kind I was looking for. +``` + +The backlash could perhaps have been foreseen. Communication is a key aspect of free-software communities, so a plan to replace the guts of a project seems likely to be controversial—more so if it is kept under wraps. For his part, Baumann has certainly not been perfect—he delayed the "wheezy" release by [uploading an unsuitable syslinux package][11] and [dropped down][12] from a Debian Developer to a Debian Maintainer shortly thereafter—but that doesn't mean he deserves this kind of treatment. There are others involved in the project as well, of course, so it is not just Baumann who is affected. + +One of those other people is Ben Armstrong, who has been something of a diplomat during the event and has tried to smooth the waters. 
He started with a [post][13] that celebrated the project and what Baumann and the team had accomplished over the years. As he noted, the [list of downstream projects][14] for Debian Live is quite impressive. In another post, he also [pointed out][15] that the project is not dead: + +``` +If the Debian CD team succeeds in their efforts and produces a replacement that is viable, reliable, well-tested, and a suitable candidate to replace live-build, this can only be good for Debian. If they are doing their job, they will not "[replace live-build with] an officially improved, unreliable, little-tested alternative". I've seen no evidence so far that they operate that way. And in the meantime, live-build remains in the archive -- there is no hurry to remove it, so long as it remains in good shape, and there is not yet an improved successor to replace it. +``` + +On November 24, Armstrong also [posted][16] an update (and to [his blog][17]) on Debian Live. It shows some good progress made in the two weeks since Baumann's exit; there are even signs of collaboration between the project and the live-wrapper developers. There is also a [to-do list][18], as well as the inevitable call for more help. That gives reason to believe that all of the drama surrounding the project was just a glitch—avoidable, perhaps, but not quite as dire as it might have seemed. 
+ + +--------------------------------- + +via: https://lwn.net/Articles/665839/ + +作者:Jake Edge +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + + +[1]: https://lwn.net/Articles/666127/ +[2]: http://live.debian.net/ +[3]: https://www.debian.org/News/weekly/2006/08/ +[4]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=804315 +[5]: http://liw.fi/vmdebootstrap/ +[6]: https://lwn.net/Articles/666173/ +[7]: https://lwn.net/Articles/666176/ +[8]: https://lwn.net/Articles/666181/ +[9]: https://lwn.net/Articles/666208/ +[10]: https://lwn.net/Articles/666321/ +[11]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699808 +[12]: https://nm.debian.org/public/process/14450 +[13]: https://lwn.net/Articles/666336/ +[14]: http://live.debian.net/project/downstream/ +[15]: https://lwn.net/Articles/666338/ +[16]: https://lwn.net/Articles/666340/ +[17]: http://syn.theti.ca/2015/11/24/debian-live-after-debian-live/ +[18]: https://wiki.debian.org/DebianLive/TODO diff --git a/sources/talk/The history of Android/20 - The history of Android.md b/sources/talk/The history of Android/20 - The history of Android.md deleted file mode 100644 index 75c89a1abc..0000000000 --- a/sources/talk/The history of Android/20 - The history of Android.md +++ /dev/null @@ -1,95 +0,0 @@ -alim0x translating - -The history of Android -================================================================================ -![Another Market design that was nothing like the old one. This lineup shows the categories page, featured, a top apps list, and an app page.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/market-pages.png) -Another Market design that was nothing like the old one. This lineup shows the categories page, featured, a top apps list, and an app page. 
-Photo by Ron Amadeo - -These screenshots give us our first look at the refined version of the Action Bar in Ice Cream Sandwich. Almost every app got a bar at the top of the screen that housed the app icon, title of the screen, several function buttons, and a menu button on the right. The right-aligned menu button was called the "overflow" button, because it housed items that didn't fit on the main action bar. The overflow menu wasn't static, though, it gave the action bar more screen real-estate—like in horizontal mode or on a tablet—and more of the overflow menu items were shown on the action bar as actual buttons. - -New in Ice Cream Sandwich was this design style of "swipe tabs," which replaced the 2×3 interstitial navigation screen Google was previously pushing. A tab bar sat just under the Action Bar, with the center title showing the current tab and the left and right having labels for the pages to the left and right of this screen. A swipe in either direction would change tabs, or you could tap on a title to go to that tab. - -One really cool design touch on the individual app screen was that, after the pictures, it would dynamically rearrange the page based on your history with that app. If you never installed the app before, the description would be the first box. If you used the app before, the first section would be the reviews bar, which would either invite you to review the app or remind you what you thought of the app last time you installed it. The second section for a previously used app was “What’s New," since an existing user would most likely be interested in changes. - -![Recent apps and the browser were just like Honeycomb, but smaller.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/recentbrowser.png) -Recent apps and the browser were just like Honeycomb, but smaller. -Photo by Ron Amadeo - -Recent apps toned the Tron look way down. 
The blue outline around the thumbnails was removed, along with the eerie, uneven blue glow in the background. It now looked like a neutral UI piece that would be at home in any time period. - -The Browser did its best to bring a tabbed experience to phones. Multi-tab browsing was placed front and center, but instead of wasting precious screen space on a tab strip, a tab button would open a Recent Apps-like interface that would show you your open tabs. Functionally, there wasn't much difference between this and the "window" view that was present in past versions of the Browser. The best addition to the Browser was a "Request desktop site" menu item, which would switch from the default mobile view to the normal site. The Browser showed off the flexibility of Google's Action Bar design, which, despite not having a top-left app icon, still functioned like any other top bar design. - -![Gmail and Google Talk—they're like Honeycomb, but smaller!](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gmail2.png) -Gmail and Google Talk—they're like Honeycomb, but smaller! -Photo by Ron Amadeo - -Gmail and Google Talk both looked like smaller versions of their Honeycomb designs, but with a few tweaks to work better on smaller screens. Gmail featured a dual Action Bar—one on the top of the screen and one on the bottom. The top bar showed your current folder, account, and number of unread messages, and tapping on the bar opened a navigation menu. The bottom featured all the normal buttons you would expect along with the overflow button. This dual layout was used in order to display more buttons on the surface level, but in landscape mode where vertical space was at a premium, the dual bars merged into a single top bar. - -In the message view, the blue bar was "sticky" when you scrolled down. It stuck to the top of the screen, so you could always see who wrote the current message, reply, or star it.
Once in a message, the thin, dark gray bar at the bottom showed your current spot in the inbox (or whatever list brought you here), and you could swipe left and right to get to other messages. - -Google Talk would let you swipe left and right to change chat windows, just like Gmail, but there the bar was at the top. - -![The new dialer and the incoming call screen, both of which we haven't seen since Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/inc-calls.png) -The new dialer and the incoming call screen, both of which we haven't seen since Gingerbread. -Photo by Ron Amadeo - -Since Honeycomb was only for tablets, some UI pieces were directly preceded by Gingerbread instead. The new Ice Cream Sandwich dialer was, of course, black and blue, and it used smaller tabs that could be swiped through. While Ice Cream Sandwich finally did the sensible thing and separated the main phone and contacts interfaces, the phone app still had its own contacts tab. There were now two spots to view your contact list—one with a dark theme and one with a light theme. With a hardware search button no longer being a requirement, the bottom row of buttons had the voicemail shortcut swapped out for a search icon. - -Google liked to have the incoming call interface mirror the lock screen, which meant Ice Cream Sandwich got a circle-unlock design. Besides the usual decline or accept options, a new button was added to the top of the circle, which would let you decline a call by sending a pre-defined text message to the caller. Swiping up and picking a message like "Can't talk now, call you later" was (and still is) much more informative than an endlessly ringing phone. - -![Honeycomb didn't have folders or a texting app, so here's Ice Cream Sandwich versus Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/thenonmessedupversion.png) -Honeycomb didn't have folders or a texting app, so here's Ice Cream Sandwich versus Gingerbread. 
-Photo by Ron Amadeo - -Folders were now much easier to make. In Gingerbread, you had to long press on the screen, pick "folders," and then pick "new folder." In Ice Cream Sandwich, just drag one icon on top of another, and a folder is created containing those two icons. It was dead simple and much easier than finding the hidden long-press command. - -The design was much improved, too. Gingerbread used a generic beige folder icon, but Ice Cream Sandwich actually showed you what was in the folder by stacking the first three icons on top of each other, drawing a circle around them, and using that as the folder icon. Open folder containers resized to fit the amount of icons in the folder rather than being a full-screen, mostly empty box. It looked way, way better. - -![YouTube switched to a more modern white theme and used a list view instead of the crazy 3D scrolling](http://cdn.arstechnica.net/wp-content/uploads/2014/03/youtubes.png) -YouTube switched to a more modern white theme and used a list view instead of the crazy 3D scrolling -Photo by Ron Amadeo - -YouTube was completely redesigned and looked less like something from The Matrix and more like, well, YouTube. It was a simple white list of vertically scrolling videos, just like the website. Making videos on your phone was given prime real estate, with the first button on the action bar dedicated to recording a video. Strangely, different screens used different YouTube logos in the top left, switching between a horizontal YouTube logo and a square one. - -YouTube used swipe tabs just about everywhere. They were placed on the main page to browse and view your account and on the video pages to switch between comments, info, and related videos. The 4.0 app showed the first signs of Google+ YouTube integration, placing a "+1" icon next to the traditional rating buttons. Eventually Google+ would completely take over YouTube, turning the comments and author pages into Google+ activity. 
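The drag-to-create folder behavior described above is simple enough to model in a few lines. Here is a toy sketch in Python, not Android code; the `Folder` class and `drop_on` function are invented names purely for illustration:

```python
# Toy model of the Ice Cream Sandwich homescreen folder behavior:
# dropping one icon onto another creates a folder holding both, and
# the folder's icon previews only the first three items it contains.

class Folder:
    def __init__(self, icons):
        self.icons = list(icons)

    def preview(self, n=3):
        # ICS drew the folder icon by stacking the first three app icons.
        return self.icons[:n]

def drop_on(target, dragged):
    """Drop one homescreen item onto another.

    Icon onto icon -> a new folder containing both.
    Icon onto an existing folder -> appended to that folder.
    """
    if isinstance(target, Folder):
        target.icons.append(dragged)
        return target
    return Folder([target, dragged])

f = drop_on("Gmail", "Talk")   # two icons become a folder
f = drop_on(f, "Maps")         # icon onto folder is appended
f = drop_on(f, "YouTube")
print(f.preview())             # only the first three icons show
```

The point of the sketch is how little state the interaction needs compared with Gingerbread's hidden long-press menu.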
- -![Ice Cream Sandwich tried to make things easier on everyone. Here is a screen for tracking data usage, the new developer options with tons of analytics enabled, and the intro tutorial.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/data.png) -Ice Cream Sandwich tried to make things easier on everyone. Here is a screen for tracking data usage, the new developer options with tons of analytics enabled, and the intro tutorial. -Photo by Ron Amadeo - -Data Usage allowed users to easily keep track of and control their data usage. The main page showed a graph of this month's data usage, and users could set thresholds to be warned about data consumption or even set a hard usage limit to avoid overage charges. All of this was done easily by dragging the horizontal orange and red threshold lines higher or lower on the chart. The vertical white bars allowed users to select a slice of time in the graph. At the bottom of the page, the data usage for the selected time was broken down by app, so users could select a spike and easily see what app was sucking up all their data. When times got really tough, in the overflow button was an option to restrict all background data. Then, only apps running in the foreground could have access to the Internet connection. - -The Developer Options typically only housed a tiny handful of settings, but in Ice Cream Sandwich the section received a huge expansion. Google added all sorts of on-screen diagnostic overlays to help app developers understand what was happening inside their app. You could view CPU usage, pointer location, and view screen updates. There were also options to change the way the system functioned, like control over animation speed, background processing, and GPU rendering. - -One of the biggest differences between Android and the iOS is Android's app drawer interface. 
In Ice Cream Sandwich's quest to be more user-friendly, the initial startup launched a small tutorial showing users where the app drawer was and how to drag icons out of the drawer and onto the homescreen. With the removal of the off-screen menu button and changes like this, Android 4.0 made a big push to be more inviting to new smartphone users and switchers. - -![The "touch to beam" NFC support, Google Earth, and App Info, which would let you disable crapware.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-06-03.57.png) -The "touch to beam" NFC support, Google Earth, and App Info, which would let you disable crapware. - -Built into Ice Cream Sandwich was full support for [NFC][1]. While previous devices like the Nexus S had NFC, support was limited and the OS couldn't do much with the chip. 4.0 added a feature called Android Beam, which would let two NFC-equipped Android 4.0 devices transfer data back and forth. NFC would transmit data related to whatever was on the screen at the time, so tapping when a phone displayed a webpage would send that page to the other phone. You could also send contact information, directions, and YouTube links. When the two phones were put together, the screen zoomed out, and tapping on the zoomed-out display would send the information. - -In Android, users are not allowed to uninstall system apps, which are often integral to the function of the device. Carriers and OEMs took advantage of this and started putting crapware in the system partition, sticking users with software they didn't want. Android 4.0 allowed users to disable any app that couldn't be uninstalled, meaning the app remained on the system but didn't show up in the app drawer and couldn't be run. If users were willing to dig through the settings, this gave them an easy way to take control of their phone. - -Android 4.0 can be thought of as the start of the modern Android era.
Most of the Google apps released around this time only worked on Android 4.0 and above. There were so many new APIs that Google wanted to take advantage of that—initially at least—support for versions below 4.0 was limited. After Ice Cream Sandwich and Honeycomb, Google was really starting to get serious about software design. In January 2012, the company [finally launched][2] *Android Design*, a design guideline site that taught Android app developers how to create apps to match the look and feel of Android. This was something iOS not only had from the start of third-party app support, but Apple enforced design so seriously that apps that did not meet the guidelines were blocked from the App Store. The fact that Android went three years without any kind of public design documents from Google shows just how bad things used to be. But with Duarte in charge of Android's design revolution, the company was finally addressing basic design needs. - ----------- - -![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg) - -[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work. 
- -[@RonAmadeo][t] - --------------------------------------------------------------------------------- - -via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/20/ - -译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://arstechnica.com/gadgets/2011/02/near-field-communications-a-technology-primer/ -[2]:http://arstechnica.com/business/2012/01/google-launches-style-guide-for-android-developers/ -[a]:http://arstechnica.com/author/ronamadeo -[t]:https://twitter.com/RonAmadeo diff --git a/sources/talk/The history of Android/21 - The history of Android.md b/sources/talk/The history of Android/21 - The history of Android.md deleted file mode 100644 index 265e7a867b..0000000000 --- a/sources/talk/The history of Android/21 - The history of Android.md +++ /dev/null @@ -1,103 +0,0 @@ -The history of Android -================================================================================ -![](http://cdn.arstechnica.net/wp-content/uploads/2014/03/playicons2.png) -Photo by Ron Amadeo - -### Google Play and the return of direct-to-consumer device sales ### - -On March 6, 2012, Google unified all of its content offerings under the banner of "Google Play." The Android Market became the Google Play Store, Google Books became Google Play Books, Google Music became Google Play Music, and Android Market Movies became Google Play Movies & TV. While the app interfaces didn't change much, all four content apps got new names and icons. Content purchased in the Play Store would be downloaded to the appropriate app, and the Play Store and Play content apps all worked together to provide a fairly organized content experience. - -The Google Play update was Google's first big out-of-cycle update. 
Four packed-in apps were all changed without having to issue a system update—they were all updated through the Android Market/Play Store. Enabling out-of-cycle updates to individual apps was a big focus for Google, and being able to do an update like this was the culmination of an engineering effort that started in the Gingerbread era. Google had been working on "decoupling" the apps from the operating system and making everything portable enough to be distributed through the Android Market/Play Store. - -While one or two apps (mostly Maps and Gmail) had previously lived on the Android Market, from here on you'll see a lot more significant updates that have nothing to do with an operating system release. System updates require the cooperation of OEMs and carriers, so they are difficult to push out to every user. Play Store updates are completely controlled by Google, though, providing the company a direct line to users' devices. For the launch of Google Play, the Android Market updated itself to the Google Play Store, and from there, Books, Music, and Movies were all issued Google Play-flavored updates. - -The design of the Google Play apps was still all over the place. Each app looked and functioned differently, but for now, a cohesive brand was a good start. And removing "Android" from the branding was necessary because many services were available in the browser and could be used without touching an Android device at all. - -In April 2012, Google started [selling devices through the Play Store again][1], reviving the direct-to-consumer model it had experimented with for the launch of the Nexus One. While it was only two years after ending the Nexus One sales, Internet shopping was now more commonplace, and buying something before you could hold it didn't seem as crazy as it did in 2010. - -Google also saw how price-conscious consumers became when faced with the Nexus One's $530 price tag.
The first device for sale was an unlocked, GSM version of the Galaxy Nexus for $399. From there, price would go even lower. $350 has been the entry-level price for the last two Nexus smartphones, and 7-inch Nexus tablets would come in at only $200 to $220. - -Today, the Play Store sells eight different Android devices, four Chromebooks, a thermostat, and tons of accessories, and the device store is the de-facto location for a new Google product launch. New phone launches are so popular, the site usually breaks under the load, and new Nexus phones sell out in a few hours. - -### Android 4.1, Jelly Bean—Google Now points toward the future ### - -![The Asus-made Nexus 7, Android 4.1's launch device.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/ASUS_Google_Nexus_7_4_11.jpg) -The Asus-made Nexus 7, Android 4.1's launch device. - -With the release of Android 4.1, Jelly Bean in July 2012, Google settled into an Android release cadence of about every six months. The platform matured to the point where a release every three months was unnecessary, and the slower release cycle gave OEMs a chance to catch their breath. Unlike Honeycomb, point releases were now fairly major updates, with 4.1 bringing major UI and framework changes. - -One of the biggest changes in Jelly Bean that you won't be able to see in screenshots is "Project Butter," the name for a concerted effort by Google's engineers to make Android animations run smoothly at 30FPS. Core changes were made, like Vsync and triple buffering, and individual animations were optimized so they could be drawn smoothly. Animation and scrolling smoothness had always been a weak point of Android when compared to iOS. After some work on both the core animation framework and on individual apps, Jelly Bean brought Android a lot closer to iOS' smoothness. - -Along with Jelly Bean came the [Nexus][2] 7, a 7-inch tablet manufactured by Asus. 
Unlike the primarily horizontal Xoom, the Nexus 7 was meant to be used in portrait mode, like a large phone. The Nexus 7 showed that, after almost a year-and-a-half of ecosystem building, Google was ready to commit to the tablet market with a flagship device. Like the Nexus One and GSM Galaxy Nexus, the Nexus 7 was sold online directly by Google. While those earlier devices had shockingly high prices for consumers that were used to carrier subsidies, the Nexus 7 hit a mass market price point of only $200. The price bought you a device with a 7-inch, 1280x800 display, a quad core, 1.2 GHz Tegra 3 processor, 1GB of RAM, and 8GB of storage. The Nexus 7 was such a good value that many wondered if Google was making any money at all on its flagship tablet. - -This smaller, lighter, 7-inch form factor would be a huge success for Google, and it put the company in the rare position of being an industry trendsetter. Apple, which started with a 10-inch iPad, was eventually forced to answer the Nexus 7 and tablets like it with the iPad Mini. - -![4.1's new lock screen design, wallpaper, and the new on-press highlight on the system buttons.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/picture.png) -4.1's new lock screen design, wallpaper, and the new on-press highlight on the system buttons. -Photo by Ron Amadeo - -The Tron look introduced in Honeycomb was toned down a little in Ice Cream Sandwich, and Jelly Bean took things a step further. It started removing blue from large chunks of the operating system. The hint was the on-press highlights on the system buttons, which changed from blue to gray. - -![A composite image of the new app lineup and the new notification panel with expandable notifications.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/jb-apps-and-notications.png) -A composite image of the new app lineup and the new notification panel with expandable notifications. 
-Photo by Ron Amadeo - -The Notification panel was completely revamped, and we've finally arrived at the design used today in KitKat. The new panel extended to the top of the screen and covered the usual status icons, meaning the status bar was no longer visible when the panel was open. The time was prominently displayed in the top left corner, along with the date and a settings shortcut. The clear-all-notifications button, which was represented by an "X" in Ice Cream Sandwich, changed to a stairstep icon, symbolizing the staggered sliding animation that cleared the notification panel. The bottom handle changed from a circle to a single line that ran the length of the notification panel. All the typography was changed—the notification panel now used bigger, thinner fonts for everything. This was another screen where the blue introduced in Ice Cream Sandwich and Honeycomb was removed. The notification panel was entirely gray now except for on-touch highlights. - -There was new functionality in the panel, too. Notifications were now expandable and could show much more information than the previous two-line design. A notification now showed up to eight lines of text and could even show buttons at the bottom of the notification. The screenshot notification had a share button at the bottom, and you could call directly from a missed call notification, or you could snooze a ringing alarm all from the notification panel. New notifications were expanded by default, but as they piled up they would collapse back to the traditional size. Dragging down on a notification with two fingers would expand it. - -![The new Google Search app, with Google Now cards, voice search, and text search.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googlenow.png) -The new Google Search app, with Google Now cards, voice search, and text search.
-Photo by Ron Amadeo - -The biggest feature addition to Jelly Bean for not only Android, but for Google as a whole, was the new version of the Google Search application. This introduced "Google Now," a predictive search feature. Google Now was displayed as several cards that sat below the search box, and it would offer results for searches Google thought you cared about. These were things like Google Maps searches for places you had recently looked at on your desktop computer, calendar appointment locations, the weather, and the time at home while traveling. - -The new Google Search app could, of course, be launched with the Google icon, but it could also be accessed from any screen with a swipe up from the system bar. Long pressing on the system bar brought up a ring that worked similarly to the lock screen ring. The card section scrolled vertically, and cards could be swiped away if you didn't want to see them. Voice Search was a big part of the updates. Questions weren't just blindly entered into Google; if Google knew the answer, it would also talk back using a text-to-speech engine. And old-school text searches were, of course, still supported. Just tap on the bar and start typing. - -Google frequently called Google Now "the future of Google Search." Telling Google what you wanted wasn't good enough. Google wanted to know what you wanted before you did. Google Now put all of Google's data mining knowledge about you to work for you, and it was the company's biggest advantage against rival search services like Bing. Smartphones knew more about you than any other device you own, so the service debuted on Android. But Google slowly worked Google Now into Chrome, and eventually it will likely end up on Google.com. - -While the functionality was important, it became clear that Google Now was the most important design work to ever come out of the company, too.
The white card aesthetic that this app introduced would become the foundation for Google's design of just about everything. Today, this card style is used in the Google Play Store and in all of the Play content apps, YouTube, Google Maps, Drive, Keep, Gmail, Google+, and many others. It's not just Android apps, either. Many of Google's desktop sites and iOS apps are inspired by this design. Design was historically one of Google's weak areas, but Google Now was the point where the company finally got its act together with a cohesive, company-wide design language. - -![Yet another YouTube redesign. Information density went way down.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/yotuube.png) -Yet another YouTube redesign. Information density went way down. -Photo by Ron Amadeo - -Another version, another YouTube redesign. This time the list view was primarily thumbnail-based, with giant images taking up most of the screen real estate. Information density tanked with the new list design. Before YouTube would display around six items per screen, now it could only display three. - -YouTube was one of the first apps to add a sliding drawer to the left side of an app, a feature which would become a standard design style across Google's apps. The drawer has links for your account and channel subscriptions, which allowed Google to kill the tabs-on-top design. - -![Google Play Service's responsibilities versus the rest of Android.](http://cdn.arstechnica.net/wp-content/uploads/2013/08/playservicesdiagram2.png) -Google Play Service's responsibilities versus the rest of Android. -Photo by Ron Amadeo - -### Google Play Services—fragmentation and making OS versions (nearly) obsolete ### - -It didn't seem like a big deal at the time, but in September 2012, Google Play Services 1.0 was automatically pushed out to every Android phone running 2.2 and up. It added a few Google+ APIs and support for OAuth 2.0. 
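The silent, store-delivered library update described above can be sketched abstractly. This is a hedged illustration in Python of the general pattern, not the real Play Services API; every class name and string here is invented:

```python
# Toy model of an out-of-band library update: the OS on a device is frozen
# (OEM/carrier updates never arrive), but a separately shipped library can
# still be updated and can override core behavior.

class InstalledOS:
    version = "2.2"                     # a Froyo-era device, frozen forever

    def locate(self):
        return "coarse cell-tower fix"  # the only location API the OS ships

class UpdatableLibrary:
    """Delivered through the app store, independent of InstalledOS.version."""

    def __init__(self, os):
        self.os = os
        self.lib_version = 1

    def update(self, new_version):
        self.lib_version = new_version  # no OEM or carrier sign-off needed

    def locate(self):
        # Newer library versions replace the stock component outright.
        if self.lib_version >= 2:
            return "fused GPS/Wi-Fi fix"
        return self.os.locate()

device = UpdatableLibrary(InstalledOS())
before = device.locate()
device.update(2)                        # pushed to nearly all users in days
after = device.locate()
print(before, "->", after)
```

Apps that call the library instead of the OS pick up the new behavior even though `InstalledOS.version` never changes, which is the essence of the decoupling discussed in this section.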
- -While this update might sound boring, Google Play Services would eventually grow to become an integral part of Android. Google Play Services acts as a shim between the normal apps and the installed Android OS, allowing Google to update or replace some core components and add APIs without having to ship out a new Android version. - -With Play Services, Google had a direct line to the core of an Android phone without having to go through OEM updates and carrier approval processes. Google used Play Services to add an entirely new location system, a malware scanner, remote wipe capabilities, and new Google Maps APIs, all without shipping an OS update. Like we mentioned at the end of the Gingerbread section, thanks to all the "portable" APIs implemented in Play Services, Gingerbread can still download a modern version of the Play Store and many other Google Apps. - -The other big benefit was compatibility with Android's user base. The newest release of an Android OS can take a very long time to get out to the majority of users, which means APIs that get tied to the latest version of the OS won't be any good to developers until the majority of the user base upgrades. Google Play Services is compatible with Froyo and above, which is 99 percent of active devices, and the updates pushed directly to phones through the Play Store. By including APIs in Google Play Services instead of Android, Google can push a new API out to almost all users in about a week. It's [a great solution][3] to many of the problems caused by version fragmentation. - ----------- - -![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg) - -[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work. 
- -[@RonAmadeo][t] - --------------------------------------------------------------------------------- - -via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/21/ - -译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://arstechnica.com/gadgets/2012/04/unlocked-samsung-galaxy-nexus-can-now-be-purchased-from-google/ -[2]:http://arstechnica.com/gadgets/2012/07/divine-intervention-googles-nexus-7-is-a-fantastic-200-tablet/ -[3]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/ -[a]:http://arstechnica.com/author/ronamadeo -[t]:https://twitter.com/RonAmadeo \ No newline at end of file diff --git a/sources/talk/The history of Android/22 - The history of Android.md b/sources/talk/The history of Android/22 - The history of Android.md index 79cf7bd2a5..fab3d8c087 100644 --- a/sources/talk/The history of Android/22 - The history of Android.md +++ b/sources/talk/The history of Android/22 - The history of Android.md @@ -1,8 +1,10 @@ +alim0x translating + The history of Android ================================================================================ ### Android 4.2, Jelly Bean—new Nexus devices, new tablet interface ### -The Android Platform was rapidly maturing, and with Google hosting more and more apps in the Play Store, there was less and less that needed to go out in the OS update. Still, the relentless march of updates must continue, and in November 2012 Android 4.2 was released. 4.2 was still called "Jelly Bean," a nod to the relatively small amount of changes that were present in this release. +The Android Platform was rapidly maturing, and with Google hosting more and more apps in the Play Store, there was less and less that needed to go out in the OS update. 
Still, the relentless march of updates must continue, and in November 2012 Android 4.2 was released. 4.2 was still called "Jelly Bean," a nod to the relatively small amount of changes that were present in this release. ![The LG-made Nexus 4 and Samsung-made Nexus 10.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/unnamed.jpg) The LG-made Nexus 4 and Samsung-made Nexus 10. @@ -81,4 +83,4 @@ via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-histor [1]:http://arstechnica.com/gadgets/2014/01/hands-on-with-samsungs-notepro-and-tabpro-new-screen-sizes-and-magazine-ui/ [2]:http://cdn.arstechnica.net/wp-content/uploads/2013/12/device-2013-12-26-11016071.png [a]:http://arstechnica.com/author/ronamadeo -[t]:https://twitter.com/RonAmadeo \ No newline at end of file +[t]:https://twitter.com/RonAmadeo diff --git a/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md b/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md new file mode 100644 index 0000000000..bf2ba1ff25 --- /dev/null +++ b/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md @@ -0,0 +1,155 @@ +10 tools for visual effects in Linux with Kdenlive +================================================================================ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life-uploads/kdenlivetoolssummary.png) +Image credits : Seth Kenlon. [CC BY-SA 4.0.][1] + +[Kdenlive][2] is one of those applications; you can use it daily for a year and wake up one morning only to realize that you still have only grazed the surface of all of its potential. That's why it's nice every once in a while to sit back and look over some of the lesser-used tricks and tools in Kdenlive. Even though something's not used as often as, say, the Spacer or Razor tools, it still may end up being just the right finishing touch on your latest masterpiece. 
+ +Most of the tools I'll discuss here are not officially part of Kdenlive; they are plugins from the [Frei0r][3] package. These are ubiquitous parts of video processing on Linux and Unix, and they usually get installed along with Kdenlive as distributed by most Linux distributions, so they often seem like part of the application. If your install of Kdenlive does not feature some of the tools mentioned here, make sure that you have Frei0r plugins installed. + +Since many of the tools in this article affect the look of an image, here is the base image, without effects or adjustment: + +![](https://opensource.com/sites/default/files/images/life-uploads/before_0.png) + +Still image grabbed from a video by Footage Firm, Inc. [CC BY-SA 4.0.][1] + +Let's get started. + +### 1. Color effect ### + +![](https://opensource.com/sites/default/files/images/life-uploads/coloreffect.png) + +You can find the **Color Effect** filter in **Add Effect > Misc** context menu. As filters go, it's mostly just a preset; the only controls it has are which filter you want to use. + +![](https://opensource.com/sites/default/files/images/life-uploads/coloreffect_ctl_0.png) + +Normally that's the kind of filter I avoid, but I have to be honest: Sometimes a plug-and-play solution is exactly what you want. This filter has a few different settings, but the two that make it worth while (at least for me) are the Sepia and XPro effects. Admittedly, controls to adjust how sepia tone the sepia effect is would be nice, but no matter what, when you need a quick and familiar color effect, this is the filter to throw onto a clip. It's immediate, it's easy, and if your client asks for that look, this does the trick every time. + +### 2. Colorize ### + +![](https://opensource.com/sites/default/files/images/life-uploads/colorize.png) + +The simplicity of the **Colorize** filter in **Add Effect > Misc** is also its strength. 
In some editing applications, it takes two filters and some compositing to achieve this simple color-wash effect. It's refreshing that in Kdenlive, it's a matter of one filter with three possible controls (only one of which, strictly speaking, is necessary to achieve the look). + +![](https://opensource.com/sites/default/files/images/life-uploads/colorize_ctl.png) + +Its use is intuitive; use the **Hue** slider to set the color. Use the other controls to adjust the luma of the base image as needed. + +This is not a filter I use every day, but for ad spots, bumpers, dreamy sequences, or titles, it's the easiest and quickest path to a commonly needed look. Get a company's color, use it as the colorize effect, slap a logo over the top of the screen, and you've just created a winning corporate intro. + +### 3. Dynamic Text ### + +![](https://opensource.com/sites/default/files/images/life-uploads/dyntext.png) + +For the assistant editor, the Add Effect > Misc > Dynamic **Text** effect is worth the price of Kdenlive. With one mostly pre-set filter, you can add a running timecode burn-in to your project, which is an absolute must-have safety feature when round-tripping your footage through effects and sound. + +The controls look more complex than they actually are. + +![](https://opensource.com/sites/default/files/images/life-uploads/dyntext_ctl.png) + +The font settings are self-explanatory. Placement of the text is controlled by the Horizontal and Vertical Alignment settings; steer clear of the **Size** setting (it controls the size of the "canvas" upon which you are compositing the burn-in, not the size of the burn-in itself). + +The text itself doesn't have to be timecode. From the dropdown menu, you can choose from a list of useful text, including frame count (useful for VFX, since animators work in frames), source frame rate, source dimensions, and more. + +You are not limited to just one choice. 
The text field in the control panel will take whatever arbitrary text you put into it, so if you want to burn in more information than just timecode and frame rate (such as **Sc 3 - #timecode# - #meta.media.0.stream.frame_rate#**), then have at it. + +### 4. Luminance ### + +![](https://opensource.com/sites/default/files/images/life-uploads/luminance.png) + +The **Add Effect > Misc > Luminance** filter is a no-options filter. Luminance does one thing and it does it well: It drops the chroma values of all pixels in an image so that they are displayed by their luma values. In simpler terms, it's a grayscale filter. + +The nice thing about this filter is that it's quick, easy, efficient, and effective. This filter combines particularly well with other related filters (meaning that yes, I'm cheating and including three filters for one). + +![](https://opensource.com/sites/default/files/images/life-uploads/luminance_ctl.png) + +Combining, in this order, **RGB Noise** for emulated grain, **Luminance** for grayscale, and **LumaLiftGainGamma** for levels can render a textured image that suggests the classic look and feel of [Kodak Tri-X][4] film. + +### 5. Mask0mate ### + +![](https://opensource.com/sites/default/files/images/life-uploads/mask0mate.png) +Image by Footage Firm, Inc. + +Better known as a four-point garbage mask, the **Add Effect > Alpha Manipulation > Mask0mate** tool is a quick, no-frills way to ditch parts of your frame that you don't need. There isn't much to say about it; it is what it is. + +![](https://opensource.com/sites/default/files/images/life-uploads/mask0mate_ctl.png) + +The confusing thing about the effect is that it does not imply compositing. You can pull in the edges all you want, but you won't see the result unless you add the **Composite** transition to reveal what's underneath the clip (even if that's nothing).
Also, use the **Invert** function for the filter to act like you think it should act (without it, the controls will probably feel backward to you). + +### 6. Pr0file ### + +![](https://opensource.com/sites/default/files/images/life-uploads/pr0file.png) + +The **Add Effect > Misc > Pr0file** filter is an analytical tool, not something you would actually leave on a clip for final export (unless, of course, you do). Pr0file consists of two components: the Marker, which dictates what area of the image is being analyzed, and the Graph, which displays information about the marked region. + +Set the marker using the **X, Y, Tilt**, and **Length** controls. All the relevant color channel information is then displayed as a graph, superimposed over your image. + +![](https://opensource.com/sites/default/files/images/life-uploads/pr0file_ctl.jpg) + +The readout displays a profile of the colors within the region marked. The result is a sort of hyper-specific vectorscope (or oscilloscope, as the case may be) that can help you zero in on problem areas during color correction, or compare regions while color matching. + +In other editors, the way to get the same information was simply to temporarily scale your image up to the region you want to analyze, look at your readout, and then hit undo to scale back. Both ways work, but the Pr0file filter does feel a little more elegant. + +### 7. Vectorscope ### + +![](https://opensource.com/sites/default/files/images/life-uploads/vectorscope.jpg) + +Kdenlive features an inbuilt vectorscope, available from the **View** menu in the main menu bar. A vectorscope is not a filter; it's just another view of the footage in your Project Monitor, specifically a view of the color saturation in the current frame. If you are color correcting an image and you're not sure what colors you need to boost or counteract, looking at the vectorscope can be a huge help. + +There are several different views available.
You can render the vectorscope in traditional green monochrome (like the hardware vectorscopes you'd find in a broadcast control room), in a chromatic view (my personal preference), subtracted from a color-wheel background, and more. + +The vectorscope reads the entire frame, so unlike the Pr0file filter, you are not just getting a reading of one area in the frame. The result is a consolidated view of what colors are most prominent within a frame. Technically, the same sort of information can be intuited by several trial-and-error passes with color correction, or you can just leave your vectorscope open and watch the colors float along the color wheel and make adjustments accordingly. + +Aside from how you want the vectorscope to look, there are no controls for this tool. It is a readout only. + +### 8. Vertigo ### + +![](https://opensource.com/sites/default/files/images/life-uploads/vertigo.jpg) + +There's no way around it; **Add Effect > Misc > Vertigo** is a gimmicky special-effects filter. So unless you're remaking [Fear and Loathing][5] or the movie adaptation of [Dead Island][6], you probably aren't going to use it that much; however, it's one of those high-quality filters that does the exact trick you want when you happen to be looking for it. + +The controls are simple. You can adjust how distorted the image becomes and the rate at which it distorts. The overall effect is probably more drunk or vision-quest than vertigo, but it's good. + +![](https://opensource.com/sites/default/files/images/life-uploads/vertigo_ctl.png) + +### 9. Vignette ### + +![](https://opensource.com/sites/default/files/images/life-uploads/vignette.jpg) + +Another beautiful effect, the **Add Effect > Misc > Vignette** filter darkens the outer edges of the frame to provide a sort of portrait, soft-focus nouveau look. Combined with the Color Effect or the Luminance faux Tri-X trick, this can be a powerful and emotional look.
+ +The softness of the border and the aspect ratio of the iris can be adjusted. The **Clear Center Size** attribute controls the size of the clear area, which has the effect of adjusting the intensity of the vignette effect. + +![](https://opensource.com/sites/default/files/images/life-uploads/vignette_ctl.png) + +### 10. Volume ### + +![](https://opensource.com/sites/default/files/images/life-uploads/vol.jpg) + +I don't believe in mixing sound within the video editing application, but I do acknowledge that sometimes it's just necessary for a quick fix, or to meet a tight production schedule. And that's when the **Audio correction > Volume (Keyframable)** effect comes in handy. + +The control panel is clunky, and no one really wants to adjust volume that way, so the effect is best when used directly in the timeline. To create a volume change, double-click the volume line over the audio clip, and then click and drag to adjust. It's that simple. + +Should you use it? Not really. Sound mixing should be done in a sound mixing application. Will you use it? Absolutely. At some point, you'll get audio that is too loud to play as you edit, or you'll be up against a deadline without a sound engineer in sight. Use it judiciously, watch your levels, and get the show finished. + +### Everything else ### + +That's 10 (OK, 13 or 14) effects and tools that Kdenlive has quietly lying around to help make your edits great. Obviously there's a lot more to Kdenlive than just these little tricks. Some are obvious, some are cliché, some are obtuse, but they're all in your toolkit. Get to know them, explore your options, and you might be surprised what a few cheap tricks will get you.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/15/12/10-kdenlive-tools + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/seth +[1]:https://creativecommons.org/licenses/by-sa/4.0/ +[2]:https://kdenlive.org/ +[3]:http://frei0r.dyne.org/ +[4]:http://www.kodak.com/global/en/professional/products/films/bw/triX2.jhtml +[5]:https://en.wikipedia.org/wiki/Fear_and_Loathing_in_Las_Vegas_(film) +[6]:https://en.wikipedia.org/wiki/Dead_Island \ No newline at end of file diff --git a/sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md b/sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md new file mode 100644 index 0000000000..302034c330 --- /dev/null +++ b/sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md @@ -0,0 +1,100 @@ +taichirain 翻译中 + +5 great Raspberry Pi projects for the classroom +5 个适合课堂的优秀树莓派项目 +================================================================================ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-open-source-yearbook-lead3.png) + +Image by : opensource.com +图片来源 : opensource.com + +### 1. Minecraft Pi ### + +Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0.][1] + +Minecraft is the favorite game of pretty much every teenager in the world—and it's one of the most creative games ever to capture the attention of young people. The version that comes with every Raspberry Pi is not only a creative building game, but it also comes with a programming interface that allows additional interaction with the Minecraft world through Python code.
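As a rough sketch of what that interface looks like, the snippet below assumes the `mcpi` library that ships with Raspberry Pi OS; `house_blocks` is a hypothetical helper name used here for illustration, and `build_in_game` only works on a Pi with Minecraft: Pi Edition running, so it is defined but not called:

```python
# Sketch only: "house_blocks" is a hypothetical helper; the mcpi import
# inside build_in_game() assumes the library bundled with Raspberry Pi OS.

def house_blocks(px, py, pz, size=5, height=3):
    """Return the (x, y, z) coordinates of hollow square walls,
    placed one block away from the player's position (px, py, pz)."""
    coords = []
    for dx in range(size):
        for dz in range(size):
            if dx in (0, size - 1) or dz in (0, size - 1):  # wall cells only
                for dy in range(height):
                    coords.append((px + 1 + dx, py + dy, pz + 1 + dz))
    return coords

def build_in_game():
    """Only runs on a Pi with Minecraft: Pi Edition open (untested here)."""
    from mcpi.minecraft import Minecraft
    from mcpi import block

    mc = Minecraft.create()               # connect to the running game
    mc.postToChat("Building a house in Python!")
    x, y, z = mc.player.getTilePos()      # player's current block position
    for bx, by, bz in house_blocks(x, y, z):
        mc.setBlock(bx, by, bz, block.STONE.id)
```

On the Pi, calling `build_in_game()` while the game is open should drop stone walls beside the player; everywhere else, only the coordinate helper is usable.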
+ +Minecraft: Pi Edition is a great way for teachers to engage students with problem solving and writing code to perform tasks. You can use the Python API to build a house and have it follow you wherever you go, build a bridge wherever you walk, make it rain lava, show the temperature in the sky, and anything else your imagination can create. + +Read more in "[Getting Started with Minecraft Pi][2]." + +### 2. Reaction game and traffic lights ### + +![](https://opensource.com/sites/default/files/pi_traffic_installed_yellow_led_on.jpg) + +Courtesy of [Low Voltage Labs][3]. [CC BY-SA 4.0][1]. + +It's really easy to get started with physical computing on Raspberry Pi—just connect up LEDs and buttons to the GPIO pins, and with a few lines of code you can turn lights on and control things with button presses. Once you know the code to do the basics, it's down to your imagination as to what you do next! + +If you know how to flash one light, you can flash three. Pick out three LEDs in traffic light colors and you can code the traffic light sequence. If you know how to use a button to trigger an event, then you have a pedestrian crossing! Also look out for great pre-built traffic light add-ons like [PI-TRAFFIC][4], [PI-STOP][5], [Traffic HAT][6], and more. + +It's not always about the code—this can be used as an exercise in understanding how real-world systems are devised. Computational thinking is a useful skill in any walk of life. + +![](https://opensource.com/sites/default/files/reaction-game.png) + +Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0][1]. + +Next, try wiring up two buttons and an LED and making a two-player reaction game—let the light come on after a random amount of time and see who can press the button first! + +To learn more, check out "[GPIO Zero recipes][7]." Everything you need is in [CamJam EduKit 1][8]. + +### 3.
Sense HAT Pixel Pet ### + +The Astro Pi—an augmented Raspberry Pi—is going to space this December, but you haven't missed your chance to get your hands on the hardware. The Sense HAT is the sensor board add-on used in the Astro Pi mission and it's available for anyone to buy. You can use it for data collection, science experiments, games, and more. Watch this Geek Gurl Diaries video from Raspberry Pi's Carrie Anne for a great way to get started—by bringing to life an animated pixel pet of your own design on the Sense HAT display: + +注:youtube 视频 + + +Learn more in "[Exploring the Sense HAT][9]." + +### 4. Infrared bird box ### + +![](https://opensource.com/sites/default/files/ir-bird-box.png) +Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0.][1] + +A great exercise for the whole class to get involved with—place a Raspberry Pi and the NoIR camera module inside a bird box along with some infrared lights so you can see in the dark, then stream video from the Pi over the network or on the internet. Wait for birds to nest and you can observe them without disturbing them in their habitat. + +Learn all about infrared and the light spectrum, and how to adjust the camera focus and control the camera in software. + +Learn more in "[Make an infrared bird box.][10]" + +### 5. Robotics ### + +![](https://opensource.com/sites/default/files/edukit3_1500-alex-eames-sm.jpg) + +Courtesy of Low Voltage Labs. [CC BY-SA 4.0][1]. + +With a Raspberry Pi and as little as a couple of motors and a motor controller board, you can build your own robot. There is a vast range of robots you can make, from basic buggies held together by sellotape and a homemade chassis, all the way to self-aware, sensor-laden metallic stallions with camera attachments driven by games controllers.
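To give a sense of how little code a first buggy needs, here is a hedged sketch: the `mix` helper is plain Python for differential-drive mixing, while the uncalled `drive_square` function assumes the gpiozero library's `Robot` class on a Pi, with placeholder GPIO pin numbers rather than a recommendation for any specific controller board:

```python
# Sketch only: mix() is pure Python; drive_square() assumes gpiozero on a
# Raspberry Pi, and the GPIO pin numbers are placeholders for your wiring.

def mix(speed, turn):
    """Differential-drive mixing: turn a forward speed and a turn rate
    (each in -1..1) into (left, right) motor speeds clamped to -1..1."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(speed + turn), clamp(speed - turn)

def drive_square():
    """Only runs on a Pi with motors wired up (untested here)."""
    from time import sleep
    from gpiozero import Robot

    robot = Robot(left=(4, 14), right=(17, 18))  # placeholder GPIO pins
    for _ in range(4):
        robot.forward(0.5)   # half speed ahead along one side
        sleep(1)
        robot.right(0.5)     # spin to turn the corner
        sleep(0.4)
    robot.stop()
```

The same `mix` helper works whether the motors are driven by gpiozero, a HAT-specific library, or raw PWM, which is why it is kept separate from the hardware code.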
+ +Learn how to control individual motors with something straightforward like the RTK Motor Controller Board (£8/$12), or dive into the new CamJam robotics kit (£17/$25) which comes with motors, wheels and a couple of sensors—great value and plenty of learning potential. + +Alternatively, if you'd like something more hardcore, try PiBorg's [4Borg][11] (£99/$150) or [DiddyBorg][12] (£180/$273) or go the whole hog and treat yourself to their DoodleBorg Metal edition (£250/$380)—and build a mini version of their infamous [DoodleBorg tank][13] (unfortunately not for sale). + +Check out the [CamJam robotics kit worksheets][14]. + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/education/15/12/5-great-raspberry-pi-projects-classroom + +作者:[Ben Nuttall][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/bennuttall +[1]:https://creativecommons.org/licenses/by-sa/4.0/ +[2]:https://opensource.com/life/15/5/getting-started-minecraft-pi +[3]:http://lowvoltagelabs.com/ +[4]:http://lowvoltagelabs.com/products/pi-traffic/ +[5]:http://4tronix.co.uk/store/index.php?rt=product/product&product_id=390 +[6]:https://ryanteck.uk/hats/1-traffichat-0635648607122.html +[7]:http://pythonhosted.org/gpiozero/recipes/ +[8]:http://camjam.me/?page_id=236 +[9]:https://opensource.com/life/15/10/exploring-raspberry-pi-sense-hat +[10]:https://www.raspberrypi.org/learning/infrared-bird-box/ +[11]:https://www.piborg.org/4borg +[12]:https://www.piborg.org/diddyborg +[13]:https://www.piborg.org/doodleborg +[14]:http://camjam.me/?page_id=1035#worksheets diff --git a/sources/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md b/sources/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md deleted file mode 100644 
index 2a8bdb2fbd..0000000000 --- a/sources/tech/20150410 How to Install and Configure Multihomed ISC DHCP Server on Debian Linux.md +++ /dev/null @@ -1,159 +0,0 @@ -How to Install and Configure Multihomed ISC DHCP Server on Debian Linux -================================================================================ -Dynamic Host Control Protocol (DHCP) offers an expedited method for network administrators to provide network layer addressing to hosts on a constantly changing, or dynamic, network. One of the most common server utilities to offer DHCP functionality is ISC DHCP Server. The goal of this service is to provide hosts with the necessary network information to be able to communicate on the networks in which the host is connected. Information that is typically served by this service can include: DNS server information, network address (IP), subnet mask, default gateway information, hostname, and much more. - -This tutorial will cover ISC-DHCP-Server version 4.2.4 on a Debian 7.7 server that will manage multiple virtual local area networks (VLAN) but can very easily be applied to a single network setup as well. - -The test network that this server was setup on has traditionally relied on a Cisco router to manage the DHCP address leases. The network currently has 12 VLANs needing to be managed by one centralized server. By moving this responsibility to a dedicated server, the router can regain resources for more important tasks such as routing, access control lists, traffic inspection, and network address translation. - -The other benefit to moving DHCP to a dedicated server will, in a later guide, involve setting up Dynamic Domain Name Service (DDNS) so that new host’s host-names will be added to the DNS system when the host requests a DHCP address from the server. - -### Step 1: Installing and Configuring ISC DHCP Server ### - -1. 
To start the process of creating this multi-homed server, the ISC software needs to be installed via the Debian repositories using the ‘apt‘ utility. As with all tutorials, root or sudo access is assumed. Please make the appropriate modifications to the following commands. - - # apt-get install isc-dhcp-server [Installs the ISC DHCP Server software] - # dpkg --get-selections isc-dhcp-server [Confirms successful installation] - # dpkg -s isc-dhcp-server [Alternative confirmation of installation] - -![Install ISC DHCP Server in Debian](http://www.tecmint.com/wp-content/uploads/2015/04/Install-ISC-DHCP-Server.jpg) - -2. Now that the server software is confirmed installed, it is now necessary to configure the server with the network information that it will need to hand out. At the bare minimum, the administrator needs to know the following information for a basic DHCP scope: - -- The network addresses -- The subnet masks -- The range of addresses to be dynamically assigned - -Other useful information to have the server dynamically assign includes: - -- Default gateway -- DNS server IP addresses -- The Domain Name -- Host name -- Network Broadcast addresses - -These are merely a few of the many options that the ISC DHCP server can handle. To get a complete list as well as a description of each option, enter the following command after installing the package: - - # man dhcpd.conf - -3. Once the administrator has concluded all the necessary information for this server to hand out it is time to configure the DHCP server as well as the necessary pools. Before creating any pools or server configurations though, the DHCP service must be configured to listen on one of the server’s interfaces. - -On this particular server, a NIC team has been setup and DHCP will listen on the teamed interfaces which were given the name `'bond0'`. Be sure to make the appropriate changes given the server and environment in which everything is being configured. 
The defaults in this file are okay for this tutorial. - -![Configure ISC DHCP Network](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DHCP-Network.jpg) - -This line will instruct the DHCP service to listen for DHCP traffic on the specified interface(s). At this point, it is time to modify the main configuration file to enable the DHCP pools on the necessary networks. The main configuration file is located at /etc/dhcp/dhcpd.conf. Open the file with a text editor to begin: - - # nano /etc/dhcp/dhcpd.conf - -This file is the configuration for the DHCP server specific options as well as all of the pools/hosts one wishes to configure. The top of the file starts of with a ‘ddns-update-style‘ clause and for this tutorial it will remain set to ‘none‘ however in a future article, Dynamic DNS will be covered and ISC-DHCP-Server will be integrated with BIND9 to enable host name to IP address updates. - -4. The next section is typically the area where and administrator can configure global network settings such as the DNS domain name, default lease time for IP addresses, subnet-masks, and much more. Again to know more about all the options be sure to read the man page for the dhcpd.conf file. - - # man dhcpd.conf - -For this server install, there were a couple of global network options that were configured at the top of the configuration file so that they wouldn’t have to be implemented in every single pool created. - -![Configure ISC DDNS](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DDNS.png) - -Lets take a moment to explain some of these options. While they are configured globally in this example, all of them can be configured on a per pool basis as well. 
- -- option domain-name “comptech.local”; – All hosts that this DHCP server hosts, will be a member of the DNS domain name “comptech.local” -- option domain-name-servers 172.27.10.6; DHCP will hand out DNS server IP of 172.27.10.6 to all of the hosts on all of the networks it is configured to host. -- option subnet-mask 255.255.255.0; – The subnet mask handed out to every network will be a 255.255.255.0 or a /24 -- default-lease-time 3600; – This is the time in seconds that a lease will automatically be valid. The host can re-request the same lease if time runs out or if the host is done with the lease, they can hand the address back early. -- max-lease-time 86400; – This is the maximum amount of time in seconds a lease can be held by a host. -- ping-check true; – This is an extra test to ensure that the address the server wants to assign out isn’t in use by another host on the network already. -- ping-timeout; – This is how long in second the server will wait for a response to a ping before assuming the address isn’t in use. -- ignore client-updates; For now this option is irrelevant since DDNS has been disabled earlier in the configuration file but when DDNS is operating, this option will ignore a hosts to request to update its host-name in DNS. - -5. The next line in this file is the authoritative DHCP server line. This line means that if this server is to be the server that hands out addresses for the networks configured in this file, then uncomment the authoritative stanza. - -This server will be the only authority on all the networks it manages so the global authoritative stanza was un-commented by removing the ‘#’ in front of the keyword authoritative. - -![Enable ISC Authoritative](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-authoritative.png) -Enable ISC Authoritative - -By default the server is assumed to NOT be an authority on the network. The rationale behind this is security. 
If someone unknowingly configures the DHCP server improperly or on a network they shouldn’t, it could cause serious connectivity issues. This line can also be used on a per network basis. This means that if the server is not the entire network’s DHCP server, the authoritative line can instead be used on a per network basis rather than in the global configuration as seen in the above screen-shot. - -6. The next step is to configure all of the DHCP pools/networks that this server will manage. For brevities sake, this guide will only walk through one of the pools configured. The administrator will need to have gathered all of the necessary network information (ie domain name, network addresses, how many addresses can be handed out, etc). - -For this pool the following information was obtained from the network administrator: network id of 172.27.60.0, subnet mask of 255.255.255.0 or a /24, the default gateway for the subnet is 172.27.60.1, and a broadcast address of 172.27.60.255. -This information is important to building the appropriate network stanza in the dhcpd.conf file. Without further ado, let’s open the configuration file again using a text editor and then add the new network to the server. This must be done with root/sudo! - - # nano /etc/dhcp/dhcpd.conf - -![Configure DHCP Pools and Networks](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-network.png) -Configure DHCP Pools and Networks - -This is the sample created to hand out IP addresses to a network that is used for the creation of VMWare virtual practice servers. The first line indicates the network as well as the subnet mask for that network. Then inside the brackets are all the options that the DHCP server should provide to hosts on this network. - -The first stanza, range 172.27.60.50 172.27.60.254;, is the range of dynamically assignable addresses that the DHCP server can hand out to hosts on this network. 
Notice that the first 49 addresses aren’t in the pool and can be assigned statically to hosts if needed. - -The second stanza, option routers 172.27.60.1; , hands out the default gateway address for all hosts on this network. - -The last stanza, option broadcast-address 172.27.60.255;, indicates what the network’s broadcast address. This address SHOULD NOT be a part of the range stanza as the broadcast address can’t be assigned to a host. - -Some pointers, be sure to always end the option lines with a semi-colon (;) and always make sure each network created is enclosed in curly braces { }. - -7. If there are more networks to create, continue creating them with their appropriate options and then save the text file. Once all configurations have been completed, the ISC-DHCP-Server process will need to be restarted in order to apply the new changes. This can be accomplished with the following command: - - # service isc-dhcp-server restart - -This will restart the DHCP service and then the administrator can check to see if the server is ready for DHCP requests several different ways. The easiest is to simply see if the server is listening on port 67 via the [lsof command][1]: - - # lsof -i :67 - -![Check DHCP Listening Port](http://www.tecmint.com/wp-content/uploads/2015/04/lsof.png) -Check DHCP Listening Port - -This output indicates that the DHCPD (DHCP Server daemon) is running and listening on port 67. Port 67 in this output was actually converted to ‘bootps‘ due to a port number mapping for port 67 in /etc/services file. - -This is very common on most systems. At this point, the server should be ready for network connectivity and can be confirmed by connecting a machine to the network and having it request a DHCP address from the server. - -### Step 2: Testing Client Connectivity ### - -8. Most systems now-a-days are using Network Manager to maintain network connections and as such the device should be pre-configured to pull DHCP when the interface is active. 
- -However on machines that aren’t using Network Manager, it may be necessary to manually attempt to pull a DHCP address. The next few steps will show how to do this as well as how to see whether the server is handing out addresses. - -The ‘[ifconfig][2]‘ utility can be used to check an interface’s configuration. The machine used to test the DHCP server only has one network adapter and it is called ‘eth0‘. - - # ifconfig eth0 - -![Check Network Interface IP Address](http://www.tecmint.com/wp-content/uploads/2015/04/No-ip.png) -Check Network Interface IP Address - -From this output, this machine currently doesn’t have an IPv4 address, great! Let’s instruct this machine to reach out to the DHCP server and request an address. This machine has the DHCP client utility known as ‘dhclient‘ installed. The DHCP client utility may very from system to system. - - # dhclient eth0 - -![Request IP Address from DHCP](http://www.tecmint.com/wp-content/uploads/2015/04/IP.png) -Request IP Address from DHCP - -Now the `'inet addr:'` field shows an IPv4 address that falls within the scope of what was configured for the 172.27.60.0 network. Also notice that the proper broadcast address was handed out as well as subnet mask for this network. - -Things are looking promising but let’s check the server to see if it was actually the place where this machine received this new IP address. To accomplish this task, the server’s system log file will be consulted. While the entire log file may contain hundreds of thousands of entries, only a few are necessary for confirming that the server is working properly. Rather than using a full text editor, this time a utility known as ‘tail‘ will be used to only show the last few lines of the log file. - - # tail /var/log/syslog - -![Check DHCP Logs](http://www.tecmint.com/wp-content/uploads/2015/04/DHCP-Log.png) -Check DHCP Logs - -Voila! The server recorded handing out an address to this host (HRTDEBXENSRV). 
It is a safe assumption at this point that the server is working as intended and handing out the appropriate addresses for the networks that it is an authority. At this point the DHCP server is up and running. Configure the other networks, troubleshoot, and secure as necessary. - -Enjoy the newly functioning ISC-DHCP-Server and tune in later for more Debian tutorials. In the not too distant future there will be an article on Bind9 and DDNS that will tie into this article. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/install-and-configure-multihomed-isc-dhcp-server-on-debian-linux/ - -作者:[Rob Turner][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/robturner/ -[1]:http://www.tecmint.com/10-lsof-command-examples-in-linux/ -[2]:http://www.tecmint.com/ifconfig-command-examples/ \ No newline at end of file diff --git a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md deleted file mode 100644 index ae8df117ef..0000000000 --- a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md +++ /dev/null @@ -1,429 +0,0 @@ -Installation Guide for Puppet on Ubuntu 15.04 -================================================================================ -Hi everyone, today in this article we'll learn how to install puppet to manage your server infrastructure running ubuntu 15.04. Puppet is an open source software configuration management tool which is developed and maintained by Puppet Labs that allows us to automate the provisioning, configuration and management of a server infrastructure. 
Whether we're managing just a few servers or thousands of physical and virtual machines to orchestration and reporting, puppet automates tasks that system administrators often do manually which frees up time and mental space so sysadmins can work on improving other aspects of your overall setup. It ensures consistency, reliability and stability of the automated jobs processed. It facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available in two solutions configuration management and data center automation. They are **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. Whereas puppet enterprise edition is a proven commercial solution for diverse enterprise IT environments which lets us get all the benefits of open source puppet, plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes. - -In this tutorial, we will cover how to install open source puppet in an agent and master setup running ubuntu 15.04 linux distribution. Here, Puppet master is a server from where all the configurations will be controlled and managed and all our remaining servers will be puppet agent nodes, which is configured according to the configuration of puppet master server. Here are some easy steps to install and configure puppet to manage our server infrastructure running Ubuntu 15.04. - -### 1. Setting up Hosts ### - -In this tutorial, we'll use two machines, one as puppet master server and another as puppet node agent both running ubuntu 15.04 "Vivid Vervet" in both the machines. 
Here is the infrastructure that we're going to use for this tutorial. - -Puppet master server with IP 45.55.88.6 and hostname: puppetmaster -Puppet agent node with IP 45.55.86.39 and hostname: puppetnode - -Now we'll add an entry for both machines to /etc/hosts, on both the agent node and the master server. - - # nano /etc/hosts - - 45.55.88.6 puppetmaster.example.com puppetmaster - 45.55.86.39 puppetnode.example.com puppetnode - -Please note that the Puppet master must be reachable on port 8140, so we'll need to open port 8140 on it. - -### 2. Updating Time with NTP ### - -Puppet nodes need accurate system time to avoid problems when the master issues agent certificates: certificates can appear to be expired if there is a time difference, so the clocks of the master and the agent node must be in sync with each other. To sync the time, we'll update it with NTP. To do so, here's the command that we need to run on both the master and the agent node. - - # ntpdate pool.ntp.org - - 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec - -Now, we'll update our local repository index and install ntp as follows. - - # apt-get update && sudo apt-get -y install ntp ; service ntp restart - -### 3. Puppet Master Package Installation ### - -There are many ways to install open source Puppet. In this tutorial, we'll download and install a Debian binary package named **puppetlabs-release**, packaged by Puppet Labs, which adds the repository that provides the **puppetmaster-passenger** package. The puppetmaster-passenger package includes the Puppet master with the Apache web server. So, we'll now download the Puppet Labs package. - - # cd /tmp/ - # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - - --2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - Resolving apt.puppetlabs.com (apt.puppetlabs.com)...
192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d - Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. - HTTP request sent, awaiting response... 200 OK - Length: 7384 (7.2K) [application/x-debian-package] - Saving to: ‘puppetlabs-release-trusty.deb’ - - puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s - - 2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] - -After the download has completed, we'll install the package. - - # dpkg -i puppetlabs-release-trusty.deb - - Selecting previously unselected package puppetlabs-release. - (Reading database ... 85899 files and directories currently installed.) - Preparing to unpack puppetlabs-release-trusty.deb ... - Unpacking puppetlabs-release (1.0-11) ... - Setting up puppetlabs-release (1.0-11) ... - -Then, we'll update the local repository index using the apt package manager. - - # apt-get update - -Then, we'll install the puppetmaster-passenger package by running the command below. - - # apt-get install puppetmaster-passenger - -**Note**: While installing we may get the error **Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')**, but we don't need to worry: as it says, the templatedir setting is deprecated, so we'll simply disable that setting in the configuration later. :) - -To check whether the Puppet master has been installed successfully on our master server or not, we'll check its version. - - # puppet --version - - 3.8.1 - -We have successfully installed the Puppet master package on our master box. As we are using Passenger with Apache, the Puppet master process is controlled by the Apache server, which means it runs when Apache is running. - -Before continuing, we'll need to stop the Puppet master by stopping the apache2 service.
- - # systemctl stop apache2 - -### 4. Master Version Lock with Apt ### - -As we have Puppet version 3.8.1, we need to lock the Puppet version against updates, as an update could break our configuration. For that, we'll use apt's pinning feature. To do so, we'll create a new file **/etc/apt/preferences.d/00-puppet.pref** using our favorite text editor. - - # nano /etc/apt/preferences.d/00-puppet.pref - -Then, we'll add the following entries to the newly created file: - - # /etc/apt/preferences.d/00-puppet.pref - Package: puppet puppet-common puppetmaster-passenger - Pin: version 3.8* - Pin-Priority: 501 - -Now, apt will not update Puppet while running updates on the system. - -### 5. Configuring Puppet ### - -The Puppet master acts as a certificate authority and must generate its own certificates, which are used to sign agent certificate requests. First of all, we'll remove any existing SSL certificates that were created during the package installation. The default location of Puppet's SSL certificates is /var/lib/puppet/ssl, so we'll remove the entire ssl directory using the rm command. - - # rm -rf /var/lib/puppet/ssl - -Then, we'll configure the certificate. When creating the Puppet master's certificate, we need to include every DNS name at which agent nodes can contact the master. So, we'll edit the master's puppet.conf using our favorite text editor. - - # nano /etc/puppet/puppet.conf - -The file looks as shown below. - - [main] - logdir=/var/log/puppet - vardir=/var/lib/puppet - ssldir=/var/lib/puppet/ssl - rundir=/var/run/puppet - factpath=$vardir/lib/facter - templatedir=$confdir/templates - - [master] - # These are needed when the puppetmaster is run by passenger - # and can safely be removed if webrick is used. - ssl_client_header = SSL_CLIENT_S_DN - ssl_client_verify_header = SSL_CLIENT_VERIFY - -Here, we'll need to comment out the templatedir line to disable that setting, as it has already been deprecated.
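If you prefer to script that edit rather than open an editor, a one-line sed does it. The sketch below works on a scratch copy so it is safe to try anywhere; on a real Puppet master you would point sed at /etc/puppet/puppet.conf instead.

```shell
# Sketch: comment out the deprecated templatedir setting with sed.
# A minimal sample config is written to a temp file for illustration;
# on a real master, replace "$conf" with /etc/puppet/puppet.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[main]
logdir=/var/log/puppet
templatedir=$confdir/templates
EOF
sed -i 's/^templatedir/#templatedir/' "$conf"
grep '^#templatedir' "$conf"
```

The final grep confirms the line is now commented out (`#templatedir=$confdir/templates`) while the rest of the file is untouched.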
After that, we'll add the following lines at the end of the file under [main]. - - server = puppetmaster - environment = production - runinterval = 1h - strict_variables = true - certname = puppetmaster - dns_alt_names = puppetmaster, puppetmaster.example.com - -This configuration file has many options which might be useful for setting up your own configuration. A full description of the file is available at Puppet Labs [Main Config File (puppet.conf)][1]. - -After editing the file, we'll save it and exit. - -Now, we'll generate the new CA certificates by running the following command. - - # puppet master --verbose --no-daemonize - - Info: Creating a new SSL key for ca - Info: Creating a new SSL certificate request for ca - Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78 - ... - Notice: puppetmaster has a waiting certificate request - Notice: Signed certificate request for puppetmaster - Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem' - Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem' - Notice: Starting Puppet master version 3.8.1 - ^CNotice: Caught INT; storing stop - Notice: Processing stop - -Once we see **Notice: Starting Puppet master version 3.8.1**, the certificate setup is complete. Then we press CTRL-C to return to the shell. - -If we want to look at the information of the certificate that was just created, we can get the list by running the following command. - - # puppet cert list --all - - + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") - -### 6.
Creating a Puppet Manifest ### - -The default location of the main manifest is /etc/puppet/manifests/site.pp. The main manifest file defines the configuration that is applied on the Puppet agent nodes. Now, we'll create the manifest file by running the following command. - - # nano /etc/puppet/manifests/site.pp - -Then, we'll add the following lines of configuration to the file that we just opened. - - # execute 'apt-get update' - exec { 'apt-update': # exec resource named 'apt-update' - command => '/usr/bin/apt-get update' # command this resource will run - } - - # install apache2 package - package { 'apache2': - require => Exec['apt-update'], # require 'apt-update' before installing - ensure => installed, - } - - # ensure apache2 service is running - service { 'apache2': - ensure => running, - } - -The above lines of configuration install the Apache web server on the agent nodes. - -### 7. Starting Master Service ### - -We are now ready to start the Puppet master. We can start it by starting the apache2 service. - - # systemctl start apache2 - -Here, our Puppet master is running, but it isn't managing any agent nodes yet. Now, we'll add the Puppet agent nodes to the master. - -**Note**: If you get the error **Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.**, then there is some problem with the Apache server, and we can see what exactly happened by running **apachectl start** as root or with sudo. While writing this tutorial, we hit a misconfiguration of the certificates in the **/etc/apache2/sites-enabled/puppetmaster.conf** file: we replaced **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem** with **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** and commented out the **SSLCertificateKeyFile** line. Then we reran the above command to start the Apache server. - -### 8.
Puppet Agent Package Installation ### - -Now that our Puppet master is ready, it needs agents to manage, so we'll install the Puppet agent on the nodes. We'll need to install the Puppet agent on every node in our infrastructure that we want the Puppet master to manage, and we'll need to make sure that our agent nodes have been added to DNS. Now, we'll install the latest Puppet agent on our agent node, i.e. puppetnode.example.com. - -We'll run the following command to download the Puppet Labs package on our Puppet agent node. - - # cd /tmp/ - # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - - --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb - Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d - Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. - HTTP request sent, awaiting response... 200 OK - Length: 7384 (7.2K) [application/x-debian-package] - Saving to: ‘puppetlabs-release-trusty.deb’ - - puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s - - 2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] - -Then, as we're running Ubuntu 15.04, we'll use the Debian package manager to install it. - - # dpkg -i puppetlabs-release-trusty.deb - -Now, we'll update the repository index using apt-get. - - # apt-get update - -Finally, we'll install the Puppet agent directly from the remote repository. - - # apt-get install puppet - -The Puppet agent is disabled by default, so we'll need to enable it. To do so we'll edit the /etc/default/puppet file using a text editor. - - # nano /etc/default/puppet - -Then, we'll change the value of **START** to "yes" as shown below. - - START=yes - -Then, we'll save and exit the file. - -### 9.
Agent Version Lock with Apt ### - -As we have Puppet version 3.8.1, we need to lock the Puppet version against updates, as an update could break our configuration. For that, we'll use apt's pinning feature. To do so, we'll create a file /etc/apt/preferences.d/00-puppet.pref using our favorite text editor. - - # nano /etc/apt/preferences.d/00-puppet.pref - -Then, we'll add the following entries to the newly created file: - - # /etc/apt/preferences.d/00-puppet.pref - Package: puppet puppet-common - Pin: version 3.8* - Pin-Priority: 501 - -Now, apt will not update Puppet while running updates on the system. - -### 10. Configuring Puppet Node Agent ### - -Next, we must make a few configuration changes before running the agent. To do so, we'll edit the agent's puppet.conf - - # nano /etc/puppet/puppet.conf - -It will look exactly like the Puppet master's initial configuration file. - -This time we'll also comment out the **templatedir** line. Then we'll delete the [master] section and all of the lines below it. - -Assuming that the Puppet master is reachable at the short hostname "puppetmaster", the agent should be able to connect to the master. If not, we'll need to use its fully qualified domain name, i.e. puppetmaster.example.com. - - [agent] - server = puppetmaster.example.com - certname = puppetnode.example.com - -After adding this, it will look like this. - - [main] - logdir=/var/log/puppet - vardir=/var/lib/puppet - ssldir=/var/lib/puppet/ssl - rundir=/var/run/puppet - factpath=$vardir/lib/facter - #templatedir=$confdir/templates - - [agent] - server = puppetmaster.example.com - certname = puppetnode.example.com - -Once done with that, we'll save and exit. - -Next, we'll start the Puppet agent on our Ubuntu 15.04 node. To start the Puppet agent, we'll run the following command.
- - # systemctl start puppet - -If everything is configured properly and went as expected, we should not see any output from the above command. When an agent runs for the first time, it generates an SSL certificate and sends a signing request to the Puppet master; once the master signs the agent's certificate, it will be able to communicate with the agent node. - -**Note**: If you are adding your first node, it is recommended that you attempt to sign the certificate on the Puppet master before adding your other agents. Once you have verified that everything works properly, you can go back and add the remaining agent nodes. - -### 11. Signing Certificate Requests on Master ### - -When the Puppet agent runs for the first time, it generates an SSL certificate and sends a signing request to the master server. Before the master is able to communicate with and control the agent node, it must sign that specific agent node's certificate. - -To get the list of certificate requests, we'll run the following command on the Puppet master server. - - # puppet cert list - - "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2 - -As we just set up our first agent node, we see one request, with the agent node's domain name as the hostname. - -Note that there is no + in front of it, which indicates that it has not been signed yet. - -Now, we'll sign the certificate request. To sign a certificate request, we simply run **puppet cert sign** with the **hostname**, as shown below.
- - # puppet cert sign puppetnode.example.com - - Notice: Signed certificate request for puppetnode.example.com - Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem' - -The Puppet master can now communicate with and control the node that the signed certificate belongs to. - -If we want to sign all of the current requests, we can use the --all option as shown below. - - # puppet cert sign --all - -### Removing a Puppet Certificate ### - -If we want to remove a host, or rebuild a host and then add it back, we will want to revoke the host's certificate on the Puppet master. To do this, we use the clean action as follows. - - # puppet cert clean hostname - - Notice: Revoked certificate with serial 5 - Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem' - Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem' - -If we want to view all of the requests, signed and unsigned, we run the following command: - - # puppet cert list --all - - + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com") - -### 12. Deploying a Puppet Manifest ### - -After we configure and complete the Puppet manifest, we'll deploy the manifest to the agent nodes. To apply and load the main manifest, we can simply run the following command on the agent node.
- - # puppet agent --test - - Info: Retrieving pluginfacts - Info: Retrieving plugin - Info: Caching catalog for puppetnode.example.com - Info: Applying configuration version '1434563858' - Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully - Notice: Finished catalog run in 10.53 seconds - -This immediately shows us how the main manifest affects a single server. - -If we want to run a Puppet manifest that is not related to the main manifest, we can simply use puppet apply followed by the manifest file path. It applies the manifest only to the node that we run the apply on. - - # puppet apply /etc/puppet/manifests/test.pp - -### 13. Configuring Manifest for a Specific Node ### - -If we want to deploy a manifest only to a specific node, we'll configure the manifest as follows. - -We'll edit the manifest on the master server using a text editor. - - # nano /etc/puppet/manifests/site.pp - -Now, we'll add the following lines there. - - node 'puppetnode', 'puppetnode1' { - # execute 'apt-get update' - exec { 'apt-update': # exec resource named 'apt-update' - command => '/usr/bin/apt-get update' # command this resource will run - } - - # install apache2 package - package { 'apache2': - require => Exec['apt-update'], # require 'apt-update' before installing - ensure => installed, - } - - # ensure apache2 service is running - service { 'apache2': - ensure => running, - } - } - -Here, the above configuration will install and deploy the Apache web server only to the two specified nodes with the short names puppetnode and puppetnode1. We can add more nodes that we want the manifest deployed to specifically. - -### 14. Configuring Manifest with a Module ### - -Modules are useful for grouping tasks together; many are available in the Puppet community, and anyone can contribute more. - -On the Puppet master, we'll install the **puppetlabs-apache** module using the puppet module command.
- - # puppet module install puppetlabs-apache - -**Warning**: Please do not use this module on an existing Apache setup, as it will purge any Apache configuration that is not managed by Puppet. - -Now we'll edit the main manifest, i.e. **site.pp**, using a text editor. - - # nano /etc/puppet/manifests/site.pp - -Now add the following lines to install Apache on puppetnode. - - node 'puppetnode' { - class { 'apache': } # use apache module - apache::vhost { 'example.com': # define vhost resource - port => '80', - docroot => '/var/www/html' - } - } - -Then we'll save and exit. Then, we'll rerun the manifest on the agents to deploy the configuration to our infrastructure. - -### Conclusion ### - -Finally, we have successfully installed Puppet to manage our server infrastructure running the Ubuntu 15.04 "Vivid Vervet" Linux operating system. We learned how Puppet works, how to write a manifest configuration, how to communicate with nodes, and how to deploy a manifest to the agent nodes with secure SSL certificates. Controlling, managing and configuring repeated tasks across any number of nodes is very easy with the Puppet open source software configuration management tool. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you !
Enjoy :-) - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/ - -作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html diff --git a/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md b/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md index 515b15844a..7a56750804 100644 --- a/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md +++ b/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md @@ -1,3 +1,4 @@ +translated by iov-wang How to Install OsTicket Ticketing System in Fedora 22 / Centos 7 ================================================================================ In this article, we'll learn how to set up a help desk ticketing system with osTicket on a machine or server running Fedora 22 or CentOS 7 as the operating system. osTicket is a popular free and open source customer support ticketing system developed and maintained by [Enhancesoft][1] and its contributors. osTicket is an excellent solution for a help and support ticketing system, enabling better communication and support assistance with clients and customers. It can easily integrate inquiries created via email, phone and web-based forms into a beautiful multi-user web interface. osTicket makes it easy to manage, organize and log all our support requests and responses in one single place. It is a simple, lightweight, reliable, open source, web-based help desk ticketing system that is easy to set up and use.
@@ -176,4 +177,4 @@ via: http://linoxide.com/linux-how-to/install-osticket-fedora-22-centos-7/ [a]:http://linoxide.com/author/arunp/ [1]:http://www.enhancesoft.com/ [2]:http://osticket.com/download -[3]:https://github.com/osTicket/osTicket-1.8/releases \ No newline at end of file +[3]:https://github.com/osTicket/osTicket-1.8/releases diff --git a/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md b/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md index c7810d06ef..aca2b04bba 100644 --- a/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md +++ b/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md @@ -1,3 +1,4 @@ +translated by ivo-wang How to Configure OpenNMS on CentOS 7.x ================================================================================ Systems management and monitoring services are very important: they provide the information that allows us to make informed decisions about our infrastructure. To make sure the network is running at its best and to minimize network downtime, we need to improve application performance. So, in this article we will walk through the step-by-step procedure to set up OpenNMS in your IT infrastructure. OpenNMS is a free, open source, enterprise-level network monitoring and management platform that provides information to allow us to make decisions in regards to future network and capacity planning.
@@ -216,4 +217,4 @@ via: http://linoxide.com/monitoring-2/install-configure-opennms-centos-7-x/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://linoxide.com/author/kashifs/ \ No newline at end of file +[a]:http://linoxide.com/author/kashifs/ diff --git a/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md b/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md index fe4a784d5a..736a6577de 100644 --- a/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md +++ b/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md @@ -1,3 +1,4 @@ +translated by ivo-wang How to install Suricata intrusion detection system on Linux ================================================================================ With incessant security threats, intrusion detection system (IDS) has become one of the most critical requirements in today's data center environments. However, as more and more servers upgrade their NICs to 10GB/40GB Ethernet, it is increasingly difficult to implement compute-intensive intrusion detection on commodity hardware at line rates. One approach to scaling IDS performance is **multi-threaded IDS**, where CPU-intensive deep packet inspection workload is parallelized into multiple concurrent tasks. Such parallelized inspection can exploit multi-core hardware to scale up IDS throughput easily. Two well-known open-source efforts in this area are [Suricata][1] and [Bro][2]. 
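The scaling idea described above — parallelizing CPU-intensive inspection across cores — can be illustrated with a toy shell pipeline. This is only a stand-in for what a multi-threaded IDS does internally with per-thread packet processing, not the actual architecture of Suricata or Bro; the file names and the "signature" are invented for the demonstration.

```shell
# Toy illustration of multi-threaded IDS scaling: independent work units
# (plain files standing in for packet streams) are fanned out to parallel
# workers, up to one per CPU core, each "inspecting" for a signature.
workdir=$(mktemp -d)
for i in 1 2 3 4; do
    printf 'GET /index.html\nEVIL-PAYLOAD\n' > "$workdir/stream$i"
done
# -P runs the grep "workers" concurrently; -l prints each matching stream.
ls "$workdir"/stream* | xargs -P "$(nproc)" -I{} grep -l 'EVIL-PAYLOAD' {} | wc -l
```

Because the four inspections are independent, they can proceed on four cores at once, which is exactly the property a parallelized IDS exploits to keep up with 10G/40G line rates.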
@@ -194,4 +195,4 @@ via: http://xmodulo.com/install-suricata-intrusion-detection-system-linux.html [6]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Runmodes [7]:http://ask.xmodulo.com/view-threads-process-linux.html [8]:http://xmodulo.com/how-to-compile-and-install-snort-from-source-code-on-ubuntu.html -[9]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki \ No newline at end of file +[9]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki diff --git a/sources/tech/20150917 A Repository with 44 Years of Unix Evolution.md b/sources/tech/20150917 A Repository with 44 Years of Unix Evolution.md deleted file mode 100644 index 807cedf01d..0000000000 --- a/sources/tech/20150917 A Repository with 44 Years of Unix Evolution.md +++ /dev/null @@ -1,202 +0,0 @@ -A Repository with 44 Years of Unix Evolution -================================================================================ -### Abstract ### - -The evolution of the Unix operating system is made available as a version-control repository, covering the period from its inception in 1972 as a five thousand line kernel, to 2015 as a widely-used 26 million line system. The repository contains 659 thousand commits and 2306 merges. The repository employs the commonly used Git system for its storage, and is hosted on the popular GitHub archive. It has been created by synthesizing with custom software 24 snapshots of systems developed at Bell Labs, Berkeley University, and the 386BSD team, two legacy repositories, and the modern repository of the open source FreeBSD system. In total, 850 individual contributors are identified, the early ones through primary research. The data set can be used for empirical research in software engineering, information systems, and software archaeology. 
- -### 1 Introduction ### - -The Unix operating system stands out as a major engineering breakthrough due to its exemplary design, its numerous technical contributions, its development model, and its widespread use. The design of the Unix programming environment has been characterized as one offering unusual simplicity, power, and elegance [[1][1]]. On the technical side, features that can be directly attributed to Unix or were popularized by it include [[2][2]]: the portable implementation of the kernel in a high level language; a hierarchical file system; compatible file, device, networking, and inter-process I/O; the pipes and filters architecture; virtual file systems; and the shell as a user-selectable regular process. A large community contributed software to Unix from its early days [[3][3]], [[4][4],pp. 65-72]. This community grew immensely over time and worked using what are now termed open source software development methods [[5][5],pp. 440-442]. Unix and its intellectual descendants have also helped the spread of the C and C++ programming languages, parser and lexical analyzer generators (*yacc, lex*), document preparation tools (*troff, eqn, tbl*), scripting languages (*awk, sed, Perl*), TCP/IP networking, and configuration management systems (*SCCS, RCS, Subversion, Git*), while also forming a large part of the modern internet infrastructure and the web. - -Luckily, important Unix material of historical importance has survived and is nowadays openly available. Although Unix was initially distributed with relatively restrictive licenses, the most significant parts of its early development have been released by one of its right-holders (Caldera International) under a liberal license. Combining these parts with software that was developed or released as open source software by the University of California, Berkeley and the FreeBSD Project provides coverage of the system's development over a period ranging from June 20th 1972 until today. 
- -Curating and processing available snapshots as well as old and modern configuration management repositories allows the reconstruction of a new synthetic Git repository that combines under a single roof most of the available data. This repository documents in a digital form the detailed evolution of an important digital artefact over a period of 44 years. The following sections describe the repository's structure and contents (Section [II][6]), the way it was created (Section [III][7]), and how it can be used (Section [IV][8]). - -### 2 Data Overview ### - -The 1GB Unix history Git repository is made available for cloning on [GitHub][9].[1][10] Currently[2][11] the repository contains 659 thousand commits and 2306 merges from about 850 contributors. The contributors include 23 from the Bell Labs staff, 158 from Berkeley's Computer Systems Research Group (CSRG), and 660 from the FreeBSD Project. - -The repository starts its life at a tag identified as *Epoch*, which contains only licensing information and its modern README file. Various tag and branch names identify points of significance. - -- *Research-VX* tags correspond to six research editions that came out of Bell Labs. These start with *Research-V1* (4768 lines of PDP-11 assembly) and end with *Research-V7* (1820 mostly C files, 324kLOC). -- *Bell-32V* is the port of the 7th Edition Unix to the DEC/VAX architecture. -- *BSD-X* tags correspond to 15 snapshots released from Berkeley. -- *386BSD-X* tags correspond to two open source versions of the system, with the Intel 386 architecture kernel code mainly written by Lynne and William Jolitz. -- *FreeBSD-release/X* tags and branches mark 116 releases coming from the FreeBSD project. 
- -In addition, branches with a *-Snapshot-Development* suffix denote commits that have been synthesized from a time-ordered sequence of a snapshot's files, while tags with a *-VCS-Development* suffix mark the point along an imported version control history branch where a particular release occurred. - -The repository's history includes commits from the earliest days of the system's development, such as the following. - - commit c9f643f59434f14f774d61ee3856972b8c3905b1 - Author: Dennis Ritchie - Date: Mon Dec 2 18:18:02 1974 -0500 - Research V5 development - Work on file usr/sys/dmr/kl.c - -Merges between releases that happened along the system's evolution, such as the development of BSD 3 from BSD 2 and Unix 32/V, are also correctly represented in the Git repository as graph nodes with two parents. - -More importantly, the repository is constructed in a way that allows *git blame*, which annotates source code lines with the version, date, and author associated with their first appearance, to produce the expected code provenance results. For example, checking out the *BSD-4* tag, and running git blame on the kernel's *pipe.c* file will show lines written by Ken Thompson in 1974, 1975, and 1979, and by Bill Joy in 1980. This allows the automatic (though computationally expensive) detection of the code's provenance at any point of time. - -![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/provenance.png) - -Figure 1: Code provenance across significant Unix releases. - -As can be seen in Figure [1][12], a modern version of Unix (FreeBSD 9) still contains visible chunks of code from BSD 4.3, BSD 4.3 Net/2, and FreeBSD 2.0. Interestingly, the Figure shows that code developed during the frantic dash to create an open source operating system out of the code released by Berkeley (386BSD and FreeBSD 1.0) does not seem to have survived. 
The oldest code in FreeBSD 9 appears to be an 18-line sequence in the C library file timezone.c, which can also be found in the 7th Edition Unix file with the same name and a time stamp of January 10th, 1979 - 36 years ago. - -### 3 Data Collection and Processing ### - -The goal of the project is to consolidate data concerning the evolution of Unix in a form that helps the study of the system's evolution, by entering them into a modern revision repository. This involves collecting the data, curating them, and synthesizing them into a single Git repository. - -![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/branches.png) - -Figure 2: Imported Unix snapshots, repositories, and their mergers. - -The project is based on three types of data (see Figure [2][13]). First, snapshots of early released versions, which were obtained from the [Unix Heritage Society archive][14],[3][15] the [CD-ROM images][16] containing the full source archives of CSRG,[4][17] the [OldLinux site][18],[5][19] and the [FreeBSD archive][20].[6][21] Second, past and current repositories, namely the CSRG SCCS [[6][22]] repository, the FreeBSD 1 CVS repository, and the [Git mirror of modern FreeBSD development][23].[7][24] The first two were obtained from the same sources as the corresponding snapshots. - -The last, and most labour intensive, source of data was **primary research**. The release snapshots do not provide information regarding their ancestors and the contributors of each file. Therefore, these pieces of information had to be determined through primary research. 
The authorship information was mainly obtained by reading author biographies, research papers, internal memos, and old documentation scans; by reading and automatically processing source code and manual page markup; by communicating via email with people who were there at the time; by posting a query on the Unix *StackExchange* site; by looking at the location of files (in early editions the kernel source code was split into `usr/sys/dmr` and `/usr/sys/ken`); and by propagating authorship from research papers and manual pages to source code and from one release to others. (Interestingly, the 1st and 2nd Research Edition manual pages have an "owner" section, listing the person (e.g. *ken*) associated with the corresponding system command, file, system call, or library function. This section was not there in the 4th Edition, and resurfaced as the "Author" section in BSD releases.) Precise details regarding the source of the authorship information are documented in the project's files that are used for mapping Unix source code files to their authors and the corresponding commit messages. Finally, information regarding merges between source code bases was obtained from a [BSD family tree maintained by the NetBSD project][25].[8][26]
-
-The software and data files that were developed as part of this project are [available online][27],[9][28] and, with appropriate network, CPU, and disk resources, they can be used to recreate the repository from scratch. The authorship information for major releases is stored in files under the project's `author-path` directory. These contain lines with a regular expression for a file path, followed by the identifier of the corresponding author. Multiple authors can also be specified. The regular expressions are processed sequentially, so that a catch-all expression at the end of the file can specify a release's default authors. 
To avoid repetition, a separate file with a `.au` suffix is used to map author identifiers into their names and emails. One such file has been created for every community associated with the system's evolution: Bell Labs, Berkeley, 386BSD, and FreeBSD. For the sake of authenticity, emails for the early Bell Labs releases are listed in UUCP notation (e.g. `research!ken`). The FreeBSD author identifier map, required for importing the early CVS repository, was constructed by extracting the corresponding data from the project's modern Git repository. In total the commented authorship files (828 rules) comprise 1107 lines, and there are another 640 lines mapping author identifiers to names. - -The curation of the project's data sources has been codified into a 168-line `Makefile`. It involves the following steps. - -**Fetching** Copying and cloning about 11GB of images, archives, and repositories from remote sites. - -**Tooling** Obtaining an archiver for old PDP-11 archives from 2.9 BSD, and adjusting it to compile under modern versions of Unix; compiling the 4.3 BSD *compress* program, which is no longer part of modern Unix systems, in order to decompress the 386BSD distributions. - -**Organizing** Unpacking archives using tar and *cpio*; combining three 6th Research Edition directories; unpacking all 1 BSD archives using the old PDP-11 archiver; mounting CD-ROM images so that they can be processed as file systems; combining the 8 and 62 386BSD floppy disk images into two separate files. 
-
-**Cleaning** Restoring the 1st Research Edition kernel source code files, which were obtained from printouts through optical character recognition, into a format close to their original state; patching some 7th Research Edition source code files; removing metadata files and other files that were added after a release, to avoid obtaining erroneous time stamp information; patching corrupted SCCS files; processing the early FreeBSD CVS repository by removing CVS symbols assigned to multiple revisions with a custom Perl script, deleting CVS *Attic* files clashing with live ones, and converting the CVS repository into a Git one using *cvs2svn*.
-
-An interesting part of the repository representation is how snapshots are imported and linked together in a way that allows *git blame* to perform its magic. Snapshots are imported into the repository as sequential commits based on the time stamp of each file. When all files have been imported the repository is tagged with the name of the corresponding release. At that point one could delete those files, and begin the import of the next snapshot. Note that the *git blame* command works by traversing backwards a repository's history, and using heuristics to detect code moving and being copied within or across files. Consequently, deleting each snapshot's files would create a discontinuity between snapshots, and prevent the tracing of code across them.
-
-Instead, before the next snapshot is imported, all the files of the preceding snapshot are moved into a hidden look-aside directory named `.ref` (reference). They remain there until all files of the next snapshot have been imported, at which point they are deleted. Because every file in the `.ref` directory matches exactly an original file, *git blame* can determine how source code moves from one version to the next via the `.ref` files, without ever displaying them. 
To further help the detection of code provenance, and to increase the representation's realism, each release is represented as a merge between the branch with the incremental file additions (*-Development*) and the preceding release. - -For a period in the 1980s, only a subset of the files developed at Berkeley were under SCCS version control. During that period our unified repository contains imports of both the SCCS commits, and the snapshots' incremental additions. At the point of each release, the SCCS commit with the nearest time stamp is found and is marked as a merge with the release's incremental import branch. These merges can be seen in the middle of Figure [2][29]. - -The synthesis of the various data sources into a single repository is mainly performed by two scripts. A 780-line Perl script (`import-dir.pl`) can export the (real or synthesized) commit history from a single data source (snapshot directory, SCCS repository, or Git repository) in the *Git fast export* format. The output is a simple text format that Git tools use to import and export commits. Among other things, the script takes as arguments the mapping of files to contributors, the mapping between contributor login names and their full names, the commit(s) from which the import will be merged, which files to process and which to ignore, and the handling of "reference" files. A 450-line shell script creates the Git repository and calls the Perl script with appropriate arguments to import each one of the 27 available historical data sources. The shell script also runs 30 tests that compare the repository at specific tags against the corresponding data sources, verify the appearance and disappearance of look-aside directories, and look for regressions in the count of tree branches and merges and the output of *git blame* and *git log*. Finally, *git* is called to garbage-collect and compress the repository from its initial 6GB size down to the distributed 1GB. 
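The core of the snapshot import described above — resolving each file's author through sequentially applied path rules, then committing files in time stamp order — can be sketched as follows. This is a simplified illustration, not the project's actual `import-dir.pl`; the rules, identifiers, and file names are hypothetical:

```python
import re

# Sequential path-to-author rules, in the spirit of the project's
# author-path files: the first matching regular expression wins,
# and a catch-all at the end supplies the release's default author.
# (These particular rules and identifiers are illustrative only.)
AUTHOR_RULES = [
    (re.compile(r"usr/sys/dmr/"), "dmr"),
    (re.compile(r"usr/sys/ken/"), "ken"),
    (re.compile(r".*"), "res"),  # catch-all default
]

# .au-style map from author identifier to name and UUCP-notation email.
AUTHOR_MAP = {
    "dmr": ("Dennis Ritchie", "research!dmr"),
    "ken": ("Ken Thompson", "research!ken"),
    "res": ("Research", "research!unknown"),
}

def author_of(path):
    """Return the author identifier of the first rule matching path."""
    for pattern, ident in AUTHOR_RULES:
        if pattern.match(path):
            return ident

def import_snapshot(files):
    """Given {path: timestamp}, emit one synthetic commit per file,
    ordered by time stamp, as (timestamp, name, email, path) tuples."""
    commits = []
    for path, mtime in sorted(files.items(), key=lambda item: item[1]):
        name, email = AUTHOR_MAP[author_of(path)]
        commits.append((mtime, name, email, path))
    return commits
```

In the real pipeline the resulting commits are written out in the *git fast export* text format and piped into *git fast-import*, with the release tag applied once the last file is in.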
- -### 4 Data Uses ### - -The data set can be used for empirical research in software engineering, information systems, and software archeology. Through its unique uninterrupted coverage of a period of more than 40 years, it can inform work on software evolution and handovers across generations. With thousandfold increases in processing speed and million-fold increases in storage capacity during that time, the data set can also be used to study the co-evolution of software and hardware technology. The move of the software's development from research labs, to academia, and to the open source community can be used to study the effects of organizational culture on software development. The repository can also be used to study how notable individuals, such as Turing Award winners (Dennis Ritchie and Ken Thompson) and captains of the IT industry (Bill Joy and Eric Schmidt), actually programmed. Another phenomenon worthy of study concerns the longevity of code, either at the level of individual lines, or as complete systems that were at times distributed with Unix (Ingres, Lisp, Pascal, Ratfor, Snobol, TMG), as well as the factors that lead to code's survival or demise. Finally, because the data set stresses Git, the underlying software repository storage technology, to its limits, it can be used to drive engineering progress in the field of revision management systems. - -![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/metrics.png) - -Figure 3: Code style evolution along Unix releases. - -Figure [3][30], which depicts trend lines (obtained with R's local polynomial regression fitting function) of some interesting code metrics along 36 major releases of Unix, demonstrates the evolution of code style and programming language use over very long timescales. This evolution can be driven by software and hardware technology affordances and requirements, software construction theory, and even social forces. 
The dates in the Figure have been calculated as the average date of all files appearing in a given release. As the Figure shows, over the past 40 years the mean length of identifiers and file names has steadily increased from 4 and 6 characters to 7 and 11 characters, respectively. We can also see less steady increases in the number of comments and decreases in the use of the *goto* statement, as well as the virtual disappearance of the *register* type modifier.
-
-### 5 Further Work ###
-
-Many things can be done to increase the repository's faithfulness and usefulness. Given that the build process is shared as open source code, it is easy to contribute additions and fixes through GitHub pull requests. The most useful community contribution would be to increase the coverage of imported snapshot files that are attributed to a specific author. Currently, about 90 thousand files (out of a total of 160 thousand) are assigned an author through a default rule. Similarly, there are about 250 authors (primarily early FreeBSD ones) for whom only the identifier is known. Both are listed in the build repository's `unmatched` directory, and contributions are welcome. Furthermore, the BSD SCCS and the FreeBSD CVS commits that share the same author and time stamp can be coalesced into a single Git commit. Support can be added for importing the SCCS file comment fields, in order to bring into the repository the corresponding metadata. Finally, and most importantly, more branches of open source systems can be added, such as NetBSD, OpenBSD, DragonFlyBSD, and *illumos*. Ideally, current right holders of other important historical Unix releases, such as System III, System V, NeXTSTEP, and SunOS, will release their systems under a license that would allow their incorporation into this repository for study.
-
-#### Acknowledgements ####
-
-The author thanks the many individuals who contributed to the effort. Brian W. Kernighan, Doug McIlroy, and Arnold D. 
Robbins helped with Bell Labs login identifiers. Clem Cole, Era Eriksson, Mary Ann Horton, Kirk McKusick, Jeremy C. Reed, Ingo Schwarze, and Anatole Shaw helped with BSD login identifiers. The BSD SCCS import code is based on work by H. Merijn Brand and Jonathan Gray. - -This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thalis - Athens University of Economics and Business - Software Engineering Research Platform. - -### References ### - -[[1]][31] - M. D. McIlroy, E. N. Pinson, and B. A. Tague, "UNIX time-sharing system: Foreword," *The Bell System Technical Journal*, vol. 57, no. 6, pp. 1899-1904, July-August 1978. - -[[2]][32] - D. M. Ritchie and K. Thompson, "The UNIX time-sharing system," *Bell System Technical Journal*, vol. 57, no. 6, pp. 1905-1929, July-August 1978. - -[[3]][33] - D. M. Ritchie, "The evolution of the UNIX time-sharing system," *AT&T Bell Laboratories Technical Journal*, vol. 63, no. 8, pp. 1577-1593, Oct. 1984. - -[[4]][34] - P. H. Salus, *A Quarter Century of UNIX*. Boston, MA: Addison-Wesley, 1994. - -[[5]][35] - E. S. Raymond, *The Art of Unix Programming*. Addison-Wesley, 2003. - -[[6]][36] - M. J. Rochkind, "The source code control system," *IEEE Transactions on Software Engineering*, vol. SE-1, no. 4, pp. 255-265, 1975. - ----------- - -#### Footnotes: #### - -[1][37] - [https://github.com/dspinellis/unix-history-repo][38] - -[2][39] - Updates may add or modify material. To ensure replicability the repository's users are encouraged to fork it or archive it. 
- -[3][40] - [http://www.tuhs.org/archive_sites.html][41] - -[4][42] - [https://www.mckusick.com/csrg/][43] - -[5][44] - [http://www.oldlinux.org/Linux.old/distributions/386BSD][45] - -[6][46] - [http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/][47] - -[7][48] - [https://github.com/freebsd/freebsd][49] - -[8][50] - [http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree][51] - -[9][52] - [https://github.com/dspinellis/unix-history-make][53] - --------------------------------------------------------------------------------- - -via: http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html - -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#MPT78 -[2]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#RT78 -[3]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Rit84 -[4]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Sal94 -[5]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Ray03 -[6]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:data -[7]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:dev -[8]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:use -[9]:https://github.com/dspinellis/unix-history-repo -[10]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAB -[11]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAC -[12]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:provenance -[13]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches 
-[14]:http://www.tuhs.org/archive_sites.html -[15]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAD -[16]:https://www.mckusick.com/csrg/ -[17]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAE -[18]:http://www.oldlinux.org/Linux.old/distributions/386BSD -[19]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAF -[20]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/ -[21]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAG -[22]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#SCCS -[23]:https://github.com/freebsd/freebsd -[24]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAH -[25]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree -[26]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAI -[27]:https://github.com/dspinellis/unix-history-make -[28]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAJ -[29]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches -[30]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:metrics -[31]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITEMPT78 -[32]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERT78 -[33]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERit84 -[34]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESal94 -[35]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERay03 -[36]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESCCS -[37]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAB 
-[38]:https://github.com/dspinellis/unix-history-repo -[39]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAC -[40]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAD -[41]:http://www.tuhs.org/archive_sites.html -[42]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAE -[43]:https://www.mckusick.com/csrg/ -[44]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAF -[45]:http://www.oldlinux.org/Linux.old/distributions/386BSD -[46]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAG -[47]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/ -[48]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAH -[49]:https://github.com/freebsd/freebsd -[50]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAI -[51]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree -[52]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAJ -[53]:https://github.com/dspinellis/unix-history-make \ No newline at end of file diff --git a/sources/tech/20151012 Remember sed and awk All Linux admins should.md b/sources/tech/20151012 Remember sed and awk All Linux admins should.md deleted file mode 100644 index 67a6641393..0000000000 --- a/sources/tech/20151012 Remember sed and awk All Linux admins should.md +++ /dev/null @@ -1,60 +0,0 @@ -Remember sed and awk? 
All Linux admins should
-================================================================================
-![](http://images.techhive.com/images/article/2015/03/linux-100573790-primary.idge.jpg)
-
-Credit: Shutterstock
-
-**We aren’t doing the next generation of Linux and Unix admins any favors by forgetting init scripts and fundamental tools**
-
-I happened across a post on Reddit by chance, [asking about textfile manipulation][1]. It was a fairly simple request, similar to those that folks in Unix see nearly every day. In this case, it was how to remove all duplicate lines in a file, keeping one instance of each. This sounds relatively easy, but can get a bit complicated if the source file is sufficiently large and random.
-
-There are countless answers to this problem. You could write a script in nearly any language to do this, with varying levels of complexity and time investment, which I suspect is what most would do. It might take 20 or 60 minutes depending on skill level, but armed with Perl, Python, or Ruby, you could make quick work of it.
-
-Or you could use the answer stated in that thread, which warmed my heart: Just use awk.
-
-That answer is the simplest and most concise solution to the problem by far. It’s one line:
-
-    awk '!seen[$0]++'
-
-Let’s take a look at this.
-
-In this command, there’s a lot of hidden code. Awk is a text processing language, and as such it makes a lot of assumptions. For starters, what you see here is actually the meat of a for loop. Awk assumes you want to loop through every line of the input file, so you don’t need to explicitly state it. Awk also assumes you want to print the postprocessed output, so you don’t need to state that either. Finally, Awk then assumes the loop ends when the last statement finishes, so no need to state it.
-
-The string seen in this example is the name given to an associative array. $0 is a variable that represents the entirety of the current line of the file. 
Thus, this command translates to “Evaluate every line in this file, and if you haven’t seen this line before, print it.” Awk does this by adding $0 to the seen array if it doesn’t already exist and incrementing the value so that it will not match the pattern the next time around and, thus, not print.
-
-Some will see this as elegant, while others may see this as obfuscation. Anyone who uses awk on a daily basis will be in the first group. Awk is designed to do this. You can write multiline programs in awk. You can even write [disturbingly complex functions in awk][2]. But at the end of the day, awk is designed to do text processing, generally within a pipe. Eliminating the extraneous cruft of loop definition is simply a shortcut for a very common use case. If you like, you could write the same thing as the following:
-
-    awk '{ if (!seen[$0]) print $0; seen[$0]++ }'
-
-It would lead to the same result.
-
-Awk is the perfect tool for this job. Nevertheless, I believe many admins -- especially newer admins -- would jump into [Bash][3] or Python to try to accomplish this task, because knowledge of awk and what it can do seems to be fading as time goes on. I think it may be an indicator of things to come, where problems that have been solved for decades suddenly emerge again, based on lack of exposure to the previous solutions.
-
-The shell, grep, sed, and awk are fundamental to Unix computing. If you’re not completely comfortable with their use, you’re artificially hamstrung because they form the basis of interaction with Unix systems via the CLI and shell scripting. One of the best ways to learn how these tools work is by observing and working with live examples, which every Unix flavor has in spades with their init systems -- or had, in the case of Linux distros that have adopted [systemd][4].
-
-Millions of Unix admins learned how shell scripting and Unix tools worked by reading, writing, modifying, and working with init scripts. 
Init scripts differ greatly from OS to OS, even from distribution to distribution in the case of Linux, but they are all rooted in sh, and they all use core CLI tools like sed, awk, and grep. - -I’ve heard many complaints that init scripts are “ancient” and “difficult,” but in fact, init scripts use the same tools that Unix admins work with every day, and thus provide an excellent way to become more familiar and comfortable with those tools. Saying that init scripts are hard to read or difficult to work with is to admit that you lack fundamental familiarity with the Unix toolset. - -Speaking of things found on Reddit, I also came across this question from a budding Linux sys admin, [asking whether he should bother to learn sysvinit][5]. Most of the answers in the thread are good -- yes, definitely learn sysvinit and systemd. One commenter even notes that init scripts are a great way to learn Bash, and another states that the Fortune 50 company he works for has no plans to move to a systemd-based release. - -But it concerns me that this is a question at all. If we continue down the path of eliminating scripts and roping off core system elements within our operating systems, we will inadvertently make it harder for new admins to learn the fundamental Unix toolset due to the lack of exposure. - -I’m not sure why some want to cover up Unix internals with abstraction after abstraction, but such a path may reduce a generation of Unix admins to hapless button pushers dependent on support contracts. I’m pretty sure that would not be a good development. 
- --------------------------------------------------------------------------------- - -via: http://www.infoworld.com/article/2985804/linux/remember-sed-awk-linux-admins-should.html - -作者:[Paul Venezia][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.infoworld.com/author/Paul-Venezia/ -[1]:https://www.reddit.com/r/linuxadmin/comments/3lwyko/how_do_i_remove_every_occurence_of_duplicate_line/ -[2]:http://intro-to-awk.blogspot.com/2008/08/awk-more-complex-examples.html -[3]:http://www.infoworld.com/article/2613338/linux/linux-how-to-script-a-bash-crash-course.html -[4]:http://www.infoworld.com/article/2608798/data-center/systemd--harbinger-of-the-linux-apocalypse.html -[5]:https://www.reddit.com/r/linuxadmin/comments/3ltq2y/when_i_start_learning_about_linux_administration/ diff --git a/sources/tech/20151013 DFileManager--Cover Flow File Manager.md b/sources/tech/20151013 DFileManager--Cover Flow File Manager.md deleted file mode 100644 index 9c96fe9553..0000000000 --- a/sources/tech/20151013 DFileManager--Cover Flow File Manager.md +++ /dev/null @@ -1,63 +0,0 @@ -DFileManager: Cover Flow File Manager -================================================================================ -A real gem of a file manager absent from the standard Ubuntu repositories but sporting a unique feature. That’s DFileManager in a twitterish statement. - -A tricky question to answer is just how many open source Linux applications are available. 
Just out of curiosity, you can type at the shell:
-
-    ~$ for f in /var/lib/apt/lists/*Packages; do printf '%5d %s\n' $(grep '^Package: ' "$f" | wc -l) "${f##*/}"; done | sort -rn
-
-On my Ubuntu 15.04 system, it produces the following results:
-
-![Ubuntu 15.04 Packages](http://www.linuxlinks.com/portal/content/reviews/FileManagers/UbuntuPackages.png)
-
-As the screenshot above illustrates, there are approximately 39,000 packages in the Universe repository, and around 8,500 packages in the main repository. These numbers sound like a lot. But there is a smorgasbord of open source applications, utilities, and libraries that don’t have an Ubuntu team generating a package. And more importantly, there are some real treasures missing from the repositories which can only be discovered by compiling source code. DFileManager is one such utility. It is a Qt-based cross-platform file manager which is in an early stage of development. Qt provides single-source portability across all major desktop operating systems.
-
-In the absence of a binary package, the user needs to compile the code. For some tools, this can be problematic, particularly if the application depends on any obscure libraries, or specific versions which may be incompatible with other software installed on a system.
-
-### Installation ###
-
-Fortunately, DFileManager is simple to compile. The installation instructions on the developer’s website provide most of the steps necessary for my creaking Ubuntu box, but a few essential packages were missing (why is it always that way, no matter how many libraries clutter up your filesystem?) 
To prepare my system, download the source code from GitHub, and then compile the software, I entered the following commands at the shell:
-
-    ~$ sudo apt-get install qt5-default qt5-qmake libqt5x11extras5-dev cmake
-    ~$ git clone git://git.code.sf.net/p/dfilemanager/code dfilemanager-code
-    ~$ cd dfilemanager-code
-    ~$ mkdir build
-    ~$ cd build
-    ~$ cmake ../ -DCMAKE_INSTALL_PREFIX=/usr
-    ~$ make
-    ~$ sudo make install
-
-You can then start the application by typing at the shell:
-
-    ~$ dfm
-
-Here is a screenshot of DFileManager in action, with the main attraction in full view: the Cover Flow view. This offers the ability to slide through items in the current folder with an attractive feel. It’s ideal for viewing photos. The file manager bears a resemblance to Finder (the default file manager and graphical user interface shell used on all Macintosh operating systems), which may appeal to you.
-
-![DFileManager in action](http://www.linuxlinks.com/portal/content/reviews/FileManagers/Screenshot-dfm.png)
-
-### Features: ###
-
-- 4 views: Icons, Details, Columns, and Cover Flow
-- Categorised bookmarks with Places and Devices
-- Tabs
-- Simple searching and filtering
-- Customizable thumbnails for filetypes including multimedia files
-- Information bar which can be undocked
-- Open folders and files with one click
-- Option to queue IO operations
-- Remembers some view properties for each folder
-- Show hidden files
-
-DFileManager is not a replacement for KDE’s Dolphin, but do give it a go. It’s a file manager that really helps the user browse files. And don’t forget to give feedback to the developer; that’s a contribution anyone can offer.
-
---------------------------------------------------------------------------------
-
-via: http://gofk.tumblr.com/post/131014089537/dfilemanager-cover-flow-file-manager-a-real-gem
-
-作者:[gofk][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://gofk.tumblr.com/
\ No newline at end of file
diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md
deleted file mode 100644
index a899284450..0000000000
--- a/sources/tech/20151028 10 Tips for 10x Application Performance.md
+++ /dev/null
@@ -1,277 +0,0 @@
-10 Tips for 10x Application Performance
-================================================================================
-Improving web application performance is more critical than ever. The share of economic activity that’s online is growing; more than 5% of the developed world’s economy is now on the Internet (see Resources below for statistics). And our always-on, hyper-connected modern world means that user expectations are higher than ever. If your site does not respond instantly, or if your app does not work without delay, users quickly move on to your competitors.
-
-For example, a study done by Amazon almost 10 years ago proved that, even then, a 100-millisecond decrease in page-loading time translated to a 1% increase in its revenue. Another recent study highlighted the fact that more than half of site owners surveyed said they lost revenue or customers due to poor application performance.
-
-How fast does a website need to be? For each second a page takes to load, about 4% of users abandon it. Top e-commerce sites offer a time to first interaction ranging from one to three seconds, which offers the highest conversion rate. It’s clear that the stakes for web application performance are high and likely to grow.
- -Wanting to improve performance is easy, but actually seeing results is difficult. To help you on your journey, this blog post offers you ten tips to help you increase your website performance by as much as 10x. It’s the first in a series detailing how you can increase your application performance with the help of some well-tested optimization techniques, and with a little support from NGINX. This series also outlines potential improvements in security that you can gain along the way. - -### Tip #1: Accelerate and Secure Applications with a Reverse Proxy Server ### - -If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with a faster processor, more RAM, a fast disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, etc., faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines, and a faster connection between them.) - -Trouble is, machine speed might not be the problem. Web applications often run slowly because the computer is switching among different kinds of tasks: interacting with users on thousands of connections, accessing files from disk, and running application code, among others. The application server may be thrashing – running out of memory, swapping chunks of memory out to disk, and making many requests wait on a single task such as disk I/O. - -Instead of upgrading your hardware, you can take an entirely different approach: adding a reverse proxy server to offload some of these tasks. A [reverse proxy server][1] sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application servers is over a fast internal network. 
- -Using a reverse proxy server frees the application server from having to wait for users to interact with the web app and lets it concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at speeds close to those achieved in optimized benchmarks. - -Adding a reverse proxy server also adds flexibility to your web server setup. For instance, if a server of a given type is overloaded, another server of the same type can easily be added; if a server is down, it can easily be replaced. - -Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as: - -- **Load balancing** (see [Tip #2][2]) – A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all. -- **Caching static files** (see [Tip #3][3]) – Files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent directly to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster. -- **Securing your site** – The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected. - -NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven processing approach which is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application [health checks][4], specialized request routing, advanced caching, and support. 
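As a concrete illustration of the setup described above, here is a minimal reverse proxy sketch for NGINX. The domain name and the internal address `10.0.0.10:8080` are placeholders for illustration only; only the standard `proxy_pass` and `proxy_set_header` directives are used.

```nginx
# NGINX faces the Internet; the application server (assumed here to be
# reachable on the internal network at 10.0.0.10:8080) does not.
server {
    listen 80;
    server_name example.com;   # placeholder domain

    location / {
        proxy_pass http://10.0.0.10:8080;         # internal app server
        proxy_set_header Host $host;              # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the client address upstream
    }
}
```

With this in place, the application server only ever talks to NGINX over the internal network, which is what frees it to concentrate on building pages.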
- -![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png) - -### Tip #2: Add a Load Balancer ### - -Adding a [load balancer][5] is a relatively easy change which can create a dramatic improvement in the performance and security of your site. Instead of making a core web server bigger and more powerful, you use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written, or has problems with scaling, a load balancer can improve the user experience without any other changes. - -A load balancer is, first, a reverse proxy server (see [Tip #1][6]) – it receives Internet traffic and forwards requests to another server. The trick is that the load balancer supports two or more application servers, using [a choice of algorithms][7] to split requests between servers. The simplest load balancing approach is round robin, with each new request sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. NGINX Plus has [capabilities][8] for continuing a given user session on the same server, which is called session persistence. - -Load balancers can lead to strong improvements in performance because they prevent one server from being overloaded while other servers wait for traffic. They also make it easy to expand your web server capacity, as you can add relatively low-cost servers and be sure they’ll be put to full use. - -Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, [FastCGI][9], SCGI, uwsgi, memcached, and several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web applications to determine which you use and where performance is lagging. 
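A minimal sketch of the load-balancing setup described above, using NGINX's `upstream` block. The two server addresses are hypothetical; round robin is the default algorithm, and the `least_conn` directive switches to the fewest-active-connections method mentioned earlier.

```nginx
# Hypothetical pool of two application servers.
upstream app_servers {
    least_conn;              # send each request to the server with the fewest active connections
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;   # add more servers here to expand capacity
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;   # requests are split across the pool
    }
}
```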
- -The same server or servers used for load balancing can also handle several other tasks, such as SSL termination, support for HTTP/1.x and HTTP/2 use by clients, and caching for static files. - -NGINX is often used for load balancing; to learn more, please see our [overview blog post][10], [configuration blog post][11], [ebook][12] and associated [webinar][13], and [documentation][14]. Our commercial version, [NGINX Plus][15], supports more specialized load balancing features such as load routing based on server response time and the ability to load balance on Microsoft’s NTLM protocol. - -### Tip #3: Cache Static and Dynamic Content ### - -Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination. - -There are two different types of caching to consider: - -- **Caching of static content**. Infrequently changing files, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk. -- **Caching of dynamic content**. Many Web applications generate fresh HTML for each page request. By caching one copy of the generated HTML for a brief period of time, you can dramatically reduce the total number of pages that have to be generated while still delivering content that’s fresh enough to meet your requirements. - -If a page gets ten views per second, for instance, and you cache it for one second, 90% of requests for the page will come from the cache. If you separately cache static content, even the freshly generated versions of the page might be made up largely of cached content. - -There are three main techniques for caching content generated by web applications: - -- **Moving content closer to users**. Keeping a copy of content closer to the user reduces its transmission time. 
-- **Moving content to faster machines**. Content can be kept on a faster machine for faster retrieval. -- **Moving content off of overused machines**. Machines sometimes operate much slower than their benchmark performance on a particular task because they are busy with other tasks. Caching on a different machine improves performance for the cached resources and also for non-cached resources, because the host machine is less overloaded. - -Caching for web applications can be implemented from the inside – the web application server – out. First, caching is used for dynamic content, to reduce the load on application servers. Then, caching is used for static content (including temporary copies of what would otherwise be dynamic content), further off-loading application servers. And caching is then moved off of application servers and onto machines that are faster and/or closer to the user, unburdening the application servers, and reducing retrieval and transmission times. - -Improved caching can speed up applications tremendously. For many web pages, static data, such as large image files, makes up more than half the content. It might take several seconds to retrieve and transmit such data without caching, but only fractions of a second if the data is cached locally. - -As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to [set up caching][16]: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to supply stale content when the server that supplies fresh content is busy or down, giving the client something rather than nothing. From the user’s perspective, this may strongly improve your site or application’s uptime. 
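The three caching directives just mentioned can be combined into a short sketch like the following. The cache path, zone name, sizes, and upstream address are all illustrative placeholders; the one-second TTL matches the ten-views-per-second example above.

```nginx
# Illustrative cache: path, zone name, and sizes are placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 1s;    # brief TTL for dynamically generated HTML
        proxy_cache_use_stale error timeout updating;  # serve stale content if the origin is busy or down
        proxy_pass http://10.0.0.10:8080;              # hypothetical application server
    }
}
```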
- -NGINX Plus has [advanced caching features][17], including support for [cache purging][18] and visualization of cache status on a [dashboard][19] for live activity monitoring. - -For more information on caching with NGINX, see the [reference documentation][20] and [NGINX Content Caching][21] in the NGINX Plus Admin Guide. - -**Note**: Caching crosses organizational lines between people who develop applications, people who make capital investment decisions, and people who run networks in real time. Sophisticated caching strategies, like those alluded to here, are a good example of the value of a [DevOps perspective][22], in which application developer, architectural, and operations perspectives are merged to help meet goals for site functionality, response time, security, and business results, such as completed transactions or sales. - -### Tip #4: Compress Data ### - -Compression is a huge potential performance accelerator. There are carefully engineered and highly effective compression standards for photos (JPEG and PNG), videos (MPEG-4), and music (MP3), among others. Each of these standards reduces file size by an order of magnitude or more. - -Text data – including HTML (which includes plain text and HTML tags), CSS, and code such as JavaScript – is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived web application performance, especially for clients with slow or constrained mobile connections. - -That’s because text data is often sufficient for a user to interact with a page, where multimedia data may be more supportive or decorative. Smart content compression can reduce the bandwidth requirements of HTML, JavaScript, CSS and other text-based content, typically by 30% or more, with a corresponding reduction in load time. - -If you use SSL, compression reduces the amount of data that has to be SSL-encoded, which offsets some of the CPU time it takes to compress the data. 
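In NGINX, text compression of the kind described above can be sketched with the standard gzip directives; the MIME types and thresholds below are illustrative choices, not recommendations for every site.

```nginx
gzip on;
gzip_types text/css application/javascript application/json;  # text/html is compressed by default
gzip_min_length 1000;  # skip tiny responses, where gzip overhead outweighs the savings
gzip_comp_level 5;     # moderate CPU cost for most of the size reduction
```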
- -Methods for compressing text data vary. For example, see the [section on HTTP/2][23] for a novel text compression scheme, adapted specifically for header data. As another example of text compression, you can [turn on][24] GZIP compression in NGINX. After you [pre-compress text data][25] on your servers, you can serve the compressed .gz version directly using the gzip_static directive. - -### Tip #5: Optimize SSL/TLS ### - -The Secure Sockets Layer ([SSL][26]) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites. SSL/TLS encrypts the data transported from origin servers to users to help improve site security. Part of what may be influencing this trend is that Google now uses the presence of SSL/TLS as a positive influence on search engine rankings. - -Despite rising popularity, the performance hit involved in SSL/TLS is a sticking point for many sites. SSL/TLS slows website performance for two reasons: - -1. The initial handshake required to establish encryption keys whenever a new connection is opened. The way that browsers using HTTP/1.x establish multiple connections per server multiplies that hit. -1. Ongoing overhead from encrypting data on the server and decrypting it on the client. - -To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the [next section][27]) designed these protocols so that browsers need just one connection per browser session. This greatly reduces one of the two major sources of SSL overhead. However, even more can be done today to improve the performance of applications delivered over SSL/TLS. - -The mechanism for optimizing SSL/TLS varies by web server. As an example, NGINX uses [OpenSSL][28], running on standard commodity hardware, to provide performance similar to dedicated hardware solutions. NGINX [SSL performance][29] is well-documented and minimizes the time and CPU penalty from performing SSL/TLS encryption and decryption. 
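As one common example of such an optimization, a session cache lets returning clients skip the full handshake. The certificate paths below are placeholders; `ssl_session_cache` and `ssl_session_timeout` are standard NGINX directives.

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    ssl_session_cache   shared:SSL:10m;  # session parameters shared across worker processes
    ssl_session_timeout 10m;             # how long cached sessions remain reusable
}
```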
- -In addition, see [this blog post][30] for details on ways to increase SSL/TLS performance. To summarize briefly, the techniques are: - -- **Session caching**. Uses the [ssl_session_cache][31] directive to cache the parameters used when securing each new connection with SSL/TLS. -- **Session tickets or IDs**. These store information about specific SSL/TLS sessions in a ticket or ID so a connection can be reused smoothly, without new handshaking. -- **OCSP stapling**. Cuts handshaking time by caching SSL/TLS certificate information. - -NGINX and NGINX Plus can be used for SSL/TLS termination – handling encryption and decryption for client traffic, while communicating with other servers in clear text. Use [these steps][32] to set up NGINX or NGINX Plus to handle SSL/TLS termination. Also, here are [specific steps][33] for NGINX Plus when used with servers that accept TCP connections. - -### Tip #6: Implement HTTP/2 or SPDY ### - -For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection requires just one handshake. For sites that don’t yet use SSL/TLS, HTTP/2 and SPDY make a move to SSL/TLS (which normally slows performance) a wash from a responsiveness point of view. - -Google introduced SPDY in 2012 as a way to achieve faster performance on top of HTTP/1.x. HTTP/2 is the recently approved IETF standard based on SPDY. SPDY is broadly supported, but is soon to be deprecated, replaced by HTTP/2. - -The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections. The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time. - -By getting the most out of one connection, these protocols avoid the overhead of setting up and managing multiple connections, as required by the way browsers implement HTTP/1.x. 
The use of a single connection is especially helpful with SSL, because it minimizes the time-consuming handshaking that SSL/TLS needs to set up a secure connection. - -The SPDY protocol required the use of SSL/TLS; HTTP/2 does not officially require it, but all browsers so far that support HTTP/2 use it only if SSL/TLS is enabled. That is, a browser that supports HTTP/2 uses it only if the website is using SSL and its server accepts HTTP/2 traffic. Otherwise, the browser communicates over HTTP/1.x. - -When you implement SPDY or HTTP/2, you no longer need typical HTTP performance optimizations such as domain sharding, resource merging, and image spriting. These changes make your code and deployments simpler and easier to manage. To learn more about the changes that HTTP/2 is bringing about, read our [white paper][34]. - -![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png) - -As an example of support for these protocols, NGINX has supported SPDY from early on, and [most sites][35] that use SPDY today run on NGINX. NGINX is also [pioneering support][36] for HTTP/2, with [support][37] for HTTP/2 in NGINX open source and NGINX Plus as of September 2015. - -Over time, we at NGINX expect most sites to fully enable SSL and to move to HTTP/2. This will lead to increased security and, as new optimizations are found and implemented, simpler code that performs better. - -### Tip #7: Update Software Versions ### - -One simple way to boost application performance is to select components for your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to pursue performance enhancements and fix bugs over time, it pays to use the latest stable version of software. New releases receive more attention from developers and the user community. 
Newer builds also take advantage of new compiler optimizations, including tuning for new hardware. - -Stable new releases are typically more compatible and higher-performing than older releases. It’s also easier to keep up with tuning optimizations, bug fixes, and security alerts when you stay on top of software updates. - -Staying with older software can also prevent you from taking advantage of new capabilities. For example, HTTP/2, described above, currently requires OpenSSL 1.0.1. Starting in mid-2016, HTTP/2 will require OpenSSL 1.0.2, which was released in January 2015. - -NGINX users can start by moving to the [latest version of the NGINX open source software][38] or [NGINX Plus][39]; they include new capabilities such as socket sharding and thread pools (see below), and both are constantly being tuned for performance. Then look at the software deeper in your stack and move to the most recent version wherever you can. - -### Tip #8: Tune Linux for Performance ### - -Linux is the underlying operating system for most web server implementations today, and as the foundation of your infrastructure, Linux represents a significant opportunity to improve performance. By default, many Linux systems are conservatively tuned to use few resources and to match a typical desktop workload. This means that web application use cases require at least some degree of tuning for maximum performance. - -Linux optimizations are web server-specific. Using NGINX as an example, here are a few highlights of changes you can consider to speed up Linux: - -- **Backlog queue**. If you have connections that appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting attention from NGINX. You will see error messages if the existing connection limit is too small, and you can gradually increase this parameter until the error messages stop. - -- **File descriptors**. NGINX uses up to two file descriptors for each connection. 
If your system is serving a lot of connections, you might need to increase fs.file-max, the system-wide limit for file descriptors, and nofile, the user file descriptor limit, to support the increased load. -- **Ephemeral ports**. When used as a proxy, NGINX creates temporary (“ephemeral”) ports for each upstream server. You can increase the range of port values, set by net.ipv4.ip_local_port_range, to increase the number of ports available. You can also reduce the timeout before an inactive port gets reused with the net.ipv4.tcp_fin_timeout setting, allowing for faster turnover. - -For NGINX, check out the [NGINX performance tuning guides][40] to learn how to optimize your Linux system so that it can cope with large volumes of network traffic without breaking a sweat! - -### Tip #9: Tune Your Web Server for Performance ### - -Whatever web server you use, you need to tune it for web application performance. The following recommendations apply generally to any web server, but specific settings are given for NGINX. Key optimizations include: - -- **Access logging**. Instead of writing a log entry for every request to disk immediately, you can buffer entries in memory and write them to disk as a group. For NGINX, add the *buffer=size* parameter to the *access_log* directive to write log entries to disk when the memory buffer fills up. If you add the *flush=time* parameter, the buffer contents are also written to disk after the specified amount of time. -- **Buffering**. Buffering holds part of a response in memory until the buffer fills, which can make communications with the client more efficient. Responses that don’t fit in memory are written to disk, which can slow performance. When NGINX buffering is [on][42], you use the *proxy_buffer_size* and *proxy_buffers* directives to manage it. -- **Client keepalives**. Keepalive connections reduce overhead, especially when SSL/TLS is in use. 
For NGINX, you can increase the maximum number of *keepalive_requests* a client can make over a given connection from the default of 100, and you can increase the *keepalive_timeout* to allow the keepalive connection to stay open longer, resulting in faster subsequent requests. -- **Upstream keepalives**. Upstream connections – connections to application servers, database servers, and so on – benefit from keepalive connections as well. For upstream connections, you can increase *keepalive*, the number of idle keepalive connections that remain open for each worker process. This allows for increased connection reuse, cutting down on the need to open brand new connections. For more information about keepalives, refer to this [blog post][41]. -- **Limits**. Limiting the resources that clients use can improve performance and security. For NGINX, the *limit_conn* and *limit_conn_zone* directives restrict the number of connections from a given source, while *limit_rate* constrains bandwidth. These settings can stop a legitimate user from “hogging” resources and also help protect against attacks. The *limit_req* and *limit_req_zone* directives limit client requests. For connections to upstream servers, use the max_conns parameter to the server directive in an upstream configuration block. This limits connections to an upstream server, preventing overloading. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time after the *max_conns* limit is reached. -- **Worker processes**. Worker processes are responsible for the processing of requests. NGINX employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The recommendation is to set the value of *worker_processes* to one per CPU. The maximum number of worker_connections (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system. 
-- **Socket sharding**. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding creates a socket listener for each worker process, with the kernel assigning connections to socket listeners as they become available. This can reduce lock contention and improve performance on multicore systems. To enable [socket sharding][43], include the reuseport parameter on the listen directive. -- **Thread pools**. Any computer process can be held up by a single, slow operation. For web server software, disk access can hold up many faster operations, such as calculating or copying information in memory. When a thread pool is used, the slow operation is assigned to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the results go back into the main processing loop. In NGINX, two operations – the read() system call and sendfile() – are offloaded to [thread pools][44]. - -![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png) - -**Tip**. When changing settings for any operating system or supporting service, change a single setting at a time, then test performance. If the change causes problems, or if it doesn’t make your site run faster, change it back. - -See this [blog post][45] for more details on tuning NGINX. - -### Tip #10: Monitor Live Activity to Resolve Issues and Bottlenecks ### - -The key to a high-performance approach to application development and delivery is watching your application’s real-world performance closely and in real time. You must be able to monitor activity within specific devices and across your web infrastructure. - -Monitoring site activity is mostly passive – it tells you what’s going on, and leaves it to you to spot problems and fix them. - -Monitoring can catch several different kinds of issues. 
They include: - -- A server is down. -- A server is limping, dropping connections. -- A server is suffering from a high proportion of cache misses. -- A server is not sending correct content. - -A global application performance monitoring tool like New Relic or Dynatrace helps you monitor page load time from remote locations, while NGINX helps you monitor the application delivery side. Application performance data tells you when your optimizations are making a real difference to your users, and when you need to consider adding capacity to your infrastructure to sustain the traffic. - -To help identify and resolve issues quickly, NGINX Plus adds [application-aware health checks][46] – synthetic transactions that are repeated regularly and are used to alert you to problems. NGINX Plus also has [session draining][47], which stops new connections while existing tasks complete, and a slow start capability, allowing a recovered server to come up to speed within a load-balanced group. When used effectively, health checks allow you to identify issues before they significantly impact the user experience, while session draining and slow start allow you to replace servers and ensure the process does not negatively affect perceived performance or uptime. The figure shows the built-in NGINX Plus [live activity monitoring][48] dashboard for a web infrastructure with servers, TCP connections, and caching. - -![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png) - -### Conclusion: Seeing 10x Performance Improvement ### - -The performance improvements that are available for any one web application vary tremendously, and actual gains depend on your budget, the time you can invest, and gaps in your existing implementation. So, how might you achieve 10x performance improvement for your own applications? 
- -To help guide you on the potential impact of each optimization, here are pointers to the improvement that may be possible with each tip detailed above, though your mileage will almost certainly vary: - -- **Reverse proxy server and load balancing**. No load balancing, or poor load balancing, can cause episodes of very poor performance. Adding a reverse proxy server, such as NGINX, can prevent web applications from thrashing between memory and disk. Load balancing can move processing from overburdened servers to available ones and make scaling easy. These changes can result in dramatic performance improvement, with a 10x improvement easily achieved compared to the worst moments for your current implementation, and lesser but substantial achievements available for overall performance. -- **Caching dynamic and static content**. If you have an overburdened web server that’s doubling as your application server, 10x improvements in peak-time performance can be achieved by caching dynamic content alone. Caching for static files can improve performance by single-digit multiples as well. -- **Compressing data**. Using media file compression such as JPEG for photos, PNG for graphics, MPEG-4 for movies, and MP3 for music files can greatly improve performance. Once these are all in use, then compressing text data (code and HTML) can improve initial page load times by a factor of two. -- **Optimizing SSL/TLS**. Secure handshakes can have a big impact on performance, so optimizing them can lead to perhaps a 2x improvement in initial responsiveness, particularly for text-heavy sites. Optimizing media file transmission under SSL/TLS is likely to yield only small performance improvements. -- **Implementing HTTP/2 and SPDY**. When used with SSL/TLS, these protocols are likely to result in incremental improvements for overall site performance. -- **Tuning Linux and web server software (such as NGINX)**. 
Fixes such as optimizing buffering, using keepalive connections, and offloading time-intensive tasks to a separate thread pool can significantly boost performance; thread pools, for instance, can speed disk-intensive tasks by [nearly an order of magnitude][49]. - -We hope you try out these techniques for yourself. We want to hear the kind of application performance improvements you’re able to achieve. Share your results in the comments below, or tweet your story with the hash tags #NGINX and #webperf! - -### Resources for Internet Statistics ### - -[Statista.com – Share of the internet economy in the gross domestic product in G-20 countries in 2016][50] - -[Load Impact – How Bad Performance Impacts Ecommerce Sales][51] - -[Kissmetrics – How Loading Time Affects Your Bottom Line (infographic)][52] - -[Econsultancy – Site speed: case studies, tips and tools for improving your conversion rate][53] - --------------------------------------------------------------------------------- - -via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io - -作者:[Floyd Smith][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.nginx.com/blog/author/floyd/ -[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server -[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2 -[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3 -[4]:https://www.nginx.com/products/application-health-checks/ -[5]:https://www.nginx.com/solutions/load-balancing/ -[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1 
-[7]:https://www.nginx.com/resources/admin-guide/load-balancer/ -[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/ -[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx -[10]:https://www.nginx.com/blog/five-reasons-use-software-load-balancer/ -[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/ -[12]:https://www.nginx.com/resources/ebook/five-reasons-choose-software-load-balancer/ -[13]:https://www.nginx.com/resources/webinars/choose-software-based-load-balancer-45-min/ -[14]:https://www.nginx.com/resources/admin-guide/load-balancer/ -[15]:https://www.nginx.com/products/ -[16]:https://www.nginx.com/blog/nginx-caching-guide/ -[17]:https://www.nginx.com/products/content-caching-nginx-plus/ -[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge -[19]:https://www.nginx.com/products/live-activity-monitoring/ -[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache -[21]:https://www.nginx.com/resources/admin-guide/content-caching -[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/ -[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6 -[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/ -[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html -[26]:https://www.digicert.com/ssl.htm -[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6 -[28]:http://openssl.org/ -[29]:https://www.nginx.com/blog/nginx-ssl-performance/ -[30]:https://www.nginx.com/blog/improve-seo-https-nginx/ -[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache -[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/ 
-[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/ -[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/ -[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites -[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/ -[37]:https://www.nginx.com/blog/nginx-plus-r7-released/ -[38]:http://nginx.org/en/download.html -[39]:https://www.nginx.com/products/ -[40]:https://www.nginx.com/blog/tuning-nginx/ -[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/ -[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering -[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/ -[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/ -[45]:https://www.nginx.com/blog/tuning-nginx/ -[46]:https://www.nginx.com/products/application-health-checks/ -[47]:https://www.nginx.com/products/session-persistence/#session-draining -[48]:https://www.nginx.com/products/live-activity-monitoring/ -[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/ -[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/ -[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/ -[52]:https://blog.kissmetrics.com/loading-time/?wide=1 -[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/ \ No newline at end of file diff --git a/sources/tech/20151105 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md b/sources/tech/20151105 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md deleted file mode 100644 index 7ceced012d..0000000000 --- a/sources/tech/20151105 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md +++ /dev/null @@ -1,62 +0,0 @@ -translation by strugglingyouth -Linux FAQs with Answers--How to install 
Ubuntu desktop behind a proxy -================================================================================ -> **Question**: My computer is connected to a corporate network sitting behind an HTTP proxy. When I try to install Ubuntu desktop on the computer from a CD-ROM drive, the installation hangs and never finishes while trying to retrieve files, which is presumably due to the proxy. However, the problem is that the Ubuntu installer never asks me to configure a proxy during the installation procedure. Then how can I install Ubuntu desktop behind a proxy? - -Unlike Ubuntu server, installation of Ubuntu desktop is pretty much auto-pilot, not leaving much room for customization, such as custom disk partitioning, manual network settings, package selection, etc. While such a simple, one-shot installation is considered user-friendly, it leaves much to be desired for those users looking for an "advanced installation mode" to customize their Ubuntu desktop installation. - -In addition, one big problem of the default Ubuntu desktop installer is the absence of proxy settings. If your computer is connected behind a proxy, you will notice that the Ubuntu installation gets stuck while preparing to download files. - -![](https://c2.staticflickr.com/6/5683/22195372232_cea81a5e45_c.jpg) - -This post describes how to get around this limitation of the Ubuntu installer and **install Ubuntu desktop when you are behind a proxy**. - -The basic idea is as follows. Instead of starting the Ubuntu installer directly, boot into the live Ubuntu desktop first, configure proxy settings, and finally launch the Ubuntu installer manually from the live desktop. The following is the step-by-step procedure. - -After booting from the Ubuntu desktop CD/DVD or USB, click on "Try Ubuntu" on the first welcome screen. - -![](https://c1.staticflickr.com/1/586/22195371892_3816ba09c3_c.jpg) - -Once you boot into the live Ubuntu desktop, click on the Settings icon on the left.
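Alternatively, if you are comfortable with the terminal, you can also export the standard proxy environment variables in the live session's shell so that command-line tools started from that shell inherit them. This is only a sketch — the proxy URL below is a placeholder; replace it with your own proxy host and port:

```shell
# Hypothetical proxy endpoint -- substitute your corporate proxy here.
PROXY_URL="http://proxy.example.com:8080/"

# Export the conventional proxy variables for this shell session.
export http_proxy="$PROXY_URL"
export https_proxy="$PROXY_URL"
export ftp_proxy="$PROXY_URL"

# Programs launched from this same shell inherit these variables; to keep
# them across sudo, pass -E (e.g. 'sudo -E ubiquity gtk_ui').
echo "proxy set to $http_proxy"
```

Variables exported this way apply only to programs started from that shell, so this complements, rather than replaces, the GUI Network settings described in this procedure.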
- -![](https://c1.staticflickr.com/1/723/22020327738_058610c19d_c.jpg) - -Go to Network menu. - -![](https://c2.staticflickr.com/6/5675/22021212239_ba3901c8bf_c.jpg) - -Configure proxy settings manually. - -![](https://c1.staticflickr.com/1/735/22020025040_59415e0b9a_c.jpg) - -Next, open a terminal. - -![](https://c2.staticflickr.com/6/5642/21587084823_357b5c48cb_c.jpg) - -Enter a root session by typing the following: - - $ sudo su - -Finally, type the following command as the root. - - # ubiquity gtk_ui - -This will launch GUI-based Ubuntu installer as follows. - -![](https://c1.staticflickr.com/1/723/22020025090_cc64848b6c_c.jpg) - -Proceed with the rest of installation. - -![](https://c1.staticflickr.com/1/628/21585344214_447020e9d6_c.jpg) - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/install-ubuntu-desktop-behind-proxy.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni diff --git a/sources/tech/20151109 How to Monitor the Progress of a Linux Command Line Operation Using PV Command.md b/sources/tech/20151109 How to Monitor the Progress of a Linux Command Line Operation Using PV Command.md deleted file mode 100644 index 8ecf772a21..0000000000 --- a/sources/tech/20151109 How to Monitor the Progress of a Linux Command Line Operation Using PV Command.md +++ /dev/null @@ -1,81 +0,0 @@ -translating by ezio - -How to Monitor the Progress of a Linux Command Line Operation Using PV Command -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/11/pv-featured-1.jpg) - -If you’re a Linux system admin, there’s no doubt that you must be spending most of your work time on the command line – installing and removing packages; monitoring system 
stats; copying, moving, deleting stuff; debugging problems; and more. There are times when you fire a command, and it takes a while before the operation completes. However, there are also times when the command you executed just hangs, leaving you guessing as to what’s actually happening behind the scenes. - -Usually, Linux commands provide no information related to the progress of the ongoing operation, something that is very important especially when you have limited time. However, that doesn’t mean you’re helpless – there exists a command, dubbed pv, that displays useful progress information related to the ongoing command line operation. In this article we will discuss this command as well as its features through some easy-to-understand examples. - -### PV Command ### - -Developed by Andrew Wood, [PV][1] – which stands for Pipe Viewer – displays information related to the progress of data through a pipeline. The information includes time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA. - -> “To use it, insert it in a pipeline between two processes, with the appropriate options. Its standard input will be passed through to its standard output and progress will be shown on standard error,” - -So explains the command’s man page. - -### Download and Installation ### - -Users of Debian-based systems like Ubuntu can easily install the utility by running the following command in a terminal: - - sudo apt-get install pv - -If you’re using any other Linux distro, you can install the command using the package manager installed on your system. Once installed successfully, you can use the command line utility in various scenarios (see the following section). It’s worth mentioning that pv version 1.2.0 has been used in all the examples mentioned in this article.
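Before moving on to real scenarios, the pass-through behavior that the man page describes is easy to verify yourself. The following is only a quick sketch — it assumes pv is already installed and uses dd from coreutils as the data source:

```shell
# pv passes its standard input through to standard output unchanged and
# writes the progress display to standard error, so it can be dropped
# into any pipeline without corrupting the data. Here we pipe 10 MB of
# zeroes through pv and count the bytes that come out the other end.
bytes=$(dd if=/dev/zero bs=1M count=10 2>/dev/null | pv 2>/dev/null | wc -c)
echo "passed through $bytes bytes"
```

Since the byte count matches the input size, any pipeline you wrap with pv behaves exactly as it did before, just with progress reporting added.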
- -### Features and Usage ### - -A very common scenario that probably most of us (who work on the command line in Linux) would relate to is copying a movie file from a USB drive to your computer. If you try to complete the aforementioned operation using the cp command, you’ll have to blindly wait until the copying is complete or some error is thrown. - -However, the pv command can be helpful in this case. Here is an example: - - pv /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv - -And here’s the output: - -![pv-copy](https://www.maketecheasier.com/assets/uploads/2015/10/pv-copy.png) - -So, as you can see above, the command shows a lot of useful information related to the ongoing operation, including the amount of data that has been transferred, time elapsed, rate of transfer, progress bar, progress in percentage, and the amount of time left. - -The `pv` command provides various display switches. For example, you can use `-p` for displaying percentage, `-t` for timer, `-r` for rate of transfer, `-e` for eta, and -b for byte counter. The good thing is that you won’t have to remember any of them, as all of them are enabled by default. However, should you exclusively require information related to only a particular display switch in the output, you can pass that switch in the pv command. - -There’s also a `-n` display switch that allows the command to display an integer percentage, one per line on standard error, instead of the regular visual progress indicator. The following is an example of this switch in action: - - pv -n /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv - -![pv-numeric](https://www.maketecheasier.com/assets/uploads/2015/10/pv-numeric.png) - -This particular display switch is suitable in scenarios where you want to pipe the output into the [dialog][2] command. - -Moving on, there’s also a command line option, `-L`, that lets you modify the data transfer rate of the pv command. 
For example, I used -L to limit the data transfer rate to 2MB/s. - - pv -L 2m /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv - -![pv-ratelimit](https://www.maketecheasier.com/assets/uploads/2015/10/pv-ratelimit.png) - -As can be seen in the screenshot above, the data transfer rate was capped according to my direction. - -Another scenario where `pv` can help is while compressing files. Here is an example of how you can use this command while compressing files using Gzip: - - pv /media/himanshu/1AC2-A8E3/fnf.mkv | gzip > ./Desktop/fnf.log.gz - -![pv-gzip](https://www.maketecheasier.com/assets/uploads/2015/10/pv-gzip.png) - -### Conclusion ### - -As you have observed, pv is a useful little utility that could help you save your precious time in case a command line operation isn’t behaving as expected. Plus, the information it displays can also be used in shell scripts. I’d strongly recommend this command; it’s worth giving a try. - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/monitor-progress-linux-command-line-operation/ - -作者:[Himanshu Arora][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/himanshu/ -[1]:http://linux.die.net/man/1/pv -[2]:http://linux.die.net/man/1/dialog diff --git a/sources/tech/20151109 How to send email notifications using Gmail SMTP server on Linux.md b/sources/tech/20151109 How to send email notifications using Gmail SMTP server on Linux.md index 5ffcb5aea8..22e8606c6c 100644 --- a/sources/tech/20151109 How to send email notifications using Gmail SMTP server on Linux.md +++ b/sources/tech/20151109 How to send email notifications using Gmail SMTP server on Linux.md @@ -1,3 +1,4 @@ +Translating by KnightJoker How to send email notifications using Gmail SMTP server on Linux 
================================================================================ Suppose you want to configure a Linux app to send out email messages from your server or desktop. The email messages can be part of email newsletters, status updates (e.g., [Cachet][1]), monitoring alerts (e.g., [Monit][2]), disk events (e.g., [RAID mdadm][3]), and so on. While you can set up your [own outgoing mail server][4] to deliver messages, you can alternatively rely on a freely available public SMTP server as a maintenance-free option. diff --git a/sources/tech/20151109 Install Android On BQ Aquaris Ubuntu Phone In Linux.md b/sources/tech/20151109 Install Android On BQ Aquaris Ubuntu Phone In Linux.md deleted file mode 100644 index 864068eb91..0000000000 --- a/sources/tech/20151109 Install Android On BQ Aquaris Ubuntu Phone In Linux.md +++ /dev/null @@ -1,126 +0,0 @@ -zpl1025 -Install Android On BQ Aquaris Ubuntu Phone In Linux -================================================================================ -![How to install Android on Ubuntu Phone](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-on-Ubuntu-Phone.jpg) - -If you happen to own the first Ubuntu phone and want to **replace Ubuntu with Android on the bq Aquaris e4.5**, this post is going to help you. - -There can be plenty of reasons why you might want to remove Ubuntu and use the mainstream Android OS. One of the foremost reasons is that the OS itself is at an early stage and is intended for developers and enthusiasts. Whatever your reason may be, installing Android on bq Aquaris is a piece of cake, thanks to the tools provided by bq. - -Let’s see what we need to do to install Android on bq Aquaris. - -### Prerequisites ### - -- Working Internet connection to download the Android factory image and the tools for flashing Android -- USB data cable -- A system running Linux - -This tutorial is performed using Ubuntu 15.10.
But the steps should be applicable to most other Linux distributions. - -### Replace Ubuntu with Android in bq Aquaris e4.5 ### - -#### Step 1: Download Android firmware #### - -The first step is to download the Android image for bq Aquaris e4.5. The good thing is that it is available from bq’s support website. You can download the firmware, around 650 MB in size, from the link below: - -- [Download Android for bq Aquaris e4.5][1] - -Yes, you would get OTA updates with it. At present the firmware version is 2.0.1 which is based on Android Lollipop. Over time, there could be a new firmware based on Marshmallow and then the above link could be outdated. - -I suggest checking the [bq support page and download][2] the latest firmware from there. - -Once downloaded, extract it. In the extracted directory, look for the **MT6582_Android_scatter.txt** file. We shall be using it later. - -#### Step 2: Download flash tool #### - -bq has provided its own flash tool, Herramienta MTK Flash Tool, for easier installation of Android or Ubuntu on the device. You can download the tool from the link below: - -- [Download MTK Flash Tool][3] - -Since the flash tool might be upgraded in the future, you can always get the latest version of the flash tool from the [bq support page][4]. - -Once downloaded, extract the file. You should see an executable file named **flash_tool** in it. We shall be using it later. - -#### Step 3: Remove conflicting packages (optional) #### - -If you are using a recent version of Ubuntu or an Ubuntu-based Linux distribution, you may encounter “BROM ERROR : S_UNDEFINED_ERROR (1001)” later in this tutorial. - -To avoid this error, you’ll have to uninstall the conflicting package.
Use the commands below: - - sudo apt-get remove modemmanager - -Restart the udev service with the command below: - - sudo service udev restart - -Just to check for any possible side effects on the cdc_acm kernel module, run the command below: - - lsmod | grep cdc_acm - -If the output of the above command is an empty list, you’ll have to reinstall this kernel module: - - sudo modprobe cdc_acm - -#### Step 4: Prepare to flash Android #### - -Go to the downloaded and extracted flash tool directory (in step 2). Use the command line for this purpose because you’ll need root privileges here. - -Presuming that you saved it in the Downloads directory, use the command below to go to this directory (in case you do not know how to navigate between directories on the command line). - - cd ~/Downloads/SP_Flash* - -After that, use the command below to run the flash tool as root: - - sudo ./flash_tool - -You’ll see a window pop up like the one below. Don’t bother about the Download Agent field, it will be automatically filled. Just focus on the Scatter-loading field. - -![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-1.jpeg) - -Remember we talked about **MT6582_Android_scatter.txt** in step 1? This text file is in the extracted directory of the Android firmware you downloaded in step 1. Click on Scatter-loading (in the above picture) and point it to the MT6582_Android_scatter.txt file. - -When you do that, you’ll see several green lines like the one below: - -![Install-Android-bq-aquaris-Ubuntu-2](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-2.jpeg) - -#### Step 5: Flashing Android #### - -We are almost ready. Switch off your phone and connect it to your computer via a USB cable. - -Select Firmware Upgrade from the dropdown and then click on the big download button.
- -![flash Android with Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu.jpeg) - -If everything is correct, you should see a flash status at the bottom of the tool: - -![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-3.jpeg) - -When the procedure is successfully completed, you’ll see a notification like this: - -![Successfully flashed Android on bq aquaris Ubuntu Phone](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-4.jpeg) - -Unplug your phone and power it on. You should see a white screen with AQUARIS written in the middle and, at the bottom, “powered by Android” will be displayed. It might take up to 10 minutes before you can configure and start using Android. - -Note: If something goes wrong in the process, press the power, volume up, and volume down buttons together to boot into fastboot mode. Turn it off, reconnect the cable, and repeat the firmware upgrade process. It should work. - -### Conclusion ### - -Thanks to the tools provided, it becomes easier to **flash Android on bq Ubuntu Phone**. Of course, you can use the same steps to replace Android with Ubuntu. All you need is to download the Ubuntu firmware instead of Android. - -I hope this tutorial helped you to replace Ubuntu with Android on your bq phone. If you have questions or suggestions, feel free to ask in the comment section below.
- -------------------------------------------------------------------------------- - -via: http://itsfoss.com/install-android-ubuntu-phone/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5_L/2.0.1_20150623-1900_bq-FW.zip -[2]:http://www.bq.com/gb/support/aquaris-e4-5 -[3]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5/Ubuntu/Web%20version/Web%20version/SP_Flash_Tool_exe_linux_v5.1424.00.zip -[4]:http://www.bq.com/gb/support/aquaris-e4-5-ubuntu-edition diff --git a/sources/tech/20151114 How to Setup Drone - a Continuous Integration Service in Linux.md b/sources/tech/20151114 How to Setup Drone - a Continuous Integration Service in Linux.md deleted file mode 100644 index bfcf1e3ae3..0000000000 --- a/sources/tech/20151114 How to Setup Drone - a Continuous Integration Service in Linux.md +++ /dev/null @@ -1,317 +0,0 @@ -How to Setup Drone - a Continuous Integration Service in Linux -============================================================== - -Are you tired of cloning, building, testing, and deploying code time and again? If yes, switch to continuous integration. Continuous Integration, aka CI, is a practice in software engineering of making frequent commits to the code base, building, testing and deploying as we go. CI helps to quickly integrate new code into the existing code base. If this process is automated, it will speed up the development process as it reduces the time taken for the developer to build and test things manually. [Drone][1] is a free and open source project which provides an awesome continuous integration service and is released under Apache License Version 2.0.
It integrates with many repository providers like Github, Bitbucket and Google Code and has the ability to pull code from the repositories, enabling us to build source code written in a number of languages including PHP, Node, Ruby, Go, Dart, Python, C/C++, Java and more. It is such a powerful platform because it uses containers and Docker technology for every build, giving users complete control over their build environment with guaranteed isolation. - -### 1. Installing Docker ### - -First of all, we'll install Docker, as it's the most vital element for the complete workflow of Drone. Drone makes proper use of Docker for building and testing applications. This container technology speeds up the development of applications. To install Docker, we'll need to run the commands for our respective Linux distribution. In this tutorial, we'll cover the steps for the Ubuntu 14.04 and CentOS 7 Linux distributions. - -#### On Ubuntu #### - -To install Docker in Ubuntu, we can simply run the following commands in a terminal or console. - - # apt-get update - # apt-get install docker.io - -After the installation is done, we'll restart our docker engine using the service command. - - # service docker restart - -Then, we'll make docker start automatically on every system boot. - - # update-rc.d docker defaults - - Adding system startup for /etc/init.d/docker ... - /etc/rc0.d/K20docker -> ../init.d/docker - /etc/rc1.d/K20docker -> ../init.d/docker - /etc/rc6.d/K20docker -> ../init.d/docker - /etc/rc2.d/S20docker -> ../init.d/docker - /etc/rc3.d/S20docker -> ../init.d/docker - /etc/rc4.d/S20docker -> ../init.d/docker - /etc/rc5.d/S20docker -> ../init.d/docker - -#### On CentOS #### - -First, we'll update every package installed on our CentOS machine. We can do that by running the following command. - - # sudo yum update - -To install Docker on CentOS, we can simply run the following command.
- - # curl -sSL https://get.docker.com/ | sh - -After our docker engine is installed on our CentOS machine, we'll simply start it by running the following systemd command, as systemd is the default init system in CentOS 7. - - # systemctl start docker - -Then, we'll enable docker to start automatically on every system startup. - - # systemctl enable docker - - ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service' - -### 2. Installing SQLite Driver ### - -Drone uses the SQLite3 database server for storing its data and information by default. It will automatically create a database file named drone.sqlite under /var/lib/drone/ which will handle database schema setup and migration. To set up the SQLite3 drivers, we'll need to follow the steps below. - -#### On Ubuntu 14.04 #### - -As SQLite3 is available in the default repository of Ubuntu 14.04, we'll simply install it by running the following apt command. - - # apt-get install libsqlite3-dev - -#### On CentOS 7 #### - -To install it on a CentOS 7 machine, we'll need to run the following yum command. - - # yum install sqlite-devel - -### 3. Installing Drone ### - -Finally, after we have installed those dependencies successfully, we'll move on to installing drone on our machine. In this step, we'll simply download its binary package from the official download link for the respective binary format and then install it using the default package manager. - -#### On Ubuntu #### - -We'll use wget to download the debian package of drone for ubuntu from the [official Debian file download link][2]. Here is the command to download the required debian package of drone. - - # wget downloads.drone.io/master/drone.deb - - Resolving downloads.drone.io (downloads.drone.io)... 54.231.48.98 - Connecting to downloads.drone.io (downloads.drone.io)|54.231.48.98|:80... connected. - HTTP request sent, awaiting response...
200 OK - Length: 7722384 (7.4M) [application/x-debian-package] - Saving to: 'drone.deb' - 100%[======================================>] 7,722,384 1.38MB/s in 17s - 2015-11-06 14:09:28 (456 KB/s) - 'drone.deb' saved [7722384/7722384] - -After it's downloaded, we'll install it with the dpkg package manager. - - # dpkg -i drone.deb - - Selecting previously unselected package drone. - (Reading database ... 28077 files and directories currently installed.) - Preparing to unpack drone.deb ... - Unpacking drone (0.3.0-alpha-1442513246) ... - Setting up drone (0.3.0-alpha-1442513246) ... - Your system ubuntu 14: using upstart to control Drone - drone start/running, process 9512 - -#### On CentOS #### - -On the machine running CentOS, we'll download the RPM package from the [official download link for RPM][3] using the wget command as shown below. - - # wget downloads.drone.io/master/drone.rpm - - --2015-11-06 11:06:45-- http://downloads.drone.io/master/drone.rpm - Resolving downloads.drone.io (downloads.drone.io)... 54.231.114.18 - Connecting to downloads.drone.io (downloads.drone.io)|54.231.114.18|:80... connected. - HTTP request sent, awaiting response... 200 OK - Length: 7763311 (7.4M) [application/x-redhat-package-manager] - Saving to: ‘drone.rpm’ - 100%[======================================>] 7,763,311 1.18MB/s in 20s - 2015-11-06 11:07:06 (374 KB/s) - ‘drone.rpm’ saved [7763311/7763311] - -Then, we'll install the downloaded rpm package using the yum package manager. - - # yum localinstall drone.rpm - -### 4. Configuring Port ### - -After the installation is completed, we'll configure drone to make it workable. The configuration of drone is inside the **/etc/drone/drone.toml** file. By default, the drone web interface is exposed on port 80, the default HTTP port. If we want to change it, we can do so by replacing the value under the server block as shown below. - - [server] - port=":80" - -### 5.
Integrating Github ### - -In order to run Drone we must set up at least one integration point among GitHub, GitHub Enterprise, Gitlab, Gogs, and Bitbucket. In this tutorial, we'll only integrate github, but if we want to integrate others we can do that from the configuration file. In order to integrate github, we'll need to create a new application in our [github settings][4]. - -![Registering App Github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-app-github.png) - -To create one, we'll need to click on Register a New Application and then fill out the form as shown in the following image. - -![Registering OAuth app github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-OAuth-app-github.png) - -We should make sure that **Authorization callback URL** looks like http://drone.linoxide.com/api/auth/github.com under the configuration of the application. Then, we'll click on Register application. Once done, we'll note the Client ID and Client Secret key, as we'll need them in our drone configuration. - -![Client ID and Secret Token](http://blog.linoxide.com/wp-content/uploads/2015/11/client-id-secret-token.png) - -After that's done, we'll need to edit our drone configuration using a text editor by running the following command. - - # nano /etc/drone/drone.toml - -Then, we'll find the [github] section and update the section with the configuration noted above as shown below. - - [github] - client="3dd44b969709c518603c" - secret="4ee261abdb431bdc5e96b19cc3c498403853632a" - # orgs=[] - # open=false - -![Configuring Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-github-drone-e1446835124465.png) - -### 6. Configuring SMTP server ### - -If we want to enable drone to send notifications via email, then we'll need to specify the SMTP configuration of our SMTP server.
If we already have an SMTP server, we can use its configuration, but as we don't have an SMTP server, we'll need to install an MTA, i.e., Postfix, and then specify the SMTP configuration in the drone configuration. - -#### On Ubuntu #### - -We can install postfix in ubuntu by running the following apt command. - - # apt-get install postfix - -#### On CentOS #### - -We can install postfix in CentOS by running the following yum command. - - # yum install postfix - -After installing, we'll need to edit our postfix configuration using a text editor. - - # nano /etc/postfix/main.cf - -Then, we'll need to set the value of the myhostname parameter to our FQDN, i.e., drone.linoxide.com. - - myhostname = drone.linoxide.com - -Now, we'll finally configure the SMTP section of our drone configuration file. - - # nano /etc/drone/drone.toml - -Then, we'll find the [smtp] section and update the settings as follows. - - [smtp] - host = "drone.linoxide.com" - port = "587" - from = "root@drone.linoxide.com" - user = "root" - pass = "password" - -![Configuring SMTP Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-smtp-drone.png) - -Note: Here, the **user** and **pass** parameters are strongly recommended to be changed according to one's own configuration. - -### 7. Configuring Worker ### - -As we know, drone utilizes docker for its building and testing tasks, so we'll need to configure docker as the worker for our drone. To do so, we'll need to edit the [worker] section in the drone configuration file. - - # nano /etc/drone/drone.toml - -Then, we'll uncomment the following lines and append as shown below. - - [worker] - nodes=[ - "unix:///var/run/docker.sock", - "unix:///var/run/docker.sock" - ] - -Here, we have set only 2 nodes, which means the above configuration is capable of executing only 2 builds at a time. In order to increase concurrency, we can increase the number of nodes.
- - [worker] - nodes=[ - "unix:///var/run/docker.sock", - "unix:///var/run/docker.sock", - "unix:///var/run/docker.sock", - "unix:///var/run/docker.sock" - ] - -Here, in the above configuration, drone is configured to process four builds at a time, using the local docker daemon. - -### 8. Restarting Drone ### - -Finally, after everything is done regarding the installation and configuration, we'll now start our drone server on our Linux machine. - -#### On Ubuntu #### - -To start drone on our Ubuntu 14.04 machine, we'll simply run the service command, as the default init system of Ubuntu 14.04 is SysVinit. - - # service drone restart - -To make drone start automatically on every system boot, we'll run the following command. - - # update-rc.d drone defaults - -#### On CentOS #### - -To start drone on a CentOS machine, we'll simply run the systemctl command, as CentOS 7 ships with systemd as its init system. - - # systemctl restart drone - -Then, we'll enable drone to start automatically on every system boot. - - # systemctl enable drone - -### 9. Allowing Firewalls ### - -As we know drone utilizes port 80 by default and we haven't changed the port, we'll configure our firewall programs to allow port 80 (http) so that it is accessible from other machines in the network. - -#### On Ubuntu 14.04 #### - -Iptables is a popular firewall program which is installed in Ubuntu distributions by default. We'll make iptables expose port 80 so that our Drone web interface is accessible on the network. - - # iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT - # /etc/init.d/iptables save - -#### On CentOS 7 #### - -As CentOS 7 has systemd installed by default, it runs firewalld as its firewall program. In order to open port 80 (http service) on firewalld, we'll need to execute the following commands. - - # firewall-cmd --permanent --add-service=http - - success - - # firewall-cmd --reload - - success - -### 10.
Accessing Web Interface ### - -Now, we'll gonna open the web interface of drone using our favourite web browser. To do so, we'll need to point our web browser to our machine running drone in it. As the default port of drone is 80 and we have also set 80 in this tutorial, we'll simply point our browser to http://ip-address/ or http://drone.linoxide.com according to our configuration. After we have done that correctly, we'll see the first page of it having options to login into our dashboard. - -![Login Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/login-github-drone-e1446834688394.png) - -As we have configured Github in the above step, we'll simply select github and we'll go through the app authentication process and after its done, we'll be forwarded to our Dashboard. - -![Drone Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/11/drone-dashboard.png) - -Here, it will synchronize all our github repository and will ask us to activate the repo which we want to build with drone. - -![Activate Repository](http://blog.linoxide.com/wp-content/uploads/2015/11/activate-repository-e1446835574595.png) - -After its activated, it will ask us to add a new file named .drone.yml in our repository and define the build process and configuration in that file like which image to fetch and which command/script to run while compiling, etc. - -We'll need to configure our .drone.yml as shown below. - - image: python - script: - - python helloworld.py - - echo "Build has been completed." - -After its done, we'll be able to build our application using the configuration YAML file .drone.yml in our drone appliation. All the commits made into the repository is synced in realtime. It automatically syncs the commit and changes made to the repository. Once the commit is made in the repository, build is automatically started in our drone application. 
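The article never shows the helloworld.py that the .drone.yml above invokes, so here is a minimal hypothetical stand-in; its name and contents are assumptions, and any script that exits cleanly would serve the same purpose in the build.

```python
# helloworld.py -- hypothetical stand-in for the script run by .drone.yml.
# The build step passes as long as this script exits with status 0.
def main():
    message = "Hello, World!"
    print(message)
    return message


if __name__ == "__main__":
    main()
```

If the script raised an exception (non-zero exit status), drone would mark the build as failed instead.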
-
-![Building Application Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/building-application-drone.png)
-
-After the build is completed, we'll be able to see the output of the build in the output console.
-
-![Build Success Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/build-success-drone.png)
-
-### Conclusion ###
-
-In this article, we learned how to completely set up a workable Continuous Integration platform with Drone. If we want, we can even get started with the services provided by the official Drone.io project, with a free or paid plan according to our requirements. Drone has changed the world of Continuous Integration with its beautiful web interface and powerful bunch of features, and it has the ability to integrate with many third-party applications and deployment platforms. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you!
-
---------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/setup-drone-continuous-integration-linux/
-
-作者:[Arun Pyasi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:https://drone.io/
-[2]:http://downloads.drone.io/master/drone.deb
-[3]:http://downloads.drone.io/master/drone.rpm
-[4]:https://github.com/settings/developers
diff --git a/sources/tech/20151116 Linux FAQs with Answers--How to install Node.js on Linux.md b/sources/tech/20151116 Linux FAQs with Answers--How to install Node.js on Linux.md
deleted file mode 100644
index 146c918d1d..0000000000
--- a/sources/tech/20151116 Linux FAQs with Answers--How to install Node.js on Linux.md
+++ /dev/null
@@ -1,92 +0,0 @@
-translation by strugglingyouth
-
-Linux FAQs with Answers--How to install Node.js on Linux
-================================================================================
-> **Question**: How can I install Node.js on [insert your Linux distro]?
-
-[Node.js][1] is a server-side software platform built on Google's V8 JavaScript engine. Node.js has become a popular choice for building high-performance server-side applications all in JavaScript. What makes Node.js even more attractive for backend server development is the [huge ecosystem][2] of Node.js libraries and applications. Node.js comes with a command line utility called npm which allows you to easily install, version-control, and manage dependencies of Node.js libraries and applications from the vast npm online repository.
-
-In this tutorial, I will describe **how to install Node.js on major Linux distros including Debian, Ubuntu, Fedora and CentOS**.
-
-Node.js is available as a pre-built package on some distros (e.g., Fedora or Ubuntu), while on other distros you need to install it from source. As Node.js is evolving fast, it is recommended to install the latest Node.js from source instead of an outdated pre-built package. The latest Node.js comes with npm (the Node.js package manager) bundled, allowing you to install external Node.js modules easily.
-
-### Install Node.js on Debian ###
-
-Starting from Debian 8 (Jessie), Node.js is available in the official repositories. Thus you can install it with:
-
-    $ sudo apt-get install npm
-
-On Debian 7 (Wheezy) or earlier, you can install Node.js from source as follows:
-
-    $ sudo apt-get install python g++ make
-    $ wget http://nodejs.org/dist/node-latest.tar.gz
-    $ tar xvzf node-latest.tar.gz
-    $ cd node-v0.10.21 (replace the version with your own)
-    $ ./configure
-    $ make
-    $ sudo make install
-
-### Install Node.js on Ubuntu or Linux Mint ###
-
-Node.js is included in Ubuntu (13.04 and higher). Thus installation is straightforward. The following will install Node.js and npm.
-
-    $ sudo apt-get install npm
-    $ sudo ln -s /usr/bin/nodejs /usr/bin/node
-
-While stock Ubuntu ships Node.js, you can install a more recent version from [its PPA][3]:
-
-    $ sudo apt-get install python-software-properties python g++ make
-    $ sudo add-apt-repository -y ppa:chris-lea/node.js
-    $ sudo apt-get update
-    $ sudo apt-get install npm
-
-### Install Node.js on Fedora ###
-
-Node.js is included in the base repository of Fedora. Therefore you can use yum to install Node.js on Fedora:
-
-    $ sudo yum install npm
-
-If you want to install the latest version of Node.js, you can build it from source as follows:
-
-    $ sudo yum groupinstall 'Development Tools'
-    $ wget http://nodejs.org/dist/node-latest.tar.gz
-    $ tar xvzf node-latest.tar.gz
-    $ cd node-v0.10.21 (replace the version with your own)
-    $ ./configure
-    $ make
-    $ sudo make install
-
-### Install Node.js on CentOS or RHEL ###
-
-To install Node.js with the yum package manager on CentOS, first enable the EPEL repository, and then run:
-
-    $ sudo yum install npm
-
-If you want to build the latest Node.js on CentOS, follow the same procedure as on Fedora.
-
-### Install Node.js on Arch Linux ###
-
-Node.js is available in the Arch Linux community repository. Thus installation is as simple as running:
-
-    $ sudo pacman -S nodejs npm
-
-### Check the Version of Node.js ###
-
-Once you have installed Node.js, you can check its version as follows.
-
-    $ node --version
-
--------------------------------------------------------------------------------
-
-via: http://ask.xmodulo.com/install-node-js-linux.html
-
-作者:[Dan Nanni][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://ask.xmodulo.com/author/nanni
-[1]:http://nodejs.org/
-[2]:https://www.npmjs.com/
-[3]:https://launchpad.net/~chris-lea/+archive/node.js
diff --git a/sources/tech/20151117 Install Android On BQ Aquaris Ubuntu Phone In Linux.md b/sources/tech/20151117 Install Android On BQ Aquaris Ubuntu Phone In Linux.md
deleted file mode 100644
index 94e7ef69ce..0000000000
--- a/sources/tech/20151117 Install Android On BQ Aquaris Ubuntu Phone In Linux.md
+++ /dev/null
@@ -1,125 +0,0 @@
-Install Android On BQ Aquaris Ubuntu Phone In Linux
-================================================================================
-![How to install Android on Ubuntu Phone](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-on-Ubuntu-Phone.jpg)
-
-If you happen to own the first Ubuntu phone and want to **replace Ubuntu with Android on the bq Aquaris e4.5**, this post is going to help you.
-
-There can be plenty of reasons why you might want to remove Ubuntu and use the mainstream Android OS. One of the foremost reasons is that the OS itself is at an early stage and is aimed at developers and enthusiasts. Whatever your reason may be, installing Android on the bq Aquaris is a piece of cake, thanks to the tools provided by bq.
-
-Let's see what we need to do to install Android on the bq Aquaris.
-
-### Prerequisite ###
-
-- Working Internet connection to download the Android factory image and the tools for flashing Android
-- USB data cable
-- A system running Linux
-
-This tutorial is performed using Ubuntu 15.10. But the steps should be applicable to most other Linux distributions.
-
-### Replace Ubuntu with Android in bq Aquaris e4.5 ###
-
-#### Step 1: Download Android firmware ####
-
-The first step is to download the Android image for the bq Aquaris e4.5. The good thing is that it is available from bq's support website. You can download the firmware, around 650 MB in size, from the link below:
-
-- [Download Android for bq Aquaris e4.5][1]
-
-Yes, you will get OTA updates with it. At present the firmware version is 2.0.1, which is based on Android Lollipop. Over time, there could be a new firmware based on Marshmallow, and then the above link could be outdated.
-
-I suggest checking the [bq support page][2] and downloading the latest firmware from there.
-
-Once downloaded, extract it. In the extracted directory, look for the **MT6582_Android_scatter.txt** file. We shall be using it later.
-
-#### Step 2: Download flash tool ####
-
-bq has provided its own flash tool, Herramienta MTK Flash Tool, for easier installation of Android or Ubuntu on the device. You can download the tool from the link below:
-
-- [Download MTK Flash Tool][3]
-
-Since the flash tool might be upgraded in future, you can always get the latest version of the flash tool from the [bq support page][4].
-
-Once downloaded, extract the file. You should see an executable file named **flash_tool** in it. We shall be using it later.
-
-#### Step 3: Remove conflicting packages (optional) ####
-
-If you are using a recent version of Ubuntu or an Ubuntu-based Linux distribution, you may encounter the "BROM ERROR : S_UNDEFINED_ERROR (1001)" later in this tutorial.
-
-To avoid this error, you'll have to uninstall the conflicting package.
Use the commands below:
-
-    sudo apt-get remove modemmanager
-
-Restart the udev service with the command below:
-
-    sudo service udev restart
-
-Just to check for any possible side effects on the kernel module cdc_acm, run the command below:
-
-    lsmod | grep cdc_acm
-
-If the output of the above command is an empty list, you'll have to reload this kernel module:
-
-    sudo modprobe cdc_acm
-
-#### Step 4: Prepare to flash Android ####
-
-Go to the flash tool directory you downloaded and extracted (in step 2). Use the command line for this purpose, because you'll need root privileges here.
-
-Presuming that you saved it in the Downloads directory, use the command below to go to this directory (in case you do not know how to navigate between directories on the command line):
-
-    cd ~/Downloads/SP_Flash*
-
-After that, use the command below to run the flash tool as root:
-
-    sudo ./flash_tool
-
-You'll see a window pop up like the one below. Don't bother about the Download Agent field; it will be filled automatically. Just focus on the Scatter-loading field.
-
-![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-1.jpeg)
-
-Remember the **MT6582_Android_scatter.txt** file we talked about in step 1? This text file is in the extracted directory of the Android firmware you downloaded in step 1. Click on Scatter-loading (in the above picture) and point to the MT6582_Android_scatter.txt file.
-
-When you do that, you'll see several green lines like the ones below:
-
-![Install-Android-bq-aquaris-Ubuntu-2](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-2.jpeg)
-
-#### Step 5: Flashing Android ####
-
-We are almost ready. Switch off your phone and connect it to your computer via a USB cable.
-
-Select Firmware Upgrade from the dropdown and after that click on the big Download button.
-
-![flash Android with Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu.jpeg)
-
-If everything is correct, you should see a flash status at the bottom of the tool:
-
-![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-3.jpeg)
-
-When the procedure is successfully completed, you'll see a notification like this:
-
-![Successfully flashed Android on bq qauaris Ubuntu Phone](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-4.jpeg)
-
-Unplug your phone and power it on. You should see a white screen with AQUARIS written in the middle and, at the bottom, "powered by Android". It might take up to 10 minutes before you can configure and start using Android.
-
-Note: If something goes wrong in the process, press the power, volume up and volume down buttons together to boot into fastboot mode. Turn the phone off again and connect the cable again. Repeat the process of firmware upgrade. It should work.
-
-### Conclusion ###
-
-Thanks to the tools provided, it becomes easier to **flash Android on the bq Ubuntu Phone**. Of course, you can use the same steps to replace Android with Ubuntu. All you need is to download the Ubuntu firmware instead of Android.
-
-I hope this tutorial helped you to replace Ubuntu with Android on your bq phone. If you have questions or suggestions, feel free to ask in the comment section below.
- --------------------------------------------------------------------------------- - -via: http://itsfoss.com/install-android-ubuntu-phone/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5_L/2.0.1_20150623-1900_bq-FW.zip -[2]:http://www.bq.com/gb/support/aquaris-e4-5 -[3]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5/Ubuntu/Web%20version/Web%20version/SP_Flash_Tool_exe_linux_v5.1424.00.zip -[4]:http://www.bq.com/gb/support/aquaris-e4-5-ubuntu-edition \ No newline at end of file diff --git a/sources/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md b/sources/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md deleted file mode 100644 index a7847c46d4..0000000000 --- a/sources/tech/20151117 Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10.md +++ /dev/null @@ -1,318 +0,0 @@ -Install PostgreSQL 9.4 And phpPgAdmin On Ubuntu 15.10 -================================================================================ -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/05/postgresql.png) - -### Introduction ### - -[PostgreSQL][1] is a powerful, open-source object-relational database system. It runs under all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS, Solaris, Tru64), and Windows OS. - -Here is what **Mark Shuttleworth**, the founder of **Ubuntu**, says about PostgreSQL. - -> Postgres is a truly awesome database. When we started working on Launchpad I wasn’t sure if it would be up to the job. I was so wrong. It’s been robust, fast, and professional in every regard. -> -> — Mark Shuttleworth. - -In this handy tutorial, let us see how to install PostgreSQL 9.4 on Ubuntu 15.10 server. 
-
-### Install PostgreSQL ###
-
-PostgreSQL is available in the default repositories. So enter the following command from the Terminal to install it:
-
-    sudo apt-get install postgresql postgresql-contrib
-
-If you're looking for other versions, add the PostgreSQL repository, and install the version you want as shown below.
-
-The **PostgreSQL apt repository** supports LTS versions of Ubuntu (10.04, 12.04 and 14.04) on amd64 and i386 architectures as well as select non-LTS versions (14.10). While not fully supported, the packages often work on other non-LTS versions as well, by using the closest LTS version available.
-
-#### On Ubuntu 14.10 systems: ####
-
-Create the file **/etc/apt/sources.list.d/pgdg.list**:
-
-    sudo vi /etc/apt/sources.list.d/pgdg.list
-
-Add a line for the repository:
-
-    deb http://apt.postgresql.org/pub/repos/apt/ utopic-pgdg main
-
-**Note**: The above repository will only work on Ubuntu 14.10. It has not been updated yet for Ubuntu 15.04 and 15.10.
-
-**On Ubuntu 14.04**, add the following line:
-
-    deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main
-
-**On Ubuntu 12.04**, add the following line:
-
-    deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main
-
-Import the repository signing key:
-
-    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
-
-Update the package lists:
-
-    sudo apt-get update
-
-Then install the required version:
-
-    sudo apt-get install postgresql-9.4
-
-### Accessing PostgreSQL command prompt ###
-
-The default database name and database user are "**postgres**". Switch to the postgres user to perform postgresql-related operations:
-
-    sudo -u postgres psql postgres
-
-#### Sample Output: ####
-
-    psql (9.4.5)
-    Type "help" for help.
-    postgres=#
-
-To exit from the postgresql prompt, type **\q** in the **psql** prompt to return back to the Terminal.
-
-### Set "postgres" user password ###
-
-Log in to the postgresql prompt,
-
-    sudo -u postgres psql postgres
-
-..
and set the postgres password with the following command:
-
-    postgres=# \password postgres
-    Enter new password:
-    Enter it again:
-    postgres=# \q
-
-To install the PostgreSQL Adminpack, enter the command in the postgresql prompt:
-
-    sudo -u postgres psql postgres
-
-----------
-
-    postgres=# CREATE EXTENSION adminpack;
-    CREATE EXTENSION
-
-Type **\q** in the **psql** prompt to exit from the postgresql prompt and return back to the Terminal.
-
-### Create New User and Database ###
-
-For example, let us create a new user called "**senthil**" with password "**ubuntu**", and a database called "**mydb**":
-
-    sudo -u postgres createuser -D -A -P senthil
-
-----------
-
-    sudo -u postgres createdb -O senthil mydb
-
-### Delete Users and Databases ###
-
-To delete a database, switch to the postgres user:
-
-    sudo -u postgres psql postgres
-
-Enter the command (using the database created above as an example):
-
-    postgres=# drop database mydb;
-
-To delete a user, enter the following command:
-
-    postgres=# drop user senthil;
-
-### Configure PostgreSQL-MD5 Authentication ###
-
-**MD5 authentication** requires the client to supply an MD5-encrypted password for authentication. To enable it, edit the **/etc/postgresql/9.4/main/pg_hba.conf** file:
-
-    sudo vi /etc/postgresql/9.4/main/pg_hba.conf
-
-Add or modify the lines as shown below:
-
-    [...]
-    # TYPE DATABASE USER ADDRESS METHOD
-    # "local" is for Unix domain socket connections only
-    local all all md5
-    # IPv4 local connections:
-    host all all 127.0.0.1/32 md5
-    host all all 192.168.1.0/24 md5
-    # IPv6 local connections:
-    host all all ::1/128 md5
-    [...]
-
-Here, 192.168.1.0/24 is my local network address. Replace this value with your own network address.
-
-Restart the postgresql service to apply the changes:
-
-    sudo systemctl restart postgresql
-
-Or,
-
-    sudo service postgresql restart
-
-### Configure PostgreSQL-Configure TCP/IP ###
-
-By default, TCP/IP connections are disabled, so users on other computers can't access postgresql.
To allow users on other computers to connect, edit the file **/etc/postgresql/9.4/main/postgresql.conf:**
-
-    sudo vi /etc/postgresql/9.4/main/postgresql.conf
-
-Find the lines:
-
-    [...]
-    #listen_addresses = 'localhost'
-    [...]
-    #port = 5432
-    [...]
-
-Uncomment both lines, and set the IP address of your postgresql server, or set '*' to listen to all clients, as shown below. Be careful about making PostgreSQL accessible to all remote clients.
-
-    [...]
-    listen_addresses = '*'
-    [...]
-    port = 5432
-    [...]
-
-Restart the postgresql service to save the changes:
-
-    sudo systemctl restart postgresql
-
-Or,
-
-    sudo service postgresql restart
-
-### Manage PostgreSQL with phpPgAdmin ###
-
-[**phpPgAdmin**][2] is a web-based administration utility written in PHP for managing PostgreSQL.
-
-phpPgAdmin is available in the default repositories. So, install phpPgAdmin using the command:
-
-    sudo apt-get install phppgadmin
-
-By default, you can access phppgadmin using **http://localhost/phppgadmin** from your local system's web browser.
-
-To access it from remote systems, do the following.
-
-On Ubuntu 15.10 systems:
-
-Edit the file **/etc/apache2/conf-available/phppgadmin.conf**:
-
-    sudo vi /etc/apache2/conf-available/phppgadmin.conf
-
-Find the line **Require local** and comment it out by adding a **#** in front of the line:
-
-    #Require local
-
-And add the following line:
-
-    allow from all
-
-Save and exit the file.
-
-Then, restart the apache service:
-
-    sudo systemctl restart apache2
-
-On Ubuntu 14.10 and previous versions:
-
-Edit the file **/etc/apache2/conf.d/phppgadmin**:
-
-    sudo nano /etc/apache2/conf.d/phppgadmin
-
-Comment out the following line:
-
-    [...]
-    #allow from 127.0.0.0/255.0.0.0 ::1/128
-
-Uncomment the following line to make phppgadmin accessible from all systems:
-
-    allow from all
-
-Edit **/etc/apache2/apache2.conf**:
-
-    sudo vi /etc/apache2/apache2.conf
-
-Add the following line:
-
-    Include /etc/apache2/conf.d/phppgadmin
-
-Then, restart the apache service.
-
-    sudo service apache2 restart
-
-### Configure phpPgAdmin ###
-
-Edit the file **/etc/phppgadmin/config.inc.php**, and make the following changes. Most of these options are self-explanatory. Read them carefully to know why you are changing these values.
-
-    sudo nano /etc/phppgadmin/config.inc.php
-
-Find the following line:
-
-    $conf['servers'][0]['host'] = '';
-
-Change it as shown below:
-
-    $conf['servers'][0]['host'] = 'localhost';
-
-And find the line:
-
-    $conf['extra_login_security'] = true;
-
-Change the value to **false**:
-
-    $conf['extra_login_security'] = false;
-
-Find the line:
-
-    $conf['owned_only'] = false;
-
-Set the value to **true**:
-
-    $conf['owned_only'] = true;
-
-Save and close the file. Restart the postgresql and apache services:
-
-    sudo systemctl restart postgresql
-
-----------
-
-    sudo systemctl restart apache2
-
-Or,
-
-    sudo service postgresql restart
-
-    sudo service apache2 restart
-
-Now open your browser and navigate to **http://ip-address/phppgadmin**. You will see the following screen:
-
-![phpPgAdmin – Google Chrome_001](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_001.jpg)
-
-Log in with a user that you created earlier. I already created a user called "**senthil**" with password "**ubuntu**" before, so I log in as user "senthil".
-
-![phpPgAdmin – Google Chrome_002](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_002.jpg)
-
-Now you will be able to access the phppgadmin dashboard:
-
-![phpPgAdmin – Google Chrome_003](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_003.jpg)
-
-Log in with the postgres user:
-
-![phpPgAdmin – Google Chrome_004](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_004.jpg)
-
-That's it. Now you'll be able to create, delete and alter databases graphically using phppgadmin.
-
-Cheers!
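One technical footnote on the md5 authentication method configured earlier: PostgreSQL's md5 method stores and compares values of the form md5(password concatenated with the user name), prefixed with the literal string "md5". The sketch below is an illustration added here, not part of the original article; the sample user and password are the ones used in this tutorial.

```python
# Illustrative sketch: how PostgreSQL derives the value used by its
# "md5" auth method -- "md5" + hex(md5(password || username)).
import hashlib


def pg_md5_password(username, password):
    # Note the order: the password comes first, then the user name.
    digest = hashlib.md5((password + username).encode()).hexdigest()
    return "md5" + digest


# Sample user "senthil" with password "ubuntu" from this tutorial.
stored = pg_md5_password("senthil", "ubuntu")
print(stored.startswith("md5"), len(stored))  # True 35
```

Because the user name is mixed into the hash, two users with the same password still get different stored values.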
-
--------------------------------------------------------------------------------
-
-via: http://www.unixmen.com/install-postgresql-9-4-and-phppgadmin-on-ubuntu-15-10/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.twitter.com/ostechnix
-[1]:http://www.postgresql.org/
-[2]:http://phppgadmin.sourceforge.net/doku.php
\ No newline at end of file
diff --git a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md b/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
deleted file mode 100644
index 0bdd6abadb..0000000000
--- a/sources/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md
+++ /dev/null
@@ -1,328 +0,0 @@
-Going Beyond Hello World Containers is Hard Stuff
-================================================================================
-In [my previous post][1], I provided the basic concepts behind Linux container technology. I wrote as much for you as I did for me. Containers are new to me. And I figured having the opportunity to blog about the subject would provide the motivation to really learn the stuff.
-
-I intend to learn by doing. First get the concepts down, then get hands-on and write about it as I go. I assumed there must be a lot of Hello World type stuff out there to bring me up to speed with the basics. Then, I could take things a bit further and build a microservice container or something.
-
-I mean, it can't be that hard, right?
-
-Wrong.
-
-Maybe it's easy for someone who spends a significant amount of their life immersed in operations work. But for me, getting started with this stuff turned out to be hard to the point of posting my frustrations to Facebook...
-
-But, there is good news: I got it to work! And it's always nice being able to make lemonade from lemons.
So I am going to share the story of how I made my first microservice container with you. Maybe my pain will save you some time.
-
-If you've ever found yourself in a situation like this, fear not: folks like me are here to deal with the problems so you don't have to!
-
-Let's begin.
-
-### A Thumbnail Micro Service ###
-
-The microservice I designed was simple in concept. Post a digital image in JPG or PNG format to an HTTP endpoint and get back a 100px wide thumbnail.
-
-Here's what that looks like:
-
-![container-diagram-0](https://deis.com/images/blog-images/containers-hard-0.png)
-
-I decided to use NodeJS for my code and a version of [ImageMagick][2] to do the thumbnail transformation.
-
-I did my first version of the service, using the logic shown here:
-
-![container-diagram-1](https://deis.com/images/blog-images/containers-hard-1.png)
-
-I downloaded the [Docker Toolbox][3], which installs the Docker Quickstart Terminal. The Docker Quickstart Terminal makes creating containers easier. The terminal fires up a Linux virtual machine that has Docker installed, allowing you to run Docker commands from within a terminal.
-
-In my case, I am running on OS X. But there's a Windows version too.
-
-I am going to use the Docker Quickstart Terminal to build a container image for my microservice and run a container from that image.
-
-The Docker Quickstart Terminal runs in your regular terminal, like so:
-
-![container-diagram-2](https://deis.com/images/blog-images/containers-hard-2.png)
-
-### The First Little Problem and the First Big Problem ###
-
-So I fiddled around with NodeJS and ImageMagick and I got the service to work on my local machine.
-
-Then, I created the Dockerfile, which is the configuration script Docker uses to build your container. (I'll go more into builds and the Dockerfile later on.)
-
-Here's the build command I ran in the Docker Quickstart Terminal:
-
-    $ docker build -t thumbnailer:0.1
-
-I got this response:
-
-    docker: "build" requires 1 argument.
-
-Huh.
-
-After 15 minutes I realized: I forgot to put a period . as the last argument!
-
-It needs to be:
-
-    $ docker build -t thumbnailer:0.1 .
-
-But this wasn't the end of my problems.
-
-I got the image to build and then I typed [the `run` command][4] in the Docker Quickstart Terminal to fire up a container based on the image, called `thumbnailer:0.1`:
-
-    $ docker run -d -p 3001:3000 thumbnailer:0.1
-
-The `-p 3001:3000` argument makes it so the NodeJS microservice running on port 3000 within the container binds to port 3001 on the host virtual machine.
-
-Looks good so far, right?
-
-Wrong. Things are about to get pretty bad.
-
-I determined the IP address of the virtual machine created by the Docker Quickstart Terminal by running the `docker-machine` command:
-
-    $ docker-machine ip default
-
-This returns the IP address of the default virtual machine, the one that runs under the Docker Quickstart Terminal. For me, this IP address was 192.168.99.100.
-
-I browsed to http://192.168.99.100:3001/ and got the file upload page I built:
-
-![container-diagram-3](https://deis.com/images/blog-images/containers-hard-3.png)
-
-I selected a file and clicked the Upload Image button.
-
-But it didn't work.
-
-The terminal was telling me it couldn't find the `/upload` directory my microservice requires.
-
-Now, keep in mind, I had been at this for about a day, between the fiddling and the research. I was feeling a little frustrated by this point.
-
-Then, a brain spark flew. Somewhere along the line I remembered reading that a microservice should not do any data persistence on its own! Saving data should be the job of another service.
-
-So what if the container can't find the `/upload` directory? The real issue is: my microservice has a fundamentally flawed design.
-
-Let's take another look:
-
-![container-diagram-4](https://deis.com/images/blog-images/containers-hard-4.png)
-
-Why am I saving a file to disk? Microservices are supposed to be fast.
Why not do all my work in memory? Using memory buffers will make the "I can’t find no stickin’ directory" error go away and will increase the performance of my app dramatically. - -So that’s what I did. And here’s what the plan was: - -![container-diagram-5](https://deis.com/images/blog-images/containers-hard-5.png) - -Here’s the NodeJS I wrote to do all the in-memory work for creating a thumbnail: - - // Bind to the packages - var express = require('express'); - var router = express.Router(); - var path = require('path'); // used for file path - var im = require("imagemagick"); - - // Simple get that allows you test that you can access the thumbnail process - router.get('/', function (req, res, next) { - res.status(200).send('Thumbnailer processor is up and running'); - }); - - // This is the POST handler. It will take the uploaded file and make a thumbnail from the - // submitted byte array. I know, it's not rocket science, but it serves a purpose - router.post('/', function (req, res, next) { - req.pipe(req.busboy); - req.busboy.on('file', function (fieldname, file, filename) { - var ext = path.extname(filename) - - // Make sure that only png and jpg is allowed - if(ext.toLowerCase() != '.jpg' && ext.toLowerCase() != '.png'){ - res.status(406).send("Service accepts only jpg or png files"); - } - - var bytes = []; - - // put the bytes from the request into a byte array - file.on('data', function(data) { - for (var i = 0; i < data.length; ++i) { - bytes.push(data[i]); - } - console.log('File [' + fieldname + '] got bytes ' + bytes.length + ' bytes'); - }); - - // Once the request is finished pushing the file bytes into the array, put the bytes in - // a buffer and process that buffer with the imagemagick resize function - file.on('end', function() { - var buffer = new Buffer(bytes,'binary'); - console.log('Bytes got ' + bytes.length + ' bytes'); - - //resize - im.resize({ - srcData: buffer, - height: 100 - }, function(err, stdout, stderr){ - if (err){ - throw err; 
-            }
-            // get the extension without the period
-            var typ = path.extname(filename).replace('.','');
-            res.setHeader("content-type", "image/" + typ);
-            res.status(200);
-            // send the image back as a response
-            res.send(new Buffer(stdout,'binary'));
-          });
-        });
-      });
-    });
-
-    module.exports = router;
-
-Okay, so we're back on track and everything is hunky dory on my local machine. I go to sleep.
-
-But, before I do, I test the microservice code running as a standard Node app on localhost...
-
-![Containers Hard](https://deis.com/images/blog-images/containers-hard-6.png)
-
-It works fine. Now all I needed to do was get it working in a container.
-
-The next day I woke up, grabbed some coffee, and built an image, not forgetting to put in the period!
-
-    $ docker build -t thumbnailer:0.1 .
-
-I am building from the root directory of my thumbnailer project. The build command uses the Dockerfile that is in the root directory. That's how it goes: put the Dockerfile in the same place you want to run the build and the Dockerfile will be used by default.
-
-Here is the text of the Dockerfile I was using:
-
-    FROM ubuntu:latest
-    MAINTAINER bob@CogArtTech.com
-
-    RUN apt-get update
-    RUN apt-get install -y nodejs nodejs-legacy npm
-    RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
-    RUN apt-get clean
-
-    COPY ./package.json src/
-
-    RUN cd src && npm install
-
-    COPY . /src
-
-    WORKDIR src/
-
-    CMD npm start
-
-What could go wrong?
-
-### The Second Big Problem ###
-
-I ran the `build` command and I got this error:
-
-    Do you want to continue? [Y/n] Abort.
-
-    The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1
-
-I figured something was wrong with the microservice. I went back to my machine, fired up the service on localhost, and uploaded a file.
-
-Then I got this error from NodeJS:
-
-    Error: spawn convert ENOENT
-
-What's going on? This worked the other night!
- -I searched and searched for every permutation of the error I could think of. After about four hours of replacing different node modules here and there, I figured: why not restart the machine? - -I did. And guess what? The error went away! - -Go figure. - -### Putting the Genie Back in the Bottle ### - -So, back to the original quest: I needed to get this build working. - -I removed all of the containers running on the VM, using [the `rm` command][5]: - - $ docker rm -f $(docker ps -a -q) - -The `-f` flag here force-removes running containers. - -Then I removed all of my Docker images, using [the `rmi` command][6]: - - $ docker rmi -f $(docker images | tail -n +2 | awk '{print $3}') - -I went through the whole process of rebuilding the image, running the container, and trying to get the microservice working. Then after about an hour of self-doubt and accompanying frustration, I thought to myself: maybe this isn’t a problem with the microservice. - -So, I looked at the error again: - - Do you want to continue? [Y/n] Abort. - - The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1 - -Then it hit me: the build is looking for a Y input from the keyboard! But, this is a non-interactive Dockerfile script. There is no keyboard. - -I went back to the Dockerfile, and there it was: - - RUN apt-get update - RUN apt-get install -y nodejs nodejs-legacy npm - RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev - RUN apt-get clean - -The second `apt-get install` command is missing the `-y` flag, which makes apt-get answer "yes" automatically where it would otherwise prompt for confirmation. - -I added the missing `-y` to the command: - - RUN apt-get update - RUN apt-get install -y nodejs nodejs-legacy npm - RUN apt-get install -y imagemagick libmagickcore-dev libmagickwand-dev - RUN apt-get clean - -And guess what: after two days of trial and tribulation, it worked! Two whole days!
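A tip I picked up later (not part of the original Dockerfile, just a sketch of the usual convention): you can make this whole class of prompt impossible by setting Debian's noninteractive frontend for the build and chaining the install steps, so a question can never stall the build:

    FROM ubuntu:latest
    ENV DEBIAN_FRONTEND noninteractive

    RUN apt-get update && \
        apt-get install -y nodejs nodejs-legacy npm \
            imagemagick libmagickcore-dev libmagickwand-dev && \
        apt-get clean

With `DEBIAN_FRONTEND` set to `noninteractive`, apt-get assumes default answers even if a package script tries to ask a question that `-y` alone would not cover.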
- -So, I did my build: - - $ docker build -t thumbnailer:0.1 . - -I fired up the container: - - $ docker run -d -p 3001:3000 thumbnailer:0.1 - -Got the IP address of the Virtual Machine: - - $ docker-machine ip default - -Went to my browser and entered http://192.168.99.100:3001/ into the address bar. - -The upload page loaded. - -I selected an image, and this is what I got: - -![container-diagram-7](https://deis.com/images/blog-images/containers-hard-7.png) - -It worked! - -Inside a container, for the first time! - -### So What Does It All Mean? ### - -A long time ago, I accepted the fact that when it comes to tech, sometimes even the easy stuff is hard. Along with that, I abandoned the desire to be the smartest guy in the room. Still, the last few days trying to get basic competency with containers have been, at times, a journey of self-doubt. - -But, you wanna know something? It’s 2 AM as I write this, and every nerve-wracking hour has been worth it. Why? Because you gotta put in the time. This stuff is hard and it does not come easy for anyone. And don’t forget: you’re learning tech and tech runs the world! - -P.S. Check out [Raziel Tabib’s][7] excellent two-part video series on Hello World containers. Here is part one... - -注:youtube视频 - - -And don't miss part two...
- -注:youtube视频 - - -------------------------------------------------------------------------------- - -via: https://deis.com/blog/2015/beyond-hello-world-containers-hard-stuff - -作者:[Bob Reselman][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://deis.com/blog -[1]:http://deis.com/blog/2015/developer-journey-linux-containers -[2]:https://github.com/rsms/node-imagemagick -[3]:https://www.docker.com/toolbox -[4]:https://docs.docker.com/reference/commandline/run/ -[5]:https://docs.docker.com/reference/commandline/rm/ -[6]:https://docs.docker.com/reference/commandline/rmi/ -[7]:http://twitter.com/RazielTabib \ No newline at end of file diff --git a/sources/tech/20151119 How to Install Revive Adserver on Ubuntu 15.04 or CentOS 7.md b/sources/tech/20151119 How to Install Revive Adserver on Ubuntu 15.04 or CentOS 7.md new file mode 100644 index 0000000000..3b6277da80 --- /dev/null +++ b/sources/tech/20151119 How to Install Revive Adserver on Ubuntu 15.04 or CentOS 7.md @@ -0,0 +1,242 @@ +How to Install Revive Adserver on Ubuntu 15.04 / CentOS 7 +================================================================================ +Revive Adserver is a free and open source advertisement management system that enables publishers, ad networks and advertisers to serve ads on websites, apps and videos, and to manage campaigns for multiple advertisers with many features. Revive Adserver, formerly known as OpenX Source, is licensed under the GNU General Public License. It features an integrated banner management interface, URL targeting, geo-targeting and a tracking system for gathering statistics. This application enables website owners to manage banners from both in-house advertisement campaigns as well as from paid or third-party sources, such as Google's AdSense.
Here, in this tutorial, we're going to install Revive Adserver on a machine running Ubuntu 15.04 or CentOS 7. + +### 1. Installing LAMP Stack ### + +First of all, as Revive Adserver requires a complete LAMP Stack to work, we'll install it. A LAMP Stack is the combination of the Apache Web Server, MySQL/MariaDB Database Server and PHP modules. To run Revive properly, we'll need to install some PHP modules like apc, zlib, xml, pcre, mysql and mbstring. To set up the LAMP Stack, we'll need to run the following command for the distribution of Linux we are currently running. + +#### On Ubuntu 15.04 #### + + # apt-get install apache2 mariadb-server php5 php5-gd php5-mysql php5-curl php-apc zlibc zlib1g zlib1g-dev libpcre3 libpcre3-dev libapache2-mod-php5 zip + +#### On CentOS 7 #### + + # yum install httpd mariadb-server php php-gd php-mysql php-curl php-mbstring php-xml php-apc zlib zlib-devel pcre pcre-devel zip + +### 2. Starting Apache and MariaDB server ### + +We’ll now start our newly installed Apache web server and MariaDB database server on our Linux machine. To do so, we'll need to execute the following commands. + +#### On Ubuntu 15.04 #### + +Ubuntu 15.04 ships with systemd as its default init system, so we'll need to execute the following commands to start the apache and mariadb daemons. + + # systemctl start apache2 mysql + +Once they have started, we'll make them start automatically on every system boot by running the following command. + + # systemctl enable apache2 mysql + + Synchronizing state for apache2.service with sysvinit using update-rc.d... + Executing /usr/sbin/update-rc.d apache2 defaults + Executing /usr/sbin/update-rc.d apache2 enable + Synchronizing state for mysql.service with sysvinit using update-rc.d...
+ Executing /usr/sbin/update-rc.d mysql defaults + Executing /usr/sbin/update-rc.d mysql enable + +#### On CentOS 7 #### + +In CentOS 7 as well, systemd is the default init system, so we'll run the following command to start them. + + # systemctl start httpd mariadb + +Next, we'll enable them to start automatically at every boot using the following command. + + # systemctl enable httpd mariadb + + ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service' + ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service' + +### 3. Configuring MariaDB ### + +#### On CentOS 7/Ubuntu 15.04 #### + +As we are starting MariaDB for the first time and no password has been assigned yet, we’ll first need to configure a root password for it. Then, we’ll create a new database so that it can store data for our Revive Adserver installation. + +To configure MariaDB and assign a root password, we’ll need to run the following command. + + # mysql_secure_installation + +This will ask us to enter the current password for root, but as we haven’t set any password before and this is a fresh MariaDB installation, we’ll simply press enter and continue. Then, we’ll be asked to set a root password; here we’ll hit Y and enter our password for the MariaDB root user. After that, we’ll simply hit enter to accept the default values for the remaining questions. + + …. + so you should just press enter here. + + Enter current password for root (enter for none): + OK, successfully used password, moving on… + + Setting the root password ensures that nobody can log into the MariaDB + root user without the proper authorisation. + + Set root password? [Y/n] y + New password: + Re-enter new password: + Password updated successfully! + Reloading privilege tables.. + … Success! + … + installation should now be secure. + Thanks for using MariaDB!
+ +![Configuring MariaDB](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-mariadb.png) + +### 4. Creating a new Database ### + +After we have assigned the password to the root user of our MariaDB server, we'll create a new database for the Revive Adserver application so that it can store its data in the database server. To do so, first we'll need to log in to the MariaDB console by running the following command. + + # mysql -u root -p + +It will ask us to enter the password of the root user which we just set in the step above. Then, we'll be welcomed into the MariaDB console, in which we'll create our new database and database user, assign the user's password, and grant all privileges to create, remove and edit the tables and data stored in the database. + + > CREATE DATABASE revivedb; + > CREATE USER 'reviveuser'@'localhost' IDENTIFIED BY 'Pa$$worD123'; + > GRANT ALL PRIVILEGES ON revivedb.* TO 'reviveuser'@'localhost'; + > FLUSH PRIVILEGES; + > EXIT; + +![Creating Mariadb Revive Database](http://blog.linoxide.com/wp-content/uploads/2015/11/creating-mariadb-revive-database.png) + +### 5. Downloading Revive Adserver Package ### + +Next, we'll download the latest release of Revive Adserver, i.e. version 3.2.2 at the time of writing this article. We'll first get the download link from the official Download Page of Revive Adserver, i.e. [http://www.revive-adserver.com/download/][1], then download the compressed zip file using the wget command under the /tmp/ directory as shown below. + + # cd /tmp/ + # wget http://download.revive-adserver.com/revive-adserver-3.2.2.zip + + --2015-11-09 17:03:48-- http://download.revive-adserver.com/revive-adserver-3.2.2.zip + Resolving download.revive-adserver.com (download.revive-adserver.com)... 54.230.119.219, 54.239.132.177, 54.230.116.214, ... + Connecting to download.revive-adserver.com (download.revive-adserver.com)|54.230.119.219|:80... connected. + HTTP request sent, awaiting response...
200 OK + Length: 11663620 (11M) [application/zip] + Saving to: 'revive-adserver-3.2.2.zip' + revive-adserver-3.2 100%[=====================>] 11.12M 1.80MB/s in 13s + 2015-11-09 17:04:02 (906 KB/s) - 'revive-adserver-3.2.2.zip' saved [11663620/11663620] + +After the file is downloaded, we'll simply extract its files and directories using the unzip command. + + # unzip revive-adserver-3.2.2.zip + +Then, we'll move the entire Revive directory, including every file, from /tmp to the default webroot of the Apache Web Server, i.e. the /var/www/html/ directory. + + # mv revive-adserver-3.2.2 /var/www/html/reviveads + +### 6. Configuring Apache Web Server ### + +We'll now configure our Apache Server so that Revive will run with a proper configuration. To do so, we'll create a new virtual host in a new configuration file named reviveads.conf. The directory for this file may differ from one distribution to another; here is how we create it on the following distributions of Linux. + +#### On Ubuntu 15.04 #### + + # touch /etc/apache2/sites-available/reviveads.conf + # ln -s /etc/apache2/sites-available/reviveads.conf /etc/apache2/sites-enabled/reviveads.conf + # nano /etc/apache2/sites-available/reviveads.conf + +Now, we'll add the following lines of configuration to this file using our favorite text editor. + + <VirtualHost *:80> + ServerAdmin info@reviveads.linoxide.com + DocumentRoot /var/www/html/reviveads/ + ServerName reviveads.linoxide.com + ServerAlias www.reviveads.linoxide.com + <Directory /var/www/html/reviveads/> + Options FollowSymLinks + AllowOverride All + </Directory> + ErrorLog /var/log/apache2/reviveads.linoxide.com-error_log + CustomLog /var/log/apache2/reviveads.linoxide.com-access_log common + </VirtualHost> + +![Configuring Apache2 Ubuntu](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-apache2-ubuntu.png) + +Once done, we'll save the file and exit our text editor. Then, we'll restart the Apache Web server.
+ + # systemctl restart apache2 + +#### On CentOS 7 #### + +In CentOS, we'll directly create the file reviveads.conf under the /etc/httpd/conf.d/ directory using our favorite text editor. + + # nano /etc/httpd/conf.d/reviveads.conf + +Then, we'll add the following lines of configuration to the file. + + <VirtualHost *:80> + ServerAdmin info@reviveads.linoxide.com + DocumentRoot /var/www/html/reviveads/ + ServerName reviveads.linoxide.com + ServerAlias www.reviveads.linoxide.com + <Directory /var/www/html/reviveads/> + Options FollowSymLinks + AllowOverride All + </Directory> + ErrorLog /var/log/httpd/reviveads.linoxide.com-error_log + CustomLog /var/log/httpd/reviveads.linoxide.com-access_log common + </VirtualHost> + +![Configuring httpd Centos](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-httpd-centos.png) + +Once done, we'll simply save the file and exit the editor. Then, we'll restart the apache web server. + + # systemctl restart httpd + +### 7. Fixing Permissions and Ownership ### + +Now, we'll fix some file permissions and the ownership of the installation path. First, we'll set the ownership of the installation directory to the Apache process owner so that the apache web server has full access to the files and directories to edit, create and delete. + +#### On Ubuntu 15.04 #### + + # chown www-data: -R /var/www/html/reviveads + +#### On CentOS 7 #### + + # chown apache: -R /var/www/html/reviveads + +### 8. Allowing Firewall ### + +Now, we'll configure our firewall program to allow port 80 (http) so that our apache web server running Revive Adserver will be accessible from other machines in the network over the default http port, i.e. 80. + +#### On Ubuntu 15.04/CentOS 7 #### + +As CentOS 7 and Ubuntu 15.04 both have systemd installed by default, firewalld runs as the firewall program. In order to open port 80 (the http service) on firewalld, we'll need to execute the following commands.
+ + # firewall-cmd --permanent --add-service=http + + success + + # firewall-cmd --reload + + success + +### 9. Web Installation ### + +Finally, after everything is done as expected, we'll be able to access the web interface of the application using a web browser, and we can go on to the web installation by pointing the web browser at the web server running on our Linux machine. To do so, we'll need to point our web browser to http://ip-address/ or http://domain.com assigned to our Linux machine. Here, in this tutorial, we'll point our browser to http://reviveads.linoxide.com/ . + +Here, we'll see the Welcome page of the Revive Adserver installation with the GNU General Public License V2, as Revive Adserver is released under this license. Then, we'll simply click on the I agree button in order to continue the installation. + +On the next page, we'll need to enter the required database information in order to connect Revive Adserver with the MariaDB database server. Here, we'll need to enter the database name, user and password that we set in the step above. In this tutorial, we entered the database name, user and password as revivedb, reviveuser and Pa$$worD123 respectively; then we set the hostname as localhost and continued further. + +![Configuring Revive Adserver](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-revive-adserver.png) + +We'll now enter the required information, like the administration username, password and email address, so that we can use this information to log in to the dashboard of our Adserver. Once done, we'll head towards the Finish page, on which we'll see that we have successfully installed Revive Adserver on our server. + +Next, we'll be redirected to the Advertiser page, where we'll add new Advertisers and manage them. Then, we'll be able to navigate to our Dashboard, add new users to the adserver, and add new campaigns for our advertisers, banners, websites, video ads and everything else that it's built with.
+ +To enable more configuration options and access to the administrative settings, we can switch our Dashboard user to the Administrator account. This will add new administrative menus to the dashboard, like Plugins and Configuration, through which we can add and manage plugins and configure many features and elements of Revive Adserver. + +### Conclusion ### + +In this article, we learned what Revive Adserver is and how to set it up on a Linux machine running the Ubuntu 15.04 or CentOS 7 distribution. Though Revive Adserver's initial source code was derived from OpenX, currently the code bases for OpenX Enterprise and Revive Adserver are completely separate. To add more features, we can install more plugins, which we can find at [http://www.adserverplugins.com/][2]. Really, this piece of software has changed the way ads are managed for websites, apps and videos, and made it very easy and efficient. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you!
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-revive-adserver-ubuntu-15-04-centos-7/ + +作者:[Arun Pyasi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://www.revive-adserver.com/download/ +[2]:http://www.adserverplugins.com/ \ No newline at end of file diff --git a/sources/tech/20151123 Data Structures in the Linux Kernel.md b/sources/tech/20151123 Data Structures in the Linux Kernel.md new file mode 100644 index 0000000000..d344eacd97 --- /dev/null +++ b/sources/tech/20151123 Data Structures in the Linux Kernel.md @@ -0,0 +1,203 @@ +Translating by DongShuaike + +Data Structures in the Linux Kernel +================================================================================ + +Radix tree +-------------------------------------------------------------------------------- + +As you already know, the Linux kernel provides many different libraries and functions which implement various data structures and algorithms. In this part we will consider one of these data structures - the [Radix tree](http://en.wikipedia.org/wiki/Radix_tree). There are two files which are related to the `radix tree` implementation and API in the Linux kernel: + +* [include/linux/radix-tree.h](https://github.com/torvalds/linux/blob/master/include/linux/radix-tree.h) +* [lib/radix-tree.c](https://github.com/torvalds/linux/blob/master/lib/radix-tree.c) + +Let's talk about what a `radix tree` is. A radix tree is a `compressed trie`, where a [trie](http://en.wikipedia.org/wiki/Trie) is a data structure which implements the interface of an associative array and allows values to be stored as `key-value` pairs. The keys are usually strings, but any data type can be used. A trie is different from an `n-tree` because of its nodes.
Nodes of a trie do not store keys; instead, a node of a trie stores single-character labels. The key related to a given node is derived by traversing from the root of the tree to this node. For example: + + +``` +               +-----------+ +               |           | +               |    " "    | +               |           | +        +------+-----------+------+ +        |                         | +        |                         | +   +----v------+            +-----v-----+ +   |           |            |           | +   |    g      |            |     c     | +   |           |            |           | +   +-----------+            +-----------+ +        |                         | +        |                         | +   +----v------+            +-----v-----+ +   |           |            |           | +   |    o      |            |     a     | +   |           |            |           | +   +-----------+            +-----------+ +                                  | +                                  | +                            +-----v-----+ +                            |           | +                            |     t     | +                            |           | +                            +-----------+ +``` + +So in this example, we can see a `trie` with the keys `go` and `cat`. A compressed trie or `radix tree` differs from a `trie` in that all intermediate nodes which have only one child are removed. + +The radix tree in the Linux kernel is a data structure which maps values to integer keys. It is represented by the following structures from the file [include/linux/radix-tree.h](https://github.com/torvalds/linux/blob/master/include/linux/radix-tree.h): + +```C +struct radix_tree_root { + unsigned int height; + gfp_t gfp_mask; + struct radix_tree_node __rcu *rnode; +}; +``` + +This structure represents the root of a radix tree and contains three fields: + +* `height` - height of the tree; +* `gfp_mask` - tells how memory allocations will be performed; +* `rnode` - pointer to the child node.
+ +The first field we will discuss is `gfp_mask`: + +Low-level kernel memory allocation functions take a set of flags - the `gfp_mask` - which describes how the allocation is to be performed. These `GFP_` flags, which control the allocation process, can have the following values: + +* `GFP_NOIO` - can sleep and wait for memory; +* `__GFP_HIGHMEM` - high memory can be used; +* `GFP_ATOMIC` - allocation process is high-priority and can't sleep; + +etc. + +The next field is `rnode`: + +```C +struct radix_tree_node { + unsigned int path; + unsigned int count; + union { + struct { + struct radix_tree_node *parent; + void *private_data; + }; + struct rcu_head rcu_head; + }; + /* For tree user */ + struct list_head private_list; + void __rcu *slots[RADIX_TREE_MAP_SIZE]; + unsigned long tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS]; +}; +``` + +This structure contains information about the offset in the parent and the height from the bottom, the count of child nodes, and fields for accessing and freeing a node. These fields are described below: + +* `path` - offset in parent & height from the bottom; +* `count` - count of the child nodes; +* `parent` - pointer to the parent node; +* `private_data` - used by the user of a tree; +* `rcu_head` - used for freeing a node; +* `private_list` - used by the user of a tree; + +The last two fields of the `radix_tree_node` - `tags` and `slots` - are important and interesting. Every node can contain a set of slots which store pointers to the data. Empty slots in the Linux kernel radix tree implementation store `NULL`. Radix trees in the Linux kernel also support tags, which are associated with the `tags` field in the `radix_tree_node` structure. Tags allow individual bits to be set on records which are stored in the radix tree.
+ +Now that we know about the radix tree structure, it is time to look at its API. + +Linux kernel radix tree API +--------------------------------------------------------------------------------- + +We start with the data structure initialization. There are two ways to initialize a new radix tree. The first is to use the `RADIX_TREE` macro: + +```C +RADIX_TREE(name, gfp_mask); +``` + +As you can see, we pass the `name` parameter, so with the `RADIX_TREE` macro we can define and initialize a radix tree with the given name. The implementation of `RADIX_TREE` is simple: + +```C +#define RADIX_TREE(name, mask) \ + struct radix_tree_root name = RADIX_TREE_INIT(mask) + +#define RADIX_TREE_INIT(mask) { \ + .height = 0, \ + .gfp_mask = (mask), \ + .rnode = NULL, \ +} +``` + +At the beginning of the `RADIX_TREE` macro we define an instance of the `radix_tree_root` structure with the given name and call the `RADIX_TREE_INIT` macro with the given mask. The `RADIX_TREE_INIT` macro just initializes the `radix_tree_root` structure with the default values and the given mask. + +The second way is to define a `radix_tree_root` structure by hand and pass it with a mask to the `INIT_RADIX_TREE` macro: + +```C +struct radix_tree_root my_radix_tree; +INIT_RADIX_TREE(my_radix_tree, gfp_mask_for_my_radix_tree); +``` + +where: + +```C +#define INIT_RADIX_TREE(root, mask) \ +do { \ + (root)->height = 0; \ + (root)->gfp_mask = (mask); \ + (root)->rnode = NULL; \ +} while (0) +``` + +performs the same initialization with default values as the `RADIX_TREE_INIT` macro does. + +Next are two functions for inserting and deleting records to/from a radix tree: + +* `radix_tree_insert`; +* `radix_tree_delete`; + +The first, `radix_tree_insert`, takes three parameters: + +* root of a radix tree; +* index key; +* data to insert; + +The `radix_tree_delete` function takes the same set of parameters as `radix_tree_insert`, but without the data.
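Putting these together, a kernel-context usage sketch might look like the following. Treat it as pseudocode: it only compiles inside a kernel build, and `struct my_item` is a made-up example type, not something from the radix tree API:

```C
#include <linux/radix-tree.h>
#include <linux/gfp.h>

static RADIX_TREE(my_tree, GFP_KERNEL);

static int my_store_and_drop(struct my_item *item)
{
	int err;

	/* store item under the integer key 42 */
	err = radix_tree_insert(&my_tree, 42, item);
	if (err)
		return err;	/* e.g. -ENOMEM or -EEXIST */

	/* ... and later remove it again */
	radix_tree_delete(&my_tree, 42);
	return 0;
}
```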
+ +Search in a radix tree is implemented in three ways: + +* `radix_tree_lookup`; +* `radix_tree_gang_lookup`; +* `radix_tree_lookup_slot`. + +The first, `radix_tree_lookup`, takes two parameters: + +* root of a radix tree; +* index key; + +This function tries to find the given key in the tree and returns the record associated with this key. The second, `radix_tree_gang_lookup`, has the following signature: + +```C +unsigned int radix_tree_gang_lookup(struct radix_tree_root *root, + void **results, + unsigned long first_index, + unsigned int max_items); +``` + +It returns the number of records, sorted by key, starting from the first index. The number of returned records will not be greater than `max_items`. + +And the last function, `radix_tree_lookup_slot`, returns the slot which contains the data. + +Links +--------------------------------------------------------------------------------- + +* [Radix tree](http://en.wikipedia.org/wiki/Radix_tree) +* [Trie](http://en.wikipedia.org/wiki/Trie) + +-------------------------------------------------------------------------------- + +via: https://github.com/0xAX/linux-insides/edit/master/DataStructures/radix-tree.md + +作者:[0xAX] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + diff --git a/sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md b/sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md new file mode 100644 index 0000000000..183d39252f --- /dev/null +++ b/sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md @@ -0,0 +1,109 @@ +#name1e5s Translating +8 things to do after installing openSUSE Leap 42.1 +================================================================================ +![Credit: Metropolitan
Transportation/Flickr](http://images.techhive.com/images/article/2015/11/things-to-do-100626947-primary.idge.jpg) +Credit: [Metropolitan Transportation/Flickr][1] + +> You've installed openSUSE on your PC. Here's what to do next. + +[openSUSE Leap is indeed a huge leap][2], allowing users to run a distro that has the same DNA as SUSE Linux Enterprise. Like any other operating system, some work is needed to get it set up for optimal use. + +Following are some of the things that I did after installing openSUSE Leap on my PC (these are not applicable for server installations). None of them are mandatory, and you may be fine with the basic install. But if you need more out of your openSUSE Leap, follow me. + +### 1. Adding Packman repository ### + +Due to software patents and licences, openSUSE, like many Linux distributions, doesn't offer many applications, codecs, and drivers through official repositories (repos). Instead, these are made available through third-party or community repos. The first and most important repository is 'Packman'. Since these repos are not enabled by default, we have to add them. You can do so either using YaST (one of the gems of openSUSE) or by command line (instructions below). + +![o42 yast repo](http://images.techhive.com/images/article/2015/11/o42-yast-repo-100626952-large970.idge.png) +Adding Packman repositories. + +Using YaST, go to the Software Repositories section. Click on the 'Add' button and select 'Community Repositories.' Click 'Next.' And once the repos are loaded, select the Packman Repository. Click 'OK,' then import the trusted GnuPG key by clicking on the 'Trust' button. + +Or, using the terminal, you can add and enable the Packman repo using the following command: + + zypper ar -f -n packman http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.1/ packman + +Once the repo is added, you have access to many more packages.
To install any application or package, open YaST Software Manager, search for the package and install it. + +### 2. Install VLC ### + +VLC is the Swiss Army knife of media players and can play virtually any media file. You can install VLC from YaST Software Manager or from software.opensuse.org. You will need to install two packages: vlc and vlc-codecs. + +If using the terminal, run the following command: + + sudo zypper install vlc vlc-codecs + +### 3. Install Handbrake ### + +If you need to transcode or convert your video files from one format to another, [Handbrake is the tool for you][3]. Handbrake is available through the repositories we enabled, so just search for it in YaST and install it. + +If you are using the terminal, run the following command: + + sudo zypper install handbrake-cli handbrake-gtk + +(Pro tip: VLC can also transcode audio and video files.) + +### 4. Install Chrome ### + +OpenSUSE comes with Firefox as the default browser. But since Firefox isn't capable of playing restricted media such as Netflix, I recommend installing Chrome. This takes some extra work. First you need to import the trusted key from Google. Open the terminal app and run the 'wget' command to download the key: + + wget https://dl.google.com/linux/linux_signing_key.pub + +Then import the key: + + sudo rpm --import linux_signing_key.pub + +Now head over to the [Google Chrome website][4] and download the 64-bit .rpm file. Once downloaded, run the following command to install the browser: + + sudo zypper install /PATH_OF_GOOGLE_CHROME.rpm + +### 5. Install Nvidia drivers ### + +OpenSUSE Leap will work out of the box even if you have Nvidia or ATI graphics cards. However, if you do need the proprietary drivers for gaming or any other purpose, you can install such drivers, but some extra work is needed. + +First you need to add the Nvidia repositories; it's the same procedure we used to add Packman repositories using YaST.
The only difference is that you will choose Nvidia from the Community Repositories section. Once it's added, go to **Software Management > Extras** and select 'Extras/Install All Matching Recommended Packages'. + +![o42 nvidia](http://images.techhive.com/images/article/2015/11/o42-nvidia-100626950-large.idge.png) + +It will open a dialog box showing all the packages it's going to install; click OK and follow the instructions. You can also run the following command after adding the Nvidia repository to install the needed Nvidia drivers: + + sudo zypper inr + +(Note: I have never used AMD/ATI cards so I have no experience with them.) + +### 6. Install media codecs ### + +Once you have VLC installed you won't need to install media codecs, but if you are using other apps for media playback you will need to install such codecs. Some developers have written scripts/tools which make this a much easier process. Just go to [this page][5] and install the entire pack by clicking on the appropriate button. It will open YaST and install the packages automatically (of course you will have to give the root password and trust the GnuPG key, as usual). + +### 7. Install your preferred email client ### + +OpenSUSE comes with Kmail or Evolution, depending on the Desktop Environment you installed on the system. I run Plasma, which comes with Kmail, and this email client leaves a lot to be desired. I suggest trying Thunderbird or Evolution mail. All major email clients are available through official repositories. You can also check my [handpicked list of the best email clients for Linux][6]. + +### 8. Enable Samba services from Firewall ### + +OpenSUSE offers a much more secure system out of the box, compared to other distributions. But it also requires a little bit more work for a new user. If you are using the Samba protocol to share files within your local network then you will have to allow that service through the Firewall.
+ +![o42 firewall](http://images.techhive.com/images/article/2015/11/o42-firewall-100626948-large970.idge.png) +Allow Samba Client and Server from Firewall settings. + +Open YaST and search for Firewall. Once in Firewall settings, go to 'Allowed Services' where you will see a drop down list under 'Service to allow.' Select 'Samba Client,' then click 'Add.' Do the same with the 'Samba Server' option. Once both are added, click 'Next,' then click 'Finish,' and now you will be able to share folders from your openSUSE system and also access other machines over the local network. + +That's pretty much all that I did on my new openSUSE system to set it up just the way I like it. If you have any questions, please feel free to ask in the comments below. + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/3003865/open-source-tools/8-things-to-do-after-installing-opensuse-leap-421.html + +作者:[Swapnil Bhartiya][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Swapnil-Bhartiya/ +[1]:https://www.flickr.com/photos/mtaphotos/11200079265/ +[2]:https://www.linux.com/news/software/applications/865760-opensuse-leap-421-review-the-most-mature-linux-distribution +[3]:https://www.linux.com/learn/tutorials/857788-how-to-convert-videos-in-linux-using-the-command-line +[4]:https://www.google.com/intl/en/chrome/browser/desktop/index.html#brand=CHMB&utm_campaign=en&utm_source=en-ha-na-us-sk&utm_medium=ha +[5]:http://opensuse-community.org/ +[6]:http://www.itworld.com/article/2875981/the-5-best-open-source-email-clients-for-linux.html diff --git a/sources/tech/20151202 A new Mindcraft moment.md b/sources/tech/20151202 A new Mindcraft moment.md new file mode 100644 index 0000000000..79930e8202 --- /dev/null +++ b/sources/tech/20151202 A new Mindcraft moment.md @@ 
-0,0 +1,43 @@ +A new Mindcraft moment? +======================= + +Credit: Jonathan Corbet + +It is not often that Linux kernel development attracts the attention of a mainstream newspaper like The Washington Post; lengthy features on the kernel community's approach to security are even more uncommon. So when just such a feature hit the net, it attracted a lot of attention. This article has gotten mixed reactions, with many seeing it as a direct attack on Linux. The motivations behind the article are hard to know, but history suggests that we may look back on it as having given us a much-needed push in a direction we should have been going for some time. + +Think back, for a moment, to the dim and distant past — April 1999, to be specific. An analyst company named Mindcraft issued a report showing that Windows NT greatly outperformed Red Hat Linux 5.2 and Apache for web-server workloads. The outcry from the Linux community, including from a very young LWN, was swift and strong. The report was a piece of Microsoft-funded FUD trying to cut off an emerging threat to its world-domination plans. The Linux system had been deliberately configured for poor performance. The hardware chosen was not well supported by Linux at the time. And so on. + +Once people calmed down a bit, though, one other fact came clear: the Mindcraft folks, whatever their motivations, had a point. Linux did, indeed, have performance problems that were reasonably well understood even at the time. The community then did what it does best: we sat down and fixed the problems. The scheduler got exclusive wakeups, for example, to put an end to the thundering-herd problem in the acceptance of connection requests. Numerous other little problems were fixed. Within a year or so, the kernel's performance on this kind of workload had improved considerably. + +The Mindcraft report, in other words, was a much-needed kick in the rear that got the community to deal with issues that had been neglected until then.
+ +The Washington Post article seems clearly slanted toward a negative view of the Linux kernel and its contributors. It freely mixes kernel problems with other issues (the AshleyMadison.com breakin, for example) that were not kernel vulnerabilities at all. The fact that vendors seem to have little interest in getting security fixes to their customers is danced around like a huge elephant in the room. There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true, but it should not be allowed to overshadow the simple fact that the article has a valid point. + +We do a reasonable job of finding and fixing bugs. Problems, whether they are security-related or not, are patched quickly, and the stable-update mechanism makes those patches available to kernel users. Compared to a lot of programs out there (free and proprietary alike), the kernel is quite well supported. But pointing at our ability to fix bugs is missing a crucial point: fixing security bugs is, in the end, a game of whack-a-mole. There will always be more moles, some of which we will not know about (and will thus be unable to whack) for a long time after they are discovered and exploited by attackers. These bugs leave our users vulnerable, even if the commercial side of Linux did a perfect job of getting fixes to users — which it decidedly does not. + +The point that developers concerned about security have been trying to make for a while is that fixing bugs is not enough. We must instead realize that we will never fix them all and focus on making bugs harder to exploit. That means restricting access to information about the kernel, making it impossible for the kernel to execute code in user-space memory, instrumenting the kernel to detect integer overflows, and all the other things laid out in Kees Cook's Kernel Summit talk at the end of October. 
Many of these techniques are well understood and have been adopted by other operating systems; others will require innovation on our part. But, if we want to adequately defend our users from attackers, these changes need to be made. + +Why hasn't the kernel adopted these technologies already? The Washington Post article puts the blame firmly on the development community, and on Linus Torvalds in particular. The culture of the kernel community prioritizes performance and functionality over security and is unwilling to make compromises if they are needed to improve the security of the kernel. There is some truth to this claim; the good news is that attitudes appear to be shifting as the scope of the problem becomes clear. Kees's talk was well received, and it clearly got developers thinking and talking about the issues. + +The point that has been missed is that we do not just have a case of Linus fending off useful security patches. There simply are not many such patches circulating in the kernel community. In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream. Getting any large, intrusive patch set merged requires working with the kernel community, making the case for the changes, splitting the changes into reviewable pieces, dealing with review comments, and so on. It can be tiresome and frustrating, but it's how the kernel works, and it clearly results in a more generally useful, more maintainable kernel in the long run. + +Almost nobody is doing that work to get new security technologies into the kernel. One might cite a "chilling effect" from the hostile reaction such patches can receive, but that is an inadequate answer: developers have managed to merge many changes over the years despite a difficult initial reaction. Few security developers are even trying. + +Why aren't they trying? One fairly obvious answer is that almost nobody is being paid to try. 
Almost all of the work going into the kernel is done by paid developers and has been for many years. The areas that companies see fit to support get a lot of work and are well advanced in the kernel. The areas that companies think are not their problem are rather less so. The difficulties in getting support for realtime development are a clear case in point. Other areas, such as documentation, tend to languish as well. Security is clearly one of those areas. There are a lot of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies. + +There are signs that things might be changing a bit. More developers are showing interest in security-related issues, though commercial support for their work is still less than it should be. The reaction against security-related changes might be less knee-jerk negative than it used to be. Efforts like the Kernel Self Protection Project are starting to work on integrating existing security technologies into the kernel. + +We have a long way to go, but, with some support and the right mindset, a lot of progress can be made in a short time. The kernel community can do amazing things when it sets its mind to it. With luck, the Washington Post article will help to provide the needed impetus for that sort of setting of mind. History suggests that we will eventually see this moment as a turning point, when we were finally embarrassed into doing work that has clearly needed doing for a while. Linux should not have a substandard security story for much longer. 
+ +--------------------------- + +via: https://lwn.net/Articles/663474/ + +作者:Jonathan Corbet + +译者:[译者ID](https://github.com/译者ID) + +校对:[校对者ID](https://github.com/校对者ID) + + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/sources/tech/20151206 Supporting secure DNS in glibc.md b/sources/tech/20151206 Supporting secure DNS in glibc.md new file mode 100644 index 0000000000..0547719a72 --- /dev/null +++ b/sources/tech/20151206 Supporting secure DNS in glibc.md @@ -0,0 +1,47 @@ +zpl1025 +Supporting secure DNS in glibc +======================== + +Credit: Jonathan Corbet + +One of the many weak links in Internet security is the domain name system (DNS); it is subject to attacks that, among other things, can mislead applications regarding the IP address of a system they wish to connect to. That, in turn, can cause connections to go to the wrong place, facilitating man-in-the-middle attacks and more. The DNSSEC protocol extensions are meant to address this threat by setting up a cryptographically secure chain of trust for DNS information. When DNSSEC is set up properly, applications should be able to trust the results of domain lookups. As the discussion over an attempt to better integrate DNSSEC into the GNU C Library shows, though, ensuring that DNS lookups are safe is still not a straightforward problem. + +In a sense, the problem was solved years ago; one can configure a local nameserver to perform full DNSSEC verification and use that server via glibc calls in applications. DNSSEC can even be used to increase security in other areas; it can, for example, carry SSH or TLS key fingerprints, allowing applications to verify that they are talking to the right server. Things get tricky, though, when one wants to be sure that DNS results claiming to have DNSSEC verification are actually what they claim to be — when one wants the security that DNSSEC is meant to provide, in other words. 
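The "solved years ago" configuration described above can be sketched concretely. This is a minimal example assuming unbound as the local validating resolver (the file paths are assumptions for a typical install; any validating resolver works the same way):

```
# /etc/resolv.conf -- the local validating resolver is the only entry
nameserver 127.0.0.1

# /etc/unbound/unbound.conf -- validate DNSSEC and fail closed:
# answers that fail validation come back as SERVFAIL, so
# applications never see bogus data at all
server:
    interface: 127.0.0.1
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
```

With this in place, a query such as `dig +dnssec example.com @127.0.0.1` should show the "ad" flag for correctly signed zones.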
+ +The /etc/resolv.conf problem + +Part of the problem, from the glibc perspective, is that glibc itself does not do DNSSEC verification. Instead, it consults /etc/resolv.conf and asks the servers found therein to do the lookup and verification; the results are then returned to the application. If the application is using the low-level res_query() interface, those results may include the "authenticated data" (AD) flag (if the nameserver has set it) indicating that DNSSEC verification has been successfully performed. But glibc knows nothing about the trustworthiness of the nameserver that has provided those results, so it cannot tell the application anything about whether they should really be trusted. + +One of the first steps suggested by glibc maintainer Carlos O'Donell is to add an option (dns-strip-dnssec-ad-bit) to the resolv.conf file telling glibc to unconditionally remove the AD bit. This option could be set by distributions to indicate that the DNS lookup results cannot be trusted at a DNSSEC level. Once things have been set up so that the results can be trusted, that option can be removed. In the meantime, though, applications would have a way to judge the DNS lookup results they get from glibc, something that does not exist now. + +What would a trustworthy setup look like? The standard picture looks something like this: there is a local nameserver, accessed via the loopback interface, as the only entry in /etc/resolv.conf. That nameserver would be configured to do verification and, in the case that verification fails, simply return no results at all. There would, in almost all cases, be no need to worry about whether applications see the AD bit or not; if the results are not trustworthy, applications will simply not see them at all. A number of distributions are moving toward this model, but the situation is still not as simple as some might think. + +One problem is that this scheme makes /etc/resolv.conf into a central point of trust for the system. 
But, in a typical Linux system, there are no end of DHCP clients, networking scripts, and more that will make changes to that file. As Paul Wouters pointed out, locking down this file in the short term is not really an option. Sometimes those changes are necessary: when a diskless system is booting, it may need name-resolution service before it is at a point where it can start up its own nameserver. A system's entire DNS environment may change depending on which network it is attached to. Systems in containers may be best configured to talk to a nameserver on the host. And so on. + +So there seems to be a general belief that /etc/resolv.conf cannot really be trusted on current systems. Ideas to add secondary configuration files (/etc/secure-resolv.conf or whatever) have been floated, but they don't much change the basic nature of the situation. Beyond that, some participants felt that even a local nameserver running on the loopback interface is not really trustworthy; Zack Weinberg suggested that administrators might intentionally short out DNSSEC validation, for example. + +Since the configuration cannot be trusted on current systems, the reasoning goes, glibc needs to have a way to indicate to applications when the situation has improved and things can be trusted. That could include the AD-stripping option described above (or, conversely, an explicit "this nameserver is trusted" option); that, of course, would require that the system be locked down to a level where surprising changes to /etc/resolv.conf no longer happen. A variant, as suggested by Petr Spacek, is to have a way for an application to ask glibc whether it is talking to a local nameserver or not. + +Do it in glibc? + +An alternative would be to dispense with the nameserver and have glibc do DNSSEC validation itself. There is, however, resistance to putting a big pile of cryptographic code into glibc itself. 
That would increase the size of the library and, it is felt, increase the attack surface of any application using it. A variant of this idea, suggested by Zack, would be to put the validation code into the name-service caching daemon (nscd) instead. Since nscd is part of glibc, it is under the control of the glibc developers and there could be a certain amount of confidence that DNSSEC validation is being performed properly. The location of the nscd socket is well known, so the /etc/resolv.conf issues don't come into play. Carlos worried, though, that this approach might deter adoption by users who do not want the caching features of nscd; in his mind, that seems to rule out the nscd option. + +So, in the short term, at least, it seems unlikely that glibc will take on the full task of performing validated DNSSEC lookups. That means that, if security-conscious applications are going to use glibc for their name lookups, the library will have to provide an indication of how trustworthy the results received from a separate nameserver are. And that will almost certainly require explicit action on the part of the distributor and/or system administrator. As Simo Sorce put it: + +A situation in which glibc does not use an explicit configuration option to signal applications that it is using a trusted resolver is not useful ... no scratch that, it is actively harmful, because applications developers will quickly realize they cannot trust any information coming from glibc and will simply not use it for DNSSEC related information. + +Configuring a system to properly use DNSSEC involves changes to many of the components of that system — it is a distribution-wide problem that will take time to solve fully. The role that glibc plays in this transition is likely to be relatively small, but it is an important one: glibc is probably the only place where applications can receive some assurance that their DNS results are trustworthy without implementing their own resolver code.
Running multiple DNSSEC implementations on a system seems like an unlikely path to greater security, so it would be good to get this right. + +The glibc project has not yet chosen a path by which it intends to get things right, though some sort of annotation in /etc/resolv.conf looks like a likely outcome. Any such change would then have to get into a release; given the conservative nature of glibc development, it may already be late for the 2.23 release, which is likely to happen in February. So higher DNSSEC awareness in glibc may not happen right away, but there is at least some movement in that direction. + +--------------------------- + +via: https://lwn.net/Articles/663474/ + +作者:Jonathan Corbet + +译者:[译者ID](https://github.com/译者ID) + +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/sources/tech/20151215 Linux Desktop Fun--Summon Swarms Of Penguins To Waddle About The Desktop.md b/sources/tech/20151215 Linux Desktop Fun--Summon Swarms Of Penguins To Waddle About The Desktop.md new file mode 100644 index 0000000000..b544517f5e --- /dev/null +++ b/sources/tech/20151215 Linux Desktop Fun--Summon Swarms Of Penguins To Waddle About The Desktop.md @@ -0,0 +1,101 @@ +translation by strugglingyouth +Linux Desktop Fun: Summon Swarms Of Penguins To Waddle About The Desktop +================================================================================ +XPenguins is a program for animating cute cartoon animals in your root window. By default they will be penguins: they drop in from the top of the screen, walk along the tops of your windows, climb up the sides of your windows, levitate, skateboard, and do other similarly exciting things. Now you can send an army of cute little penguins to invade the screen of someone else on your network.
+ +### Install XPenguins ### + +Open a command-line terminal (select Applications > Accessories > Terminal), and then type the following commands to install the XPenguins program. First, type the command apt-get update to tell apt to refresh its package information by querying the configured repositories, and then install the required program: + + $ sudo apt-get update + $ sudo apt-get install xpenguins + +### How do I Start XPenguins Locally? ### + +Type the following command: + + $ xpenguins + +Sample outputs: + +![An army of cute little penguins invading the screen](http://files.cyberciti.biz/uploads/tips/2011/07/Workspace-1_002_12_07_2011.png) + +An army of cute little penguins invading the screen + +![Linux: Cute little penguins walking along the tops of your windows](http://files.cyberciti.biz/uploads/tips/2011/07/Workspace-1_001_12_07_2011.png) + +Linux: Cute little penguins walking along the tops of your windows + +![Xpenguins Screenshot](http://files.cyberciti.biz/uploads/tips/2011/07/xpenguins-screenshot.jpg) + +Xpenguins Screenshot + +Be careful when you move windows as the little guys squash easily. If you send the program an interrupt signal (Ctrl-C) they will burst.
+ +### Themes ### + +To list themes, enter: + + $ xpenguins -l + +Sample outputs: + + Big Penguins + Bill + Classic Penguins + Penguins + Turtles + +You can use alternative themes as follows: + + $ xpenguins --theme "Big Penguins" --theme "Turtles" + +You can install additional themes as follows: + + $ cd /tmp + $ wget http://xpenguins.seul.org/xpenguins_themes-1.0.tar.gz + $ tar -zxvf xpenguins_themes-1.0.tar.gz + $ mkdir ~/.xpenguins + $ mv -v themes ~/.xpenguins/ + $ xpenguins -l + +Sample outputs: + + Lemmings + Sonic the Hedgehog + The Simpsons + Winnie the Pooh + Worms + Big Penguins + Bill + Classic Penguins + Penguins + Turtles + +To start with a random theme, enter: + + $ xpenguins --random-theme + +To load all available themes and run them simultaneously, enter: + + $ xpenguins --all + +More links and information: + +- [XPenguins][1] home page. +- man penguins +- More Linux / UNIX desktop fun with [Steam Locomotive][2] and [Terminal ASCII Aquarium][3]. + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/tips/linux-cute-little-xpenguins-walk-along-tops-ofyour-windows.html + +作者:Vivek Gite +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://xpenguins.seul.org/ +[2]:http://www.cyberciti.biz/tips/displays-animations-when-accidentally-you-type-sl-instead-of-ls.html +[3]:http://www.cyberciti.biz/tips/linux-unix-apple-osx-terminal-ascii-aquarium.html diff --git a/sources/tech/20151215 Linux or Unix Desktop Fun--Text Mode ASCII-art Box and Comment Drawing.md b/sources/tech/20151215 Linux or Unix Desktop Fun--Text Mode ASCII-art Box and Comment Drawing.md new file mode 100644 index 0000000000..268c6c9477 --- /dev/null +++ b/sources/tech/20151215 Linux or Unix Desktop Fun--Text Mode ASCII-art Box and Comment Drawing.md @@ -0,0 +1,202 @@ +翻译中 +Linux / Unix Desktop Fun: Text 
Mode ASCII-art Box and Comment Drawing +================================================================================ +The boxes command is a text filter and a little-known tool that can draw any kind of ASCII art box around its input text or code for fun and profit. You can quickly create email signatures, or create regional comments in any programming language. This command was intended to be used with the vim text editor, but can be tied to any text editor which supports filters, as well as from the command line as a standalone tool. + +### Task: Install boxes ### + +Use the [apt-get command][1] to install boxes under Debian / Ubuntu Linux: + + $ sudo apt-get install boxes + +Sample outputs: + + Reading package lists... Done + Building dependency tree + Reading state information... Done + The following NEW packages will be installed: + boxes + 0 upgraded, 1 newly installed, 0 to remove and 6 not upgraded. + Need to get 0 B/59.8 kB of archives. + After this operation, 205 kB of additional disk space will be used. + Selecting previously deselected package boxes. + (Reading database ... 224284 files and directories currently installed.) + Unpacking boxes (from .../boxes_1.0.1a-2.3_amd64.deb) ... + Processing triggers for man-db ... + Setting up boxes (1.0.1a-2.3) ... + +RHEL / CentOS / Fedora Linux users, use the [yum command to install boxes][2] (first [enable EPEL repo as described here][3]): + + # yum install boxes + +Sample outputs: + + Loaded plugins: rhnplugin + Setting up Install Process + Resolving Dependencies + There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.
+ --> Running transaction check + ---> Package boxes.x86_64 0:1.1-8.el6 will be installed + --> Finished Dependency Resolution + Dependencies Resolved + ========================================================================== + Package Arch Version Repository Size + ========================================================================== + Installing: + boxes x86_64 1.1-8.el6 epel 64 k + Transaction Summary + ========================================================================== + Install 1 Package(s) + Total download size: 64 k + Installed size: 151 k + Is this ok [y/N]: y + Downloading Packages: + boxes-1.1-8.el6.x86_64.rpm | 64 kB 00:00 + Running rpm_check_debug + Running Transaction Test + Transaction Test Succeeded + Running Transaction + Installing : boxes-1.1-8.el6.x86_64 1/1 + Installed: + boxes.x86_64 0:1.1-8.el6 + Complete! + +FreeBSD users can use the port as follows: + + cd /usr/ports/misc/boxes/ && make install clean + +Or, add the package using the pkg_add command: + + # pkg_add -r boxes + +### Draw any kind of box around some given text ### + +Type the following command: + + echo "This is a test" | boxes + +Or specify the name of the design to use: + + echo -e "\n\tVivek Gite\n\tvivek@nixcraft.com\n\twww.cyberciti.biz" | boxes -d dog + +Sample outputs: + +![Unix / Linux: Boxes Command To Draw Various Designs](http://s0.cyberciti.org/uploads/l/tips/2012/06/unix-linux-boxes-draw-dog-design.png) + +Fig.01: Unix / Linux: Boxes Command To Draw Various Designs + +#### How do I list all designs? #### + +The syntax is: + + boxes option + pipe | boxes options + echo "text" | boxes -d foo + boxes -l + +The -d design option sets the name of the design to use. The syntax is: + + echo "Text" | boxes -d design + pipe | boxes -d design + +The -l option lists designs.
It produces a listing of all available box designs in the config file, along with a sample box and information about its creator: + + boxes -l + boxes -l | more + boxes -l | less + +Sample outputs: + + 43 Available Styles in "/etc/boxes/boxes-config": + ------------------------------------------------- + ada-box (Neil Bird ): + --------------- + -- -- + -- -- + --------------- + ada-cmt (Neil Bird ): + -- + -- regular Ada + -- comments + -- + boy (Joan G. Stark ): + .-"""-. + / .===. \ + \/ 6 6 \/ + ( \___/ ) + _________ooo__\_____/______________ + / \ + | joan stark spunk1111@juno.com | + | VISIT MY ASCII ART GALLERY: | + | http://www.geocities.com/SoHo/7373/ | + \_______________________ooo_________/ jgs + | | | + |_ | _| + | | | + |__|__| + /-'Y'-\ + (__/ \__) + .... + ... + output truncated + .. + +### How do I filter text via boxes while using vi/vim text editor? ### + +You can use any external command with vi or vim. In this example, [insert current date and time][4], enter: + + !!date + +OR + + :r !date + +You need to type the above command in Vim to read the output from the date command. This will insert the date and time after the current line: + + Tue Jun 12 00:05:38 IST 2012 + +You can do the same with the boxes command. Create a sample shell script or a C program as follows: + + #!/bin/bash + Purpose: Backup mysql database to remote server. + Author: Vivek Gite + Last updated on: Tue Jun, 12 2012 + +Now type the following (move the cursor to the second line, i.e., the line which starts with "Purpose: ...") + + 3!!boxes + +And voila, you will get the output as follows: + + #!/bin/bash + /****************************************************/ + /* Purpose: Backup mysql database to remote server. */ + /* Author: Vivek Gite */ + /* Last updated on: Tue Jun, 12 2012 */ + /****************************************************/ + +This video will give you an introduction to boxes command: + +注:youtube 视频 + + +(Video:01: boxes command in action.
BTW, this is my first video so go easy on me and let me know what you think.) + +See also + +- boxes man page + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/tips/unix-linux-draw-any-kind-of-boxes-around-text-editor.html + +作者:Vivek Gite +译者:[zky001](https://github.com/zky001) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html +[2]:http://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ +[3]:http://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/ +[4]:http://www.cyberciti.biz/faq/vim-inserting-current-date-time-under-linux-unix-osx/ diff --git a/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md b/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md new file mode 100644 index 0000000000..7388b7693e --- /dev/null +++ b/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md @@ -0,0 +1,501 @@ +translating by ezio + + + +Securi-Pi: Using the Raspberry Pi as a Secure Landing Point +================================================================================ + +Like many LJ readers these days, I've been leading a bit of a techno-nomadic lifestyle as of the past few years—jumping from network to network, access point to access point, as I bounce around the real world while maintaining my connection to the Internet and other networks I use on a daily basis. As of late, I've found that more and more networks are starting to block outbound ports like SMTP (port 25), SSH (port 22) and others. It becomes really frustrating when you drop into a local coffee house expecting to be able to fire up your SSH client and get a few things done, and you can't, because the network's blocking you. 
+ +However, I have yet to run across a network that blocks HTTPS outbound (port 443). After a bit of fiddling with a Raspberry Pi 2 I have at home, I was able to get a nice clean solution that lets me hit various services on the Raspberry Pi via port 443—allowing me to walk around blocked ports and hobbled networks so I can do the things I need to do. In a nutshell, I have set up this Raspberry Pi to act as an OpenVPN endpoint, SSH endpoint and Apache server—with all these services listening on port 443 so networks with restrictive policies aren't an issue. + +### Notes +This solution will work on most networks, but firewalls that do deep packet inspection on outbound traffic still can block traffic that's tunneled using this method. However, I haven't been on a network that does that...yet. Also, while I use a lot of cryptography-based solutions here (OpenVPN, HTTPS, SSH), I haven't done a strict security audit of this setup. DNS may leak information, for example, and there may be other things I haven't thought of. I'm not recommending this as a way to hide all your traffic—I just use this so that I can connect to the Internet in an unfettered way when I'm out and about. + +### Getting Started +Let's start off with what you need to put this solution together. I'm using this on a Raspberry Pi 2 at home, running the latest Raspbian, but this should work just fine on a Raspberry Pi Model B, as well. It fits within the 512MB of RAM footprint quite easily, although performance may be a bit slower, because the Raspberry Pi Model B has a single-core CPU as opposed to the Pi 2's quad-core. My Raspberry Pi 2 is behind my home's router/firewall, so I get the added benefit of being able to access my machines at home. This also means that any traffic I send to the Internet appears to come from my home router's IP address, so this isn't a solution designed to protect anonymity. 
If you don't have a Raspberry Pi, or don't want this running out of your home, it's entirely possible to run this out of a small cloud server too. Just make sure that the server's running Debian or Ubuntu, as these instructions are targeted at Debian-based distributions. + +![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11913f1.jpg) + +Figure 1. The Raspberry Pi, about to become an encrypted network endpoint. + +### Installing and Configuring BIND +Once you have your platform up and running—whether it's a Raspberry Pi or otherwise—next you're going to install BIND, the nameserver that powers a lot of the Internet. You're going to install BIND as a caching nameserver only, and not have it service incoming requests from the Internet. Installing BIND will give you a DNS server to point your OpenVPN clients at, once you get to the OpenVPN step. Installing BIND is easy; it's just a simple `apt-get `command to install it: + +``` +root@test:~# apt-get install bind9 +Reading package lists... Done +Building dependency tree +Reading state information... Done +The following extra packages will be installed: + bind9utils +Suggested packages: + bind9-doc resolvconf ufw +The following NEW packages will be installed: + bind9 bind9utils +0 upgraded, 2 newly installed, 0 to remove and + ↪0 not upgraded. +Need to get 490 kB of archives. +After this operation, 1,128 kB of additional disk + ↪space will be used. +Do you want to continue [Y/n]? y +``` + +There are a couple minor configuration changes that need to be made to one of the config files of BIND before it can operate as a caching nameserver. Both changes are in `/etc/bind/named.conf.options`. First, you're going to uncomment the "forwarders" section of this file, and you're going to add a nameserver on the Internet to which to forward requests. In this case, I'm going to add Google's DNS (8.8.8.8). 
The "forwarders" section of the file should look like this: + +``` +forwarders { + 8.8.8.8; +}; +``` + +The second change you're going to make allows queries from your internal network and localhost. Simply add this line to the bottom of the configuration file, right before the `}`; that ends the file: + +``` +allow-query { 192.168.1.0/24; 127.0.0.0/16; }; +``` + +That line above allows this DNS server to be queried from the network it's on (in this case, my network behind my firewall) and localhost. Next, you just need to restart BIND: + +``` +root@test:~# /etc/init.d/bind9 restart +[....] Stopping domain name service...: bind9waiting + ↪for pid 13209 to die +. ok +[ ok ] Starting domain name service...: bind9. +``` + +Now you can test `nslookup` to make sure your server works: + +``` +root@test:~# nslookup +> server localhost +Default server: localhost +Address: 127.0.0.1#53 +> www.google.com +Server: localhost +Address: 127.0.0.1#53 + +Non-authoritative answer: +Name: www.google.com +Address: 173.194.33.176 +Name: www.google.com +Address: 173.194.33.177 +Name: www.google.com +Address: 173.194.33.178 +Name: www.google.com +Address: 173.194.33.179 +Name: www.google.com +Address: 173.194.33.180 +``` + +That's it! You've got a working nameserver on this machine. Next, let's move on to OpenVPN. + +### Installing and Configuring OpenVPN + +OpenVPN is an open-source VPN solution that relies on SSL/TLS for its key exchange. It's also easy to install and get working under Linux. Configuration of OpenVPN can be a bit daunting, but you're not going to deviate from the default configuration by much. To start, you're going to run an apt-get command and install OpenVPN: + +``` +root@test:~# apt-get install openvpn +Reading package lists... Done +Building dependency tree +Reading state information... 
Done +The following extra packages will be installed: + liblzo2-2 libpkcs11-helper1 +Suggested packages: + resolvconf +The following NEW packages will be installed: + liblzo2-2 libpkcs11-helper1 openvpn +0 upgraded, 3 newly installed, 0 to remove and + ↪0 not upgraded. +Need to get 621 kB of archives. +After this operation, 1,489 kB of additional disk + ↪space will be used. +Do you want to continue [Y/n]? y +``` + +Now that OpenVPN is installed, you're going to configure it. OpenVPN is SSL-based, and it relies on both server and client certificates to work. To generate these certificates, you need to configure a Certificate Authority (CA) on the machine. Luckily, OpenVPN ships with some wrapper scripts known as "easy-rsa" that help to bootstrap this process. You'll start by making a directory on the filesystem for the easy-rsa scripts to reside in and by copying the scripts from the template directory there: + +``` +root@test:~# mkdir /etc/openvpn/easy-rsa +root@test:~# cp -rpv + ↪/usr/share/doc/openvpn/examples/easy-rsa/2.0/* + ↪/etc/openvpn/easy-rsa/ +``` + +Next, copy the vars file to a backup copy: + +``` +root@test:/etc/openvpn/easy-rsa# cp vars vars.bak +``` + +Now, edit vars so it's got information pertinent to your installation. I'm going to specify only the lines that need to be edited, with sample data, below: + +``` +KEY_SIZE=4096 +KEY_COUNTRY="US" +KEY_PROVINCE="CA" +KEY_CITY="Silicon Valley" +KEY_ORG="Linux Journal" +KEY_EMAIL="bill.childers@linuxjournal.com" +``` + +The next step is to source the vars file, so that the environment variables in the file are in your current environment: + +``` +root@test:/etc/openvpn/easy-rsa# source ./vars +NOTE: If you run ./clean-all, I will be doing a + ↪rm -rf on /etc/openvpn/easy-rsa/keys +``` + +### Building the Certificate Authority + +You're now going to run clean-all to ensure a clean working environment, and then you're going to build the CA. 
Note that I'm changing changeme prompts to something that's appropriate for this installation: + +``` +root@test:/etc/openvpn/easy-rsa# ./clean-all +root@test:/etc/openvpn/easy-rsa# ./build-ca +Generating a 4096 bit RSA private key +...................................................++ +...................................................++ +writing new private key to 'ca.key' +----- +You are about to be asked to enter information that +will be incorporated into your certificate request. +What you are about to enter is what is called a +Distinguished Name or a DN. +There are quite a few fields but you can leave some +blank. For some fields there will be a default value, +If you enter '.', the field will be left blank. +----- +Country Name (2 letter code) [US]: +State or Province Name (full name) [CA]: +Locality Name (eg, city) [Silicon Valley]: +Organization Name (eg, company) [Linux Journal]: +Organizational Unit Name (eg, section) + ↪[changeme]:SecTeam +Common Name (eg, your name or your server's hostname) + ↪[changeme]:test.linuxjournal.com +Name [changeme]:test.linuxjournal.com +Email Address [bill.childers@linuxjournal.com]: +``` + +### Building the Server Certificate + +Once the CA is created, you need to build the OpenVPN server certificate: + +```root@test:/etc/openvpn/easy-rsa# + ↪./build-key-server test.linuxjournal.com +Generating a 4096 bit RSA private key +...................................................++ +writing new private key to 'test.linuxjournal.com.key' +----- +You are about to be asked to enter information that +will be incorporated into your certificate request. +What you are about to enter is what is called a +Distinguished Name or a DN. +There are quite a few fields but you can leave some +blank. For some fields there will be a default value, +If you enter '.', the field will be left blank. 
+----- +Country Name (2 letter code) [US]: +State or Province Name (full name) [CA]: +Locality Name (eg, city) [Silicon Valley]: +Organization Name (eg, company) [Linux Journal]: +Organizational Unit Name (eg, section) + ↪[changeme]:SecTeam +Common Name (eg, your name or your server's hostname) + ↪[test.linuxjournal.com]: +Name [changeme]:test.linuxjournal.com +Email Address [bill.childers@linuxjournal.com]: + +Please enter the following 'extra' attributes +to be sent with your certificate request +A challenge password []: +An optional company name []: +Using configuration from + ↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf +Check that the request matches the signature +Signature ok +The Subject's Distinguished Name is as follows +countryName :PRINTABLE:'US' +stateOrProvinceName :PRINTABLE:'CA' +localityName :PRINTABLE:'Silicon Valley' +organizationName :PRINTABLE:'Linux Journal' +organizationalUnitName:PRINTABLE:'SecTeam' +commonName :PRINTABLE:'test.linuxjournal.com' +name :PRINTABLE:'test.linuxjournal.com' +emailAddress + ↪:IA5STRING:'bill.childers@linuxjournal.com' +Certificate is to be certified until Sep 1 + ↪06:23:59 2025 GMT (3650 days) +Sign the certificate? [y/n]:y + +1 out of 1 certificate requests certified, commit? [y/n]y +Write out database with 1 new entries +Data Base Updated +``` + +The next step may take a while—building the Diffie-Hellman key for the OpenVPN server. This takes several minutes on a conventional desktop-grade CPU, but on the ARM processor of the Raspberry Pi, it can take much, much longer. 
Have patience, as long as the dots in the terminal are proceeding, the system is building its Diffie-Hellman key (note that many dots are snipped in these examples): + +``` +root@test:/etc/openvpn/easy-rsa# ./build-dh +Generating DH parameters, 4096 bit long safe prime, + ↪generator 2 +This is going to take a long time +....................................................+ + +``` + +### Building the Client Certificate + +Now you're going to generate a client key for your client to use when logging in to the OpenVPN server. OpenVPN is typically configured for certificate-based auth, where the client presents a certificate that was issued by an approved Certificate Authority: + +``` +root@test:/etc/openvpn/easy-rsa# ./build-key + ↪bills-computer +Generating a 4096 bit RSA private key +...................................................++ +...................................................++ +writing new private key to 'bills-computer.key' +----- +You are about to be asked to enter information that +will be incorporated into your certificate request. +What you are about to enter is what is called a +Distinguished Name or a DN. There are quite a few +fields but you can leave some blank. +For some fields there will be a default value, +If you enter '.', the field will be left blank. 
+----- +Country Name (2 letter code) [US]: +State or Province Name (full name) [CA]: +Locality Name (eg, city) [Silicon Valley]: +Organization Name (eg, company) [Linux Journal]: +Organizational Unit Name (eg, section) + ↪[changeme]:SecTeam +Common Name (eg, your name or your server's hostname) + ↪[bills-computer]: +Name [changeme]:bills-computer +Email Address [bill.childers@linuxjournal.com]: + +Please enter the following 'extra' attributes +to be sent with your certificate request +A challenge password []: +An optional company name []: +Using configuration from + ↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf +Check that the request matches the signature +Signature ok +The Subject's Distinguished Name is as follows +countryName :PRINTABLE:'US' +stateOrProvinceName :PRINTABLE:'CA' +localityName :PRINTABLE:'Silicon Valley' +organizationName :PRINTABLE:'Linux Journal' +organizationalUnitName:PRINTABLE:'SecTeam' +commonName :PRINTABLE:'bills-computer' +name :PRINTABLE:'bills-computer' +emailAddress + ↪:IA5STRING:'bill.childers@linuxjournal.com' +Certificate is to be certified until + ↪Sep 1 07:35:07 2025 GMT (3650 days) +Sign the certificate? [y/n]:y + +1 out of 1 certificate requests certified, + ↪commit? [y/n]y +Write out database with 1 new entries +Data Base Updated +root@test:/etc/openvpn/easy-rsa# +``` + +Now you're going to generate an HMAC code as a shared key to increase the security of the system further: + +``` +root@test:~# openvpn --genkey --secret + ↪/etc/openvpn/easy-rsa/keys/ta.key +``` + +### Configuration of the Server + +Finally, you're going to get to the meat of configuring the OpenVPN server. You're going to create a new file, /etc/openvpn/server.conf, and you're going to stick to a default configuration for the most part. The main change you're going to do is to set up OpenVPN to use TCP rather than UDP. This is needed for the next major step to work—without OpenVPN using TCP for its network communication, you can't get things working on port 443. 
So, create a new file called /etc/openvpn/server.conf, and put the following configuration in it: + +``` +port 1194 +proto tcp +dev tun +ca easy-rsa/keys/ca.crt +cert easy-rsa/keys/test.linuxjournal.com.crt ## or whatever + ↪your hostname was +key easy-rsa/keys/test.linuxjournal.com.key ## Hostname key + ↪- This file should be kept secret +management localhost 7505 +dh easy-rsa/keys/dh4096.pem +tls-auth /etc/openvpn/certs/ta.key 0 +server 10.8.0.0 255.255.255.0 # The server will use this + ↪subnet for clients connecting to it +ifconfig-pool-persist ipp.txt +push "redirect-gateway def1 bypass-dhcp" # Forces clients + ↪to redirect all traffic through the VPN +push "dhcp-option DNS 192.168.1.1" # Tells the client to + ↪use the DNS server at 192.168.1.1 for DNS - + ↪replace with the IP address of the OpenVPN + ↪machine and clients will use the BIND + ↪server setup earlier +keepalive 30 240 +comp-lzo # Enable compression +persist-key +persist-tun +status openvpn-status.log +verb 3 +``` + +And last, you're going to enable IP forwarding on the server, configure OpenVPN to start on boot and start the OpenVPN service: + +``` +root@test:/etc/openvpn/easy-rsa/keys# echo + ↪"net.ipv4.ip_forward = 1" >> /etc/sysctl.conf +root@test:/etc/openvpn/easy-rsa/keys# sysctl -p + ↪/etc/sysctl.conf +net.core.wmem_max = 12582912 +net.core.rmem_max = 12582912 +net.ipv4.tcp_rmem = 10240 87380 12582912 +net.ipv4.tcp_wmem = 10240 87380 12582912 +net.core.wmem_max = 12582912 +net.core.rmem_max = 12582912 +net.ipv4.tcp_rmem = 10240 87380 12582912 +net.ipv4.tcp_wmem = 10240 87380 12582912 +net.core.wmem_max = 12582912 +net.core.rmem_max = 12582912 +net.ipv4.tcp_rmem = 10240 87380 12582912 +net.ipv4.tcp_wmem = 10240 87380 12582912 +net.ipv4.ip_forward = 0 +net.ipv4.ip_forward = 1 + +root@test:/etc/openvpn/easy-rsa/keys# update-rc.d + ↪openvpn defaults +update-rc.d: using dependency based boot sequencing + +root@test:/etc/openvpn/easy-rsa/keys# + ↪/etc/init.d/openvpn start +[
ok ] Starting virtual private network daemon:. +``` + +### Setting Up OpenVPN Clients + +Your client installation depends on the host OS of your client, but you'll need to copy your client certs and keys created above to your client, and you'll need to import those certificates and create a configuration for that client. Each client and client OS does it slightly differently and documenting each one is beyond the scope of this article, so you'll need to refer to the documentation for that client to get it running. Refer to the Resources section for OpenVPN clients for each major OS. + +### Installing SSLH—the "Magic" Protocol Multiplexer + +The really interesting piece of this solution is SSLH. SSLH is a protocol multiplexer—it listens on port 443 for traffic, and then it can analyze whether the incoming packet is an SSH packet, HTTPS or OpenVPN, and it can forward that packet onto the proper service. This is what enables this solution to bypass most port blocks—you use the HTTPS port for all of this traffic, since HTTPS is rarely blocked. + +To start, `apt-get` install SSLH: + +``` +root@test:/etc/openvpn/easy-rsa/keys# apt-get + ↪install sslh +Reading package lists... Done +Building dependency tree +Reading state information... Done +The following extra packages will be installed: + apache2 apache2-mpm-worker apache2-utils + ↪apache2.2-bin apache2.2-common + libapr1 libaprutil1 libaprutil1-dbd-sqlite3 + ↪libaprutil1-ldap libconfig9 +Suggested packages: + apache2-doc apache2-suexec apache2-suexec-custom + ↪openbsd-inetd inet-superserver +The following NEW packages will be installed: + apache2 apache2-mpm-worker apache2-utils + ↪apache2.2-bin apache2.2-common + libapr1 libaprutil1 libaprutil1-dbd-sqlite3 + ↪libaprutil1-ldap libconfig9 sslh +0 upgraded, 11 newly installed, 0 to remove + ↪and 0 not upgraded. +Need to get 1,568 kB of archives. +After this operation, 5,822 kB of additional + ↪disk space will be used. +Do you want to continue [Y/n]? 
y +``` + +After SSLH is installed, the package installer will ask you if you want to run it in inetd or standalone mode. Select standalone mode, because you want SSLH to run as its own process. If you don't have Apache installed, the Debian/Raspbian package of SSLH will pull it in automatically, although it's not strictly required. If you already have Apache running and configured, you'll want to make sure it only listens on localhost's interface and not all interfaces (otherwise, SSLH can't start because it can't bind to port 443). After installation, you'll receive an error that looks like this: + +``` +[....] Starting ssl/ssh multiplexer: sslhsslh disabled, + ↪please adjust the configuration to your needs +[FAIL] and then set RUN to 'yes' in /etc/default/sslh + ↪to enable it. ... failed! +failed! +``` + +This isn't an error, exactly—it's just SSLH telling you that it's not configured and can't start. Configuring SSLH is pretty simple. Its configuration is stored in `/etc/default/sslh`, and you just need to configure the `RUN` and `DAEMON_OPTS` variables. My SSLH configuration looks like this: + +``` +# Default options for sslh initscript +# sourced by /etc/init.d/sslh + +# Disabled by default, to force yourself +# to read the configuration: +# - /usr/share/doc/sslh/README.Debian (quick start) +# - /usr/share/doc/sslh/README, at "Configuration" section +# - sslh(8) via "man sslh" for more configuration details. +# Once configuration ready, you *must* set RUN to yes here +# and try to start sslh (standalone mode only) + +RUN=yes + +# binary to use: forked (sslh) or single-thread + ↪(sslh-select) version +DAEMON=/usr/sbin/sslh + +DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh + ↪127.0.0.1:22 --ssl 127.0.0.1:443 --openvpn + ↪127.0.0.1:1194 --pidfile /var/run/sslh/sslh.pid" + ``` + + Save the file and start SSLH: + +``` + root@test:/etc/openvpn/easy-rsa/keys# + ↪/etc/init.d/sslh start +[ ok ] Starting ssl/ssh multiplexer: sslh. 
+``` + +Now, you should be able to ssh to port 443 on your Raspberry Pi, and have it forward via SSLH: + +``` +$ ssh -p 443 root@test.linuxjournal.com +root@test:~# +``` + +SSLH is now listening on port 443 and can direct traffic to SSH, Apache or OpenVPN based on the type of packet that hits it. You should be ready to go! + +### Conclusion + +Now you can fire up OpenVPN and set your OpenVPN client configuration to port 443, and SSLH will route it to the OpenVPN server on port 1194. But because you're talking to your server on port 443, your VPN traffic won't get blocked. Now you can land at a strange coffee shop, in a strange town, and know that your Internet will just work when you fire up your OpenVPN and point it at your Raspberry Pi. You'll also gain some encryption on your link, which will improve the privacy of your connection. Enjoy surfing the Net via your new landing point! + +### Resources + +Installing and Configuring OpenVPN: [https://wiki.debian.org/OpenVPN](https://wiki.debian.org/OpenVPN) and [http://cryptotap.com/articles/openvpn](http://cryptotap.com/articles/openvpn) + +OpenVPN client downloads: [https://openvpn.net/index.php/open-source/downloads.html](https://openvpn.net/index.php/open-source/downloads.html) + +OpenVPN Client for iOS: [https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8](https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8) + +OpenVPN Client for Android: [https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en](https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en) + +Tunnelblick for Mac OS X (OpenVPN client): [https://tunnelblick.net](https://tunnelblick.net) + +SSLH—Protocol Multiplexer: [http://www.rutschle.net/tech/sslh.shtml](http://www.rutschle.net/tech/sslh.shtml) and [https://github.com/yrutschle/sslh](https://github.com/yrutschle/sslh) + + +---------- +via: http://www.linuxjournal.com/content/securi-pi-using-raspberry-pi-secure-landing-point?page=0,0 + +作者:[Bill 
Childers][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxjournal.com/users/bill-childers + + diff --git a/sources/tech/20151220 GCC-Inline-Assembly-HOWTO.md b/sources/tech/20151220 GCC-Inline-Assembly-HOWTO.md new file mode 100644 index 0000000000..80031c7fd8 --- /dev/null +++ b/sources/tech/20151220 GCC-Inline-Assembly-HOWTO.md @@ -0,0 +1,631 @@ +[Translating by cposture 16-01-14] +* * * + +# GCC-Inline-Assembly-HOWTO +v0.1, 01 March 2003. +* * * + +_This HOWTO explains the use of the inline assembly feature provided by GCC. There are only two prerequisites for reading this article, and those are obviously a basic knowledge of x86 assembly language and C._ + +* * * + +## 1. Introduction. + +## 1.1 Copyright and License. + +Copyright (C)2003 Sandeep S. + +This document is free; you can redistribute and/or modify this under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. + +This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. + +## 1.2 Feedback and Corrections. + +Kindly forward feedback and criticism to [Sandeep.S](mailto:busybox@sancharnet.in). I will be indebted to anybody who points out errors and inaccuracies in this document; I shall rectify them as soon as I am informed. + +## 1.3 Acknowledgments. + +I express my sincere appreciation to the GNU people for providing such a great feature. Thanks to Mr. Pramode C E for all the help he provided. Thanks to friends at the Govt Engineering College, Trichur for their moral support and cooperation, especially to Nisha Kurur and Sakeeb S. 
Thanks to my dear teachers at Govt Engineering College, Trichur for their cooperation. + +Additionally, thanks to Phillip, Brennan Underwood and colin@nyx.net; many things here are shamelessly stolen from their works. + +* * * + +## 2. Overview of the whole thing. + +We are here to learn about GCC inline assembly. What does this "inline" stand for? + +We can instruct the compiler to insert the code of a function into the code of its callers, at the point where the call would actually be made. Such functions are inline functions. Sounds similar to a macro? Indeed there are similarities. + +What is the benefit of inline functions? + +This method of inlining reduces the function-call overhead. And if any of the actual argument values are constant, their known values may permit simplifications at compile time so that not all of the inline function’s code needs to be included. The effect on code size is less predictable; it depends on the particular case. To declare an inline function, we have to use the keyword `inline` in its declaration. + +Now we are in a position to guess what inline assembly is. It’s just some assembly routines written as inline functions. They are handy, speedy and very useful in system programming. Our main focus is to study the basic format and usage of (GCC) inline assembly functions. To declare inline assembly functions, we use the keyword `asm`. + +Inline assembly is important primarily because of its ability to operate on C variables and make its output visible in them. Because of this capability, "asm" works as an interface between the assembly instructions and the "C" program that contains it. + +* * * + +## 3. GCC Assembler Syntax. + +GCC, the GNU C Compiler for Linux, uses **AT&T**/**UNIX** assembly syntax. Here we’ll be using AT&T syntax for assembly coding. Don’t worry if you are not familiar with AT&T syntax; I will teach you. It is quite different from Intel syntax, and I shall give the major differences. + +1. Source-Destination Ordering. 
+ + The direction of the operands in AT&T syntax is opposite to that of Intel. In Intel syntax the first operand is the destination, and the second operand is the source, whereas in AT&T syntax the first operand is the source and the second operand is the destination. ie, + + "Op-code dst src" in Intel syntax changes to + + "Op-code src dst" in AT&T syntax. + +2. Register Naming. + + Register names are prefixed by % ie, if eax is to be used, write %eax. + +3. Immediate Operand. + + AT&T immediate operands are preceded by ’$’. For static "C" variables, also prefix a ’$’. In Intel syntax, for hexadecimal constants an ’h’ is suffixed; instead of that, here we prefix ’0x’ to the constant. So, for hexadecimals, we first see a ’$’, then ’0x’ and finally the constant. + +4. Operand Size. + + In AT&T syntax the size of memory operands is determined from the last character of the op-code name. Op-code suffixes of ’b’, ’w’, and ’l’ specify byte (8-bit), word (16-bit), and long (32-bit) memory references. Intel syntax accomplishes this by prefixing memory operands (not the op-codes) with ’byte ptr’, ’word ptr’, and ’dword ptr’. + + Thus, Intel "mov al, byte ptr foo" is "movb foo, %al" in AT&T syntax. + +5. Memory Operands. + + In Intel syntax the base register is enclosed in ’[’ and ’]’, whereas in AT&T they change to ’(’ and ’)’. Additionally, in Intel syntax an indirect memory reference is like + + section:[base + index*scale + disp], which changes to + + section:disp(base, index, scale) in AT&T. + + One point to bear in mind is that, when a constant is used for disp/scale, ’$’ shouldn’t be prefixed. + +Now we have seen some of the major differences between Intel syntax and AT&T syntax. I’ve written only a few of them. For complete information, refer to the GNU Assembler documentation. Now we’ll look at some examples for better understanding. + +> ` +> +>
+------------------------------+------------------------------------+
+> |       Intel Code             |      AT&T Code                     |
+> +------------------------------+------------------------------------+
+> | mov     eax,1                |  movl    $1,%eax                   |   
+> | mov     ebx,0ffh             |  movl    $0xff,%ebx                |   
+> | int     80h                  |  int     $0x80                     |   
+> | mov     ebx, eax             |  movl    %eax, %ebx                |
+> | mov     eax,[ecx]            |  movl    (%ecx),%eax               |
+> | mov     eax,[ebx+3]          |  movl    3(%ebx),%eax              | 
+> | mov     eax,[ebx+20h]        |  movl    0x20(%ebx),%eax           |
+> | add     eax,[ebx+ecx*2h]     |  addl    (%ebx,%ecx,0x2),%eax      |
+> | lea     eax,[ebx+ecx]        |  leal    (%ebx,%ecx),%eax          |
+> | sub     eax,[ebx+ecx*4h-20h] |  subl    -0x20(%ebx,%ecx,0x4),%eax |
+> +------------------------------+------------------------------------+
+> 
+> 
+> +> ` + +* * * + +## 4. Basic Inline. + +The format of basic inline assembly is very straightforward. Its basic form is + +`asm("assembly code");` + +Example. + +> ` +> +> * * * +> +>
asm("movl %ecx, %eax"); /* moves the contents of ecx to eax */
+> __asm__("movb %bh, (%eax)"); /* moves the byte from bh to the memory pointed by eax */
+> 
+> 
+> +> * * * +> +> ` + +You might have noticed that here I’ve used `asm` and `__asm__`. Both are valid. We can use `__asm__` if the keyword `asm` conflicts with something in our program. If we have more than one instruction, we write one per line in double quotes, and also suffix a ’\n’ and ’\t’ to each instruction. This is because gcc sends each instruction as a string to **as** (GAS), and by using the newline/tab we send correctly formatted lines to the assembler. + +Example. + +> ` +> +> * * * +> +>
 __asm__ ("movl %eax, %ebx\n\t"
+>           "movl $56, %esi\n\t"
+>           "movl %ecx, label(%edx,%ebx,4)\n\t"
+>           "movb %ah, (%ebx)");
+> 
+> 
+> +> * * * +> +> ` + +If in our code we touch (ie, change the contents of) some registers and return from asm without fixing those changes, something bad is going to happen. This is because GCC has no idea about the changes in the register contents, and this leads us into trouble, especially when the compiler makes some optimizations. It will suppose that some register contains the value of some variable that we might have changed without informing GCC, and it continues like nothing happened. What we can do is either use only those instructions having no side effects, or fix things up when we quit, or wait for something to crash. This is where we want some extended functionality. Extended asm provides us with that functionality. + +* * * + +## 5. Extended Asm. + +In basic inline assembly, we had only instructions. In extended assembly, we can also specify the operands. It allows us to specify the input registers, output registers and a list of clobbered registers. It is not mandatory to specify the registers to use; we can leave that headache to GCC, and that probably fits into GCC’s optimization scheme better. Anyway, the basic format is: + +> ` +> +> * * * +> +>
       asm ( assembler template 
+>            : output operands                  /* optional */
+>            : input operands                   /* optional */
+>            : list of clobbered registers      /* optional */
+>            );
+> 
+> +> * * * +> +> ` + +The assembler template consists of assembly instructions. Each operand is described by an operand-constraint string followed by the C expression in parentheses. A colon separates the assembler template from the first output operand and another separates the last output operand from the first input, if any. Commas separate the operands within each group. The total number of operands is limited to ten or to the maximum number of operands in any instruction pattern in the machine description, whichever is greater. + +If there are no output operands but there are input operands, you must place two consecutive colons surrounding the place where the output operands would go. + +Example: + +> ` +> +> * * * +> +>
        asm ("cld\n\t"
+>              "rep\n\t"
+>              "stosl"
+>              : /* no output registers */
+>              : "c" (count), "a" (fill_value), "D" (dest)
+>              : "%ecx", "%edi" 
+>              );
+> 
+> 
+> +> * * * +> +> ` + +Now, what does this code do? The above inline fills the `fill_value` `count` times to the location pointed to by the register `edi`. It also says to gcc that the contents of registers `ecx` and `edi` are no longer valid. Let us see one more example to make things clearer. + +> ` +> +> * * * +> +>
        
+>         int a=10, b;
+>         asm ("movl %1, %%eax; 
+>               movl %%eax, %0;"
+>              :"=r"(b)        /* output */
+>              :"r"(a)         /* input */
+>              :"%eax"         /* clobbered register */
+>              );       
+> 
+> 
+> +> * * * +> +> ` + +Here, we made the value of ’b’ equal to that of ’a’ using assembly instructions. Some points of interest are: + +* "b" is the output operand, referred to by %0, and "a" is the input operand, referred to by %1. +* "r" is a constraint on the operands. We’ll see constraints in detail later. For the time being, "r" tells GCC to use any register for storing the operands. An output operand constraint should have a constraint modifier "=", and this modifier says that it is the output operand and is write-only. +* There are two %’s prefixed to the register name. This helps GCC to distinguish between the operands and registers. Operands have a single % as prefix. +* The clobbered register %eax after the third colon tells GCC that the value of %eax is to be modified inside "asm", so GCC won’t use this register to store any other value. + +When the execution of "asm" is complete, "b" will reflect the updated value, as it is specified as an output operand. In other words, the change made to "b" inside "asm" is supposed to be reflected outside the "asm". + +Now we may look at each field in detail. + +## 5.1 Assembler Template. + +The assembler template contains the set of assembly instructions that gets inserted inside the C program. The format is: either each instruction should be enclosed within double quotes, or the entire group of instructions should be within double quotes. Each instruction should also end with a delimiter. The valid delimiters are newline (\n) and semicolon (;). ’\n’ may be followed by a tab (\t). We know the reason for the newline/tab, right? Operands corresponding to the C expressions are represented by %0, %1 ... etc. + +## 5.2 Operands. + +C expressions serve as operands for the assembly instructions inside "asm". Each operand is written as first an operand constraint in double quotes. 
For output operands, there’ll be a constraint modifier also within the quotes, and then follows the C expression which stands for the operand. ie, + +"constraint" (C expression) is the general form. For output operands an additional modifier will be there. Constraints are primarily used to decide the addressing modes for operands. They are also used in specifying the registers to be used. + +If we use more than one operand, they are separated by commas. + +In the assembler template, each operand is referenced by numbers. Numbering is done as follows. If there are a total of n operands (both input and output inclusive), then the first output operand is numbered 0, continuing in increasing order, and the last input operand is numbered n-1. The maximum number of operands is as we saw in the previous section. + +Output operand expressions must be lvalues. The input operands are not restricted like this. They may be expressions. The extended asm feature is most often used for machine instructions the compiler itself does not know as existing ;-). If the output expression cannot be directly addressed (for example, it is a bit-field), our constraint must allow a register. In that case, GCC will use the register as the output of the asm, and then store that register's contents into the output. + +As stated above, ordinary output operands must be write-only; GCC will assume that the values in these operands before the instruction are dead and need not be generated. Extended asm also supports input-output or read-write operands. + +So now we concentrate on some examples. We want to multiply a number by 5. For that we use the instruction `lea`. + +> ` +> +> * * * +> +>
        asm ("leal (%1,%1,4), %0"
+>              : "=r" (five_times_x)
+>              : "r" (x) 
+>              );
+> 
+> +> * * * +> +> ` + +Here our input is in ’x’. We didn’t specify the register to be used. GCC will choose some register for the input, one for the output, and do what we want. If we want the input and output to reside in the same register, we can instruct GCC to do so. This is where read-write operands come in; by specifying the proper constraints, we get them. + +> ` +> +> * * * +> +>
        asm ("leal (%0,%0,4), %0"
+>              : "=r" (five_times_x)
+>              : "0" (x) 
+>              );
+> 
+> +> * * * +> +> ` + +Now the input and output operands are in the same register, but we don’t know which one. If we want to specify that as well, there is a way. + +> ` +> +> * * * +> +>
        asm ("leal (%%ecx,%%ecx,4), %%ecx"
+>              : "=c" (x)
+>              : "c" (x) 
+>              );
+> 
+> +> * * * +> +> ` + +In all three examples above, we didn’t put any registers in the clobber list. Why? In the first two examples, GCC decides the registers and it knows what changes happen. In the last one, we don’t have to put `ecx` in the clobber list; gcc knows it goes into x. Therefore, since it can know the value of `ecx`, it isn’t considered clobbered. + +## 5.3 Clobber List. + +Some instructions clobber some hardware registers. We have to list those registers in the clobber-list, ie the field after the third ’**:**’ in the asm function. This is to inform gcc that we will use and modify them ourselves, so gcc will not assume that the values it loads into these registers will be valid. We shouldn’t list the input and output registers in this list, because gcc knows that "asm" uses them (they are specified explicitly as constraints). If the instructions use any other registers, implicitly or explicitly (and the registers are not present either in the input or in the output constraint list), then those registers have to be specified in the clobbered list. + +If our instruction can alter the condition code register, we have to add "cc" to the list of clobbered registers. + +If our instruction modifies memory in an unpredictable fashion, add "memory" to the list of clobbered registers. This will cause GCC to not keep memory values cached in registers across the assembler instruction. We also have to add the **volatile** keyword if the memory affected is not listed in the inputs or outputs of the asm. + +We can read and write the clobbered registers as many times as we like. Consider the example of multiple instructions in a template; it assumes the subroutine _foo accepts arguments in registers `eax` and `ecx`. + +> ` +> +> * * * +> +>
        asm ("movl %0,%%eax;\n\t"
+>               "movl %1,%%ecx;\n\t"
+>               "call _foo"
+>              : /* no outputs */
+>              : "g" (from), "g" (to)
+>              : "eax", "ecx"
+>              );
+> 
+> +> * * * +> +> ` + +## 5.4 Volatile ...? + +If you are familiar with kernel sources or some beautiful code like that, you must have seen many functions declared as `volatile` or `__volatile__` which follow an `asm` or `__asm__`. I mentioned the keywords `asm` and `__asm__` earlier. So what is this `volatile`? + +If our assembly statement must execute where we put it (i.e., must not be moved out of a loop as an optimization), put the keyword `volatile` after asm and before the ()’s. So to keep it from being moved, deleted and so on, we declare it as + +`asm volatile ( ... : ... : ... : ...);` + +Use `__volatile__` when we have to be very careful. + +If our assembly is just for doing some calculations and doesn’t have any side effects, it’s better not to use the keyword `volatile`. Avoiding it helps gcc in optimizing the code and making it more beautiful. + +In the section `Some Useful Recipes`, I have provided many examples of inline asm functions. There we can see the clobber-list in detail. + +* * * + +## 6. More about constraints. + +By this time, you might have understood that constraints have a lot to do with inline assembly. But we’ve said little about them so far. Constraints can say whether an operand may be in a register, and which kinds of register; whether the operand can be a memory reference, and which kinds of address; whether the operand may be an immediate constant, and which possible values (ie range of values) it may have, and so on. + +## 6.1 Commonly used constraints. + +There are a number of constraints, of which only a few are used frequently. We’ll have a look at those constraints. + +1. **Register operand constraint(r)** + + When operands are specified using this constraint, they get stored in General Purpose Registers (GPRs). 
Take the following example: + + `asm ("movl %%eax, %0\n" :"=r"(myval));` + + Here the variable myval is kept in a register, the value in register `eax` is copied into that register, and the value of `myval` is updated in memory from this register. When the "r" constraint is specified, gcc may keep the variable in any of the available GPRs. To specify the register, you must directly specify the register names by using specific register constraints. They are: + + > ` + > + >
+---+--------------------+
+    > | r |    Register(s)     |
+    > +---+--------------------+
+    > | a |   %eax, %ax, %al   |
+    > | b |   %ebx, %bx, %bl   |
+    > | c |   %ecx, %cx, %cl   |
+    > | d |   %edx, %dx, %dl   |
+    > | S |   %esi, %si        |
+    > | D |   %edi, %di        |
+    > +---+--------------------+
+    > 
+ > + > ` + +2. **Memory operand constraint(m)** + + When the operands are in memory, any operations performed on them will occur directly in the memory location, as opposed to register constraints, which first store the value in a register to be modified and then write it back to the memory location. But register constraints are usually used only when they are absolutely necessary for an instruction or they significantly speed up the process. Memory constraints can be used most efficiently in cases where a C variable needs to be updated inside "asm" and you really don’t want to use a register to hold its value. For example, the value of idtr is stored in the memory location loc: + + `asm("sidt %0\n" : :"m"(loc));` + +3. **Matching(Digit) constraints** + + In some cases, a single variable may serve as both the input and the output operand. Such cases may be specified in "asm" by using matching constraints. + + `asm ("incl %0" :"=a"(var):"0"(var));` + + We saw similar examples in the operands subsection also. In this example for matching constraints, the register %eax is used as both the input and the output variable. The input var is read into %eax, and after the increment the updated %eax is stored back into var. "0" here specifies the same constraint as the 0th output variable. That is, it specifies that the output instance of var should be stored in %eax only. This constraint can be used: + + * In cases where input is read from a variable, or the variable is modified and the modification is written back to the same variable. + * In cases where separate instances of input and output operands are not necessary. + + The most important effect of using matching constraints is that they lead to the efficient use of available registers. + +Some other constraints used are: + +1. "m" : A memory operand is allowed, with any kind of address that the machine supports in general. +2. "o" : A memory operand is allowed, but only if the address is offsettable. 
ie, adding a small offset to the address gives a valid address. +3. "V" : A memory operand that is not offsettable. In other words, anything that would fit the `m’ constraint but not the `o’ constraint. +4. "i" : An immediate integer operand (one with constant value) is allowed. This includes symbolic constants whose values will be known only at assembly time. +5. "n" : An immediate integer operand with a known numeric value is allowed. Many systems cannot support assembly-time constants for operands less than a word wide. Constraints for these operands should use ’n’ rather than ’i’. +6. "g" : Any register, memory or immediate integer operand is allowed, except for registers that are not general registers. + +The following constraints are x86-specific. + +1. "r" : Register operand constraint; see the table given above. +2. "q" : Registers a, b, c or d. +3. "I" : Constant in range 0 to 31 (for 32-bit shifts). +4. "J" : Constant in range 0 to 63 (for 64-bit shifts). +5. "K" : 0xff. +6. "L" : 0xffff. +7. "M" : 0, 1, 2, or 3 (shifts for the lea instruction). +8. "N" : Constant in range 0 to 255 (for the out instruction). +9. "f" : Floating-point register. +10. "t" : First (top of stack) floating-point register. +11. "u" : Second floating-point register. +12. "A" : Specifies the `a’ or `d’ registers. This is primarily useful for 64-bit integer values intended to be returned with the `d’ register holding the most significant bits and the `a’ register holding the least significant bits. + +## 6.2 Constraint Modifiers. + +For more precise control over the effects of constraints, GCC provides us with constraint modifiers. The most commonly used constraint modifiers are: + +1. "=" : Means that this operand is write-only for this instruction; the previous value is discarded and replaced by output data. +2. "&" : Means that this operand is an earlyclobber operand, which is modified before the instruction is finished using the input operands. 
Therefore, this operand may not lie in a register that is used as an input operand or as part of any memory address. An input operand can be tied to an earlyclobber operand if its only use as an input occurs before the early result is written. + + This list and explanation of constraints is by no means complete. Examples give a better understanding of the use of inline asm. In the next section we’ll see some examples, where we’ll find more about clobber lists and constraints. + +* * * + +## 7. Some Useful Recipes. + +Now that we have covered the basic theory of GCC inline assembly, we shall concentrate on some simple examples. It is always handy to write inline asm functions as macros. We can see many asm functions in the kernel code (/usr/src/linux/include/asm/*.h). + +1. First we start with a simple example. We’ll write a program to add two numbers. + + > ` + > + > * * * + > + >
#include <stdio.h>
+    > 
+    > int main(void)
+    > {
+    >         int foo = 10, bar = 15;
+    >         __asm__ __volatile__("addl  %%ebx,%%eax"
+    >                              :"=a"(foo)
+    >                              :"a"(foo), "b"(bar)
+    >                              );
+    >         printf("foo+bar=%d\n", foo);
+    >         return 0;
+    > }
+    > 
+ > + > * * * + > + > ` + + Here we direct GCC to store foo in %eax and bar in %ebx, and we also want the result in %eax. The ’=’ sign shows that it is an output register. Now we can add an integer to a variable in another way. + + > ` + > + > * * * + > + >
 __asm__ __volatile__(
+    >                       "   lock       ;\n"
+    >                       "   addl %1,%0 ;\n"
+    >                       : "=m"  (my_var)
+    >                       : "ir"  (my_int), "m" (my_var)
+    >                       :                                 /* no clobber-list */
+    >                       );
+    > 
+ > + > * * * + > + > ` + + This is an atomic addition. We can remove the instruction ’lock’ to remove the atomicity. In the output field, "=m" says that my_var is an output and it is in memory. Similarly, "ir" says that my_int may be an immediate integer or reside in a register (recall the constraints we saw above). No registers are in the clobber list. + +2. Now we’ll perform some action on some registers/variables and compare the value. + + > ` + > + > * * * + > + >
 __asm__ __volatile__(  "decl %0; sete %1"
+    >                       : "=m" (my_var), "=q" (cond)
+    >                       : "m" (my_var) 
+    >                       : "memory"
+    >                       );
+    > 
+ > + > * * * + > + > ` + + Here, the value of my_var is decremented by one, and if the resulting value is `0`, the variable cond is set. We can add atomicity by adding the instruction "lock;\n\t" as the first instruction in the assembler template. + + In a similar way we can use "incl %0" instead of "decl %0", so as to increment my_var. + + Points to note here are that (i) my_var is a variable residing in memory, (ii) cond is in any of the registers eax, ebx, ecx and edx; the constraint "=q" guarantees it, and (iii) memory is in the clobber list, ie, the code is changing the contents of memory. + +3. How do we set or clear a bit in a register? That is what we see in the next recipe. + + > ` + > + > * * * + > + >
__asm__ __volatile__(   "btsl %1,%0"
+    >                       : "=m" (ADDR)
+    >                       : "Ir" (pos)
+    >                       : "cc"
+    >                       );
+    > 
+ > + > * * * + > + > ` + + Here, the bit at position ’pos’ of the variable at ADDR (a memory variable) is set to `1`. We can use ’btrl’ instead of ’btsl’ to clear the bit. The constraint "Ir" on pos says that pos may be in a register or be an immediate constant, and its value ranges from 0 to 31 (an x86-dependent constraint). ie, we can set/clear any bit from the 0th to the 31st of the variable at ADDR. As the condition codes will be changed, we are adding "cc" to the clobber list. + +4. Now we look at a more complicated but useful function: string copy. + + > ` + > + > * * * + > + >
static inline char * strcpy(char * dest,const char *src)
+    > {
+    > int d0, d1, d2;
+    > __asm__ __volatile__(  "1:\tlodsb\n\t"
+    >                        "stosb\n\t"
+    >                        "testb %%al,%%al\n\t"
+    >                        "jne 1b"
+    >                      : "=&S" (d0), "=&D" (d1), "=&a" (d2)
+    >                      : "0" (src),"1" (dest) 
+    >                      : "memory");
+    > return dest;
+    > }
+    > 
+ > + > * * * + > + > ` + + The source address is stored in esi and the destination in edi; then the copy starts, and when we reach **0**, copying is complete. The constraints "&S", "&D", "&a" say that the registers esi, edi and eax are early-clobber registers, ie, their contents will change before the completion of the function. Here also it’s clear why memory is in the clobber list. + + We can see a similar function which moves a block of double words. Notice that the function is declared as a macro. + + > ` + > + > * * * + > + >
#define mov_blk(src, dest, numwords) \
+    > __asm__ __volatile__ (                                          \
+    >                        "cld\n\t"                                \
+    >                        "rep\n\t"                                \
+    >                        "movsl"                                  \
+    >                        :                                        \
+    >                        : "S" (src), "D" (dest), "c" (numwords)  \
+    >                        : "%ecx", "%esi", "%edi"                 \
+    >                        )
+    > 
+ > + > * * * + > + > ` + + Here we have no outputs, so the changes that happen to the contents of the registers ecx, esi and edi are side effects of the block movement, so we have to add them to the clobber list. + +5. In Linux, system calls are implemented using GCC inline assembly. Let us look at how a system call is implemented. All the system calls are written as macros (linux/unistd.h). For example, a system call with three arguments is defined as a macro as shown below. + + > ` + > + > * * * + > + >
#define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \
+    > type name(type1 arg1,type2 arg2,type3 arg3) \
+    > { \
+    > long __res; \
+    > __asm__ volatile (  "int $0x80" \
+    >                   : "=a" (__res) \
+    >                   : "0" (__NR_##name),"b" ((long)(arg1)),"c" ((long)(arg2)), \
+    >                     "d" ((long)(arg3))); \
+    > __syscall_return(type,__res); \
+    > }
+    > 
+ > + > * * * + > + > ` + + Whenever a system call with three arguments is made, the macro shown above is used to make the call. The syscall number is placed in eax, and the parameters in ebx, ecx and edx. Finally, "int $0x80" is the instruction that makes the system call work. The return value can be collected from eax. + + Every system call is implemented in a similar way. Exit is a single-parameter syscall; let’s see what its code looks like. It is as shown below. + + > ` + > + > * * * + > + >
{
+    >         asm("movl $1,%eax;\n\t"     /* SYS_exit is 1 */
+    >             "xorl %ebx,%ebx;\n\t"   /* Argument is in ebx, it is 0 */
+    >             "int  $0x80"            /* Enter kernel mode */
+    >              );
+    > }
+    > 
+ > + > * * * + > + > ` + + The syscall number of exit is "1" and here its parameter is 0\. So we arrange for eax to contain 1 and ebx to contain 0, and by `int $0x80`, `exit(0)` is executed. This is how exit works. + +* * * + +## 8. Concluding Remarks. + +This document has gone through the basics of GCC Inline Assembly. Once you have understood the basic concept, it is not difficult to take further steps on your own. We saw some examples which are helpful in understanding the frequently used features of GCC Inline Assembly. + +GCC inlining is a vast subject and this article is by no means complete. More details about the syntax we discussed are available in the official documentation for the GNU Assembler. Similarly, for a complete list of constraints, refer to the official documentation of GCC. + +And of course, the Linux kernel uses GCC inline assembly on a large scale, so we can find many examples of various kinds in the kernel sources. They can help us a lot. + +If you have found any glaring typos, or outdated info in this document, please let us know. + +* * * + +## 9. References. + +1. [Brennan’s Guide to Inline Assembly](http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html) +2. [Using Assembly Language in Linux](http://linuxassembly.org/articles/linasm.html) +3. [Using as, The GNU Assembler](http://www.gnu.org/manual/gas-2.9.1/html_mono/as.html) +4. [Using and Porting the GNU Compiler Collection (GCC)](http://gcc.gnu.org/onlinedocs/gcc_toc.html) +5. 
[Linux Kernel Source](http://ftp.kernel.org/) + +* * * +via: http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html + + 作者:[Sandeep.S](mailto:busybox@sancharnet.in) 译者:[](https://github.com/) 校对:[]() + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/sources/tech/20151227 Ubuntu Touch, three years later.md b/sources/tech/20151227 Ubuntu Touch, three years later.md new file mode 100644 index 0000000000..3d467163cf --- /dev/null +++ b/sources/tech/20151227 Ubuntu Touch, three years later.md @@ -0,0 +1,68 @@ +Back in early 2013, your editor [dedicated a sacrificial handset][2] to the testing of the then-new Ubuntu Touch distribution. At that time, things were so unbaked that the distribution came with mocked-up data for unready apps; it even came with a set of fake tweets. Nearly three years later, it seemed time to give Ubuntu Touch another try on another sacrificial device. This distribution has certainly made some progress in those years, but, sadly, it still seems far from being a competitive offering in this space. +In particular, your editor tested version 16.04r3 from the testing channel on a Nexus 4 handset. The Nexus 4 is certainly past its prime at the end of 2015, but it still functions as a credible Android device. It is, in any case, the only phone handset on [the list of supported devices][1] other than the three that were sold (in locations far from your editor's home) with Ubuntu Touch pre-installed. It is a bit discouraging that Ubuntu Touch is not supported on a more recent device; the Nexus 4 was discontinued over two years ago. + +People who are accustomed to putting strange systems on Nexus devices know the drill fairly well: unlock the bootloader, install a new recovery image if necessary, then use the **fastboot** tool to flash a new image. Ubuntu Touch does not work that way; instead, one must use a set of tools available only on the Ubuntu desktop distribution. 
Your editor's current menagerie of systems does not include any of those, but, fortunately, running the Ubuntu 15.10 distribution off a USB drive works just fine. It must be said, though, that Ubuntu appears not to have gotten the memo regarding high-DPI laptop displays; 15.10 is an exercise in eyestrain on such a device. + +Once the requisite packages have been installed, the **ubuntu-device-flash** command can be used to install Ubuntu Touch on the phone. It finds the installation image wherever Canonical hides them (it's not obvious where that is) and puts it onto the phone; the process, on the Nexus 4, took about three hours — a surprisingly long time. Among other things, it installs a Ubuntu-specific recovery image, regardless of whether that should be necessary or not. The installation takes up about 4.5GB of space on the device. At the end, the phone reboots and comes up with the Ubuntu Touch lock screen, which has changed little in the last three years. The first boot takes a discouragingly long time, but subsequent reboots are faster, perhaps faster than Android on the same device. + +Alas, that's about the only thing that is faster than Android. The phone starts sluggish and gets worse as time goes on. At one point it took a solid minute to get the dialer screen up on the running device. Scrolling can be jerky and unpleasant to work with. At least once, the phone bogged down to the point that there was little alternative to shutting it down and starting over. + +Logging into the device over the USB connection offers some clues as to why that might be. There were no less than 258 processes running on the system. A number of them have "evolution" in their name, which is never a good sign even on a heftier system. Daemons like NetworkManager and pulseaudio are running. In general, Ubuntu Touch seems to have a large number of relatively large moving parts, leading, seemingly, to memory pressure and a certain amount of thrashing. 
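
The kind of quick inspection described above takes only a couple of standard commands from a shell on the device (a sketch; the daemon names are the ones this particular image happened to run, and the counts will vary by image and uptime):

```shell
# Tally the running processes and look for the desktop-class daemons
# mentioned above.
ps ax | wc -l                          # total process count
ps ax | grep -c '[e]volution' || true  # how many evolution-* helpers
for d in NetworkManager pulseaudio; do
    if pgrep -x "$d" > /dev/null; then
        echo "$d is running"
    fi
done
```

The `[e]volution` bracket pattern keeps grep from counting its own command line in the tally.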
+ +Three years ago, Ubuntu Touch was built on an Android chassis. There are still bits of Android that show up here and there (it uses binder, for example), but a number of those components have been replaced. This release runs an Android-derived kernel that identifies itself as "3.4.0-7 #39-Ubuntu". 3.4.0 was released in May 2012, so it is getting a bit long in the tooth; the 3.4.0 number suggests this kernel hasn't even gotten the stable updates that followed that release. Finding the source for the kernel in this distribution is not easy; it must almost certainly be hidden somewhere in this Gerrit repository, but your editor ran out of time while trying to find it. The SurfaceFlinger display manager has been replaced by Ubuntu's own Mir, with Unity providing the interface. Upstart is the init system, despite the fact that Ubuntu has moved to systemd on desktop systems. + +When one moves beyond the command-line interface and starts playing with the touchscreen, one finds that the basics of the interface resemble what was demonstrated three years ago. Swiping from the left edge brings the Unity icon bar (but no longer switches to a home screen; the "home screen" concept doesn't really seem to exist anymore). Swiping from the right will either switch to another application or produce an overview of running applications; it's not clear how it decides which. The overview provides a cute oblique view of the running applications; it's sufficient to choose one, but seems somewhat wasteful of screen space. Swiping up from the bottom produces an application-specific menu — usually. + +![][3] + + +The swipe gestures work well enough once one gets used to them, but there is scope for confusion. The camera app, for example, will instruct the user to "swipe left for photo roll," but, unless one is careful to avoid the right edge of the screen, that gesture will yield the overview screen instead. 
One can learn subtleties like "swipes involving the edge" and "swipes avoiding the edge," but one could argue that such an interface is more difficult than it needs to be and less discoverable than it could be. + +![][4] + +Speaking of the camera app, it takes pictures as one might expect, and it has gained a high-dynamic-range mode in recent years. It still has no support for stitching together photos in a panorama or "photo sphere" mode, though. + +![][5] + +The base distribution comes with a fairly basic set of apps. Many of them appear to be interfaces to an associated web page; the Amazon, GMail, and Facebook apps, for example. Something called "Shorts" appears to be an RSS reader, though it seems impervious to the addition of arbitrary feeds. There is a terminal app, but it prompts for a password — a bit surprising given that no password had ever been supplied for the device (it turns out that one should use the screen-lock PIN here). It's not clear that this extra level of "security" is helpful, given that the user involved is already able to install, launch, and run applications on the device, but so it goes. + +Despite the presence of all those evolution processes, there is no IMAP-capable email app; there are also no mapping apps. There is a rudimentary web browser with Ubuntu branding; it appears that this browser is based on Chromium. The weather app is limited to a few dozen hardwired locations worldwide; the closest supported location to LWN headquarters was Houston, which, one assumes, is unlikely to be dealing with the foot of snow your editor had to shovel while partway through this article. One suspects we would have heard about that. + +![][6] + +Inevitably, there is a store from which one can obtain other apps. There are, for example, a couple of seemingly capable, OpenStreetMap-based mapping apps there, including one that claims turn-by-turn navigation, but nothing requiring GPS access worked in your editor's tests. 
Games abound, of course, but there is little in the way of apps that are well known in the Android or iOS worlds. The store will refuse to allow the installation of apps until one creates a "Ubuntu One" account; that is unfortunate, but most Android users never get anywhere near that far before having to create or supply a Google account. + +![][7] + +Canonical puts a fair amount of energy into promoting its "scopes," which are said to be better than apps for the aggregation of content. In truth, they seem to just be another type of app with a focus on gathering information from more than one source. Although, with "branded scopes," the "more than one source" part is often deliberately put by the wayside. Your editor played around with scopes for a while, but, in truth, could not find what was supposed to make them special. + +Permissions management in Ubuntu Touch resembles that found in recent Android releases: the user will be prompted the first time an application tries to exercise a specific privilege. As with Android, the number of actions requiring privilege is relatively small, and "connect to any arbitrary site on the Internet" is not among them. Access to location information or the camera, though, will generate a prompt. There is also, again as with Android, a way to control which applications are allowed to place notifications on the screen. + +Ubuntu Touch still seems to drain the battery far more quickly than Android does on the same device. Indeed, it is barely able to get through the night while sitting idle. There is a cute battery app that offers a couple of "ways to reduce battery use," but it lacks Android's ability to say which apps are actually draining the battery (though, it must be said, that information from Android is often less helpful than one might hope). 
+ +![][8] + +The keyboard now has proper multi-lingual support (though there is no visual indication of which language is currently in effect) and, as with Android, one can switch between languages on the fly. It offers word suggestions, does spelling correction, and all the usual things. One missing feature, though, is "swipe" typing which, your editor has found, can speed the process of inputting text on a small keyboard considerably. There is also no voice input; no major loss from your editor's point of view, but others will probably see that differently. + +There is a lot to like in Ubuntu Touch. There is some appeal to running something that looks like a proper Linux system, even if it still has a number of Ubuntu-specific components. One does not get the sense that the device is watching quite as closely as Android devices do, though it's not entirely clear, for example, what happens with location data or where it might be stored. In any case, a Ubuntu device clearly has more free software on it than most alternatives do; there is no proprietary "play services" layer maintaining control over the system. + +Sadly, though, this distribution still is not up to the capabilities and the performance of the big alternatives. Switching to Ubuntu Touch means settling for a much slower system, running on a severely limited set of devices, with a relative scarcity of apps to choose from. Your editor would very much like to see a handset distribution that is more free and more open than the alternatives, but that distribution must also be competitive with those alternatives, and that does not seem to be the case here. Unless Canonical can find a way to close the performance and feature gaps with Android, it seems unlikely to have much hope of achieving uptake that is within a few orders of magnitude of Android's. 
+ +-------------------------------------- + +via: https://lwn.net/Articles/667983/ + +作者:Jonathan Corbet +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]: https://developer.ubuntu.com/en/start/ubuntu-for-devices/devices/ +[2]: https://lwn.net/Articles/540138/ +[3]: https://static.lwn.net/images/2015/utouch/overview-sm.png +[4]: https://static.lwn.net/images/2015/utouch/camera-swipe-sm.png +[5]: https://static.lwn.net/images/2015/utouch/terminal.png +[6]: https://static.lwn.net/images/2015/utouch/gps-sm.png +[7]: https://static.lwn.net/images/2015/utouch/camera-perm.png +[8]: https://static.lwn.net/images/2015/utouch/schifo.png diff --git a/sources/tech/20160104 What is good stock portfolio management software on Linux.md b/sources/tech/20160104 What is good stock portfolio management software on Linux.md new file mode 100644 index 0000000000..b7c372ce71 --- /dev/null +++ b/sources/tech/20160104 What is good stock portfolio management software on Linux.md @@ -0,0 +1,110 @@ +translating by fw8899 +What is good stock portfolio management software on Linux +================================================================================ +If you are investing in the stock market, you probably understand the importance of a sound portfolio management plan. The goal of portfolio management is to come up with the best investment plan tailored for you, considering your risk tolerance, time horizon and financial goals. Given its importance, no wonder there is no shortage of commercial portfolio management apps and stock market monitoring software, each touting various sophisticated portfolio performance tracking and reporting capabilities. 
+ +For those of you Linux aficionados who are looking for a **good open-source portfolio management tool** to manage and track your stock portfolio on Linux, I would highly recommend a Java-based portfolio manager called [JStock][1]. If you are not a big Java fan, you might be turned off by the fact that JStock runs on a heavyweight JVM. At the same time I am sure many people will appreciate the fact that JStock is instantly accessible on every Linux platform with a JRE installed. There are no hoops to jump through to make it work in your Linux environment. + +The days when "open-source" meant "cheap" or "subpar" are gone. Considering that JStock is just a one-man job, it is impressively packed with many useful features as a portfolio management tool, and all that credit goes to Yan Cheng Cheok! For example, JStock supports price monitoring via watchlists, multiple portfolios, custom/built-in stock indicators and scanners, support for 27 different stock markets and cross-platform cloud backup/restore. JStock is available on multiple platforms (Linux, OS X, Android and Windows), and you can save and restore your JStock portfolios seamlessly across different platforms via cloud backup/restore. + +Sounds pretty neat, huh? Now I am going to show you how to install and use JStock in more detail. + +### Install JStock on Linux ### + +Since JStock is written in Java, you must [install JRE][2] to run it. Note that JStock requires JRE 1.7 or higher. If your JRE version does not meet this requirement, JStock will fail with the following error. + + Exception in thread "main" java.lang.UnsupportedClassVersionError: org/yccheok/jstock/gui/JStock : Unsupported major.minor version 51.0 + +Once you install JRE on your Linux system, download the latest JStock release from the official website, and launch it as follows. 
+ + $ wget https://github.com/yccheok/jstock/releases/download/release_1-0-7-13/jstock-1.0.7.13-bin.zip + $ unzip jstock-1.0.7.13-bin.zip + $ cd jstock + $ chmod +x jstock.sh + $ ./jstock.sh + +In the rest of the tutorial, let me demonstrate several useful features of JStock. + +### Monitor Stock Price Movements via Watchlist ### + +In JStock, you can monitor stock price movements and automatically get notified by creating one or more watchlists. In each watchlist, you can add multiple stocks you are interested in. Then add your alert thresholds under the "Fall Below" and "Rise Above" columns, which correspond to the minimum and maximum stock prices you want to set, respectively. + +![](https://c2.staticflickr.com/2/1588/23795349969_37f4b0f23c_c.jpg) + +For example, if you set the minimum/maximum prices of AAPL stock to $102 and $115.50, you will be alerted via desktop notifications if the stock price drops below $102 or rises above $115.50 at any time. + +You can also enable the email alert option, so that you will instead receive email notifications for such price events. To enable email alerts, go to the "Options" menu. Under the "Alert" tab, check the "Send message to email(s)" box, and enter your Gmail account. Once you go through the Gmail authorization steps, JStock will start sending email alerts to that Gmail account (and optionally CC them to any third-party email address). + +![](https://c2.staticflickr.com/2/1644/24080560491_3aef056e8d_b.jpg) + +### Manage Multiple Portfolios ### + +JStock allows you to manage multiple portfolios. This feature is useful if you are using multiple stock brokers. You can create a separate portfolio for each broker and manage your buy/sell/dividend transactions on a per-broker basis. You can switch between portfolios by choosing a particular portfolio under the "Portfolio" menu. The following screenshot shows a hypothetical portfolio.
+ +![](https://c2.staticflickr.com/2/1646/23536385433_df6c036c9a_c.jpg) + +Optionally, you can enable the broker fee option, so that you can enter broker fees, stamp duty and clearing fees for each buy/sell transaction. To save yourself the typing, you can instead enable fee auto-calculation and enter the fee schedule of each brokerage firm in the Options menu beforehand. Then JStock will automatically calculate and enter the fees when you add transactions to your portfolio. + +![](https://c2.staticflickr.com/2/1653/24055085262_0e315c3691_b.jpg) + +### Screen Stocks with Built-in/Custom Indicators ### + +If you are doing any technical analysis on stocks, you may want to screen stocks based on various criteria (so-called "stock indicators"). For stock screening, JStock offers several [pre-built technical indicators][3] that capture upward/downward/reversal trends of individual stocks. The following is a list of available indicators. + +- Moving Average Convergence Divergence (MACD) +- Relative Strength Index (RSI) +- Money Flow Index (MFI) +- Commodity Channel Index (CCI) +- Doji +- Golden Cross, Death Cross +- Top Gainers/Losers + +To install a pre-built indicator, go to the "Stock Indicator Editor" tab in JStock. Then click on the "Install" button in the right-side panel. Choose the "Install from JStock server" option, and install any indicator(s) you want. + +![](https://c2.staticflickr.com/2/1476/23867534660_b6a9c95a06_c.jpg) + +Once one or more indicators are installed, you can scan stocks with them. Go to the "Stock Indicator Scanner" tab, click on the "Scan" button at the bottom, and choose an indicator. + +![](https://c2.staticflickr.com/2/1653/24137054996_e8fcd10393_c.jpg) + +Once you select the stock exchange(s) to scan (e.g., NYSE, NASDAQ), JStock will perform the scan and show a list of stocks captured by the indicator.
+ +![](https://c2.staticflickr.com/2/1446/23795349889_0f1aeef608_c.jpg) + +Besides pre-built indicators, you can also define custom indicator(s) of your own with a GUI-based indicator editor. The following example screens for stocks whose current price is less than or equal to their 60-day average price. + +![](https://c2.staticflickr.com/2/1605/24080560431_3d26eac6b5_c.jpg) + +### Cloud Backup and Restore between Linux and Android JStock ### + +Another nice feature of JStock is cloud backup and restore. JStock allows you to save and restore your portfolios/watchlists via Google Drive, and this feature works seamlessly across different platforms (e.g., Linux and Android). For example, if you saved your JStock portfolios to Google Drive on Android, you can restore them in the Linux version of JStock. + +![](https://c2.staticflickr.com/2/1537/24163165565_bb47e04d6c_c.jpg) + +![](https://c2.staticflickr.com/2/1556/23536385333_9ed1a75d72_c.jpg) + +If you don't see your portfolios/watchlists after restoring from Google Drive, make sure that your country is correctly set under the "Country" menu. + +The free Android version of JStock is available from the [Google Play store][4]. You will need to upgrade to the premium version (a one-time payment) if you want to use its full features (e.g., cloud backup, alerts, charts). I think the premium version is definitely worth it. + +![](https://c2.staticflickr.com/2/1687/23867534720_18b917028c_c.jpg) + +As a final note, I should mention that its creator, Yan Cheng Cheok, is quite active in JStock development and very responsive in addressing bugs. Kudos to him! + +What do you think of JStock as portfolio tracking software?
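
If you script the launch of JStock, a quick pre-flight check of the "JRE 1.7 or higher" requirement mentioned earlier can save you a confusing stack trace. The sketch below is my own illustration, not part of JStock; the `jre_is_supported` helper name and its parsing logic are assumptions.

```shell
# Minimal pre-flight check for the "JRE 1.7 or higher" requirement.
# jre_is_supported is a hypothetical helper, not shipped with JStock.
jre_is_supported() {
    major=${1%%.*}            # "1" from "1.6.0_45", "9" from "9.0.1"
    rest=${1#*.}
    minor=${rest%%.*}         # "6" from "6.0_45"
    minor=${minor%%_*}
    if [ "$major" -gt 1 ]; then
        return 0              # Java 9+ dropped the leading "1." prefix
    fi
    [ "$major" -eq 1 ] && [ "$minor" -ge 7 ]
}

jre_is_supported "1.6.0_45" || echo "JRE too old for JStock (needs 1.7+)"
jre_is_supported "1.8.0_66" && echo "JRE looks OK"
```

In practice you would feed it the running JRE's version, e.g. `java -version 2>&1 | awk -F '"' '/version/ {print $2}'`.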
+ +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/stock-portfolio-management-software-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://jstock.org/ +[2]:http://ask.xmodulo.com/install-java-runtime-linux.html +[3]:http://jstock.org/ma_indicator.html +[4]:https://play.google.com/store/apps/details?id=org.yccheok.jstock.gui diff --git a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md index 3ffb1dc54f..841d5c0625 100644 --- a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md +++ b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md @@ -1,3 +1,4 @@ +Being translated by hittlle...... Part 10 - LFCS: Understanding & Learning Basic Shell Scripting and Linux Filesystem Troubleshooting ================================================================================ The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new initiative whose purpose is to allow individuals everywhere (and anywhere) to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. 
@@ -312,4 +313,4 @@ via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-tro [1]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ [2]:http://www.tecmint.com/vi-editor-usage/ [3]:http://www.tecmint.com/learning-shell-scripting-language-a-guide-from-newbies-to-system-administrator/ -[4]:http://www.tecmint.com/basic-shell-programming-part-ii/ \ No newline at end of file +[4]:http://www.tecmint.com/basic-shell-programming-part-ii/ diff --git a/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md b/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md deleted file mode 100644 index 7fe8073a77..0000000000 --- a/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md +++ /dev/null @@ -1,387 +0,0 @@ -Part 2 - LFCS: How to Install and Use vi/vim as a Full Text Editor -================================================================================ -A couple of months ago, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification in order to help individuals from all over the world to verify they are capable of doing basic to intermediate system administration tasks on Linux systems: system support, first-hand troubleshooting and maintenance, plus intelligent decision-making to know when it’s time to raise issues to upper support teams. - -![Learning VI Editor in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/LFCS-Part-2.png) - -Learning VI Editor in Linux - -Please take a look at the below video that explains The Linux Foundation Certification Program. - -注:youtube 视频 - - -This post is Part 2 of a 10-tutorial series, here in this part, we will cover the basic file editing operations and understanding modes in vi/m editor, that are required for the LFCS certification exam. 
- -### Perform Basic File Editing Operations Using vi/m ### - -Vi was the first full-screen text editor written for Unix. Although it was intended to be small and simple, it can be a bit challenging for people used exclusively to GUI text editors, such as NotePad++ or gedit, to name a few. - -To use Vi, we must first understand the 3 modes in which this powerful program operates, before going on to learn its powerful text-editing procedures. - -Please note that most modern Linux distributions ship with a variant of vi known as vim (“Vi improved”), which supports more features than the original vi does. For that reason, throughout this tutorial we will use vi and vim interchangeably. - -If your distribution does not have vim installed, you can install it as follows. - -- Ubuntu and derivatives: aptitude update && aptitude install vim -- Red Hat-based distributions: yum update && yum install vim -- openSUSE: zypper update && zypper install vim - -### Why should I want to learn vi? ### - -There are at least 2 good reasons to learn vi. - -1. vi is always available (no matter what distribution you’re using) since it is required by POSIX. - -2. vi does not consume a considerable amount of system resources and allows us to perform any imaginable task without lifting our fingers from the keyboard. - -In addition, vi has a very extensive built-in manual, which can be launched using the :help command right after the program is started. This built-in manual contains more information than vi/m’s man page. - -![vi Man Pages](http://www.tecmint.com/wp-content/uploads/2014/10/vi-man-pages.png) - -vi Man Pages - -#### Launching vi #### - -To launch vi, type vi in your command prompt. - -![Start vi Editor](http://www.tecmint.com/wp-content/uploads/2014/10/start-vi-editor.png) - -Start vi Editor - -Then press i to enter Insert mode, and you can start typing. Another way to launch vi/m is:
- - # vi filename - -Which will open a new buffer (more on buffers later) named filename, which you can later save to disk. - -#### Understanding Vi modes #### - -1. In command mode, vi allows the user to navigate around the file and enter vi commands, which are brief, case-sensitive combinations of one or more letters. Almost all of them can be prefixed with a number to repeat the command that number of times. - -For example, yy (or Y) copies the entire current line, whereas 3yy (or 3Y) copies the entire current line along with the next two lines (3 lines in total). We can always enter command mode (regardless of the mode we’re working in) by pressing the Esc key. The fact that in command mode the keyboard keys are interpreted as commands instead of text tends to be confusing to beginners. - -2. In ex mode, we can manipulate files (including saving the current file and running outside programs). To enter this mode, we must type a colon (:) from command mode, directly followed by the name of the ex-mode command that needs to be used. After that, vi returns automatically to command mode. - -3. In insert mode (the letter i is commonly used to enter this mode), we simply enter text. Most keystrokes result in text appearing on the screen (one important exception is the Esc key, which exits insert mode and returns to command mode). - -![vi Insert Mode](http://www.tecmint.com/wp-content/uploads/2014/10/vi-insert-mode.png) - -vi Insert Mode - -#### Vi Commands #### - -The following table shows a list of commonly used vi commands.
File editing commands can be enforced by appending an exclamation sign to the command (for example, :q! quits without saving even if the buffer was modified).

| Key command | Description |
|---|---|
| h or left arrow | Go one character to the left |
| j or down arrow | Go down one line |
| k or up arrow | Go up one line |
| l (lowercase L) or right arrow | Go one character to the right |
| H | Go to the top of the screen |
| L | Go to the bottom of the screen |
| G | Go to the end of the file |
| w | Move one word to the right |
| b | Move one word to the left |
| 0 (zero) | Go to the beginning of the current line |
| ^ | Go to the first nonblank character on the current line |
| $ | Go to the end of the current line |
| Ctrl-B | Go back one screen |
| Ctrl-F | Go forward one screen |
| i | Insert at the current cursor position |
| I (uppercase i) | Insert at the beginning of the current line |
| J (uppercase j) | Join the current line with the next one (move the next line up) |
| a | Append after the current cursor position |
| o (lowercase o) | Create a blank line after the current line |
| O (uppercase o) | Create a blank line before the current line |
| r | Replace the character at the current cursor position |
| R | Overwrite at the current cursor position |
| x | Delete the character at the current cursor position |
| X | Delete the character immediately before (to the left of) the current cursor position |
| dd | Cut (for later pasting) the entire current line |
| D | Cut from the current cursor position to the end of the line (equivalent to d$) |
| yX | Given a movement command X, copy (yank) the appropriate number of characters, words, or lines from the current cursor position |
| yy or Y | Yank (copy) the entire current line |
| p | Paste after (next line) the current cursor position |
| P | Paste before (previous line) the current cursor position |
| . (period) | Repeat the last command |
| u | Undo the last command |
| U | Undo the last command on the last line; this works as long as the cursor is still on that line |
| n | Find the next match in a search |
| N | Find the previous match in a search |
| :n | Next file; when multiple files are specified for editing, this command loads the next file |
| :e file | Load file in place of the current file |
| :r file | Insert the contents of file after (next line) the current cursor position |
| :q | Quit without saving changes |
| :w file | Write the current buffer to file; to append to an existing file, use :w >> file |
| :wq | Write the contents of the current file and quit (equivalent to x! and ZZ) |
| :r! command | Execute command and insert its output after (next line) the current cursor position |

#### Vi Options ####

The following options can come in handy while running vim (we need to add them to our ~/.vimrc file).

    # echo set number >> ~/.vimrc
    # echo syntax on >> ~/.vimrc
    # echo set tabstop=4 >> ~/.vimrc
    # echo set autoindent >> ~/.vimrc

![vi Editor Options](http://www.tecmint.com/wp-content/uploads/2014/10/vi-options.png)

vi Editor Options

- set number shows line numbers when vi opens an existing or a new file.
- syntax on turns on syntax highlighting (for multiple file extensions) in order to make code and config files more readable.
- set tabstop=4 sets the tab size to 4 spaces (the default value is 8).
- set autoindent carries over the previous indent to the next line.

#### Search and replace ####

vi has the ability to move the cursor to a certain location (on a single line or over an entire file) based on searches. It can also perform text replacements with or without confirmation from the user.

a). Searching within a line: the f command searches a line and moves the cursor to the next occurrence of a specified character in the current line.
- -For example, the command fh would move the cursor to the next instance of the letter h within the current line. Note that neither the letter f nor the character you’re searching for will appear anywhere on your screen, but the character will be highlighted after you press Enter. - -For example, this is what I get after pressing f4 in command mode. - -![Search String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-string.png) - -Search String in Vi - -b). Searching an entire file: use the / command, followed by the word or phrase to be searched for. A search may be repeated with the n command (next match) or the N command (previous match). This is the result of typing /Jane in command mode. - -![Vi Search String in File](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-line.png) - -Vi Search String in File - -c). vi uses a command (similar to sed’s) to perform substitution operations over a range of lines or an entire file. To change the word “old” to “young” throughout the entire file, we must enter the following command. - - :%s/old/young/g - -**Notice**: The colon at the beginning of the command. - -![Vi Search and Replace](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-and-replace.png) - -Vi Search and Replace - -The colon (:) starts the ex command, s in this case (for substitution), % is a shortcut meaning from the first line to the last line (the range can also be specified as n,m, meaning “from line n to line m”), old is the search pattern, while young is the replacement text, and g indicates that the substitution should be performed on every occurrence of the search string in the file. - -Alternatively, a c can be added to the end of the command to ask for confirmation before performing any substitution. - - :%s/old/young/gc - -Before replacing the original text with the new one, vi/m will present us with the following message.
- -![Replace String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-replace-old-with-young.png) - -Replace String in Vi - -- y: perform the substitution (yes) -- n: skip this occurrence and go to the next one (no) -- a: perform the substitution in this and all subsequent instances of the pattern. -- q or Esc: quit substituting. -- l (lowercase L): perform this substitution and quit (last). -- Ctrl-e, Ctrl-y: Scroll down and up, respectively, to view the context of the proposed substitution. - -#### Editing Multiple Files at a Time #### - -Let’s type vim file1 file2 file3 in our command prompt. - - # vim file1 file2 file3 - -First, vim will open file1. To switch to the next file (file2), we need to use the :n command. When we want to return to the previous file, :N will do the job. - -In order to switch from file1 to file3: - -a). The :buffers command will show a list of the files currently being edited. - - :buffers - -![Edit Multiple Files](http://www.tecmint.com/wp-content/uploads/2014/10/vi-edit-multiple-files.png) - -Edit Multiple Files - -b). The command :buffer 3 (without the s at the end) will open file3 for editing. - -In the image above, a pound sign (#) indicates that the file is currently open but in the background, while %a marks the file that is currently being edited. On the other hand, a blank space after the file number (3 in the above example) indicates that the file has not yet been opened. - -#### Temporary vi buffers #### - -To copy a couple of consecutive lines (let’s say 4, for example) into a temporary buffer named a (not associated with a file) and place those lines in another part of the file later in the current vi session, we need to… - -1. Press the ESC key to be sure we are in vi Command mode. - -2. Place the cursor on the first line of the text we wish to copy. - -3. Type “a4yy to copy the current line, along with the 3 subsequent lines, into a buffer named a.
We can continue editing our file – we do not need to insert the copied lines immediately. - -4. When we reach the location for the copied lines, use “a before the p or P commands to insert the lines copied into the buffer named a: - -- Type “ap to insert the lines copied into buffer a after the current line on which the cursor is resting. -- Type “aP to insert the lines copied into buffer a before the current line. - -If we wish, we can repeat the above steps to insert the contents of buffer a in multiple places in our file. A temporary buffer, as the one in this section, is disposed when the current window is closed. - -### Summary ### - -As we have seen, vi/m is a powerful and versatile text editor for the CLI. Feel free to share your own tricks and comments below. - -#### Reference Links #### - -- [About the LFCS][1] -- [Why get a Linux Foundation Certification?][2] -- [Register for the LFCS exam][3] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/vi-editor-usage/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:https://training.linuxfoundation.org/certification/LFCS -[2]:https://training.linuxfoundation.org/certification/why-certify-with-us -[3]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md b/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md deleted file mode 100644 index 82cc54a5a6..0000000000 --- a/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in 
Linux.md +++ /dev/null @@ -1,382 +0,0 @@ -Part 3 - LFCS: How to Archive/Compress Files & Directories, Setting File Attributes and Finding Files in Linux -================================================================================ -Recently, the Linux Foundation started the LFCS (Linux Foundation Certified Sysadmin) certification, a brand new program whose purpose is allowing individuals from all corners of the globe to have access to an exam, which if approved, certifies that the person is knowledgeable in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-level troubleshooting and analysis, plus the ability to decide when to escalate issues to engineering teams. - -![Linux Foundation Certified Sysadmin – Part 3](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-3.png) - -Linux Foundation Certified Sysadmin – Part 3 - -Please watch the below video that gives the idea about The Linux Foundation Certification Program. - -注:youtube 视频 - - -This post is Part 3 of a 10-tutorial series, here in this part, we will cover how to archive/compress files and directories, set file attributes, and find files on the filesystem, that are required for the LFCS certification exam. - -### Archiving and Compression Tools ### - -A file archiving tool groups a set of files into a single standalone file that we can backup to several types of media, transfer across a network, or send via email. The most frequently used archiving utility in Linux is tar. When an archiving utility is used along with a compression tool, it allows to reduce the disk size that is needed to store the same files and information. - -#### The tar utility #### - -tar bundles a group of files together into a single archive (commonly called a tar file or tarball). 
The name originally stood for tape archiver, but we must note that we can use this tool to archive data to any kind of writeable media (not only to tapes). tar is normally used with a compression tool such as gzip, bzip2, or xz to produce a compressed tarball.

**Basic syntax:**

    # tar [options] [pathname ...]

Where … represents the expression used to specify which files should be acted upon.

#### Most commonly used tar commands ####

注:表格

| Long option | Abbreviation | Description |
|---|---|---|
| --create | c | Creates a tar archive |
| --concatenate | A | Appends tar files to an archive |
| --append | r | Appends files to the end of an archive |
| --update | u | Appends files newer than the copy in the archive |
| --diff or --compare | d | Finds differences between the archive and the file system |
| --file archive | f | Uses archive file or device ARCHIVE |
| --list | t | Lists the contents of a tarball |
| --extract or --get | x | Extracts files from an archive |

#### Normally used operation modifiers ####

注:表格

| Long option | Abbreviation | Description |
|---|---|---|
| --directory dir | C | Changes to directory dir before performing operations |
| --same-permissions | p | Preserves original permissions |
| --verbose | v | Lists all files read or extracted; when this flag is used along with --list, the file sizes, ownership, and time stamps are displayed |
| --verify | W | Verifies the archive after writing it |
| --exclude=pattern | — | Excludes files matching pattern |
| --exclude-from file | X | Excludes the files listed in file |
| --gzip or --gunzip | z | Processes an archive through gzip |
| --bzip2 | j | Processes an archive through bzip2 |
| --xz | J | Processes an archive through xz |
- -Gzip is the oldest compression tool and provides the least compression, while bzip2 provides improved compression. In addition, xz is the newest and (usually) provides the best compression. This advantage of better compression comes at a price: the time it takes to complete the operation, and the system resources used during the process. - -Normally, tar files compressed with these utilities have .gz, .bz2, or .xz extensions, respectively. In the following examples we will be using these files: file1, file2, file3, file4, and file5. - -**Grouping and compressing with gzip, bzip2 and xz** - -Group all the files in the current working directory and compress the resulting bundle with gzip, bzip2, and xz (please note the use of a regular expression to specify which files should be included in the bundle – this is to prevent the archiving tool from grouping the tarballs created in previous steps). - - # tar czf myfiles.tar.gz file[0-9] - # tar cjf myfiles.tar.bz2 file[0-9] - # tar cJf myfile.tar.xz file[0-9] - -![Compress Multiple Files Using tar](http://www.tecmint.com/wp-content/uploads/2014/10/Compress-Multiple-Files.png) - -Compress Multiple Files - -**Listing the contents of a tarball and updating / appending files to the bundle** - -List the contents of a tarball and display the same information as a long directory listing. Note that update or append operations cannot be applied to compressed files directly (if you need to update or append a file to a compressed tarball, you need to uncompress the tar file, update / append to it, then compress again).
- - # tar tvf [tarball] - -![Check Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/List-Archive-Content.png) - -List Archive Content - -Run any of the following commands: - - # gzip -d myfiles.tar.gz [#1] - # bzip2 -d myfiles.tar.bz2 [#2] - # xz -d myfiles.tar.xz [#3] - -Then - - # tar --delete --file myfiles.tar file4 (deletes the file inside the tarball) - # tar --update --file myfiles.tar file4 (adds the updated file) - -and - - # gzip myfiles.tar [ if you chose #1 above ] - # bzip2 myfiles.tar [ if you chose #2 above ] - # xz myfiles.tar [ if you chose #3 above ] - -Finally, - - # tar tvf [tarball] #again - -and compare the modification date and time of file4 with the same information as shown earlier. - -**Excluding file types** - -Suppose you want to perform a backup of users' home directories. A good sysadmin practice would be (it may also be specified by company policies) to exclude all video and audio files from backups. - -Maybe your first approach would be to exclude from the backup all files with an .mp3 or .mp4 extension (or other extensions). But if a clever user changes the extension to .txt or .bkp, that approach won't do you much good. In order to detect an audio or video file, you need to check its file type with file. The following shell script will do the job. - - #!/bin/bash - # Pass the directory to back up as the first argument. - DIR=$1 - # Create the tarball and compress it. Exclude files whose file type contains the string MPEG. - # -If the output of file contains the string mpeg, grep -qi succeeds and the filename is echoed into the exclude list read by -X. - # -Otherwise, the file remains among the files to be backed up. - tar -X <(for i in $DIR/*; do if file "$i" | grep -qi mpeg; then echo "$i"; fi; done) -cjf backupfile.tar.bz2 $DIR/* - -![Exclude Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Exclude-Files-in-Tar.png) - -Exclude Files in tar - -**Restoring backups with tar preserving permissions** - -You can then restore the backup to the original user's home directory (user_restore in this example), preserving permissions, with the following command. - - # tar xjf backupfile.tar.bz2 --directory user_restore --same-permissions - -![Restore Files from tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-tar-Backup-Files.png) - -Restore Files from Archive - -**Read Also:** - -- [18 tar Command Examples in Linux][1] -- [Dtrx – An Intelligent Archive Tool for Linux][2] - -### Using find Command to Search for Files ### - -The find command is used to search recursively through directory trees for files or directories that match certain characteristics, and can then either print the matching files or directories or perform other operations on the matches. - -Normally, we will search by name, owner, group, type, permissions, date, and size. - -#### Basic syntax: #### - -# find [directory_to_search] [expression] - -**Finding files recursively according to Size** - -Find all files (-type f) in the current directory (.) and 2 subdirectories below (-maxdepth 3 includes the current working directory and 2 levels down) whose size (-size) is greater than 2 MB. - - # find . -maxdepth 3 -type f -size +2M - -![Find Files by Size in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-Based-on-Size.png) - -Find Files Based on Size - -**Finding and deleting files that match certain criteria** - -Files with 777 permissions are sometimes considered an open door to external attackers. Either way, it is not safe to let anyone do anything with files. We will take a rather aggressive approach and delete them! (‘{}‘ + is used to “collect” the results of the search).
- - # find /home/user -perm 777 -exec rm '{}' + - -![Find all 777 Permission Files](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-with-777-Permission.png) - -Find Files with 777 Permission - -**Finding files per atime or mtime** - -Search for configuration files in /etc that have been accessed (-atime) or modified (-mtime) more than (+180), less than (-180), or exactly (180) 180 days (about 6 months) ago. - -Modify the following command as per the example below: - - # find /etc -iname "*.conf" -mtime -180 -print - -![Find Files by Modification Time](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Modified-Files.png) - -Find Modified Files - -- Read Also: [35 Practical Examples of Linux ‘find’ Command][3] - -### File Permissions and Basic Attributes ### - -The first 10 characters in the output of ls -l are the file attributes. The first of these characters is used to indicate the file type: - -- \- : a regular file -- d : a directory -- l : a symbolic link -- c : a character device (which treats data as a stream of bytes, i.e. a terminal) -- b : a block device (which handles data in blocks, i.e. storage devices) - -The next nine characters of the file attributes are called the file mode and represent the read (r), write (w), and execute (x) permissions of the file’s owner, the file’s group owner, and the rest of the users (commonly referred to as “the world”). - -Whereas the read permission on a file allows it to be opened and read, the same permission on a directory allows its contents to be listed if the execute permission is also set. In addition, the execute permission on a file allows it to be handled as a program and run, while on a directory it allows it to be cd’ed into. - -File permissions are changed with the chmod command, whose basic syntax is as follows: - - # chmod [new_mode] file - -Where new_mode is either an octal number or an expression that specifies the new permissions.
- -The octal number can be converted from its binary equivalent, which is calculated from the desired file permissions for the owner, the group, and the world, as follows: - -The presence of a certain permission equals a power of 2 (r=2^2=4, w=2^1=2, x=2^0=1), while its absence equates to 0. For example: - -![Linux File Permissions](http://www.tecmint.com/wp-content/uploads/2014/10/File-Permissions.png) - -File Permissions - -To set the file’s permissions as above in octal form, type: - - # chmod 744 myfile - -You can also set a file’s mode using an expression that indicates the owner’s rights with the letter u, the group owner’s rights with the letter g, and the rest with o. All of these “individuals” can be represented at the same time with the letter a. Permissions are granted (or revoked) with the + or - signs, respectively. - -**Revoking execute permission for a shell script for all users** - -As we explained earlier, we can revoke a certain permission by prepending it with the minus sign and indicating whether it needs to be revoked for the owner, the group owner, or all users. The one-liner below can be interpreted as follows: Change mode for all (a) users, revoke (-) execute permission (x). - - # chmod a-x backup.sh - -Granting read, write, and execute permissions for a file to the owner and group owner, and read permissions for the world. - -When we use a 3-digit octal number to set permissions for a file, the first digit indicates the permissions for the owner, the second digit for the group owner and the third digit for everyone else: - -- Owner: (r=4 + w=2 + x=1 = 7) -- Group owner: (r=4 + w=2 + x=1 = 7) -- World: (r=4 + w=0 + x=0 = 4). - - # chmod 774 myfile - -In time, and with practice, you will be able to decide which method of changing a file mode works best for you in each case.
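The octal arithmetic above is easy to verify in a shell. A minimal sketch (the file name is illustrative, and GNU stat is assumed for the `-c` flag):

```shell
# Create a scratch file and exercise both the octal and symbolic forms.
touch demo.sh
chmod 744 demo.sh        # rwxr--r-- : owner 4+2+1, group 4, world 4
stat -c '%a' demo.sh     # prints 744
chmod a-x demo.sh        # revoke execute for all users
stat -c '%a' demo.sh     # prints 644
chmod u+x,g+w demo.sh    # grant execute to owner, write to group
stat -c '%a' demo.sh     # prints 764
```

Both notations end up in the same mode bits; stat just reads them back in octal.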
A long directory listing also shows the file’s owner and its group owner (which serve as rudimentary yet effective access control for files in a system): - -![Linux File Listing](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-File-Listing.png) - -Linux File Listing - -File ownership is changed with the chown command. The owner and the group owner can be changed at the same time or separately. Its basic syntax is as follows: - - # chown user:group file - -Where at least user or group needs to be present. - -**A Few Examples** - -Changing the owner of a file to a certain user. - - # chown gacanepa sent - -Changing the owner and group of a file to a specific user:group pair. - - # chown gacanepa:gacanepa TestFile - -Changing only the group owner of a file to a certain group. Note the colon before the group’s name. - - # chown :gacanepa email_body.txt - -### Conclusion ### - -As a sysadmin, you need to know how to create and restore backups, how to find files in your system and change their attributes, along with a few tricks that can make your life easier and will prevent you from running into future issues. - -I hope that the tips provided in the present article will help you to achieve that goal. Feel free to add your own tips and ideas in the comments section for the benefit of the community. Thanks in advance!
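As a quick self-check, the backup–restore–find workflow covered in this article can be rehearsed end to end on a scratch tree (all paths below are illustrative, and no root privileges are needed):

```shell
# Build a tiny tree, back it up, restore it elsewhere, and verify that
# find locates the restored file and that its permissions survived.
mkdir -p src restore
echo 'data' > src/keep.txt
chmod 600 src/keep.txt
tar cjf backupfile.tar.bz2 src                                      # back up
tar xjf backupfile.tar.bz2 --directory restore --same-permissions   # restore
find restore -type f -name '*.txt'       # prints restore/src/keep.txt
stat -c '%a' restore/src/keep.txt        # prints 600
```

Without --same-permissions (tar -p), a non-root user's umask would be applied to the extracted files.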
-Reference Links - -- [About the LFCS][4] -- [Why get a Linux Foundation Certification?][5] -- [Register for the LFCS exam][6] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/compress-files-and-finding-files-in-linux/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/18-tar-command-examples-in-linux/ -[2]:http://www.tecmint.com/dtrx-an-intelligent-archive-extraction-tar-zip-cpio-rpm-deb-rar-tool-for-linux/ -[3]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ -[4]:https://training.linuxfoundation.org/certification/LFCS -[5]:https://training.linuxfoundation.org/certification/why-certify-with-us -[6]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md b/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md deleted file mode 100644 index ada637fabb..0000000000 --- a/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md +++ /dev/null @@ -1,191 +0,0 @@ -Part 4 - LFCS: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition -================================================================================ -Last August, the Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators to show, through a performance-based exam, that they can perform overall operational support of Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation – if needed – to 
other support teams. - -![Linux Foundation Certified Sysadmin – Part 4](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-4.png) - -Linux Foundation Certified Sysadmin – Part 4 - -Please be aware that Linux Foundation certifications are precise, entirely performance-based, and available through an online portal anytime, anywhere. Thus, you no longer have to travel to an examination center to get the certifications you need to establish your skills and expertise. - -Please watch the video below, which explains The Linux Foundation Certification Program. - -(Note: YouTube video) - - -This post is Part 4 of a 10-tutorial series. In this part, we will cover partitioning storage devices, formatting filesystems, and configuring a swap partition, all of which are required for the LFCS certification exam. - -### Partitioning Storage Devices ### - -Partitioning is a means to divide a single hard drive into one or more parts or “slices” called partitions. A partition is a section on a drive that is treated as an independent disk and which contains a single type of file system, whereas a partition table is an index that relates those physical sections of the hard drive to partition identifications. - -In Linux, the traditional tool for managing MBR partitions (up to ~2009) in IBM PC compatible systems is fdisk. For GPT partitions (~2010 and later) we will use gdisk. Each of these tools can be invoked by typing its name followed by a device name (such as /dev/sdb). - -#### Managing MBR Partitions with fdisk #### - -We will cover fdisk first. - - # fdisk /dev/sdb - -A prompt appears asking for the next operation. If you are unsure, you can press the ‘m‘ key to display the help contents. - -![fdisk Help Menu](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-help.png) - -fdisk Help Menu - -In the above image, the most frequently used options are highlighted. At any moment, you can press ‘p‘ to display the current partition table.
- -![Check Partition Table in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Show-Partition-Table.png) - -Show Partition Table - -The Id column shows the partition type (or partition id) that has been assigned by fdisk to the partition. A partition type serves as an indicator of the file system the partition contains or, in simple words, the way data will be accessed in that partition. - -Please note that a comprehensive study of each partition type is out of the scope of this tutorial – as this series is focused on the LFCS exam, which is performance-based. - -**Some of the options used by fdisk are as follows:** - -You can list all the partition types that can be managed by fdisk by pressing the ‘l‘ option (lowercase l). - -Press ‘d‘ to delete an existing partition. If more than one partition is found in the drive, you will be asked which one should be deleted. - -Enter the corresponding number, and then press ‘w‘ (write modifications to partition table) to apply changes. - -In the following example, we will delete /dev/sdb2, and then print (p) the partition table to verify the modifications. - -![fdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-options.png) - -fdisk Command Options - -Press ‘n‘ to create a new partition, then ‘p‘ to indicate it will be a primary partition. Finally, you can accept all the default values (in which case the partition will occupy all the available space), or specify a size as follows. - -![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-New-Partition.png) - -Create New Partition - -If the partition Id that fdisk chose is not the right one for our setup, we can press ‘t‘ to change it. - -![Change Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Partition-Name.png) - -Change Partition Name - -When you’re done setting up the partitions, press ‘w‘ to commit the changes to disk.
- -![Save Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Partition-Changes.png) - -Save Partition Changes - -#### Managing GPT Partitions with gdisk #### - -In the following example, we will use /dev/sdb. - - # gdisk /dev/sdb - -We must note that gdisk can be used either to create MBR or GPT partitions. - -![Create GPT Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-GPT-Partitions.png) - -Create GPT Partitions - -The advantage of using GPT partitioning is that we can create up to 128 partitions in the same disk whose size can be up to the order of petabytes, whereas the maximum size for MBR partitions is 2 TB. - -Note that most of the options in fdisk are the same in gdisk. For that reason, we will not go into detail about them, but here’s a screenshot of the process. - -![gdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/gdisk-options.png) - -gdisk Command Options - -### Formatting Filesystems ### - -Once we have created all the necessary partitions, we must create filesystems. To find out the list of filesystems supported in your system, run. - - # ls /sbin/mk* - -![Check Filesystems Type in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Filesystems.png) - -Check Filesystems Type - -The type of filesystem that you should choose depends on your requirements. You should consider the pros and cons of each filesystem and its own set of features. Two important attributes to look for in a filesystem are. - -- Journaling support, which allows for faster data recovery in the event of a system crash. -- Security Enhanced Linux (SELinux) support, as per the project wiki, “a security enhancement to Linux which allows users and administrators more control over access control”. - -In our next example, we will create an ext4 filesystem (supports both journaling and SELinux) labeled Tecmint on /dev/sdb1, using mkfs, whose basic syntax is. 
- - # mkfs -t [filesystem] -L [label] device - or - # mkfs.[filesystem] -L [label] device - -![Create ext4 Filesystems in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystems.png) - -Create ext4 Filesystems - -### Creating and Using Swap Partitions ### - -Swap partitions are necessary if we need our Linux system to have access to virtual memory, which is a section of the hard disk designated for use as memory, when the main system memory (RAM) is all in use. For that reason, a swap partition may not be needed on systems with enough RAM to meet all its requirements; however, even in that case it’s up to the system administrator to decide whether to use a swap partition or not. - -A simple rule of thumb to decide the size of a swap partition is as follows. - -Swap should usually equal 2x physical RAM for up to 2 GB of physical RAM, and then an additional 1x physical RAM for any amount above 2 GB, but never less than 32 MB. - -So, if: - -M = Amount of RAM in GB, and S = Amount of swap in GB, then - - If M < 2 - S = M *2 - Else - S = M + 2 - -Remember this is just a formula and that only you, as a sysadmin, have the final word as to the use and size of a swap partition. - -To configure a swap partition, create a regular partition as demonstrated earlier with the desired size. Next, we need to add the following entry to the /etc/fstab file (X can be either b or c). - - /dev/sdX1 swap swap sw 0 0 - -Finally, let’s format and enable the swap partition. - - # mkswap /dev/sdX1 - # swapon -v /dev/sdX1 - -To display a snapshot of the swap partition(s). - - # cat /proc/swaps - -To disable the swap partition. - - # swapoff /dev/sdX1 - -For the next example, we’ll use /dev/sdc1 (=512 MB, for a system with 256 MB of RAM) to set up a partition with fdisk that we will use as swap, following the steps detailed above. Note that we will specify a fixed size in this case. 
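The sizing rule of thumb above translates directly into a couple of lines of shell. An illustrative sketch (integer GB values only):

```shell
# S = 2*M when M < 2 GB of RAM, otherwise S = M + 2 (result in GB).
swap_size() {
  m=$1
  if [ "$m" -lt 2 ]; then
    echo $(( m * 2 ))
  else
    echo $(( m + 2 ))
  fi
}
swap_size 1   # → 2
swap_size 8   # → 10
```

Remember the text's caveat: the formula is a starting point, and the sysadmin has the final word.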
- -![Create-Swap-Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Swap-Partition.png) - -Create Swap Partition - -![Add Swap Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Swap-Partition.png) - -Enable Swap Partition - -### Conclusion ### - -Creating partitions (including swap) and formatting filesystems are crucial in your road to Sysadminship. I hope that the tips given in this article will guide you to achieve your goals. Feel free to add your own tips & ideas in the comments section below, for the benefit of the community. -Reference Links - -- [About the LFCS][1] -- [Why get a Linux Foundation Certification?][2] -- [Register for the LFCS exam][3] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:https://training.linuxfoundation.org/certification/LFCS -[2]:https://training.linuxfoundation.org/certification/why-certify-with-us -[3]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md b/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md deleted file mode 100644 index 1544a378bc..0000000000 --- a/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md +++ /dev/null @@ -1,232 +0,0 @@ -Part 5 - LFCS: How to Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux -================================================================================ -The Linux Foundation launched 
the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is to allow individuals from all corners of the globe to get certified in basic to intermediate system administration tasks for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. - -![Linux Foundation Certified Sysadmin – Part 5](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-5.png) - -Linux Foundation Certified Sysadmin – Part 5 - -The following video shows an introduction to The Linux Foundation Certification Program. - -(Note: YouTube video) - - -This post is Part 5 of a 10-tutorial series. In this part, we will explain how to mount and unmount local and network filesystems in Linux, a skill required for the LFCS certification exam. - -### Mounting Filesystems ### - -Once a disk has been partitioned, Linux needs some way to access the data on the partitions. Unlike DOS or Windows (where this is done by assigning a drive letter to each partition), Linux uses a unified directory tree where each partition is mounted at a mount point in that tree. - -A mount point is a directory that is used as a way to access the filesystem on the partition, and mounting the filesystem is the process of associating a certain filesystem (a partition, for example) with a specific directory in the directory tree. - -In other words, the first step in managing a storage device is attaching the device to the file system tree. This task can be accomplished on a one-time basis by using tools such as mount (and then unmounted with umount) or persistently across reboots by editing the /etc/fstab file. - -The mount command (without any options or arguments) shows the currently mounted filesystems.
- - # mount - -![Check Mounted Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/check-mounted-filesystems.png) - -Check Mounted Filesystem - -In addition, mount is used to mount filesystems into the filesystem tree. Its standard syntax is as follows. - - # mount -t type device dir -o options - -This command instructs the kernel to mount the filesystem found on device (a partition, for example, that has been formatted with a filesystem type) at the directory dir, using all options. In this form, mount does not look in /etc/fstab for instructions. - -If only a directory or device is specified, for example. - - # mount /dir -o options - or - # mount device -o options - -mount tries to find a mount point and if it can’t find any, then searches for a device (both cases in the /etc/fstab file), and finally attempts to complete the mount operation (which usually succeeds, except for the case when either the directory or the device is already being used, or when the user invoking mount is not root). - -You will notice that every line in the output of mount has the following format. - - device on directory type (options) - -For example, - - /dev/mapper/debian-home on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered) - -Reads: - -/dev/mapper/debian-home is mounted on /home, which has been formatted as ext4, with the following options: rw,relatime,user_xattr,barrier=1,data=ordered - -**Mount Options** - -Most frequently used mount options include. - -- async: allows asynchronous I/O operations on the file system being mounted. - -- auto: marks the file system as enabled to be mounted automatically using mount -a. It is the opposite of noauto. - -- defaults: this option is an alias for async,auto,dev,exec,nouser,rw,suid. Note that multiple options must be separated by a comma without any spaces. If by accident you type a space between options, mount will interpret the subsequent text string as another argument.
-- loop: Mounts an image (an .iso file, for example) as a loop device. This option can be used to simulate the presence of the disk’s contents in an optical media reader. - -- noexec: prevents the execution of executable files on the particular filesystem. It is the opposite of exec. - -- nouser: prevents users other than root from mounting and unmounting the filesystem. It is the opposite of user. - -- remount: mounts the filesystem again in case it is already mounted. - -- ro: mounts the filesystem as read only. - -- rw: mounts the file system with read and write capabilities. - -- relatime: updates a file’s access time (atime) only if it is earlier than the modification time (mtime). - -- user_xattr: allows users to set and remove extended filesystem attributes. - -**Mounting a device with ro and noexec options** - - # mount -t ext4 /dev/sdg1 /mnt -o ro,noexec - -In this case we can see that attempts to write a file to, or run a binary file inside, our mount point fail with corresponding error messages. - - # touch /mnt/myfile - # /mnt/bin/echo "Hi there" - -![Mount Device in Read Write Mode](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device-Read-Write.png) - -Mount Device Read Write - -**Mounting a device with default options** - -In the following scenario, we will try to write a file to our newly mounted device and run an executable file located within its filesystem tree using the same commands as in the previous example. - - # mount -t ext4 /dev/sdg1 /mnt -o defaults - -![Mount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device.png) - -Mount Device - -In this last case, it works perfectly. - -### Unmounting Devices ### - -Unmounting a device (with the umount command) means finishing the writing of all remaining “in transit” data so that it can be safely removed. Note that if you try to remove a mounted device without properly unmounting it first, you run the risk of damaging the device itself or causing data loss.
- -That being said, in order to unmount a device, you must be “standing outside” its block device descriptor or mount point. In other words, your current working directory must be somewhere other than the mount point. Otherwise, you will get a message saying that the device is busy. - -![Unmount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Unmount-Device.png) - -Unmount Device - -An easy way to “leave” the mount point is to type the cd command, which, in the absence of arguments, takes us to our current user’s home directory, as shown above. - -### Mounting Common Networked Filesystems ### - -The two most frequently used network file systems are SMB (which stands for “Server Message Block”) and NFS (“Network File System”). Chances are you will use NFS if you need to set up a share for Unix-like clients only, and will opt for Samba if you need to share files with Windows-based clients and perhaps other Unix-like clients as well. - -Read Also - -- [Setup Samba Server in RHEL/CentOS and Fedora][1] -- [Setting up NFS (Network File System) on RHEL/CentOS/Fedora and Debian/Ubuntu][2] - -The following steps assume that Samba and NFS shares have already been set up on the server with IP 192.168.0.10 (please note that setting up an NFS share is one of the competencies required for the LFCE exam, which we will cover after the present series). - -#### Mounting a Samba share on Linux #### - -Step 1: Install the samba-client samba-common and cifs-utils packages on Red Hat and Debian based distributions. - - # yum update && yum install samba-client samba-common cifs-utils - # aptitude update && aptitude install samba-client samba-common cifs-utils - -Then run the following command to look for available Samba shares on the server. - - # smbclient -L 192.168.0.10 - -And enter the password for the root account in the remote machine.
- -![Mount Samba Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Samba-Share.png) - -Mount Samba Share - -In the above image we have highlighted the share that is ready for mounting on our local system. You will need a valid samba username and password on the remote server in order to access it. - -Step 2: When mounting a password-protected network share, it is not a good idea to write your credentials in the /etc/fstab file. Instead, you can store them in a hidden file somewhere with permissions set to 600, like so. - - # mkdir /media/samba - # echo "username=samba_username" > /media/samba/.smbcredentials - # echo "password=samba_password" >> /media/samba/.smbcredentials - # chmod 600 /media/samba/.smbcredentials - -Step 3: Then add the following line to the /etc/fstab file. - - //192.168.0.10/gacanepa /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0 - -Step 4: You can now mount your samba share, either manually (mount //192.168.0.10/gacanepa) or by rebooting your machine so as to apply the changes made in /etc/fstab permanently. - -![Mount Password Protect Samba Share](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Password-Protect-Samba-Share.png) - -Mount Password Protect Samba Share - -#### Mounting an NFS share on Linux #### - -Step 1: Install the nfs-common and portmap packages on Red Hat and Debian based distributions. - - # yum update && yum install nfs-utils nfs-utils-lib - # aptitude update && aptitude install nfs-common - -Step 2: Create a mount point for the NFS share. - - # mkdir /media/nfs - -Step 3: Add the following line to the /etc/fstab file. - -192.168.0.10:/NFS-SHARE /media/nfs nfs defaults 0 0 - -Step 4: You can now mount your nfs share, either manually (mount 192.168.0.10:/NFS-SHARE) or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.
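Step 2 above can be scripted and verified in one go. A sketch using a scratch directory instead of /media/samba (the username and password values are placeholders from the text):

```shell
# Write the credentials file and lock it down so only the owner can read it.
mkdir -p media-samba
printf 'username=%s\npassword=%s\n' 'samba_username' 'samba_password' \
    > media-samba/.smbcredentials
chmod 600 media-samba/.smbcredentials
stat -c '%a' media-samba/.smbcredentials   # prints 600
```

Keeping the secrets out of /etc/fstab matters because fstab is world-readable on most systems, while a mode-600 file is not.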
- -![Mount NFS Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-NFS-Share.png) - -Mount NFS Share - -### Mounting Filesystems Permanently ### - -As shown in the previous two examples, the /etc/fstab file controls how Linux provides access to disk partitions and removable media devices and consists of a series of lines that contain six fields each; the fields are separated by one or more spaces or tabs. A line that begins with a hash mark (#) is a comment and is ignored. - -Each line has the following format. - - <file system> <mount point> <type> <options> <dump> <pass> - -Where: - -- <file system>: The first column specifies the mount device. Most distributions now specify partitions by their labels or UUIDs. This practice can help reduce problems if partition numbers change. - -- <mount point>: The second column specifies the mount point. - -- <type>: The file system type code is the same as the type code used to mount a filesystem with the mount command. A file system type code of auto lets the kernel auto-detect the filesystem type, which can be a convenient option for removable media devices. Note that this option may not be available for all filesystems out there. - -- <options>: One (or more) mount option(s). - -- <dump>: You will most likely leave this as 0 (otherwise set it to 1) to disable the dump utility from backing up the filesystem at boot (The dump program was once a common backup tool, but it is much less popular today.) - -- <pass>: This column specifies whether the integrity of the filesystem should be checked at boot time with fsck. A 0 means that fsck should not check a filesystem. The higher the number, the lower the priority. Thus, the root partition will most likely have a value of 1, while all others that should be checked should have a value of 2. - -**Mount Examples** - -1. To mount a partition with label TECMINT at boot time with rw and noexec attributes, you should add the following line in the /etc/fstab file. - - LABEL=TECMINT /mnt ext4 rw,noexec 0 0 - -2. If you want the contents of a disk in your DVD drive to be available at boot time.
- - /dev/sr0 /media/cdrom0 iso9660 ro,user,noauto 0 0 - -Where /dev/sr0 is your DVD drive. - -### Summary ### - -You can rest assured that mounting and unmounting local and network filesystems from the command line will be part of your day-to-day responsibilities as sysadmin. You will also need to master /etc/fstab. I hope that you have found this article useful to help you with those tasks. Feel free to add your comments (or ask questions) below and to share this article through your network social profiles. -Reference Links - -- [About the LFCS][3] -- [Why get a Linux Foundation Certification?][4] -- [Register for the LFCS exam][5] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/mount-filesystem-in-linux/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/setup-samba-server-using-tdbsam-backend-on-rhel-centos-6-3-5-8-and-fedora-17-12/ -[2]:http://www.tecmint.com/how-to-setup-nfs-server-in-linux/ -[3]:https://training.linuxfoundation.org/certification/LFCS -[4]:https://training.linuxfoundation.org/certification/why-certify-with-us -[5]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md b/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md deleted file mode 100644 index fd23db110f..0000000000 --- a/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md +++ /dev/null @@ -1,276 +0,0 @@ -Part 6 - LFCS: Assembling Partitions as RAID Devices – Creating & Managing System Backups 
-================================================================================ -Recently, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification, a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of performing overall operational support on Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation, when required, to other support teams. - -![Linux Foundation Certified Sysadmin – Part 6](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-6.png) - -Linux Foundation Certified Sysadmin – Part 6 - -The following video provides an introduction to The Linux Foundation Certification Program. - -(Note: YouTube video) - - -This post is Part 6 of a 10-tutorial series. In this part, we will explain how to assemble partitions as RAID devices and how to create and manage system backups, both of which are required for the LFCS certification exam. - -### Understanding RAID ### - -The technology known as Redundant Array of Independent Disks (RAID) is a storage solution that combines multiple hard disks into a single logical unit to provide redundancy of data and/or improve performance in read / write operations to disk. - -However, the actual fault-tolerance and disk I/O performance depend on how the hard disks are set up to form the disk array. Depending on the available devices and the fault tolerance / performance needs, different RAID levels are defined. You can refer to the RAID series here in Tecmint.com for a more detailed explanation on each RAID level. - -- RAID Guide: [What is RAID, Concepts of RAID and RAID Levels Explained][1] - -Our tool of choice for creating, assembling, managing, and monitoring our software RAIDs is called mdadm (short for multiple disks admin).
- - ---------------- Debian and Derivatives ---------------- - # aptitude update && aptitude install mdadm - ----------- - - ---------------- Red Hat and CentOS based Systems ---------------- - # yum update && yum install mdadm - ----------- - - ---------------- On openSUSE ---------------- - # zypper refresh && zypper install mdadm - -#### Assembling Partitions as RAID Devices #### - -The process of assembling existing partitions as RAID devices consists of the following steps. - -**1. Create the array using mdadm** - -If one of the partitions has been formatted previously, or has been a part of another RAID array previously, you will be prompted to confirm the creation of the new array. Assuming you have taken the necessary precautions to avoid losing important data that may have resided in them, you can safely type y and press Enter. - - # mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1 - -![Creating RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Creating-RAID-Array.png) - -Creating RAID Array - -**2. Check the array creation status** - -After creating the RAID array, you can check its status using the following commands. - - # cat /proc/mdstat - or - # mdadm --detail /dev/md0 [More detailed summary] - -![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png) - -Check RAID Array Status - -**3. Format the RAID Device** - -Format the device with a filesystem as per your needs / requirements, as explained in [Part 4][2] of this series. - -**4. Monitor RAID Array Service** - -Instruct the monitoring service to “keep an eye” on the array. Add the output of mdadm --detail --scan to /etc/mdadm/mdadm.conf (Debian and derivatives) or /etc/mdadm.conf (CentOS / openSUSE), like so.
-
- # mdadm --detail --scan
-
-![Monitor RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Monitor-RAID-Array.png)
-
-Monitor RAID Array
-
- # mdadm --assemble --scan 	[Assemble the array]
-
-To ensure the service starts on system boot, run the following commands as root.
-
-**Debian and Derivatives**
-
-On Debian and derivatives the service should start on boot by default, but you can make sure with:
-
- # update-rc.d mdadm defaults
-
-Edit the /etc/default/mdadm file and add the following line.
-
- AUTOSTART=true
-
-**On CentOS and openSUSE (systemd-based)**
-
- # systemctl start mdmonitor
- # systemctl enable mdmonitor
-
-**On CentOS and openSUSE (SysVinit-based)**
-
- # service mdmonitor start
- # chkconfig mdmonitor on
-
-**5. Check RAID Disk Failure**
-
-In RAID levels that support redundancy, replace failed drives when needed. When a device in the disk array becomes faulty, a rebuild automatically starts only if there was a spare device added when we first created the array.
-
-![Check RAID Faulty Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Faulty-Disk.png)
-
-Check RAID Faulty Disk
-
-Otherwise, we need to manually attach an extra physical drive to our system and run.
-
- # mdadm /dev/md0 --add /dev/sdX1
-
-Where /dev/md0 is the array that experienced the issue and /dev/sdX1 is the new device.
-
-**6. Disassemble a working array**
-
-You may have to do this if you need to create a new array using the devices – (Optional Step).
-
- # mdadm --stop /dev/md0 				#  Stop the array
- # mdadm --remove /dev/md0 			# Remove the RAID device
- # mdadm --zero-superblock /dev/sdX1 	# Overwrite the existing md superblock with zeroes
-
-**7. Set up mail alerts**
-
-You can configure a valid email address or system account to send alerts to (make sure you have this line in mdadm.conf). – (Optional Step)
-
- MAILADDR root
-
-In this case, all alerts that the RAID monitoring daemon collects will be sent to the local root account’s mail box.
One such alert looks like the following.
-
-**Note**: This event is related to the example in STEP 5, where a device was marked as faulty and the spare device was automatically built into the array by mdadm. Thus, we “ran out” of healthy spare devices and we got the alert.
-
-![RAID Monitoring Alerts](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Monitoring-Alerts.png)
-
-RAID Monitoring Alerts
-
-#### Understanding RAID Levels ####
-
-**RAID 0**
-
-The total array size is n times the size of the smallest partition, where n is the number of independent disks in the array (you will need at least two drives). Run the following command to assemble a RAID 0 array using partitions /dev/sdb1 and /dev/sdc1.
-
- # mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
-
-Common uses: Setups that support real-time applications where performance is more important than fault-tolerance.
-
-**RAID 1 (aka Mirroring)**
-
-The total array size equals the size of the smallest partition (you will need at least two drives). Run the following command to assemble a RAID 1 array using partitions /dev/sdb1 and /dev/sdc1.
-
- # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
-
-Common uses: Installation of the operating system or important subdirectories, such as /home.
-
-**RAID 5 (aka drives with Parity)**
-
-The total array size will be (n – 1) times the size of the smallest partition. The “lost” space in (n-1) is used for parity (redundancy) calculation (you will need at least three drives).
-
-Note that you can specify a spare device (/dev/sde1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 5 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 as spare.
-
- # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1
-
-Common uses: Web and file servers.
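The size formulas for the levels described so far (RAID 0, 1 and 5) can be sanity-checked with a small shell sketch. This is a hypothetical helper for illustration, not part of mdadm; sizes are in GB, n is the number of member disks, and s is the size of the smallest member:

```shell
#!/bin/sh
# Hypothetical helper: usable capacity of an array, given the RAID level,
# the number of member disks (n) and the smallest member size (s, in GB).
raid_capacity() {
    level=$1 n=$2 s=$3
    case $level in
        0) echo $(( n * s )) ;;        # striping: n times the smallest member
        1) echo "$s" ;;                # mirroring: size of the smallest member
        5) echo $(( (n - 1) * s )) ;;  # one member's worth of space holds parity
        *) echo "unsupported level" >&2; return 1 ;;
    esac
}

# Three 500 GB disks in RAID 5 leave two disks' worth of usable space:
raid_capacity 5 3 500   # prints 1000
```

The disk sizes used in the call are made-up examples; substitute your own member sizes to estimate usable capacity before creating an array.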
-
-**RAID 6 (aka drives with double Parity)**
-
-The total array size will be (n*s)-2*s, where n is the number of independent disks in the array and s is the size of the smallest disk. Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs.
-
-Run the following command to assemble a RAID 6 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, and /dev/sdf1 as spare.
-
- # mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1
-
-Common uses: File and backup servers with large capacity and high availability requirements.
-
-**RAID 1+0 (aka stripe of mirrors)**
-
-The total array size is computed based on the formulas for RAID 0 and RAID 1, since RAID 1+0 is a combination of both. First, calculate the size of each mirror and then the size of the stripe.
-
-Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 1+0 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, and /dev/sdf1 as spare.
-
- # mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1
-
-Common uses: Database and application servers that require fast I/O operations.
-
-#### Creating and Managing System Backups ####
-
-It never hurts to remember that RAID with all its bounties IS NOT A REPLACEMENT FOR BACKUPS! Write it 1000 times on the chalkboard if you need to, but make sure you keep that idea in mind at all times. Before we begin, we must note that there is no one-size-fits-all solution for system backups, but here are some things that you do need to take into account while planning a backup strategy.
-
-- What do you use your system for? (Desktop or server? If the latter case applies, what are the most critical services – whose configuration would be a real pain to lose?)
-- How often do you need to take backups of your system? -- What is the data (e.g. files / directories / database dumps) that you want to backup? You may also want to consider if you really need to backup huge files (such as audio or video files). -- Where (meaning physical place and media) will those backups be stored? - -**Backing Up Your Data** - -Method 1: Backup entire drives with dd command. You can either back up an entire hard disk or a partition by creating an exact image at any point in time. Note that this works best when the device is offline, meaning it’s not mounted and there are no processes accessing it for I/O operations. - -The downside of this backup approach is that the image will have the same size as the disk or partition, even when the actual data occupies a small percentage of it. For example, if you want to image a partition of 20 GB that is only 10% full, the image file will still be 20 GB in size. In other words, it’s not only the actual data that gets backed up, but the entire partition itself. You may consider using this method if you need exact backups of your devices. - -**Creating an image file out of an existing device** - - # dd if=/dev/sda of=/system_images/sda.img - OR - --------------------- Alternatively, you can compress the image file --------------------- - # dd if=/dev/sda | gzip -c > /system_images/sda.img.gz - -**Restoring the backup from the image file** - - # dd if=/system_images/sda.img of=/dev/sda - OR - - --------------------- Depending on your choice while creating the image --------------------- - gzip -dc /system_images/sda.img.gz | dd of=/dev/sda - -Method 2: Backup certain files / directories with tar command – already covered in [Part 3][3] of this series. You may consider using this method if you need to keep copies of specific files and directories (configuration files, users’ home directories, and so on). - -Method 3: Synchronize files with rsync command. 
Rsync is a versatile remote (and local) file-copying tool. If you need to back up and synchronize your files to/from network drives, rsync is the way to go.
-
-Whether you’re synchronizing two local directories or local <—> remote directories mounted on the local filesystem, the basic syntax is the same.
-
-**Synchronizing two local directories or local <—> remote directories mounted on the local filesystem**
-
- # rsync -av source_directory destination_directory
-
-Where -a means recurse into subdirectories (if they exist) and preserve symbolic links, timestamps, permissions, and original owner / group, and -v means verbose.
-
-![rsync Synchronizing Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronizing-Files.png)
-
-rsync Synchronizing Files
-
-In addition, if you want to increase the security of the data transfer over the wire, you can use ssh over rsync.
-
-**Synchronizing local → remote directories over ssh**
-
- # rsync -avzhe ssh backups root@remote_host:/remote_directory/
-
-This example will synchronize the backups directory on the local host with the contents of /remote_directory on the remote host.
-
-Where the -h option shows file sizes in human-readable format, and the -e flag is used to indicate an ssh connection.
-
-![rsync Synchronize Remote Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronize-Remote-Files.png)
-
-rsync Synchronize Remote Files
-
-**Synchronizing remote → local directories over ssh**
-
-In this case, switch the source and destination directories from the previous example.
-
- # rsync -avzhe ssh root@remote_host:/remote_directory/ backups
-
-Please note that these are only 3 examples (the most frequent cases you’re likely to run into) of the use of rsync. More examples and uses of rsync commands can be found in the following article.
-
-- Read Also: [10 rsync Commands to Sync Files in Linux][4]
-
-### Summary ###
-
-As a sysadmin, you need to ensure that your systems perform as well as possible.
If you’re well prepared, and if the integrity of your data is well supported by a storage technology such as RAID and regular system backups, you’ll be safe. - -If you have questions, comments, or further ideas on how this article can be improved, feel free to speak out below. In addition, please consider sharing this series through your social network profiles. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-partitions-and-filesystems-in-linux/ -[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/ -[4]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md index abf09ee523..3a2dfa844a 100644 --- a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md +++ b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md @@ -1,3 +1,5 @@ +Translating by Flowsnow + Part 7 - LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart) ================================================================================ A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is allowing individuals from all ends of the world 
to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams.
@@ -364,4 +366,4 @@ via: http://www.tecmint.com/linux-boot-process-and-manage-services/
[3]:http://www.tecmint.com/chkconfig-command-examples/
[4]:http://www.tecmint.com/remove-unwanted-services-from-linux/
[5]:http://www.tecmint.com/chkconfig-command-examples/
-[6]:http://upstart.ubuntu.com/cookbook/ \ No newline at end of file
+[6]:http://upstart.ubuntu.com/cookbook/
diff --git a/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md b/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md
deleted file mode 100644
index 2cec4de4ae..0000000000
--- a/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md
+++ /dev/null
@@ -1,330 +0,0 @@
-Part 8 - LFCS: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts
-================================================================================
-Last August, the Linux Foundation started the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is to allow individuals everywhere and anywhere to take an exam in order to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus intelligent decision-making to be able to decide when it’s necessary to escalate issues to higher level support teams.
-
-![Linux Users and Groups Management](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-8.png)
-
-Linux Foundation Certified Sysadmin – Part 8
-
-Please have a quick look at the following video that describes an introduction to the Linux Foundation Certification Program.
-
-注:youtube视频
-
-
-This article is Part 8 of a 10-tutorial series. In this section, we will guide you through managing user and group permissions on a Linux system, as required for the LFCS certification exam.
-
-Since Linux is a multi-user operating system (in that it allows multiple users on different computers or terminals to access a single system), you will need to know how to perform effective user management: how to add, edit, suspend, or delete user accounts, along with granting them the necessary permissions to do their assigned tasks.
-
-### Adding User Accounts ###
-
-To add a new user account, you can run either of the following two commands as root.
-
- # adduser [new_account]
- # useradd [new_account]
-
-When a new user account is added to the system, the following operations are performed.
-
-1. His/her home directory is created (/home/username by default).
-
-2. The following hidden files are copied into the user’s home directory, and will be used to provide environment variables for his/her user session.
-
- .bash_logout
- .bash_profile
- .bashrc
-
-3. A mail spool is created for the user at /var/spool/mail/username.
-
-4. A group is created and given the same name as the new user account.
-
-**Understanding /etc/passwd**
-
-The full account information is stored in the /etc/passwd file. This file contains a record per system user account and has the following format (fields are delimited by a colon).
-
- [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell]
-
-- Fields [username] and [Comment] are self-explanatory.
-- The x in the second field indicates that the account is protected by a shadowed password (in /etc/shadow), which is needed to log on as [username].
-- The [UID] and [GID] fields are integers that represent the User IDentification and the primary Group IDentification to which [username] belongs, respectively.
-- The [Home directory] indicates the absolute path to [username]’s home directory, and
-- The [Default shell] is the shell that will be made available to this user when he or she logs in to the system.
-
-**Understanding /etc/group**
-
-Group information is stored in the /etc/group file. Each record has the following format.
-
- [Group name]:[Group password]:[GID]:[Group members]
-
-- [Group name] is the name of the group.
-- An x in [Group password] indicates group passwords are not being used.
-- [GID]: same as in /etc/passwd.
-- [Group members]: a comma separated list of users who are members of [Group name].
-
-![Add User Accounts in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-user-accounts.png)
-
-Add User Accounts
-
-After adding an account, you can edit the following information (to name a few fields) using the usermod command, whose basic syntax is as follows.
-
- # usermod [options] [username]
-
-**Setting the expiry date for an account**
-
-Use the --expiredate flag followed by a date in YYYY-MM-DD format.
-
- # usermod --expiredate 2014-10-30 tecmint
-
-**Adding the user to supplementary groups**
-
-Use the combined -aG, or --append --groups options, followed by a comma separated list of groups.
-
- # usermod --append --groups root,users tecmint
-
-**Changing the default location of the user’s home directory**
-
-Use the -d, or --home options, followed by the absolute path to the new home directory.
-
- # usermod --home /tmp tecmint
-
-**Changing the shell the user will use by default**
-
-Use --shell, followed by the path to the new shell.
-
- # usermod --shell /bin/sh tecmint
-
-**Displaying the groups a user is a member of**
-
- # groups tecmint
- # id tecmint
-
-Now let’s execute all the above commands in one go.
-
- # usermod --expiredate 2014-10-30 --append --groups root,users --home /tmp --shell /bin/sh tecmint
-
-![usermod Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/usermod-command-examples.png)
-
-usermod Command Examples
-
-Read Also:
-
-- [15 useradd Command Examples in Linux][1]
-- [15 usermod Command Examples in Linux][2]
-
-For existing accounts, we can also do the following.
-
-**Disabling an account by locking its password**
-
-Use the -L (uppercase L) or the --lock option to lock a user’s password.
-
- # usermod --lock tecmint
-
-**Unlocking a user’s password**
-
-Use the -u or the --unlock option to unlock a user’s password that was previously locked.
-
- # usermod --unlock tecmint
-
-![Lock User in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/lock-user-in-linux.png)
-
-Lock User Accounts
-
-**Creating a new group for read and write access to files that need to be accessed by several users**
-
-Run the following series of commands to achieve the goal.
-
- # groupadd common_group # Add a new group
- # chown :common_group common.txt # Change the group owner of common.txt to common_group
- # usermod -aG common_group user1 # Add user1 to common_group
- # usermod -aG common_group user2 # Add user2 to common_group
- # usermod -aG common_group user3 # Add user3 to common_group
-
-**Deleting a group**
-
-You can delete a group with the following command.
-
- # groupdel [group_name]
-
-If there are files owned by group_name, they will not be deleted, but the group owner will be set to the GID of the group that was deleted.
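Since /etc/passwd and /etc/group are plain colon-delimited text files, their fields are easy to pull apart with standard tools. A small sketch using a sample record (the account shown is made up for illustration; on a live system you would read the real files):

```shell
#!/bin/sh
# Sample /etc/passwd record (hypothetical account, for illustration only).
record='tecmint:x:504:504:Tecmint user:/home/tecmint:/bin/bash'

# Extract individual fields with cut: username (1), UID (3), shell (7).
username=$(echo "$record" | cut -d: -f1)
uid=$(echo "$record" | cut -d: -f3)
shell=$(echo "$record" | cut -d: -f7)
echo "$username has UID $uid and logs in with $shell"

# Against the real file you could use awk instead, e.g. to list accounts
# whose UID is at or above the distribution's regular-user threshold:
#   awk -F: '$3 >= 500 { print $1 }' /etc/passwd
```

The UID threshold in the comment (500) is distribution-dependent; Debian-based systems typically start regular users at 1000.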
-
-### Linux File Permissions ###
-
-Besides the basic read, write, and execute permissions that we discussed in [Setting File Attributes – Part 3][3] of this series, there are other less used (but not less important) permission settings, sometimes referred to as “special permissions”.
-
-Like the basic permissions discussed earlier, they are set using an octal number or through a letter (symbolic notation) that indicates the type of permission.
-
-**Deleting user accounts**
-
-You can delete an account (along with its home directory, if it’s owned by the user, and all the files residing therein, and also the mail spool) using the userdel command with the --remove option.
-
- # userdel --remove [username]
-
-#### Group Management ####
-
-Every time a new user account is added to the system, a group with the same name is created with the username as its only member. Other users can be added to the group later. One of the purposes of groups is to implement a simple access control to files and other system resources by setting the right permissions on those resources.
-
-For example, suppose you have the following users.
-
-- user1 (primary group: user1)
-- user2 (primary group: user2)
-- user3 (primary group: user3)
-
-All of them need read and write access to a file called common.txt located somewhere on your local system, or maybe on a network share that user1 has created. You may be tempted to do something like,
-
- # chmod 660 common.txt
- OR
- # chmod u=rw,g=rw,o= common.txt [notice the space between the last equal sign and the file name]
-
-However, this will only provide read and write access to the owner of the file and to those users who are members of the group owner of the file (user1 in this case). Again, you may be tempted to add user2 and user3 to group user1, but that will also give them access to the rest of the files owned by user user1 and group user1.
-
-This is where groups come in handy, and here’s what you should do in a case like this.
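The commands for that workflow appeared earlier (groupadd, chown :group, usermod -aG). The permission step can be tried safely on a scratch file — a minimal sketch, assuming GNU stat; the group and ownership commands require root, so they are only shown as comments:

```shell
#!/bin/sh
# Shared-group workflow sketch. Steps needing root are shown as comments:
#   groupadd common_group              # create the shared group
#   chown :common_group common.txt     # hand the file to that group
#   usermod -aG common_group user2     # one usermod per additional member
#   usermod -aG common_group user3

# The permission step itself can be rehearsed on any scratch file:
file=$(mktemp)
chmod 660 "$file"       # rw for owner and group, nothing for others
stat -c '%a' "$file"    # prints 660 (GNU stat; BSD stat uses -f instead)
rm -f "$file"
```

With the group set as owner and mode 660, every member of common_group can read and write the file, without gaining access to anything else user1 owns.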
-
-**Understanding Setuid**
-
-When the setuid permission is applied to an executable file, a user running the program inherits the effective privileges of the program’s owner. Since this approach can reasonably raise security concerns, the number of files with setuid permission must be kept to a minimum. You will likely find programs with this permission set when a system user needs to access a file owned by root.
-
-Summing up, it isn’t just that the user can execute the binary file, but also that he can do so with root’s privileges. For example, let’s check the permissions of /bin/passwd. This binary is used to change the password of an account, and modifies the /etc/shadow file. The superuser can change anyone’s password, but all other users should only be able to change their own.
-
-![passwd Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/passwd-command.png)
-
-passwd Command Examples
-
-Thus, any user should have permission to run /bin/passwd, but only root will be able to specify an account. Other users can only change their corresponding passwords.
-
-![Change User Password in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/change-user-password.png)
-
-Change User Password
-
-**Understanding Setgid**
-
-When the setgid bit is set, the effective GID of the real user becomes that of the group owner. Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory. You will most likely use this approach whenever members of a certain group need access to all the files in a directory, regardless of the file owner’s primary group.
-
- # chmod g+s [filename]
-
-To set the setgid in octal form, prepend the number 2 to the current (or desired) basic permissions.
-
- # chmod 2755 [directory]
-
-**Setting the SETGID in a directory**
-
-![Add Setgid in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-setgid-to-directory.png)
-
-Add Setgid to Directory
-
-**Understanding Sticky Bit**
-
-When the “sticky bit” is set on files, Linux just ignores it, whereas for directories it has the effect of preventing users from deleting or even renaming the files it contains unless the user owns the directory, the file, or is root.
-
- # chmod o+t [directory]
-
-To set the sticky bit in octal form, prepend the number 1 to the current (or desired) basic permissions.
-
- # chmod 1755 [directory]
-
-Without the sticky bit, anyone able to write to the directory can delete or rename files. For that reason, the sticky bit is commonly found on directories, such as /tmp, that are world-writable.
-
-![Add Stickybit in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-sticky-bit-to-directory.png)
-
-Add Stickybit to Directory
-
-### Special Linux File Attributes ###
-
-There are other attributes that enable further limits on the operations that are allowed on files. For example, prevent the file from being renamed, moved, deleted, or even modified. They are set with the [chattr command][4] and can be viewed using the lsattr tool, as follows.
-
- # chattr +i file1
- # chattr +a file2
-
-After executing those two commands, file1 will be immutable (which means it cannot be moved, renamed, modified or deleted) whereas file2 will enter append-only mode (it can only be opened in append mode for writing).
-
-![Protect File from Deletion](http://www.tecmint.com/wp-content/uploads/2014/10/chattr-command.png)
-
-Chattr Command to Protect Files
-
-### Accessing the root Account and Using sudo ###
-
-One of the ways users can gain access to the root account is by typing.
-
- $ su
-
-and then entering root’s password.
-
-If authentication succeeds, you will be logged on as root, with the same current working directory as before.
If you want to be placed in root’s home directory instead, run.
-
- $ su -
-
-and then enter root’s password.
-
-![Enable sudo Access on Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Sudo-Access.png)
-
-Enable Sudo Access on Users
-
-The above procedure requires that a normal user knows root’s password, which poses a serious security risk. For that reason, the sysadmin can configure the sudo command to allow an ordinary user to execute commands as a different user (usually the superuser) in a very controlled and limited way. Thus, restrictions can be set on a user so as to enable him to run one or more specific privileged commands and no others.
-
-- Read Also: [Difference Between su and sudo User][5]
-
-To authenticate using sudo, the user uses his/her own password. After entering the command, we will be prompted for our password (not the superuser’s) and if the authentication succeeds (and if the user has been granted privileges to run the command), the specified command is carried out.
-
-To grant access to sudo, the system administrator must edit the /etc/sudoers file. It is recommended that this file be edited using the visudo command instead of opening it directly with a text editor.
-
- # visudo
-
-This opens the /etc/sudoers file using vim (you can follow the instructions given in [Install and Use vim as Editor – Part 2][6] of this series to edit the file).
-
-These are the most relevant lines.
-
- Defaults	secure_path="/usr/sbin:/usr/bin:/sbin"
- root 		ALL=(ALL) ALL
- tecmint 	ALL=/bin/yum update
- gacanepa 	ALL=NOPASSWD:/bin/updatedb
- %admin 	ALL=(ALL) ALL
-
-Let’s take a closer look at them.
-
- Defaults	secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"
-
-This line lets you specify the directories that will be used for sudo, and is used to prevent using user-specific directories, which can harm the system.
-
-The next lines are used to specify permissions.
-
- root ALL=(ALL) ALL
-
-- The first ALL keyword indicates that this rule applies to all hosts.
-- The second ALL indicates that the user in the first column can run commands with the privileges of any user.
-- The third ALL means any command can be run.
-
- tecmint ALL=/bin/yum update
-
-If no user is specified after the = sign, sudo assumes the root user. In this case, user tecmint will be able to run yum update as root.
-
- gacanepa ALL=NOPASSWD:/bin/updatedb
-
-The NOPASSWD directive allows user gacanepa to run /bin/updatedb without needing to enter his password.
-
- %admin ALL=(ALL) ALL
-
-The % sign indicates that this line applies to a group called “admin”. The meaning of the rest of the line is identical to that of a regular user line. This means that members of the group “admin” can run all commands as any user on all hosts.
-
-To see what privileges are granted to you by sudo, use the “-l” option to list them.
-
-![Sudo Access Rules](http://www.tecmint.com/wp-content/uploads/2014/10/sudo-access-rules.png)
-
-Sudo Access Rules
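Because /etc/sudoers is plain text, rules of interest are easy to audit with standard tools. A sketch that pulls the user out of every NOPASSWD rule, run here against a sample of the lines above (on a live system you would read the real file as root instead):

```shell
#!/bin/sh
# Sample sudoers rules, copied from the examples discussed above.
rules='root ALL=(ALL) ALL
tecmint ALL=/bin/yum update
gacanepa ALL=NOPASSWD:/bin/updatedb
%admin ALL=(ALL) ALL'

# Print the user (first field) of every rule carrying a NOPASSWD directive.
echo "$rules" | awk '/NOPASSWD:/ { print $1 }'   # prints gacanepa
```

Auditing NOPASSWD entries periodically is worthwhile, since each one is a command a user can run as root without any authentication.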
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/manage-users-and-groups-in-linux/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/add-users-in-linux/ -[2]:http://www.tecmint.com/usermod-command-examples/ -[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/ -[4]:http://www.tecmint.com/chattr-command-examples/ -[5]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/ -[6]:http://www.tecmint.com/vi-editor-usage/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md deleted file mode 100644 index 6d0f65223f..0000000000 --- a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md +++ /dev/null @@ -1,229 +0,0 @@ -Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper -================================================================================ -Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including finally issue escalation, when needed, to engineering support teams. 
-
-![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png)
-
-Linux Foundation Certified Sysadmin – Part 9
-
-Watch the following video that explains about the Linux Foundation Certification Program.
-
-注:youtube 视频
-
-
-This article is Part 9 of a 10-tutorial series. In it, we will guide you through Linux package management, as required for the LFCS certification exam.
-
-### Package Management ###
-
-In a few words, package management is a method of installing and maintaining (which includes updating and probably removing as well) software on the system.
-
-In the early days of Linux, programs were only distributed as source code, along with the required man pages, the necessary configuration files, and more. Nowadays, most Linux distributions use by default pre-built programs or sets of programs called packages, which are presented to users ready for installation on that distribution. However, one of the wonders of Linux is still the possibility to obtain the source code of a program to be studied, improved, and compiled.
-
-**How package management systems work**
-
-If a certain package requires a certain resource such as a shared library, or another package, it is said to have a dependency. All modern package management systems provide some method of dependency resolution to ensure that when a package is installed, all of its dependencies are installed as well.
-
-**Packaging Systems**
-
-Almost all the software that is installed on a modern Linux system will be found on the Internet. It can either be provided by the distribution vendor through central repositories (which can contain several thousands of packages, each of which has been specifically built, tested, and maintained for the distribution) or be available in source code that can be downloaded and installed manually.
- -Because different distribution families use different packaging systems (Debian: *.deb / CentOS: *.rpm / openSUSE: *.rpm built specially for openSUSE), a package intended for one distribution will not be compatible with another distribution. However, most distributions are likely to fall into one of the three distribution families covered by the LFCS certification. - -**High and low-level package tools** - -In order to perform the task of package management effectively, you need to be aware that you will have two types of available utilities: low-level tools (which handle in the backend the actual installation, upgrade, and removal of package files), and high-level tools (which are in charge of ensuring that the tasks of dependency resolution and metadata searching -”data about the data”- are performed). - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
DISTRIBUTIONLOW-LEVEL TOOLHIGH-LEVEL TOOL
 Debian and derivatives dpkg apt-get / aptitude
 CentOS rpm yum
 openSUSE rpm zypper
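The mapping in the table above can be sketched as a small shell helper. This is a hypothetical illustration (not part of the LFCS material): the `pkg_tools` function name and the distribution identifiers it accepts are assumptions made for this example.

```shell
#!/bin/sh
# Hypothetical helper mirroring the table above: given a distribution
# family name, print its low-level and high-level package tools.
pkg_tools() {
    case "$1" in
        debian|ubuntu)  echo "low=dpkg high=apt-get/aptitude" ;;
        centos|rhel)    echo "low=rpm high=yum" ;;
        opensuse|suse)  echo "low=rpm high=zypper" ;;
        *)              echo "unknown distribution: $1" >&2; return 1 ;;
    esac
}

pkg_tools debian    # → low=dpkg high=apt-get/aptitude
pkg_tools centos    # → low=rpm high=yum
pkg_tools opensuse  # → low=rpm high=zypper
```

On a real system you would consult /etc/os-release rather than pass the family name by hand; the point here is simply that each family pairs one low-level tool with one high-level front end.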
-
-Let us see the description of the low-level and high-level tools.
-
-dpkg is a low-level package manager for Debian-based systems. It can install, remove, provide information about, and build *.deb packages, but it can’t automatically download and install their corresponding dependencies.
-
-- Read More: [15 dpkg Command Examples][1]
-
-apt-get is a high-level package manager for Debian and derivatives, and provides a simple way to retrieve and install packages, including dependency resolution, from multiple sources using the command line. Unlike dpkg, apt-get does not work directly with *.deb files, but with the package’s proper name.
-
-- Read More: [25 apt-get Command Examples][2]
-
-aptitude is another high-level package manager for Debian-based systems, and can be used to perform management tasks (installing, upgrading, and removing packages, also handling dependency resolution automatically) in a fast and easy way. It provides the same functionality as apt-get plus additional features, such as offering access to several versions of a package.
-
-rpm is the package management system used by Linux Standard Base (LSB)-compliant distributions for low-level handling of packages. Just like dpkg, it can query, install, verify, upgrade, and remove packages, and is more frequently used by Red Hat-based distributions, such as RHEL and CentOS.
-
-- Read More: [20 rpm Command Examples][3]
-
-yum adds the functionality of automatic updates and package management with dependency resolution to RPM-based systems. As a high-level tool, like apt-get or aptitude, yum works with repositories.
-
-- Read More: [20 yum Command Examples][4]
-
-### Common Usage of Low-Level Tools ###
-
-The most frequent tasks that you will do with low-level tools are as follows:
-
-**1. Installing a package from a compiled (*.deb or *.rpm) file**
-
-The downside of this installation method is that no dependency resolution is provided.
You will most likely choose to install a package from a compiled file when such package is not available in the distribution’s repositories and therefore cannot be downloaded and installed through a high-level tool. Since low-level tools do not perform dependency resolution, they will exit with an error if we try to install a package with unmet dependencies. - - # dpkg -i file.deb [Debian and derivative] - # rpm -i file.rpm [CentOS / openSUSE] - -**Note**: Do not attempt to install on CentOS a *.rpm file that was built for openSUSE, or vice-versa! - -**2. Upgrading a package from a compiled file** - -Again, you will only upgrade an installed package manually when it is not available in the central repositories. - - # dpkg -i file.deb [Debian and derivative] - # rpm -U file.rpm [CentOS / openSUSE] - -**3. Listing installed packages** - -When you first get your hands on an already working system, chances are you’ll want to know what packages are installed. - - # dpkg -l [Debian and derivative] - # rpm -qa [CentOS / openSUSE] - -If you want to know whether a specific package is installed, you can pipe the output of the above commands to grep, as explained in [manipulate files in Linux – Part 1][6] of this series. Suppose we need to verify if package mysql-common is installed on an Ubuntu system. - - # dpkg -l | grep mysql-common - -![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png) - -Check Installed Packages - -Another way to determine if a package is installed. - - # dpkg --status package_name [Debian and derivative] - # rpm -q package_name [CentOS / openSUSE] - -For example, let’s find out whether package sysdig is installed on our system. - - # rpm -qa | grep sysdig - -![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png) - -Check sysdig Package - -**4. 
Finding out which package installed a file**
-
-    # dpkg --search file_name
-    # rpm -qf file_name
-
-For example, which package installed pw_dict.hwm?
-
-    # rpm -qf /usr/share/cracklib/pw_dict.hwm
-
-![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png)
-
-Query File in Linux
-
-### Common Usage of High-Level Tools ###
-
-The most frequent tasks that you will do with high-level tools are as follows.
-
-**1. Searching for a package**
-
-aptitude update will update the list of available packages, and aptitude search will perform the actual search for package_name.
-
-    # aptitude update && aptitude search package_name
-
-With the search all option, yum will search for package_name not only in package names, but also in package descriptions.
-
-    # yum search package_name
-    # yum search all package_name
-    # yum whatprovides "*/package_name"
-
-Let’s suppose we need a file whose name is sysdig. To find out which package we will have to install, let’s run:
-
-    # yum whatprovides "*/sysdig"
-
-![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png)
-
-Check Package Description
-
-whatprovides tells yum to search for the package that will provide a file matching the above regular expression.
-
-    # zypper refresh && zypper search package_name		[On openSUSE]
-
-**2. Installing a package from a repository**
-
-While installing a package, you may be prompted to confirm the installation after the package manager has resolved all dependencies. Note that running update or refresh (depending on the package manager being used) is not strictly necessary, but keeping installed packages up to date is good sysadmin practice for security and dependency reasons.
-
-    # aptitude update && aptitude install package_name 	[Debian and derivatives]
-    # yum update && yum install package_name		[CentOS]
-    # zypper refresh && zypper install package_name	[openSUSE]
-
-**3. 
Removing a package**
-
-The option remove will uninstall the package but leave configuration files intact, whereas purge will erase every trace of the program from your system.
-
-    # aptitude remove / purge package_name
-    # yum erase package_name
-
-Notice the minus sign in front of the package that will be uninstalled on openSUSE:
-
-    # zypper remove -package_name
-
-Most (if not all) package managers will prompt you, by default, if you’re sure about proceeding with the uninstallation before actually performing it. So read the onscreen messages carefully to avoid running into unnecessary trouble!
-
-**4. Displaying information about a package**
-
-The following command will display information about the birthday package.
-
-    # aptitude show birthday
-    # yum info birthday
-    # zypper info birthday
-
-![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png)
-
-Check Package Information
-
-### Summary ###
-
-Package management is something you just can’t sweep under the rug as a system administrator. You should be prepared to use the tools described in this article at a moment’s notice. We hope you find it useful in your preparation for the LFCS exam and for your daily tasks. Feel free to leave your comments or questions below. We will be more than glad to get back to you as soon as possible.
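As a recap, the high-level commands covered above differ only in the tool name and verb. The sketch below is purely illustrative (the `pkg_cmd` helper is an assumption, not a real utility): it composes, but does not execute, the command line you would run on each family.

```shell
#!/bin/sh
# Hypothetical dispatcher: print the high-level command for a given
# tool/action/package, mirroring the install and remove sections above.
pkg_cmd() {
    tool=$1 action=$2 name=$3
    case "$tool:$action" in
        aptitude:install) echo "aptitude update && aptitude install $name" ;;
        aptitude:remove)  echo "aptitude remove $name" ;;
        aptitude:purge)   echo "aptitude purge $name" ;;
        yum:install)      echo "yum update && yum install $name" ;;
        yum:remove)       echo "yum erase $name" ;;
        zypper:install)   echo "zypper refresh && zypper install $name" ;;
        zypper:remove)    echo "zypper remove $name" ;;
        *) echo "unsupported: $tool $action" >&2; return 1 ;;
    esac
}

pkg_cmd yum install birthday    # → yum update && yum install birthday
pkg_cmd aptitude purge birthday # → aptitude purge birthday
```

Printing the command first is a convenient way to double-check what a script would do before wiring it up to actually run the package manager.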
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-package-management/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/dpkg-command-examples/ -[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ -[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ -[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ \ No newline at end of file diff --git a/sources/tech/Learn with Linux/Learn with Linux--Learning Music.md b/sources/tech/Learn with Linux/Learn with Linux--Learning Music.md deleted file mode 100644 index e6467eb810..0000000000 --- a/sources/tech/Learn with Linux/Learn with Linux--Learning Music.md +++ /dev/null @@ -1,155 +0,0 @@ -Learn with Linux: Learning Music -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-featured.png) - -This article is part of the [Learn with Linux][1] series: - -- [Learn with Linux: Learning to Type][2] -- [Learn with Linux: Physics Simulation][3] -- [Learn with Linux: Learning Music][4] -- [Learn with Linux: Two Geography Apps][5] -- [Learn with Linux: Master Your Math with These Linux Apps][6] - -Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software. - -Learning music is a great pastime. 
Training your ears to identify scales and chords and mastering an instrument or your own voice requires lots of practice and can be difficult. Music theory is extensive. There is much to memorize, and to turn it into a “skill” you will need diligence. Linux offers exceptional software to help you along your musical journey. These tools will not help you become a professional musician instantly but can ease the process of learning, being a great aid and reference point.
-
-### GNU Solfège ###
-
-[Solfège][7] is a popular music education method that is used in all levels of music education all around the world. Many popular methods (like the Kodály method) use Solfège as their basis. GNU Solfège is a great piece of software aimed more at practising Solfège than learning it. It assumes the student has already acquired the basics and wishes to practise what they have learned.
-
-As the developer states on the GNU website:
-
-> “When you study music on high school, college, music conservatory, you usually have to do ear training. Some of the exercises, like sight singing, is easy to do alone [sic]. But often you have to be at least two people, one making questions, the other answering. […] GNU Solfège tries to help out with this. With Solfege you can practise the more simple and mechanical exercises without the need to get others to help you. Just don’t forget that this program only touches a part of the subject.”
-
-The software delivers on its promise; you can practise essentially everything with audible and visual aids.
-
-GNU Solfège is in the Debian (therefore Ubuntu) repositories. To get it just type the following command into a terminal:
-
-    sudo apt-get install solfege
-
-When it loads, you find yourself on a simple starting screen.
-
-![learnmusic-solfege-main](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-main.png)
-
-The number of options is almost overwhelming. 
Most of the links will open sub-categories
-
-![learnmusic-solfege-scales](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-scales.png)
-
-from where you can select individual exercises.
-
-![learnmusic-solfege-hun](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-hun.png)
-
-There are practice sessions and tests. Both will be able to play the tones through any connected MIDI device or just your sound card’s MIDI player. The exercises often have visual notation and the ability to play back the sequence slowly.
-
-One important note about Solfège is that under Ubuntu you might not be able to hear anything with the default setup (unless you have a MIDI device connected). If that is the case, head over to “File -> Preferences,” select sound setup and choose the appropriate option for your system (choosing ALSA would probably work in most cases).
-
-![learnmusic-solfege-midi](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-midi.png)
-
-Solfège can be very helpful for your daily practice. Use it regularly and you will have trained your ear before you can sing do-re-mi.
-
-### Tete (ear trainer) ###
-
-[Tete][8] (This ear trainer ’ere) is a Java application for simple, yet efficient, [ear training][9]. It helps you identify a variety of scales by playing them back under various circumstances, from different roots and on different MIDI sounds. [Download it from SourceForge][10]. You then need to unzip the downloaded file.
-
-    unzip Tete-*
-
-Enter the unpacked directory:
-
-    cd Tete-*
-
-Assuming you have Java installed on your system, you can run the jar file with
-
-    java -jar Tete-[your version]
-
-(To autocomplete the above command, just press the Tab key after typing “Tete-”.)
-
-Tete has a simple, one-page interface with everything on it. 
- -![learnmusic-tete-main](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-tete-main.png) - -You can choose to play scales (see above), chords, - -![learnmusic-tete-chords](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-tete-chords.png) - -or intervals. - -![learnmusic-tete-intervals](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-tete-intervals.png) - -You can “fine tune” your experience with various options including the midi instrument’s sound, what note to start from, ascending or descending scales, and how slow/fast the playback should be. Tete’s SourceForge page includes a very useful tutorial that explains most aspects of the software. - -### JalMus ### - -Jalmus is a Java-based keyboard note reading trainer. It works with attached MIDI keyboards or with the on-screen virtual keyboard. It has many simple lessons and exercises to train in music reading. Unfortunately, its development has been discontinued since 2013, but the software appears to still be functional. - -To get Jalmus, head over to the [sourceforge page][11] of its last version (2.3) to get the Java installer, or just type the following command into a terminal: - - wget http://garr.dl.sourceforge.net/project/jalmus/Jalmus-2.3/installjalmus23.jar - -Once the download finishes, load the installer with - - java -jar installjalmus23.jar - -You will be guided through a simple Java-based installer that was made for cross-platform installation. - -Jalmus’s main screen is plain. - -![learnmusic-jalmus-main](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-main.jpg) - -You can find lessons of varying difficulty in the Lessons menu. 
It ranges from very simple ones, where one note swims in from the left, and the corresponding key lights up on the on-screen keyboard …
-
-![learnmusic-jalmus-singlenote](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-singlenote.png)
-
-… to difficult ones with many notes swimming in from the right, and you are required to repeat the sequence on your keyboard.
-
-![learnmusic-jalmus-multinote](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-multinote.png)
-
-Jalmus also includes note-reading exercises with single notes, which are very similar to the lessons, only without the visual hints, and your score will be displayed after you finish. It also aids rhythm reading of varying difficulty, where the rhythm is both audible and visually marked. A metronome (audible and visual) aids in the understanding
-
-![learnmusic-jalmus-rhythm](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-rhythm.png)
-
-and score reading where multiple notes will be played
-
-![learnmusic-jalmus-score](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-score.png)
-
-All these options are configurable; you can switch features on and off as you like.
-
-All things considered, Jalmus probably works best for rhythm training. Although it was not necessarily its intended purpose, the software really excels in this particular use-case.
-
-### Notable mentions ###
-
-#### TuxGuitar ####
-
-For guitarists, [TuxGuitar][12] works much like Guitar Pro on Windows (and it can also read Guitar Pro files).
-
-#### PianoBooster ####
-
-[Piano Booster][13] can help with piano skills. It is designed to play MIDI files, which you can play along with on an attached keyboard, watching the score roll past on the screen.
-
-### Conclusion ###
-
-Linux offers many great tools for learning, and if your particular interest is music, you will not be left without software to aid your practice. 
Surely there are many more excellent software tools available for music students than were mentioned above. Do you know of any? Please let us know in the comments below. - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/linux-learning-music/ - -作者:[Attila Orosz][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/attilaorosz/ -[1]:https://www.maketecheasier.com/series/learn-with-linux/ -[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ -[3]:https://www.maketecheasier.com/linux-physics-simulation/ -[4]:https://www.maketecheasier.com/linux-learning-music/ -[5]:https://www.maketecheasier.com/linux-geography-apps/ -[6]:https://www.maketecheasier.com/learn-linux-maths/ -[7]:https://en.wikipedia.org/wiki/Solf%C3%A8ge -[8]:http://tete.sourceforge.net/index.shtml -[9]:https://en.wikipedia.org/wiki/Ear_training -[10]:http://sourceforge.net/projects/tete/files/latest/download -[11]:http://sourceforge.net/projects/jalmus/files/Jalmus-2.3/ -[12]:http://tuxguitar.herac.com.ar/ -[13]:http://www.linuxlinks.com/article/20090517041840856/PianoBooster.html \ No newline at end of file diff --git a/sources/tech/Learn with Linux/Learn with Linux--Learning to Type.md b/sources/tech/Learn with Linux/Learn with Linux--Learning to Type.md deleted file mode 100644 index 51cef0f1a8..0000000000 --- a/sources/tech/Learn with Linux/Learn with Linux--Learning to Type.md +++ /dev/null @@ -1,121 +0,0 @@ -Learn with Linux: Learning to Type -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-featured.png) - -This article is part of the [Learn with Linux][1] series: - -- [Learn with Linux: Learning to Type][2] -- [Learn with Linux: Physics Simulation][3] -- 
[Learn with Linux: Learning Music][4] -- [Learn with Linux: Two Geography Apps][5] -- [Learn with Linux: Master Your Math with These Linux Apps][6] - -Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software. - -Typing is taken for granted by many people; today being keyboard savvy often comes as second nature. Yet how many of us still type with two fingers, even if ever so fast? Once typing was taught in schools, but slowly the art of ten-finger typing is giving way to two thumbs. - -The following two applications can help you master the keyboard so that your next thought does not get lost while your fingers catch up. They were chosen for their simplicity and ease of use. While there are some more flashy or better looking typing apps out there, the following two will get the basics covered and offer the easiest way to start out. - -### TuxType (or TuxTyping) ### - -TuxType is for children. Young students can learn how to type with ten fingers with simple lessons and practice their newly-acquired skills in fun games. - -Debian and derivatives (therefore all Ubuntu derivatives) should have TuxType in their standard repositories. To install simply type - - sudo apt-get install tuxtype - -The application starts with a simple menu screen featuring Tux and some really bad midi music (Fortunately the sound can be turned off easily with the icon in the lower left corner.). - -![learntotype-tuxtyping-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-main.jpg) - -The top two choices, “Fish Cascade” and “Comet Zap,” represent typing games, but to start learning you need to head over to the lessons. - -There are forty simple built-in lessons to choose from. 
Each one of these will take a letter from the keyboard and make the student practice while giving visual hints, such as which finger to use.
-
-![learntotype-tuxtyping-exd1](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd1.jpg)
-
-![learntotype-tuxtyping-exd2](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd2.jpg)
-
-For more advanced practice, phrase typing is also available, although for some reason this is hidden under the options menu.
-
-![learntotype-tuxtyping-phrase](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-phrase.jpg)
-
-The games are good for speed and accuracy as the player helps Tux catch falling fish
-
-![learntotype-tuxtyping-fish](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-fish.jpg)
-
-or zap incoming asteroids by typing the words written over them.
-
-![learntotype-tuxtyping-zap](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-zap.jpg)
-
-Besides being a fun way to practice, these games teach spelling, speed, and eye-to-hand coordination, as you must type while also watching the screen, building a foundation for touch typing, if taken seriously.
-
-### GNU Typist (gtype) ###
-
-For adults and more experienced typists, there is GNU Typist, a console-based application developed by the GNU project.
-
-GNU Typist is also carried by most Debian derivatives’ main repos. Installing it is as easy as typing
-
-    sudo apt-get install gtype
-
-You will probably not find it in the Applications menu; instead you should start it from a terminal window.
-
-    gtype
-
-The main menu is simple, no-nonsense and frill-free, yet it is evident how much the software has to offer. Typing lessons of all levels are immediately accessible.
-
-![learntotype-gtype-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-main.png)
-
-The lessons are straightforward and detailed. 
-
-![learntotype-gtype-lesson](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-lesson.png)
-
-The interactive practice sessions offer little more than highlighting your mistakes. Instead of flashy visuals you get the chance to focus on practising. At the end of each lesson you get some simple statistics of how you’ve been doing. If you make too many mistakes, you cannot proceed until you can pass the level.
-
-![learntotype-gtype-mistake](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-mistake.png)
-
-While the basic lessons only require you to repeat some characters, more advanced drills will have the practitioner type either whole sentences,
-
-![learntotype-gtype-warmup](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmup.png)
-
-where of course the three percent error margin means you are allowed even fewer mistakes,
-
-![learntotype-gtype-warmupfail](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmupfail.png)
-
-or some drills aiming to achieve certain goals, as in the “Balanced keyboard drill.”
-
-![learntotype-gtype-balanceddrill](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-balanceddrill.png)
-
-Simple speed drills have you type quotes,
-
-![learntotype-gtype-speed-simple](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-simple.png)
-
-while more advanced ones will make you write longer texts taken from classics.
-
-![learntotype-gtype-speed-advanced](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-advanced.png)
-
-If you’d prefer a different language, more lessons can also be loaded as command line arguments.
-
-![learntotype-gtype-more-lessons](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-more-lessons.png)
-
-### Conclusion ###
-
-If you care to hone your typing skills, Linux has great software to offer. 
The two basic, yet feature-rich, applications discussed above will cater to most aspiring typists’ needs. If you use or know of another great typing application, please don’t hesitate to let us know below in the comments. - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/learn-to-type-in-linux/ - -作者:[Attila Orosz][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/attilaorosz/ -[1]:https://www.maketecheasier.com/series/learn-with-linux/ -[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ -[3]:https://www.maketecheasier.com/linux-physics-simulation/ -[4]:https://www.maketecheasier.com/linux-learning-music/ -[5]:https://www.maketecheasier.com/linux-geography-apps/ -[6]:https://www.maketecheasier.com/learn-linux-maths/ \ No newline at end of file diff --git a/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md b/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md deleted file mode 100644 index 2a8415dda7..0000000000 --- a/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md +++ /dev/null @@ -1,107 +0,0 @@ -Learn with Linux: Physics Simulation -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/07/physics-fetured.jpg) - -This article is part of the [Learn with Linux][1] series: - -- [Learn with Linux: Learning to Type][2] -- [Learn with Linux: Physics Simulation][3] -- [Learn with Linux: Learning Music][4] -- [Learn with Linux: Two Geography Apps][5] -- [Learn with Linux: Master Your Math with These Linux Apps][6] - -Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. 
The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
-
-Physics is an interesting subject, and arguably the most enjoyable part of any Physics class/lecture is the demonstrations. It is really nice to see physics in action, yet the experiments do not need to be restricted to the classroom. While Linux offers many great tools for scientists to support or conduct experiments, this article will cover a few that would make learning physics easier or more fun.
-
-### 1. Step ###
-
-[Step][7] is an interactive physics simulator, part of [KDEEdu, the KDE Education Project][8]. Nobody could better describe what Step does than the people who made it. According to the project webpage, “[Step] works like this: you place some bodies on the scene, add some forces such as gravity or springs, then click “Simulate” and Step shows you how your scene will evolve according to the laws of physics. You can change every property of bodies/forces in your experiment (even during simulation) and see how this will change the outcome of the experiment. With Step, you can not only learn but feel how physics works!”
-
-While of course it requires Qt and loads of KDE-specific dependencies to work, projects like this (and KDEEdu itself) are part of the reason why KDE is such an awesome environment (if you don’t mind running a heavier desktop, of course).
-
-Step is in the Debian repositories; to install it on derivatives, simply type
-
-    sudo apt-get install step
-
-into a terminal. On a KDE system it should have minimal dependencies and install in seconds.
-
-Step has a simple interface, and it lets you jump right into simulations.
-
-![physics-step-main](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-main.png)
-
-You will find all available objects on the left-hand side. You can have different particles, gas, shaped objects, springs, and different forces in action. 
If you select an object (1), a short description of it will appear on the right-hand side (2). On the right you will also see an overview of the “world” you have created (the objects it contains) (3), the properties of the currently selected object (4), and the steps you have taken so far (5).
-
-![physics-step-parts](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-parts.png)
-
-Once you have placed all you wanted on the canvas, just press “Simulate,” and watch the events unfold as the objects interact with each other.
-
-![physics-step-simulate1](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate1.png)
-
-![physics-step-simulate2](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate2.png)
-
-![physics-step-simulate3](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate3.png)
-
-To get to know Step better you only need to press F1. The KDE Help Center offers a great and detailed Step handbook.
-
-### 2. Lightspeed ###
-
-Lightspeed is a simple GTK+ and OpenGL-based simulator that is meant to demonstrate how one might observe a fast-moving object. Lightspeed will simulate these effects based on Einstein’s special relativity. According to [their sourceforge page][9], “When an object accelerates to more than a few million meters per second, it begins to appear warped and discolored in strange and unusual ways, and as it approaches the speed of light (299,792,458 m/s) the effects become more and more bizarre. 
In addition, the manner in which the object is distorted varies drastically with the viewpoint from which it is observed.”
-
-These effects, which come into play at relativistic velocities, are:
-
-- **The Lorentz contraction** – causes the object to appear shorter
-- **The Doppler red/blue shift** – alters the hues of color observed
-- **The headlight effect** – brightens or darkens the object
-- **Optical aberration** – deforms the object in unusual ways
-
-Lightspeed is in the Debian repositories; to install it, simply type:
-
-    sudo apt-get install lightspeed
-
-The user interface is very simple. You get a shape (more can be downloaded from sourceforge) which will move along the x-axis (animation can be started by pressing “A” or by selecting it from the object menu).
-
-![physics-lightspeed](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed.png)
-
-You control the speed of its movement with the right-hand side slider and watch how it deforms.
-
-![physics-lightspeed-deform](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-deform.png)
-
-Some simple controls allow you to add more visual elements.
-
-![physics-lightspeed-visual](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-visual.png)
-
-The viewing angles can be adjusted by pressing either the left, middle or right button and dragging the mouse, or from the Camera menu, which also offers some other adjustments like background colour or graphics mode.
-
-### Notable mention: Physion ###
-
-Physion looks like an interesting project and great-looking software to simulate physics in a much more colorful and fun way than the above examples would allow. Unfortunately, at the time of writing, the [official website][10] was experiencing problems, and the download page was unavailable.
-
-Judging from their YouTube videos, Physion must be worth installing once a download link becomes available. Until then we can just enjoy this video demo. 
- -注:youtube 视频 - - -Do you have other favorite physics simulation/demonstration/learning applications for Linux? Please share with us in the comments below. - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/linux-physics-simulation/ - -作者:[Attila Orosz][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/attilaorosz/ -[1]:https://www.maketecheasier.com/series/learn-with-linux/ -[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ -[3]:https://www.maketecheasier.com/linux-physics-simulation/ -[4]:https://www.maketecheasier.com/linux-learning-music/ -[5]:https://www.maketecheasier.com/linux-geography-apps/ -[6]:https://www.maketecheasier.com/learn-linux-maths/ -[7]:https://edu.kde.org/applications/all/step -[8]:https://edu.kde.org/ -[9]:http://lightspeed.sourceforge.net/ -[10]:http://www.physion.net/ \ No newline at end of file diff --git a/sources/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md b/sources/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md deleted file mode 100644 index a31e1f73b4..0000000000 --- a/sources/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md +++ /dev/null @@ -1,103 +0,0 @@ -Learn with Linux: Two Geography Apps -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-featured.png) - -This article is part of the [Learn with Linux][1] series: - -- [Learn with Linux: Learning to Type][2] - [Learn with Linux: Physics Simulation][3] - [Learn with Linux: Learning Music][4] - [Learn with Linux: Two Geography Apps][5] - [Learn with Linux: Master Your Math with These Linux Apps][6] - -Linux offers great educational software and many excellent tools to aid students of
all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software. - -Geography is an interesting subject, used by many of us day to day, often without realizing it. But when you fire up a GPS, a SatNav, or just Google Maps, you are using the geographical data provided by this software, with the maps drawn by cartographers. When you hear about a certain country in the news or hear financial data being recited, these all fall under the umbrella of geography. And there is some great Linux software to study and practice these topics, whether it is for school or your own improvement. - -### Kgeography ### - -There are only two geography-related applications readily available in most Linux repositories, and both of these are KDE applications, in fact part of the KDE Educational project. Kgeography uses simple color-coded maps of any selected country. - -To install Kgeography just type - - sudo apt-get install kgeography - -into a terminal window of any Ubuntu-based distribution. - -The interface is very basic. You are first presented with a picker menu that lets you choose an area map. - -![learn-geography-kgeo-pick](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-pick.png) - -On the map you can display the name and capital of any given territory by clicking on it, - -![learn-geography-kgeo-brit](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-brit.png) - -and test your knowledge in different quizzes. - -![learn-geography-kgeo-test](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-test.png) - -It is an interactive way to test your basic geographical knowledge and could be an excellent tool to help you prepare for exams. - -### Marble ### - -Marble is a somewhat more advanced application, offering a global view of the world without the need of 3D acceleration.
- -![learn-geography-marble-main](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-main.png) - -To get Marble, type - - sudo apt-get install marble - -into a terminal window of any Ubuntu-based distribution. - -Marble focuses on cartography, its main view being that of an atlas. - -![learn-geography-marble-atlas](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-atlas.jpg) - -You can have different projections, like Globe or Mercator, displayed as defaults, with flat and other exotic views available from a drop-down menu. The surfaces include the basic Atlas view, a full-fledged offline map powered by OpenStreetMap, - -![learn-geography-marble-map](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-map.jpg) - -satellite view (by NASA), - -![learn-geography-marble-satellite](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-satellite.jpg) - -and political and even historical maps of the world, among others. - -![learn-geography-marble-history](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-history.jpg) - -Besides providing great offline maps with different skins and varying amounts of data, Marble offers other types of information as well. You can switch on and off various offline info-boxes - -![learn-geography-marble-offline](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-offline.png) - -and online services from the menu. - -![learn-geography-marble-online](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-online.png) - -An interesting online service is Wikipedia integration. Clicking on the little Wiki logos will bring up a pop-up featuring detailed information about the selected places.
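The Mercator projection mentioned above is simple enough to compute by hand: a latitude φ maps to a vertical coordinate y = ln(tan(π/4 + φ/2)). Here is a small awk sketch of that formula, independent of Marble itself — the 60° latitude is just an example value, and since awk has no tan() the tangent is built from sin/cos:

```shell
# Mercator vertical coordinate for a given latitude, in plain awk.
awk 'BEGIN {
    pi  = 3.141592653589793
    phi = 60 * pi / 180              # 60 degrees north, in radians
    a   = pi / 4 + phi / 2
    y   = log(sin(a) / cos(a))       # ln(tan(pi/4 + phi/2))
    printf "Mercator y at 60N: %.3f\n", y
}'
```

The rapid growth of y toward the poles is why Greenland looks enormous in the Mercator view but not in the Globe or Atlas views.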
- -![learn-geography-marble-wiki](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-wiki.png) - -The software also includes options for location tracking, route planning, and searching for locations, among other great and useful features. If you enjoy cartography, Marble offers hours of fun exploring and learning. - -### Conclusion ### - -Linux offers many great educational applications, and the subject of geography is no exception. With the above two programs you can learn a lot about our globe and test your knowledge in a fun and interactive manner. - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/linux-geography-apps/ - -作者:[Attila Orosz][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/attilaorosz/ -[1]:https://www.maketecheasier.com/series/learn-with-linux/ -[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ -[3]:https://www.maketecheasier.com/linux-physics-simulation/ -[4]:https://www.maketecheasier.com/linux-learning-music/ -[5]:https://www.maketecheasier.com/linux-geography-apps/ -[6]:https://www.maketecheasier.com/learn-linux-maths/ \ No newline at end of file diff --git a/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 5--Grep From Files and Display the File Name.md b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 5--Grep From Files and Display the File Name.md new file mode 100644 index 0000000000..3b683746eb --- /dev/null +++ b/sources/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 5--Grep From Files and Display the File Name.md @@ -0,0 +1,68 @@ +(translating by runningwater) +Grep From Files and Display the 
File Name +================================================================================ +How do I grep from a number of files and display the file name only? + +When there is more than one file to search, grep will display the file name by default: + + grep "word" filename + grep root /etc/* + +Sample outputs: + + /etc/bash.bashrc: See "man sudo_root" for details. + /etc/crontab:17 * * * * root cd / && run-parts --report /etc/cron.hourly + /etc/crontab:25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ) + /etc/crontab:47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly ) + /etc/crontab:52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ) + /etc/group:root:x:0: + grep: /etc/gshadow: Permission denied + /etc/logrotate.conf: create 0664 root utmp + /etc/logrotate.conf: create 0660 root utmp + +The first part of each output line is the file name (e.g., /etc/crontab, /etc/group). The -l option will print only the name of each file that contains the match: + + grep -l "string" filename + grep -l root /etc/* + +Sample outputs: + + /etc/aliases + /etc/arpwatch.conf + grep: /etc/at.deny: Permission denied + /etc/bash.bashrc + /etc/bash_completion + /etc/ca-certificates.conf + /etc/crontab + /etc/group + +You can suppress normal output and instead print the name of each input file from **which no output would normally have been** printed: + + grep -L "word" filename + grep -L root /etc/* + +Sample outputs: + + /etc/apm + /etc/apparmor + /etc/apparmor.d + /etc/apport + /etc/apt + /etc/avahi + /etc/bash_completion.d + /etc/bindresvport.blacklist + /etc/blkid.conf + /etc/bluetooth + /etc/bogofilter.cf + /etc/bonobo-activation + /etc/brlapi.key + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/grep-from-files-and-display-the-file-name/ + +作者:Vivek Gite +译者:[runningwater](https://github.com/runningwater) +校对:[校对者ID](https://github.com/校对者ID) + +本文由
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file diff --git a/translated/share/20150824 Great Open Source Collaborative Editing Tools.md b/translated/share/20150824 Great Open Source Collaborative Editing Tools.md new file mode 100644 index 0000000000..bc0a841477 --- /dev/null +++ b/translated/share/20150824 Great Open Source Collaborative Editing Tools.md @@ -0,0 +1,228 @@ +优秀的开源合作编辑工具 +================================================================================ +一句话,合作编著就是多个人进行编著。合作有好处也有风险。好处包括更加全面/协调的方式,更好的利用现有资源和一个更加有力的、团结的声音。对于我来说,最大的好处是极大的透明度。那是当我需要采纳同事的观点。同事之间来来回回地传文件效率非常低,导致不必要的延误还让人(比如,我)对整个合作这件事都感到不满意。有个好的合作软件,我就能实时地或异步地分享笔记,数据和文件,并用评论来分享自己的想法。这样在文档、图片、视频、演示文稿上合作就不会那么的琐碎而无聊。 + +有很多种方式能在线进行合作,简直不能更简便了。这篇文章表明了我最喜欢的开源实时文档合作编辑工具。 + +Google Docs 是个非常好的高效应用,有着大部分我所需要的功能。它可以作为一个实时地合作编辑文档的工具提供服务。文档可以被分享、打开并被多位用户同时编辑,用户还能看见其他合作者一个字母一个字母的编辑过程。虽然 Google Docs 对个人是免费的,但并不开源。 + +下面是我带来的最棒的开源合作编辑器,它们能帮你不被打扰的集中精力进行写作,而且是和其他人协同完成。 + +---------- + +### Hackpad ### + +![Hackpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Hackpad.png) + +Hackpad 是个开源的基于网页的实时 wiki,基于开源 EtherPad 合作文档编辑器。 + +Hackpad 允许用户实时分享你的文档,它还用彩色编码显示各个作者分别贡献了哪部分。它还允许插入图片、清单,由于提供了语法高亮功能,它还能用来写代码。 + +当2014年4月 Dropbox 获得了 Hackpad 后,这款软件就以开源的形式在本月发行。让我们经历的等待非常值得。 + +特性: + +- 有类似 wiki 所提供的,一套非常完善的功能 +- 实时或者异步地记合作笔记,共享数据和文件,或用评论分享你们的想法 +- 细致的隐私许可让你可以邀请单个朋友,一个十几人的团队或者上千的 Twitter 粉丝 +- 智能执行 +- 直接从流行的视频分享网站上插入视频 +- 表格 +- 可对使用广泛的包括 C, C#, CSS, CoffeeScript, Java, 以及 HTML 在内的编程语言进行语法高亮 + +- 网站:[hackpad.com][1] +- 源代码:[github.com/dropbox/hackpad][2] +- 开发者:[Contributors][3] +- 许可:Apache License, Version 2.0 +- 版本号: - + +---------- + +### Etherpad ### + +![Etherpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Etherpad.png) + +Etherpad 是个基于网页的开源实时合作编辑器,允许多个作者同时编辑一个文本文档,写评论,并与其他作者用群聊方式进行交流。 + +Etherpad 是用 JavaScript 运行的,在 AppJet 平台的顶端,通过 Comet 流实现实时的功能。 + +特性: + +- 尽心设计的斯巴达界面 +- 
简单的格式化文本功能 +- “滑动时间轴”——浏览一个工程历史版本 +- 可以下载纯文本、 PDF、微软的 Word 文档、Open Document 和 HTML 格式的文档 +- 每隔一段很短的时间就会自动保存 +- 可个性化程度高 +- 有客户端插件可以扩展编辑的功能 +- 几百个支持 Etherpad 的扩展包括支持 email 提醒,pad 管理,授权 +- 可访问性开启 +- 可从 Node 里或通过 CLI(命令行界面)和 Pad 目录实时交互 + +- 网站: [etherpad.org][4] +- 源代码:[github.com/ether/etherpad-lite][5] +- 开发者:David Greenspan, Aaron Iba, J.D. Zamfiresc, Daniel Clemens, David Cole +- 许可:Apache License, Version 2.0 +- 版本号: 1.5.7 + +---------- + +### Firepad ### + +![Firepad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Firepad.png) + +Firepad 是个开源的合作文本编辑器。它的设计目的是被嵌入到更大的网页应用中对几天内新加入的代码进行批注。 + +Firepad 是个全功能的文本编辑器,有解决冲突,光标同步,用户属性,用户在线状态检测功能。它使用 Firebase 作为后台,而且不需要任何服务器端的代码。他可以被加入到任何网页应用中。Firepad 可以使用 CodeMirror 编辑器或者 Ace 编辑器提交文本,它的操作转换代码是从 ot.js 上借鉴的。 + +如果你想要通过添加简单的文档和代码编辑器来扩展你的网页应用能力,Firepad 最适合不过了。 + +Firepad 已被多个编辑器使用,包括Atlassian Stash Realtime Editor、Nitrous.IO、LiveMinutes 和 Koding。 + +特性: + +- 纯正的合作编辑 +- 基于 OT 的智能合并及解决冲突 +- 支持多种格式的文本和代码的编辑 +- 光标位置同步 +- 撤销/重做 +- 文本高亮 +- 用户属性 +- 在线检测 +- 版本检查点 +- 图片 +- 通过它的 API 拓展 Firepad +- 支持所有现代浏览器:Chrome、Safari、Opera 11+、IE8+、Firefox 3.6+ + +- 网站: [www.firepad.io][6] +- 源代码:[github.com/firebase/firepad][7] +- 开发者:Michael Lehenbauer and the team at Firebase +- 许可:MIT +- 版本号:1.1.1 + +---------- + +### OwnCloud Documents ### + +![ownCloud Documents in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-ownCloud.png) + +ownCloud Documents 是个可以单独并/或合作进行办公室文档编辑 ownCloud 应用。它允许最多5个人同时在网页浏览器上合作进行编辑 .odt 和 .doc 文件。 + +ownCloud 是个自托管文件同步和分享服务器。他通过网页界面,同步客户端或 WebDAV 提供你数据的使用权,同时提供一个容易在设备间进行浏览、同步和分享的平台。 + +特性: + +- 合作编辑,多个用户同时进行文件编辑 +- 在 ownCloud 里创建文档 +- 上传文档 +- 在浏览器里分享和编辑文件,然后在 ownCloud 内部或通过公共链接进行分享这些文件 +- 有类似 ownCloud 的功能,如版本管理、本地同步、加密、恢复被删文件 +- 通过透明转换文件格式的方式无缝支持微软 Word 文档 + +- 网站:[owncloud.org][8] +- 源代码: [github.com/owncloud/documents][9] +- 开发者:OwnCloud Inc. 
+- 许可:AGPLv3 +- 版本号:8.1.1 + +---------- + +### Gobby ### + +![Gobby in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Gobby.png) + +Gobby 是个支持在一个会话内进行多个用户聊天并打开多个文档的合作编辑器。所有的用户都能同时在文件上进行工作,无需锁定。不同用户编写的部分用不同颜色高亮显示,它还支持多个编程和标记语言的语法高亮。 + +Gobby 允许多个用户在互联网上实时共同编辑同一个文档。他很好的整合了 GNOME 环境。它拥有一个客户端-服务端结构,这让它能支持一个会话开多个文档,文档同步请求,密码保护和 IRC 式的聊天方式可以在多个频道进行交流。用户可以选择一个颜色对他们在文档中编写的文本进行高亮。 + +还供有一个叫做 infinoted 的专用服务器。 + +特性: + +- 成熟的文本编辑能力包括使用 GtkSourceView 的语法高亮功能 +- 实时、无需锁定、通过加密(包括PFS)连接的合作文本编辑 +- 整合了群聊 +- 本地组撤销:撤销不会影响远程用户的修改 +- 显示远程用户的光标和选择区域 +- 用不同颜色高亮不同用户编写的文本 +- 适用于大多数编程语言的语法高亮,自动缩进,可配置 tab 宽度 +- 零冲突 +- 加密数据传输包括完美的正向加密(PFS) +- 会话可被密码保护 +- 通过 Access Control Lists (ACLs) 进行精密的权限保护 +- 高度个性化的专用服务器 +- 自动保存文档 +- 先进的查找和替换功能 +- 国际化 +- 完整的 Unicode 支持 + +- 网站:[gobby.github.io][10] +- 源代码: [github.com/gobby][11] +- 开发者: Armin Burgmeier, Philipp Kern and contributors +- 许可: GNU GPLv2+ and ISC +- 版本号:0.5.0 + +---------- + +### OnlyOffice ### + +![OnlyOffice in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-OnlyOffice.png) + +ONLYOFFICE(从前叫 Teamlab Office)是个多功能云端在线办公套件,整合了 CRM(客户关系管理)系统、文档和项目管理工具箱、甘特图以及邮件整合器 + +它能让你整理商业任务和时间表,保存并分享你的合作或个人文档,使用网络社交工具如博客和论坛,还可以和你的队员通过团队的即时聊天工具进行交流。 + +能在同一个地方管理文档、项目、团队和顾客关系。OnlyOffice 结合了文本,电子表格和电子幻灯片编辑器,他们的功能跟微软桌面应用(Word、Excel 和 PowerPoint)的功能相同。但是他允许实时进行合作编辑、评论和聊天。 + +OnlyOffice 是用 ASP.NET 编写的,基于 HTML5 Canvas 元素,并且被翻译成21种语言。 + +特性: + +- 当在大文档里工作、翻页和缩放时,它能与桌面应用一样强大 +- 文档可以在浏览/编辑模式下分享 +- 文档嵌入 +- 电子表格和电子幻灯片编辑器 +- 合作编辑 +- 评论 +- 群聊 +- 移动应用 +- 甘特图 +- 时间管理 +- 权限管理 +- Invoicing 系统 +- 日历 +- 整合了文件保存系统:Google Drive、Box、OneDrive、Dropbox、OwnCloud +- 整合了 CRM、电子邮件整合器和工程管理模块 +- 邮件服务器 +- 邮件整合器 +- 可以编辑流行格式的文档、电子表格和电子幻灯片:DOC、DOCX、ODT、RTF、TXT、XLS、XLSX、ODS、CSV、PPTX、PPT、ODP + +- 网站:[www.onlyoffice.com][12] +- 源代码:[github.com/ONLYOFFICE/DocumentServer][13] +- 开发者:Ascensio System SIA +- 许可:GNU GPL v3 +- 版本号:7.7 + +-------------------------------------------------------------------------------- + +via: 
http://www.linuxlinks.com/article/20150823085112605/CollaborativeEditing.html + +作者:Frazer Kline +译者:[H-mudcup](https://github.com/H-mudcup) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://hackpad.com/ +[2]:https://github.com/dropbox/hackpad +[3]:https://github.com/dropbox/hackpad/blob/master/CONTRIBUTORS +[4]:http://etherpad.org/ +[5]:https://github.com/ether/etherpad-lite +[6]:http://www.firepad.io/ +[7]:https://github.com/firebase/firepad +[8]:https://owncloud.org/ +[9]:http://github.com/owncloud/documents/ +[10]:https://gobby.github.io/ +[11]:https://github.com/gobby +[12]:https://www.onlyoffice.com/free-edition.aspx +[13]:https://github.com/ONLYOFFICE/DocumentServer diff --git a/translated/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md b/translated/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md deleted file mode 100644 index 016429d92d..0000000000 --- a/translated/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md +++ /dev/null @@ -1,43 +0,0 @@ -sevenot translated -好奇Linux?试试云端的Linux桌面 -================================================================================ -Linux在桌面操作系统市场上只占据了非常小的份额,目前调查来看,估计只有2%的市场份额;对比来看丰富多变的Windows系统占据了接近90%的市场份额。对于Linux来说要挑战Windows在桌面操作系统市场的垄断,需要一个简单的方式来让用户学习不同的操作系统。如果你相信传统的Windows用户再买一台机器来使用Linux,你就太天真了。我们只能去试想用户重新分盘,设置引导程序来使用双系统,或者跳过所有步骤回到一个最简单的方法。 -![](http://www.linuxlinks.com/portal/content/reviews/Cloud/CloudComputing.png) - -我们实验过一系列无风险的使用方法让用户试操作Linux,并且不涉及任何分区管理,包括CD/DVDs光盘、USB钥匙和桌面虚拟化软件。通过实验,我强烈推荐使用VMware的VMware Player或者Oracle VirtualBox虚拟机,对于桌面操作系统或者便携式电脑的用户,这是一种相对简单而且免费的的方法来安装运行多操作系统。每一台虚拟机和其他虚拟机相隔离,但是共享CPU,存贮,网络接口等等。但是虚拟机仍需要一定的资源来安装运行Linux,也需要一台相当强劲的主机。对于一个好奇心不大的人,这样做实在是太麻烦了。 - -要打破用户传统的使用观念市非常困难的。很多Windows用户可以尝试使用Linux提供的免费软件,但也有太多要学习的Linux系统知识。这会花掉相当一部分时间来习惯Linux的工作方式。 - -当然了,对于一个第一次在Linux上操作的新手,有没有一个更高效的方法呢?答案是肯定的,接着往下看看云实验平台。 - -### LabxNow ### - 
-![LabxNow](http://www.linuxlinks.com/portal/content/reviews/Cloud/Screenshot-LabxNow.png) - -LabxNow提供了一个免费服务,方便广大用户通过浏览器来访问远程Liunx桌面。开发者将其加强为一个用户个人远程实验室(用户可以在系统里运行、开发任何程序),用户可以在任何地方通过互联网登入远程实验室。 - -这项服务现在可以为个人用户提供2核处理器,4GB RAM和10GB的固态硬盘,运行在128G RAM的4 AMD 6272处理器上。 - -#### 配置参数: #### - -- 系统镜像:基于Ubuntu 14.04的Xface 4.10,RHEL 6.5,CentOS(Gnome桌面),Oracle -- 硬件: CPU - 1核或者2核; 内存: 512MB, 1GB, 2GB or 4GB -- 超快的网络数据传输 -- 可以运行在所有流行的浏览器上 -- 可以安装任意程序,可以运行任何程序 – 这是一个非常棒的方法,可以随意做实验学你你想学的所有知识, 没有 一点风险 -- 添加、删除、管理、制定虚拟机非常方便 -- 支持虚拟机共享,远程桌面 - -你所需要的只是一台有稳定网络的设备。不用担心虚拟专用系统(VPS)、域名、或者硬件带来的高费用。LabxNow提供了一个非常好的方法在Ubuntu、RHEL和CentOS上实验。它给Windows用户一个极好的环境,让他们探索美妙的Linux世界。说得深一点,它可以让用户随时随地在里面工作,而没有了要在每台设备上安装Linux的压力。点击下面这个链接进入[www.labxnow.org/labxweb/][1]。 - -这里还有一些其它服务(大部分市收费服务)可以让用户在Linux使用。包括Cloudsigma环境的7天使用权和Icebergs.io(通过HTML5实现root权限)。但是现在,我推荐LabxNow。 --------------------------------------------------------------------------------- - -来自: http://www.linuxlinks.com/article/20151003095334682/LinuxCloud.html - -译者:[sevenot](https://github.com/sevenot) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.labxnow.org/labxweb/ diff --git a/translated/share/20151030 80 Linux Monitoring Tools for SysAdmins.md b/translated/share/20151030 80 Linux Monitoring Tools for SysAdmins.md new file mode 100644 index 0000000000..4733a41d17 --- /dev/null +++ b/translated/share/20151030 80 Linux Monitoring Tools for SysAdmins.md @@ -0,0 +1,604 @@ + +Linux 系统管理员必备的80个监控工具 +================================================================================ +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-monitoring.jpg) + +随着行业的不断发展,各种工具多得不可胜数。这里列出网上最全的(工具)。拥有超过80种方式来管理你的机器。在本文中,我们主要讲述以下方面: + +- 命令行工具 +- 网络相关内容 +- 系统相关的监控工具 +- 日志监控工具 +- 基础设施监控工具 + +监控和调试性能问题非常困难,但用对了正确的工具有时也是很容易的。下面是一些你可能听说过的工具,当你使用它们时可能存在一些问题: + +### 十大系统监控工具 ### + +#### 1. 
Top #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/top.jpg) + +这是一个被预装在许多 UNIX 系统中的小工具。当你想要查看在系统中运行的进程或线程时:top 是一个很好的工具。你可以对这些进程以不同的标准进行排序,默认是以 CPU 进行排序的。 + +#### 2. [htop][1] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/htop.jpg) + +HTOP 实质上是 top 的增强版本。它更容易对进程排序。它在视觉上更容易理解并且已经内建了许多通用的命令。它也是完全交互的。 + +#### 3. [atop][2] #### + +Atop 和 top,htop 非常相似,它也能监控所有进程,但不同于 top 和 htop 的是,它会记录进程的日志供以后分析。它也能显示所有进程的资源消耗。它还会高亮显示已经达到临界负载的资源。 + +#### 4. [apachetop][3] #### + +Apachetop 会监控 apache 网络服务器的整体性能。它主要是基于 mytop。它会显示当前 reads, writes 的数量以及 requests 进程的总数。 + +#### 5. [ftptop][4] #### + +ftptop 给你提供了当前所有连接到 ftp 服务器的基本信息,如会话总数,正在上传和下载的客户端数量以及客户端信息。 + +#### 6. [mytop][5] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mytop.jpg) + +mytop 是一个很简洁的工具,用于监控线程和 mysql 的性能。它给了你一个实时的数据库来查询处理结果。 + +#### 7. [powertop][6] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/powertop.jpg) + +powertop 可以帮助你诊断与电量消耗和电源管理相关的问题。它也可以帮你进行电源管理设置,以实现对你服务器最有效的配置。你可以使用 tab 键进行选项切换。 + +#### 8. [iotop][7] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iotop.jpg) + +iotop 用于检查 I/O 的使用情况,并为你提供了一个类似 top 的界面来显示。它每列显示读和写的速率,每行代表一个进程。当出现等待 I/O 交换时,它也显示进程消耗时间的百分比。 + +### 与网络相关的监控 ### + +#### 9. [ntopng][8] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ntopng.jpg) + +ntopng 是 ntop 的升级版,它提供了一个能使用浏览器进行网络监控的图形用户界面。它还有其他用途,如:定位主机,显示网络流量和 ip 流量分布并能进行分析。 + +#### 10. [iftop][9] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iftop.jpg) + +iftop 类似于 top,但它主要不是检查 cpu 的使用率而是监听所选择网络接口的流量,并以表格的形式显示当前的使用量。像“为什么我的网速这么慢呢?!”这样的问题它可以直接回答。 + +#### 11. [jnettop][10] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/jnettop.jpg) + +jnettop 以相同的方式来监测网络流量但比 iftop 更形象。它还支持自定义的文本输出并能以友好的交互方式来深度分析日志。 + +#### 12. 
[bandwidthd][11] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bandwidthd.jpg) + +bandwidthd 可以跟踪 TCP/IP 网络子网的使用情况并能在浏览器中通过 png 图片形象化地构建一个 HTML 页面。它有一个数据库驱动系统,支持搜索、过滤,多传感器和自定义报表。 + +#### 13. [EtherApe][12] #### + +EtherApe 以图形化显示网络流量,可以支持更多的节点。它可以捕获实时流量信息,也可以从 tcpdump 进行读取。也可以使用具有 pcap 语法的网络过滤显示特定信息。 + +#### 14. [ethtool][13] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ethtool.jpg) + +ethtool 用于显示和修改网络接口控制器的一些参数。它也可以用来诊断以太网设备,并获得更多的统计数据。 + +#### 15. [NetHogs][14] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nethogs.jpg) + +NetHogs 打破了网络流量按协议或子网进行统计的原理。它以进程组来计算。所以,当网络流量猛增时,你可以使用 NetHogs 查看是由哪个进程造成的。 + +#### 16. [iptraf][15] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iptraf.jpg) + +iptraf 收集的各种指标,如 TCP 连接数据包和字节数,端口统计和活动指标,TCP/UDP 通信故障,站内数据包和字节数。 + +#### 17. [ngrep][16] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ngrep.jpg) + +如果不是网络层的话,ngrep 就是 grep。pcap 意识到后允许其指定扩展规则或十六进制表达式来匹配数据包。 + +#### 18. [MRTG][17] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mrtg.jpg) + +MRTG 最初被开发来监控路由器的流量,但现在它也能够监控网络相关的东西。它每五分钟收集一次,然后产生一个 HTML 页面。它还具有发送邮件报警的能力。 + +#### 19. [bmon][18] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bmon.jpg) + +Bmon 能监控并帮助你调试网络。它能捕获网络相关的统计数据,并以友好的方式进行展示。你还可以与 bmon 通过脚本进行交互。 + +#### 20. traceroute #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/traceroute.jpg) + +Traceroute 是一个内置工具,能显示路由和测试数据包在网络中的延迟。 + +#### 21. [IPTState][19] #### + +IPTState 可以让你跨越 iptables 来监控流量,并通过你指定的条件来进行排序。该工具还允许你从表中删除状态信息。 + +#### 22. [darkstat][20] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/darkstat.jpg) + +Darkstat 能捕获网络流量并计算使用情况的统计数据。该报告保存在一个简单的HTTP服务器中,它为你提供了一个非常棒的图形用户界面。 + +#### 23. 
[vnStat][21] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vnstat.jpg) + +vnStat 是一个网络流量监控工具,它的数据统计是由内核进行提供的,其消耗的系统资源非常少。系统重新启动后,它收集的数据仍然存在。它具有颜色选项供系统管理员使用。 + +#### 24. netstat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/netstat.jpg) + +netstat 是一个内置的工具,它能显示 TCP 网络连接,路由表和网络接口数量,被用来在网络中查找问题。 + +#### 25. ss #### + +比起 netstat,使用 ss 更好。ss 命令能够显示的信息比 netstat 更多,也更快。如果你想查看统计结果的总信息,你可以使用命令 `ss -s`。 + +#### 26. [nmap][22] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmap.jpg) + +Nmap 可以扫描你服务器开放的端口并且可以检测正在使用哪个操作系统。但你也可以使用 SQL 注入漏洞,网络发现和渗透测试相关的其他手段。 + +#### 27. [MTR][23] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mtr.jpg) + +MTR 结合了 traceroute 和 ping 的功能到一个网络诊断工具上。当使用该工具时,它会限制单个数据包的跳数,同时也监视它们的到期时间。然后每秒进行重复。 + +#### 28. [Tcpdump][24] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/tcpdump.jpg) + +Tcpdump 将输出一个你在命令中匹配并捕获到的数据包的信息。你还可以将此数据保存并进一步分析。 + +#### 29. [Justniffer][25] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/justniffer.jpg) + +Justniffer 是 tcp 数据包嗅探器。使用此嗅探器你可以选择收集低级别的数据还是高级别的数据。它也可以让你以自定义方式生成日志。比如模仿 Apache 的访问日志。 + +### 与系统有关的监控 ### + +#### 30. [nmon][26] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmon.jpg) + +nmon 将数据输出到屏幕上的,或将其保存在一个以逗号分隔的文件中。你可以查看 CPU,内存,网络,文件系统,top 进程。数据也可以被添加到 RRD 数据库中用于进一步分析。 + +#### 31. [conky][27] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cpulimit.jpg) + +Conky 能监视不同操作系统并统计数据。它支持 IMAP 和 POP3, 甚至许多流行的音乐播放器!出于方便不同的人,你可以使用自己的 Lua 脚本或程序来进行扩展。 + +#### 32. [Glances][28] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/glances.jpg) + +使用 Glances 监控你的系统,其旨在使用最小的空间为你呈现最多的信息。它可以在客户端/服务器端模式下运行,也有远程监控的能力。它也有一个 Web 界面。 + +#### 33. 
[saidar][29] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/saidar.jpg) + +Saidar 是一个非常小的工具,为你提供有关系统资源的基础信息。它将系统资源在全屏进行显示。重点是 saidar 会尽可能的简化。 + +#### 34. [RRDtool][30] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/rrdtool.jpg) + +RRDtool 是用来处理 RRD 数据库的工具。RRDtool 旨在处理时间序列数据,如 CPU 负载,温度等。该工具提供了一种方法来提取 RRD 数据并以图形界面显示。 + +#### 35. [monit][31] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/monit.jpg) + +如果出现故障时,monit 有发送警报以及重新启动服务的功能。它可以对任何类型进行检查,你可以为 monit 写一个脚本,它有一个 Web 用户界面来分担你眼睛的压力。 + +#### 36. [Linux process explorer][32] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-process-monitor.jpg) + +Linux process explorer 是类似 OSX 或 Windows 的在线监视器。它比 top 或 ps 的使用范围更广。你可以查看每个进程的内存消耗以及 CPU 的使用情况。 + +#### 37. df #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/df.jpg) + +df 是 disk free 的缩写,它是所有 UNIX 系统预装的程序,用来显示用户有访问权限的文件系统的可用磁盘空间。 + +#### 38. [discus][33] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/discus.jpg) + +Discus 类似于 df,它的目的是通过使用更吸引人的特性,如颜色,图形和数字来对 df 进行改进。 + +#### 39. [xosview][34] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/xosview.jpg) + +xosview 是一款经典的系统监控工具,它给你提供包括 IRQ 的各个不同部分的总览。 + +#### 40. [Dstat][35] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dstat.jpg) + +Dstat 旨在替代 vmstat,iostat,netstat 和 ifstat。它可以让你查实时查看所有的系统资源。这些数据可以导出为 CSV。最重要的是 dstat 允许使用插件,因此其可以扩展到更多领域。 + +#### 41. [Net-SNMP][36] #### + +SNMP 是“简单网络管理协议”,Net-SNMP 工具套件使用该协议可帮助你收集服务器的准确信息。 + +#### 42. [incron][37] #### + +Incron 允许你监控一个目录树,然后对这些变化采取措施。如果你想将目录‘a’中的新文件复制到目录‘b’,这正是 incron 能做的。 + +#### 43. [monitorix][38] #### + +Monitorix 是轻量级的系统监控工具。它可以帮助你监控一台机器,并为你提供丰富的指标。它也有一个内置的 HTTP 服务器,来查看图表和所有指标的报告。 + +#### 44. 
vmstat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vmstat.jpg) + +vmstat(virtual memory statistics)是一个小的内置工具,能监控和显示机器的内存。 + +#### 45. uptime #### + +这个小程序能快速显示你机器运行了多久,目前有多少用户登录和系统过去1分钟,5分钟和15分钟的平均负载。 + +#### 46. mpstat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mpstat.jpg) + +mpstat 是一个内置的工具,能监视 cpu 的使用情况。最常见的使用方法是 `mpstat -P ALL`,它给你提供 cpu 的使用情况。你也可以间隔更新 cpu 的使用情况。 + +#### 47. pmap #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pmap.jpg) + +pmap 是一个内置的工具,报告一个进程的内存映射。你可以使用这个命令来找出内存瓶颈的原因。 + +#### 48. ps #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ps.jpg) + +该命令将给你当前所有进程的概述。你可以使用 `ps -A` 命令查看所有进程。 + +#### 49. [sar][39] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sar.jpg) + +sar 是 sysstat 包的一部分,可以帮助你收集,报告和保存不同系统的指标。使用不同的参数,它会给你提供 CPU, 内存 和 I/O 使用情况及其他东西。 + +#### 50. [collectl][40] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/collectl.jpg) + +类似于 sar,collectl 收集你机器的性能指标。默认情况下,显示 cpu,网络和磁盘统计数据,但它实际收集了很多信息。与 sar 不同的是,collectl 能够处理比秒更小的单位,它可以被直接送入绘图工具并且 collectl 的监控过程更广泛。 + +#### 51. [iostat][41] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iostat.jpg) + +iostat 也是 sysstat 包的一部分。此命令用于监控系统的输入/输出。其报告可以用来进行系统调优,以更好地调节你机器上硬盘的输入/输出负载。 + +#### 52. free #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/free.jpg) + +这是一个内置的命令用于显示你机器上可用的内存大小以及已使用的内存大小。它还可以显示某时刻内核所使用的缓冲区大小。 + +#### 53. /Proc 文件系统 #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/procfile.jpg) + +proc 文件系统可以让你查看内核的统计信息。从这些统计数据可以得到你机器上不同硬件设备的详细信息。看看这个 [ proc文件统计的完整列表 ][42]。 + +#### 54. [GKrellM][43] #### + +GKrellm 是一个图形应用程序来监控你硬件的状态信息,像CPU,内存,硬盘,网络接口以及其他的。它也可以监视并启动你所选择的邮件阅读器。 + +#### 55. 
[Gnome 系统监控器][44] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/gnome-system-monitor.jpg) + +Gnome 系统监控器是一个基本的系统监控工具,其能通过一个树状结构来查看进程的依赖关系,能杀死及调整进程优先级,还能以图表形式显示所有服务器的指标。 + +### 日志监控工具 ### + +#### 56. [GoAccess][45] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/goaccess.jpg) + +GoAccess 是一个实时的网络日志分析器,它能分析 apache, nginx 和 amazon cloudfront 的访问日志。它也可以将数据输出成 HTML,JSON 或 CSV 格式。它会给你一个基本的统计信息,访问量,404页面,访客位置和其他东西。 + +#### 57. [Logwatch][46] #### + +Logwatch 是一个日志分析系统。它通过分析系统的日志,并为你所指定的区域创建一个分析报告。它每天给你一个报告可以让你花费更少的时间来分析日志。 + +#### 58. [Swatch][47] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/swatch.jpg) + +像 Logwatch 一样,Swatch 也监控你的日志,但不是给你一个报告,它会匹配你定义的正则表达式,当匹配到后会通过邮件或控制台通知你。它可用于检测入侵者。 + +#### 59. [MultiTail][48] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/multitail.jpg) + +MultiTail 可帮助你在多窗口下监控日志文件。你可以将这些日志文件合并成一个。它也像正则表达式一样使用不同的颜色来显示日志文件以方便你阅读。 + +#### 系统工具 #### + +#### 60. [acct or psacct][49] #### + +acct 也称 psacct(取决于如果你使用 apt-get 还是 yum)可以监控所有用户执行的命令,包括 CPU 和内存在系统内所使用的时间。一旦安装完成后你可以使用命令 ‘sa’ 来查看。 + +#### 61. [whowatch][50] #### + +类似 acct,这个工具监控系统上所有的用户,并允许你实时查看他们正在执行的命令及运行的进程。它将所有进程以树状结构输出,这样你就可以清楚地看到到底发生了什么。 + +#### 62. [strace][51] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/strace.jpg) + +strace 被用于诊断,调试和监控程序之间的相互调用过程。最常见的做法是用 strace 打印系统调用的程序列表,其可以看出程序是否像预期那样被执行了。 + +#### 63. [DTrace][52] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dtrace.jpg) + +DTrace 可以说是 strace 的大哥。它动态地跟踪与检测代码实时运行的指令。它允许你深入分析其性能和诊断故障。但是,它并不简单,大约有1200本书中提到过它。 + +#### 64. [webmin][53] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/webmin.jpg) + +Webmin 是一个基于 Web 的系统管理工具。它不需要手动编辑 UNIX 配置文件,并允许你远程管理系统。它有一对监控模块用于连接它。 + +#### 65. 
stat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/stat.jpg) + +Stat 是一个内置的工具,用于显示文件和文件系统的状态信息。它会显示文件被修改,访问或更改的信息。 + +#### 66. ifconfig #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ifconfig.jpg) + +ifconfig 是一个内置的工具用于配置网络接口。大多数网络监控工具背后都使用 ifconfig 将其设置成混乱模式来捕获所有的数据包。你可以手动执行 `ifconfig eth0 promisc` 并使用 `ifconfig eth0 -promisc` 返回正常模式。 + +#### 67. [ulimit][54] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/unlimit.jpg) + +ulimit 是一个内置的工具,可监控系统资源,并可以限制任何监控资源不得超标。比如做一个 fork 炸弹,如果使用 ulimit 正确配置了将完全不受影响。 + +#### 68. [cpulimit][55] #### + +CPULimit 是一个小工具用于监控并限制进程对 CPU 的使用率。其特别有用,能限制批处理作业对 CPU 的使用率保持在一定范围。 + +#### 69. lshw #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lshw.jpg) + +lshw 是一个小的内置工具能提取关于本机硬件配置的详细信息。它可以输出 CPU 版本和主板配置。 + +#### 70. w #### + +w 是一个内置命令用于显示当前登录用户的信息及他们所运行的进程。 + +#### 71. lsof #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lsof.jpg) + +lsof 是一个内置的工具可让你列出所有打开的文件和网络连接。从那里你可以看到文件是由哪个进程打开的,基于进程名,可通过一个特定的用户来杀死属于某个用户的所有进程。 + +### 基础架构监控工具 ### + +#### 72. Server Density #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/server-density-monitoring.png) + +我们的 [服务器监控工具][56]!它有一个 web 界面,使你可以进行报警设置并可以通过图表来查看所有系统的网络指标。你还可以设置监控的网站,无论是否在线。Server Density 允许你设置用户的权限,你可以根据我们的插件或 api 来扩展你的监控。该服务已经支持 Nagios 的插件了。 + +#### 73. [OpenNMS][57] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/opennms.jpg) + +OpenNMS 主要有四个功能区:事件管理和通知;发现和配置;服务监控和数据收集。其设计可被在多种网络环境中定制。 + +#### 74. [SysUsage][58] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sysusage.jpg) + +SysUsage 通过 Sar 和其他系统命令持续监控你的系统。一旦达到阈值它也可以进行报警通知。SysUsage 本身也可以收集所有的统计信息并存储在一个地方。它有一个 Web 界面可以让你查看所有的统计数据。 + +#### 75. 
[brainypdm][59] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/brainypdm.jpg) + +brainypdm 是一个数据管理和监控工具,它能收集来自 nagios 或其它公共资源的数据并以图表显示。它是跨平台的,其基于 Web 并可自定义图形。 + +#### 76. [PCP][60] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pcp.jpg) + +PCP 可以收集来自多个主机的指标,并且效率很高。它也有一个插件框架,所以你可以把它收集的对你很重要的指标使用插件来管理。你可以通过任何一个 Web 界面或 GUI 访问图形数据。它比较适合大型监控系统。 + +#### 77. [KDE 系统保护][61] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/kdesystemguard.jpg) + +这个工具既是一个系统监控器也是一个任务管理器。你可以通过工作表来查看多台机器的服务指标,如果一个进程需要被杀死或者你需要启动一个进程,它可以在 KDE 系统保护中来完成。 + +#### 78. [Munin][62] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/munin.jpg) + +Munin 既是一个网络也是系统监控工具,当一个指标超出给定的阈值时它会提供报警机制。它运用 RRDtool 创建图表,并且它也有 Web 界面来显示这些图表。它更强调的是即插即用的功能并且有许多可用的插件。 + +#### 79. [Nagios][63] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nagios.jpg) + +Nagios 是系统和网络监控工具,可帮助你监控多台服务器。当发生错误时它也有报警功能。它的平台也有很多的插件。 + +#### 80. [Zenoss][64] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zenoss.jpg) + +Zenoss 提供了一个 Web 界面,使你可以监控所有的系统和网络指标。此外,它能自动发现网络资源和修改网络配置。并且会提醒你采取行动,它也支持 Nagios 的插件。 + +#### 81. [Cacti][65] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cacti.jpg) + +(和上一个一样!) Cacti 是一个网络图形解决方案,其使用 RRDtool 进行数据存储。它允许用户在预定的时间间隔进行投票服务并将结果以图形显示。Cacti 可以通过 shell 脚本扩展来监控你所选择的来源。 + +#### 82. [Zabbix][66] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zabbix-monitoring.png) + +Zabbix 是一个开源的基础设施监控解决方案。它使用了许多数据库来存放监控统计信息。其核心是用 C 语言编写,并在前端中使用 PHP。如果你不喜欢安装代理,Zabbix 可能是一个最好选择。 + +### 附加部分: ### + +感谢您的建议。这是我们的一个附加部分,由于我们需要重新编排所有的标题,鉴于此,这是在最后的一个简短部分,根据您的建议添加的一些 Linux 监控工具: + +#### 83. [collectd][67] #### + +Collectd 是一个 Unix 守护进程来收集所有的监控数据。它采用了模块化设计并使用插件来填补一些缺陷。这样能使 collectd 保持轻量级并可进行定制。 + +#### 84. 
[Observium][68] #### + +Observium 是一个自动发现网络的监控平台,支持普通的硬件平台和操作系统。Observium 专注于提供一个优美,功能强大,简单直观的界面来显示网络的健康和状态。 + +#### 85. Nload #### + +这是一个命令行工具来监控网络的吞吐量。它很整洁,因为它使用两个图表和其他一些有用的数据类似传输的数据总量来对进出站流量进行可视化。你可以使用如下方法安装它: + + yum install nload + +或者 + + sudo apt-get install nload + +#### 86. [SmokePing][69] #### + +SmokePing 可以跟踪你网络延迟,并对他们进行可视化。SmokePing 有一个流行的延迟测量插件。如果图形用户界面对你来说非常重要,现在有一个正在开发中的插件来实现此功能。 + +#### 87. [MobaXterm][70] #### + +如果你整天在 windows 环境下工作。你可能会觉得 Windows 下受终端窗口的限制。MobaXterm 正是由此而来的,它允许你使用多个在 Linux 中相似的终端。这将会极大地帮助你在监控方面的需求! + +#### 88. [Shinken monitoring][71] #### + +Shinken 是一个监控框架,其是由 python 对 Nagios 进行完全重写的。它的目的是增强灵活性和管理更大环境。但仍保持所有的 nagios 配置和插件。 + +-------------------------------------------------------------------------------- + +via: https://blog.serverdensity.com/80-linux-monitoring-tools-know/ + +作者:[Jonathan Sundqvist][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + + +[a]:https://www.serverdensity.com/ +[1]:http://hisham.hm/htop/ +[2]:http://www.atoptool.nl/ +[3]:https://github.com/JeremyJones/Apachetop +[4]:http://www.proftpd.org/docs/howto/Scoreboard.html +[5]:http://jeremy.zawodny.com/mysql/mytop/ +[6]:https://01.org/powertop +[7]:http://guichaz.free.fr/iotop/ +[8]:http://www.ntop.org/products/ntop/ +[9]:http://www.ex-parrot.com/pdw/iftop/ +[10]:http://jnettop.kubs.info/wiki/ +[11]:http://bandwidthd.sourceforge.net/ +[12]:http://etherape.sourceforge.net/ +[13]:https://www.kernel.org/pub/software/network/ethtool/ +[14]:http://nethogs.sourceforge.net/ +[15]:http://iptraf.seul.org/ +[16]:http://ngrep.sourceforge.net/ +[17]:http://oss.oetiker.ch/mrtg/ +[18]:https://github.com/tgraf/bmon/ +[19]:http://www.phildev.net/iptstate/index.shtml +[20]:https://unix4lyfe.org/darkstat/ +[21]:http://humdi.net/vnstat/ +[22]:http://nmap.org/ +[23]:http://www.bitwizard.nl/mtr/ +[24]:http://www.tcpdump.org/ 
+[25]:http://justniffer.sourceforge.net/ +[26]:http://nmon.sourceforge.net/pmwiki.php +[27]:http://conky.sourceforge.net/ +[28]:https://github.com/nicolargo/glances +[29]:https://packages.debian.org/sid/utils/saidar +[30]:http://oss.oetiker.ch/rrdtool/ +[31]:http://mmonit.com/monit +[32]:http://sourceforge.net/projects/procexp/ +[33]:http://packages.ubuntu.com/lucid/utils/discus +[34]:http://www.pogo.org.uk/~mark/xosview/ +[35]:http://dag.wiee.rs/home-made/dstat/ +[36]:http://www.net-snmp.org/ +[37]:http://inotify.aiken.cz/?section=incron&page=about&lang=en +[38]:http://www.monitorix.org/ +[39]:http://sebastien.godard.pagesperso-orange.fr/ +[40]:http://collectl.sourceforge.net/ +[41]:http://sebastien.godard.pagesperso-orange.fr/ +[42]:http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html +[43]:http://members.dslextreme.com/users/billw/gkrellm/gkrellm.html +[44]:http://freecode.com/projects/gnome-system-monitor +[45]:http://goaccess.io/ +[46]:http://sourceforge.net/projects/logwatch/ +[47]:http://sourceforge.net/projects/swatch/ +[48]:http://www.vanheusden.com/multitail/ +[49]:http://www.gnu.org/software/acct/ +[50]:http://whowatch.sourceforge.net/ +[51]:http://sourceforge.net/projects/strace/ +[52]:http://dtrace.org/blogs/about/ +[53]:http://www.webmin.com/ +[54]:http://ss64.com/bash/ulimit.html +[55]:https://github.com/opsengine/cpulimit +[56]:https://www.serverdensity.com/server-monitoring/ +[57]:http://www.opennms.org/ +[58]:http://sysusage.darold.net/ +[59]:http://sourceforge.net/projects/brainypdm/ +[60]:http://www.pcp.io/ +[61]:https://userbase.kde.org/KSysGuard +[62]:http://munin-monitoring.org/ +[63]:http://www.nagios.org/ +[64]:http://www.zenoss.com/ +[65]:http://www.cacti.net/ +[66]:http://www.zabbix.com/ +[67]:https://collectd.org/ +[68]:http://www.observium.org/ +[69]:http://oss.oetiker.ch/smokeping/ +[70]:http://mobaxterm.mobatek.net/ +[71]:http://www.shinken-monitoring.org/ diff --git a/translated/share/20151104 Optimize Web Delivery with 
these Open Source Tools.md b/translated/share/20151104 Optimize Web Delivery with these Open Source Tools.md new file mode 100644 index 0000000000..21fd8ad8e2 --- /dev/null +++ b/translated/share/20151104 Optimize Web Delivery with these Open Source Tools.md @@ -0,0 +1,195 @@ +使用开源工具优化Web响应 +================================================================================ +Web代理软件转发HTTP请求时并不会改变数据流量。它们经过配置后,可以免客户端配置,作为透明代理。它们还可以作为网站反向代理的前端;缓存服务器在此能支撑一台或多台web服务器为海量用户提供服务。 + +网站代理功能多样,有着宽泛的用途:从页面缓存、DNS和其他查询,到加速web服务器响应、降低带宽消耗。代理软件广泛用于大型高访问量的网站,比如纽约时报、卫报,以及社交媒体网站如Twitter、Facebook和Wikipedia。 + +页面缓存已经成为优化单位时间内所能吞吐的数据量的至关重要的机制。好的Web缓存还能降低延迟,尽可能快地响应页面,让终端用户不至于因等待内容的时间过久而失去耐心。它们还能将频繁访问的内容缓存起来以节省带宽。如果你需要降低服务器负载并改善网站内容响应速度,那缓存软件能带来的好处就绝对值得探索一番。 + +为深入探查Linux下可用的相关软件的质量,我列出了下边5个优秀的开源web代理工具。它们中有些功能完备强大,也有几个只需很低的资源就能运行。 + +### Squid ### + +Squid是一个高性能、开源的代理缓存和Web缓存服务器,支持FTP、Internet Gopher、HTTPS和SSL等多种协议。它通过一个非阻塞、I/O事件驱动的单一进程处理所有IPv4或IPv6上的请求。 + +Squid由主服务程序squid、DNS查询程序dnsserver、可选的请求重写和认证程序组件,及一些管理和客户端工具构成。 + +Squid提供了丰富的访问控制、认证和日志环境,用于开发web代理和内容服务网站应用。 + +其特性包括: + +- Web代理: + - 通过缓存来降低访问时间和带宽使用 + - 将元数据和特别热的对象缓存到内存中 + - 缓存DNS查询 + - 支持非阻塞的DNS查询 + - 实现了失败请求的未果缓存 +- Squid缓存可架设为层次结构,或网状结构以节省额外的带宽 +- 通过可扩展的访问控制来执行网站使用条款 +- 隐匿请求,如禁用或修改客户端HTTP请求头特定属性 +- 反向代理 +- 媒体范围限制 +- 支持SSL +- 支持IPv6 +- 错误页面的本地化 - Squid可以根据访问者的语言选项对每个请求展示本地化的错误页面 +- 连接Pinning(用于NTLM Auth Passthrough) - 一种通过Web代理,允许Web服务器使用Microsoft NTLM安全认证替代HTTP标准认证的方案 +- 支持服务质量 (QoS, Quality of Service) 流 + - 选择一个TOS/Diffserv值来标记本地命中 + - 选择一个TOS/Diffserv值来标记邻居命中 + - 选择性地仅标记同级或上级请求 + - 允许发往客户端的HTTP响应保留远程服务器响应中的TOS值 + - 对收到的远程服务器的TOS值,在复制之前对指定位进行掩码操作,再发送到客户端 +- SSL Bump (用于HTTPS过滤和适配) - Squid-in-the-middle,在CONNECT方式的SSL隧道中,用配置化的客户端和服务器端证书,对流量进行解密和加密 +- 支持适配模块 +- ICAP旁路和重试增强 - 通过完全的旁路和动态链式路由扩展ICAP,来处理多个适应性服务 +- 支持ICY流式协议 - 俗称SHOUTcast多媒体流 +- 动态SSL证书生成 +- 支持ICAP协议(Internet Content Adaptation Protocol) +- 完整的请求日志记录 +- 匿名连接 + +- 网站: [www.squid-cache.org][1] +- 开发: 美国国家应用网络研究实验室和网络志愿者 +- 授权: GNU GPL v2 +- 版本号: 4.0.1
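结合上面提到的访问控制特性,下面给出一个极简的 squid.conf 配置片段作为示意(其中的网段仅为假设值,请按实际环境修改;各指令的完整用法请以 Squid 官方文档为准):

```
# 定义一个名为 localnet 的 ACL,匹配(假设的)内网网段
acl localnet src 192.168.0.0/16
# 只允许内网客户端使用代理,拒绝其余所有请求
http_access allow localnet
http_access deny all
# Squid 监听的端口(3128 为默认值)
http_port 3128
```

修改配置后,通常可以先用 `squid -k parse` 检查语法,再用 `squid -k reconfigure` 让运行中的 Squid 重新加载配置。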
+ +### Privoxy ### + +Privoxy(Privacy Enhancing Proxy)是一个非缓存类Web代理软件,它自带的高级过滤功能用来增强隐私保护,修改页面内容和HTTP头部信息,访问控制,以及去除广告和其它招人反感的互联网垃圾。Privoxy的配置非常灵活,能充分定制以满足各种各样的需求和偏好。它支持单机和多用户网络两种模式。 + +Privoxy使用Actions规则来处理浏览器和远程站点间的数据流。 + +其特性包括: + +- 高度配置化 +- 广告拦截 +- Cookie管理 +- 支持"Connection: keep-alive"。可以无视客户端配置而保持持久连接 +- 支持IPv6 +- 标签化,允许按照客户端和服务器的请求头进行处理 +- 作为拦截代理器运行 +- 巧妙的手段和过滤机制用来处理服务器和客户端的HTTP头部 +- 可以与其他代理软件链式使用 +- 整合了基于浏览器的配置和控制工具,能在线跟踪规则和过滤效果,可远程开关 +- 页面过滤(文本替换、根据尺寸大小删除广告栏, 隐藏的"web-bugs"元素和HTML容错等) +- 模块化的配置使得标准配置和用户配置可以存放于不同文件中,这样安装更新就不会覆盖用户的个性化设置 +- 配置文件支持Perl兼容的正则表达式,以及更为精妙和灵活的配置语法 +- GIF去动画 +- 旁路处理大量click-tracking脚本(避免脚本重定向) +- 大多数代理生成的页面(例如 "访问受限" 页面)可由用户自定义HTML模板 +- 自动监测配置文件的修改并重新读取 +- 最大特点是可以基于每个站点或每个位置来进行控制 + +- 网站: [www.privoxy.org][2] +- 开发: Fabian Keil(开发领导者), David Schmidt, 和众多其他贡献者 +- 授权: GNU GPL v2 +- 版本号: 3.4.2 + +### Varnish Cache ### + +Varnish Cache是一个为性能和灵活性而生的web加速器。它新颖的架构设计能带来显著的性能提升。根据你的架构,通常情况下它能加速响应速度300-1000倍。Varnish将页面存储到内存,这样web服务器就无需重复地创建相同的页面,只需要在页面发生变化后重新生成。页面内容直接从内存中访问,当然比其他方式更快。 + +此外,无论搭配何种应用服务器,Varnish 都能显著加快网站的响应速度。 + +按照经验,Varnish Cache比较经济的配置是1-16GB内存+SSD固态硬盘。 + +其特性包括: + +- 新颖的设计 +- VCL - 非常灵活的配置语言。VCL配置转换成C,然后编译、加载、运行,灵活且高效 +- 能使用round-robin轮询和随机分发两种方式来负载均衡,两种方式下后端服务器都可以设置权重 +- 基于DNS、随机、散列和客户端IP的分发器 +- 多台后端主机间的负载均衡 +- 支持Edge Side Includes,包括拼装压缩后的ESI片段 +- 多线程并发 +- URL重写 +- 单Varnish缓存多个虚拟主机 +- 日志数据存储在共享内存中 +- 基本的后端服务器健康检查 +- 优雅地处理后端服务器"挂掉" +- 命令行界面的管理控制台 +- 使用内联C来扩展Varnish +- 可以与Apache用在相同的系统上 +- 单系统可运行多个Varnish +- 支持HAProxy代理协议。该协议会在每个收到的TCP请求(例如SSL终止过程)上附加一小段头信息,以记录客户端的真实地址 +- 冷热VCL状态 +- 用名为VMODs的Varnish模块来提供插件扩展 +- 通过VMODs定义后端主机 +- Gzip压缩及解压 +- HTTP流通过和获取 +- 神圣模式和优雅模式。用Varnish作为负载均衡器,神圣模式下可以将不稳定的后端服务器在一段时间内打入黑名单,阻止它们继续提供流量服务。优雅模式允许Varnish在获取不到后端服务器状态良好的响应时,提供已过期版本的页面或其它内容。 +- 实验性支持持久化存储,无需LRU缓存淘汰 + +- 网站: [www.varnish-cache.org][3] +- 开发: Varnish Software +- 授权: FreeBSD +- 版本号: 4.1.0 + +### Polipo ### + +Polipo是一个开源的HTTP缓存代理,只需要非常低的资源开销。 + 
+它监听来自浏览器的web页面请求,转发到web服务器,然后将服务器的响应转发到浏览器。在此过程中,它能优化和整形网络流量。从本质来讲Polipo与WWWOFFLE很相似,但其实现技术更接近于Squid。 + +Polipo最开始的目标是作为一个兼容HTTP/1.1的代理,理论上它能在任何兼容HTTP/1.1或更早的HTTP/1.0的站点上运行。 + +其特性包括: + +- HTTP 1.1、IPv4 & IPv6、流量过滤和隐私保护增强 +- 如确认远程服务器支持,则无论收到的请求是管道处理过的还是在多个连接上同时收到的,都使用HTTP/1.1管道 +- 下载被中断时缓存起始部分,当需要续传时用区间请求来完成下载 +- 将HTTP/1.0的客户端请求升级为HTTP/1.1,然后按照客户端支持的级别进行升级或降级后回复 +- 全面支持IPv6 (作用域(链路本地)地址除外) +- 作为IPv4和IPv6网络的网桥 +- 内容过滤 +- 能使用Poor Man多路复用技术降低延迟 +- 支持SOCKS 4和SOCKS 5协议 +- HTTPS代理 +- 扮演透明代理的角色 +- 可以与Privoxy或tor一起运行 + +- 网站: [www.pps.univ-paris-diderot.fr/~jch/software/polipo/][4] +- 开发: Juliusz Chroboczek, Christopher Davis +- 授权: MIT License +- 版本号: 1.1.1 + +### Tinyproxy ### + +Tinyproxy是一个轻量级的开源web代理守护进程,其设计目标是快而小。它适用于需要完整HTTP代理特性,但系统资源又不足以运行大型代理的场景,比如嵌入式部署。 + +Tinyproxy对小规模网络非常有用,这样的场合下大型代理会使系统资源紧张,或有安全风险。Tinyproxy的一个关键特性是其缓冲连接的理念。实质上,Tinyproxy 会对服务器的响应进行高速缓冲,然后按照客户端能够处理的最高速度转发响应。该特性极大地降低了网络延迟带来的问题。 + +特性: + +- 易于修改 +- 隐匿模式 - 定义哪些HTTP头允许通过,哪些又会被拦截 +- 支持HTTPS - Tinyproxy允许通过CONNECT方法转发HTTPS连接,任何情况下都不会修改数据流量 +- 远程监控 - 远程访问代理统计数据,让你能清楚了解代理服务当前的忙碌状态 +- 平均负载监控 - 通过配置,当服务器的负载接近一定值后拒绝新连接 +- 访问控制 - 通过配置,仅允许指定子网或IP地址的访问 +- 安全 - 运行无需额外权限,减小了系统受到威胁的概率 +- 基于URL的过滤 - 允许基于域和URL的黑白名单 +- 透明代理 - 可配置为透明代理,这样客户端就无需任何配置 +- 代理链 - 在流量出口处采用上游代理服务器,而不是直接转发到目标服务器,构成所谓的代理链 +- 隐私特性 - 限制允许从浏览器收到的来自HTTP服务器的数据(例如cookies),同时限制允许通过的从浏览器到HTTP服务器的数据(例如版本信息) +- 低开销 - 使用glibc内存开销只有2MB,CPU负载按并发连接数线性增长(取决于网络连接速度)。 Tinyproxy可以运行在老旧的机器上而无需担心性能问题。 + +- 网站: [banu.com/tinyproxy][5] +- 开发: Robert James Kaes和其他贡献者 +- 授权: GNU GPL v2 +- 版本号: 1.8.3 + +-------------------------------------------------------------------------------- + +via: http://www.linuxlinks.com/article/20151101020309690/WebDelivery.html + +译者:[fw8899](https://github.com/fw8899) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.squid-cache.org/ +[2]:http://www.privoxy.org/ +[3]:https://www.varnish-cache.org/ 
+[4]:http://www.pps.univ-paris-diderot.fr/%7Ejch/software/polipo/ +[5]:https://banu.com/tinyproxy/ diff --git a/translated/talk/20101020 19 Years of KDE History--Step by Step.md b/translated/talk/20101020 19 Years of KDE History--Step by Step.md new file mode 100644 index 0000000000..ef90acd91f --- /dev/null +++ b/translated/talk/20101020 19 Years of KDE History--Step by Step.md @@ -0,0 +1,209 @@ +# 19年KDE进化历程 +注:youtube 视频 + + +## 概述 +KDE – 史上功能最强大的桌面环境之一; 开源且免费。19年前,1996年10月14日,德国程序员 Matthias Ettrich 开始了编写这个美观的桌面环境。KDE提供了诸如shell以及其他很多日常使用的程序。今日,KDE被成千上万人在 Unix 和 Windows 上使用。19年----一个对软件项目而言极为漫长的年岁。现在是时候让我们回到最初,看看这一切从哪里开始了。 + +K Desktop Environment(KDE)有很多创新之处:新设计,美观,连贯性,易于使用,对普通用户和专业用户都足够强大的应用库。"KDE"这个名字是对单词"通用桌面环境"(Common Desktop Environment)玩的一个简单谐音游戏,"K"----"Cool"。 第一代KDE在双证书授权下使用了有专利的 Trolltech's Qt 框架 (现Qt的前身),这两个许可证分别是 open source QPL(Q public license) 和 商业专利许可证(proprietary commercial license)。在2000年 Trolltech 让一部分 Qt 软件库开始发布在 GPL 证书下; Qt 4.5 发布在了 LGPL 2.1 许可证下。自2009起 KDE 桌面环境由三部分构成:Plasma Workspaces (作Shell),KDE 应用,作为 KDE Software 编译的 KDE Platform. 
+ +## 各发布版本 +### Pre-Release – 1996年10月14日 +![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/0b3.png) + +当时名称为 Kool Desktop Environment;"Kool"这个单词在很快就被弃用了。最初,所有KDE的组件都是被单独发布在开发社区里的,他们之间没有任何环绕大项目的组装配合。开发组邮件列表中的第一封通信是发往kde@fiwi02.wiwi.uni-Tubingen.de 的邮件。 + +### KDE 1.0 – 1998年7月12日 +![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/10.png) + +这个版本受到了颇有争议的反馈。很多人反对使用Qt框架----当时的 FreeQt 许可证和自由软件许可证并不兼容----并建议开发组使用 Motif 或者 LessTif 替代。尽管有着这些反对声,KDE 仍然被很多用户所青睐,并且成功作为第一个Linux发行版的环境被集成了进去。(made its way into the first Linux distributions) + +![28 January 1999](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/11.png) + +1999年1月28日 + +一次升级,**K Desktop Environment 1.1**,更快,更稳定的同时加入了很多小升级。这个版本同时也加入了很多新的图标,背景,外观文理。和这些全面翻新同时出现的还有 Torsten Rahn 绘制的全新KDE图标----齿轮前的3个K字母;这个图标的修改版也一直沿用至今。 + +### KDE 2.0 – 2000年10月23日 +![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/20.png) + +重大更新:_ DCOP (Desktop COmmunication Protocol),一个端到端的通信协议 _ KIO,一个应用程序I/O库 _ KParts,组件对象模板 _ KHTML,一个符合 HTML 4.0 标准的图像绘制引擎。 + +![26 February 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/21.png) + +2001年2月26日 + +**K Desktop Environment 2.1** 首次发布了媒体播放器 noatun,noatun使用了先进的模组-插件设计。为了便利开发者,K Desktop Environment 2.1 打包了 KDevelop + +![15 August 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/22.png) + +2001年8月15日 + +**KDE 2.2**版本在GNU/Linux上加快了50%的应用启动速度,同时提高了稳定性和 HTML、JavaScript的解析性能,同时还增加了一些 KMail 的功能。 + +### KDE 3.0 – 2002年4月3日 +![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/30.png) + +K Desktop Environment 3.0 加入了更好的限制使用功能,这个功能在网咖,企业公用电脑上被广泛需求。 + +![28 January 2003](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/31.png) + +2003年1月28日 + +**K Desktop Environment 3.1** 加入了新的默认窗口(Keramik)和图标样式(Crystal)和其他一些改进。 + +![3 February 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/32.png) + +2004年2月3日 + +**K Desktop Environment 3.2** 
加入了诸如网页表格,书写邮件中拼写检查的新功能;补强了邮件和日历功能。完善了Konqueror 中的标签机制和对 Microsoft Windows 桌面共享协议的支持。 + +![19 August 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/33.png) + +2004年8月19日 + +**K Desktop Environment 3.3** 侧重于组合不同的桌面组件。Kontact 被放进了群件应用Kolab 并与 Kpilot 结合。Konqueror 的加入让KDE有了更好的 IM 交流功能,比如支持发送文件,以及其他 IM 协议(如IRC)的支持。 + +![16 March 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/34.png) + +2005年3月16日 + +**K Desktop Environment 3.4** 侧重于提高易用性。这次更新为Konqueror,Kate,KPDF加入了文字-语音转换功能;也在桌面系统中加入了独立的 KSayIt 文字-语音转换软件。 + +![29 November 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/35.png) + +2005年11月29日 + +**The K Desktop Environment 3.5** 发布加入了 SuperKaramba,为桌面环境提供了易于安装的插件机制。 desktop. Konqueror 加入了广告屏蔽功能并成为了有史以来第二个通过Acid2 CSS 测试的浏览器。 + +### KDE SC 4.0 – 2008年1月11日 +![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/400.png) + +大部分开组投身于把最新的技术和开发框架整合进 KDE 4 当中。Plasma 和 Oxygen 是两次最大的用户界面风格变更。同时,Dolphin 替代 Konqueror 成为默认文件管理器,Okular 成为了默认文档浏览器。 + +![29 July 2008](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/401.png) + +2008年7月29日 + +**KDE 4.1** 引入了一个在 PIM 和 Kopete 中使用的表情主题系统;引入了可以让用户便利地从互联网上一键下载数据的DXS。同时引入了 GStreamer,QuickTime,和 DirectShow 9 Phonon 后台。加入了新应用如:_ Dragon Player _ Kontact _ Skanlite – 扫描仪软件,_ Step – 物理模拟软件 * 新游戏: Kdiamond,Kollision,KBreakout 和更多...... 
+ +![27 January 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/402.png) + +2009年1月27日 + +**KDE 4.2** 被认为是在已经极佳的 KDE 4.1 基础上的又一次全面超越,同时也成为了大多数用户替换旧 3.5 版本的完美选择。 + +![4 August 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/403.png) + +2009年8月4日 + +**KDE 4.3** 修复了超过 10,000 个 bug,同时加入了近 2,000 个用户请求的功能。整合 PolicyKit、NetworkManager 和 Geolocation 服务等新技术也是这个版本的一大重点。 + +![9 February 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/404.png) + +2010年2月9日 + +**KDE SC 4.4** 基于 Qt 4 框架的 4.6 版本,加入了新应用 KAddressBook,Kopete 也首次随之发布。 + +![10 August 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/405.png) + +2010年8月10日 + +**KDE SC 4.5** 增加了一些新特性:整合了 WebKit 库----这是一个开源的浏览器引擎库,如今被广泛用于 Apple Safari 和 Google Chrome。KPackageKit 替换了 KPackage。 + +![26 January 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/406.png) + +2011年1月26日 + +**KDE SC 4.6** 加强了 OpenGL 的性能,同时照常修复了无数 bug 并带来许多小改进。 + +![27 July 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/407.png) + +2011年7月27日 + +**KDE SC 4.7** 升级 KWin 以兼容 OpenGL ES 2.0,更新了 Qt Quick,为 Plasma Desktop 中的应用带来了普遍可用的新特性,并修复了 1.2 万个 bug。 + +![25 January 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/408.png) + +2012年1月25日 + +**KDE SC 4.8**: 更好的 KWin 性能与 Wayland 支持,更新了 Dolphin 的外观设计。 + +![1 August 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/409.png) + +2012年8月1日 + +**KDE SC 4.9**: 向 Dolphin 文件管理器增加了一些更新,比如加入了实时文件重命名,鼠标辅助按钮支持,更好的位置标签和更多文件分类管理功能。 + +![6 February 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/410.png) + +2013年2月6日 + +**KDE SC 4.10**: 很多 Plasma 插件使用 QML 重写; Nepomuk,Kontact 和 Okular 得到了很大程度的性能和功能提升。 + +![14 August 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/411.png) + +2013年8月14日 + +**KDE SC 4.11**: Kontact 和 Nepomuk
有了很大的优化。 第一代 Plasma Workspaces 进入了只进行维护、不再开发新功能的阶段。 + +![18 December 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/412.png) + +2013年12月18日 + +**KDE SC 4.12**: Kontact 得到了极大的提升。 + +![16 April 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/413.png) + +2014年4月16日 + +**KDE SC 4.13**: 新的 Baloo 桌面搜索替代了原有的 Nepomuk 语义搜索。 KDE SC 4.13 发布了53个语言版本。 + +![20 August 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/414.png) + +2014年8月20日 + +**KDE SC 4.14**: 这个发布版本侧重于稳定性提升:大量的bug修复和小更新。这是最后一个 KDE SC 4 发布版本。 + +### KDE Plasma 5.0 – 2014年7月15日 + +![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/500.png) + +KDE Plasma 5 – 第五代 KDE。大幅改进了设计和系统,新的默认主题 ---- Breeze,完全迁移到了 QML,更好的 OpenGL 性能,更完美的 HiDPI (高分辨率)显示支持。 + +![11 November 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/501.png) + +2014年11月11日 + +**KDE Plasma 5.1**:补全了 Plasma 4 中尚未完成移植的功能。 + +![27 January 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/502.png) + +2015年1月27日 + +**KDE Plasma 5.2**:新组件:BlueDevil,KSSHAskPass,Muon,SDDM 主题设置,KScreen,GTK+ 样式设置 和 KDecoration.
+ +![28 April 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/503.png) + +2015年4月28日 + +**KDE Plasma 5.3**:Plasma Media Center 技术预览。新的蓝牙和触摸板小程序;改良了电源管理。 + +![25 August 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/504.png) + +2015年8月25日 + +**KDE Plasma 5.4**:Wayland 登场,新的基于 QML 的音频管理程序,交替式全屏程序显示。 + +万分感谢 [KDE][1] 开发者和社区及Wikipedia 为书写 [概述][2] 带来的帮助,同时,感谢所有读者。希望大家保持自由精神(be free)并继续支持如同 KDE 一样的开源的自由软件发展。 + +-------------------------------------------------------------------------------- + +via: [https://tlhp.cf/kde-history/](https://tlhp.cf/kde-history/) + +作者:[Pavlo RudyiCategories][a] 译者:[jerryling315](https://github.com/jerryling315) 校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]: https://www.kde.org/ +[2]: https://en.wikipedia.org/wiki/KDE_Plasma_5 +[a]: https://tlhp.cf/author/paul/ diff --git a/translated/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md b/translated/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md new file mode 100644 index 0000000000..6f821f777d --- /dev/null +++ b/translated/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md @@ -0,0 +1,344 @@ +对比Windows 10与Linux:Linux用户已经使用'Windows 10'超过8年 +============================================================================================================================================================== +Windows 10 是2015年7月29日上市的最新一代Windows NT系列系统,Windows 8.1 的继任者.Windows 10 支持Intel 32位平台,AMD64以及ARM v7处理器. + +![Windows 10 and Linux Comparison](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-vs-Linux.jpg) + +对比:Windows 10与Linux + +作为一个连续使用linux超过8年的用户,我想要去测试Windows 10 ,因为它最近制造了很多新闻.这篇文章是我观察力的一个重大突破.我将从一个linux用户的角度去看待一切,所以这篇文章可能会有些偏向于linux.尽管如此,本文也应该不会有任何错误信息. + +1. 用谷歌搜索"download Windows 10" 并且点击第一个链接. 
+ +![Search Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Windows-10.jpg) + +搜索Windows 10 + +你也可以直接打开: [https://www.microsoft.com/en_us/software-download/Windows10[1] + +2. 微软要求我从Windows 10, Windows 10 KN, Windows 10 N 和Windows 10 单语言版中选择一个版本 + +![Select Windows 10 Edition](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Windows-10-Edition.jpg) + +选择版本 + +以下是各个版本的简略信息: + +- Windows 10 - 包含微软提供给我们的所有软件 +- Windows 10N - 此版本不包含媒体播放器 +- Windows 10KN - 此版本没有媒体播放能力 +- Windows 10单语言版 - 仅预装一种语言 + +3. 我选择了第一个选项 " Windows 10"并且单击"确认".之后我要选择语言,我选择了"英语" + +微软给我提供了两个下载链接.一个是32位版,另一个是64位版.我单击了64位版--这与我的电脑架构相同. + +![Download Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Download-Windows-10.jpg) + +下载Windows 10 + +我的带宽是15M的,下载了整整3个小时.不幸的是微软没有提供系统的种子文件,否则整个过程会更加舒畅.镜像大小为 3.8 GB(译者注:就我的10M小水管,我使用迅雷下载用时50分钟). + +我找不到更小的镜像,微软并没有为Windows提供网络安装镜像.我也没有办法在下载完成后去校验哈希值. + +我十分惊讶,Windows在这样的问题上居然如此漫不经心.为了验证这个镜像是否正确下载,我需要把它刻到光盘上或者复制到我的U盘上然后启动它,一直静静的看着它安装直到安装完成. + +首先,我用dd命令将win10的iso镜像刻录到U盘上 + + # dd if=/home/avi/Downloads/Win10_English_x64.iso of=/dev/sdb1 bs=512M; sync + +这需要一点时间.在此之后我重启系统并在UEFI(BIOS)设置中选择从我的U盘启动. + +#### 系统要求 #### + +升级 + +- 仅支持从Windows 7 SP1或者Windows 8.1升级 + +重新安装 + +- 处理器: 1GHz 以上 +- 内存: 1GB以上(32位),2GB以上(64位) +- 硬盘: 16GB以上(32位),20GB以上(64位) +- 显卡: 支持DirectX 9或更新 + WDDM 1.0 驱动 + +###Windows 10 安装过程### + +1. Windows 10启动成功了.他们又换了logo,但是仍然没有信息提示我它正在做什么. + +![Windows 10 Logo](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Logo.jpg) + +Windows 10 Logo + +2. 选择安装语言,时区,键盘,输入法,点击下一步 + +![Select Language and Time](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Language-and-Time.jpg) + +选择语言和时区 + +3. 点击'现在安装' + +![Install Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Windows-10.jpg) + +安装Windows 10 + +4. 下一步是输入密钥,我点击了跳过 + +![Windows 10 Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Product-Key.jpg) + +Windows 10 产品密钥 + +5. 
从列表中选择一个系统版本.我选择了Windows 10专业版 + +![Select Install Operating System](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Operating-System.jpg) + +选择系统版本 + +6. 到了协议部分,选中"我接受"然后点击下一步 + +![Accept License](http://www.tecmint.com/wp-content/uploads/2015/08/Accept-License.jpg) + +同意协议 + +7. 下一步是选择(从Windows的老版本)升级到Windows 10或者安装Windows.我搞不懂为什么微软要让我自己选择:"安装Windows"被微软建议为"高级"选项.但是我还是选择了"安装Windows". + +![Select Installation Type](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Installation-Type.jpg) + +选择安装类型 + +8. 选择驱动器,点击"下一步" + +![Select Install Drive](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Drive.jpg) + +选择安装盘 + +9. 安装程序开始复制文件,准备文件,安装更新,之后进行收尾.(如果安装程序能在安装时输出一堆字符来表示他在做什么就更好了) + +![Installing Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Installing-Windows.jpg) + +安装 Windows + +10. 在此之后Windows重启了.他们说为了继续,我们需要重启 + +![Windows Installation Process](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Installation-Process.jpg) + +安装进程 + +11. 我看到了一个写着"正在准备Windows"的界面.它停了整整五分多钟,仍然没有说明它正在做什么.没有输出. + +![Windows Getting Ready](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Getting-Ready.jpg) + +正在准备Windows + +12. 又到了输入产品密钥的时间.我点击了"以后再说",并使用快速设置 + +![Enter Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Enter-Product-Key.jpg) + +输入产品密钥 + +![Select Express Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Express-Settings.jpg) + +使用快速设置 + + +13. 又出现了三个界面,作为Linux用户我认为此处应有信息来告诉我安装程序在做什么,但是我想多了 +![Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Loading-Windows.jpg) + +载入 Windows + +![Getting Updates](http://www.tecmint.com/wp-content/uploads/2015/08/Getting-Updates.jpg) + +获取更新 + +![Still Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Still-Loading-Windows.jpg) + +还是载入 Windows + +14. 安装程序想要知道谁拥有这台机器,"我的组织"或者我自己 + +![Select Organization](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Organization.jpg) + +选择组织 + +15. 
安装程序提示我加入"Aruze Ad"或者"加入域".在单击继续之前,我选择了后者. + +![Connect Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Connect-Windows.jpg) + +连接网络 + +16. 安装程序让我新建一个账户.所以我输入了 "user_name"并点击下一步,我觉得我会收到一个要求我必须输入密码的信息. + +![Create Account](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Account.jpg) + +新建账户 + +17. 让我惊讶的是Windows甚至都没有警告/发现我必须创建密码.真粗心.不管怎样,现在我可以体验系统了. + +![Windows 10 Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Desktop.jpg) + +Windows 10的桌面环境 + +#### Linux用户(我)直到现在的体验 #### + +- 没有网络安装镜像 +- 镜像文件太臃肿了 +- 没有验证iso是否为正确的方法(官方没有提供哈希值) +- 启动与安装方式仍然与XP,Win 7,Win 8相同(可能吧...) +- 和以前一样,安装程序没有输出他正在干什么 - 正在复制什么和正在安装什么软件包 +- 安装程序比Linux发行版的更加直白和简单 + +####测试 Windows#### + +18. 默认桌面很干净,上面只有一个回收站图标.我们可以直接从桌面搜索网络.底部的快捷方式分别是任务预览,网络,微软应用商店.和以前的版本一样,消息栏在右下角. + +![ ](http://www.tecmint.com/wp-content/uploads/2015/08/Deskop-Shortcut-icons.jpg) + +桌面图标 + +19. IE浏览器被换成了Edge浏览器.微软把他们的老IE换成了Edge(斯巴达计划) + +![Microsoft Edge Browser](http://www.tecmint.com/wp-content/uploads/2015/08/Edge-browser.jpg) + +Edge浏览器 + +这个浏览器至少比IE要快.他们有相同的用户界面.它的主页包含新的更新.它还有一个标题是"下一步怎么走".由于它全面的性能提升,它的加载速度非常快.Edge的内存占用看起来一般般. + +![Windows Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Performance.jpg) + +性能 + +Edge也有小娜加成 -- 智能个人助理.支持笔记(在浏览网页时记笔记),分享(在本TAB分享而不必打开其他TAB) + +#### Linux用户(我)此时体验 #### + +20. 微软确实提升了网页浏览体验.我们要看的就是他的稳定性和质量.现在它并不落后. + +21. 对我来说,Edge的内存占用不算太大.但是有很多用户抱怨他的内存占用. + +22. 很难说目前Edge已经准备好了与火狐或Chrome竞争.让我们静观其变. + +#### 更多的视觉体验 #### + +23. 重新设计的开始菜单 -- 看起来很简洁高效.Merto磁贴大部分都会动.预先放置了最通用的应用. + +![Windows Look and Feel](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Look.jpg) + +Windows + +在Linux的Gnome桌面环境下.我仅仅需要按下Win键并输入应用名就可以搜索应用. + +![Search Within Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Within-Desktop.jpg) + +桌面内进行搜索 + +24. 文件浏览器 -- 设计的很简洁.左边是进入文件夹的快捷方式. + +![Windows File Explorer](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-File-Explorer.jpg) + +Windows资源管理器 + +我们的Gnome下的文件管理也同样的简洁高效. 
+ +![File Browser on Gnome](http://www.tecmint.com/wp-content/uploads/2015/08/File-Browser.jpg) + +Gnome 的文件管理 + +25. 设置 -- 尽管Windows 10的设置有点精炼,但是我们还是可以把它与linux的设置进行对比. + +**Windows 的设置** + +![Windows 10 Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Settings.jpg) + +Windows 10 设置 + +**Linux Gnome 上的设置** + +![Gnome Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Settings.jpg) + +Gnome 的设置 + +26. 应用列表 -- 目前,Linux上的应用列表比之前的版本要好一些 + +**Windows 的应用列表** + +![Application List on Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Application-List-on-Windows-10.jpg) + +Windows 10 的应用列表 + +**Gnome3 的应用列表** + +![Gnome Application List on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Application-List-on-Linux.jpg) + +Gnome3 的应用列表 + +27. 虚拟桌面 -- Windows 10 上的虚拟桌面是近来被提及最多的特性之一 + +这是Windows 10 上的虚拟桌面. + +![Windows Virtual Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Virtual-Desktop.jpg) + +Windows的虚拟桌面 + +这是我们Linux用户使用了超过20年的虚拟桌面. + +![Virtual Desktop on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Virtual-Desktop-on-Linux.jpg) + +Linux的虚拟桌面 + +#### Windows 10 的其他新特性 #### + +28. Windows 10 自带wifi感知.它会把你的wifi密码分享给他人.任何在你wifi范围内并且曾经通过Skype, Outlook, Hotmail 或 Facebook与你联系的人都能够获得你的网络接入权.这个特性的本意是让用户可以省时省力的连接网络. + +在微软对于 Tecmint 的问题的回答中,他们说道 -- 用户需要在每次到一个新的网络环境时自己去同意打开wifi感知.如果我们考虑到网络安全这将是很不安全的一件事.微软的说法并没有说服我. + +29. 从Windows 7 和 Windows 8.1升级可以省下买新版的花费.(家庭版$119 专业版$199 ) + +30. 微软发布了第一个累积更新,这个更新在一小部分设备上会让系统一直重启.Windows可能不知道这个问题或者不知道它发生的原因. + +31. 微软内建的禁用/隐藏我不想要的更新的功能在我这不起作用.这意味着一旦更新开始推送,你没有方法去禁用/隐藏他们.对不住啦,Windows 用户. + +#### Windows 10 包含的来源于Linux的功能 #### + +Windows 10有很多直接取自Linux的功能.如果Linux不已GPL发布的话,以下下这些功能永远不会出现在Windows上. + +32. 包管理器 -- 是的,你没有听错!Windows 10内建了一个包管理器.它只在Power Shell下工作.OneGet是Windows的官方包管理器. 
+ +![Windows 10 Package Manager](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Package-Manager.jpg) + + Windows 10的包管理器 + +- 无国界的Windows +- 扁平化图标 +- 虚拟桌面 +- 离线/在线搜索一体化 +- 手机/桌面系统一体化 + +### 总体印象### + +- 响应速度提升 +- 动画很好看 +- 资源占用少 +- 电池续航提升 +- Edge浏览器坚如磐石 +- 支持树莓派 2 +- Windows 10好的原因是Windows 8/8.1没有达到公众预期并且坏的可以 +- 旧瓶装新酒:Windows 10基本上就是以前的那一套换上新的图标 + +测试后我对Windows 10的评价是:Windows 10 在视觉和感觉上做了一些更新(就如同Windows经常做的那样).我要为斯巴达计划,虚拟桌面,命令行包管理器,整合在线/离线搜索的搜索栏点赞.这确实是一个更新后的产品 ,但是认为Windows 10将是Linux的最后一个棺材钉的人错了. + +Linux走在Windows前面.它们的做事方法并不相同.在以后的一段时间里Windows不会站到Linux这一旁.Linux用户也不必去使用Windows 10. + +这就是我要说的.希望你喜欢本文.如果你们喜欢本篇文章我会再写一些你们喜欢读的有趣的文章.在下方留下你的有价值的评论. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/a-linux-user-using-Windows-10-after-more-than-8-years-see-comparison/ + +作者:[vishek Kumar][a] +译者:[name1e5s](https://github.com/name1e5s) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:https://www.microsoft.com/en-us/software-download/Windows10ISO diff --git a/translated/talk/20150820 Why did you start using Linux.md b/translated/talk/20150820 Why did you start using Linux.md new file mode 100644 index 0000000000..aa48db697c --- /dev/null +++ b/translated/talk/20150820 Why did you start using Linux.md @@ -0,0 +1,144 @@ +年轻人,你为啥使用 linux +================================================================================ +> 今天的开源综述:是什么带你进入 linux 的世界?号外:IBM 基于 Linux 的大型机。以及,你应该抛弃 win10 选择 Linux 的原因。 + +### 当初你为何使用 Linux? 
### + +Linux 越来越流行,很多 OS X 或 Windows 用户都转移到 Linux 阵营了。但是你知道是什么让他们开始使用 Linux 的吗?一个 Reddit 用户在网站上问了这个问题,并且得到了很多有趣的回答。 + +一个名为 SilverKnight 的用户在 Reddit 的 Linux 板块上问了如下问题: + +> 我知道这个问题肯定被问过了,但我还是想听听年轻一代使用 Linux 的原因,以及是什么让他们坚定地成为 Linux 用户。 +> +> 我无意阻止大家讲出你们那些精彩的 Linux 故事,但是我还是对那些没有经历过什么精彩故事的新人的想法比较感兴趣。 +> +> 我27岁,半吊子 Linux 用户,这些年装过不少发行版,但没有投入全部精力去玩 Linux。我正在找更多的、能让我全身心投入到 Linux 潮流的理由,或者说激励。 +> +> [详见 Reddit][1] + +以下是网站上的回复: + +> **DoublePlusGood**:我12岁开始使用 Backtrack(现在改名为 Kali),因为我想成为一名黑客(LCTT 译注:原文1337 haxor,1337 是 leet 的火星文写法,意为'火星文',haxor 为 hackor 的火星文写法,意为'黑客',另一种写法是 1377 h4x0r,满满的火星文文化)。我现在一直使用 ArchLinux,因为它给我无限自由,让我对我的电脑可以为所欲为。 +> +> **Zack**:我记得是12、3岁的时候使用 Linux,现在15岁了。 +> +> 我11岁的时候就对 Windows XP 感到不耐烦,一个简单的功能,比如关机,TMD 都要让我耐心等着它慢慢完成。 +> +> 在那之前几个月,我在 freenode IRC 聊天室参与讨论了一个游戏,它是一个开源项目,大多数用户使用 Linux。 +> +> 我不断听到 Linux 但当时对它还没有兴趣。然而由于这些聊天频道(大部分在 freenode 上)谈论了很多编程话题,我就开始学习 python 了。 +> +> 一年后我尝试着安装 GNU/Linux (主要是 ubuntu)到我的新电脑(其实不新,但它是作为我的生日礼物被我得到的)。不幸的是它总是不能正常工作,原因未知,也许硬盘坏了,也许灰尘太多了。 +> +> 那时我放弃自己解决这个问题,然后缠着老爸给我的电脑装上 Ubuntu,他也无能为力,原因同上。 +> +> 在追求 Linux 一段时间后,我打算抛弃 Windows,使用 Linux Mint 代替 Ubuntu,本来没抱什么希望,但 Linux Mint 竟然能跑起来! 
+> +> 于是这个系统我用了6个月。 +> +> 那段时间我的一个朋友给了我一台虚拟机,跑 Ubuntu 的,我用了一年,直到我爸给了我一台服务器。 +> +> 6个月后我得到一台新 PC(现在还在用)。于是起想折腾点不一样的东西。 +> +> 我打算装 openSUSE。 +> +> 我很喜欢这个系统。然后在圣诞节的时候我得到树莓派,上面只能跑 Debian,还不能支持其它发行版。 +> +> **Cqz**:我9岁的时候有一次玩 Windows 98,结果这货当机了,原因未知。我没有 Windows 安装盘,但我爸的一本介绍编程的杂志上有一张随书附赠的光盘,这张光盘上刚好有 Mandrake Linux 的安装软件,于是我瞬间就成为了 Linux 用户。我当时还不知道自己在玩什么,但是玩得很嗨皮。这些年我虽然在电脑上装了多种 Windows 版本,但是 FLOSS 世界才是我的家。现在我只把 Windows 装在虚拟机上,用来玩游戏。 +> +> **Tosmarcel**:15岁那年对'编程'这个概念很好奇,然后我开始了哈佛课程'CS50',这个课程要我们安装 Linux 虚拟机用来执行一些命令。当时我问自己为什么 Windows 没有这些命令?于是我 Google 了 Linux,搜索结果出现了 Ubuntu,在安装 Ubuntu。的时候不小心把 Windows 分区给删了。。。当时对 Linux 毫无所知,适应这个系统非常困难。我现在16岁,用 ArchLinux,不想用回 Windows,我爱 ArchLinux。 +> +> **Micioonthet**:第一次听说 Linux 是在我5年级的时候,当时去我一朋友家,他的笔记本装的就是 MEPIS(Debian的一个比较老的衍生版),而不是 XP。 +> +> 原来是他爸爸是个美国的社会学家,而他全家都不信任微软。我对这些东西完全陌生,这系统完全没有我熟悉的软件,我很疑惑他怎么能使用。 +> +> 我13岁那年还没有自己的笔记本电脑,而我另一位朋友总是抱怨他的电脑有多慢,所以我打算把它买下来并修好它。我花了20美元买下了这台装着 Windows Vista 系统、跑满病毒、完全无法使用的惠普笔记本。我不想重装讨厌的 Windows 系统,记得 Linux 是免费的,所以我刻了一张 Ubuntu 14.04 光盘,马上把它装起来,然后我被它的高性能给震精了。 +> +> 我的世界(由于它允运行在 JAVA 上,所以当时它是 Linux 下为数不多的几个游戏之一)在 Vista 上只能跑5帧每秒,而在 Ubuntu 上能跑到25帧。 +> +> 我到现在还会偶尔使用一下那台笔记本,Linux 可不会在乎你的硬件设备有多老。 +> +> 之后我把我爸也拉入 Linux 行列,我们会以很低的价格买老电脑,装上 Linux Mint 或其他轻量级发行版,这省了好多钱。 +> +> **Webtm**:我爹每台电脑都会装多个发行版,有几台是 opensuse 和 Debian,他的个人电脑装的是 Slackware。所以我记得很小的时候一直在玩 debian,但没有投入很多精力,我用了几年的 Windows,然后我爹问我有没有兴趣试试 debian。这是个有趣的经历,在那之后我一直使用 debian。而现在我不用 Linux,转投 freeBSD,5个月了,用得很开心。 +> +> 完全控制自己的系统是个很奇妙的体验。开源届有好多酷酷的软件,我认为在自己解决一些问题并且利用这些工具解决其他事情的过程是最有趣的。当然稳定和高效也是吸引我的地方。更不用说它的保密级别了。 +> +> **Wyronaut**:我今年18,第一次玩 Linux 是13岁,当时玩的 Ubuntu,为啥要碰 Linux?因为我想搭一个'我的世界'的服务器来和小伙伴玩游戏,当时'我的世界'可是个新鲜玩意儿。而搭个私服需要用 Linux 系统。 +> +> 当时我还是个新手,对着 Linux 的命令行有些傻眼,因为很多东西都要我自己处理。还是多亏了 Google 和维基,我成功地在多台老 PC 上部署了一些简单的服务器,那些早已无人问津的老古董机器又能发挥余热了。 +> +> 跑过游戏服务器后,我又开始跑 web 服务器,先是跑了几年 HTML,CSS 和 PHP,之后受 TheNewBoston 视频的误导转到了 JAVA。 +> +> 一周后放弃 JAVA 改用 Python,当时学习 Python 用的书名叫《Learn Python The Hard Way》,作者是 Zed A. 
Shaw。我花了两周学完 Python，然后开始看《C++ Primer》，因为我想做游戏开发。看到一半（大概500页）的时候我放弃了。那个时候我有点讨厌玩电脑了。 +> +> 这样中断了一段时间之后，我决定学习 JavaScript，读了2本书，试了4个平台，然后又不玩了。 +> +> 现在到了不得不找一所学校并决定毕业后找什么样的工作的糟糕时刻。我不想玩图形界面编程，所以我不会进游戏行业。我也不喜欢画画和建模。然后我发现了一个涉及网络安全的专业，于是我立刻爱上它了。我挑了很多 C 语言的书来度过这个假期，并且复习了一下数学来迎接新的校园生活。 +> +> 目前我玩 ArchLinux，不同 PC 上跑着不同任务，它们运行很稳定。 +> +> 可以说 Linux 带我进入编程的世界，而反过来，我最终在学校要学的就是 Linux。我估计会终生感谢 Linux。 +> +> **Linuxllc**：你们可以学学像我这样的老头。 +> +> 扔掉 Windows！扔掉 Windows！扔掉 Windows！给自己一个坚持使用 Linux 的理由，那就是完全、彻底地远离 Windows。 +> +> 我在 2003 年放弃 Windows，只用了5天就把所有电脑跑成 Linux，包括所有的外围设备（LCTT 译注：比如打印机）。我不玩 Windows 里的游戏，只玩 Linux 里的。 +> +> **Highclass**：我28岁，不知道还是不是你要找的年轻人类型。 +> +> 老实说我对电脑挺感兴趣的，当我还没接触‘自由软件哲学’的时候，我认为 free 是免费的意思。我也不认为命令行界面很让人难以接受，因为我小时候就接触过 DOS 系统。 +> +> 我第一个发行版是 Mandrake，在我11岁还是12岁那年我把家里的电脑弄得乱七八糟，然后我一直折腾那台电脑，试着让我的技能提升一个台阶。现在我在一家公司全职使用 Linux。（请允许我耸个肩。） +> +> **Matto**：我的电脑是旧货市场淘回来的，装 XP，跑得慢，于是我想换个系统。Google 了一下，发现 Ubuntu。当年我15、6岁，现在23了，就职的公司内部使用 Linux。 +> +> [更多评论移步 Reddit][2] + +### IBM 的 Linux 大型机 ### + +IBM 很久前就用 Linux 了。现在这家公司推出了一款专门运行 Ubuntu 的机器，名叫 LinuxONE。 + +Ron Miller 在 TechCrunch 博客上说： + +> 新的大型机包括两款机型，都是以企鹅名称命名的（Linux 的吉祥物就是一只企鹅，懂18摸的命名用意了没？）第一款叫帝企鹅，使用 IBM z13 机型，我们早在1月份就介绍过了。另一款稍微小一点，名叫跳岩企鹅，供入门级买家使用。 +> +> 也许你会以为大型机就像恐龙一样早就灭绝了，但世界上许多大型机构中都还在使用它们，它们还健在。作为发展云技术战略的一部分，数据分析与安全有望提升 Ubuntu 大型机的市场，这种大型机能提供一系列开源的企业级软件，比如 Apache Spark、Node.js、MongoDB、MariaDB、PostgreSQL 和 Chef。 +> +> 大型机还会存在于客户自有的数据中心中，但是市场的大小取决于会有多少客户使用这种类似于云服务的系统。Mauri 解释道，IBM 正在寻求增加大型机销量的途径，与 Canonical 公司合作并鼓励使用开源工具，都能为大型机打开一个虽小却能赚钱的市场。 +> +> +> [详情移步 TechCrunch][3] + +### 你为什么要放弃 Windows10 而选择 Linux ### + +自从 Windows10 出来以后，各种媒体都报道过它的隐藏间谍功能。ZDNet 列出了一些放弃 Windows10 的理由。 + +SJVN 在 ZDNet 的报告： + +> 你试试关掉 Windows10 的数据分享功能，坏消息来了：Windows10 会继续把你的数据分享给微软公司。请选择 Linux 吧。 +> +> 你可以有很多方法不让 Windows10 泄露你的秘密，但你不能阻止它交谈。Cortana（Win10 的语音助手小娜），就算你把她关了，她也会把数据发给微软公司。这些数据包括你的电脑 ID，微软用它来识别你的 PC 机。 +> +> 所以如果这些泄密给你带来了烦恼，你可以使用老版本 Windows7，或者换到 Linux。然而，当 Windows7 不再提供技术支持的那天到来，如果你还想保留隐私，最终你还是只能选择 Linux。 +> +> 这里还有些小众的桌面系统能保护你的隐私，比如 BSD
家族的 FreeBSD，PCBSD，NetBSD，eComStation，OS/2。但是，最好的选择还是 Linux，它的学习门槛最低。 +> +> [详情移步 ZDNet][4] + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.html + +作者：[Jim Lynch][a] +译者：[bazz2](https://github.com/bazz2) +校对：[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译，[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Jim-Lynch/ +[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ +[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ +[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/ +[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/ diff --git a/translated/talk/20150909 Superclass--15 of the world's best living programmers.md b/translated/talk/20150909 Superclass--15 of the world's best living programmers.md deleted file mode 100644 index 6f59aa13d9..0000000000 --- a/translated/talk/20150909 Superclass--15 of the world's best living programmers.md +++ /dev/null @@ -1,389 +0,0 @@ -教父们: 15位举世瞩目的程序员 -================================================================================ -当开发人员讨论世界顶级程序员时，这些名字往往就会出现。 - -![](http://images.techhive.com/images/article/2015/09/superman-620x465-100611650-orig.jpg) - -图片来源: [tom_bullock CC BY 2.0][1] - -好像现在程序员有很多，其中不乏有许多优秀的程序员。但是其中哪些程序员更好呢？
- -虽然这很难客观评价，不过这个话题确实是开发者们津津乐道的。ITworld 根据程序员社区的反馈，试图找出大家可能存在的共识。事实证明，屈指可数的某些名字经常是讨论的焦点。 - -下面就让我们来看看这15位常被认为是世界顶级的在世程序员吧！ - -![](http://images.techhive.com/images/article/2015/09/margaret_hamilton-620x465-100611764-orig.jpg) - -图片来源: [NASA][2] - -### 玛格丽特·汉密尔顿 ### - -**成就: 阿波罗飞行控制软件背后的大脑** - -生平: 查尔斯·斯塔克·德雷珀实验室软件工程部的主任，以她为首的团队负责设计和打造NASA阿波罗任务和Skylab任务的板载飞行控制软件。基于阿波罗项目这段工作经历，她又后续开发了[通用系统语言][5]和[开发先于事实][6]的范例。开创了[异步软件、优先调度和超可靠的软件设计][7]理念。被认为发明了“[软件工程][8]”一词。1986年获[奥古斯塔·埃达·洛夫莱斯][9]奖，[2003年获NASA杰出太空行动奖][10]。 - -评论: “汉密尔顿发明了测试，使美国计算机工程规范了很多” [ford_beeblebrox][11] - -“我认为在她之前（不敬地说，包括高德纳在内的）计算机编程是（另一种形式上留存的）数学分支。然而宇宙飞船的飞行控制系统明确地将编程带入了一个崭新的领域。” [Dan Allen][12] - -“... 她引入了‘计算机工程’这个术语 — 并作出了最好的示范。” [David Hamilton][13] - -“真是个厉害角色” [Drukered][14] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_donald_knuth-620x465-100502872-orig.jpg) - -图片来源: [vonguard CC BY-SA 2.0][15] - -### 唐纳德·尔文·克努斯 ### - -**成就: 《计算机程序设计艺术》 作者** - -生平: 撰写了[编程理论的权威书籍][16]。发明了数字排版系统TeX。1971年获得[首次ACM（美国计算机协会）葛丽丝·穆雷·霍普奖][17]。1974年获ACM[图灵奖][18]，1979年获[国家科学奖章][19]，1995年获IEEE[约翰·冯·诺依曼奖章][20]。1998年入选[计算机历史博物馆名人录][21]。 - -评论: “... 他写的《计算机程序设计艺术》可能是有史以来对计算机编程最大的贡献。”[佚名][22] - -“唐·克努斯的TeX是我所用过的计算机程序中唯一一个几乎没有bug的。真是让人印象深刻！” [Jaap Weel][23] - -“如果你要问我的话，我只能说太棒了！” [Mitch Rees-Jones][24] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_ken-thompson-620x465-100502874-orig.jpg) - -图片来源: [Association for Computing Machinery][25] - -### 肯尼斯·蓝·汤普逊 ### - -**成就: Unix之父** - -生平: 与[丹尼斯·里奇][26]共同创造了Unix。创造了[B语言][27]、[UTF-8字符编码方案][28]、[ed文本编辑器][29]，同时也是Go语言的合作开发者。（同里奇）共同获得1983年的[图灵奖][30]，1994年获[IEEE计算机先驱奖][31]，1998年获颁[美国国家科技创新奖章][32]。在1997年入选[计算机历史博物馆名人录][33]。 - -评论: “...
可能是有史以来最能成事的程序员了。Unix内核，Unix工具集，国际象棋程序世界冠军Belle，Plan 9，Go语言。” [Pete Prokopowicz][34] - -“肯所做出的贡献，据我所知无人能及，是如此的根本、实用、经得住时间的考验，时至今日仍在使用。” [Jan Jannink][35] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_richard_stallman-620x465-100502868-orig.jpg) - -图片来源: Jiel Beaumadier CC BY-SA 3.0 - -### 理查德·斯托曼 ### - -**成就: Emacs和GCC缔造者** - -生平: 成立了[GNU工程][36]，并创造了许多的核心工具，如[Emacs, GCC, GDB][37]和[GNU Make][38]。还创办了[自由软件基金会][39]。1990年荣获ACM[葛丽丝·穆雷·霍普奖][40]，[1998年获EFF先驱奖][41]。 - -评论: “... 在Symbolics对阵LMI的战斗中，独自一人与一众Lisp黑客好手对码。” [Srinivasan Krishnan][42] - -“通过他在编程上的造诣与强大信念，开辟了一整套编程与计算机的亚文化。” [Dan Dunay][43] - -“我可以不赞同这位伟人的很多方面，但不可否认无论活着还是死去，他都已经是一位伟大的程序员了。” [Marko Poutiainen][44] - -“试想Linux如果没有GNU工程的前期工作。斯托曼就是这个炸弹包，哟。” [John Burnette][45] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_anders_hejlsberg-620x465-100502873-orig.jpg) - -图片来源: [D.Begley CC BY 2.0][46] - -### 安德斯·海尔斯伯格 ### - -**成就: 创造了Turbo Pascal** - -生平: [Turbo Pascal的原作者][47]，它是最流行的Pascal编译器，也是第一个集成开发环境。而后，[领导了Delphi][48]和下一代Turbo Pascal的构建。[C#的主要设计师和架构师][49]。2001年荣获[Dr.
Dobb's杰出编程奖][50]。 - -评论: “他用汇编为当时的主流PC操作系统（DOS 和 CP/M）编写了[Pascal]的编译器。用它来编译、链接并运行仅需几秒钟而不是几分钟。” [Steve Wood][51] - -“我佩服他 - 他创造了我最喜欢的开发工具，陪伴着我度过了三个关键的时期直至我成为一位专业的软件工程师。” [Stefan Kiryazov][52] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_doug_cutting-620x465-100502871-orig.jpg) - -图片来源: [vonguard CC BY-SA 2.0][53] - -### Doug Cutting ### - -**成就: 创造了Lucene** - -生平: [开发了Lucene搜索引擎、Web爬虫Nutch][54]和[对于大型数据集的分布式处理套件Hadoop][55]。一位强有力的开源支持者（Lucene、Nutch以及Hadoop都是开源的）。前[Apache软件基金会的理事][56]。 - -评论: “...他就是那个既写出了优秀搜索框架(lucene/solr)，又为世界开启大数据之门(hadoop)的男人。” [Rajesh Rao][57] - -“他在Lucene和Hadoop(及其它工程)的创造/工作中为世界创造了巨大的财富和就业...” [Amit Nithianandan][58] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_sanjay_ghemawat-620x465-100502876-orig.jpg) - -图片来源: [Association for Computing Machinery][59] - -### Sanjay Ghemawat ### - -**成就: 谷歌核心架构师** - -生平: [协助设计和实现了一些谷歌大型分布式系统的功能][60]，包括MapReduce、BigTable、Spanner和谷歌文件系统。[创造了Unix的ical][61]日历系统。2009年入选[美国国家工程院][62]。2012年荣获[ACM-Infosys基金计算机科学奖][63]。 - -评论: “Jeff Dean的僚机。” [Ahmet Alp Balkan][64] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jeff_dean-620x465-100502866-orig.jpg) - -图片来源: [Google][65] - -### Jeff Dean ### - -**成就: 谷歌索引搜索背后的大脑** - -生平: 协助设计和实现了[许多谷歌大型分布式系统的功能][66]，包括网页爬虫、索引搜索、AdSense、MapReduce、BigTable和Spanner。2009年入选[美国国家工程院][67]。2012年荣获ACM [SIGOPS马克·维瑟奖][68]及[ACM-Infosys基金计算机科学奖][69]。 - -评论: “... 带来的在数据挖掘(GFS、MapReduce、BigTable)上的突破。” [Natu Lauchande][70] - -“...
设计、构建并部署MapReduce和BigTable，以及数不清的其他东西” [Erik Goldman][71] - -![](http://images.techhive.com/images/article/2015/09/linus_torvalds-620x465-100611765-orig.jpg) - -图片来源: [Krd CC BY-SA 4.0][72] - -### 林纳斯·托瓦兹 ### - -**成就: Linux缔造者** - -生平: 创造了[Linux内核][73]与[开源版本控制器Git][74]。收获了许多奖项和荣誉，包括1998年的[EFF先驱奖][75]，2000年荣获[英国电脑学会授予的洛夫莱斯勋章][76]，2012年荣获[千禧技术奖][77]，还有2014年[IEEE计算机学会授予的计算机先驱奖][78]。同样入选了2008年的[计算机历史博物馆名人录][79]与2012年的[网络名人堂][80]。 - -评论: “他只用了几年的时间就写出了Linux内核，而GNU Hurd（GNU开发的内核）历经25年的开发却丝毫没有准备发布的意思。他的成就就是带来了希望。” [Erich Ficker][81] - -“托瓦兹可能是程序员的程序员。” [Dan Allen][82] - -“他真的很棒。” [Alok Tripathy][83] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_john_carmack-620x465-100502867-orig.jpg) - -图片来源: [QuakeCon CC BY 2.0][84] - -### 约翰·卡马克 ### - -**成就: 毁灭战士缔造者** - -生平: id Software联合创始人，打造了德军总部3D、毁灭战士和雷神之锤等开创性的第一人称射击（FPS）游戏。引领了[切片适配更新(adaptive tile refresh)][86]、[二叉空间分割(binary space partitioning)][87]、表面缓存(surface caching)等开创性的计算机图形技术。2001年入选[互动艺术与科学学会名人堂][88]，2007年和2008年荣获工程技术类[艾美奖][89]并于2010年由[游戏开发者甄选奖][90]授予终生成就奖。 - -评论: “他在写第一个渲染引擎的时候不到20岁。这家伙真是个天才。我若有他四分之一的天赋便心满意足了。” [Alex Dolinsky][91] - -“... 德军总部3D、毁灭战士还有雷神之锤在那时都是革命性的，影响了一代游戏设计师。” [dniblock][92] - -“一个周末他几乎可以写出任何东西....” [Greg Naughton][93] - -“他是编程界的莫扎特...”
[Chris Morris][94] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_fabrice_bellard-620x465-100502870-orig.jpg) - -图片来源: [Duff][95] - -### 法布里斯·贝拉 ### - -**成就: 创造了QEMU** - -生平: 创造了[一系列耳熟能详的开源软件][96]，其中包括硬件模拟和虚拟化的平台QEMU，用于处理多媒体数据的FFmpeg，微型C编译器和可执行文件压缩软件LZEXE。2000年和2001年[C语言混乱代码大赛的获胜者][97]，并在2011年荣获[Google-O'Reilly开源奖][98]。[计算Pi最多位数][99]的前世界纪录保持者。 - -评论: “我觉得法布里斯·贝拉做的每一件事都是那么显著而又震撼。” [raphinou][100] - -“法布里斯·贝拉是世界上最高产的程序员...” [Pavan Yara][101] - -“他就像软件工程界的尼古拉·特斯拉。” [Michael Valladolid][102] - -“自80年代以来，他一直高产出一系列的成功作品。” [Michael Biggins][103] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jon_skeet-620x465-100502863-orig.jpg) - -图片来源: [Craig Murphy CC BY 2.0][104] - -### Jon Skeet ### - -**成就: Stack Overflow传说级贡献者** - -生平: Google工程师，[《深入解析C#》][105]一书的作者。保持着[有史以来在Stack Overflow上最高的声誉][106]，平均每月解答390个问题。 - -评论: “他根本不需要调试器，只要他盯一下代码，错误之处自会原形毕露。” [Steven A. Lowe][107] - -“如果他的代码没有通过编译，那编译器应该道歉。” [Dan Dyer][108] - -“他根本不需要什么编程规范，他的代码就是编程规范。” [Anonymous][109] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_image_adam_dangelo-620x465-100502875-orig.jpg) - -图片来源: [Philip Neustrom CC BY 2.0][110] - -### 亚当·安捷罗 ### - -**成就: Quora的创办人之一** - -生平: 在担任Facebook工程师期间，[为其搭建了news feed功能的基础][111]，并在离职联合创办Quora之前升任Facebook的CTO和工程VP。2001年以高中生的身份在[美国计算机奥林匹克上第八位完成比赛][112]。2004年ACM国际大学生编程大赛[获得银牌的团队 - 加州理工学院][113]的成员。2005年入围Topcoder大学生[算法编程挑战赛][114]。 - -评论: “一位程序设计全才。” [Anonymous][115] - -“我做的每个好东西，他都已有了六个。” [Mark Zuckerberg][116] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_petr_mitrichev-620x465-100502869-orig.jpg) - -图片来源: [Facebook][117] - -### Petr Mitrichev ### - -**成就: 有史以来最具竞技能力的程序员之一** - -生平: 在国际信息学奥林匹克中[两次获得金牌][118]（2000，2002）。在2006年[赢得Google Code Jam][119]，同时也是[TopCoder Open算法大赛冠军][120]。他还两次赢得Facebook黑客杯（[2011][121]，[2013][122]）。写这篇文章的时候，他在[TopCoder榜中排第二][123]（ID：Petr），在[Codeforces榜同样排第二][124]。 - -评论: “他是竞技程序员的偶像，即使在印度也是如此...” [Kavish
Dwivedi][125] - -![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_gennady_korot-620x465-100502864-orig.jpg) - -图片来源: [Ishandutta2007 CC BY-SA 3.0][126] - -### Gennady Korotkevich ### - -**成就: 竞技编程小神童** - -生平: 国际信息学奥林匹克史上最年轻的参赛者（11岁），并[6次获得金牌][127]（2007-2012）。2013年ACM国际大学生编程大赛[获胜队伍][128]成员及[2014 Facebook黑客杯][129]获胜者。写这篇文章的时候，[Codeforces榜排名第一][130]（ID：Tourist）、[TopCoder榜第一][131]。 - -评论: “一个编程神童！” [Prateek Joshi][132] - -“Gennady 非常出色，这也是我在白俄罗斯拥有一个强大开发团队的原因。” [Chris Howard][133] - -“Tourist真是天才” [Nuka Shrinivas Rao][134] - --------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2823547/enterprise-software/158256-superclass-14-of-the-world-s-best-living-programmers.html#slide1 - -作者：[Phil Johnson][a] -译者：[martin2011qi](https://github.com/martin2011qi) -校对：[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Phil-Johnson/ -[1]:https://www.flickr.com/photos/tombullock/15713223772 -[2]:https://commons.wikimedia.org/wiki/File:Margaret_Hamilton_in_action.jpg -[3]:http://klabs.org/home_page/hamilton.htm -[4]:https://www.youtube.com/watch?v=DWcITjqZtpU&feature=youtu.be&t=3m12s -[5]:http://www.htius.com/Articles/r12ham.pdf -[6]:http://www.htius.com/Articles/Inside_DBTF.htm -[7]:http://www.nasa.gov/home/hqnews/2003/sep/HQ_03281_Hamilton_Honor.html -[8]:http://www.nasa.gov/50th/50th_magazine/scientists.html -[9]:https://books.google.com/books?id=JcmV0wfQEoYC&pg=PA321&lpg=PA321&dq=ada+lovelace+award+1986&source=bl&ots=qGdBKsUa3G&sig=bkTftPAhM1vZ_3VgPcv-38ggSNo&hl=en&sa=X&ved=0CDkQ6AEwBGoVChMI3paoxJHWxwIVA3I-Ch1whwPn#v=onepage&q=ada%20lovelace%20award%201986&f=false -[10]:http://history.nasa.gov/alsj/a11/a11Hamilton.html -[11]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrswof -[12]:http://qr.ae/RFEZLk -[13]:http://qr.ae/RFEZUn
-[14]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrv9u9 -[15]:https://www.flickr.com/photos/44451574@N00/5347112697 -[16]:http://cs.stanford.edu/~uno/taocp.html -[17]:http://awards.acm.org/award_winners/knuth_1013846.cfm -[18]:http://amturing.acm.org/award_winners/knuth_1013846.cfm -[19]:http://www.nsf.gov/od/nms/recip_details.jsp?recip_id=198 -[20]:http://www.ieee.org/documents/von_neumann_rl.pdf -[21]:http://www.computerhistory.org/fellowawards/hall/bios/Donald,Knuth/ -[22]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answers/3063 -[23]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Jaap-Weel -[24]:http://qr.ae/RFE94x -[25]:http://amturing.acm.org/photo/thompson_4588371.cfm -[26]:https://www.youtube.com/watch?v=JoVQTPbD6UY -[27]:https://www.bell-labs.com/usr/dmr/www/bintro.html -[28]:http://doc.cat-v.org/bell_labs/utf-8_history -[29]:http://c2.com/cgi/wiki?EdIsTheStandardTextEditor -[30]:http://amturing.acm.org/award_winners/thompson_4588371.cfm -[31]:http://www.computer.org/portal/web/awards/cp-thompson -[32]:http://www.uspto.gov/about/nmti/recipients/1998.jsp -[33]:http://www.computerhistory.org/fellowawards/hall/bios/Ken,Thompson/ -[34]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Pete-Prokopowicz-1 -[35]:http://qr.ae/RFEWBY -[36]:https://groups.google.com/forum/#!msg/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J -[37]:http://www.emacswiki.org/emacs/RichardStallman -[38]:https://www.gnu.org/gnu/thegnuproject.html -[39]:http://www.emacswiki.org/emacs/FreeSoftwareFoundation -[40]:http://awards.acm.org/award_winners/stallman_9380313.cfm -[41]:https://w2.eff.org/awards/pioneer/1998.php -[42]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton/comment/4146397 -[43]:http://qr.ae/RFEaib 
-[44]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Marko-Poutiainen -[45]:http://qr.ae/RFEUqp -[46]:https://www.flickr.com/photos/begley/2979906130 -[47]:http://www.taoyue.com/tutorials/pascal/history.html -[48]:http://c2.com/cgi/wiki?AndersHejlsberg -[49]:http://www.microsoft.com/about/technicalrecognition/anders-hejlsberg.aspx -[50]:http://www.drdobbs.com/windows/dr-dobbs-excellence-in-programming-award/184404602 -[51]:http://qr.ae/RFEZrv -[52]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Stefan-Kiryazov -[53]:https://www.flickr.com/photos/vonguard/4076389963/ -[54]:http://www.wizards-of-os.org/archiv/sprecher/a_c/doug_cutting.html -[55]:http://hadoop.apache.org/ -[56]:https://www.linkedin.com/in/cutting -[57]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Shalin-Shekhar-Mangar/comment/2293071 -[58]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answer/Amit-Nithianandan -[59]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm -[60]:http://research.google.com/pubs/SanjayGhemawat.html -[61]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat -[62]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 -[63]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm -[64]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat/answer/Ahmet-Alp-Balkan -[65]:http://research.google.com/people/jeff/index.html -[66]:http://research.google.com/people/jeff/index.html -[67]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 -[68]:http://news.cs.washington.edu/2012/10/10/uw-cse-ph-d-alum-jeff-dean-wins-2012-sigops-mark-weiser-award/ -[69]:http://awards.acm.org/award_winners/dean_2879385.cfm 
-[70]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Natu-Lauchande -[71]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Cosmin-Negruseri/comment/28399 -[72]:https://commons.wikimedia.org/wiki/File:LinuxCon_Europe_Linus_Torvalds_05.jpg -[73]:http://www.linuxfoundation.org/about/staff#torvalds -[74]:http://git-scm.com/book/en/Getting-Started-A-Short-History-of-Git -[75]:https://w2.eff.org/awards/pioneer/1998.php -[76]:http://www.bcs.org/content/ConWebDoc/14769 -[77]:http://www.zdnet.com/blog/open-source/linus-torvalds-wins-the-tech-equivalent-of-a-nobel-prize-the-millennium-technology-prize/10789 -[78]:http://www.computer.org/portal/web/pressroom/Linus-Torvalds-Named-Recipient-of-the-2014-IEEE-Computer-Society-Computer-Pioneer-Award -[79]:http://www.computerhistory.org/fellowawards/hall/bios/Linus,Torvalds/ -[80]:http://www.internethalloffame.org/inductees/linus-torvalds -[81]:http://qr.ae/RFEeeo -[82]:http://qr.ae/RFEZLk -[83]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Alok-Tripathy-1 -[84]:https://www.flickr.com/photos/quakecon/9434713998 -[85]:http://doom.wikia.com/wiki/John_Carmack -[86]:http://thegamershub.net/2012/04/gaming-gods-john-carmack/ -[87]:http://www.shamusyoung.com/twentysidedtale/?p=4759 -[88]:http://www.interactive.org/special_awards/details.asp?idSpecialAwards=6 -[89]:http://www.itworld.com/article/2951105/it-management/a-fly-named-for-bill-gates-and-9-other-unusual-honors-for-tech-s-elite.html#slide8 -[90]:http://www.gamechoiceawards.com/archive/lifetime.html -[91]:http://qr.ae/RFEEgr -[92]:http://www.itworld.com/answers/topic/software/question/whos-best-living-programmer#comment-424562 -[93]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton 
-[94]:http://money.cnn.com/2003/08/21/commentary/game_over/column_gaming/ -[95]:http://dufoli.wordpress.com/2007/06/23/ammmmaaaazing-night/ -[96]:http://bellard.org/ -[97]:http://www.ioccc.org/winners.html#B -[98]:http://www.oscon.com/oscon2011/public/schedule/detail/21161 -[99]:http://bellard.org/pi/pi2700e9/ -[100]:https://news.ycombinator.com/item?id=7850797 -[101]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/1718701 -[102]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/2454450 -[103]:http://qr.ae/RFEjhZ -[104]:https://www.flickr.com/photos/craigmurphy/4325516497 -[105]:http://www.amazon.co.uk/gp/product/1935182471?ie=UTF8&tag=developetutor-21&linkCode=as2&camp=1634&creative=19450&creativeASIN=1935182471 -[106]:http://stackexchange.com/leagues/1/alltime/stackoverflow -[107]:http://meta.stackexchange.com/a/9156 -[108]:http://meta.stackexchange.com/a/9138 -[109]:http://meta.stackexchange.com/a/9182 -[110]:https://www.flickr.com/photos/philipn/5326344032 -[111]:http://www.crunchbase.com/person/adam-d-angelo -[112]:http://www.exeter.edu/documents/Exeter_Bulletin/fall_01/oncampus.html -[113]:http://icpc.baylor.edu/community/results-2004 -[114]:https://www.topcoder.com/tc?module=Static&d1=pressroom&d2=pr_022205 -[115]:http://qr.ae/RFfOfe -[116]:http://www.businessinsider.com/in-new-alleged-ims-mark-zuckerberg-talks-about-adam-dangelo-2012-9#ixzz369FcQoLB -[117]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 -[118]:http://stats.ioinformatics.org/people/1849 -[119]:http://googlepress.blogspot.com/2006/10/google-announces-winner-of-global-code_27.html -[120]:http://community.topcoder.com/tc?module=SimpleStats&c=coder_achievements&d1=statistics&d2=coderAchievements&cr=10574855 
-[121]:https://www.facebook.com/notes/facebook-hacker-cup/facebook-hacker-cup-finals/208549245827651 -[122]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 -[123]:http://community.topcoder.com/tc?module=AlgoRank -[124]:http://codeforces.com/ratings -[125]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Venkateswaran-Vicky/comment/1960855 -[126]:http://commons.wikimedia.org/wiki/File:Gennady_Korot.jpg -[127]:http://stats.ioinformatics.org/people/804 -[128]:http://icpc.baylor.edu/regionals/finder/world-finals-2013/standings -[129]:https://www.facebook.com/hackercup/posts/10152022955628845 -[130]:http://codeforces.com/ratings -[131]:http://community.topcoder.com/tc?module=AlgoRank -[132]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi -[133]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4720779 -[134]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4880549 diff --git a/translated/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md b/translated/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md deleted file mode 100644 index 921f1a57aa..0000000000 --- a/translated/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md +++ /dev/null @@ -1,101 +0,0 @@ -AIX, HP-UX, Solaris, BSD 和 Linux 简史 -================================================================================ -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png) - -要记住，当一扇门在你面前关闭的时候，另一扇门就会打开。[Ken Thompson][1] 和 [Dennis Ritchie][2] 两个人就是这句名言很好的实例。他们俩是 **20世纪** 最优秀的信息技术专家，因为他们创造了 **UNIX**，它是最具影响力和创新性的软件之一。 - -### UNIX 系统诞生于贝尔实验室 ### - -**UNIX** 最开始的名字是 **UNICS** (**UN**iplexed **I**nformation and **C**omputing
**S**ervice)，它有一个大家庭，并不是从石头缝里蹦出来的。UNIX的祖父是 **CTSS** (**C**ompatible **T**ime **S**haring **S**ystem)，它的父亲是 **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice)，这个系统能支持大量用户通过交互式分时使用大型机。 - -UNIX 诞生于 **1969** 年，由 **Ken Thompson** 以及后来加入的 **Dennis Ritchie** 共同完成。这两位优秀的研究员和科学家一起在一个**通用电气**和**麻省理工学院**的合作项目里工作，项目目标是开发一个叫 Multics 的交互式分时系统。 - -Multics 的目标是整合分时共享以及当时其他先进技术，允许用户在远程终端通过电话登录到主机，然后可以编辑文档，阅读电子邮件，运行计算器，等等。 - -在之后的五年里，AT&T 公司为 Multics 项目投入了数百万美元。他们购买了 GE-645 大型机，聚集了贝尔实验室的顶级研究人员，例如 Ken Thompson, Stuart Feldman, Dennis Ritchie, M. Douglas McIlroy, Joseph F. Ossanna, 以及 Robert Morris。但是项目目标太过激进，进度严重滞后。最后，AT&T 高层决定放弃这个项目。 - -贝尔实验室的管理层决定停止这个让许多研究人员无比纠结的操作系统上的所有遗留工作。不过要感谢 Thompson，Ritchie 和一些其他研究员，他们把老板的命令丢到一边，并继续在实验室里满怀热心地忘我工作，最终孵化出前无古人后无来者的 UNIX。 - -UNIX 的第一声啼哭是在一台 PDP-7 微型机上，它是 Thompson 测试自己在操作系统设计上的点子的机器，也是 Thompson 和 Ritchie 一起玩 Space Travel 游戏的模拟器。 - -> “我们想要的不仅是一个优秀的编程环境，而是能围绕这个系统形成团体。按我们自己的经验，通过远程访问和分时共享主机实现的公共计算，本质上不只是用终端输入程序代替打孔机而已，而是鼓励密切沟通。”Dennis Ritchie 说。 - -UNIX 是第一个靠近理想的系统，在这里程序员可以坐在机器前自由摆弄程序，探索各种可能性并随手测试。在 UNIX 整个生命周期里，它吸引了大量因其他操作系统限制而投身过来的高手做出无私贡献，因此它的功能模型一直保持上升趋势。 - -UNIX 在 1970 年因为 PDP-11/20 获得了首次资金注入，之后正式更名为 UNIX 并支持在 PDP-11/20 上运行。UNIX 的第一次实际应用是在 1971 年，它被贝尔实验室的专利部门用来做文字处理。 - -### UNIX 上的 C 语言革命 ### - -Dennis Ritchie 在 1972 年发明了一种叫 “**C**” 的高级编程语言，之后他和 Ken Thompson 决定用 “C” 重写 UNIX 系统，来支持更好的移植性。他们在那一年里编写和调试了差不多 100,000 行代码。在使用了 “C” 语言后，系统可移植性非常好，只需要修改一小部分机器相关的代码就可以将 UNIX 移植到其他计算机平台上。 - -UNIX 第一次公开露面是 1973 年 Dennis Ritchie 和 Ken Thompson 在操作系统原理会议上发表的一篇论文，然后 AT&T 发布了 UNIX 系统第 5 版，并授权给教育机构使用，然后在 1976 年第一次以 **$20,000** 的价格授权企业使用 UNIX 第 6 版。应用最广泛的是 1980 年发布的 UNIX 第 7 版，任何人都可以购买授权，只是授权条款非常有限。授权内容包括源代码，以及用 PDP-11 汇编语言写的机器相关的内核。实际上，各种版本的 UNIX 系统完全由其用户手册来定义。 - -### AIX 系统 ### - -在 **1983** 年，**Microsoft** 计划开发 **Xenix** 作为 MS-DOS 的多用户版继任者，他们在那一年花了 $8,000 搭建了一台拥有 **512 KB** 内存以及 **10 MB** 硬盘并运行 Xenix 的 Altos 586。而到 1984 年为止，全世界 UNIX System V 第二版的安装数量已经超过了 100,000。在 1986 年发布了包含因特网域名服务的 4.3BSD，而且 **IBM** 宣布 **AIX 系统**的安装数已经超过 250,000。AIX 基于
Unix System V 开发，这套系统拥有 BSD 风格的根文件系统，是两者的结合。 - -AIX 第一次引入了 **日志文件系统 (JFS)** 以及集成逻辑卷管理器 (LVM)。IBM 在 1989 年将 AIX 移植到自己的 RS/6000 平台。2001 年发布的 5L 版是一个突破性的版本，提供了 Linux 友好性以及支持 Power4 服务器的逻辑分区。 - -在 2004 年发布的 AIX 5.3 引入了支持 Advanced Power Virtualization (APV) 的虚拟化技术，支持对称多线程、微分区，以及可分享的处理器池。 - -在 2007 年，IBM 同时发布 AIX 6.1 和 Power6 架构，开始加强自己的虚拟化产品。他们还将 Advanced Power Virtualization 重新包装成 PowerVM。 - -这次改进包括被称为 WPARs 的负载分区形式，类似于 Solaris 的 zones/Containers，但是功能更强。 - -### HP-UX 系统 ### - -**惠普 UNIX (HP-UX)** 源于 System V 第 3 版。这套系统一开始只支持 PA-RISC HP 9000 平台。HP-UX 第 1 版发布于 1984 年。 - -HP-UX 第 9 版引入了 SAM，一个基于字符的图形用户界面 (GUI)，用户可以用来管理整个系统。在 1995 年发布的第 10 版，调整了系统文件分布以及目录结构，变得有点类似 AT&T SVR4。 - -第 11 版发布于 1997 年。这是 HP 第一个支持 64 位寻址的版本。不过在 2000 年重新发布成 11i，因为 HP 为特定的信息技术目的，引入了操作环境和分级应用的捆绑组。 - -在 2001 年发布的 11.20 版宣称支持 Itanium 系统。HP-UX 是第一个使用 ACLs（访问控制列表）管理文件权限的 UNIX 系统，也是首先支持内建逻辑卷管理器的系统之一。 - -如今，HP-UX 因为 HP 和 Veritas 的合作关系使用了 Veritas 作为主文件系统。 - -HP-UX 目前的最新版本是 11iv3, update 4。 - -### Solaris 系统 ### - -Sun 的 UNIX 版本是 **Solaris**，它在 1992 年接替了 **SunOS**。SunOS 一开始基于 BSD（伯克利软件发行版）风格的 UNIX，但是 SunOS 5.0 版以及之后的版本都是基于重新包装成 Solaris 的 Unix System V 第 4 版。 - -SunOS 1.0 版于 1983 年发布，用于支持 Sun-1 和 Sun-2 平台。随后在 1985 年发布了 2.0 版。在 1987 年，Sun 和 AT&T 宣布合作一个项目以 SVR4 为基础将 System V 和 BSD 合并成一个版本。 - -Solaris 2.4 是 Sun 发布的第一个 Sparc/x86 版本。1994 年 11 月份发布的 SunOS 4.1.4 版是最后一个版本。Solaris 7 是首个 64 位 Ultra Sparc 版本，加入了对文件系统元数据记录的原生支持。 - -Solaris 9 发布于 2002 年，支持 Linux 特性以及 Solaris 卷管理器。之后，2005 年发布了 Solaris 10，带来许多创新，比如支持 Solaris Containers，新的 ZFS 文件系统，以及逻辑域。 - -目前 Solaris 最新的版本是第 10 版，最后的更新发布于 2008 年。 - -### Linux ### - -到了 1991 年，对能替代商业操作系统的自由操作系统的需求日渐高涨。因此 **Linus Torvalds** 开始构建一个自由的操作系统，最终成为 **Linux**。Linux 最开始只有一些 “C” 文件，并且使用了阻止商业发行的授权。Linux 是一个类 UNIX 系统但又不尽相同。 - -2015 年发布了基于 GNU Public License 授权的 3.18 版。IBM 声称有超过 1800 万行开源代码开放给开发者。 - -如今 GNU Public License 是应用最广泛的自由软件授权方式。根据开源软件原则，这份授权允许个人和企业自由分发、运行、通过拷贝共享、学习，以及修改软件源码。 - -### UNIX vs.
Linux: 技术概要 ### - -- Linux 鼓励多样性，Linux 的开发人员有更广阔的背景，有更多不同经验和意见。 -- Linux 比 UNIX 支持更多的平台和架构。 -- UNIX 商业版本的开发人员会为他们的操作系统考虑特定目标平台以及用户。 -- **Linux 比 UNIX 有更好的安全性**，更少受病毒或恶意软件攻击。Linux 上大约有 60-100 种病毒，但是没有任何一种还在传播。另一方面，UNIX 上大约有 85-120 种病毒，但是其中有一些还在传播中。 -- 通过 UNIX 命令，系统上的工具和元素很少改变，甚至很多接口和命令行参数在后续 UNIX 版本中一直沿用。 -- 有些 Linux 开发项目以自愿为基础进行资助，比如 Debian。其他项目会维护一个和商业 Linux 对应的社区版，比如 SUSE 的 openSUSE 以及红帽的 Fedora。 -- 传统 UNIX 是纵向扩展，而另一方面 Linux 是横向扩展。 - -------------------------------------------------------------------------------- - -via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/ - -作者：[M.el Khamlichi][a] -译者：[zpl1025](https://github.com/zpl1025) -校对：[Caroline](https://github.com/carolinewuyan) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.unixmen.com/author/pirat9/ -[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/ -[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/ diff --git a/translated/talk/20151020 18 Years of GNOME Design and Software Evolution--Step by Step.md b/translated/talk/20151020 18 Years of GNOME Design and Software Evolution--Step by Step.md new file mode 100644 index 0000000000..614f10bb85 --- /dev/null +++ b/translated/talk/20151020 18 Years of GNOME Design and Software Evolution--Step by Step.md @@ -0,0 +1,200 @@ + +一步一脚印：GNOME十八年进化史 +================================================================================ +注：原文此处为一段 YouTube 视频 + + +[GNOME][1]（GNU Network Object Model Environment）由两位墨西哥程序员 Miguel de Icaza 和 Federico Mena 始创于1997年8月15日。GNOME 是一个自由软件的桌面环境和应用程序项目，由志愿者和全职开发者共同开发。所有的GNOME桌面环境都由开源软件组成，并且支持Linux, FreeBSD, OpenBSD 等操作系统。 + +现在就让我们穿越回1997年来看看GNOME的第一个版本： + +### GNOME 1 ### + +![GNOME 1.0 - First major GNOME release](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.0/gnome.png) + +**GNOME 1.0** (1999) – GNOME 发布的第一个主要版本 + +![GNOME 1.2
Bongo](https://raw.githubusercontent.com/paulcarroty/Articles/master/GNOME_History/1.2/1361441938.or.86429.png) + +**GNOME 1.2** “Bongo”, 2000 + +![GNOME 1.4 Tranquility](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.4/1.png) + +**GNOME 1.4** “Tranquility”, 2001 + +### GNOME 2 ### + +![GNOME 2.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.0/1.png) + +**GNOME 2.0**, 2002 + +基于GTK+2的重大更新。引入了人机界面指南。 + +![GNOME 2.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.2/GNOME_2.2_catala.png) + +**GNOME 2.2**, 2003 + +改进了多媒体和文件管理器。 + +![GNOME 2.4 Temujin](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.4/gnome-desktop.png) + +**GNOME 2.4** “Temujin”, 2003 + +首次发布Epiphany浏览器，增添了辅助功能。 + +![GNOME 2.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.6/Adam_Hooper.png) + +**GNOME 2.6**, 2004 + +Nautilus转为空间型文件管理模式，同时引入了新的GTK+ (译注：跨平台图形用户界面工具包)文件对话框。作为对这些变更的回应，出现了一个短命的GNOME分支：GoneME。 + +![GNOME 2.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.8/3.png) + +**GNOME 2.8**, 2004 + +改良了对可移动设备的支持并新增了Evolution邮件应用。 + +![GNOME 2.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.10/GNOME-Screenshot-2.10-FC4.png) + +**GNOME 2.10**, 2005 + +减小内存需求，改进显示界面。增加网络控制、磁盘挂载和回收站组件以及Totem影片播放器和Sound Juicer CD抓取工具。 + +![GNOME 2.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.12/gnome-livecd.jpg) + +**GNOME 2.12**, 2005 + +改进了Nautilus以及桌面范围内剪切/粘贴功能的整合。新增Evince PDF阅读器；新预设主题Clearlooks；新增菜单编辑器、管理员工具与密钥环管理器。基于支持Cairo的GTK+2.8。 + +![GNOME 2.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.14/debian4-stable.jpg) + +**GNOME 2.14**, 2006 + +改善显示效果；增强易用性；基于GStreamer 0.10多媒体框架。增加了Ekiga视频会议应用、Deskbar搜索工具、Pessulus权限管理器、Sabayon系统管理员工具以及快速切换用户功能。 + +![GNOME 2.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.16/Gnome-2.16-screenshot.png) + +**GNOME 2.16**, 2006 + +界面改良。增加了Tomboy笔记应用，Baobab磁盘用量分析应用，Orca屏幕朗读器以及GNOME
电源管理程序(以延长笔记本电池寿命);改进了Totem, Nautilus, 使用了新的图标主题。基于GTK+ 2.0 的全新显示对话框。 + +![GNOME 2.18](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.18/Gnome-2.18.1.png) + +**GNOME 2.18**, 2007 + +界面改良。增加了Seahorse GPG安全应用,可以对邮件和本地文件进行加密;Baobab增加了环状图表显示方式;改进了Orca,Evince, Epiphany, GNOME电源管理,音量控制;增加了两款新游戏:GNOME数独和国际象棋。支持MP3和AAC音频解码。 + +![GNOME 2.20](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.20/rnintroduction-screenshot.png) + +**GNOME 2.20**, 2007 + +发布十周年版本。Evolution增加了备份功能;改进了Epiphany,EOG,GNOME电源管理以及Seahorse中的Keyring密码管理方式;在Evince中可以编辑PDF文档;文件管理界面中整合了搜索模块;自动安装多媒体解码器。 + +![GNOME 2.22, 2008](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.22/GNOME-2-22-2-Released-2.png) + +**GNOME 2.22**, 2008 + +新增Cheese应用,它是一个可以截取网络摄像头和远程桌面图像的工具;Metacity支持基本的窗口叠加复合;引入GVFS(译注:GNOME Virtual file system,GNOME虚拟文件系统);改善了Totem播放DVD 和YouTube的效果,支持播放MythTV;在Evolution中新增了谷歌日历以及为信息添加标签的功能;改进了Evince, Tomboy, Sound Juicer和计算器。 + +![GNOME 2.24](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.24/gnome-224.jpg) + +**GNOME 2.24**, 2008 + +新增了Empathy即时通讯软件,Ekiga升级至3.0版本;Nautilus支持标签式浏览,更好的支持了多屏幕显示方式和数字电视功能。 + +![GNOME 2.26](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.26/gnome226-large_001.jpg) + +**GNOME 2.26**, 2009 + +新增光盘刻录应用Brasero;简化了文件分享的流程,改进了媒体播放器的性能;支持多显示器和指纹识别器。 + +![GNOME 2.28](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.28/1.png) + +**GNOME 2.28**, 2009 + +增加了GNOME 蓝牙模块;改进了Epiphany ,Empathy,时间追踪器和辅助功能。GTK+升级至2.18版本。 + +![GNOME 2.30](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.30/GNOME2.30.png) + +**GNOME 2.30**, 2010 + +改进了Nautilus,Empathy,Tomboy,Evince,Time Tracker,Epiphany和 Vinagre。借助基于libimobiledevice(译注:支持iOS®设备跨平台使用的工具协议库)的GVFS可以访问部分iPod 和iPod Touch。 + +![GNOME 2.32](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.32/gnome-2-32.png.en_GB.png) + +**GNOME 2.32**, 2010 + +新增Rygel 
媒体分享工具和GNOME色彩管理器;改进了Empathy即时通讯客户端,Evince,Nautilus文件管理器等。计划于2010年9月发布3.0版本,因此大部分开发者的精力都由2.3x转移至了3.0版本。 + +### GNOME 3 ### + +![GNOME 3.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.0/chat-3-0.png) + +**GNOME 3.0**, 2011 + +引入GNOME Shell,一个重新设计的、具有更简练更集中的选项的框架。基于Mallard标记语言的话题导向型帮助。支持窗口并列堆叠。启用新的视觉主题和字体。采用GTK+3.0,具有更好的语言绑定,主题,触控以及多平台支持。去除了长期弃用的API。 + +![GNOME 3.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.2/gdm.png) + +**GNOME 3.2**, 2011 + +支持在线帐户,Web应用;新增通讯录应用和文档文件管理器;文件管理器支持快速预览;整合性能,更新文档以及对外观的一些小改进。 + +![GNOME 3.4](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.4/application-view.png) + +**GNOME 3.4**, 2012 + +全新的GNOME 3 应用程序外观:文件,Epiphany(更名为Web),GNOME 通讯录。可以在Activities Overview中搜索本地文件。支持应用菜单。焕然一新的界面元素:崭新的颜色拾取器,重新设计的滚动条,更易使用的旋转按钮以及可隐藏的标题栏。支持视角平滑。全新的动态壁纸。在系统设置中增添了对Wacom数位板的支持。更简便的扩展应用管理。更好的硬件支持。面向主题的文档。在Empathy中提供了对视频电话和动态信息的支持。更好的辅助功能:提升Orca整合度,增强高对比度模式适配性,以及全新的缩放设置。大量应用和细节的改进。 + +![GNOME 3.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.6/gnome-3-6.png) + +**GNOME 3.6**, 2012 + +全新设计的核心元素:新的应用按钮和改进的Activities Overview布局。新的登陆锁定界面。重新设计的通知栏。通知现在更智能,可见性更高,同时更容易操作。改进了系统设置的界面和设定逻辑。用户菜单默认显示关机操作。整合了输入法。辅助功能一直开启。新的应用:Boxes虚拟机,在GNOME 3.4中发布了预览版。Clocks时钟, 可以显示世界时间。升级了磁盘用量分析,Empathy和 Font Viewer的外观。改进了Orca对布莱叶盲文的支持。 在Web浏览器中, 用最常访问页面取代了之前的空白起始页,增添了更好的全屏模式并使用了WebKit2测试版引擎. 
Evolution 开始使用WebKit提交邮件。 改进了磁盘功能。 改进了文件管理应用即之前的Nautilus, 新增诸如最近访问的文件和搜索等功能。 + +![GNOME 3.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.8/applications-view.png) + +**GNOME 3.8**, 2013 + +令人耳目一新的核心组件:新应用界面可以分别显示常用应用及全部应用,窗口布局得到全面改造。新的屏幕即现式输入法开关。通知和信息现在会对屏幕边缘的点击作出回应。为那些喜欢传统桌面的用户提供了经典模式。重新设计了设置界面的工具栏。新的初始化引导流程。GNOME 在线帐户添加了对更多供应商的支持。浏览器正式启用WebKit2引擎。文档支持双页模式并且整合了Google 文档。通讯录的UI升级。GNOME Files,GNOME Boxes和GNOME Disks都得到了大幅改进。两款全新的GNOME核心应用:GNOME时钟和GNOME天气。 + +![GNOME 3.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.10/GNOME-3-10-Release-Schedule-2.png) + +**GNOME 3.10**, 2013 + +全新设计的系统状态界面,能够更直观的纵览全局。一系列新应用,包括GNOME Maps, GNOME Notes, GNOME Music 和GNOME Photos。新的基于位置的功能,如自动时区和世界时间。支持高分辨率及智能卡。 基于GLib 2.38提供了对D-Bus的支持。 + +![GNOME 3.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.12/app-folders.png) + +**GNOME 3.12**, 2014 + +改进了Overview中的键盘导航和窗口选择,基于易用性测试对初始设置进行了修改。有线网络重新回到了状态栏上,在应用预览中可以自定义应用文件夹。在大量应用的对话框中引入了新的GTK+小工具同时使用了新的GTK+标签风格。GNOME Videos,GNOME 终端以及Gedit都改用了全新外观,更贴合HIG(译注:Human Interface Guidelines,人机界面指南)。在GNOME Shell的终端仿真器中提供了搜索预测功能。增强了对GNOME软件和高密度显示屏的支持。提供了新的录音工具。增加了新的桌面通知接口。在Wayland中的进程被置于更易使用的位置并可以进行选择性预览。 + +![GNOME 3.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.14/Top-Features-of-GNOME-3-14-Gallery-459893-2.jpg) + +**GNOME 3.14**, 2014 + +更炫酷的桌面环境效果,改善了对触摸屏的支持。GNOME Software supports managing installed add-ons. 
在GNOME Photos中可以访问Google相册。重绘了Evince,数独,扫雷和天气应用的用户界面,同时增加了一款叫做Hitori 的GNOME游戏。 + +![GNOME 3.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.16/preview-apps.png) + +**GNOME 3.16**, 2015 + +33,000处改变。主要修改了UI的配色方案。 增加了即现式滚动条。通知窗口中整合了日历应用。对文件管理器,图像查看器和地图等大量应用进行了微调。可以预览应用程序。进一步使用Wayland取代X11。 + +感谢GNOME Project及[Wikipedia][2]提供的变更日志!感谢阅读!(译注:原文此处为“敬请期待”。) + + +-------------------------------------------------------------------------------- + +via: https://tlhp.cf/18-years-of-gnome-evolution/ + +作者:[Pavlo Rudyi][a] +译者:[Haohong WANG](https://github.com/HaohongWANG) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://tlhp.cf/author/paul/ +[1]:https://www.gnome.org/ +[2]:https://en.wikipedia.org/wiki/GNOME diff --git a/translated/talk/20151124 Review--5 memory debuggers for Linux coding.md b/translated/talk/20151124 Review--5 memory debuggers for Linux coding.md new file mode 100644 index 0000000000..b49ba9e40a --- /dev/null +++ b/translated/talk/20151124 Review--5 memory debuggers for Linux coding.md @@ -0,0 +1,299 @@ +点评:Linux编程中五款内存调试器 +================================================================================ +![](http://images.techhive.com/images/article/2015/11/penguinadmin-2400px-100627186-primary.idge.jpg) +Credit: [Moini][1] + +作为一个程序员,我知道我总在犯错误——事实是,怎么可能会不犯错的!程序员也是人啊。有的错误能在编码过程中及时发现,而有些却得等到软件测试才显露出来。然而,有一类错误并不能在这两个时期被排除,从而导致软件不能正常运行,甚至是提前中止。 + +想到了吗?我说的就是内存相关的错误。手动调试这些错误不仅耗时,而且很难发现并纠正。值得一提的是,这种错误非常地常见,特别是在一些软件里,这些软件是用C/C++这类允许[手动管理内存][2]的语言编写的。 + +幸运的是,现行有一些编程工具能够帮你找到软件程序中这些内存相关的错误。在这些工具集中,我评定了五款Linux可用的,流行、免费并且开源的内存调试器:Dmalloc、Electric Fence、 Memcheck、 Memwatch以及Mtrace。日常编码过程中我已经把这五个调试器用了个遍,所以这些点评是建立在我的实际体验之上的。 + +### [Dmalloc][3] ### + +**开发者**:Gray Watson + +**点评版本**:5.5.2 + +**Linux支持**:所有种类 + +**许可**:知识共享署名-相同方式共享许可证3.0 + +Dmalloc是Gray Watson开发的一款内存调试工具。它实现成库,封装了标准内存管理函数如**malloc(), calloc(), free()**等,使得程序员得以检测出有问题的代码。 + +![cw dmalloc 
output](http://images.techhive.com/images/article/2015/11/cw_dmalloc-output-100627040-large.idge.png) +Dmalloc + +如同工具的网页所列,这个调试器提供的特性包括内存泄漏跟踪、[重复释放(double free)][4]错误跟踪、以及[越界写入(fence-post write)][5]检测。其它特性包括文件/行号报告、普通统计记录。 + +#### 更新内容 #### + +5.5.2版本是一个[bug修复发行版][6],同时修复了构建和安装的问题。 + +#### 有何优点 #### + +Dmalloc最大的优点是可以进行任意配置。比如说,你可以配置以支持C++程序和多线程应用。Dmalloc还提供一个有用的功能:运行时可配置,这表示在Dmalloc执行时,可以轻易地使能或者禁能它提供的特性。 + +你还可以配合[GNU Project Debugger (GDB)][7]来使用Dmalloc,只需要将dmalloc.gdb文件(位于Dmalloc源码包中的contrib子目录里)的内容添加到你的主目录中的.gdbinit文件里即可。 + +另外一个让我对Dmalloc爱不释手的优点是它有大量的资料文献。前往官网的[Documentation标签][8],可以获取所有资料,包括如何下载、安装、运行,怎样使用库,以及Dmalloc所提供特性的细节描述及其输出文件的解释。里面还有一个章节介绍了一般问题的解决方法。 + +#### 注意事项 #### + +跟Mtrace一样,Dmalloc需要程序员改动他们的源代码。比如说,你可以(其实是必须)添加头文件**dmalloc.h**,这样工具就能汇报产生问题的调用所在的文件和行号。这个功能非常有用,因为它节省了调试的时间。 + +除此之外,还需要在编译你的程序时,把Dmalloc库(编译源码包时产生的)链接进去。 + +然而,还有点更麻烦的事,需要设置一个环境变量,命名为**DMALLOC_OPTIONS**,以供工具在运行时配置内存调试特性,以及输出文件的路径。可以手动为该环境变量分配一个值,不过初学者可能会觉得这个过程有点困难,因为你想使能的Dmalloc特性是存在于这个值之中的——表示为各自的十六进制值的累加。[这里][9]有详细介绍。 + +一个设置这个环境变量的比较简单的方法是使用[Dmalloc实用指令][10],这是专为这个目的设计的方法。 + +#### 总结 #### + +Dmalloc真正的优势在于它的可配置选项。而且高度可移植,曾经成功移植到多种操作系统如AIX、BSD/OS、DG/UX、Free/Net/OpenBSD、GNU/Hurd、HPUX、Irix、Linux、MS-DOG、NeXT、OSF、SCO、Solaris、SunOS、Ultrix、Unixware甚至Unicos(运行在Cray T3E主机上)。虽然Dmalloc有很多东西需要学习,但是它所提供的特性值得为之付出。 + +### [Electric Fence][15] ### + +**开发者**:Bruce Perens + +**点评版本**:2.2.3 + +**Linux支持**:所有种类 + +**许可**:GNU 通用公共许可证 (第二版) + +Electric Fence是Bruce Perens开发的一款内存调试工具,它以库的形式实现,你的程序需要链接它。Electric Fence能检测出[堆][11]内存溢出和访问已经释放的内存。 + +![cw electric fence output](http://images.techhive.com/images/article/2015/11/cw_electric-fence-output-100627041-large.idge.png) +Electric Fence + +顾名思义,Electric Fence在每个申请的缓存边界建立了fence(防护),任何非法内存访问都会导致[段错误][12]。这个调试工具同时支持C和C++编程。 + + +#### 更新内容 #### + +2.2.3版本修复了工具的构建系统,使得-fno-builtin-malloc选项能真正传给[GNU Compiler Collection (GCC)][13]。 + +#### 有何优点 #### + +我喜欢Electric Fence首要的一点是(Memwatch、Dmalloc和Mtrace所不具有的),这个调试工具不需要你的源码做任何的改动,你只需要在编译的时候把它的库链接进你的程序即可。 + +其次,Electric Fence实现了一个机制,保证导致越界访问(a bounds violation)的第一条指令就会引起段错误。这比事后才发现问题要好多了。 + +不管是否检测出错误,Electric Fence总会在输出中打印一条版权信息。这一点非常有用,由此可以确定你所运行的程序已经启用了Electric Fence。 + +#### 注意事项 #### + +另一方面,Electric Fence真正让我感到遗憾的是它不具备检测内存泄漏的能力,而内存泄漏是C/C++软件中最常见也最难调试的问题之一。此外,Electric Fence不能检测出栈上分配内存的溢出,而且也不是线程安全的。 + +由于Electric Fence会在用户分配的内存区前后分配禁止访问的虚拟内存页,如果你过多地进行动态内存分配,将会导致你的程序消耗大量的额外内存。 + +Electric Fence还有一个局限是不能明确指出错误代码所在的行号。它所能做的只是在监测到内存相关错误时产生段错误。想要定位行号,需要借助[The Gnu Project Debugger (GDB)][14]这样的调试工具来调试你启用了Electric Fence的程序。 + +最后一点,Electric Fence虽然能检测出大部分的缓冲区溢出,但有一个例外:如果所申请的缓冲区大小不是系统字长的倍数,这时候溢出(即使只有几个字节)就不能被检测出来。 + +#### 总结 #### + +尽管有那么多的局限,但是Electric Fence的优点却在于它的易用性。程序只要链接工具一次,Electric Fence就可以在监测出内存相关问题的时候报警。不过,如同前面所说,Electric Fence需要配合像GDB这样的源码调试器使用。 + + +### [Memcheck][16] ### + +**开发者**:[Valgrind开发团队][17] + +**点评版本**:3.10.1 + +**Linux支持**:所有种类 + +**许可**:通用公共许可证 + +[Valgrind][18]是一个提供好几款Linux程序调试和性能分析工具的套件。虽然Valgrind可以和各种语言(如Java、Perl、Python、汇编、Fortran、Ada等)编写的程序配合工作,但是它所提供的工具大部分都意在支持C/C++所编写的程序。 + +Memcheck是一款内存错误检测器,也是最受欢迎的一款Valgrind工具。它能够检测出诸多问题,诸如内存泄漏、无效的内存访问、未定义变量的使用以及栈内存分配和释放相关的问题等。 + +#### 更新内容 #### + +工具套件(3.10.1)的[发行版][19]是一个副版本,主要修复了3.10.0版本发现的bug。除此之外,还从主干版本向后移植(backport)了一些修复,补上了缺失的AArch64 ARMv8指令和系统调用。 + +#### 有何优点 #### + +同其它所有Valgrind工具一样,Memcheck也是一个基本的命令行实用程序。它的操作非常简单:通常我们会使用诸如prog arg1 arg2格式的命令来运行程序,而Memcheck只要求你多加几个值即可,就像valgrind --leak-check=full prog arg1 arg2。 + +![cw memcheck output](http://images.techhive.com/images/article/2015/11/cw_memcheck-output-100627037-large.idge.png) +Memcheck + +(注意:因为Memcheck是Valgrind的默认工具,所以无需在命令中指明Memcheck。但是,需要在编译程序之初带上-g参数选项,这一步会添加调试信息,使得Memcheck的错误信息会包含正确的行号。) + +我真正倾心于Memcheck的是它提供了很多命令行选项(如上所述的--leak-check选项),如此不仅能控制工具运转还可以控制它的输出。 + +举个例子,可以开启--track-origins选项,以查看程序源码中未初始化的数据。可以开启--show-mismatched-frees选项让Memcheck检查内存分配和释放所用的函数是否配对。对于C语言所写的代码,Memcheck会确保只能使用free()函数来释放内存,malloc()函数来申请内存。而对C++所写的源码,Memcheck会检查是否使用了delete或delete[]操作符来释放内存,以及new或者new[]来申请内存。 +
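为了更直观地说明这类报告,下面给出一段极简的示例C代码(纯属演示用的假设场景,函数名leaky_dup为虚构):它申请了内存却从不释放,用gcc -g编译后,以valgrind --leak-check=full运行即可在输出中看到对应的泄漏记录。

```c
/* 演示用的假设示例:leaky_dup 复制字符串却从不释放副本。
 * 编译:gcc -g -o demo demo.c
 * 检测:valgrind --leak-check=full ./demo
 * Memcheck 会把泄漏的 n+1 字节报告为 "definitely lost",
 * 并借助 -g 生成的调试信息给出 malloc 调用所在的行号。 */
#include <stdlib.h>
#include <string.h>

size_t leaky_dup(const char *s)
{
    size_t n = strlen(s);
    char *copy = malloc(n + 1);   /* 申请 n+1 字节 */
    if (copy == NULL)
        return 0;
    memcpy(copy, s, n + 1);       /* 连同结尾的 '\0' 一起复制 */
    return n;                     /* 返回前"忘记"调用 free(copy) */
}
```

这段代码本身可以正常编译运行,只有在Memcheck下才会暴露出泄漏问题。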
+Memcheck最好的特点,尤其是对于初学者来说的,是它会给用户建议使用那个命令行选项能让输出更加有意义。比如说,如果你不使用基本的--leak-check选项,Memcheck会在输出时建议“使用--leak-check=full重新运行,查看更多泄漏内存细节”。如果程序有未初始化的变量,Memcheck会产生信息“使用--track-origins=yes,查看未初始化变量的定位”。 + +Memcheck另外一个有用的特性是它可以[创建抑制文件(suppression files)][20],由此可以忽略特定不能修正的错误,这样Memcheck运行时就不会每次都报警了。值得一提的是,Memcheck会去读取默认抑制文件来忽略系统库(比如C库)中的报错,这些错误在系统创建之前就已经存在了。可以选择创建一个新的抑制文件,或是编辑现有的(通常是/usr/lib/valgrind/default.supp)。 + +Memcheck还有高级功能,比如可以使用[定制内存分配器][22]来[检测内存错误][21]。除此之外,Memcheck提供[监控命令][23],当用到Valgrind的内置gdbserver,以及[客户端请求][24]机制(不仅能把程序的行为告知Memcheck,还可以进行查询)时可以使用。 + +#### 注意事项 #### + +毫无疑问,Memcheck可以节省很多调试时间以及省去很多麻烦。但是它使用了很多内存,导致程序执行变慢([由资料可知][25],大概花上20至30倍时间)。 + +除此之外,Memcheck还有其它局限。根据用户评论,Memcheck明显不是[线程安全][26]的;它不能检测出 [静态缓冲区溢出][27];还有就是,一些Linux程序如[GNU Emacs][28],目前还不能使用Memcheck。 + +如果有兴趣,可以在[这里][29]查看Valgrind详尽的局限性说明。 + +#### 总结 #### + +无论是对于初学者还是那些需要高级特性的人来说,Memcheck都是一款便捷的内存调试工具。如果你仅需要基本调试和错误核查,Memcheck会非常容易上手。而当你想要使用像抑制文件或者监控指令这样的特性,就需要花一些功夫学习了。 + +虽然罗列了大量的局限性,但是Valgrind(包括Memcheck)在它的网站上声称全球有[成千上万程序员][30]使用了此工具。开发团队称收到来自超过30个国家的用户反馈,而这些用户的工程代码有的高达2.5千万行。 + +### [Memwatch][31] ### + +**开发者**:Johan Lindh + +**点评版本**:2.71 + +**Linux支持**:所有种类 + +**许可**:GNU通用公共许可证 + +Memwatch是由Johan Lindh开发的内存调试工具,虽然它主要扮演内存泄漏检测器的角色,但是它也具有检测其它如[重复释放跟踪和内存错误释放][32]、缓冲区溢出和下溢、[野指针][33]写入等等内存相关问题的能力(根据网页介绍所知)。 + +Memwatch支持用C语言所编写的程序。可以在C++程序中使用它,但是这种做法并不提倡(由Memwatch源码包随附的Q&A文件中可知)。 + +#### 更新内容 #### + +这个版本添加了ULONG_LONG_MAX以区分32位和64位程序。 + +#### 有何优点 #### + +跟Dmalloc一样,Memwatch也有优秀的文献资料。参考USING文件,可以学习如何使用Memwatch,可以了解Memwatch是如何初始化、如何清理以及如何进行I/O操作的,等等不一而足。还有一个FAQ文件,旨在帮助用户解决使用过程遇到的一般问题。最后还有一个test.c文件提供工作案例参考。 + +![cw memwatch output](http://images.techhive.com/images/article/2015/11/cw_memwatch_output-100627038-large.idge.png) +Memwatch + +不同于Mtrace,Memwatch的输出产生的日志文件(通常是memwatch.log)是人类可阅读格式。而且,Memwatch每次运行时总会拼接内存调试输出到此文件末尾,而不是进行覆盖(译改)。如此便可在需要之时,轻松查看之前的输出信息。 + +同样值得一提的是当你执行了启用Memwatch的程序,Memwatch会在[标准输出][34]中产生一个单行输出,告知发现了错误,然后你可以在日志文件中查看输出细节。如果没有产生错误信息,就可以确保日志文件不会写入任何错误,多次运行的话能实际节省时间。 + 
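在动手试用之前,这里给出一段最小的使用示意(假设的示例代码,函数名make_sequence为虚构,具体用法请以源码包中的USING文件为准):按文中描述的方式,编译时定义MEMWATCH和MW_STDIO并连同memwatch.c一起编译,Memwatch便会接管其中的malloc调用。

```c
/* 假设示例:不定义 MEMWATCH 时可独立编译;
 * 若以 gcc -g -DMEMWATCH -DMW_STDIO demo.c memwatch.c 方式编译,
 * Memwatch 便会跟踪其中的内存分配。 */
#include <stdlib.h>
#ifdef MEMWATCH
#include "memwatch.h"   /* 启用 Memwatch 时必须包含的头文件 */
#endif

/* 分配并填充一个 int 数组,把释放的责任留给调用者;
 * 如果调用者忘记 free,启用 Memwatch 的构建会在
 * memwatch.log 中记录一条未释放(unfreed)条目。 */
int *make_sequence(size_t n)
{
    int *arr = malloc(n * sizeof *arr);
    if (arr == NULL)
        return NULL;
    for (size_t i = 0; i < n; i++)
        arr[i] = (int)i;
    return arr;
}
```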
+另一个我喜欢的优点是Memwatch同样在源码中提供一个方法,你可以据此获取Memwatch的输出信息,然后任由你进行处理(参考Memwatch源码中的mwSetOutFunc()函数获取更多有关的信息)。 + +#### 注意事项 #### + +跟Mtrace和Dmalloc一样,Memwatch也需要你往你的源文件里增加代码:你需要把memwatch.h这个头文件包含进你的代码。而且,编译程序的时候,你需要连同memwatch.c一块编译;或者你可以把已经编译好的目标模块包含起来,然后在命令行定义MEMWATCH和MW_STDIO变量。不用说,想要在输出中定位行号,-g编译器选项也少不了。 + +不过,Memwatch也缺少一些特性。比如Memwatch不能检测出往一块已经被释放的内存写入操作,或是在分配的内存块之外的读取操作。而且,Memwatch也不是线程安全的。还有一点,正如我在开始时指出,在C++程序上运行Memwatch的结果是不能预料的。 + +#### 总结 #### + +Memwatch可以检测很多内存相关的问题,在处理C程序时是非常便捷的调试工具。因为源码小巧,所以可以从中了解Memwatch如何运转,有需要的话可以调试它,甚至可以根据自身需求扩展升级它的功能。 + +### [Mtrace][35] ### + +**开发者**: Roland McGrath and Ulrich Drepper + +**点评版本**: 2.21 + +**Linux支持**:所有种类 + +**许可**:GNU通用公共许可证 + +Mtrace是[GNU C库][36]中的一款内存调试工具,同时支持Linux C和C++程序,检测由malloc()和free()函数的不对等调用所引起的内存泄漏问题。 + +![cw mtrace output](http://images.techhive.com/images/article/2015/11/cw_mtrace-output-100627039-large.idge.png) +Mtrace + +Mtrace实现为对mtrace()函数的调用,跟踪程序中所有malloc/free调用,在用户指定的文件中记录相关信息。文件以一种机器可读的格式记录数据,所以有一个Perl脚本(同样命名为mtrace)用来把文件转换并展示为人类可读格式。 + +#### 更新内容 #### + +[Mtrace源码][37]和[Perl文件][38]同GNU C库(2.21版本)一起释出,除了更新版权日期,其它别无改动。 + +#### 有何优点 #### + +Mtrace最优秀的特点是非常简单易学。你只需要了解在你的源码中如何以及何处添加mtrace()及其对应的muntrace()函数,还有如何使用Mtrace的Perl脚本。后者非常简单,只需要运行指令mtrace (例子见开头截图最后一条指令)。 + +Mtrace另外一个优点是它的可伸缩性,体现在,不仅可以使用它来调试完整的程序,还可以使用它来检测程序中独立模块的内存泄漏。只需在每个模块里调用mtrace()和muntrace()即可。 + +最后一点,因为Mtrace会在mtrace()(在源码中添加的函数)执行时被触发,因此可以很灵活地[使用信号][39]动态地(在程序执行周期内)使能Mtrace。 + +#### 注意事项 #### + +因为mtrace()和muntrace()函数(在mcheck.h文件中声明,所以必须在源码中包含此头文件)的调用是Mtrace运行(muntrace()函数并非[总是必要][40])的根本,因此Mtrace要求程序员至少改动源码一次。 + +要知道,编译程序的时候需要带上-g选项([GCC][41]和[G++][42]编译器均有提供),才能使调试工具在输出中展示正确的行号。除此之外,有些程序(取决于源码体积有多大)可能会花很长时间进行编译。最后,带-g选项编译会增大可执行文件的体积(因为提供了额外的调试信息),因此记得在测试结束后,不带-g选项重新编译程序。 + +使用Mtrace,你需要掌握Linux环境变量的基本知识,因为在程序执行之前,需要把用户指定文件(mtrace()函数用以记载全部信息)的路径设置为环境变量MALLOC_TRACE的值。 + +Mtrace的检测能力仅限于内存泄漏和尝试释放未分配内存这两类问题。它不能检测其它内存相关问题如非法内存访问、使用未初始化内存。而且,[有人抱怨][43]Mtrace不是[线程安全][44]的。 + +### 总结 ### + 
+不言自明,我在此讨论的每款内存调试器都有其优点和局限。所以,哪一款适合你取决于你所需要的特性,虽然有时候容易安装和使用也是一个决定因素。 + +要想捕获软件程序中的内存泄漏,Mtrace最适合不过了。它还可以节省时间。由于Linux系统已经预装了此工具,对于不能联网或者不可以下载第三方调试工具的情况,Mtrace也是极有助益的。 + +另一方面,相比Mtrace,Dmalloc不仅能检测更多错误类型,还能提供更多特性,比如运行时可配置、GDB集成。而且,Dmalloc不像这里所说的其它工具,它是线程安全的。更不用说它的详细资料了,这让Dmalloc成为初学者的理想选择。 + +虽然Memwatch的资料比Dmalloc的更加丰富,而且还能检测更多的错误种类,但是你只能在C语言写就的软件程序上使用它。一个让Memwatch脱颖而出的特性是它允许在你的程序源码中处理它的输出,这对于想要定制输出格式来说是非常有用的。 + +如果改动程序源码非你所愿,那么使用Electric Fence吧。不过,请记住,Electric Fence只能检测两种错误类型,而此二者均非内存泄漏。还有就是,需要了解GDB基础以最大程度发挥这款内存调试工具的作用。 + +Memcheck可能是这当中综合性最好的了。相比这里所说其它工具,它检测更多的错误类型,提供更多的特性,而且不需要你的源码做任何改动。请注意,它的基本功能并不难上手,但是想要使用它的高级特性,就必须学习相关的专业知识了。 + +-------------------------------------------------------------------------------- + +via: http://www.computerworld.com/article/3003957/linux/review-5-memory-debuggers-for-linux-coding.html + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/soooogreen) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.computerworld.com/author/Himanshu-Arora/ +[1]:https://openclipart.org/detail/132427/penguin-admin +[2]:https://en.wikipedia.org/wiki/Manual_memory_management +[3]:http://dmalloc.com/ +[4]:https://www.owasp.org/index.php/Double_Free +[5]:https://stuff.mit.edu/afs/sipb/project/gnucash-test/src/dmalloc-4.8.2/dmalloc.html#Fence-Post%20Overruns +[6]:http://dmalloc.com/releases/notes/dmalloc-5.5.2.html +[7]:http://www.gnu.org/software/gdb/ +[8]:http://dmalloc.com/docs/ +[9]:http://dmalloc.com/docs/latest/online/dmalloc_26.html#SEC32 +[10]:http://dmalloc.com/docs/latest/online/dmalloc_23.html#SEC29 +[11]:https://en.wikipedia.org/wiki/Memory_management#Dynamic_memory_allocation +[12]:https://en.wikipedia.org/wiki/Segmentation_fault +[13]:https://en.wikipedia.org/wiki/GNU_Compiler_Collection +[14]:http://www.gnu.org/software/gdb/ +[15]:https://launchpad.net/ubuntu/+source/electric-fence/2.2.3 
+[16]:http://valgrind.org/docs/manual/mc-manual.html +[17]:http://valgrind.org/info/developers.html +[18]:http://valgrind.org/ +[19]:http://valgrind.org/docs/manual/dist.news.html +[20]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles +[21]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.mempools +[22]:http://stackoverflow.com/questions/4642671/c-memory-allocators +[23]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands +[24]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs +[25]:http://valgrind.org/docs/manual/valgrind_manual.pdf +[26]:http://sourceforge.net/p/valgrind/mailman/message/30292453/ +[27]:https://msdn.microsoft.com/en-us/library/ee798431%28v=cs.20%29.aspx +[28]:http://www.computerworld.com/article/2484425/linux/5-free-linux-text-editors-for-programming-and-word-processing.html?nsdr=true&page=2 +[29]:http://valgrind.org/docs/manual/manual-core.html#manual-core.limits +[30]:http://valgrind.org/info/ +[31]:http://www.linkdata.se/sourcecode/memwatch/ +[32]:http://www.cecalc.ula.ve/documentacion/tutoriales/WorkshopDebugger/007-2579-007/sgi_html/ch09.html +[33]:http://c2.com/cgi/wiki?WildPointer +[34]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29 +[35]:http://www.gnu.org/software/libc/manual/html_node/Tracing-malloc.html +[36]:https://www.gnu.org/software/libc/ +[37]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.c;h=df10128b872b4adc4086cf74e5d965c1c11d35d2;hb=HEAD +[38]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.pl;h=0737890510e9837f26ebee2ba36c9058affb0bf1;hb=HEAD +[39]:http://webcache.googleusercontent.com/search?q=cache:s6ywlLtkSqQJ:www.gnu.org/s/libc/manual/html_node/Tips-for-the-Memory-Debugger.html+&cd=1&hl=en&ct=clnk&gl=in&client=Ubuntu +[40]:http://www.gnu.org/software/libc/manual/html_node/Using-the-Memory-Debugger.html#Using-the-Memory-Debugger +[41]:http://linux.die.net/man/1/gcc 
+[42]:http://linux.die.net/man/1/g++ +[43]:https://sourceware.org/ml/libc-help/2014-05/msg00008.html +[44]:https://en.wikipedia.org/wiki/Thread_safety diff --git a/translated/talk/20151125 20 Years of GIMP Evolution--Step by Step.md b/translated/talk/20151125 20 Years of GIMP Evolution--Step by Step.md new file mode 100644 index 0000000000..20acfc1ee8 --- /dev/null +++ b/translated/talk/20151125 20 Years of GIMP Evolution--Step by Step.md @@ -0,0 +1,169 @@ +GHLandy Translated + +GIMP 过去的 20 年:一点一滴的进步 +================================================================================ +注:youtube 视频 + + +[GIMP][1](GNU 图像处理程序)—— 一流的开源免费图像处理程序。加州大学伯克利分校的 Peter Mattis 和 Spencer Kimball 最早在 1995 年的时候就进行了该程序的开发。到了 1997 年,该程序成为了 [GNU Project][2] 官方的一部分,并正式更名为 GIMP。时至今日,GIMP 已经成为了最好的图像编辑器之一,并有最受欢迎的 “GIMP vs Photoshop” 之争。 + +1995 年 11 月 21 日,首版发布: + +> 发布者: Peter Mattis +> +> 发布主题: ANNOUNCE: The GIMP +> +> 日期: 1995-11-21 +> +> 消息ID: <48s543$r7b@agate.berkeley.edu> +> +> 新闻组: comp.os.linux.development.apps,comp.os.linux.misc,comp.windows.x.apps +> +> GIMP:通用图像处理程序 +> ------------------------------------------------ +> +> GIMP 是为各种图像编辑操作提供一个直观的图形界面而设计的。 +> +> 以下是 GIMP 的主要功能介绍: +> +> 图像查看 +> ------------- +> +> * 支持 8 位,15 位,16 位和 24 位颜色 +> * 8 位色显示的图像序列的稳定算法 +> * 以 RGB 色、灰度和索引色模式查看图像 +> * 同时编辑多个图像 +> * 实时缩放和全图查看 +> * 支持 GIF、JPEG、PNG、TIFF 和 XPM 格式 +> +> 图像编辑 +> ------------- +> +> * 选区工具:包括矩形、椭圆、自由、模糊、贝尔赛曲线以及智能 +> * 变换工具:包括旋转、缩放、剪切和翻转 +> * 绘画工具:包括油漆桶、笔刷、喷枪、克隆、卷积、混合和文本 +> * 效果滤镜:如模糊和边缘检测 +> * 通道和颜色操作:叠加、反相和分解 +> * 组件功能:允许你方便的添加新的文件格式和效果滤镜 +> * 多步撤销/重做功能 + +1996 年,GIMP 0.54 版 + +![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/054.png) + +GIMP 0.54 版需要具备 X11 显示、X-server 以及 Motif 1.2 微件,支持 8 位、15 位、16 位和 24 位的颜色深度和灰度,支持 GIF、JPEG、PNG、TIFF 和 XPM 图像格式。 + +基本功能:具备矩形、椭圆、自由、模糊、贝塞尔曲线和智能等选择工具,旋转、缩放、剪切、克隆、混合和翻转等变换工具。 + +扩展工具:文字添加、效果滤镜、通道和颜色操纵工具、撤销/重做功能。由于第一个版本支持组件扩展,才方便添加这些功能。 + +GIMP 0.54 版可以在 Linux、HP-UX、Solaris 和 SGI IRIX 中运行。 + +### 1997 年,GIMP 0.60 版 ### + 
+![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/060.gif) + +这只是一个开发版本,并非面向用户发布的。GIMP 有了新的工具包——GDK(GIMP Drawing Kit,GIMP 绘图工具)和 GTK(GIMP Toolkit,GIMP 工具包),并弃用 Motif。GIMP 工具包随后也发展成为了 GTK+ 跨平台的微件工具包。新特性: + +- 基本的图层功能 +- 子像素采样 +- 笔刷间距 +- 改进的喷枪功能 +- 绘制模式 + +### 1997 年,GIMP 0.99 版 ### + +![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/099.png) + +从 0.99 版本开始,GIMP 有了宏脚本的支持。GTK 及 GTK 功能增强版正式更名为 GTK+。其他更新: + +- 支持大体积图像(大于 100M) +- 新增原生格式 – XCF +- 新的 API – 使得更加容易编写组件和扩展 + +### 1998 年,GIMP 1.0 版 ### + +![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/100.gif) + +GIMP 和 GTK+ 开始分为两个不同的项目。GIMP 官网进行重构,包含新教程、组件和文档。新特性: + +- 基于瓦片式的内存管理 +- 组件 API 做了较大改变 +- XCF 格式现在支持图层、导航和选择 +- web 界面 +- 在线图像生成 + +### 2000 年,GIMP 1.2 版 ### + +新特性: + +- 进行了非英文语言翻译 +- 修复 GTK+ 和 GIMP 中的大量 bug +- 增加大量组件 +- 图像映射 +- 新工具:调整大小、测量、减淡和加深(dodge/burn)、颜色吸管和翻转等。 +- 图像管道 +- 保存前可以进行图像预览 +- 按比例缩放的笔刷进行预览 +- 通过路径进行递归选择 +- 新的窗口导航 +- 支持图像拖拽 +- 支持水印 + +### 2004 年,GIMP 2.0 版 ### + +![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/200.png) + +重大更新 – 迁移到 GTK+ 2.x 工具包。
+ +### 2004 年,GIMP 2.2 版 ### + +![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/220.png) + +修复大量 Bug 并支持图像拖拽 + +### 2007 年,GIMP 2.4 版 ### + +![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/240.png) + +新特性: + +- 更好的图像拖拽体验 +- 使用新的脚本解释器 Script-Fu 替代了 旧的 Ti-Fu +- 新组件:影印效果、光晕效果、霓虹灯效果、卡通效果、小狗笔刷、水珠笔刷以及其他组件 + +### 2008 年,GIMP 2.6 版 ### + +新特性: + +- 更新了图形界面 +- 新的选择工具 +- 继承了 GEGL (GEneric Graphics Library,通用图形库) +- 为 MDI 行为实现了实用程序窗口提示 + +### 2012 年,GIMP 2.8 版 ### + +![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/280.png) + +新特性: + +- GUI 在视觉上做了一些改变 +- 新的保存和导出菜单 +- 更新文本框工具 +- 支持图层群组 +- 支持 JPEG2000 和导出为 pdf +- 网页截图工具 + +-------------------------------------------------------------------------------- + +via: https://tlhp.cf/20-years-of-gimp-evolution/ + +作者:[Pavlo Rudyi][a] +译者:[GHLandy](https://github.com/GHLandy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://tlhp.cf/author/paul/ +[1]:https://gimp.org/ +[2]:http://www.gnu.org/ diff --git a/translated/talk/20151202 KDE vs GNOME vs XFCE Desktop.md b/translated/talk/20151202 KDE vs GNOME vs XFCE Desktop.md new file mode 100644 index 0000000000..bdbaf2cdbd --- /dev/null +++ b/translated/talk/20151202 KDE vs GNOME vs XFCE Desktop.md @@ -0,0 +1,47 @@ +translating by kylepeng93 +KDE,GNOME和XFCE的较量 +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2013/07/300px-Xfce_logo.svg_.png) +这么多年来,很多人一直都在他们的linux桌面端使用KDE或者GNOME桌面环境。这两个桌面环境经过多年的发展之后仍然在继续增加他们的用户基数。然而,在轻量级桌面环境下,XFCE一举成为了最受欢迎的桌面环境,相较于LXDE缺少的优美视觉效果,默认配置下的XFCE就可以在这方面打败前者。XFCE提供了用户能在GNOME2下使用的所有功能特性。但是,必须承认,在一些太老的计算机上,它的轻量级的特性并不能得到很好的效果。 + +### 桌面主题定制 ### 
+用户完成安装之后,XFCE看起来可能会有一点无聊,因为它在视觉上还缺少一些吸引力。但是,请不要误解我的话,XFCE仍然拥有漂亮的桌面,可能看起来像是用户眼中的香草,正如大多数刚刚接触XFCE桌面环境的人。好消息是当我们给XFCE安装新的主题的时候,这会是一个十分容易的过程,因为你能够快速的找到你喜欢的XFCE主题,之后,你可以将它解压到一个合适的目录中。从这一点上来说,XFCE自带的一个重要的图形界面工具可以帮助用户更加容易的选中你已经选好的主题,这可能是目前在XFCE上最好用的工具了。如果用户按照上面的指示去做的话,对于任何想要尝试使用XFCE的用户来说将不存在任何困难。 + +在GNOME桌面上,用户也可以按照上面的方法去做。不过,其中最主要的不同点就是用户必须手动下载并安装GNOME Tweak Tool,这样才能继续你想做的事。当然,对于使用任何一种方式都不会有什么障碍,但是对于用户来说,使用XFCE安装和激活主题并不需要去额外的下载并安装任何tweak tool可能是他们无法忽略的一个优势。而在GNOME上,尤其是在用户已经下载并安装了GNOME Tweak tool之后,你仍将必须确保你已经安装了用户主题拓展。 + +在XFCE一样,用户将会去搜索并下载自己喜欢的主题,然后,用户可以重新使用GNOME Tweak tool,并点击该工具界面左边的Appearance按钮,接着用户便可以简单的通过点击相应的按钮并选择自己已经下载好的主题来使用自己的主题,当一切都完成之后,用户将会看到一个告诉用户已经成功应用了主题的对话框,这样,你的主题便已经安装完成。对于这一点,用户可以简单的使用滚动条来选择他们想要的主题。和XFCE一样,主题激活的过程也是十分简单的,然而,对于因为要使用一个新的主题而下载一个不被包含的应用的需求也是需要考虑的。 + +最后,就是KDE桌面主题定制的过程了。和XFCE一样,不需要去下载额外的工具来安装主题。从这点来看,让人有种XFCE必将使KDE成为最后的赢家的感觉。不仅在KDE上可以完全使用图形用户界面来安装主题,而且甚至只需要用户点击获取新主题的按钮就可以定位,查看,并且最后自动安装新的主题。 + +然而,我们不应该认为KDE相比XFCE是一个更加稳定的桌面环境。因此,现在正是我们思考为什么一些额外的功能可能会从桌面环境中移除来达到最小化的目的。为此,我们都必须为拥有这样出色的功能而给予KDE更多的支持。 + +### MATE不是一个轻量级的桌面环境 ### +在继续比较XFCE,GNOME3和KDE之前,必须对这方面的老手作一个事先说明,我们不会将MATE桌面环境加入到我们的比较中。MATE可被认为是GNOME2的另一个衍生品,但是它并没有声称是作为一款轻量级或者快捷桌面。相反,它的主要目的是成为一款更加传统和舒适的桌面环境,并使它的用户感觉就像在家里使用它一样。 + +另一方面,XFCE生来就是要实现他自己的一系列使命。XFCE给它的用户提供一个更加轻量级的桌面环境,至今仍然有着吸引人的桌面视觉体验。然后,对于一些认为MATE也是一款轻量级的桌面环境的人来说,其实MATE真正的目标并不是成为一款轻量级的桌面环境。这两个选择在各自安装了一款好的主题之后看起来都会让人觉得非常具有吸引力。 + +### 桌面导航 ### +XFCE在窗口之外提供了一个显眼的导航器。任何使用过传统的windows或者GNOME 2/MATE桌面环境的用户都可以在没有任何帮助的情况下自如的使用新安装的XFCE桌面环境的导航器。紧接着,添加小程序到面板中也是很明显的。和找到已经安装的应用程序一样,直接使用启动器并点击你想要运行的应用程序图标。除了LXDE和MATE之外,还没有其他的桌面的导航器可以做到如此简单。不仅如此,更加简单的是对控制面板的使用,对于刚刚使用这个新桌面的每个用户来说这是一个非常大的好处。如果用户更喜欢通过老式的方法去使用他们的桌面,对于GNOME来说,这不是一个问题。和没有最小化按钮形成的用户关注热点一样,加上其他应用布局方法,这将使新用户更加容易习惯这个风格设计。 + +如果用户来自windows桌面环境,那么这些用户将要放弃这些习惯,因为,他们将不能简单的通过鼠标右击一下就可以将一个小程序添加到他们的工作空间的顶部。与此相反,它可以通过使用拓展来实现。GNOME中的KDE拓展是可用的,并且是非常的容易,这些容易之处体现在只需要用户简单的使用位于GNOME拓展页面上的on/off开关。用户必须清楚,只能通过访问该页面才能使用这个功能。 + 
+另一方面,GNOME也在它的外观中体现了它的设计理念,即为用户提供一个直观、易用的控制面板。你可能认为这算不上什么大事,但在我看来,这确实值得称赞,也有必要一提。KDE给用户提供了更加传统的桌面使用体验,并通过相似的启动器和更熟悉的获取软件方式来迎合来自windows的用户。往KDE桌面添加小图标或者小部件非常简单,只需要在桌面的底部右击即可。KDE的问题在于,用户想要寻找的很多KDE特性实际上都是隐藏的。KDE的用户可能会反驳我的观点,但我仍然坚持我的说法。 + +要添加小部件,只需在面板上右击就能看到面板选项,但这并不是一个能让人立刻明白如何安装小部件的方式。通常,你要先选择面板选项,然后才能看到添加小部件的入口。这对我来说不是问题,但对一些用户来说,它会造成不必要的困惑。而让事情更复杂的是,在用户好不容易找到小部件区域之后,又会发现一个叫做“Activities”的全新术语。它和小部件位于同一个区域,却又自成一体。 + +现在请不要误解我,KDE中的活动特性是很不错的,也是很有价值的,但是从可用性的角度看,为了不让新手感到困惑,它更适合放在单独的菜单选项里。欢迎持不同意见,但长期以来对新手的测试一次又一次地证明了我的看法。吐槽归吐槽,KDE添加新部件的方法的确很棒。与KDE的主题一样,用户可以通过提供的图形用户界面浏览并自动安装部件。这是一个非常棒的功能,值得称赞。KDE的控制面板可能和用户希望的样子不一样,它还不够简单。但是有一点很清楚,这将是他们致力于改进的地方。 + +### 因此,XFCE是最好的桌面环境,对吗? ### +我在我的计算机上使用GNOME和KDE,并在我办公室和家里的电脑上使用Xfce。我也有一些老机器在使用Openbox和LXDE。每一种桌面的使用经验都让我有所收获,帮助我按自己认为合适的方式使用每台机器。就我而言,我对Xfce有一份特殊的感情,因为作为桌面环境,我坚持使用了它很多年。但就这篇文章所用的日常电脑而言,我用的其实是GNOME。 +这篇文章的主要思想是:我还是觉得Xfce能提供好一点的用户体验,对于那些正在寻找稳定的、传统的、容易理解的桌面环境的用户来说,XFCE是理想的。欢迎您在评论部分和我们分享你的意见。 +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/kde-vs-gnome-vs-xfce-desktop/ + +作者:[M.el Khamlichi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/pirat9/ diff --git a/translated/talk/20151223 Ten Biggest Linux Stories Of The Year 2015.md b/translated/talk/20151223 Ten Biggest Linux Stories Of The Year 2015.md new file mode 100644 index 0000000000..90c8f2f6a7 --- /dev/null +++ b/translated/talk/20151223 Ten Biggest Linux Stories Of The Year 2015.md @@ -0,0 +1,125 @@ +# 2015年度十大Linux事件 + +![2015年的大事件](http://itsfoss.com/wp-content/uploads/2015/12/Biggest-Linux-Stories-2015.jpg) + +2015年即将结束,我在这里(It's FOSS)发表《2015年的大事件》系列。这个系列的第一篇文章为《2015十大Linux事件》。这些事件在Linux世界中产生了极大的影响,无论它们是积极的还是消极的。 + +我总结了2015年发生的十个这样的、产生了最大影响的事件。黑喂狗!
+ +### 2015年度十大Linux/开源相关事件 + +补充一句,以下这些项目没有按照时间顺序排列。 + +#### 微软与Linux的结盟 + +在9月下旬,所有人听到[微软构建了自己的Linux发行版][1]这个消息时都大吃一惊。其在后来被揭露,这其实是一个微软开发的用于`Azur cloud switches`的[软件][2]。 + +但故事还没结束。微软真的与Canonical (Ubuntu Linux的母公司)达成合作 来开发[HDInsight][3]——微软在通过Azure上构建Hadoop来进行大数据处理的服务. Ubuntu是[微软在其上部署软件][4]的第一个Linux系统。 + +微软会继续保持它与Linux的关系吗? 还是在使用Linux达到其目的(Azur)就会收手?只有时间能告诉我们一切。 + +#### 微软发布适用于Linux的Visual Studio Code + +在微软发布Linux发行版引起喧嚣之前,微软扔下了另一枚炸弹——发布Linux版Visual Studio Code, 与其一并发布的还有Windows以及OS X版。尽管Visual Studio Code并不是开源的,从某种意义上讲,发布Linux版本仍然是Linux用户的胜利。 无论如何,Linus Torvalds曾说过一句很著名的话:“如果微软给Linux开发过一款应用的话,这就意味着我已经赢了”。 + +你可以看这个教程来学习[如何在Ubuntu中安装Visual Studio Code][5]。 + +#### 苹果公司开源编程语言Swift + +在向Linux及开源“示爱”方面,苹果公司也不甘示弱。苹果用来制作iOS应用的首选编程语言Swift, [现已开源][6]并移植到Linux中。虽然其还在测试中,但你可以轻易地[在Ubuntu中安装Swift][7]。 + +但是,苹果就是苹果,它[开始吹嘘][8]其为“第一个视开源开发为公司的关键软件开发策略的计算机公司巨头(原文如此)”。 + +#### Ubuntu手机终于发布 + +Ubuntu手机终于在今年年初发布。 因其早期使用者及开发者,Ubuntu深受Ubuntu社区喜爱。主流智能机用户仍然回避它,主要[因为该系统还在大规模开发中][9]。对于Ubuntu手机的问世,2016年将成为决定性的一年。 + +#### Jolla遭受经济危机 + +Jolla, Sailfish OS —— 以Linux为基础的智能手机系统的幕后公司,遭受了严重的财政障碍。这导致了[一半的Jolla员工罢工][10]。 + +Jolla在2014年针对它的平板电脑完成了一次非常[成功的众筹][11],显然,他们将大部分预算都花在了Sailfish OS的开发上,而在主要投资者退出后,公司在挣扎以求生存。 + +不过有一个好消息,Jolla成功拿到了一些雄厚的资金,而且他们已[回归经营生意][12]。 + +#### Firefox OS已死 + +作为安卓的开源替代品,Mozila的移动操作系统Firefox OS在这个月初慢性死亡。本打算在发展中国家售卖低至25美金的智能手机,可Firefox OS永远没有流行起来。我认为主要原因是它的硬件廉价,以及它缺少流行应用。 + +在十二月,[Mozilla宣布][13]其将停止开发Firefox OS, 并停止出售Firefox智能手机。 + +我认为[Tizen][14], Linux基金会旗下的基于Linux的移动操作系统,也已经消失了,尽管其从未发布过。我没有看到任何关于Tizen开发的消息,而且Linux基金会从未推动过它的开发。Tizen何时死亡只是一个时间问题。 + +#### “Ubuntu家族”内讧 + +今年五月,Jonathan Riddell,Kubuntu项目的领导者,[被Ubuntu社区委员会强制要求下台][15],这引起了很多激烈的讨论。Jonathan曾质问Ubuntu所收捐款的使用情况,他抱怨Kubuntu从未见到过这些钱。 + +这导致了两方的互相谴责。最终。Ubuntu的老爹,[Mark Shuttleworth要求Jonathan下台][16]。 + +#### 女性Linux内核开发者因“野蛮的沟通方式”而退出 + +Linux之父Linus Torvalds因其粗俗的语言而著称。Linux内核开发者[Sarah Sharp][17]也因为嘴快心直而著称。 + +Sarah Sharp曾在2013年与Linus Torvalds公开争执,[建议Linus将“语言暴力”赶出邮件列表][18]。Linus也没有[委婉地][19]回复她。 + 
+那是在2013年。2015年,Sarah宣布她正在[逐步停止她在内核社区的工作][20],因为他们的交流方式缺乏基本礼仪,并且野蛮而充满亵渎。 + +这一举动让人们开始讨论Linux内核社区是否真的应该改变他们的行为方式,还是Sarah做得太过分了。 + +#### Unity游戏编辑器移植到Linux平台 + +尽管[在Linux上玩游戏][21]一直是Linux用户们的阿克琉斯之踵,整个社区在游戏引擎Unity宣布其正在测试[Linux下的游戏编辑器][22]时都沸腾了。因为在渲染图像时,Linux是一个最流行的选择,所以我们推测这将使游戏开发者向Linux靠拢。不过,Unity是否真的会推出一个最终版本的游戏编辑器,这个问题还未被证实。 + +#### 政府机构采用开源软件 + +欧洲数个城市的管理机构决定[抛弃先前的软件][23],并使用其开源的替代品。大多数城市管理机构将Microsoft Office替换为LibreOffice或OpenOffice。一些城市管理机构和[公立学校][24]也跟进,将Microsoft Windows换成Linux。 + +对于这一行为,削减成本是一个重要的因素,因为城市管理机构通过采用开源软件省下了无数欧元。 + +大学也并没有在采用开源软件的道路上落后。这一年,我们听到了[大学如何抛弃Photoshop改用Krita][25]以及[大学使用开源Office软件][26]的消息。 + +### 总结 + +与其他年一样,2015年同样有许多令Linux爱好者感到积极或消极的时刻。我们看到Linux的竞争者,如微软和苹果,向Linux靠拢,政府机构采用开源软件。同时,我们也见证了Firefox智能手机系统的失败。我想说,这真是喜忧参半的一年。 + +你认为呢?我希望你们分享你们所认为对于Linuxer们来说最重要的新闻,和你们对这一年的整体感受。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/biggest-linux-stories-2015/ + +作者:[Abhishek][a] + +译者:[StdioA](https://github.com/StdioA) + +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://itsfoss.com/author/abhishek/ +[1]: http://www.theregister.co.uk/2015/09/18/microsoft_has_developed_its_own_linux_repeat_microsoft_has_developed_its_own_linux/ +[2]: http://arstechnica.com/information-technology/2015/09/microsoft-has-built-software-but-not-a-linux-distribution-for-its-software-switches/ +[3]: https://azure.microsoft.com/en-us/services/hdinsight/ +[4]: http://www.zdnet.com/article/microsoft-deploys-first-major-server-application-on-ubuntu-linux/ +[5]: http://itsfoss.com/install-visual-studio-code-ubuntu/ +[6]: http://itsfoss.com/swift-open-source-linux/ +[7]: http://itsfoss.com/use-swift-linux/ +[8]: https://business.facebook.com/itsfoss/photos/pb.115098615297581.-2207520000.1450817108./634288916711879/?type=3&theater +[9]: http://www.engadget.com/2015/07/24/ubuntu-phone-review/ +[10]: 
http://techcrunch.com/2015/11/20/jolla-running-out-of-runway-for-its-android-alternative/ +[11]: https://www.indiegogo.com/projects/jolla-tablet-world-s-first-crowdsourced-tablet#/ +[12]: https://blog.jolla.com/jolla-back-business/ +[13]: http://arstechnica.com/gadgets/2015/12/firefox-os-smartphones-are-dead/ +[14]: https://www.tizen.org/ +[15]: http://www.omgubuntu.co.uk/2015/05/kubuntu-project-lead-asked-to-step-down-by-ubuntu-community-council +[16]: http://www.cio.com/article/2926838/linux/mark-shuttleworth-ubuntu-community-council-ask-kubuntu-developer-to-step-down-as-leader.html +[17]: http://sarah.thesharps.us/ +[18]: http://www.techeye.net/chips/linus-torvalds-and-intel-woman-in-sweary-spat +[19]: http://marc.info/?l=linux-kernel&m=137392506516022&w=2 +[20]: http://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html +[21]: http://itsfoss.com/linux-gaming-guide/ +[22]: http://itsfoss.com/unity-gaming-engine-linux/ +[23]: http://itsfoss.com/tag/open-source-adoption/ +[24]: http://itsfoss.com/spanish-school-ditches-windows-ubuntu/ +[25]: http://itsfoss.com/french-university-dumps-adobe-photoshop-open-source-app-krita/ +[26]: http://itsfoss.com/hungarian-universities-switch-eurooffice/ diff --git a/translated/talk/20151223 What' s the Best File System for My Linux Install.md b/translated/talk/20151223 What' s the Best File System for My Linux Install.md new file mode 100644 index 0000000000..2d093cb83f --- /dev/null +++ b/translated/talk/20151223 What' s the Best File System for My Linux Install.md @@ -0,0 +1,96 @@ + +我的Linux系统应该安装的最好的文件系统是什么? 
+================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/05/file-systems-feature-image.jpg) + +文件系统: 它们不是世界上最激动人心的技术,但是仍然很重要。本文我们将细数那些流行的Linux文件系统 - 它们是什么,它们能够做什么,以及它们的目标用户。 + +### Ext4 ### + +![file-systems-ext4](https://www.maketecheasier.com/assets/uploads/2015/05/file-systems-ext4.png) + +如果你曾经安装过Linux,你可能在安装过程中看到过"Ext4"字样。关于这个有一个不错的理由: 它是当前每个可用Linux发行版所选择的文件系统。当然,还有其他的一些选择,但是不可否认的是,Ext4(Extended 4)几乎是所有Linux用户都会选择的文件系统。 + +#### 它能做什么? #### + +Ext4拥有你预期的曾经的文件系统(Ext2/Ext3)的所有优点, 同时还带来了一些改进。还有很多内容可以发掘,这里列举出Ext4为你带来的最好的部分: + +- 文件系统日志 +- 日志校验 +- 多重块文件的分配 +- 对Ext2 && Ext3向后兼容 +- 持续的预分配空闲空间 +- 改进的文件系统校验(相比于之前的版本) +- 当然,同时支持更大的文件 + +#### 目标用户 #### + +Ext4针对那些寻找超级可靠的构建环境或者那些仅仅需要可用环境的用户。这个文件系统不会对你的系统做快照;它甚至没有最好的SSD支持,但是如果你的要求不是太过严格的话,你会觉得它也还不错。 + +### BtrFS ### + +![file-systems-btrFS](https://www.maketecheasier.com/assets/uploads/2015/05/file-systems-btrFS-e1450065697580.png) + +B树(B-tree)文件系统 (也被认为是butterFS,黄油文件系统) 是Oracle为Linux研发的一款文件系统。它是一个全新的文件系统,而且正处于重度开发阶段。Linux社区认为其目前使用上还有些不稳定。BtrFS的核心原则是基于写时复制(copy-on-write). 
**写时复制**基本上意味着在写入数据前,这份数据的每一比特都有单独的一份副本。当数据写入完毕后,它相应的副本也随之生成。
+
+#### 它能做什么 ####
+
+除了支持写时复制之外,BtrFS 也能够做许多其他的事情 - 事实上,它支持的功能如此之多,以致于全部列出来要花很长时间。这里列举最值得一提的特性:支持只读快照、文件克隆、子卷、透明压缩、离线文件系统校验、无缝地从 ext3/4 转换到 BtrFS、在线碎片整理,还支持 RAID 0、RAID 1、RAID 5、RAID 6 和 RAID 10。
+
+#### 目标用户 ####
+
+BtrFS 的开发者们许诺过,该文件系统是当前其他文件系统的新一代替代者。这是事实,虽然它目前仍处于开发之中。它有很多面向高级用户的杀手级特性,也有面向基本用户的(比如在 SSD 上的良好表现)。这个文件系统适合那些想要从文件系统中获取更多(特性),以及那些想尝试用写时复制机制做一些事情的用户。
+
+### XFS ###
+
+![file-systems-xfs](https://www.maketecheasier.com/assets/uploads/2015/05/file-systems-xfs.jpg)
+
+XFS 由 Silicon Graphics 公司创造开发,是一个高端文件系统,定位于速度和性能。对性能的专注,使得 XFS 在并行输入输出方面表现得尤其出色。XFS 文件系统能够处理数量庞大的数据,事实上某些 XFS 用户的数据接近 300+TB。
+
+#### 它能做什么 ####
+
+XFS 是一个经过良好测试的数据存储文件系统,它是为高性能操作而诞生的。其特性包括:
+
+- RAID 阵列的条带化(striped)分配
+- 文件系统日志
+- 多种块大小
+- 直接 I/O
+- 指定速率(guaranteed-rate)I/O
+- 快照
+- 在线碎片整理
+- 在线调整大小
+
+#### 目标用户 ####
+
+XFS 适合那些想要一个坚如磐石的文件系统方案的用户。它起源于 1993 年,并且随着时间的变迁变得越来越好。如果你有一台家庭服务器,而且苦恼于如何部署存储环境,那么可以考虑一下 XFS。它拥有的众多特性(比如快照)能够协助你管理文件存储。尽管如此,它并不局限于服务器端。如果你是一个相对高级的用户,或者对 BtrFS 所承诺的很多特性感兴趣,可以尝试一下 XFS。它实现了很多与 BtrFS 相似的特性,并且没有稳定性方面的问题。
+
+### Reiser4 ###
+
+![file-system-riser4](https://www.maketecheasier.com/assets/uploads/2015/05/file-system-riser4.gif)
+
+Reiser4 是 ReiserFS 的继任者,由 Namesys 公司创造研发,它的诞生可以追溯到 Linspire 工程和 DARPA。它与众不同的地方在于众多的事务模式:写入数据的方式不是单一的,而是有很多种。
+
+#### 它能做什么 ####
+
+Reiser4 拥有使用多种不同事务模式的独特能力。它能够使用写时复制模式(像 BtrFS)、任意位置写入(write-anywhere)模式、日志模式,以及超级事务模式。它在 ReiserFS 的基础上做了许多改进,包括更好的基于漫游日志的文件系统日志、对更小的文件更好的支持,以及更快速的目录处理。简单来讲,相比于 ReiserFS,它在众多方面都做了非常大的改进,还有更多的特性可以探讨。
+
+#### 目标用户 ####
+
+Reiser4 适合那些想把一个文件系统应用到多种场景下的用户。可能你想在一台机器上使用写时复制机制,在另一台机器上使用任意位置写入机制,还会在另一台机器上使用超级事务,而你又不希望使用多种不同类型的文件系统来完成这项任务。Reiser4 是适合这种情况的完美方案。
+
+### 结论 ###
+
+Linux 上有许多可用的文件系统。每个文件系统都有其特定的用途,以便特定的用户解决不同的问题。本文关注的是该平台上文件系统的主流选择。毫无疑问,其它的场景下还有一些别的选择。
+
+你在 Linux 上最喜欢的文件系统是什么?在下面(的评论区)告诉我们吧!
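顺带一提,如果想确认自己的系统当前挂载的是哪种文件系统,用一条命令就能查看。下面是一个最小示例(假设:`df` 来自 GNU coreutils、支持 `-T` 选项,且系统装有 awk;在多数发行版上,`findmnt -n -o FSTYPE /` 或 `lsblk -f` 也能得到类似信息):

```shell
#!/bin/sh
# 打印根分区所使用的文件系统类型(例如 ext4、btrfs 或 xfs)
# df -T 的第二行即根分区信息,第二列是文件系统类型
fstype=$(df -T / | awk 'NR==2 {print $2}')
echo "根分区文件系统: $fstype"
```

这只是一个演示性的小脚本,并非本文原文的一部分;在容器等环境中看到 overlay、tmpfs 之类的结果也属正常。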
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/best-file-system-linux/
+
+作者:[Derrik Diener][a]
+译者:[icecoobe](https://github.com/icecoobe)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/derrikdiener/
diff --git a/translated/talk/The history of Android/20 - The history of Android.md b/translated/talk/The history of Android/20 - The history of Android.md
new file mode 100644
index 0000000000..9ef34f1f63
--- /dev/null
+++ b/translated/talk/The history of Android/20 - The history of Android.md
@@ -0,0 +1,93 @@
+安卓编年史
+================================================================================
+![和之前完全不同的市场设计。以上是分类,特色,热门应用以及应用详情页面。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/market-pages.png)
+和之前完全不同的市场设计。以上是分类,特色,热门应用以及应用详情页面。
+Ron Amadeo 供图
+
+这些截图给了我们冰淇淋三明治中新版操作栏的第一印象。几乎所有的应用顶部都有一条栏,带有应用图标,当前界面标题,一些功能按钮,右边还有一个菜单按钮。这个右对齐的菜单按钮被称为“更多操作”,因为里面存放着无法放置到主操作栏的项目。不过更多操作菜单并不是固定不变的,它为操作栏节省了更多的屏幕空间——比如在横屏模式或在平板上时,更多操作菜单的项目会像通常的按钮一样显示在操作栏上。
+
+冰淇淋三明治中新增了“滑动标签页”设计,替换掉了谷歌之前推行的2×3方阵导航屏幕。一个标签页栏放置在了操作栏下方,位于中间的标签显示当前页面,左右两侧的标签则对应当前页面的前后页面。向左右滑动可以切换标签页,或者你可以点击指定页面的标签跳转过去。
+
+应用详情页面有个很赞的设计:在应用截图之后,页面会根据你与那个应用相关的历史动态地重新布局。如果你从来没有安装过该应用,应用描述会优先显示。如果你曾安装过这个应用,第一部分将会是评价栏,它会邀请你评价该应用,或者提醒你上次安装该应用时的评价是什么。之前使用过的应用页面第二部分是“新特性”,因为一个老用户最关心的应该是应用有什么变化。
+
+![最近应用和浏览器与蜂巢中的类似,但是是小号的](http://cdn.arstechnica.net/wp-content/uploads/2014/03/recentbrowser.png)
+最近应用和浏览器与蜂巢中的类似,但是是小号的。
+Ron Amadeo 供图
+
+最近应用的电子风格外观被移除了。缩略图周围的蓝色轮廓线被去除了,同时去除的还有背景上怪异的、不均匀的蓝色光晕。它现在看起来是个中性的界面,在任何时候看起来都很舒适。
+
+浏览器尽了最大的努力把标签页体验带到手机上来。多标签浏览受到了重视,不过浏览器没有浪费宝贵的屏幕空间引入一个标签条,而是在操作栏上引入了一个标签页按钮,它会打开一个类似最近应用的界面,显示你打开的标签页。从功能上来说,这个和之前的浏览器中的“窗口”视图没什么差别。浏览器最佳的改进是菜单中的“请求桌面版站点”选项,这让你可以从默认的移动站点视图切换到正常站点。浏览器展示了谷歌的操作栏设计的灵活性,尽管这里没有左上角的应用图标,功能上来说它和其他的顶栏设计相似。
+
+![Gmail 和 Google Talk ——
它们和蜂巢中的相似,但是更小!](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gmail2.png) +Gmail 和 Google Talk —— 它们和蜂巢中的相似,但是更小! +Ron Amadeo 供图 + +Gmail 和 Google Talk 看起来都像是之前蜂巢中的设计的缩小版,但是有些小调整让它们在小屏幕上表现更佳。Gmail 以双操作栏为特色——一个在屏幕顶部,一个在底部。顶部操作栏显示当前文件夹,账户,以及未读消息数目,点击顶栏可以打开一个导航菜单。底部操作栏有你期望出现在更多操作中的选项。使用双操作栏布局是为了在界面显示更多的按钮,但是在横屏模式下纵向空间有限,双操作栏就是合并成一个顶部操作栏。 + +在邮件视图下,往下滚动屏幕时蓝色栏有“粘性”。它会固定在屏幕顶部,所以你一直可以看到该邮件是谁写的,回复它,或者给它加星标。一旦处于邮件消息界面,底部细长的,深灰色栏会显示你当前在收件箱(或你所在的某个列表)的位置,并且你可以向左或向右滑动来切换到其他邮件。 + +Google Talk 允许你像在 Gmail 中那样左右滑动来切换聊天窗口,但是这里显示栏是在顶部。 + +![新的拨号和来电界面,都是姜饼以来我们还没见过的。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/inc-calls.png) +新的拨号和来电界面,都是姜饼以来我们还没见过的。 +Ron Amadeo 供图 + +因为蜂巢只给平板使用,所以一些界面设计直接超前于姜饼。冰淇淋三明治的新拨号界面就是如此,黑色和蓝色相间,并且使用了可滑动切换的小标签。尽管冰淇淋三明治终于做了对的事情并将电话主体和联系人独立开来,但电话应用还是有它自己的联系人标签。现在有两个地方可以看到你的联系人列表——一个有着暗色主题,另一个有着亮色主题。由于实体搜索按钮不再是硬性要求,底部的按钮栏的语音信息快捷方式被替换为了搜索图标。 + +谷歌几乎就是把来电界面做成了锁屏界面的镜像,这意味着冰淇淋三明治有着一个环状解锁设计。除了通常的接受和挂断选项,圆环的顶部还添加了一个按钮,让你可以挂断来电并给对方发送一条预先定义好的信息。向上滑动并选择一条信息如“现在无法接听,一会回电”,相比于一直响个不停的手机而言这样做的信息交流更加丰富。 + +![蜂巢没有文件夹和信息应用,所以这里是冰淇淋三明治和姜饼的对比。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/thenonmessedupversion.png) +蜂巢没有文件夹和信息应用,所以这里是冰淇淋三明治和姜饼的对比。 +Ron Amadeo 供图 + +现在创建文件夹更加方便了。在姜饼中,你得长按屏幕,选择“文件夹”选项,再点击“新文件夹”。在冰淇淋三明治中,你只要将一个图标拖拽到另一个图标上面,就会自动创建一个文件夹,并包含这两个图标。这简直不能更简单了,比寻找隐藏的长按命令容易多了。 + +设计上也有很大的改进。姜饼使用了一个通用的米黄色文件夹图标,但冰淇淋三明治直接显示出了文件夹中的头三个应用,把它们的图标叠在一起,在外侧画一个圆圈,并将其设置为文件夹图标。打开文件夹容器将自动调整大小以适应文件夹中的应用图标数目,而不是显示一个全屏的,大部分都是空的对话框。这看起来好得多得多。 + +![Youtube 转换到一个更加现代的白色主题,使用了列表视图替换疯狂的 3D 滚动视图。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/youtubes.png) +Youtube 转换到一个更加现代的白色主题,使用了列表视图替换疯狂的 3D 滚动视图。 +Ron Amadeo 供图 + +Youtube 经过了完全的重新设计,看起来没那么像是来自黑客帝国的产物,更像是,嗯,Youtube。它现在就是一个简单的垂直滚动的白色视频列表,就像网站的那样。在你手机上制作视频受到了重视,操作栏的第一个按钮专用于拍摄视频。奇怪的是,不同的界面左上角使用了不同的 Youtube 标志,在水平的 Youtube 标志和方形标志之间切换。 + +Youtube 几乎在所有地方都使用了滑动标签页。它们被放置在主页面以在浏览和账户间切换,放置在视频页面以在评论,介绍和相关视频之间切换。4.0 版本的应用显示出 Google+ Youtube 集成的第一个信号,通常的评分按钮旁边放置了 “+1” 图标。最终 Google+ 会完全占据 Youtube,将评论和作者页面变成 Google+ 活动。 
+
+![冰淇淋三明治试着让事情对所有人都更加简单。这里是数据使用量追踪,提供大量信息的新开发者选项,以及使用向导。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/data.png)
+冰淇淋三明治试着让事情对所有人都更加简单。这里是数据使用量追踪,提供大量信息的新开发者选项,以及使用向导。
+Ron Amadeo 供图
+
+数据使用量允许用户更轻松地追踪和控制他们的数据使用。主页面显示一个月度使用量图表,用户可以设置数据使用警告值或者硬性使用限制以避免超量使用产生费用。所有的这些只需简单地拖动橙色和红色水平限制线在图表上的位置即可。纵向的白色把手允许用户选择图表上的一段指定时间段。在页面底部,选定时间段内的数据使用量又细分到每个应用,所以用户可以选择一个数据使用高峰并轻松地查看哪个应用在消耗大量流量。当流量紧张的时候,更多操作按钮中有个限制所有后台流量的选项。设置之后,只有在前台运行的程序有权连接互联网。
+
+开发者选项通常只有一点点设置选项,但是在冰淇淋三明治中,这部分有非常多选项。谷歌添加了所有类型的屏幕诊断显示浮层来帮助开发者理解他们的应用中发生了什么。你可以看到 CPU 使用率,触摸点位置,还有视图界面更新。还有些选项可以更改系统功能,比如控制动画速度,后台处理,以及 GPU 渲染。
+
+安卓和 iOS 之间最大的区别之一就是应用抽屉界面。在冰淇淋三明治对更加用户友好的追求下,设备第一次初始化启动会启动一个小教程,向用户展示应用抽屉的位置以及如何将应用图标从应用抽屉拖拽到主屏幕。随着实体菜单按键的移除和像这样的改变,安卓 4.0 做了很大的努力变得对新智能手机用户和转换过来的用户更有吸引力。
+
+![“触摸分享”NFC 支持,Google Earth,以及应用信息,让你可以禁用垃圾软件。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-06-03.57.png)
+“触摸分享”NFC 支持,Google Earth,以及应用信息,让你可以禁用垃圾软件。
+
+冰淇淋三明治内置对 [NFC][1] 的完整支持。尽管之前的设备,比如 Nexus S 也拥有 NFC,但得到的支持是有限的,系统并不能利用芯片做太多事情。4.0 添加了一个“Android Beam”功能,两台拥有 NFC 的安卓 4.0 设备可以借此在设备间来回传输数据。NFC 会传输此时屏幕显示内容相关的数据,因此在手机显示一个网页的时候使用该功能,会将该页面传送给另一部手机。你还可以发送联系人信息,方向导航,以及 Youtube 链接。当两台手机放在一起时,屏幕显示会缩小,点击缩小的界面会发送相关信息。
+
+在安卓中,用户不允许删除系统应用,以保证系统完整性。运营商和 OEM 利用该特性并开始将垃圾软件放入系统分区,经常有一些没用的应用存在系统中。安卓 4.0 允许用户禁用任何不能被卸载的应用,意味着该应用还存在于系统中,但是不显示在应用抽屉里并且不能运行。如果用户愿意深究设置项,这给了他们一个简单的途径来拿回手机的控制权。
+
+安卓 4.0 可以看做是现代安卓时代的开始。大部分这时发布的谷歌应用只能在安卓 4.0 及以上版本运行。4.0 还带来了许多谷歌想要好好利用的新 API——至少最初是这样——因此对 4.0 以下版本的支持就有限了。在冰淇淋三明治和蜂巢之后,谷歌真的开始认真对待软件设计。在2012年1月,谷歌[最终发布了][2] *Android Design*,一个教安卓开发者如何创建符合安卓外观和感觉的应用的设计指南站点。这是 iOS 从有第三方应用支持开始就在做的事情,苹果一直严肃地对待应用的设计,不符合指南的应用都被 App Store 拒之门外。安卓三年以来谷歌没有给出任何公共设计规范文档的事实,足以说明事情有多糟糕。但随着在 Duarte 掌控下的安卓设计革命,谷歌终于发布了基本设计需求。
+
+----------
+
+![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
+
+[Ron Amadeo][a] / Ron是Ars Technica的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
+
+[@RonAmadeo][t]
+
+--------------------------------------------------------------------------------
+
+via: 
http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/20/ + +译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[1]:http://arstechnica.com/gadgets/2011/02/near-field-communications-a-technology-primer/ +[2]:http://arstechnica.com/business/2012/01/google-launches-style-guide-for-android-developers/ +[a]:http://arstechnica.com/author/ronamadeo +[t]:https://twitter.com/RonAmadeo diff --git a/translated/talk/The history of Android/21 - The history of Android.md b/translated/talk/The history of Android/21 - The history of Android.md new file mode 100644 index 0000000000..48cd8880be --- /dev/null +++ b/translated/talk/The history of Android/21 - The history of Android.md @@ -0,0 +1,104 @@ +安卓编年史 +================================================================================ +![](http://cdn.arstechnica.net/wp-content/uploads/2014/03/playicons2.png) +Ron Amadeo 供图 + +### Google Play 和直接面向消费者出售设备的回归 ### + +2012年3月6日,谷歌将旗下提供的所有内容统一到 “Google Play”。安卓市场变为了 Google Play 商店,Google Books 变为 Google Play Books,Google Music 变为 Google Play Music,还有 Android Market Movies 变为 Google Play Movies & TV。尽管应用界面的变化不是很大,这四个内容应用都获得了新的名称和图标。在 Play 商店购买的内容会下载到对应的应用中,Play 商店和 Play 内容应用一道给用户提供了易管理的内容体验。 + +Google Play 更新是谷歌第一个大的更新周期外更新。四个自带应用都没有通过系统更新获得升级,它们都是直接通过安卓市场/ Play商店更新的。对单独的应用启用周期外更新是谷歌的重大关注点之一,而能够实现这样的更新,是自姜饼时代开始的工程努力的顶峰。谷歌一直致力于对应用从系统“解耦”,从而让它们能够通过安卓市场/ Play 商店进行分发。 + +尽管一两个应用(主要是地图和 Gmail)之前就在安卓市场上,从这里开始你会看到许多更重大的更新,而其和系统发布无关。系统更新需要 OEM 厂商和运营商的合作,所以很难保证推送到每个用户手上。而 Play 商店更新则完全掌握在谷歌手上,给了谷歌一条直接到达用户设备的途径。因为 Google Play 的发布,安卓市场对自身升级到了 Google Play Store,在那之后,图书,音乐以及电影应用都下发了 Google Play 式的更新。 + +Google Play 系列应用的设计仍然不尽相同。每个应用的外观和功能各有差异,但暂且来说,一个统一的品牌标识是个好的开始。从品牌标识中去除“安卓”字样是很有必要的,因为很多服务是在浏览器中提供的,不需要安卓设备也能使用。 + +2012年4月,谷歌[再次开始通过 Play 商店销售设备][1],恢复在 Nexus One 发布时尝试的直接面向消费者销售的方式。尽管距 Nexus One 
销售结束仅有两年,但网上购物现在更加寻常,在接触到实物之前就购买它,并不像在2010年时听起来那么疯狂。
+
+谷歌也看到了价格敏感的用户在面对 Nexus One 530美元的价格时的反应。第一部销售的设备是无锁的,GSM 版本的 Galaxy Nexus,价格399美元。在那之后,价格变得更低。350美元成为了最近两台 Nexus 设备的入门价,7英寸 Nexus 平板的价格更是只有200美元到220美元。
+
+今天,Play 商店销售八款不同的安卓设备,四款 Chromebook,一款自动调温器,以及许多配件,设备商店已经是谷歌新产品发布的实际地点了。新产品发布总是如此受欢迎,站点往往无法承载如此大的流量,新 Nexus 手机也在几小时内售空。
+
+### 安卓 4.1,果冻豆——Google Now 指明未来 ###
+
+![华硕制造的 Nexus 7,安卓 4.1 的首发设备。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/ASUS_Google_Nexus_7_4_11.jpg)
+华硕制造的 Nexus 7,安卓 4.1 的首发设备。
+
+随着2012年7月安卓 4.1 果冻豆的发布,谷歌的安卓发布节奏进入每六个月一发布的轨道。平台已经成熟,三个月的发布周期就没那么必要了,更长的发布周期也给了 OEM 厂商足够的时间跟上谷歌的节奏。和蜂巢不同,小数点后的版本更新现在也是主要更新,4.1 带来了主要的界面更新和框架变化。
+
+果冻豆最大的变化之一,并且你在截图中看不到的,是“黄油计划”,即谷歌工程师齐心努力让安卓的动画顺畅地跑在 30FPS 上。还有一些核心变化,像垂直同步和三重缓冲,每个动画都经过优化以流畅地绘制。动画和顺滑滚动一直是安卓和 iOS 相比之下的弱点。经过在核心动画框架和单独的应用上的努力,果冻豆让安卓的流畅度大幅接近 iOS。
+
+和果冻豆一起到来的还有 [Nexus][2] 7,由华硕生产的7英寸平板。不像之前主要是横屏模式的 Xoom,Nexus 7 主要以竖屏模式使用,像个大一号的手机。Nexus 7 展现了经过一年半的生态建设,谷歌已经准备好了给平板市场带来一部旗舰设备。和 Nexus One 和 GSM Galaxy Nexus 一样,Nexus 7 直接由谷歌在线销售。尽管那些早先的设备对习惯于运营商补贴的消费者来说拥有惊人的高价,Nexus 7 以仅仅 200 美元的价格推向大众市场。这个价格给你带来一部拥有7英寸 1280x800 分辨率显示屏,四核 1.2GHz Tegra 3 处理器,1GB 内存,8GB 内置存储的设备。Nexus 7 的性价比如此之高,许多人都想知道谷歌到底有没有在其旗舰平板上赚到钱。
+
+更小,更轻,7英寸,这些因素促成了谷歌巨大的成功,并且将谷歌带向了引领行业潮流的位置。一开始制造10英寸 iPad 的苹果,最终也不得不推出和 Nexus 7 相似的 iPad Mini 来应对。
+
+![4.1 的新锁屏设计,壁纸,以及系统按钮新的点击高亮。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/picture.png)
+4.1 的新锁屏设计,壁纸,以及系统按钮新的点击高亮。
+Ron Amadeo 供图
+
+蜂巢引入的电子风格在冰淇淋三明治中有所减少,果冻豆在此之上走得更远。它开始从系统中大范围地移除蓝色。迹象就是系统按钮的点击高亮从蓝色变为了灰色。
+
+![新应用阵容合成图以及新的消息可展开通知面板。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/jb-apps-and-notications.png)
+新应用阵容合成图以及新的消息可展开通知面板。
+Ron Amadeo 供图
+
+通知中心面板完全重制了,这个设计一直沿用到今天的奇巧巧克力(KitKat)。新面板扩展到了屏幕顶部,并且覆盖了状态栏图标,这意味着通知面板打开的时候不再能看到状态栏。时间突出显示在左上角,旁边是日期和设置按钮。清除所有通知按钮,冰淇淋三明治中显示为一个“X”按钮,现在变为阶梯状的按钮,象征着清除所有通知的时候消息交错滑动的动画效果。底部的面板把手从一个小圆换成了一条直线,和面板等宽。所有的排版都发生了变化——通知面板的所有项现在都使用了更大,更细的字体。通知面板是又一个移除了蜂巢和冰淇淋三明治所引入的蓝色元素的界面。除了触摸高亮之外,整个通知面板都是灰色的。
+
+通知面板也引入了新功能。相较于之前的两行设计,现在的通知消息可以展开以显示更多信息。通知消息可以显示最多8行文本,甚至还能在消息底部显示按钮。屏幕截图通知消息底部有个分享按钮,你也可以直接从未接来电通知拨号,或者将一个正在响铃的闹钟小睡,这些都可以在通知面板完成。新通知消息默认展开,但当它们堆叠到一起时会恢复原来的尺寸。在通知消息上双指向下滑动可以展开消息。 + +![新谷歌搜索应用,带有 Google Now 卡片,语音搜索,以及文字搜索。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googlenow.png) +新谷歌搜索应用,带有 Google Now 卡片,语音搜索,以及文字搜索。 +Ron Amadeo 供图 + +果冻豆中不止对安卓而言,也是对谷歌来说最大的特性,是新版谷歌搜索应用。它带来了“Google Now”,一个预测性搜索功能。Google Now 在搜索框下面显示为几张卡片,它会提供谷歌认为你所关心的事物的搜索结果。就比如谷歌地图搜索你最近在桌面电脑查找的地点或日历的约会地点,天气,以及旅行时回家的时间。 + +新版谷歌搜索应用自然可以从谷歌图标启动,但它还可以在任意屏幕从系统栏上滑访问。长按系统栏会唤出一个类似锁屏解锁的环。卡片部分纵向滚动,如果你不想看到它们,可以滑动消除它们。语音搜索是更新的一个大部分。提问不是无脑地输入进谷歌,如果谷歌知道答案,它还会用文本语音转换引擎回答你。传统的文字搜索当然也受支持。只需点击搜索栏然后开始输入即可。 + +谷歌经常将 Google Now 称作“谷歌搜索的未来”。告诉谷歌你想要什么这还不够好。谷歌想要在你之前知道你想要什么。Google Now 用谷歌所有的数据挖掘关于你的知识为你服务,这也是谷歌对抗搜索引擎竞争对手,比如必应,最大的优势所在。智能手机比你拥有的其它设备更了解你,所以该服务在安卓上首次亮相。但谷歌慢慢也将 Google Now 加入 Chrome,最终似乎会到达 Google.com。 + +尽管功能很重要,但同时 Google Now 是谷歌产品有史以来最重要的设计工作也是毋庸置疑的。谷歌搜索应用引入的白色卡片审美将会成为几乎所有谷歌产品设计的基础。今天,卡片风格被用在 Google Play 商店以及所有的 Play 内容应用,Youtube,谷歌地图,Drive,Keep,Gmail,Google+以及其它产品。同时也不限于安卓应用。不少谷歌的桌面站点和 iOS 应用也以此设计为灵感。设计是谷歌历史中的弱项之一,但 Google Now 开始谷歌最终在设计上采取了行动,带来一个统一的,全公司范围的设计语言。 + +![又一个 Youtube 重新设计,信息密度有所下降。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/yotuube.png) +又一个 Youtube 的重新设计,信息密度有所下降。 +Ron Amadeo 供图 + +又一个版本,又一个 Youtube 的重新设计。这次列表视图主要基于略缩图,大大的图片占据了屏幕的大部分。信息密度在新列表设计中有所下降。之前 Youtube 每屏大约能显示6个项目,现在只能显示3个。 + +Youtube 是首批在应用左侧加入滑动抽屉的应用之一,该特性会成为谷歌应用的标准设计风格。抽屉中有你的账户的链接和订阅频道,这让谷歌可以去除页面顶部标签页设计。 + +![Google Play 服务的职责以及安卓的剩余部分职责。](http://cdn.arstechnica.net/wp-content/uploads/2013/08/playservicesdiagram2.png) +Google Play 服务的职责以及安卓的剩余部分职责。 +Ron Amadeo 供图 + +### Google Play Services—fragmentation and making OS versions (nearly) obsolete ### +### Google Play 服务——碎片化和让系统版本(几乎)过时 ### + +碎片化那时候看起来这并不是个大问题,但2012年12月,Google Play 服务 1.0 面向所有安卓2.2及以上版本的手机推出。它添加了一些 Google+ API 和对 OAuth 2.0 的支持。 + +尽管这个升级听起来很无聊,但 Google Play 服务最终会成长为安卓整体的一部分。Google Play 服务扮演着正常应用和安卓系统的中间角色,使得谷歌可以升级或替换一些核心组件,并在不发布新安卓版本的前提下添加 
API。
+
+有了 Play 服务,谷歌有了直接接触安卓手机核心部分的能力,而不用通过 OEM 更新以及运营商批准的过程。谷歌使用 Play 服务添加了全新的位置系统,恶意软件扫描,远程擦除功能,以及新的谷歌地图 API,所有的这一切都不用通过发布一个系统更新实现。正如我们在姜饼部分的结尾提到的,多亏了 Play 服务里这些“可移植的”API 实现,姜饼仍然能够下载现代版本的 Play 商店和许多其他的谷歌应用。
+
+另一个巨大的益处是与安卓用户基础的兼容性。最新版本的安卓系统要经过很长时间才能到达大多数用户手中,这意味着与最新版本系统绑定的 API 在大多数用户升级之前对开发者来说没有多大意义。Google Play 服务兼容冻酸奶及以上版本,换句话说就是 99% 的活跃设备,并且更新可以直接通过 Play 商店推送到手机上。通过将 API 包含在 Google Play 服务中而不是安卓系统中,谷歌可以在一周内将新 API 推送到几乎所有用户手中。这对许多版本碎片化引起的问题来说是个[伟大的解决方案][3]。
+
+----------
+
+![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
+
+[Ron Amadeo][a] / Ron是Ars Technica的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
+
+[@RonAmadeo][t]
+
+--------------------------------------------------------------------------------
+
+via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/21/
+
+译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[1]:http://arstechnica.com/gadgets/2012/04/unlocked-samsung-galaxy-nexus-can-now-be-purchased-from-google/
+[2]:http://arstechnica.com/gadgets/2012/07/divine-intervention-googles-nexus-7-is-a-fantastic-200-tablet/
+[3]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/
+[a]:http://arstechnica.com/author/ronamadeo
+[t]:https://twitter.com/RonAmadeo
diff --git a/translated/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md b/translated/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md
new file mode 100644
index 0000000000..5d81ed9cdf
--- /dev/null
+++ b/translated/talk/yearbook2015/20151208 6 creative ways to use ownCloud.md
@@ -0,0 +1,95 @@
+GHLandy Translated
+
+使用 ownCloud 的六个创意方法
+================================================================================
+![Yearbook cover
2015](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/osdc-open-source-yearbook-lead1-inc0335020sw-201511-01.png)
+
+图片来源:Opensource.com
+
+[ownCloud][1] 是一个可以自行托管的开源文件同步和共享服务。就像 Dropbox、Google Drive、Box 等“大牌”的同类服务一样,ownCloud 可以让你访问自己的文件、日历、联系人和其他数据。你可以在自己的设备之间同步任意数据(或者其中一部分),并分享文件给其他人。然而,ownCloud 并非只能运行在它的开发商的服务器上,试试[将 ownCloud 运行在其他服务器上][2]吧。
+
+现在,一起来看看使用 ownCloud 能完成的六件创意事情。其中一些是因为 ownCloud 的开源才得以实现,而另外的则是 ownCloud 自身特有的功能。
+
+### 1. 可扩展的 ownCloud 派集群 ###
+
+由于 ownCloud 是开源的,你可以选择将它运行在自己的服务器中,或者从你信任的服务器提供商那里获取空间——没必要将你的文件存储在大公司的服务器中,谁知道他们会把你的文件存储到哪里去。[点击此处查看部分 ownCloud 服务商][3],或者下载该服务软件到你的虚拟主机中[搭建自己的服务器][4]。
+
+![](https://opensource.com/sites/default/files/images/life-uploads/banana-pi-owncloud-cluster.jpg)
+
+拍摄:Jörn Friedrich Dreyer. [CC BY-SA 4.0.][5]
+
+我们见过的最具创意的事情就是组建[香蕉派集群][6]和[树莓派集群][7]。ownCloud 的可扩展性通常是指支持成千上万的用户,而这些人把它带往了另一个方向:将大量的小型系统集群在一起,创建出运行速度非常快的 ownCloud。酷毙了!
+
+### 2. 密码同步 ###
+
+为了让 ownCloud 更容易扩展,开发者将它模块化,并提供了 [ownCloud app store][8]。你可以在里边搜索音乐、视频播放器、日历、联系人、生产力应用、游戏、应用框架等。
+
+要从 200 多个可用应用中只挑选一个来介绍是件非常困难的事,但密码管理绝对是个不错的特性。ownCloud app store 里边至少有三款这种应用:[Passwords][9]、[Secure Container][10] 和 [Passman][11]。
+
+![](https://opensource.com/sites/default/files/images/life-uploads/password.png)
+
+### 3. 随心所欲地存储文件 ###
+
+外部存储功能允许你将现有的存储接入 ownCloud,让你轻松访问存储在 FTP、WebDAV、Amazon S3,甚至 Dropbox 和 Google Drive 中的文件。
+
+注:youtube 视频
+
+
+Dropbox 喜欢创建自己的“围墙式花园”,只有注册用户之间才可以进行协作;假如你通过 Google Drive 来分享文件,你的同伴也必须要有一个 Google 账号才可以访问你的分享。通过 ownCloud 的外部存储功能,你可以轻松打破这些壁垒。
+
+最有创意的用法就是把 Google Drive 和 Dropbox 添加为外部存储。这样你就可以无缝地使用它们,并通过不需要对方注册账户的链接,把文件分享给和你协作的人。
+
+### 4. 
让他人向你上传文件 ###
+
+由于 ownCloud 的开源,人们可以不受公司需求的限制向它贡献代码,增加新特性。贡献者关注的往往是安全和隐私,所以 ownCloud 引入某些特性常常比别人要早,比如通过密码保护公共链接和[设置失效期限][12]。
+
+现在,ownCloud 可以配置分享链接的读写权限了,这就是说链接的访问者可以无缝地编辑你分享给他们的文件(不管是否有密码保护),或者在不用注册其他服务、提供私人数据的情况下将文件上传到服务器。
+
+注:youtube 视频
+
+
+当有人想给你分享大体积的文件时,这个特性就非常有用了。相比于对方上传文件到第三方站点、然后给你发送一个链接、你再去下载文件(通常需要登录),ownCloud 只需要对方把文件上传到你提供的分享文件夹,你就可以马上获取到文件了。
+
+### 5. 免费却又安全的存储空间 ###
+
+之前已经强调过,我们的代码贡献者最关注的就是安全和隐私,这就是 ownCloud 中有用于加密和解密存储数据的应用的原因。
+
+通过 ownCloud 将你的文件存储到 Dropbox 或者 Google Drive,会违背控制数据以及保持数据隐私的原则,但是加密应用刚好可以解决这个问题。在发送数据给这些提供商前进行加密,并在取回数据的时候进行解密,你的数据就会很安全。
+
+### 6. 在你的可控范围内分享文件 ###
+
+作为开源项目,ownCloud 没有必要自建“围墙式花园”。于是有了联邦云共享:这个[由 ownCloud 开发并发布][13]的协议,使不同的文件同步和共享服务器可以彼此通信,并能够安全地传输文件。联邦云共享本身有一个有趣的故事:[22 所德国大学][14]想要为自己的 500,000 名学生建立一个庞大的云服务,但是每个大学都想控制自己学生的数据。于是,需要一个有创意的解决方案:也就是联邦云共享。该解决方案让学生保持连接,使得他们可以无缝地协同工作。同时,每个大学的系统管理员保持着对自己学生所创建文件的控制权,如限制存储,或者限制什么人、什么文件以及如何共享。
+
+注:youtube 视频
+
+
+并且,这项了不起的技术并没有局限于德国的大学之间,每个 ownCloud 用户都能在自己的用户设置中找到自己的[联邦云 ID][15],并将之分享给同伴。
+
+现在你明白了吧。仅这六个方法,ownCloud 就能让人们完成独特而有趣的事情。而让这一切成为可能的,正是 ownCloud 的开源——它旨在解放你的数据。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/15/12/6-creative-ways-use-owncloud
+
+作者:[Jos Poortvliet][a]
+译者:[GHLandy](https://github.com/GHLandy)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jospoortvliet
+[1]:https://owncloud.com/
+[2]:https://blogs.fsfe.org/mk/new-stickers-and-leaflets-no-cloud-and-e-mail-self-defense/
+[3]:https://owncloud.org/providers
+[4]:https://owncloud.org/install/#instructions-server
+[5]:https://creativecommons.org/licenses/by-sa/4.0/
+[6]:http://www.owncluster.de/
+[7]:https://christopherjcoleman.wordpress.com/2013/01/05/host-your-owncloud-on-a-raspberry-pi-cluster/
+[8]:https://apps.owncloud.com/
+[9]:https://apps.owncloud.com/content/show.php/Passwords?content=170480
+[10]:https://apps.owncloud.com/content/show.php/Secure+Container?content=167268
+[11]:https://apps.owncloud.com/content/show.php/Passman?content=166285
+[12]:https://owncloud.com/owncloud45-community/
+[13]:http://karlitschek.de/2015/08/announcing-the-draft-federated-cloud-sharing-api/
+[14]:https://owncloud.com/customer/sciebo/
+[15]:https://owncloud.org/federation/
diff --git a/translated/talk/yearbook2015/20151208 6 useful LibreOffice extensions.md b/translated/talk/yearbook2015/20151208 6 useful LibreOffice extensions.md
new file mode 100644
index 0000000000..b8d4f1fa1c
--- /dev/null
+++ b/translated/talk/yearbook2015/20151208 6 useful LibreOffice extensions.md
@@ -0,0 +1,78 @@
+GHLandy Translated
+
+LibreOffice 中的六大实用扩展组件
+================================================================================
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/yearbook2015-osdc-lead-2.png)
+
+图片来源:Opensource.com
+
+LibreOffice 是最好的免费办公套件,并在所有的主要 Linux 发行版中得到应用。尽管 LibreOffice 已经拥有了大多数的特性,它仍然可以通过特定的附加组件(即扩展)来增加功能。
+
+LibreOffice 的扩展组件网站是 [extensions.libreoffice.org][1]。扩展组件只是一些工具,可以独立于主安装程序进行添加或者移除,以便增加新功能或者让已有功能更容易使用。
+
+### 1. 多格式保存组件 ###
+
+多格式保存组件可以根据用户的设置,同时将文档保存为开放文档格式、微软 Office 文档格式或者 PDF 文档。在你将微软 Office 文档格式转为标准的[开放文档格式][2]的时候,这个组件就很实用了,因为该组件同时提供了两种选择:互操作性较强的 ODF 文档格式,以及能与所有用户保持兼容的微软 Office 文档格式。这让管理员的文档迁移过程变得更具弹性、更易于管理。
+
+**[下载 多格式保存组件][3]**
+
+![Multiformatsave extension](https://opensource.com/sites/default/files/images/business-uploads/multiformatsave.png)
+
+### 2. 
Writer 中可交替使用的查找与替换组件(交替搜索) ###
+
+该组件向 Writer 中的查找与替换功能添加了许多新特性:可以查找和替换一段或多段文本;一次批量执行多个查找和替换;搜索书签、批注、文本字段、交叉引用和参考标记的内容、名称或标记,并插入它们;搜索和插入脚注和尾注;通过名称来搜索表格、图像和文本框对象;搜索手动分页符和分栏符,并设置或取消它们;根据光标位置搜索相似格式的文本。还可以保存/加载查找和替换参数,并在多个同时打开的文件中执行批处理。
+
+**[下载 Writer 中可交替使用的查找与替换组件(交替搜索)][4]**
+
+![Alternative Find&amp;Replace add-on](https://opensource.com/sites/default/files/images/business-uploads/alternativefindreplace.png)
+
+### 3. Pepito 清除组件 ###
+
+在 LibreOffice 中,Pepito 清除组件主要用来快速清除并修复旧扫描件、导入的 PDF 以及各种电子文本文档的格式错误。点击 LibreOffice 工具栏中的 Pepito 图标,用户可以打开一个窗口,它会分析文档并呈现文档的错误类型。当你将 PDF 文档转换为 ODF 文档时,这个工具就非常有用了,它会自动清除转换过程中出现的错误。
+
+**[下载 Pepito 清除组件][5]**
+
+![Pepito cleaner screenshot](https://opensource.com/sites/default/files/images/business-uploads/pepitocleaner.png)
+
+### 4. ImpressRunner 组件 ###
+
+Impress Runner 是将 [Impress][6] 文档转换成自动播放文件的扩展组件。该组件会添加两个图标,用以设置或移除自动播放功能;我们也可以通过编辑“文件 | 属性 | 自定义属性”菜单,在四个文本域的第一个中输入 autostart 来手动实现同样的设置。在组织会议与活动、且幻灯片无人主持播放的时候,这个扩展组件就非常有用。
+
+**[下载 ImpressRunner 组件][7]**
+
+### 5. 导出为图像组件 ###
+
+导出为图像组件在 Impress 和 [Draw][8] 的文件菜单里添加了一个入口——“导出为图像…”,主要用于将所有的幻灯片和页面导出成 JPG、PNG、GIF、BMP 和 TIFF 等图像格式,并且允许用户自定义导出图像的名称、大小以及其他参数。
+
+**[下载 导出为图像组件][9]**
+
+![Export as images extension](https://opensource.com/sites/default/files/images/business-uploads/exportasimages.png)
+
+### 6. 
Anaphraseus 组件 ###
+
+Anaphraseus 是一个 CAT(Computer-Aided Translation,计算机辅助翻译)工具,用来创建、管理并使用双语翻译记忆库。Anaphraseus 是一组 LibreOffice 宏,可以作为扩展组件使用,也可以用于独立文档。最开始,开发者设计 Anaphraseus 是为了配合 Wordfast 格式使用,但现在它可以将文件导入或者导出成 TMX 格式。其主要特性有:文本分段、在翻译记忆库中模糊搜索、术语识别,以及 TMX 导入导出(OmegaT 翻译记忆格式)。
+
+**[下载 Anaphraseus 组件][10]**
+
+![Anaphraseus screenshot](https://opensource.com/sites/default/files/images/business-uploads/anaphraseus.png)
+
+你是否也有自己喜欢和推荐的 LibreOffice 组件呢?在评论中告诉大家吧。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/business/15/12/6-useful-libreoffice-extensions
+
+作者:[Italo Vignoli][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/italovignoli
+[1]:http://extensions.libreoffice.org/
+[2]:http://www.opendocumentformat.org/
+[3]:http://extensions.libreoffice.org/extension-center/multisave-1
+[4]:http://extensions.libreoffice.org/extension-center/alternative-dialog-find-replace-for-writer
+[5]:http://pepitoweb.altervista.org/pepito_cleaner/index.php
+[6]:https://www.libreoffice.org/discover/impress/
+[7]:http://extensions.libreoffice.org/extension-center/impressrunner
+[8]:https://www.libreoffice.org/discover/draw/
+[9]:http://extensions.libreoffice.org/extension-center/export-as-images
+[10]:http://anaphraseus.sourceforge.net/
diff --git a/translated/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md b/translated/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md
deleted file mode 100644
index 1a1fe6b35a..0000000000
--- a/translated/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md
+++ /dev/null
@@ -1,202 +0,0 @@
-如何在 Linux 中从 NetworkManager 切换为 systemd-network
-How to switch from NetworkManager to systemd-networkd on Linux
-================================================================================ -在 Linux 世界里, [systemd][1] 的采用一直是激烈争论的主题,它的支持者和反对者之间的战火仍然在燃烧。到了今天,大部分主流 Linux 发行版都已经采用了 systemd 作为默认初始化系统。 -In the world of Linux, adoption of [systemd][1] has been a subject of heated controversy, and the debate between its proponents and critics is still going on. As of today, most major Linux distributions have adopted systemd as a default init system. - -正如其作者所说,作为一个 “从未完成、从未完善、但一直追随技术进步” 的系统,systemd 已经不只是一个初始化进程,它被设计为一个更广泛的系统以及服务管理平台,这个;平台包括了不断增长的核心系统进程、库和工具的生态系统。 -Billed as a "never finished, never complete, but tracking progress of technology" by its author, systemd is not just the init daemon, but is designed as a more broad system and service management platform which encompasses the growing ecosystem of core system daemons, libraries and utilities. - -**systemd** 的其中一部分是 **systemd-networkd**,它负责 systemd 生态中的网络配置。使用 systemd-networkd,你可以为网络设备配置基础的 DHCP/静态 IP 网络。它还可以配置虚拟网络功能,例如网桥、隧道和 VLAN。systemd-networkd 目前还不能直接支持无线网络,但你可以使用 wpa_supplicant 服务配置无线适配器,然后用 **systemd-networkd** 挂钩起来。 -One of many additions to **systemd** is **systemd-networkd**, which is responsible for network configuration within the systemd ecosystem. Using systemd-networkd, you can configure basic DHCP/static IP networking for network devices. It can also configure virtual networking features such as bridges, tunnels or VLANs. Wireless networking is not directly handled by systemd-networkd, but you can use wpa_supplicant service to configure wireless adapters, and then hook it up with **systemd-networkd**. 
- -在很多 Linux 发行版中,NetworkManager 仍然作为默认的网络配置管理器。和 NetworkManager 相比,**systemd-networkd** 仍处于活跃的开发状态,还缺少一些功能。例如,它还不能像 NetworkManager 那样能在任何时候让你的计算机在多种接口之间保持连接。它还没有为高级脚本提供 ifup/ifdown 钩子函数。但是,systemd-networkd 和其它 systemd 组件(例如用于域名解析的 **resolved**、NTP 的**timesyncd**,用于命名的 udevd)结合的非常好。随着时间增长,**systemd-networkd**只会在 systemd 环境中扮演越来越重要的角色。 -On many Linux distributions, NetworkManager has been and is still used as a default network configuration manager. Compared to NetworkManager, **systemd-networkd** is still under active development, and missing features. For example, it does not have NetworkManager's intelligence to keep your computer connected across various interfaces at all times. It does not provide ifup/ifdown hooks for advanced scripting. Yet, systemd-networkd is integrated well with the rest of systemd components (e.g., **resolved** for DNS, **timesyncd** for NTP, udevd for naming), and the role of **systemd-networkd** may only grow over time in the systemd environment. - -如果你对 **systemd-networkd** 的进步感到高兴,从 NetworkManager 切换到 systemd-networkd 是值得你考虑的一件事。如果你强烈反对 systemd,对 NetworkManager 或[基础网络服务][2]感到很满意,那也很好。 -If you are happy with the way **systemd** is evolving, one thing you can consider is to switch from NetworkManager to systemd-networkd. If you are feverishly against systemd, and perfectly happy with NetworkManager or [basic network service][2], that is totally cool. - -但对于那些想尝试 systemd-networkd 的人,可以继续看下去,在这篇指南中学会在 Linux 中怎么从 NetworkManager 切换到 systemd-networkd。 -But for those of you who want to try out systemd-networkd, you can read on, and find out in this tutorial how to switch from NetworkManager to systemd-networkd on Linux. - -### 需求 ### -### Requirement ### - -systemd 210 或更高版本提供了 systemd-networkd。因此诸如 Debian 8 "Jessie" (systemd 215)、 Fedora 21 (systemd 217)、 Ubuntu 15.04 (systemd 219) 或更高版本的 Linux 发行版和 systemd-networkd 兼容。 -systemd-networkd is available in systemd version 210 and higher. 
Thus distributions like Debian 8 "Jessie" (systemd 215), Fedora 21 (systemd 217), Ubuntu 15.04 (systemd 219) or later are compatible with systemd-networkd. - -对于其它发行版,在开始下一步之前先检查一下你的 systemd 版本。 -For other distributions, check the version of your systemd before proceeding. - - $ systemctl --version - -### 从 NetworkManager 切换到 Systemd-networkd ### -### Switch from Network Manager to Systemd-Networkd ### - -从 NetworkManager 切换到 systemd-networkd 其实非常简答(反过来也一样)。 -It is relatively straightforward to switch from Network Manager to systemd-networkd (and vice versa). - -首先,按照下面这样先停用 NetworkManager 服务,然后启用 systemd-networkd。 -First, disable Network Manager service, and enable systemd-networkd as follows. - - $ sudo systemctl disable NetworkManager - $ sudo systemctl enable systemd-networkd - -你还要启用 **systemd-resolved** 服务,systemd-networkd用它来进行域名解析。该服务还实现了一个缓存式 DNS 服务器。 -You also need to enable **systemd-resolved** service, which is used by systemd-networkd for network name resolution. This service implements a caching DNS server. - - $ sudo systemctl enable systemd-resolved - $ sudo systemctl start systemd-resolved - -一旦启动,**systemd-resolved** 就会在 /run/systemd 目录下某个地方创建它自己的 resolv.conf。但是,把 DNS 解析信息存放在 /etc/resolv.conf 是更普遍的做法,很多应用程序也会依赖于 /etc/resolv.conf。因此为了兼容性,按照下面的方式创建一个到 /etc/resolv.conf 的符号链接。 -Once started, **systemd-resolved** will create its own resolv.conf somewhere under /run/systemd directory. However, it is a common practise to store DNS resolver information in /etc/resolv.conf, and many applications still rely on /etc/resolv.conf. Thus for compatibility reason, create a symlink to /etc/resolv.conf as follows. 
- - $ sudo rm /etc/resolv.conf - $ sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf - -### 用 systemd-networkd 配置网络连接 ### -### Configure Network Connections with Systemd-networkd ### - -要用 systemd-networkd 配置网络服务,你必须指定带.network 扩展名的配置信息文本文件。这些网络配置文件保存到 /etc/systemd/network 并从这里加载。当有多个文件时,systemd-networkd 会按照词汇顺序一个个加载并处理。 -To configure network devices with systemd-networkd, you must specify configuration information in text files with .network extension. These network configuration files are then stored and loaded from /etc/systemd/network. When there are multiple files, systemd-networkd loads and processes them one by one in lexical order. - -首先创建 /etc/systemd/network 目录。 -Let's start by creating a folder /etc/systemd/network. - - $ sudo mkdir /etc/systemd/network - -#### DHCP 网络 #### -#### DHCP Networking #### - -首先来配置 DHCP 网络。对于此,先要创建下面的配置文件。文件名可以任意,但记住文件是按照词汇顺序处理的。 -Let's configure DHCP networking first. For this, create the following configuration file. The name of a file can be arbitrary, but remember that files are processed in lexical order. - - $ sudo vi /etc/systemd/network/20-dhcp.network - ----------- - - [Match] - Name=enp3* - - [Network] - DHCP=yes - -正如你上面看到的,每个网络配置文件包括了一个多多个 “sections”,每个 “section”都用 [XXX] 开头。每个 section 包括了一个或多个键值对。[Match] 部分决定这个配置文件配置哪个(些)网络设备。例如,这个文件匹配所有名称以 ens3 开头的网络设备(例如 enp3s0、 enp3s1、 enp3s2 等等)对于匹配的接口,然后启用 [Network] 部分指定的 DHCP 网络配置。 -As you can see above, each network configuration file contains one or more "sections" with each section preceded by [XXX] heading. Each section contains one or more key/value pairs. The [Match] section determine which network device(s) are configured by this configuration file. For example, this file matches any network interface whose name starts with ens3 (e.g., enp3s0, enp3s1, enp3s2, etc). For matched interface(s), it then applies DHCP network configuration specified under [Network] section. 
- -### 静态 IP 网络 ### -### Static IP Networking ### - -如果你想给网络设备分配一个静态 IP 地址,那就新建下面的配置文件。 -If you want to assign a static IP address to a network interface, create the following configuration file. - - $ sudo vi /etc/systemd/network/10-static-enp3s0.network - ----------- - - [Match] - Name=enp3s0 - - [Network] - Address=192.168.10.50/24 - Gateway=192.168.10.1 - DNS=8.8.8.8 - -正如你猜测的, enp3s0 接口地址会被指定为 192.168.10.50/24,默认网关是 192.168.10.1, DNS 服务器是 8.8.8.8。这里微妙的一点是,接口名 enp3s0 事实上也匹配了之前 DHCP 配置中定义的模式规则。但是,根据词汇顺序,文件 "10-static-enp3s0.network" 在 "20-dhcp.network" 之前被处理,对于 enp3s0 接口静态配置比 DHCP 配置有更高的优先级。 -As you can guess, the interface enp3s0 will be assigned an address 192.168.10.50/24, a default gateway 192.168.10.1, and a DNS server 8.8.8.8. One subtlety here is that the name of an interface enp3s0, in facts, matches the pattern rule defined in the earlier DHCP configuration as well. However, since the file "10-static-enp3s0.network" is processed before "20-dhcp.network" according to lexical order, the static configuration takes priority over DHCP configuration in case of enp3s0 interface. - -一旦你完成了创建配置文件,重启 systemd-networkd 服务或者重启机器。 -Once you are done with creating configuration files, restart systemd-networkd service or reboot. - - $ sudo systemctl restart systemd-networkd - -运行以下命令检查服务状态: -Check the status of the service by running: - - $ systemctl status systemd-networkd - $ systemctl status systemd-resolved - -![](https://farm1.staticflickr.com/719/21010813392_76abe123ed_c.jpg) - -### 用 systemd-networkd 配置虚拟网络设备 ### -### Configure Virtual Network Devices with Systemd-networkd ### - -**systemd-networkd** 同样允许你配置虚拟网络设备,例如网桥、VLAN、隧道、VXLAN、绑定等。你必须在用 .netdev 作为扩展名的文件中配置这些虚拟设备。 -**systemd-networkd** also allows you to configure virtual network devices such as bridges, VLANs, tunnel, VXLAN, bonding, etc. You must configure these virtual devices in files with .netdev extension. - -这里我展示了如何配置一个桥接接口。 -Here I'll show how to configure a bridge interface. 
- -#### Linux 网桥 #### -#### Linux Bridge #### - -如果你想创建一个 Linux 网桥(br0) 并把物理接口(eth1) 添加到网桥,你可以新建下面的配置。 -If you want to create a Linux bridge (br0) and add a physical interface (eth1) to the bridge, create the following configuration. - - $ sudo vi /etc/systemd/network/bridge-br0.netdev - ----------- - - [NetDev] - Name=br0 - Kind=bridge - -然后按照下面这样用 .network 文件配置网桥接口 br0 和从接口 eth1。 -Then configure the bridge interface br0 and the slave interface eth1 using .network files as follows. - - $ sudo vi /etc/systemd/network/bridge-br0-slave.network - ----------- - - [Match] - Name=eth1 - - [Network] - Bridge=br0 - ----------- - - $ sudo vi /etc/systemd/network/bridge-br0.network - ----------- - - [Match] - Name=br0 - - [Network] - Address=192.168.10.100/24 - Gateway=192.168.10.1 - DNS=8.8.8.8 - -最后,重启 systemd-networkd。 -Finally, restart systemd-networkd: - - $ sudo systemctl restart systemd-networkd - -你可以用 [brctl 工具][3] 来验证是否创建了网桥 br0。 -You can use [brctl tool][3] to verify that a bridge br0 has been created. - -### 总结 ### -### Summary ### - -当 systemd 誓言成为 Linux 的系统管理器时,有类似 systemd-networkd 的东西来管理网络配置也就不足为奇。但是在现阶段,systemd-networkd 看起来更适合于网络配置相对稳定的服务器环境。对于桌面/笔记本环境,它们有多种临时有线/无线接口,NetworkManager 仍然是比较好的选择。 -When systemd promises to be a system manager for Linux, it is no wonder something like systemd-networkd came into being to manage network configurations. At this stage, however, systemd-networkd seems more suitable for a server environment where network configurations are relatively stable. For desktop/laptop environments which involve various transient wired/wireless interfaces, NetworkManager may still be a preferred choice. - -对于想进一步了解 systemd-networkd 的人,可以参考官方[man 手册][4]了解完整的支持列表和关键点。 -For those who want to check out more on systemd-networkd, refer to the official [man page][4] for a complete list of supported sections and keys. 
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/switch-from-networkmanager-to-systemd-networkd.html - -作者:[Dan Nanni][a] -译者:[ictlyh](http://mutouxiaogui.cn/blog) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:http://xmodulo.com/use-systemd-system-administration-debian.html -[2]:http://xmodulo.com/disable-network-manager-linux.html -[3]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html -[4]:http://www.freedesktop.org/software/systemd/man/systemd.network.html diff --git a/translated/tech/20150831 Linux workstation security checklist.md b/translated/tech/20150831 Linux workstation security checklist.md deleted file mode 100644 index 11155a48e0..0000000000 --- a/translated/tech/20150831 Linux workstation security checklist.md +++ /dev/null @@ -1,487 +0,0 @@ -Linux平台安全备忘录 -================================================================================ -这是一组Linux基金会自己系统管理员的推荐规范。所有Linux基金会的雇员都是远程工作,我们使用这套指导方针确保系统管理员的系统通过核心安全需求,降低我们平台成为攻击目标的风险。 - -即使你的系统管理员不用远程工作,很有可能的是,很多人的工作是在一个便携的笔记本上完成的,或者在业余时间或紧急时刻他们在工作平台中部署自己的家用系统。不论发生何种情况,你都能对应这个规范匹配到你的环境中。 - -这绝不是一个详细的“工作站加固”文档,可以说这是一个努力避免大多数明显安全错误导致太多不便的一组规范的底线。你可能阅读这个文档会认为它的方法太偏执,同时另一些人也许会认为这仅仅是一些肤浅的研究。安全就像在高速公路上开车 -- 任何比你开的慢的都是一个傻瓜,然而任何比你开的快的人都是疯子。这个指南仅仅是一些列核心安全规则,既不详细又不是替代经验,警惕,和常识。 - -每一节都分为两个部分: - -- 核对适合你项目的需求 -- 随意列出关心的项目,解释为什么这么决定 - -## 严重级别 - -在清单的每一个项目都包括严重级别,这些是我们希望能帮助指导你的决定: - -- _(关键)_ 项目应该在考虑列表上被明确的重视。如果不采取措施,将会导致你的平台安全出现高风险。 -- _(中等)_ 项目将改善你的安全形态,但不是很重要,尤其是如果他们太多的干涉你的工作流程。 -- _(低等)_ 项目也许会改善整体安全性,但是在便利权衡下也许并不值得。 -- _(可疑)_ 留作感觉会明显完善我们平台安全的项目,但是可能会需要大量的调整与操作系统交互的方式。 - -记住,这些只是参考。如果你觉得这些严重级别不能表达你的工程对安全承诺,正如你所见你应该调整他们为你合适的。 - -## 选择正确的硬件 - -我们禁止管理员使用一个特殊供应商或者一个特殊的型号,所以在选择工作系统时这部分是核心注意事项。 - -### 清单 - -- [ ] 系统支持安全启动 _(关键)_ -- [ ] 系统没有火线,雷电或者扩展卡接口 _(中等)_ -- [ ] 系统有TPM芯片 _(低)_ - -### 注意事项 - -#### 安全引导 - 
-尽管它是有争议的性质,安全引导提供了对抗很多针对平台的攻击(Rootkits, "Evil Maid,"等等),没有介绍太多额外的麻烦。它将不会停止真正专用的攻击者,加上有很大程度上,站点安全机构有办法应对它(可能通过设计),但是拥有安全引导总比什么都没有强。 - -作为选择,你也许部署了[Anti Evil Maid][1]提供更多健全的保护,对抗安全引导支持的攻击类型,但是它需要更多部署和维护的工作。 - -#### 系统没有火线,雷电或者扩展卡接口 - -火线是一个标准,故意的,允许任何连接设备完全直接内存访问你的系统([查看维基百科][2])。雷电接口和扩展卡同样有问题,虽然一些后来部署的雷电接口试图限制内存访问的范围。如果你没有这些系统端口,那是最好的,但是它并不严重,他们通常可以通过UEFI或内核本身禁用。 - -#### TPM芯片 - -可信平台模块(TPM)是主板上的一个与核心处理器单独分开的加密芯片,他可以用来增加平台的安全性(比如存储完整磁盘加密密钥),不过通常不用在日常平台操作。最多,这是个很好的存在,除非你有特殊需要使用TPM增加你平台安全性。 - -## 预引导环境 - -这是你开始安装系统前的一系列推荐规范。 - -### 清单 - -- [ ] 使用UEFI引导模式(不是传统BIOS)_(关键)_ -- [ ] 进入UEFI配置需要使用密码 _(关键)_ -- [ ] 使用安全引导 _(关键)_ -- [ ] 启动系统需要UEFI级别密码 _(低)_ - -### 注意事项 - -#### UEFI和安全引导 - -UEFI尽管有缺点,还是提供很多传统BIOS没有的好功能,比如安全引导。大多数现代的系统都默认使用UEFI模式。 - -UEFI配置模式密码要确保密码强度。注意,很多厂商默默地限制了你使用密码长度,所以对比长口令你也许应该选择高熵短密码(更多地密码短语看下面)。 - -基于你选择的Linux分支,你也许会也许不会跳过额外的圈子,以导入你的发行版的安全引导键,才允许你启动发行版。很多分支已经与微软合作大多数厂商给他们已发布的内核签订密钥,这已经是大多数厂商公认的了,因此为了避免问题你必须处理密钥导入。 - -作为一个额外的措施,在允许某人得到引导分区然后尝试做一些不好的事之前,让他们输入密码。为了防止肩窥,这个密码应该跟你的UEFI管理密码不同。如果你关闭启动太多,你也许该选择别把心思费在这上面,当你已经进入LUKS密码,这将为您节省一些额外的按键。 - -## 发行版选择注意事项 - -很有可能你会坚持一个广泛使用的发行版如Fedora,Ubuntu,Arch,Debian,或他们的一个类似分支。无论如何,这是你选择使用发行版应该考虑的。 - -### 清单 - -- [ ] 拥有一个强健的MAC/RBAC系统(SELinux/AppArmor/Grsecurity) _(关键)_ -- [ ] 公开的安全公告 _(关键)_ -- [ ] 提供及时的安全补丁 _(关键)_ -- [ ] 提供密码验证的包 _(关键)_ -- [ ] 完全支持UEFI和安全引导 _(关键)_ -- [ ] 拥有健壮的原生全磁盘加密支持 _(关键)_ - -### 注意事项 - -#### SELinux,AppArmor,和GrSecurity/PaX - -强制访问控制(MAC)或者基于角色的访问控制(RBAC)是一个POSIX系统遗留的基于用户或组的安全机制延伸。这些天大多数发行版已经绑定MAC/RBAC系统(Fedora,Ubuntu),或通过提供一种机制一个可选的安装后的步骤来添加它(Gentoo,Arch,Debian)。很明显,强烈建议您选择一个预装MAC/RBAC系统的分支,但是如果你对一个分支情有独钟,没有默认启用它,装完系统后应计划配置安装它。 - -应该坚决避免使用不带任何MAC/RBAC机制的分支,像传统的POSIX基于用户和组的安全在当今时代应该算是考虑不足。如果你想建立一个MAC/RBAC工作站,通常会考虑AppArmor和PaX,他们比SELinux更容易学习。此外,在一个工作站上,有很少或者没有额外的监听用户运行的应用造成的最高风险,GrSecurity/PaX_可能_会比SELinux提供更多的安全效益。 - -#### 发行版安全公告 - -大多数广泛使用的分支都有一个机制发送安全公告到他们的用户,但是如果你对一些机密感兴趣,查看开发人员是否有记录机制提醒用户安全漏洞和补丁。缺乏这样的机制是一个重要的警告信号,这个分支不够成熟,不能被视为主要管理工作站。 - -#### 及时和可靠的安全更新 - 
-多数常用的发行版提供的定期安全更新,但为确保关键包更新及时提供是值得检查的。避免使用分支和"社区重建"的原因是,由于不得不等待上游分支先发布它,他们经常延迟安全更新。 - -你如果找到一个在安装包,更新元数据,或两者上不使用加密签名的发行版,将会处于困境。这么说,常用的发行版多年前就已经知道这个基本安全的意义(Arch,我正在看你),所以这也是值得检查的。 - -#### 发行版支持UEFI和安全引导 - -检查发行版支持UEFI和安全引导。查明它是否需要导入额外的密钥或是否要求启动内核有一个已经被系统厂商信任的密钥签名(例如跟微软达成合作)。一些发行版不支持UEFI或安全启动,但是提供了替代品来确保防篡改或防破坏引导环境([Qubes-OS][3]使用Anti Evil Maid,前面提到的)。如果一个发行版不支持安全引导和没有机制防止引导级别攻击,还是看看别的吧。 - -#### 全磁盘加密 - -全磁盘加密是保护静止数据要求,大多数发行版都支持。作为一个选择方案,系统自加密硬件驱动也许用来(通常通过主板TPM芯片实现)和提供类似安全级别加更快的选项,但是花费也更高。 - -## 发行版安装指南 - -所有发行版都是不同的,但是也有一些一般原则: - -### 清单 - -- [ ] 使用健壮的密码全磁盘加密(LUKS) _(关键)_ -- [ ] 确保交换分区也加密了 _(关键)_ -- [ ] 确保引导程序设置了密码(可以和LUKS一样) _(关键)_ -- [ ] 设置健壮的root密码(可以和LUKS一样) _(关键)_ -- [ ] 使用无特权账户登录,管理员组的一部分 _(关键)_ -- [ ] 设置强壮的用户登录密码,不同于root密码 _(关键)_ - -### 注意事项 - -#### 全磁盘加密 - -除非你正在使用自加密硬件设备,配置你的安装程序给磁盘完整加密用来存储你的数据与你的系统文件很重要。通过自动安装的cryptfs循环文件加密用户目录还不够简单(我正在看你,老版Ubuntu),这并没有给系统二进制文件或交换分区提供保护,它可能包含大量的敏感数据。推荐的加密策略是加密LVM设备,所以在启动过程中只需要一个密码。 - -`/boot`分区将一直保持非加密,当引导程序需要引导内核前,调用LUKS/dm-crypt。内核映像本身应该用安全引导加密签名检查防止被篡改。 - -换句话说,`/boot`应该是你系统上唯一没有加密的分区。 - -#### 选择好密码 - -现代的Linux系统没有限制密码口令长度,所以唯一的限制是你的偏执和倔强。如果你要启动你的系统,你将大概至少要输入两个不同的密码:一个解锁LUKS,另一个登陆,所以长密码将会使你老的很快。最好从丰富或混合的词汇中选择2-3个单词长度,容易输入的密码。 - -优秀密码例子(是的,你可以使用空格): -- nature abhors roombas -- 12 in-flight Jebediahs -- perdon, tengo flatulence - -如果你更喜欢输入口令句,你也可以坚持使用无词汇密码但最少要10-12个字符长度。 - -除非你有人身安全的担忧,写下你的密码,并保存在一个远离你办公桌的安全的地方才合适。 - -#### Root,用户密码和管理组 - -我们建议,你的root密码和你的LUKS加密使用同样的密码(除非你共享你的笔记本给可信的人,他应该能解锁设备,但是不应该能成为root用户)。如果你是笔记本电脑的唯一用户,那么你的root密码与你的LUKS密码不同是没有意义的安全优势。通常,你可以使用同样的密码在你的UEFI管理,磁盘加密,和root登陆 -- 知道这些任意一个都会让攻击者完全控制您的系统,在单用户工作站上使这些密码不同,没有任何安全益处。 - -你应该有一个不同的,但同样强健的常规用户帐户密码用来每天工作。这个用户应该是管理组用户(例如`wheel`或者类似,根据分支),允许你执行`sudo`来提升权限。 - -换句话说,如果在你的工作站只有你一个用户,你应该有两个独特的,强健的,同样的强壮的密码需要记住: - -**管理级别**,用在以下区域: - -- UEFI管理 -- 引导程序(GRUB) -- 磁盘加密(LUKS) -- 工作站管理(root用户) - -**User-level**, used for the following: -**用户级别**,用在以下: - -- 用户登陆和sudo -- 密码管理器的主密码 - -很明显,如果有一个令人信服的理由他们所有可以不同。 - -## 安装后的加强 - 
-安装后的安全性加强在很大程度上取决于你选择的分支,所以在一个通用的文档中提供详细说明是徒劳的,例如这一个。然而,这里有一些你应该采取的步骤: - -### 清单 - -- [ ] 在全体范围内禁用火线和雷电模块 _(关键)_ -- [ ] 检查你的防火墙,确保过滤所有传入端口 _(关键)_ -- [ ] 确保root邮件转发到一个你可以查看到的账户 _(关键)_ -- [ ] 检查以确保sshd服务默认情况下是禁用的 _(中等)_ -- [ ] 建立一个系统自动更新任务,或更新提醒 _(中等)_ -- [ ] 配置屏幕保护程序在一段时间的不活动后自动锁定 _(中等)_ -- [ ] 建立日志监控 _(中等)_ -- [ ] 安装使用rkhunter _(低等)_ -- [ ] 安装一个入侵检测系统 _(偏执)_ - -### 注意事项 - -#### 黑名单模块 - -将火线和雷电模块列入黑名单,增加一行到`/etc/modprobe.d/blacklist-dma.conf`文件: - - blacklist firewire-core - blacklist thunderbolt - -重启后的模块将被列入黑名单。这样做是无害的,即使你没有这些端口(但也不做任何事)。 - -#### Root邮件 - -默认的root邮件只是存储在系统基本上没人读过。确保你设置了你的`/etc/aliases`来转发root邮件到你确实能读取的邮箱,否则你也许错过了重要的系统通知和报告: - - # Person who should get root's mail - root: bob@example.com - -编辑后这些后运行`newaliases`,然后测试它确保已投递,像一些邮件供应商将拒绝从没有或者不可达的域名的邮件。如果是这个原因,你需要配置邮件转发直到确实可用。 - -#### 防火墙,sshd,和监听进程 - -默认的防火墙设置将取决于您的发行版,但是大多数都允许`sshd`端口连入。除非你有一个令人信服的合理理由允许连入ssh,你应该过滤出来,禁用sshd守护进程。 - - systemctl disable sshd.service - systemctl stop sshd.service - -如果你需要使用它,你也可以临时启动它。 - -通常,你的系统不应该有任何侦听端口除了响应ping。这将有助于你对抗网络级别的零日漏洞利用。 - -#### 自动更新或通知 - -建议打开自动更新,除非你有一个非常好的理由不这么做,如担心自动更新将使您的系统无法使用(这是发生在过去,所以这种恐惧并非杞人忧天)。至少,你应该启用自动通知可用的更新。大多数发行版已经有这个服务自动运行,所以你不需要做任何事。查阅你的发行版文档查看更多。 - -你应该尽快应用所有明显的勘误,即使这些不是特别贴上“安全更新”或有关联的CVE代码。所有错误都潜在的安全漏洞和新的错误,比起坚持旧的,已知的错误,未知错误通常是更安全的策略。 - -#### 监控日志 - -你应该对你的系统上发生了什么很感兴趣。出于这个原因,你应该安装`logwatch`然后配置它每夜发送在你的系统上发生的任何事情的活动报告。这不会预防一个专业的攻击者,但是一个好安全网功能。 - -注意,许多systemd发行版将不再自动安装一个“logwatch”需要的syslog服务(由于systemd依靠自己的分类),所以你需要安装和启用“rsyslog”来确保使用logwatch之前你的/var/log不是空。 - -#### Rkhunter和IDS - -安装`rkhunter`和一个入侵检测系统(IDS)像`aide`或者`tripwire`将不会有用,除非你确实理解他们如何工作采取必要的步骤来设置正确(例如,保证数据库在额外的媒介,从可信的环境运行检测,记住执行系统更新和配置更改后要刷新数据库散列,等等)。如果你不愿在你的工作站执行这些步骤调整你如何工作,这些工具将带来麻烦没有任何实在的安全益处。 - -我们强烈建议你安装`rkhunter`并每晚运行它。它相当易于学习和使用,虽然它不会阻止一个复杂的攻击者,它也能帮助你捕获你自己的错误。 - -## 个人工作站备份 - -工作站备份往往被忽视,或无计划的做,常常是不安全的方式。 - -### 清单 - -- [ ] 设置加密备份工作站到外部存储 _(关键)_ -- [ ] 使用零认知云备份的备份工具 _(中等)_ - -### 注意事项 - -#### 全加密备份存到外部存储 - 
-把全部备份放到一个移动磁盘中比较方便,不用担心带宽和流速(在这个时代,大多数供应商仍然提供显著的不对称的上传/下载速度)。不用说,这个移动硬盘本身需要加密(又一次,通过LIKS),或者你应该使用一个备份工具建立加密备份,例如`duplicity`或者它的GUI版本,`deja-dup`。我建议使用后者并使用随机生成的密码,保存到你的密码管理器中。如果你带上笔记本去旅行,把这个磁盘留在家,以防你的笔记本丢失或被窃时可以找回备份。 - -除了你的家目录外,你还应该备份`/etc`目录和处于鉴定目的的`/var/log`目录。 - -首先是,避免拷贝你的家目录到任何非加密存储上,甚至是快速的在两个系统上移动文件,一旦完成你肯定会忘了清除它,暴露个人隐私或者安全信息到监听者手中 -- 尤其是把这个存储跟你的笔记本防盗同一个包里。 - -#### 零认知站外备份选择性 - -站外备份也是相当重要的,是否可以做到要么需要你的老板提供空间,要么找一家云服务商。你可以建一个单独的duplicity/deja-dup配置,只包括重要的文件,以免传输大量你不想备份的数据(网络缓存,音乐,下载等等)。 - -作为选择,你可以使用零认知备份工具,例如[SpiderOak][5],它提供一个卓越的Linux GUI工具还有实用的特性,例如在多个系统或平台间同步内容。 - -## 最佳实践 - -下面是我们认为你应该采用的最佳实践列表。它当然不是非常详细的,而是试图提供实用的建议,一个可行的整体安全性和可用性之间的平衡 - -### 浏览 - -毫无疑问,在你的系统上web浏览器将是最大、最容易暴露的攻击层面的软件。它是专门下载和执行不可信,恶意代码的一个工具。它试图采用沙箱和代码卫生处理等多种机制保护你免受这种危险,但是在之前多个场合他们都被击败了。你应该学到浏览网站是最不安全的活动在你参与的任何一天。 - -有几种方法可以减少浏览器的影响,但真正有效的方法需要你操作您的工作站将发生显著的变化。 - -#### 1: 实用两个不同的浏览器 - -这很容易做到,但是只有很少的安全效益。并不是所有浏览器都妥协给攻击者完全自由访问您的系统 -- 有时他们只能允许一个读取本地浏览器存储,窃取其他标签的活动会话,捕获输入浏览器,例如,实用两个不同的浏览器,一个用在工作/高安全站点,另一个用在其他,有助于防止攻击者请求整个饼干罐的小妥协。主要的不便是两个不同的浏览器消耗内存大量。 - -我们建议: - -##### 火狐用来工作和高安全站点 - -使用火狐登陆工作有关的站点,应该额外关心的是确保数据如cookies,会话,登陆信息,打键次数等等,明显不应该落入攻击者手中。除了少数的几个网站,你不应该用这个浏览器访问其他网站。 - -你应该安装下面的火狐扩展: - -- [ ] NoScript _(关键)_ - - NoScript阻止活动内容加载,除非在用户白名单里的域名。跟你默认浏览器比它使用起来很麻烦(可是提供了真正好的安全效益),所以我们建议只在开启了它的浏览器上访问与工作相关的网站。 - -- [ ] Privacy Badger _(关键)_ - - EFF的Privacy Badger将在加载时预防大多数外部追踪器和广告平台,在这些追踪站点影响你的浏览器时将有助于避免妥协(追踪着和广告站点通常会成为攻击者的目标,因为他们会迅速影响世界各地成千上万的系统)。 - -- [ ] HTTPS Everywhere _(关键)_ - - 这个EFF开发的扩展将确保你访问的大多数站点都在安全连接上,甚至你点击的连接使用的是http://(有效的避免大多数的攻击,例如[SSL-strip][7])。 - -- [ ] Certificate Patrol _(中等)_ - - 如果你正在访问的站点最近改变了他们的TLS证书 -- 特别是如果不是接近失效期或者现在使用不同的证书颁发机构,这个工具将会警告你。它有助于警告你是否有人正尝试中间人攻击你的连接,但是产生很多无害的假的类似情况。 - -你应该让火狐成为你的默认打开连接的浏览器,因为NoScript将在加载或者执行时阻止大多数活动内容。 - -##### 其他一切都用Chrome/Chromium - -Chromium开发者在增加很多很好的安全特性方面比火狐强(至少[在Linux上][6])),例如seccomp沙箱,内核用户名空间等等,这担当一个你访问网站和你其他系统间额外的隔离层。Chromium是流开源项目,Chrome是Google所有的基于它构建的包(使用它输入时要非常谨慎任,何你不想让谷歌知道的事情都不要使用它)。 - -有人推荐你在Chrome上也安装**Privacy 
Badger**和**HTTPS Everywhere**扩展,然后给他一个不同的主题,从火狐指出这是你浏览器“不信任的站点”。 - -#### 2: 使用两个不同浏览器,一个在专用的虚拟机里 - -这有点像上面建议的做法,除了您将添加一个额外的步骤,通过快速访问协议运行专用虚拟机内部Chrome,允许你共享剪贴板和转发声音事件(如,Spice或RDP)。这将在不可信的浏览器和你其他的工作环境之间添加一个优秀的隔离层,确保攻击者完全危害你的浏览器将不得不另外打破VM隔离层,以达到系统的其余部分。 - -这是一个出奇可行的结构,但是需要大量的RAM和高速处理器可以处理增加的负载。这还需要一个重要的奉献的管理员需要相应地调整自己的工作实践。 - -#### 3: 通过虚拟化完全隔离你的工作和娱乐环境 - -看[Qubes-OS项目][3],它致力于通过划分你的应用到完全独立分开的VM中,提供高安全工作环境。 - -### 密码管理器 - -#### 清单 - -- [ ] 使用密码管理器 _(关键)_ -- [ ] 不相关的站点使用不同的密码 _(关键)_ -- [ ] 使用支持团队共享的密码管理器 _(中等)_ -- [ ] 给非网站用户使用一个单独的密码管理器 _(偏执)_ - -#### 注意事项 - -使用好的,唯一的密码对你的团队成员来说应该是非常关键的需求。证书盗取一直在发生 — 要么通过中间计算机,盗取数据库备份,远程站点利用,要么任何其他的打算。证书从不应该通过站点被重用,尤其是关键的应用。 - - -##### 浏览器中的密码管理器 - -每个浏览器有一个比较安全的保存密码机制,通过供应商的机制可以同步到云存储同事用户提供密码保证数据加密。无论如何,这个机制有严重的劣势: - - -1. 不能跨浏览器工作 -2. 不提供任何与团队成员共享凭证的方法 - -也有一些良好的支持,免费或便宜的密码管理器,很好的融合到多个浏览器,跨平台工作,提供小组共享(通常是支付服务)。可以很容易地通过搜索引擎找到解决方案。 - -##### 独立的密码管理器 - -任何密码管理器都有一个主要的缺点,与浏览器结合,事实上是应用的一部分,这样最有可能被入侵者攻击。如果这让你不舒服(应该这样),你应该选择两个不同的密码管理器 -- 一个集成在浏览器中用来保存网站密码,一个作为独立运行的应用。后者可用于存储高风险凭证如root密码,数据库密码,其他shell账户凭证等。 - -有这样的工具可以特别有效的在团腿成员间共享超级用户的凭据(服务器根密码,ILO密码,数据库管理密码,引导装载程序密码等等)。 - -这几个工具可以帮助你: - -- [KeePassX][8],2版中改善了团队共享 -- [Pass][9],它使用了文本文件和PGP并与git结合 -- [Django-Pstore][10],他是用GPG在管理员之间共享凭据 -- [Hiera-Eyaml][11],如果你已经在你的平台中使用了Puppet,可以便捷的追踪你的服务器/服务凭证,像你的Hiera加密数据的一部分。 - -### 加固SSH和PGP私钥 - -个人加密密钥,包括SSH和PGP私钥,都是你工作站中最重要的物品 -- 攻击将在获取到感兴趣的东西,这将允许他们进一步攻击你的平台或冒充你为其他管理员。你应该采取额外的步骤,确保你的私钥免遭盗窃。 - -#### 清单 - -- [ ] 强壮的密码用来保护私钥 _(关键)_ -- [ ] PGP的主密码保存在移动存储中 _(中等)_ -- [ ] 身份验证、签名和加密注册表子项存储在智能卡设备 _(中等)_ -- [ ] SSH配置为使用PGP认证密钥作为ssh私钥 _(中等)_ - -#### 注意事项 - -防止私钥被偷的最好方式是使用一个智能卡存储你的加密私钥,不要拷贝到工作平台上。有几个厂商提供支持OpenPGP的设备: - -- [Kernel Concepts][12],在这里可以采购支持OpenPGP的智能卡和USB读取器,你应该需要一个。 -- [Yubikey NEO][13],这里提供OpenPGP功能的智能卡还提供很多很酷的特性(U2F, PIV, HOTP等等)。 - -确保PGP主密码没有存储在工作平台也很重要,只有子密码在使用。主密钥只有在登陆其他的密钥和创建子密钥时使用 — 不经常发生这种操作。你可以照着[Debian的子密钥][14]向导来学习如何移动你的主密钥到移动存储和创建子密钥。 - -你应该配置你的gnupg代理作为ssh代理然后使用基于智能卡PGP认证密钥作为你的ssh私钥。我们公布了一个细节向导如何使用智能卡读取器或Yubikey NEO。 - 
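作为上文"配置 gnupg 代理作为 ssh 代理"的一个配置示意(具体配置项和路径可能随 GnuPG 版本和发行版而异,请以上文提到的细节向导为准),大致需要在 gpg-agent 的配置文件里打开 ssh 支持:

```
# ~/.gnupg/gpg-agent.conf(示意)
# 让 gpg-agent 同时充当 ssh-agent,这样智能卡上的 PGP 认证子密钥
# 就可以直接用于 SSH 认证
enable-ssh-support
```

之后还需要让 shell 会话中的 SSH_AUTH_SOCK 环境变量指向 gpg-agent 提供的 ssh 套接字,并重启代理使配置生效。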
-如果你不想那么麻烦,最少要确保你的PGP私钥和你的SSH私钥有个强健的密码,这将让攻击者很难盗取使用它们。 - -### 工作站上的SELinux - -如果你使用的发行版绑定了SELinux(如Fedora),这有些如何使用它的建议,让你的工作站达到最大限度的安全。 - -#### 清单 - -- [ ] 确保你的工作站强制使用SELinux _(关键)_ -- [ ] 不要盲目的执行`audit2allow -M`,经常检查 _(关键)_ -- [ ] 从不 `setenforce 0` _(中等)_ -- [ ] 切换你的用户到SELinux用户`staff_u` _(中等)_ - -#### 注意事项 - -SELinux是一个强制访问控制(MAC)为POSIX许可核心功能扩展。它是成熟,强健,自从它推出以来已经有很长的路了。不管怎样,许多系统管理员现在重复过时的口头禅“关掉它就行。” - -话虽如此,在工作站上SELinux还是限制了安全效益,像很多应用都要作为一个用户自由的运行。开启它有益于给网络提供足够的保护,有可能有助于防止攻击者通过脆弱的后台服务提升到root级别的权限用户。 - -我们的建议是开启它并强制使用。 - -##### 从不`setenforce 0` - -使用`setenforce 0`短时间内把SELinux设置为许可模式,但是你应该避免这样做。其实你是想查找一个特定应用或者程序的问题,实际上这样是把全部系统的SELinux关闭了。 - -你应该使用`semanage permissive -a [somedomain_t]`替换`setenforce 0`,只把这个程序放入许可模式。首先运行`ausearch`查看那个程序发生问题: - - ausearch -ts recent -m avc - -然后看下`scontext=`(SELinux的上下文)行,像这样: - - scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023 - ^^^^^^^^^^^^^^ - -这告诉你程序`gpg_pinentry_t`被拒绝了,所以你想查看应用的故障,应该增加它到许可模式: - - semange permissive -a gpg_pinentry_t - -这将允许你使用应用然后收集AVC的其他部分,你可以连同`audit2allow`写一个本地策略。一旦完成你就不会看到新的AVC的拒绝,你可以从许可中删除程序,运行: - - semanage permissive -d gpg_pinentry_t - -##### 用SELinux的用户staff_r,使用你的工作站 - -SELinux附带的本地角色实现基于角色的用户帐户禁止或授予某些特权。作为一个管理员,你应该使用`staff_r`角色,这可以限制访问很多配置和其他安全敏感文件,除非你先执行`sudo`。 - -默认,用户作为`unconfined_r`被创建,你可以运行大多数应用,没有任何(或只有一点)SELinux约束。转换你的用户到`staff_r`角色,运行下面的命令: - - usermod -Z staff_u [username] - -你应该退出然后登陆激活新角色,届时如果你运行`id -Z`,你将会看到: - - staff_u:staff_r:staff_t:s0-s0:c0.c1023 - -在执行`sudo`时,你应该记住增加一个额外的标准告诉SELinux转换到"sysadmin"角色。你想要的命令是: - - sudo -i -r sysadm_r - -届时`id -Z`将会显示: - - staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023 - -**警告**:在进行这个切换前你应该舒服的使用`ausearch`和`audit2allow`,当你作为`staff_r`角色运行时你的应用有可能不再工作了。写到这里时,以下流行的应用已知在`staff_r`下没有做策略调整就不会工作: - -- Chrome/Chromium -- Skype -- VirtualBox - -切换回`unconfined_r`,运行下面的命令: - - usermod -Z unconfined_u [username] - -然后注销再重新回到舒服的区域。 - -## 延伸阅读 - -IT安全的世界是一个没有底的兔子洞。如果你想深入,或者找到你的具体发行版更多的安全特性,请查看下面这些链接: - -- [Fedora Security 
Guide](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html) -- [CESG Ubuntu Security Guide](https://www.gov.uk/government/publications/end-user-devices-security-guidance-ubuntu-1404-lts) -- [Debian Security Manual](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html) -- [Arch Linux Security Wiki](https://wiki.archlinux.org/index.php/Security) -- [Mac OSX Security](https://www.apple.com/support/security/guides/) - -## 许可 - -这项工作在[创作共用授权4.0国际许可证][0]许可下。 - --------------------------------------------------------------------------------- - -via: https://github.com/lfit/itpol/blob/master/linux-workstation-security.md#linux-workstation-security-list - -作者:[mricon][a] -译者:[wyangsun](https://github.com/wyangsun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://github.com/mricon -[0]: http://creativecommons.org/licenses/by-sa/4.0/ -[1]: https://github.com/QubesOS/qubes-antievilmaid -[2]: https://en.wikipedia.org/wiki/IEEE_1394#Security_issues -[3]: https://qubes-os.org/ -[4]: https://xkcd.com/936/ -[5]: https://spideroak.com/ -[6]: https://code.google.com/p/chromium/wiki/LinuxSandboxing -[7]: http://www.thoughtcrime.org/software/sslstrip/ -[8]: https://keepassx.org/ -[9]: http://www.passwordstore.org/ -[10]: https://pypi.python.org/pypi/django-pstore -[11]: https://github.com/TomPoulton/hiera-eyaml -[12]: http://shop.kernelconcepts.de/ -[13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/ -[14]: https://wiki.debian.org/Subkeys -[15]: https://github.com/lfit/ssh-gpg-smartcard-config diff --git a/translated/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md b/translated/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md deleted file mode 100644 index 52d63d6ac9..0000000000 --- a/translated/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md +++ /dev/null @@ -1,111 +0,0 @@ 
-在浏览器上使用Docker -================================================================================ -Docker 越来越流行了。在一个容器里面而不是虚拟机里运行一个完整的操作系统的这种是一个非常棒的技术和想法。docker 已经通过节省工作时间来拯救了千上万的系统管理员和开发人员。这是一个开源技术,提供一个平台来把应用程序当作容器来打包、分发、共享和运行,而不去关注主机上运行的操作系统是什么。它没有开发语言、框架或打包系统的限制,并且可以在任何时间、任何地点运行,从小型计算机到高端服务器都可以。运行docker容器和管理他们可能会花费一点点困难和时间,所以现在有一款基于web 的应用程序-DockerUI,可以让管理和运行容器变得很简单。DockerUI 是一个对那些不熟悉Linux 命令行担忧很想运行容器话程序的人很有帮助。DockerUI 是一个开源的基于web 的应用程序,它最著名的是它华丽的设计和简单的用来运行和管理docker 的简单的操作界面。 - -下面会介绍如何在Linux 上安装配置DockerUI。 - -### 1. 安装docker ### - -首先,我们需要安装docker。我们得感谢docker 的开发者,让我们可以简单的在主流linux 发行版上安装docker。为了安装docker,我们得在对应的发行版上使用下面的命令。 - -#### Ubuntu/Fedora/CentOS/RHEL/Debian #### - -docker 维护者已经写了一个非常棒的脚本,用它可以在Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 和Debian 8.x 这几个linux 发行版上安装docker。这个脚本可以识别出我们的机器上运行的linux 的发行版本,然后将需要的源库添加到文件系统、更新本地的安装源目录,最后安装docker 和依赖库。要使用这个脚本安装docker,我们需要在root 用户或者sudo 权限下运行如下的命令, - - # curl -sSL https://get.docker.com/ | sh - -#### OpenSuse/SUSE Linux 企业版 #### - -要在运行了OpenSuse 13.1/13.2 或者 SUSE Linux Enterprise Server 12 的机器上安装docker,我们只需要简单的执行zypper 命令。运行下面的命令就可以安装最新版本的docker: - - # zypper in docker - -#### ArchLinux #### - -docker 存在于ArchLinux 的官方源和社区维护的AUR 库。所以在ArchLinux 上我们有两条路来安装docker。使用官方源安装,需要执行下面的pacman 命令: - - # pacman -S docker - -如果要从社区源 AUR 安装docker,需要执行下面的命令: - - # yaourt -S docker-git - -### 2. 启动 ### - -安装好docker 之后,我们需要运行docker 监护程序,然后再能运行并管理docker 容器。我们需要使用下列命令来确定docker 监护程序已经安装并运行了。 - -#### 在 SysVinit 上#### - - # service docker start - -#### 在Systemd 上#### - - # systemctl start docker - -### 3. 
安装DockerUI ### - -安装DockerUI 比安装docker 要简单很多。我们仅仅需要懂docker 注册表上拉取dockerui ,然后在容器里面运行。要完成这些,我们只需要简单的执行下面的命令: - - # docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui - -![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png) - -在上面的命令里,dockerui 使用的默认端口是9000,我们需要使用`-p` 命令映射默认端口。使用`-v` 标志我们可以指定docker socket。如果主机使用了SELinux那么就得使用`--privileged` 标志。 - -执行完上面的命令后,我们要检查dockerui 容器是否运行了,或者使用下面的命令检查: - - # docker ps - -![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png) - -### 4. 拉取docker镜像 ### - -现在我们还不能直接使用dockerui 拉取镜像,所以我们需要在命令行下拉取docker 镜像。要完成这些我们需要执行下面的命令。 - - # docker pull ubuntu - -![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png) - -上面的命令将会从docker 官方源[Docker Hub][1]拉取一个标志为ubuntu 的镜像。类似的我们可以从Hub 拉取需要的其它镜像。 - -### 4. 管理 ### - -启动了dockerui 容器之后,我们快乐的用它来执行启动、暂停、终止、删除和其它dockerui 提供的其他用来操作docker 容器的命令。第一,我们需要在web 浏览器里面打开dockerui:在浏览器里面输入http://ip-address:9000 或者 http://mydomain.com:9000,具体要根据你的系统配置。默认情况下登录不需啊哟认证,但是可以配置我们的web 服务器来要求登录认证。要启动一个容器,我们得得到包含我们要运行的程序的景象。 - -#### 创建 #### - -创建容器我们需要在Images 页面,点击我们想创建的容器的镜像id。然后点击`Create` 按钮,接下来我们就会被要求输入创建容器所需要的属性。这些都完成之后,我们需要点击按钮`Create` 完成最终的创建。 - -![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png) - -#### 中止 #### - -要停止一个容器,我们只需要跳转到`Containers` 页面,然后选取要停止的容器。然后再Action 的子菜单里面按下Stop 就行了。 - -![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png) - -#### 暂停与恢复 #### - -要暂停一个容器,只需要简单的选取目标容器,然后点击Pause 就行了。恢复一个容器只需要在Actions 的子菜单里面点击Unpause 就行了。 - -#### 删除 #### - -类似于我们上面完成的任务,杀掉或者删除一个容器或镜像也是很简单的。只需要检查、选择容器或镜像,然后点击Kill 或者Remove 就行了。 - -### 结论 ### - -dockerui 使用了docker 远程API 完成了一个很棒的管理docker 容器的web 界面。它的开发者们已经使用纯HTML 和JS 
设计、开发了这个应用。目前这个程序还处于开发中,并且还有大量的工作要完成,所以我们并不推荐将它应用在生产环境。它可以帮助用户简单的完成管理容器和镜像,而且只需要一点点工作。如果想参与、贡献dockerui,我们可以访问它们的[Github 仓库][2]。如果有问题、建议、反馈,请写在下面的评论框,这样我们就可以修改或者更新我们的内容。谢谢。 - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/ - -作者:[Arun Pyasi][a] -译者:[oska874](https://github.com/oska874) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:https://hub.docker.com/ -[2]:https://github.com/crosbymichael/dockerui/ diff --git a/translated/tech/20151028 10 Tips for 10x Application Performance.md b/translated/tech/20151028 10 Tips for 10x Application Performance.md new file mode 100644 index 0000000000..55cd24bd9a --- /dev/null +++ b/translated/tech/20151028 10 Tips for 10x Application Performance.md @@ -0,0 +1,279 @@ +10 Tips for 10x Application Performance + +将程序性能提高十倍的10条建议 +================================================================================ + +提高web 应用的性能从来没有比现在更关键过。网络经济的比重一直在增长;全球经济超过5% 的价值是在因特网上产生的(数据参见下面的资料)。我们的永远在线、超级连接的世界意味着用户的期望值也处于历史上的最高点。如果你的网站不能及时的响应,或者你的app 不能无延时的工作,用户会很快的投奔到你的竞争对手那里。 + +举一个例子,一份亚马逊十年前做过的研究可以证明,甚至在那个时候,网页加载时间每减少100毫秒,收入就会增加1%。另一个最近的研究特别强调一个事实,即超过一半的网站拥有着在调查中说他们会因为应用程序性能的问题流失用户。 + +网站到底需要多块呢?对于页面加载,每增加1秒钟就有4%的用户放弃使用。顶级的电子商务站点的页面在第一次交互时可以做到1秒到3秒加载时间,而这是提供最高舒适度的速度。很明显这种利害关系对于web 应用来说很高,而且在不断的增加。 + +想要提高效率很简单,但是看到实际结果很难。要在旅途上帮助你,这篇blog 会给你提供10条最高可以10倍的提升网站性能的建议。这是系列介绍提高应用程序性能的第一篇文章,包括测试充分的优化技术和一点NGIX 的帮助。这个系列给出了潜在的提高安全性的帮助。 + +### Tip #1: 通过反向代理来提高性能和增加安全性 ### + +如果你的web 应用运行在单个机器上,那么这个办法会明显的提升性能:只需要添加一个更快的机器,更好的处理器,更多的内存,更快的磁盘阵列,等等。然后新机器就可以更快的运行你的WordPress 服务器, Node.js 程序, Java 程序,以及其它程序。(如果你的程序要访问数据库服务器,那么这个办法还是很简单:添加两个更快的机器,以及在两台电脑之间使用一个更快的链路。) + +问题是,机器速度可能并不是问题。web 程序运行慢经常是因为计算机一直在不同的任务之间切换:和用户的成千上万的连接,从磁盘访问文件,运行代码,等等。应用服务器可能会抖动-内存不足,将内存数据写会磁盘,以及多个请求等待一个任务完成,如磁盘I/O。 + 
+你可以采取一个完全不同的方案来替代升级硬件:添加一个反向代理服务器来分担部分任务。[反向代理服务器][1] 位于运行应用的机器的前端,用来处理网络流量。只有反向代理服务器是直接连接到互联网的;它和应用服务器的通讯都是通过一个快速的内部网络完成的。
+
+使用反向代理服务器可以将应用服务器从等待用户与web 程序交互中解放出来,这样应用服务器就可以专注于为反向代理服务器构建网页,让后者将其传输到互联网上。应用服务器不需要再等待客户端的响应,从而能以接近最优的性能水平运行。
+
+添加反向代理服务器还可以给你的web 服务器部署带来灵活性。比如,某种类型的服务器已经超载了,那么就可以轻松的添加另一个相同的服务器;如果某个机器宕机了,也可以很容易的替换它。
+
+因为反向代理带来的灵活性,它也是一些性能加速功能的必要前提,比如:
+
+- **负载均衡** (参见 [Tip #2][2]) – 负载均衡运行在反向代理服务器上,用来将流量均衡分配给一批应用服务器。有了合适的负载均衡,你就可以在完全不改变程序的前提下添加应用服务器。
+- **缓存静态文件** (参见 [Tip #3][3]) – 可以直接读取的文件,比如图像或者代码,可以保存在反向代理服务器上,然后直接发给客户端,这样就可以提高速度、分担应用服务器的负载,让应用运行的更快。
+- **网站安全** – 反向代理服务器可以提高网站安全性,并快速的发现和响应攻击,保证应用服务器处于被保护状态。
+
+NGINX 软件是一个专门设计的反向代理服务器,也包含了上述的多种功能。NGINX 使用事件驱动的方式处理请求,这比传统的服务器更加有效率。NGINX Plus 添加了更多高级的反向代理特性,比如应用程序[健康度检查][4]、专门的请求路由、高级缓冲和相关支持。
+
+![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png)
+
+### Tip #2: 添加负载均衡 ###
+
+添加一个[负载均衡服务器][5] 是一个相当简单的提高性能和网站安全性的方法。使用负载均衡将流量分配到多个服务器,以替代只使用一个巨大且高性能的web 服务器。即使程序写的不好,或者在扩容方面有困难,仅是使用负载均衡服务器就可以很好的提高用户体验。
+
+负载均衡服务器首先是一个反向代理服务器(参见[Tip #1][6])——它接收来自互联网的流量,然后转发请求给另一个服务器。其中的诀窍在于负载均衡服务器支持两个或多个应用服务器,使用[分配算法][7]将请求转发给不同的服务器。最简单的负载均衡方法是轮转法,只需要将新的请求发给列表里的下一个服务器。其它的方法包括将请求发给活动连接数最少的服务器。NGINX Plus 拥有将特定用户的会话分配给同一个服务器的[能力][8]。
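结合上面两条建议,一个最小化的 NGINX 反向代理加轮转法负载均衡配置示意如下(这只是一个片段草图,服务器地址和端口均为假设,并非完整的 nginx.conf):

```nginx
# 假设的最小配置片段:反向代理 + 轮转法负载均衡
http {
    upstream app_servers {
        # 两台假设的应用服务器,默认按轮转法分配请求
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
    }

    server {
        listen 80;

        location / {
            # 把请求转发给上游服务器群
            proxy_pass http://app_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

在此基础上添加一台应用服务器,只需要在 upstream 块里再加一行 server 即可,无需改动应用本身。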
+负载均衡可以很好的提高性能,是因为它可以避免某个服务器过载而另一些服务器却空闲无事。它也可以简单的扩展服务器规模,因为你可以添加多个价格相对便宜的服务器并且保证它们被充分利用了。
+
+可以进行负载均衡的协议包括HTTP、HTTPS、SPDY、HTTP/2、WebSocket、[FastCGI][9]、SCGI、uwsgi、memcached,以及几种其它的应用类型,包括采用TCP 第4层协议的程序。分析你的web 应用,确定你要使用哪些协议以及哪些地方性能不足。
+
+相同的服务器或服务器群可以被用来进行负载均衡,也可以用来处理其它的任务,如SSL 终止、提供对客户端的HTTP/1.x 和 HTTP/2 支持,以及缓存静态文件。
+
+NGINX 经常被用来进行负载均衡;要想了解更多的情况,可以访问我们的[overview blog post][10]、[configuration blog post][11]、[ebook][12] 以及相关的 [webinar][13] 和 [documentation][14]。我们的商业版本 [NGINX Plus][15] 支持更多优化了的负载均衡特性,如基于服务器响应时间的负载路由,以及在Microsoft NTLM 协议上进行负载均衡的能力。
+
+### Tip #3: 缓存静态和动态的内容 ###
+
+缓存通过加速内容的传输来提高web 应用的性能。它可以采用以下几种策略:需要的时候预处理要传输的内容,把数据保存到速度更快的设备,把数据存储在距离客户端更近的位置,或者把这几种方法结合起来使用。
+
+下面要考虑两种不同类型数据的缓存:
+
+- **静态内容缓存**。不经常变化的文件,比如图像(JPEG、PNG)和代码(CSS、JavaScript),可以保存在边缘服务器上,这样就可以快速的从内存或磁盘上提取。
+- **动态内容缓存**。很多web 应用会针对每个网页请求生成全新的HTML 页面。把每个生成的HTML 短时间缓存一下,就可以很好的减少需要生成的页面总数,同时仍能保证内容足够新鲜,满足你的要求。
+
+举个例子,如果一个页面每秒会被浏览10次,你将它缓存1秒,99%请求的页面都会直接从缓存提取。如果你再结合静态内容缓存,甚至新生成的页面也可能大部分由缓存的内容构成。
+
+下面是web 应用常用的三种主要的缓存技术:
+
+- **缩短数据与用户的距离**。把一份内容的拷贝放到离用户更近的地方来减少传输时间。
+- **提高内容服务器的速度**。内容可以保存在一个更快的服务器上来减少提取文件的时间。
+- **从过载服务器上转移数据**。机器经常因为要完成某些其它的任务而造成某个任务的执行速度比测试结果要差。将数据缓存在别的机器上可以同时提高缓存资源和非缓存资源的性能,因为主机没有被过度使用。
+
+web 应用的缓存机制可以在web 应用服务器内部实现。首先,缓存动态内容可以减少应用服务器加载动态内容的时间。然后,缓存静态内容(包括动态内容的临时拷贝)可以更进一步的分担应用服务器的负载。而且缓存之后数据会从应用服务器转移到对用户而言更快、更近的机器上,从而减少应用服务器的压力,减少提取数据和传输数据的时间。
+
+改进过的缓存方案可以极大的提高应用的速度。对于大多数网页来说,静态数据,比如大图像文件,构成了超过一半的内容。如果没有缓存,提取和传输这类数据可能会花费几秒的时间,但是采用了缓存之后不到1秒就可以完成。
+
+举一个实际中缓存是如何使用的例子,NGINX 和NGINX Plus 使用了两条指令来[设置缓存机制][16]:proxy_cache_path 和 proxy_cache。你可以指定缓存的位置和大小、文件在缓存中保存的最长时间和其他一些参数。使用第三条(而且是相当受欢迎的一条)指令 proxy_cache_use_stale,你甚至可以在提供新鲜内容的服务器正忙或者挂掉时,让缓存直接提供旧的内容,这样客户端就不会一无所得。从用户的角度来看,这可以很好的提高你的网站或者应用的正常运行时间。
+
+NGINX Plus 拥有[高级缓存特性][17],包括对[缓存清除][18]的支持,以及在[仪表盘][19]上显示缓存状态信息。
+
+要想获得更多关于NGINX 的缓存机制的信息,可以浏览NGINX Plus 管理员指南中的 [reference documentation][20] 和 [NGINX Content Caching][21]。
+
+**注意**:缓存机制横跨应用开发者、基础设施决策者以及系统运维人员的职责范围。本文提到的一些复杂的缓存机制从[DevOps 的角度][23]来看很有价值,即对于集应用开发、架构与运维于一体的工程师来说,可以同时满足他们在站点功能性、响应时间、安全性和商业结果(如完成的交易数)方面的需求。
+
+### Tip #4: 压缩数据 ###
+
+压缩是一个具有很大潜力的性能加速方法。现在已经有一些针对照片(JPEG 和PNG)、视频(MPEG-4)和音乐(MP3)等各类文件精心设计的高压缩率标准。每一个标准都或多或少的减少了文件的大小。
+
+文本数据——包括HTML(包含了纯文本和HTML 标签)、CSS 和代码,比如Javascript——经常是未经压缩就传输的。压缩这类数据对应用程序的感知性能会产生不成比例的影响,对处于慢速或受限的移动网络的客户端尤其如此。
+
+这是因为文本数据经常是用户与网页交互的有效数据,而多媒体数据可能更多的是起支持或者装饰的作用。聪明的内容压缩可以减少HTML、Javascript、CSS 和其他文本内容对带宽的要求,通常可以减少30% 甚至更多的带宽和相应的页面加载时间。
+
+如果你使用SSL,压缩可以减少需要进行SSL 编码的数据量,从而抵消压缩操作本身占用的一部分CPU 时间。
+
+压缩文本数据的方法很多。举个例子,HTTP/2 中就定义了一种新颖的、专门适用于头部数据的文本压缩模式。另一个例子是可以在NGINX 里打开GZIP 压缩文本。你在你的服务里[预压缩文本数据][25]之后,就可以直接使用gzip_static 指令来处理压缩过的.gz 版本。
+
+### Tip #5: 优化 SSL/TLS ###
+
+安全套接字层([SSL][26])协议和它的继承者——传输层安全(TLS)协议——正在被越来越多的网站采用。SSL/TLS 对从原始服务器发往用户的数据进行加密,提高了网站的安全性。Google 将使用SSL/TLS 作为搜索引擎排名的正面因素,这也推动了这一趋势。
+
+尽管SSL/TLS 越来越流行,但是使用加密对速度的影响也让很多网站望而却步。SSL/TLS 之所以让网站变的更慢,原因有二:
+
+1. 任何一个连接第一次连接时的握手过程都需要传递密钥。而采用HTTP/1.x 协议的浏览器在建立多个连接时会对每个连接重复上述操作。
+2. 数据在传输过程中需要不断的在服务器端加密、在客户端解密。
+
+为了鼓励使用SSL/TLS,HTTP/2 和SPDY(在[下一节][27]会描述)的作者设计了新的协议,让浏览器对一个会话只使用一个连接。这会大大的减少上述第一个原因所浪费的时间。然而现在可以用来提高应用程序使用SSL/TLS 传输数据的性能的方法不止这些。
+
+web 服务器有对应的机制优化SSL/TLS 传输。举个例子,NGINX 使用[OpenSSL][28],运行在普通的硬件上就能提供接近专用硬件的传输性能。NGINX 的[SSL 性能][29]有详细的文档,把对SSL/TLS 数据进行加解密的时间和CPU 占用率降低了很多。
+
+更进一步,这篇[blog][30]详细的说明了如何提高SSL/TLS 性能,可以总结为以下几点:
+
+- **会话缓冲**。使用指令[ssl_session_cache][31]可以缓存每个新的SSL/TLS 连接使用的参数。
+- **会话票据或者ID**。把SSL/TLS 的信息保存在一个票据或者ID 里,连接可以被平滑的复用,而不需要重新握手。
+- **OCSP 装订**。通过缓存SSL/TLS 证书信息来减少握手时间。
+
+NGINX 和NGINX Plus 可以被用作SSL/TLS 终结点——处理客户端流量的加密和解密,同时和其他服务器进行明文通信。使用[这几步][32]来设置NGINX 和NGINX Plus 处理SSL/TLS 终止。同时,这里还有一些NGINX Plus 和接收TCP 连接的服务器一起使用时的[特有的步骤][33]。
+
+### Tip #6: 使用 HTTP/2 或 SPDY ###
+
+对于已经使用了SSL/TLS 的站点,HTTP/2 和SPDY 可以很好的提高性能,因为每个连接只需要一次握手。而对于没有使用SSL/TLS 的站点来说,HTTP/2 和SPDY 可能反而会在响应速度上有些影响(通常会降低效率)。
+
+Google 在2012年开始把SPDY 作为一个比HTTP/1.x 更快速的协议来推荐。HTTP/2 是最新的IETF 标准,它基于SPDY。SPDY 已经被广泛的支持了,但是很快就会被HTTP/2 替代。
+
+SPDY 和HTTP/2 的关键是用单连接来替代多路连接。单个连接是被复用的,所以它可以同时携带多个请求和响应的分片。
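在 NGINX 上,前面几条建议(动态内容缓存、文本压缩、SSL 会话缓存)和这里说的 HTTP/2 都可以落实为配置。下面是一个把它们放在一起的假设示意(域名、路径、证书位置和上游地址均为虚构,数值仅供参考,启用 http2 需要较新的 NGINX 版本):

```nginx
# 假设的配置片段:缓存 + gzip + SSL 会话缓存 + HTTP/2
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

server {
    listen 443 ssl http2;               # 在 SSL 连接上启用 HTTP/2
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    ssl_session_cache   shared:SSL:10m; # 会话缓冲,减少重复握手
    ssl_session_timeout 10m;

    gzip on;                            # 压缩文本内容
    gzip_types text/css application/javascript application/json;

    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 1s;       # 短时间缓存动态内容
        proxy_cache_use_stale error timeout;  # 上游出错时提供旧内容
        proxy_pass http://127.0.0.1:8080;
    }
}
```

其中 proxy_cache_valid 设为 1 秒对应上文"每秒浏览10次的页面缓存1秒"的例子;实际取值应按内容的时效性要求调整。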
+通过使用一个连接这些协议可以避免过多的设置和管理多个连接,就像浏览器实现了HTTP/1.x 一样。单连接在对SSL 特别有效,这是因为它可以最小化SSL/TLS 建立安全链接时的握手时间。 + +SPDY 协议需要使用SSL/TLS, 而HTTP/2 官方并不需要,但是目前所有支持HTTP/2的浏览器只有在使能了SSL/TLS 的情况下才会使用它。这就意味着支持HTTP/2 的浏览器只有在网站使用了SSL 并且服务器接收HTTP/2 流量的情况下才会启用HTTP/2。否则的话浏览器就会使用HTTP/1.x 协议。 + +当你实现SPDY 或者HTTP/2时,你不再需要通常的HTTP 性能优化方案,比如域分隔资源聚合,以及图像登记。这些改变可以让你的代码和部署变得更简单和更易于管理。要了解HTTP/2 带来的这些变化可以浏览我们的[白皮书][34]。 + +![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png) + +作为支持这些协议的一个样例,NGINX 已经从一开始就支持了SPDY,而且[大部分使用SPDY 协议的网站][35]都运行的是NGINX。NGINX 同时也[很早][36]对HTTP/2 的提供了支持,从2015 年9月开始开源NGINX 和NGINX Plus 就[支持][37]它了。 + +经过一段时间,我们NGINX 希望更多的站点完全是能SSL 并且向HTTP/2 迁移。这将会提高安全性,同时新的优化手段也会被发现和实现,更简单的代码表现的更加优异。 + +### Tip #7: 升级软件版本 ### + +一个提高应用性能的简单办法是根据软件的稳定性和性能的评价来选在你的软件栈。进一步说,因为高性能组件的开发者更愿意追求更高的性能和解决bug ,所以值得使用最新版本的软件。新版本往往更受开发者和用户社区的关注。更新的版本往往会利用到新的编译器优化,包括对新硬件的调优。 + +稳定的新版本通常比旧版本具有更好的兼容性和更高的性能。一直进行软件更新,可以非常简单的保持软件保持最佳的优化,解决掉bug,以及安全性的提高。 + +一直使用旧版软件也会组织你利用新的特性。比如上面说到的HTTP/2,目前要求OpenSSL 1.0.1.在2016 年中期开始将会要求1.0.2 ,而这是在2015年1月才发布的。 + +NGINX 用户可以开始迁移到[NGINX 最新的开源软件][38] 或者[NGINX Plus][39];他们都包含了罪行的能力,如socket分区和线程池(见下文),这些都已经为性能优化过了。然后好好看看的你软件栈,把他们升级到你能能升级道德最新版本吧。 + +### Tip #8: linux 系统性能调优 ### + +linux 是大多数web 服务器使用操作系统,而且作为你的架构的基础,Linux 表现出明显可以提高性能的机会。默认情况下,很多linux 系统都被设置为使用很少的资源,匹配典型的桌面应用负载。这就意味着web 应用需要最少一些等级的调优才能达到最大效能。 + +Linux 优化是转变们针对web 服务器方面的。以NGINX 为例,这里有一些在加速linux 时需要强调的变化: + +- **缓冲队列**。如果你有挂起的连接,那么你应该考虑增加net.core.somaxconn 的值,它代表了可以缓存的连接的最大数量。如果连接线直太小,那么你将会看到错误信息,而你可以逐渐的增加这个参数知道错误信息停止出现。 +- **文件描述符**。NGINX 对一个连接使用最多2个文件描述符。如果你的系统有很多连接,你可能就需要提高sys.fs.file_max ,增加系统对文件描述符数量整体的限制,这样子才能支持不断增加的负载需求。 +- **临时端口**。当使用代理时,NGINX 会为每个上游服务器创建临时端口。你可以设置net.ipv4.ip_local_port_range 来提高这些端口的范围,增加可用的端口。你也可以减少非活动的端口的超时判断来重复使用端口,这可以通过net.ipv4.tcp_fin_timeout 来设置,这可以快速的提高流量。 + +对于NGINX 来说,可以查阅[NGINX 性能调优指南][40]来学习如果优化你的Linux 系统,这样子它就可以很好的适应大规模网络流量而不会超过工作极限。 + +### Tip #9: web 服务器性能调优 ### + +无论你是用哪种web 
服务器,你都需要对它进行优化来提高性能。下面的推荐手段可以用于任何web 服务器,但是一些设置是针对NGINX的。关键的优化手段包括: + +- **f访问日志**。不要把每个请求的日志都直接写回磁盘,你可以在内存将日志缓存起来然后一批写回磁盘。对于NGINX 来说添加给指令*access_log* 添加参数 *buffer=size* 可以让系统在缓存满了的情况下才把日志写到此哦按。如果你添加了参数**flush=time** ,那么缓存内容会每隔一段时间再写回磁盘。 +- **缓存**。缓存掌握了内存中的部分资源知道满了位置,这可以让与客户端的通信更加高效。与内存中缓存不匹配的响应会写回磁盘,而这就会降低效能。当NGINX [启用][42]了缓存机制后,你可以使用指令*proxy_buffer_size* 和 *proxy_buffers* 来管理缓存。 +- **客户端保活**。保活连接可以减少开销,特别是使用SSL/TLS时。对于NGINX 来说,你可以增加*keepalive_requests* 的值,从默认值100 开始修改,这样一个客户端就可以转交一个指定的连接,而且你也可以通过增加*keepalive_timeout* 的值来允许保活连接存活更长时间,结果就是让后来的请求处理的更快速。 +- **上游保活**。上游的连接——即连接到应用服务器、数据库服务器等机器的连接——同样也会收益于连接保活。对于上游连接老说,你可以增加*保活时间*,即每个工人进程的空闲保活连接个数。这就可以提高连接的复用次数,减少需要重新打开全新的连接次数。更多关于保活连接的信息可以参见[blog][41]. +- **限制**。限制客户端使用的资源可以提高性能和安全性。对于NGINX 来说指令*limit_conn* 和 *limit_conn_zone* 限制了每个源的连接数量,而*limit_rate* 限制了带宽。这些限制都可以阻止合法用户*攫取* 资源,同时夜避免了攻击。指令*limit_req* 和 *limit_req_zone* 限制了客户端请求。对于上游服务器来说,可以在上游服务器的配置块里使用max_conns 可以限制连接到上游服务器的连接。 这样可以避免服务器过载。关联的队列指令会创建一个队列来在连接数抵达*max_conn* 限制时在指定的长度的时间内保存特定数量的请求。 +- **工人进程**。工人进程负责处理请求。NGINX 采用事件驱动模型和依赖操作系统的机制来有效的讲请求分发给不同的工人进程。这条建议推荐设置每个CPU 的参数*worker_processes* 。如果需要的话,工人连接的最大数(默认512)可以安全在大部分系统增加,是指找到最适合你的系统的值。 +- **套接字分割**。通常一个套接字监听器会把新连接分配给所有工人进程。套接字分割会未每个工人进程创建一个套接字监听器,这样一来以内核分配连接给套接字就成为可能了。折可以减少锁竞争,并且提高多核系统的性能,要使能[套接字分隔][43]需要在监听指令里面加上复用端口参数。 +- **线程池**。一个计算机进程可以处理一个缓慢的操作。对于web 服务器软件来说磁盘访问会影响很多更快的操作,比如计算或者在内存中拷贝。使用了线程池之后慢操作可以分配到不同的任务集,而主进程可以一直运行快速操作。当磁盘操作完成后结果会返回给主进程的循环。在NGINX理有两个操作——read()系统调用和sendfile() ——被分配到了[线程池][44] + +![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png) + +**技巧**。当改变任务操作系统或支持服务的设置时,一次只改变一个参数然后测试性能。如果修改引起问题了,或者不能让你的系统更快那么就改回去。 + +在[blog][45]可以看到更详细的NGINX 调优方法。 + +### Tip #10: 监视系统活动来解决问题和瓶颈 ### + +在应用开发中要使得系统变得非常高效的关键是监视你的系统在现实世界运行的性能。你必须能通过特定的设备和你的web 基础设施上监控程序活动。 + +监视活动是最积极的——他会告诉你发生了什么,把问题留给你发现和最终解决掉。 + +监视可以发现集中不同的问题。它们包括: + +- 服务器宕机。 +- 服务器出问题一直在丢失连接。 +- 服务器出现大量的缓存未命中。 +- 
服务器没有发送正确的内容。
+
+应用的总体性能监控工具,比如New Relic 和Dynatrace,可以帮助你监控到从远端加载网页的时间,而NGINX 可以帮助你监控到应用交付的时间。当你需要考虑为基础设施添加容量以满足流量需求时,应用性能数据可以告诉你你的优化措施是否确实起了作用。
+
+为了帮助开发者快速发现并解决问题,NGINX Plus 增加了[应用感知的健康检查][46]——对重复出现的常规事件进行综合分析,并在问题出现时向你发出警告。NGINX Plus 同时提供[会话排空(session draining)][47] 功能,这可以在已有任务完成之前阻止给出问题的服务器分配新的连接;另一个功能是慢启动,允许一个从错误中恢复过来的服务器逐步追赶上负载均衡服务器群的速度。使用得当时,健康检查可以让你在问题变得严重到影响用户体验之前就发现它,而会话排空和慢启动可以让你替换服务器,并且这个过程不会对性能和正常运行时间产生负面影响。下图展示了NGINX Plus 内建的实时活动[监视][48]仪表盘,涵盖了web 基础设施中的服务器群、TCP 连接和缓存等信息。
+
+![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)
+
+### 总结:看看10倍性能提升的效果 ###
+
+这些性能提升方案对任何一个web 应用都可用并且效果都很好,而实际效果取决于你的预算、你能投入的时间,以及目前实现方案中的差距。所以你该如何对你自己的应用实现10倍性能提升?
+
+为了帮助你了解每种优化手段的潜在影响,这里是上面详述的每个优化方法的关键点,虽然你的实际收益肯定会因情况而异:
+
+- **反向代理服务器和负载均衡**。没有负载均衡或者负载均衡很差都会造成间歇性的极低性能。增加一个反向代理,比如NGINX,可以避免web 应用程序在内存和磁盘之间抖动。负载均衡可以将过载服务器的任务转移到空闲的服务器,还可以轻松地进行扩容。这些改变都可以产生巨大的性能提升,与你现在的实现方案的最差性能相比,很容易就能提升10倍;对于总体性能来说提升的幅度可能小一些,但也是实质性的提升。
+- **缓存动态和静态数据**。如果你有一个负担过重的web 服务器,而它同时也充当你的应用服务器,那么仅通过缓存动态数据就可以在峰值时间提高10倍的性能。缓存静态文件也可以带来几倍的性能提升。
+- **压缩数据**。使用媒体文件压缩格式,比如图像格式JPEG、图形格式PNG、视频格式MPEG-4、音乐文件格式MP3,可以极大地提高性能。一旦这些都用上了,再压缩文本数据(代码和HTML)可以把初始页面加载速度提高两倍。
+- **优化SSL/TLS**。安全握手会对性能产生巨大的影响,对它们的优化可能会让初始响应速度提升2倍,对文本较多的站点尤其如此。优化SSL/TLS 下的媒体文件传输只会产生很小的性能提升。
+- **使用HTTP/2 和SPDY**。在已经使用了SSL/TLS 的前提下,这些协议很可能会带来整个站点性能的增量提升。
+- **对Linux 和web 服务器软件进行调优**。比如优化缓冲机制、使用保活连接、把时间敏感型任务分配到不同的线程池,都可以明显地提高性能;举个例子,线程池可以把对磁盘敏感的任务加速[近一个数量级][49]。
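+
+作为参考,下面是一份把上面多条技巧组合在一起的NGINX 配置草稿(放在http 配置块中),涵盖了缓存动态内容(Tip 3)、压缩文本(Tip 4)、SSL 会话缓存(Tip 5)、HTTP/2(Tip 6)以及客户端和上游的保活连接(Tip 9)。注意:其中的域名、证书路径和上游服务器地址都只是示例假设,需要换成你自己环境里的值;listen 指令的http2 参数还要求NGINX 1.9.5 或更新的版本。
+
+    # 示例配置:域名、证书路径、上游地址均为假设的占位符
+    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m;
+
+    upstream app_backend {
+        server 10.0.0.10:8080;
+        server 10.0.0.11:8080;
+        keepalive 32;                         # 上游保活连接(Tip 9)
+    }
+
+    server {
+        listen 443 ssl http2;                 # 启用SSL/TLS 后浏览器才会使用HTTP/2(Tip 6)
+        server_name example.com;
+        ssl_certificate     /etc/nginx/ssl/example.com.crt;
+        ssl_certificate_key /etc/nginx/ssl/example.com.key;
+        ssl_session_cache   shared:SSL:10m;   # 复用SSL 会话,减少握手开销(Tip 5)
+
+        gzip on;                              # 压缩文本类响应(Tip 4)
+        gzip_types text/css application/javascript application/json;
+
+        keepalive_timeout 65;                 # 客户端保活(Tip 9)
+
+        location / {
+            proxy_pass http://app_backend;
+            proxy_http_version 1.1;           # 上游保活需要HTTP/1.1
+            proxy_set_header Connection "";
+            proxy_cache app_cache;            # 缓存动态内容(Tip 3)
+            proxy_cache_valid 200 10m;
+        }
+    }
+
+按照上面**技巧**一节的建议,这些改动最好一次只应用一项并分别测试效果。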
+
+我们希望你亲自尝试这些技术,并且希望这些手段带来的应用性能提升能够在你那里实现。请在下面的评论栏分享你的结果,或者在标签#NGINX 和#webperf 下tweet 你的故事。
+
+### 网上资源 ###
+
+[Statista.com – Share of the internet economy in the gross domestic product in G-20 countries in 2016][50]
+
+[Load Impact – How Bad Performance Impacts Ecommerce Sales][51]
+
+[Kissmetrics – How Loading Time Affects Your Bottom Line (infographic)][52]
+
+[Econsultancy – Site speed: case studies, tips and tools for improving your conversion rate][53]
+
+--------------------------------------------------------------------------------
+
+via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io
+
+作者:[Floyd Smith][a]
+译者:[Ezio](https://github.com/oska874)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.nginx.com/blog/author/floyd/
+[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server
+[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2
+[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3
+[4]:https://www.nginx.com/products/application-health-checks/
+[5]:https://www.nginx.com/solutions/load-balancing/
+[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1
+[7]:https://www.nginx.com/resources/admin-guide/load-balancer/
+[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
+[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx
+[10]:https://www.nginx.com/blog/five-reasons-use-software-load-balancer/
+[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
+[12]:https://www.nginx.com/resources/ebook/five-reasons-choose-software-load-balancer/ 
+[13]:https://www.nginx.com/resources/webinars/choose-software-based-load-balancer-45-min/ +[14]:https://www.nginx.com/resources/admin-guide/load-balancer/ +[15]:https://www.nginx.com/products/ +[16]:https://www.nginx.com/blog/nginx-caching-guide/ +[17]:https://www.nginx.com/products/content-caching-nginx-plus/ +[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge +[19]:https://www.nginx.com/products/live-activity-monitoring/ +[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache +[21]:https://www.nginx.com/resources/admin-guide/content-caching +[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/ +[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6 +[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/ +[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html +[26]:https://www.digicert.com/ssl.htm +[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6 +[28]:http://openssl.org/ +[29]:https://www.nginx.com/blog/nginx-ssl-performance/ +[30]:https://www.nginx.com/blog/improve-seo-https-nginx/ +[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache +[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/ +[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/ +[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/ +[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites +[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/ +[37]:https://www.nginx.com/blog/nginx-plus-r7-released/ +[38]:http://nginx.org/en/download.html +[39]:https://www.nginx.com/products/ 
+[40]:https://www.nginx.com/blog/tuning-nginx/ +[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/ +[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering +[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/ +[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/ +[45]:https://www.nginx.com/blog/tuning-nginx/ +[46]:https://www.nginx.com/products/application-health-checks/ +[47]:https://www.nginx.com/products/session-persistence/#session-draining +[48]:https://www.nginx.com/products/live-activity-monitoring/ +[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/ +[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/ +[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/ +[52]:https://blog.kissmetrics.com/loading-time/?wide=1 +[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/ diff --git a/translated/tech/20151104 How to Install Redis Server on CentOS 7.md b/translated/tech/20151104 How to Install Redis Server on CentOS 7.md deleted file mode 100644 index eb872a2719..0000000000 --- a/translated/tech/20151104 How to Install Redis Server on CentOS 7.md +++ /dev/null @@ -1,236 +0,0 @@ -How to Install Redis Server on CentOS 7.md - -如何在CentOS 7上安装Redis 服务 -================================================================================ - -大家好, 本文的主题是Redis,我们将要在CentOS 7 上安装它。编译源代码,安装二进制文件,创建、安装文件。然后安装组建,我们还会配置redis ,就像配置操作系统参数一样,目标就是让redis 运行的更加可靠和快速。 - -![Runnins Redis](http://blog.linoxide.com/wp-content/uploads/2015/10/run-redis-standalone.jpg) - -Redis 服务器 - -Redis 是一个开源的多平台数据存储软件,使用ANSI C 编写,直接在内存使用数据集,这使得它得以实现非常高的效率。Redis 支持多种编程语言,包括Lua, C, Java, Python, Perl, PHP 和其他很多语言。redis 的代码量很小,只有约3万行,它只做很少的事,但是做的很好。尽管你在内存里工作,但是对数据持久化的需求还是存在的,而redis 的可靠性就很高,同时也支持集群,这儿些可以很好的保证你的数据安全。 - -### 构建 Redis ### - -redis 目前没有官方RPM 
安装包,我们需要从牙UN代码编译,而为了要编译就需要安装Make 和GCC。 - -如果没有安装过GCC 和Make,那么就使用yum 安装。 - - yum install gcc make - -从[官网][1]下载tar 压缩包。 - - curl http://download.redis.io/releases/redis-3.0.4.tar.gz -o redis-3.0.4.tar.gz - -解压缩。 - - tar zxvf redis-3.0.4.tar.gz - -进入解压后的目录。 - - cd redis-3.0.4 - -使用Make 编译源文件。 - - make - -### 安装 ### - -进入源文件的目录。 - - cd src - -复制 Redis server 和 client 到 /usr/local/bin - - cp redis-server redis-cli /usr/local/bin - -最好也把sentinel,benchmark 和check 复制过去。 - - cp redis-sentinel redis-benchmark redis-check-aof redis-check-dump /usr/local/bin - -创建redis 配置文件夹。 - - mkdir /etc/redis - -在`/var/lib/redis` 下创建有效的保存数据的目录 - - mkdir -p /var/lib/redis/6379 - -#### 系统参数 #### - -为了让redis 正常工作需要配置一些内核参数。 - -配置vm.overcommit_memory 为1,它的意思是一直避免数据被截断,详情[见此][2]. - - sysctl -w vm.overcommit_memory=1 - -修改backlog 连接数的最大值超过redis.conf 中的tcp-backlog 值,即默认值511。你可以在[kernel.org][3] 找到更多有关基于sysctl 的ip 网络隧道的信息。 - - sysctl -w net.core.somaxconn=512. - -禁止支持透明大页,,因为这会造成redis 使用过程产生延时和内存访问问题。 - - echo never > /sys/kernel/mm/transparent_hugepage/enabled - -### redis.conf ### -Redis.conf 是redis 的配置文件,然而你会看到这个文件的名字是6379.conf ,而这个数字就是redis 监听的网络端口。这个名字是告诉你可以运行超过一个redis 实例。 - -复制redis.conf 的示例到 **/etc/redis/6379.conf**. 
- - cp redis.conf /etc/redis/6379.conf - -现在编辑这个文件并且配置参数。 - - vi /etc/redis/6379.conf - -#### 守护程序 #### - -设置daemonize 为no,systemd 需要它运行在前台,否则redis 会突然挂掉。 - - daemonize no - -#### pidfile #### - -设置pidfile 为/var/run/redis_6379.pid。 - - pidfile /var/run/redis_6379.pid - -#### port #### - -如果不准备用默认端口,可以修改。 - - port 6379 - -#### loglevel #### - -设置日志级别。 - - loglevel notice - -#### logfile #### - -修改日志文件路径。 - - logfile /var/log/redis_6379.log - -#### dir #### - -设置目录为 /var/lib/redis/6379 - - dir /var/lib/redis/6379 - -### 安全 ### - -下面有几个操作可以提高安全性。 - -#### Unix sockets #### - -在很多情况下,客户端程序和服务器端程序运行在同一个机器上,所以不需要监听网络上的socket。如果这和你的使用情况类似,你就可以使用unix socket 替代网络socket ,为此你需要配置**port** 为0,然后配置下面的选项来使能unix socket。 - -设置unix socket 的套接字文件。 - - unixsocket /tmp/redis.sock - -限制socket 文件的权限。 - - unixsocketperm 700 - -现在为了获取redis-cli 的访问权限,应该使用-s 参数指向socket 文件。 - - redis-cli -s /tmp/redis.sock - -#### 密码 #### - -你可能需要远程访问,如果是,那么你应该设置密码,这样子每次操作之前要求输入密码。 - - requirepass "bTFBx1NYYWRMTUEyNHhsCg" - -#### 重命名命令 #### - -想象一下下面一条条指令的输出。使得,这回输出服务器的配置,所以你应该在任何可能的情况下拒绝这种信息。 - - CONFIG GET * - -为了限制甚至禁止这条或者其他指令可以使用**rename-command** 命令。你必须提供一个命令名和替代的名字。要禁止的话需要设置replacement 为空字符串,这样子禁止任何人猜测命令的名字会比较安全。 - - rename-command FLUSHDB "FLUSHDB_MY_SALT_G0ES_HERE09u09u" - rename-command FLUSHALL "" - rename-command CONFIG "CONFIG_MY_S4LT_GO3S_HERE09u09u" - -![Access Redis through unix with password and command changes](http://blog.linoxide.com/wp-content/uploads/2015/10/redis-security-test.jpg) - -通过密码和修改命令来访问unix socket。 - -#### 快照 #### - -默认情况下,redis 会周期性的将数据集转储到我们设置的目录下的文件**dump.rdb**。你可以使用save 命令配置转储的频率,他的第一个参数是以秒为单位的时间帧(译注:按照下文的意思单位应该是分钟),第二个参数是在数据文件上进行修改的数量。 - -每隔15小时并且最少修改过一次键。 - save 900 1 - -每隔5小时并且最少修改过10次键。 - - save 300 10 - -每隔1小时并且最少修改过10000次键。 - - save 60 10000 - -文件**/var/lib/redis/6379/dump.rdb** 包含了内存里经过上次保存命令的转储数据。因为他创建了临时文件并且替换了源文件,这里没有被破坏的问题,而且你不用担心直接复制这个文件。 - -### 开机时启动 ### - -You may use systemd to add Redis to the system startup -你可以使用systemd 将redis 添加到系统开机启动列表。 - -复制init_script 
示例文件到/etc/init.d,注意脚本名所代表的端口号。 - - cp utils/redis_init_script /etc/init.d/redis_6379 - -现在我们来使用systemd,所以在**/etc/systems/system** 下创建一个单位文件名字为redis_6379.service。 - - vi /etc/systemd/system/redis_6379.service - -填写下面的内容,详情可见systemd.service。 - - [Unit] - Description=Redis on port 6379 - - [Service] - Type=forking - ExecStart=/etc/init.d/redis_6379 start - ExecStop=/etc/init.d/redis_6379 stop - - [Install] - WantedBy=multi-user.target - -现在添加我之前在**/etc/sysctl.conf** 里面修改多的内存过分提交和backlog 最大值的选项。 - - vm.overcommit_memory = 1 - - net.core.somaxconn=512 - -对于透明大页支持,并没有直接sysctl 命令可以控制,所以需要将下面的命令放到/etc/rc.local 的结尾。 - echo never > /sys/kernel/mm/transparent_hugepage/enabled - -### 总结 ### - -这些足够启动了,通过设置这些选项你将足够部署redis 服务到很多简单的场景,然而在redis.conf 还有很多为复杂环境准备的redis 的选项。在一些情况下,你可以使用[replication][4] 和 [Sentinel][5] 来提高可用性,或者[将数据分散][6]在多个服务器上,创建服务器集群 。谢谢阅读。 --------------------------------------------------------------------------------- - -via: http://linoxide.com/storage/install-redis-server-centos-7/ - -作者:[Carlos Alberto][a] -译者:[ezio](https://github.com/oska874) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/carlosal/ -[1]:http://redis.io/download -[2]:https://www.kernel.org/doc/Documentation/vm/overcommit-accounting -[3]:https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt -[4]:http://redis.io/topics/replication -[5]:http://redis.io/topics/sentinel -[6]:http://redis.io/topics/partitioning diff --git a/translated/tech/20151105 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md b/translated/tech/20151105 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md new file mode 100644 index 0000000000..b7c86609bd --- /dev/null +++ b/translated/tech/20151105 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md @@ -0,0 +1,63 @@ + +Linux 有问必答 - 如何通过代理服务器安装 Ubuntu 桌面 
+================================================================================ +> **问题**: 我的电脑通过 HTTP 代理连接到公司网络。当我尝试从 CD-ROM 在计算机上安装 Ubuntu 桌面时,在检索文件时安装程序会被挂起,检索则不会完成,这可能是由于代理造成的。然而问题是,Ubuntu 的安装程序从不要求我在安装过程中配置代理。那我该怎么使用代理来安装 Ubuntu 桌面? + +与 Ubuntu 服务器不太一样,安装 Ubuntu 桌面几乎都是自动安装,没有留下太多自定义的空间,如自定义磁盘分区,手动网络设置,包选择等等。尽管非常简单,一键安装被认为是用户友好的,它不需要用户寻找“高级安装模式”来定制自己的 Ubuntu 桌面。 + +此外,Ubuntu 默认桌面的安装程序中一个大问题是代理设置。如果你的计算机连接在一个代理上,你会发现 Ubuntu 安装时会卡在准备下载文件处。 + +![](https://c2.staticflickr.com/6/5683/22195372232_cea81a5e45_c.jpg) + +这篇文章描述了如何解决当使用代理时 Ubuntu **安装程序的限制和安装 Ubuntu 桌面**。 + +其基本思路如下。不是直接使用 Ubuntu 的安装程序开始安装,首先进入 live Ubuntu 桌面,配置代理服务器,最后从 live 桌面手动启动 Ubuntu 的安装程序。以下是程序的安装步骤。 + +从 CD/DVD 或 USB 启动 Ubuntu 桌面后,点击第一个欢迎屏幕上的"Try Ubuntu"。 + +![](https://c1.staticflickr.com/1/586/22195371892_3816ba09c3_c.jpg) + +一旦进入 live Ubuntu 桌面,点击左侧设置图标。 + +![](https://c1.staticflickr.com/1/723/22020327738_058610c19d_c.jpg) + +进入 Network 菜单。 + +![](https://c2.staticflickr.com/6/5675/22021212239_ba3901c8bf_c.jpg) + +手动配置代理服务器。 + +![](https://c1.staticflickr.com/1/735/22020025040_59415e0b9a_c.jpg) + +接下来,打开一个终端。 + +![](https://c2.staticflickr.com/6/5642/21587084823_357b5c48cb_c.jpg) + +通过输入以下命令切换到 root 用户: + + $ sudo su + +最后,在 root 用户下输入以下命令。 + + # ubiquity gtk_ui + +然后将启动基于 GUI 的 Ubuntu 安装程序。 + + +![](https://c1.staticflickr.com/1/723/22020025090_cc64848b6c_c.jpg) + +继续安装其余部分。 + +![](https://c1.staticflickr.com/1/628/21585344214_447020e9d6_c.jpg) + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/install-ubuntu-desktop-behind-proxy.html + +作者:[Dan Nanni][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni diff --git a/translated/tech/20151109 Install Android On BQ Aquaris Ubuntu Phone In Linux.md b/translated/tech/20151109 Install Android On BQ 
Aquaris Ubuntu Phone In Linux.md new file mode 100644 index 0000000000..73397819cd --- /dev/null +++ b/translated/tech/20151109 Install Android On BQ Aquaris Ubuntu Phone In Linux.md @@ -0,0 +1,125 @@ +在 Linux 上将 BQ Aquaris Ubuntu 手机刷成 Android 系统 +================================================================================ +![How to install Android on Ubuntu Phone](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-on-Ubuntu-Phone.jpg) + +如果你正好拥有全球第一支运行 Ubuntu 的手机并且希望将 **BQ Aquaris E4.5 自带的 Ubuntu 系统换成 Android **,那这篇文章能帮你点小忙。 + +有一万种理由来解释为什么要将 Ubuntu 换成主流 Android OS。其中最主要的一个,就是这个系统本身仍然处于非常早期的阶段,针对的目标用户仍然是开发者和爱好者。不管你的理由是什么,要谢谢 bq 提供的工具,让我们能非常轻松地在 BQ Aquaris 上安装 Android OS。 + +下面让我们一起看下在 BQ Aquaris 上安装 Android 需要做哪些事情。 + +### 前提条件 ### + +- 可用的因特网连接,用来下载 Android 出厂固件以及安装刷机工具。 +- USB 数据线 +- 运行 Linux 的电脑 + +本文是基于 Ubuntu 15.10 操作的。但是这些步骤应该也可以应用于其他大多数 Linux 发行版。 + +### 将 BQ Aquaris E4.5 上的 Ubuntu 换成 Android ### + +#### 第一步:下载 Android 固件 #### + +首先是下载可以在 BQ Aquaris E4.5 上运行的 Android 固件。幸运的是我们可以在 bq 的技术支持网站找到。可以从下面的链接直接下载,差不多 650 MB: + +- [下载为 BQ Aquaris E4.5 制作的 Android][1] + +是的,这个版本还支持 OTA 自动升级。目前,固件版本是 2.0.1,基于 Android Lolipop 开发。过一段时间,应该就会放出基于 Marshmallow 的新版本,上边的链接可能就无效了。 + +我建议去[ bq 的技术支持网站][2]下载最新的固件。 + +下载完成后解压。在解压后的目录里,找到一个名字是 **MT6582_Android_scatter.txt** 的文件。后面将要用到它。 + +#### 第二步:下载刷机工具 #### + +bq 已经提供了自己的刷机工具,Herramienta MTK Flash Tool,可以轻松地给设备安装 Andriod 或者 Ubuntu 系统。你可以从下面的链接下载工具: + +- [下载 MTK Flash Tool][3] + +考虑到刷机工具在以后可能会升级,你总是可以从[bq 技术支持网站][4]上找到最新的版本。 + +下载完后解压。之后应该可以在目录里找到一个叫 **flash_tool** 的可执行文件。我们稍后会用到。 + +#### 第三步:移除冲突的软件包(可选) #### + +如果你正在用最新版本的 Ubuntu 或 基于 Ubuntu 的 Linux 发行版,稍后可能会碰到 “BROM ERROR : S_UNDEFINED_ERROR (1001)” 错误。 + +要避免这个错误,你需要卸载有冲突的软件包。可以使用下面的命令: + + sudo apt-get remove modemmanager + +用下面的命令重启 udev 服务: + + sudo service udev restart + +检查一下内核模块 cdc_acm 可能存在的边际效应,运行下面的命令: + + lsmod | grep cdc_acm + +如果上面命令输出是空,你将需要重新加载一下这个内核模块: + + sudo modprobe cdc_acm + +#### 第四步:准备刷入 Android #### + 
+切换到下载好并解压完成的刷机工具目录(第二步)。请使用命令行来完成,这是因为将要用到 root 权限。 + +假设你保存在下载目录里,使用下面的命令切换目录(为那些不懂如何在命令行下切换目录的朋友考虑)。 + + cd ~/Downloads/SP_Flash* + +然后使用下面的命令以 root 权限启动刷机工具: + + sudo ./flash_tool + +然后你会看到一个像下面的窗口界面。不用在意 Download Agent 区域,它将会被自动填入。只要关心 Scatter-loading 区域。 + +![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-1.jpeg) + +还记得之前第一步里提到的 **MT6582_Android_scatter.txt** 文件吗?这个文本文件就在你第一步中下载的 Android 固件解压后的目录里。点击 Scatter-loading(上图中)然后选中 MT6582_Android_scatter.txt 文件。 + +之后,你将看到类似下面图片里的一些绿色线条: + +![Install-Android-bq-aquaris-Ubuntu-2](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-2.jpeg) + +#### 第五步:刷入 Android #### + +已经差不多了。把你的手机关机,然后通过 USB 线连接到电脑上。 + +在下拉列表里选择 Firmware Upgrade,然后点击那个大的 Download 按钮。 + +![flash Android with Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu.jpeg) + +如果一切顺利,你应该可以在工具下方的状态栏里看到刷机状态: + +![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-3.jpeg) + +当所有过程都完成后,你将看到一个类似这样的提示: + +![Successfully flashed Android on bq qauaris Ubuntu Phone](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-4.jpeg) + +将手机从电脑上移除然后开机。你应该看到屏幕上显示白色并在中间和底部有 AQUARIS 文字,还应该有 “powered by Android” 字样。差不多需要差不多十分钟,你才可以设置和开始使用 Android。 + +注意:如果中间出了什么问题,同时按下电源、音量加、音量减按键可以进入 fast boot 模式。然后再次关机并连接电脑。重复升级固件的过程。应该可以。 + +### 总结 ### + +要感谢厂商提供的工具,让我们可以轻松地 **在 bq Ubuntu 手机上刷 Android**。当然,你可以使用相同的步骤将 Android 替换回 Ubuntu。只是下载的时候选 Ubuntu 固件而不是 Android。 + +希望这篇文章可以帮你将你的 bq 手机上的 Ubuntu 刷成 Android。如果有什么问题或建议,可以在下面留言区里讨论。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/install-android-ubuntu-phone/ + +作者:[Abhishek][a] +译者:[zpl1025](https://github.com/zpl1025) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5_L/2.0.1_20150623-1900_bq-FW.zip
+[2]:http://www.bq.com/gb/support/aquaris-e4-5
+[3]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5/Ubuntu/Web%20version/Web%20version/SP_Flash_Tool_exe_linux_v5.1424.00.zip
+[4]:http://www.bq.com/gb/support/aquaris-e4-5-ubuntu-edition
diff --git a/translated/tech/20151114 How to Setup Drone - a Continuous Integration Service in Linux.md b/translated/tech/20151114 How to Setup Drone - a Continuous Integration Service in Linux.md
new file mode 100644
index 0000000000..077c945b9c
--- /dev/null
+++ b/translated/tech/20151114 How to Setup Drone - a Continuous Integration Service in Linux.md
@@ -0,0 +1,317 @@
+如何在Linux 上配置持续集成服务 - Drone
+==============================================================
+
+如果你对一次又一次的克隆、构建、测试和部署代码感到厌倦了,可以考虑一下持续集成。持续集成(Continuous Integration,CI)是一种软件工程实践:频繁地把新代码提交到代码库,并对其进行构建、测试和部署。CI 帮助我们快速地把新代码集成到已有的代码基线中。如果这个过程是自动化进行的,那么就会提高开发的速度,因为这可以减少开发人员手工构建和测试的时间。[Drone][1] 是一个免费的开源项目,用来提供一个非常棒的持续集成服务的环境,采用了Apache 2.0 协议。它已经集成了很多代码库提供商,比如Github、Bitbucket 以及Google Code,并且它可以从代码库提取代码,使我们可以编译多种语言,包括PHP、Node、Ruby、Go、Dart、Python、C/C++、JAVA 等等。它之所以是一个如此强大的平台,是因为它每次构建都使用了容器和docker 技术,这让用户可以在保证隔离的条件下完全控制他们自己的构建环境。
+
+### 1. 安装 Docker ###
+
+首先,我们要安装docker,因为这是Drone 工作流里最关键的元素。Drone 合理地利用了docker 来构建和测试应用。容器技术提高了应用部署的效率。要安装docker,我们需要在不同的Linux 发行版本运行下面对应的命令,我们这里会说明Ubuntu 14.04 和CentOS 7 两个版本。
+
+#### Ubuntu ####
+
+要在Ubuntu 上安装Docker,我们只需要运行下面的命令。
+
+    # apt-get update
+    # apt-get install docker.io
+
+安装之后我们需要使用`service` 命令重启docker 引擎。
+
+    # service docker restart
+
+然后我们让docker 在系统启动时自动启动。
+
+    # update-rc.d docker defaults
+
+    Adding system startup for /etc/init.d/docker ...
+    /etc/rc0.d/K20docker -> ../init.d/docker
+    /etc/rc1.d/K20docker -> ../init.d/docker
+    /etc/rc6.d/K20docker -> ../init.d/docker
+    /etc/rc2.d/S20docker -> ../init.d/docker
+    /etc/rc3.d/S20docker -> ../init.d/docker
+    /etc/rc4.d/S20docker -> ../init.d/docker
+    /etc/rc5.d/S20docker -> ../init.d/docker
+
+#### CentOS ####
+
+首先,我们要更新机器上已经安装的软件包。我们可以使用下面的命令。
+
+    # sudo yum update
+
+要在CentOS 上安装docker,我们可以简单地运行下面的命令。
+
+    # curl -sSL https://get.docker.com/ | sh
+
+安装好docker 引擎之后,我们只需要简单地使用下面的`systemd` 命令启动docker,因为CentOS 7 的默认init 系统是systemd。
+
+    # systemctl start docker
+
+然后我们要让docker 在系统启动时自动启动。
+
+    # systemctl enable docker
+
+    ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service'
+
+### 2. 安装 SQlite 驱动 ###
+
+Drone 默认使用SQLite3 数据库服务器来保存数据和信息。它会在/var/lib/drone/ 下自动创建名为drone.sqlite 的数据库,并处理数据库模式的创建和迁移。要安装SQLite3,我们要完成以下几步。
+
+#### Ubuntu 14.04 ####
+
+因为SQLite3 存在于Ubuntu 14.04 的默认软件库中,我们只需要简单地使用apt 命令安装它。
+
+    # apt-get install libsqlite3-dev
+
+#### CentOS 7 ####
+
+要在CentOS 7 上安装,需要使用下面的yum 命令。
+
+    # yum install sqlite-devel
+
+### 3. 安装 Drone ###
+
+最后,我们已经装好了依赖的软件,现在离安装Drone 更进一步了。在这一步里我们只需简单地从官方链接下载对应的二进制软件包,然后使用默认软件包管理器安装Drone。
+
+#### Ubuntu ####
+
+我们将使用wget 从官方的[Debian 文件下载链接][2]下载drone 的debian 软件包。下面就是下载命令。
+
+    # wget downloads.drone.io/master/drone.deb
+
+    Resolving downloads.drone.io (downloads.drone.io)... 54.231.48.98
+    Connecting to downloads.drone.io (downloads.drone.io)|54.231.48.98|:80... connected.
+    HTTP request sent, awaiting response... 200 OK
+    Length: 7722384 (7.4M) [application/x-debian-package]
+    Saving to: 'drone.deb'
+    100%[======================================>] 7,722,384 1.38MB/s in 17s
+    2015-11-06 14:09:28 (456 KB/s) - 'drone.deb' saved [7722384/7722384]
+
+下载好之后,我们将使用dpkg 软件包管理器安装它。
+
+    # dpkg -i drone.deb
+
+    Selecting previously unselected package drone.
+    (Reading database ... 28077 files and directories currently installed.)
+    Preparing to unpack drone.deb ...
+    Unpacking drone (0.3.0-alpha-1442513246) ...
+    Setting up drone (0.3.0-alpha-1442513246) ...
+    Your system ubuntu 14: using upstart to control Drone
+    drone start/running, process 9512
+
+#### CentOS ####
+
+在CentOS 机器上我们要使用wget 命令从[下载链接][3]下载RPM 包。
+
+    # wget downloads.drone.io/master/drone.rpm
+
+    --2015-11-06 11:06:45-- http://downloads.drone.io/master/drone.rpm
+    Resolving downloads.drone.io (downloads.drone.io)... 54.231.114.18
+    Connecting to downloads.drone.io (downloads.drone.io)|54.231.114.18|:80... connected.
+    HTTP request sent, awaiting response... 200 OK
+    Length: 7763311 (7.4M) [application/x-redhat-package-manager]
+    Saving to: ‘drone.rpm’
+    100%[======================================>] 7,763,311 1.18MB/s in 20s
+    2015-11-06 11:07:06 (374 KB/s) - ‘drone.rpm’ saved [7763311/7763311]
+
+然后我们使用yum 安装rpm 包。
+
+    # yum localinstall drone.rpm
+
+### 4. 配置端口 ###
+
+安装完成之后,要让它正常工作,我们还需要先进行配置。drone 的配置文件是**/etc/drone/drone.toml**。默认情况下drone 的web 界面使用的是80 端口,也就是http 的默认端口;如果你想修改它,可以按照下面所示修改配置文件里server 块对应的值。
+
+    [server]
+    port=":80"
+
+### 5. 
集成 Github ###
+
+为了运行Drone,我们必须设置至少一个与GitHub、GitHub 企业版、Gitlab、Gogs 或Bitbucket 关联的集成点。在本文里我们只集成了github,但是如果我们要集成其他平台,可以相应地修改配置文件。为了集成github,我们需要在[github setting] 里创建一个新的应用。
+
+![Registering App Github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-app-github.png)
+
+要创建一个应用,我们需要在`New Application` 页面点击`Register`,然后如下所示填表。
+
+![Registering OAuth app github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-OAuth-app-github.png)
+
+我们应该保证在应用的配置项里设置了**授权回调链接**,链接看起来像`http://drone.linoxide.com/api/auth/github.com`。然后我们点击注册应用。所有都做好之后,我们会看到需要在Drone 配置文件里配置的客户端ID 和客户端密钥。
+
+![Client ID and Secret Token](http://blog.linoxide.com/wp-content/uploads/2015/11/client-id-secret-token.png)
+
+这些都完成之后,我们需要使用文本编辑器编辑drone 配置文件,比如使用下面的命令。
+
+    # nano /etc/drone/drone.toml
+
+然后我们会在drone 的配置文件里面找到`[github]` 部分,紧接着是下面所示的配置内容
+
+    [github]
+    client="3dd44b969709c518603c"
+    secret="4ee261abdb431bdc5e96b19cc3c498403853632a"
+    # orgs=[]
+    # open=false
+
+![Configuring Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-github-drone-e1446835124465.png)
+
+### 6. 配置 SMTP 服务器 ###
+
+如果我们想让drone 使用email 发送通知,那么我们需要在SMTP 配置里面设置我们的SMTP 服务器。如果我们已经有了一个SMTP 服务器,那就只需要简单地使用它的配置就行了,但是因为我们没有SMTP 服务器,我们需要安装一个MTA,比如Postfix,然后在drone 配置文件里配置好SMTP。
+
+#### Ubuntu ####
+
+在Ubuntu 里使用下面的apt 命令安装postfix。
+
+    # apt-get install postfix
+
+#### CentOS ####
+
+在CentOS 里使用下面的yum 命令安装postfix。
+
+    # yum install postfix
+
+安装好之后,我们需要编辑我们的postfix 配置文件。
+
+    # nano /etc/postfix/main.cf
+
+然后我们要把myhostname 的值替换为我们自己的FQDN,比如drone.linoxide.com。
+
+    myhostname = drone.linoxide.com
+
+现在开始配置drone 配置文件里的SMTP 部分。
+
+    # nano /etc/drone/drone.toml
+
+找到`[smtp]` 部分补充上下面的内容。
+
+    [smtp]
+    host = "drone.linoxide.com"
+    port = "587"
+    from = "root@drone.linoxide.com"
+    user = "root"
+    pass = "password"
+
+![Configuring SMTP Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-smtp-drone.png)
+
+注意:这里的**user** 和 **pass** 参数强烈推荐替换成某个具体用户的配置。
+
+### 7. 
配置 Worker ###
+
+如我们所知,drone 利用了docker 来完成构建、测试任务,所以我们需要把docker 配置为drone 的worker。要完成这些需要修改drone 配置文件里的`[worker]` 部分。
+
+    # nano /etc/drone/drone.toml
+
+然后取消底下几行的注释并且补充上下面的内容。
+
+    [worker]
+    nodes=[
+        "unix:///var/run/docker.sock",
+        "unix:///var/run/docker.sock"
+    ]
+
+这里我们只设置了两个节点,这意味着上面的配置文件只能同时执行2 个构建操作。要提高并发性可以增大节点的数量。
+
+    [worker]
+    nodes=[
+        "unix:///var/run/docker.sock",
+        "unix:///var/run/docker.sock",
+        "unix:///var/run/docker.sock",
+        "unix:///var/run/docker.sock"
+    ]
+
+使用上面的配置,drone 会通过本地的docker 守护进程同时构建4 个任务。
+
+### 8. 重启 Drone ###
+
+最后,当所有的安装和配置都准备好之后,我们现在要在本地的Linux 机器上启动drone 服务器。
+
+#### Ubuntu ####
+
+因为Ubuntu 14.04 使用了sysvinit 作为默认的init 系统,所以只需要简单执行下面的service 命令就可以启动drone 了。
+
+    # service drone restart
+
+要让drone 在系统启动时也自动运行,需要运行下面的命令。
+
+    # update-rc.d drone defaults
+
+#### CentOS ####
+
+因为CentOS 7 使用systemd 作为init 系统,所以只需要运行下面的systemd 命令就可以重启drone。
+
+    # systemctl restart drone
+
+要让drone 自动运行只需要运行下面的命令。
+
+    # systemctl enable drone
+
+### 9. 添加防火墙例外 ###
+
+众所周知,drone 默认使用了80 端口而我们又没有修改它,所以我们需要配置防火墙程序开放80 端口(http),让其他机器可以通过网络连接。
+
+#### Ubuntu 14.04 ####
+
+iptables 是最流行的防火墙程序,并且Ubuntu 默认安装了它。我们需要修改iptables 开放80 端口,这样我们才能让drone 的web 界面在网络上被大家访问。
+
+    # iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
+    # /etc/init.d/iptables save
+
+#### CentOS 7 ####
+
+因为CentOS 7 默认安装了systemd,它使用firewalld 作为防火墙程序。为了在firewalld 上打开80 端口(http 服务),我们需要执行下面的命令。
+
+    # firewall-cmd --permanent --add-service=http
+
+    success
+
+    # firewall-cmd --reload
+
+    success
+
+### 10. 
访问web 界面 ###
+
+现在我们将在我们最喜欢的浏览器上打开drone 的web 界面。要完成这些,我们要把浏览器指向运行drone 的服务器。因为drone 默认使用80 端口而我们又没有修改过,所以我们只需要在浏览器里根据我们的配置输入`http://ip-address/` 或 `http://drone.linoxide.com` 就行了。在我们正确地完成了上述操作后,就可以看到登陆界面了。
+
+![Login Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/login-github-drone-e1446834688394.png)
+
+因为在上面的步骤里配置了Github,我们现在只需要简单地选择github,然后完成应用授权步骤,这些完成后我们就可以进入工作台了。
+
+![Drone Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/11/drone-dashboard.png)
+
+这里它会同步我们在github 上的代码库,然后询问我们要在drone 上构建哪个代码库。
+
+![Activate Repository](http://blog.linoxide.com/wp-content/uploads/2015/11/activate-repository-e1446835574595.png)
+
+这一步完成后,它会要求我们在代码库里添加一个名为`.drone.yml` 的新文件,并在这个文件里定义构建的过程和配置项,比如使用哪个docker 镜像,执行哪些命令和脚本来编译,等等。
+
+我们按照下面的内容来配置我们的`.drone.yml`。
+
+    image: python
+    script:
+    - python helloworld.py
+    - echo "Build has been completed."
+
+这一步完成后,我们就可以使用drone 应用里的YAML 格式的配置文件来构建我们的应用了。所有对代码库的提交和改变此时都会同步到这个仓库。一旦提交完成了,drone 就会自动开始构建。
+
+![Building Application Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/building-application-drone.png)
+
+所有操作都完成后,我们就能在终端看到构建的结果了。
+
+![Build Success Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/build-success-drone.png)
+
+### 总结 ###
+
+在本文中我们学习了如何安装一个可以工作的使用drone 的持续集成平台。如果我们愿意,甚至可以直接使用drone.io 官方提供的服务,并根据自己的需求选择免费服务或者收费服务。drone 通过漂亮的web 界面和强大的功能改变了持续集成的世界。它可以集成很多第三方应用和部署平台。如果你有任何问题、建议,可以直接反馈给我们,谢谢。
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/setup-drone-continuous-integration-linux/
+
+作者:[Arun Pyasi][a]
+译者:[ezio](https://github.com/oska874)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arunp/
+[1]:https://drone.io/
+[2]:http://downloads.drone.io/master/drone.deb
+[3]:http://downloads.drone.io/master/drone.rpm
+[4]:https://github.com/settings/developers
diff --git 
a/translated/tech/20151116 Linux FAQs with Answers--How to install Node.js on Linux.md b/translated/tech/20151116 Linux FAQs with Answers--How to install Node.js on Linux.md
new file mode 100644
index 0000000000..8ccca22632
--- /dev/null
+++ b/translated/tech/20151116 Linux FAQs with Answers--How to install Node.js on Linux.md
@@ -0,0 +1,92 @@
+Linux 有问必答 - 如何在 Linux 上安装 Node.js
+================================================================================
+> **问题**: 如何在你的 Linux 发行版上安装 Node.js?
+
+[Node.js][1] 是建立在谷歌 V8 JavaScript 引擎之上的服务器端软件平台。在构建高性能的服务器端应用程序方面,Node.js 已是 JavaScript 中的首选方案,而让它在服务器后台开发中如此流行的,正是围绕 Node.js 的库和应用程序的[庞大生态系统][2]。Node.js 自带一个叫做 npm 的命令行工具,你可以用它轻松地从 npm 在线仓库安装 Node.js 的库和应用程序,并进行版本控制和依赖管理。
+
+在本教程中,我将介绍 **如何在主流 Linux 发行版上安装 Node.js,包括 Debian、Ubuntu、Fedora 和 CentOS** 。
+
+Node.js 在一些发行版上有预构建的程序包(如 Fedora 或 Ubuntu),而在其他发行版上你需要从源码安装。由于 Node.js 发展比较快,建议从源码安装最新版而不是安装一个过时的预构建程序包。最新的 Node.js 自带 npm(Node.js 的包管理器),让你可以轻松地安装 Node.js 的外部模块。
+
+### 在 Debian 上安装 Node.js ###
+
+从 Debian 8 (Jessie) 开始,Node.js 已被纳入官方软件仓库。因此,你可以使用如下方式安装它:
+
+    $ sudo apt-get install npm
+
+在 Debian 7 (Wheezy) 以前的版本中,你需要使用下面的方式从源码安装:
+
+    $ sudo apt-get install python g++ make
+    $ wget http://nodejs.org/dist/node-latest.tar.gz
+    $ tar xvfvz node-latest.tar.gz
+    $ cd node-v0.10.21 (replace a version with your own)
+    $ ./configure
+    $ make
+    $ sudo make install
+
+### 在 Ubuntu 或 Linux Mint 中安装 Node.js ###
+
+Node.js 被包含在 Ubuntu(13.04 及更高版本)中。因此,安装非常简单。以下方式将安装 Node.js 和 npm。
+
+    $ sudo apt-get install npm
+    $ sudo ln -s /usr/bin/nodejs /usr/bin/node
+
+而 Ubuntu 中的 Node.js 可能版本比较老,你可以从 [其 PPA][3] 中安装最新的版本。
+
+    $ sudo apt-get install python-software-properties python g++ make
+    $ sudo add-apt-repository -y ppa:chris-lea/node.js
+    $ sudo apt-get update
+    $ sudo apt-get install npm
+
+### 在 Fedora 中安装 Node.js ###
+
+Node.js 被包含在 Fedora 的 base 仓库中。因此,你可以在 Fedora 中用 yum 安装 Node.js。
+
+    $ sudo yum install npm
+
+如果你想安装 Node.js 的最新版本,可以按照以下步骤使用源码来安装。
+
+    $ sudo yum 
groupinstall 'Development Tools'
+    $ wget http://nodejs.org/dist/node-latest.tar.gz
+    $ tar xvfvz node-latest.tar.gz
+    $ cd node-v0.10.21 (replace a version with your own)
+    $ ./configure
+    $ make
+    $ sudo make install
+
+### 在 CentOS 或 RHEL 中安装 Node.js ###
+
+要在 CentOS 中使用 yum 包管理器安装 Node.js,首先启用 EPEL 软件库,然后运行:
+
+    $ sudo yum install npm
+
+如果你想在 CentOS 中安装最新版的 Node.js,其安装步骤和在 Fedora 中的相同。
+
+### 在 Arch Linux 上安装 Node.js ###
+
+Node.js 在 Arch Linux 的社区仓库中可以找到。所以安装很简单,只要运行:
+
+    $ sudo pacman -S nodejs npm
+
+### 检查 Node.js 的版本 ###
+
+一旦你安装了 Node.js,你可以使用如下所示的方法检查 Node.js 的版本。
+
+    $ node --version
+
+--------------------------------------------------------------------------------
+
+via: http://ask.xmodulo.com/install-node-js-linux.html
+
+作者:[Dan Nanni][a]
+译者:[strugglingyou](https://github.com/strugglingyou)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ask.xmodulo.com/author/nanni
+[1]:http://nodejs.org/
+[2]:https://www.npmjs.com/
+[3]:https://launchpad.net/~chris-lea/+archive/node.js
diff --git a/translated/tech/20151117 Linux 101--Get the most out of Systemd.md b/translated/tech/20151117 Linux 101--Get the most out of Systemd.md
deleted file mode 100644
index 1a382479ec..0000000000
--- a/translated/tech/20151117 Linux 101--Get the most out of Systemd.md
+++ /dev/null
@@ -1,171 +0,0 @@
-Linux 101:最有效地使用 Systemd
-================================================================================
-干嘛要这么做?
- -- 理解现代 Linux 发行版中的显著变化; -- 看看 Systemd 是如何取代 SysVinit 的; -- 处理好*单元* (unit)和新的 journal 日志。 - -吐槽邮件,人身攻击,死亡威胁——Lennart Poettering,Systemd 的作者,对收到这些东西早就习以为常了。这位 Red Hat 公司的员工最近在 Google+ 上怒斥 FOSS 社区([http://tinyurl.com/poorlennart][1])的本质,悲痛且失望地表示:“那真是个令人恶心的地方”。他着重指出 Linus Torvalds 在邮件列表上言辞刻薄的帖子,并谴责这位内核的领导者为在线讨论定下基调,并使得人身攻击及贬抑之辞成为常态。 - -但为何 Poettering 会遭受如此多的憎恨?为何就这么个搞搞开源软件的人要忍受这等愤怒?答案就在于他的软件的重要性。如今大多数发行版中,Systemd 是 Linux 内核发起的第一个程序,并且它还扮演多种角色。它会启动系统服务,处理用户登陆,每隔特定的时间执行一些任务,还有很多很多。它在不断地成长,并逐渐成为 Linux 的某种“基础系统”——提供系统启动和发行版维护所需的所有工具。 - -如今,在以下几点上 Systemd 颇具争议:它逃避了一些确立好的 Unix 传统,例如纯文本的日志文件;它被看成是个“大一统”的项目,试图接管一切;它还是我们这个操作系统的支柱的重要革新。然而大多数主流发行版已经接受了(或即将接受)它,因此它就保留了下来。而且它确实是有好处的:更快地启动,更简单地管理那些有依赖的服务程序,提供强大且安全的日志系统等。 - -因此在这篇教程中,我们将探索 Systemd 的特性,并向您展示如何最有效地利用这些特性。即便您此刻并不是这款软件的粉丝,读完本文后您至少可以更加了解和适应它。 - -![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/eating-large.jpg) - -**这部没正经的动画片来自[http://tinyurl.com/m2e7mv8][2],它把 Systemd 塑造成一只狂暴的动物,吞噬它路过的一切。大多数批评者的言辞可不像这只公仔一样柔软。** - -### 启动及服务 ### - -大多数主流发行版要么已经采用 Systemd,要么即将在下个发布中采用(如 Debian 和 Ubuntu)。在本教程中,我们使用 Fedora 21——该发行版已经是 Systemd 的优秀实验场地——的一个预览版进行演示,但不论您用哪个发行版,要用到的命令和注意事项都应该是一样的。这是 Systemd 的一个加分点:它消除了不同发行版之间许多细微且琐碎的区别。 - -在终端中输入 **ps ax | grep systemd**,看到第一行,其中的数字 **1** 表示它的进程号是1,也就是说它是 Linux 内核发起的第一个程序。因此,内核一旦检测完硬件并组织好了内存,就会运行 **/usr/lib/systemd/systemd** 可执行程序,这个程序会按顺序依次发起其他程序。(在还没有 Systemd 的日子里,内核会去运行 **/sbin/init**,随后这个程序会在名为 SysVinit 的系统中运行其余的各种启动脚本。) - -Systemd 的核心是一个叫*单元* (unit)的概念,它是一些存有关于服务(在运行在后台的程序),设备,挂载点,和操作系统其他方面信息的配置文件。Systemd 的其中一个目标就是简化这些事物之间的相互作用,因此如果你有程序需要在某个挂载点被创建或某个设备被接入后开始运行,Systemd 可以让这一切正常运作起来变得相当容易。(在没有 Systemd 的日子里,要使用脚本来把这些事情调配好,那可是相当丑陋的。)要列出您 Linux 系统上的所有单元,输入以下命令: - - systemctl list-unit-files - -现在,**systemctl** 是与 Systemd 交互的主要工具,它有不少选项。在单元列表中,您会注意到这儿有一些格式:被使能的单元显示为绿色,被禁用的显示为红色。标记为“static”的单元不能直接启用,它们是其他单元所依赖的对象。若要限制输出列表只包含服务,使用以下命令: - - systemctl list-unit-files --type=service - -注意,一个单元显示为“enabled”,并不等于对应的服务正在运行,而只能说明它可以被开启。要获得某个特定服务的信息,以 GDM (the Gnome Display Manager) 
为例,输入以下命令: - - systemctl status gdm.service - -这条命令提供了许多有用的信息:一段人类可读的服务描述,单元配置文件的位置,启动的时间,进程号,以及它所从属的 CGroups (用以限制各组进程的资源开销)。 - -如果您去查看位于 **/usr/lib/systemd/system/gdm.service** 的单元配置文件,您可以看到多种选项,包括要被运行的二进制文件(“ExecStart”那一行),相冲突的其他单元(即不能同时进入运行的单元),以及需要在本单元执行前进入运行的单元(“After”那一行)。一些单元有附加的依赖选项,例如“Requires”(必要的依赖)和“Wants”(可选的依赖)。 - -此处另一个有趣的选项是: - - Alias=display-manager.service - -当您启动 **gdm.service** 后,您将可以通过 **systemctl status display-manager.service** 来查看它的状态。当您知道有*显示管理程序* (display manager)在运行并想对它做点什么,但您不关心那究竟是 GDM,KDM,XDM 还是什么别的显示管理程序时,这个选项会非常有用。 - -![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/status-large.jpg) - -**使用 systemctl status 命令后面跟一个单元名,来查看对应的服务有什么情况。** - -### “目标”锁定 ### - -如果您在 **/usr/lib/systemd/system** 目录中输入 **ls** 命令,您将看到各种以 **.target** 结尾的文件。一个*启动目标* (target)是一种将多个单元聚合在一起以致于将它们同时启动的方式。例如,对大多数类 Unix 操作系统而言有一种“多用户”状态,意思是系统已被成功启动,后台服务正在运行,并且已准备好让一个或多个用户登陆并工作——至少在文本模式下。(其他状态包括用于进行管理工作的单用户状态,以及用于机器关机的重启状态。) - -如果您打开 **multi-user.target** 文件一探究竟,您可能期待看到的是一个要被启动的单元列表。但您会发现这个文件内部几乎空空如也——其实,一个服务会通过 **WantedBy** 选项让自己成为启动目标的依赖。因此如果您去打开 **avahi-daemon.service**, **NetworkManager.service** 及其他 **.service** 文件看看,您将在 Install 段看到这一行: - - WantedBy=multi-user.target - -因此,切换到多用户启动目标会使能那些包含上述语句的单元。还有其他一些启动目标可用(例如 **emergency.target** 用于一个紧急情况使用的 shell,以及 **halt.target** 用于机器关机),您可以用以下方式轻松地在它们之间切换: - - systemctl isolate emergency.target - -在许多方面,这些都很像 SysVinit 中的*运行级* (runlevel),如文本模式的 **multi-user.target** 类似于第3运行级,**graphical.target** 类似于第5运行级,**reboot.target** 类似于第6运行级,诸如此类。 - -![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/unit-large.jpg) - -**与传统的脚本相比,单元配置文件也许看起来很陌生,但并不难以理解。** - -### 开启与停止 ### - -现在您也许陷入了沉思:我们已经看了这么多,但仍没看到如何停止和开启服务!这其实是有原因的。从外部看,Systemd 也许很复杂,像野兽一般难以驾驭。因此在您开始摆弄它之间,有必要从宏观的角度看看它是如何工作的。实际用来管理服务的命令非常简单: - - systemctl stop cups.service - systemctl start cups.service - -(若某个单元被禁用了,您可以先通过 **systemctl enable** 加该单元名的方式将其使能。这种做法会为该单元创建一个符号链接,并将其放置在当前启动目标的 .wants 目录下,这些 .wants 目录在**/etc/systemd/system** 文件夹中。) - 
-还有两个有用的命令是 **systemctl restart** 和 **systemctl reload**,后面接单元名。后者要求单元重新加载它的配置文件。Systemd 的绝大部分都有良好的文档,因此您可以查看手册 (**man systemctl**) 了解每条命令的细节。 - -> ### 定时器单元:取代 Cron ### -> -> 除了系统初始化和服务管理,Systemd 还染指其他方面。在很大程度上,它能够完成 **cron** 的工作,而且可以说是以更灵活的方式(并带有更易读的语法)。**cron** 是一个以规定时间间隔执行任务的程序——例如清楚临时文件,刷新缓存等。 -> -> 如果您再次进入 **/usr/lib/systemd/system** 目录,您会看到那儿有多个 **.timer** 文件。用 **less** 来查看这些文件,您会发现它们与 **.service** 和 **.target** 文件有着相似的结构,而区别在于 **[Timer]** 段。举个例子: -> -> [Timer] -> OnBootSec=1h -> OnUnitActiveSec=1w -> -> **OnBootSec** 选项告诉 Systemd 在系统启动一小时后启动这个单元。第二个选项的意思是:自那以后每周启动这个单元一次。关于定时器有大量选项您可以设置——输入 **man systemd.time** 查看完整列表。 -> -> Systemd 的时间精度默认为一分钟。也就是说,它会在设定时刻的一分钟内运行单元,但不一定精确到那一秒。这么做是基于电源管理方面的原因,但如果您需要一个没有任何延时且精确到毫秒的定时器,您可以添加以下一行: -> -> AccuracySec=1us -> -> 另外, **WakeSystem** 选项(可以被设置为 true 或 false)决定了定时器是否可以唤醒处于休眠状态的机器。 - -![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/systemd_gui-large.jpg) - -**存在一个 Systemd 的图形界面程序,即便它已有多年未被积极维护。** - -### 日志文件:向 journald 问声好 ### - -Systemd 的第二个主要部分是 journal 。这是个日志系统,类似于 syslog 但也有些显著区别。如果您是个 Unix 日志管理模式的 粉丝,准备好热血沸腾吧:这是个二进制日志,因此您不能使用常规的命令行文本处理工具来解析它。这个设计决定不出意料地在网上引起了激烈的争论,但它的确有些优点。例如,日志可以被更系统地组织,带有更多元数据,因此可以更容易地根据可执行文件名和进程号等过滤出信息。 - -要查看整个 journal,输入以下命令: - - journalctl - -像许多其他的 Systemd 命令一样,该命令将输出通过管道的方式引向 **less** 程序,因此您可以使用空格键向下滚动,“/”(斜杠)键查找,以及其他熟悉的快捷键。您也能在此看到少许颜色,像红色的警告及错误信息。 - -以上命令会输出很多信息。为了限制其只输出当前启动的消息,使用如下命令: - - journalctl -b - -这就是 Systemd 大放异彩的地方!您想查看自上次启动以来的全部消息吗?试试 **journalctl -b -1** 吧。再上一次的?用 **-2** 替换 **-1** 吧。那自某个具体时间,例如2014年10月24日16:38以来的呢? - - journalctl -b --since=”2014-10-24 16:38” - -即便您对二进制日志感到遗憾,那依然是个有用的特性,并且对许多系统管理员来说,构建类似的过滤器比起写正则表达式而言容易多了。 - -我们已经可以根据特定的时间来准确查找日志了,那可以根据特定程序吗?对单元而言,试试这个: - - journalctl -u gdm.service - -(注意:这是个查看 X server 产生的日志的好办法。)那根据特定的进程号? 
- - journalctl _PID=890 - -您甚至可以请求只看某个可执行文件产生的消息: - - journalctl /usr/bin/pulseaudio - -若您想将输出的消息限制在某个优先级,可以使用 **-p** 选项。该选项参数为 0 的话只会显示紧急消息(也就是说,是时候向 **\$DEITY** 祈求保佑了),为 7 的话会显示所有消息,包括调试消息。请查看手册 (**man journalctl**) 获取更多关于优先级的信息。 - -值得指出的是,您也可以将多个选项结合在一起,若想查看在当前启动中由 GDM 服务输出的优先级数小于等于 3 的消息,请使用下述命令: - - journalctl -u gdm.service -p 3 -b - -最后,如果您仅仅想打开一个随 journal 持续更新的终端窗口,就像在没有 Systemd 时使用 tail 命令实现的那样,输入 **journalctl -f** 就好了。 - -![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/journal-large.jpg) - -**二进制日志并不流行,但 journal 的确有它的优点,如非常方便的信息查找及过滤。** - -> ### 没有 Systemd 的生活?### -> -> 如果您就是完全不能接收 Systemd,您仍然有一些主流发现版中的选择。尤其是 Slackware,作为历史最为悠久的发行版,目前还没有做出改变,但它的主要开发者并没有将其从未来规划中移除。一些不出名的发行版也在坚持使用 SysVinit 。 -> -> 但这又将持续多久呢?Gnome 正越来越依赖于 Systemd,其他的主流桌面环境也会步其后尘。这也是引起 BSD 社区一阵恐慌的原因:Systemd 与 Linux 内核紧密相连,导致在某种程度上,桌面环境正变得越来越不可移植。一种折中的解决方案也许会以 Uselessd ([http://uselessd.darknedgy.net][3]) 的形式到来:一种裁剪版的 Systemd,纯粹专注于启动和监控进程,而不消耗整个基础系统。 -> -> ![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/gentoo-large.jpg) -> -> 若您不喜欢 Systemd,可以尝试一下 Gentoo 发行版,它将 Systemd 作为初始化工具的一种选择,但并不强制用户使用 Systemd。 - --------------------------------------------------------------------------------- - -via: http://www.linuxvoice.com/linux-101-get-the-most-out-of-systemd/ - -作者:[Mike Saunders][a] -译者:[Ricky-Gong](https://github.com/Ricky-Gong) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxvoice.com/author/mike/ -[1]:http://tinyurl.com/poorlennart -[2]:http://tinyurl.com/m2e7mv8 -[3]:http://uselessd.darknedgy.net/ diff --git a/translated/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md b/translated/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md new file mode 100644 index 0000000000..c42b278787 --- /dev/null +++ b/translated/tech/20151119 Going Beyond Hello World Containers is Hard Stuff.md @@ -0,0 +1,334 @@ +要超越Hello World 容器是件困难的事情 
+================================================================================ + +在[我的上一篇文章里][1], 我介绍了Linux 容器背后的技术的概念。我写了我知道的一切。容器对我来说也是比较新的概念。我写这篇文章的目的就是鼓励我真正的来学习这些东西。 + +我打算在使用中学习。首先实践,然后上手并记录下我是怎么走过来的。我假设这里肯定有很多想"Hello World" 这种类型的知识帮助我快速的掌握基础。然后我能够更进一步,构建一个微服务容器或者其它东西。 + +我的意思是还会比着更难吗,对吧? + +错了。 + +可能对某些人来说这很简单,因为他们会耗费大量的时间专注在操作工作上。但是对我来说实际上是很困难的,可以从我在Facebook 上的状态展示出来的挫折感就可以看出了。 + +但是还有一个好消息:我最终让它工作了。而且他工作的还不错。所以我准备分享向你分享我如何制作我的第一个微服务容器。我的痛苦可能会节省你不少时间呢。 + +如果你曾经发现或者从来都没有发现自己处在这种境地:像我这样的人在这里解决一些你不需要解决的问题。 + +让我们开始吧。 + + +### 一个缩略图微服务 ### + +我设计的微服务在理论上很简单。以JPG 或者PNG 格式在HTTP 终端发布一张数字照片,然后获得一个100像素宽的缩略图。 + +下面是它实际的效果: + +![container-diagram-0](https://deis.com/images/blog-images/containers-hard-0.png) + +我决定使用NodeJS 作为我的开发语言,使用[ImageMagick][2] 来转换缩略图。 + +我的服务的第一版的逻辑如下所示: + +![container-diagram-1](https://deis.com/images/blog-images/containers-hard-1.png) + +我下载了[Docker Toolbox][3],用它安装了Docker 的快速启动终端。Docker 快速启动终端使得创建容器更简单了。终端会启动一个装好了Docker 的Linux 虚拟机,它允许你在一个终端里运行Docker 命令。 + +虽然在我的例子里,我的操作系统是Mac OS X。但是Windows 下也有相同的工具。 + +我准备使用Docker 快速启动终端里为我的微服务创建一个容器镜像,然后从这个镜像运行容器。 + +Docker 快速启动终端就运行在你使用的普通终端里,就像这样: + +![container-diagram-2](https://deis.com/images/blog-images/containers-hard-2.png) + +### 第一个小问题和第一个大问题### + +所以我用NodeJS 和ImageMagick 瞎搞了一通然后让我的服务在本地运行起来了。 + +然后我创建了Dockerfile,这是Docker 用来构建容器的配置脚本。(我会在后面深入介绍构建和Dockerfile) + +这是我运行Docker 快速启动终端的命令: + + $ docker build -t thumbnailer:0.1 + +获得如下回应: + + docker: "build" requires 1 argument. + +呃。 + +我估摸着过了15分钟:我忘记了在末尾参数输入一个点`.`。 + +正确的指令应该是这样的: + + $ docker build -t thumbnailer:0.1 . + + +但是这不是我最后一个问题。 + +我让这个镜像构建好了,然后我Docker 快速启动终端输入了[`run` 命令][4]来启动容器,名字叫`thumbnailer:0.1`: + + $ docker run -d -p 3001:3000 thumbnailer:0.1 + +参数`-p 3001:3000` 让NodeJS 微服务在Docker 内运行在端口3000,而在主机上则是3001。 + +到目前卡起来都很好,对吧? 
+
+错了。事情马上就要变糟了。
+
+我在 Docker 快速启动终端里用 `docker-machine` 命令查询运行 Docker 的虚拟机的 IP 地址:
+
+    $ docker-machine ip default
+
+这条命令返回默认虚拟机(即运行 docker 的那台虚拟机)的 IP 地址。对于我来说,这个 IP 地址是 192.168.99.100。
+
+我打开网页 http://192.168.99.100:3001/ ,看到了我创建的上传图片的网页:
+
+![container-diagram-3](https://deis.com/images/blog-images/containers-hard-3.png)
+
+我选择了一个文件,然后点击上传图片的按钮。
+
+但是它并没有工作。
+
+终端告诉我它无法找到我的微服务需要的 `/upload` 目录。
+
+要知道,到这时我已经在这上面耗费了将近一天的时间:一部分是浪费掉的,一部分用来研究问题。我此时有了一些挫折感。
+
+然后灵光一闪,我想起来微服务不应该自己做任何数据持久化的工作!保存数据应该是另一个服务的工作。
+
+所以容器找不到 `/upload` 目录的原因到底是什么?这个问题的根本在于我的微服务在基础设计上就有问题。
+
+让我们看看另一幅图:
+
+![container-diagram-4](https://deis.com/images/blog-images/containers-hard-4.png)
+
+我为什么要把文件保存到磁盘?微服务按理来说是很快的。为什么不让我的全部工作都在内存里完成?使用内存缓冲不仅可以解决“找不到目录”这个问题,还可以提高我的应用的性能。
+
+这就是我后来的做法。下面是我的计划:
+
+![container-diagram-5](https://deis.com/images/blog-images/containers-hard-5.png)
+
+这是我用 NodeJS 写的在内存中工作、生成缩略图的代码:
+
+    // Bind to the packages
+    var express = require('express');
+    var router = express.Router();
+    var path = require('path'); // used for file path
+    var im = require("imagemagick");
+
+    // Simple get that allows you test that you can access the thumbnail process
+    router.get('/', function (req, res, next) {
+        res.status(200).send('Thumbnailer processor is up and running');
+    });
+
+    // This is the POST handler. It will take the uploaded file and make a thumbnail from the
+    // submitted byte array.
I know, it's not rocket science, but it serves a purpose
+    router.post('/', function (req, res, next) {
+        req.pipe(req.busboy);
+        req.busboy.on('file', function (fieldname, file, filename) {
+            var ext = path.extname(filename)
+
+            // Make sure that only png and jpg is allowed
+            if(ext.toLowerCase() != '.jpg' && ext.toLowerCase() != '.png'){
+                res.status(406).send("Service accepts only jpg or png files");
+            }
+
+            var bytes = [];
+
+            // put the bytes from the request into a byte array
+            file.on('data', function(data) {
+                for (var i = 0; i < data.length; ++i) {
+                    bytes.push(data[i]);
+                }
+                console.log('File [' + fieldname + '] got bytes ' + bytes.length + ' bytes');
+            });
+
+            // Once the request is finished pushing the file bytes into the array, put the bytes in
+            // a buffer and process that buffer with the imagemagick resize function
+            file.on('end', function() {
+                var buffer = new Buffer(bytes,'binary');
+                console.log('Bytes got ' + bytes.length + ' bytes');
+
+                //resize
+                im.resize({
+                    srcData: buffer,
+                    height: 100
+                }, function(err, stdout, stderr){
+                    if (err){
+                        throw err;
+                    }
+                    // get the extension without the period
+                    var typ = path.extname(filename).replace('.','');
+                    res.setHeader("content-type", "image/" + typ);
+                    res.status(200);
+                    // send the image back as a response
+                    res.send(new Buffer(stdout,'binary'));
+                });
+            });
+        });
+    });
+
+    module.exports = router;
+
+好了,回到正轨:这段代码已经可以在我的本地机器上正常工作了。我该去休息了。
+
+但是,这还只是把这个微服务当作一个普通的 Node 应用在本地测试的情况...
+
+![Containers Hard](https://deis.com/images/blog-images/containers-hard-6.png)
+
+它工作得很好。现在我要做的就是让它在容器里工作。
+
+第二天我起床后喝了点咖啡,然后开始创建镜像,这次没有忘记末尾那个 `.`!
+
+    $ docker build -t thumbnailer:01 . 
+
+我从缩略图工程的根目录开始构建。构建命令使用了根目录下的 Dockerfile。它是这样工作的:把 Dockerfile 放到你想构建镜像的地方,系统就会默认使用这个 Dockerfile。
+
+下面是我使用的 Dockerfile 的内容:
+
+    FROM ubuntu:latest
+    MAINTAINER bob@CogArtTech.com
+
+    RUN apt-get update
+    RUN apt-get install -y nodejs nodejs-legacy npm
+    RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
+    RUN apt-get clean
+
+    COPY ./package.json src/
+
+    RUN cd src && npm install
+
+    COPY . /src
+
+    WORKDIR src/
+
+    CMD npm start
+
+这怎么可能出错呢?
+
+### 第二个大问题 ###
+
+我运行了 `build` 命令,然后出了这个错:
+
+    Do you want to continue? [Y/n] Abort.
+
+    The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1
+
+我猜测是微服务出错了。我回到本地机器,从本机启动微服务,然后试着上传文件。
+
+然后我从 NodeJS 获得了这个错误:
+
+    Error: spawn convert ENOENT
+
+怎么回事?之前还是好好的啊!
+
+我搜索了我能想到的所有的错误原因。差不多 4 个小时后,我想:为什么不重启一下机器呢?
+
+重启了,你猜猜结果?错误消失了!(译注:万能的重启)
+
+继续。
+
+### 将精灵关进瓶子 ###
+
+跳回正题:我需要完成构建工作。
+
+我使用 [`rm` 命令][5]删除了虚拟机里所有的容器。
+
+    $ docker rm -f $(docker ps -a -q)
+
+`-f` 在这里的用处是强制删除运行中的容器。
+
+然后用 [`rmi` 命令][6]删除了全部的 Docker 镜像:
+
+    $ docker rmi -f $(docker images | tail -n +2 | awk '{print $3}')
+
+我重新执行了命令:构建镜像、安装容器、运行微服务。然后在经历了充满自我怀疑和沮丧的一个小时后,我告诉自己:这个错误可能不是微服务的原因。
+
+所以我重新看了这个错误:
+
+    Do you want to continue? [Y/n] Abort.
+
+    The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1
+
+这太打击我了:构建脚本好像需要有人从键盘输入 Y!但是,这是一个非交互的 Dockerfile 脚本啊,这里并没有键盘。
+
+回到 Dockerfile,脚本原来是这样的:
+
+    RUN apt-get update
+    RUN apt-get install -y nodejs nodejs-legacy npm
+    RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
+    RUN apt-get clean
+
+第二个 `apt-get` 命令缺少了 `-y` 标志(它的作用是在需要确认的地方自动回答 "yes"),这才是错误的根本原因。
+
+我在这条命令里添加了 `-y`:
+
+    RUN apt-get update
+    RUN apt-get install -y nodejs nodejs-legacy npm
+    RUN apt-get install -y imagemagick libmagickcore-dev libmagickwand-dev
+    RUN apt-get clean
+
+猜一猜结果:经过将近两天的尝试和痛苦,容器终于正常工作了!整整两天啊!
+
+我完成了构建工作:
+
+    $ docker build -t thumbnailer:0.1 .
+
+启动了容器:
+
+    $ docker run -d -p 3001:3000 thumbnailer:0.1
+
+获取了虚拟机的 IP 地址:
+
+    $ docker-machine ip default
+
+在我的浏览器里输入 http://192.168.99.100:3001/ :
+
+上传页面打开了。
+
+我选择了一个图片,然后得到了这个:
+
+![container-diagram-7](https://deis.com/images/blog-images/containers-hard-7.png)
+
+工作了!
+
+在容器里面工作了,我的第一次啊!
+
+### 这意味着什么? ###
+
+很久以前,我就接受了这样一个道理:当你刚开始尝试某项技术时,即使是最简单的事情也会变得很困难。因此,我压抑住了要成为房间里最聪明的人的欲望。然而最近几天尝试容器的过程就是一个充满自我怀疑的旅程。
+
+但是你想知道一些其它的事情吗?这篇文章是我在凌晨两点完成的,而每一个受折磨的小时都是值得的。为什么?因为这段时间你将自己全身心投入了喜欢的工作里。这件事很难,对于所有人来说都不是很容易就能获得结果的。但是不要忘记:你在学习技术,运行世界的技术。
+
+P.S. 可以看看下面这两段 Hello World 容器的视频,其中有 [Raziel Tabib][7] 的精彩工作。
+
+注:youtube视频
+
+
+千万别忘记第二部分... 
+ +注:youtube视频 + + +-------------------------------------------------------------------------------- + +via: https://deis.com/blog/2015/beyond-hello-world-containers-hard-stuff + +作者:[Bob Reselman][a] +译者:[Ezio](https://github.com/oska874) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://deis.com/blog +[1]:http://deis.com/blog/2015/developer-journey-linux-containers +[2]:https://github.com/rsms/node-imagemagick +[3]:https://www.docker.com/toolbox +[4]:https://docs.docker.com/reference/commandline/run/ +[5]:https://docs.docker.com/reference/commandline/rm/ +[6]:https://docs.docker.com/reference/commandline/rmi/ +[7]:http://twitter.com/RazielTabib diff --git a/translated/tech/20151122 Doubly linked list in the Linux Kernel.md b/translated/tech/20151122 Doubly linked list in the Linux Kernel.md new file mode 100644 index 0000000000..631d918813 --- /dev/null +++ b/translated/tech/20151122 Doubly linked list in the Linux Kernel.md @@ -0,0 +1,258 @@ +Linux 内核里的数据结构——双向链表 +================================================================================ + +双向链表 +-------------------------------------------------------------------------------- + + +Linux 内核自己实现了双向链表,可以在[include/linux/list.h](https://github.com/torvalds/linux/blob/master/include/linux/list.h)找到定义。我们将会从双向链表数据结构开始`内核的数据结构`。为什么?因为它在内核里使用的很广泛,你只需要在[free-electrons.com](http://lxr.free-electrons.com/ident?i=list_head) 检索一下就知道了。 + +首先让我们看一下在[include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) 里的主结构体: + +```C +struct list_head { + struct list_head *next, *prev; +}; +``` + +你可能注意到这和你以前见过的双向链表的实现方法是不同的。举个例子来说,在[glib](http://www.gnu.org/software/libc/) 库里是这样实现的: + +```C +struct GList { + gpointer data; + GList *next; + GList *prev; +}; +``` + 
通常来说,一个链表节点会包含一个指向数据项的指针,但是内核的实现并没有这样做。所以问题来了:`链表在哪里保存数据呢?`。实际上,内核里实现的链表是`侵入式链表`(intrusive list)。侵入式链表并不在节点内保存数据:节点仅仅包含指向前后节点的指针,包含数据的结构体则通过内嵌链表节点的方式附加到链表上。这就使得这个数据结构是通用的,使用起来就不需要考虑节点数据的类型了。
+
+比如:
+
+```C
+struct nmi_desc {
+    spinlock_t lock;
+    struct list_head head;
+};
+```
+
+让我们看几个例子来理解一下在内核里是如何使用 `list_head` 的。如上所述,内核里确实有很多不同的地方用到了链表。我们来看一个杂项字符驱动里的使用例子。[drivers/char/misc.c](https://github.com/torvalds/linux/blob/master/drivers/char/misc.c) 中的杂项字符驱动 API 被用来编写处理小型硬件和虚拟设备的小驱动。这些驱动共享相同的主设备号:
+
+```C
+#define MISC_MAJOR              10
+```
+
+但是都有各自不同的次设备号。比如:
+
+```
+ls -l /dev | grep 10
+crw------- 1 root root 10, 235 Mar 21 12:01 autofs
+drwxr-xr-x 10 root root 200 Mar 21 12:01 cpu
+crw------- 1 root root 10, 62 Mar 21 12:01 cpu_dma_latency
+crw------- 1 root root 10, 203 Mar 21 12:01 cuse
+drwxr-xr-x 2 root root 100 Mar 21 12:01 dri
+crw-rw-rw- 1 root root 10, 229 Mar 21 12:01 fuse
+crw------- 1 root root 10, 228 Mar 21 12:01 hpet
+crw------- 1 root root 10, 183 Mar 21 12:01 hwrng
+crw-rw----+ 1 root kvm 10, 232 Mar 21 12:01 kvm
+crw-rw---- 1 root disk 10, 237 Mar 21 12:01 loop-control
+crw------- 1 root root 10, 227 Mar 21 12:01 mcelog
+crw------- 1 root root 10, 59 Mar 21 12:01 memory_bandwidth
+crw------- 1 root root 10, 61 Mar 21 12:01 network_latency
+crw------- 1 root root 10, 60 Mar 21 12:01 network_throughput
+crw-r----- 1 root kmem 10, 144 Mar 21 12:01 nvram
+brw-rw---- 1 root disk 1, 10 Mar 21 12:01 ram10
+crw--w---- 1 root tty 4, 10 Mar 21 12:01 tty10
+crw-rw---- 1 root dialout 4, 74 Mar 21 12:01 ttyS10
+crw------- 1 root root 10, 63 Mar 21 12:01 vga_arbiter
+crw------- 1 root root 10, 137 Mar 21 12:01 vhci
+```
+
+现在让我们看看它是如何使用链表的。首先看一下结构体 `miscdevice`:
+
+```C
+struct miscdevice
+{
+    int minor;
+    const char *name;
+    const struct file_operations *fops;
+    struct list_head list;
+    struct device *parent;
+    struct device *this_device;
+    const char *nodename;
+    mode_t mode;
+};
+```
+
+可以看到结构体的第四个成员 `list`,所有注册过的设备都通过它连成一个链表。在源代码文件的开始可以看到这个链表的定义:
+
+```C
+static LIST_HEAD(misc_list);
+```
+
它实际上就是用 `list_head` 类型定义一个变量的宏展开。
+
+```C
+#define LIST_HEAD(name) \
+    struct list_head name = LIST_HEAD_INIT(name)
+```
+
+然后使用宏 `LIST_HEAD_INIT` 对其进行初始化,这个宏会用变量 `name` 的地址来填充结构体的 `prev` 和 `next` 两个成员。
+
+```C
+#define LIST_HEAD_INIT(name) { &(name), &(name) }
+```
+
+现在来看看注册杂项设备的函数 `misc_register`。它在开始就用 `INIT_LIST_HEAD` 初始化了 `miscdevice->list`。
+
+```C
+INIT_LIST_HEAD(&misc->list);
+```
+
+作用和宏 `LIST_HEAD_INIT` 一样。
+
+```C
+static inline void INIT_LIST_HEAD(struct list_head *list)
+{
+    list->next = list;
+    list->prev = list;
+}
+```
+
+在函数 `device_create` 创建了设备后,我们就用下面的语句将设备添加到设备链表:
+
+```
+list_add(&misc->list, &misc_list);
+```
+
+内核文件 `list.h` 提供了向链表添加新项的 API 接口。我们来看看它的实现:
+
+```C
+static inline void list_add(struct list_head *new, struct list_head *head)
+{
+    __list_add(new, head, head->next);
+}
+```
+
+实际上就是使用 3 个指定的参数来调用了内部函数 `__list_add`:
+
+* new - 新项。
+* head - 新项将会被添加到 `head` 之后。
+* head->next - `head` 之后的项。
+
+`__list_add` 的实现非常简单:
+
+```C
+static inline void __list_add(struct list_head *new,
+                  struct list_head *prev,
+                  struct list_head *next)
+{
+    next->prev = new;
+    new->next = next;
+    new->prev = prev;
+    prev->next = new;
+}
+```
+
+我们会在 `prev` 和 `next` 之间添加一个新项。所以,我们用宏 `LIST_HEAD` 定义的 `misc_list` 链表会包含指向 `miscdevice->list` 的向前指针和向后指针。
+
+这里有一个问题:如何得到链表项所属结构体的内容呢?这里有一个特殊的宏:
+
+```C
+#define list_entry(ptr, type, member) \
+    container_of(ptr, type, member)
+```
+
+它使用了三个参数:
+
+* ptr - 指向结构体中 `list_head` 成员的指针;
+* type - 结构体类型;
+* member - 结构体内类型为 `list_head` 的成员的名字;
+
+比如说:
+
+```C
+const struct miscdevice *p = list_entry(v, struct miscdevice, list)
+```
+
+然后我们就可以使用 `p->minor` 或者 `p->name` 来访问 `miscdevice`。让我们来看看 `list_entry` 的实现:
+
+```C
+#define list_entry(ptr, type, member) \
+    container_of(ptr, type, member)
+```
+
+如我们所见,它仅仅使用相同的参数调用了宏 `container_of`。初看这个宏挺奇怪的:
+
+```C
+#define container_of(ptr, type, member) ({                      \
+    const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
+    (type *)( (char *)__mptr - offsetof(type,member) );})
+```
+
+首先你可以注意到花括号内包含两个表达式。编译器会执行花括号内的全部语句,然后返回最后一个表达式的值。
+
+举个例子来说:
+
+```
+#include <stdio.h>
+
+int main() {
+    int i = 0;
+    printf("i = %d\n", ({++i; ++i;}));
+    return 0;
+}
+```
+
+最终会打印 `2`。
+
+下一点就是 `typeof`,它也很简单,就如它的名字所示,仅仅返回给定变量的类型。当我第一次看到宏 `container_of` 的实现时,最让我觉得奇怪的是其中的 `0`。实际上,把 `0` 强制转换为结构体指针后再取某个成员的地址,得到的恰好就是该成员相对于结构体起始地址(这里是地址 0)的偏移。让我们看一个简单的例子:
+
+```C
+#include <stdio.h>
+
+struct s {
+    int field1;
+    char field2;
+    char field3;
+};
+
+int main() {
+    printf("%p\n", &((struct s*)0)->field3);
+    return 0;
+}
+```
+
+结果显示 `0x5`。
+
+下一个宏 `offsetof` 会计算结构体的某个成员相对于结构体起始地址的偏移。它的实现和上面类似:
+
+```C
+#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
+```
+
+现在我们来总结一下宏 `container_of`:只需要知道结构体里 `list_head` 类型成员的名字和结构体的类型,它就可以通过指向 `list_head` 成员的指针获得结构体的起始地址。在宏定义的第一行,声明了一个指向结构体成员的指针 `__mptr`,并把 `ptr` 的值赋给它,现在 `ptr` 和 `__mptr` 指向同一个地址。从技术上讲我们并不需要这一行,但是它可以方便地进行类型检查:第一行保证了特定的结构体(参数 `type`)确实包含成员 `member`。第二行代码用宏 `offsetof` 计算成员相对于结构体起始地址的偏移,再从成员的地址减去这个偏移,最后就得到了结构体的起始地址。
+
+当然了,`list_add` 和 `list_entry` 不是 `<linux/list.h>` 提供的唯一功能。双向链表的实现还提供了如下 API:
+
+* list_add
+* list_add_tail
+* list_del
+* list_replace
+* list_move
+* list_is_last
+* list_empty
+* list_cut_position
+* list_splice
+* list_for_each
+* list_for_each_entry
+
+等等很多其它 API。
+
+via: https://github.com/0xAX/linux-insides/edit/master/DataStructures/dlist.md
+
+译者:[Ezio](https://github.com/oska874)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20151123 How to Install Cockpit in Fedora or CentOS or RHEL or Arch Linux.md b/translated/tech/20151123 How to Install Cockpit in Fedora or CentOS or RHEL or Arch Linux.md
new file mode 100644
index 0000000000..38a89dcbc2
--- /dev/null
+++ b/translated/tech/20151123 How to Install Cockpit in Fedora or CentOS or RHEL or Arch Linux.md
@@ -0,0 +1,148 @@
+如何在 Fedora/CentOS/RHEL 或 Arch Linux 上安装 Cockpit
+================================================================================
+Cockpit 是一个免费开源的服务器管理软件,它让我们可以通过漂亮的 web 前端界面轻松地管理我们的 GNU/Linux 服务器。Cockpit 使得 linux
系统管理员、系统维护员和开发者能轻松地管理他们的服务器并执行一些简单的任务,例如管理存储、检测日志、启动或停止服务以及一些其它任务。它的报告界面添加了一些很好的功能使得可以轻松地在终端和 web 界面之间切换。另外,它不仅使得管理一台服务器变得简单,更重要的是只需要一个单击就可以在一个地方同时管理多个通过网络连接的服务器。它非常轻量级,web 界面也非常简单易用。在这篇博文中,我们会学习如何安装 Cockpit 并用它管理我们的运行着 Fedora、CentOS、Arch Linux 以及 RHEL 发行版操作系统的服务器。下面是 Cockpit 在我们的 GNU/Linux 服务器中一些非常棒的功能: + +1. 它包含 systemd 服务管理器。 +2. 有一个用于故障排除和日志分析的 Journal 日志查看器。 +3. 包括 LVM 在内的存储配置比以前任何时候都要简单。 +4. 用 Cockpit 可以进行基本的网络配置。 +5. 可以轻松地添加和删除用户以及管理多台服务器。 + +### 1. 安装 Cockpit ### + +首先,我们需要在我们基于 linux 的服务器上安装 Cockpit。大部分发行版的官方软件仓库中都有可用的 cockpit 安装包。这篇博文中,我们会在 Fedora 22、CentOS 7、Arch Linux 和 RHEL 7 中通过它们的官方软件仓库安装 Cockpit。 + +#### CentOS / RHEL #### + +CentOS 和 RHEL 官方软件库中有可用的 Cockpit。我们只需要用 yum 管理器就可以安装。只需要以 sudo/root 权限运行下面的命令就可以安装它。 + + # yum install cockpit + +![Centos 上安装 Cockpit](http://blog.linoxide.com/wp-content/uploads/2015/10/install-cockpit-centos.png) + +#### Fedora 22/21 #### + +和 CentOS 一样, Fedora 的官方软件库默认也有可用的 Cockpit。我们只需要用 dnf 软件包管理器就可以安装 Cockpit。 + + # dnf install cockpit + +![Fedora 上安装 Cockpit](http://blog.linoxide.com/wp-content/uploads/2015/10/install-cockpit-fedora.png) + +#### Arch Linux #### + +现在 Arch Linux 官方软件库中还没有可用的 Cockpit,但 Arch 用户库(Arch User Repository,AUR)有。只需要运行下面的 yaourt 命令就可以安装。 + + # yaourt cockpit + +![Arch linux 上安装 Cockpit](http://blog.linoxide.com/wp-content/uploads/2015/10/install-cockpit-archlinux.png) + +### 2. 启动并启用 Cockpit ### + +成功安装完 Cockpit,我们就要用服务/守护进程管理器启动 Cockpit 服务。到了 2015 年,尽管一些 linux 发行版仍然运行 SysVinit 管理守护进程,但大部分 linux 发行版都采用了 Systemd,Cockpit 使用 systemd 完成从运行守护进程到服务几乎所有的功能。因此,我们只能在运行着 Systemd 的最新的 linux 发行版中安装 Cockpit。要启动 Cockpit 并让它在每次系统重启时自动启动,我们需要在终端或控制台中运行下面的命令。 + # systemctl start cockpit + + # systemctl enable cockpit.socket + + 创建从 /etc/systemd/system/sockets.target.wants/cockpit.socket 到 /usr/lib/systemd/system/cockpit.socket 的符号链接。 + +### 3. 
允许通过防火墙 ###
+
+启动 Cockpit 并使它能在每次系统重启时自动启动后,我们现在要给它配置防火墙。由于我们的服务器上运行着防火墙程序,我们需要放行相应的端口,使得从服务器外面可以访问 Cockpit。
+
+#### Firewalld ####
+
+    # firewall-cmd --add-service=cockpit --permanent
+
+    success
+
+    # firewall-cmd --reload
+
+    success
+
+![允许 Cockpit 通过 Firewalld](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-allowing-firewalld.png)
+
+#### Iptables ####
+
+    # iptables -A INPUT -p tcp -m tcp --dport 9090 -j ACCEPT
+
+    # service iptables save
+
+### 4. 访问 Cockpit Web 界面 ###
+
+下面,我们终于要通过 web 浏览器访问 Cockpit web 界面了。根据配置,我们只需要用浏览器打开 https://ip-address:9090 或 https://server.domain.com:9090。在我们这篇博文中,我们用浏览器打开 https://128.199.114.17:9090,正如下图所示。
+
+![通过 SSL 访问 Cockpit Web 服务](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-webserver-ssl-proceed.png)
+
+此时会出现一个 SSL 证书警告,因为我们正在使用一个自签名证书。我们只需要忽略这个警告并进入登录页面;在 chrome/chromium 中,我们需要点击 Show Advanced 然后点击 **Proceed to 128.199.114.17 (unsafe)**。
+
+![Cockpit 登录界面](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-login-screen.png)
+
+现在,要进入仪表盘,我们需要输入登录信息。这里的用户名和密码与登录我们的 linux 服务器所用的用户名和密码相同。当我们输入登录信息并点击 Log In 按钮后,我们就会进入到 Cockpit 仪表盘。
+
+![Cockpit 仪表盘](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-dashboard.png)
+
+这里我们可以看到所有的菜单以及 CPU、磁盘、网络、存储使用情况的可视化结果。仪表盘正如上图所示。
+
+#### 服务 ####
+
+要管理服务,我们需要点击 web 页面右边菜单中的 Services 按钮。然后,我们会看到服务被分成了 5 个类别:目标、系统服务、套接字、计时器和路径。
+
+![Cockpit 服务](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-services.png)
+
+#### Docker 容器 ####
+
+我们甚至可以用 Cockpit 管理 docker 容器。用 Cockpit 监控和管理 Docker 容器非常简单。由于我们的服务器中没有安装运行 docker,我们需要点击 Start Docker。
+
+
+![Cockpit 容器](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-container.png)
+
+Cockpit 会自动在我们的服务器上安装和运行 docker。启动之后,我们就会看到下面的截图。然后我们就可以按照需求管理 docker 镜像、容器。
+
+![Cockpit 容器管理](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-containers-mangement.png)
+
+#### Journal 日志查看器 ####
+
+Cockpit 有个日志查看器,它把错误、警告、注意分到不同的标签页。我们也有一个 All 标签页,在这里可以看到所有的日志信息。
+
+![Cockpit Journal 
日志](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-journal-logs.png)
+
+#### 网络 ####
+
+在网络部分,我们可以看到两张分别可视化发送和接收速度的图。我们还可以看到可用网卡的列表,以及 Add Bond、Bridge、VLAN 的选项。如果我们需要配置某个网卡,只需要点击网卡名称。在下面,我们可以看到网络的 Journal 日志信息。
+
+![Cockpit Network](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-network.png)
+
+#### 存储 ####
+
+现在,用 Cockpit 可以方便地查看硬盘的读写速度。我们可以查看存储的 Journal 日志以便进行故障排除和修复。在页面中还有一个已用空间的可视化图。我们甚至可以卸载、格式化、删除一块硬盘的某个分区。它还有类似创建 RAID 设备、卷组等功能。
+
+![Cockpit Storage](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-storage.png)
+
+#### 用户管理 ####
+
+通过 Cockpit Web 界面我们可以方便地创建新用户。在这里创建的账户会应用到系统用户账户。我们可以用它更改密码、指定角色、以及删除用户账户。
+
+![Cockpit Accounts](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-accounts.png)
+
+#### 实时终端 ####
+
+Cockpit 还有一个很棒的特性:我们可以用 Cockpit 界面提供的实时终端执行命令和任务。这使得我们可以根据需求在 web 界面和终端之间自由切换。
+
+![Cockpit 终端](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-terminal.png)
+
+### 总结 ###
+
+Cockpit 是由 [Red Hat][1] 开发的、使服务器管理变得轻松简单的免费开源软件。它非常适合简单的系统管理任务,也适合新手系统管理员。它仍然处于开发阶段,还没有稳定版发行,因此不适合用于生产环境。它是针对最新的、默认安装了 systemd 的 Fedora、CentOS、Arch Linux、RHEL 系统开发的。如果你想在 Ubuntu 上安装 Cockpit,你可以通过 PPA 访问,但该 PPA 现在已经过期了。如果你有任何疑问、建议,请在下面的评论框中反馈给我们,这样我们可以改进和更新我们的内容。非常感谢!
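
上文在不同发行版里用了不同的包管理器安装 Cockpit。如果想在脚本里按发行版汇总这些安装命令,可以写一个类似下面的速查函数(只是演示用的草图:函数名 `cockpit_install_cmd` 是假设的,其中的命令均取自上文,这里只打印命令而不真正执行):

```shell
#!/bin/sh
# 根据传入的发行版名称,输出本文使用的 Cockpit 安装命令
cockpit_install_cmd() {
    case "$1" in
        centos|rhel) echo "yum install cockpit" ;;   # CentOS/RHEL:官方软件仓库
        fedora)      echo "dnf install cockpit" ;;   # Fedora:官方软件仓库
        arch)        echo "yaourt cockpit" ;;        # Arch Linux:AUR
        *)           echo "unsupported: $1" >&2; return 1 ;;
    esac
}

# 打印一份速查表
for d in centos fedora arch; do
    printf '%s: %s\n' "$d" "$(cockpit_install_cmd "$d")"
done
```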
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-cockpit-fedora-centos-rhel-arch-linux/ + +作者:[Arun Pyasi][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://www.redhat.com/ \ No newline at end of file diff --git a/translated/tech/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md b/translated/tech/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md new file mode 100644 index 0000000000..f115444665 --- /dev/null +++ b/translated/tech/20151130 Useful Linux and Unix Tape Managements Commands For Sysadmins.md @@ -0,0 +1,425 @@ +15条给系统管理员的实用 Linux/Unix 磁带管理命令 +================================================================================ +磁带设备应只用于定期的文件归档或将数据从一台服务器传送至另一台。通常磁带设备与 Unix 机器连接,用 mt 或 mtx 控制。你可以将所有的数据备份到磁盘(也许是云中)和磁带设备。在这个教程中你将会了解到: + +- 磁带设备名 +- 管理磁带驱动器的基本命令 +- 基本的备份和恢复命令 + +### 为什么备份? ### + +一个备份设备是很重要的: + +- 从磁盘故障中恢复的能力 +- 意外的文件删除 +- 文件或文件系统损坏 +- 服务器完全毁坏,包括由于火灾或其他问题导致的同盘备份毁坏 + +你可以使用磁带归档备份整个服务器并将其离线存储。 + +### 理解磁带文件标记和块大小 ### + +![Fig.01: Tape file marks](http://s0.cyberciti.org/uploads/cms/2015/10/tape-format.jpg) + +图01:磁带文件标记 + +每个磁带设备能存储多个备份文件。磁带备份文件通过 cpio,tar,dd 等命令创建。但是,磁带设备可以由各种程序打开,写入数据,并关闭。你可以存储若干备份(磁带文件)到一个物理磁带上。在每个磁带文件之间有个“磁带文件标记”。这个是用来指示一个物理磁带上磁带文件的结尾以及另一个文件的开始。你需要使用 mt 命令来定位磁带(快进,倒带和标记)。 + +#### 磁带上的数据是如何存储的 #### + +![Fig.02: How data is stored on a tape](http://s0.cyberciti.org/uploads/cms/2015/10/how-data-is-stored-on-a-tape.jpg) + +图02:磁带上的数据是如何存储的 + +所有的数据使用 tar 以连续磁带存储格式连续地存储。第一个磁带归档会从磁带的物理开始端开始存储(tar #0)。接下来的就是 tar #1,以此类推。 + +### Unix 上的磁带设备名 ### + +1. /dev/rmt/0 或 /dev/rmt/1 或 /dev/rmt/[0-127] :Unix 上的常规磁带设备名。磁带自动倒回。 +1. /dev/rmt/0n :以无倒回为特征,换言之,磁带使用之后,停留在当前状态等待下个命令。 +1. 
/dev/rmt/0b :使用磁带接口,即 BSD 行为。对 AIX、Windows、Linux、FreeBSD 等各种操作系统来说可读性更好。
+1. /dev/rmt/0l :设置密度为低。
+1. /dev/rmt/0m :设置密度为中。
+1. /dev/rmt/0u :设置密度为高。
+1. /dev/rmt/0c :设置密度为压缩。
+1. /dev/st[0-9] :Linux 特定 SCSI 磁带设备名。
+1. /dev/sa[0-9] :FreeBSD 特定 SCSI 磁带设备名。
+1. /dev/esa0 :FreeBSD 特定 SCSI 磁带设备名,在关闭时弹出(如果可以的话)。
+
+#### 磁带设备名示例 ####
+
+- /dev/rmt/1cn 指明正在使用 unity 1,压缩密度,无倒回。
+- /dev/rmt/0hb 指明正在使用 unity 0,高密度,BSD 行为。
+- Linux 上的自动倒回 SCSI 磁带设备名:/dev/st0
+- Linux 上的无倒回 SCSI 磁带设备名:/dev/nst0
+- FreeBSD 上的自动倒回 SCSI 磁带设备名:/dev/sa0
+- FreeBSD 上的无倒回 SCSI 磁带设备名:/dev/nsa0
+
+#### 如何列出已安装的 scsi 磁带设备? ####
+
+输入下列命令:
+
+    ## Linux(更多信息参阅 man) ##
+    lsscsi
+    lsscsi -g
+
+    ## IBM AIX ##
+    lsdev -Cc tape
+    lsdev -Cc adsm
+    lscfg -vl rmt*
+
+    ## Solaris Unix ##
+    cfgadm -a
+    cfgadm -al
+    luxadm probe
+    iostat -En
+
+    ## HP-UX Unix ##
+    ioscan Cf
+    ioscan -funC tape
+    ioscan -fnC tape
+    ioscan -kfC tape
+
+来自我的 Linux 服务器的输出示例:
+
+![Fig.03: Installed tape devices on Linux server](http://s0.cyberciti.org/uploads/cms/2015/10/linux-find-tape-devices-command.jpg)
+
+图03:Linux 服务器上已安装的磁带设备
+
+### mt 命令实例 ###
+
+在 Linux 和类 Unix 系统上,mt 命令用来控制磁带驱动器的操作,比如查看状态、查找磁带上的文件或写入磁带控制标记。下列大多数命令需要以 root 用户执行。语法如下:
+
+    mt -f /tape/device/name operation
+
+#### 设置环境 ####
+
+你可以设置 TAPE shell 变量,它的值是磁带驱动器的路径名。在 FreeBSD 上,它的默认值(在变量未设置时使用,变量为空时不使用)是 /dev/nsa0。也可以像下面解释的那样,通过 mt 命令的 -f 参数覆盖它。
+
+    ## 添加到你的 shell 配置文件 ##
+    TAPE=/dev/st1 #Linux
+    TAPE=/dev/rmt/2 #Unix
+    TAPE=/dev/nsa3 #FreeBSD
+    export TAPE
+
+### 1:显示磁带/驱动器状态 ###
+
+    mt status #使用默认设备
+    mt -f /dev/rmt/0 status #Unix
+    mt -f /dev/st0 status #Linux
+    mt -f /dev/nsa0 status #FreeBSD
+    mt -f /dev/rmt/1 status #Unix unity 1 也就是 tape device no. 
1 + +你可以像下面一样使用 shell 循环调查系统并定位所有的磁带驱动器: + + for d in 0 1 2 3 4 5 + do + mt -f "/dev/rmt/${d}" status + done + +### 2:倒带 ### + + mt rew + mt rewind + mt -f /dev/mt/0 rewind + mt -f /dev/st0 rewind + +### 3:弹出磁带 ### + + mt off + mt offline + mt eject + mt -f /dev/mt/0 off + mt -f /dev/st0 eject + +### 4:擦除磁带(倒带,在可以的情况下卸载磁带) ### + + mt erase + mt -f /dev/st0 erase #Linux + mt -f /dev/rmt/0 erase #Unix + +### 5:张紧磁带盒 ### + +如果磁带在读取时发生错误,你重新张紧磁带,清洁磁带驱动器,像下面这样再试一次: + + mt retension + mt -f /dev/rmt/1 retension #Unix + mt -f /dev/st0 retension #Linux + +### 6:在磁带当前位置写入 EOF 标记 ### + + mt eof + mt weof + mt -f /dev/st0 eof + +### 7:将磁带前进指定的文件标记数目,即跳过指定个 EOF 标记 ### + +磁带定位在下一个文件的第一个块,即磁带会定位在下一区域的第一个块(见图01): + + mt fsf + mt -f /dev/rmt/0 fsf + mt -f /dev/rmt/1 fsf 1 #go 1 forward file/tape (see fig.01) + +### 8:将磁带后退指定的文件标记数目,即倒带指定个 EOF 标记 ### + +磁带定位在下一个文件的第一个块,即磁带会定位在 EOF 标记之后(见图01): + + mt bsf + mt -f /dev/rmt/1 bsf + mt -f /dev/rmt/1 bsf 1 #go 1 backward file/tape (see fig.01) + +这里是磁带定位命令列表: + + fsf 前进指定的文件标记数目。磁带定位在下一个文件的第一块。 + + fsfm 前进指定的文件标记数目。磁带定位在前一文件的最后一块。 + + bsf 后退指定的文件标记数目。磁带定位在前一文件的最后一块。 + + bsfm 后退指定的文件标记数目。磁带定位在下一个文件的第一块。 + + asf The tape is positioned at the beginning of the count file. 
定位时先将磁带倒回起点,再前进跳过 count 个文件标记。 + + fsr 前进指定的记录数。 + + bsr 后退指定的记录数。 + + fss (SCSI tapes)前进指定的 setmarks。 + + bss (SCSI tapes)后退指定的 setmarks。 + +### 基本备份命令 ### + +让我们来看看备份和恢复命令。 + +### 9:备份目录(tar 格式) ### + + tar cvf /dev/rmt/0n /etc + tar cvf /dev/st0 /etc + +### 10:恢复目录(tar 格式) ### + + tar xvf /dev/rmt/0n -C /path/to/restore + tar xvf /dev/st0 -C /tmp + +### 11:列出或检查磁带内容(tar 格式) ### + + mt -f /dev/st0 rewind; dd if=/dev/st0 of=- + + ## tar 格式 ## + tar tvf {DEVICE} {Directory-FileName} + tar tvf /dev/st0 + tar tvf /dev/st0 desktop + tar tvf /dev/rmt/0 foo > list.txt + +### 12:使用 dump 或 ufsdump 备份分区 ### + + ## Unix 备份 c0t0d0s2 分区 ## + ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s2 + + ## Linux 备份 /home 分区 ## + dump 0uf /dev/nst0 /dev/sda5 + dump 0uf /dev/nst0 /home + + ## FreeBSD 备份 /usr 分区 ## + dump -0aL -b64 -f /dev/nsa0 /usr + +### 13:使用 ufsrestore 或 restore 恢复分区 ### + + ## Unix ## + ufsrestore xf /dev/rmt/0 + ## Unix 交互式恢复 ## + ufsrestore if /dev/rmt/0 + + ## Linux ## + restore rf /dev/nst0 + ## 从磁带媒介上的第6个备份交互式恢复 ## + restore isf 6 /dev/nst0 + + ## FreeBSD 恢复 ufsdump 格式 ## + restore -i -f /dev/nsa0 + +### 14:从磁带开头开始写入(见图02) ### + + ## 这会覆盖磁带上的所有数据 ## + mt -f /dev/st1 rewind + + ## 备份 home ## + tar cvf /dev/st1 /home + + ## 离线并卸载磁带 ## + mt -f /dev/st1 offline + +从磁带开头开始恢复: + + mt -f /dev/st0 rewind + tar xvf /dev/st0 + mt -f /dev/st0 offline + +### 15:从最后一个 tar 后开始写入(见图02) ### + + ## 这会保留之前写入的数据 ## + mt -f /dev/st1 eom + + ## 备份 home ## + tar cvf /dev/st1 /home + + ## 卸载 ## + mt -f /dev/st1 offline + +### 16:从 tar number 2 后开始写入(见图02) ### + + ## 在 tar number 2 之后写入(应该是 2+1) ## + mt -f /dev/st0 asf 3 + tar cvf /dev/st0 /usr + + ## asf 等效于 fsf ## + mt -f /dev/st0 rewind + mt -f /dev/st0 fsf 2 + +从 tar number 2 恢复 tar: + + mt -f /dev/st0 asf 3 + tar xvf /dev/st0 + mt -f /dev/st0 offline + +### 如何验证使用 tar 创建的备份磁带?
### + +定期做全系统恢复和服务测试是很重要的,这是唯一能确定整个系统正常工作的途径。参见我们的[验证 tar 命令磁带备份的教程][1]以获取更多信息。 + +### 示例 shell 脚本 ### + + #!/bin/bash + # A UNIX / Linux shell script to backup dirs to tape device like /dev/st0 (linux) + # This script makes both full and incremental backups. + # You need at least two sets of five tapes. Label each tape as Mon, Tue, Wed, Thu and Fri. + # You can run the script at midnight or early morning each day using cron jobs. + # The operator or sys admin can replace the tape every day after the script has finished. + # Script must run as root or configure permission via sudo. + # ------------------------------------------------------------------------- + # Copyright (c) 1999 Vivek Gite + # This script is licensed under GNU GPL version 2.0 or above + # ------------------------------------------------------------------------- + # This script is part of nixCraft shell script collection (NSSC) + # Visit http://bash.cyberciti.biz/ for more information. + # ------------------------------------------------------------------------- + # Last updated on : March-2003 - Added log file support. + # Last updated on : Feb-2007 - Added support for excluding files / dirs.
+ # ------------------------------------------------------------------------- + LOGBASE=/root/backup/log + + # Backup dirs; do not prefix / + BACKUP_ROOT_DIR="home sales" + + # Get today's day like Mon, Tue and so on + NOW=$(date +"%a") + + # Tape device name + TAPE="/dev/st0" + + # Exclude file + TAR_ARGS="" + EXCLUDE_CONF=/root/.backup.exclude.conf + + # Backup Log file + LOGFIILE=$LOGBASE/$NOW.backup.log + + # Path to binaries + TAR=/bin/tar + MT=/bin/mt + MKDIR=/bin/mkdir + + # ------------------------------------------------------------------------ + # Excluding files when using tar + # Create a file called $EXCLUDE_CONF using a text editor + # Add files matching patterns such as the following (regex allowed): + # home/vivek/iso + # home/vivek/*.cpp~ + # ------------------------------------------------------------------------ + [ -f $EXCLUDE_CONF ] && TAR_ARGS="-X $EXCLUDE_CONF" + + #### Custom functions #### + # Make a full backup + full_backup(){ + local old=$(pwd) + cd / + $TAR $TAR_ARGS -cvpf $TAPE $BACKUP_ROOT_DIR + $MT -f $TAPE rewind + $MT -f $TAPE offline + cd $old + } + + # Make a partial backup + partial_backup(){ + local old=$(pwd) + cd / + $TAR $TAR_ARGS -cvpf $TAPE -N "$(date -d '1 day ago')" $BACKUP_ROOT_DIR + $MT -f $TAPE rewind + $MT -f $TAPE offline + cd $old + } + + # Make sure all dirs exist + verify_backup_dirs(){ + local s=0 + for d in $BACKUP_ROOT_DIR + do + if [ ! -d /$d ]; + then + echo "Error : /$d directory does not exist!" + s=1 + fi + done + # if not; just die + [ $s -eq 1 ] && exit 1 + } + + #### Main logic #### + + # Make sure log dir exists + [ !
-d $LOGBASE ] && $MKDIR -p $LOGBASE + + # Verify dirs + verify_backup_dirs + + # Okay let us start backup procedure + # If it is Monday make a full backup; + # For Tue to Fri make a partial backup + # Weekend no backups + case $NOW in + Mon) full_backup;; + Tue|Wed|Thu|Fri) partial_backup;; + *) ;; + esac > $LOGFIILE 2>&1 + +### 关于第三方备份工具 ### + +Linux 和类Unix系统都提供了许多第三方工具,可以用来安排备份,包括磁带备份在内,如: + +- Amanda +- Bacula +- rsync +- duplicity +- rsnapshot + +另行参阅 + +- Man pages - [mt(1)][2], [mtx(1)][3], [tar(1)][4], [dump(8)][5], [restore(8)][6] + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/hardware/unix-linux-basic-tape-management-commands/ + +作者:Vivek Gite +译者:[alim0x](https://github.com/alim0x) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.cyberciti.biz/faq/unix-verify-tape-backup/ +[2]:http://www.manpager.com/linux/man1/mt.1.html +[3]:http://www.manpager.com/linux/man1/mtx.1.html +[4]:http://www.manpager.com/linux/man1/tar.1.html +[5]:http://www.manpager.com/linux/man8/dump.8.html +[6]:http://www.manpager.com/linux/man8/restore.8.html diff --git a/translated/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md b/translated/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md new file mode 100644 index 0000000000..c931365600 --- /dev/null +++ b/translated/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md @@ -0,0 +1,43 @@ +# 使用 SystemBack 备份你的 Ubuntu/Linux Mint(系统还原) + +对于任何一款允许用户还原电脑到之前状态(包括文件系统,安装的应用,以及系统设置)的操作系统来说,系统还原都是必备功能,可以恢复系统故障以及其他的问题。 + +有的时候安装一个程序或者驱动可能让你的系统黑屏。系统还原则让你电脑里面的系统文件(译者注:是系统文件,并非普通文件,详情请看**注意**部分)和程序恢复到之前工作正常时候的状态,进而让你远离那让人头痛的排障过程了。而且它也不会影响你的文件,照片或者其他数据。 + 
+简单的系统备份还原工具[Systemback](https://launchpad.net/systemback)让你很容易地创建系统备份以及用户配置文件。一旦遇到问题,你可以简单地恢复到系统先前的状态。它还有一些额外的特征包括系统复制,系统安装以及Live系统创建。 + +截图 + +![systemback](http://2.bp.blogspot.com/-2UPS3yl3LHw/VlilgtGAlvI/AAAAAAAAGts/ueRaAghXNvc/s1600/systemback-1.jpg) + +![systemback](http://2.bp.blogspot.com/-7djBLbGenxE/Vlilgk-FZHI/AAAAAAAAGtk/2PVNKlaPO-c/s1600/systemback-2.jpg) + +![](http://3.bp.blogspot.com/-beZYwKrsT4o/VlilgpThziI/AAAAAAAAGto/cwsghXFNGRA/s1600/systemback-3.jpg) + +![](http://1.bp.blogspot.com/-t_gmcoQZrvM/VlilhLP--TI/AAAAAAAAGt0/GWBg6bGeeaI/s1600/systemback-5.jpg) + +**注意**:使用系统还原不会还原你的文件,音乐,电子邮件或者其他任何类型的私人文件。对不同用户来讲,这既是优点又是缺点。坏消息是它不会还原你意外删除的文件,不过你可以通过一个文件恢复程序来解决这个问题。如果你的计算机没有还原点,那么系统恢复就无法奏效,所以这个工具就无法帮助你(还原系统),如果你尝试恢复一个主要问题,你将需要移步到另外的步骤来进行故障排除。 + +> > >适用于Ubuntu 15.10 Wily/16.04/15.04 Vivid/14.04 Trusty/Linux Mint 14.x/其他Ubuntu衍生版,打开终端,将下面这些命令复制过去: + +终端命令: + +``` +sudo add-apt-repository ppa:nemh/systemback +sudo apt-get update +sudo apt-get install systemback + +``` + +大功告成。 + +-------------------------------------------------------------------------------- + +via: http://www.noobslab.com/2015/11/backup-system-restore-point-your.html + +译者:[DongShuaike](https://github.com/DongShuaike) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://launchpad.net/systemback diff --git a/translated/tech/20151201 How to use Mutt email client with encrypted passwords.md b/translated/tech/20151201 How to use Mutt email client with encrypted passwords.md new file mode 100644 index 0000000000..1e8a032a04 --- /dev/null +++ b/translated/tech/20151201 How to use Mutt email client with encrypted passwords.md @@ -0,0 +1,138 @@ +如何使用加密过密码的Mutt邮件客户端 +================================================================================ +Mutt是一个开源的Linux/UNIX终端环境下的邮件客户端。连同[Alpine][1],Mutt有充分的理由在Linux命令行热衷者中有最忠诚的追随者。想一下你对邮件客户端的期待的事情,Mutt拥有:多协议支持(e.g., POP3, IMAP and 
SMTP),S/MIME和PGP/GPG集成,线程会话,颜色编码,可定制宏/快捷键,等等。另外,与笨重的web邮箱(如:Gmail,Ymail)或图形界面邮件客户端(如:Thunderbird,MS Outlook)相比,基于命令行的Mutt是一种轻量级的电子邮件访问方式。 + +当你想使用Mutt通过公司的SMTP/IMAP服务器收发邮件,或者用它取代网页邮件服务时,你可能关心的一个问题是:如何保护存储在纯文本Mutt配置文件(~/.muttrc)中的邮件凭据(如:SMTP/IMAP密码)。 + +针对这种安全担忧,确实有一个简单的方法可以**加密Mutt配置文件**,防范这种风险。在这个教程中,我将介绍如何使用GnuPG(GPG)这个开源的OpenPGP实现,来加密SMTP/IMAP密码之类的Mutt敏感配置。 + +### 第一步 (可选):创建GPG密钥 ### + +因为我们将要使用GPG加密Mutt配置文件,所以如果你还没有GPG密钥(公钥/私钥对),第一步就是创建一个。如果已经有了,可以跳过这一步。 + +要创建一个新的GPG密钥,输入以下命令。 + + $ gpg --gen-key + +选择密钥类型(RSA)、密钥长度(2048 bits)和过期时间(0,不过期)。当出现用户ID提示时,输入要与这个密钥对关联的名字(Dan Nanni)和邮箱地址(myemail@email.com)。最后,输入一个密码来保护你的私钥。 + +![](https://c2.staticflickr.com/6/5726/22808727824_7735f11157_c.jpg) + +生成一个GPG密钥需要大量的随机字节作为熵,所以在生成密钥期间,确保在你的系统上执行一些随机行为(如:敲打键盘、移动鼠标或者读写磁盘)。根据密钥长度不同,生成GPG密钥要花几分钟或更多时间。 + +![](https://c1.staticflickr.com/1/644/23328597612_6ac5a29944_c.jpg) + +### 第二步:加密Mutt敏感配置 ### + +下一步,在~/.mutt目录创建一个新的文本文件,然后把你想隐藏的Mutt敏感配置放进去。这个例子里,我放入了SMTP/IMAP密码。 + + $ mkdir ~/.mutt + $ vi ~/.mutt/password + +---------- + + set smtp_pass="XXXXXXX" + set imap_pass="XXXXXXX" + +现在像下面这样,用gpg以你的公钥加密这个文件。 + + $ gpg -r myemail@email.com -e ~/.mutt/password + +这会创建~/.mutt/password.gpg,即原文件经GPG加密后的版本。 + +接着删除~/.mutt/password,只保留GPG加密版本。 + +### 第三步:创建完整Mutt配置文件 ### + +由于你已经把Mutt敏感配置加密在一个单独的文件里,你可以在~/.muttrc里指定其余的Mutt配置,然后在~/.muttrc末尾增加下面这行。 + + source "gpg -d ~/.mutt/password.gpg |" + +当你启动Mutt时,这行配置会解密~/.mutt/password.gpg,并将解密出的内容应用到你的Mutt配置中。 + +下面是一个完整的Mutt配置示例,它让你可以在不暴露SMTP/IMAP密码的情况下用Mutt访问Gmail。请把其中的yourgmailaccount替换成你自己的Gmail ID。 + + set from = "yourgmailaccount@gmail.com" + set realname = "Your Name" + set smtp_url = "smtp://yourgmailaccount@smtp.gmail.com:587/" + set imap_user = "yourgmailaccount@gmail.com" + set folder = "imaps://imap.gmail.com:993" + set spoolfile = "+INBOX" + set postponed = "+[Google Mail]/Drafts" + set trash = "+[Google Mail]/Trash" + set header_cache =~/.mutt/cache/headers + set message_cachedir =~/.mutt/cache/bodies + set certificate_file =~/.mutt/certificates + set move = no + set
imap_keepalive = 900 + + # encrypted IMAP/SMTP passwords + source "gpg -d ~/.mutt/password.gpg |" + +### 第四步(可选):配置GPG代理 ### + +这时候,你已经可以使用带加密IMAP/SMTP密码的Mutt了。不过,每次运行Mutt,你都会先被提示输入GPG密码,以便用你的私钥解密IMAP/SMTP密码。 + +![](https://c2.staticflickr.com/6/5667/23437064775_20c874940f_c.jpg) + +如果你想避免这样的GPG密码提示,可以部署gpg代理。gpg代理以后台程序方式运行,安全地缓存你的GPG密码,gpg会自动从gpg代理获得你的GPG密码,无需手工干预。如果你正在使用Linux桌面,也可以使用桌面环境特有的等价方案,例如GNOME桌面的gnome-keyring-daemon。 + +在基于Debian的系统上,你可以这样安装gpg代理: + + $ sudo apt-get install gpg-agent + +在基于Red Hat的系统上,gpg代理是预装的。 + +现在把下面这些内容添加到你的.bashrc文件。 + + envfile="$HOME/.gnupg/gpg-agent.env" + if [[ -e "$envfile" ]] && kill -0 $(grep GPG_AGENT_INFO "$envfile" | cut -d: -f 2) 2>/dev/null; then + eval "$(cat "$envfile")" + else + eval "$(gpg-agent --daemon --allow-preset-passphrase --write-env-file "$envfile")" + fi + export GPG_AGENT_INFO + +重新加载.bashrc,或者干脆注销再重新登录。 + + $ source ~/.bashrc + +现在确认GPG_AGENT_INFO环境变量已经设置妥当。 + + $ echo $GPG_AGENT_INFO + +---------- + + /tmp/gpg-0SKJw8/S.gpg-agent:942:1 + +并且,当你输入gpg-agent命令时,你应该看到下面的信息。 + + $ gpg-agent + +---------- + + gpg-agent: gpg-agent running and available + +一旦gpg-agent启动运行,它会在第一次提示你输入密码时缓存你的GPG密码。之后无论你运行多少次Mutt,都不会再被提示输入GPG密码(只要gpg-agent一直开着,缓存就不会过期)。 + +![](https://c1.staticflickr.com/1/664/22809928093_3be57698ce_c.jpg) + +### 结论 ### + +在这篇指导里,我介绍了如何使用GnuPG加密SMTP/IMAP密码这类Mutt敏感配置。注意,如果你想在Mutt中用GnuPG加密或签名你的邮件,可以参考[官方指南][2]了解GPG与Mutt的结合使用。 + +如果你知道任何使用Mutt的安全技巧,欢迎随时分享。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/mutt-email-client-encrypted-passwords.html + +作者:[Dan Nanni][a] +译者:[wyangsun](https://github.com/wyangsun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/gmail-command-line-linux-alpine.html +[2]:http://dev.mutt.org/trac/wiki/MuttGuide/UseGPG diff --git a/translated/tech/20151204 Install and Configure
Munin monitoring server in Linux.md b/translated/tech/20151204 Install and Configure Munin monitoring server in Linux.md new file mode 100644 index 0000000000..d26c1b0d8e --- /dev/null +++ b/translated/tech/20151204 Install and Configure Munin monitoring server in Linux.md @@ -0,0 +1,145 @@ +在 Linux 上安装和配置 Munin 检测服务器 +================================================================================ +![](http://www.linuxnix.com/wp-content/uploads/2015/12/munin_page.jpg) + +Munin 是一款与 [RRD][1] 类似的优秀系统检测工具,它能提供给你多方面的系统性能信息,例如 **磁盘、网络、进程、系统和用户**。这些是 Munin 默认检测的属性。 + +### Munin 如何工作? ### + +Munin 以客户端-服务器模式运行。主服务器上运行的 Munin 服务器进程尝试从本地运行的客户端守护进程(Munin 可以检测它自己的资源)或者远程客户端(Munin 可以检测上百台机器)收集数据,然后在它的 web 页面以图的形式显示出来。 + +### Munin 配置简述 ### + +要配置服务器端和客户端,我们需要完成以下两步。 +1) 安装 Munin 服务器软件包并配置,使得它能从客户端收集数据。 +2) 安装 Munin 客户端,使得服务器能连接到客户端守护进程来收集数据。 + +### 在 Linux 上安装 munin 服务器端 ### + +在基于 Ubuntu/Debian 的机器上安装 Munin 服务器 + + apt-get install munin apache2 + +在基于 Redhat/CentOS 的机器上安装 Munin 服务器。在基于 Redhat 的机器上安装 Munin 之前,你需要确保 [启用 EPEL 软件仓库][2],因为基于 Redhat 的机器的软件仓库默认没有 Munin。 + + yum install munin httpd + +### 在 Linux 上配置 Munin 服务器端 ### + +下面是我们要让服务器端运行起来所需按顺序执行的步骤。 + +1. 在 /etc/munin/munin.conf 中添加需要检测的主机详情 +2. 配置 apache web 服务器,使其包含 munin 配置。 +3. 为 web 页面创建用户名和密码 +4.
重启 apache 服务器 + +**步骤 1**: 在 **/etc/munin/munin.conf** 文件中添加主机条目。跳到文件末尾,添加要检测的客户端。在这个例子中,我添加了要检测的数据库服务器和它的 IP 地址。 + +示例: + + [db.linuxnix.com] + address 192.168.1.25 + use_node_name yes + +保存文件并退出。 + +**步骤 2**: 在 /etc/apache2/conf.d 目录中编辑/创建文件 munin.conf,用于包含 Munin 和 Apache 相关的配置。另外注意一点,默认情况下其它和 web 相关的 Munin 配置保存在 /var/www/munin 目录。 + + vi /etc/apache2/conf.d/munin.conf + +内容: + + Alias /munin /var/www/munin + <Directory /var/www/munin> + Order allow,deny + Allow from localhost 127.0.0.0/8 ::1 + AllowOverride None + Options ExecCGI FollowSymlinks + AddHandler cgi-script .cgi + DirectoryIndex index.cgi + AuthUserFile /etc/munin/munin.passwd + AuthType basic + AuthName "Munin stats" + require valid-user + <IfModule mod_expires.c> + ExpiresActive On + ExpiresDefault M310 + </IfModule> + </Directory> + +保存文件并退出 + +**步骤 3**: 现在创建用于查看 munin 图示的用户名和密码: + + htpasswd -c /etc/munin/munin.passwd munin + +**注意**:对于 Redhat/Centos 机器,为了访问你的配置文件,需要在每个路径中用 "**httpd**" 替换 "**apache2**"。 + +**步骤 4**: 重启 Apache 服务器,使得 Munin 配置生效。 + +#### 基于 Ubuntu/Debian : #### + + service apache2 restart + +#### 基于 Centos/Redhat : #### + + service httpd restart + +### 在 Linux 上安装和配置 Munin 客户端 ### + +**步骤 1**: 在 Linux 上安装 Munin 客户端 + + apt-get install munin-node + +**注意**:如果你想检测你的 Munin 服务器端,你也需要在那里安装 munin-node。 + +**步骤 2**: 编辑 munin-node.conf 文件配置客户端。 + + vi /etc/munin/munin-node.conf + +示例: + + allow ^127\.0\.0\.1$ + allow ^10\.10\.20\.20$ + +---------- + + # Which address to bind to; + host * + +---------- + + # And which port + port 4949 + +**注意**: 10.10.20.20 是我的 Munin 服务器,它连接到客户端的 4949 端口获取数据。 + +**步骤 3**: 在客户端机器中重启 munin-node + + service munin-node restart + +### 测试连接 ### + +检查你是否能从服务器连接到客户端的 4949 端口,如果不行,你需要在客户端机器中打开该端口。 + + telnet db.linuxnix.com 4949 + +访问 Munin web 页面 + + http://munin.linuxnix.com/munin/index.html + +希望这些能对你配置基本的 Munin 服务器有所帮助。 + +-------------------------------------------------------------------------------- + +via: http://www.linuxnix.com/install-and-configure-munin-monitoring-server-in-linux/ + +作者:[Surendra Anne][a]
+译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxnix.com/author/surendra/ +[1]:http://www.linuxnix.com/network-monitoringinfo-gathering-tools-in-linux/ +[2]:http://www.linuxnix.com/how-to-install-and-enable-epel-repo-in-rhel-centos-oracle-scentific-linux/ diff --git a/translated/tech/20151206 NetworkManager and privacy in the IPv6 internet.md b/translated/tech/20151206 NetworkManager and privacy in the IPv6 internet.md new file mode 100644 index 0000000000..cfd203da33 --- /dev/null +++ b/translated/tech/20151206 NetworkManager and privacy in the IPv6 internet.md @@ -0,0 +1,54 @@ +IPv6因特网中的隐私保护和网络管理器 +====================== + +IPv6的使用量正在不断增加,而伴随这个协议使用量的增长,大量隐私问题也随之涌现。互联网社区在积极发布相关解决方案,当前的进展如何?网络管理器又跟进到什么程度了?让我们来瞧瞧吧! + +![](https://blogs.gnome.org/lkundrak/files/2015/12/cameras1.jpg) + +## 通过IPv6方式连接的主机的特性 + +启用了IPv6的结点(译者注:结点在网络中指一个联网的设备)不需要类似IPv4网络中[DHCP](https://tools.ietf.org/html/rfc2132)服务器那样的中央机构来配置它们的地址。它们发现自己所在的网络,然后通过自行生成主机部分来[自主生成地址](https://tools.ietf.org/html/rfc4862)。这种方式使得网络配置更加简单,并且能够更好地扩展到更大规模的网络。然而,这种方式也有一些缺点。首先,结点需要确保它的地址不会和网络上其他结点冲突。其次,如果结点在进入的每一个网络中都使用相同的主机部分,它的行踪就可以被追踪,如此一来,隐私便处于危险之中。 + +负责制定因特网标准的组织Internet工程任务组(Internet Engineering Task Force,IETF)[意识到了这个问题](https://tools.ietf.org/html/draft-iesg-serno-privacy-00),这个组织建议取消使用硬件序列号来识别网络上的结点。 + +但实际的实施情况是怎样的呢?
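在往下讲之前,可以先用一小段脚本直观感受一下这个问题的来源:传统做法会把网卡的 MAC 地址经"修改型 EUI-64"变换后直接用作接口标识符,于是无论接入哪个网络,地址的主机部分都保持不变。下面的 Python 片段只是一个帮助理解的示意(其中的 MAC 地址和网络前缀都是虚构的示例):

```python
# 示意:修改型 EUI-64 —— 从 48 位 MAC 地址推导 64 位 IPv6 接口标识符
def mac_to_modified_eui64(mac: str) -> str:
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02                               # 翻转 U/L(全局/本地)位
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # 在中间插入 ff:fe
    return ":".join("%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2))

if __name__ == "__main__":
    iid = mac_to_modified_eui64("52:54:00:12:34:56")  # 虚构的 MAC 地址
    # 换了网络前缀,主机部分(接口标识符)却一模一样
    print("2001:db8:1:0:" + iid)
    print("2001:db8:2:0:" + iid)
```

可以看到,两个不同前缀下的主机部分完全相同,这正是主机可以被跨网络追踪的根源。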
+ +地址唯一性问题可以通过[重复地址检测(Duplicate Address Detection, DAD)](https://tools.ietf.org/html/rfc4862#section-5.4)机制来解决。当结点为自身创建地址的时候,它首先通过[邻居发现协议(Neighbor Discovery Protocol)](https://tools.ietf.org/html/rfc4861)(一种不同于IPv4 [ARP](https://tools.ietf.org/html/rfc826)协议的机制)来检查是否有另一个结点使用了相同的地址。当它发现地址已经被使用,就必须抛弃掉这个地址。 + +解决另一个问题——隐私问题,则有一点困难。一个IP地址(无论IPv4或IPv6)由网络部分和主机部分组成(译者注:网络部分用来划分子网,主机部分用来从相应子网中找到具体的主机)。主机查找出相关的网络部分,并且生成主机部分。传统上它只使用源自网络硬件(MAC)地址的接口标识符。MAC地址在硬件制造的时候就被设置好了,它可以唯一地识别机器。这样就确保了地址的稳定性和唯一性。这对避免地址冲突来说是件好事,但是对隐私来说一点也不好。主机部分在不同网络下保持恒定,意味着机器在进入不同网络时可以被唯一地识别。这在协议制定的时候看起来无可非议,但是随着IPv6的流行,人们对于隐私问题的担忧也愈演愈烈。幸运的是,解决办法还是有的。 + +## 进入隐私扩展 + +IPv4的最大问题——地址枯竭,已经不是什么秘密。对IPv6来说,这一点不再成立,事实上,使用IPv6的主机能够相当大方地使用地址。多个IPv6地址对应一块网卡绝对没有任何错误,正好相反,这是一种标准情形。最起码每个结点都有一个"链路本地地址",它被用来与同一物理链路上的结点联络。当网络包含了一个连接其他网络的路由器,这个网络中的每一个结点都有一个与每个直接连接网络联络的地址。如果主机在相同网络有更多的地址,它将接受发往这些地址的全部传入流量。对于输出连接,它会把地址显示给远程主机,内核会挑选最适合的地址。但到底是哪一个呢? + +隐私扩展可用时,就像[RFC4941](https://tools.ietf.org/html/rfc4941)定义的那样,会时常生成带有随机主机部分的新地址。最新的那个被用于新的输出连接,与此同时,不再被使用的旧地址将被丢弃。这是一个极好的技巧——当固定地址不用于输出连接的时候,主机就不会显示它,但它仍然会接受知道这个地址的主机发来的连接。 + +但这也存在美中不足之处——某些应用会把地址与用户身份绑定在一起。让我们来考虑一下这种情形:一个web应用在用户认证的时候生成一个HTTP Cookie,但它只接受来自实施认证的那个地址的连接。当内核生成了一个新的临时地址,服务器会拒绝使用这个地址的请求,实际上相当于将用户登出。地址是不是建立用户认证的合适机制值得商榷,但这确实是现实中应用程序正在做的。 + +## 解救之道——隐私固定寻址 + +解决这个问题可能需要另辟蹊径。我们需要的地址是唯一的(这自不必说)、对特定网络保持稳定、但在用户进入另一个网络后会改变的,这样的话追踪就变得几乎不可能。RFC7217介绍了一种如上所述的机制。 + +创建隐私固定地址依赖于一个伪随机值,这个随机值只被主机本身知晓,不会显示给网络上的其他主机。这个随机值随后与一些特定于该网络连接的值一起,交给一个密码学安全的哈希算法处理。这些值包含:用以标识网卡的标识符;网络前缀;对于这个网络来说可能存在的其他特殊值,例如无线的SSID。使用安全的哈希使其他主机很难预测结果地址,与此同时,当进入不同的网络时,网络的特殊数据会让地址变得不同。 + +这也巧妙地解决了地址重复问题。因为有随机值的存在,冲突不太可能发生。万一发生了冲突,结果地址会得到重复地址检测失败的记录,这时会生成一个不同的地址而不会断开网络连接。看,这种方式很聪明吧。 + +使用隐私固定地址一点儿也不会妨碍隐私扩展。你可以在使用RFC4941所描述的临时地址的同时使用[RFC7217](https://tools.ietf.org/html/rfc7217)中的固定地址。 + +## 网络管理器处于什么样的状况?
+ +我们已经在网络管理器1.0.4版本中实现了隐私扩展。在这个版本中,隐私扩展默认开启。你可以用ipv6.ip6-privacy参数来控制它。 + +在网络管理器1.2版本中,我们将会加入固定隐私寻址。应该指出的是,目前的隐私扩展还不符合这种需求。我们可以使用ipv6.addr-gen-mode参数来控制这个特性。如果它被设置成"stable-privacy",那么将会使用固定隐私寻址。设置成"eui64"或者干脆不设置它,将会保持传统的默认寻址方式。 + +敬请期待明年,也就是2016年年初网络管理器1.2版本的发布吧!如果你想尝试一下最新的版本,不妨试试Fedora Rawhide,它最终会变成Fedora 24。 + +*我想感谢Hannes Frederic Sowa,他给了我很有价值的反馈。如果没有他的帮助,这篇文章的作用将会逊色很多。另外,Hannes也是RFC7217所描述机制的内核实现者,在不使用网络管理器的时候,它会发挥作用。* + +-------------------------------------------------------------------------------- + +via: https://blogs.gnome.org/lkundrak/2015/12/03/networkmanager-and-privacy-in-the-ipv6-internet/ +作者:[Lubomir Rintel] +译者:[itsang](https://github.com/itsang) +校对:[校对者ID](https://github.com/校对者ID) +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20151208 How to Install Bugzilla with Apache and SSL on FreeBSD 10.2.md b/translated/tech/20151208 How to Install Bugzilla with Apache and SSL on FreeBSD 10.2.md new file mode 100644 index 0000000000..f709ba5582 --- /dev/null +++ b/translated/tech/20151208 How to Install Bugzilla with Apache and SSL on FreeBSD 10.2.md @@ -0,0 +1,267 @@ +如何在FreeBSD 10.2上配置Apache和SSL并安装Bugzilla +================================================================================ +Bugzilla是一款基于web的开源bug跟踪系统和测试工具,由mozilla计划开发,并以Mozilla公共许可证授权发布。它经常被一些高科技公司如mozilla、红帽公司和gnome使用。Bugzilla起初由Terry Weissman在1998年创立,它用perl语言编写,用MySQL作为后端数据库。它是一款旨在帮助管理软件开发的服务器软件,拥有丰富的功能、高度优化的数据库、卓越的安全性、高级的搜索工具、整合的邮件功能等等。 + +在本教程中,我们将安装bugzilla 5.0,以apache作为web服务器并为它启用SSL,然后在freebsd 10.2上安装mysql 5.1作为数据库系统。 + +#### 准备 #### + + FreeBSD 10.2 - 64位 + Root权限 + +### 第一步 - 更新系统 ### + +用ssh登录freebsd服务器,并更新软件库: + + sudo su + freebsd-update fetch + freebsd-update install + +### 第二步 - 安装并配置Apache ### + +在这一步我们将用pkg命令从freebsd软件库中安装apache,然后编辑apache24目录下的"httpd.conf"文件,启用SSL和CGI支持。 + +用pkg命令安装apache: + + pkg install apache24 + +进入apache目录并用nano编辑器编辑"httpd.conf"文件: + + cd /usr/local/etc/apache24 + nano -c httpd.conf + 
+反注释掉下面列出的行: + + #第70行 + LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so + + #第89行 + LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so + + #第117行 + LoadModule expires_module libexec/apache24/mod_expires.so + + #第141行,启用SSL + LoadModule ssl_module libexec/apache24/mod_ssl.so + + #第162行,支持cgi + LoadModule cgi_module libexec/apache24/mod_cgi.so + + #第174行,启用mod_rewrite + LoadModule rewrite_module libexec/apache24/mod_rewrite.so + + #第219行,服务器名配置 + ServerName 127.0.0.1:80 + +保存并退出。 + +接着,我们需要从freebsd库中安装mod perl,并启用它: + + pkg install ap24-mod_perl2 + +启用mod_perl,编辑"httpd.conf"文件并添加"Loadmodule"行: + + nano -c httpd.conf + +添加该行: + + #第175行 + LoadModule perl_module libexec/apache24/mod_perl.so + +保存并退出。 + +在启用apache之前,用sysrc命令添加以下行来在引导的时候启动: + + sysrc apache24_enable=yes + service apache24 start + +### 第三步 - 安装并配置MySQL数据库 ### + +我们要用mysql 5.1来作为后端数据库并且支持perl模块。用pkg命令安装mysql 5.1: + + pkg install p5-DBD-mysql51 mysql51-server mysql51-client + +现在我们要在启动时添加mysql服务并启动,然后为mysql配置root密码。 + +运行以下命令来完成所有操作: + + sysrc mysql_enable=yes + service mysql-server start + mysqladmin -u root password aqwe123 + +注意: + +这里mysql密码为:aqwe123 + +![Configure MySQL Password](http://blog.linoxide.com/wp-content/uploads/2015/12/Configure-MySQL-Password.png) + +以上步骤都完成之后,我们用root登录mysql shell,然后为bugzilla安装创建一个新的数据库和用户。 + +用以下命令登录mysql shell: + + mysql -u root -p + password: aqwe123 + +添加数据库: + + create database bugzilladb; + create user bugzillauser@localhost identified by 'bugzillauser@'; + grant all privileges on bugzilladb.* to bugzillauser@localhost identified by 'bugzillauser@'; + flush privileges; + \q + +![Creating Database for Bugzilla](http://blog.linoxide.com/wp-content/uploads/2015/12/Creating-Database-for-Bugzilla.png) + +bugzilla的数据库创建好了,名字为"bugzilladb",用户名和密码分别为"bugzillauser"和"bugzillauser@"。 + +### 第四步 - 生成新的SSL证书 ### + +在bugzilla站点的"ssl"目录里生成新的自签名SSL证书。 + +前往apache24目录并在此创建新目录"ssl": + + cd /usr/local/etc/apache24/ + mkdir ssl; cd ssl + 
+接着,用openssl命令生成证书文件,然后更改其权限: + + sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /usr/local/etc/apache24/ssl/bugzilla.key -out /usr/local/etc/apache24/ssl/bugzilla.crt + chmod 600 * + +### 第五步 - 配置虚拟主机 ### + +我们将在"/usr/local/www/bugzilla"目录里安装bugzilla,所以我们必须为它创建新的虚拟主机配置。 + +前往apache目录并为虚拟主机文件创建名为"vhost"的新目录: + + cd /usr/local/etc/apache24/ + mkdir vhost; cd vhost + +现在为虚拟主机文件创建新文件"bugzilla.conf": + + nano -c bugzilla.conf + +将以下配置粘贴进去: + + <VirtualHost *:80> + ServerName mybugzilla.me + ServerAlias www.mybugzilla.me + DocumentRoot /usr/local/www/bugzilla + Redirect permanent / https://mybugzilla.me/ + </VirtualHost> + + Listen 443 + <VirtualHost _default_:443> + ServerName mybugzilla.me + DocumentRoot /usr/local/www/bugzilla + + ErrorLog "/var/log/mybugzilla.me-error_log" + CustomLog "/var/log/mybugzilla.me-access_log" common + + SSLEngine On + SSLCertificateFile /usr/local/etc/apache24/ssl/bugzilla.crt + SSLCertificateKeyFile /usr/local/etc/apache24/ssl/bugzilla.key + + <Directory "/usr/local/www/bugzilla"> + AddHandler cgi-script .cgi + Options +ExecCGI + DirectoryIndex index.cgi index.html + AllowOverride Limit FileInfo Indexes Options + Require all granted + </Directory> + </VirtualHost> + +保存并退出。 + +上述都完成之后,为bugzilla安装创建新目录,并通过把虚拟主机配置包含进httpd.conf文件来启用bugzilla虚拟主机。 + +在"apache24"目录下运行以下命令: + + mkdir -p /usr/local/www/bugzilla + cd /usr/local/etc/apache24/ + nano -c httpd.conf + +文末,添加以下配置: + + Include etc/apache24/vhost/*.conf + +保存并退出。 + +现在用"apachectl"命令测试一下apache的配置并重启它: + + apachectl configtest + service apache24 restart + +### 第六步 - 安装Bugzilla ### + +我们可以下载源码手动安装bugzilla,也可以从freebsd软件库中安装。在这一步中我们将用pkg命令从freebsd软件库中安装bugzilla: + + pkg install bugzilla50 + +以上步骤都完成之后,前往bugzilla安装目录并安装所有bugzilla需要的perl模块。 + + cd /usr/local/www/bugzilla + ./install-module --all + +等待所有安装完成,这需要点时间。 + +下一步,在bugzilla的安装目录中执行"checksetup.pl"文件来生成配置文件"localconfig"。 + + ./checksetup.pl + +你会看到一条关于数据库配置错误的信息,需要用nano编辑器编辑一下"localconfig"文件: + + nano -c localconfig + +现在添加第三步创建的数据库。 + + #第五十七行 + $db_name = 'bugzilladb'; + + #第六十行 + $db_user = 'bugzillauser'; + + #第六十七行 + $db_pass = 'bugzillauser@'; + 
+保存并退出。 + +然后再次运行"checksetup.pl": + + ./checksetup.pl + +你会收到输入邮箱名和管理员账号的提示,你只要输入你得邮箱、用户名和密码就行了。 + +![Admin Setup](http://blog.linoxide.com/wp-content/uploads/2015/12/Admin-Setup.png) + +最后,我们需要把安装目录的属主改成"www",然后用服务命令重启apache: + + cd /usr/local/www/ + chown -R www:www bugzilla + service apache24 restart + +现在Bugzilla已经安装好了,你可以通过访问mybugzilla.me来查看,并且将会重定向到https连接。 + +Bugzilla首页: + +![Bugzilla Home](http://blog.linoxide.com/wp-content/uploads/2015/12/Bugzilla-Home.png) + +Bugzilla admin面板: + +![Bugzilla Admin Page](http://blog.linoxide.com/wp-content/uploads/2015/12/Bugzilla-Admin-Page.png) + +### 结论 ### + +Bugzilla是一个基于web并能帮助你管理软件开发的应用,它用perl开发并使用MySQL作为数据库系统。Bugzilla被mozilla,redhat,gnome等等公司用来帮助它们的软件开发工作。Bugzilla有很多功能并易于配置和安装。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/tools/install-bugzilla-apache-ssl-freebsd-10-2/ + +作者:[Arul][a] +译者:[ZTinoZ](https://github.com/ZTinoZ) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arulm/ diff --git a/translated/tech/20151210 Getting started with Docker by Dockerizing this Blog.md.md b/translated/tech/20151210 Getting started with Docker by Dockerizing this Blog.md.md new file mode 100644 index 0000000000..a74af87b6f --- /dev/null +++ b/translated/tech/20151210 Getting started with Docker by Dockerizing this Blog.md.md @@ -0,0 +1,464 @@ +通过Dockerize这篇博客来开启我们的Docker之旅 +=== +>这篇文章将包含Docker的基本概念,以及如何通过创建一个定制的Dockerfile来Dockerize一个应用 +>作者:Benjamin Cane,2015-12-01 10:00:00 + +Docker是2年前从某个idea中孕育而生的有趣技术,世界各地的公司组织都积极使用它来部署应用。在今天的文章中,我将教你如何通过"Dockerize"一个现有的应用,来开始我们的Docker运用。问题中的应用指的就是这篇博客! + +## 什么是Docker? + +当我们开始学习Docker基本概念时,让我们先去搞清楚什么是Docker以及它为什么这么流行。Docker是一个操作系统容器管理工具,它通过将应用打包在操作系统容器中,来方便我们管理和部署应用。 + +### 容器 vs. 
虚拟机 + +容器虽和虚拟机并不完全相似,但它也是一种提供**操作系统虚拟化**的方式。但是,它和标准的虚拟机还是有不同之处的。 + +标准虚拟机一般会包括一个完整的操作系统,操作系统包,最后还有一至两个应用。这都得益于为虚拟机提供硬件虚拟化的管理程序。这样一来,一个单一的服务器就可以将许多独立的操作系统作为虚拟客户机运行了。 + +容器和虚拟机很相似,它们都支持在单一的服务器上运行多个操作环境,只是,在容器中,这些环境并不是一个个完整的操作系统。容器一般只包含必要的操作系统包和一些应用。它们通常不会包含一个完整的操作系统或者硬件虚拟化程序。这也意味着容器比传统的虚拟机开销更少。 + +容器和虚拟机常被误认为是两种抵触的技术。虚拟机采用同一个物理服务器,来提供全功能的操作环境,该环境会和其余虚拟机一起共享这些物理资源。容器一般用来隔离运行中的应用进程,运行进程将在单独的主机中运行,以保证隔离后的进程之间不能相互影响。事实上,容器和**BSD Jails**以及`chroot`进程的相似度,超过了和完整虚拟机的相似度。 + +### Docker在容器的上层提供了什么 + +Docker不是一个容器运行环境,事实上,只是一个容器技术,并不包含那些帮助Docker支持[Solaris Zones](https://blog.docker.com/2015/08/docker-oracle-solaris-zones/)和[BSD Jails](https://wiki.freebsd.org/Docker)的技术。Docker提供管理,打包和部署容器的方式。虽然一定程度上,虚拟机多多少少拥有这些类似的功能,但虚拟机并没有完整拥有绝大多数的容器功能,即使拥有,这些功能用起来都并没有Docker来的方便。 + +现在,我们应该知道Docker是什么了,然后,我们将从安装Docker,并部署一个公共的预构建好的容器开始,学习Docker是如何工作的。 + +## 从安装开始 + +默认情况下,Docker并不会自动被安装在您的计算机中,所以,第一步就是安装Docker包;我们的教学机器系统是Ubuntu 14.0.4,所以,我们将使用Apt包管理器,来执行安装操作。 + +``` +# apt-get install docker.io +Reading package lists... Done +Building dependency tree +Reading state information... Done +The following extra packages will be installed: + aufs-tools cgroup-lite git git-man liberror-perl +Suggested packages: + btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc + git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki + git-svn +The following NEW packages will be installed: + aufs-tools cgroup-lite docker.io git git-man liberror-perl +0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded. +Need to get 7,553 kB of archives. +After this operation, 46.6 MB of additional disk space will be used. +Do you want to continue? 
[Y/n] y +``` + +为了检查当前是否有容器运行,我们可以执行`docker`命令,加上`ps`选项 + +``` +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +``` + +`docker`命令中的`ps`功能类似于Linux的`ps`命令。它将显示可找到的Docker容器以及各自的状态。由于我们并没有开启任何Docker容器,所以命令没有显示任何正在运行的容器。 + +## 部署一个预构建好的nginx Docker容器 + +我比较喜欢的Docker特性之一就是Docker部署预先构建好的容器的方式,就像`yum`和`apt-get`部署包一样。为了更好地解释,我们来部署一个运行着nginx web服务器的预构建容器。我们可以继续使用`docker`命令,这次选择`run`选项。 + +``` +# docker run -d nginx +Unable to find image 'nginx' locally +Pulling repository nginx +5c82215b03d1: Download complete +e2a4fb18da48: Download complete +58016a5acc80: Download complete +657abfa43d82: Download complete +dcb2fe003d16: Download complete +c79a417d7c6f: Download complete +abb90243122c: Download complete +d6137c9e2964: Download complete +85e566ddc7ef: Download complete +69f100eb42b5: Download complete +cd720b803060: Download complete +7cc81e9a118a: Download complete +``` + +`docker`命令的`run`选项,用来通知Docker去寻找一个指定的Docker镜像,然后开启运行着该镜像的容器。默认情况下,Docker容器在前台运行,这意味着当你运行`docker run`命令的时候,你的shell会被绑定到容器的控制台以及运行在容器中的进程。为了能在后台运行该Docker容器,我们可以使用`-d` (**detach**)标志。 + +再次运行`docker ps`命令,可以看到nginx容器正在运行。 + +``` +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +f6d31ab01fc9 nginx:latest nginx -g 'daemon off 4 seconds ago Up 3 seconds 443/tcp, 80/tcp desperate_lalande +``` + +从上面的打印信息中,我们可以看到正在运行的名为`desperate_lalande`的容器,它是由`nginx:latest image`(译者注:nginx最新版本的镜像)构建而来得。 + +### Docker镜像 + +镜像是Docker的核心特征之一,类似于虚拟机镜像。和虚拟机镜像一样,Docker镜像是一个被保存并打包的容器。当然,Docker不只是创建镜像,它还可以通过Docker仓库发布这些镜像,Docker仓库和包仓库的概念差不多,它让Docker能够模仿`yum`部署包的方式来部署镜像。为了更好地理解这是怎么工作的,我们来回顾`docker run`执行后的输出。 + +``` +# docker run -d nginx +Unable to find image 'nginx' locally +``` + +我们可以看到第一条信息是,Docker不能在本地找到名叫nginx的镜像。这是因为当我们执行`docker run`命令时,告诉Docker运行一个基于nginx镜像的容器。既然Docker要启动一个基于特定镜像的容器,那么Docker首先需要知道那个指定镜像。在检查远程仓库之前,Docker首先检查本地是否存在指定名称的本地镜像。 + +因为系统是崭新的,不存在nginx镜像,Docker将选择从Docker仓库下载之。 + +``` +Pulling repository nginx +5c82215b03d1: Download complete +e2a4fb18da48: Download complete 
+58016a5acc80: Download complete +657abfa43d82: Download complete +dcb2fe003d16: Download complete +c79a417d7c6f: Download complete +abb90243122c: Download complete +d6137c9e2964: Download complete +85e566ddc7ef: Download complete +69f100eb42b5: Download complete +cd720b803060: Download complete +7cc81e9a118a: Download complete +``` + +这就是第二部分打印信息显示给我们的内容。默认,Docker会使用[Docker Hub](https://hub.docker.com/)仓库,该仓库由Docker公司维护。 + +和Github一样,在Docker Hub创建公共仓库是免费的,私人仓库就需要缴纳费用了。当然,部署你自己的Docker仓库也是可以实现的,事实上只需要简单地运行`docker run registry`命令就行了。但在这篇文章中,我们的重点将不是讲解如何部署一个定制的注册服务。 + +### 关闭并移除容器 + +在我们继续构建定制容器之前,我们先清理Docker环境,我们将关闭先前的容器,并移除它。 + +我们利用`docker`命令和`run`选项运行一个容器,所以,为了停止该相同的容器,我们简单地在执行`docker`命令时,使用`kill`选项,并指定容器名。 + +``` +# docker kill desperate_lalande +desperate_lalande +``` + +当我们再次执行`docker ps`,就不再有容器运行了 + +``` +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +``` + +但是,此时,我们这是停止了容器;虽然它不再运行,但仍然存在。默认情况下,`docker ps`只会显示正在运行的容器,如果我们附加`-a` (all) 标识,它会显示所有运行和未运行的容器。 + +``` +# docker ps -a +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +f6d31ab01fc9 5c82215b03d1 nginx -g 'daemon off 4 weeks ago Exited (-1) About a minute ago desperate_lalande +``` + +为了能完整地移除容器,我们在用`docker`命令时,附加`rm`选项。 + +``` +# docker rm desperate_lalande +desperate_lalande +``` + +虽然容器被移除了;但是我们仍拥有可用的**nginx**镜像(译者注:镜像缓存)。如果我们重新运行`docker run -d nginx`,Docker就无需再次拉取nginx镜像,即可启动容器。这是因为我们本地系统中已经保存了一个副本。 + +为了列出系统中所有的本地镜像,我们运行`docker`命令,附加`images`选项。 + +``` +# docker images +REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE +nginx latest 9fab4090484a 5 days ago 132.8 MB +``` + +## 构建我们自己的镜像 + +截至目前,我们已经使用了一些基础的Docker命令来开启,停止和移除一个预构建好的普通镜像。为了"Dockerize"这篇博客,我们需要构建我们自己的镜像,也就是创建一个**Dockerfile**。 + +在大多数虚拟机环境中,如果你想创建一个机器镜像,首先,你需要建立一个新的虚拟机,安装操作系统,安装应用,最后将其转换为一个模板或者镜像。但在Docker中,所有这些步骤都可以通过Dockerfile实现全自动。Dockerfile是向Docker提供构建指令去构建定制镜像的方式。在这一章节,我们将编写能用来部署这篇博客的定制Dockerfile。 + +### 理解应用 + +我们开始构建Dockerfile之前,第一步要搞明白,我们需要哪些东西来部署这篇博客。 + 
+博客本质上是由静态站点生成器生成的静态HTML页面,这个静态站点是我编写的,名为**hamerkop**。这个生成器很简单,它所做的就是生成该博客站点。所有的博客源码都被我放在了一个公共的[Github仓库](https://github.com/madflojo/blog)。为了部署这篇博客,我们要先从Github仓库把博客内容拉取下来,然后安装**Python**和一些**Python**模块,最后执行`hamerkop`应用。我们还需要安装**nginx**,来运行生成后的内容。 + +截止目前,这些还是一个简单的Dockerfile,但它却给我们展示了相当多的[Dockerfile语法]((https://docs.docker.com/v1.8/reference/builder/))。我们需要克隆Github仓库,然后使用你最喜欢的编辑器编写Dockerfile;我选择`vi` + +``` +# git clone https://github.com/madflojo/blog.git +Cloning into 'blog'... +remote: Counting objects: 622, done. +remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622 +Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done. +Resolving deltas: 100% (242/242), done. +Checking connectivity... done. +# cd blog/ +# vi Dockerfile +``` + +### FROM - 继承一个Docker镜像 + +第一条Dockerfile指令是`FROM`指令。这将指定一个现存的镜像作为我们的基础镜像。这也从根本上给我们提供了继承其他Docker镜像的途径。在本例中,我们还是从刚刚我们使用的**nginx**开始,如果我们想重新开始,我们可以通过指定`ubuntu:latest`来使用**Ubuntu** Docker镜像。 + +``` +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane +``` + +除了`FROM`指令,我还使用了`MAINTAINER`,它用来显示Dockerfile的作者。 + +Docker支持使用`#`作为注释,我将经常使用该语法,来解释Dockerfile的部分内容。 + +### 运行一次测试构建 + +因为我们继承了**nginx** Docker镜像,我们现在的Dockerfile也就包括了用来构建**nginx**镜像的[Dockerfile](https://github.com/nginxinc/docker-nginx/blob/08eeb0e3f0a5ee40cbc2bc01f0004c2aa5b78c15/Dockerfile)中所有指令。这意味着,此时我们可以从该Dockerfile中构建出一个Docker镜像,然后从该镜像中运行一个容器。虽然,最终的镜像和**nginx**镜像本质上是一样的,但是我们这次是通过构建Dockerfile的形式,然后我们将讲解Docker构建镜像的过程。 + +想要从Dockerfile构建镜像,我们只需要在运行`docker`命令的时候,加上**build**选项。 + +``` +# docker build -t blog /root/blog +Sending build context to Docker daemon 23.6 MB +Sending build context to Docker daemon +Step 0 : FROM nginx:latest + ---> 9fab4090484a +Step 1 : MAINTAINER Benjamin Cane + ---> Running in c97f36450343 + ---> 60a44f78d194 +Removing intermediate container c97f36450343 +Successfully built 60a44f78d194 +``` + +上面的例子,我们使用了`-t` (**tag**)标识给镜像添加"blog"的标签。本质上我们只是在给镜像命名,如果我们不指定标签,就只能通过Docker分配的**Image 
ID**来访问镜像了。本例中,从Docker构建成功的信息可以看出,**Image ID**值为`60a44f78d194`。 + +除了`-t`标识外,我还指定了目录`/root/blog`。该目录被称作"构建目录",它将包含Dockerfile,以及其他需要构建该容器的文件。 + +现在我们构建成功,下面我们开始定制该镜像。 + +### 使用RUN来执行apt-get + +用来生成HTML页面的静态站点生成器是用**Python**语言编写的,所以,在Dockerfile中需要做的第一件定制任务是安装Python。我们将使用Apt包管理器来安装Python包,这意味着在Dockerfile中我们要指定运行`apt-get update`和`apt-get install python-dev`;为了完成这一点,我们可以使用`RUN`指令。 + +``` +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane + +## Install python and pip +RUN apt-get update +RUN apt-get install -y python-dev python-pip +``` + +如上所示,我们只是简单地告知Docker构建镜像的时候,要去执行指定的`apt-get`命令。比较有趣的是,这些命令只会在该容器的上下文中执行。这意味着,即使容器中安装了`python-dev`和`python-pip`,但主机本身并没有安装这些。说的更简单点,`pip`命令将只在容器中执行,出了容器,`pip`命令不存在。 + +还有一点比较重要的是,Docker构建过程中不接受用户输入。这说明任何被`RUN`指令执行的命令必须在没有用户输入的时候完成。由于很多应用在安装的过程中需要用户的输入信息,所以这增加了一点难度。我们例子,`RUN`命令执行的命令都不需要用户输入。 + +### 安装Python模块 + +**Python**安装完毕后,我们现在需要安装Python模块。如果在Docker外做这些事,我们通常使用`pip`命令,然后参考博客Git仓库中名叫`requirements.txt`的文件。在之前的步骤中,我们已经使用`git`命令成功地将Github仓库"克隆"到了`/root/blog`目录;这个目录碰巧也是我们创建`Dockerfile`的目录。这很重要,因为这意味着Dokcer在构建过程中可以访问Git仓库中的内容。 + +当我们执行构建后,Docker将构建的上下文环境设置为指定的"构建目录"。这意味着目录中的所有文件都可以在构建过程中被使用,目录之外的文件(构建环境之外)是不能访问的。 + +为了能安装需要的Python模块,我们需要将`requirements.txt`从构建目录拷贝到容器中。我们可以在`Dockerfile`中使用`COPY`指令完成这一需求。 + +``` +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane + +## Install python and pip +RUN apt-get update +RUN apt-get install -y python-dev python-pip + +## Create a directory for required files +RUN mkdir -p /build/ + +## Add requirements file and run pip +COPY requirements.txt /build/ +RUN pip install -r /build/requirements.txt +``` + +在`Dockerfile`中,我们增加了3条指令。第一条指令使用`RUN`在容器中创建了`/build/`目录。该目录用来拷贝生成静态HTML页面需要的一切应用文件。第二条指令是`COPY`指令,它将`requirements.txt`从"构建目录"(`/root/blog`)拷贝到容器中的`/build/`目录。第三条使用`RUN`指令来执行`pip`命令;安装`requirements.txt`文件中指定的所有模块。 + 
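为了便于理解,下面用 shell 在一个目录中生成一个示意性的 `requirements.txt`(模块清单是根据后文 `pip` 安装输出推测的,目录也以 `/tmp` 代替文中的构建目录,实际内容请以博客仓库中的文件为准):

```shell
# 进入演示目录(此处以 /tmp 代替文中的构建目录 /root/blog,仅作示意)
cd /tmp

# 生成一个示意性的 requirements.txt,列出静态站点生成器可能依赖的模块
cat > requirements.txt <<'EOF'
jinja2
PyYaml
mistune
markdown
EOF

# 确认文件内容
cat requirements.txt
```

这样,Dockerfile 中的 `COPY requirements.txt /build/` 就能把该文件从构建目录复制进镜像,供 `RUN pip install -r /build/requirements.txt` 使用。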
+当构建定制镜像时,`COPY`是条重要的指令。如果在Dockerfile中不指定拷贝文件,Docker镜像将不会包含requirements.txt文件。在Docker容器中,所有东西都是隔离的,除非在Dockerfile中指定执行,否则容器中不会包括需要的依赖。 + +### 重新运行构建 + +现在,我们让Docker执行了一些定制任务,现在我们尝试另一次blog镜像的构建。 + +``` +# docker build -t blog /root/blog +Sending build context to Docker daemon 19.52 MB +Sending build context to Docker daemon +Step 0 : FROM nginx:latest + ---> 9fab4090484a +Step 1 : MAINTAINER Benjamin Cane + ---> Using cache + ---> 8e0f1899d1eb +Step 2 : RUN apt-get update + ---> Using cache + ---> 78b36ef1a1a2 +Step 3 : RUN apt-get install -y python-dev python-pip + ---> Using cache + ---> ef4f9382658a +Step 4 : RUN mkdir -p /build/ + ---> Running in bde05cf1e8fe + ---> f4b66e09fa61 +Removing intermediate container bde05cf1e8fe +Step 5 : COPY requirements.txt /build/ + ---> cef11c3fb97c +Removing intermediate container 9aa8ff43f4b0 +Step 6 : RUN pip install -r /build/requirements.txt + ---> Running in c50b15ddd8b1 +Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1)) +Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2)) + +Successfully installed jinja2 PyYaml mistune markdown MarkupSafe +Cleaning up... 
+ ---> abab55c20962 +Removing intermediate container c50b15ddd8b1 +Successfully built abab55c20962 +``` + +上述输出所示,我们可以看到构建成功了,我们还可以看到另外一个有趣的信息` ---> Using cache`。这条信息告诉我们,Docker在构建该镜像时使用了它的构建缓存。 + +### Docker构建缓存 + +当Docker构建镜像时,它不仅仅构建一个单独的镜像;事实上,在构建过程中,它会构建许多镜像。从上面的输出信息可以看出,在每一"步"执行后,Docker都在创建新的镜像。 + +``` + Step 5 : COPY requirements.txt /build/ + ---> cef11c3fb97c +``` + +上面片段的最后一行可以看出,Docker在告诉我们它在创建一个新镜像,因为它打印了**Image ID**;`cef11c3fb97c`。这种方式有用之处在于,Docker能在随后构建**blog**镜像时将这些镜像作为缓存使用。这很有用处,因为这样,Docker就能加速同一个容器中新构建任务的构建流程。从上面的例子中,我们可以看出,Docker没有重新安装`python-dev`和`python-pip`包,Docker则使用了缓存镜像。但是由于Docker并没有找到执行`mkdir`命令的构建缓存,随后的步骤就被一一执行了。 + +Docker构建缓存一定程度上是福音,但有时也是噩梦。这是因为使用缓存或者重新运行指令的决定在一个很狭窄的范围内执行。比如,如果`requirements.txt`文件发生了修改,Docker会在构建时检测到该变化,然后Docker会重新执行该执行那个点往后的所有指令。这得益于Docker能查看`requirements.txt`的文件内容。但是,`apt-get`命令的执行就是另一回事了。如果提供Python包的**Apt** 仓库包含了一个更新的python-pip包;Docker不会检测到这个变化,转而去使用构建缓存。这会导致之前旧版本的包将被安装。虽然对`python-pip`来说,这不是主要的问题,但对使用了某个致命攻击缺陷的包缓存来说,这是个大问题。 + +出于这个原因,抛弃Docker缓存,定期地重新构建镜像是有好处的。这时,当我们执行Docker构建时,我简单地指定`--no-cache=True`即可。 + +## 部署博客的剩余部分 + +Python包和模块安装后,接下来我们将拷贝需要用到的应用文件,然后运行`hamerkop`应用。我们只需要使用更多的`COPY` and `RUN`指令就可完成。 + +``` +## Dockerfile that generates an instance of http://bencane.com + +FROM nginx:latest +MAINTAINER Benjamin Cane + +## Install python and pip +RUN apt-get update +RUN apt-get install -y python-dev python-pip + +## Create a directory for required files +RUN mkdir -p /build/ + +## Add requirements file and run pip +COPY requirements.txt /build/ +RUN pip install -r /build/requirements.txt + +## Add blog code nd required files +COPY static /build/static +COPY templates /build/templates +COPY hamerkop /build/ +COPY config.yml /build/ +COPY articles /build/articles + +## Run Generator +RUN /build/hamerkop -c /build/config.yml +``` + +现在我们已经写出了剩余的构建指令,我们再次运行另一次构建,并确保镜像构建成功。 + +``` +# docker build -t blog /root/blog/ +Sending build context to Docker daemon 19.52 MB +Sending build context to Docker daemon +Step 0 : FROM 
nginx:latest + ---> 9fab4090484a +Step 1 : MAINTAINER Benjamin Cane + ---> Using cache + ---> 8e0f1899d1eb +Step 2 : RUN apt-get update + ---> Using cache + ---> 78b36ef1a1a2 +Step 3 : RUN apt-get install -y python-dev python-pip + ---> Using cache + ---> ef4f9382658a +Step 4 : RUN mkdir -p /build/ + ---> Using cache + ---> f4b66e09fa61 +Step 5 : COPY requirements.txt /build/ + ---> Using cache + ---> cef11c3fb97c +Step 6 : RUN pip install -r /build/requirements.txt + ---> Using cache + ---> abab55c20962 +Step 7 : COPY static /build/static + ---> 15cb91531038 +Removing intermediate container d478b42b7906 +Step 8 : COPY templates /build/templates + ---> ecded5d1a52e +Removing intermediate container ac2390607e9f +Step 9 : COPY hamerkop /build/ + ---> 59efd1ca1771 +Removing intermediate container b5fbf7e817b7 +Step 10 : COPY config.yml /build/ + ---> bfa3db6c05b7 +Removing intermediate container 1aebef300933 +Step 11 : COPY articles /build/articles + ---> 6b61cc9dde27 +Removing intermediate container be78d0eb1213 +Step 12 : RUN /build/hamerkop -c /build/config.yml + ---> Running in fbc0b5e574c5 +Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux +Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux + +Successfully created file /usr/share/nginx/html//archive.html +Successfully created file /usr/share/nginx/html//sitemap.xml + ---> 3b25263113e1 +Removing intermediate container fbc0b5e574c5 +Successfully built 3b25263113e1 +``` + +### 运行定制的容器 + +成功的一次构建后,我们现在就可以通过运行`docker`命令和`run`选项来运行我们定制的容器,和之前我们启动nginx容器一样。 + +``` +# docker run -d -p 80:80 --name=blog blog +5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1 +``` + +我们这次又使用了`-d` (**detach**)标识来让Docker在后台运行。但是,我们也可以看到两个新标识。第一个新标识是`--name`,这用来给容器指定一个用户名称。之前的例子,我们没有指定名称,因为Docker随机帮我们生成了一个。第二个新标识是`-p`,这个标识允许用户从主机映射一个端口到容器中的一个端口。 + 
+之前我们使用的基础**nginx**镜像分配了80端口给HTTP服务。默认情况下,容器内的端口通道并没有绑定到主机系统。为了让外部系统能访问容器内部端口,我们必须使用`-p`标识将主机端口映射到容器内部端口。上面的命令,我们通过`-p 8080:80`语法将主机80端口映射到容器内部的80端口。 + +经过上面的命令,我们的容器似乎成功启动了,我们可以通过执行`docker ps`核实。 + +``` +# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +d264c7ef92bd blog:latest nginx -g 'daemon off 3 seconds ago Up 3 seconds 443/tcp, 0.0.0.0:80->80/tcp blog +``` + +## 总结 + +截止目前,我们拥有了正在运行的定制Docker容器。虽然在这篇文章中,我们只接触了一些Dockerfile指令用法,但是我们还是要讨论所有的指令。我们可以检查[Docker's reference page](https://docs.docker.com/v1.8/reference/builder/)来获取所有的Dockerfile指令用法,那里对指令的用法说明得很详细。 + +另一个比较好的资源是[Dockerfile Best Practices page](https://docs.docker.com/engine/articles/dockerfile_best-practices/),它有许多构建定制Dockerfile的最佳练习。有些技巧非常有用,比如战略性地组织好Dockerfile中的命令。上面的例子中,我们将`articles`目录的`COPY`指令作为Dockerfile中最后的`COPY`指令。这是因为`articles`目录会经常变动。所以,将那些经常变化的指令尽可能地放在最后面的位置,来最优化那些可以被缓存的步骤。 + +通过这篇文章,我们涉及了如何运行一个预构建的容器,以及如何构建,然后部署定制容器。虽然关于Docker你还有许多需要继续学习的地方,但我想这篇文章给了你如何继续开始的好建议。当然,如果你认为还有一些需要继续补充的内容,在下面评论即可。 + +-------------------------------------- +via:http://bencane.com/2015/12/01/getting-started-with-docker-by-dockerizing-this-blog/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+bencane%2FSAUo+%28Benjamin+Cane%29 + +作者:Benjamin Cane + +译者:[su-kaiyao](https://github.com/su-kaiyao) + +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + diff --git a/translated/tech/20151214 Linux or Unix Desktop Fun--Christmas Tree For Your Terminal.md b/translated/tech/20151214 Linux or Unix Desktop Fun--Christmas Tree For Your Terminal.md new file mode 100644 index 0000000000..dbbab5ef8c --- /dev/null +++ b/translated/tech/20151214 Linux or Unix Desktop Fun--Christmas Tree For Your Terminal.md @@ -0,0 +1,84 @@ +Linux / Unix桌面之趣:终端上的圣诞树 +================================================================================ 
+给你的Linux或Unix控制台创造一棵圣诞树玩玩吧。在此之前,需要先安装一个Perl模块,命名为Acme::POE::Tree。这是一棵很喜庆的圣诞树,我已经在Linux、OSX和类Unix系统上验证过了。 + + +### 安装 Acme::POE::Tree ### + +安装perl模块最简单的办法就是使用cpan(Perl综合典藏网)。打开终端,把下面的指令敲进去便可安装Acme::POE::Tree。 + + ## 以root身份运行 ## + perl -MCPAN -e 'install Acme::POE::Tree' + +**案例输出:** + + Installing /home/vivek/perl5/man/man3/POE::NFA.3pm + Installing /home/vivek/perl5/man/man3/POE::Kernel.3pm + Installing /home/vivek/perl5/man/man3/POE::Loop.3pm + Installing /home/vivek/perl5/man/man3/POE::Resource.3pm + Installing /home/vivek/perl5/man/man3/POE::Filter::Map.3pm + Installing /home/vivek/perl5/man/man3/POE::Resource::SIDs.3pm + Installing /home/vivek/perl5/man/man3/POE::Loop::IO_Poll.3pm + Installing /home/vivek/perl5/man/man3/POE::Pipe::TwoWay.3pm + Appending installation info to /home/vivek/perl5/lib/perl5/x86_64-linux-gnu-thread-multi/perllocal.pod + RCAPUTO/POE-1.367.tar.gz + /usr/bin/make install -- OK + RCAPUTO/Acme-POE-Tree-1.022.tar.gz + Has already been unwrapped into directory /home/vivek/.cpan/build/Acme-POE-Tree-1.022-uhlZUz + RCAPUTO/Acme-POE-Tree-1.022.tar.gz + Has already been prepared + Running make for R/RC/RCAPUTO/Acme-POE-Tree-1.022.tar.gz + cp lib/Acme/POE/Tree.pm blib/lib/Acme/POE/Tree.pm + Manifying 1 pod document + RCAPUTO/Acme-POE-Tree-1.022.tar.gz + /usr/bin/make -- OK + Running make test + PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t + t/01_basic.t .. ok + All tests successful. 
+ Files=1, Tests=2, 6 wallclock secs ( 0.09 usr 0.03 sys + 0.53 cusr 0.06 csys = 0.71 CPU) + Result: PASS + RCAPUTO/Acme-POE-Tree-1.022.tar.gz + Tests succeeded but one dependency not OK (Curses) + RCAPUTO/Acme-POE-Tree-1.022.tar.gz + [dependencies] -- NA + +### 在Shell中显示圣诞树 ### + +只需要在终端上运行以下命令: + + perl -MAcme::POE::Tree -e 'Acme::POE::Tree->new()->run()' + +**案例输出** + +![Gif 01: An animated christmas tree in Perl](http://s0.cyberciti.org/uploads/cms/2015/12/perl-tree.gif) + +Gif 01: 一棵用Perl写的喜庆圣诞树 + +### 树的定制 ### + +以下是我的脚本文件tree.pl的内容: + + #!/usr/bin/perl + + use Acme::POE::Tree; + my $tree = Acme::POE::Tree->new( + { + star_delay => 1.5, # shimmer star every 1.5 sec + light_delay => 2, # twinkle lights every 2 sec + run_for => 10, # automatically exit after 10 sec + } + ); + $tree->run(); + +这样就可以通过修改star_delay、run_for和light_delay参数的值来自定义你的树了。一棵提供消遣的终端圣诞树就此诞生。 + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/open-source/command-line-hacks/linux-unix-desktop-fun-christmas-tree-for-your-terminal/ + +作者:Vivek Gite +译者:[soooogreen](https://github.com/soooogreen) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20151215 How to Install Light Table 0.8 in Ubuntu 14.04, 15.10.md b/translated/tech/20151215 How to Install Light Table 0.8 in Ubuntu 14.04, 15.10.md new file mode 100644 index 0000000000..3b3f72ac4c --- /dev/null +++ b/translated/tech/20151215 How to Install Light Table 0.8 in Ubuntu 14.04, 15.10.md @@ -0,0 +1,107 @@ + 如何在 Ubuntu 14.04, 15.10 中安装Light Table 0.8 +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2014/11/LightTable-IDE-logo-icon.png) + +Light Table 在经过一年以上的开发,已经推出了新的稳定发行版本。现在它只为Linux提供64位的二进制包。 +LightTable 0.8.0的改动: + +- 更改: 我们从 NW.js 中选择了 Electron +- 更改: LT’s 发行版本与自更新进程在github上面完全的公开 
+- 增加: LT 可以由提供的脚本从源码在支持的不同平台上安装 +- 增加: LT’s 大部分的代码库将用npm依赖来安装以取代以forked库安装 +- 增加: 有效文档. 更多详情内容见下面 +- 修复: 版本号>= OSX 10.10的系统下工作的主要的可用性问题 +- 更改: 32位Linux不再提供官方包文件下载,从源码安装仍旧将被支持 +- 修复: ClojureScript eval 在ClojureScript的现代版本可以正常工作 +- 参阅更多 [github.com/LightTable/LightTable/releases][1] + +![LightTable 0.8.0](http://ubuntuhandbook.org/wp-content/uploads/2015/12/lighttable-08.jpg) + +### 如何在Ubuntu中安Light Table 0.8.0: ### + +下面的步骤回指导你怎么样在Ubuntu下安装官方的二进制包,在目前Ubuntu发行版本都适用(**仅仅针对64位**)。 + +在开始之前,如果你安装了之前的版本请做好备份。 + +**1.** +从以下链接下载LightTable Linux下的二进制文件: + +- [lighttable-0.8.0-linux.tar.gz][2] + +**2.** +从dash或是应用启动器,或者是Ctrl+Alt+T快捷键打开终端,并且在输入以下命令后敲击回车键: + + gksudo file-roller ~/Downloads/lighttable-0.8.0-linux.tar.gz + +![open-via-fileroller](http://ubuntuhandbook.org/wp-content/uploads/2015/12/open-via-fileroller.jpg) + +如果命令不工作的话从Ubuntu软件中心安装`gksu`。 + +**3.** +之前的命令使用了root用户权限通过档案管理器打开了下载好的存档。 + + +打开它后,请做以下步骤: + +- 右击文件并且将其重命名为 **LightTable** +- 将其解压到 **Computer -> /opt/** 目录下。 + +![extract-lighttable](http://ubuntuhandbook.org/wp-content/uploads/2015/12/extract-lighttable.jpg) + +最终你应该安装好了LightTable,可以在/opt/ 目录下查看: + +![lighttable-in-opt](http://ubuntuhandbook.org/wp-content/uploads/2015/12/lighttable-in-opt.jpg) + +**4.** 创建一个启动器使你可以从dash工具或是应用启动器打开LightTable。 + +打开终端,运行以下命令来创建与编辑一个LightTable的启动文件: + + gksudo gedit /usr/share/applications/lighttable.desktop + +通过Gedit文本编辑器打开文件后, 粘贴下面的内容并保存: + + [Desktop Entry] + Version=1.0 + Type=Application + Name=Light Table + GenericName=Text Editor + Comment=Open source IDE that modify, from running programs to embed websites and games + Exec=/opt/LightTable/LightTable %F + Terminal=false + MimeType=text/plain; + Icon=/opt/LightTable/resources/app/core/img/lticon.png + Categories=TextEditor;Development;Utility; + StartupNotify=true + Actions=Window;Document; + + Name[en_US]=Light Table + + [Desktop Action Window] + Name=New Window + Exec=/opt/LightTable/LightTable -n + OnlyShowIn=Unity; + + [Desktop Action Document] + Name=New 
File + Exec=/opt/LightTable/LightTable --command new_file + OnlyShowIn=Unity; + +因此它看起来像: + +![lighttable-launcher](http://ubuntuhandbook.org/wp-content/uploads/2015/12/lighttable-launcher.jpg) + +最后,从dash工具或者是应用启动器打开IDE,好好享受它吧! + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/12/install-light-table-0-8-ubuntu-14-04/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://github.com/LightTable/LightTable/releases +[2]:https://github.com/LightTable/LightTable/releases/download/0.8.0/lighttable-0.8.0-linux.tar.gz diff --git a/translated/tech/20160104 How to Check Hardware Information on Linux Using Command Line.md b/translated/tech/20160104 How to Check Hardware Information on Linux Using Command Line.md new file mode 100644 index 0000000000..9806dd8b0d --- /dev/null +++ b/translated/tech/20160104 How to Check Hardware Information on Linux Using Command Line.md @@ -0,0 +1,149 @@ +Linux系统下使用命令行来查看硬件信息 +================================================================================ +![](https://maketecheasier-holisticmedia.netdna-ssl.com/assets/uploads/2015/12/hdd_info_featured-1.png) + +有许多可用的命令可以用来查看Linux系统上的硬件信息。有些命令只能够打印出像CPU和内存这一特定的硬件组件信息,其余的命令可以查看其余硬件单元的信息。 + +这个教程可以带大家快速学习一下查看各种硬件设备的信息和配置详情的最常用的命令。 +### lscpu ### + +`lscpu`命令能够查看CPU和处理单元的信息。该命令没有任何其他选项或者别的功能。 + + lscpu + +运行该命令会看到下面输出: + +![hdd_info_lscpu](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_lscpu.png) + +### lspci ### + +`lspci`是另一个命令行工具,可以用来列出所有的PCI总线,还有与PCI总线相连的设备的详细信息,比如VGA适配器、显卡、网络适配器、usb端口、SATA控制器等。 + + lspci + +你可以看到类似下图的输出信息。 + +![hdd_info_lspci](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_lspci-1.png) + +可以通过运行下面的命令来过滤出特定设备的信息: + + lspci -v | grep "VGA" -A 12 + +运行上面的命令可以看到类似下图的关于显卡的信息。 
+ +![hdd_info_lspci_vga](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_lspci_vga.png) + +### lshw ### + +`lshw`是一个通用的工具,可以列出多种硬件单元的详细或者概要的信息,比如CPU、内存、usb控制器、硬盘等。`lshw`能够从不同的“/proc”文件中提取出相关的信息。 + + lshw -short + +通过运行上面的命令可以看到下面的信息。 + +![hdd_info_lshw](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_lshw.png) + +### lsscsi ### + +通过运行下面的命令可以列出像硬盘和光驱等scsi/sata设备的信息: + + lsscsi + +会得到类似下面的输出。 + +![hdd_info_lsscsi](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_lsscsi-1.png) + +### lsusb ### + +`lsusb`命令能够列出USB控制器和与USB控制相连的设备的详细信息。默认情况下,`lsusb`命令只打印出概要信息。可以通过使用-v参数打印每一个usb端口的详细信息。 + + lsusb + +可以看到下面输出。 + +![hdd_info_lsusb](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_lsusb.png) + +### Inxi ### + +`Inxi`是一个bash脚本,能够从系统的多个源文件和命令行抓取硬件信息,并打印出一个非技术人员也能看懂的友好的报告。 + +默认情况下,Ubuntu上没有安装`inxi`。可以通过运行下面命令来安装`Inxi`: + + sudo apt-get install inxi + +安装完`Inxi`之后,通过运行下面命令能够得到硬件相关的信息: + + inxi -Fx + +能够得到类似下图的输出。 + +![hdd_info_inxi](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_inxi.jpg) + +### df ### + +`df`命令能够列出不同分区的概要信息,挂载点,已用的和可用的空间。 +可以在使用`df`命令的时候加上`-H`参数。 + + df -H + +会得到下面的输出。 + +![hdd_info_df](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_df-1.png) + +### Free ### + +通过使用`free`命令可以查看系统中使用的、闲置的和总体的RAM数量。 + + free -m + +会看到下面输出。 + +![hdd_info_free](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_free.png) + +### Dmidecode ### + +`dmidecode`命令与其他命令不同。该命令是从DMI表中读取硬件信息的。 + +要查看处理器的信息,运行下面命令: + + sudo dmidecode -t processor + +![hdd_info_dmi_processor](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_dmi_processor.jpg) + +要查看内存的信息,运行下面命令: + + sudo dmidecode -t memory + +![hdd_info_dmi_memory](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_dmi_memory.png) + +要查看bios的信息,运行下面命令: + + sudo dmidecode -t bios + +![hdd_info_dmi_bios](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_dmi_bios.png) + +### Hdparm ### + 
+`hdparm`命令可以用来显示想硬盘这样的sata设备的信息。 + + sudo hdparm + +可以看到下面的输出。 + +![hdd_info_hdparm](https://www.maketecheasier.com/assets/uploads/2015/12/hdd_info_hdparm.png) + +### 总结 ### + +每个命令都有不同的方式来获取硬件的信息。在查看特定的硬件信息的时候,可以尝试使用不同的方式。上面所有的命令行工具在大部分的Linux发型版本中都是可以使用的,可以很容易的从仓库中获取安装。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/check-hardware-information-linux/ + +作者:[Hitesh Jethva][a] +译者:[sonofelice](https://github.com/sonofelice) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/hiteshjethva/ diff --git a/translated/tech/20160104 How to use KVM from the command line on Debian or Ubuntu.md b/translated/tech/20160104 How to use KVM from the command line on Debian or Ubuntu.md new file mode 100644 index 0000000000..6f9f3690cb --- /dev/null +++ b/translated/tech/20160104 How to use KVM from the command line on Debian or Ubuntu.md @@ -0,0 +1,297 @@ + +怎样在ubuntu和debian中用命令行使用KVM +================================================================================ +有很多不同的方式去管理运行在KVM管理程序上的虚拟机。例如,virt-manager就是一个流行的基于图形用户界面的前端虚拟机管理工具。然而,如果你想要在没有图形窗口的服务器环境下使用KVM,那么基于图形用户界面的解决方案显然是行不通的。事实上,你可以纯粹的使用包装了kvm命令行脚本的命令行来管理KVM虚拟机。作为替代方案,你可以使用virsh这个容易使用的命令行用户接口来管理客户虚拟机。在virsh中,它通过和libvirtd服务通信来达到控制虚拟机的目的,而libvirtd可以控制几个不同的虚拟机管理器,包括KVM,Xen,QEMU,LXC和OpenVZ。 + 当你想要对虚拟机的前期准备和后期管理实现自动化操作时,像virsh这样的命令行管理工具是非常有用的。同样,virsh支持多个管理器的事实也意味着你可以通过相同的virsh接口去管理不同的虚拟机管理器。 + 在这篇文章中,我会示范**怎样在ubuntu和debian上通过使用virsh命令行去运行KVM**。 + +### 第一步:确认你的硬件平台支持虚拟化 ### + +作为第一步,首先要确认你的主机CPU配备了硬件虚拟化拓展(e.g.,Intel VT或者AMD-V),这是KVM对硬件的要求。下面的命令可以检查硬件是否支持虚拟化。 + + $ egrep '(vmx|svm)' --color /proc/cpuinfo + +![](https://c2.staticflickr.com/2/1505/24050288606_758a44c4c6_c.jpg) + + 如果在输出中不包含vmx或者svm标识,那么就意味着你的主机cpu不支持硬件虚拟化。因此你不能在你的机器上使用KVM。确认了主机cpu存在vmx或者svm之后,接下来开始安装KVM。 +对于KVM来说,它不要求运行在拥有64位内核系统的主机上,但是通常我们会推荐在64位系统的主机上面运行KVM。 + +### 
第二步:安装KVM ### + +使用apt-get安装KVM和相关的用户空间工具。 + + $ sudo apt-get install qemu-kvm libvirt-bin + +安装期间,libvirtd组(在debian上是libvirtd-qemu组)将会被创建,并且你的用户id将会被自动添加到该组中。这样做的目的是让你可以以一个普通用户而不是root用户的身份去管理虚拟机。你可以使用id命令来确认这一点,下面将会告诉你怎么去显示你的组id: + + $ id + +![](https://c2.staticflickr.com/6/5597/15432586092_64dfb867d3_c.jpg) + +如果因为某些原因,libvirt(在debian中是libvirt-qemu)没有在你的组id中被找到,你也可以手动将你自己添加到对应的组中,如下所示: +在ubuntu上: + + $ sudo adduser [youruserID] libvirtd + +在debian上: + + $ sudo adduser [youruserID] libvirt-qemu + +按照如下形式重修载入更新后的组成员关系。如果要求输入密码,那么输入你的登陆密码即可。 + + $ exec su -l $USER + +这时,你应该可以以普通用户的身份去执行virsh了。做一个如下所示的测试,这个命令将会以列表的形式列出可用的虚拟机(当前的列表是空的)。如果你没有遇到权限问题,那意味着迄今为止一切都是正常的。 + + $ virsh list + +---------- + + Id Name State + ---------------------------------------------------- + +### 第三步:配置桥接网络 ### + +为了使KVM虚拟机能够访问外部网络,一种方法是通过在KVM宿主机上创建Linux桥来实现。创建之后的桥能够将虚拟机的虚拟网卡和宿主机的物理网卡连接起来,因此,虚拟机能够发送和接受由物理网卡发送过来的流量数据包。这种方式叫做网桥连接。 +下面将告诉你如何创建并且配置网桥,我们称它为br0. +首先,安装一个必需的包,然后用命令行创建一个网桥。 + + $ sudo apt-get install bridge-utils + $ sudo brctl addbr br0 + +下一步就是配置已经创建好的网桥,即修改位于/etc/network/interfaces的配置文件。我们需要将该桥接网卡设置成开机启动。为了修改该配置文件,你需要关闭你的操作系统上的网络管理器(如果你在使用它的话)。跟随[操作指南][1]的说明去关闭网络管理器。 + 关闭网络管理器之后,接下来就是通过修改配置文件来配置网桥了。 + + #auto eth0 + #iface eth0 inet dhcp + + auto br0 + iface br0 inet dhcp + bridge_ports eth0 + bridge_stp off + bridge_fd 0 + bridge_maxwait 0 + + +在上面的配置中,我假设eth0是主要网卡,它也是连接到外网的网卡,同样,我假设eth0将会通过DHCP得到它的ip地址。注意,之前在/etc/network/interfaces中还没有对eth0进行任何配置。桥接网卡br0引用了eth0的配置,而eth0也会受到br0的制约。 +重启网络服务,并确认网桥已经被成功的配置好。如果成功的话,br0的ip地址将会是eth0的被自动分配的ip地址,而且eth0不会被分配任何ip地址。 + + $ sudo /etc/init.d/networking restart + $ ifconfig + +如果因为某些原因,eth0仍然保留了之前分配给了br0的ip地址,那么你可能必须明确的删除eth0的ip地址。 + +![](https://c2.staticflickr.com/2/1698/23780708850_66cd7ba6ea_c.jpg) + +###第四步:用命令行创建一个虚拟机 ### + +对于虚拟机来说,它的配置信息被存储在它对应的xml文件中。因此,创建一个虚拟机的第一步就是准备一个与主机名对应的xml文件。 +下面是一个示例xml文件,你可以根据需要手动修改它。 + + + alice + f5b8c05b-9c7a-3211-49b9-2bd635f7e2aa + 1048576 + 1048576 + 1 + + hvm + + + + + + + destroy + restart + 
 destroy

      <emulator>/usr/bin/kvm</emulator>
      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2'/>
        <source file='/home/dev/images/alice.img'/>
        <target dev='hda' bus='ide'/>
      </disk>
      <disk type='file' device='cdrom'>
        <source file='/home/dev/iso/CentOS-6.5-x86_64-minimal.iso'/>
        <target dev='hdc' bus='ide'/>
        <readonly/>
      </disk>
      <interface type='bridge'>
        <source bridge='br0'/>
      </interface>
      <graphics type='vnc' port='-1' autoport='yes'/>
+ + + + + + + + + +上面的主机xml配置文件定义了如下的虚拟机内容。 + +- 1GB内存,一个虚拟cpu和一个硬件驱动。 +- Disk image:/home/dev/images/alice.img。 +- Boot from CD-ROM(/home/dev/iso/CentOS-6.5-x86_64-minomal.iso)。 +- Networking:一个桥接到br0的虚拟网卡。 +- 通过VNC远程访问。 +中的UUID字符串可以随机生成。为了得到一个随机的uuid字符串,你可能需要使用uuid命令行工具。 + + $ sudo apt-get install uuid + $ uuid + +生成一个主机xml配置文件的方式就是通过一个已经存在的虚拟机来导出它的xml配置文件。如下所示。 + + $ virsh dumpxml alice > bob.xml + +![](https://c2.staticflickr.com/6/5808/23968234602_25e8019ec8_c.jpg) + +###第五步:使用命令行启动虚拟机### + +在启动虚拟机之前,我们需要创建它的初始磁盘镜像。为此,你需要使用qemu-img命令来生成一个你已经安装的qemu-kvm镜像。下面的命令将会创建10GB大小的空磁盘,并且它是qcow2格式的。 + + $ qemu-img create -f qcow2 /home/dev/images/alice.img 10G + +使用qcow2格式的磁盘镜像的好处就是它在创建之初并不会给它分配全部大小磁盘容量(这里是10GB),而是随着虚拟机中文件的增加而逐渐增大。因此,它对空间的使用更加有效。 +现在,你可以准备通过使用之前创建的xml配置文件启动你的虚拟机了。下面的命令将会创建一个虚拟机,然后自动启动它。 + + $ virsh create alice.xml + +---------- + + Domain alice created from alice.xml + +**注意**:如果你对一个已经存在的虚拟机运行了上面的命令,那么这个操作将会在没有警告信息的情况下抹去那个已经存在的虚拟机的全部信息。如果你已经创建了一个虚拟机,你可能会使用下面的命令来启动虚拟机。 + + $ virsh start alice.xml + +使用如下命令确认一个新的虚拟机已经被创建并成功的被启动。 + + $ virsh list + +---------- + + Id Name State + ---------------------------------------------------- + 3 alice running + +同样,使用如下命令确认你的虚拟机的虚拟网卡已经被成功的添加到了你先前创建的br0网桥中。 + + $ sudo brctl show + +![](https://c2.staticflickr.com/2/1546/23449585383_a371e9e579_c.jpg) + +### 远程连接虚拟机 ### + +为了远程访问一个正在运行的虚拟机的控制台,你可以使用VNC客户端。 +首先,你需要使用如下命令找出用于虚拟机的VNC端口号。 + + $ sudo netstat -nap | egrep '(kvm|qemu)' + +![](https://c2.staticflickr.com/6/5633/23448144274_49045bc868_c.jpg) + +在这个例子中,用于alice虚拟机的VNC端口号是5900 +然后启动一个VNC客户端,连接到一个端口号为5900的VNC服务器。在我们的例子中,虚拟机支持由CentOS光盘文件启动。 + +![](https://c2.staticflickr.com/2/1533/24076369675_99408972a4_c.jpg) + +### 使用virsh管理虚拟机 ### + +下面列出了virsh命令的常规用法 +创建客户机并且启动虚拟机: + + $ virsh create alice.xml + +停止虚拟机并且删除客户机 + + $ virsh destroy alice + +关闭虚拟机(不用删除它) + + $ virsh shutdown alice + +暂停虚拟机 + + $ virsh suspend alice + +恢复虚拟机 + + $ virsh resume alice + +访问正在运行的虚拟机的登陆控制台 + + $ virsh console alice + +设置虚拟机开机启动: + + $ 
virsh autostart alice + +查看虚拟机的详细信息 + + $ virsh dominfo alice + +编辑虚拟机的配置文件: + + $ virsh edit alice + +上面的这个命令将会使用一个默认的编辑器来调用主机配置文件。该配置文件中的任何改变都将自动被libvirt验证其正确性。 +你也可以在一个virsh会话中管理虚拟机。下面的命令会创建并进入到一个virsh会话中: + + $ virsh +在virsh提示中,你可以使用任何virsh命令。 + +![](https://c2.staticflickr.com/6/5645/23708565129_b1ef968b30_c.jpg) + +### 问题处理 ### + +1. 我在创建虚拟机的时候遇到了一个错误: + + error: internal error: no supported architecture for os type 'hvm' + + 如果你的硬件不支持虚拟化的话你可能就会遇到这个错误。(例如,Intel VT或者AMD-V),这是运行KVM所必需的。如果你遇到了这个错误,而你的cpu支持虚拟化,那么这里可以给你一些可用的解决方案: + +首先,检查你的内核模块是否丢失。 + + $ lsmod | grep kvm + +如果内核模块没有加载,你必须按照如下方式加载它。 + + $ sudo modprobe kvm_intel (for Intel processor) + $ sudo modprobe kvm_amd (for AMD processor) + +第二个解决方案就是添加“--connect qemu:///system”参数到virsh命令中,如下所示。当你正在你的硬件平台上使用超过一个虚拟机管理器的时候就需要添加这个参数(例如,VirtualBox,VMware)。 + $ virsh --connect qemu:///system create alice.xml + +2. 当我试着访问我的虚拟机的登陆控制台的时候遇到了错误: + + $ virsh console alice + +---------- + + error: internal error: cannot find character device + +这个错误发生的原因是你没有在你的虚拟机配置文件中定义控制台设备。在xml文件中加上下面的内部设备部分即可。 + + + + + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/use-kvm-command-line-debian-ubuntu.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/disable-network-manager-linux.html diff --git a/translated/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md b/translated/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md new file mode 100644 index 0000000000..c7f3d5140c --- /dev/null +++ b/translated/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md @@ -0,0 +1,392 @@ +GHLandy Translated + +LFCS系列第二讲:如何安装和使用纯文本编辑器vi/vim + 
+================================================================================ + +几个月前, Linux 基金会发起了 LFCS (Linux Foundation Certified System administrator,Linux 基金会认证系统管理员)认证,以帮助世界各地的人来验证他们能够在 Linux 系统上做基本的中间系统管理任务:如系统支持,第一手的故障诊断和处理,以及何时向上游支持团队提出问题的智能决策。 + +![Learning VI Editor in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/LFCS-Part-2.png) + +在 Linux 中学习 vi 编辑器 + +请简要看看一下视频,里边介绍了 Linux 基金会认证的程序。 + +注:youtube 视频 + + + +这篇文章是《十个教程》系列的第二部分,在这个部分,我们会介绍 vi/vim 基本的文件编辑操作,帮助读者理解编辑器中的三个模式,这是LFCS认证考试中必须掌握的。 + +### 使用 vi/vim 执行基本的文件编辑操作 ### + +vi 是为 Unix 而生的第一个全屏文本编辑器。它的设计小巧简单,对于仅仅使用过诸如 NotePad++ 或 gedit 等图形界面的文本编辑器的用户来说,使用起来可能存在一些困难。 + +为了使用 vi,我们必须首先理解这个强大的程序操作中的三种模式,方便我们后边学习这个强大的文本处理软件的相关操作。 + +请注意,大多数的现代 Linux 发行版都集成了 vi 的变种——— vim(Vi IMproved,VI 的改进),相比于 vi,它有更多新功能。所以,我们会在本教程中交替使用 vi 和 vim。 + +如果你的发行版还没有安装 vim,你可以通过以下方法来安装: + +- Ubuntu 及其衍生版:apt-get update && apt-get install vim +- 以 Red-Hat 为基础的发行版:yum update && yum install vim +- openSUSE :zypper update && zypper install vim + +### 我为什么要学习 vi ### + +至少有以下两个理由: + +1.因为它是 POSIX 标准的一部分,所以不管你使用什么发行版 vi 总是可用的。 + +2.vi 基本不消耗多少系统资源,并且允许我们仅仅通过键盘来完成任何可能的任务。 + +此外,vi 有的非常丰富的内置帮助手册,程序打开后就可以通过 :help 命令来查看。这个内置帮助手册比 vi/vim 的 man 页面包含了更多信息。 + +![vi Man Pages](http://www.tecmint.com/wp-content/uploads/2014/10/vi-man-pages.png) + +vi Man 页面 + +#### 启动 vi #### + +可以通过在命令提示符下输入 vi 来启动。 + +![Start vi Editor](http://www.tecmint.com/wp-content/uploads/2014/10/start-vi-editor.png) + +使用 vi 编辑器 + +然后按下字母 i,你就可以开始输入了。或者通过下面的方法来启动 vi: + + # vi filename + +这样会打开一个名为 filename 的 buffer(稍后详细介绍 buffer),在你编辑完成之后就可以存储在磁盘中了。 + +#### 理解 vi 的三个模式 #### + +1.在命令模式中,vi 允许用户浏览该文件并输入由一个或多个字母组成简短的、大小写敏感的 vi 命令。这些命令的大部分都可以增加一个前缀数字表示执行次数。 + +比如:yy(或Y) 复制当前的整行,3yy(或3Y) 复制当前整行和下边紧接着的两行(总共3行)。通过 Esc 键可以随时进入命令模式(而不管当前工作在什么模式下)。事实上,在命令模式下,键盘上所有的输入都被解释为命令而非文本,这往往使得初学者困惑不已。 + +2.在末行模式中,我们可以处理文件(包括保存当前文件和运行外部程序)。我们必须在命令模式下输入一个冒号(:),才能进入这个模式,紧接着是需要使用的末行模式下的命令。执行之后 vi 自动回到命令模式。 + +3.在文本输入模式(通常使用字母 i 
进入这个模式)中,我们可以随意输入文本。大多数的键入将以文本形式输出到屏幕(一个重要的例外是Esc键,它将退出文本编辑模式并回到命令模式)。 + +![vi Insert Mode](http://www.tecmint.com/wp-content/uploads/2014/10/vi-insert-mode.png) + +vi 文本插入模式 + +#### vi 命令 #### + +下面的表格列出常用的 vi 命令。文件版本的命令可以通过添加叹号的命令强制执行(如,:q! 命令强制退出编辑器而不保存文件)。 + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| 按键命令 | 描述 |
|----------|------|
| h 或 ← | 光标左移一个字符 |
| j 或 ↓ | 光标下移一行 |
| k 或 ↑ | 光标上移一行 |
| l(小写 L)或 → | 光标右移一个字符 |
| H | 光标移至屏幕顶行 |
| L | 光标移至屏幕末行 |
| G | 光标移至文件末行 |
| w | 光标右移一个词 |
| b | 光标左移一个词 |
| 0(零) | 光标移至行首 |
| ^ | 光标移至当前行第一个非空格字符 |
| $ | 光标移至当前行行尾 |
| Ctrl-B | 向后翻页 |
| Ctrl-F | 向前翻页 |
| i | 在光标所在位置插入文本 |
| I(大写 i) | 在当前行首插入文本 |
| J(大写 j) | 将下一行与当前行合并(下一行上移到当前行) |
| a | 在光标所在位置后追加文本 |
| o(小写 o) | 在当前行下边插入空白行 |
| O(大写 o) | 在当前行上边插入空白行 |
| r | 替换光标所在位置的字符 |
| R | 从光标所在位置开始覆盖插入文本 |
| x | 删除光标所在位置的字符 |
| X | 立即删除光标所在位置之前(左边)的一个字符 |
| dd | 剪切当前整行文本(以便之后粘贴) |
| D | 剪切光标所在位置到行末的文本(等效于 d$) |
| yX | 其中 X 为移动命令,复制该移动命令所覆盖的字符、单词或若干行 |
| yy 或 Y | 复制当前整行 |
| p | 粘贴到光标所在位置之后(下一行) |
| P | 粘贴到光标所在位置之前(上一行) |
| .(句点) | 重复最后一个命令 |
| u | 撤销最后一个命令 |
| U | 撤销当前行的最后一次修改,只有光标仍在该行时才能执行 |
| n | 在查找中跳到下一个匹配项 |
| N | 在查找中跳到前一个匹配项 |
| :n | 编辑多个指定文件时,加载下一个文件 |
| :e file | 加载新文件 file 来替代当前文件 |
| :r file | 将文件 file 的内容插入到光标所在位置的下一行 |
| :q | 退出并放弃更改 |
| :w file | 将当前打开的 buffer 保存为 file。如果要追加到已存在的文件中,则使用 :w >> file 命令 |
| :wq | 保存当前文件的内容并退出。等效于 x! 和 ZZ |
| :r! command | 执行 command 命令,并将命令的输出插入到光标所在位置的下一行 |
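vi 的末行替换命令(如下文会介绍的 `:%s/old/young/g`)与 sed 的替换语法十分相似;如果想在脚本中得到同样的全文替换效果,可以用 sed 做一个简单对照(文件路径为示例假设):

```shell
# 构造一个示例文件
printf 'old man\nthe old tree\n' > /tmp/vi-demo.txt

# 与在 vi 中执行 :%s/old/young/g 等价的 sed 全文替换
sed -i 's/old/young/g' /tmp/vi-demo.txt

cat /tmp/vi-demo.txt
# young man
# the young tree
```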
+ +#### vi 选项 #### + +下列选项将会在启动 Vim 的时候进行加载(需要写入到~/.vimrc文件): + + # echo set number >> ~/.vimrc + # echo syntax on >> ~/.vimrc + # echo set tabstop=4 >> ~/.vimrc + # echo set autoindent >> ~/.vimrc + +![vi Editor Options](http://www.tecmint.com/wp-content/uploads/2014/10/vi-options.png) + +vi编辑器选项 + +- set number 当 vi 打开或新建文件时,显示行号。 +- syntax on 打开语法高亮(对应多个文件扩展名),以便源码文件和配置文件更具可读性。 +- set tabstop=4 设置制表符间距为 4 个空格(默认为 8)。 +- set autoindent 将前一行的缩进应用于下一行。 + +#### 查找和替换 #### + +vi 具有通过查找将光标移动到(在单独一行或者整个文件中的)指定位置。它还可自动或者通过用户确认来执行文本替换。 + +a) 在行内查找。f 命令在当前行查找指定字符,并将光标移动到指定字符出现的位置。 + +例如,命令 fh 会在本行中将光标实例字母h出现的位置。注意,字母 f 和你要查找的字符都不会出现在屏幕上,但是当你按下回车的时候,要查找的字符会被高亮显示。 + +比如,以下是在命令模式按下 f4 之后的结果。 + +![Search String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-string.png) + +在 vi 中查找字符。 + +b) 在整个文件内查找。使用 / 命令,紧接着需要查找的单词或短语。这个查找可以通过使用 n 命令或者 N 重复查找上一个查找的字符串。以下是在命令模式键入 /Jane 的查找结果。 + +![Vi Search String in File](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-line.png) + +在vi中查找字符 + +c) vi 通过使用命令来完成多行或者整个文件的替换操作(类似于 sed)。我们可以使用以下命令,使得整个文件中的单词 “old” 替换为 “young”。 + + :%s/old/young/g + +**注意**:冒号位于命令的最前面。 + +![Vi Search and Replace](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-and-replace.png) + +vi 的查找和替换 + +冒号 (:) 进入末行模式,在本例中 s 表示替换,% 是从第一行到最后一行的表示方式(也可以使用 nm 表示范围,即第 n 行到第 m 行),old 是查找模式,young 是用来替换的文本,g 表示在每个查找出来的字符串都进行替换。 + +另外,在命令最后增加一个 c,可以在每一个匹配项替换前进行确认。 + + :%s/old/young/gc + +将就文本替换为新文本前,vi/vim 会想我们显示一下信息: + +![Replace String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-replace-old-with-young.png) + +vi 中替换字符串 + +- y: 执行替换(yes) +- n: 跳过这个匹配字符的替换并转到下一个(no) +- a: 在当前匹配字符及后边的相同项全部执行替换 +- q 或 Esc: 取消替换 +- l (小写 L): 执行本次替换并退出 +- Ctrl-e, Ctrl-y: 下翻页,上翻页,查看相应的文本来进行替换 + +#### 同时编辑多个文件 #### + +我们在命令提示符输入 vim file1 file2 file3 如下: + + # vim file1 file2 file3 + +vim 会首先打开 file1,要跳到 file2 需用 :n 命令。当需要打开前一个文件时,:N 就可以了。 + +为了从 file1 跳到 file3 + +a) :buffers 命令会显示当前正在编辑的文件列表 + + :buffers + +![Edit Multiple 
Files](http://www.tecmint.com/wp-content/uploads/2014/10/vi-edit-multiple-files.png) + +编辑多个文件 + +b) :buffer 3 命令(后边没有 s)会打开 file 进行编辑。 + +在上边的图片中,标记符号 # 表示该文件当前已被打开在后台,而 %a 标记的文件是正在被编辑的。另外,文件号(如上边例子的 3)后边的空格表示该文件还没有被打开。 + +#### vi 的临时 buffers #### + +为了复制连续的多行(比如,假设为 4 行)到一个名为 a 的临时 buffer(与文件无关),并且还要将这些行粘贴到在当前 vi 会话文件中的其它位置,我们需要: + +1. 按下 Esc 键以确认 vi 处在命令模式 + +2. 将光标放在我们希望复制的第一行文本 + +3. 输入 a4yy 复制当前行和接下来的 3 行,进入一个名为 a 的 buffer。我们一继续编辑我们的文件————我们不需要立即插入刚刚复制的行。 + +4. 当到了需要使用刚刚复制行的位置,在 p(小写)或 P(大写)命令来讲复制行插入到名为 a 的 buffer: + +- 输入 ap,复制行将插入到光标位置所在行的下一行。 +- 输入 aP,复制行将插入到光标位置所在行的上一行。 + +如果愿意,我们可以重复上述步骤,将 buffer a 中的内容插入到我们文件的多个位置。一个临时 buffer,像本次会话中的一样,会在当前窗口关闭时释放掉。 + +### 总结 ### + +像我们看到的一样,vi/vim 在命令接口下是一个强大而灵活的文本编辑器。通过以下链接,随时分享你自己的技巧和评论。 + +#### 参考链接 #### + +- [About the LFCS][1] +- [Why get a Linux Foundation Certification?][2] +- [Register for the LFCS exam][3] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/vi-editor-usage/ + +作者:[Gabriel Cánepa][a] +译者:[GHLandy](https://github.com/GHLandy) +校对:[东风唯笑](https://github.com/dongfengweixiao) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://training.linuxfoundation.org/certification/LFCS +[2]:https://training.linuxfoundation.org/certification/why-certify-with-us +[3]:https://identity.linuxfoundation.org/user?destination=pid/1 diff --git a/translated/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md b/translated/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md new file mode 100644 index 0000000000..876fa4d3af --- /dev/null +++ b/translated/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md @@ -0,0 +1,390 @@ +GHLandy 
Translated + +LFCS 系列第三讲:如何在 Linux 中归档/压缩文件及目录、设置文件属性和搜索文件 + +================================================================================ +最近,Linux 基金会发起了 一个全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,旨在让遍布全世界的人都有机会参加该认证的考试,并且通过考试的人将会得到关于他们有能力在 Linux 上执行基本的中间系统管理任务的认证证书。这项认证包括了对已运行的系统和服务的支持、一流水平的问题解决和分析以及决定何时将问题反映给工程师团队的能力。 + +![Linux Foundation Certified Sysadmin – Part 3](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-3.png) + +LFCS 系列第三讲 + +请看以下视频,这里边讲给出 Linux 基金会认证程序的一些想法。 + +注:youtube 视频 + + +本讲是《十套教程》系列中的第三讲,在这一讲中,我们会涵盖如何在文件系统中归档/压缩文件及目录、设置文件属性和搜索文件等内容,这些都是 LFCS 认证中必须掌握的知识。 + +### 归档和压缩的相关工具 ### + +文件归档工具将一堆文件整合到一个单独的归档文件之后,我们可以将归档文件备份到不同类型的媒介或者通过网络传输和发送 Email 来备份。在 Linux 中使用频率最高的归档实用工具是 tar。当归档工具和压缩工具一起使用的时候,可以减少同一文件和信息在硬盘中的存储空间。 + +#### tar 使用工具 #### + +tar 将一组文件打包到一个单独的归档文件(通常叫做 tar 文件或者 tarball)。tar 这个名称最初代表磁带存档程序(tape archiver),但现在我们可以用它来归档任意类型的可读写媒介上边的数据,而不是只能归档磁带数据。tar 通常与 gzip、bzip2 或者 xz 等压缩工具一起使用,生成一个压缩的 tarball。 + +**基本语法:** + + # tar [选项] [路径名 ...] + +其中 ... 代表指定那些文件进行归档操作的表达式 + +#### tar 的常用命令 #### + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| 长选项 | 简写 | 描述 |
| ------ | ---- | ---- |
| --create | c | 创建 tar 归档文件 |
| --concatenate | A | 将一个归档文件合并到已有的归档中 |
| --append | r | 把要归档的文件追加到归档文件的末尾 |
| --update | u | 将新文件更新到归档文件中 |
| --diff 或 --compare | d | 比较归档与当前文件的不同之处 |
| --file archive | f | 指定归档文件或设备 |
| --list | t | 列出归档文件中的内容 |
| --extract 或 --get | x | 从归档文件中释放文件 |
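上表中的长选项与简写形式可以互换使用。下面是一个简单的演示(目录与文件名均为演示用的假设):

```shell
# 在临时目录中创建两个演示文件
mkdir -p /tmp/tar-demo
echo hello > /tmp/tar-demo/file1
echo world > /tmp/tar-demo/file2
# 长选项写法:创建归档(-C 表示先切换到指定目录再归档)
tar --create --file /tmp/tar-demo/myfiles.tar -C /tmp/tar-demo file1 file2
# 等价的简写形式:t(即 --list)列出归档内容
tar tf /tmp/tar-demo/myfiles.tar
```

可以看到,`tar --create --file` 与 `tar cf` 完全等价,简写形式在日常使用中更为常见。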
+ +#### 常用的操作修饰符 #### + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| 长选项 | 缩写 | 描述 |
| ------ | ---- | ---- |
| --directory dir | C | 执行归档操作前,先切换到指定目录 |
| --same-permissions | p | 保持原始的文件权限 |
| --verbose | v | 列出所有读取或提取的文件;与 --list 一起使用时,还会显示文件大小、属主和时间戳 |
| --verify | W | 写入归档后进行校验 |
| --exclude=pattern | (无) | 排除与指定模式相匹配的文件 |
| --exclude-from file | X | 排除该文件中列出的模式所匹配的文件 |
| --gzip 或 --gunzip | z | 通过 gzip 压缩归档 |
| --bzip2 | j | 通过 bzip2 压缩归档 |
| --xz | J | 通过 xz 压缩归档 |
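结合上面的修饰符,下面给出一个归档时排除文件、释放时保持权限的小示例(目录与文件名均为演示用的假设):

```shell
# 创建演示目录和文件
mkdir -p /tmp/tarmod-demo/src /tmp/tarmod-demo/restore
echo data  > /tmp/tarmod-demo/src/a.txt
echo noise > /tmp/tarmod-demo/src/a.log
# --exclude 排除 .log 文件,-C 先切换到 src 目录,z 表示用 gzip 压缩
tar czf /tmp/tarmod-demo/backup.tar.gz --exclude='*.log' -C /tmp/tarmod-demo/src .
# 释放到另一个目录,p(--same-permissions)保持原有文件权限
tar xzpf /tmp/tarmod-demo/backup.tar.gz -C /tmp/tarmod-demo/restore
ls /tmp/tarmod-demo/restore
```

运行后可以验证 a.txt 被恢复到 restore 目录,而 a.log 因匹配排除模式并未进入归档。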
+ +Gzip 是最古老的压缩工具,压缩率最小,bzip2 的压缩率稍微高一点。另外,xz是最新的压缩工具,压缩率最好。xz 具有最佳压缩率的代价是:完成压缩操作花费最多时间,压缩过程中占有较多系统资源。 + +通常,通过这些工具压缩的 tar 文件相应的具有 .gz、.bz2 或 .xz的扩展名。在下列的例子中,我们使用 file1、file2、file3、file4 和 file5 进行演示。 + +**通过 gzip、bzip2 和 xz 压缩归档** + +归档当前工作目录的所有文件,并以 gzip、bzip2 和 xz 压缩刚刚的归档文件(请注意,用正则表达式来指定那些文件应该归档——这是为了防止归档工具包前一步生成的文件打包进来)。 + + # tar czf myfiles.tar.gz file[0-9] + # tar cjf myfiles.tar.bz2 file[0-9] + # tar cJf myfile.tar.xz file[0-9] + +![Compress Multiple Files Using tar](http://www.tecmint.com/wp-content/uploads/2014/10/Compress-Multiple-Files.png) + +压缩多个文件 + +**列举 tarball 中的内容和更新/追加文件到归档文件中** + +列举 tarball 中的内容,并显示相同信息为一个详细目录清单。注意,不能直接向压缩的归档文件更新/追加文件(若你需要向压缩的 tarball 中更新/追加文件,需要先解压 tar 文件后再进行操作,然后重新压缩)。 + + # tar tvf [tarball] + +![Check Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/List-Archive-Content.png) + +列举归档文件中的内容 + +运行一下任意一条命令: + + # gzip -d myfiles.tar.gz [#1] + # bzip2 -d myfiles.tar.bz2 [#2] + # xz -d myfiles.tar.xz [#3] + +然后: + + # tar --delete --file myfiles.tar file4 (删除tarball中的file4) + # tar --update --file myfiles.tar file4 (更新tarball中的file4) + +和 + + # gzip myfiles.tar [ 如果你运行 #1 命令 ] + # bzip2 myfiles.tar [ 如果你运行 #2 命令 ] + # xz myfiles.tar [ 如果你运行 #3 命令 ] + +最后 + + # tar tvf [tarball] #again + +将 file4 修改后的日期和时间与之前显示的对应信息进行比较 + +**排除文件类型** + +假设你现在需要备份用户的家目录。一个有经验的系统管理员会选择忽略所有视频和音频文件再备份(也可能是公司规定)。 + +可能你最先想到的方法是在备份是时候,忽略扩展名为 .mp3 和 .mp4(或者其他格式)的文件。但如果你有些自作聪明的用户将扩展名改为 .txt 或者 .bkp,那你的方法就不灵了。为了发现并排除音频或者视频文件,你需要先检查文件类型。以下 shell 脚本可以代你完成类型检查: + + #!/bin/bash + # 把需要进行备份的目录传递给 $1 参数. + DIR=$1 + #排除文件类型中包含了 mpeg 字符串的文件,然后创建 tarball 并进行压缩。 + # -若文件类型中包含 mpeg 字符串, $?(最后执行的命令的退出状态)返回 0,然后文件名被定向到排除选项。否则返回 1。 + # -若 $? 等于 0,该文件从需要备份文件的列表排除。 + tar X <(for i in $DIR/*; do file $i | grep -i mpeg; if [ $? 
-eq 0 ]; then echo $i; fi;done) -cjf backupfile.tar.bz2 $DIR/* + +![Exclude Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Exclude-Files-in-Tar.png) + +排除文件进行备份 + +**使用 tar 保持文件的原有权限进行恢复** + +通过以下命令,你可以保留文件的权限将备份文件恢复到原始用户的家目录(本例是 user_restore)。 + + # tar xjf backupfile.tar.bz2 --directory user_restore --same-permissions + +![Restore Files from tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-tar-Backup-Files.png) + +从归档文件中恢复 + +**扩展阅读:** + +- [18 tar Command Examples in Linux][1] +- [Dtrx – An Intelligent Archive Tool for Linux][2] + +### 通过 find 命令搜索文件 ### + +find 命令用于递归搜索目录树中包含指定字符的文件和目录,然后在屏幕显示出于指定字符相匹配的文件和目录,或者在匹配项进行其他操作。 + +通常,我们通过文件名、文件的属主、属组、类型权限、日期及大小来搜索。 + +#### 基本语法:#### + +# find [需搜索的目录] [表达式] + +**通过文件大小递归搜索文件** + +以下命令会搜索当前目录(.)及其下两层子目录(-maxdepth 3,包含当前目录及往下两层的子目录)大于 2 MB(-size +2M)的所有文件(-f)。 + + # find . -maxdepth 3 -type f -size +2M + +![Find Files by Size in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-Based-on-Size.png) + +通过文件大小搜索文件 + +**搜索符合一定规则的文件并将其删除** + +有时候,777 权限的文件通常为外部攻击者打开便利之门。不管是以何种方式,让所有人都可以对文件进行任意操作都是不安全的。对此,我们采取一个相对激进的方法——删除这些文件(‘{ }’用来“聚集”搜索的结果)。 + + # find /home/user -perm 777 -exec rm '{}' + + +![Find all 777 Permission Files](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-with-777-Permission.png) + +搜索 777 权限的文件 + +**按访问时间和修改时间搜索文件** + +搜索 /etc 目录下访问时间(-atime)或修改时间(-mtime)大于或小于 6 个月或者刚好 6 个月的配置文件。 + +按照下面例子对命令进行修改: + + # find /etc -iname "*.conf" -mtime -180 -print + +![Find Files by Modification Time](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Modified-Files.png) + +按修改时间搜索文件 + +- 扩展阅读: [35 Practical Examples of Linux ‘find’ Command][3] + +### 文件权限及基本属性 ### + +ls -l 命令输出的前 10 位字符是文件的属性,其中第一个字符用来表明文件的类型。 + +- – : 普通文件 +- -d : 目录 +- -l : 符号链接 +- -c : 字符设备 (它将数据作为字节流处理,如terminal) +- -b : 块设备 (在块设备中处理数据,如存储设备) + +接下来表示文件属性的 9 位字符叫做文件的读写模式,代表文件属主、同组用户和其他用户(通常指的是“外部世界”)对应的读(r)、写(w)和执行(x)权限。 + 
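下面用一个小例子演示如何解读这 10 位字符(文件名为演示用的假设):

```shell
# 创建演示文件并显式设置权限
touch /tmp/perm-demo.txt
chmod 754 /tmp/perm-demo.txt
# 输出形如 -rwxr-xr-- :第 1 位 - 表示普通文件,
# 随后三组依次为属主 rwx(读、写、执行)、同组用户 r-x、其他用户 r--
ls -l /tmp/perm-demo.txt
```

对照输出中的权限位,即可逐组读出属主、同组用户和其他用户各自拥有的权限。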
+文件的写权限允许对应的用户对文件进行打开和读写,对于同时设置了执行权限的目录,对应用户可以列举出该目录的内容。另外,文件的执行权限允许将文件当做是一个可执行程序来运行,而目录的执行权限则是允许用户进入和退出该目录。 + +文件的权限通过 chown 命令来更改,其基本语法如下: + + # chmod [new_mode] file + +new_mode 可以是 3 位八进制数值或者对应权限的表达式。 + +八进制数值可以从二进制数值进行等值转换,通过下列方法来计算文件属主、同组用户和其他用户权限对应的二进制数值: + +一个确定权限的二进制数值表现为 2 的幂(r=2^2,w=2^1,x=2^0),当权限省缺时,二进制数值为 0。如下: + +![Linux File Permissions](http://www.tecmint.com/wp-content/uploads/2014/10/File-Permissions.png) + +文件权限 + +使用八进制数值设置上图的文件权限,请输入: + + # chmod 744 myfile + +通过 u、g 和 o 分别代表用户、同组用户和其他用户,然后你也可以使用权限表达式来单独对用户设置文件的权限模式。也可以通过 a 代表所有用户,然后设置文件权限。通过 + 号或者 - 号相应的赋予或移除文件权限。 + + +**为所有用户撤销一个 shell 脚本的执行权限** + +正如之前解释的那样,我们可以通过 - 号为需要移除权限的属主、同组用户、其他用户或者所有用户去掉指定的文件权限。下面命令中的短横线(-)可以理解为:移除(-)所有用户(a)的 backup.sh 文件执行权限(x)。 + + # chmod a-x backup.sh + +下面演示为文件属主、同组用户赋予读、写和执行权限,并赋予其他用户读权限。 + +当我们使用 3 位八进制数值为文件设置权限的时候,第一位数字代表属主权限,第二位数字代表同组用户权限,第三位数字代表其他用户的权限: + +- 属主:(r=2^2 + w=2^1 + x=2^0 = 7) +- 同组用户:(r=2^2 + w=2^1 + x=2^0 = 7) +- 其他用户:(r=2^2 + w=0 + x=0 = 4), + + # chmod 774 myfile + +随着练习时间的推移,你会知道何种情况下使用哪种方式来更改文件的权限模式的效果最好。 + +使用 ls -l 详细列举目录详细同样会显示出文件的属主和属组(这个很基本,而且影响到系统文件的访问控制)。 + +![Linux File Listing](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-File-Listing.png) + +列举 Linux 文件 + +通过 chown 命令可以对文件的归属权进行更改,可以同时或者分开更改属主和属组。其基本语法为: + + # chown user:group file + +至少要指定用户或者用户组 + +**举几个例子:** + +将文件的属主更改为指定用户: + + # chown gacanepa sent + +同时将文件的属主和属组更改为指定的用户和组: + + # chown gacanepa:gacanepa TestFile + +只将文件的属组更改为指定组。注意组名前的冒号(:)。 + + # chown :gacanepa email_body.txt + +### 结论 ### + +作为一个系统管理员,你需要懂得如何创建和恢复备份、如何在系统中搜索文件并更改它们的属性。通过一些技巧,你可以更好地管理系统并避免以后出问题。 + +我希望,本文给出的技巧可以帮助你达成管理系统的目标。你可以随时在评论中发表自己的技巧及社区给你带来的益处。 + +先行感谢! 
+ +参考链接 +- [About the LFCS][4] +- [Why get a Linux Foundation Certification?][5] +- [Register for the LFCS exam][6] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/compress-files-and-finding-files-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[GHLandy](https://github.com/GHLandy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/18-tar-command-examples-in-linux/ +[2]:http://www.tecmint.com/dtrx-an-intelligent-archive-extraction-tar-zip-cpio-rpm-deb-rar-tool-for-linux/ +[3]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ +[4]:https://training.linuxfoundation.org/certification/LFCS +[5]:https://training.linuxfoundation.org/certification/why-certify-with-us +[6]:https://identity.linuxfoundation.org/user?destination=pid/1 diff --git a/translated/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition.md b/translated/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition.md new file mode 100644 index 0000000000..987ea4a7f8 --- /dev/null +++ b/translated/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition.md @@ -0,0 +1,195 @@ +GHLandy Translated + +LFCS 系列第四讲:分区存储设备、格式化文件系统和配置交换分区 + +================================================================================ +去年八月份,Linux 基金会发起了 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,给所有系统管理员一个展现自己的机会。通过基础考试后,他们可以胜任在 Linux 上的整体运维工作:包括系统支持、一流水平的诊断和监控以及在必要之时向其他支持团队提交帮助请求等。 + +![Linux Foundation Certified Sysadmin – Part 4](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-4.png) + +LFCS 系列第四讲 + +需要注意的是,Linux 基金会认证是非常严格的,通过与否完全要看个人能力。通过在线链接,你可以随时随地参加 Linux 基金会认证考试。所以,你再也不用到考试中心了,只需要不断提高自己的专业技能和经验就可去参加考试了。 + 
+请看一下视频,这里将讲解 Linux 基金会认证程序。 + +注:youtube 视频 + + +本讲是《十套教程》系列中的第四讲。在本讲中,我们将涵盖分区存储设备、格式化文件系统和配置交换分区等内容,这些都是 LFCS 认证中的必备知识。 + +### 分区存储设备 ### + +分区是一种将单独的硬盘分成一个或多个区的手段。一个分区只是硬盘的一部分,我们可以认为这部分是独立的磁盘,里边包含一个单一类型的文件系统。分区表则是将硬盘上这些分区与分区标识符联系起来的索引。 + +在 Linux 中,IBM PC 兼容系统里边用于管理传统 MBR(最新到2009年)分区的工具是 fdisk。对于 GPT(2010年至今)分区,我们使用 gdisk。这两个工具都可以通过程序名后面加上设备名称(如 /dev/sdb)进行调用。 + +#### 使用 fdisk 管理 MBR 分区 #### + +我们先来介绍 fdisk: + + # fdisk /dev/sdb + +然后出现提示说进行下一步操作。若不确定如何操作,按下 “m” 键显示帮助。 + +![fdisk Help Menu](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-help.png) + +fdisk 帮助菜单 + +上图中,使用频率最高的选项已高亮显示。你可以随时按下 “p” 显示分区表。 + +![Check Partition Table in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Show-Partition-Table.png) + +显示分区表 + +Id 列显示由 fdisk 分配给每个分区的分区类型(分区 id)。一个分区类型代表一种文件系统的标识符,简单来说,包括该分区上数据的访问方法。 + +请注意,每个分区类型的全面都全面讲解将超出了本教程的范围——本系列教材主要专注于 LFCS 测试,因能力为主。 + +**下面列出一些 fdisk 常用选项:** + +按下 “l”(小写 L)选项来显示所有可以由 fdisk 管理的分区类型。 + +按下 “d” 可以删除现有的分区。若硬盘上有多个分区,fdisk 将询问你要删除那个分区。 + +键入对应的数字,并按下 “w” 保存更改(将更改写入分区表)。 + +在下图的命令中,我们将删除 /dev/sdb2,然后显示(p)分区表来验证更改。 + +![fdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-options.png) + +fdisk 命令选项 + +按下 “n” 后接着按下 “p” 会创建新一个主分区。最后,你可以使用所有的默认值(这将占用所有的可用空间),或者像下面一样自定义分区大小。 + +![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-New-Partition.png) + +创建新分区 + +若 fdisk 分配的分区 Id 并不是我们想用的,可以按下 “t” 来更改。 + +![Change Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Partition-Name.png) + +更改分区类型 + +全部设置好分区后,按下 “w” 将更改保存到硬盘分区表上。 + +![Save Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Partition-Changes.png) + +保存分区更改 + +#### 使用 gdisk 管理 GPT 分区 #### + +下面的例子中,我们使用 /dev/sdb。 + + # gdisk /dev/sdb + +必须注意的是,gdisk 可以用于创建 MBR 和 GPT 两种分区表。 + +![Create GPT Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-GPT-Partitions.png) + +创建 GPT 分区 + +使用 GPT 分区方案,我们可以在同一个硬盘上创建最多 128 个分区,单个分区最大以 PB 为单位,而 MBR 
分区方案最大的只能 2TB。 + +注意,fdisk 与 gdisk 中大多数命令都是一样的。因此,我们不会详细介绍这些命令选项,而是给出一张使用过程中的截图。 + +![gdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/gdisk-options.png) + +gdisk 命令选项 + +### 格式化文件系统 ### + +一旦创建完需要的分区,我们就必须为分区创建文件系统。查询你所用系统支持的文件系统,请运行: + + # ls /sbin/mk* + +![Check Filesystems Type in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Filesystems.png) + +检查文件系统类型 + +选择文件系统取决于你的需求。你应该考虑到每个文件系统的优缺点以及其特点。选择文件系统需要看的两个重要属性: + +- 日志支持,允许从系统崩溃事件中快速恢复数据。 +- 安全增强式 Linux(SELinux)支持,按照项目 wiki 所说,“安全增强式 Linux 允许用户和管理员更好的把握访问控制权限”。 + +在接下来的例子中,我们通过 mkfs 在 /dev/sdb1上创建 ext4 文件系统(支持日志和 SELinux),标卷为 Tecmint。mkfs 基本语法如下: + + # mkfs -t [filesystem] -L [label] device + 或者 + # mkfs.[filesystem] -L [label] device + +![Create ext4 Filesystems in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystems.png) + +创建 ext4 文件系统 + +### 创建并启用交换分区 ### + +要让 Linux 系统访问虚拟内存,则必须有一个交换分区,当内存(RAM)用完的时候,将硬盘中指定分区(即 Swap 分区)当做内存来使用。因此,当有足够的系统内存(RAM)来满足系统的所有的需求时,我们并不需要划分交换分区。尽管如此,是否使用交换分区取决于管理员。 + +下面列出选择交换分区大小的经验法则: + +物理内存不高于 2GB 时,取两倍物理内存大小即可;物理内存在 2GB 以上时,取一倍物理内存大小即可;并且所取大小应该大于 32MB。 + +所以,如果: + +M为物理内存大小,S 为交换分区大小,单位 GB,那么: + + 若 M < 2 + S = M *2 + 否则 + S = M + 2 + +记住,这只是基本的经验。对于作为系统管理员的你,才是决定是否使用交换分区及其大小的关键。 + +要配置交换分区,首先要划分一个常规分区,大小像我们之前演示的那样来选取。然后添加以下条目到 /etc/fstab 文件中(其中的X要更改为对应的 b 或 c)。 + + /dev/sdX1 swap swap sw 0 0 + +最后,格式化并启用交换分区: + + # mkswap /dev/sdX1 + # swapon -v /dev/sdX1 + +显示交换分区的快照: + + # cat /proc/swaps + +关闭交换分区: + + # swapoff /dev/sdX1 + +下面的例子,我们会使用 fdisk 将 /dev/sdc1(512MB,系统和内存为 256MB)来设置交换分区,下面是我们之前详细提过的步骤。注意,这种情况下我们使用的是指定大小分区。 + +![Create-Swap-Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Swap-Partition.png) + +创建交换分区 + +![Add Swap Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Swap-Partition.png) + +启用交换分区 + +### 结论 ### + +在你的系统管理员之路上,创建分区(包括交换分区)和格式化文件系统是非常重要的一部。我希望本文中所给出的技巧指导你到达你的管理员目标。随时在本讲评论区中发表你的技巧和想法,一起为社区做贡献。 + +参考链接 + +- [About the LFCS][1] +- [Why 
get a Linux Foundation Certification?][2] +- [Register for the LFCS exam][3] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[GHLandy](https://github.com/GHLandy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://training.linuxfoundation.org/certification/LFCS +[2]:https://training.linuxfoundation.org/certification/why-certify-with-us +[3]:https://identity.linuxfoundation.org/user?destination=pid/1 diff --git a/translated/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md b/translated/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md new file mode 100644 index 0000000000..1551f4de0c --- /dev/null +++ b/translated/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md @@ -0,0 +1,246 @@ +GHLandy Translated + +LFCS 系列第五讲:如何在 Linux 中挂载/卸载本地文件系统和网络文件系统(Samba 和 NFS) + +================================================================================ + +Linux 基金会已经发起了一个全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中间系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时上游团队请求支持的决策能力。 + +![Linux Foundation Certified Sysadmin – Part 5](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-5.png) + +LFCS 系列第五讲 + +请看以下视频,这里边介绍了 Linux 基金会认证程序。 + +注:youtube 视频 + + +本讲是《十套教程》系列中的第三讲,在这一讲里边,我们会解释如何在 Linux 中挂载/卸载本地和网络文件系统。这些都是 LFCS 认证中的必备知识。 + + +### 挂载文件系统 ### + +在个硬盘分好区之后,Linux 需要通过某些方式对硬盘分区上的数据进行访问。Linux 并不会像 DOS 或者 Windows 那样给每个硬盘分区分配一个字母来作为盘符,而是将硬盘分区挂载到统一的目录树上的挂载点。 + +挂载点是一个目录,挂载是一种访问分区上文件系统的方法,挂载文件系统实际上是将一个确切的文件系统(比如一个分区)和目录树中指定的目录联系起来的过程。 + 
+换句话说,管理存储设备的第一步就是把设备关联到文件系统树。要完成这一步,通常可以这样:用 mount 命令来进行临时挂载(用完的时候,使用 umount 命令来卸载),或者通过编辑 /etc/fstab 文件之后重启系统来永久性挂载,这样每次开机都会进行挂载。 + + +不带任何选项的 mount 命令,可以显示当前已挂载的文件系统。 + + # mount + +![Check Mounted Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/check-mounted-filesystems.png) + +检查已挂载的文件系统 + +另外,mount 命令通常用来挂载文件系统。其基本语法如下: + + # mount -t type device dir -o options + +该命令会指引内核在设备上找到的文件系统(如已格式化为指定类型的文件系统)挂载到指定目录。像这样的形式,mount 命令不会再到 /etc/fstab 文件中进行确认。 + +除非像下面,挂载指定的目录或者设备: + + # mount /dir -o options + 或 + # mount device -o options + +mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上述两种情况下,mount 命令会在 /etc/fstab 查找相应的设备或挂载点),最后尝试完成挂载操作(这个通常可以成功执行,除非你的挂载点或者设备正在使用中,或者你调用 mount 命令的时候没有 root 权限)。 + +你可以看到,mount 命令的每行输出都是如下格式: + + device on directory type (options) + +例如: + + /dev/mapper/debian-home on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered) + +读作: + +设备 dev/mapper/debian-home 的格式为 ext4,挂载在 /home 下,并且有以下挂载选项: rw,relatime,user_xattr,barrier=1,data=ordered。 + +**mount 命令选项** + +下面列出 mount 命令的常用选项 + + +- async:运许在将要挂载的文件系统上进行异步 I/O 操作 +- auto:标志文件系统通过 mount -a 命令挂载,与 noauto 相反。 + +- defaults:该选项为 async,auto,dev,exec,nouser,rw,suid 的一个别名。注意,多个选项必须由逗号隔开并且中间没有空格。倘若你不小心在两个选项中间输入了一个空格,mount 命令会把后边的字符解释为另一个参数。 +- loop:将镜像文件(如 .iso 文件)挂载为 loop 设备。该选项可以用来模拟显示光盘中的文件内容。 +- noexec:阻止该文件系统中可执行文件的执行。与 exec 选项相反。 + +- nouser:阻止任何用户(除 root 用户外) 挂载或卸载文件系统。与 user 选项相反。 +- remount:重新挂载文件系统。 +- ro:只读模式挂载。 +- rw:读写模式挂载。 +- relatime:只要访问时间早于修改时间,就更新文件的的访问时间。 +- user_xattr:允许用户设置和移除可扩展文件系统属性。 + +**以 ro 和 noexec 模式挂载设备** + + # mount -t ext4 /dev/sdg1 /mnt -o ro,noexec + +在本例中,我们可以看到,在挂载点 /mnt 中尝试写入文件或者运行可执行文件都会显示相应的错误信息。 + + # touch /mnt/myfile + # /mnt/bin/echo “Hi there” + +![Mount Device in Read Write Mode](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device-Read-Write.png) + +可读写模式挂载设备 + +**以默认模式挂载设备** + +以下场景,我们在重新挂载设备的挂载点中,像上例一样尝试你写入文件和运行可执行文件。 + + + # mount -t ext4 /dev/sdg1 /mnt -o defaults + +![Mount Device in 
Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device.png) + +挂载设备 + +在这个例子中,我们发现写入文件和命令都完美执行了。 + +### 卸载设备 ### + +使用 umount 命令卸载设备,意味着将所有的“在使用”数据全部写入到文件系统了,然后可以安全移除文件系统。请注意,倘若你移除一个没有事先正确卸载的文件系统,就会有造成设备损坏和数据丢失的风险。 + +也就是说,你必须设备的盘符或者挂载点中退出,才能卸载设备。换言之,当前工作目录不能是需要卸载设备的挂载点。否则,系统将返回设备繁忙的提示信息。 + +![Unmount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Unmount-Device.png) + +卸载设备 + +离开需卸载设备的挂载点最简单的方法就是,运行不带任何选项的 cd 命令,这样会回到当前用户的家目录。 + + +### 挂载常见的网络文件系统 ### +最常用的两种网络文件系统是 SMB(Server Message Block,服务器消息块)和 NFS(Network File System,网络文件系统)。如果你只向类 Unix 客户端提供共享,用 NFS 就可以了,如果是向 Windows 和其他类 Unix客户端提供共享服务,就需要用到 Samba 了。 + + +扩展阅读 + +- [Setup Samba Server in RHEL/CentOS and Fedora][1] +- [Setting up NFS (Network File System) on RHEL/CentOS/Fedora and Debian/Ubuntu][2] + +下面的例子中,假设 Samba 和 NFS 已经在地址为 192.168.0.10 的服务器上架设好了(请注意,架设 NFS 服务器也是 LFCS 考试中需要考核的能力,我们会在后边中提到)。 + + +#### 在 Linux 中挂载 Samba 共享 #### + +第一步:在 Red Hat 以 Debian 系发行版中安装 samba-client、samba-common 和 cifs-utils 软件包,如下: + + # yum update && yum install samba-client samba-common cifs-utils + # aptitude update && aptitude install samba-client samba-common cifs-utils +然后运行下列命令,查看服务器上可用的 Samba 共享。 + + # smbclient -L 192.168.0.10 + +并输入远程机器上 root 账户的密码。 + +![Mount Samba Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Samba-Share.png) + +挂载 Samba 共享 + +上图中,已经对可以挂载到我们本地系统上的共享进行高亮显示。你只需要与一个远程服务器上的合法用户名及密码就可以访问共享了。 + +第二步:当挂载有密码保护的网络文件系统时候,将你的访问凭证写入到 /etc/fstab 文件中并非明智的选择。你需要将这些信息写入到具有 600 权限的隐藏文件中,像这样: + + # mkdir /media/samba + # echo “username=samba_username” > /media/samba/.smbcredentials + # echo “password=samba_password” >> /media/samba/.smbcredentials + # chmod 600 /media/samba/.smbcredentials + +第三步:然后将下面的内容添加到 /etc/fstab 文件中。 + + # //192.168.0.10/gacanepa /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0 + +第四步:现在可以挂载你的 Samba 共享了。手动挂载(mount //192.168.0.10/gacanepa)或者重启系统并应用 /etc/fstab 中相应行来用就挂载都可以。 + +![Mount Password Protect Samba 
Share](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Password-Protect-Samba-Share.png) + +挂载有密码保护的 Samba 共享 + +#### 在 Linux 系统中挂载 NFS 共享 #### + +第一步:在 Red Hat 以 Debian 系发行版中安装 nfs-common 和 portmap 软件包。如下: + + # yum update && yum install nfs-utils nfs-utils-lib + # aptitude update && aptitude install nfs-common + +第二步:为 NFS 共享创建挂载点。 + + # mkdir /media/nfs + +第三步:将下面的内容添加到 /etc/fstab 文件中。 + +192.168.0.10:/NFS-SHARE /media/nfs nfs defaults 0 0 + +第四步:现在可以挂载你的 Samba 共享了。手动挂载(mount 192.168.0.10:/NFS-SHARE)或者重启系统并应用 /etc/fstab 中相应行来用就挂载都可以。 + +![Mount NFS Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-NFS-Share.png) + +挂载 NFS 共享 + +### 永久性挂载文件系统 ### + +像前面两个例子那样,/etc/fstab 控制着Linux如何访问硬盘分区及可移动设备。/etc/fstab 由六个字段的内容组成,各个字段之间通过一个空格符或者制表符来分开。井号(#)开始的行只是会被忽略的注释。 + +每一行都按照这个格式来写入: + + + +其中: + +- : 第一个字段指定挂载的设备。大多数发行版本都通过分区的标卷(label)或者 UUID 来指定。这样做可以避免分区号改变是带来的错误。 +- : 第二字段指定挂载点。 +- :文件系统的类型代码与 mount 命令挂载文件系统时使用的类型代码是一样的。通过 auto 类型代码可以让内核自动检测文件系统,这对于可移动设备来说非常方便。注意,该选项可能不是对所有文件系统可用。 +- : 一个(或多个)挂载选项。 +- : 你可能把这个字段设置为 0(否则设置为 1),使得系统启动时禁用 dump 工具(dump 程序曾经是一个常用的备份工具,但现在越来越少用了)对文件系统进行备份。 + +- : 这个字段指定启动系统是是否通过 fsck 来检查文件系统的完整性。0 表示 fsck 不对文件系统进行检查。数字越大,优先级越低。因此,根分区(/)最可能使用数字 1,其他所有需要检查的分区则是以数字 2. + +**Mount 命令例示** + +1. 在系统启动时,通过 TECMINT 标卷来挂载文件系统,并具备 rw 和 noexec 属性,你应该将以下语句添加到 /etc/fstab 文件中。 + + LABEL=TECMINT /mnt ext4 rw,noexec 0 0 + +2. 
若你想在系统启动时挂载 DVD 光驱中的内容,添加已下语句。 + + /dev/sr0 /media/cdrom0 iso9660 ro,user,noauto 0 0 + +其中 /dev/sr0 为你的 DVD 光驱。 + +### 总结 ### + +可以放心,在命令行中挂载/卸载本地和网络文件系统将是你作为系统管理员的日常责任的一部分。同时,你需要掌握 /etc/fstab 文件的编写。希望本文对你有帮助。随时在下边发表评论(或者提问),并分享本文到你的朋友圈。 + + +参考链接 + +- [About the LFCS][3] +- [Why get a Linux Foundation Certification?][4] +- [Register for the LFCS exam][5] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/mount-filesystem-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[GHLandy](https://github.com/GHLandy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/setup-samba-server-using-tdbsam-backend-on-rhel-centos-6-3-5-8-and-fedora-17-12/ +[2]:http://www.tecmint.com/how-to-setup-nfs-server-in-linux/ +[3]:https://training.linuxfoundation.org/certification/LFCS +[4]:https://training.linuxfoundation.org/certification/why-certify-with-us +[5]:https://identity.linuxfoundation.org/user?destination=pid/1 diff --git a/translated/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md b/translated/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md new file mode 100644 index 0000000000..5de5004424 --- /dev/null +++ b/translated/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md @@ -0,0 +1,284 @@ +LFCS 系列第六讲:组装分区为RAID设备——创建和管理系统备份 +========================================================= +Linux 基金会已经发起了一个全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中间系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时上游团队请求支持的决策能力。 + +![Linux Foundation Certified Sysadmin – Part 6](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-6.png) + +LFCS 系列第六讲 + +以下视频介绍了 Linux 
基金会认证程序。 + +注:youtube 视频 + + +本讲是《十套教程》系列中的第六讲,在这一讲里,我们将会解释如何组装分区为RAID设备——创建和管理系统备份。这些都是 LFCS 认证中的必备知识。 + +### 了解RAID ### + +一种被称为独立磁盘冗余阵列(RAID)的技术是将多个硬盘组合成一个单独逻辑单元的存储解决方案,它提供了数据冗余功能并且改善硬盘的读写操作性能。 + +然而,实际的容错和磁盘I/O性能硬盘取决于如何将多个硬盘组装成磁盘阵列。根据可用的设备和容错/性能的需求,RAID被分为不同的级别,你可以在Tecmint.com上参考RAID系列文章以获得每个RAID级别更详细的解释。 + +- RAID Guide: [What is RAID, Concepts of RAID and RAID Levels Explained][1] + +我们选择用于创建、组装、管理、监视软件RAID的工具,叫做mdadm(multiple disk admin的简写)。 + +``` +---------------- Debian and Derivatives ---------------- +# aptitude update && aptitude install mdadm +``` + +``` +---------------- Red Hat and CentOS based Systems ---------------- +# yum update && yum install mdadm +``` + +``` +---------------- On openSUSE ---------------- +# zypper refresh && zypper install mdadm # +``` + +#### 组装分区作为RAID设备 #### + +组装已有分区作为RAID设备的过程由以下步骤组成。 + +**1. 使用mdadm创建阵列** + +如果先前其中一个分区被格式化,或者作为了另一个RAID阵列的一部分,你会被提示以确认创建一个新的阵列。假设你已经采取了必要的预防措施以避免丢失重要数据,那么可以安全地输入Y并且按下回车。 + +``` +# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1 +``` + +![Creating RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Creating-RAID-Array.png) + +创建RAID阵列 + +**2. 检查阵列的创建状态** + +在创建了RAID阵列之后,你可以检查使用以下命令检查阵列的状态。 + + + # cat /proc/mdstat + or + # mdadm --detail /dev/md0 [More detailed summary] + +![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png) + +检查RAID阵列的状态 + +**3. 格式化RAID设备** + +如本系列[Part 4][2]所介绍的,按照你的需求/要求采用某种文件系统格式化你的设备。 + +4. 
监控RAID阵列服务 + +指示监控服务时刻监视你的RAID阵列。把```# mdadm --detail --scan```命令输出结果添加到/etc/mdadm/mdadm.conf(Debian和derivatives)或者/etc/mdadm.conf(Cent0S/openSUSE),如下。 + + # mdadm --detail --scan + + +![Monitor RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Monitor-RAID-Array.png) + +监控RAID阵列 + + # mdadm --assemble --scan [Assemble the array] + +为了确保服务能够开机启动,需要以root权限运行以下命令。 + +**Debian 和 Derivatives** + +Debian 和 Derivatives能够通过下面步骤使服务默认开机启动 + + # update-rc.d mdadm defaults + +在/etc/default/mdadm文件中添加下面这一行 + + AUTOSTART=true + + +**CentOS 和 openSUSE(systemd-based)** + + # systemctl start mdmonitor + # systemctl enable mdmonitor + +**CentOS 和 openSUSEi(SysVinit-based)** + + # service mdmonitor start + # chkconfig mdmonitor on + +**5. 检查RAID磁盘故障** + +在支持冗余的的RAID级别中,在需要时会替换故障的驱动器。当磁盘阵列中的设备出现故障时,仅当存在我们第一次创建阵列时预留的备用设备时,磁盘阵列会将自动启动重建。 + +![Check RAID Faulty Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Faulty-Disk.png) + +检查RAID故障磁盘 + +否则,我们需要手动连接一个额外的物理驱动器到我们的系统,并且运行。 + + # mdadm /dev/md0 --add /dev/sdX1 + +/dev/md0是出现了问题的阵列,而/dev/sdx1是新添加的设备。 + +**6. 分解一个工作阵列** + +如果你需要使用工作阵列的设备创建一个新的阵列,你可能不得不去分解已有工作阵列——(可选步骤) + + # mdadm --stop /dev/md0 # Stop the array + # mdadm --remove /dev/md0 # Remove the RAID device + # mdadm --zero-superblock /dev/sdX1 # Overwrite the existing md superblock with zeroes + +**7. 
设置邮件通知** + +你可以配置一个用于发送通知的有效邮件地址或者系统账号(确保在mdadm.conf文件中有下面这一行)。——(可选步骤) + + MAILADDR root + +在这种情况下,来自RAID后台监控程序所有的通知将会发送到你的本地root账号的邮件箱中。其中一个类似的通知如下。 + +说明:此次通知事件和第5步中的例子相关。一个设备被标志为错误并且一个空闲的设备自动地被mdadm加入到阵列。我们用完了所有"健康的"空闲设备,因此我们得到了通知。 + +![RAID Monitoring Alerts](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Monitoring-Alerts.png) + +RAID监控通知 + +#### 了解RAID级别 #### + +** RAID 0 ** + +阵列总大小是最小分区大小的n倍,n是阵列中独立磁盘的个数(你至少需要两个驱动器/磁盘)。运行下面命令,使用/dev/sdb1和/dev/sdc1分区组装一个RAID 0 阵列。 + + # mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1 + +常见用途:用于支持性能比容错更重要的实时应用程序的设置 + +**RAID 1 (又名镜像/Mirroring)** + +阵列总大小等于最小分区大小(你至少需要两个驱动器/磁盘)。运行下面命令,使用/dev/sdb1和/dev/sdc1分区组装一个RAID 1 阵列。 + + # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1 + +常见用途:操作系统的安装或者重要的子文件夹,例如 /home + +**RAID 5 (又名奇偶校验码盘/drives with Parity)** + +阵列总大小将是最小分区大小的(n-1)倍。//用于奇偶校验(冗余)计算(你至少需要3个驱动器/磁盘)。 + +说明:你可以指定一个空闲设备(/dev/sde1)替换问题出现时的故障部分(分区)。运行下面命令,使用/dev/sdb1, /dev/sdc1, /dev/sdd1,/dev/sde1组装一个RAID 5 阵列,其中/dev/sde1作为空闲分区。 + + # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1 + +常见用途:Web和文件服务 + +**RAID 6 (又名双重奇偶校验码盘/drives with double Parity)** + +阵列总大小为(n*s)-2*s,其中n为阵列中独立磁盘的个数,s为最小磁盘大小。 + +说明:你可以指定一个空闲分区(在这个例子为/dev/sdf1)替换问题出现时的故障部分(分区)。 + +运行下面命令,使用/dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1和/dev/sdf1组装RAID 6阵列,其中/dev/sdf1作为空闲分区。 + + # mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde --spare-devices=1 /dev/sdf1 + +常见用途:大容量、高可用性要求的文件服务器和备份服务器。 + +**RAID 1+0 (又名镜像条带/stripe of mirrors)** + +因为RAID 1+0是RAID 0 和 RAID 1的组合,所以阵列总大小是基于两者的公式计算的。首先,计算每一个镜像的大小,然后再计算条带的大小。 + + + # mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1 + +常见用途:需要快速IO操作的数据库和应用服务器 + +#### 创建和管理系统备份 #### + 
+记住RAID其所有的价值不是在于备份的替换者是对你有益的!在黑板上写上1000次,如果你需要的话,但无论何时一定要记住它。在我们开始前,我们必须注意的是,没有一个放之四海皆准的针对所有系统备份的解决方案,但这里有一些东西,是你在规划一个备份策略时需要考虑的。 + +- 你的系统将用于什么?(桌面或者服务器?如果系统是应用于后者,那么最重要的服务是什么——//其配置?) +- 你每隔多久备份你的系统? +- 你需要备份的数据是什么(比如 文件/文件夹/数据库转储)?你还可以考虑是否需要备份大型文件(比如音频和视频文件)。 +- 这些备份将会存储在哪里(物理位置和媒体)? + +**备份你的数据** + +方法1:使用dd命令备份整个磁盘。你可以在任意时间点通过创建一个准确的镜像来备份一整个硬盘或者是分区。注意当设备是离线时,这种方法效果最好,也就是说它没有被挂载并且没有任何进程的I/O操作访问它。 + +这种备份方法的缺点是镜像将具有和磁盘或分区一样的大小,即使实际数据占用的是一个很小的比例。比如,如果你想要为只使用了10%的20GB的分区创建镜像,那么镜像文件将仍旧是20GB。换句话来讲,它不仅包含了备份的实际数据,而且也包含了整个分区。如果你想完整备份你的设备,那么你可以考虑使用这个方法。 + +**从现有的设备创建一个镜像文件** + + # dd if=/dev/sda of=/system_images/sda.img + 或者 + --------------------- 可选地,你可以压缩镜像文件 ------------------- + # dd if=/dev/sda | gzip -c > /system_images/sda.img.gz + +**从镜像文件恢复备份** + + # dd if=/system_images/sda.img of=/dev/sda + 或者 + --------------------- 根据你创建镜像文件时的选择(译者注:比如压缩) ---------------- + # gzip -dc /system_images/sda.img.gz | dd of=/dev/sda + +方法2:使用tar命令备份确定的文件/文件夹——已经在本系列[Part 3][3]中讲了。如果你想要备份指定的文件/文件夹(配置文件,用户主目录等等),你可以使用这种方法。 + +方法3:使用rsync命令同步文件。rsync是一种多功能远程(和本地)文件复制工具。如果你想要从网络设备备份或同步文件,rsync是一种选择。 + + +无论是你是正在同步两个本地文件夹还是本地 < — > 挂载在本地文件系统的远程文件夹,其基本语法是一样的。 + + # rsync -av source_directory destination directory + +在这里,-a 递归遍历子目录(如果它们存在的话),维持符号链接、时间戳、权限以及原本的属主/属组,-v 显示详细过程。 + +![rsync Synchronizing Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronizing-Files.png) + +rsync 同步文件 + +除此之外,如果你想增加在网络上传输数据的安全性,你可以通过rsync使用ssh协议。 + +**通过ssh同步本地 → 远程文件夹** + + # rsync -avzhe ssh backups root@remote_host:/remote_directory/ + +这个示例,本地主机上的backups文件夹将与远程主机上的/root/remote_directory的内容同步。 + +在这里,-h 选项以人可读的格式显示文件的大小,-e 标志用于表示一个ssh连接。 + +![rsync Synchronize Remote Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronize-Remote-Files.png) + +rsync 同步远程文件 + +**通过ssh同步远程 → 本地 文件夹** + +在这种情况下,交换前面示例中的source和destination文件夹。 + + # rsync -avzhe ssh root@remote_host:/remote_directory/ backups + +请注意这些只是rsync用法的三个示例而已(你可能遇到的最常见的情形)。对于更多有关rsync命令的示例和用法 ,你可以查看下面的文章。 + +- 
Read Also: [10 rsync Commands to Sync Files in Linux][4] + +### Summary ### + +作为一个系统管理员,你需要确保你的系统表现得尽可能好。如果你做好了充分准备,并且如果你的数据完整性能被诸如RAID和系统日常备份的存储技术支持,那你将是安全的。 + +如果你有有关完善这篇文章的问题、评论或者进一步的想法,可以在下面畅所欲言。除此之外,请考虑通过你的社交网络简介分享这系列文章。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[cpsoture](https://github.com/cposture) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/create-partitions-and-filesystems-in-linux/ +[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/ +[4]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/ + diff --git a/translated/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md b/translated/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md new file mode 100644 index 0000000000..5f511676af --- /dev/null +++ b/translated/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md @@ -0,0 +1,333 @@ +GHLandy Translated + +LFCS 系列第八讲:管理用户和用户组、文件权限和属性以及启用账户 sudo 访问权限 + +================================================================================ + +去年八月份,Linux 基金会发起了全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,旨在让世界各地的人能够参与到关于 Linux 系统中间层的基本管理操作的认证考试中去,这项认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。 + +![Linux Users and Groups Management](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-8.png) + +LFCS 系列第八讲 + +请看以下视频,里边将描述 LFCS 认证程序。 + +注:youtube视频 + + +本讲是《十套教程》系列的第八讲,在这一讲中,我们将引导你学习如何在 Linux 管理用户和用户组权限的设置,这些内容是 LFCS 
认证考试中的必备知识。
+
+由于 Linux 是一个多用户的操作系统(允许多个用户通过不同主机或者终端访问一个独立系统),因此你需要知道如何才能有效地管理用户:如何添加、编辑、禁用和删除用户账户,并赋予他们足以完成自身任务的必要权限。
+
+### 添加用户账户 ###
+
+添加新用户账户,你需要以 root 运行以下两条命令中的任意一条:
+
+    # adduser [new_account]
+    # useradd [new_account]
+
+当新用户账户添加到系统时,会自动执行以下操作:
+
+1. 自动创建用户家目录(默认是 /home/username)。
+
+2. 自动拷贝下列隐藏文件到新建用户的家目录,用来设置新用户会话的环境变量。
+
+        .bash_logout
+        .bash_profile
+        .bashrc
+
+3. 自动创建邮件缓存目录 /var/spool/mail/username。
+
+4. 自动创建与用户名相同的用户组。
+
+**理解 /etc/passwd 中的内容**
+
+/etc/passwd 文件中存储了所有用户账户的信息,每个用户在里边都有一条对应的记录,其格式(每个字段用冒号隔开)如下:
+
+    [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell]
+
+- 字段 [username] 和 [Comment] 的含义正如其字面意思,一目了然。
+- 第二个字段中 x 表明通过用户名 username 登录系统是有密码保护的,密码保存在 /etc/shadow 文件中。
+- [UID] 和 [GID] 字段用整数表示,分别代表该用户的用户标识符和其所在组的组标识符。
+- 字段 [Home directory] 为 username 用户家目录的绝对路径。
+- 字段 [Default shell] 指定用户登录系统时默认使用的 shell。
+
+**理解 /etc/group 中的内容**
+
+/etc/group 文件存储所有用户组的信息。每行记录的格式如下:
+
+    [Group name]:[Group password]:[GID]:[Group members]
+
+- [Group name] 为用户组名称。
+- 字段 [Group password] 为 x 的话,则说明不使用用户组密码。
+- [GID] 与 /etc/passwd 中保存的 GID 相同。
+- [Group members] 为用户组中的用户列表,使用逗号隔开。
+
+![Add User Accounts in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-user-accounts.png)
+
+添加用户账户
+
+添加用户账户之后,你可以使用 usermod 命令来修改用户信息中的部分字段,该命令基本语法如下:
+
+    # usermod [options] [username]
+
+**设置账户的过期时间**
+
+使用 --expiredate 选项,后边接 年-月-日 格式的日期,如下:
+
+    # usermod --expiredate 2014-10-30 tecmint
+
+**将用户添加到其他组**
+
+使用 -aG 或者 --append --groups 选项,后边跟着用户组,如果有多个用户组,每个用户组之间使用逗号隔开。
+
+    # usermod --append --groups root,users tecmint
+
+**改变用户家目录的默认位置**
+
+使用 -d 或者 --home 选项,后边跟着新的家目录的绝对路径。
+
+    # usermod --home /tmp tecmint
+
+**改变用户的默认 shell**
+
+使用 --shell 选项,后边跟着新 shell 的路径。
+
+    # usermod --shell /bin/sh tecmint
+
+**显示用户所属的用户组**
+
+    # groups tecmint
+    # id tecmint
+
+下面,我们将上述命令一次性运行:
+
+    # usermod --expiredate 2014-10-30 --append --groups root,users --home /tmp --shell /bin/sh tecmint
+
+![usermod Command 
Examples](http://www.tecmint.com/wp-content/uploads/2014/10/usermod-command-examples.png)
+
+usermod 命令例示
+
+扩展阅读
+
+- [15 useradd Command Examples in Linux][1]
+- [15 usermod Command Examples in Linux][2]
+
+对于已有用户账户,我们还可以:
+
+**通过锁定密码来禁用账户**
+
+使用 -L(大写 L)或者 --lock 选项来锁定用户密码。
+
+    # usermod --lock tecmint
+
+**解锁用户密码**
+
+使用 -U 或者 --unlock 选项来解锁我们之前锁定的账户。
+
+    # usermod --unlock tecmint
+
+![Lock User in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/lock-user-in-linux.png)
+
+锁定用户账户
+
+**为需要对指定文件进行读写的多个用户建立用户组**
+
+运行下列几条命令来完成:
+
+    # groupadd common_group # 添加新用户组
+    # chown :common_group common.txt # 将 common.txt 的用户组修改为 common_group
+    # usermod -aG common_group user1 # 添加用户 user1 到 common_group 用户组
+    # usermod -aG common_group user2 # 添加用户 user2 到 common_group 用户组
+    # usermod -aG common_group user3 # 添加用户 user3 到 common_group 用户组
+
+**删除用户组**
+
+通过以下命令删除用户组:
+
+    # groupdel [group_name]
+
+属于这个 group_name 用户组的文件是不会被删除的,而仅仅是删除了用户组。
+
+### Linux 文件权限 ###
+
+除了我们在 [Setting File Attributes – Part 3][3] 中说到的基本的读取、写入和执行权限外,文件还有一些不常用却很重要的权限设置,有时候把它们称为“特殊权限”。
+
+就像之前我们讨论的基本权限,这里同样使用八进制数字或者一个字母(象征性符号)来表示该权限类型。
+
+**删除用户账户**
+
+你可以通过 userdel --remove 命令来删除用户账户。这样会删除用户拥有的家目录和家目录下的所有文件,以及邮件缓存目录。
+
+    # userdel --remove [username]
+
+#### 用户组管理 ####
+
+每次添加新用户,系统会为该用户创建同名的用户组,此时用户组里边只有新建的用户,其他用户可以随后添加进去。建立用户组的目的之一,就是为了通过对指定资源设置权限来完成对这些资源和文件的访问控制。
+
+比如,你有下列用户:
+
+- user1 (主用户组:user1)
+- user2 (主用户组:user2)
+- user3 (主用户组:user3)
+
+他们都需要对你系统里边某个位置的 common.txt 文件,或者 user1 用户刚刚创建的共享文件进行读写。你可能会运行下列命令:
+
+    # chmod 660 common.txt
+    或
+    # chmod u=rw,g=rw,o= common.txt [注意最后那个 = 号和文件名之间的空格]
+
+然而,这样仅仅给文件的属主和属组(本例为 user1)成员提供了读写权限。你还需要将 user2 和 user3 添加到 user1 组,但这样做也将 user1 用户和用户组的其他文件的权限开放给了 user2 和 user3。
+
+这时候,用户组就派上用场了,下面将演示怎么做。
+
+**理解 Setuid 位**
+
+当为可执行文件设置 setuid 位之后,用户运行程序时会继承该程序属主的有效特权。由于这样做会引起安全风险,因此设置 setuid 权限的文件及程序必须尽量少。你会发现,设置了 setuid 权限的程序,常见于普通用户需要用到 root 权限才能完成操作的场合。
+
+也就是说,用户不仅仅可以运行这个可执行文件,而且能以 root 权限来运行。比如,先检查 /bin/passwd 
的权限。这个可执行文件用于改变账户的密码,会修改 /etc/shadow 文件。超级用户可以改变任意账户的密码,但是其他用户只能改变自己账户的密码。
+
+![passwd Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/passwd-command.png)
+
+passwd 命令例示
+
+因此,所有用户都有权限运行 /bin/passwd,但只有 root 用户可以指定改变任意用户账户的密码,其他用户只能改变其自身的密码。
+
+![Change User Password in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/change-user-password.png)
+
+修改用户密码
+
+**理解 Setgid 位**
+
+设置 setgid 位之后,真实用户的有效 GID 变为属组的 GID。因此,任何用户都能以属组用户的权限来访问文件。另外,当目录设置了 setgid 位之后,新建的文件将继承其所属目录的 GID,并且新建的子目录也会继承父目录的 setgid 位。通过这个方法,你能够以一个指定的用户组身份来访问该目录里边的文件,而不必管文件属主的主属组。
+
+    # chmod g+s [filename]
+
+要以八进制形式设置 setgid 位,在当前基本权限(或者想要设置的权限)前加上数字 2 就行了。
+
+    # chmod 2755 [directory]
+
+**给目录设置 setgid 位**
+
+![Add Setgid in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-setgid-to-directory.png)
+
+给目录设置 setgid 位
+
+**理解 Sticky 位**
+
+对于文件,Linux 会忽略 sticky 位;而对于目录,设置 sticky 位之后,除了文件属主或者 root 用户外,其他用户无法删除其中的文件,甚至连重命名目录中其他文件也不行。
+
+    # chmod o+t [directory]
+
+要以八进制形式设置 sticky 位,在当前基本权限(或者想要设置的权限)前加上数字 1 就行了。
+
+    # chmod 1755 [directory]
+
+若没有 sticky 位,任何有权限读写目录的用户都可删除和重命名其中的文件。因此,sticky 位通常出现在像 /tmp 之类所有人都具有写权限的目录上。
+
+![Add Stickybit in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-sticky-bit-to-directory.png)
+
+给目录设置 sticky 位
+
+### Linux 特殊文件属性 ###
+
+文件还有其他一些属性,用来做进一步的操作限制。比如,阻止对文件的重命名、移动、删除甚至是修改。可以通过使用 [chattr 命令][4] 来设置,并可以使用 lsattr 工具来查看这些属性。设置如下:
+
+    # chattr +i file1
+    # chattr +a file2
+
+运行这些命令之后,file1 成为不可变状态(即不可移动、重命名、修改或删除),而 file2 进入“仅追加”模式(仅能以追加内容的模式打开)。
+
+![Protect File from Deletion](http://www.tecmint.com/wp-content/uploads/2014/10/chattr-command.png)
+
+通过 chattr 命令来保护文件
+
+### 访问 root 账户并启用 sudo ###
+
+访问 root 账户的方法之一,就是通过输入:
+
+    $ su
+
+然后输入 root 账户密码。
+
+倘若授权成功,你将以 root 身份登录,工作目录则是登录前所在的位置。如果是想要一登录就自动进入 root 用户的家目录,请运行:
+
+    $ su -
+
+然后输入 root 账户密码。
+
+![Enable sudo Access on Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Sudo-Access.png)
+
+为用户启用 sudo 访问权限
+
+执行上述步骤需要普通用户知道 root 账户的密码,这样会引起非常严重的安全问题。于是,系统管理员通常会配置 
sudo 命令来让普通用户在严格控制的环境中以其他用户身份(通常是 root)来执行命令。所以,可以在严格控制用户的情况下,又允许他运行一条或多条特权命令。
+
+- 扩展阅读:[Difference Between su and sudo User][5]
+
+普通用户通过他自己的用户密码来完成 sudo 授权。输入命令之后会出现输入密码(并不是超级用户密码)的提示,授权成功(只要赋予了用户运行该命令的权限)的话,指定的命令就会运行。
+
+系统管理员必须编辑 /etc/sudoers 文件,才能为 sudo 赋予相应权限。通常建议使用 visudo 命令来编辑这个文件,而不是使用文本编辑器直接打开它。
+
+    # visudo
+
+这样会使用 vim(如果你按照 [Install and Use vim as Editor – Part 2][6] 里边说的安装配置过的话)来打开 /etc/sudoers 文件。
+
+以下是需要设置的相关的行:
+
+    Defaults secure_path="/usr/sbin:/usr/bin:/sbin"
+    root ALL=(ALL) ALL
+    tecmint ALL=/bin/yum update
+    gacanepa ALL=NOPASSWD:/bin/updatedb
+    %admin ALL=(ALL) ALL
+
+让我们来更深入地了解这些项:
+
+    Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"
+
+这一行指定 sudo 将会使用的路径,这样可以阻止 sudo 使用用户自己指定的路径,那样的话可能会危及系统。
+
+下一行是用来指定权限的:
+
+    root ALL=(ALL) ALL
+
+- 第一个 ALL 关键词表明这条规则适用于所有主机。
+- 第二个 ALL 关键词表明第一个字段中指定的用户能以任何用户身份所具有的权限来运行相应命令。
+- 第三个 ALL 关键词表明可以运行任何命令。
+
+    tecmint ALL=/bin/yum update
+
+如果 = 号后边没有指定用户,sudo 则默认为 root 用户。本例中,tecmint 用户能以 root 身份运行 yum update 命令。
+
+    gacanepa ALL=NOPASSWD:/bin/updatedb
+
+NOPASSWD 关键词表明 gacanepa 用户不需要密码,可以直接运行 /bin/updatedb 命令。
+
+    %admin ALL=(ALL) ALL
+
+% 符号表示该行应用于 admin 用户组。其他部分的含义与对于用户的含义是一样的。本例表示 admin 用户组的成员可以从任何主机连接并运行任何命令。
+
+通过 sudo -l 命令可以查看你的账户拥有什么样的权限。
+
+![Sudo Access Rules](http://www.tecmint.com/wp-content/uploads/2014/10/sudo-access-rules.png)
+
+Sudo 访问规则
+
+### 总结 ###
+
+对于系统管理员来说,高效的用户和文件管理技能是非常必要的。本文已经涵盖了这些内容,我们希望你将这些作为一个开始,然后慢慢进步。随时在下边发表评论或提问,我们会尽快回应的。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/manage-users-and-groups-in-linux/
+
+作者:[Gabriel Cánepa][a]
+译者:[GHLandy](https://github.com/GHLandy)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/add-users-in-linux/
+[2]:http://www.tecmint.com/usermod-command-examples/ 
+[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/
+[4]:http://www.tecmint.com/chattr-command-examples/
+[5]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/
+[6]:http://www.tecmint.com/vi-editor-usage/
diff --git a/translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md
new file mode 100644
index 0000000000..2781dde63d
--- /dev/null
+++ b/translated/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md
@@ -0,0 +1,230 @@
+Flowsnow translating...
+LFCS系列第九讲: 使用Yum, RPM, Apt, Dpkg, Aptitude, Zypper进行Linux包管理
+================================================================================
+去年八月,Linux 基金会宣布了一个全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证计划,这对广大系统管理员来说是一个很好的机会,管理员们可以通过实际操作的考试来表明自己有能力支撑 Linux 系统的整体运营。在必要的时候,一个 Linux 基金会认证的系统管理员有足够的专业知识来确保系统高效运行,能提供第一手的故障诊断和监控,并且在需要把问题上报给工程师团队时,做出明智的决策。
+
+![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png)
+
+Linux基金会认证系统管理员 – 第九讲
+
+请观看下面关于Linux基金会认证计划的演示。
+
+注:youtube 视频
+
+
+本文是本系列十套教程中的第九讲,今天在这篇文章中我们会引导你学习Linux包管理,这也是LFCS认证考试所需要的。
+
+### 包管理 ###
+
+简单的说,包管理是在系统中安装和维护软件的一种方法,其中维护也包含更新和卸载。
+
+在 Linux 早期,程序只以源代码的方式发行,还带有所需的用户使用手册和必备的配置文件,甚至更多。现如今,大多数发行商默认使用预先构建好的程序或者程序集合(称为软件包)来发行软件,用户拿到之后可以直接安装。然而,Linux 最伟大的一点是我们仍然能够获得程序的源代码,用来学习、改进和编译。
+
+**包管理系统是如何工作的**
+
+如果某一个包需要一定的资源(如共享库),或者需要另一个包,我们就说它存在依赖。所有现代的包管理系统都提供了一些解决依赖的方法,以确保在安装一个包时,相关的依赖包也安装好了。
+
+**打包系统**
+
+几乎所有安装在现代 Linux 系统上的软件都能在互联网上找到。它们要么由发行商通过中央软件仓库提供(仓库中可能包含几千个包,每个包都已经构建、测试并且维护好了),要么以源代码的形式提供,可以直接下载并手动安装。
+
+由于不同的发行版使用不同的打包系统(Debian 的 *.deb 文件 / CentOS 的 *.rpm 文件 / openSUSE 的专门为 openSUSE 构建的 *.rpm 文件),因此为一个发行版本开发的包会与其他发行版本不兼容。不过,你最有可能接触到的,还是 LFCS 认证所涵盖的这三个发行版本系列。
+
+**高级和低级打包工具**
+
+为了有效地进行包管理的任务,你需要知道,你将会用到两种类型的实用工具:低级工具(在后端实际进行包文件的安装、升级和卸载),以及高级工具(负责依赖解决和元数据检索的任务,元数据也称为“关于数据的数据”)。
+
+注:表格
+
+| 发行版 | 低级工具 | 高级工具 |
+| :--- | :--- | :--- |
+| Debian 版及其衍生版 | dpkg | apt-get / aptitude |
+| CentOS 版 | rpm | yum |
+| openSUSE 版 | rpm | zypper |
+
+让我们来看下低级工具和高级工具的描述。
+
+dpkg 是基于 Debian 的系统中的一个低级包管理器。它可以安装、删除 *.deb 包,提供包的有关资料,以及构建 *.deb 包,但它不能自动下载并安装包相应的依赖。
+
+- 阅读更多: [15个dpkg命令实例][1]
+
+apt-get 是 Debian 和衍生版的高级包管理器,提供命令行方式来从多个来源检索和安装软件包,其中包括解决依赖。和 dpkg 不同的是,apt-get 不是直接基于 .deb 文件工作,而是基于包的正式名称。
+
+- 阅读更多: [25个apt-get命令实例][2]
+
+Aptitude 是基于 Debian 的系统的另一个高级包管理器,它可用于快速简便地执行管理任务(安装、升级和删除软件包,还可以自动解决依赖)。它提供了与 apt-get 相同的功能,还有一些额外的特性,例如可以访问一个包的多个版本。
+
+rpm 是 Linux 标准基础(LSB)兼容发行版使用的一种包管理器,用来对包进行低级处理。就像 dpkg 一样,rpm 可以查询、安装、检验、升级和卸载软件包,它常见于基于 Fedora 的系统,比如 RHEL 和 CentOS。
+
+- 阅读更多: [20个rpm命令实例][3]
+
+相对于基于 RPM 的系统,yum 增加了系统自动更新的功能和带依赖管理的包管理功能。作为一个高级工具,和 apt-get 或者 aptitude 相似,yum 基于软件仓库工作。
+
+- 阅读更多: [20个yum命令实例][4]
+
+### 低级工具的常见用法 ###
+
+下面是你会用低级工具处理的最常见的任务。
+
+**1. 从已编译(*.deb或*.rpm)的文件安装一个包**
+
+这种安装方法的缺点是没有提供解决依赖的方案。当你在发行版本库中无法获得某个包,并且又不能通过高级工具下载安装时,你很可能会从一个已编译文件安装该包。因为低级工具不会解决依赖问题,所以当安装一个依赖没有满足的包时,它会报错并退出。
+
+    # dpkg -i file.deb [Debian版和衍生版]
+    # rpm -i file.rpm [CentOS版 / openSUSE版]
+
+**注意**: 不要试图在CentOS中安装一个为openSUSE构建的.rpm文件,反之亦然!
+
+**2. 从已编译文件中更新一个包**
+
+同样,当中央库中没有某安装包时,你只能手动升级该包。
+
+    # dpkg -i file.deb [Debian版和衍生版]
+    # rpm -U file.rpm [CentOS版 / openSUSE版]
+
+**3. 列举安装的包**
+
+当你第一次接触一个已经在工作中的系统时,很可能你会想知道安装了哪些包。
+
+    # dpkg -l [Debian版和衍生版]
+    # rpm -qa [CentOS版 / openSUSE版]
+
+如果你想知道一个特定的包是否安装,你可以使用管道命令从以上命令的输出中去搜索,这在这个系列的[操作Linux文件 – 第一讲][5] 中有介绍。假定我们需要验证 mysql-common 这个包是否安装在 Ubuntu 系统中:
+
+    # dpkg -l | grep mysql-common
+
+![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png)
+
+检查安装的包
+
+另一种判断一个包是否已安装的方式:
+
+    # dpkg --status package_name [Debian版和衍生版]
+    # rpm -q package_name [CentOS版 / openSUSE版]
+
+例如,让我们找出 sysdig 包是否安装在我们的系统中:
+
+    # rpm -qa | grep sysdig
+
+![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png)
+
+检查sysdig包
+
+**4. 查询一个文件是由哪个包安装的**
+
+    # dpkg --search file_name
+    # rpm -qf file_name
+
+例如,pw_dict.hwm 文件是由哪个包安装的?
+
+    # rpm -qf /usr/share/cracklib/pw_dict.hwm
+
+![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png)
+
+Linux中查询文件
+
+### 高级工具的常见用法 ###
+
+下面是你会用高级工具处理的最常见的任务。
+
+**1. 搜索包**
+
+aptitude update 会更新可用软件包的列表,而 aptitude search 则会根据包名进行实际的搜索。
+
+    # aptitude update && aptitude search package_name
+
+使用 search all 选项时,yum 不仅可以通过包名,还可以通过包的描述来搜索程序包。
+
+    # yum search package_name
+    # yum search all package_name
+    # yum whatprovides "*/package_name"
+
+假定我们需要一个名为 sysdig 的文件,并想知道需要安装哪个包才能得到它,可以运行:
+
+    # yum whatprovides "*/sysdig"
+
+![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png)
+
+检查包描述
+
+whatprovides 告诉 yum 搜索一个含有能够匹配上述正则表达式的文件的包。
+
+    # zypper refresh && zypper search package_name [在openSUSE上]
+
+**2. 从仓库安装一个包**
+
+当安装一个包时,在包管理器解决了所有依赖问题后,可能会提醒你确认安装。需要注意的是,运行 update 或 refresh(根据所使用的软件包管理器)不是绝对必要的,但是考虑到安全性和依赖的原因,保持安装的软件包是最新的,是一个好的系统管理员的做法。
+
+    # aptitude update && aptitude install package_name [Debian版和衍生版]
+    # yum update && yum install package_name [CentOS版]
+    # zypper refresh && zypper install package_name [openSUSE版]
+
+**3. 卸载包**
+
+使用 remove 选项卸载软件包时会保留其配置文件,而使用 purge 选项则会将程序从系统中完全删除。
+
+    # aptitude remove / purge package_name
+    # yum erase package_name
+
+    ---注意要卸载的openSUSE包前面的减号 ---
+
+    # zypper remove -package_name
+
+在默认情况下,大部分(如果不是全部)的包管理器会在你实际卸载之前提示你,是否确定要继续卸载。所以,请仔细阅读屏幕上的信息,以避免陷入不必要的麻烦!
+
+**4. 
显示包的信息** + +下面的命令将会显示birthday这个包的信息。 + + # aptitude show birthday + # yum info birthday + # zypper info birthday + +![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png) + +检查包信息 + +### 总结 ### + +作为一个系统管理员,包管理器是你不能回避的东西。你应该立即准备使用本文中介绍的这些工具。希望你在准备LFCS考试和日常工作中会觉得这些工具好用。欢迎在下面留下您的意见或问题,我们将尽可能快的回复你。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-package-management/ + +作者:[Gabriel Cánepa][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/dpkg-command-examples/ +[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ +[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ +[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ \ No newline at end of file diff --git a/translated/tech/Learn with Linux/Learn with Linux--Learning Music.md b/translated/tech/Learn with Linux/Learn with Linux--Learning Music.md new file mode 100644 index 0000000000..c732344a19 --- /dev/null +++ b/translated/tech/Learn with Linux/Learn with Linux--Learning Music.md @@ -0,0 +1,153 @@ +Linux 教学之教你玩音乐 +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-featured.png) + +[Linux 学习系列][1]的所有文章: + +- [Linux 教学之教你练打字][2] +- [Linux 教学之物理模拟][3] +- [Linux 教学之教你玩音乐][4] +- [Linux 教学之两款地理软件][5] +- [Linux 教学之掌握数学][6] + +引言:Linux 提供大量的教学软件和工具,面向各个年级段以及年龄段,提供大量学科的练习实践,其中大多数是可以与用户进行交互的。本“Linux 教学”系列就来介绍一些教学软件。 + 
+学习音乐是一个很好的消遣方式。训练你的耳朵能识别音阶与和弦、掌握一门乐器、控制自己的嗓音,这些都需要大量的练习,以及会遇到很多困难。音乐理论非常博大精深,有太多东西需要记忆,你需要非常勤奋才能讲这些东西变成你的“技术”。在你的音乐之路上,Linux 提供了杰出的软件来帮助你前行。它们不能让你立刻成为一个音乐家,但可以作为一个降低学习难度的好助手。 + +### Gnu Solfège ### + +[Solfège][7] 是一个世界流行的音乐教学工具,适用于各个级别的音乐教育。很多流行的教学方法(比如著名的柯达伊教学法)就使用 Solfège 作为它们的基础。相比于学到音乐知识,Solfège 更关注于让用户不断练习音乐。它假想的用户是那些已经有一些音乐基础,并且想不断练习音乐技巧的学生。 + +以下是 GNU 网站的开发者声明: + +> “当你在高校、学院、音乐学校中学习音乐,你一般要进行的一些听力训练,比如视唱,会比较简单,但是通常需要两个人配合,一个问,一个答。[...] GNU Solfège 尝试着解决这个问题,你可以在没有其他人的帮助下完成更多的简单机械式练习。只是别忘了这些练习只是整个音乐训练过程的一部分。” + +这款软件兑现了它的承诺,你可以在试听帮手的帮助下练习几乎所有音乐技巧。 + +Debian 和 Ubuntu 的远端库上有这款软件,在终端运行下面命令安装软件: + + sudo apt-get install solfege + +它开启的时候会出现一个简单的开始界面。 + +![learnmusic-solfege-main](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-main.png) + +这些选项几乎包含了所有种类,大多数链接里面都有子类,你可以从中选择独立的练习。 + +![learnmusic-solfege-scales](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-scales.png) + +![learnmusic-solfege-hun](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-hun.png) + +软件提供多种练习和测试项目,都能通过外接的 MIDI 设备(LCTT 译注:MIDI,Musical Instrument Digital Interface,乐器数字接口)或者声卡来播放音乐。这些练习还配合音符播放,以及支持慢动作回放功能。 + +很重要的一点是如果你在 Ubuntu 下使用 Solfège,默认情况下你可能没法听到声音(除非你有外接 MIDI 设备)。如果出现了这种情况,点击“File -> Prefernces -> Sound Setup”,选择合适的设备(一般情况下选 ALSA 都能解决问题)。 + +![learnmusic-solfege-midi](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-midi.png) + +Solfège 对你的日常练习非常有帮助,经常使用它,可以在你开始唱 do-re-mi 之前练好你的音乐听觉。 + +### Tete (听力训练) ### + +[Tete][8] (这款听力训练软件)是一款简单但有效的 JAVA 软件,用于[训练听力][9]。它通过在不同背景下播放不同和弦以及不同 MIDI 声音来训练你分辨不同的音阶。[从 SourceForge 下载][10],然后解压它。 + + unzip Tete-* + +进入解压出来的目录: + + cd Tete-* + +这里假设你的系统已经安装好了 JAVA,你可以使用下面的命令执行 Java 文件: + + java -jar Tete-[版本号] + +(可以在输入“Tete-”后按 Tab 键进行自动补全。) + +Tete 只有一个简单的界面,所有内容都在这里了。 + +![learnmusic-tete-main](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-tete-main.png) + +你可以选择表演音阶(见上图),和弦(下图), + 
+![learnmusic-tete-chords](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-tete-chords.png) + +或音程。 + +![learnmusic-tete-intervals](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-tete-intervals.png) + +你可以“精调”很多选项,包括 midi 乐器的声音、提升或降低音阶以及回放的快慢等等。SourceForge 网站上有关于 Tete 的非常有用的教程,介绍了这个软件的各个方面。 + +### JalMus ### + +Jalmus 是用 JAVA 写的键盘音符阅读训练器。可以外接 MIDI 键盘,也可以使用虚拟键盘。它提供很多简单的课程练习来训练你的音符阅读能力。这个软件在2013年之后就不再更新了,但还是比较实用的。 + +进入[sourceforge 页面][11]下载最后版本(v2.3)的 JAVA 安装器,或者在终端输入下面的命令下载: + + wget http://garr.dl.sourceforge.net/project/jalmus/Jalmus-2.3/installjalmus23.jar + +下载完成后,加载安装器: + + java -jar installjalmus23.jar + +跨平台的 JAVA 安装器会一步一步引导你完成安装的。 + +Jalmus 的主界面非常朴素。 + +![learnmusic-jalmus-main](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-main.jpg) + +你可以在“Lessons”菜单中找到各种不同难度的课程,从非常简单(一行音符从左边向右滑过,键盘上相应的按键会高亮显示), + +![learnmusic-jalmus-singlenote](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-singlenote.png) + +到非常困难(有多行音符从右向左滑过,你需要按顺序键入音符)。 + +![learnmusic-jalmus-multinote](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-multinote.png) + +Jalmus 也包含一些训练,内容和课程相似,只是没有那些视觉上的提示了。当完成训练后,屏幕上会显示你的乐谱。它还提供不同难度的节拍训练,你能听到看到这些训练里面播放的旋律。在多行乐谱同时播放时,一个节拍器(能听见能看见)可以帮你理解 + +![learnmusic-jalmus-rhythm](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-rhythm.png) + +和阅读乐谱。(LCTT 写给王老板的话:我特么实在编不下去了,这段你得帮我改改。) + +![learnmusic-jalmus-score](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-score.png) + +所有这些功能都是可配置的,你可以选择打开或者关闭它们。 + +总的来说,Jalmus 可能是节奏训练软件中属于功能最强的,虽然它不是学音乐必备的软件,但在节奏训练这个特殊的领域,它做得很出色。 + +### 号外 ### + +#### TuxGuitar #### + +对于吉他练习者,[TuxGuitar][12] 看起来很像 Windows 下面的 Guitar Pro 软件(它也可以读 Guitar Pro 格式的文件)。 + +#### PianoBooster #### +[Piano Booster][13] 可以练习钢琴技巧,它能播放 MIDI 文件,你可以使用外接键盘来弹钢琴,同时还能查看屏幕上滑过的乐谱。 + +### 总结 ### + +Linux 
提供很多优秀的工具供你学习,如果你对音乐感兴趣,你完全不用担心没有软件能帮你练习音乐技术。实际上,可供学习音乐的学生选择的优秀软件数量远比上面介绍的要多。如果你还知道其他的音乐训练软件,请在写下你的评论,让我们能够知道。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/linux-learning-music/ + +作者:[Attila Orosz][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:https://www.maketecheasier.com/series/learn-with-linux/ +[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ +[3]:https://www.maketecheasier.com/linux-physics-simulation/ +[4]:https://www.maketecheasier.com/linux-learning-music/ +[5]:https://www.maketecheasier.com/linux-geography-apps/ +[6]:https://www.maketecheasier.com/learn-linux-maths/ +[7]:https://en.wikipedia.org/wiki/Solf%C3%A8ge +[8]:http://tete.sourceforge.net/index.shtml +[9]:https://en.wikipedia.org/wiki/Ear_training +[10]:http://sourceforge.net/projects/tete/files/latest/download +[11]:http://sourceforge.net/projects/jalmus/files/Jalmus-2.3/ +[12]:http://tuxguitar.herac.com.ar/ +[13]:http://www.linuxlinks.com/article/20090517041840856/PianoBooster.html diff --git a/translated/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md b/translated/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md new file mode 100644 index 0000000000..8ce6b052af --- /dev/null +++ b/translated/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md @@ -0,0 +1,99 @@ +Linux 教学之两款地理软件 +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-featured.png) + +[Linux 学习系列][1]的所有文章: + +- [Linux 教学之教你练打字][2] +- [Linux 教学之物理模拟][3] +- [Linux 教学之教你玩音乐][4] +- [Linux 教学之两款地理软件][5] +- [Linux 教学之掌握数学][6] + +引言:Linux 提供大量的教学软件和工具,面向各个年级段以及年龄段,提供大量学科的练习实践,其中大多数是可以与用户进行交互的。本“Linux 教学”系列就来介绍一些教学软件。 + 
地理是一门有趣的学科,我们每天都能接触到,虽然可能没有意识到,但当你打开 GPS、SatNav 或谷歌地图时,你就已经在使用这些软件提供的地理数据了;当你在新闻中看到一个国家的消息或听到一些金融数据时,这些信息都可以归于地理学范畴。Linux 提供了很多学习地理学的软件,可用于教学,也可用于自学。
+
+### Kgeography ###
+
+在多数 Linux 发行版的软件库中,只有两个与地理有关的软件,两个都属于 KDE 阵营,或者说都属于 KDE 教育项目。Kgeography 使用简单的彩色编码地图来展示被选中的国家。
+
+Ubuntu 及衍生版在终端执行下面命令安装软件:
+
+    sudo apt-get install kgeography
+
+界面很简单,给你一个选择界面,你可以选择不同的国家。
+
+![learn-geography-kgeo-pick](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-pick.png)
+
+点击地图上的某个区域,界面就会显示这个区域所在的国家和首都。
+
+![learn-geography-kgeo-brit](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-brit.png)
+
+以及给出不同的测试题来检测你的知识水平。
+
+![learn-geography-kgeo-test](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-test.png)
+
+这款软件以交互的方式测试你的地理知识,并且可以帮你为考试做好充足的准备。
+
+### Marble ###
+
+Marble 是一个稍微高级一点的软件,无需 3D 加速就能提供全球视角。
+
+![learn-geography-marble-main](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-main.png)
+
+在 Ubuntu 及衍生版的终端输入下面的命令来安装 Marble:
+
+    sudo apt-get install marble
+
+Marble 专注于地图绘制,它的主界面就是一张地图。
+
+![learn-geography-marble-atlas](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-atlas.jpg)
+
+你可以选择不同的投影方法,比如球状投影和麦卡托投影(LCTT 译注:把地球表面绘制在平面上的方法),在下拉菜单里你可以选择平面视角或外部视角,包括 Atlas 视角,OpenStreetMap 提供的成熟的离线地图,
+
+![learn-geography-marble-map](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-map.jpg)
+
+以及卫星视角(由 NASA 提供),
+
+![learn-geography-marble-satellite](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-satellite.jpg)
+
+以及政治上甚至是历史上的世界地图。
+
+![learn-geography-marble-history](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-history.jpg)
+
+除了有包含不同界面和大量数据的离线地图,Marble 还提供其他信息。你可以在菜单中打开或关闭不同的离线信息框(info boxes)
+
+![learn-geography-marble-offline](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-offline.png)
+
+以及在线服务(online services)。
+
![learn-geography-marble-online](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-online.png)
+
+一个有趣的在线服务是维基百科,点击一下 Wiki 图标,会弹出一个界面来展示你选中区域的详细信息。
+
+![learn-geography-marble-wiki](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-wiki.png)
+
+这款软件还提供定位追踪、路由规划、位置搜索和其他有用的功能。如果你喜欢地图学,Marble 可以让你长时间享受探索和学习的乐趣。
+
+### 总结 ###
+
+Linux 提供大量优秀的教育软件,当然也包括地理学科。本文介绍的两款软件可以帮你学到很多地理知识,并且你可以以一种好玩的人机交互方式来测试你的知识量。
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/linux-geography-apps/
+
+作者:[Attila Orosz][a]
+译者:[bazz2](https://github.com/bazz2)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/attilaorosz/
+[1]:https://www.maketecheasier.com/series/learn-with-linux/
+[2]:https://www.maketecheasier.com/learn-to-type-in-linux/
+[3]:https://www.maketecheasier.com/linux-physics-simulation/
+[4]:https://www.maketecheasier.com/linux-learning-music/
+[5]:https://www.maketecheasier.com/linux-geography-apps/
+[6]:https://www.maketecheasier.com/learn-linux-maths/ \ No newline at end of file
diff --git a/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md
new file mode 100644
index 0000000000..b539b9a4a8
--- /dev/null
+++ b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 1--HowTo--Use grep Command In Linux or UNIX--Examples.md
@@ -0,0 +1,143 @@
+grep 命令用于在文件中搜索匹配给定字符或单词模式的行。一般来说,grep 会显示匹配到的行;使用正则表达式时,则只显示符合该模式的行。grep 可以说是 Linux 和 Unix 系统上最有用的命令之一。
+
+### 你知道吗? ###
+
+grep 这个名字,来源于 Unix/Linux 文本编辑器 ed 中一条功能与之相似的命令:
+
+    g/re/p
+
+### grep 语法 ###
+
+语法如下所示:
+
+    grep 'word' 
filename
+    grep 'word' file1 file2 file3
+    grep 'string1 string2' filename
+    cat otherfile | grep 'something'
+    command | grep 'something'
+    command option1 | grep 'data'
+    grep --color 'data' fileName
+
+### 怎么用 grep 命令搜索一个文件 ###
+
+在 /etc/passwd 文件中搜索 boo 用户,输入:
+
+    $ grep boo /etc/passwd
+
+输出示例:
+
+    foo:x:1000:1000:foo,,,:/home/foo:/bin/ksh
+
+你可以使用 -i 选项强制 grep 忽略大小写,即匹配 boo、Boo、BOO 等任意组合:
+
+    $ grep -i "boo" /etc/passwd
+
+### 递归使用 grep ###
+
+你可以递归地使用 grep,例如,在所有文件及目录中搜索包含字符串 192.168.1.5 的文件:
+
+    $ grep -r "192.168.1.5" /etc/
+
+或者:
+
+    $ grep -R "192.168.1.5" /etc/
+
+输出示例:
+
+    /etc/ppp/options:# ms-wins 192.168.1.50
+    /etc/ppp/options:# ms-wins 192.168.1.51
+    /etc/NetworkManager/system-connections/Wired connection 1:addresses1=192.168.1.5;24;192.168.1.2;
+
+你会看到匹配到 192.168.1.5 的每一行都以对应的文件名作为前缀显示。如果不想在输出中显示文件名前缀,可以使用 -h 选项:
+
+    $ grep -h -R "192.168.1.5" /etc/
+
+或者:
+
+    $ grep -hR "192.168.1.5" /etc/
+
+输出示例:
+
+    # ms-wins 192.168.1.50
+    # ms-wins 192.168.1.51
+    addresses1=192.168.1.5;24;192.168.1.2;
+
+### 使用 grep 搜索单词 ###
+
+当你搜索 boo 时,grep 也会匹配 fooboo、boo123、barfoo35 等包含 boo 的字符串。使用 -w 选项可以强制 grep 只挑选那些包含完整单词的行:
+
+    $ grep -w "boo" file
+
+### 使用 egrep 搜索两个不同的单词 ###
+
+使用 egrep 命令:
+
+    $ egrep -w 'word1|word2' /path/to/file
+
+### 统计文本匹配的行数 ###
+
+grep 可以通过 -c 选项报告每个文件中匹配到的行数:
+
+    $ grep -c 'word' /path/to/file
+
+加上 -n 选项,可以在输出的每一行前面显示它在文件中的行号:
+
+    $ grep -n 'root' /etc/passwd
+
+输出示例:
+
+    1:root:x:0:0:root:/root:/bin/bash
+    1042:rootdoor:x:0:0:rootdoor:/home/rootdoor:/bin/csh
+    3319:initrootapp:x:0:0:initrootapp:/home/initroot:/bin/ksh
+
+### 反转匹配 ###
+
+使用 -v 选项可以打印不匹配的内容,即只显示那些不包含给定单词的行。例如,显示所有不包含 bar 的行:
+
+    $ grep -v bar /path/to/file
+
+### UNIX / Linux 管道和 grep 命令 ###
+
+grep 经常和管道一起使用。本例中,显示硬盘设备相关的内容:
+
+    # dmesg | egrep '(s|h)d[a-z]'
+
+显示 CPU 型号:
+
+    # cat /proc/cpuinfo | grep -i 'Model'
+
+当然,也可以不用管道,直接对文件使用 grep:
+
+    # grep -i 'Model' /proc/cpuinfo
+
+输出示例:
+
+    model : 30
+    model name : Intel(R) Core(TM) i7 CPU Q 820 @ 1.73GHz
+    model : 30
+    model name : Intel(R) Core(TM) i7 CPU Q 820 @ 1.73GHz
+
+### 如何只显示匹配到内容的文件名? 
###
+
+使用 -l 选项,可以只显示那些内容中匹配到 main 的文件的文件名:
+
+    $ grep -l 'main' *.c
+
+最后,可以让 grep 以彩色高亮显示匹配到的模式:
+
+    $ grep --color vivek /etc/passwd
+
+输出示例:
+
+![Grep command in action](http://files.cyberciti.biz/uploads/faq/2007/08/grep_command_examples.png)
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
+
+作者:Vivek Gite
+译者:[zky001](https://github.com/zky001)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://bash.cyberciti.biz/guide/Pipes \ No newline at end of file
diff --git a/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md
new file mode 100755
index 0000000000..8389f4c339
--- /dev/null
+++ b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 2--Regular Expressions In grep.md
@@ -0,0 +1,288 @@
+grep 中的正则表达式
+================================================================================
+在 Linux、类 Unix 系统中,我该如何使用 grep 命令的正则表达式呢?
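在进入细节之前,先看一个可以直接复现的小例子(其中的 demo.txt 是为演示而假设的文件名):

```shell
# 构造一个演示文件,然后用锚定符 ^ 加 -i 选项做不区分大小写的匹配
printf 'vivek:x:1000\nVivek2:x:1001\nroot:x:0\n' > demo.txt

# 匹配以 vivek 开头的行(-i 忽略大小写),前两行会被匹配到
grep -i '^vivek' demo.txt
```

下文会逐一展开这些正则表达式的写法。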
Linux 附带有 GNU grep 命令工具,它支持正则表达式,而且 GNU grep 在所有的 Linux 系统中都是默认有的。grep 命令被用于搜索定位存储在您服务器或工作站上的信息。
+
+### 正则表达式 ###
+
+正则表达式仅仅是对每个输入行进行匹配的一种模式,即对字符序列的匹配模式。下面是范例:
+
+    ^w1
+    w1|w2
+    [^ ]
+
+#### grep 正则表达式示例 ####
+
+在 /etc/passwd 文件中搜索 'vivek':
+
+    grep vivek /etc/passwd
+
+输出例子:
+
+    vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
+    vivekgite:x:1001:1001::/home/vivekgite:/bin/sh
+    gitevivek:x:1002:1002::/home/gitevivek:/bin/sh
+
+搜索任意大小写形式的 vivek(即不区分大小写的搜索):
+
+    grep -i -w vivek /etc/passwd
+
+搜索任意大小写形式的 vivek 或 raj:
+
+    grep -E -i -w 'vivek|raj' /etc/passwd
+
+上面最后的例子显示的,就是一个扩展正则表达式的模式。
+
+### 锚定 ###
+
+你可以分别使用 ^ 和 $ 符号来匹配输入行的开始或结尾。下面的例子搜索显示仅仅以 vivek 开始的输入行:
+
+    grep ^vivek /etc/passwd
+
+输出例子:
+
+    vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
+    vivekgite:x:1001:1001::/home/vivekgite:/bin/sh
+
+你也可以只搜索以单词 vivek 开始的行,即不显示 vivekgite、vivekg 等:
+
+    grep -w ^vivek /etc/passwd
+
+找出以单词 foo 结尾的行:
+
+    grep 'foo$' 文件名
+
+匹配仅仅只包含 foo 的行:
+
+    grep '^foo$' 文件名
+
+如下所示的例子可以搜索空行:
+
+    grep '^$' 文件名
+
+### 字符类 ###
+
+匹配 Vivek 或 vivek:
+
+    grep '[vV]ivek' 文件名
+
+或者
+
+    grep '[vV][iI][Vv][Ee][kK]' 文件名
+
+也可以同时匹配数字(即匹配 vivek1 或 Vivek2 等等):
+
+    grep -w '[vV]ivek[0-9]' 文件名
+
+可以匹配两个数字字符(即 foo11、foo12 等):
+
+    grep 'foo[0-9][0-9]' 文件名
+
+不仅仅局限于数字,也能匹配至少一个字母的:
+
+    grep '[A-Za-z]' 文件名
+
+显示含有 "w" 或 "n" 字符的所有行:
+
+    grep [wn] 文件名
+
+在方括号内的表达式,即包在 "[:" 和 ":]" 之间的字符类的名字,表示的是属于此类的所有字符列表。标准的字符类名称如下:
+
+- [:alnum:] - 字母数字字符。
+- [:alpha:] - 字母字符。
+- [:blank:] - 空白字符:空格和制表符。
+- [:digit:] - 数字:'0 1 2 3 4 5 6 7 8 9'。
+- [:lower:] - 小写字母:'a b c d e f g h i j k l m n o p q r s t u v w x y z'。
+- [:space:] - 空格字符:制表符、换行符、垂直制表符、换页符、回车符和空格。
+- [:upper:] - 大写字母:'A B C D E F G H I J K L M N O P Q R S T U V W X Y Z'。
+
+例子所示的是匹配所有大写字母:
+
+    grep '[[:upper:]]' 文件名
+
+### 通配符 ###
+
+你可以使用 "." 
来匹配单个字符。例子中匹配以 "b" 开头、以 "t" 结尾的 3 个字符的单词:
+
+    grep '\<b.t\>' 文件名
+
+在这里,
+
+- \< 匹配单词前面的空字符串
+- \> 匹配单词后面的空字符串
+
+打印出只有两个字符的所有行:
+
+    grep '^..$' 文件名
+
+显示以一个点和一个数字开头的行:
+
+    grep '^\.[0-9]' 文件名
+
+#### 点号转义 ####
+
+下面这个要匹配 IP 地址 192.168.1.254 的正则表达式是不会按预期工作的:
+
+    egrep '192.168.1.254' /etc/hosts
+
+三个点符号都需要转义:
+
+    grep '192\.168\.1\.254' /etc/hosts
+
+下面的例子仅仅匹配出 IP 地址:
+
+    egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' 文件名
+
+下面的例子会匹配任意大小写的 Linux 或 UNIX 这两个单词:
+
+    egrep -i '^(linux|unix)' 文件名
+
+### 怎么样搜索以 - 符号开头的匹配模式? ###
+
+要搜索匹配 '--test--' 字符串,需要使用 -e 选项。如果不使用 -e 选项,grep 命令会试图把 '--test--' 当作自己的选项参数来解析:
+
+    grep -e '--test--' 文件名
+
+### 怎么使用 grep 的 OR 匹配? ###
+
+使用如下的语法(扩展正则表达式):
+
+    grep -E 'word1|word2' 文件名
+
+或者是(基本正则表达式需要反斜线):
+
+    grep 'word1\|word2' 文件名
+
+### 怎么使用 grep 的 AND 匹配? ###
+
+使用下面的语法来显示既包含 'word1' 又包含 'word2' 的所有行:
+
+    grep 'word1' 文件名 | grep 'word2'
+
+### 怎么样使用重复次数限定? ###
+
+使用如下的语法,您可以指定一个字符重复出现的次数:
+
+    {N}
+    {N,}
+    {min,max}
+
+要匹配字符 "v" 出现两次:
+
+    egrep "v{2}" 文件名
+
+下面的命令能匹配到 "col" 和 "cool":
+
+    egrep 'co{1,2}l' 文件名
+
+下面的命令将会匹配出至少有三个连续 'c' 字符的所有行:
+
+    egrep 'c{3,}' 文件名
+
+下面的例子会匹配 91-1234567890(即两个数字-十个数字)这种格式的手机号:
+
+    grep "[[:digit:]]\{2\}[ -]\?[[:digit:]]\{10\}" 文件名
+
+### 怎么样使 grep 命令突出显示? ###
+
+使用如下的语法:
+
+    grep --color regex 文件名
+
+### 怎么样仅仅只显示匹配出的字符,而不是匹配出的行? ###
+
+使用如下语法:
+
+    grep -o regex 文件名
+
+### 正则表达式限定符 ###
+
+注:表格
+
+| 限定符 | 描述 |
+| :--- | :--- |
+| . | 匹配任意的一个字符。 |
+| ? | 匹配前面的子表达式,最多一次。 |
+| * | 匹配前面的子表达式零次或多次。 |
+| + | 匹配前面的子表达式一次或多次。 |
+| {N} | 匹配前面的子表达式 N 次。 |
+| {N,} | 匹配前面的子表达式至少 N 次。 |
+| {N,M} | 匹配前面的子表达式 N 到 M 次,至少 N 次至多 M 次。 |
+| - | 只要不是出现在列表的开始、结尾或者连接点上,就表示一个范围。 |
+| ^ | 匹配一行开始的空字符串;也表示字符不在要匹配的列表中。 |
+| $ | 匹配一行末尾的空字符串。 |
+| \b | 匹配一个单词前后的空字符串。 |
+| \B | 匹配一个单词中间的空字符串。 |
+| \< | 匹配单词前面的空字符串。 |
+| \> | 匹配单词后面的空字符串。 |
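上表中的限定符可以用下面这个小演示来验证(其中的 demo.txt 是为演示而假设的文件名):

```shell
# co{1,2}l 表示:c 后面跟 1 到 2 个 o,再跟一个 l
# 因此可以匹配 col 和 cool,但不能匹配 cooool
printf 'col\ncool\ncooool\n' > demo.txt
egrep 'co{1,2}l' demo.txt
```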
+
+#### grep 和 egrep ####
+
+egrep 跟 **grep -E** 是一样的,它会把模式当作扩展正则表达式来解释。下面引自 grep 的帮助页(man):
+
+    基本正则表达式的元字符 ?、+、{、|、( 和 ) 已经失去了它们特殊的意义,要使用的话请用加反斜线的版本 \?、\+、\{、\|、\( 和 \) 来代替。
+    传统的 egrep 不支持 { 元字符,一些 egrep 的实现是以 \{ 替代的,所以可移植的脚本应该避免在 grep -E 中使用 { 符号,要匹配字面的 { 应该使用 [{]。
+    GNU grep -E 试图支持传统的用法,它会假定:如果 { 出现在无效的间隔规范字符串之前,那么它就不是特殊字符。
+    例如,grep -E '{1' 命令搜索包含 {1 这两个字符的串,而不会报出正则表达式语法错误。
+    POSIX.2 标准允许这种操作的扩展,但在可移植脚本里应该避免这样使用。
+
+引用:
+
+- grep 和 regex 帮助手册页(7)
+- grep 的 info 页
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/grep-regular-expressions/
+
+作者:Vivek Gite
+译者:[runningwater](https://github.com/runningwater)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file
diff --git a/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md
new file mode 100644
index 0000000000..9af1afa163
--- /dev/null
+++ b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 3--Search Multiple Words or String Pattern Using grep Command.md
@@ -0,0 +1,41 @@
+使用 grep 命令来搜索多个单词/字符串模式
+================================================================================
+要使用 grep 命令来搜索多个字符串或单词,我们该怎么做?例如,我想要查找 /path/to/file 文件中的 word1、word2、word3 等单词,该如何让 grep 查找这些单词呢?
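先看一个可以直接复现的小演示(其中的 log.txt 是为演示而假设的文件名):

```shell
# 用 egrep 的 | 在同一个文件中同时搜索两个单词
printf 'error: disk full\nwarning: low memory\nall good\n' > log.txt
egrep -w 'warning|error' log.txt
```

前两行会被匹配到。下面介绍具体语法。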
+
+[grep 命令支持正则表达式][1]匹配模式。要搜索多个单词,请使用如下语法:
+
+    grep 'word1\|word2\|word3' /path/to/file
+
+下面的例子中,要在名为 /var/log/messages 的文本日志文件中查找 warning、error 和 critical 这几个单词,输入:
+
+    $ grep 'warning\|error\|critical' /var/log/messages
+
+只想精确匹配整个单词的话,可以加上 -w 选项参数:
+
+    $ grep -w 'warning\|error\|critical' /var/log/messages
+
+使用 egrep 命令则无需转义竖线,其语法格式如下:
+
+    $ egrep -w 'warning|error|critical' /var/log/messages
+
+我建议您加上 -i(忽略大小写)和 --color 选项参数,如下所示:
+
+    $ egrep -wi --color 'warning|error|critical' /var/log/messages
+
+输出示例:
+
+![Fig.01: Linux / Unix egrep Command Search Multiple Words Demo Output](http://s0.cyberciti.org/uploads/faq/2008/04/egrep-words-output.png)
+
+Fig.01: Linux / Unix egrep 命令查找多个单词输出例子
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/searching-multiple-words-string-using-grep/
+
+作者:Vivek Gite
+译者:[runningwater](https://github.com/runningwater)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://www.cyberciti.biz/faq/grep-regular-expressions/
\ No newline at end of file
diff --git a/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 4--Grep Count Lines If a String or Word Matches.md b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 4--Grep Count Lines If a String or Word Matches.md
new file mode 100644
index 0000000000..63cb1aa189
--- /dev/null
+++ b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 4--Grep Count Lines If a String or Word Matches.md
@@ -0,0 +1,33 @@
+Grep 命令统计匹配的字符串/单词行数
+================================================================================
+在 Linux 或 UNIX 操作系统下,对于给定的单词或字符串,我们应该怎么统计它们在每个输入文件中出现的行数呢?
+
+您需要通过添加 -c 或者 --count 选项参数来抑制正常的输出,改为显示每个输入文件中匹配的行数,如下所示:
+
+    $ grep -c vivek /etc/passwd
+
+或者
+
+    $ grep -w -c vivek /etc/passwd
+
+输出的示例:
+
+    1
+
+相反的,使用 -v 或者 --invert-match 选项参数可以统计出不匹配的行数,键入:
+
+    $ grep -c -v vivek /etc/passwd
+
+输出的示例:
+
+    45
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/grep-count-lines-if-a-string-word-matches/
+
+作者:Vivek Gite
+译者:[runningwater](https://github.com/runningwater)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file
diff --git a/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 6--How To Find Files by Content Under UNIX.md b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 6--How To Find Files by Content Under UNIX.md
new file mode 100644
index 0000000000..9bf698136a
--- /dev/null
+++ b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 6--How To Find Files by Content Under UNIX.md
@@ -0,0 +1,83 @@
+如何在 UNIX 中根据文件内容查找文件
+================================================================================
+为了完成课程作业,我写了很多 C 语言代码,并把它们保存在 /home/user/c/ 目录下的 *.c 和 *.h 文件中。那么在 UNIX shell 窗口中,我如何能通过文件内容(例如字符串或函数名 main() 这样的单词)来查找文件呢?
+
+你需要用到以下工具:
+
+[a] **grep 命令**:输出匹配模式的行。
+
+[b] **find 命令**:在目录层次中查找文件。
+
+### [使用 grep 命令根据内容查找文件][1] ###
+
+输入以下命令:
+
+    grep 'string' *.txt
+    grep 'main(' *.c
+    grep '#include' *.c
+    grep 'getChar*' *.c
+    grep -i 'ultra' *.conf
+    grep -iR 'ultra' *.conf
+
+其中
+
+- **-i** : 忽略模式(可同时匹配 valid、VALID、ValID 等字符串)和输入文件(可同时匹配 file.c、FILE.c、FILE.C)的大小写。
+- **-R** : 递归读取每个目录下的所有文件。
+
+### 高亮匹配到的模式 ###
+
+在搜索大量文件的时候你可以轻松地高亮匹配到的模式:
+
+    $ grep --color=auto -iR 'getChar();' *.c
+
+### 为查找到的模式显示文件名和行号 ###
+
+你也许需要显示文件名和行号:
+
+    $ grep --color=auto -iRnH 'getChar();' *.c
+
+其中,
+
+- **-n** : 在输出的每行前面加上它在文件中的行号(从 1 开始)。
+- **-H** : 为每个匹配打印文件名。要搜索多个文件时这是默认行为。
+
+    $ grep --color=auto -nH 'DIR' *
+
+输出样例:
+
+![Fig.01: grep 命令显示搜索到的模式](http://www.cyberciti.biz/faq/wp-content/uploads/2008/09/grep-command.png)
+
+Fig.01: grep 命令显示搜索到的模式
+
+你也可以使用 find 命令:
+
+    $ find . 
-name "*.c" -print | xargs grep "main("
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/unix-linux-finding-files-by-content/
+
+作者:Vivek Gite
+译者:[ictlyh](http://mutouxiaogui.cn/blog/)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://www.cyberciti.biz/faq/howto-search-find-file-for-text-string/
\ No newline at end of file
diff --git a/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 7--Linux or UNIX View Only Configuration File Directives Uncommented Lines of a Config File.md b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 7--Linux or UNIX View Only Configuration File Directives Uncommented Lines of a Config File.md
new file mode 100644
index 0000000000..9de5e1d64d
--- /dev/null
+++ b/translated/tech/Linux or UNIX grep Command Tutorial series/20151127 Linux or UNIX grep Command Tutorial series 7--Linux or UNIX View Only Configuration File Directives Uncommented Lines of a Config File.md
@@ -0,0 +1,152 @@
+Linux / UNIX 下只查看配置文件的有效配置行(即未被注释的指令行)
+=========================================================
+
+大多数 Linux 和类 Unix 系统的配置文件中都有许多注释行,但是有时候我只想看其中的有效配置行。那我怎么才能只看到 squid.conf 或 httpd.conf 这样的配置文件中的非注释行呢?怎么去掉这些注释或者空行呢?
+
+我们可以使用 UNIX / BSD / OS X / Linux 这些操作系统自身提供的 grep、sed、awk、perl 或者其他文本处理工具来查看配置文件中的有效配置行。
+
+### grep 命令示例——去掉注释 ###
+
+可以按照如下示例使用 grep 命令:
+
+    $ grep -v "^#" /path/to/config/file
+    $ grep -v "^#" /etc/apache2/apache2.conf
+
+示例输出(取自 Debian/Ubuntu 默认的 apache2.conf):
+
+    ServerRoot "/etc/apache2"
+
+    LockFile /var/lock/apache2/accept.lock
+
+    PidFile ${APACHE_PID_FILE}
+
+    Timeout 300
+
+    KeepAlive On
+
+    MaxKeepAliveRequests 100
+
+    KeepAliveTimeout 15
+
+    <IfModule mpm_prefork_module>
+        StartServers 5
+        MinSpareServers 5
+        MaxSpareServers 10
+        MaxClients 150
+        MaxRequestsPerChild 0
+    </IfModule>
+
+    <IfModule mpm_worker_module>
+        StartServers 2
+        MinSpareThreads 25
+        MaxSpareThreads 75
+        ThreadLimit 64
+        ThreadsPerChild 25
+        MaxClients 150
+        MaxRequestsPerChild 0
+    </IfModule>
+
+    <IfModule mpm_event_module>
+        StartServers 2
+        MaxClients 150
+        MinSpareThreads 25
+        MaxSpareThreads 75
+        ThreadLimit 64
+        ThreadsPerChild 25
+        MaxRequestsPerChild 0
+    </IfModule>
+
+    User ${APACHE_RUN_USER}
+    Group ${APACHE_RUN_GROUP}
+
+    AccessFileName .htaccess
+
+    <Files ~ "^\.ht">
+        Order allow,deny
+        Deny from all
+        Satisfy all
+    </Files>
+
+    DefaultType text/plain
+
+    HostnameLookups Off
+
+    ErrorLog /var/log/apache2/error.log
+
+    LogLevel warn
+
+    Include /etc/apache2/mods-enabled/*.load
+    Include /etc/apache2/mods-enabled/*.conf
+
+    Include /etc/apache2/httpd.conf
+
+    Include /etc/apache2/ports.conf
+
+    LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
+    LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
+    LogFormat "%h %l %u %t \"%r\" %>s %O" common
+    LogFormat "%{Referer}i -> %U" referer
+    LogFormat "%{User-agent}i" agent
+
+    CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined
+
+    Include /etc/apache2/conf.d/
+
+    Include /etc/apache2/sites-enabled/
+
+想要连空行也跳过,可以使用 [egrep 命令][1],示例:
+
+    egrep -v "^#|^$" /etc/apache2/apache2.conf
+    ## 或者通过管道交给 more、less 这样的分页程序 ##
+    egrep -v "^#|^$" /etc/apache2/apache2.conf | less
+
+    ## Bash 函数 ##########################################
+    ## 也可以创建如下的函数或别名,按这种方式使用: ##
+    ## 
viewconfig /etc/squid/squid.conf ##
+    #######################################################
+    viewconfig(){
+        local f="$1"
+        [ -f "$f" ] && command egrep -v "^#|^$" "$f" || echo "Error: $1 file not found."
+    }
+
+示例输出:
+
+![Fig.01: Unix/Linux Egrep Strip Out Comments Blank Lines](http://s0.cyberciti.org/uploads/faq/2008/05/grep-strip-out-comments-blank-lines.jpg)
+
+Fig.01: Unix/Linux Egrep 除去注释行和空行
+
+### 理解 grep/egrep 命令行选项 ###
+
+-v 选项用于选出不匹配的行,它在所有符合 POSIX 的系统上都可用。正则表达式 ^$ 匹配空行,^# 匹配以 # 开头的注释行;配合 -v,即可把这两类行都过滤掉。
+
+### sed 命令示例 ###
+
+可以按照如下示例使用 GNU / sed 命令:
+
+    $ sed '/ *#/d; /^ *$/d' /path/to/file
+    $ sed '/ *#/d; /^ *$/d' /etc/apache2/apache2.conf
+
+GNU 或 BSD 的 sed 也可以直接修改配置文件本身。下面的语法会就地编辑文件,并按给出的扩展名(比如 .bak)先备份原文件:
+
+    sed -i'.bak.2015.12.27' '/ *#/d; /^ *$/d' /etc/apache2/apache2.conf
+
+更多信息见参考手册 - [grep(1)][2]、[sed(1)][3]
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/shell-display-uncommented-lines-only/
+
+作者:Vivek Gite
+译者:[sonofelice](https://github.com/sonofelice)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://www.cyberciti.biz/faq/grep-regular-expressions/
+[2]:http://www.manpager.com/linux/man1/grep.1.html
+[3]:http://www.manpager.com/linux/man1/sed.1.html
\ No newline at end of file
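上面第 7 篇介绍的去掉注释行和空行的方法,可以用一个小例子快速验证。下面的示例配置文件路径和内容都是为演示而假设的:

```shell
# 构造一个带注释和空行的示例配置文件(内容为假设的演示数据)
cat > /tmp/lctt_demo.conf <<'EOF'
# 这是一行注释
Timeout 300

KeepAlive On
# 又是一行注释
EOF

# 去掉注释行和空行后,只剩两条有效配置
egrep -v '^#|^$' /tmp/lctt_demo.conf
```

对比 `grep -v '^#'` 和 `egrep -v '^#|^$'` 的输出,就能看出加上 `^$` 分支后空行也被一并过滤了。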