Merge branch 'HEAD' of git@github.com:LCTT/TranslateProject.git

This commit is contained in:
ictlyh 2016-07-03 14:19:47 +08:00
commit b6923d130c
99 changed files with 5071 additions and 2217 deletions

Introduction
-------------------------------
LCTT is the translation team of Linux.cn ([https://linux.cn/](https://linux.cn/)), responsible for translating Linux-related technical articles, news, and essays from outstanding foreign media.
LCTT already has several hundred active members, and more Linux volunteers are welcome to join the team.
![logo](http://img.linux.net.cn/static/image/common/lctt_logo.png)
How LCTT Is Organized
-------------------------------
**Topic selectors** choose suitable content, convert the original articles to Markdown, and submit them to LCTT's [TranslateProject](https://github.com/LCTT/TranslateProject) repository.
**Translators** pick articles from the selected topics and translate them.
Join Us
-------------------------------
First, join the translation team's QQ group (group number 198889102; say you are a “volunteer” when applying). After joining, remember to change your group nickname to your GitHub ID.
New members should first read [WIKI: How to Get Started](https://github.com/LCTT/TranslateProject/wiki/01-如何开始).
Getting Started
-------------------------------
Please read the [WIKI](https://github.com/LCTT/TranslateProject/wiki).
History
-------------------------------
* 2013/09/10 The initiative was proposed and met with an enthusiastic response; the translation team was founded.
* 2013/09/11 Adopted GitHub for translation collaboration and began selecting and translating articles.
* 2013/09/16 After the team's founding was publicly announced, new members applied to join, and a probationary membership system was established.
* 2013/09/24 Since members' GitHub skills varied and errors easily crept into the main repository, the workflow was switched to the conventional fork + PR model.
* 2013/10/11 Created the Core Translators group based on contributions to LCTT; the first members were vito-L and tinyeyeser.
* 2013/10/12 Dropped the dependency on LINUX.CN accounts; GitHub IDs are now used in the QQ group and in articles.
* 2013/10/18 Officially launched the man page translation project.
* 2013/11/10 Held the first offline meetup, in Beijing.
* 2014/01/02 Added Core Translators member: geekpi.
* 2014/05/04 Switched to a new QQ group: 198889102.
* 2014/05/16 Added Core Translators members: will.qian and vizv.
* 2014/06/18 Promoted GOLinux to Core Translators for an astonishing translation pace and solid translation quality.
* 2014/09/09 LCTT turned one year old; published a one-year [review](http://linux.cn/article-3784-1.html) and grouped former CORE members into Senior to honor their contributions.
* 2014/10/08 Promoted bazz2 to Core Translators.
* 2014/11/04 Promoted zpl1025 to Core Translators.
* 2014/12/25 Promoted runningwater to Core Translators.
* 2015/04/19 Launched the LFS-BOOK-7.7-systemd project.
* 2015/06/09 Promoted ictlyh and dongfengweixiao to Core Translators.
* 2015/11/10 Promoted strugglingyouth, FSSlc, Vic020, and alim0x to Core Translators.
* 2016/05/09 Promoted PurlingNayuki to proofreader.
Active Members
-------------------------------
- CORE @dongfengweixiao,
- CORE @alim0x,
- Senior @DeadFire,
- Senior @reinoir222,
- Senior @tinyeyeser,
- Senior @vito-L,
- Senior @jasminepeng,
- Senior @willqian,
- Senior @vizv,
- ZTinoZ,
- martin2011qi,
- theo-l,
- Luoxcat,
- wi-cuckoo,
- disylee,
- haimingfg,
- wwy-hust,
- felixonmars,
- su-kaiyao,
- GHLandy,
- ivo-wang,
- cvsher,
- wyangsun,
- DongShuaike,
- blueabysm,
- boredivan,
- name1e5s,
- StdioA,
- yechunxiao19,
- l3b2w1,
- XLCYun,
- 1w2b3l,
- JonathanKang,
- crowner,
- dingdongnigetou,
- mtunique,
- CNprober,
- hyaocuk,
- szrlee,
- KnightJoker,
- Xuanwo,
- nd0104,
- Moelf,
- xiaoyu33,
- guodongxiaren,
- ynmlml,
- kylepeng93,
- vim-kakali,
- ggaaooppeenngg,
- Ricky-Gong,
- zky001,
- Flowsnow,
- lfzark,
- 213edu,
- Tanete,
- bestony,
- mudongliang,
- liuaiping,
- Timeszoro,
- rogetfan,
- itsang,
- JeffDing,
- Yuking-net,
- MikeCoder,
- zhangboyue,
- liaoishere,
- yupmoon,
- Medusar,
- zzlyzq,
- yujianxuechuan,
- ailurus1991,
- tomatoKiller,
- stduolc,
- shaohaolin,
- FineFan,
- kingname,
- CHINAANSHE,
(Top 100, ranked by lines added)

Active members of the LFS project:
- @KevinSJ
- @Yuking-net
Updated: 2016/06/20

Thank you all for your support!

LinuxQuestions Survey Results Surface Top Open Source Projects
===============================================
via: http://ostatic.com/blog/linuxquestions-survey-results-surface-top-open-source-projects
Author: [Sam Dean][a]
Translator: [Moelf](https://github.com/Moelf)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](http://linux.cn/).
[2]:http://www.linuxquestions.org/questions/linux-news-59/2014-linuxquestions-org-members-choice-award-winners-4175532948/
[3]:http://www.linuxquestions.org/questions/2014mca.php
[4]:http://ostatic.com/blog/lq-members-choice-award-winners-announced
[5]:http://www.linuxquestions.org/questions/2014mca.php

via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/
Author: [Avishek Kumar][a]
Translator: [Moelf](https://github.com/Moelf)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

via: http://www.linuxveda.com/2015/08/23/how-to-create-an-ap-in-ubuntu-15-04-to-connect-to-androidiphone/
Author: [Sayantan Das][a]
Translator: [Moelf](https://github.com/Moelf)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

via: [https://tlhp.cf/kde-history/](https://tlhp.cf/kde-history/)
Author: [Pavlo Rudyi][a]
Translator: [Moelf](https://github.com/Moelf)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

Top 5 Open Source Command-Line Shells for Linux
===============================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/terminal_blue_smoke_command_line_0.jpg?itok=u2mRRqOa)
There are two kinds of Linux users in the world: the adventurous and the cautious.
One kind instinctively tries any new option that scratches an itch. They have tinkered with countless window managers, distributions, and nearly every desktop widget they can find.
The other kind finds something they like and sticks with it. They tend to like their distribution's defaults. The first text editor they master becomes their favorite.
As a Linux user, on both desktop and server, for fifteen years, I am undoubtedly in the second camp rather than the first. I tend to use what ships out of the box, which means I can often find the documentation and examples I need for my use cases. If I did choose something non-standard, the switch would follow careful research and a strong recommendation from a trusted friend.
But that does not mean I do not like trying new things and taking stock now and then. So recently, after years of using the bash shell without a second thought, I decided to try four alternative shells: ksh, tcsh, zsh, and fish. All four were easy to install from my Fedora system's default repositories, and they may already be included in your distribution.
Here is a brief introduction to each, along with the reasons it might be the right next command-line interpreter for you.
### bash
First, a look back at the most familiar one. [GNU Bash][1], the Bourne Again Shell, has been the default in the many Linux distributions I have used over the years. First released in 1989, it has easily grown into the most widely used shell in the Linux world, and it is common on other Unix-like systems as well.
Bash is a well-regarded shell: when you search the Internet for how to do just about anything, the documentation you find almost invariably assumes you are using bash. But bash has its drawbacks; anyone who has written bash scripts knows we always end up writing a few more lines than should really be necessary. It is not that anything is impossible in bash, but reading and writing it is not always intuitive, or at least not elegant.
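To make that point concrete, here is a small hypothetical sketch of the kind of ceremony bash often demands: safely counting the files that match a glob means remembering `nullglob` and array syntax, where a terser shell might need a single expression.

```shell
# Count the .txt files in a directory, the "careful bash" way.
# nullglob makes an unmatched glob expand to nothing instead of
# the literal pattern "*.txt", so an empty directory yields 0.
count_txt_files() {
  local dir=$1
  shopt -s nullglob
  local files=( "$dir"/*.txt )
  echo "${#files[@]}"
}
```

None of this is hard, but each line guards against a quirk you simply have to know about.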
As noted above, given its enormous install base and the fact that professional and amateur sysadmins alike have adapted to its ways and quirks, bash will likely be with us for some time to come.
### ksh
[KornShell][4] may not be a familiar name, but you surely know its command, ksh. This alternative shell originated at Bell Labs in the 1980s, written by David Korn. Although initially proprietary, later versions were released under the [Eclipse Public License][5].
ksh advocates list many reasons for its superiority, including better loop syntax, cleaner exit codes from pipes, easier ways to repeat commands, and associative arrays. It can emulate much of the behavior of vi and emacs, so if you are deeply attached to a text editor, it is worth a try. Overall, I found it much like bash for basic input, though for advanced scripting it offers a different experience.
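As a small illustration of one of those features: ksh93 introduced associative arrays with a `typeset -A` syntax that bash 4+ later adopted almost verbatim, so this hypothetical sketch runs in either shell.

```shell
# Map shell names to their first-release years with an associative
# array (ksh93 syntax, also accepted by bash 4 and later).
shell_year() {
  typeset -A released
  released[bash]=1989
  released[ksh]=1983
  echo "${released[$1]:-unknown}"
}
```

The `${var:-default}` expansion handles lookups for shells we did not record.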
### tcsh
[tcsh][6] is a derivative of csh, the Berkeley Unix C shell, and its lineage goes back to the early days of Unix and computing itself.
tcsh's big selling point is its scripting language, which will look very familiar to anyone who has programmed in C. tcsh scripting is something people either love or hate. But it has other technical features as well, including the ability to add arguments to aliases, and various default behaviors that may suit your preferences, including tab completion and logging completed tab work for later lookup.
tcsh is released under a [BSD license][7].
### zsh
[zsh][8] is another shell with similarities to bash and ksh. Dating from the early 1990s, zsh supports many useful features, including spelling correction, theming, named directory shortcuts, sharing command history across multiple terminals, and various minor tweaks relative to plain bash.
zsh's code and binaries can be distributed under an MIT-like license, though parts remain under the GPL; see the [actual license][9] for details.
### fish
When I visited the [fish][10] homepage and saw the slightly tongue-in-cheek tagline “Finally, a command line shell for the 90s” (fish dates from 2005), I knew I would love this friendly, interactive shell.
fish's author offers several reasons to switch, delivered with a humor that lands, and they are real: the features include autosuggestions (“Watch out, Netscape Navigator 4.0” — LCTT note: NN4 was a landmark release), support for the “astonishing” 256-color VGA palette, and genuinely useful things such as command completion based on the man pages on your machine, clean scripting, and a web-based configuration interface.
fish is licensed mainly under the GPLv2, with some parts under other licenses. You can check the repository for the [complete information][11].
***
If you want a detailed rundown of exactly how each option differs, [this website][12] should help.
Where do I stand? Well, in the end I will probably return to bash, because for someone who spends most of their time interacting at the command line, the benefits of switching for advanced scripting are marginal, and I am already used to bash.
But I am glad I decided to open the door and try some new options, and I know there is plenty more out there. Which shells have you tried, and which do you prefer? Let us know in the comments!
---
via: https://opensource.com/business/16/3/top-linux-shells
Author: [Jason Baker][a]
Translator: [mr-ping](https://github.com/mr-ping)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).
[a]:https://opensource.com/users/jason-baker
[1]: https://www.gnu.org/software/bash/
[2]: http://mywiki.wooledge.org/BashPitfalls
[3]: http://www.gnu.org/licenses/gpl.html
[4]: http://www.kornshell.org/
[5]: https://www.eclipse.org/legal/epl-v10.html
[6]: http://www.tcsh.org/Welcome
[7]: https://en.wikipedia.org/wiki/BSD_licenses
[8]: http://www.zsh.org/
[9]: https://sourceforge.net/p/zsh/code/ci/master/tree/LICENCE
[10]: https://fishshell.com/
[11]: https://github.com/fish-shell/fish-shell/blob/master/COPYING
[12]: http://hyperpolyglot.org/unix-shells

Why Is the Ubuntu Family Dominating the Linux Distribution Landscape?
=========================================
Over the past several years I have tried a number of excellent Linux distributions. The ones that impressed me most were those maintained by strong communities, but popular distributions impressed me even more than strong communities did. Popular Linux distributions tend to attract new users, usually because their very popularity makes them easier to use. It is not absolute, but it generally holds.
With that said, the first distribution that comes to mind is [Ubuntu][1]. Built on the robust [Debian][2] distribution, Ubuntu has not only become a hugely popular Linux distribution in its own right, it has also spawned countless derivatives, Linux Mint being one example. In this article I will explore why I believe Ubuntu is winning the Linux distribution wars and how it has influenced the whole Linux desktop landscape.
### Ubuntu Is Easy to Use
Before I first tried Ubuntu years ago, I preferred the KDE desktop. At the time, KDE was what I encountered most, mainly because it was the most popular choice among the newbie-friendly Linux distributions. Those newbie-friendly distributions included Knoppix, Simply Mepis, Xandros, Linspire, and others, all of which pointed their users to the well-regarded KDE.
KDE met my needs, and I had no reason to explore other desktop environments. Then one day my Debian installation failed (thanks to my own carelessness), and I decided to try the Ubuntu release codenamed Dapper Drake (LCTT note: Ubuntu 6.06 Dapper Drake, released June 1, 2006), which everyone was raving about. At that point I had only seen screenshots of it, but I figured it would be fun to try.
The biggest impression Ubuntu Dapper Drake made on me was how clearly I could see where everything lived. Remember, I came from the KDE world, where there were 15 ways to change a menu setting! Ubuntu's GNOME implementation was minimalism itself.
Fast forward to 2016 and the latest release, 16.04: we now have several Ubuntu flavors and a host of Ubuntu-based distributions. What all of the flavors and derivatives share at their core is a design built for ease of use. That is the most important factor when a distribution wants to grow its user base.
### Ubuntu LTS
In the past, I have almost always stuck with LTS (Long Term Support) releases for my main desktop. The October releases are fine for testing on a spare hard drive, or even on an aging laptop. My reasoning is simple: I have no interest in fussing with a short-term release on a computer I rely on. I am a busy guy, and I feel that would waste my time.
In my view, offering LTS releases is the biggest reason Ubuntu has become so popular. Put it this way: giving the general public a desktop Linux distribution that enjoys long-term support is its advantage. In fact, it is not just Ubuntu; the derivatives do well on this point too. The long-term support policy, together with a newbie-friendly environment, has, I believe, brought Ubuntu tremendous mileage.
### Ubuntu Snap Packages
Users used to praise PPAs (personal package archives) for giving them access to new software on their systems. Unfortunately, the technology has drawbacks: a PPA for a given software title often simply cannot be found, and that is a common frustration.
Now there are [Snap packages][3]. Of course, this is not an entirely new concept; similar attempts have been made before. Users can run the latest software on a long-term support release without moving to the newest Ubuntu release. Although I think we are still in the early days of Snap packages, I look forward to running brand-new software on a stable release.
The most obvious problem is that if you run a lot of software, Snap packages can eat a lot of disk space. Not only that, most Ubuntu software still has to be officially converted from deb packages. The first issue can be solved with more disk space; the second one just takes waiting.
### The Ubuntu Community
First, let me acknowledge that most major Linux distributions have strong communities. However, I firmly believe Ubuntu's community members are the most diverse, coming from all walks of life. For example, our forums include categories ranging from Apple hardware support to gaming, and the range of these specialized discussion topics is remarkably broad.
Beyond the forums, Ubuntu also has a formal community structure, including a community council, a technical board, [LoCo teams][4], and a developer membership board. There is more, but these are the parts of the community structure I know of.
We also have the [Ask Ubuntu][5] section. I think this feature has largely replaced asking for help in the forums; I find you are more likely to get useful information there. Better yet, the most accurate of the offered solutions are selected and written into the official documentation.
### The Future of Ubuntu
I think Ubuntu's Unity interface (LCTT note: Unity is the graphical shell Canonical developed for the GNOME desktop environment on Ubuntu) has done little to grow desktop adoption. I understand the reasoning behind it; right now it mostly serves goals such as making the development team's work easier. But ultimately, I believe Unity paved the way for the popularity of Ubuntu MATE and Linux Mint.
What I wonder about most is the future of Ubuntu's IRC channels and mailing lists (LCTT note: you can ask questions about LoCo teams and planned events on the Ubuntu LoCo Teams IRC chat and talk with members of various teams). The truth is that neither is as well documented as Ask Ubuntu. As for mailing lists, I have always considered them a painfully outdated way to collaborate, but that is just my view; others may disagree and find them perfectly fine.
What do you think? Will Ubuntu keep its dominant share? Perhaps you think Arch, Linux Mint, or another distribution will beat Ubuntu in popularity? If so, speak up and name your favorite distribution. If that distribution is an Ubuntu derivative, tell us why you prefer it over Ubuntu itself. If nothing else, Ubuntu will remain the base on which other distributions are built; that, I think, is a view many people share.
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/why-ubuntu-based-distros-are-leaders.html
Author: [Matt Hartley][a]
Translator: [vim-kakali](https://github.com/vim-kakali)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).
[a]: http://www.datamation.com/author/Matt-Hartley-3080.html
[1]: http://www.ubuntu.com/
[2]: https://www.debian.org/
[3]: http://www.datamation.com/open-source/ubuntu-snap-packages-the-good-the-bad-the-ugly.html
[4]: http://loco.ubuntu.com/
[5]: http://askubuntu.com/

Mark Shuttleworth: The Man Behind Ubuntu
================================================================================
![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth-652x445.jpg)
**Mark Richard Shuttleworth** is the founder of Ubuntu and is also known as [one][2] of the [people behind Debian][1]. He was born in 1973 in Welkom, South Africa. He is not only an entrepreneur but also a space tourist: the first citizen of an independent African country to travel to space.
Mark founded an Internet commerce security company called **Thawte** in 1996, while studying finance and information technology at the University of Cape Town.
In 2000, Mark founded HBD (short for Here be Dragons, hence its dragon mascot), an investment company, and he also created the Shuttleworth Foundation, which funds innovative leaders in society through grants and investments.
> “Mobile is hugely important for the future of the personal computing industry. This month, for instance, it became clear that the traditional PC industry is shrinking relative to the growth of tablets. So if we want to be in the personal computing business, we have to be in mobile first. Mobile is interesting because there is no market for pirated Windows there: if you win a device over to your operating system, that device stays on your operating system. In the traditional PC business we are constantly competing with ‘free’ Windows, which is a subtle challenge. So our focus now is to build deeper engagement with ordinary users through Ubuntu on mobile devices — phones and tablets.”
>
> — Mark Shuttleworth
In 2002, after a year of training in Star City, Russia, he flew to the International Space Station as a member of the Soyuz TM-34 mission crew. Later, after launching a campaign promoting science, coding, and mathematics to South African students interested in aerospace and related fields, Mark founded **Canonical Ltd.**, and until 2013 he led the development of the Ubuntu operating system.
Today, Shuttleworth holds dual British and South African citizenship and lives with 18 lovely ducks in a garden home on the Isle of Man, together with his lovely girlfriend Claire, two black dogs, and the occasional passing flock of sheep.
> “A computer is no longer just an electronic device. It is an extension of your mind, and a gateway to other people.”
>
> — Mark Shuttleworth
### Mark Shuttleworth's Early Life ###
As mentioned above, Mark was born in Welkom, in South Africa's Orange Free State, the child of a surgeon and a nursery-school teacher. He attended Western Province Preparatory School, where he became head of the student council in 1986, spent a term at Rondebosch Boys' High School, and then attended Bishops Diocesan College, where he again became head of the student council, in 1991.
Mark earned a Bachelor of Business Science degree with a double major in finance and information systems at the University of Cape Town, where he lived in Smuts Hall. As a student, he also helped install the university's first residential Internet connections.
> “Countless companies and countries have proven that introducing open source policies raises competitiveness and efficiency. Creating productivity at every level is vital for companies and nations alike.”
>
> — Mark Shuttleworth
### Mark Shuttleworth's Career ###
Mark founded Thawte, which specialized in digital certificates and Internet security, in 1995, and sold it to VeriSign in 1999 for roughly $575 million.
In 2000, Mark founded the HBD venture capital firm, becoming a business investor and project incubator. In 2004 he founded Canonical Ltd. to support and encourage the commercialization of free software projects, above all the Ubuntu operating system. It was not until 2009 that Mark stepped down as Canonical's CEO.
> “In the early days of the [DCC](https://en.wikipedia.org/wiki/DCC_Alliance) (LCTT note: an alliance of Debian GNU/Linux developers), I was more inclined to let the advocates run free and see what they could develop.”
>
> — Mark Shuttleworth
### Linux, Free and Open Source Software, and Mark Shuttleworth ###
In the late 1990s, Mark participated in the Debian project as a developer.
In 2001 he founded the Shuttleworth Foundation, a nonprofit rooted in South Africa that focuses on sponsoring social innovation and free/educational open source software; among the projects it has sponsored is the [Freedom Toaster][3] (LCTT note: a public kiosk that burns free software onto users' CDs/DVDs).
In 2004, Mark returned to the free software world by funding the development of Ubuntu, an operating system based on Debian, through his company Canonical.
In 2005, Mark funded the founding of the Ubuntu Foundation with an initial investment of ten million dollars. Within the Ubuntu project he is often referred to by the catchy title **SABDFL (Self-Appointed Benevolent Dictator for Life)**. To find enough capable developers for such an enormous project, Mark spent six months recruiting from the Debian mailing lists — all of it done while aboard the icebreaker Kapitan Khlebnikov in Antarctica. That same year, he bought a 65% stake in Impi Linux.
> “I call on those in charge of the telecom companies to develop efficient intercontinental data services as soon as possible.”
>
> — Mark Shuttleworth
In 2006, KDE announced that Shuttleworth had become KDE's **first patron**, at the time the highest level of KDE sponsorship. The agreement ended in 2012, replaced by financial support for Kubuntu, an Ubuntu variant that uses KDE as its default desktop.
![](http://www.unixmen.com/wp-content/uploads/2015/10/shuttleworth-kde.jpg)
In 2009, Shuttleworth announced that he would step down as Canonical's CEO to focus more on partnerships, product design, and customers. Jane Silber, the company's COO since 2004, was promoted to CEO.
In 2010, Mark received an honorary degree from the Open University for his contributions.
In 2012, Mark and Kenneth Rogoff debated **The Innovation Enigma** at Oxford University against Peter Thiel and Garry Kasparov.
In 2013, Mark and Ubuntu were given the **Austrian anti-privacy Big Brother Award** because, by default, Ubuntu sent searches typed into the Unity desktop's search box to Canonical's servers (LCTT note: thereby infringing on personal privacy). A year earlier, Mark had stated that the process was anonymized.
> “All the major PC makers now offer Ubuntu preinstall options, so we already work quite closely with the industry. But those PC makers are nervous about promoting anything new to buyers. If we can get PC buyers used to the experience of Ubuntu on tablets and phones, they should also be more willing to buy PCs preloaded with Ubuntu. No operating system ever succeeded by imitation. Android is great, but if we want to succeed we have to bring something newer and better to market (LCTT note: rather than improving on or imitating Android). We risk stagnation if none of us pursues the future. But if you do pursue the future, you have to accept that not everyone will share your vision.”
>
> — Mark Shuttleworth
### Mark Shuttleworth's Trip to Space ###
Mark became famous worldwide in 2002 as the world's second self-funded space tourist and the first South African to travel to space. He flew as a member of the crew of the Russian Soyuz TM-34 mission, paying about twenty million dollars for the trip. Two days after launch, the Soyuz spacecraft arrived at the International Space Station, where Mark spent eight days participating in experiments related to AIDS and genome research. Later that year, he returned to Earth with the Soyuz TM-33 mission. Mark spent a year preparing and training for the trip, including seven months living in Star City, Russia.
![](http://www.unixmen.com/wp-content/uploads/2015/10/Mark-Shuttleworth1.jpg)
From space, Mark spoke over the radio with Nelson Mandela and with a 14-year-old South African girl, Michelle Foster (who asked Mark whether he would marry her). Mark politely dodged the marriage question, though he said he was honored before deftly changing the subject. The terminally ill Foster was given the chance to talk with Mark and Nelson Mandela through the Dream Foundation.
On his return, Mark traveled the world and spoke with students everywhere about the trip.
> “Rough stats suggest that actual Ubuntu usage is still growing. And our partners — Dell, HP, Lenovo, and other hardware makers, plus game makers EA and Valve joining us — make me feel that we continue to lead in the areas that matter.”
>
> — Mark Shuttleworth
### Mark Shuttleworth's Means of Transport ###
Mark has his own private jet, a Bombardier Global Express. Although it is often called Canonical One, it is actually owned through the HBD venture capital firm. The dragon painted on the side of the plane is Norman, HBD's mascot.
![](http://www.leader.co.za/leadership/logos/logomarkshuttleworthdirectory_31ce.gif)
### The Legal Battle with the South African Reserve Bank ###
When Mark transferred R2.5 billion from South Africa to the Isle of Man, the South African Reserve Bank levied a charge of R250 million. Mark appealed, and after a lengthy courtroom battle the Reserve Bank was ordered to refund the R250 million, with interest. Mark announced that he would place the R250 million in a trust fund to help fund cases brought before the Constitutional Court.
> “Exit charges were not inconsistent with the Constitution. But the dominant purpose of an exit charge is not to raise revenue but to protect the domestic economy by regulating the outflow of capital.”
>
> — Justice Dikgang Moseneke
In 2015, the Constitutional Court of South Africa amended the lower courts' rulings and pronounced the above understanding of exit charges.
### Things Mark Shuttleworth Likes ###
Cesária Évora, mp3s, spring, Chelsea, finally seeing something obvious for the first time, coming home, Sinatra, daydreaming, sundowners, flirting, Tess of the d'Urbervilles, string theory, Linux, particle physics, Python, reincarnation, MiG-29s, snow, travel, Mozilla, lime marmalade, body shots, the African bush, leopards, Rajasthan, Russian saunas, snowboarding, weightlessness, Iain M. Banks, broadband, Alastair Reynolds, fancy-dress costumes, skinny-dipping, flashes of insight, post-adrenaline calm, the inexplicable, convertibles, Clifton, national highways, the International Space Station, machine learning, artificial intelligence, Wikipedia, Slashdot, kitesurfing, and Manx lanes.
### Things Mark Shuttleworth Dislikes ###
Admin, pay raises, legalese, and public speaking.
--------------------------------------------------------------------------------
via: http://www.unixmen.com/mark-shuttleworth-man-behind-ubuntu-operating-system/
Author: [M.el Khamlichi][a]
Translator: [Moelf](https://github.com/Moelf)
Proofreader: [PurlingNayuki](https://github.com/PurlingNayuki), [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).
[a]:http://www.unixmen.com/author/pirat9/
[1]:https://wiki.debian.org/PeopleBehindDebian
[2]:https://raphaelhertzog.com/2011/11/17/people-behind-debian-mark-shuttleworth-ubuntus-founder/
[3]:https://en.wikipedia.org/wiki/Freedom_Toaster

Patching the Critical glibc Flaw in Linux Systems
=================================================
**A critical vulnerability disclosed by Google affects the major Linux distributions. The glibc flaw could lead to remote code execution.**
(Editor's note: this is not breaking news, but we are sharing it for its technical interest.)
Linux users are racing to patch a critical flaw in the core glibc open source library that could expose systems to the risk of remote code execution. The glibc vulnerability is identified as CVE-2015-7547 and is titled “getaddrinfo stack-based buffer overflow.”
glibc, the GNU C Library, is an open source implementation of the C and C++ programming language libraries and is part of every major Linux distribution. Google engineers came across CVE-2015-7547 by accident, when their attempts to connect to a certain host triggered a segmentation fault that crashed the connection. Further investigation revealed that glibc was at fault and that the crash could potentially fulfill the conditions for arbitrary remote code execution.
Google wrote in a blog post: “glibc's DNS client-side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS [Domain Name System] servers, or through a man-in-the-middle (MITM) attack.”
Actually exploiting CVE-2015-7547 is not simple, but it is possible. To prove that the issue is exploitable, Google published proof-of-concept (PoC) code on GitHub that demonstrates whether an end user or system is vulnerable.
The PoC page on GitHub states that “the server code triggers the vulnerability and will therefore crash the client code.”
Mark Loveless, senior security researcher at Duo Security, explained that the main risk of CVE-2015-7547 lies in Linux client-side applications that depend on DNS responses.
“There are some specific conditions required, so not every application will be affected, but it appears that some command-line tools, including the popular SSH [Secure Shell] client, could trigger the flaw,” Loveless told eWEEK. “We deem it serious mainly because of the risk to Linux systems, but also because of other potential problems.”
Those other problems could include the risk of email-based attacks that trigger calls into the vulnerable glibc getaddrinfo() library function. It is also worth noting that the vulnerability had existed in the code for years before it was discovered.
Google's engineers were not the first or the only group to find this security risk in glibc. The issue was first reported to the glibc bug [tracking system](https://sourceware.org/bugzilla/show_bug.cgi?id=1866) on July 13, 2015. The root of the flaw can be traced back further, to the code commit that first introduced it in glibc 2.9, released in May 2008.
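Since the flaw dates back to glibc 2.9, checking which glibc a system runs is the natural first step in assessing exposure. A minimal sketch, assuming a glibc-based system where `ldd` reports the library version on its first line:

```shell
# Print the installed glibc version, e.g. "2.23".
# CVE-2015-7547 was introduced in glibc 2.9 (May 2008); fixed
# builds shipped alongside the February 2016 disclosure.
glibc_version() {
  ldd --version | head -n1 | grep -oE '[0-9]+\.[0-9]+$'
}
```

On a patched system you would expect a version at or above the fixed build your distribution shipped, but the authoritative check is your vendor's security advisory for CVE-2015-7547.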
Linux vendor Red Hat also found the bug in glibc independently, and on January 6, 2016, Google and Red Hat developers confirmed that they had been working independently on the same flaw, as part of initially private discussions with the upstream glibc maintainers.
“Once it was confirmed that both teams were working on the same vulnerability, we collaborated on potential fixes, mitigations and regression testing,” Florian Weimer, principal software engineer for product security at Red Hat, told eWEEK. “We also worked together to make the test coverage as broad as possible, to catch any related problems in the code and help avoid future issues.”
It took years to discover that glibc's code harbored a security issue because the flaw was neither obvious nor immediately apparent.
“To diagnose a vulnerability in a network component such as a DNS resolver, one usually looks at packet traces captured when the problem occurs,” Weimer said. “In this case such captures were not available, so some experimentation was needed to reproduce the exact scenario that triggers the bug.”
Weimer added that once packet captures became available, considerable effort went into validating the fix, which ultimately led to a series of improvements to the regression test suite that benefit the upstream glibc project.
In many cases, the mandatory access security controls of Security-Enhanced Linux (SELinux) can reduce the risk posed by a potential vulnerability, but this new glibc issue is an exception.
“As arbitrary attacker-supplied code execution is involved, the risk is a compromise of important system functions,” Weimer said. “A suitable SELinux policy can contain some of the damage an attacker might do and restrict their access to the system, but DNS is used by many applications and system components, so SELinux policies provide only limited containment for this issue.”
As of today's disclosure, a patch is now available to mitigate the potential risk of CVE-2015-7547.
via: http://www.eweek.com/security/linux-systems-patched-for-critical-glibc-flaw
Author: [Michael Kerner][a]
Translator: [robot527](https://github.com/robot527)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

Two Outstanding All-in-One Linux Servers
================================================
Back in 2000, Microsoft released Small Business Server (SBS). That product changed how many people thought about technology in business. Instead of many machines handling different tasks, you could deploy a single server that handled email, calendaring, file sharing, directory services, VPN, and more. For many small companies that was a real boon, but for some, Windows SBS was expensive, and others would never consider Microsoft's single-server design in the first place.
For the latter there are alternatives. In fact, in the Linux and open source world you can choose from a number of solid platforms that can serve your small business as a one-stop shop. If your small business has 10 to 50 employees, an all-in-one server might be the ideal solution.
Here I will look at two all-in-one Linux servers, so you can see whether either is a perfect fit for your company.
Keep in mind that these servers are not, in any way, suitable for large businesses or enterprises. Larger companies cannot rely on an all-in-one server, because a single machine cannot bear the load an enterprise demands. That said, Linux all-in-one servers are a great fit for small businesses.
### ClearOS
[ClearOS][1] was first released in 2009 under the name ClarkConnect, as a routing and gateway distribution. Since then, ClearOS has added all the features an all-in-one server needs. ClearOS offers more than just software: you can buy a [ClearBox 100][2] or [ClearBox 300][3]. These servers ship with the full ClearOS and are sold as IT appliances. See the feature comparison/price matrix [here][4].
If you already have the hardware, you can download one of these:
- [ClearOS Community][5] — the community (free) edition of ClearOS
- [ClearOS Home][6] — ideal for home offices (for detailed features and subscription pricing, see [here][12])
- [ClearOS Business][7] — ideal for small businesses (for detailed features and subscription pricing, see [here][13])
What do you get with ClearOS? A business-ready server with a single, elegant web interface. What makes ClearOS stand out from what a standard server offers? Beyond the basics, you can add features from the [Clear Marketplace][8]. There you can install free or paid apps that extend the ClearOS server's feature set, including add-ons for Windows Server Active Directory, OpenLDAP, Flexshares, antimalware, web access control, content filtering, and much more. You will even find third-party components such as Google Apps Sync, the Zarafa collaboration platform, and Kaspersky antivirus.
Installing ClearOS is like installing any other Linux distribution (it is based on Red Hat's Anaconda installer). When the installation completes, the system prompts you to set up the network interface, giving you an address to visit from a browser on the same network as the ClearOS server. The address takes the form:
https://IP_OF_CLEAROS_SERVER:81
where IP_OF_CLEAROS_SERVER is the actual IP address of the server. Note: the first time you visit the server from a browser, you will receive a “Connection is not private” warning. Continue past it so you can proceed with the setup.
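If you manage more than one of these boxes, it can be handy to script the console address. A tiny hypothetical helper (the IP in the usage comment is a placeholder, not a real server):

```shell
# Build the ClearOS web console URL for a given server IP.
# The console listens on port 81 over HTTPS with a self-signed
# certificate, hence the -k flag if you probe it with curl.
console_url() {
  echo "https://$1:81"
}

# Example probe (needs a reachable server, so shown commented out):
# curl -k -s -o /dev/null -w '%{http_code}\n' "$(console_url 192.168.1.10)"
```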
When the browser finally connects, you are prompted for the root user credentials (the root password you set during the initial installation). Once authenticated, you will see the ClearOS setup wizard (Figure 1).
![](http://www.linux.com/images/stories/66866/jack-clear_a.png)
*Figure 1: The ClearOS setup wizard.*
Click the Next button to begin setting up your ClearOS server. The wizard is self-explanatory, and at the end it asks which edition of ClearOS you want: click “Community”, “Home”, or “Business”. After choosing, you are asked to register an account. Once you have created an account and registered your server, you can start updating the server, configuring it, and adding modules from the marketplace (Figure 2).
![](http://www.linux.com/images/stories/66866/jack-clear_b.png)
*Figure 2: Installing modules from the marketplace.*
At this point, everything is in place for you to dig in and configure your ClearOS small business server.
### Zentyal
[Zentyal][10] is a small business server based on Ubuntu that was, for a time, named eBox. Zentyal offers plenty of servers/services to fit your small business needs:
- Email — webmail; native support for the Microsoft Exchange protocol and Active Directory; calendars and contacts; email sync for mobile devices; antivirus/antispam; IMAP, POP, SMTP, CalDAV, and CardDAV support.
- Domains and directory — central domain directory management; multiple organizational units; single sign-on authentication; file sharing; ACLs; advanced domain management; printer management.
- Networking and firewall — support for static and DHCP interfaces; objects and services; packet filtering; port forwarding.
- Infrastructure — DNS; DHCP; NTP; certification authority; VPN.
- Firewall
Installing Zentyal is much like installing Ubuntu Server: text-based and very simple. Boot from the installation image, make a few simple choices, then wait for the installation to complete. When the initial text-based installation finishes, a desktop GUI appears with a wizard for selecting software packages. Select all the packages you want to install and let the installer finish the job.
Finally, you can access the Zentyal server through its web interface (point a browser at https://IP_OF_SERVER:8443, where IP_OF_SERVER is the LAN address of your Zentyal server) or use the standalone desktop GUI to administer the server (Zentyal includes the Zentyal Administration Console for quick access to the admin and user consoles). Once everything has been saved and started, you will see the Zentyal dashboard (Figure 3).
![](http://www.linux.com/images/stories/66866/jack-zentyal_a.png)
*Figure 3: The Zentyal dashboard in action.*
The dashboard lets you control every aspect of the server, such as applying updates, managing servers/services, and getting at-a-glance status updates. You can also go to the Components section and install components you skipped during deployment, or update the current package list. Click “Software Management” > “System updates”, select the updates you want (Figure 4), and click the Update button at the very bottom of the screen.
![](http://www.linux.com/images/stories/66866/jack-zentyal_b.png)
*Figure 4: Updating your Zentyal server is simple.*
### Which Server Is Right for You?
The answer depends on what you need. Zentyal is an incredible server that handles small business network duties with ease. If you need more, such as groupware, I suggest giving ClearOS a try. If you do not need groupware, either server will serve you well.
I strongly recommend installing both of these all-in-one servers to see which one better fits your small company.
------------------------------------------------------------------------------
via: http://www.linux.com/learn/tutorials/882146-two-outstanding-all-in-one-linux-servers
Author: [Jack Wallen][a]
Translator: [wyangsun](https://github.com/wyangsun)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).
[a]: http://www.linux.com/community/forums/person/93
[1]: http://www.linux.com/learn/tutorials/882146-two-outstanding-all-in-one-linux-servers#clearfoundation-overview
[2]: https://www.clearos.com/products/hardware/clearbox-100-series
[3]: https://www.clearos.com/products/hardware/clearbox-300-series
[4]: https://www.clearos.com/products/hardware/clearbox-overview
[5]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso
[6]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso
[7]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso
[8]: https://www.clearos.com/products/purchase/clearos-marketplace-overview
[9]: https://ip_of_clearos_server:81/
[10]: http://www.zentyal.org/server/
[11]: https://ip_of_server:8443/
[12]: https://www.clearos.com/products/clearos-editions/clearos-7-home
[13]: https://www.clearos.com/products/clearos-editions/clearos-7-business

65% of Companies Are Contributing to Open Source Projects
==========================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_openseries.png?itok=s7lXChId)
This year, Black Duck and North Bridge released their tenth annual Future of Open Source survey of trends in open source software. The highlights this year are the mainstream acceptance of open source today and how attitudes toward it have changed over the past decade.
[The 2016 Future of Open Source survey][1] analyzed responses from about 3,400 professionals. Developers made their voices heard in this year's survey, accounting for roughly 70% of participants. The data also showed an exponential rise in participation from security professionals, up more than 450%. Their participation indicates a growing focus in the open source community on securing open source software and on keeping new technologies secure as they emerge.
Black Duck's [annual Open Source Rookies awards][2] recognize emerging technologies, such as Docker and Kontena in the container space. Containers had a huge year: 76% of respondents said their companies have some plans to use container technology, and 59% are preparing to use containers across a range of deployments, from development and testing to internal and external production environments. The developer community has embraced containers as a way to develop simply and rapidly.
It is no surprise that the survey showed nearly every organization has developers working on open source. As large companies like Microsoft and Apple open source some of their solutions, developers gain more opportunities to participate in open source projects. I very much hope this trend continues, with more software developers contributing to open source both at work and in their spare time.
### Highlights from the 2016 Survey Results
#### Business value
* Open source is an essential element of development strategy, with more than 65% of respondents using open source to speed application development.
* More than 55% of respondents use open source within their production environments.
#### Engine for innovation
* Respondents said open source drives innovation by making development faster and more agile, speeding time to market and greatly reducing the time spent winning sign-off from management.
* Innovation is also driven by open source's quality of solutions, competitive features, technical capabilities, and ability to customize.
#### Proliferation of open source business models and investment
* More diverse business models are emerging that deliver unprecedented value to open source companies; they no longer depend solely on cloud services and technical support.
* Private financing of open source has nearly quadrupled over the past five years.
#### Security and management
The development of first-rate open source security and management practices has not kept pace with the ever-growing adoption of open source. Despite the explosive growth of high-profile open source projects in recent years, the survey found that:
* 50% of companies have no formal policy for selecting and approving open source code.
* 47% of companies have no formal process for tracking open source code, which limits their knowledge of the open source code they use and their ability to control it.
* More than one third of companies have no process for identifying, tracking, or remediating major open source vulnerabilities.
#### A growing number of open source participants
The results show an active corporate open source community that spurs innovation, delivers value, and shares camaraderie:
* 67% of respondents report that their companies actively encourage developers to take part in open source projects.
* 65% of companies are contributing to open source projects.
* About one third of companies have a full-time position dedicated to open source projects.
* 59% of respondents participate in open source projects to gain a competitive edge.
Black Duck and North Bridge learned a great deal from this year's survey about security, policy, business models, and more, and we are excited to share these findings. Thanks to our partners and to everyone who took the survey. It has been a great ten years, and I am glad we can say with confidence that the future of open source is full of possibility.
To learn more, see the full [survey results][3].
--------------------------------------------------------------------------------
via: https://opensource.com/business/16/5/2016-future-open-source-survey
Author: [Haidee LeClair][a]
Translator: [Cathon](https://github.com/Cathon)
Proofreader: [wxy](https://github.com/wxy)
[a]: https://opensource.com/users/blackduck2016
[1]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results
[2]: https://info.blackducksoftware.com/OpenSourceRookies2015.html
[3]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results%C2%A0

How to Set Up Two-Factor Authentication for Login and sudo
==========================================================
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/auth_crop.png?itok=z_cdYZZf)
>[Used with permission][1]
Security is everything. We live in a world where data is incredibly valuable and always at risk of loss. That means you must do everything you can to keep the data on your desktops and servers safe. As a result, admins and users create incredibly complex passwords, use password managers, and more. But what if I told you that logging in to your Linux servers and desktops could take just one extra step, at most two? Thanks to [Google Authenticator][2], now it can. And setup is incredibly simple.
I will walk you briefly through the steps of setting up two-factor authentication for login and sudo. I will demonstrate on Ubuntu 16.04 Desktop, but the process works for servers as well. To handle the two-factor side, I will use Google Authenticator.
One very important caveat: once you set this up, you will not be able to log in to the account (or run sudo commands) without the six-digit code from the authenticator. This also adds an extra step for you, so if having to pull out your smartphone every time you log in to your Linux server (or use sudo) is not for you, this solution is not for you. But remember, that extra step brings a level of protection nothing else can.
### Installing the Necessary Components
There are two parts to installing Google Authenticator. First, the smartphone app. Here is how to install it from the Google Play Store:
1. Open the Google Play Store on your Android device
2. Search for google authenticator
3. Locate and tap the entry by Google Inc.
4. Tap Install
5. Tap “Accept”
6. Allow the installation to complete
Next, install the authenticator on your Linux machine. Here is how:
1. Open a terminal window
2. Issue the command sudo apt-get install google-authenticator
3. Type your sudo password and hit Enter
4. If prompted, type y and hit Enter
5. Allow the installation to complete
Next, configure login to use google-authenticator.
### Configuration
Only one file must be edited to add two-factor authentication for both login and sudo: /etc/pam.d/common-auth. Open it and look for the line:
```
auth [success=1 default=ignore] pam_unix.so nullok_secure
```
Above that line, add the following:
```
auth required pam_google_authenticator.so
```
Save and close the file.
The next step is to set up google-authenticator for every user on the system; otherwise, they will not be able to log in. For simplicity, let us assume your system has two users: jack and olivia. We will set up jack first (assuming that is the account we have been using all along).
Open a terminal window and issue the command google-authenticator. You will then be asked a series of questions (you should answer y to each one). The questions are:
* Do you want me to update your "/home/jlwallen/.google_authenticator" file (y/n) y
* Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n)
* By default, tokens are good for 30 seconds. In order to compensate for possible time skew between the client and the server, we allow an extra token before and after the current time. If you experience problems with poor time synchronization, you can increase the window from its default size of 1:30min to about 4min. Do you want to do so (y/n)
* If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s. Do you want to enable rate-limiting (y/n)
Once you have answered these questions, you will be presented with your secret key, a verification code, and five emergency scratch codes. Print out and save the scratch codes; you can use them when you do not have access to your phone (each scratch code can be used only once). The secret key is what you use to set up the account in the Google Authenticator app, and the verification code is a one-time code you can use immediately, if needed.
### 设置应用
现在你已经配置好了用户 jack。在设置用户 olivia 之前,你需要在 Google 身份验证器应用上为 jack 添加账户LCTT 译注:实际操作情形中,是为 jack 的手机上安装的该应用创建一个账户。在打开应用点击“菜单”按钮右上角三个竖排点。点击“添加账户”然后点击“输入提供的密钥”。在下一个窗口图1你需要输入你运行 google-authenticator 应用时提供的 16 个数字的密钥。给账户取个名字(以便你记住这用于哪个账户),然后点击“添加”。
![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/auth_a.png?itok=xSMkd-Mf)
*图1: 在 Google Authenticator 应用上新建账户*
LCTT 译注Google 身份验证器也可以扫描你在服务器上设置时显示的二维码,而不用手工输入密钥)
添加完账户之后,你就会看到一个 6 个数字的密码,你每次登录或者使用 sudo 的时候都会需要这个密码。
最后,在系统上设置其它账户。正如之前提到的,我们会设置一个叫 olivia 的账户。步骤如下:
1. 打开一个终端窗口
2. 输入命令 sudo su olivia
3. 在智能机上打开 Google 身份验证器
4. 在终端窗口图2中输入应用提供的 6 位数字验证码并敲击回车
5. 输入你的 sudo 密码并敲击回车
6. 以新用户身份输入命令 google-authenticator回答问题并记录生成的密钥和验证码。
成功为 olivia 用户设置好之后,用 google-authenticator 命令,在 Google 身份验证器应用上根据用户信息(和之前为第一个用户添加账户相同)添加一个新的账户。现在你在 Google 身份验证器应用上就会有 jack 和 olivia 两个账户了。LCTT 译注:在实际操作情形中,通常是为 jack 和 olivia 两个人的手机分别设置。)
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/auth_b.png?itok=FH36V1r0)
*图2: 为 sudo 输入 6 位数字验证码*
好了,就是这些。每次你尝试登录系统(或者使用 sudo 的时候,在你输入用户密码之前,都会要求你输入提供的 6 位数字验证码。现在你的 Linux 机器就比添加双因子认证之前安全多了。虽然有些人会认为这非常麻烦,我仍然推荐使用,尤其是那些保存了敏感数据的机器。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/how-set-2-factor-authentication-login-and-sudo
作者:[JACK WALLEN][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[wxy](https://github.com/wxy)
[a]: https://www.linux.com/users/jlwallen
[1]: https://www.linux.com/licenses/category/used-permission
谷歌安卓的一项新创新将可以让你无需安装即可在你的设备上使用应用程序。现在已经初具雏形。
还记得那时候吗,某人发给你了一个链接,要求你通过安装一个应用才能查看
是否要安装这个应用就为了看一下链接,这种进退两难的选择一定让你感到很沮丧。而且,安装应用这个事也会消耗你不少宝贵的时间。
上述场景可能大多数人都经历过,或者说大多数现代科技用户都经历过。尽管如此,我们都接受,认为这是天经地义的事情
事实真的如此吗?
针对这个问题谷歌的安卓部门给出了一个全新的开箱即用的答案:
### Android Instant Apps AIA
Android Instant Apps 声称可以从一开始就帮你摆脱这样的两难境地,让你简单地点击链接(见打开链接的示例)然后直接开始使用这个应用。
另一个真实生活场景的例子,如果你想停车但是没有停车码表的相应应用,有了 Instant Apps 在这种情况下就方便多了。
根据谷歌提供的信息,你可以简单地将你的手机和码表触碰,停车应用就会直接显示在你的屏幕上,并且准备就绪可以使用。
#### 它是怎么工作的?
Instant Apps 和你已经熟悉的应用基本相同,只有一个不同——
这样应用就可以快速打开,让你可以完成你的目标任务。
![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/05/AIA-demo.jpg)
*AIA 示例*
![](https://4.bp.blogspot.com/-p5WOrD6wVy8/VzyIpsDqULI/AAAAAAAADD0/xbtQjurJZ6EEji_MPaY1sLK5wVkXSvxJgCKgB/s800/B%2526H%2B-%2BDevice%2B%2528Final%2529.gif)
*B&H 图片(通过谷歌搜索)*
![](https://2.bp.blogspot.com/-q5ApCzECuNA/VzyKa9l0t2I/AAAAAAAADEI/nYhhMClDl5Y3qL5-wiOb2J2QjtGWwbF2wCLcB/s800/BuzzFeed-Device-Install%2B%2528Final%2529.gif)
*BuzzFeedVideo通过一个共享链接*
![](https://2.bp.blogspot.com/-mVhKMMzhxms/VzyKg25ihBI/AAAAAAAADEM/dJN6_8H7qkwRyulCF7Yr2234-GGUXzC6ACLcB/s800/Park%2Band%2BPay%2B-%2BDevice%2Bwith%2BMeter%2B%2528Final%2529.gif)
*停车与支付(例)(通过 NFC*
听起来很棒,不是吗?但是其中还有很多技术方面的问题需要解决。
比如,从安全的观点来说:从理论上来说,如果任何应用都能在你的设备上运行,甚至你都不用安装它——你要怎么保证设备远离恶意软件攻击?
因此,为了消除这类威胁,谷歌还在这个项目上努力,目前只有少数合作伙伴,未来将逐步扩展。
ORB新一代 Linux 应用
我们之前讨论过[在 Ubuntu 上离线安装应用][1]。我们现在要再次讨论它。
[Orbital Apps][2] 给我们带来了一种新的软件包类型 **ORB**,它具有便携软件、交互式安装向导支持,以及离线使用的能力。
便携软件很方便。主要是因为它们能够无需任何管理员权限直接运行,也能够带着所有的设置和数据随 U 盘存储。而交互式的安装向导也能让我们轻松地安装应用。
### 开放式可运行的打包OPEN RUNNABLE BUNDLE (ORB)
ORB 是一个自由开源的包格式它和其它包格式在很多方面有所不同。ORB 的一些特性:
- **压缩**:所有的包经过 squashfs 压缩,体积最多减少 60%。
- **便携模式**:如果一个便携 ORB 应用是在可移动设备上运行的,它会把所有设置和数据存储在那之上。
- **安全**:所有的 ORB 包使用 PGP/RSA 签名,通过 TLS 1.2 分发。
- **离线**:所有的依赖都打包进软件包,所以不再需要下载依赖。
- **开放式软件包**ORB 软件包可以作为 ISO 镜像挂载。
### 种类
ORB 应用现在有两种类别:
- 便携软件
- SuperDEB
### 1. 便携 ORB 软件
便携 ORB 软件可以立即运行而不需要任何的事先安装。这意味着它不需要管理员权限,也没有依赖!你可以直接从 Orbital Apps 网站下载下来就能使用。
并且由于它支持便携模式,你可以将它拷贝到 U 盘携带。它所有的设置和数据会和它一起存储在 U 盘。只需将 U 盘连接到任何运行 Ubuntu 16.04 的机器上就行了。
#### 可用便携软件
目前有超过 35 个软件以便携包的形式提供,包括一些十分流行的软件,比如:[Deluge][3]、[Firefox][4]、[GIMP][5]、[Libreoffice][6]、[uGet][7] 以及 [VLC][8]。
完整的可用包列表可以查阅 [便携 ORB 软件列表][9]。
#### 使用便携软件
按照以下步骤使用便携 ORB 软件:
- 从 Orbital Apps 网站下载想要的软件包。
- 将其移动到想要的位置(本地磁盘/U 盘)。
- 打开存储 ORB 包的目录。
![](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-1-1024x576.jpg)
- 打开 ORB 包的属性。
![给 ORB 包添加运行权限](http://itsfoss.com/wp-content/uploads/2016/05/using-portable-orb-app-2.jpg)
- 在权限标签页添加运行权限。
- 双击打开它。
等待几秒,让它准备好运行。大功告成。
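如果更习惯命令行,“添加运行权限”这一步也可以用 chmod 完成。下面用一个空文件 demo.orb 代替真实下载的软件包,仅作演示:

```shell
# 用 chmod 为 ORB 包添加运行权限(等价于在属性 > 权限中勾选允许执行)
touch demo.orb          # 这里用空文件代替真实下载的 ORB 包
chmod +x demo.orb
test -x demo.orb && echo "demo.orb 已具有运行权限"
```

之后直接运行 `./demo.orb`(换成你实际下载的包名)即可启动应用。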
### 2. SuperDEB
另一种类型的 ORB 软件是 SuperDEB。SuperDEB 很简单交互式安装向导能够让软件安装过程顺利得多。如果你不喜欢从终端或软件中心安装软件superDEB 就是你的菜。
最有趣的部分是你安装时不需要一个互联网连接,因为所有的依赖都由安装向导打包了。
#### 可用的 SuperDEB
超过 60 款软件以 SuperDEB 的形式提供。其中一些流行的有:[Chromium][10]、[Deluge][3]、[Firefox][4]、[GIMP][5]、[Libreoffice][6]、[uGet][7] 以及 [VLC][8]。
完整的可用 SuperDEB 列表,参阅 [SuperDEB 列表][11]。
#### 使用 SuperDEB 安装向导
- 从 Orbital Apps 网站下载需要的 SuperDEB。
- 像前面一样给它添加**运行权限**(属性 > 权限)。
- 双击 SuperDEB 安装向导并按下列说明操作:
![点击 OK](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-1.png)
![输入你的密码并继续](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-2.png)
![它会开始安装…](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-3.png)
![一会儿它就完成了…](http://itsfoss.com/wp-content/uploads/2016/05/Using-SuperDEB-Installer-4.png)
- 完成安装之后,你就可以正常使用了。
### ORB 软件兼容性
从 Orbital Apps 可知,它们完全适配 Ubuntu 16.04 [64 ]。
>阅读建议:[如何在 Ubuntu 获知你的是电脑 32 位还是 64 位的][12]。
至于其它发行版兼容性则不受保证。但我们可以说,它在所有 Ubuntu 16.04 衍生版UbuntuMATEUbuntuGNOMELubuntuXubuntu 等)以及基于 Ubuntu 16.04 的发行版(比如即将到来的 Linux Mint 18上都适用。我们现在还不清楚 Orbital Apps 是否有计划拓展它的支持到其它版本 Ubuntu 或 Linux 发行版上。
如果你在你的系统上经常使用便携 ORB 软件,你可以考虑安装 ORB 启动器。它不是必需的,但是推荐安装它以获取更佳的体验。最简短的 ORB 启动器安装流程是打开终端输入以下命令:
```
wget -O - https://www.orbital-apps.com/orb.sh | bash
```
----------------------------------
via: http://itsfoss.com/orb-linux-apps/
作者:[Munif Tanjim][a]
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
与 Linux 一同驾车奔向未来
===========================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open-snow-car-osdc-lead.png?itok=IgYZ6mNY)
当我驾车的时候并没有这么想过,但是我肯定喜欢一个配有这样系统的车子,它可以让我按下几个按钮就能与我的妻子、母亲以及孩子们语音通话。这样的系统也可以让我选择是否从云端、卫星广播、以及更传统的 AM/FM 收音机收听音乐流媒体。我也会得到最新的天气情况,以及它可以引导我的车载 GPS 找到抵达下一个目的地的最快路线。[车载娱乐系统In-vehicle infotainment][1],业界常称作 IVI它已经普及出现在最新的汽车上了。
前段时间,我乘坐飞机跨越了数百英里,然后租了一辆汽车。令人愉快的是,我发现我租赁的汽车上配置了类似我自己车上同样的 IVI 技术。毫不犹豫地,我就通过蓝牙连接把我的联系人上传到了系统当中,然后打电话回家给我的家人,让他们知道我已经安全抵达了,然后我的主机会让他们知道我正在去往他们家的路上。
在最近的[新闻综述][2]中Scott Nesbitt 引述了一篇文章,说福特汽车公司因其开源的[智能设备连接Smart Device Link][3]SDL从竞争对手汽车制造商中得到了足够多的回报这个中间件框架可以用于支持移动电话。 SDL 是 [GENIVI 联盟][4]的一个项目,这个联盟是一个非营利性组织,致力于建设支持开源车载娱乐系统的中间件。据 GENIVI 的执行董事 [Steven Crumb][5] 称,他们的[成员][6]有很多,包括戴姆勒集团、现代、沃尔沃、日产、本田等等 170 个企业。
为了在同行业间保持竞争力,汽车生产企业需要一个中间设备系统,以支持现代消费者所使用的各种人机界面技术。无论您使用的是 Android、iOS 还是其他设备,汽车 OEM 厂商都希望自己的产品能够支持这些。此外,这些的 IVI 系统必须有足够适应能力以支持日益变化的移动技术。OEM 厂商希望提供有价值的服务,并可以在他们的 IVI 之上增加服务,以满足他们客户的各种需求。
### 步入 Linux 和开源软件
除了 GENIVI 在努力之外,[Linux 基金会][7]也赞助支持了[车载 LinuxAutomotive Grade Linux][8]AGL工作组这是一个致力于为汽车应用寻求开源解决方案的软件基金会。虽然 AGL 初期将侧重于 IVI 系统,但是未来他们希望发展到不同的方向,包括[远程信息处理telematics][9]、抬头显示器HUD及其他控制系统等等。 现在 AGL 已经有超过 50 名成员,包括捷豹、丰田、日产,并在其[最近发布的一篇公告][10]中宣称福特、马自达、三菱、和斯巴鲁也加入了。
为了了解更多信息,我们采访了这一新兴领域的两位领导人。具体来说,我们想知道 Linux 和开源软件是如何被使用的,并且它们是如何事实上改变了汽车行业的面貌。首先,我们将与 [Alison Chaiken][11] 谈谈,她是一位任职于 Peloton Technology 的软件工程师,也是一位在车载 Linux 、网络安全和信息透明化方面的专家。她曾任职于 [Alison Chaiken][11] 公司、诺基亚和斯坦福直线性加速器。然后我们和 [Steven Crumb][12] 进行了交谈,他是 GENIVI 执行董事,他之前从事于高性能计算环境(超级计算机和早期的云计算)的开源工作。他说,虽然他再不是一个程序员了,但是他乐于帮助企业解决在使用开源软件时的实际业务问题。
### 采访 Alison Chaiken (by [Deb Nicholson][13])
#### 你是如何开始对汽车软件领域感兴趣的?
我曾在诺基亚从事于手机上的 [MeeGo][14] 产品2009 年该项目被取消了。我想,我下一步怎么办?其时,我的一位同事正在从事于 [MeeGo-IVI][15],这是一个早期的车载 Linux 发行版。 “Linux 在汽车方面将有很大发展,” 我想,所以我就朝着这个方向努力。
#### 你能告诉我们你在这些日子里工作在哪些方面吗?
我目前正在启动一个高级巡航控制系统的项目,它用在大型卡车上,使用实时 Linux 以提升安全性和燃油经济性。我喜欢在这方面的工作,因为没有人会反对提升货运的能力。
#### 近几年有几则汽车被黑的消息。开源代码方案可以帮助解决这个问题吗?
我恰好针对这一话题准备了一次讲演,我会在南加州 Linux 2016 博览会上就 Linux 能否解决汽车上的安全问题做个讲演 [讲演稿在此][16]。值得注意的是GENIVI 和车载 Linux 项目已经公开了他们的代码,这两个项目可以通过 Git 提交补丁。如果你有补丁的话请给上游发送您的补丁许多眼睛都盯着bug 将无从遁形。
#### 执法机构和保险公司可以找到很多汽车上的数据的用途。他们获取这些信息很容易吗?
好问题。IEEE-1609 专用短程通信标准Dedicated Short Range Communication Standard就是为了让汽车的 WiFi 消息可以安全、匿名地传递。不过,如果你从你的车上发推,那可能就有人能够跟踪到你。
#### 开发人员和公民个人可以做些什么,以在汽车技术进步的同时确保公民自由得到保护?
电子前沿基金会Electronic Frontier FoundationEFF在关注汽车问题方面做了出色的工作包括对哪些数据可以存储在汽车“黑盒子”里通过官方渠道发表了看法以及 DMCA 第 1201 条如何适用于汽车。
#### 在未来几年,你觉得在汽车方面会发生哪些令人激动的发展?
可以拯救生命的自适应巡航控制系统和防撞系统将取得长足发展。当它们大量进入汽车里面时,我相信这会使得(因车祸而导致的)死亡人数下降。如果这都不令人激动,我不知道还有什么会更令人激动。此外,像自动化停车辅助功能,将会使汽车更容易驾驶,减少汽车磕碰事故。
#### 我们需要做什么?人们怎样才能参与?
车载 Linux 开发是以开源的方式开发,它运行在每个人都能买得起的廉价硬件上(如树莓派 2 和中等价位的 Renesas Porter 主板)。 GENIVI 汽车 Linux 中间件联盟通过 Git 开源了很多软件。此外,还有很酷的 [OSVehicle 开源硬件][17]汽车平台。
只需要不太多的预算,人们就可以参与到 Linux 软件和开放硬件中。如果您感兴趣,请加入我们在 Freenode 上的 IRC 频道 #automotive 吧。
### 采访 Steven Crumb (by Don Watkins)
#### GENIVI 在 IVI 方面做了哪些巨大贡献?
GENIVI 率先通过使用自由开源软件填补了汽车行业的巨大空白,这包括 Linux、非安全关键性汽车软件如车载娱乐系统IVI等。作为消费者他们很期望在车辆上有和智能手机一样的功能对这种支持 IVI 功能的软件的需求量成倍地增长。不过不断提升的软件数量也增加了建设 IVI 系统的成本,从而延缓了其上市时间。
GENIVI 使用开源软件和社区开发的模式为汽车制造商及其软件提供商节省了大量资金,从而显著地缩短了产品面市时间。我为 GENIVI 而感到激动,我们有幸引导了一场革命,在缓慢进步的汽车行业中,从高度结构化和专有的解决方案转换为以社区为基础的开发方式。我们还没有完全达成目标,但是我们很荣幸在这个可以带来实实在在好处的转型中成为其中的一份子。
#### 你们的主要成员怎样推动了 GENIVI 的发展方向?
GENIVI 有很多成员和非成员致力于我们的工作。在许多开源项目中,任何公司都可以通过技术输出而发挥影响,包括简单地贡献代码、补丁,或花点时间测试。前面说过,宝马、奔驰、现代汽车、捷豹路虎、标致雪铁龙、雷诺/日产和沃尔沃都是 GENIVI 积极的参与者和贡献者,其他的许多 OEM 厂商也在他们的汽车中采用了 IVI 解决方案,广泛地使用了 GENIVI 的软件。
#### 这些贡献的代码使用了什么许可证?
GENIVI 采用了一些许可证,包括从 LGPLv2 到 MPLv2 和 Apache 2.0。我们的一些工具使用的是 Eclipse 许可证。我们有一个[公开许可策略][18],详细地说明了我们的许可证偏好。
#### 个人或团体如何参与其中?社区的参与对于这个项目迈向成功有多重要?
GENIVI 的开发完全是开放的([projects.genivi.org][19]),因此,欢迎任何有兴趣在汽车中使用开源软件的人参加。也就是说,公司可以通过成员的方式[加入该联盟][20]联盟以开放的方式资助其不断进行开发。GENIVI 的成员可以享受各种各样的便利,在过去六年中,已经有多达 140 家公司参与到这个全球性的社区当中。
社区对于 GENIVI 是非常重要的,没有一个活跃的贡献者社区,我们不可能在这些年开发和维护了这么多有价值的软件。我们努力让参与到 GENIVI 更加简单,现在只要加入一个[邮件列表][21]就可以接触到各种软件项目中的人们。我们使用了许多开源项目采用的标准做法,并提供了高品质的工具和基础设施,以帮助开发人员宾至如归而富有成效。
无论你是否熟悉汽车软件都欢迎你加入我们的社区。人们已经对汽车改装了许多年所以对于许多人来说在汽车上修修改改是自然而然的做法。对于汽车来说软件是一个新的领域GENIVI 希望能为对汽车和开源软件有兴趣的人打开这扇门。
-------------------------------
via: https://opensource.com/business/16/5/interview-alison-chaiken-steven-crumb
作者:[Don Watkins][a]
译者:[erlinux](https://github.com/erlinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[1]: https://en.wikipedia.org/wiki/In_car_entertainment
[2]: https://opensource.com/life/16/1/weekly-news-jan-9
[3]: http://projects.genivi.org/smartdevicelink/home
[4]: http://www.genivi.org/
[5]: https://www.linkedin.com/in/stevecrumb
[6]: http://www.genivi.org/genivi-members
[7]: http://www.linuxfoundation.org/
[8]: https://www.automotivelinux.org/
[9]: https://en.wikipedia.org/wiki/Telematics
[10]: https://www.automotivelinux.org/news/announcement/2016/01/ford-mazda-mitsubishi-motors-and-subaru-join-linux-foundation-and
[11]: https://www.linkedin.com/in/alison-chaiken-3ba456b3
[12]: https://www.linkedin.com/in/stevecrumb
[13]: https://opensource.com/users/eximious
[14]: https://en.wikipedia.org/wiki/MeeGo
[15]: http://webinos.org/deliverable-d026-target-platform-requirements-and-ipr/automotive/
[16]: http://she-devel.com/Chaiken_automotive_cybersecurity.pdf
[17]: https://www.osvehicle.com/
[18]: http://projects.genivi.org/how
[19]: http://projects.genivi.org/
[20]: http://genivi.org/join
[21]: http://lists.genivi.org/mailman/listinfo/genivi-projects
如何在 Linux 及 Unix 系统中添加定时任务
======================================
![](https://www.unixmen.com/wp-content/uploads/2016/05/HOW-TO-ADD-CRON-JOBS-IN-LINUX-AND-UNIX-696x334.png)
### 导言
![](http://www.unixmen.com/wp-content/uploads/2016/05/cronjob.gif)
定时任务 (cron job) 被用于安排那些需要被周期性执行的命令。利用它,你可以配置某些命令或者脚本,让它们在某个设定的时间周期性地运行。`cron` 是 Linux 或者类 Unix 系统中最为实用的工具之一。cron 服务(守护进程)在系统后台运行,并且会持续地检查 `/etc/crontab` 文件和 `/etc/cron.*/` 目录。它同样也会检查 `/var/spool/cron/` 目录。
### crontab 命令
`crontab` 是用来安装、卸载或者列出定时任务列表的命令。cron 配置文件则用于驱动 `Vixie Cron` 的 [cron(8)][1] 守护进程。每个用户都可以拥有自己的 crontab 文件,虽然这些文件都位于 `/var/spool/cron/crontabs` 目录中,但并不意味着你可以直接编辑它们。你需要通过 `crontab` 命令来编辑或者配置你自己的定时任务。
### 定时配置文件的类型
配置文件分为以下不同的类型:
- **UNIX 或 Linux 系统的 crontab** : 此类型通常由那些需要 root 或类似权限的系统服务和重要任务使用。第六个字段(见下方的字段介绍)为用户名,用来指定此命令以哪个用户身份来执行。如此一来,系统的 `crontab` 就能够以任意用户的身份来执行操作。
- **用户的 crontab**: 用户可以使用 `crontab` 命令来安装属于他们自己的定时任务。 第六个字段为需要运行的命令, 所有的命令都会以创建该 crontab 任务的用户的身份运行。
**注意**: 这种问答形式的 `Cron` 实现由 Paul Vixie 所编写,并且被包含在许多 [Linux][2] 发行版本和类 Unix 系统(如广受欢迎的第四版 BSD中。它的语法被各种 crond 的实现所[兼容][3]。
那么我该如何安装、创建或者编辑我自己的定时任务呢?
要编辑你的 crontab 文件,需要在 Linux 或 Unix 的 shell 提示符后键入以下命令:
```
$ crontab -e
```
### `crontab` 语法(字段介绍)
语法为:
```
1 2 3 4 5 /path/to/command arg1 arg2
```
或者
```
1 2 3 4 5 /root/ntp_sync.sh
```
其中:
- 第1个字段分钟 (0-59)
- 第2个字段小时 (0-23)
- 第3个字段日期 (0-31)
- 第4个字段月份 (1-12 [12 代表十二月])
- 第5个字段一周当中的某天 (0-7 [7 或 0 代表星期天])
- /path/to/command - 计划执行的脚本或命令的名称
便于记忆的格式:
```
* * * * * 要执行的命令
----------------
| | | | |
| | | | ---- 周当中的某天 (0 - 7) (周日为 0 或 7)
| | | ------ 月份 (1 - 12)
| | -------- 一月当中的某天 (1 - 31)
| ---------- 小时 (0 - 23)
------------ 分钟 (0 - 59)
```
简单的 `crontab` 示例:
```
## 每隔 5 分钟运行一次 backupscript 脚本 ##
*/5 * * * * /root/backupscript.sh
## 每天的凌晨 1 点运行 backupscript 脚本 ##
0 1 * * * /root/backupscript.sh
## 每月第一天的凌晨 3:15 运行 backupscript 脚本 ##
15 3 1 * * /root/backupscript.sh
```
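在把一条记录写进 crontab 之前,可以先用一个极简的 shell 片段粗略检查它的字段数是否完整(这只是个示意,并非官方校验工具):

```shell
# 粗略检查:一条 crontab 记录应至少包含 5 个时间字段和 1 个命令,共 6 个字段
line='*/5 * * * * /root/backupscript.sh'
set -f            # 临时关闭通配符展开,防止 */5 被展开成目录名
set -- $line
set +f
if [ "$#" -ge 6 ]; then
  echo "字段数正确:$#"
else
  echo "字段不足:$#"
fi
```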
### 如何使用操作符
操作符允许你为一个字段指定多个值,这里有三个操作符可供使用:
- **星号 (*)** : 此操作符为字段指定所有可用的值。举个例子,在小时字段中,一个星号等同于每个小时;在月份字段中,一个星号则等同于每月。
- **逗号 (,)** : 这个操作符指定了一个包含多个值的列表,例如:`1,5,10,15,20,25`.
- **横杠 (-)** : 此操作符指定了一个值的范围,例如:`5-15` ,等同于使用逗号操作符键入的 `5,6,7,8,9,...,13,14,15`
- **分隔符 (/)** : 此操作符指定了一个步进值,例如: `0-23/2` 可以用于小时字段来指定某个命令每隔 2 小时执行一次。步进值也可以跟在星号操作符后边,如果你希望命令每 2 小时执行一次,则可以使用 `*/2`
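以小时字段为例,可以用 seq 直观地展示步进值 `*/2` 最终匹配到的小时数:

```shell
# 展示步进值 */2 在小时字段0-23上展开后的取值
seq -s ',' 0 2 23
```

输出为 `0,2,4,6,8,10,12,14,16,18,20,22`,也就是命令会在这些整点被触发。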
### 如何禁用邮件输出
默认情况下,某个命令或者脚本的输出内容(如果有的话)会发送到你的本地邮箱账户中。若想停止收到 `crontab` 发送的邮件,需要添加 `>/dev/null 2>&1` 这段内容到执行的命令的后面,例如:
```
0 3 * * * /root/backup.sh >/dev/null 2>&1
```
如果想将输出内容发送到特定的邮件账户中,比如说 vivek@nixcraft.in 这个邮箱, 则你需要像下面这样定义一个 MAILTO 变量:
```
MAILTO="vivek@nixcraft.in"
0 3 * * * /root/backup.sh >/dev/null 2>&1
```
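下面的小例子演示了 `>/dev/null 2>&1` 的实际效果:标准输出和标准错误都被丢弃,因此 cron 也就不会再为这条任务发送邮件:

```shell
# 标准输出与标准错误都被重定向到 /dev/null于是什么也不会打印出来
{ echo "备份完成"; echo "警告:磁盘空间不足" >&2; } >/dev/null 2>&1
echo "上一条命令组的退出码:$?"
```

可以看到,尽管命令组本身产生了两行输出,它们都没有显示出来;命令的退出码(这里是 0不受重定向影响。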
访问 “[禁用 Crontab 命令的邮件提示](http://www.cyberciti.biz/faq/disable-the-mail-alert-by-crontab-command/)” 查看更多信息。
### 任务:列出你所有的定时任务
键入以下命令:
```
# crontab -l
# crontab -u username -l
```
要删除所有的定时任务,可以使用如下命令:
```
# 删除当前定时任务 #
crontab -r
```
```
## 删除某用户名下的定时任务,此命令需以 root 用户身份执行 ##
crontab -r -u username
```
### 使用特殊字符串来节省时间
你可以使用以下 8 个特殊字符串中的其中一个替代头五个字段,这样不但可以节省你的时间,还可以提高可读性。
特殊字符 |含义
|:-- |:--
@reboot | 在每次启动时运行一次
@yearly | 每年运行一次,等同于 “0 0 1 1 *”.
@annually | (同 @yearly)
@monthly | 每月运行一次, 等同于 “0 0 1 * *”.
@weekly | 每周运行一次, 等同于 “0 0 * * 0”.
@daily | 每天运行一次, 等同于 “0 0 * * *”.
@midnight | (同 @daily)
@hourly | 每小时运行一次, 等同于 “0 * * * *”.
示例:
#### 每小时运行一次 ntpdate 命令 ####
```
@hourly /path/to/ntpdate
```
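上表中特殊字符串与五字段写法的对应关系,可以用一个简单的 shell 函数来表达(仅作示意):

```shell
# 将特殊字符串翻译为等价的五字段写法(对应上文表格,仅作示意)
expand_special() {
  case "$1" in
    @hourly)            echo "0 * * * *" ;;
    @daily|@midnight)   echo "0 0 * * *" ;;
    @weekly)            echo "0 0 * * 0" ;;
    @monthly)           echo "0 0 1 * *" ;;
    @yearly|@annually)  echo "0 0 1 1 *" ;;
    *)                  echo "unknown" ;;
  esac
}
expand_special @daily
```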
### 关于 `/etc/crontab` 文件和 `/etc/cron.d/*` 目录的更多内容
**/etc/crontab** 是系统的 crontab 文件。通常只被 root 用户或守护进程用于配置系统级别的任务。每个独立的用户必须像上面介绍的那样使用 `crontab` 命令来安装和编辑自己的任务。`/var/spool/cron/` 或者 `/var/cron/tabs/` 目录存放了个人用户的 crontab 文件,它应当与用户的家目录一同备份。
### 理解默认的 `/etc/crontab` 文件
典型的 `/etc/crontab` 文件内容是这样的:
```
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
```
首先,环境变量必须被定义。如果 SHELL 行被忽略cron 会使用默认的 sh shell。如果 PATH 变量被忽略,就没有默认的搜索路径,所有的文件都需要使用绝对路径来定位。如果 HOME 变量被忽略cron 会使用调用者(用户)的家目录替代。
另外cron 会读取 `/etc/cron.d/` 目录中的文件。通常情况下,像 sa-update 或者 sysstat 这样的系统守护进程会将它们的定时任务存放在此处。作为 root 用户或者超级用户,你可以使用以下目录来配置你的定时任务。你可以直接将脚本放到这里。`run-parts` 命令会通过 `/etc/crontab` 文件来运行位于某个目录中的脚本或者程序。
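run-parts 的核心行为可以用一个简单的循环来模拟:依次执行某个目录中的所有可执行文件(仅作示意,演示目录和脚本都是临时创建的):

```shell
# 模拟 run-parts依次执行目录中的每个可执行文件
mkdir -p /tmp/cron.demo
printf '#!/bin/sh\necho hello-from-demo\n' > /tmp/cron.demo/01-demo
chmod +x /tmp/cron.demo/01-demo
for f in /tmp/cron.demo/*; do
  [ -x "$f" ] && "$f"
done
```

这也解释了为什么放进 /etc/cron.daily/ 等目录的脚本必须具有可执行权限,否则会被跳过。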
目录 |描述
|:-- |:--
/etc/cron.d/ | 将所有的脚本文件放在此处,并从 /etc/crontab 文件中调用它们。
/etc/cron.daily/ | 运行需要 每天 运行一次的脚本
/etc/cron.hourly/ | 运行需要 每小时 运行一次的脚本
/etc/cron.monthly/ | 运行需要 每月 运行一次的脚本
/etc/cron.weekly/ | 运行需要 每周 运行一次的脚本
### 备份定时任务
```
# crontab -l > /path/to/file
# crontab -u user -l > /path/to/file
```
```
--------------------------------------------------------------------------------
via: https://www.unixmen.com/add-cron-jobs-linux-unix/
作者:[Duy NguyenViet][a]
译者:[mr-ping](https://github.com/mr-ping)
校对:[FSSlc](https://github.com/FSSlc)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.unixmen.com/author/duynv/
[1]: http://www.manpager.com/linux/man8/cron.8.html
[2]: http://www.linuxsecrets.com/
[3]: http://www.linuxsecrets.com/linux-hardware/
基于这个信任分,一个需要登录认证的应用可以验证你确实可以授权登录,从而不会提示需要密码。
![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/05/Abacus-to-Trust-API.jpg)
*Abacus 到 Trust API*
### 需要思考的地方
via: http://www.iwillfolo.com/will-google-replace-passwords-with-a-new-trust-bas
作者:[iWillFolo][a]
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
vlock 一个锁定 Linux 用户虚拟控制台或终端的好方法
=======================================================================
虚拟控制台是 Linux 上非常重要的功能,它们给系统用户提供了 shell 提示符,以保证用户在登录和远程登录一个未安装图形界面的系统时仍能使用。
一个用户可以同时操作多个虚拟控制台会话,只需在虚拟控制台间来回切换即可。
![](http://www.tecmint.com/wp-content/uploads/2016/05/vlock-Lock-User-Terminal-in-Linux.png)
*用 vlock 锁定 Linux 用户控制台或终端*
这篇使用指导旨在教会大家如何使用 vlock 来锁定用户虚拟控制台和终端。
### vlock 是什么?
vlock 是一个用于锁定一个或多个用户虚拟控制台用户会话的工具。在多用户系统中 vlock 扮演着重要的角色,它让用户可以在锁住自己会话的同时不影响其他用户通过其他虚拟控制台操作同一个系统。必要时,还可以锁定所有的控制台,同时禁止在虚拟控制台间切换。
vlock 的主要功能面向控制台会话方面,同时也支持非控制台会话的锁定,但该功能的测试还不完全。
### 在 Linux 上安装 vlock
根据你的 Linux 系统选择 vlock 安装指令:
```
# yum install vlock [On RHEL / CentOS / Fedora]
$ sudo apt-get install vlock [On Ubuntu / Debian / Mint]
```
### 在 Linux 上使用 vlock
vlock 操作选项的常规语法:
```
# vlock option
# vlock option plugin
# vlock option -t <timeout> plugin
```
#### vlock 常用选项及用法:
1、 锁定用户的当前虚拟控制台或终端会话,如下:
```
# vlock --current
```
![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-User-Terminal-Session-in-Linux.png)
*锁定 Linux 用户终端会话*
选项 -c 或 --current用于锁定当前的会话该参数为运行 vlock 时的默认行为。
2、 锁定所有你的虚拟控制台会话,并禁用虚拟控制台间切换,命令如下:
```
# vlock --all
```
![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-All-Linux-Terminal-Sessions.png)
*锁定所有 Linux 终端会话*
选项 -a 或 --all用于锁定所有用户的控制台会话并禁用虚拟控制台间切换。
其他的选项只有在编译 vlock 时编入了相关插件支持和引用后,才能发挥作用:
3、 选项 -n 或 --new调用后会在锁定用户的控制台会话前切换到一个新的虚拟控制台。
```
# vlock --new
```
4、 选项 -s 或 --disable-sysrq在禁用虚拟控制台的同时禁用 SysRq 功能,只有在与 -a 或 --all 同时使用时才起作用。
```
# vlock -sa
```
5、 选项 -t 或 --timeout <time_in_seconds>,用以设定屏幕保护插件的 timeout 值。
```
# vlock --timeout 5
```
你可以使用 `-h``--help``-v``--version` 分别查看帮助消息和版本信息。
我们的介绍就到这了,提示一点,你可以将 vlock 的 `~/.vlockrc` 文件包含到系统启动中,并参考入门手册[添加环境变量][1],特别是 Debian 系的用户。
想要了解更多,或是补充一些这里没有提及的信息,可以直接写在下方评论区。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/vlock-lock-user-virtual-console-terminal-linux/
作者:[Aaron Kili][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/set-path-variable-linux-permanently/
用 Docker 创建 serverless 应用
======================================
当今世界会时不时地出现一波波科技浪潮,将以前的技术拍死在海滩上。针对 serverless 应用的概念我们已经谈了很多,它是指将你的应用程序按功能来部署,这些功能在被用到时才会启动。你不用费心去管理服务器和程序规模,因为它们会在需要的时候在一个集群中启动并运行。
但是 serverless 并不意味着没有 Docker 什么事儿,事实上 Docker 就是 serverless 的。你可以使用 Docker 来容器化这些功能,然后在 Swarm 中按需求来运行它们。serverless 是一项构建分布式应用的技术,而 Docker 是它们完美的构建平台。
### 从 servers 到 serverless
那如何才能写一个 serverless 应用呢?来看一下我们的例子,[5个服务组成的投票系统][1]
![](https://blog.docker.com/wp-content/uploads/Picture1.png)
投票系统由下面 5 个服务组成:
- 两个 web 前端
- 一个后台处理投票的进程
- 一个计票的消息队列
- 一个数据库
后台处理投票的进程很容易转换成 serverless 构架,我们可以使用以下代码来实现:
```
import dockerrun
client = dockerrun.from_env()
client.run("bfirsh/serverless-record-vote-task", [voter_id, vote], detach=True)
```
这个投票处理进程和消息队列可以用运行在 Swarm 上的 Docker 容器来代替,并实现按需自动部署。
我们也可以用容器替换 web 前端,使用一个轻量级 HTTP 服务器来触发容器响应一个 HTTP 请求。Docker 容器代替长期运行的 HTTP 服务器来挑起响应请求的重担,这些容器可以自动扩容来支撑更大访问量。
新的架构就像这样:
![](https://blog.docker.com/wp-content/uploads/Picture2.png)
红色框内是持续运行的服务,绿色框内是按需启动的容器。这个架构里需要你来管理的长期运行服务更少,并且可以自动扩容(最大容量由你的 Swarm 决定)。
### 我们可以做点什么?
你可以在你的应用中使用 3 种技术:
1. 在 Docker 容器中按需运行代码。
2. 使用 Swarm 来部署集群。
3. 通过使用 Docker API 套接字在容器中运行容器。
结合这3种技术你可以有很多方法搭建你的应用架构。用这种方法来部署后台环境真是非常有效而在另一些场景也可以这么玩比如说
- 由于存在延时,使用容器实现面向用户的 HTTP 请求可能不是很合适,但你可以写一个负载均衡器,使用 Swarm 来对自己的 web 前端进行自动扩容。
- 实现一个 MongoDB 容器,可以自检 Swarm 并且启动正确的分片和副本LCTT 译注:分片技术为大规模并行检索提供支持,副本技术则是为数据提供冗余)。
### 下一步怎么做
我们提供了这些前卫的工具和概念来构建应用,并没有深入发掘它们的功能。我们的架构里还是存在长期运行的服务,将来我们需要使用 Swarm 来把所有服务都用按需扩容的方式实现。
希望本文能在你搭建架构时给你一些启发,但我们还是需要你的帮助。我们提供了所有的基本工具,但它们还不是很完善,我们需要更多更好的工具、库、应用案例、文档以及其他资料。
[我们在这里发布了工具、库和文档][3]。如果想了解更多,请贡献给我们一些你知道的资源,以便我们能够完善这篇文章。
玩得愉快。
### 更多关于 Docker 的资料
- New to Docker? Try our 10 min [online tutorial][4]
- Share images, automate builds, and more with [a free Docker Hub account][5]
- Read the Docker [1.12 Release Notes][6]
- Subscribe to [Docker Weekly][7]
- Sign up for upcoming [Docker Online Meetups][8]
- Attend upcoming [Docker Meetups][9]
- Watch [DockerCon EU 2015 videos][10]
- Start [contributing to Docker][11]
--------------------------------------------------------------------------------
via: https://blog.docker.com/2016/06/building-serverless-apps-with-docker/
作者:[Ben Firshman][a]
译者:[bazz2](https://github.com/bazz2)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://blog.docker.com/author/bfirshman/
[1]: https://github.com/docker/example-voting-app
[3]: https://github.com/bfirsh/serverless-docker
[4]: https://docs.docker.com/engine/understanding-docker/
[5]: https://hub.docker.com/
[6]: https://docs.docker.com/release-notes/
[7]: https://www.docker.com/subscribe_newsletter/
[8]: http://www.meetup.com/Docker-Online-Meetup/
[9]: https://www.docker.com/community/meetup-groups
[10]: https://www.youtube.com/playlist?list=PLkA60AVN3hh87OoVra6MHf2L4UR9xwJkv
[11]: https://docs.docker.com/contributing/contributing/
这里放新闻类文章,要求时效性
这里放分享类文章,包括各种软件的简单介绍、有用的书籍和网站等。
Torvalds 2.0: Patricia Torvalds on computing, college, feminism, and increasing diversity in tech
================================================================================
Image by : Photo by Becky Svartström. Modified by Opensource.com. [CC BY-SA 4.0][1]
Mark Shuttleworth The Man Behind Ubuntu Operating System
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Mark-Shuttleworth-652x445.jpg)
**Mark Richard Shuttleworth** is the founder of **Ubuntu**, or the man behind Debian as they call him. He was born in 1973 in Welkom, South Africa. He's an entrepreneur and also a space tourist who later became the **first citizen of an independent African country to travel to space**.
Mark also founded **Thawte** in 1996, the Internet commerce security company, while he was studying finance and IT at University of Cape Town.
In 2000, Mark founded the HBD, as an investment company, and also he created the Shuttleworth Foundation in order to fund the innovative leaders in the society with combination of fellowships and some investments.
> “The mobile world is crucial to the future of the PC. This month, for example, it became clear that the traditional PC is shrinking in favor of tablets. So if we want to be relevant on the PC, we have to figure out how to be relevant in the mobile world first. Mobile is also interesting because there's no pirated Windows market. So if you win a device to your OS, it stays on your OS. In the PC world, we are constantly competing with “free Windows”, which presents somewhat unique challenges. So our focus now is to establish a great story around Ubuntu and mobile form factors, the tablet and the phone, on which we can build deeper relationships with everyday consumers.”
>
> — Mark Shuttleworth
In 2002, he flew to International Space Station as member of their crew of Soyuz mission TM-34, after 1 year of training in the Star City, Russia. And after running campaign to promote the science, code, and mathematics to the aspiring astronauts and the other ambitious types at schools in SA, Mark founded the **Canonical Ltd**. and in 2013, he provided leadership for Ubuntu operating system for software development purposes.
Today, Shuttleworth holds dual citizenship of the United Kingdom and South Africa and currently lives in the lovely Mallards botanical garden on the Isle of Man, with 18 precocious ducks, his equally lovely girlfriend Claire, 2 black bitches and the occasional itinerant sheep.
> “Computer is not a device anymore. It is an extension of your mind and your gateway to other people.”
>
> — Mark Shuttleworth
### Mark Shuttleworth's Early life ###
As we mentioned above, Mark was born in Welkom, South Africas Orange Free State as son of surgeon and nursery-school teacher, Mark attended the school at Western Province Preparatory School where he became eventually the Head Boy in 1986, followed by 1 term at Rondebosch Boys High School, and later at Bishops/Diocesan College where he was again Head Boy in 1991.
Mark obtained the Bachelor of Business Science degree in the Finance and Information Systems at University of Cape Town, where he lived there in Smuts Hall. He became, as a student, involved in installations of the 1st residential Internet connections at his university.
> “There are many examples of companies and countries that have improved their competitiveness and efficiency by adopting open source strategies. The creation of skills through all levels is of fundamental importance to both companies and countries.”
>
> — Mark Shuttleworth
### Mark Shuttleworth's Career ###
Mark founded Thawte in 1995, which was specialized in the digital certificates and Internet security, then he sold it to VeriSign in 1999, earning about $575 million at the time.
In 2000, Mark formed the HBD Venture Capital (Here be Dragons), the business incubator and venture capital provider. In 2004, he formed the Canonical Ltd., for promotion and commercial support of the free software development projects, especially Ubuntu operating system. In 2009, Mark stepped down as CEO of Canonical, Ltd.
> “In the early days of the DCC I preferred to let the proponents do their thing and then see how it all worked out in the end. Now we are pretty close to the end.”
>
> — Mark Shuttleworth
### Linux and FOSS with Mark Shuttleworth ###
In the late 1990s, Mark participated as one of developers of Debian operating system.
In 2001, Mark formed the Shuttleworth Foundation, It is non-profit organization dedicated to the social innovation that also funds free, educational, and open source software projects in South Africa, including Freedom Toaster.
In 2004, Mark returned to free software world by funding software development of Ubuntu, as it was Linux distribution based on Debian, throughout his company Canonical Ltd.
In 2005, Mark founded Ubuntu Foundation and made initial investment of 10 million dollars. In Ubuntu project, Mark is often referred to with tongue-in-cheek title “**SABDFL (Self-Appointed Benevolent Dictator for Life)**”. To come up with list of names of people in order to hire for the entire project, Mark took about six months of Debian mailing list archives with him during his travelling to Antarctica aboard icebreaker Kapitan Khlebnikov in 2004. In 2005, Mark purchased 65% stake of Impi Linux.
> “I urge telecommunications regulators to develop a commercial strategy for delivering effective access to the continent.”
>
> — Mark Shuttleworth
In 2006, it was announced that Shuttleworth became **first patron of KDE**, which was highest level of sponsorship available at the time. This patronship ended in 2012, with financial support together for Kubuntu, which was Ubuntu variant with KDE as a main desktop.
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/shuttleworth-kde.jpg)
In 2009, Shuttleworth announced that, he would step down as the CEO of Canonical in order to focus more energy on partnership, product design, and the customers. Jane Silber, took on this job as the CEO at Canonical after he was the COO at Canonical since 2004.
In 2010, Mark received the honorary degree from Open University for that work.
In 2012, Mark and Kenneth Rogoff took part together in debate opposite Peter Thiel and Garry Kasparov at Oxford Union, this debate was entitled “**The Innovation Enigma**”.
In 2013, Mark and Ubuntu were awarded **Austrian anti-privacy Big Brother Award** for sending the local Ubuntu Unity Dash searches to the Canonical servers by default. One year earlier in 2012, Mark had defended the anonymization method that was used.
> “All the major PC companies now ship PCs with Ubuntu pre-installed. So we have a very solid set of working engagements in the industry. But those PC companies are nervous to promote something new to PC buyers. If we can get PC buyers familiar with Ubuntu as a phone and tablet experience, then they may be more willing to buy it on the PC too. Because no OS ever succeeded by emulating another OS. Android is great, but if we want to succeed we need to bring something new and better to market. We are all at risk of stagnating if we don't pursue the future, vigorously. But if you pursue the future, you have to accept that not everybody will agree with your vision.”
>
> — Mark Shuttleworth
### Mark Shuttleworth's Spaceflight ###
Mark gained worldwide fame in 2002 as the second self-funded space tourist and the first South African to travel to space. Flying with Space Adventures, Mark launched aboard the Russian Soyuz TM-34 mission as a spaceflight participant, paying approximately $20 million for the voyage. Two days later, the Soyuz spacecraft arrived at the International Space Station, where Mark spent eight days participating in experiments related to AIDS and genome research. Later in 2002, Mark returned to Earth aboard Soyuz TM-33. To take part in the flight, Mark had to undergo a year of preparation and training, including seven months spent in Star City, Russia.
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Mark-Shuttleworth1.jpg)
While in space, Mark had a radio conversation with Nelson Mandela and a 14-year-old South African girl, Michelle Foster, who asked Mark to marry her. Mark politely dodged the question, saying he was much honored, before cunningly changing the subject. The terminally ill Foster was given the opportunity to speak with Mark and Nelson Mandela by the Reach for a Dream foundation.
Upon returning, Mark traveled widely and spoke about the spaceflight to schoolchildren around the world.
> “The raw numbers suggest that Ubuntu continues to grow in terms of actual users. And our partnerships with Dell, HP, and Lenovo on the hardware front, and gaming companies like EA and Valve joining up on the software front, make me feel like we continue to lead where it matters.”
>
> — Mark Shuttleworth
### Mark Shuttleworth's Transport ###
Mark owns a private jet, a Bombardier Global Express, often referred to as Canonical One, though it is in fact owned through his HBD Venture Capital company. The dragon depicted on the side of the plane is Norman, the HBD Venture Capital mascot.
### The Legal Clash with South African Reserve Bank ###
When Mark moved R2.5 billion in capital from South Africa to the Isle of Man, the South African Reserve Bank imposed a R250 million levy to release his assets. Mark appealed, and after a lengthy legal battle the Reserve Bank was ordered to repay the R250 million, plus interest. Mark announced that he would donate the entire amount to a trust to be established to help others take cases to the Constitutional Court.
> “The exit charge was not inconsistent with the Constitution. The dominant purpose of the exit charge was not to raise revenue but rather to regulate conduct by discouraging the export of capital to protect the domestic economy.”
>
> — Judge Dikgang Moseneke
In 2015, the Constitutional Court of South Africa reversed and set aside the findings of the lower courts, ruling that the dominant purpose of the exit charge was to regulate conduct rather than to raise revenue.
### Mark Shuttleworth's Likes ###
Cesária Évora, mp3s, Spring, Chelsea, finally seeing something obvious for the first time, coming home, Sinatra, daydreaming, sundowners, flirting, d'Urberville, string theory, Linux, particle physics, Python, reincarnation, MiG-29s, snow, travel, Mozilla, lime marmalade, body shots, the African bush, leopards, Rajasthan, Russian saunas, snowboarding, weightlessness, Iain M. Banks, broadband, Alastair Reynolds, fancy dress, skinny-dipping, flashes of insight, post-adrenaline euphoria, the inexplicable, convertibles, Clifton, country roads, the International Space Station, machine learning, artificial intelligence, Wikipedia, Slashdot, kitesurfing, and Manx lanes.
### Shuttleworth's Dislikes ###
Admin, salary negotiations, legalese, and public speaking.
--------------------------------------------------------------------------------
via: http://www.unixmen.com/mark-shuttleworth-man-behind-ubuntu-operating-system/
作者:[M.el Khamlichi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/pirat9/

View File

@ -1,4 +1,3 @@
sonofelice translating
How bad a boss is Linus Torvalds?
================================================================================
![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg)

View File

@ -44,3 +44,4 @@ via: https://www.linux.com/news/linus-torvalds-talks-iot-smart-devices-security-
[a]: https://www.linux.com/users/ericstephenbrown
[0]: http://events.linuxfoundation.org/events/embedded-linux-conference
[1]: http://go.linuxfoundation.org/elc-openiot-summit-2016-videos?utm_source=lf&utm_medium=blog&utm_campaign=linuxcom

View File

@ -1,63 +0,0 @@
65% of companies are contributing to open source projects
==========================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_openseries.png?itok=s7lXChId)
This year marks the 10th annual Future of Open Source Survey to examine trends in open source, hosted by Black Duck and North Bridge. The big takeaway from the survey this year centers around the mainstream acceptance of open source today and how much has changed over the last decade.
The [2016 Future of Open Source Survey][1] analyzed responses from nearly 3,400 professionals. Developers made their voices heard in the survey this year, comprising roughly 70% of the participants. The group that showed exponential growth were security professionals, whose participation increased by over 450%. Their participation shows the increasing interest in ensuring that the open source community pays attention to security issues in open source software and securing new technologies as they emerge.
Black Duck's [Open Source Rookies][2] of the Year awards identify some of these emerging technologies, like Docker and Kontena in containers. Containers themselves have seen huge growth this year: 76% of respondents say their company has some plans to use containers. And an amazing 59% of respondents are already using containers in a variety of deployments, from development and testing to internal and external production environments. The developer community has embraced containers as a way to get their code out quickly and easily.
It's not surprising that the survey shows a minuscule number of organizations having no developers contributing to open source software. When large corporations like Microsoft and Apple open source some of their solutions, developers gain new opportunities to participate in open source. I certainly hope this trend will continue, with more software developers contributing to open source projects at work and outside of work.
### Highlights from the 2016 survey
#### Business value
* Open source is an essential element in development strategy with more than 65% of respondents relying on open source to speed development.
* More than 55% leverage open source within their production environments.
#### Engine for innovation
* Respondents reported use of open source to drive innovation through faster, more agile development; accelerated time to market and vastly superior interoperability.
* Additional innovation is afforded by open source's quality of solutions; competitive features and technical capabilities; and ability to customize.
#### Proliferation of open source business models and investment
* More diverse business models are emerging that promise to deliver more value to open source companies than ever before. They are not as dependent on SaaS and services/support.
* Open source private financing has increased almost 4x in five years.
#### Security and management
The development of best-in-class open source security and management practices has not kept pace with growth in adoption. Despite a proliferation of expensive, high-profile open source breaches in recent years, the survey revealed that:
* 50% of companies have no formal policy for selecting and approving open source code.
* 47% of companies don't have formal processes in place to track open source code, limiting their visibility into their open source and therefore their ability to control it.
* More than one-third of companies have no process for identifying, tracking or remediating known open source vulnerabilities.
#### Open source participation on the rise
The survey revealed an active corporate open source community that spurs innovation, delivers exponential value and shares camaraderie:
* 67% of respondents report actively encouraging developers to engage in and contribute to open source projects.
* 65% of companies are contributing to open source projects.
* One in three companies have a fulltime resource dedicated to open source projects.
* 59% of respondents participate in open source projects to gain competitive edge.
Black Duck and North Bridge learned a great deal this year about security, policy, business models, and more from the survey, and we're excited to share these findings. Thank you to our many collaborators and all the respondents for taking the time to take the survey. It's been a great ten years, and I am happy that we can safely say that the future of open source is full of possibilities.
Learn more, see the [full results][3].
--------------------------------------------------------------------------------
via: https://opensource.com/business/16/5/2016-future-open-source-survey
作者:[Haidee LeClair][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
[a]: https://opensource.com/users/blackduck2016
[1]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results
[2]: https://info.blackducksoftware.com/OpenSourceRookies2015.html
[3]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results%C2%A0

View File

@ -1,3 +1,4 @@
KevinSJ Translating
Linux will be the major operating system of 21st century cars
===============================================================

View File

@ -1,104 +0,0 @@
Driving cars into the future with Linux
===========================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open-snow-car-osdc-lead.png?itok=IgYZ6mNY)
I don't think much about it while I'm driving, but I sure do love that my car is equipped with a system that lets me use a few buttons and my voice to call my wife, mom, and children. That same system allows me to choose whether I listen to music streaming from the cloud, satellite radio, or the more traditional AM/FM radio. I also get weather updates and can direct my in-vehicle GPS to find the fastest route to my next destination. [In-vehicle infotainment][1], or IVI as it's known in the industry, has become ubiquitous in today's newest automobiles.
A while ago, I had to travel hundreds of miles by plane and then rent a car. Happily, I discovered that my rental vehicle was equipped with IVI technology similar to my own car. In no time, I was connected via Bluetooth, had uploaded my contacts into the system, and was calling home to let my family know I arrived safely and my hosts to let them know I was en route to their home.
In a recent [news roundup][2], Scott Nesbitt cited an article that said Ford Motor Company is getting substantial backing from a rival automaker for its open source [Smart Device Link][3] (SDL) middleware framework, which supports mobile phones. SDL is a project of the [GENIVI Alliance][4], a nonprofit committed to building middleware to support open source in-vehicle infotainment systems. According to [Steven Crumb][5], executive director of GENIVI, their [membership][6] is broad and includes Daimler Group, Hyundai, Volvo, Nissan, Honda, and 170 others.
In order to remain competitive in the industry, automotive companies need a middleware system that can support the various human machine interface technologies available to consumers today. Whether you own an Android, iOS, or other device, automotive OEMs want their units to be able to support these systems. Furthermore, these IVI systems must be adaptable enough to support the ever decreasing half-life of mobile technology. OEMs want to provide value and add services in their IVI stacks that will support a variety of options for their customers. Enter Linux and open source software.
In addition to GENIVI's efforts, the [Linux Foundation][7] sponsors the [Automotive Grade Linux][8] (AGL) workgroup, a software foundation dedicated to finding open source solutions for automotive applications. Although AGL will initially focus on IVI systems, they envision branching out to include [telematics][9], heads up displays, and other control systems. AGL has over 50 members at this time, including Jaguar, Toyota, and Nissan, and in a [recent press release][10] announced that Ford, Mazda, Mitsubishi, and Subaru have joined.
To find out more, we interviewed two leaders in this emerging field. Specifically, we wanted to know how Linux and open source software are being used and if they are in fact changing the face of the automotive industry. First, we talk to [Alison Chaiken][11], a software engineer at Peloton Technology and an expert on automotive Linux, cybersecurity, and transparency. She previously worked for Mentor Graphics, Nokia, and the Stanford Linear Accelerator. Then, we chat with [Steven Crumb][12], executive director of GENIVI, who got started in open source in high-performance computing environments (supercomputers and early cloud computing). He says that though he's not a coder anymore, he loves to help organizations solve real business problems with open source software.
### Interview with Alison Chaiken (by [Deb Nicholson][13])
#### How did you get interested in the automotive software space?
I was working on [MeeGo][14] in phones at Nokia in 2009 when the project was cancelled. I thought, what's next? A colleague was working on [MeeGo-IVI][15], an early automotive Linux distribution. "Linux is going to be big in cars," I thought, so I headed in that direction.
#### Can you tell us what aspects you're working on these days?
I'm currently working for a startup on an advanced cruise control system that uses real-time Linux to increase the safety and fuel economy of big-rig trucks. I love working in this area, as no one would disagree that trucking can be improved.
#### There have been a few stories about hacked cars in recent years. Can open source solutions help address this issue?
I presented a talk on precisely this topic, on how Linux can (and cannot) contribute to security solutions in automotive at Southern California Linux Expo 2016 ([Slides][16]). Notably, GENIVI and Automotive Grade Linux have published their code and both projects take patches via Git. Please send your fixes upstream! Many eyes make all bugs shallow.
#### Law enforcement agencies and insurance companies could find plenty of uses for data about drivers. How easy will it be for them to obtain this information?
Good question. The Dedicated Short Range Communication Standard (IEEE-1609) takes great pains to keep drivers participating in Wi-Fi safety messaging anonymous. Still, if you're posting to Twitter from your car, someone will be able to track you.
#### What can developers and private citizens do to make sure civil liberties are protected as automotive technology evolves?
The Electronic Frontier Foundation (EFF) has done an excellent job of keeping on top of automotive issues, having commented through official channels on what data may be stored in automotive "black boxes" and on how DMCA's Provision 1201 applies to cars.
#### What are some of the exciting things you see coming for drivers in the next few years?
Adaptive cruise control and collision avoidance systems are enough of an advance to save lives. As they roll out through vehicle fleets, I truly believe that fatalities will decline. If that's not exciting, I don't know what is. Furthermore, capabilities like automated parking assist will make cars easier to drive and reduce fender-benders.
#### What needs to be built and how can people get involved?
Automotive Grade Linux is developed in the open and runs on cheap hardware (e.g. Raspberry Pi 2 and moderately priced Renesas Porter board) that anyone can buy. GENIVI automotive Linux middleware consortium has lots of software publicly available via Git. Furthermore, there is the ultra cool [OSVehicle open hardware][17] automotive platform.
There are many ways for Linux software and open hardware folks with moderate budgets to get involved. Join us at #automotive on Freenode IRC if you have questions.
### Interview with Steven Crumb (by Don Watkins)
#### What's so huge about GENIVI's approach to IVI?
GENIVI filled a huge gap in the automotive industry by pioneering the use of free and open source software, including Linux, for non-safety-critical automotive software like in-vehicle infotainment (IVI) systems. As consumers came to expect the same functionality in their vehicles as on their smartphones, the amount of software required to support IVI functions grew exponentially. The increased amount of software has also increased the costs of building the IVI systems and thus slowed time to market.
GENIVI's use of open source software and a community development model has saved automakers and their software suppliers significant amounts of money while significantly reducing the time to market. I'm excited about GENIVI because we've been fortunate to lead a revolution of sorts in the automotive industry by slowly evolving organizations from a highly structured and proprietary methodology to a community-based approach. We're not done yet, but it's been a privilege to take part in a transformation that is yielding real benefits.
#### How do your major members drive the direction of GENIVI?
GENIVI has a lot of members and non-members contributing to our work. As with many open source projects, any company can influence the technical output by simply contributing code, patches, and time to test. With that said, BMW, Mercedes-Benz, Hyundai Motor, Jaguar Land Rover, PSA, Renault/Nissan, and Volvo are all active adopters of and contributors to GENIVI—and many other OEMs have IVI solutions in their cars that extensively use GENIVI's software.
#### What licenses cover the contributed code?
GENIVI employs a number of licenses ranging from (L)GPLv2 to MPLv2 to Apache 2.0. Some of our tools use the Eclipse license. We have a [public licensing policy][18] that details our licensing preferences.
#### How does a person or group get involved? How important are community contributions to the ongoing success of the project?
GENIVI does its development completely in the open ([projects.genivi.org][19]) and thus, anyone interested in using open software in automotive is welcome to participate. That said, the alliance can fund its continued development in the open through companies [joining GENIVI][20] as members. GENIVI members enjoy a wide variety of benefits, not the least of which is participation in the global community of 140 companies that has been developed over the last six years.
Community is hugely important to GENIVI, and we could not have produced and maintained the valuable software we developed over the years without an active community of contributors. We've worked hard to make contributing to GENIVI as simple as joining an [email list][21] and connecting to the people in the various software projects. We use standard practices employed by many open source projects and provide high-quality tools and infrastructure to help developers feel at home and be productive.
Regardless of someone's familiarity with the automotive software, they are welcome to join our community. People have modified cars for years, so for many people there is a natural draw to anything automotive. Software is the new domain for cars, and GENIVI wants to be the open door for anyone interested in working with automotive, open source software.
-------------------------------
via: https://opensource.com/business/16/5/interview-alison-chaiken-steven-crumb
作者:[Don Watkins][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[1]: https://en.wikipedia.org/wiki/In_car_entertainment
[2]: https://opensource.com/life/16/1/weekly-news-jan-9
[3]: http://projects.genivi.org/smartdevicelink/home
[4]: http://www.genivi.org/
[5]: https://www.linkedin.com/in/stevecrumb
[6]: http://www.genivi.org/genivi-members
[7]: http://www.linuxfoundation.org/
[8]: https://www.automotivelinux.org/
[9]: https://en.wikipedia.org/wiki/Telematics
[10]: https://www.automotivelinux.org/news/announcement/2016/01/ford-mazda-mitsubishi-motors-and-subaru-join-linux-foundation-and
[11]: https://www.linkedin.com/in/alison-chaiken-3ba456b3
[12]: https://www.linkedin.com/in/stevecrumb
[13]: https://opensource.com/users/eximious
[14]: https://en.wikipedia.org/wiki/MeeGo
[15]: http://webinos.org/deliverable-d026-target-platform-requirements-and-ipr/automotive/
[16]: http://she-devel.com/Chaiken_automotive_cybersecurity.pdf
[17]: https://www.osvehicle.com/
[18]: http://projects.genivi.org/how
[19]: http://projects.genivi.org/
[20]: http://genivi.org/join
[21]: http://lists.genivi.org/mailman/listinfo/genivi-projects

View File

@ -0,0 +1,276 @@
(翻译中 by runningwater)
Explanation of “Everything is a File” and Types of Files in Linux
====================================================================
![](http://www.tecmint.com/wp-content/uploads/2016/05/Everything-is-a-File-in-Linux.png)
>Everything is a File and Types of Files in Linux
That is in fact true, although it is only a generalization: in Unix and its derivatives such as Linux, everything is considered a file. If something is not a file, then it must be a process running on the system.
To understand this, consider that the space under your root (/) directory is always consumed by different types of Linux files. When you create a file or transfer one to your system, it occupies some space on the physical disk and is considered to be of a specific format (file type).
The Linux system does not fundamentally differentiate between files and directories; a directory simply does one important job, which is to store other files in groups within a hierarchy for easy location. All your hardware components are represented as files, and the system communicates with them using these files.
This idea describes a great property of Linux: input/output resources such as your documents, directories (folders in Mac OS X and Windows), keyboard, monitor, hard drives, removable media, printers, modems, virtual terminals, and even inter-process and network communication are all streams of bytes defined by file system space.
A notable advantage of everything being a file is that the same set of Linux tools, utilities and APIs can be used on the above input/output resources.
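For example, the same byte-counting tool reads a document and a device file through an identical interface. A minimal illustration (the file name `note.txt` is just for this sketch):

```
printf 'abc' > note.txt
wc -c < note.txt    # prints: 3  (bytes read from a regular file)
wc -c < /dev/null   # prints: 0  (bytes read from a device file, same interface)
rm note.txt
```

`wc` neither knows nor cares whether its standard input is backed by a disk file or a device driver; the kernel presents both as streams of bytes.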
Although everything in Linux is a file, there are certain special files that are more than just a file, for example [sockets and named pipes][1].
### What are the different types of files in Linux?
In Linux there are basically three types of files:
- Ordinary/Regular files
- Special files
- Directories
#### Ordinary/Regular Files
These are files that contain text, data, or program instructions. They are the most common type of file you can expect to find on a Linux system, and they include:
- Readable files
- Binary files
- Image files
- Compressed files and so on.
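The `file` utility inspects a file's contents and reports which kind of ordinary file it is. A quick sketch (the exact wording of `file`'s output varies between versions):

```
echo "hello world" > sample.txt
file sample.txt      # reports something like: sample.txt: ASCII text
gzip sample.txt      # replaces sample.txt with sample.txt.gz
file sample.txt.gz   # reports something like: gzip compressed data ...
rm -f sample.txt.gz
```

This is often more reliable than trusting a file's extension, since `file` looks at magic bytes rather than the name.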
#### Special Files
Special files include the following:
Block files: These are device files that provide buffered access to system hardware components. They provide a method of communication with device drivers through the file system.
One important aspect of block files is that they can transfer a large block of data at a time.
Listing block files in a directory:
```
# ls -l /dev | grep "^b"
```
Sample Output
```
brw-rw---- 1 root disk 7, 0 May 18 10:26 loop0
brw-rw---- 1 root disk 7, 1 May 18 10:26 loop1
brw-rw---- 1 root disk 7, 2 May 18 10:26 loop2
brw-rw---- 1 root disk 7, 3 May 18 10:26 loop3
brw-rw---- 1 root disk 7, 4 May 18 10:26 loop4
brw-rw---- 1 root disk 7, 5 May 18 10:26 loop5
brw-rw---- 1 root disk 7, 6 May 18 10:26 loop6
brw-rw---- 1 root disk 7, 7 May 18 10:26 loop7
brw-rw---- 1 root disk 1, 0 May 18 10:26 ram0
brw-rw---- 1 root disk 1, 1 May 18 10:26 ram1
brw-rw---- 1 root disk 1, 10 May 18 10:26 ram10
brw-rw---- 1 root disk 1, 11 May 18 10:26 ram11
brw-rw---- 1 root disk 1, 12 May 18 10:26 ram12
brw-rw---- 1 root disk 1, 13 May 18 10:26 ram13
brw-rw---- 1 root disk 1, 14 May 18 10:26 ram14
brw-rw---- 1 root disk 1, 15 May 18 10:26 ram15
brw-rw---- 1 root disk 1, 2 May 18 10:26 ram2
brw-rw---- 1 root disk 1, 3 May 18 10:26 ram3
brw-rw---- 1 root disk 1, 4 May 18 10:26 ram4
brw-rw---- 1 root disk 1, 5 May 18 10:26 ram5
...
```
Character files: These are also device files, but they provide unbuffered serial access to system hardware components. They communicate with devices by transferring data one character at a time.
Listing character files in a directory:
```
# ls -l /dev | grep "^c"
```
Sample Output
```
crw------- 1 root root 10, 235 May 18 15:54 autofs
crw------- 1 root root 10, 234 May 18 15:54 btrfs-control
crw------- 1 root root 5, 1 May 18 10:26 console
crw------- 1 root root 10, 60 May 18 10:26 cpu_dma_latency
crw------- 1 root root 10, 203 May 18 15:54 cuse
crw------- 1 root root 10, 61 May 18 10:26 ecryptfs
crw-rw---- 1 root video 29, 0 May 18 10:26 fb0
crw-rw-rw- 1 root root 1, 7 May 18 10:26 full
crw-rw-rw- 1 root root 10, 229 May 18 10:26 fuse
crw------- 1 root root 251, 0 May 18 10:27 hidraw0
crw------- 1 root root 10, 228 May 18 10:26 hpet
crw-r--r-- 1 root root 1, 11 May 18 10:26 kmsg
crw-rw----+ 1 root root 10, 232 May 18 10:26 kvm
crw------- 1 root root 10, 237 May 18 10:26 loop-control
crw------- 1 root root 10, 227 May 18 10:26 mcelog
crw------- 1 root root 249, 0 May 18 10:27 media0
crw------- 1 root root 250, 0 May 18 10:26 mei0
crw-r----- 1 root kmem 1, 1 May 18 10:26 mem
crw------- 1 root root 10, 57 May 18 10:26 memory_bandwidth
crw------- 1 root root 10, 59 May 18 10:26 network_latency
crw------- 1 root root 10, 58 May 18 10:26 network_throughput
crw-rw-rw- 1 root root 1, 3 May 18 10:26 null
crw-r----- 1 root kmem 1, 4 May 18 10:26 port
crw------- 1 root root 108, 0 May 18 10:26 ppp
crw------- 1 root root 10, 1 May 18 10:26 psaux
crw-rw-rw- 1 root tty 5, 2 May 18 17:40 ptmx
crw-rw-rw- 1 root root 1, 8 May 18 10:26 random
```
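For instance, `/dev/null` and `/dev/zero` are character devices found on virtually every Linux system; the first column of `ls -l` marks them with a `c`:

```
ls -l /dev/null | cut -c1          # prints: c  (character device)
echo "discarded" > /dev/null       # writes to /dev/null simply vanish
head -c 8 /dev/zero | od -An -tx1  # /dev/zero yields a stream of zero bytes
```

Both devices are read and written with the same tools you would use on ordinary files.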
Symbolic link files: A symbolic link is a reference to another file on the system. Symbolic link files therefore point to other files, which can be either directories or regular files.
Listing symbolic links in a directory:
```
# ls -l /dev/ | grep "^l"
```
Sample Output
```
lrwxrwxrwx 1 root root 3 May 18 10:26 cdrom -> sr0
lrwxrwxrwx 1 root root 11 May 18 15:54 core -> /proc/kcore
lrwxrwxrwx 1 root root 13 May 18 15:54 fd -> /proc/self/fd
lrwxrwxrwx 1 root root 4 May 18 10:26 rtc -> rtc0
lrwxrwxrwx 1 root root 8 May 18 10:26 shm -> /run/shm
lrwxrwxrwx 1 root root 15 May 18 15:54 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 May 18 15:54 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 May 18 15:54 stdout -> /proc/self/fd/1
```
You can make symbolic links using the `ln` utility in Linux as in the example below.
```
# touch /tmp/file1.txt
# ln -s /tmp/file1.txt /home/tecmint/file1.txt [create symbolic link]
# ls -l /home/tecmint/ | grep "^l" [list symbolic links]
```
In the above example, I created a file called `file1.txt` in the `/tmp` directory, then created the symbolic link `/home/tecmint/file1.txt` pointing to `/tmp/file1.txt`.
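You can confirm where a link points with `readlink`, and spot links by the leading `l` in `ls -l` output. A small sketch (the names `target.txt` and `mylink` are just for illustration):

```
touch /tmp/target.txt
ln -s /tmp/target.txt mylink
readlink mylink          # prints: /tmp/target.txt
ls -l mylink | cut -c1   # prints: l
rm mylink /tmp/target.txt
```

Note that removing the link leaves the target file untouched, while removing the target leaves a dangling link behind.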
Pipes or named pipes: These are files that allow inter-process communication by connecting the output of one process to the input of another.
A named pipe is a file used by two processes to communicate with each other; it acts like a Linux pipe.
Listing pipes in a directory:
```
# ls -l | grep "^p"
```
Sample Output
```
prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe1
prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe2
prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe3
prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe4
prw-rw-r-- 1 tecmint tecmint 0 May 18 17:47 pipe5
```
You can use the mkfifo utility to create a named pipe in Linux as follows.
```
# mkfifo pipe1
# echo "This is named pipe1" > pipe1
```
In the above example, I created a named pipe called pipe1, then passed some data to it using the [echo command][2]; after that, the shell blocked while waiting for a reader to consume the input.
I then opened another shell and ran another command to print out what was passed to the pipe.
```
# while read line ;do echo "This was passed-'$line' "; done<pipe1
```
Socket files: These are files that provide a means of inter-process communication, and they can transfer data between processes running in different environments.
This means that sockets provide data and information transfer between processes running on different machines on a network.
An example to show the work of sockets would be a web browser making a connection to a web server.
```
# ls -l /dev/ | grep "^s"
```
Sample Output
```
srw-rw-rw- 1 root root 0 May 18 10:26 log
```
This is an example of a socket created in C using the `socket()` system call.
```
int socket_desc= socket(AF_INET, SOCK_STREAM, 0 );
```
In the above:
- `AF_INET` is the address family (IPv4)
- `SOCK_STREAM` is the type (connection-oriented, TCP protocol)
- `0` is the protocol (IP protocol)
To refer to the socket, use `socket_desc`, which works like a file descriptor; use the `read()` and `write()` system calls to read from and write to the socket respectively.
### Directories
These are special files that store both ordinary and other special files. They are organized on the Linux file system in a hierarchy starting from the root (/) directory.
Listing directories:
```
# ls -l / | grep "^d"
```
Sample Output
```
drwxr-xr-x 2 root root 4096 May 5 15:49 bin
drwxr-xr-x 4 root root 4096 May 5 15:58 boot
drwxr-xr-x 2 root root 4096 Apr 11 2015 cdrom
drwxr-xr-x 17 root root 4400 May 18 10:27 dev
drwxr-xr-x 168 root root 12288 May 18 10:28 etc
drwxr-xr-x 3 root root 4096 Apr 11 2015 home
drwxr-xr-x 25 root root 4096 May 5 15:44 lib
drwxr-xr-x 2 root root 4096 May 5 15:44 lib64
drwx------ 2 root root 16384 Apr 11 2015 lost+found
drwxr-xr-x 3 root root 4096 Apr 10 2015 media
drwxr-xr-x 3 root root 4096 Feb 23 17:54 mnt
drwxr-xr-x 16 root root 4096 Apr 30 16:01 opt
dr-xr-xr-x 223 root root 0 May 18 15:54 proc
drwx------ 19 root root 4096 Apr 9 11:12 root
drwxr-xr-x 27 root root 920 May 18 10:54 run
drwxr-xr-x 2 root root 12288 May 5 15:57 sbin
drwxr-xr-x 2 root root 4096 Dec 1 2014 srv
dr-xr-xr-x 13 root root 0 May 18 15:54 sys
drwxrwxrwt 13 root root 4096 May 18 17:55 tmp
drwxr-xr-x 11 root root 4096 Mar 31 16:00 usr
drwxr-xr-x 12 root root 4096 Nov 12 2015 var
```
You can make a directory using the `mkdir` command.
```
# mkdir -m 1666 tecmint.com
# mkdir -m 1666 news.tecmint.com
# mkdir -m 1775 linuxsay.com
```
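The `-m` flag sets the directory's permission mode at creation time; you can verify the result with `ls -l` and `stat` (the `%a`/`%F` format specifiers below assume GNU coreutils `stat`, and `demo_dir` is just an example name):

```
mkdir -m 1775 demo_dir
ls -ld demo_dir | cut -c1   # prints: d  (directory)
stat -c '%a %F' demo_dir    # prints: 1775 directory
rmdir demo_dir
```

The leading `1` in `1775` is the sticky bit, which on a directory restricts file deletion to each file's owner, as on `/tmp`.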
### Summary
You should now have a clear understanding of why everything in Linux is a file, and of the different types of files that can exist on your Linux system.
You can build on this by reading more about the individual file types and how they are created. I hope you find this guide helpful; if you have any questions or additional information you would like to share, please leave a comment and we shall discuss further.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/explanation-of-everything-is-a-file-and-types-of-files-in-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[Aaron Kili][a]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/
[2]: http://www.tecmint.com/echo-command-in-linux/

translating by ynmlml
5 Best Linux Package Managers for Linux Newbies
=====================================================
One thing new Linux users come to know as they progress is the existence of several Linux distributions and the different ways those distributions manage packages.
Package management is very important in Linux, and knowing how to use multiple package managers can prove life-saving for a power user, since downloading and installing software from repositories, as well as updating it, handling dependencies and uninstalling it, are vital and critical parts of Linux system administration.
![](http://www.tecmint.com/wp-content/uploads/2016/06/Best-Linux-Package-Managers.png)
>Best Linux Package Managers
Therefore, to become a Linux power user, it is important to understand how the major Linux distributions actually handle packages, and in this article we shall take a look at some of the best package managers you can find on Linux.
Here, our main focus is on relevant information about some of the best package managers, not on how to use them; that is left for you to discover. I will, however, provide meaningful links that point to usage guides and more.
### 1. DPKG Debian Package Management System
Dpkg is the base package management system for the Debian Linux family; it is used to install, remove, store and provide information about `.deb` packages.
It is a low-level tool, and there are front-end tools that help users obtain packages from remote repositories and handle complex package relations; these include:
Dont Miss: [15 Practical Examples of “dpkg commands” for Debian Based Distros][1]
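As a quick taste of the low-level workflow, here is a hedged sketch of common dpkg invocations; the package names are just examples, and installing or removing requires root:

```
# Query the dpkg database (safe, read-only):
dpkg -l | head            # list the first few installed packages
dpkg -s coreutils         # show status and details of an installed package
dpkg -L coreutils         # list the files a package installed

# Low-level install/remove of a local .deb (needs root, no dependency fetching):
# sudo dpkg -i package_file.deb
# sudo dpkg -r package_name
```

Note that dpkg itself does not download packages or resolve dependencies; that is exactly the gap the front ends below fill.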
#### APT (Advanced Packaging Tool)
It is a very popular, free, powerful and, above all, useful command-line package management system that serves as a front end for the dpkg package management system.
Users of Debian or its derivatives such as Ubuntu and Linux Mint should be familiar with this package management tool.
To understand how it actually works, you can go over these how to guides:
Dont Miss: [15 Examples of How to Use New Advanced Package Tool (APT) in Ubuntu/Debian][2]
Dont Miss: [25 Useful Basic Commands of APT-GET and APT-CACHE for Package Management][3]
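A minimal sketch of the everyday APT workflow looks like this; the package name is only an example, and all but the search need root:

```
sudo apt-get update              # resync the package index files from the repositories
apt-cache search web server      # search package descriptions for a keyword
sudo apt-get install apache2     # install a package plus its dependencies
sudo apt-get upgrade             # upgrade all installed packages
sudo apt-get remove apache2      # remove a package
```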
#### Aptitude Package Manager
This is also a popular command-line front-end package management tool for the Debian Linux family; it works similarly to APT, and there have been many comparisons between the two, but above all, testing out both will help you decide which one actually works better.
It was initially built for Debian and its derivatives, but its functionality now stretches to the RHEL family as well. You can refer to this guide for a better understanding of APT and Aptitude:
Dont Miss: [What is APT and Aptitude? and Whats real Difference Between Them?][4]
#### Synaptic Package Manager
Synaptic is a GUI package management tool for APT based on GTK+ and it works fine for users who may not want to get their hands dirty on a command line. It implements the same features as apt-get command line tool.
### 2. RPM (Red Hat Package Manager)
This is the Linux Standard Base packaging format and a base package management system created by Red Hat. Being the underlying system, there are several front-end package management tools that you can use with it, but we shall only look at the best of them, and that is:
#### YUM (Yellowdog Updater, Modified)
It is an open-source and popular command-line package manager that works as an interface to RPM for users. You can compare it to APT on Debian systems; it incorporates the common functionality that APT has. You can get a clear understanding of YUM with examples from this how-to guide:
Dont Miss: [20 Linux YUM Commands for Package Management][5]
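The basic YUM operations mirror those of APT; here is a hedged sketch, with an example package name, where everything except search and info needs root:

```
yum search httpd            # search the repositories for a keyword
yum info httpd              # show details about a package
sudo yum install httpd      # install a package, resolving dependencies automatically
sudo yum update             # update all installed packages
sudo yum remove httpd       # uninstall a package
```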
#### DNF Dandified Yum
It is also a package manager for RPM-based distributions, introduced in Fedora 18; it is the next-generation version of YUM.
If you have been using Fedora 22 onwards, you must have realized that it is the default package manager. Here are some links that will provide you more information about DNF and how to use it:
Dont Miss: [DNF The Next Generation Package Management for RPM Based Distributions][6]
Dont Miss: [27 DNF Commands Examples to Manage Fedora Package Management][7]
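DNF deliberately keeps a YUM-like command set; the following sketch uses an example package name, and the install/upgrade/remove steps need root:

```
dnf search httpd            # search package metadata
sudo dnf install httpd      # install with automatic dependency resolution
sudo dnf upgrade            # upgrade all installed packages
sudo dnf remove httpd       # remove a package
dnf history                 # review past transactions
```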
### 3. Pacman Package Manager Arch Linux
It is a popular and powerful yet simple package manager for Arch Linux and some lesser-known Linux distributions; it provides some of the fundamental functionality that other common package managers provide, including installing, automatic dependency resolution, upgrading, uninstalling and also downgrading software.
Most importantly, it is built to be simple, making package management easy for Arch users. You can read this [Pacman overview][8], which explains in detail some of the functions mentioned above.
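Pacman packs those operations into terse single-letter flags; a hedged sketch with an example package name (root required for everything except searching):

```
pacman -Ss nginx          # search the sync repositories
sudo pacman -S nginx      # install a package with its dependencies
sudo pacman -Syu          # sync the package databases and upgrade the whole system
sudo pacman -R nginx      # remove a package
# Downgrading is typically done by installing an older package file from the cache:
# sudo pacman -U /var/cache/pacman/pkg/nginx-<older-version>.pkg.tar.xz
```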
### 4. Zypper Package Manager openSUSE
It is a command-line package manager on openSUSE Linux that makes use of the libzypp library; its common functions include repository access, package installation, resolution of dependency issues and more.
Importantly, it can also handle repository extensions such as patterns, patches, and products. New openSUSE users can refer to the following guide to master it.
Dont Miss: [45 Zypper Commands to Master OpenSUSE Package Management][9]
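The core Zypper commands can be sketched as follows; the package and pattern names are illustrative examples, and all but the searches need root:

```
sudo zypper refresh            # refresh repository metadata
zypper search nginx            # search for a package
sudo zypper install nginx      # install a package (shorthand: zypper in)
sudo zypper update             # update installed packages
sudo zypper remove nginx       # remove a package (shorthand: zypper rm)
zypper search -t pattern lamp  # search repository extensions of type "pattern"
```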
### 5. Portage Package Manager Gentoo
It is the package manager for Gentoo, a less popular Linux distribution as of now, though that doesn't stop Portage from being one of the best package managers on Linux.
The main aim of the Portage project is to make a simple and trouble-free package management system with functionality such as backwards compatibility, automation and many more.
For better understanding, try reading [Portage project page][10].
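Since Portage is source-based, its workflow revolves around the emerge tool; a hedged sketch, with an example package name, where building and removing need root:

```
sudo emerge --sync                  # sync the local Portage tree with the remote repository
emerge --search nginx               # search for an ebuild
sudo emerge nginx                   # build and install a package from source
sudo emerge --update --deep @world  # update the whole system
sudo emerge --unmerge nginx         # remove a package
```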
### Concluding Remarks
As I already hinted at the beginning, the main purpose of this guide was to provide Linux users with a list of the best package managers; learning how to use them can be done by following the links provided and trying them out.
Users of the different Linux distributions will have to learn more on their own to better understand the different package managers mentioned above.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-package-managers/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[Ravi Saive][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/admin/
[1]: http://www.tecmint.com/dpkg-command-examples/
[2]: http://www.tecmint.com/apt-advanced-package-command-examples-in-ubuntu/
[3]: http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
[4]: http://www.tecmint.com/difference-between-apt-and-aptitude/
[5]: http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[6]: http://www.tecmint.com/dnf-next-generation-package-management-utility-for-linux/
[7]: http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/
[8]: https://wiki.archlinux.org/index.php/Pacman
[9]: http://www.tecmint.com/zypper-commands-to-manage-suse-linux-package-management/
[10]: https://wiki.gentoo.org/wiki/Project:Portage

Training vs. hiring to meet the IT needs of today and tomorrow
================================================================
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_4.png?itok=QLhyS_Xf)
In the digital era, IT skills requirements are in a constant state of flux thanks to the constantly changing tools and technologies companies need in order to keep pace. It's not easy for companies to find and hire talent with the coveted skills that will enable them to innovate. Meanwhile, training internal staff to take on new skills and challenges takes time that is often in short supply.
[Sandy Hill][1] is quite familiar with the various skills required across a variety of IT disciplines. As the director of IT for [Pegasystems][2], she is responsible for IT teams involved in areas ranging from application development to data center operations. Whats more, Pegasystems develops applications to help sales, marketing, service and operations teams streamline operations and connect with customers, which means she has to grasp the best way to use IT resources internally, and the IT challenges the companys customers face.
![](https://enterprisersproject.com/sites/default/files/CIO_Q%20and%20A_0.png)
**The Enterprisers Project (TEP): How has the emphasis you put on training changed in recent years?**
**Hill**: Weve been growing exponentially over the past couple of years so now were implementing more global processes and procedures. With that comes the training aspect of making sure everybody is on the same page.
Most of our focus has shifted to training staff on new products and tools that get implemented to drive innovation and enhance end user productivity. For example, weve implemented an asset management system; we didnt have one before. So we had to do training globally instead of hiring someone who already knew the product. As were growing, were also trying to maintain a tight budget and flat headcount. So wed rather internally train than try to hire new people.
**TEP: Describe your approach to training. What are some of the ways you help employees evolve their skills?**
**Hill**: I require each staff member to have a technical and non-technical training goal, which are tracked and reported on as part of their performance review. Their technical goal needs to align within their job function, and the non-technical goal can be anything from focusing on sharpening one of their soft skills to learning something outside of their area of expertise. I perform yearly staff evaluations to see where the gaps and shortages are so that teams remain well-rounded.
**TEP: To what extent have your training initiatives helped quell recruitment and retention issues?**
**Hill**: Keeping our staff excited about learning new technologies keeps their skill sets sharp. Having the staff know that we value them, and we are vested in their professional growth and development motivates them.
**TEP: What sorts of training have you found to be most effective?**
**Hill**: We use several different training methods that weve found to be effective. With new or special projects, we try to incorporate a training curriculum led by the vendor as part of the project rollout. If thats not an option, we use off-site training. We also purchase on-line training packages, and I encourage my staff to attend at least one conference per year to keep up with whats new in the industry.
**TEP**: For what sorts of skills have you found its better to hire new people than train existing staff?
**Hill**: It depends on the project. In one recent initiative, trying to implement OpenStack, we didnt have internal expertise at all. So we aligned with a consulting firm that specialized in that area. We utilized their expertise on-site to help run the project and train internal team members. It was a massive undertaking to get internal people to learn the skills they needed while also doing their day-to-day jobs.
The consultant helped us determine the headcount we needed to be proficient. This allowed us to assess our staff to see if gaps remained, which would require additional training or hiring. And we did end up hiring some of the contractors. But the alternative was to send some number of FTEs (full-time employees) for 6 to 8 weeks of training, and our pipeline of projects wouldnt allow that.
**TEP: In thinking about some of your most recent hires, what skills did they have that are especially attractive to you?**
**Hill**: In recent hires, Ive focused on soft skills. In addition to having solid technical skills, they need to be able to communicate effectively, work in teams and have the ability to persuade, negotiate and resolve conflicts.
IT people in general kind of keep to themselves; theyre often not the most social people. Now, where IT is more integrated throughout the organization, the ability to give useful updates and status reports to other business units is critical to show that IT is an active presence and to be successful.
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[ Paul Desmond][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://enterprisersproject.com/user/paul-desmond
[1]: https://enterprisersproject.com/user/sandy-hill
[2]: https://www.pega.com/pega-can?&utm_source=google&utm_medium=cpc&utm_campaign=900.US.Evaluate&utm_term=pegasystems&gloc=9009726&utm_content=smAXuLA4U|pcrid|102822102849|pkw|pegasystems|pmt|e|pdv|c|

IT runs on the cloud, and the cloud runs on Linux. Any questions?
===================================================================
>IT is moving to the cloud. And, what powers the cloud? Linux. When even Microsoft's Azure has embraced Linux, you know things have changed.
![](http://zdnet1.cbsistatic.com/hub/i/r/2016/06/24/7d2b00eb-783d-4202-bda2-ca65d45c460a/resize/770xauto/732db8df725ede1cc38972788de71a0b/linux-owns-cloud.jpg)
>Image: ZDNet
Like it or lump it, the cloud is taking over IT. We've seen [the rise of the cloud over in-house IT][1] for years now. And, what powers the cloud? Linux.
A recent survey by the [Uptime Institute][2] of 1,000 IT executives found that 50 percent of senior enterprise IT executives expect the [majority of IT workloads to reside off-premise in cloud][3] or colocation sites in the future. Of those surveyed, 23 percent expect the shift to happen next year, and 70 percent expect that shift to occur within the next four years.
This comes as no surprise. Much as many of us still love our physical servers and racks, it often doesn't make financial sense to run your own data center.
It's really very simple. Just compare your [capital expense (CAPEX) of running your own hardware versus the operational expenses (OPEX)][4] of using a cloud. Now, that's not to say you want to outsource everything and the kitchen sink, but most of the time and for many of your jobs you'll want to move to the cloud.
In turn, if you're going to make the best use of the cloud, you need to know Linux.
[Amazon Web Services][5], [Apache CloudStack][6], [Rackspace][7], [Google Cloud Platform][8], and [OpenStack][9] all run Linux at their hearts. The result? By 2014, [Linux server application deployments had risen to 79 percent][10] of all businesses, while Windows server app deployments had fallen to 36 percent. Linux has only gained more momentum since then.
Even Microsoft understands this.
In the past year alone, Azure Chief Technology Officer Mark Russinovich said, Microsoft has gone from [one in four of its Azure virtual machines running Linux][11] to [nearly one in three][12].
Think about that. Microsoft, which is switching to the [cloud for its main source of revenue][13], is relying on Linux for a third of its cloud business.
Even now, both those who love Microsoft and those who hate it have trouble getting their minds around the fundamental shift of [Microsoft from a proprietary software company to an open-source][14], cloud-based service business.
Linux's penetration into the proprietary server room is even deeper than it first appears. For example, [Docker recently announced a public beta of its Windows 10 and Mac OS X releases][15]. So, does that mean [Docker][16] is porting its eponymous container service to Windows 10 and the Mac? Nope.
On both platforms, Docker runs within a Linux virtual machine: HyperKit on Mac OS and Hyper-V on Windows. Your interface may look like just another Mac or Windows application, but at heart your containers will still be running on Linux.
So, just as the vast majority of Android phone and Chromebook users have no clue they're running Linux, so too will IT users continue to quietly move to Linux and the cloud.
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/it-runs-on-the-cloud-and-the-cloud-runs-on-linux-any-questions/#ftag=RSSbaffb68
作者:[Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]: http://www.zdnet.com/article/2014-the-year-the-cloud-killed-the-datacenter/
[2]: https://uptimeinstitute.com/
[3]: http://www.zdnet.com/article/move-to-cloud-accelerating-faster-than-thought-survey-finds/
[4]: http://www.zdnet.com/article/rethinking-capex-and-opex-in-a-cloud-centric-world/
[5]: https://aws.amazon.com/
[6]: https://cloudstack.apache.org/
[7]: https://www.rackspace.com/en-us
[8]: https://cloud.google.com/
[9]: http://www.openstack.org/
[10]: http://www.zdnet.com/article/linux-foundation-finds-enterprise-linux-growing-at-windows-expense/
[11]: http://news.microsoft.com/bythenumbers/azure-virtual
[12]: http://www.zdnet.com/article/microsoft-nearly-one-in-three-azure-virtual-machines-now-are-running-linux/
[13]: http://www.zdnet.com/article/microsofts-q3-azure-commercial-cloud-strong-but-earnings-revenue-light/
[14]: http://www.zdnet.com/article/why-microsoft-is-turning-into-an-open-source-company/
[15]: http://www.zdnet.com/article/new-docker-betas-for-azure-windows-10-now-available/
[16]: http://www.docker.com/

Ubuntus Snap, Red Hats Flatpak And Is One Fits All Linux Packages Useful?
=================================================================================
![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Flatpak-and-Snap-Packages.jpg)
An in-depth look into the new generation of packages starting to permeate the Linux ecosystem.
Lately weve been hearing more and more about Ubuntus Snap packages and Flatpak (formerly referred to as xdg-app), created by Red Hat employee Alexander Larsson.
These two types of next-generation packages share, in essence, the same goal and characteristics: they are standalone packages that dont rely on third-party system libraries in order to function.
This new direction in which Linux seems to be headed naturally raises questions such as: what are the advantages and disadvantages of standalone packages? Does this lead us to a better Linux overall? What are the motives behind it?
To answer these questions and more, let us explore the things we know about Snap and Flatpak so far.
### The Motive
According to both [Flatpak][1] and [Snap][2] statements, the main motive behind them is to be able to bring one and the same version of application to run across multiple Linux distributions.
>“From the very start its primary goal has been to allow the same application to run across a myriad of Linux distributions and operating systems.” Flatpak
>“… snap universal Linux package format, enabling a single binary package to work perfectly and securely on any Linux desktop, server, cloud or device.” Snap
To be more specific, the guys behind Snap and Flatpak (S&F) believe that theres a barrier of fragmentation on the Linux platform.
A barrier which holds back the platform advancement by burdening developers with more, perhaps unnecessary, work to get their software run on the many distributions out there.
Therefore, as leading Linux distributions (Ubuntu & Red Hat), they wish to eliminate the barrier and strengthen the platform in general.
But what are the more personal gains which motivate the development of S&F?
#### Personal Gains?
Although not officially stated anywhere, it may be assumed that by leading the efforts of creating a unified package that could potentially be adopted by the vast majority of Linux distros (if not all of them), the captains of these projects could assume a key position in determining where the Linux ship sails.
### The Advantages
The benefits of standalone packages are diverse and can depend on different factors.
Basically however, these factors can be categorized under 2 distinct criteria:
#### User Perspective
+ From a Linux user point of view, Snap and Flatpak both bring the possibility of installing any package (software / app) on any distribution the user is using.
That is, for instance, if youre using a not so popular distribution which has only a scarce supply of packages available in their repo, due to workforce limitations probably, youll now be able to easily and significantly increase the amount of packages available to you which is a great thing.
+ Also, users of popular distributions that do have many packages available in their repos, will enjoy the ability of installing packages that might not have behaved with their current set of installed libraries.
For example, a Debian user who wants to install a package from testing branch will not have to convert his entire system into testing (in order for the package to run against newer libraries), rather, that user will simply be able to install only the package he wants from whichever branch he likes and on whatever branch hes on.
The latter point was already basically possible for users who compiled their packages straight from source; however, unless they are using a source-based distribution such as Gentoo, most users will see this as a hassle not worth the trouble.
+ The advanced user, or perhaps better put, the security-aware user, might feel more comfortable with this type of package, as long as it comes from a reliable source, since such packages tend to provide another layer of isolation, being generally isolated from system packages.
+ Both S&F are being developed with enhanced security in mind, which generally makes use of “sandboxing”, i.e. isolation, in order to prevent cases where they carry a virus that can infect the entire system, similar to the way .exe files on MS Windows may. (More on MS and S&F later.)
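For a feel of what installing such packages looks like in practice, here is a hedged sketch using Snap and Flatpak commands; the package names and the Flathub remote are illustrative examples, and exact syntax may vary between versions of the tools:

```
# Snap (Ubuntu): install a snap and list installed snaps
sudo snap install hello-world
snap list

# Flatpak (Fedora and others): add a remote, then install and run an app from it
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gnome.Calculator
flatpak run org.gnome.Calculator
```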
#### Developer Perspective
For developers, the advantages of developing S&F packages will probably be a lot clearer than they are to the average user, some of these were already hinted in a previous section of this post.
Nonetheless, here they are:
+ S&F will make it easier on devs who want to develop for more than one Linux distribution by unifying the process of development, therefore minimizing the amount of work a developer needs to do in order to get his app running on multiple distributions.
++ Developers could therefore gain easier access to a wider range of distributions.
+ S&F allow devs to privately distribute their packages without being dependent on distribution maintainers to stabilize their package for each and every distro.
++ Through the above, devs may gain access to direct statistics of user adoption / engagement for their software.
++ Also through the above, devs could get more directly involved with users, rather than having to do so through a middleman, in this case, the distribution.
### The Downsides
Bloat. Simple as that. Flatpak and Snap arent just magic that makes dependencies evaporate into thin air. Rather, instead of relying on the target system to provide the required dependencies, S&F packages come with the dependencies built in.
As the saying goes “if the mountain wont come to Muhammad, Muhammad must go to the mountain…”
Just as the security-aware user might enjoy the extra layer of isolation that S&F packages provide, as long as they come from a trusted source, the less knowledgeable user, on the other hand, might be prone to the other side of the coin: the hazard of using a package from an unknown source that may contain malicious software.
The above point can be said to be valid even with todays popular methods, as PPAs, overlays, etc might also be maintained by untrusted sources.
However, with S&F packages the risk increases, since malicious software developers need to create only one version of their program in order to infect a large number of distributions, whereas without them theyd have needed to create multiple versions in order to adjust their malware to each distribution.
### Was Microsoft Right All Along?
With all thats mentioned above in mind, its pretty clear that, for the most part, the advantages of using S&F packages outweigh the drawbacks.
At least for users of binary-based distributions, or distros that are not focused on being lightweight.
Which eventually led me to ask the above question: could it be that Microsoft was right all along? If so, and S&F become the Linux standard, would you still consider Linux a Unix-like variant?
Well apparently, the best one to answer those questions is probably time.
Nevertheless, Id argue that even if not entirely right, MS certainly has a good point to their credit, and having all these methods available here on Linux out of the box is certainly a plus in my book.
--------------------------------------------------------------------------------
via: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/
作者:[Editorials][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.iwillfolo.com/category/editorials/

Linux Applications That Work On All Distributions Are They Any Good?
============================================================================
![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Bundled-applications.jpg)
A revisit of Linux communitys latest ambitions promoting decentralized applications in order to tackle distribution fragmentation.
Following last weeks article: [Ubuntus Snap, Red Hats Flatpak And Is One Fits All Linux Packages Useful][1]?, a couple of new opinions rose to the surface which may contain crucial information about the usefulness of such apps.
### The Con Side
Commenting on the subject [here][2], a [Gentoo][3] user who goes by the name Till raised a few points which hadnt been fully addressed the last time we covered the issue.
While previously we settled on merely calling it bloat, Till, on the other hand, dissects that bloat further to help us better understand both its components and its consequences.
Referring to such apps as “bundled applications”, since the way they work on all distributions is by packaging dependencies together with the apps themselves, Till says:
>“bundles ship a lot of software that now needs to be maintained by the application developer. If library X has a security problem and needs an update, you rely on every single applications to ship correct updates in order to make your system save.”
Essentially, Till raises an important security point. However, it doesnt necessarily have to be tied to security alone; it can also be linked to other aspects such as system maintenance, atomic updates, etc.
Furthermore, if we take that notion one step further and assume that dependency developers may cooperate, releasing their software in step with the apps that use it (a utopian situation), we shall then get an overall slowdown of the entire platforms development.
Another problem that arises from the same point is that dependency transparency becomes obscure; that is, if you want to know which libraries are bundled with a certain app, youll have to rely on the developer to publish such data.
Or, as Till puts it: “Questions like, did package XY already include the updated library Z, will be your daily bread”.
For comparison, with the standard methods available on Linux nowadays (both binary and source distributions), you can easily notice which libraries are being updated upon a system update.
And you can also rest assured that all other apps on the system will use them, freeing you from the need to check each app individually.
Other cons that may be deduced from the term bloat include: bigger package size (each app is bundled with its dependencies), higher memory usage (no more library sharing), and one less filtering mechanism to prevent malicious software: package maintainers also serve as a filter between developers and users, helping to assure that users get quality software.
With bundled apps this may no longer be the case.
As a finalizing general point, Till asserts that although useful in some cases, for the most part, bundled apps weaken the position of free software in distributions (as proprietary vendors will now be able to deliver software without sharing it in public repositories).
And apart from that, it introduces many other issues. Many problems are simply moved towards the developers.
### The Pro Side
In contrast, another comment by a person named Sven tries to contradict common claims that basically go against the use of bundled applications, hence justifying and promoting the use of it.
“Waste of space”: Sven claims that in todays world we have many other things that waste disk space, such as movies stored on the hard drive, installed locales, etc.
Ultimately, these things are infinitely more wasteful than a mere “100 MB to run a program you use all day … Dont be ridiculous.”
“Waste of RAM”: the major points in favor are:
- Shared libraries waste significantly less RAM compared to application runtime data.
- RAM is cheap today.
“Security nightmare”: not every application you run is actually security-critical.
Also, many applications never even see any security updates, unless they are on a rolling distro.
In addition to Svens opinions, which try to stick to the pragmatic side, a few advantages were also pointed out by Till, who admits that bundled apps have their merits in certain cases:
- Proprietary vendors who want to keep their code out of the public repositories will be able to do so more easily.
- Niche applications, which are not packaged by your distribution, will now be more readily available.
- Testing on binary distributions which do not have beta packages will become easier.
- Freeing users from solving dependency problems.
### Final Thoughts
Although this sheds new light onto the matter, it seems that one conclusion still stands and is accepted by all parties: bundled apps have a niche to fill in the Linux ecosystem.
Nevertheless, the role that niche should take, whether a main or a marginal one, appears to be a lot clearer now, at least from a theoretical point of view.
Users who are looking to keep their systems as optimized as possible should, in the majority of cases, avoid using bundled apps.
Whereas users who are after ease of use, meaning doing the least work to maintain their systems, should, and probably will, feel very comfortable adopting the new method.
--------------------------------------------------------------------------------
via: http://www.iwillfolo.com/linux-applications-that-works-on-all-distributions-are-they-any-good/
作者:[Editorials][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.iwillfolo.com/category/editorials/
[1]: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/
[2]: http://www.proli.net/2016/06/25/gnulinux-bundled-application-ramblings/
[3]: http://www.iwillfolo.com/5-reasons-use-gentoo-linux/

Securi-Pi: Using the Raspberry Pi as a Secure Landing Point
================================================================================
Like many LJ readers these days, I've been leading a bit of a techno-nomadic lifestyle for the past few years—jumping from network to network, access point to access point, as I bounce around the real world while maintaining my connection to the Internet and other networks I use on a daily basis. As of late, I've found that more and more networks are starting to block outbound ports like SMTP (port 25), SSH (port 22) and others. It becomes really frustrating when you drop into a local coffee house expecting to be able to fire up your SSH client and get a few things done, and you can't, because the network's blocking you.
However, I have yet to run across a network that blocks HTTPS outbound (port 443). After a bit of fiddling with a Raspberry Pi 2 I have at home, I was able to get a nice clean solution that lets me hit various services on the Raspberry Pi via port 443—allowing me to walk around blocked ports and hobbled networks so I can do the things I need to do. In a nutshell, I have set up this Raspberry Pi to act as an OpenVPN endpoint, SSH endpoint and Apache server—with all these services listening on port 443 so networks with restrictive policies aren't an issue.
### Notes
This solution will work on most networks, but firewalls that do deep packet inspection on outbound traffic still can block traffic that's tunneled using this method. However, I haven't been on a network that does that...yet. Also, while I use a lot of cryptography-based solutions here (OpenVPN, HTTPS, SSH), I haven't done a strict security audit of this setup. DNS may leak information, for example, and there may be other things I haven't thought of. I'm not recommending this as a way to hide all your traffic—I just use this so that I can connect to the Internet in an unfettered way when I'm out and about.
### Getting Started
Let's start off with what you need to put this solution together. I'm using this on a Raspberry Pi 2 at home, running the latest Raspbian, but this should work just fine on a Raspberry Pi Model B, as well. It fits within the 512MB of RAM footprint quite easily, although performance may be a bit slower, because the Raspberry Pi Model B has a single-core CPU as opposed to the Pi 2's quad-core. My Raspberry Pi 2 is behind my home's router/firewall, so I get the added benefit of being able to access my machines at home. This also means that any traffic I send to the Internet appears to come from my home router's IP address, so this isn't a solution designed to protect anonymity. If you don't have a Raspberry Pi, or don't want this running out of your home, it's entirely possible to run this out of a small cloud server too. Just make sure that the server's running Debian or Ubuntu, as these instructions are targeted at Debian-based distributions.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11913f1.jpg)
Figure 1. The Raspberry Pi, about to become an encrypted network endpoint.
### Installing and Configuring BIND
Once you have your platform up and running—whether it's a Raspberry Pi or otherwise—next you're going to install BIND, the nameserver that powers a lot of the Internet. You're going to install BIND as a caching nameserver only, and not have it service incoming requests from the Internet. Installing BIND will give you a DNS server to point your OpenVPN clients at, once you get to the OpenVPN step. Installing BIND is easy; it's just a simple `apt-get` command to install it:
```
root@test:~# apt-get install bind9
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
bind9utils
Suggested packages:
bind9-doc resolvconf ufw
The following NEW packages will be installed:
bind9 bind9utils
0 upgraded, 2 newly installed, 0 to remove and
↪0 not upgraded.
Need to get 490 kB of archives.
After this operation, 1,128 kB of additional disk
↪space will be used.
Do you want to continue [Y/n]? y
```
There are a couple of minor configuration changes that need to be made to one of BIND's config files before it can operate as a caching nameserver. Both changes are in `/etc/bind/named.conf.options`. First, you're going to uncomment the "forwarders" section of this file, and you're going to add a nameserver on the Internet to which to forward requests. In this case, I'm going to add Google's DNS (8.8.8.8). The "forwarders" section of the file should look like this:
```
forwarders {
8.8.8.8;
};
```
The second change you're going to make allows queries from your internal network and localhost. Simply add this line to the bottom of the configuration file, right before the `};` that ends the file:
```
allow-query { 192.168.1.0/24; 127.0.0.0/16; };
```
The line above allows this DNS server to be queried from the network it's on (in this case, my network behind my firewall) and from localhost. Next, you just need to restart BIND:
```
root@test:~# /etc/init.d/bind9 restart
[....] Stopping domain name service...: bind9waiting
↪for pid 13209 to die
. ok
[ ok ] Starting domain name service...: bind9.
```
Now you can test `nslookup` to make sure your server works:
```
root@test:~# nslookup
> server localhost
Default server: localhost
Address: 127.0.0.1#53
> www.google.com
Server: localhost
Address: 127.0.0.1#53
Non-authoritative answer:
Name: www.google.com
Address: 173.194.33.176
Name: www.google.com
Address: 173.194.33.177
Name: www.google.com
Address: 173.194.33.178
Name: www.google.com
Address: 173.194.33.179
Name: www.google.com
Address: 173.194.33.180
```
That's it! You've got a working nameserver on this machine. Next, let's move on to OpenVPN.
### Installing and Configuring OpenVPN
OpenVPN is an open-source VPN solution that relies on SSL/TLS for its key exchange. It's also easy to install and get working under Linux. Configuration of OpenVPN can be a bit daunting, but you're not going to deviate from the default configuration by much. To start, you're going to run an apt-get command and install OpenVPN:
```
root@test:~# apt-get install openvpn
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
liblzo2-2 libpkcs11-helper1
Suggested packages:
resolvconf
The following NEW packages will be installed:
liblzo2-2 libpkcs11-helper1 openvpn
0 upgraded, 3 newly installed, 0 to remove and
↪0 not upgraded.
Need to get 621 kB of archives.
After this operation, 1,489 kB of additional disk
↪space will be used.
Do you want to continue [Y/n]? y
```
Now that OpenVPN is installed, you're going to configure it. OpenVPN is SSL-based, and it relies on both server and client certificates to work. To generate these certificates, you need to configure a Certificate Authority (CA) on the machine. Luckily, OpenVPN ships with some wrapper scripts known as "easy-rsa" that help to bootstrap this process. You'll start by making a directory on the filesystem for the easy-rsa scripts to reside in and by copying the scripts from the template directory there:
```
root@test:~# mkdir /etc/openvpn/easy-rsa
root@test:~# cp -rpv
↪/usr/share/doc/openvpn/examples/easy-rsa/2.0/*
↪/etc/openvpn/easy-rsa/
```
Next, copy the vars file to a backup copy:
```
root@test:/etc/openvpn/easy-rsa# cp vars vars.bak
```
Now, edit vars so it's got information pertinent to your installation. I'm going to specify only the lines that need to be edited, with sample data, below:
```
KEY_SIZE=4096
KEY_COUNTRY="US"
KEY_PROVINCE="CA"
KEY_CITY="Silicon Valley"
KEY_ORG="Linux Journal"
KEY_EMAIL="bill.childers@linuxjournal.com"
```
The next step is to source the vars file, so that the environment variables in the file are in your current environment:
```
root@test:/etc/openvpn/easy-rsa# source ./vars
NOTE: If you run ./clean-all, I will be doing a
↪rm -rf on /etc/openvpn/easy-rsa/keys
```
### Building the Certificate Authority
You're now going to run clean-all to ensure a clean working environment, and then you're going to build the CA. Note that I'm changing the "changeme" prompts to something that's appropriate for this installation:
```
root@test:/etc/openvpn/easy-rsa# ./clean-all
root@test:/etc/openvpn/easy-rsa# ./build-ca
Generating a 4096 bit RSA private key
...................................................++
...................................................++
writing new private key to 'ca.key'
-----
You are about to be asked to enter information that
will be incorporated into your certificate request.
What you are about to enter is what is called a
Distinguished Name or a DN.
There are quite a few fields but you can leave some
blank. For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:
State or Province Name (full name) [CA]:
Locality Name (eg, city) [Silicon Valley]:
Organization Name (eg, company) [Linux Journal]:
Organizational Unit Name (eg, section)
↪[changeme]:SecTeam
Common Name (eg, your name or your server's hostname)
↪[changeme]:test.linuxjournal.com
Name [changeme]:test.linuxjournal.com
Email Address [bill.childers@linuxjournal.com]:
```
### Building the Server Certificate
Once the CA is created, you need to build the OpenVPN server certificate:
```
root@test:/etc/openvpn/easy-rsa#
↪./build-key-server test.linuxjournal.com
Generating a 4096 bit RSA private key
...................................................++
writing new private key to 'test.linuxjournal.com.key'
-----
You are about to be asked to enter information that
will be incorporated into your certificate request.
What you are about to enter is what is called a
Distinguished Name or a DN.
There are quite a few fields but you can leave some
blank. For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:
State or Province Name (full name) [CA]:
Locality Name (eg, city) [Silicon Valley]:
Organization Name (eg, company) [Linux Journal]:
Organizational Unit Name (eg, section)
↪[changeme]:SecTeam
Common Name (eg, your name or your server's hostname)
↪[test.linuxjournal.com]:
Name [changeme]:test.linuxjournal.com
Email Address [bill.childers@linuxjournal.com]:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from
↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :PRINTABLE:'CA'
localityName :PRINTABLE:'Silicon Valley'
organizationName :PRINTABLE:'Linux Journal'
organizationalUnitName:PRINTABLE:'SecTeam'
commonName :PRINTABLE:'test.linuxjournal.com'
name :PRINTABLE:'test.linuxjournal.com'
emailAddress
↪:IA5STRING:'bill.childers@linuxjournal.com'
Certificate is to be certified until Sep 1
↪06:23:59 2025 GMT (3650 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
```
The next step may take a while—building the Diffie-Hellman key for the OpenVPN server. This takes several minutes on a conventional desktop-grade CPU, but on the ARM processor of the Raspberry Pi, it can take much, much longer. Have patience; as long as the dots in the terminal keep appearing, the system is building its Diffie-Hellman key (note that many dots are snipped in these examples):
```
root@test:/etc/openvpn/easy-rsa# ./build-dh
Generating DH parameters, 4096 bit long safe prime,
↪generator 2
This is going to take a long time
....................................................+
<snipped out many more dots>
```
### Building the Client Certificate
Now you're going to generate a client key for your client to use when logging in to the OpenVPN server. OpenVPN is typically configured for certificate-based auth, where the client presents a certificate that was issued by an approved Certificate Authority:
```
root@test:/etc/openvpn/easy-rsa# ./build-key
↪bills-computer
Generating a 4096 bit RSA private key
...................................................++
...................................................++
writing new private key to 'bills-computer.key'
-----
You are about to be asked to enter information that
will be incorporated into your certificate request.
What you are about to enter is what is called a
Distinguished Name or a DN. There are quite a few
fields but you can leave some blank.
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:
State or Province Name (full name) [CA]:
Locality Name (eg, city) [Silicon Valley]:
Organization Name (eg, company) [Linux Journal]:
Organizational Unit Name (eg, section)
↪[changeme]:SecTeam
Common Name (eg, your name or your server's hostname)
↪[bills-computer]:
Name [changeme]:bills-computer
Email Address [bill.childers@linuxjournal.com]:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from
↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :PRINTABLE:'CA'
localityName :PRINTABLE:'Silicon Valley'
organizationName :PRINTABLE:'Linux Journal'
organizationalUnitName:PRINTABLE:'SecTeam'
commonName :PRINTABLE:'bills-computer'
name :PRINTABLE:'bills-computer'
emailAddress
↪:IA5STRING:'bill.childers@linuxjournal.com'
Certificate is to be certified until
↪Sep 1 07:35:07 2025 GMT (3650 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified,
↪commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
root@test:/etc/openvpn/easy-rsa#
```
Now you're going to generate an HMAC code as a shared key to increase the security of the system further:
```
root@test:~# openvpn --genkey --secret
↪/etc/openvpn/easy-rsa/keys/ta.key
```
### Configuration of the Server
Finally, you're going to get to the meat of configuring the OpenVPN server. You're going to create a new file, /etc/openvpn/server.conf, and you're going to stick to the default configuration for the most part. The main change you're going to make is to set up OpenVPN to use TCP rather than UDP. This is needed for the next major step to work—without OpenVPN using TCP for its network communication, you can't get things working on port 443. So, create a new file called /etc/openvpn/server.conf, and put the following configuration in it:
```
port 1194
proto tcp
dev tun
ca easy-rsa/keys/ca.crt
cert easy-rsa/keys/test.linuxjournal.com.crt ## or whatever
↪your hostname was
key easy-rsa/keys/test.linuxjournal.com.key ## Hostname key
↪- This file should be kept secret
management localhost 7505
dh easy-rsa/keys/dh4096.pem
tls-auth /etc/openvpn/certs/ta.key 0
server 10.8.0.0 255.255.255.0 # The server will use this
↪subnet for clients connecting to it
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp" # Forces clients
↪to redirect all traffic through the VPN
push "dhcp-option DNS 192.168.1.1" # Tells the client to
↪use the DNS server at 192.168.1.1 for DNS -
↪replace with the IP address of the OpenVPN
↪machine and clients will use the BIND
↪server setup earlier
keepalive 30 240
comp-lzo # Enable compression
persist-key
persist-tun
status openvpn-status.log
verb 3
```
And last, you're going to enable IP forwarding on the server, configure OpenVPN to start on boot and start the OpenVPN service:
```
root@test:/etc/openvpn/easy-rsa/keys# echo
↪"net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
root@test:/etc/openvpn/easy-rsa/keys# sysctl -p
↪/etc/sysctl.conf
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.ipv4.ip_forward = 0
net.ipv4.ip_forward = 1
root@test:/etc/openvpn/easy-rsa/keys# update-rc.d
↪openvpn defaults
update-rc.d: using dependency based boot sequencing
root@test:/etc/openvpn/easy-rsa/keys#
↪/etc/init.d/openvpn start
[ ok ] Starting virtual private network daemon:.
```
### Setting Up OpenVPN Clients
Your client installation depends on the host OS of your client, but you'll need to copy the client certs and keys created above to your client, import those certificates and create a configuration for that client. Each client and client OS does it slightly differently, and documenting each one is beyond the scope of this article, so you'll need to refer to the documentation for that client to get it running. Refer to the Resources section for OpenVPN clients for each major OS.
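For reference, a minimal client configuration matching the server settings above might look like the following. This is a sketch, not taken from the article; the hostname and file names are the examples used earlier, and `remote ... 443` assumes the SSLH setup described in the next section (use 1194 to reach OpenVPN directly):

```
client
dev tun
proto tcp                        # must match the server's "proto tcp"
remote test.linuxjournal.com 443 # via SSLH; use 1194 to hit OpenVPN directly
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt                        # the CA certificate built earlier
cert bills-computer.crt          # the client certificate and key
key bills-computer.key
tls-auth ta.key 1                # direction 1 on the client (server uses 0)
comp-lzo                         # must match the server's compression setting
verb 3
```

With a client that accepts plain config files, something like `openvpn --config bills-computer.ovpn` would then bring the tunnel up.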
### Installing SSLH—the "Magic" Protocol Multiplexer
The really interesting piece of this solution is SSLH. SSLH is a protocol multiplexer—it listens on port 443 for traffic, analyzes whether the incoming packet is SSH, HTTPS or OpenVPN, and forwards it to the proper service. This is what enables this solution to bypass most port blocks—you use the HTTPS port for all of this traffic, since HTTPS is rarely blocked.
To start, `apt-get` install SSLH:
```
root@test:/etc/openvpn/easy-rsa/keys# apt-get
↪install sslh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
apache2 apache2-mpm-worker apache2-utils
↪apache2.2-bin apache2.2-common
libapr1 libaprutil1 libaprutil1-dbd-sqlite3
↪libaprutil1-ldap libconfig9
Suggested packages:
apache2-doc apache2-suexec apache2-suexec-custom
↪openbsd-inetd inet-superserver
The following NEW packages will be installed:
apache2 apache2-mpm-worker apache2-utils
↪apache2.2-bin apache2.2-common
libapr1 libaprutil1 libaprutil1-dbd-sqlite3
↪libaprutil1-ldap libconfig9 sslh
0 upgraded, 11 newly installed, 0 to remove
↪and 0 not upgraded.
Need to get 1,568 kB of archives.
After this operation, 5,822 kB of additional
↪disk space will be used.
Do you want to continue [Y/n]? y
```
After SSLH is installed, the package installer will ask you if you want to run it in inetd or standalone mode. Select standalone mode, because you want SSLH to run as its own process. If you don't have Apache installed, the Debian/Raspbian package of SSLH will pull it in automatically, although it's not strictly required. If you already have Apache running and configured, you'll want to make sure it only listens on localhost's interface and not all interfaces (otherwise, SSLH can't start because it can't bind to port 443). After installation, you'll receive an error that looks like this:
```
[....] Starting ssl/ssh multiplexer: sslhsslh disabled,
↪please adjust the configuration to your needs
[FAIL] and then set RUN to 'yes' in /etc/default/sslh
↪to enable it. ... failed!
failed!
```
This isn't an error, exactly—it's just SSLH telling you that it's not configured and can't start. Configuring SSLH is pretty simple. Its configuration is stored in `/etc/default/sslh`, and you just need to configure the `RUN` and `DAEMON_OPTS` variables. My SSLH configuration looks like this:
```
# Default options for sslh initscript
# sourced by /etc/init.d/sslh
# Disabled by default, to force yourself
# to read the configuration:
# - /usr/share/doc/sslh/README.Debian (quick start)
# - /usr/share/doc/sslh/README, at "Configuration" section
# - sslh(8) via "man sslh" for more configuration details.
# Once configuration ready, you *must* set RUN to yes here
# and try to start sslh (standalone mode only)
RUN=yes
# binary to use: forked (sslh) or single-thread
↪(sslh-select) version
DAEMON=/usr/sbin/sslh
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh
↪127.0.0.1:22 --ssl 127.0.0.1:443 --openvpn
↪127.0.0.1:1194 --pidfile /var/run/sslh/sslh.pid"
```
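One detail worth spelling out: `--ssl 127.0.0.1:443` above hands HTTPS traffic to Apache on the loopback interface, so Apache itself must not bind to all interfaces on port 443. A sketch of the relevant directives (the file path is the usual Debian/Raspbian layout and is an assumption; adjust to your installation):

```
# /etc/apache2/ports.conf -- bind Apache to localhost only,
# leaving 0.0.0.0:443 free for SSLH
Listen 127.0.0.1:80
<IfModule mod_ssl.c>
    Listen 127.0.0.1:443
</IfModule>
```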
Save the file and start SSLH:
```
root@test:/etc/openvpn/easy-rsa/keys#
↪/etc/init.d/sslh start
[ ok ] Starting ssl/ssh multiplexer: sslh.
```
Now, you should be able to ssh to port 443 on your Raspberry Pi, and have it forward via SSLH:
```
$ ssh -p 443 root@test.linuxjournal.com
root@test:~#
```
SSLH is now listening on port 443 and can direct traffic to SSH, Apache or OpenVPN based on the type of packet that hits it. You should be ready to go!
### Conclusion
Now you can fire up OpenVPN and set your OpenVPN client configuration to port 443, and SSLH will route it to the OpenVPN server on port 1194. But because you're talking to your server on port 443, your VPN traffic won't get blocked. Now you can land at a strange coffee shop, in a strange town, and know that your Internet will just work when you fire up your OpenVPN and point it at your Raspberry Pi. You'll also gain some encryption on your link, which will improve the privacy of your connection. Enjoy surfing the Net via your new landing point!
### Resources
Installing and Configuring OpenVPN: [https://wiki.debian.org/OpenVPN](https://wiki.debian.org/OpenVPN) and [http://cryptotap.com/articles/openvpn](http://cryptotap.com/articles/openvpn)
OpenVPN client downloads: [https://openvpn.net/index.php/open-source/downloads.html](https://openvpn.net/index.php/open-source/downloads.html)
OpenVPN Client for iOS: [https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8](https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8)
OpenVPN Client for Android: [https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en](https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en)
Tunnelblick for Mac OS X (OpenVPN client): [https://tunnelblick.net](https://tunnelblick.net)
SSLH—Protocol Multiplexer: [http://www.rutschle.net/tech/sslh.shtml](http://www.rutschle.net/tech/sslh.shtml) and [https://github.com/yrutschle/sslh](https://github.com/yrutschle/sslh)
----------
via: http://www.linuxjournal.com/content/securi-pi-using-raspberry-pi-secure-landing-point?page=0,0
Author: [Bill Childers][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxjournal.com/users/bill-childers

@@ -1,4 +1,8 @@
zky001 has started translating
[Translation in progress: flankershen]
How to Set Nginx as Reverse Proxy on Centos7 CPanel
================================================================================
@@ -24,9 +28,9 @@ First of all, we need to install the EPEL repo to start-up with the process.
--> Running transaction check
---> Package epel-release.noarch 0:7-5 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================
@@ -44,9 +48,9 @@ First of all, we need to install the EPEL repo to start-up with the process.
--> Running transaction check
---> Package nDeploy-release-centos.noarch 0:1.0-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================
@@ -63,9 +67,9 @@ First of all, we need to install the EPEL repo to start-up with the process.
(1/4): ndeploy/7/x86_64/primary_db | 14 kB 00:00:00
(2/4): epel/x86_64/group_gz | 169 kB 00:00:00
(3/4): epel/x86_64/primary_db | 3.7 MB 00:00:02
Dependencies Resolved
===============================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================
@@ -78,7 +82,7 @@ First of all, we need to install the EPEL repo to start-up with the process.
memcached x86_64 1.4.15-9.el7 base 84 k
python-inotify noarch 0.9.4-4.el7 base 49 k
python-lxml x86_64 3.2.1-4.el7 base 758 k
Transaction Summary
===============================================================================================================================================
Install 2 Packages (+5 Dependent packages)
@@ -89,7 +93,7 @@ With these steps, we've completed with the installation of Nginx plugin in our s
root@server1 [/usr]# /opt/nDeploy/scripts/cpanel-nDeploy-setup.sh enable
Modifying apache http and https port in cpanel
httpd restarted successfully.
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/ndeploy_watcher.service to /usr/lib/systemd/system/ndeploy_watcher.service.
@@ -109,7 +113,7 @@ As you can see these script will modify the Apache port from 80 to another port
Main PID: 24760 (httpd)
CGroup: /system.slice/httpd.service
‣ 24760 /usr/local/apache/bin/httpd -k start
Jan 18 06:34:23 server1.centos7-test.com systemd[1]: Starting Apache Web Server...
Jan 18 06:34:23 server1.centos7-test.com apachectl[25606]: httpd (pid 24760) already running
Jan 18 06:34:23 server1.centos7-test.com systemd[1]: Started Apache Web Server.
@@ -127,7 +131,7 @@ As you can see these script will modify the Apache port from 80 to another port
├─25473 nginx: worker process
├─25474 nginx: worker process
└─25475 nginx: cache manager process
Jan 17 17:18:29 server1.centos7-test.com systemd[1]: Starting nginx-nDeploy - high performance web server...
Jan 17 17:18:29 server1.centos7-test.com nginx[3804]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Jan 17 17:18:29 server1.centos7-test.com nginx[3804]: nginx: configuration file /etc/nginx/nginx.conf test is successful
@@ -159,14 +163,14 @@ The virtualhost entries created for the existing users as located in the folder
listen 45.79.183.73:80;
#CPIPVSIX:80;
# ServerNames
server_name saheetha.com www.saheetha.com;
access_log /usr/local/apache/domlogs/saheetha.com main;
access_log /usr/local/apache/domlogs/saheetha.com-bytes_log bytes_log;
include /etc/nginx/sites-enabled/saheetha.com.include;
}
We can confirm that the web server is working by visiting a website in the browser. Please see the web server information on my server after the installation.
@@ -200,4 +204,4 @@ via: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://linoxide.com/author/saheethas/
[a]: http://linoxide.com/author/saheethas/

@@ -1,4 +1,4 @@
@4357 is translating
translated by mudongliang
What do Linux developers think of Git and GitHub?
=====================================================

@@ -1,35 +0,0 @@
(Being translated by runningwater)
Viper, the Python IoT Development Suite, is now Zerynth
============================================================
![](http://www.open-electronics.org/wp-content/uploads/2016/02/Logo_Zerynth-636x144.png)
The startup that launched the tools to develop embedded solutions in Python language announced the brand change along with the first official release.
>Exactly one year after the Kickstarter launch of the suite for developing Internet of Things solutions in Python language, **Viper becomes Zerynth**. It is definitely a big day for the startup that created a radically new way to approach the world of microcontrollers and connected devices, making professionals and makers able to design interactive solutions with reduced efforts and shorter time.
>“We really believe in the uniqueness of our tools, and this is why they deserve adequate recognition. Viper was a great name for a product, but other notable companies had the same feeling many decades ago, with the result that this term was shared with too many other actors out there. We are grown now, and ready to take off fast and light, like the design processes that our tools are enabling,” say the Viper (now Zerynth) co-founders.
>**Thousands of users** developed amazing connected solutions in just 9 months of the Beta version's life. Built to be cross-platform, Zerynth's tools are meant for high-level design of Internet/cloud-connected devices, interactive objects and artistic installations. They are: **Zerynth Studio**, a browser-based IDE for programming embedded devices in Python with cloud sync and board management features; **Zerynth Virtual Machine**, a multithreaded real-time OS that provides real hardware independence, allowing code reuse on the entire ARM architecture; and **Zerynth App**, a general-purpose interface that turns any mobile into the controller and display for smart objects and IoT systems.
>This modular set of tools, adaptable to different hardware and cloud infrastructures, can dramatically reduce the time to market and the overall development costs for makers, professionals and companies.
>Now Zerynth celebrates its new name launching the **first official release** of the toolkit. Check it here [www.zerynth.com][1]
![](http://www.open-electronics.org/wp-content/uploads/2016/02/Zerynth-Press-Release_Studio-Img-768x432.png)
--------------------------------------------------------------------------------
via: http://www.open-electronics.org/viper-the-python-iot-development-suite-is-now-zerynth/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+OpenElectronics+%28Open+Electronics%29
Author: [Staff][a]
Translator: [runningwater](https://github.com/runningwater)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.open-electronics.org/author/staff/
[1]: http://www.zerynth.com/

@@ -1,90 +0,0 @@
Being translated by ping
Top 5 open source command shells for Linux
===============================================
keywords: shell, Linux, bash, zsh, fish, ksh, tcsh, license
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/terminal_blue_smoke_command_line_0.jpg?itok=u2mRRqOa)
There are two kinds of Linux users: the cautious and the adventurous.
On one side is the user who almost reflexively tries out every new option that hits the scene. They've tried handfuls of window managers, dozens of distributions, and every new desktop widget they can find.
On the other side is the user who finds something they like and sticks with it. They tend to like their distribution's defaults. If they're passionate about a text editor, it's whichever one they mastered first.
As a Linux user, both on the server and the desktop, for going on fifteen years now, I am definitely more in the second category than the first. I have a tendency to use what's presented to me, and I like the fact that this means more often than not I can find thorough documentation and examples of most any use case I can dream up. If I used something non-standard, the switch was carefully researched and often predicated on a strong pitch from someone I trust.
But that doesn't mean I don't like to sometimes try and see what I'm missing. So recently, after years of using the bash shell without even giving it a thought, I decided to try out four alternative shells: ksh, tcsh, zsh, and fish. All four were easy installs from my default repositories in Fedora, and they're likely already packaged for your distribution of choice as well.
Here's a little bit on each option and why you might choose it to be your next Linux command-line interpreter.
### bash
First, let's take a look back at the familiar. [GNU Bash][1], the Bourne Again Shell, has been the default in pretty much every Linux distribution I've used through the years. Originally released in 1989, bash has grown to easily become the most used shell across the Linux world, and it is commonly found in other Unix-like operating systems as well.
Bash is a perfectly respectable shell, and as you look for documentation on how to do various things across the Internet, almost invariably you'll find instructions which assume you are using a bash shell. But bash has some shortcomings, as anyone who has ever written a bash script that's more than a few lines can attest. It's not that you can't do something; it's that it's not always particularly intuitive (or at least elegant) to read and write. For some examples, see this list of [common bash pitfalls][2].
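For instance, one of the most common pitfalls is unquoted variable expansion, which a quick sketch can illustrate (the filename here is just an example):

```shell
# Word splitting: an unquoted variable expansion is split on whitespace,
# so a filename containing a space becomes two separate arguments.
file="my notes.txt"

count_args() { echo "$#"; }

count_args $file      # unquoted: the shell sees two arguments
count_args "$file"    # quoted: one argument, as intended
```

Quoting nearly every expansion by default avoids this whole class of bugs.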
That said, bash is probably here to stay for at least the near future, with its enormous install base and legions of both casual and professional system administrators who are already attuned to its usage and quirks. The bash project is available under a [GPLv3][3] license.
### ksh
[KornShell][4], also known by its command invocation, ksh, is an alternative shell that grew out of Bell Labs in the 1980s, written by David Korn. While originally proprietary software, later versions were released under the [Eclipse Public License][5].
Proponents of ksh list a number of ways in which they feel it is superior, including having a better loop syntax, cleaner exit codes from pipes, an easier way to repeat commands, and associative arrays. It's also capable of emulating many of the behaviors of vi or emacs, so if you are very partial to a text editor, it may be worth giving it a try. Overall, I found it to be very similar to bash for basic input, although for advanced scripting it would surely be a different experience.
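As a quick taste of one of those features, here is a minimal sketch of associative arrays using ksh's `typeset -A` declaration (bash 4+ accepts the same syntax; the keys and values below are invented for illustration):

```shell
# Associative arrays: map arbitrary string keys to values.
# 'typeset -A' declares the array in ksh (and in bash 4+).
typeset -A exit_meaning
exit_meaning[0]="success"
exit_meaning[1]="general error"
exit_meaning[127]="command not found"

echo "exit code 127 means: ${exit_meaning[127]}"
```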
### tcsh
[Tcsh][6] is a derivative of csh, the Berkeley Unix C shell, and sports a very long lineage back to the early days of Unix and computing itself.
The big selling point for tcsh is its scripting language, which should look very familiar to anyone who has programmed in C. Tcsh's scripting is loved by some and hated by others. But it has other features as well, including adding arguments to aliases, and various defaults that might appeal to your preferences, including the way autocompletion with tab and history tab completion work.
You can find tcsh under a [BSD license][7].
### zsh
[Zsh][8] is another shell which has similarities to bash and ksh. Originating in the early 90s, zsh sports a number of useful features, including spelling correction, theming, namable directory shortcuts, sharing your command history across multiple terminals, and various other slight tweaks from the original Bourne shell.
The code and binaries for zsh can be distributed under an MIT-like license, though portions are under the GPL; check the [actual license][9] for details.
### fish
I knew I was going to like the Friendly Interactive Shell, [fish][10], when I visited the website and found it described tongue-in-cheek with "Finally, a command line shell for the 90s"—fish was written in 2005.
The authors of fish offer a number of reasons to make the switch, all invoking a bit of humor and poking a bit of fun at shells that don't quite live up. Features include autosuggestions ("Watch out, Netscape Navigator 4.0"), support of the "astonishing" 256 color palette of VGA, but some actually quite helpful features as well including command completion based on the man pages on your machine, clean scripting, and a web-based configuration.
Fish is licensed primarily under the GPL version 2, but with portions under other licenses; check the repository for [complete information][11].
***
Looking for a more detailed rundown on the precise differences between each option? [This site][12] ought to help you out.
So where did I land? Well, ultimately, I'm probably going back to bash, because the differences were subtle enough that someone who mostly uses the command line interactively, as opposed to writing advanced scripts, really wouldn't benefit much from the switch, and I'm already pretty comfortable in bash.
But I'm glad I decided to come out of my shell (ha!) and try some new options. And I know there are many, many others out there. Which shells have you tried, and which one do you prefer? Let us know in the comments!
via: https://opensource.com/business/16/3/top-linux-shells
作者:[Jason Baker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jason-baker
[1]: https://www.gnu.org/software/bash/
[2]: http://mywiki.wooledge.org/BashPitfalls
[3]: http://www.gnu.org/licenses/gpl.html
[4]: http://www.kornshell.org/
[5]: https://www.eclipse.org/legal/epl-v10.html
[6]: http://www.tcsh.org/Welcome
[7]: https://en.wikipedia.org/wiki/BSD_licenses
[8]: http://www.zsh.org/
[9]: https://sourceforge.net/p/zsh/code/ci/master/tree/LICENCE
[10]: https://fishshell.com/
[11]: https://github.com/fish-shell/fish-shell/blob/master/COPYING
[12]: http://hyperpolyglot.org/unix-shells

View File

@ -1,4 +1,3 @@
zpl1025
15 podcasts for FOSS fans
=============================

View File

@ -1,4 +1,3 @@
Translating by yuba0604
Healthy Open Source
============================

View File

@ -1,3 +1,4 @@
translating by kylepeng93
A newcomer's guide to navigating OpenStack Infrastructure
===========================================================

View File

@ -1,3 +1,4 @@
[Translating by itsang]
4 Container Networking Tools to Know
=======================================

View File

@ -1,51 +0,0 @@
Translating KevinSJ
An introduction to data processing with Cassandra and Spark
==============================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_520x292_opendata_0613mm.png?itok=mzC0Tb28)
There's been a huge surge of interest around the Apache Cassandra database due to the increasing uptime and performance demands of modern cloud applications.
So, what is Apache Cassandra? A distributed OLTP database built for high availability and linear scalability. When people ask what Cassandra is used for, think about the type of system you want close to the customer. This is ultimately the system that our users interact with. Applications that must always be available: product catalogs, IoT, medical systems, and mobile applications. In these categories downtime can mean loss of revenue or even more dire outcomes depending on your specific use case. Netflix was one of the earliest adopters of this project, which was open sourced in 2008, and their contributions, along with successes, put it on the radar of the masses.
Cassandra became a top-level Apache Software Foundation project in 2010 and has been riding the wave of popularity since then. Now even knowledge of Cassandra gets you serious returns in the job market. It's both crazy and awesome to consider that a NoSQL and open source technology could perform this sort of disruption next to the giants of enterprise SQL. This raises the question: what makes it so popular?
Cassandra has the ability to be always on in spite of massive hardware and network failures by utilizing a design first widely discussed in [the Dynamo paper from Amazon][1]. By using a peer to peer model, with no single point of failure, we can survive rack failure and even complete network partitions. We can deal with an entire data center failure without impacting our customer's experience. A distributed system that plans for failure is a properly planned distributed system, because frankly, failures are just going to happen. With Cassandra, we accept that cruel fact of life, and bake it into the database's architecture and functionality.
We know what you're thinking: "But, I'm coming from a relational background, isn't this going to be a daunting transition?" The answer is somewhat yes and no. Data modeling with Cassandra will feel familiar to developers coming from the relational world. We use tables to model our data, and CQL, the Cassandra Query Language, to query the database. However, unlike SQL, Cassandra supports more complex data structures such as nested and user-defined types. For instance, instead of creating a dedicated table to store likes on a cat photo, we can store that data in a collection with the photo itself, enabling faster, sequential lookups. That's expressed very naturally in CQL. In our photo table we may want to track the name, URL, and the people that liked the photo.
![](https://opensource.com/sites/default/files/resize/screen_shot_2016-05-06_at_7.17.33_am-350x198.png)
In a high performance system milliseconds matter for both user experience and for customer retention. Expensive JOIN operations limit our ability to scale out by adding unpredictable network calls. By denormalizing our data so it can be fetched in as few requests as possible, we profit from the trend of decreasing costs in disk space and in return get predictable, high performance applications. We embrace the concept of denormalization with Cassandra because it offers a pretty appealing tradeoff.
We're obviously not just limited to storing likes on cat photos. Cassandra is optimized for high write throughput. This makes it the perfect solution for big data applications where we're constantly ingesting data. Time series and IoT use cases are growing at a steady rate in both demand and appearance in the market, and we're continuously finding ways to utilize the data we collect to improve our technological application.
This brings us to the next step: we've talked about storing our data in a modern, cost-effective fashion, but how do we get even more horsepower? Meaning, once we've collected all that data, what do we do with it? How can we analyze hundreds of terabytes efficiently? How can we react to information we're receiving in real-time, making decisions in seconds rather than hours? Enter Apache Spark.
Spark is the next step in the evolution of big data processing. Hadoop and MapReduce were revolutionary projects, giving the big data world an opportunity to crunch all the data we've collected. Spark takes our big data analysis to the next level by drastically improving performance and massively decreasing code complexity. Through Spark, we can perform massive batch processing calculations, react quickly to stream processing, make smart decisions through machine learning, and understand complex, recursive relationships through graph traversals. It's not just about offering your customers a fast and reliable connection to their application (which is what Cassandra offers), it's also about being able to leverage insights from the data Cassandra stores to make more intelligent business decisions and better cater to customer needs.
You can check out the [Spark-Cassandra Connector][2] (open source) and give it a shot. To learn more about both technologies, we highly recommend the free self-paced courses on [DataStax Academy][3].
Have fun digging in and learning some killer new technology! If you want to learn more, check out our [OSCON tutorial][4], with a hands on exploration into the worlds of both Cassandra and Spark.
We also love taking questions on Twitter, so give us a shout and we'll try to help: [Dani][5] and [Jon][6].
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/5/basics-cassandra-and-spark-data-processing
作者:[Jon Haddad][a],[Dani Traphagen][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twitter.com/rustyrazorblade
[b]: https://opensource.com/users/dtrapezoid
[1]: http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf
[2]: https://github.com/datastax/spark-cassandra-connector
[3]: https://academy.datastax.com/
[4]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/49162
[5]: https://twitter.com/dtrapezoid
[6]: https://twitter.com/rustyrazorblade

View File

@ -1,4 +1,4 @@
[Translating By cposture 20160520]
[Translating by cposture 2016.06.29]
Data Structures in the Linux Kernel
================================================================================

View File

@ -1,4 +1,5 @@
[Translating by cposture]
[Cathon is Translating...]
Python 3: An Intro to Encryption
===================================

View File

@ -1,4 +1,4 @@
Translating by strugglingyouth
Translating by GitFuture
Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04
=====================================================================================

View File

@ -0,0 +1,67 @@
Microfluidic cooling may prevent the demise of Moore's Law
============================================================
![](http://tr1.cbsistatic.com/hub/i/r/2015/12/09/a7cb82d1-96e8-43b5-bfbd-d4593869b230/resize/620x/9607388a284e3a61a39f4399a9202bd7/networkingistock000042544852agsandrew.jpg)
>Image: iStock/agsandrew
Existing technology's inability to keep microchips cool is fast becoming the number one reason why [Moore's Law][1] may soon meet its demise.
In the ongoing need for digital speed, scientists and engineers are working hard to squeeze more transistors and support circuitry onto an already-crowded piece of silicon. However, as complex as that seems, it pales in comparison to the [problem of heat buildup][2].
"Right now, we're limited in the power we can put into microchips," says John Ditri, principal investigator at Lockheed Martin in [this press release][3]. "One of the biggest challenges is managing the heat. If you can manage the heat, you can use fewer chips, and that means using less material, which results in cost savings as well as reduced system size and weight. If you manage the heat and use the same number of chips, you'll get even greater performance in your system."
Resistance to the flow of electrons through silicon causes the heat, and packing so many transistors in such a small space creates enough heat to destroy components. One way to eliminate heat buildup is to reduce the flow of electrons by [using photonics at the chip level][4]. However, photonic technology is not without its set of problems.
SEE: [Silicon photonics will revolutionize data centers in 2015][5]
### Microfluid cooling might be the answer
To seek out other solutions, the Defense Advanced Research Projects Agency (DARPA) has initiated a program called [ICECool Applications][6] (Intra/Interchip Enhanced Cooling). "ICECool is exploring disruptive thermal technologies that will mitigate thermal limitations on the operation of military electronic systems while significantly reducing the size, weight, and power consumption," explains the [GSA website FedBizOpps.gov][7].
What is unique about this method of cooling is the push to use a combination of intra- and/or inter-chip microfluidic cooling and on-chip thermal interconnects.
![](http://tr4.cbsistatic.com/hub/i/r/2016/05/25/fd3d0d17-bd86-4d25-a89a-a7050c4d59c4/resize/300x/e9c18034bde66526310c667aac92fbf5/microcooling-1.png)
>MicroCooling 1 Image: DARPA
The [DARPA ICECool Application announcement][8] notes, "Such miniature intra- and/or inter-chip passages (see right) may take the form of axial micro-channels, radial passages, and/or cross-flow passages, and may involve micro-pores and manifolded structures to distribute and re-direct liquid flow, including in the form of localized liquid jets, in the most favorable manner to meet the specified heat flux and heat density metrics."
Using the above technology, engineers at Lockheed Martin have experimentally demonstrated how on-chip cooling is a significant improvement. "Phase I of the ICECool program verified the effectiveness of Lockheed's embedded microfluidic cooling approach by showing a four-times reduction in thermal resistance while cooling a thermal demonstration die dissipating 1 kW/cm2 die-level heat flux with multiple local 30 kW/cm2 hot spots," mentions the Lockheed Martin press release.
In phase II of the Lockheed Martin project, the engineers focused on RF amplifiers. The press release continues, "Utilizing its ICECool technology, the team has been able to demonstrate greater than six times increase in RF output power from a given amplifier while still running cooler than its conventionally cooled counterpart."
### Moving to production
Confident of the technology, Lockheed Martin is already designing and building a functional microfluidic cooled transmit antenna. Lockheed Martin is also collaborating with Qorvo to integrate its thermal solution with Qorvo's high-performance [GaN process][9].
The authors of the research paper [DARPA's Intra/Interchip Enhanced Cooling (ICECool) Program][10] suggest ICECool Applications will produce a paradigm shift in the thermal management of electronic systems. "ICECool Apps performers will define and demonstrate intra-chip and inter-chip thermal management approaches that are tailored to specific applications and this approach will be consistent with the materials sets, fabrication processes, and operating environment of the intended application."
If this microfluidic technology is as successful as scientists and engineers suggest, it seems Moore's Law does have a fighting chance.
--------------------------------------------------------------------------------
via: http://www.techrepublic.com/article/microfluidic-cooling-may-prevent-the-demise-of-moores-law/
作者:[Michael Kassner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techrepublic.com/search/?a=michael+kassner
[1]: http://www.intel.com/content/www/us/en/history/museum-gordon-moore-law.html
[2]: https://books.google.com/books?id=mfec2Zw_b7wC&pg=PA154&lpg=PA154&dq=does+heat+destroy+transistors&source=bl&ots=-aNdbMD7FD&sig=XUUiaYG_6rcxHncx4cI4Cqe3t20&hl=en&sa=X&ved=0ahUKEwif4M_Yu_PMAhVL7oMKHW3GC3cQ6AEITTAH#v=onepage&q=does%20heat%20destroy%20transis
[3]: http://www.lockheedmartin.com/us/news/press-releases/2016/march/160308-mst-cool-technology-turns-down-the-heat-on-high-tech-equipment.html
[4]: http://www.techrepublic.com/article/silicon-photonics-will-revolutionize-data-centers-in-2015/
[5]: http://www.techrepublic.com/article/silicon-photonics-will-revolutionize-data-centers-in-2015/
[6]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
[7]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
[8]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
[9]: http://electronicdesign.com/communications/what-s-difference-between-gaas-and-gan-rf-power-amplifiers
[10]: http://www.csmantech.org/Digests/2013/papers/050.pdf

View File

@ -1,180 +0,0 @@
Vic020
How to Add Cron Jobs in Linux and Unix
======================================
![](https://www.unixmen.com/wp-content/uploads/2016/05/HOW-TO-ADD-CRON-JOBS-IN-LINUX-AND-UNIX-696x334.png)
### Introduction
![](http://www.unixmen.com/wp-content/uploads/2016/05/cronjob.gif)
Cron jobs are used to schedule commands to be executed periodically. You can set up commands or scripts which will repeatedly run at a set time. Cron is one of the most useful tools in Linux or UNIX-like operating systems. The cron service (daemon) runs in the background and constantly checks the /etc/crontab file and /etc/cron.*/ directories. It also checks the /var/spool/cron/ directory.
### Command of crontab
crontab is the command used to install, deinstall, or list the tables (cron configuration files) used to drive the [cron(8)][1] daemon in Vixie Cron. Each user can have their own crontab file, and though these are files in /var/spool/cron/crontabs, they are not intended to be edited directly. You need to use the crontab command to edit or set up your own cron jobs.
### Types of cron configuration files
There are different types of configuration files:
- **The UNIX / Linux system crontab**: Usually used by system services and critical jobs that require root-like privileges. The sixth field (see below for field description) is the name of a user for the command to run as. This gives the system crontab the ability to run commands as any user.
- **The user crontabs**: Users can install their own cron jobs using the crontab command. The sixth field is the command to run, and all commands run as the user who created the crontab.
**Note**: This FAQ features the cron implementation written by Paul Vixie and included in many [Linux][2] distributions and Unix-like systems such as the popular 4th BSD edition. The syntax is [compatible][3] with various implementations of crond.
### How do I install or create or edit my own cron jobs?
To edit your crontab file, type the following command at the UNIX / Linux shell prompt:
```
$ crontab -e
```
### Syntax of crontab (field description)
The syntax is:
```
1 2 3 4 5 /path/to/command arg1 arg2
```
OR
```
1 2 3 4 5 /root/ntp_sync.sh
```
Where,
- 1: Minute (0-59)
- 2: Hours (0-23)
- 3: Day of month (1-31)
- 4: Month (1-12 [12 == December])
- 5: Day of the week (0-7 [7 or 0 == Sunday])
- /path/to/command: script or command name to schedule
Easy to remember format:
```
* * * * * command to be executed
| | | | |
| | | | +----- Day of week (0 - 7) (Sunday = 0 or 7)
| | | +------- Month (1 - 12)
| | +--------- Day of month (1 - 31)
| +----------- Hour (0 - 23)
+------------- Minute (0 - 59)
```
Example simple crontab:
```
## Run backupscript every 5 minutes ##
*/5 * * * * /root/backupscript.sh
## Run backupscript daily at 1:00 am ##
0 1 * * * /root/backupscript.sh
## Run backupscript monthly on the 1st of the month at 3:15 am ##
15 3 1 * * /root/backupscript.sh
```
### How do I use operators?
An operator allows you to specify multiple values in a field. There are four operators:
- **The asterisk (*)**: This operator specifies all possible values for a field. For example, an asterisk in the hour field is equivalent to every hour, and an asterisk in the month field is equivalent to every month.
- **The comma (,)**: This operator specifies a list of values, for example: “1,5,10,15,20,25”.
- **The dash (-)**: This operator specifies a range of values, for example: “5-15” days, which is equivalent to typing “5,6,7,8,9,…,13,14,15” using the comma operator.
- **The separator (/)**: This operator specifies a step value, for example: “0-23/2” can be used in the hours field to specify command execution every other hour. Steps are also permitted after an asterisk, so if you want to say every two hours, just use */2.
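Putting the operators together, here are a few illustrative crontab lines (the paths are placeholders):

```
## Run every 15 minutes, using the step operator ##
*/15 * * * * /path/to/command
## Run at minute 0 of every hour from 9 am to 5 pm, Monday through Friday ##
0 9-17 * * 1-5 /path/to/command
## Run at 6:30 am on the 1st and 15th of the month, using the comma operator ##
30 6 1,15 * * /path/to/command
```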
### Use special strings to save time
Instead of the first five fields, you can use any one of eight special strings. It will not just save your time but it will improve readability.
| Special string | Meaning |
|:-- |:-- |
| @reboot | Run once, at startup. |
| @yearly | Run once a year, “0 0 1 1 *”. |
| @annually | (same as @yearly) |
| @monthly | Run once a month, “0 0 1 * *”. |
| @weekly | Run once a week, “0 0 * * 0”. |
| @daily | Run once a day, “0 0 * * *”. |
| @midnight | (same as @daily) |
| @hourly | Run once an hour, “0 * * * *”. |
Examples
```
#### Run ntpdate command every hour ####
@hourly /path/to/ntpdate
```
### More about /etc/crontab file and /etc/cron.d/* directories
/etc/crontab is the system crontab file. It is usually only used by the root user or daemons to configure system-wide jobs. All individual users must use the crontab command to install and edit their jobs as described above. /var/spool/cron/ or /var/cron/tabs/ is the directory for personal user crontab files. It should be backed up with the user's home directory.
Understanding Default /etc/crontab
Typical /etc/crontab file entries:
```
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
```
First, the environment must be defined. If the SHELL line is omitted, cron will use the default, which is sh. If the PATH variable is omitted, no default will be used and file locations will need to be absolute. If HOME is omitted, cron will use the invoking user's home directory.
Additionally, cron reads the files in the /etc/cron.d/ directory. Usually system daemons such as sa-update or sysstat place their cron jobs here. As the root user or superuser you can use the following directories to configure cron jobs. You can directly drop your scripts here. The run-parts command runs the scripts or programs in a directory via the /etc/crontab file:
| Directory | Description |
|:-- |:-- |
| /etc/cron.d/ | Put all scripts here and call them from the /etc/crontab file. |
| /etc/cron.daily/ | Run all scripts once a day |
| /etc/cron.hourly/ | Run all scripts once an hour |
| /etc/cron.monthly/ | Run all scripts once a month |
| /etc/cron.weekly/ | Run all scripts once a week |
### Backup cronjob
```
# crontab -l > /path/to/file
# crontab -u user -l > /path/to/file
```
--------------------------------------------------------------------------------
via: https://www.unixmen.com/add-cron-jobs-linux-unix/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+unixmenhowtos+%28Unixmen+Howtos+%26+Tutorials%29
作者:[Duy NguyenViet][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.unixmen.com/author/duynv/
[1]: http://www.manpager.com/linux/man8/cron.8.html
[2]: http://www.linuxsecrets.com/
[3]: http://www.linuxsecrets.com/linux-hardware/

View File

@ -0,0 +1,101 @@
How to record your terminal session on Linux
=================================================
Recording a terminal session may be important in helping someone learn a process, sharing information in an understandable way, and also presenting a series of commands in a proper manner. Whatever the purpose, there are many times when copy-pasting text from the terminal won't be very helpful, while capturing a video of the process is quite far-fetched and may not always be possible. In this quick guide, we will take a look at the easiest way to record and share a terminal session in .gif format.
### Prerequisites
If you just want to record your terminal sessions and be able to play the recording in your terminal, or share them with people who will use a terminal for playback, then the only tool that you'll need is called “ttyrec”. Ubuntu users may install it by inserting the following command on a terminal:
```
sudo apt-get install ttyrec
```
If you want to produce a .gif file from the recording and be able to share it with people who don't use the terminal, publish it on websites, or simply keep a .gif handy for when you'll need it instead of written commands, you will have to install two additional packages. The first one is “imagemagick” which you can install with:
```
sudo apt-get install imagemagick
```
and the second one is “tty2gif” which can be downloaded from here. The latter has a dependency that can be satisfied with:
```
sudo apt-get install python-opster
```
### Capturing
To start capturing the terminal session, all you need to do is simply run “ttyrec” and press Enter. This will launch the real-time recording tool, which will run in the background until we enter “exit” or press “Ctrl+D”. By default, ttyrec creates a file named “ttyrecord” in the current directory of the terminal session, which by default is your home directory.
![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_1.jpg)
![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_2.jpg)
![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_3.jpg)
### Playing
Playing the file is as simple as opening a terminal on the destination of the “ttyrecord” file and using the “ttyplay” command followed by the name of the recording (in our case it's ttyrecord but you may change this into whatever you want).
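For example, recording into a custom filename and then playing it back looks like this (the filename “demo_session” is just an example):

```
$ ttyrec demo_session      # record into "demo_session" instead of the default "ttyrecord"
$ exit                     # end the recording
$ ttyplay demo_session     # replay the session in real time
```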
![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_4.jpg)
This will result in the playback of the recorded session, in real-time, and with typing corrections included (all actions are recorded). This will look like a completely normal automated terminal session, but the commands and their apparent execution are obviously not really applied to the system, as they are only reproduced as a recording.
It is also important to note that the playback of the terminal session recording is completely controllable. You may double the playback speed by hitting the “+” button, slow it down with the “-” button, pause it with “0”, and resume it in normal speed with “1”.
### Converting into a .gif
For reasons of convenience, many of us would like to convert the recorded session into a .gif file, and that is very easy to do. Here's how:
First, untar the downloaded “tty2gif.tar.bz2” by opening a terminal in the download location and entering the following command:
```
tar xvfj tty2gif.tar.bz2
```
Next, copy the resulting “tty2gif.py” file into the destination of the “ttyrecord” file (or whatever name you've specified), and then open a terminal on that destination and type the command:
```
python tty2gif.py typing ttyrecord
```
If you are getting errors in this step, check that you have installed the “python-opster” package. If errors persist, give the following two commands consecutively:
```
sudo apt-get install xdotool
export WINDOWID=$(xdotool getwindowfocus)
```
then repeat the “python tty2gif.py typing ttyrecord” command, and you should now see a number of gif files created in the location of the “ttyrecord” file.
![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_5.jpg)
The next step is to unify all these gifs that correspond to individual terminal session actions into one final .gif file using the imagemagick utility. To do this, open a terminal on the destination and insert the following command:
```
convert -delay 25 -loop 0 *.gif example.gif
```
![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/pic_6.jpg)
You may name the resulting file as you like (I used “example.gif”), and you may change the delay and loop settings as needed. Here is the resulting file of this quick tutorial:
![](https://www.howtoforge.com/images/how-to-record-your-terminal-session-on-linux/example.gif)
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-record-your-terminal-session-on-linux/
作者:[Bill Toulas][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twitter.com/howtoforgecom

View File

@ -1,98 +0,0 @@
vlock A Smart Way to Lock User Virtual Console or Terminal in Linux
=======================================================================
Virtual consoles are very important features of Linux: they provide a system user a shell prompt to use the system in a non-graphical setup, which can only be used on the physical machine, not remotely.
A user can use several virtual console sessions at the same time just by switching from one virtual console to another.
![](http://www.tecmint.com/wp-content/uploads/2016/05/vlock-Lock-User-Terminal-in-Linux.png)
>vlock Lock User Console or Terminal in Linux
In this how-to guide, we shall look at how to lock a user's virtual console or terminal in Linux systems using the vlock program.
### What is vlock?
vlock is a utility used to lock one or more user virtual console sessions. vlock is important on a multi-user system: it allows users to lock their own sessions while other users can still use the same system via other virtual consoles. Where necessary, the entire console can be locked down and virtual console switching disabled.
vlock primarily works on console sessions, and it also has support for locking non-console sessions, but this has not been fully tested.
### Installing vlock in Linux
To install vlock program on your respective Linux systems, use:
```
# yum install vlock [On RHEL / CentOS / Fedora]
$ sudo apt-get install vlock [On Ubuntu / Debian / Mint]
```
### How to use vlock in Linux
There are a few options that you can use with vlock, and the general syntax is:
```
# vlock option
# vlock option plugin
# vlock option -t <timeout> plugin
```
#### vlock common options and usage:
1. To lock current virtual console or terminal session of user, run the following command:
```
# vlock --current
```
![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-User-Terminal-Session-in-Linux.png)
>Lock User Terminal Session in Linux
The option -c or --current locks the current session; this is the default behavior when you run vlock.
2. To lock all your virtual console sessions and also disable virtual console switching, run the command below:
```
# vlock --all
```
![](http://www.tecmint.com/wp-content/uploads/2016/05/Lock-All-Linux-Terminal-Sessions.png)
>Lock All Linux Terminal Sessions
The option -a or --all locks all of the user's console sessions and also disables virtual console switching.
The following options only work when vlock was compiled with plugin support:
3. The option -n or --new switches to a new virtual console before the user's console sessions are locked.
```
# vlock --new
```
4. The option -s or --disable-sysrq disables the SysRq mechanism while the virtual consoles are locked; it works only when -a or --all is also given.
```
# vlock -sa
```
5. The option -t or --timeout <time_in_seconds> sets a timeout for the screensaver plugin.
```
# vlock --timeout 5
```
You can use `-h` or `--help` and `-v` or `--version` to view help messages and version respectively.
We shall leave it at that. Also note that you can create a `~/.vlockrc` file, which the vlock program reads at startup, and [add the environment variables][1] described in the manual page; this is especially useful for users of Debian-based distros.
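As an illustrative sketch only (the variable names here are assumptions; check `man vlock` for the exact set your build supports), a `~/.vlockrc` might look like this:

```
# ~/.vlockrc -- read by vlock at startup on Debian-based distros
# NOTE: the variable names below are illustrative assumptions;
# consult `man vlock` for the ones your build actually supports.

# message shown on the locked console
VLOCK_MESSAGE="This console is locked. Enter your password to unlock."

# seconds before the screensaver plugin activates
VLOCK_TIMEOUT=300
```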
To find out more or add any information which may not be included here, simply drop a message below in the comment section.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/vlock-lock-user-virtual-console-terminal-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/set-path-variable-linux-permanently/

10 Basic Linux Commands That Every Linux Newbies Should Remember
=====================================================================
![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/4225072_orig.png)
[Linux][1] has a big impact on our lives. At the very least, your Android phone runs the Linux kernel. However, getting started with Linux can feel uncomfortable at first, because on Linux you usually use terminal commands instead of just clicking launcher icons (as you did on Windows). But don't worry, we will give you 10 basic and important Linux commands that will help you get started.
### 10 Basic Linux Commands That Help Newbies Get Started
When we talk about Linux commands, we are really talking about the Linux system itself. These 10 basic Linux commands will not make you a genius or a Linux expert, but they will help you get started with Linux and perform daily basic tasks.
So let's get started with the list of 10 Linux Basic commands -
#### 1. sudo
sudo (SuperUserDo) is the most important command Linux newbies will use. Every command that needs root permissions needs the sudo command; you can put sudo before each command that requires root permissions -
```
$ sudo su
```
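As a quick illustration (not from the original article), you can check whether you are already root before reaching for sudo; `id -u` prints 0 for the root user:

```
# record the effective user id; 0 means root
uid=$(id -u)
if [ "$uid" -eq 0 ]; then
    echo "already root, no sudo needed"
else
    echo "regular user, prefix privileged commands with sudo"
fi
```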
#### 2. ls (list)
Like everyone else, you will often want to see what is in your directory. With the list command, the terminal shows you all the files and folders of the directory that you're working in. Let's say I'm in the /home folder and I want to see the directories & files in /home.
```
/home$ ls
```
ls in /home returns the following -
```
imad lost+found
```
#### 3. cd
Changing directory (cd) is the command that is always in use in the terminal, and it's one of the most basic Linux commands. Using it is easy: just type the name of the folder you want to enter from your current directory. To go up one level, give double dots (..) as the parameter.
Let's say I'm in the /home directory and I want to move into the usr directory inside /home. Here is how I can use the cd command -
```
/home $ cd usr
/home/usr $
```
#### 4. mkdir
Just changing directories is still incomplete. Sometimes you want to create a new folder or subfolder. You can use the mkdir command to do that; just give your folder name after mkdir in your terminal.
```
~$ mkdir folderName
```
#### 5. cp
Copy-and-paste is an important task for organizing your files, and using cp will help you copy a file from the terminal. First give the file you want to copy, then type the destination location to paste it to.
```
$ cp src des
```
Note: If you're copying files into a directory that requires root permission for any new file, you'll need to use the sudo command.
#### 6. rm
rm is the command to remove files or even directories. You can use -f to force removal without prompting (for example, for write-protected files), and -r to remove a folder and its contents recursively.
```
$ rm myfile.txt
```
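The mkdir, cp and rm commands above combine naturally; here is a small self-contained session in a throwaway temporary directory (all the file names are made up for the example):

```
# work inside a temporary directory so nothing real is touched
workdir=$(mktemp -d)
cd "$workdir"

mkdir backup                # create a folder
echo "hello" > notes.txt    # make a small file to play with
cp notes.txt backup/        # copy the file into the folder
ls backup                   # prints: notes.txt
rm notes.txt                # remove the original file
rm -r backup                # remove the folder recursively

cd /
rm -r "$workdir"            # clean up the whole demo directory
```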
#### 7. apt-get
This command differs from distro to distro. On Debian-based Linux distributions, we have the Advanced Packaging Tool (APT) package manager to install, remove and upgrade any package. The apt-get command helps you install the software you need on your Linux system; it is a powerful command-line tool which can install, upgrade, and even remove your software.
Other distributions, such as Fedora and CentOS, have different package managers. Fedora used to have yum, but now it has dnf.
```
$ sudo apt-get update
$ sudo dnf update
```
#### 8. grep
You need to find a piece of text but you don't remember exactly where it is. grep helps you solve this problem: it searches the contents of files for lines matching a given keyword. (To search for a file by name, use the find command instead.)
```
$ grep user /etc/passwd
```
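To make the distinction concrete: grep matches lines inside files, it does not search for file names. A minimal self-contained demonstration (the sample file is created just for this example):

```
# create a small sample file for the demonstration
cat > /tmp/fruits.txt <<'EOF'
apple red
banana yellow
cherry red
EOF

# print every line containing the pattern "red"
grep red /tmp/fruits.txt        # prints the apple and cherry lines

# -i ignores case, -n adds line numbers to each match
grep -in RED /tmp/fruits.txt

rm /tmp/fruits.txt              # clean up
```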
#### 9. cat
As a user, you often need to view some text or code from a script. Again, one of the basic Linux commands is the cat command; it shows you the text inside your file.
```
$ cat CMakeLists.txt
```
#### 10. poweroff
And the last one is poweroff. Sometimes you need to power off directly from your terminal, and this command will do the task. Don't forget to add sudo at the beginning of the command, since poweroff needs root permission.
```
$ sudo poweroff
```
### Conclusion
As I mentioned at the start of the article, these 10 basic Linux commands will not make you a Linux geek immediately, but they will help you start using Linux at this early stage. With these basic commands, start using Linux and set a target of learning 1-3 new commands daily. That's all for this article; I hope it helped you. Share interesting and useful commands in the comment section below, and don't forget to share this article with your friends.
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember
作者:[Commenti][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember#comments
[1]: http://linuxandubuntu.com/home/category/linux

6 Amazing Linux Distributions For Kids
======================================
Linux and open source are the future, and there is no doubt about that. To see this become a reality, a strong foundation has to be laid, starting from the lowest level possible: exposing kids to Linux and teaching them how to use Linux operating systems.
![](http://www.tecmint.com/wp-content/uploads/2016/05/Linux-Distros-For-Kids.png)
>Linux Distros For Kids
Linux is a very powerful operating system, and that is one of the reasons why it powers so many servers on the Internet. Though there have been concerns about its user friendliness, which have fueled debate about whether it will overtake Mac OS X and Windows on desktop computers, I think users need to accept Linux as it is to realize its real power.
Today, Linux powers a lot of machines out there, from mobile phones, to tablets, laptops, workstations, servers, supercomputers, cars, air traffic control systems, refrigerators and many more. With all this and more yet to come in the near future, as I had already stated at the beginning, Linux is the operating system for future computing.
>Read Also: [30 Big Companies and Devices Running on Linux][1]
Because the future belongs to the kids of today, introducing them to the technologies that will shape that future is the way to go. They should therefore be introduced at an early stage to start learning computer technologies, and Linux in particular.
One thing common to all children is curiosity, and early learning can help instill a spirit of exploration in them when the learning environment is designed to suit them.
Having looked at some quick reasons why kids should learn Linux, let us now go through a list of exciting Linux distributions that you can introduce your kids to, so that they can start using and learning Linux.
### Sugar on a Stick
It is a project by Sugar Labs, a non-profit organization led by volunteers, that aims to design free tools to support learning among children by helping them gain skills in exploring, discovering, creating and reflecting on ideas.
![](http://www.tecmint.com/wp-content/uploads/2016/05/Sugar-Neighborhood-View.png)
>Sugar Neighborhood View
![](http://www.tecmint.com/wp-content/uploads/2016/05/Sugar-Activity-Library.png)
>Sugar Activity Library
You can think of sugar as both a desktop and a collection of learning activities that help encourage active involvement from children who are learning.
Visit Homepage: <https://www.sugarlabs.org/>
### Edubuntu
This is a grassroots project based on the most popular Linux distribution today, Ubuntu. It is intended to help schools, homes and communities easily install and use free Ubuntu software.
![](http://www.tecmint.com/wp-content/uploads/2016/05/Edubuntu-Apps.jpg)
>Edubuntu Desktop Apps
It is supported by different groups of students, teachers, parents, stakeholders and also hackers who believe in free learning, in sharing knowledge for self-improvement, and in community-based development.
The main aim of the project is to assemble a system that can offer free software to enhance learning and education by making it easy for users to install and also maintain software.
Visit Homepage: <http://www.edubuntu.org/>
### Doudou Linux
It is designed specifically for children, making a computer easy to use while building creative thinking in them. It provides simple yet educative applications that allow kids to learn and discover new ideas while using it.
![](http://www.tecmint.com/wp-content/uploads/2016/05/Doudou-Linux.png)
>Doudou Linux
One important thing about Doudou Linux is its content-filtering feature, which prevents children from visiting restricted content on the web. For further protection, it also preserves children's privacy on the Internet, automatically removes ads from web pages, and more.
Visit Homepage: <http://www.doudoulinux.org/>
### LinuxKidX
It is a LiveCD based on Slackware Linux with a long list of educational software for kids to learn from. It uses KDE as the default desktop environment and includes software such as KTouch (a typing tutor), KStars (a virtual planetarium), Kalzium (a periodic table), and KWordQuiz, among others.
![](http://www.tecmint.com/wp-content/uploads/2016/05/LinuxKidX.jpg)
>LinuxKidX
Visit Homepage: <http://linuxkidx.blogspot.in/>
### Ubermix
It is free software built from the ground up based on Ubuntu Linux and intended for educational purposes. It comes with over 60 free applications preinstalled and helps make learning and teaching easy for students and teachers respectively.
![](http://www.tecmint.com/wp-content/uploads/2016/05/ubermix.png)
>Ubermix Linux
Some of its features include a 5-minute installation and a quick recovery mechanism that takes only a few seconds. It should work well for teenage children.
Visit Homepage: <http://www.ubermix.org/>
### Qimo
I have added this to the list because many readers are expected to ask about Qimo; however, as of this writing, the Qimo for Kids development team has retired from the project, so no more development is expected.
![](http://www.tecmint.com/wp-content/uploads/2016/05/Qimo-Linux.png)
>Qimo Linux
But you can still find most of its games for kids in Ubuntu and other Linux distributions. As they have mentioned, they are not done working on educational software for kids: they are developing an Android application for children to improve their literacy skills.
You can read more from their official website and expect more from them in the future.
Visit Homepage: <http://www.qimo4kids.com/>
That is it for now. In case there are more Linux operating systems intended for kids out there which I have not included in this list, you can let us know by leaving a comment.
You can also let us know of what you think of introducing kids to Linux and the future of Linux especially on Desktop computers.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/best-linux-distributions-for-kids/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/big-companies-and-devices-running-on-gnulinux/

Monitor Linux With Netdata
===
Netdata is a real-time resource monitoring tool with a friendly web front-end developed and maintained by [FireHOL][1]. With this tool, you can read charts representing resource utilization of things like CPUs, RAM, disks, network, Apache, Postfix and more. It is similar to other monitoring software like Nagios; however, Netdata is only for real-time monitoring via a web interface.
### Understanding Netdata
There's currently no authentication, so if you're concerned about someone getting information about the applications you're running on your system, you should restrict who has access via a firewall policy. The UI is simplified in a way anyone could look at the graphs and understand what they're seeing, or at least be impressed by your flashy setup.
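One common mitigation, if you only need the dashboard locally (or through an SSH tunnel), is to bind Netdata to the loopback interface. The fragment below is only a sketch; the exact section and option names have varied between Netdata releases, so treat them as assumptions and check the `netdata.conf` shipped with your version:

```
# /etc/netdata/netdata.conf (illustrative fragment; option names vary by version)
[global]
    # listen only on localhost so the dashboard is not exposed to the network
    bind socket to IP = 127.0.0.1
```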
The web front-end is very responsive and requires no Flash plugin. The UI doesn't clutter things up with unneeded features, but sticks to what it does. At first glance, it may seem a bit much with the hundreds of charts you have access to, but luckily the most commonly needed charts (i.e. CPU, RAM, network, and disk) are at the top. If you wish to drill deeper into the graphical data, all you have to do is scroll down or click on the item in the menu to the right. Netdata even allows you to control the chart with play, reset, zoom and resize with the controls on the bottom right of each chart.
![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-1.png)
>Netdata chart control
When it comes down to system resources, the software doesn't need much either. The creators chose to write the software in C, and Netdata doesn't use much more than ~40MB of RAM.
![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture.png)
>Netdata memory usage
### Download Netdata
To download this software, you can head over to [Netdata GitHub page][2]. Then click the “Clone or download” green button on the left of the page. You should then be presented with two options.
#### Via the ZIP file
One option is to download the ZIP file. This will include everything in the repository; however, if the repository is updated then you will need to download the ZIP file again. Once you download the ZIP file, you can use the `unzip` tool in the command line to extract the contents. Running the following command will extract the contents of the ZIP file into a “`netdata`” folder.
```
$ cd ~/Downloads
$ unzip netdata-master.zip
```
![](https://fedoramagazine.org/wp-content/uploads/2016/06/Capture-2.png)
>Netdata unzipped
You don't need to add the `-d` option to unzip because the contents are inside a folder at the root of the ZIP file. If that folder weren't at the root, unzip would have extracted the contents into the current directory (which can be messy).
#### Via git
The next option is to download the repository via git. You will, of course, need git installed on your system. This is usually installed by default on Fedora. If not, you can install git from the command line with the following command.
```
$ sudo dnf install git
```
After installing git, you will need to “clone” the repository to your system. To do this, run the following command.
```
$ git clone https://github.com/firehol/netdata.git
```
This will then clone (or make a copy of) the repository in the current working directory.
### Install Netdata
There are some packages you will need to build Netdata successfully. Luckily, it's a single line to install the things you need ([as stated in their installation guide][3]). Running the following command in the terminal will install all of the dependencies you need to use Netdata.
```
$ dnf install zlib-devel libuuid-devel libmnl-devel gcc make git autoconf autogen automake pkgconfig
```
Once the required packages are installed, you will need to cd into the netdata/ directory and run the netdata-installer.sh script.
```
$ sudo ./netdata-installer.sh
```
You will then be prompted to press enter to build and install the program. If you wish to continue, press enter to be on your way!
![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-3-600x341.png)
>Netdata install.
If all goes well, you will have Netdata built, installed, and running on your system. The installer will also add an uninstall script in the same folder as the installer called `netdata-uninstaller.sh`. If you change your mind later, running this script will remove it from your system.
You can see it running by checking its status via systemctl.
```
$ sudo systemctl status netdata
```
### Accessing Netdata
Now that we have Netdata installed and running, you can access the web interface via port 19999. I have it running on a test machine, as shown in the screenshot below.
![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Capture-4-768x458.png)
>An overview of what Netdata running on your system looks like
Congratulations! You now have successfully installed and have access to beautiful displays, graphs, and advanced statistics on the performance of your machine. Whether it's for a personal machine so you can show it off to your friends, or for getting deeper insight into the performance of your server, Netdata delivers on performance reporting for any system you choose.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/monitor-linux-netdata/
作者:[Martino Jones][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/monitor-linux-netdata/
[1]: https://firehol.org/
[2]: https://github.com/firehol/netdata
[3]: https://github.com/firehol/netdata/wiki/Installation

translating by hkurj
Basic Linux Networking Commands You Should Know
==================================================
![](https://itsfoss.com/wp-content/uploads/2016/06/Basic-Networking-Commands-Linux.jpg)
Brief: A collection of the most important yet basic Linux networking commands that an aspiring Linux SysAdmin and Linux enthusiasts must know.
It's not every day at It's FOSS that we talk about the “command line side” of Linux. Basically, I focus more on the desktop side of Linux. But as some of you readers pointed out in the internal survey (exclusive to It's FOSS newsletter subscribers), you would like to learn some command line tricks as well. Cheat sheets were also liked and encouraged by most readers.
For this purpose, I have compiled a list of the basic networking commands in Linux. It's not a tutorial that teaches you how to use these commands; rather, it's a collection of commands and their short explanations. So if you already have some experience with these commands, you can use it to quickly remember them.
You can bookmark this page for quick reference or even download all the commands in PDF for offline access.
I had this list of Linux networking commands when I was a student of Communication System Engineering. It helped me get the top score in the Computer Networks course. I hope it helps you in the same way.
>Exclusive bonus: [Download Linux Networking Commands Cheat Sheet][1] for future reference. You can print it or save it for offline viewing.
### List of basic networking commands in Linux
I used FreeBSD in the computer networking course but the UNIX commands should work the same in Linux also.
#### Connectivity:
- ping <host> —- sends an ICMP echo message (one packet) to a host. This may go on continually until you hit Control-C. Ping means a packet was sent from your machine via ICMP and echoed at the IP level. ping tells you whether the other host is up.
- telnet <host> <port> —- talk to “hosts” at the given port number. By default, the telnet port is 23. A few other famous ports are:
```
7 echo port,
25 SMTP, use to send mail
79 Finger, provides information on other users of the network
```
Use control-] to get out of telnet.
#### Arp:
Arp is used to translate IP addresses into Ethernet addresses. Root can add and delete arp entries. Deleting them can be useful if an arp entry is malformed or just wrong. Arp entries explicitly added by root are permanent, and they can also be published by proxy. The arp table is stored in the kernel and manipulated dynamically. Arp entries are cached and normally time out and are deleted in 20 minutes.
- arp -a : Prints the arp table
- arp -s <ip_address> <mac_address> [pub] : to add an entry in the table
- arp -a -d : to delete all the entries in the ARP table
#### Routing:
- netstat -r —- Print routing tables. The routing tables are stored in the kernel and used by ip to route packets to non-local networks.
- route add —- The route command is used for setting a static (non-dynamic, by-hand) route path in the route tables. All the traffic from this PC to that IP/SubNet will go through the given gateway IP. It can also be used for setting a default route, i.e., sending all packets to a particular gateway, by using 0.0.0.0 in the place of IP/SubNet.
- routed —– The BSD daemon that does dynamic routing. Started at boot. This runs the RIP routing protocol. ROOT ONLY. You won't be able to run this without root access.
- gated —– Gated is an alternative routing daemon to RIP. It uses the OSPF, EGP, and RIP protocols in one place. ROOT ONLY.
- traceroute —- Useful for tracing the route of IP packets. The packet causes messages to be sent back from all gateways in between the source and destination by increasing the number of hops by 1 each time.
- netstat -rnf inet : it displays the routing tables of IPv4
- sysctl net.inet.ip.forwarding=1 : to enable packets forwarding (to turn a host into a router)
- route add|delete [-net|-host] <destination> <gateway> (ex. route add 192.168.20.0/24 192.168.30.4) to add a route
- route flush : it removes all the routes
- route add -net 0.0.0.0 192.168.10.2 : to add a default route
- routed -Pripv2 -Pno_rdisc -d [-s|-q] to execute the routed daemon with the RIPv2 protocol, without ICMP auto-discovery, in the foreground, in supply or in quiet mode
- route add 224.0.0.0/4 127.0.0.1 : it defines the route used from RIPv2
- rtquery -n : to query the RIP daemon on a specific host (manually update the routing table)
#### Others:
- nslookup —- Makes queries to the DNS server to translate an IP to a name, or vice versa. e.g. nslookup facebook.com will give you the IP of facebook.com
- ftp <host> —– Transfer files to host. Often you can use login=“anonymous”, p/w=“guest”
- rlogin -l —– Logs into the host with a virtual terminal like telnet
#### Important Files:
```
/etc/hosts —- names to ip addresses
/etc/networks —- network names to ip addresses
/etc/protocols —– protocol names to protocol numbers
/etc/services —- tcp/udp service names to port numbers
```
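These files are plain whitespace-separated tables, so the usual text tools can query them. A self-contained sketch (using a sample file rather than the real /etc/services, so it runs anywhere):

```
# build a tiny /etc/services-style table for the demonstration
cat > /tmp/services.sample <<'EOF'
ftp     21/tcp
ssh     22/tcp
telnet  23/tcp
smtp    25/tcp
EOF

# look up the port/protocol for a given service name
awk '$1 == "smtp" { print $2 }' /tmp/services.sample   # prints 25/tcp

rm /tmp/services.sample
```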
#### Tools and network performance analysis
- ifconfig <interface> <address> [up] : start the interface
- ifconfig <interface> [down|delete] : stop the interface
- ethereal & : it opens ethereal in the background rather than the foreground
- tcpdump -i <interface> -vvv : tool to capture and analyze packets
- netstat -w [seconds] -I [interface] : display network settings and statistics
- udpmt -p [port] -s [bytes] target_host : it creates UDP traffic
- udptarget -p [port] : it is able to receive UDP traffic
- tcpmt -p [port] -s [bytes] target_host : it creates TCP traffic
- tcptarget -p [port] : it is able to receive TCP traffic
- ifconfig <interface> <address> netmask <netmask> [up] : it allows you to subnet the sub-networks
#### Switching:
- ifconfig sl0 srcIP dstIP : configure a serial interface (do “slattach -l /dev/ttyd0” before, and “sysctl net.inet.ip.forwarding=1” after)
- telnet 192.168.0.254 : to access the switch from a host in its subnetwork
- sh ru or show running-configuration : to see the current configurations
- configure terminal : to enter in configuration mode
- exit : in order to go to the lower configuration mode
#### VLAN:
- vlan n : it creates a VLAN with ID n
- no vlan N : it deletes the VLAN with ID N
- untagged Y : it adds the port Y to the VLAN N
- ifconfig vlan0 create : it creates vlan0 interface
- ifconfig vlan0 vlan <ID> vlandev em0 : it associates the vlan0 interface on top of em0, and sets the tag to ID
- ifconfig vlan0 [up] : to turn on the virtual interface
- tagged Y : it adds to the port Y the support of tagged frames for the current VLAN
#### UDP/TCP
- socklab udp : it executes socklab with the udp protocol
- sock : it creates a udp socket, equivalent to typing sock udp and bind
- sendto <Socket ID> <hostname> <port #> : emission of data packets
- recvfrom <Socket ID> <byte #> : it receives data from the socket
- socklab tcp : it executes socklab with the tcp protocol
- passive : it creates a socket in passive mode, equivalent to socklab, sock tcp, bind, listen
- accept : it accepts an incoming connection (it can be done before or after creating the incoming connection)
- connect <hostname> <port #> : these two commands are equivalent to socklab, sock tcp, bind, connect
- close : it closes the connection
- read <byte #> : to read bytes on the socket
- write (ex. write ciao, ex. write #10) : to write “ciao” or to write 10 bytes on the socket
#### NAT/Firewall
- rm /etc/resolv.conf : it prevents address resolution, to make sure your filtering and firewall rules work properly
- ipnat -f file_name : it writes filtering rules into file_name
- ipnat -l : it gives the list of active rules
- ipnat -C -F : it re-initializes the rules table
- map em0 192.168.1.0/24 -> 195.221.227.57/32 : mapping IP addresses to the interface
- map em0 192.168.1.0/24 -> 195.221.227.57/32 portmap tcp/udp 20000:50000 : mapping with port
- ipf -f file_name : it writes filtering rules into file_name
- ipf -F -a : it resets the rule table
- ipfstat -I : it grants access to a few pieces of information on filtered packets, as well as active filtering rules
--------------------------------------------------------------------------------
via: https://itsfoss.com/basic-linux-networking-commands/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29
作者:[Abhishek Prakash][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]: https://drive.google.com/open?id=0By49_3Av9sT1cDdaZnh4cHB4aEk

Container technologies in Fedora: systemd-nspawn
===
Welcome to the “Container technologies in Fedora” series! This is the first article in a series of articles that will explain how you can use the various container technologies available in Fedora. This first article will deal with `systemd-nspawn`.
### What is a container?
A container is a user-space instance which can be used to run a program or an operating system in isolation from the system hosting the container (called the host system). The idea is very similar to a `chroot` or a [virtual machine][1]. The processes running in a container are managed by the same kernel as the host operating system, but they are isolated from the host file system, and from the other processes.
### What is systemd-nspawn?
The systemd project considers container technologies as something that should fundamentally be part of the desktop and that should integrate with the rest of the user's systems. To this end, systemd provides `systemd-nspawn`, a tool which is able to create containers using various Linux technologies. It also provides some container management tools.
In many ways, `systemd-nspawn` is similar to `chroot`, but is much more powerful. It virtualizes the file system, process tree, and inter-process communication of the guest system. Much of its appeal lies in the fact that it provides a number of tools, such as `machinectl`, for managing containers. Containers run by `systemd-nspawn` will integrate with the systemd components running on the host system. As an example, journal entries can be logged from a container in the host system's journal.
In Fedora 24, `systemd-nspawn` has been split out from the systemd package, so you'll need to install the `systemd-container` package. As usual, you can do that with a `dnf install systemd-container`.
### Creating the container
Creating a container with `systemd-nspawn` is easy. Let's say you have an application made for Debian, and it doesn't run well anywhere else. That's not a problem, we can make a container! To set up a container with the latest version of Debian (at this point in time, Jessie), you need to pick a directory to set up your system in. I'll be using `~/DebianJessie` for now.
Once the directory has been created, you need to run `debootstrap`, which you can install from the Fedora repositories. For Debian Jessie, you run the following command to initialize a Debian file system.
```
$ sudo debootstrap --arch=amd64 stable ~/DebianJessie
```
This assumes your architecture is x86_64. If it isn't, you must change `amd64` to the name of your architecture. You can find your machine's architecture with `uname -m`.
Once your root directory is set up, you will start your container with the following command.
```
$ sudo systemd-nspawn -bD ~/DebianJessie
```
You'll be up and running within seconds. You'll notice something as soon as you try to log in: you can't use any accounts on your system. This is because systemd-nspawn virtualizes users. The fix is simple: remove `-b` from the previous command. You'll boot directly to the root shell in the container. From there, you can just use `passwd` to set a password for root, or you can use `adduser` to add a new user. As soon as you're done with that, go ahead and put the `-b` flag back. You'll boot to the familiar login console and you log in with the credentials you set.
All of this applies to any distribution you want to run in the container, but you need to create the system using the correct package manager. For Fedora, you would use DNF instead of debootstrap. To set up a minimal Fedora system, you can run the following command, replacing the absolute path with wherever you want the container to be.
```
$ sudo dnf --releasever=24 --installroot=/absolute/path/ install systemd passwd dnf fedora-release
```
![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-04-14.png)
### Setting up the network
You'll notice an issue if you attempt to start a service that binds to a port currently in use on your host system. Your container is using the same network interface. Luckily, `systemd-nspawn` provides several ways to achieve separate networking from the host machine.
#### Local networking
The first method uses the `--private-network` flag, which only creates a loopback device by default. This is ideal for environments where you don't need networking, such as build systems and other continuous integration systems.
#### Multiple networking interfaces
If you have multiple network devices, you can give one to the container with the `--network-interface` flag. To give `eno1` to my container, I would add the flag `--network-interface=eno1`. While an interface is assigned to a container, the host can't use it at the same time. When the container is completely shut down, it will be available to the host again.
#### Sharing network interfaces
For those of us who don't have spare network devices, there are other options for providing access to the container. One of those is the `--port` flag. This forwards a port on the container to the host. The format is `protocol:host:container`, where protocol is either `tcp` or `udp`, `host` is a valid port number on the host, and `container` is a valid port on the container. You can omit the protocol and specify only `host:container`. I often use something similar to `--port=2222:22`.
You can enable complete, host-only networking with the `--network-veth` flag, which creates a virtual Ethernet interface between the host and the container. You can also bridge two connections with `--network-bridge`.
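Putting these flags together, a full invocation might look like the sketch below. The directory and host port are illustrative, and since booting a container needs root privileges, the command is only assembled and printed here rather than executed:

```shell
# Sketch: boot the container with a virtual Ethernet link and
# forward host port 2222 to the container's SSH port 22.
# The directory and port numbers are illustrative.
CONTAINER_DIR=~/DebianJessie
CMD="systemd-nspawn -bD $CONTAINER_DIR --network-veth --port=tcp:2222:22"
echo "Would run: sudo $CMD"
```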
### Using systemd components
If the system in your container has D-Bus, you can use systemd's provided utilities to control and monitor your container. Debian doesn't include dbus in the base install. If you want to use it with Debian Jessie, you'll want to run `apt install dbus`.
#### machinectl
To easily manage containers, systemd provides the `machinectl` utility. Using `machinectl`, you can log in to a container with `machinectl login name`, check the status with `machinectl status name`, reboot with `machinectl reboot name`, or power it off with `machinectl poweroff name`.
### Other systemd commands
Most systemd commands, such as journalctl, systemd-analyze, and systemctl, support containers with the `--machine` option. For example, if you want to see the journals of a container named “foobar”, you can use `journalctl --machine=foobar`. You can also see the status of a service running in this container with `systemctl --machine=foobar status service`.
![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot-from-2016-06-17-15-09-25.png)
### Working with SELinux
If you're running with SELinux enforcing (the default in Fedora), you'll need to set the SELinux context for your container. To do that, you need to run the following two commands on the host system.
```
$ semanage fcontext -a -t svirt_sandbox_file_t "/path/to/container(/.*)?"
$ restorecon -R /path/to/container/
```
Make sure you replace “/path/to/container” with the path to your container. For my container, “DebianJessie”, I would run the following:
```
$ semanage fcontext -a -t svirt_sandbox_file_t "/home/johnmh/DebianJessie(/.*)?"
$ restorecon -R /home/johnmh/DebianJessie/
```
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/
作者:[John M. Harris, Jr.][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/
[1]: https://en.wikipedia.org/wiki/Virtual_machine
DOCKER DATACENTER IN AWS AND AZURE IN A FEW CLICKS
====================================================
Introducing Docker Datacenter AWS Quickstart and Azure Marketplace Templates: production-ready, high-availability deployments in just a few clicks.
The Docker Datacenter AWS Quickstart uses CloudFormation templates, and pre-built templates on the Azure Marketplace, to make it easier than ever to deploy an enterprise CaaS Docker environment on public cloud infrastructure.
The Docker Datacenter Container as a Service (CaaS) platform for agile application development provides container and cluster orchestration and management that is simple, secure and scalable for enterprises of any size. With our new cloud templates pre-built for Docker Datacenter, developers and IT operations can frictionlessly move dockerized applications to an Amazon EC2 or Microsoft Azure environment without any code changes. Now businesses can quickly realize greater efficiency of computing and operations resources and Docker supported container management and orchestration in just a few steps.
### What is Docker Datacenter?
Docker Datacenter includes Docker Universal Control Plane, Docker Trusted Registry (DTR), CS Docker Engine with commercial support & subscription to align to your application SLAs:
- Docker Universal Control Plane (UCP), an enterprise-grade cluster management solution that helps you manage your whole cluster from a single pane of glass
- Docker Trusted Registry (DTR), an image storage solution that helps securely store and manage the Docker images.
- Commercially Supported (CS) Docker Engines
![](http://img.scoop.it/lVraAJgJbjAKqfWCLtLuZLnTzqrqzN7Y9aBZTaXoQ8Q=)
### Deploy on AWS in a single click with the Docker Datacenter AWS Quick Start
With AWS Quick Start reference deployments you can rapidly deploy Docker containers on the AWS cloud, adhering to Docker and AWS best practices. The Docker Datacenter Quick Start uses CloudFormation templates that are modular and customizable so you can layer additional functionality on top or modify them for your own Docker deployments.
[Docker Datacenter for AWS Quickstart](https://youtu.be/aUx7ZdFSkXU)
#### Architecture
![](http://img.scoop.it/sZ3_TxLba42QB-r_6vuApLnTzqrqzN7Y9aBZTaXoQ8Q=)
The AWS CloudFormation template starts the installation process by creating all the required AWS resources, such as the VPC, security groups, public and private subnets, internet gateways, NAT gateways, and an S3 bucket.
It then launches the first UCP controller instance and goes through the installation process of the Docker engine and UCP containers. It backs up the Root CAs created by the first UCP controller to S3. Once the first UCP controller is up and running, the process of creating the other UCP controllers, the UCP cluster nodes, and the first DTR replica is triggered. Similar to the first UCP controller node, all other nodes are started by installing the Docker Commercially Supported engine, followed by running the UCP and DTR containers to join the cluster. Two ELBs, one for UCP and one for DTR, are launched and automatically configured to provide resilient load balancing across the two AZs.
Additionally, UCP controllers and nodes are launched in an ASG to provide scaling functionality if needed. This architecture ensures that both UCP and DTR instances are spread across both AZs to ensure resiliency and high-availability. Route53 is used to dynamically register and configure UCP and DTR in your private or public HostedZone.
![](http://img.scoop.it/HM7Ag6RFvMXvZ_iBxRgKo7nTzqrqzN7Y9aBZTaXoQ8Q=)
### Key functionality of the Quick Start templates includes the following:
- Creates a New VPC, Private and Public Subnets in different AZs, ELBs, NAT Gateways, Internet Gateways, AutoScaling Groups- all based on AWS best practices
- Creates an S3 bucket for DDC to be used for cert backup and DTR image storage (requires additional configuration in DTR)
- Deploys 3 UCP Controllers across multiple AZs within your VPC
- Creates a UCP ELB with preconfigured health checks
- Creates a DNS record and attaches it to UCP ELB
- Deploys a scalable cluster of UCP nodes
- Backs up UCP Root CAs to S3
- Creates 3 DTR replicas across multiple AZs within your VPC
- Creates a DTR ELB with preconfigured health checks
- Creates a DNS record and attaches it to DTR ELB
[Download the AWS Quick Start Guide to Learn More](https://s3.amazonaws.com/quickstart-reference/docker/latest/doc/docker-datacenter-on-the-aws-cloud.pdf)
### Getting Started with Docker Datacenter for AWS
1. Go to [Docker Store][1] to get your [30 day free trial][2] or [contact sales][4].
2. At confirmation, you'll be prompted to “Launch Stack” and you'll be directed to the AWS CloudFormation portal.
3. Confirm the AWS Region you'd like to launch this stack in
4. Provide the required parameters
5. Confirm and Launch.
6. Once complete, click on the Outputs tab to see the URLs of UCP and DTR, the default username and password, and the S3 bucket name.
[Request up to $2000 AWS credit for Docker Datacenter](https://aws.amazon.com/mp/contactdocker/)
### Deploy on Azure with pre-built templates on Azure Marketplace
Docker Datacenter is available as a pre-built template on the Azure Marketplace for you to run instantly on Azure across various datacenters globally. Customers can choose to deploy Docker Datacenter from the various VM choices offered on Azure as fits their needs.
#### Architecture
![](http://img.scoop.it/V9SpuBCoAnUnkRL3J-FRFLnTzqrqzN7Y9aBZTaXoQ8Q=)
The Azure deployment process begins by entering some basic information about the user, including the admin username for ssh-ing into all the nodes (OS-level admin user) and the name of the resource group. You can think of the resource group as a collection of resources that has a lifecycle and deployment boundary. You can read more about resource groups here: <https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/>
Next you will enter the details of the cluster, including: VM size for UCP controllers, Number of Controllers (default is 3), VM size for UCP nodes, Number of UCP nodes (default is 1, max of 10), VM size for DTR nodes, Number of DTR nodes (default is 3), Virtual Network Name and Address (ex. 10.0.0.1/19). Regarding networking, you will have 2 subnets: the first subnet is for the UCP controller nodes and the second subnet is for the DTR and UCP nodes.
Lastly, click OK to complete the deployment. Provisioning should take about 15-19 minutes for small clusters, with a few additional minutes for larger ones.
![](http://img.scoop.it/DXPM5-GXP0j2kEhno0kdRLnTzqrqzN7Y9aBZTaXoQ8Q=)
![](http://img.scoop.it/321ElkCf6rqb7u_-nlGPtrnTzqrqzN7Y9aBZTaXoQ8Q=)
#### How to Deploy in Azure
1. Register for [a 30 day free trial][5] license of Docker Datacenter or [contact sales][6].
2. [Go to Docker Datacenter on the Microsoft Azure Marketplace][7]
3. [Review Deployment Documents][8]
Get started by registering for a Docker Datacenter license and you'll be prompted with the ability to launch either the AWS or Azure templates.
- [Get a 30 day trial license][9]
- [Understand Docker Datacenter architecture with this video][10]
- [Watch demo videos][11]
- [Get $75 AWS credit towards your deployment][12]
### Learn More about Docker
- New to Docker? Try our 10 min [online tutorial][20]
- Share images, automate builds, and more with [a free Docker Hub account][21]
- Read the Docker [1.12 Release Notes][22]
- Subscribe to [Docker Weekly][23]
- Sign up for upcoming [Docker Online Meetups][24]
- Attend upcoming [Docker Meetups][25]
- Watch [DockerCon EU 2015 videos][26]
- Start [contributing to Docker][27]
--------------------------------------------------------------------------------
via: https://blog.docker.com/2016/06/docker-datacenter-aws-azure-cloud/
作者:[Trisha McCanna][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://blog.docker.com/author/trisha/
[1]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[2]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[4]: https://goto.docker.com/contact-us.html
[5]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[6]: https://goto.docker.com/contact-us.html
[7]: https://azure.microsoft.com/en-us/marketplace/partners/docker/dockerdatacenterdocker-datacenter/
[8]: https://success.docker.com/Datacenter/Apply/Docker_Datacenter_on_Azure
[9]: http://www.docker.com/trial
[10]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8tFH7xzI5Y-vP48wUiuXfH
[11]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8a8JaIOA5Q757KiqEjPKWr
[12]: https://aws.amazon.com/quickstart/promo/
[20]: https://docs.docker.com/engine/understanding-docker/
[21]: https://hub.docker.com/
[22]: https://docs.docker.com/release-notes/
[23]: https://www.docker.com/subscribe_newsletter/
[24]: http://www.meetup.com/Docker-Online-Meetup/
[25]: https://www.docker.com/community/meetup-groups
[26]: https://www.youtube.com/playlist?list=PLkA60AVN3hh87OoVra6MHf2L4UR9xwJkv
[27]: https://docs.docker.com/contributing/contributing/
翻译中by zky001
Flatpak brings standalone apps to Linux
===
![](https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/flatpak-945x400.jpg)
The development team behind [Flatpak][1] has [just announced the general availability][2] of the Flatpak desktop application framework. Flatpak (which was also known during development as xdg-app) provides the ability for an application — bundled as a Flatpak — to be installed and run easily and consistently on many different Linux distributions. Applications bundled as Flatpaks also have the ability to be sandboxed for security, isolating them from your operating system, and other applications. Check out the [Flatpak website][3], and the [press release][4] for more information on the tech that makes up the Flatpak framework.
### Installing Flatpak on Fedora
For users wanting to run applications bundled as Flatpaks, installation on Fedora is easy, with Flatpak already available in the official Fedora 23 and Fedora 24 repositories. The Flatpak website has [full details on installation on Fedora][5], as well as how to install on Arch, Debian, Mageia, and Ubuntu. [Many applications][6] have builds already bundled with Flatpak — including LibreOffice, and nightly builds of popular graphics applications Inkscape and GIMP.
### For Application Developers
If you are an application developer, the Flatpak website also contains some great resources on getting started [bundling and distributing your applications with Flatpak][7]. These resources contain information on using the Flatpak SDKs to build standalone, sandboxed Flatpak applications.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/introducing-flatpak/
作者:[Ryan Lerch][a]
译者:[zky001](https://github.com/zky001)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/introducing-flatpak/
[1]: http://flatpak.org/
[2]: http://flatpak.org/press/2016-06-21-flatpak-released.html
[3]: http://flatpak.org/
[4]: http://flatpak.org/press/2016-06-21-flatpak-released.html
[5]: http://flatpak.org/getting.html
[6]: http://flatpak.org/apps.html
[7]: http://flatpak.org/developer.html
alim0x translating
How to permanently mount a Windows share on Linux
==================================================
>If you get tired of having to remount Windows shares when you reboot your Linux box, read about an easy way to make those shares permanently mount.
![](http://tr2.cbsistatic.com/hub/i/2016/06/02/e965310b-b38d-43e6-9eac-ea520992138b/68fd9ec5d6731cc405bdd27f2f42848d/linuxadminhero.jpg)
>Image: Jack Wallen
It has never been easier for Linux to interact within a Windows network. And considering how many businesses are adopting Linux, those two platforms have to play well together. Fortunately, with the help of a few tools, you can easily map Windows network drives onto a Linux machine, and even ensure they are still there upon rebooting the Linux machine.
### Before we get started
For this to work, you will be using the command line. The process is pretty simple, but you will be editing the /etc/fstab file, so do use caution.
Also, I assume you already have Samba working properly so you can manually mount shares from a Windows network to your Linux box, and that you know the IP address of the machine hosting the share.
Are you ready? Let's go.
### Create your mount point
The first thing we're going to do is create a folder that will serve as the mount point for the share. For the sake of simplicity, we'll name this folder share and we'll place it in /media. Open your terminal window and issue the command:
```
sudo mkdir /media/share
```
### A few installations
Now we have to install the system that allows for cross-platform file sharing; this system is cifs-utils. From the terminal window, issue the command:
```
sudo apt-get install cifs-utils
```
This command will also install all of the dependencies for cifs-utils.
Once this is installed, open up the file /etc/nsswitch.conf and look for the line:
```
hosts: files mdns4_minimal [NOTFOUND=return] dns
```
Edit that line so it looks like:
```
hosts: files mdns4_minimal [NOTFOUND=return] wins dns
```
Now you must install winbind so that your Linux machine can resolve Windows computer names on a DHCP network. From the terminal, issue this command:
```
sudo apt-get install libnss-winbind winbind
```
Restart networking with the command:
```
sudo service networking restart
```
### Mount the network drive
Now we're going to map the network drive. This is where we must edit the /etc/fstab file. Before you make that first edit, back up the file with this command:
```
sudo cp /etc/fstab /etc/fstab.old
```
If you need to restore that file, issue the command:
```
sudo mv /etc/fstab.old /etc/fstab
```
Create a credentials file in your home directory called .smbcredentials. In that file, add your username and password, like so (replace USER and PASSWORD with your actual username and password):
```
username=USER
password=PASSWORD
```
You now have to know the Group ID (GID) and User ID (UID) of the user that will be mounting the drive. Issue the command:
```
id USER
```
USER is the actual username, and you should see something like:
```
uid=1000(USER) gid=1000(GROUP)
```
USER is the actual username, and GROUP is the group name. The numbers before (USER) and (GROUP) will be used in the /etc/fstab file.
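If you prefer not to read the numbers out of that line by eye, `id` can print them directly; a small sketch:

```shell
# Print the numeric UID and GID of the current user directly
echo "uid=$(id -u),gid=$(id -g)"
```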
It's time to edit the /etc/fstab file. Open that file in your editor and add the following line to the end (replace everything in ALL CAPS and the IP address of the remote machine):
```
//192.168.1.10/SHARE /media/share cifs credentials=/home/USER/.smbcredentials,iocharset=utf8,gid=GID,uid=UID,file_mode=0777,dir_mode=0777 0 0
```
**Note**: The above should be on a single line.
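As a quick sanity check before mounting, a valid fstab entry has exactly six whitespace-separated fields; the sketch below counts the fields of an illustrative entry (the IP address, share name, and IDs are placeholders):

```shell
# Count the fields of an example fstab entry; a valid entry has 6
LINE='//192.168.1.10/SHARE /media/share cifs credentials=/home/USER/.smbcredentials,iocharset=utf8,gid=1000,uid=1000,file_mode=0777,dir_mode=0777 0 0'
echo "$LINE" | awk '{ print NF }'
```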
Save and close that file. Issue the command `sudo mount -a` and the share will be mounted. Check in /media/share and you should see the files and folders on the network share.
### Sharing made easy
Thanks to cifs-utils and Samba, mapping network shares is incredibly easy on a Linux machine. And now, you won't have to manually remount those shares every time your machine boots.
--------------------------------------------------------------------------------
via: http://www.techrepublic.com/article/how-to-permanently-mount-a-windows-share-on-linux/
作者:[Jack Wallen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techrepublic.com/search/?a=jack+wallen
Industrial SBC builds on Raspberry Pi Compute Module
=====================================================
![](http://hackerboards.com/files/embeddedmicro_mypi-thm.jpg)
On Kickstarter, a “MyPi” industrial SBC using the RPi Compute Module offers a mini-PCIe slot, serial port, wide-range power, and modular expansion.
You might wonder why in 2016 someone would introduce a sandwich-style single board computer built around the aging, ARM11 based COM version of the original Raspberry Pi, the [Raspberry Pi Compute Module][1]. First off, there are still plenty of industrial applications that don't need much CPU horsepower, and second, the Compute Module is still the only COM based on Raspberry Pi hardware, although the cheaper, somewhat COM-like [Raspberry Pi Zero][2], which has the same 700MHz processor, comes close.
![](http://hackerboards.com/files/embeddedmicro_mypi-sm.jpg)
![](http://hackerboards.com/files/embeddedmicro_mypi_encl-sm.jpg)
>MyPi with COM and I/O add-ons (left), and in its optional industrial enclosure
In addition, Embedded Micro Technology says its SBC is also designed to support a swap-in for a promised Raspberry Pi Compute Module upgrade built around the Raspberry Pi 3's quad-core, Cortex-A53 Broadcom BCM2837 SoC. Since this product could arrive any week now, it's unclear how that all sorts out for Kickstarter backers. Still, it's nice to know you're somewhat futureproof, even if you have to pay for the upgrade.
The MyPi is not the only new commercial embedded device based on the Raspberry Pi Compute Module. Pigeon Computers launched a [Pigeon RB100][3] industrial automation controller based on the COM in May. Most such devices arrived in 2014, however, shortly after the COM arrived, including the [Techbase Modberry][4].
The MyPi is over a third of the way toward its $21,696 funding goal with 30 days to go. An early bird package starts at $119, with shipments in September. Other kit options include a $187 version that includes the $30 Raspberry Pi Compute Module, as well as various cables. Kits are also available with add-on boards and an industrial enclosure.
![](http://hackerboards.com/files/embeddedmicro_mypi_baseboard-sm.jpg)
![](http://hackerboards.com/files/embeddedmicro_mypi_detail-sm.jpg)
>MyPi baseboard without COM or add-ons (left) and its port details
The Raspberry Pi Compute Module starts the MyPi off with the Broadcom BCM2835 SoC, 512MB RAM, and 4GB eMMC flash. The MyPi adds to this with a microSD card slot, an HDMI port, two USB 2.0 ports, a 10/100 Ethernet port, and a similarly coastline RS232 port (via USB).
![](http://hackerboards.com/files/embeddedmicro_mypi_angle1-sm.jpg)
![](http://hackerboards.com/files/embeddedmicro_mypi_angle2.jpg)
>Two views of the MyPi board with RPi and mini-PCIe modules installed
The MyPi is further equipped with a mini-PCIe socket, which is said to be “USB only and intended for use of modems available in the mPCIe form factor.” A SIM card slot is also available. Dual standard Raspberry Pi camera connectors are onboard along with an audio out interface, a battery-backed RTC, and LEDs. The SBC has a wide-range, 9-23V DC input.
The MyPi is designed for Raspberry Pi hackers who have stacked so many HAT add-on boards that they can no longer work with them effectively or stuff them inside an industrial enclosure, says Embedded Micro. The MyPi supports HATs, but also offers the company's own “ASIO” (Application Specific I/O) add-on modules, which route their I/O back to the carrier board, which, in turn, connects it to the 8-pin, green, Phoenix-style, industrial I/O connector (labeled “ASIO Out”) on the board's edge, as illustrated in the diagram below.
![](http://hackerboards.com/files/embeddedmicro_mypi_io-sm.jpg)
>MyPi's modular expansion interface
As the Kickstarter page explains it: “Rather than have a plug in HAT card with IO signal connectors poking out on all sides, instead we take these same IO signals back down a second output connector which is directly connected to the green industrial connector.” Additionally, “by simply using extended length interface pins on the card (raising it up) you can expand the IO set further — all without using any cable assemblies!” says Embedded Micro.
![](http://hackerboards.com/files/embeddedmicro_mypi_with_iocards-sm.jpg)
>MyPi and its optional I/O add-on cards
The company offers a line of hardened ASIO plug-in cards for the MyPi, as shown above. These initially include CAN-Bus, 4-20mA transducer signals, RS485, Narrow Band RF, and more.
### Further information
The MyPi is available on Kickstarter starting at a £79 ($119) early bird package (without the Raspberry Pi Compute Module) through July 23, with shipments due in September. More information may be found on the [MyPi Kickstarter page][5] and the [Embedded Micro Technology website][6].
--------------------------------------------------------------------------------
via: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/
作者:[Eric Brown][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://hackerboards.com/industrial-sbc-builds-on-rpi-compute-module/
[1]: http://hackerboards.com/raspberry-pi-morphs-into-30-dollar-com/
[2]: http://hackerboards.com/pi-zero-tweak-adds-camera-connector-keeps-5-price/
[3]: http://hackerboards.com/automation-controller-runs-linux-on-raspberry-pi-com/
[4]: http://hackerboards.com/automation-controller-taps-raspberry-pi-compute-module/
[5]: https://www.kickstarter.com/projects/410598173/mypi-industrial-strength-raspberry-pi-for-iot-proj
[6]: http://www.embeddedpi.com/
chunyang-wen translating
How to Use Comparison Operators with Awk in Linux
===================================================
![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Comparison-Operators-with-AWK.png)
When dealing with numerical or string values in a line of text, filtering text or strings using comparison operators comes in handy for Awk command users.
In this part of the Awk series, we shall take a look at how you can filter text or strings using comparison operators. If you are a programmer, you must already be familiar with comparison operators; for those who are not, let me explain in the section below.
### What are Comparison operators in Awk?
Comparison operators in Awk are used to compare the value of numbers or strings and they include the following:
- `>` greater than
- `<` less than
- `>=` greater than or equal to
- `<=` less than or equal to
- `==` equal to
- `!=` not equal to
- `some_value ~ /pattern/` true if some_value matches pattern
- `some_value !~ /pattern/` true if some_value does not match pattern
Now that we have looked at the various comparison operators in Awk, let us understand them better using an example.
In this example, we have a file named food_list.txt, which is a shopping list for different food items, and I would like to flag food items whose quantity is less than or equal to 30 by adding `(**)` at the end of each line.
```
File food_list.txt
No Item_Name Quantity Price
1 Mangoes 45 $3.45
2 Apples 25 $2.45
3 Pineapples 5 $4.45
4 Tomatoes 25 $3.45
5 Onions 15 $1.45
6 Bananas 30 $3.45
```
The general syntax for using comparison operators in Awk is:
```
# expression { actions; }
```
To achieve the above goal, I will have to run the command below:
```
# awk '$3 <= 30 { printf "%s\t%s\n", $0,"**" ; } $3 > 30 { print $0 ;}' food_list.txt
No Item_Name Quantity Price
1 Mangoes 45 $3.45
2 Apples 25 $2.45 **
3 Pineapples 5 $4.45 **
4 Tomatoes 25 $3.45 **
5 Onions 15 $1.45 **
6 Bananas 30 $3.45 **
```
In the above example, there are two important things that happen:
- The first expression `{ action ; }` combination, `$3 <= 30 { printf "%s\t%s\n", $0,"**" ; }` prints out lines with quantity less than or equal to 30 and adds a `(**)` at the end of each line. The value of quantity is accessed using the `$3` field variable.
- The second expression `{ action ; }` combination, `$3 > 30 { print $0 ;}` prints out lines unchanged since their quantity is greater than `30`.
One more example:
```
# awk '$3 <= 20 { printf "%s\t%s\n", $0,"TRUE" ; } $3 > 20 { print $0 ;} ' food_list.txt
No Item_Name Quantity Price
1 Mangoes 45 $3.45
2 Apples 25 $2.45
3 Pineapples 5 $4.45 TRUE
4 Tomatoes 25 $3.45
5 Onions 15 $1.45 TRUE
6 Bananas 30 $3.45
```
In this example, we want to indicate lines with quantity less than or equal to 20 with the word (TRUE) at the end.
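The same pattern works on any whitespace-separated input; here is a self-contained sketch (the inline data is made up) that flags third-field values less than or equal to 20:

```shell
# Flag lines whose third field is <= 20 with "**"; pass others through
printf '1 Mangoes 45\n2 Pineapples 5\n3 Onions 15\n' | \
awk '$3 <= 20 { print $0, "**" ; next } { print }'
```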
### Summary
This is an introductory tutorial to comparison operators in Awk, therefore you need to try out many other options and discover more.
In case of any problems you face or any additions that you have in mind, then drop a comment in the comment section below. Remember to read the next part of the Awk series where I will take you through compound expressions.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/comparison-operators-in-awk/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
martin
How to Use Compound Expressions with Awk in Linux
====================================================
![](http://www.tecmint.com/wp-content/uploads/2016/05/Use-Compound-Expressions-with-Awk.png)
All along, we have been looking at simple expressions when checking whether a condition has been met or not. What if you want to use more than one expression to check for a particular condition?
In this article, we shall take a look at how you can combine multiple expressions, referred to as compound expressions, to check for a condition when filtering text or strings.
In Awk, compound expressions are built using the `&&` (and) and the `||` (or) compound operators.
The general syntax for compound expressions is:
```
( first_expression ) && ( second_expression )
```
Here, `first_expression` and `second_expression` must be true to make the whole expression true.
```
( first_expression ) || ( second_expression )
```
Here, either `first_expression` or `second_expression` must be true for the whole expression to be true.
**Caution**: Remember to always include the parentheses.
The expressions can be built using the comparison operators that we looked at in Part 4 of the awk series.
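As a quick, hedged sketch before the real example that follows (the file `values.txt` and its two numeric fields are made up for illustration), here is how the two operators differ in practice:

```shell
# Two numeric fields per line, made up for illustration.
cat > values.txt <<'EOF'
10 5
25 30
40 2
EOF

# && : a line matches only when BOTH expressions are true.
awk '($1 > 20) && ($2 < 10) { print $0, "both" }' values.txt
# Only "40 2" satisfies both conditions.

# || : a line matches when EITHER expression is true.
awk '($1 > 20) || ($2 < 10) { print $0, "either" }' values.txt
# All three lines satisfy at least one of the two conditions.
```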
Let us now get a clear understanding using an example below:
In this example, we have a text file named `tecmint_deals.txt`, which contains a list of some amazing random Tecmint deals; it includes the name of each deal, its price, and its type.
```
TecMint Deal List
No Name Price Type
1 Mac_OS_X_Cleanup_Suite $9.99 Software
2 Basics_Notebook $14.99 Lifestyle
3 Tactical_Pen $25.99 Lifestyle
4 Scapple $19.00 Unknown
5 Nano_Tool_Pack $11.99 Unknown
6 Ditto_Bluetooth_Altering_Device $33.00 Tech
7 Nano_Prowler_Mini_Drone $36.99 Tech
```
Say that we want to print and flag only those deals that are above $20 and of type “Tech”, using a (*) sign at the end of each line.
We shall need to run the command below.
```
# awk '($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/) && ($4=="Tech") { printf "%s\t%s\n",$0,"*"; } ' tecmint_deals.txt
6 Ditto_Bluetooth_Altering_Device $33.00 Tech *
7 Nano_Prowler_Mini_Drone $36.99 Tech *
```
In this example, we have used two expressions in a compound expression:
- The first expression, `($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/)`, checks for lines with deals priced above `$20`; it is true only if the value of `$3` (the price) matches the pattern `/^\$[2-9][0-9]*\.[0-9][0-9]$/`.
- The second expression, `($4 == "Tech")`, checks whether the deal is of type “`Tech`”; it is true only if the value of `$4` equals “`Tech`”.
Remember, a line will only be flagged with the `(*)` if both the first and the second expressions are true, as the principle of the `&&` operator states.
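As a variation (a sketch that recreates the sample file above so it runs on its own), the `||` operator flags a line when either expression is true, for example deals of type “Lifestyle” or of type “Unknown”:

```shell
# Recreate the deal rows from the sample file above (header lines omitted).
cat > tecmint_deals.txt <<'EOF'
1 Mac_OS_X_Cleanup_Suite $9.99 Software
2 Basics_Notebook $14.99 Lifestyle
3 Tactical_Pen $25.99 Lifestyle
4 Scapple $19.00 Unknown
5 Nano_Tool_Pack $11.99 Unknown
6 Ditto_Bluetooth_Altering_Device $33.00 Tech
7 Nano_Prowler_Mini_Drone $36.99 Tech
EOF

# Flag a deal when EITHER expression is true: type "Lifestyle" OR type "Unknown".
awk '($4 == "Lifestyle") || ($4 == "Unknown") { printf "%s\t%s\n", $0, "*" }' tecmint_deals.txt
```

Four deals (2 through 5) are flagged here, since a single true expression is enough for `||`.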
### Summary
Some conditions require building compound expressions for you to match exactly what you want. Once you understand the use of comparison and compound expression operators, filtering text or strings based on difficult conditions becomes easy.
We hope you find this guide useful; for any questions or additions, remember to leave a comment and your concern will be addressed accordingly.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/combine-multiple-expressions-in-awk/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/

View File

@@ -1 +0,0 @@
这里放新闻类文章,要求时效性

View File

@@ -1 +0,0 @@
这里放分享类文章,包括各种软件的简单介绍、有用的书籍和网站等。

View File

@@ -1,85 +0,0 @@
为什么 Ubuntu 家族会占据 Linux 发行版的主导地位?
=========================================
在过去的数年中,我尝试了大量优秀的 Linux 发行版。给我印象最深的是那些由强大社区维护的发行版,但这样的发行版往往比它们所属的社区更受人欢迎。流行的 Linux 发行版之所以吸引更多的人,通常是因为它们更容易使用。这看起来显而易见,而人们一般也认为这种说法是正确的。
我想到的一个这样的发行版是 [Ubuntu][1]。它基于健壮的 [Debian][2],不可思议地成为了最受欢迎的 Linux 发行版,而且还衍生出了其他版本,比如 Linux Mint。在本文中我会探讨我坚信 Ubuntu 会赢得 Linux 发行版之争的原因,以及它对整个 Linux 桌面领域有着怎样的影响。
### Ubuntu容易使用
多年前我第一次尝试使用Ubuntu在这之前我更喜欢使用 KED 桌面。在那个时期,我接触的大多是这种 KDE 桌面环境。主要原因还是 KDE 是大多数新手友好的 Linux 发行版中最受欢迎的。新手友好的发行版有 KnoppixSimply Mepis, Xandros, Linspire等另外一些发行版和这些发行版都指出他们的用户趋向于使用 KDE。
现在KDE能满足我的需求也没有什么理由去折腾其他的桌面环境了。有一天我的 Debian 安装失败了(由于我个人的操作不当),我决定尝试开发代号为「整洁的公鸭(Ubuntu Dapper Drake)」的 Ubuntu 版本【译者注ubuntu 6.06 - Dapper Drake(整洁的公鸭)发布日期2006年6月1日】。那个时候我对于它的印象比一个屏幕截图还要少但是我认为它很有趣并且毫无顾忌的使用它。
Ubuntu Dapper Drake 给我的最大印象是它操作简单。记住,我是来自 KDE 世界的用户,在 KDE 上仅改变菜单的设置就有 15 种方法。而 Ubuntu 的图形化安装和启动则极简得多。
时间来到 2016 年,最新的版本号是 16.04:我们有多种可用的 Ubuntu 风格版本,以及许多基于 Ubuntu 的发行版。所有 Ubuntu 风格版本和衍生发行版的核心都被设计得容易使用,当发行版想要扩大用户基数的时候,这就是最重要的原因。
### Ubuntu LTS
过去我几乎一直坚持使用 LTSLong Term Support长期支持发行版作为我的主要桌面系统。10 月份发布的版本,我只用来测试硬盘,或者装在一台老旧的笔记本电脑上。我这样做的原因很简单:我没有兴趣在日常使用的电脑上折腾短期支持的版本。我是个很忙的人,我觉得那样会浪费我的时间。
对于我来说,我认为 Ubuntu 提供 LTS 发行版是它能够流行起来的一大原因。这样说吧:提供一个能够得到长期充分支持的大众桌面 Linux 发行版,本身就是一种优势。事实上Ubuntu 的优势不只这一点,其他衍生版在这方面也做得很好。长期支持的策略,再加上对新手友好的环境,我认为这为 Ubuntu 的普及带来了莫大的好处。
### Ubuntu Snap 包
以前,用户需要在系统里添加很多 PPApersonal package archive个人软件包档案才能获得较新版本的软件。不幸的是这种技术也有缺点想要的软件没有提供对应 PPA 的情况很常见。
现在有了 [Snap 包][3]。当然这不是一个全新的概念,过去已经进行过类似的尝试。用户不必运行最新的 Ubuntu 发行版也能用上最新的软件,我认为这才是 Snap 将要长期提供给 Ubuntu 用户的价值。然而我仍然认为我们会看到 Snap 被淘汰的那一天,我更期待看到优秀的软件能在稳定的发行版上良好运行。
如果你要运行很多软件,那么 Snap 包明显会占用更多的硬盘空间。不仅如此,大多数 Ubuntu 软件仍然是通过官方维护的 deb 包进行管理的。在后者的更新需要等待一段时间的情况下Snap 是用更多的硬盘空间换来了更新的软件。
### Ubuntu 社区
首先,我承认大多数主要的 Linux 发行版都有强大的社区。然而,我坚信 Ubuntu 社区的成员是最多样化的,他们来自各行各业。例如,我们的论坛按主题划分板块,从各种硬件的支持到游戏支持应有尽有,这些专业讨论的覆盖面特别广。
除过论坛Ubuntu 也提供了一个很正式的社区组织。这个组织包括一个委员会,技术板块,[各地的团队LoCo teams][4](Ubuntu Local Community Teams)和开发人员板块。还有很多,但是这些都是我知道的社区组织部分。
我们还有一个 [Ubuntu 问答][5]板块。我认为,这种形式可以代替人们从论坛寻求帮助的方式,我发现在这个网站上你得到有用信息的可能性更大。不仅如此,那些提供的解决方案中被选出的最精准的答案,也会被写入到官方文档中。
### Ubuntu 的未来
我认为 Ubuntu 的 Unity 界面【译者注Unity 是 Canonical 公司为 Ubuntu 操作系统的 GNOME 桌面环境开发的图形化 shell】在增加桌面舒适性上少有作为。我能理解其中的缘由现在它主要做一些诸如使开发团队的工作更轻松之类的事情。但最终我还是希望 Unity 可以为 Ubuntu MATE 和 Linux Mint 的普及铺平道路。
我最好奇的一点是 Ubuntu's IRC(Internet Relay Chat) 和邮件列表的发展【译者注:可以在 Ubuntu LoCo Teams IRC Chat上提问关于地方团队和计划的事件的问题也可以和一些不同团队的成员进行交流】。事实是他们都不能像 Ubuntu 问答板块那样为它们自己增添一些好的文档。至于邮件列表,我一直认为这对于合作是一种很痛苦的过时方法,但这仅仅是我的个人看法——其他人可能有不同的看法,也可能会认为它很好。
你怎么看?你认为 Ubuntu 将来还会是占主导地位的发行版吗?也许你相信 Arch、Linux Mint 或者其他的发行版会在普及度上打败 Ubuntu。既然这样那请大声说出你最喜爱的发行版。如果这个发行版是 Ubuntu 衍生版,说说你为什么更喜欢它而不是 Ubuntu 本身。如果不出意外Ubuntu 会继续成为构建其他发行版的基础,我想很多人都是这样认为的。
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/why-ubuntu-based-distros-are-leaders.html
作者:[Matt Hartley][a]
译者:[vim-kakali](https://github.com/vim-kakali)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.datamation.com/author/Matt-Hartley-3080.html
[1]: http://www.ubuntu.com/
[2]: https://www.debian.org/
[3]: http://www.datamation.com/open-source/ubuntu-snap-packages-the-good-the-bad-the-ugly.html
[4]: http://loco.ubuntu.com/
[5]: http://askubuntu.com/

View File

@@ -1,98 +0,0 @@
5 个适合课堂教学的树莓派项目
================================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-open-source-yearbook-lead3.png)
图片来源 : opensource.com
### 1. Minecraft Pi ###
![](https://opensource.com/sites/default/files/lava.png)
上图由树莓派基金会提供。遵循 [CC BY-SA 4.0.][1] 协议。
Minecraft我的世界几乎是世界上每个青少年都极其喜爱的游戏 —— 在吸引年轻人注意力方面,它也是最具创意的游戏之一。伴随着每一个树莓派的游戏版本不仅仅是一个关于创造性思维的建筑游戏,它还带有一个编程接口,允许使用者通过 Python 代码来与 Minecraft 世界进行互动。
对于教师来说Minecraft: Pi 版本是一个鼓励学生们解决遇到的问题以及通过书写代码来执行特定任务的极好方式。你可以使用 Python API
来建造一所房子,让它跟随你到任何地方;或在你所到之处修建一座桥梁;又或者是下一场岩溶雨;或在天空中显示温度;以及其他任何你能想像到的事物。
可在 "[Minecraft Pi 入门][2]" 中了解更多相关内容。
### 2. 反应游戏和交通指示灯 ###
![](https://opensource.com/sites/default/files/pi_traffic_installed_yellow_led_on.jpg)
上图由 [Low Voltage Labs][3] 提供。遵循 [CC BY-SA 4.0][1] 协议。
在树莓派上进行物理计算是非常容易的:只需将 LED 灯和按钮连接到 GPIO 针脚上,再加上少量的代码,你就可以点亮 LED 灯并通过按钮来控制物体。一旦你掌握了执行基本操作的代码,接下来就可以随你的想像发挥了!
假如你知道如何让一盏灯闪烁,你就可以让三盏灯闪烁。选用三种交通灯颜色的 LED 灯,你就可以编程出交通灯的闪烁序列。假如你还知道如何用一个按钮来触发事件,那么你就做出一个人行横道了!同时,你还可以找到诸如 [PI-TRAFFIC][4]、[PI-STOP][5]、[Traffic HAT][6] 等预先构建好的交通灯插件。
这并不全是关于代码的:它还可以作为一个理解真实世界中的系统是如何被设计出来的练习。计算思维在生活中的各种情景下都是一项有用的技能。
![](https://opensource.com/sites/default/files/reaction-game.png)
上图由树莓派基金会提供。遵循 [CC BY-SA 4.0][1] 协议。
下面尝试将两个按钮和一个 LED 灯连接起来,来制作一个二人制反应游戏 —— 让灯在一段随机的时间中点亮,然后看谁能够先按到按钮!
想了解更多的话,请查看 [GPIO 新手指南][7]。你所需要的尽在 [CamJam EduKit 1][8]。
### 3. Sense HAT 像素宠物 ###
Astro Pi一个增强版的树莓派已于去年 12 月问世但你并没有错过亲手把玩这类硬件的机会。Sense HAT 是一个用于 Astro Pi 任务的传感器扩展板,任何人都可以买到。你可以用它来做数据收集、科学实验、游戏或者更多。观看下面这个由树莓派基金会的 Carrie Anne 带来的 Gurl Geek Diaries 视频,开始一段美妙的旅程吧:在 Sense HAT 的显示屏上展示出你自己设计的一个动物像素宠物:
youtube 视频
<iframe width="520" height="315" frameborder="0" src="https://www.youtube.com/embed/gfRDFvEVz-w" allowfullscreen=""></iframe>
在 "[探索 Sense HAT][9]" 中可以学到更多。
### 4. 红外鸟箱 ###
![](https://opensource.com/sites/default/files/ir-bird-box.png)
上图由 [Low Voltage Labs][3] 提供。遵循 [CC BY-SA 4.0][1] 协议。
让全班所有同学都能够参与进来的一个好练习是:在一个鸟箱中放置一个树莓派、NoIR 摄像头模块和一些红外光源,这样你就可以在黑暗中进行观察,并通过网络从树莓派获取视频流。等鸟进入鸟箱后,你就可以在不打扰它们的情况下观察它们。
在这期间,你可以学习到所有关于红外和光谱的知识,以及如何用软件来调整摄像头的焦距和控制它。
在 "[制作一个红外鸟箱][10]" 中你可以学到更多。
### 5. 机器人 ###
![](https://opensource.com/sites/default/files/edukit3_1500-alex-eames-sm.jpg)
上图由 Low Voltage Labs 提供。遵循 [CC BY-SA 4.0][1] 协议。
拥有一个树莓派、一些传感器和一个电机控制电路板,你就可以构建你自己的机器人。你可以制作各种类型的机器人:从用透明胶带和自制底盘组装起来的简易四驱车,到由游戏控制器驱动、带有传感器和摄像头、具有自主意识的金属巨兽。
学习如何直接控制单个电机,例如通过 RTK Motor Controller Board£8/$12;或者尝试新的 CamJam robotics kit£17/$25,它带有电机、轮子和一系列传感器,这些都很超值,也很有学习潜力。
另外,如果你喜欢更为硬核的东西,可以尝试 PiBorg 的 [4Borg][11]£99/$150)或 [DiddyBorg][12]£180/$273),或者一步到位,入手他们的 DoodleBorg 金属版(£250/$380),并构建一个他们声名远扬的 [DoodleBorg 坦克][13](很不幸的是,这个没有卖的)的迷你版。
另外请参考 [CamJam robotics kit worksheets][14]。
--------------------------------------------------------------------------------
via: https://opensource.com/education/15/12/5-great-raspberry-pi-projects-classroom
作者:[Ben Nuttall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bennuttall
[1]:https://creativecommons.org/licenses/by-sa/4.0/
[2]:https://opensource.com/life/15/5/getting-started-minecraft-pi
[3]:http://lowvoltagelabs.com/
[4]:http://lowvoltagelabs.com/products/pi-traffic/
[5]:http://4tronix.co.uk/store/index.php?rt=product/product&product_id=390
[6]:https://ryanteck.uk/hats/1-traffichat-0635648607122.html
[7]:http://pythonhosted.org/gpiozero/recipes/
[8]:http://camjam.me/?page_id=236
[9]:https://opensource.com/life/15/10/exploring-raspberry-pi-sense-hat
[10]:https://www.raspberrypi.org/learning/infrared-bird-box/
[11]:https://www.piborg.org/4borg
[12]:https://www.piborg.org/diddyborg
[13]:https://www.piborg.org/doodleborg
[14]:http://camjam.me/?page_id=1035#worksheets

View File

@@ -0,0 +1,519 @@
Securi-Pi: 使用树莓派作为安全跳板
================================================================================
像很多 LinuxJournal 的读者一样,我也过上了当今非常普遍的“科技游牧”生活,在网络到网络间,从一个接入点到另一个接入点,我们身处现实世界的不同地方却始终保持统一的互联网接入端。近来我发现越来越多的网络环境开始屏蔽对外的常用端口比如 SMTP端口25SSH端口22之类的。当你走进一家咖啡馆然后想 SSH 到你的一台服务器上做点事情的时候发现端口22被屏蔽了是一件很烦的事情。
不过,我到目前为止还没发现有什么网络环境会把 HTTPS 给墙了(端口 443。在稍微配置了一下家中的树莓派 2 之后,我成功地让自己能以树莓派的 443 端口作为跳板在各种网络环境下连上想要的目标端口。简而言之我把家中的树莓派设置成了一个 OpenVPN 端点、SSH 端点,同时也是一个 Apache 服务器,用于监听 443 端口上我的接入活动并执行我预先设置好的网络策略。
### 笔记
此解决方案能搞定大多数有限制的网络环境但有些防火墙会对外部流量调用深度包检查Deep packet inspection它们时常能屏蔽掉用本篇文章里的方式传输的信息。不过我到目前为止还没在这样的防火墙后测试过。同时尽管我使用了很多基于密码学的工具OpenVPNHTTPSSSH我并没有非常严格地审计过这套配置方案译者注作者的意思是指这套方案能帮你绕过端口限制但不代表你就是完全安全地连接上了树莓派。有时候甚至 DNS 服务都会泄露你的信息,很可能在我没有考虑周到的角落里会有遗漏。我强烈不推荐把此跳板配置方案当作是万无一失的隐藏网络流量的办法,此配置只是希望能绕过一些端口限制连上网络,而不是做一些危险的事情。
### 起步
让我们先从你需要什么说起。我用的是树莓派 2装载了最新版本的 Raspbian不过这个配置应该也能在树莓派 Model B 上运行512MB 的内存对我们来说绰绰有余了,虽然性能可能没有树莓派 2 这么好,毕竟相比于四核心的树莓派 2Model B 只有一颗单核心 CPU。我的树莓派位于家里的防火墙和路由器之后所以我还能用这个树莓派访问家里的其他电子设备。同时这也意味着我的流量在互联网上看起来仿佛来自我家的 IP 地址,所以这也算在某种意义上保护了我的匿名性。如果你没有树莓派,或者不想从家里运行这个服务,那你完全可以把这个配置放在一台小型云服务器上(译者注:比如 VPS。你只要确保服务器运行着基于 Debian 的 Linux 发行版即可,这份指南依然可用。
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11913f1.jpg)
图 1 树莓派,即将成为我们的加密网络端点
### 安装并配置 BIND
无论你是用树莓派还是一台服务器,成功启动之后你就可以安装 BIND 了,它是支撑着互联网相当一部分域名解析的域名服务软件。你将把 BIND 仅仅用作缓存域名服务器,而不会把它配置为处理来自互联网的域名请求。安装 BIND 会让你拥有一个可供 OpenVPN 使用的 DNS 服务器。安装 BIND 十分简单,`apt-get` 直接就可以搞定:
```
root@test:~# apt-get install bind9
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
bind9utils
Suggested packages:
bind9-doc resolvconf ufw
The following NEW packages will be installed:
bind9 bind9utils
0 upgraded, 2 newly installed, 0 to remove and
↪0 not upgraded.
Need to get 490 kB of archives.
After this operation, 1,128 kB of additional disk
↪space will be used.
Do you want to continue [Y/n]? y
```
在我们能把 BIND 当做缓存域名服务器之前,还有一些小细节需要配置。两处修改都在 `/etc/bind/named.conf.options` 里完成。首先你要取消注释 forwarders 这一节内容,同时增加一个可以转发域名请求的目标服务器。作为例子,我会用 Google 的 DNS 服务器8.8.8.8(译者注:国内的话需要找一个替代品);文件的 forwarders 节看上去大致是这样的:
```
forwarders {
8.8.8.8;
};
```
第二点你需要做的更改是允许来自本地局域网和本机回环地址的查询query直接把这一行加到配置文件的末尾、最后一个 `}` 之前就可以了:
```
allow-query { 192.168.1.0/24; 127.0.0.0/16; };
```
上面那行配置会允许此 DNS 服务器接收来自局域网192.168.1.0/24和本机127.0.0.0/16的请求。下一步你需要重启一下 BIND 服务:
```
root@test:~# /etc/init.d/bind9 restart
[....] Stopping domain name service...: bind9waiting
↪for pid 13209 to die
. ok
[ ok ] Starting domain name service...: bind9.
```
现在你可以测试一下 `nslookup` 来确保你的服务正常运行了:
```
root@test:~# nslookup
> server localhost
Default server: localhost
Address: 127.0.0.1#53
> www.google.com
Server: localhost
Address: 127.0.0.1#53
Non-authoritative answer:
Name: www.google.com
Address: 173.194.33.176
Name: www.google.com
Address: 173.194.33.177
Name: www.google.com
Address: 173.194.33.178
Name: www.google.com
Address: 173.194.33.179
Name: www.google.com
Address: 173.194.33.180
```
完美现在你的系统里已经有一个正常的域名服务在允许了下一步我们来配置一下OpenVPN。
### 安装并配置 OpenVPN
OpenVPN 是一个运用 SSL/TLS 作为密钥交换的开源 VPN 解决方案。同时它也非常便于在 Linux 环境下部署。配置 OpenVPN 可能有一点艰巨,不过在此其实你也不需要在默认的配置文件里做太多修改。首先你会需要运行一下 `apt-get` 来安装 OpenVPN
```
root@test:~# apt-get install openvpn
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
liblzo2-2 libpkcs11-helper1
Suggested packages:
resolvconf
The following NEW packages will be installed:
liblzo2-2 libpkcs11-helper1 openvpn
0 upgraded, 3 newly installed, 0 to remove and
↪0 not upgraded.
Need to get 621 kB of archives.
After this operation, 1,489 kB of additional disk
↪space will be used.
Do you want to continue [Y/n]? y
```
现在 OpenVPN 已经安装好了你需要配置它。OpenVPN 基于 SSL并且同时依赖服务端和客户端两方的证书来工作。为了生成这些证书你需要在机器上配置一个证书颁发机构CA。幸运的是OpenVPN 在安装时自带了一些用于生成证书的脚本,比如 “easy-rsa”来帮助你加快这个过程。你需要创建一个目录用于放置 easy-rsa 脚本的模板:
```
root@test:~# mkdir /etc/openvpn/easy-rsa
root@test:~# cp -rpv
↪/usr/share/doc/openvpn/examples/easy-rsa/2.0/*
↪/etc/openvpn/easy-rsa/
```
下一步,把 vars 文件复制一个备份:
```
root@test:/etc/openvpn/easy-rsa# cp vars vars.bak
```
接下来,编辑一下 vars 以让其中的信息符合你的状态。我将以我需要编辑的信息作为例子:
```
KEY_SIZE=4096
KEY_COUNTRY="US"
KEY_PROVINCE="CA"
KEY_CITY="Silicon Valley"
KEY_ORG="Linux Journal"
KEY_EMAIL="bill.childers@linuxjournal.com"
```
下一步是 source 一下 vars ,这样系统就能把其中的信息当作环境变量处理了:
```
root@test:/etc/openvpn/easy-rsa# source ./vars
NOTE: If you run ./clean-all, I will be doing a
↪rm -rf on /etc/openvpn/easy-rsa/keys
```
### 搭建 CA证书颁发机构
接下来你要运行一下 `clean-all` 来确保有一个干净的工作环境,紧接着就可以构建 CA 了。注意,我把交互提示中的一些 changeme 默认值修改为符合我安装情况的内容:
```
root@test:/etc/openvpn/easy-rsa# ./clean-all
root@test:/etc/openvpn/easy-rsa# ./build-ca
Generating a 4096 bit RSA private key
...................................................++
...................................................++
writing new private key to 'ca.key'
-----
You are about to be asked to enter information that
will be incorporated into your certificate request.
What you are about to enter is what is called a
Distinguished Name or a DN.
There are quite a few fields but you can leave some
blank. For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:
State or Province Name (full name) [CA]:
Locality Name (eg, city) [Silicon Valley]:
Organization Name (eg, company) [Linux Journal]:
Organizational Unit Name (eg, section)
↪[changeme]:SecTeam
Common Name (eg, your name or your server's hostname)
↪[changeme]:test.linuxjournal.com
Name [changeme]:test.linuxjournal.com
Email Address [bill.childers@linuxjournal.com]:
```
### 生成服务端证书
一旦 CA 创建好了,你接着就可以生成服务端的 OpenVPN 证书了:
```
root@test:/etc/openvpn/easy-rsa#
↪./build-key-server test.linuxjournal.com
Generating a 4096 bit RSA private key
...................................................++
writing new private key to 'test.linuxjournal.com.key'
-----
You are about to be asked to enter information that
will be incorporated into your certificate request.
What you are about to enter is what is called a
Distinguished Name or a DN.
There are quite a few fields but you can leave some
blank. For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:
State or Province Name (full name) [CA]:
Locality Name (eg, city) [Silicon Valley]:
Organization Name (eg, company) [Linux Journal]:
Organizational Unit Name (eg, section)
↪[changeme]:SecTeam
Common Name (eg, your name or your server's hostname)
↪[test.linuxjournal.com]:
Name [changeme]:test.linuxjournal.com
Email Address [bill.childers@linuxjournal.com]:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from
↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :PRINTABLE:'CA'
localityName :PRINTABLE:'Silicon Valley'
organizationName :PRINTABLE:'Linux Journal'
organizationalUnitName:PRINTABLE:'SecTeam'
commonName :PRINTABLE:'test.linuxjournal.com'
name :PRINTABLE:'test.linuxjournal.com'
emailAddress
↪:IA5STRING:'bill.childers@linuxjournal.com'
Certificate is to be certified until Sep 1
↪06:23:59 2025 GMT (3650 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
```
下一步需要用掉一些时间来生成 OpenVPN 服务器需要的 Diffie-Hellman 密钥。这个步骤在一般的桌面级 CPU 上会需要几分钟的时间,但在 ARM 构架的树莓派上,会用掉超级超级长的时间。耐心点,只要终端上的点还在跳,那么一切就在按部就班运行:
```
root@test:/etc/openvpn/easy-rsa# ./build-dh
Generating DH parameters, 4096 bit long safe prime,
↪generator 2
This is going to take a long time
....................................................+
<snipped out many more dots>
```
### 生成客户端证书
现在你要生成一下客户端用于登陆 OpenVPN 的密钥。通常来说 OpenVPN 都会被配置成使用证书验证的加密方式,在这个配置下客户端需要持有由服务端签发的一份证书:
```
root@test:/etc/openvpn/easy-rsa# ./build-key
↪bills-computer
Generating a 4096 bit RSA private key
...................................................++
...................................................++
writing new private key to 'bills-computer.key'
-----
You are about to be asked to enter information that
will be incorporated into your certificate request.
What you are about to enter is what is called a
Distinguished Name or a DN. There are quite a few
fields but you can leave some blank.
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:
State or Province Name (full name) [CA]:
Locality Name (eg, city) [Silicon Valley]:
Organization Name (eg, company) [Linux Journal]:
Organizational Unit Name (eg, section)
↪[changeme]:SecTeam
Common Name (eg, your name or your server's hostname)
↪[bills-computer]:
Name [changeme]:bills-computer
Email Address [bill.childers@linuxjournal.com]:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from
↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :PRINTABLE:'CA'
localityName :PRINTABLE:'Silicon Valley'
organizationName :PRINTABLE:'Linux Journal'
organizationalUnitName:PRINTABLE:'SecTeam'
commonName :PRINTABLE:'bills-computer'
name :PRINTABLE:'bills-computer'
emailAddress
↪:IA5STRING:'bill.childers@linuxjournal.com'
Certificate is to be certified until
↪Sep 1 07:35:07 2025 GMT (3650 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified,
↪commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
root@test:/etc/openvpn/easy-rsa#
```
现在你需要再生成一个 HMAC 密钥作为共享密钥,来进一步增强整个加密体系的安全性:
```
root@test:~# openvpn --genkey --secret
↪/etc/openvpn/easy-rsa/keys/ta.key
```
### 配置服务器
最后,到了配置 OpenVPN 服务的时候了。你需要创建一个 `/etc/openvpn/server.conf` 文件;这个配置文件的大多数地方都可以套用模板。设置 OpenVPN 服务的主要修改在于让它使用 TCP 而不是 UDP 连接。这是下一步所必需的:如果不用 TCP 连接,那么你的服务将无法通过 443 端口工作。创建 `/etc/openvpn/server.conf`,然后把下述配置写进去:
```
port 1194
proto tcp
dev tun
ca easy-rsa/keys/ca.crt
cert easy-rsa/keys/test.linuxjournal.com.crt ## or whatever
↪your hostname was
key easy-rsa/keys/test.linuxjournal.com.key ## Hostname key
↪- This file should be kept secret
management localhost 7505
dh easy-rsa/keys/dh4096.pem
tls-auth /etc/openvpn/certs/ta.key 0
server 10.8.0.0 255.255.255.0 # The server will use this
↪subnet for clients connecting to it
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp" # Forces clients
↪to redirect all traffic through the VPN
push "dhcp-option DNS 192.168.1.1" # Tells the client to
↪use the DNS server at 192.168.1.1 for DNS -
↪replace with the IP address of the OpenVPN
↪machine and clients will use the BIND
↪server setup earlier
keepalive 30 240
comp-lzo # Enable compression
persist-key
persist-tun
status openvpn-status.log
verb 3
```
最后,你将需要在服务器上启用 IP 转发,配置 OpenVPN 为开机启动并立刻启动 OpenVPN 服务:
```
root@test:/etc/openvpn/easy-rsa/keys# echo
↪"net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
root@test:/etc/openvpn/easy-rsa/keys# sysctl -p
↪/etc/sysctl.conf
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.ipv4.ip_forward = 0
net.ipv4.ip_forward = 1
root@test:/etc/openvpn/easy-rsa/keys# update-rc.d
↪openvpn defaults
update-rc.d: using dependency based boot sequencing
root@test:/etc/openvpn/easy-rsa/keys#
↪/etc/init.d/openvpn start
[ ok ] Starting virtual private network daemon:.
```
### 配置 OpenVPN 客户端
客户端的安装取决于客户端的操作系统,但你总会需要之前生成的证书和密钥,并导入你的 OpenVPN 客户端并新建一个配置文件。每种操作系统下的 OpenVPN 客户端在操作上会有些稍许不同,这也不在这篇文章的覆盖范围内,所以你最好去看看特定操作系统下的 OpenVPN 文档来获取更多信息。参考文档里的 Resources 章节。
### 安装 SSLH —— "魔法"多协议工具
本文章介绍的解决方案中最有趣的部分就是 SSLH 了。SSLH 是一个多协议复用工具:它可以监听 443 端口的流量,分析它们是 SSH、HTTPS 还是 OpenVPN 的通讯包,并把它们分别转发给正确的系统服务。这就是为何本解决方案可以让你绕过大多数端口封锁:你可以一直使用 HTTPS 端口通讯,鉴于它几乎从来不会被封锁。
同样,直接 `apt-get` 安装:
```
root@test:/etc/openvpn/easy-rsa/keys# apt-get
↪install sslh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
apache2 apache2-mpm-worker apache2-utils
↪apache2.2-bin apache2.2-common
libapr1 libaprutil1 libaprutil1-dbd-sqlite3
↪libaprutil1-ldap libconfig9
Suggested packages:
apache2-doc apache2-suexec apache2-suexec-custom
↪openbsd-inetd inet-superserver
The following NEW packages will be installed:
apache2 apache2-mpm-worker apache2-utils
↪apache2.2-bin apache2.2-common
libapr1 libaprutil1 libaprutil1-dbd-sqlite3
↪libaprutil1-ldap libconfig9 sslh
0 upgraded, 11 newly installed, 0 to remove
↪and 0 not upgraded.
Need to get 1,568 kB of archives.
After this operation, 5,822 kB of additional
↪disk space will be used.
Do you want to continue [Y/n]? y
```
在 SSLH 安装完成之后,包管理器会询问要以 inetd 还是 standalone 模式运行。选择 standalone 模式,因为你希望 SSLH 在它自己的进程里运行。如果你没有安装 Apacheapt 包管理器会自动帮你下载并安装,尽管它并不是必需的。如果你已经装有 Apache那你需要确保它只监听 localhost 接口而不是所有接口(不然的话 SSLH 会无法启动,因为 443 端口已经被 Apache 监听占用)。安装后,你会看到一个如下所示的错误信息:
```
[....] Starting ssl/ssh multiplexer: sslhsslh disabled,
↪please adjust the configuration to your needs
[FAIL] and then set RUN to 'yes' in /etc/default/sslh
↪to enable it. ... failed!
failed!
```
这其实并不是错误信息,只是 SSLH 在提醒你它还未被配置所以无法启动,这很正常。配置 SSLH 相对来说比较简单。它的配置文件放置在 `/etc/default/sslh`,你只需要修改 `RUN``DAEMON_OPTS` 变量就可以了。我的 SSLH 配置文件如下所示:
```
# Default options for sslh initscript
# sourced by /etc/init.d/sslh
# Disabled by default, to force yourself
# to read the configuration:
# - /usr/share/doc/sslh/README.Debian (quick start)
# - /usr/share/doc/sslh/README, at "Configuration" section
# - sslh(8) via "man sslh" for more configuration details.
# Once configuration ready, you *must* set RUN to yes here
# and try to start sslh (standalone mode only)
RUN=yes
# binary to use: forked (sslh) or single-thread
↪(sslh-select) version
DAEMON=/usr/sbin/sslh
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh
↪127.0.0.1:22 --ssl 127.0.0.1:443 --openvpn
↪127.0.0.1:1194 --pidfile /var/run/sslh/sslh.pid"
```
保存编辑并启动 SSLH
```
root@test:/etc/openvpn/easy-rsa/keys#
↪/etc/init.d/sslh start
[ ok ] Starting ssl/ssh multiplexer: sslh.
```
现在你应该可以从 443 端口 ssh 到你的树莓派了,它会正确地使用 SSLH 转发:
```
$ ssh -p 443 root@test.linuxjournal.com
root@test:~#
```
SSLH 现在监听着 443 端口,并会根据抵达的流量包的类型,把流量转发到 SSH、Apache 或者 OpenVPN。这套系统现已整装待发了
### 结论
现在你可以启动 OpenVPN并把你的客户端配置为连接服务器的 443 端口了,然后 SSLH 会从那里把流量转发到服务器的 1194 端口。但鉴于你是在和服务器的 443 端口通信,你的 VPN 流量不会被封锁。现在你可以舒服地坐在陌生小镇的咖啡店里,畅通无阻地通过树莓派上的 OpenVPN 浏览互联网。你顺便还给你的连接增加了一些安全性,这个额外作用也会让你的连接更安全、更私密一些。享受通过安全跳板浏览互联网吧!
资源:
安装与配置 OpenVPN: [https://wiki.debian.org/OpenVPN](https://wiki.debian.org/OpenVPN) and [http://cryptotap.com/articles/openvpn](http://cryptotap.com/articles/openvpn)
OpenVPN 客户端下载: [https://openvpn.net/index.php/open-source/downloads.html](https://openvpn.net/index.php/open-source/downloads.html)
OpenVPN Client for iOS: [https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8](https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8)
OpenVPN Client for Android: [https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en](https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en)
Tunnelblick for Mac OS X (OpenVPN client): [https://tunnelblick.net](https://tunnelblick.net)
SSLH 介绍: [http://www.rutschle.net/tech/sslh.shtml](http://www.rutschle.net/tech/sslh.shtml) 和 [https://github.com/yrutschle/sslh](https://github.com/yrutschle/sslh)
----------
via: http://www.linuxjournal.com/content/securi-pi-using-raspberry-pi-secure-landing-point?page=0,0
作者:[Bill Childers][a]
译者:[Moelf](https://github.com/Moelf)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/users/bill-childers

View File

@@ -1,103 +0,0 @@
两个杰出的一体化Linux服务器
================================================
关键词Linux服务器SMBclearosZentyal
![](http://www.linux.com/images/stories/66866/jack-clear_a.png)
>图1: ClearOS安装向导。
回到2000年微软发布小型商务服务器。这个产品改变了很多人们对科技在商务领域的看法。你可以部署一个单独的服务器它能处理邮件日历文件共享目录服务VPN以及更多而不是很多机器处理不同的任务。对很多小型商务来说这是非常好的恩惠但是Windows SMB的一些花费是昂贵的。对于其他人微软设计的依赖于一个服务器的想法根本不是一个选项。
对于后一类用户,现在有了一些替代品。事实上,在 Linux 和开源领域里,你可以从许多稳定的平台中选择,它们能作为一站式商店服务于你的小型企业。如果你的小型企业有 10 到 50 名员工,一体化服务器也许就是你所需的理想方案。
在这里,我将考察两个 Linux 一体化服务器,你可以看看它们中哪一个能完美地适用于你的公司。
记住,这些服务器无论如何都不适用于大型商务或企业。大公司无法依靠一体化服务器,仅仅是因为一台服务器无法承担企业内的全部期望。除此之外,下面就是小型企业可以从 Linux 一体化服务器中期待的东西。
### ClearOS
[ClearOS][1] 最初于 2009 年以 ClarkConnect 路由和网关发行版的形式发布。从那以后ClearOS 增加了一体化服务器所必需的全部特性。ClearOS 提供的不仅仅是软件,你还可以购买 [ClearBox 100][2] 或 [ClearBox 300][3]。这些服务器搭载完整的 ClearOS作为 IT 设备销售。在[这里][4]查看特性对比/价格矩阵。
如果你已经有自己的硬件,你可以下载以下版本之一:
- [ClearOS社区][5] — 社区免费版的ClearOS
- [ClearOS家庭][6] — 理想的家庭办公室(详细的功能和订阅费用,见这里)
- [ClearOS商务][7] — 理想的小型商务(详细的功能和订阅费用,见这里)
使用 ClearOS 你能得到什么你得到的是一个单机的、带有精美网页界面设计的商务服务器。ClearOS 的独特之处是什么?基础服务器中就包含了很多特性,除此之外,你还可以从 [Clear Marketplace][8] 中添加特性。在市场里你可以安装免费或付费的应用程序,来扩展 ClearOS 服务器的特性集。这里你可以找到 Windows 服务器活动目录、OpenLDAP、Flexshares、反恶意软件、Web 访问控制、内容过滤等附加组件你甚至可以找到一些第三方组件像谷歌应用同步、Zarafa 协作平台、卡巴斯基杀毒。
ClearOS 的安装和其他 Linux 发行版一样(采用基于红帽的 Anaconda 安装程序)。安装完成后,系统将提示您设置网络接口,随后你就可以在与 ClearOS 服务器处于同一网络的电脑上,用浏览器访问如下格式的地址:
[https://IP_OF_CLEAROS_SERVER:81][9]
其中 IP_OF_CLEAROS_SERVER 就是服务器的真实 IP 地址。注意:当你第一次在浏览器里访问这个服务器时,你会收到一个 “Connection is not private” 的警告。继续访问这个地址,你才能进行设置。
浏览器连接上后,会提示你以 root 用户认证(使用初始化安装时设置的 root 用户密码。通过认证后你将看到 ClearOS 的安装向导(图 1
点击下一步按钮,开始设置你的 ClearOS 服务器。这个向导一目了然,最后还会问你想用哪个版本的 ClearOS点击社区、家庭或者商业版。选好之后你需要注册一个账户。创建账户并注册服务器后你就可以开始更新服务器、配置服务器以及从市场添加模块了图 2
![](http://www.linux.com/images/stories/66866/jack-clear_b.png)
>图2: 从市场安装模块。
此时你已经准备开始深入挖掘配置你的ClearOS小型商务服务器了。
### Zentyal
[Zentyal][10] 是一个基于 Ubuntu 的小型商务服务器,之前以 eBox 的名字发布。Zentyal 提供了大量的服务器/服务来满足你的小型商务需求:
- 电子邮件 — 网页邮件;原生微软邮件协议和活动目录支持;日历和通讯录;手机设备电子邮件同步;反病毒/反垃圾IMAPPOPSMTPCalDAV和CardDAV支持。
- 域和目录 — 核心域目录管理多个组织单元单点登录身份验证文件共享ACLs高级域名管理打印机管理。
- 网络和防火墙 — 静态和DHCP接口对象和服务包过滤端口转发。
- 基础设施 — DNSDHCPNTP认证中心VPN。
- 防火墙
安装 Zentyal 很像 Ubuntu 服务器基于文本的安装,而且很简单:启动安装镜像,做一些选择,然后等待安装完成。初始的文本安装完成后,你会得到一个桌面 GUI其中的向导程序会让你选择软件包。选择所有你想安装的包然后让安装程序完成剩下的工作。
最终你可以通过网页接口来访问Zentyal服务器浏览器访问[https://IP_OF_SERVER:8443][11] - IP_OF_SERVER是Zentyal服务器的内网地址或使用独立的桌面GUI来管理服务器Zentyal包括快速访问管理员和用户控制台就像Zentyal管理控制台。当全部系统已经保存开启你将看到Zentyal面板图3
![](http://www.linux.com/images/stories/66866/jack-zentyal_a.png)
>图3: Zentyal活动面板.
这个面板允许你控制服务器的所有方面,比如更新、管理服务器/服务、获取服务器的实时状态。你也可以进入组件区域安装部署过程中未选择的组件或更新当前的软件包列表。点击“软件管理 > 系统更新”,选择你想更新的项目(图 4然后在屏幕最底端点击“更新”按钮。
![](http://www.linux.com/images/stories/66866/jack-zentyal_b.png)
>图4: 更新你的Zentyal服务器很简单。
### 哪个服务器适合你?
回答这个问题要看你的需求。Zentyal 是一个出色的服务器能很好地胜任你的小型商务网络。如果你需要更多比如协同办公套件groupware你最好把赌注下在 ClearOS 上。如果你不需要协同办公套件,那么两个服务器都能出色地完成工作。
我强烈建议把这两个一体化服务器都安装一遍,看看哪个能最好地服务于你的小公司。
------------------------------------------------------------------------------
via: http://www.linux.com/learn/tutorials/882146-two-outstanding-all-in-one-linux-servers
作者:[Jack Wallen][a]
译者:[wyangsun](https://github.com/wyangsun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.linux.com/community/forums/person/93
[1]: http://www.linux.com/learn/tutorials/882146-two-outstanding-all-in-one-linux-servers#clearfoundation-overview
[2]: https://www.clearos.com/products/hardware/clearbox-100-series
[3]: https://www.clearos.com/products/hardware/clearbox-300-series
[4]: https://www.clearos.com/products/hardware/clearbox-overview
[5]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso
[6]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso
[7]: http://mirror.clearos.com/clearos/7/iso/x86_64/ClearOS-DVD-x86_64.iso
[8]: https://www.clearos.com/products/purchase/clearos-marketplace-overview
[9]: https://ip_of_clearos_server:81/
[10]: http://www.zentyal.org/server/
[11]: https://ip_of_server:8443/

View File

@ -0,0 +1,49 @@
Cassandra 和 Spark 数据处理入门
==============================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_520x292_opendata_0613mm.png?itok=mzC0Tb28)
Apache Cassandra 数据库近来引起了很多的兴趣,这主要源于现代云端软件对于可用性及性能方面的要求。
那么Apache Cassandra 是什么它是一种为高可用性及线性可扩展性优化的分布式联机交易处理OLTP数据库。当人们想知道 Cassandra 的用途时可以想想离客户最近的那一层系统也就是最终与我们的用户进行交互、需要保证实时可用的系统产品目录、IoT、医疗系统以及移动应用。对这些应用而言下线时间意味着利润降低甚至导致更坏的结果。Netflix 是这个于 2008 年开源的项目的早期使用者,他们对此项目的贡献以及项目的成功让它名声大噪。
Cassandra 于 2010 年成为了 Apache 软件基金会的顶级项目,此后开始流行起来。现在,只要你有 Cassandra 的相关知识,找工作时就能轻松不少。想想一个 NoSQL 开源技术能与企业级 SQL 竞争到如此高度,着实不可思议。这引出了一个问题:是什么让它如此流行?
因为采用了首先在[亚马逊发表的 Dynamo 论文][1]中提出的设计Cassandra 有能力在大规模的硬件及网络故障时保持实时在线。由于采用了点对点模式,它没有单点故障,能幸免于机架故障甚至完全的网络分区。我们能在不影响用户体验的前提下处理数据中心故障。一个把故障考虑在内的分布式系统,才是一个让人没有后顾之忧的分布式系统,因为老实说,故障是迟早会发生的。有了 Cassandra我们可以直面故障这一残酷现实并将应对之道融入数据库的结构和功能中。
我们能猜到你现在在想什么:“但我只有关系数据库的相关背景,这样的转变不会很困难吗?”这个问题的答案介于是与不是之间。使用 Cassandra 建立数据模型对有关系数据库背景的开发者而言是轻车熟路的:我们使用表来建立数据模型,并使用 CQLCassandra 查询语言来查询数据库。然而与 SQL 不同的是Cassandra 支持更复杂的数据结构例如集合collection和用户自定义类型。举个例子当要储存对一张小猫照片的点赞时我们可以将点赞记录储存在包含照片本身的那张表的一个集合之中从而获得更快的顺序查找而不是建立一张独立的表。这样的表述在 CQL 中十分自然。在我们的照片表中,我们需要记录名字、URL 以及给此照片点过赞的人。
![](https://opensource.com/sites/default/files/resize/screen_shot_2016-05-06_at_7.17.33_am-350x198.png)
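上图展示的建表语句大致形如下面的草稿(表名、字段名与类型均为示意性的假设,并非截图的逐字转录):

```
CREATE TABLE photos (
    name text PRIMARY KEY,   -- 照片名字
    url text,                -- 照片 URL
    liked_by set<text>       -- 点过赞的用户集合,无需单独建表
);
```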
在一个高性能系统中,毫秒级的延迟都会对用户体验和客户留存产生影响。昂贵的 JOIN 会引入不可预知的网络调用,制约我们扩容的能力。当我们将数据反规范化,使其能在尽可能少的请求中获取到时,我们是在用廉价的磁盘空间换取性能可预测的高性能应用。我们之所以将反规范化与 Cassandra 一同介绍,是因为它提供了很有吸引力的折衷方案。
很明显我们不会局限于对小猫照片的点赞数量。Cassandra 是一款为高并发写入优化的方案,这使其成为需要持续吞吐数据的大数据应用的理想解决方案。时序数据和 IoT 的使用场景,无论是需求还是实际落地,都在稳定增长,我们也在不断探索如何利用收集到的数据来改进我们的技术应用。
这就引出了我们的下一步。我们已经提到了如何以一种现代的、性价比高的方式储存数据,但我们应该如何获得更强的马力呢?具体而言,当我们收集到了所需的数据,我们该怎样处理呢?如何才能有效地分析几百 TB 的数据呢如何才能对收集到的信息实时做出反馈在几秒而不是几小时的时间里作出决策呢Apache Spark 将给我们答案。
Spark 是大数据变革中的下一步。Hadoop 和 MapReduce 都是革命性的产品它们让大数据界获得了分析所有已收集数据的机会。Spark 对性能的大幅提升及对代码复杂度的大幅降低,则将大数据分析提升到了另一个高度。通过 Spark我们能进行大批量的计算处理对流数据做出快速响应通过机器学习作出决策并通过对图的遍历来理解复杂的递归关系。这不仅仅是为你的客户提供快捷可靠的应用连接Cassandra 已经提供了这样的功能),更是能深入挖掘 Cassandra 所储存的数据,作出更合理的商业决策,同时更好地满足客户需求。
你可以看看 [Spark-Cassandra Connector][2](开源)并动手试试。若想了解更多关于这两种技术的信息,我们强烈推荐 [DataStax Academy][3] 上的自学课程。
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/5/basics-cassandra-and-spark-data-processing
作者:[Jon Haddad][a],[Dani Traphagen][b]
译者:[KevinSJ](https://github.com/KevinSJ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twitter.com/rustyrazorblade
[b]: https://opensource.com/users/dtrapezoid
[1]: http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf
[2]: https://github.com/datastax/spark-cassandra-connector
[3]: https://academy.datastax.com/
[4]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/49162
[5]: https://twitter.com/dtrapezoid
[6]: https://twitter.com/rustyrazorblade

学习使用 Python 控制流和循环来编写和执行 Shell 脚本 —— Part 2
======================================================================================
在 [Python 系列][1]之前的文章里,我们简单介绍了 Python、它的命令行 shell 和 IDLE译者注Python 自带的一个 IDE。我们也演示了如何进行数值运算、如何用变量存储值以及如何把这些值打印到屏幕上。最后我们通过一个练习示例讲解了面向对象编程中方法和属性的概念。
![](http://www.tecmint.com/wp-content/uploads/2016/06/Write-Shell-Scripts-in-Python-Programming.png)
>用 Python 编写 Linux Shell 脚本
本篇中,我们会讨论控制流(根据用户输入的信息、计算结果或变量的当前值来选择不同的动作)和循环(自动重复执行任务),接着运用目前所学的知识,编写一个简单的 shell 脚本,用于显示操作系统类型、主机名、内核发行版、版本号和机器硬件名。
这个例子尽管很基础,但可以帮助我们说明:借助 Python 的面向对象特性来编写 shell 脚本,会比使用一般的 bash 工具更简单。
换句话说,我们想从这里出发
```
# uname -snrvm
```
![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Hostname-of-Linux.png)
> 检查 Linux 的主机名等信息
![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Linux-Hostname-Using-Python-Script.png)
> 用 Python 脚本来检查 Linux 的主机名等信息
或者
![](http://www.tecmint.com/wp-content/uploads/2016/05/Script-to-Check-Linux-System-Information.png)
> 用脚本检查 Linux 系统信息
看着不错,不是吗?那我们就挽起袖子,开干吧。
### Python 中的控制流
正如刚才所说,控制流允许我们根据给定的条件,选择不同的执行动作。在 Python 中最简单的实现就是一个 if/else 语句。
基本语法是这样的:
```
if condition:
# action 1
else:
# action 2
```
当 condition 求值为真true下面的代码块就会被执行`# action 1`代表的部分。否则else 下面的代码就会运行。
condition 可以是任何表达式,只要可以求得值为真或者假。
举个例子:
1. 1 < 3 # 1 小于 3结果为真
2. firstName == "Gabriel" # 对 firstName 为 Gabriel 的人是真,对其他不叫 Gabriel 的人为假
- 在第一个例子中,我们比较了两个值,判断 1 是否小于 3。
- 在第二个例子中,我们比较了 firstName一个变量与字符串 “Gabriel”看在当前执行的位置firstName 的值是否等于该字符串。
- 条件和 else 表达式都必须带着一个冒号(:)。
- 缩进在 Python 非常重要。同样缩进下的行被认为是相同的代码块。
请注意if/else 表达式只是 Python 中许多控制流工具中的一个而已。我们先在这里了解一下,后面会在脚本中用到它。你可以在[官方文档][2]中学习其余的工具。
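把上面的语法和例子结合起来,就是这样一个最小的 if/else 片段(变量名沿用正文中的 firstName

```python
firstName = "Gabriel"

if firstName == "Gabriel":
    # 条件求值为真时执行的代码块,注意缩进
    greeting = "你好Gabriel"
else:
    # 条件为假时执行 else 分支
    greeting = "你好,陌生人!"

print(greeting)
```

把 firstName 改成其它名字再运行一次,就能看到 else 分支的效果。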
### Python 中的循环
简单来说,循环就是一段会重复执行的指令或表达式序列:只要某个条件为真就一直执行,或者对列表中的条目逐一执行。
Python 中最简单的循环是 for 循环,它会迭代一个给定列表的元素,或者对字符串从第一个字符到最后一个字符逐一迭代。
基本语句:
```
for x in example:
# do this
```
这里的 example 可以是一个列表或者一个字符串。如果是列表,变量 x 就代表列表中每个元素如果是字符串x 就代表字符串中每个字符。
```
>>> rockBands = []
>>> rockBands.append("Roxette")
>>> rockBands.append("Guns N' Roses")
>>> rockBands.append("U2")
>>> for x in rockBands:
        print(x)

# 或者

>>> firstName = "Gabriel"
>>> for x in firstName:
        print(x)
```
上面例子的输出如下图所示:
![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Loops-in-Python.png)
>学习 Python 中的循环
### Python 模块
很明显,必须有个途径可以保存一系列的 Python 指令和表达式到文件里,然后需要的时候再取出来。
准确来说模块就是为此而生的。其中os 模块为操作系统的底层功能提供了接口,让我们可以执行许多通常要在命令行下完成的操作。
不用说os 模块包含了许多可供调用的方法和属性,就像我们之前文章里讲解的那样。不过,首先需要使用 import 关键词把模块导入(或者叫包含)到开发环境中来:
```
>>> import os
```
我们来打印出当前的工作目录:
```
>>> os.getcwd()
```
![](http://www.tecmint.com/wp-content/uploads/2016/05/Learn-Python-Modules.png)
>学习 Python 模块
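除了 os.getcwd(),正文的脚本稍后还会用到 os.uname()。下面的小片段演示其返回值os.uname() 仅在类 Unix 系统上可用,返回的各字段值因机器而异,这里不作假定:

```python
import os

print(os.getcwd())    # 当前工作目录

info = os.uname()     # 仅在类 Unix 系统(如 Linux可用
print(info.sysname)   # 操作系统名,例如 'Linux'
print(info.nodename)  # 主机名
print(info.release)   # 内核发行版
```

这五个字段sysname、nodename、release、version、machine正是后面脚本要逐列打印的内容。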
现在,让我们把这些概念(包括之前文章里讨论过的)结合在一起,编写我们需要的脚本。
### Python 脚本
以一段声明开始脚本是个不错的做法,它可以表明脚本的目的、发布所依据的许可证,以及一份列出所做修改的修订历史。尽管这主要是个人喜好,但这会让我们的工作看起来更专业。
这里有个脚本,它可以输出本文开头展示的那种结果。脚本带有大量注释,以便让大家理解其中发生了什么。
在进行下一步之前,花点时间来理解它。注意,我们是如何使用一个 if/else 结构,判断每个字段标题的长度是否大于对应字段值的长度。
基于这个结果,我们用空格填充字段标题与下一个标题之间的间隔,同时用一定数量的短横线作为字段标题与其值之间的分隔行。
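脚本里用到的两个小技巧可以先单独演示一下:用 * 重复字符生成分隔线,以及用空格补齐标题与值的宽度差(示例数据是随意取的):

```python
header = "Hostname"
value = "tecmint"

# 用引号内的字符乘以次数来重复字符,生成与较长一方等宽的分隔线
separator = "-" * max(len(header), len(value))

# 值比标题短时,用空格补齐到标题宽度,保证各列对齐
padded_value = value + " " * (len(header) - len(value))

print(header)
print(separator)
print(padded_value)
```

正文脚本中的 separators、caption 和 values 三个变量,就是在循环里反复做这两件事并把结果拼接起来。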
```
#!/usr/bin/python3
# Change the above line to #!/usr/bin/python if you don't have Python 3 installed
# Script name: uname.py
# Purpose: Illustrate Python's OOP capabilities to write shell scripts more easily
# License: GPL v3 (http://www.gnu.org/licenses/gpl.html)
# Copyright (C) 2016 Gabriel Alejandro Cánepa
# Facebook / Skype / G+ / Twitter / Github: gacanepa
# Email: gacanepa (at) gmail (dot) com
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see .
# REVISION HISTORY
# DATE VERSION AUTHOR CHANGE DESCRIPTION
# ---------- ------- --------------
# 2016-05-28 1.0 Gabriel Cánepa Initial version
# Import the os module
import os
# Assign the output of os.uname() to the the systemInfo variable
# os.uname() returns a 5-string tuple (sysname, nodename, release, version, machine)
# Documentation: https://docs.python.org/3.2/library/os.html#module-os
systemInfo = os.uname()
# This is a fixed array with the desired captions in the script output
headers = ["Operating system","Hostname","Release","Version","Machine"]
# Initial value of the index variable. It is used to define the
# index of both systemInfo and headers in each step of the iteration.
index = 0
# Initial value of the caption variable.
caption = ""
# Initial value of the values variable
values = ""
# Initial value of the separators variable
separators = ""
# Start of the loop
for item in systemInfo:
if len(item) < len(headers[index]):
# A string containing dashes to the length of item[index] or headers[index]
# To repeat a character(s), enclose it within quotes followed
# by the star sign (*) and the desired number of times.
separators = separators + "-" * len(headers[index]) + " "
caption = caption + headers[index] + " "
values = values + systemInfo[index] + " " * (len(headers[index]) - len(item)) + " "
else:
separators = separators + "-" * len(item) + " "
caption = caption + headers[index] + " " * (len(item) - len(headers[index]) + 1)
values = values + item + " "
# Increment the value of index by 1
index = index + 1
# End of the loop
# Print the variable named caption converted to uppercase
print(caption.upper())
# Print separators
print(separators)
# Print values (items in systemInfo)
print(values)
# INSTRUCTIONS:
# 1) Save the script as uname.py (or another name of your choosing) and give it execute permissions:
# chmod +x uname.py
# 2) Execute it:
# ./uname.py
```
如果你已经将上面的脚本保存到文件里,就像代码底部描述的那样,给文件加上执行权限并运行它:
```
# chmod +x uname.py
# ./uname.py
```
如果试图运行脚本时,你得到了如下的错误:
```
-bash: ./uname.py: /usr/bin/python3: bad interpreter: No such file or directory
```
这意味着你没有安装 Python 3。你可以选择安装 Python 3 的软件包,或者替换解释器那一行(如果你曾按照之前文章所述更新过 Python 可执行文件的软链接,这里要特别注意并且非常小心):
```
#!/usr/bin/python3
```

替换为:

```
#!/usr/bin/python
```
这样会导致使用安装好的 Python 2 版本去执行该脚本。
**注意**:该脚本在 Python 2.x 与 Python 3.x 上都测试成功过了。
尽管比较粗糙,你可以认为该脚本就是一个 Python 模块。这意味着你可以在 IDLE 中打开它File → Open… → Select file):
![](http://www.tecmint.com/wp-content/uploads/2016/05/Open-Python-in-IDLE.png)
>在 IDLE 中打开 Python
一个包含有文件内容的新窗口就会打开。然后执行 Run → Run module或者按 F5。脚本的输出就会在原 Shell 里显示出来:
![](http://www.tecmint.com/wp-content/uploads/2016/05/Run-Python-Script.png)
>执行 Python 脚本
如果你想纯粹用 bash 写一个脚本来获得同样的结果,就可能需要结合使用 [awk][3] 和 [sed][4],并借助复杂的方法来存储与读取列表中的元素(这还没算上要用 tr 命令将小写字母转为大写)。
另外,所有 Linux 发行版都至少集成了一个 Python 版本2.x 或者 3.x或者两者都有。你还需要依赖 shell 去完成同样的目标、进而为不同的 shell 编写不同的版本吗?
这只是面向对象编程特性的一个演示,这些特性会成为系统管理员的得力助手。
**注意**:你可以在我的 Github 仓库里获得 [这个 python 脚本][5](或者其他的)。
### 总结
这篇文章里,我们讲解了 Python 中控制流、循环/迭代和模块的概念。我们也演示了如何利用 Python 面向对象编程的方法和属性,来简化复杂的 shell 脚本。
你有任何其他希望去验证的想法吗?开始吧,写出自己的 Python 脚本,如果有任何问题可以咨询我们。不必犹豫,在分割线下面留下评论,我们会尽快回复你。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/learn-python-programming-to-write-linux-shell-scripts/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[Gabriel Cánepa][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/gacanepa/
[1]: http://www.tecmint.com/learn-python-programming-and-scripting-in-linux/
[2]: https://docs.python.org/3/tutorial/controlflow.html
[3]: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/
[4]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
[5]: https://github.com/gacanepa/scripts/blob/master/python/uname.py

Smem Linux 下基于进程和用户的内存占用报告程序
===========================================================================
对 Linux 系统而言,监控内存使用情况是内存管理工作中十分重要的一环。各个 Linux 发行版上有许多可用的内存监控工具,但它们的工作方式各不相同。在这篇指南中,我们将介绍如何安装和使用其中一款名为 smem 的工具。
smem 是一款命令行下的内存使用情况报告工具,能为用户提供 Linux 系统下内存使用的多种报告。和其它传统的内存报告工具不同smem 有一个独特之处:它报告的是 PSSProportional Set Size即按比例分摊共享库内存后的占用量这一指标能更准确地反映虚拟内存环境下应用程序和库的内存使用情况。
![](http://www.tecmint.com/wp-content/uploads/2016/06/Smem-Linux-Memory-Reporting-Tool.png)
>Smem Linux 内存报告工具
已有的传统工具主要关注 RSSResident Set Size驻留集大小这是衡量物理内存占用的标准方法但由于共享库占用的内存会被每个进程重复计算它往往会高估应用程序的内存使用量。
而 PSS 则通过计算应用和库在虚拟内存方案下“公平分摊”的内存量,给出了更合理的衡量方式。
你可以[阅读此指南][1]了解 Linux 系统中内存占用的有关知识(关于 RSS 和 PSS下面我们先来看看 smem 的一些特性。
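RSS、PSS 与 USS 的区别可以用一小段 Python 算一算PSS 把共享内存按共享它的进程数均摊(数字为虚构示例,仅用于说明计算方式):

```python
# 假设某进程独占 2000 KB并与另外 3 个进程共享一个 3000 KB 的库(共 4 个进程共享)
private_kb = 2000
shared_kb = 3000
sharers = 4

uss = private_kb                         # USS只计独占部分
pss = private_kb + shared_kb / sharers   # PSS独占 + 共享部分均摊
rss = private_kb + shared_kb             # RSS独占 + 共享部分全算

print(uss, pss, rss)   # 2000 2750.0 5000
```

可以看到,把 4 个这样的进程的 RSS 直接相加会把共享库重复计算 4 次,而 PSS 相加后恰好等于实际物理内存占用,这就是 smem 采用 PSS 的原因。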
### Smem 工具的特点
- 系统内存概览列表
- 可按进程、映射或用户列出及过滤
- 数据来自 /proc 文件系统
- 可配置来自多个数据源的显示列
- 可配置输出单位和百分比
- 易于配置列表标题和汇总行
- 可从目录镜像或压缩的 tar 文件中读取数据快照
- 内置图表生成机制
- 可作为嵌入式系统中的轻量级采集工具
### 如何安装 Smem - Linux 下的内存使用情况报告工具
安装之前,需要确保满足以下的条件:
- 较新的内核(版本高于 2.6.27 左右)
- 较新的 Python 版本2.4 及以后版本)
- 可选的 [matplotlib][2] 库,用于生成图表
当今大多数 Linux 发行版自带的内核版本和 Python 2/3 版本都能满足要求,因此唯一需要安装的是用于生成美观图表的 matplotlib 库。
#### RHEL, CentOS 和 Fedora
首先启用 [EPEL (Extra Packages for Enterprise Linux)][3] 软件源,然后按照下列步骤操作:
```
# yum install smem python-matplotlib python-tk
```
#### Debian 和 Ubuntu
```
$ sudo apt-get install smem
```
#### Linux Mint
```
$ sudo apt-get install smem python-matplotlib python-tk
```
#### Arch Linux
使用此 [AUR 仓库][4]。
### 如何使用 Smem Linux 下的内存使用情况报告工具
为了查看整个系统所有用户的内存使用情况,运行以下的命令:
```
$ sudo smem
```
监视 Linux 系统中的内存使用情况
```
PID User Command Swap USS PSS RSS
6367 tecmint cat 0 100 145 1784
6368 tecmint cat 0 100 147 1676
2864 tecmint /usr/bin/ck-launch-session 0 144 165 1780
7656 tecmint gnome-pty-helper 0 156 178 1832
5758 tecmint gnome-pty-helper 0 156 179 1916
1441 root /sbin/getty -8 38400 tty2 0 152 184 2052
1434 root /sbin/getty -8 38400 tty5 0 156 187 2060
1444 root /sbin/getty -8 38400 tty3 0 156 187 2060
1432 root /sbin/getty -8 38400 tty4 0 156 188 2124
1452 root /sbin/getty -8 38400 tty6 0 164 196 2064
2619 root /sbin/getty -8 38400 tty1 0 164 196 2136
3544 tecmint sh -c /usr/lib/linuxmint/mi 0 212 224 1540
1504 root acpid -c /etc/acpi/events - 0 220 236 1604
3311 tecmint syndaemon -i 0.5 -K -R 0 252 292 2556
3143 rtkit /usr/lib/rtkit/rtkit-daemon 0 300 326 2548
1588 root cron 0 292 333 2344
1589 avahi avahi-daemon: chroot helpe 0 124 334 1632
1523 root /usr/sbin/irqbalance 0 316 343 2096
585 root upstart-socket-bridge --dae 0 328 351 1820
3033 tecmint /usr/bin/dbus-launch --exit 0 328 360 2160
1346 root upstart-file-bridge --daemo 0 348 371 1776
2607 root /usr/bin/xdm 0 188 378 2368
1635 kernoops /usr/sbin/kerneloops 0 352 386 2684
344 root upstart-udev-bridge --daemo 0 400 427 2132
2960 tecmint /usr/bin/ssh-agent /usr/bin 0 480 485 992
3468 tecmint /bin/dbus-daemon --config-f 0 344 515 3284
1559 avahi avahi-daemon: running [tecm 0 284 517 3108
7289 postfix pickup -l -t unix -u -c 0 288 534 2808
2135 root /usr/lib/postfix/master 0 352 576 2872
2436 postfix qmgr -l -t unix -u 0 360 606 2884
1521 root /lib/systemd/systemd-logind 0 600 650 3276
2222 nobody /usr/sbin/dnsmasq --no-reso 0 604 669 3288
....
```
当普通用户运行 smem 时,将会显示由该用户启动的进程的内存占用情况,进程按照 PSS 值升序排列。
下面的输出为我的系统中用户 “aaronkilik” 启动的进程的内存使用情况:
```
$ smem
```
监视 Linux 中单个用户的内存使用情况
```
PID User Command Swap USS PSS RSS
6367 tecmint cat 0 100 145 1784
6368 tecmint cat 0 100 147 1676
2864 tecmint /usr/bin/ck-launch-session 0 144 166 1780
3544 tecmint sh -c /usr/lib/linuxmint/mi 0 212 224 1540
3311 tecmint syndaemon -i 0.5 -K -R 0 252 292 2556
3033 tecmint /usr/bin/dbus-launch --exit 0 328 360 2160
3468 tecmint /bin/dbus-daemon --config-f 0 344 515 3284
3122 tecmint /usr/lib/gvfs/gvfsd 0 656 801 5552
3471 tecmint /usr/lib/at-spi2-core/at-sp 0 708 864 5992
3396 tecmint /usr/lib/gvfs/gvfs-mtp-volu 0 804 914 6204
3208 tecmint /usr/lib/x86_64-linux-gnu/i 0 892 1012 6188
3380 tecmint /usr/lib/gvfs/gvfs-afc-volu 0 820 1024 6396
3034 tecmint //bin/dbus-daemon --fork -- 0 920 1081 3040
3365 tecmint /usr/lib/gvfs/gvfs-gphoto2- 0 972 1099 6052
3228 tecmint /usr/lib/gvfs/gvfsd-trash - 0 980 1153 6648
3107 tecmint /usr/lib/dconf/dconf-servic 0 1212 1283 5376
6399 tecmint /opt/google/chrome/chrome - 0 144 1409 10732
3478 tecmint /usr/lib/x86_64-linux-gnu/g 0 1724 1820 6320
7365 tecmint /usr/lib/gvfs/gvfsd-http -- 0 1352 1884 8704
6937 tecmint /opt/libreoffice5.0/program 0 1140 2328 5040
3194 tecmint /usr/lib/x86_64-linux-gnu/p 0 1956 2405 14228
6373 tecmint /opt/google/chrome/nacl_hel 0 2324 2541 8908
3313 tecmint /usr/lib/gvfs/gvfs-udisks2- 0 2460 2754 8736
3464 tecmint /usr/lib/at-spi2-core/at-sp 0 2684 2823 7920
5771 tecmint ssh -p 4521 tecmnt765@212.7 0 2544 2864 6540
5759 tecmint /bin/bash 0 2416 2923 5640
3541 tecmint /usr/bin/python /usr/bin/mi 0 2584 3008 7248
7657 tecmint bash 0 2516 3055 6028
3127 tecmint /usr/lib/gvfs/gvfsd-fuse /r 0 3024 3126 8032
3205 tecmint mate-screensaver 0 2520 3331 18072
3171 tecmint /usr/lib/mate-panel/notific 0 2860 3495 17140
3030 tecmint x-session-manager 0 4400 4879 17500
3197 tecmint mate-volume-control-applet 0 3860 5226 23736
...
```
使用 smem 时还有一些参数可以选用,例如要查看整个系统的内存占用情况,运行以下命令:
```
$ sudo smem -w
```
监视整个系统的内存使用情况
```
Area Used Cache Noncache
firmware/hardware 0 0 0
kernel image 0 0 0
kernel dynamic memory 1425320 1291412 133908
userspace memory 2215368 451608 1763760
free memory 4424936 4424936 0
```
如果想要查看每一个用户的内存使用情况,运行以下的命令:
```
$ sudo smem -u
```
Linux 下以用户为单位监控内存占用情况
```
User Count Swap USS PSS RSS
rtkit 1 0 300 326 2548
kernoops 1 0 352 385 2684
avahi 2 0 408 851 4740
postfix 2 0 648 1140 5692
messagebus 1 0 1012 1173 3320
syslog 1 0 1396 1419 3232
www-data 2 0 5100 6572 13580
mpd 1 0 7416 8302 12896
nobody 2 0 4024 11305 24728
root 39 0 323876 353418 496520
tecmint 64 0 1652888 1815699 2763112
```
你也可以按照映射显示内存使用情况:
```
$ sudo smem -m
```
Linux 下以映射为单位监控内存占用情况
```
Map PIDs AVGPSS PSS
/dev/fb0 1 0 0
/home/tecmint/.cache/fontconfig/7ef2298f 18 0 0
/home/tecmint/.cache/fontconfig/c57959a1 18 0 0
/home/tecmint/.local/share/mime/mime.cac 15 0 0
/opt/google/chrome/chrome_material_100_p 9 0 0
/opt/google/chrome/chrome_material_200_p 9 0 0
/usr/lib/x86_64-linux-gnu/gconv/gconv-mo 41 0 0
/usr/share/icons/Mint-X-Teal/icon-theme. 15 0 0
/var/cache/fontconfig/0c9eb80ebd1c36541e 20 0 0
/var/cache/fontconfig/0d8c3b2ac0904cb8a5 20 0 0
/var/cache/fontconfig/1ac9eb803944fde146 20 0 0
/var/cache/fontconfig/3830d5c3ddfd5cd38a 20 0 0
/var/cache/fontconfig/385c0604a188198f04 20 0 0
/var/cache/fontconfig/4794a0821666d79190 20 0 0
/var/cache/fontconfig/56cf4f4769d0f4abc8 20 0 0
/var/cache/fontconfig/767a8244fc0220cfb5 20 0 0
/var/cache/fontconfig/8801497958630a81b7 20 0 0
/var/cache/fontconfig/99e8ed0e538f840c56 20 0 0
/var/cache/fontconfig/b9d506c9ac06c20b43 20 0 0
/var/cache/fontconfig/c05880de57d1f5e948 20 0 0
/var/cache/fontconfig/dc05db6664285cc2f1 20 0 0
/var/cache/fontconfig/e13b20fdb08344e0e6 20 0 0
/var/cache/fontconfig/e7071f4a29fa870f43 20 0 0
....
```
smem 还提供了筛选输出的选项,下面来看两个例子。
要按照用户名筛选输出信息,调用 -u 或者 --userfilter="regex" 选项,就像下面的命令这样:
```
$ sudo smem -u
```
按照用户报告内存使用情况
```
User Count Swap USS PSS RSS
rtkit 1 0 300 326 2548
kernoops 1 0 352 385 2684
avahi 2 0 408 851 4740
postfix 2 0 648 1140 5692
messagebus 1 0 1012 1173 3320
syslog 1 0 1400 1423 3236
www-data 2 0 5100 6572 13580
mpd 1 0 7416 8302 12896
nobody 2 0 4024 11305 24728
root 39 0 323804 353374 496552
tecmint 64 0 1708900 1871766 2819212
```
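如果想在自己的脚本里做类似的按用户名筛选,也可以用 Python 的 re 模块对 smem 的输出逐行过滤。下面是一个示意(输入是上面输出的节选,硬编码在代码里;正则 `^te` 只是举例):

```python
import re

# 摘自上面 smem 输出的几行,每行第一列是用户名
sample_output = """\
rtkit        1        0      300      326     2548
avahi        2        0      408      851     4740
tecmint     64        0  1708900  1871766  2819212
root        39        0   323804   353374   496552
"""

pattern = re.compile(r"^te")   # 与 --userfilter="^te" 思路相同
for line in sample_output.splitlines():
    user = line.split()[0]
    if pattern.search(user):
        print(line)
```

运行后只会打印 tecmint 那一行,与 smem 的用户名过滤效果一致。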
要按照进程名称筛选输出信息,调用 -P 或者 --processfilter="regex" 选项,就像下面的命令这样:
```
$ sudo smem --processfilter="firefox"
```
按照进程名称报告内存使用情况
```
PID User Command Swap USS PSS RSS
9212 root sudo smem --processfilter=f 0 1172 1434 4856
9213 root /usr/bin/python /usr/bin/sm 0 7368 7793 11984
4424 tecmint /usr/lib/firefox/firefox 0 931732 937590 961504
```
输出的格式有时候也很重要smem 提供了一些参数来帮助你格式化内存使用报告,下面举几个例子。
要设置报告中显示哪些列,使用 -c 或者 --columns 选项,就像下面的命令这样:
```
$ sudo smem -c "name user pss rss"
```
按列报告内存使用情况
```
Name User PSS RSS
cat tecmint 145 1784
cat tecmint 147 1676
ck-launch-sessi tecmint 165 1780
gnome-pty-helpe tecmint 178 1832
gnome-pty-helpe tecmint 179 1916
getty root 184 2052
getty root 187 2060
getty root 187 2060
getty root 188 2124
getty root 196 2064
getty root 196 2136
sh tecmint 224 1540
acpid root 236 1604
syndaemon tecmint 296 2560
rtkit-daemon rtkit 326 2548
cron root 333 2344
avahi-daemon avahi 334 1632
irqbalance root 343 2096
upstart-socket- root 351 1820
dbus-launch tecmint 360 2160
upstart-file-br root 371 1776
xdm root 378 2368
kerneloops kernoops 386 2684
upstart-udev-br root 427 2132
ssh-agent tecmint 485 992
...
```
也可以调用 -p 选项以百分比的形式报告内存使用情况,就像下面的命令这样:
```
$ sudo smem -p
```
按百分比报告内存使用情况
```
PID User Command Swap USS PSS RSS
6367 tecmint cat 0.00% 0.00% 0.00% 0.02%
6368 tecmint cat 0.00% 0.00% 0.00% 0.02%
9307 tecmint sh -c { sudo /usr/lib/linux 0.00% 0.00% 0.00% 0.02%
2864 tecmint /usr/bin/ck-launch-session 0.00% 0.00% 0.00% 0.02%
3544 tecmint sh -c /usr/lib/linuxmint/mi 0.00% 0.00% 0.00% 0.02%
5758 tecmint gnome-pty-helper 0.00% 0.00% 0.00% 0.02%
7656 tecmint gnome-pty-helper 0.00% 0.00% 0.00% 0.02%
1441 root /sbin/getty -8 38400 tty2 0.00% 0.00% 0.00% 0.03%
1434 root /sbin/getty -8 38400 tty5 0.00% 0.00% 0.00% 0.03%
1444 root /sbin/getty -8 38400 tty3 0.00% 0.00% 0.00% 0.03%
1432 root /sbin/getty -8 38400 tty4 0.00% 0.00% 0.00% 0.03%
1452 root /sbin/getty -8 38400 tty6 0.00% 0.00% 0.00% 0.03%
2619 root /sbin/getty -8 38400 tty1 0.00% 0.00% 0.00% 0.03%
1504 root acpid -c /etc/acpi/events - 0.00% 0.00% 0.00% 0.02%
3311 tecmint syndaemon -i 0.5 -K -R 0.00% 0.00% 0.00% 0.03%
3143 rtkit /usr/lib/rtkit/rtkit-daemon 0.00% 0.00% 0.00% 0.03%
1588 root cron 0.00% 0.00% 0.00% 0.03%
1589 avahi avahi-daemon: chroot helpe 0.00% 0.00% 0.00% 0.02%
1523 root /usr/sbin/irqbalance 0.00% 0.00% 0.00% 0.03%
585 root upstart-socket-bridge --dae 0.00% 0.00% 0.00% 0.02%
3033 tecmint /usr/bin/dbus-launch --exit 0.00% 0.00% 0.00% 0.03%
....
```
下面的命令将会在输出的末尾加上一行汇总信息:
```
$ sudo smem -t
```
报告内存占用合计
```
PID User Command Swap USS PSS RSS
6367 tecmint cat 0 100 139 1784
6368 tecmint cat 0 100 141 1676
9307 tecmint sh -c { sudo /usr/lib/linux 0 96 158 1508
2864 tecmint /usr/bin/ck-launch-session 0 144 163 1780
3544 tecmint sh -c /usr/lib/linuxmint/mi 0 108 170 1540
5758 tecmint gnome-pty-helper 0 156 176 1916
7656 tecmint gnome-pty-helper 0 156 176 1832
1441 root /sbin/getty -8 38400 tty2 0 152 181 2052
1434 root /sbin/getty -8 38400 tty5 0 156 184 2060
1444 root /sbin/getty -8 38400 tty3 0 156 184 2060
1432 root /sbin/getty -8 38400 tty4 0 156 185 2124
1452 root /sbin/getty -8 38400 tty6 0 164 193 2064
2619 root /sbin/getty -8 38400 tty1 0 164 193 2136
1504 root acpid -c /etc/acpi/events - 0 220 232 1604
3311 tecmint syndaemon -i 0.5 -K -R 0 260 298 2564
3143 rtkit /usr/lib/rtkit/rtkit-daemon 0 300 324 2548
1588 root cron 0 292 326 2344
1589 avahi avahi-daemon: chroot helpe 0 124 332 1632
1523 root /usr/sbin/irqbalance 0 316 340 2096
585 root upstart-socket-bridge --dae 0 328 349 1820
3033 tecmint /usr/bin/dbus-launch --exit 0 328 359 2160
1346 root upstart-file-bridge --daemo 0 348 370 1776
2607 root /usr/bin/xdm 0 188 375 2368
1635 kernoops /usr/sbin/kerneloops 0 352 384 2684
344 root upstart-udev-bridge --daemo 0 400 426 2132
.....
-------------------------------------------------------------------------------
134 11 0 2171428 2376266 3587972
```
另外smem 还提供了以图形形式报告内存使用情况的选项,我们将在下一小节深入介绍。
你可以生成进程及其 PSS 和 RSS 值的条状图。在下面的例子中,我们将生成属于 root 用户的进程的内存占用条状图。
纵坐标为每一个进程的 PSS 和 RSS 值,横坐标为 root 用户的各个进程:
```
$ sudo smem --userfilter="root" --bar pid -c"pss rss"
![](http://www.tecmint.com/wp-content/uploads/2016/06/Linux-Memory-Usage-in-PSS-and-RSS-Values.png)
>以 PSS 和 RSS 值显示的 Linux 内存使用情况
也可以生成基于 PSS 或 RSS 值的进程内存占用饼状图。下面的命令将会输出一张 root 用户所有进程的饼状图。
`--pie name` 意思是以各个进程的名字为标签,`-s` 选项表示按 PSS 的值排序。
```
$ sudo smem --userfilter="root" --pie name -s pss
![](http://www.tecmint.com/wp-content/uploads/2016/06/Linux-Memory-Consumption-by-Processes.png)
>Linux 进程内存占用
除了 PSS 和 RSS还有许多其它已知字段可以用作图表的标签
如果需要获得帮助,非常简单,只需要输入 `smem -h` 或者浏览其手册页。
关于 smem 的介绍到此为止。想要更好地了解它,可以通过 man 手册获得更多的选项,然后一一实践。一如既往,有什么想法或者疑惑,都可以在下面的评论区跟帖交流。
参考链接: <https://www.selenic.com/smem/>
--------------------------------------------------------------------------------
via: http://www.tecmint.com/smem-linux-memory-usage-per-process-per-user/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[Aaron Kili][a]
译者:[dongfengweixiao](https://github.com/dongfengweixiao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[2]: http://matplotlib.org/index.html
[3]: http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[4]: https://www.archlinux.org/packages/community/i686/smem/
ReactOS 新手指南
====================================
ReactOS 是一个比较年轻的开源操作系统,它提供了一个和 Windows NT 类似的图形界面,并且它的目标也是提供一个与 NT 功能和应用程序兼容性差不多的系统。这个项目在没有使用任何 Unix 的情况下实现了一个类似 Wine 的用户模式。它的开发者们从头实现了 NT 的架构以及对于 FAT32 的兼容,因此它也不需要负任何法律责任。这也就是说,它不是又双叒叕一个 Linux 发行版,而是一个独特的类 Windows 系统,并且是开源世界的一部分。这份快速指南是给那些想要一个易于使用的 Windows 的开源替代品的人准备的。
### 安装系统
在开始安装这个系统之前我需要说明一下ReactOS 的最低硬件要求是 500MB 硬盘以及仅仅 96MB 内存。我会在一个 32 位的虚拟机里面演示安装过程。
现在,你需要使用箭头键来选择你想要语言,而后通过回车键来确认。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_1.png)
之后再次敲击回车键来继续安装。你也可以选择按“R”键来修复现有的系统。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_2.png)
在第三屏中,你将看到一个警告说这个系统还是早期开发版本。再次敲击回车键,你将看到一个需要你最后确认的配置概览。如果你认为没问题,就按回车。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_3.png)
然后我们就到了分区这一步在这里你可以使用“D”键删除高亮分区分别使用“P”键、“E”键以及“L”键来添加一个主分区、拓展分区或逻辑分区。如果你想要自己添加一个分区你需要输入这个分区的大小以 MB 为单位),然后通过回车来确认。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_4.png)
但是,如果你有未使用的硬盘空间,在分区过程直接敲击回车键可以自动在你选中的分区上安装 ReactOS。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_5.png)
下一步是选择分区的格式,不过现在我们只能选择 FAT32。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_6.png)
再下一步是选择安装文件夹。我就使用默认的“/ReactOS”了应该没有问题。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_7.png)
然后就是等待...
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_8.png)
最后,我们要选择启动程序的安装位置。如果你是在实机上操作的话,第一个选项应该是最安全的。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_9.png)
总地来说,我认为 ReactOS 的安装向导很直接。尽管安装程序的界面可能看起来一点也不现代、不友好但是大多数情况下作为用户的我们只需要狂敲回车就能安个差不多。这就是说ReactOS 的开发版安装起来也是相对简单方便的。
### 设置 ReactOS
在我们重启进入新系统之后,“设置向导”会帮助你设置系统。目前,这个向导仅支持设置语言和键盘格式。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_10.png)
我在这里选择了第二个键盘格式。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_11.png)
我还可以设置一个改变键盘布局的快捷键。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_12.png)
之后我添加了用户名…
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_13.png)
…以及管理员密码…
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_14.png)
在设置好时间之后,我们就算完成了系统设置。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_15.png)
### ReactOS 之内
当我们历经千辛万苦,终于首次进入 ReactOS 的界面时,系统会检测硬件并自动帮助我们安装驱动。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_16.png)
这是我这里被自动检测出来的三个硬件:
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_17.png)
在上一张图片里你看到的是 ReactOS 的“应用管理器”,这东西是 Linux 的标配。不过你不会在这里找到任何与 Linux 有关系的东西。只有在这个系统里工作良好的开源软件才会在这个管理器中出现。这就导致了管理器中有的分类下挤得满满当当,有的却冷清异常。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_18.png)
我试着通过软件中心安装了 Firefox 以及通过直接下载 exe 文件双击安装 Notepad++。这两个应用都能完美运行它们的图标出现在了桌面上在菜单中也出现了它们的名字Notepad++ 也出现在了软件中心右侧的分类栏里。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_19.png)
我没有尝试运行任何现代的 Windows 游戏,如果你想配置 Direct 3D 的话,你可以转到 “我的电脑/控制选项/WineD3D 配置”。在那里,你能看到很多 Direct3D 选项,大致与 dx 8 的选项类似。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_20.png)
ReactOS 还有一个好处,就是我们可以通过“我的电脑”来操作注册表。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_21.png)
如果你需要一个简单点的工具,你可以在应用菜单里打开注册表编辑器。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_22.png)
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_23.png)
最后,如果你认为 ReactOS 看起来有点过时了的话,你可以在桌面右击选择“属性”,之后在“外观”那里选择你喜欢的主题和颜色。
![](https://www.howtoforge.com/images/getting-started-with-eeactos/pic_24.png)
### 结论
老实说,我对 ReactOS 的工作方式印象深刻。它相当稳定、连贯、快速,并且真正人性化。抛开 Windows 的阴影过时的应用菜单不合理的菜单结构不谈的话ReactOS 几乎做到了尽善尽美。它可能不会有太多应用可供选择现有的功能也可能不够强大但是我确信它将会繁荣壮大。关于它的数据显示出了它的人气我确定将要围绕它建立起来的社区将会很快就壮大到能把这个项目带往成功之路的地步。如今ReactOS 的最新版本是 0.4.1。如果想要以开源的方式运行 Windows 的应用,那么它就是你的菜!
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/getting-started-with-reactos/
作者:[Bill Toulas][a]
译者:[name1e5s](https://github.com/name1e5s)
校对:[PurlingNayuki](https://github.com/PurlingNayuki)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.howtoforge.com/tutorial/getting-started-with-reactos/
Linux 新手必知必会的 10 条 Linux 基本命令
=====================================================================
![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/4225072_orig.png)
[Linux][1] 对我们的生活产生了巨大的冲击。至少你的安卓手机使用的就是 Linux 核心。尽管如此,在第一次开始使用 Linux 时,你还是会感到难以下手。因为在 Linux 中,通常需要使用终端命令来取代 Windows 系统中的点击启动图标操作。但是不必担心,这里我们会介绍 10 个 Linux 基本命令来帮助你开启 Linux 神秘之旅。
### 帮助新手走出第一步的 10 个 Linux 基本命令
当我们谈论 Linux 命令时,实质上是在谈论 Linux 系统本身。这短短的 10 个 Linux 基本命令不会让你变成天才或者 Linux 专家,但是能帮助你轻松开始 Linux 之旅。使用这些基本命令会帮助新手们完成 Linux 的日常任务,由于它们的使用频率如此之高,所以我更乐意称它们为 Linux 命令之王!
让我们开始学习这 10 条 Linux 基本命令吧。
#### 1. sudo
这条命令的意思是“以超级用户的身份执行”,是 SuperUserDo 的简写,它是新手将要用到的最重要的一条 Linux 命令。当一条命令需要 root 权限的时候,`sudo` 命令就派上用场了。你可以在每一条需要 root 权限的命令前都加上 `sudo`。
```
$ sudo su
```
#### 2. ls (list)
跟其他人一样,你肯定也经常想看看目录下都有些什么东西。使用列表命令,终端会把当前工作目录下所有的文件以及文件夹展示给你。比如说,我当前处在 /home 文件夹中,想看看 /home 文件夹中都有哪些文件和目录。
```
/home$ ls
```
在 /home 中执行 `ls` 命令将会返回以下内容:
```
imad lost+found
```
#### 3. cd
变更目录命令cd是终端中总会被用到的主要命令它是最常用的 Linux 基本命令之一。此命令的使用非常简单:当你打算从当前目录跳转至某个文件夹时,只需要将文件夹名键入此命令之后即可。如果你想跳转至上层目录,只需要在此命令之后键入两个点 (..) 就可以了。
举个例子,我现在处在 /home 目录中,想移动到 /home 目录中的 usr 文件夹下,可以通过以下命令来完成操作。
```
/home $ cd usr
/home/usr $
```
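返回上层目录则是在 cd 后面加两个点。下面的小示例用临时创建的目录来演示这一点(目录名 demo 只是演示用的假设):

```shell
mkdir -p demo/usr    # 创建演示用的目录结构
cd demo/usr          # 进入子目录
cd ..                # 返回上层目录
basename "$(pwd)"    # 打印当前目录名,应为 demo
```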
#### 4. mkdir
只是可以切换目录还不够完美。有时候你会想要新建一个文件夹或子文件夹,此时可以使用 mkdir 命令来完成操作。使用方法很简单,只需要把新的文件夹名跟在 mkdir 命令之后就好了。
```
~$ mkdir folderName
```
#### 5. cp
拷贝-粘贴copy-and-paste是我们组织文件需要用到的重要命令。使用 `cp` 命令可以帮助你在终端当中完成拷贝-粘贴操作。首先确定你想要拷贝的文件,然后键入打算粘贴此文件的目标位置。
```
$ cp src des
```
注意如果目标目录对新建文件需要root权限时你可以使用`sudo`命令来完成文件拷贝操作。
#### 6. rm
rm 命令可以帮助你移除文件甚至目录。如果文件需要 root 权限才能移除,可以用 `-f` 参数来强制执行。也可以使用 `-r` 参数来递归地移除文件夹。
```
$ rm myfile.txt
```
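下面用一个临时创建的目录演示 `-r` 参数的递归删除(目录名仅为示例;删除操作不可恢复,请谨慎使用):

```shell
mkdir -p myfolder/sub            # 创建带子目录和文件的演示目录
touch myfolder/sub/file.txt
rm -r myfolder                   # 递归删除整个目录
ls myfolder 2>/dev/null || echo "myfolder 已删除"
```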
#### 7. apt-get
这个命令会依据发行版的不同而有所区别。在基于 Debian 的发行版中我们拥有高级包管理工具APTAdvanced Packaging Tool来安装、移除和升级软件包。apt-get 命令会帮助你安装需要在 Linux 系统中运行的软件。它是一个功能强大的命令行工具,可以用来对软件执行安装、升级和移除操作。
在其他发行版中,例如 Fedora、CentOS都有各自不同的包管理工具。Fedora 之前使用的是 yum不过现在 dnf 成了它默认的包管理工具。
```
$ sudo apt-get update
$ sudo dnf update
```
#### 8. grep
当你需要查找一个文件,但是又忘记了它具体的位置和路径时,`grep`命令会帮助你解决这个难题。你可以提供文件的关键字,使用`grep`命令来查找到它。
```
$ grep user /etc/passwd
```
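实际上 grep 搜索的是文件的内容;如果要在多个文件中找出包含某个关键字的文件,可以用 `-l` 选项只列出匹配的文件名(下面的目录和文件均为临时创建的演示数据):

```shell
mkdir -p demo
echo "hello linux" > demo/a.txt
echo "hello world" > demo/b.txt
grep -l "linux" demo/*.txt   # 只列出内容包含 linux 的文件名
```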
#### 9. cat
作为一个用户,你应该会经常需要浏览脚本内的文本或者代码。`cat` 命令是 Linux 系统的基本命令之一,它的用途就是将文件的内容展示给你。
```
$ cat CMakeLists.txt
```
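cat 还可以配合 `-n` 选项在输出时加上行号(示例文件为临时创建的演示数据):

```shell
printf 'first\nsecond\n' > demo.txt
cat -n demo.txt   # -n 为每一行加上行号
```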
#### 10. poweroff
最后一个命令是 `poweroff`。有时你需要直接在终端中执行关机操作,此命令可以完成这个任务。由于关机操作需要 root 权限,所以别忘了在此命令之前添加 `sudo`。
```
$ sudo poweroff
```
### 总结
如我在文章开始所言,这 10 条命令并不会让你立即成为一个 Linux 大拿,但它们会让你在初期快速上手 Linux。以这些命令为基础给自己设置一个目标每天学习一到三条命令这就是此文的目的所在。欢迎在下方评论区分享有趣并且有用的命令也别忘了跟你的朋友分享此文。
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember
作者:[Commenti][a]
译者:[mr-ping](https://github.com/mr-ping)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember#comments
[1]: http://linuxandubuntu.com/home/category/linux
惊艳!6 款面向儿童的 Linux 发行版
======================================
毫无疑问,未来是属于 Linux 和开源的。为了实现这样的未来、使 Linux 占据一席之地,人们已经着手开发面向儿童的 Linux 发行版,尝试从尽可能小的年纪开始教授他们如何使用 Linux 操作系统。
![](http://www.tecmint.com/wp-content/uploads/2016/05/Linux-Distros-For-Kids.png)
>面向儿童的 Linux 发行版
Linux 是一款非常强大的操作系统,原因之一便是它驱动了互联网上绝大多数的服务器。但出于对其用户友好性的担忧,坊间时常展开有关于 Linux 应如何取代 Mac OS X 或 Windows 的辩论。而我认为用户应该接受 Linux 来见识它真正的威力。
如今Linux 运行在绝大多数设备上,从智能手机到平板电脑,笔记本电脑,工作站,服务器,超级计算机,再到汽车,航空管制系统,甚至电冰箱,都有 Linux 的身影。正如我在开篇所说,有了这一切, Linux 是未来的操作系统。
>参考阅读: [30 Big Companies and Devices Running on Linux][1]
未来是属于孩子们的,教育要从娃娃抓起。所以,要让小孩子尽早地学习计算机、了解 Linux 、接触科学技术。这是改变未来图景的最好方法。
一个常见的现象是,当儿童在一个适合他的环境中学习时,好奇心和早期学习的能力会使他自己养成喜好探索的性格。
说了这么多儿童应该学习 Linux 的原因,接下来我就列出这些令人激动的发行版。你可以把它们推荐给小孩子来帮助他们开始学习使用 Linux 。
### Sugar on a Stick
Sugar on a Stick (译注:“糖在棒上”)是 Sugar Labs 旗下的工程Sugar Labs 是一个由志愿者领导的非盈利组织。这一发行版旨在设计大量的免费工具来使儿童在探索、发现、创造中认知自己的思想。
![](http://www.tecmint.com/wp-content/uploads/2016/05/Sugar-Neighborhood-View.png)
>Sugar Neighborhood 界面
![](http://www.tecmint.com/wp-content/uploads/2016/05/Sugar-Activity-Library.png)
>Sugar 应用程序
你既可以将 Sugar 看作是普通的桌面环境,也可以把它当做是帮助鼓励孩子学习、提高参与活动的积极性的一款应用合集。
访问主页: <https://www.sugarlabs.org/>
### Edubuntu
Edubuntu 是基于当下最流行的发行版 Ubuntu 而开发的一款草根发行版。主要致力于降低学校、家庭和社区安装、使用 Ubuntu 自由软件的难度。
![](http://www.tecmint.com/wp-content/uploads/2016/05/Edubuntu-Apps.jpg)
>Edubuntu 桌面应用
它的桌面应用由来自不同组织的学生、教师、家长、一些利益相关者甚至黑客来提供。他们都笃信社区的发展和知识的共享是自由学习和自由分享的基石。
该项目的主要目标是组建一款安装、管理软件难度低的操作系统以增长使用 Linux 学习和教育的用户数量。
访问主页: <http://www.edubuntu.org/>
### Doudou Linux
Doudou Linux 是专为方便儿童使用而设计的发行版,能在构建中激发儿童的创造性思维。它提供了简单但是颇具教育意义的应用来使儿童在应用过程中学习发现新的知识。
![](http://www.tecmint.com/wp-content/uploads/2016/05/Doudou-Linux.png)
>Doudou Linux
其最引人注目的一点便是内容过滤功能顾名思义它能够阻止孩童访问网络上的禁止内容。如果想要更进一步的儿童保护功能Doudou Linux 还提供了互联网用户隐私功能,能够去除网页中的特定加载内容。
访问主页: <http://www.doudoulinux.org/>
### LinuxKidX
这是一款整合了许多专为儿童设计的教育类软件的 Slackware Linux LiveCD。它使用 KDE 作为默认桌面环境,并配置了诸如 Ktouch打字指导、Kstars虚拟天文台、Kalzium元素周期表和 KwordQuiz单词测试等应用。
![](http://www.tecmint.com/wp-content/uploads/2016/05/LinuxKidX.jpg)
>LinuxKidX
访问主页: <http://linuxkidx.blogspot.in/>
### Ubermix
Ubermix 基于 Ubuntu 构建同样以教学为目的。默认配备了超过60款应用帮助学生更好地学习同时给教师教学提供便利。
![](http://www.tecmint.com/wp-content/uploads/2016/05/ubermix.png)
>Ubermix Linux
Ubermix 还具有5分钟快速安装和快速恢复等功能可以给小孩子更好的帮助。
访问主页: <http://www.ubermix.org/>
### Qimo
因为很多读者曾向我询问过 Qimo 发行版的情况,所以我把它写进这篇文章。但是截至发稿时Qimo 儿童版的开发已经终止,不再提供更新。
![](http://www.tecmint.com/wp-content/uploads/2016/05/Qimo-Linux.png)
>Qimo Linux
你仍然可以在 Ubuntu 或者其他的 Linux 发行版中找到大多数儿童游戏。正如这些开发商所说,他们不仅在为儿童制作教育软件,同时也在开发增长儿童文化水平的安卓应用。
如果你想进一步了解,可以移步他们的官方网站。
访问主页: <http://www.qimo4kids.com/>
以上这些便是我所知道的面向儿童的 Linux 发行版,或有缺漏,欢迎评论补充。
如果你想探讨桌面 Linux 的发展前景或是如何引导儿童接触 Linux ,欢迎与我联系。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/best-linux-distributions-for-kids/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[Aaron Kili][a]
译者:[HaohongWANG](https://github.com/HaohongWANG)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/big-companies-and-devices-running-on-gnulinux/
PowerPC 获得大端 Android 4.4 系统的移植
===========================================================
eInfochips一家软件厂商已将 Android 4.4 系统移植到 PowerPC 架构上,一家航空电子客户将把它用作监视引擎健康状况的人机界面HMIHuman Machine Interface。
eInfochips 已经开发了第一个面向 PowerPC 架构 CPU 的 Android 移植版本,它采用了较新的大端 Android 系统。此移植基于 Android 开源项目AOSP中 Android 4.4KitKat的代码可运行的内核版本号为 3.12.19。
Android 开始兴起的时候PowerPC 正在快速失去被 ARM 架构蚕食的市场。高端的网络设备和电信导向的嵌入式设备大多运行在诸如飞思卡尔Freescale的 PowerQUICC 和 QorIQ 上,默认采用 Linux 系统。一些 Android 的移植计划最终失败了,不过在 2009 年,飞思卡尔和 Embedded Alley一家软件厂商当前是 Mentor Graphics 的 Linux 团队的一部分)[宣布了针对 PowerQUICC 和 QorIQ 芯片的移植版本][15],这些芯片目前由 NXP 公司生产。另一个名为 [Android-PowerPC][16] 的项目也作出了相似的工作。
然而,这些努力似乎都没有延续下来。当一家航空公司找到 eInfochips希望为其基于 PowerPC 的引擎监控系统添加 Android 应用程序以改善人机界面时,该公司调研了这些早期的移植版本,发现它们都难以达到要求,所以不得不从头开始新的移植。
最主要的问题是这些移植的 Android 版本实在是太老了而且彼此差异很大。Embedded Alley 移植的版本为 Android 1.5Cupcake它于 2009 年发布Linux 内核版本为 2.6.28。Android-PowerPC 项目最后的移植版本为 Android 2.2Froyo它于 2010 年发布,内核版本为 2.6.32。此外航空公司还有一些额外的技术诉求例如对大端字节序的支持这种存储器访问方式如今仍应用于网络通信和电信行业。然而那些早期的移植版本仅能够支持小端的存储器访问。
### 来自 eInfochips 的全新 PowerPC 架构移植
eInfochips 最为出名的是其基于 ARM/骁龙处理器的模块化计算机板卡,例如 [Eragon 600][17]。该公司完成了基于 QorIQ 的 Android 4.4 系统移植,并发布了一份描述此项目的白皮书。采用该项目的航空电子设备客户仍旧不愿透露姓名,目前也不清楚什么时候会公开此移植版本。
![](http://hackerboards.com/files/einfochips_porting_android_on_powerpc-sm.jpg)
>图片来自 eInfochips 的博客日志
全新的 PowerPC Android 项目包括:
- 针对 PowerPC [e5500][1] 深度定制的 bionic 库
- 基于 Android KitKat 的大端序支持
- 开发工具链为 Gcc 5.2
- Android 4.4 框架的 PowerPC 支持
- PowerPC e5500 的 Android 内核版本为 3.12.19
根据 eInfochips 的销售经理 Sooryanarayanan Balasubramanian 描述,航空电子客户想要使用 Android主要是因为熟悉的界面能够缩减培训的时间并且让程序更新和提供新的程序变得更加容易。他继续解释说“这次成功地移植了 Android使得今后的工作仅仅需要在应用层作出修改而不再像以前一样需要在所有层之间作相互的校验。”“这是第一次在航空航天工业作出这些尝试这需要在设计时作出尽职的调查。”
通过白皮书可以知道,将 Android 移植到 PowerPC 上需要对框架、核心库、开发工具链、运行时链接器、对象链接器和开源编译工具作出大量的修改。在字节码生成阶段移植团队决定使用便携portable模式而不是更快的解释模式。这是因为还没有可用于 PowerPC 的快速解释模式,而使用 [libffi][18] 的便携模式能够支持 PowerPC。
同时团队还面临着在 Android 运行时ART环境和 Dalvik 虚拟机DVM环境之间的选择。他们发现ART 环境下的便携模式还未经测试且缺乏良好的文档支持,所以最终选择了 DVM 环境下的便携模式。
白皮书中还提及了其它一些在移植过程中遇到的困难,包括重新开发工具链、重写脚本以解决 AOSP “非标准”使用编译器标志的问题。最终,该移植提供了 37 个服务实现了无界面headless的 Android 部署,并在用户空间提供了一个模拟 UI。
### 目标硬件
感谢来自 [eInfochips 博客日志][2] 的图片(如下图所示),我们能够确认此 PowerPC Android 移植项目的硬件平台。这个板卡为 [X-ES Xpedite 6101][3],它是一款加固型 XMC/PrPMC 夹层模组。
![](http://hackerboards.com/files/xes_xpedite6101-sm.jpg)
>X-ES Xpedite 6101 照片和框图
X-ES Xpedite 6101 板卡可选搭载 NXP 公司基于 QorIQ T 系列通信处理器的 T2081、T1042 和 T1022它们分别拥有 8 个、4 个和 2 个 e6500 核心T2081 的处理器主频为 1.8GHzT1042/22 的为 1.4GHz。所有的核心都集成了 AltiVec SIMD 引擎,这也就意味着它能够提供 DSP 级别的浮点运算性能。以上 3 款 X-ES 板卡都能够支持最高 8GB 的 DDR3-1600 ECC SDRAM 内存,外加 512MB NOR 和 32GB 的 NAND 闪存。
![](http://hackerboards.com/files/nxp_qoriq_t2081_block-sm.jpg)
>NXP T2081 框图
板卡的 I/O 包括一个 x4 PCI Express Gen2 通道、两个千兆以太网接口、RS232/422/485 串口和 SATA 3.0 接口。除了可选的 3 款 QorIQ 处理器外Xpedite 6101 还提供了三种 [X-ES 加固等级][19],额定工作温度分别为 0 ~ 55°C、-40 ~ 70°C 和 -40 ~ 85°C并有 3 类冲击和抗振等级。
此外,我们之前介绍过的基于 QorIQ 的 X-ES XMC/PrPMC 板卡还有 [XPedite6401 和 XPedite6370][20],它们支持板级支持包形式的 Linux、Wind River VxWorks一种实时操作系统和 Green Hills Integrity也是一种实时操作系统
### 更多信息
eInfochips 的 Android PowerPC 移植白皮书可以[在此][4]下载(需要先免费注册)。
### 相关文章
- [Commercial embedded Linux distro boosts virtualization][5]
- [Freescale unveils first ARM-based QorIQ SoCs][6]
- [High-end boards run Linux on 64-bit ARM QorIQ SoCs][7]
- [Free, Open Enea Linux taps Yocto Project and Linaro code][8]
- [LynuxWorks reverts to its LynxOS roots, changes name][9]
- [First quad- and octa-core QorIQ SoCs unveiled][10]
- [Free white paper shows how Linux won embedded][11]
- [Quad-core Snapdragon COM offers three dev kit options][12]
- [Tiny COM runs Linux on quad-core 64-bit Snapdragon 410][13]
- [PowerPC based IoT gateway COM ships with Linux BSP][14]
--------------------------------------------------------------------------------
via: http://hackerboards.com/powerpc-gains-android-4-4-port-with-big-endian-support/
作者:[Eric Brown][a]
译者:[dongfengweixiao](https://github.com/dongfengweixiao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://hackerboards.com/powerpc-gains-android-4-4-port-with-big-endian-support/
[1]: http://linuxdevices.linuxgizmos.com/low-cost-powerquicc-chips-offer-flexible-interconnect-options/
[2]: https://www.einfochips.com/blog/k2-categories/aerospace/presenting-a-case-for-porting-android-on-powerpc-architecture.html
[3]: http://www.xes-inc.com/products/processor-mezzanines/xpedite6101/
[4]: http://biz.einfochips.com/portingandroidonpowerpc
[5]: http://hackerboards.com/commercial-embedded-linux-distro-boosts-virtualization/
[6]: http://hackerboards.com/freescale-unveils-first-arm-based-qoriq-socs/
[7]: http://hackerboards.com/high-end-boards-run-linux-on-64-bit-arm-qoriq-socs/
[8]: http://hackerboards.com/free-open-enea-linux-taps-yocto-and-linaro-code/
[9]: http://hackerboards.com/lynuxworks-reverts-to-its-lynxos-roots-changes-name/
[10]: http://hackerboards.com/first-quad-and-octa-core-qoriq-socs-unveiled/
[11]: http://hackerboards.com/free-white-paper-shows-how-linux-won-embedded/
[12]: http://hackerboards.com/quad-core-snapdragon-com-offers-three-dev-kit-options/
[13]: http://hackerboards.com/tiny-com-runs-linux-and-android-on-quad-core-64-bit-snapdragon-410/
[14]: http://hackerboards.com/powerpc-based-iot-gateway-com-ships-with-linux-bsp/
[15]: http://linuxdevices.linuxgizmos.com/android-ported-to-powerpc/
[16]: http://www.androidppc.com/
[17]: http://hackerboards.com/quad-core-snapdragon-com-offers-three-dev-kit-options/
[18]: https://sourceware.org/libffi/
[19]: http://www.xes-inc.com/capabilities/ruggedization/
[20]: http://hackerboards.com/high-end-boards-run-linux-on-64-bit-arm-qoriq-socs/
如何使用 Awk 和正则表达式过滤文本或文件中的字符串
=============================================================================
![](http://www.tecmint.com/wp-content/uploads/2016/04/Linux-Awk-Command-Examples.png)
当我们在 Unix/Linux 下使用特定的命令从字符串或文件中读取或编辑文本时,我们经常会尝试过滤输出以得到感兴趣的部分。这时正则表达式就派上用场了。
### 什么是正则表达式?
正则表达式可以定义为代表若干个字符序列的字符串。它最重要的功能就是它允许你过滤一条命令或一个文件的输出,编辑文本或配置等文件的一部分。
### 正则表达式的特点
正则表达式由以下内容组合而成:
- 普通的字符例如空格、下划线、A-Z、a-z、0-9。
- 可以扩展为普通字符的元字符,它们包括:
- `(.)` 它匹配除了换行符外的任何单个字符。
- `(*)` 它匹配零个或多个它前面紧邻的那个字符。
- `[ character(s) ]` 它匹配任何由 character(s) 指定的一个字符,你可以使用连字符(-)代表字符区间,例如 [a-f]、[1-5]等。
- `^` 它匹配文件中一行的开头。
- `$` 它匹配文件中一行的结尾。
- `\` 这是一个转义字符。
要过滤文本,你必须使用类似 awk 这样的文本过滤工具。你甚至可以把 awk 本身当作一门编程语言来使用。但由于本指南的范围是介绍 awk 的使用,我会把它当作一个简单的命令行过滤工具来介绍。
awk 的一般语法如下:
```
# awk 'script' filename
```
此处 `'script'` 是一个由 awk 使用并应用于 filename 的命令集合。
它通过读取文件中的给定的一行,复制该行的内容并在该行上执行脚本的方式工作。这个过程会在该文件中的所有行上重复。
该脚本 `'script'` 中内容的格式是 `'/pattern/ action'`,其中 `pattern` 是一个正则表达式,而 `action` 是当 awk 在该行中找到此模式时应当执行的动作。
### 如何在 Linux 中使用 Awk 过滤工具
在下面的例子中,我们将聚焦于之前讨论过的元字符。
#### 一个使用 awk 的简单示例:
下面的例子打印文件 /etc/hosts 中的所有行,因为没有指定任何的模式。
```
# awk '//{print}' /etc/hosts
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Command-Example.gif)
>Awk 打印文件中的所有行
#### 结合模式使用 Awk
在下面的示例中,指定了模式 `localhost`,因此 awk 将匹配文件 `/etc/hosts` 中有 `localhost` 的那些行。
```
# awk '/localhost/{print}' /etc/hosts
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-Command-with-Pattern.gif)
>Awk 打印文件中匹配模式的行
#### 在 Awk 模式中使用通配符 (.)
在下面的例子中,符号 `(.)` 将匹配包含 loc、localhost、localnet 的字符串。
也就是说,它匹配的是 **l后跟任意单个字符再跟 c** 这样的组合。
```
# awk '/l.c/{print}' /etc/hosts
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Wild-Cards.gif)
>使用 Awk 打印文件中匹配模式的字符串
#### 在 Awk 模式中使用字符 (*)
在下面的例子中,将匹配包含 localhost、localnet、lines、capable 的字符串。
```
# awk '/l*c/{print}' /etc/hosts
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Match-Strings-in-File.gif)
>使用 Awk 匹配文件中的字符串
你可能也意识到 `(*)` 将会尝试匹配它可能检测到的最长的匹配。
让我们看一个可以证明这一点的例子:正则表达式 `t*t` 表示在下面的行中匹配以 t 开始、以 t 结束的字符串:
```
this is tecmint, where you get the best good tutorials, how to's, guides, tecmint.
```
当你使用模式 `/t*t/` 时,会得到如下可能的结果:
```
this is t
this is tecmint
this is tecmint, where you get t
this is tecmint, where you get the best good t
this is tecmint, where you get the best good tutorials, how t
this is tecmint, where you get the best good tutorials, how tos, guides, t
this is tecmint, where you get the best good tutorials, how tos, guides, tecmint
```
`/t*t/` 中的通配符 `(*)` 将使得 awk 选择匹配的最后一项:
```
this is tecmint, where you get the best good tutorials, how to's, guides, tecmint
```
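严格来说“t 后跟任意字符再跟 t”在正则表达式中写作 `t.*t``(*)` 只表示重复它前面的那个字符。可以借助 awk 内置的 match() 函数直观地看到这种“最长匹配”的行为(示例仅作演示):

```shell
# match() 会把最长匹配的位置和长度存入 RSTART 和 RLENGTH
echo "this is tecmint, where you get the best good tutorials, how tos, guides, tecmint" | \
awk '{ match($0, /t.*t/); print substr($0, RSTART, RLENGTH) }'
```

由于这一行以 t 开头、以 t 结尾,贪婪匹配会覆盖整行。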
#### 结合集合 [ character(s) ] 使用 Awk
以集合 [al1] 为例awk 将匹配文件 /etc/hosts 中所有包含字符 a 或 l 或 1 的字符串。
```
# awk '/[al1]/{print}' /etc/hosts
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matching-Character.gif)
>使用 Awk 打印文件中匹配的字符
下一个例子匹配以 `K``k` 开头,后面跟着一个 `T` 的字符串:
```
# awk '/[Kk]T/{print}' /etc/hosts
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Matched-String-in-File.gif)
>使用 Awk 打印文件中匹配的字符
#### 以范围的方式指定字符
awk 所能理解的字符:
- `[0-9]` 代表一个单独的数字
- `[a-z]` 代表一个单独的小写字母
- `[A-Z]` 代表一个单独的大写字母
- `[a-zA-Z]` 代表一个单独的字母
- `[a-zA-Z 0-9]` 代表一个单独的字母或数字
让我们看看下面的例子:
```
# awk '/[0-9]/{print}' /etc/hosts
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-To-Print-Matching-Numbers-in-File.gif)
>使用 Awk 打印文件中匹配的数字
在上面的例子中,文件 /etc/hosts 中的所有行都至少包含一个单独的数字 [0-9]。
#### 结合元字符 (\^) 使用 Awk
在下面的例子中,它匹配所有以给定模式开头的行:
```
# awk '/^fe/{print}' /etc/hosts
# awk '/^ff/{print}' /etc/hosts
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-All-Matching-Lines-with-Pattern.gif)
>使用 Awk 打印与模式匹配的行
#### 结合元字符 ($) 使用 Awk
它将匹配所有以给定模式结尾的行:
```
# awk '/ab$/{print}' /etc/hosts
# awk '/ost$/{print}' /etc/hosts
# awk '/rs$/{print}' /etc/hosts
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Print-Given-Pattern-String.gif)
>使用 Awk 打印与模式匹配的字符串
#### 结合转义字符 (\\) 使用 Awk
它允许你将该转义字符后面的字符作为文字,即理解为其字面的意思。
在下面的例子中,第一个命令打印出文件中的所有行,第二个命令中我想匹配具有 $25.00 的一行,但我并未使用转义字符,因而没有打印出任何内容。
第三个命令是正确的,因为这里使用了转义字符来转义 $,使其被识别为字面的 '$'(而非元字符)。
```
# awk '//{print}' deals.txt
# awk '/$25.00/{print}' deals.txt
# awk '/\$25.00/{print}' deals.txt
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-with-Escape-Character.gif)
>结合转义字符使用 Awk
### 总结
以上内容并不是 Awk 命令用做过滤工具的全部,上述的示例均是 awk 的基础操作。在下面的章节中,我将进一步介绍如何使用 awk 的高级功能。感谢您的阅读,请在评论区贴出您的评论。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/use-linux-awk-command-to-filter-text-string-in-files/
作者:[Aaron Kili][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
如何使用 Awk 输出文本中的字段和列
======================================================
在 Awk 系列的这一节中,我们将看到 Awk 最重要的特性之一,字段编辑。
需要知道的是Awk 能够自动将输入的行,分隔为若干字段。每一个字段就是一组字符,它们和其他的字段由一个内部字段分隔符分隔开来。
![](http://www.tecmint.com/wp-content/uploads/2016/04/Awk-Print-Fields-and-Columns.png)
>Awk 输出字段和列
如果你熟悉 Unix/Linux 或者使用 [bash 脚本][1]编过程那么你应该知道什么是内部字段分隔符IFS变量。Awk 中默认的 IFS 是制表符和空格。
Awk 中的字段分隔符的工作流程如下:当读到一行输入时,将它按照指定的 IFS 分割为不同字段,第一组字符就是字段一,可以通过 $1 来访问,第二组字符就是字段二,可以通过 $2 来访问,第三组字符就是字段三,可以通过 $3 来访问,以此类推,直到最后一组字符。
为了更好地理解 Awk 的字段编辑,让我们看一个下面的例子:
**例 1**:我创建了一个名为 tecmintinfo.txt 的文本文件。
```
# vi tecmintinfo.txt
# cat tecmintinfo.txt
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Create-File-in-Linux.png)
>在 Linux 上创建一个文件
然后在命令行中,我试着使用下面的命令从文本 tecmintinfo.txt 中输出第一个,第二个,以及第三个字段。
```
$ awk '//{print $1 $2 $3 }' tecmintinfo.txt
TecMint.comisthe
```
从上面的输出中你可以看到,前三个字段的字符是以空格为分隔符输出的:
- 字段一是 “TecMint.com”可以通过 `$1` 来访问。
- 字段二是 “is”可以通过 `$2` 来访问。
- 字段三是 “the”可以通过 `$3` 来访问。
如果你注意观察输出的话可以发现,输出的字段值并没有被分隔开,这是 print 函数默认的行为。
为了使输出看得更清楚、让字段值之间以空格分开,你需要加上逗号 (,) 操作符。
```
$ awk '//{print $1, $2, $3; }' tecmintinfo.txt
TecMint.com is the
```
需要记住而且非常重要的是,`($)` 在 Awk 和在 shell 脚本中的使用是截然不同的!
在 shell 脚本中,`($)` 被用来获取变量的值。而在 Awk 中,`($)` 只有在获取字段的值时才会用到,不能用于获取变量的值。
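下面的小例子直观地对比了这一区别(变量名和数据均为演示用的假设;通过 awk 的 -v 选项可以把 shell 变量的值传入 awk

```shell
name="Tecmint"
# shell 中 $name 取变量的值awk 中 $2 取第二个字段
echo "one two three" | awk -v var="$name" '{ print var, $2 }'
```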
**例 2**:让我们再看一个例子,用到一个名为 my_shopping.txt 的包含多行内容的文件。
```
No Item_Name Unit_Price Quantity Price
1 Mouse #20,000 1 #20,000
2 Monitor #500,000 1 #500,000
3 RAM_Chips #150,000 2 #300,000
4 Ethernet_Cables #30,000 4 #120,000
```
如果你只想输出购物清单上每一个物品的`单价`,你只需运行下面的命令:
```
$ awk '//{print $2, $3 }' my_shopping.txt
Item_Name Unit_Price
Mouse #20,000
Monitor #500,000
RAM_Chips #150,000
Ethernet_Cables #30,000
```
可以看到上面的输出不够清晰Awk 还有一个 `printf` 的命令,可以帮助你将输出格式化。
使用 `printf` 来格式化 Item_Name 和 Unit_Price 的输出:
```
$ awk '//{printf "%-10s %s\n",$2, $3 }' my_shopping.txt
Item_Name Unit_Price
Mouse #20,000
Monitor #500,000
RAM_Chips #150,000
Ethernet_Cables #30,000
```
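其中 `%-10s` 表示输出一个左对齐、至少占 10 个字符宽度的字符串,`\n` 是换行符。可以单独验证一下这种格式化效果(数据为演示用的假设):

```shell
# 第一个字段左对齐补足到 10 个字符宽,再输出第二个字段
echo "Mouse #20,000" | awk '{ printf "%-10s %s\n", $1, $2 }'
```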
### 总结
使用 Awk 过滤文本或字符串时字段编辑的功能是非常重要的。它能够帮助你从一个表的数据中得到特定的列。一定要记住的是Awk 中 `($)` 操作符的用法与其在 shell 脚本中的用法是不同的!
希望这篇文章对您有所帮助。如有任何疑问,可以在评论区域发表评论。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/awk-print-fields-columns-with-space-separator/
作者:[Aaron Kili][a]
译者:[Cathon](https://github.com/Cathon)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/category/bash-shell/
如何使用 Awk 来筛选文本或字符串
=========================================================================
![](http://www.tecmint.com/wp-content/uploads/2016/04/Use-Awk-to-Filter-Text-or-Strings-Using-Pattern.png)
作为 Awk 命令系列的第三部分,这次我们将看一看如何基于用户定义的特定模式来筛选文本或字符串。
在筛选文本时,有时你可能想根据某个给定的条件或使用一个特定的可被匹配的模式,去标记某个文件或数行字符串中的某几行。使用 Awk 来完成这个任务是非常容易的,这也正是 Awk 中可能对你有所帮助的几个特色之一。
让我们看一看下面这个例子,比方说你有一个写有你想要购买的食物的购物清单,其名称为 food_prices.list它所含有的食物名称及相应的价格如下所示
```
$ cat food_prices.list
No Item_Name Quantity Price
1 Mangoes 10 $2.45
2 Apples 20 $1.50
3 Bananas 5 $0.90
4 Pineapples 10 $3.46
5 Oranges 10 $0.78
6 Tomatoes 5 $0.55
7 Onions 5 $0.45
```
然后,你想使用一个 `(*)` 符号去标记那些单价大于 $2 的食物,那么你可以通过运行下面的命令来达到此目的:
```
$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $1, $2, $3, $4, "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Text-Using-Awk.gif)
>打印出单价大于 $2 的项目
从上面的输出你可以看到在含有芒果mangoes和菠萝pineapples的那两行末尾都已经有了一个 `(*)` 标记。假如你检查它们的单价,可以看到它们的单价的确超过了 $2。
在这个例子中,我们已经使用了两个模式:
- 第一个模式: `/ *\$[2-9]\.[0-9][0-9] */` 将会得到那些含有食物单价大于 $2 的行,
- 第二个模式: `/ *\$[0-1]\.[0-9][0-9] */` 将查找那些食物单价小于 $2 的行。
上面的命令具体做了什么呢?这个文件有四个字段,当模式一匹配到含有食物单价大于 $2 的行时,它便会输出所有的四个字段并在该行末尾加上一个 `(*)` 符号来作为标记。
第二个模式只是简单地输出其他含有食物单价小于 $2 的行,因为它们出现在输入文件 food_prices.list 中。
这样你就可以使用模式来筛选出那些价格超过 $2 的食物项目,尽管上面的输出还有些问题,带有 `(*)` 符号的那些行并没有像其他行那样被格式化输出,这使得输出显得不够清晰。
我们在 Awk 系列的第二部分中也看到了同样的问题,但我们可以使用下面的两种方式来解决:
1. 可以像下面这样使用 printf 命令,但这样使用又长又无聊:
```
$ awk '/ *\$[2-9]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4; }' food_prices.list
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Printf.gif)
>使用 Awk 和 Printf 来筛选和输出项目
2. 使用 `$0` 字段。Awk 使用变量 `$0` 来存储整个输入行。对于上面的问题,这种方式非常方便,并且它还简单、快速:
```
$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $0 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list
```
![](http://www.tecmint.com/wp-content/uploads/2016/04/Filter-and-Print-Items-Using-Awk-and-Variable.gif)
>使用 Awk 和变量来筛选和输出项目
### 结论
这就是全部内容了,使用 Awk 命令你便可以通过几种简单的方法去利用模式匹配来筛选文本,帮助你在一个文件中对文本或字符串的某些行做标记。
希望这篇文章对你有所帮助。记得阅读这个系列的下一部分,我们将关注在 awk 工具中使用比较运算符。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/awk-filter-text-or-string-using-patterns/
作者:[Aaron Kili][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
如何使用 Awk 的 next 命令
=============================================
![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-next-Command-with-Awk-in-Linux.png)
在 Awk 系列的第六节,我们来看一下 `next` 命令,它告诉 Awk 跳过你所提供的剩余表达式,直接读取下一个输入行。
`next` 命令可以帮助你避免执行多余的步骤。
要明白它是如何工作的,让我们来分析一下 food_list.txt 文件,它看起来像这样:
```
Food List Items
No Item_Name Price Quantity
1 Mangoes $3.45 5
2 Apples $2.45 25
3 Pineapples $4.45 55
4 Tomatoes $3.45 25
5 Onions $1.45 15
6 Bananas $3.45 30
```
运行下面的命令它将在每个食物数量小于或者等于20的行后面标一个星号:
```
# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; } $4 > 20 { print $0 ;} ' food_list.txt
No Item_Name Price Quantity
1 Mangoes $3.45 5 *
2 Apples $2.45 25
3 Pineapples $4.45 55
4 Tomatoes $3.45 25
5 Onions $1.45 15 *
6 Bananas $3.45 30
```
上面的命令实际运行过程如下:
- 首先,它用 `$4 <= 20` 表达式检查每个输入行的第四列(数量)是否小于或者等于 20如果满足条件就在行末打一个星号 `(*)`。
- 接着,它用 `$4 > 20` 表达式检查每个输入行的第四列是否大于 20如果满足条件就原样打印该行。
但是这里有一个问题:当第一个表达式用 `{ printf "%s\t%s\n", $0,"*" ; }` 打印出带星号的行之后,同一行还会再被第二个表达式判断一次,这样就浪费了时间。
因此,当我们已经用第一个表达式打印了带标记的行之后,就不再需要用第二个表达式 `$4 > 20` 再次判断并打印。
要解决这个问题,我们需要用到 `next` 命令:
```
# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } $4 > 20 { print $0 ;} ' food_list.txt
No Item_Name Price Quantity
1 Mangoes $3.45 5 *
2 Apples $2.45 25
3 Pineapples $4.45 55
4 Tomatoes $3.45 25
5 Onions $1.45 15 *
6 Bananas $3.45 30
```
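顺带一提,上面的两条规则也可以用一条带 if/else 的规则等价地写出来,此时每行只会被判断一次,不需要 `next`(仅作对比演示,数据是清单中的前两行):

```shell
# 数量 <= 20 的行末尾加星号,其余行原样打印
printf '1 Mangoes $3.45 5\n2 Apples $2.45 25\n' | \
awk '{ if ($4 <= 20) printf "%s\t%s\n", $0, "*"; else print $0 }'
```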
当输入行被 `$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next ; }` 命令打印以后,`next` 命令将跳过第二个表达式 `$4 > 20 { print $0 ;}`,直接去判断下一个输入行,而不是浪费时间再判断一次当前行的数量是不是大于 20。
`next` 命令在编写高效的命令脚本时非常重要,它可以在很大程度上提高脚本的速度。接下来我们准备学习 Awk 系列的下一节了。
希望这篇文章对你有帮助,欢迎给我们留言。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/use-next-command-with-awk-in-linux/
Author: [Aaron Kili][a]
Translator: [kokialoves](https://github.com/kokialoves)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: http://www.tecmint.com/author/aaronkili/

How to Read Standard Input (STDIN) as Input for Awk in Linux
============================================
![](http://www.tecmint.com/wp-content/uploads/2016/06/Read-Awk-Input-from-STDIN.png)
In the previous parts of the Awk tool series, we saw that most operations read their input from one or more files, but you may also want to use standard input as awk's input.
In part 7 of the Awk series, we will look at a few examples in which you filter the output of other commands, instead of reading input from a file, and use it as awk's input.
We start with the [dir utility][1]; the dir command is similar to the [ls command][2]. In the first example below, we use the output of the `dir -l` command as input for an awk command to print out each file owner's username, the group name, and the files he/she owns in the current directory:
```
# dir -l | awk '{print $3, $4, $9;}'
```
![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-By-User-in-Directory.png)
> List files owned by users in the current directory
Let's look at another example where we [use an awk expression][3]. Here, we want to filter out a string in the awk command so as to print the files that belong to the root user. The command is as follows:
```
# dir -l | awk '$3=="root" {print $1,$3,$4, $9;} '
```
![](http://www.tecmint.com/wp-content/uploads/2016/06/List-Files-Owned-by-Root-User.png)
> List files owned by the root user
The command above includes the `==` comparison operator, which helps us filter out the root user's files in the current directory. This is achieved with the expression `$3=="root"`.
Let's look at yet another example where we use an [awk comparison operator][4] to match a specific string.
Here, we use the [cat utility][5] to view the contents of a file named tecmint_deals.txt, and we want to see only the lines that contain the string Tech, so we run the following commands:
```
# cat tecmint_deals.txt
# cat tecmint_deals.txt | awk '$4 ~ /tech/{print}'
# cat tecmint_deals.txt | awk '$4 ~ /Tech/{print}'
```
![](http://www.tecmint.com/wp-content/uploads/2016/06/Use-Comparison-Operator-to-Match-String.png)
> Match a string using an awk comparison operator
In the example above, we used the comparison operator `~ /pattern/`, and the two commands show us something important.
When you run the command with the string tech, there is no output, because no line in the file contains the string tech; but running it with the string Tech does produce output, from the lines containing Tech.
So always be careful when doing this kind of comparison: as we saw above, awk is case-sensitive.
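If you do want a match that ignores case, one portable sketch is to lower-case the field before comparing it; `tolower()` is part of POSIX awk. The sample file below is hypothetical, mirroring the layout of the deals file:

```shell
# Hypothetical sample data mirroring the deals file's layout
cat > deals.txt <<'EOF'
1 Laptop $300 Tech
2 Chair $40 Home
3 Router $50 tech
EOF

# Case-sensitive: matches only the line containing "Tech"
awk '$4 ~ /Tech/ { print }' deals.txt

# Case-insensitive: lower-case the field first, matching "Tech" and "tech"
awk 'tolower($4) ~ /tech/ { print }' deals.txt
```

The first command prints one line, the second prints two.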
You can always use the output of another command as input for awk instead of reading input from a file, and it is as simple as what we saw above.
I hope these examples were simple enough for you to understand how awk is used. If you have any questions, ask them in the comments section below, and remember to check the upcoming parts of the awk series, where we will focus on awk features such as variables, numeric expressions, and assignment operators.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/read-awk-input-from-stdin-in-linux/
Author: [Aaron Kili][a]
Translator: [vim-kakali](https://github.com/vim-kakali)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/linux-dir-command-usage-with-examples/
[2]: http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[3]: http://www.tecmint.com/combine-multiple-expressions-in-awk
[4]: http://www.tecmint.com/comparison-operators-in-awk
[5]: http://www.tecmint.com/13-basic-cat-command-examples-in-linux/