Merge pull request #7 from LCTT/master

update
This commit is contained in:
DoubleC 2014-09-30 21:46:23 +08:00
commit 79dac37871
52 changed files with 3263 additions and 2018 deletions

View File

@ -1,12 +1,12 @@
从命令行访问Linux命令小抄
================================================================================
Linux命令行的强大在于其灵活及多样化各个Linux命令都带有它自己那部分命令行选项和参数。混合并匹配它们,甚至还可以通过管道和重定向来联结不同的命令。理论上讲,你可以借助几个基本的命令来产生数以百计的使用案例。甚至对于浸淫多年的管理员而言,也难以完全使用它们。那正是命令行小抄成为我们救命稻草的一刻。
Linux命令行的强大在于其灵活及多样化各个Linux命令都带有它自己专属的命令行选项和参数。混合并匹配这些命令,甚至还可以通过管道和重定向来联结不同的命令。理论上讲,你可以借助几个基本的命令来产生数以百计的使用案例。甚至对于浸淫多年的管理员而言,也难以完全使用它们。那正是命令行小抄成为我们救命稻草的一刻。
[![](https://farm6.staticflickr.com/5562/14752051134_5a7c3d2aa4_z.jpg)][1]
我知道联机手册页仍然是我们的良师益友但我们想通过我们能自行支配的快速参考卡让这一切更为高效和有目的性。最终极的小抄可能被自豪地挂在你的办公室里也可能作为PDF文件隐秘地存储在你的硬盘上或者甚至设置成了你的桌面背景图。
我知道联机手册页man仍然是我们的良师益友但我们想通过我们能自行支配的快速参考卡让这一切更为高效和有目的性。最终极的小抄可能被自豪地挂在你的办公室里也可能作为PDF文件隐秘地存储在你的硬盘上或者甚至设置成了你的桌面背景图。
为一个选择,也可以通过另外一个命令来访问你最爱的命令行小抄。那就是,使用[cheat][2]。这是一个命令行工具它可以让你从命令行读取、创建或更新小抄。这个想法很简单不过cheat经证明是十分有用的。本教程主要介绍Linux下cheat命令的使用方法。你不需要为cheat命令做个小抄了它真的很简单。
作为另一种选择，你也可以通过另外一个命令来访问你最爱的命令行小抄。那就是，使用[cheat][2]。这是一个命令行工具，它可以让你从命令行读取、创建或更新小抄。这个想法很简单，不过cheat已被证明是十分有用的。本教程主要介绍Linux下cheat命令的使用方法。你不需要为cheat命令做个小抄了，它真的很简单。
### 安装Cheat到Linux ###
@ -59,9 +59,9 @@ cheat命令一个很酷的事是它自带有超过90个的常用Linux命令
$ cheat -s <keyword>
在许多情况下,小抄适用于那些正派的人,而对其他某些人却没什么帮助。要想让内建的小抄更具个性化cheat命令也允许你创建新的小抄或者更新现存的那些。要这么做的话cheat命令也会帮你在本地~/.cheat目录中保存一份小抄的副本。
在许多情况下,小抄适用于某些人,而对另外一些人却没什么帮助。要想让内建的小抄更具个性化cheat命令也允许你创建新的小抄或者更新现存的那些。要这么做的话cheat命令也会帮你在本地~/.cheat目录中保存一份小抄的副本。
要使用cheat的编辑功能首先确保EDITOR环境变量设置为你默认编辑器所在位置的完整路径。然后,复制(不可编辑)内建小抄到~/.cheat目录。你可以通过下面的命令找到内建小抄所在的位置。一旦你找到了它们的位置只不过是将它们拷贝到~/.cheat目录。
要使用cheat的编辑功能，首先确保EDITOR环境变量设置为你默认编辑器所在位置的完整路径。然后，把（本身不可直接编辑的）内建小抄复制到~/.cheat目录。你可以通过下面的命令找到内建小抄所在的位置，一旦找到了，只需将它们拷贝到~/.cheat目录即可。
$ cheat -d
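下面是一段示意性的操作流程（其中内建小抄的存放路径只是一个假设，实际位置请以上面 cheat -d 命令的输出为准）：

    $ export EDITOR=/usr/bin/vim                  # 将EDITOR设为你默认编辑器的完整路径
    $ mkdir -p ~/.cheat                           # 确保本地小抄目录存在
    $ cp /usr/share/cheat/cheatsheets/* ~/.cheat  # 此路径仅为假设，请以cheat -d的输出为准
    $ cheat -e tar                                # 在编辑器中打开（或新建）tar命令的小抄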
@ -85,7 +85,7 @@ via: http://xmodulo.com/2014/07/access-linux-command-cheat-sheets-command-line.h
作者:[Dan Nanni][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,14 +1,14 @@
在哪儿以及怎么写代码:选择最好的免费代码编辑器
何处写,如何写:选择最好的免费在线代码编辑器
================================================================================
深入了解一下Cloud9Koding和Nitrous.IO。
> 深入了解一下Cloud9Koding和Nitrous.IO。
![](http://a2.files.readwrite.com/image/upload/c_fill,h_900,q_70,w_1600/MTIzMDQ5NjYzODM4NDU1MzA4.jpg)
**已经准备好开始你的第一个编程项目了吗?很好!只要配置一下**终端或命令行,学习如何使用并安装所有要用到的编程语言插件库和API函数库。当最终准备好一切以后再安装好[Visual Studio][1]就可以开始了,然后才可以预览自己的工作。
已经准备好开始你的第一个编程项目了吗?很好!只要配置一下终端或命令行,学习如何使用它,然后安装所有要用到的编程语言插件库和API函数库。当最终准备好一切以后再安装好[Visual Studio][1]就可以开始了,然后才可以预览自己的工作。
至少这是大家过去已经熟悉的方式。
也难怪初学程序员们逐渐喜欢上在线集成开发环境(IDE)了。IDE是一个代码编辑器不过已经准备好编程语言以及所有需要的依赖可以让你避免把它们一一安装到电脑上的麻烦。
也难怪初学程序员们逐渐喜欢上在线集成开发环境(IDE)了。IDE是一个代码编辑器不过已经准备好编程语言以及所有需要的依赖可以让你避免把它们一一安装到电脑上的麻烦。
我想搞清楚到底是哪些因素能组成一个典型的IDE，所以我试用了一下时下最受欢迎的三款集成开发环境的免费版：[Cloud9][2]，[Koding][3]和[Nitrous.IO][4]。在这个过程中，我了解了许多程序员应该或不应该使用IDE的各种情形。
@ -16,7 +16,7 @@
如果说文字编辑器类似Microsoft Word，那么IDE就类似Google Drive。你可以拥有类似的功能，但是它还能支持从任意电脑上访问，还能随时共享。因为因特网在项目工作流中的影响已经越来越重要，IDE也让生活更轻松。
在我最近的一篇ReadWrite教程中我使用了Nitrous.IO这是在文章[创建一个你自己的像Yo那样的极端简单的聊天应用][5]里的一个Python应用。当使用IDE的时候你只要选择你要用的编程语言然后通过IDE特别设计用来运行这种语言程序的虚拟机VM你就可以测试和预览你的应用了。
在我最近的一篇ReadWrite教程中我使用了Nitrous.IO这是在文章[创建一个你自己的像Yo那样的极端简单的聊天应用][5]里的一个Python应用。当使用IDE的时候你只要选择你要用的编程语言然后通过IDE特别为运行这种语言程序而设计的虚拟机VM你就可以测试和预览你的应用了。
如果你读过那篇教程就会知道我的那个应用只用到了两个API库信息服务Twilio和Python微框架Flask。在我的电脑上就算是使用文字编辑器和终端来做也是很简单的不过我选择使用IDE还有一个方便的地方如果大家都使用同样的开发环境跟着教程一步步走下去就更简单了。
@ -28,7 +28,7 @@
但是不能用IDE来永久存储你的整个项目。把帖子保存在Google Drive文件中不会让你的博客丢失。类似Google DriveIDE可以让你创建链接用于共享内容但是任何一个都还不足以替代真正的托管服务器。
还有IDE并不是设计成方便广泛共享。尽管各种IDE都在不断改善大多数文字编辑器的预览功能还只能用来给你的朋友或同事展示一下应用预览而不是比如说类似Hacker News的主页。那样的话占用太多带宽的IDE也许会让你崩溃。
还有IDE并不是设计成方便广泛共享。尽管各种IDE都在不断改善大多数文字编辑器的预览功能还只能用来给你的朋友或同事展示一下应用的预览而不是像Hacker News一样的主页。那样的话占用太多带宽的IDE也许会让你崩溃。
这样说吧IDE只是构建和测试你的应用的地方托管服务器才是它们生存的地方。所以一旦完成了你的应用你会希望把它布置到能长期托管的云服务器上最好是能免费托管的那种例如[Heroku][6]。
@ -44,7 +44,7 @@
当我完成了Cloud9的注册后它提示的第一件事情就是添加我的GitHub和BitBucket账号。马上所有我的GitHub项目个人的和协作的都可以直接克隆到本地并使用Cloud9的开发工具开始工作。其他的IDE在和GitHub集成的方面都没有达到这种水准。
在我测试的这三款IDE中Cloud9看起来更加侧重于一个可以让协同工作的人们无缝衔接工作的环境。在这里它并不是角落里放个聊天窗口。实际上按照CEO Ruben Daniels说的试用Cloud9的协作者可以互相看到其他人实时的编码情况就像Google Drive上的合作者那样。
在我测试的这三款IDE中Cloud9看起来更加侧重于一个可以让协同工作的人们无缝衔接工作的环境。在这里它并不是角落里放个聊天窗口。实际上按照CEO Ruben Daniels说的试用Cloud9的协作者可以互相看到其他人实时的编码情况就像Google Drive上的合作者那样。
“大多数IDE服务的协同功能只能操作单一文件”Daniels说“而我们的产品可以支持整个项目中的不同文件。协同功能被完美集成到了我们的IDE中。”
@ -58,15 +58,15 @@ IDE可以提供你所需的工具来构建和测试所有开源编程语言的
### Nitrous.IO: An IDE Wherever You Want ###
相对于自己的桌面环境使用IDE的最大优势是它是自包含的。你不需要安装任何其他的就可以使用。而另一方面,使用自己的桌面环境的最大优势就是你可以在本地工作,甚至在没有互联网的情况下。
相对于自己的桌面环境使用IDE的最大优势是它是自足的。你不需要安装任何其他的东西就可以使用。而另一方面,使用自己的桌面环境的最大优势就是你可以在本地工作,甚至在没有互联网的情况下。
Nitrous.IO结合了这两个优势。你可以在网站上在线使用这个IDE你也可以把它下载到自己的饿电脑上共同创始人AJ Solimine这样说。优点是你可以结合Nitrous的集成性和你最喜欢的文字编辑器的熟悉。
Nitrous.IO结合了这两个优势。你可以在网站上在线使用这个IDE，你也可以把它下载到自己的电脑上，共同创始人AJ Solimine这样说。优点是你可以同时获得Nitrous的集成性和你最喜欢的文字编辑器的熟悉感。
他说:“你可以使用任意代浏览器访问Nitrous.IO的在线IDE网站但我们仍然提供了方便的Windows和Mac桌面应用可以让你使用你最喜欢的编辑器来写代码。”
他说：“你可以使用任意现代浏览器访问Nitrous.IO的在线IDE网站，但我们仍然提供了方便的Windows和Mac桌面应用，可以让你使用你最喜欢的编辑器来写代码。”
### 底线 ###
这一个星期[使用][7]三个不同IDE的最让我意外的收获它们是如此相似。[当用来做最基本的代码编辑的时候][8],它们都一样的好用。
这一个星期[使用][7]三个不同IDE的最让我意外的收获是什么?它们是如此相似。[当用来做最基本的代码编辑的时候][8],它们都一样的好用。
Cloud9Koding[和Nitrous.IO都支持][9]所有主流的开源编程语言从Ruby到Python到PHP到HTML5。你可以选择任何一种VM。
@ -76,7 +76,7 @@ Cloud9和Nitrous.IO都实现了GitHub的一键集成。Koding需要[多几个步
不好的一面它们都有相同的缺陷不过考虑到它们都是免费的也还合理。你每次只能同时运行一个VM来测试特定编程语言写出的程序。而当你一段时间没有使用VM之后IDE会把VM切换成休眠模式以节省带宽而下次要用的时候就得等它重新加载Cloud9在这一点上更加费力。它们中也没有任何一个为已完成的项目提供像样的永久托管服务。
所以对咨询我是否有一个完美的免费IDE的人答案是可能没有。但是这也要看你侧重的地方对你的某个项目来说也许有一个完美的IDE。
所以对咨询我是否有一个完美的免费IDE的人来说答案是可能没有。但是这也要看你侧重的地方对你的某个项目来说也许有一个完美的IDE。
图片由[Shutterstock][11]友情提供
@ -86,7 +86,7 @@ via: http://readwrite.com/2014/08/14/cloud9-koding-nitrousio-integrated-developm
作者:[Lauren Orsini][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -6,13 +6,13 @@
<blockquote><em>通过入会声明,任何人都能轻易加入“匿名者”组织。某人类学家称,组织成员会“根据影响程度对重大事件保持着不同关注,特别是那些能挑起强烈争端的事件”。</em></blockquote>
<small>布景Jeff Nishinaka / 摄影Scott Dunbar</small>
<small>纸雕作品Jeff Nishinaka / 摄影Scott Dunbar</small>
<h2>1</h2>
<p>上世纪七十年代中期,当 Christopher Doyon 还是一个生活在缅因州乡村的孩童时,就终日泡在 CB radio 上与各种陌生人聊天。他的昵称是“大红”因为他有一头红色的头发。Christopher Doyon 把发射机挂在了卧室的墙壁上并且说服了父亲在自家屋顶安装了两根天线。CB radio 主要用于卡车司机间的联络,但 Doyon 和一些人却将之用于不久后出现在 Internet 上的虚拟社交——自定义昵称、成员间才懂的笑话,以及施行变革的强烈愿望。</p>
<p>上世纪七十年代中期,当 Christopher Doyon 还是一个生活在缅因州乡村的孩童时,就终日泡在 CB radio 上与各种陌生人聊天。他的昵称是“Big red”(大红)因为他有一头红色的头发。Christopher Doyon 把发射机挂在了卧室的墙壁上并且说服了父亲在自家屋顶安装了两根天线。CB radio 主要用于卡车司机间的联络,但 Doyon 和一些人却将之用于不久后出现在 Internet 上的虚拟社交——自定义昵称、成员间才懂的笑话,以及施行变革的强烈愿望。</p>
<p>Doyon 很小的时候母亲就去世了,兄妹二人由父亲抚养长大,他俩都说受到过父亲的虐待。由此 Doyon 在 CB radio 社区中找到了慰藉和归属感。他和他的朋友们轮流监听当地紧急事件频道。其中一个朋友的父亲买了一个气泡灯并安装在了他的车顶上;每当这个孩子收听到来自孤立无援的乘车人的求助后,都会开车载着所有人到求助者所在的公路旁。除了拨打 911 外他们基本没有什么可做的,但这足以让他们感觉自己成为了英雄。</p>
<p>Doyon 很小的时候母亲就去世了,兄妹二人由父亲抚养长大,他俩都说受到过父亲的虐待。由此 Doyon 在 CB radio 社区中找到了慰藉和目标感。他和他的朋友们轮流监听当地紧急事件频道。其中一个朋友的父亲买了一个气泡灯并安装在了他的车顶上;每当这个孩子收听到来自孤立无援的乘车人的求助后,都会开车载着所有人到求助者所在的公路旁。除了拨打 911 外他们基本没有什么可做的,但这足以让他们感觉自己成为了英雄。</p>
<p>短小精悍的 Doyon 有着一口浓厚的新英格兰口音,并且非常喜欢《星际迷航》和阿西莫夫的小说。当他在《大众机械》上看到一则“组装你的专属个人计算机”构件广告时,就央求祖父给他买一套,接下来 Doyon 花了数月的时间把计算机组装起来并连接到 Internet 上去。与鲜为人知的 CB 电波相比,在线聊天室确实不可同日而语。“我只需要点一下按钮,再选中某个家伙的名字,然后我就可以和他聊天了,” Doyon 在最近回忆时说道,“这真的很惊人。”</p>
@ -22,11 +22,11 @@
<p>Doyon 深深地沉溺于计算机中,虽然他并不是一位专业的程序员。在过去一年的几次谈话中,他告诉我他将自己视为激进主义分子,继承了 Abbie Hoffman 和 Eldridge Cleaver 的激进传统技术不过是他抗议的工具。八十年代哈佛大学和麻省理工学院的学生们举行集会强烈抗议他们的学校从南非撤资。为了帮助抗议者通过安全渠道进行交流PLF 制作了无线电套装移动调频发射器、伸缩式天线还有麦克风所有部件都内置于背包内。Willard Johnson麻省理工学院的一位激进分子和政治学家表示黑客们出席集会并不意味着一次变革。“我们的大部分工作仍然是通过扩音器来完成的”他解释道。</p>
<p>1992 年,在 Grateful Dead 的一场印第安纳的演唱会上Doyon 秘密地向一位瘾君子出售了 300 粒药。由此他被判决在印第安纳州立监狱服役十二年,后来改为五年。服役期间,他对宗教和哲学产生了浓厚的兴趣,并于鲍尔州立大学学习了相应课程。</p>
<p>1992 年,在印第安纳的一场 Grateful Dead 的演唱会上Doyon 秘密地向一位瘾君子出售了 300 粒药。由此他被判决在印第安纳州立监狱服役十二年,后来改为五年。服役期间,他对宗教和哲学产生了浓厚的兴趣,并于鲍尔州立大学学习了相应课程。</p>
<p>1994 年,第一款商业 Web 浏览器网景领航员正式发布,同一年 Doyon 被捕入狱。当他出狱并再次回到剑桥后PLF 依然活跃着并且他们的工具有了实质性的飞跃。Doyon 回忆起他入狱之前的变化“非常巨大——好比是烽火狼烟电报传信之间那么大的差距。”黑客们入侵了一个印度的军事网站并修改其首页文字为“拯救克什米尔”。在塞尔维亚黑客们攻陷了一个阿尔巴尼亚网站。Stefan Wray一位早期网络激进主义分子为一次纽约“反哥伦布日”集会上的黑客行径辩护。“我们视之为电子形式的公众抗议”他告诉大家。</p>
<p>1994 年,第一款商业 Web 浏览器 Netscape Navigator网景领航员正式发布,同一年 Doyon 被捕入狱。当他出狱并再次回到剑桥后PLF 依然活跃着并且他们的工具有了实质性的飞跃。Doyon 回忆起他入狱之前对比的变化“非常巨大——好比是烽火狼烟电报传信之间那么大的差距。”黑客们入侵了一个印度的军事网站并修改其首页文字为“拯救克什米尔”。在塞尔维亚黑客们攻陷了一个阿尔巴尼亚网站。Stefan Wray一位早期网络激进主义分子为一次纽约“反哥伦布日”集会上的黑客行径辩护。“我们视之为电子形式的公众抗议”他告诉大家。</p>
<p>1999 年,美国唱片业协会因为版权侵犯问题起诉了 Napster一款文件共享软件。最终Napster 于 2001 年关闭。Doyon 与其他黑客使用分布式拒绝服务Distributed Denial of ServiceDDoS使大量数据涌入网站导致其响应速度减缓直至奔溃的手段攻击了美国唱片业协会的网站使之停运时间长达一星期之久。Doyon为自己的行为进行了辩解并高度赞扬了其他的“黑客主义者”。“我们很快意识到保卫 Napster 的战争象征着保卫 Internet 自由的战争,”他在后来写道。</p>
<p>1999 年，美国唱片业协会因为版权侵犯问题起诉了 Napster，一款文件共享服务。最终，Napster 于 2001 年关闭。Doyon 与其他黑客使用分布式拒绝服务（Distributed Denial of Service，DDoS：使大量数据涌入网站导致其响应速度减缓直至崩溃的手段）攻击了美国唱片业协会的网站，使之停运时间长达一星期之久。Doyon为自己的行为进行了辩解，并高度赞扬了其他的“黑客主义者”。“我们很快意识到，保卫 Napster 的战争象征着保卫 Internet 自由的战争，”他在后来写道。</p>
<p>2008 年的一天，Doyon 和 “Commander Adama” 在剑桥的 PLF 地下公寓相遇。Adama 当着 Doyon 的面点击了癫痫基金会的一个链接，与意料中将要打开的论坛不同，出现的是一连串闪烁的彩光。有些癫痫病患者对闪光灯非常敏感——这完全是出于恶意，有人想要在无辜群众中诱发癫痫病。已经出现了至少一名受害者。</p>
@ -42,69 +42,69 @@
<center><small>“我得谈谈我的感受。”</small></center>
<p>Poole 希望匿名这一举措可以延续社区的尖锐性因素。“我们无意参与理智的涉外事件讨论”他在网站上写道。4chan 社区里最具价值的事之一便是寻求“挑起强烈的争端”lulz这个词源自缩写 LOL。Lulz 经常是通过分享充满孩子气的笑话或图片来实现的,它们中的大部分不是色情的就是下流的。其中最令人震惊的部分被贴在了网站的“/b/”版块上,这里的用户们称呼自己为“/b/tards”。Doyon 知道 4chan 这个社区,但他认为那些用户是“一群愚昧无知的顽童”。2004 年前后,/b/ 上的部分用户开始把“匿名者”视为一个独立的实体。</p>
<p>Poole 希望匿名这一举措可以延续社区的尖锐性因素。“我们无意参与理智的涉外事件讨论”他在网站上写道。4chan 社区里最具价值的事之一便是寻求“挑起强烈的争端”lulz这个词源自缩写 LOL。Lulz 经常是通过分享幼稚的笑话或图片来实现的,其中大部分不是色情的就是下流的。其中最令人震惊的部分被贴在了网站的“/b/”版块上,这里的用户们称呼自己为“/b/tards”。Doyon 知道 4chan 这个社区,但他认为它的用户是“一群愚昧无知的顽童”。2004 年前后,/b/ 上的部分用户开始把“匿名者”视为一个独立的实体。</p>
<p>这是一个全新的黑客团体。“这不是一个传统意义上的组织,”一位领导计算机安全工作的研究员 Mikko Hypponen 告诉我——倒不如视之为一个非传统的亚文化群体。Barrett Brown德克萨斯州的一名记者,同时也是众所周知的“匿名者”高层领导把“匿名者”描述为“一连串前仆后继的伟大友谊”。无需任何会费或者入会仪式。任何想要加入“匿名者”组织成为一名匿名者Anon的人都可以通过简短的象征性的宣誓加入。</p>
<p>尽管 4chan 的关注焦点是一些琐碎的话题,但许多匿名者认为自己就是“正义的十字军”。如果网上有不良迹象出现,他们就会发起具有针对性的治安维护行动。不止一次,他们以未成年少女的身份套取恋童癖的私人信息,然后把这些信息交给警察局。其他匿名者则是政治的厌恶者,为了挑起争端想方设法散布混乱的信息。他们中的一些人在 /b/ 上发布看着像是雷管炸弹的图片另一些则叫嚣着要炸毁足球场并因此被联邦调查局逮捕。2007 年,一家洛杉矶当地的新闻联盟机构称呼“匿名者”组织为“互联网负能量制造机”。</p>
<p>尽管 4chan 的关注焦点是一些琐碎的话题,但许多匿名者认为自己就是“正义的十字军”。如果网上有不良迹象出现,他们就会发起具有针对性的治安维护行动。不止一次,他们以未成年少女的身份使恋童癖陷入圈套,然后把他们的个人信息交给警察局。其他匿名者则是政治的厌恶者,为了挑起争端想方设法散布混乱的信息。他们中的一些人在 /b/ 上发布看着像是雷管炸弹的图片另一些则叫嚣着要炸毁足球场并因此被联邦调查局逮捕。2007 年,一家洛杉矶当地的新闻联盟机构称呼“匿名者”组织为“互联网负能量制造机”。</p>
<p>2008 年 1 月Gawker Media 上传了一段关于汤姆克鲁斯大力吹捧山达基优点的视频。这段视频是受版权保护的,山达基教会致信 Gawker勒令其删除这段视频。“匿名者”组织认为教会企图控制网络信息。“是时候让 /b/ 来干票大的了,”有人在 4chan 上写道。“我说的是‘入侵’或者‘攻陷’山达基官方网站。”一位匿名者使用 YouTube 放出一段“新闻稿”,其中包括暴雨云视频和经过计算机处理的语音。“我们要立刻把你们从 Internet 上赶出去,并且在现有规模上逐渐瓦解山达基教会,”那个声音说,“你们无处可躲。”不到一个星期,这段 YouTube 视频的点击率就超过了两百万次。</p>
<p>“匿名者”组织已经不仅限于 4chan 社区。黑客们在专用的互联网中继聊天Internet Relay Chat channelsIRC 聊天室)频道内进行交流,协商策略。通过 DDoS 攻击手段,他们使山达基的主网站间歇性崩溃了好几天。匿名者们制造了“谷歌炸弹”,由此导致 “dangerous cult” 的搜索结果中的第一条结果就是山达基主网站。其余的匿名者向山达基的欧洲总部寄送了数以百计的披萨,并用大量全黑的传真单耗干了洛杉矶教会总部的传真机墨盒。山达基教会,据报道拥有超过十亿美元资产的组织,当然能经得起墨盒耗尽的考验。但山达基教会的高层可不这么认为,他们还收到了严厉的恐吓,由此他们不得不向 FBI 申请逮捕“匿名者”组织的成员。</p>
<p>“匿名者”组织已经不仅限于 4chan 社区。黑客们在专用的互联网中继聊天Internet Relay Chat channelsIRC 聊天室)频道内进行交流,协商策略。通过 DDoS 攻击手段,他们使山达基的主网站间歇性崩溃了好几天。匿名者们制造了“谷歌炸弹”,由此导致 “dangerous cult” 的搜索结果中的第一条结果就是山达基主网站。其余的匿名者向山达基的欧洲总部寄送了数以百计的披萨,并用大量全黑的传真单耗干了洛杉矶教会总部的传真机墨盒。山达基教会,据报道是一个拥有超过十亿美元资产的组织,当然能经得起墨盒耗尽的考验。但山达基教会的高层可不这么认为,他们还收到了死亡恐吓,由此他们不得不向 FBI 申请调查“匿名者”组织的成员。</p>
<p>2008 年 3 月 15 日，在从伦敦到悉尼的一百多个城市里，数以千计的匿名者们游行示威，抗议山达基教会。为了切合“匿名”这个主题，组织者下令所有的抗议者都应该佩戴相同的面具。深思熟虑过蝙蝠侠后，他们选定了 2005 年上映的反乌托邦电影《 V 字仇杀队》中 Guy Fawkes 的面具。“在每个大城市里都能以很便宜的价格大量购买，”广为人知的匿名者、游行组织者之一 Gregg Housh 告诉我说道。漫画式的面具上是一个脸颊红润的男人，八字胡，有着灿烂的笑容。</p>
<p>匿名者们并未“瓦解”山达基教会。并且汤姆克鲁斯的那段视频任然保留在网络上。匿名者们证明了自己的顽强。组织选择了一个相当浮夸的口号:“我们是一体。绝不宽恕。永不遗忘。相信我们。”We are Legion. We do not forgive. We do not forget. Expect us.</p>
<p>匿名者们并未“瓦解”山达基教会。并且汤姆克鲁斯的那段视频仍然保留在网络上。匿名者们证明了自己的顽强。组织选择了一个相当浮夸的口号：“我们是军团。绝不宽恕。永不遗忘。等待我们。”（We are Legion. We do not forgive. We do not forget. Expect us.）</p>
<h2>3</h2>
<p>2010 年Doyon 搬到了加利福尼亚州的圣克鲁斯,并加入了当地的“和平阵营”组织。利用从木材堆置场偷来的木头,他在山上盖起了一间简陋的小屋,“借用”附近住宅的 WiFi使用太阳能电池板发电并通过贩卖种植的大麻换取现金。</p>
<p>与此同时“和平阵营”维权者们每天晚上开始在公共场所休息以此抗议圣克鲁斯政府此前颁布的“流浪者管理法案”他们认为这项法案严重侵犯了流浪者的生存权。Doyon 出席了“和平阵营”的会议,并在网上发起了抗议活动。他留着蓬乱的红色山羊胡,戴一顶米黄色软呢帽,像军人那样不知疲倦。因此维权者们送给了他“罪恶制裁克里斯”的称呼。</p>
<p>与此同时，“和平阵营”维权者们每天晚上开始在公共场所休息，以此抗议圣克鲁斯政府此前颁布的“流浪者管理法案”，他们认为这项法案严重侵犯了流浪者的生存权。Doyon 出席了“和平阵营”的会议，并在网上发起了抗议活动。他留着蓬乱的红色山羊胡，戴一顶米黄色软呢帽，穿着类似军服的服装。因此维权者们送给了他“罪恶制裁克里斯”的称呼。</p>
<p>“和平阵营”的成员之一 Kelley Landaker 曾几次和 Doyong 讨论入侵事宜。Doyon 有时会吹嘘自己的技术是多么的厉害,但作为一名资深程序员的 Landaker 却不为所动。“他说得很棒,但却不是行动派”Landaker 告诉我。不过在那种场合下,的确更需要一位富有激情的领导者,而不是埋头苦干的技术员。“他非常热情并且坦率,”另一位成员 Robert Norse 如是对我说。“他创造出了大量的能够吸引媒体眼球的话题。我从事这行已经二十年了,在这一点上他比我见过的任何人都要厉害。”</p>
<p>“和平阵营”的成员之一 Kelley Landaker 曾几次和 Doyon 讨论入侵事宜。Doyon 有时会吹嘘自己的技术是多么的厉害，但作为一名资深程序员的 Landaker 却不为所动。“他说得很棒，但却不是行动派，”Landaker 告诉我。不过在那种场合下，的确更需要一位富有激情的领导者，而不是埋头苦干的技术员。“他非常热情并且坦率，”另一位成员 Robert Norse 如是对我说。“他创造出了大量的能够吸引媒体眼球的话题。我从事这行已经二十年了，在这一点上他比我见过的任何人都要厉害。”</p>
<p>Doyon 在 PLF 的上司Commander Adama 仍然住在剑桥,并且通过电子邮件和 Doyon 保持着联络,他下令让 Doyon 潜入“匿名者”组织。以此获知其运作方式,并伺机为 PLF 招募新成员。因为癫痫基金会网站入侵事件的那段不愉快回忆Doyon 拒绝了 Adama。Adama 给 Doyon 解释说在“匿名者”组织里不怀好意的黑客只占极少数与此相反这个组织经常会有一些的轰动世界举动。Doyon 对这点表示怀疑。“4chan 怎么可能会轰动世界?”他质问道。但出于对 PLF 的忠诚,他还是答应了 Adama 的请求。</p>
<p>Doyon 在 PLF 的上司，Commander Adama 仍然住在剑桥，并且通过电子邮件和 Doyon 保持着联络，他下令让 Doyon 监视“匿名者”组织，以此获知其运作方式，并伺机为 PLF 招募新成员。因为癫痫基金会网站入侵事件的那段不愉快回忆，Doyon 拒绝了 Adama。Adama 给 Doyon 解释说，在“匿名者”组织里，不怀好意的黑客只占极少数；与此相反，这个组织经常会有一些轰动世界的举动。Doyon 对这点表示怀疑。“4chan 怎么可能会有轰动世界的大举动？”他质问道。但出于对 PLF 的忠诚，他还是答应了 Adama 的请求。</p>
<p>Doyon 经常带着一台宏基笔记本电脑出入于圣克鲁斯的一家名为 Coffee Roasting Company 的咖啡厅。“匿名者”组织的 IRC 聊天室主频道无需密码就能进入。Doyon 使用 PLF 的昵称进行登录并加入了聊天室。一段时间后,他发现了组织内大量的专用匿名者行动聊天频道,这些频道的规模更小,相互重复。要想参与行动,你必须知道行动的专用聊天频道名称,并且聊天频道随时会因为陌生的闯入者而进行变更。这套交流系统并不具备较高的安全系数,但它的确很凑效。“这些专用行动聊天频道确保了行动机密的高度集中”麦吉尔大学的人类学家 Gabriella Coleman 告诉我。</p>
<p>Doyon 经常带着一台宏基笔记本电脑出入于圣克鲁斯的一家名为 Coffee Roasting Company 的咖啡厅。“匿名者”组织的 IRC 聊天室主频道无需密码就能进入。Doyon 使用 PLF 的昵称进行登录并加入了聊天室。一段时间后，他发现了组织内大量的专用匿名者行动聊天频道，这些频道规模更小、更有针对性，组内匿名者之间的对话相互交叠。要想参与行动，你必须知道行动的专用聊天频道名称，并且聊天频道随时会因为陌生的闯入者而进行变更。这套交流系统并不具备较高的安全系数，但它的确很奏效。“这些专用行动聊天频道确保了行动机密的高度集中，”麦吉尔大学的人类学家 Gabriella Coleman 告诉我。</p>
<p>有些匿名者提议了一项行动,名为“反击行动”。如同新闻记者 Parmy Olson 于 2012 年在书中写道的,“我们是匿名者,”这项行动成为了又一次支援文件共享网站,如 Napster 的后继者海盗湾Pirate Bay的行动的前奏但随后其目标却扩展到了政治领域。2010 年末在美国国务院的要求下包括万事达、Visa、PayPal 在内的几家公司终止了对维基解密,一家公布了成百上千份外交文件的民间组织,的捐助。在一段网络视频中“匿名者”组织扬言要进行报复发誓会对那些阻碍维基解密发展的公司进行惩罚。Doyon 被这种抗议企业的精神所吸引,决定参加这次行动。</p>
<p>有些匿名者提议了一项行动,名为“反击行动”。如同新闻记者 Parmy Olson 于 2012 年在书中写道的,“我们是匿名者,” 这项行动是以又一次支持文件共享的网站而创立,如同 Napster 的后继者海盗湾Pirate Bay但随后其目标却扩展到了政治领域。2010 年末在美国国务院的要求下包括万事达、Visa、PayPal 在内的几家公司终止了对维基解密的捐助,维基解密是一家公布了成百上千份外交文件的自发性组织。在一段在线视频中“匿名者”组织扬言要进行报复发誓会对那些阻碍维基解密发展的公司进行惩罚。Doyon 被这种抗议企业的精神所吸引,决定参加这次行动。</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18473-600.jpg" /></center>
<center><small>潘多拉的魔盒</small></center>
<p>在十二月初的“反击行动”中,“匿名者”组织指导那些新成员,或者说新兵,关于“如何他【哔~】加入组织”,教程中提到“首先配置你【哔~】的网络,这他【哔~】的很重要。”同时他们被要求下载“低轨道离子炮”一款易于使用的开源软件。Doyon 下载了软件并在聊天室内等待着下一步指示。当开始的指令发出后数千名匿名者将同时发动进攻。Doyon 收到了含有目标网址的指令——目标是,www.visa.com——同时在软件的右上角有个按钮上面写着“IMMA CHARGIN MAH LAZER.”“反击行动”同时也发动了大量的复杂精密的入侵进攻。几天后“反击行动”攻陷了万事达、Visa、PayPal 公司的主页。在法院的控告单上PayPal 称这次攻击给公司造成了 550 万美元的损失。</p>
<p>在十二月初的“反击行动”中,“匿名者”组织指导那些新成员,或者说新兵,去看标题为“如何加入那个【哔~】的Hive”参与者被要求“首先配置他们【哔~】的网络,这【哔~】的很重要。”同时他们被要求下载“低轨道离子炮”一款易于使用的开源软件。Doyon 下载了软件并在聊天室内等待着下一步指示。当开始的指令发出后数千名匿名者将同时发动进攻。Doyon 进入了目标网址——www.visa.com——同时在软件的右上角有个按钮上面写着“IMMA CHARGIN MAH LAZER.”“反击行动”同时也发动了大量的复杂精密的入侵进攻。几天后“反击行动”攻陷了万事达、Visa、PayPal 公司的主页。在法院的控告单上PayPal 称这次攻击给公司造成了 550 万美元的损失。</p>
<p>但对 Doyon 来说,这是切实的激进主义体现。在剑桥反对种族隔离的行动中,他不能立即看到结果;而现在,只需指尖轻轻一点,就可以在攻陷大公司网站的行动中做出自己的贡献。隔天,赫芬顿邮报上出现了“万事达沦陷”的醒目标题。一位得意洋洋的匿名者发推特道:“有些事情维基解密是无能为力的。但这些事情却可以由‘反击行动’来完成。”</p>
<p>但对 Doyon 来说，这是切实的激进主义体现。在剑桥反对种族隔离的行动中，他无法立刻见到成效；而现在，只需指尖轻轻一点，就可以在攻陷大公司网站的行动中做出自己的贡献。隔天，赫芬顿邮报上出现了“万事达沦陷”的醒目标题。一位得意洋洋的匿名者发推特道：“有些事情维基解密是无能为力的。但这些事情却可以由‘反击行动’来完成。”</p>
<h2>4</h2>
<p>2010 年的秋天,“和平阵营”的抗议活动终止,政府只做出了轻微的让步“流浪者管理法案”仍然有效。Doyon 希望通过借助“匿名者”组织的方略扭转局势。他回忆当时自己的想法,“也许我可以发动‘匿名者’组织来教训这种看似不堪一击的市政府网站,这些人绝对会【哔~】地赞同我的提议。最终我们将使得市政府永久性的废除‘流浪者管理法案’。”</p>
<p>2010 年的秋天,“和平阵营”的抗议活动终止,政府只做出了略微让步“流浪者管理法案”仍然有效。Doyon 希望通过借助“匿名者”组织的方略扭转局势。他回忆当时自己的想法,“也许我可以发动‘匿名者’组织来教训这种看似不堪一击的市政府网站,它们绝对会【哔~】地沦陷。最终我们使得市政府永久性废除‘流浪者管理法案’。”</p>
<p>Joshua Covelli 是一位 25 岁的匿名者他的昵称是“Absolem”他非常钦佩 Doyon 的果敢。“现在我们的组织完全是他【哔~】各种混乱的一盘散沙”Covelli 告诉我。在“Commander X”加入之后“组织似乎开始变得有模有样了。”Covelli 的工作是俄亥俄州费尔伯恩的一所大学接待员,他从不了解任何有关圣克鲁斯的政治。但是当 Doyon 提及帮助“和平阵营”抗击活动的计划后Covelli 立即回复了一封表示赞同的电子邮件:“我期待这样的行动很久了。”</p>
<p>Joshua Covelli 是一位 25 岁的匿名者他的昵称是“Absolem”他非常钦佩 Doyon 的果敢。“过去我们的组织完全是各种混乱的一盘散沙”Covelli 告诉我。在“Commander X”加入之后“组织似乎开始变得有模有样了。”Covelli 的工作是俄亥俄州费尔伯恩的一所大学接待员,他从不了解任何有关圣克鲁斯的政治。但是当 Doyon 提及帮助“和平阵营”抗击活动的计划后Covelli 立即回复了一封表示赞同的电子邮件:“我期待参加这样的行动已经很久了。”</p>
<p>Doyon 使用 PLF 的昵称邀请 Covelli 在 IRC 聊天室进行了一次秘密谈话:</p>
<blockquote>Absolem抱歉有个比较冒犯的问题...请问 PLF 也是组织的一员吗</blockquote>
<blockquote>Absolem抱歉有个比较冒犯的问题...请问 PLF 是组织的一部分还是分开的</blockquote>
<blockquote>Absolem我会这么问是因为我在频道里看过你的聊天记录,你像是一名训练有素的黑客,不太像是来自组织里的成员</blockquote>
<blockquote>Absolem我会这么问是因为看你们聊天,觉得你们都是非常有组织的</blockquote>
<blockquote>PLF不不不你的问题一点也不冒犯。很高兴遇到你。PLF 是一个来自波士顿的黑客组织,已经成立 22 年了。我在 1981 年就开始了我的黑客生涯,但那时我并没有使用计算机,而是使用的 PBXPrivate Branch Exchange电话交换机</blockquote>
<blockquote>PLF我们组织内所有成员的年龄都超过了 40 岁。我们当中有退伍士兵和学者。并且我们的成员“Commander Adama”正在躲避一大帮警察还有间谍的追捕。</blockquote>
<blockquote>Absolem听起来很棒我对这次行动很感兴趣不知道我是否可以提供一些帮助,我们的组织实在是太混乱了。我的电脑技术还不错,但我在入侵技术上还完全是一个新手。我有一些小工具,但不知道怎么去使用它们。</blockquote>
<blockquote>Absolem听起来很棒我对这次行动很感兴趣过“匿名者”组织看起来太混乱无序,不知道我是否可以提供一些帮助。我的电脑技术还不错,但我在入侵技术上还完全是一个新手。我有一些小工具,但不知道怎么去使用它们。</blockquote>
<p>庄重的入会仪式后Doyon 正式接纳 Covelli 加入 PLF</p>
<blockquote>PLF把所有可能对你不利的【哔~】敏感文件加密。</blockquote>
<blockquote>PLF把所有可能使你受牵连的敏感文件加密。</blockquote>
<blockquote>PLF还有想要联系任何一位 PLF 成员的话,给我发消息就行。从现在起,请叫我... Commander X。</blockquote>
<p>2012 年,美联社称“匿名者”组织为“一伙训练有素的黑客”Quinn Norton 在《连线》杂志上发文称“‘匿名者’组织可以入侵任何坚不可摧的网站”,并在文末赞扬他们为“一群卓越的民间黑客”。事实上,有些匿名者的确是很有天赋的程序员,但绝大部分成员根本不懂任何技术。人类学家 Coleman 告诉我只有大约五分之一的匿名者是真正的黑客——其他匿名者则是“极客与抗议者”。</p>
<p>2012 年,美联社称“匿名者”组织为“一帮专家级的黑客”Quinn Norton 在《连线》杂志上发文称“‘匿名者’组织可以入侵任何坚不可摧的网站”,并在文末赞扬他们为“一群卓越的民间黑客”。事实上,有些匿名者的确是很有天赋的程序员,但绝大部分成员根本不懂任何技术。人类学家 Coleman 告诉我只有大约五分之一的匿名者是真正的黑客——其他匿名者则是“极客与抗议者”。</p>
<p>2010 年 12 月 16 日Doyon 以 Commander X 的身份向几名记者发送了电子邮件。“明天当地时间 1200 的时候,‘人民解放阵线’组织与‘匿名者’组织将大举进攻圣克鲁斯政府网站”他在邮件中写道“12:30 之后我们将恢复网站的正常运行。”</p>
<p>2010 年 12 月 16 日Doyon 以 Commander X 的身份向几名记者发送了电子邮件。“明天当地时间 1200 的时候,‘人民解放阵线’组织与‘匿名者’组织将从互联网中删除圣克鲁斯政府网站”他在邮件中写道“12:30 之后我们将恢复网站的正常运行。”</p>
<p>圣克鲁斯数据中心的工作人员收到了警告，匆忙地准备应对攻击。他们在服务器上运行起安全扫描软件，并向当地的互联网供应商 AT&T 求助，后者建议他们向 FBI 报警。</p>
@ -132,7 +132,7 @@
<center><small>“Zach 很聪明... 并且... 是一个天才... 但.. 你们... 不在一个班。”</small></center>
<p>Doyon 引用了一句电影台词。“拼命地跑,”他说。“我会躲起来,尽可能保持我的行动自由,用尽全力和这帮杂种们作斗争。”Frey 给了他两张 20 美元的钞票并祝他好运。</p>
<p>Doyon 引用了一句电影台词。“拼命地跑,”他说。“我会躲起来,尽可能保持我的行动自由,用尽全力和这帮混蛋们作斗争。”Frey 给了他两张 20 美元的钞票并祝他好运。</p>
<h2>5</h2>
@ -142,35 +142,35 @@
<p>“突尼斯,” Brown 答道。</p>
<p>“我知道,那是中东地区的一个国家,” Doyon 继续问,“然后呢?”</p>
<p>“我知道,那是中东地区的一个国家,” Doyon 继续问,“具体任务是什么呢?”</p>
<p>“我们准备打倒那里的独裁者,” Brown 再次答道。</p>
<p>“啊?!那里有一位独裁者吗?” Doyon 有点惊讶。</p>
<p>几天后“突尼斯行动”正式展开。Doyon 作为参与者向突尼斯政府域名下的电子邮箱发送了大量的垃圾邮件,以此阻塞其服务器。“我会提前写好关于那次行动邮件,接着一次又一次地把它们发送出去,” Doyon 说,“有时候实在没有时间,我就只简短的写上一句问候对方母亲的的话,然后发送出去。”短短一天时间里,匿名者们就攻陷了包括突尼斯证券交易所、工业部、总统办公室、总办公室在内的多个网站。他们把总统办公室网站的首页替换成了一艘海盗船的图片,并配以文字“‘报复’是个贱人,不是吗?”</p>
<p>几天后“突尼斯行动”正式展开。Doyon 作为参与者向突尼斯政府域名下的电子邮箱发送了大量的垃圾邮件,以此阻塞其服务器。“我会提前写好关于那次行动邮件,接着一次又一次地把它们发送出去,” Doyon 说,“有时候实在没有时间,我就只简短的写上一句‘问候对方母亲’的话,然后发送出去。”短短一天时间里,匿名者们就攻陷了包括突尼斯证券交易所、工业部、总统办公室、总办公室在内的多个网站。他们把总统办公室网站的首页替换成了一艘海盗船的图片,并配以文字“恶有恶报,不是吗?”</p>
<p>Doyon 不时会谈起他的网上“战斗”经历似乎他刚从弹坑里爬出来一样。“伙计自从干了这行我就变黑了”他向我诉苦道。“你看我的脸全是抽烟的时候熏的——而且可能已经粘在我的脸上了。我仔细地照过镜子毫不夸张地说我简直就是一头棕熊。”很多个夜晚Doyon 都是在 Golden Gate 公园里露营过夜的。“我就那样干了四天,我看了看镜子里的‘我’,感觉还可以——但其实我觉得‘我’也许应该去吃点东西、洗个澡了。”</p>
<p>“匿名者”组织接着又在 YouTube 上声明了将要进行的一系列行动“利比亚行动”、“巴林行动”、“摩洛哥行动”。作为解放广场事件的抗议者Doyon 参与了“埃及行动”。在 Facebook 针对这次行动的宣传专页中,有一个为当地示威者准备的“行动套装”链接。“行动套装”通过文件共享网站 Megaupload 进行分发,其中含有一份加密软件以及应对瓦斯袭击的保护措施。并且不久后,埃及政府关闭了埃及的所有互联网及子网络的时候,继续向当地抗议者们提供连接网络的方法。</p>
<p>“匿名者”组织接着又在 YouTube 上声明了将要进行的一系列行动：“利比亚行动”、“巴林行动”、“摩洛哥行动”。作为解放广场事件的抗议者，Doyon 参与了“埃及行动”。在 Facebook 针对这次行动的宣传专页中，有一个为当地示威者准备的“行动套装”链接。“行动套装”通过文件共享网站 Megaupload 进行分发，其中含有一份加密软件以及应对瓦斯袭击的保护措施。在埃及政府关闭了埃及的所有互联网及子网络之后不久，“匿名者”组织继续向当地抗议者们提供连接网络的方法。</p>
<p>2011 年夏季Doyon 接替 Adama 成为 PLF 的最高指挥官。Doyon 招募了六个新成员,并力图发展 PLF 成为“匿名者”组织的中坚力量。Covelli 成为了他的其中一位术顾问。另一名黑客 Crypt0nymous 负责在 YouTube 上发布视频其余的人负责研究以及组装电子设备。与松散的“匿名者”组织不同PLF 内部有一套极其严格的管理体系。“Commander X 事必躬亲”Covelli 说。“这是他的行事风格,也许不能称之为一种风格。”一位创立了 AnonInsiders 博客的黑客通过加密聊天告诉我,他认为 Doyon 总是一意孤行——这在“匿名者”组织中是很罕见的现象。“当我们策划发起一项行动时,他并不在乎其他人是否同意,”这位黑客补充道,“他会一个人列出行动方案,确定攻击目标,登录 IRC 聊天室,接着告诉所有人在哪里‘碰头’,然后发起 DDoS 攻击。”</p>
<p>2011 年夏季，Doyon 接替 Adama 成为 PLF 的最高指挥官。Doyon 招募了六个新成员，并力图发展 PLF 成为“匿名者”组织的中坚力量。Covelli 成为了他的其中一位技术顾问。另一名黑客 Crypt0nymous 负责在 YouTube 上发布视频，其余的人负责研究以及组装电子设备。与松散的“匿名者”组织不同，PLF 内部有一套极其严格的管理体系。“Commander X 事必躬亲，”Covelli 说。“这是他的行事风格，要么不做，要么做好。”一位创立了 AnonInsiders 博客的黑客通过加密聊天告诉我，他认为 Doyon 总是一意孤行——这在“匿名者”组织中是很罕见的现象。“当我们策划发起一项行动时，他并不在乎其他人是否同意，”这位黑客补充道，“他会一个人列出行动方案，确定攻击目标，登录 IRC 聊天室，接着告诉所有人在哪里‘碰头’，然后发起 DDoS 攻击。”</p>
<p>一些匿名者把 PLF 视为可有可无的部分,认为 Doyon 的所作所为完全是个天大的笑柄。“他是因为吹牛出名的,”另一名昵称为 Tflow 的匿名者 Mustafa Al-Bassam 告诉我。不过,即使是那些极度反感 Doyon 的狂妄自大的人,也不得不承认他在“匿名者”组织发展过程中的重要性。“他所倡导的强硬路线有时很凑效,有时则完全不起作用,” Gregg Housh 说,并且补充道自己和其他优秀的匿名者都曾遇到过相同的问题。</p>
<p>一些匿名者把 PLF 视为“面子项目”，认为 Doyon 的所作所为完全是个笑柄。“他是因为吹牛出名的，”另一名昵称为 Tflow 的匿名者 Mustafa Al-Bassam 告诉我。不过，即使是那些极度反感 Doyon 的狂妄自大的人，也不得不承认他在“匿名者”组织发展过程中的重要性。“他所倡导的强硬路线有时很奏效，有时则是碍事，” Gregg Housh 说，并且补充道自己和其他优秀的匿名者都曾遇到过相同的问题。</p>
<p>“匿名者”组织对外坚持声称自己是不分层次的平等组织。在由 Brian Knappenberger 制作的一部纪录片,《我们是一个团体》中一名成员使用“一群鸟”来比喻组织它们轮流领飞带动整个组织不断前行。Gabriella Coleman 告诉我,这个比喻不太切合实际,“匿名者”组织内实际上早就出现了一个非正式的领导阶层。“领导者非常重要,”她说。“有四五个人可以看做是我们的领头羊。”她把 Doyon 也算在了其中。但是匿名者们仍然倾向于反抗这种具有体系的组织结构。在一本即将出版的关于“匿名者”组织的书《黑客、骗子、告密者、间谍》中Coleman 这么写道,在匿名者中,“成员个体以及那些特立独行的人依然在一些重大事件上保持着服从的态度,优先考虑集体——特别是那些能引发强烈争端的事件。”</p>
<p>“匿名者”组织对外坚持声称自己是不分层次的平等组织。在由 Brian Knappenberger 制作的一部纪录片,《我们是军团》中一名成员使用“一群鸟”来比喻组织它们轮流领飞带动整个组织不断前行。Gabriella Coleman 告诉我,这个比喻不太切合实际,“匿名者”组织内实际上早就出现了一个非正式的领导阶层。“领导者非常重要,”她说。“有四五个人可以看做是我们的领头羊。”她把 Doyon 也算在了其中。但是匿名者们仍然倾向于反抗这种体制结构。在一本即将出版的关于“匿名者”组织的书《黑客、骗子、告密者、间谍》中Coleman 这么写道,在匿名者中,“成员个体以及那些特立独行的人依然在一些重大事件上保持着服从的态度,优先考虑集体——特别是那些能引发强烈争端的事件。”</p>
<p>匿名者们谑称那些特立独行的成员为“自尊心超强的疯子”和“想让自己出名的疯子”。不过许多匿名者已经不会再随便给他人取那种具有冒犯性的称号了。“但还是有令人惊讶的极少数成员违反规则”打破传统上的看法Coleman 说。“这么做的人,像 Commander X 这样的,都会在组织里受到排斥。”去年,在一家网络论坛上,有人写道,“当他开始把自己比作‘蝙蝠侠’的时候我就不想理他了。”</p>
<p>Peter Fein，是一位以 n0pants 为昵称而出名的网络激进分子，也是反对 Doyon 浮夸行为的众多匿名者之一。Fein 浏览了 PLF 的网站，其封面上有一个徽章，还有关于组织的宣言——“为了解放众多人类的灵魂而不断战斗”。Fein 沮丧地发现 Doyon 早就使用真名为这家网站注册过了，使他这种，以及其他想要找事的匿名者们无机可乘。“如果有人要对我的网站进行 DDoS 攻击，那完全可以，” Fein 回想起通过私密聊天告诉 Doyon 时的情景，“但如果你要这么做了的话，我会揍扁你的屁股。”</p>
<p>2011 年 2 月 5 日,《金融时报》报道了在一家名为 HBGary Federal 的网络安全公司,首席执行官 HBGary Federal 已经得到了“匿名者”组织骨干成员名单的消息。Barr 的调查结果表明,三位最高领导人其中之一就是‘ Commander X这位潜伏在加利福尼亚州的黑客有能力“策划一些大型网络攻击事件”。Barr 联系了 FBI 并提交了自己的调查结果。</p>
<p>2011 年 2 月 5 日，《金融时报》报道了一则消息：一家名为 HBGary Federal 的网络安全公司的首席执行官 Aaron Barr 声称已经得到了“匿名者”组织骨干成员的名单。Barr 的调查结果表明，三位最高领导人其中之一就是‘Commander X’，是一位潜伏在加利福尼亚州的黑客，而且有能力“策划一些大型网络攻击事件”。Barr 联系了 FBI 并提交了自己的调查结果。</p>
<p>和 Fein 一样Barr 也发现了 PLF 网站的注册法人名为 Christopher Doyon地址是 Haight 大街。基于 Facebook 和 IRC 聊天室的调查Barr 断定‘ Commander X的真实身份是一名家庭住址在 Haight 大街附近的网络激进分子 Benjamin Spock de Vries。Barr 通过 Facebook 和 de Vries 取得了联系。“请告诉组织里的普通阶层,我并不是来抓你们的,” Barr 留言道,“只是想让‘领导阶层’知晓我的意图。”</p>
<p>和 Fein 一样Barr 也发现了 PLF 网站的注册法人名为 Christopher Doyon地址是 Haight 大街。基于 Facebook 和 IRC 聊天室的调查Barr 断定‘ Commander X的真实身份是一名家庭住址在 Haight 大街附近的网络激进分子 Benjamin Spock de Vries。Barr 通过 Facebook 和 de Vries 取得了联系。“请告诉我组织里的其他人,我并不是来抓你们的,” Barr 留言道,“只是想让‘领导阶层’知晓我的意图。”</p>
<p>“‘领导阶层’? 2333笑死我了” de Vries 回复道。</p>
<p>《金融时报》发布报道的第二天“匿名者”组织就进行了反击。HBGary Federal 的网站被进行了恶意篡改。Barr 的私人 Twitter 账户被盗取,他的上千封电子邮件被泄漏到了网上,同时匿名者们还公布了他的住址以及其他私人信息——这是一系列被称作“doxing”的惩罚。不到一个月后Barr 就从 HBGary Federal 辞职了。</p>
<p>《金融时报》发布报道的第二天“匿名者”组织就进行了反击。HBGary Federal 的网站被进行了恶意篡改。Barr 的私人 Twitter 账户被盗取,他的上千封电子邮件被泄漏到了网上,同时匿名者们还公布了他的住址以及其他私人信息——这就是“冲动的惩罚”。不到一个月后Barr 就从 HBGary Federal 辞职了。</p>
<h2>6</h2>
@ -180,17 +180,17 @@
<center><small>“这是我在 TED 夏令营里学到的东西。”</small></center>
<p>他时刻关注着“匿名者”组织的内部消息。那年春季,在 Barr 调查报告中提到的六位匿名者精锐成员组建了“LulzSec 安全”组织Lulz Security简称 LulzSec。这个组织正如其名这些成员认为“匿名者”组织已经变得太过严肃他们的目标是重新引发起那些“能挑起强烈争端”的事件。当“匿名者”组织还在继续支持“阿拉伯之春”的抗议者LulzSec 入侵了公共电视网Public Broadcasting ServicePBS网站并发布了一则虚假声明称已故说唱歌手 Tupac Shakur 仍然生活在新西兰。</p>
<p>他时刻关注着“匿名者”组织的内部消息。那年春季,在 Barr 调查报告中提到的六位匿名者精锐成员组建了“LulzSec 安全”组织Lulz Security简称 LulzSec。这个组织正如其名这些成员认为“匿名者”组织已经变得太过严肃他们的目标是重新引发起那些“能挑起强烈争端”的事件。当“匿名者”组织还在继续支持“阿拉伯之春”的抗议者时LulzSec 入侵了公共电视网Public Broadcasting ServicePBS网站并发布了一则虚假声明称已故说唱歌手 Tupac Shakur 仍然生活在新西兰。</p>
<p>匿名者之间会通过 Pastebin.com 网站来共享文。在这个网站上LulzSec 发表了一则声明,称“很不幸,我们注意到北约和我们的好总统巴拉克,奥萨马·本·美洲驼(拉登同学)的好朋友,来自 24 世纪的奥巴马,最近明显提高了对我们这些黑客的关注程度。他们把黑客入侵行为视作一种战争的表现。”目标越高远挑起的纷争就越大。6 月 15 日LulzSec 表示对 CIA 网站受到的袭击行为负责他们发表了一条推特上面写道“目标击毙Tango down亦即target down—— cia.gov ——这是起挑衅行为。”</p>
<p>匿名者之间会通过 Pastebin.com 网站来共享文本。在这个网站上，LulzSec 发表了一则声明，称“很不幸，我们注意到北约和我们的好朋友巴拉克·奥萨马——来自24世纪的奥巴马，已经提升了关于黑客的筹码，他们把黑客入侵行为视作一种战争的表现。”目标越高远，挑起的纷争就越大。6 月 15 日，LulzSec 表示对 CIA 网站受到的袭击行为负责，他们发表了一条推特，上面写道：“目标击毙（Tango down，亦即target down）—— cia.gov ——这是起挑衅行为。”</p>
<p>2011 年 6 月 20 日LulzSec 的一名十九岁的成员 Ryan Cleary 因为对 CIA 的网站进行了 DDoS 攻击而被捕。7 月FBI 探员逮捕了七个月前对 PayPal 进行 DDoS 攻击的其他十四名黑客。这十四名黑客,每人都面临着 15 年的牢狱之灾以及 500 万美元的罚款。他们因为图谋不轨以及故意破坏互联网,而被控违反了计算机欺诈与滥用处理条例。(该法案允许检察官进行酌情处置,并在去年网络激进分子 Aaron Swartz 因为被判处 35 年牢狱之灾而自杀身亡之后,受到了广泛的质疑和批评。)</p>
<p>2011 年 6 月 20 日LulzSec 的一名十九岁的成员 Ryan Cleary 因为对 CIA 的网站进行了 DDoS 攻击而被捕。7 月FBI 探员逮捕了七个月前对 PayPal 进行 DDoS 攻击的其他十四名黑客。这十四名黑客,每人都面临着 15 年的牢狱之灾以及 50 万美元的罚款。他们因为图谋不轨以及故意破坏互联网而被控违反了计算机欺诈与滥用法案。Computer Fraud and Abuse Act该法案允许检察官拥有宽泛的起诉裁量权,并在去年网络激进分子 Aaron Swartz 因为被判处 35 年牢狱之灾而自杀身亡之后,受到了广泛的质疑和批评。)</p>
<p>LulzSec 的成员之一 Jake (Topiary) Davis 因为付不起法律诉讼费给组织的成员们写了一封请求帮助的信件。Doyon 进入了 IRC 聊天室把 Davis 需要帮助的消息进行了扩散:</p>
<blockquote>CommanderX那么请大家阅读信件并给予 Topiary 帮助...</blockquote>
<blockquote>Toad你真是和【哔~】一样消息灵通。</blockquote>
<blockquote>Toad你真是为了抓人眼球什么都做啊!</blockquote>
<blockquote>Toad这么说你得到 Topiary 的消息了?</blockquote>
@ -198,15 +198,15 @@
<blockquote>Katanon唉...</blockquote>
<p>Doyon 越来越大胆。在佛罗里达州当局逮捕了支持流浪者的激进分子后,就 DDoS 了奥兰多商务部商会网站。他使用个人笔记本电脑通过公用无线网络实施了攻击,并且没有花费太多精力来隐藏自己的网络行踪。“这种做法很勇敢,但也很愚蠢,”一位自称 Kalli 的 PLF 的资深成员告诉我。“他看起来并不在乎是否会被抓。他完全是一名自杀式黑客。”</p>
<p>Doyon 越来越大胆。在佛罗里达州当局逮捕了支持流浪者的激进分子后，他就攻击了奥兰多商会的网站。他使用个人笔记本电脑通过公用无线网络实施了攻击，并且没有花费太多精力来隐藏自己的网络行踪。“这种做法很勇敢，但也很愚蠢，”一位自称 Kalli 的 PLF 的资深成员告诉我。“他看起来并不在乎是否会被抓。他完全是一名自杀式黑客。”</p>
<p>两个月后Doyon 参与了针对旧金山湾区快速交通系统Bay Area Rapid Transit的 DDoS 攻击,以此抗议一名 BART 的警官杀害一名叫做 Charles Hill 的流浪者的事件。随后 Doyon 现身“CBS 晚间新闻”为这次行动辩护,当然,他处理了自己的声音,把自己的脸用香蕉进行替代。他把 DDoS 攻击比作为公民的抗议行为。“与占用 Woolworth 午餐柜台的座位相比这真的没什么不同真的”他说道。CBS 的主播 Bob Schieffer 笑称:“就我所见,它并不完全是一项民权运动。”</p>
<p>两个月后Doyon 参与了针对旧金山湾区快速交通系统Bay Area Rapid Transit的 DDoS 攻击,以此抗议一名 BART 的警官杀害一名叫做 Charles Hill 的流浪者的事件。随后 Doyon 现身“CBS 晚间新闻”为这次行动辩护,当然,他处理了自己的声音,用印花大手帕盖住了脸。他把 DDoS 攻击比作为公民的抗议行为。“与占用 Woolworth 午餐柜台的座位相比这真的没什么不同真的”他说道。CBS 的主播 Bob Schieffer 笑称:“就我所见,它并不完全是一项民权运动。”</p>
<p>2011 年 9 月 22 日,在加利福尼亚州的一家名为 Mountain View 的咖啡店里Doyon 被捕,同时面临着“使用互联网非法破坏受保护的计算机”罪名指控。他被拘留了一个星期的时间,接着在签署协议之后获得假释。两天后,他不顾律师的反对,宣布将在圣克鲁斯郡法院召开新闻发布会。他梳起了马尾辫,戴着一副墨镜、一顶黑色海盗帽,同时还在脖子上围了一条五彩手帕。</p>
<p>2011 年 9 月 22 日，在加利福尼亚州山景城（Mountain View）的一家咖啡店里，Doyon 被捕，同时面临着“使用互联网非法破坏受保护的计算机”罪名指控。他被拘留了一个星期的时间，接着在签署协议之后获得假释。两天后，他不顾律师的反对，宣布将在圣克鲁斯郡法院召开新闻发布会。他梳起了马尾辫，戴着一副墨镜、一顶黑色海盗帽，同时还在脖子上围了一条五彩手帕。</p>
<p>Doyon 通过非常夸大的方式露了自己的身份。“我就是 Commander X”他告诉蜂拥的记者。他举起了拳头。“作为匿名者组织的一员作为一名核心成员我感到非常的骄傲。”他在接受一名记者的采访时说“想要成为一名顶尖黑客的话你只需要准备一台电脑以及一副墨镜。任何一台电脑都行。”</p>
<p>Doyon 通过非常夸张的方式公开了自己的身份。“我就是 Commander X！”他告诉蜂拥的记者。他举起了拳头。“作为匿名者组织的一员，作为一名核心成员，我感到非常的骄傲。”他在接受一名记者的采访时说：“想要成为一名顶尖黑客的话，你只需要准备一台电脑以及一副墨镜。任何一台电脑都行。”</p>
<p>Kalli 非常担心 Doyon 会不小心泄露组织机密或者其他匿名者的信息。“这是所有环节中最薄弱的地方,如果这里出问题了,那么组织就完了,”他告诉我。曾在“和平阵营行动”中给予 Doyon 大力帮助的匿名者 Josh Covelli 告诉我,当他在网上看见 Doyon 的新闻发布会视频的时候,他感觉瞬间“下巴掉地了”。“他的所作所为变得越来越不可捉摸,” Covelli 评价道。</p>
<p>Kalli 非常担心 Doyon 会不小心泄露组织机密或者其他匿名者的信息。“这是所有环节中最薄弱的地方,如果这里出问题了,那么组织就完了,”他告诉我。曾在“和平阵营行动”中给予 Doyon 大力帮助的匿名者 Josh Covelli 告诉我,当他在网上看见 Doyon 的新闻发布会视频的时候,他感觉瞬间“下巴掉地了”。“他的所作所为变得越来越不可捉摸,” Covelli 评价道。</p>
<p>三个月后Doyon 的指定律师 Jay Leiderman 出席了圣荷西联邦法庭的辩护。Leiderman 已经好几个星期没有得到 Doyon 的消息了。“我需要得知被告无法出席的具体原因”法官说。Leiderman 无法回答。Doyon 再次缺席了两星期后的另一场听证会。检控方表示:“很明显,看来被告已经逃跑了。”</p>
@ -214,7 +214,7 @@
<p>“Xport 行动”是“匿名者”组织进行的所有同类行动中的第一个行动。这次行动的目标是协助如今已经背负两项罪名的通缉犯 Doyon 潜逃出国。负责调度的人是 Kalli 以及另一位曾在八十年代剑桥的迷幻药派对上和 Doyon 见过面的匿名者老兵。这位老兵是一位已经退休的软件主管,在组织内部威望很高。</p>
<p>Doyon 的终点站是这位软件主管的位于加拿大的偏远乡村。2011 年 12 月,他搭便车前往旧金山,并辗转来到了市区组织大本营。他找到了他的指定联系人,后者带领他到达了奥克兰的一家披萨店。凌晨 2 点Doyon 通过披萨店的无线网络,接收了一条加密聊天消息。</p>
<p>Doyon 的目的地是这位软件主管位于加拿大的偏远乡村。2011 年 12 月,他搭便车前往旧金山,并辗转来到了市区组织大本营。他找到了他的指定联系人,后者带领他到达了奥克兰的一家披萨店。凌晨 2 点Doyon 通过披萨店的无线网络,接收了一条加密聊天消息。</p>
<p>“你现在靠近窗户吗?”那条消息问道。</p>
@ -222,13 +222,13 @@
<p>“往大街对面看。看见一个绿色的邮箱了吗?十五分钟后,你去站到那个邮箱旁边,把你的背包取下来,然后把你的面具放在上面。”</p>
<p>一连几个星期的时间Doyon 穿梭于海湾地区的安全屋之间,按照加密聊天那头的指示不断行动。最后,他搭上了前往西雅图的长途公交车,软件主管的一个朋友在那里接待了他。这个朋友是一名非常富有的退休人员,他花费了通过谷歌地球来帮助 Doyon 规划前往加拿大的路线。他们共同前往了一家野外用品供应商店,这位朋友为 Doyon 购置了价值 1500 美元的商品,包括登山鞋以及一个全新的背包。接着他又开车载着 Doyon 北上,两小时后到达距离国界只有几百英里的偏僻地区。随后 Doyon 见到了 Amber Lyon。</p>
<p>一连几个星期的时间Doyon 穿梭于海湾地区的安全屋之间,按照加密聊天那头的指示不断行动。最后,他搭上了前往西雅图的长途公交车,软件主管的一个朋友在那里接待了他。这个朋友是一名非常富有的退休人员,他花费了几小时的时间通过谷歌地球来帮助 Doyon 规划前往加拿大的路线。他们共同前往了一家野外用品供应商店,这位朋友为 Doyon 购置了价值 1500 美元的商品,包括登山鞋以及一个全新的背包。接着他又开车载着 Doyon 北上,两小时后到达距离国界只有几百英里的偏僻地区。随后 Doyon 见到了 Amber Lyon。</p>
<p>几个月前,广播新闻记者 Lyon 曾在 CNN 的关于“匿名者”组织的节目里采访过 Doyon。Doyon 很欣赏她的报道他们一直保持着联络。Lyon 要求加入 Doyon 的逃亡行程,为一部可能会发行的纪录片拍摄素材。软件主管认为这样太过冒险,但 Doyon 还是接受了她的请求。“我觉得他是想让自己出名,” Lyon 告诉我。四天的时间里,她用影像记录下了 Doyon 徒步北上,在林间露宿的行程。“那一切看起来不太像是仔细规划过的,” Lyon 回忆说。“他实在是无家可归了,所以他才会想要逃到国外去。”</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18506-600.jpg" /></center>
<center><small>“这里是我们存放各种感觉的仓库。如果你发现了某种感觉,把它带到这里然后锁起来。”</small></center>
<center><small>“这里是我们存放各种情感的仓库。如果你产生了某种情感,把它带到这里然后锁起来。”</small></center>
<p>2012 年 2 月 11 日Pastebin 上出现了一条消息。“PLF 很高兴的宣布‘ Commander X也就是 Christopher Mark Doyon已经离开了美国的司法管辖区抵达了加拿大一个比较安全的地方”上面写着“PLF 呼吁美国政府,希望政府能够醒悟过来并停止无谓的骚扰与监视行为——不要仅仅逮捕‘匿名者’组织的成员,对所有的激进组织应该一视同仁。”</p>
@ -236,13 +236,13 @@
Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barrett Brown 的聊天中Doyon 难掩内心的喜悦之情。
<blockquote>BarrettBrown你现在应该足够安全了吧,其他的呢?...</blockquote>
<blockquote>BarrettBrown你现在足够多安全的藏身之处等等吧?</blockquote>
<blockquote>CommanderX是的我现在很安全现在加拿大既不缺钱也不缺藏身的地方。</blockquote>
<blockquote>CommanderXAmber Lyon 想要你的一张照片。</blockquote>
<blockquote>CommanderX他【哔~】的怪人Barrett相信你会喜欢我告诉她应该怎样评价你的</blockquote>
<blockquote>CommanderX你【哔~】的怪人Barrett相信你会喜欢我的回复。我一直爱你永远爱你</blockquote>
<blockquote>CommanderX:-)</blockquote>
@ -258,13 +258,13 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
<blockquote>BarrettBrown当然估计我们不久后也得这样了</blockquote>
<p>在 Doyon 出逃十天后,《华尔街日报》上刊登了关于不久后升职为美国国家安全局及网络指挥部主任的 Keith Alexander 的报道,他在白宫举行的秘密会晤以及其他场合下表达了对“匿名者”组织的高度关注。Alexander 发出警告,两年内,该组织必将会是国家电网改造的大患。参谋长联席会议的主席 General Martin Dempsey 告诉记者,这群人是国家的敌人。“他们有能力把这些使用恶意软件造成破坏的技术扩散到其他的边缘组织去,”随后又补充道,“我们必须防范这种情况发生。”</p>
<p>在 Doyon 出逃十天后，《华尔街日报》上刊登了关于不久后升职为美国国家安全局及网络指挥部主任的 Keith Alexander 的报道，他在白宫以及其他场合举行的秘密会晤中表达了对“匿名者”组织的高度关注。Alexander 发出警告，两年内，该组织可能会具备通过网络攻击造成局部停电的能力。参谋长联席会议的主席 General Martin Dempsey 告诉记者，这群人是国家的敌人。“他们有能力把这些使用恶意软件造成破坏的技术扩散到其他的边缘组织去，”随后又补充道，“我们必须防范这种情况发生。”</p>
<p>3 月 8 日，国会议员们在国会大厦附近的一个敏感信息隔离设施内举行了关于网络安全的会议。包括 Alexander、Dempsey、美国联邦调查局局长 Robert Mueller，以及美国国土安全部部长 Janet Napolitano 在内的多名美国安全方面的高级官员出席了这次会议。会议上，通过计算机向与会者模拟了东部沿海地区电力设施可能会遭受到的网络攻击时的情境。“匿名者”组织目前应该还不具备发动此种规模攻击的能力，但安全方面的官员担心他们会联合其他更加危险的组织来共同发动攻击。“在我们着手于不断增加的网络风险事故时，政府仍在就具体的处理细节进行不断协商讨论，” Napolitano 告诉我。当谈及潜在的网络安全隐患时，她补充道，“我们通常会把‘匿名者’组织的行动当做 A 级威胁来应对。”</p>
<p>“匿名者”也许是当今世界上最强大的无政府主义黑客组织。即使如此,它却从未表现出过任何的会对公共基础设施造成破坏的迹象或意愿。一些网络安全专家称,那些关于“匿名者”组织的谣传太过危言耸听。“在奥兰多发布战前宣言和实际发动 Stuxnet 蠕虫病毒攻击之间是有很大的差距的,” Internet 研究与战略中心的一位职员 James Andrew Lewis 告诉我,这和 2007 年美国与以色列对伊朗原子能网站发动的黑客袭击有关。哈佛大学法学院的教授 Yochai Benkler 告诉我,“我们所看见的只是以主要防御为理由而进行的开销,否则,将很难自圆其说。”</p>
<p>Keith Alexander 最近刚从政府部门退休,他拒绝就此事发表评论,因为他并不能代表国家安全局、联邦调查局、中央情报局以及国土安全部。尽管匿名者们从未真正盯上过政府部门的计算机网络,但他们对于那些激怒他们的人有着强烈的报复心理。前国土安全部国家网络安全部门负责人 Andy Purdy 告诉我他们“害怕被报复,”无论机构还是个人,都不同意政府公然反对“匿名者”组织。“每个人都非常脆弱,”他说。</p>
<p>Keith Alexander 最近刚从政府部门退休,他拒绝就此事发表评论,因为他并不能代表国家安全局、联邦调查局、中央情报局以及国土安全部。尽管匿名者们从未真正盯上过政府部门的计算机网络,但他们对于那些激怒他们的人有着强烈的报复心理。前国土安全部国家网络安全部门负责人 Andy Purdy 告诉我他们“害怕被报复,”无论机构还是个人,都不同意政府公然反对“匿名者”组织。“每个人都容易成为被攻击对象,”他说。</p>
<h2>9</h2>
@ -272,7 +272,7 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
<p>Doyon 感到很烦躁但他还是继续扮演着一名黑客——以此吸引关注。他在多伦多上映的纪录片上以戴着面具的匿名者形象出现。在接受《National Post》的采访时他向记者大肆吹嘘未经证实的消息“我们已经入侵了美国政府的所有机密数据库。现在的问题是我们该何时泄露这些机密数据而不是我们是否会泄露。”</p>
<p>2013 年 1 月,在另一名匿名者介入俄亥俄州<a href="https://gist.githubusercontent.com/SteveArcher/cdffc917a507f875b956/raw/c7b49cc11ae1e790d30c87f7b8de95482c18ec74/%E6%96%AF%E6%89%98%E6%9C%AC%E7%BB%B4%E5%B0%94%E8%BD%AE%E5%A5%B8%E6%A1%88%E5%86%8D%E8%B5%B7%E9%A3%8E%E6%B3%A2%20%E9%BB%91%E5%AE%A2%E7%BB%84%E7%BB%87%E4%BB%8B%E5%85%A5">斯托本维尔未成年少女奸案</a>发起抗议行动之后Doyon 重新启用了他两年前创办的网站 LocalLeaks作为那起奸事件的信息汇总处理中心。如同许多其他“匿名者”组织的所作所为一样LocalLeaks 网站非常具有影响力但却也不承担任何责任。LocalLeaks 网站是第一家公布 12 分钟斯托本维尔高中毕业生猥亵视频的网站这激起了众多当事人的愤怒。LocalLeaks 网站上同时披露了几份未被法庭收录的关于案件的材料并且由此不小心透漏出了案件受害人的名字。Doyon向我承认他公开这些未经证实的信息的策略是存在争议的但他同时回忆起自己当时的想法“我们可以选择去除这些斯托本维尔案件的材料...也可以选择公开所有我们搜集的信息,基本上,给公众以提醒,不过,前提是你们得相信我们。”</p>
<p>2013 年 1 月，在另一名匿名者介入俄亥俄州<a href="https://gist.githubusercontent.com/SteveArcher/cdffc917a507f875b956/raw/c7b49cc11ae1e790d30c87f7b8de95482c18ec74/%E6%96%AF%E6%89%98%E6%9C%AC%E7%BB%B4%E5%B0%94%E8%BD%AE%E5%A5%B8%E6%A1%88%E5%86%8D%E8%B5%B7%E9%A3%8E%E6%B3%A2%20%E9%BB%91%E5%AE%A2%E7%BB%84%E7%BB%87%E4%BB%8B%E5%85%A5">斯托本维尔未成年少女轮奸案</a>发起抗议行动之后，Doyon 重新启用了他两年前创办的网站 LocalLeaks，作为那起轮奸事件的信息汇总处理中心。如同许多其他“匿名者”组织的所作所为一样，LocalLeaks 网站非常具有影响力，但却也不承担任何责任。LocalLeaks 网站是第一家公布 12 分钟斯托本维尔高中毕业生猥亵视频的网站，这激起了众多当事人的愤怒。LocalLeaks 网站上同时披露了几份未被法庭收录的关于案件的材料，并且由此不小心透露出了案件受害人的名字。Doyon向我承认，他公开这些未经证实的信息的策略是存在争议的，但他同时回忆起自己当时的想法：“我们可以选择销毁这些斯托本维尔案件的材料...也可以选择公开所有我们搜集的信息，基本上，给公众以提醒，不过，前提是你们得相信我们。”</p>
<p>2013 年 3 月,一个名为 Rustle League 的组织入侵了 Doyon 的 Twitter 账户该组织此前经常挑衅“匿名者”组织。Rustle League 的领导者之一 Shm00p 告诉我,“我们的本意并不是伤害那些家伙,只不过,哦,那些家伙说的话你就当是在放屁好了——我会这么做只是因为我感到很好笑。” Rustle League 组织使用 Doyon 的账户发布了含有如 www.jewsdid911.org 链接这样的,种族主义和反犹太主义的信息。</p>
@ -290,37 +290,37 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
<p>我们约定了一次面谈。Doyon 坚持让我通过加密聊天把面谈的详细情况提前告诉他。我坐了几个小时的飞机,租车来到了加拿大的一个偏远小镇,并且禁用了我的电话。</p>
<p>最后,我在一个狭小安静的住宅区公寓里见到了 Doyon。他穿了一件绿色的军人夹克衫以及印有“匿名者”组织 logo 的 T 恤衫:一个脸被问号所替代的黑衣人形象。公寓里基本上没有什么家具,充满了一股烟味。他谈论起了美国政治(“我基本没怎么在众多的选举中投票——它们不过是暗箱操作的游戏罢了”),好战的伊斯兰教(“我相信,尼日利亚政府的人不过是相互勾结,以创建一个名为‘博科圣地’的基地组织的下属机构罢了”),以及他对“匿名者”组织的小小看法(“那些自称为怪人的人是真的是烂透了,意思是,邪恶的人”)。</p>
<p>最后,我在一个狭小安静的住宅区公寓里见到了 Doyon。他穿了一件绿色的军人夹克衫以及印有“匿名者”组织 logo 的 T 恤衫:一个脸被问号所替代的黑衣人形象。公寓里基本上没有什么家具,充满了一股烟味。他谈论起了美国政治(“我基本没怎么在众多的选举中投票——它们不过是暗箱操作的游戏罢了”),好战的伊斯兰教(“我相信,尼日利亚政府的人不过是相互勾结,以创建一个名为‘博科圣地’的基地组织的下属机构罢了”),以及他对“匿名者”组织的小小看法(“那些自称为怪人的人是真的是烂透了,其实是邪恶的人”)。</p>
<p>Doyon 剃去了他的胡须但他却显得更加憔悴了。他说那是因为他病了的原因他几乎很少出去。很小的写字台上有两台笔记本电脑、一摞关于佛教的书还有一个堆满烟灰的烟灰缸。另一面裸露的泛黄墙壁上挂着盖伊·福克斯面具。他告诉我“所谓Commander X不过是一个处于极度痛苦中的小老头罢了。”</p>
<p>在刚过去的圣诞节里,匿名者的新网站 AnonInsiders 的创建者拜访了 Doyon并给他带来了馅饼和香烟。Doyon 询问来访的朋友是否可以继承自己的衣钵成为 PLF 的最高指挥官,同时希望能够递交出自己手里的“王国钥匙”——手里的所有密码,以及几份关于“匿名者”组织的机密文件。这位朋友委婉的拒绝了。“我有自己的生活,”他告诉了我拒绝的理由。</p>
<p>在刚过去的圣诞节里,匿名者的新网站 AnonInsiders 的创建者拜访了 Doyon并给他带来了馅饼和香烟。Doyon 询问来访的朋友是否可以接替自己成为 PLF 的最高指挥官,同时希望能够递交出自己手里的“王国钥匙”——手里的所有密码,以及几份关于“匿名者”组织的机密文件。这位朋友委婉的拒绝了。“我有自己的生活,”他告诉了我拒绝的理由。</p>
<h2>11</h2>
<p>2014 年 8 月 9 日,当地时间下午 5 时 09 分,来自密苏里州圣路易斯郊区德尔伍德的一位说唱歌手同时也是激进分子的 Kareem (Tef Poe) Jackson在 Twitter 上谈起了邻近城镇的一系列令人担忧的举措。“基本可以断定弗格森已经实施了戒严,任何人都无法出入,”他在 Twitter 上写道。“国内的朋友还有因特网上的朋友请帮助我们!!!”五个小时前,弗格森,一位十八岁的手无寸铁的非裔美国人 Michael Brown被一位白人警察射杀。射杀警察声称自己这么做的原因是 Brown 意图伸手抢夺自己的枪支。而事发当时和 Brown 在一起的朋友 Dorian Johnson 却说Brown 唯一做得不对的地方在于他当时拒绝离开街道中间。</p>
<p>2014 年 8 月 9 日,当地时间下午 5 时 09 分,来自密苏里州圣路易斯郊区德尔伍德的一位说唱歌手同时也是激进分子的 Kareem (Tef Poe) Jackson在 Twitter 上谈起了邻近城镇的一系列令人担忧的举措。“基本可以断定弗格森已经实施了戒严,任何人都无法出入,”他在 Twitter 上写道。“国内外的朋友们请帮助我们!!!”五个小时前,弗格森,一位十八岁的手无寸铁的非裔美国人 Michael Brown被一位白人警察射杀。射杀警察声称自己这么做的原因是 Brown 意图伸手抢夺自己的枪支。而事发当时和 Brown 在一起的朋友 Dorian Johnson 却说Brown 唯一做得不对的地方在于他当时拒绝离开街道中间。</p>
<p>不到两小时Jackson 就收到了一位名为 CommanderXanon 的 Twitter 用户的回复。“你完全可以相信我们,”回复信息里写道。“你是否可以给我们详细描述一下现场情况,那样会对我们很有帮助。”近几周的时间里,仍然呆在加拿大的 Doyon 复出了。六月,他在还有两个月满 50 岁的时候,成功戒烟(“#戒瘾成功 #电子香烟功不可没 #老了,”他在戒烟成功后在 Twitter 上写道。七月在加沙地带爆发武装对抗之后Doyon 发表 Twiter 支持“匿名者”组织的“拯救加沙行动”,并发动了一系列针对以色列网站的 DDoS 攻击。Doyon 认为弗格森枪击事件更加令人关注。抛开他本人的个性,他有在事件发展到引人注目之前的早期,就迅速注意该事件的能力</p>
<p>不到两小时Jackson 就收到了一位名为 CommanderXanon 的 Twitter 用户的回复。“你完全可以相信我们,”回复信息里写道。“你是否可以给我们详细描述一下现场情况,那样会对我们很有帮助。”近几周的时间里,仍然呆在加拿大的 Doyon 复出了。六月,他在还有两个月满 50 岁的时候,成功戒烟(“#戒瘾成功 #电子香烟功不可没 #老了,”他在戒烟成功后在 Twitter 上写道。七月在加沙地带爆发武装对抗之后Doyon 发表 Twiter 支持“匿名者”组织的“拯救加沙行动”,并发动了一系列针对以色列网站的 DDoS 攻击。Doyon 认为弗格森枪击事件更加令人关注。抛开他本人的个性,他有能力在事件发展到引人注目之前,就迅速注意该事件。</p>
<p>“正在网上搜索关于那名警察以及当地政府的信息,” Doyon 发 Twitter 道。不到十分钟,他就为此专门在 IRC 聊天室里创建了一个频道。“‘匿名者’组织‘弗格森’行动正式启动,”他又发了一条 Twitter。但只有两个人转推了此消息。</p>
<p>次日早晨Doyon 发布了一条链接,链接指向的是一个初具雏形的网站,网站首页有一条致弗格森市民的信息——“你们并不孤单,我们将尽一切努力支持你们”——以及致当地警察的警告:“如果你们对弗格森的抗议者们滥用职权、骚扰,或者伤害了他们,我们绝对会让你们所有政府部门的网站瘫痪。这不是威胁,这是承诺。”同时 Doyon 呼吁有 130 万粉丝的“匿名者”组织的 Twitter 账号 YourAnonNews 给与支持。“请支持弗格森行动”他发送了消息。一分钟后YourAnonNews 回复表示同意。当天,包含话题 #OpFerguson 的 Twitter 发表/转推了超过六千次。</p>
<p>次日早晨Doyon 发布了一条链接,链接指向的是一个初具雏形的网站,网站首页有一条致弗格森市民的信息——“你们并不孤单,我们将尽一切努力支持你们”——以及致当地警察的警告:“如果你们对弗格森的抗议者们滥用职权、骚扰,或者伤害了他们,我们绝对会让你们所有政府部门的网站瘫痪。这不是威胁,这是承诺。”同时 Doyon 呼吁有 130 万粉丝的“匿名者”组织的 Twitter 账号 YourAnonNews 给与支持。“请支持弗格森行动”他发送了消息。一分钟后YourAnonNews 回复表示同意。当天,包含话题 #OpFerguson 的 Twitter 被转发了超过六千次。</p>
<p>这个事件迅速成为头条新闻同时匿名者们在弗格森周围进行了大集会。与“阿拉伯之春行动”类似“匿名者”组织向抗议者们发送了电子关怀包包括抗暴指导“把瓦斯弹捡起来回丢给警察”与可打印的盖伊·福克斯面具。Jackson 和其他示威者在弗格森进行示威游行时,警察企图通过橡皮子弹和催泪瓦斯来驱散他们。“当时的情景真像是布鲁斯·威利斯的电影里的情节,” Jackson 后来告诉我。“不过巴拉克·奥巴马应该并不会支持‘匿名者’组织传授给我们的这些知识,”他笑称道。“让那些警察赶到束手无策真的是太爽了。”</p>
<p>这个事件迅速成为头条新闻同时匿名者们在弗格森周围进行了大集会。与“阿拉伯之春行动”类似“匿名者”组织向抗议者们发送了电子关怀包包括抗暴指导“把瓦斯弹捡起来回丢给警察”与可打印的盖伊·福克斯面具。Jackson 和其他示威者在弗格森进行示威游行时,警察企图通过橡皮子弹和催泪瓦斯来驱散他们。“当时的情景真像是布鲁斯·威利斯的电影里的情节,” Jackson 后来告诉我。“不过巴拉克·奥巴马应该并不会支持‘匿名者’组织传授给我们的这些知识,”他说道。“知道有人在你的背后支持你,真是感觉欣慰。”</p>
<p>有个域名是 www.opferguson.com 的网站,后来发现不过是一个骗局——一个用来收集访问者 ip 地址的陷阱,随后这些地址会被移交给执法机构。有些人怀疑 Commander X 是政府的线人。在 IRC 聊天室 #OpFerguson 频道,一个名叫 Sherlock 写道,“现在频道里每个人说的已经让我害怕去点击任何陌生的链接了。除非是一个我非常熟悉的网址,否则我绝对不会去点击。”</p>
<p>有个网址是 www.opferguson.com 的网站,后来发现不过是一个骗局——一个用来收集访问者 ip 地址的陷阱,随后这些地址会被移交给执法机构。有些人怀疑 Commander X 是政府的线人。在 IRC 聊天室 #OpFerguson 频道,一个名叫 Sherlock 写道,“现在频道里每个人说的已经让我害怕去点击任何陌生的链接了。除非是一个我非常熟悉的网址,否则我绝对不会去点击。”</p>
<p>弗格森的抗议者要求当局公布射杀 Brown 的警察的名字。几天后,匿名者们附和了抗议者们的请求。有人在 Twitter 上写道“弗格森警察局最好公布肇事警察的名字否则匿名者组织将会替他们公布。”8 月 12 的新闻发布会上,圣路易斯警察局的局长 Jon Belmar 拒绝了这个请求。“我们不会这样做,除非他们被某个罪名所指控,”他说道。</p>
<p>作为报复,一名黑客使用名为 TheAnonMessage 的 Twitter 账户公布了一条链接,该链接指向一段来自警察的无线电设备所记录的音频文件,文件记录时间是 Brown 被枪杀的两小时左右。TheAnonMessage 同时也把矛头指向了 Belmar在 Twitter 上公布了这位警察局长的家庭住址、电话号码以及他的家庭照片——一张是他的儿子在长椅上睡觉,另一张则是 Belmar 和他的妻子的合影。“不错的照片Jon” TheAnonMessage 在 Twitter 上写道。“你的妻子在她这个年龄算是一个美人了。你已经爱她爱得不耐烦了吗”一个小时后TheAnonMessage 又以 Belmar 的女儿为把柄进行了恐吓。</p>
<p>Richard Stallman来自 MIT 的初代黑客,告诉我虽然他在很多地方赞同“匿名者”组织的行为,但他认为这些泄露私人信息的攻击行为是要受到谴责的。即使是在国内TheAnonMessage 的行为也受到了谴责。“为何要泄露无辜的人的信息到网上?”一位匿名者通过 IRC 发问,并且表示威胁 Belmar 的家人实在是“相当愚蠢的行为”。但是 TheAnonMessage 和其他的一些匿名者仍然进行着不断搜寻,并企图在将来再次进行泄露信息的攻击。在互联网上可以得到所有弗格森警察局警员的名字,匿名者们不断地搜索着信息,企图找出具体是哪一个警察找出杀害了 Brown。</p>
<p>Richard Stallman，来自 MIT 的初代黑客，告诉我虽然他在很多地方赞同“匿名者”组织的行为，但他认为这些泄露私人信息的攻击行为是要受到谴责的。即使是在组织内部，TheAnonMessage 的行为也受到了谴责。“为何要泄露无辜的人的信息到网上？”一位匿名者通过 IRC 发问，并且表示威胁 Belmar 的家人实在是“相当愚蠢的行为”。但是 TheAnonMessage 和其他的一些匿名者仍然进行着不断搜寻，并企图在将来再次进行泄露信息的攻击。在互联网上可以得到所有弗格森警察局警员的名字，匿名者们不断地搜索着信息，企图找出具体是哪一个警察杀害了 Brown。</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_steig-1999-04-12-600.jpg" /></center>
<center><small>1999 年 4 月 12 日 “我应该把镜头对向谁？”</small></center>
<p>8 月 14 日清晨,位匿名者基于 Facebook 上的照片还有其他的证据,确定了射杀 Brown 的凶手是一位名叫 Bryan Willman 的 32 岁男子。根据一份 IRC 聊天记录,一位匿名者贴出了 Willman 的浮夸面孔的照片;另一位匿名者提醒道,“凶手声称自己的脸没有被任何人看到。”另一位昵称为 Anonymous|11057 的匿名者承认他对 Willman 的怀疑确实是“跳跃性的可能错误的逻辑过程推导出来的。”不过他还是写道,“我只是无法动摇自己的想法。虽然我没有任何证据,但我非常非常地确信就是他。”</p>
<p>8 月 14 日清晨，匿名者们基于 Facebook 上的照片还有其他的证据，认定射杀 Brown 的凶手是一位名叫 Bryan Willman 的 32 岁男子。根据一份 IRC 聊天记录，一位匿名者贴出了 Willman 的肿胀面孔的照片；另一位匿名者提醒道，“凶手声称自己的脸没有被任何人看到。”另一位昵称为 Anonymous|11057 的匿名者承认他对 Willman 的怀疑确实是“跳跃性的可能错误的逻辑过程推导出来的。”不过他还是写道，“我只是无法动摇自己的想法。虽然我没有任何证据，但我非常非常地确信就是他。”</p>
<p>TheAnonMessage 看起来被这次对话逗乐了,写道,“#愿逝者安息,凶手是 BryanWillman。”另一位匿名者发出了强烈警告。“请务必确认” Anonymous|2252 写道。“这不仅仅关乎到一个人的性命,我们可以不负责任地向公众公布我们的结果,但却很可能有无辜的人会因此受到不应受到的对待。”</p>
@ -356,15 +356,15 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
<blockquote>anondepplol</blockquote>
<p>早晨 9 时 45 分,圣路易斯警察局对 TheAnonMessage 进行了答复。“Bryan Willman 从来没有在弗格森警察局或者圣路易斯警察局任过职,” 他们在 Twitter 上写道。“请不要再公布这位无辜市民的信息了。”(随后 FBI 对弗格森警察的电脑遭黑客入侵的事情展开了调查。Twitter 管理员迅速封禁了 TheAnonMessage 的账户,但 Willman 的名字和家庭住址仍然被广泛传开。</p>
<p>早晨 9 时 45 分，圣路易斯警察局对 TheAnonMessage 进行了答复。“Bryan Willman 从来没有在弗格森警察局或者圣路易斯警察局任过职，” 他们在 Twitter 上写道。“请不要再公布这位无辜市民的信息了。”（随后 FBI 对弗格森警察的电脑遭黑客入侵的事情展开了调查。）Twitter 管理员迅速封禁了 TheAnonMessage 的账户，但 Willman 的名字和家庭住址仍然被广泛传开。</p>
<p>实际上Willman 是弗格森西郊圣安区的警察外勤负责人。当圣路易斯警察局的情报处打电话告诉 Willman他已经被“确认”为凶手时他告诉我“我以为不过是个奇怪的笑话。”几小时后他的社交账号上就收到了数百条要杀死他的威胁。他在警察的保护下,独自一人在家里呆了将近一个星期。“我只希望这一切都尽快过去,”他告诉我他的感受。他认为“匿名者”组织已经不可挽回地损害了他的名誉。“我不知道他们怎么会以为自己可以被再次信任的,”他说。</p>
<p>实际上Willman 是弗格森西郊圣安区的警察外勤负责人。当圣路易斯警察局的情报处打电话告诉 Willman他已经被“确认”为凶手时他告诉我“我以为不过是个奇怪的笑话。”几小时后他的社交账号上就收到了成百上千条死亡恐吓。他在警察的保护下,独自一人在家里呆了将近一个星期。“我只希望这一切都尽快过去,”他告诉我他的感受。他认为“匿名者”组织已经不可挽回地损害了他的名誉。“我不知道他们怎么会以为自己可以被再次信任的,”他说。</p>
<p>“我们并不完美，” OpFerguson 在 Twitter 上说道。“‘匿名者’组织确实犯错了，过去的几天里我们制造了一些混乱。为此，我们道歉。”尽管 Doyon 并不应该为这次错误的信息泄露攻击负责，但其他的匿名者却因为他发起了一次无法控制的行动而归咎他。YourAnonNews 在 Pastebin 上发表了一则消息，上面写道，“你们也许注意到了组织不同的 Twitter 账户发表的话题 #Ferguson 和 #OpFerguson，这两个话题下的 Twitter 与信息是相互矛盾的。为什么会在这些关键话题上出现分歧，部分原因是因为 CommanderX 是一个‘想让自己出名的疯子/想让公众认识自己的疯子’——这种人喜欢，或者至少不回避媒体的宣传——并且显而易见的，组织内大部分成员并不喜欢这样。”</p>
<p>在个人 Twitter 上，Doyon 否认了自己对“弗格森行动”所负的责任，他写道，“我讨厌这样。我不希望这样的情况发生，我也不希望和我认为是朋友的人战斗。”沉寂了几天后，他再度吹响了战斗的号角。他最近在 Twitter 上写道，“你们称他们是暴民，我们却称他们是压迫下的反抗之声”，以及“解放西藏”。</p>
<p>Doyon 仍然处于藏匿状态。甚至连他的律师 Jay Leiderman 也不知道他在哪里。Leiderman 表示除了在圣克鲁斯受到的指控Doyon 很有可能因为攻击了 PayPal 和奥兰多而面临新的指控。一旦他被捕,所有的刑期加起来,他的余生就要在监狱里度过了。借鉴 Edward Snowden 的先例,他希望申请去俄罗斯避难。我们谈话时,他用一支点燃的香烟在他的公寓里比划着。“这里比【哔~】的牢房强多了吧?我绝对不会出去,”他愤愤道。“我不会再联系我的家人了....这是相当高的代价,但我必须这么做,我会尽我的努力让所有人活得自由、明白。”</p>
<p>Doyon 仍然处于藏匿状态。甚至连他的律师 Jay Leiderman 也不知道他在哪里。Leiderman 表示除了在圣克鲁斯受到的指控Doyon 很有可能因为攻击了 PayPal 和奥兰多而面临新的指控。一旦他被捕,所有的刑期加起来,他的余生就要在监狱里度过了。借鉴 Edward Snowden 的先例,他希望申请去俄罗斯避难。我们谈话时,他用一支点燃的香烟在他的公寓里比划着。“这里比【哔~】的牢房强多了吧?我绝对不会出去,”他愤愤道。“我不会再联系我的家人了....这是相当高的代价,但我必须这么做,我会尽我的努力让所有人活得自由、明白。”</p>
@ -372,6 +372,6 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
<p>作者:<a href="http://www.newyorker.com/contributors/david-kushner">David Kushner</a></p>
<p>译者:<a href="https://github.com/SteveArcher">SteveArcher</a></p>
<p>校对:<a href="https://github.com/校对者ID">校对者ID</a></p>
<p>校对:<a href="https://github.com/carolinewuyan">Caroline</a></p>
<p>本文由 <a href="https://github.com/LCTT/TranslateProject">LCTT</a> 原创翻译,<a href="http://linux.cn/">Linux中国</a>荣誉推出</p>

View File

@ -1,12 +1,12 @@
Jelly Conky给你的Linux桌面加入了简约、时尚的状态
Jelly Conky为你的Linux桌面带来简约、时尚的状态信息
================================================================================
**我把Conky设置成有点像壁纸:我会找出一张我喜欢的,只在下一周更换因为我厌倦了并且想要一点改变。**
**我把Conky当成壁纸一样使用:我会找出一个我喜欢的样式,下一周当我厌烦了想要一点小改变时我就更换另外一个样式。**
耐烦的一部分原因是由于日益增长的设计目录。我最近最喜欢的是Jelly Conky。
不断更换样式的部分原因是由于日益增多的样式目录。我最近最喜欢的样式是Jelly Conky。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/jelly-conky.png)
我们最近强调的许多Conky所夸耀的最小设计都遵循了。它并不想成为一个厨房水槽。它不会被那些需要一眼需要看到他们硬盘温度和IP地址的人所青睐
Jelly Conky遵循了许多我们推荐的Conky风格采用的最小设计原则。它并不想成为一个大杂烩。它不会被那些喜欢一眼就能看到他们硬盘温度和IP地址的人所青睐
它配备了三种不同的模式,它们都可以添加个性的或者静态背景图像:
@ -16,9 +16,9 @@ Jelly Conky给你的Linux桌面加入了简约、时尚的状态
一些人不理解为什么要在桌面上拥有重复的时钟。这是很好理解的。对于我而言这不仅仅是功能虽然个人而言Conky的时钟比挤在上部面板上那渺小的数字要更容易看清
机会是如果你的Android主屏幕有一个时间小部件的话你不会介意在你的桌面上也有这么一个
我想如果你的Android主屏幕有一个时间小部件的话你不会介意在你的桌面上也有这么一个的对吧
你可以从下述链接下载Jelly Conkyzip 包里面有一个说明如何安装的 readme 文件。如果希望看到完整的教程,可以[参考我们的前一篇文章][3]。
- [从Deviant Art上下载 Jelly Conky][2]
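如果只想快速上手，下面是一个极简的安装示意（压缩包文件名与配置文件路径均为假设，实际请以压缩包内的 readme 为准）：

    $ unzip jelly-conky.zip -d ~/.conky           # 压缩包文件名仅为示意
    $ conky -c ~/.conky/Jelly-Conky/conkyrc &     # 配置文件位置请以readme为准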
--------------------------------------------------------------------------------
@ -27,10 +27,11 @@ via: http://www.omgubuntu.co.uk/2014/09/jelly-conky-for-linux-desktop
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/07/conky-circle-theme-nod-lg-quick-cover
[2]:http://zagortenay333.deviantart.com/art/Jelly-Conky-442559003
[2]:http://zagortenay333.deviantart.com/art/Jelly-Conky-442559003
[3]:http://www.omgubuntu.co.uk/2014/07/conky-circle-theme-nod-lg-quick-cover

View File

@ -0,0 +1,37 @@
Red Hat公司8200万美元收购FeedHenry来推动移动开发
================================================================================
> 这是Red Hat公司进军移动开发领域的一项关键收购。
Red Hat公司的JBoss开发者工具事业部一直注重于企业开发，而忽略了移动方面。而如今，这一切将随着Red Hat公司宣布用8200万美元现金收购移动开发供应商 [FeedHenry][1] 开始发生改变。这笔交易预计将在Red Hat公司2015财年的第三季度完成。
Red Hat公司的中间件总经理Mike Piech说当交易结束后FeedHenry公司的员工将会变成Red Hat公司的员工。
FeedHenry公司的开发平台能让应用开发者快速地开发出Android、IOS、Windows Phone以及黑莓的移动应用。FeedHenry的平台采用了Node.js编程架构，而那不是过去JBoss所深入涉足的领域。
"这次对FeedHenry公司的收购显著地提高了我们对于Node.js的支持与衔接。" Piech说。
Red Hat公司的平台即服务(PaaS)技术OpenShift已经有了一个Node.js的cartridge组件。此外，Red Hat企业版Linux还将Node.js的技术预览版作为Red Hat软件集（Red Hat Software Collections）的一部分一同发布。
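作为一个示意（其中软件集的包名只是假设，请以当时RHEL软件集的实际包名为准），在RHEL上启用软件集中的Node.js技术预览大致如下：

    sudo yum install nodejs010                    # 假设的软件集包名
    scl enable nodejs010 'node --version'         # 在该软件集环境中运行node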
尽管Node.js本身就是开源的，但目前并非FeedHenry公司的所有技术都以开源许可证发布。正如Red Hat公司贯穿其历史的一贯政策，它现在同样承诺会让FeedHenry开源。
“正如我们在其他收购中所做的那样，开源我们所收购的技术是Red Hat公司的优先事项，我们没有理由认为这一做法会因FeedHenry而改变。”Piech说。
Red Hat公司上一次对拥有非开源技术的公司的重大收购，是在2012年用1.04亿美元收购 [ManageIQ][2] 公司。在今年的5月份，Red Hat公司启动了ManageIQ开源项目，开放了这项之前闭源的云管理技术的开发和代码。
从整合的角度来看，Red Hat公司尚未提供关于FeedHenry究竟将如何融入其产品体系的完整细节。
"我们已经确定了一些FeedHenry公司和我们已经存在的技术和产品能很好地相互融合和集成的范围" Piech说"我们会在接下来的90天内分享更多我们发展蓝图的细节。"
--------------------------------------------------------------------------------
via: http://www.datamation.com/mobile-wireless/red-hat-acquires-feedhenry-for-82-million-to-advance-mobile-development.html
作者:[Sean Michael Kerner][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Sean-Michael-Kerner-4807810.html
[1]:http://www.feedhenry.com/
[2]:http://www.datamation.com/cloud-computing/red-hat-makes-104-million-cloud-management-bid-with-manageiq-acquisition.html

View File

@ -1,15 +1,14 @@
Canonical在Ubuntu 14.04 LTS中关闭了一个nginx漏洞
Canonical解决了一个Ubuntu 14.04 LTS中的nginx漏洞
================================================================================
> 用户不得不升级他们的系统来修复这个漏洞
> 用户应该更新他们的系统来修复这个漏洞!
![Ubuntu 14.04 LTS](http://i1-news.softpedia-static.com/images/news2/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677-2.jpg)
<center>![Ubuntu 14.04 LTS](http://i1-news.softpedia-static.com/images/news2/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677-2.jpg)</center>
Ubuntu 14.04 LTS
<center>*Ubuntu 14.04 LTS*</center>
**Canonical已经在安全公告中公布了这个影响到Ubuntu 14.04 LTS (Trusty Tahr)的nginx漏洞的细节。这个问题已经被确定并被修复了**
Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可能已经被用来暴露网络上的敏感信息。
Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可能已经被利用来暴露网络上的敏感信息。
根据安全公告“Antoine Delignat-Lavaud和Karthikeyan Bhargavan发现nginx错误地重复使用了缓存的SSL会话。攻击者可能利用此问题在特定的配置下可以从不同的虚拟主机获得信息“。
@ -23,13 +22,14 @@ Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可
sudo apt-get dist-upgrade
在一般情况下,一个标准的系统更新将会进行必要的更改。要应用此修补程序您不必重新启动计算机。
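更新完成后，可以用类似下面的命令确认修复已经生效（软件包名以你实际安装的nginx变体为准）：

    nginx -v                   # 查看当前nginx的版本
    apt-cache policy nginx     # 查看软件包的已安装版本和候选版本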
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677.shtml
作者:[Silviu Stahie][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,15 +1,20 @@
Wal Commander 0.17 Github版发布了
文件管理器 Wal Commander Github 0.17版发布了
================================================================================
![](http://wcm.linderdaum.com/wp-content/uploads/2014/09/wc21.png)
> ### 描述 ###
>
> Wal Commander GitHub 版是一款多平台的开源文件管理器。适用于Windows、Linux、FreeBSD、和OSX。
> Wal Commander GitHub 版是一款多平台的开源文件管理器。适用于Windows、Linux、FreeBSD和OSX。
>
> 这个项目的目的是创建一个模仿Far管理器外观和感觉的便携式文件管理器。
The next stable version of our Wal Commander GitHub Edition 0.17 is out. Major features include command line autocomplete using the commands history; file associations to bind custom commands to different actions on files; and experimental support of OS X using XQuartz. A lot of new hotkeys were added in this release. Precompiled binaries are available for Windows x64. Linux, FreeBSD and OS X versions can be built directly from the [GitHub source code][1].
Wal Commander 的下一个Github稳定版本0.17 已经出来了。主要功能包括:使用命令历史自动补全;文件关联绑定自定义命令对文件的各种操作;和用XQuartz实验性地支持OS X。很多新的快捷键添加在此版本中。预编译二进制文件适用于Windows64、LinuxFreeBSD和OS X版本这些可以直接从[GitHub中的源代码][1]编译。
Wal Commander 的下一个Github稳定版本0.17 已经出来了。主要功能包括:
- 使用命令历史自动补全;
- 文件关联绑定自定义命令对文件的各种操作;
- 用XQuartz实验性地支持OS X。
此版本中还添加了很多新的快捷键。预编译二进制文件适用于Windows x64；Linux、FreeBSD和OS X版本可以直接从[GitHub中的源代码][1]编译。
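下面是一个从源码构建的极简示意（其中的构建命令只是假设，具体步骤请以仓库中的README为准）：

    git clone https://github.com/corporateshark/WalCommander.git
    cd WalCommander
    make    # 假设项目提供默认的Makefile；实际构建方式以仓库说明为准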
### 主要特性 ###
@ -17,8 +22,9 @@ Wal Commander 的下一个Github稳定版本0.17 已经出来了。主要功能
- 文件关联 (主菜单 -> 命令 -> 文件关联)
- XQuartz上实验性地支持OS X ([https://github.com/corporateshark/WalCommander/issues/5][2])
### [下载][3] ###.
### 下载 ###
下载:[http://wcm.linderdaum.com/downloads/][3]
源代码: [https://github.com/corporateshark/WalCommander][4]
@ -27,7 +33,7 @@ Wal Commander 的下一个Github稳定版本0.17 已经出来了。主要功能
via: http://wcm.linderdaum.com/release-0-17-0/
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,38 +0,0 @@
Translating by ZTinoZ
Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development
================================================================================
> Red Hat jumps into the mobile development sector with a key acquisition.
Red Hat's JBoss developer tools division has always focused on enterprise development, but hasn't always been focused on mobile. Today that will start to change as Red Hat announced its intention to acquire mobile development vendor [FeedHenry][1] for $82 million in cash. The deal is set to close in the third quarter of Red Hat's fiscal 2015. Red Hat is set to disclose its second quarter fiscal 2015 earning at 4 ET today.
Mike Piech, general manager of Middleware at Red Hat, told Datamation that upon the deal's closing FeedHenry's employees will become Red Hat employees
FeedHenry's development platform enables application developers to rapidly build mobile application for Android, IOS, Windows Phone and BlackBerry. The FeedHenry platform leverages Node.js programming architecture, which is not an area where JBoss has had much exposure in the past.
"The acquisition of FeedHenry significantly expands Red Hat's support for and engagement in Node.js," Piech said.
Piech said Red Hat's OpenShift Platform-as-a-Service (PaaS) technology already has a Node.js cartridge. Additionally, Red Hat Enterprise Linux ships a tech preview of node.js as part of the Red Hat Software Collections.
While node.js itself is open source, not all of FeedHenry's technology is currently available under an open source license. As has been Red Hat's policy throughout its entire history, it is now committing to making FeedHenry open source as well.
"As we've done with other acquisitions, open sourcing the technology we acquire is a priority for Red Hat, and we have no reason to expect that approach will change with FeedHenry," Piech said.
Red Hat's last major acquisition of a company with non open source technology was with [ManageIQ][2] for $104 million back in 2012. In May of this year, Red Hat launched the ManageIQ open-source project, opening up development and code of the formerly closed-source cloud management technology.
From an integration standpoint, Red Hat is not yet providing full details of precisely where FeedHenry will fit it.
"We've already identified a number of areas where FeedHenry and Red Hat's existing technology and products can be better aligned and integrated," Piech said. "We'll share more details as we develop the roadmap over the next 90 days."
--------------------------------------------------------------------------------
via: http://www.datamation.com/mobile-wireless/red-hat-acquires-feedhenry-for-82-million-to-advance-mobile-development.html
作者:[Sean Michael Kerner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Sean-Michael-Kerner-4807810.html
[1]:http://www.feedhenry.com/
[2]:http://www.datamation.com/cloud-computing/red-hat-makes-104-million-cloud-management-bid-with-manageiq-acquisition.html

View File

@ -0,0 +1,52 @@
Oracle Linux 5.11 Features Updated Unbreakable Linux Kernel
================================================================================
> A lot of packages have been updated in this release
![This is the last release for this branch](http://i1-news.softpedia-static.com/images/news2/Oracle-Linux-5-11-Features-Updated-Unbreakable-Linux-Kernel-460129-2.jpg)
This is the last release for this branch
> **Oracle has announced that Oracle Linux Release 5.11 has been made available for download, but this is the enterprise version, so users will have to register in order to get the download.**
The new Oracle Linux update is probably the last one in the series. This operating system is based on Red Hat and the company has just pushed out the last update for the RHEL 5x branch, which means that this is the end of the line for the Oracle version as well.
Oracle Linux also comes with a series of features that make it very interesting, like zero-downtime kernel updates with the help of a tool called Ksplice (a technology Oracle acquired in 2011), inclusion of the Oracle Database and Oracle Applications, and it's used in all x86-based Oracle Engineered Systems.
### What's so special about Oracle Linux ###
Despite the fact that Oracle Linux is based on Red Hat, its developers have actually made a list of reasons why you shouldn't use RHEL. There are quite a lot of them, but the main one is that anyone can download Oracle Linux (after registering) and RHEL is actually off limits for non-paying members.
"Providing advanced scalability and reliability for enterprise applications and systems, Oracle Linux delivers extreme performance and is used in all x86-based Oracle Engineered Systems. Oracle Linux is free to use, free to distribute, free to update, and easy to download. It is the only Linux distribution with production support for zero-downtime kernel updates with Oracle Ksplice, allowing customers the ability to apply patches for security and other updates without a reboot, as well as providing diagnostic features for debugging kernel issues on production systems," say the developers on their website.
One of the most interesting features for Oracle Linux and unique for this distribution is its unbreakable kernel. This is the actual name used by the developers. It's based on an older Linux kernel from the 3.0.36 branch. Users also have access to a Red Hat-compatible Kernel (kernel-2.6.18-398.el5), which is provided by default in the distro.
Also, the Unbreakable Enterprise Kernel available in the Oracle Linux Release 5.11 features a ton of drivers for hardware and devices, but this latest update brought even better support.
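As a quick illustrative check, you can see which kernel variant is booted and apply rebootless updates. This sketch assumes the Ksplice Uptrack client is installed and registered with a valid access key, which is not part of the base install:

    # Show the running kernel; UEK builds carry a "uek" suffix, while the
    # Red Hat-compatible kernel uses stock EL naming (e.g. 2.6.18-398.el5).
    uname -r

    # Apply pending Ksplice updates without a reboot (Uptrack client assumed).
    sudo uptrack-upgrade -y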
You can check the comprehensive [release notes][1] for Oracle Linux 5.11, which will probably take you the rest of the day.
You can also download Oracle Linux 5.11:
- [Oracle Enterprise Linux 6.5 (ISO) 64-bit][2]
- [Oracle Enterprise Linux 6.5 (ISO) 32-bit][3]
- [Oracle Enterprise Linux 7.0 (ISO) 64-bit][4]
- [Oracle Enterprise Linux 5.11 (ISO) 64-bit][5]
- [Oracle Enterprise Linux 5.11 (ISO) 32-bit][6]
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Oracle-Linux-5-11-Features-Updated-Unbreakable-Linux-Kernel-460129.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://oss.oracle.com/ol5/docs/RELEASE-NOTES-U11-en.html#Kernel_and_Driver_Updates
[2]:http://mirrors.dotsrc.org/oracle-linux/OL6/U5/i386/OracleLinux-R6-U5-Server-i386-dvd.iso
[3]:http://mirrors.dotsrc.org/oracle-linux/OL6/U5/x86_64/OracleLinux-R6-U5-Server-x86_64-dvd.iso
[4]:https://edelivery.oracle.com/linux/
[5]:http://ftp5.gwdg.de/pub/linux/oracle/EL5/U11/x86_64/Enterprise-R5-U11-Server-x86_64-dvd.iso
[6]:http://ftp5.gwdg.de/pub/linux/oracle/EL5/U11/i386/Enterprise-R5-U11-Server-i386-dvd.iso

View File

@ -1,89 +0,0 @@
Drab Desktop? Try These 4 Beautiful Linux Icon Themes
================================================================================
**Ubuntu's default icon theme [hasn't changed much][1] in almost 5 years, save for the [odd new icon here and there][2]. If you're tired of how it looks, we're going to show you a handful of gorgeous alternatives that will easily freshen things up.**
Do feel free to share links to your own favourite choices in the comments below.
### Captiva ###
![Captiva icons, elementary folders and Moka GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-and-captiva.jpg)
Captiva icons, elementary folders and Moka GTK
Captiva is a relatively new icon theme that even the least bling-prone user can appreciate.
Made by DeviantArt user ~[bokehlicia][3], Captiva shuns the 2D flat look of many current icon themes for a softer, rounded look. The icons themselves have an almost material or textured look, with subtle drop shadows and a rich colour palette adding to the charm.
It doesnt yet include a set of its own folder icons, and will fall back to using elementary (if available) or stock Ubuntu icons.
To install Captiva icons in Ubuntu 14.04 you can add the official PPA by opening a new Terminal window and entering the following commands:
sudo add-apt-repository ppa:captiva/ppa
sudo apt-get update && sudo apt-get install captiva-icon-theme
Or, if youre not into software source cruft, you can download the icon pack directly from the DeviantArt page. To install, extract the archive and move the resulting folder to the .icons directory in Home.
However you choose to install it, youll need to apply this (and every other theme on this list) using a utility like [Unity Tweak Tool][4].
- [Captiva Icon Theme on DeviantArt][5]
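If you would rather skip the tweak tool, GNOME-based desktops (including Unity) generally honor a GSettings key for the icon theme. A minimal sketch, assuming the extracted folder is named Captiva:

    # make sure the per-user icon directory exists, then select the theme
    $ mkdir -p ~/.icons
    $ gsettings set org.gnome.desktop.interface icon-theme 'Captiva'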
### Square Beam ###
![Square Beam icon set with Orchis GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/squarebeam.jpg)
Square Beam icon set with Orchis GTK
After something a bit angular? Check out Square Beam. It offers a more imposing visual statement than other sets on this list, with electric colours, harsh gradients and stark iconography. It claims to have more than 30,000 different icons (!) included (youll forgive me for not counting) so you should find very few gaps in its coverage.
- [Square Beam Icon Theme on GNOME-Look.org][6]
### Moka & Faba ###
![Moka/Faba Mono Icons with Orchis GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-faba.jpg)
Moka/Faba Mono Icons with Orchis GTK
The Moka icon suite needs little introduction. In fact, Id wager a good number of you are already using it.
With pastel colours, soft edges and simple icon artwork, Moka is a truly standout and comprehensive set of application icons. Its best used with its sibling, Faba, which Moka will inherit so as to fill in all the system icons, folders, panel icons, etc. The combined result is…well, youve got eyes!
For full details on how to install on Ubuntu head over to the official project website, link below.
- [Download Moka and Faba Icon Themes][7]
### Compass ###
![Compass Icon Theme with Numix Blue GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/compass1.jpg)
Compass Icon Theme with Numix Blue GTK
Last on our list, but by no means least, is Compass. This is a true adherent to the 2D, two-tone UI design thats popular right now. It may not be as visually diverse as others on this list, but thats the point. Its consistent and uniform and all the better for it — just check out those folder icons!
Its available to download and install manually through GNOME-Look (link below) or through the Nitrux Artwork PPA:
sudo add-apt-repository ppa:nitrux/nitrux-artwork
sudo apt-get update && sudo apt-get install compass-icon-theme
- [Compass Icon Theme on GNOME-Look.org][8]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/09/4-gorgeous-linux-icon-themes-download
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2010/02/lucid-gets-new-icons-for-rhythmbox-ubuntuone-memenu-more
[2]:http://www.omgubuntu.co.uk/2012/08/new-icon-theme-lands-in-lubuntu-12-10
[3]:http://bokehlicia.deviantart.com/
[4]:http://www.omgubuntu.co.uk/2014/06/unity-tweak-tool-0-7-development-download
[5]:http://bokehlicia.deviantart.com/art/Captiva-Icon-Theme-479302805
[6]:http://gnome-look.org/content/show.php/Square-Beam?content=165094
[7]:http://mokaproject.com/moka-icon-theme/download/ubuntu/
[8]:http://gnome-look.org/content/show.php/Compass?content=160629

View File

@ -1,86 +0,0 @@
Whats wrong with IPv4 and Why we are moving to IPv6
================================================================================
For the past 10 years or so, every year has supposedly been the year that IPv6 becomes widespread. It hasnt happened yet. Consequently, there is little widespread knowledge of what IPv6 is, how to use it, or why it is inevitable.
![IPv4 and IPv6 Comparison](http://www.tecmint.com/wp-content/uploads/2014/09/ipv4-ipv6.gif)
IPv4 and IPv6 Comparison
### Whats wrong with IPv4? ###
Weve been using **IPv4** ever since RFC 791 was published in 1981. At the time, computers were big, expensive, and rare. IPv4 had provision for **4 billion IP** addresses, which seemed like an enormous number compared to the number of computers. Unfortunately, IP addresses are not used efficiently or contiguously. There are gaps in the addressing. For example, a company might have an address space of **254 (2^8-2)** addresses, and only use 25 of them. The remaining 229 are reserved for future expansion. Those addresses cannot be used by anybody else, because of the way networks route traffic. Consequently, what seemed like a large number in 1981 is actually a small number in 2014.
The Internet Engineering Task Force (**IETF**) recognized this problem in the early 1990s and came up with two solutions: Classless Inter-Domain Routing (**CIDR**) and private IP addresses. Prior to the invention of CIDR, you could get one of three network sizes: **24 bits** (16,777,214 addresses), **16 bits** (65,534 addresses) and **8 bits** (254 addresses). Once CIDR was invented, it was possible to split networks into subnetworks.
So, for example, if you needed **5 IP** addresses, your ISP would give you a network with 3 host bits, which would give you **6 IP** addresses. That would allow your ISP to use addresses more efficiently. Private IP addresses allow you to create a network where each machine on the network can easily connect to another machine on the internet, but where it is very difficult for a machine on the internet to connect back to your machine. Your network is private, hidden. Your network could be very large, 16,777,214 addresses, and you could subnet your private network into smaller networks, so that you could manage your own addresses easily.
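To make the arithmetic concrete: a /29 network leaves 3 host bits, so it offers 2^3 - 2 = 6 usable addresses. If you have the ipcalc utility installed it will do the math for you; the address below is just an illustrative private range, and the output is abridged:

    $ ipcalc 192.168.1.0/29
    Network:   192.168.1.0/29
    HostMin:   192.168.1.1
    HostMax:   192.168.1.6
    Hosts/Net: 6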
You are probably using a private address right now. Check your own IP address: if it is in the range of **10.0.0.0 10.255.255.255** or **172.16.0.0 172.31.255.255** or **192.168.0.0 192.168.255.255**, then you are using a private IP address. These two solutions helped forestall disaster, but they were stopgap measures and now the time of reckoning is upon us.
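To see which addresses your own machine is using right now, the following works on most modern Linux systems; anything inside the three ranges above is a private address:

    # list every IPv4 address currently assigned
    $ ip -4 addr show
    # on older systems that only ship the net-tools suite
    $ ifconfig -a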
Another problem with **IPv4** is that the IPv4 header is variable length. That was acceptable when routing was done in software. But now routers are built with hardware, and processing variable length headers in hardware is hard. The large routers that allow packets to go all over the world are having problems coping with the load. Clearly, a new scheme was needed, with fixed length headers.
Still another problem with **IPv4** is that, when the addresses were allocated, the internet was an American invention. IP addresses for the rest of the world are fragmented. A scheme was needed to allow addresses to be aggregated somewhat by geography so that the routing tables could be made smaller.
Yet another problem with IPv4, and this may sound surprising, is that it is hard to configure, and hard to change. This might not be apparent to you, because your router takes care of all of these details for you. But these problems drive your ISP nuts.
All of these problems went into the consideration of the next version of the Internet.
### About IPv6 and its Features ###
The **IETF** unveiled the next generation of IP in December 1995. The new version was called IPv6 because the number 5 had been allocated to something else by mistake. Some of the features of IPv6 include:
- 128 bit addresses (3.402823669×10³⁸ addresses)
- A scheme for logically aggregating addresses
- Fixed length headers
- A protocol for automatically configuring and reconfiguring your network.
Lets look at these features one by one:
#### Addresses ####
The first thing everybody notices about **IPv6** is that the number of addresses is enormous. Why so many? The answer is that the designers were concerned about the inefficient organization of addresses, so there are so many available addresses that we could allocate inefficiently in order to achieve other goals. So, if you want to build your own IPv6 network, chances are that your ISP will give you a network of **64 bits** (1.844674407×10¹⁹ addresses) and let you subnet that space to your hearts content.
#### Aggregation ####
With so many addresses to use, the address space can be allocated sparsely in order to route packets efficiently. So, your ISP gets a network space of **80 bits**. Of those 80 bits, 16 of them are for the ISPs subnetworks, and 64 bits are for the customers networks. So, the ISP can have 65,536 customer networks.
However, that address allocation isnt cast in stone, and if the ISP wants more, smaller networks, it can do that (although the ISP would probably simply ask for another 80-bit space). The upper 48 bits are further divided, so that ISPs that are “**close**” to one another have similar network address ranges, to allow the networks to be aggregated in the routing tables.
#### Fixed length Headers ####
An **IPv4** header has a variable length. An **IPv6** header always has a fixed length of 40 bytes. In IPv4, extra options caused the header to increase in size. In IPv6, if additional information is needed, that additional information is stored in extension headers, which follow the IPv6 header and are generally not processed by the routers, but rather by the software at the destination.
One of the fields in the IPv6 header is the flow label. A flow is a **20 bit** number which is created pseudo-randomly, and it makes it easier for routers to route packets. If a packet has a flow, then the router can use the flow number as a direct index into a table, which is fast, rather than performing a full table lookup, which is slow. This feature makes **IPv6** very easy to route.
#### Automatic Configuration ####
In **IPv6**, when a machine first starts up, it checks the local network to see if any other machine is using its address. If the address is unused, then the machine next looks for an IPv6 router on the local network. If it finds the router, then it asks the router for an IPv6 address to use. Now, the machine is set up and ready to communicate on the internet: it has an IP address for itself and it has a default router.
If the router should go down, then the machines on the network will detect the problem and repeat the process of looking for an IPv6 router, to find the backup router. Thats actually hard to do in IPv4. Similarly, if the router wants to change the addressing scheme on its network, it can. The machines will query the router from time to time and change their addresses automatically. The router will support both the old and new addresses until all of the machines have switched over to the new configuration.
IPv6 automatic configuration is not a complete solution. There are some other things that a machine needs in order to use the internet effectively: the name servers, a time server, perhaps a file server. So there is **dhcp6**, which does the same thing as dhcp; but because the machine boots into a routable state, one dhcp daemon can service a large number of networks.
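You can watch this autoconfiguration from a Linux machine; a minimal sketch, assuming an interface named eth0 and the rdisc6 tool from the ndisc6 package, looks like this:

    # show the IPv6 addresses the interface configured for itself
    $ ip -6 addr show dev eth0
    # solicit a router advertisement directly from the local router
    $ sudo rdisc6 eth0
    # show the default route learned from those advertisements
    $ ip -6 route show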
#### Theres one big problem ####
So if IPv6 is so much better than IPv4, why hasnt adoption been more widespread (as of **May 2014**, Google estimates that its IPv6 traffic is about **4%** of its total traffic)? The basic problem is which comes first, the **chicken or the egg**? Somebody running a server wants the server to be as widely available as possible, which means it must have an **IPv4** address.
It could also have an IPv6 address, but few people would use it and you do have to change your software a little to accommodate IPv6. Furthermore, a lot of home networking routers do not support IPv6. A lot of ISPs do not support IPv6. I asked my ISP about it, and I was told that they will provide it when customers ask for it. So I asked how many customers had asked for it. One, including me.
By way of contrast, all of the major operating systems, Windows, OS X, and Linux support IPv6 “**out of the box**” and have for years. The operating systems even have software that will allow IPv6 packets to “**tunnel**” within IPv4 to a point where the IPv6 packets can be removed from the surrounding IPv4 packet and sent on their way.
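If you want to check whether your own connection already speaks IPv6, a quick test (the hostname here is just a well-known IPv6-only example) is:

    # force an IPv6-only ping; failure suggests you have no IPv6 path yet
    $ ping6 -c 3 ipv6.google.com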
#### Conclusion ####
IPv4 has served us well for a long time. IPv4 has some limitations which are going to present insurmountable problems in the near future. IPv6 will solve those problems by changing the strategy for allocating addresses, making improvements to ease the routing of packets, and making it easier to configure a machine when it first joins the network.
However, acceptance and usage of IPv6 has been slow, because change is hard and expensive. The good news is that all operating systems support IPv6, so when you are ready to make the change, your computer will need little effort to convert to the new scheme.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/ipv4-and-ipv6-comparison/
作者:[Jeff Silverman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/jeffsilverm/

View File

@ -1,67 +0,0 @@
barney-ro translating
7 killer open source monitoring tools
================================================================================
Looking for greater visibility into your network? Look no further than these excellent free tools
Network and system monitoring is a broad category. There are solutions that monitor for the proper operation of servers, network gear, and applications, and there are solutions that track the performance of those systems and devices, providing trending and analysis. Some tools will sound alarms and notifications when problems are detected, while others will even trigger actions to run when alarms sound. Here is a collection of open source solutions that aim to provide some or all of these capabilities.
### Cacti ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_02-netmon-cacti-100448914-orig.jpg)
Cacti is a very extensive performance graphing and trending tool that can be used to track just about any monitored metric that can be plotted on a graph. From disk utilization to fan speeds in a power supply, if it can be monitored, Cacti can track it -- and make that data quickly available.
### Nagios ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_03-netmon-nagios-100448915-orig.jpg)
Nagios is the old guard of system and network monitoring. It is fast, reliable, and extremely customizable. Nagios can be a challenge for newcomers, but the rather complex configuration is also its strength, as it can be adapted to just about any monitoring task. What it may lack in looks it makes up for in power and reliability.
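If you want to try Nagios out, it is packaged by the major distributions; a hedged sketch for a Debian-family system of this era (package names vary by release) is:

    $ sudo apt-get update
    $ sudo apt-get install nagios3 nagios-plugins
    # the web UI is then served at http://localhost/nagios3/ (log in as nagiosadmin)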
### Icinga ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_04-netmon-icinga-100448916-orig.jpg)
Icinga is an offshoot of Nagios that is currently being rewritten from the ground up. It offers a thorough monitoring and alerting framework thats designed to be as open and extensible as Nagios is, but with several different Web UI options. Icinga 1 is closely related to Nagios, while Icinga 2 is the rewrite. Both versions are currently supported, and Nagios users can migrate to Icinga 1 very easily.
### NeDi ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_05-netmon-nedi-100448917-orig.jpg)
NeDi may not be as well known as some of the others, but its a great solution for tracking devices across a network. It continuously walks through a network infrastructure and catalogs devices, keeping track of everything it discovers. It can provide the current location of any device, as well as a history.
NeDi can be used to locate stolen or lost devices by alerting you if they reappear on the network. It can even display all known and discovered connections on a map, showing how every network interconnect is laid out, down to the physical port level.
### Observium ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_06-netmon-observium-100448918-orig.jpg)
Observium combines system and network monitoring with performance trending. It uses both static and auto discovery to identify servers and network devices, leverages a variety of monitoring methods, and can be configured to track just about any available metric. The Web UI is very clean, well thought out, and easy to navigate.
As shown, Observium can also display the physical location of monitored devices on a geographical map. Note too the heads-up panels showing active alarms and device counts.
### Zabbix ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_07-netmon-zabbix-100448919-orig.jpg)
Zabbix monitors servers and networks with an extensive array of tools. There are Zabbix agents for most operating systems, or you can use passive or external checks, including SNMP to monitor hosts and network devices. You'll also find extensive alerting and notification facilities, and a highly customizable Web UI that can be adapted to a variety of heads-up displays. In addition, Zabbix has specific tools that monitor Web application stacks and virtualization hypervisors.
Zabbix can also produce logical interconnection diagrams detailing how certain monitored objects are interconnected. These maps are customizable, and maps can be created for groups of monitored devices and hosts.
### Ntop ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_08-netmon-ntop-100448920-orig.jpg)
Ntop is a packet sniffing tool with a slick Web UI that displays live data on network traffic passing through a monitored interface. Instant data on network flows is available through an advanced live graphing function. Host data flows and host communication pair information are also available in real time.
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/2686794/asset-management/164219-7-killer-open-source-monitoring-tools.html
作者:[Paul Venezia][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.networkworld.com/author/Paul-Venezia/

View File

@ -1,29 +1,55 @@
barney-ro translating
ChromeOS vs Linux: The Good, the Bad and the Ugly
================================================================================
> In the battle between ChromeOS and Linux, both desktop environments have strengths and weaknesses.
Anyone who believes Google isn't "making a play" for desktop users isn't paying attention. In recent years, I've seen [ChromeOS][1] making quite a splash on the [Google Chromebook][2]. Exploding with popularity on sites such as Amazon.com, it looks as if ChromeOS could be unstoppable.
In this article, I'm going to look at ChromeOS as a concept to market, how it's affecting Linux adoption and whether or not it's a good/bad thing for the Linux community as a whole. Plus, I'll talk about the biggest issue of all and how no one is doing anything about it.
### ChromeOS isn't really Linux ###
When folks ask me if ChromeOS is a Linux distribution, I usually reply that ChromeOS is to Linux what OS X is to BSD. In other words, I consider ChromeOS to be a forked operating system that uses the Linux kernel under the hood. Much of the operating system is made up of Google's own proprietary blend of code and software.
So while ChromeOS is using the Linux kernel under its hood, it's still very different from what we might find with today's modern Linux distributions.
Where ChromeOS's difference becomes most apparent, however, is in the apps it offers the end user: Web applications. With everything being launched from a browser window, Linux users might find using ChromeOS to be a bit vanilla. But for non-Linux users, the experience is not all that different than what they may have used on their old PCs.
For example: Anyone who is living a Google-centric lifestyle on Windows will feel right at home on ChromeOS. Odds are this individual is already relying on the Chrome browser, Google Drive and Gmail. By extension, moving over to ChromeOS feels fairly natural for these folks, as they're simply using the browser they're already used to.
Linux enthusiasts, however, tend to feel constrained almost immediately. Software choices feel limited and boxed in, plus games and VoIP are totally out of the question. Sorry, but [GooglePlus Hangouts][3] isn't a replacement for [VoIP][4] software. Not even by a long shot.
### ChromeOS or Linux on the desktop ###
Anyone making the claim that ChromeOS hurts Linux adoption on the desktop needs to come up for air and meet non-technical users sometime.
Yes, desktop Linux is absolutely fine for most casual computer users. However, it helps to have someone to install the OS and offer "maintenance" services like we see in the Windows and OS X camps. Sadly, Linux lacks this here in the States, which is where I see ChromeOS coming into play.
I've found the Linux desktop is best suited for environments where on-site tech support can manage things on the down-low. Examples include homes where advanced users can drop by and handle updates, and governments and schools with IT departments. These are environments where Linux on the desktop is set up to be used by users of any skill level or background.
By contrast, ChromeOS is built to be completely maintenance free, thus not requiring any third-party assistance short of turning it on and allowing updates to do their magic behind the scenes. This is partly made possible due to ChromeOS being designed for specific hardware builds, in a similar spirit to how Apple develops its own computers. Because Google has a pulse on the hardware ChromeOS is bundled with, it allows for a generally error-free experience. And for some individuals, this is fantastic!
@ -69,7 +95,7 @@ If local offline efforts like this don't happen, not to worry. Linux on the desk
via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-and-the-ugly-1.html
作者:[Matt Hartley][a]
译者:[译者ID](https://github.com/译者ID)
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -79,4 +105,4 @@ via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-an
[2]:http://www.google.com/chrome/devices/features/
[3]:https://plus.google.com/hangouts
[4]:http://en.wikipedia.org/wiki/Voice_over_IP
[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html
[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html

View File

@ -0,0 +1,64 @@
What is a good subtitle editor on Linux
================================================================================
If you watch foreign movies regularly, chances are you prefer having subtitles rather than the dub. Having grown up in France, I know that most Disney movies during my childhood sounded weird because of the French dub. While I now have the chance to watch them in their original version, I know that for a lot of people subtitles are still required. And I sometimes surprise myself making subtitles for my family. Luckily for me, Linux is not devoid of fancy and open source subtitle editors. In short, this is a non-exhaustive list of open source subtitle editors for Linux. Share your opinion on what you think is the best subtitle editor.
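Most of the editors below are packaged for mainstream distributions; on a Debian-family system, something like the following should pull in several of them at once (the package names are my assumption and may differ by release):

    $ sudo apt-get install gnome-subtitles aegisub gaupol subtitleeditor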
### 1. Gnome Subtitles ###
![](https://farm6.staticflickr.com/5596/15323769611_59bc5fb4b7_z.jpg)
[Gnome Subtitles][1] is a bit of a go-to for me when it comes to quickly editing some existing subtitles. You can load the video, load the subtitle text files, and instantly get going. I appreciate its balance between ease of use and advanced features. It comes with a synchronization tool as well as a spell checker. Finally, last but not least, the shortcuts are what make it good in the end: when you edit a lot of lines, you prefer to keep your hands on the keyboard and use the built-in shortcuts to move around.
### 2. Aegisub ###
![](https://farm3.staticflickr.com/2944/15323964121_59e9b26ba5_z.jpg)
[Aegisub][2] is already one level of complexity higher. The interface alone reflects a learning curve. But beyond its intimidating aspect, Aegisub is a very complete piece of software, providing tools beyond anything I could have imagined before. Like Gnome Subtitles, Aegisub has a WYSIWYG approach, but takes it to a whole new level: it is possible to drag and drop the subtitles on the screen, see the audio spectrum on the side, and do everything with shortcuts. In addition to that, it comes with a Kanji tool, a karaoke mode, and the possibility to import Lua scripts to automate some tasks. I really invite you to go read the [manual page][3] before starting to use it.
### 3. Gaupol ###
![](https://farm3.staticflickr.com/2942/15326817292_6702cc63fc_z.jpg)
At the other end of the complexity spectrum is [Gaupol][4]. Unlike Aegisub, Gaupol is quick to pick up and adopts an interface very close to Gnome Subtitles. But behind this relative simplicity, it comes with all the necessary tools: shortcuts, third-party extensions, spell checking, and even speech recognition (courtesy of [CMU Sphinx][5]). As a downside, however, I did notice some slowdowns while testing it; nothing too serious, but just enough to make me still prefer Gnome Subtitles.
### 4. Subtitle Editor ###
![](https://farm4.staticflickr.com/3914/15323911521_8e33126610_z.jpg)
[Subtitle Editor][6] is very close to Gaupol. However, the interface is a little bit less intuitive, and the features are slightly more advanced. I appreciate the possibility to define "key frames" and all the synchronization options it offers. However, maybe more icons and less text would enhance the interface. As a goodie, Subtitle Editor can simulate a "type writer" effect, though I am not sure it is extremely useful. And last but not least, the possibility to redefine the shortcuts is always handy.
### 5. Jubler ###
![](https://farm4.staticflickr.com/3912/15323769701_3d94ca8884_z.jpg)
Written in Java, [Jubler][7] is a multi-platform subtitle editor. I was actually very impressed by its interface. I definitely see the Java-ish aspect of it, but it remains well conceived and clear. Like Aegisub, you can drag and drop the subtitles on the image, making the experience far more pleasant than just typing. It is also possible to define a style for subtitles, play sound from another track, translate the subtitles, or use the spell checker. However, be careful as you will need MPlayer installed and correctly configured beforehand if you want to use Jubler fully. Oh and I give it a special credit for its easy installation process after downloading the script from the [official page][8].
### 6. Subtitle Composer ###
![](https://farm6.staticflickr.com/5578/15323769711_6c6dfbe405_z.jpg)
Defined as a "KDE subtitle composer," [Subtitle Composer][9] comes with most of the traditional features evoked previously, but with the KDE interface that we expect. This comes naturally with the option to redefine the shortcuts, which is very dear to me. But beyond all of this, what differentiates Subtitle Composer from all the previously mentioned programs is its ability to follow scripts written in JavaScript, Python, and even Ruby. A few examples are packaged with the software, and will definitely help you pick up the syntax and the usefulness of such feature.
To conclude, whether you, like me, just edit a few subtitles for your family, re-synchronize the entire track, or write everything from scratch, Linux has the tools for you. For me in the end, the shortcuts and the ease-of-use make all the difference, but for any higher usage, scripting or speech recognition can become super handy.
Which subtitle editor do you use and why? Or is there another one that you prefer not mentioned here? Let us know in the comments.
--------------------------------------------------------------------------------
via: http://xmodulo.com/good-subtitle-editor-linux.html
作者:[Adrien Brochard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://gnomesubtitles.org/
[2]:http://www.aegisub.org/
[3]:http://docs.aegisub.org/3.2/Main_Page/
[4]:http://home.gna.org/gaupol/
[5]:http://cmusphinx.sourceforge.net/
[6]:http://home.gna.org/subtitleeditor/
[7]:http://www.jubler.org/
[8]:http://www.jubler.org/download.html
[9]:http://sourceforge.net/projects/subcomposer/

View File

@ -0,0 +1,100 @@
Shellshock: How to protect your Unix, Linux and Mac servers
================================================================================
> **Summary**: The Unix/Linux Bash security hole can be deadly to your servers. Here's what you need to worry about, how to see if you can be attacked, and what to do if your shields are down.
The only thing you have to fear with [Shellshock, the Unix/Linux Bash security hole][1] is fear itself. Yes, Shellshock can serve as a highway for worms and malware to hit your Unix, Linux, and Mac servers, but you can defend against it.
![](http://cdn-static.zdnet.com/i/r/story/70/00/034072/cybersecurity-v1-620x464.jpg?hash=BQMxZJWuZG&upscale=1)
If you don't patch and defend yourself against Shellshock today, you may have lost control of your servers by tomorrow.
However, Shellshock is not as bad as [HeartBleed][2]. Not yet, anyway.
While it's true that the [Bash shell][3] is the default command interpreter on most Unix and Linux systems and all Macs (which together power the majority of Web servers), for an attacker to get to your system, there has to be a way for him or her to actually reach the shell remotely. So, if you're running a PC without [ssh][4], [rlogin][5], or another remote desktop program, you're probably safe enough.
A more serious problem is faced by devices that use embedded Linux, such as routers, switches, and appliances. If you're running an older, no longer supported model, it may be close to impossible to patch it, and it will likely be vulnerable to attacks. If that's the case, you should replace it as soon as possible.
The real and present danger is for servers. According to the National Institute of Standards and Technology (NIST), [Shellshock scores a perfect 10][6] for potential impact and exploitability. [Red Hat][7] reports that the most common attack vectors are:
- **httpd (Your Web server)**: CGI [Common-Gateway Interface] scripts are likely affected by this issue: when a CGI script is run by the web server, it uses environment variables to pass data to the script. These environment variables can be controlled by the attacker. If the CGI script calls Bash, the script could execute arbitrary code as the httpd user. mod_php, mod_perl, and mod_python do not use environment variables and we believe they are not affected.
- **Secure Shell (SSH)**: It is not uncommon to restrict remote commands that a user can run via SSH, such as rsync or git. In these instances, this issue can be used to execute any command, not just the restricted command.
- **dhclient**: The [Dynamic Host Configuration Protocol Client (dhclient)][8] is used to automatically obtain network configuration information via DHCP. This client uses various environment variables and runs Bash to configure the network interface. Connecting to a malicious DHCP server could allow an attacker to run arbitrary code on the client machine.
- **[CUPS][9] (Linux, Unix and Mac OS X's print server)**: It is believed that CUPS is affected by this issue. Various user-supplied values are stored in environment variables when cups filters are executed.
- **sudo**: Commands run via sudo are not affected by this issue. Sudo specifically looks for environment variables that are also functions. It could still be possible for the running command to set an environment variable that could cause a Bash child process to execute arbitrary code.
- **Firefox**: We do not believe Firefox can be forced to set an environment variable in a manner that would allow Bash to run arbitrary commands. It is still advisable to upgrade Bash as it is common to install various plug-ins and extensions that could allow this behavior.
- **Postfix**: The Postfix [mail] server will replace various characters with a ?. While the Postfix server does call Bash in a variety of ways, we do not believe an arbitrary environment variable can be set by the server. It is however possible that a filter could set environment variables.
So much for Red Hat's thoughts. Of these, the Web servers and SSH are the ones that worry me the most. The DHCP client is also troublesome, especially if, as is the case with many small businesses, your external router doubles as your Internet gateway and DHCP server.
Of these, Web server attacks seem to be the most common by far. As Florian Weimer, a Red Hat security engineer, wrote: "[HTTP requests to CGI scripts][10] have been identified as the major attack vector." Attacks are being made against systems [running both Linux and Mac OS X][11].
Jaime Blasco, labs director at [AlienVault][12], a security management services company, ran a [honeypot][13] looking for attackers and found "[several machines trying to exploit the Bash vulnerability][14]. The majority of them are only probing to check if systems are vulnerable. On the other hand, we found two worms that are actively exploiting the vulnerability and installing a piece of malware on the system."
Other security researchers have found that the malware is the usual sort. They typically try to plant distributed denial of service (DDoS) IRC bots and attempt to guess system logins and passwords using a list of poor passwords such as 'root', 'admin', 'user', 'login', and '123456.'
So, how do you know if your servers can be attacked? First, you need to check to see if you're running a vulnerable version of Bash. To do that, run the following command from a Bash shell:
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
If you get the result:
    vulnerable
    this is a test
Bad news, your version of Bash can be hacked. If you see:
    bash: warning: x: ignoring function definition attempt
    bash: error importing function definition for `x'
    this is a test
You're good. Well, to be more exact, you're as protected as you can be at the moment.
While all major Linux distributors have released patches that stop most attacks — [Apple has not released a patch yet][15] — it has been discovered that "[patches shipped for this issue are incomplete][16]. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions." While it's unclear if these attacks can be used to hack into a system, it is clear that they can be used to crash them, thanks to a null-pointer exception.
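If your distributor has shipped only the first patch, a commonly circulated test for the follow-on hole (CVE-2014-7169) is the sketch below; if a file named echo appears and cat prints a date, the system is still exposed:

    cd /tmp
    env X='() { (a)=>\' bash -c "echo date"; cat echo
    rm -f echo   # remove the file the test creates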
Patches to fill in the [last of the Shellshock security holes][17] are being worked on now. In the meantime, you should update your servers as soon as possible with the available patches and keep an eye open for the next, fuller ones.
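Applying the available patches is typically a one-line package update; for example, on the two big packaging families:

    # Debian, Ubuntu and derivatives
    sudo apt-get update && sudo apt-get install --only-upgrade bash
    # Red Hat, Fedora, CentOS and derivatives
    sudo yum update bash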
In the meantime, if, as is likely, you're running the Apache Web server, there are some [Mod_Security][18] rules that can stop attempts to exploit Shellshock. These rules, created by Red Hat, are:
Request Header values:
SecRule REQUEST_HEADERS "^\(\) {" "phase:1,deny,id:1000000,t:urlDecode,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
SERVER_PROTOCOL values:
SecRule REQUEST_LINE "\(\) {" "phase:1,deny,id:1000001,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
GET/POST names:
SecRule ARGS_NAMES "^\(\) {" "phase:2,deny,id:1000002,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
GET/POST values:
SecRule ARGS "^\(\) {" "phase:2,deny,id:1000003,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
File names for uploads:
SecRule FILES_NAMES "^\(\) {" "phase:2,deny,id:1000004,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
It is vital that you patch your servers as soon as possible, even with the current, incomplete ones, and to set up defenses around your Web servers. If you don't, you could come to work tomorrow to find your computers completely compromised. So get out there and start patching!
--------------------------------------------------------------------------------
via: http://www.zdnet.com/shellshock-how-to-protect-your-unix-linux-and-mac-servers-7000034072/
作者:[Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]:http://www.zdnet.com/unixlinux-bash-critical-security-hole-uncovered-7000034021/
[2]:http://www.zdnet.com/heartbleed-serious-openssl-zero-day-vulnerability-revealed-7000028166
[3]:http://www.gnu.org/software/bash/
[4]:http://www.openbsd.org/cgi-bin/man.cgi?query=ssh&sektion=1
[5]:http://unixhelp.ed.ac.uk/CGI/man-cgi?rlogin
[6]:http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-7169
[7]:http://www.redhat.com/
[8]:http://www.isc.org/downloads/dhcp/
[9]:https://www.cups.org/
[10]:http://seclists.org/oss-sec/2014/q3/650
[11]:http://www.zdnet.com/first-attacks-using-shellshock-bash-bug-discovered-7000034044/
[12]:http://www.alienvault.com/
[13]:http://www.sans.org/security-resources/idfaq/honeypot3.php
[14]:http://www.alienvault.com/open-threat-exchange/blog/attackers-exploiting-shell-shock-cve-2014-6721-in-the-wild
[15]:http://apple.stackexchange.com/questions/146849/how-do-i-recompile-bash-to-avoid-the-remote-exploit-cve-2014-6271-and-cve-2014-7
[16]:https://bugzilla.redhat.com/show_bug.cgi?id=1141597#c27
[17]:http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7169
[18]:http://www.inmotionhosting.com/support/website/modsecurity/what-is-modsecurity-and-why-is-it-important

View File

@ -0,0 +1,65 @@
What Linux Users Should Know About Open Hardware
================================================================================
> What Linux users don't know about manufacturing open hardware can lead them to disappointment.
Business and free software have been intertwined for years, but the two often misunderstand one another. That's not surprising -- what is just a business to one is a way of life for the other. But the misunderstanding can be painful, which is why debunking it is worth the effort.
An increasingly common case in point: the growing attempts at open hardware, whether from Canonical, Jolla, MakePlayLive, or any of half a dozen others. Whether pundit or end-user, the average free software user reacts with exaggerated enthusiasm when a new piece of hardware is announced, then retreats into disillusionment as delay follows delay, often ending in the cancellation of the entire product.
It's a cycle that does no one any good, and often breeds distrust -- and all because the average Linux user has no idea what's happening behind the news.
My own experience with bringing products to market is long behind me. However, nothing I have heard suggests that anything has changed. Bringing open hardware or any other product to market remains not just a brutal business, but one heavily stacked against newcomers.
### Searching for Partners ###
Both the manufacturing and distribution of digital products are controlled by a relatively small number of companies, whose time can sometimes be booked months in advance. Profit margins can be tight, so like movie studios that buy the rights to an ancient sitcom, the manufacturers usually hope to clone the success of the latest hot product. As Aaron Seigo told me when talking about his efforts to develop the Vivaldi tablet, the manufacturers would much prefer that someone else take the risk of doing anything new.
Not only that, but they would prefer to deal with someone with an existing sales record who is likely to bring repeat business.
Besides, the average newcomer is looking at a product run of a few thousand units. A chip manufacturer would much rather deal with Apple or Samsung, whose order is more likely in the hundreds of thousands.
Faced with this situation, the makers of open hardware are likely to find themselves cascading down the list of manufacturers until they can find a second or third tier manufacturer that is willing to take a chance on a small run of something new.
They might be reduced to buying off-the-shelf components and assembling units themselves, as Seigo tried with Vivaldi. Alternatively, they might do as Canonical did, and find established partners that encourage the industry to take a gamble. Even if they succeed, they have usually taken months longer than they expected in their initial naivety.
### Staggering to Market ###
However, finding a manufacturer is only the first obstacle. As Raspberry Pi found out, even if the open hardware producers want only free software in their product, the manufacturers will probably insist that firmware or drivers stay proprietary in the name of protecting trade secrets.
This situation is guaranteed to set off criticism from potential users, but the open hardware producers have no choice except to compromise their vision. Looking for another manufacturer is not a solution, partly because to do so means more delays, but largely because completely free-licensed hardware does not exist. The industry giants like Samsung have no interest in free hardware, and, being new, the open hardware producers have no clout to demand any.
Besides, even if free hardware was available, manufacturers could probably not guarantee that it would be used in the next production run. The producers might easily find themselves re-fighting the same battle every time they needed more units.
As if all this is not enough, at this point the open hardware producer has probably spent 6-12 months haggling. The chances are, the industry standards have shifted, and they may have to start from the beginning again by upgrading specs.
### A Short and Brutal Shelf Life ###
Despite these obstacles, hardware with some degree of openness does sometimes get released. But remember the challenges of finding a manufacturer? They have to be repeated all over again with the distributors -- and not just once, but region by region.
Typically, the distributors are just as conservative as the manufacturers, and just as cautious about dealing with newcomers and new ideas. Even if they agree to add a product to their catalog, the distributors can easily decide not to encourage their representatives to promote it, which means that in a few months they have effectively removed it from the shelves.
Of course, online sales are a possibility. But meanwhile, the hardware has to be stored somewhere, adding to the cost. Production runs on demand are expensive even in the unlikely event that they are available, and even unassembled units need storage.
### Weighing the Odds ###
I have been generalizing wildly here, but anyone who has ever been involved in producing anything will recognize what I am describing as the norm. And just to make matters worse, open hardware producers typically discover the situation as they are going through it. Inevitably, they make mistakes, which adds still more delays.
But the point is, if you have any sense of the process at all, your knowledge is going to change how you react to news of another attempt at hardware. The process means that, unless a company has been in serious stealth mode, an announcement that a product will be out in six months will rapidly prove to be an outdated guesstimate. 12-18 months is more likely, and the obstacles I describe may mean that the product will never actually be released.
For example, as I write, people are waiting for the emergence of the first Steam Machines, the Linux-based gaming consoles. They are convinced that the Steam Machines will utterly transform both Linux and gaming.
As a market category, Steam Machines may do better than other new products, because those who are developing them at least have experience developing software products. However, none of the dozen or so Steam Machines in development have produced more than a prototype after almost a year, and none are likely to be available to buy until halfway through 2015. Given the realities of hardware manufacturing, we will be lucky if half of them see daylight. In fact, a release of 2-4 might be more realistic.
I make that prediction with next to no knowledge of any of the individual efforts. But, having some sense of how hardware manufacturing works, I suspect that it is likely to be closer to what happens next year than all the predictions of a new Golden Age for Linux and gaming. I would be entirely happy being wrong, but the fact remains: what is surprising is not that so many Linux-associated hardware products fail, but that any succeed even briefly.
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/what-linux-users-should-know-about-open-hardware-1.html
作者:[Bruce Byfield][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Bruce-Byfield-6030.html

View File

@ -1,111 +0,0 @@
alim0x translating
The history of Android
================================================================================
![Both screens of the Email app. The first two screenshots show the combined label/inbox view, and the last shows a message.](http://cdn.arstechnica.net/wp-content/uploads/2014/01/email2lol.png)
Both screens of the Email app. The first two screenshots show the combined label/inbox view, and the last shows a message.
Photo by Ron Amadeo
The message view was—surprise!—white. Android's e-mail app has historically been a watered-down version of the Gmail app, and you can see that close connection here. The message and compose views were taken directly from Gmail with almost no modifications.
![The “IM" applications. Screenshots show the short-lived provider selection screen, the friends list, and a chat.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/IM2.png)
The “IM" applications. Screenshots show the short-lived provider selection screen, the friends list, and a chat.
Photo by Ron Amadeo
Before Google Hangouts and even before Google Talk, there was "IM"—the only instant messaging client that shipped on Android 1.0. Surprisingly, multiple IM services were supported: users could pick from AIM, Google Talk, Windows Live Messenger, and Yahoo. Remember when OS creators cared about interoperability?
The friends list was a black background with white speech bubbles for open chats. Presence was indicated with colored circles, and a little Android on the right hand side would indicate that a person was mobile. It's amazing how much more communicative the IM app was than Google Hangouts. Green means the person is using a device they are signed into, yellow means they are signed in but idle, red means they have manually set busy and don't want to be bothered, and gray is offline. Today, Hangouts only shows when a user has the app open or closed.
The chats interface was clearly based on the Messaging program, and the chat backgrounds were changed from white and blue to white and green. No one changed the color of the blue text entry box, though, so along with the orange highlight effect, this screen used white, green, blue, and orange.
![YouTube on Android 1.0. The screens show the main page, the main page with the menu open, the categories screen, and the videos screen.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt5000.png)
YouTube on Android 1.0. The screens show the main page, the main page with the menu open, the categories screen, and the videos screen.
Photo by Ron Amadeo
YouTube might not have been the mobile sensation it is today with the 320p screen and 3G data speeds of the G1, but Google's video service was present and accounted for on Android 1.0. The main screen looked like a tweaked version of the Android Market, with a horizontally scrolling featured section along the top and vertically scrolling categories along the bottom. Some of Google's category choices were pretty strange: what would the difference be between "Most popular" and "Most viewed?"
In a sign that Google had no idea how big YouTube would eventually become, one of the video categories was "Most recent." Today, with [100 hours of video][1] uploaded to the site every minute, if this section actually worked it would be an unreadable blur of rapidly scrolling videos.
The menu housed search, favorites, categories, and settings. Settings (not pictured) was the lamest screen ever, housing one option to clear the search history. Categories was equally barren, showing only a black list of text.
The last screen shows a video, which only supported horizontal mode. The auto-hiding video controls weirdly had rewind and fast forward buttons, even though there was a seek bar.
![YouTubes video menu, description page, and comments.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt3.png)
YouTubes video menu, description page, and comments.
Photo by Ron Amadeo
Additional sections for each video could be brought up by hitting the menu button. Here you could favorite the video, access details, and read comments. All of these screens, like the videos, were locked to horizontal mode.
"Share" didn't bring up a share dialog yet; it just kicked the link out to a Gmail message. Texting or IMing someone a link wasn't possible. Comments could be read, but you couldn't rate them or post your own. You couldn't rate or like a video either.
![The camera apps picture taking interface, menu, and photo review mode.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/camera.png)
The camera apps picture taking interface, menu, and photo review mode.
Photo by Ron Amadeo
Real Android on real hardware meant a functional camera app, even if there wasn't much to look at. That black square on the left was the camera interface, which should be showing a viewfinder image, but the SDK screenshot utility can't capture it. The G1 had a hardware camera button (remember those?), so there wasn't a need for an on-screen shutter button. There were no settings for exposure, white balance, or HDR—you could take a picture and that was about it.
The menu button revealed a meager two options: a way to jump to the Pictures app and Settings screen with two options. The first settings option was whether or not to enable geotagging for pictures, and the second was for a dialog prompt after every capture, which you can see on the right. Also, you could only take pictures—there was no video support yet.
![The Calendars month view, week view with the menu open, day view, and agenda.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/calviews.png)
The Calendars month view, week view with the menu open, day view, and agenda.
Photo by Ron Amadeo
Like most apps of this era, the primary command interface for the calendar was the menu. It was used to switch views, add a new event, navigate to the current day, pick visible calendars, and go to the settings. The menu functioned as a catch-all for every single button.
The month view couldn't show appointment text. Every date had a bar next to it, and appointments were displayed as green sections in the bar denoting what time of day an appointment was. Week view couldn't show text either—the 320×480 display of the G1 just wasn't dense enough—so you got a white block with a strip of color indicating which calendar it was from. The only views that provided text were the agenda and day views. You could move through dates by swiping—week and day used left and right, and month and agenda used up and down.
![The main settings page, the Wireless section, and the bottom of the about page.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/settings.png)
The main settings page, the Wireless section, and the bottom of the about page.
Photo by Ron Amadeo
Android 1.0 finally brought a settings screen to the party. It was a black and white wall of text that was roughly broken down into sections. Down arrows next to each list item confusingly looked like they would expand in-line to show more of something, but touching anywhere on the list item would just load the next screen. All the screens were pretty boring and samey looking, but hey, it's a settings screen.
Any option with an on/off state used a cartoony-looking checkbox. The original checkboxes in Android 1.0 were pretty strange—even when they were "unchecked," they still had a gray check mark in them. Android treated the check mark like a light bulb that would light up when on and be dim when off, but that's not how checkboxes work. We did finally get an "About" page, though. Android 1.0 ran Linux kernel 2.6.25.
A settings screen means we can finally open the security settings and change lock screens. Android 1.0 only had two styles, the gray square lock screen pictured in the Android 0.9 section, and pattern unlock, which required you to draw a pattern over a grid of 9 dots. A swipe pattern like this was easier to remember and input than a PIN even if it did not add any more security.
![The Voice Dialer, pattern lock screen, low battery warning, and time picker.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/grabbag.png)
The Voice Dialer, pattern lock screen, low battery warning, and time picker.
Photo by Ron Amadeo
Voice functions arrived in 1.0 with Voice Dialer. This feature hung around in various capacities in AOSP for a while, as it was a simple voice command app for calling numbers and contacts. Voice Dialer was completely unrelated to Google's future voice products, however, and it worked the same way a voice dialer on a dumbphone would work.
As a final note, a low battery popup would occur when the battery dropped below 15 percent. It was a funny graphic, depicting plugging the wrong end of the power cord into the phone. That wasn't (and still isn't) how phones work, Google.
Android 1.0 was a great first start, but there were still so many gaps in functionality. Physical keyboards and tons of hardware buttons were mandatory, as Android devices were still not allowed to be sold without a d-pad or trackball. Base smartphone functionality like auto-rotate wasn't here yet, either. Updates for built-in apps weren't possible through the Android Market the way they are today. All the Google Apps were interwoven with the operating system. If Google wanted to update a single app, an update for the entire operating system needed to be pushed out through the carriers. There was still a lot of work to do.
### Android 1.1—the first truly incremental update ###
![All of Android 1.1's new features: Search by voice, the Android Market showing paid app support, Google Latitude, and the new "system updates" option in the settings.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/11.png)
All of Android 1.1's new features: Search by voice, the Android Market showing paid app support, Google Latitude, and the new "system updates" option in the settings.
Photo by Ron Amadeo
Four and a half months after Android 1.0, in February 2009, Android got its first public update in Android 1.1. Not much changed in the OS, and just about every new thing Google added with 1.1 has been shut down by now. Google Voice Search was Android's first foray into cloud-powered voice search, and it had its own icon in the app drawer. While the app can't communicate with Google's servers anymore, you can check out how it used to work [on the iPhone][2]. It wasn't yet Voice Actions, but you could speak and the results would go to a simple Google Search.
Support for paid apps was added to the Android Market, but just like the beta client, this version of the Android Market could no longer connect to the Google Play servers. The most that we could get to work was this sorting screen, which lets you pick between displaying free apps, paid apps, or a mix of both.
Maps added [Google Latitude][3], a way to share your location with friends. Latitude was shut down in favor of Google+ a few months ago and no longer works. There was still an option for it in the Maps menu, but tapping on it just brought up a loading spinner forever.
Given that system updates come quickly in the Android world—or at least, that was the plan before carriers and OEMs got in the way—Google also added a button to the "About Phone" screen to check for system updates.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.youtube.com/yt/press/statistics.html
[2]:http://www.youtube.com/watch?v=y3z7Tw1K17A
[3]:http://arstechnica.com/information-technology/2009/02/google-tries-location-based-social-networking-with-latitude/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -1,107 +0,0 @@
[bazz2 hehe]
How to use systemd for system administration on Debian
================================================================================
Soon enough, hardly any Linux user will be able to escape the ever-growing grasp that systemd imposes on Linux, unless they manually opt out. systemd has created more technical, emotional, and social issues than any other piece of software as of late. This was most evident in the [heated discussions][1], dubbed the 'Init Wars', that occupied parts of the Debian developer body for months. While the Debian Technical Committee finally decided to include systemd in Debian 8 "Jessie", there were efforts to [supersede the decision][2] by a General Resolution, and even threats against the health of developers in favor of systemd.
This goes to show how deeply systemd interferes with the way of handling Linux systems that has, in large part, been passed down to us from the Unix days. Maxims like "one tool for one job" are overthrown by the new kid in town. Besides substituting for sysvinit as the init system, it digs deep into system administration. For now, a lot of the commands you are used to will keep working thanks to the compatibility layer provided by the package systemd-sysv. That might change as soon as systemd 214 is uploaded to Debian, destined to be released in the stable branch with Debian 8 "Jessie". From then on, users need to utilize the new commands that come with systemd for managing services, processes, switching run levels, and querying the logging system. A workaround is to set up aliases in .bashrc.
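For instance, a couple of illustrative aliases for ~/.bashrc (the names are our own choice, not a systemd convention):
alias jlog='journalctl -b'        # journal of the current boot
alias sstat='systemctl status'    # quick status check for a unit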
So let's have a look at how systemd will change your habits of administering your computers, and the pros and cons involved. Before making the switch to systemd, it is a good security measure to save the old sysvinit to still be able to boot, should systemd fail. This will only work as long as systemd-sysv is not yet installed, and can easily be done by running:
# cp -av /sbin/init /sbin/init.sysvinit
Thusly prepared, in case of emergency, just append:
init=/sbin/init.sysvinit
to the kernel boot-time parameters.
### Basic Usage of systemctl ###
systemctl is the command that substitutes the old "/etc/init.d/foo start/stop", but also does a lot more, as you can learn from its man page.
Some basic use-cases are:
- systemctl - list all loaded units and their state (where unit is the term for a job/service)
- systemctl list-units - list all units
- systemctl start [NAME...] - start (activate) one or more units
- systemctl stop [NAME...] - stop (deactivate) one or more units
- systemctl disable [NAME...] - disable one or more unit files
- systemctl list-unit-files - show all installed unit files and their state
- systemctl --failed - show which units failed during boot
- systemctl --type=mount - filter for types; types could be: service, mount, device, socket, target
- systemctl enable debug-shell.service - start a root shell on TTY 9 for debugging
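As a concrete round-trip for a single unit, a typical session might look like this (using cron.service, which also appears in the journal examples later, as a stand-in):
$ systemctl status cron.service
# systemctl restart cron.service
# systemctl enable cron.service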
For more convenience in handling units, there is the package systemd-ui, which is started as a normal user with the command systemadm.
Switching runlevels, reboot and shutdown are also handled by systemctl:
- systemctl isolate graphical.target - take you to what you know as init 5, where your X-server runs
- systemctl isolate multi-user.target - take you to what you know as init 3, TTY, no X
- systemctl reboot - shut down and reboot the system
- systemctl poweroff - shut down the system
All these commands, other than the ones for switching runlevels, can be executed as normal user.
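On recent systemd versions you can also query, or permanently change, the default boot target (the analogue of the default runlevel):
$ systemctl get-default
# systemctl set-default multi-user.target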
### Basic Usage of journalctl ###
systemd does not only boot machines faster than the old init system, it also starts logging much earlier, including messages from the kernel initialization phase, the initial RAM disk, the early boot logic, and the main system runtime. So the days when you needed a camera to capture the output of a kernel panic or an otherwise stalled system for debugging are mostly over.
With systemd, logs are aggregated in the journal, which resides in /var/log/journal. To be able to make full use of the journal, we first need to set it up, as Debian does not do that for you yet:
# addgroup --system systemd-journal
# mkdir -p /var/log/journal
# chown root:systemd-journal /var/log/journal
# gpasswd -a $user systemd-journal
That will set up the journal in a way where you can query it as normal user. Querying the journal with journalctl offers some advantages over the way syslog works:
- journalctl --all - show the full journal of the system and all its users
- journalctl -f - show a live view of the journal (equivalent to "tail -f /var/log/messages")
- journalctl -b - show the log since the last boot
- journalctl -k -b -1 - show all kernel logs from the boot before last (-b -1)
- journalctl -b -p err - shows the log of the last boot, limited to the priority "ERROR"
- journalctl --since=yesterday - since Linux people normally do not often reboot, this limits the size more than -b would
- journalctl -u cron.service --since='2014-07-06 07:00' --until='2014-07-06 08:23' - show the log for cron for a defined timeframe
- journalctl -p 2 --since=today - show the log for priority 2, which covers emerg, alert and crit; resembles syslog priorities emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), debug (7)
- journalctl > yourlog.log - copy the binary journal as text into your current directory
Journal and syslog can work side-by-side. On the other hand, you can remove any syslog packages like rsyslog or syslog-ng once you are satisfied with the way the journal works.
For very detailed output, append "systemd.log_level=debug" to the kernel boot-time parameter list, and then run:
# journalctl -alb
Log levels can also be edited in /etc/systemd/system.conf.
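Related journald settings live in /etc/systemd/journald.conf; for instance, persistent storage can be requested there explicitly instead of relying on the presence of the /var/log/journal directory (a minimal excerpt, assuming your systemd version supports the Storage= option):
[Journal]
Storage=persistent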
### Analyzing the Boot Process with systemd ###
systemd allows you to effectively analyze and optimize your boot process:
- systemd-analyze - show how long the last boot took for kernel and userspace
- systemd-analyze blame - show details of how long each service took to start
- systemd-analyze critical-chain - print a tree of the time-critical chain of units
- systemd-analyze dot | dot -Tsvg > systemd.svg - produce a vector graphic of your boot process (requires the graphviz package)
- systemd-analyze plot > bootplot.svg - generate a graphical timechart of the boot process
![](https://farm6.staticflickr.com/5559/14607588994_38543638b3_z.jpg)
![](https://farm6.staticflickr.com/5565/14423020978_14b21402c8_z.jpg)
systemd has pretty good documentation for such a young project under heavy development. First of all, there is the [0pointer series by Lennart Poettering][3]. The series is highly technical and quite verbose, and holds a wealth of information. Another good source is the distro-agnostic [Freedesktop info page][4] with the largest collection of links to systemd resources, distro-specific pages, bugtrackers and documentation. A quick glance at:
# man systemd.index
will give you an overview of all systemd man pages. The command structure for systemd for various distributions is pretty much the same, differences are found mainly in the packaging.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/07/use-systemd-system-administration-debian.html
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://lists.debian.org/debian-devel/2013/10/msg00444.html
[2]:https://lists.debian.org/debian-devel/2014/02/msg00316.html
[3]:http://0pointer.de/blog/projects/systemd.html
[4]:http://www.freedesktop.org/wiki/Software/systemd/

View File

@ -1,136 +0,0 @@
zpl1025
Build a Raspberry Pi Arcade Machine
================================================================================
**Relive the golden majesty of the 80s with a little help from a marvel of the current decade.**
### WHAT YOU'LL NEED ###
- Raspberry Pi w/4GB SD-CARD.
- HDMI LCD monitor.
- Games controller or…
- A JAMMA arcade cabinet.
- J-Pac or I-Pac.
The 1980s were memorable for many things; the end of the Cold War, a carbonated drink called Quatro, the Korg Polysix synthesiser and the Commodore 64. But to a certain teenager, none of these were as potent, or as perhaps familiarly illicit, as the games arcade. Enveloped by cigarette smoke and a barrage of 8-bit sound effects, they were caverns you visited only on borrowed time: 50 pence and a portion of chips to see you through lunchtime while you honed your skills at Galaga, Rampage, Centipede, Asteroids, Ms Pacman, Phoenix, R-Type, Donkey Kong, Rolling Thunder, Gauntlet, Street Fighter, Outrun, Defender… The list is endless.
These games, and the arcade machine form factor that held them, are just as compelling today as they were 30 years ago. And unlike the teenage version of yourself, you can now play many of them without needing a pocket full of change, finally giving you an edge over the rich kids and their endless Continues. It's time to build your own Linux-based arcade machine and beat that old high score.
We're going to cover all the steps required to turn a cheap shell of an arcade machine into a Linux-powered multi-platform retro games system. But that doesn't mean you've got to build the whole system at the same scale. You could, for example, forgo the large, heavy and potentially carcinogenic hulk of the cabinet itself and stuff the controlling innards into an old games console or an even smaller case. Or you could just as easily forgo the diminutive Raspberry Pi and replace the brains of your system with a much more capable Linux machine. This might make an ideal platform for SteamOS, for example, and for playing some of its excellent modern arcade games.
Over the next few pages we'll construct a Raspberry Pi-based arcade machine, but you should be able to see plenty of ideas for your own projects, even if they don't look just like ours. And because we're building it on the staggeringly powerful MAME, you'll be able to get it running on almost anything.
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade3.png)
We did this project before the model B+ came out. It should all work exactly the same on the newer board, and you should be able to get by without a powered USB Hub (click for larger).
### Disclaimer ###
Once again we're messing with electrical components that could cause you a shock. Make sure you get any modifications you make checked by a qualified electrician. We don't go into any details on how to obtain games, but there are legal sources such as old games releases and newer commercial titles based on the MAME emulator.
#### Step1: The Cabinet ####
The cabinet itself is the biggest challenge. We bought an old two-player Bubble Bobble machine from the early 90s from eBay. It cost £220 delivered in the back of an old estate car. The prices for cabinets like these can vary. We've seen many for less than £100. At the other end of the scale, people pay thousands for machines with original decals on the side.
There are two major considerations when it comes to buying a cabinet. The first is the size: these things are big and heavy. They take up a lot of space and it takes at least two people to move them around. If you've got the money, you can buy DIY cabinets or new smaller form-factors, such as cabinets that fit on tables. And cocktail cabinets can be easier to fit, too.
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade4.jpg)
Cabinets can be cheap, but they're heavy. Don't lift them on your own. Older ones may need some TLC, such as a re-spray and some repair work (click for larger).
One of the best reasons for buying an original cabinet, apart from getting a much more authentic gaming experience, is being able to use the original controls. Many machines you can buy on eBay will be for two concurrent players, with two joysticks and a variety of buttons for each player, plus the player one and player two controls. For compatibility with the widest number of games, we'd recommend finding a machine with six buttons for each player, which is a common configuration. You might also want to look into a panel with more than two players, or one with space for other input controllers, such as an arcade trackball (for games like Marble Madness), or a spinner (Arkanoid). These can be added without too much difficulty later, as modern USB devices exist.
Controls are the second, and we'd say the most important, consideration, because it's these that transfer your twitches and tweaks into game movement. What you need to look for when buying a cabinet is something called JAMMA, an acronym for Japan Amusement Machinery Manufacturers. JAMMA is a standard in arcade machines that defines how the circuit board containing the game chips connects to the game controllers and the coin mechanism. It's an interface conduit for all the cables coming from the buttons and the joysticks, for two players, bringing them into a standard edge connector. The JAMMA part is the size and layout of this connector, and it means the buttons and controls will be connected to the same functions on whichever board you install, so that the arcade owner would only have to change the cabinet artwork to bring in new players.
But first, a word of warning: the JAMMA connector also carries the 12V power supply, usually from a power unit installed in most arcade machines. We disconnected the power supply completely to avoid damaging anything with a wayward short-circuit or dropped screwdriver. We don't use any of the power connectors in any further stage of the tutorial.
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade2.png)
#### Step 2: J-PAC ####
What's brilliant is that you can buy a device that connects to the JAMMA connector inside your cabinet and a USB port on your computer, transforming all the button presses and joystick movements into (configurable) keyboard commands that you can use from Linux to control any game you wish. This device is called the J-Pac ([www.ultimarc.com/jpac.html][1] approximately £54).
Its best feature isn't the connectivity; it's the way it handles and converts the input signals, because it's vastly superior to a standard USB joystick. Every input generates its own interrupt, and there's no limit to the number of simultaneous buttons and directions you can press or hold down. This is vital for games like Street Fighter, because they rely on chords of buttons being pressed simultaneously and quickly, but it's also essential when delivering the killing blow to cheating players who sulk and hold down all their own buttons. Many other controllers, especially those that create keyboard inputs, are restricted by their USB keyboard controllers to six inputs and a variety of Alt, Shift and Ctrl hacks. The J-Pac can also be connected to a tilt sensor and even some coin mechanisms, and it works in Linux without any pre-configuration.
Another option is a similar device called an I-Pac. It does the same thing as the J-Pac, only without the JAMMA connector. That means you can't connect your JAMMA controls, but it does mean you can design your own controller layout and wire each control to the I-Pac yourself. This might be a little ambitious for a first project, but it's a route that many arcade aficionados take, especially when they want to design a panel for four players, or one that incorporates many different kinds of controls. Our approach isn't necessarily one we'd recommend, but we re-wired an old X-Arcade Tankstick control panel that suffered from input contention, replaced the joysticks and buttons with new units and connected it to a new JAMMA harness, which is an excellent way of buying all the cables you need plus the edge connector for a low price (£8).
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade5.jpg)
Our J-Pac in situ. The blue and red wires on the right connect to the extra 1- and 2-player buttons on our cabinet (click for larger).
Whether you choose an I-Pac or a J-Pac, all the keys generated by both devices are the default values for MAME. That means you won't have to make any manual input changes when you start to run the emulator. Player 1, for example, creates cursor up, down, left and right as well as left Ctrl, left Alt, Space and left Shift for fire buttons 1-4. But the really useful feature, for us, is the two-button shortcuts. While holding down the player 1 button, you can generate the P key to pause the game by pulling down on the player 1 joystick, adjust the volume by pressing up, and enter MAME's own configuration menu by pushing right. These escape codes are cleverly engineered to not get in the way of playing games, as they're only activated when holding down the Player 1 button, and they enable you to do almost anything you need to from within a running game. You can completely reconfigure MAME, for example, using its own menus, and change input assignments and sensitivity while playing the game itself.
Finally, holding down Player 1 and then pressing Player 2 will quit MAME, which is useful if you're using a launch menu or MAME manager, as these handle launching games automatically, and let you get on with playing another game as quickly as possible.
We took a rather cowardly route with the screen, removing the original, bulky and broken CRT that came with the cabinet and replacing it with a low-cost LCD monitor. This approach has many advantages. First, the screen has HDMI, so it will interface with a Raspberry Pi or a modern graphics card without any difficulty. Second, you don't have to configure the low-frequency update modes required to drive an arcade machine's screen, nor do you need the specific graphics hardware that drives it. And third, this is the safest option, because an arcade machine's screen is often unprotected from the rear of a case, leaving very high voltages inches away from your hands. That's not to say you shouldn't use a CRT if that's the experience you're after: it's the most authentic way to get the gaming experience you want. But we've fine-tuned the CRT emulation enough in software that we're happy with the output, and we're definitely happier not to be using an ageing CRT.
You might also want to look into using an older LCD with a 4:3 aspect ratio, rather than the widescreen modern options, because 4:3 is more practical for playing both vertical and horizontal games. A vertical shooter such as Raiden, for example, will have black bars on either side of the gaming area if you use a widescreen monitor. Those black bars can be used to display the game instructions, or you could rotate the screen 90 degrees so that every pixel is used, but this is impractical unless you're only going to play vertical games or have easy access to a rotating mount.
Mounting a screen is also important. If you've removed a CRT, there's nowhere for an LCD to go. Our solution was to buy some MDF cut to fit the space where the CRT was. This was then screwed into position and we fitted a cheap VESA mounting plate into the centre of the new MDF. VESA mounts can be used by the vast majority of screens, big and small. Finally, because our cabinet was fronted with smoked glass, we had to be sure both the brightness and contrast were set high enough.
### Step 3: Installation ###
With the large hardware choices now made, and presumably the cabinet close to where you finally want to install it, putting the physical pieces together isn't that difficult. We safely split the power input from the rear of the cabinet and wired a multiple socket into the space at the back. We did this to the cable after it connects to the power switch.
Nearly all arcade cabinets have a power switch on the top-right surface, but there's usually plenty of cable to splice into this at a lower point in the cabinet, and it meant we could use normal power connectors for our equipment. Our cabinet has a fluorescent tube, used to backlight the top marquee on the machine, connected directly to the power, and we were able to keep this connected by attaching a regular plug. When you turn the power on from the cabinet switch, power flows to the components inside the case; your Raspberry Pi and screen will come on, and all will be well with the world.
The J-Pac slides straight into the JAMMA interface, but you may also have to do a little manual wiring. The JAMMA standard only supports up to three buttons for each player (although many unofficially support four), while the J-Pac can handle up to six buttons. To get those extra buttons connected, you need to connect one side of the button's switch to GND fed from the J-Pac, with the other side of the switch going into one of the screw-mounted inputs in the side of the J-Pac. These are labelled 1SW4, 1SW5, 1SW6, 2SW4, 2SW5 and 2SW6. The J-Pac also includes passthrough connections for audio, but we've found this to be incredibly noisy. Instead, we wired the speaker in our cabinet to an old SoundBlaster amplifier and connected this to the audio outputs on the Raspberry Pi. You don't need the audio to be pristine, but you do want it to be loud enough.
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade6.jpg)
Our Raspberry Pi is now connected to the J-Pac on the left and both the screen and the USB hub (click for larger).
The J-Pac or I-Pac then connects to your PC or Raspberry Pi using a PS2-to-USB cable. There is an additional option to use an old PS2 connector, if your PC is old enough to have one, but we found in testing that the USB performance is identical. This won't apply to the PS2-less Raspberry Pi, of course, and don't forget that the Pi will also need powering. We always recommend doing so from a compatible powered hub, as a lack of power is the most common source of Raspberry Pi errors. You'll also need to get networking to your Raspberry Pi, either through the Ethernet port (perhaps using a powerline adaptor hidden in the cabinet), or by using a wireless USB device. Networking is essential because it enables you to reconfigure your Pi while it's tucked away within the cabinet, and it also enables you to change settings and perform administration tasks without having to connect a keyboard or mouse.
> ### Coin Mechanism ###
> In the emulation community, getting your coin mechanism to work with your emulator was often considered a step too close to commercial production. It meant you could potentially charge people to use your machine. Not only would this be wrong, but considering the provenance of many of the games you run on your own arcade machine, it could also be illegal. And it's definitely against the spirit of emulation. However, we and many other devotees think that a working coin mechanism is another step closer to the realism of an arcade machine, and is worth the effort in recreating the nostalgia of an old arcade. There's nothing like dropping a 10p piece into the coin tray and hearing the sound of the credits being added to the machine.
> It's not actually that difficult. It depends on the coin mechanism in your arcade machine and how it sends a signal to say how many credits have been inserted. Most coin mechanisms come in two parts. The large part is the coin acceptor/validator. This is the physical side of the process that detects whether a coin is authentic, and determines its value. It does this with the help of a credit/logic board, usually attached via a ribbon cable and featuring lots of DIP switches. These switches are used to change which coins are accepted and how many credits they generate. It's then usually as simple as finding the output switch, which is triggered with a credit, and connecting this to the coin input on your JAMMA connector, or directly onto the J-Pac. Our coin mechanism is a Mars MS111, common in the UK in the early 90s, and there's plenty of information online about what each of the DIP switches does, as well as how to programme the controller for newer coins. We were also able to wire the 12V connector from the mechanism to a small light behind the coin entry slot.
#### Step 4: Software ####
MAME is the only viable emulator for a project of this scale, and it now supports many thousands of different games running on countless different platforms, from the first arcade machines through to some more recent ones. It's a project that has also spawned MESS, the multi-emulator super system, which targets platforms such as home computers and consoles from the 80s and 90s.
Configuring MAME could take a six-page article in itself. It's a complex, sprawling, magnificent piece of software that emulates so many CPUs, so many sound devices, chips and controllers, with so many options, that like MythTV, you never really stop configuring it.
But there's an easier option, and one that's purpose-built for the Raspberry Pi. It's called PiMAME. This is both a distribution download and a script you can run on top of Raspbian, the Pi's default distribution. Not only does it install MAME on your Raspberry Pi (which is useful because it's not part of any of the default repositories), it also installs a selection of other emulators along with front-ends to manage them. MAME, for example, is a command-line utility with dozens of options. But PiMAME has another clever trick up its sleeve: it installs a simple web server that enables you to install new games through a browser connected to your network. This is a great advantage, because getting games into the correct folders is one of the trials of dealing with MAME, and it also enables you to make best use of whatever storage you've got connected to your Pi. Plus, PiMAME will update itself from the same script you use to install it, so keeping on top of updates couldn't be easier. This could be especially useful at the moment, as at the time of writing the project was on the cusp of a major upgrade in the form of the 0.8 release. We found it slightly unstable in early March, but we're sure everything will be sorted by the time you read this.
The best way to install PiMAME is to install Raspbian first. You can do this either through NOOBS, using a graphical tool from your desktop, or by using the dd command to copy the contents of the Raspbian image directly onto your SD card. As we mentioned in last month's BrewPi tutorial, this process has been documented many times before, so we won't waste the space here. Just install NOOBS if you want the easy option, following the instructions on the Raspberry Pi site. With Raspbian installed and running, make sure you use the configuration tool to free the space on your SD card, and that the system is up to date (sudo apt-get update; sudo apt-get upgrade). You then need to make sure you've got the git package installed. Any recent version of Raspbian will have installed git already, but you can check by typing sudo apt-get install git.
You then have to type the following command to clone the PiMAME installer from the project's GitHub repository:
git clone https://github.com/ssilverm/pimame_installer
After that, you should get the following feedback if the command works:
Cloning into pimame_installer...
remote: Reusing existing pack: 2306, done.
remote: Total 2306 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (2306/2306), 4.61 MiB | 11 KiB/s, done.
Resolving deltas: 100% (823/823), done.
This command will create a new folder called pimame_installer, and the next step is to switch into this and run the script it contains:
cd pimame_installer/
sudo ./install.sh
This command installs and configures a lot of software. The length of time it takes will depend on your internet connection, as a lot of extra packages are downloaded. Our humble Pi with a 15Mb internet connection took around 45 minutes to complete the script, after which you're invited to restart the machine. You can do this safely by typing sudo shutdown -r now, as this command will automatically handle any remaining write operations to the SD card.
And that's all there is to the installation. After rebooting your Pi, you will be automatically logged in and the PiMAME launch menu will appear. It's a great-looking interface in version 0.8, with photos of each of the platforms supported, plus small red icons to indicate how many games you've got installed. This should now be navigable through your controller. If you want to make sure the controller is correctly detected, use SSH to connect to your Pi and check for the existence of **/dev/input/by-id/usb-Ultimarc_I-PAC_Ultimarc_I-PAC-event-kbd**.
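A quick way to run that check over an SSH session (any entry mentioning Ultimarc means the controller has been detected as a keyboard device):
$ ls /dev/input/by-id/ | grep -i ultimarc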
The default keyboard controls will enable you to select what kind of emulator you want to run on your arcade machine. The option we're most interested in is the first, labelled AdvMAME, but you might also be surprised to see another MAME on offer, MAME4ALL. MAME4ALL is built specifically for the Raspberry Pi, and takes an old version of the MAME source code so that the performance of the ROMs that it does support is optimal. This makes a lot of sense, because there's no way your Pi is going to be able to play anything too demanding, so there's no reason to belabour the emulator with unneeded compatibility. All that's left to do now is get some games onto your system (see the boxout below), and have fun!
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade1.png)
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/arcade-machine/
作者:[Ben Everard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxvoice.com/author/ben_everard/
[1]:http://www.ultimarc.com/jpac.html

View File

@ -1,3 +1,4 @@
Translating by SPccman
How to configure SNMPv3 on ubuntu 14.04 server
================================================================================
Simple Network Management Protocol (SNMP) is an "Internet-standard protocol for managing devices on IP networks". Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks and more. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.
@ -96,4 +97,4 @@ via: http://www.ubuntugeek.com/how-to-configure-snmpv3-on-ubuntu-14-04-server.ht
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,468 +0,0 @@
[felixonmars translating...]
Linux Tutorial: Install Ansible Configuration Management And IT Automation Tool
================================================================================
![](http://s0.cyberciti.org/uploads/cms/2014/08/ansible_core_circle.png)
Today I will be talking about ansible, a powerful configuration management solution written in Python. There are many configuration management solutions available, all with pros and cons; ansible stands apart from many of them for its simplicity. What makes ansible different from many of the most popular configuration management systems is that it's agent-less: no need to set up agents on every node you want to control. Plus, this has the benefit of being able to control your entire infrastructure from more than one place, if needed. Whether that last point is really a benefit may be debatable, but I find it a positive in most cases. Enough talk, let's get started with Ansible installation and configuration on RHEL/CentOS and Debian/Ubuntu based systems.
### Prerequisites ###
1. Distro: RHEL/CentOS/Debian/Ubuntu Linux
1. Jinja2: A modern and designer friendly templating language for Python.
1. PyYAML: A YAML parser and emitter for the Python programming language.
1. paramiko: Native Python SSHv2 protocol library.
1. httplib2: A comprehensive HTTP client library.
1. Most of the actions listed in this post are written with the assumption that they will be executed by the root user running the bash or any other modern shell.
### How Ansible works ###
The Ansible tool uses no agents. It requires no additional custom security infrastructure, so it's easy to deploy. All you need is an ssh client and server:
+----------------------+ +---------------+
|Linux/Unix workstation| SSH | file_server1 |
|with Ansible |<------------------>| db_server2 | Unix/Linux servers
+----------------------+ Modules | proxy_server3 | in local/remote
192.168.1.100 +---------------+ data centers
Where,
1. 192.168.1.100 - Install Ansible on your local workstation/server.
1. file_server1..proxy_server3 - Use 192.168.1.100 and Ansible to automate configuration management of all servers.
1. SSH - Set up ssh keys between 192.168.1.100 and the local/remote servers.
### Ansible Installation Tutorial ###
Installation of ansible is a breeze; many distributions have a package available in their 3rd-party repos which can easily be installed. A quick alternative is to just pip install it or grab the latest copy from GitHub. To install using your package manager, on [RHEL/CentOS Linux based systems you will most likely need the EPEL repo][1] first, then:
#### Install ansible on a RHEL/CentOS Linux based system ####
Type the following [yum command][2]:
$ sudo yum install ansible
#### Install ansible on a Debian/Ubuntu Linux based system ####
Type the following [apt-get command][3]:
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
#### Install ansible using pip ####
The [pip command is a tool for installing and managing Python packages][4], such as those found in the Python Package Index. The following method works on Linux and Unix-like systems:
$ sudo pip install ansible
#### Install the latest version of ansible using source code ####
You can install the latest version from github as follows:
$ cd ~
$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ source ./hacking/env-setup
When running ansible from a git checkout, one thing to remember is that you will need to set up your environment every time you want to use it, or you can add it to your .bashrc file:
# ADD TO BASH RC
$ echo "export ANSIBLE_HOSTS=~/ansible_hosts" >> ~/.bashrc
$ echo "source ~/ansible/hacking/env-setup" >> ~/.bashrc
The hosts file for ansible is basically a list of hosts that ansible is able to perform work on. By default ansible looks for the hosts file at /etc/ansible/hosts, but there are ways to override that, which can be handy if you are working with multiple installs or have several different clients whose datacenters you are responsible for. You can pass the hosts file on the command line using the -i option:
$ ansible all -m shell -a "hostname" --ask-pass -i /etc/some/other/dir/ansible_hosts
My preference, however, is to use an environment variable; this can be useful if you source a different file when starting work for a specific client. The environment variable is $ANSIBLE_HOSTS, and can be set as follows:
$ export ANSIBLE_HOSTS=~/ansible_hosts
Once all requirements are installed and you have your hosts file set up, you can give it a test run. For a quick test I put 127.0.0.1 into the ansible hosts file as follows:
$ echo "127.0.0.1" > ~/ansible_hosts
Now let's test with a quick ping:
$ ansible all -m ping
OR ask for the ssh password:
$ ansible all -m ping --ask-pass
I have run across a problem a few times with the initial setup. It is highly recommended that you set up keys for ansible to use, but in the previous test we used --ask-pass; on some machines you will need [to install sshpass][5] or add -c paramiko like so:
$ ansible all -m ping --ask-pass -c paramiko
Or you [can install sshpass][6]; however, sshpass is not always available in the standard repos, so paramiko can be easier.
### Setup SSH Keys ###
Now that we have gotten the configuration and other simple stuff out of the way, let's move on to doing something productive. A lot of the power of ansible lies in playbooks, which are basically scripted ansible runs (for the most part), but we will start with some one-liners before we build out a playbook. Let's start with creating and configuring keys so we can avoid the -c and --ask-pass options:
$ ssh-keygen -t rsa
Sample outputs:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mike/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mike/.ssh/id_rsa.
Your public key has been saved in /home/mike/.ssh/id_rsa.pub.
The key fingerprint is:
94:a0:19:02:ba:25:23:7f:ee:6c:fb:e8:38:b4:f2:42 mike@ultrabook.linuxdork.com
The key's randomart image is:
+--[ RSA 2048]----+
|... . . |
|. . + . . |
|= . o o |
|.* . |
|. . . S |
| E.o |
|.. .. |
|o o+.. |
| +o+*o. |
+-----------------+
Now obviously there are plenty of ways to put this in place on the remote machine, but since we are using ansible, let's use that:
$ ansible all -m copy -a "src=/home/mike/.ssh/id_rsa.pub dest=/tmp/id_rsa.pub" --ask-pass -c paramiko
Sample outputs:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"dest": "/tmp/id_rsa.pub",
"gid": 100,
"group": "users",
"md5sum": "bafd3fce6b8a33cf1de415af432774b4",
"mode": "0644",
"owner": "mike",
"size": 410,
"src": "/home/mike/.ansible/tmp/ansible-tmp-1407008170.46-208759459189201/source",
"state": "file",
"uid": 1000
}
Next, add the public key on the remote server:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko
Sample outputs:
SSH password:
127.0.0.1 | FAILED | rc=1 >>
/bin/sh: /root/.ssh/authorized_keys: Permission denied
Whoops, we want to be able to run things as root, so let's add a -u option:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko -u root
Sample outputs:
SSH password:
127.0.0.1 | success | rc=0 >>
Please note, I wanted to demonstrate a file transfer using ansible; there is, however, a more built-in way of managing keys using ansible:
$ ansible all -m authorized_key -a "user=mike key='{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}' path=/home/mike/.ssh/authorized_keys manage_dir=no" --ask-pass -c paramiko
Sample outputs:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"gid": 100,
"group": "users",
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq+Z8/usprXk0aCAPyP0TGylm2MKbmEsHePUOd7p5DO1QQTHak+9gwdoJJavy0yoUdi+C+autKjvuuS+vGb8+I+8mFNu5CvKiZzIpMjZvrZMhHRdNud7GuEanusTEJfi1pUd3NA2iXhl4a6S9a/4G2mKyf7QQSzI4Z5ddudUXd9yHmo9Yt48/ASOJLHIcYfSsswOm8ux1UnyeHqgpdIVONVFsKKuSNSvZBVl3bXzhkhjxz8RMiBGIubJDBuKwZqNSJkOlPWYN76btxMCDVm07O7vNChpf0cmWEfM3pXKPBq/UBxyG2MgoCGkIRGOtJ8UjC/daadBUuxg92/u01VNEB mike@ultrabook.linuxdork.com",
"key_options": null,
"keyfile": "/home/mike/.ssh/authorized_keys",
"manage_dir": false,
"mode": "0600",
"owner": "mike",
"path": "/home/mike/.ssh/authorized_keys",
"size": 410,
"state": "file",
"uid": 1000,
"unique": false,
"user": "mike"
}
Now that the keys are in place, let's try running an arbitrary command, like hostname, and hope we don't get prompted for a password:
$ ansible all -m shell -a "hostname" -u root
Sample outputs:
127.0.0.1 | success | rc=0 >>
Success!!! Now that we can run commands as root and not be bothered by using a password, we are in a good place to easily configure any and all hosts in the ansible hosts file. Let's remove the key from /tmp:
$ ansible all -m file -a "dest=/tmp/id_rsa.pub state=absent" -u root
Sample outputs:
127.0.0.1 | success >> {
"changed": true,
"path": "/tmp/id_rsa.pub",
"state": "absent"
}
Next, I'm going to make sure we have a few packages installed and on the latest version, and then we will move on to something a little more complicated:
$ ansible all -m zypper -a "name=apache2 state=latest" -u root
Sample outputs:
127.0.0.1 | success >> {
"changed": false,
"name": "apache2",
"state": "latest"
}
Alright, the key we placed in /tmp is now absent, and we have the latest version of apache installed. This brings me to the next point, something that makes ansible very flexible and gives more power to playbooks: many may have noticed the -m zypper in the previous commands. Now, unless you use openSUSE or SUSE Enterprise, you may not be familiar with zypper; it is basically the equivalent of yum in the SUSE world. In all of the examples above I have only had one machine in my hosts file, and while everything but the last command should work on any standard *nix system with standard ssh configs, this leads to a problem. What if we had multiple machine types that we wanted to manage? Well, this is where playbooks, and the configurability of ansible, really shine. First let's modify our hosts file a little; here goes:
$ cat ~/ansible_hosts
Sample outputs:
[RHELBased]
10.50.1.33
10.50.1.47
[SUSEBased]
127.0.0.1
First, we create some groups of servers, and give them some meaningful tags. Then we create a playbook that will do different things for the different kinds of servers. You might notice the similarity between the YAML data structures and the command-line instructions we ran earlier. Basically, -m is a module, and -a is for module args. In the YAML representation you put the module, then a colon, and finally the args.
---
- hosts: SUSEBased
remote_user: root
tasks:
- zypper: name=apache2 state=latest
- hosts: RHELBased
remote_user: root
tasks:
- yum: name=httpd state=latest
Now that we have a simple playbook, we can run it as follows:
$ ansible-playbook testPlaybook.yaml -f 10
Sample outputs:
PLAY [SUSEBased] **************************************************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [zypper name=apache2 state=latest] **************************************
ok: [127.0.0.1]
PLAY [RHELBased] **************************************************************
GATHERING FACTS ***************************************************************
ok: [10.50.1.33]
ok: [10.50.1.47]
TASK: [yum name=httpd state=latest] *******************************************
changed: [10.50.1.33]
changed: [10.50.1.47]
PLAY RECAP ********************************************************************
10.50.1.33 : ok=2 changed=1 unreachable=0 failed=0
10.50.1.47 : ok=2 changed=1 unreachable=0 failed=0
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
Now you will notice output from each machine that ansible contacted. The -f is what lets ansible run on multiple hosts in parallel. Instead of naming all, or a host group, on the command line, the hosts to target are now written into the playbook itself. While we no longer need the --ask-pass option since we have ssh keys set up, it comes in handy when setting up new machines, and even new machines can be run from a playbook. To demonstrate this, let's convert our earlier key example into a playbook:
---
- hosts: SUSEBased
remote_user: mike
sudo: yes
tasks:
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
- hosts: RHELBased
remote_user: mdonlon
sudo: yes
tasks:
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
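Running such a bootstrap playbook against fresh machines would look something like this (the filename is our own; -K, the --ask-sudo-pass switch, supplies the sudo password that the sudo: yes plays above may need):
$ ansible-playbook bootstrap-keys.yaml --ask-pass -K -f 10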
Now there are plenty of other ways this could be done, for example having the keys dropped during a kickstart, or via some other kind of process involved with bringing up machines on the hosting of your choice, but this can be used in pretty much any situation, assuming ssh is set up to accept a password. One thing to think about before writing out too many playbooks: version control can save you a lot of time. Machines change over time, and you don't need to re-write a playbook every time a machine changes; just update the pertinent bits and commit the changes. Another benefit of this ties into what I said earlier about being able to manage the entire infrastructure from multiple places. You can easily git clone your playbook repo onto a new machine and be completely set up to manage everything in a repeatable manner.
#### Real world ansible example ####
I know a lot of people make great use of services like pastebin, and a lot of companies, for obvious reasons, set up their own internal instance of something similar. Recently, I came across a newish application called showterm, and coincidentally I was asked to set up an internal instance of it for a client. I will spare you the details of this app, but you can google showterm if interested. So, for a reasonable real-world example, I will attempt to set up a showterm server and configure the needed app on the client to use it. In the process we will need a database server as well. So here goes: let's start with the client configuration.
---
- hosts: showtermClients
remote_user: root
tasks:
- yum: name=rubygems state=latest
- yum: name=ruby-devel state=latest
- yum: name=gcc state=latest
- gem: name=showterm state=latest user_install=no
That was easy; let's move on to the main server:
---
- hosts: showtermServers
remote_user: root
tasks:
- name: ensure packages are installed
yum: name={{item}} state=latest
with_items:
- postgresql
- postgresql-server
- postgresql-devel
- python-psycopg2
- git
- ruby21
- ruby21-passenger
- name: showterm server from github
git: repo=https://github.com/ConradIrwin/showterm.io dest=/root/showterm
- name: Initdb
command: service postgresql initdb
creates=/var/lib/pgsql/data/postgresql.conf
- name: Start PostgreSQL and enable at boot
service: name=postgresql
enabled=yes
state=started
- gem: name=pg state=latest user_install=no
handlers:
- name: restart postgresql
service: name=postgresql state=restarted
- hosts: showtermServers
remote_user: root
sudo: yes
sudo_user: postgres
vars:
dbname: showterm
dbuser: showterm
dbpassword: showtermpassword
tasks:
- name: create db
postgresql_db: name={{dbname}}
- name: create user with ALL priv
postgresql_user: db={{dbname}} name={{dbuser}} password={{dbpassword}} priv=ALL
- hosts: showtermServers
remote_user: root
tasks:
- name: database.yml
template: src=database.yml dest=/root/showterm/config/database.yml
- hosts: showtermServers
remote_user: root
tasks:
- name: run bundle install
shell: bundle install
args:
chdir: /root/showterm
- hosts: showtermServers
remote_user: root
tasks:
- name: run rake db tasks
shell: 'bundle exec rake db:create db:migrate db:seed'
args:
chdir: /root/showterm
- hosts: showtermServers
remote_user: root
tasks:
- name: apache config
template: src=showterm.conf dest=/etc/httpd/conf.d/showterm.conf
Not so bad. Keeping in mind that this is a somewhat random and obscure app, we can now install it in a consistent fashion on any number of machines; this is where the benefits of configuration management really come to light. Also, in most cases the declarative syntax almost speaks for itself, and wiki pages need not go into as much detail, although a wiki page with too much detail is never a bad thing in my opinion.
### Expanding Configuration ###
We have not touched on everything here; Ansible has many options for configuring your setup. You can do things like embedding variables in your hosts file, so that Ansible will interpolate them on the remote nodes, e.g.:
[RHELBased]
10.50.1.33 http_port=443
10.50.1.47 http_port=80 ansible_ssh_user=mdonlon
[SUSEBased]
127.0.0.1 http_port=443
While this is really handy for quick configurations, you can also layer variables across multiple files in YAML format. In the directory containing your hosts file, you can make two subdirectories named group_vars and host_vars. Any files in those paths that match the name of a group of hosts, or a host name in your hosts file, will be interpolated at run time. So the previous example would look like this:
ultrabook:/etc/ansible # pwd
/etc/ansible
ultrabook:/etc/ansible # tree
.
├── group_vars
│ ├── RHELBased
│ └── SUSEBased
├── hosts
└── host_vars
├── 10.50.1.33
└── 10.50.1.47
----------
2 directories, 5 files
ultrabook:/etc/ansible # cat hosts
[RHELBased]
10.50.1.33
10.50.1.47
----------
[SUSEBased]
127.0.0.1
ultrabook:/etc/ansible # cat group_vars/RHELBased
ultrabook:/etc/ansible # cat group_vars/SUSEBased
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.33
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.47
---
http_port: 80
ansible_ssh_user: mdonlon
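To confirm the layering works, you can interpolate one of these variables in an ad-hoc run; module arguments are templated per host, just like the lookup() example earlier, so a throwaway check like this should echo each host's own value:
$ ansible RHELBased -m shell -a 'echo http_port is {{ http_port }}' -u root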
### Refining Playbooks ###
There are many ways to organize playbooks as well. In the previous examples we used a single file, and everything was really simplified. One way of organizing things that is commonly used is creating roles. Basically, you load a main file as your playbook, and that then imports all the data from the extra files; the extra files are organized as roles. For example, if you have a wordpress site, you need a web head and a database. The web head will have a web server, the app code, and any needed modules. The database is sometimes run on the same host and sometimes on a remote host, and this is where roles really shine. You make a directory, and a small playbook, for each role. In this case we can have an apache role, mysql role, wordpress role, mod_php role, and php role. The big advantage to this is that not every role has to be applied on one server; in this case mysql could be applied to a separate machine. This also allows for code re-use; for example, your apache role could be used with python apps and php apps alike. Demonstrating this fully is a little beyond the scope of this article (though see the sketch below), and there are many different ways of doing things; I would recommend searching for ansible playbook examples. There are many people contributing code on GitHub, and I am sure various other sites.
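As a quick orientation, a conventional on-disk layout for the wordpress example above would look roughly like this (the tasks/main.yml path follows Ansible's documented role convention; the rest is our sketch). A top-level site.yml assigns roles to host groups, and each role's tasks/main.yml holds what were previously top-level tasks:
site.yml
roles/
    apache/
        tasks/
            main.yml
    mysql/
        tasks/
            main.yml
    wordpress/
        tasks/
            main.yml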
### Modules ###
All of the work being done behind the scenes in ansible is driven by modules. Ansible has an excellent library of built-in modules that do things like package installation, transferring files, and everything we have done in this article. But for some people this will not be suitable for their setup; ansible has a means of adding your own modules. One great thing about the API provided by Ansible is that you are not restricted to the language it was written in, Python; you can use any language, really. Ansible modules work by passing around JSON data structures, so as long as you can build a JSON data structure in your language of choice, which I am pretty sure any scripting language can do, you can begin coding something right away. There is much documentation on the Ansible site about how the module interface works, and many examples of modules on GitHub as well. Keep in mind that some obscure languages may not have great support, but that would only be because not enough people are contributing code in that language. Try it out and publish your results somewhere!
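To make the JSON contract concrete, here is a minimal sketch of a non-Python module (the module name is our own). Old-style modules receive the path to an arguments file as their first argument and must print a single JSON object on stdout; this one ignores its arguments and just reports a fact without changing anything:
#!/bin/bash
# hostinfo: minimal illustrative ansible module. Reads nothing,
# changes nothing, and reports the remote hostname as JSON on stdout.
echo "{\"changed\": false, \"hostname\": \"$(hostname)\"}"
Dropped into a library/ directory next to your playbook, one of the paths ansible searches for modules, it can then be called like any built-in: ansible all -m hostinfo.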
### Conclusion ###
In conclusion, there are many systems around for configuration management; I hope this article shows the ease of setup for ansible, which I believe is one of its strongest points. Please keep in mind that I was trying to show a lot of different ways to do things, and not everything above may be considered best practice in your private infrastructure, or the coding world at large. Here are some more links to take your knowledge of ansible to the next level:
- [Ansible project][7] home page.
- [Ansible project documentation][8].
- [Multistage environments with Ansible][9].
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/python-tutorials/linux-tutorial-install-ansible-configuration-management-and-it-automation-tool/
作者:[Nix Craft][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.cyberciti.biz/tips/about-us
[1]:http://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/
[2]:http://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
[3]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
[4]:http://www.cyberciti.biz/faq/debian-ubuntu-centos-rhel-linux-install-pipclient/
[5]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
[6]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
[7]:http://www.ansible.com/
[8]:http://docs.ansible.com/
[9]:http://rosstuck.com/multistage-environments-with-ansible/

View File

@ -1,213 +0,0 @@
Setup Thin Provisioning Volumes in Logical Volume Management (LVM) Part IV
================================================================================
Logical Volume Management has great features such as snapshots and thin provisioning. Previously, in Part III, we saw how to take a snapshot of a logical volume. Here, in this article, we are going to see how to set up thin provisioning volumes in LVM.
![Setup Thin Provisioning in LVM](http://www.tecmint.com/wp-content/uploads/2014/08/Setup-Thin-Provisioning-in-LVM.jpg)
Setup Thin Provisioning in LVM
### What is Thin Provisioning? ###
Thin Provisioning is used in lvm for creating virtual disks inside a thin pool. Let us assume that I have a **15GB** storage capacity in my server. I already have 2 clients who has 5GB storage each. You are the third client, you asked for 5GB storage. Back then we use to provide the whole 5GB (Thick Volume) but you may use 2GB from that 5GB storage and 3GB will be free which you can fill it up later.
But what we do in thin Provisioning is, we use to define a thin pool inside one of the large volume group and define the thin volumes inside that thin pool. So, that whatever files you write will be stored and your storage will be shown as 5GB. But the full 5GB will not allocate the entire disk. The same process will be done for other clients as well. Like I said there are 2 clients and you are my 3rd client.
So, how much space have I assigned to clients in total? The whole 15GB is already handed out, so if someone comes to me and asks for 5GB, can I give it? The answer is “**Yes**”. With thin provisioning I can give 5GB to a 4th client, even though I have already assigned the full 15GB.
**Warning**: Provisioning more than the 15GB we actually have is called over provisioning.
### How it Works? and How we provide storage to new Clients? ###
I have provided you 5GB, but you may use only 2GB and the other 3GB will sit free. In thick provisioning we can't do this, because the whole space is allocated right at the start.

In thin provisioning, if I define 5GB for you, it won't allocate the whole disk space when the volume is defined; it will grow up to 5GB as you write data. Hopefully that is clear! And like you, the other clients won't use their full volumes either, so there is a chance to give 5GB to a new client. This is called over provisioning.

But it is compulsory to monitor the growth of each and every volume; if you don't, it will end in disaster. If, after over provisioning, all 4 clients write heavily to disk, the pool will fill up past its 15GB and overflow, causing volumes to be dropped.
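As a safety net for exactly this failure mode, LVM can extend a monitored thin pool automatically when it crosses a usage threshold. A minimal sketch of the relevant settings in the activation section of /etc/lvm/lvm.conf follows; the values are illustrative, and monitoring via the dmeventd daemon must be active for this to fire:

    activation {
        # When pool usage exceeds 70%, grow the pool by 20% of its size.
        # A threshold of 100 (the default) disables autoextension.
        thin_pool_autoextend_threshold = 70
        thin_pool_autoextend_percent = 20
    }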
### Requirements ###
(Translator's note: once these three articles are published, the links below can be replaced with the published versions; the originals were updated a few days ago.)
- [Create Disk Storage with LVM in Linux PART 1][1]
- [How to Extend/Reduce LVMs in Linux Part II][2]
- [How to Create/Restore Snapshot of Logical Volume in LVM Part III][3]
#### My Server Setup ####
- Operating System CentOS 6.5 with LVM installed
- Server IP 192.168.0.200
### Step 1: Setup Thin Pool and Volumes ###
Let's see in practice how to set up the thin pool and thin volumes. First we need a large volume group; here I'm creating a **15GB** volume group for demonstration purposes with the command below.
# vgcreate -s 32M vg_thin /dev/sdb1
![Listing Volume Group](http://www.tecmint.com/wp-content/uploads/2014/08/Listing-Volume-Group.jpg)
Listing Volume Group
Next, check the space available for logical volumes before creating the thin pool and volumes.
# vgs
# lvs
![Check Logical Volume](http://www.tecmint.com/wp-content/uploads/2014/08/check-Logical-Volume.jpg)
Check Logical Volume
In the above lvs output, we can see only the default logical volumes for the file system and swap.
### Creating a Thin Pool ###
To create a thin pool of 15GB in the volume group (vg_thin), use the following command.
# lvcreate -L 15G --thinpool tp_tecmint_pool vg_thin
- **-L** Size of the thin pool (a logical volume)
- **--thinpool** Create a thin pool
- **tp_tecmint_pool** Thin pool name
- **vg_thin** The volume group in which we need to create the pool
![Create Thin Pool](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Pool.jpg)
Create Thin Pool
To get more detail we can use the command lvdisplay.
# lvdisplay vg_thin/tp_tecmint_pool
![Logical Volume Information](http://www.tecmint.com/wp-content/uploads/2014/08/Logical-Volume-Information.jpg)
Logical Volume Information
We haven't created any virtual thin volumes in this thin pool yet. In the image we can see the allocated pool data showing **0.00%**.
### Creating Thin Volumes ###
Now we can define thin volumes inside the thin pool with the help of lvcreate command with option -V (Virtual).
# lvcreate -V 5G --thin -n thin_vol_client1 vg_thin/tp_tecmint_pool
I have created a Thin virtual volume with the name of **thin_vol_client1** inside the **tp_tecmint_pool** in my **vg_thin** volume group. Now, list the logical volumes using below command.
# lvs
![List Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/List-Logical-Volumes.jpg)
List Logical Volumes
We have only just created the thin volume, which is why no data usage is shown, i.e. **0.00%**.
Fine, let me create 2 more thin volumes for the other 2 clients. Here you can see that there are now 3 thin volumes created inside the pool (**tp_tecmint_pool**). At this point, we can see that the entire 15GB pool has been allocated.
![Create Thin Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Volumes.jpg)
### Creating File System ###
Now, create mount points, mount these three thin volumes, and copy some files into them using the commands below.
# mkdir -p /mnt/client1 /mnt/client2 /mnt/client3
List the created directories.
# ls -l /mnt/
![Creating Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Creating-Mount-Points.jpg)
Creating Mount Points
Create the file system for these created thin volumes using mkfs command.
# mkfs.ext4 /dev/vg_thin/thin_vol_client1 && mkfs.ext4 /dev/vg_thin/thin_vol_client2 && mkfs.ext4 /dev/vg_thin/thin_vol_client3
![Create File System](http://www.tecmint.com/wp-content/uploads/2014/08/Create-File-System.jpg)
Create File System
Mount all three client volumes to the created mount point using mount command.
# mount /dev/vg_thin/thin_vol_client1 /mnt/client1/ && mount /dev/vg_thin/thin_vol_client2 /mnt/client2/ && mount /dev/vg_thin/thin_vol_client3 /mnt/client3/
List the mount points using df command.
# df -h
![Print Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Print-Mount-Points.jpg)
Print Mount Points
Here, we can see that all 3 client volumes are mounted, and only about 3% of the space is used in each client's volume. Let's add some more files to all 3 mount points from my desktop to fill up some space.
![Add Files To Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/Add-Files-To-Volumes.jpg)
Add Files To Volumes
Now list the mount points to see the space used in each thin volume, and list the thin pool to see the space used in the pool.
# df -h
# lvdisplay vg_thin/tp_tecmint_pool
![Check Mount Point Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Mount-Point-Size.jpg)
Check Mount Point Size
![Check Thin Pool Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Thin-Pool-Size.jpg)
Check Thin Pool Size
The above commands show the three mount points along with their usage percentages:

- client1: 13% of 5GB used
- client2: 29% of 5GB used
- client3: 49% of 5GB used
Looking at the thin pool, we can see that only **30%** of it has been written in total. This is the sum of the three clients' virtual volumes above.
### Over Provisioning ###
Now a **4th** client comes to me and asks for 5GB of storage space. Can I give it, when I have already handed the 15GB pool out to 3 clients? Yes, it is possible. This is when we use **over provisioning**, which means giving out more space than I actually have.
Let me create 5GB for the 4th Client and verify the size.
# lvcreate -V 5G --thin -n thin_vol_client4 vg_thin/tp_tecmint_pool
# lvs
![Create thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Create-thin-Storage.jpg)
Create thin Storage
I have only 15GB in the pool, but I have created 4 volumes inside the thin pool totalling 20GB. If all four clients start writing data to fill up their volumes, we will face a critical situation; if not, there is no issue.
Now I have created a file system in **thin_vol_client4**, mounted it under **/mnt/client4**, and copied some files into it.
# lvs
![Verify Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-Thing-Storage.jpg)
Verify Thin Storage
We can see in the above picture that the newly created client 4 volume is already **89.34%** used, and the thin pool is **59.19%** used. As long as these users don't write too heavily to their volumes, the pool will be safe from overflow and dropped volumes. To avoid an overflow, we need to extend the thin pool size.
**Important**: A thin pool is just a logical volume, so if we need to extend its size we can use the same command we used to extend logical volumes, but we can't reduce the size of a thin pool.
# lvextend
Here we can see how to extend the logical thin-pool (**tp_tecmint_pool**).
# lvextend -L +15G /dev/vg_thin/tp_tecmint_pool
![Extend Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Extend-Thin-Storage.jpg)
Extend Thin Storage
Next, list the thin-pool size.
# lvs
![Verify Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-Thin-Storage.jpg)
Verify Thin Storage
Earlier, our **tp_tecmint_pool** was 15GB in size, with 4 thin volumes provisioned for a total of 20GB, i.e. over provisioned by 5GB. Now it has been extended to 30GB, so our over provisioning has been normalized and the thin volumes are free from the risk of overflow and being dropped. This way you can add even more thin volumes to the pool.

Here, we have seen how to create a thin pool from a large volume group, create thin volumes inside the thin pool with over provisioning, and extend the pool. In the next article we will see how to set up LVM striping.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/setup-thin-provisioning-volumes-in-lvm/
作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/create-lvm-storage-in-linux/
[2]:http://www.tecmint.com/extend-and-reduce-lvms-in-linux/
[3]:http://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-lvm/

View File

@ -1,3 +1,5 @@
[felixonmars translating...]
How to create a cloud-based encrypted file system on Linux
================================================================================
Commercial cloud storage services such as [Amazon S3][1] and [Google Cloud Storage][2] offer highly available, scalable, infinite-capacity object stores at affordable costs. To accelerate wide adoption of their cloud offerings, these providers are fostering rich developer ecosystems around their products based on well-defined APIs and SDKs. Cloud-backed file systems are one popular by-product of such active developer communities, for which several open-source implementations exist.
@ -153,4 +155,4 @@ via: http://xmodulo.com/2014/09/create-cloud-based-encrypted-file-system-linux.h
[4]:http://aws.amazon.com/
[5]:http://ask.xmodulo.com/create-amazon-aws-access-key.html
[6]:https://aur.archlinux.org/packages/s3ql/
[7]:http://www.rath.org/s3ql-docs/
[7]:http://www.rath.org/s3ql-docs/

View File

@ -1,236 +0,0 @@
How to set up Nagios Remote Plugin Executor (NRPE) in Linux
================================================================================
As far as network management is concerned, Nagios is one of the most powerful tools. Nagios can monitor the reachability of remote hosts, as well as the state of services running on them. However, what if we want to monitor something other than network services for a remote host? For example, we may want to monitor the disk utilization or [CPU processor load][1] of a remote host. Nagios Remote Plugin Executor (NRPE) is a tool that can help with doing that. NRPE allows one to execute Nagios plugins installed on remote hosts, and integrate them with an [existing Nagios server][2].
This tutorial will cover how to set up NRPE on an existing Nagios deployment. The tutorial is primarily divided into two parts:
- Configure remote hosts.
- Configure a Nagios monitoring server.
We will then finish off by defining some custom commands that can be used with NRPE.
### Configure Remote Hosts for NRPE ###
#### Step One: Installing NRPE Service ####
You need to install the NRPE service on every remote host that you want to monitor using NRPE. The NRPE daemon on each remote host will then communicate with the Nagios monitoring server.
The necessary packages for the NRPE service can easily be installed using apt-get or yum, depending on the platform. In the case of CentOS, we will need to [add the Repoforge repository][3], as NRPE is not available in the stock CentOS repositories.
**On Debian, Ubuntu or Linux Mint:**
# apt-get install nagios-nrpe-server
**On CentOS, Fedora or RHEL:**
# yum install nagios-nrpe
#### Step Two: Preparing Configuration File ####
The configuration file /etc/nagios/nrpe.cfg is similar on Debian-based and Red Hat-based systems. Back up the configuration file, then update it as follows.
# vim /etc/nagios/nrpe.cfg
----------
## NRPE service port can be customized ##
server_port=5666
## the nagios monitoring server is permitted ##
## NOTE: There is no space after the comma ##
allowed_hosts=127.0.0.1,X.X.X.X-IP_v4_of_Nagios_server
## The following examples use hard-coded command arguments.
## These parameters can be modified as needed.
## NOTE: For CentOS 64 bit, use /usr/lib64 instead of /usr/lib ##
command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
Now that the configuration file is ready, NRPE service is ready to be fired up.
#### Step Three: Initiating NRPE Service ####
For RedHat-based systems, the NRPE service needs to be added as a startup service.
**On Debian, Ubuntu, Linux Mint:**
# service nagios-nrpe-server restart
**On CentOS, Fedora or RHEL:**
# service nrpe restart
# chkconfig nrpe on
#### Step Four: Verifying NRPE Service Status ####
Information about NRPE daemon status can be found in the system log. For a Debian-based system, the log file will be /var/log/syslog. The log file for a RedHat-based system will be /var/log/messages. A sample log is provided below for reference.
nrpe[19723]: Starting up daemon
nrpe[19723]: Listening for connections on port 5666
nrpe[19723]: Allowing connections from: 127.0.0.1,X.X.X.X
In case a firewall is running, TCP port 5666, which is used by the NRPE daemon, should be open.
# netstat -tpln | grep 5666
----------
tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN 19885/nrpe
### Configure Nagios Monitoring Server for NRPE ###
The first step in configuring an existing Nagios monitoring server for NRPE is to install NRPE plugin on the server.
#### Step One: Installing NRPE Plugin ####
In case the Nagios server is running on a Debian-based system (Debian, Ubuntu or Linux Mint), a necessary package can be installed using apt-get.
# apt-get install nagios-nrpe-plugin
After the plugin is installed, the check_nrpe command, which comes with the plugin, is modified a bit.
# vim /etc/nagios-plugins/config/check_nrpe.cfg
----------
## the default command is overwritten ##
define command{
command_name check_nrpe
command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$'
}
In case the Nagios server is running on a RedHat-based system (CentOS, Fedora or RHEL), you can install NRPE plugin using yum. On CentOS, [adding Repoforge repository][4] is necessary.
# yum install nagios-plugins-nrpe
Now that the NRPE plugin is installed, proceed to configure a Nagios server following the rest of the steps.
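Before moving on, it is worth verifying that the server can actually reach the remote NRPE daemon. Invoked without a command argument, check_nrpe simply asks the daemon for its version; this quick check is not part of the original steps (use /usr/lib64 instead of /usr/lib on 64-bit CentOS):

    # /usr/lib/nagios/plugins/check_nrpe -H X.X.X.X-IP_of_remote_host

A healthy daemon answers with its version string, e.g. "NRPE v2.15"; a timeout here usually points at the firewall or the allowed_hosts setting.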
#### Step Two: Defining Nagios Command for NRPE Plugin ####
First, we need to define a command in Nagios for using NRPE.
# vim /etc/nagios/objects/commands.cfg
----------
## NOTE: For CentOS 64 bit, use /usr/lib64 instead of /usr/lib ##
define command{
command_name check_nrpe
command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$'
}
#### Step Three: Adding Host and Command Definition ####
Next, define remote host(s) and commands to execute remotely on them.
The following shows a sample definition of a remote host and a command to execute on it. Naturally, your configuration will be adjusted to your requirements. The path to the file differs slightly between Debian-based and Red Hat-based systems, but the content of the files is identical.
**On Debian, Ubuntu or Linux Mint:**
# vim /etc/nagios3/conf.d/nrpe.cfg
**On CentOS, Fedora or RHEL:**
# vim /etc/nagios/objects/nrpe.cfg
----------
define host{
use linux-server
host_name server-1
alias server-1
address X.X.X.X-IPv4_address_of_remote_host
}
define service {
host_name server-1
service_description Check Load
check_command check_nrpe!check_load
check_interval 1
use generic-service
}
#### Step Four: Restarting Nagios Service ####
Before restarting Nagios, the updated configuration is verified with a dry run.
**On Ubuntu, Debian, or Linux Mint:**
# nagios3 -v /etc/nagios3/nagios.cfg
**On CentOS, Fedora or RHEL:**
# nagios -v /etc/nagios/nagios.cfg
If everything goes well, Nagios service can be restarted.
# service nagios restart
![](https://farm8.staticflickr.com/7024/13330387845_0bde8b6db5_z.jpg)
### Configuring Custom Commands with NRPE ###
#### Setup on Remote Servers ####
The following is a list of custom commands that can be used with NRPE. These commands are defined in the file /etc/nagios/nrpe.cfg located on the remote servers.
## Warning status when load average exceeds 1, 2 and 1 for 1, 5, 15 minute interval, respectively.
## Critical status when load average exceeds 3, 5 and 3 for 1, 5, 15 minute interval, respectively.
command[check_load]=/usr/lib/nagios/plugins/check_load -w 1,2,1 -c 3,5,3
## Warning level 25% and critical level 10% for free space of /home.
## Could be customized to monitor any partition (e.g. /dev/sdb1, /, /var, /home)
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 25% -c 10% -p /home
## Warn if number of instances for process_ABC exceeds 10. Critical for 20 ##
command[check_process_ABC]=/usr/lib/nagios/plugins/check_procs -w 1:10 -c 1:20 -C process_ABC
## Critical if the number of instances for process_XYZ drops below 1 ##
command[check_process_XYZ]=/usr/lib/nagios/plugins/check_procs -w 1: -c 1: -C process_XYZ
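Each of these custom commands can be exercised from the monitoring server before wiring it into Nagios. A quick sanity check, assuming the plugin paths used above:

    # /usr/lib/nagios/plugins/check_nrpe -H X.X.X.X-IP_of_remote_host -c check_disk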
#### Setup on Nagios Monitoring Server ####
To apply the custom commands defined above, we modify the service definition at Nagios monitoring server as follows. The service definition could go to the file where all the services are defined (e.g., /etc/nagios/objects/nrpe.cfg or /etc/nagios3/conf.d/nrpe.cfg)
## example 1: check process XYZ ##
define service {
host_name server-1
service_description Check Process XYZ
check_command check_nrpe!check_process_XYZ
check_interval 1
use generic-service
}
## example 2: check disk state ##
define service {
host_name server-1
service_description Check Disk
check_command check_nrpe!check_disk
check_interval 1
use generic-service
}
To sum up, NRPE is a powerful add-on to Nagios, as it lets you monitor a remote server in a highly configurable fashion. Using NRPE, we can monitor server load, running processes, logged-in users, disk states and other parameters.
Hope this helps.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/03/nagios-remote-plugin-executor-nrpe-linux.html
作者:[Sarmed Rahman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://xmodulo.com/2012/08/how-to-measure-average-cpu-utilization.html
[2]:http://xmodulo.com/2013/12/install-configure-nagios-linux.html
[3]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html
[4]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html

View File

@ -1,3 +1,4 @@
translating by cvsher
Sysstat All-in-One System Performance and Usage Activity Monitoring Tool For Linux
================================================================================
**Sysstat** is really a handy tool which comes with a number of utilities for monitoring system resources, their performance and usage activity. A number of utilities that we all use on a daily basis come with the sysstat package. It also provides a tool which can be scheduled using cron to collect all performance and activity data.
@ -121,4 +122,4 @@ via: http://www.tecmint.com/install-sysstat-in-linux/
[a]:http://www.tecmint.com/author/kuldeepsharma47/
[1]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
[2]:http://sebastien.godard.pagesperso-orange.fr/download.html
[3]:http://sebastien.godard.pagesperso-orange.fr/documentation.html
[3]:http://sebastien.godard.pagesperso-orange.fr/documentation.html

View File

@ -1,84 +0,0 @@
Linux FAQs with Answers--How to build a RPM or DEB package from the source with CheckInstall
================================================================================
> **Question**: I would like to install a software program by building it from the source. Is there a way to build and install a package from the source, instead of running "make install"? That way, I could uninstall the program easily later if I want to.
If you have installed a Linux program from its source by running "make install", it becomes really tricky to remove it completely, unless the author of the program provides an uninstall target in the Makefile. You will have to compare the complete list of files in your system before and after installing the program from source, and manually remove all the files that were added during the installation.
That is when CheckInstall can come in handy. CheckInstall keeps track of all the files created or modified by an install command line (e.g., "make install", "make install_modules", etc.), and builds a standard binary package, giving you the ability to install or uninstall it with your distribution's standard package management system (e.g., yum for Red Hat or apt-get for Debian). It is also known to work with Slackware, SuSe, Mandrake and Gentoo, as per the [official documentation][1].
In this post, we will only focus on Red Hat and Debian based distributions, and show how to build a RPM or DEB package from the source using CheckInstall.
### Installing CheckInstall on Linux ###
To install CheckInstall on Debian derivatives:
# aptitude install checkinstall
To install CheckInstall on Red Hat-based distributions, you will need to download a pre-built .rpm of CheckInstall (e.g., searchable from [http://rpm.pbone.net][2]), as it has been removed from the Repoforge repository. The .rpm package for CentOS 6 works in CentOS 7 as well.
# wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/ikoinoba/CentOS_CentOS-6/x86_64/checkinstall-1.6.2-3.el6.1.x86_64.rpm
# yum install checkinstall-1.6.2-3.el6.1.x86_64.rpm
Once checkinstall is installed, you can use the following format to build a package for particular software.
# checkinstall <install-command>
Without the <install-command> argument, the default install command "make install" will be used.
### Build a RPM or DEB Package with CheckInstall ###
In this example, we will build a package for [htop][3], an interactive text-mode process viewer for Linux (like top on steroids).
First, let's download the source code from the official website of the project. As a best practice, we will store the tarball in /usr/local/src, and untar it.
# cd /usr/local/src
# wget http://hisham.hm/htop/releases/1.0.3/htop-1.0.3.tar.gz
# tar xzf htop-1.0.3.tar.gz
# cd htop-1.0.3
Let's find out the install command for htop, so that we can invoke checkinstall with the command. As shown below, htop is installed with 'make install' command.
# ./configure
# make install
Therefore, to build an htop package, we can invoke checkinstall without any argument, and it will use the 'make install' command to build the package. Along the way, the checkinstall command will ask you a series of questions.
In short, here are the commands to build a package for **htop**:
# ./configure
# checkinstall
Answer 'y' to "Should I create a default set of package docs?":
![](https://farm6.staticflickr.com/5577/15118597217_1fdd0e0346_z.jpg)
You can enter a brief description of the package, then press Enter twice:
![](https://farm4.staticflickr.com/3898/15118442190_604b71d9af.jpg)
Enter a number to modify any of the following values, or press Enter to proceed:
![](https://farm4.staticflickr.com/3898/15118442180_428de59d68_z.jpg)
Then checkinstall will create a .rpm or a .deb package automatically, depending on what your Linux system is:
On CentOS 7:
![](https://farm4.staticflickr.com/3921/15282103066_5d688b2217_z.jpg)
On Debian 7:
![](https://farm4.staticflickr.com/3905/15118383009_4909a7c17b_z.jpg)
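Once checkinstall finishes, the point of the exercise pays off: the result is an ordinary package that the native package manager can install and, more importantly, cleanly remove. A sketch with illustrative package file names (checkinstall prints the actual file name and path when it completes):

**On CentOS 7:**

    # rpm -ivh htop-1.0.3-1.x86_64.rpm
    # yum remove htop

**On Debian 7:**

    # dpkg -i htop_1.0.3-1_amd64.deb
    # apt-get remove htop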
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/build-rpm-deb-package-source-checkinstall.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://checkinstall.izto.org/docs/README
[2]:http://rpm.pbone.net/
[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html

View File

@ -1,78 +0,0 @@
Linux FAQs with Answers--How to configure a static IP address on CentOS 7
================================================================================
> **Question**: On CentOS 7, I want to switch from DHCP to static IP address configuration with one of my network interfaces. What is a proper way to assign a static IP address to a network interface permanently on CentOS or RHEL 7?
If you want to set up a static IP address on a network interface in CentOS 7, there are several different ways to do it, depending on whether or not you want to use Network Manager.
Network Manager is a dynamic network control and configuration system that attempts to keep network devices and connections up and active when they are available. CentOS/RHEL 7 comes with the Network Manager service installed and enabled by default.
To verify the status of Network Manager service:
$ systemctl status NetworkManager.service
To check which network interface is managed by Network Manager, run:
$ nmcli dev status
![](https://farm4.staticflickr.com/3861/15295802711_a102a3574d_z.jpg)
If the output of nmcli shows "connected" for a particular interface (e.g., enp0s3 in the example), it means that the interface is managed by Network Manager. You can easily disable Network Manager for a particular interface, so that you can configure it on your own for a static IP address.
Here are **two different ways to assign a static IP address to a network interface on CentOS 7**. We will be configuring a network interface named enp0s3.
### Configure a Static IP Address without Network Manager ###
Go to the /etc/sysconfig/network-scripts directory, and locate the interface's configuration file (ifcfg-enp0s3). Create it if it is not found.
![](https://farm4.staticflickr.com/3911/15112399977_d3df8e15f5_z.jpg)
Open the configuration file and edit the following variables:
![](https://farm4.staticflickr.com/3880/15112184199_f4cbf269a6.jpg)
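The screenshot above shows the variables to set. For reference, a typical static configuration looks roughly like the sketch below; the addresses are illustrative and should be replaced with your own:

    DEVICE=enp0s3
    BOOTPROTO=static
    IPADDR=192.168.1.25
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    DNS1=8.8.8.8
    ONBOOT=yes
    NM_CONTROLLED=no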
In the above, "NM_CONTROLLED=no" indicates that this interface will be set up using this configuration file, instead of being managed by Network Manager service. "ONBOOT=yes" tells the system to bring up the interface during boot.
Save changes and restart the network service using the following command:
# systemctl restart network.service
Now verify that the interface has been properly configured:
# ip add
![](https://farm6.staticflickr.com/5593/15112397947_ac69a33fb4_z.jpg)
### Configure a Static IP Address with Network Manager ###
If you want to use Network Manager to manage the interface, you can use nmtui (Network Manager Text User Interface) which provides a way to configure Network Manager in a terminal environment.
Before using nmtui, first set "NM_CONTROLLED=yes" in /etc/sysconfig/network-scripts/ifcfg-enp0s3.
Now let's install nmtui as follows.
# yum install NetworkManager-tui
Then go ahead and edit the Network Manager configuration of enp0s3 interface:
# nmtui edit enp0s3
The following screen will allow us to manually enter the same information that is contained in /etc/sysconfig/network-scripts/ifcfg-enp0s3.
Use the arrow keys to navigate this screen, press Enter to select from a list of values (or fill in the desired values), and finally click OK at the bottom right:
![](https://farm4.staticflickr.com/3878/15295804521_4165c97828_z.jpg)
Finally, restart the network service.
# systemctl restart network.service
and you're ready to go.
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/configure-static-ip-address-centos7.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,53 +0,0 @@
[su-kaiyao translating...]
How To Reset Root Password On CentOS 7
================================================================================
The way to reset the root password on CentOS 7 is totally different from CentOS 6. Let me show you how to reset the root password on CentOS 7.
1 In the GRUB boot menu, select the entry you want to edit.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_003.png)
2 Press 'e' to edit the selected entry.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_005.png)
3 Go to the line beginning with 'linux16' and change 'ro' to 'rw init=/sysroot/bin/sh'.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_006.png)
4 Now press Control+x to boot into single user mode.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_007.png)
5 Now access the system with this command:
chroot /sysroot
6 Reset the password:
passwd root
7 Update the SELinux information:
touch /.autorelabel
8 Exit the chroot:
exit
9 Reboot your system:
reboot
That's it. Enjoy.
--------------------------------------------------------------------------------
via: http://www.unixmen.com/reset-root-password-centos-7/
作者M.el Khamlichi
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,120 +0,0 @@
How to monitor user login history on CentOS with utmpdump
================================================================================
Keeping, maintaining and analyzing logs (i.e., accounts of events that have happened during a certain period of time or are currently happening) is among the most basic and essential tasks of a Linux system administrator. In the case of user management, examining user logon and logout logs (both failed and successful) can alert us to any potential security breaches or unauthorized use of our system. For example, remote logins from unknown IP addresses, or accounts being used outside working hours or during vacation leave, should raise a red flag.
On a CentOS system, user login history is stored in the following binary files:
- /var/run/utmp (which logs currently open sessions) is used by who and w tools to show who is currently logged on and what they are doing, and also by uptime to display system up time.
- /var/log/wtmp (which stores the history of connections to the system) is used by last tool to show the listing of last logged-in users.
- /var/log/btmp (which logs failed login attempts) is used by lastb utility to show the listing of last failed login attempts.
![](https://farm4.staticflickr.com/3871/15106743340_bd13fcfe1c_o.png)
In this post I'll show you how to use utmpdump, a simple program from the sysvinit-tools package that can be used to dump these binary log files in text format for inspection. This tool is available by default on stock CentOS 6 and 7. The information gleaned from utmpdump is more comprehensive than the output of the tools mentioned earlier, and that's what makes it a nice utility for the job. Besides, utmpdump can be used to modify utmp or wtmp, which can be useful if you want to fix any corrupted entries in the binary logs.
### How to Use Utmpdump and Interpret its Output ###
As we mentioned earlier, these log files, as opposed to other logs most of us are familiar with (e.g., /var/log/messages, /var/log/cron, /var/log/maillog), are saved in binary file format, and thus we cannot use pagers such as less or more to view their contents. That is where utmpdump saves the day.
In order to display the contents of /var/run/utmp, run the following command:
# utmpdump /var/run/utmp
![](https://farm6.staticflickr.com/5595/15106696599_60134e3488_z.jpg)
To do the same with /var/log/wtmp:
# utmpdump /var/log/wtmp
![](https://farm6.staticflickr.com/5591/15106868718_6321c6ff11_z.jpg)
and finally with /var/log/btmp:
# utmpdump /var/log/btmp
![](https://farm6.staticflickr.com/5562/15293066352_c40bc98ca4_z.jpg)
As you can see, the output formats of the three cases are identical, except that the records in utmp and btmp are arranged chronologically, while in wtmp the order is reversed.
Each log line is formatted in multiple columns, described as follows:

- The first field shows a session identifier, while the second holds the PID.
- The third field can hold one of the following values: ~~ (indicating a runlevel change or a system reboot), bw (meaning a bootwait process), a digit (indicating a TTY number), or a character and a digit (meaning a pseudo-terminal).
- The fourth field can be either empty or hold the user name, reboot, or runlevel.
- The fifth field holds the main TTY or PTY (pseudo-terminal), if that information is available.
- The sixth field holds the name of the remote host (if the login is performed from the local host, this field is blank, except for run-level messages, which will return the kernel version).
- The seventh field holds the IP address of the remote system (if the login is performed from the local host, this field will show 0.0.0.0). If DNS resolution is not provided, the sixth and seventh fields will show identical information (the IP address of the remote system).
- The last (eighth) field indicates the date and time when the record was created.
### Usage Examples of Utmpdump ###
Here are a few simple use cases of utmpdump.
1. Check how many times (and at what times) a particular user (e.g., gacanepa) logged on to the system between August 18 and September 17.
# utmpdump /var/log/wtmp | grep gacanepa
![](https://farm4.staticflickr.com/3857/15293066362_fb2dd566df_z.jpg)
If you need to review login information from prior dates, you can check the wtmp-YYYYMMDD (or wtmp.[1...N]) and btmp-YYYYMMDD (or btmp.[1...N]) files in /var/log, which are the old archives of wtmp and btmp files, generated by [logrotate][1].
2. Count the number of logins from IP address 192.168.0.101.
# utmpdump /var/log/wtmp | grep 192.168.0.101
![](https://farm4.staticflickr.com/3842/15106743480_55ce84c9fd_z.jpg)
3. Display failed login attempts.
# utmpdump /var/log/btmp
![](https://farm4.staticflickr.com/3858/15293065292_e1d2562206_z.jpg)
In the output of /var/log/btmp, every log line corresponds to a failed login attempt (e.g., using an incorrect password or a non-existing user ID). Logons using non-existing user IDs are highlighted in the above image, which can alert you that someone is attempting to break into your system by guessing commonly-used account names. This is particularly serious when tty1 was used, since it means that someone had access to a terminal on your machine (time to check who has keys to your datacenter, maybe?).
4. Display login and logout information per user session.
# utmpdump /var/log/wtmp
![](https://farm4.staticflickr.com/3835/15293065312_c762360791_z.jpg)
In /var/log/wtmp, a new login event is characterized by '7' in the first field, a terminal number (or pseudo-terminal id) in the third field, and username in the fourth. The corresponding logout event will be represented by '8' in the first field, the same PID as the login in the second field, and a blank terminal number field. For example, take a close look at PID 1463 in the above image.
- On [Fri Sep 19 11:57:40 2014 ART] the login prompt appeared in tty1.
- On [Fri Sep 19 12:04:21 2014 ART], user root logged on.
- On [Fri Sep 19 12:07:24 2014 ART], root logged out.
On a side note, the word LOGIN in the fourth field means that a login prompt is present in the terminal specified in the fifth field.
So far I covered somewhat trivial examples. You can combine utmpdump with other text sculpting tools such as awk, sed, grep or cut to produce filtered and enhanced output.
For example, you can use the following command to list all login events of a particular user (e.g., gacanepa) and send the output to a .csv file that can be viewed with a pager or a workbook application, such as LibreOffice's Calc or Microsoft Excel. Let's display PID, username, IP address and timestamp only:
# utmpdump /var/log/wtmp | grep -E "\[7].*gacanepa" | awk -v OFS="," 'BEGIN {FS="] "}; {print $2,$4,$7,$8}' | sed -e 's/\[//g' -e 's/\]//g'
![](https://farm4.staticflickr.com/3851/15293065352_91e1c1e4b6_z.jpg)
As represented with three blocks in the image, the filtering logic is composed of three pipelined steps. The first step is used to look for login events ([7]) triggered by user gacanepa. The second and third steps are used to select desired fields, remove square brackets in the output of utmpdump, and set the output field separator to a comma.
Of course, you need to redirect the output of the above command to a file if you want to open it later (append "> [name_of_file].csv" to the command).
![](https://farm4.staticflickr.com/3889/15106867768_0e37881a25_z.jpg)
As a more complex example, if you want to know which users (as listed in /etc/passwd) have not logged on during a period of time, you could extract the user names from /etc/passwd, and then grep the utmpdump output of /var/log/wtmp against that list. As you can see, the possibilities are limitless.
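A rough sketch of that idea in shell, assuming whole-word matches on account names are good enough for your naming scheme:

    # Dump the binary log once, then test each account name against it.
    utmpdump /var/log/wtmp > /tmp/wtmp.txt
    for u in $(cut -d: -f1 /etc/passwd); do
        grep -qw "$u" /tmp/wtmp.txt || echo "$u has no login records"
    done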
Before concluding, let's briefly show yet another use case of utmpdump: modify utmp or wtmp. As these are binary log files, you cannot edit them as is. Instead, you can export their content to text format, modify the text output, and then import the modified content back to the binary logs. That is:
# utmpdump /var/run/utmp > tmp_output
<modify tmp_output using a text editor>
# utmpdump -r tmp_output > /var/run/utmp
This can be useful when you want to remove or fix any bogus entry in the binary logs.
To sum up, utmpdump complements standard utilities such as who, w, uptime, last, lastb by dumping detailed login events stored in utmp, wtmp and btmp log files, as well as in their rotated old archives, and that certainly makes it a great utility.
Feel free to enhance this post with your comments.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/09/monitor-user-login-history-centos-utmpdump.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://xmodulo.com/2014/09/logrotate-manage-log-files-linux.html

View File

@ -0,0 +1,112 @@
Translating by johnhoow...
How to Use Systemd Timers
================================================================================
I was setting up some scripts to run backups recently, and decided I would try to set them up to use [systemd timers][1] rather than the [cron jobs][2] I was more familiar with.
As I went about trying to set them up, I had the hardest time, since the required information seems to be spread around in various places. I wanted to record what I did so that, firstly, I can remember it, but also so that others don't have to go searching as far and wide as I did.
There are additional options associated with each step I mention below, but this is the bare minimum to get started. Look at the man pages for **systemd.service**, **systemd.timer**, and **systemd.target** for all that you can do with them.
### Running a Single Script ###
Let's say you have a script **/usr/local/bin/myscript** that you want to run every hour.
#### Service File ####
First, create a service file, and put it wherever it goes on your Linux distribution (on Arch, it is either **/etc/systemd/system/** or **/usr/lib/systemd/system**).
myscript.service
[Unit]
Description=MyScript
[Service]
Type=simple
ExecStart=/usr/local/bin/myscript
Note that it is important to set the **Type** variable to “simple”, not “oneshot”. Using “oneshot” makes it so that the script will be run the first time, after which systemd thinks that you don't want to run it again, and will turn off the timer we make next.
#### Timer File ####
Next, create a timer file, and put it also in the same directory as the service file above.
myscript.timer
[Unit]
Description=Runs myscript every hour
[Timer]
# Time to wait after booting before we run first time
OnBootSec=10min
# Time between running each consecutive time
OnUnitActiveSec=1h
Unit=myscript.service
[Install]
WantedBy=multi-user.target
#### Enable / Start ####
Rather than starting / enabling the service file, you use the timer.
# Start timer, as root
systemctl start myscript.timer
# Enable timer to start at boot
systemctl enable myscript.timer
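On reasonably recent versions of systemd you can also confirm that the timer is registered and see when it will next fire; this check is not part of the original post:

    # Show all timers with their last and next trigger times
    systemctl list-timers --all
    # Or inspect this one timer directly
    systemctl status myscript.timer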
### Running Multiple Scripts on the Same Timer ###
Now let's say there are a bunch of scripts you want to run at the same time. In this case, you will want to make a couple of changes to the above formula.
#### Service Files ####
Create the service files to run your scripts as I [showed previously][3], but include the following section at the end of each service file.
[Install]
WantedBy=mytimer.target
If there is any ordering dependency between your service files, be sure to specify it with the **After=something.service** and/or **Before=whatever.service** parameters within the **[Unit]** section.
Alternatively (and perhaps more simply), create a wrapper script that runs the appropriate commands in the correct order, and use the wrapper in your service file.
#### Timer File ####
You only need a single timer file. Create **mytimer.timer**, as I [outlined above][4].
#### Target File ####
You can create the target that all these scripts depend upon.
mytimer.target
[Unit]
Description=Mytimer
# Lots more stuff could go here, but it's situational.
# Look at systemd.unit man page.
#### Enable / Start ####
You need to enable each of the service files, as well as the timer.
systemctl enable script1.service
systemctl enable script2.service
...
systemctl enable mytimer.timer
systemctl start mytimer.timer
Good luck.
--------------------------------------------------------------------------------
via: http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/#enable--start-1
作者Jason Graham
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://fedoraproject.org/wiki/User:Johannbg/QA/Systemd/Systemd.timer
[2]:https://en.wikipedia.org/wiki/Cron
[3]:http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/#service-file
[4]:http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/#timer-file-1

View File

@ -0,0 +1,223 @@
How to turn your CentOS box into an OSPF router using Quagga
================================================================================
[Quagga][1] is an open source routing software suite that can be used to turn your Linux box into a fully-fledged router supporting major routing protocols like RIP, OSPF, BGP and IS-IS. It has full provisions for IPv4 and IPv6, and supports route/prefix filtering. Quagga can be a life saver in case your production router is down and you don't have a spare at your disposal while waiting for a replacement. With proper configuration, Quagga can even be provisioned as a production router.
In this tutorial, we will connect two hypothetical branch office networks (e.g., 192.168.1.0/24 and 172.16.1.0/24) that have a dedicated link between them.
![](https://farm4.staticflickr.com/3861/15172727969_13cb7f037f_b.jpg)
Our CentOS boxes are located at both ends of the dedicated link. The hostnames of the two boxes are set as 'site-A-RTR' and 'site-B-RTR' respectively. IP address details are provided below.
- **Site-A**: 192.168.1.0/24
- **Site-B**: 172.16.1.0/24
- **Peering between 2 Linux boxes**: 10.10.10.0/30
The Quagga package consists of several daemons that work together. In this tutorial, we will focus on setting up the following daemons.
1. **Zebra**: a core daemon, responsible for kernel interfaces and static routes.
1. **Ospfd**: an IPv4 OSPF daemon.
### Install Quagga on CentOS ###
We start the process by installing Quagga using yum.
# yum install quagga
On CentOS 7, SELinux prevents /usr/sbin/zebra from writing to its configuration directory by default. This SELinux policy interferes with the setup procedure we are going to describe, so we want to disable this policy. For that, either [turn off SELinux][2] (which is not recommended), or enable the 'zebra_write_config' boolean as follows. Skip this step if you are using CentOS 6.
# setsebool -P zebra_write_config 1
Without this change, we will see the following error when attempting to save Zebra configuration from inside Quagga's command shell.
Can't open configuration file /etc/quagga/zebra.conf.OS1Uu5.
After Quagga is installed, we configure necessary peering IP addresses, and update OSPF settings. Quagga comes with a command line shell called vtysh. The Quagga commands used inside vtysh are similar to those of major router vendors such as Cisco or Juniper.
### Phase 1: Configuring Zebra ###
We start by creating a Zebra configuration file, and launching Zebra daemon.
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
# service zebra start
# chkconfig zebra on
Launch vtysh command shell:
# vtysh
First, we configure the log file for Zebra. For that, enter the global configuration mode in vtysh by typing:
site-A-RTR# configure terminal
and specify log file location, then exit the mode:
site-A-RTR(config)# log file /var/log/quagga/quagga.log
site-A-RTR(config)# exit
Save configuration permanently:
site-A-RTR# write
Next, we identify available interfaces and configure their IP addresses as necessary.
site-A-RTR# show interface
----------
Interface eth0 is up, line protocol detection is disabled
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
Configure eth0 parameters:
site-A-RTR# configure terminal
site-A-RTR(config)# interface eth0
site-A-RTR(config-if)# ip address 10.10.10.1/30
site-A-RTR(config-if)# description to-site-B
site-A-RTR(config-if)# no shutdown
Go ahead and configure eth1 parameters:
site-A-RTR(config)# interface eth1
site-A-RTR(config-if)# ip address 192.168.1.1/24
site-A-RTR(config-if)# description to-site-A-LAN
site-A-RTR(config-if)# no shutdown
Now verify configuration:
site-A-RTR(config-if)# do show interface
----------
Interface eth0 is up, line protocol detection is disabled
. . . . .
inet 10.10.10.1/30 broadcast 10.10.10.3
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
inet 192.168.1.1/24 broadcast 192.168.1.255
. . . . .
----------
site-A-RTR(config-if)# do show interface description
----------
Interface Status Protocol Description
eth0 up unknown to-site-B
eth1 up unknown to-site-A-LAN
Save configuration permanently:
site-A-RTR(config-if)# do write
Repeat the IP address configuration step on site-B server as well.
If all goes well, you should be able to ping site-B's peering IP 10.10.10.2 from site-A server.
Note that once Zebra daemon has started, any change made with vtysh's command line interface takes effect immediately. There is no need to restart Zebra daemon after configuration change.
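One more check worth doing at this stage, although it is not covered in the original steps: for traffic to flow between the two LANs, the kernel on each box must have IP forwarding enabled. On CentOS this can be set persistently with sysctl:

    # echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    # sysctl -p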
### Phase 2: Configuring OSPF ###
We start by creating an OSPF configuration file, and starting the OSPF daemon:
# cp /usr/share/doc/quagga-XXXXX/ospfd.conf.sample /etc/quagga/ospfd.conf
# service ospfd start
# chkconfig ospfd on
Now launch vtysh shell to continue with OSPF configuration:
# vtysh
Enter router configuration mode:
site-A-RTR# configure terminal
site-A-RTR(config)# router ospf
Optionally, set the router-id manually:
site-A-RTR(config-router)# router-id 10.10.10.1
Add the networks that will participate in OSPF:
site-A-RTR(config-router)# network 10.10.10.0/30 area 0
site-A-RTR(config-router)# network 192.168.1.0/24 area 0
Save configuration permanently:
site-A-RTR(config-router)# do write
Repeat the similar OSPF configuration on site-B as well:
site-B-RTR(config-router)# network 10.10.10.0/30 area 0
site-B-RTR(config-router)# network 172.16.1.0/24 area 0
site-B-RTR(config-router)# do write
The OSPF neighbors should come up now. As long as ospfd is running, any OSPF related configuration change made via vtysh shell takes effect immediately without having to restart ospfd.
In the next section, we are going to verify our Quagga setup.
### Verification ###
#### 1. Test with ping ####
To begin with, you should be able to ping the LAN subnet of site-B from site-A. Make sure that your firewall does not block ping traffic.
[root@site-A-RTR ~]# ping 172.16.1.1 -c 2
#### 2. Check routing tables ####
Necessary routes should be present in both kernel and Quagga routing tables.
[root@site-A-RTR ~]# ip route
----------
10.10.10.0/30 dev eth0 proto kernel scope link src 10.10.10.1
172.16.1.0/30 via 10.10.10.2 dev eth0 proto zebra metric 20
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.1
----------
[root@site-A-RTR ~]# vtysh
site-A-RTR# show ip route
----------
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
I - ISIS, B - BGP, > - selected route, * - FIB route
O 10.10.10.0/30 [110/10] is directly connected, eth0, 00:14:29
C>* 10.10.10.0/30 is directly connected, eth0
C>* 127.0.0.0/8 is directly connected, lo
O>* 172.16.1.0/30 [110/20] via 10.10.10.2, eth0, 00:14:14
C>* 192.168.1.0/24 is directly connected, eth1
#### 3. Verifying OSPF neighbors and routes ####
Inside vtysh shell, you can check if necessary neighbors are up, and proper routes are being learnt.
[root@site-A-RTR ~]# vtysh
site-A-RTR# show ip ospf neighbor
![](https://farm3.staticflickr.com/2943/15160942468_d348241bd5_z.jpg)
In this tutorial, we focused on configuring basic OSPF using Quagga. In general, Quagga allows us to easily configure a regular Linux box to speak dynamic routing protocols such as OSPF, RIP or BGP. Quagga-enabled boxes will be able to communicate and exchange routes with any other router that you may have in your network. Since it supports major open standard routing protocols, it may be a preferred choice in many scenarios. Better yet, Quagga's command line interface is almost identical to that of major router vendors like Cisco or Juniper, which makes deploying and maintaining Quagga boxes very easy.
Hope this helps.
--------------------------------------------------------------------------------
via: http://xmodulo.com/turn-centos-box-into-ospf-router-quagga.html
作者:[Sarmed Rahman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://www.nongnu.org/quagga/
[2]:http://xmodulo.com/how-to-disable-selinux.html

View File

@ -0,0 +1,117 @@
zpl1025
How to use xargs command in Linux
================================================================================
Have you ever been in the situation where you are running the same command over and over again for multiple files? If so, you know how tedious and inefficient this can feel. The good news is that there is an easier way, made possible through the xargs command in Unix-based operating systems. With this command you can process multiple files efficiently, saving you time and energy. In this tutorial, you will learn how to execute a command or script for multiple files at once, avoiding the daunting task of processing numerous log files or data files individually.
There are two ingredients for the xargs command. First, you must specify the files of interest. Second, you must indicate which command or script will be executed for each of the files you specified.
This tutorial will cover three scenarios in which the xargs command can be used to process files located within several different directories:
1. Count the number of lines in all files
1. Print the first line of specific files
1. Process each file using a custom script
Consider the following directory named xargstest (the directory tree can be displayed using the tree command with the combined -i and -f options, which print the results without indentation and with the full path prefix for each file):
$ tree -if xargstest/
![](https://farm3.staticflickr.com/2942/15334985981_ce1a192def.jpg)
The contents of each of the six files are as follows:
![](https://farm4.staticflickr.com/3882/15346287662_a3084a8e4f_o.png)
The **xargstest** directory, its subdirectories and files will be used in the following examples.
### Scenario 1: Count the number of lines in all files ###
As mentioned earlier, the first ingredient for the xargs command is a list of files for which the command or script will be run. We can use the find command to identify and list the files that we are interested in. The **-name 'file??'** option specifies that only files with names beginning with "file" followed by any two characters will be matched within the xargstest directory. This search is recursive by default, which means that the find command will search for matching files within xargstest and all of its sub-directories.
$ find xargstest/ -name 'file??'
----------
xargstest/dir3/file3B
xargstest/dir3/file3A
xargstest/dir1/file1A
xargstest/dir1/file1B
xargstest/dir2/file2B
xargstest/dir2/file2A
We can pipe the results to the sort command to order the filenames sequentially:
$ find xargstest/ -name 'file??' | sort
----------
xargstest/dir1/file1A
xargstest/dir1/file1B
xargstest/dir2/file2A
xargstest/dir2/file2B
xargstest/dir3/file3A
xargstest/dir3/file3B
We now need the second ingredient, which is the command to execute. We use the wc command with the -l option to count the number of newlines in each file (printed at the beginning of each output line):
$ find xargstest/ -name 'file??' | sort | xargs wc -l
----------
1 xargstest/dir1/file1A
2 xargstest/dir1/file1B
3 xargstest/dir2/file2A
4 xargstest/dir2/file2B
5 xargstest/dir3/file3A
6 xargstest/dir3/file3B
21 total
You'll see that instead of manually running the wc -l command for each of these files, the xargs command allows you to complete this operation in a single step. Tasks that may have previously seemed unmanageable, such as processing hundreds of files individually, can now be performed quite easily.
### Scenario 2: Print the first line of specific files ###
Now that you know the basics of how to use the xargs command, you have the freedom to choose which command you want to execute. Sometimes, you may want to run commands for only a subset of files and ignore others. In this case, you can use the find command with the -name option and the ? globbing character (matches any single character) to select specific files to pipe into the xargs command. For example, if you want to print the first line of all files that end with a "B" character and ignore the files that end with an "A" character, use the following combination of the find, xargs, and head commands (head -n1 will print the first line in a file):
$ find xargstest/ -name 'file?B' | sort | xargs head -n1
----------
==> xargstest/dir1/file1B <==
one
==> xargstest/dir2/file2B <==
one
==> xargstest/dir3/file3B <==
one
You'll see that only the files with names that end with a "B" character were processed, and all files that end with an "A" character were ignored.
### Scenario 3: Process each file using a custom script ###
Finally, you may want to run a custom script (in Bash, Python, or Perl for example) for the files. To do this, simply substitute the name of your custom script in place of the wc and head commands shown previously:
$ find xargstest/ -name 'file??' | xargs myscript.sh
The custom script **myscript.sh** needs to be written to take one or more file names as arguments and process each of them; note that xargs may pass several names to a single invocation. The above command will invoke the script for the files found by the find command.
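The post does not show myscript.sh itself; a minimal hypothetical version that handles however many file names xargs passes it could look like this:

    #!/bin/bash
    # myscript.sh: print each file's name and line count (illustrative only)
    for f in "$@"; do
        echo "processing $f: $(wc -l < "$f") lines"
    done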
Note that the above examples include file names that do not contain spaces. Generally speaking, life in a Linux environment is much more pleasant when using file names without spaces. If you do need to handle file names with spaces, the above commands will not work, and should be tweaked to accommodate them. This is accomplished with the -print0 option for find command (which prints the full file name to stdout, followed by a null character), and -0 option for xargs command (which interprets a null character as the end of a string), as shown below:
$ find xargstest/ -name 'file*' -print0 | xargs -0 myscript.sh
Note that the argument for the -name option has been changed to 'file*', which means any files with names beginning with "file" and trailed by any number of characters will be matched.
### Summary ###
After reading this tutorial you will understand the capabilities of the xargs command and how you can implement this into your workflow. Soon you'll be spending more time enjoying the efficiency offered by this command, and less time doing repetitive tasks. For more details and additional options you can read the xargs documentation by entering the 'man xargs' command in your terminal.
--------------------------------------------------------------------------------
via: http://xmodulo.com/xargs-command-linux.html
作者:[Joshua Reed][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/joshua

View File

@ -0,0 +1,108 @@
Git Rebase Tutorial: Going Back in Time with Git Rebase
================================================================================
![](https://www.gravatar.com/avatar/7c148ace0d63306091cc79ed9d9e77b4?d=mm&s=200)
A programmer since the tender age of 10, Christoph Burgdorf is the founder of the HannoverJS meetup, and he has been an active member in the AngularJS community since its very beginning. He is also very knowledgeable about the ins and outs of git, and he hosts workshops at [thoughtram][1] to help beginners master the technology.
The following tutorial was originally posted on his [blog][2].
----------
### Tutorial: Git Rebase ###
Imagine you are working on that radical new feature. It's going to be brilliant but it takes a while. You've been working on that for a couple of days now, maybe weeks.
Your feature branch is already six commits ahead of master. You've been a good developer and have crafted meaningful semantic commits. But there's the thing: you are slowly realizing that this beast will still take some more time before it's really ready to be merged back into master.
m1-m2-m3-m4 (master)
         \
          f1-f2-f3-f4-f5-f6 (feature)
What you also realize is that some parts are actually less coupled to the new feature. They could land in master earlier. Unfortunately, the part that you want to port back into master earlier is in a commit somewhere in the middle of your six commits. Even worse, it also contains a change that relies on previous commits of your feature branch. One could argue that you should have made that two commits in the first place, but then nobody is perfect.
m1-m2-m3-m4 (master)
         \
          f1-f2-f3-f4-f5-f6 (feature)
                ^
                |
           mixed commit
At the time that you crafted the commit, you didn't foresee that you might come into a situation where you want to gradually bring the feature into master. Heck! You wouldn't have guessed that this whole thing could take us so long.
What you need is a way to go back in history, open up the commit and split it into two commits so that you can separate out all the things that are safe to be ported back into master by now.
Speaking in terms of a graph, we want to have it like this.
m1-m2-m3-m4 (master)
         \
          f1-f2-f3a-f3b-f4-f5-f6 (feature)
With the work split into two commits, we could just cherry-pick the precious bits into master.
Turns out, git comes with a powerful command git rebase -i which lets us do exactly that. It lets us change the history. Changing the history can be problematic and as a rule of thumb should be avoided as soon as the history is shared with others. In our case though, we are just changing history of our local feature branch. Nobody will get hurt. Promised!
Ok, let's take a closer look at what exactly happened in commit f3. Turns out we modified two files: userService.js and wishlistService.js. Let's say that the changes to userService.js could go straight back into master whereas the changes to wishlistService.js could not, because wishlistService.js does not even exist in master. It was introduced in commit f1.
> Pro Tip: even if the changes would have been in one file, git could handle that. We keep things simple for this blog post though.
We've set up a [public demo repository][3] that we will use for this exercise. To make it easier to follow, each commit message is prefixed with the pseudo SHAs used in the graphs above. What follows is the branch graph as printed by git before we start to split the commit f3.
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git1.png)
Now the first thing we want to do is to checkout our feature branch with git checkout feature. To get started with the rebase we run git rebase -i master.
Now what follows is that git opens a temporary file in the configured editor (defaults to Vim).
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git2.png)
This file is meant to provide you some options for the rebase and it comes with a little cheat sheet (the blue text). For each commit we could choose between the actions pick, reword, edit, squash, fixup and exec. Each action can also be referred to by its short form: p, r, e, s, f and x. It's out of the scope of this article to describe each and every option so let's focus on our specific task.
We want to choose the edit option for our f3 commit, hence we change the contents to look like this.
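As a sketch (the SHAs and message suffixes here are placeholders for the real ones in the demo repository), the edited todo list would read:

pick   <sha-f1> f1: ...
pick   <sha-f2> f2: ...
edit   <sha-f3> f3: ...
pick   <sha-f4> f4: ...
pick   <sha-f5> f5: ...
pick   <sha-f6> f6: ...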
Now we save the file (in Vim: <ESC> followed by :wq, followed by <RETURN>). The next thing we notice is that git stops the rebase at the commit for which we chose the edit option.
What this means is that git started to apply f1, f2 and f3 as if it was a regular rebase but then stopped **after** applying f3. In fact, we can prove that if we just look at the log at the point where we stopped.
To split our commit f3 into two commits, all we have to do at this point is to reset git's pointer to the previous commit (f2) while keeping the working directory the same as it is right now. This is exactly what the mixed mode of git reset does. Since mixed is the default mode of git reset, we can just write git reset HEAD~1. Let's do that and also run git status right after it to see what happened.
The git status tells us that both our userService.js and our wishlistService.js are modified. If we run git diff we can see that those are exactly the changes of our f3 commit.
If we look at the log again at this point we see that the f3 is gone though.
We are now at the point that we have the changes of our previous f3 commit ready to be committed whereas the original f3 commit itself is gone. Keep in mind though that we are still in the middle of a rebase. Our f4, f5 and f6 commits are not lost, they'll be back in a moment.
Let's make two new commits: let's start with the commit for the changes made to the userService.js which are fine to get picked into master. Run git add userService.js followed by git commit -m "f3a: add updateUser method".
Great! Let's create another commit for the changes made to wishlistService.js. Run git add wishlistService.js followed by git commit -m "f3b: add addItems method".
Let's take a look at the log again.
This is exactly what we wanted except our commits f4, f5 and f6 are still missing. This is because we are still in the middle of the interactive rebase and we need to tell git to continue with the rebase. This is done with the command git rebase --continue.
Let's check out the log again.
And that's it. We now have the history we wanted. The previous f3 commit is now split into two commits f3a and f3b. The only thing left to do is to cherry-pick the f3a commit over to the master branch.
To finish the last step we first switch to the master branch. We do this with git checkout master. Now we can pick the f3a commit with the cherry-pick command. We can refer to the commit by its SHA key which is bd47ee1 in this case.
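In terms of commands, this final step boils down to the following (using the SHA quoted above; yours will differ):

$ git checkout master
$ git cherry-pick bd47ee1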
We now have the f3a commit sitting on top of latest master. Exactly what we wanted!
Given the length of the post this may seem like a lot of effort, but it's really only a matter of seconds for an advanced git user.
> Note: Christoph is currently writing a book on [rebasing with Git][4] together with Pascal Precht, and you can subscribe to it at leanpub to get notified when it's ready.
--------------------------------------------------------------------------------
via: https://www.codementor.io/git-tutorial/git-rebase-split-old-commit-master
作者:[cburgdorf][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://www.codementor.io/cburgdorf
[1]:http://thoughtram.io/
[2]:http://blog.thoughtram.io/posts/going-back-in-time-to-split-older-commits/
[3]:https://github.com/thoughtram/interactive-rebase-demo
[4]:https://leanpub.com/rebase-the-complete-guide-on-rebasing-in-git

View File

@ -0,0 +1,89 @@
Learning Vim in 2014: Working with Files
================================================================================
As a software developer, you shouldn't have to spend time thinking about how to get to the code you want to edit. One of the messiest parts of my transition to using Vim full time was its way of dealing with files. Coming to Vim after primarily using Eclipse and Sublime Text, it frustrated me that Vim doesn't bundle a persistent file system viewer, and the built-in ways of opening and switching files always felt extremely painful.
At this point I appreciate the depth of Vim's file management features. I've put together a system that works for me even better than more visual editors once did. Because it's purely keyboard based, it allows me to move through my code much faster. That took some time though, and involves several plugins. But the first step was me understanding Vim's built in options for dealing with files. This post will be looking at the most important structures Vim provides you for file management, with a quick peek at some of the more advanced features you can get through plugins.
### The Basics: Opening a new file ###
One of the biggest obstacles to learning Vim is its lack of visual affordances. Unlike modern GUI based editors, there is no obvious way to do anything when you open a new instance of Vim in the terminal. Everything is done through keyboard commands, and while that ends up being more efficient for experienced users, new Vim users will find themselves looking up even basic commands routinely. So let's start with the basics.
The command to open a new file in Vim is **:e <filename>**. **:e** opens up a new buffer with the contents of the file inside. If the file doesn't exist yet it opens up an empty buffer and will write to the file location you specify once you make changes and save. Buffers are Vim's term for a "block of text stored in memory". That text can be associated with an existing file or not, but there will be one buffer for each file you have open.
After you open a file and make changes, you can save the contents of the buffer back to the file with the write command **:w**. If the buffer is not yet associated with a file or you want to save to a different location, you can save to a specific file with **:w <filename>**. You may need to add a ! and use **:w! <filename>** if you're overwriting an existing file.
This is the survival level knowledge for dealing with Vim files. Plenty of developers get by with just these commands, and it's technically all you need. But Vim offers a lot more for those who dig a bit deeper.
### Buffer Management ###
Moving beyond the basics, let's talk some more about buffers. Vim handles open files a bit differently than other editors. Rather than leaving all open files visible as tabs, or only allowing you to have one file open at a time, Vim allows you to have multiple buffers open. Some of these may be visible while others are not. You can view a list of all open buffers at any time with **:ls**. This shows each open buffer, along with their buffer number. You can then switch to a specific buffer with the **:b <buffer-number>** command, or move in order along the list with the **:bnext** and **:bprevious** commands. (These can be shortened to **:bn** and **:bp** respectively.)
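For instance, a quick buffer-juggling session might look like this (the buffer number is illustrative):

:ls        " list all open buffers
:b 2       " jump straight to buffer number 2
:bn        " move to the next buffer in the list
:bp        " move back to the previous buffer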
While these commands are the fundamental Vim solutions for managing buffers, I've found that they don't map well to my own way of thinking about files. I don't want to care about the order of buffers, I just want to go to the file I'm thinking about, or maybe to the file I was just in before the current one. So while it's important to understand Vim's underlying buffer model, I wouldn't necessarily recommend its builtin commands as your main file management strategy. There are more powerful options available.
![](http://benmccormick.org/content/images/2014/Jul/skitch.jpeg)
### Splits ###
One of the best parts of managing files in Vim is its splits. With Vim, you can split your current window into 2 windows at any time, and then resize and arrange them into any configuration you like. It's not unusual for me to have 6 files open at a given time, each with its own small split of the window.
You can open a new split with **:sp <filename>** or **:vs <filename>**, for horizontal and vertical splits respectively. There are keyboard commands you can use to then resize the windows the way you want them, but to be honest this is the one Vim task I prefer to do with my mouse. A mouse gives me more precision without having to guess the number of columns I want or fiddle back and forth between 2 widths.
After you create some splits, you can switch back and forth between them with **ctrl-w [h|j|k|l]**. This is a bit clunky though, and it's important for common operations to be efficient and easy. If you use splits heavily, I would personally recommend aliasing these commands to **ctrl-h**, **ctrl-j**, etc. in your .vimrc using this snippet.
nnoremap <C-J> <C-W><C-J> "Ctrl-j to move down a split
nnoremap <C-K> <C-W><C-K> "Ctrl-k to move up a split
nnoremap <C-L> <C-W><C-L> "Ctrl-l to move right a split
nnoremap <C-H> <C-W><C-H> "Ctrl-h to move left a split
### The jumplist ###
Splits solve the problem of viewing multiple related files at a time, but we still haven't seen a satisfactory solution for moving quickly between open and hidden files. The jumplist is one tool you can use for that.
The jumplist is one of those Vim features that can appear weird or even useless at first. Vim keeps track of every motion command and file switch you make as you're editing files. Every time you "jump" from one place to another in a split, Vim adds an entry to the jumplist. While this may initially seem like a small thing, it becomes powerful when you're switching files a lot, or moving around in a large file. Instead of having to remember your place, or worry about what file you were in, you can instead retrace your footsteps quickly using some quick key commands. **Ctrl-o** allows you to jump back to your last jump location. Repeating it multiple times allows you to quickly jump back to the last file or code chunk you were working on, without having to keep the details of where that code is in your head. You can then move back up the chain with **ctrl-i**. This turns out to be immensely powerful when you're moving around in code quickly, debugging a problem in multiple files or flipping back and forth between 2 files. Instead of typing file names or remembering buffer numbers, you can just move up and down the existing path. It's not the answer to everything, but like other Vim concepts, it's a small focused tool that adds to the overall power of the editor without trying to do everything.
### Plugins ###
So let's be real, if you're coming to Vim from something like Sublime Text or Atom, there's a good chance all of this looks a bit arcane, scary, and inefficient. "Why would I want to type the full path to open a file when Sublime has fuzzy finding?" "How can I get a view of a project's structure without a sidebar to show the directory tree?" Legitimate questions. The good news is that Vim has solutions. They're just not baked into the Vim core. I'll touch more on Vim configuration and plugins in later posts, but for now here's a pointer to 3 helpful plugins that you can use to get Sublime-like file management.
- [CtrlP][1] is a fuzzy finding file search similar to Sublime's "Go to Anything" bar. It's lightning fast and pretty configurable. I use it as my main way of opening new files. With it I only need to know part of the file name and don't need to memorize my project's directory structure.
- [The NERDTree][2] is a "file navigation drawer" plugin that replicates the side file navigation that many editors have. I actually rarely use it, as fuzzy search always seems faster to me. But it can be useful coming into a project, when you're trying to learn the project structure and see what's available. NERDTree is immensely configurable, and also replaces Vim's built in directory tools when installed.
- [Ack.vim][3] is a code search plugin for Vim that allows you to search across your project for text expressions. It acts as a light wrapper around Ack or Ag, [2 great code search tools][4], and allows you to quickly jump to any occurrence of a search term in your project.
Between its core and its plugin ecosystem, Vim offers enough tools to allow you to craft your workflow any way you want. File management is a key part of a good software development system, and it's worth experimenting to get it right.
Start with the basics for long enough to understand them, and then start adding tools on top until you find a comfortable workflow. It will all be worth it when you're able to seamlessly move to the code you want to work on without the mental overhead of figuring out how to get there.
### More Resources ###
- [Seamlessly Navigate Vim & Tmux Splits][5] This is a must read for anyone who wants to use vim with [tmux][6]. It presents an easy system for treating Vim and Tmux splits as equals, and moving between them easily.
- [Using Tab Pages][7] One file management feature I didn't cover, since it's poorly named and a bit confusing to use, is Vim's "tab" feature. This post on the Vim wiki gives a good overview of how you can use "tab pages" to have multiple views of your current workspace.
- [Vimcasts: The edit command][8] Vimcasts in general is a great resource for anyone learning Vim, but this screencast does a good job of covering the file opening basics mentioned above, with some suggestions on improving the builtin workflow.
### Subscribe ###
This was the third in a series of posts on learning Vim in a modern way. If you enjoyed the post consider subscribing to the [feed][9] or joining my [mailing list][10]. I'll be continuing with [a post on Vim configuration next week][11] after a brief JavaScript interlude later this week. You should also check out the first 2 posts in this series, on [the basics of using Vim][12], and [the language of Vim and Vi][13].
--------------------------------------------------------------------------------
via: http://benmccormick.org/2014/07/07/learning-vim-in-2014-working-with-files/
作者:[Ben McCormick][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://benmccormick.org/2014/07/07/learning-vim-in-2014-working-with-files/
[1]:https://github.com/kien/ctrlp.vim
[2]:https://github.com/scrooloose/nerdtree
[3]:https://github.com/mileszs/ack.vim
[4]:http://benmccormick.org/2013/11/25/a-look-at-ack/
[5]:http://robots.thoughtbot.com/seamlessly-navigate-vim-and-tmux-splits
[6]:http://tmux.sourceforge.net/
[7]:http://vim.wikia.com/wiki/Using_tab_pages
[8]:http://vimcasts.org/episodes/the-edit-command/
[9]:http://feedpress.me/benmccormick
[10]:http://eepurl.com/WFYon
[11]:http://benmccormick.org/2014/07/14/learning-vim-in-2014-configuring-vim/
[12]:http://benmccormick.org/2014/06/30/learning-vim-in-2014-the-basics/
[13]:http://benmccormick.org/2014/07/02/learning-vim-in-2014-vim-as-language/

View File

@ -0,0 +1,121 @@
wangjiezhe translating
Using GIT to backup your website files on linux
================================================================================
![](http://techarena51.com/wp-content/uploads/2014/09/git_logo-1024x480-580x271.png)
Well, not exactly Git, but a piece of software based on Git known as BUP. I generally use rsync to back up my files, and that has worked fine so far. The only drawback is that you cannot restore your files to a particular point in time. Hence, I started looking for an alternative and found BUP, a git-based tool which stores your data in repositories and gives you the option to restore data to a particular point in time.
With BUP, you will first need to initialize an empty repository, then take a backup of all your files. When BUP takes a backup, it creates a restore point which you can later restore to. It also creates an index of all your files; this index contains file attributes and checksums. When another backup is scheduled, BUP compares the files against this index and only saves data if anything has changed. This saves you a lot of space.
### Installing BUP (Tested on Centos 6 & 7) ###
Ensure you have RPMFORGE and EPEL repos installed.
[techarena51@vps ~]$ sudo yum groupinstall "Development Tools"
[techarena51@vps ~]$ sudo yum install python python-devel
[techarena51@vps ~]$ sudo yum install fuse-python pyxattr pylibacl
[techarena51@vps ~]$ sudo yum install perl-Time-HiRes
[techarena51@vps ~]$ git clone git://github.com/bup/bup
[techarena51@vps ~]$ cd bup
[techarena51@vps ~]$ make
[techarena51@vps ~]$ make test
[techarena51@vps ~]$ sudo make install
For Debian/Ubuntu users, you can do “apt-get build-dep bup” on recent versions; for more information, check out https://github.com/bup/bup
You may get errors on CentOS 7 at “make test”, but you can continue to run make install.
The first step like git is to initialize an empty repository.
[techarena51@vps ~]$ bup init
By default, bup will store its repository under “~/.bup” but you can change that by setting the “export BUP_DIR=/mnt/user/bup” environment variable
Next, you create an index of all files. The index, as I mentioned earlier, stores a listing of files, their attributes, and their git object ids (sha1 hashes). (Attributes include soft links, permissions, as well as the immutable bit.)
bup index /path/to/file
bup save -n nameofbackup /path/to/file
#Example
[techarena51@vps ~]$ bup index /var/www/html
Indexing: 7973, done (4398 paths/s).
bup: merging indexes (7980/7980), done.
[techarena51@vps ~]$ bup save -n techarena51 /var/www/html
Reading index: 28, done.
Saving: 100.00% (4/4k, 28/28 files), done.
bloom: adding 1 file (7 objects).
Receiving index from server: 1268/1268, done.
bloom: adding 1 file (7 objects).
“bup save” will split all the contents of the files into chunks and store them as objects. The “-n” option takes the name of the backup.
You can check a list of backups as well as a list of backed up files.
[techarena51@vps ~]$ bup ls
local-etc techarena51 test
#Check for a list of backups available for my site
[techarena51@vps ~]$ bup ls techarena51
2014-09-24-064416 2014-09-24-071814 latest
#Check for the files available in these backups
[techarena51@vps ~]$ bup ls techarena51/2014-09-24-064416/var/www/html
apc.php techarena51.com wp-config-sample.php wp-load.php
Backing up files on the same server is never a good option. BUP allows you to remotely back up your website files; you do, however, need to ensure that your SSH keys and BUP are installed on the remote server.
bup index path/to/dir
bup save -r remote-vps.com: -n backupname path/to/dir
### Example: Backing up the “/var/www/html” directory ###
[techarena51@vps ~]$bup index /var/www/html
[techarena51@vps ~]$ bup save -r user@remotelinuxvps.com: -n techarena51 /var/www/html
Reading index: 28, done.
Saving: 100.00% (4/4k, 28/28 files), done.
bloom: adding 1 file (7 objects).
Receiving index from server: 1268/1268, done.
bloom: adding 1 file (7 objects).
### Restoring your Backup ###
Log into the remote server and type the following
[techarena51@vps ~]$bup restore -C ./backup techarena51/latest
#Restore an older version of the entire working dir elsewhere
[techarena51@vps ~]$bup restore -C /tmp/bup-out /testrepo/2013-09-29-195827
#Restore one individual file from an old backup
[techarena51@vps ~]$bup restore -C /tmp/bup-out /testrepo/2013-09-29-201328/root/testbup/binfile1.bin
The only drawback is that you cannot restore files directly to another server; you have to manually copy the files over via SCP or rsync.
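For example, after restoring a snapshot locally, something like this (host and paths are illustrative) would push it to another server:

$ rsync -av /tmp/bup-out/ user@other-server.com:/var/www/html/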
### View your backups via an integrated web server ###
bup web
#specific port
bup web :8181
You can run bup from a shell script via a daily cron job:
#!/bin/bash
bup index /var/www/html
bup save -r user@remote-vps.com: -n techarena51 /var/www/html
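A matching crontab entry (the script path here is just an example) to run that backup every night at 2 AM would be:

0 2 * * * /home/techarena51/bup-backup.sh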
BUP may not be perfect, but it gets the job done pretty well. I would definitely like to see more development on this project and hopefully a remote restore as well.
You may also like to read using [inotify-tools][1] for real time file syncing.
--------------------------------------------------------------------------------
via: http://techarena51.com/index.php/using-git-backup-website-files-on-linux/
作者:[Leo G][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://techarena51.com/
[1]:http://techarena51.com/index.php/inotify-tools-example/

View File

@ -0,0 +1,94 @@
How to Boot Linux ISO Images Directly From Your Hard Drive
================================================================================
Linux's GRUB2 boot loader can boot Linux ISO files directly from your hard drive. Boot Linux live CDs or even install Linux on another hard drive partition without burning it to disc or booting from a USB drive.
We performed this process on Ubuntu 14.04 — Ubuntu and Ubuntu-based Linux distributions have good support for this. [Other Linux distributions][1] should work similarly.
### Get a Linux ISO File ###
This trick requires you have a Linux system installed on your hard drive. Your computer must be using [the GRUB2 boot loader][2], which is a standard boot loader on most Linux systems. Sorry, you can't boot a Linux ISO file directly from a Windows system using the Windows boot loader.
Download the ISO files you want to use and store them on your Linux partition. GRUB2 should support most Linux systems. If you want to use them in a live environment without installing them to your hard drive, be sure to download the “[live CD][3]” versions of each Linux ISO. Many Linux-based bootable utility discs should also work.
### Check the Contents of the ISO File ###
You may need to look inside the ISO file to determine exactly where specific files are. For example, you can do this by opening the ISO file with the Archive Manager/File Roller graphical application that comes with Ubuntu and other GNOME-based desktop environments. In the Nautilus file manager, right-click the ISO file and select Open with Archive Manager.
Locate the kernel file and the initrd image. If you're using an Ubuntu ISO file, you'll find these files inside the casper folder — the vmlinuz file is the Linux kernel and the initrd file is the initrd image. You'll need to know their location inside the ISO file later.
![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x350xvmlinuz-and-initrd-file-locations.png.pagespeed.ic.hB1yMlHMr2.png)
### Determine the Hard Drive Partitions Path ###
GRUB uses a different “device name” scheme than Linux does. On a Linux system, /dev/sda1 is the first partition on the first hard disk — **a** means the first hard disk and **1** means its first partition. In GRUB, (hd0,1) is equivalent to /dev/sda1. The **0** means the first hard disk, while the **1** means the first partition on it. In other words, in a GRUB device name, the disk numbers start counting at 0 and the partition numbers start counting at 1 — yes, it's unnecessarily confusing. For example, (hd3,6) refers to the sixth partition on the fourth hard disk.
You can use the **fdisk -l** command to view this information. On Ubuntu, open a Terminal and run the following command:
sudo fdisk -l
You'll see a list of Linux device paths, which you can convert to GRUB device names on your own. For example, below we can see the system partition is /dev/sda1 — so that's (hd0,1) for GRUB.
![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x410xfdisk-l-command.png.pagespeed.ic.yW7uP1_G0C.png)
### Create the GRUB2 Boot Entry ###
The easiest way to add a custom boot entry is to edit the /etc/grub.d/40_custom script. This file is designed for user-added custom boot entries. After editing the file, the contents of your /etc/defaults/grub file and the /etc/grub.d/ scripts will be combined to create a /boot/grub/grub.cfg file — you shouldn't edit this file by hand. It's designed to be automatically generated from settings you specify in other files.
You'll need to open the /etc/grub.d/40_custom file for editing with root privileges. On Ubuntu, you can do this by opening a Terminal window and running the following command:
sudo gedit /etc/grub.d/40_custom
Feel free to open the file in your favorite text editor. For example, you could replace “gedit” with “nano” in the command to open the file in [the Nano text editor][4].
Unless you've added other custom boot entries, you should see a mostly empty file. You'll need to add one or more ISO-booting sections to the file below the [commented][5] lines.
![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x300xadd-custom-boot-menu-entries-to-grub.png.pagespeed.ic.uUT-Yls8xf.png)
Here's how you can boot an Ubuntu or Ubuntu-based distribution from an ISO file. We tested this with Ubuntu 14.04:
menuentry "Ubuntu 14.04 ISO" {
    set isofile="/home/name/Downloads/ubuntu-14.04.1-desktop-amd64.iso"
    loopback loop (hd0,1)$isofile
    linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash
    initrd (loop)/casper/initrd.lz
}
Customize the boot entry to contain your desired menu entry name, the correct path to the ISO file on your computer, and the device name of the hard disk and partition containing the ISO file. If the vmlinuz and initrd files have different names or paths, be sure to specify the correct path to those files, too.
(If you have a separate /home/ partition, omit the /home bit, like so: **set isofile="/name/Downloads/${isoname}"**).
**Important Note**: Different Linux distributions require different boot entries with different boot options. The GRUB Live ISO Multiboot project offers a variety of [menu entries for different Linux distributions][6]. You should be able to adapt these example menu entries for the ISO file you want to boot. You can also just perform a web search for the name and release number of the Linux distribution you want to boot along with “boot from ISO in GRUB” to find more information.
![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x392xadd-a-linux-iso-file-to-grub-boot-loader.png.pagespeed.ic.2FR0nOtugC.png)
If you want to add more ISO boot options, add additional sections to the file.
Save the file when you're done. Return to a Terminal window and run the following command:
sudo update-grub
![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x249xgenerate-grub.cfg-on-ubuntu.png.pagespeed.ic.5I70sH4ZRs.png)
The next time you boot your computer, you'll see the ISO boot entry and you can choose it to boot the ISO file. You may have to hold Shift while booting to see the GRUB menu.
If you see an error message or a black screen when you attempt to boot the ISO file, you misconfigured the boot entry somehow. Even if you got the ISO file path and device name right, the paths to the vmlinuz and initrd files on the ISO file may not be correct, or the Linux system you're booting may require different options.
--------------------------------------------------------------------------------
via: http://www.howtogeek.com/196933/how-to-boot-linux-iso-images-directly-from-your-hard-drive/
作者:[Chris Hoffman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.howtogeek.com/author/chrishoffman/
[1]:http://www.howtogeek.com/191207/10-of-the-most-popular-linux-distributions-compared/
[2]:http://www.howtogeek.com/196655/how-to-configure-the-grub2-boot-loaders-settings/
[3]:http://www.howtogeek.com/172810/take-a-secure-desktop-everywhere-everything-you-need-to-know-about-linux-live-cds-and-usb-drives/
[4]:http://www.howtogeek.com/howto/42980/the-beginners-guide-to-nano-the-linux-command-line-text-editor/
[5]:http://www.howtogeek.com/118389/how-to-comment-out-and-uncomment-lines-in-a-configuration-file/
[6]:http://git.marmotte.net/git/glim/tree/grub2

View File

@ -1,6 +1,6 @@
优化 GitHub 服务器上的 MySQL 数据库性能
================================================================================
> 在 GitHub 我们总是说“如果网站响应速度不够快,说明我们的工作没完成”。我们之前在[前端的体验速度][1]这篇文章中介绍了一些提高网站响应速率的方法,但这只是故事的一部分。真正影响到 GitHub.com 性能的因素是 MySQL 数据库架构。让我们来瞧瞧我们的基础架构团队是如何无缝升级了 MySQL 架构吧这事儿发生在去年8月份成果就是大大提高了 GitHub 网站的速度。
> 在 GitHub 我们总是说“如果网站响应速度不够快,我们就不应该让它上线运营”。我们之前在[前端的体验速度][1]这篇文章中介绍了一些提高网站响应速率的方法,但这只是故事的一部分。真正影响到 GitHub.com 性能的因素是 MySQL 数据库架构。让我们来瞧瞧我们的基础架构团队是如何无缝升级了 MySQL 架构吧这事儿发生在去年8月份成果就是大大提高了 GitHub 网站的速度。
### 任务 ###

View File

@ -0,0 +1,89 @@
桌面看腻了?试试这 4 款漂亮的 Linux 图标主题吧
================================================================================
**Ubuntu 的默认图标主题在 5 年内[并未发生太大的变化][1],那些说“[图标早就彻底更新过了][2]”的你过来,我保证不打你。如果你确实想尝试一些新鲜的东西,我们将向你展示一些惊艳的替代品,它们会让你感到眼前一亮。**
如果还是感到不太满意,你可以在文末的评论里留下你比较中意的图标主题的链接地址。
### Captiva ###
![Captiva 图标 + elementary 文件夹图标 + Moka GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-and-captiva.jpg)
Captiva 图标 + elementary 文件夹图标 + Moka GTK
Captiva 是一款相对较新的图标主题,即使那些有华丽图标倾向的用户也会接受它。
Captiva 由 DeviantArt 的用户 ~[bokehlicia][3] 制作,它并未使用现在非常流行的平面扁平风格,而是采用了一种圆润、柔和的外观。图标本身呈现出一种很有质感的材质外观,同时通过微调的阴影和亮丽的颜色提高了自身的格调。
不过 Captiva 图标主题并未包含文件夹图标在内,因此它将使用 elementary如果可以的话或者普通的 Ubuntu 文件夹图标。
要想在 Ubuntu 14.04 中安装 Captiva 图标,你可以新开一个终端,按如下方式添加官方 PPA 并进行安装:
sudo add-apt-repository ppa:captiva/ppa
sudo apt-get update && sudo apt-get install captiva-icon-theme
或者,如果你不擅长通过软件源安装的话,你也可以直接从 DeviantArt 的主页上下载图标压缩包,把解压出来的文件夹挪到家目录的 .icons 目录下即可完成安装。
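手动安装大致相当于执行下面几条命令(压缩包的文件名仅为示例,以实际下载到的为准):

mkdir -p ~/.icons
unzip ~/Downloads/captiva-icon-theme.zip -d ~/.icons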
不过在你完成安装后,你必须得通过像 [Unity Tweak Tool][4] 这样的工具来把你安装的图标主题(本文列出的其他图标主题也要这样)应用到系统上。
- [DeviantArt 上的 Captiva 图标主题][5]
### Square Beam ###
![Square Beam 图标在 Orchis GTK 主题下](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/squarebeam.jpg)
Square Beam 图标在 Orchis GTK 主题下
厌倦有棱角的图标了?尝试下 Square Beam 吧。Square Beam 凭借其艳丽的色泽、强烈的渐变和鲜明的图标形象,比本文列出的其他图标具有更宏大的视觉效果。Square Beam 声称自己有超过 30,000 个不同图标(抱歉,我没有仔细数过),因此你很难找到它没有考虑到的地方。
- [GNOME-Look.org 上的 Square Beam 图标主题][6]
### Moka & Faba ###
![Moka/Faba Mono 图标在 Orchis GTK 主题下](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-faba.jpg)
Moka/Faba Mono 图标在 Orchis GTK 主题下
这里得稍微介绍下 Moka 图标集。事实上,我敢打赌阅读此文的绝大部分用户正在使用这款图标。
柔和的颜色、平滑的边缘以及简洁的图标艺术设计Moka 是一款真正出色的覆盖全面的应用图标。它的兄弟 Faba 将这些特点展现得淋漓尽致,而 Moka 也将延续这些 —— 涵盖所有的系统图标、文件夹图标、面板图标,等等。
欲知 Ubuntu 上的安装详情、访问项目官方网站?请点击下面的链接。
- [下载 Moka & Faba 图标主题][7]
### Compass ###
![Compass 图标在 Numix Blue GTK 主题下](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/compass1.jpg)
Compass 图标在 Numix Blue GTK 主题下
在本文最后推荐的是 Compass最后推荐当然不是最差的意思。这款图标仍然保持着2D双色的 UI 设计风格。它也许不像本文推荐的其他图标那样鲜明但这正是它的特色。Compass 坚持这种风格并不断完善——看看文件夹的图标就知道了!
可以通过 GNOME-Look下面有链接进行下载和安装或者通过添加 Nitrux Artwork 的 PPA 安装:
sudo add-apt-repository ppa:nitrux/nitrux-artwork
sudo apt-get update && sudo apt-get install compass-icon-theme
- [GNOME-Look.org 上的 Compass 图标主题][8]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/09/4-gorgeous-linux-icon-themes-download
作者:[Joey-Elijah Sneddon][a]
译者:[SteveArcher](https://github.com/SteveArcher)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2010/02/lucid-gets-new-icons-for-rhythmbox-ubuntuone-memenu-more
[2]:http://www.omgubuntu.co.uk/2012/08/new-icon-theme-lands-in-lubuntu-12-10
[3]:http://bokehlicia.deviantart.com/
[4]:http://www.omgubuntu.co.uk/2014/06/unity-tweak-tool-0-7-development-download
[5]:http://bokehlicia.deviantart.com/art/Captiva-Icon-Theme-479302805
[6]:http://gnome-look.org/content/show.php/Square-Beam?content=165094
[7]:http://mokaproject.com/moka-icon-theme/download/ubuntu/
[8]:http://gnome-look.org/content/show.php/Compass?content=160629

View File

@ -0,0 +1,88 @@
IPv6IPv4犯的罪为什么要我来弥补
================================================================================
LCTT标题党了一把哈哈哈好过瘾求不拍砖
在过去的十年间IPv6 本来应该得到很大的发展,但事实上这种好事并没有降临。由此导致了一个结果,那就是大部分人都不了解 IPv6 的一些知识它是什么怎么使用以及为什么它会存在LCTT这是要回答蒙田的“我是谁”哲学思考题吗
![IPv4 and IPv6 Comparison](http://www.tecmint.com/wp-content/uploads/2014/09/ipv4-ipv6.gif)
IPv4 和 IPv6 的区别
### IPv4 做错了什么? ###
自从1981年发布了 RFC 791 标准以来我们就一直在使用 **IPv4**。在那个时候,电脑又大又贵还不多见,而 IPv4 号称能提供**40亿条 IP 地址**,在当时看来,这个数字好大好大。不幸的是,这么多的 IP 地址并没有被充分利用起来,地址与地址之间存在间隙。举个例子,一家公司可能有**254(2^8-2)**条地址但只使用其中的25条剩下的229条被空占着以备将来之需。于是这些空闲着的地址不能服务于真正需要它们的用户原因就是网络路由规则的限制。最终的结果是在1981年看起来那个好大好大的数字在2014年看起来变得好小好小。
互联网工程任务组(**IETF**在90年代指出了这个问题并提供了两套解决方案无类型域间选路**CIDR**)以及私有地址。在 CIDR 出现之前,你只能选择三种网络地址长度:**24 位** (共可用16,777,214个地址), **20位** (共可用1,048,574个地址)以及**16位** (共可用65,534个地址)。CIDR 出现之后,你可以将一个网络再划分成多个子网。
举个例子,如果你需要**5个 IP 地址**,你的 ISP 会为你提供一个子网里面的主机地址长度为3位也就是说你最多能得到**6个地址**LCTT抛开子网的网络号3位主机地址长度可以表示07共8个地址但第0个和第7个有特殊用途不能被用户使用所以你最多能得到6个地址。这种方法让 ISP 能尽最大效率分配 IP 地址。“私有地址”这套解决方案的效果是你可以自己创建一个网络里面的主机可以访问外网的主机但外网的主机很难访问到你创建的那个网络上的主机因为你的网络是私有的、别人不可见的。你可以创建一个非常大的网络因为你可以使用16,777,214个主机地址并且你可以将这个网络分割成更小的子网方便自己管理。
也许你现在正在使用私有地址。看看你自己的 IP 地址,如果这个地址在这些范围内:**10.0.0.0 10.255.255.255**、**172.16.0.0 172.31.255.255**或**192.168.0.0 192.168.255.255**就说明你在使用私有地址。这两套方案有效地将“IP 地址用尽”这个灾难延迟了好长时间,但这毕竟只是权宜之计,现在我们正面临最终的审判。
**IPv4** 还有另外一个问题,那就是这个协议的消息头长度可变。如果数据通过软件来路由,这个问题还好说。但现在路由器功能都是由硬件提供的,处理变长消息头对硬件来说是一件困难的事情。一个大的路由器需要处理来自世界各地的大量数据包,这个时候路由器的负载是非常大的。所以很明显,我们需要固定消息头的长度。
还有一个问题,在分配 IP 地址的时候美国人发明了因特网LCTT这个万恶的资本主义国家占用了大量 IP 地址)。其他国家只得到了 IP 地址的碎片。我们需要重新定制一个架构让连续的 IP 地址能在地理位置上集中分布这样一来路由表可以做得更小LCTT想想吧网速肯定更快
还有一个问题,这个问题你听起来可能还不大相信,就是 IPv4 配置起来比较困难,而且还不好改变。你可能不会碰到这个问题,因为你的路由器为你做了这些事情,不用你去操心。但是你的 ISP 对此一直是很头疼的。
下一代因特网需要考虑上述的所有问题。
### IPv6 和它的优点 ###
**IETF** 在1995年12月公布了下一代 IP 地址标准,名字叫 IPv6为什么不是 IPv5因为由于一个失误“版本5”这个编号已经被其他项目用掉了。IPv6 的优点如下:
- 128位地址长度共有3.402823669×10³⁸个地址
- 这个架构下的地址在逻辑上聚合
- 消息头长度固定
- 支持自动配置和修改你的网络。
我们一项一项地分析这些特点:
#### 地址 ####
人们谈到 **IPv6** 时,第一件注意到的事情就是它的地址好多好多。为什么要这么多?因为设计者考虑到地址不能被充分利用起来,我们必须提供足够多的地址,让用户去挥霍,从而达到一些特殊目的。所以如果你想架设自己的 IPv6 网络,你的 ISP 可以给你分配拥有**64位**主机地址长度的网络可以分配1.844674407×10¹⁹台主机你想怎么玩就怎么玩。
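顺便一提,在 Linux 上你可以用下面的命令查看本机拿到的 IPv6 地址(网卡名 eth0 仅为示例):

$ ip -6 addr show dev eth0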
#### 聚合 ####
有这么多的地址,这个地址可以被稀稀拉拉地分配给主机,从而更高效地路由数据包。算一笔帐啊,你的 ISP 拿到一个**80位**地址长度的网络空间其中16位是 ISP 的子网地址剩下64位分给你作为主机地址。这样一来你的 ISP 可以分配65,534个子网。
然而,这些地址分配不是一成不变的,如果 ISP 想拥有更多的小子网,完全可以做到(当然,土豪 ISP 可能会要求再来一个80位网络空间。最高的48位地址是相互独立的也就是说 ISP 与 ISP 之间虽然可能分到相同的80位网络空间但是这两个空间是相互隔离的好处就是一个网络空间里面的地址会聚合在一起。
#### 固定的消息头长度 ####
**IPv4** 消息头长度可变,但 **IPv6** 消息头长度被固定为40字节。IPv4 会由于额外的参数导致消息头变长IPv6 中,如果有额外参数,这些信息会被放到一个紧挨着消息头的地方,不会被路由器处理,当消息到达目的地时,这些额外参数会被软件提取出来。
IPv6 消息头有一个部分叫“flow”是一个20位伪随机数用于简化路由器对数据包的路由过程。如果一个数据包存在“flow”路由器就可以根据这个值作为索引查找路由表不必慢吞吞地遍历整张路由表来查询路由路径。这个优点使 **IPv6** 更容易被路由。
#### 自动配置 ####
**IPv6** 中,当主机开机时,会检查本地网络,看看有没有其他主机使用了自己的 IP 地址。如果地址没有被使用,就接着查询本地的 IPv6 路由器,找到后就向它请求一个 IPv6 地址。然后这台主机就可以连上互联网了 —— 它有自己的 IP 地址,和自己的默认路由器。
如果这台默认路由器当机,主机就会接着找其他路由器,作为备用路由器。这个功能在 IPv4 协议里实现起来非常困难。同样地,假如路由器想改变自己的地址,自己改掉就好了。主机会自动搜索路由器,并自动更新路由器地址。路由器会同时保存新老地址,直到所有主机都把自己的路由器地址更新成新地址。
IPv6 自动配置还不是一个完整的解决方案。想要有效地使用互联网,一台主机还需要另外的东西:域名服务器、时间同步服务器,或者还需要一台文件服务器。于是 **dhcp6** 出现了,提供与 dhcp 一样的服务,唯一的区别是 dhcp6 的机器可以在可路由的状态下启动,一个 dhcp 进程可以为大量网络提供服务。
#### 唯一的大问题 ####
如果 IPv6 真的比 IPv4 好那么多为什么它还没有被广泛使用起来Google 在**2014年5月份**估计 IPv6 的市场占有率为**4%**)?一个最基本的原因是“先有鸡还是先有蛋”问题,用户需要让自己的服务器能为尽可能多的客户提供服务,这就意味着他们必须部署一个 **IPv4** 地址。
当然,他们可以同时使用 IPv4 和 IPv6 两套地址,但很少有客户会用到 IPv6并且你还需要对你的软件做一些小修改来适应 IPv6。另外比较头疼的一点是很多家庭的路由器压根不支持 IPv6。还有就是 ISP 也不愿意支持 IPv6我问过我的 ISP 这个问题,得到的回答是:只有客户明确指出要部署这个时,他们才会用 IPv6。然后我问了现在有多少人有这个需求答案是包括我在内共有1个。
与这种现实状况呈明显对比的是所有主流操作系统Windows、OS X、Linux 都默认支持 IPv6 好多年了。这些操作系统甚至提供软件让 IPv6 的数据包披上 IPv4 的皮来骗过那些会丢弃 IPv6 数据包的主机从而达到传输数据的目的LCTT这是高科技偷渡
#### 总结 ####
IPv4 已经为我们服务了好长时间。但是它的缺陷会在不远的将来遭遇不可克服的困难。IPv6 通过改变地址分配规则、简化数据包路由过程、简化首次加入网络时的配置过程等策略,可以完美解决这个问题。
问题是,大众在接受和使用 IPv6 的过程中进展缓慢,因为改变代价太大了。好消息是所有操作系统都支持 IPv6所以当你有一天想做出改变你的电脑只需要改变一点点东西就能转到全新的架构体系中去。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/ipv4-and-ipv6-comparison/
作者:[Jeff Silverman][a]
译者:[bazz2](https://github.com/bazz2)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/jeffsilverm/

View File

@ -0,0 +1,65 @@
7个杀手级的开源监测工具
================================================================================
想要更清晰的了解你的网络吗?没有比这几个免费的工具更好用的了。
网络和系统监控是一个很宽的范畴。有的方案用于监控服务器、网络设备和应用是否正常工作,也有跟踪这些系统和设备性能、提供趋势分析的解决方案。有些工具像个闹钟一样,当发现问题的时候就会报警,而另外一些工具甚至可以在报警的同时触发一些动作。这里收集了一些开源工具,旨在解决上述的部分甚至大部分问题。
### Cacti ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_02-netmon-cacti-100448914-orig.jpg)
Cacti 是一个功能广泛的图表和趋势分析工具几乎可以跟踪任何可监测的指标并绘制成图表。从硬盘的利用率到风扇的转速在一个电脑管理系统中只要是可以被监测的指标Cacti 都可以监测,并快速地转换成可视化的图表。
### Nagios ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_03-netmon-nagios-100448915-orig.jpg)
Nagios 是一个经典的老牌系统和网络监测工具运行速度快、可靠但需要针对应用定制。对于初学者来说Nagios 是一个挑战,但它极其复杂的配置也正好反映出它的强大,因为它几乎可以适用于任何监控任务。要说缺点的话就是不怎么耐看,但是其强劲的动力和可靠性弥补了这个缺点。
### Icinga ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_04-netmon-icinga-100448916-orig.jpg)
Icinga 是一个正在重建的 Nagios 分支,它提供了一个全面的监控和警报框架,致力于打造一个像 Nagios 一样开放和可扩展的平台,但使用了和 Nagios 不一样的 Web 界面。Icinga 1 和 Nagios 非常相近,而 Icinga 2 则是完全重写的。两个版本都能与 Nagios 很好地兼容,而且 Nagios 用户可以很轻松地迁移到 Icinga 1 平台。
### NeDi ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_05-netmon-nedi-100448917-orig.jpg)
NeDi 可能不像其他工具那样闻名全世界,但它确实是一个跟踪网络接入的强大解决方案。它可以持续地遍历网络基础设施,为设备编制目录,保持对任何事件的跟踪,并且可以提供任意设备的当前位置,也包括历史位置。
NeDi 可以被用于定位被偷的或者丢失掉的设备,只要设备出现在网络上。它甚至可以在地图上显示所有已发现的节点,并且清晰地展示网络是怎么互联的,一直到物理设备的端口。
### Observium ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_06-netmon-observium-100448918-orig.jpg)
Observium 结合了系统和网络监控,在性能趋势监测上有很好的表现。它支持静态和动态发现来确认服务器和网络设备,利用多种监测方法,可以监测任何可用的指标。Web 界面非常整洁易用。
就如我们看到的Observium也可以在地图上显示任何被监测节点的实际位置。需要注意的是面板上关于活跃设备和警报的计数。
### Zabbix ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_07-netmon-zabbix-100448919-orig.jpg)
Zabbix 利用一系列广泛的工具监测服务器和网络。它为大多数操作系统提供了代理程序agent你也可以使用被动方式或者外部检查包括 SNMP来监控主机和网络设备。你还会发现很多提醒和通知机制以及一个非常人性化的 Web 界面适用于不同的面板此外Zabbix 还拥有一些特殊的管理工具来监测 Web 应用和虚拟化管理程序hypervisor
Zabbix 还可以提供详细的互联图,以便于我们了解某些对象是怎么连接的。这些图是可以定制的,并且,图也可以以被监测的服务器和主机的分组形式被创建。
### Ntop ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_08-netmon-ntop-100448920-orig.jpg)
Ntop是一个数据包嗅探工具。有一个整洁的Web界面用来显示被监测网络的实时数据。即时的网络数据通过一个高级的绘图工具可以可视化。主机信息流和主机通信信息对也可以被实时的进行可视化显示。
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/2686794/asset-management/164219-7-killer-open-source-monitoring-tools.html
作者:[Paul Venezia][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.networkworld.com/author/Paul-Venezia/

View File

@ -0,0 +1,109 @@
安卓编年史
================================================================================
![电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。](http://cdn.arstechnica.net/wp-content/uploads/2014/01/email2lol.png)
电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。
Ron Amadeo供图
邮件视图是——令人惊讶的——白色。安卓的电子邮件应用从历史角度来说算是个打了折扣的Gmail应用你可以在这里看到紧密的联系。读邮件以及写邮件视图几乎没有任何修改地就从Gmail那里直接取过来使用。
![即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/IM2.png)
即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。
Ron Amadeo供图
在Google Hangouts之前甚至是Google Talk之前就有“IM”——安卓1.0带来的唯一一个即时通讯客户端。令人惊奇的是它支持多种IM服务用户可以从AIMGoogle TalkWindows Live Messenger以及Yahoo中挑选。还记得操作系统开发者什么时候关心过互通性吗
朋友列表是聊天中带有白色聊天气泡的黑色背景界面。状态用一个带颜色的圆形来指示右侧的小安卓机器人指示出某人正在使用移动设备。神奇的是在状态表达上IM 应用远比 Google Hangouts 更为丰富。绿色代表着某人正在使用设备并且已经登录黄色代表着他们登录了但处于空闲状态红色代表他们手动设置状态为忙不想被打扰灰色表示离线。现在Hangouts 只显示用户是否打开了应用。
聊天对话界面明显基于信息应用,聊天的背景从白色和蓝色被换成了白色和绿色。但是没人更改信息输入框的颜色,所以加上橙色的高亮效果,界面共使用了白色,绿色,蓝色和橙色。
![安卓1.0上的YouTube。截图展示了主界面打开菜单的主界面分类界面视频播放界面。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt5000.png)
安卓1.0上的YouTube。截图展示了主界面打开菜单的主界面分类界面视频播放界面。
Ron Amadeo供图
以 G1 的 320p 屏幕和 3G 的网络速度YouTube 在当时可能还看不出有今天这样的移动影响力但谷歌的视频服务在安卓1.0上就预装发布了。主界面看起来就像是安卓市场调整过的版本,顶部带有一个横向滚动选择部分,下面有垂直滚动分类列表。谷歌的一些分类选择还真是奇怪:“最热门”和“最多观看”有什么区别?
一个谷歌没有意识到YouTube最终能达到多庞大的标志——有一个视频分类是“最近更新”。在今天每分钟有[100小时时长的视频][1]上传到Youtube上如果这个分类能正常工作的话它会是一个快速滚动的视频列表快到以至于变为一片无法阅读的模糊。
菜单含有搜索,喜爱,分类,设置。设置(没有图片)是有史以来最简陋的,只有个清除搜索历史的选项。分类都是一样的平淡,仅仅是个黑色的文本列表。
最后一张截图展示了视频播放界面,只支持横屏模式。尽管自动隐藏的播放控制有个进度条,但它还是很奇怪地包含了后退和前进按钮。
![YouTube的视频菜单描述页面评论。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt3.png)
YouTube的视频菜单描述页面评论。
Ron Amadeo供图
每个视频的更多选项可以通过点击菜单按钮来打开。在这里你可以把视频标记为喜爱,查看详细信息,以及阅读评论。所有的这些界面,和视频播放一样,是锁定横屏模式的。
然而“共享”不会打开一个对话框它只是向Gmail邮件中加入了视频的链接。想要把链接通过短信或即时消息发送给别人是不可能的。你可以阅读评论但是没办法评价他们或发表自己的评论。你同样无法给视频评分或赞。
![相机应用的拍照界面,菜单,照片浏览模式。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/camera.png)
相机应用的拍照界面,菜单,照片浏览模式。
Ron Amadeo供图
在实体机上跑上真正的安卓意味着相机功能可以正常运作即便那里没什么太多可关注的。左边的黑色方块是相机的界面原本应该显示取景器图像但SDK的截图工具没办法捕捉下来。G1有个硬件实体的拍照键还记得吗所以相机没必要有个屏幕上的快门键。相机没有曝光白平衡或HDR设置——你可以拍摄照片仅此而已。
菜单按钮显示两个选项:跳转到相册应用和带有两个选项的设置界面。第一个设置选项是是否给照片加上地理标记,第二个是在每次拍摄后显示提示菜单,你可以在上面右边看到截图。同样的,你目前还只能拍照——还不支持视频拍摄。
![日历的月视图,打开菜单的周视图,日视图,以及日程。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/calviews.png)
日历的月视图,打开菜单的周视图,日视图,以及日程。
Ron Amadeo供图
就像这个时期的大多数应用一样,日历的主命令界面是菜单。菜单用来切换视图,添加新事件,导航至当天,选择要显示的日程,以及打开设置。菜单扮演着每个单独按钮的入口的作用。
月视图不能显示约会事件的文字。每个日期旁边有个侧边栏,约会会显示为侧边栏上的绿色色块,通过位置来表示约会是在一天中的什么时候。周视图同样不能显示约会文字——G1 的 320×480 的显示屏像素密度还不够——所以你在日历中看到的是一个带有颜色指示条的白块。唯一能显示文字的是日程和日视图。你可以用滑动来切换日期——左右滑动切换周和日,上下滑动切换月份和日程。
![设置主界面,无线设置,关于页面的底部。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/settings.png)
设置主界面,无线设置,关于页面的底部。
Ron Amadeo供图
安卓1.0最终带来了设置界面。这个界面是个带有文字的黑白界面,粗略地分为各个部分。每个列表项边的下箭头让人误以为点击它会展开折叠的更多东西,但是触摸列表项的任何位置只会加载下一屏幕。所有的界面看起来确实无趣,都差不多一样,但是嘿,这可是设置啊。
任何带有开/关状态的选项都使用了卡通风的复选框。安卓1.0最初的复选框真是奇怪——就算是在“未选中”状态时它们还是有个灰色的勾选标记在里面。安卓把勾选标记当作了灯泡打开时亮起来关闭的时候变得黯淡但这不是复选框的工作方式。然而我们最终还是见到了“关于”页面。安卓1.0运行Linux内核2.6.25版本。
设置界面意味着我们终于可以打开安全设置并更改锁屏。安卓1.0只有两种风格安卓0.9那样的灰色方形锁屏以及需要你在9个点组成的网格中画出图案的图形解锁。像这样的滑动图案相比PIN码更加容易记忆和输入尽管它没有增加多少安全性。
![语音拨号,图形锁屏,电池低电量警告,时间设置。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/grabbag.png)
语音拨号,图形锁屏,电池低电量警告,时间设置。
Ron Amadeo供图
语音功能随语音拨号一起来到了1.0。这个特性以各种不同的实现形式在 AOSP 里存在了一段时间,它只是一个用语音命令拨打号码和联系人的简单应用。语音拨号和谷歌未来的语音产品完全无关,它的工作方式和非智能机上的语音拨号一样。
最后一个值得注意的地方是,当电池电量低于百分之十五的时候,会触发低电量弹窗。这是个有趣的图案,它把电源线错误的一端插入手机。谷歌,那可不是(现在依然不是)手机应该有的充电方式。
安卓1.0是个伟大的开头,但是功能上仍然有许多缺失。实体键盘和大量硬件按钮被强制要求配备,因为不带有十字方向键或轨迹球的安卓设备依然不被允许销售。另外,基本的智能手机功能比如自动旋转依然缺失。内置应用不可能像今天这样通过安卓市场来更新。所有的谷歌系应用和系统交织在一起。如果谷歌想要升级一个单独的应用,需要通过运营商推送整个系统的更新。安卓依然还有许多工作要做。
### 安卓1.1——第一个真正的增量更新 ###
![安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/11.png)
安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。
Ron Amadeo供图
安卓1.0发布四个半月后2009年2月安卓在安卓1.1中得到了它的第一个公开更新。系统方面没有太多变化谷歌向1.1中添加新东西现如今也都已被关闭。谷歌语音搜索是安卓向云端语音搜索的第一个突击,它在应用抽屉里有自己的图标。尽管这个应用已经不能与谷歌服务器通讯,你可以[在iPhone上][2]看到它以前是怎么工作的。它还没有语音操作,但你可以说出想要搜索的,结果会显示在一个简单的谷歌搜索中。
安卓市场添加了对付费应用的支持但是就像beta客户端中一样这个版本的安卓市场不再能够连接Google Play服务器。我们最多能够看到分类界面你可以在免费应用付费应用和全部应用中选择。
地图添加了[谷歌纵横][3]一个向朋友分享自己位置的方法。纵横在几个月前为了支持Google+而被关闭并且不再能够工作。地图菜单里有个纵横的选项,但点击它现在只会打开一个带载入中圆圈的画面,并永远停留在这里。
为了让安卓世界的系统更新来得更加迅速——或者说至少提供一条不用等运营商和 OEM 推送就能检查更新的途径——谷歌向“关于手机”界面添加了检查系统更新按钮。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron是Ars Technica的评论编缉专注于安卓系统和谷歌产品。他总是在追寻新鲜事物还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.youtube.com/yt/press/statistics.html
[2]:http://www.youtube.com/watch?v=y3z7Tw1K17A
[3]:http://arstechnica.com/information-technology/2009/02/google-tries-location-based-social-networking-with-latitude/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -0,0 +1,106 @@
在 Debian 上使用 systemd 管理系统
================================================================================
人类已经无法阻止 systemd 占领全世界的 Linux 系统了唯一阻止它的方法是在你自己的机器上手动卸载它。到目前为止systemd 已经创建了比任何软件都多的技术问题、感情问题和社会问题。这一点从[热议][1](也称 Linux 初始化软件之战)上就能看出,这场争论在 Debian 开发者之间持续了好几个月。当 Debian 技术委员会最终决定将 systemd 放到 Debian 8代号 Jessie的发行版里面时其反对者试图通过多种努力来[取代这项决议][2],甚至有人扬言要威胁那些支持 systemd 的开发者的生命安全。
这也说明了 systemd 对 Unix 传承下来的系统处理方式有很大的干扰。“一个软件只做一件事情”的哲学思想已经被这个新来者彻底颠覆。除了取代了 sysvinit 成为新的系统初始化工具外systemd 还是一个系统管理工具。目前为止,由于 systemd-sysv 这个软件包提供的兼容性,那些我们使用惯了的工具还能继续工作。但是当 Debian 将 systemd 升级到214版本后这种兼容性就不复存在了。升级措施预计会在 Debian 8 "Jessie" 的稳定分支上进行。从此以后用户必须使用新的命令来管理系统、执行任务、变换运行级别、查询系统日志等等。不过这里有一个应对方案,那就是在 .bashrc 文件里面添加一些别名。
现在就让我们来看看 systemd 是怎么改变你管理系统的习惯的。在使用 systemd 之前,你得先把 sysvinit 保存起来,以便在 systemd 出错的时候还能用 sysvinit 启动系统。这种方法只有在没安装 systemd-sysv 的情况下才能生效,具体操作方法如下:
# cp -av /sbin/init /sbin/init.sysvinit
在紧急情况下,可以把下面的文本:
init=/sbin/init.sysvinit
添加到内核启动参数项那里。
### systemctl 的基本用法 ###
systemctl 的功能是替代“/etc/init.d/foo start/stop”这类命令另外其实它还能做其他的事情这点你可以参考 man 文档。
一些基本用法:
- systemctl - 列出所有单元UNIT以及它们的状态这里的 UNIT 指的就是系统上的 job 和 service
- systemctl list-units - 列出所有 UNIT
- systemctl start [NAME...] - 启动一项或多项 UNIT
- systemctl stop [NAME...] - 停止一项或多项 UNIT
- systemctl disable [NAME...] - 将 UNIT 设置为开机不启动
- systemctl list-unit-files - 列出所有已安装的 UNIT以及它们的状态
- systemctl --failed - 列出开机启动失败的 UNIT
- systemctl --type=mount - 列出某种类型的 UNIT类型包含service, mount, device, socket, target
- systemctl enable debug-shell.service - 将一个 shell 脚本设置为开机启动,用于调试
为了更方便处理这些 UNIT你可以使用 systemd-ui 软件包,你只要输入 systemadm 命令就可以使用这个软件。
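举个简单的例子,查看并重启某个服务的典型操作如下(这里以 ssh 服务为例,具体服务名视你的系统而定;重启需要 root 权限):

$ systemctl status ssh.service
# systemctl restart ssh.service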
你同样可以使用 systemctl 实现转换运行级别、重启系统和关闭系统的功能:
- systemctl isolate graphical.target - 切换到运行级别5就是有桌面的级别
- systemctl isolate multi-user.target - 切换到运行级别3没有桌面的级别
- systemctl reboot - 重启系统
- systemctl poweroff - 关机
所有命令,包括切换到其他运行级别的命令,都可以在普通用户的权限下执行。
### journalctl 的基本用法 ###
systemd 不仅提供了比 sysvinit 更快的启动速度,还让日志系统在更早的时候启动起来,可以记录内核初始化阶段、内存初始化阶段、前期启动步骤以及主要的系统执行过程的日志。所以以前那种需要通过对显示屏拍照或者暂停系统来调试程序的日子已经一去不复返啦。
systemd 的日志文件都被放在 /var/log 目录。如果你想使用它的日志功能,需要执行一些命令,因为 Debian 没有打开日志功能。命令如下:
# addgroup --system systemd-journal
# mkdir -p /var/log/journal
# chown root:systemd-journal /var/log/journal
# gpasswd -a $user systemd-journal
通过上面的设置,你就可以以普通用户权限使用 journal 软件查看日志。使用 journalctl 查询日志可以获得一些比 syslog 软件更方便的玩法:
- journalctl --all - 显示系统上所有日志,以及它的用户
- journalctl -f - 监视系统日志的变化(类似 tail -f /var/log/messages 的效果)
- journalctl -b - 显示系统启动以后的日志
- journalctl -k -b -1 - 显示上一次(-b -1系统启动前产生的内核日志
- journalctl -b -p err - 显示系统启动后产生的“ERROR”日志
- journalctl --since=yesterday - 当系统不会经常重启的时候,这条命令能提供比 -b 更短的日志记录
- journalctl -u cron.service --since='2014-07-06 07:00' --until='2014-07-06 08:23' - 显示 cron 服务在某个时间段内打印出来的日志
- journalctl -p 2 --since=today - 显示优先级别为2以内的日志包含 emerg、alert、crit三个级别。所有日志级别有 emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), debug (7)
- journalctl > yourlog.log - 将二进制日志文件复制成文本文件并保存到当前目录
Journal 和 syslog 可以很好的共存。而另一方面,一旦你习惯了操作 journal你也可以卸载掉所有 syslog 的软件,比如 rsyslog 或 syslog-ng。
如果想要得到更详细的日志信息你可以在内核启动参数上添加“systemd.log_level=debug”然后运行下面的命令
# journalctl -alb
你也可以编辑 /etc/systemd/system.conf 文件来修改日志级别。
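比如,想让日志始终保持 debug 级别,可以在该文件的 [Manager] 小节里这样设置(示例片段):

[Manager]
LogLevel=debug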
### 利用 systemd 分析系统启动过程 ###
systemd 可以让你能更有效地分析和优化你的系统启动过程:
- systemd-analyze - 显示本次启动系统过程中用户态和内核态所花的时间
- systemd-analyze blame - 显示每个启动项所花费的时间明细
- systemd-analyze critical-chain - 按时间顺序打印 UNIT 树
- systemd-analyze dot | dot -Tsvg > systemd.svg - 为开机启动过程生成向量图(需要安装 graphviz 软件包)
- systemd-analyze plot > bootplot.svg - 产生开机启动过程的时间图表
![](https://farm6.staticflickr.com/5559/14607588994_38543638b3_z.jpg)
![](https://farm6.staticflickr.com/5565/14423020978_14b21402c8_z.jpg)
systemd 虽然是个年轻的项目,但已经有大量文档。首先要介绍的是 [Lennart Poettering 的 0pointer 系列][3]。这个系列非常详细,非常有技术含量。另外一个是 [freedesktop.org 上的信息文档][4],它包含了最详细的关于 systemd 的链接:发行版的特性页面、bug 跟踪系统和说明文档。你可以使用下面的命令来查询 systemd 都提供了哪些文档:
# man systemd.index
不同发行版之间的 systemd 提供的命令基本一样,最大的不同之处就是打包方式。
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/07/use-systemd-system-administration-debian.html
译者:[bazz2](https://github.com/bazz2) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://lists.debian.org/debian-devel/2013/10/msg00444.html
[2]:https://lists.debian.org/debian-devel/2014/02/msg00316.html
[3]:http://0pointer.de/blog/projects/systemd.html
[4]:http://www.freedesktop.org/wiki/Software/systemd/

View File

@ -0,0 +1,135 @@
自制一台树莓派街机
================================================================================
**利用当代神奇设备来重温80年代的黄金威严。**
### 你需要以下硬件 ###
- 一台树莓派,以及一张 4GB 的 SD 卡
- 一台支持HDMI的LCD显示屏
- 游戏手柄或者...
- 一个JAMMA街机游戏机外壳机箱
- J-Pac或者I-Pac
80年代有太多难忘的记忆冷战结束Quatro 碳酸饮料Korg Polysix 合成器,以及 Commodore 64 家用电脑。但对于某些年轻人来说这些都没有街机游戏机那样有吸引力或者带着那种甜蜜的叛逆。笼罩在烟雾和此起彼伏的8比特音效中它们就是可以在挤出来的时间里去探索的洞穴50分钱和一份代币能让你把整个午餐时间都耗在这些游戏上磨练着你的技能小蜜蜂城市大金刚蜈蚣行星射击吃豆小姐火凤凰R-Type大金刚雷霆计划铁手套街头霸王超越赛车防卫者争战……这个列表太长了。
这些游戏以及玩这些游戏的街机仍然像30年前那样有吸引力。和年轻时候不一样的是现在不用揣一兜零钱就能玩了你终于可以超越那些靠无休止续币取胜的有钱孩子。所以是时候打造一台你自己的基于Linux的街机了然后挑战一下过去的最高分。
我们将会覆盖所有的步骤来将一个便宜的街机游戏机器外壳变成一台Linux驱动的多平台复古游戏系统。但是这并不意味着你就一定要搭建一个同样的系统。比如说你可以放弃那个又大又重还有潜在致癌性外壳的箱子本身而是将内部控制核心装进一个旧游戏主机或同等大小的盒子里。或者说你也可以简单地放弃小巧的树莓派而将系统的大脑换成一台更强劲的Linux主机。举个例子它可以作为运行SteamOS的一个理想平台用来玩那些更优秀的现代街机游戏。
在之后的几个页面里我们将搭建一台基于树莓派的街机游戏机你应该也能从其中发现很多点子应用到你自己的项目上即使它们和我们这个项目不太一样。然后因为我们是用无比强大的MAME来做这件事情你几乎可以让它在任意平台上运行。
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade3.png)
我们是在 B+ 型号出来以前完成这个项目的。它在更新的主板上应该也能同样工作,而且你应该不再需要带电源的 USB Hub点击看大图
### 声明 ###
强调一下我们捣腾的电子器件可能会让你受到电击。请确保你做的任何改动都是有资质的电子工程师帮你检查过的。我们也不会深入讨论如何获取游戏但是有很多合法的资源例如基于MAME模拟器的老游戏以及较新的商业游戏。
#### 第一步:街机机柜 ####
街机机柜本身就是最大的挑战。我们在eBay上淘了个二手的90年代初的双人泡泡龙游戏机。然后花了£220装在一台旅行车后面送过来。类似这种机柜的价格并不确定。我们看到过很多在£100以内的。而另一方面还有很多人愿意花数千块钱去买原版侧面贴纸完整的机器。
决定买一个街机机柜主要有两个考虑。第一个是它的体积这东西又大又重还占地方而且需要至少两个人才能搬动。如果你不缺钱的话还可以买DIY机柜或者全新的、小一点的例如适合摆在桌子上的那种。另外酒柜式机柜也很合适。
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade4.jpg)
这种机柜可能很便宜,但是他们都很重。不要一个人去搬。一些更古老的机器可能还会需要一点小关怀,例如重新喷个漆以及一些修理工作(点击看大图)。
除了获得更加真实的游戏体验以外购买原版的街机机柜的一个绝佳理由是可以使用原版的控制器。从eBay上买到的大多数机器都支持两个人同时玩有两个摇杆以及每个玩家各自的一些按钮再加上玩家一和玩家二的选择按钮。为了兼容更多游戏我们建议您找一台每个玩家都有6个按键的这是通用配置。也许你还想看看支持超过两位玩家的控制面板或者有空间安装其他游戏控制器的比如街机轨迹球类似疯狂弹珠这种游戏需要或者旋钮打砖块需要。这些以后都可以轻松装上去因为有现成的现代USB设备。
控制器是第二考虑的而且我们认为是最重要的因为要通过它把你的摇动和拍打转变成游戏里的动作。当你准备买一个机柜时需要考虑一种叫JAMMA的东西它是日本娱乐机械制造商协会Japan Amusement Machinery Manufacturers Association的缩写。JAMMA是街机游戏机里的行业标准定义了包含游戏芯片的电路板和游戏控制器的连接方式以及投币机制。它是一个连接两个玩家的摇杆和按钮的所有线缆的接口电路把它们统一到一个标准的连接头。JAMMA就是这个连接头的大小以及引脚定义这就意味着不管你安装的主板是什么按钮和控制器都将会连接到相同功能接口所以街机的主人只需要再更换下机柜上的外观图片就可以招揽新玩家了。
但是首先提醒一下JAMMA连接头上带有12V电压供电通常由大多数街机里都有的电源模块供给。为了避免意外短路或是不小心掉个螺丝刀什么的造成损坏我们完全切断了这个供电。在本教程后面的任何阶段我们也不会用到这个连接头上的任何电源脚。
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade2.png)
#### 第二步J-PAC ####
有一点非常方便你可以买到这样一种设备连接街机机柜里的JAMMA接头和电脑的USB端口将机柜上的摇杆和按键动作都转换成可配置的键盘命令它们可以在Linux里用来控制任何想玩的游戏。这个设备就叫J-Pac[www.ultimarc.com/jpac.html][1] 大概£54
它最大的特点不是它的连接性而是它处理和转换输入信号的方式因为它比标准的USB手柄强太多太多了。每一个输入都有自己独立的中断而且没有限制同时按下或按住的按钮或摇杆方向的数量。这对于类似街头霸王的游戏来说非常关键因为他们依赖于同时迅速按下的组合键而且用来对那些发飙后按下自己所有按键的不良对手发出致命一击时也必不可少。许多其他控制器特别是那些生成键盘输入的受到他们所采用的USB控制器的同时六个输入的限制以及一堆的AltShift和Ctrl键的特殊处理的限制。J-Pac还可以接入倾角传感器甚至某些投币装置不用预先配置就可以在Linux下工作了。
另外的选择是一个类似的叫I-Pac的设备。它做了和J-Pac相同的事情只不过不支持JAMMA接头。这意味着你不能把JAMMA控制器接上去但同时也就是说你可以设计你自己的控制器布局再把每个控制接到I-Pac上去。这对第一个项目来说也许有点小难但是这却是许多街机迷们选择的方式特别是他们想设计一个支持四个玩家的控制板的时候或者是一个整合许多不同类型控制的面板的时候。我们采用的方式并不是我们推荐必须要做的我们改造了一个输入有问题的二手X-Arcade Tankstick控制面板换上了新的摇杆和按钮再接到新的JAMMA接口这样有一个非常好的地方就是可以用便宜的价格£8买到所有用到的线材包括电路板边缘插头。
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade5.jpg)
我们的已经装到机柜上的J-Pac。右边的蓝色和红色导线接到我们的机柜上额外的1号和2号玩家按钮点击看大图
不管你选择的是I-Pac或是J-Pac它们产生的按键都是MAME的默认值。也就是说运行模拟器之后不需要手动调整输入。例如玩家1会默认将键盘方向键映射成上下左右以及将左边的Ctrl左边的ALT空格和左边的Shift键映射到按钮1-4。但是真正实用的功能是对于我们来说是双键快捷方式。当按下并按住玩家1按钮后就可以通过把玩家1的摇杆拉到下的位置发出用来暂停游戏的P按键推到上的位置调整音量以及推到右的位置来进入MAME自己的设置界面。这些特殊组合键设计的很巧妙不会对正常玩游戏带来任何干扰因为他们只有在按住玩家1按钮后才会生效然后可以让你正在运行游戏的时候也能做任何需要的事情。例如你可以完全地重新配置MAME使用它自己的菜单在玩游戏的时候改变输入绑定和灵敏度。
最后按住玩家1按钮然后按下玩家2按钮就可以退出MAME如果你使用了启动菜单或MAME管理器的话就很有用了因为他们会自动启动游戏然后你就可以用最快的速度开始玩另一个游戏了。
对于显示屏我们采取了比较保守的方式拿掉了街机原装的笨重的而且已经坏掉的CRT换成一个低成本的LCD显示器。这样做有很多好处。首先这个显示器有HDMI接口这样他就可以轻易地直接连接到树莓派或是现代的显卡上。第二你也不用去设定驱动街机屏幕所需要的低频率刷新模式也不需要驱动它的专用图形硬件。第三这也是最安全的方式因为街机屏幕往往在机身背后没有保护措施让很高的电压离你的手只有几英寸的距离。也不是说你完全不能用CRT如果那就是你追求的体验的话 这也是获得所追求的游戏体验的最真实的方式但是我们在软件里充分细调了CRT模拟部分我们对输出已经很满意了而且不需要用那个古老的CRT更是让我们高兴。
你也许还需要考虑用一个老式的4:3长宽比的LCD而不是那种宽屏的现代产品因为4:3模式用来玩竖屏或横屏的游戏更实用。比如说玩竖屏的射击游戏例如雷电如果使用宽屏显示器的话会在屏幕两边都有一个黑条。这些黑条一般会用来显示一些游戏指引或者你也可以把屏幕翻转90度这样就可以用上每个像素了但这却不实用除非你只玩竖屏游戏或者有一个容易操作的旋转支座。
装载显示屏也很重要。如果你拿掉了CRT就没有现成的地方安装LCD。我们的方式是买了一些中密度纤维板(MDF)切割成适合原来摆放CRT的地方。固定好以后我们把一个便宜的VESA支座放在中间。VESA底座可以用来挂载大多数屏幕不管大小。最后因为我们的机柜前面有烟玻璃我们必须保证亮度和对比度都设置得足够高。
### 第三步:装配 ###
现在几个硬件大件都选好了,而且也基本上确定了最终街机机柜要摆放的地方,把这几个配件装到一起并没有太大难度。我们安全地把机柜后面的电源输入部分拆开,然后在背后的空间里接了一个复合插座,接在电源开关之后的电线上。
几乎所有的街机机柜右上角都有个电源开关,但通常在机柜靠下一点的地方有大量的导线铰接在它上面,也就是说我们的设备可以使用普通的电源连接头。我们的机柜上还有一个荧光管,用做机器上边灯罩的背光,之前是直接连接到电源上的,我们可以用一个普通插头让它保持和电源连接。当你打开机柜上的电源开关的时候,电流会流入机柜里的各个部件 - 你的树莓派和显示屏都会开机,所有一切就都准备好了。
J-Pac模块直接插到JAMMA接口上但你可能还需要一点手动调整。标准的JAMMA只支持每个玩家最多三个按键尽管许多非正式的支持四个而J-Pac可以支持六个。为了连接额外的按钮你需要把按钮开关的一端接到J-Pac的GND上另一端接到J-Pac板边有螺丝固定的输入上。它们被标记成1SW41SW51SW62SW42SW5和2SW6。J-Pac也有声音的直通连接但是我们发现杂音太多没法用。改成把机柜上的喇叭连接到一个二手的SoundBlaster功放上再接到树莓派的音频输出端口。声音不一定要纯正但音量一定要足够大。
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade6.jpg)
我们的树莓派已经接到J-Pac左边也已经连接了显示屏和USB hub点击看大图
然后把J-Pac或I-Pac模块通过PS2转USB连接线接到你的PC或树莓派也可以直接接到PC的PS2接口。要用旧的PS2接口的话还有个额外要求你的电脑得足够古老、带有这种接口不过我们测试发现用USB性能是一样的。当然树莓派没有PS2接口只能走USB而且别忘了树莓派也需要供电。我们一般建议使用一个带电源的USB hub因为供电不足是树莓派不工作最常见的原因。你还需要保证树莓派的网络正常要么通过以太网也许使用一个藏到机柜里的电力线适配器或者通过无线USB设备。网络很关键因为在树莓派被藏进机柜以后你还可以重新配置它不用接键盘或鼠标就可以调整设置以及执行管理任务。
> ### 投币装置 ###
> 在街机模拟社区里让投币装置工作在模拟器上工作就会和商业产品太接近了。这就意味着你有潜在的可能对使用你机器的人收取费用。这不仅仅只是不正当考虑到运行在你自己街机上的那些游戏的来源这将会是非法的。这很显然违背了模拟的精神。不过我们和其他热爱者觉得一个能工作的投币装置更进一步地靠近了街机的真实而且值得付出努力来营造对那个过去街机的怀念。丢个10便士硬币到投币口然后再听到机器发出增加点数的声音没有什么比得上。
> 实际上难度也不大。取决于你街机上的投币装置以及它如何发信号通知投了几个币。大多数投币装置分为两个部分。较大的一部分是硬币接收和验证装置。这是投币过程的物理部分用于检测硬币是否真实以及确定它的价值。这是通过一个游戏点数逻辑电路板来实现的通常用一个排线连接上边还带有很多DIP开关。这些开关用来决定接受哪种硬币以及一个硬币能产生多少点数。然后就是简单地找到输出开关每个点数都会触发它一次然后把它接到JAMMA连接头的投币输入上或者直接接到J-Pac。我们的投币装置型号是Mars MS111在90年代早期的英国很常见网上有大量关于每个DIP开关作用的信息也有如何重新编程控制器来接受新硬币的方法。我们还能在这个装置的12V上接个小灯用来照亮投币孔。
#### 第四步:软件 ####
MAME是这种规模项目唯一可行的模拟器它如今支持运行在数不清的不同平台上的各种各样的游戏从第一代街机到一些最近的机器。从这个项目中还孕育出了MESS一个多模拟器的超级系统针对的平台是80到90年代的家庭电脑以及电视游戏机。
如何配置MAME本身都可以写上六页的文章了。它是一个复杂的无序的伟大的软件程序模拟了如此之多的CPU声卡芯片控制器以及那么多的选项就像MythTV你都永远不能真正停止配置它。
但是也有个相对省事的方式一个特别为树莓派构建的版本。它叫PiMAME。它是一个可下载的发布版和脚本基于Raspbian这是树莓派的默认发布版。它不仅仅会把MAME装到树莓派上这很有用因为没有哪个默认仓库里有这个还会安装其他一些精选出来的模拟器并通过一个前端来管理他们。MAME举个例子是一个有数十个参数的命令行应用。但是PiMAME还有一个妙招 - 它安装了一个简单的网页服务器可以在连接上网络后让你通过浏览器来安装新游戏。这是一个很好的优点因为把游戏文件放到正确的目录下是使用MAME的困难之一这还能让你连接到树莓派的存储设备得到最优使用。还有PiMAME会通过用来安装它的脚本更新自己所以保持最新版本就太简单了。目前来说这个非常有用因为在编写这个项目的时候正好在0.8版这样一个重大更新发布的时间点上。我们在三月份早期时发现有一些轻微的不稳定,但是我们确定在你读到这篇文章的时候一切都会解决。
安装PiMAME最好的方式就是先装Raspbian。你可以通过NOOBS安装使用电脑上的图形工具或者通过dd命令把Raspbian的内容直接写入到你的SD卡中。就像我们上个月的BrewPi教程里曾提到的这个过程在之前已经被记录过很多次所以就不再浪费口水了。想简单点就装一下NOOBS参照树莓派网站上的指引。在装好Raspbian并跑起来以后请确保使用配置工具来释放SD卡上的空间以及确保系统已经更新到最新`sudo apt-get update; sudo apt-get upgrade`。然后再确保已经安装好了git工具包。当前任意Raspbian版本都会自带git不过你仍然可以通过命令`sudo apt-get install git`检查一下。
然后再在终端里输入下面的命令把PiMAME安装器从项目的GitHub仓库克隆到本地
git clone https://github.com/ssilverm/pimame_installer
之后,如果命令工作正常的话你应该能看到如下的反馈输出:
Cloning into pimame_installer...
remote: Reusing existing pack: 2306, done.
remote: Total 2306 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (2306/2306), 4.61 MiB | 11 KiB/s, done.
Resolving deltas: 100% (823/823), done.
这个命令会创建一个叫pimame_installer的新目录然后下一步就是进入这个目录再执行它里面的脚本
cd pimame_installer/
sudo ./install.sh
这个命令会安装和配置很多软件。所需的时间长短取决于你的因特网速度因为需要下载大量的包。我们那台简陋的树莓派加15Mb因特网连接用了差不多45分钟才执行完这个脚本在这之后你会收到重启机器的提示。现在可以安全地输入`sudo shutdown -r`来重启了因为这个命令会自动处理剩下的SD卡写入操作。
这就是安装的全部事情了。在重启树莓派后就会自动登录然后会出现PiMAME启动菜单。在0.8版本里这是个非常漂亮的界面有每个支持平台的图片还有红色图标提示已经安装了多少个游戏。现在应该可以用控制器来操作了。如果需要检查控制器是否正确连接可以用SSH连接到树莓派然后检查一下文件**/dev/input/by-id/usb-Ultimarc_I-PAC_Ultimarc_I-PAC-event-kbd**是否存在。
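比如可以在SSH会话里用类似下面的命令来确认只是一个简单示意
$ ls /dev/input/by-id/
$ test -e /dev/input/by-id/usb-Ultimarc_I-PAC_Ultimarc_I-PAC-event-kbd && echo "控制器已就绪"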
默认的按键配置就可以让你选择要在街机上运行哪个模拟器。我们最感兴趣的是第一个名叫AdvMAME的选项不过你也许会很惊讶还有另一个MAME选项MAME4ALL。MAME4ALL是特别为树莓派构建的使用了旧版的MAME源代码所以运行它所支持的那些ROM时性能是最佳的。这是很合理的因为你的树莓派本来就玩不动那些要求很高的游戏所以没有必要让模拟器去追求用不上的兼容性。现在剩下的事情就是找些游戏装到你的系统里参考下面的方法然后尽情享受吧
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade1.png)
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/arcade-machine/
作者:[Ben Everard][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxvoice.com/author/ben_everard/
[1]:http://www.ultimarc.com/jpac.html

View File

@ -0,0 +1,466 @@
Linux 教程:安装 Ansible 配置管理和 IT 自动化工具
================================================================================
![](http://s0.cyberciti.org/uploads/cms/2014/08/ansible_core_circle.png)
今天我来谈谈 ansible一个由 Python 编写的强大的配置管理解决方案。尽管市面上已经有很多可供选择的配置管理解决方案,但它们各有优劣,而 ansible 的特点就在于它的简洁。让 ansible 在主流的配置管理系统中与众不同的一点便是,它并不需要你在想要配置的每个节点上安装自己的组件。同时提供的一个优点在于,如果需要的话,你可以在不止一个地方控制你的整个基础结构。最后一点是它的正确性,或许这里有些争议,但是我认为在大多数时候这仍然可以作为它的一个优点。说得足够多了,让我们来着手在 RHEL/CentOS 和基于 Debian/Ubuntu 的系统中安装和配置 Ansible。
### 准备工作 ###
1. 发行版RHEL/CentOS/Debian/Ubuntu Linux
1. Jinja2Python 的一个对设计师友好的现代模板语言
1. PyYAMLPython 的一个 YAML 编码/反编码函数库
1. paramiko纯 Python 编写的 SSHv2 协议函数库
1. httplib2一个功能全面的 HTTP 客户端函数库
1. 本文中列出的绝大部分操作已经假设你将在 bash 或者其他任何现代的 shell 中以 root 用户执行。
### Ansible 如何工作 ###
Ansible 工具并不使用守护进程,它也不需要任何额外的自定义安全架构,因此它的部署可以说是十分容易。你需要的全部东西便是 SSH 客户端和服务器了。
+-----------------+ +---------------+
|安装了 Ansible 的| SSH | 文件服务器1 |
|Linux/Unix 工作站|<------------------>| 数据库服务器2 | 在本地或远程
+-----------------+ 模块 | 代理服务器3 | 数据中心的
192.168.1.100 +---------------+ Unix/Linux 服务器
其中:
1. 192.168.1.100 - 在你本地的工作站或服务器上安装 Ansible。
1. 文件服务器1到代理服务器3 - 使用 192.168.1.100 和 Ansible 来自动管理所有的服务器。
1. SSH - 在 192.168.1.100 和本地/远程的服务器之间设置 SSH 密钥。
### Ansible 安装教程 ###
ansible 的安装轻而易举,许多发行版的第三方软件仓库中都有现成的软件包,可以直接安装。其他简单的安装方法包括使用 pip 安装它,或者从 github 里获取最新的版本。若想使用你的软件包管理器安装,在[基于 RHEL/CentOS Linux 的系统里你很可能需要 EPEL 仓库][1]。
#### 在基于 RHEL/CentOS Linux 的系统中安装 ansible ####
输入如下 [yum 命令][2]:
$ sudo yum install ansible
#### 在基于 Debian/Ubuntu Linux 的系统中安装 ansible ####
输入如下 [apt-get 命令][3]:
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
#### 使用 pip 安装 ansible ####
[pip 命令是一个安装和管理 Python 软件包的工具][4],比如它能管理 Python Package Index 中的那些软件包。如下方式在 Linux 和类 Unix 系统中通用:
$ sudo pip install ansible
#### 从源代码安装最新版本的 ansible ####
你可以通过如下命令从 github 中安装最新版本:
$ cd ~
$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ source ./hacking/env-setup
当你从一个 git checkout 中运行 ansible 的时候,请记住你每次用它之前都需要设置你的环境,或者你可以把这个设置过程加入你的 bash rc 文件中:
# 加入 BASH RC
$ echo "export ANSIBLE_HOSTS=~/ansible_hosts" >> ~/.bashrc
$ echo "source ~/ansible/hacking/env-setup" >> ~/.bashrc
ansible 的 hosts 文件包括了一系列它能操作的主机。默认情况下 ansible 通过路径 /etc/ansible/hosts 查找 hosts 文件,不过这个行为也是可以更改的,这样当你想操作不止一个 ansible 或者针对不同的数据中心的不同客户操作的时候也是很方便的。你可以通过命令行参数 -i 指定 hosts 文件:
$ ansible all -m shell -a "hostname" --ask-pass -i /etc/some/other/dir/ansible_hosts
不过我更倾向于使用一个环境变量,这可以在你想要通过 source 一个不同的文件来切换工作目标的时候起到作用。这里的环境变量是 $ANSIBLE_HOSTS可以这样设置
$ export ANSIBLE_HOSTS=~/ansible_hosts
一旦所有需要的组件都已经安装完毕,而且你也准备好了你的 hosts 文件,你就可以来试一试它了。为了快速测试,这里我把 127.0.0.1 写到了 ansible 的 hosts 文件里:
$ echo "127.0.0.1" > ~/ansible_hosts
现在来测试一个简单的 ping
$ ansible all -m ping
或者提示 ssh 密码:
$ ansible all -m ping --ask-pass
我在刚开始的设置中遇到过几次问题,因此这里强烈推荐为 ansible 设置 SSH 公钥认证。不过在刚刚的测试中我们使用了 --ask-pass在一些机器上你会需要[安装 sshpass][5] 或者像这样指定 -c paramiko
$ ansible all -m ping --ask-pass -c paramiko
当然你也可以[安装 sshpass][6],然而 sshpass 并不总是在标准的仓库中提供,因此 paramiko 可能更为简单。
### 设置 SSH 公钥认证 ###
到这里,配置和其他一些基础工作就都完成了。现在让我们来做一些实用的事情。ansible 的强大很大程度上体现在 playbooks 上,后者基本上就是一些写好的 ansible 脚本(大体上如此),不过在制作一个 playbook 之前,我们先从一些单行命令开始。现在让我们创建和配置 SSH 公钥认证,以便省去 -c 和 --ask-pass 选项:
$ ssh-keygen -t rsa
样例输出:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mike/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mike/.ssh/id_rsa.
Your public key has been saved in /home/mike/.ssh/id_rsa.pub.
The key fingerprint is:
94:a0:19:02:ba:25:23:7f:ee:6c:fb:e8:38:b4:f2:42 mike@ultrabook.linuxdork.com
The key's randomart image is:
+--[ RSA 2048]----+
|... . . |
|. . + . . |
|= . o o |
|.* . |
|. . . S |
| E.o |
|.. .. |
|o o+.. |
| +o+*o. |
+-----------------+
现在显然有很多种方式来把它放到远程主机上应该的位置。不过既然我们正在使用 ansible就用它来完成这个操作吧
$ ansible all -m copy -a "src=/home/mike/.ssh/id_rsa.pub dest=/tmp/id_rsa.pub" --ask-pass -c paramiko
样例输出:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"dest": "/tmp/id_rsa.pub",
"gid": 100,
"group": "users",
"md5sum": "bafd3fce6b8a33cf1de415af432774b4",
"mode": "0644",
"owner": "mike",
"size": 410,
"src": "/home/mike/.ansible/tmp/ansible-tmp-1407008170.46-208759459189201/source",
"state": "file",
"uid": 1000
}
下一步,把公钥文件添加到远程服务器里。输入:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko
样例输出:
SSH password:
127.0.0.1 | FAILED | rc=1 >>
/bin/sh: /root/.ssh/authorized_keys: Permission denied
矮油,我们需要用 root 来执行这个命令,所以还是加上一个 -u 参数吧:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko -u root
样例输出:
SSH password:
127.0.0.1 | success | rc=0 >>
请注意,我刚才这是想要演示通过 ansible 来传输文件的操作。事实上 ansible 有一个更加方便的内置 SSH 密钥管理支持:
$ ansible all -m authorized_key -a "user=mike key='{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}' path=/home/mike/.ssh/authorized_keys manage_dir=no" --ask-pass -c paramiko
样例输出:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"gid": 100,
"group": "users",
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq+Z8/usprXk0aCAPyP0TGylm2MKbmEsHePUOd7p5DO1QQTHak+9gwdoJJavy0yoUdi+C+autKjvuuS+vGb8+I+8mFNu5CvKiZzIpMjZvrZMhHRdNud7GuEanusTEJfi1pUd3NA2iXhl4a6S9a/4G2mKyf7QQSzI4Z5ddudUXd9yHmo9Yt48/ASOJLHIcYfSsswOm8ux1UnyeHqgpdIVONVFsKKuSNSvZBVl3bXzhkhjxz8RMiBGIubJDBuKwZqNSJkOlPWYN76btxMCDVm07O7vNChpf0cmWEfM3pXKPBq/UBxyG2MgoCGkIRGOtJ8UjC/daadBUuxg92/u01VNEB mike@ultrabook.linuxdork.com",
"key_options": null,
"keyfile": "/home/mike/.ssh/authorized_keys",
"manage_dir": false,
"mode": "0600",
"owner": "mike",
"path": "/home/mike/.ssh/authorized_keys",
"size": 410,
"state": "file",
"uid": 1000,
"unique": false,
"user": "mike"
}
现在这些密钥已经设置好了。我们来试着随便跑一个命令,比如 hostname希望我们不会被提示要输入密码
$ ansible all -m shell -a "hostname" -u root
样例输出:
127.0.0.1 | success | rc=0 >>
成功!!!现在我们可以用 root 来执行命令,并且不会被输入密码的提示干扰了。我们现在可以轻易地配置任何在 ansible hosts 文件中的主机了。让我们把 /tmp 中的公钥文件删除:
$ ansible all -m file -a "dest=/tmp/id_rsa.pub state=absent" -u root
样例输出:
127.0.0.1 | success >> {
"changed": true,
"path": "/tmp/id_rsa.pub",
"state": "absent"
}
下面我们来做一些更复杂的事情,我要确定一些软件包已经安装了,并且已经是最新的版本:
$ ansible all -m zypper -a "name=apache2 state=latest" -u root
样例输出:
127.0.0.1 | success >> {
"changed": false,
"name": "apache2",
"state": "latest"
}
很好,我们刚才放在 /tmp 中的公钥文件已经消失了,而且我们已经安装好了最新版的 apache。下面我们来看看前面命令中的 -m zypper一个让 ansible 非常灵活,并且给了 playbooks 更多能力的功能。如果你不使用 openSuse 或者 Suse enterprise 你可能还不熟悉 zypper, 它基本上就是 suse 世界中相当于 yum 的存在。在上面所有的例子中,我的 hosts 文件中都只有一台机器。除了最后一个命令外,其他所有命令都应该在任何标准的 *nix 系统和标准的 ssh 配置中使用,这造成了一个问题。如果我们想要同时管理多种不同的机器呢?这便是 playbooks 和 ansible 的可配置性闪闪发光的地方了。首先我们来少许修改一下我们的 hosts 文件:
$ cat ~/ansible_hosts
样例输出:
[RHELBased]
10.50.1.33
10.50.1.47
[SUSEBased]
127.0.0.1
首先,我们创建了一些分组的服务器,并且给了他们一些有意义的标签。然后我们来创建一个为不同类型的服务器执行不同操作的 playbook。你可能已经发现这个 yaml 的数据结构和我们之前运行的命令行语句中的相似性了。简单来说,-m 是一个模块,而 -a 用来提供模块参数。在 YAML 表示中你可以先指定模块,然后插入一个冒号 :,最后指定参数。
---
- hosts: SUSEBased
remote_user: root
tasks:
- zypper: name=apache2 state=latest
- hosts: RHELBased
remote_user: root
tasks:
- yum: name=httpd state=latest
现在我们有一个简单的 playbook 了,我们可以这样运行它:
$ ansible-playbook testPlaybook.yaml -f 10
样例输出:
PLAY [SUSEBased] **************************************************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [zypper name=apache2 state=latest] **************************************
ok: [127.0.0.1]
PLAY [RHELBased] **************************************************************
GATHERING FACTS ***************************************************************
ok: [10.50.1.33]
ok: [10.50.1.47]
TASK: [yum name=httpd state=latest] *******************************************
changed: [10.50.1.33]
changed: [10.50.1.47]
PLAY RECAP ********************************************************************
10.50.1.33 : ok=2 changed=1 unreachable=0 failed=0
10.50.1.47 : ok=2 changed=1 unreachable=0 failed=0
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
注意,你会看到 ansible 联系到的每一台机器的输出。-f 参数让 ansible 在多台主机上同时运行指令。除了指定全部主机,或者一个主机分组的名字以外,你还可以把导入 ssh 公钥的操作从命令行里转移到 playbook 中,这将在设置新主机的时候提供很大的方便,甚至让新主机直接可以运行一个 playbook。为了演示我们把我们之前的公钥例子放进一个 playbook 里:
---
- hosts: SUSEBased
remote_user: mike
sudo: yes
tasks:
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
- hosts: RHELBased
remote_user: mdonlon
sudo: yes
tasks:
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
除此之外还有很多可以做的事情,比如在启动的时候把公钥配置好,或者引入其他的流程来让你按需配置一些机器。不过只要 SSH 被配置成接受密码登录,这些几乎可以用在所有的流程中。在你开始写太多 playbook 之前,另一个值得考虑的事情是,代码管理可以有效节省你的时间。机器会不断变化,然而你并不需要在每次机器发生变化时都重新写一个 playbook只需要更新相关的部分并提交这些修改。与此相关的另一个好处是如同我之前所述你可以从不同的地方管理你的整个基础结构。你只需要将你的 playbook 仓库 git clone 到新的机器上,就完成了管理所有东西的全部设置流程。
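一个简单的示意流程如下(仓库地址是假设的,请换成你自己的):

$ git clone https://example.com/you/ansible-playbooks.git
$ cd ansible-playbooks
$ vi testPlaybook.yaml          # 只修改需要变动的部分
$ git commit -am "update apache task"
$ git push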
#### 现实中的 ansible 例子 ####
我知道很多用户经常使用 pastebin 这样的服务,以及很多公司基于显而易见的理由配置了他们内部使用的类似东西。最近,我遇到了一个叫做 showterm 的程序,巧合之下我被一个客户要求配置它用于内部使用。这里我不打算赘述这个应用程序的细节,不过如果你感兴趣的话,你可以使用 Google 搜索 showterm。作为一个合理的现实中的例子我将会试图配置一个 showterm 服务器,并且配置使用它所需要的客户端应用程序。在这个过程中我们还需要一个数据库服务器。现在我们从配置客户端开始:
---
- hosts: showtermClients
remote_user: root
tasks:
- yum: name=rubygems state=latest
- yum: name=ruby-devel state=latest
- yum: name=gcc state=latest
- gem: name=showterm state=latest user_install=no
这部分很简单。下面是主服务器:
---
- hosts: showtermServers
remote_user: root
tasks:
- name: ensure packages are installed
yum: name={{item}} state=latest
with_items:
- postgresql
- postgresql-server
- postgresql-devel
- python-psycopg2
- git
- ruby21
- ruby21-passenger
- name: showterm server from github
git: repo=https://github.com/ConradIrwin/showterm.io dest=/root/showterm
- name: Initdb
command: service postgresql initdb
creates=/var/lib/pgsql/data/postgresql.conf
- name: Start PostgreSQL and enable at boot
service: name=postgresql
enabled=yes
state=started
- gem: name=pg state=latest user_install=no
handlers:
- name: restart postgresql
service: name=postgresql state=restarted
- hosts: showtermServers
remote_user: root
sudo: yes
sudo_user: postgres
vars:
dbname: showterm
dbuser: showterm
dbpassword: showtermpassword
tasks:
- name: create db
postgresql_db: name={{dbname}}
- name: create user with ALL priv
postgresql_user: db={{dbname}} name={{dbuser}} password={{dbpassword}} priv=ALL
- hosts: showtermServers
remote_user: root
tasks:
- name: database.yml
template: src=database.yml dest=/root/showterm/config/database.yml
- hosts: showtermServers
remote_user: root
tasks:
- name: run bundle install
shell: bundle install
args:
chdir: /root/showterm
- hosts: showtermServers
remote_user: root
tasks:
- name: run rake db tasks
shell: 'bundle exec rake db:create db:migrate db:seed'
args:
chdir: /root/showterm
- hosts: showtermServers
remote_user: root
tasks:
- name: apache config
template: src=showterm.conf dest=/etc/httpd/conf.d/showterm.conf
还凑合。请注意从某种意义上来说这是一个任意选择的程序然而我们现在已经可以持续地在任意数量的机器上部署它了这便是配置管理的好处。此外在大多数情况下这里的定义语法几乎是不言而喻的wiki 页面也就不需要加入太多细节了。当然在我的观点里,一个有太多细节的 wiki 页面绝不会是一件坏事。
### 扩展配置 ###
我们并没有涉及到这里所有的细节。Ansible 有许多选项可以用来配置你的系统。你可以在你的 hosts 文件中内嵌变量,而 ansible 将会把它们应用到远程节点。如:
[RHELBased]
10.50.1.33 http_port=443
10.50.1.47 http_port=80 ansible_ssh_user=mdonlon
[SUSEBased]
127.0.0.1 http_port=443
尽管这对于快速配置来说已经非常方便,你还可以将变量分放在 yaml 格式的多个文件中。在你的 hosts 文件所在目录里,你可以创建两个子目录 group_vars 和 host_vars。放在这些目录里的任何文件只要名字能对上一个主机分组或者你的 hosts 文件中的一个主机名,其内容都会在运行时被插入进来。所以前面的一个例子将会变成这样:
ultrabook:/etc/ansible # pwd
/etc/ansible
ultrabook:/etc/ansible # tree
.
├── group_vars
│ ├── RHELBased
│ └── SUSEBased
├── hosts
└── host_vars
├── 10.50.1.33
└── 10.50.1.47
----------
2 directories, 5 files
ultrabook:/etc/ansible # cat hosts
[RHELBased]
10.50.1.33
10.50.1.47
----------
[SUSEBased]
127.0.0.1
ultrabook:/etc/ansible # cat group_vars/RHELBased
ultrabook:/etc/ansible # cat group_vars/SUSEBased
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.33
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.47
---
http_port: 80
ansible_ssh_user: mdonlon
### 改善 Playbooks ###
组织 playbooks 也已经有很多种现成的方式。在前面的例子中我们用了一个单独的文件,因此这方面被大幅地简化了。组织这些文件的一个常用方式是创建角色。简单来说,你将一个主文件加载为你的 playbook而它将会从其它文件中导入所有的数据这些其他的文件便是角色。举例来说如果你有了一个 wordpress 网站,你需要一个 web 前端和一个数据库。web 前端将包括一个 web 服务器,应用程序代码,以及任何需要的模块。数据库有时候运行在同一台主机上,有时候运行在远程的主机上,这时候角色就可以派上用场了。你创建一个目录,并对每个角色创建对应的小 playbook。在这个例子中我们需要一个 apache 角色mysql 角色wordpress 角色mod_php以及 php 角色。最大的好处是并不是每个角色都必须被应用到同一台机器上。在这个例子中mysql 可以被应用到一台单独的机器。这同样为代码重用提供了可能,比如你的 apache 角色还可以被用在 python 和其他相似的 php 应用程序中。展示这些已经有些超出了本文的范畴,而且做一件事总是有很多不同的方式,我建议搜索一些 ansible 的 playbook 例子。有很多人在 github 上贡献代码,当然还有其他一些网站。
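这里只给一个最小化的目录布局示意(目录名、角色名和主机分组都是假设的,并不属于前文的例子),方便你对角色有个直观印象:

site.yml
roles/
  apache/
    tasks/
      main.yml
  mysql/
    tasks/
      main.yml

对应的主 playbooksite.yml大致如下

---
- hosts: webservers
  remote_user: root
  roles:
    - apache
- hosts: dbservers
  remote_user: root
  roles:
    - mysql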
### 模块 ###
在 ansible 中对于所有完成的工作幕后的工作都是由模块主导的。Ansible 有一个非常丰富的内置模块仓库其中包括软件包安装文件传输以及我们在本文中做的所有事情。但是对一部分人来说这些并不能满足他们的配置需求ansible 也提供了方法让你添加自己的模块。Ansible 的 API 有一个非常棒的事情是,它并没有限制模块也必须用编写它的语言 Python 来编写也就是说你可以用任何语言来编写模块。Ansible 模块通过传递 JSON 数据来工作,因此你只需要用想用的语言生成一段 JSON 数据。我很确定任何脚本语言都可以做到这一点,因此你现在就可以开始写点什么了。在 Ansible 的网站上有很多的文档,包括模块的接口是如何工作的,以及 Github 上也有很多模块的例子。注意一些小众的语言可能没有很好的支持,不过那只可能是因为没有多少人在用这种语言贡献代码。试着写点什么,然后把你的结果发布出来吧!
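举个例子,下面是一个用 bash 写的最小模块示意(只演示“输出 JSON”这一个要点真正的模块通常还要解析 ansible 传入的参数文件,这里为了简洁省略了):

#!/bin/bash
# 最小的 ansible 模块示意:向标准输出打印一段合法的 JSON 即可
echo '{"changed": false, "msg": "hello from a custom module"}'

把它放进和 playbook 同级的 library/ 目录后,就可以像内置模块一样调用它。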
### 总结 ###
总的来说,虽然在配置管理方面已经有很多解决方案,我希望本文能显示出 ansible 简单的设置过程,在我看来这是它最重要的一个要点。请注意,因为我试图展示做一件事的不同方式,所以并不是前文中所有的例子都是适用于你的个别环境或者对于普遍情况的最佳实践。这里有一些链接能让你对 ansible 的了解进入下一个层次:
- [Ansible 项目][7]主页.
- [Ansible 项目文档][8].
- [多级环境与 Ansible][9].
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/python-tutorials/linux-tutorial-install-ansible-configuration-management-and-it-automation-tool/
作者:[Nix Craft][a]
译者:[felixonmars](https://github.com/felixonmars)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.cyberciti.biz/tips/about-us
[1]:http://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/
[2]:http://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
[3]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
[4]:http://www.cyberciti.biz/faq/debian-ubuntu-centos-rhel-linux-install-pipclient/
[5]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
[6]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
[7]:http://www.ansible.com/
[8]:http://docs.ansible.com/
[9]:http://rosstuck.com/multistage-environments-with-ansible/

View File

@ -0,0 +1,212 @@
在逻辑卷管理中设置精简资源调配卷——第四部分
================================================================================
逻辑卷管理有许多特性比如像快照和精简资源调配。在先前第三部分中我们已经介绍了如何为逻辑卷创建快照。在本文中我们将了解如何在LVM中设置精简资源调配。
![Setup Thin Provisioning in LVM](http://www.tecmint.com/wp-content/uploads/2014/08/Setup-Thin-Provisioning-in-LVM.jpg)
在LVM中设置精简资源调配
### 精简资源调配是什么? ###
精简资源调配用于LVM在精简池中创建虚拟磁盘。我们假定我的服务器上有**15GB**的存储容量而已经有2个客户各自占去了5GB存储空间。你是第三个客户你也请求5GB的存储空间。在以前我们会直接提供整个5GB的空间富卷即一开始就分配全部空间。然而你可能只使用5GB中的2GB其余3GB要以后才会填满。
而在精简资源调配中我们所做的是在其中一个大卷组中定义一个精简池再在精简池中定义一个精简卷。这样不管你写入什么文件它都会保存进去而你的存储空间看上去就是5GB。然而这所有5GB空间不会全部铺满整个硬盘。对其它客户也进行同样的操作就像我说的那儿已经有两个客户你是第三个客户。
那么让我们想想我到底为客户分配了总计多少GB的空间呢所有15GB的空间已经全部分配完了如果现在有某个人来问我是否能提供5GB空间我还可以分配给他么答案是“可以”。在精简资源调配中我可以为第四位客户分配5GB空间即使我已经把那15GB的空间分配完了。
**警告**从那15GB空间中如果我们对资源调配超过15GB了那就是过度资源调配了。
### 它是怎么工作的?我们又是怎样为客户提供存储空间的? ###
我已经提供给你5GB空间但是你可能只用了2GB而其它3GB还空闲着。在富资源调配中我们不能这么做因为它一开始就分配了整个空间。
在精简资源调配中如果我为你定义了5GB空间它就不会在定义卷时就将整个磁盘空间全部分配它会根据你的数据写入而增长希望你看懂了跟你一样其它客户也不会使用全部卷所以还是有机会为一个新客户分配5GB空间的这称之为过度资源调配。
但是必须对各个卷的增长情况进行监控否则结局会是个灾难。在过度资源调配完成后如果所有4个客户都极度地写入数据到磁盘你将碰到问题了。因为这个动作会填满15GB的存储空间甚至溢出从而导致这些卷下线。
### 需求 ###
- [使用LVM在Linux中创建逻辑卷——第一部分][1]
- [在Linux中扩展/缩减LVM——第二部分][2]
- [在LVM中创建/恢复逻辑卷快照——第三部分][3]
#### 我的服务器设置 ####
操作系统 — 安装有LVM的CentOS 6.5
服务器IP — 192.168.0.200
### 步骤1 设置精简池和卷 ###
理论讲太多了,让我们还是来点实际的吧,我们一起来设置精简池和精简卷。首先,我们需要一个大尺寸的卷组。这里,我用下面的命令创建了一个**15GB**的卷组用于演示(-s 32M 将物理扩展块PE大小设为32MB
# vgcreate -s 32M vg_thin /dev/sdb1
![Listing Volume Group](http://www.tecmint.com/wp-content/uploads/2014/08/Listing-Volume-Group.jpg)
列出卷组
接下来,在创建精简池和精简卷之前,检查逻辑卷有多少空间可用。
# vgs
# lvs
![Check Logical Volume](http://www.tecmint.com/wp-content/uploads/2014/08/check-Logical-Volume.jpg)
检查逻辑卷
我们可以在上面的lvs命令输出中看到只显示了一些默认逻辑用于文件系统和交换分区。
### 创建精简池 ###
使用以下命令在卷组vg_thin中创建一个15GB的精简池。
# lvcreate -L 15G --thinpool tp_tecmint_pool vg_thin
- **-L** 精简池大小
- **--thinpool** 创建精简池
- **tp_tecmint_pool** 精简池名称
- **vg_thin** 需要创建精简池的卷组名称
![Create Thin Pool](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Pool.jpg)
创建精简池
使用lvdisplay命令来查看详细信息。
# lvdisplay vg_thin/tp_tecmint_pool
![Logical Volume Information](http://www.tecmint.com/wp-content/uploads/2014/08/Logical-Volume-Information.jpg)
逻辑卷信息
这里,我们还没有在该精简池中创建虚拟精简卷。在图片中,我们可以看到分配的精简池数据为**0.00%**。
### 创建精简卷 ###
现在,我们可以在带有-VVirtual选项的lvcreate命令的帮助下在精简池中定义精简卷了。
# lvcreate -V 5G --thin -n thin_vol_client1 vg_thin/tp_tecmint_pool
我已经在我的**vg_thin**卷组中的**tp_tecmint_pool**内创建了一个精简虚拟卷,取名为**thin_vol_client1**。现在,使用下面的命令来列出逻辑卷。
# lvs
![List Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/List-Logical-Volumes.jpg)
列出逻辑卷
刚才,我们已经在上面创建了精简卷,这就是为什么没有数据,显示为**0.00%M**。
好吧让我为其它2个客户再创建2个精简卷。这里你可以看到在精简池**tp_tecmint_pool**下已经有3个精简卷了。所以从这一点上看你应该明白我已经把15GB的精简池全部分配出去了。
![Create Thin Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Volumes.jpg)
### 创建文件系统 ###
现在使用下面的命令为这3个精简卷创建挂载点并挂载然后拷贝一些文件进去。
# mkdir -p /mnt/client1 /mnt/client2 /mnt/client3
列出创建的目录。
# ls -l /mnt/
![Creating Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Creating-Mount-Points.jpg)
创建挂载点
使用mkfs命令为这些创建的精简卷创建文件系统。
# mkfs.ext4 /dev/vg_thin/thin_vol_client1 && mkfs.ext4 /dev/vg_thin/thin_vol_client2 && mkfs.ext4 /dev/vg_thin/thin_vol_client3
![Create File System](http://www.tecmint.com/wp-content/uploads/2014/08/Create-File-System.jpg)
创建文件系统
使用mount命令来挂载所有3个客户卷到创建的挂载点。
# mount /dev/vg_thin/thin_vol_client1 /mnt/client1/ && mount /dev/vg_thin/thin_vol_client2 /mnt/client2/ && mount /dev/vg_thin/thin_vol_client3 /mnt/client3/
使用df命令来列出挂载点。
# df -h
![Print Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Print-Mount-Points.jpg)
打印挂载点
这里我们可以看到所有3个客户卷已经挂载了而每个客户卷只使用了3%的数据空间。那么让我们从桌面添加一些文件到这3个挂载点以填充一些空间。
![Add Files To Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/Add-Files-To-Volumes.jpg)
添加文件到卷
现在列出挂载点,并查看每个精简卷使用的空间,然后列出精简池来查看池中已使用的大小。
# df -h
# lvdisplay vg_thin/tp_tecmint_pool
![Check Mount Point Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Mount-Point-Size.jpg)
检查挂载点大小
![Check Thin Pool Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Thin-Pool-Size.jpg)
检查精简池大小
上面的命令显示了3个挂载点及其使用大小百分比。
client1的5GB空间已使用13%
client2的5GB空间已使用29%
client3的5GB空间已使用49%
在查看精简池时,我们看到总共只有**30%**的数据被写入这是上面3个客户虚拟卷的总使用量。
### 过度资源调配 ###
现在,**第四个**客户来申请5GB的存储空间。我能给他吗因为我已经把15GB的池分配给了3个客户。能不能再给另外一个客户分配5GB的空间呢可以这完全可能。在我们使用**过度资源调配**时,就可以实现。过度资源调配可以给我们比我们所拥有的更大的空间。
让我来为第四位客户创建5GB的空间然后再验证一下大小吧。
# lvcreate -V 5G --thin -n thin_vol_client4 vg_thin/tp_tecmint_pool
# lvs
![Create thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Create-thin-Storage.jpg)
创建精简存储
在精简池中我只有15GB大小的空间但是我已经在精简池中创建了4个卷其总量达到了20GB。如果4个客户都开始写入数据到他们的卷并将空间填满到那时我们将面对严峻的形势。如果不填满空间那不会有问题。
现在,我已经在**thin_vol_client4**上创建了文件系统,并挂载到了**/mnt/client4**下,然后拷贝了一些文件到里头。
# lvs
![Verify Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-Thing-Storage.jpg)
验证精简存储
我们可以在上面的图片中看到新创建的client 4的卷已用空间达到了**89.34%**,而精简池的已用空间达到了**59.19%**。如果这些用户不再过度向卷写入数据,池就不会溢出,卷也不会下线。要避免溢出,我们需要扩展精简池大小。
**重要**:精简池只是一个逻辑卷,因此,如果我们需要对其进行扩展,我们可以使用和扩展逻辑卷一样的命令,但我们不能缩减精简池大小。
# lvextend
这里,我们可以看到怎样来扩展逻辑精简池(**tp_tecmint_pool**)。
# lvextend -L +15G /dev/vg_thin/tp_tecmint_pool
![Extend Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Extend-Thin-Storage.jpg)
扩展精简存储
接下来,列出精简池大小。
# lvs
![Verify Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-Thin-Storage.jpg)
验证精简存储
前面,我们的**tp_tecmint_pool**大小为15GB而在对第四个精简卷进行过度资源配置后达到了20GB。现在它扩展到了30GB所以我们的过度资源配置又回归常态而精简卷也不会溢出下线了。通过这种方式我们可以添加更多的精简卷到精简池中。
在本文中,我们已经了解了怎样使用一个大尺寸的卷组创建精简池,以及怎样通过过度资源配置在精简池中创建精简卷和扩展精简池。在下一篇文章中,我们将介绍怎样移除逻辑卷。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/setup-thin-provisioning-volumes-in-lvm/
作者:[Babin Lonston][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/create-lvm-storage-in-linux/
[2]:http://www.tecmint.com/extend-and-reduce-lvms-in-linux/
[3]:http://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-lvm/

View File

@ -0,0 +1,236 @@
如何在 Linux 环境下配置 Nagios Remote Plugin Executor (NRPE)
================================================================================
就网络管理而言Nagios 是最强大的工具之一。Nagios 可以监控远程主机的可访问性,以及其中正在运行的服务的状态。不过,如果我们想要监控远程主机中网络服务以外的东西呢?比方说,我们可能想要监控远程主机上的磁盘利用率或者 [CPU 处理器负载][1]。Nagios Remote Plugin ExecutorNRPE便是一个可以帮助你完成这些操作的工具。NRPE 允许你执行在远程主机上安装的 Nagios 插件,并且将它们集成到一个[已经存在的 Nagios 服务器][2]里。
本教程将会介绍如何在一个已经部署好的 Nagios 中配置 NRPE。本教程主要分为两部分
- 配置远程主机。
- 配置 Nagios 监控服务器。
之后我们会以定义一些可以被 NRPE 使用的自定义命令来结束本教程。
### 为 NRPE 配置远程主机 ###
#### 第一步:安装 NRPE 服务 ####
你需要在你想要使用 NRPE 监控的每一台远程主机上安装 NRPE 服务。每一台远程主机上的 NRPE 服务守护进程将会与一台 Nagios 监控服务器进行通信。
取决于所在的平台, NRPE 服务所需要的软件包可以很容易地用 apt-get 或者 yum 来安装。对于 CentOS 来说,由于 NRPE 并不在 CentOS 的仓库中,我们需要[添加 Repoforge 仓库][3]。
**对于 Debian、Ubuntu 或者 Linux Mint**
# apt-get install nagios-nrpe-server
**对于 CentOS、Fedora 或者 RHEL**
# yum install nagios-nrpe
#### 第二步:准备配置文件 ####
配置文件 /etc/nagios/nrpe.cfg 在基于 Debian 或者 RedHat 的系统中比较相近。让我们备份并修改配置文件:
# vim /etc/nagios/nrpe.cfg
----------
## NRPE 服务端口是可以自定义的 ##
server_port=5666
## 允许 Nagios 监控服务器访问 ##
## 注意:逗号后面没有空格 ##
allowed_hosts=127.0.0.1,X.X.X.X-IP_v4_of_Nagios_server
## 下面的例子中我们硬编码了参数。
## 这些参数可以按需修改。
## 注意:对于 CentOS 64 位用户,请使用 /usr/lib64 替代 /usr/lib ##
command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
现在配置文件已经准备好了NRPE 服务已经可以启动了。
#### 第三步:初始化 NRPE 服务 ####
对于基于 RedHat 的系统NRPE 服务需要被添加为启动服务。
**对于 Debian、Ubuntu、Linux Mint**
# service nagios-nrpe-server restart
**对于 CentOS、Fedora 或者 RHEL**
# service nrpe restart
# chkconfig nrpe on
#### 第四步:验证 NRPE 服务状态 ####
NRPE 守护进程的状态信息可以在系统日志中找到。对于基于 Debian 的系统,日志文件在 /var/log/syslog而基于 RedHat 的系统的日志文件则是 /var/log/messages。下面提供一段样例日志以供参考
nrpe[19723]: Starting up daemon
nrpe[19723]: Listening for connections on port 5666
nrpe[19723]: Allowing connections from: 127.0.0.1,X.X.X.X
如果使用了防火墙,被 NRPE 守护进程使用的 TCP 端口 5666 应该被开启。
# netstat -tpln | grep 5666
----------
tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN 19885/nrpe
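如果启用了 iptables可以用类似下面的规则放行来自 Nagios 监控服务器的 5666 端口连接(仅为示意,-s 后面换成你的监控服务器 IP并按你自己的防火墙策略调整和保存规则

# iptables -I INPUT -p tcp -s X.X.X.X --dport 5666 -j ACCEPT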
### 为 NRPE 配置 Nagios 监控服务器 ###
为 NRPE 配置已有的 Nagios 监控服务器的第一步是在服务器上安装 NRPE 插件。
#### 第一步:安装 NRPE 插件 ####
当 Nagios 服务器运行在基于 Debian 的系统Debian、Ubuntu 或者 Linux Mint上时需要的软件包可以通过 apt-get 安装。
# apt-get install nagios-nrpe-plugin
插件安装完成后,对随插件安装的 check_nrpe 命令稍作修改。
# vim /etc/nagios-plugins/config/check_nrpe.cfg
----------
## 默认命令会被覆盖 ##
define command{
command_name check_nrpe
command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$'
}
如果 Nagios 服务器运行在基于 RedHat 的系统CentOS、Fedora 或者 RHEL你可以通过 yum 安装 NRPE 插件。对于 CentOS[添加 Repoforge 仓库][4] 是必要的。
# yum install nagios-plugins-nrpe
现在 NRPE 插件已经安装完成,继续下面的步骤以配置一台 Nagios 服务器。
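在进入下一步之前,可以先在监控服务器上手动测试与远程主机上 NRPE 的连通性一个简单示意IP 换成远程主机的地址64 位 CentOS 上插件路径是 /usr/lib64/nagios/plugins成功时它会返回类似“NRPE v2.x”的版本信息

# /usr/lib/nagios/plugins/check_nrpe -H X.X.X.X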
#### 第二步:为 NRPE 插件定义 Nagios 命令 ####
我们需要首先在 Nagios 中定义一个命令来使用 NRPE。
# vim /etc/nagios/objects/commands.cfg
----------
## 注意:对于 CentOS 64 位用户,请使用 /usr/lib64 替代 /usr/lib ##
define command{
command_name check_nrpe
command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$'
}
#### 第三步:添加主机与命令定义 ####
接下来定义远程主机以及我们将要在它们上面运行的命令。
下面的例子为一台远程主机定义了一个可以在上面执行的命令。一般来说,你的配置需要按照你的需求来改变。配置文件的路径在基于 Debian 和基于 RedHat 的系统上略有不同,不过文件的内容是完全一样的。
**对于 Debian、Ubuntu 或者 Linux Mint**
# vim /etc/nagios3/conf.d/nrpe.cfg
**对于 CentOS、Fedora 或者 RHEL**
# vim /etc/nagios/objects/nrpe.cfg
----------
define host{
use linux-server
host_name server-1
alias server-1
address X.X.X.X-IPv4_address_of_remote_host
}
define service {
host_name server-1
service_description Check Load
check_command check_nrpe!check_load
check_interval 1
use generic-service
}
#### 第四步:重启 Nagios 服务 ####
在重启 Nagios 之前,可以通过测试来验证配置。
**对于 Ubuntu、Debian 或者 Linux Mint**
# nagios3 -v /etc/nagios3/nagios.cfg
**对于 CentOS、Fedora 或者 RHEL**
# nagios -v /etc/nagios/nagios.cfg
如果一切正常,我们就可以重启 Nagios 服务了。
# service nagios restart
![](https://farm8.staticflickr.com/7024/13330387845_0bde8b6db5_z.jpg)
### 为 NRPE 配置自定义命令 ###
#### 远程服务器上的配置 ####
下面列出了一些可以用于 NRPE 的自定义命令。这些命令在远程服务器的 /etc/nagios/nrpe.cfg 文件中定义。
## 当 1、5、15 分钟的平均负载分别超过 1、2、1 时进入警告状态
## 当 1、5、15 分钟的平均负载分别超过 3、5、3 时进入严重警告状态
command[check_load]=/usr/lib/nagios/plugins/check_load -w 1,2,1 -c 3,5,3
## 对于 /home 目录的可用空间设置了警告级别为 25%,以及严重警告级别为 10%。
## 可以定制为监控任何分区(比如 /dev/sdb1、/、/var、/home
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 25% -c 10% -p /home
## 当 process_ABC 的实例数量超过 10 时警告,超过 20 时严重警告 ##
command[check_process_ABC]=/usr/lib/nagios/plugins/check_procs -w 1:10 -c 1:20 -C process_ABC
## 当 process_ABC 的实例数量跌到 1 以下时严重警告 ##
command[check_process_XYZ]=/usr/lib/nagios/plugins/check_procs -w 1: -c 1: -C process_XYZ
#### Nagios 监控服务器上的配置 ####
我们通过修改 Nagios 监控服务器里的服务定义来应用上面定义的自定义命令。服务定义可以写在所有服务被定义的地方(比如 /etc/nagios/objects/nrpe.cfg 或 /etc/nagios3/conf.d/nrpe.cfg
## 示例 1检查进程 XYZ ##
define service {
host_name server-1
service_description Check Process XYZ
check_command check_nrpe!check_process_XYZ
check_interval 1
use generic-service
}
## 示例 2检查磁盘状态 ##
define service {
host_name server-1
service_description Check Disk
check_command check_nrpe!check_disk
check_interval 1
use generic-service
}
总而言之NRPE 是 Nagios 的一个强大的扩展,它提供了高度可定制的远程服务器监控方案。使用 NRPE我们可以监控系统的负载、运行的进程、已登录的用户、磁盘状态以及其它的指标。
希望这些可以帮到你。
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/03/nagios-remote-plugin-executor-nrpe-linux.html
作者:[Sarmed Rahman][a]
译者:[felixonmars](https://github.com/felixonmars)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://xmodulo.com/2012/08/how-to-measure-average-cpu-utilization.html
[2]:http://xmodulo.com/2013/12/install-configure-nagios-linux.html
[3]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html
[4]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html

View File

@ -1,21 +1,19 @@
wangjiezhe translating
Unix: stat -- more than ls
Unix: stat -- 获取比 ls 更多的信息
================================================================================
> Tired of ls and want to see more interesting information on your files? Try stat!
> 厌倦了 ls 命令, 并且想查看更多有关你的文件的有趣的信息? 试一试 stat!
![](http://www.itworld.com/sites/default/files/imagecache/large_thumb_150x113/stats.jpg)
The ls command is probably one of the first commands that anyone using Unix learns, but it only shows a small portion of the information that is available with the stat command.
ls 命令可能是每一个 Unix 使用者第一个学习的命令之一, 但它仅仅显示了 stat 命令能给出的信息的一小部分.
The stat command pulls information from the file's inode. As you might be aware, there are actually three sets of dates and times that are stored for every file on your system. These include the date the file was last modified (i.e., the date and time that you see when you use the ls -l command), the time the file was last changed (which includes renaming the file), and the time that file was last accessed.
stat 命令从文件的索引节点获取信息. 正如你可能已经了解的那样, 每一个系统里的文件都存有三组日期和时间, 它们包括最近修改时间(即使用 ls -l 命令时显示的日期和时间), 最近状态改变时间(包括重命名文件)和最近访问时间.
View a long listing for a file and you will see something like this:
使用长列表模式查看文件信息, 你会看到类似下面的内容:
$ ls -l trythis
-rwx------ 1 shs unixdweebs 109 Nov 11 2013 trythis
Use the stat command and you see all this:
使用 stat 命令, 你会看到下面这些:
$ stat trythis
File: `trythis'
@ -26,11 +24,11 @@ Use the stat command and you see all this:
Modify: 2013-11-11 08:40:10.000000000 -0500
Change: 2013-11-11 08:40:10.000000000 -0500
The file's change and modify dates/times are the same in this case, while the access time is fairly recent. We can also see that the file is using 8 blocks and we see the permissions in each of the two formats -- the octal (0700) format and the rwx format. The inode number, shown in the third line of the output, is 12731681. There are no additional hard links (Links: 1). And the file is a regular file.
在上面的情形中, 文件的状态改变和文件修改的日期/时间是相同的, 而访问时间则是相当近的时间. 我们还可以看到文件使用了 8 个块, 以及两种格式显示的文件权限 -- 八进制(0700)格式和 rwx 格式. 在第三行显示的索引节点是 12731681. 文件没有其它的硬链接(Links: 1). 而且, 这个文件是一个常规文件.
Rename the file and you will see that the change time will be updated.
重命名文件, 你会看到状态改变时间发生变化.
This, the ctime information, was originally intended to hold the creation date and time for the file, but the field was turned into the change time field somewhere a while back.
这里的 ctime 信息, 最早设计用来存储文件的创建日期和时间, 但之前的某个时间变为用来存储状态修改时间.
$ mv trythis trythat
$ stat trythat
@ -42,9 +40,9 @@ This, the ctime information, was originally intended to hold the creation date a
Modify: 2013-11-11 08:40:10.000000000 -0500
Change: 2014-09-21 12:46:22.000000000 -0400
Changing the file's permissions would also register in the ctime field.
改变文件的权限也会改变 ctime 域.
You can also use wilcards with the stat command and list your files' stats in a group:
你也可以配合通配符来使用 stat 命令以列出一组文件的状态:
$ stat myfile*
File: `myfile'
@ -69,18 +67,18 @@ You can also use wilcards with the stat command and list your files' stats in a
Modify: 2014-08-22 12:03:59.000000000 -0400
Change: 2014-08-22 12:03:59.000000000 -0400
We can get some of this information with other commands if we like.
如果我们喜欢的话, 我们也可以通过其他命令来获取这些信息.
Add the "u" option to a long listing and you'll see something like this. Notice this shows us the last access time while adding "c" shows us the change time (in this example, the time when we renamed the file).
向 ls -l 命令添加 "u" 选项, 你会获得下面的结果. 注意这个选项会显示最后访问时间, 而添加 "c" 选项则会显示状态改变时间(在本例中, 是我们重命名文件的时间).
$ ls -lu trythat
-rwx------ 1 shs unixdweebs 109 Sep 9 19:27 trythat
$ ls -lc trythat
-rwx------ 1 shs unixdweebs 109 Sep 21 12:46 trythat
The stat command can also work against directories.
stat 命令也可应用于文件夹.
In this case, we see that there are a number of links.
在这个例子中, 我们可以看到有许多的链接.
$ stat bin
File: `bin'
@ -91,7 +89,7 @@ In this case, we see that there are a number of links.
Modify: 2014-09-15 17:54:41.000000000 -0400
Change: 2014-09-15 17:54:41.000000000 -0400
Here, we're looking at a file system.
在这里, 我们查看一个文件系统.
$ stat -f /dev/cciss/c0d0p2
File: "/dev/cciss/c0d0p2"
@ -100,16 +98,24 @@ Here, we're looking at a file system.
Blocks: Total: 259366 Free: 259337 Available: 259337
Inodes: Total: 223834 Free: 223531
Notice the Namelen (name length) field. Good luck if you had your heart set on file names with greater than 255 characters!
注意 Namelen (文件名长度)域, 如果你一心想用超过 255 个字符的文件名, 那只能祝你好运了!
The stat command can also display some of its information a field at a time for those times when that's all you want to see, In the example below, we just want to see the file type and then the number of hard links.
当你只想查看某一项信息时, stat 命令还可以一次只显示其中一个域. 在下面的例子中, 我们只想查看文件类型, 然后是硬连接数.
$ stat --format=%F trythat
regular file
$ stat --format=%h trythat
1
In the examples below, we look at permissions -- in each of the two available formats -- and then the file's SELinux security context.
在下面的例子中, 我们查看了文件权限 -- 分别以两种可用的格式 -- 然后是文件的 SELinux 安全上下文.
译者注: 原文到这里就结束了, 但很明显缺少结尾. 最后一段的例子可以分别用
$ stat --format=%a trythat
$ stat --format=%A trythat
$ stat --format=%C trythat
来实现.
--------------------------------------------------------------------------------

View File

@ -0,0 +1,77 @@
Linux有问必答——如何为CentOS 7配置静态IP地址
================================================================================
> **问题**在CentOS 7上我想要将我其中一个网络接口从DHCP改为静态IP地址配置如何才能永久为CentOS或RHEL 7上的网络接口分配静态IP地址
如果你想要为CentOS 7中的某个网络接口设置静态IP地址有几种不同的方法这取决于你是否想要使用网络管理器。
网络管理器是一个动态的网络控制与配置系统它用于在网络设备可用时保持设备和连接开启并激活。默认情况下CentOS/RHEL 7安装有网络管理器并处于启用状态。
使用下面的命令来验证网络管理器服务的状态:
$ systemctl status NetworkManager.service
运行以下命令来检查受网络管理器管理的网络接口:
$ nmcli dev status
![](https://farm4.staticflickr.com/3861/15295802711_a102a3574d_z.jpg)
如果某个接口的nmcli的输出结果是“已连接”如本例中的enp0s3这就是说该接口受网络管理器管理。你可以轻易地为某个特定接口禁用网络管理器以便你可以自己为它配置一个静态IP地址。
下面将介绍**在CentOS 7上为网络接口配置静态IP地址的两种方式**在例子中我们将对名为enp0s3的网络接口进行配置。
### 不使用网络管理器配置静态IP地址 ###
进入/etc/sysconfig/network-scripts目录找到该接口的配置文件ifcfg-enp0s3。如果没有请创建一个。
![](https://farm4.staticflickr.com/3911/15112399977_d3df8e15f5_z.jpg)
打开配置文件并编辑以下变量:
![](https://farm4.staticflickr.com/3880/15112184199_f4cbf269a6.jpg)
在上图中“NM_CONTROLLED=no”表示该接口将通过该配置进行设置而不是通过网络管理器进行管理。“ONBOOT=yes”告诉我们系统将在启动时开启该接口。
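下面给出一份示意性的静态IP配置内容其中IP地址、前缀、网关和DNS都是假设值请按你的网络环境修改

TYPE=Ethernet
BOOTPROTO=static
DEVICE=enp0s3
NAME=enp0s3
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.1.25
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8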
保存修改并使用以下命令来重启网络服务:
# systemctl restart network.service
现在验证接口是否配置正确:
# ip add
![](https://farm6.staticflickr.com/5593/15112397947_ac69a33fb4_z.jpg)
### 使用网络管理器配置静态IP地址 ###
如果你想要使用网络管理器来管理该接口你可以使用nmtui网络管理器文本用户界面它提供了在终端环境中配置网络管理器的方式。
在使用nmtui之前首先要在/etc/sysconfig/network-scripts/ifcfg-enp0s3中设置“NM_CONTROLLED=yes”。
现在请按以下方式安装nmtui。
# yum install NetworkManager-tui
然后继续去编辑enp0s3接口的网络管理器配置
# nmtui edit enp0s3
在下面的屏幕中,我们可以手动输入与/etc/sysconfig/network-scripts/ifcfg-enp0s3中所包含的内容相同的信息。
使用箭头键在屏幕中导航,按回车选择值列表中的内容(或填入想要的内容),最后点击屏幕底部右侧的确定按钮。
![](https://farm4.staticflickr.com/3878/15295804521_4165c97828_z.jpg)
最后,重启网络服务。
# systemctl restart network.service
好了,现在一切就绪。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/configure-static-ip-address-centos7.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,52 @@
如何重置CentOS 7的Root密码
===
重置Centos 7 Root密码的方式和Centos 6完全不同。让我来展示一下到底如何操作。
1 - 在grub启动菜单中选中要编辑的启动项
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_003.png)
2 - 按键盘e键来进入编辑界面
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_005.png)
3 - 找到以linux16开头的那一行将ro改为rw init=/sysroot/bin/sh
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_006.png)
4 - 现在按下 Control+x ,使用单用户模式启动
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_007.png)
5 - 现在,可以使用下面的命令访问系统
chroot /sysroot
6 - 重置密码
passwd root
7 - 更新系统信息
touch /.autorelabel
8 - 退出chroot
exit
9 - 重启你的系统
reboot
就是这样!
---
via: http://www.unixmen.com/reset-root-password-centos-7/
作者M.el Khamlichi
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,120 @@
CentOS监控用户登录历史之utmpdump
================================================================================
保留、维护和分析日志如某个特定时期内发生过的或正在发生的帐号事件是Linux系统管理员最基础和最重要的任务之一。对于用户管理检查用户的登入和登出日志不管是失败的还是成功的可以让我们对任何潜在的安全隐患或未经授权使用系统的情况保持警惕。例如工作时间之外或例假期间的来自未知IP地址或帐号的远程登录应当发出红色警报。
在CentOS系统上用户登录历史存储在以下这些文件中
- /var/run/utmp用于记录当前打开的会话who和w工具用它来查看当前有谁登录以及他们正在做什么uptime用它来查看系统已经运行了多长时间。
- /var/log/wtmp用于存储系统连接历史记录last工具用它来查看最近登录用户的列表。
- /var/log/btmp记录失败的登录尝试lastb工具用它来查看最近失败的登录尝试的列表。
![](https://farm4.staticflickr.com/3871/15106743340_bd13fcfe1c_o.png)
在本帖中我将介绍如何使用utmpdump这个来自sysvinit-tools包的小程序它可以把二进制日志文件转储成文本格式以便检查。此工具在CentOS 6和7家族上默认可用。utmpdump收集到的信息比先前提到的那些工具的输出更全面这让它成为一个非常称职的工具。除此之外utmpdump还可以用于修改utmp或wtmp当你想要修复二进制日志中的损坏条目时它会很有用。
### Utmpdump的使用及其输出说明 ###
正如我们之前提到的,这些日志文件与我们大多数人熟悉的其它日志(如/var/log/messages、/var/log/cron、/var/log/maillog相比是以二进制格式存储的因而我们不能使用像less或more这样的文件命令来查看它们的内容。这时utmpdump就派上用场了。
为了要显示/var/run/utmp的内容请运行以下命令
# utmpdump /var/run/utmp
![](https://farm6.staticflickr.com/5595/15106696599_60134e3488_z.jpg)
同样要显示/var/log/wtmp的内容
# utmpdump /var/log/wtmp
![](https://farm6.staticflickr.com/5591/15106868718_6321c6ff11_z.jpg)
最后,对于/var/log/btmp
# utmpdump /var/log/btmp
![](https://farm6.staticflickr.com/5562/15293066352_c40bc98ca4_z.jpg)
正如你所能看到的三种情况下的输出结果是一样的除了utmp和btmp的记录是按时间排序而wtmp的顺序是颠倒的这个原因外。
每个日志行被格式化成多个字段,说明如下:

- 第一个字段:会话识别符;
- 第二个字段PID
- 第三个字段:可以是以下值:~~(表示运行等级改变或系统重启、bw启动守候进程、数字表示TTY编号或者字符和数字表示伪终端
- 第四个字段:可以为空,或为用户名、重启标记或运行级别;
- 第五个字段主TTY或PTY伪终端如果此信息可获得的话
- 第六个字段:远程主机名(如果是本地登录,该字段为空,运行级别信息除外,它会返回内核版本);
- 第七个字段远程系统的IP地址如果是本地登录该字段为0.0.0.0
- 第八个字段:记录创建的日期和时间。

如果没有提供DNS解析第六和第七个字段会显示相同的信息远程系统的IP地址
### Utmpdump使用样例 ###
下面提供了一些utmpdump的简单使用情况。
1. 检查8月18日到9月17日之间某个特定用户如gacanepa的登录次数。
# utmpdump /var/log/wtmp | grep gacanepa
![](https://farm4.staticflickr.com/3857/15293066362_fb2dd566df_z.jpg)
如果你需要回顾先前日期的登录信息,你可以检查/var/log下的wtmp-YYYYMMDD或wtmp.[1...N]和btmp-YYYYMMDD或btmp.[1...N])文件,这些是由[logrotate][1]生成的旧wtmp和btmp的归档文件。
2. 统计来自IP地址192.168.0.101的登录次数。
# utmpdump /var/log/wtmp | grep 192.168.0.101
![](https://farm4.staticflickr.com/3842/15106743480_55ce84c9fd_z.jpg)
3. 显示失败的登录尝试。
# utmpdump /var/log/btmp
![](https://farm4.staticflickr.com/3858/15293065292_e1d2562206_z.jpg)
在/var/log/btmp输出中每个日志行都与一个失败的登录尝试相关如使用不正确的密码或者一个不存在的用户ID。上面图片中高亮部分显示了使用不存在的用户ID登录这警告你有人尝试猜测常用帐号名来闯入系统。这在使用tty1的情况下是个极其严重的问题因为这意味着某人对你机器上的终端具有访问权限该检查一下谁拿到了进入你数据中心的钥匙了也许吧
4. 显示每个用户会话的登入和登出信息
# utmpdump /var/log/wtmp
![](https://farm4.staticflickr.com/3835/15293065312_c762360791_z.jpg)
在/var/logwtmp中一次新的登录事件的特征是第一个字段为7第三个字段是一个终端编号或伪终端id第四个字段为用户名。相关的登出事件会在第一个字段显示8第二个字段显示与登录一样的PID而终端编号字段空白。例如仔细观察上面图片中PID 1463的行。
- 在 [Fri Sep 19 11:57:40 2014 ART]tty1 上出现了登录提示。
- 在 [Fri Sep 19 12:04:21 2014 ART]root 用户登录。
- 在 [Fri Sep 19 12:07:24 2014 ART]root 用户登出。
旁注第四个字段的LOGIN意味着出现了一次登录到第五字段指定的终端的提示。
到目前为止我介绍一些有点琐碎的例子。你可以将utmpdump和其它一些文本处理工具如awk、sed、grep或cut组合来产生过滤和加强的输出。
例如你可以使用以下命令来列出某个特定用户如gacanepa的所有登录事件并发送输出结果到.csv文件它可以用像LibreOffice Calc或Microsoft Excel之类的文字或工作簿应用程序打开查看。让我们只显示PID、用户名、IP地址和时间戳
# utmpdump /var/log/wtmp | grep -E "\[7].*gacanepa" | awk -v OFS="," 'BEGIN {FS="] "}; {print $2,$4,$7,$8}' | sed -e 's/\[//g' -e 's/\]//g'
![](https://farm4.staticflickr.com/3851/15293065352_91e1c1e4b6_z.jpg)
就像上面图片中三个块描绘的那样过滤逻辑操作是由三个管道步骤组成的。第一步用于查找由用户gacanepa触发的登录事件[7]第二步和第三部用于选择期望的字段移除utmpdump输出的方括号并设置输出字段分隔符为逗号。
当然,如果你想要在以后打开来看,你需要把上面的命令输出重定向到文件(在命令后面添加“>[文件名].csv”
![](https://farm4.staticflickr.com/3889/15106867768_0e37881a25_z.jpg)
在更为复杂的例子中,如果你想要知道在特定时间内哪些用户(在/etc/passwd中列出没有登录你可以从/etc/passwd中提取用户名然后运行grep命令来获取/var/log/wtmp输出中对应用户的列表。就像你看到的那样有着无限可能。
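比如,下面是一个粗略的示意脚本,列出 wtmp 中没有任何登录记录的帐号(只做简单的字段匹配,仅供参考):

# utmpdump /var/log/wtmp > /tmp/wtmp.txt
# for u in $(cut -d: -f1 /etc/passwd); do grep -q "\[$u *\]" /tmp/wtmp.txt || echo "$u"; done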
在进行总结之前让我们简要地展示一下utmpdump的另一种使用情况修改utmp或wtmp。由于它们是二进制日志文件你不能像普通文本文件那样直接编辑它们。取而代之的是你可以先将其内容转储成文本格式修改文本输出内容然后将修改后的内容导入回二进制日志中。如下
# utmpdump /var/log/utmp > tmp_output
<用文本编辑器修改 tmp_output>
# utmpdump -r tmp_output > /var/log/utmp
这在你想要移除或修复二进制日志中的任何伪造条目时很有用。
下面小结一下utmpdump可以把utmp、wtmp和btmp日志文件以及轮循出来的旧归档文件中详细的登录事件转储成文本以此补充who、w、uptime、last、lastb之类标准工具的不足这使它成为一个很棒的工具。
你可以随意添加评论以加强本帖的含金量。
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/09/monitor-user-login-history-centos-utmpdump.html
作者:[Gabriel Cánepa][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://xmodulo.com/2014/09/logrotate-manage-log-files-linux.html

View File

@ -0,0 +1,65 @@
检查你的系统是否有“Shellshock”漏洞并修复它
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/shellshock_Linux_check.jpeg)
本文将快速地向你展示**如何检查你的系统是否受到Shellshock的影响**,如果有,**怎样修复你的系统免于被Bash漏洞利用**。
如果你正在关注新闻,你可能已经听说过在[Bash][1]中发现了一个漏洞,这被称为**Bash Bug**或者**Shellshock**。[红帽][2]是第一个发现这个漏洞的机构。Shellshock漏洞允许攻击者注入自己的代码从而使系统暴露给各种恶意软件和远程攻击。事实上[黑客已经利用它来发起DDoS攻击][3]。
由于Bash存在于所有的类Unix系统中只要其中运行着特定版本的bash这些Linux系统就都容易受到Shellshock漏洞的影响。
想知道如果你的Linux系统是否已经受到Shellshock影响有一个简单的方法来检查它这就是我们要看到的。
### 检查Linux系统的Shellshock漏洞 ###
打开一个终端,在它运行以下命令:
env x='() { :;}; echo vulnerable' bash -c 'echo hello'
如果你的系统没有漏洞,你会看到这样的输出:
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
hello
如果你的系统有Shellshock漏洞你会看到一个像这样的输出:
vulnerable
hello
我尝试在我的Ubuntu 14.10上运行得到了这样的结果
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/Shellshock_Linux_Check.jpeg)
您还可以通过使用下面的命令查看bash的版本:
bash --version
如果bash的版本是3.2.51(1),你就应该更新了。
#### 为有Shellshock漏洞的Linux系统打补丁 ####
如果你运行的是基于Debian的Linux操作系统如Ubuntu、Linux Mint等请使用以下命令升级Bash
sudo apt-get update && sudo apt-get install --only-upgrade bash
对于Fedora、Red Hat、CentOS等操作系统请使用以下命令
yum -y update bash
我希望这个小技巧可以帮助你看看你是否受到Shellshock漏洞的影响并解决它。有任何问题和建议欢迎来提。
--------------------------------------------------------------------------------
via: http://itsfoss.com/linux-shellshock-check-fix/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://en.wikipedia.org/wiki/Bash_(Unix_shell)
[2]:https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
[3]:http://www.wired.com/2014/09/hackers-already-using-shellshock-bug-create-botnets-ddos-attacks/

View File

@ -0,0 +1,86 @@
Linux 常见问题解答怎么用checkinstall从源码创建RPM或DEB包
================================================================================
> **问题**我想从源码编译安装软件。有没有一种方式可以从源码创建并安装软件包而不是直接运行“make install”这样以后想卸载程序时就容易多了。
如果你已经用“make install”从源码安装过Linux程序就会知道想完整移除它非常麻烦除非程序的作者在Makefile里提供了卸载uninstall目标。否则你只能对比安装前后系统里的完整文件列表然后手工移除安装过程中加入的所有文件。
这时候Checkinstall就可以派上用场了。Checkinstall会跟踪安装命令例如“make install”、“make install_modules”等所创建或修改的所有文件的路径并建立一个标准的二进制包让你能用发行版的标准包管理系统来安装或卸载它例如Red Hat的yum或者Debian的apt-get命令。根据[官方文档][1]它在Slackware、SuSe、Mandrake和Gentoo上也能工作。
在这篇文章中我们只集中在红帽子和Debian为基础的发行版并展示怎样从源码使用Checkinstall创建一个RPM和DEB软件包
### 在linux上安装Checkinstall ###
在Debian及其衍生版上安装Checkinstall
# aptitude install checkinstall
在基于红帽的发行版上安装Checkinstall你需要下载一个预先构建好的Checkinstall rpm例如从 [http://rpm.pbone.net][2] 下载它已经从Repoforge仓库里移除了。下面这个面向CentOS 6的rpm包在CentOS 7上也能用。
# wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/ikoinoba/CentOS_CentOS-6/x86_64/checkinstall-1.6.2-3.el6.1.x86_64.rpm
# yum install checkinstall-1.6.2-3.el6.1.x86_64.rpm
安装好checkinstall之后就可以用下列格式创建软件包了
# checkinstall <install-command>
如果没有参数将使用默认安装命令“make install”
### 用Checkinstall创建一个RPM或DEB包 ###
在这个例子里我们将为htop创建一个软件包htop是Linux下的一个交互式文本模式进程查看器就像是加强版的top。
首先,让我们从项目的官方网站下载源代码。按照最佳实践,我们把源码存放到/usr/local/src下并解压它。
# cd /usr/local/src
# wget http://hisham.hm/htop/releases/1.0.3/htop-1.0.3.tar.gz
# tar xzf htop-1.0.3.tar.gz
# cd htop-1.0.3
让我们先搞清楚htop的安装命令以便稍后交给checkinstall调用。如下所示htop是用“make install”命令安装的
# ./configure
# make install
因此要创建htop软件包我们可以不带任何参数调用checkinstall这时它将使用“make install”命令来创建软件包。在这个过程中checkinstall会问你一连串的问题。
简而言之,用下面的命令来创建一个**htop**软件包:
# ./configure
# checkinstall
当被问到是否创建默认的软件包文档时回答“Y”
![](https://farm6.staticflickr.com/5577/15118597217_1fdd0e0346_z.jpg)
你可以输入一个包的简短描述然后按两次ENTER
![](https://farm4.staticflickr.com/3898/15118442190_604b71d9af.jpg)
输入对应的数字编号可以修改下面列出的任何值按ENTER则继续
![](https://farm4.staticflickr.com/3898/15118442180_428de59d68_z.jpg)
然后checkinstall将根据你的Linux系统自动创建一个.rpm或者.deb软件包
在CentOS7
![](https://farm4.staticflickr.com/3921/15282103066_5d688b2217_z.jpg)
在Debian 7:
![](https://farm4.staticflickr.com/3905/15118383009_4909a7c17b_z.jpg)
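生成的软件包就可以用发行版的标准包管理器来安装和卸载了下面的文件名只是示意实际名称以checkinstall的输出为准

在基于Debian的系统上

# dpkg -i htop_1.0.3-1_amd64.deb
# apt-get remove htop

在基于红帽的系统上:

# rpm -ivh htop-1.0.3-1.x86_64.rpm
# yum remove htop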
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/build-rpm-deb-package-source-checkinstall.html
译者:[luoyutiantang](https://github.com/luoyutiantang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://checkinstall.izto.org/docs/README
[2]:http://rpm.pbone.net/
[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html