mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-02-03 23:40:14 +08:00
commit
9cfa383bfc
@ -1,6 +1,7 @@
|
||||
适用于Linux的在线工具
|
||||
16个 Linux 方面的在线工具类网站
|
||||
================================================================================
|
||||
众所周知,GNU Linux不仅仅只是一款操作系统。看起来通过互联网全球许多人都在致力于这款企鹅图标(即Linux)的操作系统。如果你读到这篇文章,你可能倾向于读到关于Linux联机的内容。在可以找到的所有关于这个主题的网页中,有一些网站是每个Linux爱好者都应该收藏起来的。这些网站不仅仅只是教程或回顾,更是可以随时随地访问并与他人共享的实用工具。所以,今天我会建议一份包含16个应该收藏的网址清单。它们中的一些对Windows或Mac用户同样有用:这是在他们的能力范围内可以做到的。(译者注:Windows和Mac一样可以很好地体验Linux)
|
||||
众所周知,GNU Linux不仅仅只是一款操作系统。看起来通过互联网全球许多人都在致力于这款以企鹅为吉祥物的操作系统。如果你读到这篇文章,你可能希望读一些关于Linux在线资源的内容。在可以找到的所有关于这个主题的网页中,有一些网站是每个Linux爱好者都应该收藏起来的。这些网站不仅仅只是教程或回顾,更是可以随时随地访问并与他人共享的实用工具。所以,今天我会建议一份包含16个应该收藏的网址清单。它们中的一些对Windows或Mac用户同样有用:这是在他们的能力范围内可以做到的。(译者注:Windows和Mac一样可以很好地体验Linux)
|
||||
|
||||
### 1. [ExplainShell.com][1] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3841/14517716647_3b6a1a564d_z.jpg)][2]
|
||||
@ -11,43 +12,43 @@
|
||||
|
||||
[![](https://farm4.staticflickr.com/3900/14703872782_033e5acdb8_z.jpg)][4]
|
||||
|
||||
如果你想开始学习Linux命令行,或者想快速地得到一个自定义的shell命令提示符,但不知道从何下手,这个网站会为你生成PS1提示代码,在家目录下放置.bashrc文件。你可以拖拽任何你想在提示符里看到的元素,譬如用户名和当前时间,这个网站都会为你编写易懂可读的代码。绝对是懒人必备!
|
||||
如果你想开始学习Linux命令行,或者想快速地生成一个自定义的shell命令提示符,但不知道从何下手,这个网站可以为你生成PS1提示的代码,将代码放到家目录下的.bashrc文件中即可。你可以拖拽任何你想在提示符里看到的元素,譬如用户名和当前时间,这个网站都会为你编写易懂可读的代码。绝对是懒人必备!
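作为示意(并非该网站生成的实际代码,具体内容以你的拖拽结果为准),放进家目录下 .bashrc 文件的 PS1 设置大致长这样:

    # 提示符显示 用户名@主机名:当前目录$ ,并给用户名加上绿色
    export PS1="\[\e[32m\]\u\[\e[0m\]@\h:\w\$ "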
|
||||
|
||||
### 3. [Vim-adventures.com][5] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3838/14681149696_0c533fd6de_z.jpg)][6]
|
||||
|
||||
我是最近才发现这个网站的,但我的生活已经深陷其中。简而言之:它就是一个使用Vim命令的RPG游戏。在等距的水平上使用‘h,j,k,l’四个键移动字母,获取新的命令/能力,收集关键词,非常快速地学习高效地使用Vim。
|
||||
我是最近才发现这个网站的,但我的生活已经深陷其中。简而言之:它就是一个使用Vim命令的RPG游戏。在地图的平面上使用‘h,j,k,l’四个键移动你的角色、得到新的命令/能力、收集钥匙,可以帮助你非常快速地学习如何高效使用Vim。
|
||||
|
||||
### 4. [Try Github][7] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3874/14517499739_0452848d68_z.jpg)][8]
|
||||
|
||||
目标很简单:15分钟学会Git。这个网站模拟一个控制台,带你遍历这种协作编辑的每一步。界面非常时尚,目的十分有用。唯一不足的是对Git敏感,但Git绝对是一项不错的技能,这里也是学习Git的绝佳之处。
|
||||
目标很简单:15分钟学会Git。这个网站模拟一个控制台,带你遍历这种协作编辑的每一步。界面非常时尚,内容十分实用。唯一的不足是它只针对Git,但Git绝对是一项不错的技能,这里也是学习Git的绝佳之处。
|
||||
|
||||
### 5. [Shortcutfoo.com][9] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3906/14517499799_f142ea37cb_z.jpg)][10]
|
||||
|
||||
又一个包含众多快捷键数据库的网站,shortcutfoo以更标准的方式将其内容呈现给用户,但绝对比有趣的迷你游戏更直截了当。这里有许多软件的快捷键,并按类别分组。虽然它不像Vim一类完全依赖快捷键的软件那么全面,但也足以提供快速的提示或一般性的概述。
|
||||
又一个包含众多快捷键数据库的网站,shortcutfoo以更标准的方式将其内容呈现给用户,但绝对比有趣的迷你游戏更直截了当。这里有许多软件的快捷键,并按类别分组。虽然像Vim一类的软件它没有给出超级完整的快捷键列表,但也足以提供快速的提示或一般性的概述。
|
||||
|
||||
### 6. [GitHub Free Programming Books][11] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3867/14517499989_408a28d8be_z.jpg)][12]
|
||||
|
||||
正如你从URL上猜到的一样,这个网站就是免费在线编程书籍的集合,使用Git协作方式编写。上面的内容非常好,作者们应该为做出这些工作受到表扬。它可能不是最容易阅读的,但一定是最有启发性的之一。我们只希望这项运动能持续进行。
|
||||
正如你从URL上猜到的一样,这个网站就是免费在线编程书籍的集合,使用Git协作方式编写。上面的内容非常好,作者们应该为他们做出的这些贡献受到表扬。它可能不是最容易阅读的,但一定是最有启发性的之一。我们只希望这项运动能持续进行。
|
||||
|
||||
### 7. [Collabedit.com][13] ###
|
||||
|
||||
[![](https://farm3.staticflickr.com/2940/14681150086_2d169d67f9_z.jpg)][14]
|
||||
|
||||
如果你曾经准备过电话面试,你应该先试试collabedit。它让你创建文件,选择你想使用的编程语言,然后通过URL共享文档。打开链接的人可以免费地实时使用文本交互,使你可以评判他们的编程水平或只是交换一些程序片段。这里甚至还提供合适的语法高亮和聊天功能。换句话说,这就是程序员的即时Google文档。
|
||||
如果你曾经计划过电话面试,你应该先试试collabedit。它让你创建文件,选择你想使用的编程语言,然后通过URL共享文档。打开链接的人可以免费地实时使用文本交互,使你可以评判他们的编程水平或只是交换一些程序片段。这里甚至还提供合适的语法高亮和聊天功能。换句话说,这就是程序员的即时Google Docs。
|
||||
|
||||
### 8. [Cpp.sh][15] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3840/14700981001_af3ac40b65_z.jpg)][16]
|
||||
|
||||
尽管这个网站超出了Linux范围,但因为它非常有用,所以值得将它放在这里。简单地说,这是一个C++在线开发环境。只需在导航栏里编写程序,然后运行它。作为奖励,你可以使用自动补全、Ctrl+Z,以及和你的小伙伴共享URL。这些有趣的事情,你只需要通过一个简单的浏览器就能做到。
|
||||
尽管这个网站超出了Linux范围,但因为它非常有用,所以值得将它放在这里。简单地说,这是一个C++在线开发环境。只需在浏览器里编写程序,然后运行它。作为奖励,你可以使用自动补全、Ctrl+Z,以及和你的小伙伴分享你的作品的URL。这些有趣的事情,你只需要通过一个简单的浏览器就能做到。
|
||||
|
||||
### 9. [Copy.sh][17] ###
|
||||
|
||||
@ -59,13 +60,13 @@
|
||||
|
||||
[![](https://farm4.staticflickr.com/3887/14517495938_ca3b831ca9_z.jpg)][21]
|
||||
|
||||
我们总是在自己的电脑上保存着一大段类似于“gems”的命令行【翻译得不准确,麻烦校正】,commandlinefu的目标是把这些片段释放给全世界。作为一个协作式数据库,它就像是命令行里的维基百科。每个人可以免费注册,把自己最钟爱的命令提交到这个网站上给其他人看。你将能够获取来自四面八方的知识并与人分享。如果你对精通shell饶有兴趣,commandlinefu也可以提供一些优秀的特性,比如随机命令和每天学习新知识的新闻订阅。
|
||||
我们总是在自己的电脑上保存着一大段命令行“宝石”,commandlinefu的目标是把这些片段释放给全世界。作为一个协作式数据库,它就像是命令行里的维基百科。每个人可以免费注册,把自己最钟爱的命令提交到这个网站上给其他人看。你将能够获取来自四面八方的知识并与人分享。如果你对精通shell饶有兴趣,commandlinefu也可以提供一些优秀的特性,比如随机命令和每天学习新知识的新闻订阅。
|
||||
|
||||
### 11. [Alias.sh][22] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3868/14701762124_a7b3547aca_z.jpg)][23]
|
||||
|
||||
另一协作式数据库,alias.sh(我爱死这个URL了)有点像commandlinefu,但是为shell别名开发的。你可以共享和发现一些有用的别名,来使你的CLI(命令行界面)体验更加舒服。我个人喜欢这个获取图片维度的别名。
|
||||
另一个协作式数据库,alias.sh(我爱死这个URL了)有点像commandlinefu,但是为shell别名开发的。你可以共享和发现一些有用的别名,来使你的CLI(命令行界面)体验更加舒服。我个人喜欢这个获取图片尺寸的别名命令。
|
||||
|
||||
function dim(){ sips $1 -g pixelWidth -g pixelHeight; }
|
||||
|
||||
@ -75,40 +76,41 @@
|
||||
|
||||
[![](https://farm3.staticflickr.com/2910/14681149996_50a45bff78_z.jpg)][25]
|
||||
|
||||
有谁不知道Distrowatch?除了基于这个网站流行度给出一个精确的Linux发行版排名,Distrowatch也是一个非常有用的数据库。无论你正苦苦寻找一个新的发行版,还是只是出于好奇,它都能为你能找到的每个Linux版本呈现一个详尽的描述,包含默认的桌面环境,包管理系统,默认应用程序等信息,还有所有的版本号,以及可用的下载链接。总而言之,这就是个Linux宝库。
|
||||
有谁不知道Distrowatch?除了基于这个网站流行度给出一个精确的Linux发行版排名,Distrowatch也是一个非常有用的数据库。无论你正苦苦寻找一个新的发行版,还是只是出于好奇,它都能为你能找到的每个Linux版本呈现一个详尽的描述,包含默认的桌面环境、包管理系统、默认应用程序等信息,还有所有的版本号,以及可用的下载链接。总而言之,这就是个Linux宝库。
|
||||
|
||||
### 13. [Linuxmanpages.com][26] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3911/14704165765_8e30cb3d3f_z.jpg)][27]
|
||||
|
||||
一切都在URL中:随时随地获取主流命令的手册页面。尽管不确信对于Linux用户是否真的有用,因为他们可以从真实的终端中获取这些信息,但这里的内容还是值得关注的。
|
||||
一切尽在URL中说明了:随时随地获取主流命令的手册页面。尽管不确信对于Linux用户是否真的有用,因为他们可以从真实的终端中获取这些信息,但这里的内容还是值得关注的。
|
||||
|
||||
### 14. [AwesomeCow.com][28] ###
|
||||
|
||||
[![](https://farm6.staticflickr.com/5558/14704165965_02b10ee293_z.jpg)][29]
|
||||
|
||||
这里可能少一些核心的Linux内容,但肯定是有一些用的。Awesomecow是一个搜索引擎,来寻找Windows软件在Linux上的替代品。它对那些迁移到企鹅操作系统(Linux)或习惯Windows软件的人很有帮助。我认为这个网站代表一种能力,表明了在谈到软件质量时Linux也可以适用于专业领域。大家至少可以尝试一下。
|
||||
这可能对于骨灰级 Linux 用户没啥用,但是对于其他人也许有用。Awesomecow是一个用来寻找Windows软件在Linux上对应替代品的搜索引擎。它对那些迁移到企鹅操作系统(Linux)或习惯Windows软件的人很有帮助。我认为这个网站表明了一种可能:在谈到软件质量时,Linux也可以适用于专业领域。大家至少可以尝试一下。
|
||||
|
||||
### 15. [PenguSpy.com][30] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3904/14517495728_f6877e8e3b_z.jpg)][31]
|
||||
|
||||
Steam在Linux上崭露头角之前,游戏性可能是Linux的软肋。但这个名为“pengsupy”的网站不遗余力地弥补这个软肋,通过使用漂亮的接口在数据库中收集所有兼容Linux的游戏。游戏按照类别、发行日期、评分等指标分类。我真心希望这一类的网站不会因为Steam的存在走向衰亡,毕竟这是我在这个列表里最喜爱的网站之一。
|
||||
Steam在Linux上崭露头角之前,可玩性可能是Linux的软肋。但这个名为“PenguSpy”的网站不遗余力地弥补这个软肋,通过使用漂亮的界面展现了数据库中收集的所有兼容Linux的游戏。游戏按照类别、发行日期、评分等指标分类。我真心希望这一类的网站不会因为Steam的存在走向衰亡,毕竟这是我在这个列表里最喜爱的网站之一。
|
||||
|
||||
### 16. [Linux Cross Reference by Free Electrons][32] ###
|
||||
|
||||
[![](https://farm4.staticflickr.com/3913/14712049464_6b666e2cfa_z.jpg)][33]
|
||||
|
||||
最后,对所有的专家和好奇的用户,lxr是源自Linux Cross Reference的回文构词法,使我们能交互地在线查看Linux内核代码。通过标识符可以很方便地使用导航栏,你可以使用标准的diff标记对比文件的不同版本。这个网站的界面看起来严肃直接,毕竟这只是一个希望完美阐述开源观点的网站。
|
||||
最后,对所有的专家和好奇的用户,lxr 是 Linux Cross Reference 的缩写,它使我们能交互地在线查看Linux内核代码。可以通过各种标识符在代码中很方便地导航,你可以使用标准的diff标记对比文件的不同版本。这个网站的界面看起来严肃直接,毕竟这只是一个希望完美阐述开源观点的网站。
|
||||
|
||||
总而言之,应该列出更多这一类的网站,作为这篇文章第二部分的主题。但这篇文章是一个好的开始,是一道为Linux用户寻找在线工具的开胃菜。如果你有其它任何想要分享的页面,而且是紧跟这个主题的,在评论里写出来。这将有助于续写这个列表。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/2014/07/useful-online-tools-linux.html
|
||||
|
||||
原文作者:[Adrien Brochard][a](我是一名来自法国的Linux狂热爱好者。在尝试过众多的发行版后,我最终选择了Archlinux。但我一直会通过叠加技巧和窍门来优化我的系统。)
|
||||
|
||||
译者:[KayGuoWhu](https://github.com/KayGuoWhu) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[KayGuoWhu](https://github.com/KayGuoWhu) 校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,8 @@
|
||||
如何在Linux中使用awk命令
|
||||
================================================================================
|
||||
文本处理是Unix的核心。从管道到/proc子系统,“一切都是文件”的理念贯穿于操作系统和所有基于它构造的工具。正因为如此,轻松地处理文本是一个期望成为Linux系统管理员甚至是资深用户的最重要的技能之一,awk是通用编程语言之外最强大的文本处理工具之一。
|
||||
文本处理是Unix的核心。从管道到/proc子系统,“一切都是文件”的理念贯穿于操作系统和所有基于它构造的工具。正因为如此,轻松地处理文本是一个期望成为Linux系统管理员甚至是资深用户的最重要的技能之一,而 awk是通用编程语言之外最强大的文本处理工具之一。
|
||||
|
||||
最简单的awk的任务是从标准输入中选择字段;如果你对awk除了这个没有学习过其他的,它还是会是你身边一个非常有用的工具。
|
||||
最简单的awk的任务是从标准输入中选择字段;如果你对awk除了这个用途之外,从来没了解过它的其他用途,你会发现它还是会是你身边一个非常有用的工具。
|
||||
|
||||
默认情况下,awk通过空格分隔输入。如果你想选择输入的第一个字段,只需要告诉awk输出$1:
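举一个最简单的例子(示意):

    $ echo 'one two three' | awk '{print $1}'

> one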
|
||||
|
||||
@ -30,13 +30,13 @@
|
||||
|
||||
> foo: three | bar: one
|
||||
|
||||
好吧,如果你的输入不是由空格分隔怎么办?只需用awk中的'-F'标志后带上你的分隔符:
|
||||
好吧,如果你的输入不是由空格分隔怎么办?只需用awk中的'-F'标志指定你的分隔符:
|
||||
|
||||
$ echo 'one mississippi,two mississippi,three mississippi,four mississippi' | awk -F , '{print $4}'
|
||||
|
||||
> four mississippi
|
||||
|
||||
偶尔间,你会发现自己正在处理拥有不同的字段数量的数据,但你只知道你想要的*最后*字段。 awk中内置的$NF变量代表*字段的数量*,这样你就可以用它来抓取最后一个元素:
|
||||
偶尔间,你会发现自己正在处理字段数量不同的数据,但你只知道你想要的*最后*字段。 awk中内置的$NF变量代表*字段的数量*,这样你就可以用它来抓取最后一个元素:
|
||||
|
||||
$ echo 'one two three four' | awk '{print $NF}'
|
||||
|
||||
@ -54,9 +54,9 @@
|
||||
|
||||
> three
|
||||
|
||||
而且这一切都非常有用,同样你可以摆脱强制使用sed,cut,和grep来得到这些结果(尽管有大量的工作)。
|
||||
而且这一切都非常有用,同样你可以摆脱强制使用sed,cut,和grep来得到这些结果(尽管要做更多的操作)。
|
||||
|
||||
因此,我将为你留下awk的最后介绍特性,维护跨行状态。
|
||||
因此,我将最后为你介绍awk的一个特性,维持跨行状态。
|
||||
|
||||
$ echo -e 'one 1\ntwo 2' | awk '{print $2}'
|
||||
|
||||
@ -68,7 +68,7 @@
|
||||
|
||||
> 3
|
||||
|
||||
(END代表的是我们在执行完每行的处理**之后**只处理下面的代码块
|
||||
(END代表的是我们在执行完每行的处理**之后**只处理下面的代码块)
|
||||
|
||||
这里我使用的例子是统计web服务器请求日志的字节大小。想象一下我们有如下这样的日志:
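作为示意(下面的日志行和文件名均为假设,采用常见的Apache访问日志格式,响应字节数位于第10个字段):

    127.0.0.1 - - [10/Sep/2014:12:00:01 +0000] "GET /index.html HTTP/1.1" 200 5120

统计所有请求的总字节数时,可以在每一行累加第10个字段,并在END代码块中输出总和:

    $ awk '{ total += $10 } END { print total }' access.log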
|
||||
|
||||
@ -104,7 +104,7 @@
|
||||
|
||||
> 31657
|
||||
|
||||
如果你正在寻找关于awk的更多资料,你可以在Amazon中在15美元内找到[原始awk手册][1]的副本。你同样可以使用Eric Pement的[单行awk命令收集][2]这本书
|
||||
如果你正在寻找关于awk的更多资料,你可以在Amazon中花费不到15美元买到[原始awk手册][1]的二手书。你也许还可以参考Eric Pement的[单行awk命令收集][2]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -112,7 +112,7 @@ via: http://xmodulo.com/2014/07/use-awk-command-linux.html
|
||||
|
||||
作者:[James Pearson][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -4,19 +4,20 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但
|
||||
|
||||
### 背景知识 ###
|
||||
对那些不熟悉Nvidia Optimus的读者来说,它通过被称为“GPU切换”的技术,在板载Intel图形芯片组和处理能力更强的NVIDIA独立显卡之间按需切换。这么做的主要目的是延长笔记本电池的使用寿命,以便在不需要Nvidia GPU的时候将其关闭。带来的好处是显而易见的,比如说你只是想简单地打打字,笔记本电池可以撑8个小时;如果看高清视频,可能就只能撑3个小时了。使用Windows时经常如此。
|
||||
|
||||
![](https://farm6.staticflickr.com/5581/14612159387_2e89a52085_z.jpg)
|
||||
|
||||
几年前,我买了一台上网本(Asus VX6),犯的最蠢的一个错误就是没有检查Linux驱动兼容性。因为在以前,特别是对于一台上网本大小的设备,这根本不会是问题。即便某些驱动不是现成可用的,我也可以找到其它的办法让它正常工作,比如安装专门模块或者使用反向移植。对我来说这是第一次——我的电脑预先配备了Nvidia ION2图形显卡。
|
||||
|
||||
在那时候,Nvidia的Optimus混合GPU硬件还是相当新的产品,而我也没有预见到在这台机器上运行Linux会遇到什么限制。如果你读到了这里,恰好对Linux系统有经验,而且也在几年前买过一台笔记本,你可能对这种痛苦感同身受。
|
||||
|
||||
[Bumblebee][4]项目直到最近因为得到Linux系统对混合图形方面的支持才变得好起来。事实上,如果配置正确的话,通过命令行接口(如“optirun vlc”)为想要的应用程序去利用Nvidia显卡的功能是可能的,但让HDMI一类的功能运转起来就很不同了。(译者注:Bumblebee 项目是把Nvidia的Optimus技术移到Linux上来。)
|
||||
[Bumblebee][4]项目直到最近因为得到Linux系统对混合图形方面的支持才变得好起来。事实上,如果配置正确的话,通过命令行接口(如“optirun vlc”)让你选定的应用程序能利用Nvidia显卡功能是可行的,但让HDMI一类的功能运转起来就很不同了。(译者注:Bumblebee 项目是把Nvidia的Optimus技术移到Linux上来。)
|
||||
|
||||
我之所以使用“如果配置正确的话”这个短语,是因为实际上为了让它发挥出性能往往不只是通过几次尝试去改变Xorg的配置就能做到的。如果你以前没有使用过ppa-purge或者运行过“dpkg-reconfigure -phigh xserver-xorg”这类命令,那么我可以向你保证修补Bumblebee的过程会让你受益匪浅。
|
||||
我之所以使用“如果配置正确的话”这个短语,是因为实际上为了让它发挥出性能来往往不只是通过几次尝试去改变Xorg的配置就能做到的。如果你以前没有使用过ppa-purge或者运行过“dpkg-reconfigure -phigh xserver-xorg”这类命令,那么我可以向你保证修补Bumblebee的过程会让你受益匪浅。
|
||||
|
||||
[![](https://farm6.staticflickr.com/5588/14798680495_947c38b043_o.png)][2]
|
||||
|
||||
等待了很长一段时间,Nvidia才发布了支持Optimus的Linux驱动,但我们仍然没有获取对双显卡切换的真正支持。然而,现在有了Ubuntu 14.04、nvidia-prime和nvidia-331驱动,任何人都可以在Intel芯片和Nvidia显卡之间轻松切换。不幸的是,为了使切换生效,还是会受限于要重启X11视窗系统(通过注销登录实现)。
|
||||
在等待了很长一段时间后,Nvidia才发布了支持Optimus的Linux驱动,但我们仍然没有得到对双显卡切换的真正支持。然而,现在有了Ubuntu 14.04、nvidia-prime和nvidia-331驱动,任何人都可以在Intel芯片和Nvidia显卡之间轻松切换。不过不幸的是,为了使切换生效,还是会受限于需要重启X11视窗系统(通过注销登录实现)。
|
||||
|
||||
为了减轻这种不便,有一个小型程序用于快速切换,稍后我会给出。这个驱动程序的安装就此成为一件轻而易举的事了,HDMI也可以正常工作,这足以让我心满意足了。
|
||||
|
||||
@ -24,11 +25,11 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但
|
||||
|
||||
为了更快地描述这个过程,我假设你已经安装好Ubuntu 14.04或者Mint 17。
|
||||
|
||||
作为一名系统管理员,最近我发现90%的Linux通过命令行执行起来更快,但这次我推荐使用“Additional Drivers”这个应用程序,你可能使用它安装过网卡或声卡驱动。
|
||||
作为一名系统管理员,最近我发现90%的Linux操作通过命令行执行起来更快,但这次我推荐使用“Additional Drivers”这个应用程序,你可能使用它安装过网卡或声卡驱动。
|
||||
|
||||
![](https://farm4.staticflickr.com/3886/14795564221_753f9e2d99_z.jpg)
|
||||
|
||||
**注意:下面的所有命令都是在~#前执行的,需要root权限执行。在运行命令前,要么使用“sudo su”(切换到root权限),要么在每条命令的开头使用速冻运行。**
|
||||
**注意:下面的所有命令都是在~#提示符下执行的,需要root权限执行。在运行命令前,要么使用“sudo su”(切换到root权限),要么在每条命令的开头使用sudo运行。**
|
||||
|
||||
你也可以在命令行输入如下命令进行安装:
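作为示意(并非原文给出的原始命令;包名对应上文提到的nvidia-331和nvidia-prime驱动,具体以你的发行版仓库为准):

    ~# apt-get install nvidia-331 nvidia-prime nvidia-settings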
|
||||
|
||||
@ -44,19 +45,19 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但
|
||||
|
||||
~$ nvidia-settings
|
||||
|
||||
#### 注意:~$表示不以root用户身份执行。 ####
|
||||
**注意:~$表示不以root用户身份执行。**
|
||||
|
||||
![](https://farm4.staticflickr.com/3921/14796320814_de5c9882c2_z.jpg)
|
||||
|
||||
你也可以使用命令行设置默认使用哪一块显卡:
|
||||
|
||||
~# prime-select intel (or nvidia)
|
||||
~# prime-select intel (或 nvidia)
|
||||
|
||||
使用这个命令进行切换:
|
||||
|
||||
~# prime-switch intel (or nvidia)
|
||||
~# prime-switch intel (或 nvidia)
|
||||
|
||||
两个命令的生效都需要重启X11,可以通过注销和重新登录实现。重启电脑也行。
|
||||
两个命令的生效都需要重启X11,可以通过注销和重新登录实现。当然重启电脑也行。
|
||||
|
||||
对Ubuntu用户键入命令:
|
||||
|
||||
@ -70,7 +71,7 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但
|
||||
|
||||
~# prime-select query
|
||||
|
||||
最后,你可以通过添加ppa:nilarimogard/webupd8来安装叫做prime-indicator的程序包,实现通过工具栏快速切换来重启Xserver会话。为了安装它,只需要运行:
|
||||
最后,你可以通过添加ppa:nilarimogard/webupd8来安装叫做prime-indicator的程序包,实现通过工具栏快速切换来重启Xserver会话。要安装它,只需要运行:
|
||||
|
||||
~# add-apt-repository ppa:nilarimogard/webupd8
|
||||
~# apt-get update
|
||||
@ -84,7 +85,7 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但
|
||||
|
||||
也可以花时间查看一下这个我偶然发现的[脚本][3],用来方便地在Bumblebee和Nvidia-Prime之间进行切换,但我必须强调并没有亲自对此进行实验。
|
||||
|
||||
最后,我感到非常惭愧写了这么多才得以为Linux上的显卡提供了专门支持,但仍然不能实现双显卡切换,因为混合图形技术似乎是便携式设备的未来。一般情况下,AMD会发布Linux平台上的驱动支持,但我认为Optimus是目前为止我遇到过的最糟糕的硬件支持问题。
|
||||
最后,我感到非常惭愧,写了这么多才得以为Linux上的显卡提供了专门支持,但仍然不能实现双显卡切换,因为混合图形技术似乎是便携式设备的未来。一般情况下,AMD会发布Linux平台上的驱动支持,但我认为Optimus是目前为止我遇到过的最糟糕的硬件支持问题。
|
||||
|
||||
不管这篇教程对你的使用是否完美,但这确实是利用这块Nvidia显卡最容易的方法。你可以试着在Intel显卡上只运行最新的Unity,然后考虑2到3个小时的电池寿命是否值得权衡。
|
||||
|
||||
@ -94,7 +95,7 @@ via: http://xmodulo.com/2014/08/install-configure-nvidia-optimus-driver-ubuntu.h
|
||||
|
||||
作者:[Christopher Ward][a]
|
||||
译者:[KayGuoWhu](https://github.com/KayGuoWhu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,15 +1,16 @@
|
||||
Linux中15个‘echo’ 实例
|
||||
Linux中的15个‘echo’ 命令实例
|
||||
================================================================================
|
||||
**echo**是Linux的bash和C shell中最常用、使用最广泛的内置命令之一,通常用在脚本和批处理文件中,向标准输出或者文件中显示一行文本或者字符串。
|
||||
|
||||
![echo command examples](http://www.tecmint.com/wp-content/uploads/2014/08/echo-command.png)
|
||||
|
||||
echo命令例子
|
||||
|
||||
echo命令的语法是:
|
||||
|
||||
echo [选项] [字符串]
|
||||
|
||||
**1.** 输入一行文本并显示在标准输出上
|
||||
### **1.** 输入一行文本并显示在标准输出上
|
||||
|
||||
$ echo Tecmint is a community of Linux Nerds
|
||||
|
||||
@ -17,7 +18,9 @@ echo命令的语法是:
|
||||
|
||||
Tecmint is a community of Linux Nerds
|
||||
|
||||
**2.** 声明一个变量并输出它的值。比如,声明变量**x**并给它赋值为**10**。
|
||||
### **2.** 输出一个声明的变量值
|
||||
|
||||
比如,声明变量**x**并给它赋值为**10**。
|
||||
|
||||
$ x=10
|
||||
|
||||
@ -27,15 +30,20 @@ echo命令的语法是:
|
||||
|
||||
The value of variable x = 10
|
||||
|
||||
**注意:** Linux中的选项‘**-e**‘扮演了转义字符反斜线的翻译器。
|
||||
|
||||
**3.** 使用‘**\b**‘选项- ‘**-e**‘后带上'\b'会删除字符间的所有空格。
|
||||
### **3.** 使用‘**\b**‘选项
|
||||
|
||||
‘**-e**‘后带上'\b'会删除字符间的所有空格。
|
||||
|
||||
**注意:** Linux中的选项‘**-e**‘用于解释反斜线转义字符。
|
||||
|
||||
$ echo -e "Tecmint \bis \ba \bcommunity \bof \bLinux \bNerds"
|
||||
|
||||
TecmintisacommunityofLinuxNerds
|
||||
|
||||
**4.** 使用‘**\n**‘选项- ‘**-e**‘后面的带上‘\n’行会在遇到的地方作为新的一行
|
||||
### **4.** 使用‘**\n**‘选项
|
||||
|
||||
‘**-e**‘后面的带上‘\n’行会在遇到的地方作为新的一行
|
||||
|
||||
$ echo -e "Tecmint \nis \na \ncommunity \nof \nLinux \nNerds"
|
||||
|
||||
@ -47,13 +55,15 @@ echo命令的语法是:
|
||||
Linux
|
||||
Nerds
|
||||
|
||||
**5.** 使用‘**\t**‘选项 - ‘**-e**‘后面跟上‘\t’会在空格间加上水平制表符。
|
||||
### **5.** 使用‘**\t**‘选项
|
||||
|
||||
‘**-e**‘后面跟上‘\t’会在空格间加上水平制表符。
|
||||
|
||||
$ echo -e "Tecmint \tis \ta \tcommunity \tof \tLinux \tNerds"
|
||||
|
||||
Tecmint is a community of Linux Nerds
|
||||
|
||||
**6.** 也可以同时使用换行‘**\n**‘与水平制表符‘**\t**‘。
|
||||
### **6.** 也可以同时使用换行‘**\n**‘与水平制表符‘**\t**‘
|
||||
|
||||
$ echo -e "\n\tTecmint \n\tis \n\ta \n\tcommunity \n\tof \n\tLinux \n\tNerds"
|
||||
|
||||
@ -65,7 +75,9 @@ echo命令的语法是:
|
||||
Linux
|
||||
Nerds
|
||||
|
||||
**7.** 使用‘**\v**‘选项 - ‘**-e**‘后面跟上‘\v’会加上垂直制表符。
|
||||
### **7.** 使用‘**\v**‘选项
|
||||
|
||||
‘**-e**‘后面跟上‘\v’会加上垂直制表符。
|
||||
|
||||
$ echo -e "\vTecmint \vis \va \vcommunity \vof \vLinux \vNerds"
|
||||
|
||||
@ -77,7 +89,7 @@ echo命令的语法是:
|
||||
Linux
|
||||
Nerds
|
||||
|
||||
**8.** 也可以同时使用换行‘**\n**‘与垂直制表符‘**\v**‘。
|
||||
### **8.** 也可以同时使用换行‘**\n**‘与垂直制表符‘**\v**‘
|
||||
|
||||
$ echo -e "\n\vTecmint \n\vis \n\va \n\vcommunity \n\vof \n\vLinux \n\vNerds"
|
||||
|
||||
@ -98,43 +110,51 @@ echo命令的语法是:
|
||||
|
||||
**注意:** 你可以按照你的需求连续使用两个或者多个垂直制表符,水平制表符与换行符。
|
||||
|
||||
**9.** 使用‘**\r**‘选项 - ‘**-e**‘后面跟上‘\r’来指定输出中的回车符。
|
||||
### **9.** 使用‘**\r**‘选项
|
||||
|
||||
‘**-e**‘后面跟上‘\r’来指定输出中的回车符。(LCTT 译注:会覆写行开头的字符)
|
||||
|
||||
$ echo -e "Tecmint \ris a community of Linux Nerds"
|
||||
|
||||
is a community of Linux Nerds
|
||||
|
||||
**10.** 使用‘**\c**‘选项 - ‘**-e**‘后面跟上‘\c’会抑制输出后面的字符并且最后不会换新行。
|
||||
### **10.** 使用‘**\c**‘选项
|
||||
|
||||
‘**-e**‘后面跟上‘\c’会抑制输出后面的字符并且最后不会换新行。
|
||||
|
||||
$ echo -e "Tecmint is a community \cof Linux Nerds"
|
||||
|
||||
Tecmint is a community @tecmint:~$
|
||||
|
||||
**11.** ‘**-n**‘会在echo完后不会输出新行。
|
||||
### **11.** ‘**-n**‘选项会让echo在输出后不再输出新行
|
||||
|
||||
$ echo -n "Tecmint is a community of Linux Nerds"
|
||||
Tecmint is a community of Linux Nerds@tecmint:~/Documents$
|
||||
|
||||
**12.** 使用‘**\c**‘选项 - ‘**-e**‘后面跟上‘\a’选项会听到声音警告。
|
||||
### **12.** 使用‘**\a**‘选项
|
||||
|
||||
‘**-e**‘后面跟上‘\a’选项会听到声音警告。
|
||||
|
||||
$ echo -e "Tecmint is a community of \aLinux Nerds"
|
||||
Tecmint is a community of Linux Nerds
|
||||
|
||||
**注意:** 在你开始前,请先检查你的音量键。
|
||||
**注意:** 在你开始前,请先检查你的音量设置。
|
||||
|
||||
**13.** 使用echo命令打印所有的文件和文件夹(ls命令的替代)。
|
||||
### **13.** 使用echo命令打印所有的文件和文件夹(ls命令的替代)
|
||||
|
||||
$ echo *
|
||||
|
||||
103.odt 103.pdf 104.odt 104.pdf 105.odt 105.pdf 106.odt 106.pdf 107.odt 107.pdf 108a.odt 108.odt 108.pdf 109.odt 109.pdf 110b.odt 110.odt 110.pdf 111.odt 111.pdf 112.odt 112.pdf 113.odt linux-headers-3.16.0-customkernel_1_amd64.deb linux-image-3.16.0-customkernel_1_amd64.deb network.jpeg
|
||||
|
||||
**14.** 打印制定的文件类型。比如,让我们假设你想要打印所有的‘**.jpeg**‘文件,使用下面的命令。
|
||||
### **14.** 打印指定的文件类型
|
||||
|
||||
比如,让我们假设你想要打印所有的‘**.jpeg**‘文件,使用下面的命令。
|
||||
|
||||
$ echo *.jpeg
|
||||
|
||||
network.jpeg
|
||||
|
||||
**15.** echo可以使用重定向符来输出到一个文件而不是标准输出。
|
||||
### **15.** echo可以使用重定向符来输出到一个文件而不是标准输出
|
||||
|
||||
$ echo "Test Page" > testpage
|
||||
|
||||
@ -142,7 +162,7 @@ echo命令的语法是:
|
||||
avi@tecmint:~$ cat testpage
|
||||
Test Page
|
||||
|
||||
### echo 选项 ###
|
||||
### echo 选项列表 ###
|
||||
|
||||
<table border="0" cellspacing="0">
|
||||
<colgroup width="85"></colgroup>
|
||||
@ -187,14 +207,15 @@ echo命令的语法是:
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
就是这些了,不要忘记在下面留下你有价值的反馈。
|
||||
就是这些了,不要忘记在下面留下你的反馈。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/echo-command-in-linux/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,4 +1,4 @@
|
||||
Linux有问必答——如何显示Linux网桥的MAC学习表
|
||||
Linux有问必答:如何显示Linux网桥的MAC学习表
|
||||
================================================================================
|
||||
|
||||
> **问题**:我想要检查一下我用brctl工具创建的Linux网桥的MAC地址学习状态。请问,我要怎样才能查看Linux网桥的MAC学习表(或者转发表)?
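作为示意(网桥名br0为假设),在装有brctl(bridge-utils)的系统上一般可以这样查看网桥的MAC学习表:

    $ sudo brctl showmacs br0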
|
||||
@ -18,6 +18,6 @@ Linux网桥是网桥的软件实现,这是Linux内核的内核部分。与硬
|
||||
via: http://ask.xmodulo.com/show-mac-learning-table-linux-bridge.html
|
||||
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
@ -1,20 +1,20 @@
|
||||
Linus Torvalds推动Linux的桌面与嵌入式计算的发展
|
||||
Linus Torvalds 希望推动Linux在桌面和嵌入式计算方面共同发展
|
||||
================================================================================
|
||||
> Linux的内核开发者和开源领袖Linus Torvalds最近表达了关于Linux桌面和嵌入式设备中Linux的未来的看法。
|
||||
> Linux的内核开发者和开源领袖Linus Torvalds前一段时间表达了关于Linux桌面和嵌入式设备中Linux的未来的看法。
|
||||
|
||||
![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2014/08/linus-torvalds-1.jpg)
|
||||
|
||||
什么是Linux桌面和嵌入式设备中Linux的未来?这是个值得讨论的问题,不过Linux的创始人和开源巨人Linus Torvalds在最近一届 [Linux 基金会][1] 的LinuxCon大会上,在一次对话中表达了一些有趣的观点。
|
||||
|
||||
作为敲出第一版Linux内核代码并且在1991年将它们共享在互联网上的家伙,Torvalds毫无疑问是开源软件甚至是任何软件中最著名的开发者,如今他依然活跃在其中。在此期间,Torvalds是许多人和组织中唯一一个引领着Linux发展的个体,它的观点往往能影响着开源社区,而且,作为一个内核开发者的角色赋予了他能决定哪些特点和代码能被放进操作系统内部的强大权利。
|
||||
作为敲出第一版Linux内核代码并且在1991年将它们共享在互联网上的家伙,Torvalds毫无疑问是开源软件甚至是所有软件中最著名的开发者,如今他依然活跃在其中。在众多引领着Linux发展的个人和组织当中,Torvalds始终是独特的一位,他的观点往往能影响开源社区,而且,内核开发者这一角色赋予了他决定哪些特性和代码能被放进操作系统内部的强大权力。
|
||||
|
||||
所以说,关注Torvalds所说的话是很值得的, "我还是挺想要桌面的。" [上周他在LinuxCon大会上这样说道][2] 那标志着他仍然着眼于作为使个人机更加强大的操作系统Linux的未来,尽管十年来Linux桌面市场的分享一直很少,而且大部分围绕Linux的商业活动都去涉及服务器或者安卓手机硬件去了。
|
||||
所以说,关注Torvalds所说的话是很值得的, "我还是挺想要桌面的。" [他在上月的LinuxCon大会上这样说道][2] 那表明他仍然着眼于作为使PC更加强大的操作系统Linux的未来,尽管十年来Linux桌面市场的份额一直很少,而且大部分围绕Linux的商业活动都去涉及服务器或者安卓手机去了。
|
||||
|
||||
但是,Torvalds还说,确保Linux桌面能有个宏伟的未来意味着解决了受阻的 “基础设施问题”,好像庞大的开源软件生态系统和硬件世界让他充满信心。这不是Linux核心代码本身的问题,而是要让Linux桌面渠道友好,这可能是伟大的Torvalds和他开发同伴们所需要花精力去达到的目标。这取决于app的开发者、硬件制造商和其它有志于实现人们能方便使用基于Linux的计算平台的各方力量。
|
||||
但是,Torvalds还说,确保Linux桌面能有个宏伟的未来意味着解决了受阻的 “基础设施问题”,庞大的开源软件生态系统和硬件世界让他充满信心。这不是Linux核心代码本身的问题,而是要让Linux桌面渠道友好,这可能是伟大的Torvalds和他开发同伴们所需要花精力去达到的目标。这取决于app的开发者、硬件制造商和其它有志于实现人们能方便使用基于Linux的计算平台的各方力量。
|
||||
|
||||
另一方面,Torvalds也提到了他的憧憬,就是内核开发者们能简化嵌入式装置中的Linux代码——一个在让内核更加桌面友好化上会导致很多分歧的任务。但这也不一定,因为无论如何,Linux都是以模块化设计的,单内核代码库不能同时满足桌面用户和嵌入式开发者的需求,这是没有道理的,因为这取决于他们使用的模块。
|
||||
另一方面,Torvalds也提到了他的憧憬,就是内核开发者们能简化嵌入式装置中的Linux代码——这也许和让Linux内核更加桌面友好化的任务有所分歧。但这也不一定,因为无论如何,Linux都是以模块化设计的,单内核代码库不能同时满足桌面用户和嵌入式开发者的需求,这是没有道理的,因为这取决于他们使用的模块。
|
||||
|
||||
作为一个长时间想看到更多搭载Linux的嵌入式设备出现的Linux桌面用户,我希望Torvalds的所有愿望都可以实现,到那时我就能只用Liunx来做所有我想做的事情,无论是在电脑桌面上、手机上、车上,或者是任何其它的地方。
|
||||
作为一个一直想看到更多搭载Linux的嵌入式设备出现的Linux桌面用户,我希望Torvalds的所有愿望都可以实现,到那时我就可以只用Linux来做所有我想做的事情,无论是在电脑桌面上、手机上、车上,或者是任何其它的地方。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -22,7 +22,7 @@ via: http://thevarguy.com/open-source-application-software-companies/082514/linu
|
||||
|
||||
作者:[Christopher Tozzi][a]
|
||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
92
published/20140904 Making MySQL Better at GitHub.md
Normal file
@ -0,0 +1,92 @@
|
||||
GitHub 是如何迁移 MySQL 集群的
|
||||
================================================================================
|
||||
> 在 GitHub 我们总是说“如果网站响应速度不够快,我们就不应该让它上线运营”。我们之前在[前端的体验速度][1]这篇文章中介绍了一些提高网站响应速率的方法,但这只是故事的一部分。真正影响到 GitHub.com 性能的因素是 MySQL 数据库架构。让我们来瞧瞧我们的基础架构团队是如何无缝升级了 MySQL 架构吧,这事儿发生在去年8月份,成果就是大大提高了 GitHub 网站的速度。
|
||||
|
||||
### 任务 ###
|
||||
|
||||
去年我们把 GitHub 上的大部分数据移到了新的数据中心,这个中心有世界顶级的硬件资源和网络平台。自从使用了 MySQL 作为我们的后端系统的基础,我们一直期望着一些改进来大大提高数据库性能,但是在数据中心使用全新的硬件来部署一套全新的集群环境并不是一件简单的工作,所以我们制定了一套计划和测试工作,以便数据能平滑过渡到新环境。
|
||||
|
||||
### 准备工作 ###
|
||||
|
||||
像我们这种关于架构上的巨大改变,在执行的每一步都需要收集数据指标。新机器上安装好了基本的操作系统,接下来就是测试新配置下的各种性能。为了模拟真实的工作负载环境,我们使用 tcpdump 工具从旧的集群那里复制正在发生的 SELECT 请求,并在新集群上重新回放一遍。
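作为示意(并非 GitHub 实际使用的命令,网卡名、端口和文件名均为假设),用 tcpdump 抓取 MySQL 流量大致如下:

    # 完整抓取发往 3306 端口的 MySQL 流量,保存为 pcap 文件供后续分析和回放
    $ sudo tcpdump -i eth0 -s 0 -w mysql-select.pcap 'tcp port 3306'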
|
||||
|
||||
MySQL 调优是个繁琐的细致活,像众所周知的 innodb_buffer_pool_size 这个参数往往能对 MySQL 性能产生巨大的影响。对于这类参数,我们必须考虑在内,所以我们列了一份参数清单,包括 innodb_thread_concurrency,innodb_io_capacity,和 innodb_buffer_pool_instances,还有其它的。
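作为示意(数值均为假设,并非 GitHub 的实际配置),这类参数通常写在 my.cnf 的 [mysqld] 段里:

    [mysqld]
    # 缓冲池相关:大小与实例数按机器内存情况调整
    innodb_buffer_pool_size      = 96G
    innodb_buffer_pool_instances = 8
    # 并发与 IO 能力:0 表示不限制并发线程数
    innodb_thread_concurrency    = 0
    innodb_io_capacity           = 2000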
|
||||
|
||||
在每次测试中,我们都很小心地只改变一个参数,并且让一次测试至少运行12小时。我们会观察响应时间的变化曲线,每秒的响应次数,以及有可能会导致并发性降低的参数。我们使用 “SHOW ENGINE INNODB STATUS” 命令打印 InnoDB 性能信息,特别观察了 “SEMAPHORES” 一节的内容,它为我们提供了工作负载的状态信息。
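文中提到的这条命令本身如下(\G 让 mysql 客户端按节竖排输出,便于查看其中的 SEMAPHORES 一节):

    mysql> SHOW ENGINE INNODB STATUS\G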
|
||||
|
||||
当我们在设置参数后对运行结果感到满意,然后就开始将我们最大的数据表格之一迁移到一套独立的集群上,这个步骤作为整个迁移过程的早期测试,以保证我们的核心集群有更多的缓存池空间,并且为故障切换和存储功能提供更强的灵活性。这步初始迁移方案也引入了一个有趣的挑战:我们必须维持多条客户连接,并且要将这些连接指向到正确的集群上。
|
||||
|
||||
除了硬件性能的提升,还需要补充一点,我们同时也对处理进程和拓扑结构进行了改进:我们添加了延时拷贝技术,更快、更高频地备份数据,以及更多的读拷贝空间。这些功能已经准备上线。
|
||||
|
||||
### 列出任务清单,三思后行 ###
|
||||
|
||||
每天有上百万用户的使用 GitHub.com,我们不可能有机会等没有人用了才进行实际数据切换。我们有一个详细的[任务清单][2]来执行迁移:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4116929/13fc6f50-328b-11e4-837b-922aad3055a8.png)
|
||||
|
||||
我们还规划了一个维护期,并且[在我们的博客中通知了大家][3],让用户注意到这件事情。
|
||||
|
||||
### 迁移时间到 ###
|
||||
|
||||
太平洋时间星期六上午5点,我们的迁移团队上线集合对话,同时数据迁移正式开始:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060850/39f52cd4-2df3-11e4-9aca-1f54a4870d24.png)
|
||||
|
||||
我们将 GitHub 网站设置为维护模式,并在 Twitter 上发表声明,然后开始按上述任务清单的步骤开始工作:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060864/54ff6bac-2df3-11e4-95da-b059c0ec668f.png)
|
||||
|
||||
**13 分钟**后,我们确保新的集群能正常工作:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060870/6a4c0060-2df3-11e4-8dab-654562fe628d.png)
|
||||
|
||||
然后我们让 GitHub.com 脱离维护模式,并且让全世界的用户都知道我们的最新状态:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060878/79b9884c-2df3-11e4-98ed-d11818c8915a.png)
|
||||
|
||||
大量前期的测试工作与准备工作,让我们将维护期缩到最短。
|
||||
|
||||
### 检验最终的成果 ###
|
||||
|
||||
在接下来的几周时间里,我们密切监视着 GitHub.com 的性能和响应时间。我们发现迁移后网站的平均加载时间减少一半,并且在99%的时间里,能减少*三分之二*:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060886/9106e54e-2df3-11e4-8fda-a4c64c229ba1.png)
|
||||
|
||||
### 我们学到了什么 ###
|
||||
|
||||
#### 功能划分 ####
|
||||
|
||||
在迁移过程中,我们采用了一个比较好的方法是:将大的数据表(主要记录了一些历史数据)先迁移过去,空出旧集群的磁盘空间和缓存池空间。这一步给我们留下了更多的资源用于“热”数据,将一些连接请求分离到多套集群里面。这步为我们之后的胜利奠定了基础,我们以后还会使用这种模式来进行迁移工作。
|
||||
|
||||
#### 测试测试测试 ####
|
||||
|
||||
为你的应用做验收测试和回归测试,越多越好,多多益善,不要嫌多。从老集群复制数据到新集群的过程中,如果进行验收测试和响应状态测试,得到的数据是不准的,如果数据不理想,这是正常的,不要惊讶,不要试图拿这些数据去分析原因。
|
||||
|
||||
#### 合作的力量 ####
|
||||
|
||||
对基础架构进行大的改变,通常需要涉及到很多人,我们要像一个团队一样为共同的目标而合作。我们的团队成员来自全球各地。
|
||||
|
||||
团队成员地图:
|
||||
|
||||
https://render.githubusercontent.com/view/geojson?url=https://gist.githubusercontent.com/anonymous/5fa29a7ccbd0101630da/raw/map.geojson
|
||||
|
||||
本次合作新创了一种工作流程:我们提交更改(pull request),获取实时反馈,查看修改了错误的 commit —— 全程没有电话交流或面对面的会议。当所有东西都可以通过 URL 提供信息,不同区域的人群之间的交流和反馈会变得非常简单。
|
||||
|
||||
### 一年后…… ###
|
||||
|
||||
整整一年时间过去了,我们很高兴地宣布这次数据迁移是很成功的 —— MySQL 性能和可靠性一直处于我们期望的状态。另外,新的集群还能让我们进一步去升级,提供更好的可靠性和响应时间。我将继续记录这些优化过程。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/blog/1880-making-mysql-better-at-github
|
||||
|
||||
作者:[samlambert][a]
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/samlambert
|
||||
[1]:https://github.com/blog/1756-optimizing-large-selector-sets
|
||||
[2]:https://help.github.com/articles/writing-on-github#task-lists
|
||||
[3]:https://github.com/blog/1603-site-maintenance-august-31st-2013
|
@ -1,8 +1,8 @@
|
||||
Linux有问必答——如何在CentOS或RHEL 7上修改主机名
|
||||
Linux有问必答:如何在CentOS或RHEL 7上修改主机名
|
||||
================================================================================
|
||||
> 问题:在CentOS/RHEL 7上修改主机名的正确方法是什么(永久或临时)?
|
||||
|
||||
在CentOS或RHEL中,有三种定义的主机名:(1)静态的(2)瞬态的,以及(3)灵活的。“静态”主机名也称为内核主机名,是系统在启动时从/etc/hostname自动初始化的主机名。“瞬态”主机名是在系统运行时临时分配的主机名,例如,通过DHCP或mDNS服务器分配。静态主机名和瞬态主机名都遵从作为互联网域名同样的字符限制规则。而另一方面,“灵活”主机名则允许使用自由形式(包括特殊/空白字符)的主机名,以展示给终端用户(如Dan's Computer)。
|
||||
在CentOS或RHEL中,有三种定义的主机名:a、静态的(static),b、瞬态的(transient),以及 c、灵活的(pretty)。“静态”主机名也称为内核主机名,是系统在启动时从/etc/hostname自动初始化的主机名。“瞬态”主机名是在系统运行时临时分配的主机名,例如,通过DHCP或mDNS服务器分配。静态主机名和瞬态主机名都遵从作为互联网域名同样的字符限制规则。而另一方面,“灵活”主机名则允许使用自由形式(包括特殊/空白字符)的主机名,以展示给终端用户(如Dan's Computer)。
|
||||
|
||||
在CentOS/RHEL 7中,有个叫hostnamectl的命令行工具,它允许你查看或修改与主机名相关的配置。
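作为示意(主机名为假设),hostnamectl 的基本用法大致如下:

    # 查看当前的静态、瞬态和灵活主机名
    $ hostnamectl status

    # 一次性修改三种主机名
    $ sudo hostnamectl set-hostname linux.example.com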
|
||||
|
||||
@ -22,7 +22,7 @@ Linux有问必答——如何在CentOS或RHEL 7上修改主机名
|
||||
|
||||
![](https://farm4.staticflickr.com/3855/15113489172_4e25ac87fa_z.jpg)
|
||||
|
||||
就像上面展示的那样,在修改静态/瞬态主机名时,任何特殊字符或空白字符会被移除,而提供的参数中的任何大写字母会自动转化为小写。一旦修改了静态主机名,/etc/hostname将被自动更新。然而,/etc/hosts不会更新来回应所做的修改,所以你需要手动更新/etc/hosts。
|
||||
就像上面展示的那样,在修改静态/瞬态主机名时,任何特殊字符或空白字符会被移除,而提供的参数中的任何大写字母会自动转化为小写。一旦修改了静态主机名,/etc/hostname 将被自动更新。然而,/etc/hosts 不会更新以保存所做的修改,所以你需要手动更新/etc/hosts。
|
||||
|
||||
如果你只想修改特定的主机名(静态,瞬态或灵活),你可以使用“--static”,“--transient”或“--pretty”选项。
|
||||
|
@ -0,0 +1,30 @@
|
||||
Linux 有问必答:如何在Perl中捕捉并处理信号
|
||||
================================================================================
|
||||
> **提问**: 我需要通过使用Perl的自定义信号处理程序来处理一个中断信号。在一般情况下,我怎么在Perl程序中捕获并处理各种信号(如INT,TERM)?
|
||||
|
||||
信号是POSIX标准定义的一种异步通知机制,操作系统通过向进程发送信号来通知它发生了某个事件。当信号产生时,操作系统会中断目标程序的执行,并把该信号交给该程序的信号处理函数处理。你可以定义并注册自己的信号处理程序,也可以使用默认的信号处理程序。
|
||||
|
||||
在Perl中,信号可以被捕获,并由一个全局的%SIG哈希变量指定处理函数。这个%SIG哈希变量的键名是信号值,键值是对应的信号处理程序的引用。因此,如果你想为特定的信号定义自己的信号处理程序,你可以直接在%SIG中设置信号的哈希值。
|
||||
|
||||
下面的代码段使用自定义的信号处理程序来处理中断(INT)和终止(TERM)信号。
|
||||
|
||||
$SIG{INT} = \&signal_handler;
|
||||
$SIG{TERM} = \&signal_handler;
|
||||
|
||||
sub signal_handler {
|
||||
print "This is a custom signal handler\n";
|
||||
die "Caught a signal $!";
|
||||
}
|
||||
|
||||
![](https://farm4.staticflickr.com/3910/15141131060_f7958f20fb.jpg)
|
||||
|
||||
%SIG其他的可用的键值有'IGNORE'和'DEFAULT'。当所指定的键值是'IGNORE'(例如,$SIG{CHLD}='IGNORE')时,相应的信号将被忽略。指定'DEFAULT'的键值(例如,$SIG{HUP}='DEFAULT'),意味着我们将使用一个(系统)默认的信号处理程序。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/catch-handle-interrupt-signal-perl.html
|
||||
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
@ -0,0 +1,51 @@
|
||||
Oracle Linux 5.11更新了其Unbreakable Linux内核
|
||||
================================================================================
|
||||
> 此版本更新了很多软件包
|
||||
|
||||
![This is the last release for this branch](http://i1-news.softpedia-static.com/images/news2/Oracle-Linux-5-11-Features-Updated-Unbreakable-Linux-Kernel-460129-2.jpg)
|
||||
|
||||
这是这个分支的最后一个版本更新(随同 RHEL 5.11的落幕,CentOS 和 Oracle Linux 的5.x 系列也纷纷释出该系列的最后版本)。
|
||||
|
||||
>**甲骨文公司宣布,Oracle Linux 5.11版已提供下载,但是这是企业版,需要用户注册才能下载。**
|
||||
|
||||
这个新的Oracle Linux是这个系列的最后一次更新。该系统基于Red Hat和该公司最近推送的RHEL 5.x分支更新,这意味着这也是Oracle此产品线的最后一次更新。
|
||||
|
||||
Oracle Linux还带来了一系列有趣的功能,就像一个名为Ksplice的零宕机内核更新,它最初是针对openSUSE,包括Oracle数据库和Oracle应用软件开发的,它们在基于x86的Oracle系统中使用。
|
||||
|
||||
### Oracle Linux有哪些特别的 ###
|
||||
|
||||
尽管Oracle Linux基于红帽,它的开发者曾经举出了很多你不应该使用RHEL的原因。理由有很多,但最主要的是,任何人都可以下载Oracle Linux(注册后),而RHEL实际上限制了非付费会员下载。
|
||||
|
||||
开发者在其网站上说:“为企业应用和系统提供先进的可扩展性和可靠性,Oracle Linux提供了极高的性能,并且在采用x86架构的Oracle工程系统中使用。Oracle Linux是免费使用,免费派发,免费更新,并可轻松下载。它是唯一带来生产中零宕机补丁Oracle Ksplice支持的Linux发行版,允许客户无需重启而部署安全或者其他更新,并且同时提供诊断功能来调试生产系统中的内核问题。”
|
||||
|
||||
Oracle Linux其中一个最有趣且独一无二的功能是其Unbreakable Kernel(坚不可摧的内核)。这是它的开发者实际使用的名称。它基于来自3.0.36分支的旧Linux内核。用户还可以使用红帽兼容内核(内核2.6.18-398.el5),这在发行版中默认提供。
|
||||
|
||||
此外,Oracle Linux Release 5.11企业版内核提供了对大量硬件和设备的支持,但这个最新的更新带来了更好的支持。
|
||||
|
||||
您可以查看Oracle Linux 5.11全部[发布通告][1],这可能需要花费一些时间去读。
|
||||
|
||||
你也可以从下面下载Oracle Linux 5.11:
|
||||
|
||||
- [Oracle Enterprise Linux 6.5 (ISO) 64-bit][2]
|
||||
- [Oracle Enterprise Linux 6.5 (ISO) 32-bit][3]
|
||||
- [Oracle Enterprise Linux 7.0 (ISO) 64-bit][4]
|
||||
- [Oracle Enterprise Linux 5.11 (ISO) 64-bit][5]
|
||||
- [Oracle Enterprise Linux 5.11 (ISO) 32-bit][6]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://news.softpedia.com/news/Oracle-Linux-5-11-Features-Updated-Unbreakable-Linux-Kernel-460129.shtml
|
||||
|
||||
作者:[Silviu Stahie][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
|
||||
[1]:https://oss.oracle.com/ol5/docs/RELEASE-NOTES-U11-en.html#Kernel_and_Driver_Updates
|
||||
[2]:http://mirrors.dotsrc.org/oracle-linux/OL6/U5/i386/OracleLinux-R6-U5-Server-i386-dvd.iso
|
||||
[3]:http://mirrors.dotsrc.org/oracle-linux/OL6/U5/x86_64/OracleLinux-R6-U5-Server-x86_64-dvd.iso
|
||||
[4]:https://edelivery.oracle.com/linux/
|
||||
[5]:http://ftp5.gwdg.de/pub/linux/oracle/EL5/U11/x86_64/Enterprise-R5-U11-Server-x86_64-dvd.iso
|
||||
[6]:http://ftp5.gwdg.de/pub/linux/oracle/EL5/U11/i386/Enterprise-R5-U11-Server-i386-dvd.iso
|
@ -1,12 +1,12 @@
|
||||
从命令行访问Linux命令小抄
|
||||
================================================================================
|
||||
Linux命令行的强大在于其灵活及多样化,各个Linux命令都带有它自己那部分命令行选项和参数。混合并匹配它们,甚至还可以通过管道和重定向来联结不同的命令。理论上讲,你可以借助几个基本的命令来产生数以百计的使用案例。甚至对于浸淫多年的管理员而言,也难以完全使用它们。那正是命令行小抄成为我们救命稻草的一刻。
|
||||
Linux命令行的强大在于其灵活及多样化,各个Linux命令都带有它自己专属的命令行选项和参数。混合并匹配这些命令,甚至还可以通过管道和重定向来联结不同的命令。理论上讲,你可以借助几个基本的命令来产生数以百计的使用案例。甚至对于浸淫多年的管理员而言,也难以完全使用它们。那正是命令行小抄成为我们救命稻草的一刻。
|
||||
|
||||
[![](https://farm6.staticflickr.com/5562/14752051134_5a7c3d2aa4_z.jpg)][1]
|
||||
|
||||
我知道联机手册页仍然是我们的良师益友,但我们想通过我们能自行支配的快速参考卡让这一切更为高效和有目的性。最终极的小抄可能被自豪地挂在你的办公室里,也可能作为PDF文件隐秘地存储在你的硬盘上,或者甚至设置成了你的桌面背景图。
|
||||
我知道联机手册页(man)仍然是我们的良师益友,但我们想通过我们能自行支配的快速参考卡让这一切更为高效和有目的性。最终极的小抄可能被自豪地挂在你的办公室里,也可能作为PDF文件隐秘地存储在你的硬盘上,或者甚至设置成了你的桌面背景图。
|
||||
|
||||
最为一个选择,也可以通过另外一个命令来访问你最爱的命令行小抄。那就是,使用[cheat][2]。这是一个命令行工具,它可以让你从命令行读取、创建或更新小抄。这个想法很简单,不过cheat经证明是十分有用的。本教程主要介绍Linux下cheat命令的使用方法。你不需要为cheat命令做个小抄了,它真的很简单。
|
||||
作为另一种选择,也可以通过另外一个命令来访问你最爱的命令行小抄。那就是,使用[cheat][2]。这是一个命令行工具,它可以让你从命令行读取、创建或更新小抄。这个想法很简单,不过cheat已被证明是十分有用的。本教程主要介绍Linux下cheat命令的使用方法。你不需要为cheat命令做个小抄了,它真的很简单。
|
||||
|
||||
### 安装Cheat到Linux ###
|
||||
|
||||
@ -59,9 +59,9 @@ cheat命令一个很酷的事是,它自带有超过90个的常用Linux命令
|
||||
|
||||
$ cheat -s <keyword>
|
||||
|
||||
在许多情况下,小抄适用于那些正派的人,而对其他某些人却没什么帮助。要想让内建的小抄更具个性化,cheat命令也允许你创建新的小抄,或者更新现存的那些。要这么做的话,cheat命令也会帮你在本地~/.cheat目录中保存一份小抄的副本。
|
||||
在许多情况下,小抄适用于某些人,而对另外一些人却没什么帮助。要想让内建的小抄更具个性化,cheat命令也允许你创建新的小抄,或者更新现存的那些。要这么做的话,cheat命令也会帮你在本地~/.cheat目录中保存一份小抄的副本。
|
||||
|
||||
要使用cheat的编辑功能,首先确保EDITOR环境变量设置为了你默认编辑器所在位置的完整路径。然后,复制(不可编辑)内建小抄到~/.cheat目录。你可以通过下面的命令找到内建小抄所在的位置。一旦你找到了它们的位置,只不过是将它们拷贝到~/.cheat目录。
|
||||
要使用cheat的编辑功能,首先确保EDITOR环境变量设置为你默认编辑器所在位置的完整路径。然后,复制(不可编辑)内建小抄到~/.cheat目录。你可以通过下面的命令找到内建小抄所在的位置。一旦你找到了它们的位置,只不过是将它们拷贝到~/.cheat目录。
|
||||
|
||||
$ cheat -d
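作为示意(编辑器路径和内建小抄所在目录均为假设,实际位置以上面 cheat -d 的输出为准),设置编辑器并复制小抄的过程大致如下:

    # 将默认编辑器设置为 vim(写入 ~/.bashrc 可永久生效)
    $ export EDITOR=/usr/bin/vim

    # 把内建小抄复制到个人小抄目录
    $ mkdir -p ~/.cheat
    $ cp /usr/lib/python2.7/site-packages/cheat/cheatsheets/* ~/.cheat/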
|
||||
|
||||
@ -85,7 +85,7 @@ via: http://xmodulo.com/2014/07/access-linux-command-cheat-sheets-command-line.h
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,14 +1,14 @@
|
||||
在哪儿以及怎么写代码:选择最好的免费代码编辑器
|
||||
何处写,如何写:选择最好的免费在线代码编辑器
|
||||
================================================================================
|
||||
深入了解一下Cloud9,Koding和Nitrous.IO。
|
||||
> 深入了解一下Cloud9,Koding和Nitrous.IO。
|
||||
|
||||
![](http://a2.files.readwrite.com/image/upload/c_fill,h_900,q_70,w_1600/MTIzMDQ5NjYzODM4NDU1MzA4.jpg)
|
||||
|
||||
**已经准备好开始你的第一个编程项目了吗?很好!只要配置一下**终端或命令行,学习如何使用并安装所有要用到的编程语言,插件库和API函数库。当最终准备好一切以后,再安装好[Visual Studio][1]就可以开始了,然后才可以预览自己的工作。
|
||||
已经准备好开始你的第一个编程项目了吗?很好!只要配置一下终端或命令行,学习如何使用它,然后安装所有要用到的编程语言,插件库和API函数库。当最终准备好一切以后,再安装好[Visual Studio][1]就可以开始了,然后才可以预览自己的工作。
|
||||
|
||||
至少这是大家过去已经熟悉的方式。
|
||||
|
||||
也难怪初学程序员们逐渐喜欢上在线集成开发环境(IDE)了。IDE是一个代码编辑器,不过已经准备好编程语言以及所有需要的依赖,可以让你避免把它们一一安装到电脑上的麻烦。
|
||||
也难怪初学程序员们逐渐喜欢上在线的集成开发环境(IDE)了。IDE是一个代码编辑器,不过已经准备好编程语言以及所有需要的依赖,可以让你避免把它们一一安装到电脑上的麻烦。
|
||||
|
||||
我想搞清楚到底是哪些因素能组成一个典型的IDE,所以我试用了一下免费级别的时下最受欢迎的三款集成开发环境:[Cloud9][2],[Koding][3]和[Nitrous.IO][4]。在这个过程中,我了解了许多程序员应该或不应该使用IDE的各种情形。
|
||||
|
||||
@ -16,7 +16,7 @@
|
||||
|
||||
假如有一个像Microsoft Word那样的文字编辑器,想想类似Google Drive那样的IDE吧。你可以拥有类似的功能,但是它还能支持从任意电脑上访问,还能随时共享。因为因特网在项目工作流中的影响已经越来越重要,IDE也让生活更轻松。
|
||||
|
||||
在我最近的一篇ReadWrite教程中我使用了Nitrous.IO,这是在文章[创建一个你自己的像Yo那样的极端简单的聊天应用][5]里的一个Python应用。当使用IDE的时候,你只要选择你要用的编程语言,然后通过IDE特别设计用来运行这种语言程序的虚拟机(VM),你就可以测试和预览你的应用了。
|
||||
在我最近的一篇ReadWrite教程中我使用了Nitrous.IO,这是在文章“[创建一个你自己的像Yo那样的极端简单的聊天应用][5]”里的一个Python应用。当使用IDE的时候,你只要选择你要用的编程语言,然后通过IDE特别为运行这种语言程序而设计的虚拟机(VM),你就可以测试和预览你的应用了。
|
||||
|
||||
如果你读过那篇教程,就会知道我的那个应用只用到了两个API库-信息服务Twilio和Python微框架Flask。在我的电脑上就算是使用文字编辑器和终端来做也是很简单的,不过我选择使用IDE还有一个方便的地方:如果大家都使用同样的开发环境,跟着教程一步步走下去就更简单了。
|
||||
|
||||
@ -28,7 +28,7 @@
|
||||
|
||||
但是不能用IDE来永久存储你的整个项目。把帖子保存在Google Drive文件中不会让你的博客丢失。类似Google Drive,IDE可以让你创建链接用于共享内容,但是任何一个都还不足以替代真正的托管服务器。
|
||||
|
||||
还有,IDE并不是设计成方便广泛共享。尽管各种IDE都在不断改善大多数文字编辑器的预览功能,还只能用来给你的朋友或同事展示一下应用预览,而不是,比如说,类似Hacker News的主页。那样的话,占用太多带宽的IDE也许会让你崩溃。
|
||||
还有,IDE并不是为大规模共享而设计的。尽管各种IDE的预览功能比大多数文字编辑器都有所改进,它也只适合给你的朋友或同事展示一下应用的预览,而不是拿来应付比如Hacker News的首页。真到那种时候,巨大的带宽需求很可能会让你的IDE崩溃。
|
||||
|
||||
这样说吧:IDE只是构建和测试你的应用的地方,托管服务器才是它们生存的地方。所以一旦完成了你的应用,你会希望把它布置到能长期托管的云服务器上,最好是能免费托管的那种,例如[Heroku][6]。
|
||||
|
||||
@ -44,7 +44,7 @@
|
||||
|
||||
当我完成了Cloud9的注册后,它提示的第一件事情就是添加我的GitHub和BitBucket账号。马上,所有我的GitHub项目,个人的和协作的,都可以直接克隆到本地并使用Cloud9的开发工具开始工作。其他的IDE在和GitHub集成的方面都没有达到这种水准。
|
||||
|
||||
在我测试的这三款IDE中,Cloud9看起来更加侧重于一个可以让协同工作的人们无缝衔接工作的环境。在这里,它并不是角落里放个聊天窗口。实际上,按照CEO Ruben Daniels说的,试用Cloud9的协作者可以互相看到其他人实时的编码情况,就像Google Drive上的合作者那样。
|
||||
在我测试的这三款IDE中,Cloud9看起来更加侧重于一个可以让协同工作的人们无缝衔接工作的环境。在这里,它并不是角落里放个聊天窗口。实际上,按照其CEO Ruben Daniels说的,试用Cloud9的协作者可以互相看到其他人实时的编码情况,就像Google Drive上的合作者那样。
|
||||
|
||||
“大多数IDE服务的协同功能只能操作单一文件”,Daniels说,“而我们的产品可以支持整个项目中的不同文件。协同功能被完美集成到了我们的IDE中。”
|
||||
|
||||
@ -58,15 +58,15 @@ IDE可以提供你所需的工具来构建和测试所有开源编程语言的
|
||||
|
||||
### Nitrous.IO:随处可用的IDE ###
|
||||
|
||||
相对于自己的桌面环境,使用IDE的最大优势是它是自包含的。你不需要安装任何其他的就可以使用。而另一方面,使用自己的桌面环境的最大优势就是你可以在本地工作,甚至在没有互联网的情况下。
|
||||
相对于自己的桌面环境,使用IDE的最大优势是它是自足的。你不需要安装任何其他的东西就可以使用。而另一方面,使用自己的桌面环境的最大优势就是你可以在本地工作,甚至在没有互联网的情况下。
|
||||
|
||||
Nitrous.IO结合了这两个优势。你可以在网站上在线使用这个IDE,你也可以把它下载到自己的饿电脑上,共同创始人AJ Solimine这样说。优点是你可以结合Nitrous的集成性和你最喜欢的文字编辑器的熟悉。
|
||||
Nitrous.IO结合了这两个优势。“你可以在网站上在线使用这个IDE,你也可以把它下载到自己的电脑上”,其共同创始人AJ Solimine这样说。优点是你可以结合Nitrous的集成性和你最喜欢的文字编辑器的熟悉。
|
||||
|
||||
他说:“你可以使用任意当代浏览器访问Nitrous.IO的在线IDE网站,但我们仍然提供了方便的Windows和Mac桌面应用,可以让你使用你最喜欢的编辑器来写代码。”
|
||||
他说:“你可以使用任意现代浏览器访问Nitrous.IO的在线IDE网站,但我们仍然提供了方便的Windows和Mac桌面应用,可以让你使用你最喜欢的编辑器来写代码。”
|
||||
|
||||
### 底线 ###
|
||||
|
||||
这一个星期的[使用][7]三个不同IDE的最让我意外的收获?它们是如此相似。[当用来做最基本的代码编辑的时候][8],它们都一样的好用。
|
||||
这一个星期[使用][7]三个不同IDE的最让我意外的收获是什么?它们是如此相似。[当用来做最基本的代码编辑的时候][8],它们都一样的好用。
|
||||
|
||||
Cloud9,Koding,[和Nitrous.IO都支持][9]所有主流的开源编程语言,从Ruby到Python到PHP到HTML5。你可以选择任何一种VM。
|
||||
|
||||
@ -76,7 +76,7 @@ Cloud9和Nitrous.IO都实现了GitHub的一键集成。Koding需要[多几个步
|
||||
|
||||
不好的一面,它们都有相同的缺陷,不过考虑到它们都是免费的也还合理。你每次只能同时运行一个VM来测试特定编程语言写出的程序。而当你一段时间没有使用VM之后,IDE会把VM切换成休眠模式以节省带宽,而下次要用的时候就得等它重新加载(Cloud9在这一点上更加费力)。它们中也没有任何一个为已完成的项目提供像样的永久托管服务。
|
||||
|
||||
所以,对咨询我是否有一个完美的免费IDE的人,答案是可能没有。但是这也要看你侧重的地方,对你的某个项目来说也许有一个完美的IDE。
|
||||
所以,对咨询我是否有一个完美的免费IDE的人来说,答案是可能没有。但是这也要看你侧重的地方,对你的某个项目来说也许有一个完美的IDE。
|
||||
|
||||
图片由[Shutterstock][11]友情提供
|
||||
|
||||
@ -86,7 +86,7 @@ via: http://readwrite.com/2014/08/14/cloud9-koding-nitrousio-integrated-developm
|
||||
|
||||
作者:[Lauren Orsini][a]
|
||||
译者:[zpl1025](https://github.com/zpl1025)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,14 +1,16 @@
|
||||
8 Options to Trace/Debug Programs using Linux strace Command
|
||||
使用 Linux 的 strace 命令跟踪/调试程序的常用选项
|
||||
================================================================================
|
||||
|
||||
在调试的时候,strace能帮助你追踪到一个程序所执行的系统调用。当你想知道程序和操作系统如何交互的时候,这是极其方便的,比如你想知道执行了哪些系统调用,并且以何种顺序执行。
|
||||
|
||||
这个简单而又强大的工具几乎在所有的Linux操作系统上可用,并且可被用来调试大量的程序。
|
||||
|
||||
### 1. 命令用法 ###
|
||||
### 命令用法 ###
|
||||
|
||||
让我们看看strace命令如何追踪一个程序的执行情况。
|
||||
|
||||
最简单的形式,strace后面可以跟任何命令。它将列出许许多多的系统调用。一开始,我们并不能理解所有的输出,但是如果你正在寻找一些特殊的东西,那么你应该能从输出中发现它。
|
||||
|
||||
让我们来看看简单命令ls的系统调用跟踪情况。
|
||||
|
||||
raghu@raghu-Linoxide ~ $ strace ls
|
||||
@ -20,21 +22,22 @@
|
||||
![Strace write system call (ls)](http://linoxide.com/wp-content/uploads/2014/08/02.strace_ls_write.png)
|
||||
|
||||
上面的输出部分展示了write系统调用,它把当前目录的列表输出到标准输出。
|
||||
|
||||
下面的图片展示了使用ls命令列出的目录内容(没有使用strace)。
|
||||
|
||||
raghu@raghu-Linoxide ~ $ ls
|
||||
|
||||
![ls command output](http://linoxide.com/wp-content/uploads/2014/08/03.ls_.png)
|
||||
|
||||
#### 1.1 寻找被程序读取的配置文件 ####
|
||||
#### 选项1 寻找被程序读取的配置文件 ####
|
||||
|
||||
一个有用的跟踪(除了调试某些问题以外)是你能找到被一个程序读取的配置文件。例如,
|
||||
Strace 的用法之一(除了调试某些问题以外)是你能找到被一个程序读取的配置文件。例如,
|
||||
|
||||
raghu@raghu-Linoxide ~ $ strace php 2>&1 | grep php.ini
|
||||
|
||||
![Strace config file read by program](http://linoxide.com/wp-content/uploads/2014/08/04.strace_php_configuration.png)
|
||||
|
||||
#### 1.2 跟踪指定的系统调用 ####
|
||||
#### 选项2 跟踪指定的系统调用 ####
|
||||
|
||||
strace命令的-e选项仅仅被用来展示特定的系统调用(例如,open,write等等)
|
||||
|
||||
@ -44,7 +47,7 @@ strace命令的-e选项仅仅被用来展示特定的系统调用(例如,ope
|
||||
|
||||
![Stracing specific system call (open here)](http://linoxide.com/wp-content/uploads/2014/08/05.strace_open_systemcall.png)
|
||||
|
||||
#### 1.3 用于进程 ####
|
||||
#### 选项3 跟踪进程 ####
|
||||
|
||||
strace不但能用在命令上,而且通过使用-p选项能用在运行的进程上。
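作为示意(1234为假设的进程号,附加到其他用户的进程通常需要root权限):

    $ sudo strace -p 1234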
|
||||
|
||||
@ -52,15 +55,15 @@ strace不但能用在命令上,而且通过使用-p选项能用在运行的进
|
||||
|
||||
![Strace a process](http://linoxide.com/wp-content/uploads/2014/08/06.strace_process.png)
|
||||
|
||||
#### 1.4 strace的统计概要 ####
|
||||
#### 选项4 strace的统计概要 ####
|
||||
|
||||
包括系统调用的概要,执行时间,错误等等。使用-c选项能够以一种整洁的方式展示:
|
||||
它包括系统调用的概要,执行时间,错误等等。使用-c选项能够以一种整洁的方式展示:
|
||||
|
||||
raghu@raghu-Linoxide ~ $ strace -c ls
|
||||
|
||||
![Strace summary display](http://linoxide.com/wp-content/uploads/2014/08/07.strace_summary.png)
|
||||
|
||||
#### 1.5 保存输出结果 ####
|
||||
#### 选项5 保存输出结果 ####
|
||||
|
||||
通过使用-o选项可以把strace命令的输出结果保存到一个文件中。
|
||||
|
||||
@ -70,7 +73,7 @@ strace不但能用在命令上,而且通过使用-p选项能用在运行的进
|
||||
|
||||
之所以以sudo来运行上面的命令,是为了防止用户ID与所查看进程的所有者ID不匹配的情况。
|
||||
|
||||
### 1.6 显示时间戳 ###
|
||||
### 选项6 显示时间戳 ###
|
||||
|
||||
使用-t选项,可以在每行的输出之前添加时间戳。
|
||||
|
||||
@ -78,7 +81,7 @@ strace不但能用在命令上,而且通过使用-p选项能用在运行的进
|
||||
|
||||
![Timestamp before each output line](http://linoxide.com/wp-content/uploads/2014/08/09.strace_timestamp.png)
|
||||
|
||||
#### 1.7 更好的时间戳 ####
|
||||
#### 选项7 更精细的时间戳 ####
|
||||
|
||||
-tt选项可以展示微秒级别的时间戳。
|
||||
|
||||
@ -92,7 +95,7 @@ strace不但能用在命令上,而且通过使用-p选项能用在运行的进
|
||||
|
||||
![Seconds since epoch](http://linoxide.com/wp-content/uploads/2014/08/011.strace_epoch_seconds.png)
|
||||
|
||||
#### 1.8 Relative Time ####
|
||||
#### 选项8 相对时间 ####
|
||||
|
||||
-r选项展示系统调用之间的相对时间戳。
|
||||
|
||||
@ -106,7 +109,7 @@ via: http://linoxide.com/linux-command/linux-strace-command-examples/
|
||||
|
||||
作者:[Raghu][a]
|
||||
译者:[guodongxiaren](https://github.com/guodongxiaren)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -6,13 +6,13 @@
|
||||
|
||||
<blockquote><em>通过入会声明,任何人都能轻易加入“匿名者”组织。某人类学家称,组织成员会“根据影响程度对重大事件保持着不同关注,特别是那些能挑起强烈争端的事件”。</em></blockquote>
|
||||
|
||||
<small>布景:Jeff Nishinaka / 摄影:Scott Dunbar</small>
|
||||
<small>纸雕作品:Jeff Nishinaka / 摄影:Scott Dunbar</small>
|
||||
|
||||
<h2>1</h2>
|
||||
|
||||
<p>上世纪七十年代中期,当 Christopher Doyon 还是一个生活在缅因州乡村的孩童时,就终日泡在 CB radio 上与各种陌生人聊天。他的昵称是“大红”,因为他有一头红色的头发。Christopher Doyon 把发射机挂在了卧室的墙壁上,并且说服了父亲在自家屋顶安装了两根天线。CB radio 主要用于卡车司机间的联络,但 Doyon 和一些人却将之用于不久后出现在 Internet 上的虚拟社交——自定义昵称、成员间才懂的笑话,以及施行变革的强烈愿望。</p>
|
||||
<p>上世纪七十年代中期,当 Christopher Doyon 还是一个生活在缅因州乡村的孩童时,就终日泡在 CB radio 上与各种陌生人聊天。他的昵称是“Big red”(大红),因为他有一头红色的头发。Christopher Doyon 把发射机挂在了卧室的墙壁上,并且说服了父亲在自家屋顶安装了两根天线。CB radio 主要用于卡车司机间的联络,但 Doyon 和一些人却将之用于不久后出现在 Internet 上的虚拟社交——自定义昵称、成员间才懂的笑话,以及施行变革的强烈愿望。</p>
|
||||
|
||||
<p>Doyon 很小的时候母亲就去世了,兄妹二人由父亲抚养长大,他俩都说受到过父亲的虐待。由此 Doyon 在 CB radio 社区中找到了慰藉和归属感。他和他的朋友们轮流监听当地紧急事件频道。其中一个朋友的父亲买了一个气泡灯并安装在了他的车顶上;每当这个孩子收听到来自孤立无援的乘车人的求助后,都会开车载着所有人到求助者所在的公路旁。除了拨打 911 外他们基本没有什么可做的,但这足以让他们感觉自己成为了英雄。</p>
|
||||
<p>Doyon 很小的时候母亲就去世了,兄妹二人由父亲抚养长大,他俩都说受到过父亲的虐待。由此 Doyon 在 CB radio 社区中找到了慰藉和目标感。他和他的朋友们轮流监听当地紧急事件频道。其中一个朋友的父亲买了一个气泡灯并安装在了他的车顶上;每当这个孩子收听到来自孤立无援的乘车人的求助后,都会开车载着所有人到求助者所在的公路旁。除了拨打 911 外他们基本没有什么可做的,但这足以让他们感觉自己成为了英雄。</p>
|
||||
|
||||
<p>短小精悍的 Doyon 有着一口浓厚的新英格兰口音,并且非常喜欢《星际迷航》和阿西莫夫的小说。当他在《大众机械》上看到一则“组装你的专属个人计算机”构件广告时,就央求祖父给他买一套,接下来 Doyon 花了数月的时间把计算机组装起来并连接到 Internet 上去。与鲜为人知的 CB 电波相比,在线聊天室确实不可同日而语。“我只需要点一下按钮,再选中某个家伙的名字,然后我就可以和他聊天了,” Doyon 在最近回忆时说道,“这真的很惊人。”</p>
|
||||
|
||||
@ -22,11 +22,11 @@
|
||||
|
||||
<p>Doyon 深深地沉溺于计算机中,虽然他并不是一位专业的程序员。在过去一年的几次谈话中,他告诉我他将自己视为激进主义分子,继承了 Abbie Hoffman 和 Eldridge Cleaver 的激进传统;技术不过是他抗议的工具。八十年代,哈佛大学和麻省理工学院的学生们举行集会,强烈抗议他们的学校从南非撤资。为了帮助抗议者通过安全渠道进行交流,PLF 制作了无线电套装:移动调频发射器、伸缩式天线,还有麦克风,所有部件都内置于背包内。Willard Johnson,麻省理工学院的一位激进分子和政治学家,表示黑客们出席集会并不意味着一次变革。“我们的大部分工作仍然是通过扩音器来完成的,”他解释道。</p>
|
||||
|
||||
<p>1992 年,在 Grateful Dead 的一场印第安纳的演唱会上,Doyon 秘密地向一位瘾君子出售了 300 粒药。由此他被判决在印第安纳州立监狱服役十二年,后来改为五年。服役期间,他对宗教和哲学产生了浓厚的兴趣,并于鲍尔州立大学学习了相应课程。</p>
|
||||
<p>1992 年,在印第安纳的一场 Grateful Dead 的演唱会上,Doyon 秘密地向一位瘾君子出售了 300 粒药。由此他被判决在印第安纳州立监狱服役十二年,后来改为五年。服役期间,他对宗教和哲学产生了浓厚的兴趣,并于鲍尔州立大学学习了相应课程。</p>
|
||||
|
||||
<p>1994 年,第一款商业 Web 浏览器网景领航员正式发布,同一年 Doyon 被捕入狱。当他出狱并再次回到剑桥后,PLF 依然活跃着,并且他们的工具有了实质性的飞跃。Doyon 回忆起和他入狱之前的变化,“非常巨大——好比是‘烽火狼烟’跟‘电报传信’之间那么大的差距。”黑客们入侵了一个印度的军事网站,并修改其首页文字为“拯救克什米尔”。在塞尔维亚,黑客们攻陷了一个阿尔巴尼亚网站。Stefan Wray,一位早期网络激进主义分子,为一次纽约“反哥伦布日”集会上的黑客行径辩护。“我们视之为电子形式的公众抗议,”他告诉大家。</p>
|
||||
<p>1994 年,第一款商业 Web 浏览器 Netscape Navigator(网景领航员)正式发布,同一年 Doyon 被捕入狱。当他出狱并再次回到剑桥后,PLF 依然活跃着,并且他们的工具有了实质性的飞跃。Doyon 回忆起他和入狱之前对比的变化,“非常巨大——好比是‘烽火狼烟’跟‘电报传信’之间那么大的差距。”黑客们入侵了一个印度的军事网站,并修改其首页文字为“拯救克什米尔”。在塞尔维亚,黑客们攻陷了一个阿尔巴尼亚网站。Stefan Wray,一位早期网络激进主义分子,为一次纽约“反哥伦布日”集会上的黑客行径辩护。“我们视之为电子形式的公众抗议,”他告诉大家。</p>
|
||||
|
||||
<p>1999 年,美国唱片业协会因为版权侵犯问题起诉了 Napster,一款文件共享软件。最终,Napster 于 2001 年关闭。Doyon 与其他黑客使用分布式拒绝服务(Distributed Denial of Service,DDoS,使大量数据涌入网站导致其响应速度减缓直至奔溃)的手段,攻击了美国唱片业协会的网站,使之停运时间长达一星期之久。Doyon为自己的行为进行了辩解,并高度赞扬了其他的“黑客主义者”。“我们很快意识到保卫 Napster 的战争象征着保卫 Internet 自由的战争,”他在后来写道。</p>
|
||||
<p>1999 年,美国唱片业协会因为版权侵犯问题起诉了 Napster,一款文件共享服务。最终,Napster 于 2001 年关闭。Doyon 与其他黑客使用分布式拒绝服务(Distributed Denial of Service,DDoS,使大量数据涌入网站导致其响应速度减缓直至崩溃)的手段,攻击了美国唱片业协会的网站,使之停运时间长达一星期之久。Doyon为自己的行为进行了辩解,并高度赞扬了其他的“黑客主义者”。“我们很快意识到保卫 Napster 的战争象征着保卫 Internet 自由的战争,”他在后来写道。</p>
|
||||
|
||||
<p>2008 年的一天,Doyon 和 “Commander Adama” 在剑桥的 PLE 地下公寓相遇。Adama 当着 Doyon 的面点击了癫痫基金会的一个链接,与意料中将要打开的论坛不同,出现的是一连串闪烁的彩光。有些癫痫病患者对闪光灯非常敏感——这完全是出于恶意,有人想要在无辜群众中诱发癫痫病。已经出现了至少一名受害者。</p>
|
||||
|
||||
@ -42,69 +42,69 @@
|
||||
|
||||
<center><small>“我得谈谈我的感受。”</small></center>
|
||||
|
||||
<p>Poole 希望匿名这一举措可以延续社区的尖锐性因素。“我们无意参与理智的涉外事件讨论,”他在网站上写道。4chan 社区里最具价值的事之一便是寻求“挑起强烈的争端”(lulz),这个词源自缩写 LOL。Lulz 经常是通过分享充满孩子气的笑话或图片来实现的,它们中的大部分不是色情的就是下流的。其中最令人震惊的部分被贴在了网站的“/b/”版块上,这里的用户们称呼自己为“/b/tards”。Doyon 知道 4chan 这个社区,但他认为那些用户是“一群愚昧无知的顽童”。2004 年前后,/b/ 上的部分用户开始把“匿名者”视为一个独立的实体。</p>
|
||||
<p>Poole 希望匿名这一举措可以延续社区的尖锐性因素。“我们无意参与理智的涉外事件讨论,”他在网站上写道。4chan 社区里最具价值的事之一便是寻求“挑起强烈的争端”(lulz),这个词源自缩写 LOL。Lulz 经常是通过分享幼稚的笑话或图片来实现的,其中大部分不是色情的就是下流的。其中最令人震惊的部分被贴在了网站的“/b/”版块上,这里的用户们称呼自己为“/b/tards”。Doyon 知道 4chan 这个社区,但他认为它的用户是“一群愚昧无知的顽童”。2004 年前后,/b/ 上的部分用户开始把“匿名者”视为一个独立的实体。</p>
|
||||
|
||||
<p>这是一个全新的黑客团体。“这不是一个传统意义上的组织,”一位领导计算机安全工作的研究员 Mikko Hypponen 告诉我——倒不如,视之为一个非传统的亚文化群体。Barrett Brown,德克萨斯州的一名记者,同时也是众所周知的“匿名者”高层领导,把“匿名者”描述为“一连串前仆后继的伟大友谊”。无需任何会费或者入会仪式。任何想要加入“匿名者”组织,成为一名匿名者(Anon)的人,都可以通过简短的象征性的宣誓加入。</p>
|
||||
|
||||
<p>尽管 4chan 的关注焦点是一些琐碎的话题,但许多匿名者认为自己就是“正义的十字军”。如果网上有不良迹象出现,他们就会发起具有针对性的治安维护行动。不止一次,他们以未成年少女的身份套取恋童癖的私人信息,然后把这些信息交给警察局。其他匿名者则是政治的厌恶者,为了挑起争端想方设法散布混乱的信息。他们中的一些人在 /b/ 上发布看着像是雷管炸弹的图片;另一些则叫嚣着要炸毁足球场并因此被联邦调查局逮捕。2007 年,一家洛杉矶当地的新闻联盟机构称呼“匿名者”组织为“互联网负能量制造机”。</p>
|
||||
<p>尽管 4chan 的关注焦点是一些琐碎的话题,但许多匿名者认为自己就是“正义的十字军”。如果网上有不良迹象出现,他们就会发起具有针对性的治安维护行动。不止一次,他们以未成年少女的身份使恋童癖陷入圈套,然后把他们的个人信息交给警察局。其他匿名者则是政治的厌恶者,为了挑起争端想方设法散布混乱的信息。他们中的一些人在 /b/ 上发布看着像是雷管炸弹的图片;另一些则叫嚣着要炸毁足球场并因此被联邦调查局逮捕。2007 年,一家洛杉矶当地的新闻联盟机构称呼“匿名者”组织为“互联网负能量制造机”。</p>
|
||||
|
||||
<p>2008 年 1 月,Gawker Media 上传了一段关于汤姆克鲁斯大力吹捧山达基优点的视频。这段视频是受版权保护的,山达基教会致信 Gawker,勒令其删除这段视频。“匿名者”组织认为教会企图控制网络信息。“是时候让 /b/ 来干票大的了,”有人在 4chan 上写道。“我说的是‘入侵’或者‘攻陷’山达基官方网站。”一位匿名者使用 YouTube 放出一段“新闻稿”,其中包括暴雨云视频和经过计算机处理的语音。“我们要立刻把你们从 Internet 上赶出去,并且在现有规模上逐渐瓦解山达基教会,”那个声音说,“你们无处可躲。”不到一个星期,这段 YouTube 视频的点击率就超过了两百万次。</p>
|
||||
|
||||
<p>“匿名者”组织已经不仅限于 4chan 社区。黑客们在专用的互联网中继聊天(Internet Relay Chat channels,IRC 聊天室)频道内进行交流,协商策略。通过 DDoS 攻击手段,他们使山达基的主网站间歇性崩溃了好几天。匿名者们制造了“谷歌炸弹”,由此导致 “dangerous cult” 的搜索结果中的第一条结果就是山达基主网站。其余的匿名者向山达基的欧洲总部寄送了数以百计的披萨,并用大量全黑的传真单耗干了洛杉矶教会总部的传真机墨盒。山达基教会,据报道拥有超过十亿美元资产的组织,当然能经得起墨盒耗尽的考验。但山达基教会的高层可不这么认为,他们还收到了严厉的恐吓,由此他们不得不向 FBI 申请逮捕“匿名者”组织的成员。</p>
|
||||
<p>“匿名者”组织已经不仅限于 4chan 社区。黑客们在专用的互联网中继聊天(Internet Relay Chat channels,IRC 聊天室)频道内进行交流,协商策略。通过 DDoS 攻击手段,他们使山达基的主网站间歇性崩溃了好几天。匿名者们制造了“谷歌炸弹”,由此导致 “dangerous cult” 的搜索结果中的第一条结果就是山达基主网站。其余的匿名者向山达基的欧洲总部寄送了数以百计的披萨,并用大量全黑的传真单耗干了洛杉矶教会总部的传真机墨盒。山达基教会,据报道是一个拥有超过十亿美元资产的组织,当然能经得起墨盒耗尽的考验。但山达基教会的高层可不这么认为,他们还收到了死亡恐吓,由此他们不得不向 FBI 申请调查“匿名者”组织的成员。</p>
|
||||
|
||||
<p>2008 年 3 月 15 日,在从伦敦到悉尼的一百多个城市里,数以千计匿名者们游行示威山达基教会。为了切合“匿名”这个主题,组织者下令所有的抗议者都应该佩戴相同的面具。深思熟虑过蝙蝠侠后,他们选定了 2005 年上映的反乌托邦电影《 V 字仇杀队》中 Guy Fawkes 的面具。“在每个大城市里都能以很便宜的价格大量购买,”广为人知的匿名者、游行组织者之一 Gregg Housh 告诉我说道。漫画式的面具上是一个的脸颊红润的男人,八字胡,有着灿烂的笑容。</p>
|
||||
|
||||
<p>匿名者们并未“瓦解”山达基教会。并且汤姆克鲁斯的那段视频任然保留在网络上。匿名者们证明了自己的顽强。组织选择了一个相当浮夸的口号:“我们是一体。绝不宽恕。永不遗忘。相信我们。”(We are Legion. We do not forgive. We do not forget. Expect us.)</p>
|
||||
<p>匿名者们并未“瓦解”山达基教会。并且汤姆克鲁斯的那段视频仍然保留在网络上。匿名者们证明了自己的顽强。组织选择了一个相当浮夸的口号:“我们是军团。绝不宽恕。永不遗忘。等待我们。”(We are Legion. We do not forgive. We do not forget. Expect us.)</p>
|
||||
|
||||
<h2>3</h2>
|
||||
|
||||
<p>2010 年,Doyon 搬到了加利福尼亚州的圣克鲁斯,并加入了当地的“和平阵营”组织。利用从木材堆置场偷来的木头,他在山上盖起了一间简陋的小屋,“借用”附近住宅的 WiFi,使用太阳能电池板发电,并通过贩卖种植的大麻换取现金。</p>
|
||||
|
||||
<p>与此同时,“和平阵营”维权者们每天晚上开始在公共场所休息,以此抗议圣克鲁斯政府此前颁布的“流浪者管理法案”,他们认为这项法案严重侵犯了流浪者的生存权。Doyon 出席了“和平阵营”的会议,并在网上发起了抗议活动。他留着蓬乱的红色山羊胡,戴一顶米黄色软呢帽,像军人那样不知疲倦。因此维权者们送给了他“罪恶制裁克里斯”的称呼。</p>
|
||||
<p>与此同时,“和平阵营”维权者们每天晚上开始在公共场所休息,以此抗议圣克鲁斯政府此前颁布的“流浪者管理法案”,他们认为这项法案严重侵犯了流浪者的生存权。Doyon 出席了“和平阵营”的会议,并在网上发起了抗议活动。他留着蓬乱的红色山羊胡,戴一顶米黄色软呢帽,类似军服的服装。因此维权者们送给了他“罪恶制裁克里斯”的称呼。</p>
|
||||
|
||||
<p>“和平阵营”的成员之一 Kelley Landaker 曾几次和 Doyong 讨论入侵事宜。Doyon 有时会吹嘘自己的技术是多么的厉害,但作为一名资深程序员的 Landaker 却不为所动。“他说得很棒,但却不是行动派的,”Landaker 告诉我。不过在那种场合下,的确更需要一位富有激情的领导者,而不是埋头苦干的技术员。“他非常热情并且坦率,”另一位成员 Robert Norse 如是对我说。“他创造出了大量的能够吸引媒体眼球的话题。我从事这行已经二十年了,在这一点上他比我见过的任何人都要厉害。”</p>
|
||||
<p>“和平阵营”的成员之一 Kelley Landaker 曾几次和 Doyong 讨论入侵事宜。Doyon 有时会吹嘘自己的技术是多么的厉害,但作为一名资深程序员的 Landaker 却不为所动。“他说得很棒,但却不是行动派,”Landaker 告诉我。不过在那种场合下,的确更需要一位富有激情的领导者,而不是埋头苦干的技术员。“他非常热情并且坦率,”另一位成员 Robert Norse 如是对我说。“他创造出了大量的能够吸引媒体眼球的话题。我从事这行已经二十年了,在这一点上他比我见过的任何人都要厉害。”</p>
|
||||
|
||||
<p>Doyon 在 PLF 的上司,Commander Adama 仍然住在剑桥,并且通过电子邮件和 Doyon 保持着联络,他下令让 Doyon 潜入“匿名者”组织。以此获知其运作方式,并伺机为 PLF 招募新成员。因为癫痫基金会网站入侵事件的那段不愉快回忆,Doyon 拒绝了 Adama。Adama 给 Doyon 解释说,在“匿名者”组织里不怀好意的黑客只占极少数,与此相反,这个组织经常会有一些的轰动世界举动。Doyon 对这点表示怀疑。“4chan 怎么可能会轰动世界?”他质问道。但出于对 PLF 的忠诚,他还是答应了 Adama 的请求。</p>
|
||||
<p>Doyon 在 PLF 的上司,Commander Adama 仍然住在剑桥,并且通过电子邮件和 Doyon 保持着联络,他下令让 Doyon 监视“匿名者”组织,以此获知其运作方式,并伺机为 PLF 招募新成员。因为癫痫基金会网站入侵事件的那段不愉快回忆,Doyon 拒绝了 Adama。Adama 给 Doyon 解释说,在“匿名者”组织里不怀好意的黑客只占极少数,与此相反,这个组织经常会有一些的轰动世界举动。Doyon 对这点表示怀疑。“4chan 怎么可能会有轰动世界的大举动?”他质问道。但出于对 PLF 的忠诚,他还是答应了 Adama 的请求。</p>
|
||||
|
||||
<p>Doyon 经常带着一台宏基笔记本电脑出入于圣克鲁斯的一家名为 Coffee Roasting Company 的咖啡厅。“匿名者”组织的 IRC 聊天室主频道无需密码就能进入。Doyon 使用 PLF 的昵称进行登录并加入了聊天室。一段时间后,他发现了组织内大量的专用匿名者行动聊天频道,这些频道的规模更小,并相互重复。要想参与行动,你必须知道行动的专用聊天频道名称,并且聊天频道随时会因为陌生的闯入者而进行变更。这套交流系统并不具备较高的安全系数,但它的确很凑效。“这些专用行动聊天频道确保了行动机密的高度集中,”麦吉尔大学的人类学家 Gabriella Coleman 告诉我。</p>
|
||||
<p>Doyon 经常带着一台宏基笔记本电脑出入于圣克鲁斯的一家名为 Coffee Roasting Company 的咖啡厅。“匿名者”组织的 IRC 聊天室主频道无需密码就能进入。Doyon 使用 PLF 的昵称进行登录并加入了聊天室。一段时间后,他发现了组织内大量的专用匿名者行动聊天频道,这些频道规模更小、更加专门,参与的匿名者之间互有重叠。要想参与行动,你必须知道行动的专用聊天频道名称,并且聊天频道随时会因为陌生的闯入者而进行变更。这套交流系统并不具备较高的安全系数,但它的确很奏效。“这些专用行动聊天频道确保了行动机密的高度集中,”麦吉尔大学的人类学家 Gabriella Coleman 告诉我。</p>
|
||||
|
||||
<p>有些匿名者提议了一项行动,名为“反击行动”。如同新闻记者 Parmy Olson 于 2012 年在书中写道的,“我们是匿名者,”这项行动成为了又一次支援文件共享网站,如 Napster 的后继者海盗湾(Pirate Bay),的行动的前奏,但随后其目标却扩展到了政治领域。2010 年末,在美国国务院的要求下,包括万事达、Visa、PayPal 在内的几家公司终止了对维基解密,一家公布了成百上千份外交文件的民间组织,的捐助。在一段网络视频中,“匿名者”组织扬言要进行报复,发誓会对那些阻碍维基解密发展的公司进行惩罚。Doyon 被这种抗议企业的精神所吸引,决定参加这次行动。</p>
|
||||
<p>有些匿名者提议了一项行动,名为“反击行动”。如同新闻记者 Parmy Olson 在其 2012 年出版的《我们是匿名者》(We Are Anonymous)一书中写道的,这项行动最初是为了又一次声援文件共享网站(Napster 的后继者海盗湾,Pirate Bay)而发起的,但随后其目标却扩展到了政治领域。2010 年末,在美国国务院的要求下,包括万事达、Visa、PayPal 在内的几家公司终止了对维基解密的捐助,维基解密是一家公布了成百上千份外交文件的自发性组织。在一段在线视频中,“匿名者”组织扬言要进行报复,发誓会对那些阻碍维基解密发展的公司进行惩罚。Doyon 被这种抗议企业的精神所吸引,决定参加这次行动。</p>
|
||||
|
||||
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18473-600.jpg" /></center>
|
||||
|
||||
<center><small>潘多拉的魔盒</small></center>
|
||||
|
||||
<p>在十二月初的“反击行动”中,“匿名者”组织指导那些新成员,或者说新兵,关于“如何他【哔~】加入组织”,教程中提到“首先配置你【哔~】的网络,这他【哔~】的很重要。”同时他们被要求下载“低轨道离子炮”,一款易于使用的开源软件。Doyon 下载了软件并在聊天室内等待着下一步指示。当开始的指令发出后,数千名匿名者将同时发动进攻。Doyon 收到了含有目标网址的指令——目标是,www.visa.com——同时,在软件的右上角有个按钮,上面写着“IMMA CHARGIN MAH LAZER.”(“反击行动”同时也发动了大量的复杂精密的入侵进攻。)几天后,“反击行动”攻陷了万事达、Visa、PayPal 公司的主页。在法院的控告单上,PayPal 称这次攻击给公司造成了 550 万美元的损失。</p>
|
||||
<p>在十二月初的“反击行动”中,“匿名者”组织指导那些新成员,或者说新兵,去看标题为“如何加入那个【哔~】的Hive”,参与者被要求“首先配置他们【哔~】的网络,这【哔~】的很重要。”同时他们被要求下载“低轨道离子炮”,一款易于使用的开源软件。Doyon 下载了软件并在聊天室内等待着下一步指示。当开始的指令发出后,数千名匿名者将同时发动进攻。Doyon 进入了目标网址——www.visa.com——同时,在软件的右上角有个按钮,上面写着“IMMA CHARGIN MAH LAZER.”(“反击行动”同时也发动了大量的复杂精密的入侵进攻。)几天后,“反击行动”攻陷了万事达、Visa、PayPal 公司的主页。在法院的控告单上,PayPal 称这次攻击给公司造成了 550 万美元的损失。</p>
|
||||
|
||||
<p>但对 Doyon 来说,这是切实的激进主义体现。在剑桥反对种族隔离的行动中,他不能立即看到结果;而现在,只需指尖轻轻一点,就可以在攻陷大公司网站的行动中做出自己的贡献。隔天,赫芬顿邮报上出现了“万事达沦陷”的醒目标题。一位得意洋洋的匿名者发推特道:“有些事情维基解密是无能为力的。但这些事情却可以由‘反击行动’来完成。”</p>
|
||||
<p>但对 Doyon 来说,这是切实的激进主义体现。在剑桥反对种族隔离的行动中,他无法立即看到成效;而现在,只需指尖轻轻一点,就可以在攻陷大公司网站的行动中做出自己的贡献。隔天,赫芬顿邮报上出现了“万事达沦陷”的醒目标题。一位得意洋洋的匿名者发推特道:“有些事情维基解密是无能为力的。但这些事情却可以由‘反击行动’来完成。”</p>
|
||||
|
||||
<h2>4</h2>
|
||||
|
||||
<p>2010 年的秋天,“和平阵营”的抗议活动终止,政府只做出了轻微的让步,“流浪者管理法案”仍然有效。Doyon 希望通过借助“匿名者”组织的方略扭转局势。他回忆当时自己的想法,“也许我可以发动‘匿名者’组织来教训这种看似不堪一击的市政府网站,这些人绝对会【哔~】地赞同我的提议。最终我们将使得市政府永久性的废除‘流浪者管理法案’。”</p>
|
||||
<p>2010 年的秋天,“和平阵营”的抗议活动终止,政府只做出了略微让步,“流浪者管理法案”仍然有效。Doyon 希望通过借助“匿名者”组织的方略扭转局势。他回忆当时自己的想法,“也许我可以发动‘匿名者’组织来教训这种看似不堪一击的市政府网站,它们绝对会【哔~】地沦陷。最终我们使得市政府永久性废除‘流浪者管理法案’。”</p>
|
||||
|
||||
<p>Joshua Covelli 是一位 25 岁的匿名者,他的昵称是“Absolem”,他非常钦佩 Doyon 的果敢。“现在我们的组织完全是他【哔~】各种混乱的一盘散沙,”Covelli 告诉我道。在“Commander X”加入之后,“组织似乎开始变得有模有样了。”Covelli 的工作是俄亥俄州费尔伯恩的一所大学接待员,他从不了解任何有关圣克鲁斯的政治。但是当 Doyon 提及帮助“和平阵营”抗击活动的计划后,Covelli 立即回复了一封表示赞同的电子邮件:“我期待这样的行动很久了。”</p>
|
||||
<p>Joshua Covelli 是一位 25 岁的匿名者,他的昵称是“Absolem”,他非常钦佩 Doyon 的果敢。“过去我们的组织完全是各种混乱的一盘散沙,”Covelli 告诉我。在“Commander X”加入之后,“组织似乎开始变得有模有样了。”Covelli 的工作是俄亥俄州费尔伯恩的一所大学接待员,他从不了解任何有关圣克鲁斯的政治。但是当 Doyon 提及帮助“和平阵营”抗击活动的计划后,Covelli 立即回复了一封表示赞同的电子邮件:“我期待参加这样的行动已经很久了。”</p>
|
||||
|
||||
<p>Doyon 使用 PLF 的昵称邀请 Covelli 在 IRC 聊天室进行了一次秘密谈话:</p>
|
||||
|
||||
<blockquote>Absolem:抱歉,有个比较冒犯的问题...请问 PLF 也是组织的一员吗?</blockquote>
|
||||
<blockquote>Absolem:抱歉,有个比较冒犯的问题...请问 PLF 是组织的一部分还是分开的?</blockquote>
|
||||
|
||||
<blockquote>Absolem:我会这么问,是因为我在频道里看过你的聊天记录,你像是一名训练有素的黑客,不太像是来自组织里的成员。</blockquote>
|
||||
<blockquote>Absolem:我会这么问,是因为看你们聊天,觉得你们都是非常有组织的。</blockquote>
|
||||
|
||||
<blockquote>PLF:不不不,你的问题一点也不冒犯。很高兴遇到你。PLF 是一个来自波士顿的黑客组织,已经成立 22 年了。我在 1981 年就开始了我的黑客生涯,但那时我并没有使用计算机,而是使用的 PBX(Private Branch Exchange,电话交换机)。</blockquote>
|
||||
|
||||
<blockquote>PLF:我们组织内所有成员的年龄都超过了 40 岁。我们当中有退伍士兵和学者。并且我们的成员“Commander Adama”,正在躲避一大帮警察还有间谍的追捕。</blockquote>
|
||||
|
||||
<blockquote>Absolem:听起来很棒!我对这次行动很感兴趣,不知道我是否可以提供一些帮助,我们的组织实在是太混乱了。我的电脑技术还不错,但我在入侵技术上还完全是一个新手。我有一些小工具,但不知道怎么去使用它们。</blockquote>
|
||||
<blockquote>Absolem:听起来很棒!我对这次行动很感兴趣,不过“匿名者”组织看起来太混乱无序,不知道我是否可以提供一些帮助。我的电脑技术还不错,但我在入侵技术上还完全是一个新手。我有一些小工具,但不知道怎么去使用它们。</blockquote>
|
||||
|
||||
<p>庄重的入会仪式后,Doyon 正式接纳 Covelli 加入 PLF:</p>
|
||||
|
||||
<blockquote>PLF:把所有可能对你不利的【哔~】敏感文件加密。</blockquote>
|
||||
<blockquote>PLF:把所有可能使你受牵连的敏感文件加密。</blockquote>
|
||||
|
||||
<blockquote>PLF:还有,想要联系任何一位 PLF 成员的话,给我发消息就行。从现在起,请叫我... Commander X。</blockquote>
<p>2012 年,美联社称“匿名者”组织为“一伙训练有素的黑客”;Quinn Norton 在《连线》杂志上发文称“‘匿名者’组织可以入侵任何坚不可摧的网站”,并在文末赞扬他们为“一群卓越的民间黑客”。事实上,有些匿名者的确是很有天赋的程序员,但绝大部分成员根本不懂任何技术。人类学家 Coleman 告诉我只有大约五分之一的匿名者是真正的黑客——其他匿名者则是“极客与抗议者”。</p>
<p>2012 年,美联社称“匿名者”组织为“一帮专家级的黑客”;Quinn Norton 在《连线》杂志上发文称“‘匿名者’组织可以入侵任何坚不可摧的网站”,并在文末赞扬他们为“一群卓越的民间黑客”。事实上,有些匿名者的确是很有天赋的程序员,但绝大部分成员根本不懂任何技术。人类学家 Coleman 告诉我只有大约五分之一的匿名者是真正的黑客——其他匿名者则是“极客与抗议者”。</p>
<p>2010 年 12 月 16 日,Doyon 以 Commander X 的身份向几名记者发送了电子邮件。“明天当地时间 12:00 的时候,‘人民解放阵线’组织与‘匿名者’组织将大举进攻圣克鲁斯政府网站”,他在邮件中写道,“12:30 之后我们将恢复网站的正常运行。”</p>
<p>2010 年 12 月 16 日,Doyon 以 Commander X 的身份向几名记者发送了电子邮件。“明天当地时间 12:00 的时候,‘人民解放阵线’组织与‘匿名者’组织将从互联网中删除圣克鲁斯政府网站”,他在邮件中写道,“12:30 之后我们将恢复网站的正常运行。”</p>
<p>圣克鲁斯数据中心的工作人员收到了警告,匆忙地准备应对攻击。他们在服务器上运行起安全扫描软件,并向当地的互联网供应商 AT & T 求助,后者建议他们向 FBI 报警。</p>
@ -132,7 +132,7 @@
|
||||
|
||||
<center><small>“Zach 很聪明... 并且... 是一个天才... 但.. 你们... 不在一个班。”</small></center>
|
||||
|
||||
<p>Doyon 引用了一句电影台词。“拼命地跑,”他说。“我会躲起来,尽可能保持我的行动自由,用尽全力和这帮杂种们作斗争。”Frey 给了他两张 20 美元的钞票并祝他好运。</p>
|
||||
<p>Doyon 引用了一句电影台词。“拼命地跑,”他说。“我会躲起来,尽可能保持我的行动自由,用尽全力和这帮混蛋们作斗争。”Frey 给了他两张 20 美元的钞票并祝他好运。</p>
|
||||
|
||||
<h2>5</h2>
|
||||
|
||||
@ -142,35 +142,35 @@
|
||||
|
||||
<p>“突尼斯,” Brown 答道。</p>
|
||||
|
||||
<p>“我知道,那是中东地区的一个国家,” Doyon 继续问,“然后呢?”</p>
|
||||
<p>“我知道,那是中东地区的一个国家,” Doyon 继续问,“具体任务是什么呢?”</p>
|
||||
|
||||
<p>“我们准备打倒那里的独裁者,” Brown 再次答道。</p>
|
||||
|
||||
<p>“啊?!那里有一位独裁者吗?” Doyon 有点惊讶。</p>
|
||||
|
||||
<p>几天后,“突尼斯行动”正式展开。Doyon 作为参与者向突尼斯政府域名下的电子邮箱发送了大量的垃圾邮件,以此阻塞其服务器。“我会提前写好关于那次行动邮件,接着一次又一次地把它们发送出去,” Doyon 说,“有时候实在没有时间,我就只简短的写上一句问候对方母亲的的话,然后发送出去。”短短一天时间里,匿名者们就攻陷了包括突尼斯证券交易所、工业部、总统办公室、总理办公室在内的多个网站。他们把总统办公室网站的首页替换成了一艘海盗船的图片,并配以文字“‘报复’是个贱人,不是吗?”</p>
|
||||
<p>几天后,“突尼斯行动”正式展开。Doyon 作为参与者向突尼斯政府域名下的电子邮箱发送了大量的垃圾邮件,以此阻塞其服务器。“我会提前写好关于那次行动的邮件,接着一次又一次地把它们发送出去,” Doyon 说,“有时候实在没有时间,我就只简短地写上一句‘问候对方母亲’的话,然后发送出去。”短短一天时间里,匿名者们就攻陷了包括突尼斯证券交易所、工业部、总统办公室、总理办公室在内的多个网站。他们把总统办公室网站的首页替换成了一艘海盗船的图片,并配以文字“恶有恶报,不是吗?”</p>
|
||||
|
||||
<p>Doyon 不时会谈起他的网上“战斗”经历,似乎他刚从弹坑里爬出来一样。“伙计,自从干了这行我就变黑了,”他向我诉苦道。“你看我的脸,全是抽烟的时候熏的——而且可能已经粘在我的脸上了。我仔细地照过镜子,毫不夸张地说我简直就是一头棕熊。”很多个夜晚,Doyon 都是在 Golden Gate 公园里露营过夜的。“我就那样干了四天,我看了看镜子里的‘我’,感觉还可以——但其实我觉得‘我’也许应该去吃点东西、洗个澡了。”</p>
|
||||
|
||||
<p>“匿名者”组织接着又在 YouTube 上声明了将要进行的一系列行动:“利比亚行动”、“巴林行动”、“摩洛哥行动”。作为解放广场事件的抗议者,Doyon 参与了“埃及行动”。在 Facebook 针对这次行动的宣传专页中,有一个为当地示威者准备的“行动套装”链接。“行动套装”通过文件共享网站 Megaupload 进行分发,其中含有一份加密软件以及应对瓦斯袭击的保护措施。并且在不久后,埃及政府关闭了埃及的所有互联网及子网络的时候,继续向当地抗议者们提供连接网络的方法。</p>
|
||||
<p>“匿名者”组织接着又在 YouTube 上声明了将要进行的一系列行动:“利比亚行动”、“巴林行动”、“摩洛哥行动”。为声援解放广场的抗议者,Doyon 参与了“埃及行动”。在 Facebook 针对这次行动的宣传专页中,有一个为当地示威者准备的“行动套装”链接。“行动套装”通过文件共享网站 Megaupload 进行分发,其中含有加密软件以及应对瓦斯袭击的防护指南。在埃及政府切断了全国所有互联网接入后不久,“匿名者”组织还继续向当地抗议者们提供连接网络的方法。</p>
|
||||
|
||||
<p>2011 年夏季,Doyon 接替 Adama 成为 PLF 的最高指挥官。Doyon 招募了六个新成员,并力图发展 PLF 成为“匿名者”组织的中坚力量。Covelli 成为了他的其中一技位术顾问。另一名黑客 Crypt0nymous 负责在 YouTube 上发布视频;其余的人负责研究以及组装电子设备。与松散的“匿名者”组织不同,PLF 内部有一套极其严格的管理体系。“Commander X 事必躬亲,”Covelli 说。“这是他的行事风格,也许不能称之为一种风格。”一位创立了 AnonInsiders 博客的黑客通过加密聊天告诉我,他认为 Doyon 总是一意孤行——这在“匿名者”组织中是很罕见的现象。“当我们策划发起一项行动时,他并不在乎其他人是否同意,”这位黑客补充道,“他会一个人列出行动方案,确定攻击目标,登录 IRC 聊天室,接着告诉所有人在哪里‘碰头’,然后发起 DDoS 攻击。”</p>
|
||||
<p>2011 年夏季,Doyon 接替 Adama 成为 PLF 的最高指挥官。Doyon 招募了六个新成员,并力图发展 PLF 成为“匿名者”组织的中坚力量。Covelli 成为了他的其中一位技术顾问。另一名黑客 Crypt0nymous 负责在 YouTube 上发布视频;其余的人负责研究以及组装电子设备。与松散的“匿名者”组织不同,PLF 内部有一套极其严格的管理体系。“Commander X 事必躬亲,”Covelli 说。“这是他的行事风格,要么不做,要么做好。”一位创立了 AnonInsiders 博客的黑客通过加密聊天告诉我,他认为 Doyon 总是一意孤行——这在“匿名者”组织中是很罕见的现象。“当我们策划发起一项行动时,他并不在乎其他人是否同意,”这位黑客补充道,“他会一个人列出行动方案,确定攻击目标,登录 IRC 聊天室,接着告诉所有人在哪里‘碰头’,然后发起 DDoS 攻击。”</p>
|
||||
|
||||
<p>一些匿名者把 PLF 视为可有可无的部分,认为 Doyon 的所作所为完全是个天大的笑柄。“他是因为吹牛出名的,”另一名昵称为 Tflow 的匿名者 Mustafa Al-Bassam 告诉我。不过,即使是那些极度反感 Doyon 的狂妄自大的人,也不得不承认他在“匿名者”组织发展过程中的重要性。“他所倡导的强硬路线有时很凑效,有时则完全不起作用,” Gregg Housh 说,并且补充道自己和其他优秀的匿名者都曾遇到过相同的问题。</p>
|
||||
<p>一些匿名者把 PLF 视为“面子项目”,认为 Doyon 的所作所为完全是个笑柄。“他是因为吹牛出名的,”另一名昵称为 Tflow 的匿名者 Mustafa Al-Bassam 告诉我。不过,即使是那些极度反感 Doyon 的狂妄自大的人,也不得不承认他在“匿名者”组织发展过程中的重要性。“他所倡导的强硬路线有时很奏效,有时则是碍事,” Gregg Housh 说,并且补充道自己和其他优秀的匿名者都曾遇到过相同的问题。</p>
|
||||
|
||||
<p>“匿名者”组织对外坚持声称自己是不分层次的平等组织。在由 Brian Knappenberger 制作的一部纪录片,《我们是一个团体》中,一名成员使用“一群鸟”来比喻组织,它们轮流领飞带动整个组织不断前行。Gabriella Coleman 告诉我,这个比喻不太切合实际,“匿名者”组织内实际上早就出现了一个非正式的领导阶层。“领导者非常重要,”她说。“有四五个人可以看做是我们的领头羊。”她把 Doyon 也算在了其中。但是匿名者们仍然倾向于反抗这种具有体系的组织结构。在一本即将出版的关于“匿名者”组织的书,《黑客、骗子、告密者、间谍》中,Coleman 这么写道,在匿名者中,“成员个体以及那些特立独行的人依然在一些重大事件上保持着服从的态度,优先考虑集体——特别是那些能引发强烈争端的事件。”</p>
|
||||
<p>“匿名者”组织对外坚持声称自己是不分层次的平等组织。在由 Brian Knappenberger 制作的一部纪录片,《我们是军团》中,一名成员使用“一群鸟”来比喻组织,它们轮流领飞带动整个组织不断前行。Gabriella Coleman 告诉我,这个比喻不太切合实际,“匿名者”组织内实际上早就出现了一个非正式的领导阶层。“领导者非常重要,”她说。“有四五个人可以看做是我们的领头羊。”她把 Doyon 也算在了其中。但是匿名者们仍然倾向于反抗这种体制结构。在一本即将出版的关于“匿名者”组织的书,《黑客、骗子、告密者、间谍》中,Coleman 这么写道,在匿名者中,“成员个体以及那些特立独行的人依然在一些重大事件上保持着服从的态度,优先考虑集体——特别是那些能引发强烈争端的事件。”</p>
|
||||
|
||||
<p>匿名者们谑称那些特立独行的成员为“自尊心超强的疯子”和“想让自己出名的疯子”。(不过许多匿名者已经不会再随便给他人取那种具有冒犯性的称号了。)与一般的看法相反,“真正违反这条规矩的成员少得令人惊讶,”Coleman 说。“这么做的人,像 Commander X 这样的,都会在组织里受到排斥。”去年,在一家网络论坛上,有人写道,“当他开始把自己比作‘蝙蝠侠’的时候我就不想理他了。”</p>
|
||||
|
||||
<p>Peter Fein 是一位以 n0pants 为昵称而出名的网络激进分子,也是众多反对 Doyon 浮夸行为的匿名者之一。Fein 浏览了 PLF 的网站,其封面上有一个徽章,还有关于组织的宣言——“为了解放众多人类的灵魂而不断战斗”。Fein 沮丧地发现,Doyon 早就用真名注册了这个网站,这让他和其他想找麻烦的匿名者有机可乘。“如果有人要对我的网站进行 DDoS 攻击,那完全可以,” Fein 回想起通过私密聊天告诉 Doyon 时的情景,“但如果你要这么做了的话,我会揍扁你的屁股。”</p>
|
||||
|
||||
<p>2011 年 2 月 5 日,《金融时报》报道了在一家名为 HBGary Federal 的网络安全公司里,首席执行官 HBGary Federal 已经得到了“匿名者”组织骨干成员名单的消息。Barr 的调查结果表明,三位最高领导人其中之一就是‘ Commander X’,这位潜伏在加利福尼亚州的黑客有能力“策划一些大型网络攻击事件”。Barr 联系了 FBI 并提交了自己的调查结果。</p>
|
||||
<p>2011 年 2 月 5 日,《金融时报》报道称,一家名为 HBGary Federal 的网络安全公司的首席执行官 Aaron Barr 已经得到了“匿名者”组织骨干成员的名单。Barr 的调查结果表明,三位最高领导人其中之一就是‘Commander X’,这是一位潜伏在加利福尼亚州的黑客,有能力“策划一些大型网络攻击事件”。Barr 联系了 FBI 并提交了自己的调查结果。</p>
|
||||
|
||||
<p>和 Fein 一样,Barr 也发现了 PLF 网站的注册法人名为 Christopher Doyon,地址是 Haight 大街。基于 Facebook 和 IRC 聊天室的调查,Barr 断定‘ Commander X’的真实身份是一名家庭住址在 Haight 大街附近的网络激进分子 Benjamin Spock de Vries。Barr 通过 Facebook 和 de Vries 取得了联系。“请告诉组织里的普通阶层,我并不是来抓你们的,” Barr 留言道,“只是想让‘领导阶层’知晓我的意图。”</p>
|
||||
<p>和 Fein 一样,Barr 也发现了 PLF 网站的注册法人名为 Christopher Doyon,地址是 Haight 大街。基于 Facebook 和 IRC 聊天室的调查,Barr 断定‘ Commander X’的真实身份是一名家庭住址在 Haight 大街附近的网络激进分子 Benjamin Spock de Vries。Barr 通过 Facebook 和 de Vries 取得了联系。“请告诉我组织里的其他人,我并不是来抓你们的,” Barr 留言道,“只是想让‘领导阶层’知晓我的意图。”</p>
|
||||
|
||||
<p>“‘领导阶层’? 2333,笑死我了,” de Vries 回复道。</p>
|
||||
|
||||
<p>《金融时报》发布报道的第二天,“匿名者”组织就进行了反击。HBGary Federal 的网站被进行了恶意篡改。Barr 的私人 Twitter 账户被盗取,他的上千封电子邮件被泄漏到了网上,同时匿名者们还公布了他的住址以及其他私人信息——这是一系列被称作“doxing”的惩罚。不到一个月后,Barr 就从 HBGary Federal 辞职了。</p>
|
||||
<p>《金融时报》发布报道的第二天,“匿名者”组织就进行了反击。HBGary Federal 的网站遭到了恶意篡改。Barr 的私人 Twitter 账户被盗取,他的上千封电子邮件被泄漏到了网上,同时匿名者们还公布了他的住址以及其他私人信息——这套做法被称为“doxing”(人肉搜索)式的惩罚。不到一个月后,Barr 就从 HBGary Federal 辞职了。</p>
|
||||
|
||||
<h2>6</h2>
|
||||
|
||||
@ -180,17 +180,17 @@
|
||||
|
||||
<center><small>“这是我在 TED 夏令营里学到的东西。”</small></center>
|
||||
|
||||
<p>他时刻关注着“匿名者”组织的内部消息。那年春季,在 Barr 调查报告中提到的六位匿名者精锐成员,组建了“LulzSec 安全”组织(Lulz Security),简称 LulzSec。这个组织正如其名,这些成员认为“匿名者”组织已经变得太过严肃;他们的目标是重新引发起那些“能挑起强烈争端”的事件。当“匿名者”组织还在继续支持“阿拉伯之春”的抗议者的时候,LulzSec 入侵了公共电视网(Public Broadcasting Service,PBS)网站,并发布了一则虚假声明称已故说唱歌手 Tupac Shakur 仍然生活在新西兰。</p>
|
||||
<p>他时刻关注着“匿名者”组织的内部消息。那年春季,在 Barr 调查报告中提到的六位匿名者精锐成员,组建了“LulzSec 安全”组织(Lulz Security),简称 LulzSec。这个组织正如其名,这些成员认为“匿名者”组织已经变得太过严肃;他们的目标是重新引发起那些“能挑起强烈争端”的事件。当“匿名者”组织还在继续支持“阿拉伯之春”的抗议者时,LulzSec 入侵了公共电视网(Public Broadcasting Service,PBS)网站,并发布了一则虚假声明称已故说唱歌手 Tupac Shakur 仍然生活在新西兰。</p>
|
||||
|
||||
<p>匿名者之间会通过 Pastebin.com 网站来共享文字。在这个网站上,LulzSec 发表了一则声明,称“很不幸,我们注意到北约和我们的好总统巴拉克,奥萨马·本·美洲驼(拉登同学)的好朋友,来自 24 世纪的奥巴马,最近明显提高了对我们这些黑客的关注程度。他们把黑客入侵行为视作一种战争的表现。”目标越高远,挑起的纷争就越大。6 月 15 日,LulzSec 表示对 CIA 网站受到的袭击行为负责,他们发表了一条推特,上面写道“目标击毙(Tango down,亦即target down)—— cia.gov ——这是起挑衅行为。”</p>
|
||||
<p>匿名者之间会通过 Pastebin.com 网站来共享文本。在这个网站上,LulzSec 发表了一则声明,称“很不幸,我们注意到北约和我们的好朋友、来自 24 世纪的巴拉克·奥萨马·奥巴马,最近提高了针对黑客行为的筹码,他们把黑客入侵行为视作一种战争行为。”目标越高远,挑起的纷争就越大。6 月 15 日,LulzSec 宣称对 CIA 网站受到的袭击负责,他们发表了一条推特,上面写道“目标击毙(Tango down,亦即 target down)—— cia.gov ——这是起挑衅行为。”</p>
|
||||
|
||||
<p>2011 年 6 月 20 日,LulzSec 的一名十九岁的成员 Ryan Cleary 因为对 CIA 的网站进行了 DDoS 攻击而被捕。7 月,FBI 探员逮捕了七个月前对 PayPal 进行 DDoS 攻击的其他十四名黑客。这十四名黑客,每人都面临着 15 年的牢狱之灾以及 500 万美元的罚款。他们因为图谋不轨以及故意破坏互联网,而被控违反了计算机欺诈与滥用处理条例。(该法案允许检察官进行酌情处置,并在去年网络激进分子 Aaron Swartz 因为被判处 35 年牢狱之灾而自杀身亡之后,受到了广泛的质疑和批评。)</p>
|
||||
<p>2011 年 6 月 20 日,LulzSec 的一名十九岁的成员 Ryan Cleary 因为对 CIA 的网站进行了 DDoS 攻击而被捕。7 月,FBI 探员逮捕了七个月前对 PayPal 进行 DDoS 攻击的其他十四名黑客。这十四名黑客,每人都面临着 15 年的牢狱之灾以及 50 万美元的罚款。他们因为图谋不轨以及故意破坏互联网而被控违反了计算机欺诈与滥用法案。(Computer Fraud and Abuse Act,该法案允许检察官拥有宽泛的起诉裁量权,并在去年网络激进分子 Aaron Swartz 因面临最高 35 年刑期的指控而自杀身亡之后,受到了广泛的质疑和批评。)</p>
|
||||
|
||||
<p>LulzSec 的成员之一 Jake (Topiary) Davis 因为付不起法律诉讼费,给组织的成员们写了一封请求帮助的信件。Doyon 进入了 IRC 聊天室把 Davis 需要帮助的消息进行了扩散:</p>
|
||||
|
||||
<blockquote>CommanderX:那么请大家阅读信件并给予 Topiary 帮助...</blockquote>
|
||||
|
||||
<blockquote>Toad:你真是和【哔~】一样消息灵通。</blockquote>
|
||||
<blockquote>Toad:你真是为了抓人眼球什么都做啊!</blockquote>
|
||||
|
||||
<blockquote>Toad:这么说你得到 Topiary 的消息了?</blockquote>
|
||||
|
||||
@ -198,15 +198,15 @@
|
||||
|
||||
<blockquote>Katanon:唉...</blockquote>
|
||||
|
||||
<p>Doyon 越来越大胆。他在佛罗里达州当局逮捕了支持流浪者的激进分子后,就 DDoS 了奥兰多商务部商会网站。他使用个人笔记本电脑通过公用无线网络实施了攻击,并且没有花费太多精力来隐藏自己的网络行踪。“这种做法很勇敢,但也很愚蠢,”一位自称 Kalli 的 PLF 的资深成员告诉我。“他看起来并不在乎是否会被抓。他完全是一名自杀式黑客。”</p>
|
||||
<p>Doyon 越来越大胆。在佛罗里达州当局逮捕了支持流浪者的激进分子后,他就攻击了奥兰多商会的网站。他使用个人笔记本电脑通过公用无线网络实施了攻击,并且没有花费太多精力来隐藏自己的网络行踪。“这种做法很勇敢,但也很愚蠢,”一位自称 Kalli 的 PLF 的资深成员告诉我。“他看起来并不在乎是否会被抓。他完全是一名自杀式黑客。”</p>
|
||||
|
||||
<p>两个月后,Doyon 参与了针对旧金山湾区快速交通系统(Bay Area Rapid Transit)的 DDoS 攻击,以此抗议一名 BART 的警官杀害一名叫做 Charles Hill 的流浪者的事件。随后 Doyon 现身“CBS 晚间新闻”为这次行动辩护,当然,他处理了自己的声音,把自己的脸用香蕉进行替代。他把 DDoS 攻击比作为公民的抗议行为。“与占用 Woolworth 午餐柜台的座位相比,这真的没什么不同,真的,”他说道。CBS 的主播 Bob Schieffer 笑称:“就我所见,它并不完全是一项民权运动。”</p>
|
||||
<p>两个月后,Doyon 参与了针对旧金山湾区快速交通系统(Bay Area Rapid Transit)的 DDoS 攻击,以此抗议一名 BART 的警官杀害一名叫做 Charles Hill 的流浪者的事件。随后 Doyon 现身“CBS 晚间新闻”为这次行动辩护,当然,他处理了自己的声音,用印花大手帕盖住了脸。他把 DDoS 攻击比作为公民的抗议行为。“与占用 Woolworth 午餐柜台的座位相比,这真的没什么不同,真的,”他说道。CBS 的主播 Bob Schieffer 笑称:“就我所见,它并不完全是一项民权运动。”</p>
|
||||
|
||||
<p>2011 年 9 月 22 日,在加利福尼亚州的一家名为 Mountain View 的咖啡店里,Doyon 被捕,同时面临着“使用互联网非法破坏受保护的计算机”罪名指控。他被拘留了一个星期的时间,接着在签署协议之后获得假释。两天后,他不顾律师的反对,宣布将在圣克鲁斯郡法院召开新闻发布会。他梳起了马尾辫,戴着一副墨镜、一顶黑色海盗帽,同时还在脖子上围了一条五彩手帕。</p>
|
||||
<p>2011 年 9 月 22 日,在加利福尼亚州山景城(Mountain View)的一家咖啡店里,Doyon 被捕,同时面临着“使用互联网非法破坏受保护的计算机”的罪名指控。他被拘留了一个星期的时间,接着在签署协议之后获得假释。两天后,他不顾律师的反对,宣布将在圣克鲁斯郡法院召开新闻发布会。他梳起了马尾辫,戴着一副墨镜、一顶黑色海盗帽,同时还在脖子上围了一条五彩手帕。</p>
|
||||
|
||||
<p>Doyon 通过非常夸大的方式披露了自己的身份。“我就是 Commander X,”他告诉蜂拥的记者。他举起了拳头。“作为‘匿名者’组织的一员,作为一名核心成员,我感到非常的骄傲。”他在接受一名记者的采访时说,“想要成为一名顶尖黑客的话,你只需要准备一台电脑以及一副墨镜。任何一台电脑都行。”</p>
|
||||
<p>Doyon 通过非常夸大的方式揭露了自己的身份。“我就是 Commander X,”他告诉蜂拥的记者。他举起了拳头。“作为‘匿名者’组织的一员,作为一名核心成员,我感到非常的骄傲。”他在接受一名记者的采访时说,“想要成为一名顶尖黑客的话,你只需要准备一台电脑以及一副墨镜。任何一台电脑都行。”</p>
|
||||
|
||||
<p>Kalli 非常担心 Doyon 会不小心泄露组织机密或者其他匿名者的信息。“这是所有环节中最薄弱的地方,如果这里出问题了,那么组织就完了,”他告诉我。曾在“和平阵营行动”中给予 Doyon 大力帮助的匿名者 Josh Covelli 告诉我,当他在网上看见 Doyon 的新闻发布会视频的时候,他感觉瞬间“下巴掉地下了”。“他的所作所为变得越来越不可捉摸,” Covelli 评价道。</p>
|
||||
<p>Kalli 非常担心 Doyon 会不小心泄露组织机密或者其他匿名者的信息。“这是所有环节中最薄弱的地方,如果这里出问题了,那么组织就完了,”他告诉我。曾在“和平阵营行动”中给予 Doyon 大力帮助的匿名者 Josh Covelli 告诉我,当他在网上看见 Doyon 的新闻发布会视频的时候,他感觉瞬间“下巴掉地上了”。“他的所作所为变得越来越不可捉摸,” Covelli 评价道。</p>
|
||||
|
||||
<p>三个月后,Doyon 的指定律师 Jay Leiderman 出席了在圣荷西联邦法院举行的一场听证会。Leiderman 已经好几个星期没有得到 Doyon 的消息了。“我需要得知被告无法出席的具体原因,”法官说。Leiderman 无法回答。Doyon 再次缺席了两星期后的另一场听证会。检控方表示:“很明显,看来被告已经逃跑了。”</p>
|
||||
|
||||
@ -214,7 +214,7 @@
|
||||
|
||||
<p>“Xport 行动”是“匿名者”组织进行的所有同类行动中的第一个行动。这次行动的目标是协助如今已经背负两项罪名的通缉犯 Doyon 潜逃出国。负责调度的人是 Kalli 以及另一位曾在八十年代剑桥的迷幻药派对上和 Doyon 见过面的匿名者老兵。这位老兵是一位已经退休的软件主管,在组织内部威望很高。</p>
|
||||
|
||||
<p>Doyon 的终点站是这位软件主管的家,位于加拿大的偏远乡村。2011 年 12 月,他搭便车前往旧金山,并辗转来到了市区组织大本营。他找到了他的指定联系人,后者带领他到达了奥克兰的一家披萨店。凌晨 2 点,Doyon 通过披萨店的无线网络,接收了一条加密聊天消息。</p>
|
||||
<p>Doyon 的目的地是这位软件主管的家,位于加拿大的偏远乡村。2011 年 12 月,他搭便车前往旧金山,并辗转来到了市区组织大本营。他找到了他的指定联系人,后者带领他到达了奥克兰的一家披萨店。凌晨 2 点,Doyon 通过披萨店的无线网络,接收了一条加密聊天消息。</p>
|
||||
|
||||
<p>“你现在靠近窗户吗?”那条消息问道。</p>
|
||||
|
||||
@ -222,13 +222,13 @@
|
||||
|
||||
<p>“往大街对面看。看见一个绿色的邮箱了吗?十五分钟后,你去站到那个邮箱旁边,把你的背包取下来,然后把你的面具放在上面。”</p>
|
||||
|
||||
<p>一连几个星期的时间,Doyon 穿梭于海湾地区的安全屋之间,按照加密聊天那头的指示不断行动。最后,他搭上了前往西雅图的长途公交车,软件主管的一个朋友在那里接待了他。这个朋友是一名非常富有的退休人员,他花费了通过谷歌地球来帮助 Doyon 规划前往加拿大的路线。他们共同前往了一家野外用品供应商店,这位朋友为 Doyon 购置了价值 1500 美元的商品,包括登山鞋以及一个全新的背包。接着他又开车载着 Doyon 北上,两小时后到达距离国界只有几百英里的偏僻地区。随后 Doyon 见到了 Amber Lyon。</p>
|
||||
<p>一连几个星期的时间,Doyon 穿梭于海湾地区的安全屋之间,按照加密聊天那头的指示不断行动。最后,他搭上了前往西雅图的长途公交车,软件主管的一个朋友在那里接待了他。这个朋友是一名非常富有的退休人员,他花费了几小时的时间通过谷歌地球来帮助 Doyon 规划前往加拿大的路线。他们共同前往了一家野外用品供应商店,这位朋友为 Doyon 购置了价值 1500 美元的商品,包括登山鞋以及一个全新的背包。接着他又开车载着 Doyon 北上,两小时后到达距离国界只有几百英里的偏僻地区。随后 Doyon 见到了 Amber Lyon。</p>
|
||||
|
||||
<p>几个月前,广播新闻记者 Lyon 曾在 CNN 的关于“匿名者”组织的节目里采访过 Doyon。Doyon 很欣赏她的报道,他们一直保持着联络。Lyon 要求加入 Doyon 的逃亡行程,为一部可能会发行的纪录片拍摄素材。软件主管认为这样太过冒险,但 Doyon 还是接受了她的请求。“我觉得他是想让自己出名,” Lyon 告诉我。四天的时间里,她用影像记录下了 Doyon 徒步北上,在林间露宿的行程。“那一切看起来不太像是仔细规划过的,” Lyon 回忆说。“他实在是无家可归了,所以他才会想要逃到国外去。”</p>
|
||||
|
||||
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18506-600.jpg" /></center>
|
||||
|
||||
<center><small>“这里是我们存放各种感觉的仓库。如果你发现了某种感觉,把它带到这里然后锁起来。”</small></center>
|
||||
<center><small>“这里是我们存放各种情感的仓库。如果你产生了某种情感,把它带到这里然后锁起来。”</small></center>
|
||||
|
||||
<p>2012 年 2 月 11 日,Pastebin 上出现了一条消息。“PLF 很高兴的宣布‘ Commander X’,也就是 Christopher Mark Doyon,已经离开了美国的司法管辖区,抵达了加拿大一个比较安全的地方,”上面写着,“PLF 呼吁美国政府,希望政府能够醒悟过来并停止无谓的骚扰与监视行为——不要仅仅逮捕‘匿名者’组织的成员,对所有的激进组织应该一视同仁。”</p>
|
||||
|
||||
@ -236,13 +236,13 @@
|
||||
|
||||
<p>Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barrett Brown 的聊天中,Doyon 难掩内心的喜悦之情。</p>
|
||||
|
||||
<blockquote>BarrettBrown:你现在应该足够安全了吧,其他的呢?...</blockquote>
|
||||
<blockquote>BarrettBrown:你现在有足够多安全的藏身之处了吧?</blockquote>
|
||||
|
||||
<blockquote>CommanderX:是的,我现在很安全,现在加拿大既不缺钱也不缺藏身的地方。</blockquote>
|
||||
|
||||
<blockquote>CommanderX:Amber Lyon 想要你的一张照片。</blockquote>
|
||||
|
||||
<blockquote>CommanderX:去他【哔~】的怪人,Barrett,相信你会喜欢我告诉她应该怎样评价你的。</blockquote>
|
||||
<blockquote>CommanderX:去你【哔~】的怪人,Barrett,相信你会喜欢我的回复。我一直爱你,永远爱你。</blockquote>
|
||||
|
||||
<blockquote>CommanderX::-)</blockquote>
|
||||
|
||||
@ -258,13 +258,13 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
|
||||
|
||||
<blockquote>BarrettBrown:当然,估计我们不久后也得这样了</blockquote>
|
||||
|
||||
<p>在 Doyon 出逃十天后,《华尔街日报》上刊登了关于不久后升职为美国国家安全局及网络指挥部主任的 Keith Alexander 的报道,他在白宫举行的秘密会晤以及其他场合下,表达了对“匿名者”组织的高度关注。Alexander 发出警告,两年内,该组织必将会是国家电网改造的大患。参谋长联席会议的主席 General Martin Dempsey 告诉记者,这群人是国家的敌人。“他们有能力把这些使用恶意软件造成破坏的技术扩散到其他的边缘组织去,”随后又补充道,“我们必须防范这种情况发生。”</p>
|
||||
<p>在 Doyon 出逃十天后,《华尔街日报》刊登了关于时任美国国家安全局局长兼网络司令部司令 Keith Alexander 的报道:他在白宫以及其他场合举行的秘密会晤中,表达了对“匿名者”组织的高度关注。Alexander 警告说,两年之内,该组织也许就有能力对国家电网造成局部破坏。参谋长联席会议主席 Martin Dempsey 上将告诉记者,这群人是国家的敌人。“他们有能力把这些使用恶意软件造成破坏的技术扩散到其他的边缘组织去,”随后又补充道,“我们必须防范这种情况发生。”</p>
|
||||
|
||||
<p>3 月 8 日,国会议员们在国会大厦附近的一个敏感信息隔离设施附近举行了关于网络安全的会议。包括 Alexander、Dempsey、美国联邦调查局局长 Robert Mueller,以及美国国土安全部部长 Janet Napolitano 在内的多名美国安全方面的高级官员出席了这次会议。会议上,通过计算机向与会者模拟了东部沿海地区电力设施可能会遭受到的网络攻击时的情境。“匿名者”组织目前应该还不具备发动此种规模攻击的能力,但安全方面的官员担心他们会联合其他更加危险的组织来共同发动攻击。“在我们着手于不断增加的网络风险事故时,政府仍在就具体的处理细节进行不断协商讨论,” Napolitano 告诉我。当谈及潜在的网络安全隐患时,她补充道,“我们通常会把‘匿名者’组织的行动当做 A 级威胁来应对。”</p>
|
||||
|
||||
<p>“匿名者”也许是当今世界上最强大的无政府主义黑客组织。即使如此,它却从未表现出过任何的会对公共基础设施造成破坏的迹象或意愿。一些网络安全专家称,那些关于“匿名者”组织的谣传太过危言耸听。“在奥兰多发布战前宣言和实际发动 Stuxnet 蠕虫病毒攻击之间是有很大的差距的,”战略与国际研究中心(CSIS)的 James Andrew Lewis 告诉我,他指的是 2007 年美国与以色列针对伊朗核设施发动的网络攻击。哈佛大学法学院的教授 Yochai Benkler 告诉我,“我们所看见的只是以主要防御为理由而进行的开销,否则,将很难自圆其说。”</p>
|
||||
|
||||
<p>Keith Alexander 最近刚从政府部门退休,他拒绝就此事发表评论,因为他并不能代表国家安全局、联邦调查局、中央情报局以及国土安全部。尽管匿名者们从未真正盯上过政府部门的计算机网络,但他们对于那些激怒他们的人有着强烈的报复心理。前国土安全部国家网络安全部门负责人 Andy Purdy 告诉我他们“害怕被报复,”无论机构还是个人,都不同意政府公然反对“匿名者”组织。“每个人都非常脆弱,”他说。</p>
|
||||
<p>Keith Alexander 最近刚从政府部门退休,他拒绝就此事发表评论,国家安全局、联邦调查局、中央情报局以及国土安全部也同样拒绝置评。尽管匿名者们从未真正盯上过政府部门的计算机网络,但他们对于那些激怒他们的人有着强烈的报复心理。前国土安全部国家网络安全部门负责人 Andy Purdy 告诉我,出于“害怕被报复”的心理,无论机构还是个人,都不愿公然与“匿名者”组织作对。“每个人都容易成为被攻击对象,”他说。</p>
|
||||
|
||||
<h2>9</h2>
|
||||
|
||||
@ -272,7 +272,7 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
|
||||
|
||||
<p>Doyon 感到很烦躁,但他还是继续扮演着一名黑客——以此吸引关注。他在多伦多上映的纪录片上以戴着面具的匿名者形象出现。在接受《National Post》的采访时,他向记者大肆吹嘘未经证实的消息,“我们已经入侵了美国政府的所有机密数据库。现在的问题是我们该何时泄露这些机密数据,而不是我们是否会泄露。”</p>
|
||||
|
||||
<p>2013 年 1 月,在另一名匿名者介入俄亥俄州<a href="https://gist.githubusercontent.com/SteveArcher/cdffc917a507f875b956/raw/c7b49cc11ae1e790d30c87f7b8de95482c18ec74/%E6%96%AF%E6%89%98%E6%9C%AC%E7%BB%B4%E5%B0%94%E8%BD%AE%E5%A5%B8%E6%A1%88%E5%86%8D%E8%B5%B7%E9%A3%8E%E6%B3%A2%20%E9%BB%91%E5%AE%A2%E7%BB%84%E7%BB%87%E4%BB%8B%E5%85%A5">斯托本维尔未成年少女轮奸案</a>,发起抗议行动之后,Doyon 重新启用了他两年前创办的网站 LocalLeaks,作为那起轮奸事件的信息汇总处理中心。如同许多其他“匿名者”组织的所作所为一样,LocalLeaks 网站非常具有影响力,但却也不承担任何责任。LocalLeaks 网站是第一家公布 12 分钟斯托本维尔高中毕业生猥亵视频的网站,这激起了众多当事人的愤怒。LocalLeaks 网站上同时披露了几份未被法庭收录的关于案件的材料,并且由此不小心透漏出了案件受害人的名字。Doyon向我承认他公开这些未经证实的信息的策略是存在争议的,但他同时回忆起自己当时的想法,“我们可以选择去除这些斯托本维尔案件的材料...也可以选择公开所有我们搜集的信息,基本上,给公众以提醒,不过,前提是你们得相信我们。”</p>
|
||||
<p>2013 年 1 月,在另一名匿名者介入俄亥俄州<a href="https://gist.githubusercontent.com/SteveArcher/cdffc917a507f875b956/raw/c7b49cc11ae1e790d30c87f7b8de95482c18ec74/%E6%96%AF%E6%89%98%E6%9C%AC%E7%BB%B4%E5%B0%94%E8%BD%AE%E5%A5%B8%E6%A1%88%E5%86%8D%E8%B5%B7%E9%A3%8E%E6%B3%A2%20%E9%BB%91%E5%AE%A2%E7%BB%84%E7%BB%87%E4%BB%8B%E5%85%A5">斯托本维尔未成年少女强奸案</a>,发起抗议行动之后,Doyon 重新启用了他两年前创办的网站 LocalLeaks,作为那起强奸事件的信息汇总处理中心。如同许多其他“匿名者”组织的所作所为一样,LocalLeaks 网站非常具有影响力,但却也不承担任何责任。LocalLeaks 网站是第一家公布 12 分钟斯托本维尔高中毕业生猥亵视频的网站,这激起了众多当事人的愤怒。LocalLeaks 网站上同时披露了几份未被法庭收录的关于案件的材料,并且由此不小心透漏出了案件受害人的名字。Doyon向我承认他公开这些未经证实的信息的策略是存在争议的,但他同时回忆起自己当时的想法,“我们可以选择销毁这些斯托本维尔案件的材料...也可以选择公开所有我们搜集的信息,基本上,给公众以提醒,不过,前提是你们得相信我们。”</p>
|
||||
|
||||
<p>2013 年 3 月,一个名为 Rustle League 的组织入侵了 Doyon 的 Twitter 账户,该组织此前经常挑衅“匿名者”组织。Rustle League 的领导者之一 Shm00p 告诉我,“我们的本意并不是伤害那些家伙,只不过,哦,那些家伙说的话你就当是在放屁好了——我会这么做只是因为我感到很好笑。” Rustle League 组织使用 Doyon 的账户发布了含有如 www.jewsdid911.org 链接这样的,种族主义和反犹太主义的信息。</p>
|
||||
|
||||
@ -290,37 +290,37 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
|
||||
|
||||
<p>我们约定了一次面谈。Doyon 坚持让我通过加密聊天把面谈的详细情况提前告诉他。我坐了几个小时的飞机,租车来到了加拿大的一个偏远小镇,并且禁用了我的电话。</p>
|
||||
|
||||
<p>最后,我在一个狭小安静的住宅区公寓里见到了 Doyon。他穿了一件绿色的军人夹克衫以及印有“匿名者”组织 logo 的 T 恤衫:一个脸被问号所替代的黑衣人形象。公寓里基本上没有什么家具,充满了一股烟味。他谈论起了美国政治(“我基本没怎么在众多的选举中投票——它们不过是暗箱操作的游戏罢了”),好战的伊斯兰教(“我相信,尼日利亚政府的人不过是相互勾结,以创建一个名为‘博科圣地’的基地组织的下属机构罢了”),以及他对“匿名者”组织的小小看法(“那些自称为怪人的人是真的是烂透了,意思是,邪恶的人”)。</p>
|
||||
<p>最后,我在一个狭小安静的住宅区公寓里见到了 Doyon。他穿了一件绿色的军人夹克衫以及印有“匿名者”组织 logo 的 T 恤衫:一个脸被问号所替代的黑衣人形象。公寓里基本上没有什么家具,充满了一股烟味。他谈论起了美国政治(“我基本没怎么在众多的选举中投票——它们不过是暗箱操作的游戏罢了”),好战的伊斯兰教(“我相信,尼日利亚政府的人不过是相互勾结,以创建一个名为‘博科圣地’的基地组织的下属机构罢了”),以及他对“匿名者”组织的小小看法(“那些自称为怪人的人是真的是烂透了,其实是邪恶的人”)。</p>
|
||||
|
||||
<p>Doyon 剃去了他的胡须,但他却显得更加憔悴了。他说那是因为他病了的原因,他几乎很少出去。很小的写字台上有两台笔记本电脑、一摞关于佛教的书,还有一个堆满烟灰的烟灰缸。另一面裸露的泛黄墙壁上挂着盖伊·福克斯面具。他告诉我,“所谓‘Commander X’不过是一个处于极度痛苦中的小老头罢了。”</p>
|
||||
|
||||
<p>在刚过去的圣诞节里,匿名者的新网站 AnonInsiders 的创建者拜访了 Doyon,并给他带来了馅饼和香烟。Doyon 询问来访的朋友是否可以继承自己的衣钵成为 PLF 的最高指挥官,同时希望能够递交出自己手里的“王国钥匙”——手里的所有密码,以及几份关于“匿名者”组织的机密文件。这位朋友委婉的拒绝了。“我有自己的生活,”他告诉了我拒绝的理由。</p>
|
||||
<p>在刚过去的圣诞节里,匿名者的新网站 AnonInsiders 的创建者拜访了 Doyon,并给他带来了馅饼和香烟。Doyon 询问来访的朋友是否可以接替自己成为 PLF 的最高指挥官,同时希望能够递交出自己手里的“王国钥匙”——手里的所有密码,以及几份关于“匿名者”组织的机密文件。这位朋友委婉的拒绝了。“我有自己的生活,”他告诉了我拒绝的理由。</p>
|
||||
|
||||
<h2>11</h2>
|
||||
|
||||
<p>2014 年 8 月 9 日,当地时间下午 5 时 09 分,来自密苏里州圣路易斯郊区德尔伍德的一位说唱歌手同时也是激进分子的 Kareem (Tef Poe) Jackson,在 Twitter 上谈起了邻近城镇的一系列令人担忧的举措。“基本可以断定弗格森已经实施了戒严,任何人都无法出入,”他在 Twitter 上写道。“国内的朋友还有因特网上的朋友请帮助我们!!!”五个小时前,弗格森,一位十八岁的手无寸铁的非裔美国人 Michael Brown,被一位白人警察射杀。射杀警察声称自己这么做的原因是 Brown 意图伸手抢夺自己的枪支。而事发当时和 Brown 在一起的朋友 Dorian Johnson 却说,Brown 唯一做得不对的地方在于他当时拒绝离开街道中间。</p>
|
||||
<p>2014 年 8 月 9 日,当地时间下午 5 时 09 分,来自密苏里州圣路易斯郊区德尔伍德的一位说唱歌手同时也是激进分子的 Kareem (Tef Poe) Jackson,在 Twitter 上谈起了邻近城镇的一系列令人担忧的事态。“基本可以断定弗格森已经实施了戒严,任何人都无法出入,”他在 Twitter 上写道。“国内外的朋友们请帮助我们!!!”五个小时前,在弗格森,一位十八岁、手无寸铁的非裔美国人 Michael Brown 被一位白人警察射杀。开枪的警察声称自己这么做的原因是 Brown 意图伸手抢夺自己的枪支。而事发当时和 Brown 在一起的朋友 Dorian Johnson 却说,Brown 唯一做得不对的地方在于他当时拒绝离开街道中间。</p>
|
||||
|
||||
<p>不到两小时,Jackson 就收到了一位名为 CommanderXanon 的 Twitter 用户的回复。“你完全可以相信我们,”回复信息里写道。“你是否可以给我们详细描述一下现场情况,那样会对我们很有帮助。”近几周的时间里,仍然呆在加拿大的 Doyon 复出了。六月,他在还有两个月满 50 岁的时候,成功戒烟(“#戒瘾成功 #电子香烟功不可没 #老了,”他在戒烟成功后在 Twitter 上写道)。七月,在加沙地带爆发武装对抗之后,Doyon 发表 Twiter 支持“匿名者”组织的“拯救加沙行动”,并发动了一系列针对以色列网站的 DDoS 攻击。Doyon 认为弗格森枪击事件更加令人关注。抛开他本人的个性,他有在事件发展到引人注目之前的早期,就迅速注意该事件的能力。</p>
|
||||
<p>不到两小时,Jackson 就收到了一位名为 CommanderXanon 的 Twitter 用户的回复。“你完全可以相信我们,”回复信息里写道。“你是否可以给我们详细描述一下现场情况,那样会对我们很有帮助。”近几周的时间里,仍然呆在加拿大的 Doyon 复出了。六月,他在还有两个月满 50 岁的时候,成功戒烟(“#戒瘾成功 #电子香烟功不可没 #老了,”他在戒烟成功后在 Twitter 上写道)。七月,在加沙地带爆发武装对抗之后,Doyon 发表 Twiter 支持“匿名者”组织的“拯救加沙行动”,并发动了一系列针对以色列网站的 DDoS 攻击。Doyon 认为弗格森枪击事件更加令人关注。抛开他本人的个性,他有能力在事件发展到引人注目之前,就迅速注意该事件。</p>
|
||||
|
||||
<p>“正在网上搜索关于那名警察以及当地政府的信息,” Doyon 发 Twitter 道。不到十分钟,他就为此专门在 IRC 聊天室里创建了一个频道。“‘匿名者’组织‘弗格森’行动正式启动,”他又发了一条 Twitter。但只有两个人转推了此消息。</p>
|
||||
|
||||
<p>次日早晨,Doyon 发布了一条链接,链接指向的是一个初具雏形的网站,网站首页有一条致弗格森市民的信息——“你们并不孤单,我们将尽一切努力支持你们”——以及致当地警察的警告:“如果你们对对弗格森的抗议者们滥用职权、骚扰,或者伤害了他们,我们绝对会让你们所有政府部门的网站瘫痪。这不是威胁,这是承诺。”同时 Doyon 呼吁有 130 万粉丝的“匿名者”组织的 Twitter 账号 YourAnonNews 给与支持。“请支持‘弗格森’行动”,他发送了消息。一分钟后,YourAnonNews 回复表示同意。当天,包含话题 #OpFerguson 的 Twitter 发表/转推了超过六千次。</p>
|
||||
<p>次日早晨,Doyon 发布了一条链接,链接指向的是一个初具雏形的网站,网站首页有一条致弗格森市民的信息——“你们并不孤单,我们将尽一切努力支持你们”——以及致当地警察的警告:“如果你们对弗格森的抗议者们滥用职权、骚扰,或者伤害了他们,我们绝对会让你们所有政府部门的网站瘫痪。这不是威胁,这是承诺。”同时 Doyon 呼吁有 130 万粉丝的“匿名者”组织的 Twitter 账号 YourAnonNews 给与支持。“请支持‘弗格森’行动”,他发送了消息。一分钟后,YourAnonNews 回复表示同意。当天,包含话题 #OpFerguson 的 Twitter 被转发了超过六千次。</p>
|
||||
|
||||
<p>这个事件迅速成为头条新闻,同时匿名者们在弗格森周围进行了大集会。与“阿拉伯之春行动”类似,“匿名者”组织向抗议者们发送了电子关怀包,包括抗暴指导(“把瓦斯弹捡起来回丢给警察”)与可打印的盖伊·福克斯面具。Jackson 和其他示威者在弗格森进行示威游行时,警察企图通过橡皮子弹和催泪瓦斯来驱散他们。“当时的情景真像是布鲁斯·威利斯的电影里的情节,” Jackson 后来告诉我。“不过巴拉克·奥巴马应该并不会支持‘匿名者’组织传授给我们的这些知识,”他笑称道。“让那些警察赶到束手无策真的是太爽了。”</p>
|
||||
<p>这个事件迅速成为头条新闻,同时匿名者们在弗格森周围进行了大集会。与“阿拉伯之春行动”类似,“匿名者”组织向抗议者们发送了电子关怀包,包括抗暴指导(“把瓦斯弹捡起来回丢给警察”)与可打印的盖伊·福克斯面具。Jackson 和其他示威者在弗格森进行示威游行时,警察企图通过橡皮子弹和催泪瓦斯来驱散他们。“当时的情景真像是布鲁斯·威利斯的电影里的情节,” Jackson 后来告诉我。“不过巴拉克·奥巴马应该并不会支持‘匿名者’组织传授给我们的这些知识,”他说道。“知道有人在你的背后支持你,真是感觉欣慰。”</p>
|
||||
|
||||
<p>有个域名是 www.opferguson.com 的网站,后来发现不过是一个骗局——一个用来收集访问者 ip 地址的陷阱,随后这些地址会被移交给执法机构。有些人怀疑 Commander X 是政府的线人。在 IRC 聊天室 #OpFerguson 频道,一个名叫 Sherlock 写道,“现在频道里每个人说的已经让我害怕去点击任何陌生的链接了。除非是一个我非常熟悉的网址,否则我绝对不会去点击。”</p>
|
||||
<p>有个网址是 www.opferguson.com 的网站,后来发现不过是一个骗局——一个用来收集访问者 IP 地址的陷阱,随后这些地址会被移交给执法机构。有些人怀疑 Commander X 是政府的线人。在 IRC 聊天室 #OpFerguson 频道,一个名叫 Sherlock 的用户写道,“现在频道里每个人说的已经让我害怕去点击任何陌生的链接了。除非是一个我非常熟悉的网址,否则我绝对不会去点击。”</p>
|
||||
|
||||
<p>弗格森的抗议者要求当局公布射杀 Brown 的警察的名字。几天后,匿名者们附和了抗议者们的请求。有人在 Twitter 上写道,“弗格森警察局最好公布肇事警察的名字,否则‘匿名者’组织将会替他们公布。”8 月 12 日的新闻发布会上,圣路易斯警察局的局长 Jon Belmar 拒绝了这个请求。“我们不会这样做,除非他们被某个罪名所指控,”他说道。</p>
|
||||
|
||||
<p>作为报复,一名黑客使用名为 TheAnonMessage 的 Twitter 账户公布了一条链接,该链接指向一段来自警察的无线电设备所记录的音频文件,文件记录时间是 Brown 被枪杀的两小时左右。TheAnonMessage 同时也把矛头指向了 Belmar,在 Twitter 上公布了这位警察局长的家庭住址、电话号码以及他的家庭照片——一张是他的儿子在长椅上睡觉,另一张则是 Belmar 和他的妻子的合影。“不错的照片,Jon,” TheAnonMessage 在 Twitter 上写道。“你的妻子在她这个年龄算是一个美人了。你已经爱她爱得不耐烦了吗?”一个小时后,TheAnonMessage 又以 Belmar 的女儿为把柄进行了恐吓。</p>
|
||||
|
||||
<p>Richard Stallman,来自 MIT 的初代黑客,告诉我虽然他在很多地方赞同“匿名者”组织的行为,但他认为这些泄露私人信息的攻击行为是要受到谴责的。即使是在国内,TheAnonMessage 的行为也受到了谴责。“为何要泄露无辜的人的信息到网上?”一位匿名者通过 IRC 发问,并且表示威胁 Belmar 的家人实在是“相当愚蠢的行为”。但是 TheAnonMessage 和其他的一些匿名者仍然进行着不断搜寻,并企图在将来再次进行泄露信息的攻击。在互联网上可以得到所有弗格森警察局警员的名字,匿名者们不断地搜索着信息,企图找出具体是哪一个警察找出杀害了 Brown。</p>
|
||||
<p>Richard Stallman,来自 MIT 的初代黑客,告诉我虽然他在很多地方赞同“匿名者”组织的行为,但他认为这些泄露私人信息的攻击行为是要受到谴责的。即使是组织内部,TheAnonMessage 的行为也受到了谴责。“为何要泄露无辜的人的信息到网上?”一位匿名者通过 IRC 发问,并且表示威胁 Belmar 的家人实在是“相当愚蠢的行为”。但是 TheAnonMessage 和其他的一些匿名者仍然进行着不断搜寻,并企图在将来再次进行泄露信息的攻击。在互联网上可以得到所有弗格森警察局警员的名字,匿名者们不断地搜索着信息,企图找出具体是哪一个警察杀害了 Brown。</p>
|
||||
|
||||
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_steig-1999-04-12-600.jpg" /></center>
|
||||
|
||||
<center><small>1999 年 4 月 12 日 “我应该把镜头对向谁?”</small></center>
|
||||
|
||||
<p>8 月 14 日清晨,及位匿名者基于 Facebook 上的照片还有其他的证据,确定了射杀 Brown 的凶手是一位名叫 Bryan Willman 的 32 岁男子。根据一份 IRC 聊天记录,一位匿名者贴出了 Willman 的浮夸面孔的照片;另一位匿名者提醒道,“凶手声称自己的脸没有被任何人看到。”另一位昵称为 Anonymous|11057 的匿名者承认他对 Willman 的怀疑确实是“跳跃性的可能错误的逻辑过程推导出来的。”不过他还是写道,“我只是无法动摇自己的想法。虽然我没有任何证据,但我非常非常地确信就是他。”</p>
|
||||
<p>8 月 14 日清晨,几位匿名者基于 Facebook 上的照片还有其他的证据,确定了射杀 Brown 的凶手是一位名叫 Bryan Willman 的 32 岁男子。根据一份 IRC 聊天记录,一位匿名者贴出了 Willman 的肿胀面孔的照片;另一位匿名者提醒道,“凶手声称自己的脸没有被任何人看到。”另一位昵称为 Anonymous|11057 的匿名者承认他对 Willman 的怀疑确实是“跳跃性的可能错误的逻辑过程推导出来的。”不过他还是写道,“我只是无法动摇自己的想法。虽然我没有任何证据,但我非常非常地确信就是他。”</p>
|
||||
|
||||
<p>TheAnonMessage 看起来被这次对话逗乐了,写道,“#愿逝者安息,凶手是 BryanWillman。”另一位匿名者发出了强烈警告。“请务必确认,” Anonymous|2252 写道。“这不仅仅关乎到一个人的性命,我们可以不负责任地向公众公布我们的结果,但却很可能有无辜的人会因此受到不应受到的对待。”</p>
|
||||
|
||||
@ -356,15 +356,15 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
|
||||
|
||||
<blockquote>anondepp:lol</blockquote>
|
||||
|
||||
<p>早晨 9 时 45 分,圣路易斯警察局对 TheAnonMessage 进行了答复。“Bryan Willman 从来没有在弗格森警察局或者圣路易斯警察局任过职,” 他们在 Twitter 上写道。“请不要再公布这位无辜市民的信息了。”(随后 FBI 对弗格森警察的电脑遭黑客入侵的事情展开了调查。)Twitter 管理员迅速封禁了 TheAnonMessage 的账户,但 Willman 的名字和家庭住址仍然被广泛传开。</p>
|
||||
<p>早晨 9 时 45 分,圣路易斯警察局对 TheAnonMessage 进行了答复。“Bryan Willman 从来没有在弗格森警察局或者圣路易斯警察局任过职,” 他们在 Twitter 上写道。“请不要再公布这位无辜市民的信息了。”(随后 FBI 对弗格森警察的电脑遭黑客入侵的事情展开了调查。)Twitter 管理员迅速封禁了 TheAnonMessage 的账户,但 Willman 的名字和家庭住址仍然被广泛传开。</p>
|
||||
|
||||
<p>实际上,Willman 是弗格森西郊圣安区的警察外勤负责人。当圣路易斯警察局的情报处打电话告诉 Willman,他已经被“确认”为凶手时,他告诉我,“我以为不过是个奇怪的笑话。”几小时后,他的社交账号上就收到了数百条要杀死他的威胁。他在警察的保护下,独自一人在家里呆了将近一个星期。“我只希望这一切都尽快过去,”他告诉我他的感受。他认为“匿名者”组织已经不可挽回地损害了他的名誉。“我不知道他们怎么会以为自己可以被再次信任的,”他说。</p>
|
||||
<p>实际上,Willman 是弗格森西郊圣安区的警察外勤负责人。当圣路易斯警察局的情报处打电话告诉 Willman,他已经被“确认”为凶手时,他告诉我,“我以为不过是个奇怪的笑话。”几小时后,他的社交账号上就收到了成百上千条死亡恐吓。他在警察的保护下,独自一人在家里呆了将近一个星期。“我只希望这一切都尽快过去,”他告诉我他的感受。他认为“匿名者”组织已经不可挽回地损害了他的名誉。“我不知道他们怎么会以为自己可以被再次信任的,”他说。</p>
|
||||
|
||||
<p>“我们并不完美,” OpFerguson 在 Twitter 上说道。“‘匿名者’组织确实犯错了,过去的几天我们制造一些混乱。为此,我们道歉。”尽管 Doyon 并不应该为这次错误的信息泄露攻击负责,但其他的匿名者却因为他发起了一次无法控制的行动,而归咎他。YourAnonNews 在 Pastebin 上发表了一则消息,上面写道,“你们也许注意到了组织不同的 Twitter 账户发表的话题 #Ferguson 和 #OpFerguson,这两个话题下的 Twitter 与信息是相互矛盾的。为什么会在这些关键话题上出现分歧,部分原因是因为 CommanderX 是一个‘想让自己出名的疯子/想让公众认识自己的疯子’——这种人喜欢,或者至少不回避媒体的宣传——并且显而易见的,组织内大部分成员并不喜欢这样。”</p>
|
||||
|
||||
<p>在个人 Twitter 上,Doyon 否认了自己对“弗格森行动”所负的责任,他写道,“我讨厌这样。我不希望这样的情况发生,我也不希望和我认为是朋友的人战斗。”沉寂了几天后,他又再度吹响了战斗的号角。他最近在 Twitter 上写道,“你们称他们是暴民,我们却称他们是压迫下的反抗之声”以及“解放西藏”。</p>
|
||||
|
||||
<p>Doyon 仍然处于藏匿状态。甚至连他的律师 Jay Leiderman 也不知道他在哪里。Leiderman 表示,除了在圣克鲁斯受到的指控,Doyon 很有可能因为攻击了 PayPal 和奥兰多而面临新的指控。一旦他被捕,所有的刑期加起来,他的余生就要在监狱里度过了。借鉴 Edward Snowden 的先例,他希望申请去俄罗斯避难。我们谈话时,他用一支点燃的香烟在他的公寓里比划着。“这里比他【哔~】的牢房强多了吧?我绝对不会出去,”他愤愤道。“我不会再联系我的家人了....这是相当高的代价,但我必须这么做,我会尽我的努力让所有人活得自由、明白。”</p>
|
||||
<p>Doyon 仍然处于藏匿状态。甚至连他的律师 Jay Leiderman 也不知道他在哪里。Leiderman 表示,除了在圣克鲁斯受到的指控,Doyon 很有可能因为攻击了 PayPal 和奥兰多而面临新的指控。一旦他被捕,所有的刑期加起来,他的余生就要在监狱里度过了。借鉴 Edward Snowden 的先例,他希望申请去俄罗斯避难。我们谈话时,他用一支点燃的香烟在他的公寓里比划着。“这里比【哔~】的牢房强多了吧?我绝对不会出去,”他愤愤道。“我不会再联系我的家人了....这是相当高的代价,但我必须这么做,我会尽我的努力让所有人活得自由、明白。”</p>
|
||||
|
||||
|
||||
|
||||
@ -372,6 +372,6 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
|
||||
|
||||
<p>作者:<a href="http://www.newyorker.com/contributors/david-kushner">David Kushner</a></p>
|
||||
<p>译者:<a href="https://github.com/SteveArcher">SteveArcher</a></p>
|
||||
<p>校对:<a href="https://github.com/校对者ID">校对者ID</a></p>
|
||||
<p>校对:<a href="https://github.com/carolinewuyan">Caroline</a></p>
|
||||
|
||||
<p>本文由 <a href="https://github.com/LCTT/TranslateProject">LCTT</a> 原创翻译,<a href="http://linux.cn/">Linux中国</a>荣誉推出</p>
|
@ -1,12 +1,12 @@
|
||||
Jelly Conky给你的Linux桌面加入了简约、时尚的状态
|
||||
Jelly Conky为你的Linux桌面带来简约、时尚的状态信息
|
||||
================================================================================
|
||||
**我把Conky设置成有点像壁纸:我会找出一张我喜欢的,只在下一周更换因为我厌倦了并且想要一点改变。**
|
||||
**我把Conky当成壁纸一样使用:我会找出一个我喜欢的样式,下一周当我厌烦了想要一点小改变时我就更换另外一个样式。**
|
||||
|
||||
不耐烦的一部分原因是由于日益增长的设计目录。我最近最喜欢的是Jelly Conky。
|
||||
不断更换样式的部分原因是由于日益增多的样式目录。我最近最喜欢的样式是Jelly Conky。
|
||||
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/jelly-conky.png)
|
||||
|
||||
我们最近强调的许多Conky所夸耀的最小设计都遵循了。它并不想成为一个厨房水槽。它不会被那些需要一眼需要看到他们硬盘温度和IP地址的人所青睐
|
||||
Jelly Conky遵循了许多我们推荐的Conky风格采用的最小设计原则。它并不想成为一个大杂烩。它不会被那些喜欢一眼就能看到他们硬盘温度和IP地址的人所青睐。
|
||||
|
||||
它配备了三种不同的模式,它们都可以添加个性的或者静态背景图像:
|
||||
|
||||
@ -16,9 +16,9 @@ Jelly Conky给你的Linux桌面加入了简约、时尚的状态
|
||||
|
||||
一些人不理解为什么要在桌面上拥有重复的时钟。这是很好理解的。对于我而言,这不仅仅是功能(虽然,个人而言,Conky的时钟比挤在上部面板上那渺小的数字要更容易看清)。
|
||||
|
||||
机会是如果你的Android主屏幕有一个时间小部件的话,你不会介意在你的桌面上也有这么一个
|
||||
|
||||
我想如果你的Android主屏幕有一个时间小部件的话,你不会介意在你的桌面上也有这么一个的,对吧?
|
||||
|
||||
你可以从下述链接下载Jelly Conky,zip 包里面有一个说明如何安装的 readme 文件。如果希望看到完整的教程,可以[参考我们的前一篇文章][3]。
|
||||
- [从Deviant Art上下载 Jelly Conky][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -27,10 +27,11 @@ via: http://www.omgubuntu.co.uk/2014/09/jelly-conky-for-linux-desktop
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:http://www.omgubuntu.co.uk/2014/07/conky-circle-theme-nod-lg-quick-cover
|
||||
[2]:http://zagortenay333.deviantart.com/art/Jelly-Conky-442559003
|
||||
[2]:http://zagortenay333.deviantart.com/art/Jelly-Conky-442559003
|
||||
[3]:http://www.omgubuntu.co.uk/2014/07/conky-circle-theme-nod-lg-quick-cover
|
@ -0,0 +1,37 @@
|
||||
Red Hat公司8200万美元收购FeedHenry来推动移动开发
|
||||
================================================================================
|
||||
> 这是Red Hat公司进入移动开发领域的一次关键收购。
|
||||
|
||||
Red Hat公司的JBoss开发者工具事业部一直专注于企业开发,而很少涉足移动领域。如今,随着Red Hat公司宣布以8200万美元现金收购移动开发供应商 [FeedHenry][1],这一切将开始发生改变。这笔交易预计将在Red Hat公司2015财年的第三季度完成。
|
||||
|
||||
Red Hat公司的中间件总经理Mike Piech说当交易结束后FeedHenry公司的员工将会变成Red Hat公司的员工。
|
||||
|
||||
FeedHenry公司的开发平台能让应用开发者快速地开发出Android、iOS、Windows Phone以及黑莓的移动应用。FeedHenry的平台基于Node.js编程架构,而那不是过去JBoss涉足较多的领域。
|
||||
|
||||
"这次对FeedHenry公司的收购显著地提高了我们对于Node.js的支持与衔接。" Piech说。
|
||||
|
||||
Red Hat公司的平台即服务(PaaS)技术OpenShift已经有了一个Node.js的cartridge组件。此外,Red Hat企业版Linux也将Node.js的技术预览版作为Red Hat Software Collections的一部分一并发布。
|
||||
|
||||
尽管Node.js本身是开源的,但FeedHenry公司目前并非所有的技术都以开源许可证发布。正如Red Hat一贯以来的政策,它现在也承诺会将FeedHenry的技术开源。
|
||||
|
||||
"我们完成了收购,那么开源我们所收购的技术就是公司的首要任务,并且我们没有理由因Feedhenry而例外。"Piech说。
|
||||
|
||||
Red Hat公司上一次对主要非开源技术公司的收购,是在2012年以1.04亿美元收购 [ManageIQ][2] 公司。在今年的5月份,Red Hat公司启动了ManageIQ开源项目,开放了之前闭源的云管理技术的开发和代码。
|
||||
|
||||
从整合的角度来看,Red Hat公司尚未提供关于FeedHenry将如何融入其产品体系的完整细节。
|
||||
|
||||
"我们已经确定了一些FeedHenry公司和我们已经存在的技术和产品能很好地相互融合和集成的范围," Piech说,"我们会在接下来的90天内分享更多我们发展蓝图的细节。"
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.datamation.com/mobile-wireless/red-hat-acquires-feedhenry-for-82-million-to-advance-mobile-development.html
|
||||
|
||||
作者:[Sean Michael Kerner][a]
|
||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.datamation.com/author/Sean-Michael-Kerner-4807810.html
|
||||
[1]:http://www.feedhenry.com/
|
||||
[2]:http://www.datamation.com/cloud-computing/red-hat-makes-104-million-cloud-management-bid-with-manageiq-acquisition.html
|
@ -6,84 +6,83 @@
|
||||
|
||||
这是初学者经常会问的一个问题。在这里,我会告诉你们 10 个我最喜欢的博客,这些博客可以帮助我们解决问题,让我们及时了解所有 Ubuntu 版本的更新消息。不,我谈论的不是通常的 Linux 和 shell 脚本一类的东西。我说的是流畅的 Linux 桌面体验,以及一个普通用户想要了解的 Ubuntu 使用经验。
|
||||
|
||||
这些网站帮助你解决你正遇到的问题,提醒你关注各种应用和提供给你来自 Ubuntu 世界的最新消息。这个网站可以让你对 Ubuntu 更了解,所以,下面列出的10个我最喜欢的网站覆盖 Ubuntu 的方方面面。
|
||||
这些网站帮助你解决你正遇到的问题,提醒你关注各种应用和提供给你来自 Ubuntu 世界的最新消息。这些网站可以让你对 Ubuntu 更加了解,所以,下面列出的是10个我最喜欢的博客,它们包括了 Ubuntu 的方方面面。
|
||||
|
||||
### 10个Ubuntu用户一定要知道的博客 ###
|
||||
|
||||
从我开始在 itsfoss 网站上写作开始,我特意把他排除在外,没有列入名单。我也并没有把[Planet Ubuntu][1]列入名单,因为他不适合初学的同学。废话不多说,让我们一起来看下**最好的乌邦图(ubuntu)博客**(排名不分先后):
|
||||
从我开始在 itsfoss 网站上写作开始,我特意把它排除在外,没有列入名单。我也并没有把[Planet Ubuntu][1]列入名单,因为它不适合初学者。废话不多说,让我们一起来看下**最好的乌邦图(ubuntu)博客**(排名不分先后):
|
||||
|
||||
### [OMG! Ubuntu!][2] ###
|
||||
|
||||
这是一个只针对 ubuntu 爱好者的网站。任何和乌邦图有关系的想法,不管成不成熟,OMG!Ubuntu上都会有收集!他主要包括新闻和应用。你也可以再这里找到一些关于 Ubuntu 的教程,但是不是很多。
|
||||
这是一个只针对 ubuntu 爱好者的网站。无论多小,只要是和乌邦图有关系的,OMG!Ubuntu 都会收入站内!博客主要包括新闻和应用。你也可以再这里找到一些关于 Ubuntu 的教程,但不是很多。
|
||||
|
||||
这个博客会让你知道 Ubuntu 的世界是怎么样的。
|
||||
这个博客会让你知道 Ubuntu 世界发生的各种事情。
|
||||
|
||||
### [Web Upd8][3] ###
|
||||
|
||||
Web Upd8 是我最喜欢的博客。除了涵盖新闻,他有很多容易理解的教程。Web Upd8 还维护了几个PPAs。博主[Andrei][4]有时会在评论里回答你的问题,这对你来说也会是很有帮助的。
|
||||
Web Upd8 是我最喜欢的博客。除了涵盖新闻,它有很多容易理解的教程。Web Upd8 还维护了几个PPAs。博主[Andrei][4]有时会在评论里回答你的问题,这对你来说也会是很有帮助的。
|
||||
|
||||
一个你可以追新闻资讯和教程的网站。
|
||||
这是一个你可以了解新闻资讯,学习教程的网站。
|
||||
|
||||
### [Noobs Lab][5] ###
|
||||
|
||||
和Web Upd8一样,Noobs Lab上也有很多教程,新闻,并且它可能是PPA里最大的主题和图标集。
|
||||
和Web Upd8一样,Noobs Lab上也有很多教程和新闻,并且它的PPA中可能收录了最多的主题和图标。
|
||||
|
||||
如果你是个小白,跟着Noobs Lab。
|
||||
如果你是个新手,去Noobs Lab看看吧。
|
||||
|
||||
### [Linux Scoop][6] ###
|
||||
|
||||
这里,大多数的博客都是“文字博客”。你通过看说明和截图来学习教程。而 Linux Scoop 上有很多录像来帮助初学者来学习,是一个实实在在的录像博客。
|
||||
大多数的博客都是“文字博客”。你通过看说明和截图来学习教程。而 Linux Scoop 上有很多录像来帮助初学者来学习,完全是一个视频博客。
|
||||
|
||||
如果你更喜欢看,而不是阅读的话,Linux Scoop应该是最适合你的。
|
||||
比起阅读来,如果你更喜欢视频,Linux Scoop应该是最适合你的。
|
||||
|
||||
### [Ubuntu Geek][7] ###
|
||||
|
||||
这是一个相对比较老的博客。覆盖面很广,并且有很多快速安装的教程和说明。虽然,有时我发现其中的一些教程文章缺乏深度,当然这也许只是我个人的观点。
|
||||
|
||||
想要快速的小贴士,去Ubuntu Geek。
|
||||
想要快速小贴士,去Ubuntu Geek。
|
||||
|
||||
### [Tech Drive-in][8] ###
|
||||
|
||||
这个网站的更新好像没有以前那么勤快了,可能是 Manuel 在忙于他的工作,但是仍然给我们提供了很多的东西。新闻,教程,应用评论是这个博客的重点。
|
||||
这个网站的更新频率好像没有以前那么快了,可能是 Manuel 在忙于他的工作,但是仍然给我们提供了很多的东西。新闻,教程,应用评论是这个博客的亮点。
|
||||
|
||||
博客经常被收入到[Ubuntu的新闻邮件请求][9],Tech Drive-in肯定是一个很值得你去追的网站。
|
||||
这个博客的文章经常被收录进 [Ubuntu 新闻简报][9],Tech Drive-in 绝对是一个很值得你关注的网站。
|
||||
|
||||
### [UbuntuHandbook][10] ###
|
||||
|
||||
快速小贴士,新闻和教程是UbuntuHandbook的USP。[Ji m][11]最近也在参与维护一些PPAS。我必须很认真的说,这个站界面其实可以做得更好看点,纯属个人观点。
|
||||
快速小贴士、新闻和教程是UbuntuHandbook的最大卖点。[Ji m][11]最近也在参与维护一些PPA。我必须很认真地说,这个博客的页面其实可以做得更好看点,纯属个人观点。
|
||||
|
||||
UbuntuHandbook 真的很方便。
|
||||
|
||||
### [Unixmen][12] ###
|
||||
|
||||
这个网站是由很多人一起维护的,而且并不仅仅局限于Ubuntu,它也覆盖了很多的其他的Linux发行版。他用他自己的方式来帮助用户。
|
||||
这个网站是由很多人一起维护的,而且并不仅仅局限于Ubuntu,它也覆盖了很多的其他的Linux发行版。它有自己的论坛来帮助用户。
|
||||
|
||||
紧跟 Unixmen 的步伐吧。
|
||||
|
||||
### [The Mukt][13] ###
|
||||
|
||||
The Mukt是Muktware新的代表。Muktware是一个逐渐消亡的Linux组织,并以Mukt重生。Muktware是一个很严谨的Linux开源的博客,The Mukt涉及很多广泛的主题,包括,科技新闻,古怪的新闻,有时还有娱乐新闻(听起来是否有一种混搭风的感觉?)The Mukt也包括很多Ubuntu的新闻,有些可能是你感兴趣的。
|
||||
The Mukt是Muktware新的代表。Muktware是一个逐渐消亡的Linux组织,并以Mukt重生。Muktware是一个很严谨的Linux开源的博客,The Mukt涉及很多广泛的主题,包括,科技新闻,极客新闻,有时还有娱乐新闻(听起来是否有一种混搭风的感觉?)The Mukt也包括很多你感兴趣的Ubuntu新闻。
|
||||
|
||||
The Mukt 不仅仅是一个博客,它是一种文化潮流。
|
||||
|
||||
### [LinuxG][14] ###
|
||||
|
||||
LinuxG是一个你可以找到所有关于“怎样安装”文章的站点。几乎所有的文章都开始于一句话“你好,Linux geeksters,正如你所知道的。。。”,博客可以在不同的主题上做得更好。我经常发现有些是文章缺乏深度,并且是急急忙忙写出来的,但是它仍然是一个关注应用更新的好地方。
|
||||
LinuxG是一个你可以找到所有关于“怎样安装”类型文章的站点。几乎所有的文章都开始于一句话“你好,Linux geeksters,正如你所知道的……”,博客可以在不同的主题上做得更好。我经常发现有些文章缺乏深度,并且是急急忙忙写出来的,但是它仍然是一个关注应用最新版本的好地方。
|
||||
|
||||
它很好的平衡了新的应用和他们最新的版本。
|
||||
这是个快速了解新应用及其最新版本的好地方。
|
||||
|
||||
### 你还有什么好的站点吗? ###
|
||||
|
||||
This was my list of best Ubuntu blogs which I regularly follow. I know there are plenty more out there, perhaps better than some of those listed here. So why don’t you mention your favorite Ubuntu blog in the comment section below?
|
||||
这些就是我平时经常浏览的 Ubuntu 博客。我知道还有很多我不知道的站点,可能会比我列出来的这些更好。所以,欢迎把你最喜爱的 Ubuntu 博客写在下面评论区。
|
||||
|
||||
这些就是我平时经常浏览的 Ubuntu 博客。我知道还有很多我不知道的站点,可能会比我列出来的这些更好。所以,欢迎把你最喜爱的 Ubuntu 博客在下面评论的位置写出来。
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/ten-blogs-every-ubuntu-user-must-follow/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[barney-ro](https://github.com/barney-ro)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,15 +1,14 @@
|
||||
Canonical在Ubuntu 14.04 LTS中关闭了一个nginx漏洞
|
||||
Canonical解决了一个Ubuntu 14.04 LTS中的nginx漏洞
|
||||
================================================================================
|
||||
> 用户不得不升级他们的系统来修复这个漏洞
|
||||
> 用户应该更新他们的系统来修复这个漏洞!
|
||||
|
||||
![Ubuntu 14.04 LTS](http://i1-news.softpedia-static.com/images/news2/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677-2.jpg)
|
||||
<center>![Ubuntu 14.04 LTS](http://i1-news.softpedia-static.com/images/news2/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677-2.jpg)</center>
|
||||
|
||||
Ubuntu 14.04 LTS
|
||||
<center>*Ubuntu 14.04 LTS*</center>
|
||||
|
||||
**Canonical已经在安全公告中公布了这个影响到Ubuntu 14.04 LTS (Trusty Tahr)的nginx漏洞的细节。这个问题已经被确定并被修复了**
|
||||
|
||||
Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可能已经被用来暴露网络上的敏感信息。
|
||||
|
||||
Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可能已经被利用来暴露网络上的敏感信息。
|
||||
|
||||
根据安全公告,“Antoine Delignat-Lavaud和Karthikeyan Bhargavan发现nginx错误地重复使用了缓存的SSL会话。攻击者可能利用此问题,在特定的配置下,可以从不同的虚拟主机获得信息“。
|
||||
|
||||
@ -23,13 +22,14 @@ Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可
|
||||
sudo apt-get dist-upgrade
|
||||
|
||||
在一般情况下,一个标准的系统更新将会进行必要的更改。要应用此修补程序您不必重新启动计算机。
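更新完成后,如果想确认系统当前安装的 nginx 软件包版本,可以参考下面的示例命令(这只是一种常见做法的示例,并非官方公告的内容;具体包名请以实际安装情况为准):

    # 查看仓库中可用及本机已安装的 nginx 版本
    apt-cache policy nginx
    # 列出本机已安装的 nginx 相关软件包
    dpkg -l | grep nginx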
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://news.softpedia.com/news/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677.shtml
|
||||
|
||||
作者:[Silviu Stahie][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,43 @@
|
||||
文件管理器 Wal Commander GitHub 版 0.17 发布了
|
||||
================================================================================
|
||||
![](http://wcm.linderdaum.com/wp-content/uploads/2014/09/wc21.png)
|
||||
|
||||
> ### 描述 ###
|
||||
>
|
||||
> Wal Commander GitHub 版是一款多平台的开源文件管理器,适用于Windows、Linux、FreeBSD和OS X。
|
||||
>
|
||||
> 这个项目的目的是创建一个模仿Far管理器外观和感觉的便携式文件管理器。
|
||||
|
||||
Wal Commander GitHub 版的下一个稳定版本 0.17 已经发布了。主要功能包括:
|
||||
|
||||
- 使用命令历史自动补全;
|
||||
- 文件关联,可将自定义命令绑定到对文件的各种操作上;
|
||||
- 以及通过XQuartz对OS X的实验性支持。
|
||||
|
||||
此版本还添加了很多新的快捷键。预编译的二进制文件适用于 Windows x64;Linux、FreeBSD 和 OS X 版本可以直接从 [GitHub 中的源代码][1] 编译。
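如果想在 Linux、FreeBSD 或 OS X 上自行编译,大致可以像下面这样先取得源代码(这里只是一个粗略的示意,并非官方给出的完整构建步骤;具体的编译方法请以仓库中自带的说明文件为准):

    # 克隆源代码仓库,然后查看仓库内容和随附的构建说明
    git clone https://github.com/corporateshark/WalCommander.git
    cd WalCommander
    ls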
|
||||
|
||||
### 主要特性 ###
|
||||
|
||||
- 命令行自动补全 (使用Del键删除一条命令)
|
||||
- 文件关联 (主菜单 -> 命令 -> 文件关联)
|
||||
- XQuartz上实验性地支持OS X ([https://github.com/corporateshark/WalCommander/issues/5][2])
|
||||
|
||||
### 下载 ###
|
||||
|
||||
下载:[http://wcm.linderdaum.com/downloads/][3]
|
||||
源代码: [https://github.com/corporateshark/WalCommander][4]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://wcm.linderdaum.com/release-0-17-0/
|
||||
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://github.com/corporateshark/WalCommander/releases
|
||||
[2]:https://github.com/corporateshark/WalCommander/issues/5
|
||||
[3]:http://wcm.linderdaum.com/downloads/
|
||||
[4]:https://github.com/corporateshark/WalCommander
|
@ -1,38 +0,0 @@
|
||||
Translating by ZTinoZ
|
||||
Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development
|
||||
================================================================================
|
||||
> Red Hat jumps into the mobile development sector with a key acquisition.
|
||||
|
||||
Red Hat's JBoss developer tools division has always focused on enterprise development, but hasn't always been focused on mobile. Today that will start to change as Red Hat announced its intention to acquire mobile development vendor [FeedHenry][1] for $82 million in cash. The deal is set to close in the third quarter of Red Hat's fiscal 2015. Red Hat is set to disclose its second quarter fiscal 2015 earning at 4 ET today.
|
||||
|
||||
Mike Piech, general manager of Middleware at Red Hat, told Datamation that upon the deal's closing FeedHenry's employees will become Red Hat employees
|
||||
|
||||
FeedHenry's development platform enables application developers to rapidly build mobile application for Android, IOS, Windows Phone and BlackBerry. The FeedHenry platform leverages Node.js programming architecture, which is not an area where JBoss has had much exposure in the past.
|
||||
|
||||
"The acquisition of FeedHenry significantly expands Red Hat's support for and engagement in Node.js," Piech said.
|
||||
|
||||
Piech Red Hat's OpenShift Platform-as-a-Service (PaaS) technology already has a Node.js cartridge. Additionally Red Hat Enterprise Linux ships a tech preview of node.js as part of the Red Hat Software Collections.
|
||||
|
||||
While node.js itself is open source, not all of FeedHenry's technology is currently available under an open source license. As has been Red Hat's policy throughout its entire history, it is now committing to making FeedHenry open source as well.
|
||||
|
||||
"As we've done with other acquisitions, open sourcing the technology we acquire is a priority for Red Hat, and we have no reason to expect that approach will change with FeedHenry," Piech said.
|
||||
|
||||
Red Hat's last major acquisition of a company with non open source technology was with [ManageIQ][2] for $104 million back in 2012. In May of this year, Red Hat launched the ManageIQ open-source project, opening up development and code of the formerly closed-source cloud management technology.
|
||||
|
||||
From an integration standpoint, Red Hat is not yet providing full details of precisely where FeedHenry will fit it.
|
||||
|
||||
"We've already identified a number of areas where FeedHenry and Red Hat's existing technology and products can be better aligned and integrated," Piech said. "We'll share more details as we develop the roadmap over the next 90 days."
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.datamation.com/mobile-wireless/red-hat-acquires-feedhenry-for-82-million-to-advance-mobile-development.html
|
||||
|
||||
作者:[Sean Michael Kerner][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.datamation.com/author/Sean-Michael-Kerner-4807810.html
|
||||
[1]:http://www.feedhenry.com/
|
||||
[2]:http://www.datamation.com/cloud-computing/red-hat-makes-104-million-cloud-management-bid-with-manageiq-acquisition.html
|
@ -1,36 +0,0 @@
|
||||
Wal Commander GitHub Edition 0.17 released
|
||||
================================================================================
|
||||
![](http://wcm.linderdaum.com/wp-content/uploads/2014/09/wc21.png)
|
||||
|
||||
> ### Description ###
|
||||
>
|
||||
> Wal Commander GitHub Edition is a multi-platform open source file manager for Windows, Linux, FreeBSD and OS X.
|
||||
>
|
||||
> The purpose of this project is to create a portable file manager mimicking the look-n-feel of Far Manager.
|
||||
|
||||
The next stable version of our Wal Commander GitHub Edition 0.17 is out. Major features include command line autocomplete using the commands history; file associations to bind custom commands to different actions on files; and experimental support of OS X using XQuartz. A lot of new hotkeys were added in this release. Precompiled binaries are available for Windows x64. Linux, FreeBSD and OS X versions can be built directly from the [GitHub source code][1].
|
||||
|
||||
### Major features ###
|
||||
|
||||
- command line autocomplete (use Del key to erase a command)
|
||||
- file associations (Main menu -> Commands -> File associations)
|
||||
- experimental OS X support on top of XQuartz ([https://github.com/corporateshark/WalCommander/issues/5][2])
|
||||
|
||||
### [Downloads][3] ###
|
||||
|
||||
Source code: [https://github.com/corporateshark/WalCommander][4]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://wcm.linderdaum.com/release-0-17-0/
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://github.com/corporateshark/WalCommander/releases
|
||||
[2]:https://github.com/corporateshark/WalCommander/issues/5
|
||||
[3]:http://wcm.linderdaum.com/downloads/
|
||||
[4]:https://github.com/corporateshark/WalCommander
|
@ -1,92 +0,0 @@
|
||||
Making MySQL Better at GitHub
|
||||
================================================================================
|
||||
> At GitHub we say, "it's not fully shipped until it's fast." We've talked before about some of the ways we keep our [frontend experience speedy][1], but that's only part of the story. Our MySQL database infrastructure dramatically affects the performance of GitHub.com. Here's a look at how our infrastructure team seamlessly conducted a major MySQL improvement last August and made GitHub even faster.
|
||||
|
||||
### The mission ###
|
||||
|
||||
Last year we moved the bulk of GitHub.com's infrastructure into a new datacenter with world-class hardware and networking. Since MySQL forms the foundation of our backend systems, we expected database performance to benefit tremendously from an improved setup. But creating a brand-new cluster with brand-new hardware in a new datacenter is no small task, so we had to plan and test carefully to ensure a smooth transition.
|
||||
|
||||
### Preparation ###
|
||||
|
||||
A major infrastructure change like this requires measurement and metrics gathering every step of the way. After installing base operating systems on our new machines, it was time to test out our new setup with various configurations. To get a realistic test workload, we used tcpdump to extract SELECT queries from the old cluster that was serving production and replayed them onto the new cluster.
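The post doesn't spell out the exact commands behind that capture-and-replay step, but a minimal sketch might look like the following, assuming Percona Toolkit's pt-query-digest is available (the interface, packet count and file names here are illustrative, not GitHub's actual tooling):

    # Capture a sample of production MySQL traffic on port 3306 ...
    sudo tcpdump -i any -s 65535 -x -nn -q -tttt -c 100000 port 3306 > mysql.tcp.txt

    # ... then distill just the SELECT statements into slow-log format for later replay
    pt-query-digest --type tcpdump mysql.tcp.txt \
      --filter '$event->{fingerprint} =~ m/^select/' \
      --output slowlog > selects-to-replay.log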
|
||||
|
||||
MySQL tuning is very workload specific, and well-known configuration settings like innodb_buffer_pool_size often make the most difference in MySQL's performance. But on a major change like this, we wanted to make sure we covered everything, so we took a look at settings like innodb_thread_concurrency, innodb_io_capacity, and innodb_buffer_pool_instances, among others.
|
||||
|
||||
We were careful to only make one test configuration change at a time, and to run tests for at least 12 hours. We looked for query response time changes, stalls in queries per second, and signs of reduced concurrency. We observed the output of SHOW ENGINE INNODB STATUS, particularly the SEMAPHORES section, which provides information on work load contention.
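As a rough illustration (not the exact tooling used here), the relevant section can be pulled out of the monitor output with something like:

    # Print the SEMAPHORES section of the InnoDB monitor output
    mysql -e 'SHOW ENGINE INNODB STATUS\G' | grep -A 15 'SEMAPHORES'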
|
||||
|
||||
Once we were relatively comfortable with configuration settings, we started migrating one of our largest tables onto an isolated cluster. This served as an early test of the process, gave us more space in the buffer pools of our core cluster and provided greater flexibility for failover and storage. This initial migration introduced an interesting application challenge, as we had to make sure we could maintain multiple connections and direct queries to the correct cluster.
|
||||
|
||||
In addition to all our raw hardware improvements, we also made process and topology improvements: we added delayed replicas, faster and more frequent backups, and more read replica capacity. These were all built out and ready for go-live day.
|
||||
|
||||
### Making a list; checking it twice ###
|
||||
|
||||
With millions of people using GitHub.com on a daily basis, we did not want to take any chances with the actual switchover. We came up with a thorough [checklist][2] before the transition:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4116929/13fc6f50-328b-11e4-837b-922aad3055a8.png)
|
||||
|
||||
We also planned a maintenance window and [announced it on our blog][3] to give our users plenty of notice.
|
||||
|
||||
### Migration day ###
|
||||
|
||||
At 5am Pacific Time on a Saturday, the migration team assembled online in chat and the process began:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060850/39f52cd4-2df3-11e4-9aca-1f54a4870d24.png)
|
||||
|
||||
We put the site in maintenance mode, made an announcement on Twitter, and set out to work through the list above:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060864/54ff6bac-2df3-11e4-95da-b059c0ec668f.png)
|
||||
|
||||
**13 minutes** later, we were able to confirm operations of the new cluster:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060870/6a4c0060-2df3-11e4-8dab-654562fe628d.png)
|
||||
|
||||
Then we flipped GitHub.com out of maintenance mode, and let the world know that we were in the clear.
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060878/79b9884c-2df3-11e4-98ed-d11818c8915a.png)
|
||||
|
||||
Lots of up front testing and preparation meant that we kept the work we needed on go-live day to a minimum.
|
||||
|
||||
### Measuring the final results ###
|
||||
|
||||
In the weeks following the migration, we closely monitored performance and response times on GitHub.com. We found that our cluster migration cut the average GitHub.com page load time by half and the 99th percentile by *two-thirds*:
|
||||
|
||||
![](https://cloud.githubusercontent.com/assets/1155781/4060886/9106e54e-2df3-11e4-8fda-a4c64c229ba1.png)
|
||||
|
||||
### What we learned ###
|
||||
|
||||
#### Functional partitioning ####
|
||||
|
||||
During this process we decided that moving larger tables that mostly store historic data to separate cluster was a good way to free up disk and buffer pool space. This allowed us to leave more resources for our "hot" data, splitting some connection logic to enable the application to query multiple clusters. This proved to be a big win for us and we are working to reuse this pattern.
|
||||
|
||||
#### Always be testing ####
|
||||
|
||||
You can never do too much acceptance and regression testing for your application. Replicating data from the old cluster to the new cluster while running acceptance tests and replaying queries were invaluable for tracing out issues and preventing surprises during the migration.
|
||||
|
||||
#### The power of collaboration ####
|
||||
|
||||
Large changes to infrastructure like this mean a lot of people need to be involved, so pull requests functioned as our primary point of coordination as a team. We had people all over the world jumping in to help.
|
||||
|
||||
Deploy day team map:
|
||||
|
||||
<iframe width="620" height="420" frameborder="0" src="https://render.githubusercontent.com/view/geojson?url=https://gist.githubusercontent.com/anonymous/5fa29a7ccbd0101630da/raw/map.geojson"></iframe>
|
||||
|
||||
This created a workflow where we could open a pull request to try out changes, get real-time feedback, and see commits that fixed regressions or errors -- all without phone calls or face-to-face meetings. When everything has a URL that can provide context, it's easy to involve a diverse range of people and make it simple for them to give feedback.
|
||||
|
||||
### One year later.. ###
|
||||
|
||||
A full year later, we are happy to call this migration a success — MySQL performance and reliability continue to meet our expectations. And as an added bonus, the new cluster enabled us to make further improvements towards greater availability and query response times. I'll be writing more about those improvements here soon.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/blog/1880-making-mysql-better-at-github
|
||||
|
||||
作者:[samlambert][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/samlambert
|
||||
[1]:https://github.com/blog/1756-optimizing-large-selector-sets
|
||||
[2]:https://help.github.com/articles/writing-on-github#task-lists
|
||||
[3]:https://github.com/blog/1603-site-maintenance-august-31st-2013
|
@ -1,89 +0,0 @@
|
||||
Drab Desktop? Try These 4 Beautiful Linux Icon Themes
|
||||
================================================================================
|
||||
**Ubuntu’s default icon theme [hasn’t changed much][1] in almost 5 years, save for the [odd new icon here and there][2]. If you’re tired of how it looks we’re going to show you a handful of gorgeous alternatives that will easily freshen things up.**
|
||||
|
||||
Do feel free to share links to your own favourite choices in the comments below.
|
||||
|
||||
### Captiva ###
|
||||
|
||||
![Captiva icons, elementary folders and Moka GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-and-captiva.jpg)
|
||||
|
||||
Captiva icons, elementary folders and Moka GTK
|
||||
|
||||
Captiva is a relatively new icon theme that even the least bling-prone user can appreciate.
|
||||
|
||||
Made by DeviantArt user ~[bokehlicia][3], Captiva shuns the 2D flat look of many current icon themes for a softer, rounded look. The icons themselves have an almost material or textured look, with subtle drop shadows and a rich colour palette adding to the charm.
|
||||
|
||||
It doesn’t yet include a set of its own folder icons, and will fallback to using elementary (if available) or stock Ubuntu icons.
|
||||
|
||||
To install Captiva icons in Ubuntu 14.04 you can add the official PPA by opening a new Terminal window and enter the following commands:
|
||||
|
||||
sudo add-apt-repository ppa:captiva/ppa
|
||||
|
||||
sudo apt-get update && sudo apt-get install captiva-icon-theme
|
||||
|
||||
Or, if you’re not into software source cruft, by downloading the icon pack direct from the DeviantArt page. To install, extract the archive and move the resulting folder to the ‘.icons‘ directory in Home.
|
||||
|
||||
However you choose to install it, you’ll need to apply this (and every other theme on this list) using a utility like [Unity Tweak Tool][4].
|
||||
|
||||
- [Captiva Icon Theme on DeviantArt][5]
|
||||
|
||||
### Square Beam ###
|
||||
|
||||
![Square Beam icon set with Orchis GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/squarebeam.jpg)
|
||||
|
||||
Square Beam icon set with Orchis GTK
|
||||
|
||||
After something a bit angular? Check out Square Beam. It offers a more imposing visual statement than other sets on this list, with electric colours, harsh gradients and stark iconography. It claims to have more than 30,000 different icons (!) included (you’ll forgive me for not counting) so you should find very few gaps in its coverage.
|
||||
|
||||
- [Square Beam Icon Theme on GNOME-Look.org][6]
|
||||
|
||||
### Moka & Faba ###
|
||||
|
||||
![Moka/Faba Mono Icons with Orchis GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-faba.jpg)
|
||||
|
||||
Moka/Faba Mono Icons with Orchis GTK
|
||||
|
||||
The Moka icon suite needs little introduction. In fact, I’d wager a good number of you are already using it
|
||||
|
||||
With pastel colours, soft edges and simple icon artwork, Moka is a truly standout and comprehensive set of application icons. It’s best used with its sibling, Faba, which Moka will inherit so as to fill in all the system icons, folders, panel icons, etc. The combined result is…well, you’ve got eyes!
|
||||
|
||||
For full details on how to install on Ubuntu head over to the official project website, link below.
|
||||
|
||||
- [Download Moka and Faba Icon Themes][7]
|
||||
|
||||
### Compass ###
|
||||
|
||||
![Compass Icon Theme with Numix Blue GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/compass1.jpg)
|
||||
|
||||
Compass Icon Theme with Numix Blue GTK
|
||||
|
||||
Last on our list, but by no means least, is Compass. This is a true adherent to the ’2D, two-tone’ UI design right now. It may not be as visually diverse as others on this list, but that’s the point. It’s consistent and uniform and all the better for it — just check out those folder icons!
|
||||
|
||||
It’s available to download and install manually through GNOME-Look (link below) or through the Nitrux Artwork PPA:
|
||||
|
||||
sudo add-apt-repository ppa:nitrux/nitrux-artwork
|
||||
|
||||
sudo apt-get update && sudo apt-get install compass-icon-theme
|
||||
|
||||
- [Compass Icon Theme on GNOME-Look.org][8]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2014/09/4-gorgeous-linux-icon-themes-download
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:http://www.omgubuntu.co.uk/2010/02/lucid-gets-new-icons-for-rhythmbox-ubuntuone-memenu-more
|
||||
[2]:http://www.omgubuntu.co.uk/2012/08/new-icon-theme-lands-in-lubuntu-12-10
|
||||
[3]:http://bokehlicia.deviantart.com/
|
||||
[4]:http://www.omgubuntu.co.uk/2014/06/unity-tweak-tool-0-7-development-download
|
||||
[5]:http://bokehlicia.deviantart.com/art/Captiva-Icon-Theme-479302805
|
||||
[6]:http://gnome-look.org/content/show.php/Square-Beam?content=165094
|
||||
[7]:http://mokaproject.com/moka-icon-theme/download/ubuntu/
|
||||
[8]:http://gnome-look.org/content/show.php/Compass?content=160629
|
@ -1,86 +0,0 @@
|
||||
What’s wrong with IPv4 and Why we are moving to IPv6
|
||||
================================================================================
|
||||
For the past 10 years or so, every year has been "the year" that IPv6 would finally become widespread. It hasn’t happened yet. Consequently, there is little widespread knowledge of what IPv6 is, how to use it, or why it is inevitable.
|
||||
|
||||
![IPv4 and IPv6 Comparison](http://www.tecmint.com/wp-content/uploads/2014/09/ipv4-ipv6.gif)
|
||||
|
||||
IPv4 and IPv6 Comparison
|
||||
|
||||
### What’s wrong with IPv4? ###
|
||||
|
||||
We’ve been using **IPv4** ever since RFC 791 was published in 1981. At the time, computers were big, expensive, and rare. IPv4 had provision for **4 billion IP** addresses, which seemed like an enormous number compared to the number of computers. Unfortunately, IP addresses are not used contiguously; there are gaps in the addressing. For example, a company might have an address space of **254 (2^8-2)** addresses, and only use 25 of them. The remaining 229 are reserved for future expansion. Those addresses cannot be used by anybody else, because of the way networks route traffic. Consequently, what seemed like a large number in 1981 is actually a small number in 2014.
|
||||
|
||||
The Internet Engineering Task Force (**IETF**) recognized this problem in the early 1990s and came up with two solutions: Classless Inter-Domain Routing (**CIDR**) and private IP addresses. Prior to the invention of CIDR, you could get one of three network sizes: **24 bits** (16,777,214 addresses), **20 bits** (1,048,574 addresses) and **16 bits** (65,534 addresses). Once CIDR was invented, it was possible to split networks into subnetworks.
|
||||
|
||||
So, for example, if you needed **5 IP** addresses, your ISP would give you a network with 3 bits of host space (a /29), which gives you **6** usable **IP** addresses. That allows your ISP to use addresses more efficiently. Private IP addresses allow you to create a network where each machine on the network can easily connect to another machine on the internet, but where it is very difficult for a machine on the internet to connect back to your machine. Your network is private, hidden. Your network could be very large, 16,777,214 addresses, and you could subnet your private network into smaller networks, so that you could manage your own addresses easily.
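If you want to sanity-check that arithmetic yourself, a tiny shell loop (nothing here is specific to any ISP) prints the usable host count for a few sizes:

    # usable addresses for n host bits = 2^n - 2 (network and broadcast are reserved)
    for bits in 3 8 16 24; do
        echo "$bits host bits -> $(( (1 << bits) - 2 )) usable addresses"
    done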
|
||||
|
||||
You are probably using a private address right now. Check your own IP address: if it is in the range of **10.0.0.0 – 10.255.255.255** or **172.16.0.0 – 172.31.255.255** or **192.168.0.0 – 192.168.255.255**, then you are using a private IP address. These two solutions helped forestall disaster, but they were stopgap measures and now the time of reckoning is upon us.
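If you would rather script that check than eyeball it, a rough sketch using the iproute2 'ip' tool (shipped on most modern Linux systems) could look like this:

    # print the IPv4 addresses on this host that fall in the private (RFC 1918) ranges
    ip -4 addr show \
        | grep -oE 'inet ([0-9]{1,3}\.){3}[0-9]{1,3}' \
        | cut -d' ' -f2 \
        | grep -E '^10\.|^192\.168\.|^172\.(1[6-9]|2[0-9]|3[01])\.'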
|
||||
|
||||
Another problem with **IPv4** is that the IPv4 header was variable length. That was acceptable when routing was done by software. But now routers are built with hardware, and processing the variable length headers in hardware is hard. The large routers that allow packets to go all over the world are having problems coping with the load. Clearly, a new scheme was needed with fixed length headers.
|
||||
|
||||
Still another problem with **IPv4** is that, when the addresses were allocated, the internet was an American invention. IP addresses for the rest of the world are fragmented. A scheme was needed to allow addresses to be aggregated somewhat by geography so that the routing tables could be made smaller.
|
||||
|
||||
Yet another problem with IPv4, and this may sound surprising, is that it is hard to configure, and hard to change. This might not be apparent to you, because your router takes care of all of these details for you. But these problems drive your ISP nuts.
|
||||
|
||||
All of these problems went into the consideration of the next version of the Internet.
|
||||
|
||||
### About IPv6 and its Features ###
|
||||
|
||||
The **IETF** unveiled the next generation of IP in December 1995. The new version was called IPv6 because the number 5 had been allocated to something else by mistake. Some of the features of IPv6 include:
|
||||
|
||||
- 128 bit addresses (3.402823669×10³⁸ addresses)
|
||||
- A scheme for logically aggregating addresses
|
||||
- Fixed length headers
|
||||
- A protocol for automatically configuring and reconfiguring your network.
|
||||
|
||||
Let’s look at these features one by one:
|
||||
|
||||
#### Addresses ####
|
||||
|
||||
The first thing everybody notices about **IPv6** is that the number of addresses is enormous. Why so many? The answer is that the designers were concerned about the inefficient organization of addresses, so there are so many available addresses that we could allocate inefficiently in order to achieve other goals. So, if you want to build your own IPv6 network, chances are that your ISP will give you a network of **64 bits** (1.844674407×10¹⁹ addresses) and let you subnet that space to your heart’s content.
|
||||
|
||||
#### Aggregation ####
|
||||
|
||||
With so many addresses to use, the address space can be allocated sparsely in order to route packets efficiently. So, your ISP gets a network space of **80 bits**. Of those 80 bits, 16 of them are for the ISP’s subnetworks, and 64 bits are for the customer’s networks. So, the ISP can have 65,534 networks.
|
||||
|
||||
However, that address allocation isn’t cast in stone, and if the ISP wants more, smaller networks, it can do that (although the ISP would probably simply ask for another 80-bit space). The upper 48 bits are further divided, so that ISPs that are “**close**” to one another have similar network address ranges, allowing the networks to be aggregated in the routing tables.
|
||||
|
||||
#### Fixed length Headers ####
|
||||
|
||||
An **IPv4** header has a variable length. An **IPv6** header always has a fixed length of 40 bytes. In IPv4, extra options caused the header to increase in size. In IPv6, if additional information is needed, that additional information is stored in extension headers, which follow the IPv6 header and are generally not processed by the routers, but rather by the software at the destination.
|
||||
|
||||
One of the fields in the IPv6 header is the flow. A flow is a **20 bit** number which is created pseudo-randomly, and it makes it easier for the routers to route packets. If a packet has a flow, then the router can use that flow number as an index into a table, which is fast, rather than a table lookup, which is slow. This feature makes **IPv6** very easy to route.
|
||||
|
||||
#### Automatic Configuration ####
|
||||
|
||||
In **IPv6**, when a machine first starts up, it checks the local network to see if any other machine is using its address. If the address is unused, then the machine next looks for an IPv6 router on the local network. If it finds the router, then it asks the router for an IPv6 address to use. Now, the machine is set and ready to communicate on the internet – it has an IP address for itself and it has a default router.
|
||||
|
||||
If the router should go down, then the machines on the network will detect the problem and repeat the process of looking for an IPv6 router, to find the backup router. That’s actually hard to do in IPv4. Similarly, if the router wants to change the addressing scheme on its network, it can. The machines will query the router from time to time and change their addresses automatically. The router will support both the old and new addresses until all of the machines have switched over to the new configuration.
|
||||
|
||||
IPv6 automatic configuration is not a complete solution. There are some other things that a machine needs in order to use the internet effectively: the name servers, a time server, perhaps a file server. So there is **dhcp6**, which does the same job as dhcp; but because the machine boots in a routable state, one dhcp daemon can service a large number of networks.
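On a Linux machine you can watch this autoconfiguration with a couple of read-only commands, again from iproute2; no particular interface name is assumed:

    ip -6 addr show                  # link-local and autoconfigured (SLAAC) addresses
    ip -6 route show | grep default  # the default router learned from router advertisements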
|
||||
|
||||
#### There’s one big problem ####
|
||||
|
||||
So if IPv6 is so much better than IPv4, why hasn’t adoption been more widespread (as of **May 2014**, Google estimates that its IPv6 traffic is about **4%** of its total traffic)? The basic problem is which comes first, the **chicken or the egg**? Somebody running a server wants the server to be as widely available as possible, which means it must have an **IPv4** address.
|
||||
|
||||
It could also have an IPv6 address, but few people would use it and you do have to change your software a little to accommodate IPv6. Furthermore, a lot of home networking routers do not support IPv6. A lot of ISPs do not support IPv6. I asked my ISP about it, and I was told that they will provide it when customers ask for it. So I asked how many customers had asked for it. One, including me.
|
||||
|
||||
By way of contrast, all of the major operating systems, Windows, OS X, and Linux support IPv6 “**out of the box**” and have for years. The operating systems even have software that will allow IPv6 packets to “**tunnel**” within IPv4 to a point where the IPv6 packets can be removed from the surrounding IPv4 packet and sent on their way.
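A quick way to see whether your own machine already has working IPv6 connectivity is simply to ping a well-known dual-stack host; the hostname below is only an example:

    ping6 -c 3 ipv6.google.com   # on newer systems "ping -6" works as well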
|
||||
|
||||
#### Conclusion ####
|
||||
|
||||
IPv4 has served us well for a long time. IPv4 has some limitations which are going to present insurmountable problems in the near future. IPv6 will solve those problems by changing the strategy for allocating addresses, making improvements to ease the routing of packets, and making it easier to configure a machine when it first joins the network.
|
||||
|
||||
However, acceptance and usage of IPv6 has been slow, because change is hard and expensive. The good news is that all operating systems support IPv6, so when you are ready to make the change, your computer will need little effort to convert to the new scheme.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/ipv4-and-ipv6-comparison/
|
||||
|
||||
作者:[Jeff Silverman][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/jeffsilverm/
|
@ -1,67 +0,0 @@
|
||||
barney-ro translating
|
||||
|
||||
7 killer open source monitoring tools
|
||||
================================================================================
|
||||
Looking for greater visibility into your network? Look no further than these excellent free tools
|
||||
|
||||
Network and system monitoring is a broad category. There are solutions that monitor for the proper operation of servers, network gear, and applications, and there are solutions that track the performance of those systems and devices, providing trending and analysis. Some tools will sound alarms and notifications when problems are detected, while others will even trigger actions to run when alarms sound. Here is a collection of open source solutions that aim to provide some or all of these capabilities.
|
||||
|
||||
### Cacti ###
|
||||
|
||||
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_02-netmon-cacti-100448914-orig.jpg)
|
||||
|
||||
Cacti is a very extensive performance graphing and trending tool that can be used to track just about any monitored metric that can be plotted on a graph. From disk utilization to fan speeds in a power supply, if it can be monitored, Cacti can track it -- and make that data quickly available.
|
||||
|
||||
### Nagios ###
|
||||
|
||||
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_03-netmon-nagios-100448915-orig.jpg)
|
||||
|
||||
Nagios is the old guard of system and network monitoring. It is fast, reliable, and extremely customizable. Nagios can be a challenge for newcomers, but the rather complex configuration is also its strength, as it can be adapted to just about any monitoring task. What it may lack in looks it makes up for in power and reliability.
|
||||
|
||||
### Icinga ###
|
||||
|
||||
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_04-netmon-icinga-100448916-orig.jpg)
|
||||
|
||||
Icinga is an offshoot of Nagios that is currently being rebuilt from the ground up. It offers a thorough monitoring and alerting framework that's designed to be as open and extensible as Nagios is, but with several different Web UI options. Icinga 1 is closely related to Nagios, while Icinga 2 is the rewrite. Both versions are currently supported, and Nagios users can migrate to Icinga 1 very easily.
|
||||
|
||||
### NeDi ###
|
||||
|
||||
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_05-netmon-nedi-100448917-orig.jpg)
|
||||
|
||||
NeDi may not be as well known as some of the others, but it's a great solution for tracking devices across a network. It continuously walks through a network infrastructure and catalogs devices, keeping track of everything it discovers. It can provide the current location of any device, as well as a history.
|
||||
|
||||
NeDi can be used to locate stolen or lost devices by alerting you if they reappear on the network. It can even display all known and discovered connections on a map, showing how every network interconnect is laid out, down to the physical port level.
|
||||
|
||||
### Observium ###
|
||||
|
||||
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_06-netmon-observium-100448918-orig.jpg)
|
||||
|
||||
Observium combines system and network monitoring with performance trending. It uses both static and auto discovery to identify servers and network devices, leverages a variety of monitoring methods, and can be configured to track just about any available metric. The Web UI is very clean, well thought out, and easy to navigate.
|
||||
|
||||
As shown, Observium can also display the physical location of monitored devices on a geographical map. Note too the heads-up panels showing active alarms and device counts.
|
||||
|
||||
### Zabbix ###
|
||||
|
||||
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_07-netmon-zabbix-100448919-orig.jpg)
|
||||
|
||||
Zabbix monitors servers and networks with an extensive array of tools. There are Zabbix agents for most operating systems, or you can use passive or external checks, including SNMP to monitor hosts and network devices. You'll also find extensive alerting and notification facilities, and a highly customizable Web UI that can be adapted to a variety of heads-up displays. In addition, Zabbix has specific tools that monitor Web application stacks and virtualization hypervisors.
|
||||
|
||||
Zabbix can also produce logical interconnection diagrams detailing how certain monitored objects are interconnected. These maps are customizable, and maps can be created for groups of monitored devices and hosts.
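As a small illustration of how light the agent side is, getting an agent onto a Debian-family host can be as simple as the sketch below, assuming your distribution packages Zabbix (Debian and Ubuntu do); the configuration path shown is the package default:

    sudo apt-get install zabbix-agent
    # point the agent at your Zabbix server, then restart it
    sudo nano /etc/zabbix/zabbix_agentd.conf
    sudo service zabbix-agent restart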
|
||||
|
||||
### Ntop ###
|
||||
|
||||
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_08-netmon-ntop-100448920-orig.jpg)
|
||||
|
||||
Ntop is a packet sniffing tool with a slick Web UI that displays live data on network traffic passing by a monitoring interface. Instant data on network flows is available through an advanced live graphing function. Host data flows and host communication pair information is also available in real-time.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.networkworld.com/article/2686794/asset-management/164219-7-killer-open-source-monitoring-tools.html
|
||||
|
||||
作者:[Paul Venezia][a]
|
||||
译者:[barney-ro](https://github.com/barney-ro)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.networkworld.com/author/Paul-Venezia/
|
@ -0,0 +1,64 @@
|
||||
What is a good subtitle editor on Linux
|
||||
================================================================================
|
||||
If you watch foreign movies regularly, chances are you prefer having subtitles rather than the dub. Having grown up in France, I know that most Disney movies during my childhood sounded weird because of the French dub. While I now have the chance to watch them in their original versions, I know that for a lot of people subtitles are still required. I even find myself making subtitles for my family sometimes. Luckily for me, Linux is not devoid of fancy, open source subtitle editors. In short, here is a non-exhaustive list of open source subtitle editors for Linux. Share your opinion on which one you think is the best subtitle editor.
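If you would like to try several of these side by side, most of the editors covered below are in the standard repositories; on Ubuntu or Debian something like the following should pull in the first four (package names may differ slightly on other distributions):

    sudo apt-get install gnome-subtitles aegisub gaupol subtitleeditor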
|
||||
|
||||
### 1. Gnome Subtitles ###
|
||||
|
||||
![](https://farm6.staticflickr.com/5596/15323769611_59bc5fb4b7_z.jpg)
|
||||
|
||||
[Gnome Subtitles][1] is a bit of a go-to for me when it comes to quickly editing some existing subtitles. You can load the video, load the subtitle text files, and instantly get going. I appreciate its balance between ease of use and advanced features. It comes with a synchronization tool as well as a spell checker. Last but not least, the shortcuts are what make it good in the end: when you edit a lot of lines, you prefer to keep your hands on the keyboard and use the built-in shortcuts to move around.
|
||||
|
||||
### 2. Aegisub ###
|
||||
|
||||
![](https://farm3.staticflickr.com/2944/15323964121_59e9b26ba5_z.jpg)
|
||||
|
||||
[Aegisub][2] is already one level of complexity higher. The interface alone hints at a learning curve. But despite its intimidating aspect, Aegisub is a very complete piece of software, providing tools beyond anything I could have imagined before. Like Gnome Subtitles, Aegisub takes a WYSIWYG approach, but to a whole new level: it is possible to drag and drop the subtitles on the screen, see the audio spectrum on the side, and do everything with shortcuts. In addition to that, it comes with a Kanji tool, a karaoke mode, and the ability to import Lua scripts to automate some tasks. I really invite you to go read the [manual page][3] before you start using it.
|
||||
|
||||
### 3. Gaupol ###
|
||||
|
||||
![](https://farm3.staticflickr.com/2942/15326817292_6702cc63fc_z.jpg)
|
||||
|
||||
At the other end of the complexity spectrum is [Gaupol][4]. Unlike Aegisub, Gaupol is quick to pick up and adopts an interface very close to Gnome Subtitles. But behind this relative simplicity, it comes with all the necessary tools: shortcuts, third-party extensions, spell checking, and even speech recognition (courtesy of [CMU Sphinx][5]). As a downside, however, I did notice some slow-downs while testing it, nothing too serious, but just enough to make me still prefer Gnome Subtitles.
|
||||
|
||||
### 4. Subtitle Editor ###
|
||||
|
||||
![](https://farm4.staticflickr.com/3914/15323911521_8e33126610_z.jpg)
|
||||
|
||||
[Subtitle Editor][6] is very close to Gaupol. However, the interface is a little bit less intuitive, and the features are slightly more advanced. I appreciate the possibility to define "key frames" and all the synchronization options on offer. That said, maybe more icons and less text would enhance the interface. As a goodie, Subtitle Editor can simulate a "typewriter" effect, though I am not sure how useful that really is. And last but not least, the possibility to redefine the shortcuts is always handy.
|
||||
|
||||
### 5. Jubler ###
|
||||
|
||||
![](https://farm4.staticflickr.com/3912/15323769701_3d94ca8884_z.jpg)
|
||||
|
||||
Written in Java, [Jubler][7] is a multi-platform subtitle editor. I was actually very impressed by its interface. I definitely see the Java-ish aspect of it, but it remains well conceived and clear. Like Aegisub, you can drag and drop the subtitles on the image, making the experience far more pleasant than just typing. It is also possible to define a style for subtitles, play sound from another track, translate the subtitles, or use the spell checker. However, be careful, as you will need MPlayer installed and correctly configured beforehand if you want to use Jubler fully. Oh, and I give it special credit for its easy installation process after downloading the script from the [official page][8].
|
||||
|
||||
### 6. Subtitle Composer ###
|
||||
|
||||
![](https://farm6.staticflickr.com/5578/15323769711_6c6dfbe405_z.jpg)
|
||||
|
||||
Defined as a "KDE subtitle composer," [Subtitle Composer][9] comes with most of the traditional features mentioned previously, but with the KDE interface that we expect. This naturally comes with the option to redefine the shortcuts, which is very dear to me. But beyond all of this, what differentiates Subtitle Composer from all the previously mentioned programs is its ability to run scripts written in JavaScript, Python, and even Ruby. A few examples are packaged with the software, and they will definitely help you pick up the syntax and the usefulness of such a feature.
|
||||
|
||||
To conclude, whether you, like me, just edit a few subtitles for your family, re-synchronize the entire track, or write everything from scratch, Linux has the tools for you. For me in the end, the shortcuts and the ease-of-use make all the difference, but for any higher usage, scripting or speech recognition can become super handy.
|
||||
|
||||
Which subtitle editor do you use and why? Or is there another one that you prefer not mentioned here? Let us know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/good-subtitle-editor-linux.html
|
||||
|
||||
作者:[Adrien Brochard][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/adrien
|
||||
[1]:http://gnomesubtitles.org/
|
||||
[2]:http://www.aegisub.org/
|
||||
[3]:http://docs.aegisub.org/3.2/Main_Page/
|
||||
[4]:http://home.gna.org/gaupol/
|
||||
[5]:http://cmusphinx.sourceforge.net/
|
||||
[6]:http://home.gna.org/subtitleeditor/
|
||||
[7]:http://www.jubler.org/
|
||||
[8]:http://www.jubler.org/download.html
|
||||
[9]:http://sourceforge.net/projects/subcomposer/
|
@ -0,0 +1,100 @@
|
||||
Shellshock: How to protect your Unix, Linux and Mac servers
|
||||
================================================================================
|
||||
> **Summary**: The Unix/Linux Bash security hole can be deadly to your servers. Here's what you need to worry about, how to see if you can be attacked, and what to do if your shields are down.
|
||||
|
||||
The only thing you have to fear with [Shellshock, the Unix/Linux Bash security hole][1], is fear itself. Yes, Shellshock can serve as a highway for worms and malware to hit your Unix, Linux, and Mac servers, but you can defend against it.
|
||||
|
||||
![](http://cdn-static.zdnet.com/i/r/story/70/00/034072/cybersecurity-v1-620x464.jpg?hash=BQMxZJWuZG&upscale=1)
|
||||
|
||||
If you don't patch and defend yourself against Shellshock today, you may have lost control of your servers by tomorrow.
|
||||
|
||||
However, Shellshock is not as bad as [HeartBleed][2]. Not yet, anyway.
|
||||
|
||||
While it's true that the [Bash shell][3] is the default command interpreter on most Unix and Linux systems and all Macs — the majority of Web servers — for an attacker to get to your system, there has to be a way for him or her to actually get to the shell remotely. So, if you're running a PC without [ssh][4], [rlogin][5], or another remote desktop program, you're probably safe enough.
|
||||
|
||||
A more serious problem is faced by devices that use embedded Linux — such as routers, switches, and appliances. If you're running an older, no longer supported model, it may be close to impossible to patch, and it will likely remain vulnerable to attacks. If that's the case, you should replace it as soon as possible.
|
||||
|
||||
The real and present danger is for servers. According to the National Institute of Standards and Technology (NIST), [Shellshock scores a perfect 10][6] for potential impact and exploitability. [Red Hat][7] reports that the most common attack vectors are:
|
||||
|
||||
- **httpd (Your Web server)**: CGI [Common-Gateway Interface] scripts are likely affected by this issue: when a CGI script is run by the web server, it uses environment variables to pass data to the script. These environment variables can be controlled by the attacker. If the CGI script calls Bash, the script could execute arbitrary code as the httpd user. mod_php, mod_perl, and mod_python do not use environment variables and we believe they are not affected.
|
||||
- **Secure Shell (SSH)**: It is not uncommon to restrict remote commands that a user can run via SSH, such as rsync or git. In these instances, this issue can be used to execute any command, not just the restricted command.
|
||||
- **dhclient**: The [Dynamic Host Configuration Protocol Client (dhclient)][8] is used to automatically obtain network configuration information via DHCP. This client uses various environment variables and runs Bash to configure the network interface. Connecting to a malicious DHCP server could allow an attacker to run arbitrary code on the client machine.
|
||||
- **[CUPS][9] (Linux, Unix and Mac OS X's print server)**: It is believed that CUPS is affected by this issue. Various user-supplied values are stored in environment variables when cups filters are executed.
|
||||
- **sudo**: Commands run via sudo are not affected by this issue. Sudo specifically looks for environment variables that are also functions. It could still be possible for the running command to set an environment variable that could cause a Bash child process to execute arbitrary code.
|
||||
- **Firefox**: We do not believe Firefox can be forced to set an environment variable in a manner that would allow Bash to run arbitrary commands. It is still advisable to upgrade Bash as it is common to install various plug-ins and extensions that could allow this behavior.
|
||||
- **Postfix**: The Postfix [mail] server will replace various characters with a ?. While the Postfix server does call Bash in a variety of ways, we do not believe an arbitrary environment variable can be set by the server. It is however possible that a filter could set environment variables.
|
||||
|
||||
So much for Red Hat's thoughts. Of these, the Web servers and SSH are the ones that worry me the most. The DHCP client is also troublesome, especially if, as is the case with many small businesses, your external router doubles as your Internet gateway and DHCP server.
|
||||
|
||||
Of these, Web server attacks seem to be the most common by far. As Florian Weimer, a Red Hat security engineer, wrote: "[HTTP requests to CGI scripts][10] have been identified as the major attack vector." Attacks are being made against systems [running both Linux and Mac OS X][11].
|
||||
|
||||
Jaime Blasco, labs director at [AlienVault][12], a security management services company, ran a [honeypot][13] looking for attackers and found "[several machines trying to exploit the Bash vulnerability][14]. The majority of them are only probing to check if systems are vulnerable. On the other hand, we found two worms that are actively exploiting the vulnerability and installing a piece of malware on the system."
|
||||
|
||||
Other security researchers have found that the malware is the usual sort. They typically try to plant distributed denial of service (DDoS) IRC bots and attempt to guess system logins and passwords using a list of poor passwords such as 'root', 'admin', 'user', 'login', and '123456.'
|
||||
|
||||
So, how do you know if your servers can be attacked? First, you need to check to see if you're running a vulnerable version of Bash. To do that, run the following command from a Bash shell:
|
||||
|
||||
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
|
||||
|
||||
If you get the result:
|
||||
|
||||
*vulnerable this is a test*
|
||||
|
||||
Bad news, your version of Bash can be hacked. If you see:
|
||||
|
||||
*bash: warning: x: ignoring function definition attempt bash: error importing function definition for `x' this is a test*
|
||||
|
||||
You're good. Well, to be more exact, you're as protected as you can be at the moment.
|
||||
|
||||
While all major Linux distributors have released patches that stop most attacks — [Apple has not released a patch yet][15] — it has been discovered that "[patches shipped for this issue are incomplete][16]. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions." While it's unclear if these attacks can be used to hack into a system, it is clear that they can be used to crash them, thanks to a null-pointer exception.
|
||||
|
||||
Patches to fill in the [last of the Shellshock security hole][17] are being worked on now. In the meantime, you should update your servers with the available patches as soon as possible and keep an eye open for the next, more complete ones.
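What "update as soon as possible" looks like in practice depends on your distribution; as a rough sketch, on the two big package families it comes down to:

    # Debian, Ubuntu and derivatives
    sudo apt-get update && sudo apt-get install --only-upgrade bash

    # Red Hat, CentOS, Fedora
    sudo yum update bash

Re-run the test command shown earlier afterwards to confirm that at least the first hole is closed.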
|
||||
|
||||
In the meantime, if, as is likely, you're running the Apache Web server, there are some [Mod_Security][18] rules that can stop attempts to exploit Shellshock. These rules, created by Red Hat, are:
|
||||
|
||||
Request Header values:
|
||||
SecRule REQUEST_HEADERS "^\(\) {" "phase:1,deny,id:1000000,t:urlDecode,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
|
||||
|
||||
SERVER_PROTOCOL values:
|
||||
SecRule REQUEST_LINE "\(\) {" "phase:1,deny,id:1000001,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
|
||||
|
||||
GET/POST names:
|
||||
SecRule ARGS_NAMES "^\(\) {" "phase:2,deny,id:1000002,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
|
||||
|
||||
GET/POST values:
|
||||
SecRule ARGS "^\(\) {" "phase:2,deny,id:1000003,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
|
||||
|
||||
File names for uploads:
|
||||
SecRule FILES_NAMES "^\(\) {" "phase:2,deny,id:1000004,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
|
||||
|
||||
It is vital that you patch your servers as soon as possible, even with the current, incomplete ones, and to set up defenses around your Web servers. If you don't, you could come to work tomorrow to find your computers completely compromised. So get out there and start patching!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.zdnet.com/shellshock-how-to-protect-your-unix-linux-and-mac-servers-7000034072/
|
||||
|
||||
作者:[Steven J. Vaughan-Nichols][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[1]:http://www.zdnet.com/unixlinux-bash-critical-security-hole-uncovered-7000034021/
|
||||
[2]:http://www.zdnet.com/heartbleed-serious-openssl-zero-day-vulnerability-revealed-7000028166
|
||||
[3]:http://www.gnu.org/software/bash/
|
||||
[4]:http://www.openbsd.org/cgi-bin/man.cgi?query=ssh&sektion=1
|
||||
[5]:http://unixhelp.ed.ac.uk/CGI/man-cgi?rlogin
|
||||
[6]:http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-7169
|
||||
[7]:http://www.redhat.com/
|
||||
[8]:http://www.isc.org/downloads/dhcp/
|
||||
[9]:https://www.cups.org/
|
||||
[10]:http://seclists.org/oss-sec/2014/q3/650
|
||||
[11]:http://www.zdnet.com/first-attacks-using-shellshock-bash-bug-discovered-7000034044/
|
||||
[12]:http://www.alienvault.com/
|
||||
[13]:http://www.sans.org/security-resources/idfaq/honeypot3.php
|
||||
[14]:http://www.alienvault.com/open-threat-exchange/blog/attackers-exploiting-shell-shock-cve-2014-6721-in-the-wild
|
||||
[15]:http://apple.stackexchange.com/questions/146849/how-do-i-recompile-bash-to-avoid-the-remote-exploit-cve-2014-6271-and-cve-2014-7
|
||||
[16]:https://bugzilla.redhat.com/show_bug.cgi?id=1141597#c27
|
||||
[17]:http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7169
|
||||
[18]:http://www.inmotionhosting.com/support/website/modsecurity/what-is-modsecurity-and-why-is-it-important
|
@ -0,0 +1,66 @@
|
||||
zpl1025
|
||||
What Linux Users Should Know About Open Hardware
|
||||
================================================================================
|
||||
> What Linux users don't know about manufacturing open hardware can lead them to disappointment.
|
||||
|
||||
Business and free software have been intertwined for years, but the two often misunderstand one another. That's not surprising -- what is just a business to one is a way of life for the other. But the misunderstanding can be painful, which is why debunking it is worth the effort.
|
||||
|
||||
An increasingly common case in point: the growing attempts at open hardware, whether from Canonical, Jolla, MakePlayLive, or any of half a dozen others. Whether pundit or end-user, the average free software user reacts with exaggerated enthusiasm when a new piece of hardware is announced, then retreats into disillusionment as delay follows delay, often ending in the cancellation of the entire product.
|
||||
|
||||
It's a cycle that does no one any good, and often breeds distrust – and all because the average Linux user has no idea what's happening behind the news.
|
||||
|
||||
My own experience with bringing products to market is long behind me. However, nothing I have heard suggests that anything has changed. Bringing open hardware or any other product to market remains not just a brutal business, but one heavily stacked against newcomers.
|
||||
|
||||
### Searching for Partners ###
|
||||
|
||||
Both the manufacturing and distribution of digital products are controlled by a relatively small number of companies, whose time can sometimes be booked months in advance. Profit margins can be tight, so like movie studios that buy the rights to an ancient sit-com, the manufacturers usually hope to clone the success of the latest hot product. As Aaron Seigo told me when talking about his efforts to develop the Vivaldi tablet, the manufacturers would much prefer that someone else take the risk of doing anything new.
|
||||
|
||||
Not only that, but they would prefer to deal with someone with an existing sales record who is likely to bring repeat business.
|
||||
|
||||
Besides, the average newcomer is looking at a product run of a few thousand units. A chip manufacturer would much rather deal with Apple or Samsung, whose orders are more likely to be in the hundreds of thousands.
|
||||
|
||||
Faced with this situation, the makers of open hardware are likely to find themselves cascading down into the list of manufacturers until they can find a second or third tier manufacturer that is willing to take a chance on a small run of something new.
|
||||
|
||||
They might be reduced to buying off-the-shelf components and assembling units themselves, as Seigo tried with Vivaldi. Alternatively, they might do as Canonical did, and find established partners that encourage the industry to take a gamble. Even if they succeed, they have usually taken months longer than they expected in their initial naivety.
|
||||
|
||||
### Staggering to Market ###
|
||||
|
||||
However, finding a manufacturer is only the first obstacle. As Raspberry Pi found out, even if the open hardware producers want only free software in their product, the manufacturers will probably insist that firmware or drivers stay proprietary in the name of protecting trade secrets.
|
||||
|
||||
This situation is guaranteed to set off criticism from potential users, but the open hardware producers have no choice except to compromise their vision. Looking for another manufacturer is not a solution, partly because to do so means more delays, but largely because completely free-licensed hardware does not exist. The industry giants like Samsung have no interest in free hardware, and, being new, the open hardware producers have no clout to demand any.
|
||||
|
||||
Besides, even if free hardware was available, manufacturers could probably not guarantee that it would be used in the next production run. The producers might easily find themselves re-fighting the same battle every time they needed more units.
|
||||
|
||||
As if all this were not enough, at this point the open hardware producer has probably spent 6-12 months haggling. Chances are, industry standards have shifted in the meantime, and they may have to start from the beginning again by upgrading their specs.
|
||||
|
||||
### A Short and Brutal Shelf Life ###
|
||||
|
||||
Despite these obstacles, hardware with some degree of openness does sometimes get released. But remember the challenges of finding a manufacturer? They have to be repeated all over again with the distributors -- and not just once, but region by region.
|
||||
|
||||
Typically, the distributors are just as conservative as the manufacturers, and just as cautious about dealing with newcomers and new ideas. Even if they agree to add a product to their catalog, the distributors can easily decide not to encourage their representatives to promote it, which means that in a few months they have effectively removed it from the shelves.
|
||||
|
||||
Of course, online sales are a possibility. But meanwhile, the hardware has to be stored somewhere, adding to the cost. Production runs on demand are expensive even in the unlikely event that they are available, and even unassembled units need storage.
|
||||
|
||||
### Weighing the Odds ###
|
||||
|
||||
I have been generalizing wildly here, but anyone who has ever been involved in producing anything will recognize what I am describing as the norm. And just to make matters worse, open hardware producers typically discover the situation as they are going through it. Inevitably, they make mistakes, which adds still more delays.
|
||||
|
||||
But the point is, if you have any sense of the process at all, your knowledge is going to change how you react to news of another attempt at hardware. The process means that, unless a company has been in serious stealth mode, an announcement that a product will be out in six months will rapidly prove to be an outdated guesstimate. 12-18 months is more likely, and the obstacles I describe may mean that the product will never actually be released.
|
||||
|
||||
For example, as I write, people are waiting for the emergence of the first Steam Machines, the Linux-based gaming consoles. They are convinced that the Steam Machines will utterly transform both Linux and gaming.
|
||||
|
||||
As a market category, Steam Machines may do better than other new products, because those who are developing them at least have experience developing software products. However, none of the dozen or so Steam Machines in development have produced more than a prototype after almost a year, and none are likely to be available to buy until halfway through 2015. Given the realities of hardware manufacturing, we will be lucky if half of them see daylight. In fact, a release of 2-4 might be more realistic.
|
||||
|
||||
I make that prediction with next to no knowledge of any of the individual efforts. But, having some sense of how hardware manufacturing works, I suspect that it is likely to be closer to what happens next year than all the predictions of a new Golden Age for Linux and gaming. I would be entirely happy being wrong, but the fact remains: what is surprising is not that so many Linux-associated hardware products fail, but that any succeed even briefly.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.datamation.com/open-source/what-linux-users-should-know-about-open-hardware-1.html
|
||||
|
||||
作者:[Bruce Byfield][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.datamation.com/author/Bruce-Byfield-6030.html
|
@ -1,111 +0,0 @@
|
||||
alim0x translating
|
||||
|
||||
The history of Android
|
||||
================================================================================
|
||||
![Both screens of the Email app. The first two screenshots show the combined label/inbox view, and the last shows a message.](http://cdn.arstechnica.net/wp-content/uploads/2014/01/email2lol.png)
|
||||
Both screens of the Email app. The first two screenshots show the combined label/inbox view, and the last shows a message.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
The message view was—surprise!—white. Android's e-mail app has historically been a watered-down version of the Gmail app, and you can see that close connection here. The message and compose views were taken directly from Gmail with almost no modifications.
|
||||
|
||||
![The “IM" applications. Screenshots show the short-lived provider selection screen, the friends list, and a chat.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/IM2.png)
|
||||
The “IM" applications. Screenshots show the short-lived provider selection screen, the friends list, and a chat.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Before Google Hangouts and even before Google Talk, there was "IM"—the only instant messaging client that shipped on Android 1.0. Surprisingly, multiple IM services were supported: users could pick from AIM, Google Talk, Windows Live Messenger, and Yahoo. Remember when OS creators cared about interoperability?
|
||||
|
||||
The friends list was a black background with white speech bubbles for open chats. Presence was indicated with colored circles, and a little Android on the right hand side would indicate that a person was mobile. It's amazing how much more communicative the IM app was than Google Hangouts. Green means the person is using a device they are signed into, yellow means they are signed in but idle, red means they have manually set busy and don't want to be bothered, and gray is offline. Today, Hangouts only shows when a user has the app open or closed.
|
||||
|
||||
The chats interface was clearly based on the Messaging program, and the chat backgrounds were changed from white and blue to white and green. No one changed the color of the blue text entry box, though, so along with the orange highlight effect, this screen used white, green, blue, and orange.
|
||||
|
||||
![YouTube on Android 1.0. The screens show the main page, the main page with the menu open, the categories screen, and the videos screen.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt5000.png)
|
||||
YouTube on Android 1.0. The screens show the main page, the main page with the menu open, the categories screen, and the videos screen.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
YouTube might not have been the mobile sensation it is today with the 320p screen and 3G data speeds of the G1, but Google's video service was present and accounted for on Android 1.0. The main screen looked like a tweaked version of the Android Market, with a horizontally scrolling featured section along the top and vertically scrolling categories along the bottom. Some of Google's category choices were pretty strange: what would the difference be between "Most popular" and "Most viewed?"
|
||||
|
||||
In a sign that Google had no idea how big YouTube would eventually become, one of the video categories was "Most recent." Today, with [100 hours of video][1] uploaded to the site every minute, if this section actually worked it would be an unreadable blur of rapidly scrolling videos.
|
||||
|
||||
The menu housed search, favorites, categories, and settings. Settings (not pictured) was the lamest screen ever, housing one option to clear the search history. Categories was equally barren, showing only a black list of text.
|
||||
|
||||
The last screen shows a video, which only supported horizontal mode. The auto-hiding video controls weirdly had rewind and fast forward buttons, even though there was a seek bar.
|
||||
|
||||
![YouTube’s video menu, description page, and comments.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt3.png)
|
||||
YouTube’s video menu, description page, and comments.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Additional sections for each video could be brought up by hitting the menu button. Here you could favorite the video, access details, and read comments. All of these screens, like the videos, were locked to horizontal mode.
|
||||
|
||||
"Share" didn't bring up a share dialog yet; it just kicked the link out to a Gmail message. Texting or IMing someone a link wasn't possible. Comments could be read, but you couldn't rate them or post your own. You couldn't rate or like a video either.
|
||||
|
||||
![The camera app’s picture taking interface, menu, and photo review mode.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/camera.png)
|
||||
The camera app’s picture taking interface, menu, and photo review mode.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Real Android on real hardware meant a functional camera app, even if there wasn't much to look at. That black square on the left was the camera interface, which should be showing a viewfinder image, but the SDK screenshot utility can't capture it. The G1 had a hardware camera button (remember those?), so there wasn't a need for an on-screen shutter button. There were no settings for exposure, white balance, or HDR—you could take a picture and that was about it.
|
||||
|
||||
The menu button revealed a meager two options: a way to jump to the Pictures app and Settings screen with two options. The first settings option was whether or not to enable geotagging for pictures, and the second was for a dialog prompt after every capture, which you can see on the right. Also, you could only take pictures—there was no video support yet.
|
||||
|
||||
![The Calendar’s month view, week view with the menu open, day view, and agenda.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/calviews.png)
|
||||
The Calendar’s month view, week view with the menu open, day view, and agenda.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Like most apps of this era, the primary command interface for the calendar was the menu. It was used to switch views, add a new event, navigate to the current day, pick visible calendars, and go to the settings. The menu functioned as a catch-all for every single button.
|
||||
|
||||
The month view couldn't show appointment text. Every date had a bar next to it, and appointments were displayed as green sections in the bar denoting what time of day an appointment was. Week view couldn't show text either—the 320×480 display of the G1 just wasn't dense enough—so you got a white block with a strip of color indicating which calendar it was from. The only views that provided text were the agenda and day views. You could move through dates by swiping—week and day used left and right, and month and agenda used up and down.
|
||||
|
||||
![The main settings page, the Wireless section, and the bottom of the about page.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/settings.png)
|
||||
The main settings page, the Wireless section, and the bottom of the about page.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Android 1.0 finally brought a settings screen to the party. It was a black and white wall of text that was roughly broken down into sections. Down arrows next to each list item confusingly look like they would expand line-in to show more of something, but touching anywhere on the list item would just load the next screen. All the screens were pretty boring and samey looking, but hey, it's a settings screen.
|
||||
|
||||
Any option with an on/off state used a cartoony-looking checkbox. The original checkboxes in Android 1.0 were pretty strange—even when they were "unchecked," they still had a gray check mark in them. Android treated the check mark like a light bulb that would light up when on and be dim when off, but that's not how checkboxes work. We did finally get an "About" page, though. Android 1.0 ran Linux kernel 2.6.25.
|
||||
|
||||
A settings screen means we can finally open the security settings and change lock screens. Android 1.0 only had two styles, the gray square lock screen pictured in the Android 0.9 section, and pattern unlock, which required you to draw a pattern over a grid of 9 dots. A swipe pattern like this was easier to remember and input than a PIN even if it did not add any more security.
|
||||
|
||||
![The Voice Dialer, pattern lock screen, low battery warning, and time picker.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/grabbag.png)
|
||||
The Voice Dialer, pattern lock screen, low battery warning, and time picker.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Voice functions arrived in 1.0 with Voice Dialer. This feature hung around in various capacities in AOSP for a while, as it was a simple voice command app for calling numbers and contacts. Voice Dialer was completely unrelated to Google's future voice products, however, and it worked the same way a voice dialer on a dumbphone would work.
|
||||
|
||||
As a final note, a low battery popup would appear when the battery dropped below 15 percent. It was a funny graphic, depicting plugging the wrong end of the power cord into the phone. That wasn't (and still isn't) how phones work, Google.
|
||||
|
||||
Android 1.0 was a great first effort, but there were still many gaps in functionality. Physical keyboards and tons of hardware buttons were mandatory, as Android devices were still not allowed to be sold without a d-pad or trackball. Base smartphone functionality like auto-rotate wasn't here yet, either. Updates for built-in apps weren't possible through the Android Market the way they are today. All the Google Apps were interwoven with the operating system. If Google wanted to update a single app, an update for the entire operating system needed to be pushed out through the carriers. There was still a lot of work to do.
|
||||
|
||||
### Android 1.1—the first truly incremental update ###
|
||||
|
||||
![All of Android 1.1’s new features: Search by voice, the Android Market showing paid app support, Google Latitude, and the new “system updates" option in the settings.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/11.png)
|
||||
All of Android 1.1’s new features: Search by voice, the Android Market showing paid app support, Google Latitude, and the new “system updates" option in the settings.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Four and a half months after Android 1.0, in February 2009, Android got its first public update in Android 1.1. Not much changed in the OS, and just about every new thing Google added with 1.1 has been shut down by now. Google Voice Search was Android's first foray into cloud-powered voice search, and it had its own icon in the app drawer. While the app can't communicate with Google's servers anymore, you can check out how it used to work [on the iPhone][2]. It wasn't yet Voice Actions, but you could speak and the results would go to a simple Google Search.
|
||||
|
||||
Support for paid apps was added to the Android Market, but just like the beta client, this version of the Android Market could no longer connect to the Google Play servers. The most that we could get to work was this sorting screen, which lets you pick between displaying free apps, paid apps, or a mix of both.
|
||||
|
||||
Maps added [Google Latitude][3], a way to share your location with friends. Latitude was shut down in favor of Google+ a few months ago and no longer works. There was an option for it in the Maps menu, but tapping on it just brings up a loading spinner forever.
|
||||
|
||||
Given that system updates come quickly in the Android world—or at least, that was the plan before carriers and OEMs got in the way—Google also added a button to the "About Phone" screen to check for system updates.
|
||||
|
||||
----------
|
||||
|
||||
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
|
||||
|
||||
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
|
||||
|
||||
[@RonAmadeo][t]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.youtube.com/yt/press/statistics.html
|
||||
[2]:http://www.youtube.com/watch?v=y3z7Tw1K17A
|
||||
[3]:http://arstechnica.com/information-technology/2009/02/google-tries-location-based-social-networking-with-latitude/
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
@ -1,3 +1,5 @@
|
||||
alim0x translating
|
||||
|
||||
The history of Android
|
||||
================================================================================
|
||||
![Android 1.5’s on-screen keyboard showing the suggestion bar while typing, the capital letters keyboard, the number and symbols screen, and an additional key popup.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/kb5.png)
|
||||
@ -88,15 +90,15 @@ The HTC Magic, the second Android device, and the first without a hardware keybo
|
||||
Photo by HTC
|
||||
|
||||
> #### Google Maps is the first built-in app to hit the Android Market ####
|
||||
>
|
||||
>
|
||||
> While this article is (mostly) organizing app updates by Android version for simplicity's sake, there are a few outliers that deserve special recognition. On June 14, 2009, Google Maps was the first packed-in Android app to be updated via the Android Market. While every other app required a full system release to be updated, Maps was broken out of the OS, free to receive out-of-cycle updates whenever a new feature was ready.
|
||||
>
|
||||
>
|
||||
> Moving apps out of the core OS and onto the Android Market would be a big focus for Google going forward. In general, OTA updates were a big initiative—they required the cooperation of the OEM and the carrier, both of which could drag their feet. Updates also didn’t make it to every device. Today, the Android Market gives Google a direct line to every Android phone with no such interference from outside parties.
|
||||
>
|
||||
>
|
||||
> These were problems for a later date, though. In 2009, Google had only two unskinned phones to support, and the early Android carriers were seemingly responsive to Google’s update needs. This early move would prove to be a very proactive decision on Google’s part. At first, the company went this route only with its most important properties—Maps and Gmail—but later it would port the majority of the packed-in apps to the Android Market. Later initiatives like Google Play Services even brought app APIs out of the OS and into Google’s store.
|
||||
>
|
||||
>
|
||||
> As for the new Maps at the time, it gained a new directions interface, along with the ability to give mass transit and walking directions. For now, directions were given on a plain black list—turn-by-turn-style navigation would come later.
|
||||
>
|
||||
>
|
||||
> June 2009 was also the time Apple launched the third iPhone—the 3GS—and the third version of iPhone OS. iPhone OS 3's headline features were mostly catch-up items like copy/paste and MMS support. Apple's hardware was still nicer, and the software was smoother, more cohesive, and better designed. Google's insane pace of development was putting it on a path to parity though. iPhone OS 2 launched just before the Milestone 5 build of Android 0.5, which makes five Android releases in the span of the yearly iOS release cycle.
|
||||
|
||||
Android 1.5 gave the YouTube app the ability to upload videos to the site. Uploading was accomplished by sharing a video from the Gallery to the YouTube app, or by opening a video directly from the YouTube app. This would bring up an upload screen, where the user would set things like the video title, tags, and access rights. Photos could be uploaded to Picasa, Google's original photo site, in a similar fashion.
|
||||
@ -125,4 +127,4 @@ via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-histor
|
||||
|
||||
[1]:http://en.wikipedia.org/wiki/Diacritic
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
||||
|
@ -1,107 +0,0 @@
|
||||
[bazz2 hehe]
|
||||
How to use systemd for system administration on Debian
|
||||
================================================================================
|
||||
Soon enough, hardly any Linux user will be able to escape the ever-growing grasp that systemd imposes on Linux, unless they manually opt out. systemd has created more technical, emotional, and social issues than any other piece of software as of late. This was most evident in the [heated discussions][1], also dubbed the 'Init Wars', that occupied parts of the Debian developer body for months. While the Debian Technical Committee finally decided to include systemd in Debian 8 "Jessie", there were efforts to [supersede the decision][2] with a General Resolution, and even threats against developers who were in favor of systemd.
|
||||
|
||||
This goes to show how deeply systemd interferes with the way of handling Linux systems that has, in large part, been passed down to us from the Unix days. Maxims like "one tool for one job" are overthrown by the new kid in town. Besides replacing sysvinit as the init system, it digs deep into system administration. For now, a lot of the commands you are used to will keep on working thanks to the compatibility layer provided by the package systemd-sysv. That might change as soon as systemd 214 is uploaded to Debian, destined to be released in the stable branch with Debian 8 "Jessie". From then on, users need to use the new commands that come with systemd for managing services, processes, switching run levels, and querying the logging system. A workaround is to set up aliases in .bashrc, as sketched below.
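As a minimal sketch of that workaround (the alias names and the chosen commands are just examples, adjust them to your own habits), a couple of lines like these in ~/.bashrc map old reflexes onto the new tools covered later in this article:

    # hypothetical aliases mapping old habits to systemd commands
    alias init3='systemctl isolate multi-user.target'   # the old "init 3"
    alias logs='journalctl -b'                           # roughly "the syslog since boot"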
|
||||
|
||||
So let's have a look at how systemd will change your habits of administrating your computers, and the pros and cons involved. Before making the switch to systemd, it is a good precaution to save the old sysvinit binary so you can still boot should systemd fail. This will only work as long as systemd-sysv is not yet installed, and can easily be done by running:
|
||||
|
||||
# cp -av /sbin/init /sbin/init.sysvinit
|
||||
|
||||
Thus prepared, in case of emergency just append:
|
||||
|
||||
init=/sbin/init.sysvinit
|
||||
|
||||
to the kernel boot-time parameters.
|
||||
|
||||
### Basic Usage of systemctl ###
|
||||
|
||||
systemctl is the command that replaces the old "/etc/init.d/foo start/stop", but it also does a lot more, as you can learn from its man page.
|
||||
|
||||
Some basic use-cases are:
|
||||
|
||||
- systemctl - list all loaded units and their state (where unit is the term for a job/service)
|
||||
- systemctl list-units - list all units
|
||||
- systemctl start [NAME...] - start (activate) one or more units
|
||||
- systemctl stop [NAME...] - stop (deactivate) one or more units
|
||||
- systemctl disable [NAME...] - disable one or more unit files
|
||||
- systemctl list-unit-files - show all installed unit files and their state
|
||||
- systemctl --failed - show which units failed during boot
|
||||
- systemctl --type=mount - filter for types; types could be: service, mount, device, socket, target
|
||||
- systemctl enable debug-shell.service - start a root shell on TTY 9 for debugging
|
||||
|
||||
For more convenience in handling units, there is the package systemd-ui, which is started as a normal user with the command systemadm.
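As a quick illustration of the day-to-day workflow (assuming a unit named ssh.service exists on your system; substitute any service you actually run):

    # systemctl status ssh.service    # is the unit active? also shows recent log lines
    # systemctl restart ssh.service   # stop and start the unit in one go
    # systemctl enable ssh.service    # have the unit started at boot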
|
||||
|
||||
Switching runlevels, reboot and shutdown are also handled by systemctl:
|
||||
|
||||
- systemctl isolate graphical.target - take you to what you know as init 5, where your X-server runs
|
||||
- systemctl isolate multi-user.target - take you to what you know as init 3, TTY, no X
|
||||
- systemctl reboot - shut down and reboot the system
|
||||
- systemctl poweroff - shut down the system
|
||||
|
||||
All these commands, other than the ones for switching runlevels, can be executed as a normal user.
|
||||
|
||||
### Basic Usage of journalctl ###
|
||||
|
||||
systemd not only boots machines faster than the old init system, it also starts logging much earlier, including messages from the kernel initialization phase, the initial RAM disk, the early boot logic, and the main system runtime. So the days when you needed a camera to capture the output of a kernel panic or an otherwise stalled system for debugging are mostly over.
|
||||
|
||||
With systemd, logs are aggregated in the journal which resides in /var/log/. To be able to make full use of the journal, we first need to set it up, as Debian does not do that for you yet:
|
||||
|
||||
# addgroup --system systemd-journal
|
||||
# mkdir -p /var/log/journal
|
||||
# chown root:systemd-journal /var/log/journal
|
||||
# gpasswd -a $user systemd-journal
|
||||
|
||||
That will set up the journal in a way that lets you query it as a normal user. Querying the journal with journalctl offers some advantages over the way syslog works:
|
||||
|
||||
- journalctl --all - show the full journal of the system and all its users
|
||||
- journalctl -f - show a live view of the journal (equivalent to "tail -f /var/log/messages")
|
||||
- journalctl -b - show the log since the last boot
|
||||
- journalctl -k -b -1 - show all kernel logs from the boot before last (-b -1)
|
||||
- journalctl -b -p err - shows the log of the last boot, limited to the priority "ERROR"
|
||||
- journalctl --since=yesterday - since Linux machines are normally not rebooted very often, this usually limits the output more than -b would
|
||||
- journalctl -u cron.service --since='2014-07-06 07:00' --until='2014-07-06 08:23' - show the log for cron for a defined timeframe
|
||||
- journalctl -p 2 --since=today - show the log for priority 2, which covers emerg, alert and crit; resembles syslog priorities emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), debug (7)
|
||||
- journalctl > yourlog.log - copy the binary journal as text into your current directory
|
||||
|
||||
Journal and syslog can work side by side. Alternatively, you can remove any syslog packages like rsyslog or syslog-ng once you are satisfied with the way the journal works.
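On Debian that could look like the following, although you should check first what else depends on the package:

    # apt-get remove --purge rsyslog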
|
||||
|
||||
For very detailed output, append "systemd.log_level=debug" to the kernel boot-time parameter list, and then run:
|
||||
|
||||
# journalctl -alb
|
||||
|
||||
Log levels can also be edited in /etc/systemd/system.conf.
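For example, a more permanent equivalent of the boot parameter above would be a line like this in the [Manager] section of /etc/systemd/system.conf (a sketch only; restart systemd or reboot for it to take effect):

    [Manager]
    LogLevel=debug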
|
||||
|
||||
### Analyzing the Boot Process with systemd ###
|
||||
|
||||
systemd allows you to effectively analyze and optimize your boot process:
|
||||
|
||||
- systemd-analyze - show how long the last boot took for kernel and userspace
|
||||
- systemd-analyze blame - show details of how long each service took to start
|
||||
- systemd-analyze critical-chain - print a tree of the time-critical chain of units
|
||||
- systemd-analyze dot | dot -Tsvg > systemd.svg - generate a vector graphic of your boot process (requires the graphviz package)
|
||||
- systemd-analyze plot > bootplot.svg - generate a graphical timechart of the boot process
|
||||
|
||||
![](https://farm6.staticflickr.com/5559/14607588994_38543638b3_z.jpg)
|
||||
|
||||
![](https://farm6.staticflickr.com/5565/14423020978_14b21402c8_z.jpg)
|
||||
|
||||
systemd has pretty good documentation for such a young project under heavy development. First of all, there is the [0pointer series by Lennart Poettering][3]. The series is highly technical and quite verbose, but holds a wealth of information. Another good source is the distro-agnostic [Freedesktop info page][4], which has the largest collection of links to systemd resources, distro-specific pages, bug trackers and documentation. A quick glance at:
|
||||
|
||||
# man systemd.index
|
||||
|
||||
will give you an overview of all systemd man pages. The command structure of systemd is pretty much the same across distributions; the differences are found mainly in the packaging.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/2014/07/use-systemd-system-administration-debian.html
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://lists.debian.org/debian-devel/2013/10/msg00444.html
|
||||
[2]:https://lists.debian.org/debian-devel/2014/02/msg00316.html
|
||||
[3]:http://0pointer.de/blog/projects/systemd.html
|
||||
[4]:http://www.freedesktop.org/wiki/Software/systemd/
|
@ -1,3 +1,5 @@
|
||||
translating by haimingfg
|
||||
|
||||
What are useful CLI tools for Linux system admins
|
||||
================================================================================
|
||||
System administrators (sysadmins) are responsible for day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of trade. Utilizing proper tools in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime.
|
||||
@ -184,4 +186,4 @@ via: http://xmodulo.com/2014/08/useful-cli-tools-linux-system-admins.html
|
||||
[17]:http://rsync.samba.org/
|
||||
[18]:http://www.nongnu.org/rdiff-backup/
|
||||
[19]:http://nethogs.sourceforge.net/
|
||||
[20]:http://code.google.com/p/inxi/
|
||||
|
@ -1,136 +0,0 @@
|
||||
zpl1025
|
||||
Build a Raspberry Pi Arcade Machine
|
||||
================================================================================
|
||||
**Relive the golden majesty of the 80s with a little help from a marvel of the current decade.**
|
||||
|
||||
### WHAT YOU’LL NEED ###
|
||||
|
||||
- Raspberry Pi w/4GB SD-CARD.
|
||||
- HDMI LCD monitor.
|
||||
- Games controller or…
|
||||
- A JAMMA arcade cabinet.
|
||||
- J-Pac or I-Pac.
|
||||
|
||||
The 1980s were memorable for many things; the end of the Cold War, a carbonated drink called Quatro, the Korg Polysix synthesiser and the Commodore 64. But to a certain teenager, none of these were as potent, or as perhaps familiarly illicit, as the games arcade. Enveloped by cigarette smoke and a barrage of 8-bit sound effects, they were caverns you visited only on borrowed time: 50 pence and a portion of chips to see you through lunchtime while you honed your skills at Galaga, Rampage, Centipede, Asteroids, Ms Pacman, Phoenix, R-Type, Donkey Kong, Rolling Thunder, Gauntlet, Street Fighter, Outrun, Defender… The list is endless.
|
||||
|
||||
These games, and the arcade machine form factor that held them, are just as compelling today as they were 30 years ago. And unlike the teenage version of yourself, you can now play many of them without needing a pocket full of change, finally giving you an edge over the rich kids and their endless ‘Continues’. It’s time to build your own Linux-based arcade machine and beat that old high score.
|
||||
|
||||
We’re going to cover all the steps required to turn a cheap shell of an arcade machine into a Linux-powered multi-platform retro games system. But that doesn’t mean you’ve got to build the whole system at the same scale. You could, for example, forgo the large, heavy and potentially carcinogenic hulk of the cabinet itself and stuff the controlling innards into an old games console or an even smaller case. Or you could just as easily forgo the diminutive Raspberry Pi and replace the brains of your system with a much more capable Linux machine. This might make an ideal platform for SteamOS, for example, and for playing some of its excellent modern arcade games.
|
||||
|
||||
Over the next few pages we’ll construct a Raspberry Pi-based arcade machine, but you should be able to see plenty of ideas for your own projects, even if they don’t look just like ours. And because we’re building it on the staggeringly powerful MAME, you’ll be able to get it running on almost anything.
|
||||
|
||||
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade3.png)
|
||||
|
||||
We did this project before the model B+ came out. It should all work exactly the same on the newer board, and you should be able to get by without a powered USB Hub (click for larger).
|
||||
|
||||
### Disclaimer ###
|
||||
|
||||
One again we’re messing with electrical components that could cause you a shock. Make sure you get any modifications you make checked by a qualified electrician. We don’t go into any details on how to obtain games, but there are legal sources such as old games releases and newer commercial titles based on the MAME emulator.
|
||||
|
||||
#### Step 1: The Cabinet ####
|
||||
|
||||
The cabinet itself is the biggest challenge. We bought an old two-player Bubble Bobble machine from the early 90s from eBay. It cost £220 delivered in the back of an old estate car. The prices for cabinets like these can vary. We’ve seen many for less than £100. At the other end of the scale, people pay thousands for machines with original decals on the side.
|
||||
|
||||
There are two major considerations when it comes to buying a cabinet. The first is the size: These things are big and heavy. They take up a lot of space and it takes at least two people to move them around. If you’ve got the money, you can buy DIY cabinets or new smaller form-factors, such as cabinets that fit on tables. And cocktail cabinets can be easier to fit, too.
|
||||
|
||||
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade4.jpg)
|
||||
|
||||
Cabinets can be cheap, but they’re heavy. Don’t lift them on your own. Older ones may need some TLC, such as are-spray and some repair work(click for larger).
|
||||
|
||||
One of the best reasons for buying an original cabinet, apart from getting a much more authentic gaming experience, is being able to use the original controls. Many machines you can buy on eBay will be for two concurrent players, with two joysticks and a variety of buttons for each player, plus the player one and player two controls. For compatibility with the widest number of games, we’d recommend finding a machine with six buttons for each player, which is a common configuration. You might also want to look into a panel with more than two players, or one with space for other input controllers, such as an arcade trackball (for games like Marble Madness), or a spinner (Arkanoid). These can be added without too much difficulty later, as modern USB devices exist.
|
||||
|
||||
Controls are the second, and we'd say the most important, consideration, because it's these that transfer your twitches and tweaks into game movement. What you need to look for when buying a cabinet is something called JAMMA, an acronym for Japan Amusement Machinery Manufacturers. JAMMA is a standard in arcade machines that defines how the circuit board containing the game chips connects to the game controllers and the coin mechanism. It's an interface conduit for all the cables coming from the buttons and the joysticks, for two players, bringing them into a standard edge connector. The JAMMA part is the size and layout of this connector; it means the buttons and controls will be connected to the same functions on whichever board you install, so that an arcade owner would only have to change the cabinet artwork to bring in new players.
|
||||
|
||||
But first, a word of warning: the JAMMA connector also carries the 12V power supply, usually from a power unit installed in most arcade machines. We disconnected the power supply completely to avoid damaging anything with a wayward short-circuit or dropped screwdriver. We don't use any of the power connectors in any further stage of the tutorial.
|
||||
|
||||
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade2.png)
|
||||
|
||||
#### Step 2: J-PAC ####
|
||||
|
||||
What’s brilliant is that you can buy a device that connects to the JAMMA connector inside your cabinet and a USB port on your computer, transforming all the buttons presses and keyboard movements into (configurable) keyboard commands that you can use from Linux to control any game you wish. This device is called the J-Pac ([www.ultimarc.com/jpac.html][1] – approximately £54).
|
||||
|
||||
Its best feature isn’t the connectivity; it’s the way it handles and converts the input signals, because it’s vastly superior to a standard USB joystick. Every input generates its own interrupt, and there’s no limit to the number of simultaneous buttons and directions you can press or hold down. This is vital for games like Street Fighter, because they rely on chords of buttons being pressed simultaneously and quickly, but it’s also essential when delivering the killing blow to cheating players who sulk and hold down all their own buttons. Many other controllers, especially those that create keyboard inputs, are restricted by their USB keyboard controllers to six inputs and a variety of Alt, Shift and Ctrl hacks. The J-Pac can also be connected to a tilt sensor and even some coin mechanisms, and it works in Linux without any pre-configuration.
|
||||
|
||||
Another option is a similar device called an I-Pac. It does the same thing as the J-Pac, only without the JAMMA connector. That means you can’t connect your JAMMA controls, but it does mean you can design your own controller layout and wire each control to the I-Pac yourself. This might be a little ambitious for a first project, but it’s a route that many arcade aficionados take, especially when they want to design a panel for four players, or one that incorporates many different kinds of controls. Our approach isn’t necessarily one we’d recommend, but we re-wired an old X-Arcade Tankstick control panel that suffered from input contention, replaced the joysticks and buttons with new units and connected it to a new JAMMA harness, which is an excellent way of buying all the cables you need plus the edge connector for a low price (£8).
|
||||
|
||||
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade5.jpg)
|
||||
|
||||
Our J-Pac in situ. The blue and red wires on the right connect to the extra 1- and 2-player buttons on our cabinet (click for larger).
|
||||
|
||||
Whether you choose an I-Pac or a J-Pac, all the keys generated by both devices are the default values for MAME. That means you won’t have to make any manual input changes when you start to run the emulator. Player 1, for example, creates cursor up, down, left and right as well as left Ctrl, left ALT, Space and left Shift for fire buttons 1–4. But the really useful feature, for us, is the two-button shortcuts. While holding down the player 1 button, you can generate the P key to pause the game by pulling down on the player 1 joystick, adjust the volume by pressing up and enter MAME’s own configuration menu by pushing right. These escape codes are cleverly engineered to not get in the way of playing games, as they’re only activated when holding down the Player 1 button, and they enable you to do almost anything you need to from within a running game. You can completely reconfigure MAME, for example, using its own menus, and change input assignments and sensitivity while playing the game itself.
|
||||
|
||||
Finally, holding down Player 1 and then pressing Player 2 will quit MAME, which is useful if you’re using a launch menu or MAME manager, as these manage launching games automatically, and let you get on with playing another game as quickly as possible.
|
||||
|
||||
We took a rather cowardly route with the screen, removing the original, bulky and broken CRT that came with the cabinet and replacing it with a low-cost LCD monitor. This approach has many advantages. First, the screen has HDMI, so it will interface with a Raspberry Pi or a modern graphics card without any difficulty. Second, you don't have to configure the low-frequency update modes required to drive an arcade machine's screen, nor do you need the specific graphics hardware that drives it. And third, this is the safest option because an arcade machine's screen is often unprotected from the rear of a case, leaving very high voltages inches away from your hands. That's not to say you shouldn't use a CRT if that's the experience you're after – it's the most authentic way to recreate the original gaming experience – but we've fine-tuned the CRT emulation enough in software that we're happy with the output, and we're definitely happier not to be using an ageing CRT.
|
||||
|
||||
You might also want to look into using an older LCD with a 4:3 aspect ratio, rather than the widescreen modern options, because 4:3 is more practical for playing both vertical and horizontal games. A vertical shooter such as Raiden, for example, will have black bars on either side of the gaming area if you use a widescreen monitor. Those black bars can be used to display the game instructions, or you could rotate the screen 90 degrees so that every pixel is used, but this is impractical unless you’re only going to play vertical games or have easy access to a rotating mount.
|
||||
|
||||
Mounting a screen is also important. If you’ve removed a CRT, there’s nowhere for an LCD to go. Our solution was to buy some MDF cut to fit the space where the CRT was. This was then screwed into position and we fitted a cheap VESA mounting plate into the centre of the new MDF. VESA mounts can be used by the vast majority of screens, big and small. Finally, because our cabinet was fronted with smoked glass, we had to be sure both the brightness and contrast were set high enough.
|
||||
|
||||
### Step 3: Installation ###
|
||||
|
||||
With the large hardware choices now made, and presumably the cabinet close to where you finally want to install it, putting the physical pieces together isn’t that difficult. We safely split the power input from the rear of the cabinet and wired a multiple socket into the space at the back. We did this to the cable after it connects to the power switch.
|
||||
|
||||
Nearly all arcade cabinets have a power switch on the top-right surface, but there’s usually plenty of cable to splice into this at a lower point in the cabinet, and it meant we could use normal power connectors for our equipment. Our cabinet has a fluorescent tube, used to backlight the top marquee on the machine, connected directly to the power, and we were able to keep this connected by attaching a regular plug. When you turn the power on from the cabinet switch, power flows to the components inside the case – your Raspberry Pi and screen will come on, and all will be well with the world.
|
||||
|
||||
The J-Pac slides straight into the JAMMA interface, but you may also have to do a little manual wiring. The JAMMA standard only supports up to three buttons for each player (although many unofficially support four), while the J-Pac can handle up to six buttons. To get those extra buttons connected, you need to connect one side of the button’s switch to GND fed from the J-Pac with the other side of the switch going into one of the screw-mounted inputs in the side of the J-Pac. These are labelled 1SW4, 1SW5, 1SW6, 2SW4, 2SW5 and 2SW6. The J-Pac also includes passthrough connections for audio, but we’ve found this to be incredibly noisy. Instead, we wired the speaker in our cabinet to an old SoundBlaster amplifier and connected this to the audio outputs on the Raspberry Pi. You don’t want audio to be pristine, but you do want it to be loud enough.
|
||||
|
||||
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade6.jpg)
|
||||
|
||||
Our Raspberry Pi is now connected to the J-Pac on the left and both the screen and the USB hub (click for larger).
|
||||
|
||||
The J-Pac or I-Pac then connects to your PC or Raspberry Pi using a PS2-to-USB cable. There is also the option of connecting it to an old PS2 port directly, if your PC is old enough to have one, but we found in testing that the USB performance is identical. This won't apply to the PS2-less Raspberry Pi, of course, and don't forget that the Pi will also need powering. We always recommend doing so from a compatible powered hub, as a lack of power is the most common source of Raspberry Pi errors. You'll also need to get networking to your Raspberry Pi, either through the Ethernet port (perhaps using a powerline adaptor hidden in the cabinet), or by using a wireless USB device. Networking is essential because it enables you to reconfigure your Pi while it's tucked away within the cabinet, and it also enables you to change settings and perform administration tasks without having to connect a keyboard or mouse.
|
||||
|
||||
> ### Coin Mechanism ###
|
||||
|
||||
> In the emulation community, getting your coin mechanism to work with your emulator was often considered a step too close to commercial production. It meant you could potentially charge people to use your machine. Not only would this be wrong, but considering the provenance of many of the games you run on your own arcade machine, it could also be illegal. And it's definitely against the spirit of emulation. However, we and many other devotees think that a working coin mechanism is another step closer to the realism of an arcade machine, and is worth the effort in recreating the nostalgia of an old arcade. There's nothing like dropping a 10p piece into the coin tray and hearing the sound of the credits being added to the machine.
|
||||
|
||||
> It’s not actually that difficult. It depends on the coin mechanism in your arcade machine and how it sends a signal to say how many credits had been inserted. Most coin mechanisms come in two parts. The large part is the coin acceptor/validator. This is the physical side of the process that detects whether a coin is authentic, and determines its value. It does this with the help of a credit/logic board, usually attached via a ribbon cable and featuring lots of DIP switches. These switches are used to change which coins are accepted and how many credits they generate. It’s then usually as simple as finding the output switch, which is triggered with a credit, and connecting this to the coin input on your JAMMA connector, or directly onto the J-Pac. Our coin mechanism is a Mars MS111, common in the UK in the early 90s, and there’s plenty of information online about what each of the DIP switches do, as well as how to programme the controller for newer coins. We were also able to wire the 12V connector from the mechanism to a small light for behind the coin entry slot.
|
||||
|
||||
#### Step 4: Software ####
|
||||
|
||||
MAME is the only viable emulator for a project of this scale, and it now supports many thousands of different games running on countless different platforms, from the first arcade machines through to some more recent ones. It’s a project that has also spawned MESS, the multi-emulator super system, which targets platforms such as home computers and consoles from the 80s and 90s.
|
||||
|
||||
Configuring MAME could take a six-page article in itself. It’s a complex, sprawling, magnificent piece of software that emulates so many CPUs, so many sound devices, chips, controllers with so many options, that like MythTV, you never really stop configuring it.
|
||||
|
||||
But there’s an easier option, and one that’s purpose-built for the Raspberry Pi. It’s called PiMAME. This is both a distribution download and a script you can run on top of Raspbian, the Pi’s default distribution. Not only does it install MAME on your Raspberry Pi (which is useful because it’s not part of any of the default repositories), it also installs a selection of other emulators along with front-ends to manage them. MAME, for example, is a command-line utility with dozens of options. But PiMAME has another clever trick up its sleeve – it installs a simple web server that enables you to install new games through a browser connected to your network. This is a great advantage, because getting games into the correct folders is one of the trials of dealing with MAME, and it also enables you to make best use of whatever storage you’ve got connected to your Pi. Plus, PiMAME will update itself from the same script you use to install it, so keeping on top of updates couldn’t be easier. This could be especially useful at the moment, as at the time of writing the project was on the cusp of a major upgrade in the form of the 0.8 release. We found it slightly unstable in early March, but we’re sure everything will be sorted by the time you read this.
|
||||
|
||||
The best way to install PiMAME is to install Raspbian first. You can do this either through NOOBS, using a graphical tool from your desktop, or by using the dd command to copy the contents of the Raspbian image directly onto your SD card. As we mentioned in last month's BrewPi tutorial, this process has been documented many times before, so we won't waste the space here. Just install NOOBS if you want the easy option, following the instructions on the Raspberry Pi site. With Raspbian installed and running, make sure you use the configuration tool to expand the filesystem to fill your SD card, and that the system is up to date (sudo apt-get update; sudo apt-get upgrade). You then need to make sure you've got the git package installed. Any recent version of Raspbian will have git installed already, but you can make sure by typing sudo apt-get install git.
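For reference, the dd route looks roughly like this, assuming the image file is called raspbian.img and the SD card appears as /dev/sdX; double-check the device name, because dd will happily overwrite the wrong disk:

    sudo dd if=raspbian.img of=/dev/sdX bs=4M
    sync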
|
||||
|
||||
You then have to type the following command to clone the PiMAME installer from the project’s GitHub repository:
|
||||
|
||||
git clone https://github.com/ssilverm/pimame_installer
|
||||
|
||||
After that, you should get the following feedback if the command works:
|
||||
|
||||
Cloning into ‘pimame_installer’...
|
||||
remote: Reusing existing pack: 2306, done.
|
||||
remote: Total 2306 (delta 0), reused 0 (delta 0)
|
||||
Receiving objects: 100% (2306/2306), 4.61 MiB | 11 KiB/s, done.
|
||||
Resolving deltas: 100% (823/823), done.
|
||||
|
||||
This command will create a new folder called ‘pimame_installer’, and the next step is to switch into this and run the script it contains:
|
||||
|
||||
cd pimame_installer/
|
||||
sudo ./install.sh
|
||||
|
||||
This command installs and configures a lot of software. The length of time it takes will depend on your internet connection, as a lot of extra packages are downloaded. Our humble Pi with a 15Mb internet connection took around 45 minutes to complete the script, after which you’re invited to restart the machine. You can do this safely by typing sudo shutdown -r now, as this command will automatically handle any remaining write operations to the SD card.
|
||||
|
||||
And that’s all there is to the installation. After rebooting your Pi, you will be automatically logged in and the PiMAME launch menu will appear. It’s a great-looking interface in version 0.8, with photos of each of the platforms supported, plus small red icons to indicate how many games you’ve got installed.This should now be navigable through your controller. If you want to make sure the controller is correctly detected, use SSH to connect to your Pi and check for the existence of **/dev/input/by-id/usb-Ultimarc_I-PAC_Ultimarc_I-PAC-event-kbd**.
|
||||
|
||||
The default keyboard controls will enable you to select what kind of emulator you want to run on your arcade machine. The option we’re most interested in is the first, labelled ‘AdvMAME’, but you might also be surprised to see another MAME on offer, MAME4ALL. MAME4ALL is built specifically for the Raspberry Pi, and takes an old version of the MAME source code so that the performance of the ROMS that it does support is optimal. This makes a lot of sense, because there’s no way your Pi is going to be able to play anything too demanding, so there’s no reason to belabour the emulator with unneeded compatibility. All that’s left to do now is get some games onto your system (see the boxout below), and have fun!
|
||||
|
||||
![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade1.png)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxvoice.com/arcade-machine/
|
||||
|
||||
作者:[Ben Everard][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxvoice.com/author/ben_everard/
|
||||
[1]:http://www.ultimarc.com/jpac.html
|
@ -1,99 +0,0 @@
|
||||
How to configure SNMPv3 on ubuntu 14.04 server
|
||||
================================================================================
|
||||
Simple Network Management Protocol (SNMP) is an "Internet-standard protocol for managing devices on IP networks". Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks and more. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.[2]
|
||||
|
||||
SNMP exposes management data in the form of variables on the managed systems, which describe the system configuration. These variables can then be queried (and sometimes set) by managing applications.
|
||||
|
||||
### Why you want to use SNMPv3 ###
|
||||
|
||||
Although SNMPv3 makes no changes to the protocol aside from the addition of cryptographic security, it looks much different due to new textual conventions, concepts, and terminology.
|
||||
|
||||
SNMPv3 primarily added security and remote configuration enhancements to SNMP.
|
||||
|
||||
Security has been the biggest weakness of SNMP since the beginning. Authentication in SNMP Versions 1 and 2 amounts to nothing more than a password (community string) sent in clear text between a manager and agent.[1] Each SNMPv3 message contains security parameters which are encoded as an octet string. The meaning of these security parameters depends on the security model being used.
|
||||
|
||||
SNMPv3 provides important security features:
|
||||
|
||||
Confidentiality -- Encryption of packets to prevent snooping by an unauthorized source.
|
||||
|
||||
Integrity -- Message integrity to ensure that a packet has not been tampered with while in transit, including an optional packet replay protection mechanism.
|
||||
|
||||
Authentication -- Verification that the message is from a valid source.
|
||||
|
||||
### Install SNMP server and client in ubuntu ###
|
||||
|
||||
Open the terminal and run the following command
|
||||
|
||||
sudo apt-get install snmpd snmp
|
||||
|
||||
After installation you need to make the following changes.
|
||||
|
||||
### Configuring SNMPv3 in Ubuntu ###
|
||||
|
||||
Get access to the daemon from the outside.
|
||||
|
||||
The default installation only provides access to the daemon from localhost. In order to get access from the outside, open the file /etc/default/snmpd in your favorite editor:
|
||||
|
||||
sudo vi /etc/default/snmpd
|
||||
|
||||
Change the following line
|
||||
|
||||
From
|
||||
|
||||
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /var/run/snmpd.pid'
|
||||
|
||||
to
|
||||
|
||||
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf'
|
||||
|
||||
and restart snmpd
|
||||
|
||||
sudo /etc/init.d/snmpd restart
|
||||
|
||||
### Define SNMPv3 users, authentication and encryption parameters ###
|
||||
|
||||
SNMPv3 can be used in a number of ways depending on the “securityLevel” configuration parameter:
|
||||
|
||||
noAuthNoPriv -- No authentication and no encryption, basically no security at all!
|
||||
authNoPriv -- Authentication is required, but collected data sent over the network is not encrypted.
|
||||
authPriv -- The strongest form. Authentication is required and everything sent over the network is encrypted.
|
||||
|
||||
The snmpd configuration settings are all saved in a file called /etc/snmp/snmpd.conf. Open this file in your editor as in:
|
||||
|
||||
sudo vi /etc/snmp/snmpd.conf
|
||||
|
||||
Add the following lines to the end of the file:
|
||||
|
||||
#
|
||||
createUser user1
|
||||
createUser user2 MD5 user2password
|
||||
createUser user3 MD5 user3password DES user3encryption
|
||||
#
|
||||
rouser user1 noauth 1.3.6.1.2.1.1
|
||||
rouser user2 auth 1.3.6.1.2.1
|
||||
rwuser user3 priv 1.3.6.1.2.1
|
||||
|
||||
Note: if you want to use your own username/password combinations, keep in mind that the password and encryption phrases must be at least 8 characters long.
|
||||
|
||||
You also need to make the following change so that snmpd can listen for connections on all interfaces.
|
||||
|
||||
From
|
||||
|
||||
#agentAddress udp:161,udp6:[::1]:161
|
||||
|
||||
to
|
||||
|
||||
agentAddress udp:161,udp6:[::1]:161
|
||||
|
||||
Save your modified snmpd.conf file and restart the daemon with:
|
||||
|
||||
sudo /etc/init.d/snmpd restart
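To verify the setup you can query the daemon with the snmp client tools installed earlier, for example with the user3 credentials defined above (adjust the host and OID to your needs):

    snmpwalk -v3 -u user3 -l authPriv -a MD5 -A user3password -x DES -X user3encryption localhost 1.3.6.1.2.1.1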
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.ubuntugeek.com/how-to-configure-snmpv3-on-ubuntu-14-04-server.html
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
@ -1,466 +0,0 @@
|
||||
Linux Tutorial: Install Ansible Configuration Management And IT Automation Tool
|
||||
================================================================================
|
||||
![](http://s0.cyberciti.org/uploads/cms/2014/08/ansible_core_circle.png)
|
||||
|
||||
Today I will be talking about Ansible, a powerful configuration management solution written in Python. There are many configuration management solutions available, all with pros and cons; Ansible stands apart from many of them for its simplicity. What makes Ansible different from many of the most popular configuration management systems is that it is agentless: there is no need to set up agents on every node you want to control. Plus, this has the benefit of letting you control your entire infrastructure from more than one place, if needed. Whether that last point is really a benefit may be debatable, but I find it a positive in most cases. Enough talk; let's get started with Ansible installation and configuration on RHEL/CentOS and Debian/Ubuntu based systems.
|
||||
|
||||
### Prerequisites ###
|
||||
|
||||
1. Distro: RHEL/CentOS/Debian/Ubuntu Linux
|
||||
1. Jinja2: A modern and designer friendly templating language for Python.
|
||||
1. PyYAML: A YAML parser and emitter for the Python programming language.
|
||||
1. paramiko: Native Python SSHv2 protocol library.
|
||||
1. httplib2: A comprehensive HTTP client library.
|
||||
1. Most of the actions listed in this post are written with the assumption that they will be executed by the root user running bash or any other modern shell.
|
||||
|
||||
How Ansible works
|
||||
|
||||
The Ansible tool uses no agents. It requires no additional custom security infrastructure, so it's easy to deploy. All you need is an SSH client and server:
|
||||
|
||||
+----------------------+ +---------------+
|
||||
|Linux/Unix workstation| SSH | file_server1 |
|
||||
|with Ansible |<------------------>| db_server2 | Unix/Linux servers
|
||||
+----------------------+ Modules | proxy_server3 | in local/remote
|
||||
192.168.1.100 +---------------+ data centers
|
||||
|
||||
Where,
|
||||
|
||||
1. 192.168.1.100 - Install Ansible on your local workstation/server.
|
||||
1. file_server1..proxy_server3 - Use 192.168.1.100 and Ansible to automate configuration management of all servers.
|
||||
1. SSH - Setup ssh keys between 192.168.1.100 and local/remote servers.
|
||||
|
||||
### Ansible Installation Tutorial ###
|
||||
|
||||
Installation of Ansible is a breeze; many distributions have a package available in their third-party repos which can easily be installed. A quick alternative is to just pip install it or grab the latest copy from GitHub. To install using your package manager, on [RHEL/CentOS Linux based systems you will most likely need the EPEL repo][1], then:
|
||||
|
||||
#### Install ansible on a RHEL/CentOS Linux based system ####
|
||||
|
||||
Type the following [yum command][2]:
|
||||
|
||||
$ sudo yum install ansible
|
||||
|
||||
#### Install ansible on a Debian/Ubuntu Linux based system ####
|
||||
|
||||
Type the following [apt-get command][3]:
|
||||
|
||||
$ sudo apt-get install software-properties-common
|
||||
$ sudo apt-add-repository ppa:ansible/ansible
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install ansible
|
||||
|
||||
#### Install ansible using pip ####
|
||||
|
||||
The [pip command is a tool for installing and managing Python packages][4], such as those found in the Python Package Index. The following method works on Linux and Unix-like systems:
|
||||
|
||||
$ sudo pip install ansible
|
||||
|
||||
#### Install the latest version of ansible using source code ####
|
||||
|
||||
You can install the latest version from github as follows:
|
||||
|
||||
$ cd ~
|
||||
$ git clone git://github.com/ansible/ansible.git
|
||||
$ cd ./ansible
|
||||
$ source ./hacking/env-setup
|
||||
|
||||
When running Ansible from a git checkout, one thing to remember is that you will need to set up your environment every time you want to use it, or you can add it to your .bashrc file:
|
||||
|
||||
# ADD TO BASH RC
|
||||
$ echo "export ANSIBLE_HOSTS=~/ansible_hosts" >> ~/.bashrc
|
||||
$ echo "source ~/ansible/hacking/env-setup" >> ~/.bashrc
|
||||
|
||||
The hosts file for Ansible is basically a list of hosts that Ansible is able to perform work on. By default Ansible looks for the hosts file at /etc/ansible/hosts, but there are ways to override that, which can be handy if you are working with multiple installs or are responsible for several different clients' datacenters. You can pass the hosts file on the command line using the -i option:
|
||||
|
||||
$ ansible all -m shell -a "hostname" --ask-pass -i /etc/some/other/dir/ansible_hosts
|
||||
|
||||
My preference, however, is to use an environment variable; this can be useful if you source a different file when starting work for a specific client. The environment variable is $ANSIBLE_HOSTS, and can be set as follows:
|
||||
|
||||
$ export ANSIBLE_HOSTS=~/ansible_hosts
|
||||
|
||||
Once all requirements are installed and you have your hosts file set up, you can give it a test run. For a quick test, I put 127.0.0.1 into the ansible hosts file as follows:
|
||||
|
||||
$ echo "127.0.0.1" > ~/ansible_hosts
|
||||
|
||||
Now let's test with a quick ping:
|
||||
|
||||
$ ansible all -m ping
|
||||
|
||||
OR ask for the ssh password:
|
||||
|
||||
$ ansible all -m ping --ask-pass
|
||||
|
||||
I have run across a problem a few times regarding initial setup. It is highly recommended that you set up keys for Ansible to use, but in the previous test we used --ask-pass; on some machines you will need [to install sshpass][5] or add -c paramiko like so:
|
||||
|
||||
$ ansible all -m ping --ask-pass -c paramiko
|
||||
|
||||
Or you [can install sshpass][6]; however, sshpass is not always available in the standard repos, so paramiko can be easier.
|
||||
|
||||
### Setup SSH Keys ###
|
||||
|
||||
Now that we have gotten the configuration, and other simple stuff, out of the way, let's move on to doing something productive. A lot of the power of Ansible lies in playbooks, which are basically scripted Ansible runs (for the most part), but we will start with some one-liners before we build out a playbook. Let's start with creating and configuring keys so we can avoid the -c and --ask-pass options:
|
||||
|
||||
$ ssh-keygen -t rsa
|
||||
|
||||
Sample outputs:
|
||||
|
||||
Generating public/private rsa key pair.
|
||||
Enter file in which to save the key (/home/mike/.ssh/id_rsa):
|
||||
Enter passphrase (empty for no passphrase):
|
||||
Enter same passphrase again:
|
||||
Your identification has been saved in /home/mike/.ssh/id_rsa.
|
||||
Your public key has been saved in /home/mike/.ssh/id_rsa.pub.
|
||||
The key fingerprint is:
|
||||
94:a0:19:02:ba:25:23:7f:ee:6c:fb:e8:38:b4:f2:42 mike@ultrabook.linuxdork.com
|
||||
The key's randomart image is:
|
||||
+--[ RSA 2048]----+
|
||||
|... . . |
|
||||
|. . + . . |
|
||||
|= . o o |
|
||||
|.* . |
|
||||
|. . . S |
|
||||
| E.o |
|
||||
|.. .. |
|
||||
|o o+.. |
|
||||
| +o+*o. |
|
||||
+-----------------+
|
||||
|
||||
Now obviously there are plenty of ways to put this in place on the remote machine, but since we are using Ansible, let's use that:
|
||||
|
||||
$ ansible all -m copy -a "src=/home/mike/.ssh/id_rsa.pub dest=/tmp/id_rsa.pub" --ask-pass -c paramiko
|
||||
|
||||
Sample outputs:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | success >> {
|
||||
"changed": true,
|
||||
"dest": "/tmp/id_rsa.pub",
|
||||
"gid": 100,
|
||||
"group": "users",
|
||||
"md5sum": "bafd3fce6b8a33cf1de415af432774b4",
|
||||
"mode": "0644",
|
||||
"owner": "mike",
|
||||
"size": 410,
|
||||
"src": "/home/mike/.ansible/tmp/ansible-tmp-1407008170.46-208759459189201/source",
|
||||
"state": "file",
|
||||
"uid": 1000
|
||||
}
|
||||
|
||||
Next, add the public key on the remote server:
|
||||
|
||||
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko
|
||||
|
||||
Sample outputs:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | FAILED | rc=1 >>
|
||||
/bin/sh: /root/.ssh/authorized_keys: Permission denied
|
||||
|
||||
Whoops, we want to be able to run things as root, so let's add the -u option:
|
||||
|
||||
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko -u root
|
||||
|
||||
Sample outputs:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | success | rc=0 >>
|
||||
|
||||
Please note that I wanted to demonstrate a file transfer using Ansible; there is, however, a more built-in way of managing keys with Ansible:
|
||||
|
||||
$ ansible all -m authorized_key -a "user=mike key='{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}' path=/home/mike/.ssh/authorized_keys manage_dir=no" --ask-pass -c paramiko
|
||||
|
||||
Sample outputs:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | success >> {
|
||||
"changed": true,
|
||||
"gid": 100,
|
||||
"group": "users",
|
||||
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq+Z8/usprXk0aCAPyP0TGylm2MKbmEsHePUOd7p5DO1QQTHak+9gwdoJJavy0yoUdi+C+autKjvuuS+vGb8+I+8mFNu5CvKiZzIpMjZvrZMhHRdNud7GuEanusTEJfi1pUd3NA2iXhl4a6S9a/4G2mKyf7QQSzI4Z5ddudUXd9yHmo9Yt48/ASOJLHIcYfSsswOm8ux1UnyeHqgpdIVONVFsKKuSNSvZBVl3bXzhkhjxz8RMiBGIubJDBuKwZqNSJkOlPWYN76btxMCDVm07O7vNChpf0cmWEfM3pXKPBq/UBxyG2MgoCGkIRGOtJ8UjC/daadBUuxg92/u01VNEB mike@ultrabook.linuxdork.com",
|
||||
"key_options": null,
|
||||
"keyfile": "/home/mike/.ssh/authorized_keys",
|
||||
"manage_dir": false,
|
||||
"mode": "0600",
|
||||
"owner": "mike",
|
||||
"path": "/home/mike/.ssh/authorized_keys",
|
||||
"size": 410,
|
||||
"state": "file",
|
||||
"uid": 1000,
|
||||
"unique": false,
|
||||
"user": "mike"
|
||||
}
|
||||
|
||||
Now that the keys are in place, let's try running an arbitrary command like hostname and hope we don't get prompted for a password:
|
||||
|
||||
$ ansible all -m shell -a "hostname" -u root
|
||||
|
||||
Sample outputs:
|
||||
|
||||
127.0.0.1 | success | rc=0 >>
|
||||
|
||||
Success!!! Now that we can run commands as root without being bothered by a password, we are in a good place to easily configure any and all hosts in the ansible hosts file. Let's remove the key from /tmp:
|
||||
|
||||
$ ansible all -m file -a "dest=/tmp/id_rsa.pub state=absent" -u root
|
||||
|
||||
Sample outputs:
|
||||
|
||||
127.0.0.1 | success >> {
|
||||
"changed": true,
|
||||
"path": "/tmp/id_rsa.pub",
|
||||
"state": "absent"
|
||||
}
|
||||
|
||||
Next, I'm going to make sure we have a few packages installed and at their latest version, and then we will move on to something a little more complicated:
|
||||
|
||||
$ ansible all -m zypper -a "name=apache2 state=latest" -u root
|
||||
|
||||
Sample outputs:
|
||||
|
||||
127.0.0.1 | success >> {
|
||||
"changed": false,
|
||||
"name": "apache2",
|
||||
"state": "latest"
|
||||
}
|
||||
|
||||
Alright, the key we placed in /tmp is now absent and we have the latest version of Apache installed. This brings me to the next point, something that makes Ansible very flexible and gives more power to playbooks: many of you may have noticed the -m zypper in the previous commands. Unless you use openSUSE or SUSE Linux Enterprise you may not be familiar with zypper; it is basically the equivalent of yum in the SUSE world. In all of the examples above I have had only one machine in my hosts file, and while everything but the last command should work on any standard *nix system with a standard ssh config, this leads to a problem. What if we had multiple machine types that we wanted to manage? Well, this is where playbooks, and the configurability of Ansible, really shine. First let's modify our hosts file a little; here goes:
|
||||
|
||||
$ cat ~/ansible_hosts
|
||||
|
||||
Sample outputs:
|
||||
|
||||
[RHELBased]
|
||||
10.50.1.33
|
||||
10.50.1.47
|
||||
|
||||
[SUSEBased]
|
||||
127.0.0.1
|
||||
|
||||
First, we create some groups of servers and give them some meaningful tags. Then we create a playbook that will do different things for the different kinds of servers. You might notice the similarity between the YAML data structures and the command line instructions we ran earlier. Basically, -m specifies a module and -a the module args. In the YAML representation you put the module name, then a colon, and finally the args.
|
||||
|
||||
---
|
||||
- hosts: SUSEBased
|
||||
remote_user: root
|
||||
tasks:
|
||||
- zypper: name=apache2 state=latest
|
||||
- hosts: RHELBased
|
||||
remote_user: root
|
||||
tasks:
|
||||
- yum: name=httpd state=latest
|
||||
|
||||
Now that we have a simple playbook, we can run it as follows:
|
||||
|
||||
$ ansible-playbook testPlaybook.yaml -f 10
|
||||
|
||||
Sample outputs:
|
||||
|
||||
PLAY [SUSEBased] **************************************************************
|
||||
|
||||
GATHERING FACTS ***************************************************************
|
||||
ok: [127.0.0.1]
|
||||
|
||||
TASK: [zypper name=apache2 state=latest] **************************************
|
||||
ok: [127.0.0.1]
|
||||
|
||||
PLAY [RHELBased] **************************************************************
|
||||
|
||||
GATHERING FACTS ***************************************************************
|
||||
ok: [10.50.1.33]
|
||||
ok: [10.50.1.47]
|
||||
|
||||
TASK: [yum name=httpd state=latest] *******************************************
|
||||
changed: [10.50.1.33]
|
||||
changed: [10.50.1.47]
|
||||
|
||||
PLAY RECAP ********************************************************************
|
||||
10.50.1.33 : ok=2 changed=1 unreachable=0 failed=0
|
||||
10.50.1.47 : ok=2 changed=1 unreachable=0 failed=0
|
||||
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
|
||||
|
||||
You will notice output from each machine that Ansible contacted. The -f option is what lets Ansible run on multiple hosts in parallel. Instead of using all, or the name of a host group, on the command line, the hosts each play applies to are specified in the playbook itself. While we no longer need the --ask-pass option since we have SSH keys set up, it comes in handy when setting up new machines, and even new machines can be configured from a playbook. To demonstrate this, let's convert our earlier key example into a playbook:
|
||||
|
||||
---
|
||||
- hosts: SUSEBased
|
||||
remote_user: mike
|
||||
sudo: yes
|
||||
tasks:
|
||||
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
|
||||
- hosts: RHELBased
|
||||
remote_user: mdonlon
|
||||
sudo: yes
|
||||
tasks:
|
||||
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
|
||||
|
||||
Now there are plenty of other ways this could be done, for example having the keys dropped during a kickstart, or via some other kind of process involved in bringing up machines on the hosting of your choice, but this can be used in pretty much any situation, assuming ssh is set up to accept a password. One thing to think about before writing out too many playbooks: version control can save you a lot of time. Machines need to change over time, but you don't need to rewrite a playbook every time a machine changes; just update the pertinent bits and commit the changes. Another benefit of this ties into what I said earlier about being able to manage the entire infrastructure from multiple places. You can easily git clone your playbook repo onto a new machine and be completely set up to manage everything in a repeatable manner.
|
||||
|
||||
#### Real world ansible example ####
|
||||
|
||||
I know a lot of people make great use of services like pastebin, and a lot of companies, for obvious reasons, set up their own internal instance of something similar. Recently, I came across a newish application called showterm, and coincidentally I was asked to set up an internal instance of it for a client. I will spare you the details of this app, but you can google showterm if interested. So, as a reasonable real-world example, I will attempt to set up a showterm server and configure the needed app on the client to use it. In the process we will need a database server as well. So here goes; let's start with the client configuration.
|
||||
|
||||
---
|
||||
- hosts: showtermClients
|
||||
remote_user: root
|
||||
tasks:
|
||||
- yum: name=rubygems state=latest
|
||||
- yum: name=ruby-devel state=latest
|
||||
- yum: name=gcc state=latest
|
||||
- gem: name=showterm state=latest user_install=no
|
||||
|
||||
That was easy; let's move on to the main server:
|
||||
|
||||
---
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: ensure packages are installed
|
||||
yum: name={{item}} state=latest
|
||||
with_items:
|
||||
- postgresql
|
||||
- postgresql-server
|
||||
- postgresql-devel
|
||||
- python-psycopg2
|
||||
- git
|
||||
- ruby21
|
||||
- ruby21-passenger
|
||||
- name: showterm server from github
|
||||
git: repo=https://github.com/ConradIrwin/showterm.io dest=/root/showterm
|
||||
- name: Initdb
|
||||
command: service postgresql initdb
|
||||
creates=/var/lib/pgsql/data/postgresql.conf
|
||||
|
||||
- name: Start PostgreSQL and enable at boot
|
||||
service: name=postgresql
|
||||
enabled=yes
|
||||
state=started
|
||||
- gem: name=pg state=latest user_install=no
|
||||
handlers:
|
||||
- name: restart postgresql
|
||||
service: name=postgresql state=restarted
|
||||
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
sudo: yes
|
||||
sudo_user: postgres
|
||||
vars:
|
||||
dbname: showterm
|
||||
dbuser: showterm
|
||||
dbpassword: showtermpassword
|
||||
tasks:
|
||||
- name: create db
|
||||
postgresql_db: name={{dbname}}
|
||||
|
||||
- name: create user with ALL priv
|
||||
postgresql_user: db={{dbname}} name={{dbuser}} password={{dbpassword}} priv=ALL
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: database.yml
|
||||
template: src=database.yml dest=/root/showterm/config/database.yml
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: run bundle install
|
||||
shell: bundle install
|
||||
args:
|
||||
chdir: /root/showterm
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: run rake db tasks
|
||||
shell: 'bundle exec rake db:create db:migrate db:seed'
|
||||
args:
|
||||
chdir: /root/showterm
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: apache config
|
||||
template: src=showterm.conf dest=/etc/httpd/conf.d/showterm.conf
|
||||
|
||||
Not so bad. Keeping in mind that this is a somewhat random and obscure app that we can now install in a consistent fashion on any number of machines, this is where the benefits of configuration management really come to light. Also, in most cases the declarative syntax almost speaks for itself and wiki pages need not go into as much detail, although a wiki page with too much detail is never a bad thing in my opinion.
|
||||
|
||||
### Expanding Configuration ###
|
||||
|
||||
We have not touched on everything here; Ansible has many options for configuring your setup. You can do things like embedding variables in your hosts file, so that Ansible will interpolate them on the remote nodes, e.g.:
|
||||
|
||||
[RHELBased]
|
||||
10.50.1.33 http_port=443
|
||||
10.50.1.47 http_port=80 ansible_ssh_user=mdonlon
|
||||
|
||||
[SUSEBased]
|
||||
127.0.0.1 http_port=443
|
||||
|
||||
While this is really handy for quick configurations, you can also layer variables across multiple files in YAML format. In the directory containing your hosts file you can make two subdirectories named group_vars and host_vars. Any files in those paths that match the name of a group of hosts, or a host name in your hosts file, will be interpolated at run time. So the previous example would look like this:
|
||||
|
||||
ultrabook:/etc/ansible # pwd
|
||||
/etc/ansible
|
||||
ultrabook:/etc/ansible # tree
|
||||
.
|
||||
├── group_vars
|
||||
│ ├── RHELBased
|
||||
│ └── SUSEBased
|
||||
├── hosts
|
||||
└── host_vars
|
||||
├── 10.50.1.33
|
||||
└── 10.50.1.47
|
||||
|
||||
----------
|
||||
|
||||
2 directories, 5 files
|
||||
ultrabook:/etc/ansible # cat hosts
|
||||
[RHELBased]
|
||||
10.50.1.33
|
||||
10.50.1.47
|
||||
|
||||
----------
|
||||
|
||||
[SUSEBased]
|
||||
127.0.0.1
|
||||
ultrabook:/etc/ansible # cat group_vars/RHELBased
|
||||
ultrabook:/etc/ansible # cat group_vars/SUSEBased
|
||||
---
|
||||
http_port: 443
|
||||
ultrabook:/etc/ansible # cat host_vars/10.50.1.33
|
||||
---
|
||||
http_port: 443
|
||||
ultrabook:/etc/ansible # cat host_vars/10.50.1.47
|
||||
---
|
||||
http_port: 80
|
||||
ansible_ssh_user: mdonlon
|
||||
|
||||
### Refining Playbooks ###
|
||||
|
||||
There are many ways to organize playbooks as well. In the previous examples we used a single file, and everything is really simplified. One common way of organizing things is to create roles. Basically, you load a main file as your playbook, and it then imports all the data from extra files; those extra files are organized as roles. For example, if you have a WordPress site, you need a web head and a database. The web head will have a web server, the app code, and any needed modules. The database is sometimes run on the same host and sometimes run on a remote host, and this is where roles really shine. You make a directory and a small playbook for each role. In this case we could have apache, mysql, wordpress, mod_php, and php roles. The big advantage is that not every role has to be applied on one server; in this case mysql could be applied to a separate machine. This also allows for code re-use; for example, your apache role could be used with Python apps and PHP apps alike. A rough sketch of such a layout follows below. Fully demonstrating this is a little beyond the scope of this article, and there are many different ways of doing things, so I would recommend searching for ansible playbook examples. There are many people contributing code on GitHub, and I am sure various other sites.
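As a rough sketch only (the role and group names here are made up for illustration, and a real layout would also carry handlers, templates and vars directories), such a role-based structure could look like this:

    site.yml
    roles/
    ├── apache/
    │   └── tasks/main.yml
    ├── mysql/
    │   └── tasks/main.yml
    └── wordpress/
        └── tasks/main.yml

    ## site.yml simply maps roles onto groups from your hosts file:
    ---
    - hosts: webservers
      roles:
        - apache
        - wordpress

    - hosts: dbservers
      roles:
        - mysql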
|
||||
|
||||
### Modules ###
|
||||
|
||||
All of the work being done behind the scenes in Ansible is driven by modules. Ansible has an excellent library of built-in modules that do things like package installation, transferring files, and everything we have done in this article. But for some people this will not be suitable for their setup, so Ansible provides a means of adding your own modules. One great thing about the API provided by Ansible is that you are not restricted to the language it was written in, Python; you can use any language, really. Ansible modules work by passing around JSON data structures, so as long as you can build a JSON data structure in your language of choice, which pretty much any scripting language can do, you can begin coding something right away (a trivial sketch follows below). There is plenty of documentation on the Ansible site about how the module interface works, and many examples of modules on GitHub as well. Keep in mind that some obscure languages may not have great support, but that would only be because not enough people are contributing code in that language, so try it out and publish your results somewhere!
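As a deliberately trivial sketch of that idea (not an official example; with the module interface of this era, a non-Python module receives a file containing its arguments as its first argument, which this sketch simply ignores), a shell-based module could be as small as this:

    #!/bin/bash
    # my_uptime: report the machine's uptime in seconds as a JSON result.
    # Ansible passes an arguments file as $1; this trivial module ignores it.
    UPTIME=$(cut -d ' ' -f 1 /proc/uptime)
    # A module must print a single JSON object on stdout.
    echo "{\"changed\": false, \"uptime_seconds\": \"${UPTIME}\"}"

Dropping a script like this into a library/ directory next to your playbook (or pointing the ANSIBLE_LIBRARY environment variable at it) should let you call it like any built-in module.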
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In conclusion, there are many systems around for configuration management; I hope this article shows the ease of setting up Ansible, which I believe is one of its strongest points. Please keep in mind that I was trying to show a lot of different ways to do things, and not everything above may be considered best practice in your private infrastructure or in the wider coding world. Here are some more links to take your knowledge of Ansible to the next level:
|
||||
|
||||
- [Ansible project][7] home page.
|
||||
- [Ansible project documentation][8].
|
||||
- [Multistage environments with Ansible][9].
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/python-tutorials/linux-tutorial-install-ansible-configuration-management-and-it-automation-tool/
|
||||
|
||||
作者:[Nix Craft][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.cyberciti.biz/tips/about-us
|
||||
[1]:http://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/
|
||||
[2]:http://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
|
||||
[3]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
|
||||
[4]:http://www.cyberciti.biz/faq/debian-ubuntu-centos-rhel-linux-install-pipclient/
|
||||
[5]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
|
||||
[6]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
|
||||
[7]:http://www.ansible.com/
|
||||
[8]:http://docs.ansible.com/
|
||||
[9]:http://rosstuck.com/multistage-environments-with-ansible/
|
@ -1,219 +0,0 @@
|
||||
DoubleC is translating
|
||||
How to create a site-to-site IPsec VPN tunnel using Openswan in Linux
|
||||
================================================================================
|
||||
A virtual private network (VPN) tunnel is used to securely interconnect two physically separate networks through a tunnel over the Internet. Tunneling is needed when the separate networks are private LAN subnets with globally non-routable private IP addresses, which are not reachable to each other via traditional routing over the Internet. For example, VPN tunnels are often deployed to connect different NATed branch office networks belonging to the same institution.
|
||||
|
||||
Sometimes VPN tunneling may be used simply for its security benefit as well. Service providers or private companies may design their networks in such a way that vital servers (e.g., database, VoIP, banking servers) are placed in a subnet that is accessible to trusted personnel through a VPN tunnel only. When a secure VPN tunnel is required, [IPsec][1] is often a preferred choice because an IPsec VPN tunnel is secured with multiple layers of security.
|
||||
|
||||
This tutorial will show how we can easily create a site-to-site VPN tunnel using [Openswan][2] in Linux.
|
||||
|
||||
### Topology ###
|
||||
|
||||
This tutorial will focus on the following topologies for creating an IPsec tunnel.
|
||||
|
||||
![](https://farm4.staticflickr.com/3838/15004668831_fd260b7f1e_z.jpg)
|
||||
|
||||
![](https://farm6.staticflickr.com/5559/15004668821_36e02ab8b0_z.jpg)
|
||||
|
||||
![](https://farm6.staticflickr.com/5571/14821245117_3f677e4d58_z.jpg)
|
||||
|
||||
### Installing Packages and Preparing VPN Servers ###
|
||||
|
||||
Usually, you will be managing site-A only, but based on the requirements, you could be managing both site-A and site-B. We start the process by installing Openswan.
|
||||
|
||||
On Red Hat based Systems (CentOS, Fedora or RHEL):
|
||||
|
||||
# yum install openswan lsof
|
||||
|
||||
On Debian based Systems (Debian, Ubuntu or Linux Mint):
|
||||
|
||||
# apt-get install openswan
|
||||
|
||||
Now we disable VPN redirects, if any, in the server using these commands:
|
||||
|
||||
# for vpn in /proc/sys/net/ipv4/conf/*;
|
||||
# do echo 0 > $vpn/accept_redirects;
|
||||
# echo 0 > $vpn/send_redirects;
|
||||
# done
|
||||
|
||||
Next, we modify the kernel parameters to allow IP forwarding and disable redirects permanently.
|
||||
|
||||
# vim /etc/sysctl.conf
|
||||
|
||||
----------
|
||||
|
||||
net.ipv4.ip_forward = 1
|
||||
net.ipv4.conf.all.accept_redirects = 0
|
||||
net.ipv4.conf.all.send_redirects = 0
|
||||
|
||||
Reload /etc/sysctl.conf:
|
||||
|
||||
# sysctl -p
|
||||
|
||||
We allow necessary ports in the firewall. Please make sure that the rules are not conflicting with existing firewall rules.
|
||||
|
||||
# iptables -A INPUT -p udp --dport 500 -j ACCEPT
|
||||
# iptables -A INPUT -p tcp --dport 4500 -j ACCEPT
|
||||
# iptables -A INPUT -p udp --dport 4500 -j ACCEPT
|
||||
|
||||
Finally, we create firewall rules for NAT.
|
||||
|
||||
# iptables -t nat -A POSTROUTING -s site-A-private-subnet -d site-B-private-subnet -j SNAT --to site-A-Public-IP
|
||||
|
||||
Please make sure that the firewall rules are persistent.
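How you persist them depends on the distribution; for example (assuming the stock iptables init script on Red Hat based systems, or the iptables-persistent package on Debian based ones):

    # service iptables save
    ## or, on Debian/Ubuntu with iptables-persistent installed:
    # iptables-save > /etc/iptables/rules.v4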
|
||||
|
||||
#### Note: ####
|
||||
|
||||
- You could use MASQUERADE instead of SNAT. Logically it should work, but it caused me to have issues with virtual private servers (VPS) in the past. So I would use SNAT if I were you.
|
||||
- If you are managing site-B as well, create similar rules in site-B server.
|
||||
- Direct routing does not need SNAT.
|
||||
|
||||
### Preparing Configuration Files ###
|
||||
|
||||
The first configuration file that we will work with is ipsec.conf. Regardless of which server you are configuring, always consider your site as 'left' and remote site as 'right'. The following configuration is done in siteA's VPN server.
|
||||
|
||||
# vim /etc/ipsec.conf
|
||||
|
||||
----------
|
||||
|
||||
## general configuration parameters ##
|
||||
|
||||
config setup
|
||||
plutodebug=all
|
||||
plutostderrlog=/var/log/pluto.log
|
||||
protostack=netkey
|
||||
nat_traversal=yes
|
||||
virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/16
|
||||
## disable opportunistic encryption in Red Hat ##
|
||||
oe=off
|
||||
|
||||
## disable opportunistic encryption in Debian ##
|
||||
## Note: this is a separate declaration statement ##
|
||||
include /etc/ipsec.d/examples/no_oe.conf
|
||||
|
||||
## connection definition in Red Hat ##
|
||||
conn demo-connection-redhat
|
||||
authby=secret
|
||||
auto=start
|
||||
ike=3des-md5
|
||||
## phase 1 ##
|
||||
keyexchange=ike
|
||||
## phase 2 ##
|
||||
phase2=esp
|
||||
phase2alg=3des-md5
|
||||
compress=no
|
||||
pfs=yes
|
||||
type=tunnel
|
||||
left=<siteA-public-IP>
|
||||
leftsourceip=<siteA-public-IP>
|
||||
leftsubnet=<siteA-private-subnet>/netmask
|
||||
## for direct routing ##
|
||||
leftsubnet=<siteA-public-IP>/32
|
||||
leftnexthop=%defaultroute
|
||||
right=<siteB-public-IP>
|
||||
rightsubnet=<siteB-private-subnet>/netmask
|
||||
|
||||
## connection definition in Debian ##
|
||||
conn demo-connection-debian
|
||||
authby=secret
|
||||
auto=start
|
||||
## phase 1 ##
|
||||
keyexchange=ike
|
||||
## phase 2 ##
|
||||
esp=3des-md5
|
||||
pfs=yes
|
||||
type=tunnel
|
||||
left=<siteA-public-IP>
|
||||
leftsourceip=<siteA-public-IP>
|
||||
leftsubnet=<siteA-private-subnet>/netmask
|
||||
## for direct routing ##
|
||||
leftsubnet=<siteA-public-IP>/32
|
||||
leftnexthop=%defaultroute
|
||||
right=<siteB-public-IP>
|
||||
rightsubnet=<siteB-private-subnet>/netmask
|
||||
|
||||
Authentication can be done in several different ways. This tutorial will cover the use of pre-shared key, which is added to the file /etc/ipsec.secrets.
|
||||
|
||||
# vim /etc/ipsec.secrets
|
||||
|
||||
----------
|
||||
|
||||
siteA-public-IP siteB-public-IP: PSK "pre-shared-key"
|
||||
## in case of multiple sites ##
|
||||
siteA-public-IP siteC-public-IP: PSK "corresponding-pre-shared-key"
|
||||
|
||||
### Starting the Service and Troubleshooting ###
|
||||
|
||||
The server should now be ready to create a site-to-site VPN tunnel. If you are managing siteB as well, please make sure that you have configured the siteB server with the necessary parameters. For Red Hat based systems, please also make sure that you add the service to startup using the chkconfig command.
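For example (assuming the init script is named ipsec, as installed by the openswan package):

    # chkconfig ipsec on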
|
||||
|
||||
# /etc/init.d/ipsec restart
|
||||
|
||||
If there are no errors on either end server, the tunnel should be up now. Taking the following into consideration, you can test the tunnel with the ping command.
|
||||
|
||||
1. The siteB-private subnet should not be reachable from site A, i.e., ping should not work if the tunnel is not up.
|
||||
1. After the tunnel is up, try ping to siteB-private-subnet from siteA. This should work.
|
||||
|
||||
Also, the routes to the destination's private subnet should appear in the server's routing table.
|
||||
|
||||
# ip route
|
||||
|
||||
----------
|
||||
|
||||
[siteB-private-subnet] via [siteA-gateway] dev eth0 src [siteA-public-IP]
|
||||
default via [siteA-gateway] dev eth0
|
||||
|
||||
Additionally, we can check the status of the tunnel using the following useful commands.
|
||||
|
||||
# service ipsec status
|
||||
|
||||
----------
|
||||
|
||||
IPsec running - pluto pid: 20754
|
||||
pluto pid 20754
|
||||
1 tunnels up
|
||||
some eroutes exist
|
||||
|
||||
----------
|
||||
|
||||
# ipsec auto --status
|
||||
|
||||
----------
|
||||
|
||||
## output truncated ##
|
||||
000 "demo-connection-debian": myip=<siteA-public-IP>; hisip=unset;
|
||||
000 "demo-connection-debian": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; nat_keepalive: yes
|
||||
000 "demo-connection-debian": policy: PSK+ENCRYPT+TUNNEL+PFS+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 32,28; interface: eth0;
|
||||
|
||||
## output truncated ##
|
||||
000 #184: "demo-connection-debian":500 STATE_QUICK_R2 (IPsec SA established); EVENT_SA_REPLACE in 1653s; newest IPSEC; eroute owner; isakmp#183; idle; import:not set
|
||||
|
||||
## output truncated ##
|
||||
000 #183: "demo-connection-debian":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 1093s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:not set
|
||||
|
||||
The log file /var/log/pluto.log should also contain useful information regarding authentication, key exchanges and information on different phases of the tunnel. If your tunnel doesn't come up, you could check there as well.
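For example, you could follow the log while bringing the tunnel up (the path matches the plutostderrlog setting above):

    # tail -f /var/log/pluto.log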
|
||||
|
||||
If you are sure that all the configuration is correct, and if your tunnel is still not coming up, you should check the following things.
|
||||
|
||||
1. Many ISPs filter IPsec ports. Make sure that UDP 500 and TCP/UDP 4500 are allowed by your ISP. You could try connecting to your server's IPsec ports from a remote location with telnet or netcat (see the example after this list).
|
||||
1. Make sure that necessary ports are allowed in the firewall of the server/s.
|
||||
1. Make sure that the pre-shared keys are identical in both end servers.
|
||||
1. The left and right parameters should be properly configured on both end servers.
|
||||
1. If you are facing problems with NAT, try using SNAT instead of MASQUERADING.
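For example, a quick reachability check from a remote machine could look like this (a rough sketch; note that UDP probes with netcat are not fully reliable, so a lack of response is not conclusive):

    $ telnet siteA-public-IP 4500
    $ nc -zvu siteA-public-IP 500
    $ nc -zvu siteA-public-IP 4500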
|
||||
|
||||
To sum up, this tutorial focused on the procedure of creating a site-to-site IPSec VPN tunnel in Linux using Openswan. VPN tunnels are very useful in enhancing security as they allow admins to make critical resources available only through the tunnels. Also VPN tunnels ensure that the data in transit is secured from eavesdropping or interception.
|
||||
|
||||
Hope this helps. Let me know what you think.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/2014/08/create-site-to-site-ipsec-vpn-tunnel-openswan-linux.html
|
||||
|
||||
作者:[Sarmed Rahman][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/sarmed
|
||||
[1]:http://en.wikipedia.org/wiki/IPsec
|
||||
[2]:https://www.openswan.org/
|
@ -1,213 +0,0 @@
|
||||
Setup Thin Provisioning Volumes in Logical Volume Management (LVM) – Part IV
|
||||
================================================================================
|
||||
Logical Volume Management has great features such as snapshots and thin provisioning. Previously, in Part III, we saw how to snapshot a logical volume. Here in this article, we are going to see how to set up thin provisioning volumes in LVM.
|
||||
|
||||
![Setup Thin Provisioning in LVM](http://www.tecmint.com/wp-content/uploads/2014/08/Setup-Thin-Provisioning-in-LVM.jpg)
|
||||
Setup Thin Provisioning in LVM
|
||||
|
||||
### What is Thin Provisioning? ###
|
||||
|
||||
Thin provisioning is used in LVM to create virtual disks inside a thin pool. Let us assume that I have **15GB** of storage capacity on my server. I already have 2 clients who have 5GB of storage each. You are the third client, and you have asked for 5GB of storage. In the past we used to allocate the whole 5GB up front (a thick volume), even though you might only use 2GB of that 5GB and leave the other 3GB free to fill up later.
|
||||
|
||||
What we do in thin provisioning instead is define a thin pool inside one of the large volume groups, and then define thin volumes inside that thin pool. Whatever files you write will be stored, and your storage will be shown as 5GB, but the full 5GB will not actually be allocated on disk. The same is done for the other clients as well. As I said, there are already 2 clients and you are my 3rd client.
|
||||
|
||||
So, how many GB in total have I now assigned to clients? The full 15GB is already committed. If someone comes to me and asks for 5GB, can I give it? The answer is "**Yes**": with thin provisioning I can give 5GB to a 4th client even though I have already assigned the whole 15GB.
|
||||
|
||||
**Warning**: Allocating more than the 15GB that the pool actually has is called over provisioning.
|
||||
|
||||
### How Does it Work, and How Do We Provide Storage to New Clients? ###
|
||||
|
||||
I have provided you with 5GB, but you may use only 2GB and the other 3GB will remain free. With thick provisioning we cannot take advantage of this, because the whole space is allocated right from the start.
|
||||
|
||||
In thin provisioning, if I define 5GB for you, it will not allocate the whole disk space when the volume is defined; it will grow up to 5GB as you write data. Hopefully you get the idea! Like you, the other clients will not use their full volumes either, so there is a chance to give another 5GB to a new client. This is called over provisioning.
|
||||
|
||||
However, it is essential to monitor the growth of each and every volume; if you do not, it will end in disaster. When over provisioning is in place and all 4 clients write data heavily to disk, you may face an issue, because the writes will fill up your 15GB pool, overflow it, and cause the volumes to be dropped. A quick way to keep an eye on that growth is shown below.
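For example, the reporting output of the lvs command can be checked regularly, or run under watch (the volume group name vg_thin matches the one created later in this article):

    # lvs -o lv_name,lv_size,data_percent vg_thin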
|
||||
|
||||
### Requirements ###
|
||||
|
||||
注:此三篇文章如果发布后可换成发布后链接,原文在前几天更新中
|
||||
|
||||
- [Create Disk Storage with LVM in Linux – PART 1][1]
|
||||
- [How to Extend/Reduce LVM’s in Linux – Part II][2]
|
||||
- [How to Create/Restore Snapshot of Logical Volume in LVM – Part III][3]
|
||||
|
||||
#### My Server Setup ####
|
||||
|
||||
Operating System – CentOS 6.5 with LVM Installation
|
||||
Server IP – 192.168.0.200
|
||||
|
||||
### Step 1: Setup Thin Pool and Volumes ###
|
||||
|
||||
Let's see practically how to set up the thin pool and thin volumes. First we need a volume group with plenty of space. Here I'm creating a **15GB** volume group for demonstration purposes. Now, create the volume group using the command below.
|
||||
|
||||
# vgcreate -s 32M vg_thin /dev/sdb1
|
||||
|
||||
![Listing Volume Group](http://www.tecmint.com/wp-content/uploads/2014/08/Listing-Volume-Group.jpg)
|
||||
Listing Volume Group
|
||||
|
||||
Next, check the space available for logical volumes before creating the thin pool and volumes.
|
||||
|
||||
# vgs
|
||||
# lvs
|
||||
|
||||
![Check Logical Volume](http://www.tecmint.com/wp-content/uploads/2014/08/check-Logical-Volume.jpg)
|
||||
Check Logical Volume
|
||||
|
||||
In the above lvs output we can see only the default logical volumes for the file system and swap.
|
||||
|
||||
### Creating a Thin Pool ###
|
||||
|
||||
To create a 15GB thin pool in the volume group (vg_thin), use the following command.
|
||||
|
||||
# lvcreate -L 15G --thinpool tp_tecmint_pool vg_thin
|
||||
|
||||
- **-L** – Size of the thin pool
|
||||
- **–thinpool** – To create a thin pool
|
||||
- **tp_tecmint_pool** – Thin pool name
|
||||
- **vg_thin** – Name of the volume group where we need to create the pool
|
||||
|
||||
![Create Thin Pool](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Pool.jpg)
|
||||
Create Thin Pool
|
||||
|
||||
To get more detail we can use the command ‘lvdisplay’.
|
||||
|
||||
# lvdisplay vg_thin/tp_tecmint_pool
|
||||
|
||||
![Logical Volume Information](http://www.tecmint.com/wp-content/uploads/2014/08/Logical-Volume-Information.jpg)
|
||||
Logical Volume Information
|
||||
|
||||
We haven't created any virtual thin volumes in this thin pool yet. In the image we can see that the allocated pool data shows **0.00%**.
|
||||
|
||||
### Creating Thin Volumes ###
|
||||
|
||||
Now we can define thin volumes inside the thin pool with the help of ‘lvcreate’ command with option -V (Virtual).
|
||||
|
||||
# lvcreate -V 5G --thin -n thin_vol_client1 vg_thin/tp_tecmint_pool
|
||||
|
||||
I have created a Thin virtual volume with the name of **thin_vol_client1** inside the **tp_tecmint_pool** in my **vg_thin** volume group. Now, list the logical volumes using below command.
|
||||
|
||||
# lvs
|
||||
|
||||
![List Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/List-Logical-Volumes.jpg)
|
||||
List Logical Volumes
|
||||
|
||||
We have only just created the thin volume above, which is why no data usage is showing, i.e. **0.00%**.
|
||||
|
||||
Fine, let me create 2 more thin volumes for the other 2 clients. Here you can see that there are now 3 thin volumes created under the pool (**tp_tecmint_pool**). So, at this point, I have handed out the entire 15GB pool.
|
||||
|
||||
![Create Thin Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Volumes.jpg)
|
||||
|
||||
### Creating File System ###
|
||||
|
||||
Now, create mount points, mount these three thin volumes, and copy some files into them using the commands below.
|
||||
|
||||
# mkdir -p /mnt/client1 /mnt/client2 /mnt/client3
|
||||
|
||||
List the created directories.
|
||||
|
||||
# ls -l /mnt/
|
||||
|
||||
![Creating Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Creating-Mount-Points.jpg)
|
||||
Creating Mount Points
|
||||
|
||||
Create the file system for these created thin volumes using ‘mkfs’ command.
|
||||
|
||||
# mkfs.ext4 /dev/vg_thin/thin_vol_client1 && mkfs.ext4 /dev/vg_thin/thin_vol_client2 && mkfs.ext4 /dev/vg_thin/thin_vol_client3
|
||||
|
||||
![Create File System](http://www.tecmint.com/wp-content/uploads/2014/08/Create-File-System.jpg)
|
||||
Create File System
|
||||
|
||||
Mount all three client volumes to the created mount point using ‘mount’ command.
|
||||
|
||||
# mount /dev/vg_thin/thin_vol_client1 /mnt/client1/ && mount /dev/vg_thin/thin_vol_client2 /mnt/client2/ && mount /dev/vg_thin/thin_vol_client3 /mnt/client3/
|
||||
|
||||
List the mount points using ‘df’ command.
|
||||
|
||||
# df -h
|
||||
|
||||
![Print Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Print-Mount-Points.jpg)
|
||||
Print Mount Points
|
||||
|
||||
Here we can see that all 3 client volumes are mounted, and only about 3% of the space is used in each client volume. So, let's add some more files to all 3 mount points from my desktop to fill up some space.
|
||||
|
||||
![Add Files To Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/Add-Files-To-Volumes.jpg)
|
||||
Add Files To Volumes
|
||||
|
||||
Now list the mount points to see the space used in each thin volume, and list the thin pool to see the size used in the pool.
|
||||
|
||||
# df -h
|
||||
# lvdisplay vg_thin/tp_tecmint_pool
|
||||
|
||||
![Check Mount Point Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Mount-Point-Size.jpg)
|
||||
Check Mount Point Size
|
||||
|
||||
![Check Thin Pool Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Thin-Pool-Size.jpg)
|
||||
Check Thin Pool Size
|
||||
|
||||
The above commands show the three mount points along with their usage percentages.
|
||||
|
||||
13% of data used out of 5GB for client1
|
||||
29% of data used out of 5GB for client2
|
||||
49% of data used out of 5GB for client3
|
||||
|
||||
Looking at the thin pool, we can see that only **30%** of its data space has been written in total. This is the combined usage of the above three clients' virtual volumes.
|
||||
|
||||
### Over Provisioning ###
|
||||
|
||||
Now the **4th** client has come to me and asked for 5GB of storage space. Can I provide it, given that I have already handed the whole 15GB pool to 3 clients? Is it possible to give 5GB more to another client? Yes, it is possible. This is where we use **over provisioning**, which means allocating more space than I actually have.
|
||||
|
||||
Let me create 5GB for the 4th Client and verify the size.
|
||||
|
||||
# lvcreate -V 5G --thin -n thin_vol_client4 vg_thin/tp_tecmint_pool
|
||||
# lvs
|
||||
|
||||
![Create thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Create-thin-Storage.jpg)
|
||||
Create thin Storage
|
||||
|
||||
I have only 15GB in the pool, but I have created 4 volumes inside the thin pool totalling 20GB. If all four clients start writing data to their volumes and fill up the space, we will face a critical situation; otherwise there is no issue.
|
||||
|
||||
Now I have created a file system on **thin_vol_client4**, mounted it under **/mnt/client4**, and copied some files into it.
|
||||
|
||||
# lvs
|
||||
|
||||
![Verify Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-Thing-Storage.jpg)
|
||||
Verify Thin Storage
|
||||
|
||||
We can see in the above picture that the newly created client 4 volume is already **89.34%** used, and that the thin pool is **59.19%** used. As long as all these users are not writing to their volumes too aggressively, the pool will be free from overflow and dropped volumes. To avoid an overflow, we need to extend the thin pool size.
|
||||
|
||||
**Important**: A thin pool is just a logical volume, so if we need to extend the size of the thin pool we can use the same command we used for extending logical volumes, but we cannot reduce the size of a thin pool.
|
||||
|
||||
# lvextend
|
||||
|
||||
Here we can see how to extend the logical thin-pool (**tp_tecmint_pool**).
|
||||
|
||||
# lvextend -L +15G /dev/vg_thin/tp_tecmint_pool
|
||||
|
||||
![Extend Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Extend-Thin-Storage.jpg)
|
||||
Extend Thin Storage
|
||||
|
||||
Next, list the thin-pool size.
|
||||
|
||||
# lvs
|
||||
|
||||
![Verify Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-Thin-Storage.jpg)
|
||||
Verify Thin Storage
|
||||
|
||||
Earlier our **tp_tecmint_pool** size was 15GB with 4 thin volumes, which was over provisioned at 20GB. Now it has been extended to 30GB, so our over provisioning has been normalized and the thin volumes are free from overflow and being dropped. This way you can add even more thin volumes to the pool.
|
||||
|
||||
Here we have seen how to create a thin pool from a large volume group, create thin volumes inside the thin pool with over provisioning, and extend the pool. In the next article we will see how to set up LVM striping.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/setup-thin-provisioning-volumes-in-lvm/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
||||
[1]:http://www.tecmint.com/create-lvm-storage-in-linux/
|
||||
[2]:http://www.tecmint.com/extend-and-reduce-lvms-in-linux/
|
||||
[3]:http://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-lvm/
|
@ -1,209 +0,0 @@
|
||||
How to install and configure ownCloud on Debian
|
||||
================================================================================
|
||||
According to its official website, ownCloud gives you universal access to your files through a web interface or WebDAV. It also provides a platform to easily view, edit and sync your contacts, calendars and bookmarks across all your devices. Even though ownCloud is very similar to the widely-used Dropbox cloud storage, the primary difference is that ownCloud is free and open-source, making it possible to set up a Dropbox-like cloud storage service on your own server. With ownCloud, only you have complete access and control over your private data, with no limits on storage space (except for hard disk capacity) or the number of connected clients.
|
||||
|
||||
ownCloud is available in a Community Edition (free of charge) and an Enterprise Edition (business-oriented with paid support). Pre-built packages of ownCloud Community Edition are available for CentOS, Debian, Fedora, openSUSE, SLE and Ubuntu. This tutorial will demonstrate how to install and configure ownCloud Community Edition on Debian Wheezy.
|
||||
|
||||
### Installing ownCloud on Debian ###
|
||||
|
||||
Go to the official website: [http://owncloud.org][1], and click on the 'Install' button (upper right corner).
|
||||
|
||||
![](https://farm4.staticflickr.com/3885/14884771598_323f2fc01c_z.jpg)
|
||||
|
||||
Now choose "Packages for auto updates" for the current version (v7 in the image below). This will allow you to easily keep ownCloud up to date using Debian's package management system, with packages maintained by the ownCloud community.
|
||||
|
||||
![](https://farm6.staticflickr.com/5589/15071372505_298a796ff6_z.jpg)
|
||||
|
||||
Then click on Continue on the next screen:
|
||||
|
||||
![](https://farm6.staticflickr.com/5589/14884818527_554d1483f9_z.jpg)
|
||||
|
||||
Select Debian 7 [Wheezy] from the list of available operating systems:
|
||||
|
||||
![](https://farm6.staticflickr.com/5581/14884669449_433e3334e0_z.jpg)
|
||||
|
||||
Add the ownCloud's official Debian repository:
|
||||
|
||||
# echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/Debian_7.0/ /' >> /etc/apt/sources.list.d/owncloud.list
|
||||
|
||||
Add the repository key to apt:
|
||||
|
||||
# wget http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/Release.key
|
||||
# apt-key add - < Release.key
|
||||
|
||||
Go ahead and install ownCloud:
|
||||
|
||||
# aptitude update
|
||||
# aptitude install owncloud
|
||||
|
||||
Open your web browser and navigate to your ownCloud instance, which can be found at http://<server-ip>/owncloud:
|
||||
|
||||
![](https://farm4.staticflickr.com/3869/15071011092_f8f32ffe11_z.jpg)
|
||||
|
||||
Note that ownCloud may alert you about an Apache misconfiguration. Follow the steps below to solve this issue and get rid of that error message.
|
||||
|
||||
a) Edit the /etc/apache2/apache2.conf file (set the AllowOverride directive to All):
|
||||
|
||||
<Directory /var/www/>
|
||||
Options Indexes FollowSymLinks
|
||||
AllowOverride All
|
||||
Order allow,deny
|
||||
Allow from all
|
||||
</Directory>
|
||||
|
||||
b) Edit the /etc/apache2/conf.d/owncloud.conf file
|
||||
|
||||
<Directory /var/www/owncloud>
|
||||
Options Indexes FollowSymLinks MultiViews
|
||||
AllowOverride All
|
||||
Order allow,deny
|
||||
Allow from all
|
||||
</Directory>
|
||||
|
||||
c) Restart the web server:
|
||||
|
||||
# service apache2 restart
|
||||
|
||||
d) Refresh the web browser. Verify that the security warning has disappeared.
|
||||
|
||||
![](https://farm6.staticflickr.com/5562/14884771428_fc9c063418_z.jpg)
|
||||
|
||||
### Setting up a Database ###
|
||||
|
||||
Now it's time to set up a database for ownCloud.
|
||||
|
||||
First, log in to the local MySQL/MariaDB server:
|
||||
|
||||
$ mysql -u root -h localhost -p
|
||||
|
||||
Create a database and user account for ownCloud as follows.
|
||||
|
||||
mysql> CREATE DATABASE owncloud_DB;
|
||||
mysql> CREATE USER 'owncloud-web'@'localhost' IDENTIFIED BY 'whateverpasswordyouchoose';
|
||||
mysql> GRANT ALL PRIVILEGES ON owncloud_DB.* TO 'owncloud-web'@'localhost';
|
||||
mysql> FLUSH PRIVILEGES;
|
||||
|
||||
Go to ownCloud page at http://<server-ip>/owncloud, and choose the 'Storage & database' section. Enter the rest of the requested information (MySQL/MariaDB user, password, database and hostname), and click on Finish setup.
|
||||
|
||||
![](https://farm6.staticflickr.com/5584/15071010982_b76c23c384_z.jpg)
|
||||
|
||||
### Configuring ownCloud for SSL Connections ###
|
||||
|
||||
Before you start using ownCloud, it is strongly recommended to enable SSL support in ownCloud. Using SSL provides important security benefits such as encrypting ownCloud traffic and providing proper authentication. In this tutorial, a self-signed certificate will be used for SSL.
|
||||
|
||||
Create a new directory where we will store the server key and certificate:
|
||||
|
||||
# mkdir /etc/apache2/ssl
|
||||
|
||||
Create a certificate (and the key that will protect it) which will remain valid for one year.
|
||||
|
||||
# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt
|
||||
|
||||
![](https://farm6.staticflickr.com/5587/15068784081_f281b54b72_z.jpg)
|
||||
|
||||
Edit the /etc/apache2/conf.d/owncloud.conf file to enable HTTPS. For details on the meaning of the rewrite rules NC, R, and L, you can refer to the [Apache docs][2]:
|
||||
|
||||
Alias /owncloud /var/www/owncloud
|
||||
|
||||
<VirtualHost 192.168.0.15:80>
|
||||
RewriteEngine on
|
||||
ReWriteCond %{SERVER_PORT} !^443$
|
||||
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
|
||||
</VirtualHost>
|
||||
|
||||
<VirtualHost 192.168.0.15:443>
|
||||
SSLEngine on
|
||||
SSLCertificateFile /etc/apache2/ssl/apache.crt
|
||||
SSLCertificateKeyFile /etc/apache2/ssl/apache.key
|
||||
DocumentRoot /var/www/owncloud/
|
||||
<Directory /var/www/owncloud>
|
||||
Options Indexes FollowSymLinks MultiViews
|
||||
AllowOverride All
|
||||
Order allow,deny
|
||||
Allow from all
|
||||
</Directory>
|
||||
</VirtualHost>
|
||||
|
||||
Enable the rewrite module and restart Apache:
|
||||
|
||||
# a2enmod rewrite
|
||||
# service apache2 restart
|
||||
|
||||
Open your ownCloud instance. Notice that even if you try to use plain HTTP, you will be automatically redirected to HTTPS.
|
||||
|
||||
Be advised that even having followed the above steps, the first time that you launch your ownCloud instance, an error message will be displayed stating that the certificate has not been issued by a trusted authority (that is because we created a self-signed certificate). You can safely ignore this message, but if you are considering deploying ownCloud in a production server, you may want to purchase a certificate from a trusted company.
|
||||
|
||||
### Create an Account ###
|
||||
|
||||
Now we are ready to create an ownCloud admin account.
|
||||
|
||||
![](https://farm6.staticflickr.com/5587/15048366536_430b4fd64e.jpg)
|
||||
|
||||
Welcome to your new personal cloud! Note that you can install a desktop or mobile client app to sync your files, calendars, contacts and more.
|
||||
|
||||
![](https://farm4.staticflickr.com/3862/15071372425_c391d912f5_z.jpg)
|
||||
|
||||
In the upper right corner, click on your user name, and a drop-down menu is displayed:
|
||||
|
||||
![](https://farm4.staticflickr.com/3897/15071372355_3de08d2847.jpg)
|
||||
|
||||
Click on Personal to change your settings, such as password, display name, email address, profile picture, and more.
|
||||
|
||||
### ownCloud Use Case: Access Calendar ###
|
||||
|
||||
Let's start by adding an event to your calendar and later downloading it.
|
||||
|
||||
Click on the upper left corner drop-down menu and choose Calendar.
|
||||
|
||||
![](https://farm4.staticflickr.com/3891/15048366346_7dcc388244.jpg)
|
||||
|
||||
Add a new event and save it to your calendar.
|
||||
|
||||
![](https://farm4.staticflickr.com/3882/14884818197_f55154fd91_z.jpg)
|
||||
|
||||
Download your calendar and add it to your Thunderbird calendar by going to 'Event and Tasks' -> 'Import...' -> 'Select file':
|
||||
|
||||
![](https://farm4.staticflickr.com/3840/14884818217_16a53400f0_z.jpg)
|
||||
|
||||
![](https://farm4.staticflickr.com/3871/15048366356_a7f98ca63d_z.jpg)
|
||||
|
||||
TIP: You also need to set your time zone in order to successfully import your calendar in another application (by default, the Calendar application uses the UTC +00:00 time zone). To change the time zone, go to the bottom left corner and click on the small gear icon. The Calendar settings menu will appear and you will be able to select your time zone:
|
||||
|
||||
![](https://farm4.staticflickr.com/3858/14884669029_4e0cd3e366.jpg)
|
||||
|
||||
### ownCloud Use Case: Upload a File ###
|
||||
|
||||
Next, we will upload a file from the client computer.
|
||||
|
||||
Go to the Files menu (upper left corner) and click on the up arrow to open a select-file dialog.
|
||||
|
||||
![](https://farm4.staticflickr.com/3851/14884818067_4a4cc73b40.jpg)
|
||||
|
||||
Select a file and click on Open.
|
||||
|
||||
![](https://farm6.staticflickr.com/5591/14884669039_5a9dd00ca9_z.jpg)
|
||||
|
||||
You can then open/edit the selected file, move it into another folder, or delete it.
|
||||
|
||||
![](https://farm4.staticflickr.com/3909/14884771088_d0b8a20ae2_o.png)
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
ownCloud is a versatile and powerful cloud storage that makes the transition from another provider quick, easy, and painless. In addition, it is FOSS, and with little time and effort you can configure it to meet all your needs. For further information, you can always refer to the [User][3], [Admin][4], or [Developer][5] manuals.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/2014/08/install-configure-owncloud-debian.html
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.gabrielcanepa.com.ar/
|
||||
[1]:http://owncloud.org/
|
||||
[2]:http://httpd.apache.org/docs/2.2/rewrite/flags.html
|
||||
[3]:http://doc.owncloud.org/server/7.0/ownCloudUserManual.pdf
|
||||
[4]:http://doc.owncloud.org/server/7.0/ownCloudAdminManual.pdf
|
||||
[5]:http://doc.owncloud.org/server/7.0/ownCloudDeveloperManual.pdf
|
@ -1,76 +0,0 @@
|
||||
Photo Editing on Linux with Krita
|
||||
================================================================================
|
||||
![Figure 1: Annabelle the pygmy goat.](http://www.linux.com/images/stories/41373/fig-1-annabelle.jpg)
|
||||
Figure 1: Annabelle the pygmy goat.
|
||||
|
||||
[Krita][1] is a wonderful drawing and painting program, and it's also a nice photo editor. Today we will learn how to add text to an image, and how to selectively sharpen portions of a photo.
|
||||
|
||||
### Navigating Krita ###
|
||||
|
||||
Like all image creation and editing programs, Krita contains hundreds of tools and options, and redundant controls for exposing and using them. It's worth taking some time to explore it and to see where everything is.
|
||||
|
||||
The default theme for Krita is a dark theme. I'm not a fan of dark themes, but fortunately Krita comes with a nice batch of themes that you can change anytime in the Settings > Theme menu.
|
||||
|
||||
Krita uses docking tool dialogues. Check Settings > Show Dockers to see your tool docks in the right and left panes, and Settings > Dockers to select the ones you want to see. The individual docks can drive you a little bit mad, as some of them open in a tiny squished aspect so you can't see anything. You can drag them to the top and sides of your Krita window, enlarge and shrink them, and you can drag them out of Krita to any location on your computer screen. If you drop a dock onto another dock they automatically create tabs.
|
||||
|
||||
When you have arranged your perfect workspace, you can preserve it in the "Choose Workspace" picker. This is a button at the right end of the Brushes and Stuff toolbar (Settings > Toolbars Shown). This comes with a little batch of preset workspaces, and you can create your own (figure 2).
|
||||
|
||||
![Figure 2: Preserve custom workspaces in the Choose Workspace dialogue.](http://www.linux.com/images/stories/41373/fig-2-workspaces.jpg)
|
||||
Figure 2: Preserve custom workspaces in the Choose Workspace dialogue.
|
||||
|
||||
Krita has multiple zoom controls. Ctrl+= zooms in, Ctrl+- zooms out, and Ctrl+0 resets to 100%. You can also use the View > Zoom controls, and the zoom slider at the bottom right. There is also a dropdown zoom menu to the left of the slider.
|
||||
|
||||
The Tools menu sits in the left pane, and this contains your shape and selection tools. You have to hover your cursor over each tool to see its label. The Tool Options dock always displays options for the current tool you are using, and by default it sits in the right pane.
|
||||
|
||||
### Crop Tool ###
|
||||
|
||||
Of course there is a crop tool in the Tools dock, and it is very easy to use. Draw a rectangle that contains the area you want to keep, use the drag handles to adjust it, and press the Return key. In the Tools Options dock you can choose to apply the crop to all layers or just the current layer, adjust the dimensions by typing in the size values, or size it as a percentage.
|
||||
|
||||
### Adding Text ###
|
||||
|
||||
When you want to add some simple text to a photo, such as a label or a caption, Krita may leave you feeling overwhelmed because it contains so many artistic text effects. But it also supports adding simple text. Click the Text tool, and the Tool Options dock looks like figure 3.
|
||||
|
||||
![Figure 3: Text options.](http://www.linux.com/images/stories/41373/fig-3-text.jpg)
|
||||
Figure 3: Text options.
|
||||
|
||||
Click the Multiline button. This opens the simple text tool; first draw a rectangle to contain your text, then start typing your text. The Tool Options dock has all the usual text formatting options: font selector, font size, text and background colors, alignment, and a bunch of paragraph styles. When you're finished click the Shape Handling tool, which is the white arrow next to the Text tool button, to adjust the size, shape, and position of your text box. The Tool Options for the Shape Handling tool include borders of various thicknesses, colors, and alignments. Figure 4 shows the gleeful captioned photo I send to my city-trapped relatives.
|
||||
|
||||
![Figure 4: Green acres is the place to be.](http://www.linux.com/images/stories/41373/fig-4-frontdoor.jpg)
|
||||
Figure 4: Green acres is the place to be.
|
||||
|
||||
How to edit your existing text isn't obvious. Click the Shape Handling tool, and double-click inside the text box. This opens editing mode, which is indicated by the text cursor. Now you can select text, add new text, change formatting, and so on.
|
||||
|
||||
### Sharpening Selected Areas ###
|
||||
|
||||
Krita has a number of nice tools for making surgical edits. In figure 5 I want to sharpen Annabelle's face and eyes. (Annabelle lives next door, but she has a crush on my dog and spends a lot of time here. My dog is terrified of her and runs away, but she is not discouraged.) First select an area with the "Select an area by its outline" tool. Then open Filter > Enhance > Unsharp Mask. You have three settings to play with: Half-Size, Amount, and Threshold. Most image editing software has Radius, Amount, and Threshold settings. A radius is half of a diameter, so Half-Size is technically correct, but perhaps needlessly confusing.
|
||||
|
||||
![Figure 5: Selecting an arbitrary area to edit.](http://www.linux.com/images/stories/41373/fig-5-annabelle.jpg)
|
||||
Figure 5: Selecting an arbitrary area to edit.
|
||||
|
||||
The Half-Size value controls the width of the sharpening lines. You want a large enough value to get a good effect, but not so large that it's obvious.
|
||||
|
||||
The Threshold value determines how different two pixels need to be for the sharpening effect to be applied. 0 = maximum sharpening, and 99 is no sharpening.
|
||||
|
||||
Amount controls the strength of the sharpening effect; higher values apply more sharpening.
|
||||
|
||||
Sharpening is nearly always the last edit you want to make to a photo, because it is affected by anything else you do to your image: crop, resize, color and contrast... if you apply sharpening first and then make other changes it will mess up your sharpening.
|
||||
|
||||
And what, you ask, does unsharp mask mean? The name comes from the sharpening technique: the unsharp mask filter creates a blurred mask of the original, and then layers the unsharp mask over the original. This creates an image that appears sharper and clearer without creating a lot of obvious sharpening artifacts.
|
||||
|
||||
That is all for today. The documentation for Krita is abundant, but disorganized. Start at [Krita Tutorials][2], and poke around YouTube for a lot of good video how-tos.
|
||||
|
||||
- [krita Official Web Site][1]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linux.com/learn/tutorials/786040-photo-editing-on-linux-with-krita
|
||||
|
||||
作者:[Carla Schroder][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linux.com/community/forums/person/3734
|
||||
[1]:https://krita.org/
|
||||
[2]:https://krita.org/learn/tutorials/
|
@ -1,3 +1,5 @@
|
||||
[felixonmars translating...]
|
||||
|
||||
How to create a cloud-based encrypted file system on Linux
|
||||
================================================================================
|
||||
Commercial cloud storage services such as [Amazon S3][1] and [Google Cloud Storage][2] offer highly available, scalable, infinite-capacity object store at affordable costs. To accelerate wide adoption of their cloud offerings, these providers are fostering rich developer ecosystems around their products based on well-defined APIs and SDKs. Cloud-backed file systems are one popular by-product of such active developer communities, for which several open-source implementations exist.
|
||||
@ -153,4 +155,4 @@ via: http://xmodulo.com/2014/09/create-cloud-based-encrypted-file-system-linux.h
|
||||
[4]:http://aws.amazon.com/
|
||||
[5]:http://ask.xmodulo.com/create-amazon-aws-access-key.html
|
||||
[6]:https://aur.archlinux.org/packages/s3ql/
|
||||
[7]:http://www.rath.org/s3ql-docs/
|
||||
[7]:http://www.rath.org/s3ql-docs/
|
||||
|
@ -1,75 +0,0 @@
|
||||
How to download GOG games from the command line on Linux
|
||||
================================================================================
|
||||
If you are a gamer and a Linux user, you were probably delighted when [GOG][1] announced a few months ago that it would start offering games for your favorite OS. If you have never heard of GOG before, I encourage you to check out their catalog of “good old games”, reasonably priced, DRM-free, and packed with goodies. However, while the Windows client for GOG has existed for quite some time now, an official Linux version is nowhere to be seen. So if waiting for the official version makes you uncomfortable, an unofficial open source program named LGOGDownloader gives you access to your library from the command line.
|
||||
|
||||
![](https://farm4.staticflickr.com/3843/15121593356_b13309c70f_z.jpg)
|
||||
|
||||
### Install LGOGDownloader on Linux ###
|
||||
|
||||
For Ubuntu users, the [official page][2] recommends that you download the sources and do:
|
||||
|
||||
$ sudo apt-get install build-essential libcurl4-openssl-dev liboauth-dev libjsoncpp-dev libhtmlcxx-dev libboost-system-dev libboost-filesystem-dev libboost-regex-dev libboost-program-options-dev libboost-date-time-dev libtinyxml-dev librhash-dev help2man
|
||||
$ tar -xvzf lgogdownloader-2.17.tar.gz
|
||||
$ cd lgogdownloader-2.17
|
||||
$ make release
|
||||
$ sudo make install
|
||||
|
||||
If you are an Archlinux user, an [AUR package][3] is waiting for you.
|
||||
|
||||
### Usage of LGOGDownloader ###
|
||||
|
||||
Once the program is installed, you will need to identify yourself with the command:
|
||||
|
||||
$ lgogdownloader --login
|
||||
|
||||
![](https://farm6.staticflickr.com/5593/15121593346_9c5d02d5ce_z.jpg)
|
||||
|
||||
Note that the configuration file, should you need it, is at ~/.config/lgogdownloader/config.cfg
|
||||
|
||||
Once authenticated, you can list all the games in your library with:
|
||||
|
||||
$ lgogdownloader --list
|
||||
|
||||
![](https://farm6.staticflickr.com/5581/14958040387_8321bb71cf.jpg)
|
||||
|
||||
Then download one with:
|
||||
|
||||
$ lgogdownloader --download --game [game name]
|
||||
|
||||
![](https://farm6.staticflickr.com/5585/14958040367_b1c584a2d1_z.jpg)
|
||||
|
||||
You will notice that lgogdownloader allows you to resume previously interrupted downloads, which is nice because typical game downloads are not small.
|
||||
|
||||
Like every respectable command line utility, you can add various options (a combined example follows the list):
|
||||
|
||||
- **--platform [number]** to select your OS, where 1 is for Windows and 4 is for Linux.
|
||||
- **--directory [destination]** to download the installer in a particular directory.
|
||||
- **--language [number]** for a particular language pack (check the manual pages for the number corresponding to your language).
|
||||
- **--limit-rate [speed]** to limit the downloading rate at a particular speed.
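For instance, combining a few of the options above might look like the following (the game name placeholder, destination directory and rate limit value are arbitrary; check the manual pages for the exact units and language codes):

    $ lgogdownloader --download --game [game name] --platform 4 --directory ~/gog-games --limit-rate 500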
|
||||
|
||||
As a side bonus, lgogdownloader also comes with the possibility to check for updates on the GOG website:
|
||||
|
||||
$ lgogdownloader --update-check
|
||||
|
||||
![](https://farm4.staticflickr.com/3882/14958035568_7889acaef0.jpg)
|
||||
|
||||
The result will list the number of forum and private messages you have received, as well as the number of updated games.
|
||||
|
||||
To conclude, lgogdownloader is pretty standard as command line utilities go. I would even say that it is an epitome of clarity and coherence. It is true that we are far, in terms of features, from the relatively recent Steam Linux client, but on the other hand the official GOG Windows client does not do much more than this unofficial Linux version. In other words, lgogdownloader is a perfect replacement. I cannot wait to see more Linux compatible games on GOG, especially after their recent announcement that they will offer DRM-free movies themed around video games. Hopefully we will see an update to the client once the movie catalog matches the game library.
|
||||
|
||||
What do you think of GOG? Would you use the unofficial Linux Client? Let us know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/2014/09/download-gog-games-command-line-linux.html
|
||||
|
||||
作者:[Adrien Brochard][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/adrien
|
||||
[1]:http://www.gog.com/
|
||||
[2]:https://sites.google.com/site/gogdownloader/home
|
||||
[3]:https://aur.archlinux.org/packages/lgogdownloader/
|
@ -1,236 +0,0 @@
|
||||
How to set up Nagios Remote Plugin Executor (NRPE) in Linux
|
||||
================================================================================
|
||||
As far as network management is concerned, Nagios is one of the most powerful tools. Nagios can monitor the reachability of remote hosts, as well as the state of services running on them. However, what if we want to monitor something other than network services for a remote host? For example, we may want to monitor the disk utilization or [CPU processor load][1] of a remote host. Nagios Remote Plugin Executor (NRPE) is a tool that can help with doing that. NRPE allows one to execute Nagios plugins installed on remote hosts, and integrate them with an [existing Nagios server][2].
|
||||
|
||||
This tutorial will cover how to set up NRPE on an existing Nagios deployment. The tutorial is primarily divided into two parts:
|
||||
|
||||
- Configure remote hosts.
|
||||
- Configure a Nagios monitoring server.
|
||||
|
||||
We will then finish off by defining some custom commands that can be used with NRPE.
|
||||
|
||||
### Configure Remote Hosts for NRPE ###
|
||||
|
||||
#### Step One: Installing NRPE Service ####
|
||||
|
||||
You need to install the NRPE service on every remote host that you want to monitor using NRPE. The NRPE daemon on each remote host will then communicate with the Nagios monitoring server.
|
||||
|
||||
The necessary packages for the NRPE service can easily be installed using apt-get or yum, depending on the platform. In the case of CentOS, we will need to [add the Repoforge repository][3], as NRPE is not available in the stock CentOS repositories.
|
||||
|
||||
**On Debian, Ubuntu or Linux Mint:**
|
||||
|
||||
# apt-get install nagios-nrpe-server
|
||||
|
||||
**On CentOS, Fedora or RHEL:**
|
||||
|
||||
# yum install nagios-nrpe
|
||||
|
||||
#### Step Two: Preparing Configuration File ####
|
||||
|
||||
The configuration file /etc/nagios/nrpe.cfg is similar for Debian-based and RedHat-based systems. The configuration file is backed up, and then updated as follows.
|
||||
|
||||
# vim /etc/nagios/nrpe.cfg
|
||||
|
||||
----------
|
||||
|
||||
## NRPE service port can be customized ##
|
||||
server_port=5666
|
||||
|
||||
## the nagios monitoring server is permitted ##
|
||||
## NOTE: There is no space after the comma ##
|
||||
allowed_hosts=127.0.0.1,X.X.X.X-IP_v4_of_Nagios_server
|
||||
|
||||
## The following examples use hard-coded command arguments.
|
||||
## These parameters can be modified as needed.
|
||||
|
||||
## NOTE: For CentOS 64 bit, use /usr/lib64 instead of /usr/lib ##
|
||||
|
||||
command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
|
||||
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
|
||||
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
|
||||
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
|
||||
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
|
||||
|
||||
Now that the configuration file is ready, NRPE service is ready to be fired up.
|
||||
|
||||
#### Step Three: Initiating NRPE Service ####
|
||||
|
||||
For RedHat-based systems, the NRPE service needs to be added as a startup service.
|
||||
|
||||
**On Debian, Ubuntu, Linux Mint:**
|
||||
|
||||
# service nagios-nrpe-server restart
|
||||
|
||||
**On CentOS, Fedora or RHEL:**
|
||||
|
||||
# service nrpe restart
|
||||
# chkconfig nrpe on
|
||||
|
||||
#### Step Four: Verifying NRPE Service Status ####
|
||||
|
||||
Information about NRPE daemon status can be found in the system log. For a Debian-based system, the log file will be /var/log/syslog. The log file for a RedHat-based system will be /var/log/messages. A sample log is provided below for reference.
|
||||
|
||||
nrpe[19723]: Starting up daemon
|
||||
nrpe[19723]: Listening for connections on port 5666
|
||||
nrpe[19723]: Allowing connections from: 127.0.0.1,X.X.X.X
|
||||
|
||||
If a firewall is running, TCP port 5666, which is used by the NRPE daemon, should be open, for example as shown below.
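With iptables, that could be done roughly as follows (adjust to fit your existing rule set, and make the rule persistent in whatever way your distribution provides):

    # iptables -A INPUT -p tcp --dport 5666 -j ACCEPT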

    # netstat -tpln | grep 5666

----------

    tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN 19885/nrpe

### Configure Nagios Monitoring Server for NRPE ###

The first step in configuring an existing Nagios monitoring server for NRPE is to install the NRPE plugin on the server.

#### Step One: Installing NRPE Plugin ####

In case the Nagios server is running on a Debian-based system (Debian, Ubuntu or Linux Mint), the necessary package can be installed using apt-get.

    # apt-get install nagios-nrpe-plugin

After the plugin is installed, the check_nrpe command definition, which comes with the plugin, is modified a bit.

    # vim /etc/nagios-plugins/config/check_nrpe.cfg

----------

    ## the default command is overwritten ##
    define command{
            command_name check_nrpe
            command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$'
    }

In case the Nagios server is running on a RedHat-based system (CentOS, Fedora or RHEL), you can install the NRPE plugin using yum. On CentOS, [adding the Repoforge repository][4] is necessary.

    # yum install nagios-plugins-nrpe

Now that the NRPE plugin is installed, proceed to configure the Nagios server by following the rest of the steps.
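
Before moving on, it can be worth checking that the Nagios server can actually reach the NRPE daemon on a remote host. Invoked with only a host address, check_nrpe should report the NRPE version running there (the plugin path below assumes the layout used above; use /usr/lib64 on 64-bit CentOS):

    # /usr/lib/nagios/plugins/check_nrpe -H X.X.X.X-IPv4_address_of_remote_host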

#### Step Two: Defining Nagios Command for NRPE Plugin ####

First, we need to define a command in Nagios for using NRPE.

    # vim /etc/nagios/objects/commands.cfg

----------

    ## NOTE: For CentOS 64 bit, use /usr/lib64 instead of /usr/lib ##
    define command{
            command_name check_nrpe
            command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$'
    }
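
As a side note, check_nrpe can also pass arguments on to the remote commands with its -a option. This is not needed for the hard-coded commands used in this tutorial, and it requires dont_blame_nrpe=1 in the remote nrpe.cfg, but a hypothetical variant of the definition would look like this:

    ## optional variant: forward an argument to the remote command ##
    define command{
            command_name check_nrpe_args
            command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$' -a '$ARG2$'
    }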

#### Step Three: Adding Host and Command Definition ####

Next, define remote host(s) and the commands to execute remotely on them.

The following shows sample definitions of a remote host and a command to execute on that host. Naturally, your configuration will be adjusted based on your requirements. The path to the file is slightly different for Debian-based and RedHat-based systems, but the content of the file is identical.

**On Debian, Ubuntu or Linux Mint:**

    # vim /etc/nagios3/conf.d/nrpe.cfg

**On CentOS, Fedora or RHEL:**

    # vim /etc/nagios/objects/nrpe.cfg

----------

    define host{
            use                     linux-server
            host_name               server-1
            alias                   server-1
            address                 X.X.X.X-IPv4_address_of_remote_host
    }

    define service {
            host_name               server-1
            service_description     Check Load
            check_command           check_nrpe!check_load
            check_interval          1
            use                     generic-service
    }

#### Step Four: Restarting Nagios Service ####

Before restarting Nagios, the updated configuration is verified with a dry run.

**On Ubuntu, Debian, or Linux Mint:**

    # nagios3 -v /etc/nagios3/nagios.cfg

**On CentOS, Fedora or RHEL:**

    # nagios -v /etc/nagios/nagios.cfg

If everything goes well, the Nagios service can be restarted.

    # service nagios restart

![](https://farm8.staticflickr.com/7024/13330387845_0bde8b6db5_z.jpg)

### Configuring Custom Commands with NRPE ###

#### Setup on Remote Servers ####

The following is a list of custom commands that can be used with NRPE. These commands are defined in the file /etc/nagios/nrpe.cfg on the remote servers.

    ## Warning status when load average exceeds 1, 2 and 1 for the 1, 5 and 15 minute intervals, respectively.
    ## Critical status when load average exceeds 3, 5 and 3 for the 1, 5 and 15 minute intervals, respectively.
    command[check_load]=/usr/lib/nagios/plugins/check_load -w 1,2,1 -c 3,5,3

    ## Warning level 25% and critical level 10% for free space of /home.
    ## Could be customized to monitor any partition (e.g. /dev/sdb1, /, /var, /home)
    command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 25% -c 10% -p /home

    ## Warning if the number of instances of process_ABC exceeds 10, critical if it exceeds 20 ##
    command[check_process_ABC]=/usr/lib/nagios/plugins/check_procs -w 1:10 -c 1:20 -C process_ABC

    ## Critical if the number of instances of process_XYZ drops below 1 ##
    command[check_process_XYZ]=/usr/lib/nagios/plugins/check_procs -w 1: -c 1: -C process_XYZ
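
Other standard plugins can be wired up in the same way. As one more illustrative example (the thresholds are arbitrary), free swap could be watched with check_swap, which ships with the same plugin set:

    ## Warning when free swap drops below 20%, critical below 10% ##
    command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 20% -c 10%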

#### Setup on Nagios Monitoring Server ####

To apply the custom commands defined above, we modify the service definitions on the Nagios monitoring server as follows. The service definitions could go into the file where all the services are defined (e.g., /etc/nagios/objects/nrpe.cfg or /etc/nagios3/conf.d/nrpe.cfg).

    ## example 1: check process XYZ ##
    define service {
            host_name               server-1
            service_description     Check Process XYZ
            check_command           check_nrpe!check_process_XYZ
            check_interval          1
            use                     generic-service
    }

    ## example 2: check disk state ##
    define service {
            host_name               server-1
            service_description     Check Disk
            check_command           check_nrpe!check_disk
            check_interval          1
            use                     generic-service
    }

To sum up, NRPE is a powerful add-on to Nagios as it allows monitoring a remote server in a highly configurable fashion. Using NRPE, we can monitor server load, running processes, logged-in users, disk states and other parameters.

Hope this helps.

--------------------------------------------------------------------------------

via: http://xmodulo.com/2014/03/nagios-remote-plugin-executor-nrpe-linux.html

Author: [Sarmed Rahman][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).

[a]:http://xmodulo.com/author/sarmed
[1]:http://xmodulo.com/2012/08/how-to-measure-average-cpu-utilization.html
[2]:http://xmodulo.com/2013/12/install-configure-nagios-linux.html
[3]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html
[4]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html

@ -1,3 +1,4 @@
translating by cvsher
Sysstat – All-in-One System Performance and Usage Activity Monitoring Tool For Linux
================================================================================
**Sysstat** is really a handy tool that comes with a number of utilities to monitor system resources, their performance and usage activity. A number of the utilities that we all use on a daily basis come with the sysstat package. It also provides a tool that can be scheduled using cron to collect all performance and activity data.
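
As a rough illustration (the exact paths and schedule are distribution-specific; on Debian-based systems the collector lives under /usr/lib/sysstat), that scheduled collection is typically driven by a cron file along these lines:

    ## illustrative /etc/cron.d/sysstat; adjust paths to your distribution ##
    */10 * * * * root /usr/lib64/sa/sa1 1 1
    53 23 * * * root /usr/lib64/sa/sa2 -A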

@ -121,4 +122,4 @@ via: http://www.tecmint.com/install-sysstat-in-linux/
[a]:http://www.tecmint.com/author/kuldeepsharma47/
[1]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
[2]:http://sebastien.godard.pagesperso-orange.fr/download.html
[3]:http://sebastien.godard.pagesperso-orange.fr/documentation.html
[3]:http://sebastien.godard.pagesperso-orange.fr/documentation.html

@ -1,84 +0,0 @@
Linux FAQs with Answers--How to build a RPM or DEB package from the source with CheckInstall
================================================================================
> **Question**: I would like to install a software program by building it from the source. Is there a way to build and install a package from the source, instead of running "make install"? That way, I could uninstall the program easily later if I want to.

If you have installed a Linux program from its source by running "make install", it becomes really tricky to remove it completely, unless the author of the program provides an uninstall target in the Makefile. Otherwise, you will have to compare the complete list of files on your system before and after installing the program from source, and manually remove all the files that were added during the installation.

That is when CheckInstall comes in handy. CheckInstall keeps track of all the files created or modified by an install command line (e.g., "make install", "make install_modules", etc.), and builds a standard binary package, giving you the ability to install or uninstall it with your distribution's standard package management system (e.g., yum for Red Hat or apt-get for Debian). It is also known to work with Slackware, SuSe, Mandrake and Gentoo, as per the [official documentation][1].

In this post, we will only focus on Red Hat- and Debian-based distributions, and show how to build an RPM or DEB package from the source using CheckInstall.

### Installing CheckInstall on Linux ###

To install CheckInstall on Debian derivatives:

    # aptitude install checkinstall

To install CheckInstall on Red Hat-based distributions, you will need to download a pre-built .rpm of CheckInstall (e.g., searchable from [http://rpm.pbone.net][2]), as it has been removed from the Repoforge repository. The .rpm package built for CentOS 6 works on CentOS 7 as well.

    # wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/ikoinoba/CentOS_CentOS-6/x86_64/checkinstall-1.6.2-3.el6.1.x86_64.rpm
    # yum install checkinstall-1.6.2-3.el6.1.x86_64.rpm

Once checkinstall is installed, you can use the following format to build a package for a particular piece of software.

    # checkinstall <install-command>

Without the <install-command> argument, the default install command "make install" will be used.

### Build an RPM or DEB Package with CheckInstall ###

In this example, we will build a package for [htop][3], an interactive text-mode process viewer for Linux (like top on steroids).

First, let's download the source code from the official website of the project. As a best practice, we will store the tarball in /usr/local/src, and untar it there.

    # cd /usr/local/src
    # wget http://hisham.hm/htop/releases/1.0.3/htop-1.0.3.tar.gz
    # tar xzf htop-1.0.3.tar.gz
    # cd htop-1.0.3

Let's find out the install command for htop, so that we can invoke checkinstall with it. As shown below, htop is installed with the 'make install' command.

    # ./configure
    # make install

Therefore, to build an htop package, we can invoke checkinstall without any argument, and it will then use the 'make install' command to build the package. During the process, the checkinstall command will ask you a series of questions.

In short, here are the commands to build a package for **htop**:

    # ./configure
    # checkinstall

Answer 'y' to "Should I create a default set of package docs?":

![](https://farm6.staticflickr.com/5577/15118597217_1fdd0e0346_z.jpg)

You can enter a brief description of the package, then press Enter twice:

![](https://farm4.staticflickr.com/3898/15118442190_604b71d9af.jpg)

Enter a number to modify any of the following values, or press Enter to proceed:

![](https://farm4.staticflickr.com/3898/15118442180_428de59d68_z.jpg)

Then checkinstall will create a .rpm or a .deb package automatically, depending on which type of Linux system you are running:

On CentOS 7:

![](https://farm4.staticflickr.com/3921/15282103066_5d688b2217_z.jpg)

On Debian 7:

![](https://farm4.staticflickr.com/3905/15118383009_4909a7c17b_z.jpg)
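
The payoff is that the result is an ordinary package, so it can be installed and, more importantly, cleanly removed later with the regular package tools. The package file names below are only examples of what checkinstall typically proposes; they may differ on your system:

    ## on a Debian-based system (file name is illustrative) ##
    # dpkg -i htop_1.0.3-1_amd64.deb
    # apt-get remove htop

    ## on a RedHat-based system (file name is illustrative) ##
    # rpm -ivh htop-1.0.3-1.x86_64.rpm
    # yum remove htop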

--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/build-rpm-deb-package-source-checkinstall.html

Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).

[1]:http://checkinstall.izto.org/docs/README
[2]:http://rpm.pbone.net/
[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html