Merge pull request #4 from LCTT/master

2014.9.27
This commit is contained in:
Junkai 2014-09-27 21:24:07 +08:00
commit d8d4deed04
97 changed files with 4726 additions and 3747 deletions

View File

@ -1,6 +1,6 @@
10个实用的关于linux中Squid代理服务器的面试问答
10个关于linux中Squid代理服务器的实用面试问答
================================================================================
不仅是系统管理员和网络管理员时不时会听到“代理服务器”这个词,我们也经常听到。代理服务器已经是一种企业的文化而且那是需要时间来积累的。它现在也在一些小型的学校或者大型跨国公司的自助餐厅里得到了实现。Squid也可做代理服务就是这样一个应用程序它既可以被作为代理服务器同时也是在其同类工具中比较被广泛使用的一种。
不仅是系统管理员和网络管理员时不时会听到“代理服务器”这个词,我们也经常听到。代理服务器已经成为一种企业常态,而且经常会接触到它。它现在也出现在一些小型的学校或者大型跨国公司的自助餐厅里。Squid常被视作代理服务的代名词就是这样一个应用程序它不但可以作为代理服务器同时也是该类工具中使用较为广泛的一种。
本文旨在提高你在遇到关于代理服务器面试点时的一些基本应对能力。
@ -10,12 +10,13 @@
### 1. 什么是代理服务器?代理服务器在计算机网络中有什么用途? ###
> **回答** : 代理服务器是指那些作为客户端和资源提供商或服务器之间的中间件的物理机或者应用程序。客户端从代理服务器中寻找文件、页面或者是数据而且代理服务器能处理客户端与服务器之间所有复杂事务从而满足客户端的生成的需求。
代理服务器是WWW万维网的支柱它们其中大部分都是Web代理。一台代理服务器能处理客户端与服务器之间的复杂通信事务。此外它在网络上提供的是匿名信息那就意味着你的身份和浏览痕迹都是安全的。代理可以去配置允许哪些网站的客户能看到哪些网站被屏蔽了。
> **回答** : 代理服务器是指那些作为客户端和资源提供商或服务器之间的中间件的物理机或者应用程序。客户端从代理服务器中寻找文件、页面或者是数据,而且代理服务器能处理客户端与服务器之间所有复杂事务,从而满足客户端生成的需求。
代理服务器是WWW万维网的支柱它们其中大部分都是Web代理。一台代理服务器能处理客户端与服务器之间的复杂通信事务。此外它在网络上提供的是匿名信息LCTT 译注:指浏览者的 IP、浏览器信息等被隐藏这就意味着你的身份和浏览痕迹都是安全的。可以对代理进行配置以允许客户端访问哪些网站以及屏蔽哪些网站。
### 2. Squid是什么? ###
> **回答** : Squid是一个在GNU/GPL协议下发布的即可作为代理服务器同时也可作为Web缓存守护进程的应用软件。Squid主要是支持像HTTP和FTP那样的协议但是对其它的协议比如HTTPSSSL,TLS等同样也能支持。其特点是Web缓存守护进程通过从经常上访问的网站里缓存Web和DNS从而让上网速度更快。Squid支持所有的主流平台包括LinuxUNIX微软公司的Windows和苹果公司的Mac。
> **回答** : Squid是一个在GNU GPL协议下发布的既可作为代理服务器同时也可作为Web缓存守护进程的应用软件。Squid主要支持HTTP和FTP这样的协议但是对其它的协议比如HTTPS、SSL、TLS等同样也能支持。其特点是Web缓存守护进程通过缓存经常访问的网站的Web和DNS数据从而让上网速度更快。Squid支持所有的主流平台包括Linux、UNIX、微软公司的Windows和苹果公司的Mac。
### 3. Squid的默认端口是什么怎么去修改它的操作端口 ###
@ -66,17 +67,17 @@ f. 保存配置文件并退出重启Squid服务让其生效。
# service squid restart
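顺带补充第 3 问提到的修改默认端口3128通常就是修改配置文件中的 http_port 指令,改完后像上面那样重启服务即可。下面是一个配置示意(假设配置文件为 /etc/squid/squid.conf8080 端口仅为举例):

```
# Squid 默认监听 3128 端口
# http_port 3128
# 改为监听 8080 端口(举例)
http_port 8080
```

修改保存后,重新运行 `service squid restart` 让新端口生效。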
### 5. 在Squid中什么是媒体范围限制和部分下载 ###
### 5. 在Squid中什么是媒体范围限制Media Range Limitation和部分下载? ###
> **回答** : 媒体范围限制是Squid的一种特殊的功能它只从服务器中获取所需要的数据而不是整个文件。这个功能很好的实现了用户在各种视频流媒体网站如YouTube和Metacafe看视频时可以点击视频中的进度条来选择进度因此整个视频不用全部都加载除了一些需要的部分。
Squid部分下载功能的特点是很好地实现了在Windows更新时下载的文件能以一个个小数据包的形式暂停。正因为它的这个特点正在下载文件的Windows机器能不用担心数据会丢失从而进行恢复下载。Squid让媒体范围限制和部分下载功能只在存储一个完整文件的复件之后实现。此外当用户指向另一个页面时Squid要以某种方式进行特殊地配置部分下载下来的文件才会不被删除且留有缓存
Squid部分下载功能的特点是很好地实现了类似在Windows更新时能以一个个小数据包的形式下载并可以暂停正因为它的这个特点正在下载文件的Windows机器可以重新继续下载而不用担心数据会丢失。Squid的媒体范围限制和部分下载功能只有在存储了一个完整文件的副本之后才能实现。此外当用户访问另一个页面时除非Squid进行了特定的配置部分下载下来的文件会被删除且不留在缓存中。
### 6. 什么是Squid的反向代理 ###
> **回答** : 反向代理是Squid的一个特点,这个功能被用来加快最终用户的上网速度。缩写为 RS 的原服务器包含了所有资源,而代理服务器则叫 PS 。客户端寻找RS所提供的数据第一次指定的数据和它的复件会经过多次配置从RS上存储在PS上。这样的话每次从PS上请求的数据就等于就是从原服务器上获取的。这样就会减轻网络拥堵减少CPU使用率降低网络资源的利用率从而缓解原来实际服务器的负载压力。但是RS统计不了总流量的数据因为PS分担了部分原服务器的任务。X-Forwarded-For HTTP 就能记录下通过HTTP代理或负载均衡方式连接到RS的客户端最原始的IP地址。
> **回答** : 反向代理是Squid的一个功能,这个功能被用来加快最终用户的上网速度。下面用缩写 RS 的表示包含了资源的原服务器,而代理服务器则称作 PS 。初次访问时它会从RS得到其提供的数据并将其副本按照配置好的时间存储在PS上。这样的话每次从PS上请求的数据就相当于就是从原服务器上获取的。这样就会减轻网络拥堵减少CPU使用率降低网络资源的利用率从而缓解原来实际服务器的负载压力。但是RS统计不了总流量的数据因为PS分担了部分原服务器的任务。X-Forwarded-For HTTP 信息能用于记录下通过HTTP代理或负载均衡方式连接到RS的客户端最原始的IP地址。
严格意义上来用单个Squid服务器同时作为正向代理服务器和反向代理服务器是可行的。
从技术上用单个Squid服务器同时作为正向代理服务器和反向代理服务器是可行的。
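作为补充,下面给出一个反向代理(加速模式)的 squid.conf 配置示意其中的域名、IP 和端口都是假设的例子:

```
# 以加速accelerator模式在 80 端口接收客户端请求
http_port 80 accel defaultsite=www.example.com
# cache_peer 指向 RS原服务器缓存未命中时请求会转发给它IP 为举例)
cache_peer 192.168.1.10 parent 80 0 no-query originserver name=rs
# 只允许访问我们代理的站点
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access rs allow our_site
```

这只是一个最小示意,实际部署时还需要结合缓存和访问控制的其它设置。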
### 7. 由于Squid能作为一个Web缓存守护进程那缓存可以删除吗怎么删除 ###
@ -91,7 +92,7 @@ b. 创建交换分区目录。
# squid -z
### 8. 你身边有一台客户机,而你正在工作,如果想要限制儿童的访问时间段,你会怎么去设置那个场景? ###
### 8. 你有一台工作中的机器可以访问代理服务器,如果想要限制你的孩子的访问时间,你会怎么去设置那个场景? ###
把允许访问的时间设置成下午4点到晚上7点三个小时时间跨度为星期一到星期五。
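这个场景可以用 squid.conf 中的 time 类型 ACL 来表达,下面是一个示意片段(假设局域网客户端的 ACL 名为 localnetMTWHF 在 Squid 中表示星期一到星期五):

```
# 星期一到星期五下午4点到晚上7点
acl allowed_hours time MTWHF 16:00-19:00
# 仅在上述时间段放行局域网客户端localnet 假设已定义)
http_access allow localnet allowed_hours
http_access deny localnet
```

修改后同样需要重启 Squid 服务使其生效。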
@ -114,9 +115,9 @@ c. 重启Squid服务。
### 10. Squid的缓存会存储到哪里 ###
> **回答** : Squid存储的缓存是位于 /var/spool/squid 的特目录下。
> **回答** : Squid存储的缓存位于 /var/spool/squid 的特定目录下。
以上就是全部内容了,很快我还会带着其它有趣的内容回到这里届时还请继续关注Tecmint。别忘了告诉我们你的反馈和评论
以上就是全部内容了,很快我还会带着其它有趣的内容回到这里。
--------------------------------------------------------------------------------
@ -124,7 +125,7 @@ via: http://www.tecmint.com/squid-interview-questions/
作者:[Avishek Kumar][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -8,12 +8,13 @@
### 添加窗口按钮 ###
于一些未知的原因GNOME的开发者们决定对标准的窗口按钮关闭最小化最大化不屑一顾而支持只有单个关闭按钮的窗口了。我缺少了最大化按钮虽然你可以简单地拖动窗口到屏幕顶部来将它最大化而也可以通过在标题栏右击选择最小化或者最大化来进行最小化/最大化操作。这种变化仅仅增加了操作步骤,因此缺少最小化按钮实在搞得人云里雾里。所幸的是,有个简单的修复工具可以解决这个问题,下面说说怎样做吧:
由于一些未知的原因GNOME的开发者们决定对标准的窗口按钮关闭、最小化、最大化不屑一顾转而支持只有单个关闭按钮的窗口了。我很怀念最小化按钮虽然你可以简单地拖动窗口到屏幕顶部来将它最大化也可以通过在标题栏右击选择最小化或者最大化来进行最小化/最大化操作。这种变化仅仅增加了操作步骤,因此缺少最小化按钮实在搞得人云里雾里。所幸的是,有个简单的修复工具可以解决这个问题,下面说说怎样做吧:
默认情况下你应该安装了GNOME优化工具。通过该工具你可以打开最大化或最小化按钮图1
默认情况下你应该安装了GNOME优化工具GNOME Tweak Tool。通过该工具你可以打开最大化或最小化按钮图1
![Figure 1: Adding the minimize button back to the GNOME 3 windows.](http://www.linux.com/images/stories/41373/gnome3-max-min-window.png)
Figure 1: 添加回最小化按钮到GNOME 3窗口
<center>![图 1: Adding the minimize button back to the GNOME 3 windows.](http://www.linux.com/images/stories/41373/gnome3-max-min-window.png)
*图 1: 添加回最小化按钮到GNOME 3窗口*</center>
添加完后,你就可以看到最小化按钮了,它在关闭按钮的左边,等着为你服务呢。你的窗口现在管理起来更方便了。
@ -27,36 +28,39 @@ Figure 1: 添加回最小化按钮到GNOME 3窗口
### 添加扩展 ###
GNOME 3的最佳特性之一就是shell扩展这些扩展为GNOME带来了全部种类的有用的特性。关于shell扩展没必要从包管理器去安装。你可以访问[GNOME Shell扩展][2]站点搜索你想要添加的扩展点击扩展列表点击打开按钮然后扩展就安装完成了或者你也可以从GNOME优化工具中添加它们你在网站上会找到更多可用的扩展
GNOME 3的最佳特性之一就是shell扩展这些扩展为GNOME带来了各种类别的有用特性。关于shell扩展没必要从包管理器去安装。你可以访问[GNOME Shell扩展][2]站点搜索你想要添加的扩展点击扩展列表点击打开按钮然后扩展就安装完成了或者你也可以从GNOME优化工具中添加它们你在网站上会找到更多可用的扩展
你可能需要在浏览器中允许扩展安装。如果出现这样的情况你会在第一次访问GNOME Shell扩展站点时见到警告信息。当出现提示时只要点击允许即可。
令人印象更为深刻的(而又得心应手的扩展)之一,就是[Dash to Dock][3]。
令人印象更为深刻的(而又得心应手的)扩展之一,就是[Dash to Dock][3]。
该扩展将Dash移出应用程序概览并将它转变为相当标准的停靠栏图2
![Figure 2: Dash to Dock adds a dock to GNOME 3.](http://www.linux.com/images/stories/41373/gnome3-dash.png)
Figure 2: Dash to Dock添加一个停靠栏到GNOME 3.
<center>![图 2: Dash to Dock adds a dock to GNOME 3.](http://www.linux.com/images/stories/41373/gnome3-dash.png)
*图 2: Dash to Dock添加一个停靠栏到GNOME 3*</center>
当你添加应用程序到Dash后他们也将被添加到Dash to Dock。你也可以通过点击Dock底部的6点图标访问应用程序概览。
还有大量其它扩展聚焦于讲GNOME 3打造成一个更为高效的桌面在这些更好的扩展中,包括以下这些:
还有大量其它扩展致力于将GNOME 3打造成一个更为高效的桌面其中不错的包括以下这些
- [最近项目][4]: 添加一个最近使用项目的下拉菜单到面板。
- [搜索Firefox书签提供者][5]: 从概览搜索(并启动)书签。
- [Firefox书签搜索][5]: 从概览搜索(并启动)书签。
- [跳转列表][6]: 添加一个跳转列表弹出菜单到Dash图标该扩展可以让你快速打开和程序关联的新文档甚至更多
- [待办列表][7]: 添加一个下拉列表到面板,它允许你添加项目到该列表。
- [网页搜索对话框][8]: 允许你通过敲击Ctrl+空格来快速搜索网页并输入一个文本字符串(结果在新的浏览器标签页中显示)。
- [网页搜索框][8]: 允许你通过敲击Ctrl+空格来快速搜索网页并输入一个文本字符串(结果在新的浏览器标签页中显示)。
### 添加一个完整停靠栏 ###
如果Dash to dock对于而言功能还是太有限你想要通知区域甚至更多那么向你推荐我最喜爱的停靠栏之一[Cairo Dock][9]图3
如果Dash to Dock对你而言功能还是太有限你想要通知区域甚至更多那么向你推荐我最喜爱的停靠栏之一[Cairo Dock][9]图3
![Figure 3: Cairo Dock ready for action.](http://www.linux.com/images/stories/41373/gnome3-Cairo-dock.png)
Figure 3: Cairo Dock待命
<center>![图 3: Cairo Dock ready for action.](http://www.linux.com/images/stories/41373/gnome3-Cairo-dock.png)
在Cairo Dock添加到GNOME 3后你的体验将成倍地增长。从你的发行版的包管理器中安装这个优秀的停靠栏吧。
*图 3: Cairo Dock待命*</center>
不必将GNOME 3看作是一个效率不高的用户不友好的桌面。只要稍作调整GNOME 3可以成为和其它可用的桌面一样强大而用户友好的桌面。
在将Cairo Dock添加到GNOME 3后你的体验将成倍地增长。从你的发行版的包管理器中安装这个优秀的停靠栏吧。
不要将GNOME 3看作是一个效率不高、对用户不友好的桌面。只要稍作调整GNOME 3可以成为和其它可用的桌面一样强大而用户友好的桌面。
--------------------------------------------------------------------------------
@ -64,7 +68,7 @@ via: http://www.linux.com/learn/tutorials/781916-easy-steps-to-make-gnome-3-more
作者:[Jack Wallen][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
什么时候Linux才能完美
什么时候Linux才能完美
================================================================================
前几天我的同事兼损友Ken Starks在FOSS Force上发表了[一篇文章][1]关于他最喜欢发牢骚的内容Linux系统中那些不能正常工作的事情。这次他抱怨的是在Mint里使用KDE时碰到的字体问题。这对于Ken来说也不是什么新鲜事了。过去他写了一些文章关于各种Linux发行版中的缺陷从来都没有被认真修复过。他的观点是这些在一次又一次的发布中从没有被修复过的“小问题”对于Linux桌面系统在赢得大众方面的失败需要负主要责任。
@ -14,21 +14,21 @@
### 也不全是这样子的 ###
早在2002年的时候我第一次安装使用GNU/Linux像大多数美国人那样我搞不定拨号连接在我呆的这个小地方当时宽带还没普及。我在当地Best Buy商店里花了差不多70美元买了用热缩膜包装的Mandrake 9.0的Powerpack版当时那里同时在卖Mandrake和Red Hat现在仍然还在经营桌面业务。
早在2002年的时候我第一次安装使用GNU/Linux像大多数美国人那样我搞不定拨号连接在我呆的这个小地方当时宽带还没普及。我在当地Best Buy商店里花了差不多70美元买了用热缩膜包装的Mandrake 9.0的Powerpack版当时那里同时在卖Mandrake和Red Hat现在仍然还在经营桌面PC业务。
在那个恐龙时代Mandrake被认为是易用的Linux发行版中做的最好的。它安装简单还有人说比Windows还简单它自带的分区工具更是让划分磁盘像切苹果馅饼一样简单。不过实际上Linux老手们经常公开嘲笑Mandrake暗示易用的Linux不是真的Linux。
但是我很喜欢它感觉来到了一个全新的世界。再也不用担心Windows的蓝屏死机和几乎每天一死了。不幸的是之前在Windows下“能用”的很多外围设备也随之而去。
安装完Mandrake之后我要做的第一件事就是把我的小白盒拿给[Dragonware Computers][2]的Michelle把便宜的winmodem换成硬件调制解调器。就算一个硬件猫意味着计算机响应更快但是计算机商店却在40英里外的地方并不是很方便而且费用我也有点压力。
安装完Mandrake之后我要做的第一件事就是把我的小白盒拿给[Dragonware Computers][2]的Michelle把便宜的winmodem换成硬件调制解调器。就算一个硬件猫意味着计算机响应更快但是计算机商店却在40英里外的地方并不是很方便而且费用我也有点压力。
但是我不介意。我对Microsoft并不感冒而且使用一个“不同”的操作系统让我感觉自己就像一个计算机天才。
打印机也是个麻烦但是这个问题对于Mandrake还好不像其他大多数发行版还需要命令行里的操作才能解决。Mandrake提供了一个华丽的图形界面来设置打印机如果你正好幸运的有一台能在Linux下工作的打印机的话。很多,不是大多数,都不行。
打印机也是个麻烦但是这个问题对于Mandrake还好不像其他大多数发行版还需要命令行里的操作才能解决。Mandrake提供了一个华丽的图形界面来设置打印机如果你正好幸运的有一台能在Linux下工作的打印机的话。很多打印机——就算不是大多数——都不行。
我的还在保修期的Lexmark在Windows下比其他打印机多出很多华而不实的小功能厂商并不支持Linux版本但是我找到一个多少能用的开源逆向工程驱动。它能在Mozilla浏览器里正常打印网页但是在Star Office软件里打印的话会是用很小的字体塞到页面的右上角里。打印机还会发出很大的机械响声让我想起了汽车变速箱在报废时发出的噪音。
Star Office问题的变通方案是把所有文字都保存到文本文件然后在文本编辑器里打印。而对于那个听上去像是打印机处于解体模式的噪音?我的方法是尽量不要打印。
Star Office问题的变通方案是把所有文字都保存到文本文件然后在文本编辑器里打印。而对于那个听上去像是打印机处于天魔解体模式的噪音?我的方法是尽量不要打印。
### 更多的其他问题-对我来说太多了都快忘了 ###
@ -36,12 +36,13 @@ Star Office问题的变通方案是把所有文字都保存到文本文件
好吧我还有个并口扫描仪在我转移到Linux之前两个星期买的之后它就基本是块砖了因为没有Linux下的驱动。
我的观点是在那个年代里这些都不重要。我们大多数人都习惯了修改配置文件之类的事情即便是运行微软产品的“IBM兼容”计算机。就像那个年代的大多数用户我刚学开始接触使用命令行的DOS机器在它上面打印机需要针对每个程序单独设置而且写写简单的autoexec.bat是必的技能。
我的观点是在那个年代里这些都不重要。我们大多数人都习惯了修改配置文件之类的事情即便是运行微软产品的“IBM兼容”计算机。就像那个年代的大多数用户我最开始学习接触的是使用命令行的DOS机器在它上面打印机需要针对每个程序单独设置而且写写简单的autoexec.bat是必备的技能。
![Linux as a 1966 “goat.”](http://fossforce.com/wp-content/uploads/2014/08/Pontiac_GTO_1966-300x224.jpg)
Linux就像1966年的“山羊”
<center>![Linux as a 1966 “goat.”](http://fossforce.com/wp-content/uploads/2014/08/Pontiac_GTO_1966-300x224.jpg)</center>
能够摆弄操作系统内部的配置是能够拥有一台计算机的一个简单部分。我们大多数使用计算机的人要么是极客或是希望成为极客。我们为这种能够调整计算机按我们想要的方式运行的能力而感到骄傲。我们就是那个年代里高科技版本的好男孩,他们会在周六下午在树荫下改装他们肌肉车上的排气管,通风管,化油器之类的。
<center>Linux就像1966年的“山羊”</center>
那时,能够摆弄操作系统内部的配置,是拥有一台计算机很自然的一部分。我们大多数使用计算机的人要么是极客,要么希望成为极客。我们为这种能够调整计算机按我们想要的方式运行的能力而感到骄傲。我们就是那个年代里高科技版本的好男孩,他们会在周六下午在树荫下改装他们肌肉车上的排气管、通风管、化油器之类的。
### 不过现在大家不是这样使用计算机的 ###
@ -59,7 +60,7 @@ via: http://fossforce.com/2014/08/when-linux-was-perfect-enough/
作者Christine Hall
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,20 +1,17 @@
使用Clonezilla对硬盘进行镜像和克隆
================================================================================
![Figure 1: Creating a partition on the USB stick for Clonezilla.](http://www.linux.com/images/stories/41373/fig-1-gparted.jpeg)
图1 在USB存储棒上为Clonezilla创建分区
Clonezilla是一个用于LinuxFree-Net-OpenBSDMac OS XWindows以及Minix的分区和磁盘克隆程序。它支持所有主要的文件系统包括EXTNTFSFATXFSJFS和BtrfsLVM2以及VMWare的企业集群文件系统VMFS3和VMFS5。Clonezilla支持32位和64位系统同时支持旧版BIOS和UEFI BIOS并且同时支持MBR和GPT分区表。它是一个用于完整备份Windows系统和所有安装于上的应用软件的好工具而我喜欢用它来为Linux测试系统做备份以便我可以在其上做疯狂的实验搞坏后可以快速恢复它们。
Clonezilla是一个用于LinuxFree-Net-OpenBSDMac OS XWindows以及Minix的分区和磁盘克隆程序。它支持所有主要的文件系统包括EXTNTFSFATXFSJFS和BtrfsLVM2以及VMWare的企业集群文件系统VMFS3和VMFS5。Clonezilla支持32位和64位系统同时支持旧版BIOS和UEFI BIOS并且同时支持MBR和GPT分区表。它是一个用于完整备份Windows系统和所有安装于上的应用软件的好工具而我喜欢用它来为Linux测试系统做备份以便我可以在其上做疯狂的实验搞坏后可以快速恢复它们。
Clonezilla也可以使用dd命令来备份不支持的文件系统该命令可以复制块而非文件因而不必在意文件系统。简单点说就是Clonezilla可以复制任何东西。关于块的快速说明磁盘扇区是磁盘上最小的可编址存储单元而块是由单个或者多个扇区组成的逻辑数据结构。
Clonezilla也可以使用dd命令来备份不支持的文件系统该命令复制的是块而非文件因而不必弄明白文件系统。简单点说就是Clonezilla可以复制任何东西。关于块的快速说明磁盘扇区是磁盘上最小的可编址存储单元而块是由单个或者多个扇区组成的逻辑数据结构。
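下面用一个普通文件来演示“按块复制”的思路(文件名均为举例;实际备份时 if= 会是 /dev/sda 这样的块设备):

```shell
# 生成一个 4KB 的测试“磁盘”文件
dd if=/dev/zero of=disk.img bs=1024 count=4 2>/dev/null
# 按块逐一复制dd 并不关心其中的文件系统是什么
dd if=disk.img of=disk-copy.img bs=1024 2>/dev/null
# 逐字节比较两份副本,确认完全一致
cmp disk.img disk-copy.img && echo "copies match"
```

两份文件逐字节相同时cmp 会静默退出,随后打印 copies match。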
Clonezilla分为两个版本Clonezilla Live和Clonezilla Server EditionSE。Clonezilla Live对于将单个计算机克隆岛本地存储设备或者网络共享来说是一流的。而Clonezilla SE则适合更大的部署用于一次性快速多点克隆整个网络中的PC。Clonezilla SE是一个神奇的软件我们将在今后讨论。今天我们将创建一个Clonezilla Live USB存储棒克隆某个系统然后恢复它。
Clonezilla分为两个版本Clonezilla Live和Clonezilla Server EditionSE。Clonezilla Live对于将单个计算机克隆到本地存储设备或者网络共享来说是一流的。而Clonezilla SE则适合更大的部署用于一次性快速多点克隆整个网络中的PC。Clonezilla SE是一个神奇的软件我们将在今后讨论。今天我们将创建一个Clonezilla Live USB存储棒克隆某个系统然后恢复它。
### Clonezilla和Tuxboot ###
当你访问下载页时,你会看到[稳定版和可选稳定发行版][1]。也有测试版本如果你有兴趣帮助改善Clonezilla那么我推荐你使用此版本。稳定版基于Debian不含有非自由软件。可选稳定版基于Ubuntu包含有一些非自由固件并支持UEFI安全启动。
在你[下载Clonezilla][2]后,请安装[Tuxboot][3]来复制Clonezilla到USB存储棒。Tuxboot是一个Unetbootin的修改版它支持Clonezilla你不能使用Unetbootin因为它无法工作。安装Tuxboot有点让人头痛然而Ubuntu用户通过个人包归档压缩PPA方便地安装
在你[下载Clonezilla][2]后,请安装[Tuxboot][3]来复制Clonezilla到USB存储棒。Tuxboot是一个Unetbootin的修改版它支持Clonezilla你不能使用Unetbootin因为它无法配合工作。安装Tuxboot有点让人头痛然而Ubuntu用户可以通过个人软件包存档PPA方便地安装
$ sudo apt-add-repository ppa:thomas.tsai/ubuntu-tuxboot
$ sudo apt-get update
@ -22,18 +19,24 @@ Clonezilla分为两个版本Clonezilla Live和Clonezilla Server EditionSE
如果你没有运行Ubuntu并且你的发行版不包含打包好的Tuxboot版本那么请[下载源代码tarball][4]并遵循README.txt文件中的说明来编译并安装。
安装完Tuxboot后就可以使用它来创建你精巧的可直接启动的Clonezilla USB存储棒了。首先创建一个最小200MB的FAT 32分区图1上面展示了使用GParted来进行分区。我喜欢使用标签比如“Clonezilla”这会让我知道它是个什么东西。该例子中展示了将一个2GB的存储棒格式化成一个单个分区。
Then fire up Tuxboot (figure 2). Check "Pre-downloaded" and click the button with the ellipsis to select your Clonezilla file. It should find your USB stick automatically, and you should check the partition number to make sure it found the right one. In my example that is /dev/sdd1. Click OK, and when it's finished click Exit. It asks you if you want to reboot now, but don't worry because it won't. Now you have a nice portable Clonezilla USB stick you can use almost anywhere.
然后启动Tuxboot图2。选中“预下载的Pre-downloaded”然后点击带省略号的按钮来选择Clonezilla文件。它会自动发现你的USB存储棒而你需要选中分区号来确保它找到的是正确的那个我的例子中是/dev/sdd1。点击确定然后当它完成后点击退出。它会问你是否要重启动请不要担心因为它不会的。现在你有一个精巧的便携式Clonezilla USB存储棒了你可以随时随地使用它了。
<center>![Figure 1: Creating a partition on the USB stick for Clonezilla.](http://www.linux.com/images/stories/41373/fig-1-gparted.jpeg)</center>
![Figure 2: Fire up Tuxboot.](http://www.linux.com/images/stories/41373/fig-2-tuxboot.jpeg)
图2 启动Tuxboot
<center>*图1 在USB存储棒上为Clonezilla创建分区*</center>
安装完Tuxboot后就可以使用它来创建你精巧的可直接启动的Clonezilla USB存储棒了。首先创建一个最小200MB的FAT 32分区图1上图展示了使用GParted来进行分区。我喜欢使用类似“Clonezilla”这样的标签这会让我知道它是个什么东西。该例子中展示了将一个2GB的存储棒格式化成一个单个分区。
然后启动Tuxboot图2。选中“预下载的Pre-downloaded”然后点击带省略号的按钮来选择Clonezilla文件。它会自动发现你的USB存储棒而你需要选中分区号来确保它找到的是正确的那个我的例子中是/dev/sdd1。点击确定然后当它完成后点击退出。它会问你是否要重启动不要担心现在不用重启。现在你有一个精巧的便携式Clonezilla USB存储棒了你可以随时随地使用它了。
<center>![Figure 2: Fire up Tuxboot.](http://www.linux.com/images/stories/41373/fig-2-tuxboot.jpeg)</center>
<center>*图2 启动Tuxboot*</center>
### 创建磁盘镜像 ###
在你想要备份的计算机上启动Clonezilla USB存储棒第一个映入你眼帘的是常规的启动菜单。启动到默认条目。你会被问及使用何种语言和键盘而当你到达启动Clonezilla菜单时请选择启动Clonezilla。在下一级菜单中选择设备镜像然后进入下一屏。
这一屏有点让人摸不着头脑里头有什么local_devssh_serversamba_server以及nfs_server之类的选项。这里就是要你选择将备份的镜像拷贝到哪里目标分区或者驱动器必须和你要拷贝的卷要一样大甚至更大。如果你选择local_dev那么你需要一个足够大的本地分区来存储你的镜像。附加USB硬盘驱动器是一个不错的快速而又简单的选项。如果你选择任何服务器选项你需要有线连接到服务器并提供IP地址并登录上去。我将使用一个本地分区这就是说要选择local_dev。
这一屏有点让人摸不着头脑里头有什么local_devssh_serversamba_server以及nfs_server之类的选项。这里就是要你选择将备份的镜像拷贝到哪里目标分区或者驱动器必须和你要拷贝的卷要一样大甚至更大。如果你选择local_dev那么你需要一个足够大的本地分区来存储你的镜像。附加USB硬盘驱动器是一个不错的快速而又简单的选项。如果你选择任何服务器选项你需要连接到服务器并提供IP地址并登录上去。我将使用一个本地分区这就是说要选择local_dev。
当你选择local_dev时Clonezilla会扫描所有连接到本地的存储设备包括硬盘和USB存储设备。然后它会列出所有分区。选择你想要存储镜像的分区然后它会问你使用哪个目录并列出目录。选择你所需要的目录然后进入下一屏它会显示所有的挂载以及已使用/可用的空间。按回车进入下一屏,请选择初学者还是专家模式。我选择初学者模式。
@ -41,12 +44,13 @@ Then fire up Tuxboot (figure 2). Check "Pre-downloaded" and click the button wit
下一屏中它会问你新建镜像的名称。在接受默认名称或者输入你自己的名称后进入下一屏。Clonezilla会扫描你所有的分区并创建一个检查列表你可以从中选择你想要拷贝的。选择完后在下一屏中会让你选择是否进行文件系统检查并修复。我才没这耐心所以直接跳过了。
下一屏中会问你是否想要Clonezilla检查你新创建的镜像以确保它是可恢复的。选是吧确保万无一失。接下来它会给你一个命令行提示如果你想用命令行而非GUI那么你必须再次按回车。你需要再次确认并输入y来确认制作拷贝。
下一屏中会问你是否想要Clonezilla检查你新创建的镜像以确保它是可恢复的。选“是”吧确保万无一失。接下来它会给你一个命令行提示如果你想用命令行而非GUI那么你必须再次按回车。你需要再次确认并输入y来确认制作拷贝。
在Clonezilla创建新镜像的时候你可以好好欣赏一下这个友好的红、白、蓝三色的进度屏图3
![Figure 3: Watch the creation of your new image.](http://www.linux.com/images/stories/41373/fig-3-export.jpeg)
图3 守候创建新镜像
<center>![Figure 3: Watch the creation of your new image.](http://www.linux.com/images/stories/41373/fig-3-export.jpeg)</center>
<center>*图3 守候创建新镜像*</center>
全部完成后按回车然后选择重启记得拔下你的Clonezilla USB存储棒。正常启动计算机然后去看看你新创建的Clonezilla镜像吧。你应该看到像下面这样的东西
@ -81,7 +85,7 @@ via: http://www.linux.com/learn/tutorials/783416-how-to-image-and-clone-hard-dri
作者:[Carla Schroder][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,155 @@
15个关于Linux的cd命令的练习例子
===========================
在Linux中**cd改变目录**命令对新手和系统管理员来说都是最重要、最常用的命令。对管理无图形界面服务器的管理员而言,‘**cd**‘是进入目录、检查日志、执行程序/应用软件/脚本以及完成其余各种任务的唯一途径。对新手来说,它是必须自己动手学习的最初始的命令。
![15 cd command examples in linux](http://www.tecmint.com/wp-content/uploads/2014/08/cd-command-in-linux.png)
*Linux中15个cd命令举例*
所以,请用心学习,我们在这会带给你**15**个基础的‘**cd**‘命令,其中不乏技巧和捷径,学会使用这些技巧,会大大减少你在终端上花费的精力和时间。
### 课程细节 ###
- 命令名称cd
- 代表:切换目录
- 使用平台所有Linux发行版本
- 执行方式:命令行
- 权限:访问自己的目录或者其余指定目录
- 级别:基础/初学者
1. 从当前目录切换到/usr/local
avi@tecmint:~$ cd /usr/local
avi@tecmint:/usr/local$
2. 使用绝对路径,从当前目录切换到/usr/local/lib
avi@tecmint:/usr/local$ cd /usr/local/lib
avi@tecmint:/usr/local/lib$
3. 使用相对路径,从当前路径切换到/usr/local/lib
avi@tecmint:/usr/local$ cd lib
avi@tecmint:/usr/local/lib$
4. **a**使用‘-’切换回上一个工作目录(同时会显示该目录)
avi@tecmint:/usr/local/lib$ cd -
/usr/local
avi@tecmint:/usr/local$
**b**切换当前目录到上级目录
avi@tecmint:/usr/local/lib$ cd ..
avi@tecmint:/usr/local$
5. 显示我们最后一个离开的工作目录(使用‘-’选项)
avi@tecmint:/usr/local$ cd --
/home/avi
6. 从当前目录向上级返回两层
avi@tecmint:/usr/local$ cd ../../
avi@tecmint:/$
7. 从任何目录返回到用户home目录
avi@tecmint:/usr/local$ cd ~
avi@tecmint:~$
avi@tecmint:/usr/local$ cd
avi@tecmint:~$
8. 切换工作目录到当前工作目录LCTT这有什么意义嘛
avi@tecmint:~/Downloads$ cd .
avi@tecmint:~/Downloads$
avi@tecmint:~/Downloads$ cd ./
avi@tecmint:~/Downloads$
9. 你当前目录是“/usr/local/lib/python3.4/dist-packages”现在要切换到“/home/avi/Desktop/”,要求:一行命令,通过向上一直切换直到‘/’,然后使用绝对路径
avi@tecmint:/usr/local/lib/python3.4/dist-packages$ cd ../../../../../home/avi/Desktop/
avi@tecmint:~/Desktop$
10. 从当前工作目录切换到/var/www/html要求不要将命令打完整使用TAB
avi@tecmint:/var/www$ cd /v<TAB>/w<TAB>/h<TAB>
avi@tecmint:/var/www/html$
11. 从当前目录切换到/etc/v__ _啊呀你竟然忘了目录的名字但是你又不想用TAB
avi@tecmint:~$ cd /etc/v*
avi@tecmint:/etc/vbox$
**请注意:**如果只有一个目录以‘**v**’开头,这将会进入‘**vbox**’。如果有多个目录以‘**v**’开头,而且命令行中没有提供更多的限定条件,将会进入按字典字母顺序排在最前面的那个以‘**v**’开头的目录。
12. 你想切换到用户‘**av**不确定是avi还是avt目录不用**TAB**
avi@tecmint:/etc$ cd /home/av?
avi@tecmint:~$
13. Linux下的pushd和popd
pushd和popd是bash以及其它几种shell中的内建命令它们分别能够把当前工作目录的位置保存到内存中的目录栈里以及从目录栈中取出保存的目录并将其作为当前目录,同时完成目录切换:
avi@tecmint:~$ pushd /var/www/html
/var/www/html ~
avi@tecmint:/var/www/html$
上面的命令保存当前目录到内存然后切换到要求的目录。一旦popd被执行它会从内存取出保存的目录位置并切换回该目录
avi@tecmint:/var/www/html$ popd
~
avi@tecmint:~$
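下面是一个目录栈的完整小演示假设使用的是bash目录名仅为举例

```shell
# pushd/popd 是 bash 的内建命令;先准备两个演示目录
mkdir -p /tmp/dir_a /tmp/dir_b
cd /tmp/dir_a
pushd /tmp/dir_b > /dev/null   # 把当前目录压入目录栈,并切换到 /tmp/dir_b
pwd                            # 此时位于 /tmp/dir_b
dirs                           # 查看当前的目录栈
popd > /dev/null               # 弹出栈顶,回到 /tmp/dir_a
pwd                            # 此时位于 /tmp/dir_a
```

借助 dirs 可以随时查看目录栈中保存了哪些目录。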
14. 切换到名字带有空格的目录
avi@tecmint:~$ cd test\ tecmint/
avi@tecmint:~/test tecmint$
avi@tecmint:~$ cd 'test tecmint'
avi@tecmint:~/test tecmint$
avi@tecmint:~$ cd "test tecmint"/
avi@tecmint:~/test tecmint$
15. 从当前目录切换到下载目录,然后列出它所包含的内容(使用一行命令)
avi@tecmint:/usr$ cd ~/Downloads && ls
...
.
service_locator_in.xls
sources.list
teamviewer_linux_x64.deb
tor-browser-linux64-3.6.3_en-US.tar.xz
.
...
我们尝试用最少的词句和一如既往的友好方式来让你了解Linux的运作和使用。
这就是所有内容。我很快会带着另一个有趣的主题回来的。
---
via: http://www.tecmint.com/cd-command-in-linux/
作者:[Avishek Kumar][a]
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/

View File

@ -1,4 +1,4 @@
Linux FAQ -- 如何在CentOS或者RHEL上启用Nux Dextop仓库
Linux有问必答:如何在CentOS或者RHEL上启用Nux Dextop仓库
================================================================================
> **问题**: 我想要安装一个在Nux Dextop仓库的RPM包。我该如何在CentOS或者RHEL上设置Nux Dextop仓库
@ -6,7 +6,7 @@ Linux FAQ -- 如何在CentOS或者RHEL上启用Nux Dextop仓库
要在CentOS或者RHEL上启用Nux Dextop遵循下面的步骤。
首先,要理解Nux Dextop被设计与EPEL仓库共存。因此你需要使用Nux Dexyop仓库前先[启用 EPEL][2]。
首先要知道Nux Dextop被设计为与EPEL仓库共存。因此在使用Nux Dextop仓库前你需要先[启用 EPEL][2]。
启用EPEL后用下面的命令安装Nux Dextop仓库。
@ -26,13 +26,13 @@ Linux FAQ -- 如何在CentOS或者RHEL上启用Nux Dextop仓库
### 对于 Repoforge/RPMforge 用户 ###
据作者所说Nux Dextop目前所知会与其他第三方库比如Repoforge和ATrpms相冲突。因此如果你启用了除了EPEL的其他第三方库强烈建议你将Nux Dextop仓库设置成“default off”默认关闭状态。就是用文本编辑器打开/etc/yum.repos.d/nux-dextop.repo并且在nux-desktop下面将"enabled=1" 改成 "enabled=0"。
据作者所说目前已知Nux Dextop会与其他第三方库比如Repoforge和ATrpms相冲突。因此如果你启用了除EPEL之外的其他第三方库强烈建议你将Nux Dextop仓库设置成“default off”默认关闭状态。就是用文本编辑器打开/etc/yum.repos.d/nux-dextop.repo并且在[nux-dextop]小节下面将"enabled=1" 改成 "enabled=0"。
$ sudo vi /etc/yum.repos.d/nux-dextop.repo
$ sudo vi /etc/yum.repos.d/nux-dextop.repo
![](https://farm6.staticflickr.com/5560/14789955930_f8711b3581_z.jpg)
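除了用 vi 手工编辑,也可以用 sed 一条命令完成这个修改。下面用一个临时文件演示(真实文件是 /etc/yum.repos.d/nux-dextop.repo修改它需要 root 权限,例如配合 sudo 使用):

```shell
# 构造一个示意的 repo 片段(仅为演示)
printf '[nux-dextop]\nname=Nux.Ro RPMs\nenabled=1\n' > nux-dextop.repo
# 把 enabled=1 改成 enabled=0即默认关闭该仓库
sed -i 's/^enabled=1$/enabled=0/' nux-dextop.repo
grep enabled nux-dextop.repo
```

最后一条 grep 会输出 enabled=0说明仓库已被默认关闭。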
当你无论何时从Nux Dextop仓库安装包时显式地用下面的命令启用仓库。
无论何时当你从Nux Dextop仓库安装包时显式地用下面的命令启用仓库。
$ sudo yum --enablerepo=nux-dextop install <package-name>
@ -41,7 +41,7 @@ $ sudo vi /etc/yum.repos.d/nux-dextop.repo
via: http://ask.xmodulo.com/enable-nux-dextop-repository-centos-rhel.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
Linux有问必答——如何修复“运行aclocal失败没有该文件或目录”
Linux有问必答:如何修复“运行aclocal失败没有该文件或目录”
================================================================================
> **问题**我试着在Linux上构建一个程序该程序的开发版本是使用“autogen.sh”脚本进行的。当我运行它来创建配置脚本时却发生了下面的错误
>
@ -24,7 +24,7 @@ Linux有问必答——如何修复“运行aclocal失败没有该文件或
via: http://ask.xmodulo.com/fix-failed-to-run-aclocal.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,5 +1,7 @@
Email生日快乐
他发明了 Email
================================================================================
[编者按:本文所述的 Email 发明人的观点存在很大的争议,请读者留意,以我的观点来看,其更应该被称作为某个 Email 应用系统的发明人其所发明的一些功能和特性至今沿用。——wxy]
**一个印度裔美国人用他天才的头脑发明了电子邮件,而从此以后我们没有哪一天可以离开电子邮件。**
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/photo/150x150xDbOx104130AM8312014.jpg.pagespeed.ic.QJJxt_P8uE.jpg)
@ -18,6 +20,6 @@ via: http://www.efytimes.com/e1/fullnews.asp?edid=147170
作者Sanchari Banerjee
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,6 +1,7 @@
在Ubuntu14.04上安装UberWriterMarkdown编辑器
================================================================================
下面将展示如何通过官方的PPA源在Ubuntu14.04上安装UberWriter编辑器
这是一篇快速教程指导我们如何通过官方的PPA源在Ubuntu14.04上安装UberWriter编辑器。
[UberWriter][1]是一款Ubuntu下的Markdown编辑器它简洁的界面能让我们更致力于编辑文字。UberWriter利用了[pandoc][3](一个格式转换器)。但由于UberWriter的UI是基于GTK3的因此不能完全兼容Unity桌面系统。以下是对UberWriter功能的列举
- 简洁的界面
@ -13,7 +14,7 @@
### 在Ubuntu14.04上安装UberWriter ###
UberWriter可以在[Ubuntu软件中心][4]中找到但是安装需要支付$5。如果你真的喜欢这款编辑器并想为开发者提供一些资金支持的话我很建议你购买它。
UberWriter可以在[Ubuntu软件中心][4]中找到但是安装需要支付5美元。如果你真的喜欢这款编辑器并想为开发者提供一些资金支持的话我很建议你购买它。
除此之外UberWriter也能通过官方的PPA源来免费安装。通过如下命令
@ -29,16 +30,15 @@ UberWriter可以在[Ubuntu软件中心][4]中找到但是安装需要支付$5。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/UberWriter_Ubuntu_1.jpeg)
当想要导出到PDF的时候会提示先安装texlive。
我尝试导出到PDF的时候被提示安装texlive。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/UberWriter_Ubuntu_PDF_Export.png)
虽然导出到HTML和ODT格式是好的。
在Linux下还有一些其他的markdown编辑器。[Remarkable][5]是一款能够实时预览的编辑器但UberWriter不能。如果你在寻找文本编辑器的话你以可以试试[Texmaker LaTeX editor][6]。
系统这次展示能够帮你在Ubuntu14.04上成功安装UberWriter。我猜想UberWriter在Ubuntu12.04Linux Mint 17Elementary OS和其他在Ubuntu的基础上的Linux发行版上也能成功安装。
在Linux下还有一些其他的markdown编辑器。[Remarkable][5]是一款能够实时预览的编辑器UberWriter却不能不过总的来说它是一款很不错的应用。如果你在寻找文本编辑器的话你可以试试[Texmaker LaTeX editor][6]。
希望这个教程能够帮你在Ubuntu14.04上成功安装UberWriter。我猜想UberWriter在Ubuntu12.04、Linux Mint 17、Elementary OS和其他基于Ubuntu的Linux发行版上也能成功安装。
--------------------------------------------------------------------------------
@ -46,7 +46,7 @@ via: http://itsfoss.com/install-uberwriter-markdown-editor-ubuntu-1404/
作者:[Abhishek][a]
译者:[John](https://github.com/johnhoow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,12 +1,12 @@
QuiteRSS: Linux桌面的RSS阅读器
================================================================================
[QuiteRSS][1]是一个自由[开源][2]的RSS/Atome阅读器。它可以在Windows、Linux和Mac上运行。它用C++/QT编写,所以它有许多的特点
[QuiteRSS][1]是一个免费的[开源][2]RSS/Atom阅读器。它可以在Windows、Linux和Mac上运行。它用C++/QT编写。它有许多的特色功能。
QuiteRSS的界面让我想起Lotus Notes mail会有很多RSS信息排列在大小合适的方块上你可以通过标签分组。需要查找东西时只需在下面板上打开RSS信息。
QuiteRSS的界面让我想起Lotus Notes mail会有很多RSS信息排列在右侧面板上,你可以通过标签分组。点击一个 RSS 条目时,会在下方的面板里面显示该信息。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/QuiteRSS_Ubuntu.jpeg)
除了上述功能,它还有一个广告屏蔽器,一个报纸输出视图通过URL特性导入RSS等众多功能。你可以在[这里][3]查找到完整的功能列表。
除了上述功能它还有一个广告屏蔽器一个报纸视图通过URL导入RSS等众多功能。你可以在[这里][3]查找到完整的功能列表。
### 在 Ubuntu 和 Linux Mint 上安装 QuiteRSS ###
@ -20,19 +20,19 @@ QuiteRSS在Ubuntu 14.04 和 Linux Mint 17中可用。你可以通过以下命令
sudo apt-get update
sudo apt-get install quiterss
上面的命令在所有基于Ubuntu的发行版都支持比如Linux Mint, Elementary OS, Linux Lite, Pinguy OS等等。对于其他Linux发行版和平台上,你可以从 [下载页][5]获得源码来安装。
上面的命令支持所有基于Ubuntu的发行版比如Linux Mint, Elementary OS, Linux Lite, Pinguy OS等等。对于其他Linux发行版和平台上,你可以从 [下载页][5]获得源码来安装。
### 卸载 QuiteRSS ###
用下命令卸载 QuiteRSS
用以下命令卸载 QuiteRSS
sudo apt-get remove quiterss
如果你使用了PPA,你还需要从源列表中把仓库删除:
如果你使用了PPA,你也应该从源列表中把仓库删除:
sudo add-apt-repository --remove ppa:quiterss/quiterss
QuiteRSS是一个不错的开源RSS阅读器尽管我更喜欢[Feedly][6]。尽管现在 Feedly 还没有Linux桌面程序但是你依然可以在网页浏览器中使用。希望你会觉得QuiteRSS值得在桌面Linux一试。
QuiteRSS是一个不错的开源RSS阅读器尽管我更喜欢[Feedly][6]。不过现在 Feedly 还没有Linux桌面程序但是你依然可以在网页浏览器中使用。希望你会觉得QuiteRSS值得在桌面Linux一试。
--------------------------------------------------------------------------------

View File

@ -1,6 +1,6 @@
Linux有问必答——如何查找并移除Ubuntu上陈旧的PPA仓库
================================================================================
> **问题**我试着通过运行apt-get update命令来再次同步包索引文件但是却出现了“404 无法找到”的错误看起来似乎是我不能从先前添加的第三方PPA仓库中获取最新的索引。我怎样才能清这些破损而且陈旧的PPA仓库呢
> **问题**我试着通过运行apt-get update命令来再次同步包索引文件但是却出现了“404 无法找到”的错误看起来似乎是我不能从先前添加的第三方PPA仓库中获取最新的索引。我怎样才能清除这些破损而且陈旧的PPA仓库呢
Err http://ppa.launchpad.net trusty/main amd64 Packages
404 Not Found
@ -12,7 +12,7 @@ Linux有问必答——如何查找并移除Ubuntu上陈旧的PPA仓库
E: Some index files failed to download. They have been ignored, or old ones used instead.
你试着更新APT包索引时“404 无法找到”错误总是会在版本更新之后发生。就是说在你升级你的Ubuntu发行版后你在旧的版本上添加的一些第三方PPA仓库就不再受新版本的支持。在此种情况下你可以像下面这样来**鉴别并清除那些破损的PPA仓库**。
当你试着更新APT包索引时“404 无法找到”错误总是会在版本更新之后发生。就是说在你升级你的Ubuntu发行版后你在旧的版本上添加的一些第三方PPA仓库就不再受新版本的支持。在此种情况下你可以像下面这样来**鉴别并清除那些破损的PPA仓库**。
首先找出那些引起“404 无法找到”错误的PPA。
@ -22,7 +22,7 @@ Linux有问必答——如何查找并移除Ubuntu上陈旧的PPA仓库
在本例中Ubuntu Trusty不再支持的PPA仓库是“ppa:finalterm/daily”。
去吧,去[移除PPA仓库][1]。
去[移除PPA仓库][1]
$ sudo add-apt-repository --remove ppa:finalterm/daily
@ -30,14 +30,14 @@ Linux有问必答——如何查找并移除Ubuntu上陈旧的PPA仓库
![](https://farm4.staticflickr.com/3844/15158541642_1fc8f92c77_z.jpg)
在移除所有过时PPA仓库后重新运行“apt-get update”命令来检查它们是否都被移除。
在移除所有过时PPA仓库后重新运行“apt-get update”命令来检查它们是否都被成功移除。
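如果 PPA 较多,也可以先把 apt-get update 的输出保存下来,再用 grep 过滤出返回 404 的 PPA 源。下面用一段构造的日志演示过滤的思路(日志内容仅为示例;实际使用时可执行 sudo apt-get update 2>&1 | tee update.log

```shell
# 模拟一段 apt-get update 的输出
printf 'Err http://ppa.launchpad.net trusty/main amd64 Packages\n  404  Not Found\nHit http://archive.ubuntu.com trusty Release\n' > update.log
# 找出 404 错误对应的 PPA 源所在的行
grep -B1 '404' update.log | grep 'ppa.launchpad.net'
```

输出中列出的就是需要用 add-apt-repository --remove 移除的 PPA 源。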
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/find-remove-obsolete-ppa-repositories-ubuntu.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,27 +1,26 @@
2q1w2007翻译中
世界上最小的发行版之一Tiny Core有了新的更新
世界上最小的发行版之一Tiny Core有了更新
================================================================================
![Tiny Core desktop](http://i1-news.softpedia-static.com/images/news2/One-of-the-Smallest-Distros-in-the-World-Tiny-Core-Gets-a-Fresh-Update-458785-2.jpg)
Tiny Core
**Robert Shingledecker 刚刚发布了最新可用的最终版本的Tiny Core 5.4,这也使它成为世界上最小的发行版之一**
**Robert Shingledecker 宣布了最终版本的Tiny Core 5.4 Linux操作系统已经可以即刻下载,这也使它成为世界上最小的发行版之一**
发行版的名字说明了一切,但是开发者依然集成了一些有意思的包和一个轻量的桌面。这次最新的迭代只有一个候选版本,而且它将会是有史以来最安静的版本
发行版的名字说明了一切,但是开发者依然集成了一些有意思的包和一个轻量的桌面来与它相匹配。这次最新的迭代只有一个候选版本,而且它也是迄今为止最安静的版本之一
官网上的开发者说"Tiny Core是一个简单的例子来示范核心项目可以提供什么。,它提供了一个12MB的FLTK/FLWM桌面。用户对提供的程序和外加的硬件有完整的控制权。你可以把它用在桌面、笔记本或者服务器上,这可以由用户从在线库中安装附加程序时选择,或者用提供的工具编译大多数你需要的。"
官网上的开发者说"Tiny Core是一个简单的范例来说明核心项目可以提供什么。它提供了一个12MB的FLTK/FLWM桌面。用户对提供的程序和外加的硬件有完整的控制权。你可以把它用在桌面、笔记本或者服务器上,这可以由用户从在线库中安装附加程序时选择,或者用提供的工具编译大多数你需要的。"
根据更新日志,NFS的入口被添加,'Done'将在新的一行里显示,udev也升级到174来修复竞态条件问题。
关于修改和升级的完整内容可以在官方的[声明][1]里找到。
你可以下载Tiny Core Linux 5.4.
你可以点击以下链接下载Tiny Core Linux 5.4.
- [Tiny Core Linux 5.4 (ISO)][2][iso] [14 MB]
- [Tiny Core Plus 5.4 (ISO)][3][iso] [72 MB]
- [Core 5.4 (ISO)][4][iso] [8.90 MB]
这些发都有Live,你可以在安装之前试用。
这些发行版都提供Live模式你可以在安装之前试用。
--------------------------------------------------------------------------------
@ -29,7 +28,7 @@ via: http://news.softpedia.com/news/One-of-the-Smallest-Distros-in-the-World-Tin
作者:[Silviu Stahie][a]
译者:[2q1w2007](https://github.com/2q1w2007)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,41 @@
GNOME控制中心3.14 RC1修复了大量潜在崩溃问题
================================================================================
![GNOME Control Center in Arch Linux](http://i1-news.softpedia-static.com/images/news2/GNOME-Control-Center-3-14-RC1-Correct-Lots-of-Potential-Crashes-458986-2.jpg)
Arch Linux下的GNOME控制中心
**GNOME控制中心在GNOME中更改桌面各方面设置的主界面已经升级至3.14 RC1伴随而来的是大量来自GNOME stack的包。**
GNOME控制中心是在GNOME生态系统中十分重要的软件之一尽管不是所有的用户意识到了它的存在。GNOME控制中心是管理由GNOME驱动的操作系统中所有设置的部分就像你从截图里看到的那样。
GNOME控制中心不是很经常被宣传它实际上是GNOME stack中为数不多的源代码包和安装后的应用名称不同的软件包。源代码包的名字为GNOME控制中心但用户经常看到的应用名称是“设置”或“系统设置”取决于开发者的选择。
### GNOME控制中心 3.14 RC1 带来哪些新东西 ###
通过更新日志可以得知升级了libgd以修复GdNotification主题切换视图时背景选择对话框不再重新调整大小选择对话框由三个不同视图组合而成修复了Flickr支持中的一个内存泄漏在“日期和时间”中不再使用硬编码的字体大小修复了改换窗口管理器或重启时引起的崩溃更改无线网络启用时可能引起的崩溃也已被修复以及纠正了更多可能的WWAN潜在崩溃因素。
此外现在热点仅在设备活动时运行所有虚拟桥接现在是隐藏的不再显示VPN连接的底层设备默认不显示空文件夹列表解决了几个UI填充问题输入焦点现在重新回到了账户对话框将年份设置为0时导致的崩溃已修复“Wi-Fi热点”属性居中修复了打开启用热点时弹出警告的问题以及现在打开热点失败时将弹出错误信息。
完整的变动更新以及bug修复参见官方[更新日志][1]。
你可以下载GNOME控制中心 3.14 RC1
- [tar.xz (3.12.1 稳定版)][2][sources] [6.50 MB]
- [tar.xz (3.14 RC1 开发版)][3][sources] [6.60 MB]
这里提供的仅仅是源代码包你必须自己编译以测试GNOME控制中心。除非你真的知道自己在做什么否则你应该等到完整的GNOME stack在源中可用时再使用。
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/GNOME-Control-Center-3-14-RC1-Correct-Lots-of-Potential-Crashes-458986.shtml
作者:[Silviu Stahie][a]
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://ftp.acc.umu.se/pub/GNOME/sources/gnome-control-center/3.13/gnome-control-center-3.13.92.news
[2]:http://ftp.acc.umu.se/pub/GNOME/sources/gnome-control-center/3.12/gnome-control-center-3.12.1.tar.xz
[3]:http://ftp.acc.umu.se/pub/GNOME/sources/gnome-control-center/3.13/gnome-control-center-3.13.92.tar.xz

View File

@ -0,0 +1,36 @@
欧洲现在很流行拥抱开源
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/08/Turin_Open_Source.jpg)
看来拥抱[开源][1]最近在欧洲的国家很流行。上个月我们刚听说[都灵成为意大利首个官方接受开源产品的城市][2]。另一个意大利西北部城市,[乌迪内][3]已经宣布他们正在抛弃微软Office转而迁移到[OpenOffice][4]。
乌迪内有100,000的人口并且行政部门有大约900台电脑它们都运行着微软Windows以及它的默认产品套装。根据[预算文档][5]迁移将在大约12月份时进行从80台新电脑开始。接着将会是旧电脑迁移到OpenOffice。
迁移估计会节省一笔授权费用否则每台电脑将花费大约400欧元总计360,000欧元。但是节约成本并不是迁移的唯一目的获得常规的软件升级也是其中一个因素。
当然从微软的Office到OpenOfifice不会太顺利。不过全市的培训计划是先让少数员工使用安装了OpenOffice的电脑。
如我先前说明的,这似乎在欧洲是一个趋势。继今年早些时候的[西班牙的加那利群岛][7]之后,[法国城市图卢兹也通过使用LibreOffice节省了100万欧元][6]。邻近的城市[日内瓦也有拥抱开源的迹象][8]。在世界的另一边,印度的[泰米尔纳德邦][9]和喀拉拉邦政府也抛弃了微软而使用开源软件。
伴随着经济的萧条我觉得Windows XP的死亡一直是开源的福音。无论是什么原因我很高兴看到这份名单越来越大。你看呢
--------------------------------------------------------------------------------
via: http://itsfoss.com/udine-open-source/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://itsfoss.com/category/open-source-software/
[2]:http://linux.cn/article-3602-1.html
[3]:http://en.wikipedia.org/wiki/Udine
[4]:https://www.openoffice.org/
[5]:http://www.comune.udine.it/opencms/opencms/release/ComuneUdine/comune/Rendicontazione/PEG/PEG_2014/index.html?lang=it&style=1&expfolder=???+NavText+???
[6]:http://linux.cn/article-3575-1.html
[7]:http://itsfoss.com/canary-islands-saves-700000-euro-open-source/
[8]:http://itsfoss.com/170-primary-public-schools-geneva-switch-ubuntu/
[9]:http://linux.cn/article-2744-1.html

View File

@ -4,25 +4,17 @@
![](http://i1-news.softpedia-static.com/images/news2/Mir-and-Unity-8-Update-Arrive-from-Ubuntu-Devs-459263-2.jpg)
**和其他项目一样Canonical也在开发Unity桌面环境与Mir显示服务。开发团队刚刚发布了一个小的更新,据此我们可以知道都有些什么进展**
Ubuntu开发者现在可能正集中精力于一些重要的发布上比如即将到来的Ubuntu 14.10Utopic Unicorn或者新的面向移动设备的Ubuntu Touch但他们同样也在推进Mir以及Unity 8这样的项目。
目前这一代Ubuntu系统使用的是Unity 7桌面环境但新一代桌面环境已经酝酿了很长一段时间它与新的显示服务一起已经用在Ubuntu的移动版中最终也会被带到桌面上。
这两个项目的负责人Kevin Gunn经常发布来自开发者的进度信息包括这一周以来的一些改变虽然这些信息都比较简略。
根据 [开发团队][1]的消息, 一些关于触摸/触发角的问题已经修正了也修复了几个翻译问题一些Dash UI相关的问题已经修复了目前 团队在开发Mir 0.8Mir 0.7.2将被升级同时一些高优先级的bug处理也在进行中。
你可以[下载 Ubuntu Next][7]来体验新的Unity 8以及Mir的特性但是还不够稳定。要等到成熟还需要一些时间。
--------------------------------------------------------------------------------
@ -30,7 +22,7 @@ via: http://news.softpedia.com/news/Mir-and-Unity-8-Update-Arrive-from-Ubuntu-De
作者:[Silviu Stahie][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,48 @@
Netflix支持 Ubuntu 上原生回放
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/netflix-ubuntu.jpg)
**我们[上个月说过Netflix 的原生 Linux 支持已经很接近了][1],现在终于实现了:只需几个简单的步骤,就可以在 Ubuntu 桌面上启用 HTML5 视频流了。**
现在Netflix更进一步提供了支持。它希望给Ubuntu带来真正的开箱即用的Netflix回放而这只需要更新**网络安全服务Network Security ServicesNSS**库就行。

### 原生Netflix好极了 ###
在一封发给Ubuntu开发者邮件列表的[邮件中][2]Netflix的Paul Adolph解释了现在的情况
> “如果NSS的版本是3.16.2或者更高的话Netflix可以在Ubuntu 14.04的稳定版Chrome中播放。如果版本超过了14.04Netflix会作出一些调整以避免用户必须对浏览器的 User-Agent 参数进行一些修改才能播放。”
[LCTT 译注此处原文是“14.02”疑是笔误应该是指Ubuntu 14.04。]
很快要发布的Ubuntu 14.10提供了更新的[NSS v3.17][3], 而目前大多数用户使用的版本 Ubuntu 14.04 LTS 提供的是 v3.15.x。
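上面的版本对比可以用一个简单的 shell 片段来演示。注意这只是一个示意,其中 `installed` 的取值是假设的,在实际的 Ubuntu 系统上可以用 `dpkg -s libnss3 | grep '^Version'` 查询当前安装的 NSS 版本:

```shell
# 示意:判断本机 NSS 版本是否满足 Netflix 所需的 3.16.2
installed="3.15.4"   # 假设值Ubuntu 14.04 LTS 目前提供的是 3.15.x
required="3.16.2"

# 利用 sort -V 做版本号比较:取两者中较小的一个
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)

if [ "$lowest" = "$required" ]; then
    echo "NSS 版本满足要求,可以原生播放"
else
    echo "NSS 版本过低,需要等待安全更新"
fi
```

`sort -V`GNU coreutils 提供)按版本号排序;如果两者中较小的一个正是所需版本,就说明本机版本已经满足要求。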
NSS是一系列支持多种安全功能的客户端和服务端应用的库包括SSLTLSPKCS和其他安全标准。为了让Ubuntu LTS用户可以尽快用上原生的HTML5 Netflix Paul 问道:
>”让一个新的NSS版本进入更新流的过程是什么或者有人可以给我提供正确的联系方式么
Netflix今年早期时在Windows 8.1和OSX Yosemite上提供了HTML5视频回放而不需要任何额外的下载或者插件。现在可以通过[加密媒体扩展][4]特性来使用。
在我们等待讨论取得进展并最终彻底解决之前,你仍然可以按照[这篇指导][5]在Ubuntu上启用HTML5版的Netflix。
**更新9/19**
本文发表后Canonical 已经确认所需版本的NSS 库会按计划在下个“安全更新”中更新,预计 Ubuntu 14.04 LTS 将在两周内得到更新。
这个新闻让 Netflix 的Paul Adolph 很高兴,作为回应,他说当软件包更新后,他将“去掉 Chrome 中回放 Netflix HTML5 视频时的User-Agent 过滤不再需要修改UA 了”。
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/09/netflix-linux-html5-nss-change-request
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/08/netflix-linux-html5-support-plugins
[2]:https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2014-September/015048.html
[3]:https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.17_release_notes
[4]:http://en.wikipedia.org/wiki/Encrypted_Media_Extensions
[5]:http://www.omgubuntu.co.uk/2014/08/netflix-linux-html5-support-plugins

View File

@ -0,0 +1,103 @@
10个 Ubuntu 用户一定要知道的博客
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/Best_Ubuntu_Blogs.jpg)
**想要了解更多关于 ubuntu 的资讯,我们应该追哪些网站呢?**
这是初学者经常会问的一个问题在这里我会告诉你们10个我最喜欢的博客这些博客可以帮助我们解决问题让我们及时了解所有 Ubuntu 版本的更新消息。不,我说的不是常见的 Linux 和 shell 脚本之类的东西,而是流畅的 Linux 桌面系统,以及普通用户的 Ubuntu 使用体验。

这些网站帮助你解决正遇到的问题,提醒你关注各种应用,并为你提供来自 Ubuntu 世界的最新消息。这些网站可以让你更了解 Ubuntu下面列出的是10个我最喜欢的博客它们涵盖了 Ubuntu 的方方面面。

### 10个Ubuntu用户一定要知道的博客 ###

从我开始在 itsfoss 网站上写作起,我特意把它排除在外,没有列入名单。我也没有把[Planet Ubuntu][1]列入名单,因为它不适合初学者。废话不多说,让我们一起来看下**最好的 Ubuntu 博客**(排名不分先后):
### [OMG! Ubuntu!][2] ###
这是一个只针对 Ubuntu 爱好者的网站。无论多小只要是和Ubuntu有关系的OMG! Ubuntu! 都会收入站内。博客内容以新闻和应用介绍为主,你也可以在这里找到一些关于 Ubuntu 的教程,但不是很多。
这个博客会让你知道 Ubuntu 世界发生的各种事情。
### [Web Upd8][3] ###
Web Upd8 是我最喜欢的博客。除了涵盖新闻它有很多容易理解的教程。Web Upd8 还维护了几个PPAs。博主[Andrei][4]有时会在评论里回答你的问题,这对你来说也会是很有帮助的。
这是一个你可以了解新闻资讯,学习教程的网站。
### [Noobs Lab][5] ###
和Web Upd8一样Noobs Lab上也有很多教程和新闻并且它的PPA中可能有最多的主题和图标集合。
如果你是个新手去Noobs Lab看看吧。
### [Linux Scoop][6] ###
大多数的博客都是“文字博客”。你通过看说明和截图来学习教程。而 Linux Scoop 上有很多录像来帮助初学者来学习,完全是一个视频博客。
比起阅读来如果你更喜欢视频Linux Scoop应该是最适合你的。
### [Ubuntu Geek][7] ###
这是一个相对比较老的博客。覆盖面很广,并且有很多快速安装的教程和说明。虽然,有时我发现其中的一些教程文章缺乏深度,当然这也许只是我个人的观点。
想要快速小贴士去Ubuntu Geek。
### [Tech Drive-in][8] ###
这个网站的更新频率好像没有以前那么快了,可能是 Manuel 在忙于他的工作,但是仍然给我们提供了很多的东西。新闻,教程,应用评论是这个博客的亮点。
这个博客的文章经常被收入 [Ubuntu 新闻邮件列表][9]Tech Drive-in 肯定是一个很值得你去学习的网站。
### [UbuntuHandbook][10] ###
快速小贴士新闻和教程是UbuntuHandbook的USP。[Ji m][11]最近也在参与维护一些PPAS。我必须很认真的说这个博客的页面其实可以做得更好看点纯属个人观点。
UbuntuHandbook 真的很方便。
### [Unixmen][12] ###
这个网站是由很多人一起维护的而且并不仅仅局限于Ubuntu它也覆盖了很多的其他的Linux发行版。它有自己的论坛来帮助用户。
紧跟 Unixmen 的步伐吧。
### [The Mukt][13] ###
The Mukt是Muktware的新化身。Muktware是一个已经停办的Linux博客它以The Mukt之名重生。Muktware曾是一个很严谨的Linux和开源博客而The Mukt涉及的主题更加广泛包括科技新闻、极客新闻有时还有娱乐新闻听起来是不是有一种混搭风的感觉The Mukt也包括很多你感兴趣的Ubuntu新闻。
The Mukt 不仅仅是一个博客,它是一种文化潮流。
### [LinuxG][14] ###
LinuxG是一个你可以找到所有“怎样安装”类型文章的站点。几乎所有的文章都以一句话开头“你好Linux geeksters正如你所知道的……”这个博客在主题的多样性上还可以做得更好。我经常发现有些文章缺乏深度写得也比较仓促但它仍然是一个关注应用及其最新版本的好地方。
这是个快速浏览新的应用和它们最新的版本好地方。
### 你还有什么好的站点吗? ###
这些就是我平时经常浏览的 Ubuntu 博客。我知道还有很多我不知道的站点,可能会比我列出来的这些更好。所以,欢迎把你最喜爱的 Ubuntu 博客写在下面评论区。
--------------------------------------------------------------------------------
via: http://itsfoss.com/ten-blogs-every-ubuntu-user-must-follow/
作者:[Abhishek][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://planet.ubuntu.com/
[2]:http://www.omgubuntu.co.uk/
[3]:http://www.webupd8.org/
[4]:https://plus.google.com/+AlinAndrei
[5]:http://www.noobslab.com/
[6]:http://linuxscoop.com/
[7]:http://www.ubuntugeek.com/
[8]:http://www.techdrivein.com/
[9]:https://lists.ubuntu.com/mailman/listinfo/ubuntu-news
[10]:http://ubuntuhandbook.org/
[11]:https://plus.google.com/u/0/+JimUbuntuHandbook
[12]:http://www.unixmen.com/
[13]:http://www.themukt.com/
[14]:http://linuxg.net/

View File

@ -0,0 +1,42 @@
Debian 8 "Jessie" 将把GNOME作为默认桌面环境
================================================================================
> Debian的GNOME团队已经取得了实质进展
<center>![The GNOME 3.14 desktop](http://i1-news.softpedia-static.com/images/news2/Debian-8-quot-Jessie-quot-to-Have-GNOME-as-the-Default-Desktop-459665-2.jpg)</center>
<center>*GNOME 3.14桌面*</center>
**Debian项目开发者花了很长一段时间来决定将XfceGNOME或一些其他桌面环境中的哪个作为默认环境不过目前看起来像是GNOME赢了。**
[我们两天前提到了][1]GNOME 3.14的软件包被上传到Debian TestingDebian 8 “Jessie”的软件仓库中这是一个令人惊喜的事情。通常情况下Debian的维护者不会这么快地添加任何类型的软件包更别说整个桌面环境了。
事实证明关于即将到来的Debian 8的发行版中所用的默认桌面的争论已经尘埃落定尽管这个词可能有点过于武断。无论什么情况下总是有些开发者想要Xfce另外一些则是喜欢 GNOME看起来 MATE 也是不少人的备选。
### 最有可能的是GNOME将Debian 8“Jessie” 的默认桌面环境###
我们之所以说“最有可能”是因为尚未达成一致意见但看起来GNOME已经遥遥领先了。Debian的维护者和开发者乔伊·赫斯Joey Hess解释了为什么会这样。
“根据从 https://wiki.debian.org/DebianDesktop/Requalification/Jessie 初步结果看一些所需数据尚不可用但在这一点上我百分之八十地确定GNOME已经领先了。特别是由于“辅助功能”和某些“systemd”整合的进度。在辅助功能方面Gnome和Mate都领先了一大截。其他一些桌面的辅助功能改善了在Debian上的支持部分原因是这一过程推动的但仍需要上游大力支持。“
“Systemd /etc 整合方面XfceMate等尽力追赶在这一领域正在发生的变化当技术团队停止了修改之后希望有时间能在冻结期间解决这些问题。所以这并不是完全否决这些桌面但要从目前的状态看GNOME是未来的选择“乔伊·赫斯[补充说][2]。
开发者在邮件中表示在Debian的GNOME团队对他们所维护的项目[充满了激情][3]而Debian的Xfce的团队是决定默认桌面的实际阻碍。
无论如何Debian 8 “Jessie”还没有具体的发布时间也没有迹象显示何时会发布。另一方面GNOME 3.14已经发布了也许你已经看到新闻了它很快就能为进入Debian测试版做好准备。
我们也应该感谢Debian中GNOME包的维护者之一Jordi Mallach他为我们提供了正确的信息。
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Debian-8-quot-Jessie-quot-to-Have-GNOME-as-the-Default-Desktop-459665.shtml
作者:[Silviu Stahie][a]
译者:[fbigun](https://github.com/fbigun)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://news.softpedia.com/news/Debian-8-quot-Jessie-quot-to-Get-GNOME-3-14-459470.shtml
[2]:http://anonscm.debian.org/cgit/tasksel/tasksel.git/commit/?id=dce99f5f8d84e4c885e6beb4cc1bb5bb1d9ee6d7
[3]:http://news.softpedia.com/news/Debian-Maintainer-Says-that-Xfce-on-Debian-Will-Not-Meet-Quality-Standards-GNOME-Is-Needed-454962.shtml

View File

@ -0,0 +1,29 @@
Red Hat Enterprise Linux 5产品线终结
================================================================================
2007年3月红帽公司首次宣布了它的[Red Hat Enterprise Linux 5][1]RHEL平台。虽然如今看来稀松平常但RHEL 5特别显著的一点是它是红帽公司第一个强调虚拟化的主要发行版本而虚拟化如今已是现代发行版广泛具备的特性。
最初的计划是为RHEL 5提供七年的寿命但在2012年该计划改变了红帽为RHEL 5[扩展][2]至10年的标准支持。
刚刚过去的这个星期红帽发布了RHEL 5.11这是RHEL 5.x系列的最后一个次要版本。红帽现在进入了将持续三年的名为“production 3”的支持周期。在这一阶段将不再有新功能添加到平台中红帽公司只提供有重大影响的安全修复程序和紧急优先级的bug修复。
平台事业部副总裁兼总经理Jim Totton在红帽公司在一份声明中说“红帽公司致力于建立一个长期稳定的产品生命周期这将给那些依赖Red Hat Enterprise Linux为他们的关键应用服务的企业客户提供关键的益处。虽然RHEL 5.11是RHEL 5平台的最终次要版本但它提供了安全性和可靠性方面的增强功能以保持该平台接下来几年的活力。”
新的增强功能包括安全性和稳定性更新,包括改进了红帽帮助用户调试系统的方式。
还有一些新的存储的驱动程序以支持新的存储适配器和改进在VMware ESXi上运行RHEL的支持。
在安全方面的巨大改进是OpenSCAP更新到版本1.0.8。红帽在2011年五月的[RHEL5.7的里程碑更新][3]中第一次支持了OpenSCAP。 OpenSCAP是安全内容自动化协议SCAP框架的开源实现用于创建一个标准化方法来维护安全系统。
--------------------------------------------------------------------------------
via: http://www.linuxplanet.com/news/end-of-the-line-for-red-hat-enterprise-linux-5.html
作者Sean Michael Kerner
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.internetnews.com/ent-news/article.php/3665641
[2]:http://www.serverwatch.com/server-news/red-hat-extends-linux-support.html
[3]:http://www.internetnews.com/skerner/2011/05/red-hat-enterprise-linux-57-ad.html

View File

@ -0,0 +1,39 @@
KDE Plasma 5的第二个bug修复版本发布带来了很多的改变
================================================================================
> 新的Plasma 5版本发布了带来了新的外观
 <center>![KDE Plasma 5](http://i1-news.softpedia-static.com/images/news2/Second-Bugfix-Release-for-KDE-Plasma-5-Arrives-with-Lots-of-Changes-459688-2.jpg)</center>
<center>*KDE Plasma 5*</center>
### Plasma 5的第二个bug修复版本发布已可下载###
KDE Plasma 5的bug修复版本不断到来这个新的桌面体验将成为KDE生态系统的重要组成部分。
[公告][1]称“plasma-5.0.2这个版本新增了一个月以来来自KDE的贡献者新的翻译和修订。Bug修复通常是很小但是很重要如修正未翻译的文字使用正确的图标和修正KDELibs 4软件的文件重复现象。它还增加了一个月以来辛勤的翻译成果使其支持其他更多的语言”
这个桌面环境还没有在任何Linux发行版中默认安装在测试完成之前这种情况还会持续一段时间。
开发者还解释说更新的软件包可以在Kubuntu Plasma 5的开发版本中进行审查。
如果你个人需要它们,你也可以下载源码包。
- [KDE Plasma Packages][2]
- [KDE Plasma Sources][3]
如果你决定去编译它你需要知道KDE Plasma 5.0.2是一组复杂的软件,可能你需要解决不少问题。
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Second-Bugfix-Release-for-KDE-Plasma-5-Arrives-with-Lots-of-Changes-459688.shtml
作者:[Silviu Stahie][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://kde.org/announcements/plasma-5.0.2.php
[2]:https://community.kde.org/Plasma/Packages
[3]:http://kde.org/info/plasma-5.0.2.php

View File

@ -1,37 +0,0 @@
Microsoft Lobby Denies the State of Chile Access to Free Software
================================================================================
![Fuerza Chile](http://i1-news.softpedia-static.com/images/news2/Microsoft-Lobby-Denies-the-State-of-Chile-Access-to-Free-Software-455598-3.jpg)
Fuerza Chile
Fresh on the heels of the entire Munich and Linux debacle, another story involving Microsoft and free software has popped up across the world, in Chile. A prolific magazine from the South American country says that the powerful Microsoft lobby managed to turn around a law that would allow the authorities to use free software.
The story broke out from a magazine called El Sábado de El Mercurio, which explains in great detail how the Microsoft lobby works and how it can overturn a law that may harm its financial interests.
An independent member of the Chilean Parliament, Vlado Mirosevic, pushed a bill that would allow the state to consider free software when the authorities needed to purchase or renew licenses. The state of Chile pays $2.7 billion (€2 billion) on licenses from various companies, including Microsoft.
According to [ubuntizando.com][1], Microsoft representatives met with Vlado Mirosevic shortly after he announced his intentions, but the bill passed the vote, with 64 votes in favor, 12 abstentions, and one vote against it. That one vote was cast by Daniel Farcas, a member of a Chilean party.
A while later, the same member of the Parliament, Daniel Farcas, proposed another bill that actually nullified the effects of the previous one that had just been adopted. To make things even more interesting, some of the people who voted in favor of the first law also voted in favor of the second one.
The new bill is even more egregious, because it aggressively pushes for the adoption of proprietary software. Companies that choose to use proprietary software will receive certain tax breaks, which makes it very hard for free software to get adopted.
Microsoft has been in the news in the last few days because the [German city of Munich that adopted Linux][2] and dropped Windows system from its administration was considering, supposedly, returning to proprietary software.
This new situation in Chile give us a sample of the kind of pull a company like Microsoft has and it shows us just how fragile laws really are. This is not the first time a company tries to bend the laws in a country to maximize the profits, but the advent of free software and the clear financial advantages that it offers are really making a dent.
Five years ago, few people or governments would have considered adopting free software, but the quality of that software has risen dramatically and it has become a real competition [for the likes of Microsoft][3].
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Microsoft-Lobby-Denies-the-State-of-Chile-Access-to-Free-Software-455598.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://www.ubuntizando.com/2014/08/20/microsoft-chile-y-el-poder-del-lobby/
[2]:http://news.softpedia.com/news/Munich-Disappointed-with-Linux-Plans-to-Switch-Back-to-Windows-455405.shtml
[3]:http://news.softpedia.com/news/Munich-Switching-to-Windows-from-Linux-Is-Proof-that-Microsoft-Is-Still-an-Evil-Company-455510.shtml

View File

@ -1,40 +0,0 @@
Transport Tycoon Deluxe Remake OpenTTD 1.4.2 Is an Almost Perfect Sim
================================================================================
![Transport Tycoon](http://i1-news.softpedia-static.com/images/news2/Transport-Tycoon-Deluxe-Remake-OpenTTD-1-4-2-Is-an-Almost-Perfect-Sim-455715-2.jpg)
Transport Tycoon
**OpenTTD 1.4.2, an open source simulation game based on the popular Microprose title Transport Tycoon written by Chris Sawyer, has been officially released.**
Transport Tycoon is a very old game that was originally launched back in 1995, but it made such a huge impact on the gaming community that, even almost 20 years later, it still has a powerful fan base.
In fact, Transport Tycoon Deluxe had such an impact on the gaming industry that it managed to spawn an entire generation of similar games and it has yet to be surpassed by any new title, even though many have tried.
Despite the aging graphics, the developers of OpenTTD have tried to provide new challenges for the fans of the original games. To put things into perspective, the original game is already two decades old. That means that someone who was 20 years old back then is now in his forties and he is the main audience for OpenTTD.
"OpenTTD is modelled after the original Transport Tycoon game by Chris Sawyer and enhances the game experience dramatically. Many features were inspired by TTDPatch while others are original," reads the official announcement.
OpenTTD features bigger maps (up to 64 times in size), stable multiplayer mode for up to 255 players in 15 companies, a dedicated server mode and an in-game console for administration, IPv6 and IPv4 support for all communication of the client and server, new pathfinding algorithms that makes vehicles go where you want them to, different configurable models for acceleration of vehicles, and much more.
According to the changelog, awk is now used instead of trying to convince cpp to preprocess nfo files, CMD_CLEAR_ORDER_BACKUP is no longer suppressed by pause modes, the Wrong breakdown sound is no longer played for ships, integer overflow in the acceleration code is no longer causing either too low acceleration or too high acceleration, incorrectly saved order backups are now discarded when clients join, and the game no longer crashes when trying to show an error about vehicle in a NewGRF and the NewGRF was not loaded at all.
Also, the Slovak language no longer uses space as group separator in numbers, the parameter bound checks are now tighter on GSCargoMonitor functions, the days in dates are not represented by ordinal numbers in all languages, and the incorrect usage of string commands in the base language has been fixed.
Check out the [changelog][1] for a complete list of updates and fixes.
Download OpenTTD 1.4.2:
- [http://www.openttd.org/en/download-stable][2]
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Transport-Tycoon-Deluxe-Remake-OpenTTD-1-4-2-Is-an-Almost-Perfect-Sim-455715.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://ftp.snt.utwente.nl/pub/games/openttd/binaries/releases/1.4.2/changelog.txt
[2]:http://www.openttd.org/en/download-stable

View File

@ -1,41 +0,0 @@
[sailing]
Munich Council: LiMux Demise Has Been Greatly Exaggerated
================================================================================
![LiMux, Munich City Council's Official OS](http://www.omgubuntu.co.uk/wp-content/uploads/2014/07/limux-4-kde-desktop.jpg)
LiMux, Munich City Council's Official OS
A Munich city council spokesman has attempted to clarify the reasons behind its [plan to re-examine the role of open-source][1] software in local government IT systems.
The response comes after numerous German media outlets revealed that the city's incoming mayor has asked for a report into the use of LiMux, the open-source Linux distribution used by more than 80% of municipalities.
Reports quoted an unnamed city official, who claimed employees were suffering from having to use open-source software. Others called it an expensive failure, with the deputy mayor, Josef Schmid, saying the move was driven by ideology, not financial prudence.
With Munich often viewed as the poster child for large Linux migrations, news of the potential renege quickly went viral. Now council spokesman Stefan Hauf has attempted to bring clarity to the situation.
### Plans for the future ###
Hauf confirms that the city's new mayor has requested a review of the city's IT systems, including its choice of operating systems. But the report is not, as implied in earlier reports, solely tasked with deciding whether to return to using Microsoft Windows.
**“It's about the organisation, the costs, performance and the usability and satisfaction of the users,”** [Techrepublic][2] quotes him as saying.
**“[It's about gathering the] facts so we can decide and make a proposal for the city council how to proceed in future.”**
Hauf also confirms that council staff have, and do, complain about LiMux, but that the majority of issues stem from compatibility issues in OpenOffice, something a potential switch to LibreOffice could solve.
So is Munich about to switch back to Windows? As we said in our original coverage: it's just too early to say, but it's not being ruled out.
No final date for the report's recommendations is yet set, and any binding decision on Munich's IT infrastructure will need to be made by its elected members, the majority of whom are said to support the LiMux initiative.
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/08/munich-council-say-talk-limux-demise-greatly-exaggerated
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/08/munich-city-linux-switching-back-windows
[2]:http://www.techrepublic.com/article/no-munich-isnt-about-to-ditch-free-software-and-move-back-to-windows/

View File

@ -1,31 +0,0 @@
Red Hat Shake-up, Desktop Users, and Outta Time
================================================================================
![](https://farm4.staticflickr.com/3839/15058131052_b5e86dce3e_t.jpg)
Our top story tonight is the seemingly sudden resignation of Red Hat CTO Brian Stevens. In other news, John C. Dvorak says "Linux has run out of time" and Infoworld.com says there may be problems with Red Hat Enterprise 7. OpenSource.com has a couple of interesting interviews and Nick Heath has five big names that use Linux on the desktop.
**In a late afternoon** [press release][1], Red Hat announced the resignation of long-time CTO Brian Stevens. Paul Cormier will be handling CTO duties until Stevens' replacement is named. No reason for the sudden resignation was given although CEO Whitehurst said, "We want to thank Brian for his years of service and numerous contributions to Red Hats business. We wish him well in his future endeavors." However, Steven J. Vaughan-Nichols says some rumors are flying. One says friction between Stevens and Cormier caused the resignation and others say Stevens had higher ambitions than Red Hat could provide. He'd been with Red Hat since 2001 and had been CTO at Mission Critical Linux before that [according to Vaughan-Nichols][2] who also said Stevens' Red Hat page was gone within seconds of the announcement.
**Speaking of Red Hat**, InfoWorld.com has a review of RHEL 7 available to the general public today. Reviewer Paul Venezia runs down the new features, but soon mentions systemd as one of the many new features "certain to cause consternation." After offering his opinion on several other key features and even throwing in a tip or two, [Venezia concludes][3], "RHEL 7 is a fairly significant departure from the expected full-revision release from Red Hat. This is not merely a reskinning of the previous release with updated packages, a more modern kernel, and some new toolkits and widgets. This is a very different release than RHEL 6 in any form, mostly due to the move to Systemd."
**Our own Sam Dean** [today said][4] that Linux doesn't need to own the desktop because of its success in many other key areas. While that may be true, Nick Heath today listed "five big names that use Linux on the desktop." He said besides Munich, there's Google for one and they even have their own Ubuntu derivative. He lists a couple of US government agencies and then mentions CERN and others. See that [full story][5] for more.
Despite that feel-good report, John C. Dvorak said he's tired of waiting for someone to develop that one "killer app" that would bring in the masses or satisfy his needs. [He says][6] he has to make podcasts and "photographic art" and he just can't do that with Linux. Our native applications "do not cut it in the end."
--------------------------------------------------------------------------------
via: http://ostatic.com/blog/red-hat-shake-up-desktop-users-and-outta-time
作者:[Susan Linton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://ostatic.com/member/susan-linton
[1]:http://www.businesswire.com/news/home/20140827006134/en/Brian-Stevens-Step-CTO-Red-Hat#.U_5AlvFdX0p
[2]:http://www.zdnet.com/red-hat-chief-technology-officer-resigns-7000033058/
[3]:http://www.infoworld.com/d/data-center/review-rhel-7-lands-jolt-249219
[4]:http://ostatic.com/blog/linux-doesnt-need-to-own-the-desktop
[5]:http://www.techrepublic.com/article/five-big-names-that-use-linux-on-the-desktop/
[6]:http://www.pcmag.com/article2/0,2817,2465125,00.asp

View File

@ -1,37 +0,0 @@
LibreOffice 4.3.1 Has More than 100 Fixes and DOCX Embedded Objects Support
================================================================================
![LibreOffice selection menu](http://i1-news.softpedia-static.com/images/news2/LibreOffice-4-3-1-Has-More-Than-100-Fixes-and-DOCX-Embedded-Objects-Support-456916-2.jpg)
LibreOffice selection menu
**The Document Foundation announces that the stable version of LibreOffice 4.3.1 has been released and is now available for download.**
The developers from The Document Foundation have released a new update for the 4.3 branch of LibreOffice and they have implemented quite a few fixes and other various changes. The development cycle for this latest update has been rather short and the devs managed to repair most of the issues that have been found.
LibreOffice 4.3.1 is just a maintenance release, which means that the focus has been on the bugs found so far. Don't expect to find anything extraordinary, but you should upgrade the software nonetheless.
"The Document Foundation announces LibreOffice 4.3.1, the first minor release of LibreOffice 4.3 'fresh' family, with over 100 fixes (including patches for two CVEs, backported to LibreOffice 4.2.6-secfix, which is also available for download now)."
"All LibreOffice users are invited to update their installation as soon as possible to avoid security issues. This includes users who are running LibreOffice 4.2.6 as originally released on August, 5th 2014. LibreOffice 4.3.1 and LibreOffice 4.2.6 will be shown on stage at the LibreOffice Conference in Bern, from September 3 to September 5, with a large number of sessions about development, community, marketing and migrations," reads the announcement made by The Linux Foundation.
According to the changelog, editing the text search with expanded fields is now working properly, the static value array for OOXML chart is now handled correctly, bullets now have the color as the following text by default, ww8import no longer creates a pagedesc if a continuous section changes margins, the 0 font height is now handled just like outdev, it's now possible to import OLE objects in the header with background wrapping, the XLSX export of revisions has been fixed in order to get it to work in Excel, and borders around data labels are now supported.
Also, the table style for lastRow is now correctly applied, the rulers now have app-background by default, the graphics are now swapped in on DrawingML::WriteImage, the redundant 'Preferences' label has been removed in order to save some space, page breaks in tables are now ignored during the RTF import, some of the style hierarchy has been reworked, Data Statistics no longer crashes with any entry, DOCX embedded objects are now supported, and numerous other improvements have been made.
More details about this release can be found in the official [announcement][1].
- [Download LibreOffice 4.3.1 for Linux][2]
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/LibreOffice-4-3-1-Has-More-Than-100-Fixes-and-DOCX-Embedded-Objects-Support-456916.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://blog.documentfoundation.org/2014/08/28/libreoffice-4-3-1-fresh-announced/
[2]:http://www.libreoffice.org/download/libreoffice-fresh/

View File

@ -1,37 +0,0 @@
Jelly Conky Adds Simple, Stylish Stats To Your Linux Desktop
================================================================================
**I treat Conky setups a bit like wallpapers: I'll find one I love, only to change it the next week because I'm bored of it and want a change.**
Part of the impatience is fuelled by the ever-growing catalog of designs available. One of my most recent favourites is Jelly Conky.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/jelly-conky.png)
Jelly Conky sports the minimal design many of the Conkys we've highlighted recently have followed. It's not trying to be a kitchen sink. It won't win favour with those who need constant at-a-glance data on their HDD temperatures and IP addresses.
It comes with three distinct modes that can all add personality to an otherwise static background image:
- Clock
- Clock plus date
- Clock plus date and weather
Some people don't understand the point of having a duplicate clock on show on the desktop. That's understandable. For me, it's more about form than function (though, personally, I find Conky clocks easier to see than the minuscule digits nestled in my upper panel).
Chances are if you have a home screen widget on Android with the time, you won't mind having one on your desktop, either!
You can download Jelly Conky from the link below. The .zip archive contains a readme with instructions on how to install. For a guided walkthrough, [revisit one of our previous articles][1].
- [Download Jelly Conky on Deviant Art][2]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/09/jelly-conky-for-linux-desktop
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/07/conky-circle-theme-nod-lg-quick-cover
[2]:http://zagortenay333.deviantart.com/art/Jelly-Conky-442559003

View File

@ -1,43 +0,0 @@
alim0x translating
GNOME Control Center 3.14 RC1 Corrects Lots of Potential Crashes
================================================================================
![GNOME Control Center in Arch Linux](http://i1-news.softpedia-static.com/images/news2/GNOME-Control-Center-3-14-RC1-Correct-Lots-of-Potential-Crashes-458986-2.jpg)
GNOME Control Center in Arch Linux
**GNOME Control Center, GNOME's main interface for the configuration of various aspects of your desktop, has been updated to version 3.14 RC1, along with a lot of the packages from the GNOME stack.**
The GNOME Control Center is a piece of software that's actually very important in the GNOME ecosystem, although not all users are aware of its existence. This is the part that takes care of all the settings in an OS powered by GNOME, as you can see from the screenshot.
It's not something that's usually advertised and it's actually one of the few packages in the GNOME stack that doesn't have the same name as source and as implementation. The source package is called GNOME Control Center, but users will usually see Settings or System Settings, depending on what the developers choose.
### What's new in GNOME Control Center 3.14 RC1 ###
According to the changelog, libgd has been updated in order to fix the GdNotification theming, the background chooser dialog is no longer resizing when switching views, a stack with three views is now used for the chooser dialog, a memory leak in Flickr support has been fixed, the hard-coded font size is no longer used for the Date & Time, a crash that occurred if the WM changed (or restarted) has been fixed, a possible crash that occurred when wireless-enabled was changing has been fixed, and more potential crashers for WWAN have been corrected.
Also, the hotspot is now running only if the device is active, all of the virtualization bridges are now hidden, the underlying device for VPN connections is no longer shown, the empty folder list is no longer shown by default, various UI padding issues have been fixed, the focus is now returned in the account dialog, a crash that occurred when setting year to 0 has been fixed, the "Wi-Fi hotspot" properties are now centered, a warning provided on startup with the hotspot enabled has been fixed, and an error is now provided when turning on the hotspot fails.
A complete list of changes, updates, and bug fixes can be found in the official [changelog][1].
You can download GNOME Control Center 3.14 RC1:
- [tar.xz (3.12.1 Stable)][2][sources] [6.50 MB]
- [tar.xz (3.14 RC1 Development)][3][sources] [6.60 MB]
This is just the source package and you will have to compile it yourself in order to test it. Unless you really know what you are doing, you should wait until the complete GNOME stack is made available through the repositories.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/GNOME-Control-Center-3-14-RC1-Correct-Lots-of-Potential-Crashes-458986.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://ftp.acc.umu.se/pub/GNOME/sources/gnome-control-center/3.13/gnome-control-center-3.13.92.news
[2]:http://ftp.acc.umu.se/pub/GNOME/sources/gnome-control-center/3.12/gnome-control-center-3.12.1.tar.xz
[3]:http://ftp.acc.umu.se/pub/GNOME/sources/gnome-control-center/3.13/gnome-control-center-3.13.92.tar.xz


@ -1,36 +0,0 @@
Another Italian City Says Goodbye To Microsoft Office, Will Switch To OpenOffice Soon
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/08/Turin_Open_Source.jpg)
It seems [Open Source][1] adoption is the latest fad in European countries. Just last month we heard that [Turin became the first Italian city to officially opt for Open Source products][2]. Another city in north-west Italy, [Udine][3], has also announced that it is ditching Microsoft Office and will migrate to [OpenOffice][4].
Udine has a population of 100,000 and the administration has around 900 computers running Microsoft Office as their default productivity suite. As per the [budget document][5], the migration will start somewhere around December with 80 new computers, followed by the migration of the older computers to OpenOffice.
The migration is estimated to save licensing fees which would otherwise have cost around Euro 400 per computer, for a total of Euro 360,000. But saving money is not the only goal of this migration; getting regular software updates is also one of the factors.
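The quoted saving is simple arithmetic; a trivial sketch just to make the figures from the paragraph above explicit:

```python
# Udine's estimated saving: ~900 machines at roughly EUR 400 licence fee each.
computers = 900
licence_fee_eur = 400
total_saving_eur = computers * licence_fee_eur
print(total_saving_eur)  # 360000
```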
Of course the transition from Microsoft Office to OpenOffice won't be smooth. Keeping this in mind, the municipality is planning training sessions for at least the first few employees who will get the new machines with OpenOffice.
As I stated earlier, this seems to be a trend in Europe. [French city Toulouse saved a million euro with LibreOffice][6] earlier this year, along with the [Canary Islands in Spain][7]. The Swiss city of [Geneva has also shown signs of Open Source adoption][8]. In other parts of the world, government organizations in the Indian states of [Tamil Nadu][9] and Kerala have also ditched Microsoft for Open Source.
I think the demise of Windows XP, along with the sluggish economy, has been a boon for Open Source. Whatever the reason may be, I am happy to see this list growing. What about you?
--------------------------------------------------------------------------------
via: http://itsfoss.com/udine-open-source/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://itsfoss.com/category/open-source-software/
[2]:http://itsfoss.com/italian-city-turin-open-source/
[3]:http://en.wikipedia.org/wiki/Udine
[4]:https://www.openoffice.org/
[5]:http://www.comune.udine.it/opencms/opencms/release/ComuneUdine/comune/Rendicontazione/PEG/PEG_2014/index.html?lang=it&style=1&expfolder=???+NavText+???
[6]:http://itsfoss.com/french-city-toulouse-saved-1-million-euro-libreoffice/
[7]:http://itsfoss.com/canary-islands-saves-700000-euro-open-source/
[8]:http://itsfoss.com/170-primary-public-schools-geneva-switch-ubuntu/
[9]:http://itsfoss.com/tamil-nadu-switches-linux/


@ -1,42 +0,0 @@
Translating-----geekpi
Netflix Offers to Work with Ubuntu to Bring Native Playback to All
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/netflix-ubuntu.jpg)
**We saw [last month just how close native Netflix support for Linux is][1] to arriving, with now only a few simple steps required to enable HTML5 video streaming on Ubuntu desktops.**
Now Netflix wants to go one step further. It wants to bring true, out-of-the-box Netflix playback to all Ubuntu users. And all it requires is an update to the **Network Security Services** library.
### Netflix Natively? Neato. ###
In [an e-mail][2] sent to the Ubuntu Developer mailing list, Netflix's Paul Adolph explains the current situation:
> “Netflix will play with Chrome stable in 14.02 if NSS version 3.16.2 or greater is installed. If this version is generally installed across 14.02, Netflix would be able to make a change so users would no longer have to hack their User-Agent to play.”
While the upcoming release of Ubuntu 14.10 offers the newer [NSS v3.17][3], Ubuntu 14.04 LTS — used by the majority of users — currently offers v3.15.x.
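The version gap is easy to check programmatically. A minimal sketch (the 3.16.2 floor comes from the e-mail; the function name and sample versions are illustrative):

```python
# Does an installed NSS version meet the 3.16.2 floor Netflix mentions?
def nss_ok(version, required=(3, 16, 2)):
    parts = tuple(int(p) for p in version.split("."))
    # pad short versions so "3.17" compares as (3, 17, 0)
    parts += (0,) * (len(required) - len(parts))
    return parts >= required

print(nss_ok("3.17"))    # the NSS shipped in Ubuntu 14.10 -> True
print(nss_ok("3.15.4"))  # a 3.15.x version like 14.04's -> False
```

Tuple comparison gives the usual lexicographic version ordering, so no third-party version library is needed for a quick check.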
NSS is a set of libraries that supports a range of security-enabled client and server applications, including SSL, TLS, PKCS and other security standards. Keen to enable native HTML5 Netflix for Ubuntu LTS users, Paul asks:
> “What is the process of getting a new NSS version into the update stream? Or can somebody please provide me with the right contact?”
Netflix began offering HTML5 video playback on Windows 8.1 and OS X Yosemite earlier this year, negating the need for any extra downloads or plugins. The switch has been made possible through the [Encrypted Media Extension][4] spec.
While we wait for the discussions to move forward (and hopefully solve it for all) you can still “hack” HTML5 Netflix on Ubuntu by [following our guide][5].
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/09/netflix-linux-html5-nss-change-request
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/08/netflix-linux-html5-support-plugins
[2]:https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2014-September/015048.html
[3]:https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.17_release_notes
[4]:http://en.wikipedia.org/wiki/Encrypted_Media_Extensions
[5]:http://www.omgubuntu.co.uk/2014/08/netflix-linux-html5-support-plugins


@ -1,3 +1,4 @@
Translating by ZTinoZ
Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development
================================================================================
> Red Hat jumps into the mobile development sector with a key acquisition.


@ -1,33 +0,0 @@
Red Hat's CEO Sees Open Source Cloud Domination
================================================================================
Red Hat CEO Jim Whitehurst sees the business opportunity of a generation in what he calls a computing paradigm shift from client server to cloud architectures. “In those paradigm shifts, generally new winners emerge,” says Whitehurst and he intends to make sure Red Hat is one of those winners. His logic is sound and simple: disruptive technologies like the cloud that arise every couple decades level the playing field between large, established firms and smaller, innovative challengers since everyone, from corporate behemoth to a couple guys in a garage, starts from the same spot and must play by the same unfamiliar and changeable rules. With the cloud “there's less of an installed base and an opportunity for new winners to be chosen,” Whitehurst adds. His mission is “to see that open source is the default choice for next generation architecture” and that Red Hat is the preferred choice, particularly for enterprise IT, of open source providers.
The case for open source dominating the cloud rests on the fact that it's already the foundation for many popular cloud services and enterprise applications. Whitehurst aptly notes that outside of Microsoft Azure, the underlying infrastructure of all the major public cloud services is built upon open source software. Furthermore, software like Linux, Apache, MySQL, WordPress and many others are already widely used and trusted by most enterprises. “In many cases [open source] already is the default choice for next generation architectures, but it hasn't fully driven itself through the traditional enterprise data center,” he says. Cloud software is the next and most important software category up for open source disruption.
![](http://blogs-images.forbes.com/kurtmarko/files/2014/06/redhat-logo.jpg)
Yet open source is still saddled with a reputation for widely variable software quality and support, something the recent OpenSSL Heartbleed bug only reinforced. However Whitehurst contends that strong enterprise adoption of Red Hat's Linux distribution and its training and skills certification programs lends credibility to a similar plan for the cloud: [Red Hat's Cloud Partner Program][1]. He believes such insurance policies alleviate enterprise IT's fears of adopting open source software for both internal, private clouds and external public cloud services. Red Hat wants its imprimatur to be the Good Housekeeping seal of approval for open source in general and cloud software in specific, namely IT's assurance that their applications will work and the service is trustworthy and reliable.
Red Hat's strategy to make open source clouds safe for the enterprise mirrors the one it used to break into the market for enterprise server software. There, “Job one for Red Hat is making sure our operating system and layers above that work well on anyone's infrastructure underneath,” says Whitehurst. Red Hat is applying this same model of polishing, integrating and supporting open source software to cloud stacks. “One of the most important parts about cloud, public, private or hybrid, is a sense that you can confidently run your applications,” says Whitehurst and he believes Red Hat's track record on Linux and other open source products will carry over to make Red Hat “the enterprise choice” for cloud architectures.
### Cloud isn't just virtualization 2.0 ###
One of the conundrums for OpenStack advocates like Whitehurst is the entrenchment of Microsoft and VMware in the enterprise market. Although virtual servers are a prerequisite for clouds, they're not sufficient. Countering the notion that enterprise clouds are just a natural extension of virtualized servers and storage, Whitehurst argues that by setting new rules for infrastructure and application design, cloud infrastructure is more than just the natural evolution of server virtualization.
![](http://blogs-images.forbes.com/kurtmarko/files/2014/06/RH_NEXT_HS-JIM-W-01.jpg)
Whitehurst draws an important distinction between traditional client-server and cloud-optimized applications. “One of the big questions will be how much of this [cloud adoption] is moving traditional Windows workloads, which frankly were written as stateful apps in the first place. [Instead] are we talking about a new generation of applications that are actually built with elasticity and scalability in mind.” Whitehurst clearly believes cloud infrastructure is much more appropriate for the latter and that in such Greenfield scenarios, OpenStack and other open source software have established themselves as the preferred platform. Contrasting OpenStack, based on the Linux KVM hypervisor, with VMware or Microsoft using their proprietary virtual machine platforms, Whitehurst says, “Longer term, nobody really cares what the hypervisor is, you just expect it to work and bluntly, as long as Red Hat supports you on it, why do you have to care,” adding “more and more, you'll see the hypervisor mattering less and less.” Of course, VMware and Microsoft probably agree, both having moved their energies to building more sophisticated management platforms and making the hypervisor a baseline feature.
But in Whitehurst's view of the world, traditional virtualization platforms like VMware or Microsoft Hyper-V are legacy infrastructure designed for yesterday's client-server software, not the sort of distributed, rapidly relocatable, elastically scalable applications that define the era of big data, SaaS and social software. “I'm not sure what good you get out of putting Exchange on a cloud,” he quips. Instead, he says this new generation of cloud-optimized applications are the sweet spot for OpenStack. According to Whitehurst, “If you look at where most new applications are getting built, and therefore where so much of the innovation around languages, frameworks and management paradigms are happening, it's around an open infrastructure.” But there's obviously some selection bias in Whitehurst's account, as he lives in an open source world where it's easy to be unaware, overlook or ignore the innovation happening on proprietary cloud platforms like Azure, AWS and vCloud.
In sum, Whitehurst hopes and expects OpenStack to do to VMware what Linux did to Windows: to become the first choice of cloud-savvy startups and if not the default choice, at least an accepted and respectable alternative within the enterprise. In my next column I'll explain that even for an open source champion like Whitehurst, OpenStack versus VMware vCloud or Microsoft Azure isn't an either/or choice and how he sees the fundamental notion of cloud computing as based on virtual machines as a design model likely to change.
--------------------------------------------------------------------------------
via: http://www.forbes.com/sites/kurtmarko/2014/06/08/red-hat-ceo-open-source-clouds/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.redhat.com/partners/become/cloud/


@ -1,27 +0,0 @@
Linux hiring frenzy: Why open source devs are being bombarded with offers to jump ship
================================================================================
> Summary: Figures from the Linux Foundation suggest skills shortages across disciplines and throughout Europe.
Nine out of ten (87 percent) of hiring managers in Europe have "hiring Linux talent" on their list of priorities and almost half (48 percent) say they are looking to hire people with Linux skills within the next six months.
But while they either need or want to hire more people with Linux skills, the data from the Linux Foundation suggests that this is easier said than done. Almost all — 93 percent — of the managers surveyed said they were having difficulty finding IT professionals with the Linux skills required and a quarter (25 percent) said they have "delayed projects as a result".
All of this makes it a good time to be a Linux expert.
Seven out of 10 Europe-based Linux professionals have received calls where they were pitched new positions in the past six months, and a third said they had received more calls than in the previous six months. One in three are looking to move anyway, and over half of them said it would be fairly or very easy. Salary is the biggest reason to move jobs, followed by work-life balance and the chance to gain additional skills.
Employers are trying harder to keep hold of staff too: In the past six months, 29 percent of Linux professionals say they have been offered a higher salary by their current employers, while a quarter said they've been offered a flexible work schedule and one in five have been extended additional training opportunities or certification.
The Linux Foundation, a non-profit organisation which supports the growth of Linux, and Dice Holdings, which provides career sites for technology professionals, produced the research which covers Europe and the US.
In terms of specific skills, organisations are looking for people with developer (69 percent) and enterprise management (51 percent) skills. These are followed by 32 percent of respondents who are looking for people with a combination of development and operations skills (DevOps), and 19 percent who want management/IT management skills.
The Linux Job Report has been produced for the last three years by the Linux Foundation and Dice but this is the first time that a specific report on European skills has been separated out of the worldwide report. Some 893 Linux professionals responded to the survey across Europe.
--------------------------------------------------------------------------------
via: http://www.zdnet.com/linux-hiring-frenzy-why-open-source-devs-are-being-bombarded-with-offers-to-jump-ship-7000030418/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -1,61 +0,0 @@
The Companies That Support Linux: Rackspace
================================================================================
[![](http://www.linux.com/images/stories/41373/Paul-Voccio-Rackspace.jpg)][1]
[Rackspace][1] has lately been in the news for its stock market gains and a [potential acquisition][2]. But over the past 16 years the company has become well known, first as a web hosting provider built on Linux and open source, and later as a [pioneer of the open source cloud][3] and founder of the OpenStack cloud platform.
In May, Rackspace became a [Xen Project][4] member and was one of [three companies to join the Linux Foundation][5] as a corporate member, along with CoreOS and Cumulus Networks.
“Many of the applications and infrastructure that we need to run for internal use or for customers run best on Linux,” said Paul Voccio, Senior Director of Software Development at Rackspace, via email. “This includes all the popular language frameworks and open virtualization platforms such as Xen, LXC, KVM, Docker, etc.”
In this Q&A, Voccio discusses the role of Rackspace in the cloud, how the company uses Linux, why they joined the Linux Foundation, as well as current trends and future technologies in the data center.
### Linux.com: What is Rackspace? ###
Paul Voccio: Rackspace is the managed cloud specialist and founder of OpenStack, the open-source operating system for the cloud. Hundreds of thousands of customers look to Rackspace to deliver the best-fit hybrid cloud solutions for their IT needs, leveraging a product and services portfolio that allows workloads to run where they perform best whether on the public cloud, private cloud, dedicated servers, or a combination of platforms.
As a managed cloud pioneer, we give our customers 24x7 access to cloud engineers for everything from planning and architecting to building and operating clouds through our award-winning Fanatical Support®. We help customers successfully architect, deploy and run their most critical applications. Or, more plainly put, we're cloud specialists so you don't have to be. We are headquartered in San Antonio, Texas, and operate a global support and engineering organization with data centers on four continents.
### How and why do you use Linux? ###
Rackspace uses Linux because it provides a stable and flexible platform for our customers' workloads. Our customers trust us to support their mission-critical applications and we need reliable infrastructure including software and hardware to meet their expectations. If you look under the hood in our dedicated environments or in our expansive cloud infrastructure, you'll find Linux running there.
Many of the applications and infrastructure that we need to run for internal use or for customers run best on Linux. This includes all the popular language frameworks and open virtualization platforms such as Xen, LXC, KVM, Docker, etc. Running combinations of these platforms give us the stability and performance we demand for the Rackspace Cloud. Our Cloud Servers product runs OpenStack services that manage tens of thousands of hypervisors all running Linux.
Using Linux also allows us to tap into a community of experts to solve problems. When we have an issue, we're comfortable asking questions. When we have a solution, we enjoy sharing it with the community. At Rackspace, we understand how to work and contribute in an open community and Linux has many opportunities to build relationships with other groups that have similar goals.
### Why did you join the Linux Foundation? ###
Joining the Linux Foundation allows us to show our support and engage the Linux community in new ways. We've learned plenty from running Linux in highly demanding environments at a large scale and we're eager to share those experiences. Other members of the community have probably run into different challenges than we have and this gives us a greater opportunity to learn from them as well.
### What interesting or innovative trends are you witnessing in the data center and what role does Linux play in them? ###
Virtualization and automation have changed how companies deploy hardware and software. Linux gives us several virtualization options and these allow us to automate more of our infrastructure deployments and maintenance tasks. Automation and configuration management frameworks allow us to reduce our costs, improve our testing capabilities, and bring products to market faster. The majority of these open source automation frameworks run best on Linux servers.
### How is Rackspace participating in that innovation? ###
We leverage several open-source Linux-based tools and projects to deliver great customer outcomes. One of our largest efforts in this area is with OpenStack. It's the software that runs our public and private clouds and we're actively engaged with the community to improve it. We're using Linux to find new ways to scale our large virtualization platform and deliver infrastructure to customers quickly.
The open-source nature of Linux inspires us to share the majority of these discoveries with the community. Our customers can improve OpenStack and those improvements will eventually make it into our product offering. We make contributions to a countless number of open source projects either as a company or as individual Rackers (our employees are called "Rackers") and many of these projects are designed to run on Linux.
### What other future technologies or industries do you think Linux and open source will increasingly become important in and why? ###
The move to software-defined infrastructure is a big shift. Customers already have access to virtualization platforms like Xen that allow them to define their infrastructure with software. Software-defined networking is quickly becoming more mature and scalable. However, customers want the ability to have a software defined datacenter at their fingertips. This may involve physical servers, virtual servers, and virtual networks that need high performance with flexible configurations. Many of the current technologies are designed to run on Linux due to technology already available in the kernel or userland frameworks provided by the community.
### Are you hiring? ###
From hacking on kernels to supporting thousands of virtual machines we are always looking for talented admins, developers and engineers. You can find more information at Rackertalent.com.
--------------------------------------------------------------------------------
via: http://www.linux.com/news/featured-blogs/200-libby-clark/775890-the-companies-that-support-linux-rackspace/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.rackspace.com/
[2]:http://www.bloomberg.com/news/2014-05-15/rackspace-hires-morgan-stanley-to-evaluate-options.html
[3]:http://www.informationweek.com/cloud/infrastructure-as-a-service/9-more-cloud-computing-pioneers/d/d-id/1109120
[4]:http://www.xenproject.org/
[5]:http://www.linuxfoundation.org/news-media/announcements/2014/05/new-linux-foundation-members-advance-massively-scalable-secure


@ -1,79 +0,0 @@
Fire Phone Dynamic Perspective tracks eyes for 3D UI
================================================================================
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/fire-phone-dynamic-perspective-1-820x420.jpg)
3D on phones is back, and it's Amazon giving it a try this time with Dynamic Perspective on the new [Fire Phone][1]. Eschewing a "true" 3D display as we've seen before, the Fire Phone's system instead uses four front-facing cameras to track the user's eyes, and adjusts the on-screen UI so that the various layers shift around to give the impression of 3D.
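The layer-shifting idea can be sketched in a few lines. This is a toy model, not Amazon's implementation; the function name and the linear `strength` factor are invented for illustration:

```python
# Parallax toy: shift each UI layer opposite to the viewer's head offset,
# with deeper layers shifting more. That differential motion between
# layers is what reads as depth on a flat screen.
def layer_offset(head_x, head_y, depth, strength=0.1):
    """Return a (dx, dy) pixel shift for a layer at the given depth."""
    return (-head_x * depth * strength, -head_y * depth * strength)

# Head moves 10 px to the right: a deep layer slides further left
# than a near one.
print(layer_offset(10.0, 0.0, depth=3.0))
print(layer_offset(10.0, 0.0, depth=0.5))
```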
A combination of physically tilting the phone and moving your head as you hold it can be used to navigate through the interface and apps. So, tilting the Fire Phone can scroll through the browser, rather than having to swipe around with a fingertip.
YouTube video link: [http://www.youtube.com/embed/iB75HJe8eiI][2]
Similarly, with a carousel of items in Amazon's store on the phone, tilting the handset left and right pans through the products.
YouTube video link: [http://www.youtube.com/embed/lwj0hlE8CJc][3]
In ebooks, the Kindle app can scroll through according to how you're holding it. The settings can be switched between adjusting speed depending on how extreme the tilt angle is, or locking it to a fixed rate if you'd rather have things be predictable.
### This is the Amazon Fire Phone ###
Maps, too, get Dynamic Perspective support. Moving the Fire Phone around can show what's "hiding" behind 3D buildings or on different layers. Tilting can also be used to open up menus, in games for motion control, and even to navigate between the now-playing and lyrics UIs in the Prime Music app.
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010143-XL-600x337.jpg)
All that 3D didn't come easy, though. Based on the fact that every face is different, with variations in hair color, shape, whether they wear glasses, and other factors, Amazon had to put Dynamic Perspective through some serious testing.
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010006-XL-600x337.jpg)
In the company's labs, that involved a somewhat nightmarish rubber head on a stick, but then Amazon expanded that to use real-world data from thousands of photos of people. The use of four cameras means that, no matter what may be blocking the screen, the Fire Phone should be able to spot the user properly.
YouTube video link: [http://www.youtube.com/embed/X-wPOq27iXk][5]
Whether it'll all work as Bezos says, or be something owners quickly turn off, remains to be seen. We'll know more when we spend some hands-on time with the Fire Phone soon.
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/fire-phone-dynamic-perspective-1.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010143-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010012-XL1.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010010-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010007-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010003-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010153-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010145-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010019-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010030-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010022-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010004-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010010-XL-1.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010015-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010014-XL.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010008-XL1.jpg)
![](http://cdn.slashgear.com/wp-content/uploads/2014/06/P1010006-XL.jpg)
--------------------------------------------------------------------------------
via: http://www.slashgear.com/fire-phone-dynamic-perspective-tracks-eyes-for-3d-ui-18334229/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.slashgear.com/tags/fire-phone
[2]:http://www.youtube.com/embed/iB75HJe8eiI
[3]:http://www.youtube.com/embed/lwj0hlE8CJc
[4]:http://www.slashgear.com/this-is-the-amazon-fire-phone-18334195/
[5]:http://www.youtube.com/embed/X-wPOq27iXk


@ -1,37 +0,0 @@
$2400 Valued Introduction To Linux Course Is Available For Free On edX
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/07/Introduction_Linux_edX.jpg)
You have probably already heard about it. The [Linux Foundation][1] has tied up with [edX][2] (a major online learning platform founded by MIT and Harvard University) to provide its Introduction to Linux course, which usually costs $2400, for free.
edX has over 200 courses from over 50 elite universities, corporations and organizations worldwide. Over 2.5 million users attend these online courses across the globe.
**Introduction to Linux course is starting from 1st August**. There are three ways one can take this course (or most other edX courses):
- **Audit the course**: Simply register for **free** and get access to the study material. Participate in the course at your own pace. There is no compulsion or penalty if you cannot complete the course.
- **Honor code certificate**: It certifies that you have successfully completed the course; however, it doesn't verify your identity. This too is free.
- **Verified certificate of achievement**: This certificate validates your identity and costs $250 for the **Introduction to Linux** course.
Introduction to Linux requires a working knowledge of computers and common software. The program aims to give experienced computer users, who may or may not have previous Linux experience, a good working knowledge of Linux from both a graphical and a command-line perspective. It consists of 40 to 60 hours of course work and is designed by Dr. Jerry Cooperstein, who manages training content at the Linux Foundation.
If you are planning to attend Introduction to Linux, it is advised to have Linux installed on your computer beforehand. Linux Foundation has [prepared a guide to set up the computer][3] to help users out.
What are you waiting for? If you ever wanted to learn Linux, this is the time, and best of all, it's FREE! Sign up for the course with the link below:
- [Introduction to Linux course at edX][4]
--------------------------------------------------------------------------------
via: http://itsfoss.com/introduction-linux-free-edx/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://www.linuxfoundation.org/
[2]:https://www.edx.org/
[3]:https://training.linuxfoundation.org/images/pdfs/Preparing_Your_Computer_for_LFS101x.pdf
[4]:https://www.edx.org/course/linuxfoundationx/linuxfoundationx-lfs101x-introduction-1621#.U9gJ5nWSyb8


@ -1,28 +0,0 @@
Microsoft's Raspberry Pi Will Cost $300
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/08/Sharks_Cove_Microsoft.jpg)
I presume that you have heard of [Raspberry Pi][1]. A $35 microcomputer that has revolutionized low-cost computing and has a cult following among hardware hobbyists and do-it-yourself enthusiasts. Several others have followed in the footsteps of Raspberry Pi to provide low-cost microcomputers; [Arduino][2] is one of the successful examples.
Microsoft has decided to enter the world of "System on Chip" and to come up with its "own Raspberry Pi". Having teamed up with Intel and [CircuitCo][3], [Microsoft will be launching a micro computer named "Sharks Cove"][4].
Sharks Cove boasts an Intel Atom Z3735G, a quad-core chip with speeds up to 1.83GHz, 1GB of RAM, 16GB of flash storage and a MicroSD slot, among many other things. You can read the full specifications [here][5]. The main aim of Sharks Cove is to provide a platform to develop hardware and drivers for Windows and Android.
Everything sounds fine until it comes to the price. Sharks Cove will cost $299 with a Windows 8.1 license. While Arduino costs around $55 and Raspberry Pi $35, I don't think there will be many buyers at such a high price in a domain dominated by low-cost Linux-based devices. What do you think?
--------------------------------------------------------------------------------
via: http://itsfoss.com/microsofts-raspberry-pi/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://www.raspberrypi.org/
[2]:http://www.arduino.cc/
[3]:http://www.circuitco.com/
[4]:http://blogs.msdn.com/b/windows_hardware_and_driver_developer_blog/archive/2014/07/26/the-sharks-cove-is-now-available-for-pre-order.aspx
[5]:http://www.sharkscove.org/docs/


@ -1,94 +0,0 @@
Nostalgic Gaming On Linux With Good Old Games
================================================================================
![](http://thelinuxrain.com/content/01-articles/70-nostalgic-gaming-on-linux-with-good-old-games/headimage.jpg)
**Thanks to the recent Linux support provided by DRM-free classic games provider, GOG.com, getting that nostalgic kick on Linux has never been easier. In this article I'll also detail a few of my favourite classic games that are now available to play in Linux.**
It's not all nostalgia, though. Some of the classic games you might think of are genuinely classic, amazing games no matter their age. Others, you might need to imagine you're back in, say, 1995 and look at the game from that point of view to appreciate how good it must have seemed at that time. Whatever the case though, there's no shortage of these old games out there to enjoy, and thankfully it's gotten even easier with [GOG.com][1] recently announcing Linux support.
A lot of these old classic games actually run in [DOSBox][2], so a seasoned Linux gamer with experience of such games may point out that you could already play many of the titles provided by services such as GOG.com, well before that recent announcement. Which is correct; I've done the same thing myself, but it does involve a bit of fiddling with files, so at the very least we now have a "turn-key" solution even with the DOSBox-powered games - you download them, you launch them, they should just work. If you just want to purchase a game and play it right away, that's no bad thing.
Then there's the non-DOS games. A lot of old Windows 95/98 games do often work fine in WINE, but not always, or perhaps need workarounds to be manually applied or even a special version of WINE itself. Some old games just won't work at all no matter what you try, even on modern versions of Windows itself! So again, having an alternative available that is designed to work out-of-the-box (and DRM-free, no less) is a nice thing.
GOG.com initially provided 50 Linux compatible games on their penguin-friendly launch, but that number is and will keep growing. In coming months they say they hope to reach 100 games, and who knows how many thereafter, but it should grow to be a fairly considerable library.
Here are a few of my favourites so far, that are available right now:
### Rise Of The Triads Dark War (1994) ###
![](http://thelinuxrain.com/content/01-articles/70-nostalgic-gaming-on-linux-with-good-old-games/rott.png)
If you crave some 90's style shoot-em-up action where you get to blow the hell out of, well, everything and everyone, Rise Of The Triads (ROTT) is one of the best choices and a favourite to many.
If you know these kinds of shooters, you probably know what to expect. There is a storyline, but really it's about blowing everything up and/or riddling enemies full of bullet holes. As a member of an elite group of operatives you are sent to a remote island to stop a mad cult leader, where typically everything goes pear-shaped and you have to kill everything and successfully navigate levels to save the day and get out alive in the process.
True to the arcade-style shooter of this vintage, weapons are all about being big, high-tech and fun. You might be in an elite operations group, but you ain't stuck with peashooters and standard rifles - no, there are dual pistols all the way up to heat-seeking missiles and the Flamewall cannon and many more. It's all about genuine fun and doesn't take itself too seriously.
*Verdict: A blast (literally)*
### Realms Of The Haunting (1996) ###
![](http://thelinuxrain.com/content/01-articles/70-nostalgic-gaming-on-linux-with-good-old-games/roth.png)
This one is actually fairly new to me and isn't a game I remember from years back. Which is my loss really, as I can imagine this game must have seemed pretty incredible all the way back in 1996.
Realms Of The Haunting is something of a first-person shooter/point-and-click adventure combination. The controls at first seem a bit strange because of this (keyboard to control movement and attack etc. Mouse to move the context indicator/cursor around the screen and interact with objects) but you soon get used to it. The storyline, although I have not experienced all of it yet myself, is apparently very good and certainly my impressions of it have been good. This is also one of those classic games that uses good old FMV (Full Motion Video) for cutscenes.
![](http://thelinuxrain.com/content/01-articles/70-nostalgic-gaming-on-linux-with-good-old-games/roth1.png)
Basically you play as a young man who receives a suitably vague letter from your recently deceased father about a strange deserted mansion and its curious happenings inside. Naturally, said young man decides to visit the mansion, discovers his father's spirit being held captive by the forces of evil, and sets out to try to free him. That sounds like a pretty standard storyline at first, but the difference lies in the execution and how it progresses.
From the moment the main character picks up a lantern and gazes around the dark, creepy surroundings of the mansion, it actually reminds me a bit of Amnesia: The Dark Descent. Sure, the gameplay and amount of actual combat mean the comparison somewhat ends after that, but ROTH does also have its fair share of exploration and puzzles. Despite a very dated-looking graphics engine (it is based on the DOOM engine after all!) it strikes me how much attention to detail the game creators managed to pack into the environment, which further adds to the atmosphere and immersion despite the constant pixel party happening on screen.
All in all, Realms Of The Haunting is a creepy but very intriguing old game that is very much worth checking out. And if you love games that feature old-school FMV, there are heaps on offer here too.
*Verdict: Ahead of its time?*
### Sid Meier's Colonization (1994) ###
![](http://thelinuxrain.com/content/01-articles/70-nostalgic-gaming-on-linux-with-good-old-games/colonization.jpg)
Think Civilization, but with a colonial twist. Instead of building a nation from a mound of dirt in the middle of nowhere, Colonization tasks you with controlling either the forces of England, France, Spain or The Netherlands as you set about managing expansion across the Atlantic for your nation of choice. The aim of the game, as far as winning goes, is to achieve independence from your mother country and defeat the angry Royal Expeditionary Force that comes your way.
If big chunky pixels, even in text, are something that hurts your eyes, you may want to avoid this one, but the simple old graphics belie the actual gameplay and depth available here. If you have experience with the more modern Sid Meier turn-based strategy games like the Civilization series, you may be surprised just how many familiar elements and gameplay mechanics there are in this old game.
It may appear ancient and a little clunky, but like most of the classic Sid Meier games, you can sink hours upon hours into this one. Considering its price nowadays, no more than a piece of cake and a coffee, it is fantastic value that is hard to beat. Do try it.
*Verdict: Superb turn-based strategy, all the way from 1994*
### Sword Of The Samurai (1989) ###
![](http://thelinuxrain.com/content/01-articles/70-nostalgic-gaming-on-linux-with-good-old-games/sots.png)
This one is a little more obscure and may surprise. For me, and this will sound a little cliché given the Japanese theme and setting, there is something rather Zen about Sword Of The Samurai. A product of 1989, it obviously has simple graphics and a very limited colour palette. Yet, I think even today the graphics work for this particular game and add to its charm and, again, the Zen.
Describing SOTS is difficult though. It's sort of... a strategy, war, dating, stealth, melee, dueling, diplomacy, choose-your-own adventure Samurai sim.
Seriously.
Somehow this old game, which weighs in at less than 20 megabytes, fits in an incredible amount of different gameplay (and surprisingly smart artificial intelligence) and approaches you can take to achieve your goal. The core goal is to get a very important thing called Honor. In the world of feudal Japan, Honor is a big, big thing and you must gain more Honor any way you can in order to achieve the goal of unifying Japan under your rule, as Shogun.
While you can of course be the "good guy" and do everything you think is right to get Honor, the game is inherently deep and clever enough to allow you to achieve Honor even with, shall we say, more underhanded tactics.
It's difficult to truly describe all the ways you can play this game but my advice is to simply do so - play it, let it wash over you and soak in the Japanese culture and atmosphere that the game exudes in a really classy way, without being over-the-top. And yes, the game can also be educational! You can't beat that.
*Verdict: An under-appreciated masterpiece*
### Get your game on ###
So there we have it: those are some of my favourites that I've been (re)playing recently, on my Fedora 20 system no less. Some of these games may be older than Linux (the kernel) itself, but thanks to the likes of GOG.com and especially emulators like DOSBox, you can still enjoy the classic titles you remember from years gone by.
What are some of your favourite classic games? Are you also playing them now in your favourite Linux distro? Let us know in the comments!
--------------------------------------------------------------------------------
via: http://thelinuxrain.com/articles/nostalgic-gaming-on-linux-with-good-old-games
作者Andrew Powell
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://gog.com/
[2]:http://www.dosbox.com/


@ -1,132 +0,0 @@
Five Awesome GOG.com Linux Games Everyone Should Play Once
================================================================================
![GOG AKA ILU TUX NAO](http://www.omgubuntu.co.uk/wp-content/uploads/2014/07/gog-com-tile.jpg)
GOG AKA ILU TUX NAO
**Ardent Linux gamers will have seen last week as a good one, as rising game distribution service [GOG.com brought a batch of more than 50 classic PC and indie titles to the platform][1], many for the very first time.**
Against the 775 DRM-free offerings available to Windows users, not to mention the 600-strong Linux catalog on Steam, it might not sound like much. But the company says this is only the first wave and that another 50 games are set to land later in the year.
Last week [we asked][2] our Facebook fans which five games being sold by GOG they consider must have titles.
After pruning the titles often found warming the shelves of the Humble Bundle (*e.g., Uplink: Hacker Elite, Darwinia, Don't Starve and Anomaly Warzone Earth*), and throwing in a free title for good measure, we came up with the following list.
It's not comprehensive, it's not definitive and it's certainly not going to be the five you'd pick. But for those either too young to have experienced some of these games the first time around, or old enough to level up nostalgia, it's a great jumping-in point.
Because we know it matters to some of you, we've listed the port type for each entry, so you can avoid Wine or DOSBox where needed.
Finally (though it really should go without saying), if you're looking for full HD immersive 3D worlds with GPU-melting graphics requirements, this is not the list for you.
Now to hark back to rainy days spent cooped up inside, eyes firmly fixed on a CRT monitor…
### FlatOut ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/flatout.jpeg)
**Year**: 2005. **Genre**: Racing. **Port**: Wine. **Price**: $5.99 (inc. extras).
Unbuckle up and prepare for one bad-ass and thoroughly bumpy ride.
Trying to condense why FlatOut is a classic demolition rally game into just a few short sentences is traumatic. Almost as traumatic as being a driver in it must be.
Its premise — carnage, destruction, more carnage — reads fairly standard these days. Virtually every racing game (at least those worth their tread) implements an element of off-road mayhem. But FlatOut was one of the first, and even today remains one of the best.
With 36 courses littered with more than 3000 items to crash and smash, plus 16 upgradeable vehicles, each made up of 40 "deformable pieces" for ultimate on-screen obliteration, FlatOut is flat out one of the best raucous racing games available on Linux.
*Also check out Flat Out 2, released in 2011 and costing $9.99.*
- [Buy “FlatOut” on GOG][3]
### Duke Nukem 3D: Atomic Edition ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/duke-3d.jpeg)
**Year**: 1995. **Genre**: First-Person Shooter. **Port**: DOSBox. **Price**: $5.99 (inc. extras).
Politically incorrect, full of female objectification, and featuring more cheesy one-liners than the script of a straight-to-VHS Jean-Claude Van Damme action film. Yep, it's Duke Nukem.
But c'mon; no list of retro PC classics would be complete without at least one Duke Nukem entry, right? They are bona fide classics. Along with Doom and Quake, it kickstarted the gory corridor-crawling shooter genre.
Most of its strengths are in the pastiche; it is camp, cheesy and kaleidoscopically brash, and takes itself about as seriously as a Sega MegaCD video cutscene from Night Trap.
The environments are varied and rich. The gameplay mechanics are easy to get to grips with. And while the less-than-subtle humour laced throughout may rile the easily offended, those of a certain age won't be able to resist smirking at the pop-culture satire.
- [Buy “Duke Nukem: Atomic Edition” on GOG][4]
### The Last Federation ###
YouTube video:
https://www.youtube.com/embed/5RKXWpyf1i4?feature=oembed
**Year**: 2014. **Genre**: Strategy. **Port**: Native. Price: $19.99.
The Last Federation is the most expensive title on this list and also the most modern, having debuted this year.
It's a turn-based tactical combat game set in space that burdens you with the task of forging a lasting federation of planets and ushering in an era of peace and prosperity for the solar system.
But to forge a lasting truce you must indulge your inner Machiavellian monster.
*“Remember, when helping civilizations evolve, sometimes they evolve faster when a large multi-headed monster is glaring menacingly at them,” reads the games synopsis.*
Pricey, but one of the standout strategy games of 2014.
- [Buy “The Last Federation” on GOG][5]
### StarGunner ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/stargunner.jpeg)
**Year**: 1996. **Genre**: Arcade. **Port**: DOSBox. Price: Free.
StarGunner is one of two Linux games available for free on GOG. It's a space-based side-scrolling shoot 'em up, similar to thousands of mid-nineties arcade games now resting in a landfill somewhere.
That's not to say it's not any good; it's great fun, just a little familiar.
Gameplay is fast, battlefields switch between space, ground and water often enough to maintain interest, and with more than 75 different enemy crafts (plus over 30 super adaptive bosses) things never get visually tired, either.
Look out for weapons and other power-ups littered through levels.
- [Download “StarGunner” for free on GOG][6]
### Blocks That Matter ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/blocks-that-matter.jpeg)
**Year**: 2011. **Price**: $2.99. **Genre**: Platformer. **Port**: Wine, 32-bit only.
Take some blocks, drop them into an isometric world, add a bit of jumping and a whole lot of puzzle solving. Finally, coat it all in a layer of cuteness. Aside from a needlessly drawn-out introduction, you should end up with **Blocks That Matter**. And boy, do these blocks matter.
Playing as a robot called Tetrobot, your sole aim is to waddle about each level drilling blocks of various materials (sand, ice, etc.) one by one. Blocks can be collected and inserted into the game to help you complete levels, but depending on the material this can often be a hindrance rather than a help.
An innovative 2D platform-puzzler, it offers up 40 levels in the standard Adventure Mode, with another 20 waiting to be unlocked. It's cute, clever and cheap.
- [Buy “Blocks that Matter” on GOG][7]
### Honourable Mentions ###
####DarkLands####
**Year**: 1992. **Genre**: RPG. **Port**: DOSBox. **Price**: $5.99 (inc. extras).
#### Sid Meier's Covert Action ####
**Year**: 1990. **Genre**: Action/Strategy. **Port**: DOSBox. **Price**: $5.99 (inc. extras).
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/08/five-best-linux-gog-com-games-available-now
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/07/50-classic-pc-games-now-available-linux-gog
[2]:https://www.facebook.com/omgubuntu/posts/830930706919468
[3]:http://www.gog.com/game/flatout
[4]:http://www.gog.com/game/duke_nukem_3d_atomic_edition
[5]:http://www.gog.com/game/last_federation_the
[6]:http://www.gog.com/game/stargunner
[7]:http://www.gog.com/game/blocks_that_matter


@ -1,87 +0,0 @@
translating by barney-ro
Interesting facts about Linux
================================================================================
Today, August 25th, is the 23rd birthday of Linux. The modest [Usenet post][1] made by a 21-year-old student at the University of Helsinki on August 25th, 1991, marks the birth of the venerable Linux as we know it today.
Fast forward 23 years, and now Linux is everywhere, not only installed on end user desktops, [smartphones][2] and embedded systems, but also fulfilling the needs of [leading enterprises][3] and powering mission-critical systems such as [US Navy's nuclear submarines][4] and [FAA's air traffic control][5]. Entering the era of ubiquitous cloud computing, Linux is continuing [its dominance][6] as by far the most popular platform for the cloud.
Celebrating the 23rd birthday of Linux today, let me show you **some interesting facts and history you may not know about Linux**. If there is anything to add, feel free to share it in the comments. In this article, I will use the terms "Linux", "kernel" or "Linux kernel" interchangeably to mean the same thing.
1. There is a never-ending debate on whether or not Linux is an operating system. Technically, the term "Linux" refers to the kernel, a core component of an operating system. Folks who argue that Linux is not an operating system are operating system purists who think that the kernel alone does not make the whole operating system, or free software ideologists who believe that the largest free operating system should be named "[GNU/Linux][7]" to give credit where credit is due (i.e., [GNU project][8]). On the other hand, some developers and programmers have a view that Linux qualifies as an operating system in a sense that it implements the [POSIX standard][9].
2. According to openhub.net, the majority (95%) of Linux is written in the C language. The second most popular language for Linux is assembly (2.8%). The dominance of C over C++ is no surprise given Linus's stance on C++. Here is the programming language breakdown for Linux.
![](https://farm4.staticflickr.com/3845/15025332121_055cfe3a2c_z.jpg)
3. Linux has been built by a total of [13,036 contributors][10] worldwide. The most prolific contributor is, of course, Linus Torvalds himself, who has committed code more than 20,000 times over the course of the lifetime of Linux. The following figures show the all-time top-10 contributors of Linux in terms of commit counts.
![](https://farm4.staticflickr.com/3837/14841786838_7a50625f9d_b.jpg)
4. The total source lines of code (SLOC) of Linux is over 17 million. The estimated cost for the entire code base is 5,526 person-years, or over 300M USD according to [basic COCOMO model][11].
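As a rough sanity check of that figure, the basic COCOMO organic-mode formula (effort in person-months = 2.4 × KLOC^1.05) can be evaluated directly; the little sketch below is only an illustration, not openhub.net's methodology:

```python
# Basic COCOMO, "organic" mode: effort (person-months) = a * KLOC^b.
# a=2.4, b=1.05 are the standard organic-mode coefficients.
def cocomo_effort_person_years(sloc, a=2.4, b=1.05):
    kloc = sloc / 1000.0
    person_months = a * kloc ** b
    return person_months / 12.0

# ~17 million SLOC works out to roughly five and a half thousand
# person-years, in the same ballpark as the 5,526 figure quoted above.
print(round(cocomo_effort_person_years(17000000)))
```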
5. Enterprises have not been simply consumers of Linux. Their employees have [actively participated][12] in the development of Linux. The figure below shows the top-10 corporate sponsors of Linux kernel development, in terms of total commit counts from their employees, as of year 2013. They include commercial Linux distributors (Red Hat, SUSE), chip/embedded system makers (Intel, Texas Instruments, Wolfson), non-profits (Linaro), and other IT power houses (IBM, Samsung, Google).
![](https://farm6.staticflickr.com/5573/14841856427_a5a1828245_o.png)
6. The official mascot of Linux is "Tux", a friendly penguin character. The idea of using a cuddly penguin as a mascot/logo was in fact [first conceived and asserted][13] by Linus himself. Why a penguin? Personally, Linus is fond of penguins, despite the fact that he was once bitten by a ferocious penguin, which left him infected with a disease.
7. A Linux "distribution" contains the Linux kernel, supporting GNU utilities/libraries, and other third-party applications. According to [distrowatch.com][14], there are a total of 286 actively maintained Linux distributions. The oldest among them is [Slackware][15], whose very first release, 1.0, became available in 1993.
8. Kernel.org, which is the main repository of Linux source code, was [compromised][16] by an unknown attacker in August 2011, who managed to tamper with several of kernel.org's servers. In an effort to tighten up access policies for the Linux kernel, the Linux Foundation recently [turned on][17] two-factor authentication at the official Git repositories hosting the Linux kernel.
9. The dominance of Linux on the top 500 supercomputers [continues to rise][18]. As of June 2014, 97% of the world's fastest computers are powered by Linux.
10. Spacewatch, a research group of Lunar and Planetary Laboratory at the University of Arizona, named several asteroids ([9793 Torvalds][19], [9882 Stallman][20], [9885 Linux][21] and [9965 GNU][22]) after GNU/Linux and their creators, in recognition of the free operating system which was instrumental in their asteroid survey activities.
11. In the modern history of Linux kernel development, there was a big jump in kernel version: from 2.6 to 3.0. The [renumbering to version 3][23] actually did not signify any major restructuring in kernel code, but was simply to celebrate the 20 year milestone of the Linux kernel.
12. In 2000, Steve Jobs at Apple Inc. [tried to hire][24] Linus Torvalds to have him drop Linux development and instead work on "Unix for the biggest user base," which was OS X back then. Linus declined the offer.
13. The [reboot()][25] system call in the Linux kernel requires two magic numbers. The second magic number comes from the [birth dates][26] of Linus Torvalds and his three daughters.
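The dates are simply the hex digits of the accepted `magic2` constants read as DDMMYYYY. The constants below come from the kernel's reboot interface; the decoder itself is just an illustrative sketch:

```python
# The four magic2 values accepted by reboot(); in hex they spell out
# 0x28121969, 0x05121996, 0x16041998 and 0x20112000 -- the birth dates
# of Linus Torvalds and his three daughters in DDMMYYYY form.
MAGIC2_VALUES = [672274793, 85072278, 369367448, 537993216]

def decode_magic(value):
    s = f"{value:08x}"                   # e.g. 672274793 -> "28121969"
    return f"{s[:2]}.{s[2:4]}.{s[4:]}"   # -> "28.12.1969"

for v in MAGIC2_VALUES:
    print(hex(v), "->", decode_magic(v))
```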
14. With so many fans of Linux around the world, there are [criticisms][27] of current Linux distributions (mainly desktops), such as limited hardware support, lack of standardization, instability due to short upgrade/release cycles, etc. During the [Linux kernel panel][28] at LinuxCon 2014, Linus was quoted as saying "I still want the desktop" when asked where he thinks Linux should go next.
If you know any interesting facts about Linux, feel free to share them in the comments.
Happy birthday, Linux!
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/interesting-facts-linux.html
作者:[Dan Nanni][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://groups.google.com/forum/message/raw?msg=comp.os.minix/dlNtH7RRrGA/SwRavCzVE7gJ
[2]:http://developer.android.com/about/index.html
[3]:http://fortune.com/2013/05/06/how-linux-conquered-the-fortune-500/
[4]:http://www.linuxjournal.com/article/7789
[5]:http://fcw.com/Articles/2006/05/01/FAA-manages-air-traffic-with-Linux.aspx
[6]:http://thecloudmarket.com/stats
[7]:http://www.gnu.org/gnu/why-gnu-linux.html
[8]:http://www.gnu.org/gnu/gnu-history.html
[9]:http://en.wikipedia.org/wiki/POSIX
[10]:https://www.openhub.net/p/linux/contributors/summary
[11]:https://www.openhub.net/p/linux/estimated_cost
[12]:http://www.linuxfoundation.org/publications/linux-foundation/who-writes-linux-2013
[13]:http://www.sjbaker.org/wiki/index.php?title=The_History_of_Tux_the_Linux_Penguin
[14]:http://distrowatch.com/search.php?ostype=All&category=All&origin=All&basedon=All&notbasedon=None&desktop=All&architecture=All&status=Active
[15]:http://www.slackware.com/info/
[16]:http://pastebin.com/BKcmMd47
[17]:http://www.linux.com/news/featured-blogs/203-konstantin-ryabitsev/784544-linux-kernel-git-repositories-add-2-factor-authentication
[18]:http://www.top500.org/statistics/details/osfam/1
[19]:http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=9793
[20]:http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=9882
[21]:http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=9885
[22]:http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=9965
[23]:https://lkml.org/lkml/2011/5/29/204
[24]:http://www.wired.com/2012/03/mr-linux/2/
[25]:http://lxr.free-electrons.com/source/kernel/reboot.c#L199
[26]:http://www.nndb.com/people/444/000022378/
[27]:http://linuxfonts.narod.ru/why.linux.is.not.ready.for.the.desktop.current.html
[28]:https://www.youtube.com/watch?v=8myENKt8bD0


@ -1,92 +0,0 @@
Making MySQL Better at GitHub
================================================================================
> At GitHub we say, "it's not fully shipped until it's fast." We've talked before about some of the ways we keep our [frontend experience speedy][1], but that's only part of the story. Our MySQL database infrastructure dramatically affects the performance of GitHub.com. Here's a look at how our infrastructure team seamlessly conducted a major MySQL improvement last August and made GitHub even faster.
### The mission ###
Last year we moved the bulk of GitHub.com's infrastructure into a new datacenter with world-class hardware and networking. Since MySQL forms the foundation of our backend systems, we expected database performance to benefit tremendously from an improved setup. But creating a brand-new cluster with brand-new hardware in a new datacenter is no small task, so we had to plan and test carefully to ensure a smooth transition.
### Preparation ###
A major infrastructure change like this requires measurement and metrics gathering every step of the way. After installing base operating systems on our new machines, it was time to test out our new setup with various configurations. To get a realistic test workload, we used tcpdump to extract SELECT queries from the old cluster that was serving production and replayed them onto the new cluster.
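As a sketch of the replay idea (the function and sample payload are invented here, not GitHub's actual tooling; purpose-built tools such as the Percona Toolkit do this far more robustly), pulling SELECT statements out of a text-decoded capture might look like:

```python
import re

# Toy extraction step: given text decoded from a capture of MySQL
# traffic (e.g. the ASCII payload printed by `tcpdump -A`), collect
# the SELECT statements so they can be replayed elsewhere.
def extract_selects(dump_text):
    # In the MySQL text protocol the query appears inline in the
    # payload; grab from each SELECT to the end of its line.
    return re.findall(r"SELECT[^\n]+", dump_text, re.IGNORECASE)

sample = "\x01\x00noise SELECT id FROM repos WHERE owner = 42\njunk\nSELECT 1\n"
print(extract_selects(sample))
```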
MySQL tuning is very workload specific, and well-known configuration settings like innodb_buffer_pool_size often make the most difference in MySQL's performance. But on a major change like this, we wanted to make sure we covered everything, so we took a look at settings like innodb_thread_concurrency, innodb_io_capacity, and innodb_buffer_pool_instances, among others.
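For reference, these knobs are `[mysqld]` settings in `my.cnf`; the values below are illustrative placeholders only, not GitHub's production configuration:

```ini
[mysqld]
# Workload-specific tuning settings named above; values are examples only.
innodb_buffer_pool_size      = 64G
innodb_buffer_pool_instances = 8
innodb_thread_concurrency    = 0      # 0 lets InnoDB self-regulate
innodb_io_capacity           = 2000
```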
We were careful to only make one test configuration change at a time, and to run tests for at least 12 hours. We looked for query response time changes, stalls in queries per second, and signs of reduced concurrency. We observed the output of SHOW ENGINE INNODB STATUS, particularly the SEMAPHORES section, which provides information on work load contention.
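A minimal sketch of isolating that section from the status text (the helper is invented for illustration; the dashed delimiters mirror the section headers InnoDB prints):

```python
import re

# SHOW ENGINE INNODB STATUS returns one big string whose sections are
# delimited by dashed header lines; grab just the SEMAPHORES section.
def semaphores_section(status_text):
    m = re.search(r"SEMAPHORES\n-+\n(.*?)\n-+\n", status_text, re.DOTALL)
    return m.group(1) if m else ""

sample_status = (
    "----------\nSEMAPHORES\n----------\n"
    "OS WAIT ARRAY INFO: reservation count 13569, signal count 11421\n"
    "----------\nTRANSACTIONS\n----------\n"
)
print(semaphores_section(sample_status))
```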
Once we were relatively comfortable with configuration settings, we started migrating one of our largest tables onto an isolated cluster. This served as an early test of the process, gave us more space in the buffer pools of our core cluster and provided greater flexibility for failover and storage. This initial migration introduced an interesting application challenge, as we had to make sure we could maintain multiple connections and direct queries to the correct cluster.
In addition to all our raw hardware improvements, we also made process and topology improvements: we added delayed replicas, faster and more frequent backups, and more read replica capacity. These were all built out and ready for go-live day.
### Making a list; checking it twice ###
With millions of people using GitHub.com on a daily basis, we did not want to take any chances with the actual switchover. We came up with a thorough [checklist][2] before the transition:
![](https://cloud.githubusercontent.com/assets/1155781/4116929/13fc6f50-328b-11e4-837b-922aad3055a8.png)
We also planned a maintenance window and [announced it on our blog][3] to give our users plenty of notice.
### Migration day ###
At 5am Pacific Time on a Saturday, the migration team assembled online in chat and the process began:
![](https://cloud.githubusercontent.com/assets/1155781/4060850/39f52cd4-2df3-11e4-9aca-1f54a4870d24.png)
We put the site in maintenance mode, made an announcement on Twitter, and set out to work through the list above:
![](https://cloud.githubusercontent.com/assets/1155781/4060864/54ff6bac-2df3-11e4-95da-b059c0ec668f.png)
**13 minutes** later, we were able to confirm operations of the new cluster:
![](https://cloud.githubusercontent.com/assets/1155781/4060870/6a4c0060-2df3-11e4-8dab-654562fe628d.png)
Then we flipped GitHub.com out of maintenance mode, and let the world know that we were in the clear.
![](https://cloud.githubusercontent.com/assets/1155781/4060878/79b9884c-2df3-11e4-98ed-d11818c8915a.png)
Lots of up-front testing and preparation meant that we kept the work needed on go-live day to a minimum.
### Measuring the final results ###
In the weeks following the migration, we closely monitored performance and response times on GitHub.com. We found that our cluster migration cut the average GitHub.com page load time by half and the 99th percentile by *two-thirds*:
![](https://cloud.githubusercontent.com/assets/1155781/4060886/9106e54e-2df3-11e4-8fda-a4c64c229ba1.png)
### What we learned ###
#### Functional partitioning ####
During this process we decided that moving larger tables that mostly store historic data to a separate cluster was a good way to free up disk and buffer pool space. This let us reserve more resources for our "hot" data, at the cost of splitting some connection logic so the application could query multiple clusters. This proved to be a big win for us and we are working to reuse this pattern.
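A toy sketch of that connection logic (the table names, cluster names, and DSNs here are invented for illustration; GitHub's actual application code is not shown in the post):

```shell
# Map a table to the DSN of the cluster that owns it; large, mostly
# historic tables live on an isolated cluster, everything else on core.
dsn_for_table() {
    case "$1" in
        events_archive|audit_log) echo "mysql://app@archive-primary/github" ;;
        *)                        echo "mysql://app@core-primary/github" ;;
    esac
}

dsn_for_table events_archive
dsn_for_table users
```

The application looks up the DSN per table (or per query) instead of assuming a single cluster, which is what makes functional partitioning transparent to callers.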
#### Always be testing ####
You can never do too much acceptance and regression testing for your application. Replicating data from the old cluster to the new one while running acceptance tests, and replaying captured queries, proved invaluable for tracing out issues and preventing surprises during the migration.
#### The power of collaboration ####
Large changes to infrastructure like this mean a lot of people need to be involved, so pull requests functioned as our primary point of coordination as a team. We had people all over the world jumping in to help.
Deploy day team map:
<iframe width="620" height="420" frameborder="0" src="https://render.githubusercontent.com/view/geojson?url=https://gist.githubusercontent.com/anonymous/5fa29a7ccbd0101630da/raw/map.geojson"></iframe>
This created a workflow where we could open a pull request to try out changes, get real-time feedback, and see commits that fixed regressions or errors -- all without phone calls or face-to-face meetings. When everything has a URL that provides context, it's easy to involve a diverse range of people and make it simple for them to give feedback.
### One year later... ###
A full year later, we are happy to call this migration a success — MySQL performance and reliability continue to meet our expectations. And as an added bonus, the new cluster enabled us to make further improvements towards greater availability and query response times. I'll be writing more about those improvements here soon.
--------------------------------------------------------------------------------
via: https://github.com/blog/1880-making-mysql-better-at-github
作者:[samlambert][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://github.com/samlambert
[1]:https://github.com/blog/1756-optimizing-large-selector-sets
[2]:https://help.github.com/articles/writing-on-github#task-lists
[3]:https://github.com/blog/1603-site-maintenance-august-31st-2013

View File

@@ -1,378 +0,0 @@
[Translating by SteveArcher]
The Masked Avengers
================================================================================
> How Anonymous incited online vigilantism from Tunisia to Ferguson.
![](http://www.newyorker.com/wp-content/uploads/2014/09/140908_r25419-690.jpg)
Anyone can join Anonymous simply by claiming affiliation. An anthropologist says that participants “remain subordinate to a focus on the epic win—and, especially, the lulz.”
Paper Sculpture by Jeff Nishinaka / Photograph by Scott Dunbar
----------
In the mid-nineteen-seventies, when Christopher Doyon was a child in rural Maine, he spent hours chatting with strangers on CB radio. His handle was Big Red, for his hair. Transmitters lined the walls of his bedroom, and he persuaded his father to attach two directional antennas to the roof of their house. CB radio was associated primarily with truck drivers, but Doyon and others used it to form the sort of virtual community that later appeared on the Internet, with self-selected nicknames, inside jokes, and an earnest desire to effect change.
Doyons mother died when he was a child, and he and his younger sister were reared by their father, who they both say was physically abusive. Doyon found solace, and a sense of purpose, in the CB-radio community. He and his friends took turns monitoring the local emergency channel. One friends father bought a bubble light and affixed it to the roof of his car; when the boys heard a distress call from a stranded motorist, hed drive them to the side of the highway. There wasnt much they could do beyond offering to call 911, but the adventure made them feel heroic.
Small and wiry, with a thick New England accent, Doyon was fascinated by “Star Trek” and Isaac Asimov novels. When he saw an ad in Popular Mechanics for a build-your-own personal-computer kit, he asked his grandmother to buy it for him, and he spent months figuring out how to put it together and hook it up to the Internet. Compared with the sparsely populated CB airwaves, online chat rooms were a revelation. “I just click a button, hit this guys name, and Im talking to him,” Doyon recalled recently. “It was just breathtaking.”
At the age of fourteen, he ran away from home, and two years later he moved to Cambridge, Massachusetts, a hub of the emerging computer counterculture. The Tech Model Railroad Club, which had been founded thirty-four years earlier by train hobbyists at M.I.T., had evolved into “hackers”—the first group to popularize the term. Richard Stallman, a computer scientist who worked in M.I.T.s Artificial Intelligence Laboratory at the time, says that these early hackers were more likely to pass around copies of “Gödel, Escher, Bach” than to incite technological warfare. “We didnt have tenets,” Stallman said. “It wasnt a movement. It was just a thing that people did to impress each other.” Some of their “hacks” were fun (coding video games); others were functional (improving computer-processing speeds); and some were pranks that took place in the real world (placing mock street signs near campus). Michael Patton, who helped run the T.M.R.C. in the seventies, told me that the original hackers had unwritten rules and that the first one was “Do no damage.”
In Cambridge, Doyon supported himself through odd jobs and panhandling, preferring the freedom of sleeping on park benches to the monotony of a regular job. In 1985, he and a half-dozen other activists formed an electronic “militia.” Echoing the Animal Liberation Front, they called themselves the Peoples Liberation Front. They adopted aliases: the founder, a towering middle-aged man who claimed to be a military veteran, called himself Commander Adama; Doyon went by Commander X. Inspired by the Merry Pranksters, they sold LSD at Grateful Dead shows and used some of the cash to outfit an old school bus with bullhorns, cameras, and battery chargers. They also rented a basement apartment in Cambridge, where Doyon occasionally slept.
Doyon was drawn to computers, but he was not an expert coder. In a series of conversations over the past year, he told me that he saw himself as an activist, in the radical tradition of Abbie Hoffman and Eldridge Cleaver; technology was merely his medium of dissent. In the eighties, students at Harvard and M.I.T. held rallies urging their schools to divest from South Africa. To help the protesters communicate over a secure channel, the P.L.F. built radio kits: mobile FM transmitters, retractable antennas, and microphones, all stuffed inside backpacks. Willard Johnson, an activist and a political scientist at M.I.T., said that hackers were not a transformative presence at rallies. “Most of our work was still done using a bullhorn,” he said.
In 1992, at a Grateful Dead concert in Indiana, Doyon sold three hundred hits of acid to an undercover narcotics agent. He was sentenced to twelve years in Pendleton Correctional Facility, of which he served five. While there, he developed an interest in religion and philosophy and took classes from Ball State University.
Netscape Navigator, the first commercial Web browser, was released in 1994, while Doyon was incarcerated. When he returned to Cambridge, the P.L.F. was still active, and their tools had a much wider reach. The change, Doyon recalls, “was gigantic—it was the difference between sending up smoke signals and being able to telegraph someone.” Hackers defaced an Indian military Web site with the words “Save Kashmir.” In Serbia, hackers took down an Albanian site. Stefan Wray, an early online activist, defended such tactics at an “anti-Columbus Day” rally in New York. “We see this as a form of electronic civil disobedience,” he told the crowd.
In 1999, the Recording Industry Association of America sued Napster, the file-sharing service, for copyright infringement. As a result, Napster was shut down in 2001. Doyon and other hackers disabled the R.I.A.A. site for a weekend, using a Distributed Denial of Service, or DDoS, attack, which floods a site with so much data that it slows down or crashes. Doyon defended his actions, employing the heightened rhetoric of other “hacktivists.” “We quickly came to understand that the battle to defend Napster was symbolic of the battle to preserve a free internet,” he later wrote.
One day in 2008, Doyon and Commander Adama met at the P.L.F.s basement apartment in Cambridge. Adama showed Doyon the Web site of the Epilepsy Foundation, on which a link, instead of leading to a discussion forum, triggered a series of flashing colored lights. Some epileptics are sensitive to strobes; out of sheer malice, someone was trying to induce seizures in innocent people. There had been at least one victim already.
Doyon was incensed. He asked Adama who would do such a thing.
“Ever hear of a group called Anonymous?” Adama said.
----------
In 2003, Christopher Poole, a fifteen-year-old insomniac from New York City, launched 4chan, a discussion board where fans of anime could post photographs and snarky comments. The focus quickly widened to include many of the Internets earliest memes: LOLcats, Chocolate Rain, RickRolls. Users who did not enter a screen name were given the default handle Anonymous.
![](http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18505-600.jpg)
“I need to talk about my inner life.”
Poole hoped that anonymity would keep things irreverent. “We have no intention of partaking in intelligent discussions concerning foreign affairs,” he wrote on the site. One of the highest values within the 4chan community was the pursuit of “lulz,” a term derived from the acronym LOL. Lulz were often achieved by sharing puerile jokes or images, many of them pornographic or scatological. The most shocking of these were posted on a part of the site labelled /b/, whose users called themselves /b/tards. Doyon was aware of 4chan, but considered its users “a bunch of stupid little pranksters.” Around 2004, some people on /b/ started referring to “Anonymous” as an independent entity.
It was a new kind of hacker collective. “Its not a group,” Mikko Hypponen, a leading computer-security researcher, told me—rather, it could be thought of as a shape-shifting subculture. Barrett Brown, a Texas journalist and a well-known champion of Anonymous, has described it as “a series of relationships.” There was no membership fee or initiation. Anyone who wanted to be a part of Anonymous—an Anon—could simply claim allegiance.
Despite 4chans focus on trivial topics, many Anons considered themselves crusaders for justice. They launched vigilante campaigns that were purposeful, if sometimes misguided. More than once, they posed as underage girls in order to entrap pedophiles, whose personal information they sent to the police. Other Anons were apolitical and sowed chaos for the lulz. One of them posted images on /b/ of what looked like pipe bombs; another threatened to blow up several football stadiums and was arrested by the F.B.I. In 2007, a local news affiliate in Los Angeles called Anonymous “an Internet hate machine.”
In January, 2008, Gawker Media posted a video in which Tom Cruise enthusiastically touted the benefits of Scientology. The video was copyright-protected, and the Church of Scientology sent a cease-and-desist letter to Gawker, asking that the video be removed. Anonymous viewed the churchs demands as attempts at censorship. “I think its time for /b/ to do something big,” someone posted on 4chan. “Im talking about hacking or taking down the official Scientology Web site.” An Anon used YouTube to issue a “press release,” which included stock footage of storm clouds and a computerized voice-over. “We shall proceed to expel you from the Internet and systematically dismantle the Church of Scientology in its present form,” the voice said. “You have nowhere to hide.” Within a few weeks, the YouTube video had been viewed more than two million times.
Anonymous had outgrown 4chan. The hackers met in dedicated Internet Relay Chat channels, or I.R.C.s, to coördinate tactics. Using DDoS attacks, they caused the main Scientology Web site to crash intermittently for several days. Anons created a “Google bomb,” so that a search for “dangerous cult” would yield the main Scientology site at the top of the results page. Others sent hundreds of pizzas to Scientology centers in Europe, and overwhelmed the churchs Los Angeles headquarters with all-black faxes, draining the machines of ink. The Church of Scientology, an organization that reportedly has more than a billion dollars in assets, could withstand the depletion of its ink cartridges. But its leaders, who had also received death threats, contacted the F.B.I. to request an investigation into Anonymous.
On March 15, 2008, several thousand Anons marched past Scientology churches in more than a hundred cities, from London to Sydney. In keeping with the theme of anonymity, the organizers decided that all the protesters should wear versions of the same mask. After considering Batman, they settled on the Guy Fawkes mask worn in “V for Vendetta,” a dystopian movie from 2005. “It was available in every major city, in large quantities, for cheap,” Gregg Housh, one of the organizers of the protests and a well-known Anon, told me. The mask was a caricature of a man with rosy cheeks, a handlebar mustache, and a wide grin.
Anonymous did not “dismantle” the Church of Scientology. Still, the Tom Cruise video remained online. Anonymous had proved its tenacity. The collective adopted a bombastic slogan: “We are Legion. We do not forgive. We do not forget. Expect us.”
----------
In 2010, Doyon moved to Santa Cruz, California, to join a local movement called Peace Camp. Using wood that he stole from a lumberyard, he built a shack in the mountains. He borrowed WiFi from a nearby mansion, drew power from salvaged solar panels, and harvested marijuana, which he sold for cash.
At the time, the Peace Camp activists were sleeping on city property as a protest against a Santa Cruz anti-homelessness law that they considered extreme. Doyon appeared at Peace Camp meetings and offered to promote their cause online. He had an unkempt red beard and wore a floppy beige hat and quasi-military fatigues. Some of the activists called him Curbhugger Chris.
Kelley Landaker, a member of Peace Camp, spoke with Doyon several times about hacking. Doyon sometimes bragged about his technical aptitude, but Landaker, an expert programmer, was unimpressed. “He was more of a spokesman than a hands-on-the-keyboard type of person,” Landaker told me. But the movement needed a passionate leader more than it needed a coder. “He was very enthusiastic and very outspoken,” Robert Norse, also a member of Peace Camp, told me. “He created more media attention for the issue than anyone Ive seen, and Ive been doing this for twenty years.”
Commander Adama, Doyons superior in the P.L.F., who still lived in Cambridge and communicated with him via e-mail, had ordered Doyon to monitor Anonymous. Doyons brief was to observe their methods and to recruit members to the P.L.F. Recalling his revulsion at the Epilepsy Foundation hack, Doyon initially balked. Adama argued that the malicious hackers were a minority within Anonymous, and that the collective might inspire powerful new forms of activism. Doyon was skeptical. “The biggest movement in the world is going to come from 4chan?” he said. But, out of loyalty to the P.L.F., he obeyed Adama.
Doyon spent much of his time at the Santa Cruz Coffee Roasting Company, a café downtown, hunched over an Acer laptop. The main Anonymous I.R.C. did not require a password. Doyon logged in using the name PLF and followed along. Over time, he discovered back channels where smaller, more dedicated groups of Anons had dozens of overlapping conversations. To participate, you had to know the names of the back channels, which could be changed to deflect intruders. It was not a highly secure system, but it was adaptable. “These simultaneous cabals keep centralization from happening,” Gabriella Coleman, an anthropologist at McGill University, told me.
Some Anons proposed an action called Operation Payback. As the journalist Parmy Olson wrote in a 2012 book, “We Are Anonymous,” Operation Payback started as another campaign in support of file-sharing sites like the Pirate Bay, a successor to Napster, but the focus soon broadened to include political speech. In late 2010, at the behest of the State Department, several companies, including MasterCard, Visa, and PayPal, stopped facilitating donations to WikiLeaks, the vigilante organization that had released hundreds of thousands of diplomatic cables. In an online video, Anonymous called for revenge, promising to lash out at the companies that had impeded WikiLeaks. Doyon, attracted by the anti-corporate spirit of the project, decided to participate.
![](http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18473-600.jpg)
During Operation Payback, in early December, Anonymous directed new recruits, or noobs, to a flyer headed “How to Join the Fucking Hive,” in which participants were instructed to “FIX YOUR GODDAMN INTERNET. THIS IS VERY FUCKING IMPORTANT.” They were also asked to download Low Orbit Ion Cannon, an easy-to-use tool that is publicly available. Doyon downloaded the software and watched the chat rooms, waiting for a cue. When the signal came, thousands of Anons fired at once. Doyon entered a target URL—say, www.visa.com—and, in the upper-right corner, clicked a button that said “IMMA CHARGIN MAH LAZER.” (The operation also relied on more sophisticated hacking.) Over several days, Operation Payback disabled the home pages of Visa, MasterCard, and PayPal. In court filings, PayPal claimed that the attack had cost the company five and a half million dollars.
To Doyon, this was activism made tangible. In Cambridge, protesting against apartheid, he could not see immediate results; now, with the tap of a button, he could help sabotage a major corporations site. A banner headline on the Huffington Post read “MasterCard DOWN.” One gloating Anon tweeted, “There are some things WikiLeaks cant do. For everything else, theres Operation Payback.”
----------
In the fall of 2010, the Peace Camp protests ended. With slight concessions, the anti-homelessness law remained in effect. Doyon hoped to use the tactics of Anonymous to reinvigorate the movement. He recalls thinking, “I could wield Anonymous against this tiny little city government and they would just be fucking wrecked. Plan was we were finally going to solve this homelessness problem, once and for all.”
Joshua Covelli, a twenty-five-year-old Anon who went by the nickname Absolem, admired Doyons decisiveness. “Anonymous had been this clusterfuck of chaos,” Covelli told me. With Commander X, “there seemed to be a structure set up.” Covelli worked as a receptionist at a college in Fairborn, Ohio, and knew nothing about Santa Cruz politics. But when Doyon asked for help with Operation Peace Camp, Covelli e-mailed back immediately: “Ive been waiting to join something like that my entire life.”
Doyon, under the name PLF, invited Covelli into a private I.R.C.:
> Absolem: Sorry to be so rude . . . Is PLF part of Anonymous or separate?
>
> Absolem: I was just asking because you all seem very organized in chat.
>
> PLF: You are not in the least rude. I am pleased to meet you. PLF is 22 year old hacker group originally from Boston. I started hacking in 81, not with computers but PBX (telephones).
>
> PLF: We are all older 40 or over. Some of us are former military or intelligence. One of us, Commander Adama is currently sought by an alphabet soup of cops and spooks and in hiding.
>
> Absolem: Wow thats legit. I am really interested in helping this out in some way and Anonymous just seems too chaotic. I have some computer skills but very noob in hacking. I have some tools but no Idea how to use them.
With ritual solemnity, Doyon accepted Covellis request to join the P.L.F.:
> PLF: Encrypt the fuck out of all sensitive material that might incriminate you.
>
> PLF: Yep, work with any PLFer to get a message to me. Call me . . . Commander X for now.
In 2012, the Associated Press called Anonymous “a group of expert hackers”; Quinn Norton, in Wired, wrote that “Anonymous had figured out how to infiltrate anything,” resulting in “a wild string of brilliant hacks.” In fact some Anons are gifted coders, but the vast majority possess little technical skill. Coleman, the anthropologist, told me that only a fifth of Anons are hackers—the rest are “geeks and protesters.”
On December 16, 2010, Doyon, as Commander X, sent an e-mail to several reporters. “At exactly noon local time tomorrow, the Peoples Liberation Front and Anonymous will remove from the Internet the Web site of the Santa Cruz County government,” he wrote. “And exactly 30 minutes later, we will return it to normal function.”
The data-center staff for Santa Cruz County saw the warning and scrambled to prepare for the attack. They ran security scans on the servers and contacted A.T. & T., the countys Internet provider, which suggested that they alert the F.B.I.
The next day, Doyon entered a Starbucks and booted up his laptop. Even for a surfing town, he was notably eccentric: a homeless-looking man wearing fatigues and typing furiously. Covelli met him in a private chat room.
> PLF: Go to Forum, sign in—and look at top right menu bar “chat.” Thats Ops for today. Thank you for standing with us.
>
> Absolem: Anything for PLF, sir.
They both opened DDoS software. Though only a handful of people were participating in Operation Peace Camp, Doyon gave orders as if he were addressing legions of troops:
> PLF: ATTENTION: Everyone who supports the PLF or considers us their friend—or who cares about defeating evil and protecting the innocent: Operation Peace Camp is LIVE and an action is underway. TARGET: www.co.santa-cruz.ca.us. Fire At Will. Repeat: FIRE!
>
> Absolem: got it, sir.
The data-center staff watched their servers, which showed a flurry of denial-of-service requests. Despite their best efforts, the site crashed. Twenty-five minutes later, Doyon decided that he had made his point. He typed “CEASE FIRE,” and the countys site flickered back to life. (Despite the attack, the citys anti-homelessness law did not change.)
Doyon hardly had time to celebrate before he grew anxious. “I got to leave,” he typed to Covelli. He fled to his shack in the mountains. Doyon was right to be wary: an F.B.I. agent had been snooping in the I.R.C. The F.B.I. obtained a warrant to search Doyons laptop.
A few weeks later, Doyons food ran out, and he returned to town. While he was at the Santa Cruz Coffee Roasting Company, two federal agents entered the shop. They brought him to the county police station. Doyon called Ed Frey, a lawyer and the founder of Peace Camp, who met him at the station. Doyon told Frey about his alter ego as Commander X.
Doyon was released, but the F.B.I. kept his laptop, which was full of incriminating evidence. Frey, a civil-rights lawyer who knew little about cybersecurity, drove Doyon back to his hillside encampment. “What are you going to do?” Frey asked.
![](http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18447-600.jpg)
“Zach is in the gifted-and-talented-and-youre-not class.”
He spoke in cinematic terms. “Run like hell,” he said. “I will go underground, try to stay free as long as I can, and keep fighting the bastards any way possible.” Frey gave him two twenty-dollar bills and wished him luck.
----------
Doyon hitchhiked to San Francisco and stayed there for three months. He spent his days at Coffee to the People, a quirky café in the Haight-Ashbury district, where he would sit for hours in front of his computer, interrupted only by outdoor cigarette breaks.
In January, 2011, Doyon contacted Barrett Brown, the journalist and Anon. “What are we going to do next?” Doyon asked.
“Tunisia,” Brown said.
“Yeah, its a country in the Middle East,” Doyon said. “What about it?”
“Were gonna take down its dictator,” Brown said.
“Oh, they have a dictator?” Doyon said.
A couple of days later, Operation Tunisia began. Doyon volunteered to spam Tunisian government e-mail addresses in an attempt to clog their servers. “I would take the text of the press release for that op and just keep sending it over and over again,” he said. “Sometimes, I was so busy that I would just put fuck you and send it.” In one day, the Anons brought down the Web sites of the Tunisian Stock Exchange, the Ministry of Industry, the President, and the Prime Minister. They replaced the Web page of the Presidents office with an image of a pirate ship and the message “Payback is a bitch, isnt it?”
Doyon often spoke of his online battles as if he had just crawled out of a foxhole. “Dude, I turned black from doing it,” he told me. “My face, from all the smoke—it would cling to me. I would look up and I would literally be like a raccoon.” Most nights, he camped out in Golden Gate Park. “I would look at myself in the mirror and Id be like, O.K., its been four days—maybe I should eat, bathe.”
Anonymous-affiliated operations continued to be announced on YouTube: Operation Libya, Operation Bahrain, Operation Morocco. As protesters filled Tahrir Square, Doyon participated in Operation Egypt. A Facebook page disseminated information, including links to a “care package” for protesters on the ground. The package, distributed through the file-sharing site Megaupload, contained encryption software and a primer on defending against tear gas. Later, when the Egyptian government disabled Internet and cellular networks within the country, Anonymous helped the protesters find alternative ways to get online.
In the summer of 2011, Doyon succeeded Adama as Supreme Commander of the P.L.F. Doyon recruited roughly half a dozen new members and attempted to position the P.L.F. as an élite squad within Anonymous. Covelli became one of his technical advisers. Another hacker, Crypt0nymous, made YouTube videos; others did research or assembled electronic care packages. Unlike Anonymous, the P.L.F. had a strict command structure. “X always called the shots on everything,” Covelli said. “It was his way or no way.” A hacker who founded a blog called AnonInsiders told me over encrypted chat that Doyon was willing to act unilaterally—a rare thing within Anonymous. “When we wanted to start an op, he didnt mind if anyone would agree or not,” he said. “He would just write the press release by himself, list all the targets, open the I.R.C. channel, tell everyone to go in there, and start the DDoSing.”
Some Anons viewed the P.L.F. as a vanity project and Doyon as a laughingstock. “Hes known for his exaggeration,” Mustafa Al-Bassam, an Anon who went by Tflow, told me. Others, even those who disapproved of Doyons egotism, grudgingly acknowledged his importance to the Anonymous movement. “He walks that tough line of sometimes being effective and sometimes being in the way,” Gregg Housh said, adding that he and other prominent Anons had faced similar challenges.
Publicly, Anonymous persists in claiming to be non-hierarchical. In “We Are Legion,” a 2012 documentary about Anonymous by Brian Knappenberger, one activist uses the metaphor of a flock of birds, with various individuals taking turns drifting toward the front. Gabriella Coleman told me that, despite such claims, something resembling an informal leadership class did emerge within Anonymous. “The organizer is really important,” she said. “There are four or five individuals who are really good at it.” She counted Doyon among them. Still, Anons tend to rebel against institutional structure. In a forthcoming book about Anonymous, “Hacker, Hoaxer, Whistleblower, Spy,” Coleman writes that, among Anons, “personal identity and the individual remain subordinate to a focus on the epic win—and, especially, the lulz.”
Anons who seek individual attention are often dismissed as “egofags” or “namefags.” (Many Anons have yet to outgrow their penchant for offensive epithets.) “There are surprisingly few people who violate the rule” against attention-seeking, Coleman says. “Those who do, like X, are marginalized.” Last year, in an online discussion forum, a commenter wrote, “I stopped reading his BS when he started comparing himself to Batman.”
Peter Fein, an online activist known by the nickname n0pants, was among the many Anons who were put off by Doyons self-aggrandizing rhetoric. Fein browsed the P.L.F. Web site, which featured a coat of arms and a manifesto about the groups “epic battle for the very soul of humanity.” Fein was dismayed to find that Doyon had registered the site using his real name, leaving himself and possibly other Anons vulnerable to prosecution. “Im basically okay with people DDoSing,” Fein recalls telling Doyon over private chat. “But if youre going to do it, youve got to cover your ass.”
On February 5, 2011, the Financial Times reported that Aaron Barr, the C.E.O. of a cybersecurity firm called HBGary Federal, had identified the “most senior” members of Anonymous. Barrs research suggested that one of the top three was Commander X, a hacker based in California, who could “manage some significant firepower.” Barr contacted the F.B.I. and offered to share his work with them.
Like Fein, Barr had seen that the P.L.F. site was registered to Christopher Doyon at an address on Haight Street. Based on Facebook and I.R.C. activity, Barr concluded that Commander X was Benjamin Spock de Vries, an online activist who had lived near the Haight Street address. Barr approached de Vries on Facebook. “Please tell the folks there that I am not out to get you guys,” Barr wrote. “Just want the leadership to know what my intent is.”
“Leadership lmao,” de Vries responded.
Days after the Financial Times story appeared, Anonymous struck back. HBGary Federals Web site was defaced. Barrs personal Twitter account was hijacked, thousands of his e-mails were leaked online, and Anons released his address and other personal information—a punishment known as doxing. Barr resigned from HBGary Federal within the month.
----------
In April, 2011, Doyon left San Francisco and hitchhiked around the West, camping in parks at night and spending his days at Starbucks outlets. In his backpack he kept his laptop, his Guy Fawkes mask, and several packs of Pall Malls.
He followed internal Anonymous news. That spring, six élite Anons, all of whom had been instrumental in deflecting Barr’s investigation, formed a group called Lulz Security, or LulzSec. As their name indicated, they felt that Anonymous had become too self-serious; they aimed to bring the lulz back. While Anonymous continued supporting Arab Spring protesters, LulzSec hacked the Web site of PBS and posted a fake story claiming that the late rapper Tupac Shakur was alive in New Zealand.
Anons often share text through the Web site Pastebin.com. On the site, LulzSec issued a statement that read, “It has come to our unfortunate attention that NATO and our good friend Barrack Osama-Llama 24th-century Obama have recently upped the stakes with regard to hacking. They now treat hacking as an act of war.” The loftier the target, the greater the lulz. On June 15th, LulzSec took credit for crashing the C.I.A.’s Web site, tweeting, “Tango down—cia.gov—for the lulz.”
On June 20, 2011, Ryan Cleary, a nineteen-year-old member of LulzSec, was arrested for the DDoS attacks on the C.I.A. site. The next month, F.B.I. agents arrested fourteen other hackers for DDoS attacks on PayPal seven months earlier. Each of the PayPal Fourteen, as they became known, faced fifteen years in prison and a five-hundred-thousand-dollar fine. They were charged with conspiracy and intentional damage to protected computers under the Computer Fraud and Abuse Act. (The law allows for wide prosecutorial discretion and was widely criticized after Aaron Swartz, an Internet activist who was facing thirty-five years in prison, committed suicide last year.)
A petition was circulated on behalf of Jake (Topiary) Davis, a member of LulzSec, who needed help paying his legal fees. Doyon entered an I.R.C. to promote Davis’s cause:
> CommanderX: Please sign the petition and help Topiary…
>
> toad: you are an attention whore
>
> toad: so you get attention
>
> CommanderX: Toad your an asshole.
>
> katanon: sigh
Doyon had grown increasingly brazen. He DDoSed the Web site of the Chamber of Commerce of Orlando, Florida, after activists there were arrested for feeding the homeless. He launched the attacks from public WiFi networks, using his personal laptop, without making much effort to cover his tracks. “That’s brave but stupid,” a senior member of the P.L.F. who asked to be called Kalli told me. “He didn’t seem to care if he was caught. He was a suicide hacker.”
Two months later, Doyon participated in a DDoS strike against San Francisco’s Bay Area Rapid Transit, protesting an incident in which a BART police officer had killed a homeless man named Charles Hill. Doyon appeared on the “CBS Evening News” to defend the action, his voice disguised and his face obscured by a bandanna. He compared DDoS attacks to civil disobedience. “It’s no different, really, than taking up seats at the Woolworth lunch counters,” he said. Bob Schieffer, the CBS anchor, snickered and said, “It’s not quite the civil-rights movement, as I see it.”
On September 22, 2011, in a coffee shop in Mountain View, California, Doyon was arrested and charged with causing intentional damage to a protected computer. He was detained for a week and released on bond. Two days later, against his lawyer’s advice, he called a press conference on the steps of the Santa Cruz County Courthouse. His hair in a ponytail, he wore dark sunglasses, a black pirate hat, and a camouflage bandanna around his neck.
In characteristically melodramatic fashion, Doyon revealed his identity. “I am Commander X,” he told reporters. He raised his fist. “I am immensely proud, and humbled to the core, to be a part of the idea called Anonymous.” He told a journalist, “All you need to be a world-class hacker is a computer and a cool pair of sunglasses. And the computer is optional.”
Kalli worried that Doyon was placing his ego above the safety of other Anons. “It’s the weakest link in the chain that ends up taking everyone down,” he told me. Josh Covelli, the Anon who had been eager to help Doyon with Operation Peace Camp, told me that his “jaw dropped” when he saw a video of Doyon’s press conference online. “The way he presented himself and the way he acted had become more unhinged,” Covelli said.
Three months later, Doyon’s pro-bono lawyer, Jay Leiderman, was in a federal court in San Jose. Leiderman had not heard from Doyon in a couple of weeks. “I’m inquiring as to whether there’s a reason for that,” the judge said. Leiderman had no answer. Doyon was absent from another hearing two weeks later. The prosecutor stated the obvious: “It appears as though the defendant has fled.”
----------
Operation Xport was the first Anonymous operation of its kind. The goal was to smuggle Doyon, now a fugitive wanted for two felonies, out of the country. The coördinators were Kalli and a veteran Anon who had met Doyon at an acid party in Cambridge during the eighties. A retired software executive, he was widely respected within Anonymous.
Doyon’s ultimate destination was the software executive’s house, deep in rural Canada. In December, 2011, he hitchhiked to San Francisco and made his way to an Occupy encampment downtown. He found his designated contact, who helped him get to a pizzeria in Oakland. At 2 A.M., Doyon, using the pizzeria’s WiFi, received a message on encrypted chat.
“Are you near a window?” the message read.
“Yeah,” Doyon typed.
“Look across the street. Do you see the green mailbox? In exactly fifteen minutes, go and stand next to that mailbox and set your backpack down, and lay your mask on top of it.”
For a few weeks, Doyon shuttled among safe houses in the Bay Area, following instructions through encrypted chat. Eventually, he took a Greyhound bus to Seattle, where he stayed with a friend of the software executive. The friend, a wealthy retiree, spent hours using Google Earth to help Doyon plot a route to Canada. They went to a camping-supplies store, and the friend spent fifteen hundred dollars on gear for Doyon, including hiking boots and a new backpack. Then he drove Doyon two hours north and dropped him off in a remote area, several hundred miles from the border, where Doyon met up with Amber Lyon.
Months earlier, Lyon, a broadcast journalist, had interviewed Doyon for a CNN segment about Anonymous. He liked her report, and they stayed in touch. Lyon asked to join Doyon on his escape, to shoot footage for a possible documentary. The software executive thought that it was “nuts” to take the risk, but Doyon invited her anyway. “I think he wanted to make himself a face of the movement,” Lyon told me. For four days, she filmed him as he hiked north, camping in the woods. “It wasn’t very organized,” Lyon recalls. “He was functionally homeless, so he just kind of wandered out of the country.”
On February 11, 2012, a press release appeared on Pastebin. “The PLF is delighted to announce that Commander X, aka Christopher Mark Doyon, has fled the jurisdiction of the USA and entered the relative safety of the nation of Canada,” it read. “The PLF calls upon the government of the USA to come to its senses and cease the harassment, surveillance—and arrest of not only Anonymous, but all activists.”
----------
In Canada, Doyon spent a few days with the software executive in a small house in the woods. In a chat with Barrett Brown, Doyon was effusive.
> BarrettBrown: you have enough safe houses, etc? . . .
>
> CommanderX: Yes I am good here, money and houses a plenty in Canada.
>
> CommanderX: Amber Lyon asked me on camera about you.
>
> CommanderX: I think you will like my reply, and fuck the trolls Barrett. I have always loved you and always will.
>
> CommanderX: :-)
>
> CommanderX: I told her you were a hero.
>
> BarrettBrown: youre a hero . . .
>
> BarrettBrown: glad youre safe for now
>
> BarrettBrown: let me know if you need anything
>
> CommanderX: I am, and if this works we can get others out to . . . .
>
> BarrettBrown: good, were going to need that
Ten days after Doyon’s escape, the Wall Street Journal reported that Keith Alexander, then the N.S.A. and U.S. Cyber Command director, had held classified meetings in the White House and elsewhere during which he expressed concern about Anonymous. Within two years, Alexander warned, the group might be capable of destabilizing national power grids. General Martin Dempsey, the chairman of the Joint Chiefs of Staff, told the Journal that an enemy of the U.S. “could give cyber malware capability to some fringe group,” adding, “We have to get after this.”
On March 8th, a briefing on cybersecurity was held for members of Congress at a Sensitive Compartmented Information Facility near the Capitol Building. Many of the country’s top security officials attended the briefing, including Alexander, Dempsey, Robert Mueller, the head of the F.B.I., and Janet Napolitano, the Secretary of Homeland Security. Attendees were shown a computer simulation of what a cyberattack on the Eastern Seaboard’s electrical supply might look like. Anonymous was not yet capable of mounting an attack on this scale, but security officials worried that they might join forces with other, more sophisticated groups. “As we were dealing with this ever-increasing presence on the Net and ever-increasing risk, the government nuts and bolts were still being worked out,” Napolitano told me. When discussing potential cybersecurity threats, she added, “We often used Anonymous as Exhibit A.”
Anonymous might be the most powerful nongovernmental hacking collective in the world. Even so, it has never demonstrated an ability or desire to damage any key elements of public infrastructure. To some cybersecurity experts, the dire warnings about Anonymous sounded like fearmongering. “There’s a big gap between declaring war on Orlando and pulling off a Stuxnet attack,” James Andrew Lewis, a senior fellow at the Center for Strategic and International Studies, told me, referring to the elaborate cyberstrike carried out by the U.S. and Israel against Iranian nuclear sites in 2007. Yochai Benkler, a professor at Harvard Law School, told me, “What we’ve seen is the use of drumbeating as justification for major defense spending of a form that would otherwise be hard to justify.”
Keith Alexander, who recently retired from the government, declined to comment for this story, as did representatives from the N.S.A., the F.B.I., the C.I.A., and the D.H.S. Although Anons have never seriously compromised government computer networks, they have a record of seeking revenge against individuals who anger them. Andy Purdy, the former head of the national-cybersecurity division of the D.H.S., told me that “a fear of retaliation,” both institutional and personal, prevents government representatives from speaking out against Anonymous. “Everyone is vulnerable,” he said.
----------
On March 6, 2012, Hector Xavier Monsegur, a key member of LulzSec with the screen name Sabu, was revealed to be an F.B.I. informant. In exchange for a reduced sentence, Monsegur had spent several months undercover, helping to gather evidence against other LulzSec members. The same day, five leading Anons were arrested and charged with several crimes, including computer conspiracy. An F.B.I. official told Fox News, “This is devastating to the organization. We’re chopping off the head of LulzSec.” Over the next ten months, Barrett Brown was indicted on seventeen federal charges, most of which were later dropped. (He will be sentenced in October.)
Doyon was distraught, but he continued to hack—and to seek attention. He appeared, masked, at a Toronto screening of a documentary about Anonymous. He gave an interview to a reporter from the National Post and boasted, without substantiation, “We have access to every classified database in the U.S. government. It’s a matter of when we leak the contents of those databases, not if.”
In January, 2013, after another Anon started an operation about the rape of a teen-age girl in Steubenville, Ohio, Doyon repurposed LocalLeaks, a site he had created two years earlier, as a clearinghouse for information about the rape. Like many Anonymous efforts, LocalLeaks was both influential and irresponsible. It was the first site to widely disseminate the twelve-minute video of a Steubenville High School graduate joking about the rape, which inflamed public outrage about the story. But the site also perpetuated several false rumors about the case, and it failed to redact a court document, thus accidentally revealing the rape victim’s name. Doyon admitted to me that his strategy of releasing unexpurgated materials was controversial, but he recalled thinking, “We could either gut the Steubenville Files . . . or we could release everything we know, basically, with the caveat, ‘Hey, you’ve got to trust us.’”
In May, 2013, the Rustle League, a group of online trolls who often provoke Anonymous, hacked Doyon’s Twitter account. Shm00p, one of the leaders of Rustle League, told me, “We’re not trying to cause harm to the guy, but, just, the shit he was saying—it was comical to me.” The Rustle League implanted racist and anti-Semitic messages into Doyon’s account, such as a link to www.jewsdid911.org.
On August 27, 2013, Doyon posted a note announcing his retirement from Anonymous. “My entire life has been dedicated to fighting for justice and freedom,” he wrote. “‘Commander X’ may be invincible, but I am extremely ill from the exhaustion and stress of fighting in this epic global cyber war.” Reactions varied from compassion (“you deserve a rest”) to ridicule (“poor crazy old gnoll. Maybe he has some time for bathing now”). Covelli told me, “The persona has consumed him to the point where he can’t handle it anymore.”
----------
The first Million Mask March took place on November 5, 2013. Several thousand people marched in support of Anonymous, in four hundred and fifty cities around the world. In a sign of how deeply Anonymous had penetrated popular culture, one protester in London removed his Guy Fawkes mask to reveal that he was the actor Russell Brand.
While I attended the rally in Washington, D.C., Doyon watched a livestream in Canada. I exchanged e-mails with him on my phone. “It is so surreal to sit here, sidelined and out of the game—and watch something that you helped create turn into this,” he wrote. “At least it all made a difference.”
We arranged a face-to-face meeting. Doyon insisted that I submit to elaborate plans made over encrypted chat. I was to fly to an airport several hours away, rent a car, drive to a remote location in Canada, and disable my phone.
I found him in a small, run-down apartment building in a quiet residential neighborhood. He wore a green Army-style jacket and a T-shirt featuring one of Anonymous’s logos: a black-suited man with a question mark instead of a face. The apartment was sparsely furnished and smelled of cigarette smoke. He discussed U.S. politics (“I have not voted in many elections—it’s all a rigged game”), militant Islam (“I believe that people in the Nigerian government essentially colluded to create a completely phony Al Qaeda affiliate called Boko Haram”), and his tenuous position within Anonymous (“These people who call themselves trolls are really just rotten, mean, evil people”).
Doyon had shaved his beard, and he looked gaunt. He told me that he was ill and that he rarely went outside. On his small desk were two laptops, a stack of books about Buddhism, and an overflowing ashtray. A Guy Fawkes mask hung on an otherwise bare yellow wall. He told me, “Underneath the whole X persona is a little old man who is in absolute agony at times.”
This past Christmas, the founder of the news site AnonInsiders visited him, bearing pie and cigarettes. Doyon asked the friend to succeed him as Supreme Commander of the P.L.F., offering “the keys to the kingdom”—all his passwords, as well as secret files relating to several Anonymous operations. The friend gently declined. “I have a life,” he told me.
----------
On August 9, 2014, at 5:09 P.M. local time, Kareem (Tef Poe) Jackson, a rapper and activist from Dellwood, Missouri, a suburb of St. Louis, tweeted about a crisis unfolding in a neighboring town. “Basically martial law is taking place in Ferguson all perimeters blocked coming and going,” he wrote. “National and international friends Help!!!” Five hours earlier in Ferguson, an unarmed eighteen-year-old African-American, Michael Brown, had been shot to death by a white police officer. The police claimed that Brown had reached for the officer’s gun. Brown’s friend Dorian Johnson, who was with him at the time, said that Brown’s only offense was refusing to leave the middle of the street.
Within two hours, Jackson received a reply from a Twitter account called CommanderXanon. “You can certainly expect us,” the message read. “See if you can get us some live streams going, that would be useful.” In recent weeks, Doyon, still in Canada, had come out of retirement. In June, two months before his fiftieth birthday, he quit smoking (“#hacktheaddiction #ecigaretteswork #old,” he later tweeted). The following month, after fighting broke out in Gaza, he tweeted in support of Anonymous’s Operation Save Gaza, a series of DDoS strikes against Israeli Web sites. Doyon found the events in Ferguson even more compelling. Despite his idiosyncrasies, he had a knack for being early to a cause.
“Start collecting URLs for cops, city government,” Doyon tweeted. Within ten minutes, he had created an I.R.C. channel. “Anonymous Operation Ferguson is engaged,” he tweeted. Only two people retweeted the message.
The next morning, Doyon posted a link to a rudimentary Web site, which included a message to the people of Ferguson—“You are not alone, we will support you in every way possible”—and an ultimatum to the police: “If you abuse, harass or harm in any way the protesters in Ferguson, we will take every Web based asset of your departments and governments off line. That is not a threat, it is a promise.” Doyon appealed to the most visible Anonymous Twitter account, YourAnonNews, which has 1.3 million followers. “Please support Operation Ferguson,” he wrote. A minute later, YourAnonNews complied. That day, the hashtag #OpFerguson was tweeted more than six thousand times.
The crisis became a top news story, and Anons rallied around Operation Ferguson. As with the Arab Spring operations, Anonymous sent electronic care packages to protesters on the ground, including a riot guide (“Pick up the gas emitter and lob it back at the police”) and printable Guy Fawkes masks. As Jackson and other protesters marched through Ferguson, the police attempted to subdue them with rubber bullets and tear gas. “It looked like a scene from a Bruce Willis movie,” Jackson told me. “Barack Obama hasn’t supported us to the degree Anonymous has,” he said. “It’s comforting to know that someone out there has your back.”
One site, www.opferguson.com, turned out to be a honeypot—a trap designed to collect the Internet Protocol addresses of visitors and turn them over to law-enforcement agencies. Some suspected Commander X of being a government informant. In the #OpFerguson I.R.C., someone named Sherlock wrote, “Everyone got me scared clicking links. Unless it’s from a name I’ve seen a lot, I just avoid them.”
Protesters in Ferguson asked the police to reveal the name of the officer who had shot Brown. Several times, Anons echoed this demand. Someone tweeted, “Ferguson police better release the shooter’s name before Anonymous does the work for them.” In a community meeting on August 12th, Jon Belmar, the Chief of the St. Louis Police Department, refused. “We do not do that until they’re charged with an offense,” he said.
In retaliation, a hacker with the handle TheAnonMessage tweeted a link to what he claimed was a two-hour audio file of a police radio scanner, recorded around the time of Brown’s death. TheAnonMessage also doxed Belmar, tweeting what he purported to be the police chief’s home address, phone number, and photographs of his family—one of his son sleeping on a couch, another of Belmar posing with his wife. “Nice photo, Jon,” TheAnonMessage tweeted. “Your wife actually looks good for her age. Have you had enough?” An hour later, TheAnonMessage threatened to dox Belmar’s daughter.
Richard Stallman, the first-generation hacker from M.I.T., told me that though he supports many of Anonymous’s causes, he considered these dox attacks reprehensible. Even internally, TheAnonMessage’s actions were divisive. “Why bother doxing people who weren’t involved?” one Anon asked over I.R.C., adding that threatening Belmar’s family was “beyond stupid.” But TheAnonMessage and other Anons continued to seek information that could be used in future dox attacks. The names of Ferguson Police Department employees were available online, and Anons scoured the Internet, trying to suss out which of the officers had killed Brown.
In the early morning of August 14th, a few Anons became convinced, based on Facebook photos and other disparate clues, that Brown’s shooter was a thirty-two-year-old man named Bryan Willman. According to a transcript of an I.R.C., one Anon posted a photo of Willman with a swollen face; another noted, “The shooter claimed to have been hit in the face.” Another user, Anonymous|11057, acknowledged that his suspicion of Willman involved “a leap of probably bad logic.” Still, he wrote, “i just cant shake it. i really truly honestly and without a shred of hard evidence think its him.”
TheAnonMessage seemed amused by the conversation, writing, “#RIPBryanWillman.” Other Anons urged caution. “Please be sure,” Anonymous|2252 wrote. “It’s not just about a man’s life, Anon can easily be turned on by the public if something truly unjust comes of this.”
The debate went on for more than an hour. Several Anons pointed out that there was no way to confirm that Willman had ever been a Ferguson police officer.
> Anonymous|3549: @gs we still dont have a confirmation that bry is even on PD
>
> Intangir: tensions are high enough right now where there is a slim chance someone might care enough to kill him
>
> Anonymous|11057: the only real way to get a confirmation would be an eyewitness report from the scene of the crime. otherwise its hearsay and shillery
>
> Anonymous|11057: the fastest way to eliminate a suspect is to call him a suspect . . . we are all terrified of being unjust, but the pegs keep fitting in the holes . . .
Many Anons remained uncomfortable with the idea of a dox. But around 7 A.M. a vote was taken. According to chat logs, of the eighty or so people in the I.R.C., fewer than ten participated. They decided to release Willman’s personal information.
> Anonymous|2252: is this going on twitter?
>
> anondepp: lol
>
> Anonymous|2252: via @theanonmessage ?
>
> TheAnonMessage: yup
>
> TheAnonMessage: just did
>
> anondepp: its up
>
> Anonymous|2252: shit
>
> TheAnonMessage: Lord in heaven…
>
> Anonymous|3549: . . .have mercy on our souls
>
> anondepp: lol
At 9:45 A.M., the St. Louis Police Department responded to TheAnonMessage. “Bryan Willman is not even an officer with Ferguson or St. Louis County PD,” the tweet read. “Do not release more info on this random citizen.” (The F.B.I. later opened an investigation into the hacking of police computers in Ferguson.) Twitter quickly suspended TheAnonMessage, but Willman’s name and address had been reported widely.
Willman is the head police dispatcher in St. Ann, a suburb west of Ferguson. When the St. Louis Police Department’s Intelligence Unit called to tell him that he had been named as the killer, Willman told me, “I thought it was the weirdest joke.” Within hours, he received hundreds of death threats on his social-media accounts. He stayed in his house for nearly a week, alone, under police protection. “I just want it all to go away,” he told me. He thinks that Anonymous has irreparably harmed his reputation. “I don’t see how they can ever think they can be trusted again,” he said.
“We are not perfect,” OpFerguson tweeted. “Anonymous makes mistakes, and we’ve made a few in the chaos of the past few days. For those, we apologize.” Though Doyon was not responsible for the errant dox attack, other Anons took the opportunity to shame him for having launched an operation that spiralled out of control. A Pastebin message, distributed by YourAnonNews, read, “You may notice contradictory tweets and information about #Ferguson and #OpFerguson from various Anonymous twitter accounts. Part of why there is dissension about this particular #op is that CommanderX is considered a namefag/facefag—a known entity who enjoys or at least doesn’t shun publicity—which is considered by most Anonymous to be bad form, for some probably fairly obvious reasons.”
On his personal Twitter account, Doyon denied any involvement with Op Ferguson and wrote, “I hate this shit. I don’t want drama and I don’t want to fight with people I thought were friends.” Within a couple of days, he was sounding hopeful again. He recently retweeted messages reading, “You call them rioters, we call them voices of the oppressed” and “Free Tibet.”
Doyon is still in hiding. Even Jay Leiderman, his attorney, does not know where he is. Leiderman says that, in addition to the charges in Santa Cruz, Doyon may face indictment for his role in the PayPal and Orlando attacks. If he is arrested and convicted on all counts, he could spend the rest of his life in prison. Following the example of Edward Snowden, he hopes to apply for asylum with the Russians. When we spoke, he used a lit cigarette to gesture around his apartment. “How is this better than a fucking jail cell? I never go out,” he said. “I will never speak with my family again. . . . It’s an incredibly high price to pay to do everything you can to keep people alive and free and informed.”
--------------------------------------------------------------------------------
via: http://www.newyorker.com/magazine/2014/09/08/masked-avengers
作者:[David Kushner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.newyorker.com/contributors/david-kushner
@ -1,89 +0,0 @@
Drab Desktop? Try These 4 Beautiful Linux Icon Themes
================================================================================
**Ubuntu’s default icon theme [hasn’t changed much][1] in almost 5 years, save for the [odd new icon here and there][2]. If you’re tired of how it looks, we’re going to show you a handful of gorgeous alternatives that will easily freshen things up.**
Do feel free to share links to your own favourite choices in the comments below.
### Captiva ###
![Captiva icons, elementary folders and Moka GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-and-captiva.jpg)
Captiva icons, elementary folders and Moka GTK
Captiva is a relatively new icon theme that even the least bling-prone user can appreciate.
Made by DeviantArt user ~[bokehlicia][3], Captiva shuns the 2D flat look of many current icon themes for a softer, rounded look. The icons themselves have an almost material or textured look, with subtle drop shadows and a rich colour palette adding to the charm.
It doesn’t yet include a set of its own folder icons, and will fall back to using elementary (if available) or stock Ubuntu icons.
To install Captiva icons in Ubuntu 14.04 you can add the official PPA by opening a new Terminal window and entering the following commands:
sudo add-apt-repository ppa:captiva/ppa
sudo apt-get update && sudo apt-get install captiva-icon-theme
Or, if you’re not into software-source cruft, by downloading the icon pack directly from the DeviantArt page. To install, extract the archive and move the resulting folder to the .icons directory in your Home folder.
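The manual route can be sketched in the shell like this (a rough sketch: the archive filename below is an assumption; substitute whatever file you actually downloaded from DeviantArt):

```shell
# Assumed download location and name -- adjust to match your file.
ARCHIVE="$HOME/Downloads/captiva_icon_theme.tar.gz"

# Per-user icon themes live in ~/.icons; create it if it is missing.
mkdir -p "$HOME/.icons"

# Extract the theme folder straight into ~/.icons.
# (Guarded so nothing happens if the archive is not there yet.)
if [ -f "$ARCHIVE" ]; then
    tar -xf "$ARCHIVE" -C "$HOME/.icons"
fi
```

Themes dropped into `~/.icons` are picked up per-user, without needing root.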
However you choose to install it, you’ll need to apply this (and every other theme on this list) using a utility like [Unity Tweak Tool][4].
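If you’d rather skip the GUI utility, GNOME-based desktops (Unity included) usually let you switch icon themes with `gsettings`. A minimal sketch, assuming the installed theme folder is named `Captiva`:

```shell
THEME="Captiva"  # assumption: the folder name of the theme you installed

# Try to apply the icon theme via gsettings; if there is no running
# GNOME/Unity session (or gsettings is absent), print a hint instead.
if gsettings set org.gnome.desktop.interface icon-theme "$THEME" 2>/dev/null; then
    echo "Icon theme set to $THEME"
else
    echo "Could not set theme automatically; apply '$THEME' with Unity Tweak Tool"
fi
```

The theme name must match the folder name under `~/.icons` or `/usr/share/icons`.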
- [Captiva Icon Theme on DeviantArt][5]
### Square Beam ###
![Square Beam icon set with Orchis GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/squarebeam.jpg)
Square Beam icon set with Orchis GTK
After something a bit angular? Check out Square Beam. It offers a more imposing visual statement than other sets on this list, with electric colours, harsh gradients and stark iconography. It claims to have more than 30,000 different icons (!) included (you’ll forgive me for not counting), so you should find very few gaps in its coverage.
- [Square Beam Icon Theme on GNOME-Look.org][6]
### Moka & Faba ###
![Moka/Faba Mono Icons with Orchis GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-faba.jpg)
Moka/Faba Mono Icons with Orchis GTK
The Moka icon suite needs little introduction. In fact, I’d wager a good number of you are already using it.
With pastel colours, soft edges and simple icon artwork, Moka is a truly standout and comprehensive set of application icons. It’s best used with its sibling, Faba, which Moka will inherit so as to fill in all the system icons, folders, panel icons, etc. The combined result is… well, you’ve got eyes!
For full details on how to install on Ubuntu head over to the official project website, link below.
- [Download Moka and Faba Icon Themes][7]
### Compass ###
![Compass Icon Theme with Numix Blue GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/compass1.jpg)
Compass Icon Theme with Numix Blue GTK
Last on our list, but by no means least, is Compass. This is a true adherent to the current trend for 2D, two-tone UI design. It may not be as visually diverse as others on this list, but that’s the point. It’s consistent and uniform and all the better for it — just check out those folder icons!
It’s available to download and install manually through GNOME-Look (link below) or through the Nitrux Artwork PPA:
sudo add-apt-repository ppa:nitrux/nitrux-artwork
sudo apt-get update && sudo apt-get install compass-icon-theme
- [Compass Icon Theme on GNOME-Look.org][8]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/09/4-gorgeous-linux-icon-themes-download
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2010/02/lucid-gets-new-icons-for-rhythmbox-ubuntuone-memenu-more
[2]:http://www.omgubuntu.co.uk/2012/08/new-icon-theme-lands-in-lubuntu-12-10
[3]:http://bokehlicia.deviantart.com/
[4]:http://www.omgubuntu.co.uk/2014/06/unity-tweak-tool-0-7-development-download
[5]:http://bokehlicia.deviantart.com/art/Captiva-Icon-Theme-479302805
[6]:http://gnome-look.org/content/show.php/Square-Beam?content=165094
[7]:http://mokaproject.com/moka-icon-theme/download/ubuntu/
[8]:http://gnome-look.org/content/show.php/Compass?content=160629
@ -1,100 +0,0 @@
5 Reasons Why I Hate GNU/Linux Do You Hate (Love) Linux?
================================================================================
I don’t often like to talk about this side of Linux, but sometimes I really do feel that some aspects of it are a genuine pain. Here are the five points which I come across on an almost daily basis.
![5 Reasons Why I Hate Linux](http://www.tecmint.com/wp-content/uploads/2014/09/I-Hate-Linux.jpg)
5 Reasons Why I Hate Linux
### 1. Choose from Too Many Good Distros ###
While reading several online forums (part of my hobby), I very often come across a question like: "Hi, I am new to Linux, I just [switched over from Windows to Linux][1]. Which Linux distribution should I get my hands dirty with? Oh! I forgot to mention, I am an engineering student."
As soon as someone posts such a question, there is a flood of comments. Each distribution's fanboy tries to argue that the distro he is using leads all the rest. A few comments may look like:
1. Get your hands on Linux Mint or Ubuntu; they are easy to use, especially for newbies like you.
1. Ubuntu is Sh**, better go with Mint.
1. If you want something like Windows, better stay there.
1. Nothing is better than Debian. It is easy to use and contains all the packages you may need.
1. Slackware, full stop: if you learn Slack, you learn Linux.
At this point, the student who asked the question gets really confused and annoyed.
1. CentOS: nothing like it when it comes to stability.
1. I will recommend Fedora: bleeding-edge technology implementation, you will get a lot to learn.
1. Puppy Linux, SUSE, BSD, Manjaro, Mageia, Kali, RedHat Beta, etc.……
By the end, the thread could serve as a research paper, based upon the facts and figures provided in the comments.
Now think of the same in the Windows or Mac world. One may say "are you insane? Still using Windows XP or Vista?", but no one will try to prove that Windows 8 is better than XP while another insists XP is the more user-friendly one. You won't find a Mac fanboy either who jumps into the discussion just to make his point sound louder.
You may frequently come across remarks like "distros are like religions". These things leave the newbie puzzled. Anyone who has used Linux for a considerable time knows that all the distros are the same at the base. Only the working interface and the way tasks are performed differ, and even that rarely. Whether you use apt, yum, portage, emerge, spike or ABS, who cares, as long as the things get done and the user is comfortable with it.
Well, the above scenario is not only true of online forums and groups; it sometimes carries over into the corporate world.
I was recently interviewed by a company based in Mumbai (India). The interviewer asked me several questions about the technologies I have worked with. As per their requirements, I had worked with nearly half of the technologies they were looking for. The last part of the conversation is mentioned below.
**Interviewer**: Do you know kernel editing? (Then he talked to himself for a couple of seconds: no, no, not kernel editing, that is a very different thing.) Do you know how to compile a kernel on the monolithic side?
**Me**: Yes, we just need to make sure what we need to run in future. We need to select those options only that supports our need before compiling the kernel.
**Interviewer**: How do you compile a kernel?
**Me**: make menuconfig, fire it as………..(interrupted)
**Interviewer**: When have you compiled the kernel lastly without any help?
**Me**: Very recently on my Debian…..(Interrupted)
**Interviewer**: Debian? Do you know what we do? Debian-Febian is of no use to us. We use CentOS. OK, I will tell the management the result. They will call you.
**Not to Mention**: I didn't get the call or the job, but the phrase **Debian-febian** certainly forced me to think it over, again and again. He could simply have said "we don't use Debian, we use CentOS". His tone was a bit racist, and that attitude is spread all over.
### 2. Some of the very important software has no support in Linux ###
No! I am not talking about Photoshop. I understand Linux is not built to perform such tasks. But some backbone software required to connect your Android phone to a PC for updating, i.e. a PC Suite, certainly means a lot; for that I have had to look for a Windows PC.
I know Linux is more of a server-side OS. Really? Isn't it also trying to make the point that it can be used as a desktop as well? If yes, it should have better-developed desktop features. For a desktop user, security, stability, RAID and the kernel do not mean much. They should get their work done with little or no effort.
Moreover, companies like Samsung, Sony, Micromax, etc. are dealing in Android (Linux) phones, and they provide no support for connecting their phones to a Linux PC.
Don't drag me into the PC Suite discussion. For Linux to be a desktop OS, it still lacks several things: little or no gaming support (I mean high-end gaming), and no professional video and photo editing tools (I said professional). And yeah, I remember the Titanic and Avatar movies were made using some kind of FOSS video editor; I am coming to that point.
Agree or not, Linux still has to go a long way to be a distro for everyone.
### 3. Linuxer have a habit of living in virtual world ###
"I am a Linux user, and I am superior to you. I can handle the terminal much better than you. You know, Linux is everywhere: in your wrist watch, mobile phone, remote control. You know what, hackers use Linux. Are you aware that as soon as you boot Linux you become a hacker? You can do several things from Linux you can't even think of doing on Windows and Mac."
"Let me tell you, Linux is now being used on the International Space Station. The world's most successful movies, Avatar and Titanic, were made using Linux. Last but not the least, 90% of the world's supercomputers are running Linux. The world's top 5 fastest computers are running Linux. Facebook, LinkedIn, Google and Yahoo all have their servers based on Linux."
I don't mean they are wrong. I only mean they keep on talking about things they know very little about.
### 4. The long hours of compilation and dependency resolution ###
I am aware of automatic dependency resolution and that the tools are getting smarter day by day. Still, think of it from a corporate view: I was installing a program, say **y**; it had one dependency, say **x**, which could not be resolved automatically. While resolving **x** I came across 8 other dependencies, a few of which were in turn dependent on a few other libraries and programs. Isn't it painful?
The corporate rule is to get the work done efficiently, with less manpower and in as little time as possible. Who cares whether your code comes from Windows, Mac or Linux, as long as the work gets done.
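At its core, dependency resolution is an ordering problem over a graph. As a minimal sketch, coreutils' `tsort` can derive an install order from known dependency edges; the package names (**y**, **x**, and the libraries) are the hypothetical ones from above, not real packages:

```shell
# Each input line is "prerequisite dependent": libbar must come before
# libfoo, both libraries before x, and x before the program y.
printf '%s\n' \
  'libbar libfoo' \
  'libbar x' \
  'libfoo x' \
  'x y' | tsort
# tsort prints prerequisites before the packages that need them,
# which is the same walk a package manager performs (plus the actual
# downloading and installing) for every dependency chain it resolves.
```

With these constraints the order is fully determined: libbar, libfoo, x, y.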
### 5. Too much manual work ###
No matter which distro you choose, you have to do a lot of things manually from time to time. Let's say you are installing the proprietary Nvidia driver. Now you need to kill **X** manually, may need to edit **Xorg.conf** manually, and may still end up with a broken **X**. Furthermore, you have to make sure that the driver is still in working condition the next time the kernel updates.
Think of the same on Windows. You have nothing to do other than run the executable and click **Next, Next, I Agree, Next, Forward, Finish, Reboot**, and your system will very rarely end up with a broken GUI. Though, on the flip side, a broken GUI is hardly repairable on Windows but can easily be fixed on Linux.
Hey, don't tell me it's because of security. If you are installing something as **root** and still need a lot of things done manually, that's not security. Some may argue that it gives you the power to configure your system to any extent. My friend, at least give the user a working interface from which he can configure it to the next level. Why does the installer leave him to reinvent the wheel every time, in the name of security and configurability?
I myself am a Linux fan and have been working on this platform for nearly half a decade. I have used distros of several kinds and came to the above conclusions. You may have used different distros, and you might have come to a similar conclusion, where you feel that Linux is not up to the mark.
Please do share with us why you hate (love) Linux, via the comment section below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/why-i-hate-linux/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/useful-linux-commands-for-newbies/


@ -1,190 +0,0 @@
zpl1025
Make Downloading Files Effortless
================================================================================
A download manager is computer software dedicated to the task of downloading files, optimizing bandwidth usage, and operating in a more organized way. Some web browsers, such as Firefox, include a download manager as a feature, but their implementations lack the sophistication of a dedicated download manager (or of download-manager add-ons for the browser): they neither use bandwidth optimally nor offer good file management features.
Users that regularly download files benefit from using a good download manager. The ability to maximize download speeds (with download acceleration) and to resume and schedule downloads makes downloading safer and more rewarding. Download managers have lost some of their popularity, but the best of them offer real benefits, including tight integration with browsers, support for popular sites such as YouTube, and much more.
There are some sublime open source download managers for Linux, which makes selection somewhat problematic. I have compiled a roundup of my favorite download managers, plus add-ons that turn a standalone download manager into an excellent download manager for Firefox. Each application featured here is released under an open source license.
----------
![](http://www.linuxlinks.com/portal/content2/png/uGet.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-uGet.png)
uGet is a lightweight, easy-to-use and full-featured open source download manager. uGet allows the user to download in multiple parallel streams for download acceleration, put files in a download queue, and pause & resume downloads; it offers advanced category management, browser integration, clipboard monitoring, batch downloads, localization into 26 languages, and many more features.
uGet is mature software; it has been in development for more than 11 years. In that time, it has progressed into a highly versatile download manager, with an estimable set of features, yet maintaining ease of use.
uGet is written in C and uses cURL, via the applicable library libcurl, as its backend. uGet has excellent platform compatibility. uGet is primarily a project for Linux, but it also runs on Mac OS X, FreeBSD, Android, and Windows.
#### Features include: ####
- Easy to use
- Downloads queue: place your downloads into a queue to download as many, or as few, downloads as you want simultaneously
- Resume downloads
- Categorized defaults
- Clipboard monitor which is well implemented
- Batch downloads
- Import downloads: import from HTML files
- Support for downloading files through HTTP, HTTPS, FTP, BitTorrent & Metalink
- Multi-connection (also known as Multi-Segment): up to 20 simultaneous connections per download with adaptive segment management which means that when one segment drops out then the other connections pick up the slack to ensure optimal download speeds at all times
- Multi-mirror
- FTP login & anonymous FTP
- Powerful scheduler
- Firefox integration via FlashGot
- Aria2 plugin
- Theme chameleoning
- Quiet mode
- Keyboard shortcuts
- CLI / Terminal usage support
- Folder auto-creation
- Download history management
- GnuTLS support
- Supports 26 languages including: Arabic, Belarusian, Chinese (Simplified), Chinese (Traditional), Czech, Danish, English (default), French, Georgian, German, Hungarian, Indonesian, Italian, Polish, Portuguese (Brazil), Russian, Spanish, Turkish, Ukrainian, and Vietnamese
- Website: [ugetdm.com][1]
- Developer: C.H. Huang and contributors
- License: GNU LGPL 2.1
- Version Number: 1.10.5
----------
![](http://www.linuxlinks.com/portal/content2/png/DownThemAll%21.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-DownThemAll%21.png)
DownThemAll! is a fast, reliable and easy-to-use, open source download manager/accelerator built inside Firefox. This add-on lets the user download all the links or images contained in a webpage and much more. The add-on gives the user full control over downloads, dedicated speed and number of parallel connections at any time. Use Metalinks or add mirrors manually to download a file from different servers at the same time.
DownThemAll reads the size of the files you want to download and splits them into multiple sections, which are downloaded in parallel.
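As a rough local sketch of that mechanism (not DownThemAll! itself, which fetches each segment with an HTTP `Range` request), the same split-and-reassemble bookkeeping can be shown with `dd`; all file names here are illustrative:

```shell
# Make a 1 MiB sample "remote file" and record its checksum.
dd if=/dev/urandom of=dta_whole.bin bs=1024 count=1024 2>/dev/null
sum_before=$(cksum < dta_whole.bin)

# Grab it as four 256 KiB segments, as four parallel connections would.
for i in 0 1 2 3; do
  dd if=dta_whole.bin of=dta_part$i.bin bs=1024 count=256 skip=$((i * 256)) 2>/dev/null
done

# Reassemble the segments in order and verify nothing was lost.
cat dta_part0.bin dta_part1.bin dta_part2.bin dta_part3.bin > dta_rebuilt.bin
sum_after=$(cksum < dta_rebuilt.bin)
[ "$sum_before" = "$sum_after" ] && echo "segments reassembled intact"
```

A real accelerator downloads the segments concurrently and tracks which byte ranges are complete, but the join step is exactly this concatenation.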
#### Features include: ####
- Complete integration with Firefox
- Multi-part download which allows the user to download the file in pieces, then combining the pieces after a completed download; thus increasing the download speed when connected to a slow server
- Metalink support which allows multiple URLs for each file to be passed to DTA, along with checksums and other information
- Spider a page with a single link
- Filtering
- Advanced auto-renaming options
- Pause and restart downloads
- Website: [addons.mozilla.org/en-US/firefox/addon/downthemall][2]
- Developer: Federico Parodi, Stefano Verna, Nils Maier
- License: GNU GPL v2
- Version Number: 2.0.17
----------
![](http://www.linuxlinks.com/portal/content2/png/JDownloader.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-JDownloader.png)
JDownloader is a free, open-source download management tool with a large community of developers that makes downloading easy and fast. Users can start, stop or pause downloads, set bandwidth limitations, auto-extract archives and much more. It offers an easy-to-extend framework.
JDownloader simplifies downloading files from One-Click-Hosters. It also offers downloading in multiple parallel streams, captcha recognition, automatic file extraction and much more. Additionally, many "link encryption" sites are supported - so you just paste the "encrypted" links and JDownloader does the rest. JDownloader can import CCF, RSDF and DLC files.
#### Features include: ####
- Download several files at once
- Download with multiple connections
- JD has its own powerful OCR module
- Automatic extractor, including password list search (RAR archives)
- Theme Support
- Multilingual
- About 110 hoster plug-ins and over 300 decrypter plug-ins
- Reconnect with JDLiveHeaderScripts (1400 routers supported)
- Webupdate
- Integrated package manager for additional modules (eg. Webinterface, Shutdown)
- Website: [jdownloader.org][3]
- Developer: AppWork UG
- License: GNU GPL v3
- Version Number: 0.9.581
----------
![](http://www.linuxlinks.com/portal/content2/png/FreeRapidDownloader.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-FreeRapidDownloader.png)
FreeRapid Downloader is an easy-to-use open source downloader that supports downloading from Rapidshare, YouTube, Facebook, Picasa and other file-sharing services. Its engine is based on a list of plugins that make it possible to download from specific websites.
FreeRapid Downloader is an ideal choice for users needing a download manager specialized in sharing websites.
FreeRapid Downloader is written in Java. It needs at least Sun Java 7.0 to run.
#### Features include: ####
- Easy to use
- Supports concurrent downloading from multiple services
- Supports resuming downloads
- Download using proxy list
- Supports streamed videos or pictures
- Download history
- Smart clipboard monitoring
- Automatic checking for file's existence on server
- Auto shutdown options
- Automatic plugins updates
- Simple CAPTCHA recognition
- Multi-platform support
- Internationalization support: English, Bulgarian, Czech, Finnish, Portuguese, Slovak, Hungarian, Simplified Chinese and many others
- More than 700 supported sites
- Website: [wordrider.net/freerapid/][4]
- Developer: Vity and contributors
- License: GNU GPL v2
- Version Number: 0.9u4
----------
![](http://www.linuxlinks.com/portal/content2/png/FlashGot.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-FlashGot.png)
FlashGot is a free add-on for Firefox and Thunderbird, meant to handle single and massive ("all" and "selection") downloads with several external Download Managers.
FlashGot turns every supported download manager into a download manager for Firefox.
#### Features include: ####
- Supported download managers on Linux: Aria, Axel Download Accelerator, cURL, Downloader 4 X, FatRat, GNOME Gwget, JDownloader, KDE KGet, pyLoad, SteadyFlow, uGet, wxDFast, and wxDownload Fast
- Build Gallery functionality which helps to synthesize full media galleries in one page, from serial contents originally scattered on several pages, for easy and fast "download all"
- FlashGot Link: downloads the link under the mouse pointer through the default download manager
- FlashGot Selection
- FlashGot All
- FlashGot Tabs
- FlashGot Media
- Capture all links from a page
- Capture all links from all tabs
- Filter the links using a mask (e.g. to download only certain types of files)
- Make a selection on a web page and capture all links in that selection
- Supports direct and batch download from the most popular link protection and file hosting services
- Privacy options
- Internationalization support
- Website: [flashgot.net][5]
- Developer: Giorgio Maone
- License: GNU GPL v2
- Version Number: 1.5.6.5
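The capture-and-filter idea in the feature list above can be approximated with an ordinary shell pipeline: pull every href out of a page, then apply a mask so only one file type survives before the links are handed to a download manager. The sample page and the `.iso` mask are illustrative, not part of FlashGot:

```shell
# A tiny stand-in for a fetched web page.
cat > page.html <<'EOF'
<a href="http://example.com/a.iso">ISO one</a>
<a href="http://example.com/b.jpg">An image</a>
<a href="http://example.com/c.iso">ISO two</a>
EOF

# Capture every link, then keep only those matching the mask "*.iso".
grep -o 'href="[^"]*"' page.html | sed 's/^href="//; s/"$//' > all_links.txt
grep '\.iso$' all_links.txt > iso_links.txt
cat iso_links.txt
# The surviving URLs would then be queued in the download manager of choice.
```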
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20140913062041384/DownloadManagers.html
作者Frazer Kline
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://ugetdm.com/
[2]:https://addons.mozilla.org/en-US/firefox/addon/downthemall/
[3]:http://jdownloader.org/
[4]:http://wordrider.net/freerapid/
[5]:http://flashgot.net/


@ -0,0 +1,108 @@
barney-ro translating
ChromeOS vs Linux: The Good, the Bad and the Ugly
ChromeOS 对战 Linux : 孰优孰劣 仁者见仁 智者见智
================================================================================
> In the battle between ChromeOS and Linux, both desktop environments have strengths and weaknesses.
> 在 ChromeOS 和 Linux 的较量中,这两个桌面环境各有优劣。
Anyone who believes Google isn't "making a play" for desktop users isn't paying attention. In recent years, I've seen [ChromeOS][1] making quite a splash on the [Google Chromebook][2]. Exploding with popularity on sites such as Amazon.com, it looks as if ChromeOS could be unstoppable.
如果谁认为Google没有在“争夺”桌面用户那他一定是没有仔细观察。近几年来我看到[ChromeOS][1]借助[Google Chromebook][2]引起了相当大的轰动。看看它在Amazon.com这类网站上的火爆人气似乎ChromeOS真的势不可挡。
In this article, I'm going to look at ChromeOS as a concept to market, how it's affecting Linux adoption and whether or not it's a good/bad thing for the Linux community as a whole. Plus, I'll talk about the biggest issue of all and how no one is doing anything about it.
在本文中我们要了解的是ChromeOS概念的市场ChromeOS怎么影响着Linux的使用和整个 ChromeOS 对于一个社区来说,是好事还是坏事。另外,我将会谈到一些重大的事情,和为什么没人去为他做点什么事情。
### ChromeOS isn't really Linux ###
### ChromeOS 并不是真正的Linux ###
When folks ask me if ChromeOS is a Linux distribution, I usually reply that ChromeOS is to Linux what OS X is to BSD. In other words, I consider ChromeOS to be a forked operating system that uses the Linux kernel under the hood. Much of the operating system is made up of Google's own proprietary blend of code and software.
每当有朋友问我ChromeOS是否是一个Linux发行版时我都会这样回答ChromeOS之于Linux就好比OS X之于BSD。换句话说我认为ChromeOS是一个派生的操作系统底层使用的是Linux内核而这个操作系统的很大一部分由Google自己的专有代码和软件构成。
So while the ChromeOS is using the Linux kernel under its hood, it's still very different from what we might find with today's modern Linux distributions.
所以尽管ChromeOS底层使用了Linux内核它仍然与如今的现代Linux发行版有很大的不同。
Where ChromeOS's difference becomes most apparent, however, is in the apps it offers the end user: Web applications. With everything being launched from a browser window, Linux users might find using ChromeOS to be a bit vanilla. But for non-Linux users, the experience is not all that different than what they may have used on their old PCs.
而ChromeOS与众不同之处最明显地体现在它提供给最终用户的应用上Web应用。由于一切都从浏览器窗口中启动Linux用户可能会觉得ChromeOS有些单调乏味但对于非Linux用户来说这种体验与他们在旧电脑上的使用感受并没有太大差别。
For example: Anyone who is living a Google-centric lifestyle on Windows will feel right at home on ChromeOS. Odds are this individual is already relying on the Chrome browser, Google Drive and Gmail. By extension, moving over to ChromeOS feels fairly natural for these folks, as they're simply using the browser they're already used to.
就是说每一个以Google-centric为生活方式的人来说当他们回到家时在ChromeOS上的感觉将会非常良好。这样的优势就是这个人已经接受了Chrome 浏览器Google 驱动器和Gmail 。久而久之他们的亲朋好友也都对ChromeOs有了好感就好像是他们很容易接受Chrome 流浪器,因为他们早已经用过。
Linux enthusiasts, however, tend to feel constrained almost immediately. Software choices feel limited and boxed in, plus games and VoIP are totally out of the question. Sorry, but [GooglePlus Hangouts][3] isn't a replacement for [VoIP][4] software. Not even by a long shot.
然而Linux爱好者几乎立刻就会感到束手束脚。软件选择既有限又封闭玩游戏和使用VoIP更是完全没门。对不起[GooglePlus Hangouts][3]可代替不了[VoIP][4]软件,差得还远着呢。
### ChromeOS or Linux on the desktop ###
### ChromeOS 和Linux 的桌面化 ###
Anyone making the claim that ChromeOS hurts Linux adoption on the desktop needs to come up for air and meet non-technical users sometime.
任何断言ChromeOS会妨碍桌面Linux普及的人都该缓口气去接触一下那些非技术用户。
Yes, desktop Linux is absolutely fine for most casual computer users. However it helps to have someone to install the OS and offer "maintenance" services like we see in the Windows and OS X camps. Sadly Linux lacks this here in the States, which is where I see ChromeOS coming into play.
是的桌面Linux 对于大多数休闲型的用户来说绝对是一个好东西。它有助于有专人安装操作系统并且提供“维修”服务从windows 和 OS X 的阵营来看。但是令人失望的是在美国Linux 正好在这个方面很缺乏。所以我们看到ChromeOS 慢慢的走入我们的视线。
I've found the Linux desktop is best suited for environments where on-site tech support can manage things on the down-low. Examples include: Homes where advanced users can drop by and handle updates, governments and schools with IT departments. These are environments where Linux on the desktop is set up to be used by users of any skill level or background.
By contrast, ChromeOS is built to be completely maintenance free, thus not requiring any third part assistance short of turning it on and allowing updates to do the magic behind the scenes. This is partly made possible due to the ChromeOS being designed for specific hardware builds, in a similar spirit to how Apple develops their own computers. Because Google has a pulse on the hardware ChromeOS is bundled with, it allows for a generally error free experience. And for some individuals, this is fantastic!
Comically, the folks who exclaim that there's a problem here are not even remotely the target market for ChromeOS. In short, these are passionate Linux enthusiasts looking for something to gripe about. My advice? Stop inventing problems where none exist.
The point is: the market share for ChromeOS and Linux on the desktop are not even remotely the same. This could change in the future, but at this time, these two groups are largely separate.
### ChromeOS use is growing ###
No matter what your view of ChromeOS happens to be, the fact remains that its adoption is growing. New computers built for ChromeOS are being released all the time. One of the most recent ChromeOS computer releases is from Dell. Appropriately named the [Dell Chromebox][5], this desktop ChromeOS appliance is yet another shot at traditional computing. It has zero software DVDs, no anti-malware software, and offers completely seamless updates behind the scenes. For casual users, Chromeboxes and Chromebooks are becoming a viable option for those who do most of their work from within a web browser.
Despite this growth, ChromeOS appliances face one huge downside: storage. Bound by limited hard drive size and a heavy reliance on cloud storage, ChromeOS isn't going to cut it for anyone who uses their computer outside of basic web browser functionality.
### ChromeOS and Linux crossing streams ###
Previously, I mentioned that ChromeOS and Linux on the desktop are in two completely separate markets. The reason why this is the case stems from the fact that the Linux community has done a horrid job at promoting Linux on the desktop offline.
Yes, there are occasional events where casual folks might discover this "Linux thing" for the first time. But there isn't a single entity to then follow up with these folks, making sure they're getting their questions answered and that they're getting the most out of Linux.
In reality, the likely offline discovery breakdown goes something like this:
- Casual user finds out Linux from their local Linux event.
- They bring the DVD/USB device home and attempt to install the OS.
- While some folks very well may have success with the install process, I've been contacted by a number of folks with the opposite experience.
- Frustrated, these folks are then expected to "search" online forums for help. Difficult to do on a primary computer experiencing network or video issues.
- Completely fed up, some of these frustrated folks bring their computers back into a Windows shop for "repair." In addition to having Windows re-installed, they also receive an earful about how "Linux isn't for them" and should be avoided.
Some of you might charge that the above example is exaggerated. I would respond with this: It's happened to people I know personally and it happens often. Wake up Linux community, our adoption model is broken and tired.
### Great platforms, horrible marketing and closing thoughts ###
If there is one thing that I feel ChromeOS and Linux on the desktop have in common...besides the Linux kernel, it's that they both happen to be great products with rotten marketing. The advantage however, goes to Google with this one, due to their ability to spend big money online and reserve shelf space at big box stores.
Google believes that because they have the "online advantage" that offline efforts aren't really that important. This is incredibly short-sighted and reflects one of Google's biggest missteps. The belief that if you're not exposed to their online efforts, you're not worth bothering with, is only countered by local shelf-space at select big box stores.
My suggestion is this offer Linux on the desktop to the ChromeOS market through offline efforts. This means Linux User Groups need to start raising funds to be present at county fairs, mall kiosks during the holiday season and teaching free classes at community centers. This will immediately put Linux on the desktop in front of the same audience that might otherwise end up with a ChromeOS powered appliance.
If local offline efforts like this don't happen, not to worry. Linux on the desktop will continue to grow as will the ChromeOS market. Sadly though, it will absolutely keep the two markets separate as they are now.
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-and-the-ugly-1.html
作者:[Matt Hartley][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Matt-Hartley-3080.html
[1]:http://en.wikipedia.org/wiki/Chrome_OS
[2]:http://www.google.com/chrome/devices/features/
[3]:https://plus.google.com/hangouts
[4]:http://en.wikipedia.org/wiki/Voice_over_IP
[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html


@ -1,200 +0,0 @@
How to Take Snapshot of Logical Volume and Restore in LVM Part III
================================================================================
**LVM snapshots** are space-efficient, point-in-time copies of LVM volumes. They work only with LVM, and consume space only when changes are made to the source logical volume after the snapshot is taken. If changes amounting to 1GB are made to the source volume, the same amount of original data is preserved in the snapshot volume. For space efficiency, it is best when only a small amount of data changes. In case the snapshot runs out of storage, we can use lvextend to grow it, and if we need to shrink the snapshot, we can use lvreduce.
![Take Snapshot in LVM](http://www.tecmint.com/wp-content/uploads/2014/08/Take-Snapshot-in-LVM.jpg)
Take Snapshot in LVM
If we accidentally delete any file after creating a snapshot, we don't have to worry, because the snapshot still holds the original file that we deleted. This is possible only if the file was there when the snapshot was created. Don't alter the snapshot volume; keep it as it is, so that it can be used for a fast recovery.
Snapshots are not a substitute for backups. A backup is a separate primary copy of the data, so we can't use a snapshot as a backup option.
#### Requirements ####
注:此两篇文章如果发布后可换成发布后链接,原文在前几天更新中
- [Create Disk Storage with LVM in Linux PART 1][1]
- [How to Extend/Reduce LVMs in Linux Part II][2]
### My Server Setup ###
- Operating System CentOS 6.5 with LVM Installation
- Server IP 192.168.0.200
#### Step 1: Creating LVM Snapshot ####
First, check for free space in the volume group to create a new snapshot, using the following **vgs** and **lvs** commands.
# vgs
# lvs
![Check LVM Disk Space](http://www.tecmint.com/wp-content/uploads/2014/08/Check-LVM-Disk-Space.jpg)
Check LVM Disk Space
You see, there is 8GB of free space left in the above **vgs** output. So, let's create a snapshot for one of my volumes, named **tecmint_datas**. For demonstration purposes, I am going to create a 1GB snapshot volume using the following commands.
# lvcreate -L 1GB -s -n tecmint_datas_snap /dev/vg_tecmint_extra/tecmint_datas
OR
# lvcreate --size 1G --snapshot --name tecmint_datas_snap /dev/vg_tecmint_extra/tecmint_datas
Both of the above commands do the same thing:
- **-s** Creates Snapshot
- **-n** Name for snapshot
![Create LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Create-LVM-Snapshot.jpg)
Create LVM Snapshot
Here, is the explanation of each point highlighted above.
- The size of the snapshot being created here.
- Creates the snapshot.
- Sets a name for the snapshot.
- The new snapshot's name.
- The volume of which we are creating a snapshot.
If you want to remove a snapshot, you can use **lvremove** command.
# lvremove /dev/vg_tecmint_extra/tecmint_datas_snap
![Remove LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Remove-LVM-Snapshot.jpg)
Remove LVM Snapshot
Now, list the newly created snapshot using the following command.
# lvs
![Verify LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-LVM-Snapshot.jpg)
Verify LVM Snapshot
As you can see above, the snapshot was created successfully. I have marked with an arrow the origin the snapshot was created from: it is **tecmint_datas**. Yes, because we have created the snapshot for the **tecmint_datas** logical volume.
![Check LVM Snapshot Space](http://www.tecmint.com/wp-content/uploads/2014/08/Check-LVM-Snapshot-Space.jpg)
Check LVM Snapshot Space
Let's add some new files into **tecmint_datas**. The volume now holds around 650MB of data and our snapshot size is 1GB, so there is enough space to record our changes in the snapshot volume. We can check the status of our snapshot using the command below.
# lvs
![Check Snapshot Status](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Snapshot-Status.jpg)
Check Snapshot Status
You see, **51%** of the snapshot volume is used now; there is no issue with making further modifications to your files. For more detailed information, use the command:
# lvdisplay vg_tecmint_extra/tecmint_data_snap
![View Snapshot Information](http://www.tecmint.com/wp-content/uploads/2014/08/Snapshot-Information.jpg)
View Snapshot Information
Again, here is the clear explanation of each point highlighted in the above picture.
- Name of Snapshot Logical Volume.
- Volume group name currently in use.
- The snapshot volume is in read/write mode; we can even mount the volume and use it.
- The time when the snapshot was created. This is very important, because the snapshot records every change after this time.
- This snapshot belongs to the tecmint_datas logical volume.
- Logical volume is online and available to use.
- Size of Source volume which we took snapshot.
- Cow-table size = copy on Write, that means whatever changes was made to the tecmint_data volume will be written to this snapshot.
- Currently snapshot size used, our tecmint_datas was 10G but our snapshot size was 1GB that means our file is around 650 MB. So what its now in 51% if the file grow to 2GB size in tecmint_datas size will increase more than snapshot allocated size, sure we will be in trouble with snapshot. That means we need to extend the size of logical volume (snapshot volume).
- Gives the size of chunk for snapshot.
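The usage percentage that **lvs** reports is, roughly, the amount of changed data copied into the COW table divided by the snapshot size. A minimal sketch of that arithmetic (the numbers are illustrative only; the real figure also includes chunk rounding and a little metadata overhead):

```shell
#!/bin/sh
# Rough model of the snapshot usage percentage shown by 'lvs'
snap_size_mb=1024   # snapshot volume size (1GB)
written_mb=650      # data changed on the origin since the snapshot was taken

usage=$(( written_mb * 100 / snap_size_mb ))
echo "snapshot usage: ${usage}%"
```

Once this usage approaches 100%, the snapshot overflows and is dropped, which is exactly the trouble described below.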
Now, let's copy more than 1GB of files into **tecmint_datas** and see what happens. If you do, you will get an error message saying **Input/output error**, which means the snapshot has run out of space.
![Add Files to Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Add-Files-to-Snapshot.jpg)
Add Files to Snapshot
If a snapshot volume becomes full, it is dropped automatically and we can't use it any more, not even by extending the snapshot volume's size afterwards. The best approach is to create the snapshot with the same size as the source volume: **tecmint_datas** is 10GB, so a 10GB snapshot will never overflow like the one above, because it has enough space to record every change to the volume.
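Following that advice, a full-size snapshot could be created like this (a command sketch only, using the volume group and volume names from this article; run as root):

```
# lvcreate -L 10G -s -n tecmint_datas_snap /dev/vg_tecmint_extra/tecmint_datas
```

Here -s marks it as a snapshot of the given origin volume, -n names it, and -L sets its size.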
#### Step 2: Extend Snapshot in LVM ####
If we need to extend the snapshot before it overflows, we can do so using:
# lvextend -L +1G /dev/vg_tecmint_extra/tecmint_data_snap
The snapshot now has 2GB of space in total.
![Extend LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Extend-LVM-Snapshot.jpg)
Extend LVM Snapshot
Next, verify the new size and COW table using the following command.
# lvdisplay /dev/vg_tecmint_extra/tecmint_data_snap
To check the snapshot volume's size and usage **%**:
# lvs
![Check Size of Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Size-of-Snapshot.jpg)
Check Size of Snapshot
But if your snapshot volume is the same size as the source volume, you don't need to worry about any of these issues.
#### Step 3: Restoring Snapshot or Merging ####
To restore the snapshot, we need to unmount the file system first.
# umount /mnt/tecmint_datas/
![Un-mount File System](http://www.tecmint.com/wp-content/uploads/2014/08/Unmount-File-System.jpg)
Un-mount File System
Check whether the mount point has been unmounted or not.
# df -h
![Check File System Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Mount-Points.jpg)
Check File System Mount Points
Our file system is now unmounted, so we can continue restoring the snapshot. To restore the snapshot, use the **lvconvert** command.
# lvconvert --merge /dev/vg_tecmint_extra/tecmint_data_snap
![Restore LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Restore-Snapshot.jpg)
Restore LVM Snapshot
After the merge is completed, the snapshot volume is removed automatically. Now we can check the free space of our partition using the **df** command.
# df -Th
![Check Size of Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Snapshot-Space.jpg)
Now that the snapshot volume has been removed automatically, you can check the size of the logical volume.
# lvs
![Check Size of Logical Volume](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Size-of-LV.jpg)
Check Size of Logical Volume
**Important**: Snapshots can be extended automatically with a small modification in the LVM configuration file; manually, we extend them with lvextend as shown earlier.
Open the lvm configuration file using your choice of editor.
# vim /etc/lvm/lvm.conf
Search for the word autoextend. By default the values will be similar to those below.
![LVM Configuration](http://www.tecmint.com/wp-content/uploads/2014/08/LVM-Configuration.jpg)
LVM Configuration
Change the **100** here to **75**: the auto-extend threshold is then **75**, and with an auto-extend percent of 20, the size will grow by **20 percent** each time.
In other words, once the snapshot volume reaches **75%** usage, it will automatically expand its size by another **20%**. This way, we can expand it automatically. Save and exit the file using **wq!**.
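For reference, the relevant lines in `/etc/lvm/lvm.conf` (in its `activation` section) would then look like the following; the option names are the standard LVM ones, and the values are the thresholds discussed above:

```
# /etc/lvm/lvm.conf -- activation section (config fragment)
snapshot_autoextend_threshold = 75   # start extending once a snapshot is 75% full
snapshot_autoextend_percent = 20     # grow it by 20% of its current size each time
```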
This saves the snapshot from being dropped due to overflow, and it also saves you time. LVM is the only partitioning method with which we can expand volumes this way, and it has many more features such as thin provisioning, striping, and virtual volumes using a thin pool; we will look at them in the next topic.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-lvm/
作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/create-lvm-storage-in-linux/
[2]:http://www.tecmint.com/extend-and-reduce-lvms-in-linux/
@ -1,3 +1,4 @@
zpl1025
Build a Raspberry Pi Arcade Machine
================================================================================
**Relive the golden majesty of the 80s with a little help from a marvel of the current decade.**
@ -1,106 +0,0 @@
[translating by KayGuoWhu]
How to Encrypt Email in Linux
================================================================================
![Kgpg provides a nice GUI for creating and managing your encryption keys.](http://www.linux.com/images/stories/41373/fig-1-kgpg.png)
Kgpg provides a nice GUI for creating and managing your encryption keys.
If you've been thinking of encrypting your email, it is a rather bewildering maze to sort through thanks to the multitude of email services and mail clients. There are two levels of encryption to consider: SSL/TLS encryption protects your login and password to your mailserver. [GnuPG][1] is the standard strong Linux encryption tool, and it encrypts and authenticates your messages. It is best if you manage your own GPG encryption and not leave it up to third parties, which we will discuss in a moment.
Encrypting messages still leaves you vulnerable to traffic analysis, as message headers must be in the clear. So that necessitates yet another tool such as the [Tor network][2] for hiding your Internet footprints. Let's look at various mail services and clients, and the pitfalls and benefits therein.
### Forget Webmail ###
If you use GMail, Yahoo, Hotmail, or another Web mail provider, forget about it. Anything you type in a Web browser is vulnerable to JavaScript attacks, and whatever mischiefs the service provider engages in. GMail, Yahoo, and Hotmail all offer SSL/TLS encryption to protect your messages from wiretapping. But they offer no protections from their own data-mining habits, so they don't offer end-to-end encryption. Yahoo and Google both claim they're going to roll out end-to-end encryption next year. Color me skeptical, because they will wither and die if anything interferes with the data-mining that is their core business.
There are various third-party email security services such as [Virtru][3] and [SafeMess][4] that claim to offer secure encryption for all types of email. Again I am skeptical, because whoever holds your encryption keys has access to your messages, so you're still depending on trust rather than technology.
Peer messaging avoids many of the pitfalls of using centralized services. [RetroShare][5] and [Bitmessage][6] are two popular examples of this. I don't know if they live up to their claims, but the concept certainly has merit.
What about Android and iOS? It's safest to assume that the majority of Android and iOS apps are out to get you. Don't take my word for it-- read their terms of service and examine the permissions they require to install on your devices. And even if their terms are acceptable when you first install them, unilateral TOS changes are industry standard, so it is safest to assume the worst.
### Zero Knowledge ###
[Proton Mail][7] is a new email service that claims zero-knowledge message encryption. Authentication and message encryption are two separate steps, Proton is under Swiss privacy laws, and they do not log user activity. Zero knowledge encryption offers real security. This means that only you possess your encryption keys, and if you lose them your messages are not recoverable.
There are many encrypted email services that claim to protect your privacy. Read the fine print carefully and look for red flags such as limited user data collection, sharing with partners, and cooperation with law enforcement. These indicate that they collect and share user data, and have access to your encryption keys and can read your messages.
### Linux Mail Clients ###
A standalone open source mail client such as KMail, Thunderbird, Mutt, Claws, Evolution, Sylpheed, or Alpine, set up with your own GnuPG keys that you control gives you the most protection. (The easiest way to set up more secure email and Web surfing is to run the TAILS live Linux distribution. See [Protect Yourself Online With Tor, TAILS, and Debian][8].)
Whether you use TAILS or a standard Linux distro, managing GnuPG is the same, so let's learn how to encrypt messages with GnuPG.
### How to Use GnuPG ###
First, a quick bit of terminology. OpenPGP is an open email encryption and authentication protocol, based on Phil Zimmermann's Pretty Good Privacy (PGP). GNU Privacy Guard (GnuPG or GPG) is the GPL implementation of OpenPGP. GnuPG uses asymmetric public key cryptography. This means that you create pairs of keys: a public key that anyone can use to encrypt messages to send to you, and a private key that only you possess to decrypt them. GnuPG performs two separate functions: digitally-signing messages to prove they came from you, and encrypting messages. Anyone can read your digitally-signed messages, but only people you have exchanged keys with can read your encrypted messages. Remember, never share your private keys! Only public keys.
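To make that public/private split concrete, here is a throwaway round trip in a disposable keyring (a sketch assuming GnuPG 2.1 or later; the "Demo User" key and address are invented for the demo, and your real keyring is untouched):

```shell
#!/bin/sh
set -e
# Disposable keyring so this demo cannot touch your real keys
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Generate a passphrase-less demo key pair
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo User <demo@example.com>' default default never

# Encrypt with the public key, then decrypt with the matching private key
echo 'hello' \
  | gpg --batch --trust-model always --encrypt --recipient demo@example.com \
  | gpg --batch --pinentry-mode loopback --passphrase '' --decrypt
```

The decrypt step prints the original plaintext. Anyone holding only the public key could perform the encrypt step, but only the private-key holder can decrypt.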
Seahorse is GNOME's graphical front-end to GnuPG, and KGpg is KDE's graphical GnuPG tool.
Now let's run through the basic steps of creating and managing GnuPG keys. This command creates a new key:
$ gpg --gen-key
This is a multi-step process; just answer all the questions, and the defaults are fine for most people. When you create your passphrase, write it down and keep it in a secure place because if you lose it you cannot decrypt anything. All that advice about never writing down your passwords is wrong. Most of us have dozens of logins and passwords to track, including some that we rarely use, so it's not realistic to remember all of them. You know what happens when people don't write down their passwords? They create simple passwords and re-use them. Anything you store on your computer is potentially vulnerable; a nice little notebook kept in a locked drawer is impervious to everything but a physical intrusion, if an intruder even knew to look for it.
I must leave it as your homework to figure out how to configure your mail client to use your new key, as every client is different. You can list your key or keys:
$ gpg --list-keys
/home/carla/.gnupg/pubring.gpg
------------------------------
pub 2048R/587DD0F5 2014-08-13
uid Carla Schroder (my gpg key)
sub 2048R/AE05E1E4 2014-08-13
This is a fast way to grab necessary information like the location of your keys, and your key name, which is the UID. Suppose you want to upload your public key to a keyserver; this is how it looks using my example key:
$ gpg --send-keys 'Carla Schroder' --keyserver http://example.com
When you create a new key for upload to public key servers, you should also create a revocation certificate. Don't do it later-- create it when you create your new key. You can give it any arbitrary name, so instead of revoke.asc you could give it a descriptive name like mycodeproject.asc:
$ gpg --output revoke.asc --gen-revoke 'Carla Schroder'
Now if your key ever becomes compromised you can revoke it by first importing the revocation certificate into your keyring:
$ gpg --import ~/.gnupg/revoke.asc
Then create and upload a new key to replace it. Any users of your old key will be notified as they refresh their key databases.
You must guard your revocation certificate just as zealously as your private key. Copy it to a CD or USB stick and lock it up, and delete it from your computer. It is a plain-text key, so you could even print it on paper.
If you ever need a copy-and-paste key, for example on public keyrings that allow pasting your key into a web form, or if you want to post your public key on your Web site, then you must create an ASCII-armored version of your public key:
$ gpg --output carla-pubkey.asc --export -a 'Carla Schroder'
This creates the familiar plain-text public key you've probably seen, like this shortened example:
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQENBFPrn4gBCADeEXKdrDOV3AFXL7QQQ+i61rMOZKwFTxlJlNbAVczpawkWRC3l
IrWeeJiy2VyoMQ2ZXpBLDwGEjVQ5H7/UyjUsP8h2ufIJt01NO1pQJMwaOMcS5yTS
[...]
I+LNrbP23HEvgAdNSBWqa8MaZGUWBietQP7JsKjmE+ukalm8jY8mdWDyS4nMhZY=
=QL65
-----END PGP PUBLIC KEY BLOCK-----
That should get you started learning your way around GnuPG. [The GnuPG manuals][9] have complete details on using GnuPG and all of its options.
--------------------------------------------------------------------------------
via: http://www.linux.com/learn/tutorials/784165-how-to-encrypt-email-in-linux
作者:[Carla Schroder][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linux.com/component/ninjaboard/person/3734
[1]:http://www.openpgp.org/members/gnupg.shtml
[2]:https://www.torproject.org/
[3]:https://www.virtru.com/
[4]:https://www.safemess.com/
[5]:http://retroshare.sourceforge.net/
[6]:http://retroshare.sourceforge.net/
[7]:https://protonmail.ch/
[8]:http://www.linux.com/learn/docs/718398-protect-yourself-online-with-tor-+tails-and-debian
[9]:https://www.gnupg.org/documentation/manuals.html
@ -1,104 +0,0 @@
SPccman is translating
How to sniff HTTP traffic from the command line on Linux
================================================================================
Suppose you want to sniff live HTTP web traffic (i.e., HTTP requests and responses) on the wire for some reason. For example, you may be testing experimental features of a web server. Or you may be debugging a web application or a RESTful service. Or you may be trying to troubleshoot [PAC (proxy auto config)][1] or check for any malware files surreptitiously downloaded from a website. Whatever the reason is, there are cases where HTTP traffic sniffing is helpful, for system admins, developers, or even end users.
While [packet sniffing tools][2] such as tcpdump are popularly used for live packet dump, you need to set up proper filtering to capture HTTP traffic, and even then, their raw output typically cannot be interpreted on the HTTP protocol level so easily. Real-time web server log parsers such as [ngxtop][3] provide human-readable real-time web traffic traces, but only applicable with a full access to live web server logs.
What would be nice is to have a tcpdump-like traffic sniffing tool, but targeting HTTP traffic only. In fact, [httpry][4] is exactly that: an **HTTP packet sniffing tool**. httpry captures live HTTP packets on the wire, and displays their content at the HTTP protocol level in a human-readable format. In this tutorial, let's see how we can sniff HTTP traffic with httpry.
### Install httpry on Linux ###
On Debian-based systems (Ubuntu or Linux Mint), httpry is not available in base repositories. So build it from the source:
$ sudo apt-get install gcc make git libpcap0.8-dev
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install
On Fedora, CentOS or RHEL, you can install httpry with yum as follows. On CentOS/RHEL, enable [EPEL repo][5] before running yum.
$ sudo yum install httpry
If you still want to build httpry from the source, you can easily do that by:
$ sudo yum install gcc make git libpcap-devel
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install
### Basic Usage of httpry ###
The basic use case of httpry is as follows.
$ sudo httpry -i <network-interface>
httpry then listens on a specified network interface, and displays captured HTTP requests/responses in real time.
![](https://farm4.staticflickr.com/3883/14985851635_7b94787c6d_z.jpg)
In most cases, however, you will be swamped with the fast scrolling output as packets are coming in and out. So you want to save captured HTTP packets for offline analysis. For that, use either '-b' or '-o' options. The '-b' option allows you to save raw HTTP packets into a binary file as is, which then can be replayed with httpry later. On the other hand, '-o' option saves human-readable output of httpry into a text file.
To save raw HTTP packets into a binary file:
$ sudo httpry -i eth0 -b output.dump
To replay saved HTTP packets:
$ httpry -r output.dump
Note that when you read a dump file with '-r' option, you don't need root privilege.
To save httpry's output to a text file:
$ sudo httpry -i eth0 -o output.txt
### Advanced Usage of httpry ###
If you want to monitor only specific HTTP methods (e.g., GET, POST, PUT, HEAD, CONNECT, etc), use '-m' option:
$ sudo httpry -i eth0 -m get,head
![](https://farm6.staticflickr.com/5551/14799184220_3b449d422c_z.jpg)
If you downloaded httpry's source code, you will notice that the source code comes with a collection of Perl scripts which aid in analyzing httpry's output. These scripts are found in httpry/scripts/plugins directory. If you want to write a custom parser for httpry's output, these scripts can be good examples to start from. Some of their capabilities are:
- **hostnames**: Displays a list of unique host names with counts.
- **find_proxies**: Detect web proxies.
- **search_terms**: Find and count search terms entered in search services.
- **content_analysis**: Find URIs which contain specific keywords.
- **xml_output**: Convert output into XML format.
- **log_summary**: Generate a summary of log.
- **db_dump**: Dump log file data into a database.
Before using these scripts, first run httpry with '-o' option for some time. Once you obtained the output file, run the scripts on it at once by using this command:
$ cd httpry/scripts
$ perl parse_log.pl -d ./plugins <httpry-output-file>
You may encounter warnings with several plugins. For example, db_dump plugin may fail if you haven't set up a MySQL database with DBI interface. If a plugin fails to initialize, it will automatically be disabled. So you can ignore those warnings.
After parse_log.pl is completed, you will see a number of analysis results (*.txt/xml) in httpry/scripts directory. For example, log_summary.txt looks like the following.
![](https://farm4.staticflickr.com/3845/14799162189_b85abdf21d_z.jpg)
To conclude, httpry can be a life saver if you are in a situation where you need to interpret live HTTP packets. That might not be so common for average Linux users, but it never hurts to be prepared. What do you think of this tool?
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/sniff-http-traffic-command-line-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/2012/12/how-to-set-up-proxy-auto-config-on-ubuntu-desktop.html
[2]:http://xmodulo.com/2012/11/what-are-popular-packet-sniffers-on-linux.html
[3]:http://xmodulo.com/2014/06/monitor-nginx-web-server-command-line-real-time.html
[4]:http://dumpsterventures.com/jason/httpry/
[5]:http://xmodulo.com/2013/03/how-to-set-up-epel-repository-on-centos.html
@ -1,3 +1,4 @@
Translating by SPccman
How to configure SNMPv3 on ubuntu 14.04 server
================================================================================
Simple Network Management Protocol (SNMP) is an "Internet-standard protocol for managing devices on IP networks". Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks and more. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.[2]
@ -1,466 +0,0 @@
Linux Tutorial: Install Ansible Configuration Management And IT Automation Tool
================================================================================
![](http://s0.cyberciti.org/uploads/cms/2014/08/ansible_core_circle.png)
Today I will be talking about ansible, a powerful configuration management solution written in Python. There are many configuration management solutions available, all with pros and cons; ansible stands apart from many of them for its simplicity. What makes ansible different from many of the most popular configuration management systems is that it is agent-less: there is no need to set up agents on every node you want to control. Plus, this has the benefit of being able to control your entire infrastructure from more than one place, if needed. That last point's validity, of being a benefit, may be debatable, but I find it a positive in most cases. Enough talk; let's get started with ansible installation and configuration on RHEL/CentOS and Debian/Ubuntu based systems.
### Prerequisites ###
1. Distro: RHEL/CentOS/Debian/Ubuntu Linux
1. Jinja2: A modern and designer friendly templating language for Python.
1. PyYAML: A YAML parser and emitter for the Python programming language.
1. paramiko: Native Python SSHv2 protocol library.
1. httplib2: A comprehensive HTTP client library.
1. Most of the actions listed in this post are written with the assumption that they will be executed by the root user running the bash or any other modern shell.
How Ansible works
Ansible tool uses no agents. It requires no additional custom security infrastructure, so its easy to deploy. All you need is ssh client and server:
+----------------------+ +---------------+
|Linux/Unix workstation| SSH | file_server1 |
|with Ansible |<------------------>| db_server2 | Unix/Linux servers
+----------------------+ Modules | proxy_server3 | in local/remote
192.168.1.100 +---------------+ data centers
Where,
1. 192.168.1.100 - Install Ansible on your local workstation/server.
1. file_server1..proxy_server3 - Use 192.168.1.100 and Ansible to automates configuration management of all servers.
1. SSH - Setup ssh keys between 192.168.1.100 and local/remote servers.
### Ansible Installation Tutorial ###
Installation of ansible is a breeze; many distributions have a package available in their third-party repos which can easily be installed. A quick alternative is to just pip install it or grab the latest copy from GitHub. To install using your package manager, on [RHEL/CentOS Linux based systems you will most likely need the EPEL repo][1] then:
#### Install ansible on a RHEL/CentOS Linux based system ####
Type the following [yum command][2]:
$ sudo yum install ansible
#### Install ansible on a Debian/Ubuntu Linux based system ####
Type the following [apt-get command][3]:
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
#### Install ansible using pip ####
The [pip command is a tool for installing and managing Python packages][4], such as those found in the Python Package Index. The following method works on Linux and Unix-like systems:
$ sudo pip install ansible
#### Install the latest version of ansible using source code ####
You can install the latest version from github as follows:
$ cd ~
$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ source ./hacking/env-setup
When running ansible from a git checkout, one thing to remember is that you will need to set up your environment every time you want to use it, or you can add it to your bash rc file:
# ADD TO BASH RC
$ echo "export ANSIBLE_HOSTS=~/ansible_hosts" >> ~/.bashrc
$ echo "source ~/ansible/hacking/env-setup" >> ~/.bashrc
The hosts file for ansible is basically a list of hosts that ansible is able to perform work on. By default ansible looks for the hosts file at /etc/ansible/hosts, but there are ways to override that, which can be handy if you are working with multiple installs or have several different clients for whose datacenters you are responsible. You can pass the hosts file on the command line using the -i option:
$ ansible all -m shell -a "hostname" --ask-pass -i /etc/some/other/dir/ansible_hosts
My preference, however, is to use an environment variable; this can be useful if you source a different file when starting work for a specific client. The environment variable is $ANSIBLE_HOSTS, and it can be set as follows:
$ export ANSIBLE_HOSTS=~/ansible_hosts
Once all requirements are installed and you have your hosts file set up, you can give it a test run. For a quick test I put 127.0.0.1 into the ansible hosts file as follows:
$ echo "127.0.0.1" > ~/ansible_hosts
Now let's test with a quick ping:
$ ansible all -m ping
OR ask for the ssh password:
$ ansible all -m ping --ask-pass
I have run across a problem a few times regarding initial setup. It is highly recommended that you set up keys for ansible to use, but in the previous test we used --ask-pass; on some machines you will need [to install sshpass][5] or add -c paramiko like so:
$ ansible all -m ping --ask-pass -c paramiko
Or you [can install sshpass][6], however sshpass is not always available in the standard repos so paramiko can be easier.
### Setup SSH Keys ###
Now that we have gotten the configuration, and other simple stuff, out of the way, let's move on to doing something productive. A lot of the power of ansible lies in playbooks, which are basically scripted ansible runs (for the most part), but we will start with some one-liners before we build out a playbook. Let's start with creating and configuring keys so we can avoid the -c and --ask-pass options:
$ ssh-keygen -t rsa
Sample outputs:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mike/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mike/.ssh/id_rsa.
Your public key has been saved in /home/mike/.ssh/id_rsa.pub.
The key fingerprint is:
94:a0:19:02:ba:25:23:7f:ee:6c:fb:e8:38:b4:f2:42 mike@ultrabook.linuxdork.com
The key's randomart image is:
+--[ RSA 2048]----+
|... . . |
|. . + . . |
|= . o o |
|.* . |
|. . . S |
| E.o |
|.. .. |
|o o+.. |
| +o+*o. |
+-----------------+
Now obviously there are plenty of ways to put this in place on the remote machine, but since we are using ansible, let's use that:
$ ansible all -m copy -a "src=/home/mike/.ssh/id_rsa.pub dest=/tmp/id_rsa.pub" --ask-pass -c paramiko
Sample outputs:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"dest": "/tmp/id_rsa.pub",
"gid": 100,
"group": "users",
"md5sum": "bafd3fce6b8a33cf1de415af432774b4",
"mode": "0644",
"owner": "mike",
"size": 410,
"src": "/home/mike/.ansible/tmp/ansible-tmp-1407008170.46-208759459189201/source",
"state": "file",
"uid": 1000
}
Next, add the public key in remote server, enter:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko
Sample outputs:
SSH password:
127.0.0.1 | FAILED | rc=1 >>
/bin/sh: /root/.ssh/authorized_keys: Permission denied
Whoops, we want to be able to run things as root, so let's add a -u option:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko -u root
Sample outputs:
SSH password:
127.0.0.1 | success | rc=0 >>
Please note, I wanted to demonstrate a file transfer using ansible, there is however a more built in way for managing keys using ansible:
$ ansible all -m authorized_key -a "user=mike key='{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}' path=/home/mike/.ssh/authorized_keys manage_dir=no" --ask-pass -c paramiko
Sample outputs:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"gid": 100,
"group": "users",
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq+Z8/usprXk0aCAPyP0TGylm2MKbmEsHePUOd7p5DO1QQTHak+9gwdoJJavy0yoUdi+C+autKjvuuS+vGb8+I+8mFNu5CvKiZzIpMjZvrZMhHRdNud7GuEanusTEJfi1pUd3NA2iXhl4a6S9a/4G2mKyf7QQSzI4Z5ddudUXd9yHmo9Yt48/ASOJLHIcYfSsswOm8ux1UnyeHqgpdIVONVFsKKuSNSvZBVl3bXzhkhjxz8RMiBGIubJDBuKwZqNSJkOlPWYN76btxMCDVm07O7vNChpf0cmWEfM3pXKPBq/UBxyG2MgoCGkIRGOtJ8UjC/daadBUuxg92/u01VNEB mike@ultrabook.linuxdork.com",
"key_options": null,
"keyfile": "/home/mike/.ssh/authorized_keys",
"manage_dir": false,
"mode": "0600",
"owner": "mike",
"path": "/home/mike/.ssh/authorized_keys",
"size": 410,
"state": "file",
"uid": 1000,
"unique": false,
"user": "mike"
}
Now that the keys are in place, let's try running an arbitrary command like hostname and hope we don't get prompted for a password:
$ ansible all -m shell -a "hostname" -u root
Sample outputs:
127.0.0.1 | success | rc=0 >>
Success!!! Now that we can run commands as root and not be bothered by using a password we are in a good place to easily configure any and all hosts in the ansible hosts file. Let's remove the key from /tmp:
$ ansible all -m file -a "dest=/tmp/id_rsa.pub state=absent" -u root
Sample outputs:
127.0.0.1 | success >> {
"changed": true,
"path": "/tmp/id_rsa.pub",
"state": "absent"
}
Next, I'm going to make sure we have a few packages installed and on the latest version and we will move on to something a little more complicated:
$ ansible all -m zypper -a "name=apache2 state=latest" -u root
Sample outputs:
127.0.0.1 | success >> {
"changed": false,
"name": "apache2",
"state": "latest"
}
Alright, the key we placed in /tmp is now absent and we have the latest version of apache installed. This brings me to the next point, something that makes ansible very flexible and gives more power to playbooks: many may have noticed the -m zypper in the previous commands. Unless you use openSUSE or SUSE Enterprise you may not be familiar with zypper; it is basically the equivalent of yum in the SUSE world. In all of the examples above I have only had one machine in my hosts file, and while everything but the last command should work on any standard *nix system with a standard ssh config, this leads to a problem: what if we had multiple machine types that we wanted to manage? This is where playbooks, and the configurability of ansible, really shine. First let's modify our hosts file a little, here goes:
$ cat ~/ansible_hosts
Sample outputs:
[RHELBased]
10.50.1.33
10.50.1.47
[SUSEBased]
127.0.0.1
First, we create some groups of servers, and give them some meaningful tags. Then we create a playbook that will do different things for the different kinds of servers. You might notice the similarity between the YAML data structures and the command line instructions we ran earlier. Basically, -m is a module and -a is for module args; in the YAML representation you put the module, then a colon, and finally the args.
---
- hosts: SUSEBased
remote_user: root
tasks:
- zypper: name=apache2 state=latest
- hosts: RHELBased
remote_user: root
tasks:
- yum: name=httpd state=latest
Now that we have a simple playbook, we can run it as follows:
$ ansible-playbook testPlaybook.yaml -f 10
Sample outputs:
PLAY [SUSEBased] **************************************************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [zypper name=apache2 state=latest] **************************************
ok: [127.0.0.1]
PLAY [RHELBased] **************************************************************
GATHERING FACTS ***************************************************************
ok: [10.50.1.33]
ok: [10.50.1.47]
TASK: [yum name=httpd state=latest] *******************************************
changed: [10.50.1.33]
changed: [10.50.1.47]
PLAY RECAP ********************************************************************
10.50.1.33 : ok=2 changed=1 unreachable=0 failed=0
10.50.1.47 : ok=2 changed=1 unreachable=0 failed=0
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
Now you will see output from each machine that ansible contacted. The -f flag is what lets ansible run on multiple hosts in parallel. Instead of using all, or the name of a host group, on the command line, the hosts field in each play of the playbook determines which machines it applies to. While we no longer need --ask-pass since we have ssh keys set up, it comes in handy when setting up new machines, and even new machines can be configured from a playbook. To demonstrate this, let's convert our earlier key example into a playbook:
---
- hosts: SUSEBased
remote_user: mike
sudo: yes
tasks:
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
- hosts: RHELBased
remote_user: mdonlon
sudo: yes
tasks:
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
There are plenty of other options here, for example having the keys dropped during a kickstart, or via some other kind of process involved with bringing up machines on the hosting of your choice, but this can be used in pretty much any situation assuming ssh is set up to accept a password. One thing to think about before writing too many playbooks: version control can save you a lot of time. Machines need to change over time, but you don't need to re-write a playbook every time a machine changes; just update the pertinent bits and commit the changes. Another benefit ties into what I said earlier about being able to manage the entire infrastructure from multiple places: you can easily git clone your playbook repo onto a new machine and be completely set up to manage everything in a repeatable manner.
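As a minimal sketch of that version-control workflow (the directory, file name, and commit message here are illustrative, not part of the article's setup):

```shell
# Keep playbooks under git; each machine change becomes a small, reviewable commit.
mkdir -p /tmp/ansible-playbooks && cd /tmp/ansible-playbooks
git init -q .
cat > testPlaybook.yaml <<'EOF'
---
- hosts: SUSEBased
  remote_user: root
  tasks:
    - zypper: name=apache2 state=latest
EOF
git add testPlaybook.yaml
git -c user.name=demo -c user.email=demo@example.com commit -q -m "initial apache playbook"
git log --oneline
```

On a fresh control machine, a single git clone of this repo gets you back to managing everything.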
#### Real world ansible example ####
I know a lot of people make great use of services like pastebin, and a lot of companies, for obvious reasons, set up their own internal instance of something similar. Recently I came across a newish application called showterm, and coincidentally I was asked to set up an internal instance of it for a client. I will spare you the details of the app, but you can search for showterm if interested. So, for a reasonable real-world example, I will attempt to set up a showterm server and configure the needed app on the client to use it. In the process we will need a database server as well. Let's start with the client configuration.
---
- hosts: showtermClients
remote_user: root
tasks:
- yum: name=rubygems state=latest
- yum: name=ruby-devel state=latest
- yum: name=gcc state=latest
- gem: name=showterm state=latest user_install=no
That was easy; let's move on to the main server:
---
- hosts: showtermServers
remote_user: root
tasks:
- name: ensure packages are installed
yum: name={{item}} state=latest
with_items:
- postgresql
- postgresql-server
- postgresql-devel
- python-psycopg2
- git
- ruby21
- ruby21-passenger
- name: showterm server from github
git: repo=https://github.com/ConradIrwin/showterm.io dest=/root/showterm
- name: Initdb
command: service postgresql initdb
creates=/var/lib/pgsql/data/postgresql.conf
- name: Start PostgreSQL and enable at boot
service: name=postgresql
enabled=yes
state=started
- gem: name=pg state=latest user_install=no
handlers:
- name: restart postgresql
service: name=postgresql state=restarted
- hosts: showtermServers
remote_user: root
sudo: yes
sudo_user: postgres
vars:
dbname: showterm
dbuser: showterm
dbpassword: showtermpassword
tasks:
- name: create db
postgresql_db: name={{dbname}}
- name: create user with ALL priv
postgresql_user: db={{dbname}} name={{dbuser}} password={{dbpassword}} priv=ALL
- hosts: showtermServers
remote_user: root
tasks:
- name: database.yml
template: src=database.yml dest=/root/showterm/config/database.yml
- hosts: showtermServers
remote_user: root
tasks:
- name: run bundle install
shell: bundle install
args:
chdir: /root/showterm
- hosts: showtermServers
remote_user: root
tasks:
- name: run rake db tasks
shell: 'bundle exec rake db:create db:migrate db:seed'
args:
chdir: /root/showterm
- hosts: showtermServers
remote_user: root
tasks:
- name: apache config
template: src=showterm.conf dest=/etc/httpd/conf.d/showterm.conf
Not so bad. Keeping in mind that this is a somewhat random and obscure app, we can now install it in a consistent fashion on any number of machines; this is where the benefits of configuration management really come to light. Also, in most cases the declarative syntax almost speaks for itself, so wiki pages need not go into as much detail, although a wiki page with too much detail is never a bad thing in my opinion.
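One file the plays above reference but never show is the database.yml template copied into the showterm checkout. Here is a guess at what it might contain (the adapter and keys follow a generic Rails-plus-Postgres layout and are assumptions, not taken from showterm itself); the {{ }} placeholders match the vars block defined earlier:

```shell
# Hypothetical database.yml template; Ansible would fill in the {{ }} variables
# from the play's vars when it renders the template onto the server.
mkdir -p /tmp/showterm-demo
cat > /tmp/showterm-demo/database.yml <<'EOF'
production:
  adapter: postgresql
  database: {{ dbname }}
  username: {{ dbuser }}
  password: {{ dbpassword }}
  host: localhost
EOF
cat /tmp/showterm-demo/database.yml
```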
### Expanding Configuration ###
We have not touched on everything here; Ansible has many options for configuring your setup. You can do things like embedding variables in your hosts file, so that Ansible will interpolate them on the remote nodes, e.g.:
[RHELBased]
10.50.1.33 http_port=443
10.50.1.47 http_port=80 ansible_ssh_user=mdonlon
[SUSEBased]
127.0.0.1 http_port=443
While this is really handy for quick configurations, you can also layer variables across multiple files in YAML format. In your hosts file's directory you can make two subdirectories named group_vars and host_vars. Any files in those paths that match the name of a group of hosts, or a host name in your hosts file, will be interpolated at run time. So the previous example would look like this:
ultrabook:/etc/ansible # pwd
/etc/ansible
ultrabook:/etc/ansible # tree
.
├── group_vars
│ ├── RHELBased
│ └── SUSEBased
├── hosts
└── host_vars
├── 10.50.1.33
└── 10.50.1.47
----------
2 directories, 5 files
ultrabook:/etc/ansible # cat hosts
[RHELBased]
10.50.1.33
10.50.1.47
----------
[SUSEBased]
127.0.0.1
ultrabook:/etc/ansible # cat group_vars/RHELBased
ultrabook:/etc/ansible # cat group_vars/SUSEBased
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.33
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.47
---
http_port: 80
ansible_ssh_user: mdonlon
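To show how one of those variables might actually get used (this play is illustrative and not from the article; the lineinfile task simply rewrites Apache's Listen directive with whatever http_port resolves to for each host):

```shell
# Write a small demo play that consumes the http_port variable set in
# host_vars/group_vars; the config path and task are hypothetical.
mkdir -p /tmp/ansible-vars-demo
cat > /tmp/ansible-vars-demo/webPorts.yaml <<'EOF'
---
- hosts: RHELBased
  remote_user: root
  tasks:
    - lineinfile: dest=/etc/httpd/conf/httpd.conf regexp='^Listen ' line='Listen {{ http_port }}'
EOF
cat /tmp/ansible-vars-demo/webPorts.yaml
```

When this runs, 10.50.1.33 would get its port from host_vars/10.50.1.33 while 10.50.1.47 gets its own value, with no change to the play itself.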
### Refining Playbooks ###
There are many ways to organize playbooks as well. In the previous examples we used a single file, and everything was really simplified. One common way of organizing things is creating roles: you load a main file as your playbook, and it imports all the data from the extra files, which are organized as roles. For example, if you have a wordpress site, you need a web head and a database. The web head will have a web server, the app code, and any needed modules. The database is sometimes run on the same host and sometimes on a remote host, and this is where roles really shine. You make a directory and a small playbook for each role. In this case we can have apache, mysql, wordpress, mod_php, and php roles. The big advantage is that not every role has to be applied to one server; in this case mysql could be applied to a separate machine. This also allows for code re-use; for example, your apache role could be used with python apps and php apps alike. Fully demonstrating this is a little beyond the scope of this article, and there are many different ways of doing things; I would recommend searching for ansible playbook examples. Many people are contributing code on github, and I am sure on various other sites as well.
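A quick sketch of what such a roles layout might look like on disk (directory and role names are illustrative; check the Ansible docs for the full set of conventions):

```shell
# Build a skeleton roles tree plus a top-level playbook that assigns roles to hosts.
mkdir -p /tmp/wordpress-demo && cd /tmp/wordpress-demo
for role in apache mysql php mod_php wordpress; do
    mkdir -p roles/$role/tasks
    touch roles/$role/tasks/main.yml    # each role keeps its tasks here
done
cat > site.yml <<'EOF'
---
- hosts: webservers
  roles: [apache, php, mod_php, wordpress]
- hosts: dbservers
  roles: [mysql]
EOF
find . -type f | sort
```

Because mysql is its own role, moving the database to a separate machine later is just a hosts-file change, not a playbook rewrite.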
### Modules ###
All of the work done behind the scenes in ansible is driven by modules. Ansible has an excellent library of built-in modules that do things like package installation, transferring files, and everything else we have done in this article. But for some people these will not suit their setup, so ansible provides a means of adding your own modules. One great thing about Ansible's API is that you are not restricted to the language it was written in, Python; you can use any language, really. Ansible modules work by passing around JSON data structures, so as long as you can build a JSON data structure in your language of choice, which pretty much any scripting language can do, you can begin coding something right away. There is plenty of documentation on the Ansible site about how the module interface works, and many examples of modules on github as well. Keep in mind that some obscure languages may not have great support, but that would only be because not enough people are contributing code in that language; try it out and publish your results somewhere!
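To make the JSON point concrete, here is roughly the smallest custom module imaginable, sketched in shell (the module name and message are made up): Ansible runs the executable and reads one JSON object from its stdout.

```shell
# A toy module: any executable that prints a JSON result can act as a module.
cat > /tmp/hello_module <<'EOF'
#!/bin/sh
# "changed" tells Ansible whether the module modified anything on the host.
echo '{"changed": false, "msg": "hello from a shell module"}'
EOF
chmod +x /tmp/hello_module
/tmp/hello_module
```

A real non-Python module would also parse the arguments file Ansible passes it, but the output contract is essentially this JSON-on-stdout shape.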
### Conclusion ###
In conclusion, while there are many systems around for configuration management, I hope this article shows the ease of setup for ansible, which I believe is one of its strongest points. Please keep in mind that I was trying to show a lot of different ways to do things, and not everything above may be considered best practice in your private infrastructure or in the coding world at large. Here are some more links to take your knowledge of ansible to the next level:
- [Ansible project][7] home page.
- [Ansible project documentation][8].
- [Multistage environments with Ansible][9].
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/python-tutorials/linux-tutorial-install-ansible-configuration-management-and-it-automation-tool/
作者:[Nix Craft][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.cyberciti.biz/tips/about-us
[1]:http://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/
[2]:http://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
[3]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
[4]:http://www.cyberciti.biz/faq/debian-ubuntu-centos-rhel-linux-install-pipclient/
[5]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
[6]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
[7]:http://www.ansible.com/
[8]:http://docs.ansible.com/
[9]:http://rosstuck.com/multistage-environments-with-ansible/

View File

@ -1,218 +0,0 @@
How to create a site-to-site IPsec VPN tunnel using Openswan in Linux
================================================================================
A virtual private network (VPN) tunnel is used to securely interconnect two physically separate networks through a tunnel over the Internet. Tunneling is needed when the separate networks are private LAN subnets with globally non-routable private IP addresses, which are not reachable to each other via traditional routing over the Internet. For example, VPN tunnels are often deployed to connect different NATed branch office networks belonging to the same institution.
Sometimes VPN tunneling may be used simply for its security benefit as well. Service providers or private companies may design their networks in such a way that vital servers (e.g., database, VoIP, banking servers) are placed in a subnet that is accessible to trusted personnel through a VPN tunnel only. When a secure VPN tunnel is required, [IPsec][1] is often a preferred choice because an IPsec VPN tunnel is secured with multiple layers of security.
This tutorial will show how we can easily create a site-to-site VPN tunnel using [Openswan][2] in Linux.
### Topology ###
This tutorial will focus on the following topologies for creating an IPsec tunnel.
![](https://farm4.staticflickr.com/3838/15004668831_fd260b7f1e_z.jpg)
![](https://farm6.staticflickr.com/5559/15004668821_36e02ab8b0_z.jpg)
![](https://farm6.staticflickr.com/5571/14821245117_3f677e4d58_z.jpg)
### Installing Packages and Preparing VPN Servers ###
Usually, you will be managing site-A only, but based on the requirements, you could be managing both site-A and site-B. We start the process by installing Openswan.
On Red Hat based Systems (CentOS, Fedora or RHEL):
# yum install openswan lsof
On Debian based Systems (Debian, Ubuntu or Linux Mint):
# apt-get install openswan
Now we disable ICMP redirects, if any, on the server using these commands:
# for vpn in /proc/sys/net/ipv4/conf/*;
# do echo 0 > $vpn/accept_redirects;
# echo 0 > $vpn/send_redirects;
# done
Next, we modify the kernel parameters to allow IP forwarding and disable redirects permanently.
# vim /etc/sysctl.conf
----------
net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
Reload /etc/sysctl.conf:
# sysctl -p
We allow necessary ports in the firewall. Please make sure that the rules are not conflicting with existing firewall rules.
# iptables -A INPUT -p udp --dport 500 -j ACCEPT
# iptables -A INPUT -p tcp --dport 4500 -j ACCEPT
# iptables -A INPUT -p udp --dport 4500 -j ACCEPT
Finally, we create firewall rules for NAT.
# iptables -t nat -A POSTROUTING -s site-A-private-subnet -d site-B-private-subnet -j SNAT --to site-A-Public-IP
Please make sure that the firewall rules are persistent.
#### Note: ####
- You could use MASQUERADE instead of SNAT. Logically it should work, but it caused me to have issues with virtual private servers (VPS) in the past. So I would use SNAT if I were you.
- If you are managing site-B as well, create similar rules in site-B server.
- Direct routing does not need SNAT.
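On the persistence point above, how you save the rules depends on the distribution. On Red Hat style systems "service iptables save" usually does the job; on Debian style systems a common pattern is to dump the rules once and restore them from an if-pre-up hook. A sketch of the Debian approach (written under /tmp here so it is safe to try; in practice the hook would live in /etc/network/if-pre-up.d/ and the dump would be produced with iptables-save from a root shell):

```shell
# Illustrative persistence hook; /tmp/iptables-demo stands in for
# /etc/network/if-pre-up.d/, and /etc/iptables.rules for the saved ruleset.
mkdir -p /tmp/iptables-demo
cat > /tmp/iptables-demo/iptables <<'EOF'
#!/bin/sh
# Restore the saved ruleset before the interface comes up.
/sbin/iptables-restore < /etc/iptables.rules
EOF
chmod +x /tmp/iptables-demo/iptables
```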
### Preparing Configuration Files ###
The first configuration file that we will work with is ipsec.conf. Regardless of which server you are configuring, always consider your site as 'left' and remote site as 'right'. The following configuration is done in siteA's VPN server.
# vim /etc/ipsec.conf
----------
## general configuration parameters ##
config setup
plutodebug=all
plutostderrlog=/var/log/pluto.log
protostack=netkey
nat_traversal=yes
virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/16
## disable opportunistic encryption in Red Hat ##
oe=off
## disable opportunistic encryption in Debian ##
## Note: this is a separate declaration statement ##
include /etc/ipsec.d/examples/no_oe.conf
## connection definition in Red Hat ##
conn demo-connection-redhat
authby=secret
auto=start
ike=3des-md5
## phase 1 ##
keyexchange=ike
## phase 2 ##
phase2=esp
phase2alg=3des-md5
compress=no
pfs=yes
type=tunnel
left=<siteA-public-IP>
leftsourceip=<siteA-public-IP>
leftsubnet=<siteA-private-subnet>/netmask
## for direct routing ##
leftsubnet=<siteA-public-IP>/32
leftnexthop=%defaultroute
right=<siteB-public-IP>
rightsubnet=<siteB-private-subnet>/netmask
## connection definition in Debian ##
conn demo-connection-debian
authby=secret
auto=start
## phase 1 ##
keyexchange=ike
## phase 2 ##
esp=3des-md5
pfs=yes
type=tunnel
left=<siteA-public-IP>
leftsourceip=<siteA-public-IP>
leftsubnet=<siteA-private-subnet>/netmask
## for direct routing ##
leftsubnet=<siteA-public-IP>/32
leftnexthop=%defaultroute
right=<siteB-public-IP>
rightsubnet=<siteB-private-subnet>/netmask
Authentication can be done in several different ways. This tutorial will cover the use of pre-shared key, which is added to the file /etc/ipsec.secrets.
# vim /etc/ipsec.secrets
----------
siteA-public-IP siteB-public-IP: PSK "pre-shared-key"
## in case of multiple sites ##
siteA-public-IP siteC-public-IP: PSK "corresponding-pre-shared-key"
### Starting the Service and Troubleshooting ###
The server should now be ready to create a site-to-site VPN tunnel. If you are managing siteB as well, please make sure that you have configured the siteB server with the necessary parameters. For Red Hat based systems, please make sure that you add the service to startup using the chkconfig command.
# /etc/init.d/ipsec restart
If there are no errors on either end server, the tunnel should be up now. Taking the following into consideration, you can test the tunnel with the ping command.
1. The siteB private subnet should not be reachable from site A, i.e., ping should not work if the tunnel is not up.
1. After the tunnel is up, try pinging the siteB private subnet from siteA. This should work.
Also, the routes to the destination's private subnet should appear in the server's routing table.
# ip route
----------
[siteB-private-subnet] via [siteA-gateway] dev eth0 src [siteA-public-IP]
default via [siteA-gateway] dev eth0
Additionally, we can check the status of the tunnel using the following useful commands.
# service ipsec status
----------
IPsec running - pluto pid: 20754
pluto pid 20754
1 tunnels up
some eroutes exist
----------
# ipsec auto --status
----------
## output truncated ##
000 "demo-connection-debian": myip=<siteA-public-IP>; hisip=unset;
000 "demo-connection-debian": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; nat_keepalive: yes
000 "demo-connection-debian": policy: PSK+ENCRYPT+TUNNEL+PFS+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 32,28; interface: eth0;
## output truncated ##
000 #184: "demo-connection-debian":500 STATE_QUICK_R2 (IPsec SA established); EVENT_SA_REPLACE in 1653s; newest IPSEC; eroute owner; isakmp#183; idle; import:not set
## output truncated ##
000 #183: "demo-connection-debian":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 1093s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:not set
The log file /var/log/pluto.log should also contain useful information regarding authentication, key exchanges and information on different phases of the tunnel. If your tunnel doesn't come up, you could check there as well.
If you are sure that all the configuration is correct, and if your tunnel is still not coming up, you should check the following things.
1. Many ISPs filter IPsec ports. Make sure that the UDP 500 and TCP/UDP 4500 ports are allowed by your ISP. You could try connecting to your server's IPsec ports from a remote location using telnet.
1. Make sure that necessary ports are allowed in the firewall of the server/s.
1. Make sure that the pre-shared keys are identical in both end servers.
1. The left and right parameters should be properly configured on both end servers.
1. If you are facing problems with NAT, try using SNAT instead of MASQUERADING.
To sum up, this tutorial focused on the procedure for creating a site-to-site IPsec VPN tunnel in Linux using Openswan. VPN tunnels are very useful in enhancing security as they allow admins to make critical resources available only through the tunnels. Also, VPN tunnels ensure that the data in transit is secured from eavesdropping or interception.
Hope this helps. Let me know what you think.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/create-site-to-site-ipsec-vpn-tunnel-openswan-linux.html
作者:[Sarmed Rahman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://en.wikipedia.org/wiki/IPsec
[2]:https://www.openswan.org/

View File

@ -1,107 +0,0 @@
wangjiezhe translating
6 Interesting Funny Commands of Linux (Fun in Terminal) Part II
================================================================================
In our past articles, we've shown some funny commands of Linux, which show that Linux is not as complex as it seems and can be fun if we know how to use it. The Linux command line can perform any complex task very easily and with perfection, and can be interesting and joyful.
- [20 Funny Commands of Linux Part I][1]注此篇的原文应该翻译过文件名应该是20 Funny Commands of Linux or Linux is Fun in Terminal
- [Fun in Linux Terminal Play with Word and Character Counts][2]注:这篇文章刚刚补充上
![Funny Linux Commands](http://www.tecmint.com/wp-content/uploads/2014/08/Funny-Linux-Commands.png)
Funny Linux Commands
The former post comprised 20 funny Linux commands/scripts (and subcommands) and was highly appreciated by our readers. The other post, though not as popular as the former, comprised commands/scripts and tweaks that let you play with text files, words and strings.
This post aims at bringing some new fun commands and one-liner scripts which are going to delight you.
### 1. pv Command ###
You might have seen simulated typing in movies: text appears as if it is being typed in real time. Won't it be nice if you can have such an effect in the terminal?
This can be achieved by installing the **pv** command on your Linux system using the **apt** or **yum** tool. Let's install the **pv** command as shown.
# yum install pv [On RedHat based Systems]
# sudo apt-get install pv [On Debian based Systems]
Once the **pv** command is installed successfully on your system, let's try to run the following one-liner command to see the real-time text effect on the screen.
$ echo "Tecmint[dot]com is a community of Linux Nerds and Geeks" | pv -qL 10
![pv command in action](http://www.tecmint.com/wp-content/uploads/2014/08/pv-command.gif)
pv command in action
**Note**: The **q** option means quiet, i.e., no output information, and the **L** option sets the limit of transfer in bytes per second. The number can be adjusted in either direction (it must be an integer) to get the desired typing-speed simulation.
### 2. toilet Command ###
How about printing text with a border in the terminal, using a one-liner script with the **toilet** command? Again, you must have the **toilet** command installed on your system; if not, use apt or yum to install it.
$ while true; do echo "$(date | toilet -f term -F border Tecmint)"; sleep 1; done
![toilet command in action](http://www.tecmint.com/wp-content/uploads/2014/08/toilet-command.gif)
toilet command in action
**Note**: The above script needs to be suspended using **ctrl+z** key.
### 3. rig Command ###
This command generates a random identity and address every time. To run this command, you need to install **rig** using apt or yum.
# rig
![rig command in action](http://www.tecmint.com/wp-content/uploads/2014/08/rig-command.gif)
rig command in action
### 4. aview Command ###
How about viewing an image in ASCII format in the terminal? We must have the package **aview** installed; just apt or yum it. I've an image named **elephant.jpg** in my current working directory and I want to view it in the terminal in ASCII format.
$ asciiview elephant.jpg -driver curses
![aview command in action](http://www.tecmint.com/wp-content/uploads/2014/08/elephant.gif)
aview command in action
### 5. xeyes Command ###
In the last article we introduced the command **oneko**, which attaches jerry to the mouse pointer and keeps on chasing it. A similar program, **xeyes**, is a graphical program; as soon as you fire the command you will see two monster eyes chasing your movements.
$ xeyes
![xeyes command in action](http://www.tecmint.com/wp-content/uploads/2014/08/xeyes.gif)
xeyes command in action
### 6. cowsay Command ###
Do you remember the command we introduced last time, which outputs your desired text along with an animated cow character? What if you want another animal in place of the cow? Check the list of available animals.
$ cowsay -l
How about an elephant inside an ASCII snake?
$ cowsay -f elephant-in-snake Tecmint is Best
![cowsay command in action](http://www.tecmint.com/wp-content/uploads/2014/08/cowsay.gif)
cowsay command in action
How about an elephant inside an ASCII goat?
$ cowsay -f gnu Tecmint is Best
![cowsay goat in action](http://www.tecmint.com/wp-content/uploads/2014/08/cowsay-goat.gif)
cowsay goat in action
That's all for now. I'll be here again with another interesting article. Till then, stay updated and connected to Tecmint. Don't forget to provide us with your valuable feedback in the comments below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-funny-commands/
作者:[Avishek Kumar][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/20-funny-commands-of-linux-or-linux-is-fun-in-terminal/
[2]:http://www.tecmint.com/play-with-word-and-character-counts-in-linux/

View File

@ -1,3 +1,5 @@
chi1shi2 is translating.
How to use on-screen virtual keyboard on Linux
================================================================================
On-screen virtual keyboard is an alternative input method that can replace a real hardware keyboard. Virtual keyboard may be a necessity in various cases. For example, your hardware keyboard is just broken; you do not have enough keyboards for extra machines; your hardware does not have an available port left to connect a keyboard; you are a disabled person with difficulty in typing on a real keyboard; or you are building a touchscreen-based web kiosk.

View File

@ -1,46 +0,0 @@
johnhoow translating...
Use LaTeX In Ubuntu 14.04 and Linux Mint 17 With Texmaker
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/texmaker_Ubuntu.jpeg)
[LaTeX][1] is a document markup language and document preparation system. It is widely used as a standard in universities and academics to write professional scientific papers, thesis and other such documents. In this quick post, we shall see **how to use LaTeX in Ubuntu 14.04**.
### Install Texmaker to use LaTeX in Ubuntu 14.04 & Linux Mint 17 ###
[Texmaker][2] is a free and open source LaTeX editor which is available for all major desktop OSes, i.e., Windows, Linux and OS X. Following are the salient features of Texmaker:
- Unicode editor
- Spell checker
- Code folding
- Code completion
- Fast navigation
- Integrated Pdf viewer
- Easy compilation
- 370 Mathematical symbols
- LaTeX documentation
- Export to html and odt via TeX4ht
- Regex support
You can install Texmaker in Ubuntu 14.04 by downloading the binaries from the given link:
- [Download Texmaker LaTeX editor][3]
Since it is packaged as a .deb, the same installation files can be used in any other Debian based distribution such as Linux Mint, Elementary OS, Pinguy OS etc.
If you want a Github type markdown editor, you should check [Remarkable editor][4]. I hope Texmaker helps you with **LaTeX in Ubuntu** and Linux Mint.
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-latex-ubuntu-1404/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://www.latex-project.org/
[2]:http://www.xm1math.net/texmaker/index.html
[3]:http://www.xm1math.net/texmaker/download.html#linux
[4]:http://itsfoss.com/remarkable-markdown-editor-linux/

View File

@ -1,73 +0,0 @@
How to install Arch Linux the easy way with Evo/Lution
================================================================================
The one who ventures into an install of Arch Linux and has only experienced installing Linux with Ubuntu or Mint is in for a steep learning curve. The number of people giving up halfway is probably higher than the number who pull it through. Arch Linux has somewhat of a cult status, in that you may call yourself a weathered Linux user if you succeed in setting it up and configuring it in a useful way.
Even though there is a [helpful wiki][1] to guide newcomers, the requirements are still too high for some who set out to conquer Arch. You need to be at least familiar with commands like fdisk or mkfs in a terminal and have heard of mc, nano or chroot to make it through this endeavour. It reminds me of a Debian install 10 years ago.
For those ambitious souls that still lack some knowledge, an installer in the form of an ISO image called [Evo/Lution Live ISO][2] comes to the rescue. Even though it is booted like a distribution of its own, it does nothing but assist with installing a bare-bones Arch Linux. Evo/Lution is a project that aims to diversify the user base of Arch by providing a simple way of installing Arch as well as a community that provides comprehensive help and documentation to that group of users. In this mix, Evo is the (non-installable) live CD and Lution is the installer itself. The project's founders see a widening gap between Arch developers and users of Arch and its derivative distributions, and want to build a community with equal roles among all participants.
![](https://farm6.staticflickr.com/5559/15067088008_ecb221408c_z.jpg)
The software part of the project is the CLI installer Lution-AIS which explains every step of what happens during the installation of a pure vanilla Arch. The resulting installation will have all the latest software that Arch has to offer without adding anything from AUR or any other custom packages.
After booting up the ISO image, which weighs in at 422 MB, we are presented with a workspace consisting of a Conky display on the right with shortcuts to the options and a LX-Terminal on the left waiting to run the installer.
![](https://farm6.staticflickr.com/5560/15067056888_6345c259db_z.jpg)
After setting off the actual installer by either right-clicking on the desktop or using ALT-i, you are presented with a list of 16 jobs to be run. It makes sense to run them all unless you know better. You can either run them one by one, make a selection like 1 3 6 or 1-4, or do them all at once by entering 1-16. Most steps need to be confirmed with a 'y' for yes, and the next task waits for you to hit Enter. This allows time to read the installation guide, which is hidden behind ALT-g, or even to walk away from it.
![](https://farm4.staticflickr.com/3868/15253227082_5e7219f72d_z.jpg)
The 16 steps are divided in "Base Install" and "Desktop Install". The first group takes care of localization, partitioning, and installing a bootloader.
The installer leads you through partitioning with gparted, gdisk, and cfdisk as options.
![](https://farm4.staticflickr.com/3873/15230603226_56bba60d28_z.jpg)
![](https://farm4.staticflickr.com/3860/15253610055_e6a2a7a1cb_z.jpg)
After you have created partitions (e.g., /dev/sda1 for root and /dev/sda2 for swap using gparted as shown in the screenshot), you can choose 1 out of 10 file systems. In the next step, you can choose your kernel (latest or LTS) and base system.
![](https://farm6.staticflickr.com/5560/15253610085_aa5a9557fb_z.jpg)
After installing the bootloader of your choice, the first part of the install is done, which takes approximately 12 minutes. This is the point where in plain Arch Linux you reboot into your system for the first time.
With Lution you just move on to the second part which installs Xorg, sound and graphics drivers, and then moves on to desktop environments.
![](https://farm4.staticflickr.com/3918/15066917430_c21e0f0a9e_z.jpg)
The installer detects if an install is done in VirtualBox, and will automatically install and load the right generic drivers for the VM and sets up **systemd** accordingly.
In the next step, you can choose between the desktop environments KDE, Gnome, Cinnamon, LXDE, Enlightenment, Mate or XFCE. Should you not be friends with the big ships, you can also go with a Window manager like Awesome, Fluxbox, i3, IceWM, Openbox or PekWM.
![](https://farm4.staticflickr.com/3874/15253610125_26f913be20_z.jpg)
Part two of the installer will take under 10 minutes with Cinnamon as the desktop environment; however, KDE will take longer due to a much larger download.
Lution-AIS worked like a charm on two tries with Cinnamon and Awesome. After the installer was done and prompted me to reboot, it took me to the desired environments.
![](https://farm4.staticflickr.com/3885/15270946371_c2def59f37_z.jpg)
I have only two points to criticize: when the installer offered to let me choose a mirror list, and when it created the fstab file. In both cases it opened a second terminal, prompting me with an informational text. It took me a while to figure out I had to close the terminals before the installer would move on. When it prompts you after creating fstab, you need to close the terminal, and answer 'yes' when asked if you want to save the file.
![](https://farm4.staticflickr.com/3874/15067056958_3bba63da60_z.jpg)
The second of my issues probably has to do with VirtualBox. When starting up, you may see a message that no network has been detected. Clicking on the top icon on the left will open wicd, the network manager that is used here. Clicking on "Disconnect" and then "Connect" and restarting the installer will get the network detected automatically.
Evo/Lution seems a worthwhile project, and Lution already works well. Not much can be said about the community side yet: they started a brand new website, forum, and wiki that first need to be filled with content. So if you like the idea, join [their forum][3] and let them know. The ISO image can be downloaded from [the website][4].
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/09/install-arch-linux-easy-way-evolution.html
作者:[Ferdinand Thommes][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/ferdinand
[1]:https://wiki.archlinux.org/
[2]:http://www.evolutionlinux.com/
[3]:http://www.evolutionlinux.com/forums/
[4]:http://www.evolutionlinux.com/downloads.html


@ -1,109 +0,0 @@
Linux FAQs with Answers--How to create a MySQL database from the command line
================================================================================
> **Question**: I have a MySQL server up and running somewhere. How can I create and populate a MySQL database from the command line?
To create a MySQL database from the command line, you can use the mysql CLI client. Here is a step-by-step procedure to create and populate a MySQL database using the mysql client from the command line.
### Step One: Install MySQL Client ###
Of course you need to make sure that MySQL client program is installed. If not, you can install it as follows.
On Debian, Ubuntu or Linux Mint:
$ sudo apt-get install mysql-client
On Fedora, CentOS or RHEL:
$ sudo yum install mysql
### Step Two: Log in to a MySQL Server ###
To begin, first log in to your MySQL server as root with the following command:
$ mysql -u root -h <mysql-server-ip-address> -p
Note that to be able to log in to a remote MySQL server, you need to [enable remote access on the server][1]. If you are invoking mysql command on the same host where the MySQL server is running, you can omit "-h <mysql-server-ip-address>" as follows.
$ mysql -u root -p
You will then be asked for the password of the MySQL root user. If the authentication succeeds, the MySQL prompt will appear.
![](https://www.flickr.com/photos/xmodulo/15272971112/)
### Step Three: Create a MySQL Database ###
Before you start typing commands at the MySQL prompt, remember that each command must end with a semicolon (otherwise it will not execute). In addition, consider using uppercase letters for commands and lowercase letters for database objects. This is not required, but it helps readability.
Now, let's create a database named xmodulo_DB:
mysql> CREATE DATABASE IF NOT EXISTS xmodulo_DB;
![](https://farm4.staticflickr.com/3864/15086792487_8e2eaedbcd.jpg)
### Step Four: Create a MySQL Table ###
For demonstration purposes, we will create a table called posts_tbl to store the following information about posts:
- Text of article
- Author's first name
- Author's last name
- Whether the post is enabled (visible) or not
- Date when article was posted
This process is actually performed in two steps:
First, select the database that we want to use:
mysql> USE xmodulo_DB;
Then create a new table in the database:
    mysql> CREATE TABLE `posts_tbl` (
    `post_id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
    `content` TEXT,
    `author_FirstName` VARCHAR(100) NOT NULL,
    `author_LastName` VARCHAR(50) DEFAULT NULL,
    `isEnabled` TINYINT(1) NOT NULL DEFAULT 1,
    `date` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`post_id`)
    ) ENGINE = MyISAM;
![](https://farm4.staticflickr.com/3870/15086654980_39d2d54d72.jpg)
### Step Five: Create a User Account and Grant Permissions ###
When it comes to accessing our newly created database and tables, it's a good idea to create a new user account, so it can access that database (and that database only) without full permissions to the whole MySQL server.
You can create a new user, grant permissions and apply changes in two easy steps as follows:
mysql> GRANT ALL PRIVILEGES ON xmodulo_DB.* TO 'new_user'@'%' IDENTIFIED BY 'new_password';
mysql> FLUSH PRIVILEGES;
where 'new_user' and 'new_password' refer to the new user account name and its password, respectively. This information is stored in the mysql.user table, with the password stored in hashed form.
### Step Six: Testing ###
Let's insert one dummy record to the posts_tbl table:
mysql> USE xmodulo_DB;
mysql> INSERT INTO posts_tbl (content, author_FirstName, author_LastName)
VALUES ('Hi! This is some dummy text.', 'Gabriel', 'Canepa');
Then view all the records in posts_tbl table:
mysql> SELECT * FROM posts_tbl;
![](https://farm4.staticflickr.com/3896/15086792527_39a987d8bd_z.jpg)
Note that MySQL automatically inserted the proper default values in the fields where we defined them earlier (e.g., 'isEnabled' and 'date').
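The interactive statements above can also be collected into a plain SQL script and replayed with the mysql client in batch mode. A minimal sketch (the script name is made up, and the commented-out replay step still needs a running server and valid credentials):

```shell
# Collect the statements from the steps above into a reusable script file.
cat > setup_xmodulo.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS xmodulo_DB;
USE xmodulo_DB;
INSERT INTO posts_tbl (content, author_FirstName, author_LastName)
VALUES ('Hi! This is some dummy text.', 'Gabriel', 'Canepa');
SQL

# Replay it against the server (commented out here; needs credentials):
# mysql -u root -p < setup_xmodulo.sql

wc -l < setup_xmodulo.sql   # 4 lines of SQL written
```

Keeping the setup in a script makes the database reproducible on another server with a single command.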
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/create-mysql-database-command-line.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://xmodulo.com/2012/06/how-to-allow-remote-access-to-mysql.html


@ -1,182 +0,0 @@
Network Installation of Debian 7 (Wheezy) on Client Machines using DNSMASQ Network Boot Server
================================================================================
This tutorial will guide you through installing **Debian 7 (Wheezy)** directly from a network location using **DNSMASQ** as a **PXE Server (Preboot eXecution Environment)**, in case your server doesn't provide any way to boot from a CD/DVD/USB media drive, or can't operate with an attached monitor, keyboard and mouse.
![Debian 7 Network Installation on Client Machines](http://www.tecmint.com/wp-content/uploads/2014/09/Network-Debian-Instalaltion.png)
Debian 7 Network Installation on Client Machines
**DNSMASQ** is a lightweight network infrastructure server which can provide crucial network services such as DNS, DHCP and network booting, using a built-in DNS, DHCP and TFTP server.
Once the PXE server is up and running you can instruct all your client machines to boot directly from the network, provided that your clients have a network card that supports network booting, which can be enabled in the BIOS under a Network Boot or Boot Services option.
### Requirements ###
- [Debian 7 (Wheezy) Installation Guide][1]
### Step 1: Install and Configure DNSMASQ Server ###
**1.** First of all, after you install your Debian server, make sure that your system uses a **static IP address**, because, besides network booting, it will also provide DHCP service for your entire network segment. Once the static IP address has been configured, run the following command from the root account (or a user with root privileges) to install the DNSMASQ server.
# apt-get install dnsmasq
![Install Dnsmasq Package](http://www.tecmint.com/wp-content/uploads/2014/09/Install-Dnsmasq-in-Debian.png)
Install Dnsmasq Package
**2.** Once the DNSMASQ package is installed, you can start editing its configuration file. First create a backup of the main configuration file, then start editing **dnsmasq.conf** by issuing the following commands.
# mv /etc/dnsmasq.conf /etc/dnsmasq.conf.backup
# nano /etc/dnsmasq.conf
![Backup Dnsmasq Configuration](http://www.tecmint.com/wp-content/uploads/2014/09/Backup-dnsmasq-Configuration-file.png)
Backup Dnsmasq Configuration
**3.** The above backup step consisted of renaming the main configuration file, so the new file should be empty. Use the following excerpt for the **DNSMASQ** configuration file as described below.
interface=eth0
domain=debian.lan
dhcp-range=192.168.1.3,192.168.1.253,255.255.255.0,1h
dhcp-boot=pxelinux.0,pxeserver,192.168.1.100
pxe-prompt="Press F8 for menu.", 60
#pxe-service types: x86PC, PC98, IA64_EFI, Alpha, Arc_x86, Intel_Lean_Client, IA32_EFI, BC_EFI, Xscale_EFI and X86-64_EFI
pxe-service=x86PC, "Install Debian 7 Linux from network server 192.168.1.100", pxelinux
enable-tftp
tftp-root=/srv/tftp
![Configuration of Dnsmasq](http://www.tecmint.com/wp-content/uploads/2014/09/Configure-dnsmasq.png)
Configuration of Dnsmasq
- **interface** The network interface the server should listen on.
- **domain** Replace it with your domain name.
- **dhcp-range** Replace it with the IP range defined by your network mask.
- **dhcp-boot** Leave it as default, but replace the IP statement with your server's IP address.
- **pxe-prompt** Leave it as default; it requires pressing the **F8 key** to enter the menu, with a 60-second wait time.
- **pxe-service** Use **x86PC** for 32-bit/64-bit architectures and enter a menu description prompt between the quotes. Other possible values are: PC98, IA64_EFI, Alpha, Arc_x86, Intel_Lean_Client, IA32_EFI, BC_EFI, Xscale_EFI and X86-64_EFI.
- **enable-tftp** Enables the built-in TFTP server.
- **tftp-root** Use **/srv/tftp** as the location for the Debian netboot files.
### Step 2: Download Debian Netboot Files and Open Firewall Connection ###
**4.** Now it's time to download the Debian network boot files. First, change your current working directory to the **TFTP root** location defined by the last configuration statement (the **/srv/tftp** system path).
Go to an official mirror page of [Debian Netinstall][2], [Network boot section][3], and grab the following files, depending on the system architecture you want to install on your clients.
Once you have downloaded the **netboot.tar.gz** file, extract the archive (this procedure is described for 64-bit only, but the same applies to other system architectures).
# cd /srv/tftp/
# wget http://ftp.nl.debian.org/debian/dists/wheezy/main/installer-amd64/current/images/netboot/netboot.tar.gz
# tar xfz netboot.tar.gz
# wget http://ftp.nl.debian.org/debian/dists/wheezy/main/installer-amd64/current/images/SHA256SUMS
# wget http://ftp.nl.debian.org/debian/dists/wheezy/Release
# wget http://ftp.nl.debian.org/debian/dists/wheezy/Release.gpg
It may also be necessary to make all files in the **TFTP** directory readable by the TFTP server.
# chmod -R 755 /srv/tftp/
![Download Debian NetBoot Files](http://www.tecmint.com/wp-content/uploads/2014/09/Download-Debian-NetBoot-Files.png)
Download Debian NetBoot Files
Use the following variables for **Debian Netinstall** mirrors and architectures.
# wget http://"$YOURMIRROR"/debian/dists/wheezy/main/installer-"$ARCH"/current/images/netboot/netboot.tar.gz
# wget http://"$YOURMIRROR"/debian/dists/wheezy/main/installer-"$ARCH"/current/images/SHA256SUMS
# wget http://"$YOURMIRROR"/debian/dists/wheezy/Release
# wget http://"$YOURMIRROR"/debian/dists/wheezy/Release.gpg
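Since the four downloads only differ in the mirror, architecture, and file path, they can be generated from those two variables. A small sketch that prints the URLs with echo so you can inspect them before swapping in wget (the mirror shown is simply the one used in this article):

```shell
YOURMIRROR="ftp.nl.debian.org"   # any official Debian mirror works
ARCH="amd64"                     # e.g. amd64, i386
BASE="http://$YOURMIRROR/debian/dists/wheezy"

# Build each download URL from the mirror/arch variables.
for f in "main/installer-$ARCH/current/images/netboot/netboot.tar.gz" \
         "main/installer-$ARCH/current/images/SHA256SUMS" \
         "Release" \
         "Release.gpg"; do
    echo "$BASE/$f"              # swap echo for: wget "$BASE/$f"
done
```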
**5.** In the next step, start or restart the DNSMASQ daemon and run the netstat command to get a list of ports the server is listening on.
# service dnsmasq restart
# netstat -tulpn | grep dnsmasq
![Start Dnsmasq Service](http://www.tecmint.com/wp-content/uploads/2014/09/Start-Dnsmasq-Service.png)
Start Dnsmasq Service
**6.** Debian-based distributions usually ship with the **UFW firewall** package. Use the following commands to open the required **DNSMASQ** port numbers: **67** (BOOTPS), **69** (TFTP), **53** (DNS) and **4011** (proxyDHCP) on UDP, and **53** (DNS) on TCP.
# ufw allow 69/udp
# ufw allow 4011/udp ## Only if you have a ProxyDHCP on the network
# ufw allow 67/udp
# ufw allow 53/tcp
# ufw allow 53/udp
![Open Dnsmasq Ports](http://www.tecmint.com/wp-content/uploads/2014/09/Open-Dnsmasq-Ports-620x303.png)
Open Dnsmasq Ports
Now, the PXE loader on your client's network interface will load the **pxelinux** configuration files from the **/srv/tftp/pxelinux.cfg** directory in this order.
- GUID files
- MAC files
- Default file
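For instance, for a client whose MAC address is 00:0c:29:ab:cd:ef (a made-up address), the MAC-based file pxelinux looks for is the address prefixed with the ARP hardware type "01" and with colons replaced by dashes. A quick sketch:

```shell
mac="00:0c:29:ab:cd:ef"                  # hypothetical client MAC address
cfg="01-$(echo "$mac" | tr ':' '-')"     # "01-" ARP-type prefix, dashes for colons
echo "/srv/tftp/pxelinux.cfg/$cfg"
# -> /srv/tftp/pxelinux.cfg/01-00-0c-29-ab-cd-ef
# If neither a GUID file nor a MAC file matches, pxelinux falls back
# through hex-encoded IP prefixes and finally to pxelinux.cfg/default.
```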
### Step 3: Configure Clients to Boot from Network ###
**7.** To enable network booting for a client computer, enter your system's **BIOS configuration** (please consult your motherboard vendor's documentation on entering BIOS settings).
Go to **Boot menu** and select **Network boot** as the **primary boot device** (on some systems you can select the boot device without entering BIOS configuration just by pressing a key during **BIOS POST**).
![Select BIOS Settings](http://www.tecmint.com/wp-content/uploads/2014/09/Select-BIOS-Settings.png)
Select BIOS Settings
**8.** After editing the boot order sequence, usually you press **F10** to save the BIOS settings. After reboot, your client computer should boot directly from the network, and the first **PXE** prompt should appear, asking you to press the **F8** key to enter the menu.
Next, hit the **F8** key to move forward and a new prompt will appear. Hit **Enter** again and the main **Debian Installer** prompt should appear on your screen as in the screenshots below.
![Boot Menu Selection](http://www.tecmint.com/wp-content/uploads/2014/09/Boot-Menu-Selection.png)
Boot Menu Selection
![Select Debian Installer Boot](http://www.tecmint.com/wp-content/uploads/2014/09/Select-Debian-Installer-Boot.png)
Select Debian Installer Boot
![Select Debian Install](http://www.tecmint.com/wp-content/uploads/2014/09/Select-Debian-Install.png)
Select Debian Install
From here on you can start installing Debian on your machine using the Debian 7 Wheezy procedure (installation link given above), but you also need to make sure your machine has an active Internet connection in order to be able to finish the installation process.
### Step 4: Debug DNSMASQ Server and Enable it System-Wide ###
**9.** To diagnose the server for any problems, or to see the information offered to clients, run the following command to view the log file.
# tailf /var/log/daemon.log
![Debug DNSMASQ Server](http://www.tecmint.com/wp-content/uploads/2014/09/Debbug-DNSMASQ-Server.png)
Debug DNSMASQ Server
**10.** If everything is in place during your tests, you can now enable the **DNSMASQ** daemon to start automatically after a system reboot, with the help of the **sysv-rc-conf** package.
# apt-get install sysv-rc-conf
    # sysv-rc-conf dnsmasq on
![Enable DNSMASQ Daemon](http://www.tecmint.com/wp-content/uploads/2014/09/Enable-DNSMASQ-Daemon.png)
Enable DNSMASQ Daemon
That's all! Now your **PXE** server is ready to allocate IP addresses (**DHCP**) and offer the required boot information to all the clients on your network segment, which will be configured to boot and install Debian Wheezy from the network.
Using PXE network boot installation has advantages on networks with a large number of hosts: you can set up the entire network infrastructure in a short period of time, it facilitates the distribution upgrade process, and it can also automate the entire installation process using preseed files (Debian's equivalent of kickstart).
--------------------------------------------------------------------------------
via: http://www.tecmint.com/network-installation-of-debian-7-on-client-machines/
作者:[Matei Cezar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/debian-gnulinux-7-0-code-name-wheezy-server-installation-guide/
[2]:http://www.debian.org/distrib/netinst#netboot
[3]:http://ftp.nl.debian.org/debian/dists/wheezy/main/


@ -0,0 +1,124 @@
wangjiezhe translating
Unix: stat -- more than ls
================================================================================
> Tired of ls and want to see more interesting information on your files? Try stat!
![](http://www.itworld.com/sites/default/files/imagecache/large_thumb_150x113/stats.jpg)
The ls command is probably one of the first commands that anyone using Unix learns, but it only shows a small portion of the information that is available with the stat command.
The stat command pulls information from the file's inode. As you might be aware, there are actually three sets of dates and times that are stored for every file on your system. These include the date the file was last modified (i.e., the date and time that you see when you use the ls -l command), the time the file was last changed (which includes renaming the file), and the time that file was last accessed.
View a long listing for a file and you will see something like this:
$ ls -l trythis
-rwx------ 1 shs unixdweebs 109 Nov 11 2013 trythis
Use the stat command and you see all this:
$ stat trythis
File: `trythis'
Size: 109 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731691 Links: 1
Access: (0700/-rwx------) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-09-09 19:27:58.000000000 -0400
Modify: 2013-11-11 08:40:10.000000000 -0500
Change: 2013-11-11 08:40:10.000000000 -0500
The file's change and modify dates/times are the same in this case, while the access time is fairly recent. We can also see that the file is using 8 blocks, and we see the permissions in each of the two formats -- the octal (0700) format and the rwx format. The inode number, shown in the third line of the output, is 12731691. There are no additional hard links (Links: 1). And the file is a regular file.
Rename the file and you will see that the change time will be updated.
This, the ctime information, was originally intended to hold the creation date and time for the file, but the field was turned into the change time field somewhere along the way.
$ mv trythis trythat
$ stat trythat
File: `trythat'
Size: 109 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731691 Links: 1
Access: (0700/-rwx------) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-09-09 19:27:58.000000000 -0400
Modify: 2013-11-11 08:40:10.000000000 -0500
Change: 2014-09-21 12:46:22.000000000 -0400
Changing the file's permissions would also register in the ctime field.
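You can watch this with a throwaway file: chmod touches only the inode metadata, so ctime moves while mtime stays put. A quick sketch using GNU stat's epoch-second format fields (%Y is mtime, %Z is ctime; the file name is arbitrary):

```shell
touch ctime-demo
m1=$(stat --format=%Y ctime-demo)   # mtime, seconds since the epoch
c1=$(stat --format=%Z ctime-demo)   # ctime
sleep 1
chmod 600 ctime-demo                # metadata-only change
m2=$(stat --format=%Y ctime-demo)
c2=$(stat --format=%Z ctime-demo)
echo "mtime moved by $((m2 - m1))s, ctime moved by $((c2 - c1))s"
```

The mtime difference stays at zero while the ctime difference is at least one second.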
You can also use wildcards with the stat command to list the stats of a group of files:
$ stat myfile*
File: `myfile'
Size: 20 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731803 Links: 1
Access: (0640/-rw-r-----) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-08-23 03:00:36.000000000 -0400
Modify: 2014-08-22 12:02:12.000000000 -0400
Change: 2014-08-22 12:02:12.000000000 -0400
File: `myfile2'
Size: 20 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731806 Links: 1
Access: (0640/-rw-r-----) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-08-23 03:00:36.000000000 -0400
Modify: 2014-08-22 12:03:30.000000000 -0400
Change: 2014-08-22 12:03:30.000000000 -0400
File: `myfile3'
Size: 40 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12730533 Links: 1
Access: (0640/-rw-r-----) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-08-23 03:00:36.000000000 -0400
Modify: 2014-08-22 12:03:59.000000000 -0400
Change: 2014-08-22 12:03:59.000000000 -0400
We can get some of this information with other commands if we like.
Add the "u" option to a long listing and you'll see something like this. Notice that this shows us the last access time, while adding "c" shows us the change time (in this example, the time when we renamed the file).
$ ls -lu trythat
-rwx------ 1 shs unixdweebs 109 Sep 9 19:27 trythat
$ ls -lc trythat
-rwx------ 1 shs unixdweebs 109 Sep 21 12:46 trythat
The stat command can also work against directories.
In this case, we see that there are a number of links.
$ stat bin
File: `bin'
Size: 12288 Blocks: 24 IO Block: 262144 directory
Device: 18h/24d Inode: 15089714 Links: 9
Access: (0700/drwx------) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-09-21 03:00:45.000000000 -0400
Modify: 2014-09-15 17:54:41.000000000 -0400
Change: 2014-09-15 17:54:41.000000000 -0400
Here, we're looking at a file system.
$ stat -f /dev/cciss/c0d0p2
File: "/dev/cciss/c0d0p2"
ID: 0 Namelen: 255 Type: tmpfs
    Block size: 4096       Fundamental block size: 4096
Blocks: Total: 259366 Free: 259337 Available: 259337
Inodes: Total: 223834 Free: 223531
Notice the Namelen (name length) field. Good luck if you had your heart set on file names longer than 255 characters!
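That 255-character cap is easy to demonstrate: a name of exactly 255 characters is accepted, while one more character is rejected with "File name too long". A sketch (run it in a scratch directory; common Linux filesystems such as ext4, XFS and tmpfs share the limit):

```shell
long=$(printf 'a%.0s' $(seq 1 255))      # a name of exactly 255 characters
touch "$long" && echo "255 chars: ok"
touch "${long}b" 2>/dev/null || echo "256 chars: File name too long"
rm -f "$long"
```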
The stat command can also display some of its information one field at a time, for those times when that's all you want to see. In the example below, we just want to see the file type and then the number of hard links.
$ stat --format=%F trythat
regular file
$ stat --format=%h trythat
1
In the examples below, we look at permissions -- in each of the two available formats -- and then the file's SELinux security context.
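With GNU stat, such examples look like the sketch below: %a prints the octal form, %A the rwx form, and %C the SELinux security context (the %C line is left commented out because it only yields a context on an SELinux-enabled system; the file name is arbitrary):

```shell
touch perms-demo
chmod 640 perms-demo
stat --format=%a perms-demo    # 640
stat --format=%A perms-demo    # -rw-r-----
# stat --format=%C perms-demo  # SELinux context, e.g. unconfined_u:object_r:user_home_t:s0
rm -f perms-demo
```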
--------------------------------------------------------------------------------
via: http://www.itworld.com/operating-systems/437351/unix-stat-more-ls
作者:[Sandra Henry-Stocker][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/sandra-henry-stocker


@ -0,0 +1,78 @@
Linux FAQs with Answers--How to configure a static IP address on CentOS 7
================================================================================
> **Question**: On CentOS 7, I want to switch from DHCP to static IP address configuration with one of my network interfaces. What is a proper way to assign a static IP address to a network interface permanently on CentOS or RHEL 7?
If you want to set up a static IP address on a network interface in CentOS 7, there are several ways to do it, depending on whether or not you want to use Network Manager for that interface.
Network Manager is a dynamic network control and configuration system that attempts to keep network devices and connections up and active when they are available. CentOS/RHEL 7 comes with the Network Manager service installed and enabled by default.
To verify the status of Network Manager service:
$ systemctl status NetworkManager.service
To check which network interface is managed by Network Manager, run:
$ nmcli dev status
![](https://farm4.staticflickr.com/3861/15295802711_a102a3574d_z.jpg)
If the output of nmcli shows "connected" for a particular interface (e.g., enp0s3 in the example), it means that the interface is managed by Network Manager. You can easily disable Network Manager for a particular interface, so that you can configure it on your own for a static IP address.
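If you want to script this check, the STATE column can be pulled out with awk. The sketch below runs the pipeline against a captured sample of nmcli output so the extraction itself is reproducible; against a live system you would feed it `nmcli dev status` instead:

```shell
# Stand-in for `nmcli dev status` output, so the parsing is reproducible.
nmcli_sample() {
cat <<'EOF'
DEVICE  TYPE      STATE      CONNECTION
enp0s3  ethernet  connected  enp0s3
lo      loopback  unmanaged  --
EOF
}

state=$(nmcli_sample | awk '$1 == "enp0s3" {print $3}')
echo "$state"   # connected -> the interface is managed by Network Manager
# On a live system: nmcli dev status | awk '$1 == "enp0s3" {print $3}'
```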
Here are **two different ways to assign a static IP address to a network interface on CentOS 7**. We will be configuring a network interface named enp0s3.
### Configure a Static IP Address without Network Manager ###
Go to the /etc/sysconfig/network-scripts directory, and locate its configuration file (ifcfg-enp0s3). Create it if it is not found.
![](https://farm4.staticflickr.com/3911/15112399977_d3df8e15f5_z.jpg)
Open the configuration file and edit the following variables:
![](https://farm4.staticflickr.com/3880/15112184199_f4cbf269a6.jpg)
In the above, "NM_CONTROLLED=no" indicates that this interface will be set up using this configuration file instead of being managed by the Network Manager service, and "ONBOOT=yes" tells the system to bring the interface up during boot.
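Since the configuration is only shown as a screenshot above, here is what a complete static setup typically looks like; every address below is an example value, so substitute your own network's:

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp0s3 -- example values only
TYPE=Ethernet
BOOTPROTO=static
DEVICE=enp0s3
NAME=enp0s3
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.1.25
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
```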
Save changes and restart the network service using the following command:
# systemctl restart network.service
Now verify that the interface has been properly configured:
    # ip addr
![](https://farm6.staticflickr.com/5593/15112397947_ac69a33fb4_z.jpg)
### Configure a Static IP Address with Network Manager ###
If you want to use Network Manager to manage the interface, you can use nmtui (Network Manager Text User Interface) which provides a way to configure Network Manager in a terminal environment.
Before using nmtui, first set "NM_CONTROLLED=yes" in /etc/sysconfig/network-scripts/ifcfg-enp0s3.
Now let's install nmtui as follows.
# yum install NetworkManager-tui
Then go ahead and edit the Network Manager configuration of enp0s3 interface:
# nmtui edit enp0s3
The following screen will allow us to manually enter the same information that is contained in /etc/sysconfig/network-scripts/ifcfg-enp0s3.
Use the arrow keys to navigate this screen, press Enter to select from a list of values (or fill in the desired values), and finally click OK at the bottom right:
![](https://farm4.staticflickr.com/3878/15295804521_4165c97828_z.jpg)
Finally, restart the network service.
# systemctl restart network.service
and you're ready to go.
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/configure-static-ip-address-centos7.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -0,0 +1,53 @@
[su-kaiyao] translating
How To Reset Root Password On CentOS 7
================================================================================
The way to reset the root password on CentOS 7 is quite different from CentOS 6. Let me show you how to reset the root password in CentOS 7.
1 In the GRUB boot menu, select the entry to edit.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_003.png)
2 Press (e) to edit the selected entry.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_005.png)
3 Go to the line beginning with linux16 and change ro to rw init=/sysroot/bin/sh.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_006.png)
4 Now press Control+x to boot into single-user mode.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_007.png)
5 Now access the system with this command.
chroot /sysroot
6 Reset the password.
passwd root
7 Update the SELinux information
touch /.autorelabel
8 Exit chroot
exit
9 Reboot your system
reboot
That's it. Enjoy.
--------------------------------------------------------------------------------
via: http://www.unixmen.com/reset-root-password-centos-7/
作者:M.el Khamlichi
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -0,0 +1,151 @@
How to manage configurations in Linux with Puppet and Augeas
================================================================================
Although [Puppet][1] (LCTT note: this article previously appeared under the filename "20140808 How to install Puppet server and client on CentOS and RHEL.md"; if that translation has been published, change this link to its published address) is a really unique and useful tool, there are situations where you could use a bit of a different approach. Situations like modifying configuration files which are already present on several of your servers and are at the same time unique on each of them. Folks from Puppet Labs realized this as well, and integrated a great tool called [Augeas][2] that is designed exactly for this use case.
Augeas can be best thought of as filling in the gaps in Puppet's capabilities where an object-specific resource type (such as the host resource to manipulate /etc/hosts entries) is not yet available. In this howto, you will learn how to use Augeas to ease your configuration file management.
### What is Augeas? ###
Augeas is basically a configuration editing tool. It parses configuration files in their native formats and transforms them into a tree. Configuration changes are made by manipulating this tree and saving it back into native config files.
### What are we going to achieve in this tutorial? ###
We will install and configure the Augeas tool for use with our previously built Puppet server. We will create and test several different configurations with this tool, and learn how to properly use it to manage our system configurations.
### Prerequisites ###
We will need a working Puppet server and client setup. If you don't have it, please follow my previous tutorial.
The Augeas package can be found in the standard CentOS/RHEL repositories. Unfortunately, Puppet uses the Augeas ruby wrapper, which is only available in the puppetlabs repository (or [EPEL][4]). If you don't have this repository on your system already, add it using the following command:
On CentOS/RHEL 6.5:
    # rpm -ivh https://yum.puppetlabs.com/el/6.5/products/x86_64/puppetlabs-release-6-10.noarch.rpm
On CentOS/RHEL 7:
    # rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-10.noarch.rpm
After you have successfully added this repository, install Ruby-Augeas on your system:
    # yum install ruby-augeas
Or if you are continuing from my last tutorial, install this package the Puppet way. Modify your custom_utils class inside /etc/puppet/manifests/site.pp to include "ruby-augeas" in the packages array:
    class custom_utils {
        package { ["nmap","telnet","vim-enhanced","traceroute","ruby-augeas"]:
            ensure => latest,
            allow_virtual => false,
        }
    }
### Augeas without Puppet ###
As it was said in the beginning, Augeas is not originally from Puppet Labs, which means we can still use it even without Puppet itself. This approach can be useful for verifying your modifications and ideas before applying them in your Puppet environment. To make this possible, you need to install one additional package in your system. To do so, please execute following command:
# yum install augeas
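With the standalone package in place, the augtool shell lets you browse the tree that Augeas builds from your configuration files before you write any Puppet code. A short interactive sketch (the exact entries and available lenses vary by distribution):

```
# augtool
augtool> ls /files/etc/hosts/       # list the entries of the parsed file
augtool> print /files/etc/hosts/1   # dump the tree of the first host entry
augtool> quit
```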
### Puppet Augeas Examples ###
For demonstration, here are a few example Augeas use cases.
#### Management of /etc/sudoers file ####
1. Add sudo rights to wheel group
This example will show you how to add simple sudo rights for group %wheel in your GNU/Linux system.
# Install sudo package
package { 'sudo':
ensure => installed, # ensure sudo package installed
}
# Allow users belonging to wheel group to use sudo
augeas { 'sudo_wheel':
context => '/files/etc/sudoers', # The target file is /etc/sudoers
changes => [
# allow wheel users to use sudo
'set spec[user = "%wheel"]/user %wheel',
'set spec[user = "%wheel"]/host_group/host ALL',
'set spec[user = "%wheel"]/host_group/command ALL',
'set spec[user = "%wheel"]/host_group/command/runas_user ALL',
]
}
Now let's explain what the code does: **spec** defines the user section in /etc/sudoers, **[user]** defines given user from the array, and all definitions behind slash ( / ) are subparts of this user. So in typical configuration this would be represented as:
user host_group/host host_group/command host_group/command/runas_user
Which is translated into this line of /etc/sudoers:
%wheel ALL = (ALL) ALL
2. Add command alias
The following part will show you how to define command alias which you can use inside your sudoers file.
# Create new alias SERVICES which contains some basic privileged commands
augeas { 'sudo_cmdalias':
context => '/files/etc/sudoers', # The target file is /etc/sudoers
changes => [
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/name SERVICES",
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[1] /sbin/service",
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[2] /sbin/chkconfig",
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[3] /bin/hostname",
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[4] /sbin/shutdown",
]
}
The syntax of sudo command aliases is pretty simple: **Cmnd_Alias** defines the section of command aliases, **[alias/name]** binds everything to the given alias name, alias/name **SERVICES** defines the actual alias name, and alias/command is the array of all the commands that should be part of this alias. The output of this command will be the following:
Cmnd_Alias SERVICES = /sbin/service , /sbin/chkconfig , /bin/hostname , /sbin/shutdown
For more information about /etc/sudoers, visit the [official documentation][5].
#### Adding users to a group ####
To add a user to a group using Augeas, you need to insert the new user node either after the gid field or after the last existing user. We'll use the group SVN for the sake of this example. This can be achieved as follows:
In Puppet:
augeas { 'augeas_mod_group':
context => '/files/etc/group', # The target file is /etc/group
changes => [
"ins user after svn/*[self::gid or self::user][last()]",
"set svn/user[last()] john",
]
}
Using augtool:
augtool> ins user after /files/etc/group/svn/*[self::gid or self::user][last()]
augtool> set /files/etc/group/svn/user[last()] john
### Summary ###
By now, you should have a good idea of how to use Augeas in your Puppet projects. Feel free to experiment with it, and definitely go through the official Augeas documentation. It will help you understand how to use Augeas properly in your own projects, and it will show you how much time it can actually save.
If you have any questions feel free to post them in the comments and I will do my best to answer them and advise you.
### Useful Links ###
- [http://www.watzmann.net/categories/augeas.html][6]: contains a lot of tutorials focused on Augeas usage.
- [http://projects.puppetlabs.com/projects/1/wiki/puppet_augeas][7]: Puppet wiki with a lot of practical examples.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/09/manage-configurations-linux-puppet-augeas.html
作者:[Jaroslav Štěpánek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/jaroslav
[1]:http://xmodulo.com/2014/08/install-puppet-server-client-centos-rhel.html
[2]:http://augeas.net/
[3]:http://xmodulo.com/manage-configurations-linux-puppet-augeas.html
[4]:http://xmodulo.com/2013/03/how-to-set-up-epel-repository-on-centos.html
[5]:http://augeas.net/docs/references/lenses/files/sudoers-aug.html
[6]:http://www.watzmann.net/categories/augeas.html
[7]:http://projects.puppetlabs.com/projects/1/wiki/puppet_augeas


@ -0,0 +1,120 @@
How to monitor user login history on CentOS with utmpdump
================================================================================
Keeping, maintaining and analyzing logs (i.e., accounts of events that have happened during a certain period of time or are currently happening) are among the most basic and essential tasks of a Linux system administrator. In case of user management, examining user logon and logout logs (both failed and successful) can alert us about any potential security breaches or unauthorized use of our system. For example, remote logins from unknown IP addresses or accounts being used outside working hours or during vacation leave should raise a red flag.
On a CentOS system, user login history is stored in the following binary files:
- /var/run/utmp (which logs currently open sessions) is used by who and w tools to show who is currently logged on and what they are doing, and also by uptime to display system up time.
- /var/log/wtmp (which stores the history of connections to the system) is used by last tool to show the listing of last logged-in users.
- /var/log/btmp (which logs failed login attempts) is used by the lastb utility to show the listing of last failed login attempts.
![](https://farm4.staticflickr.com/3871/15106743340_bd13fcfe1c_o.png)
In this post I'll show you how to use utmpdump, a simple program from the sysvinit-tools package that can be used to dump these binary log files in text format for inspection. This tool is available by default on stock CentOS 6 and 7. The information gleaned from utmpdump is more comprehensive than the output of the tools mentioned earlier, and that's what makes it a nice utility for the job. Besides, utmpdump can be used to modify utmp or wtmp, which can be useful if you want to fix any corrupted entries in the binary logs.
### How to Use Utmpdump and Interpret its Output ###
As we mentioned earlier, these log files, as opposed to other logs most of us are familiar with (e.g., /var/log/messages, /var/log/cron, /var/log/maillog), are saved in binary file format, and thus we cannot use pagers such as less or more to view their contents. That is where utmpdump saves the day.
In order to display the contents of /var/run/utmp, run the following command:
# utmpdump /var/run/utmp
![](https://farm6.staticflickr.com/5595/15106696599_60134e3488_z.jpg)
To do the same with /var/log/wtmp:
# utmpdump /var/log/wtmp
![](https://farm6.staticflickr.com/5591/15106868718_6321c6ff11_z.jpg)
and finally with /var/log/btmp:
# utmpdump /var/log/btmp
![](https://farm6.staticflickr.com/5562/15293066352_c40bc98ca4_z.jpg)
As you can see, the output formats of the three cases are identical, except that the records in utmp and btmp are arranged chronologically, while in wtmp the order is reversed.
Each log line is formatted in multiple columns, described as follows:

- The first field shows a session identifier, while the second holds the PID.
- The third field can hold one of the following values: ~~ (indicating a runlevel change or a system reboot), bw (meaning a bootwait process), a digit (indicating a TTY number), or a character and a digit (meaning a pseudo-terminal).
- The fourth field can be either empty or hold the user name, reboot, or runlevel.
- The fifth field holds the main TTY or PTY (pseudo-terminal), if that information is available.
- The sixth field holds the name of the remote host (if the login is performed from the local host, this field is blank, except for run-level messages, which will return the kernel version).
- The seventh field holds the IP address of the remote system (if the login is performed from the local host, this field will show 0.0.0.0). If DNS resolution is not provided, the sixth and seventh fields will show identical information (the IP address of the remote system).
- The last (eighth) field indicates the date and time when the record was created.
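To make the field layout concrete, here is a quick parsing sketch run on a single made-up utmpdump line (the PID, user, host and timestamp are invented for illustration):

```shell
# Sample line in utmpdump's bracketed format (values are made up)
line='[7] [01463] [ts/0] [gacanepa] [pts/0] [example.com] [192.168.0.101] [Fri Sep 19 12:04:21 2014 ART]'

# Split on "] ", strip the opening brackets, and pick out the
# record type (1st), user (4th) and remote IP (7th) fields
echo "$line" | awk 'BEGIN {FS="] "} {gsub(/\[/,""); print "type=" $1, "user=" $4, "ip=" $7}'
# -> type=7 user=gacanepa ip=192.168.0.101
```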
### Usage Examples of Utmpdump ###
Here are a few simple use cases of utmpdump.
1. Check how many times (and at what times) a particular user (e.g., gacanepa) logged on to the system between August 18 and September 17.
# utmpdump /var/log/wtmp | grep gacanepa
![](https://farm4.staticflickr.com/3857/15293066362_fb2dd566df_z.jpg)
If you need to review login information from prior dates, you can check the wtmp-YYYYMMDD (or wtmp.[1...N]) and btmp-YYYYMMDD (or btmp.[1...N]) files in /var/log, which are the old archives of wtmp and btmp files, generated by [logrotate][1].
2. Count the number of logins from IP address 192.168.0.101.
# utmpdump /var/log/wtmp | grep 192.168.0.101
![](https://farm4.staticflickr.com/3842/15106743480_55ce84c9fd_z.jpg)
3. Display failed login attempts.
# utmpdump /var/log/btmp
![](https://farm4.staticflickr.com/3858/15293065292_e1d2562206_z.jpg)
In the output of /var/log/btmp, every log line corresponds to a failed login attempt (e.g., using an incorrect password or a non-existent user ID). Logons using non-existent user IDs are highlighted in the above image, which can alert you that someone is attempting to break into your system by guessing commonly-used account names. This is particularly serious in cases where tty1 was used, since it means that someone had access to a terminal on your machine (time to check who has keys to your datacenter, maybe?).
4. Display login and logout information per user session.
# utmpdump /var/log/wtmp
![](https://farm4.staticflickr.com/3835/15293065312_c762360791_z.jpg)
In /var/log/wtmp, a new login event is characterized by '7' in the first field, a terminal number (or pseudo-terminal id) in the third field, and username in the fourth. The corresponding logout event will be represented by '8' in the first field, the same PID as the login in the second field, and a blank terminal number field. For example, take a close look at PID 1463 in the above image.
- On [Fri Sep 19 11:57:40 2014 ART] the login prompt appeared in tty1.
- On [Fri Sep 19 12:04:21 2014 ART], user root logged on.
- On [Fri Sep 19 12:07:24 2014 ART], root logged out.
On a side note, the word LOGIN in the fourth field means that a login prompt is present in the terminal specified in the fifth field.
So far I covered somewhat trivial examples. You can combine utmpdump with other text sculpting tools such as awk, sed, grep or cut to produce filtered and enhanced output.
For example, you can use the following command to list all login events of a particular user (e.g., gacanepa) and send the output to a .csv file that can be viewed with a pager or a workbook application, such as LibreOffice's Calc or Microsoft Excel. Let's display PID, username, IP address and timestamp only:
# utmpdump /var/log/wtmp | grep -E "\[7].*gacanepa" | awk -v OFS="," 'BEGIN {FS="] "}; {print $2,$4,$7,$8}' | sed -e 's/\[//g' -e 's/\]//g'
![](https://farm4.staticflickr.com/3851/15293065352_91e1c1e4b6_z.jpg)
As represented with three blocks in the image, the filtering logic is composed of three pipelined steps. The first step is used to look for login events ([7]) triggered by user gacanepa. The second and third steps are used to select desired fields, remove square brackets in the output of utmpdump, and set the output field separator to a comma.
Of course, you need to redirect the output of the above command to a file if you want to open it later (append "> [name_of_file].csv" to the command).
![](https://farm4.staticflickr.com/3889/15106867768_0e37881a25_z.jpg)
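If you want to sanity-check the grep/awk/sed logic without a real wtmp file, you can feed the pipeline a single made-up line (the PID, host and timestamp below are invented):

```shell
# One fabricated utmpdump record for user gacanepa
line='[7] [01463] [ts/0] [gacanepa] [pts/0] [example.com] [192.168.0.101] [Fri Sep 19 12:04:21 2014 ART]'

# Same three steps as in the pipeline above: keep login events ([7]) for
# gacanepa, select fields 2, 4, 7 and 8, then strip the square brackets
echo "$line" | grep -E "\[7].*gacanepa" \
  | awk -v OFS="," 'BEGIN {FS="] "}; {print $2,$4,$7,$8}' \
  | sed -e 's/\[//g' -e 's/\]//g'
# -> 01463,gacanepa,192.168.0.101,Fri Sep 19 12:04:21 2014 ART
```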
As a more complex example, if you want to know which users (as listed in /etc/passwd) have not logged on during a given period of time, you could extract the user names from /etc/passwd, and then grep the utmpdump output of /var/log/wtmp against that user list. As you can see, the possibilities are limitless.
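The comparison itself boils down to `comm` on two sorted lists. In the sketch below both lists are faked with printf so the logic is reproducible; in practice the first list would come from /etc/passwd and the second from the utmpdump output of /var/log/wtmp:

```shell
# Fake "all users" and "users seen in wtmp" lists (real data would come from:
#   cut -d: -f1 /etc/passwd      and
#   utmpdump /var/log/wtmp filtered for login events)
printf '%s\n' root gacanepa svcbackup | sort > all_users
printf '%s\n' gacanepa root | sort > seen_users

# Lines unique to the first file = users with no login records
comm -23 all_users seen_users
# -> svcbackup

rm -f all_users seen_users
```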
Before concluding, let's briefly show yet another use case of utmpdump: modify utmp or wtmp. As these are binary log files, you cannot edit them as is. Instead, you can export their content to text format, modify the text output, and then import the modified content back to the binary logs. That is:
# utmpdump /var/run/utmp > tmp_output
<modify tmp_output using a text editor>
# utmpdump -r tmp_output > /var/run/utmp
This can be useful when you want to remove or fix any bogus entry in the binary logs.
To sum up, utmpdump complements standard utilities such as who, w, uptime, last, lastb by dumping detailed login events stored in utmp, wtmp and btmp log files, as well as in their rotated old archives, and that certainly makes it a great utility.
Feel free to enhance this post with your comments.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/09/monitor-user-login-history-centos-utmpdump.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://xmodulo.com/2014/09/logrotate-manage-log-files-linux.html


@ -0,0 +1,36 @@
Jelly Conky给你的Linux桌面加入简约、时尚的格调
================================================================================
**我对待 Conky 就有点像对待壁纸:我会找到一张自己喜欢的,但往往不到一周,我就会因为厌倦、想要一点改变而把它换掉。**
我这么没耐心,部分原因在于可选的 Conky 设计日益增多。我最近的最爱是 Jelly Conky。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/jelly-conky.png)
它沿袭了我们最近介绍过的许多 Conky 主题所共有的极简设计,并不想做成一个无所不包的“大杂烩”。因此,那些需要一眼就能看到硬盘温度和 IP 地址的人大概不会青睐它!
它配备了三种不同的模式,它们都可以添加个性的或者静态背景图像:
- 时钟
- 时钟加日期
- 时钟加日期和天气
一些人不理解为什么要在桌面上再放一个重复的时钟,这可以理解。但对我而言,这不仅仅关乎功能——虽然就个人而言,Conky 的时钟确实比挤在顶部面板上那个小小的数字更容易看清!
如果你的 Android 主屏幕上放着一个时钟小部件,那么你多半也不会介意在桌面上也有这么一个!
- [从Deviant Art上下载 Jelly Conky][2]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/09/jelly-conky-for-linux-desktop
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/07/conky-circle-theme-nod-lg-quick-cover
[2]:http://zagortenay333.deviantart.com/art/Jelly-Conky-442559003


@ -0,0 +1,37 @@
Canonical在Ubuntu 14.04 LTS中关闭了一个nginx漏洞
================================================================================
> 用户不得不升级他们的系统来修复这个漏洞
![Ubuntu 14.04 LTS](http://i1-news.softpedia-static.com/images/news2/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677-2.jpg)
Ubuntu 14.04 LTS
**Canonical已经在安全公告中公布了这个影响到Ubuntu 14.04 LTS (Trusty Tahr)的nginx漏洞的细节。这个问题已经被确定并被修复了**
Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可能已经被用来暴露网络上的敏感信息。
根据安全公告“Antoine Delignat-Lavaud和Karthikeyan Bhargavan发现nginx错误地重复使用了缓存的SSL会话。攻击者可能利用此问题在特定的配置下可以从不同的虚拟主机获得信息“。
对于这些问题的更详细的描述可以看到Canonical的安全[公告][1]。用户应该升级自己的Linux发行版以解决此问题。
这个问题可以通过将系统升级到最新的 nginx 软件包及其依赖包来修复。要应用该补丁,你可以直接运行软件更新器。
如果你不想使用软件更新器您可以打开终端输入以下命令需要root权限
sudo apt-get update
sudo apt-get dist-upgrade
在一般情况下,一个标准的系统更新将会进行必要的更改。要应用此修补程序您不必重新启动计算机。
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677.shtml
作者:[Silviu Stahie][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://www.ubuntu.com/usn/usn-2351-1/


@ -0,0 +1,37 @@
Wal Commander 0.17 Github版发布了
================================================================================
![](http://wcm.linderdaum.com/wp-content/uploads/2014/09/wc21.png)
> ### 描述 ###
>
> Wal Commander GitHub 版是一款多平台的开源文件管理器,适用于 Windows、Linux、FreeBSD 和 OS X。
>
> 这个项目的目的是创建一个模仿 Far 管理器外观和感觉的便携式文件管理器。
我们的 Wal Commander GitHub 版的下一个稳定版本 0.17 发布了。主要功能包括:基于命令历史的命令行自动补全;通过文件关联把自定义命令绑定到文件的不同操作上;以及通过 XQuartz 对 OS X 的实验性支持。此版本还添加了很多新的快捷键。Windows x64 平台提供了预编译的二进制文件;Linux、FreeBSD 和 OS X 版本可以直接从 [GitHub 上的源代码][1]编译。
### 主要特性 ###
- 命令行自动补全 (使用Del键删除一条命令)
- 文件关联 (主菜单 -> 命令 -> 文件关联)
- XQuartz上实验性地支持OS X ([https://github.com/corporateshark/WalCommander/issues/5][2])
### [下载][3] ###
源代码: [https://github.com/corporateshark/WalCommander][4]
--------------------------------------------------------------------------------
via: http://wcm.linderdaum.com/release-0-17-0/
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://github.com/corporateshark/WalCommander/releases
[2]:https://github.com/corporateshark/WalCommander/issues/5
[3]:http://wcm.linderdaum.com/downloads/
[4]:https://github.com/corporateshark/WalCommander


@ -0,0 +1,85 @@
Linux趣事
================================================================================
今天8月25号是Linux的第23个生日。1991年8月25日,赫尔辛基大学21岁的学生 Linus Torvalds 发布了举世闻名的[新闻组帖子][1](Usenet post),标志着现在世界著名的Linux正式诞生。
23年以后的今天linux已经无处不在不仅仅被安装于桌面系统[智能手机][2]和嵌入式系统,甚至也被[龙头企业][3]用于他们的关键系统,比如说像[美国海军的核潜艇][4]US Navy's nuclear submarines和[联邦航空局的空中管制系统][5](FAA's air traffic control)。进入无处不在的云计算时代linux在云计算平台方面仍然保持着它的优势。
今天我们一起庆祝linux 23岁生日就让我们告诉你**一些你可能不知道的linux趣事和linux历史**。如果有什么要补充的请在评论中分享出来。在这篇文章里我可能用会用“linux”“kernel”和“Linux kernel”来表示同一个意思。
1.关于Linux本身是否算一个完整的操作系统的争论一直无休无止。事实上,“Linux”严格来说指的只是Linux kernel(内核)。反对者认为Linux不是一个纯粹的操作系统,因为仅仅一个内核(kernel)并不构成操作系统;自由软件的推崇者则认为这个操作系统应叫做“[GNU/Linux][7]”,以便把功劳归于应得的人(比如:[GNU project][8])。另一方面,一些Linux的开发者认为Linux拥有成为一个操作系统的资格,因为它实现了[POSIX标准][9]。
2.从OpenHub网站的统计来看,绝大部分(95%)的Linux代码是用C语言写的,第二受欢迎的是汇编语言(2.8%)。毫无疑问,C语言比C++更受欢迎,这也表明了Linus对C++的立场。下面是Linux代码的编程语言构成。
![](https://farm4.staticflickr.com/3845/15025332121_055cfe3a2c_z.jpg)
3.在世界上Linux已经被[13,036个贡献者][10]创建和修改。当然贡献最多的还是Linus Torvalds自己。直到目前他提交了20,000次以上的代码。下图显示了所有提交次数最多的前十位Linux贡献者。
![](https://farm4.staticflickr.com/3837/14841786838_7a50625f9d_b.jpg)
4.Linux的代码行数(SLOC)超过1700万行。据估算,开发整个代码库的成本大概是5,526人年,或者说超过3亿美元([基于基本COCOMO估算模型][11](basic COCOMO model))。
5.企业并不是单纯的Linux消费者,他们的员工也在[积极参与][12]Linux的开发。下图显示了2013年员工向Linux内核提交代码次数最多的前十家企业,其中包括Linux的商业发行版厂商(Red Hat、SUSE),芯片/嵌入式系统制造商(Intel、Texas Instruments、Wolfson),非盈利性组织(Linaro)和其他IT公司(IBM、Samsung、Google)。
![](https://farm6.staticflickr.com/5573/14841856427_a5a1828245_o.png)
6.Linux的官方吉祥物是“小企鹅”一个非常可爱的企鹅标志。[第一次提出][13]并决定小企鹅作为Linux吉祥物/标志这个想法的是Linus自己。为什么是小企鹅呢因为Linus本人很喜欢企鹅尽管他曾经被一只凶猛的企鹅咬伤过还导致他得了一场病。
7.一个Linux系统“包括”Linux内核、GNU组件和库,以及一些第三方的应用。[distrowatch网站][14]显示现在总共有286个活跃的Linux发行版。其中最老的发行版叫[Slackware][15],它于1993年正式发布。
8.Kernel.org是一个Linux源码的主要仓库曾经在2011年8月被一个匿名的攻击者[攻陷][16],攻击者打算篡改kernel.org的服务器。为了加强linux内核的访问策略的安全性Linux基金会最近在Linux内核的Git官方托管的仓库上[开启了][17]双重认证。
9.Linux在500强超级计算机中的优势还在[增加][18]。截至2014年6月运算速度最快的计算机97%都是运行在Linux上面的。
10.太空监视(Spacewatch)是亚利桑那大学月球与行星实验室的一个研究项目,他们用GNU/Linux及其创造者们的名字命名了几颗小行星([小行星9793 Torvalds][19]、[小行星9882 Stallman][20]、[小行星9885 Linux][21]、[小行星9965 GNU][22]),以表彰这个开源操作系统在他们的小行星调查活动中的贡献。
11.纵观Linux内核发展的近代史,版本号从2.6到3.0有一个很大的跳跃。这个[重编的版本号3][23]实际上并不意味着内核有什么重大的技术变化,而是作为Linux 20周年的一个里程碑。
12.在2000年的时候,乔帮主还在苹果。他当时就[尝试雇佣][24]Linus Torvalds,让他放弃Linux的开发,转而为“Unix最大的用户群”工作——这个项目后来发展成了Mac OS X。但Linus拒绝了乔帮主的邀请。
13.Linux 内核的重启函数[reboot()][25]要求传入两个神奇的数字(魔数,magic number),而第二个魔数的几个合法取值分别来自Linus Torvalds和他的3个女儿的出生日期。
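这些神奇的数字(魔数)其实就是日期:以内核头文件中的 LINUX_REBOOT_MAGIC2(十进制 672274793)为例,把它转成十六进制,正好是 Linus 的生日 1969 年 12 月 28 日。可以用一行 shell 验证(常量值请以内核源码 include/uapi/linux/reboot.h 为准):

```shell
# LINUX_REBOOT_MAGIC2 = 672274793,十六进制为 0x28121969,
# 读作 28-12-1969,即 Linus Torvalds 的出生日期
printf '0x%x\n' 672274793
# -> 0x28121969
```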
14.虽然全世界有很多Linux的粉丝,但也仍然存在很多对Linux的批评,主要集中在桌面系统上,如缺乏硬件支持、缺乏标准化、过短的升级和发布周期导致系统不稳定等。在2014年的LinuxCon大会上,当Linus被问及Linux的未来将何去何从时,他表示:“我仍然想要桌面。”(I still want the desktop)
如果你还知道些关于Linux的趣事请写在评论里。
生日快乐Linux
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/interesting-facts-linux.html
作者:[Dan Nanni][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://groups.google.com/forum/message/raw?msg=comp.os.minix/dlNtH7RRrGA/SwRavCzVE7gJ
[2]:http://developer.android.com/about/index.html
[3]:http://fortune.com/2013/05/06/how-linux-conquered-the-fortune-500/
[4]:http://www.linuxjournal.com/article/7789
[5]:http://fcw.com/Articles/2006/05/01/FAA-manages-air-traffic-with-Linux.aspx
[6]:http://thecloudmarket.com/stats
[7]:http://www.gnu.org/gnu/why-gnu-linux.html
[8]:http://www.gnu.org/gnu/gnu-history.html
[9]:http://en.wikipedia.org/wiki/POSIX
[10]:https://www.openhub.net/p/linux/contributors/summary
[11]:https://www.openhub.net/p/linux/estimated_cost
[12]:http://www.linuxfoundation.org/publications/linux-foundation/who-writes-linux-2013
[13]:http://www.sjbaker.org/wiki/index.php?title=The_History_of_Tux_the_Linux_Penguin
[14]:http://distrowatch.com/search.php?ostype=All&category=All&origin=All&basedon=All&notbasedon=None&desktop=All&architecture=All&status=Active
[15]:http://www.slackware.com/info/
[16]:http://pastebin.com/BKcmMd47
[17]:http://www.linux.com/news/featured-blogs/203-konstantin-ryabitsev/784544-linux-kernel-git-repositories-add-2-factor-authentication
[18]:http://www.top500.org/statistics/details/osfam/1
[19]:http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=9793
[20]:http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=9882
[21]:http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=9885
[22]:http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=9965
[23]:https://lkml.org/lkml/2011/5/29/204
[24]:http://www.wired.com/2012/03/mr-linux/2/
[25]:http://lxr.free-electrons.com/source/kernel/reboot.c#L199
[26]:http://www.nndb.com/people/444/000022378/
[27]:http://linuxfonts.narod.ru/why.linux.is.not.ready.for.the.desktop.current.html
[28]:https://www.youtube.com/watch?v=8myENKt8bD0


@ -0,0 +1,92 @@
优化 GitHub 服务器上的 MySQL 数据库性能
================================================================================
> 在 GitHub 我们总是说“如果网站响应速度不够快,说明我们的工作没完成”。我们之前在[前端的体验速度][1]这篇文章中介绍了一些提高网站响应速率的方法,但这只是故事的一部分。真正影响到 GitHub.com 性能的因素是 MySQL 数据库架构。让我们来瞧瞧我们的基础架构团队是如何无缝升级了 MySQL 架构吧这事儿发生在去年8月份成果就是大大提高了 GitHub 网站的速度。
### 任务 ###
去年我们把 GitHub 上的大部分数据移到了新的数据中心,这个中心有世界顶级的硬件资源和网络平台。自从使用了 MySQL 作为我们的后端基本存储系统,我们一直期望着一些改进来大大提高数据库性能,但是在数据中心使用全新的硬件来部署一套全新的集群环境并不是一件简单的工作,所以我们制定了一套计划和测试工作,以便数据能平滑过渡到新环境。
### 准备工作 ###
像我们这种关于架构上的巨大改变,在执行的每一步都需要收集数据指标。新机器上安装好了基础操作系统,接下来就是测试新配置下的各种性能。为了模拟真实的工作负载环境,我们使用 tcpdump 工具从老集群那里复制正在发生的 SELECT 请求,并在新集群上重新响应一遍。
MySQL 微调是个繁琐的细致活,像众所周知的 innodb_buffer_pool_size 这样的参数往往能对 MySQL 性能产生巨大的影响。这类参数我们都必须考虑在内,所以我们列了一份参数清单,包括 innodb_thread_concurrency、innodb_io_capacity 和 innodb_buffer_pool_instances 等等。
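作为示意,这类参数通常集中在 my.cnf 的 [mysqld] 段里调整。下面是一个纯属假设的片段(数值为虚构,并非 GitHub 的实际配置,需结合自身硬件与负载逐项压测):

```
[mysqld]
# 以下数值仅为示意,应结合实际硬件与压测结果调整
innodb_buffer_pool_size      = 128G
innodb_thread_concurrency    = 0
innodb_io_capacity           = 2000
innodb_buffer_pool_instances = 8
```

正如下文所述的测试方法:每次只改一个参数,并观察足够长的时间,才能判断单个参数的真实影响。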
在每次测试中我们都很小心地只改变一个参数并且让一次测试至少运行12小时。我们会观察响应时间的变化曲线每秒的响应次数以及有可能会导致并发性降低的参数。我们使用 “SHOW ENGINE INNODB STATUS” 命令打印 InnoDB 性能信息,特别观察了 “SEMAPHORES” 一节的内容,它为我们提供了工作负载的状态信息。
当我们在设置参数后对运行结果感到满意,然后就开始将我们最大的一个数据表格迁移到一套独立的集群上,这个步骤作为整个迁移过程的早期测试,保证我们的核心集群空出更多的缓存池空间,并且为故障切换和存储功能提供更强的灵活性。这步初始迁移方案也引入了一个有趣的挑战:我们必须维持多条客户连接,并且要将这些连接重定向到正确的集群上。
除了硬件性能的提升,还需要补充一点,我们同时也对处理进程和拓扑结构进行了改进:我们添加了延时拷贝技术,更快、更高频地备份数据,以及更多的读拷贝空间。这些功能已经准备上线。
### 列出任务清单,三思后行 ###
每天有上百万用户的使用 GitHub.com我们不可能有机会进行实际意义上的数据切换。我们有一个详细的[任务清单][2]来执行迁移:
![](https://cloud.githubusercontent.com/assets/1155781/4116929/13fc6f50-328b-11e4-837b-922aad3055a8.png)
我们还规划了一个维护期,并且[在我们的博客中通知了大家][3],让用户注意到这件事情。
### 迁移时间到 ###
太平洋时间星期六上午5点我们的迁移团队上线集合聊天同时数据迁移正式开始
![](https://cloud.githubusercontent.com/assets/1155781/4060850/39f52cd4-2df3-11e4-9aca-1f54a4870d24.png)
我们将 GitHub 网站设置为维护模式,并在 Twitter 上发表声明,然后开始按上述任务清单的步骤开始工作:
![](https://cloud.githubusercontent.com/assets/1155781/4060864/54ff6bac-2df3-11e4-95da-b059c0ec668f.png)
**13 分钟**后,我们确保新的集群能正常工作:
![](https://cloud.githubusercontent.com/assets/1155781/4060870/6a4c0060-2df3-11e4-8dab-654562fe628d.png)
然后我们让 GitHub.com 脱离维护期,并且让全世界的用户都知道我们的最新状态:
![](https://cloud.githubusercontent.com/assets/1155781/4060878/79b9884c-2df3-11e4-98ed-d11818c8915a.png)
大量前期的测试工作与准备工作,让我们将维护期缩到最短。
### 检验最终的成果 ###
在接下来的几周时间里,我们密切监视着 GitHub.com 的性能和响应时间。我们发现迁移后网站的平均加载时间减少了一半,并且99分位(99th percentile)的加载时间减少了*三分之二*!
![](https://cloud.githubusercontent.com/assets/1155781/4060886/9106e54e-2df3-11e4-8fda-a4c64c229ba1.png)
### 我们学到了什么 ###
#### 功能划分 ####
在迁移过程中,我们采用了一个比较好的方法:将大的数据表(主要记录了一些历史数据)先迁移过去,空出旧集群的磁盘空间和缓存池空间。这一步给我们留下了更多的资源用于维护“热”数据,并将一些连接请求分离到多套集群里面。这步为我们之后的胜利奠定了基础,我们以后还会使用这种模式来进行迁移工作。
#### 测试测试测试 ####
为你的应用做验收测试和回归测试,越多越好,多多益善。在把老集群的流量复制到新集群重放的过程中,验收测试和响应状态测试得到的数据可能并不理想,这是正常的,不要惊讶,也不要试图拿这些数据去分析原因。
#### 合作的力量 ####
对基础架构进行大的改变,通常需要涉及到很多人,我们要像一个团队一样为共同的目标而合作。我们的团队成员来自全球各地。
团队成员地图:
![](https://render.githubusercontent.com/view/geojson?url=https://gist.githubusercontent.com/anonymous/5fa29a7ccbd0101630da/raw/map.geojson)
本次合作新创了一种工作流程我们提交更改pull request获取实时反馈查看修改了错误的 commit —— 全程没有电话交流或面对面的会议。当所有东西都可以通过 URL 提供信息,不同区域的人群之间的交流和反馈会变得非常简单。
### 一年后…… ###
整整一年时间过去了,我们很高兴地宣布这次数据迁移是很成功的 —— MySQL 性能和可靠性一直处于我们期望的状态。另外,新的集群还能让我们进一步去升级,提供更好的可靠性和响应时间。我将继续记录这些优化过程。
--------------------------------------------------------------------------------
via: https://github.com/blog/1880-making-mysql-better-at-github
作者:[samlambert][a]
译者:[bazz2](https://github.com/bazz2)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://github.com/samlambert
[1]:https://github.com/blog/1756-optimizing-large-selector-sets
[2]:https://help.github.com/articles/writing-on-github#task-lists
[3]:https://github.com/blog/1603-site-maintenance-august-31st-2013


@ -0,0 +1,377 @@
<h1>戴着面具的复仇者 —— 揭秘:激进黑客组织“匿名者”</h1>
<blockquote><em>从“<a href="https://zh.wikipedia.org/wiki/%E8%8C%89%E8%8E%89%E8%8A%B1%E9%9D%A9%E5%91%BD">突尼斯政变</a>”到“<a href="https://zh.wikipedia.org/wiki/%E9%82%81%E5%85%8B%E7%88%BE%C2%B7%E5%B8%83%E6%9C%97%E6%A7%8D%E6%93%8A%E6%A1%88">弗格森枪击事件</a>”,“匿名者”组织是如何煽动起网络示威活动的。</em></blockquote>
<center><img src="https://www.newyorker.com/wp-content/uploads/2014/09/140908_r25419-690.jpg" /></center>
<blockquote><em>通过入会声明,任何人都能轻易加入“匿名者”组织。某人类学家称,组织成员会“根据影响程度对重大事件保持着不同关注,特别是那些能挑起强烈争端的事件”。</em></blockquote>
<small>布景Jeff Nishinaka / 摄影Scott Dunbar</small>
<h2>1</h2>
<p>上世纪七十年代中期,当 Christopher Doyon 还是一个生活在缅因州乡村的孩童时,就终日泡在 CB radio 上与各种陌生人聊天。他的昵称是“大红”因为他有一头红色的头发。Christopher Doyon 把发射机挂在了卧室的墙壁上并且说服了父亲在自家屋顶安装了两根天线。CB radio 主要用于卡车司机间的联络,但 Doyon 和一些人却将之用于不久后出现在 Internet 上的虚拟社交——自定义昵称、成员间才懂的笑话,以及施行变革的强烈愿望。</p>
<p>Doyon 很小的时候母亲就去世了,兄妹二人由父亲抚养长大,他俩都说受到过父亲的虐待。由此 Doyon 在 CB radio 社区中找到了慰藉和归属感。他和他的朋友们轮流监听当地紧急事件频道。其中一个朋友的父亲买了一个气泡灯并安装在了他的车顶上;每当这个孩子收听到来自孤立无援的乘车人的求助后,都会开车载着所有人到求助者所在的公路旁。除了拨打 911 外他们基本没有什么可做的,但这足以让他们感觉自己成为了英雄。</p>
<p>短小精悍的 Doyon 有着一口浓厚的新英格兰口音,并且非常喜欢《星际迷航》和阿西莫夫的小说。当他在《大众机械》上看到一则“组装你的专属个人计算机”构件广告时,就央求祖父给他买一套,接下来 Doyon 花了数月的时间把计算机组装起来并连接到 Internet 上去。与鲜为人知的 CB 电波相比,在线聊天室确实不可同日而语。“我只需要点一下按钮,再选中某个家伙的名字,然后我就可以和他聊天了,” Doyon 在最近回忆时说道,“这真的很惊人。”</p>
<p>十四岁那年Doyon 离家出走,两年后他搬到了马萨诸塞州的剑桥,那里是一个新出现的计算机反主流文化的中心。同一时间,早在 34 年前就已由麻省理工学院的铁路狂热爱好者们创立的铁路模型技术俱乐部已经演变成了“黑客”——也是推广该词的第一个组织。Richard Stallman在那时还是一名任职于麻省理工学院人工智能实验室的计算机科学家指出早期黑客们比起引发技术战争更乐于讨论“哥德尔、艾舍尔、巴赫”之类的话题。“我们没有任何约束”Stallman 说“这不是一项运动而是一种可以让人们相互留下深刻印象的行为。”其中有些“行为”很有趣制作电子游戏有些非常实用提高计算机处理速度还有些则属于发生在真实世界里的恶作剧在校园内放置模拟街道标识。Michael Patton在七十年代里管理着铁路模型技术俱乐部的人谈起初代黑客间不成文的规定说第一条就是“不要搞破坏”。</p>
<p>在剑桥Doyon 以打零工和乞讨为生他宁愿为了自由而睡在公园的长椅上也不愿被单调的固定工作所束缚。1985 年他和其他六个活跃分子共同组建了一支电子“义勇军”。模仿“动物解放阵线”他们称呼自己为“人民解放阵线”Peoples Liberation FrontPLF。所有人都使用化名如组织的创建者声称自己是老兵的一位高大中年男子自称“Commander Adama”Doyon 则选择了“Commander X”这个称呼。受 “Merry Pranksters” 启示,他们在 Grateful Dead 的演唱会上出售 LSDlysergic acid diethylamide麦角酸酰二乙胺一种迷幻药并用收入的一部分购置了一辆二手校车以及扩音器、相机还有电源充电器。同时在剑桥租了一间地下公寓Doyon 偶尔会在那里歇息。</p>
<p>Doyon 深深地沉溺于计算机中,虽然他并不是一位专业的程序员。在过去一年的几次谈话中,他告诉我他将自己视为激进主义分子,继承了 Abbie Hoffman 和 Eldridge Cleaver 的激进传统技术不过是他抗议的工具。八十年代哈佛大学和麻省理工学院的学生们举行集会强烈抗议他们的学校从南非撤资。为了帮助抗议者通过安全渠道进行交流PLF 制作了无线电套装移动调频发射器、伸缩式天线还有麦克风所有部件都内置于背包内。Willard Johnson麻省理工学院的一位激进分子和政治学家表示黑客们出席集会并不意味着一次变革。“我们的大部分工作仍然是通过扩音器来完成的”他解释道。</p>
<p>1992 年,在 Grateful Dead 的一场印第安纳的演唱会上Doyon 秘密地向一位瘾君子出售了 300 粒药。由此他被判决在印第安纳州立监狱服役十二年,后来改为五年。服役期间,他对宗教和哲学产生了浓厚的兴趣,并于鲍尔州立大学学习了相应课程。</p>
<p>1994 年,第一款商业 Web 浏览器网景领航员正式发布,同一年 Doyon 被捕入狱。当他出狱并再次回到剑桥后PLF 依然活跃着并且他们的工具有了实质性的飞跃。Doyon 回忆起和他入狱之前的变化“非常巨大——好比是烽火狼烟电报传信之间那么大的差距。”黑客们入侵了一个印度的军事网站并修改其首页文字为“拯救克什米尔”。在塞尔维亚黑客们攻陷了一个阿尔巴尼亚网站。Stefan Wray一位早期网络激进主义分子为一次纽约“反哥伦布日”集会上的黑客行径辩护。“我们视之为电子形式的公众抗议”他告诉大家。</p>
<p>1999 年,美国唱片业协会因为版权侵犯问题起诉了 Napster一款文件共享软件。最终Napster 于 2001 年关闭。Doyon 与其他黑客使用分布式拒绝服务Distributed Denial of ServiceDDoS使大量数据涌入网站导致其响应速度减缓直至奔溃的手段攻击了美国唱片业协会的网站使之停运时间长达一星期之久。Doyon为自己的行为进行了辩解并高度赞扬了其他的“黑客主义者”。“我们很快意识到保卫 Napster 的战争象征着保卫 Internet 自由的战争,”他在后来写道。</p>
<p>2008 年的一天,Doyon 和 “Commander Adama” 在剑桥的 PLF 地下公寓相遇。Adama 当着 Doyon 的面点击了癫痫基金会的一个链接,与意料中将要打开的论坛不同,出现的是一连串闪烁的彩光。有些癫痫病患者对闪光灯非常敏感——这完全是出于恶意,有人想要在无辜群众中诱发癫痫病。已经出现了至少一名受害者。</p>
<p>Doyon 愤怒了。他质问 Adama 什么样的人才会做出这样的事来。</p>
<p>“你听说过‘匿名者’组织吗?” Adama 问。</p>
<h2>2</h2>
<p>2003 年,一位来自纽约的 15 岁失眠症少年 Christopher Poole 推出了 4chan 讨论社区,在这里用户们可以随意发布照片或者尖锐评论。随后其关注点迅速从动漫延伸到许多 Internet 的早期文化基因:LOLcats、Chocolate Rain、RickRolls。当用户没有按照屏幕上的要求输入昵称时,将会得到系统默认的“匿名者”(Anonymous)称呼。</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18505-600.jpg" /></center>
<center><small>“我得谈谈我的感受。”</small></center>
<p>Poole 希望匿名这一举措可以延续社区的尖锐性因素。“我们无意参与理智的涉外事件讨论”他在网站上写道。4chan 社区里最具价值的事之一便是寻求“挑起强烈的争端”lulz这个词源自缩写 LOL。Lulz 经常是通过分享充满孩子气的笑话或图片来实现的,它们中的大部分不是色情的就是下流的。其中最令人震惊的部分被贴在了网站的“/b/”版块上,这里的用户们称呼自己为“/b/tards”。Doyon 知道 4chan 这个社区但他认为那些用户是“一群愚昧无知的顽童”。2004 年前后,/b/ 上的部分用户开始把“匿名者”视为一个独立的实体。</p>
<p>这是一个全新的黑客团体。“这不是一个传统意义上的组织,”一位领导计算机安全工作的研究员 Mikko Hypponen 告诉我——倒不如视之为一个非传统的亚文化群体。Barrett Brown德克萨斯州的一名记者,同时也是众所周知的“匿名者”高层领导把“匿名者”描述为“一连串前仆后继的伟大友谊”。无需任何会费或者入会仪式。任何想要加入“匿名者”组织成为一名匿名者Anon的人都可以通过简短的象征性的宣誓加入。</p>
<p>尽管 4chan 的关注焦点是一些琐碎的话题,但许多匿名者认为自己就是“正义的十字军”。如果网上有不良迹象出现,他们就会发起具有针对性的治安维护行动。不止一次,他们以未成年少女的身份套取恋童癖的私人信息,然后把这些信息交给警察局。其他匿名者则是政治的厌恶者,为了挑起争端想方设法散布混乱的信息。他们中的一些人在 /b/ 上发布看着像是雷管炸弹的图片另一些则叫嚣着要炸毁足球场并因此被联邦调查局逮捕。2007 年,一家洛杉矶当地的新闻联盟机构称呼“匿名者”组织为“互联网负能量制造机”。</p>
<p>2008 年 1 月Gawker Media 上传了一段关于汤姆克鲁斯大力吹捧山达基优点的视频。这段视频是受版权保护的,山达基教会致信 Gawker勒令其删除这段视频。“匿名者”组织认为教会企图控制网络信息。“是时候让 /b/ 来干票大的了,”有人在 4chan 上写道。“我说的是‘入侵’或者‘攻陷’山达基官方网站。”一位匿名者使用 YouTube 放出一段“新闻稿”,其中包括暴雨云视频和经过计算机处理的语音。“我们要立刻把你们从 Internet 上赶出去,并且在现有规模上逐渐瓦解山达基教会,”那个声音说,“你们无处可躲。”不到一个星期,这段 YouTube 视频的点击率就超过了两百万次。</p>
<p>“匿名者”组织已经不仅限于 4chan 社区。黑客们在专用的互联网中继聊天Internet Relay Chat channelsIRC 聊天室)频道内进行交流,协商策略。通过 DDoS 攻击手段,他们使山达基的主网站间歇性崩溃了好几天。匿名者们制造了“谷歌炸弹”,由此导致 “dangerous cult” 的搜索结果中的第一条结果就是山达基主网站。其余的匿名者向山达基的欧洲总部寄送了数以百计的披萨,并用大量全黑的传真单耗干了洛杉矶教会总部的传真机墨盒。山达基教会,据报道拥有超过十亿美元资产的组织,当然能经得起墨盒耗尽的考验。但山达基教会的高层可不这么认为,他们还收到了严厉的恐吓,由此他们不得不向 FBI 申请逮捕“匿名者”组织的成员。</p>
<p>2008 年 3 月 15 日,在从伦敦到悉尼的一百多个城市里,数以千计匿名者们游行示威山达基教会。为了切合“匿名”这个主题,组织者下令所有的抗议者都应该佩戴相同的面具。深思熟虑过蝙蝠侠后,他们选定了 2005 年上映的反乌托邦电影《 V 字仇杀队》中 Guy Fawkes 的面具。“在每个大城市里都能以很便宜的价格大量购买,”广为人知的匿名者、游行组织者之一 Gregg Housh 告诉我说道。漫画式的面具上是一个的脸颊红润的男人,八字胡,有着灿烂的笑容。</p>
<p>匿名者们并未“瓦解”山达基教会,汤姆克鲁斯的那段视频仍然保留在网络上。但匿名者们证明了自己的顽强。组织选择了一个相当浮夸的口号:“我们是一体。绝不宽恕。永不遗忘。相信我们。”(We are Legion. We do not forgive. We do not forget. Expect us.)</p>
<h2>3</h2>
<p>2010 年Doyon 搬到了加利福尼亚州的圣克鲁斯,并加入了当地的“和平阵营”组织。利用从木材堆置场偷来的木头,他在山上盖起了一间简陋的小屋,“借用”附近住宅的 WiFi使用太阳能电池板发电并通过贩卖种植的大麻换取现金。</p>
<p>与此同时“和平阵营”维权者们每天晚上开始在公共场所休息以此抗议圣克鲁斯政府此前颁布的“流浪者管理法案”他们认为这项法案严重侵犯了流浪者的生存权。Doyon 出席了“和平阵营”的会议,并在网上发起了抗议活动。他留着蓬乱的红色山羊胡,戴一顶米黄色软呢帽,像军人那样不知疲倦。因此维权者们送给了他“罪恶制裁克里斯”的称呼。</p>
<p>“和平阵营”的成员之一 Kelley Landaker 曾几次和 Doyong 讨论入侵事宜。Doyon 有时会吹嘘自己的技术是多么的厉害,但作为一名资深程序员的 Landaker 却不为所动。“他说得很棒但却不是行动派的”Landaker 告诉我。不过在那种场合下,的确更需要一位富有激情的领导者,而不是埋头苦干的技术员。“他非常热情并且坦率,”另一位成员 Robert Norse 如是对我说。“他创造出了大量的能够吸引媒体眼球的话题。我从事这行已经二十年了,在这一点上他比我见过的任何人都要厉害。”</p>
<p>Doyon 在 PLF 的上司,Commander Adama 仍然住在剑桥,并且通过电子邮件和 Doyon 保持着联络,他下令让 Doyon 潜入“匿名者”组织,以此获知其运作方式,并伺机为 PLF 招募新成员。因为癫痫基金会网站入侵事件的那段不愉快回忆,Doyon 拒绝了 Adama。Adama 给 Doyon 解释说,在“匿名者”组织里不怀好意的黑客只占极少数,与此相反,这个组织经常会有一些轰动世界的举动。Doyon 对这点表示怀疑。“4chan 怎么可能会轰动世界?”他质问道。但出于对 PLF 的忠诚,他还是答应了 Adama 的请求。</p>
<p>Doyon 经常带着一台宏基笔记本电脑出入于圣克鲁斯的一家名为 Coffee Roasting Company 的咖啡厅。“匿名者”组织的 IRC 聊天室主频道无需密码就能进入。Doyon 使用 PLF 的昵称进行登录并加入了聊天室。一段时间后,他发现了组织内大量的专用匿名者行动聊天频道,这些频道的规模更小,并相互重复。要想参与行动,你必须知道行动的专用聊天频道名称,并且聊天频道随时会因为陌生的闯入者而进行变更。这套交流系统并不具备较高的安全系数,但它的确很凑效。“这些专用行动聊天频道确保了行动机密的高度集中,”麦吉尔大学的人类学家 Gabriella Coleman 告诉我。</p>
<p>有些匿名者提议了一项行动,名为“反击行动”。如同新闻记者 Parmy Olson 于 2012 年在书中写道的,“我们是匿名者,”这项行动成为了又一次支援文件共享网站,如 Napster 的后继者海盗湾Pirate Bay的行动的前奏但随后其目标却扩展到了政治领域。2010 年末在美国国务院的要求下包括万事达、Visa、PayPal 在内的几家公司终止了对维基解密一家公布了成百上千份外交文件的民间组织的捐助。在一段网络视频中“匿名者”组织扬言要进行报复发誓会对那些阻碍维基解密发展的公司进行惩罚。Doyon 被这种抗议企业的精神所吸引,决定参加这次行动。</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18473-600.jpg" /></center>
<center><small>潘多拉的魔盒</small></center>
<p>在十二月初的“反击行动”中,“匿名者”组织指导那些新成员,或者说新兵,关于“如何他【哔~】加入组织”,教程中提到“首先配置你【哔~】的网络,这他【哔~】的很重要。”同时他们被要求下载“低轨道离子炮”一款易于使用的开源软件。Doyon 下载了软件并在聊天室内等待着下一步指示。当开始的指令发出后数千名匿名者将同时发动进攻。Doyon 收到了含有目标网址的指令——目标是www.visa.com——同时在软件的右上角有个按钮上面写着“IMMA CHARGIN MAH LAZER.”“反击行动”同时也发动了大量的复杂精密的入侵进攻。几天后“反击行动”攻陷了万事达、Visa、PayPal 公司的主页。在法院的控告单上PayPal 称这次攻击给公司造成了 550 万美元的损失。</p>
<p>但对 Doyon 来说,这是切实的激进主义体现。在剑桥反对种族隔离的行动中,他不能立即看到结果;而现在,只需指尖轻轻一点,就可以在攻陷大公司网站的行动中做出自己的贡献。隔天,赫芬顿邮报上出现了“万事达沦陷”的醒目标题。一位得意洋洋的匿名者发推特道:“有些事情维基解密是无能为力的。但这些事情却可以由‘反击行动’来完成。”</p>
<h2>4</h2>
<p>2010 年的秋天“和平阵营”的抗议活动终止政府只做出了轻微的让步“流浪者管理法案”仍然有效。Doyon 希望通过借助“匿名者”组织的方略扭转局势。他回忆当时自己的想法,“也许我可以发动‘匿名者’组织来教训这种看似不堪一击的市政府网站,这些人绝对会【哔~】地赞同我的提议。最终我们将使得市政府永久性的废除‘流浪者管理法案’。”</p>
<p>Joshua Covelli 是一位 25 岁的匿名者他的昵称是“Absolem”他非常钦佩 Doyon 的果敢。“现在我们的组织完全是他【哔~】各种混乱的一盘散沙”Covelli 告诉我道。在“Commander X”加入之后“组织似乎开始变得有模有样了。”Covelli 的工作是俄亥俄州费尔伯恩的一所大学接待员,他从不了解任何有关圣克鲁斯的政治。但是当 Doyon 提及帮助“和平阵营”抗击活动的计划后Covelli 立即回复了一封表示赞同的电子邮件:“我期待这样的行动很久了。”</p>
<p>Doyon 使用 PLF 的昵称邀请 Covelli 在 IRC 聊天室进行了一次秘密谈话:</p>
<blockquote>Absolem抱歉有个比较冒犯的问题...请问 PLF 也是组织的一员吗?</blockquote>
<blockquote>Absolem我会这么问是因为我在频道里看过你的聊天记录你像是一名训练有素的黑客不太像是来自组织里的成员。</blockquote>
<blockquote>PLF不不不你的问题一点也不冒犯。很高兴遇到你。PLF 是一个来自波士顿的黑客组织,已经成立 22 年了。我在 1981 年就开始了我的黑客生涯,但那时我并没有使用计算机,而是使用的 PBXPrivate Branch Exchange电话交换机</blockquote>
<blockquote>PLF我们组织内所有成员的年龄都超过了 40 岁。我们当中有退伍士兵和学者。并且我们的成员“Commander Adama”正在躲避一大帮警察还有间谍的追捕。</blockquote>
<blockquote>Absolem听起来很棒我对这次行动很感兴趣不知道我是否可以提供一些帮助我们的组织实在是太混乱了。我的电脑技术还不错但我在入侵技术上还完全是一个新手。我有一些小工具但不知道怎么去使用它们。</blockquote>
<p>庄重的入会仪式后Doyon 正式接纳 Covelli 加入 PLF</p>
<blockquote>PLF把所有可能对你不利的【哔~】敏感文件加密。</blockquote>
<blockquote>PLF还有想要联系任何一位 PLF 成员的话,给我发消息就行。从现在起,请叫我... Commander X。</blockquote>
<p>2012 年美联社称“匿名者”组织为“一伙训练有素的黑客”Quinn Norton 在《连线》杂志上发文称“‘匿名者’组织可以入侵任何坚不可摧的网站”,并在文末赞扬他们为“一群卓越的民间黑客”。事实上,有些匿名者的确是很有天赋的程序员,但绝大部分成员根本不懂任何技术。人类学家 Coleman 告诉我只有大约五分之一的匿名者是真正的黑客——其他匿名者则是“极客与抗议者”。</p>
<p>2010 年 12 月 16 日Doyon 以 Commander X 的身份向几名记者发送了电子邮件。“明天当地时间 1200 的时候人民解放阵线组织与匿名者组织将大举进攻圣克鲁斯政府网站”他在邮件中写道“12:30 之后我们将恢复网站的正常运行。”</p>
<p>圣克鲁斯数据中心的工作人员收到了警告,匆忙地准备应对攻击。他们在服务器上运行起安全扫描软件,并向当地的互联网供应商 AT&T 求助,后者建议他们向 FBI 报警。</p>
<p>第二天Doyon 走进了一家星巴克并启动了笔记本电脑。即便是在这样一个小镇上Doyon 也显得格外醒目一个疲惫的流浪汉疯狂地敲击着键盘。随后Covelli 和他在一间秘密聊天室碰头。</p>
<blockquote>PLF去社区登录——检查一下右上角的“聊天”菜单栏上面有今天的具体方案。感谢你对我们的支持。</blockquote>
<blockquote>Absolem一切为了 PLF长官。</blockquote>
<p>他们都打开了 DDoS 软件。尽管只有少数人参加了这次“和平阵营”的行动,但 Doyon 好似统率千军万马般下令:</p>
<blockquote>PLF注意每一位支持 PLF 或者站在我们这边的朋友——还有那些对抗邪恶保卫正义的勇士们和平阵营行动进行中战斗的号角已经响起目标www.co.santa-cruz.ca.us。随意开火。重复指令开火</blockquote>
<blockquote>Absolem收到长官。</blockquote>
<p>数据中心的工作人员紧张地盯着服务器上面反馈出一连串拒绝服务的请求。尽管他们尽了最大的努力网站还是崩溃了。25 分钟后Doyon 决定遵守承诺。他下令“停止攻击”,政府网站开始恢复了正常运行。(这次攻击后,“流浪者管理法案”依旧没有废除。)</p>
<p>Doyon 没有时间去庆祝胜利,他显得焦躁不安。“我得走了,”他告诉 Covelli。他飞一般地逃回了山中小屋。Doyon 的感觉是正确的:一位 FBI 的探员早就在 IRC 上盯住了他。这位 FBI 的探员已经获许搜查 Doyon 的笔记本电脑。</p>
<p>几周后Doyon 的食物吃完了,他不得不下山进行采购。当 Doyon 在 Coffee Roasting Company 咖啡厅逗留的时候两位联邦探员走了进来将他拘捕。Doyon 给“和平阵营”的创建者,同时也是一名律师的 Ed Frey 打了一个电话Ed Frey 来到了警察局。Doyon 告诉了 Frey 他的另一个身份“Commander X”的事。</p>
<p>随后 Doyon 被释放,但 FBI 没收了他的笔记本电脑里面满是犯罪证据。Frey一个几乎不了解网络世界的维权律师把 Doyon 载回了他山边的露营地。“接着你要怎么办”Frey 问道。</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18447-600.jpg" /></center>
<center><small>“Zach 很聪明... 并且... 是一个天才... 但.. 你们... 不在一个班。”</small></center>
<p>Doyon 引用了一句电影台词。“拼命地跑”他说。“我会躲起来尽可能保持我的行动自由用尽全力和这帮杂种们作斗争。”Frey 给了他两张 20 美元的钞票并祝他好运。</p>
<h2>5</h2>
<p>Doyon 搭着便车来到了旧金山,并在这里呆了三个月。他经常混迹于 Haight-Ashbury 区的一家杂乱的咖啡馆里,在计算机前一坐就是几个小时,只有在抽烟时他才会起身走到室外活动。</p>
<p>2011 年 1 月Doyon 联系了新闻记者兼匿名者的 Barrett Brown。“我们的下一步计划是什么”Doyon 问道。</p>
<p>“突尼斯,” Brown 答道。</p>
<p>“我知道,那是中东地区的一个国家,” Doyon 继续问,“然后呢?”</p>
<p>“我们准备打倒那里的独裁者,” Brown 再次答道。</p>
<p>“啊?!那里有一位独裁者吗?” Doyon 有点惊讶。</p>
<p>几天后“突尼斯行动”正式展开。Doyon 作为参与者向突尼斯政府域名下的电子邮箱发送了大量的垃圾邮件,以此阻塞其服务器。“我会提前写好关于那次行动邮件,接着一次又一次地把它们发送出去,” Doyon 说,“有时候实在没有时间,我就只简短的写上一句问候对方母亲的的话,然后发送出去。”短短一天时间里,匿名者们就攻陷了包括突尼斯证券交易所、工业部、总统办公室、总理办公室在内的多个网站。他们把总统办公室网站的首页替换成了一艘海盗船的图片,并配以文字“‘报复’是个贱人,不是吗?”</p>
<p>Doyon 不时会谈起他的网上“战斗”经历似乎他刚从弹坑里爬出来一样。“伙计自从干了这行我就变黑了”他向我诉苦道。“你看我的脸全是抽烟的时候熏的——而且可能已经粘在我的脸上了。我仔细地照过镜子毫不夸张地说我简直就是一头棕熊。”很多个夜晚Doyon 都是在 Golden Gate 公园里露营过夜的。“我就那样干了四天,我看了看镜子里的‘我’,感觉还可以——但其实我觉得‘我’也许应该去吃点东西、洗个澡了。”</p>
<p>“匿名者”组织接着又在 YouTube 上声明了将要进行的一系列行动“利比亚行动”、“巴林行动”、“摩洛哥行动”。当抗议者聚集在解放广场时Doyon 参与了“埃及行动”。在 Facebook 针对这次行动的宣传专页中,有一个为当地示威者准备的“行动套装”链接。“行动套装”通过文件共享网站 Megaupload 进行分发,其中含有加密软件以及应对瓦斯袭击的防护指南。不久后,在埃及政府切断全国互联网的时候,它还继续向当地抗议者们提供连接网络的方法。</p>
<p>2011 年夏季Doyon 接替 Adama 成为 PLF 的最高指挥官。Doyon 招募了六个新成员,并力图发展 PLF 成为“匿名者”组织的中坚力量。Covelli 成为了他的其中一位技术顾问。另一名黑客 Crypt0nymous 负责在 YouTube 上发布视频其余的人负责研究以及组装电子设备。与松散的“匿名者”组织不同PLF 内部有一套极其严格的管理体系。“Commander X 事必躬亲”Covelli 说。“这是他的行事风格,也许不能称之为一种风格。”一位创立了 AnonInsiders 博客的黑客通过加密聊天告诉我,他认为 Doyon 总是一意孤行——这在“匿名者”组织中是很罕见的现象。“当我们策划发起一项行动时,他并不在乎其他人是否同意,”这位黑客补充道,“他会一个人列出行动方案,确定攻击目标,登录 IRC 聊天室,接着告诉所有人在哪里‘碰头’,然后发起 DDoS 攻击。”</p>
<p>一些匿名者把 PLF 视为可有可无的部分,认为 Doyon 的所作所为完全是个天大的笑柄。“他是因为吹牛出名的,”另一名昵称为 Tflow 的匿名者 Mustafa Al-Bassam 告诉我。不过,即使是那些极度反感 Doyon 的狂妄自大的人,也不得不承认他在“匿名者”组织发展过程中的重要性。“他所倡导的强硬路线有时很奏效,有时则完全不起作用,” Gregg Housh 说,并且补充道自己和其他优秀的匿名者都曾遇到过相同的问题。</p>
<p>“匿名者”组织对外坚持声称自己是不分层次的平等组织。在由 Brian Knappenberger 制作的一部纪录片《我们是一个团体》中一名成员使用“一群鸟”来比喻组织它们轮流领飞带动整个组织不断前行。Gabriella Coleman 告诉我,这个比喻不太切合实际,“匿名者”组织内实际上早就出现了一个非正式的领导阶层。“领导者非常重要,”她说。“有四五个人可以看做是我们的领头羊。”她把 Doyon 也算在了其中。但是匿名者们仍然倾向于反抗这种具有体系的组织结构。在一本即将出版的关于“匿名者”组织的书《黑客、骗子、告密者、间谍》中Coleman 这么写道,在匿名者中,“成员个体以及那些特立独行的人依然在一些重大事件上保持着服从的态度,优先考虑集体——特别是那些能引发强烈争端的事件。”</p>
<p>匿名者们谑称那些特立独行的成员为“自尊心超强的疯子”和“想让自己出名的疯子”。不过许多匿名者已经不会再随便给他人取那种具有冒犯性的称号了。“但还是有极少数成员令人惊讶地违反规则、打破传统上的看法,” Coleman 说。“这么做的人,像 Commander X 这样的,都会在组织里受到排斥。”去年,在一家网络论坛上,有人写道,“当他开始把自己比作‘蝙蝠侠’的时候我就不想理他了。”</p>
<p>Peter Fein 是一位以 n0pants 为昵称而出名的网络激进分子,也是众多反对 Doyon 浮夸行为的匿名者之一。Fein 浏览了 PLF 的网站,其封面上有一个徽章,还有关于组织的宣言——“为了解放众多人类的灵魂而不断战斗”。Fein 沮丧地发现 Doyon 早就使用真名注册了这家网站,这使得他这样的人,以及其他想要找事的匿名者,可以轻易查出 Doyon 的身份。“如果有人要对我的网站进行 DDoS 攻击,那完全可以,” Fein 回想起通过私密聊天告诉 Doyon 时的情景,“但如果你要这么做了的话,我会揍扁你的屁股。”</p>
<p>2011 年 2 月 5 日,《金融时报》报道了一家名为 HBGary Federal 的网络安全公司,其首席执行官 Aaron Barr 宣称已经得到了“匿名者”组织骨干成员名单的消息。Barr 的调查结果表明三位最高领导人其中之一就是“Commander X”这位潜伏在加利福尼亚州的黑客有能力“策划一些大型网络攻击事件”。Barr 联系了 FBI 并提交了自己的调查结果。</p>
<p>和 Fein 一样Barr 也发现了 PLF 网站的注册法人名为 Christopher Doyon地址是 Haight 大街。基于 Facebook 和 IRC 聊天室的调查Barr 断定“Commander X”的真实身份是一名家庭住址在 Haight 大街附近的网络激进分子 Benjamin Spock de Vries。Barr 通过 Facebook 和 de Vries 取得了联系。“请告诉组织里的普通阶层,我并不是来抓你们的,” Barr 留言道,“只是想让‘领导阶层’知晓我的意图。”</p>
<p>“‘领导阶层’? 2333笑死我了” de Vries 回复道。</p>
<p>《金融时报》发布报道的第二天“匿名者”组织就进行了反击。HBGary Federal 的网站遭到了恶意篡改。Barr 的私人 Twitter 账户被盗取他的上千封电子邮件被泄漏到了网上同时匿名者们还公布了他的住址以及其他私人信息——这是一种被称作“doxing”人肉搜索的惩罚手段。不到一个月后Barr 就从 HBGary Federal 辞职了。</p>
<h2>6</h2>
<p>2011 年 4 月Doyon 离开了旧金山搭便车向西部前行过着夜晚露宿公园、白天混迹于星巴克的生活。他的背包里只有一台笔记本电脑、一副盖伊·福克斯Guy Fawkes面具还有在 Pall 超市里购买的一些东西。</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18563-600.jpg" /></center>
<center><small>“这是我在 TED 夏令营里学到的东西。”</small></center>
<p>他时刻关注着“匿名者”组织的内部消息。那年春季,在 Barr 调查报告中提到的六位匿名者精锐成员组建了“Lulz 安全”组织Lulz Security简称 LulzSec。正如其名这些成员认为“匿名者”组织已经变得太过严肃他们的目标是重新引发那些“能挑起强烈争端”的事件。当“匿名者”组织还在继续支持“阿拉伯之春”的抗议者的时候LulzSec 入侵了公共电视网Public Broadcasting ServicePBS网站并发布了一则虚假声明称已故说唱歌手 Tupac Shakur 仍然生活在新西兰。</p>
<p>匿名者之间会通过 Pastebin.com 网站来共享文字。在这个网站上LulzSec 发表了一则声明,称“很不幸,我们注意到北约和我们的好总统巴拉克,奥萨马·本·美洲驼(拉登同学)的好朋友,来自 24 世纪的奥巴马最近明显提高了对我们这些黑客的关注程度。他们把黑客入侵行为视作一种战争的表现。”目标越高远挑起的纷争就越大。6 月 15 日LulzSec 表示对 CIA 网站受到的袭击行为负责他们发表了一条推特上面写道“目标击毙Tango down亦即target down—— cia.gov ——这是起挑衅行为。”</p>
<p>2011 年 6 月 20 日LulzSec 的一名十九岁的成员 Ryan Cleary 因为对 CIA 的网站进行了 DDoS 攻击而被捕。7 月FBI 探员逮捕了七个月前对 PayPal 进行 DDoS 攻击的其他十四名黑客。这十四名黑客,每人都面临着 15 年的牢狱之灾以及 50 万美元的罚款。他们因为图谋不轨以及故意破坏互联网,而被控违反了计算机欺诈与滥用处理条例。(该法案允许检察官进行酌情处置,并在去年网络激进分子 Aaron Swartz 因面临最高 35 年的牢狱之灾而自杀身亡之后,受到了广泛的质疑和批评。)</p>
<p>LulzSec 的成员之一 Jake (Topiary) Davis 因为付不起法律诉讼费给组织的成员们写了一封请求帮助的信件。Doyon 进入了 IRC 聊天室,把 Davis 需要帮助的消息进行了扩散:</p>
<blockquote>CommanderX那么请大家阅读信件并给予 Topiary 帮助...</blockquote>
<blockquote>Toad你真是和【哔~】一样消息灵通。</blockquote>
<blockquote>Toad这么说你得到 Topiary 的消息了?</blockquote>
<blockquote>CommanderXToad 你这个混蛋!</blockquote>
<blockquote>Katanon唉...</blockquote>
<p>Doyon 越来越大胆。在佛罗里达州当局逮捕了支持流浪者的激进分子后,他就对奥兰多商会的网站发动了 DDoS 攻击。他使用个人笔记本电脑通过公用无线网络实施了攻击,并且没有花费太多精力来隐藏自己的网络行踪。“这种做法很勇敢,但也很愚蠢,”一位自称 Kalli 的 PLF 的资深成员告诉我。“他看起来并不在乎是否会被抓。他完全是一名自杀式黑客。”</p>
<p>两个月后Doyon 参与了针对旧金山湾区快速交通系统Bay Area Rapid Transit的 DDoS 攻击,以此抗议一名 BART 的警官杀害一名叫做 Charles Hill 的流浪者的事件。随后 Doyon 现身“CBS 晚间新闻”为这次行动辩护,当然,他处理了自己的声音,把自己的脸用香蕉进行替代。他把 DDoS 攻击比作公民的抗议行为。“与静坐占领 Woolworth 午餐柜台的座位相比,这真的没什么不同,真的,”他说道。CBS 的主播 Bob Schieffer 笑称:“就我所见,它并不完全是一项民权运动。”</p>
<p>2011 年 9 月 22 日在加利福尼亚州山景城Mountain View的一家咖啡店里Doyon 被捕,同时面临着“使用互联网非法破坏受保护的计算机”的罪名指控。他被拘留了一个星期的时间,接着在签署协议之后获得假释。两天后,他不顾律师的反对,宣布将在圣克鲁斯郡法院召开新闻发布会。他梳起了马尾辫,戴着一副墨镜、一顶黑色海盗帽,同时还在脖子上围了一条五彩手帕。</p>
<p>Doyon 通过非常夸大的方式披露了自己的身份。“我就是 Commander X”他告诉蜂拥的记者。他举起了拳头。“作为匿名者组织的一员作为一名核心成员我感到非常的骄傲。”他在接受一名记者的采访时说“想要成为一名顶尖黑客的话你只需要准备一台电脑以及一副墨镜。任何一台电脑都行。”</p>
<p>Kalli 非常担心 Doyon 会不小心泄露组织机密或者其他匿名者的信息。“这是所有环节中最薄弱的地方,如果这里出问题了,那么组织就完了,”他告诉我。曾在“和平阵营行动”中给予 Doyon 大力帮助的匿名者 Josh Covelli 告诉我,当他在网上看见 Doyon 的新闻发布会视频的时候,他感觉瞬间“下巴掉到了地上”。“他的所作所为变得越来越难以捉摸,” Covelli 评价道。</p>
<p>三个月后Doyon 的指定律师 Jay Leiderman 出席了圣荷西联邦法庭的一场听证会。Leiderman 已经好几个星期没有得到 Doyon 的消息了。“我需要得知被告无法出席的具体原因”法官说。Leiderman 无法回答。Doyon 再次缺席了两星期后的另一场听证会。检控方表示:“很明显,看来被告已经逃跑了。”</p>
<h2>7</h2>
<p>“Xport 行动”是“匿名者”组织进行的所有同类行动中的第一个行动。这次行动的目标是协助如今已经背负两项罪名的通缉犯 Doyon 潜逃出国。负责调度的人是 Kalli 以及另一位曾在八十年代剑桥的迷幻药派对上和 Doyon 见过面的匿名者老兵。这位老兵是一位已经退休的软件主管,在组织内部威望很高。</p>
<p>Doyon 的终点站是这位软件主管的家位于加拿大的偏远乡村。2011 年 12 月,他搭便车前往旧金山,并辗转来到了市区组织大本营。他找到了他的指定联系人,后者带领他到达了奥克兰的一家披萨店。凌晨 2 点Doyon 通过披萨店的无线网络,接收了一条加密聊天消息。</p>
<p>“你现在靠近窗户吗?”那条消息问道。</p>
<p>“是的,” Doyon 回复道。</p>
<p>“往大街对面看。看见一个绿色的邮箱了吗?十五分钟后,你去站到那个邮箱旁边,把你的背包取下来,然后把你的面具放在上面。”</p>
<p>一连几个星期的时间Doyon 穿梭于海湾地区的安全屋之间,按照加密聊天那头的指示不断行动。最后,他搭上了前往西雅图的长途公交车,软件主管的一个朋友在那里接待了他。这个朋友是一名非常富有的退休人员,他通过谷歌地球帮助 Doyon 规划了前往加拿大的路线。他们共同前往了一家野外用品供应商店,这位朋友为 Doyon 购置了价值 1500 美元的商品,包括登山鞋以及一个全新的背包。接着他又开车载着 Doyon 北上,两小时后到达距离国界只有几百英里的偏僻地区。随后 Doyon 见到了 Amber Lyon。</p>
<p>几个月前,广播新闻记者 Lyon 曾在 CNN 的关于“匿名者”组织的节目里采访过 Doyon。Doyon 很欣赏她的报道他们一直保持着联络。Lyon 要求加入 Doyon 的逃亡行程,为一部可能会发行的纪录片拍摄素材。软件主管认为这样太过冒险,但 Doyon 还是接受了她的请求。“我觉得他是想让自己出名,” Lyon 告诉我。四天的时间里,她用影像记录下了 Doyon 徒步北上,在林间露宿的行程。“那一切看起来不太像是仔细规划过的,” Lyon 回忆说。“他实在是无家可归了,所以他才会想要逃到国外去。”</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_a18506-600.jpg" /></center>
<center><small>“这里是我们存放各种感觉的仓库。如果你发现了某种感觉,把它带到这里然后锁起来。”</small></center>
<p>2012 年 2 月 11 日Pastebin 上出现了一条消息。“PLF 很高兴地宣布‘Commander X也就是 Christopher Mark Doyon已经离开了美国的司法管辖区抵达了加拿大一个比较安全的地方”上面写着“PLF 呼吁美国政府,希望政府能够醒悟过来并停止无谓的骚扰与监视行为——不要仅仅逮捕‘匿名者’组织的成员,对所有的激进组织应该一视同仁。”</p>
<h2>8</h2>
<p>Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barrett Brown 的聊天中Doyon 难掩内心的喜悦之情。</p>
<blockquote>BarrettBrown你现在应该足够安全了吧其他的呢...</blockquote>
<blockquote>CommanderX是的我现在很安全现在加拿大既不缺钱也不缺藏身的地方。</blockquote>
<blockquote>CommanderXAmber Lyon 想要你的一张照片。</blockquote>
<blockquote>CommanderX去他【哔~】的怪人Barrett相信你会喜欢我告诉她应该怎样评价你的。</blockquote>
<blockquote>CommanderX:-)</blockquote>
<blockquote>CommanderX我告诉她你是一个英雄。</blockquote>
<blockquote>BarrettBrown你才是真正的英雄...</blockquote>
<blockquote>BarrettBrown很高兴你现在安全了</blockquote>
<blockquote>BarrettBrown如果你还需要什么告诉我一声就可以了</blockquote>
<blockquote>CommanderX我会的如果这种方式的确很奏效的话可以让其他被通缉的人也这样逃出来....</blockquote>
<blockquote>BarrettBrown当然估计我们不久后也得这样了</blockquote>
<p>在 Doyon 出逃十天后,《华尔街日报》刊登了关于时任美国国家安全局局长及网络司令部司令 Keith Alexander 的报道,他在白宫举行的秘密会晤以及其他场合下表达了对“匿名者”组织的高度关注。Alexander 发出警告,一两年内,该组织或将具备通过网络攻击造成局部电网瘫痪的能力。参谋长联席会议主席 Martin Dempsey 将军告诉记者,这群人是国家的敌人。“他们有能力把这些使用恶意软件造成破坏的技术扩散到其他的边缘组织去,”随后又补充道,“我们必须防范这种情况发生。”</p>
<p>3 月 8 日,国会议员们在国会大厦的一个敏感信息隔离设施里举行了关于网络安全的会议。包括 Alexander、Dempsey、美国联邦调查局局长 Robert Mueller以及美国国土安全部部长 Janet Napolitano 在内的多名美国安全方面的高级官员出席了这次会议。会议上,通过计算机向与会者模拟了东部沿海地区电力设施遭受网络攻击时的情境。“匿名者”组织目前应该还不具备发动此种规模攻击的能力,但安全方面的官员担心他们会联合其他更加危险的组织来共同发动攻击。“在我们应对不断增加的网络风险事故时,政府仍在就具体的处理细节进行不断协商讨论,” Napolitano 告诉我。当谈及潜在的网络安全隐患时,她补充道,“我们通常会把‘匿名者’组织的行动当做 A 级威胁来应对。”</p>
<p>“匿名者”也许是当今世界上最强大的无政府主义黑客组织。即使如此,它却从未表现出过任何会对公共基础设施造成破坏的迹象或意愿。一些网络安全专家称,那些关于“匿名者”组织的谣传太过危言耸听。“在奥兰多发布战前宣言和实际发动 Stuxnet 蠕虫病毒那样的攻击之间是有很大差距的”战略与国际研究中心Center for Strategic and International Studies的 James Andrew Lewis 告诉我,后者指的是 2007 年前后美国与以色列对伊朗核设施发动的网络攻击。哈佛大学法学院的教授 Yochai Benkler 告诉我,“我们所看到的,不过是以防御为名进行的大量开销,否则,这些开销将很难自圆其说。”</p>
<p>Keith Alexander 最近刚从政府部门退休,他拒绝就此事发表评论,国家安全局、联邦调查局、中央情报局以及国土安全部也都拒绝置评。尽管匿名者们从未真正盯上过政府部门的计算机网络,但他们对于那些激怒他们的人有着强烈的报复心理。前国土安全部国家网络安全部门负责人 Andy Purdy 告诉我,出于对“报复的恐惧”,无论机构还是个人,都不愿公开与“匿名者”组织作对。“每个人都非常脆弱,”他说。</p>
<h2>9</h2>
<p>2012 年 3 月 6 日Hector Xavier Monsegur昵称为 Sabu 的 LulzSec 骨干成员,被发现已成为 FBI 的线人。为了换取减刑Monsegur 花费了数月的时间卧底,协助搜集其他 LulzSec 成员的罪证。同一天,五位匿名者领导被捕,同时面临着包括“计算机犯罪”在内的多项罪名指控。联邦调查局的一名官员在接受福克斯新闻记者采访时说道,“这对那个组织是一个毁灭性的打击。我们的行动如同砍掉了 LulzSec 组织的头。”接下来的十个月里, Barrett Brown 收到了 17 项联邦罪名的指控,其中的大部分后来被撤销了。(他将在十月被宣判最终结果。)</p>
<p>Doyon 感到很烦躁但他还是继续扮演着一名黑客——以此吸引关注。他在多伦多上映的纪录片上以戴着面具的匿名者形象出现。在接受《National Post》的采访时他向记者大肆吹嘘未经证实的消息“我们已经入侵了美国政府的所有机密数据库。现在的问题是我们该何时泄露这些机密数据而不是我们是否会泄露。”</p>
<p>2013 年 1 月,在另一名匿名者介入俄亥俄州<a href="https://gist.githubusercontent.com/SteveArcher/cdffc917a507f875b956/raw/c7b49cc11ae1e790d30c87f7b8de95482c18ec74/%E6%96%AF%E6%89%98%E6%9C%AC%E7%BB%B4%E5%B0%94%E8%BD%AE%E5%A5%B8%E6%A1%88%E5%86%8D%E8%B5%B7%E9%A3%8E%E6%B3%A2%20%E9%BB%91%E5%AE%A2%E7%BB%84%E7%BB%87%E4%BB%8B%E5%85%A5">斯托本维尔未成年少女轮奸案</a>发起抗议行动之后Doyon 重新启用了他两年前创办的网站 LocalLeaks作为那起轮奸事件的信息汇总处理中心。如同许多其他“匿名者”组织的所作所为一样LocalLeaks 网站非常具有影响力但却也不承担任何责任。LocalLeaks 网站是第一家公布 12 分钟斯托本维尔高中毕业生猥亵视频的网站这激起了众多当事人的愤怒。LocalLeaks 网站上同时披露了几份未被法庭收录的关于案件的材料并且由此不小心透漏出了案件受害人的名字。Doyon向我承认他公开这些未经证实的信息的策略是存在争议的但他同时回忆起自己当时的想法“我们可以选择去除这些斯托本维尔案件的材料...也可以选择公开所有我们搜集的信息,基本上,给公众以提醒,不过,前提是你们得相信我们。”</p>
<p>2013 年 3 月,一个名为 Rustle League 的组织入侵了 Doyon 的 Twitter 账户该组织此前经常挑衅“匿名者”组织。Rustle League 的领导者之一 Shm00p 告诉我,“我们的本意并不是伤害那些家伙,只不过,哦,那些家伙说的话你就当是在放屁好了——我会这么做只是因为我感到很好笑。” Rustle League 组织使用 Doyon 的账户发布了含有如 www.jewsdid911.org 链接这样的,种族主义和反犹太主义的信息。</p>
<p>2013 年 8 月 27 日Doyon 发布了一则退出“匿名者”组织的声明。“我的一生都用在了追求正义和自由上,”他写道,“也许‘ Commander X是无敌的但我在这种高节奏的全球网络斗争中已经感到很累了感觉自己好像病了。”各界对此反应不一有同情的“你是该休息了”也有嘲讽的“可怜的疯狂小老头。也许他现在有时间洗澡了”。 Covelli 告诉我,“‘匿名者’的身份对他产生了较大的影响,他已经不能再应付了。”</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_roberts-1998-08-17-600.jpg" /></center>
<center><small>1998 年 8 月 17 日 “我们还有‘巴黎’吗?仔细想想,我等会儿去检查一下。”</small></center>
<h2>10</h2>
<p>2013 年 11 月 5 日举行了第一次“百万面具游行”活动。全世界四百五十个城市发起了数千人参与的支持“匿名者”组织的游行。伦敦的一名抗议者摘下了盖伊·福克斯面具后,露出了演员罗素·布兰德的脸。这种迹象表明,“匿名者”组织已经深入到了流行文化中。</p>
<p>我参加了华盛顿的集会Doyon 则呆在了加拿大观看现场直播。通过移动电话,我和 Doyon 不断交换着电子邮件。“只能坐在这里看直播而不能亲自去现场真的很令人沮丧——尤其是当这里面包含有你努力的结果的时候,”他在邮件里写道。“不过至少一切都已有所改变。”</p>
<p>我们约定了一次面谈。Doyon 坚持让我通过加密聊天把面谈的详细情况提前告诉他。我坐了几个小时的飞机,租车来到了加拿大的一个偏远小镇,并且禁用了我的电话。</p>
<p>最后,我在一个狭小安静的住宅区公寓里见到了 Doyon。他穿了一件绿色的军人夹克衫以及印有“匿名者”组织 logo 的 T 恤衫:一个脸被问号所替代的黑衣人形象。公寓里基本上没有什么家具,充满了一股烟味。他谈论起了美国政治(“我基本没怎么在众多的选举中投票——它们不过是暗箱操作的游戏罢了”),好战的伊斯兰教(“我相信,尼日利亚政府的人不过是相互勾结,以创建一个名为‘博科圣地’的基地组织的下属机构罢了”),以及他对“匿名者”组织的小小看法(“那些自称为怪人的人是真的是烂透了,意思是,邪恶的人”)。</p>
<p>Doyon 剃去了他的胡须,但他却显得更加憔悴了。他说那是因为他病了,他几乎很少出去。很小的写字台上有两台笔记本电脑、一摞关于佛教的书,还有一个堆满烟灰的烟灰缸。另一面裸露的泛黄墙壁上挂着盖伊·福克斯面具。他告诉我“所谓Commander X不过是一个处于极度痛苦中的小老头罢了。”</p>
<p>在刚过去的圣诞节里,匿名者的新网站 AnonInsiders 的创建者拜访了 Doyon并给他带来了馅饼和香烟。Doyon 询问来访的朋友是否可以继承自己的衣钵成为 PLF 的最高指挥官,同时希望能够递交出自己手里的“王国钥匙”——手里的所有密码,以及几份关于“匿名者”组织的机密文件。这位朋友委婉地拒绝了。“我有自己的生活,”他告诉了我拒绝的理由。</p>
<h2>11</h2>
<p>2014 年 8 月 9 日,当地时间下午 5 时 09 分,来自密苏里州圣路易斯郊区德尔伍德的一位说唱歌手,同时也是激进分子的 Kareem (Tef Poe) Jackson在 Twitter 上谈起了邻近城镇的一系列令人担忧的举措。“基本可以断定弗格森已经实施了戒严,任何人都无法出入,”他在 Twitter 上写道。“国内的朋友还有因特网上的朋友请帮助我们!!!”五个小时前,在弗格森,一位十八岁的手无寸铁的非裔美国人 Michael Brown被一位白人警察射杀。开枪的警察声称自己这么做的原因是 Brown 意图伸手抢夺自己的枪支。而事发当时和 Brown 在一起的朋友 Dorian Johnson 却说Brown 唯一做得不对的地方在于他当时拒绝离开街道中间。</p>
<p>不到两小时Jackson 就收到了一位名为 CommanderXanon 的 Twitter 用户的回复。“你完全可以相信我们,”回复信息里写道。“你是否可以给我们详细描述一下现场情况,那样会对我们很有帮助。”近几周的时间里,仍然呆在加拿大的 Doyon 复出了。六月,他在还有两个月满 50 岁的时候,成功戒烟(“#戒瘾成功 #电子香烟功不可没 #老了,”他在戒烟成功后在 Twitter 上写道。七月在加沙地带爆发武装对抗之后Doyon 发表 Twitter 支持“匿名者”组织的“拯救加沙行动”,并发动了一系列针对以色列网站的 DDoS 攻击。Doyon 认为弗格森枪击事件更加令人关注。抛开他本人的个性不谈,他确实有在事件引人注目之前就敏锐察觉的能力。</p>
<p>“正在网上搜索关于那名警察以及当地政府的信息,” Doyon 发 Twitter 道。不到十分钟,他就为此专门在 IRC 聊天室里创建了一个频道。“‘匿名者’组织‘弗格森’行动正式启动,”他又发了一条 Twitter。但只有两个人转推了此消息。</p>
<p>次日早晨Doyon 发布了一条链接,链接指向的是一个初具雏形的网站,网站首页有一条致弗格森市民的信息——“你们并不孤单,我们将尽一切努力支持你们”——以及致当地警察的警告:“如果你们对弗格森的抗议者们滥用职权、骚扰,或者伤害了他们,我们绝对会让你们所有政府部门的网站瘫痪。这不是威胁,这是承诺。”同时 Doyon 呼吁有 130 万粉丝的“匿名者”组织的 Twitter 账号 YourAnonNews 给予支持。“请支持弗格森行动”他发送了消息。一分钟后YourAnonNews 回复表示同意。当天,包含话题 #OpFerguson 的推文发表/转推了超过六千次。</p>
<p>这个事件迅速成为头条新闻,同时匿名者们在弗格森周围进行了大集会。与“阿拉伯之春”的行动类似,“匿名者”组织向抗议者们发送了电子关怀包,包括抗暴指导(“把瓦斯弹捡起来,回丢给警察”)与可打印的盖伊·福克斯面具。Jackson 和其他示威者在弗格森进行示威游行时,警察企图通过橡皮子弹和催泪瓦斯来驱散他们。“当时的情景真像是布鲁斯·威利斯的电影里的情节,” Jackson 后来告诉我。“不过巴拉克·奥巴马应该并不会支持‘匿名者’组织传授给我们的这些知识,”他笑称道。“让那些警察感到束手无策真的是太爽了。”</p>
<p>有个域名是 www.opferguson.com 的网站,后来被发现不过是一个骗局——一个用来收集访问者 IP 地址的陷阱,随后这些地址会被移交给执法机构。有些人怀疑 Commander X 是政府的线人。在 IRC 聊天室 #OpFerguson 频道,一个名叫 Sherlock 的人写道,“现在频道里每个人说的已经让我害怕去点击任何陌生的链接了。除非是一个我非常熟悉的网址,否则我绝对不会去点击。”</p>
<p>弗格森的抗议者要求当局公布射杀 Brown 的警察的名字。几天后,匿名者们附和了抗议者们的请求。有人在 Twitter 上写道“弗格森警察局最好公布肇事警察的名字否则匿名者组织将会替他们公布。”8 月 12 日的新闻发布会上,圣路易斯警察局的局长 Jon Belmar 拒绝了这个请求。“我们不会这样做,除非他们被某个罪名所指控,”他说道。</p>
<p>作为报复,一名黑客使用名为 TheAnonMessage 的 Twitter 账户公布了一条链接,该链接指向一段来自警察的无线电设备所记录的音频文件,内容是 Brown 被枪杀前后约两小时的无线电通讯。TheAnonMessage 同时也把矛头指向了 Belmar在 Twitter 上公布了这位警察局长的家庭住址、电话号码以及他的家庭照片——一张是他的儿子在长椅上睡觉,另一张则是 Belmar 和他的妻子的合影。“不错的照片Jon” TheAnonMessage 在 Twitter 上写道。“你的妻子在她这个年龄算是一个美人了。你已经爱她爱得不耐烦了吗”一个小时后TheAnonMessage 又以 Belmar 的女儿为把柄进行了恐吓。</p>
<p>Richard Stallman来自 MIT 的初代黑客告诉我虽然他在很多地方赞同“匿名者”组织的行为但他认为这些泄露私人信息的攻击行为是要受到谴责的。即使是在“匿名者”组织内部TheAnonMessage 的行为也受到了谴责。“为何要泄露无辜的人的信息到网上?”一位匿名者通过 IRC 发问,并且表示威胁 Belmar 的家人实在是“相当愚蠢的行为”。但是 TheAnonMessage 和其他的一些匿名者仍然进行着不断搜寻,并企图在将来再次进行泄露信息的攻击。在互联网上可以得到所有弗格森警察局警员的名字,匿名者们不断地搜索着信息,企图找出究竟是哪一个警察杀害了 Brown。</p>
<center><img src="http://www.newyorker.com/wp-content/uploads/2014/09/140908_steig-1999-04-12-600.jpg" /></center>
<center><small>1999 年 4 月 12 日 “我应该把镜头对向谁?”</small></center>
<p>8 月 14 日清晨,几位匿名者基于 Facebook 上的照片还有其他的证据,确定了射杀 Brown 的凶手是一位名叫 Bryan Willman 的 32 岁男子。根据一份 IRC 聊天记录,一位匿名者贴出了 Willman 的浮肿面孔的照片;另一位匿名者提醒道,“凶手声称自己的脸没有被任何人看到。”另一位昵称为 Anonymous|11057 的匿名者承认,他对 Willman 的怀疑确实是“由跳跃性的、可能错误的逻辑推导出来的”。不过他还是写道,“我只是无法动摇自己的想法。虽然我没有任何证据,但我非常非常地确信就是他。”</p>
<p>TheAnonMessage 看起来被这次对话逗乐了,写道,“#愿逝者安息,凶手是 BryanWillman。”另一位匿名者发出了强烈警告。“请务必确认” Anonymous|2252 写道。“这不仅仅关乎到一个人的性命,我们可以不负责任地向公众公布我们的结果,但却很可能有无辜的人会因此受到不应受到的对待。”</p>
<p>争论超过了一个小时。一些匿名者指出没有证据表明 Willman 曾经在弗格森警察局任过职。</p>
<blockquote>Anonymous|3549@gs 我们依旧没有证据能够证明 Bryan 曾在警局呆过</blockquote>
<blockquote>Intangir现在的形势已经够紧张的了一旦我们把这个消息公布出去可能就会有人因此去杀了他</blockquote>
<blockquote>Anonymous|11057唯一的证明方法是犯罪现场目击者报告。否则我们的结果只是一个谣言</blockquote>
<blockquote>Anonymous|11057最快的排除嫌疑的方法是称他为嫌疑犯...我们都害怕犯下不公正的错误,但这种方法恰好可以避免这些...</blockquote>
<p>大部分匿名者都反对在网上泄露他人信息。但是早晨七点左右,匿名者们进行了一次投票。聊天记录显示,当时聊天室里有 80 人左右,只有不到十人参与了投票表决。尽管如此,他们还是决定在互联网上公布 Willman 的私人信息。</p>
<blockquote>Anonymous|2252还在 Twitter 上公布?</blockquote>
<blockquote>anondepplol</blockquote>
<blockquote>Anonymous|2252@theanonmessage 公布?</blockquote>
<blockquote>TheAnonMessage当然</blockquote>
<blockquote>TheAnonMessage去发吧</blockquote>
<blockquote>anondepp搞定了</blockquote>
<blockquote>Anonymous|2252我去</blockquote>
<blockquote>TheAnonMessage上帝保佑...</blockquote>
<blockquote>Anonymous|3549...请拯救我们的灵魂</blockquote>
<blockquote>anondepplol</blockquote>
<p>早晨 9 时 45 分,圣路易斯警察局对 TheAnonMessage 进行了答复。“Bryan Willman 从来没有在弗格森警察局或者圣路易斯警察局任过职,” 他们在 Twitter 上写道。“请不要再公布这位无辜市民的信息了。”(随后 FBI 对弗格森警察的电脑遭黑客入侵的事情展开了调查。Twitter 管理员迅速封禁了 TheAnonMessage 的账户,但 Willman 的名字和家庭住址仍然被广泛传开。</p>
<p>实际上Willman 是弗格森西郊圣安区的警察外勤负责人。当圣路易斯警察局的情报处打电话告诉 Willman他已经被“确认”为凶手时他告诉我“我以为不过是个奇怪的笑话。”几小时后他的社交账号上就收到了数百条要杀死他的威胁。他在警察的保护下独自一人在家里呆了将近一个星期。“我只希望这一切都尽快过去”他告诉我他的感受。他认为“匿名者”组织已经不可挽回地损害了他的名誉。“我不知道他们怎么会以为自己可以被再次信任的”他说。</p>
<p>“我们并不完美,” OpFerguson 在 Twitter 上说道。“‘匿名者’组织确实犯错了,过去的几天我们制造了一些混乱。为此,我们道歉。”尽管 Doyon 并不应该为这次错误的信息泄露攻击负责但其他的匿名者却因为他发起了一次无法控制的行动而归咎于他。YourAnonNews 在 Pastebin 上发表了一则消息,上面写道,“你们也许注意到了组织不同的 Twitter 账户发表的话题 #Ferguson 和 #OpFerguson这两个话题下的推文与信息是相互矛盾的。为什么会在这些关键话题上出现分歧部分原因是因为 CommanderX 是一个‘想让自己出名的疯子/想让公众认识自己的疯子’——这种人喜欢,或者至少不回避媒体的宣传——并且显而易见的,组织内大部分成员并不喜欢这样。”</p>
<p>在个人 Twitter 上Doyon 否认了所有关于“弗格森行动”的指责,他写道,“我讨厌这样。我不希望这样的情况发生,我也不希望和我认为是朋友的人战斗。”沉寂了几天后,他又再度吹响了战斗的号角。他最近在 Twitter 上写道,“你们称他们是暴民,我们却称他们是压迫下的反抗之声”以及“解放西藏”。</p>
<p>Doyon 仍然处于藏匿状态。甚至连他的律师 Jay Leiderman 也不知道他在哪里。Leiderman 表示除了在圣克鲁斯受到的指控Doyon 很有可能因为攻击了 PayPal 和奥兰多而面临新的指控。一旦他被捕,所有的刑期加起来,他的余生就要在监狱里度过了。借鉴 Edward Snowden 的先例,他希望申请去俄罗斯避难。我们谈话时,他用一支点燃的香烟在他的公寓里比划着。“这里比他【哔~】的牢房强多了吧?我绝对不会出去,”他愤愤道。“我不会再联系我的家人了....这是相当高的代价,但我必须这么做,我会尽我的努力让所有人活得自由、明白。”</p>
<p>via: http://www.newyorker.com/magazine/2014/09/08/masked-avengers</p>
<p>作者:<a href="http://www.newyorker.com/contributors/david-kushner">David Kushner</a></p>
<p>译者:<a href="https://github.com/SteveArcher">SteveArcher</a></p>
<p>校对:<a href="https://github.com/校对者ID">校对者ID</a></p>
<p>本文由 <a href="https://github.com/LCTT/TranslateProject">LCTT</a> 原创翻译,<a href="http://linux.cn/">Linux中国</a>荣誉推出</p>

View File

@ -0,0 +1,89 @@
桌面看腻了?试试这 4 款漂亮的 Linux 图标主题吧
================================================================================
**Ubuntu 的默认图标主题在 5 年内[并未发生太大的变化][1],那些说“[图标早就彻底更新过了][2]”的你过来,我保证不打你。如果你确实想尝试一些新鲜的东西,我们将向你展示一些惊艳的替代品,它们会让你感到眼前一亮。**
如果还是感到不太满意,你可以在文末的评论里留下你比较中意的图标主题的链接地址。
### Captiva ###
![Captiva 图标 + elementary 文件夹图标 + Moka GTK](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-and-captiva.jpg)
Captiva 图标 + elementary 文件夹图标 + Moka GTK
Captiva 是一款相对较新的图标主题,即使那些有华丽图标倾向的用户也会接受它。
Captiva 由 DeviantArt 的用户 ~[bokehlicia][3] 制作,它并未使用现在非常流行的扁平化风格,而是采用了一种圆润、柔和的外观。图标本身呈现出一种很有质感的材质外观,同时通过微调的阴影和亮丽的颜色提高了自身的格调。
不过 Captiva 图标主题并未包含文件夹图标在内,因此它将使用 elementary 的文件夹图标(如果有的话),或者普通的 Ubuntu 文件夹图标。
要想在 Ubuntu 14.04 中安装 Captiva 图标,你可以新开一个终端,按如下方式添加官方 PPA 并进行安装:
sudo add-apt-repository ppa:captiva/ppa
sudo apt-get update && sudo apt-get install captiva-icon-theme
或者,如果你不擅长通过软件源安装的话,你也可以直接从 DeviantArt 的主页上下载图标压缩包。把解压出的文件夹挪到家目录的‘.icons目录下即可完成安装。
不过在你完成安装后,你必须得通过像 [Unity Tweak Tool][4] 这样的工具来把你安装的图标主题(本文列出的其他图标主题也要这样)应用到系统上。
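上面这个手动安装的过程,可以概括为下面几条命令。注意这只是一个示意脚本:其中用一个空的 Captiva 目录代替真实解压出来的主题文件夹,实际操作时应换成你下载并解压后得到的目录:

```shell
# Ubuntu 会在家目录的 .icons 目录中查找用户安装的图标主题
mkdir -p "$HOME/.icons"

# 这里仅创建一个空目录代替真实的主题文件夹(假设主题目录名为 Captiva
mkdir -p Captiva

# 挪进 ~/.icons 即完成安装,之后用 Unity Tweak Tool 等工具应用即可
mv Captiva "$HOME/.icons/"
ls "$HOME/.icons"
```

对于本文列出的其他图标主题,手动安装的步骤也是一样的,只是目录名不同。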
- [DeviantArt 上的 Captiva 图标主题][5]
### Square Beam ###
![Square Beam 图标在 Orchis GTK 主题下](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/squarebeam.jpg)
Square Beam 图标在 Orchis GTK 主题下
厌倦有棱角的图标了?尝试下 Square Beam 吧。Square Beam 因为其艳丽的色泽、强烈的渐变和鲜明的图标形象比本文列出的其他图标具有更加宏大的视觉效果。Square Beam 声称自己有超过 30,000 个(抱歉,我没有仔细数过...)不同的图标(!),因此你很难找到它没有考虑到的地方。
- [GNOME-Look.org 上的 Square Beam 图标主题][6]
### Moka & Faba ###
![Moka/Faba Mono 图标在 Orchis GTK 主题下](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/moka-faba.jpg)
Moka/Faba Mono 图标在 Orchis GTK 主题下
这里得稍微介绍下 Moka 图标集。事实上,我敢打赌阅读此文的绝大部分用户正在使用这款图标。
柔和的颜色、平滑的边缘以及简洁的图标艺术设计Moka 是一款真正出色的覆盖全面的应用图标。它的兄弟 Faba 将这些特点展现得淋漓尽致,而 Moka 也将延续这些 —— 涵盖所有的系统图标、文件夹图标、面板图标,等等。
欲知在 Ubuntu 上的安装详情,请通过下面的链接访问项目官方网站。
- [下载 Moka & Faba 图标主题][7]
### Compass ###
![Compass 图标在 Numix Blue GTK 主题下](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/compass1.jpg)
Compass 图标在 Numix Blue GTK 主题下
在本文最后推荐的是 Compass最后推荐当然不是最差的意思。这款图标现在仍然保持着 2D、双色的 UI 设计风格。它也许不像本文推荐的其他图标那样鲜明但这正是它的特色。Compass 坚持这种风格并将其不断完善 —— 看看文件夹的图标就知道了!
可以通过 GNOME-Look下面有链接进行下载和安装或者通过添加 Nitrux Artwork 的 PPA 安装:
sudo add-apt-repository ppa:nitrux/nitrux-artwork
sudo apt-get update && sudo apt-get install compass-icon-theme
- [GNOME-Look.org 上的 Compass 图标主题][8]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/09/4-gorgeous-linux-icon-themes-download
作者:[Joey-Elijah Sneddon][a]
译者:[SteveArcher](https://github.com/SteveArcher)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2010/02/lucid-gets-new-icons-for-rhythmbox-ubuntuone-memenu-more
[2]:http://www.omgubuntu.co.uk/2012/08/new-icon-theme-lands-in-lubuntu-12-10
[3]:http://bokehlicia.deviantart.com/
[4]:http://www.omgubuntu.co.uk/2014/06/unity-tweak-tool-0-7-development-download
[5]:http://bokehlicia.deviantart.com/art/Captiva-Icon-Theme-479302805
[6]:http://gnome-look.org/content/show.php/Square-Beam?content=165094
[7]:http://mokaproject.com/moka-icon-theme/download/ubuntu/
[8]:http://gnome-look.org/content/show.php/Compass?content=160629

View File

@ -0,0 +1,189 @@
让下载更方便
================================================================================
下载管理器是一种专门处理文件下载的电脑程序负责优化带宽占用、让下载更有条理等任务。有些网页浏览器例如Firefox也集成了下载管理器作为功能但是它们还是没有专门的下载管理器或者浏览器插件那么专业既没有最佳地使用带宽也没有好用的文件管理功能。
对于那些经常下载的人使用一个好的下载管理器会更有帮助。它能够最大化下载速度支持断点续传以及制定下载计划让下载更安全也更有价值。下载管理器已经不像以前那么流行了但是最好的下载管理器还是很实用包括和浏览器的紧密结合、支持类似YouTube的主流网站以及更多。
Linux下有好几个非常优秀的开源下载管理器多到让人无从选择。我整理了一份我喜欢的下载管理器摘要外加一个Firefox里非常好用的下载插件。这里列出的每一个程序都是以开源许可发布的。
----------
![](http://www.linuxlinks.com/portal/content2/png/uGet.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-uGet.png)
uGet是一个轻量级、容易使用、功能完备的开源下载管理器。uGet允许用户从不同的源并行下载来加快速度把文件添加到下载队列、暂停或继续下载提供高级分类管理和浏览器集成、剪贴板监控、批量下载支持26种语言以及其他许多功能。
uGet是一个成熟的软件持续开发超过11年。在这段时间里它发展成了一个功能非常全面的下载管理器拥有一套很有价值的功能集还保持了易用性。
uGet是用C语言开发的底层使用了cURL的应用程序库libcurl。uGet有非常好的平台兼容性。它一开始是Linux系统下的项目后来被移植到Mac OS X、FreeBSD、Android和Windows平台。
#### 功能点: ####
- 容易使用
- 下载队列:可以按你希望的数量控制同时进行的下载任务数
- 断点续传
- 默认分类
- 完美实现的剪贴板监控功能
- 批量下载
- 支持从HTML文件导入下载任务
- 支持通过HTTPHTTPSFTPBitTorrent和Metalink下载
- 多线程下载也被称为分块下载每个下载任务支持最多20个线程同时连接支持自适应的分块管理意味着如果某个下载分块中断了其他连接会接手它以时刻保证最佳的下载速度
- 多镜像下载
- FTP登录和匿名FTP
- 强大的计划任务
- 通过FlashGot和FireFox集成
- Aria2插件
- 多变的主题
- 安静模式
- 键盘快捷键
- 支持命令行/终端控制
- 自动创建目录
- 下载历史管理
- 支持GnuTLS
- 支持26种语言包括阿拉伯语、白俄罗斯语、简体中文、繁体中文、捷克语、丹麦语、英语默认、法语、格鲁吉亚语、德语、匈牙利语、印尼语、意大利语、波兰语、葡萄牙语巴西、俄语、西班牙语、土耳其语、乌克兰语以及越南语。
- 网站:[ugetdm.com][1]
- 开发人员C.H. Huang and contributors
- 许可GNU LGPL 2.1
- 版本1.10.5
----------
![](http://www.linuxlinks.com/portal/content2/png/DownThemAll%21.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-DownThemAll%21.png)
DownThemAll!是一个小巧、可靠、易用的开源下载管理器/加速器是Firefox的一个扩展组件。它可以让用户下载一个页面上的所有链接和图片以及更多它可以让用户完全控制下载任务随时分配下载速度以及同时下载的任务数量。通过使用Metalink或者手动添加镜像的方式可以同时从不同的服务器下载同一个文件。
DownThemAll会根据你要下载的文件大小把它切割成不同的部分然后并行下载。
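这种分块下载的切割逻辑可以用一小段脚本来演示:按文件大小把下载任务划分成若干字节范围,下载器再对每个范围发起一个并行连接(例如 HTTP 的 Range 请求)。以下是一个纯本地的示意(函数名为假设,无需联网):

```shell
# 模拟分块下载的切割逻辑:把 size 字节的文件分成 parts 个字节范围
split_ranges() {
    local size=$1 parts=$2
    local chunk=$(( size / parts ))
    local start=0 end i
    for i in $(seq 1 "$parts"); do
        if [ "$i" -eq "$parts" ]; then
            end=$(( size - 1 ))        # 最后一块包含除不尽的余数
        else
            end=$(( start + chunk - 1 ))
        fi
        echo "part $i: bytes=$start-$end"
        start=$(( end + 1 ))
    done
}

# 把一个 1000 字节的文件切成 4 块
split_ranges 1000 4
```

实际下载时每个范围大致对应一条类似curl -r 0-249 <url>的 Range 请求,各分块下载完成后再拼接成完整的文件。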
#### 功能点: ####
- 和Firefox的完全集成
- 分块下载,允许用户下载不同的文件块,完成之后再拼接成完整的文件;这样的话当连接到一个缓慢的服务器的时候可以加快下载速度。
- 支持Metalink允许发送下载文件的多个URL以及它的校验值和其他信息到DTA
- 支持爬虫方式通过一个单独的链接遍历整个网页
- 下载过滤
- 高级重命名选项
- 暂停和继续下载任务
- 网站:[addons.mozilla.org/en-US/firefox/addon/downthemall][2]
- 开发人员Federico Parodi, Stefano Verna, Nils Maier
- 许可GNU GPL v2
- 版本2.0.17
----------
![](http://www.linuxlinks.com/portal/content2/png/JDownloader.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-JDownloader.png)
JDownloader是一个免费、开源的下载管理工具拥有一个庞大的开发者社区支持让下载更简单和快捷。用户可以开始、停止或暂停下载设置带宽限制、自动解压缩包以及更多功能。它提供了一个容易扩展的框架。
JDownloader简化了从一键下载网站下载文件的过程。它还支持从多个并行资源下载、手势识别、自动文件解压缩以及更多功能。另外还支持许多“加密链接”网站所以你只需要复制粘贴“加密的”链接然后JDownloader会处理剩下的事情。JDownloader还能导入CCF、RSDF和DLC文件。
#### 功能点: ####
- 一次下载多个文件
- 从多个连接同时下载
- JD有一个自己实现的强大的OCR模块
- 自动解压RAR压缩包包括密码搜索
- 支持主题
- 支持多国语言
- 大约110个站点以及超过300个解密插件
- 通过JDLiveHeaderScripts实现重连支持1400多种路由器
- 网页更新
- 集成包管理器支持额外模块例如WebinterfaceShutdown
- 网站:[jdownloader.org][3]
- 开发人员AppWork UG
- 许可GNU GPL v3
- 版本0.9.581
----------
![](http://www.linuxlinks.com/portal/content2/png/FreeRapidDownloader.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-FreeRapidDownloader.png)
FreeRapid Downloader是一个易用的开源下载程序支持从Rapidshare、Youtube、Facebook、Picasa和其他文件分享网站下载。它的下载引擎基于插件系统所以可以支持一些特殊的站点。
对于需要针对特定文件分享网站的下载管理器的用户来说FreeRapid Downloader是理想的选择。
FreeRapid Downloader使用Java语言编写需要至少Sun Java 7.0版本才可以运行。
#### 功能点: ####
- 容易使用
- 支持从不同服务站点并行下载
- 支持断点续传
- 支持通过代理列表下载
- 支持流视频或图片
- 下载历史
- 聪明的剪贴板监控
- 自动检查服务器文件后缀
- 自动关机选项
- 插件自动更新
- 简单验证码识别
- 支持跨平台
- 支持多国语言:英语,保加利亚语,捷克语,芬兰语,葡萄牙语,斯洛伐克语,匈牙利语,简体中文,以及其他
- 支持超过700个站点
- 网站:[wordrider.net/freerapid/][4]
- 开发人员Vity and contributors
- 许可GNU GPL v2
- 版本0.9u4
----------
![](http://www.linuxlinks.com/portal/content2/png/FlashGot.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-FlashGot.png)
FlashGot是一个Firefox和Thunderbird的免费组件旨在借助外部下载管理器来处理单个或批量“所有”和“已选”下载任务。
FlashGot把它所支持的所有下载管理器统一成Firefox中的一个下载管理器来使用。
#### 功能点: ####
- Linux下支持Aria, Axel Download Accelerator, cURL, Downloader 4 X, FatRat, GNOME Gwget, FatRat, JDownloader, KDE KGet, pyLoad, SteadyFlow, uGet, wxDFast, 和wxDownload Fast
- 支持图库功能,可以帮助把原来分散在不同页面的系列资源,整合到一个所有媒体库页面中,然后可以轻松迅速地“下载所有”
- FlashGot Link会使用默认下载管理器下载当前鼠标选中的链接
- FlashGot Selection
- FlashGot All
- FlashGot Tabs
- FlashGot Media
- 抓取页面里所有链接
- 抓取所有标签栏的所有链接
- 链接过滤(例如,只下载指定类型文件)
- 在网页上抓取点击所产生的所有链接
- 支持从大多数链接保护和文件托管服务器直接和批量下载
- 隐私选项
- 支持国际化
- 网站:[flashgot.net][5]
- 开发人员Giorgio Maone
- 许可GNU GPL v2
- 版本1.5.6.5
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20140913062041384/DownloadManagers.html
作者Frazer Kline
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://ugetdm.com/
[2]:https://addons.mozilla.org/en-US/firefox/addon/downthemall/
[3]:http://jdownloader.org/
[4]:http://wordrider.net/freerapid/
[5]:http://flashgot.net/

View File

@ -0,0 +1,65 @@
7个杀手级的开源监测工具
================================================================================
想要更清晰地了解你的网络吗?没有比这几个免费的工具更好用的了。
网络和系统监控是一个很宽泛的范畴。有监控服务器、网络设备、应用是否正常工作的方案,也有跟踪这些系统和设备的性能、提供趋势分析的解决方案。有些工具像个闹钟一样,当发现问题的时候就会报警,而另外的一些工具甚至可以在警报响起的时候触发一些动作。这里,收集了一些开源的工具,旨在解决上述的一些甚至大部分问题。
### Cacti ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_02-netmon-cacti-100448914-orig.jpg)
Cacti是一个应用广泛的绘图和趋势分析工具可以用来跟踪几乎任何可监测的指标并将其绘制成图表。从硬盘的利用率到风扇的转速在一个电脑管理系统中只要是可以被监测的指标Cacti都可以监测并快速地转换成可视化的图表。
### Nagios ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_03-netmon-nagios-100448915-orig.jpg)
Nagios是一个经典的老牌系统和网络监测工具。它运行速度快、可靠可以针对各种应用定制。Nagios对于初学者是一个挑战但是它极其复杂的配置正好也反映出它的强大因为它几乎可以适用于任何监控任务。要说缺点的话就是不怎么耐看但是其强劲的动力和可靠性弥补了这个缺点。
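Nagios 的配置虽然复杂但核心只是一组“define”块。下面是一个极简的示意片段其中主机名和 IP 均为假设值,并依赖 Nagios 自带的 linux-server 和 generic-service 模板):

    define host {
        use        linux-server
        host_name  web01           ; 假设的主机名
        address    192.168.0.10    ; 假设的 IP 地址
    }

    define service {
        use                  generic-service
        host_name            web01
        service_description  PING
        check_command        check_ping!100.0,20%!500.0,60%
    }

check_ping 后面以“!”分隔的两组参数分别是警告阈值和严重阈值,这也是 Nagios 配置繁琐但极其灵活的一个缩影。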
### Icinga ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_04-netmon-icinga-100448916-orig.jpg)
Icinga是一个正在重建的Nagios的分支它提供了一个全面的监控和警报的框架致力于设计一个像Nagios一样开放和可扩展的平台但是使用了和Nagios不一样的Web界面。Icinga 1和Nagios非常相近而Icinga 2则是完全重写的。两个版本都能很好地兼容而且Nagios用户可以很轻松地迁移到Icinga 1平台。
### NeDi ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_05-netmon-nedi-100448917-orig.jpg)
NeDi可能不如其他工具那样闻名全世界但它确实是一个跟踪网络接入的强大解决方案。它持续地遍历网络基础设施并为设备编目保持对任何事件的跟踪并且可以提供任意设备的当前位置也包括历史位置。
NeDi可以被用于定位被偷的或者是丢失掉的设备只要设备出现在网络上。它甚至可以在地图上显示所有已发现的节点并且很清晰地展示网络是如何互联到物理设备端口的。
### Observium ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_06-netmon-observium-100448918-orig.jpg)
Observium在综合的系统与网络性能趋势监测上有很好的表现它支持静态和动态发现来确认服务器和网络设备利用多种监测方法可以监测任何可用的指标。Web界面非常整洁、易用。
就如我们看到的Observium也可以在地图上显示任何被监测节点的实际位置。需要注意的是面板上还有关于活跃设备和警报的计数。
### Zabbix ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_07-netmon-zabbix-100448919-orig.jpg)
Zabbix利用一系列广泛的工具监测服务器和网络。Zabbix针对大多数操作系统提供了监控代理agent你也可以使用被动检查或者外部检查包括SNMP来监控主机和网络设备。你还会发现很多提醒和通知设施以及一个非常人性化的、可适配不同面板的Web界面此外Zabbix还拥有一些特殊的管理工具来监测Web应用和虚拟化的管理程序。
Zabbix还可以提供详细的互联图以便于我们了解某些对象是怎么连接的。这些图是可以定制的并且可以按照被监测的服务器和主机的分组来创建。
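作为参考Zabbix 代理端的核心配置通常只有寥寥几行,在 zabbix_agentd.conf 中指定服务器地址和主机名即可(以下 IP 与主机名均为假设值,仅作示意):

    Server=192.168.0.100        # 允许对本机做被动检查的Zabbix服务器地址假设值
    ServerActive=192.168.0.100  # 主动模式下上报数据的服务器地址(假设值)
    Hostname=web01              # 需与服务器端配置的主机名一致(假设值)

配置完成后重启代理服务Zabbix 服务器端再添加对应的主机即可开始采集数据。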
### Ntop ###
![](http://images.techhive.com/images/idge/imported/imageapi/2014/09/22/12/slide_08-netmon-ntop-100448920-orig.jpg)
Ntop是一个数据包嗅探工具拥有一个整洁的Web界面用来显示被监测网络的实时数据。即时的网络数据可以通过一个高级的绘图工具进行可视化。主机信息流和主机间的通信对也可以被实时地进行可视化显示。
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/2686794/asset-management/164219-7-killer-open-source-monitoring-tools.html
作者:[Paul Venezia][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.networkworld.com/author/Paul-Venezia/

View File

@ -0,0 +1,200 @@
在LVM中“录制逻辑卷快照并恢复”——第三部分
================================================================================
**LVM快照**是空间高效的、基于时间点的lvm卷副本。它只在lvm中工作并且只在源逻辑卷发生改变时才消耗快照卷的空间。如果源卷的变化达到1GB这么大快照卷同样也会产生这样大的改变。因而对空间最有效的利用方式就是总是进行小的修改。如果快照将存储空间消耗殆尽我们可以使用lvextend来扩容而如果我们需要缩减快照可以使用lvreduce。
![Take Snapshot in LVM](http://www.tecmint.com/wp-content/uploads/2014/08/Take-Snapshot-in-LVM.jpg)
在LVM中录制快照
如果我们在创建快照后意外地删除了任何文件,也没有必要担心,因为快照里包含了我们所删除的文件的原始副本。不要改变快照卷,保持其创建时的样子,因为它要用于快速恢复。
快照不能用作备份。备份是某些数据的基础副本,因此我们不能把快照当做备份的一种选择。
#### 需求 ####
注:此两篇文章如果发布后可换成发布后链接,原文在前几天更新中
- [在Linux中使用LVM创建磁盘存储 — 第一部分][1]
- [在Linux中扩展/缩减LVM — 第二部分][2]
### 我的服务器设置 ###
- 操作系统 — 安装有LVM的CentOS 6.5
- 服务器IP — 192.168.0.200
#### 步骤1 创建LVM快照 ####
首先,使用‘**vgs**’命令检查卷组中的空闲空间以创建新的快照。
# vgs
# lvs
![Check LVM Disk Space](http://www.tecmint.com/wp-content/uploads/2014/08/Check-LVM-Disk-Space.jpg)
检查LVM磁盘空间
正如你所见,在**vgs**命令的输出中我们可以看到有8GB的剩余空闲空间。所以让我们为名为**tecmint_datas**的卷创建快照。出于演示的目的我将会使用以下命令来创建1GB的快照卷。
# lvcreate -L 1GB -s -n tecmint_datas_snap /dev/vg_tecmint_extra/tecmint_datas
或者
# lvcreate --size 1G --snapshot --name tecmint_datas_snap /dev/vg_tecmint_extra/tecmint_datas
上面的两个命令干的是同一件事:
- **-s** 创建快照
- **-n** 为快照命名
![Create LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Create-LVM-Snapshot.jpg)
创建LVM快照
此处,是对上面高亮要点的说明。
- 我在此创建的快照的大小。
- 创建快照。
- 创建快照名。
- 新的快照名。
- 要创建快照的卷。
如果你想要移除快照,可以使用‘**lvremove**’命令。
# lvremove /dev/vg_tecmint_extra/tecmint_datas_snap
![Remove LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Remove-LVM-Snapshot.jpg)
移除LVM快照
现在,使用以下命令列出新创建的快照。
# lvs
![Verify LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-LVM-Snapshot.jpg)
验证LVM快照
上面你看到了吧,我们成功创建了一个快照。上面我用箭头标出了快照创建的源,它就是**tecmint_datas**。是的,因为我已经为**tecmint_datas**这个逻辑卷创建了一个快照。
![Check LVM Snapshot Space](http://www.tecmint.com/wp-content/uploads/2014/08/Check-LVM-Snapshot-Space.jpg)
检查LVM快照空间
让我们添加一些新文件到**tecmint_datas**里头。现在卷里大概有650MB左右的数据而我们的快照卷有1GB大。因此有足够的空间在快照卷里记录我们的修改。这里我们可以使用下面的命令来查看我们的快照的当前状态。
# lvs
![Check Snapshot Status](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Snapshot-Status.jpg)
检查快照状态
你看到了,现在已经用掉了**51%**的快照卷,你要对你的文件作更多的修改都没有问题。使用下面的命令来查看更多详细信息。
# lvdisplay vg_tecmint_extra/tecmint_data_snap
![View Snapshot Information](http://www.tecmint.com/wp-content/uploads/2014/08/Snapshot-Information.jpg)
查看快照信息
再来对上面图片中高亮的要点作个清楚的说明。
- 快照逻辑卷名称。
- 当前使用的卷组名。
- 读写模式下的快照卷,我们甚至可以挂载并使用该卷。
- 快照创建时间。这个很重要,因为快照将跟踪此时间之后的每个改变。
- 该快照属于tecmint_datas逻辑卷。
- 逻辑卷在线并可用。
- 我们录制快照的源卷大小。
- 写时复制表大小Cow = copy on Write这是说对tecmint_data卷所作的任何改变都会写入此快照。
- 当前使用的快照大小我们的tecmint_data有10GB而快照大小是1GB这就意味着我们的数据大概有650MB。如果tecmint_datas中的文件增长到2GB快照用量就会超过所分配的快照卷大小从而出现问题。这就意味着我们需要扩展快照逻辑卷的大小。
- 给出快照组块的大小。
现在让我们复制超过1GB的文件到**tecmint_datas**,看看会发生什么。如果你这么做了,你将会看到‘**Input/output error**’这样的错误信息,它告诉你快照的空间用完了。
![Add Files to Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Add-Files-to-Snapshot.jpg)
添加文件到快照
如果逻辑卷满了,它就会自动下线,我们就不能再使用了,就算我们去扩展快照卷的大小也不行。最好的方法就是在创建快照时,创建一个和源一样大小的快照卷。**tecmint_datas**的大小是10GB如果我们创建一个10GB大小的快照它就永远都不会像上面那样超载因为它有足够的空间来录制你的逻辑卷的快照。
#### 步骤2 在LVM中扩展快照 ####
如果我们需要在超载前扩展快照大小,我们可以使用以下命令来完成此项任务。
# lvextend -L +1G /dev/vg_tecmint_extra/tecmint_data_snap
现在那里有总计2GB大小的快照空间。
![Extend LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Extend-LVM-Snapshot.jpg)
扩展LVM快照
接下来,使用以下命令来验证新的大小和写时复制表。
# lvdisplay /dev/vg_tecmint_extra/tecmint_data_snap
要知道快照卷的大小使用**%**。
# lvs
![Check Size of Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Size-of-Snapshot.jpg)
检查快照大小
然而,如果你的快照大小和源卷一样,我们就没有必要担心这些问题了。
#### 步骤3 恢复快照或合并 ####
要恢复快照,我们首先需要卸载文件系统。
# umount /mnt/tecmint_datas/
![Un-mount File System](http://www.tecmint.com/wp-content/uploads/2014/08/Unmount-File-System.jpg)
卸载文件系统
要检查挂载点是否已卸载,可以使用下面的命令。
# df -h
![Check File System Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Mount-Points.jpg)
检查文件系统挂载点
这里,我们的挂载已经被卸载,所以我们可以继续恢复快照。要恢复快照,可以使用**lvconvert**命令。
# lvconvert --merge /dev/vg_tecmint_extra/tecmint_data_snap
![Restore LVM Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Restore-Snapshot.jpg)
恢复LVM快照
在合并完成后,快照卷将被自动移除。现在我们可以使用**df**命令来查看分区大小。
# df -Th
![Check Size of Snapshot](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Snapshot-Space.jpg)
在快照卷自动移除后,你可以用下面的命令查看逻辑卷大小。
# lvs
![Check Size of Logical Volume](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Size-of-LV.jpg)
检查逻辑卷大小
**重要**要自动扩展快照我们可以通过修改配置文件来进行。对于手动扩展我们可以使用lvextend。
使用你喜欢的编辑器打开lvm配置文件。
# vim /etc/lvm/lvm.conf
搜索单词autoextend。默认情况下该值和下图中的类似。
![LVM Configuration](http://www.tecmint.com/wp-content/uploads/2014/08/LVM-Configuration.jpg)
LVM配置
将此处的**100**修改为**75**,这样自动扩展的阈值就是**75%**而自动扩展百分比保持为20即每次自动扩容**20%**。
如果快照卷达到**75%**,它会自动为快照卷扩容**20%**。这样,我们可以自动扩容了。使用**wq!**来保存并退出。
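作为参考上面所说的设置对应/etc/lvm/lvm.conf里activation小节的这两个参数示例值即上文的75%阈值与20%扩容幅度,参数名请以你的 LVM 版本文档为准):

```
activation {
    # 快照使用率达到 75% 时触发自动扩展
    snapshot_autoextend_threshold = 75
    # 每次在当前大小基础上扩展 20%
    snapshot_autoextend_percent = 20
}
```

另外自动扩展还依赖dmeventd监控服务处于运行状态。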
这将把快照从超载下线的窘境中拯救出来也会帮助你节省更多时间。LVM 还是获得扩容能力以及精简配置、条带化、虚拟卷和精简池等众多特性的唯一方法,让我们在下一个话题中讨论它们吧。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-lvm/
作者:[Babin Lonston][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/create-lvm-storage-in-linux/
[2]:http://www.tecmint.com/extend-and-reduce-lvms-in-linux/


@ -0,0 +1,105 @@
在Linux中加密邮件
================================================================================
![Kgpg provides a nice GUI for creating and managing your encryption keys.](http://www.linux.com/images/stories/41373/fig-1-kgpg.png)
Kgpg 为创建和管理加密密钥提供了一个很好的 GUI 界面
如果你一直在考虑如何加密电子邮件那么在众多的邮件服务和邮件客户端中挑来挑去一定是件头痛的事情。可以考虑两种加密方法SSL 或 TLS 加密会保护发送到邮件服务器的登录名和密码;[GnuPG][1]是 Linux 上标准而有用的加密工具,可以加密和认证消息。如果你能自己管理 GPG 加密,不借助第三方工具,那它就够用了;其它情况我们将在稍后讨论。
即便加密了消息,你仍然会暴露在流量分析之下,因为消息头部必须是明文形式,所以你还需要像[Tor network][2]这样的工具来隐藏你在互联网上的足迹。下面我们来看看各种邮件服务和客户端,以及其中的利弊。
### 忘掉Web邮件 ###
如果你使用过GMail、Yahoo、Hotmail或者其它Web邮件提供商的邮件服务那就忘掉它们吧。你在Web浏览器里输入的任何信息都会暴露在JavaScript攻击之下而且无论服务提供商作出什么保证都无济于事。GMail、Yahoo和Hotmail均提供SSL/TLS加密来防止消息被窃听但是它们不会提供任何保护来阻碍它们自己的数据挖掘因此并不会提供端到端的加密。Yahoo和Google都声称将在明年推出端到端的加密对此我持怀疑态度因为端到端加密一旦干扰到它们的核心业务即数据挖掘它们就无利可图了。
市面上也有各式各样的第三方邮件加密服务,声称可以为所有类型的电子邮件提供安全加密,比如[Virtru][3]和[SafeMess][4]。对此我依旧表示怀疑,因为无论是谁,只要持有加密密钥就可以访问你的消息,所以你依赖的还是信任,而不是技术。
对等P2P消息可以避免集中化服务的许多缺陷。[RetroShare][5]和[Bitmessage][6]是两个流行的例子。我不知道它们是否名副其实,但这个思路肯定有可取之处。
那Android和iOS又如何呢最保险的做法是假设大部分的Android和iOS应用都有能力获取你的消息。不要只听我说在应用安装到你的设备上之前麻烦读读相关的服务条款并检查所要求的权限。即便在初次安装时它们的条款是可以接受的也要记得单方面修改条款是这个行业的惯例所以做最坏的打算才是最安全的。
### 零知识加密 ###
[Proton Mail][7]是一款全新的邮件服务声称可以实现零知识zero-knowledge的消息加密认证和消息加密是两个单独的步骤。Proton受到瑞士隐私法律的保护他们不会记录用户的活动日志。零知识加密提供了真正的安全这意味着只有你自己拥有加密密钥如果你丢了它们你的消息就无法恢复了。
也有许多加密电子邮件服务声称可以保护你的隐私。请认真阅读细则,留意其中的危险信号,比如“受限的”用户数据采集、与合作伙伴共享数据、与执法部门合作等。这些条款暗示它们会收集和共享用户数据,有权限获取你的加密密钥,并能读取你的消息。
### Linux邮件客户端 ###
一款独立的开源邮件客户端比如Mutt、Claws、Evolution、Sylpheed和Alpine再加上你自己控制的GnuPG密钥能给你最大程度的保护。建立更安全的电子邮件和Web浏览环境的最容易的方式是运行TAILS这个live Linux发行版。详情查看[Protect Yourself Online With Tor, TAILS, and Debian][8]。
无论你使用的是TAILS还是一款标准Linux发行版管理GnuPG的方法是相同的所以下面来学习如何使用GnuPG加密消息
### 使用GnuPG ###
首先熟悉一下相关术语。OpenPGP是一种开放的电子邮件加密和认证协议基于菲利普·齐默曼的Pretty Good Privacy (PGP)。GNU Privacy Guard (GnuPG或GPG)是OpenPGP的GPL实现。GnuPG使用公钥加密算法也就是说会生成一对密钥一个任何人都可以用来加密发送给你的消息的公钥和一个只有你自己拥有、用来解密消息的私钥。GnuPG执行两个相互独立的功能对消息进行数字签名以证明消息来自你以及加密消息。任何人都可以读到你签过名的消息但只有那些与你交换了公钥的人才可以读取加密消息。切记千万不要与他人分享你的私钥只能分享公钥。
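下面用一个一次性的临时密钥环,把“生成密钥、用公钥加密、用私钥解密”的流程完整走一遍(演示用的假设示例,邮箱地址并不真实,需要 gpg 2.1 及以上版本):

```shell
# 演示 GnuPG 加密/解密流程(使用临时密钥环,不会影响 ~/.gnupg
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# 以无密码短语的方式快速生成一对演示密钥(邮箱为假设的示例)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key alice@example.com default default never

echo "hello" > message.txt

# 用 Alice 的公钥加密消息
gpg --batch --yes --trust-model always -r alice@example.com \
    -e -o message.txt.gpg message.txt

# 持有对应私钥的一方解密消息
gpg --batch --quiet --pinentry-mode loopback --passphrase '' -d message.txt.gpg
```

真实场景中,收件人地址应换成对方公钥的 UID并事先通过密钥服务器或其它可信渠道交换公钥。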
Seahorse是GnuPG对应的GNOME图形化前端KGpg是KDE图形化的GnuPG工具。
现在我们执行生成和管理GunPG密钥的基本步骤。这个命令生成一个新的密钥
$ gpg --gen-key
这个过程有许多步骤;对于大部分人来说,只需要回答所有的问题,遵循默认设置就好。生成密钥时设置的密码短语,要记下来并保存在一个安全的地方,因为如果你丢掉了它,你就不能解密任何消息了。任何关于“不要写下密码”的建议都是错误的。我们中的大部分人要记住许多登录名和密码,包括那些几乎从来不会用到的,所以全部记住它们是不现实的。你知道当人们不写下他们的密码时会发生什么吗?他们会选择简单的密码并不断重复使用。你存储在电脑里的任何东西都有可能被攻击窃取;而一个保存在上锁柜子里的小本子,除非遭到物理入侵,否则是无法通过网络渗透获取的,当然,入侵者还得知道去哪里找它。
至于如何在邮件客户端中配置新密钥,由于每个客户端都不相同,就只能留给你们自己去弄清楚了。你可以按照如下操作列出你的密钥:
$ gpg --list-keys
/home/carla/.gnupg/pubring.gpg
------------------------------
pub 2048R/587DD0F5 2014-08-13
uid Carla Schroder (my gpg key)
sub 2048R/AE05E1E4 2014-08-13
这能快速地获知像密钥的位置、名称也就是UID等必要信息。假设你想要把公钥上传到密钥服务器可以参考实例操作
$ gpg --send-keys 'Carla Schroder' --keyserver http://example.com
当你把新生成的密钥上传到公钥服务器时你也应该生成一个撤销证书。不要推迟到以后做当你生成新密钥时就一并生成它。你可以给它取任意的名称比如使用一个像mycodeproject.asc这样的描述性名称来代替revoke.asc
$ gpg --output revoke.asc --gen-revoke 'Carla Schroder'
如果你的密钥被泄露了你可以通过向密钥环keyring导入撤销证书来撤销它
$ gpg --import ~/.gnupg/revoke.asc
然后生成并上传一个新的密钥来取代它。当它们更新到密钥数据库时,所有使用旧密钥的用户都会被通知。
你必须像保护私钥一样保护撤销证书。将它拷贝到CD或USB存储器中并加锁然后从电脑中删除。这是明文密钥所以你甚至可以将它打印出来。
如果你需要一份复制粘贴的密钥比如在允许将密钥粘贴到网页表格中的公用keyring中或者是想将公钥发布到个人站点上那么你必须生成一份公钥的ASCII-armored版本
$ gpg --output carla-pubkey.asc --export -a 'Carla Schroder'
这会生成可见的明文公钥,就像下面这个小例子:
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQENBFPrn4gBCADeEXKdrDOV3AFXL7QQQ+i61rMOZKwFTxlJlNbAVczpawkWRC3l
IrWeeJiy2VyoMQ2ZXpBLDwGEjVQ5H7/UyjUsP8h2ufIJt01NO1pQJMwaOMcS5yTS
[...]
I+LNrbP23HEvgAdNSBWqa8MaZGUWBietQP7JsKjmE+ukalm8jY8mdWDyS4nMhZY=
=QL65
-----END PGP PUBLIC KEY BLOCK-----
相信上面的教程应该使你学会如何使用GnuPG。如果不够[The GnuPG manuals][9]上有使用GnuPG和相关全部配置的详细信息。
--------------------------------------------------------------------------------
via: http://www.linux.com/learn/tutorials/784165-how-to-encrypt-email-in-linux
作者:[Carla Schroder][a]
译者:[KayGuoWhu](https://github.com/KayGuoWhu)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linux.com/component/ninjaboard/person/3734
[1]:http://www.openpgp.org/members/gnupg.shtml
[2]:https://www.torproject.org/
[3]:https://www.virtru.com/
[4]:https://www.safemess.com/
[5]:http://retroshare.sourceforge.net/
[6]:https://bitmessage.org/
[7]:https://protonmail.ch/
[8]:http://www.linux.com/learn/docs/718398-protect-yourself-online-with-tor-+tails-and-debian
[9]:https://www.gnupg.org/documentation/manuals.html


@ -1,173 +0,0 @@
15个关于Linuxcd命令的练习例子
===
在Linux中**cd改变目录**命令,是对新手和系统管理员来说,最重要最常用的命令。对管理无屏幕服务器的管理员,‘**cd**‘是引导进入目录,检查日志,执行程序/应用软件/脚本和其余每个任务的唯一方法。对新手来说,是他们必须自己动手学习的最初始命令
![15 cd command examples in linux](http://www.tecmint.com/wp-content/uploads/2014/08/cd-command-in-linux.png)
Linux中15个cd命令举例
所以,请用心,我们在这会带给你**15**个基础的‘**cd**‘命令,它们富有技巧和捷径,学会使用这些了解到的技巧,会大大减少你在终端上花费的努力和时间
### 课程细节 ###
- 命令名称cd
- 代表:切换目录
- 使用平台所有Linux发行版本
- 执行方式:命令行
- 权限:访问自己的目录或者其余指定目录
- 级别:基础/初学者
1. 从当前目录切换到/usr/local
avi@tecmint:~$ cd /usr/local
avi@tecmint:/usr/local$
2. 使用绝对路径,从当前目录切换到/usr/local/lib
avi@tecmint:/usr/local$ cd /usr/local/lib
avi@tecmint:/usr/local/lib$
3. 使用相对路径,从当前路径切换到/usr/local/lib
avi@tecmint:/usr/local$ cd lib
avi@tecmint:/usr/local/lib$
4. **a**切换当前目录到上级目录
  avi@tecmint:/usr/local/lib$ cd -
/usr/local
avi@tecmint:/usr/local$
4. **b**切换当前目录到上级目录
avi@tecmint:/usr/local/lib$ cd ..
avi@tecmint:/usr/local$
5. 显示我们最后一个离开的工作目录(使用‘-’选项)
avi@tecmint:/usr/local$ cd --
/home/avi
6. 从当前目录向上级返回两层
avi@tecmint:/usr/local$ cd ../ ../
avi@tecmint:/usr$
7. 从任何目录返回到用户home目录
avi@tecmint:/usr/local$ cd ~
avi@tecmint:~$
or
avi@tecmint:/usr/local$ cd
avi@tecmint:~$
8. 切换工作目录到当前工作目录(通常情况下看上去没啥用)
avi@tecmint:~/Downloads$ cd .
avi@tecmint:~/Downloads$
or
avi@tecmint:~/Downloads$ cd ./
avi@tecmint:~/Downloads$
9. 你当前目录是“/usr/local/lib/python3.4/dist-packages”现在要切换到“home/avi/Desktop/”,要求:一行命令,通过向上一直切换直到‘/’,然后使用绝对路径
  avi@tecmint:/usr/local/lib/python3.4/dist-packages$ cd ../../../../../home/avi/Desktop/
avi@tecmint:~/Desktop$
10. 从当前工作目录切换到/var/www/html要求不要将命令打完整使用TAB
avi@tecmint:/var/www$ cd /v<TAB>/w<TAB>/h<TAB>
avi@tecmint:/var/www/html$
11. 从当前目录切换到/etc/v__ _啊呀你竟然忘了目录的名字但是你又不想用TAB
avi@tecmint:~$ cd /etc/v*
avi@tecmint:/etc/vbox$
**请注意:**如果只有一个目录以‘**v**‘开头,这将会移动到‘**vbox**‘。如果有很多目录以‘**v**‘开头,而且命令行中没有提供更多的标准,这将会移动到第一个以‘**v**‘开头的目录(按照他们在标准字典里字母存在的顺序)
12. 你想切换到用户‘**av**不确定是avi还是avt目录不用**TAB**
avi@tecmint:/etc$ cd /home/av?
avi@tecmint:~$
13. Linux下的pushd和popd
pushd和popd是bash以及其他一些shell提供的命令它们能够把当前工作目录的位置保存到内存中的目录栈再从目录栈中取出目录作为当前目录同时完成目录切换
  avi@tecmint:~$ pushd /var/www/html
/var/www/html ~
avi@tecmint:/var/www/html$
上面的命令保存当前目录到内存然后切换到要求的目录。一旦执行了popd它会从内存中取出保存的目录位置作为当前目录
  avi@tecmint:/var/www/html$ popd
~
avi@tecmint:~$
14. 切换到带有空格的目录
  avi@tecmint:~$ cd test\ tecmint/
avi@tecmint:~/test tecmint$
or
avi@tecmint:~$ cd 'test tecmint'
avi@tecmint:~/test tecmint$
or
avi@tecmint:~$ cd "test tecmint"/
avi@tecmint:~/test tecmint$
15. 从当前目录切换到下载目录,然后列出它所包含的内容(使用一行命令)
  avi@tecmint:/usr$ cd ~/Downloads && ls
.
service_locator_in.xls
sources.list
teamviewer_linux_x64.deb
tor-browser-linux64-3.6.3_en-US.tar.xz
.
...
我们尝试用最简洁的语言和一如既往的友好方式让你了解Linux的这些操作
这就是所有内容。我很快会带着另一个有趣的主题回来的。在此之前保持和Tecmint的联系别忘了在下面给我们提供你宝贵的反馈和评论
---
via: http://www.tecmint.com/cd-command-in-linux/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[su-kaiyao](https://github.com/su-kaiyao)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/


@ -0,0 +1,466 @@
Linux 教程:安装 Ansible 配置管理和 IT 自动化工具
================================================================================
![](http://s0.cyberciti.org/uploads/cms/2014/08/ansible_core_circle.png)
今天我来谈谈 ansible一个由 Python 编写的强大的配置管理解决方案。尽管市面上已经有很多可供选择的配置管理解决方案,但他们各有优劣,而 ansible 的特点就在于它的简洁。让 ansible 在主流的配置管理系统中与众不同的一点便是,它并不需要你在想要配置的每个节点上安装自己的组件。同时提供的一个优点在于,如果需要的话,你可以在不止一个地方控制你的整个基础结构。最后一点是它的正确性,或许这里有些争议,但是我认为在大多数时候这仍然可以作为它的一个优点。说得足够多了,让我们来着手在 RHEL/CentOS 和基于 Debian/Ubuntu 的系统中安装和配置 Ansible.
### 准备工作 ###
1. 发行版RHEL/CentOS/Debian/Ubuntu Linux
1. Jinja2Python 的一个对设计师友好的现代模板语言
1. PyYAMLPython 的一个 YAML 编码/反编码函数库
1. paramiko纯 Python 编写的 SSHv2 协议函数库
1. httplib2一个功能全面的 HTTP 客户端函数库
1. 本文中列出的绝大部分操作已经假设你将在 bash 或者其他任何现代的 shell 中以 root 用户执行。
### Ansible 如何工作 ###
Ansible 工具并不使用守护进程,它也不需要任何额外的自定义安全架构,因此它的部署可以说是十分容易。你需要的全部东西便是 SSH 客户端和服务器了。
+-----------------+ +---------------+
|安装了 Ansible 的| SSH | 文件服务器1 |
|Linux/Unix 工作站|<------------------>| 数据库服务器2 | 在本地或远程
+-----------------+ 模块 | 代理服务器3 | 数据中心的
192.168.1.100 +---------------+ Unix/Linux 服务器
其中:
1. 192.168.1.100 - 在你本地的工作站或服务器上安装 Ansible。
1. 文件服务器1到代理服务器3 - 使用 192.168.1.100 和 Ansible 来自动管理所有的服务器。
1. SSH - 在 192.168.1.100 和本地/远程的服务器之间设置 SSH 密钥。
### Ansible 安装教程 ###
ansible 的安装轻而易举,许多发行版的第三方软件仓库中都有现成的软件包,可以直接安装。其他简单的安装方法包括使用 pip 安装它,或者从 github 里获取最新的版本。若想使用你的软件包管理器安装,在[基于 RHEL/CentOS Linux 的系统里你很可能需要 EPEL 仓库][1]。
#### 在基于 RHEL/CentOS Linux 的系统中安装 ansible ####
输入如下 [yum 命令][2]:
$ sudo yum install ansible
#### 在基于 Debian/Ubuntu Linux 的系统中安装 ansible ####
输入如下 [apt-get 命令][3]:
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
#### 使用 pip 安装 ansible ####
[pip 命令是一个安装和管理 Python 软件包的工具][4],比如它能管理 Python Package Index 中的那些软件包。如下方式在 Linux 和类 Unix 系统中通用:
$ sudo pip install ansible
#### 从源代码安装最新版本的 ansible ####
你可以通过如下命令从 github 中安装最新版本:
$ cd ~
$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ source ./hacking/env-setup
当你从一个 git checkout 中运行 ansible 的时候,请记住你每次用它之前都需要设置你的环境,或者你可以把这个设置过程加入你的 bash rc 文件中:
# 加入 BASH RC
$ echo "export ANSIBLE_HOSTS=~/ansible_hosts" >> ~/.bashrc
$ echo "source ~/ansible/hacking/env-setup" >> ~/.bashrc
ansible 的 hosts 文件包括了一系列它能操作的主机。默认情况下 ansible 通过路径 /etc/ansible/hosts 查找 hosts 文件,不过这个行为也是可以更改的,这样当你想操作不止一个 ansible 或者针对不同的数据中心的不同客户操作的时候也是很方便的。你可以通过命令行参数 -i 指定 hosts 文件:
$ ansible all -m shell -a "hostname" --ask-pass -i /etc/some/other/dir/ansible_hosts
不过我更倾向于使用一个环境变量,这可以在你想要通过 source 一个不同的文件来切换工作目标的时候起到作用。这里的环境变量是 $ANSIBLE_HOSTS可以这样设置
$ export ANSIBLE_HOSTS=~/ansible_hosts
一旦所有需要的组件都已经安装完毕,而且你也准备好了你的 hosts 文件,你就可以来试一试它了。为了快速测试,这里我把 127.0.0.1 写到了 ansible 的 hosts 文件里:
$ echo "127.0.0.1" > ~/ansible_hosts
现在来测试一个简单的 ping
$ ansible all -m ping
或者提示 ssh 密码:
$ ansible all -m ping --ask-pass
我在刚开始的设置中遇到过几次问题,因此这里强烈推荐为 ansible 设置 SSH 公钥认证。不过在刚刚的测试中我们使用了 --ask-pass在一些机器上你会需要[安装 sshpass][5] 或者像这样指定 -c paramiko
$ ansible all -m ping --ask-pass -c paramiko
当然你也可以[安装 sshpass][6],然而 sshpass 并不总是在标准的仓库中提供,因此 paramiko 可能更为简单。
### 设置 SSH 公钥认证 ###
于是我们有了一份配置以及一些基础的其他东西。现在让我们来做一些实用的事情。ansible 的强大很大程度上体现在 playbooks 上,后者基本上就是一些写好的 ansible 脚本(大部分来说),不过在制作一个 playbook 之前,我们将先从一些一句话脚本开始。现在让我们创建和配置 SSH 公钥认证,以便省去 -c 和 --ask-pass 选项:
$ ssh-keygen -t rsa
样例输出:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mike/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mike/.ssh/id_rsa.
Your public key has been saved in /home/mike/.ssh/id_rsa.pub.
The key fingerprint is:
94:a0:19:02:ba:25:23:7f:ee:6c:fb:e8:38:b4:f2:42 mike@ultrabook.linuxdork.com
The key's randomart image is:
+--[ RSA 2048]----+
|... . . |
|. . + . . |
|= . o o |
|.* . |
|. . . S |
| E.o |
|.. .. |
|o o+.. |
| +o+*o. |
+-----------------+
现在显然有很多种方式来把它放到远程主机上应该的位置。不过既然我们正在使用 ansible就用它来完成这个操作吧
$ ansible all -m copy -a "src=/home/mike/.ssh/id_rsa.pub dest=/tmp/id_rsa.pub" --ask-pass -c paramiko
样例输出:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"dest": "/tmp/id_rsa.pub",
"gid": 100,
"group": "users",
"md5sum": "bafd3fce6b8a33cf1de415af432774b4",
"mode": "0644",
"owner": "mike",
"size": 410,
"src": "/home/mike/.ansible/tmp/ansible-tmp-1407008170.46-208759459189201/source",
"state": "file",
"uid": 1000
}
下一步,把公钥文件添加到远程服务器里。输入:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko
样例输出:
SSH password:
127.0.0.1 | FAILED | rc=1 >>
/bin/sh: /root/.ssh/authorized_keys: Permission denied
矮油,我们需要用 root 来执行这个命令,所以还是加上一个 -u 参数吧:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko -u root
样例输出:
SSH password:
127.0.0.1 | success | rc=0 >>
请注意,我刚才这是想要演示通过 ansible 来传输文件的操作。事实上 ansible 有一个更加方便的内置 SSH 密钥管理支持:
$ ansible all -m authorized_key -a "user=mike key='{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}' path=/home/mike/.ssh/authorized_keys manage_dir=no" --ask-pass -c paramiko
样例输出:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"gid": 100,
"group": "users",
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq+Z8/usprXk0aCAPyP0TGylm2MKbmEsHePUOd7p5DO1QQTHak+9gwdoJJavy0yoUdi+C+autKjvuuS+vGb8+I+8mFNu5CvKiZzIpMjZvrZMhHRdNud7GuEanusTEJfi1pUd3NA2iXhl4a6S9a/4G2mKyf7QQSzI4Z5ddudUXd9yHmo9Yt48/ASOJLHIcYfSsswOm8ux1UnyeHqgpdIVONVFsKKuSNSvZBVl3bXzhkhjxz8RMiBGIubJDBuKwZqNSJkOlPWYN76btxMCDVm07O7vNChpf0cmWEfM3pXKPBq/UBxyG2MgoCGkIRGOtJ8UjC/daadBUuxg92/u01VNEB mike@ultrabook.linuxdork.com",
"key_options": null,
"keyfile": "/home/mike/.ssh/authorized_keys",
"manage_dir": false,
"mode": "0600",
"owner": "mike",
"path": "/home/mike/.ssh/authorized_keys",
"size": 410,
"state": "file",
"uid": 1000,
"unique": false,
"user": "mike"
}
现在这些密钥已经设置好了。我们来试着随便跑一个命令,比如 hostname希望我们不会被提示要输入密码
$ ansible all -m shell -a "hostname" -u root
样例输出:
127.0.0.1 | success | rc=0 >>
成功!!!现在我们可以用 root 来执行命令,并且不会被输入密码的提示干扰了。我们现在可以轻易地配置任何在 ansible hosts 文件中的主机了。让我们把 /tmp 中的公钥文件删除:
$ ansible all -m file -a "dest=/tmp/id_rsa.pub state=absent" -u root
样例输出:
127.0.0.1 | success >> {
"changed": true,
"path": "/tmp/id_rsa.pub",
"state": "absent"
}
下面我们来做一些更复杂的事情,我要确定一些软件包已经安装了,并且已经是最新的版本:
$ ansible all -m zypper -a "name=apache2 state=latest" -u root
样例输出:
127.0.0.1 | success >> {
"changed": false,
"name": "apache2",
"state": "latest"
}
很好,我们刚才放在 /tmp 中的公钥文件已经消失了,而且我们已经安装好了最新版的 apache。下面我们来看看前面命令中的 -m zypper一个让 ansible 非常灵活,并且给了 playbooks 更多能力的功能。如果你不使用 openSuse 或者 Suse enterprise 你可能还不熟悉 zypper, 它基本上就是 suse 世界中相当于 yum 的存在。在上面所有的例子中,我的 hosts 文件中都只有一台机器。除了最后一个命令外,其他所有命令都应该在任何标准的 *nix 系统和标准的 ssh 配置中使用,这造成了一个问题。如果我们想要同时管理多种不同的机器呢?这便是 playbooks 和 ansible 的可配置性闪闪发光的地方了。首先我们来少许修改一下我们的 hosts 文件:
$ cat ~/ansible_hosts
样例输出:
[RHELBased]
10.50.1.33
10.50.1.47
[SUSEBased]
127.0.0.1
首先,我们创建了一些分组的服务器,并且给了他们一些有意义的标签。然后我们来创建一个为不同类型的服务器执行不同操作的 playbook。你可能已经发现这个 yaml 的数据结构和我们之前运行的命令行语句中的相似性了。简单来说,-m 是一个模块,而 -a 用来提供模块参数。在 YAML 表示中你可以先指定模块,然后插入一个冒号 :,最后指定参数。
---
- hosts: SUSEBased
remote_user: root
tasks:
- zypper: name=apache2 state=latest
- hosts: RHELBased
remote_user: root
tasks:
- yum: name=httpd state=latest
现在我们有一个简单的 playbook 了,我们可以这样运行它:
$ ansible-playbook testPlaybook.yaml -f 10
样例输出:
PLAY [SUSEBased] **************************************************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [zypper name=apache2 state=latest] **************************************
ok: [127.0.0.1]
PLAY [RHELBased] **************************************************************
GATHERING FACTS ***************************************************************
ok: [10.50.1.33]
ok: [10.50.1.47]
TASK: [yum name=httpd state=latest] *******************************************
changed: [10.50.1.33]
changed: [10.50.1.47]
PLAY RECAP ********************************************************************
10.50.1.33 : ok=2 changed=1 unreachable=0 failed=0
10.50.1.47 : ok=2 changed=1 unreachable=0 failed=0
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
注意,你会看到 ansible 联系到的每一台机器的输出。-f 参数让 ansible 在多台主机上同时运行指令。除了指定全部主机,或者一个主机分组的名字以外,你还可以把导入 ssh 公钥的操作从命令行里转移到 playbook 中,这将在设置新主机的时候提供很大的方便,甚至让新主机直接可以运行一个 playbook。为了演示我们把我们之前的公钥例子放进一个 playbook 里:
---
- hosts: SUSEBased
remote_user: mike
sudo: yes
tasks:
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
- hosts: RHELBased
remote_user: mdonlon
sudo: yes
tasks:
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
除此之外还有很多可以做的事情,比如在启动的时候把公钥配置好,或者引入其他的流程来让你按需配置一些机器。不过只要 SSH 被配置成接受密码登陆,这些几乎可以用在所有的流程中。在你准备开始写太多 playbook 之前,另一个值得考虑的事情是,代码管理可以有效节省你的时间。机器需要不断变化,然而你并不需要在每次机器发生变化时都重新写一个 playbook只需要更新相关的部分并提交这些修改。与此相关的另一个好处是如同我之前所述你可以从不同的地方管理你的整个基础结构。你只需要将你的 playbook 仓库 git clone 到新的机器上,就完成了管理所有东西的全部设置流程。
#### 现实中的 ansible 例子 ####
我知道很多用户经常使用 pastebin 这样的服务,以及很多公司基于显而易见的理由配置了他们内部使用的类似东西。最近,我遇到了一个叫做 showterm 的程序,巧合之下我被一个客户要求配置它用于内部使用。这里我不打算赘述这个应用程序的细节,不过如果你感兴趣的话,你可以使用 Google 搜索 showterm。作为一个合理的现实中的例子我将会试图配置一个 showterm 服务器,并且配置使用它所需要的客户端应用程序。在这个过程中我们还需要一个数据库服务器。现在我们从配置客户端开始:
---
- hosts: showtermClients
remote_user: root
tasks:
- yum: name=rubygems state=latest
- yum: name=ruby-devel state=latest
- yum: name=gcc state=latest
- gem: name=showterm state=latest user_install=no
这部分很简单。下面是主服务器:
---
- hosts: showtermServers
remote_user: root
tasks:
- name: ensure packages are installed
yum: name={{item}} state=latest
with_items:
- postgresql
- postgresql-server
- postgresql-devel
- python-psycopg2
- git
- ruby21
- ruby21-passenger
- name: showterm server from github
git: repo=https://github.com/ConradIrwin/showterm.io dest=/root/showterm
- name: Initdb
command: service postgresql initdb
creates=/var/lib/pgsql/data/postgresql.conf
- name: Start PostgreSQL and enable at boot
service: name=postgresql
enabled=yes
state=started
- gem: name=pg state=latest user_install=no
handlers:
- name: restart postgresql
service: name=postgresql state=restarted
- hosts: showtermServers
remote_user: root
sudo: yes
sudo_user: postgres
vars:
dbname: showterm
dbuser: showterm
dbpassword: showtermpassword
tasks:
- name: create db
postgresql_db: name={{dbname}}
- name: create user with ALL priv
postgresql_user: db={{dbname}} name={{dbuser}} password={{dbpassword}} priv=ALL
- hosts: showtermServers
remote_user: root
tasks:
- name: database.yml
template: src=database.yml dest=/root/showterm/config/database.yml
- hosts: showtermServers
remote_user: root
tasks:
- name: run bundle install
shell: bundle install
args:
chdir: /root/showterm
- hosts: showtermServers
remote_user: root
tasks:
- name: run rake db tasks
shell: 'bundle exec rake db:create db:migrate db:seed'
args:
chdir: /root/showterm
- hosts: showtermServers
remote_user: root
tasks:
- name: apache config
template: src=showterm.conf dest=/etc/httpd/conf.d/showterm.conf
还凑合。请注意从某种意义上来说这是一个任意选择的程序然而我们现在已经可以持续地在任意数量的机器上部署它了这便是配置管理的好处。此外在大多数情况下这里的定义语法几乎是不言而喻的wiki 页面也就不需要加入太多细节了。当然在我的观点里,一个有太多细节的 wiki 页面绝不会是一件坏事。
### 扩展配置 ###
我们并没有涉及到这里所有的细节。Ansible 有许多选项可以用来配置你的系统。你可以在你的 hosts 文件中内嵌变量,而 ansible 将会把它们应用到远程节点。如:
[RHELBased]
10.50.1.33 http_port=443
10.50.1.47 http_port=80 ansible_ssh_user=mdonlon
[SUSEBased]
127.0.0.1 http_port=443
尽管这对于快速配置来说已经非常方便,你还可以将变量分成存放在 yaml 格式的多个文件中。在你的 hosts 文件路径里,你可以创建两个子目录 group_vars 和 host_vars。在这些路径里放置的任何文件只要能对得上一个主机分组的名字或者你的 hosts 文件中的一个主机名,它们都会在运行时被插入进来。所以前面的一个例子将会变成这样:
ultrabook:/etc/ansible # pwd
/etc/ansible
ultrabook:/etc/ansible # tree
.
├── group_vars
│ ├── RHELBased
│ └── SUSEBased
├── hosts
└── host_vars
├── 10.50.1.33
└── 10.50.1.47
----------
2 directories, 5 files
ultrabook:/etc/ansible # cat hosts
[RHELBased]
10.50.1.33
10.50.1.47
----------
[SUSEBased]
127.0.0.1
ultrabook:/etc/ansible # cat group_vars/RHELBased
ultrabook:/etc/ansible # cat group_vars/SUSEBased
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.33
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.47
---
http_port: 80
ansible_ssh_user: mdonlon
### 改善 Playbooks ###
组织 playbooks 也已经有很多种现成的方式。在前面的例子中我们用了一个单独的文件,因此这方面被大幅地简化了。组织这些文件的一个常用方式是创建角色。简单来说,你将一个主文件加载为你的 playbook而它将会从其它文件中导入所有的数据这些其他的文件便是角色。举例来说如果你有了一个 wordpress 网站,你需要一个 web 前端和一个数据库。web 前端将包括一个 web 服务器,应用程序代码,以及任何需要的模块。数据库有时候运行在同一台主机上,有时候运行在远程的主机上,这时候角色就可以派上用场了。你创建一个目录,并对每个角色创建对应的小 playbook。在这个例子中我们需要一个 apache 角色mysql 角色wordpress 角色mod_php以及 php 角色。最大的好处是并不是每个角色都必须被应用到同一台机器上。在这个例子中mysql 可以被应用到一台单独的机器。这同样为代码重用提供了可能,比如你的 apache 角色还可以被用在 python 和其他相似的 php 应用程序中。展示这些已经有些超出了本文的范畴,而且做一件事总是有很多不同的方式,我建议搜索一些 ansible 的 playbook 例子。有很多人在 github 上贡献代码,当然还有其他一些网站。
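按照这种思路,一个基于角色的主 playbook 大致如下(角色名和主机分组名都是假设的示例):

```yaml
---
# site.yml: 主 playbook, 只负责把各个角色应用到对应的主机分组
- hosts: webservers
  remote_user: root
  roles:
    - apache
    - mod_php
    - wordpress

- hosts: dbservers
  remote_user: root
  roles:
    - mysql
```

每个角色对应 roles/&lt;角色名&gt;/tasks/main.yml 等一组目录,这样 mysql 角色既可以和 web 角色应用到同一台机器,也可以单独应用到数据库主机。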
### 模块 ###
在 ansible 中对于所有完成的工作幕后的工作都是由模块主导的。Ansible 有一个非常丰富的内置模块仓库其中包括软件包安装文件传输以及我们在本文中做的所有事情。但是对一部分人来说这些并不能满足他们的配置需求ansible 也提供了方法让你添加自己的模块。Ansible 的 API 有一个非常棒的事情是,它并没有限制模块也必须用编写它的语言 Python 来编写也就是说你可以用任何语言来编写模块。Ansible 模块通过传递 JSON 数据来工作,因此你只需要用想用的语言生成一段 JSON 数据。我很确定任何脚本语言都可以做到这一点,因此你现在就可以开始写点什么了。在 Ansible 的网站上有很多的文档,包括模块的接口是如何工作的,以及 Github 上也有很多模块的例子。注意一些小众的语言可能没有很好的支持,不过那只可能是因为没有多少人在用这种语言贡献代码。试着写点什么,然后把你的结果发布出来吧!
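为了说明这一点,下面是一个用 shell 写的最小模块示意(假设的例子,并非官方模块):模块只需要向标准输出打印一段合法的 JSON。

```shell
#!/bin/sh
# 最小的 ansible 模块示意: 收集主机名并以 JSON 形式返回
# "changed": false 表示本次运行没有改变任何系统状态
host_name=$(uname -n)
printf '{"changed": false, "ansible_facts": {"demo_hostname": "%s"}}\n' "$host_name"
```

把这样的脚本放进与 playbook 同级的 library/ 目录,即可被 ansible 当作模块调用library/ 是 ansible 查找自定义模块的约定目录)。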
### 总结 ###
总的来说,虽然在配置管理方面已经有很多解决方案,我希望本文能显示出 ansible 简单的设置过程,在我看来这是它最重要的一个要点。请注意,因为我试图展示做一件事的不同方式,所以并不是前文中所有的例子都是适用于你的个别环境或者对于普遍情况的最佳实践。这里有一些链接能让你对 ansible 的了解进入下一个层次:
- [Ansible 项目][7]主页.
- [Ansible 项目文档][8].
- [多级环境与 Ansible][9].
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/python-tutorials/linux-tutorial-install-ansible-configuration-management-and-it-automation-tool/
作者:[Nix Craft][a]
译者:[felixonmars](https://github.com/felixonmars)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.cyberciti.biz/tips/about-us
[1]:http://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/
[2]:http://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
[3]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
[4]:http://www.cyberciti.biz/faq/debian-ubuntu-centos-rhel-linux-install-pipclient/
[5]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
[6]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
[7]:http://www.ansible.com/
[8]:http://docs.ansible.com/
[9]:http://rosstuck.com/multistage-environments-with-ansible/


@ -0,0 +1,108 @@
6个有趣的命令行工具(终端中的乐趣) - 第二部分
================================================================================
之前, 我们给出了一些有关有趣的 Linux 命令行命令的文章, 这些文章告诉我们, Linux 并不像看起来那样复杂, 如果我们知道如何使用的话, 反而会非常有趣. Linux 命令行可以简洁而完美地执行一些复杂的任务, 并且十分有趣.
- [Linux命令及Linux终端的20个趣事][3]
- [Fun in Linux Terminal Play with Word and Character Counts][2]
![Funny Linux Commands](http://www.tecmint.com/wp-content/uploads/2014/08/Funny-Linux-Commands.png)
有趣的 Linux 命令
之前的一篇文章包含了 20 个有趣的 Linux 命令/脚本(和子命令), 得到了读者的高度赞扬. 而另一篇文章则包含了一些处理文字文件, 单词和字符串的命令/脚本和改进, 虽然没有之前那篇文章那么受欢迎.
这篇文章介绍了一些新的有趣的命令和单行脚本.
### 1. pv 命令 ###
你也许在电影里见过模拟打字的效果, 文字就好像被实时敲出来一样. 如果我们能在终端里实现这样的效果, 那不是很好吗?
这是可以做到的. 我们可以通过 '**apt**' 或者 '**yum**' 工具在 Linux 系统上安装 '**pv**' 命令. 安装命令如下:
# yum install pv [在基于 RedHat 的系统上]
# sudo apt-get install pv [在基于 Debian 的系统上]
'**pv**' 命令安装成功之后, 我们尝试输入下面的命令来在终端查看实时文字输出的效果.
$ echo "Tecmint[dot]com is a community of Linux Nerds and Geeks" | pv -qL 10
![pv command in action](http://www.tecmint.com/wp-content/uploads/2014/08/pv-command.gif)
正在运行的 pv 命令
**注意**: '**q**' 选项表示“安静”(不输出其他信息), '**L**' 选项用于限制每秒输出的字节数. 调整这个整数值, 就可以改变文字模拟输出的速度.
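顺带一提, 如果一时装不上 pv, 用纯 POSIX shell 也能模拟出类似的打字机效果(演示脚本, 并非 pv 本身的功能):

```shell
# 逐字符输出一行文字, 模拟打字机效果
text="Tecmint[dot]com is a community of Linux Nerds and Geeks"
i=1
while [ $i -le ${#text} ]; do
    printf '%s' "$(printf '%s' "$text" | cut -c $i)"   # 每次只输出第 i 个字符
    sleep 0.05                                         # 控制“打字”速度
    i=$((i + 1))
done
printf '\n'
```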
### 2. toilet 命令 ###
用单行命令 '**toilet**' 在终端里显示有边框的文字是一个不错的主意. 同样, 你必须保证 '**toilet**' 已经安装在你的电脑上. 如果没有的话, 请使用 apt 或 yum 安装. (译者注: 'toilet' 并不在 Fedora 的官方仓库里, 你可以从 github 上下载源代码来安装)
$ while true; do echo "$(date | toilet -f term -F border Tecmint)"; sleep 1; done
![toilet command in action](http://www.tecmint.com/wp-content/uploads/2014/08/toilet-command.gif)
正在运行的 toilet 命令
**注意**: 上面的脚本需要使用 **ctrl+z** 键来暂停.
### 3. rig 命令 ###
这个命令每次生成一个随机的身份信息和地址. 要运行这个命令, 你需要用 apt 或 yum 安装 '**rig**'. (译者注: 'rig' 不在 Fedora 的官方仓库中, 我只在 rpmseek 上找到了 Ubuntu 的 deb 包, 可以使用它来安装.)
# rig
![rig command in action](http://www.tecmint.com/wp-content/uploads/2014/08/rig-command.gif)
正在运行的 rig 命令
### 4. aview 命令 ###
你认为在终端用 ASCII 格式显示图片怎么样? 我们必须用 apt 或 yum 安装软件包 '**aview**'. (译者注: 'aview' 不在 Fedora 的官方仓库中, 可以从 aview 的[项目主页][4]上下载源代码来安装. ) 在当前文件夹下有一个名为 '**elephant.jpg**' 的图片, 我想用 ASCII 模式在终端查看.
$ asciiview elephant.jpg -driver curses
![aview command in action](http://www.tecmint.com/wp-content/uploads/2014/08/elephant.gif)
正在运行的 aview 命令
### 5. xeyes 命令 ###
在上一篇文章中, 我们介绍了 '**oneko**' 命令, 它可以显示一个追随鼠标指针运动的小老鼠. '**xeyes**' 是一个类似的程序, 当你运行程序时, 你可以看见两个怪物的眼球追随鼠标的运动.
$ xeyes
![xeyes command in action](http://www.tecmint.com/wp-content/uploads/2014/08/xeyes.gif)
正在运行的 xeyes 命令
### 6. cowsay 命令 ###
你是否还记得上一次我们介绍的这个命令? 它可以显示一段预先确定的文本和一个字符构成的奶牛. 如果你想使用其它动物来代替奶牛怎么办? 查看可用的动物列表:
$ cowsay -l
蟒蛇吃大象怎么样?
$ cowsay -f elephant-in-snake Tecmint is Best
![cowsay command in action](http://www.tecmint.com/wp-content/uploads/2014/08/cowsay.gif)
正在运行的 cowsay 命令
山羊怎么样?
$ cowsay -f gnu Tecmint is Best
![cowsay goat in action](http://www.tecmint.com/wp-content/uploads/2014/08/cowsay-goat.gif)
正在运行的 山羊cowsay 命令
今天就到这里吧. 我将带着另一篇有趣的文章回来. 跟踪 Tecmint 来获得最新消息. 不要忘记在下面的评论里留下你的有价值的回复.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-funny-commands/
作者:[Avishek Kumar][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/20-funny-commands-of-linux-or-linux-is-fun-in-terminal/
[2]:http://www.tecmint.com/play-with-word-and-character-counts-in-linux/
[3]:http://linux.cn/article-2831-1.html
[4]:http://aa-project.sourceforge.net/aview/


@ -0,0 +1,47 @@
使用Texmaker在Ubuntu 14.04和Linux Mint 17基于Ubuntu和Debian的Linux发行版中编写LaTeX
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/texmaker_Ubuntu.jpeg)
[LaTeX][1]是一种文本标记语言,也可以说是一种文档制作系统。经常在很多大学或者机构中作为一种标准来书写专业的科学文献,毕业论文或其他类似的文档。在这篇文章中我们会看到如何在Ubuntu 14.04中使用LaTeX。
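在继续之前先看看LaTeX源文件大概的样子一个最小的假设示例pdflatex即可将它编译成PDF

```latex
\documentclass{article}
\title{A Minimal Example}
\author{Jane Doe}
\begin{document}
\maketitle
Hello, \LaTeX! Inline math: $E = mc^2$.
\end{document}
```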
### 在Ubuntu 14.04或Linux Mint 17中安装Texmaker
[Texmaker][2]是一款免费开源的LaTeX编辑器它支持主流的桌面操作系统比如Windows、Linux和OS X。下面是Texmaker的主要特点
- 支持Unicode编码的编辑器
- 拼写检查
- 代码折叠
- 自动补全
- 快速导航
- PDF查看器
- 编译简单
- 支持370个数学符号
- LaTeX格式文本
- 通过TeX4ht导出到html和odt文件
- 支持正则表达式
在Ubuntu 14.04下你可以通过下面的链接下载Texmaker的二进制包
- [下载Texmaker编辑器][3]
你通过链接下载到的是一个.deb包因此在一些像Linux Mint、Elementary OS、Pinguy OS等类Debian的发行版中你可以使用相同的安装方式。
如果你想使用像Github类型的markdown编辑器你可以试试[Remarkable编辑器][4]。
希望Texmaker能够在Ubuntu和Linux Mint中帮到你
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-latex-ubuntu-1404/
作者:[Abhishek][a]
译者:[john](https://github.com/johnhoow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://www.latex-project.org/
[2]:http://www.xm1math.net/texmaker/index.html
[3]:http://www.xm1math.net/texmaker/download.html#linux
[4]:http://itsfoss.com/remarkable-markdown-editor-linux/


@ -2,7 +2,7 @@ Linux有问必答——如何在CentOS或RHEL 7上修改主机名
================================================================================
> 问题在CentOS/RHEL 7上修改主机名的正确方法是什么永久或临时
在CentOS或RHEL中有三种定义的主机名1静态的2瞬态的以及3优雅的。“静态”主机名也成为内核主机名,是系统在启动时从/etc/hostname自动初始化的主机名。“瞬态”主机名是在系统运行时临时分配的主机名例如通过DHCP或mDNS服务器分配。静态主机名和瞬态主机名都遵从作为互联网域名同样的字符限制规则。而另一方面优雅”主机名则被允许使用自由形式(包括特殊/空白字符的主机名以展示给终端用户如Dan's Computer
在CentOS或RHEL中有三种定义的主机名:1静态的2瞬态的以及3灵活的。“静态”主机名也称为内核主机名,是系统在启动时从/etc/hostname自动初始化的主机名。“瞬态”主机名是在系统运行时临时分配的主机名例如通过DHCP或mDNS服务器分配。静态主机名和瞬态主机名都遵从作为互联网域名同样的字符限制规则。而另一方面灵活”主机名则允许使用自由形式(包括特殊/空白字符的主机名以展示给终端用户如Dan's Computer
在CentOS/RHEL 7中有个叫hostnamectl的命令行工具它允许你查看或修改与主机名相关的配置。
@ -12,31 +12,31 @@ Linux有问必答——如何在CentOS或RHEL 7上修改主机名
![](https://farm4.staticflickr.com/3844/15113861225_e0e19783a7.jpg)
只查看静态、瞬态或优雅主机名,分别使用“--static”“--transient”或“--pretty”选项。
只查看静态、瞬态或灵活主机名,分别使用“--static”“--transient”或“--pretty”选项。
$ hostnamectl status [--static|--transient|--pretty]
要同时修改所有三个主机名:静态、瞬态和优雅主机名:
要同时修改所有三个主机名:静态、瞬态和灵活主机名:
$ sudo hostnamectl set-hostname <host-name>
![](https://farm4.staticflickr.com/3855/15113489172_4e25ac87fa_z.jpg)
就像上面展示的那样,在修改静态/瞬态主机名时,任何特殊字符或空白字符会被移除,而提供的参数中的任何大写字母会自动转化为小写。一旦修改了静态主机名,/etc/hostname将被自动更新。然而/etc/hosts不会更新以对修改作出回应,所以你需要手动更新/etc/hosts。
就像上面展示的那样,在修改静态/瞬态主机名时,任何特殊字符或空白字符会被移除,而提供的参数中的任何大写字母会自动转化为小写。一旦修改了静态主机名,/etc/hostname将被自动更新。然而/etc/hosts不会更新来回应所做的修改,所以你需要手动更新/etc/hosts。
如果你只想修改特定的主机名(静态,瞬态或优雅),你可以使用“--static”“--transient”或“--pretty”选项。
如果你只想修改特定的主机名(静态,瞬态或灵活),你可以使用“--static”“--transient”或“--pretty”选项。
例如,要永久修改主机名,你可以修改静态主机名:
$ sudo hostnamectl --static set-hostname <host-name>
注意,你不必重启机器以激活永久主机名修改。上面的命令会立即修改内核主机名。注销并重新登入后在命令行提示观察新的静态主机名。
注意,你不必重启机器以激活永久主机名修改。上面的命令会立即修改内核主机名。注销并重新登入后在命令行提示观察新的静态主机名。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/change-hostname-centos-rhel-7.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,73 @@
Arch Linux安装捷径Evo/Lution
================================================================================
有些人只体验过Ubuntu或Mint的安装却鼓起勇气想要安装Arch Linux他们的学习道路是那样的陡峭和严峻安装过程中半途而废的人数可能要比顺利过关的人多。如果你能成功安装Arch Linux并把它配置得切实可用那么你已经被它培养成了一个饱经风霜的Linux用户。
即使有[有帮助的维基][1]可以为新手提供指南对于那些想要征服Arch的人而言要求仍然太高。你需要至少熟悉诸如fdisk或mkfs之类的终端命令并且听过mc、nano或chroot这些并努力掌握它们。这让我回想起了10年前的Debian安装。
对于那些满怀抱负而又缺乏知识的人,有一个叫[Evo/Lution Live ISO][2]的安装器可以拯救他们。即便它启动起来像一个独立的发行版但它除了辅助安装原生的Arch Linux之外什么也不做。Evo/Lution项目旨在通过提供简单的Arch安装方式以及一个为用户提供全面帮助和文档的社区让Arch的用户群体更加多样化。在这个组合中Evo是不可安装的Live CD而Lution是安装器。项目创立者看到了Arch及其衍生发行版的开发者和用户之间的巨大鸿沟想要构筑一个所有参与者身份平等的社区。
![](https://farm6.staticflickr.com/5559/15067088008_ecb221408c_z.jpg)
项目的软件部分是命令行安装器Lution-AIS它解释了一个普通的纯净的Arch安装过程中的每一步。安装完毕后你将获得Arch提供的没有从AUR添加任何东西的最新软件或其它任何自定义的包。
启动这个422MB大小的ISO镜像后一个由显示在右边的带有选项快捷方式的Conky和一个左边等待运行安装器的LX-Terminal组成的工作区便呈现在我们眼前。
![](https://farm6.staticflickr.com/5560/15067056888_6345c259db_z.jpg)
在通过右击桌面或使用ALT-i启动实际的安装器后一个列有16个待运行任务的列表就出现在你面前了。除非你有更好的理由否则就把这些命令全部运行一遍你可以一次运行一个也可以选择若干个如“1 3 6”或“1-4”也可以输入“1-16”一次全部运行。大多数步骤需要输入y即yes来确认而下一个任务则等着你敲击回车来执行。在此期间你有足够的时间来阅读安装指南它可以通过ALT-g打开。当然你也可以出去溜达一圈再回来。
![](https://farm4.staticflickr.com/3868/15253227082_5e7219f72d_z.jpg)
这16个步骤分成“基础安装”和“桌面安装”两组。第一组主要关注本地化、分区以及安装启动加载器。
安装器带领你穿越分区世界你可以选择使用gparted、gdisk以及cfdisk。
![](https://farm4.staticflickr.com/3873/15230603226_56bba60d28_z.jpg)
![](https://farm4.staticflickr.com/3860/15253610055_e6a2a7a1cb_z.jpg)
创建完分区后像截图中所示用gparted划分/dev/sda1用于root/dev/sda2用于swap你可以在10个文件系统中选择其中之一。在下一步中你可以选择内核最新或长期支持LTS和基础系统。
![](https://farm6.staticflickr.com/5560/15253610085_aa5a9557fb_z.jpg)
安装完你喜爱的启动加载器后第一部分安装就完成了这大约需要花费12分钟。此时所达到的状态相当于普通Arch安装中你第一次重启进入系统时的样子。
在Lution的帮助下继续进入第二部分在这一部分中将安装Xorg、声音和图形驱动然后进入桌面环境。
![](https://farm4.staticflickr.com/3918/15066917430_c21e0f0a9e_z.jpg)
安装器会检测是否在VirtualBox中安装并且会自动为VM安装并加载正确的通用驱动然后相应地设置**systemd**。
在下一步中你可以选择KDE、Gnome、Cinnamon、LXDE、Englightenment、Mate或XFCE作为你的桌面环境。如果你不喜欢臃肿的桌面你也可以试试这些窗口管理器Awesome、Fluxbox、i3、IceWM、Openbox或PekWM。
![](https://farm4.staticflickr.com/3874/15253610125_26f913be20_z.jpg)
在使用Cinnamon作为桌面环境的情况下第二部分安装将花费不到10分钟的时间而选择KDE的话因为要下载的东西多得多所以花费的时间也会更长。
Lution-AIS在Cinnamon和Awesome上都运行得十分顺畅。在安装完成并提示重启后它就带我进入了我所期望的环境。
![](https://farm4.staticflickr.com/3885/15270946371_c2def59f37_z.jpg)
我要提出两点非议一是在安装器要我选择一个镜像列表时另外一个是在创建fstab文件时。在这两种情况下它都另外开了一个终端给出了一些文本信息提示。这让我花了点时间才搞清楚原来我得把它关了安装器才会继续。在创建fstab后它又会提示你而你需要关闭终端并在问你是否想要保存文件时回答
![](https://farm4.staticflickr.com/3874/15067056958_3bba63da60_z.jpg)
我碰到的第二个问题可能与VirtualBox有关了。在启动的时候你可以看到没有网络被检测到的提示信息。点击顶部左边的图标将会打开wicd这里所使用的网络管理器。点击“断开”然后再点击“连接”并重启安装器就可以让它自动检测到了。
在我看来Evo/Lution是个有价值的项目其中的Lution安装器运行得一切顺利至于社区部分目前还没有太多可说的。他们开启了一个全新的网站、论坛和维基还需要往里面填充内容。所以如果你喜欢这个主意就加入[他们的论坛][3]并告诉他们吧。本文中的ISO镜像可以从[此网站][4]下载。
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/09/install-arch-linux-easy-way-evolution.html
作者:[Ferdinand Thommes][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/ferdinand
[1]:https://wiki.archlinux.org/
[2]:http://www.evolutionlinux.com/
[3]:http://www.evolutionlinux.com/forums/
[4]:http://www.evolutionlinux.com/downloads.html

View File

@ -0,0 +1,116 @@
数据库常见问题答案--如何使用命令行创建一个MySQL数据库
===
> **问题**在一个某处运行的MySQL服务器上我该怎样通过命令行创建和安装一个MySQL数据库呢
为了能通过命令行创建一个MySQL数据库你可以使用mysql命令行客户端。下面是通过mysql命令行客户端创建MySQL数据库的步骤。
### 第一步安装MySQL客户端 ###
当然你得确保MySQL客户端已经安装完毕。如果没有的话可以按照下面的方法。
在DebianUbuntu 或者 Linux Mint上
$ sudo apt-get install mysql-client
在FedoraCentOS 或者 RHEL上
$ sudo yum install mysql
### 第二步登录到MySQL服务器 ###
首先你需要使用root用户登录进你的MySQL数据库如下
$ mysql -u root -h <mysql-server-ip-address> -p
请注意为了能登进远程的MySQL服务器你需要[开启服务器上的远程访问][1]如果你想调用同一主机上的MySQL服务器你可以省略 "-h <mysql-server-ip-address>" 参数
$ mysql -u root -p
你将需要输入MySQL服务器的密码如果认证成功MySQL提示将会出现。
![](https://www.flickr.com/photos/xmodulo/15272971112/)
### 第三步创建一个MySQL数据库 ###
在MySQL提示符中输入命令之前请记住所有的命令都是以分号结束的否则将不会执行。另外建议在输入命令时使用大写字母输入数据库对象时使用小写字母。这不是必须的只是方便你的阅读。
现在让我们创建一个叫做xmodulo_DB的数据库
mysql> CREATE DATABASE IF NOT EXISTS xmodulo_DB;
![](https://farm4.staticflickr.com/3864/15086792487_8e2eaedbcd.jpg)
### 第四步:创建一个数据库表 ###
为了达到演示的目的我们将会创建一个叫做posts_tbl的表表里会存储关于文章的如下信息
- 文章的标题
- 作者的第一个名字
- 作者的最后一个名字
- 文章可用或者不可用
- 文章创建的日期
这个过程分两步执行:
首先,选择我们需要使用的数据库:
mysql> USE xmodulo_DB;
然后,在数据库中创建新表:
mysql> CREATE TABLE `posts_tbl` (
`post_id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`content` TEXT,
`author_FirstName` VARCHAR(100) NOT NULL,
`author_LastName` VARCHAR(50) DEFAULT NULL,
`isEnabled` TINYINT(1) NOT NULL DEFAULT 1,
`date` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY ( `post_id` )
) ENGINE = MYISAM;
![](https://farm4.staticflickr.com/3870/15086654980_39d2d54d72.jpg)
### 第五步:创建一个用户,并授予权限 ###
当涉及到访问我们新创的数据库和表的时候创建一个新用户是一个很好的主意。这样做就可以让用户在没有整个MySQL服务器权限的情况下去访问那个数据库(而且只能是那个数据库)
你可以创建新用户,授予权限,并且使改变生效:
mysql> GRANT ALL PRIVILEGES ON xmodulo_DB.* TO 'new_user'@'%' IDENTIFIED BY 'new_password';
mysql> FLUSH PRIVILEGES;
'new_user'和'new_password'分别指的是新的用户名和他的密码。这条信息将会被保存在mysql.user表中而且密码会被加密。
### 第六步:测试 ###
让我们插入一个虚拟的记录到posts_tbl表
mysql> USE xmodulo_DB;
mysql> INSERT INTO posts_tbl (content, author_FirstName, author_LastName)
VALUES ('Hi! This is some dummy text.', 'Gabriel', 'Canepa');
然后查看posts_tbl表中的所有记录
mysql> SELECT * FROM posts_tbl;
![](https://farm4.staticflickr.com/3896/15086792527_39a987d8bd_z.jpg)
注意MySQL会在我们先前定义的地方自动插入适当的默认值(比如,'isEnabled'和'date')。
---
via: http://ask.xmodulo.com/create-mysql-database-command-line.html
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://xmodulo.com/2012/06/how-to-allow-remote-access-to-mysql.html

View File

@ -0,0 +1,182 @@
客户机通过DNSMASQ网络启动服务器网络安装“Debian 7Wheezy
================================================================================
本教程将指引你直接通过使用**DNSMASQ**作为**PXE服务器预启动执行环境**的网络位置安装**Debian 7Wheezy**此种情况是假定你的服务器不提供任何CD/DVD/USB介质驱动器或者它只能通过相连的监视器、键盘和鼠标操作。
![Debian 7 Network Installation on Client Machines](http://www.tecmint.com/wp-content/uploads/2014/09/Network-Debian-Instalaltion.png)
客户机上的Debian 7网络安装
**DNSMASQ**是一个轻量级网络基础架构服务器它可以通过内建的DNS、DHCP和TFTP服务器提供如DNS、DHCP和网络启动等关键服务。
一旦PXE服务器启动并运行你可以指示你所有的客户机直接从网络启动前提是你的客户机必须拥有一张支持网络启动的网卡网络启动可以从BIOS的网络启动或启动服务选项中启用。
### 需求 ###
- [Debian 7 (Wheezy)安装指南][1]
### 步骤1 安装及配置DNSMASQ服务器 ###
**1.** 首先在安装Debian服务器后要确保你的系统使用的是**静态IP地址**。因为除了网络启动之外也要为你的整个网段提供DHCP服务。设置好静态IP地址后以root帐号或具有root权力的用户来运行以下命令进行DNSMASQ服务器的安装。
# apt-get install dnsmasq
![Install Dnsmasq Package](http://www.tecmint.com/wp-content/uploads/2014/09/Install-Dnsmasq-in-Debian.png)
安装Dnsmasq包
**2.** 安装好DNSMASQ包后你可以开始编辑配置文件。首先创建一个主配置文件的备份然后使用下面的命令对**dnsmasq.conf**文件进行编辑。
# mv /etc/dnsmasq.conf /etc/dnsmasq.conf.backup
# nano /etc/dnsmasq.conf
![Backup Dnsmasq Configuration](http://www.tecmint.com/wp-content/uploads/2014/09/Backup-dnsmasq-Configuration-file.png)
备份Dnsmasq配置
**3.** 上面的备份操作实际上是把原配置文件重命名了,所以新建的文件应该是空的,你可以参照以下的**DNSMASQ**配置文件节录来填写。
interface=eth0
domain=debian.lan
dhcp-range=192.168.1.3,192.168.1.253,255.255.255.0,1h
dhcp-boot=pxelinux.0,pxeserver,192.168.1.100
pxe-prompt="Press F8 for menu.", 60
#pxe-service types: x86PC, PC98, IA64_EFI, Alpha, Arc_x86, Intel_Lean_Client, IA32_EFI, BC_EFI, Xscale_EFI and X86-64_EFI
pxe-service=x86PC, "Install Debian 7 Linux from network server 192.168.1.100", pxelinux
enable-tftp
tftp-root=/srv/tftp
![Configuration of Dnsmasq](http://www.tecmint.com/wp-content/uploads/2014/09/Configure-dnsmasq.png)
Dnsmasq配置
- **interface** 服务器监听的网络接口。
- **domain** 用你自己的域名替换。
- **dhcp-range** 用你自己的网络掩码定义的网络IP地址范围。
- **dhcp-boot** 保持默认但使用你自己的服务器IP地址替换IP声明。
- **pxe-prompt** 保持默认,表示提示用户**按F8键**进入菜单并等待60秒。
- **pxe-service** 32位/64位架构使用**x86PC**引号中的字符串是菜单描述提示。其它可用的值有PC98、IA64_EFI、Alpha、Arc_x86、Intel_Lean_Client、IA32_EFI、BC_EFI、Xscale_EFI和X86-64_EFI。
- **enable-tftp** 启用内建TFTP服务器。
- **tftp-root** 使用/srv/tftp作为Debian网络启动文件的存放位置。
### 步骤2: 下载Debian网络启动文件并打开防火墙连接 ###
**4.** 现在该下载Debian网络启动文件了。首先修改你当前工作目录路径到**TFTP根目录**位置,此位置由最后的配置语句定义(**/srv/tftp**系统路径)。
转到[Debian网络安装][2] [网络启动部分][3]的官方页面镜像,抓取以下文件,要抓取的文件取决于你想要安装到客户端的系统架构。
下载好**netboot.tar.gz**文件后同时提取归档该过程描述只适用于64位但对于其它系统架构也基本相同
# cd /srv/tftp/
# wget http://ftp.nl.debian.org/debian/dists/wheezy/main/installer-amd64/current/images/netboot/netboot.tar.gz
# tar xfz netboot.tar.gz
# wget http://ftp.nl.debian.org/debian/dists/wheezy/main/installer-amd64/current/images/SHA256SUMS
# wget http://ftp.nl.debian.org/debian/dists/wheezy/Release
# wget http://ftp.nl.debian.org/debian/dists/wheezy/Release.gpg
同时,必须确保**TFTP**目录中的所有文件都可让TFTP服务器读取。
# chmod -R 755 /srv/tftp/
![Download Debian NetBoot Files](http://www.tecmint.com/wp-content/uploads/2014/09/Download-Debian-NetBoot-Files.png)
下载Debian网络启动文件
对于其它的**Debian网络安装**镜像和架构,请相应替换以下命令中的变量。
# wget http://"$YOURMIRROR"/debian/dists/wheezy/main/installer-"$ARCH"/current/images/netboot/netboot.tar.gz
# wget http://"$YOURMIRROR"/debian/dists/wheezy/main/installer-"$ARCH"/current/images/SHA256SUMS
# wget http://"$YOURMIRROR"/debian/dists/wheezy/Release
# wget http://"$YOURMIRROR"/debian/dists/wheezy/Release.gpg
**5.** 下一步启动或重启DNSMASQ守护进程并运行netstat命令来获取服务器监听的端口列表。
# service dnsmasq restart
# netstat -tulpn | grep dnsmasq
![Start Dnsmasq Service](http://www.tecmint.com/wp-content/uploads/2014/09/Start-Dnsmasq-Service.png)
启动Dnsmasq服务
**6.** 基于Debian的发行版通常附带了**UFW防火墙**包。使用以下命令来打开需要的**DNSMASQ**端口号:**67**BOOTPS、**69**TFTP、**4011**代理DHCPUDP以及**53**DNSTCP和UDP
# ufw allow 69/udp
# ufw allow 4011/udp ## Only if you have a ProxyDHCP on the network
# ufw allow 67/udp
# ufw allow 53/tcp
# ufw allow 53/udp
![Open Dnsmasq Ports](http://www.tecmint.com/wp-content/uploads/2014/09/Open-Dnsmasq-Ports-620x303.png)
开启Dnsmasq端口
现在位于你的客户机网络接口上的PXE加载器将使用按以下顺序从**/srv/tftp/pxelinux.cfg**目录加载**pxelinux**配置文件。
- GUID文件
- MAC文件
- 默认文件
### 步骤3 配置客户端从网络启动 ###
**7.** 要为你的客户端计算机启用网络启动,请进入系统**BIOS配置**如何进入BIOS设置请查阅硬件主板提供商的文档
转到**启动菜单**,然后选择**网络启动**作为**首要启动设备**在某些系统上你可以不用进入BIOS配置就能选择启动设备只要在**BIOS自检**时按一个键就可以进行选择了)。
![Select BIOS Settings](http://www.tecmint.com/wp-content/uploads/2014/09/Select-BIOS-Settings.png)
选择BIOS设置
**8.** 在编辑启动顺序后,通常按**F10**来保存BIOS设置。重启后你的客户端计算机应该可以直接从网络启动了应该会出现第一个**PXE**提示,要求你按**F8**键进入菜单。
接下来,敲击**F8**键来进入,会出现一个新的提示。敲击**回车**键,屏幕上会出现**Debian安装器**主界面提示,如下图所示。
![Boot Menu Selection](http://www.tecmint.com/wp-content/uploads/2014/09/Boot-Menu-Selection.png)
启动菜单选择
![Select Debian Installer Boot](http://www.tecmint.com/wp-content/uploads/2014/09/Select-Debian-Installer-Boot.png)
选择Debian安装器启动
![Select Debian Install](http://www.tecmint.com/wp-content/uploads/2014/09/Select-Debian-Install.png)
选择Debian安装
从这里开始你可以使用Debian 7 Wheezy安装进程将Debian安装到你的机器上了安装链接见上面。然而为了能够完成安装进程你也需要确保你的机器上互联网连接已经激活。
### 步骤4 DNSMASQ服务器排障并在系统范围内启用 ###
**9.** 要诊断服务器、排查可能发生的问题,或查看提供给客户端的其它信息,请运行以下命令打开日志文件。
# tailf /var/log/daemon.log
![Debug DNSMASQ Server](http://www.tecmint.com/wp-content/uploads/2014/09/Debbug-DNSMASQ-Server.png)
DNSMASQ服务器排障
**10.** 如果服务器测试中已一切就绪,你现在可以在**sysv-rc-conf**包的帮助下,启用**DNSMASQ**守护进程自启动,以使该进程在系统重启后自动启动。
# apt-get install sysv-rc-conf
# sysv-rc-conf dnsmasq on
![Enable DNSMASQ Daemon](http://www.tecmint.com/wp-content/uploads/2014/09/Enable-DNSMASQ-Daemon.png)
启用DNSMASQ守护进程
到此为止吧!现在你的**PXE**服务器已经整装待发随时准备好分配IP地址了**DHCP**并为你所有网段中的客户端提供需要的启动信息这些信息配置用来从网络启动并安装Debian Wheezy。
使用PXE网络启动安装在服务器主机数量增长时很有优势因为你可以在短时间内设置好整个网络基础架构。它也为版本升级提供了方便还可以通过kickstart文件使整个安装全自动化。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/network-installation-of-debian-7-on-client-machines/
作者:[Matei Cezar][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/debian-gnulinux-7-0-code-name-wheezy-server-installation-guide/
[2]:http://www.debian.org/distrib/netinst#netboot
[3]:http://ftp.nl.debian.org/debian/dists/wheezy/main/

View File

@ -0,0 +1,101 @@
安卓应用乾坤大挪移Ubuntu上的搬运工ARChon
================================================================================
![Android, Chrome, Ubuntu](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/android-ubuntu.jpg)
Android, Chrome, Ubuntu
**Google最近发布了首批[能在Chrome OS本地运行的安卓应用集][1],通过‘安卓运行时’扩展完成了该壮举。**
现在,一位开发者已经[指明了将安卓应用带入桌面版Chrome的路][2]。
[弗拉德·菲利波夫][3]的[chromeos-apk脚本][4]和[ARChon安卓运行时扩展][5]手拉手一起开展工作将安卓应用带进了WindowsMac和Linux桌面上的Chrome中。
![IMDB, Flipboard and Twitter Android Apps running on Ubuntu 14.04 LTS](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/android-apps-on-linux.jpg)
运行在Ubuntu 14.04 LTS上的安卓应用IMDBFlipboard和Twitter
通过运行时运行的应用的性能不是很令人惊异任何想要运行Dead Trigger 2或者其它图形密集型游戏的雄心壮志可以放到一边了。
同样地作为官方运行时的非官方重构包并在Chrome OS之外运行系统整合如网络摄像头扬声器等可能不完整或者根本不可能。
下面的指南只是按原样提供,并不保证一定成功。它是高度实验性的,里面遍布漏洞,很不稳定,可谓危机四伏。请只出于好奇去尝试,不要寄予厚望,这样你就不会深受其扰。
### 安卓应用转战Linux大法 ###
要通过Chrome在Linux上运行安卓应用很明显你需要安装Chrome要求的版本是37或者更高。坦率地讲如果你打算玩玩潜在不稳定的版本那么你也可以下载并[为Linux安装不稳定的Google Chrome版本][6]。
已经安装了Chrome的某个版本你可以通过命令行来安装开发版命令如下
sudo apt-get install google-chrome-unstable
接下来你需要下载由弗拉德·菲利波夫创建的定制版安卓运行时而不是Google或Chromium官方提供的版本。这个版本和官方的有着诸多不同最突出的就是它可以运行在桌面版的浏览器上。
- [从BitBucket下载ARChon v1.0][7]
下载好运行时后,你需要从.zip解压内容并移动解压后的文件夹到你的Home文件夹。
要安装打开Google Chrome点击汉堡式菜单按钮然后导航到扩展页。检查启用开发者模式并点击加载解包的扩展按钮。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/chromeos-apk-extensions.jpg)
运行时本身不会做太多事情,所以你需要从安卓应用创建兼容包。要完成这项工作,你需要‘[chromeos-apk][8][命令行JavaScript工具][9]它可以通过npmNode包管理器安装。
首先运行:
sudo apt-get install npm nodejs nodejs-legacy
Ubuntu 64位用户你还需要安装以下库
sudo apt-get install lib32stdc++6
现在,运行命令来安装脚本吧:
npm install -g chromeos-apk
根据你的配置你可能需要过会儿使用sudo来运行。如果你不喜欢[通过sudo安装npm模块你可以][10]玩玩鬼把戏。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/chromeos-apk-npm.jpg)
现在准备工作基本就绪了。去Google找找你想要试试的应用的APK吧请牢记**不是所有的安卓应用都能工作**,而**那些可以工作的也未必工作得很好**,或者缺少功能。
把你想要的安卓APK放到~/Home然后回到终端中使用以下命令来转换你可以将APK命名成任何你想要的名字
chromeos-apk replaceme.apk --archon
该命令将花一点时间来完成这项工作,也许也就是一眨眼的时间。[实际上,不需要眨眼的时间][11]
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/chromeos-apk-archon-750x184.jpg)
现在在你的Home文件夹内会出现一个由ARChon生成的Chrome扩展文件夹。剩下来要做的事就是安装它并查看是否能正常工作
回到chrome://extensions页面再次点击加载解包的扩展按钮但这次选择上面脚本创建的文件夹。
应用应该能顺利安装但它真的能正常运行吗打开Chrome应用启动器或应用页面启动它看看吧。
#### 深度探索 ####
由于ARChon运行时支持不限数量的chrome化的APK你可以反复进行该操作你想做多少次都行。Chrome APK [subreddit][12]用于跟踪成功/失败情况,所以如果你感到很有用,一定要贴出你的结果。
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/09/install-android-apps-ubuntu-archon
作者:[Joey-Elijah Sneddon][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgchrome.com/first-4-chrome-android-apps-released/
[2]:http://www.omgchrome.com/run-android-apps-on-windows-mac-linux-archon/
[3]:https://github.com/vladikoff/
[4]:https://github.com/vladikoff/chromeos-apk
[5]:https://github.com/vladikoff/chromeos-apk/blob/master/archon.md
[6]:http://www.chromium.org/getting-involved/dev-channel
[7]:https://bitbucket.org/vladikoff/archon/get/v1.0.zip
[8]:https://github.com/vladikoff/chromeos-apk/blob/master/README.md
[9]:https://github.com/vladikoff/chromeos-apk/blob/master/README.md
[10]:http://stackoverflow.com/questions/19352976/npm-modules-wont-install-globally-without-sudo/21712034#21712034
[11]:https://www.youtube.com/watch?v=jKXLkWrBo7o
[12]:http://www.reddit.com/r/chromeapks

View File

@ -0,0 +1,199 @@
Linux日志文件总管——logrotate
================================================================================
日志文件包含了关于系统中发生的事件的有用信息,在排障过程中或者系统性能分析时经常被用到。对于忙碌的服务器,日志文件大小会快速增长,服务器会很快消耗磁盘空间,这成了个问题。除此之外,处理一个单个的庞大日志文件也常常是件十分棘手的事。
logrotate是个十分有用的工具它可以自动对日志进行轮循、压缩以及删除旧日志文件。例如你可以设置logrotate让/var/log/foo日志文件每30天轮循一次并删除超过6个月的日志。配置完后logrotate的运作完全自动化不必进行任何进一步的人为干预。另外旧日志也可以通过电子邮件发送不过该选项超出了本教程的讨论范围。
主流Linux发行版上都默认安装有logrotate包如果出于某种原因logrotate没有出现在里头你可以使用apt-get或yum命令来安装。
在Debian或Ubuntu上
# apt-get install logrotate cron
在FedoraCentOS或RHEL上
# yum install logrotate crontabs
logrotate的配置文件是/etc/logrotate.conf通常不需要对它进行修改。日志文件的轮循设置在独立的配置文件中放在/etc/logrotate.d/目录下。
### 样例一 ###
在第一个样例中我们将创建一个10MB的日志文件/var/log/log-file。我们将展示怎样使用logrotate来管理该日志文件。
我们从创建一个日志文件开始吧然后在其中填入一个10MB的随机比特流数据。
# touch /var/log/log-file
# head -c 10M < /dev/urandom > /var/log/log-file
由于现在日志文件已经准备好我们将配置logrotate来轮循该日志文件。让我们为该文件创建一个配置文件。
# vim /etc/logrotate.d/log-file
----------
/var/log/log-file {
monthly
rotate 5
compress
delaycompress
missingok
notifempty
create 644 root root
postrotate
/usr/bin/killall -HUP rsyslogd
endscript
}
这里:
- **monthly**: 日志文件将按月轮循。其它可用值为dailyweekly或者yearly
- **rotate 5**: 一次将存储5个归档日志。对于第六个归档时间最久的归档将被删除。
- **compress**: 在轮循任务完成后已轮循的归档将使用gzip进行压缩。
- **delaycompress**: 总是与compress选项一起用delaycompress选项指示logrotate不要将最近的归档压缩压缩将在下一次轮循周期进行。这在你或任何软件仍然需要读取最新归档时很有用。
- **missingok**: 在日志轮循其间,任何错误将被忽略,例如“文件无法找到”之类的错误。
- **notifempty**: 如果日志文件为空,轮循不会进行。
- **create 644 root root**: 以指定的权限创建全新的日志文件同时logrotate也会重命名原始日志文件。
- **postrotate/endscript**: 在所有其它指令完成后postrotate和endscript之间指定的命令将被执行。在这种情况下rsyslogd 进程将立即再次读取其配置并继续运行。
上面的模板是通用的,而配置参数则根据你的需求进行调整,不是所有的参数都是必要的。
### 样例二 ###
在本例中我们只想要轮循一个日志文件然而日志文件大小会增长到50MB。
# vim /etc/logrotate.d/log-file
----------
/var/log/log-file {
size=50M
rotate 5
create 644 root root
postrotate
/usr/bin/killall -HUP rsyslogd
endscript
}
### 样例三 ###
我们想要让旧日志文件以创建日期命名这可以通过添加dateext参数实现。
# vim /etc/logrotate.d/log-file
----------
/var/log/log-file {
monthly
rotate 5
dateext
create 644 root root
postrotate
/usr/bin/killall -HUP rsyslogd
endscript
}
这将导致归档文件在它们的文件名中包含日期信息。
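作为参考dateext默认使用“-%Y%m%d”格式的日期后缀。下面是一个假设性的小示例仅演示轮循并压缩后归档文件名大致的样子以当天日期为例

```shell
# 假设示例拼出dateext默认日期后缀-YYYYMMDD的归档文件名
suffix="-$(date +%Y%m%d)"
echo "log-file${suffix}.gz"
```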
### 排障 ###
这里提供了一些logrotate设置的排障提示。
#### 1. 手动运行logrotate ####
**logrotate**可以在任何时候从命令行手动调用。
要调用为/etc/lograte.d/下配置的所有日志调用**logrotate**
# logrotate /etc/logrotate.conf
要为某个特定的配置调用logrotate
# logrotate /etc/logrotate.d/log-file
#### 2. 演练 ####
排障过程中的最佳选择是使用‘-d选项以预演方式运行logrotate。要进行验证不用实际轮循任何日志文件可以模拟演练日志轮循并显示其输出。
# logrotate -d /etc/logrotate.d/log-file
![](https://farm6.staticflickr.com/5561/15096836737_33d3cd1ccb_z.jpg)
正如我们从上面的输出结果可以看到的logrotate判断此次轮循是不必要的。如果日志文件的存在时间不足一天就会出现这种情况。
#### 3. 强制运行 ####
即使轮循条件没有满足,我们也可以通过使用‘-f选项来强制logrotate轮循日志文件-v参数提供了详细的输出。
# logrotate -vf /etc/logrotate.d/log-file
----------
reading config file /etc/logrotate.d/log-file
reading config info for /var/log/log-file
Handling 1 logs
rotating pattern: /var/log/log-file forced from command line (5 rotations)
empty log files are rotated, old logs are removed
considering log /var/log/log-file
log needs rotating
rotating log /var/log/log-file, log->rotateCount is 5
dateext suffix '-20140916'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/log-file.5.gz to /var/log/log-file.6.gz (rotatecount 5, logstart 1, i 5),
old log /var/log/log-file.5.gz does not exist
renaming /var/log/log-file.4.gz to /var/log/log-file.5.gz (rotatecount 5, logstart 1, i 4),
old log /var/log/log-file.4.gz does not exist
. . .
renaming /var/log/log-file.0.gz to /var/log/log-file.1.gz (rotatecount 5, logstart 1, i 0),
old log /var/log/log-file.0.gz does not exist
log /var/log/log-file.6.gz doesn't exist -- won't try to dispose of it
renaming /var/log/log-file to /var/log/log-file.1
creating new /var/log/log-file mode = 0644 uid = 0 gid = 0
running postrotate script
compressing log with: /bin/gzip
#### 4. Logrotate记录日志 ####
logrotate自身的状态信息通常记录在/var/lib/logrotate/status文件中。如果出于排障目的我们想让logrotate把状态记录到指定的文件可以像下面这样从命令行指定。
# logrotate -vf -s /var/log/logrotate-status /etc/logrotate.d/log-file
#### 5. Logrotate定时任务 ####
logrotate需要的**cron**任务应该在安装时就自动创建了我把cron文件的内容贴出来以供大家参考。
# cat /etc/cron.daily/logrotate
----------
#!/bin/sh
# Clean non existent log file entries from status file
cd /var/lib/logrotate
test -e status || touch status
head -1 status > status.clean
sed 's/"//g' status | while read logfile date
do
[ -e "$logfile" ] && echo "\"$logfile\" $date"
done >> status.clean
mv status.clean status
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate /etc/logrotate.conf
小结一下logrotate工具对于防止因庞大的日志文件而耗尽存储空间是十分有用的。配置完毕后整个过程完全自动化可以长时间无需人为干预地运行。本教程重点关注了几个使用logrotate的基本样例你也可以定制它以满足你的需求。
希望本文对你有所帮助。
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/09/logrotate-manage-log-files-linux.html
作者:[Sarmed Rahman][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed

View File

@ -0,0 +1,41 @@
在Ubuntu 14.04中重置Unity和Compiz设置【小贴士】
================================================================================
如果你一直在试验你的Ubuntu系统你可能最终以Unity和Compiz的一片混乱收场。在此贴士中我们将看看怎样来重置Ubuntu 14.04中的Unity和Compiz。事实上全部要做的事仅仅是运行几个命令而已。
### 重置Ubuntu 14.04中的Unity和Compiz ###
打开终端Ctrl+Alt+T并使用以下命令来重置compiz
dconf reset -f /org/compiz/
重置compiz后重启Unity
setsid unity
此外如果你想将Unity图标也进行重置试试以下的命令吧
unity --reset-icons
### 可能的疑难解决方案: ###
如果你在重置compiz时遇到如下错误
> error: GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._g_2dfile_2derror_2dquark.Code17: Cannot open dconf database: invalid gvdb header
可能的原因是用户文件被搞乱了。备份dconf配置并移除配置文件
mv ~/.config/dconf/ ~/.config/dconf.bak
希望本贴士对你重置Ubuntu 14.04中Unity和compiz有所帮助欢迎您随时提出问题和建议。
--------------------------------------------------------------------------------
via: http://itsfoss.com/reset-unity-compiz-settings-ubuntu-1404/
作者:[Abhishek][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/

View File

@ -0,0 +1,59 @@
在CentOS 7上安装Vmware 10
================================================================================
在CentOS 7上安装VMware 10.0.3我将和你们分享我的经验。通常这个版本不能直接在CentOS 7上工作问题出在它对CentOS 7所用的3.10内核的兼容性上。
1 - 下载和安装过程按正常方式进行不会有问题。唯一的问题出现在之后运行vmware程序的时候。
### 如何修复? ###
**1 进入/usr/lib/vmware/modules/source。**
cd /usr/lib/vmware/modules/source
**2 解压vmnet.tar.**
tar -xvf vmnet.tar
**3 进入vmnet-only目录。**
cd vmnet-only
**4 编辑filter.c文件。**
vi filter.c
在206和259行替换以下字符串
#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 13, 0)
为:
#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 0, 0)
保存并退出。
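如果不想手动编辑也可以用sed完成上面的替换。下面是一个假设性的演示先在一个临时文件上验证替换效果实际操作时把路径换成vmnet-only/filter.c即可

```shell
# 假设示例在临时演示文件上验证sed替换注意字符串中的空格要与源码完全一致
printf '#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 13, 0)\n' > /tmp/filter-demo.c
sed -i 's/KERNEL_VERSION(3, 13, 0)/KERNEL_VERSION(3, 0, 0)/' /tmp/filter-demo.c
cat /tmp/filter-demo.c
```

输出应为#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 0, 0)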
**5 回到先前文件夹。**
cd ../
**6 再次压缩文件夹。**
tar -uvf vmnet.tar vmnet-only
**7 移除旧目录。**
rm -fr vmnet-only
**8 启动vmware并体验。**
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_008.png)
--------------------------------------------------------------------------------
via: http://www.unixmen.com/install-vmware-10-centos-7/
作者: M.el Khamlichi
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,26 @@
Ubuntu 14.04历史文件清理
================================================================================
这个面向初学者的简明教程说明了如何清理Ubuntu 14.04中的历史文件记录。
要从dash搜索中删除历史记录请遵循以下步骤。
转到系统设置System Settings并打开安全与隐私Security & Privacy
![](http://www.ubuntugeek.com/wp-content/uploads/2014/09/14.png)
在文件与应用Files and Applications标签下点击清除用户数据Clear Usage Data
![](http://www.ubuntugeek.com/wp-content/uploads/2014/09/26.png)
你也可以关闭“记录文件与应用使用Record file and Application usage以阻止系统记录你当前使用的文件和应用。
![](http://www.ubuntugeek.com/wp-content/uploads/2014/09/36.png)
--------------------------------------------------------------------------------
via: http://www.ubuntugeek.com/how-to-delete-recently-opened-files-history-in-ubuntu-14-04.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,72 @@
Ubuntu下使用CloudFlare作为ddclient提供商
================================================================================
DDclient是一个Perl客户端用于更新动态DNS网络服务提供商帐号下的动态DNS条目。它最初由保罗·巴利Paul Burry编写现在主要由维姆潘科wimpunk维护。它能做的不仅仅是动态DNS还可以通过几种不同的方式获取你的WAN口IP地址。
CloudFlare有一个鲜为人知的功能它允许你通过API或者一个叫做ddclient的命令行脚本更新你的DNS记录。不管用哪一种结果都一样而且它是免费的。
不幸的是ddclient并不能直接支持CloudFlare需要打上补丁才行。下面就介绍怎样在Debian或Ubuntu上完成这件事它同样适用于Raspberry Pi上的Raspbian。
### 需求 ###
首先保证你有一个自有域名然后登录到CloudFlare添加你的域名。遵循指令操作使用它给出的默认值就行了。你将让CloudFlare来托管你的域所以你需要调整你的注册机构的设置。如果你想要使用子域名请为它添加一条A记录。目前任何IP地址都可以。
### 在Ubuntu上安装ddclient ###
打开终端,并运行以下命令
sudo apt-get install ddclient
现在,你需要使用以下命令来安装补丁
sudo apt-get install curl sendmail libjson-any-perl libio-socket-ssl-perl
curl -O http://blog.peter-r.co.uk/uploads/ddclient-3.8.0-cloudflare-22-6-2014.patch
sudo patch /usr/sbin/ddclient < ddclient-3.8.0-cloudflare-22-6-2014.patch
以上命令用来完成ddclient的安装和打补丁
### 配置ddclient ###
你需要使用以下命令来编辑ddclient.conf文件
sudo vi /etc/ddclient.conf
添加以下信息
##
### CloudFlare (cloudflare.com)
###
ssl=yes
use=web, web=dyndns
protocol=cloudflare, \
server=www.cloudflare.com, \
zone=domain.com, \
login=you@email.com, \
password=api-key \
host.domain.com
注释掉:
#daemon=300
来自CloudFlare帐号页面的api密钥
ssl=yes 可能已经存在于该文件中
use=web, web=dyndns 表示使用dyndns来检测IP对NAT环境有用
你已经搞定了。登录到https://www.cloudflare.com检查你域名对应的IP地址是否与http://checkip.dyndns.com显示的地址一致。
使用以下命令来验证你的设置
sudo ddclient -daemon=0 -debug -verbose -noquiet
--------------------------------------------------------------------------------
via: http://www.ubuntugeek.com/how-to-use-cloudflare-as-a-ddclient-provider-under-ubuntu.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,30 @@
Linux 有问必答-- 如何在Perl中捕捉并处理信号
================================================================================
> **提问**: 我需要通过使用Perl的自定义信号处理程序来处理一个中断信号。在一般情况下我怎么在Perl程序中捕获并处理各种信号如INTTERM
作为POSIX标准的异步通知机制信号由操作系统发送给进程用来把某个事件通知给它。当信号产生时操作系统会中断目标程序的执行并把该信号交给为它注册的信号处理程序。任何人都可以定义和注册自定义信号处理程序或者依赖默认的信号处理程序。
在Perl中信号可以被捕获并通过一个全局的%SIG哈希变量指定处理方式。这个%SIG哈希变量以信号名为键值为指向相应信号处理程序的引用。因此如果你想为特定的信号定义自定义信号处理程序你可以直接更新%SIG中该信号对应的哈希值。
下面是一个代码段使用自定义信号处理程序来处理中断INT和终止TERM信号。
$SIG{INT} = \&signal_handler;
$SIG{TERM} = \&signal_handler;
sub signal_handler {
print "This is a custom signal handler\n";
die "Caught a signal $!";
}
![](https://farm4.staticflickr.com/3910/15141131060_f7958f20fb.jpg)
%SIG其他有效的哈希值有'IGNORE'和'DEFAULT'。当所分配的哈希值是'IGNORE'(例如,$SIG{CHLD}='IGNORE')时,相应的信号将被忽略。分配'DEFAULT'的哈希值(例如,$SIG{HUP}='DEFAULT'),意味着我们将使用一个默认的信号处理程序。
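虽然本文讨论的是Perlshell中也有类似的信号捕获机制trap。下面这个假设性的shell小示例与上面的Perl片段思路相同注册一个处理程序来捕获TERM信号

```shell
# 假设示例shell中用trap捕获TERM信号与在Perl中设置$SIG{TERM}的思路类似
out=$(sh -c 'trap "echo caught SIGTERM" TERM; kill -TERM $$; echo done')
echo "$out"
```

捕获成功时输出中会包含caught SIGTERM而不是子进程被直接终止。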
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/catch-handle-interrupt-signal-perl.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,69 @@
Linux有问必答 -- 如何在CentOS7上改变网络接口名
================================================================================
> **提问**: 在CentOS 7中我想将分配的网络接口名更改为别的名字。有什么合适的方法来重命名CentOS或RHEL 7上的网络接口
传统上Linux的网络接口被枚举为eth[0123...]但这些名称并不一定对应实际的硬件插槽、PCI位置、USB接口数量等这就带来了一个不可预知的命名问题例如由于设备探测行为的不确定性接口可能被意外改名从而导致错误的网络配置比如禁用了错误的接口或绕过了防火墙规则。而基于MAC地址的udev规则在虚拟化环境中也没什么用因为那里的MAC地址会像端口数量一样变化无常。
CentOS/RHEL 6曾推出了一种[一致且可预测的网络设备命名][1]方法。这种特性可以唯一地确定网络接口的名称使定位和区分设备更容易并且即使经过重启或硬件变更接口名称也保持不变。然而这种命名规则在CentOS/RHEL 6上并不是默认开启的。
从CentOS/RHEL7起可预见的命名规则变成了默认。根据这一规则接口名称被自动基于固件拓扑结构和位置信息来确定。现在即使添加或移除网络设备接口名称仍然保持固定而无需重新枚举和坏掉的硬件可以无缝替换。
* 基于接口类型的两个字母前缀:
* en -- 以太网
* sl -- 串行线路IP (slip)
* wl -- wlan
* ww -- wwan
* 名称类型:
* b<number> -- BCMA总线和核心号
* ccw<name> -- CCW总线组名
* o<index> -- 板载设备索引号
* s<slot>[f<function>][d<dev_port>] -- 热插拔插槽索引号
* x<MAC> -- MAC 地址
* [P<domain>]p<bus>s<slot>[f<function>][d<dev_port>]
* -- PCI 位置
* [P<domain>]p<bus>s<slot>[f<function>][u<port>][..]1[i<interface>]
* -- USB端口号链
新的命名方案有一个小缺点接口名称相比传统名称有点难以阅读。例如你可能会看到像enp0s3这样的名字。再者你也无法再自行控制接口名了。
![](https://farm4.staticflickr.com/3854/15294996451_fa731ce12c_z.jpg)
如果由于某种原因你更喜欢旧的方式并希望能给CentOS/RHEL 7的设备分配任意名称你需要覆盖默认的可预测命名规则定义基于MAC地址的udev规则。
**下面是如何在CentOS或RHEL7命名网络接口。**
首先让我们来禁用该可预测命名规则。对于这一点你可以在启动时传递“net.ifnames=0”的内核参数。这是通过编辑/etc/default/grub并加入“net.ifnames=0”到GRUB_CMDLINE_LINUX变量来实现的。
![](https://farm4.staticflickr.com/3898/15315687725_c82fbef5bc_z.jpg)
然后运行这条命令来重新生成GRUB配置并更新内核参数。
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
![](https://farm4.staticflickr.com/3909/15128981250_72f45633c1_z.jpg)
接下来编辑或创建一个udev的网络命名规则文件/etc/udev/rules.d/70-persistent-net.rules并添加下面一行。更换成你自己的MAC地址和接口。
$ sudo vi /etc/udev/rules.d/70-persistent-net.rules
----------
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:a9:7a:e1", ATTR{type}=="1", KERNEL=="eth*", NAME="sushi"
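规则中的MAC地址可以从“ip link show <接口名>”的输出里取得。下面是一个假设性的小示例用一段示例输出演示提取方法其中sample只是演示用的占位输出实际使用时应换成真实命令的输出

```shell
# 假设示例从ip link输出的link/ether行中提取MAC地址sample为演示用的占位输出
sample='    link/ether 08:00:27:a9:7a:e1 brd ff:ff:ff:ff:ff:ff'
echo "$sample" | awk '/link\/ether/ {print $2}'
```

输出应为08:00:27:a9:7a:e1。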
最后,重启电脑并验证新的接口名。
![](https://farm4.staticflickr.com/3861/15111594847_14e0c5a00d_z.jpg)
请注意对重命名后的接口进行配置仍然是你的责任。如果网络配置例如IPv4设置、防火墙规则是基于旧名称设置的则需要更新这些配置以反映更改后的名称。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/change-network-interface-name-centos7.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/appe-Consistent_Network_Device_Naming.html

View File

@ -0,0 +1,53 @@
Linux有问必答-- 如何用Perl检测Linux的发行版本
================================================================================
> **提问**:我需要写一个Perl程序它会包含Linux发行版相关的代码。为此Perl程序需要能够自动检测运行中的Linux的发行版如Ubuntu、CentOS、Debian、Fedora等等以及它是什么版本号。如何用Perl检测Linux的发行版本
如果要用Perl脚本检测Linux的发行版你可以使用一个名为[Linux::Distribution][1]的Perl模块。该模块通过检查/etc/lsb-release以及/etc下其他发行版特定的文件来猜测底层的Linux操作系统。它支持检测所有主要的Linux发行版包括Fedora、CentOS、Arch Linux、Debian、Ubuntu、SUSE、Red Hat、Gentoo、Slackware、Knoppix和Mandrake。
要在Perl中使用这个模块你首先需要安装它。
### 在Debian或者Ubuntu上安装 Linux::Distribution ###
基于Debian的系统直接用apt-get安装
$ sudo apt-get install liblinux-distribution-packages-perl
### 在Fedora、CentOS 或者RHEL上安装 Linux::Distribution ###
如果你的Linux没有Linux::Distribution模块的安装包如基于红帽的系统你可以使用CPAN来构建。
首先确保你的Linux系统安装了CPAN
$ sudo yum -y install perl-CPAN
使用这条命令来构建并安装模块:
$ sudo perl -MCPAN -e 'install Linux::Distribution'
### 用Perl确定Linux发行版 ###
Linux::Distribution模块安装完成之后你可以使用下面的代码片段来确定你运行的Linux发行版本。
use Linux::Distribution qw(distribution_name distribution_version);
my $linux = Linux::Distribution->new;
if ($linux) {
my $distro = $linux->distribution_name();
my $version = $linux->distribution_version();
print "Distro: $distro $version\n";
}
else {
print "Distro: unknown\n";
}
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/detect-linux-distribution-in-perl.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://metacpan.org/pod/Linux::Distribution

View File

@ -0,0 +1,39 @@
Linux有问必答-- 如何在PDF中嵌入LaTex中的所有字体
================================================================================
> **提问**: 我通过编译LaTex源文件生成了一份PDF文档。然而我注意到并不是所有字体都嵌入到了PDF文档中。我怎样才能确保所有的字体嵌入在由LaTex生成的PDF文档中
当你创建一个PDF文件时在PDF文件中嵌入字体是一个好主意。如果你不嵌入字体当阅读者的计算机上没有相应字体时PDF浏览器会用其他字体来替代。这会导致同一份文档在不同的PDF浏览器或操作系统平台上呈现出不同的样式。而在打印文档时缺失字体也会是一个问题。
当你从LaTex中生成PDF文档时例如用pdflatex或dvipdfm可能并不是所有的字体都嵌入在PDF文档中。例如[pdffonts][1]下面的输出中提示PDF文档中有缺少的字体如Helvetica
![](https://farm3.staticflickr.com/2944/15344704481_d691f66e75_z.jpg)
为了避免这样的问题下面是如何在LaTex编译时嵌入所有的字体。
$ latex document.tex
$ dvips -Ppdf -G0 -t letter -o document.ps document.dvi
$ ps2pdf -dPDFSETTINGS=/prepress \
-dCompatibilityLevel=1.4 \
-dAutoFilterColorImages=false \
-dAutoFilterGrayImages=false \
-dColorImageFilter=/FlateEncode \
-dGrayImageFilter=/FlateEncode \
-dMonoImageFilter=/FlateEncode \
-dDownsampleColorImages=false \
-dDownsampleGrayImages=false \
document.ps document.pdf
现在你可以看到所有的字体都被嵌入到PDF中了。
![](https://farm4.staticflickr.com/3890/15161184500_15ec673dca_z.jpg)
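上面的三条命令也可以封装成一个小的shell函数以便复用(示意写法;函数名embed_fonts为本文假设,且需要系统已安装TeX Live与Ghostscript):

```shell
# 依次调用latex、dvips、ps2pdf,任一步骤失败则中止
embed_fonts() {
    base="${1%.tex}"
    latex "$base.tex" &&
    dvips -Ppdf -G0 -t letter -o "$base.ps" "$base.dvi" &&
    ps2pdf -dPDFSETTINGS=/prepress \
        -dCompatibilityLevel=1.4 \
        -dAutoFilterColorImages=false \
        -dAutoFilterGrayImages=false \
        -dColorImageFilter=/FlateEncode \
        -dGrayImageFilter=/FlateEncode \
        -dMonoImageFilter=/FlateEncode \
        -dDownsampleColorImages=false \
        -dDownsampleGrayImages=false \
        "$base.ps" "$base.pdf"
}
```

之后运行 `embed_fonts document.tex` 即可得到 document.pdf。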
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/embed-all-fonts-pdf-document-latex.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://ask.xmodulo.com/check-which-fonts-are-used-pdf-document.html


@ -0,0 +1,215 @@
在Linux中使用Openswan搭建站点到站点IPsec VPN 隧道
================================================================================
虚拟私有网络(VPN)隧道通过Internet隧道技术将两个不同地理位置的网络安全地连接起来。当这两个网络是使用私有IP地址的私有局域网时,两个网络之间本来是不能相互访问的,这时使用隧道技术就可以使子网间的主机进行通讯。例如,VPN隧道技术经常被用于大型机构中不同办公区域子网的连接。
有时使用VPN隧道仅仅是因为它很安全。服务提供商与公司会使用这样一种方式设网络他们将重要的服务器数据库VoIP银行服务器放置到一个子网内仅仅让有权限的用户通过VPN隧道进行访问。如果需要搭建一个安的VPN隧道通常会选用[IPsec][1]因为IPsec VPN隧道被多重安全层所保护。
这篇指导文章将会告诉你如何构建站点到站点的 VPN隧道。
### 拓扑结构 ###
这篇指导文章将按照以下的拓扑结构来构建一个IPsec隧道。
![](https://farm4.staticflickr.com/3838/15004668831_fd260b7f1e_z.jpg)
![](https://farm6.staticflickr.com/5559/15004668821_36e02ab8b0_z.jpg)
![](https://farm6.staticflickr.com/5571/14821245117_3f677e4d58_z.jpg)
### 安装软件包以及准备VPN服务器 ###
一般情况下你仅能管理A点但是根据需求你可能需要同时管理A点与B点。我们从安装Openswan软件开始。
基于Red Hat的系统CentOSFedora或RHEL:
# yum install openswan lsof
在基于Debian的系统DebianUbuntu或Linux Mint):
# apt-get install openswan
现在,在VPN服务器上执行下列命令,禁用重定向功能(如果有):

    # for vpn in /proc/sys/net/ipv4/conf/*; do echo 0 > $vpn/accept_redirects; echo 0 > $vpn/send_redirects; done
接下来,修改内核参数,允许IP转发并禁用重定向功能。
# vim /etc/sysctl.conf
----------
net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
重新加载 /etc/sysctl.conf文件:
# sysctl -p
在防火墙中启用所需的端口,并保证不与系统当前的规则冲突。
# iptables -A INPUT -p udp --dport 500 -j ACCEPT
# iptables -A INPUT -p tcp --dport 4500 -j ACCEPT
# iptables -A INPUT -p udp --dport 4500 -j ACCEPT
最后我们为NAT创建防火墙规则。
# iptables -t nat -A POSTROUTING -s site-A-private-subnet -d site-B-private-subnet -j SNAT --to site-A-Public-IP
最后,保存防火墙规则,使其持久生效。
#### 注意: ####
- 你可以用iptables的MASQUERADE替代SNAT。理论上说它也能正常工作,但是在VPS上有可能会发生冲突,所以我仍然建议使用SNAT。
- 如果你同时在管理B点那么在B点也设置同样的规则。
- 直连路由则不需要SNAT。
### 准备配置文件 ###
我们要配置的第一个文件是ipsec.conf。不论你配置的是哪一台服务器,总是将你这一端的服务器看成“左边”的,而将远端的看作“右边”的。以下配置是在站点A的VPN服务器上做的。
# vim /etc/ipsec.conf
----------
## general configuration parameters ##
config setup
plutodebug=all
plutostderrlog=/var/log/pluto.log
protostack=netkey
nat_traversal=yes
virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/16
## disable opportunistic encryption in Red Hat ##
oe=off
## disable opportunistic encryption in Debian ##
## Note: this is a separate declaration statement ##
include /etc/ipsec.d/examples/no_oe.conf
## connection definition in Red Hat ##
conn demo-connection-redhat
authby=secret
auto=start
ike=3des-md5
## phase 1 ##
keyexchange=ike
## phase 2 ##
phase2=esp
phase2alg=3des-md5
compress=no
pfs=yes
type=tunnel
left=<siteA-public-IP>
leftsourceip=<siteA-public-IP>
leftsubnet=<siteA-private-subnet>/netmask
## for direct routing ##
leftsubnet=<siteA-public-IP>/32
leftnexthop=%defaultroute
right=<siteB-public-IP>
rightsubnet=<siteB-private-subnet>/netmask
## connection definition in Debian ##
conn demo-connection-debian
authby=secret
auto=start
## phase 1 ##
keyexchange=ike
## phase 2 ##
esp=3des-md5
pfs=yes
type=tunnel
left=<siteA-public-IP>
leftsourceip=<siteA-public-IP>
leftsubnet=<siteA-private-subnet>/netmask
## for direct routing ##
leftsubnet=<siteA-public-IP>/32
leftnexthop=%defaultroute
right=<siteB-public-IP>
rightsubnet=<siteB-private-subnet>/netmask
有许多方式实现身份验证。这里使用预共享密钥,并将它添加到文件 /etc/ipsec.secrets 中。
# vim /etc/ipsec.secrets
----------
siteA-public-IP siteB-public-IP: PSK "pre-shared-key"
## in case of multiple sites ##
siteA-public-IP siteC-public-IP: PSK "corresponding-pre-shared-key"
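由于预共享密钥是以明文形式保存在该文件中的,建议将其权限收紧为仅root可读写(这是本文的补充建议,并非原文步骤):

```shell
# 限制 ipsec.secrets 的访问权限,避免预共享密钥泄露
chmod 600 /etc/ipsec.secrets
```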
### 启动服务并排除故障 ###
目前服务器已经可以创建站点到站点的VPN隧道了。如果你可以管理B站点请确认已经为B服务器配置了所需的参数。对于基于Red Hat的系统使用chkconfig命令以确定这项服务以设置为开机自启动。
# /etc/init.d/ipsec restart
如果两端的服务器都没有问题,那么可以打通隧道了。注意以下两点后,你可以使用ping命令来测试隧道。
1.当隧道没有启动时,A点无法ping通B点的子网。
2.隧道启动后,在A点可以直接ping通B点子网内的IP。
并且到达目的子网的路由也会出现在服务器的路由表中。译者子网指的是site-B,服务器指的是site-A
# ip route
----------
[siteB-private-subnet] via [siteA-gateway] dev eth0 src [siteA-public-IP]
default via [siteA-gateway] dev eth0
另外,我们可以使用命令来检测隧道的状态。
# service ipsec status
----------
IPsec running - pluto pid: 20754
pluto pid 20754
1 tunnels up
some eroutes exist
----------
# ipsec auto --status
----------
## output truncated ##
000 "demo-connection-debian": myip=<siteA-public-IP>; hisip=unset;
000 "demo-connection-debian": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; nat_keepalive: yes
000 "demo-connection-debian": policy: PSK+ENCRYPT+TUNNEL+PFS+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 32,28; interface: eth0;
## output truncated ##
000 #184: "demo-connection-debian":500 STATE_QUICK_R2 (IPsec SA established); EVENT_SA_REPLACE in 1653s; newest IPSEC; eroute owner; isakmp#183; idle; import:not set
## output truncated ##
000 #183: "demo-connection-debian":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 1093s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:not set
日志文件/var/log/pluto.log记录了身份验证、密钥交换以及隧道各个阶段的信息。如果你的隧道无法启动,可以查看这个文件。
如果你确信所有配置都是正确的,但是你的隧道仍然无法启动,那么你需要检查以下几项。
1.很多ISP会过滤IPsec端口。确认你的ISP允许使用UDP 500、TCP/UDP 4500端口。你可以试着在远端通过telnet连接服务器的IPsec端口。
1.确认所用的端口在服务器防火墙规则中是允许的。
1.确认两端服务器的预共享密钥是一致的。
1.左边和右边的参数应该正确配置在两端的服务器上
1.如果你遇到的是NAT问题试着使用SNAT替换MASQUERADING。
总结这篇指导重点在于使用Openswa搭建站点到站点IPsec VPN的流程。管理员可以使用VPN使得一些重要的资源仅能通过隧道来获取这对于加强安全性很有效果。同时VPN确保数据不被监听以及截。
希望对你有帮助。请让我知道你的意见。
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/create-site-to-site-ipsec-vpn-tunnel-openswan-linux.html
作者:[Sarmed Rahman][a]
译者:[SPccman](https://github.com/SPccman)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://en.wikipedia.org/wiki/IPsec
[2]:https://www.openswan.org/


@ -0,0 +1,103 @@
使用Linux命令行嗅探HTTP流量
================================================================================
假设由于某种原因,你需要嗅探HTTP站点的流量(如HTTP请求与响应)。举个例子,你可能在测试一个web服务器的实验性功能,或者在为某个web应用或RESTful服务排错,又或者正在为PAC(代理自动配置)排错,或寻找某个站点下载的恶意软件。不论什么原因,在这些情况下,进行HTTP流量嗅探对于系统管理员、开发者、甚至最终用户来说都是很有帮助的。
数据包嗅探工具tcpdump被广泛用于实时数据包的导出,但是你需要设置过滤规则来捕获HTTP流量,而且它的原始输出通常不便于停留在HTTP协议层进行分析。实时web服务器日志解析器(如[ngxtop][3])能提供可读的实时web流量跟踪信息,但这仅适用于可以完全访问web服务器实时日志的情况。
要是有一个仅用于抓取HTTP流量的类tcpdump数据包嗅探工具就太好了。事实上,[httpry][4]就是这样的**HTTP包嗅探工具**。httpry捕获HTTP数据包,并且将HTTP协议层的数据内容以可读形式列举出来。通过这篇指导文章,让我们了解如何使用httpry工具嗅探HTTP流量。
###在Linux上安装httpry###
在基于Debian的系统(Ubuntu 或 LinuxMint)上,基础仓库中没有httpry安装包(译者注:本人ubuntu14.04,仓库中已有包,可直接安装),所以我们需要通过源码安装:
$ sudo apt-get install gcc make git libpcap0.8-dev
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install
在FedoraCentOS 或 RHEL系统可以使用如下yum命令安装httpry。在CentOS/RHEL系统上运行yum之前使能[EPEL repo][5]。
$ sudo yum install httpry
如果你想通过源码来构建httpry,可以通过这几个步骤实现:
$ sudo yum install gcc make git libpcap-devel
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install
###httpry的基本用法###
以下是httpry的基本用法
$ sudo httpry -i <network-interface>
httpry就会监听指定的网络接口,并且实时地显示捕获到的HTTP请求/响应。
![](https://farm4.staticflickr.com/3883/14985851635_7b94787c6d_z.jpg)
在大多数情况下,由于发送与接收的数据包过多导致刷屏很快,难以实时分析。这时候你肯定想将捕获到的数据包保存下来,以便离线分析。可以使用'-b'或'-o'选项保存数据包。'-b'选项将数据包以二进制文件的形式保存下来,之后可以用httpry打开该文件来浏览;'-o'选项则将数据以可读的文本文件形式保存下来。
以二进制形式保存文件:
$ sudo httpry -i eth0 -b output.dump
浏览所保存的HTTP数据包文件
$ httpry -r output.dump
注意,不需要根用户权限就可以使用'-r'选项读取数据文件。
将httpry数据以字符文件保存
$ sudo httpry -i eth0 -o output.txt
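'-o'生成的文本输出便于用grep、awk等工具做进一步统计。下面用一段假设的输出样例演示如何统计各HTTP方法的请求次数(样例的内容与字段位置仅为示意,实际格式以你的httpry版本输出为准):

```shell
# 构造一段假设的httpry文本输出('>'表示请求方向,'<'表示响应方向)
cat <<'EOF' > /tmp/httpry_sample.txt
2014-09-27 10:00:01 192.168.1.10 93.184.216.34 > GET example.com / HTTP/1.1 - -
2014-09-27 10:00:02 192.168.1.10 93.184.216.34 < - - - - HTTP/1.1 200 OK
2014-09-27 10:00:03 192.168.1.11 93.184.216.34 > POST example.com /login HTTP/1.1 - -
EOF

# 只取请求行,统计第6列(HTTP方法)出现的次数
grep ' > ' /tmp/httpry_sample.txt | awk '{ print $6 }' | sort | uniq -c
```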
###httpry 的高级应用###
如果你想监视指定的HTTP方法GETPOSTPUTHEADCONNECT等使用'-m'选项:
$ sudo httpry -i eth0 -m get,head
![](https://farm6.staticflickr.com/5551/14799184220_3b449d422c_z.jpg)
如果你下载了httpry的源码,你会发现源码目录下有一系列用于分析httpry输出的Perl脚本,它们位于httpry/scripts/plugins目录。如果你想写一个定制的httpry输出分析器,这些脚本可以作为很好的例子。其中一些脚本有如下功能:
- **hostnames**: 显示唯一主机名列表。
- **find_proxies**: 探测web代理。
- **search_terms**: 查找并统计搜索服务中的检索词。
- **content_analysis**: 查找含有指定关键词的URL。
- **xml_output**: 将输出转换为XML形式。
- **log_summary**: 生成日志摘要。
- **db_dump**: 将日志文件中的数据保存到数据库。
在使用这些脚本之前,首先使用'-o'选项运行httpry。当获取到输出文件后立即使用如下命令执行脚本
$ cd httpry/scripts
$ perl parse_log.pl -d ./plugins <httpry-output-file>
你可能在使用插件的时候遇到警告。比如,如果你没有安装带有DBI接口的MySQL数据库,那么使用db_dump插件时可能会失败。如果某个插件初始化失败,那么这个插件就不能使用,相应的警告可以忽略。
当parse_log.pl执行完成后,你将在httpry/scripts目录下看到数个分析结果文件。例如,log_summary.txt的内容与下图类似。
![](https://farm4.staticflickr.com/3845/14799162189_b85abdf21d_z.jpg)
总结当你要分析HTTP数据包的时候httpry非常有用。它可能并不被大多Linux使用着所熟知但会用总是有好处的。你对这个工具有什么看法呢
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/sniff-http-traffic-command-line-linux.html
作者:[Dan Nanni][a]
译者:[DoubleC](https://github.com/DoubleC)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/2012/12/how-to-set-up-proxy-auto-config-on-ubuntu-desktop.html
[2]:http://xmodulo.com/2012/11/what-are-popular-packet-sniffers-on-linux.html
[3]:http://xmodulo.com/2014/06/monitor-nginx-web-server-command-line-real-time.html
[4]:http://dumpsterventures.com/jason/httpry/
[5]:http://xmodulo.com/2013/03/how-to-set-up-epel-repository-on-centos.html


@ -0,0 +1,86 @@
Linux 常见问题解答 --怎么用checkinstall从源码创建一个RPM或DEB包
================================================================================
> **问题**:我想从源码安装软件。有没有一种方式可以先从源码创建软件包再安装,而不是直接运行“make install”?这样以后如果我想卸载,就可以很容易地移除该程序。
如果你是用“make install”从源码安装的Linux程序,那么想完整移除它将变得非常麻烦,除非程序的作者在Makefile里提供了uninstall目标。否则你只能对比安装前后系统中文件的完整列表,然后手工移除安装过程中加入的所有文件。
这时候Checkinstall就可以派上用场了。Checkinstall会跟踪安装命令(例如“make install”、“make install_modules”等)所创建或修改的所有文件的路径,并建立一个标准的二进制软件包,让你能用发行版的标准包管理工具来安装或卸载它(例如Red Hat的yum或者Debian的apt-get命令)。根据[官方文档][1],它在Slackware、SuSe、Mandrake和Gentoo上同样可用。
在这篇文章中我们只集中在红帽子和Debian为基础的发行版并展示怎样从源码使用Checkinstall创建一个RPM和DEB软件包
### 在linux上安装Checkinstall ###
在Debian及其衍生版上安装Checkinstall:
# aptitude install checkinstall
要在基于Red Hat的发行版上安装Checkinstall,你需要下载一个预先构建好的Checkinstall rpm(例如从 [http://rpm.pbone.net][2] 下载),因为它已经从Repoforge仓库里移除了。下面这个面向CentOS 6的rpm包在CentOS 7上同样可用。
# wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/ikoinoba/CentOS_CentOS-6/x86_64/checkinstall-1.6.2-3.el6.1.x86_64.rpm
# yum install checkinstall-1.6.2-3.el6.1.x86_64.rpm
checkinstall安装好之后,你就可以用下列格式创建特定的软件包:
# checkinstall <install-command>
如果没有参数默认安装命令“make install”将被使用
### 用Checkinstall创建一个RPM或DEB包 ###
在这个例子里我们将创建一个htop包对于linux交互式文本模式进程查看器就像上面的 steroids
首先,让我们从项目的官方网站下载源代码。按照最佳实践,我们将源码保存到/usr/local/src下,并解压:
# cd /usr/local/src
# wget http://hisham.hm/htop/releases/1.0.3/htop-1.0.3.tar.gz
# tar xzf htop-1.0.3.tar.gz
# cd htop-1.0.3
让我们先弄清htop的安装命令,以便交给checkinstall调用。如下所示,htop用“make install”命令安装:
# ./configure
# make install
因此创建一个htop包我们可以调用checkinstall不带任何参数安装这将使用“make install”命令创建一个包。随着这个过程 checkinstall命令会问你一个连串的问题。
总之这个命令会创建一个htop包 **htop**:
# ./configure
# checkinstall
回答“Y”“我会创建一个默认设置的包文件
![](https://farm6.staticflickr.com/5577/15118597217_1fdd0e0346_z.jpg)
你可以输入一个包的简短描述然后按两次ENTER
![](https://farm4.staticflickr.com/3898/15118442190_604b71d9af.jpg)
输入一个数字来修改下面的任何值,或按ENTER继续:
![](https://farm4.staticflickr.com/3898/15118442180_428de59d68_z.jpg)
然后,根据你的Linux系统类型,checkinstall将自动创建一个.rpm或者.deb软件包:
在CentOS7
![](https://farm4.staticflickr.com/3921/15282103066_5d688b2217_z.jpg)
在Debian 7:
![](https://farm4.staticflickr.com/3905/15118383009_4909a7c17b_z.jpg)
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/build-rpm-deb-package-source-checkinstall.html
译者:[luoyutiantang](https://github.com/luoyutiantang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://checkinstall.izto.org/docs/README
[2]:http://rpm.pbone.net/
[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html


@ -1,19 +1,18 @@
20 Useful Commands of Sysstat Utilities (mpstat, pidstat, iostat and sar) for Linux Performance Monitoring
================================================================================
In our last article, we have learned about installing and upgrading the **sysstat** package and understanding briefly about the utilities which comes with the package.
Sysstat工具包中20个实用的Linux性能监控工具包括mpstat, pidstat, iostat 和sar
===============================================================
在我们上一篇文章中,我们已经学习了如何去安装和更新**sysstat**,并且了解了包中的一些实用工具。
注:此文一并附上,在同一个原文更新
- [Sysstat Performance and Usage Activity Monitoring Tool For Linux][1]
![20 Sysstat Commands for Linux Monitoring](http://www.tecmint.com/wp-content/uploads/2014/09/sysstat-commands.png)
20 Sysstat Commands for Linux Monitoring
Linux系统监控的20个Sysstat命令
今天,我们将会通过一些有趣的实例来学习**mpstat**, **pidstat**, **iostat**和**sar**等工具,这些工具可以帮组我们找出系统中的问题。这些工具都包含了不同的选项,这意味着你可以根据不同的工作使用不同的选项,或者根据你的需求来自定义脚本。我们都知道,系统管理员都会有点懒,他们经常去寻找一些更简单的方法来完成他们的工作。
Today, we are going to work with some interesting practical examples of **mpstat, pidstat, iostat** and **sar** utilities, which can help us to identify the issues. We have different options to use these utilities, I mean you can fire the commands manually with different options for different kind of work or you can create your customized scripts according to your requirements. You know Sysadmins are always bit Lazy, and always tried to find out the easy way to do the things with minimum efforts.
### mpstat - 处理器统计信息 ###
### mpstat Processors Statistics ###
1.Using mpstat command without any option, will display the Global Average Activities by All CPUs.
1.不带任何参数的使用mpstat命令将会输出所有CPU的平均统计信息
tecmint@tecmint ~ $ mpstat
@ -22,7 +21,7 @@ Today, we are going to work with some interesting practical examples of **mpstat
12:23:57 IST CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
12:23:57 IST all 37.35 0.01 4.72 2.96 0.00 0.07 0.00 0.00 0.00 54.88
2.Using mpstat with option **-P** (Indicate Processor Number) and ALL, will display statistics about all CPUs one by one starting from 0. 0 will the first one.
2.使用‘**-p**(处理器编码)和ALL参数将会从0开始独立的输出每个CPU的统计信息0表示第一个cpu。
tecmint@tecmint ~ $ mpstat -P ALL
@ -33,7 +32,7 @@ Today, we are going to work with some interesting practical examples of **mpstat
12:29:26 IST 0 37.90 0.01 4.96 2.62 0.00 0.03 0.00 0.00 0.00 54.48
12:29:26 IST 1 36.75 0.01 4.19 2.54 0.00 0.11 0.00 0.00 0.00 56.40
3.To display the statistics for **N** number of iterations after n seconds interval with average of each cpu use the following command.
3.要进行‘**N**平均每次间隔n秒的输出CPU统计信息如下所示。
tecmint@tecmint ~ $ mpstat -P ALL 2 5
@ -54,7 +53,9 @@ Today, we are going to work with some interesting practical examples of **mpstat
12:36:27 IST 0 34.34 0.00 4.04 0.00 0.00 0.00 0.00 0.00 0.00 61.62
12:36:27 IST 1 32.82 0.00 6.15 0.51 0.00 0.00 0.00 0.00 0.00 60.51
4.The option **I** will print total number of interrupt statistics about per processor.
(LCTT译注 上面命令中2 表示每2秒执行一次mpstat -P ALL命令 5表示共执行5次)
4.使用‘**I**’参数将会输出每个处理器的中断统计信息
tecmint@tecmint ~ $ mpstat -I
@ -71,7 +72,7 @@ Today, we are going to work with some interesting practical examples of **mpstat
12:39:56 IST 0 0.00 116.49 0.05 0.27 7.33 0.00 1.22 10.44 0.13 37.47
12:39:56 IST 1 0.00 111.65 0.05 0.41 7.07 0.00 56.36 9.97 0.13 41.38
5.Get all the above information in one command i.e. equivalent to “**-u -I ALL -p ALL**“.
5.使用‘**-A**’参数将会输出上面提到的所有信息,等同于‘**-u -I ALL -P ALL**’。
tecmint@tecmint ~ $ mpstat -A
@ -95,15 +96,15 @@ Today, we are going to work with some interesting practical examples of **mpstat
12:41:39 IST 0 0.00 116.96 0.05 0.26 7.12 0.00 1.24 10.42 0.12 36.99
12:41:39 IST 1 0.00 112.25 0.05 0.40 6.88 0.00 55.05 9.93 0.13 41.20
### pidstat Process and Kernel Threads Statistics ###
###pidstat - 进程和内核线程的统计信息###
This is used for process monitoring and current threads, which are being managed by kernel. pidstat can also check the status about child processes and threads.
该命令是用于监控进程和当前受内核管理的线程。pidstat还可以检查子进程和线程的状态。
#### Syntax ####
#### 语法 ####
# pidstat <OPTIONS> [INTERVAL] [COUNT]
6.Using pidstat command without any argument, will display all active tasks.
6.不带任何参数使用pidstat将会输出所有活跃的任务。
tecmint@tecmint ~ $ pidstat
@ -125,7 +126,7 @@ This is used for process monitoring and current threads, which are being managed
12:47:24 IST 0 365 0.01 0.00 0.00 0.01 0 systemd-udevd
12:47:24 IST 0 476 0.00 0.00 0.00 0.00 0 kworker/u9:1
7.To print all active and non-active tasks use the option **-p** (processes).
7.使用‘**-p**(进程)参数输出所有活跃和非活跃的任务。
tecmint@tecmint ~ $ pidstat -p ALL
@ -150,7 +151,7 @@ This is used for process monitoring and current threads, which are being managed
12:51:55 IST 0 19 0.00 0.00 0.00 0.00 0 writeback
12:51:55 IST 0 20 0.00 0.00 0.00 0.00 1 kintegrityd
8.Using pidstat command with **-d 2** option, we can get I/O statistics and 2 is interval in seconds to get refreshed statistics. This option can be handy in situation, where your system is undergoing heavy I/O and you want to get clues about the processes consuming high resources.
8.使用‘**-d 2**参数我们可以看到I/O统计信息2表示以秒为单位对统计信息进行刷新。这个参数可以方便的知道当系统在进行繁重的I/O时那些进行占用大量的资源。
tecmint@tecmint ~ $ pidstat -d 2
@ -168,7 +169,8 @@ This is used for process monitoring and current threads, which are being managed
03:27:03 EDT 25100 0.00 6.00 0.00 sendmail
03:27:03 EDT 30829 0.00 6.00 0.00 java
9.To know the cpu statistics along with all threads about the process id **4164** at interval of **2** sec for **3** times use the following command with option -t (display statistics of selected process).
9.想要每隔**2**秒输出进程**4164**及其所有线程的CPU统计信息,共输出**3**次,则使用如下带‘**-t**’参数(显示所选进程的线程统计信息)的命令。
tecmint@tecmint ~ $ pidstat -t -p 4164 2 3
@ -185,7 +187,7 @@ This is used for process monitoring and current threads, which are being managed
01:09:08 IST 1000 - 4176 0.00 0.00 0.00 0.00 1 |__gdbus
01:09:08 IST 1000 - 4177 0.00 0.00 0.00 0.00 1 |__gmain
10.Use the **-rh** option, to know the about memory utilization of processes which are frequently varying their utilization in **2** second interval.
10.使用‘**-rh**参数将会输出进程的内存使用情况。如下命令每隔2秒刷新经常的内存使用情况。
tecmint@tecmint ~ $ pidstat -rh 2 3
@ -208,7 +210,7 @@ This is used for process monitoring and current threads, which are being managed
1409816699 1000 4164 599.00 0.00 1261944 476664 11.74 firefox
1409816699 1000 6676 168.00 0.00 4436 1020 0.03 pidstat
11.To print all the process of containing string “**VB**“, use **-t** option to see threads as well.
11.使用‘**-G**’参数可以输出包含某个特定字符串的进程信息。如下命令输出所有包含‘**VB**’字符串的进程的统计信息,并使用‘**-t**’参数将线程的信息也一并输出。
tecmint@tecmint ~ $ pidstat -G VB
@ -237,7 +239,7 @@ This is used for process monitoring and current threads, which are being managed
03:19:52 PM 0 1933 - 0.04 0.89 0.00 0.93 0 VBoxClient
03:19:52 PM 0 - 1936 0.04 0.89 0.00 0.93 1 |__X11-NOTIFY
12.To get realtime priority and scheduling information use option **-R** .
12.使用‘**-R**’参数输出实时的进程优先级和调度信息。
tecmint@tecmint ~ $ pidstat -R
@ -248,17 +250,17 @@ This is used for process monitoring and current threads, which are being managed
01:09:08 IST 1000 5 99 FIFO migration/0
01:09:08 IST 1000 6 99 FIFO watchdog/0
Here, I am not going to cover about Iostat utility, as we are already covered it. Please have a look on “[Linux Performance Monitoring with Vmstat and Iostat][2]注:此文也一并附上在同一个原文更新中” to get all details about iostat.
因为我们之前已经介绍过iostat命令了,本文中不再赘述。若想查看iostat命令的详细信息,请参看“[使用Vmstat和Iostat进行Linux性能监控][2]”(注:此文也一并附上在同一个原文更新中)。
### sar System Activity Reporter ###
###sar - 系统活动报告###
Using “**sar**” command, we can get the reports about whole systems performance. This can help us to locate the system bottleneck and provide the help to find out the solutions to these annoying performance issues.
我们可以使用‘**sar**’命令来获得整个系统性能的报告。这有助于我们定位系统性能的瓶颈,并且有助于我们找出这些烦人的性能问题的解决方法。
The Linux Kernel maintains some counter internally, which keeps track of all requests, their completion time and I/O block counts etc. From all these information, sar calculates rates and ratio of these request to find out about bottleneck areas.
Linux内核维护着一些内部计数器,它们记录了所有的请求、请求的完成时间以及I/O块数等信息。sar从所有这些信息中计算出请求的处理速率和占比,以便找出瓶颈所在。
The main thing about the sar is that, it reports all activities over a period if time. So, make sure that sar collect data on appropriate time (not on Lunch time or on weekend.:)
sar命令主要的用途是生成某段时间内所有活动的报告,因此必须确保sar在适当的时间进行数据采集(而不是恰好在午餐时间或者周末 :)。
13.Following is a basic command to invoke sar. It will create one file named “**sarfile**” in your current directory. The options **-u** is for CPU details and will collect **5** reports at an interval of **2** seconds.
13.下面是执行sar命令的基本用法。它将会在当前目录下创建一个名为‘**sarfile**’的文件。‘**-u**’参数表示收集CPU详细信息,‘**5**’表示生成5次报告,‘**2**’表示每次报告的时间间隔为2秒。
tecmint@tecmint ~ $ sar -u -o sarfile 2 5
@ -272,22 +274,22 @@ The main thing about the sar is that, it reports all activities over a period if
01:42:38 IST all 50.75 0.00 3.75 0.00 0.00 45.50
Average: all 46.30 0.00 3.93 0.00 0.00 49.77
14.In the above example, we have invoked sar interactively. We also have an option to invoke it non-interactively via cron using scripts **/usr/local/lib/sa1** and **/usr/local/lib/sa2** (If you have used **/usr/local** as prefix during installation time).
14.在上面的例子中,我们是交互式地执行sar命令的。sar也支持通过cron非交互地执行,即使用**/usr/local/lib/sa1**和**/usr/local/lib/sa2**脚本(如果你在安装时使用了**/usr/local**作为前缀)。
- **/usr/local/lib/sa1** is a shell script that we can use for scheduling cron which will create daily binary log file.
- **/usr/local/lib/sa2** is a shell script will change binary log file to human-readable form.
- **/usr/local/lib/sa1**是一个可以通过cron调度、生成每日二进制日志文件的shell脚本。
- **/usr/local/lib/sa2**是一个将二进制日志文件转换为用户可读形式的shell脚本。
Use the following Cron entries for making this non-interactive:
使用如下cron条目来非交互地运行sar:
# Run sa1 shell script every 10 minutes for collecting data
# 每2分钟运行一次sa1脚本来采集数据(每次以2秒为间隔采样10次)
*/2 * * * * /usr/local/lib/sa/sa1 2 10
# Generate a daily report in human readable format at 23:53
#在每天23:53时生成一个用户可读的日常报告
53 23 * * * /usr/local/lib/sa/sa2 -A
At the back-end sa1 script will call **sadc** (System Activity Data Collector) utility for fetching the data at a particular interval. **sa2** will call sar for changing binary log file to human readable form.
在后台,sa1脚本会调用**sadc**(系统活动数据收集器,System Activity Data Collector)工具,以特定的时间间隔采集数据;**sa2**脚本则调用sar,将二进制日志文件转换为用户可读的形式。
15.Check run queue length, total number of processes and load average using **-q** option.
15.使用‘**-q**’参数来查看运行队列的长度、进程总数和平均负载:
tecmint@tecmint ~ $ sar -q 2 5
@ -301,7 +303,7 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u
02:00:54 IST 0 431 1.64 1.23 0.97 0
Average: 2 431 1.68 1.23 0.97 0
16.Check statistics about the mounted file systems using **-F**.
16.使用‘**-F**’参数查看当前挂载的文件系统统计信息
tecmint@tecmint ~ $ sar -F 2 4
@ -322,7 +324,7 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u
Summary MBfsfree MBfsused %fsused %ufsused Ifree Iused %Iused FILESYSTEM
Summary 1001 449 30.95 1213790475088.86 18919505 364463 1.89 /dev/sda1
17.View network statistics using **-n DEV**.
17.使用‘**-n DEV**’参数查看网络统计信息
tecmint@tecmint ~ $ sar -n DEV 1 3 | egrep -v lo
@ -334,7 +336,7 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u
02:12:00 IST eth0 0.00 0.00 0.00 0.00 0.00 0.00 0.00
02:12:00 IST vmnet1 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18.View block device statistics like iostat using **-d**.
18.使用‘**-d**参数查看块设备统计信息与iostat类似
tecmint@tecmint ~ $ sar -d 1 3
@ -349,7 +351,7 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u
02:13:19 IST DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
02:13:20 IST dev8-0 7.00 32.00 80.00 16.00 0.11 15.43 15.43 10.80
19.To print memory statistics use **-r** option.
19.使用‘**-r**’参数输出内存统计信息。
tecmint@tecmint ~ $ sar -r 1 3
@ -361,7 +363,7 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u
02:14:32 IST 1469112 2591388 63.82 133060 1550036 3705288 45.28 1130252 1360168 804
Average: 1469165 2591335 63.82 133057 1549824 3710531 45.34 1129739 1359987 677
20.Using **sadf -d**, we can extract data in format which can be processed using databases.
20.使用‘**sadf -d**’参数可以将数据导出为数据库可以使用的格式。
tecmint@tecmint ~ $ sadf -d /var/log/sa/sa20140903 -- -n DEV | grep -v lo
@ -381,20 +383,20 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u
tecmint;2;2014-09-03 12:00:10 UTC;eth0;0.50;0.50;0.03;0.04;0.00;0.00;0.00;0.00
tecmint;2;2014-09-03 12:00:12 UTC;eth0;1.00;0.50;0.12;0.04;0.00;0.00;0.00;0.00
You can also save this to a csv and then can draw chart for presentation kind of stuff as below.
你也可以将这些数据保存到一个csv文档中,然后绘制成图表用于展示,如下所示:
![Network Graph](http://www.tecmint.com/wp-content/uploads/2014/09/sar-graph.png)
Network Graph
网络信息图表
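sadf -d 输出的分号分隔数据也可以直接交给awk处理。下面沿用上文输出中的两行样例,计算eth0的平均每秒接收包数(此处假定第5列为rxpck/s,字段位置以你的sysstat版本的实际输出为准):

```shell
# 将样例数据写入临时文件(取自上文 sadf -d 的输出)
cat <<'EOF' > /tmp/sar_net.txt
tecmint;2;2014-09-03 12:00:10 UTC;eth0;0.50;0.50;0.03;0.04;0.00;0.00;0.00;0.00
tecmint;2;2014-09-03 12:00:12 UTC;eth0;1.00;0.50;0.12;0.04;0.00;0.00;0.00;0.00
EOF

# 按分号切分,对第5列(rxpck/s)求平均
awk -F';' '{ sum += $5; n++ } END { printf "avg rxpck/s: %.2f\n", sum/n }' /tmp/sar_net.txt
# 输出: avg rxpck/s: 0.75
```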
Thats it for now, you can refer man pages for more information about each option and dont forget to tell about article with your valuable comments.
现在你可以参考man手册来后去每个参数的更多详细信息并且请在文章下留下你宝贵的评论。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/sysstat-commands-to-monitor-linux/
作者:[Kuldeep Sharma][a]
译者:[译者ID](https://github.com/译者ID)
译者:[cvsher](https://github.com/cvsher)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出