diff --git a/published/20140711 How to use systemd for system administration on Debian.md b/published/20140711 How to use systemd for system administration on Debian.md new file mode 100644 index 0000000000..6dab8f50d0 --- /dev/null +++ b/published/20140711 How to use systemd for system administration on Debian.md @@ -0,0 +1,106 @@ +在 Debian 上使用 systemd 管理系统 +================================================================================ +人类已经无法阻止 systemd 占领全世界的 Linux 系统了,唯一阻止它的方法是在你自己的机器上手动卸载它。到目前为止,systemd 已经创建了比任何软件都多的技术问题、感情问题和社会问题。这一点从[“Linux 初始化软件之战”][1]上就能看出,这场争论在 Debian 开发者之间持续了好几个月。当 Debian 技术委员会最终决定将 systemd 放到 Debian 8(代号 Jessie)的发行版里面时,其反对者试图通过多种努力来[取代这项决议][2],甚至有人扬言要威胁那些支持 systemd 的开发者的生命安全。 + +这也说明了 systemd 对 Unix 传承下来的系统处理方式有很大的干扰。“一个软件只做一件事情”的哲学思想已经被这个新来者彻底颠覆。除了取代了 sysvinit 成为新的系统初始化工具外,systemd 还是一个系统管理工具。目前为止,由于 systemd-sysv 这个软件包提供的兼容性,那些我们使用惯了的工具还能继续工作。但是当 Debian 将 systemd 升级到214版本后,这种兼容性就不复存在了。升级措施预计会在 Debian 8 "Jessie" 的稳定分支上进行。从此以后用户必须使用新的命令来管理系统、执行任务、变换运行级别、查询系统日志等等。不过这里有一个应对方案,那就是在 .bashrc 文件里面添加一些别名。 + +现在就让我们来看看 systemd 是怎么改变你管理系统的习惯的。在使用 systemd 之前,你得先把 sysvinit 保存起来,以便在 systemd 出错的时候还能用 sysvinit 启动系统。这种方法**只有在没安装 systemd-sysv 的情况下才能生效**,具体操作方法如下: + + # cp -av /sbin/init /sbin/init.sysvinit + +在紧急情况下,可以把下面的文本: + + init=/sbin/init.sysvinit + +添加到内核启动参数项那里。 + +### systemctl 的基本用法 ### + +systemctl 的功能是替代“/etc/init.d/foo start/stop”这类命令,另外,其实它还能做其他的事情,这点你可以参考 man 文档。 + +一些基本用法: + +- systemctl - 列出所有单元(UNIT)以及它们的状态(这里的 UNIT 指的就是系统上的 job 和 service) +- systemctl list-units - 列出所有 UNIT +- systemctl start [NAME...] - 启动一项或多项 UNIT +- systemctl stop [NAME...] - 停止一项或多项 UNIT +- systemctl disable [NAME...] 
- 将 UNIT 设置为开机不启动 +- systemctl list-unit-files - 列出所有已安装的 UNIT,以及它们的状态 +- systemctl --failed - 列出开机启动失败的 UNIT +- systemctl --type=mount - 列出某种类型的 UNIT,类型包含:service, mount, device, socket, target +- systemctl enable debug-shell.service - 将一个 shell 脚本设置为开机启动,用于调试 + +为了更方便处理这些 UNIT,你可以使用 systemd-ui 软件包,你只要输入 systemadm 命令就可以使用这个软件。 + +你同样可以使用 systemctl 实现转换运行级别、重启系统和关闭系统的功能: + +- systemctl isolate graphical.target - 切换到运行级别5,就是有桌面的运行级别 +- systemctl isolate multi-user.target - 切换到运行级别3,没有桌面的运行级别 +- systemctl reboot - 重启系统 +- systemctl poweroff - 关机 + +所有命令,包括切换到其他运行级别的命令,都可以在普通用户的权限下执行。 + +### journalctl 的基本用法 ### + +systemd 不仅提供了比 sysvinit 更快的启动速度,还让日志系统在更早的时候启动起来,可以记录内核初始化阶段、内存初始化阶段、前期启动步骤以及主要的系统执行过程的日志。所以,**以前那种需要通过对显示屏拍照或者暂停系统来调试程序的日子已经一去不复返啦**。 + +systemd 的日志文件都被放在 /var/log 目录。如果你想使用它的日志功能,需要执行一些命令,因为 Debian 没有打开日志功能。命令如下: + + # addgroup --system systemd-journal + # mkdir -p /var/log/journal + # chown root:systemd-journal /var/log/journal + # gpasswd -a $user systemd-journal + +通过上面的设置,你就可以以普通用户权限使用 journal 软件查看日志。使用 journalctl 查询日志可以获得一些比 syslog 软件更方便的玩法: + +- journalctl --all - 显示系统上所有日志,以及它的用户 +- journalctl -f - 监视系统日志的变化(类似 tail -f /var/log/messages 的效果) +- journalctl -b - 显示系统启动以后的日志 +- journalctl -k -b -1 - 显示上一次(-b -1)系统启动前产生的内核日志 +- journalctl -b -p err - 显示系统启动后产生的“ERROR”日志 +- journalctl --since=yesterday - 当系统不会经常重启的时候,这条命令能提供比 -b 更短的日志记录 +- journalctl -u cron.service --since='2014-07-06 07:00' --until='2014-07-06 08:23' - 显示 cron 服务在某个时间段内打印出来的日志 +- journalctl -p 2 --since=today - 显示优先级别为2以内的日志,包含 emerg、alert、crit三个级别。所有日志级别有: emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), debug (7) +- journalctl > yourlog.log - 将二进制日志文件复制成文本文件并保存到当前目录 + +Journal 和 syslog 可以很好的共存。而另一方面,一旦你习惯了操作 journal,你也可以卸载掉所有 syslog 的软件,比如 rsyslog 或 syslog-ng。 + +如果想要得到更详细的日志信息,你可以在内核启动参数上添加“systemd.log_level=debug”,然后运行下面的命令: + + # journalctl -alb + +你也可以编辑 /etc/systemd/system.conf 文件来修改日志级别。 + +### 利用 systemd 分析系统启动过程 ### + +systemd 
可以让你能更有效地分析和优化你的系统启动过程: + +- systemd-analyze - 显示本次启动系统过程中用户态和内核态所花的时间 +- systemd-analyze blame - 显示每个启动项所花费的时间明细 +- systemd-analyze critical-chain - 按时间顺序打印 UNIT 树 +- systemd-analyze dot | dot -Tsvg > systemd.svg - 为开机启动过程生成向量图(需要安装 graphviz 软件包) +- systemd-analyze plot > bootplot.svg - 产生开机启动过程的时间图表 + +![](https://farm6.staticflickr.com/5559/14607588994_38543638b3_z.jpg) + +![](https://farm6.staticflickr.com/5565/14423020978_14b21402c8_z.jpg) + +systemd 虽然是个年轻的项目,但已有大量文档。首先要介绍给你的是[Lennart Poettering 的 0pointer 系列][3]。这个系列非常详细,非常有技术含量。另外一个是[免费桌面信息文档][4],它包含了最详细的关于 systemd 的链接:发行版特性文件、bug 跟踪系统和说明文档。你可以使用下面的命令来查询 systemd 都提供了哪些文档: + + # man systemd.index + +不同发行版之间的 systemd 提供的命令基本一样,最大的不同之处就是打包方式。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/2014/07/use-systemd-system-administration-debian.html + +译者:[bazz2](https://github.com/bazz2) 校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[1]:https://lists.debian.org/debian-devel/2013/10/msg00444.html +[2]:https://lists.debian.org/debian-devel/2014/02/msg00316.html +[3]:http://0pointer.de/blog/projects/systemd.html +[4]:http://www.freedesktop.org/wiki/Software/systemd/ diff --git a/translated/talk/20140724 What are useful online tools for Linux.md b/published/20140724 What are useful online tools for Linux.md similarity index 62% rename from translated/talk/20140724 What are useful online tools for Linux.md rename to published/20140724 What are useful online tools for Linux.md index b01b5b189a..cfeadfc3b8 100644 --- a/translated/talk/20140724 What are useful online tools for Linux.md +++ b/published/20140724 What are useful online tools for Linux.md @@ -1,6 +1,7 @@ -适用于Linux的在线工具 +16个 Linux 方面的在线工具类网站 ================================================================================ -众所周知,GNU 
Linux不仅仅只是一款操作系统。看起来通过互联网全球许多人都在致力于这款企鹅图标(即Linux)的操作系统。如果你读到这篇文章,你可能倾向于读到关于Linux联机的内容。在可以找到的所有关于这个主题的网页中,有一些网站是每个Linux爱好者都应该收藏起来的。这些网站不仅仅只是教程或回顾,更是可以随时随地访问并与他人共享的实用工具。所以,今天我会建议一份包含16个应该收藏的网址清单。它们中的一些对Windows或Mac用户同样有用:这是在他们的能力范围内可以做到的。(译者注:Windows和Mac一样可以很好地体验Linux) +众所周知,GNU Linux不仅仅只是一款操作系统。看起来通过互联网全球许多人都在致力于这款以企鹅为吉祥物的操作系统。如果你读到这篇文章,你可能希望读一些关于Linux在线资源的内容。在可以找到的所有关于这个主题的网页中,有一些网站是每个Linux爱好者都应该收藏起来的。这些网站不仅仅只是教程或回顾,更是可以随时随地访问并与他人共享的实用工具。所以,今天我会建议一份包含16个应该收藏的网址清单。它们中的一些对Windows或Mac用户同样有用:这是在他们的能力范围内可以做到的。(译者注:Windows和Mac一样可以很好地体验Linux) + ### 1. [ExplainShell.com][1] ### [![](https://farm4.staticflickr.com/3841/14517716647_3b6a1a564d_z.jpg)][2] @@ -11,43 +12,43 @@ [![](https://farm4.staticflickr.com/3900/14703872782_033e5acdb8_z.jpg)][4] -如果你想开始学习Linux命令行,或者想快速地得到一个自定义的shell命令提示符,但不知道从何下手,这个网站会为你生成PS1提示代码,在家目录下放置.bashrc文件。你可以拖拽任何你想在提示符里看到的元素,譬如用户名和当前时间,这个网站都会为你编写易懂可读的代码。绝对是懒人必备! +如果你想开始学习Linux命令行,或者想快速地生成一个自定义的shell命令提示符,但不知道从何下手,这个网站可以为你生成PS1提示的代码,将代码放到家目录下的.bashrc文件中即可。你可以拖拽任何你想在提示符里看到的元素,譬如用户名和当前时间,这个网站都会为你编写易懂可读的代码。绝对是懒人必备! ### 3. [Vim-adventures.com][5] ### [![](https://farm4.staticflickr.com/3838/14681149696_0c533fd6de_z.jpg)][6] -我是最近才发现这个网站的,但我的生活已经深陷其中。简而言之:它就是一个使用Vim命令的RPG游戏。在等距的水平上使用‘h,j,k,l’四个键移动字母,获取新的命令/能力,收集关键词,非常快速地学习高效地使用Vim。 +我是最近才发现这个网站的,但我的生活已经深陷其中。简而言之:它就是一个使用Vim命令的RPG游戏。在地图的平面上使用‘h,j,k,l’四个键移动你的角色、得到新的命令/能力、收集钥匙,可以帮助你非常快速地学习如何高效使用Vim。 ### 4. [Try Github][7] ### [![](https://farm4.staticflickr.com/3874/14517499739_0452848d68_z.jpg)][8] -目标很简单:15分钟学会Git。这个网站模拟一个控制台,带你遍历这种协作编辑的每一步。界面非常时尚,目的十分有用。唯一不足的是对Git敏感,但Git绝对是一项不错的技能,这里也是学习Git的绝佳之处。 +目标很简单:15分钟学会Git。这个网站模拟一个控制台,带你遍历这种协作编辑的每一步。界面非常时尚,目的十分有用。唯一不足的是对Git感兴趣,但Git绝对是一项不错的技能,这里也是学习Git的绝佳之处。 ### 5. 
[Shortcutfoo.com][9] ### [![](https://farm4.staticflickr.com/3906/14517499799_f142ea37cb_z.jpg)][10] -又一个包含众多快捷键数据库的网站,shortcutfoo以更标准的方式将其内容呈现给用户,但绝对比有趣的迷你游戏更直截了当。这里有许多软件的快捷键,并按类别分组。虽然它不像Vim一类完全依赖快捷键的软件那么全面,但也足以提供快速的提示或一般性的概述。 +又一个包含众多快捷键数据库的网站,shortcutfoo以更标准的方式将其内容呈现给用户,但绝对比有趣的迷你游戏更直截了当。这里有许多软件的快捷键,并按类别分组。虽然像Vim一类的软件它没有给出超级完整的快捷键列表,但也足以提供快速的提示或一般性的概述。 ### 6. [GitHub Free Programming Books][11] ### [![](https://farm4.staticflickr.com/3867/14517499989_408a28d8be_z.jpg)][12] -正如你从URL上猜到的一样,这个网站就是免费在线编程书籍的集合,使用Git协作方式编写。上面的内容非常好,作者们应该为做出这些工作受到表扬。它可能不是最容易阅读的,但一定是最有启发性的之一。我们只希望这项运动能持续进行。 +正如你从URL上猜到的一样,这个网站就是免费在线编程书籍的集合,使用Git协作方式编写。上面的内容非常好,作者们应该为他们做出的这些贡献受到表扬。它可能不是最容易阅读的,但一定是最有启发性的之一。我们只希望这项运动能持续进行。 ### 7. [Collabedit.com][13] ### [![](https://farm3.staticflickr.com/2940/14681150086_2d169d67f9_z.jpg)][14] -如果你曾经准备过电话面试,你应该先试试collabedit。它让你创建文件,选择你想使用的编程语言,然后通过URL共享文档。打开链接的人可以免费地实时使用文本交互,使你可以评判他们的编程水平或只是交换一些程序片段。这里甚至还提供合适的语法高亮和聊天功能。换句话说,这就是程序员的即时Google文档。 +如果你曾经计划过电话面试,你应该先试试collabedit。它让你创建文件,选择你想使用的编程语言,然后通过URL共享文档。打开链接的人可以免费地实时使用文本交互,使你可以评判他们的编程水平或只是交换一些程序片段。这里甚至还提供合适的语法高亮和聊天功能。换句话说,这就是程序员的即时Google Doucment。 ### 8. [Cpp.sh][15] ### [![](https://farm4.staticflickr.com/3840/14700981001_af3ac40b65_z.jpg)][16] -尽管这个网站超出了Linux范围,但因为它非常有用,所以值得将它放在这里。简单地说,这是一个C++在线开发环境。只需在导航栏里编写程序,然后运行它。作为奖励,你可以使用自动补全、Ctrl+Z,以及和你的小伙伴共享URL。这些有趣的事情,你只需要通过一个简单的浏览器就能做到。 +尽管这个网站超出了Linux范围,但因为它非常有用,所以值得将它放在这里。简单地说,这是一个C++在线开发环境。只需在浏览器里编写程序,然后运行它。作为奖励,你可以使用自动补全、Ctrl+Z,以及和你的小伙伴分享你的作品的URL。这些有趣的事情,你只需要通过一个简单的浏览器就能做到。 ### 9. 
[Copy.sh][17] ### @@ -59,13 +60,13 @@ [![](https://farm4.staticflickr.com/3887/14517495938_ca3b831ca9_z.jpg)][21] -我们总是在自己的电脑上保存着一大段类似于“gems”的命令行【翻译得不准确,麻烦校正】,commandlinefu的目标是把这些片段释放给全世界。作为一个协作式数据库,它就像是命令行里的维基百科。每个人可以免费注册,把自己最钟爱的命令提交到这个网站上给其他人看。你将能够获取来自四面八方的知识并与人分享。如果你对精通shell饶有兴趣,commandlinefu也可以提供一些优秀的特性,比如随机命令和每天学习新知识的新闻订阅。 +我们总是在自己的电脑上保存着一大段命令行“宝石”,commandlinefu的目标是把这些片段释放给全世界。作为一个协作式数据库,它就像是命令行里的维基百科。每个人可以免费注册,把自己最钟爱的命令提交到这个网站上给其他人看。你将能够获取来自四面八方的知识并与人分享。如果你对精通shell饶有兴趣,commandlinefu也可以提供一些优秀的特性,比如随机命令和每天学习新知识的新闻订阅。 ### 11. [Alias.sh][22] ### [![](https://farm4.staticflickr.com/3868/14701762124_a7b3547aca_z.jpg)][23] -另一协作式数据库,alias.sh(我爱死这个URL了)有点像commandlinefu,但是为shell别名开发的。你可以共享和发现一些有用的别名,来使你的CLI(命令行界面)体验更加舒服。我个人喜欢这个获取图片维度的别名。 +另一协作式数据库,alias.sh(我爱死这个URL了)有点像commandlinefu,但是为shell别名开发的。你可以共享和发现一些有用的别名,来使你的CLI(命令行界面)体验更加舒服。我个人喜欢这个获取图片维度的别名命令。 function dim(){ sips $1 -g pixelWidth -g pixelHeight } @@ -75,40 +76,41 @@ [![](https://farm3.staticflickr.com/2910/14681149996_50a45bff78_z.jpg)][25] -有谁不知道Distrowatch?除了基于这个网站流行度给出一个精确的Linux发行版排名,Distrowatch也是一个非常有用的数据库。无论你正苦苦寻找一个新的发行版,还是只是出于好奇,它都能为你能找到的每个Linux版本呈现一个详尽的描述,包含默认的桌面环境,包管理系统,默认应用程序等信息,还有所有的版本号,以及可用的下载链接。总而言之,这就是个Linux宝库。 +有谁不知道Distrowatch?除了基于这个网站流行度给出一个精确的Linux发行版排名,Distrowatch也是一个非常有用的数据库。无论你正苦苦寻找一个新的发行版,还是只是出于好奇,它都能为你能找到的每个Linux版本呈现一个详尽的描述,包含默认的桌面环境、包管理系统、默认应用程序等信息,还有所有的版本号,以及可用的下载链接。总而言之,这就是个Linux宝库。 ### 13. [Linuxmanpages.com][26] ### [![](https://farm4.staticflickr.com/3911/14704165765_8e30cb3d3f_z.jpg)][27] -一切都在URL中:随时随地获取主流命令的手册页面。尽管不确信对于Linux用户是否真的有用,因为他们可以从真实的终端中获取这些信息,但这里的内容还是值得关注的。 +一切尽在URL中说明了:随时随地获取主流命令的手册页面。尽管不确信对于Linux用户是否真的有用,因为他们可以从真实的终端中获取这些信息,但这里的内容还是值得关注的。 ### 14. 
[AwesomeCow.com][28] ### [![](https://farm6.staticflickr.com/5558/14704165965_02b10ee293_z.jpg)][29] -这里可能少一些核心的Linux内容,但肯定是有一些用的。Awesomecow是一个搜索引擎,来寻找Windows软件在Linux上的替代品。它对那些迁移到企鹅操作系统(Linux)或习惯Windows软件的人很有帮助。我认为这个网站代表一种能力,表明了在谈到软件质量时Linux也可以适用于专业领域。大家至少可以尝试一下。 +这可能对于骨灰级 Linux 没啥用,但是对于其他人也许有用。Awesomecow是一个搜索引擎,来寻找Windows软件在Linux上对应的替代品。它对那些迁移到企鹅操作系统(Linux)或习惯Windows软件的人很有帮助。我认为这个网站代表一种能力,表明了在谈到软件质量时Linux也可以适用于专业领域。大家至少可以尝试一下。 ### 15. [PenguSpy.com][30] ### [![](https://farm4.staticflickr.com/3904/14517495728_f6877e8e3b_z.jpg)][31] -Steam在Linux上崭露头角之前,游戏性可能是Linux的软肋。但这个名为“pengsupy”的网站不遗余力地弥补这个软肋,通过使用漂亮的接口在数据库中收集所有兼容Linux的游戏。游戏按照类别、发行日期、评分等指标分类。我真心希望这一类的网站不会因为Steam的存在走向衰亡,毕竟这是我在这个列表里最喜爱的网站之一。 +Steam在Linux上崭露头角之前,可玩性可能是Linux的软肋。但这个名为“pengsupy”的网站不遗余力地弥补这个软肋,通过使用漂亮的界面展现了数据库中收集的所有兼容Linux的游戏。游戏按照类别、发行日期、评分等指标分类。我真心希望这一类的网站不会因为Steam的存在走向衰亡,毕竟这是我在这个列表里最喜爱的网站之一。 ### 16. [Linux Cross Reference by Free Electrons][32] ### [![](https://farm4.staticflickr.com/3913/14712049464_6b666e2cfa_z.jpg)][33] -最后,对所有的专家和好奇的用户,lxr是源自Linux Cross Reference的回文构词法,使我们能交互地在线查看Linux内核代码。通过标识符可以很方便地使用导航栏,你可以使用标准的diff标记对比文件的不同版本。这个网站的界面看起来严肃直接,毕竟这只是一个希望完美阐述开源观点的网站。 +最后,对所有的专家和好奇的用户,lxr 是源于 Linux Cross Reference 的另外一种形式,使我们能交互地在线查看Linux内核代码。可以通过各种标识符在代码中很方便地导航,你可以使用标准的diff标记对比文件的不同版本。这个网站的界面看起来严肃直接,毕竟这只是一个希望完美阐述开源观点的网站。 总而言之,应该列出更多这一类的网站,作为这篇文章第二部分的主题。但这篇文章是一个好的开始,是一道为Linux用户寻找在线工具的开胃菜。如果你有其它任何想要分享的页面,而且是紧跟这个主题的,在评论里写出来。这将有助于续写这个列表。 + -------------------------------------------------------------------------------- via: http://xmodulo.com/2014/07/useful-online-tools-linux.html 原文作者:[Adrien Brochard][a](我是一名来自法国的Linux狂热爱好者。在尝试过众多的发行版后,我最终选择了Archlinux。但我一直会通过叠加技巧和窍门来优化我的系统。) -译者:[KayGuoWhu](https://github.com/KayGuoWhu) 校对:[校对者ID](https://github.com/校对者ID) +译者:[KayGuoWhu](https://github.com/KayGuoWhu) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140729 How to use awk command in Linux.md 
b/published/20140729 How to use awk command in Linux.md similarity index 79% rename from translated/tech/20140729 How to use awk command in Linux.md rename to published/20140729 How to use awk command in Linux.md index 6208316dc9..c4fded9e5c 100644 --- a/translated/tech/20140729 How to use awk command in Linux.md +++ b/published/20140729 How to use awk command in Linux.md @@ -1,8 +1,8 @@ 如何在Linux中使用awk命令 ================================================================================ -文本处理是Unix的核心。从管道到/proc子系统,“一切都是文件”的理念贯穿于操作系统和所有基于它构造的工具。正因为如此,轻松地处理文本是一个期望成为Linux系统管理员甚至是资深用户的最重要的技能之一,awk是通用编程语言之外最强大的文本处理工具之一。 +文本处理是Unix的核心。从管道到/proc子系统,“一切都是文件”的理念贯穿于操作系统和所有基于它构造的工具。正因为如此,轻松地处理文本是一个期望成为Linux系统管理员甚至是资深用户的最重要的技能之一,而 awk是通用编程语言之外最强大的文本处理工具之一。 -最简单的awk的任务是从标准输入中选择字段;如果你对awk除了这个没有学习过其他的,它还是会是你身边一个非常有用的工具。 +最简单的awk的任务是从标准输入中选择字段;如果你对awk除了这个用途之外,从来没了解过它的其他用途,你会发现它还是会是你身边一个非常有用的工具。 默认情况下,awk通过空格分隔输入。如果您想选择输入的第一个字段,你只需要告诉awk输出$ 1: @@ -30,13 +30,13 @@ > foo: three | bar: one -好吧,如果你的输入不是由空格分隔怎么办?只需用awk中的'-F'标志后带上你的分隔符: +好吧,如果你的输入不是由空格分隔怎么办?只需用awk中的'-F'标志指定你的分隔符: $ echo 'one mississippi,two mississippi,three mississippi,four mississippi' | awk -F , '{print $4}' > four mississippi -偶尔间,你会发现自己正在处理拥有不同的字段数量的数据,但你只知道你想要的*最后*字段。 awk中内置的$NF变量代表*字段的数量*,这样你就可以用它来抓取最后一个元素: +偶尔间,你会发现自己正在处理字段数量不同的数据,但你只知道你想要的*最后*字段。 awk中内置的$NF变量代表*字段的数量*,这样你就可以用它来抓取最后一个元素: $ echo 'one two three four' | awk '{print $NF}' @@ -54,9 +54,9 @@ > three -而且这一切都非常有用,同样你可以摆脱强制使用sed,cut,和grep来得到这些结果(尽管有大量的工作)。 +而且这一切都非常有用,同样你可以摆脱强制使用sed,cut,和grep来得到这些结果(尽管要做更多的操作)。 -因此,我将为你留下awk的最后介绍特性,维护跨行状态。 +因此,我将最后为你介绍awk的一个特性,维持跨行状态。 $ echo -e 'one 1\ntwo 2' | awk '{print $2}' @@ -68,7 +68,7 @@ > 3 -(END代表的是我们在执行完每行的处理**之后**只处理下面的代码块 +(END代表的是我们在执行完每行的处理**之后**只处理下面的代码块) 这里我使用的例子是统计web服务器请求日志的字节大小。想象一下我们有如下这样的日志: @@ -104,7 +104,7 @@ > 31657 -如果你正在寻找关于awk的更多资料,你可以在Amazon中在15美元内找到[原始awk手册][1]的副本。你同样可以使用Eric Pement的[单行awk命令收集][2]这本书 +如果你正在寻找关于awk的更多资料,你可以在Amazon中花费不到15美元买到[原始awk手册][1]的二手书。你也许还可以看看Eric Pement的[单行awk命令收集][2]这本书。 
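(译者补注,以下示例并非原文内容。)前面提到 awk 能够跨行维持状态,除了像上文那样用单个变量累加,awk 的关联数组还能按某一列的取值分组统计。下面用一个假设的两列日志片段(第 1 列视作 HTTP 状态码)演示按状态码计数:

```shell
# 按第 1 列分组计数;for (c in count) 的遍历顺序不固定,
# 因此末尾用 sort 对输出排序,保证结果稳定
printf '200 1024\n404 512\n200 2048\n' \
    | awk '{count[$1]++} END {for (c in count) print c, count[c]}' \
    | sort
# 输出:
# 200 2
# 404 1
```

与前文用 $NF 累加字节数的例子一样,计数发生在逐行处理的主代码块中,END 代码块只负责在读完全部输入之后打印结果。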
-------------------------------------------------------------------------------- @@ -112,7 +112,7 @@ via: http://xmodulo.com/2014/07/use-awk-command-linux.html 作者:[James Pearson][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140804 How to install and configure Nvidia Optimus driver on Ubuntu.md b/published/20140804 How to install and configure Nvidia Optimus driver on Ubuntu.md similarity index 76% rename from translated/tech/20140804 How to install and configure Nvidia Optimus driver on Ubuntu.md rename to published/20140804 How to install and configure Nvidia Optimus driver on Ubuntu.md index 62b0c99dd6..79ce3d44f1 100644 --- a/translated/tech/20140804 How to install and configure Nvidia Optimus driver on Ubuntu.md +++ b/published/20140804 How to install and configure Nvidia Optimus driver on Ubuntu.md @@ -4,19 +4,20 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但 ### 背景知识 ### 对那些不熟悉Nvidia Optimus的读者,在板载Intel图形芯片组和使用被称为“GPU切换”、对需求有着更强大处理能力的NVIDA显卡这两者之间的进行切换是很有必要的。这么做的主要目的是延长笔记本电池的使用寿命,以便在不需要Nvidia GPU的时候将其关闭。带来的好处是显而易见的,比如说你只是想简单地打打字,笔记本电池可以撑8个小时;如果看高清视频,可能就只能撑3个小时了。使用Windows时经常如此。 + ![](https://farm6.staticflickr.com/5581/14612159387_2e89a52085_z.jpg) 几年前,我买了一台上网本(Asus VX6),犯的最蠢的一个错误就是没有检查Linux驱动兼容性。因为在以前,特别是对于一台上网本大小的设备,这根本不会是问题。即便某些驱动不是现成可用的,我也可以找到其它的办法让它正常工作,比如安装专门模块或者使用反向移植。对我来说这是第一次——我的电脑预先配备了Nvidia ION2图形显卡。 在那时候,Nvidia的Optimus混合GPU硬件还是相当新的产品,而我也没有预见到在这台机器上运行Linux会遇到什么限制。如果你读到了这里,恰好对Linux系统有经验,而且也在几年前买过一台笔记本,你可能对这种痛苦感同身受。 -[Bumblebee][4]项目直到最近因为得到Linux系统对混合图形方面的支持才变得好起来。事实上,如果配置正确的话,通过命令行接口(如“optirun vlc”)为想要的应用程序去利用Nvidia显卡的功能是可能的,但让HDMI一类的功能运转起来就很不同了。(译者注:Bumblebee 项目是把Nvidia的Optimus技术移到Linux上来。) +[Bumblebee][4]项目直到最近因为得到Linux系统对混合图形方面的支持才变得好起来。事实上,如果配置正确的话,通过命令行接口(如“optirun vlc”)让你选定的应用程序能利用Nvidia显卡功能是可行的,但让HDMI一类的功能运转起来就很不同了。(译者注:Bumblebee 项目是把Nvidia的Optimus技术移到Linux上来。) 
-我之所以使用“如果配置正确的话”这个短语,是因为实际上为了让它发挥出性能往往不只是通过几次尝试去改变Xorg的配置就能做到的。如果你以前没有使用过ppa-purge或者运行过“dpkg-reconfigure -phigh xserver-xorg”这类命令,那么我可以向你保证修补Bumblebee的过程会让你受益匪浅。 +我之所以使用“如果配置正确的话”这个短语,是因为实际上为了让它发挥出性能来往往不只是通过几次尝试去改变Xorg的配置就能做到的。如果你以前没有使用过ppa-purge或者运行过“dpkg-reconfigure -phigh xserver-xorg”这类命令,那么我可以向你保证修补Bumblebee的过程会让你受益匪浅。 [![](https://farm6.staticflickr.com/5588/14798680495_947c38b043_o.png)][2] -等待了很长一段时间,Nvidia才发布了支持Optimus的Linux驱动,但我们仍然没有获取对双显卡切换的真正支持。然而,现在有了Ubuntu 14.04、nvidia-prime和nvidia-331驱动,任何人都可以在Intel芯片和Nvidia显卡之间轻松切换。不幸的是,为了使切换生效,还是会受限于要重启X11视窗系统(通过注销登录实现)。 +在等待了很长一段时间后,Nvidia才发布了支持Optimus的Linux驱动,但我们仍然没有得到对双显卡切换的真正支持。然而,现在有了Ubuntu 14.04、nvidia-prime和nvidia-331驱动,任何人都可以在Intel芯片和Nvidia显卡之间轻松切换。不过不幸的是,为了使切换生效,还是会受限于需要重启X11视窗系统(通过注销登录实现)。 为了减轻这种不便,有一个小型程序用于快速切换,稍后我会给出。这个驱动程序的安装就此成为一件轻而易举的事了,HDMI也可以正常工作,这足以让我心满意足了。 @@ -24,11 +25,11 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但 为了更快地描述这个过程,我假设你已经安装好Ubuntu 14.04或者Mint 17。 -作为一名系统管理员,最近我发现90%的Linux通过命令行执行起来更快,但这次我推荐使用“Additional Drivers”这个应用程序,你可能使用它安装过网卡或声卡驱动。 +作为一名系统管理员,最近我发现90%的Linux操作通过命令行执行起来更快,但这次我推荐使用“Additional Drivers”这个应用程序,你可能使用它安装过网卡或声卡驱动。 ![](https://farm4.staticflickr.com/3886/14795564221_753f9e2d99_z.jpg) -**注意:下面的所有命令都是在~#前执行的,需要root权限执行。在运行命令前,要么使用“sudo su”(切换到root权限),要么在每条命令的开头使用速冻运行。** +**注意:下面的所有命令都是在~#提示符下执行的,需要root权限执行。在运行命令前,要么使用“sudo su”(切换到root权限),要么在每条命令的开头使用sudo运行。** 你也可以在命令行输入如下命令进行安装: @@ -44,19 +45,19 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但 ~$ nvidia-settings -#### 注意:~$表示不以root用户身份执行。 #### +**注意:~$表示不以root用户身份执行。** ![](https://farm4.staticflickr.com/3921/14796320814_de5c9882c2_z.jpg) 你也可以使用命令行设置默认使用哪一块显卡: - ~# prime-select intel (or nvidia) + ~# prime-select intel (或 nvidia) 使用这个命令进行切换: - ~# prime-switch intel (or nvidia) + ~# prime-switch intel (或 nvidia) -两个命令的生效都需要重启X11,可以通过注销和重新登录实现。重启电脑也行。 +两个命令的生效都需要重启X11,可以通过注销和重新登录实现。当然重启电脑也行。 对Ubuntu用户键入命令: @@ -70,7 +71,7 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但 ~# prime-select query 
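(译者补注,以下脚本并非原文内容,只是一个演示性的草稿,假设系统中已按上文安装好 nvidia-prime。)借助上面的 prime-select query 命令,可以写一个很小的 shell 函数,算出应当切换到的那块显卡,从而实现“一键轮换”:

```shell
# other_gpu:根据当前显卡名返回另一块显卡的名字
other_gpu() {
    if [ "$1" = "intel" ]; then
        echo nvidia
    else
        echo intel
    fi
}

# 实际切换时这样用(需要 root 权限):
#   prime-select "$(other_gpu "$(prime-select query)")"
```

注意,和直接运行 prime-select 一样,切换之后仍需按上文所述注销并重新登录(重启 X11),设置才会生效。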
-最后,你可以通过添加ppa:nilarimogard/webupd8来安装叫做prime-indicator的程序包,实现通过工具栏快速切换来重启Xserver会话。为了安装它,只需要运行: +最后,你可以通过添加ppa:nilarimogard/webupd8来安装叫做prime-indicator的程序包,实现通过工具栏快速切换来重启Xserver会话。要安装它,只需要运行: ~# add-apt-repository ppa:nilarimogard/webupd8 ~# apt-get update @@ -84,7 +85,7 @@ Nvidia Optimus是一款利用“双显卡切换”技术的混合GPU系统,但 也可以花时间查看一下这个我偶然发现的[脚本][3],用来方便地在Bumblebee和Nvidia-Prime之间进行切换,但我必须强调并没有亲自对此进行实验。 -最后,我感到非常惭愧写了这么多才得以为Linux上的显卡提供了专门支持,但仍然不能实现双显卡切换,因为混合图形技术似乎是便携式设备的未来。一般情况下,AMD会发布Linux平台上的驱动支持,但我认为Optimus是目前为止我遇到过的最糟糕的硬件支持问题。 +最后,我感到非常惭愧,写了这么多才得以为Linux上的显卡提供了专门支持,但仍然不能实现双显卡切换,因为混合图形技术似乎是便携式设备的未来。一般情况下,AMD会发布Linux平台上的驱动支持,但我认为Optimus是目前为止我遇到过的最糟糕的硬件支持问题。 不管这篇教程对你的使用是否完美,但这确实是利用这块Nvidia显卡最容易的方法。你可以试着在Intel显卡上只运行最新的Unity,然后考虑2到3个小时的电池寿命是否值得权衡。 @@ -94,7 +95,7 @@ via: http://xmodulo.com/2014/08/install-configure-nvidia-optimus-driver-ubuntu.h 作者:[Christopher Ward][a] 译者:[KayGuoWhu](https://github.com/KayGuoWhu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140808 How to install Puppet server and client on CentOS and RHEL.md b/published/20140808 How to install Puppet server and client on CentOS and RHEL.md similarity index 93% rename from translated/tech/20140808 How to install Puppet server and client on CentOS and RHEL.md rename to published/20140808 How to install Puppet server and client on CentOS and RHEL.md index a84062730f..ded986fab8 100644 --- a/translated/tech/20140808 How to install Puppet server and client on CentOS and RHEL.md +++ b/published/20140808 How to install Puppet server and client on CentOS and RHEL.md @@ -4,7 +4,7 @@ ### Puppet 是什么? 
### -Puppet 是一款为 IT 系统管理员和顾问设计的自动化软件,你可以用它自动化地完成诸如安装应用程序和服务、补丁管理和部署等工作。所有资源的相关配置都以“manifests”的方式保存,单台机器或者多台机器都可以使用。如果你想了解更多内容,Puppet 实验室的网站上有关于 [Puppet 及其工作原理][1]的更详细的介绍。 +Puppet 是一款为 IT 系统管理员和顾问们设计的自动化软件,你可以用它自动化地完成诸如安装应用程序和服务、补丁管理和部署等工作。所有资源的相关配置都以“manifests”的方式保存,单台机器或者多台机器都可以使用。如果你想了解更多内容,Puppet 实验室的网站上有关于 [Puppet 及其工作原理][1]的更详细的介绍。 ### 本教程要做些什么? ### @@ -58,7 +58,7 @@ Puppet 是一款为 IT 系统管理员和顾问设计的自动化软件,你可 # chkconfig puppet on -Puppet 客户端需要知道 Puppet master 服务器的地址。最佳方案是使用 DNS 服务器解析 Puppet master 服务器地址。如果你没有 DNS 服务器,在 `/etc/hosts` 里添加下面这几行也可以: +Puppet 客户端需要知道 Puppet master 服务器的地址。最佳方案是使用 DNS 服务器解析 Puppet master 服务器地址。如果你没有 DNS 服务器,在 `/etc/hosts` 里添加类似下面这几行也可以: > 1.2.3.4 server.your.domain @@ -125,7 +125,7 @@ master 服务器名也要在 `/etc/puppet/puppet.conf` 文件的“[agent]”小 > runinterval = -这个选项的值可以是秒(格式比如 30 或者 30s),分钟(30m),小时(6h),天(2d)以及年(5y)。值得注意的是,0 意味着“立即执行”而不是“从不执行”。 +这个选项的值可以是秒(格式比如 30 或者 30s),分钟(30m),小时(6h),天(2d)以及年(5y)。值得注意的是,**0 意味着“立即执行”而不是“从不执行”**。 ### 提示和技巧 ### @@ -139,7 +139,7 @@ master 服务器名也要在 `/etc/puppet/puppet.conf` 文件的“[agent]”小 # puppet agent -t --debug -Debug 选项会显示 Puppet 本次运行时的差不多每一个步骤,这在调试非常复杂的问题时很有用。另一个很有用的选项是: +debug 选项会显示 Puppet 本次运行时的差不多每一个步骤,这在调试非常复杂的问题时很有用。另一个很有用的选项是: # puppet agent -t --noop @@ -187,7 +187,7 @@ via: http://xmodulo.com/2014/08/install-puppet-server-client-centos-rhel.html 作者:[Jaroslav Štěpánek][a] 译者:[sailing](https://github.com/sailing) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140811 How to improve your productivity in terminal environment with Tmux.md b/published/20140811 How to improve your productivity in terminal environment with Tmux.md similarity index 51% rename from translated/tech/20140811 How to improve your productivity in terminal environment with Tmux.md rename to published/20140811 How to improve your productivity in terminal environment with Tmux.md index 
c94df0b81b..5bdb273d06 100644 --- a/translated/tech/20140811 How to improve your productivity in terminal environment with Tmux.md +++ b/published/20140811 How to improve your productivity in terminal environment with Tmux.md @@ -1,15 +1,15 @@ -如何使用Tmux提高终端环境下的生产率 +如何使用Tmux提高终端环境下的效率 === -鼠标的采用是次精彩的创新,它让电脑更加接近普通人。但从程序员和系统管理员的角度,使用电脑办公时,手一旦离开键盘,就会有些分心 +鼠标的发明是了不起的创新,它让电脑更加接近普通人。但从程序员和系统管理员的角度,使用电脑工作时,手一旦离开键盘,就会有些分心。 -作为一名系统管理员,我大量的工作都需要在终端环境下。打开很多标签,然后在多个终端之间切换窗口会让我慢下来。而且当我的服务器出问题的时候,我不能浪费任何时间 +作为一名系统管理员,我大量的工作都需要在终端环境下。打开很多标签,然后在多个终端之间切换窗口会让我慢下来。尤其是当我的服务器出问题的时候,我不能浪费任何时间! ![](https://farm6.staticflickr.com/5563/14853747084_e14cf18e8f_z.jpg) -[Tmux][1]是我日常工作必要的工具之一。我可以借助Tmux创造出复杂的开发环境,同时还可以在一旁进行SSH远程连接。我可以开出很多窗口,拆分成很多面板,附加和分离会话等等。掌握了Tmux之后,你就可以扔掉鼠标了(只是个玩笑:D) +[Tmux][1]是我日常工作必要的工具之一。我可以借助Tmux构建出复杂的开发环境,同时还可以在一旁进行SSH远程连接。我可以开出很多窗口,将其拆分成很多面板,接管和分离会话等等。掌握了Tmux之后,你就可以扔掉鼠标了(只是个玩笑:D)。 -Tmux("Terminal Multiplexer"的简称)可以让我们在单个屏幕的灵活布局下开出很多终端,我们就可以协作地使用它们。举个例子,在一个面板中,我们用Vim修改一些配置文件,在另一个面板,我们使用irssi聊天,而在其余的面板,跟踪一些日志。然后,我们还可以打开新的窗口来升级系统,再开一个新窗口来进行服务器的ssh连接。在这些窗口面板间浏览切换和创建它们一样简单。它的高度可配置和可定制的,让其成为你心中的延伸 +Tmux("Terminal Multiplexer"的简称)可以让我们在单个屏幕的灵活布局下开出很多终端,我们就可以协作地使用它们。举个例子,在一个面板中,我们用Vim修改一些配置文件,在另一个面板,我们使用`irssi`聊天,而在其余的面板,可以跟踪一些日志。然后,我们还可以打开新的窗口来升级系统,再开一个新窗口来进行服务器的ssh连接。在这些窗口面板间浏览切换和创建它们一样简单。它的高度可配置和可定制的,让其成为你心中的延伸 ### 在Linux/OSX下安装Tmux ### @@ -20,22 +20,21 @@ Tmux("Terminal Multiplexer"的简称)可以让我们在单个屏幕的灵活布 # sudo brew install tmux # sudo port install tmux -### Debian/Ubuntu ### +#### Debian/Ubuntu: #### # sudo apt-get install tmux -RHEL/CentOS/Fedora(RHEL/CentOS 要求 [EPEL repo][2]): +####RHEL/CentOS/Fedora(RHEL/CentOS 要求 [EPEL repo][2]):#### $ sudo yum install tmux -Archlinux: +####Archlinux:#### $ sudo pacman -S tmux ### 使用不同会话工作 ### -使用Tmux的最好方式是使用不同的会话,这样你就可以以你想要的方式,将任务和应用组织到不同的会话中。如果你想改变一个会话,会话里面的任何工作都无须停止或者杀掉,让我们来看看这是怎么工作的 - +使用Tmux的最好方式是使用会话的方式,这样你就可以以你想要的方式,将任务和应用组织到不同的会话中。如果你想改变一个会话,会话里面的任何工作都无须停止或者杀掉。让我们来看看这是怎么工作的。 让我们开始一个叫做"session"的会话,并且运行top命令 @@ -46,20 +45,20 @@ 
Archlinux: $ tmux attach-session -t session -之后你会看到top操作仍然运行在重新连接的会话上 +之后你会看到top操作仍然运行在重新连接的会话上。 一些管理sessions的命令: $ tmux list-session - $ tmux new-session - $ tmux attach-session -t - $ tmux rename-session -t - $ tmux choose-session -t - $ tmux kill-session -t + $ tmux new-session <会话名> + $ tmux attach-session -t <会话名> + $ tmux rename-session -t <会话名> + $ tmux choose-session -t <会话名> + $ tmux kill-session -t <会话名> ### 使用不同的窗口工作 -很多情况下,你需要在一个会话中运行多个命令,并且执行多个任务。我们可以在一个会话的多个窗口里组织他们。在现代化的GUI终端(比如 iTerm或者Konsole),一个窗口被视为一个标签。在会话中配置了我们默认的环境,我们就能够在一个会话中创建许多我们需要的窗口。窗口就像运行在会话中的应用程序,当我们脱离当前会话的时候,它仍在持续,让我们来看一个例子: +很多情况下,你需要在一个会话中运行多个命令,执行多个任务。我们可以在一个会话的多个窗口里组织他们。在现代的GUI终端(比如 iTerm或者Konsole),一个窗口被视为一个标签。在会话中配置了我们默认的环境之后,我们就能够在一个会话中创建许多我们需要的窗口。窗口就像运行在会话中的应用程序,当我们脱离当前会话的时候,它仍在持续,让我们来看一个例子: $ tmux new -s my_session @@ -67,7 +66,7 @@ Archlinux: 按下**CTRL-b c** -这将会创建一个新的窗口,然后屏幕的光标移向它。现在你就可以在新窗口下运行你的新应用。你可以写下你当前窗口的名字。在目前的案例下,我运行的top程序,所以top就是该窗口的名字 +这将会创建一个新的窗口,然后屏幕的光标移向它。现在你就可以在新窗口下运行你的新应用。你可以修改你当前窗口的名字。在目前的例子里,我运行的top程序,所以top就是该窗口的名字 如果你想要重命名,只需要按下: @@ -77,15 +76,15 @@ Archlinux: ![](https://farm6.staticflickr.com/5579/14855868482_d52516a357_z.jpg) -一旦在一个会话中创建多个窗口,我们需要在这些窗口间移动的办法。窗口以数组的形式被组织在一起,每个窗口都有一个从0开始计数的号码,想要快速跳转到其余窗口: +一旦在一个会话中创建多个窗口,我们需要在这些窗口间移动的办法。窗口像数组一样组织在一起,从0开始用数字标记每个窗口,想要快速跳转到其余窗口: -**CTRL-b ** +**CTRL-b <窗口号>** -如果我们给窗口起了名字,我们可以使用下面的命令切换: +如果我们给窗口起了名字,我们可以使用下面的命令找到它们: **CTRL-b f** -列出所有窗口: +也可以列出所有窗口: **CTRL-b w** @@ -94,21 +93,21 @@ Archlinux: **CTRL-b n**(到达下一个窗口) **CTRL-b p**(到达上一个窗口) -想要离开一个窗口: +想要离开一个窗口,可以输入 exit 或者: **CTRL-b &** -关闭窗口之前,你需要确认一下 +关闭窗口之前,你需要确认一下。 ### 把窗口分成许多面板 -有时候你在编辑器工作的同时,需要查看日志文件。编辑的同时追踪日志真的很有帮助。Tmux可以让我们把窗口分成许多面板。举了例子,我们可以创建一个控制台监测我们的服务器,同时拥有一个复杂的编辑器环境,这样就能同时进行编译和debug +有时候你在编辑器工作的同时,需要查看日志文件。在编辑的同时追踪日志真的很有帮助。Tmux可以让我们把窗口分成许多面板。举个例子,我们可以创建一个控制台监测我们的服务器,同时用编辑器构造复杂的开发环境,这样就能同时进行编译和调试了。 -让我们创建另一个Tmux会话,让其以面板的方式工作。首先,如果我们在某个会话中,那就从Tmux会话中脱离出来 +让我们创建另一个Tmux会话,让其以面板的方式工作。首先,如果我们在某个会话中,那就从Tmux会话中脱离出来: **CTRL-b d** -开始一个叫做"panes"的新会话 
+开始一个叫做"panes"的新会话: $ tmux new -s panes @@ -120,17 +119,17 @@ Archlinux: **CRTL-b %** -又增加了两个 +又增加了两个: ![](https://farm4.staticflickr.com/3881/14669677417_bc1bdce255_z.jpg) 在他们之间移动: -**CTRL-b ** +**CTRL-b <光标键>** ### 结论 -我希望这篇教程能对你有作用。作为奖励,像[Tmuxinator][3] 或者 [Tmuxifier][4]这样的工具,可以简化Tmux会话,窗口和面板的创建及加载,你可以很容易就配置Tmux。如果你没有使用过这些,尝试一下吧 +我希望这篇教程能对你有作用。此外,像[Tmuxinator][3] 或者 [Tmuxifier][4]这样的工具,可以简化Tmux会话,窗口和面板的创建及加载,你可以很容易就配置Tmux。如果你没有使用过这些,尝试一下吧! -------------------------------------------------------------------------------- @@ -138,7 +137,7 @@ via: http://xmodulo.com/2014/08/improve-productivity-terminal-environment-tmux.h 作者:[Christopher Valerio][a] 译者:[su-kaiyao](https://github.com/su-kaiyao) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140813 How to Extend or Reduce LVM' s (Logical Volume Management) in Linux--Part II.md b/published/20140813 How to Extend or Reduce LVM' s (Logical Volume Management) in Linux--Part II.md similarity index 95% rename from translated/tech/20140813 How to Extend or Reduce LVM' s (Logical Volume Management) in Linux--Part II.md rename to published/20140813 How to Extend or Reduce LVM' s (Logical Volume Management) in Linux--Part II.md index 4013367d2f..48b261e5ef 100644 --- a/translated/tech/20140813 How to Extend or Reduce LVM' s (Logical Volume Management) in Linux--Part II.md +++ b/published/20140813 How to Extend or Reduce LVM' s (Logical Volume Management) in Linux--Part II.md @@ -1,18 +1,17 @@ -在Linux中扩展/缩减LVM(逻辑卷管理)—— 第二部分 +在Linux中扩展/缩减LVM(第二部分) ================================================================================ 前面我们已经了解了怎样使用LVM创建弹性的磁盘存储。这里,我们将了解怎样来扩展卷组,扩展和缩减逻辑卷。在这里,我们可以缩减或者扩展逻辑卷管理(LVM)中的分区,LVM也可称之为弹性卷文件系统。 ![Extend/Reduce LVMs in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/LVM_extend.jpg) -### 需求 ### +### 前置需求 ### - [使用LVM创建弹性磁盘存储——第一部分][1] 
-注:两篇都翻译完了的话,发布的时候将这个链接做成发布的中文的文章地址 #### 什么时候我们需要缩减卷? #### -或许我们需要创建一个独立的分区用于其它用途,或者我们需要扩展任何空间低的分区。真是这样的话,我们可以很容易地缩减大尺寸的分区,并且扩展空间低的分区,只要按下面几个简易的步骤来即可。 +或许我们需要创建一个独立的分区用于其它用途,或者我们需要扩展任何空间低的分区。遇到这种情况时,使用 LVM我们可以很容易地缩减大尺寸的分区,以及扩展空间低的分区,只要按下面几个简易的步骤来即可。 #### 我的服务器设置 —— 需求 #### @@ -284,9 +283,9 @@ via: http://www.tecmint.com/extend-and-reduce-lvms-in-linux/ 作者:[Babin Lonston][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/create-lvm-storage-in-linux/ +[1]:http://linux.cn/article-3965-1.html diff --git a/translated/tech/20140813 How to remove file metadata on Linux.md b/published/20140813 How to remove file metadata on Linux.md similarity index 83% rename from translated/tech/20140813 How to remove file metadata on Linux.md rename to published/20140813 How to remove file metadata on Linux.md index b33dba70a2..f9a0bb7050 100644 --- a/translated/tech/20140813 How to remove file metadata on Linux.md +++ b/published/20140813 How to remove file metadata on Linux.md @@ -1,16 +1,15 @@ -移除Linux系统上的文件元数据 +如何在Linux上移除文件内的隐私数据 ================================================================================ -典型的数据文件通常关联着“元数据”,其包含这个文件的描述信息,表现为一系列属性-值的集合。元数据一般包括创建者名称、生成文件的工具、文件创建/修改时期、创建位置和编辑历史等等。EXIF(镜像标准)、RDF(web资源)和DOI(数字文档)是几种流行的元数据标准。 - +典型的数据文件通常关联着“元数据”,其包含这个文件的描述信息,表现为一系列属性-值的集合。元数据一般包括创建者名称、生成文件的工具、文件创建/修改时期、创建位置和编辑历史等等。几种流行的元数据标准有 EXIF(图片)、RDF(web资源)和DOI(数字文档)等。 虽然元数据在数据管理领域有它的优点,但事实上它会[危害][1]你的隐私。相机图片中的EXIF格式数据会泄露出可识别的个人信息,比如相机型号、拍摄相关的GPS坐标和用户偏爱的照片编辑软件等。在文档和电子表格中的元数据包含作者/所属单位信息和相关的编辑历史。不一定这么绝对,但诸如[metagoofil][2]一类的元数据收集工具在信息收集的过程中常最作为入侵测试的一部分被利用。 对那些想要从共享数据中擦除一切个人元数据的用户来说,有一些方法从数据文件中移除元数据。你可以使用已有的文档或图片编辑软件,通常有自带的元数据编辑功能。在这个教程里,我会介绍一种不错的、单独的**元数据清理工具**,其目标只有一个:**匿名一切私有元数据**。 
-[MAT][3](元数据匿名工具箱)是一款专业的元数据清理器,使用Python编写。它在Tor工程旗下开发而成,在[Trails][4]上衍生出标准,后者是一种私人增强的live操作系统。【翻译得别扭,麻烦修正:)】 +[MAT][3](元数据匿名工具箱)是一款专业的元数据清理器,使用Python编写。它属于Tor旗下的项目,而且是Live 版的隐私增强操作系统 [Trails][4] 的标配应用。 -与诸如[exiftool][5]等只能对有限数量的文件类型进行写入的工具相比,MAT支持从各种各样的文件中消除元数据:图片(png、jpg)、文档(odt、docx、pptx、xlsx和pdf)、归档文件(tar、tar.bz2)和音频(mp3、ogg、flac)等。 +与诸如[exiftool][5]等只能对有限种类的文件类型进行写入的工具相比,MAT支持从各种各样的文件中消除元数据:图片(png、jpg)、文档(odt、docx、pptx、xlsx和pdf)、归档文件(tar、tar.bz2)和音频(mp3、ogg、flac)等。 ### 在Linux上安装MAT ### @@ -18,7 +17,7 @@ $ sudo apt-get install mat -在Fedora上,并没有预先生成的MAT包,所以你需要从源码生成。这是我在Fedora上生成MAT的步骤(不成功的话,请查看教程底部): +在Fedora上,并没有预先生成的MAT软件包,所以你需要从源码生成。这是我在Fedora上生成MAT的步骤(不成功的话,请查看教程底部): $ sudo yum install python-devel intltool python-pdfrw perl-Image-ExifTool python-mutagen $ sudo pip install hachoir-core hachoir-parser @@ -95,7 +94,7 @@ ### 总结 ### -MAT是一款简单但非常好用的工具,用来预防从元数据中无意泄露私人数据。请注意如果有必要,还是需要你去隐藏文件内容。MAT能做的是消除与文件相关的元数据,但并不会对文件本身进行任何操作。简而言之,MAT是一名救生员,因为它可以处理大多数常见的元数据移除,但不应该只指望它来保证你的隐私。[译者注:养成良好的隐私保护意识和习惯才是最好的方法] +MAT是一款简单但非常好用的工具,用来预防从元数据中无意泄露私人数据。请注意如果有必要,文件内容也需要保护。MAT能做的是消除与文件相关的元数据,但并不会对文件本身进行任何操作。简而言之,MAT是一名救生员,因为它可以处理大多数常见的元数据移除,但不应该只指望它来保证你的隐私。[译者注:养成良好的隐私保护意识和习惯才是最好的方法] -------------------------------------------------------------------------------- @@ -103,7 +102,7 @@ via: http://xmodulo.com/2014/08/remove-file-metadata-linux.html 作者:[Dan Nanni][a] 译者:[KayGuoWhu](https://github.com/KayGuoWhu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140813 Setup Flexible Disk Storage with Logical Volume Management (LVM) in Linux--PART 1.md b/published/20140813 Setup Flexible Disk Storage with Logical Volume Management (LVM) in Linux--PART 1.md similarity index 67% rename from translated/tech/20140813 Setup Flexible Disk Storage with Logical Volume Management (LVM) in Linux--PART 1.md rename to published/20140813 Setup Flexible Disk 
Storage with Logical Volume Management (LVM) in Linux--PART 1.md index 218dfe7e5a..fc80f34f20 100644 --- a/translated/tech/20140813 Setup Flexible Disk Storage with Logical Volume Management (LVM) in Linux--PART 1.md +++ b/published/20140813 Setup Flexible Disk Storage with Logical Volume Management (LVM) in Linux--PART 1.md @@ -1,11 +1,12 @@ -在Linux中使用逻辑卷管理器构建灵活的磁盘存储——第一部分 +在Linux中使用LVM构建灵活的磁盘存储(第一部分) ================================================================================ -**逻辑卷管理器(LVM)**让磁盘空间管理更为便捷。如果一个文件系统需要更多的空间,它可以在它的卷组中将空闲空间添加到它的逻辑卷中,而文件系统可以根据你的意愿调整大小。如果某个磁盘启动失败,替换磁盘可以使用卷组注册成一个物理卷,而逻辑卷扩展可以将数据迁移到新磁盘而不会丢失数据。 +**逻辑卷管理器(LVM)**让磁盘空间管理更为便捷。如果一个文件系统需要更多的空间,可以在它的卷组中将空闲空间添加到其逻辑卷中,而文件系统可以根据你的意愿调整大小。如果某个磁盘启动失败,用于替换的磁盘可以使用卷组注册成一个物理卷,而逻辑卷扩展可以将数据迁移到新磁盘而不会丢失数据。 -![Create LVM Storage in Linux](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage.jpg) -在Linux中创建LVM存储 +
![Create LVM Storage in Linux](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage.jpg)
-在现代世界中,每台服务器空间都会因为我们的需求增长而不断扩展。逻辑卷可以用于RAID,SAN。单个物理卷将会被加入组以创建卷组,在卷组中,我们需要切割空间以创建逻辑卷。在使用逻辑卷时,我们可以使用某些命令来跨磁盘、跨逻辑卷扩展,或者减少逻辑卷大小,而不用重新格式化和重新对当前磁盘分区。卷可以跨磁盘抽取数据,这会增加I/O数据量。 +
*在Linux中创建LVM存储*
+ +如今,每台服务器的存储空间都会因为我们的需求增长而不断扩展。逻辑卷可以用于RAID,SAN。单个物理卷将会被加入组以创建卷组,在卷组中,我们需要切割空间以创建逻辑卷。在使用逻辑卷时,我们可以使用某些命令来跨磁盘、跨逻辑卷扩展,或者减少逻辑卷大小,而不用重新格式化和重新对当前磁盘分区。卷的数据还可以条带化地跨磁盘存取,这能提升I/O性能。 ### LVM特性 ### @@ -27,8 +28,8 @@ # vgs # lvs -![Check Physical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-03.jpg) -检查物理卷 +
![Check Physical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-03.jpg)
+
*检查物理卷*
下面是上面截图中各个参数的说明。 @@ -52,8 +53,8 @@ # fdisk -l -![Verify Added Disks](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-04.jpg) -验证添加的磁盘 +
![Verify Added Disks](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-04.jpg)
+
*验证添加的磁盘*
- 用于操作系统(CentOS 6.5)的默认磁盘。 - 默认磁盘上定义的分区(vda1 = swap),(vda2 = /)。 @@ -61,8 +62,8 @@ 各个磁盘大小都是20GB,默认的卷组的PE大小为4MB,我们在该服务器上配置的卷组使用默认PE。 -![Volume Group Display](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-05.jpg) -卷组显示 +
![Volume Group Display](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-05.jpg)
+
*卷组显示*
- **VG Name** – 卷组名称。 - **Format** – LVM架构使用LVM2。 @@ -82,8 +83,8 @@ # df -TH -![Check the Disk Space](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-06.jpg) -检查磁盘空间 +
![Check the Disk Space](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-06.jpg)
+
*检查磁盘空间*
上面的图片中显示了用于根的挂载点已使用了**18GB**,因此没有空闲空间可用了。 @@ -91,15 +92,15 @@ 我们可以扩展当前使用的卷组以获得更多空间。但在这里,我们将要做的是,创建新的卷组,然后在里面肆意妄为吧。过会儿,我们可以看到怎样来扩展使用中的卷组的文件系统。 -在使用新磁盘钱,我们需要使用fdisk来对磁盘分区。 +在使用新磁盘前,我们需要使用fdisk来对磁盘分区。 # fdisk -cu /dev/sda - **c** – 关闭DOS兼容模式,推荐使用该选项。 - **u** – 当列出分区表时,会以扇区而不是柱面显示。 -![Create New Physical Partitions](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-07.jpg) -创建新的物理分区 +
![Create New Physical Partitions](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-07.jpg)
+
*创建新的物理分区*
接下来,请遵循以下步骤来创建新分区。 @@ -118,8 +119,8 @@ # fdisk -l -![Verify Partition Table](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-08.jpg) -验证分区表 +
![Verify Partition Table](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-08.jpg)
+
*验证分区表*
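上面这种交互式的 fdisk 操作,也可以把按键序列整理成一个变量,方便在多块磁盘上重复执行。下面只是一个思路示意(按键对应本文的步骤:n 新建分区、p 主分区、1 分区号、两个空行表示起止扇区取默认值、t 修改分区类型、8e 即 Linux LVM、w 写入;/dev/sda 这个设备名取自本文示例环境):

```shell
# 先把按键序列打印出来人工核对;确认无误后,
# 才用 echo "$keys" | sudo fdisk -cu /dev/sda 的方式实际执行。
# 注意:该操作会改写分区表,切勿在存有数据的磁盘上运行。
keys='n
p
1


t
8e
w'
printf '%s\n' "$keys"
```

这样可以把分区步骤固化下来,减少交互输入出错的机会。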
### 创建物理卷 ### @@ -135,8 +136,8 @@ # pvs -![Create Physical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-09.jpg) -创建物理卷 +
![Create Physical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-09.jpg)
+
*创建物理卷*
### 创建卷组 ### @@ -152,11 +153,11 @@ # vgs -![Create Volume Groups](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-10.jpg) -创建卷组 +
![Create Volume Groups](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-10.jpg)
+
*创建卷组*
-![Verify Volume Groups](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-11.jpg) -验证卷组 +
![Verify Volume Groups](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-11.jpg)
+
*验证卷组*
理解vgs命令输出: @@ -173,15 +174,15 @@ # vgs -v -![Check Volume Group Information](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-12.jpg) -检查卷组信息 +
![Check Volume Group Information](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-12.jpg)
+
*检查卷组信息*
**8.** 要获取更多关于新创建的卷组信息,运行以下命令。 # vgdisplay tecmint_add_vg -![List New Volume Groups](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-13.jpg) -列出新卷组 +
![List New Volume Groups](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-13.jpg)
+
*列出新卷组*
- 卷组名称 - 使用的LVM架构。 @@ -200,15 +201,15 @@ # lvs -![List Current Volume Groups](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-14.jpg) -列出当前卷组 +
![List Current Volume Groups](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-14.jpg)
+
*列出当前卷组*
**10.** 这些逻辑卷处于**vg_tecmint**卷组中使用**pvs**命令来列出并查看有多少空闲空间可以创建逻辑卷。 # pvs -![Check Free Space](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-15.jpg) -检查空闲空间 +
![Check Free Space](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-15.jpg)
+
*检查空闲空间*
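顺带一提,如果想在脚本里自动取出这类命令输出中的数值(比如卷组剩余的 PE 数),可以用 awk 做简单的文本提取。下面用一段模拟的 vgdisplay 输出来演示这个思路(其中的数值取自本文示例环境;实际使用时,把 here-doc 换成 `vgdisplay tecmint_add_vg` 的真实输出即可):

```shell
# 从 vgdisplay 风格的输出里提取 Free PE 的数量
# 实际用法示意:free_pe=$(vgdisplay tecmint_add_vg | awk '/Free  PE/ {print $5}')
free_pe=$(awk '/Free  PE/ {print $5}' <<'EOF'
  VG Name               tecmint_add_vg
  Total PE              1725
  Alloc PE / Size       0 / 0
  Free  PE / Size       1725 / 53.91 GiB
EOF
)
echo "空闲 PE 数:${free_pe}"
```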
**11.** 卷组大小为**54GB**,而且未被使用,所以我们可以在该组内创建LV。让我们将卷组平均划分大小来创建3个逻辑卷,就是说**54GB**/3 = **18GB**,创建出来的单个逻辑卷应该会是18GB。 @@ -218,8 +219,8 @@ # vgdisplay tecmint_add_vg -![Create New Logical Volume](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-16.jpg) -创建新逻辑卷 +
![Create New Logical Volume](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-16.jpg)
+
*创建新逻辑卷*
- 默认分配给该卷组的PE为32MB,这里单个的PE大小为32MB。 - 总可用PE是1725。 @@ -233,8 +234,8 @@ 1725PE/3 = 575 PE. 575 PE x 32MB = 18400 --> 18GB -![Calculate Disk Space](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-17.jpg) -计算磁盘空间 +
![Calculate Disk Space](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-17.jpg)
+
*计算磁盘空间*
按**CTRL+D**退出**bc**。现在让我们使用575个PE来创建3个逻辑卷。 @@ -253,8 +254,8 @@ # lvs -![List Created Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-18.jpg) -列出创建的逻辑卷 +
![List Created Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-18.jpg)
+
*列出创建的逻辑卷*
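上面用 bc 做的 PE 换算,也可以直接用 shell 的算术扩展来完成,便于写进脚本(1725 和 32 这两个数值取自本文示例环境,请换成你自己 vgdisplay 输出里的实际值):

```shell
total_pe=1725     # 卷组总的可用 PE 数
pe_size_mb=32     # 单个 PE 的大小(MB)
lv_count=3        # 计划创建的逻辑卷个数

pe_per_lv=$(( total_pe / lv_count ))
size_mb=$(( pe_per_lv * pe_size_mb ))
echo "每个逻辑卷分配 ${pe_per_lv} 个 PE,约 ${size_mb} MB"
```

按本文的数值,会得到每个逻辑卷 575 个 PE、约 18400 MB(18GB),与 bc 的结果一致。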
#### 方法2: 使用GB大小创建逻辑卷 #### @@ -272,8 +273,8 @@ # lvs -![Verify Created Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-19.jpg) -验证创建的逻辑卷 +
![Verify Created Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-19.jpg)
+
*验证创建的逻辑卷*
这里,我们可以看到,当创建第三个LV的时候,我们无法刚好得到完整的18GB空间。这是因为尺寸上有细微的变动,不过在使用LVM创建LV时,这点差异可以忽略。 @@ -287,8 +288,8 @@ # mkfs.ext4 /dev/tecmint_add_vg/tecmint_manager -![Create Ext4 File System](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-20.jpg) -创建Ext4文件系统 +
![Create Ext4 File System](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-20.jpg)
+
*创建Ext4文件系统*
**13.** 让我们在**/mnt**下创建目录,并将已创建好文件系统的逻辑卷挂载上去。 @@ -302,8 +303,8 @@ # df -h -![Mount Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-22.jpg) -挂载逻辑卷 +
![Mount Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-22.jpg)
+
*挂载逻辑卷*
#### 永久挂载 #### @@ -321,32 +322,31 @@ /dev/mapper/tecmint_add_vg-tecmint_public /mnt/tecmint_public ext4 defaults 0 0 /dev/mapper/tecmint_add_vg-tecmint_manager /mnt/tecmint_manager ext4 defaults 0 0 -![Get mtab Mount Entry](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-23.jpg) -获取mtab挂载条目 +
![Get mtab Mount Entry](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-23.jpg)*
+
*获取mtab挂载条目*
-![Open fstab File](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-24.jpg) -打开fstab文件 +
![Open fstab File](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-24.jpg)
+
*打开fstab文件*
-![Add Auto Mount Entry](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-25.jpg) -添加自动挂载条目 +
![Add Auto Mount Entry](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-25.jpg)
+
*添加自动挂载条目*
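上面手工添加的三行挂载条目,也可以用一个小循环批量生成,减少手敲路径出错的机会(卷组名与逻辑卷名取自本文示例;生成的内容请先人工核对,再追加到 /etc/fstab):

```shell
entries=$(
  for lv in tecmint_documents tecmint_public tecmint_manager; do
    printf '/dev/mapper/tecmint_add_vg-%s /mnt/%s ext4 defaults 0 0\n' "$lv" "$lv"
  done
)
echo "$entries"
# 核对无误后再执行:echo "$entries" | sudo tee -a /etc/fstab
```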
重启前,执行mount -a命令来检查fstab条目。 # mount -av -![Verify fstab Entry](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-26.jpg) -验证fstab条目 +
![Verify fstab Entry](http://www.tecmint.com/wp-content/uploads/2014/07/Create-Logical-Volume-Storage-26.jpg)
+
*验证fstab条目*
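回顾一下,本文从裸盘分区到挂载使用的完整流程,大致可以串成下面这个命令序列(设备名、卷组名、逻辑卷名以及 -s 32M、-l 575 等参数均取自本文示例环境;这里先用一个 run 函数把命令收集并打印出来做流程梳理,真正执行时需要 root 权限,而且会清除磁盘数据,请务必先在测试环境演练):

```shell
plan=""
run() { plan="${plan}$*"$'\n'; echo "$*"; }    # 只收集并打印命令,确认后再改为真正执行

run pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1                           # 1. 把分区注册为物理卷
run vgcreate -s 32M tecmint_add_vg /dev/sda1 /dev/sdb1 /dev/sdc1     # 2. 以 32MB 的 PE 创建卷组
for lv in tecmint_documents tecmint_public tecmint_manager; do
    run lvcreate -l 575 -n "$lv" tecmint_add_vg                      # 3. 每个逻辑卷分配 575 个 PE
    run mkfs.ext4 "/dev/tecmint_add_vg/$lv"                          # 4. 创建 ext4 文件系统
    run mkdir -p "/mnt/$lv"                                          # 5. 创建挂载点
    run mount "/dev/tecmint_add_vg/$lv" "/mnt/$lv"                   # 6. 挂载
done
```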
这里,我们已经了解了怎样来使用逻辑卷构建灵活的存储,从使用物理磁盘到物理卷,物理卷到卷组,卷组再到逻辑卷。 -在我即将奉献的文章中,我将介绍如何扩展卷组、逻辑卷,减少逻辑卷,拍快照以及从快照中恢复。到那时,保持TecMint更新到这些精彩文章中的内容。 --------------------------------------------------------------------------------- +在我即将奉献的文章中,我将介绍如何扩展卷组、逻辑卷,减少逻辑卷,创建快照以及从快照中恢复。 -------------------------------------------------------------------------------- via: http://www.tecmint.com/create-lvm-storage-in-linux/ 作者:[Babin Lonston][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140818 Disable reboot using Ctrl-Alt-Del Keys in RHEL or CentOS.md b/published/20140818 Disable reboot using Ctrl-Alt-Del Keys in RHEL or CentOS.md similarity index 94% rename from translated/tech/20140818 Disable reboot using Ctrl-Alt-Del Keys in RHEL or CentOS.md rename to published/20140818 Disable reboot using Ctrl-Alt-Del Keys in RHEL or CentOS.md index f95f9d4eb4..0efa52c08e 100644 --- a/translated/tech/20140818 Disable reboot using Ctrl-Alt-Del Keys in RHEL or CentOS.md +++ b/published/20140818 Disable reboot using Ctrl-Alt-Del Keys in RHEL or CentOS.md @@ -1,4 +1,4 @@ -在RHEL / CentOS下停用按下Ctrl-Alt-Del 重启系统的功能 +在RHEL/CentOS 5/6下停用按下Ctrl-Alt-Del 重启系统的功能 ================================================================================ 在Linux里,默认情况下,任何人都可以按下**Ctrl-Alt-Del**来**重启**系统。但是在生产环境中,出于安全的考虑,应该停用按下Ctrl-Alt-Del 重启系统的功能。 @@ -37,7 +37,7 @@ via: http://www.linuxtechi.com/disable-reboot-using-ctrl-alt-del-keys/ 作者:[Pradeep Kumar][a] 译者:[2q1w2007](https://github.com/2q1w2007) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140818 How to configure Access Control Lists (ACLs) on Linux.md b/published/20140818 How to configure Access Control Lists (ACLs) on Linux.md similarity index 72%
rename from translated/tech/20140818 How to configure Access Control Lists (ACLs) on Linux.md rename to published/20140818 How to configure Access Control Lists (ACLs) on Linux.md index bac19246bd..c960731533 100644 --- a/translated/tech/20140818 How to configure Access Control Lists (ACLs) on Linux.md +++ b/published/20140818 How to configure Access Control Lists (ACLs) on Linux.md @@ -1,8 +1,8 @@ -配置Linux访问控制列表(ACL) +配置 Linux 的访问控制列表(ACL) ================================================================================ -使用拥有权限控制的Liunx,工作是一件轻松的任务。它可以定义任何user,group和other的权限。无论是在桌面电脑或者不会有很多用户的虚拟Linux实例,或者当用户不愿意分享他们之间的文件时,这样的工作是很棒的。然而,如果你是在一个大型组织,你运行了NFS或者Samba服务给不同的用户。然后你将会需要灵活的挑选并设置很多复杂的配置和权限去满足你的组织不同的需求。 +使用拥有权限控制的Linux,工作是一件轻松的任务。它可以定义任何user,group和other的权限。无论是在桌面电脑或者不会有很多用户的虚拟Linux实例,或者当用户不愿意分享他们之间的文件时,这样的工作是很棒的。然而,如果你是在一个大型组织,你运行了NFS或者Samba服务给不同的用户,然后你将会需要灵活的挑选并设置很多复杂的配置和权限去满足你的组织不同的需求。 -Linux(和其他Unix,兼容POSIX的)所以拥有访问控制列表(ACL),它是一种分配权限之外的普遍范式。例如,默认情况下你需要确认3个权限组:owner,group和other。使用ACL,你可以增加权限给其他用户或组别,而不单只是简单的"other"或者是拥有者不存在的组别。可以允许指定的用户A、B、C拥有写权限而不再是让他们整个组拥有写权限。 +Linux(和其他Unix等POSIX兼容的操作系统)有一种被称为访问控制列表(ACL)的权限控制方法,它是一种权限分配之外的普遍范式。例如,默认情况下你需要确认3个权限组:owner、group和other。而使用ACL,你可以增加权限给其他用户或组别,而不单只是简单的"other"或者是拥有者不存在的组别。可以允许指定的用户A、B、C拥有写权限而不再是让他们整个组拥有写权限。 ACL支持多种Linux文件系统,包括ext2, ext3, ext4, XFS, Btrfs, 等。如果你不确定你的文件系统是否支持ACL,请参考文档。 @@ -32,15 +32,15 @@ Archlinux 中: ![](https://farm4.staticflickr.com/3859/14768099340_eab7b53e28_z.jpg) -你可以注意到,我的root分区中ACL属性已经开启。万一你没有开启,你需要编辑/etc/fstab文件。增加acl标记,在你需要开启ACL的分区之前。 +你可以注意到,我的root分区中ACL属性已经开启。万一你没有开启,你需要编辑/etc/fstab文件,在你需要开启ACL的分区的选项前增加acl标记。 ![](https://farm6.staticflickr.com/5566/14931771056_b48d5daae2_z.jpg) -现在我们需要重新挂载分区(我喜欢完全重启,因为我不想丢掉数据),如果你对任何分区开启ACL,你必须也重新挂载它。 +现在我们需要重新挂载分区(我喜欢完全重启,因为我不想丢失数据),如果你对其它分区开启ACL,你必须也重新挂载它。 $ sudo mount / -o remount -令人敬佩!现在我们已经在我们的系统中开启ACL,让我们开始和它一起工作。 +干得不错!现在我们已经在我们的系统中开启ACL,让我们开始和它一起工作。 ### ACL 范例 ### @@ -54,7 +54,6 @@ Archlinux 中: 我想要分享这个目录给其他两个用户test和test2,一个拥有完整权限,另一个只有读权限。 -First,
to set ACLs for user test: 首先,为用户test设置ACL: $ sudo setfacl -m u:test:rwx /shared @@ -84,7 +83,7 @@ ![](https://farm6.staticflickr.com/5591/14768099389_9a7f3a6bf2_z.jpg) -你可以注意到,正常权限后多一个+标记。这表示ACL已经设置成功。为了真正读取ACL,我们需要运行: +你可以注意到,正常权限后多一个+标记。这表示ACL已经设置成功。要具体看一下ACL,我们需要运行: $ sudo getfacl /shared @@ -102,11 +101,11 @@ First, to set ACLs for user test: ![](https://farm4.staticflickr.com/3863/14768099130_a7d175f067_z.jpg) -最后一件事。在设置了ACL文件或目录工作时,cp和mv命令会改变这些设置。在cp的情况下,需要添加“p”参数来复制ACL设置。如果这不可行,它将会展示一个警告。mv默认移动ACL设置,如果这也不可行,它也会向您展示一个警告。 +最后,在对设置了ACL的文件或目录进行操作时,cp和mv命令会改变这些设置。在cp的情况下,需要添加“p”参数来复制ACL设置。如果这不可行,它将会展示一个警告。mv默认移动ACL设置,如果这也不可行,它也会向您展示一个警告。 ### 总结 ### -使用ACL给了在你想要分享的文件上巨大的权利和控制,特别是在NFS/Samba服务。此外,如果你的主管共享主机,这个工具是必备的。 +使用ACL让你在想要分享的文件上拥有更多的控制能力,特别是在NFS/Samba服务上。此外,如果你管理着共享主机,这个工具更是必备的。 -------------------------------------------------------------------------------- via: http://xmodulo.com/2014/08/configure-access-control-lists-acls-linux.html 作者:[Christopher Valerio][a] 译者:[VicYu](http://www.vicyu.net) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140822 15 Practical Examples of 'echo' command in Linux.md b/published/20140822 15 Practical Examples of 'echo' command in Linux.md similarity index 73% rename from translated/tech/20140822 15 Practical Examples of 'echo' command in Linux.md rename to published/20140822 15 Practical Examples of 'echo' command in Linux.md index 6b906232b1..b5898209e7 100644 --- a/translated/tech/20140822 15 Practical Examples of 'echo' command in Linux.md +++ b/published/20140822 15 Practical Examples of 'echo' command in Linux.md @@ -1,15 +1,16 @@ -Linux中15个‘echo’ 实例 +Linux中的15个‘echo’ 命令实例 ================================================================================ **echo**是一种最常用的与广泛使用的内置于Linux的bash和C
shell的命令,通常用在脚本语言和批处理文件中来在标准输出或者文件中显示一行文本或者字符串。 ![echo command examples](http://www.tecmint.com/wp-content/uploads/2014/08/echo-command.png) + echo命令例子 echo命令的语法是: echo [选项] [字符串] -**1.** 输入一行文本并显示在标准输出上 +###**1.** 输入一行文本并显示在标准输出上 $ echo Tecmint is a community of Linux Nerds @@ -17,7 +18,9 @@ echo命令的语法是: Tecmint is a community of Linux Nerds -**2.** 声明一个变量并输出它的值。比如,声明变量**x**并给它赋值为**10**。 +###**2.** 输出一个声明的变量值 + +比如,声明变量**x**并给它赋值为**10**。 $ x=10 @@ -27,15 +30,20 @@ echo命令的语法是: The value of variable x = 10 -**注意:** Linux中的选项‘**-e**‘扮演了转义字符反斜线的翻译器。 -**3.** 使用‘**\b**‘选项- ‘**-e**‘后带上'\b'会删除字符间的所有空格。 +###**3.** 使用‘**\b**‘选项 + +‘**-e**‘后带上'\b'会删除字符间的所有空格。 + +**注意:** Linux中的选项‘**-e**‘扮演了转义字符反斜线的翻译器。 $ echo -e "Tecmint \bis \ba \bcommunity \bof \bLinux \bNerds" TecmintisacommunityofLinuxNerds -**4.** 使用‘**\n**‘选项- ‘**-e**‘后面的带上‘\n’行会在遇到的地方作为新的一行 +###**4.** 使用‘**\n**‘选项 + +‘**-e**‘后面的带上‘\n’行会在遇到的地方作为新的一行 $ echo -e "Tecmint \nis \na \ncommunity \nof \nLinux \nNerds" @@ -47,13 +55,15 @@ echo命令的语法是: Linux Nerds -**5.** 使用‘**\t**‘选项 - ‘**-e**‘后面跟上‘\t’会在空格间加上水平制表符。 +###**5.** 使用‘**\t**‘选项 + +‘**-e**‘后面跟上‘\t’会在空格间加上水平制表符。 $ echo -e "Tecmint \tis \ta \tcommunity \tof \tLinux \tNerds" Tecmint is a community of Linux Nerds -**6.** 也可以同时使用换行‘**\n**‘与水平制表符‘**\t**‘。 +###**6.** 也可以同时使用换行‘**\n**‘与水平制表符‘**\t**‘ $ echo -e "\n\tTecmint \n\tis \n\ta \n\tcommunity \n\tof \n\tLinux \n\tNerds" @@ -65,7 +75,9 @@ echo命令的语法是: Linux Nerds -**7.** 使用‘**\v**‘选项 - ‘**-e**‘后面跟上‘\v’会加上垂直制表符。 +###**7.** 使用‘**\v**‘选项 + +‘**-e**‘后面跟上‘\v’会加上垂直制表符。 $ echo -e "\vTecmint \vis \va \vcommunity \vof \vLinux \vNerds" @@ -77,7 +89,7 @@ echo命令的语法是: Linux Nerds -**8.** 也可以同时使用换行‘**\n**‘与垂直制表符‘**\v**‘。 +###**8.** 也可以同时使用换行‘**\n**‘与垂直制表符‘**\v**‘ $ echo -e "\n\vTecmint \n\vis \n\va \n\vcommunity \n\vof \n\vLinux \n\vNerds" @@ -98,43 +110,51 @@ echo命令的语法是: **注意:** 你可以按照你的需求连续使用两个或者多个垂直制表符,水平制表符与换行符。 -**9.** 使用‘**\r**‘选项 - ‘**-e**‘后面跟上‘\r’来指定输出中的回车符。 +###**9.** 使用‘**\r**‘选项 + +‘**-e**‘后面跟上‘\r’来指定输出中的回车符。(LCTT 译注:会覆写行开头的字符) 
$ echo -e "Tecmint \ris a community of Linux Nerds" is a community of Linux Nerds -**10.** 使用‘**\c**‘选项 - ‘**-e**‘后面跟上‘\c’会抑制输出后面的字符并且最后不会换新行。 +###**10.** 使用‘**\c**‘选项 + +‘**-e**‘后面跟上‘\c’会抑制输出后面的字符并且最后不会换新行。 $ echo -e "Tecmint is a community \cof Linux Nerds" Tecmint is a community @tecmint:~$ -**11.** ‘**-n**‘会在echo完后不会输出新行。 +###**11.** ‘**-n**‘会在echo完后不会输出新行 $ echo -n "Tecmint is a community of Linux Nerds" Tecmint is a community of Linux Nerds@tecmint:~/Documents$ -**12.** 使用‘**\c**‘选项 - ‘**-e**‘后面跟上‘\a’选项会听到声音警告。 +###**12.** 使用‘**\a**‘选项 + +‘**-e**‘后面跟上‘\a’选项会听到声音警告。 $ echo -e "Tecmint is a community of \aLinux Nerds" Tecmint is a community of Linux Nerds -**注意:** 在你开始前,请先检查你的音量键。 +**注意:** 在你开始前,请先检查你的音量设置。 -**13.** 使用echo命令打印所有的文件和文件夹(ls命令的替代)。 +###**13.** 使用echo命令打印所有的文件和文件夹(ls命令的替代) $ echo * 103.odt 103.pdf 104.odt 104.pdf 105.odt 105.pdf 106.odt 106.pdf 107.odt 107.pdf 108a.odt 108.odt 108.pdf 109.odt 109.pdf 110b.odt 110.odt 110.pdf 111.odt 111.pdf 112.odt 112.pdf 113.odt linux-headers-3.16.0-customkernel_1_amd64.deb linux-image-3.16.0-customkernel_1_amd64.deb network.jpeg -**14.** 打印制定的文件类型。比如,让我们假设你想要打印所有的‘**.jpeg**‘文件,使用下面的命令。 +###**14.** 打印制定的文件类型 + +比如,让我们假设你想要打印所有的‘**.jpeg**‘文件,使用下面的命令。 $ echo *.jpeg network.jpeg -**15.** echo可以使用重定向符来输出到一个文件而不是标准输出。 +###**15.** echo可以使用重定向符来输出到一个文件而不是标准输出 $ echo "Test Page" > testpage @@ -142,7 +162,7 @@ echo命令的语法是: avi@tecmint:~$ cat testpage Test Page -### echo 选项 ### +### echo 选项列表 ### @@ -187,14 +207,15 @@ echo命令的语法是:
-就是这些了,不要忘记在下面留下你有价值的反馈。 +就是这些了,不要忘记在下面留下你的反馈。 + -------------------------------------------------------------------------------- via: http://www.tecmint.com/echo-command-in-linux/ 作者:[Avishek Kumar][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140825 Linux FAQs with Answers--How to install Shutter on CentOS.md b/published/20140825 Linux FAQs with Answers--How to install Shutter on CentOS.md similarity index 74% rename from translated/tech/20140825 Linux FAQs with Answers--How to install Shutter on CentOS.md rename to published/20140825 Linux FAQs with Answers--How to install Shutter on CentOS.md index ca52077ab0..2f84ed9a8d 100644 --- a/translated/tech/20140825 Linux FAQs with Answers--How to install Shutter on CentOS.md +++ b/published/20140825 Linux FAQs with Answers--How to install Shutter on CentOS.md @@ -1,9 +1,10 @@ -Linux有问必答——如何在CentOS上安装Shutter +Linux有问必答:如何在CentOS上安装Shutter ================================================================================ > **问题**:我想要在我的CentOS桌面上试试Shutter屏幕截图程序,但是,当我试着用yum来安装Shutter时,它总是告诉我“没有shutter包可用”。我怎样才能在CentOS上安装Shutter啊? [Shutter][1]是一个用于Linux桌面的开源(GPLv3)屏幕截图工具。它打包有大量用户友好的功能,这让它成为Linux中功能最强大的屏幕截图程序之一。你可以用Shutter来捕捉一个规则区域、一个窗口、整个桌面屏幕、或者甚至是来自任意专用地址的一个网页的截图。除此之外,你也可以用它内建的图像编辑器来对捕获的截图进行编辑,应用不同的效果,将图像导出为不同的图像格式(svg,pdf,ps),或者上传图片到公共图像主机或者FTP站点。 -Shutter is not available as a pre-built package on CentOS (as of version 7). Fortunately, there exists a third-party RPM repository called Nux Dextop, which offers Shutter package. So [enable Nux Dextop repository][2] on CentOS. Then use the following command to install Shutter. 
+ +Shutter 在 CentOS (截止至版本 7)上没有预先构建好的软件包。幸运的是,有一个叫做 Nux Dextop 的第三方 RPM 软件库提供了 Shutter 软件包。 所以在 CentOS 上[启用 Nux Dextop 软件库][2],然后使用下列命令来安装它: $ sudo yum --enablerepo=nux-dextop install shutter @@ -14,9 +15,9 @@ Shutter is not available as a pre-built package on CentOS (as of version 7). For via: http://ask.xmodulo.com/install-shutter-centos.html 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [1]:http://shutter-project.org/ -[2]:http://ask.xmodulo.com/enable-nux-dextop-repository-centos-rhel.html +[2]:http://linux.cn/article-3889-1.html diff --git a/translated/tech/20140825 Linux FAQs with Answers--How to show a MAC learning table of Linux bridge.md b/published/20140825 Linux FAQs with Answers--How to show a MAC learning table of Linux bridge.md similarity index 93% rename from translated/tech/20140825 Linux FAQs with Answers--How to show a MAC learning table of Linux bridge.md rename to published/20140825 Linux FAQs with Answers--How to show a MAC learning table of Linux bridge.md index 3cf867ff1b..53a448b51f 100644 --- a/translated/tech/20140825 Linux FAQs with Answers--How to show a MAC learning table of Linux bridge.md +++ b/published/20140825 Linux FAQs with Answers--How to show a MAC learning table of Linux bridge.md @@ -1,4 +1,4 @@ -Linux有问必答——如何显示Linux网桥的MAC学习表 +Linux有问必答:如何显示Linux网桥的MAC学习表 ================================================================================ > **问题**:我想要检查一下我用brctl工具创建的Linux网桥的MAC地址学习状态。请问,我要怎样才能查看Linux网桥的MAC学习表(或者转发表)?
@@ -18,6 +18,6 @@ Linux网桥是网桥的软件实现,这是Linux内核的内核部分。与硬 via: http://ask.xmodulo.com/show-mac-learning-table-linux-bridge.html 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/talk/20140826 Linus Torvalds Promotes Linux for Desktops, Embedded Computing.md b/published/20140826 Linus Torvalds Promotes Linux for Desktops, Embedded Computing.md similarity index 52% rename from translated/talk/20140826 Linus Torvalds Promotes Linux for Desktops, Embedded Computing.md rename to published/20140826 Linus Torvalds Promotes Linux for Desktops, Embedded Computing.md index 90701b54f5..4e7ac1dbc1 100644 --- a/translated/talk/20140826 Linus Torvalds Promotes Linux for Desktops, Embedded Computing.md +++ b/published/20140826 Linus Torvalds Promotes Linux for Desktops, Embedded Computing.md @@ -1,20 +1,20 @@ -Linus Torvalds推动Linux的桌面与嵌入式计算的发展 +Linus Torvalds 希望推动Linux在桌面和嵌入式计算方面共同发展 ================================================================================ -> Linux的内核开发者和开源领袖Linus Torvalds最近表达了关于Linux桌面和嵌入式设备中Linux的未来的看法。 +> Linux的内核开发者和开源领袖Linus Torvalds前一段时间表达了关于Linux桌面和嵌入式设备中Linux的未来的看法。 ![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2014/08/linus-torvalds-1.jpg) 什么是Linux桌面和嵌入式设备中Linux的未来?这是个值得讨论的问题,不过Linux的创始人和开源巨人Linus Torvalds在最近一届 [Linux 基金会][1] 的LinuxCon大会上,在一次对话中表达了一些有趣的观点。 -作为敲出第一版Linux内核代码并且在1991年将它们共享在互联网上的家伙,Torvalds毫无疑问是开源软件甚至是任何软件中最著名的开发者,如今他依然活跃在其中。在此期间,Torvalds是许多人和组织中唯一一个引领着Linux发展的个体,它的观点往往能影响着开源社区,而且,作为一个内核开发者的角色赋予了他能决定哪些特点和代码能被放进操作系统内部的强大权利。 +作为敲出第一版Linux内核代码并且在1991年将它们共享在互联网上的家伙,Torvalds毫无疑问是开源软件甚至是所有软件中最著名的开发者,如今他依然活跃在其中。在此期间,Torvalds只是引领着Linux发展的许多人和组织中的一员,但他的观点往往能影响开源社区,而且,作为一个内核开发者的角色赋予了他能决定哪些特性和代码能被放进操作系统内部的强大权力。 -所以说,关注Torvalds所说的话是很值得的, “我还是挺想要桌面的。” [上周他在LinuxCon大会上这样说道][2]
那标志着他仍然着眼于作为使个人机更加强大的操作系统Linux的未来,尽管十年来Linux桌面市场的分享一直很少,而且大部分围绕Linux的商业活动都去涉及服务器或者安卓手机硬件去了。 +所以说,关注Torvalds所说的话是很值得的, “我还是挺想要桌面的。” [他在上月的LinuxCon大会上这样说道][2] 那表明他仍然着眼于作为使PC更加强大的操作系统Linux的未来,尽管十年来Linux桌面市场的份额一直很少,而且大部分围绕Linux的商业活动都转向了服务器或者安卓手机。 -但是,Torvalds还说,确保Linux桌面能有个宏伟的未来意味着解决了受阻的 “基础设施问题”,好像庞大的开源软件生态系统和硬件世界让他充满信心。这不是Linux核心代码本身的问题,而是要让Linux桌面渠道友好,这可能是伟大的Torvalds和他开发同伴们所需要花精力去达到的目标。这取决于app的开发者、硬件制造商和其它有志于实现人们能方便使用基于Linux的计算平台的各方力量。 +但是,Torvalds还说,确保Linux桌面能有个宏伟的未来意味着要解决受阻的 “基础设施问题”,庞大的开源软件生态系统和硬件世界让他充满信心。这不是Linux核心代码本身的问题,而是要让Linux桌面渠道友好,这可能是伟大的Torvalds和他开发同伴们所需要花精力去达到的目标。这取决于app的开发者、硬件制造商和其它有志于实现人们能方便使用基于Linux的计算平台的各方力量。 -另一方面,Torvalds也提到了他的憧憬,就是内核开发者们能简化嵌入式装置中的Linux代码——一个在让内核更加桌面友好化上会导致很多分歧的任务。但这也不一定,因为无论如何,Linux都是以模块化设计的,单内核代码库不能同时满足桌面用户和嵌入式开发者的需求,这是没有道理的,因为这取决于他们使用的模块。 +另一方面,Torvalds也提到了他的憧憬,就是内核开发者们能简化嵌入式装置中的Linux代码——这也许和让Linux内核更加桌面友好化的任务有所分歧。但这也不一定,因为无论如何,Linux都是以模块化设计的,单内核代码库不能同时满足桌面用户和嵌入式开发者的需求,这是没有道理的,因为这取决于他们使用的模块。 -作为一个长时间想看到更多搭载Linux的嵌入式设备出现的Linux桌面用户,我希望Torvalds的所有愿望都可以实现,到那时我就能只用Liunx来做所有我想做的事情,无论是在电脑桌面上、手机上、车上,或者是任何其它的地方。 +作为一个一直想看到更多搭载Linux的嵌入式设备出现的Linux桌面用户,我希望Torvalds的所有愿望都可以实现,到那时我就可以只用Linux来做所有我想做的事情,无论是在电脑桌面上、手机上、车上,或者是任何其它的地方。 -------------------------------------------------------------------------------- @@ -22,7 +22,7 @@ via: http://thevarguy.com/open-source-application-software-companies/082514/linu 作者:[Christopher Tozzi][a] 译者:[ZTinoZ](https://github.com/ZTinoZ) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140902 Mount Google drive in Ubuntu 14.04 LTS.md b/published/20140902 Mount Google drive in Ubuntu 14.04 LTS.md similarity index 82% rename from translated/tech/20140902 Mount Google drive in Ubuntu 14.04
LTS.md +++ b/published/20140902 Mount Google drive in Ubuntu 14.04 LTS.md @@ -1,6 +1,6 @@ -Google drive和Ubuntu 14.04 LTS的胶合 +墙外香花:Google drive和Ubuntu 14.04 LTS的胶合 ================================================================================ -Google尚未发布其**官方Linux客户端**,以用于从Ubuntu访问其drive。然开源社区却业已开发完毕非官方之软件包‘**grive-tools**’。 +Google尚未发布用于从Ubuntu访问其drive的**官方Linux客户端**。然开源社区却业已开发完毕非官方之软件包‘**grive-tools**’。 Grive乃是Google Drive(**在线存储服务**)的GNU/Linux系统客户端,允许你**同步**所选目录到云端,以及上传新文件到Google Drive。 @@ -22,7 +22,7 @@ Grive乃是Google Drive(**在线存储服务**)的GNU/Linux系统客户端 **步骤:1** 安装完了,通过输入**Grive**在**Unity Dash**搜索应用,并打开之。 -![](http://www.linuxtechi.com/wp-content/uploads/2014/09/access-grive-setup.png) +![](http://www.linuxtechi.com/wp-content/uploads/2014/09/access-grive-setup-1.jpg) **步骤:2** 登入google drive,你将被问及访问google drive的权限。 @@ -36,25 +36,25 @@ Grive乃是Google Drive(**在线存储服务**)的GNU/Linux系统客户端 **步骤:3** 下面将提供给你一个 **google代码**,复制并粘贴到**Grive设置框**内。 -![](http://www.linuxtechi.com/wp-content/uploads/2014/09/gdrive-code.png) +![](http://www.linuxtechi.com/wp-content/uploads/2014/09/gdrive-code-1.jpg) -![](http://www.linuxtechi.com/wp-content/uploads/2014/09/code-in-grive.png) +![](http://www.linuxtechi.com/wp-content/uploads/2014/09/code-in-grive-1.jpg) 点击下一步后,将会开始同步google drive到你**家目录**下的‘**Google Drive**’文件夹。完成后,将出现如下窗口。 ![](http://www.linuxtechi.com/wp-content/uploads/2014/09/grive-installation-completed.png) -Google Drive folder created under **user's home directory** +Google Drive 文件夹会创建在**用户的主目录**下。 -![](http://www.linuxtechi.com/wp-content/uploads/2014/09/google-drive-folder.png) +![](http://www.linuxtechi.com/wp-content/uploads/2014/09/google-drive-folder-1.jpg) -------------------------------------------------------------------------------- via: http://www.linuxtechi.com/mount-google-drive-in-ubuntu/ -作者:[Pradeep Kumar ][a] +作者:[Pradeep Kumar][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/20140902 Photo Editing on Linux with Krita.md b/published/20140902 Photo Editing on Linux with Krita.md new file mode 100644 index 0000000000..3e36fb76ad --- /dev/null +++ b/published/20140902 Photo Editing on Linux with Krita.md @@ -0,0 +1,76 @@ +在 Linux 下用 Krita 进行照片编辑 +================================================================================ +
+
图 1:侏儒山羊 Annabelle
+ +[Krita][1] 是一款很棒的绘图应用,同时也是很不错的照片编辑器。今天我们将学习如何给图片添加文字,以及如何有选择地锐化照片的某一部分。 + +### Krita 简介 ### + +与其他绘图/制图应用类似,Krita 内置了数百种工具和选项,以及多种处理方法。因此它值得我们花点时间来了解一下。 + +Krita 默认使用了暗色主题。我不太喜欢暗色主题,但幸运的是 Krita 还有其他很赞的主题,你可以在任何时候通过菜单里的“设置 > 主题”进行更改。 + +Krita 使用了窗口停靠样式的工具条。如果左右两侧面板的 Dock 工具条没有显示,检查一下“设置 > 显示工具条”选项,你也可以在“设置 > 工具条”中对工具条按你的偏好进行调整。不过隐藏的工具条也许会让你感到一些小小的不快,它们只会在一个狭小的压扁区域展开,你看不见其中的任何东西。你可以拖动它们至顶端或者 Krita 窗口的一侧,放大或者缩小它们,甚至你可以把它们拖到 Krita 外,放在你显示屏的任意位置。如果你把其中一个工具条拖到了另一个工具条上,它们会自动合并成一个工具条。 + +当你配置好比较满意的工作区后,你可以在“选择工作区”内保存它。你可以在笔刷工具条(通过“设置 > 显示工具条”开启显示)的右侧找到“选择工作区”。其中有对工作区的不同配置,当然你也可以创建自己的配置(图 2)。 + +
+
图 2:在“选择工作区”里保存用户定制的工作区。
+ +Krita 中有多重缩放控制方法。Ctrl + “=” 放大,Ctrl + “-” 缩小,Ctrl + “0” 重置为 100% 缩放画面。你也可以通过“视图 > 缩放”,或者右下角的缩放条进行控制。在缩放条的左侧还有一个下拉式的缩放菜单。 + +工具菜单位于窗口左部,其中包含了锐化和选择工具。你必须移动光标到每个工具上,才能查看它的标签。工具选项条总是显示当前正在使用的工具的选项,默认情况下工具选项条位于窗口右部。 + +### 裁切工具 ### + +当然,在工具菜单条中有裁切工具,并且非常易于使用。把你想要选择的区域用矩形圈定,使用拖拽的方式来调整选区,调整完毕后点击返回按钮。在工具选项条中,你可以选择对所有图层应用裁切,还是只对当前图层应用裁切,通过输入具体数值,或者是百分比调整尺寸。 + +### 添加文本 ### + +当你想在照片上添加标签或者说明这类简单文本的时候,Krita 也许会让你眼花缭乱,因为它有太多的艺术字效果可供选择了。但 Krita 同时也支持添加简单的文字。点击文本工具条,你将会看到工具选项条如图 3 那样。 + +
+
图 3:文本选项。
+ +点击展开按钮。这将显示简单文本工具;首先绘制矩形文本框,接着在文本框内输入文字。工具选项条中有所有常用的文本格式选项:文本选择、文本尺寸、文字与背景颜色、边距,以及一系列图形风格。当你处理完文本后,点击外观处理工具,外观处理工具的按钮是一个白色的箭头,在文本工具按钮旁边,通过外观处理工具你可以调整文字整体的尺寸、外观还有位置。外观处理工具的工具选项包括多种不同的线条、颜色还有边距。图 4 是我为我那些蜗居在城市里的亲戚们发送的一幅带有愉快标题的照片。 +
+
图 4:来这绿色农场吧。
+ +如何处理你的照片上已经存在的文字?点击外观处理工具,在文本区域内双击。这将使文本进入编辑模式,从文本框内出现的光标可以看出这一点。现在,你就可以开始选择文字、添加文字、更改格式,等等。 + +### 锐化选区 ### + +外观编辑上,Krita 有许多很棒的工具。在图 5 中我想要锐化 Annabelle 的脸和眼睛。(Annabelle 住在隔壁,但她很喜欢我的狗,在我这里呆了很长一段时间。我的狗却因为害怕她而跑了,但她却一点也不气馁。)首先通过“外形选区”工具选择一个区域。接着打开“滤镜 > 增强 > 虚边蒙板”。你可以调节三个变量:半长值、总量以及阈值。大多数图像编辑软件都有半径、总量和阀值的设置。半径是直径的一半,因此从技术上来说“半长值”是正确的,但却可能造成不必要的混乱。 + +
+
图 5:选取任意的区域进行编辑。
+ +半长值决定了锐化线条的粗细。你需要足够大的数值来产生较好的结果,但很明显,不要过大。 + +阀值决定了锐化时两个像素点之间的效果差异。“0”是锐化的最大值,“99”则表示不进行锐化。 + +总量控制着锐化强度;其值越高锐化程度越高。 + +锐化基本上是你处理照片的最后一步,因为它和你对照片所做的一切处理都有关:裁切、改变尺寸、颜色、色差...如果你先进行锐化再进行其他操作,你的锐化效果将变得一团糟。 + +接着,你要问,“虚化蒙板”是什么意思?这个名字来源于锐化技术:虚化蒙板滤镜在原始图像上覆盖一层模糊的蒙板,接着在上面分层进行虚化蒙板。这将使图像比直接锐化产生更加锐利清晰的效果。 + +今天要说的就这么多。有关 Krita 的资料很多,但比较杂乱。你可以从 [Krita Tutorials][2] 开始学习,也可以在 YouTube 上找寻相关的学习视频。 + +- [krita 官方网站][1] + +-------------------------------------------------------------------------------- + +via: http://www.linux.com/learn/tutorials/786040-photo-editing-on-linux-with-krita + +作者:[Carla Schroder][a] +译者:[SteveArcher](https://github.com/SteveArcher) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.linux.com/community/forums/person/3734 +[1]:https://krita.org/ +[2]:https://krita.org/learn/tutorials/ diff --git a/translated/talk/20140904 Making MySQL Better at GitHub.md b/published/20140904 Making MySQL Better at GitHub.md similarity index 70% rename from translated/talk/20140904 Making MySQL Better at GitHub.md rename to published/20140904 Making MySQL Better at GitHub.md index 6afac31f2a..1aeea36c03 100644 --- a/translated/talk/20140904 Making MySQL Better at GitHub.md +++ b/published/20140904 Making MySQL Better at GitHub.md @@ -1,26 +1,26 @@ -优化 GitHub 服务器上的 MySQL 数据库性能 + GitHub 是如何迁移 MySQL 集群的 ================================================================================ -> 在 GitHub 我们总是说“如果网站响应速度不够快,说明我们的工作没完成”。我们之前在[前端的体验速度][1]这篇文章中介绍了一些提高网站响应速率的方法,但这只是故事的一部分。真正影响到 GitHub.com 性能的因素是 MySQL 数据库架构。让我们来瞧瞧我们的基础架构团队是如何无缝升级了 MySQL 架构吧,这事儿发生在去年8月份,成果就是大大提高了 GitHub 网站的速度。 +> 在 GitHub 我们总是说“如果网站响应速度不够快,我们就不应该让它上线运营”。我们之前在[前端的体验速度][1]这篇文章中介绍了一些提高网站响应速率的方法,但这只是故事的一部分。真正影响到 GitHub.com 性能的因素是 MySQL 数据库架构。让我们来瞧瞧我们的基础架构团队是如何无缝升级了 MySQL 架构吧,这事儿发生在去年8月份,成果就是大大提高了 GitHub 网站的速度。 ### 任务 ### -去年我们把 GitHub 上的大部分数据移到了新的数据中心,这个中心有世界顶级的硬件资源和网络平台。自从使用了 MySQL 
作为我们的后端基本存储系统,我们一直期望着一些改进来大大提高数据库性能,但是在数据中心使用全新的硬件来部署一套全新的集群环境并不是一件简单的工作,所以我们制定了一套计划和测试工作,以便数据能平滑过渡到新环境。 +去年我们把 GitHub 上的大部分数据移到了新的数据中心,这个中心有世界顶级的硬件资源和网络平台。自从使用了 MySQL 作为我们的后端系统的基础,我们一直期望着一些改进来大大提高数据库性能,但是在数据中心使用全新的硬件来部署一套全新的集群环境并不是一件简单的工作,所以我们制定了一套计划和测试工作,以便数据能平滑过渡到新环境。 ### 准备工作 ### -像我们这种关于架构上的巨大改变,在执行的每一步都需要收集数据指标。新机器上安装好了基础操作系统,接下来就是测试新配置下的各种性能。为了模拟真实的工作负载环境,我们使用 tcpdump 工具从老集群那里复制正在发生的 SELECT 请求,并在新集群上重新响应一遍。 +像我们这种关于架构上的巨大改变,在执行的每一步都需要收集数据指标。新机器上安装好了基本的操作系统,接下来就是测试新配置下的各种性能。为了模拟真实的工作负载环境,我们使用 tcpdump 工具从旧的集群那里复制正在发生的 SELECT 请求,并在新集群上重新回放一遍。 -MySQL 微调是个繁琐的细致活,像众所周知的 innodb_buffer_pool_size 这个参数往往能对 MySQL 性能产生巨大的影响。对于这类参数,我们必须考虑在内,所以我们列了一份参数清单,包括 innodb_thread_concurrency,innodb_io_capacity,和 innodb_buffer_pool_instances,还有其它的。 +MySQL 调优是个繁琐的细致活,像众所周知的 innodb_buffer_pool_size 这个参数往往能对 MySQL 性能产生巨大的影响。对于这类参数,我们必须考虑在内,所以我们列了一份参数清单,包括 innodb_thread_concurrency,innodb_io_capacity,和 innodb_buffer_pool_instances,还有其它的。 在每次测试中,我们都很小心地只改变一个参数,并且让一次测试至少运行12小时。我们会观察响应时间的变化曲线,每秒的响应次数,以及有可能会导致并发性降低的参数。我们使用 “SHOW ENGINE INNODB STATUS” 命令打印 InnoDB 性能信息,特别观察了 “SEMAPHORES” 一节的内容,它为我们提供了工作负载的状态信息。 -当我们在设置参数后对运行结果感到满意,然后就开始将我们最大的一个数据表格迁移到一套独立的集群上,这个步骤作为整个迁移过程的早期测试,保证我们的核心集群空出更多的缓存池空间,并且为故障切换和存储功能提供更强的灵活性。这步初始迁移方案也引入了一个有趣的挑战:我们必须维持多条客户连接,并且要将这些连接重定向到正确的集群上。 +当我们在设置参数后对运行结果感到满意,然后就开始将我们最大的数据表格之一迁移到一套独立的集群上,这个步骤作为整个迁移过程的早期测试,以保证我们的核心集群有更多的缓存池空间,并且为故障切换和存储功能提供更强的灵活性。这步初始迁移方案也引入了一个有趣的挑战:我们必须维持多条客户连接,并且要将这些连接指向到正确的集群上。 除了硬件性能的提升,还需要补充一点,我们同时也对处理进程和拓扑结构进行了改进:我们添加了延时拷贝技术,更快、更高频地备份数据,以及更多的读拷贝空间。这些功能已经准备上线。 ### 列出任务清单,三思后行 ### -每天有上百万用户的使用 GitHub.com,我们不可能有机会进行实际意义上的数据切换。我们有一个详细的[任务清单][2]来执行迁移: +每天有上百万用户的使用 GitHub.com,我们不可能有机会等没有人用了才进行实际数据切换。我们有一个详细的[任务清单][2]来执行迁移: ![](https://cloud.githubusercontent.com/assets/1155781/4116929/13fc6f50-328b-11e4-837b-922aad3055a8.png) @@ -28,7 +28,7 @@ MySQL 微调是个繁琐的细致活,像众所周知的 innodb_buffer_pool_siz ### 迁移时间到 ### -太平洋时间星期六上午5点,我们的迁移团队上线集合聊天,同时数据迁移正式开始: +太平洋时间星期六上午5点,我们的迁移团队上线集合对话,同时数据迁移正式开始: 
![](https://cloud.githubusercontent.com/assets/1155781/4060850/39f52cd4-2df3-11e4-9aca-1f54a4870d24.png) @@ -40,7 +40,7 @@ MySQL 微调是个繁琐的细致活,像众所周知的 innodb_buffer_pool_siz ![](https://cloud.githubusercontent.com/assets/1155781/4060870/6a4c0060-2df3-11e4-8dab-654562fe628d.png) -然后我们让 GitHub.com 脱离维护期,并且让全世界的用户都知道我们的最新状态: +然后我们让 GitHub.com 脱离维护模式,并且让全世界的用户都知道我们的最新状态: ![](https://cloud.githubusercontent.com/assets/1155781/4060878/79b9884c-2df3-11e4-98ed-d11818c8915a.png) @@ -56,7 +56,7 @@ MySQL 微调是个繁琐的细致活,像众所周知的 innodb_buffer_pool_siz #### 功能划分 #### -在迁移过程中,我们采用了一个比较好的方法是:将大的数据表(主要记录了一些历史数据)先迁移过去,空出旧集群的磁盘空间和缓存池空间。这一步给我们留下了更过的资源用户维护“热”数据,将一些连接请求分离到多套集群里面。这步为我们之后的胜利奠定了基础,我们以后还会使用这种模式来进行迁移工作。 +在迁移过程中,我们采用了一个比较好的方法是:将大的数据表(主要记录了一些历史数据)先迁移过去,空出旧集群的磁盘空间和缓存池空间。这一步给我们留下了更多的资源用于“热”数据,将一些连接请求分离到多套集群里面。这步为我们之后的胜利奠定了基础,我们以后还会使用这种模式来进行迁移工作。 #### 测试测试测试 #### @@ -68,11 +68,11 @@ MySQL 微调是个繁琐的细致活,像众所周知的 innodb_buffer_pool_siz 团队成员地图: -![](https://render.githubusercontent.com/view/geojson?url=https://gist.githubusercontent.com/anonymous/5fa29a7ccbd0101630da/raw/map.geojson) +https://render.githubusercontent.com/view/geojson?url=https://gist.githubusercontent.com/anonymous/5fa29a7ccbd0101630da/raw/map.geojson 本次合作新创了一种工作流程:我们提交更改(pull request),获取实时反馈,查看修改了错误的 commit —— 全程没有电话交流或面对面的会议。当所有东西都可以通过 URL 提供信息,不同区域的人群之间的交流和反馈会变得非常简单。 -### 一年后。。。 ### +### 一年后…… ### 整整一年时间过去了,我们很高兴地宣布这次数据迁移是很成功的 —— MySQL 性能和可靠性一直处于我们期望的状态。另外,新的集群还能让我们进一步去升级,提供更好的可靠性和响应时间。我将继续记录这些优化过程。 @@ -82,7 +82,7 @@ via: https://github.com/blog/1880-making-mysql-better-at-github 作者:[samlambert][a] 译者:[bazz2](https://github.com/bazz2) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140904 Use LaTeX In Ubuntu 14.04 and Linux Mint 17 With Texmaker.md b/published/20140904 Use LaTeX In Ubuntu 14.04 and Linux Mint 17 With Texmaker.md similarity index 60% rename from 
translated/tech/20140904 Use LaTeX In Ubuntu 14.04 and Linux Mint 17 With Texmaker.md rename to published/20140904 Use LaTeX In Ubuntu 14.04 and Linux Mint 17 With Texmaker.md index af974647be..c754b4a2f6 100644 --- a/translated/tech/20140904 Use LaTeX In Ubuntu 14.04 and Linux Mint 17 With Texmaker.md +++ b/published/20140904 Use LaTeX In Ubuntu 14.04 and Linux Mint 17 With Texmaker.md @@ -1,10 +1,10 @@ -在Ubuntu 14.04和拥有Texmaker的Linux Mint 17(基于ubuntu和debian的Linux发行版)中使用LaTeX +在 Ubuntu 14.04 和 Linux Mint 17 中通过 Texmaker 来使用LaTeX ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/texmaker_Ubuntu.jpeg) -[LaTeX][1]是一种文本标记语言,也可以说是一种文档制作系统。经常在很多大学或者机构中作为一种标准来书写专业的科学文献,毕业论文或其他类似的文档。在这篇文章中,我们会看到如何在Ubuntu 14.04中使用LaTeX。 +[LaTeX][1]是一种文本标记语言,也可以说是一种文档编撰系统。在很多大学或者机构中普遍作为一种标准来书写专业的科学文献、毕业论文或其他类似的文档。在这篇文章中,我们会看到如何在Ubuntu 14.04中使用LaTeX。 -### 在Ubuntu 14.04或Linux Mint 17中安装Texmaker +### 在 Ubuntu 14.04 或 Linux Mint 17 中安装 Texmaker 来使用LaTeX [Texmaker][2]是一款免费开源的LaTeX编辑器,它支持一些主流的桌面操作系统,比如Window,Linux和OS X。下面是Texmaker的主要特点: @@ -24,11 +24,11 @@ - [下载Texmaker编辑器][3] -你通过链接下载到的是一个.deb包,因此你在一些像Linux Mint,Elementary OS,Pinguy OS等等类Debain的发行版中可以使用相同的安装方式。 +你通过上述链接下载到的是一个.deb包,因此你在一些像Linux Mint,Elementary OS,Pinguy OS等等类Debain的发行版中可以使用相同的安装方式。 -如果你想使用像Github类型的markdown编辑器,你可以试试[Remarkable编辑器][4]。 +如果你想使用像Github式的markdown编辑器,你可以试试[Remarkable编辑器][4]。 -希望Texmaker能够在Ubuntu和Linux Mint中帮到你 +希望Texmaker能够在Ubuntu和Linux Mint中帮到你。 -------------------------------------------------------------------------------- @@ -36,7 +36,7 @@ via: http://itsfoss.com/install-latex-ubuntu-1404/ 作者:[Abhishek][a] 译者:[john](https://github.com/johnhoow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Caroline](https://github.com/carolinewuyan) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 @@ -44,4 +44,4 @@ via: http://itsfoss.com/install-latex-ubuntu-1404/ 
[1]:http://www.latex-project.org/
[2]:http://www.xm1math.net/texmaker/index.html
[3]:http://www.xm1math.net/texmaker/download.html#linux
-[4]:http://itsfoss.com/remarkable-markdown-editor-linux/
+[4]:http://itsfoss.com/remarkable-markdown-editor-linux/
diff --git a/translated/tech/20140910 How To Recover Default Openbox Config Files On Crunchbang.md b/published/20140910 How To Recover Default Openbox Config Files On Crunchbang.md
similarity index 82%
rename from translated/tech/20140910 How To Recover Default Openbox Config Files On Crunchbang.md
rename to published/20140910 How To Recover Default Openbox Config Files On Crunchbang.md
index 942a23037d..579b45bd10 100644
--- a/translated/tech/20140910 How To Recover Default Openbox Config Files On Crunchbang.md
+++ b/published/20140910 How To Recover Default Openbox Config Files On Crunchbang.md
@@ -1,4 +1,4 @@
-如何在Crunchbang下回复Openbox的默认配置
+如何在Crunchbang下恢复Openbox的默认配置
================================================================================
[CrunchBang][1]是一个很好地融合了速度、风格和内容的基于Debian GNU/Linux的发行版。使用了灵活的Openbox窗口管理器,高度定制化并且提供了一个现代、全功能的GNU/Linux系统而没有牺牲性能。
@@ -6,7 +6,7 @@ Crunchbang是高度自定义的,用户可以尽情地把它调整成他们
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/curnchbang_menu_xml.png)
-其中从菜单配置文件中去除了所有代码。由于我没有备份(最好备份配置文件)。我不得不搜索Crunchbang开箱即用的默认配置。这里就是我如何修复的过程,要感谢Crunchbang论坛。
+我的菜单配置文件中丢失了所有内容。由于我没有备份(最好备份配置文件)。我不得不搜索Crunchbang安装后的默认配置。这里就是我如何修复的过程,这里要感谢Crunchbang论坛。
了解所有为你预备份的默认配置是很有趣的,你可以在这里找到:
@@ -30,7 +30,7 @@ via: http://www.unixmen.com/recover-default-openbox-config-files-crunchbang/
作者:[Enock Seth Nyamador][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
diff --git a/translated/tech/20140915 Linux FAQs with Answers--How to capture TCP SYN, ACK and FIN packets with tcpdump.md b/published/20140915 Linux FAQs with Answers--How to capture TCP SYN, 
ACK and FIN packets with tcpdump.md similarity index 81% rename from translated/tech/20140915 Linux FAQs with Answers--How to capture TCP SYN, ACK and FIN packets with tcpdump.md rename to published/20140915 Linux FAQs with Answers--How to capture TCP SYN, ACK and FIN packets with tcpdump.md index f0f638a7cd..8887a0a057 100644 --- a/translated/tech/20140915 Linux FAQs with Answers--How to capture TCP SYN, ACK and FIN packets with tcpdump.md +++ b/published/20140915 Linux FAQs with Answers--How to capture TCP SYN, ACK and FIN packets with tcpdump.md @@ -1,8 +1,8 @@ -Linux有问必答——如何使用tcpdump来捕获TCP SYN,ACK和FIN包 +Linux有问必答:如何使用tcpdump来捕获TCP SYN,ACK和FIN包 ================================================================================ > **问题**:我想要监控TCP连接活动(如,建立连接的三次握手,以及断开连接的四次握手)。要完成此事,我只需要捕获TCP控制包,如SYN,ACK或FIN标记相关的包。我怎样使用tcpdump来仅仅捕获TCP SYN,ACK和/或FYN包? -作为事实上的捕获工具,tcpdump提供了强大而又灵活的包过滤功能。作为tcpdump基础的libpcap包捕获引擎支持标准的包过滤规则,如基于5重包头的过滤(如基于源/目的IP地址/端口和IP协议类型)。 +作为业界标准的捕获工具,tcpdump提供了强大而又灵活的包过滤功能。作为tcpdump基础的libpcap包捕获引擎支持标准的包过滤规则,如基于5重包头的过滤(如基于源/目的IP地址/端口和IP协议类型)。 tcpdump/libpcap的包过滤规则也支持更多通用分组表达式,在这些表达式中,包中的任意字节范围都可以使用关系或二进制操作符进行检查。对于字节范围表达,你可以使用以下格式: @@ -34,8 +34,8 @@ tcpdump/libpcap的包过滤规则也支持更多通用分组表达式,在这 via: http://ask.xmodulo.com/capture-tcp-syn-ack-fin-packets-tcpdump.html -作者:[作者名][a] + 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140915 Linux FAQs with Answers--How to change hostname on CentOS or RHEL 7.md b/published/20140915 Linux FAQs with Answers--How to change hostname on CentOS or RHEL 7.md similarity index 68% rename from translated/tech/20140915 Linux FAQs with Answers--How to change hostname on CentOS or RHEL 7.md rename to published/20140915 Linux FAQs with Answers--How to change hostname on CentOS or RHEL 7.md index 5c02aebbfe..fd680ad8c6 100644 --- 
a/translated/tech/20140915 Linux FAQs with Answers--How to change hostname on CentOS or RHEL 7.md +++ b/published/20140915 Linux FAQs with Answers--How to change hostname on CentOS or RHEL 7.md @@ -1,8 +1,8 @@ -Linux有问必答——如何在CentOS或RHEL 7上修改主机名 +Linux有问必答:如何在CentOS或RHEL 7上修改主机名 ================================================================================ > 问题:在CentOS/RHEL 7上修改主机名的正确方法是什么(永久或临时)? -在CentOS或RHEL中,有三种定义的主机名:(1)静态的(2)瞬态的,以及(3)灵活的。“静态”主机名也称为内核主机名,是系统在启动时从/etc/hostname自动初始化的主机名。“瞬态”主机名是在系统运行时临时分配的主机名,例如,通过DHCP或mDNS服务器分配。静态主机名和瞬态主机名都遵从作为互联网域名同样的字符限制规则。而另一方面,“灵活”主机名则允许使用自由形式(包括特殊/空白字符)的主机名,以展示给终端用户(如Dan's Computer)。 +在CentOS或RHEL中,有三种定义的主机名:a、静态的(static),b、瞬态的(transient),以及 c、灵活的(pretty)。“静态”主机名也称为内核主机名,是系统在启动时从/etc/hostname自动初始化的主机名。“瞬态”主机名是在系统运行时临时分配的主机名,例如,通过DHCP或mDNS服务器分配。静态主机名和瞬态主机名都遵从作为互联网域名同样的字符限制规则。而另一方面,“灵活”主机名则允许使用自由形式(包括特殊/空白字符)的主机名,以展示给终端用户(如Dan's Computer)。 在CentOS/RHEL 7中,有个叫hostnamectl的命令行工具,它允许你查看或修改与主机名相关的配置。 @@ -22,7 +22,7 @@ Linux有问必答——如何在CentOS或RHEL 7上修改主机名 ![](https://farm4.staticflickr.com/3855/15113489172_4e25ac87fa_z.jpg) -就像上面展示的那样,在修改静态/瞬态主机名时,任何特殊字符或空白字符会被移除,而提供的参数中的任何大写字母会自动转化为小写。一旦修改了静态主机名,/etc/hostname将被自动更新。然而,/etc/hosts不会更新来回应所做的修改,所以你需要手动更新/etc/hosts。 +就像上面展示的那样,在修改静态/瞬态主机名时,任何特殊字符或空白字符会被移除,而提供的参数中的任何大写字母会自动转化为小写。一旦修改了静态主机名,/etc/hostname 将被自动更新。然而,/etc/hosts 不会更新以保存所做的修改,所以你需要手动更新/etc/hosts。 如果你只想修改特定的主机名(静态,瞬态或灵活),你可以使用“--static”,“--transient”或“--pretty”选项。 diff --git a/translated/tech/20140915 Linux FAQs with Answers--How to create a new Amazon AWS access key.md b/published/20140915 Linux FAQs with Answers--How to create a new Amazon AWS access key.md similarity index 95% rename from translated/tech/20140915 Linux FAQs with Answers--How to create a new Amazon AWS access key.md rename to published/20140915 Linux FAQs with Answers--How to create a new Amazon AWS access key.md index d872155a9c..f716678fb3 100644 --- a/translated/tech/20140915 Linux FAQs with Answers--How to create a new Amazon AWS access key.md +++ 
b/published/20140915 Linux FAQs with Answers--How to create a new Amazon AWS access key.md @@ -1,4 +1,4 @@ -Linux有问必答——如何创建新的亚马逊AWS访问密钥 +Linux有问必答:如何创建新的亚马逊AWS访问密钥 ================================================================================ > **问题**:我在配置一个需要访问我的亚马逊AWS帐号的应用时被要求提供**AWS访问密钥ID**和**秘密访问密钥**,我怎样创建一个新的AWS访问密钥呢? @@ -42,7 +42,7 @@ IAM是一个web服务,它允许一个公司管理多个用户及其与一个AW via: http://ask.xmodulo.com/create-amazon-aws-access-key.html 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140915 Linux FAQs with Answers--How to expand an XFS file system.md b/published/20140915 Linux FAQs with Answers--How to expand an XFS file system.md similarity index 67% rename from translated/tech/20140915 Linux FAQs with Answers--How to expand an XFS file system.md rename to published/20140915 Linux FAQs with Answers--How to expand an XFS file system.md index f56cc00ed6..1606a79323 100644 --- a/translated/tech/20140915 Linux FAQs with Answers--How to expand an XFS file system.md +++ b/published/20140915 Linux FAQs with Answers--How to expand an XFS file system.md @@ -1,8 +1,8 @@ -Linux有问必答——如何扩展XFS文件系统 +Linux有问必答:如何扩展XFS文件系统 ================================================================================ > **问题**:我的磁盘上有额外的空间,所以我想要扩展其上创建的现存的XFS文件系统,以完全使用额外空间。怎样才是扩展XFS文件系统的正确途径? 
-XFS是一个开源的(GPL)日子文件系统,最初由硅谷图形开发,现在被大多数的Linux发行版都支持。事实上,XFS已经被最新的CentOS/RHEL 7采用,成为其默认的文件系统。在其众多的特性中,包含了“在线调整大小”这一特性,使得现存的XFS文件系统在被挂载时可以进行扩展。然而,对于XFS文件系统的缩减确实不被支持的。
+XFS是一个开源的(GPL)日志文件系统,最初由硅谷图形(SGI)开发,现在大多数的Linux发行版都支持。事实上,XFS已被最新的CentOS/RHEL 7采用,成为其默认的文件系统。在其众多的特性中,包含了“在线调整大小”这一特性,使得现存的XFS文件系统在已经挂载的情况下可以进行扩展。然而,对于XFS文件系统的**缩减**却还没有支持。

要扩展一个现存的XFS文件系统,你可以使用命令行工具xfs_growfs,这在大多数Linux发行版上都默认可用。由于XFS支持在线调整大小,目标文件系统可以挂载,也可以不挂载。

@@ -24,7 +24,7 @@ XFS是一个开源的(GPL)日子文件系统,最初由硅谷图形开发

![](https://farm6.staticflickr.com/5569/14914950529_ddfb71c8dd_z.jpg)

-注意,当你扩展一个现存的XFS文件系统时,必须准备事先添加用于XFS文件系统扩展的空间。这虽然是十分明了的事,但是如果在潜在的分区或磁盘卷上没有空闲空间可用的话,xfs_growfs不会做任何事情。同时,如果你尝试扩展XFS文件系统大小到超过磁盘分区或卷的大小,xfs_growfs将会失败。
+注意,当你扩展一个现存的XFS文件系统时,必须准备好事先添加用于XFS文件系统扩展的空间。这虽然是很显然的事,但是如果在所在的分区或磁盘卷上没有空闲空间可用的话,xfs_growfs就没有办法了。同时,如果你尝试扩展XFS文件系统大小到超过磁盘分区或卷的大小,xfs_growfs将会失败。

![](https://farm4.staticflickr.com/3870/15101281542_98a49a7c3a_z.jpg)

@@ -33,6 +33,6 @@ XFS是一个开源的(GPL)日子文件系统,最初由硅谷图形开发

via: http://ask.xmodulo.com/expand-xfs-file-system.html

译者:[GOLinux](https://github.com/GOLinux)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

diff --git a/translated/tech/20140915 Linux FAQs with Answers--How to remove PPA repository from command line on Ubuntu.md b/published/20140915 Linux FAQs with Answers--How to remove PPA repository from command line on Ubuntu.md
similarity index 73%
rename from translated/tech/20140915 Linux FAQs with Answers--How to remove PPA repository from command line on Ubuntu.md
rename to published/20140915 Linux FAQs with Answers--How to remove PPA repository from command line on Ubuntu.md
index 6698a8677b..e7171b6c3d 100644
--- a/translated/tech/20140915 Linux FAQs with Answers--How to remove PPA repository from command line on Ubuntu.md
+++ b/published/20140915 Linux FAQs with Answers--How to remove PPA repository from command line on Ubuntu.md
@@ -1,15 +1,14 @@
-Linux FAQ - 
Ubuntu如何使用命令行移除PPA仓库 +Linux有问必答:Ubuntu如何使用命令行移除PPA仓库 ================================================================================ > **问题**: 前段时间,我的Ubuntu增加了一个第三方的PPA仓库,如何才能移除这个PPA仓库呢? 个人软件包档案(PPA)是Ubuntu独有的解决方案,允许独立开发者和贡献者构建、贡献任何定制的软件包来作为通过启动面板的第三方APT仓库。如果你是Ubuntu用户,有可能你已经增加一些流行的第三方PPA仓库到你的Ubuntu系统。如果你需要删除掉已经预先配置好的PPA仓库,下面将教你怎么做。 - -假如你有一个第三方PPA仓库叫“ppa:webapps/preview”增加到了你的系统中,如下。 +假如你想增加一个叫“ppa:webapps/preview”第三方PPA仓库到你的系统中,如下: $ sudo add-apt-repository ppa:webapps/preview -如果你想要 **单独地删除一个PPA仓库**,运行下面的命令。 +如果你想要 **单独地删除某个PPA仓库**,运行下面的命令: $ sudo add-apt-repository --remove ppa:someppa/ppa @@ -17,22 +16,22 @@ Linux FAQ - Ubuntu如何使用命令行移除PPA仓库 如果你想要 **完整的删除一个PPA仓库并包括来自这个PPA安装或更新过的软件包**,你需要ppa-purge命令。 -安装ppa-purge软件包: +首先要安装ppa-purge软件包: $ sudo apt-get install ppa-purge -删除PPA仓库和与之相关的软件包,运行下列命令: +然后使用如下命令删除PPA仓库和与之相关的软件包: $ sudo ppa-purge ppa:webapps/preview -特别滴,在发行版更新后,你需要[分辨和清除已损坏的PPA仓库][1],这个方法特别有用! +特别滴,在发行版更新后,当你[分辨和清除已损坏的PPA仓库][1]时这个方法特别有用! -------------------------------------------------------------------------------- via: http://ask.xmodulo.com/how-to-remove-ppa-repository-from-command-line-on-ubuntu.html 译者:[Vic___](http://www.vicyu.net) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140919 Linux FAQs with Answers--How to create a MySQL database from the command line.md b/published/20140919 Linux FAQs with Answers--How to create a MySQL database from the command line.md similarity index 95% rename from translated/tech/20140919 Linux FAQs with Answers--How to create a MySQL database from the command line.md rename to published/20140919 Linux FAQs with Answers--How to create a MySQL database from the command line.md index f2c7e3186a..1db2425acc 100644 --- a/translated/tech/20140919 Linux FAQs with Answers--How to create a MySQL database from the command line.md +++ b/published/20140919 Linux FAQs with 
Answers--How to create a MySQL database from the command line.md
@@ -1,4 +1,4 @@
-数据库常见问题答案--如何使用命令行创建一个MySQL数据库
+Linux有问必答:如何在命令行创建一个MySQL数据库
===
> **问题**:在一个某处运行的MySQL服务器上,我该怎样通过命令行创建和安装一个MySQL数据库呢?
@@ -47,8 +47,8 @@
为了达到演示的目的,我们将会创建一个叫做posts_tbl的表,表里会存储关于文章的如下信息:
- 文章的标题
-- 作者的第一个名字
-- 作者的最后一个名字
+- 作者的名字
+- 作者的姓
- 文章可用或者不可用
- 文章创建的日期
@@ -104,7 +104,7 @@
via: http://ask.xmodulo.com/create-mysql-database-command-line.html
译者:[su-kaiyao](https://github.com/su-kaiyao)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
diff --git a/translated/tech/20140922 Reset Unity and Compiz Settings in Ubuntu 14.04.md b/published/20140922 Reset Unity and Compiz Settings in Ubuntu 14.04.md
similarity index 92%
rename from translated/tech/20140922 Reset Unity and Compiz Settings in Ubuntu 14.04.md
rename to published/20140922 Reset Unity and Compiz Settings in Ubuntu 14.04.md
index e51f3f79fc..dd642b9cdb 100644
--- a/translated/tech/20140922 Reset Unity and Compiz Settings in Ubuntu 14.04.md
+++ b/published/20140922 Reset Unity and Compiz Settings in Ubuntu 14.04.md
@@ -1,4 +1,4 @@
-在Ubuntu 14.04中重置Unity和Compiz设置【小贴士】
+小技巧:在Ubuntu 14.04中重置Unity和Compiz设置
================================================================================
如果你一直在试验你的Ubuntu系统,你可能最终以Unity和Compiz的一片混乱收场。在此贴士中,我们将看看怎样来重置Ubuntu 14.04中的Unity和Compiz。事实上,全部要做的事,仅仅是运行几个命令而已。
@@ -34,7 +34,7 @@
via: http://itsfoss.com/reset-unity-compiz-settings-ubuntu-1404/
作者:[Abhishek][a]
译者:[GOLinux](https://github.com/GOLinux)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
diff --git a/translated/tech/20140925 Linux FAQs with Answers--How to catch and handle a signal in Perl.md b/published/20140925 Linux FAQs with Answers--How to catch and handle a signal in Perl.md
similarity index 51%
rename from 
translated/tech/20140925 Linux FAQs with Answers--How to catch and handle a signal in Perl.md rename to published/20140925 Linux FAQs with Answers--How to catch and handle a signal in Perl.md index 4e6f628416..80272c60a0 100644 --- a/translated/tech/20140925 Linux FAQs with Answers--How to catch and handle a signal in Perl.md +++ b/published/20140925 Linux FAQs with Answers--How to catch and handle a signal in Perl.md @@ -1,10 +1,10 @@ -Linux 有问必答-- 如何在Perl中捕捉并处理信号 +Linux 有问必答:如何在Perl中捕捉并处理信号 ================================================================================ > **提问**: 我需要通过使用Perl的自定义信号处理程序来处理一个中断信号。在一般情况下,我怎么在Perl程序中捕获并处理各种信号(如INT,TERM)? -作为POSIX标准的异步通知机制,信号由操作系统发送给进程某个事件来通知它。当产生信号时,目标程序的执行是通过操作系统中断,并且该信号被发送到处理该信号的处理程序。任何人可以定义和注册自定义信号处理程序或依赖于默认的信号处理程序。 +作为POSIX标准的异步通知机制,信号由操作系统发送给进程某个事件来通知它。当产生信号时,操作系统会中断目标程序的执行,并且该信号被发送到该程序的信号处理函数。可以定义和注册自己的信号处理程序或使用默认的信号处理程序。 -在Perl中,信号可以被捕获并被一个全局的%SIG哈希变量处理。这个%SIG哈希变量被信号号锁定并包含对相应的信号处理程序。因此,如果你想为特定的信号定义自定义信号处理程序,你可以直接更新%SIG的信号的哈希值。 +在Perl中,信号可以被捕获,并由一个全局的%SIG哈希变量指定处理函数。这个%SIG哈希变量的键名是信号值,键值是对应的信号处理程序的引用。因此,如果你想为特定的信号定义自己的信号处理程序,你可以直接在%SIG中设置信号的哈希值。 下面是一个代码段来处理使用自定义信号处理程序中断(INT)和终止(TERM)的信号。 @@ -18,13 +18,13 @@ Linux 有问必答-- 如何在Perl中捕捉并处理信号 ![](https://farm4.staticflickr.com/3910/15141131060_f7958f20fb.jpg) -%SIG其他有效的哈希值有'IGNORE'和'DEFAULT'。当所分配的哈希值是'IGNORE'(例如,$SIG{CHLD}='IGNORE')时,相应的信号将被忽略。分配'DEFAULT'的哈希值(例如,$SIG{HUP}='DEFAULT'),意味着我们将使用一个默认的信号处理程序。 +%SIG其他的可用的键值有'IGNORE'和'DEFAULT'。当所指定的键值是'IGNORE'(例如,$SIG{CHLD}='IGNORE')时,相应的信号将被忽略。指定'DEFAULT'的键值(例如,$SIG{HUP}='DEFAULT'),意味着我们将使用一个(系统)默认的信号处理程序。 -------------------------------------------------------------------------------- via: http://ask.xmodulo.com/catch-handle-interrupt-signal-perl.html 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 \ No newline at end of file diff --git a/published/20140925 Linux FAQs with 
Answers--How to configure a static IP address on CentOS 7.md b/published/20140925 Linux FAQs with Answers--How to configure a static IP address on CentOS 7.md
new file mode 100644
index 0000000000..a2134b43cf
--- /dev/null
+++ b/published/20140925 Linux FAQs with Answers--How to configure a static IP address on CentOS 7.md
@@ -0,0 +1,78 @@
+Linux有问必答:如何为CentOS 7配置静态IP地址
+================================================================================
+> **问题**:在CentOS 7上,我想要将我其中一个网络接口从DHCP改为静态IP地址配置,如何才能永久为CentOS或RHEL 7上的网络接口分配静态IP地址?
+
+如果你想要为CentOS 7中的某个网络接口设置静态IP地址,有几种不同的方法,这取决于你是否想要使用网络管理器。
+
+网络管理器(Network Manager)是一个动态网络的控制器与配置系统,它用于当网络设备可用时保持设备和连接开启并激活。默认情况下,CentOS/RHEL 7安装有网络管理器,并处于启用状态。
+
+使用下面的命令来验证网络管理器服务的状态:
+
+    $ systemctl status NetworkManager.service
+
+运行以下命令来检查受网络管理器管理的网络接口:
+
+    $ nmcli dev status
+
+![](https://farm4.staticflickr.com/3861/15295802711_a102a3574d_z.jpg)
+
+如果某个接口的nmcli的输出结果是“已连接”(如本例中的enp0s3),这就是说该接口受网络管理器管理。你可以轻易地为某个特定接口禁用网络管理器,以便你可以自己为它配置一个静态IP地址。
+
+下面将介绍**在CentOS 7上为网络接口配置静态IP地址的两种方式**,在例子中我们将对名为enp0s3的网络接口进行配置。
+
+### 不使用网络管理器配置静态IP地址 ###
+
+进入/etc/sysconfig/network-scripts目录,找到该接口的配置文件(ifcfg-enp0s3)。如果没有,请创建一个。
+
+![](https://farm4.staticflickr.com/3911/15112399977_d3df8e15f5_z.jpg)
+
+打开配置文件并编辑以下变量:
+
+![](https://farm4.staticflickr.com/3880/15112184199_f4cbf269a6.jpg)
+
+在上图中,“NM_CONTROLLED=no”表示该接口将通过该配置文件进行设置,而不是通过网络管理器进行管理。“ONBOOT=yes”告诉我们,系统将在启动时开启该接口。
+
+保存修改并使用以下命令来重启网络服务:
+
+    # systemctl restart network.service
+
+现在验证接口是否配置正确:
+
+    # ip add
+
+![](https://farm6.staticflickr.com/5593/15112397947_ac69a33fb4_z.jpg)
+
+### 使用网络管理器配置静态IP地址 ###
+
+如果你想要使用网络管理器来管理该接口,你可以使用nmtui(网络管理器文本用户界面),它提供了在终端环境中配置网络管理器的方式。
+
+在使用nmtui之前,首先要在/etc/sysconfig/network-scripts/ifcfg-enp0s3中设置“NM_CONTROLLED=yes”。
+
+现在,请按以下方式安装nmtui。
+
+    # yum install NetworkManager-tui
+
+然后继续去编辑enp0s3接口的网络管理器配置:
+
+    # nmtui edit enp0s3
+
+在下面的屏幕中,我们可以手动输入与/etc/sysconfig/network-scripts/ifcfg-enp0s3中所包含的内容相同的信息。
+ 
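作为参考,这里补充一个假设的 ifcfg-enp0s3 静态配置示例(仅为示意,其中的 IP 地址、网关和 DNS 均为示例值,请按你自己的网络环境修改;在 nmtui 方式下 NM_CONTROLLED 应为 yes):

```ini
TYPE=Ethernet
DEVICE=enp0s3
BOOTPROTO=none      # 静态配置,不使用 DHCP
ONBOOT=yes          # 开机时启用该接口
NM_CONTROLLED=yes
IPADDR=192.168.1.50
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8
```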
+使用箭头键在屏幕中导航,按回车选择值列表中的内容(或填入想要的内容),最后点击屏幕底部右侧的确定按钮。 + +![](https://farm4.staticflickr.com/3878/15295804521_4165c97828_z.jpg) + +最后,重启网络服务。 + + # systemctl restart network.service + +好了,现在一切都搞定了。 + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/configure-static-ip-address-centos7.html + +译者:[GOLinux](https://github.com/GOLinux) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140925 Linux FAQs with Answers--How to detect a Linux distribution in Perl.md b/published/20140925 Linux FAQs with Answers--How to detect a Linux distribution in Perl.md similarity index 81% rename from translated/tech/20140925 Linux FAQs with Answers--How to detect a Linux distribution in Perl.md rename to published/20140925 Linux FAQs with Answers--How to detect a Linux distribution in Perl.md index 322d8e6163..6793ab5565 100644 --- a/translated/tech/20140925 Linux FAQs with Answers--How to detect a Linux distribution in Perl.md +++ b/published/20140925 Linux FAQs with Answers--How to detect a Linux distribution in Perl.md @@ -1,8 +1,8 @@ -Linux有问必答-- 如何用Perl检测Linux的发行版本 +Linux有问必答:如何用Perl检测Linux的发行版本 ================================================================================ > **提问**:我需要写一个Perl程序,它会包含Linux发行版相关的代码。为此,Perl程序需要能够自动检测运行中的Linux的发行版(如Ubuntu、CentOS、Debian、Fedora等等),以及它是什么版本号。如何用Perl检测Linux的发行版本? 
-如果要用Perl脚本检测Linux的发行版,你可以使用一个名为[Linux::Distribution][1]的Perl模块。该模块通过检查/etc/lsb-release以及其他特定的/etc下的发行版特定的目录来猜测底层Linux操作系统。它支持检测所有主要的Linux发行版,包括Fedora、CentOS、Arch Linux、Debian、Ubuntu、SUSE、Red Hat、Gentoo、Slackware、Knoppix和Mandrake。 +如果要用Perl脚本检测Linux的发行版,你可以使用一个名为[Linux::Distribution][1]的Perl模块。该模块通过检查/etc/lsb-release以及其他在/etc下的发行版特定的目录来猜测底层Linux操作系统。它支持检测所有主要的Linux发行版,包括Fedora、CentOS、Arch Linux、Debian、Ubuntu、SUSE、Red Hat、Gentoo、Slackware、Knoppix和Mandrake。 要在Perl中使用这个模块,你首先需要安装它。 @@ -20,7 +20,7 @@ Linux有问必答-- 如何用Perl检测Linux的发行版本 $ sudo yum -y install perl-CPAN -使用这条命令来构建并安装模块: +然后,使用这条命令来构建并安装模块: $ sudo perl -MCPAN -e 'install Linux::Distribution' @@ -46,7 +46,7 @@ Linux::Distribution模块安装完成之后,你可以使用下面的代码片 via: http://ask.xmodulo.com/detect-linux-distribution-in-perl.html 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/20140926 How To Reset Root Password On CentOS 7.md b/published/20140926 How To Reset Root Password On CentOS 7.md new file mode 100644 index 0000000000..c4586cec8b --- /dev/null +++ b/published/20140926 How To Reset Root Password On CentOS 7.md @@ -0,0 +1,52 @@ +如何重置CentOS 7的Root密码 +=== + +重置Centos 7 Root密码的方式和Centos 6完全不同。让我来展示一下到底如何操作。 + +1 - 在启动grub菜单,选择编辑选项启动 + +![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_003.png) + +2 - 按键盘e键,来进入编辑界面 + +![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_005.png) + +3 - 找到Linux 16的那一行,将ro改为rw init=/sysroot/bin/ + +![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_006.png) + +4 - 现在按下 Control+x ,使用单用户模式启动 + +![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_007.png) + +5 - 现在,可以使用下面的命令访问系统 + + chroot /sysroot + +6 - 重置密码 + + passwd root + +7 - 更新系统信息 + + touch /.autorelabel + +8 - 退出chroot + + exit + +9 - 重启你的系统 + + reboot + +就是这样! 
+
+---
+
+via: http://www.unixmen.com/reset-root-password-centos-7/
+
+作者:M.el Khamlichi
+译者:[su-kaiyao](https://github.com/su-kaiyao)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
diff --git a/published/20140928 How to Use Systemd Timers.md b/published/20140928 How to Use Systemd Timers.md
new file mode 100644
index 0000000000..df9cc2eb22
--- /dev/null
+++ b/published/20140928 How to Use Systemd Timers.md
@@ -0,0 +1,111 @@
+如何使用 systemd 中的定时器
+================================================================================
+我最近在写一些执行备份工作的脚本,我决定使用[systemd timers][1]而不是对我而言更熟悉的[cron jobs][2]来管理它们。
+
+在我使用时,出现了很多问题需要我去各个地方找资料,这个过程非常麻烦。因此,我想要把我目前所做的记录下来,方便自己的记忆,也方便读者不必像我这样,满世界地找资料了。
+
+在我下面提到的步骤中有其他的选择,但是这里是最简单的方法。在此之前,请查看**systemd.service**, **systemd.timer**,和**systemd.target**的帮助页面(man),学习你能用它们做些什么。
+
+### 运行一个简单的脚本 ###
+
+假设你有一个脚本叫:**/usr/local/bin/myscript** ,你想要每隔一小时就运行一次。
+
+#### Service 文件 ####
+
+第一步,创建一个service文件,根据你Linux的发行版本放到相应的系统目录(在Arch中,这个目录是**/etc/systemd/system/** 或 **/usr/lib/systemd/system**)。
+
+myscript.service
+
+    [Unit]
+    Description=MyScript
+
+    [Service]
+    Type=simple
+    ExecStart=/usr/local/bin/myscript
+
+注意,务必将**Type**变量的值设置为"simple"而不是"oneshot"。使用"oneshot"会使得脚本只在第一次运行,之后系统会认为你不想再次运行它,从而关掉我们接下去创建的定时器(Timer)。
+
+#### Timer 文件 ####
+
+第二步,创建一个timer文件,把它放在第一步中service文件放置的目录。
+
+myscript.timer
+
+    [Unit]
+    Description=Runs myscript every hour
+
+    [Timer]
+    # 首次运行要在启动后10分钟后
+    OnBootSec=10min
+    # 每次运行间隔时间
+    OnUnitActiveSec=1h
+    Unit=myscript.service
+
+    [Install]
+    WantedBy=multi-user.target
+
+#### 授权 / 运行 ####
+
+授权并运行的是timer文件,而不是service文件。
+
+    # 以 root 身份启动定时器
+    systemctl start myscript.timer
+    # 在系统引导起来后就启用该定时器
+    systemctl enable myscript.timer
+
+### 在同一个Timer上运行多个脚本 ###
+
+现在我们假设你在相同时间想要运行多个脚本。这种情况,**你需要在上面的文件中做适当的修改**。
+
+#### Service 文件 ####
+
+像我[之前说过的][3]那样创建你的service文件来运行你的脚本,但是在每个service 文件最后都要包含下面的内容:
+
+    [Install]
+    WantedBy=mytimer.target
+ 
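把上面的 [Install] 片段放进一个完整的 service 文件里,大致是下面这个样子(单元名 script1.service、描述和脚本路径都是假设的示例):

```ini
[Unit]
Description=Script 1

[Service]
Type=simple
ExecStart=/usr/local/bin/script1

[Install]
WantedBy=mytimer.target
```

注意这里的 [Install] 一节指向的是 mytimer.target,而不是单脚本例子中的 multi-user.target。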
+如果在你的service 文件中有一些依赖顺序,确保你使用**Description**字段中的值具体指定**After=something.service**和**Before=whatever.service**中的参数。
+
+另外的一种选择是(或许更加简单),创建一个包装脚本来使用正确的顺序来运行命令,并在你的service文件中使用这个脚本。
+
+#### Timer 文件 ####
+
+你只需要一个timer文件,创建**mytimer.timer**,像我在[上面指出的][4]。
+
+#### target 文件 ####
+
+你可以创建一个以上所有的脚本依赖的target文件。
+
+mytimer.target
+
+    [Unit]
+    Description=Mytimer
+    # Lots more stuff could go here, but it's situational.
+    # Look at systemd.unit man page.
+
+#### 授权 / 启动 ####
+
+你需要将所有的service文件和timer文件授权。
+
+    systemctl enable script1.service
+    systemctl enable script2.service
+    ...
+    systemctl enable mytimer.timer
+    systemctl start mytimer.timer
+
+Good luck.
+
+--------------------------------------------------------------------------------
+
+via: http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/
+
+作者:Jason Graham
+译者:[johnhoow](https://github.com/johnhoow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[1]:https://fedoraproject.org/wiki/User:Johannbg/QA/Systemd/Systemd.timer
+[2]:https://en.wikipedia.org/wiki/Cron
+[3]:http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/#service-file
+[4]:http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/#timer-file-1
diff --git a/published/20140928 Oracle Linux 5.11 Features Updated Unbreakable Linux Kernel.md b/published/20140928 Oracle Linux 5.11 Features Updated Unbreakable Linux Kernel.md
new file mode 100644
index 0000000000..1debab469b
--- /dev/null
+++ b/published/20140928 Oracle Linux 5.11 Features Updated Unbreakable Linux Kernel.md
@@ -0,0 +1,51 @@
+Oracle Linux 5.11更新了其Unbreakable Linux内核
+================================================================================
+> 此版本更新了很多软件包
+
+![This is the last release for this branch](http://i1-news.softpedia-static.com/images/news2/Oracle-Linux-5-11-Features-Updated-Unbreakable-Linux-Kernel-460129-2.jpg)
+
+这是这个分支的最后一个版本更新(随同 RHEL 5.11的落幕,CentOS 和 
Oracle Linux 的5.x 系列也纷纷释出该系列的最后版本)。 + +>**甲骨文公司宣布,Oracle Linux5.11版已提供下载,但是这是企业版,需要用户注册才能下载。** + +这个新的Oracle Linux是这个系列的最后一次更新。该系统基于Red Hat和该公司最近推送的RHEL 5X分支更新,这意味着这也是Oracle此产品线的最后一次更新。 + +Oracle Linux还带来了一系列有趣的功能,就像一个名为Ksplice的零宕机内核更新,它最初是针对openSUSE,包括Oracle数据库和Oracle应用软件开发的,它们在基于x86的Oracle系统中使用。 + +### Oracle Linux有哪些特别的 ### + +尽管Oracle Linux基于红帽,它的开发者曾经举出了很多你不应该使用RHEL的原因。理由有很多,但最主要的是,任何人都可以下载Oracle Linux(注册后),而RHEL实际上限制了非付费会员下载。 + +开发者在其网站上说:“为企业应用和系统提供先进的可扩展性和可靠性,Oracle Linux提供了极高的性能,并且在采用x86架构的Oracle工程系统中使用。Oracle Linux是免费使用,免费派发,免费更新,并可轻松下载。它是唯一带来生产中零宕机补丁Oracle Ksplice支持的Linux发行版,允许客户无需重启而部署安全或者其他更新,并且同时提供诊断功能来调试生产系统中的内核问题。” + +Oracle Linux其中一个最有趣且独一无二的功能是其Unbreakable Kernel(坚不可摧的内核)。这是它的开发者实际使用的名称。它基于来自3.0.36分支的旧Linux内核。用户还可以使用红帽兼容内核(内核2.6.18-398.el5),这在发行版中默认提供。 + +此外,Oracle Linux Release 5.11企业版内核提供了对大量硬件和设备的支持,但这个最新的更新带来了更好的支持。 + +您可以查看Oracle Linux 5.11全部[发布通告][1],这可能需要花费一些时间去读。 + +你也可以从下面下载Oracle Linux 5.11: + +- [Oracle Enterprise Linux 6.5 (ISO) 64-bit][2] +- [Oracle Enterprise Linux 6.5 (ISO) 32-bit][3] +- [Oracle Enterprise Linux 7.0 (ISO) 64-bit][4] +- [Oracle Enterprise Linux 5.11 (ISO) 64-bit][5] +- [Oracle Enterprise Linux 5.11 (ISO) 32-bit][6] + +-------------------------------------------------------------------------------- + +via: http://news.softpedia.com/news/Oracle-Linux-5-11-Features-Updated-Unbreakable-Linux-Kernel-460129.shtml + +作者:[Silviu Stahie][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://news.softpedia.com/editors/browse/silviu-stahie +[1]:https://oss.oracle.com/ol5/docs/RELEASE-NOTES-U11-en.html#Kernel_and_Driver_Updates +[2]:http://mirrors.dotsrc.org/oracle-linux/OL6/U5/i386/OracleLinux-R6-U5-Server-i386-dvd.iso +[3]:http://mirrors.dotsrc.org/oracle-linux/OL6/U5/x86_64/OracleLinux-R6-U5-Server-x86_64-dvd.iso +[4]:https://edelivery.oracle.com/linux/ 
+[5]:http://ftp5.gwdg.de/pub/linux/oracle/EL5/U11/x86_64/Enterprise-R5-U11-Server-x86_64-dvd.iso
+[6]:http://ftp5.gwdg.de/pub/linux/oracle/EL5/U11/i386/Enterprise-R5-U11-Server-i386-dvd.iso
\ No newline at end of file
diff --git a/published/20140930 Check If Your Linux System Is Vulnerable To Shellshock And Fix It.md b/published/20140930 Check If Your Linux System Is Vulnerable To Shellshock And Fix It.md
new file mode 100644
index 0000000000..c9ef13ca11
--- /dev/null
+++ b/published/20140930 Check If Your Linux System Is Vulnerable To Shellshock And Fix It.md
@@ -0,0 +1,65 @@
+检查你的系统是否有“Shellshock”漏洞并修复它
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/shellshock_Linux_check.jpeg)
+
+本文将快速地向你展示**如何检查你的系统是否受到Shellshock的影响**,如果有,**怎样修复你的系统免于被Bash漏洞利用**。
+
+如果你正跟踪新闻,你可能已经听说过在[Bash][1]中发现了一个漏洞,这被称为**Bash Bug**或者**Shellshock**。[红帽][2]是第一个发现这个漏洞的机构。Shellshock错误允许攻击者注入自己的代码,从而使系统开放给各种恶意软件和远程攻击。事实上,[黑客已经利用它来启动DDoS攻击][3]。
+
+由于Bash在所有的类Unix系统中都有,如果这些系统运行的是特定版本的bash,这个Shellshock错误会让它们都容易受到影响。
+
+想知道你的Linux系统是否已经受到Shellshock影响吗?有一个简单的方法来检查它,这就是我们接下来要做的。
+
+### 检查Linux系统的Shellshock漏洞 ###
+
+打开一个终端,在其中运行以下命令:
+
+    env x='() { :;}; echo vulnerable' bash -c 'echo hello'
+
+如果你的系统没有漏洞,你会看到这样的输出:
+
+    bash: warning: x: ignoring function definition attempt
+    bash: error importing function definition for `x’
+    hello
+
+如果你的系统有Shellshock漏洞,你会看到一个像这样的输出:
+
+    vulnerable
+    hello
+
+我尝试在我的Ubuntu14.10上运行,我得到了这个:
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/Shellshock_Linux_Check.jpeg)
+
+您还可以通过使用下面的命令查看bash的版本:
+
+    bash --version
+
+如果bash的版本是3.2.51(1),你就应该更新了。
+
+#### 为有Shellshock漏洞的Linux系统打补丁 ####
+
+如果你运行的是基于Debian的Linux操作系统,如Ubuntu、Linux Mint等,请使用以下命令升级Bash:
+
+    sudo apt-get update && sudo apt-get install --only-upgrade bash
+
+对于如Fedora,Red Hat,Cent OS等操作系统,请使用以下命令:
+
+    yum -y update bash
+
+我希望这个小技巧可以帮助你,看看你是否受到Shellshock漏洞的影响并解决它。有任何问题和建议,欢迎来提。
+ 
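如果需要在多台机器上反复检查,也可以把前面的检测命令包装成一个小脚本(这个脚本只是对原文命令的简单封装,脚本结构和输出文字是笔者自拟的示例):

```shell
#!/bin/sh
# 运行文中的 Shellshock 检测命令,丢弃标准错误,只保留标准输出
result=$(env x='() { :;}; echo vulnerable' bash -c 'echo hello' 2>/dev/null)

# 如果输出中出现 "vulnerable",说明 bash 执行了环境变量里注入的代码
case "$result" in
  *vulnerable*) status="VULNERABLE" ;;
  *)            status="SAFE" ;;
esac

echo "Shellshock check result: $status"
```

在已经打过补丁的系统上,它会输出 SAFE;在存在漏洞的系统上则输出 VULNERABLE。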
+-------------------------------------------------------------------------------- + +via: http://itsfoss.com/linux-shellshock-check-fix/ + +作者:[Abhishek][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/Abhishek/ +[1]:http://en.wikipedia.org/wiki/Bash_(Unix_shell) +[2]:https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/ +[3]:http://www.wired.com/2014/09/hackers-already-using-shellshock-bug-create-botnets-ddos-attacks/ \ No newline at end of file diff --git a/published/20140610 How does the cloud affect the everyday linux user.md b/published/201410/20140610 How does the cloud affect the everyday linux user.md similarity index 100% rename from published/20140610 How does the cloud affect the everyday linux user.md rename to published/201410/20140610 How does the cloud affect the everyday linux user.md diff --git a/published/20140624 Staying free--should GCC allow non-free plug ins.md b/published/201410/20140624 Staying free--should GCC allow non-free plug ins.md similarity index 100% rename from published/20140624 Staying free--should GCC allow non-free plug ins.md rename to published/201410/20140624 Staying free--should GCC allow non-free plug ins.md diff --git a/published/20140716 Linux FAQs with Answers--How to define PATH environment variable for sudo commands.md b/published/201410/20140716 Linux FAQs with Answers--How to define PATH environment variable for sudo commands.md similarity index 100% rename from published/20140716 Linux FAQs with Answers--How to define PATH environment variable for sudo commands.md rename to published/201410/20140716 Linux FAQs with Answers--How to define PATH environment variable for sudo commands.md diff --git a/published/20140718 Need Microsoft Office on Ubuntu--Install the Official Webapps.md b/published/201410/20140718 
Need Microsoft Office on Ubuntu--Install the Official Webapps.md similarity index 100% rename from published/20140718 Need Microsoft Office on Ubuntu--Install the Official Webapps.md rename to published/201410/20140718 Need Microsoft Office on Ubuntu--Install the Official Webapps.md diff --git a/published/20140722 How to manage DigitalOcean VPS droplets from the command line on Linux.md b/published/201410/20140722 How to manage DigitalOcean VPS droplets from the command line on Linux.md similarity index 100% rename from published/20140722 How to manage DigitalOcean VPS droplets from the command line on Linux.md rename to published/201410/20140722 How to manage DigitalOcean VPS droplets from the command line on Linux.md diff --git a/published/20140723 How to access SoundCloud from the command line in Linux.md b/published/201410/20140723 How to access SoundCloud from the command line in Linux.md similarity index 100% rename from published/20140723 How to access SoundCloud from the command line in Linux.md rename to published/201410/20140723 How to access SoundCloud from the command line in Linux.md diff --git a/published/20140723 Top 10 Fun On The Command Line.md b/published/201410/20140723 Top 10 Fun On The Command Line.md similarity index 100% rename from published/20140723 Top 10 Fun On The Command Line.md rename to published/201410/20140723 Top 10 Fun On The Command Line.md diff --git a/published/20140724 diff -u--What is New in Kernel Development.md b/published/201410/20140724 diff -u--What is New in Kernel Development.md similarity index 100% rename from published/20140724 diff -u--What is New in Kernel Development.md rename to published/201410/20140724 diff -u--What is New in Kernel Development.md diff --git a/published/20140729 10 Useful 'Squid Proxy Server' Interview Questions and Answers in Linux.md b/published/201410/20140729 10 Useful 'Squid Proxy Server' Interview Questions and Answers in Linux.md similarity index 100% rename from published/20140729 10 
Useful 'Squid Proxy Server' Interview Questions and Answers in Linux.md rename to published/201410/20140729 10 Useful 'Squid Proxy Server' Interview Questions and Answers in Linux.md diff --git a/translated/tech/20140729 How to access Linux command cheat sheets from the command line.md b/published/201410/20140729 How to access Linux command cheat sheets from the command line.md similarity index 71% rename from translated/tech/20140729 How to access Linux command cheat sheets from the command line.md rename to published/201410/20140729 How to access Linux command cheat sheets from the command line.md index b1914563fa..cbd40b56c3 100644 --- a/translated/tech/20140729 How to access Linux command cheat sheets from the command line.md +++ b/published/201410/20140729 How to access Linux command cheat sheets from the command line.md @@ -1,12 +1,12 @@ 从命令行访问Linux命令小抄 ================================================================================ -Linux命令行的强大在于其灵活及多样化,各个Linux命令都带有它自己那部分命令行选项和参数。混合并匹配它们,甚至还可以通过管道和重定向来联结不同的命令。理论上讲,你可以借助几个基本的命令来产生数以百计的使用案例。甚至对于浸淫多年的管理员而言,也难以完全使用它们。那正是命令行小抄成为我们救命稻草的一刻。 +Linux命令行的强大在于其灵活及多样化,各个Linux命令都带有它自己专属的命令行选项和参数。混合并匹配这些命令,甚至还可以通过管道和重定向来联结不同的命令。理论上讲,你可以借助几个基本的命令来产生数以百计的使用案例。甚至对于浸淫多年的管理员而言,也难以完全使用它们。那正是命令行小抄成为我们救命稻草的一刻。 [![](https://farm6.staticflickr.com/5562/14752051134_5a7c3d2aa4_z.jpg)][1] -我知道联机手册页仍然是我们的良师益友,但我们想通过我们能自行支配的快速参考卡让这一切更为高效和有目的性。最终极的小抄可能被自豪地挂在你的办公室里,也可能作为PDF文件隐秘地存储在你的硬盘上,或者甚至设置成了你的桌面背景图。 +我知道联机手册页(man)仍然是我们的良师益友,但我们想通过我们能自行支配的快速参考卡让这一切更为高效和有目的性。最终极的小抄可能被自豪地挂在你的办公室里,也可能作为PDF文件隐秘地存储在你的硬盘上,或者甚至设置成了你的桌面背景图。 -最为一个选择,也可以通过另外一个命令来访问你最爱的命令行小抄。那就是,使用[cheat][2]。这是一个命令行工具,它可以让你从命令行读取、创建或更新小抄。这个想法很简单,不过cheat经证明是十分有用的。本教程主要介绍Linux下cheat命令的使用方法。你不需要为cheat命令做个小抄了,它真的很简单。 +做为一个选择,也可以通过另外一个命令来访问你最爱的命令行小抄。那就是,使用[cheat][2]。这是一个命令行工具,它可以让你从命令行读取、创建或更新小抄。这个想法很简单,不过cheat经证明是十分有用的。本教程主要介绍Linux下cheat命令的使用方法。你不需要为cheat命令做个小抄了,它真的很简单。 ### 安装Cheat到Linux ### @@ -59,9 +59,9 @@ cheat命令一个很酷的事是,它自带有超过90个的常用Linux命令 $ cheat -s 
-在许多情况下,小抄适用于那些正派的人,而对其他某些人却没什么帮助。要想让内建的小抄更具个性化,cheat命令也允许你创建新的小抄,或者更新现存的那些。要这么做的话,cheat命令也会帮你在本地~/.cheat目录中保存一份小抄的副本。 +在许多情况下,小抄适用于某些人,而对另外一些人却没什么帮助。要想让内建的小抄更具个性化,cheat命令也允许你创建新的小抄,或者更新现存的那些。要这么做的话,cheat命令也会帮你在本地~/.cheat目录中保存一份小抄的副本。 -要使用cheat的编辑功能,首先确保EDITOR环境变量设置为了你默认编辑器所在位置的完整路径。然后,复制(不可编辑)内建小抄到~/.cheat目录。你可以通过下面的命令找到内建小抄所在的位置。一旦你找到了它们的位置,只不过是将它们拷贝到~/.cheat目录。 +要使用cheat的编辑功能,首先确保EDITOR环境变量设置为你默认编辑器所在位置的完整路径。然后,复制(不可编辑)内建小抄到~/.cheat目录。你可以通过下面的命令找到内建小抄所在的位置。一旦你找到了它们的位置,只不过是将它们拷贝到~/.cheat目录。 $ cheat -d @@ -85,7 +85,7 @@ via: http://xmodulo.com/2014/07/access-linux-command-cheat-sheets-command-line.h 作者:[Dan Nanni][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/20140730 How to use variables in shell Scripting.md b/published/201410/20140730 How to use variables in shell Scripting.md similarity index 100% rename from published/20140730 How to use variables in shell Scripting.md rename to published/201410/20140730 How to use variables in shell Scripting.md diff --git a/published/20140731 Easy Steps to Make GNOME 3 More Efficient.md b/published/201410/20140731 Easy Steps to Make GNOME 3 More Efficient.md similarity index 100% rename from published/20140731 Easy Steps to Make GNOME 3 More Efficient.md rename to published/201410/20140731 Easy Steps to Make GNOME 3 More Efficient.md diff --git a/published/20140801 How To Install Java On Ubuntu 14.04.md b/published/201410/20140801 How To Install Java On Ubuntu 14.04.md similarity index 100% rename from published/20140801 How To Install Java On Ubuntu 14.04.md rename to published/201410/20140801 How To Install Java On Ubuntu 14.04.md diff --git a/published/20140801 How to Create an Ubuntu Kiosk Computer.md b/published/201410/20140801 How to Create an Ubuntu Kiosk Computer.md similarity index 100% rename from published/20140801 How to Create an 
Ubuntu Kiosk Computer.md rename to published/201410/20140801 How to Create an Ubuntu Kiosk Computer.md diff --git a/published/20140804 Cheat--An Ultimate Command Line 'Cheat-Sheet' for Linux Beginners and Administrators.md b/published/201410/20140804 Cheat--An Ultimate Command Line 'Cheat-Sheet' for Linux Beginners and Administrators.md similarity index 100% rename from published/20140804 Cheat--An Ultimate Command Line 'Cheat-Sheet' for Linux Beginners and Administrators.md rename to published/201410/20140804 Cheat--An Ultimate Command Line 'Cheat-Sheet' for Linux Beginners and Administrators.md diff --git a/published/20140808 When Linux Was Perfect Enough.md b/published/201410/20140808 When Linux Was Perfect Enough.md similarity index 100% rename from published/20140808 When Linux Was Perfect Enough.md rename to published/201410/20140808 When Linux Was Perfect Enough.md diff --git a/published/20140811 Disable or Password Protect Single User Mode or RHEL ro CentOS ro 5.x ro 6.x.md b/published/201410/20140811 Disable or Password Protect Single User Mode or RHEL ro CentOS ro 5.x ro 6.x.md similarity index 100% rename from published/20140811 Disable or Password Protect Single User Mode or RHEL ro CentOS ro 5.x ro 6.x.md rename to published/201410/20140811 Disable or Password Protect Single User Mode or RHEL ro CentOS ro 5.x ro 6.x.md diff --git a/published/20140811 How to Image and Clone Hard Drives with Clonezilla.md b/published/201410/20140811 How to Image and Clone Hard Drives with Clonezilla.md similarity index 100% rename from published/20140811 How to Image and Clone Hard Drives with Clonezilla.md rename to published/201410/20140811 How to Image and Clone Hard Drives with Clonezilla.md diff --git a/published/20140811 Linux FAQs with Answers--How to check the last time system was rebooted on Linux.md b/published/201410/20140811 Linux FAQs with Answers--How to check the last time system was rebooted on Linux.md similarity index 100% rename from published/20140811 
Linux FAQs with Answers--How to check the last time system was rebooted on Linux.md rename to published/201410/20140811 Linux FAQs with Answers--How to check the last time system was rebooted on Linux.md diff --git a/published/20140818 Linux FAQs with Answers--How to set a static MAC address on VMware ESXi virtual machine.md b/published/201410/20140818 Linux FAQs with Answers--How to set a static MAC address on VMware ESXi virtual machine.md similarity index 100% rename from published/20140818 Linux FAQs with Answers--How to set a static MAC address on VMware ESXi virtual machine.md rename to published/201410/20140818 Linux FAQs with Answers--How to set a static MAC address on VMware ESXi virtual machine.md diff --git a/translated/talk/20140818 Where And How To Code--Choosing The Best Free Code Editor.md b/published/201410/20140818 Where And How To Code--Choosing The Best Free Code Editor.md similarity index 75% rename from translated/talk/20140818 Where And How To Code--Choosing The Best Free Code Editor.md rename to published/201410/20140818 Where And How To Code--Choosing The Best Free Code Editor.md index 495ab39e3b..771f0c10b3 100644 --- a/translated/talk/20140818 Where And How To Code--Choosing The Best Free Code Editor.md +++ b/published/201410/20140818 Where And How To Code--Choosing The Best Free Code Editor.md @@ -1,14 +1,14 @@ -在哪儿以及怎么写代码:选择最好的免费代码编辑器 +何处写,如何写:选择最好的免费在线代码编辑器 ================================================================================ -深入了解一下Cloud9,Koding和Nitrous.IO。 +> 深入了解一下Cloud9,Koding和Nitrous.IO。 ![](http://a2.files.readwrite.com/image/upload/c_fill,h_900,q_70,w_1600/MTIzMDQ5NjYzODM4NDU1MzA4.jpg) -**已经准备好开始你的第一个编程项目了吗?很好!只要配置一下**终端或命令行,学习如何使用并安装所有要用到的编程语言,插件库和API函数库。当最终准备好一切以后,再安装好[Visual Studio][1]就可以开始了,然后才可以预览自己的工作。 +已经准备好开始你的第一个编程项目了吗?很好!只要配置一下终端或命令行,学习如何使用它,然后安装所有要用到的编程语言,插件库和API函数库。当最终准备好一切以后,再安装好[Visual Studio][1]就可以开始了,然后才可以预览自己的工作。 至少这是大家过去已经熟悉的方式。 
-也难怪初学程序员们逐渐喜欢上在线集成开发环境(IDE)了。IDE是一个代码编辑器,不过已经准备好编程语言以及所有需要的依赖,可以让你避免把它们一一安装到电脑上的麻烦。 +也难怪初学程序员们逐渐喜欢上在线的集成开发环境(IDE)了。IDE是一个代码编辑器,不过已经准备好编程语言以及所有需要的依赖,可以让你避免把它们一一安装到电脑上的麻烦。 我想搞清楚到底是哪些因素能组成一个典型的IDE,所以我试用了一下免费级别的时下最受欢迎的三款集成开发环境:[Cloud9][2],[Koding][3]和[Nitrous.IO][4]。在这个过程中,我了解了许多程序员应该或不应该使用IDE的各种情形。 @@ -16,7 +16,7 @@ 假如有一个像Microsoft Word那样的文字编辑器,想想类似Google Drive那样的IDE吧。你可以拥有类似的功能,但是它还能支持从任意电脑上访问,还能随时共享。因为因特网在项目工作流中的影响已经越来越重要,IDE也让生活更轻松。 -在我最近的一篇ReadWrite教程中我使用了Nitrous.IO,这是在文章[创建一个你自己的像Yo那样的极端简单的聊天应用][5]里的一个Python应用。当使用IDE的时候,你只要选择你要用的编程语言,然后通过IDE特别设计用来运行这种语言程序的虚拟机(VM),你就可以测试和预览你的应用了。 +在我最近的一篇ReadWrite教程中我使用了Nitrous.IO,这是在文章“[创建一个你自己的像Yo那样的极端简单的聊天应用][5]”里的一个Python应用。当使用IDE的时候,你只要选择你要用的编程语言,然后通过IDE特别为运行这种语言程序而设计的虚拟机(VM),你就可以测试和预览你的应用了。 如果你读过那篇教程,就会知道我的那个应用只用到了两个API库-信息服务Twilio和Python微框架Flask。在我的电脑上就算是使用文字编辑器和终端来做也是很简单的,不过我选择使用IDE还有一个方便的地方:如果大家都使用同样的开发环境,跟着教程一步步走下去就更简单了。 @@ -28,7 +28,7 @@ 但是不能用IDE来永久存储你的整个项目。把帖子保存在Google Drive文件中不会让你的博客丢失。类似Google Drive,IDE可以让你创建链接用于共享内容,但是任何一个都还不足以替代真正的托管服务器。 -还有,IDE并不是设计成方便广泛共享。尽管各种IDE都在不断改善大多数文字编辑器的预览功能,还只能用来给你的朋友或同事展示一下应用预览,而不是,比如说,类似Hacker News的主页。那样的话,占用太多带宽的IDE也许会让你崩溃。 +还有,IDE并不是设计成方便广泛共享。尽管各种IDE都在不断改善大多数文字编辑器的预览功能,还只能用来给你的朋友或同事展示一下应用的预览,而不是像Hacker News一样的主页。那样的话,占用太多带宽的IDE也许会让你崩溃。 这样说吧:IDE只是构建和测试你的应用的地方,托管服务器才是它们生存的地方。所以一旦完成了你的应用,你会希望把它布置到能长期托管的云服务器上,最好是能免费托管的那种,例如[Heroku][6]。 @@ -44,7 +44,7 @@ 当我完成了Cloud9的注册后,它提示的第一件事情就是添加我的GitHub和BitBucket账号。马上,所有我的GitHub项目,个人的和协作的,都可以直接克隆到本地并使用Cloud9的开发工具开始工作。其他的IDE在和GitHub集成的方面都没有达到这种水准。 -在我测试的这三款IDE中,Cloud9看起来更加侧重于一个可以让协同工作的人们无缝衔接工作的环境。在这里,它并不是角落里放个聊天窗口。实际上,按照CEO Ruben Daniels说的,试用Cloud9的协作者可以互相看到其他人实时的编码情况,就像Google Drive上的合作者那样。 +在我测试的这三款IDE中,Cloud9看起来更加侧重于一个可以让协同工作的人们无缝衔接工作的环境。在这里,它并不是角落里放个聊天窗口。实际上,按照其CEO Ruben Daniels说的,试用Cloud9的协作者可以互相看到其他人实时的编码情况,就像Google Drive上的合作者那样。 “大多数IDE服务的协同功能只能操作单一文件”,Daniels说,“而我们的产品可以支持整个项目中的不同文件。协同功能被完美集成到了我们的IDE中。” @@ -58,15 +58,15 @@ IDE可以提供你所需的工具来构建和测试所有开源编程语言的 ### Nitrous.IO: An IDE Wherever You Want ### 
-相对于自己的桌面环境,使用IDE的最大优势是它是自包含的。你不需要安装任何其他的就可以使用。而另一方面,使用自己的桌面环境的最大优势就是你可以在本地工作,甚至在没有互联网的情况下。 +相对于自己的桌面环境,使用IDE的最大优势是它是自足的。你不需要安装任何其他的东西就可以使用。而另一方面,使用自己的桌面环境的最大优势就是你可以在本地工作,甚至在没有互联网的情况下。 -Nitrous.IO结合了这两个优势。你可以在网站上在线使用这个IDE,你也可以把它下载到自己的饿电脑上,共同创始人AJ Solimine这样说。优点是你可以结合Nitrous的集成性和你最喜欢的文字编辑器的熟悉。 +Nitrous.IO结合了这两个优势。“你可以在网站上在线使用这个IDE,你也可以把它下载到自己的电脑上”,其共同创始人AJ Solimine这样说。优点是你可以结合Nitrous的集成性和你最喜欢的文字编辑器的熟悉。 -他说:“你可以使用任意当代浏览器访问Nitrous.IO的在线IDE网站,但我们仍然提供了方便的Windows和Mac桌面应用,可以让你使用你最喜欢的编辑器来写代码。” +他说:“你可以使用任意现代浏览器访问Nitrous.IO的在线IDE网站,但我们仍然提供了方便的Windows和Mac桌面应用,可以让你使用你最喜欢的编辑器来写代码。” ### 底线 ### -这一个星期的[使用][7]三个不同IDE的最让我意外的收获?它们是如此相似。[当用来做最基本的代码编辑的时候][8],它们都一样的好用。 +这一个星期[使用][7]三个不同IDE的最让我意外的收获是什么?它们是如此相似。[当用来做最基本的代码编辑的时候][8],它们都一样的好用。 Cloud9,Koding,[和Nitrous.IO都支持][9]所有主流的开源编程语言,从Ruby到Python到PHP到HTML5。你可以选择任何一种VM。 @@ -76,7 +76,7 @@ Cloud9和Nitrous.IO都实现了GitHub的一键集成。Koding需要[多几个步 不好的一面,它们都有相同的缺陷,不过考虑到它们都是免费的也还合理。你每次只能同时运行一个VM来测试特定编程语言写出的程序。而当你一段时间没有使用VM之后,IDE会把VM切换成休眠模式以节省带宽,而下次要用的时候就得等它重新加载(Cloud9在这一点上更加费力)。它们中也没有任何一个为已完成的项目提供像样的永久托管服务。 -所以,对咨询我是否有一个完美的免费IDE的人,答案是可能没有。但是这也要看你侧重的地方,对你的某个项目来说也许有一个完美的IDE。 +所以,对咨询我是否有一个完美的免费IDE的人来说,答案是可能没有。但是这也要看你侧重的地方,对你的某个项目来说也许有一个完美的IDE。 图片由[Shutterstock][11]友情提供 @@ -86,7 +86,7 @@ via: http://readwrite.com/2014/08/14/cloud9-koding-nitrousio-integrated-developm 作者:[Lauren Orsini][a] 译者:[zpl1025](https://github.com/zpl1025) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/tech/20140819 8 Options to Trace or Debug Programs using Linux strace Command.md b/published/201410/20140819 8 Options to Trace or Debug Programs using Linux strace Command.md similarity index 85% rename from translated/tech/20140819 8 Options to Trace or Debug Programs using Linux strace Command.md rename to published/201410/20140819 8 Options to Trace or Debug Programs using Linux strace Command.md index cb713ec438..a840ee7b59 100644 --- 
a/translated/tech/20140819 8 Options to Trace or Debug Programs using Linux strace Command.md +++ b/published/201410/20140819 8 Options to Trace or Debug Programs using Linux strace Command.md @@ -1,14 +1,16 @@ -8 Options to Trace/Debug Programs using Linux strace Command +使用 Linux 的 strace 命令跟踪/调试程序的常用选项 ================================================================================ + 在调试的时候,strace能帮助你追踪到一个程序所执行的系统调用。当你想知道程序和操作系统如何交互的时候,这是极其方便的,比如你想知道执行了哪些系统调用,并且以何种顺序执行。 这个简单而又强大的工具几乎在所有的Linux操作系统上可用,并且可被用来调试大量的程序。 -### 1. 命令用法 ### +### 命令用法 ### 让我们看看strace命令如何追踪一个程序的执行情况。 最简单的形式,strace后面可以跟任何命令。它将列出许许多多的系统调用。一开始,我们并不能理解所有的输出,但是如果你正在寻找一些特殊的东西,那么你应该能从输出中发现它。 + 让我们来看看简单命令ls的系统调用跟踪情况。 raghu@raghu-Linoxide ~ $ strace ls @@ -20,21 +22,22 @@ ![Strace write system call (ls)](http://linoxide.com/wp-content/uploads/2014/08/02.strace_ls_write.png) 上面的输出部分展示了write系统调用,它把当前目录的列表输出到标准输出。 + 下面的图片展示了使用ls命令列出的目录内容(没有使用strace)。 raghu@raghu-Linoxide ~ $ ls ![ls command output](http://linoxide.com/wp-content/uploads/2014/08/03.ls_.png) -#### 1.1 寻找被程序读取的配置文件 #### +#### 选项1 寻找被程序读取的配置文件 #### -一个有用的跟踪(除了调试某些问题以外)是你能找到被一个程序读取的配置文件。例如, +Strace 的用法之一(除了调试某些问题以外)是你能找到被一个程序读取的配置文件。例如, raghu@raghu-Linoxide ~ $ strace php 2>&1 | grep php.ini ![Strace config file read by program](http://linoxide.com/wp-content/uploads/2014/08/04.strace_php_configuration.png) -#### 1.2 跟踪指定的系统调用 #### +#### 选项2 跟踪指定的系统调用 #### strace命令的-e选项仅仅被用来展示特定的系统调用(例如,open,write等等) @@ -44,7 +47,7 @@ strace命令的-e选项仅仅被用来展示特定的系统调用(例如,ope ![Stracing specific system call (open here)](http://linoxide.com/wp-content/uploads/2014/08/05.strace_open_systemcall.png) -#### 1.3 用于进程 #### +#### 选项3 跟踪进程 #### strace不但能用在命令上,而且通过使用-p选项能用在运行的进程上。 @@ -52,15 +55,15 @@ strace不但能用在命令上,而且通过使用-p选项能用在运行的进 ![Strace a process](http://linoxide.com/wp-content/uploads/2014/08/06.strace_process.png) -#### 1.4 strace的统计概要 #### +#### 选项4 strace的统计概要 #### -包括系统调用的概要,执行时间,错误等等。使用-c选项能够以一种整洁的方式展示: +它包括系统调用的概要,执行时间,错误等等。使用-c选项能够以一种整洁的方式展示: 
raghu@raghu-Linoxide ~ $ strace -c ls ![Strace summary display](http://linoxide.com/wp-content/uploads/2014/08/07.strace_summary.png) -#### 1.5 保存输出结果 #### +#### 选项5 保存输出结果 #### 通过使用-o选项可以把strace命令的输出结果保存到一个文件中。 @@ -70,7 +73,7 @@ strace不但能用在命令上,而且通过使用-p选项能用在运行的进 之所以以sudo来运行上面的命令,是为了防止用户ID与所查看进程的所有者ID不匹配的情况。 -### 1.6 显示时间戳 ### +### 选项6 显示时间戳 ### 使用-t选项,可以在每行的输出之前添加时间戳。 @@ -78,7 +81,7 @@ strace不但能用在命令上,而且通过使用-p选项能用在运行的进 ![Timestamp before each output line](http://linoxide.com/wp-content/uploads/2014/08/09.strace_timestamp.png) -#### 1.7 更好的时间戳 #### +#### 选项7 更精细的时间戳 #### -tt选项可以展示微秒级别的时间戳。 @@ -92,7 +95,7 @@ strace不但能用在命令上,而且通过使用-p选项能用在运行的进 ![Seconds since epoch](http://linoxide.com/wp-content/uploads/2014/08/011.strace_epoch_seconds.png) -#### 1.8 Relative Time #### +#### 选项8 相对时间 #### -r选项展示系统调用之间的相对时间戳。 @@ -106,7 +109,7 @@ via: http://linoxide.com/linux-command/linux-strace-command-examples/ 作者:[Raghu][a] 译者:[guodongxiaren](https://github.com/guodongxiaren) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/20140819 A Pocket Guide for Linux ssh Command with Examples.md b/published/201410/20140819 A Pocket Guide for Linux ssh Command with Examples.md similarity index 100% rename from published/20140819 A Pocket Guide for Linux ssh Command with Examples.md rename to published/201410/20140819 A Pocket Guide for Linux ssh Command with Examples.md diff --git a/published/20140819 KDE Plasma 5--For those Linux users undecided on the kernel' s future.md b/published/201410/20140819 KDE Plasma 5--For those Linux users undecided on the kernel' s future.md similarity index 100% rename from published/20140819 KDE Plasma 5--For those Linux users undecided on the kernel' s future.md rename to published/201410/20140819 KDE Plasma 5--For those Linux users undecided on the kernel' s future.md diff --git a/published/20140819 Linux Systemd--Start or 
Stop or Restart Services in RHEL or CentOS 7.md b/published/201410/20140819 Linux Systemd--Start or Stop or Restart Services in RHEL or CentOS 7.md similarity index 100% rename from published/20140819 Linux Systemd--Start or Stop or Restart Services in RHEL or CentOS 7.md rename to published/201410/20140819 Linux Systemd--Start or Stop or Restart Services in RHEL or CentOS 7.md diff --git a/published/20140822 15 Practical Examples of 'cd' Command in Linux.md b/published/201410/20140822 15 Practical Examples of 'cd' Command in Linux.md similarity index 100% rename from published/20140822 15 Practical Examples of 'cd' Command in Linux.md rename to published/201410/20140822 15 Practical Examples of 'cd' Command in Linux.md diff --git a/published/20140825 China Will Change The Way All Software Is Bought And Sold.md b/published/201410/20140825 China Will Change The Way All Software Is Bought And Sold.md similarity index 100% rename from published/20140825 China Will Change The Way All Software Is Bought And Sold.md rename to published/201410/20140825 China Will Change The Way All Software Is Bought And Sold.md diff --git a/published/20140825 Linux FAQs with Answers--How to enable Nux Dextop repository on CentOS or RHEL.md b/published/201410/20140825 Linux FAQs with Answers--How to enable Nux Dextop repository on CentOS or RHEL.md similarity index 100% rename from published/20140825 Linux FAQs with Answers--How to enable Nux Dextop repository on CentOS or RHEL.md rename to published/201410/20140825 Linux FAQs with Answers--How to enable Nux Dextop repository on CentOS or RHEL.md diff --git a/published/20140825 Linux FAQs with Answers--How to fix 'failed to run aclocal--No such file or directory'.md b/published/201410/20140825 Linux FAQs with Answers--How to fix 'failed to run aclocal--No such file or directory'.md similarity index 100% rename from published/20140825 Linux FAQs with Answers--How to fix 'failed to run aclocal--No such file or directory'.md rename to 
published/201410/20140825 Linux FAQs with Answers--How to fix 'failed to run aclocal--No such file or directory'.md diff --git a/published/20140825 Linux Terminal--speedtest_cli checks your real bandwidth speed.md b/published/201410/20140825 Linux Terminal--speedtest_cli checks your real bandwidth speed.md similarity index 100% rename from published/20140825 Linux Terminal--speedtest_cli checks your real bandwidth speed.md rename to published/201410/20140825 Linux Terminal--speedtest_cli checks your real bandwidth speed.md diff --git a/published/20140826 Linus Torvalds is my hero, says 13 year old Zachary DuPont.md b/published/201410/20140826 Linus Torvalds is my hero, says 13 year old Zachary DuPont.md similarity index 100% rename from published/20140826 Linus Torvalds is my hero, says 13 year old Zachary DuPont.md rename to published/201410/20140826 Linus Torvalds is my hero, says 13 year old Zachary DuPont.md diff --git a/published/20140828 GIMP 2.8.12 Released--Here' s How to Install it on Ubuntu.md b/published/201410/20140828 GIMP 2.8.12 Released--Here' s How to Install it on Ubuntu.md similarity index 100% rename from published/20140828 GIMP 2.8.12 Released--Here' s How to Install it on Ubuntu.md rename to published/201410/20140828 GIMP 2.8.12 Released--Here' s How to Install it on Ubuntu.md diff --git a/published/20140829 Linux Doesn't Need to Own the Desktop.md b/published/201410/20140829 Linux Doesn't Need to Own the Desktop.md similarity index 100% rename from published/20140829 Linux Doesn't Need to Own the Desktop.md rename to published/201410/20140829 Linux Doesn't Need to Own the Desktop.md diff --git a/published/20140901 Awesome systemd Commands to Manage Linux System.md b/published/201410/20140901 Awesome systemd Commands to Manage Linux System.md similarity index 100% rename from published/20140901 Awesome systemd Commands to Manage Linux System.md rename to published/201410/20140901 Awesome systemd Commands to Manage Linux System.md diff --git 
a/published/20140901 Remarkable--A New MarkDown Editor For Linux.md b/published/201410/20140901 Remarkable--A New MarkDown Editor For Linux.md similarity index 100% rename from published/20140901 Remarkable--A New MarkDown Editor For Linux.md rename to published/201410/20140901 Remarkable--A New MarkDown Editor For Linux.md diff --git a/published/20140901 Ubuntu Wallpaper Contest Closes, These Are Our 8 Faves.md b/published/201410/20140901 Ubuntu Wallpaper Contest Closes, These Are Our 8 Faves.md similarity index 100% rename from published/20140901 Ubuntu Wallpaper Contest Closes, These Are Our 8 Faves.md rename to published/201410/20140901 Ubuntu Wallpaper Contest Closes, These Are Our 8 Faves.md diff --git a/published/20140902 Happy Birthday Email.md b/published/201410/20140902 Happy Birthday Email.md similarity index 100% rename from published/20140902 Happy Birthday Email.md rename to published/201410/20140902 Happy Birthday Email.md diff --git a/published/20140902 How to share on linux the output of your shell commands.md b/published/201410/20140902 How to share on linux the output of your shell commands.md similarity index 100% rename from published/20140902 How to share on linux the output of your shell commands.md rename to published/201410/20140902 How to share on linux the output of your shell commands.md diff --git a/published/20140902 Use SearchMonkey To Search Text In All Files Within A Directory In Ubuntu.md b/published/201410/20140902 Use SearchMonkey To Search Text In All Files Within A Directory In Ubuntu.md similarity index 100% rename from published/20140902 Use SearchMonkey To Search Text In All Files Within A Directory In Ubuntu.md rename to published/201410/20140902 Use SearchMonkey To Search Text In All Files Within A Directory In Ubuntu.md diff --git a/translated/talk/20140904 The Masked Avengers.md b/published/201410/20140904 The Masked Avengers.md similarity index 70% rename from translated/talk/20140904 The Masked Avengers.md rename to 
published/201410/20140904 The Masked Avengers.md index 2add033d0f..8d81cf3188 100644 --- a/translated/talk/20140904 The Masked Avengers.md +++ b/published/201410/20140904 The Masked Avengers.md @@ -6,13 +6,13 @@
通过入会声明,任何人都能轻易加入“匿名者”组织。某人类学家称,组织成员会“根据影响程度对重大事件保持着不同关注,特别是那些能挑起强烈争端的事件”。
-布景:Jeff Nishinaka / 摄影:Scott Dunbar +纸雕作品:Jeff Nishinaka / 摄影:Scott Dunbar

1

-

上世纪七十年代中期,当 Christopher Doyon 还是一个生活在缅因州乡村的孩童时,就终日泡在 CB radio 上与各种陌生人聊天。他的昵称是“大红”,因为他有一头红色的头发。Christopher Doyon 把发射机挂在了卧室的墙壁上,并且说服了父亲在自家屋顶安装了两根天线。CB radio 主要用于卡车司机间的联络,但 Doyon 和一些人却将之用于不久后出现在 Internet 上的虚拟社交——自定义昵称、成员间才懂的笑话,以及施行变革的强烈愿望。

+

上世纪七十年代中期,当 Christopher Doyon 还是一个生活在缅因州乡村的孩童时,就终日泡在 CB radio 上与各种陌生人聊天。他的昵称是“Big red”(大红),因为他有一头红色的头发。Christopher Doyon 把发射机挂在了卧室的墙壁上,并且说服了父亲在自家屋顶安装了两根天线。CB radio 主要用于卡车司机间的联络,但 Doyon 和一些人却将之用于不久后出现在 Internet 上的虚拟社交——自定义昵称、成员间才懂的笑话,以及施行变革的强烈愿望。

-

Doyon 很小的时候母亲就去世了,兄妹二人由父亲抚养长大,他俩都说受到过父亲的虐待。由此 Doyon 在 CB radio 社区中找到了慰藉和归属感。他和他的朋友们轮流监听当地紧急事件频道。其中一个朋友的父亲买了一个气泡灯并安装在了他的车顶上;每当这个孩子收听到来自孤立无援的乘车人的求助后,都会开车载着所有人到求助者所在的公路旁。除了拨打 911 外他们基本没有什么可做的,但这足以让他们感觉自己成为了英雄。

+

Doyon 很小的时候母亲就去世了,兄妹二人由父亲抚养长大,他俩都说受到过父亲的虐待。由此 Doyon 在 CB radio 社区中找到了慰藉和目标感。他和他的朋友们轮流监听当地紧急事件频道。其中一个朋友的父亲买了一个气泡灯并安装在了他的车顶上;每当这个孩子收听到来自孤立无援的乘车人的求助后,都会开车载着所有人到求助者所在的公路旁。除了拨打 911 外他们基本没有什么可做的,但这足以让他们感觉自己成为了英雄。

短小精悍的 Doyon 有着一口浓厚的新英格兰口音,并且非常喜欢《星际迷航》和阿西莫夫的小说。当他在《大众机械》上看到一则“组装你的专属个人计算机”构件广告时,就央求祖父给他买一套,接下来 Doyon 花了数月的时间把计算机组装起来并连接到 Internet 上去。与鲜为人知的 CB 电波相比,在线聊天室确实不可同日而语。“我只需要点一下按钮,再选中某个家伙的名字,然后我就可以和他聊天了,” Doyon 在最近回忆时说道,“这真的很惊人。”

@@ -22,11 +22,11 @@

Doyon 深深地沉溺于计算机中,虽然他并不是一位专业的程序员。在过去一年的几次谈话中,他告诉我他将自己视为激进主义分子,继承了 Abbie Hoffman 和 Eldridge Cleaver 的激进传统;技术不过是他抗议的工具。八十年代,哈佛大学和麻省理工学院的学生们举行集会,强烈要求他们的学校从南非撤资。为了帮助抗议者通过安全渠道进行交流,PLF 制作了无线电套装:移动调频发射器、伸缩式天线,还有麦克风,所有部件都内置于背包内。Willard Johnson,麻省理工学院的一位激进分子和政治学家,表示黑客们出席集会并不意味着一次变革。“我们的大部分工作仍然是通过扩音器来完成的,”他解释道。

-

1992 年,在 Grateful Dead 的一场印第安纳的演唱会上,Doyon 秘密地向一位瘾君子出售了 300 粒药。由此他被判决在印第安纳州立监狱服役十二年,后来改为五年。服役期间,他对宗教和哲学产生了浓厚的兴趣,并于鲍尔州立大学学习了相应课程。

+

1992 年,在印第安纳的一场 Grateful Dead 的演唱会上,Doyon 秘密地向一位瘾君子出售了 300 粒药。由此他被判决在印第安纳州立监狱服刑十二年,后来改为五年。服刑期间,他对宗教和哲学产生了浓厚的兴趣,并于鲍尔州立大学学习了相应课程。

-

1994 年,第一款商业 Web 浏览器网景领航员正式发布,同一年 Doyon 被捕入狱。当他出狱并再次回到剑桥后,PLF 依然活跃着,并且他们的工具有了实质性的飞跃。Doyon 回忆起和他入狱之前的变化,“非常巨大——好比是‘烽火狼烟’跟‘电报传信’之间那么大的差距。”黑客们入侵了一个印度的军事网站,并修改其首页文字为“拯救克什米尔”。在塞尔维亚,黑客们攻陷了一个阿尔巴尼亚网站。Stefan Wray,一位早期网络激进主义分子,为一次纽约“反哥伦布日”集会上的黑客行径辩护。“我们视之为电子形式的公众抗议,”他告诉大家。

+

1994 年,第一款商业 Web 浏览器 Netscape Navigator(网景领航员)正式发布,同一年 Doyon 被捕入狱。当他出狱并再次回到剑桥后,PLF 依然活跃着,并且他们的工具有了实质性的飞跃。Doyon 回忆起与入狱之前相比的变化,“非常巨大——好比是‘烽火狼烟’跟‘电报传信’之间那么大的差距。”黑客们入侵了一个印度的军事网站,并修改其首页文字为“拯救克什米尔”。在塞尔维亚,黑客们攻陷了一个阿尔巴尼亚网站。Stefan Wray,一位早期网络激进主义分子,为一次纽约“反哥伦布日”集会上的黑客行径辩护。“我们视之为电子形式的公众抗议,”他告诉大家。

-

1999 年,美国唱片业协会因为版权侵犯问题起诉了 Napster,一款文件共享软件。最终,Napster 于 2001 年关闭。Doyon 与其他黑客使用分布式拒绝服务(Distributed Denial of Service,DDoS,使大量数据涌入网站导致其响应速度减缓直至奔溃)的手段,攻击了美国唱片业协会的网站,使之停运时间长达一星期之久。Doyon为自己的行为进行了辩解,并高度赞扬了其他的“黑客主义者”。“我们很快意识到保卫 Napster 的战争象征着保卫 Internet 自由的战争,”他在后来写道。

+

1999 年,美国唱片业协会因为版权侵犯问题起诉了 Napster,一款文件共享服务。最终,Napster 于 2001 年关闭。Doyon 与其他黑客使用分布式拒绝服务(Distributed Denial of Service,DDoS,使大量数据涌入网站导致其响应速度减缓直至崩溃)的手段,攻击了美国唱片业协会的网站,使之停运时间长达一星期之久。Doyon 为自己的行为进行了辩解,并高度赞扬了其他的“黑客主义者”。“我们很快意识到保卫 Napster 的战争象征着保卫 Internet 自由的战争,”他在后来写道。

2008 年的一天,Doyon 和 “Commander Adama” 在剑桥的 PLF 地下公寓相遇。Adama 当着 Doyon 的面点击了癫痫基金会的一个链接,与意料中将要打开的论坛不同,出现的是一连串闪烁的彩光。有些癫痫病患者对闪光灯非常敏感——这完全是出于恶意,有人想要在无辜群众中诱发癫痫病。已经出现了至少一名受害者。

@@ -42,69 +42,69 @@
“我得谈谈我的感受。”
-

Poole 希望匿名这一举措可以延续社区的尖锐性因素。“我们无意参与理智的涉外事件讨论,”他在网站上写道。4chan 社区里最具价值的事之一便是寻求“挑起强烈的争端”(lulz),这个词源自缩写 LOL。Lulz 经常是通过分享充满孩子气的笑话或图片来实现的,它们中的大部分不是色情的就是下流的。其中最令人震惊的部分被贴在了网站的“/b/”版块上,这里的用户们称呼自己为“/b/tards”。Doyon 知道 4chan 这个社区,但他认为那些用户是“一群愚昧无知的顽童”。2004 年前后,/b/ 上的部分用户开始把“匿名者”视为一个独立的实体。

+

Poole 希望匿名这一举措可以延续社区的尖锐性因素。“我们无意参与理智的涉外事件讨论,”他在网站上写道。4chan 社区里最具价值的事之一便是寻求“挑起强烈的争端”(lulz),这个词源自缩写 LOL。Lulz 经常是通过分享幼稚的笑话或图片来实现的,其中大部分不是色情的就是下流的。其中最令人震惊的部分被贴在了网站的“/b/”版块上,这里的用户们称呼自己为“/b/tards”。Doyon 知道 4chan 这个社区,但他认为它的用户是“一群愚昧无知的顽童”。2004 年前后,/b/ 上的部分用户开始把“匿名者”视为一个独立的实体。

这是一个全新的黑客团体。“这不是一个传统意义上的组织,”一位领导计算机安全工作的研究员 Mikko Hypponen 告诉我——倒不如,视之为一个非传统的亚文化群体。Barrett Brown,德克萨斯州的一名记者,同时也是众所周知的“匿名者”高层领导,把“匿名者”描述为“一连串前仆后继的伟大友谊”。无需任何会费或者入会仪式。任何想要加入“匿名者”组织,成为一名匿名者(Anon)的人,都可以通过简短的象征性的宣誓加入。

-

尽管 4chan 的关注焦点是一些琐碎的话题,但许多匿名者认为自己就是“正义的十字军”。如果网上有不良迹象出现,他们就会发起具有针对性的治安维护行动。不止一次,他们以未成年少女的身份套取恋童癖的私人信息,然后把这些信息交给警察局。其他匿名者则是政治的厌恶者,为了挑起争端想方设法散布混乱的信息。他们中的一些人在 /b/ 上发布看着像是雷管炸弹的图片;另一些则叫嚣着要炸毁足球场并因此被联邦调查局逮捕。2007 年,一家洛杉矶当地的新闻联盟机构称呼“匿名者”组织为“互联网负能量制造机”。

+

尽管 4chan 的关注焦点是一些琐碎的话题,但许多匿名者认为自己就是“正义的十字军”。如果网上有不良迹象出现,他们就会发起具有针对性的治安维护行动。不止一次,他们以未成年少女的身份使恋童癖陷入圈套,然后把他们的个人信息交给警察局。其他匿名者则是政治的厌恶者,为了挑起争端想方设法散布混乱的信息。他们中的一些人在 /b/ 上发布看着像是雷管炸弹的图片;另一些则叫嚣着要炸毁足球场并因此被联邦调查局逮捕。2007 年,一家洛杉矶当地的新闻联盟机构称呼“匿名者”组织为“互联网负能量制造机”。

2008 年 1 月,Gawker Media 上传了一段关于汤姆克鲁斯大力吹捧山达基优点的视频。这段视频是受版权保护的,山达基教会致信 Gawker,勒令其删除这段视频。“匿名者”组织认为教会企图控制网络信息。“是时候让 /b/ 来干票大的了,”有人在 4chan 上写道。“我说的是‘入侵’或者‘攻陷’山达基官方网站。”一位匿名者使用 YouTube 放出一段“新闻稿”,其中包括暴雨云视频和经过计算机处理的语音。“我们要立刻把你们从 Internet 上赶出去,并且在现有规模上逐渐瓦解山达基教会,”那个声音说,“你们无处可躲。”不到一个星期,这段 YouTube 视频的点击率就超过了两百万次。

-

“匿名者”组织已经不仅限于 4chan 社区。黑客们在专用的互联网中继聊天(Internet Relay Chat channels,IRC 聊天室)频道内进行交流,协商策略。通过 DDoS 攻击手段,他们使山达基的主网站间歇性崩溃了好几天。匿名者们制造了“谷歌炸弹”,由此导致 “dangerous cult” 的搜索结果中的第一条结果就是山达基主网站。其余的匿名者向山达基的欧洲总部寄送了数以百计的披萨,并用大量全黑的传真单耗干了洛杉矶教会总部的传真机墨盒。山达基教会,据报道拥有超过十亿美元资产的组织,当然能经得起墨盒耗尽的考验。但山达基教会的高层可不这么认为,他们还收到了严厉的恐吓,由此他们不得不向 FBI 申请逮捕“匿名者”组织的成员。

+

“匿名者”组织已经不仅限于 4chan 社区。黑客们在专用的互联网中继聊天(Internet Relay Chat channels,IRC 聊天室)频道内进行交流,协商策略。通过 DDoS 攻击手段,他们使山达基的主网站间歇性崩溃了好几天。匿名者们制造了“谷歌炸弹”,由此导致 “dangerous cult” 的搜索结果中的第一条结果就是山达基主网站。其余的匿名者向山达基的欧洲总部寄送了数以百计的披萨,并用大量全黑的传真单耗干了洛杉矶教会总部的传真机墨盒。山达基教会,据报道是一个拥有超过十亿美元资产的组织,当然能经得起墨盒耗尽的考验。但山达基教会的高层可不这么认为,他们还收到了死亡恐吓,由此他们不得不向 FBI 申请调查“匿名者”组织的成员。

2008 年 3 月 15 日,在从伦敦到悉尼的一百多个城市里,数以千计的匿名者们游行示威,抗议山达基教会。为了切合“匿名”这个主题,组织者下令所有的抗议者都应该佩戴相同的面具。深思熟虑过蝙蝠侠后,他们选定了 2005 年上映的反乌托邦电影《V 字仇杀队》中 Guy Fawkes 的面具。“在每个大城市里都能以很便宜的价格大量购买,”广为人知的匿名者、游行组织者之一 Gregg Housh 告诉我。漫画式的面具上是一个脸颊红润的男人,留着八字胡,有着灿烂的笑容。

-

匿名者们并未“瓦解”山达基教会。并且汤姆克鲁斯的那段视频任然保留在网络上。匿名者们证明了自己的顽强。组织选择了一个相当浮夸的口号:“我们是一体。绝不宽恕。永不遗忘。相信我们。”(We are Legion. We do not forgive. We do not forget. Expect us.)

+

匿名者们并未“瓦解”山达基教会。并且汤姆克鲁斯的那段视频仍然保留在网络上。匿名者们证明了自己的顽强。组织选择了一个相当浮夸的口号:“我们是军团。绝不宽恕。永不遗忘。等待我们。”(We are Legion. We do not forgive. We do not forget. Expect us.)

3

2010 年,Doyon 搬到了加利福尼亚州的圣克鲁斯,并加入了当地的“和平阵营”组织。利用从木材堆置场偷来的木头,他在山上盖起了一间简陋的小屋,“借用”附近住宅的 WiFi,使用太阳能电池板发电,并通过贩卖种植的大麻换取现金。

-

与此同时,“和平阵营”维权者们每天晚上开始在公共场所休息,以此抗议圣克鲁斯政府此前颁布的“流浪者管理法案”,他们认为这项法案严重侵犯了流浪者的生存权。Doyon 出席了“和平阵营”的会议,并在网上发起了抗议活动。他留着蓬乱的红色山羊胡,戴一顶米黄色软呢帽,像军人那样不知疲倦。因此维权者们送给了他“罪恶制裁克里斯”的称呼。

+

与此同时,“和平阵营”维权者们每天晚上开始在公共场所休息,以此抗议圣克鲁斯政府此前颁布的“流浪者管理法案”,他们认为这项法案严重侵犯了流浪者的生存权。Doyon 出席了“和平阵营”的会议,并在网上发起了抗议活动。他留着蓬乱的红色山羊胡,戴一顶米黄色软呢帽,穿着类似军服的服装。因此维权者们送给了他“罪恶制裁克里斯”的称呼。

-

“和平阵营”的成员之一 Kelley Landaker 曾几次和 Doyong 讨论入侵事宜。Doyon 有时会吹嘘自己的技术是多么的厉害,但作为一名资深程序员的 Landaker 却不为所动。“他说得很棒,但却不是行动派的,”Landaker 告诉我。不过在那种场合下,的确更需要一位富有激情的领导者,而不是埋头苦干的技术员。“他非常热情并且坦率,”另一位成员 Robert Norse 如是对我说。“他创造出了大量的能够吸引媒体眼球的话题。我从事这行已经二十年了,在这一点上他比我见过的任何人都要厉害。”

+

“和平阵营”的成员之一 Kelley Landaker 曾几次和 Doyon 讨论入侵事宜。Doyon 有时会吹嘘自己的技术是多么的厉害,但作为一名资深程序员的 Landaker 却不为所动。“他说得很棒,但却不是行动派,”Landaker 告诉我。不过在那种场合下,的确更需要一位富有激情的领导者,而不是埋头苦干的技术员。“他非常热情并且坦率,”另一位成员 Robert Norse 如是对我说。“他创造出了大量的能够吸引媒体眼球的话题。我从事这行已经二十年了,在这一点上他比我见过的任何人都要厉害。”

-

Doyon 在 PLF 的上司,Commander Adama 仍然住在剑桥,并且通过电子邮件和 Doyon 保持着联络,他下令让 Doyon 潜入“匿名者”组织。以此获知其运作方式,并伺机为 PLF 招募新成员。因为癫痫基金会网站入侵事件的那段不愉快回忆,Doyon 拒绝了 Adama。Adama 给 Doyon 解释说,在“匿名者”组织里不怀好意的黑客只占极少数,与此相反,这个组织经常会有一些的轰动世界举动。Doyon 对这点表示怀疑。“4chan 怎么可能会轰动世界?”他质问道。但出于对 PLF 的忠诚,他还是答应了 Adama 的请求。

+

Doyon 在 PLF 的上司,Commander Adama 仍然住在剑桥,并且通过电子邮件和 Doyon 保持着联络,他下令让 Doyon 监视“匿名者”组织,以此获知其运作方式,并伺机为 PLF 招募新成员。因为癫痫基金会网站入侵事件的那段不愉快回忆,Doyon 拒绝了 Adama。Adama 给 Doyon 解释说,在“匿名者”组织里不怀好意的黑客只占极少数,与此相反,这个组织经常会有一些轰动世界的举动。Doyon 对这点表示怀疑。“4chan 怎么可能会有轰动世界的大举动?”他质问道。但出于对 PLF 的忠诚,他还是答应了 Adama 的请求。

-

Doyon 经常带着一台宏基笔记本电脑出入于圣克鲁斯的一家名为 Coffee Roasting Company 的咖啡厅。“匿名者”组织的 IRC 聊天室主频道无需密码就能进入。Doyon 使用 PLF 的昵称进行登录并加入了聊天室。一段时间后,他发现了组织内大量的专用匿名者行动聊天频道,这些频道的规模更小,并相互重复。要想参与行动,你必须知道行动的专用聊天频道名称,并且聊天频道随时会因为陌生的闯入者而进行变更。这套交流系统并不具备较高的安全系数,但它的确很凑效。“这些专用行动聊天频道确保了行动机密的高度集中,”麦吉尔大学的人类学家 Gabriella Coleman 告诉我。

+

Doyon 经常带着一台宏基笔记本电脑出入于圣克鲁斯的一家名为 Coffee Roasting Company 的咖啡厅。“匿名者”组织的 IRC 聊天室主频道无需密码就能进入。Doyon 使用 PLF 的昵称进行登录并加入了聊天室。一段时间后,他发现了组织内大量的专用匿名者行动聊天频道,这些频道的规模更小,专门用于参与行动的匿名者之间的对话。要想参与行动,你必须知道行动的专用聊天频道名称,并且聊天频道随时会因为陌生的闯入者而进行变更。这套交流系统并不具备较高的安全系数,但它的确很奏效。“这些专用行动聊天频道确保了行动机密的高度集中,”麦吉尔大学的人类学家 Gabriella Coleman 告诉我。

-

有些匿名者提议了一项行动,名为“反击行动”。如同新闻记者 Parmy Olson 于 2012 年在书中写道的,“我们是匿名者,”这项行动成为了又一次支援文件共享网站,如 Napster 的后继者海盗湾(Pirate Bay),的行动的前奏,但随后其目标却扩展到了政治领域。2010 年末,在美国国务院的要求下,包括万事达、Visa、PayPal 在内的几家公司终止了对维基解密,一家公布了成百上千份外交文件的民间组织,的捐助。在一段网络视频中,“匿名者”组织扬言要进行报复,发誓会对那些阻碍维基解密发展的公司进行惩罚。Doyon 被这种抗议企业的精神所吸引,决定参加这次行动。

+

有些匿名者提议了一项行动,名为“反击行动”。如同新闻记者 Parmy Olson 于 2012 年在其著作《我们是匿名者》中写道的,这项行动最初是为了声援文件共享网站(如 Napster 的后继者海盗湾)而发起的,但随后其目标却扩展到了政治领域。2010 年末,在美国国务院的要求下,包括万事达、Visa、PayPal 在内的几家公司终止了对维基解密的捐助,维基解密是一家公布了成百上千份外交文件的自发性组织。在一段在线视频中,“匿名者”组织扬言要进行报复,发誓会对那些阻碍维基解密发展的公司进行惩罚。Doyon 被这种抗议企业的精神所吸引,决定参加这次行动。

潘多拉的魔盒
-

在十二月初的“反击行动”中,“匿名者”组织指导那些新成员,或者说新兵,关于“如何他【哔~】加入组织”,教程中提到“首先配置你【哔~】的网络,这他【哔~】的很重要。”同时他们被要求下载“低轨道离子炮”,一款易于使用的开源软件。Doyon 下载了软件并在聊天室内等待着下一步指示。当开始的指令发出后,数千名匿名者将同时发动进攻。Doyon 收到了含有目标网址的指令——目标是,www.visa.com——同时,在软件的右上角有个按钮,上面写着“IMMA CHARGIN MAH LAZER.”(“反击行动”同时也发动了大量的复杂精密的入侵进攻。)几天后,“反击行动”攻陷了万事达、Visa、PayPal 公司的主页。在法院的控告单上,PayPal 称这次攻击给公司造成了 550 万美元的损失。

+

在十二月初的“反击行动”中,“匿名者”组织指导那些新成员(或者说新兵)去阅读标题为“如何加入那个【哔~】的 Hive”的教程,参与者被要求“首先配置他们【哔~】的网络,这【哔~】的很重要。”同时他们被要求下载“低轨道离子炮”,一款易于使用的开源软件。Doyon 下载了软件并在聊天室内等待着下一步指示。当开始的指令发出后,数千名匿名者将同时发动进攻。Doyon 输入了目标网址——www.visa.com——同时,在软件的右上角有个按钮,上面写着“IMMA CHARGIN MAH LAZER”。(“反击行动”同时也发动了大量的复杂精密的入侵进攻。)几天后,“反击行动”攻陷了万事达、Visa、PayPal 公司的主页。在法院的控告单上,PayPal 称这次攻击给公司造成了 550 万美元的损失。

-

但对 Doyon 来说,这是切实的激进主义体现。在剑桥反对种族隔离的行动中,他不能立即看到结果;而现在,只需指尖轻轻一点,就可以在攻陷大公司网站的行动中做出自己的贡献。隔天,赫芬顿邮报上出现了“万事达沦陷”的醒目标题。一位得意洋洋的匿名者发推特道:“有些事情维基解密是无能为力的。但这些事情却可以由‘反击行动’来完成。”

+

但对 Doyon 来说,这是切实的激进主义体现。在剑桥反对种族隔离的行动中,他无法立即看到成效;而现在,只需指尖轻轻一点,就可以在攻陷大公司网站的行动中做出自己的贡献。隔天,赫芬顿邮报上出现了“万事达沦陷”的醒目标题。一位得意洋洋的匿名者发推特道:“有些事情维基解密是无能为力的。但这些事情却可以由‘反击行动’来完成。”

4

-

2010 年的秋天,“和平阵营”的抗议活动终止,政府只做出了轻微的让步,“流浪者管理法案”仍然有效。Doyon 希望通过借助“匿名者”组织的方略扭转局势。他回忆当时自己的想法,“也许我可以发动‘匿名者’组织来教训这种看似不堪一击的市政府网站,这些人绝对会【哔~】地赞同我的提议。最终我们将使得市政府永久性的废除‘流浪者管理法案’。”

+

2010 年的秋天,“和平阵营”的抗议活动终止,政府只做出了略微让步,“流浪者管理法案”仍然有效。Doyon 希望借助“匿名者”组织的方略扭转局势。他回忆当时自己的想法,“也许我可以发动‘匿名者’组织来教训这种看似不堪一击的市政府网站,它们绝对会【哔~】地沦陷。最终我们将使市政府永久性地废除‘流浪者管理法案’。”

-

Joshua Covelli 是一位 25 岁的匿名者,他的昵称是“Absolem”,他非常钦佩 Doyon 的果敢。“现在我们的组织完全是他【哔~】各种混乱的一盘散沙,”Covelli 告诉我道。在“Commander X”加入之后,“组织似乎开始变得有模有样了。”Covelli 的工作是俄亥俄州费尔伯恩的一所大学接待员,他从不了解任何有关圣克鲁斯的政治。但是当 Doyon 提及帮助“和平阵营”抗击活动的计划后,Covelli 立即回复了一封表示赞同的电子邮件:“我期待这样的行动很久了。”

+

Joshua Covelli 是一位 25 岁的匿名者,他的昵称是“Absolem”,他非常钦佩 Doyon 的果敢。“过去我们的组织完全是各种混乱的一盘散沙,”Covelli 告诉我。在“Commander X”加入之后,“组织似乎开始变得有模有样了。”Covelli 是俄亥俄州费尔伯恩一所大学的接待员,他从不了解任何有关圣克鲁斯的政治。但是当 Doyon 提及帮助“和平阵营”抗议活动的计划后,Covelli 立即回复了一封表示赞同的电子邮件:“我期待参加这样的行动已经很久了。”

Doyon 使用 PLF 的昵称邀请 Covelli 在 IRC 聊天室进行了一次秘密谈话:

-
Absolem:抱歉,有个比较冒犯的问题...请问 PLF 也是组织的一员吗?
+
Absolem:抱歉,有个比较冒犯的问题...请问 PLF 是组织的一部分还是分开的?
-
Absolem:我会这么问,是因为我在频道里看过你的聊天记录,你像是一名训练有素的黑客,不太像是来自组织里的成员。
+
Absolem:我会这么问,是因为看你们聊天,觉得你们都是非常有组织的。
PLF:不不不,你的问题一点也不冒犯。很高兴遇到你。PLF 是一个来自波士顿的黑客组织,已经成立 22 年了。我在 1981 年就开始了我的黑客生涯,但那时我并没有使用计算机,而是使用的 PBX(Private Branch Exchange,电话交换机)。
PLF:我们组织内所有成员的年龄都超过了 40 岁。我们当中有退伍士兵和学者。并且我们的成员“Commander Adama”,正在躲避一大帮警察还有间谍的追捕。
-
Absolem:听起来很棒!我对这次行动很感兴趣,不知道我是否可以提供一些帮助,我们的组织实在是太混乱了。我的电脑技术还不错,但我在入侵技术上还完全是一个新手。我有一些小工具,但不知道怎么去使用它们。
+
Absolem:听起来很棒!我对这次行动很感兴趣,不过“匿名者”组织看起来太混乱无序,不知道我是否可以提供一些帮助。我的电脑技术还不错,但我在入侵技术上还完全是一个新手。我有一些小工具,但不知道怎么去使用它们。

庄重的入会仪式后,Doyon 正式接纳 Covelli 加入 PLF:

-
PLF:把所有可能对你不利的【哔~】敏感文件加密。
+
PLF:把所有可能使你受牵连的敏感文件加密。
PLF:还有,想要联系任何一位 PLF 成员的话,给我发消息就行。从现在起,请叫我... Commander X。
-

2012 年,美联社称“匿名者”组织为“一伙训练有素的黑客”;Quinn Norton 在《连线》杂志上发文称“‘匿名者’组织可以入侵任何坚不可摧的网站”,并在文末赞扬他们为“一群卓越的民间黑客”。事实上,有些匿名者的确是很有天赋的程序员,但绝大部分成员根本不懂任何技术。人类学家 Coleman 告诉我只有大约五分之一的匿名者是真正的黑客——其他匿名者则是“极客与抗议者”。

+

2012 年,美联社称“匿名者”组织为“一帮专家级的黑客”;Quinn Norton 在《连线》杂志上发文称“‘匿名者’组织可以入侵任何坚不可摧的网站”,并在文末赞扬他们为“一群卓越的民间黑客”。事实上,有些匿名者的确是很有天赋的程序员,但绝大部分成员根本不懂任何技术。人类学家 Coleman 告诉我只有大约五分之一的匿名者是真正的黑客——其他匿名者则是“极客与抗议者”。

-

2010 年 12 月 16 日,Doyon 以 Commander X 的身份向几名记者发送了电子邮件。“明天当地时间 12:00 的时候,‘人民解放阵线’组织与‘匿名者’组织将大举进攻圣克鲁斯政府网站”,他在邮件中写道,“12:30 之后我们将恢复网站的正常运行。”

+

2010 年 12 月 16 日,Doyon 以 Commander X 的身份向几名记者发送了电子邮件。“明天当地时间 12:00 的时候,‘人民解放阵线’组织与‘匿名者’组织将从互联网中删除圣克鲁斯政府网站”,他在邮件中写道,“12:30 之后我们将恢复网站的正常运行。”

圣克鲁斯数据中心的工作人员收到了警告,匆忙地准备应对攻击。他们在服务器上运行起安全扫描软件,并向当地的互联网供应商 AT & T 求助,后者建议他们向 FBI 报警。

@@ -132,7 +132,7 @@
“Zach 很聪明... 并且... 是一个天才... 但.. 你们... 不在一个班。”
-

Doyon 引用了一句电影台词。“拼命地跑,”他说。“我会躲起来,尽可能保持我的行动自由,用尽全力和这帮杂种们作斗争。”Frey 给了他两张 20 美元的钞票并祝他好运。

+

Doyon 引用了一句电影台词。“拼命地跑,”他说。“我会躲起来,尽可能保持我的行动自由,用尽全力和这帮混蛋们作斗争。”Frey 给了他两张 20 美元的钞票并祝他好运。

5

@@ -142,35 +142,35 @@

“突尼斯,” Brown 答道。

-

“我知道,那是中东地区的一个国家,” Doyon 继续问,“然后呢?”

+

“我知道,那是中东地区的一个国家,” Doyon 继续问,“具体任务是什么呢?”

“我们准备打倒那里的独裁者,” Brown 再次答道。

“啊?!那里有一位独裁者吗?” Doyon 有点惊讶。

-

几天后,“突尼斯行动”正式展开。Doyon 作为参与者向突尼斯政府域名下的电子邮箱发送了大量的垃圾邮件,以此阻塞其服务器。“我会提前写好关于那次行动邮件,接着一次又一次地把它们发送出去,” Doyon 说,“有时候实在没有时间,我就只简短的写上一句问候对方母亲的的话,然后发送出去。”短短一天时间里,匿名者们就攻陷了包括突尼斯证券交易所、工业部、总统办公室、总理办公室在内的多个网站。他们把总统办公室网站的首页替换成了一艘海盗船的图片,并配以文字“‘报复’是个贱人,不是吗?”

+

几天后,“突尼斯行动”正式展开。Doyon 作为参与者向突尼斯政府域名下的电子邮箱发送了大量的垃圾邮件,以此阻塞其服务器。“我会提前写好关于那次行动的邮件,接着一次又一次地把它们发送出去,” Doyon 说,“有时候实在没有时间,我就只简短地写上一句‘问候对方母亲’的话,然后发送出去。”短短一天时间里,匿名者们就攻陷了包括突尼斯证券交易所、工业部、总统办公室、总理办公室在内的多个网站。他们把总统办公室网站的首页替换成了一艘海盗船的图片,并配以文字“恶有恶报,不是吗?”

Doyon 不时会谈起他的网上“战斗”经历,似乎他刚从弹坑里爬出来一样。“伙计,自从干了这行我就变黑了,”他向我诉苦道。“你看我的脸,全是抽烟的时候熏的——而且可能已经粘在我的脸上了。我仔细地照过镜子,毫不夸张地说我简直就是一头棕熊。”很多个夜晚,Doyon 都是在 Golden Gate 公园里露营过夜的。“我就那样干了四天,我看了看镜子里的‘我’,感觉还可以——但其实我觉得‘我’也许应该去吃点东西、洗个澡了。”

-

“匿名者”组织接着又在 YouTube 上声明了将要进行的一系列行动:“利比亚行动”、“巴林行动”、“摩洛哥行动”。作为解放广场事件的抗议者,Doyon 参与了“埃及行动”。在 Facebook 针对这次行动的宣传专页中,有一个为当地示威者准备的“行动套装”链接。“行动套装”通过文件共享网站 Megaupload 进行分发,其中含有一份加密软件以及应对瓦斯袭击的保护措施。并且在不久后,埃及政府关闭了埃及的所有互联网及子网络的时候,继续向当地抗议者们提供连接网络的方法。

+

“匿名者”组织接着又在 YouTube 上声明了将要进行的一系列行动:“利比亚行动”、“巴林行动”、“摩洛哥行动”。作为解放广场事件的抗议者,Doyon 参与了“埃及行动”。在 Facebook 针对这次行动的宣传专页中,有一个为当地示威者准备的“行动套装”链接。“行动套装”通过文件共享网站 Megaupload 进行分发,其中含有一份加密软件以及应对瓦斯袭击的保护措施。在埃及政府关闭了全国所有互联网及子网络后不久,“匿名者”组织继续向当地抗议者们提供连接网络的方法。

-

2011 年夏季,Doyon 接替 Adama 成为 PLF 的最高指挥官。Doyon 招募了六个新成员,并力图发展 PLF 成为“匿名者”组织的中坚力量。Covelli 成为了他的其中一技位术顾问。另一名黑客 Crypt0nymous 负责在 YouTube 上发布视频;其余的人负责研究以及组装电子设备。与松散的“匿名者”组织不同,PLF 内部有一套极其严格的管理体系。“Commander X 事必躬亲,”Covelli 说。“这是他的行事风格,也许不能称之为一种风格。”一位创立了 AnonInsiders 博客的黑客通过加密聊天告诉我,他认为 Doyon 总是一意孤行——这在“匿名者”组织中是很罕见的现象。“当我们策划发起一项行动时,他并不在乎其他人是否同意,”这位黑客补充道,“他会一个人列出行动方案,确定攻击目标,登录 IRC 聊天室,接着告诉所有人在哪里‘碰头’,然后发起 DDoS 攻击。”

+

2011 年夏季,Doyon 接替 Adama 成为 PLF 的最高指挥官。Doyon 招募了六个新成员,并力图发展 PLF 成为“匿名者”组织的中坚力量。Covelli 成为了他的其中一位技术顾问。另一名黑客 Crypt0nymous 负责在 YouTube 上发布视频;其余的人负责研究以及组装电子设备。与松散的“匿名者”组织不同,PLF 内部有一套极其严格的管理体系。“Commander X 事必躬亲,”Covelli 说。“这是他的行事风格,要么不做,要么做好。”一位创立了 AnonInsiders 博客的黑客通过加密聊天告诉我,他认为 Doyon 总是一意孤行——这在“匿名者”组织中是很罕见的现象。“当我们策划发起一项行动时,他并不在乎其他人是否同意,”这位黑客补充道,“他会一个人列出行动方案,确定攻击目标,登录 IRC 聊天室,接着告诉所有人在哪里‘碰头’,然后发起 DDoS 攻击。”

-

一些匿名者把 PLF 视为可有可无的部分,认为 Doyon 的所作所为完全是个天大的笑柄。“他是因为吹牛出名的,”另一名昵称为 Tflow 的匿名者 Mustafa Al-Bassam 告诉我。不过,即使是那些极度反感 Doyon 的狂妄自大的人,也不得不承认他在“匿名者”组织发展过程中的重要性。“他所倡导的强硬路线有时很凑效,有时则完全不起作用,” Gregg Housh 说,并且补充道自己和其他优秀的匿名者都曾遇到过相同的问题。

+

一些匿名者把 PLF 视为“面子项目”,认为 Doyon 的所作所为完全是个笑柄。“他是因为吹牛出名的,”另一名昵称为 Tflow 的匿名者 Mustafa Al-Bassam 告诉我。不过,即使是那些极度反感 Doyon 的狂妄自大的人,也不得不承认他在“匿名者”组织发展过程中的重要性。“他所倡导的强硬路线有时很奏效,有时则碍事,” Gregg Housh 说,并且补充道自己和其他优秀的匿名者都曾遇到过相同的问题。

-

“匿名者”组织对外坚持声称自己是不分层次的平等组织。在由 Brian Knappenberger 制作的一部纪录片,《我们是一个团体》中,一名成员使用“一群鸟”来比喻组织,它们轮流领飞带动整个组织不断前行。Gabriella Coleman 告诉我,这个比喻不太切合实际,“匿名者”组织内实际上早就出现了一个非正式的领导阶层。“领导者非常重要,”她说。“有四五个人可以看做是我们的领头羊。”她把 Doyon 也算在了其中。但是匿名者们仍然倾向于反抗这种具有体系的组织结构。在一本即将出版的关于“匿名者”组织的书,《黑客、骗子、告密者、间谍》中,Coleman 这么写道,在匿名者中,“成员个体以及那些特立独行的人依然在一些重大事件上保持着服从的态度,优先考虑集体——特别是那些能引发强烈争端的事件。”

+

“匿名者”组织对外坚持声称自己是不分层次的平等组织。在由 Brian Knappenberger 制作的一部纪录片,《我们是军团》中,一名成员使用“一群鸟”来比喻组织,它们轮流领飞带动整个组织不断前行。Gabriella Coleman 告诉我,这个比喻不太切合实际,“匿名者”组织内实际上早就出现了一个非正式的领导阶层。“领导者非常重要,”她说。“有四五个人可以看做是我们的领头羊。”她把 Doyon 也算在了其中。但是匿名者们仍然倾向于反抗这种体制结构。在一本即将出版的关于“匿名者”组织的书,《黑客、骗子、告密者、间谍》中,Coleman 这么写道,在匿名者中,“成员个体以及那些特立独行的人依然在一些重大事件上保持着服从的态度,优先考虑集体——特别是那些能引发强烈争端的事件。”

匿名者们谑称那些特立独行的成员为“自尊心超强的疯子”和“想让自己出名的疯子”。(不过许多匿名者已经不会再随便给他人取那种具有冒犯性的称号了。)“但还是有极少数成员会令人惊讶地打破这种传统看法、违反规则,”Coleman 说。“这么做的人,像 Commander X 这样的,都会在组织里受到排斥。”去年,在一家网络论坛上,有人写道,“当他开始把自己比作‘蝙蝠侠’的时候我就不想理他了。”

Peter Fein,是一位以 n0pants 为昵称而出名的网络激进分子,也是反对 Doyon 的浮夸行为的众多匿名者之一。Fein 浏览了 PLF 的网站,其封面上有一个徽章,还有关于组织的宣言——“为了解放众多人类的灵魂而不断战斗”。Fein 沮丧地发现 Doyon 早就使用真名为这家网站注册过了,这使他以及其他想要找事的匿名者们无机可乘。“如果有人要对我的网站进行 DDoS 攻击,那完全可以,” Fein 回想起通过私密聊天告诉 Doyon 时的情景,“但如果你要这么做了的话,我会揍扁你的屁股。”

-

2011 年 2 月 5 日,《金融时报》报道了在一家名为 HBGary Federal 的网络安全公司里,首席执行官 HBGary Federal 已经得到了“匿名者”组织骨干成员名单的消息。Barr 的调查结果表明,三位最高领导人其中之一就是‘ Commander X’,这位潜伏在加利福尼亚州的黑客有能力“策划一些大型网络攻击事件”。Barr 联系了 FBI 并提交了自己的调查结果。

+

2011 年 2 月 5 日,《金融时报》报道了在一家名为 HBGary Federal 的网络安全公司,其首席执行官 Aaron Barr 已经得到了“匿名者”组织骨干成员名单的消息。Barr 的调查结果表明,三位最高领导人其中之一就是“Commander X”,是一位潜伏在加利福尼亚州的黑客,有能力“策划一些大型网络攻击事件”。Barr 联系了 FBI 并提交了自己的调查结果。

-

和 Fein 一样,Barr 也发现了 PLF 网站的注册法人名为 Christopher Doyon,地址是 Haight 大街。基于 Facebook 和 IRC 聊天室的调查,Barr 断定‘ Commander X’的真实身份是一名家庭住址在 Haight 大街附近的网络激进分子 Benjamin Spock de Vries。Barr 通过 Facebook 和 de Vries 取得了联系。“请告诉组织里的普通阶层,我并不是来抓你们的,” Barr 留言道,“只是想让‘领导阶层’知晓我的意图。”

+

和 Fein 一样,Barr 也发现了 PLF 网站的注册法人名为 Christopher Doyon,地址是 Haight 大街。基于 Facebook 和 IRC 聊天室的调查,Barr 断定“Commander X”的真实身份是一名家庭住址在 Haight 大街附近的网络激进分子 Benjamin Spock de Vries。Barr 通过 Facebook 和 de Vries 取得了联系。“请转告组织里的其他人,我并不是来抓你们的,” Barr 留言道,“只是想让‘领导阶层’知晓我的意图。”

“‘领导阶层’? 2333,笑死我了,” de Vries 回复道。

-

《金融时报》发布报道的第二天,“匿名者”组织就进行了反击。HBGary Federal 的网站被进行了恶意篡改。Barr 的私人 Twitter 账户被盗取,他的上千封电子邮件被泄漏到了网上,同时匿名者们还公布了他的住址以及其他私人信息——这是一系列被称作“doxing”的惩罚。不到一个月后,Barr 就从 HBGary Federal 辞职了。

+

《金融时报》发布报道的第二天,“匿名者”组织就进行了反击。HBGary Federal 的网站被进行了恶意篡改。Barr 的私人 Twitter 账户被盗取,他的上千封电子邮件被泄漏到了网上,同时匿名者们还公布了他的住址以及其他私人信息——这种惩罚手段被称作“doxing”(人肉搜索)。不到一个月后,Barr 就从 HBGary Federal 辞职了。

6

@@ -180,17 +180,17 @@
“这是我在 TED 夏令营里学到的东西。”
-

他时刻关注着“匿名者”组织的内部消息。那年春季,在 Barr 调查报告中提到的六位匿名者精锐成员,组建了“LulzSec 安全”组织(Lulz Security),简称 LulzSec。这个组织正如其名,这些成员认为“匿名者”组织已经变得太过严肃;他们的目标是重新引发起那些“能挑起强烈争端”的事件。当“匿名者”组织还在继续支持“阿拉伯之春”的抗议者的时候,LulzSec 入侵了公共电视网(Public Broadcasting Service,PBS)网站,并发布了一则虚假声明称已故说唱歌手 Tupac Shakur 仍然生活在新西兰。

+

他时刻关注着“匿名者”组织的内部消息。那年春季,在 Barr 调查报告中提到的六位匿名者精锐成员,组建了“LulzSec 安全”组织(Lulz Security),简称 LulzSec。这个组织正如其名,这些成员认为“匿名者”组织已经变得太过严肃;他们的目标是重新引发起那些“能挑起强烈争端”的事件。当“匿名者”组织还在继续支持“阿拉伯之春”的抗议者时,LulzSec 入侵了公共电视网(Public Broadcasting Service,PBS)网站,并发布了一则虚假声明称已故说唱歌手 Tupac Shakur 仍然生活在新西兰。

-

匿名者之间会通过 Pastebin.com 网站来共享文字。在这个网站上,LulzSec 发表了一则声明,称“很不幸,我们注意到北约和我们的好总统巴拉克,奥萨马·本·美洲驼(拉登同学)的好朋友,来自 24 世纪的奥巴马,最近明显提高了对我们这些黑客的关注程度。他们把黑客入侵行为视作一种战争的表现。”目标越高远,挑起的纷争就越大。6 月 15 日,LulzSec 表示对 CIA 网站受到的袭击行为负责,他们发表了一条推特,上面写道“目标击毙(Tango down,亦即target down)—— cia.gov ——这是起挑衅行为。”

+

匿名者之间会通过 Pastebin.com 网站来共享文本。在这个网站上,LulzSec 发表了一则声明,称“很不幸,我们注意到北约和我们的好朋友巴拉克·奥萨马——来自 24 世纪的奥巴马——已经提高了对付我们这些黑客的筹码,他们把黑客入侵行为视作一种战争的表现。”目标越高远,挑起的纷争就越大。6 月 15 日,LulzSec 表示对 CIA 网站受到的袭击行为负责,他们发表了一条推特,上面写道“目标击毙(Tango down,亦即 target down)—— cia.gov ——这是起挑衅行为。”

-

2011 年 6 月 20 日,LulzSec 的一名十九岁的成员 Ryan Cleary 因为对 CIA 的网站进行了 DDoS 攻击而被捕。7 月,FBI 探员逮捕了七个月前对 PayPal 进行 DDoS 攻击的其他十四名黑客。这十四名黑客,每人都面临着 15 年的牢狱之灾以及 500 万美元的罚款。他们因为图谋不轨以及故意破坏互联网,而被控违反了计算机欺诈与滥用处理条例。(该法案允许检察官进行酌情处置,并在去年网络激进分子 Aaron Swartz 因为被判处 35 年牢狱之灾而自杀身亡之后,受到了广泛的质疑和批评。)

+

2011 年 6 月 20 日,LulzSec 的一名十九岁的成员 Ryan Cleary 因为对 CIA 的网站进行了 DDoS 攻击而被捕。7 月,FBI 探员逮捕了七个月前对 PayPal 进行 DDoS 攻击的其他十四名黑客。这十四名黑客,每人都面临着 15 年的牢狱之灾以及 50 万美元的罚款。他们因为图谋不轨以及故意破坏互联网而被控违反了计算机欺诈与滥用法案。(Computer Fraud and Abuse Act,该法案允许检察官拥有宽泛的起诉裁量权,并在去年网络激进分子 Aaron Swartz 因面临 35 年牢狱之灾而自杀身亡之后,受到了广泛的质疑和批评。)

LulzSec 的成员之一 Jake (Topiary) Davis 因为付不起法律诉讼费,给组织的成员们写了一封请求帮助的信件。Doyon 进入了 IRC 聊天室把 Davis 需要帮助的消息进行了扩散:

CommanderX:那么请大家阅读信件并给予 Topiary 帮助...
-
Toad:你真是和【哔~】一样消息灵通。
+
Toad:你真是为了抓人眼球什么都做啊!
Toad:这么说你得到 Topiary 的消息了?
@@ -198,15 +198,15 @@
Katanon:唉...
-

Doyon 越来越大胆。他在佛罗里达州当局逮捕了支持流浪者的激进分子后,就 DDoS 了奥兰多商务部商会网站。他使用个人笔记本电脑通过公用无线网络实施了攻击,并且没有花费太多精力来隐藏自己的网络行踪。“这种做法很勇敢,但也很愚蠢,”一位自称 Kalli 的 PLF 的资深成员告诉我。“他看起来并不在乎是否会被抓。他完全是一名自杀式黑客。”

+

Doyon 越来越大胆。在佛罗里达州当局逮捕了支持流浪者的激进分子后,他就攻击了奥兰多商务部商会网站。他使用个人笔记本电脑通过公用无线网络实施了攻击,并且没有花费太多精力来隐藏自己的网络行踪。“这种做法很勇敢,但也很愚蠢,”一位自称 Kalli 的 PLF 资深成员告诉我。“他看起来并不在乎是否会被抓。他完全是一名自杀式黑客。”

-

两个月后,Doyon 参与了针对旧金山湾区快速交通系统(Bay Area Rapid Transit)的 DDoS 攻击,以此抗议一名 BART 的警官杀害一名叫做 Charles Hill 的流浪者的事件。随后 Doyon 现身“CBS 晚间新闻”为这次行动辩护,当然,他处理了自己的声音,把自己的脸用香蕉进行替代。他把 DDoS 攻击比作为公民的抗议行为。“与占用 Woolworth 午餐柜台的座位相比,这真的没什么不同,真的,”他说道。CBS 的主播 Bob Schieffer 笑称:“就我所见,它并不完全是一项民权运动。”

+

两个月后,Doyon 参与了针对旧金山湾区快速交通系统(Bay Area Rapid Transit)的 DDoS 攻击,以此抗议一名 BART 的警官杀害一名叫做 Charles Hill 的流浪者的事件。随后 Doyon 现身“CBS 晚间新闻”为这次行动辩护,当然,他处理了自己的声音,用印花大手帕盖住了脸。他把 DDoS 攻击比作为公民的抗议行为。“与占用 Woolworth 午餐柜台的座位相比,这真的没什么不同,真的,”他说道。CBS 的主播 Bob Schieffer 笑称:“就我所见,它并不完全是一项民权运动。”

-

2011 年 9 月 22 日,在加利福尼亚州的一家名为 Mountain View 的咖啡店里,Doyon 被捕,同时面临着“使用互联网非法破坏受保护的计算机”罪名指控。他被拘留了一个星期的时间,接着在签署协议之后获得假释。两天后,他不顾律师的反对,宣布将在圣克鲁斯郡法院召开新闻发布会。他梳起了马尾辫,戴着一副墨镜、一顶黑色海盗帽,同时还在脖子上围了一条五彩手帕。

+

2011 年 9 月 22 日,在加利福尼亚州山景城(Mountain View)的一家咖啡店里,Doyon 被捕,同时面临着“使用互联网非法破坏受保护的计算机”的罪名指控。他被拘留了一个星期的时间,接着在签署协议之后获得假释。两天后,他不顾律师的反对,宣布将在圣克鲁斯郡法院召开新闻发布会。他梳起了马尾辫,戴着一副墨镜、一顶黑色海盗帽,同时还在脖子上围了一条五彩手帕。

-

Doyon 通过非常夸大的方式披露了自己的身份。“我就是 Commander X,”他告诉蜂拥的记者。他举起了拳头。“作为‘匿名者’组织的一员,作为一名核心成员,我感到非常的骄傲。”他在接受一名记者的采访时说,“想要成为一名顶尖黑客的话,你只需要准备一台电脑以及一副墨镜。任何一台电脑都行。”

+

Doyon 通过非常夸大的方式揭露了自己的身份。“我就是 Commander X,”他告诉蜂拥的记者。他举起了拳头。“作为‘匿名者’组织的一员,作为一名核心成员,我感到非常的骄傲。”他在接受一名记者的采访时说,“想要成为一名顶尖黑客的话,你只需要准备一台电脑以及一副墨镜。任何一台电脑都行。”

-

Kalli 非常担心 Doyon 会不小心泄露组织机密或者其他匿名者的信息。“这是所有环节中最薄弱的地方,如果这里出问题了,那么组织就完了,”他告诉我。曾在“和平阵营行动”中给予 Doyon 大力帮助的匿名者 Josh Covelli 告诉我,当他在网上看见 Doyon 的新闻发布会视频的时候,他感觉瞬间“下巴掉地下了”。“他的所作所为变得越来越不可捉摸,” Covelli 评价道。

+

Kalli 非常担心 Doyon 会不小心泄露组织机密或者其他匿名者的信息。“这是所有环节中最薄弱的地方,如果这里出问题了,那么组织就完了,”他告诉我。曾在“和平阵营行动”中给予 Doyon 大力帮助的匿名者 Josh Covelli 告诉我,当他在网上看见 Doyon 的新闻发布会视频的时候,他感觉瞬间“下巴掉地上了”。“他的所作所为变得越来越不可捉摸,” Covelli 评价道。

三个月后,Doyon 的指定律师 Jay Leiderman 出席了圣荷西联邦法庭的辩护。Leiderman 已经好几个星期没有得到 Doyon 的消息了。“我需要得知被告无法出席的具体原因,”法官说。Leiderman 无法回答。Doyon 再次缺席了两星期后的另一场听证会。检控方表示:“很明显,看来被告已经逃跑了。”

@@ -214,7 +214,7 @@

“Xport 行动”是“匿名者”组织进行的所有同类行动中的第一个行动。这次行动的目标是协助如今已经背负两项罪名的通缉犯 Doyon 潜逃出国。负责调度的人是 Kalli 以及另一位曾在八十年代剑桥的迷幻药派对上和 Doyon 见过面的匿名者老兵。这位老兵是一位已经退休的软件主管,在组织内部威望很高。

-

Doyon 的终点站是这位软件主管的家,位于加拿大的偏远乡村。2011 年 12 月,他搭便车前往旧金山,并辗转来到了市区组织大本营。他找到了他的指定联系人,后者带领他到达了奥克兰的一家披萨店。凌晨 2 点,Doyon 通过披萨店的无线网络,接收了一条加密聊天消息。

+

Doyon 的目的地是这位软件主管的家,位于加拿大的偏远乡村。2011 年 12 月,他搭便车前往旧金山,并辗转来到了市区组织大本营。他找到了他的指定联系人,后者带领他到达了奥克兰的一家披萨店。凌晨 2 点,Doyon 通过披萨店的无线网络,接收了一条加密聊天消息。

“你现在靠近窗户吗?”那条消息问道。

@@ -222,13 +222,13 @@

“往大街对面看。看见一个绿色的邮箱了吗?十五分钟后,你去站到那个邮箱旁边,把你的背包取下来,然后把你的面具放在上面。”

-

一连几个星期的时间,Doyon 穿梭于海湾地区的安全屋之间,按照加密聊天那头的指示不断行动。最后,他搭上了前往西雅图的长途公交车,软件主管的一个朋友在那里接待了他。这个朋友是一名非常富有的退休人员,他花费了通过谷歌地球来帮助 Doyon 规划前往加拿大的路线。他们共同前往了一家野外用品供应商店,这位朋友为 Doyon 购置了价值 1500 美元的商品,包括登山鞋以及一个全新的背包。接着他又开车载着 Doyon 北上,两小时后到达距离国界只有几百英里的偏僻地区。随后 Doyon 见到了 Amber Lyon。

+

一连几个星期的时间,Doyon 穿梭于海湾地区的安全屋之间,按照加密聊天那头的指示不断行动。最后,他搭上了前往西雅图的长途公交车,软件主管的一个朋友在那里接待了他。这个朋友是一名非常富有的退休人员,他花费了几小时的时间通过谷歌地球来帮助 Doyon 规划前往加拿大的路线。他们共同前往了一家野外用品供应商店,这位朋友为 Doyon 购置了价值 1500 美元的商品,包括登山鞋以及一个全新的背包。接着他又开车载着 Doyon 北上,两小时后到达距离国界只有几百英里的偏僻地区。随后 Doyon 见到了 Amber Lyon。

几个月前,广播新闻记者 Lyon 曾在 CNN 的关于“匿名者”组织的节目里采访过 Doyon。Doyon 很欣赏她的报道,他们一直保持着联络。Lyon 要求加入 Doyon 的逃亡行程,为一部可能会发行的纪录片拍摄素材。软件主管认为这样太过冒险,但 Doyon 还是接受了她的请求。“我觉得他是想让自己出名,” Lyon 告诉我。四天的时间里,她用影像记录下了 Doyon 徒步北上,在林间露宿的行程。“那一切看起来不太像是仔细规划过的,” Lyon 回忆说。“他实在是无家可归了,所以他才会想要逃到国外去。”

-
“这里是我们存放各种感觉的仓库。如果你发现了某种感觉,把它带到这里然后锁起来。”
+
“这里是我们存放各种情感的仓库。如果你产生了某种情感,把它带到这里然后锁起来。”

2012 年 2 月 11 日,Pastebin 上出现了一条消息。“PLF 很高兴的宣布‘ Commander X’,也就是 Christopher Mark Doyon,已经离开了美国的司法管辖区,抵达了加拿大一个比较安全的地方,”上面写着,“PLF 呼吁美国政府,希望政府能够醒悟过来并停止无谓的骚扰与监视行为——不要仅仅逮捕‘匿名者’组织的成员,对所有的激进组织应该一视同仁。”

@@ -236,13 +236,13 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barrett Brown 的聊天中,Doyon 难掩内心的喜悦之情。 -
BarrettBrown:你现在应该足够安全了吧,其他的呢?...
+
BarrettBrown:你现在有足够安全的藏身之处等等吧?
CommanderX:是的,我现在很安全,现在加拿大既不缺钱也不缺藏身的地方。
CommanderX:Amber Lyon 想要你的一张照片。
-
CommanderX:去他【哔~】的怪人,Barrett,相信你会喜欢我告诉她应该怎样评价你的。
+
CommanderX:去你【哔~】的怪人,Barrett,相信你会喜欢我的回复。我一直爱你,永远爱你。
CommanderX::-)
@@ -258,13 +258,13 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
BarrettBrown:当然,估计我们不久后也得这样了
-

在 Doyon 出逃十天后,《华尔街日报》上刊登了关于不久后升职为美国国家安全局及网络指挥部主任的 Keith Alexander 的报道,他在白宫举行的秘密会晤以及其他场合下,表达了对“匿名者”组织的高度关注。Alexander 发出警告,两年内,该组织必将会是国家电网改造的大患。参谋长联席会议的主席 General Martin Dempsey 告诉记者,这群人是国家的敌人。“他们有能力把这些使用恶意软件造成破坏的技术扩散到其他的边缘组织去,”随后又补充道,“我们必须防范这种情况发生。”

+

在 Doyon 出逃十天后,《华尔街日报》上刊登了关于不久后升职为美国国家安全局及网络指挥部主任的 Keith Alexander 的报道,他在白宫以及其他场合举行的秘密会晤中,表达了对“匿名者”组织的高度关注。Alexander 发出警告,两年内,该组织将有能力威胁到国家电网。参谋长联席会议的主席 General Martin Dempsey 告诉记者,这群人是国家的敌人。“他们有能力把这些使用恶意软件造成破坏的技术扩散到其他的边缘组织去,”随后又补充道,“我们必须防范这种情况发生。”

3 月 8 日,国会议员们在国会大厦附近的一个敏感信息隔离设施附近举行了关于网络安全的会议。包括 Alexander、Dempsey、美国联邦调查局局长 Robert Mueller,以及美国国土安全部部长 Janet Napolitano 在内的多名美国安全方面的高级官员出席了这次会议。会议上,通过计算机向与会者模拟了东部沿海地区电力设施可能会遭受到的网络攻击时的情境。“匿名者”组织目前应该还不具备发动此种规模攻击的能力,但安全方面的官员担心他们会联合其他更加危险的组织来共同发动攻击。“在我们着手于不断增加的网络风险事故时,政府仍在就具体的处理细节进行不断协商讨论,” Napolitano 告诉我。当谈及潜在的网络安全隐患时,她补充道,“我们通常会把‘匿名者’组织的行动当做 A 级威胁来应对。”

“匿名者”也许是当今世界上最强大的无政府主义黑客组织。即使如此,它却从未表现出过任何的会对公共基础设施造成破坏的迹象或意愿。一些网络安全专家称,那些关于“匿名者”组织的谣传太过危言耸听。“在奥兰多发布战前宣言和实际发动 Stuxnet 蠕虫病毒攻击之间是有很大的差距的,”战略与国际研究中心(Center for Strategic and International Studies)的一位研究员 James Andrew Lewis 告诉我,这和 2007 年美国与以色列对伊朗原子能网站发动的黑客袭击有关。哈佛大学法学院的教授 Yochai Benkler 告诉我,“我们所看见的只是以主要防御为理由而进行的开销,否则,将很难自圆其说。”

-

Keith Alexander 最近刚从政府部门退休,他拒绝就此事发表评论,因为他并不能代表国家安全局、联邦调查局、中央情报局以及国土安全部。尽管匿名者们从未真正盯上过政府部门的计算机网络,但他们对于那些激怒他们的人有着强烈的报复心理。前国土安全部国家网络安全部门负责人 Andy Purdy 告诉我他们“害怕被报复,”无论机构还是个人,都不同意政府公然反对“匿名者”组织。“每个人都非常脆弱,”他说。

+

Keith Alexander 最近刚从政府部门退休,他拒绝就此事发表评论,因为他并不能代表国家安全局、联邦调查局、中央情报局以及国土安全部。尽管匿名者们从未真正盯上过政府部门的计算机网络,但他们对于那些激怒他们的人有着强烈的报复心理。前国土安全部国家网络安全部门负责人 Andy Purdy 告诉我他们“害怕被报复,”无论机构还是个人,都不同意政府公然反对“匿名者”组织。“每个人都容易成为被攻击对象,”他说。

9

@@ -272,7 +272,7 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr

Doyon 感到很烦躁,但他还是继续扮演着一名黑客——以此吸引关注。他在多伦多上映的纪录片上以戴着面具的匿名者形象出现。在接受《National Post》的采访时,他向记者大肆吹嘘未经证实的消息,“我们已经入侵了美国政府的所有机密数据库。现在的问题是我们该何时泄露这些机密数据,而不是我们是否会泄露。”

-

2013 年 1 月,在另一名匿名者介入俄亥俄州斯托本维尔未成年少女轮奸案,发起抗议行动之后,Doyon 重新启用了他两年前创办的网站 LocalLeaks,作为那起轮奸事件的信息汇总处理中心。如同许多其他“匿名者”组织的所作所为一样,LocalLeaks 网站非常具有影响力,但却也不承担任何责任。LocalLeaks 网站是第一家公布 12 分钟斯托本维尔高中毕业生猥亵视频的网站,这激起了众多当事人的愤怒。LocalLeaks 网站上同时披露了几份未被法庭收录的关于案件的材料,并且由此不小心透漏出了案件受害人的名字。Doyon向我承认他公开这些未经证实的信息的策略是存在争议的,但他同时回忆起自己当时的想法,“我们可以选择去除这些斯托本维尔案件的材料...也可以选择公开所有我们搜集的信息,基本上,给公众以提醒,不过,前提是你们得相信我们。”

+

2013 年 1 月,在另一名匿名者介入俄亥俄州斯托本维尔未成年少女强奸案,发起抗议行动之后,Doyon 重新启用了他两年前创办的网站 LocalLeaks,作为那起强奸事件的信息汇总处理中心。如同许多其他“匿名者”组织的所作所为一样,LocalLeaks 网站非常具有影响力,但却也不承担任何责任。LocalLeaks 网站是第一家公布 12 分钟斯托本维尔高中毕业生猥亵视频的网站,这激起了众多当事人的愤怒。LocalLeaks 网站上同时披露了几份未被法庭收录的关于案件的材料,并且由此不小心透露出了案件受害人的名字。Doyon 向我承认他公开这些未经证实的信息的策略是存在争议的,但他同时回忆起自己当时的想法,“我们可以选择销毁这些斯托本维尔案件的材料...也可以选择公开所有我们搜集的信息,基本上,给公众以提醒,不过,前提是你们得相信我们。”

2013 年 3 月,一个名为 Rustle League 的组织入侵了 Doyon 的 Twitter 账户,该组织此前经常挑衅“匿名者”组织。Rustle League 的领导者之一 Shm00p 告诉我,“我们的本意并不是伤害那些家伙,只不过,哦,那些家伙说的话你就当是在放屁好了——我会这么做只是因为我感到很好笑。” Rustle League 组织使用 Doyon 的账户发布了种族主义和反犹太主义的信息,其中含有诸如 www.jewsdid911.org 这样的链接。

@@ -290,37 +290,37 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr

我们约定了一次面谈。Doyon 坚持让我通过加密聊天把面谈的详细情况提前告诉他。我坐了几个小时的飞机,租车来到了加拿大的一个偏远小镇,并且禁用了我的电话。

-

最后,我在一个狭小安静的住宅区公寓里见到了 Doyon。他穿了一件绿色的军人夹克衫以及印有“匿名者”组织 logo 的 T 恤衫:一个脸被问号所替代的黑衣人形象。公寓里基本上没有什么家具,充满了一股烟味。他谈论起了美国政治(“我基本没怎么在众多的选举中投票——它们不过是暗箱操作的游戏罢了”),好战的伊斯兰教(“我相信,尼日利亚政府的人不过是相互勾结,以创建一个名为‘博科圣地’的基地组织的下属机构罢了”),以及他对“匿名者”组织的小小看法(“那些自称为怪人的人是真的是烂透了,意思是,邪恶的人”)。

+

最后,我在一个狭小安静的住宅区公寓里见到了 Doyon。他穿了一件绿色的军人夹克衫以及印有“匿名者”组织 logo 的 T 恤衫:一个脸被问号所替代的黑衣人形象。公寓里基本上没有什么家具,充满了一股烟味。他谈论起了美国政治(“我基本没怎么在众多的选举中投票——它们不过是暗箱操作的游戏罢了”),好战的伊斯兰教(“我相信,尼日利亚政府的人不过是相互勾结,以创建一个名为‘博科圣地’的基地组织的下属机构罢了”),以及他对“匿名者”组织的小小看法(“那些自称为怪人的人真的是烂透了,其实是邪恶的人”)。

Doyon 剃去了他的胡须,但他却显得更加憔悴了。他说那是因为他病了的原因,他几乎很少出去。很小的写字台上有两台笔记本电脑、一摞关于佛教的书,还有一个堆满烟灰的烟灰缸。另一面裸露的泛黄墙壁上挂着盖伊·福克斯面具。他告诉我,“所谓‘Commander X’不过是一个处于极度痛苦中的小老头罢了。”

-

在刚过去的圣诞节里,匿名者的新网站 AnonInsiders 的创建者拜访了 Doyon,并给他带来了馅饼和香烟。Doyon 询问来访的朋友是否可以继承自己的衣钵成为 PLF 的最高指挥官,同时希望能够递交出自己手里的“王国钥匙”——手里的所有密码,以及几份关于“匿名者”组织的机密文件。这位朋友委婉的拒绝了。“我有自己的生活,”他告诉了我拒绝的理由。

+

在刚过去的圣诞节里,匿名者的新网站 AnonInsiders 的创建者拜访了 Doyon,并给他带来了馅饼和香烟。Doyon 询问来访的朋友是否可以接替自己成为 PLF 的最高指挥官,同时希望能够递交出自己手里的“王国钥匙”——手里的所有密码,以及几份关于“匿名者”组织的机密文件。这位朋友委婉的拒绝了。“我有自己的生活,”他告诉了我拒绝的理由。

11

-

2014 年 8 月 9 日,当地时间下午 5 时 09 分,来自密苏里州圣路易斯郊区德尔伍德的一位说唱歌手同时也是激进分子的 Kareem (Tef Poe) Jackson,在 Twitter 上谈起了邻近城镇的一系列令人担忧的举措。“基本可以断定弗格森已经实施了戒严,任何人都无法出入,”他在 Twitter 上写道。“国内的朋友还有因特网上的朋友请帮助我们!!!”五个小时前,弗格森,一位十八岁的手无寸铁的非裔美国人 Michael Brown,被一位白人警察射杀。射杀警察声称自己这么做的原因是 Brown 意图伸手抢夺自己的枪支。而事发当时和 Brown 在一起的朋友 Dorian Johnson 却说,Brown 唯一做得不对的地方在于他当时拒绝离开街道中间。

+

2014 年 8 月 9 日,当地时间下午 5 时 09 分,来自密苏里州圣路易斯郊区德尔伍德的一位说唱歌手同时也是激进分子的 Kareem (Tef Poe) Jackson,在 Twitter 上谈起了邻近城镇的一系列令人担忧的举措。“基本可以断定弗格森已经实施了戒严,任何人都无法出入,”他在 Twitter 上写道。“国内外的朋友们请帮助我们!!!”五个小时前,在弗格森,一位十八岁的手无寸铁的非裔美国人 Michael Brown 被一位白人警察射杀。射杀警察声称自己这么做的原因是 Brown 意图伸手抢夺自己的枪支。而事发当时和 Brown 在一起的朋友 Dorian Johnson 却说,Brown 唯一做得不对的地方在于他当时拒绝离开街道中间。

-

不到两小时,Jackson 就收到了一位名为 CommanderXanon 的 Twitter 用户的回复。“你完全可以相信我们,”回复信息里写道。“你是否可以给我们详细描述一下现场情况,那样会对我们很有帮助。”近几周的时间里,仍然呆在加拿大的 Doyon 复出了。六月,他在还有两个月满 50 岁的时候,成功戒烟(“#戒瘾成功 #电子香烟功不可没 #老了,”他在戒烟成功后在 Twitter 上写道)。七月,在加沙地带爆发武装对抗之后,Doyon 发表 Twiter 支持“匿名者”组织的“拯救加沙行动”,并发动了一系列针对以色列网站的 DDoS 攻击。Doyon 认为弗格森枪击事件更加令人关注。抛开他本人的个性,他有在事件发展到引人注目之前的早期,就迅速注意该事件的能力。

+

不到两小时,Jackson 就收到了一位名为 CommanderXanon 的 Twitter 用户的回复。“你完全可以相信我们,”回复信息里写道。“你是否可以给我们详细描述一下现场情况,那样会对我们很有帮助。”近几周的时间里,仍然呆在加拿大的 Doyon 复出了。六月,他在还有两个月满 50 岁的时候,成功戒烟(“#戒瘾成功 #电子香烟功不可没 #老了,”他在戒烟成功后在 Twitter 上写道)。七月,在加沙地带爆发武装对抗之后,Doyon 发表 Twitter 支持“匿名者”组织的“拯救加沙行动”,并发动了一系列针对以色列网站的 DDoS 攻击。Doyon 认为弗格森枪击事件更加令人关注。抛开他本人的个性不谈,他有能力在事件发展到引人注目之前,就迅速关注到该事件。

“正在网上搜索关于那名警察以及当地政府的信息,” Doyon 发 Twitter 道。不到十分钟,他就为此专门在 IRC 聊天室里创建了一个频道。“‘匿名者’组织‘弗格森’行动正式启动,”他又发了一条 Twitter。但只有两个人转推了此消息。

-

次日早晨,Doyon 发布了一条链接,链接指向的是一个初具雏形的网站,网站首页有一条致弗格森市民的信息——“你们并不孤单,我们将尽一切努力支持你们”——以及致当地警察的警告:“如果你们对对弗格森的抗议者们滥用职权、骚扰,或者伤害了他们,我们绝对会让你们所有政府部门的网站瘫痪。这不是威胁,这是承诺。”同时 Doyon 呼吁有 130 万粉丝的“匿名者”组织的 Twitter 账号 YourAnonNews 给与支持。“请支持‘弗格森’行动”,他发送了消息。一分钟后,YourAnonNews 回复表示同意。当天,包含话题 #OpFerguson 的 Twitter 发表/转推了超过六千次。

+

次日早晨,Doyon 发布了一条链接,链接指向的是一个初具雏形的网站,网站首页有一条致弗格森市民的信息——“你们并不孤单,我们将尽一切努力支持你们”——以及致当地警察的警告:“如果你们对弗格森的抗议者们滥用职权、骚扰,或者伤害了他们,我们绝对会让你们所有政府部门的网站瘫痪。这不是威胁,这是承诺。”同时 Doyon 呼吁有 130 万粉丝的“匿名者”组织 Twitter 账号 YourAnonNews 给予支持。“请支持‘弗格森’行动”,他发送了消息。一分钟后,YourAnonNews 回复表示同意。当天,包含话题 #OpFerguson 的 Twitter 被转发了超过六千次。

-

这个事件迅速成为头条新闻,同时匿名者们在弗格森周围进行了大集会。与“阿拉伯之春行动”类似,“匿名者”组织向抗议者们发送了电子关怀包,包括抗暴指导(“把瓦斯弹捡起来回丢给警察”)与可打印的盖伊·福克斯面具。Jackson 和其他示威者在弗格森进行示威游行时,警察企图通过橡皮子弹和催泪瓦斯来驱散他们。“当时的情景真像是布鲁斯·威利斯的电影里的情节,” Jackson 后来告诉我。“不过巴拉克·奥巴马应该并不会支持‘匿名者’组织传授给我们的这些知识,”他笑称道。“让那些警察赶到束手无策真的是太爽了。”

+

这个事件迅速成为头条新闻,同时匿名者们在弗格森周围进行了大集会。与“阿拉伯之春行动”类似,“匿名者”组织向抗议者们发送了电子关怀包,包括抗暴指导(“把瓦斯弹捡起来回丢给警察”)与可打印的盖伊·福克斯面具。Jackson 和其他示威者在弗格森进行示威游行时,警察企图通过橡皮子弹和催泪瓦斯来驱散他们。“当时的情景真像是布鲁斯·威利斯的电影里的情节,” Jackson 后来告诉我。“不过巴拉克·奥巴马应该并不会支持‘匿名者’组织传授给我们的这些知识,”他说道。“知道有人在你的背后支持你,真是感觉欣慰。”

-

有个域名是 www.opferguson.com 的网站,后来发现不过是一个骗局——一个用来收集访问者 ip 地址的陷阱,随后这些地址会被移交给执法机构。有些人怀疑 Commander X 是政府的线人。在 IRC 聊天室 #OpFerguson 频道,一个名叫 Sherlock 写道,“现在频道里每个人说的已经让我害怕去点击任何陌生的链接了。除非是一个我非常熟悉的网址,否则我绝对不会去点击。”

+

有个网址是 www.opferguson.com 的网站,后来发现不过是一个骗局——一个用来收集访问者 IP 地址的陷阱,随后这些地址会被移交给执法机构。有些人怀疑 Commander X 是政府的线人。在 IRC 聊天室 #OpFerguson 频道,一个名叫 Sherlock 的人写道,“现在频道里每个人说的已经让我害怕去点击任何陌生的链接了。除非是一个我非常熟悉的网址,否则我绝对不会去点击。”

弗格森的抗议者要求当局公布射杀 Brown 的警察的名字。几天后,匿名者们附和了抗议者们的请求。有人在 Twitter 上写道,“弗格森警察局最好公布肇事警察的名字,否则‘匿名者’组织将会替他们公布。”8 月 12 日的新闻发布会上,圣路易斯警察局的局长 Jon Belmar 拒绝了这个请求。“我们不会这样做,除非他们被某个罪名所指控,”他说道。

作为报复,一名黑客使用名为 TheAnonMessage 的 Twitter 账户公布了一条链接,该链接指向一段来自警察的无线电设备所记录的音频文件,文件记录时间是 Brown 被枪杀的两小时左右。TheAnonMessage 同时也把矛头指向了 Belmar,在 Twitter 上公布了这位警察局长的家庭住址、电话号码以及他的家庭照片——一张是他的儿子在长椅上睡觉,另一张则是 Belmar 和他的妻子的合影。“不错的照片,Jon,” TheAnonMessage 在 Twitter 上写道。“你的妻子在她这个年龄算是一个美人了。你已经爱她爱得不耐烦了吗?”一个小时后,TheAnonMessage 又以 Belmar 的女儿为把柄进行了恐吓。

-

Richard Stallman,来自 MIT 的初代黑客,告诉我虽然他在很多地方赞同“匿名者”组织的行为,但他认为这些泄露私人信息的攻击行为是要受到谴责的。即使是在国内,TheAnonMessage 的行为也受到了谴责。“为何要泄露无辜的人的信息到网上?”一位匿名者通过 IRC 发问,并且表示威胁 Belmar 的家人实在是“相当愚蠢的行为”。但是 TheAnonMessage 和其他的一些匿名者仍然进行着不断搜寻,并企图在将来再次进行泄露信息的攻击。在互联网上可以得到所有弗格森警察局警员的名字,匿名者们不断地搜索着信息,企图找出具体是哪一个警察找出杀害了 Brown。

+

Richard Stallman,来自 MIT 的初代黑客,告诉我虽然他在很多地方赞同“匿名者”组织的行为,但他认为这些泄露私人信息的攻击行为是要受到谴责的。即使是在组织内部,TheAnonMessage 的行为也受到了谴责。“为何要泄露无辜的人的信息到网上?”一位匿名者通过 IRC 发问,并且表示威胁 Belmar 的家人实在是“相当愚蠢的行为”。但是 TheAnonMessage 和其他的一些匿名者仍然在不断搜寻,并企图在将来再次进行泄露信息的攻击。互联网上可以得到所有弗格森警察局警员的名字,匿名者们不断地搜索着信息,企图找出具体是哪一个警察杀害了 Brown。

1999 年 4 月 12 日 “我应该把镜头对向谁?”
-

8 月 14 日清晨,及位匿名者基于 Facebook 上的照片还有其他的证据,确定了射杀 Brown 的凶手是一位名叫 Bryan Willman 的 32 岁男子。根据一份 IRC 聊天记录,一位匿名者贴出了 Willman 的浮夸面孔的照片;另一位匿名者提醒道,“凶手声称自己的脸没有被任何人看到。”另一位昵称为 Anonymous|11057 的匿名者承认他对 Willman 的怀疑确实是“跳跃性的可能错误的逻辑过程推导出来的。”不过他还是写道,“我只是无法动摇自己的想法。虽然我没有任何证据,但我非常非常地确信就是他。”

+

8 月 14 日清晨,几位匿名者基于 Facebook 上的照片还有其他的证据,确定了射杀 Brown 的凶手是一位名叫 Bryan Willman 的 32 岁男子。根据一份 IRC 聊天记录,一位匿名者贴出了 Willman 的肿胀面孔的照片;另一位匿名者提醒道,“凶手声称自己的脸没有被任何人看到。”另一位昵称为 Anonymous|11057 的匿名者承认他对 Willman 的怀疑确实是“跳跃性的可能错误的逻辑过程推导出来的。”不过他还是写道,“我只是无法动摇自己的想法。虽然我没有任何证据,但我非常非常地确信就是他。”

TheAnonMessage 看起来被这次对话逗乐了,写道,“#愿逝者安息,凶手是 BryanWillman。”另一位匿名者发出了强烈警告。“请务必确认,” Anonymous|2252 写道。“这不仅仅关乎到一个人的性命,我们可以不负责任地向公众公布我们的结果,但却很可能有无辜的人会因此受到不应受到的对待。”

@@ -356,15 +356,15 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr
anondepp:lol
-

早晨 9 时 45 分,圣路易斯警察局对 TheAnonMessage 进行了答复。“Bryan Willman 从来没有在弗格森警察局或者圣路易斯警察局任过职,” 他们在 Twitter 上写道。“请不要再公布这位无辜市民的信息了。”(随后 FBI 对弗格森警察的电脑遭黑客入侵的事情展开了调查。)Twitter 管理员迅速封禁了 TheAnonMessage 的账户,但 Willman 的名字和家庭住址仍然被广泛传开。

+

早晨 9 时 45 分,圣路易斯警察局对 TheAnonMessage 进行了答复。“Bryan Willman 从来没有在弗格森警察局或者圣路易斯警察局任过职,” 他们在 Twitter 上写道。“请不要再公布这位无辜市民的信息了。”(随后 FBI 对弗格森警察的电脑遭黑客入侵的事情展开了调查。)Twitter 管理员迅速封禁了 TheAnonMessage 的账户,但 Willman 的名字和家庭住址仍然被广泛传开。

-

实际上,Willman 是弗格森西郊圣安区的警察外勤负责人。当圣路易斯警察局的情报处打电话告诉 Willman,他已经被“确认”为凶手时,他告诉我,“我以为不过是个奇怪的笑话。”几小时后,他的社交账号上就收到了数百条要杀死他的威胁。他在警察的保护下,独自一人在家里呆了将近一个星期。“我只希望这一切都尽快过去,”他告诉我他的感受。他认为“匿名者”组织已经不可挽回地损害了他的名誉。“我不知道他们怎么会以为自己可以被再次信任的,”他说。

+

实际上,Willman 是弗格森西郊圣安区的警察外勤负责人。当圣路易斯警察局的情报处打电话告诉 Willman,他已经被“确认”为凶手时,他告诉我,“我以为不过是个奇怪的笑话。”几小时后,他的社交账号上就收到了成百上千条死亡恐吓。他在警察的保护下,独自一人在家里呆了将近一个星期。“我只希望这一切都尽快过去,”他告诉我他的感受。他认为“匿名者”组织已经不可挽回地损害了他的名誉。“我不知道他们怎么会以为自己可以被再次信任的,”他说。

“我们并不完美,” OpFerguson 在 Twitter 上说道。“‘匿名者’组织确实犯错了,过去的几天我们制造了一些混乱。为此,我们道歉。”尽管 Doyon 并不应该为这次错误的信息泄露攻击负责,但其他的匿名者却因为他发起了一次无法控制的行动而归咎于他。YourAnonNews 在 Pastebin 上发表了一则消息,上面写道,“你们也许注意到了组织不同的 Twitter 账户发表的话题 #Ferguson 和 #OpFerguson,这两个话题下的 Twitter 与信息是相互矛盾的。为什么会在这些关键话题上出现分歧,部分原因是因为 CommanderX 是一个‘想让自己出名的疯子/想让公众认识自己的疯子’——这种人喜欢,或者至少不回避媒体的宣传——并且显而易见的,组织内大部分成员并不喜欢这样。”

在个人 Twitter 上,Doyon 否认了所有关于“弗格森行动”的指责,他写道,“我讨厌这样。我不希望这样的情况发生,我也不希望和我认为是朋友的人战斗。”沉寂了几天后,他又再度吹响了战斗的号角。他最近在 Twitter 上写道,“你们称他们是暴民,我们却称他们是压迫下的反抗之声”以及“解放西藏”。

-

Doyon 仍然处于藏匿状态。甚至连他的律师 Jay Leiderman 也不知道他在哪里。Leiderman 表示,除了在圣克鲁斯受到的指控,Doyon 很有可能因为攻击了 PayPal 和奥兰多而面临新的指控。一旦他被捕,所有的刑期加起来,他的余生就要在监狱里度过了。借鉴 Edward Snowden 的先例,他希望申请去俄罗斯避难。我们谈话时,他用一支点燃的香烟在他的公寓里比划着。“这里比他【哔~】的牢房强多了吧?我绝对不会出去,”他愤愤道。“我不会再联系我的家人了....这是相当高的代价,但我必须这么做,我会尽我的努力让所有人活得自由、明白。”

+

Doyon 仍然处于藏匿状态。甚至连他的律师 Jay Leiderman 也不知道他在哪里。Leiderman 表示,除了在圣克鲁斯受到的指控,Doyon 很有可能因为攻击了 PayPal 和奥兰多而面临新的指控。一旦他被捕,所有的刑期加起来,他的余生就要在监狱里度过了。借鉴 Edward Snowden 的先例,他希望申请去俄罗斯避难。我们谈话时,他用一支点燃的香烟在他的公寓里比划着。“这里比【哔~】的牢房强多了吧?我绝对不会出去,”他愤愤道。“我不会再联系我的家人了....这是相当高的代价,但我必须这么做,我会尽我的努力让所有人活得自由、明白。”

@@ -372,6 +372,6 @@ Doyon 和软件主管在加拿大的小木屋里呆了几天。在一次同 Barr

作者:David Kushner

译者:SteveArcher

-

校对:校对者ID

+

校对:Caroline

本文由 LCTT 原创翻译,Linux中国荣誉推出

diff --git a/published/20140905 Ubuntu Touch Now Has a Torrent Client in the Ubuntu Store.md b/published/201410/20140905 Ubuntu Touch Now Has a Torrent Client in the Ubuntu Store.md similarity index 100% rename from published/20140905 Ubuntu Touch Now Has a Torrent Client in the Ubuntu Store.md rename to published/201410/20140905 Ubuntu Touch Now Has a Torrent Client in the Ubuntu Store.md diff --git a/published/20140910 Colourful systemd vs sysVinit Linux Cheatsheet.md b/published/201410/20140910 Colourful systemd vs sysVinit Linux Cheatsheet.md similarity index 100% rename from published/20140910 Colourful systemd vs sysVinit Linux Cheatsheet.md rename to published/201410/20140910 Colourful systemd vs sysVinit Linux Cheatsheet.md diff --git a/translated/news/20140910 Jelly Conky Adds Simple, Stylish Stats To Your Linux Desktop.md b/published/201410/20140910 Jelly Conky Adds Simple, Stylish Stats To Your Linux Desktop.md similarity index 53% rename from translated/news/20140910 Jelly Conky Adds Simple, Stylish Stats To Your Linux Desktop.md rename to published/201410/20140910 Jelly Conky Adds Simple, Stylish Stats To Your Linux Desktop.md index 7ef4156aa3..e4b0814c2b 100644 --- a/translated/news/20140910 Jelly Conky Adds Simple, Stylish Stats To Your Linux Desktop.md +++ b/published/201410/20140910 Jelly Conky Adds Simple, Stylish Stats To Your Linux Desktop.md @@ -1,12 +1,12 @@ -Jelly Conky给你的Linux桌面加入了简约、时尚的状态 +Jelly Conky为你的Linux桌面带来简约、时尚的状态信息 ================================================================================ -**我把Conky设置成有点像壁纸:我会找出一张我喜欢的,只在下一周更换因为我厌倦了并且想要一点改变。** +**我把Conky当成壁纸一样使用:我会找出一个我喜欢的样式,下一周当我厌烦了想要一点小改变时我就更换另外一个样式。** -不耐烦的一部分原因是由于日益增长的设计目录。我最近最喜欢的是Jelly Conky。 +不断更换样式的部分原因是由于日益增多的样式目录。我最近最喜欢的样式是Jelly Conky。 ![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/jelly-conky.png) -我们最近强调的许多Conky所夸耀的最小设计都遵循了。它并不想成为一个厨房水槽。它不会被那些需要一眼需要看到他们硬盘温度和IP地址的人所青睐 +Jelly Conky遵循了许多我们推荐的Conky风格采用的最小设计原则。它并不想成为一个大杂烩。它不会被那些喜欢一眼就能看到他们硬盘温度和IP地址的人所青睐。 
它配备了三种不同的模式,它们都可以添加个性的或者静态背景图像: @@ -16,9 +16,9 @@ Jelly Conky给你的Linux桌面加入了简约、时尚的状态 一些人不理解为什么要在桌面上拥有重复的时钟。这是很好理解的。对于我而言,这不仅仅是功能(虽然,个人而言,Conky的时钟比挤在上部面板上那渺小的数字要更容易看清)。 -机会是如果你的Android主屏幕有一个时间小部件的话,你不会介意在你的桌面上也有这么一个 - +我想如果你的Android主屏幕有一个时间小部件的话,你不会介意在你的桌面上也有这么一个的,对吧? +你可以从下述链接下载Jelly Conky,zip 包里面有一个说明如何安装的 readme 文件。如果希望看到完整的教程,可以[参考我们的前一篇文章][3]。 - [从Deviant Art上下载 Jelly Conky][2] -------------------------------------------------------------------------------- @@ -27,10 +27,11 @@ via: http://www.omgubuntu.co.uk/2014/09/jelly-conky-for-linux-desktop 作者:[Joey-Elijah Sneddon][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:https://plus.google.com/117485690627814051450/?rel=author [1]:http://www.omgubuntu.co.uk/2014/07/conky-circle-theme-nod-lg-quick-cover -[2]:http://zagortenay333.deviantart.com/art/Jelly-Conky-442559003 \ No newline at end of file +[2]:http://zagortenay333.deviantart.com/art/Jelly-Conky-442559003 +[3]:http://www.omgubuntu.co.uk/2014/07/conky-circle-theme-nod-lg-quick-cover \ No newline at end of file diff --git a/published/20140910 Meet the 12 Ubuntu 14.10 Wallpaper Contest Winners (So Far).md b/published/201410/20140910 Meet the 12 Ubuntu 14.10 Wallpaper Contest Winners (So Far).md similarity index 100% rename from published/20140910 Meet the 12 Ubuntu 14.10 Wallpaper Contest Winners (So Far).md rename to published/201410/20140910 Meet the 12 Ubuntu 14.10 Wallpaper Contest Winners (So Far).md diff --git a/published/20140910 [Quick Tip] How To List All Installed Packages On Linux Distributions.md b/published/201410/20140910 [Quick Tip] How To List All Installed Packages On Linux Distributions.md similarity index 100% rename from published/20140910 [Quick Tip] How To List All Installed Packages On Linux Distributions.md rename to published/201410/20140910 [Quick Tip] How To List All Installed 
Packages On Linux Distributions.md diff --git a/published/20140911 Install UberWriter Markdown Editor In Ubuntu 14.04.md b/published/201410/20140911 Install UberWriter Markdown Editor In Ubuntu 14.04.md similarity index 100% rename from published/20140911 Install UberWriter Markdown Editor In Ubuntu 14.04.md rename to published/201410/20140911 Install UberWriter Markdown Editor In Ubuntu 14.04.md diff --git a/published/20140912 How to Go Hands On With the Utopic Unicorn--Literally.md b/published/201410/20140912 How to Go Hands On With the Utopic Unicorn--Literally.md similarity index 100% rename from published/20140912 How to Go Hands On With the Utopic Unicorn--Literally.md rename to published/201410/20140912 How to Go Hands On With the Utopic Unicorn--Literally.md diff --git a/published/20140912 QuiteRSS--RSS Reader For Desktop Linux.md b/published/201410/20140912 QuiteRSS--RSS Reader For Desktop Linux.md similarity index 100% rename from published/20140912 QuiteRSS--RSS Reader For Desktop Linux.md rename to published/201410/20140912 QuiteRSS--RSS Reader For Desktop Linux.md diff --git a/published/20140915 How To Uninstall Ubuntu Linux From Windows 8 Dual Boot.md b/published/201410/20140915 How To Uninstall Ubuntu Linux From Windows 8 Dual Boot.md similarity index 100% rename from published/20140915 How To Uninstall Ubuntu Linux From Windows 8 Dual Boot.md rename to published/201410/20140915 How To Uninstall Ubuntu Linux From Windows 8 Dual Boot.md diff --git a/published/20140915 Linux FAQs with Answers--How to find and remove obsolete PPA repositories on Ubuntu.md b/published/201410/20140915 Linux FAQs with Answers--How to find and remove obsolete PPA repositories on Ubuntu.md similarity index 100% rename from published/20140915 Linux FAQs with Answers--How to find and remove obsolete PPA repositories on Ubuntu.md rename to published/201410/20140915 Linux FAQs with Answers--How to find and remove obsolete PPA repositories on Ubuntu.md diff --git 
a/published/20140915 One of the Smallest Distros in the World, Tiny Core, Gets a Fresh Update.md b/published/201410/20140915 One of the Smallest Distros in the World, Tiny Core, Gets a Fresh Update.md similarity index 100% rename from published/20140915 One of the Smallest Distros in the World, Tiny Core, Gets a Fresh Update.md rename to published/201410/20140915 One of the Smallest Distros in the World, Tiny Core, Gets a Fresh Update.md diff --git a/published/20140915 Potenza Icon Themes 2.0 Available For Download.md b/published/201410/20140915 Potenza Icon Themes 2.0 Available For Download.md similarity index 100% rename from published/20140915 Potenza Icon Themes 2.0 Available For Download.md rename to published/201410/20140915 Potenza Icon Themes 2.0 Available For Download.md diff --git a/published/20140917 GNOME Control Center 3.14 RC1 Corrects Lots of Potential Crashes.md b/published/201410/20140917 GNOME Control Center 3.14 RC1 Corrects Lots of Potential Crashes.md similarity index 100% rename from published/20140917 GNOME Control Center 3.14 RC1 Corrects Lots of Potential Crashes.md rename to published/201410/20140917 GNOME Control Center 3.14 RC1 Corrects Lots of Potential Crashes.md diff --git a/published/20140919 Another Italian City Says Goodbye To Microsoft Office, Will Switch To OpenOffice Soon.md b/published/201410/20140919 Another Italian City Says Goodbye To Microsoft Office, Will Switch To OpenOffice Soon.md similarity index 100% rename from published/20140919 Another Italian City Says Goodbye To Microsoft Office, Will Switch To OpenOffice Soon.md rename to published/201410/20140919 Another Italian City Says Goodbye To Microsoft Office, Will Switch To OpenOffice Soon.md diff --git a/published/20140919 Mir and Unity 8 Status Update Arrives from Ubuntu Devs.md b/published/201410/20140919 Mir and Unity 8 Status Update Arrives from Ubuntu Devs.md similarity index 100% rename from published/20140919 Mir and Unity 8 Status Update Arrives from Ubuntu 
Devs.md rename to published/201410/20140919 Mir and Unity 8 Status Update Arrives from Ubuntu Devs.md diff --git a/published/20140919 Netflix Offers to Work with Ubuntu to Bring Native Playback to All.md b/published/201410/20140919 Netflix Offers to Work with Ubuntu to Bring Native Playback to All.md similarity index 100% rename from published/20140919 Netflix Offers to Work with Ubuntu to Bring Native Playback to All.md rename to published/201410/20140919 Netflix Offers to Work with Ubuntu to Bring Native Playback to All.md diff --git a/published/201410/20140919 Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development.md b/published/201410/20140919 Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development.md new file mode 100644 index 0000000000..45f7718d45 --- /dev/null +++ b/published/201410/20140919 Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development.md @@ -0,0 +1,37 @@ +Red Hat公司以8200万美元收购FeedHenry来推动移动开发 +================================================================================ +> 这是Red Hat公司进入移动开发领域的一次关键收购。 + +Red Hat公司的JBoss开发者工具事业部一直注重于企业开发,而忽略了移动方面。而如今这一切将随着Red Hat公司宣布以8200万美元现金收购移动开发供应商 [FeedHenry][1] 开始发生改变。这笔交易将在Red Hat公司2015财年的第三季度完成。 + +Red Hat公司的中间件总经理Mike Piech表示,交易完成后FeedHenry公司的员工将会成为Red Hat公司的员工。 + +FeedHenry公司的开发平台能让应用开发者快速地开发出Android、iOS、Windows Phone以及黑莓的移动应用。FeedHenry的平台采用了Node.js编程架构,而那不是过去JBoss涉及较多的领域。 + +"这次对FeedHenry公司的收购显著地扩大了Red Hat对Node.js的支持与参与。" Piech说。 + +Red Hat公司的平台即服务(PaaS)技术OpenShift已经有了一个Node.js的cartridge组件。此外,Red Hat公司的企业版Linux把Node.js的技术预览版作为Red Hat软件集(Software Collections)的一部分提供。 + +尽管Node.js本身就是开源的,但目前并非FeedHenry公司的所有技术都以开源许可证提供。正如Red Hat一贯的政策,它现在也承诺将FeedHenry开源。 + +"正如我们过去的收购一样,开源所收购的技术是Red Hat的首要任务,我们没有理由认为对FeedHenry会例外。"Piech说。 + +Red Hat公司上一次对拥有非开源技术的公司的重大收购,是2012年以1.04亿美元收购 [ManageIQ][2] 公司。在今年的5月份,Red Hat公司启动了ManageIQ开源项目,开放了之前闭源的云管理技术的开发和代码。 + +从整合的角度来看,Red Hat公司尚未提供FeedHenry将如何融入其产品的完整细节。 + +"我们已经确定了一些FeedHenry公司和我们已有的技术和产品能很好地相互融合和集成的领域," Piech说,"在接下来的90天里制定发展蓝图时,我们会分享更多细节。" + 
+-------------------------------------------------------------------------------- + +via: http://www.datamation.com/mobile-wireless/red-hat-acquires-feedhenry-for-82-million-to-advance-mobile-development.html + +作者:[Sean Michael Kerner][a] +译者:[ZTinoZ](https://github.com/ZTinoZ) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.datamation.com/author/Sean-Michael-Kerner-4807810.html +[1]:http://www.feedhenry.com/ +[2]:http://www.datamation.com/cloud-computing/red-hat-makes-104-million-cloud-management-bid-with-manageiq-acquisition.html diff --git a/published/20140922 Ten Blogs Every Ubuntu User Must Follow.md b/published/201410/20140922 Ten Blogs Every Ubuntu User Must Follow.md similarity index 100% rename from published/20140922 Ten Blogs Every Ubuntu User Must Follow.md rename to published/201410/20140922 Ten Blogs Every Ubuntu User Must Follow.md diff --git a/translated/news/20140924 Canonical Closes nginx Exploit in Ubuntu 14.04 LTS.md b/published/201410/20140924 Canonical Closes nginx Exploit in Ubuntu 14.04 LTS.md similarity index 79% rename from translated/news/20140924 Canonical Closes nginx Exploit in Ubuntu 14.04 LTS.md rename to published/201410/20140924 Canonical Closes nginx Exploit in Ubuntu 14.04 LTS.md index 3afcf92fe7..5b87861224 100644 --- a/translated/news/20140924 Canonical Closes nginx Exploit in Ubuntu 14.04 LTS.md +++ b/published/201410/20140924 Canonical Closes nginx Exploit in Ubuntu 14.04 LTS.md @@ -1,15 +1,14 @@ -Canonical在Ubuntu 14.04 LTS中关闭了一个nginx漏洞 +Canonical解决了一个Ubuntu 14.04 LTS中的nginx漏洞 ================================================================================ -> 用户不得不升级他们的系统来修复这个漏洞 +> 用户应该更新他们的系统来修复这个漏洞! -![Ubuntu 14.04 LTS](http://i1-news.softpedia-static.com/images/news2/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677-2.jpg) +
![Ubuntu 14.04 LTS](http://i1-news.softpedia-static.com/images/news2/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677-2.jpg)
-Ubuntu 14.04 LTS +
*Ubuntu 14.04 LTS*
**Canonical已经在安全公告中公布了这个影响到Ubuntu 14.04 LTS (Trusty Tahr)的nginx漏洞的细节。这个问题已经被确认并修复了。** -Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可能已经被用来暴露网络上的敏感信息。 - +Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可能已经被利用来暴露网络上的敏感信息。 根据安全公告,“Antoine Delignat-Lavaud和Karthikeyan Bhargavan发现nginx错误地重复使用了缓存的SSL会话。攻击者可能利用此问题,在特定的配置下,可以从不同的虚拟主机获得信息”。 @@ -23,13 +22,14 @@ Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释nginx可 sudo apt-get dist-upgrade 在一般情况下,一个标准的系统更新将会进行必要的更改。要应用此修补程序您不必重新启动计算机。 + -------------------------------------------------------------------------------- via: http://news.softpedia.com/news/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677.shtml 作者:[Silviu Stahie][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/20140924 Debian 8 Jessie to Have GNOME as the Default Desktop.md b/published/201410/20140924 Debian 8 Jessie to Have GNOME as the Default Desktop.md similarity index 100% rename from published/20140924 Debian 8 Jessie to Have GNOME as the Default Desktop.md rename to published/201410/20140924 Debian 8 Jessie to Have GNOME as the Default Desktop.md diff --git a/published/20140924 End of the Line for Red Hat Enterprise Linux 5.md b/published/201410/20140924 End of the Line for Red Hat Enterprise Linux 5.md similarity index 100% rename from published/20140924 End of the Line for Red Hat Enterprise Linux 5.md rename to published/201410/20140924 End of the Line for Red Hat Enterprise Linux 5.md diff --git a/published/20140924 Second Bugfix Release for KDE Plasma 5 Arrives with Lots of Changes.md b/published/201410/20140924 Second Bugfix Release for KDE Plasma 5 Arrives with Lots of Changes.md similarity index 100% rename from published/20140924 Second Bugfix Release for KDE Plasma 5 Arrives with Lots of 
Changes.md diff --git a/translated/news/20140924 Wal Commander GitHub Edition 0.17 released.md b/published/201410/20140924 Wal Commander GitHub Edition 0.17 released.md similarity index 52% rename from translated/news/20140924 Wal Commander GitHub Edition 0.17 released.md rename to published/201410/20140924 Wal Commander GitHub Edition 0.17 released.md index 1698210390..98ab4c8567 100644 --- a/translated/news/20140924 Wal Commander GitHub Edition 0.17 released.md +++ b/published/201410/20140924 Wal Commander GitHub Edition 0.17 released.md @@ -1,15 +1,20 @@ -Wal Commander 0.17 Github版发布了 +文件管理器 Wal Commander Github 0.17版发布了 ================================================================================ ![](http://wcm.linderdaum.com/wp-content/uploads/2014/09/wc21.png) > ### 描述 ### > -、> Wal Commander GitHub 版是一款多平台的开源文件管理器。适用于Windows、Linux、FreeBSD、和OSX。 +> Wal Commander GitHub 版是一款多平台的开源文件管理器。适用于Windows、Linux、FreeBSD和OSX。 > > 这个项目的目的是创建一个模仿Far管理器外观和感觉的便携式文件管理器。 -The next stable version of our Wal Commander GitHub Edition 0.17 is out. Major features include command line autocomplete using the commands history; file associations to bind custom commands to different actions on files; and experimental support of OS X using XQuartz. A lot of new hotkeys were added in this release. Precompiled binaries are available for Windows x64. Linux, FreeBSD and OS X versions can be built directly from the [GitHub source code][1]. 
-Wal Commander 的下一个Github稳定版本0.17 已经出来了。主要功能包括:使用命令历史自动补全;文件关联绑定自定义命令对文件的各种操作;和用XQuartz实验性地支持OS X。很多新的快捷键添加在此版本中。预编译二进制文件适用于Windows64、Linux,FreeBSD和OS X版本,这些可以直接从[GitHub中的源代码][1]编译。 +Wal Commander 的下一个Github稳定版本0.17 已经出来了。主要功能包括: + +- 使用命令历史自动补全; +- 文件关联,可将自定义命令绑定到对文件的各种操作; +- 用XQuartz实验性地支持OS X。 + +此版本中还添加了很多新的快捷键。预编译的二进制文件仅适用于Windows x64;Linux、FreeBSD和OS X版本可以直接从[GitHub中的源代码][1]编译。 ### 主要特性 ### @@ -17,8 +22,9 @@ Wal Commander 的下一个Github稳定版本0.17 已经出来了。主要功能 - 文件关联 (主菜单 -> 命令 -> 文件关联) - XQuartz上实验性地支持OS X ([https://github.com/corporateshark/WalCommander/issues/5][2]) -### [下载][3] ###. +### 下载 ### +下载:[http://wcm.linderdaum.com/downloads/][3] 源代码: [https://github.com/corporateshark/WalCommander][4] @@ -27,7 +33,7 @@ Wal Commander 的下一个Github稳定版本0.17 已经出来了。主要功能 via: http://wcm.linderdaum.com/release-0-17-0/ 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/How to listen to Internet radio from the command line on Linux.md b/published/201410/How to listen to Internet radio from the command line on Linux.md similarity index 100% rename from published/How to listen to Internet radio from the command line on Linux.md rename to published/201410/How to listen to Internet radio from the command line on Linux.md diff --git a/published/The Open Source Witch Hunts Have Returned.md b/published/201410/The Open Source Witch Hunts Have Returned.md similarity index 100% rename from published/The Open Source Witch Hunts Have Returned.md rename to published/201410/The Open Source Witch Hunts Have Returned.md diff --git a/published/20141008 Adobe Pulls Linux PDF Reader Downloads From Website.md b/published/20141008 Adobe Pulls Linux PDF Reader Downloads From Website.md new file mode 100644 index 0000000000..17e9920227 --- /dev/null +++ b/published/20141008 Adobe Pulls Linux PDF Reader Downloads From Website.md @@ -0,0 
+1,41 @@ +Adobe从网站上撤下了Linux PDF Reader的下载链接 +================================================================================ +
![Linux上的其他PDF解决方案](http://www.omgubuntu.co.uk/wp-content/uploads/2012/07/test-pdf.jpg)
+ + +**Adobe公司已经从网站上撤下了其软件的下载链接,这对于任何需要在Linux上使用该公司PDF阅读器的人而言都是个麻烦。** + +[Reddit 上的一个用户][1]发帖说,当他去 Adobe 网站上下载该软件时,Linux并没有列在[支持的操作系统][2]里。 + +不知道从什么时候开始,也不知道为什么,Linux版本被撤掉了,不过这最早是在八月份被人发现的。 + +这也并没有让人太惊讶。Adobe Reader 官方的Linux版本最后一次更新还是在2013年5月,而且当时还停留在落后的9.5.x版本上,而Windows和Mac版已经是v11.x了。 + +### 谁在意呢?无所谓 ### + +这是一个巨大的损失么?你可能并不会这么想。毕竟Adobe Reader是一款名声不好的app:速度慢、占用资源而且体积臃肿。而Evince和Okular这样的原生PDF阅读app提供了一流的体验,且没有上面的那些缺点。 + +撇开嘲讽不谈,这一决定确实会影响一些事:一些政府网站提供的官方文档和流程,只有使用官方Adobe应用才能填写或提交。 + +Adobe冷落Linux这种事并不鲜见。该公司在2012年[停止了Linux上flash版本的更新][3](把它留给Google去做),[并且此前把Linux用户踢出了它们的跨平台运行时环境“Air”][4]。 + +不过也并非全无办法。虽然网站上不再提供链接了,但在Adobe的FTP服务器上仍有Debian的安装程序。打算使用这个老版本?你需要自己承担风险,并且得不到Adobe的支持。同时请注意,这些版本中可能还存在未修复的漏洞。 + +- [下载Ubuntu版本的 Adobe Reader 9.5.5][5] + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2014/10/adobe-reader-linux-download-pulled-website + +作者:[Joey-Elijah Sneddon][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:https://www.reddit.com/r/linux/comments/2hsgq6/linux_version_of_adobe_reader_no_longer/ +[2]:http://get.adobe.com/reader/otherversions/ +[3]:http://www.omgubuntu.co.uk/2012/02/adobe-adandons-flash-on-linux +[4]:http://www.omgubuntu.co.uk/2011/06/adobe-air-for-linux-axed +[5]:ftp://ftp.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/AdbeRdr9.5.5-1_i386linux_enu.deb \ No newline at end of file diff --git a/translated/talk/How to Achieve Better Security With Proper Management of Open Source.md b/published/How to Achieve Better Security With Proper Management of Open Source.md similarity index 91% rename from translated/talk/How to Achieve Better Security With Proper Management of Open Source.md rename to published/How to Achieve Better Security With Proper Management of Open Source.md index 8fd4aafd1b..b5e0f7ed3f 
100644 --- a/translated/talk/How to Achieve Better Security With Proper Management of Open Source.md +++ b/published/How to Achieve Better Security With Proper Management of Open Source.md @@ -1,8 +1,6 @@ 恰当地管理开源,让软件更加安全 ================================================================================ -![作者 Bill Ledingham 是 Black Duck Software 公司的首席技术官(CTO)兼工程执行副总裁](http://www.linux.com/images/stories/41373/Bill-Ledingham.jpg) - -Bill Ledingham 是 Black Duck Software 公司的首席技术官(CTO)兼工程执行副总裁。 +
![作者 Bill Ledingham 是 Black Duck Software 公司的首席技术官(CTO)兼工程执行副总裁](http://www.linux.com/images/stories/41373/Bill-Ledingham.jpg)
越来越多的公司意识到,要想比对手率先开发出高质量具有创造性的软件,关键在于积极使用开源项目。软件版本更迭要求市场推广速度足够快,成本足够低,而仅仅使用商业源代码已经无法满足这些需求了。如果不能选择最合适的开源软件集成到自己的项目里,一些令人称道的点子怕是永无出头之日了。 @@ -38,12 +36,9 @@ Heartbleed bug 让开发人员和企业知道了软件安全性有多重要。 虽然每个公司、每个开发团队都面临各不相同的问题,但实践证明下面几条安全管理经验对使用开源软件的任何规模的组织都有意义: -- **自动认证并分类** - 捕捉并追踪开源组件的相关属性,评估授权许可,自动扫描可能出现的安全漏洞,自动认证并归档。 -- -- **维护最新代码的版本** - 评估代码质量,确保你的产品使用的是最新版本的代码。 -- +- **自动批准和分类** - 捕捉并追踪开源组件的相关属性,评估许可证合规性,通过自动化扫描、批准和使用过程来审查可能出现的安全漏洞。 +- **维护最新代码的版本** - 评估代码质量,确保你的产品使用的是最新版本的代码。 - **评估代码** - 评估所有在使用的开源代码;审查代码安全性、授权许可、列出风险并予以解决。 -- - **确保代码合法** - 创建并实现开源政策,建立自动化合规检查流程确保开源政策、法规、法律责任等符合开源组织的要求。 ### 关键是,要让管理流程运作起来 ### @@ -58,7 +53,7 @@ via: http://www.linux.com/news/software/applications/782953-how-to-achieve-bette 作者:[Bill Ledingham][a] 译者:[sailing](https://github.com/sailing) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/sources/news/20140919 Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development.md b/sources/news/20140919 Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development.md deleted file mode 100644 index e0b8b3de9b..0000000000 --- a/sources/news/20140919 Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development.md +++ /dev/null @@ -1,38 +0,0 @@ -Translating by ZTinoZ -Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development -================================================================================ -> Red Hat jumps into the mobile development sector with a key acquisition. - -Red Hat's JBoss developer tools division has always focused on enterprise development, but hasn't always been focused on mobile. Today that will start to change as Red Hat announced its intention to acquire mobile development vendor [FeedHenry][1] for $82 million in cash. The deal is set to close in the third quarter of Red Hat's fiscal 2015. 
Red Hat is set to disclose its second quarter fiscal 2015 earning at 4 ET today. - -Mike Piech, general manager of Middleware at Red Hat, told Datamation that upon the deal's closing FeedHenry's employees will become Red Hat employees - -FeedHenry's development platform enables application developers to rapidly build mobile application for Android, IOS, Windows Phone and BlackBerry. The FeedHenry platform leverages Node.js programming architecture, which is not an area where JBoss has had much exposure in the past. - -"The acquisition of FeedHenry significantly expands Red Hat's support for and engagement in Node.js," Piech said. - -Piech Red Hat's OpenShift Platform-as-a-Service (PaaS) technology already has a Node.js cartridge. Additionally Red Hat Enterprise Linux ships a tech preview of node.js as part of the Red Hat Software Collections. - -While node.js itself is open source, not all of FeedHenry's technology is currently available under an open source license. As has been Red Hat's policy throughout its entire history, it is now committing to making FeedHenry open source as well. - -"As we've done with other acquisitions, open sourcing the technology we acquire is a priority for Red Hat, and we have no reason to expect that approach will change with FeedHenry," Piech said. - -Red Hat's last major acquisition of a company with non open source technology was with [ManageIQ][2] for $104 million back in 2012. In May of this year, Red Hat launched the ManageIQ open-source project, opening up development and code of the formerly closed-source cloud management technology. - -From an integration standpoint, Red Hat is not yet providing full details of precisely where FeedHenry will fit it. - -"We've already identified a number of areas where FeedHenry and Red Hat's existing technology and products can be better aligned and integrated," Piech said. "We'll share more details as we develop the roadmap over the next 90 days." 
- --------------------------------------------------------------------------------- - -via: http://www.datamation.com/mobile-wireless/red-hat-acquires-feedhenry-for-82-million-to-advance-mobile-development.html - -作者:[Sean Michael Kerner][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.datamation.com/author/Sean-Michael-Kerner-4807810.html -[1]:http://www.feedhenry.com/ -[2]:http://www.datamation.com/cloud-computing/red-hat-makes-104-million-cloud-management-bid-with-manageiq-acquisition.html \ No newline at end of file diff --git a/sources/news/20141008 How To Use Steam Music Player on Ubuntu Desktop.md b/sources/news/20141008 How To Use Steam Music Player on Ubuntu Desktop.md new file mode 100644 index 0000000000..5e21453a88 --- /dev/null +++ b/sources/news/20141008 How To Use Steam Music Player on Ubuntu Desktop.md @@ -0,0 +1,80 @@ +Translating by instdio +How To Use Steam Music Player on Ubuntu Desktop +================================================================================ +![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/steam-music.jpg) + +**‘Music makes the people come together’ Madonna once sang. But can Steam’s new music player feature mix the bourgeoisie and the rebel as well?** + +If you’ve been living under a rock, ears pressed tight to a granite roof, word of Steam Music may have passed you by. The feature isn’t entirely new. It’s been in testing in some form or another since earlier this year. + +But in the latest stable update of the Steam client on Windows, Mac and Linux it is now available to all. Why does a gaming client need to add a music player, you ask? To let you play your favourite music while gaming, of course. + +Don’t worry: playing your music over in-game music is not as bad as it sounds (har har) on paper. 
Steam reduces/cancels out the game soundtrack in favour of your tunes, but keeps sound effects high in the mix so you can hear the plings, boops and blams all the same. + +### Using Steam Music Player ### + +![Music in Big Picture Mode](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/steam-music-bpm.jpg) + +Music in Big Picture Mode + +Steam Music Player is available to anyone running the latest version of the client. It’s a pretty simple addition: it lets you add, browse and play music from your computer. + +The player element itself is accessible on the desktop and when playing in Steam’s (awesome) Big Picture mode. In both instances, controlling playback is made dead simple. + +As the feature is **designed for playing music while gaming** it is not pitching itself as a rival for Rhythmbox or successor to Spotify. In fact, there’s no store to purchase music from and no integration with online services like Rdio, Grooveshark, etc. or the desktop. Nope, your keyboard media keys won’t work with the player in Linux. + +Valve say they “*…plan to add more features so you can experience Steam music in new ways. We’re just getting started.*” + +#### Steam Music Key Features: #### + +- Plays MP3s only +- Mixes with in-game soundtrack +- Music controls available in game +- Player can run on the desktop or in Big Picture mode +- Playlist/queue based playback + +**It does not integrate with the Ubuntu Sound Menu and does not currently support keyboard media keys.** + +### Using Steam Music on Ubuntu ### + +The first thing to do before you can play music is to add some. On Ubuntu, by default, Steam automatically adds two folders: the standard Music directory in Home, and its own Steam Music folder, where any downloadable soundtracks are stored. + +Note: at present **Steam Music only plays MP3s**. If the bulk of your music is in a different file format (e.g., .aac, .m4a, etc.) it won’t be added and cannot be played. 
+ +To add an additional source or scan files in those already listed: + +- Head to **View > Settings > Music**. +- Click ‘**Add**‘ to add a folder in a different location to the two listed entries +- Hit ‘**Start Scanning**’ + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/Tardis.jpg) + +This dialog is also where you can adjust other preferences, including a ‘scan at start’. If you routinely add new music and are prone to forgetting to manually initiate a scan, tick this one on. You can also choose whether to see notifications on track change, set the default volume levels, and adjust playback behaviour when opening an app or taking a voice chat. + +Once your music sources have been successfully added and scanned you are all set to browse through your entries from the **Library > Music** section of the main client. + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/browser.jpg) + +The Steam Music section groups music by album title by default. To browse by band name you need to click the ‘Albums’ header and then select ‘Artists’ from the drop down menu. + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/steam-selection.jpg) + +Steam Music works off of a ‘queue’ system. You can add music to the queue by double-clicking on a track in the browser or by right-clicking and selecting ‘Add to Queue’. + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/steam-music-queue.jpg) + +To **launch the desktop player** click the musical note emblem in the upper-right hand corner or through the **View > Music Player** menu. 
+ +![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/steam-music.jpg) + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2014/10/use-steam-music-player-linux + +作者:[Joey-Elijah Sneddon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author diff --git a/sources/share/README.md b/sources/share/README.md new file mode 100644 index 0000000000..e5e225858e --- /dev/null +++ b/sources/share/README.md @@ -0,0 +1 @@ +这里放分享类文章,包括各种软件的简单介绍、有用的书籍和网站等。 diff --git a/sources/talk/20140617 7 Improvements The Linux Desktop Needs.md b/sources/talk/20140617 7 Improvements The Linux Desktop Needs.md index 9a5c215d2e..295cd1e406 100644 --- a/sources/talk/20140617 7 Improvements The Linux Desktop Needs.md +++ b/sources/talk/20140617 7 Improvements The Linux Desktop Needs.md @@ -1,3 +1,4 @@ +Translating by ZTinoZ 7 Improvements The Linux Desktop Needs ====================================== diff --git a/sources/talk/20140818 Can Ubuntu Do This--Answers to The 4 Questions New Users Ask Most.md b/sources/talk/20140818 Can Ubuntu Do This--Answers to The 4 Questions New Users Ask Most.md deleted file mode 100644 index 69831d7704..0000000000 --- a/sources/talk/20140818 Can Ubuntu Do This--Answers to The 4 Questions New Users Ask Most.md +++ /dev/null @@ -1,78 +0,0 @@ -Can Ubuntu Do This? — Answers to The 4 Questions New Users Ask Most -================================================================================ -![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/Screen-Shot-2014-08-13-at-14.31.42.png) - -**Type ‘Can Ubuntu’ into Google and you’ll see a stream of auto suggested terms put before you, all based on the queries asked most often by curious searchers. 
** - -For long-time Linux users these queries all have rather obvious answers. But for new users or those feeling out whether a distribution like Ubuntu is for them the answers are not quite so obvious; they’re pertinent, real and essential asks. - -So, in this article, I’m going to answer the top four most searched for questions asking “*Can Ubuntu…?*” - -### Can Ubuntu Replace Windows? ### - -![Windows isn’t to everyones tastes — or needs](http://www.omgubuntu.co.uk/wp-content/uploads/2014/07/windows-9-desktop-rumour.png) -Windows isn’t to everyones tastes — or needs - -Yes. Ubuntu (and most other Linux distributions) can be installed on just about any computer capable of running Microsoft Windows. - -Whether you **should** replace it will, invariably, depend on your own needs. - -For example, if you’re attending a school or college that requires access to Windows-only software, you may want to hold off replacing it entirely. Same goes for businesses; if your work depends on Microsoft Office, Adobe Creative Suite or a specific AutoCAD application you may find it easier to stick with what you have. - -But for most of us Ubuntu can replace Windows full-time. It offers a safe, dependable desktop experience that can run on and support a wide range of hardware. Software available covers everything from office suites to web browsers, video and music apps to games. - -### Can Ubuntu Run .exe Files? ### - -![You can run some Windows apps in Ubuntu](http://www.omgubuntu.co.uk/wp-content/uploads/2013/01/adobe-photoshop-cs2-free-linux.png) -You can run some Windows apps in Ubuntu - -Yes, though not out of the box, and not with guaranteed success. This is because software distributed in .exe are meant to run on Windows. These are not natively compatible with any other desktop operating system, including Mac OS X or Android. - -Software installers made for Ubuntu (and other Linux distributions) tend to come as ‘.deb’ files. 
These can be installed similarly to .exe — you just double-click and follow any on-screen prompts. - -But Linux is versatile. Using a compatibility layer called ‘Wine’ (which technically is not an emulator, but for simplicity’s sake can be referred to as one for shorthand) that can run many popular apps. They won’t work quite as well as they do on Windows, nor look as pretty. But, for many, it works well enough to use on a daily basis. - -Notable Windows software that can run on Ubuntu through Wine includes older versions of Photoshop and early versions of Microsoft Office . For a list of compatible software [refer to the Wine App Database][1]. - -### Can Ubuntu Get Viruses? ### - -![It may have errors, but it doesn’t have viruses](http://www.omgubuntu.co.uk/wp-content/uploads/2014/04/errors.jpg) -It may have errors, but it doesn’t have viruses - -Theoretically, yes. But in reality, no. - -Linux distributions are built in a way that makes it incredibly hard for viruses, malware and root kits to be installed, much less run and do any significant damage. - -For example, most applications run as a ‘regular user’ with no special administrative privileges, required for a virus to access critical parts of the operating system. Most software is also installed from well maintained and centralised sources, like the Ubuntu Software Center, and not random websites. This makes the risk of installing something that is infected negligible. - -Should you use anti-virus on Ubuntu? That’s up to you. For peace of mind, or if you’re regularly using Windows software through Wine or dual-booting, you can install a free and open-source virus scanner app like ClamAV, available from the Software Center. - -You can learn more about the potential for viruses on Linux/Ubuntu [on the Ubuntu Wiki][2]. - -### Can Ubuntu Play Games? 
### - -![Steam has hundreds of high-quality games for Linux](http://www.omgubuntu.co.uk/wp-content/uploads/2012/11/steambeta.jpg) -Steam has hundreds of high-quality games for Linux - -Oh yes it can. From the traditionally simple distractions of 2D chess, word games and minesweeper to modern AAA titles requiring powerful graphics cards, Ubuntu has a diverse range of games available for it. - -Your first port of call will be the **Ubuntu Software Center**. Here you’ll find a sizeable number of free, open-source and paid-for games, including acclaimed indie titles like World of Goo and Braid, as well as several sections filled with more traditional offerings, like PyChess, four-in-a-row and Scrabble clones. - -For serious gaming you’ll want to grab **Steam for Linux**. This is where you’ll find some of the latest and greatest games available, spanning the full gamut of genres. - -Also keep an eye on the [Humble Bundle][3]. These ‘pay what you want’ packages are held for two weeks every month or so. The folks at Humble have been fantastic supporters of Linux as a gaming platform, single-handily ensuring the Linux debut of many touted titles. 
- --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2014/08/ubuntu-can-play-games-replace-windows-questions - -作者:[Joey-Elijah Sneddon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:https://appdb.winehq.org/ -[2]:https://help.ubuntu.com/community/Antivirus -[3]:https://www.humblebundle.com/ \ No newline at end of file diff --git a/sources/talk/20140818 Upstream and Downstream--why packaging takes time.md b/sources/talk/20140818 Upstream and Downstream--why packaging takes time.md index fc1c708b14..c4af2aef08 100644 --- a/sources/talk/20140818 Upstream and Downstream--why packaging takes time.md +++ b/sources/talk/20140818 Upstream and Downstream--why packaging takes time.md @@ -1,3 +1,5 @@ +[felixonmars translating...] + Upstream and Downstream: why packaging takes time ================================================================================ Here in the KDE office in Barcelona some people spend their time on purely upstream KDE projects and some of us are primarily interested in making distros work which mean our users can get all the stuff we make. I've been asked why we don't just automate the packaging and go and do more productive things. One view of making on a distro like Kubuntu is that its just a way to package up the hard work done by others to take all the credit. I don't deny that, but there's quite a lot to the packaging of all that hard work, for a start there's a lot of it these days. 
diff --git a/sources/talk/20140822 Want To Start An Open Source Project--Here's How.md b/sources/talk/20140822 Want To Start An Open Source Project--Here's How.md index 976b2dc8ff..62af776e6a 100644 --- a/sources/talk/20140822 Want To Start An Open Source Project--Here's How.md +++ b/sources/talk/20140822 Want To Start An Open Source Project--Here's How.md @@ -1,3 +1,5 @@ + Vic020 + Want To Start An Open Source Project? Here's How ================================================================================ > Our step-by-step guide. @@ -104,4 +106,4 @@ via: http://readwrite.com/2014/08/20/open-source-project-how-to [9]:http://opensourcedesign.is/blogging_about/import-designers/ [10]:https://twitter.com/h_ingo/status/501323333301190656 [11]:https://news.ycombinator.com/item?id=8122814 -[12]:http://www.shutterstock.com/ \ No newline at end of file +[12]:http://www.shutterstock.com/ diff --git a/sources/talk/20140826 Linus Torvalds Started a Revolution on August 25 1991 Happy Birthday Linux.md b/sources/talk/20140826 Linus Torvalds Started a Revolution on August 25 1991 Happy Birthday Linux.md deleted file mode 100644 index da1df59c37..0000000000 --- a/sources/talk/20140826 Linus Torvalds Started a Revolution on August 25 1991 Happy Birthday Linux.md +++ /dev/null @@ -1,41 +0,0 @@ -Linus Torvalds Started a Revolution on August 25, 1991. Happy Birthday, Linux! -================================================================================ -![Linus Torvalds](http://i1-news.softpedia-static.com/images/news2/Linus-Torvalds-Started-a-Revolution-on-August-25-1991-Happy-Birthday-Linux-456212-2.jpg) -Linus Torvalds - -**The Linux project has just turned 23 and it's now the biggest collaborative endeavor in the world, with thousands of people working on it.** - -Back in 1991, a young programmer called Linus Torvalds wanted to make a free operating system that wasn't going to be as big as the GNU project and that was just a hobby. 
He started something that would turn out to be the most successful operating system on the planet, but no one would have been able to guess it back then. - -Linus Torvalds sent an email on August 25, 1991, asking for help in testing his new operating system. Things haven't changed all that much in the meantime and he still sends emails about new Linux releases, although back then it wasn't called that. - -"I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things). I've currently ported bash(1.08) and gcc(1.40), and things seem to work." - -"This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-) PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT protable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(. " [wrote][1] Linus Torvalds. - -This is the entire email that started it all, and it's interesting to see how things have evolved since then. The Linux operating system caught on, especially on the server market, but the power of Linux also extended into other areas. - -In fact, it's hard to find any technology that hasn't been influenced by Linux. Phones, TVs, fridges, minicomputers, consoles, tablets, and basically everything that has a chip in it is capable of running Linux or it already has some sort of Linux-based OS installed on it. - -Linux is omnipresent on billions of devices and its influence is growing each year on an exponential basis.
You might think that Linus is also the wealthiest man on the planet, but remember, Linux is free software and anyone can use it, modify it, and make money off of it. He didn't do it for the money. - -Linus Torvalds started a revolution in 1991, but it hasn't ended. In fact, you could say that it's just getting started. - -> Happy Anniversary, Linux! Please join us in celebrating 23 years of the free OS that has changed the world. [pic.twitter.com/mTVApV85gD][2] -> -> — The Linux Foundation (@linuxfoundation) [August 25, 2014][3] - -------------------------------------------------------------------------------- - -via: http://news.softpedia.com/news/Linus-Torvalds-Started-a-Revolution-on-August-25-1991-Happy-Birthday-Linux-456212.shtml - -作者:[Silviu Stahie][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://news.softpedia.com/editors/browse/silviu-stahie -[1]:https://groups.google.com/forum/#!original/comp.os.minix/dlNtH7RRrGA/SwRavCzVE7gJ -[2]:http://t.co/mTVApV85gD -[3]:https://twitter.com/linuxfoundation/statuses/503799441900314624 \ No newline at end of file diff --git a/sources/talk/20140910 Why Do Some Old Programming Languages Never Die.md b/sources/talk/20140910 Why Do Some Old Programming Languages Never Die.md deleted file mode 100644 index 7e33b05047..0000000000 --- a/sources/talk/20140910 Why Do Some Old Programming Languages Never Die.md +++ /dev/null @@ -1,86 +0,0 @@ -(translating by runningwater) -Why Do Some Old Programming Languages Never Die? -================================================================================ -> We like what we already know. - -![](http://a4.files.readwrite.com/image/upload/c_fill,h_900,q_70,w_1600/MTIzMDQ5NjY0MTUxMjU4NjM2.jpg) - -Many of today’s most well-known programming languages are old enough to vote. PHP is 20. Python is 23. HTML is 21. Ruby and JavaScript are 19.
C is a whopping 42 years old. - -Nobody could have predicted this. Not even computer scientist [Brian Kernighan][1], co-author of the very first book on C, which is still being printed today. (The language itself was the work of Kernighan's [co-author Dennis Ritchie][2], who passed away in 2011.) - -“I dimly recall a conversation early on with the editors, telling them that we’d sell something like 5,000 copies of the book,” Kernighan told me in a recent interview. “We managed to do better than that. I didn’t think students would still be using a version of it as a textbook in 2014.” - -What’s especially remarkable about C's persistence is that Google developed a new language, Go, specifically to more efficiently solve the problems C solves now. Still, it’s hard for Kernighan to imagine something like Go outright killing C no matter how good it is. - -“Most languages don’t die—or at least once they get to a certain level of acceptance they don’t die," he said. "C still solves certain problems better than anything else, so it sticks around.” - -### Write What You Know ### - -Why do some computer languages become more successful than others? Because developers choose to use them. That’s logical enough, but it gets tricky when you want to figure out why developers choose to use the languages they do. - -Ari Rabkin and Leo Meyerovich are researchers from, respectively, Princeton and the University of California at Berkeley who devoted two years to answering just that question. Their resulting paper, [Empirical Analysis of Programming Language Adoption][3], describes their analysis of more than 200,000 Sourceforge projects and polling of more than 13,000 programmers. - -Their main finding? Most of the time programmers choose programming languages they know. - -“There are languages we use because we’ve always used them,” Rabkin told me. 
“For example, astronomers historically use IDL [Interactive Data Language] for their computer programs, not because it has special features for stars or anything, but because it has tremendous inertia. They have good programs they’ve built with it that they want to keep.” - -In other words, it’s partly thanks to name recognition that established languages retain monumental staying power. Of course, that doesn’t mean popular languages don’t change. Rabkin noted that the C we use today is nothing like the language Kernighan first wrote about, which probably wouldn’t be fully compatible with a modern C compiler. - -“There’s an old, relevant joke in which an engineer is asked which language he thinks people will be using in 30 years and he says, ‘I don’t know, but it’ll be called Fortran’,” Rabkin said. “Long-lived languages are not the same as how they were when they were designed in the '70s and '80s. People have mostly added things instead of removed because that doesn’t break backwards compatibility, but many features have been fixed.” - -This backwards compatibility means that not only can programmers continue to use languages as they update programs, they also don’t need to go back and rewrite the oldest sections. That older ‘legacy code’ keeps languages around forever, but at a cost. As long as it’s there, people’s beliefs about a language will stick around, too. - -### PHP: A Case Study Of A Long-Lived Language ### - -Legacy code refers to programs—or portions of programs—written in outdated source code. Think, for instance, of key programming functions for a business or engineering project that are written in a language that no one supports. They still carry out their original purpose and are too difficult or expensive to rewrite in modern code, so they stick around, forcing programmers to turn handsprings to ensure they keep working even as other code changes around them. 
- -Any language that's been around more than a few years has a legacy-code problem of some sort, and PHP is no exception. PHP is an interesting example because its legacy code is distinctly different from its modern code, in what proponents say—and critics admit—is a huge improvement. - -Andi Gutmans is a co-inventor of the Zend Engine, the compiler that became standard by the time PHP4 came around. Gutmans said he and his partner originally wanted to improve PHP3, and were so successful that the original PHP inventor, Rasmus Lerdorf, joined their project. The result was a compiler for PHP4 and its successor, PHP5. - -As a consequence, the PHP of today is quite different from its progenitor, the original PHP. Yet in Gutmans' view, the base of legacy code written in older PHP versions keeps alive old prejudices against the language—such as the notion that PHP is riddled with security holes, or that it can't "scale" to handle large computing tasks. - -"People who criticize PHP are usually criticizing where it was in 1998,” he says. “These people are not up-to-date with where it is today. PHP today is a very mature ecosystem.” - -Today, Gutmans says, the most important thing for him as a steward is encouraging people to keep updating to the latest versions. “PHP is a big enough community now that you have big legacy code bases," he says. "But generally speaking, most of our communities are on PHP5.3 at minimum.” - -The issue is that users never fully upgrade to the latest version of any language. It’s why many Python users are still using Python 2, released in 2000, instead of Python 3, released in 2008. Even after six years major users like Google still aren’t upgrading. There are a variety of reasons for this, but it made many developers wary about taking the plunge. - -“Nothing ever dies," Rabkin says. "Any language with legacy code will last forever.
Rewrites are expensive and if it’s not broke don’t fix it.” - -### Developer Brains As Scarce Resources ### - -Of course, developers aren’t choosing these languages merely to maintain pesky legacy code. Rabkin and Meyerovich found that when it comes to language preference, age is just a number. As Rabkin told me: - -> A thing that really shocked us and that I think is important is that we grouped people by age and asked them how many languages they know. Our intuition was that it would gradually rise over time; it doesn’t. Twenty-five-year-olds and 45-year-olds all know about the same number of languages. This was constant through several rewordings of the question. Your chance of knowing a given language does not vary with your age. - -In other words, it’s not just old developers who cling to the classics; young programmers are also discovering and adopting old languages for the first time. That could be because the languages have interesting libraries and features, or because the communities these developers are a part of have adopted the language as a group. - -“There’s a fixed amount of programmer attention in the world,” said Rabkin. “If a language delivers enough distinctive value, people will learn it and use it. If the people you exchange code and knowledge with share a language, you’ll want to learn it. So for example, as long as those libraries are Python libraries and community expertise is Python experience, Python will do well.” - -Communities are a huge factor in how languages do, the researchers discovered. While there's not much difference between high level languages like Python and Ruby, for example, programmers are prone to develop strong feelings about the superiority of one over the other. - -“Rails didn’t have to be written in Ruby, but since it was, it proves there were social factors at work,” Rabkin says.
“For example, the thing that resurrected Objective-C is that the Apple engineering team said, ‘Let’s use this.’ They didn’t have to pick it.” - -Through social influence and legacy code, our oldest and most popular computer languages have powerful inertia. How could Go surpass C? If the right people and companies say it ought to. - -“It comes down to who is better at evangelizing a language,” says Rabkin. - -Lead image by [Blake Patterson][4] - -------------------------------------------------------------------------------- - -via: http://readwrite.com/2014/09/02/programming-language-coding-lifetime - -作者:[Lauren Orsini][a] -译者:[runningwater](https://github.com/runningwater) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://readwrite.com/author/lauren-orsini -[1]:http://en.wikipedia.org/wiki/Brian_Kernighan -[2]:http://en.wikipedia.org/wiki/Dennis_Ritchie -[3]:http://asrabkin.bitbucket.org/papers/oopsla13.pdf -[4]:https://www.flickr.com/photos/blakespot/2444037775/ \ No newline at end of file diff --git a/sources/talk/20140912 What' s wrong with IPv4 and Why we are moving to IPv6.md b/sources/talk/20140912 What' s wrong with IPv4 and Why we are moving to IPv6.md deleted file mode 100644 index 1289613263..0000000000 --- a/sources/talk/20140912 What' s wrong with IPv4 and Why we are moving to IPv6.md +++ /dev/null @@ -1,86 +0,0 @@ -What’s wrong with IPv4 and Why we are moving to IPv6 -================================================================================ -For the past 10 years or so, this has been the year that IPv6 will become widespread. It hasn’t happened yet. Consequently, there is little widespread knowledge of what IPv6 is, how to use it, or why it is inevitable. - -![IPv4 and IPv6 Comparison](http://www.tecmint.com/wp-content/uploads/2014/09/ipv4-ipv6.gif) - -IPv4 and IPv6 Comparison - -### What’s wrong with IPv4?
### - -We’ve been using **IPv4** ever since RFC 791 was published in 1981. At the time, computers were big, expensive, and rare. IPv4 had provision for **4 billion IP** addresses, which seemed like an enormous number compared to the number of computers. Unfortunately, IP addresses are not used efficiently. There are gaps in the addressing. For example, a company might have an address space of **254 (2^8-2)** addresses, and only use 25 of them. The remaining 229 are reserved for future expansion. Those addresses cannot be used by anybody else, because of the way networks route traffic. Consequently, what seemed like a large number in 1981 is actually a small number in 2014. - -The Internet Engineering Task Force (**IETF**) recognized this problem in the early 1990s and came up with two solutions: Classless Inter-Domain Routing (**CIDR**) and private IP addresses. Prior to the invention of CIDR, you could get one of three network sizes: **24 bits** (16,777,214 addresses), **20 bits** (1,048,574 addresses) and **16 bits** (65,534 addresses). Once CIDR was invented, it was possible to split networks into subnetworks.
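To make the subnetting arithmetic above concrete, here is a quick illustration using Python's standard `ipaddress` module (the specific addresses are illustrative examples, not taken from the article): a /24 block is split into /29 subnetworks, each providing 2^3 - 2 = 6 usable host addresses, which matches the "need 5 addresses, get 6" scenario described above.

```python
import ipaddress

# CIDR lets a 254-address block (a /24) be split into smaller subnetworks.
net = ipaddress.ip_network("192.0.2.0/24")

# Split into /29 subnets: 2^(29-24) = 32 subnets of 8 addresses each.
subnets = list(net.subnets(new_prefix=29))
print(len(subnets))  # 32

# Usable hosts per /29: 2^(32-29) - 2 = 6 (network and broadcast excluded).
first = subnets[0]
print(first, first.num_addresses - 2)  # 192.0.2.0/29 6
```

This is exactly the efficiency gain the article describes: an ISP can hand out a /29 instead of a whole /24 when a customer only needs a handful of addresses.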
Check your own IP address: if it is in the range of **10.0.0.0 – 10.255.255.255** or **172.16.0.0 – 172.31.255.255** or **192.168.0.0 – 192.168.255.255**, then you are using a private IP address. These two solutions helped forestall disaster, but they were stopgap measures and now the time of reckoning is upon us. - -Another problem with **IPv4** is that the IPv4 header was variable length. That was acceptable when routing was done by software. But now routers are built with hardware, and processing the variable length headers in hardware is hard. The large routers that allow packets to go all over the world are having problems coping with the load. Clearly, a new scheme was needed with fixed length headers. - -Still another problem with **IPv4** is that, when the addresses were allocated, the internet was an American invention. IP addresses for the rest of the world are fragmented. A scheme was needed to allow addresses to be aggregated somewhat by geography so that the routing tables could be made smaller. - -Yet another problem with IPv4, and this may sound surprising, is that it is hard to configure, and hard to change. This might not be apparent to you, because your router takes care of all of these details for you. But the problems drive your ISP nuts. - -All of these problems went into the consideration of the next version of the Internet. - -### About IPv6 and its Features ### - -The **IETF** unveiled the next generation of IP in December 1995. The new version was called IPv6 because the number 5 had been allocated to something else by mistake. Some of the features of IPv6 include: - -- 128 bit addresses (3.402823669×10³⁸ addresses) -- A scheme for logically aggregating addresses -- Fixed length headers -- A protocol for automatically configuring and reconfiguring your network. - -Let’s look at these features one by one: - -#### Addresses #### - -The first thing everybody notices about **IPv6** is that the number of addresses is enormous.
Why so many? The answer is that the designers were concerned about the inefficient organization of addresses, so there are so many available addresses that we could allocate inefficiently in order to achieve other goals. So, if you want to build your own IPv6 network, chances are that your ISP will give you a network of **64 bits** (1.844674407×10¹⁹ addresses) and let you subnet that space to your heart’s content. - -#### Aggregation #### - -With so many addresses to use, the address space can be allocated sparsely in order to route packets efficiently. So, your ISP gets a network space of **80 bits**. Of those 80 bits, 16 of them are for the ISP’s subnetworks, and 64 bits are for the customers’ networks. So, the ISP can have 65,534 networks. - -However, that address allocation isn’t cast in stone, and if the ISP wants more smaller networks, it can do that (although the ISP would probably simply ask for another space of 80 bits). The upper 48 bits are further divided, so that ISPs that are “**close**” to one another have similar network address ranges, to allow the networks to be aggregated in the routing tables. - -#### Fixed length Headers #### - -An **IPv4** header has a variable length. An **IPv6** header always has a fixed length of 40 bytes. In IPv4, extra options caused the header to increase in size. In IPv6, if additional information is needed, that additional information is stored in extension headers, which follow the IPv6 header and are generally not processed by the routers, but rather by the software at the destination. - -One of the fields in the IPv6 header is the flow. A flow is a **20 bit** number which is created pseudo-randomly, and it makes it easier for the routers to route packets. If a packet has a flow, then the router can use that flow number as an index into a table, which is fast, rather than doing a full table lookup, which is slow. This feature makes **IPv6** very easy to route.
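The private ranges quoted earlier, and the sheer size of the IPv6 address space, can both be checked with Python's standard `ipaddress` module. The sample addresses here are illustrative only (8.8.8.8 is just a well-known public address used as a counterexample):

```python
import ipaddress

# The three private (RFC 1918) ranges quoted earlier cover these samples:
for sample in ("10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"):
    print(sample, ipaddress.ip_address(sample).is_private)
# The first three are private; 8.8.8.8 is not.

# IPv6 addresses are 128 bits wide: 2**128, about 3.4 x 10**38 addresses.
print(ipaddress.ip_network("::/0").num_addresses == 2**128)  # True
```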
- -#### Automatic Configuration #### - -In **IPv6**, when a machine first starts up, it checks the local network to see if any other machine is using its address. If the address is unused, then the machine next looks for an IPv6 router on the local network. If it finds the router, then it asks the router for an IPv6 address to use. Now, the machine is set and ready to communicate on the internet – it has an IP address for itself and it has a default router. - -If the router should go down, then the machines on the network will detect the problem and repeat the process of looking for an IPv6 router, to find the backup router. That’s actually hard to do in IPv4. Similarly, if the router wants to change the addressing scheme on its network, it can. The machines will query the router from time to time and change their addresses automatically. The router will support both the old and new addresses until all of the machines have switched over to the new configuration. - -IPv6 automatic configuration is not a complete solution. There are some other things that a machine needs in order to use the internet effectively: the name servers, a time server, perhaps a file server. So there is **dhcp6** which does the same thing as dhcp, only because the machine boots in a routable state, one dhcp daemon can service a large number of networks. - -#### There’s one big problem #### - -So if IPv6 is so much better than IPv4, why hasn’t adoption been more widespread (as of **May 2014**, Google estimates that its IPv6 traffic is about **4%** of its total traffic)? The basic problem is which comes first, the **chicken or the egg**? Somebody running a server wants the server to be as widely available as possible, which means it must have an **IPv4** address. - -It could also have an IPv6 address, but few people would use it and you do have to change your software a little to accommodate IPv6. Furthermore, a lot of home networking routers do not support IPv6. 
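Returning to the automatic configuration described above: the classic way a machine derives its own IPv6 link-local address is the EUI-64 scheme, which embeds the interface's MAC address (with one bit flipped and `ff:fe` inserted in the middle). Below is a sketch of that derivation; the MAC address is made up, and note that many modern systems prefer randomized interface identifiers over EUI-64 for privacy.

```python
import ipaddress

def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
    """Classic SLAAC: derive a link-local address (fe80::/64) from a MAC."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02                 # flip the universal/local bit
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)

print(eui64_link_local("00:1a:2b:3c:4d:5e"))  # fe80::21a:2bff:fe3c:4d5e
```

This is why, even before any router answers, an IPv6 host already has a usable on-link address to run duplicate-address detection and router discovery with.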
A lot of ISPs do not support IPv6. I asked my ISP about it, and I was told that they will provide it when customers ask for it. So I asked how many customers had asked for it. One, including me. - -By way of contrast, all of the major operating systems, Windows, OS X, and Linux support IPv6 “**out of the box**” and have for years. The operating systems even have software that will allow IPv6 packets to “**tunnel**” within IPv4 to a point where the IPv6 packets can be removed from the surrounding IPv4 packet and sent on their way. - -#### Conclusion #### - -IPv4 has served us well for a long time. IPv4 has some limitations which are going to present insurmountable problems in the near future. IPv6 will solve those problems by changing the strategy for allocating addresses, making improvements to ease the routing of packets, and making it easier to configure a machine when it first joins the network. - -However, acceptance and usage of IPv6 has been slow, because change is hard and expensive. The good news is that all operating systems support IPv6, so when you are ready to make the change, your computer will need little effort to convert to the new scheme. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/ipv4-and-ipv6-comparison/ - -作者:[Jeff Silverman][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/jeffsilverm/ \ No newline at end of file diff --git a/sources/talk/20140915 10 Open Source Cloning Software For Linux Users.md b/sources/talk/20140915 10 Open Source Cloning Software For Linux Users.md index e96b5c37f5..ac9c87bf09 100644 --- a/sources/talk/20140915 10 Open Source Cloning Software For Linux Users.md +++ b/sources/talk/20140915 10 Open Source Cloning Software For Linux Users.md @@ -1,3 +1,5 @@ +[felixonmars translating...] 
+ 10 Open Source Cloning Software For Linux Users ================================================================================ > These cloning software take all disk data, convert them into a single .img file and you can copy it to another hard drive. @@ -84,4 +86,4 @@ via: http://www.efytimes.com/e1/fullnews.asp?edid=148039 [7]:http://doclone.nongnu.org/ [8]:http://www.macrium.com/reflectfree.aspx [9]:http://www.runtime.org/driveimage-xml.htm -[10]:http://www.paragon-software.com/home/br-free/ \ No newline at end of file +[10]:http://www.paragon-software.com/home/br-free/ diff --git a/sources/talk/20140926 ChromeOS vs Linux--The Good, the Bad and the Ugly.md b/sources/talk/20140926 ChromeOS vs Linux--The Good, the Bad and the Ugly.md deleted file mode 100644 index 7ef89a4348..0000000000 --- a/sources/talk/20140926 ChromeOS vs Linux--The Good, the Bad and the Ugly.md +++ /dev/null @@ -1,108 +0,0 @@ -barney-ro translating - -ChromeOS vs Linux: The Good, the Bad and the Ugly -ChromeOS 对战 Linux : 孰优孰劣 仁者见仁 智者见智 -================================================================================ -> In the battle between ChromeOS and Linux, both desktop environments have strengths and weaknesses. - -> 在 ChromeOS 和 Linux 的斗争过程中,不管是哪一家的操作系统都是有优有劣。 - -Anyone who believes Google isn't "making a play" for desktop users isn't paying attention. In recent years, I've seen [ChromeOS][1] making quite a splash on the [Google Chromebook][2]. Exploding with popularity on sites such as Amazon.com, it looks as if ChromeOS could be unstoppable. - -任何不关注Google的人都不会相信Google在桌面用户当中扮演这一个很重要的角色。在近几年,我们见到的[ChromeOS][1]制造的[Google Chromebook][2]相当的轰动。和同期的人气火爆的Amazon一样,似乎ChromeOS势不可挡。 - -In this article, I'm going to look at ChromeOS as a concept to market, how it's affecting Linux adoption and whether or not it's a good/bad thing for the Linux community as a whole. Plus, I'll talk about the biggest issue of all and how no one is doing anything about it. 
- -在本文中,我们将审视 ChromeOS 的市场定位、它对 Linux 桌面普及的影响,以及它对整个 Linux 社区而言是好事还是坏事。另外,我还会谈谈其中最大的问题,以及为什么没有人去为此做点什么。 - -### ChromeOS isn't really Linux ### - -### ChromeOS 并不是真正的Linux ### - -When folks ask me if ChromeOS is a Linux distribution, I usually reply that ChromeOS is to Linux what OS X is to BSD. In other words, I consider ChromeOS to be a forked operating system that uses the Linux kernel under the hood. Much of the operating system is made up of Google's own proprietary blend of code and software. - -每当有朋友问我 ChromeOS 是否是 Linux 的一个发行版时,我都会这样回答:ChromeOS 对于 Linux 就好像是 OS X 对于 BSD。换句话说,我认为 ChromeOS 是一个派生的操作系统,在底层使用了 Linux 内核,而这个操作系统的大部分则由 Google 自己的专有代码和软件组成。 - -So while the ChromeOS is using the Linux kernel under its hood, it's still very different from what we might find with today's modern Linux distributions. - -尽管 ChromeOS 使用了 Linux 内核,但它和现今流行的 Linux 发行版仍有很大的不同。 - -Where ChromeOS's difference becomes most apparent, however, is in the apps it offers the end user: Web applications. With everything being launched from a browser window, Linux users might find using ChromeOS to be a bit vanilla. But for non-Linux users, the experience is not all that different than what they may have used on their old PCs. - -ChromeOS 最明显的不同之处,在于它提供给终端用户的应用:Web 应用。因为 ChromeOS 的每一个操作都是在浏览器窗口中进行的,Linux 用户可能会觉得单调,但对于没有 Linux 经验的用户来说,这与他们使用的旧电脑并没有什么不同。 - -For example: Anyone who is living a Google-centric lifestyle on Windows will feel right at home on ChromeOS. Odds are this individual is already relying on the Chrome browser, Google Drive and Gmail. By extension, moving over to ChromeOS feels fairly natural for these folks, as they're simply using the browser they're already used to. - -比如说:在 Windows 上过着以 Google 为中心的生活方式的人,在 ChromeOS 上会有宾至如归的感觉。这样的人多半已经离不开 Chrome 浏览器、Google Drive 和 Gmail 了。因此,迁移到 ChromeOS 对他们来说相当自然,因为用的正是自己早已习惯的浏览器。
Software choices feel limited and boxed in, plus games and VoIP are totally out of the question. Sorry, but [GooglePlus Hangouts][3] isn't a replacement for [VoIP][4] software. Not even by a long shot. - -然而,对于Linux 爱好者来说,这样就立即带来了不适应。软件的选择是受限制的,盒装的,在加上游戏和VoIP 是完全不可能的。对不起,因为[GooglePlus Hangouts][3]是代替不了VoIP 软件的。甚至在很长的一段时间里。 - -### ChromeOS or Linux on the desktop ### - -### ChromeOS 和Linux 的桌面化 ### -Anyone making the claim that ChromeOS hurts Linux adoption on the desktop needs to come up for air and meet non-technical users sometime. - -有人断言,ChromeOS 要是想在桌面系统中对Linux 产生影响,只有在Linux 停下来浮出水面换气的时候或者是满足某个非技术用户的时候。 - -Yes, desktop Linux is absolutely fine for most casual computer users. However it helps to have someone to install the OS and offer "maintenance" services like we see in the Windows and OS X camps. Sadly Linux lacks this here in the States, which is where I see ChromeOS coming into play. - -是的,桌面Linux 对于大多数休闲型的用户来说绝对是一个好东西。它有助于有专人安装操作系统,并且提供“维修”服务,从windows 和 OS X 的阵营来看。但是,令人失望的是,在美国Linux 正好在这个方面很缺乏。所以,我们看到,ChromeOS 慢慢的走入我们的视线。 - -I've found the Linux desktop is best suited for environments where on-site tech support can manage things on the down-low. Examples include: Homes where advanced users can drop by and handle updates, governments and schools with IT departments. These are environments where Linux on the desktop is set up to be used by users of any skill level or background. - -By contrast, ChromeOS is built to be completely maintenance free, thus not requiring any third part assistance short of turning it on and allowing updates to do the magic behind the scenes. This is partly made possible due to the ChromeOS being designed for specific hardware builds, in a similar spirit to how Apple develops their own computers. Because Google has a pulse on the hardware ChromeOS is bundled with, it allows for a generally error free experience. And for some individuals, this is fantastic! 
-
-Comically, the folks who exclaim that there's a problem here are not even remotely the target market for ChromeOS. In short, these are passionate Linux enthusiasts looking for something to gripe about. My advice? Stop inventing problems where none exist.
-
-The point is: the market shares for ChromeOS and Linux on the desktop are not even remotely the same. This could change in the future, but at this time, these two groups are largely separate.
-
-### ChromeOS use is growing ###
-
-No matter what your view of ChromeOS happens to be, the fact remains that its adoption is growing. New computers built for ChromeOS are being released all the time. One of the most recent ChromeOS computer releases is from Dell. Appropriately named the [Dell Chromebox][5], this desktop ChromeOS appliance is yet another shot at traditional computing. It has zero software DVDs, no anti-malware software, and offers completely seamless updates behind the scenes. For casual users, Chromeboxes and Chromebooks are becoming a viable option for those who do most of their work from within a web browser.
-
-Despite this growth, ChromeOS appliances face one huge downside – storage. Bound by limited hard drive size and a heavy reliance on cloud storage, ChromeOS isn't going to cut it for anyone who uses their computers outside of basic web browser functionality.
-
-### ChromeOS and Linux crossing streams ###
-
-Previously, I mentioned that ChromeOS and Linux on the desktop are in two completely separate markets. The reason why this is the case stems from the fact that the Linux community has done a horrid job at promoting Linux on the desktop offline.
-
-Yes, there are occasional events where casual folks might discover this "Linux thing" for the first time. But there isn't a single entity to then follow up with these folks, making sure they’re getting their questions answered and that they're getting the most out of Linux.
-
-In reality, the likely offline discovery breakdown goes something like this:
-
-- Casual user finds out about Linux at their local Linux event.
-- They bring the DVD/USB device home and attempt to install the OS.
-- While some folks very well may have success with the install process, I've been contacted by a number of folks with the opposite experience.
-- Frustrated, these folks are then expected to "search" online forums for help. Difficult to do on a primary computer experiencing network or video issues.
-- Completely fed up, some of the frustrated folks above bring their computers back into a Windows shop for "repair." In addition to Windows being re-installed, they also receive an earful about how "Linux isn't for them" and should be avoided.
-
-Some of you might charge that the above example is exaggerated. I would respond with this: It's happened to people I know personally and it happens often. Wake up, Linux community, our adoption model is broken and tired.
-
-### Great platforms, horrible marketing and closing thoughts ###
-
-If there is one thing that I feel ChromeOS and Linux on the desktop have in common...besides the Linux kernel, it's that they both happen to be great products with rotten marketing. The advantage, however, goes to Google with this one, due to their ability to spend big money online and reserve shelf space at big box stores.
-
-Google believes that because they have the "online advantage," offline efforts aren't really that important. This is incredibly short-sighted and reflects one of Google's biggest missteps. The belief that if you're not exposed to their online efforts, you're not worth bothering with, is only countered by local shelf-space at select big box stores.
-
-My suggestion is this – offer Linux on the desktop to the ChromeOS market through offline efforts.
This means Linux User Groups need to start raising funds to be present at county fairs and mall kiosks during the holiday season, and to teach free classes at community centers. This will immediately put Linux on the desktop in front of the same audience that might otherwise end up with a ChromeOS powered appliance.
-
-If local offline efforts like this don't happen, not to worry. Linux on the desktop will continue to grow as will the ChromeOS market. Sadly though, it will absolutely keep the two markets separate as they are now.
-
---------------------------------------------------------------------------------
-
-via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-and-the-ugly-1.html
-
-作者:[Matt Hartley][a]
-译者:[barney-ro](https://github.com/barney-ro)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:http://www.datamation.com/author/Matt-Hartley-3080.html
-[1]:http://en.wikipedia.org/wiki/Chrome_OS
-[2]:http://www.google.com/chrome/devices/features/
-[3]:https://plus.google.com/hangouts
-[4]:http://en.wikipedia.org/wiki/Voice_over_IP
-[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html
diff --git a/sources/talk/20140929 Shellshock--How to protect your Unix, Linux and Mac servers.md b/sources/talk/20140929 Shellshock--How to protect your Unix, Linux and Mac servers.md
new file mode 100644
index 0000000000..c053e6d9e9
--- /dev/null
+++ b/sources/talk/20140929 Shellshock--How to protect your Unix, Linux and Mac servers.md
@@ -0,0 +1,101 @@
+luoyutiantan
+Shellshock: How to protect your Unix, Linux and Mac servers
+================================================================================
+> **Summary**: The Unix/Linux Bash security hole can be deadly to your servers. Here's what you need to worry about, how to see if you can be attacked, and what to do if your shields are down.
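As a quick way to act on that summary, here is a minimal, hypothetical self-check that wraps the standard CVE-2014-6271 probe quoted later in this article (the `probe` and `status` variable names are my own); it only inspects the local bash and changes nothing:

```shell
#!/bin/sh
# Sketch of a local Shellshock (CVE-2014-6271) self-check.
# A vulnerable bash imports the function definition held in $x and then
# also runs the extra command smuggled in after it, printing "vulnerable".
probe=$(env x='() { :;}; echo vulnerable' bash -c "echo this is a test" 2>/dev/null)

case "$probe" in
  *vulnerable*) status="VULNERABLE" ;;   # the injected command ran
  *)            status="patched"    ;;   # the injection was ignored
esac

echo "CVE-2014-6271 check: bash looks $status"
```

Note that the follow-up hole, CVE-2014-7169, needs its own separate test, so a "patched" result here is necessary but not sufficient.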
+
+The only thing you have to fear with [Shellshock, the Unix/Linux Bash security hole][1], is fear itself. Yes, Shellshock can serve as a highway for worms and malware to hit your Unix, Linux, and Mac servers, but you can defend against it.
+
+![](http://cdn-static.zdnet.com/i/r/story/70/00/034072/cybersecurity-v1-620x464.jpg?hash=BQMxZJWuZG&upscale=1)
+
+If you don't patch and defend yourself against Shellshock today, you may have lost control of your servers by tomorrow.
+
+However, Shellshock is not as bad as [HeartBleed][2]. Not yet, anyway.
+
+While it's true that the [Bash shell][3] is the default command interpreter on most Unix and Linux systems and all Macs — the majority of Web servers — for an attacker to get to your system, there has to be a way for him or her to actually get to the shell remotely. So, if you're running a PC without [ssh][4], [rlogin][5], or another remote desktop program, you're probably safe enough.
+
+A more serious problem is faced by devices that use embedded Linux — such as routers, switches, and appliances. If you're running an older, no longer supported model, it may be close to impossible to patch it, and it will likely be vulnerable to attacks. If that's the case, you should replace it as soon as possible.
+
+The real and present danger is for servers. According to the National Institute of Standards and Technology (NIST), [Shellshock scores a perfect 10][6] for potential impact and exploitability. [Red Hat][7] reports that the most common attack vectors are:
+
+- **httpd (Your Web server)**: CGI [Common-Gateway Interface] scripts are likely affected by this issue: when a CGI script is run by the web server, it uses environment variables to pass data to the script. These environment variables can be controlled by the attacker. If the CGI script calls Bash, the script could execute arbitrary code as the httpd user. mod_php, mod_perl, and mod_python do not use environment variables and we believe they are not affected.
+- **Secure Shell (SSH)**: It is not uncommon to restrict remote commands that a user can run via SSH, such as rsync or git. In these instances, this issue can be used to execute any command, not just the restricted command.
+- **dhclient**: The [Dynamic Host Configuration Protocol Client (dhclient)][8] is used to automatically obtain network configuration information via DHCP. This client uses various environment variables and runs Bash to configure the network interface. Connecting to a malicious DHCP server could allow an attacker to run arbitrary code on the client machine.
+- **[CUPS][9] (Linux, Unix and Mac OS X's print server)**: It is believed that CUPS is affected by this issue. Various user-supplied values are stored in environment variables when cups filters are executed.
+- **sudo**: Commands run via sudo are not affected by this issue. Sudo specifically looks for environment variables that are also functions. It could still be possible for the running command to set an environment variable that could cause a Bash child process to execute arbitrary code.
+- **Firefox**: We do not believe Firefox can be forced to set an environment variable in a manner that would allow Bash to run arbitrary commands. It is still advisable to upgrade Bash as it is common to install various plug-ins and extensions that could allow this behavior.
+- **Postfix**: The Postfix [mail] server will replace various characters with a ?. While the Postfix server does call Bash in a variety of ways, we do not believe an arbitrary environment variable can be set by the server. It is however possible that a filter could set environment variables.
+
+So much for Red Hat's thoughts. Of these, the Web servers and SSH are the ones that worry me the most. The DHCP client is also troublesome, especially if, as is the case with small businesses, your external router doubles as your Internet gateway and DHCP server.
+
+Of these, Web server attacks seem to be the most common by far.
As Florian Weimer, a Red Hat security engineer, wrote: "[HTTP requests to CGI scripts][10] have been identified as the major attack vector." Attacks are being made against systems [running both Linux and Mac OS X][11].
+
+Jaime Blasco, labs director at [AlienVault][12], a security management services company, ran a [honeypot][13] looking for attackers and found "[several machines trying to exploit the Bash vulnerability][14]. The majority of them are only probing to check if systems are vulnerable. On the other hand, we found two worms that are actively exploiting the vulnerability and installing a piece of malware on the system."
+
+Other security researchers have found that the malware is the usual sort. They typically try to plant distributed denial of service (DDoS) IRC bots and attempt to guess system logins and passwords using a list of poor passwords such as 'root', 'admin', 'user', 'login', and '123456.'
+
+So, how do you know if your servers can be attacked? First, you need to check to see if you're running a vulnerable version of Bash. To do that, run the following command from a Bash shell:
+
+    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
+
+If you get the result:
+
+    vulnerable
+    this is a test
+
+Bad news, your version of Bash can be hacked. If you see:
+
+    bash: warning: x: ignoring function definition attempt
+    bash: error importing function definition for `x'
+    this is a test
+
+You're good. Well, to be more exact, you're as protected as you can be at the moment.
+
+While all major Linux distributors have released patches that stop most attacks — [Apple has not released a patch yet][15] — it has been discovered that "[patches shipped for this issue are incomplete][16]. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions."
While it's unclear if these attacks can be used to hack into a system, it is clear that they can be used to crash them, thanks to a null-pointer exception.
+
+Patches to fill in the [last of the Shellshock security hole][17] are being worked on now. In the meantime, you should update your servers as soon as possible with the available patches and keep an eye open for the next, fuller ones.
+
+Meanwhile, if, as is likely, you're running the Apache Web server, there are some [Mod_Security][18] rules that can stop attempts to exploit Shellshock. These rules, created by Red Hat, are:
+
+    Request Header values:
+    SecRule REQUEST_HEADERS "^\(\) {" "phase:1,deny,id:1000000,t:urlDecode,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
+
+    SERVER_PROTOCOL values:
+    SecRule REQUEST_LINE "\(\) {" "phase:1,deny,id:1000001,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
+
+    GET/POST names:
+    SecRule ARGS_NAMES "^\(\) {" "phase:2,deny,id:1000002,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
+
+    GET/POST values:
+    SecRule ARGS "^\(\) {" "phase:2,deny,id:1000003,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
+
+    File names for uploads:
+    SecRule FILES_NAMES "^\(\) {" "phase:2,deny,id:1000004,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
+
+It is vital that you patch your servers as soon as possible, even with the current, incomplete ones, and set up defenses around your Web servers. If you don't, you could come to work tomorrow to find your computers completely compromised. So get out there and start patching!
+
+--------------------------------------------------------------------------------
+
+via: http://www.zdnet.com/shellshock-how-to-protect-your-unix-linux-and-mac-servers-7000034072/
+
+作者:[Steven J.
Vaughan-Nichols][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]:http://www.zdnet.com/unixlinux-bash-critical-security-hole-uncovered-7000034021/ +[2]:http://www.zdnet.com/heartbleed-serious-openssl-zero-day-vulnerability-revealed-7000028166 +[3]:http://www.gnu.org/software/bash/ +[4]:http://www.openbsd.org/cgi-bin/man.cgi?query=ssh&sektion=1 +[5]:http://unixhelp.ed.ac.uk/CGI/man-cgi?rlogin +[6]:http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-7169 +[7]:http://www.redhat.com/ +[8]:http://www.isc.org/downloads/dhcp/ +[9]:https://www.cups.org/ +[10]:http://seclists.org/oss-sec/2014/q3/650 +[11]:http://www.zdnet.com/first-attacks-using-shellshock-bash-bug-discovered-7000034044/ +[12]:http://www.alienvault.com/ +[13]:http://www.sans.org/security-resources/idfaq/honeypot3.php +[14]:http://www.alienvault.com/open-threat-exchange/blog/attackers-exploiting-shell-shock-cve-2014-6721-in-the-wild +[15]:http://apple.stackexchange.com/questions/146849/how-do-i-recompile-bash-to-avoid-the-remote-exploit-cve-2014-6271-and-cve-2014-7 +[16]:https://bugzilla.redhat.com/show_bug.cgi?id=1141597#c27 +[17]:http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7169 +[18]:http://www.inmotionhosting.com/support/website/modsecurity/what-is-modsecurity-and-why-is-it-important diff --git a/sources/talk/20140929 What Linux Users Should Know About Open Hardware.md b/sources/talk/20140929 What Linux Users Should Know About Open Hardware.md new file mode 100644 index 0000000000..2816764239 --- /dev/null +++ b/sources/talk/20140929 What Linux Users Should Know About Open Hardware.md @@ -0,0 +1,66 @@ +zpl1025 +What Linux Users Should Know About Open Hardware +================================================================================ +> What Linux users don't know about manufacturing open 
hardware can lead them to disappointment.
+
+Business and free software have been intertwined for years, but the two often misunderstand one another. That's not surprising -- what is just a business to one is a way of life for the other. But the misunderstanding can be painful, which is why debunking it is worth the effort.
+
+An increasingly common case in point: the growing attempts at open hardware, whether from Canonical, Jolla, MakePlayLive, or any of half a dozen others. Whether pundit or end-user, the average free software user reacts with exaggerated enthusiasm when a new piece of hardware is announced, then retreats into disillusionment as delay follows delay, often ending in the cancellation of the entire product.
+
+It's a cycle that does no one any good, and often breeds distrust – and all because the average Linux user has no idea what's happening behind the news.
+
+My own experience with bringing products to market is long behind me. However, nothing I have heard suggests that anything has changed. Bringing open hardware or any other product to market remains not just a brutal business, but one heavily stacked against newcomers.
+
+### Searching for Partners ###
+
+Both the manufacturing and distribution of digital products are controlled by a relatively small number of companies, whose time can sometimes be booked months in advance. Profit margins can be tight, so like movie studios that buy the rights to an ancient sit-com, the manufacturers usually hope to clone the success of the latest hot product. As Aaron Seigo told me when talking about his efforts to develop the Vivaldi tablet, the manufacturers would much prefer that someone else take the risk of doing anything new.
+
+Not only that, but they would prefer to deal with someone with an existing sales record who is likely to bring repeat business.
+
+Besides, the average newcomer is looking at a product run of a few thousand units.
A chip manufacturer would much rather deal with Apple or Samsung, whose order is more likely in the hundreds of thousands. + +Faced with this situation, the makers of open hardware are likely to find themselves cascading down into the list of manufacturers until they can find a second or third tier manufacturer that is willing to take a chance on a small run of something new. + +They might be reduced to buying off-the-shelf components and assembling units themselves, as Seigo tried with Vivaldi. Alternatively, they might do as Canonical did, and find established partners that encourage the industry to take a gamble. Even if they succeed, they have usually taken months longer than they expected in their initial naivety. + +### Staggering to Market ### + +However, finding a manufacturer is only the first obstacle. As Raspberry Pi found out, even if the open hardware producers want only free software in their product, the manufacturers will probably insist that firmware or drivers stay proprietary in the name of protecting trade secrets. + +This situation is guaranteed to set off criticism from potential users, but the open hardware producers have no choice except to compromise their vision. Looking for another manufacturer is not a solution, partly because to do so means more delays, but largely because completely free-licensed hardware does not exist. The industry giants like Samsung have no interest in free hardware, and, being new, the open hardware producers have no clout to demand any. + +Besides, even if free hardware was available, manufacturers could probably not guarantee that it would be used in the next production run. The producers might easily find themselves re-fighting the same battle every time they needed more units. + +As if all this is not enough, at this point the open hardware producer has probably spent 6-12 months haggling. 
The chances are, the industry standards have shifted, and they may have to start from the beginning again by upgrading specs.
+
+### A Short and Brutal Shelf Life ###
+
+Despite these obstacles, hardware with some degree of openness does sometimes get released. But remember the challenges of finding a manufacturer? They have to be repeated all over again with the distributors -- and not just once, but region by region.
+
+Typically, the distributors are just as conservative as the manufacturers, and just as cautious about dealing with newcomers and new ideas. Even if they agree to add a product to their catalog, the distributors can easily decide not to encourage their representatives to promote it, which means that in a few months they have effectively removed it from the shelves.
+
+Of course, online sales are a possibility. But meanwhile, the hardware has to be stored somewhere, adding to the cost. Production runs on demand are expensive even in the unlikely event that they are available, and even unassembled units need storage.
+
+### Weighing the Odds ###
+
+I have been generalizing wildly here, but anyone who has ever been involved in producing anything will recognize what I am describing as the norm. And just to make matters worse, open hardware producers typically discover the situation as they are going through it. Inevitably, they make mistakes, which adds still more delays.
+
+But the point is, if you have any sense of the process at all, your knowledge is going to change how you react to news of another attempt at hardware. The process means that, unless a company has been in serious stealth mode, an announcement that a product will be out in six months will rapidly prove to be an outdated guesstimate. 12-18 months is more likely, and the obstacles I describe may mean that the product will never actually be released.
+
+For example, as I write, people are waiting for the emergence of the first Steam Machines, the Linux-based gaming consoles.
They are convinced that the Steam Machines will utterly transform both Linux and gaming. + +As a market category, Steam Machines may do better than other new products, because those who are developing them at least have experience developing software products. However, none of the dozen or so Steam Machines in development have produced more than a prototype after almost a year, and none are likely to be available for buying until halfway through 2015. Given the realities of hardware manufacturing, we will be lucky if half of them see daylight. In fact, a release of 2-4 might be more realistic. + +I make that prediction with next to no knowledge of any of the individual efforts. But, having some sense of how hardware manufacturing works, I suspect that it is likely to be closer to what happens next year than all the predictions of a new Golden Age for Linux and gaming. I would be entirely happy being wrong, but the fact remains: what is surprising is not that so many Linux-associated hardware products fail, but that any succeed even briefly. + +-------------------------------------------------------------------------------- + +via: http://www.datamation.com/open-source/what-linux-users-should-know-about-open-hardware-1.html + +作者:[Bruce Byfield][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.datamation.com/author/Bruce-Byfield-6030.html diff --git a/sources/talk/The history of Android/07 - The history of Android.md b/sources/talk/The history of Android/07 - The history of Android.md deleted file mode 100644 index 5f4ede0669..0000000000 --- a/sources/talk/The history of Android/07 - The history of Android.md +++ /dev/null @@ -1,111 +0,0 @@ -alim0x translating - -The history of Android -================================================================================ -![Both screens of the Email app. 
The first two screenshots show the combined label/inbox view, and the last shows a message.](http://cdn.arstechnica.net/wp-content/uploads/2014/01/email2lol.png) -Both screens of the Email app. The first two screenshots show the combined label/inbox view, and the last shows a message. -Photo by Ron Amadeo - -The message view was—surprise!—white. Android's e-mail app has historically been a watered-down version of the Gmail app, and you can see that close connection here. The message and compose views were taken directly from Gmail with almost no modifications. - -![The “IM" applications. Screenshots show the short-lived provider selection screen, the friends list, and a chat.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/IM2.png) -The “IM" applications. Screenshots show the short-lived provider selection screen, the friends list, and a chat. -Photo by Ron Amadeo - -Before Google Hangouts and even before Google Talk, there was "IM"—the only instant messaging client that shipped on Android 1.0. Surprisingly, multiple IM services were supported: users could pick from AIM, Google Talk, Windows Live Messenger, and Yahoo. Remember when OS creators cared about interoperability? - -The friends list was a black background with white speech bubbles for open chats. Presence was indicated with colored circles, and a little Android on the right hand side would indicate that a person was mobile. It's amazing how much more communicative the IM app was than Google Hangouts. Green means the person is using a device they are signed into, yellow means they are signed in but idle, red means they have manually set busy and don't want to be bothered, and gray is offline. Today, Hangouts only shows when a user has the app open or closed. - -The chats interface was clearly based on the Messaging program, and the chat backgrounds were changed from white and blue to white and green. 
No one changed the color of the blue text entry box, though, so along with the orange highlight effect, this screen used white, green, blue, and orange. - -![YouTube on Android 1.0. The screens show the main page, the main page with the menu open, the categories screen, and the videos screen.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt5000.png) -YouTube on Android 1.0. The screens show the main page, the main page with the menu open, the categories screen, and the videos screen. -Photo by Ron Amadeo - -YouTube might not have been the mobile sensation it is today with the 320p screen and 3G data speeds of the G1, but Google's video service was present and accounted for on Android 1.0. The main screen looked like a tweaked version of the Android Market, with a horizontally scrolling featured section along the top and vertically scrolling categories along the bottom. Some of Google's category choices were pretty strange: what would the difference be between "Most popular" and "Most viewed?" - -In a sign that Google had no idea how big YouTube would eventually become, one of the video categories was "Most recent." Today, with [100 hours of video][1] uploaded to the site every minute, if this section actually worked it would be an unreadable blur of rapidly scrolling videos. - -The menu housed search, favorites, categories, and settings. Settings (not pictured) was the lamest screen ever, housing one option to clear the search history. Categories was equally barren, showing only a black list of text. - -The last screen shows a video, which only supported horizontal mode. The auto-hiding video controls weirdly had rewind and fast forward buttons, even though there was a seek bar. - -![YouTube’s video menu, description page, and comments.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt3.png) -YouTube’s video menu, description page, and comments. 
-Photo by Ron Amadeo - -Additional sections for each video could be brought up by hitting the menu button. Here you could favorite the video, access details, and read comments. All of these screens, like the videos, were locked to horizontal mode. - -"Share" didn't bring up a share dialog yet; it just kicked the link out to a Gmail message. Texting or IMing someone a link wasn't possible. Comments could be read, but you couldn't rate them or post your own. You couldn't rate or like a video either. - -![The camera app’s picture taking interface, menu, and photo review mode.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/camera.png) -The camera app’s picture taking interface, menu, and photo review mode. -Photo by Ron Amadeo - -Real Android on real hardware meant a functional camera app, even if there wasn't much to look at. That black square on the left was the camera interface, which should be showing a viewfinder image, but the SDK screenshot utility can't capture it. The G1 had a hardware camera button (remember those?), so there wasn't a need for an on-screen shutter button. There were no settings for exposure, white balance, or HDR—you could take a picture and that was about it. - -The menu button revealed a meager two options: a way to jump to the Pictures app and Settings screen with two options. The first settings option was whether or not to enable geotagging for pictures, and the second was for a dialog prompt after every capture, which you can see on the right. Also, you could only take pictures—there was no video support yet. - -![The Calendar’s month view, week view with the menu open, day view, and agenda.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/calviews.png) -The Calendar’s month view, week view with the menu open, day view, and agenda. -Photo by Ron Amadeo - -Like most apps of this era, the primary command interface for the calendar was the menu. 
It was used to switch views, add a new event, navigate to the current day, pick visible calendars, and go to the settings. The menu functioned as a catch-all for every single button. - -The month view couldn't show appointment text. Every date had a bar next to it, and appointments were displayed as green sections in the bar denoting what time of day an appointment was. Week view couldn't show text either—the 320×480 display of the G1 just wasn't dense enough—so you got a white block with a strip of color indicating which calendar it was from. The only views that provided text were the agenda and day views. You could move through dates by swiping—week and day used left and right, and month and agenda used up and down. - -![The main settings page, the Wireless section, and the bottom of the about page.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/settings.png) -The main settings page, the Wireless section, and the bottom of the about page. -Photo by Ron Amadeo - -Android 1.0 finally brought a settings screen to the party. It was a black and white wall of text that was roughly broken down into sections. Down arrows next to each list item confusingly look like they would expand line-in to show more of something, but touching anywhere on the list item would just load the next screen. All the screens were pretty boring and samey looking, but hey, it's a settings screen. - -Any option with an on/off state used a cartoony-looking checkbox. The original checkboxes in Android 1.0 were pretty strange—even when they were "unchecked," they still had a gray check mark in them. Android treated the check mark like a light bulb that would light up when on and be dim when off, but that's not how checkboxes work. We did finally get an "About" page, though. Android 1.0 ran Linux kernel 2.6.25. - -A settings screen means we can finally open the security settings and change lock screens. 
Android 1.0 only had two styles, the gray square lock screen pictured in the Android 0.9 section, and pattern unlock, which required you to draw a pattern over a grid of 9 dots. A swipe pattern like this was easier to remember and input than a PIN even if it did not add any more security.
-
-![The Voice Dialer, pattern lock screen, low battery warning, and time picker.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/grabbag.png)
-The Voice Dialer, pattern lock screen, low battery warning, and time picker.
-Photo by Ron Amadeo
-
-Voice functions arrived in 1.0 with Voice Dialer. This feature hung around in various capacities in AOSP for a while, as it was a simple voice command app for calling numbers and contacts. Voice Dialer was completely unrelated to Google's future voice products, however, and it worked the same way a voice dialer on a dumbphone would work.
-
-As a final note, the low battery popup would occur when the battery dropped below 15 percent. It was a funny graphic, depicting plugging the wrong end of the power cord into the phone. That wasn't (and still isn't) how phones work, Google.
-
-Android 1.0 was a great start, but there were still so many gaps in functionality. Physical keyboards and tons of hardware buttons were mandatory, as Android devices were still not allowed to be sold without a d-pad or trackball. Base smartphone functionality like auto-rotate wasn't here yet, either. Updates for built-in apps weren't possible through the Android Market the way they are today. All the Google Apps were interwoven with the operating system. If Google wanted to update a single app, an update for the entire operating system needed to be pushed out through the carriers. There was still a lot of work to do.
- -### Android 1.1—the first truly incremental update ### - -![All of Android 1.1’s new features: Search by voice, the Android Market showing paid app support, Google Latitude, and the new “system updates" option in the settings.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/11.png) -All of Android 1.1’s new features: Search by voice, the Android Market showing paid app support, Google Latitude, and the new “system updates" option in the settings. -Photo by Ron Amadeo - -Four and a half months after Android 1.0, in February 2009, Android got its first public update in Android 1.1. Not much changed in the OS, and just about every new thing Google added with 1.1 has been shut down by now. Google Voice Search was Android's first foray into cloud-powered voice search, and it had its own icon in the app drawer. While the app can't communicate with Google's servers anymore, you can check out how it used to work [on the iPhone][2]. It wasn't yet Voice Actions, but you could speak and the results would go to a simple Google Search. - -Support for paid apps was added to the Android Market, but just like the beta client, this version of the Android Market could no longer connect to the Google Play servers. The most that we could get to work was this sorting screen, which lets you pick between displaying free apps, paid apps, or a mix of both. - -Maps added [Google Latitude][3], a way to share your location with friends. Latitude was shut down in favor of Google+ a few months ago and no longer works. There was an option for it in the Maps menu, but tapping on it just brings up a loading spinner forever. - -Given that system updates come quickly in the Android world—or at least, that was the plan before carriers and OEMs got in the way—Google also added a button to the "About Phone" screen to check for system updates. 
- ----------- - -![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg) - -[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work. - -[@RonAmadeo][t] - --------------------------------------------------------------------------------- - -via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/ - -译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://www.youtube.com/yt/press/statistics.html -[2]:http://www.youtube.com/watch?v=y3z7Tw1K17A -[3]:http://arstechnica.com/information-technology/2009/02/google-tries-location-based-social-networking-with-latitude/ -[a]:http://arstechnica.com/author/ronamadeo -[t]:https://twitter.com/RonAmadeo diff --git a/sources/talk/The history of Android/08 - The history of Android.md b/sources/talk/The history of Android/08 - The history of Android.md index 3bd0c3a8c5..589822dbc7 100644 --- a/sources/talk/The history of Android/08 - The history of Android.md +++ b/sources/talk/The history of Android/08 - The history of Android.md @@ -1,3 +1,5 @@ +alim0x translating + The history of Android ================================================================================ ![Android 1.5’s on-screen keyboard showing the suggestion bar while typing, the capital letters keyboard, the number and symbols screen, and an additional key popup.](http://cdn.arstechnica.net/wp-content/uploads/2013/12/kb5.png) @@ -88,15 +90,15 @@ The HTC Magic, the second Android device, and the first without a hardware keybo Photo by HTC > #### Google Maps is the first built-in app to hit the Android Market #### -> +> > While this article is (mostly) organizing app updates by Android version for simplicity's sake, 
there are a few outliers that deserve special recognition. On June 14, 2009, Google Maps was the first packed-in Android app to be updated via the Android Market. While every other app required a full system release to be updated, Maps was broken out of the OS, free to receive out-of-cycle updates whenever a new feature was ready. -> +> > Moving apps out of the core OS and onto the Android Market would be a big focus for Google going forward. In general, OTA updates were a big initiative—they required the cooperation of the OEM and the carrier, both of which could drag their feet. Updates also didn’t make it to every device. Today, the Android Market gives Google a direct line to every Android phone with no such interference from outside parties. -> +> > These were problems for a later date, though. In 2009, Google had only two unskinned phones to support, and the early Android carriers were seemingly responsive to Google’s update needs. This early move would prove to be a very proactive decision on Google’s part. At first, the company went this route only with its most important properties—Maps and Gmail—but later it would port the majority of the packed-in apps to the Android Market. Later initiatives like Google Play Services even brought app APIs out of the OS and into Google’s store. -> +> > As for the new Maps at the time, it gained a new directions interface, along with the ability to give mass transit and walking directions. For now, directions were given on a plain black list—turn-by-turn-style navigation would come later. -> +> > June 2009 was also the time Apple launched the third iPhone—the 3GS—and the third version of iPhone OS. iPhone OS 3's headline features were mostly catch-up items like copy/paste and MMS support. Apple's hardware was still nicer, and the software was smoother, more cohesive, and better designed. Google's insane pace of development was putting it on a path to parity though. 
iPhone OS 2 launched just before the Milestone 5 build of Android 0.5, which makes five Android releases in the span of the yearly iOS release cycle. Android 1.5 gave the YouTube app the ability to upload videos to the site. Uploading was accomplished by sharing a video from the Gallery to the YouTube app, or by opening a video directly from the YouTube app. This would bring up an upload screen, where the user would set things like the video title, tags, and access rights. Photos could be uploaded to Picasa, Google's original photo site, in a similar fashion. @@ -125,4 +127,4 @@ via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/ [1]:http://en.wikipedia.org/wiki/Diacritic [a]:http://arstechnica.com/author/ronamadeo -[t]:https://twitter.com/RonAmadeo \ No newline at end of file +[t]:https://twitter.com/RonAmadeo diff --git a/sources/tech/20140711 How to use systemd for system administration on Debian.md b/sources/tech/20140711 How to use systemd for system administration on Debian.md deleted file mode 100644 index 12e2ca24ff..0000000000 --- a/sources/tech/20140711 How to use systemd for system administration on Debian.md +++ /dev/null @@ -1,107 +0,0 @@ -[bazz2 hehe] -How to use systemd for system administration on Debian -================================================================================ -Soon enough, hardly any Linux user will be able to escape the ever-growing grasp that systemd imposes on Linux, unless they manually opt out. systemd has created more technical, emotional, and social issues than any other piece of software as of late. This predominantly came to a head in the [heated discussions][1], also dubbed the 'Init Wars', which occupied parts of the Debian developer body for months. While the Debian Technical Committee finally decided to include systemd in Debian 8 "Jessie", there were efforts to [supersede the decision][2] by a General Resolution, and even threats to the health of developers in favor of systemd.
- -This goes to show how deeply systemd interferes with the way of handling Linux systems that has, in large parts, been passed down to us from the Unix days. Maxims like "one tool for the job" are overthrown by the new kid in town. Besides replacing sysvinit as the init system, it digs deep into system administration. For now, a lot of the commands you are used to will keep on working due to the compatibility layer provided by the package systemd-sysv. That might change as soon as systemd 214 is uploaded to Debian, destined to be released in the stable branch with Debian 8 "Jessie". From then on, users need to utilize the new commands that come with systemd for managing services, processes, switching run levels, and querying the logging system. A workaround is to set up aliases in .bashrc. - -So let's have a look at how systemd will change your habits of administering your computers and the pros and cons involved. Before making the switch to systemd, it is a good security measure to save the old sysvinit to be able to still boot, should systemd fail. This will only work as long as systemd-sysv is not yet installed, and can be easily obtained by running: - - # cp -av /sbin/init /sbin/init.sysvinit - -Thus prepared, in case of emergency, just append: - - init=/sbin/init.sysvinit - -to the kernel boot-time parameters. - -### Basic Usage of systemctl ### - -systemctl is the command that substitutes the old "/etc/init.d/foo start/stop", but also does a lot more, as you can learn from its man page. - -Some basic use-cases are: - -- systemctl - list all loaded units and their state (where unit is the term for a job/service) -- systemctl list-units - list all units -- systemctl start [NAME...] - start (activate) one or more units -- systemctl stop [NAME...] - stop (deactivate) one or more units -- systemctl disable [NAME...]
- disable one or more unit files -- systemctl list-unit-files - show all installed unit files and their state -- systemctl --failed - show which units failed during boot -- systemctl --type=mount - filter for types; types could be: service, mount, device, socket, target -- systemctl enable debug-shell.service - start a root shell on TTY 9 for debugging - -For more convenience in handling units, there is the package systemd-ui, which is started as a normal user with the command systemadm. - -Switching runlevels, reboot and shutdown are also handled by systemctl: - -- systemctl isolate graphical.target - take you to what you know as init 5, where your X-server runs -- systemctl isolate multi-user.target - take you to what you know as init 3, TTY, no X -- systemctl reboot - shut down and reboot the system -- systemctl poweroff - shut down the system - -All these commands, other than the ones for switching runlevels, can be executed as a normal user. - -### Basic Usage of journalctl ### - -systemd not only boots machines faster than the old init system, it also starts logging much earlier, including messages from the kernel initialization phase, the initial RAM disk, the early boot logic, and the main system runtime. So the days when you needed to use a camera to provide the output of a kernel panic or otherwise stalled system for debugging are mostly over. - -With systemd, logs are aggregated in the journal, which resides in /var/log/. To be able to make full use of the journal, we first need to set it up, as Debian does not do that for you yet: - - # addgroup --system systemd-journal - # mkdir -p /var/log/journal - # chown root:systemd-journal /var/log/journal - # gpasswd -a $user systemd-journal - -That will set up the journal in a way where you can query it as a normal user.
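As a quick sanity check for the setup steps above, the following sketch can be run as any user; it only inspects the results and writes what it finds to a file (the filename journal-check.txt is an arbitrary choice, not part of systemd):

```shell
# Sanity check for the persistent journal setup above (safe to run as any user).
# Writes its findings to journal-check.txt so they can be inspected afterwards.
{
    if [ -d /var/log/journal ]; then
        echo "persistent journal directory exists"
    else
        echo "missing /var/log/journal - repeat the setup steps as root"
    fi
    if getent group systemd-journal >/dev/null 2>&1; then
        echo "systemd-journal group exists"
    else
        echo "systemd-journal group not found"
    fi
} > journal-check.txt
cat journal-check.txt
```

If both checks pass but the journal is still empty, a reboot (or flushing journald) lets the daemon switch over to the new persistent storage.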
Querying the journal with journalctl offers some advantages over the way syslog works: - -- journalctl --all - show the full journal of the system and all its users -- journalctl -f - show a live view of the journal (equivalent to "tail -f /var/log/messages") -- journalctl -b - show the log since the last boot -- journalctl -k -b -1 - show all kernel logs from the boot before last (-b -1) -- journalctl -b -p err - shows the log of the last boot, limited to the priority "ERROR" -- journalctl --since=yesterday - since Linux people normally do not often reboot, this limits the size more than -b would -- journalctl -u cron.service --since='2014-07-06 07:00' --until='2014-07-06 08:23' - show the log for cron for a defined timeframe -- journalctl -p 2 --since=today - show the log for priority 2, which covers emerg, alert and crit; resembles syslog priorities emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), debug (7) -- journalctl > yourlog.log - copy the binary journal as text into your current directory - -Journal and syslog can work side-by-side. On the other hand, you can remove any syslog packages like rsyslog or syslog-ng once you are satisfied with the way the journal works. - -For very detailed output, append "systemd.log_level=debug" to the kernel boot-time parameter list, and then run: - - # journalctl -alb - -Log levels can also be edited in /etc/systemd/system.conf. 
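The queries above also compose well in scripts. As a sketch — the unit name cron.service and the output filenames are just illustrative choices — exporting this boot's errors and recent cron activity for a bug report could look like this:

```shell
# Sketch: export this boot's error-level messages and recent cron activity
# to plain-text files (unit name and filenames are arbitrary examples).
if command -v journalctl >/dev/null 2>&1; then
    journalctl -b -p err --no-pager > errors-this-boot.log 2>/dev/null
    journalctl -u cron.service --since=yesterday --no-pager > cron-recent.log 2>/dev/null
else
    # Not a systemd machine; create empty files so later steps still work.
    : > errors-this-boot.log
    : > cron-recent.log
fi
wc -l errors-this-boot.log cron-recent.log
```

The --no-pager switch keeps journalctl from invoking less, which matters whenever its output is redirected from a script.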
- -### Analyzing the Boot Process with systemd ### - -systemd allows you to effectively analyze and optimize your boot process: - -- systemd-analyze - show how long the last boot took for kernel and userspace -- systemd-analyze blame - show details of how long each service took to start -- systemd-analyze critical-chain - print a tree of the time-critical chain of units -- systemd-analyze dot | dot -Tsvg > systemd.svg - generate a vector graphic of your boot process (requires the graphviz package) -- systemd-analyze plot > bootplot.svg - generate a graphical timechart of the boot process - -![](https://farm6.staticflickr.com/5559/14607588994_38543638b3_z.jpg) - -![](https://farm6.staticflickr.com/5565/14423020978_14b21402c8_z.jpg) - -systemd has pretty good documentation for such a young project under heavy development. First of all, there is the [0pointer series by Lennart Poettering][3]. The series is highly technical and quite verbose, and holds a wealth of information. Another good source is the distro-agnostic [Freedesktop info page][4] with the largest collection of links to systemd resources, distro-specific pages, bugtrackers and documentation. A quick glance at: - - # man systemd.index - -will give you an overview of all systemd man pages. The command structure for systemd for various distributions is pretty much the same; differences are found mainly in the packaging.
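The analysis commands above can be bundled into a small report script, which makes it easy to compare boot times before and after a change. This is only a sketch — the boot-report directory name is made up, and the commands assume a systemd host:

```shell
# Sketch: collect boot-time diagnostics into one directory for later comparison.
# The directory name "boot-report" is an arbitrary choice.
REPORT=boot-report
mkdir -p "$REPORT"
if command -v systemd-analyze >/dev/null 2>&1; then
    systemd-analyze time            > "$REPORT/summary.txt"        2>&1
    systemd-analyze blame           > "$REPORT/blame.txt"          2>&1
    systemd-analyze critical-chain  > "$REPORT/critical-chain.txt" 2>&1
    systemd-analyze plot            > "$REPORT/bootplot.svg"       2>/dev/null
else
    echo "systemd-analyze not found; run on a systemd machine" > "$REPORT/summary.txt"
fi
ls "$REPORT"
```

Keeping one such directory per kernel or service change turns systemd-analyze into a simple regression tool for boot performance.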
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/2014/07/use-systemd-system-administration-debian.html - -译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:https://lists.debian.org/debian-devel/2013/10/msg00444.html -[2]:https://lists.debian.org/debian-devel/2014/02/msg00316.html -[3]:http://0pointer.de/blog/projects/systemd.html -[4]:http://www.freedesktop.org/wiki/Software/systemd/ diff --git a/sources/tech/20140724 Camicri Cube--An Offline And Portable Package Management System.md b/sources/tech/20140724 Camicri Cube--An Offline And Portable Package Management System.md index bcd2773a32..8854426a4e 100644 --- a/sources/tech/20140724 Camicri Cube--An Offline And Portable Package Management System.md +++ b/sources/tech/20140724 Camicri Cube--An Offline And Portable Package Management System.md @@ -1,3 +1,4 @@ +(翻译中 by runningwater) Camicri Cube: An Offline And Portable Package Management System ================================================================================ ![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/camicri-cube-206x205.jpg) @@ -158,7 +159,7 @@ via: http://www.unixmen.com/camicri-cube-offline-portable-package-management-sys [SK][a](Senthilkumar, aka SK, is a Linux enthusiast, FOSS Supporter & Linux Consultant from Tamilnadu, India. A passionate and dynamic person, aims to deliver quality content to IT professionals and loves very much to write and explore new things about Linux, Open Source, Computers and Internet.) 
-译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) +译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/sources/tech/20140818 What are useful CLI tools for Linux system admins.md b/sources/tech/20140818 What are useful CLI tools for Linux system admins.md index 261af07857..49bd994f71 100644 --- a/sources/tech/20140818 What are useful CLI tools for Linux system admins.md +++ b/sources/tech/20140818 What are useful CLI tools for Linux system admins.md @@ -1,3 +1,5 @@ +translating by haimingfg + What are useful CLI tools for Linux system admins ================================================================================ System administrators (sysadmins) are responsible for day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of trade. Utilizing proper tools in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime. 
@@ -184,4 +186,4 @@ via: http://xmodulo.com/2014/08/useful-cli-tools-linux-system-admins.html [17]:http://rsync.samba.org/ [18]:http://www.nongnu.org/rdiff-backup/ [19]:http://nethogs.sourceforge.net/ -[20]:http://code.google.com/p/inxi/ \ No newline at end of file +[20]:http://code.google.com/p/inxi/ diff --git a/sources/tech/20140819 Build a Raspberry Pi Arcade Machine.md b/sources/tech/20140819 Build a Raspberry Pi Arcade Machine.md deleted file mode 100644 index fef5f18c9c..0000000000 --- a/sources/tech/20140819 Build a Raspberry Pi Arcade Machine.md +++ /dev/null @@ -1,136 +0,0 @@ -zpl1025 -Build a Raspberry Pi Arcade Machine -================================================================================ -**Relive the golden majesty of the 80s with a little help from a marvel of the current decade.** - -### WHAT YOU’LL NEED ### - -- Raspberry Pi w/4GB SD-CARD. -- HDMI LCD monitor. -- Games controller or… -- A JAMMA arcade cabinet. -- J-Pac or I-Pac. - -The 1980s were memorable for many things: the end of the cold war, a carbonated drink called Quatro, the Korg Polysix synthesiser and the Commodore 64. But to a certain teenager, none of these were as potent, or as perhaps familiarly illicit, as the games arcade. Enveloped by cigarette smoke and a barrage of 8-bit sound effects, they were caverns you visited only on borrowed time: 50 pence and a portion of chips to see you through lunchtime while you honed your skills at Galaga, Rampage, Centipede, Asteroids, Ms Pacman, Phoenix, R-Type, Donkey Kong, Rolling Thunder, Gauntlet, Street Fighter, Outrun, Defender… The list is endless. - -These games, and the arcade machine form factor that held them, are just as compelling today as they were 30 years ago. And unlike the teenage version of yourself, you can now play many of them without needing a pocket full of change, finally giving you an edge over the rich kids and their endless ‘Continues’.
It’s time to build your own Linux-based arcade machine and beat that old high score. - -We’re going to cover all the steps required to turn a cheap shell of an arcade machine into a Linux-powered multi-platform retro games system. But that doesn’t mean you’ve got to build the whole system at the same scale. You could, for example, forgo the large, heavy and potentially carcinogenic hulk of the cabinet itself and stuff the controlling innards into an old games console or an even smaller case. Or you could just as easily forgo the diminutive Raspberry Pi and replace the brains of your system with a much more capable Linux machine. This might make an ideal platform for SteamOS, for example, and for playing some of its excellent modern arcade games. - -Over the next few pages we’ll construct a Raspberry Pi-based arcade machine, but you should be able to see plenty of ideas for your own projects, even if they don’t look just like ours. And because we’re building it on the staggeringly powerful MAME, you’ll be able to get it running on almost anything. - -![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade3.png) - -We did this project before the model B+ came out. It should all work exactly the same on the newer board, and you should be able to get by without a powered USB Hub (click for larger). - -### Disclaimer ### - -Once again we’re messing with electrical components that could cause you a shock. Make sure you get any modifications you make checked by a qualified electrician. We don’t go into any details on how to obtain games, but there are legal sources such as old games releases and newer commercial titles based on the MAME emulator. - -#### Step 1: The Cabinet #### - -The cabinet itself is the biggest challenge. We bought an old two-player Bubble Bobble machine from the early 90s from eBay. It cost £220 delivered in the back of an old estate car. The prices for cabinets like these can vary. We’ve seen many for less than £100.
At the other end of the scale, people pay thousands for machines with original decals on the side. - -There are two major considerations when it comes to buying a cabinet. The first is the size: These things are big and heavy. They take up a lot of space and it takes at least two people to move them around. If you’ve got the money, you can buy DIY cabinets or new smaller form-factors, such as cabinets that fit on tables. And cocktail cabinets can be easier to fit, too. - -![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade4.jpg) - -Cabinets can be cheap, but they’re heavy. Don’t lift them on your own. Older ones may need some TLC, such as a re-spray and some repair work (click for larger). - -One of the best reasons for buying an original cabinet, apart from getting a much more authentic gaming experience, is being able to use the original controls. Many machines you can buy on eBay will be for two concurrent players, with two joysticks and a variety of buttons for each player, plus the player one and player two controls. For compatibility with the widest number of games, we’d recommend finding a machine with six buttons for each player, which is a common configuration. You might also want to look into a panel with more than two players, or one with space for other input controllers, such as an arcade trackball (for games like Marble Madness), or a spinner (Arkanoid). These can be added without too much difficulty later, as modern USB devices exist. - -Controls are the second, and we’d say most important consideration, because it’s these that transfer your twitches and tweaks into game movement. What you need to look for when buying a cabinet is something called JAMMA, an acronym for Japan Amusement Machinery Manufacturers. JAMMA is a standard in arcade machines that defines how the circuit board containing the game chips connects to the game controllers and the coin mechanism.
It’s an interface conduit for all the cables coming from the buttons and the joysticks, for two players, bringing them into a standard edge connector. The JAMMA part is the size and layout of this connector, as it means the buttons and controls will be connected to the same functions on whichever board you install so that the arcade owner would only have to change the cabinet artwork to bring in new players. - -But first, a word of warning: the JAMMA connector also carries the 12V power supply, usually from a power unit installed in most arcade machines. We disconnected the power supply completely to avoid damaging anything with a wayward short-circuit or dropped screwdriver. We don’t use any of the power connectors in any further stage of the tutorial. - -![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade2.png) - -#### Step 2: J-PAC #### - -What’s brilliant is that you can buy a device that connects to the JAMMA connector inside your cabinet and a USB port on your computer, transforming all the button presses and joystick movements into (configurable) keyboard commands that you can use from Linux to control any game you wish. This device is called the J-Pac ([www.ultimarc.com/jpac.html][1] – approximately £54). - -Its best feature isn’t the connectivity; it’s the way it handles and converts the input signals, because it’s vastly superior to a standard USB joystick. Every input generates its own interrupt, and there’s no limit to the number of simultaneous buttons and directions you can press or hold down. This is vital for games like Street Fighter, because they rely on chords of buttons being pressed simultaneously and quickly, but it’s also essential when delivering the killing blow to cheating players who sulk and hold down all their own buttons. Many other controllers, especially those that create keyboard inputs, are restricted by their USB keyboard controllers to six inputs and a variety of Alt, Shift and Ctrl hacks.
The J-Pac can also be connected to a tilt sensor and even some coin mechanisms, and it works in Linux without any pre-configuration. - -Another option is a similar device called an I-Pac. It does the same thing as the J-Pac, only without the JAMMA connector. That means you can’t connect your JAMMA controls, but it does mean you can design your own controller layout and wire each control to the I-Pac yourself. This might be a little ambitious for a first project, but it’s a route that many arcade aficionados take, especially when they want to design a panel for four players, or one that incorporates many different kinds of controls. Our approach isn’t necessarily one we’d recommend, but we re-wired an old X-Arcade Tankstick control panel that suffered from input contention, replaced the joysticks and buttons with new units and connected it to a new JAMMA harness, which is an excellent way of buying all the cables you need plus the edge connector for a low price (£8). - -![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade5.jpg) - -Our J-Pac in situ. The blue and red wires on the right connect to the extra 1- and 2-player buttons on our cabinet (click for larger). - -Whether you choose an I-Pac or a J-Pac, all the keys generated by both devices are the default values for MAME. That means you won’t have to make any manual input changes when you start to run the emulator. Player 1, for example, creates cursor up, down, left and right as well as left Ctrl, left ALT, Space and left Shift for fire buttons 1–4. But the really useful feature, for us, is the two-button shortcuts. While holding down the player 1 button, you can generate the P key to pause the game by pulling down on the player 1 joystick, adjust the volume by pressing up and enter MAME’s own configuration menu by pushing right. 
These escape codes are cleverly engineered to not get in the way of playing games, as they’re only activated when holding down the Player 1 button, and they enable you to do almost anything you need to from within a running game. You can completely reconfigure MAME, for example, using its own menus, and change input assignments and sensitivity while playing the game itself. - -Finally, holding down Player 1 and then pressing Player 2 will quit MAME, which is useful if you’re using a launch menu or MAME manager, as these manage launching games automatically, and let you get on with playing another game as quickly as possible. - -We took a rather cowardly route with the screen, removing the original, bulky and broken CRT that came with the cabinet and replacing it with a low-cost LCD monitor. This approach has many advantages. First, the screen has HDMI, so it will interface with a Raspberry Pi or a modern graphics card without any difficulty. Second, you don’t have to configure the low-frequency update modes required to drive an arcade machine’s screen, nor do you need the specific graphics hardware that drives it. And third, this is the safest option because an arcade machine’s screen is often unprotected from the rear of a case, leaving very high voltages inches away from your hands. That’s not to say you shouldn’t use a CRT if that’s the experience you’re after – it’s the most authentic way to recreate the original gaming experience, but we’ve fine-tuned the CRT emulation enough in software that we’re happy with the output, and we’re definitely happier not to be using an ageing CRT. - -You might also want to look into using an older LCD with a 4:3 aspect ratio, rather than the widescreen modern options, because 4:3 is more practical for playing both vertical and horizontal games. A vertical shooter such as Raiden, for example, will have black bars on either side of the gaming area if you use a widescreen monitor.
Those black bars can be used to display the game instructions, or you could rotate the screen 90 degrees so that every pixel is used, but this is impractical unless you’re only going to play vertical games or have easy access to a rotating mount. - -Mounting a screen is also important. If you’ve removed a CRT, there’s nowhere for an LCD to go. Our solution was to buy some MDF cut to fit the space where the CRT was. This was then screwed into position and we fitted a cheap VESA mounting plate into the centre of the new MDF. VESA mounts can be used by the vast majority of screens, big and small. Finally, because our cabinet was fronted with smoked glass, we had to be sure both the brightness and contrast were set high enough. - -### Step 3: Installation ### - -With the large hardware choices now made, and presumably the cabinet close to where you finally want to install it, putting the physical pieces together isn’t that difficult. We safely split the power input from the rear of the cabinet and wired a multiple socket into the space at the back. We did this to the cable after it connects to the power switch. - -Nearly all arcade cabinets have a power switch on the top-right surface, but there’s usually plenty of cable to splice into this at a lower point in the cabinet, and it meant we could use normal power connectors for our equipment. Our cabinet has a fluorescent tube, used to backlight the top marquee on the machine, connected directly to the power, and we were able to keep this connected by attaching a regular plug. When you turn the power on from the cabinet switch, power flows to the components inside the case – your Raspberry Pi and screen will come on, and all will be well with the world. - -The J-Pac slides straight into the JAMMA interface, but you may also have to do a little manual wiring. The JAMMA standard only supports up to three buttons for each player (although many unofficially support four), while the J-Pac can handle up to six buttons. 
To get those extra buttons connected, you need to connect one side of the button’s switch to GND fed from the J-Pac with the other side of the switch going into one of the screw-mounted inputs in the side of the J-Pac. These are labelled 1SW4, 1SW5, 1SW6, 2SW4, 2SW5 and 2SW6. The J-Pac also includes passthrough connections for audio, but we’ve found this to be incredibly noisy. Instead, we wired the speaker in our cabinet to an old SoundBlaster amplifier and connected this to the audio outputs on the Raspberry Pi. You don’t want audio to be pristine, but you do want it to be loud enough. - -![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade6.jpg) - -Our Raspberry Pi is now connected to the J-Pac on the left and both the screen and the USB hub (click for larger). - -The J-Pac or I-Pac then connects to your PC or Raspberry Pi using a PS2-to-USB cable, which should also be used to connect to a PS2 port on your PC directly. There is an additional option to use an old PS2 connector, if your PC is old enough to have one, but we found in testing that the USB performance is identical. This won’t apply to the PS2-less Raspberry Pi, of course, and don’t forget that the Pi will also need powering. We always recommend doing so from a compatible powered hub, as a lack of power is the most common source of Raspberry Pi errors. You’ll also need to get networking to your Raspberry Pi, either through the Ethernet port (perhaps using a powerline adaptor hidden in the cabinet), or by using a wireless USB device. Networking is essential because it enables you to reconfigure your PI while it’s tucked away within the cabinet, and it also enables you to change settings and perform administration tasks without having to connect a keyboard or mouse. - -> ### Coin Mechanism ### - -> In the emulation community, getting your coin mechanism to work with your emulator was often considered a step too close to commercial production. 
It meant you could potentially charge people to use your machine. Not only would this be wrong, but considering the provenance of many of the games you run on your own arcade machine, it could also be illegal. And it’s definitely against the spirit of emulation. However, we and many other devotees think that a working coin mechanism is another step closer to the realism of an arcade machine, and is worth the effort in recreating the nostalgia of an old arcade. There’s nothing like dropping a 10p piece into the coin tray and hearing the sound of the credits being added to the machine. - -> It’s not actually that difficult. It depends on the coin mechanism in your arcade machine and how it sends a signal to say how many credits have been inserted. Most coin mechanisms come in two parts. The large part is the coin acceptor/validator. This is the physical side of the process that detects whether a coin is authentic, and determines its value. It does this with the help of a credit/logic board, usually attached via a ribbon cable and featuring lots of DIP switches. These switches are used to change which coins are accepted and how many credits they generate. It’s then usually as simple as finding the output switch, which is triggered with a credit, and connecting this to the coin input on your JAMMA connector, or directly onto the J-Pac. Our coin mechanism is a Mars MS111, common in the UK in the early 90s, and there’s plenty of information online about what each of the DIP switches does, as well as how to programme the controller for newer coins. We were also able to wire the 12V connector from the mechanism to a small light behind the coin entry slot. - -#### Step 4: Software #### - -MAME is the only viable emulator for a project of this scale, and it now supports many thousands of different games running on countless different platforms, from the first arcade machines through to some more recent ones.
It’s a project that has also spawned MESS, the multi-emulator super system, which targets platforms such as home computers and consoles from the 80s and 90s. - -Configuring MAME could take a six-page article in itself. It’s a complex, sprawling, magnificent piece of software that emulates so many CPUs, so many sound devices, chips, controllers with so many options, that like MythTV, you never really stop configuring it. - -But there’s an easier option, and one that’s purpose-built for the Raspberry Pi. It’s called PiMAME. This is both a distribution download and a script you can run on top of Raspbian, the Pi’s default distribution. Not only does it install MAME on your Raspberry Pi (which is useful because it’s not part of any of the default repositories), it also installs a selection of other emulators along with front-ends to manage them. MAME, for example, is a command-line utility with dozens of options. But PiMAME has another clever trick up its sleeve – it installs a simple web server that enables you to install new games through a browser connected to your network. This is a great advantage, because getting games into the correct folders is one of the trials of dealing with MAME, and it also enables you to make best use of whatever storage you’ve got connected to your Pi. Plus, PiMAME will update itself from the same script you use to install it, so keeping on top of updates couldn’t be easier. This could be especially useful at the moment, as at the time of writing the project was on the cusp of a major upgrade in the form of the 0.8 release. We found it slightly unstable in early March, but we’re sure everything will be sorted by the time you read this. - -The best way to install PiMAME is to install Raspbian first. You can do this either through NOOBS, using a graphical tool from your desktop, or by using the dd command to copy the contents of the Raspbian image directly onto your SD card. 
As we mentioned in last month’s BrewPi tutorial, this process has been documented many times before, so we won’t waste the space here. Just install NOOBS if you want the easy option, following the instructions on the Raspberry Pi site. With Raspbian installed and running, make sure you use the configuration tool to free up the unused space on your SD card, and that the system is up to date (sudo apt-get update; sudo apt-get upgrade). You then need to make sure you’ve got the git package installed. Any recent version of Raspbian will have installed git already, but you can make sure by typing sudo apt-get install git. - -You then have to type the following command to clone the PiMAME installer from the project’s GitHub repository: - - git clone https://github.com/ssilverm/pimame_installer - -After that, you should get the following feedback if the command works: - - Cloning into ‘pimame_installer’... - remote: Reusing existing pack: 2306, done. - remote: Total 2306 (delta 0), reused 0 (delta 0) - Receiving objects: 100% (2306/2306), 4.61 MiB | 11 KiB/s, done. - Resolving deltas: 100% (823/823), done. - -This command will create a new folder called ‘pimame_installer’, and the next step is to switch into this and run the script it contains: - - cd pimame_installer/ - sudo ./install.sh - -This command installs and configures a lot of software. The length of time it takes will depend on your internet connection, as a lot of extra packages are downloaded. Our humble Pi with a 15Mb internet connection took around 45 minutes to complete the script, after which you’re invited to restart the machine. You can do this safely by typing sudo shutdown -r now, as this command will automatically handle any remaining write operations to the SD card. - -And that’s all there is to the installation. After rebooting your Pi, you will be automatically logged in and the PiMAME launch menu will appear.
It’s a great-looking interface in version 0.8, with photos of each of the platforms supported, plus small red icons to indicate how many games you’ve got installed. This should now be navigable through your controller. If you want to make sure the controller is correctly detected, use SSH to connect to your Pi and check for the existence of **/dev/input/by-id/usb-Ultimarc_I-PAC_Ultimarc_I-PAC-event-kbd**. - -The default keyboard controls will enable you to select what kind of emulator you want to run on your arcade machine. The option we’re most interested in is the first, labelled ‘AdvMAME’, but you might also be surprised to see another MAME on offer, MAME4ALL. MAME4ALL is built specifically for the Raspberry Pi, and takes an old version of the MAME source code so that the performance of the ROMs that it does support is optimal. This makes a lot of sense, because there’s no way your Pi is going to be able to play anything too demanding, so there’s no reason to belabour the emulator with unneeded compatibility. All that’s left to do now is get some games onto your system (see the boxout below), and have fun!
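The controller-detection check mentioned earlier can be scripted so it runs over SSH in one shot. The sketch below is our own illustration — the `check_controller` helper and its messages are not from the original article; the device path is the Ultimarc one named above, so adjust it if your controller enumerates differently:

```shell
#!/bin/sh
# Hypothetical helper: report whether a given input device node exists.
check_controller() {
    if [ -e "$1" ]; then
        echo "controller detected: $1"
    else
        echo "controller missing: $1"
    fi
}

# On the Pi itself you would run:
#   check_controller /dev/input/by-id/usb-Ultimarc_I-PAC_Ultimarc_I-PAC-event-kbd
# /dev/null exists on any Linux system, so this call demonstrates the happy path:
check_controller /dev/null
```

Run it over SSH and, if the device node is missing, recheck the USB cabling before blaming the software.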
- -![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade1.png) - --------------------------------------------------------------------------------- - -via: http://www.linuxvoice.com/arcade-machine/ - -作者:[Ben Everard][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.linuxvoice.com/author/ben_everard/ -[1]:http://www.ultimarc.com/jpac.html diff --git a/sources/tech/20140821 How to configure a network printer and scanner on Ubuntu desktop.md b/sources/tech/20140821 How to configure a network printer and scanner on Ubuntu desktop.md index 0703a11d9a..aefb970a23 100644 --- a/sources/tech/20140821 How to configure a network printer and scanner on Ubuntu desktop.md +++ b/sources/tech/20140821 How to configure a network printer and scanner on Ubuntu desktop.md @@ -1,3 +1,4 @@ +诗诗来翻译!disylee How to configure a network printer and scanner on Ubuntu desktop ================================================================================ In a [previous article][1](注:这篇文章在2014年8月12号的原文里做过,不知道翻译了没有,如果翻译发布了,发布此文章的时候可改成翻译后的链接), we discussed how to install several kinds of printers (and also a network scanner) in a Linux server. Today we will deal with the other end of the line: how to access the network printer/scanner devices from a desktop client. 
@@ -197,4 +198,4 @@ via: http://xmodulo.com/2014/08/configure-network-printer-scanner-ubuntu-desktop [a]:http://xmodulo.com/author/gabriel [1]:http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html [2]:http://www.cups-pdf.de/documentation.shtml -[3]:http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html#scanner \ No newline at end of file +[3]:http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html#scanner diff --git a/sources/tech/20140826 20 Postfix Interview Questions and Answers.md b/sources/tech/20140826 20 Postfix Interview Questions and Answers.md deleted file mode 100644 index 1848b82f09..0000000000 --- a/sources/tech/20140826 20 Postfix Interview Questions and Answers.md +++ /dev/null @@ -1,122 +0,0 @@ -20 Postfix Interview Questions & Answers -================================================================================ -### Q:1 What is postfix and what is the default port used for postfix ? ### - -Ans: Postfix is an open source MTA (Mail Transfer Agent) which is used to route and deliver email. Postfix is the alternative to the widely used Sendmail MTA. The default port for postfix is 25. - -### Q:2 What is the difference between Postfix & Sendmail ? ### - -Ans: Postfix uses a modular approach and is composed of multiple independent executables. Sendmail has a more monolithic design utilizing a single always-running daemon. - -### Q:3 What is an MTA and what is its role in the mailing system ? ### - -Ans: MTA stands for Mail Transfer Agent. An MTA receives and delivers email, and determines message routing and possible address rewriting. Locally delivered messages are handed off to an MDA for final delivery. Examples: Qmail, Postfix, Sendmail. - -### Q:4 What is an MDA ? ### - -Ans: MDA stands for Mail Delivery Agent. An MDA is a program that handles final delivery of messages for a system's local recipients. MDAs can often filter or categorize messages upon delivery.
An MDA might also determine that a message must be forwarded to another email address. Example: Procmail - -### Q:5 What is an MUA ? ### - -Ans: MUA stands for Mail User Agent. An MUA is email client software used to compose, send, and retrieve email messages. It sends messages through an MTA, and retrieves messages from a mail store either directly or through a POP/IMAP server. Examples: Outlook, Thunderbird, Evolution. - -### Q:6 What is the use of the postmaster account in a mail server ? ### - -Ans: An email administrator is commonly referred to as a postmaster. An individual with postmaster responsibilities makes sure that the mail system is working correctly, makes configuration changes, and adds/removes email accounts, among other things. You must have a postmaster alias at all domains for which you handle email that directs messages to the correct person or persons. - -### Q:7 What are the important daemons in postfix ? ### - -Ans: Below is a list of the important daemons in the postfix mail server: - -- **master** : The master daemon is the brain of the Postfix mail system. It spawns all other daemons. -- **smtpd** : The smtpd daemon (server) handles incoming connections. -- **smtp** : The smtp client handles outgoing connections. -- **qmgr** : The qmgr daemon is the heart of the Postfix mail system. It processes and controls all messages in the mail queues. -- **local** : The local program is Postfix’s own local delivery agent. It stores messages in mailboxes. - -### Q:8 What are the configuration files of the postfix server ? ### - -Ans: There are two main configuration files of postfix: - -- **/etc/postfix/main.cf** : This file holds global configuration options. They will be applied to all instances of a daemon, unless they are overridden in master.cf -- **/etc/postfix/master.cf** : This file defines the runtime environment for daemons attached to services. Runtime behavior defined in main.cf may be overridden by setting service-specific options.
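To make the main.cf/master.cf relationship concrete, here is a hedged illustration — this excerpt is our own and not part of the original answer; the submission service with `-o` overrides is simply a common example of the mechanism. Per-service `-o` lines in master.cf override the global main.cf values for that one service only:

```
# master.cf excerpt (illustrative): the -o lines override main.cf
# defaults for the submission service only.
submission inet n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
```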
- -### Q:9 How to restart the postfix service & enable it across reboots ? ### - -Ans: Use the command “service postfix restart” to restart the service, and to make the service persist across reboots, use the command “chkconfig postfix on”. - -### Q:10 How to check the mail queue in postfix ? ### - -Ans: Postfix maintains two queues: the pending mail queue and the deferred mail queue. The deferred mail queue holds mail that has soft-failed and should be retried (temporary failure); Postfix retries the deferred queue at set intervals (configurable, by default 5 minutes). - -To display the list of queued mails: - - # postqueue -p - -To save the output of the above command: - - # postqueue -p > /mnt/queue-backup.txt - -Tell Postfix to process the queue now: - - # postqueue -f - -### Q:11 How to delete mails from the queue in postfix ? ### - -Ans: Use the below command to delete all queued mails: - - # postsuper -d ALL - -To delete only deferred mails from the queue, use the below command: - - # postsuper -d ALL deferred - -### Q:12 How to check the postfix configuration from the command line ? ### - -Ans: Using the command 'postconf -n' we can see the current configuration of postfix, excluding the lines which are commented. - -### Q:13 Which command is used to see live mail logs in postfix ? ### - -Ans: Use the command 'tail -f /var/log/maillog' or 'tailf /var/log/maillog' - -### Q:14 How to send a test mail from the command line ? ### - -Ans: Use the below command to send a test mail from postfix itself: - -# echo "Test mail from postfix" | mail -s "Plz ignore" info@something.com - -### Q:15 What is an Open mail relay ? 
### - -Ans: An open mail relay is an SMTP server configured in such a way that it allows anyone on the Internet to send e-mail through it, not just mail destined to or originating from known users. This used to be the default configuration in many mail servers; indeed, it was the way the Internet was initially set up, but open mail relays have become unpopular because of their exploitation by spammers and worms. - -### Q:16 What is a relay host in postfix ? ### - -Ans: The relay host is an SMTP address which, if specified in the postfix config file, causes all outgoing mail to be relayed through that SMTP server. - -### Q:17 What is Greylisting ? ### - -Ans: Greylisting is a method of defending e-mail users against spam. A mail transfer agent (MTA) using greylisting will "temporarily reject" any email from a sender it does not recognize. If the mail is legitimate the originating server will, after a delay, try again and, if sufficient time has elapsed, the email will be accepted. - -### Q:18 What is the importance of SPF records in mail servers ? ### - -Ans: SPF (Sender Policy Framework) is a system to help domain owners specify the servers which are supposed to send mail from their domain. The aim is that other mail systems can then check to make sure the server sending email from that domain is authorized to do so – reducing the chance of email 'spoofing', phishing schemes and spam! - -### Q:19 What is the use of DomainKeys (DKIM) in mail servers ? ### - -Ans: DomainKeys is an e-mail authentication system designed to verify the DNS domain of an e-mail sender and the message integrity. The DomainKeys specification has adopted aspects of Identified Internet Mail to create an enhanced protocol called DomainKeys Identified Mail (DKIM). - -### Q:20 What is the role of the Anti-Spam SMTP Proxy (ASSP) in a mail server ? 
### - -Ans: ASSP is a gateway server which is installed in front of your MTA and implements auto-whitelists, self-learning Bayesian filtering, Greylisting, DNSBL, DNSWL, URIBL, SPF, SRS, Backscatter, virus scanning, attachment blocking, Senderbase and multiple other filter methods. - -------------------------------------------------------------------------------- - -via: http://www.linuxtechi.com/postfix-interview-questions-answers/ - -作者:[Pradeep Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.linuxtechi.com/author/pradeep/ \ No newline at end of file diff --git a/sources/tech/20140826 How to configure SNMPv3 on ubuntu 14.04 server.md b/sources/tech/20140826 How to configure SNMPv3 on ubuntu 14.04 server.md deleted file mode 100644 index a8782bb417..0000000000 --- a/sources/tech/20140826 How to configure SNMPv3 on ubuntu 14.04 server.md +++ /dev/null @@ -1,100 +0,0 @@ -Translating by SPccman -How to configure SNMPv3 on ubuntu 14.04 server -================================================================================ -Simple Network Management Protocol (SNMP) is an "Internet-standard protocol for managing devices on IP networks". Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks and more. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects. - -SNMP exposes management data in the form of variables on the managed systems, which describe the system configuration.
These variables can then be queried (and sometimes set) by managing applications. - -### Why you want to use SNMPv3 ### - -Although SNMPv3 makes no changes to the protocol aside from the addition of cryptographic security, it looks much different due to new textual conventions, concepts, and terminology. - -SNMPv3 primarily added security and remote configuration enhancements to SNMP. - -Security has been the biggest weakness of SNMP since the beginning. Authentication in SNMP versions 1 and 2 amounts to nothing more than a password (community string) sent in clear text between a manager and an agent. Each SNMPv3 message contains security parameters which are encoded as an octet string. The meaning of these security parameters depends on the security model being used. - -SNMPv3 provides important security features: - -Confidentiality -- Encryption of packets to prevent snooping by an unauthorized source. - -Integrity -- Message integrity to ensure that a packet has not been tampered with while in transit, including an optional packet replay protection mechanism. - -Authentication -- To verify that the message is from a valid source. - -### Install SNMP server and client in ubuntu ### - -Open the terminal and run the following command - - sudo apt-get install snmpd snmp - -After installation you need to make the following changes. - -### Configuring SNMPv3 in Ubuntu ### - -Get access to the daemon from the outside. - -The default installation only provides access to the daemon for localhost.
In order to get access from the outside, open the file /etc/default/snmpd in your favorite editor - - sudo vi /etc/default/snmpd - -Change the following line - -From - - SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /var/run/snmpd.pid' - -to - - SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf' - -and restart snmpd - - sudo /etc/init.d/snmpd restart - -### Define SNMPv3 users, authentication and encryption parameters ### - -SNMPv3 can be used in a number of ways depending on the “securityLevel” configuration parameter: - -noAuthNoPriv -- No authentication and no encryption, basically no security at all! -authNoPriv -- Authentication is required but collected data sent over the network is not encrypted. -authPriv -- The strongest form. Authentication is required and everything sent over the network is encrypted. - -The snmpd configuration settings are all saved in a file called /etc/snmp/snmpd.conf. Open this file in your editor as in: - - sudo vi /etc/snmp/snmpd.conf - -Add the following lines to the end of the file: - - # - createUser user1 - createUser user2 MD5 user2password - createUser user3 MD5 user3password DES user3encryption - # - rouser user1 noauth 1.3.6.1.2.1.1 - rouser user2 auth 1.3.6.1.2.1 - rwuser user3 priv 1.3.6.1.2.1 - -Note: if you want to use your own username/password combinations, keep in mind that the password and encryption phrases must be at least 8 characters long. - -You also need to make the following change so that snmpd can listen for connections on all interfaces - -From - - #agentAddress udp:161,udp6:[::1]:161 - -to - - agentAddress udp:161,udp6:[::1]:161 - -Save your modified snmpd.conf file and restart the daemon with: - - sudo /etc/init.d/snmpd restart - --------------------------------------------------------------------------------- - -via: http://www.ubuntugeek.com/how-to-configure-snmpv3-on-ubuntu-14-04-server.html - 
-译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/sources/tech/20140828 Setup Thin Provisioning Volumes in Logical Volume Management (LVM)--Part IV.md b/sources/tech/20140828 Setup Thin Provisioning Volumes in Logical Volume Management (LVM)--Part IV.md deleted file mode 100644 index 365799b3a2..0000000000 --- a/sources/tech/20140828 Setup Thin Provisioning Volumes in Logical Volume Management (LVM)--Part IV.md +++ /dev/null @@ -1,213 +0,0 @@ -Setup Thin Provisioning Volumes in Logical Volume Management (LVM) – Part IV -================================================================================ -Logical Volume Management has great features such as snapshots and thin provisioning. Previously, in Part III, we saw how to snapshot a logical volume. Here, in this article, we are going to see how to set up thin provisioning volumes in LVM. - -![Setup Thin Provisioning in LVM](http://www.tecmint.com/wp-content/uploads/2014/08/Setup-Thin-Provisioning-in-LVM.jpg) -Setup Thin Provisioning in LVM - -### What is Thin Provisioning? ### - -Thin provisioning is used in LVM for creating virtual disks inside a thin pool. Let us assume that I have **15GB** of storage capacity in my server. I already have 2 clients who have 5GB of storage each. You are the third client, and you have asked for 5GB of storage. Back then we used to provide the whole 5GB (a thick volume), but you might use only 2GB of that 5GB of storage, and the remaining 3GB would sit free until you fill it up later. - -What we do in thin provisioning instead is define a thin pool inside one large volume group, and then define thin volumes inside that thin pool. Whatever files you write will be stored, and your storage will still be shown as 5GB, but the full 5GB will not be allocated on the disk. The same will be done for the other clients as well.
As I said, there are 2 clients and you are my 3rd client. - -So, how much total storage have I assigned to clients? The whole 15GB is already allocated. If someone comes to me and asks for 5GB, can I give it? The answer is “**Yes**”: with thin provisioning I can give 5GB to a 4th client even though I have already assigned the full 15GB. - -**Warning**: If, out of 15GB, we provision more than 15GB, it is called over-provisioning. - -### How Does it Work? And How Do We Provide Storage to New Clients? ### - -I have provided you 5GB, but you may use only 2GB and the other 3GB will sit free. In thick provisioning we can’t do this, because it allocates the whole space from the start. - -In thin provisioning, if I define 5GB for you, it won’t allocate the whole disk space when the volume is defined; it will grow up to 5GB as you write data. Hope you got it! Likewise, the other clients won’t use their full volumes either, so there will be a chance to give 5GB to a new client. This is called over-provisioning. - -But it is compulsory to monitor the growth of each and every volume; if not, it will end up in disaster. With over-provisioning in place, if all 4 clients write data heavily to disk, you may face an issue, because together they will fill up your 15GB pool and it will overflow, causing the volumes to be dropped. - -### Requirements ### - -注:此三篇文章如果发布后可换成发布后链接,原文在前几天更新中 - -- [Create Disk Storage with LVM in Linux – PART 1][1] -- [How to Extend/Reduce LVM’s in Linux – Part II][2] -- [How to Create/Restore Snapshot of Logical Volume in LVM – Part III][3] - -#### My Server Setup #### - - Operating System – CentOS 6.5 with LVM Installation - Server IP – 192.168.0.200 - -### Step 1: Setup Thin Pool and Volumes ### - -Let’s see in practice how to set up the thin pool and thin volumes. First we need a large volume group. Here I’m creating a volume group of **15GB** for demonstration purposes. Now, create the volume group using the below command.
- - # vgcreate -s 32M vg_thin /dev/sdb1 - -![Listing Volume Group](http://www.tecmint.com/wp-content/uploads/2014/08/Listing-Volume-Group.jpg) -Listing Volume Group - -Next, check the space available for logical volumes before creating the thin pool and volumes. - - # vgs - # lvs - -![Check Logical Volume](http://www.tecmint.com/wp-content/uploads/2014/08/check-Logical-Volume.jpg) -Check Logical Volume - -We can see that only the default logical volumes for the file system and swap are present in the above lvs output. - -### Creating a Thin Pool ### - -To create a thin pool of 15GB in the volume group (vg_thin), use the following command. - - # lvcreate -L 15G --thinpool tp_tecmint_pool vg_thin - -- **-L** – Size of the thin pool -- **–thinpool** – To create a thin pool -- **tp_tecmint_pool** – Thin pool name -- **vg_thin** – Volume group name where we need to create the pool - -![Create Thin Pool](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Pool.jpg) -Create Thin Pool - -To get more detail we can use the ‘lvdisplay’ command. - - # lvdisplay vg_thin/tp_tecmint_pool - -![Logical Volume Information](http://www.tecmint.com/wp-content/uploads/2014/08/Logical-Volume-Information.jpg) -Logical Volume Information - -We haven’t created any virtual thin volumes in this thin-pool yet. In the image we can see the allocated pool data showing **0.00%**. - -### Creating Thin Volumes ### - -Now we can define thin volumes inside the thin pool with the help of the ‘lvcreate’ command and its -V (Virtual) option. - - # lvcreate -V 5G --thin -n thin_vol_client1 vg_thin/tp_tecmint_pool - -I have created a thin virtual volume named **thin_vol_client1** inside the **tp_tecmint_pool** in my **vg_thin** volume group. Now, list the logical volumes using the below command.
- - # lvs - -![List Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/List-Logical-Volumes.jpg) -List Logical Volumes - -We have only just created this thin volume, which is why no data usage is showing, i.e. **0.00%**. - -Fine, let me create 2 more thin volumes for the other 2 clients. You can see that there are now 3 thin volumes created under the pool (**tp_tecmint_pool**). So, from this point, we know that I have allotted the entire 15GB pool. - -![Create Thin Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Volumes.jpg) - -### Creating File System ### - -Now, create the mount points, mount these three thin volumes, and copy some files into them using the below commands. - - # mkdir -p /mnt/client1 /mnt/client2 /mnt/client3 - -List the created directories. - - # ls -l /mnt/ - -![Creating Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Creating-Mount-Points.jpg) -Creating Mount Points - -Create the file system on these thin volumes using the ‘mkfs’ command. - - # mkfs.ext4 /dev/vg_thin/thin_vol_client1 && mkfs.ext4 /dev/vg_thin/thin_vol_client2 && mkfs.ext4 /dev/vg_thin/thin_vol_client3 - -![Create File System](http://www.tecmint.com/wp-content/uploads/2014/08/Create-File-System.jpg) -Create File System - -Mount all three client volumes on the created mount points using the ‘mount’ command. - - # mount /dev/vg_thin/thin_vol_client1 /mnt/client1/ && mount /dev/vg_thin/thin_vol_client2 /mnt/client2/ && mount /dev/vg_thin/thin_vol_client3 /mnt/client3/ - -List the mount points using the ‘df’ command. - - # df -h - -![Print Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Print-Mount-Points.jpg) -Print Mount Points - -Here, we can see that all 3 client volumes are mounted, and only about 3% of the space is used in each client’s volume. So, let’s add some more files to all 3 mount points from my desktop to fill up some space.
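If you don’t have files handy, the sketch below is one hedged way to generate filler data — the file name, size and fallback directory are our own, purely illustrative; on the real system you would point TARGET at a client mount such as /mnt/client1:

```shell
#!/bin/sh
# Illustrative only: write a 10 MB filler file so a thin volume shows usage.
# On the real server: TARGET=/mnt/client1 sh fill-demo.sh
TARGET="${TARGET:-$(mktemp -d)}"   # fall back to a scratch dir for a dry run
dd if=/dev/zero of="$TARGET/testfile" bs=1M count=10 2>/dev/null
du -h "$TARGET/testfile"
```

Repeat with different sizes per mount point if you want the usage percentages to differ between clients.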
- -![](http://www.tecmint.com/wp-content/uploads/2014/08/Add-Files-To-Volumes.jpg) -Add Files To Volumes - -Now list the mount points to see the space used in every thin volume, and list the thin pool to see the size used in the pool. - - # df -h - # lvdisplay vg_thin/tp_tecmint_pool - -![Check Mount Point Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Mount-Point-Size.jpg) -Check Mount Point Size - -![Check Thin Pool Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Thin-Pool-Size.jpg) -Check Thin Pool Size - -The above commands show the three mount points along with their usage in percentages. - - 13% of data used out of 5GB for client1 - 29% of data used out of 5GB for client2 - 49% of data used out of 5GB for client3 - -Looking at the thin-pool, we can see that only **30%** of the pool has been written in total. This is the sum of the above three clients’ virtual volumes. - -### Over Provisioning ### - -Now the **4th** client comes to me and asks for 5GB of storage space. Can I give it, when I have already handed the 15GB pool out to 3 clients? Is it possible to give 5GB more to another client? Yes, it is possible. This is when we use **over-provisioning**, which means giving out more space than what I have. - -Let me create 5GB for the 4th client and verify the size. - - # lvcreate -V 5G --thin -n thin_vol_client4 vg_thin/tp_tecmint_pool - # lvs - -![Create thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Create-thin-Storage.jpg) -Create thin Storage - -I have only 15GB in the pool, but I have created 4 volumes inside the thin-pool amounting to 20GB. If all four clients start writing data to their volumes to fill up the space, we will face a critical situation; if not, there will be no issue. - -Now I have created a file system in **thin_vol_client4**, mounted it under **/mnt/client4** and copied some files into it.
- - # lvs - -![Verify Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-Thing-Storage.jpg) -Verify Thin Storage - -We can see in the above picture that the newly created client 4 volume is already **89.34%** used, and the thin pool shows **59.19%** usage. As long as all these users are not writing to their volumes heavily at the same time, the pool will not overflow and no volumes will be dropped. To avoid an overflow, we need to extend the thin-pool size. - -**Important**: Thin-pools are just logical volumes, so if we need to extend the size of a thin-pool we can use the same command that we’ve used to extend logical volumes, but we can’t reduce the size of a thin-pool. - - # lvextend - -Here we can see how to extend the logical thin-pool (**tp_tecmint_pool**).
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/setup-thin-provisioning-volumes-in-lvm/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/create-lvm-storage-in-linux/ -[2]:http://www.tecmint.com/extend-and-reduce-lvms-in-linux/ -[3]:http://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-lvm/ \ No newline at end of file diff --git a/sources/tech/20140901 How to install and configure ownCloud on Debian.md b/sources/tech/20140901 How to install and configure ownCloud on Debian.md deleted file mode 100644 index a0a18225b6..0000000000 --- a/sources/tech/20140901 How to install and configure ownCloud on Debian.md +++ /dev/null @@ -1,209 +0,0 @@ -How to install and configure ownCloud on Debian -================================================================================ -According to its official website, ownCloud gives you universal access to your files through a web interface or WebDAV. It also provides a platform to easily view, edit and sync your contacts, calendars and bookmarks across all your devices. Even though ownCloud is very similar to the widely-used Dropbox cloud storage, the primary difference is that ownCloud is free and open-source, making it possible to set up a Dropbox-like cloud storage service on your own server. With ownCloud, only you have complete access and control over your private data, with no limits on storage space (except for hard disk capacity) or the number of connected clients. - -ownCloud is available in Community Edition (free of charge) and Enterprise Edition (business-oriented with paid support). The pre-built package of ownCloud Community Edition is available for CentOS, Debian, Fedora openSUSE, SLE and Ubuntu. 
This tutorial will demonstrate how to install and configure ownCloud Community Edition on Debian Wheezy. - -### Installing ownCloud on Debian ### - -Go to the official website: [http://owncloud.org][1], and click on the 'Install' button (upper right corner). - -![](https://farm4.staticflickr.com/3885/14884771598_323f2fc01c_z.jpg) - -Now choose "Packages for auto updates" for the current version (v7 in the image below). This will allow you to easily keep ownCloud up to date using Debian's package management system, with packages maintained by the ownCloud community. - -![](https://farm6.staticflickr.com/5589/15071372505_298a796ff6_z.jpg) - -Then click on Continue on the next screen: - -![](https://farm6.staticflickr.com/5589/14884818527_554d1483f9_z.jpg) - -Select Debian 7 [Wheezy] from the list of available operating systems: - -![](https://farm6.staticflickr.com/5581/14884669449_433e3334e0_z.jpg) - -Add the ownCloud's official Debian repository: - - # echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/Debian_7.0/ /' >> /etc/apt/sources.list.d/owncloud.list - -Add the repository key to apt: - - # wget http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/Release.key - # apt-key add - < Release.key - -Go ahead and install ownCloud: - - # aptitude update - # aptitude install owncloud - -Open your web browser and navigate to your ownCloud instance, which can be found at http:///owncloud: - -![](https://farm4.staticflickr.com/3869/15071011092_f8f32ffe11_z.jpg) - -Note that ownCloud may be alerting about an Apache misconfiguration. Follow the steps below to solve this issue, and get rid of that error message. 
- -a) Edit the /etc/apache2/apache2.conf file (set the AllowOverride directive to All inside the Directory block): - - <Directory /var/www/> - Options Indexes FollowSymLinks - AllowOverride All - Order allow,deny - Allow from all - </Directory> - -b) Edit the /etc/apache2/conf.d/owncloud.conf file: - - <Directory /var/www/owncloud> - Options Indexes FollowSymLinks MultiViews - AllowOverride All - Order allow,deny - Allow from all - </Directory> - -c) Restart the web server: - - # service apache2 restart - -d) Refresh the web browser. Verify that the security warning has disappeared. - -![](https://farm6.staticflickr.com/5562/14884771428_fc9c063418_z.jpg) - -### Setting up a Database ### - -Now it's time to set up a database for ownCloud. - -First, log in to the local MySQL/MariaDB server: - - $ mysql -u root -h localhost -p - -Create a database and user account for ownCloud as follows. - - mysql> CREATE DATABASE owncloud_DB; - mysql> CREATE USER 'owncloud-web'@'localhost' IDENTIFIED BY 'whateverpasswordyouchoose'; - mysql> GRANT ALL PRIVILEGES ON owncloud_DB.* TO 'owncloud-web'@'localhost'; - mysql> FLUSH PRIVILEGES; - -Go to the ownCloud page at http:///owncloud, and choose the 'Storage & database' section. Enter the rest of the requested information (MySQL/MariaDB user, password, database and hostname), and click on Finish setup. - -![](https://farm6.staticflickr.com/5584/15071010982_b76c23c384_z.jpg) - -### Configuring ownCloud for SSL Connections ### - -Before you start using ownCloud, it is strongly recommended to enable SSL support in ownCloud. Using SSL provides important security benefits such as encrypting ownCloud traffic and providing proper authentication. In this tutorial, a self-signed certificate will be used for SSL. - -Create a new directory where we will store the server key and certificate: - - # mkdir /etc/apache2/ssl - -Create a certificate (and the key that will protect it) which will remain valid for one year.
- - # openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt - -![](https://farm6.staticflickr.com/5587/15068784081_f281b54b72_z.jpg) - -Edit the /etc/apache2/conf.d/owncloud.conf file to enable HTTPS. For details on the meaning of the rewrite rules NC, R, and L, you can refer to the [Apache docs][2]: - - Alias /owncloud /var/www/owncloud - - <VirtualHost *:80> - RewriteEngine on - RewriteCond %{SERVER_PORT} !^443$ - RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L] - </VirtualHost> - - <VirtualHost *:443> - SSLEngine on - SSLCertificateFile /etc/apache2/ssl/apache.crt - SSLCertificateKeyFile /etc/apache2/ssl/apache.key - DocumentRoot /var/www/owncloud/ - <Directory /var/www/owncloud> - Options Indexes FollowSymLinks MultiViews - AllowOverride All - Order allow,deny - Allow from all - </Directory> - </VirtualHost> - -Enable the rewrite module and restart Apache: - - # a2enmod rewrite - # service apache2 restart - -Open your ownCloud instance. Notice that even if you try to use plain HTTP, you will be automatically redirected to HTTPS. - -Be advised that even after following the above steps, the first time you launch your ownCloud instance an error message will be displayed stating that the certificate has not been issued by a trusted authority (that is because we created a self-signed certificate). You can safely ignore this message, but if you are considering deploying ownCloud on a production server, you may want to purchase a certificate from a trusted company. - -### Create an Account ### - -Now we are ready to create an ownCloud admin account. - -![](https://farm6.staticflickr.com/5587/15048366536_430b4fd64e.jpg) - -Welcome to your new personal cloud! Note that you can install a desktop or mobile client app to sync your files, calendars, contacts and more.
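Returning briefly to the SSL section: the openssl command shown there prompts interactively for the certificate subject. When scripting the setup, those prompts can be pre-answered with the -subj option. The sketch below writes to a local demo directory and uses a placeholder hostname; adjust both before running it on a real server (where the target would be /etc/apache2/ssl):

```shell
#!/bin/sh
# Non-interactive variant of the certificate step: -subj pre-answers
# the prompts openssl would otherwise ask. Paths and the CN value are
# placeholders for illustration only.
SSL_DIR=./apache2-ssl-demo        # use /etc/apache2/ssl on a real server
mkdir -p "$SSL_DIR"

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout "$SSL_DIR/apache.key" \
    -out "$SSL_DIR/apache.crt" \
    -subj "/C=US/ST=State/L=City/O=Example/CN=owncloud.example.com" \
    2>/dev/null

# The private key should never be readable by anyone but its owner
chmod 600 "$SSL_DIR/apache.key"
ls -l "$SSL_DIR"
```

The chmod at the end matters: Apache reads the key as root at startup, so nothing else needs access to it.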
- -![](https://farm4.staticflickr.com/3862/15071372425_c391d912f5_z.jpg) - -In the upper right corner, click on your user name, and a drop-down menu is displayed: - -![](https://farm4.staticflickr.com/3897/15071372355_3de08d2847.jpg) - -Click on Personal to change your settings, such as password, display name, email address, profile picture, and more. - -### ownCloud Use Case: Access Calendar ### - -Let's start by adding an event to your calendar and later downloading it. - -Click on the upper left corner drop-down menu and choose Calendar. - -![](https://farm4.staticflickr.com/3891/15048366346_7dcc388244.jpg) - -Add a new event and save it to your calendar. - -![](https://farm4.staticflickr.com/3882/14884818197_f55154fd91_z.jpg) - -Download your calendar and add it to your Thunderbird calendar by going to 'Event and Tasks' -> 'Import...' -> 'Select file': - -![](https://farm4.staticflickr.com/3840/14884818217_16a53400f0_z.jpg) - -![](https://farm4.staticflickr.com/3871/15048366356_a7f98ca63d_z.jpg) - -TIP: You also need to set your time zone in order to successfully import your calendar in another application (by default, the Calendar application uses the UTC +00:00 time zone). To change the time zone, go to the bottom left corner and click on the small gear icon. The Calendar settings menu will appear and you will be able to select your time zone: - -![](https://farm4.staticflickr.com/3858/14884669029_4e0cd3e366.jpg) - -### ownCloud Use Case: Upload a File ### - -Next, we will upload a file from the client computer. - -Go to the Files menu (upper left corner) and click on the up arrow to open a select-file dialog. - -![](https://farm4.staticflickr.com/3851/14884818067_4a4cc73b40.jpg) - -Select a file and click on Open. - -![](https://farm6.staticflickr.com/5591/14884669039_5a9dd00ca9_z.jpg) - -You can then open/edit the selected file, move it into another folder, or delete it. 
- -![](https://farm4.staticflickr.com/3909/14884771088_d0b8a20ae2_o.png) - -### Conclusion ### - -ownCloud is a versatile and powerful cloud storage that makes the transition from another provider quick, easy, and painless. In addition, it is FOSS, and with little time and effort you can configure it to meet all your needs. For further information, you can always refer to the [User][3], [Admin][4], or [Developer][5] manuals. - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/2014/08/install-configure-owncloud-debian.html - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.gabrielcanepa.com.ar/ -[1]:http://owncloud.org/ -[2]:http://httpd.apache.org/docs/2.2/rewrite/flags.html -[3]:http://doc.owncloud.org/server/7.0/ownCloudUserManual.pdf -[4]:http://doc.owncloud.org/server/7.0/ownCloudAdminManual.pdf -[5]:http://doc.owncloud.org/server/7.0/ownCloudDeveloperManual.pdf \ No newline at end of file diff --git a/sources/tech/20140902 Photo Editing on Linux with Krita.md b/sources/tech/20140902 Photo Editing on Linux with Krita.md deleted file mode 100644 index c3e4367556..0000000000 --- a/sources/tech/20140902 Photo Editing on Linux with Krita.md +++ /dev/null @@ -1,76 +0,0 @@ -Photo Editing on Linux with Krita -================================================================================ -![Figure 1: Annabelle the pygmy goat.](http://www.linux.com/images/stories/41373/fig-1-annabelle.jpg) -Figure 1: Annabelle the pygmy goat. - -[Krita][1] is a wonderful drawing and painting program, and it's also a nice photo editor. Today we will learn how to add text to an image, and how to selectively sharpen portions of a photo. 
- -### Navigating Krita ### - -Like all image creation and editing programs, Krita contains hundreds of tools and options, and redundant controls for exposing and using them. It's worth taking some time to explore it and to see where everything is. - -The default theme for Krita is a dark theme. I'm not a fan of dark themes, but fortunately Krita comes with a nice batch of themes that you can change anytime in the Settings > Theme menu. - -Krita uses docking tool dialogues. Check Settings > Show Dockers to see your tool docks in the right and left panes, and Settings > Dockers to select the ones you want to see. The individual docks can drive you a little bit mad, as some of them open in a tiny squished aspect so you can't see anything. You can drag them to the top and sides of your Krita window, enlarge and shrink them, and you can drag them out of Krita to any location on your computer screen. If you drop a dock onto another dock they automatically create tabs. - -When you have arranged your perfect workspace, you can preserve it in the "Choose Workspace" picker. This is a button at the right end of the Brushes and Stuff toolbar (Settings > Toolbars Shown). This comes with a little batch of preset workspaces, and you can create your own (figure 2). - -![Figure 2: Preserve custom workspaces in the Choose Workspace dialogue.](http://www.linux.com/images/stories/41373/fig-2-workspaces.jpg) -Figure 2: Preserve custom workspaces in the Choose Workspace dialogue. - -Krita has multiple zoom controls. Ctrl+= zooms in, Ctrl+- zooms out, and Ctrl+0 resets to 100%. You can also use the View > Zoom controls, and the zoom slider at the bottom right. There is also a dropdown zoom menu to the left of the slider. - -The Tools menu sits in the left pane, and this contains your shape and selection tools. You have to hover your cursor over each tool to see its label. 
The Tool Options dock always displays options for the current tool you are using, and by default it sits in the right pane. - -### Crop Tool ### - -Of course there is a crop tool in the Tools dock, and it is very easy to use. Draw a rectangle that contains the area you want to keep, use the drag handles to adjust it, and press the Return key. In the Tools Options dock you can choose to apply the crop to all layers or just the current layer, adjust the dimensions by typing in the size values, or size it as a percentage. - -### Adding Text ### - -When you want to add some simple text to a photo, such as a label or a caption, Krita may leave you feeling overwhelmed because it contains so many artistic text effects. But it also supports adding simple text. Click the Text tool, and the Tool Options dock looks like figure 3. - -![Figure 3: Text options.](http://www.linux.com/images/stories/41373/fig-3-text.jpg) -Figure 3: Text options. - -Click the Multiline button. This opens the simple text tool; first draw a rectangle to contain your text, then start typing your text. The Tool Options dock has all the usual text formatting options: font selector, font size, text and background colors, alignment, and a bunch of paragraph styles. When you're finished click the Shape Handling tool, which is the white arrow next to the Text tool button, to adjust the size, shape, and position of your text box. The Tool Options for the Shape Handling tool include borders of various thicknesses, colors, and alignments. Figure 4 shows the gleeful captioned photo I send to my city-trapped relatives. - -![Figure 4: Green acres is the place to be.](http://www.linux.com/images/stories/41373/fig-4-frontdoor.jpg) -Figure 4: Green acres is the place to be. - -How to edit your existing text isn't obvious. Click the Shape Handling tool, and double-click inside the text box. This opens editing mode, which is indicated by the text cursor. 
Now you can select text, add new text, change formatting, and so on. - -### Sharpening Selected Areas ### - -Krita has a number of nice tools for making surgical edits. In figure 5 I want to sharpen Annabelle's face and eyes. (Annabelle lives next door, but she has a crush on my dog and spends a lot of time here. My dog is terrified of her and runs away, but she is not discouraged.) First select an area with the "Select an area by its outline" tool. Then open Filter > Enhance > Unsharp Mask. You have three settings to play with: Half-Size, Amount, and Threshold. Most image editing software has Radius, Amount, and Threshold settings. A radius is half of a diameter, so Half-Size is technically correct, but perhaps needlessly confusing. - -![Figure 5: Selecting an arbitrary area to edit.](http://www.linux.com/images/stories/41373/fig-5-annabelle.jpg) -Figure 5: Selecting an arbitrary area to edit. - -The Half-Size value controls the width of the sharpening lines. You want a large enough value to get a good affect, but not so large that it's obvious. - -The Threshold value determines how different two pixels need to be for the sharpening effect to be applied. 0 = maximum sharpening, and 99 is no sharpening. - -Amount controls the strength of the sharpening effect; higher values apply more sharpening. - -Sharpening is nearly always the last edit you want to make to a photo, because it is affected by anything else you do to your image: crop, resize, color and contrast... if you apply sharpening first and then make other changes it will mess up your sharpening. - -And what, you ask, does unsharp mask mean? The name comes from the sharpening technique: the unsharp mask filter creates a blurred mask of the original, and then layers the unsharp mask over the original. This creates an image that appears sharper and clearer without creating a lot of obvious sharpening artifacts. - -That is all for today. The documentation for Krita is abundant, but disorganized. 
Start at [Krita Tutorials][2], and poke around YouTube for a lot of good video how-tos. - -- [krita Official Web Site][1] - --------------------------------------------------------------------------------- - -via: http://www.linux.com/learn/tutorials/786040-photo-editing-on-linux-with-krita - -作者:[Carla Schroder][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.linux.com/community/forums/person/3734 -[1]:https://krita.org/ -[2]:https://krita.org/learn/tutorials/ \ No newline at end of file diff --git a/sources/tech/20140910 How to create a cloud-based encrypted file system on Linux.md b/sources/tech/20140910 How to create a cloud-based encrypted file system on Linux.md deleted file mode 100644 index 1e8f3c62d1..0000000000 --- a/sources/tech/20140910 How to create a cloud-based encrypted file system on Linux.md +++ /dev/null @@ -1,156 +0,0 @@ -How to create a cloud-based encrypted file system on Linux -================================================================================ -Commercial cloud storage services such as [Amazon S3][1] and [Google Cloud Storage][2] offer highly available, scalable, infinite-capacity object store at affordable costs. To accelerate wide adoption of their cloud offerings, these providers are fostering rich developer ecosystems around their products based on well-defined APIs and SDKs. Cloud-backed file systems are one popular by-product of such active developer communities, for which several open-source implementations exist. - -[S3QL][3] is one of the most popular open-source cloud-based file systems. It is a FUSE-based file system backed by several commercial or open-source cloud storages, such as Amazon S3, Google Cloud Storage, Rackspace CloudFiles, or OpenStack. 
As a full featured file system, S3QL boasts of a number of powerful capabilities, such as unlimited capacity, up to 2TB file sizes, compression, UNIX attributes, encryption, snapshots with copy-on-write, immutable trees, de-duplication, hardlink/symlink support, etc. Any bytes written to an S3QL file system are compressed/encrypted locally before being transmitted to cloud backend. When you attempt to read contents stored in an S3QL file system, the corresponding objects are downloaded from cloud (if not in the local cache), and decrypted/uncompressed on the fly. - -To be clear, S3QL does have limitations. For example, you cannot mount the same S3FS file system on several computers simultaneously, but only once at a time. Also, no ACL (access control list) support is available. - -In this tutorial, I am going to describe **how to set up an encrypted file system on top of Amazon S3, using S3QL**. As an example use case, I will also demonstrate how to run rsync backup tool on top of a mounted S3QL file system. - -### Preparation ### - -To use this tutorial, you will need to create an [Amazon AWS account][4] (sign up is free, but requires a valid credit card). - -If you haven't done so, first [create an AWS access key][4] (access key ID and secret access key) which is needed to authorize S3QL to access your AWS account. - -Now, go to AWS S3 via AWS management console, and create a new empty bucket for S3QL. - -![](https://farm4.staticflickr.com/3841/15170673701_7d0660e11f_c.jpg) - -For best performance, choose a region which is geographically closest to you. - -![](https://farm4.staticflickr.com/3902/15150663516_4928d757fc_b.jpg) - -### Install S3QL on Linux ### - -S3QL is available as a pre-built package on most Linux distros. - -#### On Debian, Ubuntu or Linux Mint: #### - - $ sudo apt-get install s3ql - -#### On Fedora: #### - - $ sudo yum install s3ql - -On Arch Linux, use [AUR][6]. 
- -### Configure S3QL for the First Time ### - -Create authinfo2 file in ~/.s3ql directory, which is a default S3QL configuration file. This file contains information about a required AWS access key, S3 bucket name and encryption passphrase. The encryption passphrase is used to encrypt the randomly-generated master encryption key. This master key is then used to encrypt actual S3QL file system data. - - $ mkdir ~/.s3ql - $ vi ~/.s3ql/authinfo2 - ----------- - - [s3] - storage-url: s3://[bucket-name] - backend-login: [your-access-key-id] - backend-password: [your-secret-access-key] - fs-passphrase: [your-encryption-passphrase] - -The AWS S3 bucket that you specify should be created via AWS management console beforehand. - -Make the authinfo2 file readable to you only for security. - - $ chmod 600 ~/.s3ql/authinfo2 - -### Create an S3QL File System ### - -You are now ready to create an S3QL file system on top of AWS S3. - -Use mkfs.s3ql command to create a new S3QL file system. The bucket name you supply with the command should be matched with the one in authinfo2 file. The "--ssl" option forces you to use SSL to connect to backend storage servers. By default, the mkfs.s3ql command will enable compression and encryption in the S3QL file system. - - $ mkfs.s3ql s3://[bucket-name] --ssl - -You will be asked to enter an encryption passphrase. Type the same passphrase as you defined in ~/.s3ql/autoinfo2 (under "fs-passphrase"). - -If a new file system was created successfully, you will see the following output. - -![](https://farm6.staticflickr.com/5582/14988587230_e182ca3abd_z.jpg) - -### Mount an S3QL File System ### - -Once you created an S3QL file system, the next step is to mount it. - -First, create a local mount point, and then use mount.s3ql command to mount an S3QL file system. - - $ mkdir ~/mnt_s3ql - $ mount.s3ql s3://[bucket-name] ~/mnt_s3ql - -You do not need privileged access to mount an S3QL file system. 
Just make sure that you have write access to the local mount point. - -Optionally, you can specify a compression algorithm to use (e.g., lzma, bzip2, zlib) with "--compress" option. Without it, lzma algorithm is used by default. Note that when you specify a custom compression algorithm, it will apply to newly created data objects, not existing ones. - - $ mount.s3ql --compress bzip2 s3://[bucket-name] ~/mnt_s3ql - -For performance reason, an S3QL file system maintains a local file cache, which stores recently accessed (partial or full) files. You can customize the file cache size using "--cachesize" and "--max-cache-entries" options. - -To allow other users than you to access a mounted S3QL file system, use "--allow-other" option. - -If you want to export a mounted S3QL file system to other machines over NFS, use "--nfs" option. - -After running mount.s3ql, check if the S3QL file system is successfully mounted: - - $ df ~/mnt_s3ql - $ mount | grep s3ql - -![](https://farm4.staticflickr.com/3863/15174861482_27a842da3e_z.jpg) - -### Unmount an S3QL File System ### - -To unmount an S3QL file system (with potentially uncommitted data) safely, use umount.s3ql command. It will wait until all data (including the one in local file system cache) has been successfully transferred and written to backend servers. Depending on the amount of write-pending data, this process can take some time. - - $ umount.s3ql ~/mnt_s3ql - -View S3QL File System Statistics and Repair an S3QL File System - -To view S3QL file system statistics, you can use s3qlstat command, which shows information such as total data/metadata size, de-duplication and compression ratio. - - $ s3qlstat ~/mnt_s3ql - -![](https://farm6.staticflickr.com/5559/15184926905_4815e5827a_z.jpg) - -You can check and repair an S3QL file system with fsck.s3ql command. Similar to fsck command, the file system being checked needs to be unmounted first. 
- - $ fsck.s3ql s3://[bucket-name] - -### S3QL Use Case: Rsync Backup ### - -Let me conclude this tutorial with one popular use case of S3QL: local file system backup. For this, I recommend using the rsync incremental backup tool, especially because S3QL comes with an rsync wrapper script (/usr/lib/s3ql/pcp.py). This script allows you to recursively copy a source tree to an S3QL destination using multiple rsync processes. - - $ /usr/lib/s3ql/pcp.py -h - -![](https://farm4.staticflickr.com/3873/14998096829_d3a64749d0_z.jpg) - -The following command will back up everything in ~/Documents to an S3QL file system via four concurrent rsync connections. - - $ /usr/lib/s3ql/pcp.py -a --quiet --processes=4 ~/Documents ~/mnt_s3ql - -The files will first be copied to the local file cache, and then gradually flushed to the backend servers in the background. - -For more information about S3QL, such as automatic mounting, snapshotting, and immutable trees, I strongly recommend checking out the [official user's guide][7]. Let me know what you think of S3QL, and share your experience with any other similar tools.
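The mount, backup, and unmount steps above lend themselves to a small wrapper script, for example one run nightly from cron. This is only a sketch: the bucket placeholder and paths must be filled in, and it defaults to a dry run that prints each command instead of executing it, since mount.s3ql and pcp.py only exist on a host with S3QL installed:

```shell
#!/bin/sh
# Sketch of a nightly S3QL backup wrapper. With DRY_RUN=1 (the
# default) it only prints the commands; set DRY_RUN=0 on a machine
# that actually has s3ql installed. Bucket and paths are placeholders.
DRY_RUN=${DRY_RUN:-1}
BUCKET="s3://[bucket-name]"
MNT="$HOME/mnt_s3ql"
SRC="$HOME/Documents"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run mkdir -p "$MNT"
run mount.s3ql "$BUCKET" "$MNT"
run /usr/lib/s3ql/pcp.py -a --quiet --processes=4 "$SRC" "$MNT"
run umount.s3ql "$MNT"   # blocks until cached data reaches the backend
```

Keeping umount.s3ql as the last step is deliberate: as noted above, it waits until all write-pending data has been flushed, so the script only finishes once the backup is really on the backend.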
- - - - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/2014/09/create-cloud-based-encrypted-file-system-linux.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:http://aws.amazon.com/s3 -[2]:http://code.google.com/apis/storage/ -[3]:https://bitbucket.org/nikratio/s3ql/ -[4]:http://aws.amazon.com/ -[5]:http://ask.xmodulo.com/create-amazon-aws-access-key.html -[6]:https://aur.archlinux.org/packages/s3ql/ -[7]:http://www.rath.org/s3ql-docs/ \ No newline at end of file diff --git a/sources/tech/20140910 How to download GOG games from the command line on Linux.md b/sources/tech/20140910 How to download GOG games from the command line on Linux.md deleted file mode 100644 index 081e22cb0f..0000000000 --- a/sources/tech/20140910 How to download GOG games from the command line on Linux.md +++ /dev/null @@ -1,75 +0,0 @@ -How to download GOG games from the command line on Linux -================================================================================ -If you are a gamer and a Linux user, you probably were delighted when [GOG][1] announced a few months ago that it will start proposing games for your favorite OS. If you have never heard of GOG before, I encourage you to check out their catalog of “good old games”, reasonably priced, DRM-free, and packed with goodies. However, if the Windows client for GOG existed for quite some time now, an official Linux version is nowhere to be seen. So if waiting for the official version is uncomfortable for you, an unofficial open source program named LGOGDownloader gives you access to your library from the command line. 
- -![](https://farm4.staticflickr.com/3843/15121593356_b13309c70f_z.jpg) - -### Install LGOGDownloader on Linux ### - -For Ubuntu users, the [official page][2] recommends that you download the sources and do: - - $ sudo apt-get install build-essential libcurl4-openssl-dev liboauth-dev libjsoncpp-dev libhtmlcxx-dev libboost-system-dev libboost-filesystem-dev libboost-regex-dev libboost-program-options-dev libboost-date-time-dev libtinyxml-dev librhash-dev help2man - $ tar -xvzf lgogdownloader-2.17.tar.gz - $ cd lgogdownloader-2.17 - $ make release - $ sudo make install - -If you are an Archlinux user, an [AUR package][2] is waiting for you. - -### Usage of LGOGDownloader ### - -Once the program is installed, you will need to identify yourself with the command: - - $ lgogdownloader --login - -![](https://farm6.staticflickr.com/5593/15121593346_9c5d02d5ce_z.jpg) - -Notice that the configuration file if you need it is at ~/.config/lgogdownloader/config.cfg - -Once authenticated, you can list all the games in your library with: - - $ lgogdownloader --list - -![](https://farm6.staticflickr.com/5581/14958040387_8321bb71cf.jpg) - -Then download one with: - - $ lgogdownloader --download --game [game name] - -![](https://farm6.staticflickr.com/5585/14958040367_b1c584a2d1_z.jpg) - -You will notice that lgogdownloader allows you to resume previously interrupted downloads, which is nice because typical game downloads are not small. - -Like every respectable command line utility, you can add various options: - -- **--platform [number]** to select your OS where 1 is for windows and 4 for Linux. -- **--directory [destination]** to download the installer in a particular directory. -- **--language [number]** for a particular language pack (check the manual pages for the number corresponding to your language). -- **--limit-rate [speed]** to limit the downloading rate at a particular speed. 
- -As a side bonus, lgogdownloader also comes with the possibility to check for updates on the GOG website: - - $ lgogdownloader --update-check - -![](https://farm4.staticflickr.com/3882/14958035568_7889acaef0.jpg) - -The result will list the number of forum and private messages you have received, as well as the number of updated games. - -To conclude, lgogdownloader is pretty standard when it comes to command line utilities. I would even say that it is an epitome of clarity and coherence. It is true that we are far in term of features from the relatively recent Steam Linux client, but on the other hand, the official GOG windows client does not do much more than this unofficial Linux version. In other words lgogdownloader is a perfect replacement. I cannot wait to see more Linux compatible games on GOG, especially after their recent announcements to offer DRM free movies, with a thematic around video games. Hopefully we will see an update in the client for when movie catalog matches the game library. - -What do you think of GOG? Would you use the unofficial Linux Client? Let us know in the comments. 
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/2014/09/download-gog-games-command-line-linux.html - -作者:[Adrien Brochard][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/adrien -[1]:http://www.gog.com/ -[2]:https://sites.google.com/site/gogdownloader/home -[3]:https://aur.archlinux.org/packages/lgogdownloader/ \ No newline at end of file diff --git a/sources/tech/20140910 How to monitor server memory usage with Nagios Remote Plugin Executor (NRPE).md b/sources/tech/20140910 How to monitor server memory usage with Nagios Remote Plugin Executor (NRPE).md deleted file mode 100644 index 39e2270b87..0000000000 --- a/sources/tech/20140910 How to monitor server memory usage with Nagios Remote Plugin Executor (NRPE).md +++ /dev/null @@ -1,155 +0,0 @@ -How to monitor server memory usage with Nagios Remote Plugin Executor (NRPE) -================================================================================ -In a [previous tutorial][1]注:此篇文章在同一个更新中,如果也翻译了,发布的时候可修改相应的链接, we have seen how we can set up Nagios Remote Plugin Executor (NRPE) in an existing Nagios setup. However, the scripts and plugins needed to monitor memory usage do not come with stock Nagios. In this tutorial, we will see how we can configure NRPE to monitor RAM usage of a remote server. - -The script that we will use for monitoring RAM is available at [Nagios Exchange][2], as well as the creators' [Github repository][3]. - -Assuming that NRPE has already been set up, we start the process by downloading the script in the server that we want to monitor. 
-
-### Preparing Remote Servers ###
-
-#### On Debian/Ubuntu: ####
-
-    # cd /usr/lib/nagios/plugins/
-    # wget https://raw.githubusercontent.com/justintime/nagios-plugins/master/check_mem/check_mem.pl
-    # mv check_mem.pl check_mem
-    # chmod +x check_mem
-
-#### On RHEL/CentOS: ####
-
-    # cd /usr/lib64/nagios/plugins/ (or /usr/lib/nagios/plugins/ for 32-bit)
-    # wget https://raw.githubusercontent.com/justintime/nagios-plugins/master/check_mem/check_mem.pl
-    # mv check_mem.pl check_mem
-    # chmod +x check_mem
-
-You can check whether the script generates output properly by manually running the following command on localhost. When used with NRPE, this command is supposed to check free memory, warn when free memory is less than 20%, and generate a critical alarm when free memory is less than 10%.
-
-    # ./check_mem -f -w 20 -c 10
-
-----------
-
-    OK - 34.0% (2735744 kB) free.|TOTAL=8035340KB;;;; USED=5299596KB;6428272;7231806;; FREE=2735744KB;;;; CACHES=2703504KB;;;;
-
-If you see something like the above as output, that means the command is working okay.
-
-Now that the script is ready, we define the command to check RAM usage for NRPE. As mentioned before, the command will check free memory, warn when free memory is less than 20%, and generate a critical alarm when free memory is less than 10%.
-
-    # vim /etc/nagios/nrpe.cfg
-
-#### For Debian/Ubuntu: ####
-
-    command[check_mem]=/usr/lib/nagios/plugins/check_mem -f -w 20 -c 10
-
-#### For RHEL/CentOS 32 bit: ####
-
-    command[check_mem]=/usr/lib/nagios/plugins/check_mem -f -w 20 -c 10
-
-#### For RHEL/CentOS 64 bit: ####
-
-    command[check_mem]=/usr/lib64/nagios/plugins/check_mem -f -w 20 -c 10
-
-### Preparing Nagios Server ###
-
-On the Nagios server, we define a custom command for NRPE. The command can be stored in any directory within Nagios. To keep the tutorial simple, we will put the command definition in the /etc/nagios directory.
- -#### For Debian/Ubuntu: #### - - # vim /etc/nagios3/conf.d/nrpe_command.cfg - ----------- - - define command{ - command_name check_nrpe - command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$' - } - -#### For RHEL/CentOS 32 bit: #### - - # vim /etc/nagios/objects/nrpe_command.cfg - ----------- - - define command{ - command_name check_nrpe - command_line /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ - } - -#### For RHEL/CentOS 64 bit: #### - - # vim /etc/nagios/objects/nrpe_command.cfg - ----------- - - define command{ - command_name check_nrpe - command_line /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ - } - -Now we define the service check in Nagios. - -#### On Debian/Ubuntu: #### - - # vim /etc/nagios3/conf.d/nrpe_service_check.cfg - ----------- - - define service{ - use local-service - host_name remote-server - service_description Check RAM - check_command check_nrpe!check_mem - } - -#### On RHEL/CentOS: #### - - # vim /etc/nagios/objects/nrpe_service_check.cfg - ----------- - - define service{ - use local-service - host_name remote-server - service_description Check RAM - check_command check_nrpe!check_mem - } - -Finally, we restart the Nagios service. - -#### On Debian/Ubuntu: #### - - # service nagios3 restart - -#### On RHEL/CentOS 6: #### - - # service nagios restart - -#### On RHEL/CentOS 7: #### - - # systemctl restart nagios.service - -### Troubleshooting ### - -Nagios should start checking RAM usage of a remote-server using NRPE. If you are having any problem, you could check the following. - - -- Make sure that NRPE port is allowed all the way to the remote host. Default NRPE port is TCP 5666. 
-- You could try manually checking NRPE operation by executing the check_nrpe command: /usr/lib/nagios/plugins/check_nrpe -H remote-server
-- You could also try to run the check_mem command manually: /usr/lib/nagios/plugins/check_nrpe -H remote-server -c check_mem
-- On the remote server, set debug=1 in /etc/nagios/nrpe.cfg. Restart the NRPE service and check the log file /var/log/messages (RHEL/CentOS) or /var/log/syslog (Debian/Ubuntu). The log files should contain relevant information if there are any configuration or permission errors. If there are no hits in the log, it is very likely that the requests are not reaching the remote server due to port filtering at some point.
-
-To sum up, this tutorial demonstrated how we can easily tune NRPE to monitor RAM usage of remote servers. The process is as simple as downloading the script, defining the commands, and restarting the services. Hope this helps.
-
--------------------------------------------------------------------------------
-
-via: http://xmodulo.com/2014/09/monitor-server-memory-usage-nagios-remote-plugin-executor.html
-
-作者:[Sarmed Rahman][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:http://xmodulo.com/author/sarmed
-[1]:http://xmodulo.com/2014/03/nagios-remote-plugin-executor-nrpe-linux.html
-[2]:http://exchange.nagios.org/directory/Plugins/Operating-Systems/Solaris/check_mem-2Epl/details
-[3]:https://github.com/justintime/nagios-plugins/blob/master/check_mem/check_mem.pl
\ No newline at end of file
diff --git a/sources/tech/20140910 How to set up Nagios Remote Plugin Executor (NRPE) in Linux.md b/sources/tech/20140910 How to set up Nagios Remote Plugin Executor (NRPE) in Linux.md
deleted file mode 100644
index 568400420c..0000000000
--- a/sources/tech/20140910 How to set up Nagios Remote Plugin Executor (NRPE) in Linux.md
+++ /dev/null
@@ -1,236 +0,0 @@
-How to set up Nagios Remote Plugin Executor (NRPE) in Linux
-================================================================================
-As far as network management is concerned, Nagios is one of the most powerful tools. Nagios can monitor the reachability of remote hosts, as well as the state of services running on them. However, what if we want to monitor something other than network services for a remote host? For example, we may want to monitor the disk utilization or [CPU processor load][1] of a remote host. Nagios Remote Plugin Executor (NRPE) is a tool that can help with doing that. NRPE allows one to execute Nagios plugins installed on remote hosts, and integrate them with an [existing Nagios server][2].
-
-This tutorial will cover how to set up NRPE on an existing Nagios deployment. The tutorial is primarily divided into two parts:
-
-- Configure remote hosts.
-- Configure a Nagios monitoring server.
-
-We will then finish off by defining some custom commands that can be used with NRPE.
-
-### Configure Remote Hosts for NRPE ###
-
-#### Step One: Installing NRPE Service ####
-
-You need to install the NRPE service on every remote host that you want to monitor using NRPE. The NRPE service daemon on each remote host will then communicate with a Nagios monitoring server.
-
-The necessary packages for the NRPE service can easily be installed using apt-get or yum, depending on the platform. In the case of CentOS, we will need to [add the Repoforge repository][3] as NRPE is not available in the CentOS repositories.
-
-**On Debian, Ubuntu or Linux Mint:**
-
-    # apt-get install nagios-nrpe-server
-
-**On CentOS, Fedora or RHEL:**
-
-    # yum install nagios-nrpe
-
-#### Step Two: Preparing Configuration File ####
-
-The configuration file /etc/nagios/nrpe.cfg is similar for Debian-based and RedHat-based systems. The configuration file is backed up, and then updated as follows.
-
-    # vim /etc/nagios/nrpe.cfg
-
-----------
-
-    ## NRPE service port can be customized ##
-    server_port=5666
-
-    ## the nagios monitoring server is permitted ##
-    ## NOTE: There is no space after the comma ##
-    allowed_hosts=127.0.0.1,X.X.X.X-IP_v4_of_Nagios_server
-
-    ## The following examples use hard-coded command arguments.
-    ## These parameters can be modified as needed.
-
-    ## NOTE: For CentOS 64 bit, use /usr/lib64 instead of /usr/lib ##
-
-    command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
-    command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
-    command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
-    command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
-    command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
-
-Now that the configuration file is ready, the NRPE service is ready to be fired up.
-
-#### Step Three: Initiating NRPE Service ####
-
-For RedHat-based systems, the NRPE service needs to be added as a startup service.
-
-**On Debian, Ubuntu, Linux Mint:**
-
-    # service nagios-nrpe-server restart
-
-**On CentOS, Fedora or RHEL:**
-
-    # service nrpe restart
-    # chkconfig nrpe on
-
-#### Step Four: Verifying NRPE Service Status ####
-
-Information about the NRPE daemon status can be found in the system log. For a Debian-based system, the log file will be /var/log/syslog. The log file for a RedHat-based system will be /var/log/messages. A sample log is provided below for reference.
-
-    nrpe[19723]: Starting up daemon
-    nrpe[19723]: Listening for connections on port 5666
-    nrpe[19723]: Allowing connections from: 127.0.0.1,X.X.X.X
-
-In case a firewall is running, TCP port 5666, which is used by the NRPE daemon, should be open.
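A quick way to confirm that nothing between the Nagios server and the remote host filters that port is a plain TCP connection test. A minimal sketch (it relies on bash's built-in /dev/tcp redirection and the coreutils timeout command; "remote-server" is a placeholder for your own host):

```shell
#!/bin/bash
# check_port HOST PORT: succeeds if a TCP connection can be opened within 3 seconds.
# Uses bash's /dev/tcp pseudo-device, so no extra packages are needed.
check_port() {
    timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

# "remote-server" is a placeholder from this tutorial; substitute your own host.
if check_port remote-server 5666; then
    echo "NRPE port 5666 is reachable"
else
    echo "NRPE port 5666 is NOT reachable -- check firewalls along the path"
fi
```

Run this from the Nagios server: a failure here points at network filtering rather than at the NRPE configuration itself.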
-
-    # netstat -tpln | grep 5666
-
-----------
-
-    tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN 19885/nrpe
-
-### Configure Nagios Monitoring Server for NRPE ###
-
-The first step in configuring an existing Nagios monitoring server for NRPE is to install the NRPE plugin on the server.
-
-#### Step One: Installing NRPE Plugin ####
-
-In case the Nagios server is running on a Debian-based system (Debian, Ubuntu or Linux Mint), the necessary package can be installed using apt-get.
-
-    # apt-get install nagios-nrpe-plugin
-
-After the plugin is installed, the check_nrpe command, which comes with the plugin, is modified a bit.
-
-    # vim /etc/nagios-plugins/config/check_nrpe.cfg
-
-----------
-
-    ## the default command is overwritten ##
-    define command{
-        command_name check_nrpe
-        command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$'
-    }
-
-In case the Nagios server is running on a RedHat-based system (CentOS, Fedora or RHEL), you can install the NRPE plugin using yum. On CentOS, [adding the Repoforge repository][4] is necessary.
-
-    # yum install nagios-plugins-nrpe
-
-Now that the NRPE plugin is installed, proceed to configure the Nagios server following the rest of the steps.
-
-#### Step Two: Defining Nagios Command for NRPE Plugin ####
-
-First, we need to define a command in Nagios for using NRPE.
-
-    # vim /etc/nagios/objects/commands.cfg
-
-----------
-
-    ## NOTE: For CentOS 64 bit, use /usr/lib64 instead of /usr/lib ##
-    define command{
-        command_name check_nrpe
-        command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$'
-    }
-
-#### Step Three: Adding Host and Command Definition ####
-
-Next, define remote host(s) and commands to execute remotely on them.
-
-The following shows sample definitions of a remote host and a command to execute on the host. Naturally, your configuration will be adjusted based on your requirements. The path to the file is slightly different for Debian-based and RedHat-based systems, but the content of the files is identical.
-
-**On Debian, Ubuntu or Linux Mint:**
-
-    # vim /etc/nagios3/conf.d/nrpe.cfg
-
-**On CentOS, Fedora or RHEL:**
-
-    # vim /etc/nagios/objects/nrpe.cfg
-
-----------
-
-    define host{
-        use                     linux-server
-        host_name               server-1
-        alias                   server-1
-        address                 X.X.X.X-IPv4_address_of_remote_host
-    }
-
-    define service {
-        host_name               server-1
-        service_description     Check Load
-        check_command           check_nrpe!check_load
-        check_interval          1
-        use                     generic-service
-    }
-
-#### Step Four: Restarting Nagios Service ####
-
-Before restarting Nagios, the updated configuration is verified with a dry run.
-
-**On Ubuntu, Debian, or Linux Mint:**
-
-    # nagios3 -v /etc/nagios3/nagios.cfg
-
-**On CentOS, Fedora or RHEL:**
-
-    # nagios -v /etc/nagios/nagios.cfg
-
-If everything goes well, the Nagios service can be restarted.
-
-    # service nagios restart
-
-![](https://farm8.staticflickr.com/7024/13330387845_0bde8b6db5_z.jpg)
-
-### Configuring Custom Commands with NRPE ###
-
-#### Setup on Remote Servers ####
-
-The following is a list of custom commands that can be used with NRPE. These commands are defined in the file /etc/nagios/nrpe.cfg located on the remote servers.
-
-    ## Warning status when load average exceeds 1, 2 and 1 for the 1, 5, 15 minute intervals, respectively.
-    ## Critical status when load average exceeds 3, 5 and 3 for the 1, 5, 15 minute intervals, respectively.
-    command[check_load]=/usr/lib/nagios/plugins/check_load -w 1,2,1 -c 3,5,3
-
-    ## Warning level 25% and critical level 10% for free space of /home.
-    ## Could be customized to monitor any partition (e.g. /dev/sdb1, /, /var, /home)
-    command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 25% -c 10% -p /home
-
-    ## Warn if number of instances for process_ABC exceeds 10. Critical for 20 ##
-    command[check_process_ABC]=/usr/lib/nagios/plugins/check_procs -w 1:10 -c 1:20 -C process_ABC
-
-    ## Critical if the number of instances for process_XYZ drops below 1 ##
-    command[check_process_XYZ]=/usr/lib/nagios/plugins/check_procs -w 1: -c 1: -C process_XYZ
-
-#### Setup on Nagios Monitoring Server ####
-
-To apply the custom commands defined above, we modify the service definitions on the Nagios monitoring server as follows. The service definitions could go into the file where all the services are defined (e.g., /etc/nagios/objects/nrpe.cfg or /etc/nagios3/conf.d/nrpe.cfg).
-
-    ## example 1: check process XYZ ##
-    define service {
-        host_name               server-1
-        service_description     Check Process XYZ
-        check_command           check_nrpe!check_process_XYZ
-        check_interval          1
-        use                     generic-service
-    }
-
-    ## example 2: check disk state ##
-    define service {
-        host_name               server-1
-        service_description     Check Disk
-        check_command           check_nrpe!check_disk
-        check_interval          1
-        use                     generic-service
-    }
-
-To sum up, NRPE is a powerful add-on to Nagios as it allows monitoring a remote server in a highly configurable fashion. Using NRPE, we can monitor server load, running processes, logged-in users, disk states and other parameters.
-
-Hope this helps.
-
--------------------------------------------------------------------------------
-
-via: http://xmodulo.com/2014/03/nagios-remote-plugin-executor-nrpe-linux.html
-
-作者:[Sarmed Rahman][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:http://xmodulo.com/author/sarmed
-[1]:http://xmodulo.com/2012/08/how-to-measure-average-cpu-utilization.html
-[2]:http://xmodulo.com/2013/12/install-configure-nagios-linux.html
-[3]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html
-[4]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html
diff --git a/sources/tech/20140911 Sysstat--All-in-One System Performance and Usage Activity Monitoring Tool For Linux.md b/sources/tech/20140911 Sysstat--All-in-One System Performance and Usage Activity Monitoring Tool For Linux.md
deleted file mode 100644
index 541999538b..0000000000
--- a/sources/tech/20140911 Sysstat--All-in-One System Performance and Usage Activity Monitoring Tool For Linux.md
+++ /dev/null
@@ -1,124 +0,0 @@
-Sysstat – All-in-One System Performance and Usage Activity Monitoring Tool For Linux
-================================================================================
-**Sysstat** is a really handy tool which comes with a number of utilities to monitor system resources, their performance and usage activity. A number of the utilities that we all use on a daily basis come with the sysstat package. It also provides a tool that can be scheduled using cron to collect all performance and activity data.
-
-![Install Sysstat in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/sysstat.png)
-
-Install Sysstat in Linux
-
-Following is the list of tools included in the sysstat package.
-
-### Sysstat Features ###
-
-- [**iostat**][1]: Reports CPU statistics and I/O statistics for I/O devices.
-- **mpstat**: Details about CPUs (individual or combined).
-- **pidstat**: Statistics about running processes/tasks, CPU, memory etc.
-- **sar**: Saves and reports details about different resources (CPU, memory, IO, network, kernel etc.).
-- **sadc**: System activity data collector, used for collecting data in the backend for sar.
-- **sa1**: Fetches and stores binary data in the sadc data file. This is used with sadc.
-- **sa2**: Summarizes the daily report to be used with sar.
-- **sadf**: Used for displaying data generated by sar in different formats (CSV or XML).
-- **sysstat**: Man page for the sysstat utility.
-- **nfsiostat**-sysstat: I/O statistics for NFS.
-- **cifsiostat**: Statistics for CIFS.
-
-Recently, on the 17th of June 2014, **Sysstat 11.0.0** (stable version) was released with some interesting new features, as follows.
-
-The pidstat command has been enhanced with some new options: the first is “-R”, which will provide information about the policy and task scheduling priority. The second one is “**-G**”, with which we can search processes by name and get the list of all matching threads.
-
-Some new enhancements have been brought to sar, sadc and sadf with regards to the data files: now data files can be renamed using “**saYYYYMMDD**” instead of “**saDD**” with the option **-D**, and can be located in a directory different from “**/var/log/sa**”. We can define a new directory by setting the variable “SA_DIR”, which is used by sa1 and sa2.
-
-### Installation of Sysstat in Linux ###
-
-The ‘**sysstat**‘ package is also available to install from the default repository as a package in all major Linux distributions. However, the package which is available from the repo is a little old and outdated. That is the reason we are going to download and install the latest version of sysstat (i.e. version **11.0.0**) from the source package.
-
-First download the latest version of the sysstat package using the following link, or you may also use the **wget** command to download it directly in the terminal.
-
-- [http://sebastien.godard.pagesperso-orange.fr/download.html][2]
-
-    # wget http://pagesperso-orange.fr/sebastien.godard/sysstat-11.0.0.tar.gz
-
-![Download Sysstat Package](http://www.tecmint.com/wp-content/uploads/2014/08/Download-Sysstat.png)
-
-Download Sysstat Package
-
-Next, extract the downloaded package and go inside that directory to begin the compile process.
-
-    # tar -xvf sysstat-11.0.0.tar.gz
-    # cd sysstat-11.0.0/
-
-Here you will have two options for compilation:
-
-a). First, you can use **iconfig** (which will give you the flexibility of choosing/entering customized values for each parameter).
-
-    # ./iconfig
-
-![Sysstat iconfig Command](http://www.tecmint.com/wp-content/uploads/2014/08/Sysstat-iconfig-Command.png)
-
-Sysstat iconfig Command
-
-b). Second, you can use the standard **configure** command to define options in a single line. You can run the **./configure --help** command to get a list of the different supported options.
-
-    # ./configure --help
-
-![Sysstat Configure Help](http://www.tecmint.com/wp-content/uploads/2014/08/Configure-Help.png)
-
-Sysstat Configure Help
-
-Here, we are moving ahead with the standard option, i.e. the **./configure** command, to compile the sysstat package.
-
-    # ./configure
-    # make
-    # make install
-
-![Configure Sysstat in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/Configure-Sysstat.png)
-
-Configure Sysstat in Linux
-
-After the compilation process completes, you will see output similar to the above. Now, verify the sysstat version by running the following command.
-
-    # mpstat -V
-
-    sysstat version 11.0.0
-    (C) Sebastien Godard (sysstat orange.fr)
-
-### Updating Sysstat in Linux ###
-
-By default sysstat uses “**/usr/local**” as its prefix directory. So, all binaries/utilities will get installed in the “**/usr/local/bin**” directory. If you have an existing sysstat package installed, those will be in “**/usr/bin**”.
-
-Due to the existing sysstat package, you will not see your updated version reflected, because your “**$PATH**” variable doesn’t have “**/usr/local/bin**” set. So, make sure that “**/usr/local/bin**” exists in your “$PATH”, or set the **--prefix** option to “**/usr**” during compilation, and remove the existing version before updating.
-
-    # yum remove sysstat          [On RedHat based System]
-    # apt-get remove sysstat      [On Debian based System]
-
-----------
-
-    # ./configure --prefix=/usr
-    # make
-    # make install
-
-Now again, verify the updated version of sysstat using the same ‘mpstat’ command with the option ‘**-V**’.
-
-    # mpstat -V
-
-    sysstat version 11.0.0
-    (C) Sebastien Godard (sysstat orange.fr)
-
-**Reference**: For more information, please go through the [Sysstat Documentation][3].
-
-That’s it for now. In my upcoming article, I will show some practical examples and usages of the sysstat commands. Till then, stay tuned for updates, and don’t forget to add your valuable thoughts about the article in the comment section below.
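The article mentions that sysstat's collector can be scheduled with cron. As a rough sketch, a schedule along the lines of what distribution packages ship could look like this; the sa1/sa2 paths below are assumptions that depend on the --prefix you chose (packaged installs usually put them under /usr/lib/sa or /usr/lib64/sa), so verify them on your system:

```shell
#!/bin/bash
# Sketch: a cron schedule for sysstat's collectors, in /etc/cron.d format.
# The sa1/sa2 paths assume --prefix=/usr/local as used in this article; adjust locally.
cron_schedule() {
    cat <<'EOF'
# Collect system activity data every 10 minutes
*/10 * * * * root /usr/local/lib/sa/sa1 1 1
# Generate the daily summary report shortly before midnight
53 23 * * * root /usr/local/lib/sa/sa2 -A
EOF
}

# Review the schedule, then install it (as root) with:
#   cron_schedule > /etc/cron.d/sysstat
cron_schedule
```

With data being collected this way, sar can later report historical statistics from the files under /var/log/sa.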
-
--------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/install-sysstat-in-linux/
-
-作者:[Kuldeep Sharma][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/kuldeepsharma47/
-[1]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
-[2]:http://sebastien.godard.pagesperso-orange.fr/download.html
-[3]:http://sebastien.godard.pagesperso-orange.fr/documentation.html
\ No newline at end of file
diff --git a/sources/tech/20140925 Linux FAQs with Answers--How to configure a static IP address on CentOS 7.md b/sources/tech/20140925 Linux FAQs with Answers--How to configure a static IP address on CentOS 7.md
deleted file mode 100644
index 27d48e890c..0000000000
--- a/sources/tech/20140925 Linux FAQs with Answers--How to configure a static IP address on CentOS 7.md
+++ /dev/null
@@ -1,78 +0,0 @@
-Linux FAQs with Answers--How to configure a static IP address on CentOS 7
-================================================================================
-> **Question**: On CentOS 7, I want to switch from DHCP to static IP address configuration with one of my network interfaces. What is a proper way to assign a static IP address to a network interface permanently on CentOS or RHEL 7?
-
-If you want to set up a static IP address on a network interface in CentOS 7, there are several different ways to do it, depending on whether or not you want to use Network Manager for that.
-
-Network Manager is a dynamic network control and configuration system that attempts to keep network devices and connections up and active when they are available. CentOS/RHEL 7 comes with the Network Manager service installed and enabled by default.
- -To verify the status of Network Manager service: - - $ systemctl status NetworkManager.service - -To check which network interface is managed by Network Manager, run: - - $ nmcli dev status - -![](https://farm4.staticflickr.com/3861/15295802711_a102a3574d_z.jpg) - -If the output of nmcli shows "connected" for a particular interface (e.g., enp0s3 in the example), it means that the interface is managed by Network Manager. You can easily disable Network Manager for a particular interface, so that you can configure it on your own for a static IP address. - -Here are **two different ways to assign a static IP address to a network interface on CentOS 7**. We will be configuring a network interface named enp0s3. - -### Configure a Static IP Address without Network Manager ### - -Go to the /etc/sysconfig/network-scripts directory, and locate its configuration file (ifcfg-enp0s3). Create it if not found. - -![](https://farm4.staticflickr.com/3911/15112399977_d3df8e15f5_z.jpg) - -Open the configuration file and edit the following variables: - -![](https://farm4.staticflickr.com/3880/15112184199_f4cbf269a6.jpg) - -In the above, "NM_CONTROLLED=no" indicates that this interface will be set up using this configuration file, instead of being managed by Network Manager service. "ONBOOT=yes" tells the system to bring up the interface during boot. - -Save changes and restart the network service using the following command: - - # systemctl restart network.service - -Now verify that the interface has been properly configured: - - # ip add - -![](https://farm6.staticflickr.com/5593/15112397947_ac69a33fb4_z.jpg) - -### Configure a Static IP Address with Network Manager ### - -If you want to use Network Manager to manage the interface, you can use nmtui (Network Manager Text User Interface) which provides a way to configure Network Manager in a terminal environment. - -Before using nmtui, first set "NM_CONTROLLED=yes" in /etc/sysconfig/network-scripts/ifcfg-enp0s3. 
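Since the exact variables only appear in the screenshots above, here is what a static-IP version of ifcfg-enp0s3 with Network Manager enabled might look like. All addresses are example values for a hypothetical 192.168.1.0/24 network; adjust them to your own:

```shell
# Example /etc/sysconfig/network-scripts/ifcfg-enp0s3 for a static address.
# Every address below is a sample value, not something to copy verbatim.
TYPE=Ethernet
DEVICE=enp0s3
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=yes
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
```

If you are instead following the first, Network-Manager-free approach described earlier, set NM_CONTROLLED=no in the same file.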
-
-Now let's install nmtui as follows.
-
-    # yum install NetworkManager-tui
-
-Then go ahead and edit the Network Manager configuration of the enp0s3 interface:
-
-    # nmtui edit enp0s3
-
-The following screen will allow us to manually enter the same information that is contained in /etc/sysconfig/network-scripts/ifcfg-enp0s3.
-
-Use the arrow keys to navigate this screen, press Enter to select from a list of values (or fill in the desired values), and finally click OK at the bottom right:
-
-![](https://farm4.staticflickr.com/3878/15295804521_4165c97828_z.jpg)
-
-Finally, restart the network service.
-
-    # systemctl restart network.service
-
-and you're ready to go.
-
--------------------------------------------------------------------------------
-
-via: http://ask.xmodulo.com/configure-static-ip-address-centos7.html
-
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
\ No newline at end of file
diff --git a/sources/tech/20140926 How To Reset Root Password On CentOS 7.md b/sources/tech/20140926 How To Reset Root Password On CentOS 7.md
deleted file mode 100644
index be74d350e2..0000000000
--- a/sources/tech/20140926 How To Reset Root Password On CentOS 7.md
+++ /dev/null
@@ -1,53 +0,0 @@
-[su-kaiyao]翻译中
-
-How To Reset Root Password On CentOS 7
-================================================================================
-The way to reset the root password on CentOS 7 is totally different from CentOS 6. Let me show you how to reset the root password on CentOS 7.
-
-1 – In the boot grub menu select the option to edit.
-
-![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_003.png)
-
-2 – Select the option to edit (e).
-
-![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_005.png)
-
-3 – Go to the line beginning with linux16 and replace ro with rw init=/sysroot/bin/sh.
-
-![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_006.png)
-
-4 – Now press Control+x to start in single user mode.
-
-![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/Selection_007.png)
-
-5 – Now access the system with this command.
-
-    chroot /sysroot
-
-6 – Reset the password.
-
-    passwd root
-
-7 – Update the SELinux information.
-
-    touch /.autorelabel
-
-8 – Exit chroot.
-
-    exit
-
-9 – Reboot your system.
-
-    reboot
-
-That’s it. Enjoy.
-
--------------------------------------------------------------------------------
-
-via: http://www.unixmen.com/reset-root-password-centos-7/
-
-作者:M.el Khamlichi
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
diff --git a/sources/tech/20140926 How to manage configurations in Linux with Puppet and Augeas.md b/sources/tech/20140926 How to manage configurations in Linux with Puppet and Augeas.md
deleted file mode 100644
index 12cc09c445..0000000000
--- a/sources/tech/20140926 How to manage configurations in Linux with Puppet and Augeas.md
+++ /dev/null
@@ -1,151 +0,0 @@
-How to manage configurations in Linux with Puppet and Augeas
-================================================================================
-Although [Puppet][1] (Translator's note: an earlier update covered this article, file name "20140808 How to install Puppet server and client on CentOS and RHEL.md"; if it has been translated and published, change this link to the published address) is a really unique and useful tool, there are situations where you could use a bit of a different approach. Situations like modification of configuration files which are already present on several of your servers and are unique on each one of them at the same time. Folks from Puppet Labs realized this as well, and integrated a great tool called [Augeas][2] that is designed exactly for this usage.
- -Augeas can be best thought of as filling for the gaps in Puppet's capabilities where an object­specific resource type (such as the host resource to manipulate /etc/hosts entries) is not yet available. In this howto, you will learn how to use Augeas to ease your configuration file management. - -### What is Augeas? ### - -Augeas is basically a configuration editing tool. It parses configuration files in their native formats and transforms them into a tree. Configuration changes are made by manipulating this tree and saving it back into native config files. - -### What are we going to achieve in this tutorial? ### - -We will install and configure the Augeas tool for use with our previously built Puppet server. We will create and test several different configurations with this tool, and learn how to properly use it to manage our system configurations. - -### Prerequisites ### - -We will need a working Puppet server and client setup. If you don't have it, please follow my previous tutorial. - -Augeas package can be found in our standard CentOS/RHEL repositories. Unfortunately, Puppet uses Augeas ruby wrapper which is only available in the puppetlabs repository (or [EPEL][4]). If you don't have this repository in your system already, add it using following command: - -On CentOS/RHEL 6.5: - - # rpm -­ivh https://yum.puppetlabs.com/el/6.5/products/x86_64/puppetlabs­release­6­10.noarch.rpm - -On CentOS/RHEL 7: - - # rpm -­ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs­release­7­10.noarch.rpm - -After you have successfully added this repository, install Ruby­Augeas in your system: - - # yum install ruby­augeas - -Or if you are continuing from my last tutorial, install this package using the Puppet way. 
-Modify your custom_utils class inside of your /etc/puppet/manifests/site.pp to contain "ruby-augeas" inside of the packages array:
-
-    class custom_utils {
-        package { ["nmap","telnet","vim-enhanced","traceroute","ruby-augeas"]:
-            ensure => latest,
-            allow_virtual => false,
-        }
-    }
-
-### Augeas without Puppet ###
-
-As mentioned at the beginning, Augeas does not originally come from Puppet Labs, which means we can still use it even without Puppet itself. This approach can be useful for verifying your modifications and ideas before applying them in your Puppet environment. To make this possible, you need to install one additional package in your system. To do so, please execute the following command:
-
-    # yum install augeas
-
-### Puppet Augeas Examples ###
-
-For demonstration, here are a few example Augeas use cases.
-
-#### Management of /etc/sudoers file ####
-
-1. Add sudo rights to wheel group
-
-This example will show you how to add simple sudo rights for the group %wheel in your GNU/Linux system.
-
-    # Install sudo package
-    package { 'sudo':
-        ensure => installed, # ensure sudo package is installed
-    }
-
-    # Allow users belonging to wheel group to use sudo
-    augeas { 'sudo_wheel':
-        context => '/files/etc/sudoers', # The target file is /etc/sudoers
-        changes => [
-            # allow wheel users to use sudo
-            'set spec[user = "%wheel"]/user %wheel',
-            'set spec[user = "%wheel"]/host_group/host ALL',
-            'set spec[user = "%wheel"]/host_group/command ALL',
-            'set spec[user = "%wheel"]/host_group/command/runas_user ALL',
-        ]
-    }
-
-Now let's explain what the code does: **spec** defines the user section in /etc/sudoers, **[user]** defines the given user from the array, and all definitions behind a slash ( / ) are subparts of this user. So in a typical configuration this would be represented as:
-
-    user host_group/host host_group/command host_group/command/runas_user
-
-Which is translated into this line of /etc/sudoers:
-
-    %wheel ALL = (ALL) ALL
-
-2. Add command alias
-
-The following part will show you how to define a command alias which you can use inside your sudoers file.
-
-    # Create new alias SERVICES which contains some basic privileged commands
-    augeas { 'sudo_cmdalias':
-        context => '/files/etc/sudoers', # The target file is /etc/sudoers
-        changes => [
-            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/name SERVICES",
-            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[1] /sbin/service",
-            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[2] /sbin/chkconfig",
-            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[3] /bin/hostname",
-            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[4] /sbin/shutdown",
-        ]
-    }
-
-The syntax of sudo command aliases is pretty simple: **Cmnd_Alias** defines the section of command aliases, **[alias/name]** binds all entries to the given alias name, /alias/name **SERVICES** defines the actual alias name, and alias/command is the array of all the commands that should be part of this alias. The output of this command will be the following:
-
-    Cmnd_Alias SERVICES = /sbin/service , /sbin/chkconfig , /bin/hostname , /sbin/shutdown
-
-For more information about /etc/sudoers, visit the [official documentation][5].
-
-#### Adding users to a group ####
-
-To add users to groups using Augeas, you might want to add the new user either after the gid field or after the last user. We'll use the group SVN for the sake of this example. This can be achieved by using the following command:
-
-In Puppet:
-
-    augeas { 'augeas_mod_group':
-        context => '/files/etc/group', # The target file is /etc/group
-        changes => [
-            "ins user after svn/*[self::gid or self::user][last()]",
-            "set svn/user[last()] john",
-        ]
-    }
-
-Using augtool:
-
-    augtool> ins user after /files/etc/group/svn/*[self::gid or self::user][last()]
-    augtool> set /files/etc/group/svn/user[last()] john
-
-### Summary ###
-
-By now, you should have a good idea of how to use Augeas in your Puppet projects.
Feel free to experiment with it and definitely go through the official Augeas documentation. It will help you get the idea how to use Augeas properly in your own projects, and it will show you how much time you can actually save by using it. - -If you have any questions feel free to post them in the comments and I will do my best to answer them and advise you. - -### Useful Links ### - -- [http://www.watzmann.net/categories/augeas.html][6]: contains a lot of tutorials focused on Augeas usage. -- [http://projects.puppetlabs.com/projects/1/wiki/puppet_augeas][7]: Puppet wiki with a lot of practical examples. - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/2014/09/manage-configurations-linux-puppet-augeas.html - -作者:[Jaroslav Štěpánek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/jaroslav -[1]:http://xmodulo.com/2014/08/install-puppet-server-client-centos-rhel.html -[2]:http://augeas.net/ -[3]:http://xmodulo.com/manage-configurations-linux-puppet-augeas.html -[4]:http://xmodulo.com/2013/03/how-to-set-up-epel-repository-on-centos.html -[5]:http://augeas.net/docs/references/lenses/files/sudoers-aug.html -[6]:http://www.watzmann.net/categories/augeas.html -[7]:http://projects.puppetlabs.com/projects/1/wiki/puppet_augeas \ No newline at end of file diff --git a/sources/tech/20140926 How to monitor user login history on CentOS with utmpdump.md b/sources/tech/20140926 How to monitor user login history on CentOS with utmpdump.md deleted file mode 100644 index 119465e6ed..0000000000 --- a/sources/tech/20140926 How to monitor user login history on CentOS with utmpdump.md +++ /dev/null @@ -1,120 +0,0 @@ -How to monitor user login history on CentOS with utmpdump -================================================================================ -Keeping, 
-maintaining and analyzing logs (i.e., accounts of events that have happened during a certain period of time or are currently happening) are among the most basic and essential tasks of a Linux system administrator. In the case of user management, examining user logon and logout logs (both failed and successful) can alert us about any potential security breaches or unauthorized use of our system. For example, remote logins from unknown IP addresses, or accounts being used outside working hours or during vacation leave, should raise a red flag.
-
-On a CentOS system, user login history is stored in the following binary files:
-
-- /var/run/utmp (which logs currently open sessions) is used by the who and w tools to show who is currently logged on and what they are doing, and also by uptime to display system uptime.
-- /var/log/wtmp (which stores the history of connections to the system) is used by the last tool to show the listing of last logged-in users.
-- /var/log/btmp (which logs failed login attempts) is used by the lastb utility to show the listing of the last failed login attempts.
-
-![](https://farm4.staticflickr.com/3871/15106743340_bd13fcfe1c_o.png)
-
-In this post I'll show you how to use utmpdump, a simple program from the sysvinit-tools package that can be used to dump these binary log files in text format for inspection. This tool is available by default on stock CentOS 6 and 7. The information gleaned from utmpdump is more comprehensive than the output of the tools mentioned earlier, and that's what makes it a nice utility for the job. Besides, utmpdump can be used to modify utmp or wtmp, which can be useful if you want to fix any corrupted entries in the binary logs.
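Because utmpdump renders every binary record as a row of bracketed fields, its output is easy to post-process with a small script. Here is a minimal Python sketch (not from the original article; the sample record is made up for illustration) that splits one dump line into its fields:

```python
import re

def parse_utmpdump_line(line):
    """Split one line of utmpdump output into its eight bracketed fields."""
    # Every field is printed as [value]; strip the alignment padding.
    return [field.strip() for field in re.findall(r"\[([^\]]*)\]", line)]

# A sample wtmp record in the format utmpdump prints (values are illustrative).
sample = ("[7] [01463] [ts/0] [root    ] [pts/0       ] "
          "[192.168.0.101       ] [192.168.0.101   ] "
          "[Fri Sep 19 12:04:21 2014 ART]")

fields = parse_utmpdump_line(sample)
print(fields[0], fields[3], fields[6])  # record type, user, remote IP
```

The eight items returned correspond one-to-one to the columns described in the next section.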
- -### How to Use Utmpdump and Interpret its Output ### - -As we mentioned earlier, these log files, as opposed to other logs most of us are familiar with (e.g., /var/log/messages, /var/log/cron, /var/log/maillog), are saved in binary file format, and thus we cannot use pagers such as less or more to view their contents. That is where utmpdump saves the day. - -In order to display the contents of /var/run/utmp, run the following command: - - # utmpdump /var/run/utmp - -![](https://farm6.staticflickr.com/5595/15106696599_60134e3488_z.jpg) - -To do the same with /var/log/wtmp: - - # utmpdump /var/log/wtmp - -![](https://farm6.staticflickr.com/5591/15106868718_6321c6ff11_z.jpg) - -and finally with /var/log/btmp: - - # utmpdump /var/log/btmp - -![](https://farm6.staticflickr.com/5562/15293066352_c40bc98ca4_z.jpg) - -As you can see, the output formats of three cases are identical, except for the fact that the records in the utmp and btmp are arranged chronologically, while in the wtmp, the order is reversed. - -Each log line is formatted in multiple columns described as follows. The first field shows a session identifier, while the second holds PID. The third field can hold one of the following values: ~~ (indicating a runlevel change or a system reboot), bw (meaning a bootwait process), a digit (indicates a TTY number), or a character and a digit (meaning a pseudo-terminal). The fourth field can be either empty or hold the user name, reboot, or runlevel. The fifth field holds the main TTY or PTY (pseudo-terminal), if that information is available. The sixth field holds the name of the remote host (if the login is performed from the local host, this field is blank, except for run-level messages, which will return the kernel version). The seventh field holds the IP address of the remote system (if the login is performed from the local host, this field will show 0.0.0.0). 
-If DNS resolution is not performed, the sixth and seventh fields will show identical information (the IP address of the remote system). The last (eighth) field indicates the date and time when the record was created.
-
-### Usage Examples of Utmpdump ###
-
-Here are a few simple use cases of utmpdump.
-
-1. Check how many times (and at what times) a particular user (e.g., gacanepa) logged on to the system between August 18 and September 17.
-
-    # utmpdump /var/log/wtmp | grep gacanepa
-
-![](https://farm4.staticflickr.com/3857/15293066362_fb2dd566df_z.jpg)
-
-If you need to review login information from prior dates, you can check the wtmp-YYYYMMDD (or wtmp.[1...N]) and btmp-YYYYMMDD (or btmp.[1...N]) files in /var/log, which are the old archives of the wtmp and btmp files, generated by [logrotate][1].
-
-2. Count the number of logins from IP address 192.168.0.101.
-
-    # utmpdump /var/log/wtmp | grep 192.168.0.101
-
-![](https://farm4.staticflickr.com/3842/15106743480_55ce84c9fd_z.jpg)
-
-3. Display failed login attempts.
-
-    # utmpdump /var/log/btmp
-
-![](https://farm4.staticflickr.com/3858/15293065292_e1d2562206_z.jpg)
-
-In the output of /var/log/btmp, every log line corresponds to a failed login attempt (e.g., using an incorrect password or a non-existing user ID). Logons using non-existing user IDs are highlighted in the above image, which can alert you that someone is attempting to break into your system by guessing commonly-used account names. This is particularly serious when tty1 was used, since it means that someone had access to a terminal on your machine (time to check who has keys to your datacenter, maybe?).
-
-4. Display login and logout information per user session.
-
-    # utmpdump /var/log/wtmp
-
-![](https://farm4.staticflickr.com/3835/15293065312_c762360791_z.jpg)
-
-In /var/log/wtmp, a new login event is characterized by '7' in the first field, a terminal number (or pseudo-terminal id) in the third field, and the username in the fourth.
The corresponding logout event will be represented by '8' in the first field, the same PID as the login in the second field, and a blank terminal number field. For example, take a close look at PID 1463 in the above image. - -- On [Fri Sep 19 11:57:40 2014 ART] the login prompt appeared in tty1. -- On [Fri Sep 19 12:04:21 2014 ART], user root logged on. -- On [Fri Sep 19 12:07:24 2014 ART], root logged out. - -On a side note, the word LOGIN in the fourth field means that a login prompt is present in the terminal specified in the fifth field. - -So far I covered somewhat trivial examples. You can combine utmpdump with other text sculpting tools such as awk, sed, grep or cut to produce filtered and enhanced output. - -For example, you can use the following command to list all login events of a particular user (e.g., gacanepa) and send the output to a .csv file that can be viewed with a pager or a workbook application, such as LibreOffice's Calc or Microsoft Excel. Let's display PID, username, IP address and timestamp only: - - # utmpdump /var/log/wtmp | grep -E "\[7].*gacanepa" | awk -v OFS="," 'BEGIN {FS="] "}; {print $2,$4,$7,$8}' | sed -e 's/\[//g' -e 's/\]//g' - -![](https://farm4.staticflickr.com/3851/15293065352_91e1c1e4b6_z.jpg) - -As represented with three blocks in the image, the filtering logic is composed of three pipelined steps. The first step is used to look for login events ([7]) triggered by user gacanepa. The second and third steps are used to select desired fields, remove square brackets in the output of utmpdump, and set the output field separator to a comma. - -Of course, you need to redirect the output of the above command to a file if you want to open it later (append "> [name_of_file].csv" to the command). 
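For readers who prefer Python over an awk/sed pipeline, the same three-step filter (keep [7] login events for one user, select the PID, username, IP address and timestamp fields, emit comma-separated values) can be sketched as follows; the sample records are made up for illustration:

```python
def wtmp_logins_to_csv(dump_lines, user):
    """Filter utmpdump output: keep [7] login events for `user` and
    emit PID, username, IP address and timestamp as CSV rows."""
    rows = []
    for line in dump_lines:
        # Mimic awk's FS="] ": split on the field delimiter, drop brackets.
        fields = [f.strip(" []") for f in line.split("] ")]
        if len(fields) == 8 and fields[0] == "7" and fields[3] == user:
            rows.append(",".join([fields[1], fields[3], fields[6], fields[7]]))
    return rows

dump = [
    "[7] [02431] [ts/0] [gacanepa] [pts/0] [example.com] [192.168.0.101] [Mon Sep 15 09:01:00 2014 ART]",
    "[8] [02431] [    ] [        ] [pts/0] [           ] [0.0.0.0      ] [Mon Sep 15 09:30:00 2014 ART]",
    "[7] [02520] [ts/1] [root    ] [pts/1] [           ] [0.0.0.0      ] [Mon Sep 15 10:00:00 2014 ART]",
]
for row in wtmp_logins_to_csv(dump, "gacanepa"):
    print(row)
```

In practice you would feed it real data, e.g. the lines produced by `utmpdump /var/log/wtmp`.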
-
-![](https://farm4.staticflickr.com/3889/15106867768_0e37881a25_z.jpg)
-
-In more complex examples, if you want to know which users (as listed in /etc/passwd) have not logged on during a given period of time, you could extract the user names from /etc/passwd, and then grep the utmpdump output of /var/log/wtmp against that user list. As you can see, the possibilities are limitless.
-
-Before concluding, let's briefly show yet another use case of utmpdump: modifying utmp or wtmp. As these are binary log files, you cannot edit them as is. Instead, you can export their content to text format, modify the text output, and then import the modified content back into the binary logs. That is:
-
-    # utmpdump /var/run/utmp > tmp_output
-
-    # utmpdump -r tmp_output > /var/run/utmp
-
-This can be useful when you want to remove or fix any bogus entry in the binary logs.
-
-To sum up, utmpdump complements standard utilities such as who, w, uptime, last and lastb by dumping detailed login events stored in the utmp, wtmp and btmp log files, as well as in their rotated old archives, and that certainly makes it a great utility.
-
-Feel free to enhance this post with your comments.
-
---------------------------------------------------------------------------------
-
-via: http://xmodulo.com/2014/09/monitor-user-login-history-centos-utmpdump.html
-
-作者:[Gabriel Cánepa][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:http://xmodulo.com/author/gabriel
-[1]:http://xmodulo.com/2014/09/logrotate-manage-log-files-linux.html
\ No newline at end of file
diff --git a/sources/tech/20141004 Practical Lessons in Peer Code Review.md b/sources/tech/20141004 Practical Lessons in Peer Code Review.md
new file mode 100644
index 0000000000..3576dd0e98
--- /dev/null
+++ b/sources/tech/20141004 Practical Lessons in Peer Code Review.md
@@ -0,0 +1,138 @@
+johnhoow translating...
+# Practical Lessons in Peer Code Review #
+
+Millions of years ago, apes descended from the trees, evolved opposable thumbs and—eventually—turned into human beings.
+
+We see mandatory code reviews in a similar light: something that separates human from beast on the rolling grasslands of the software development savanna.
+
+Nonetheless, I sometimes hear comments like these from our team members:
+
+- "Code reviews on this project are a waste of time."
+- "I don't have time to do code reviews."
+- "My release is delayed because my dastardly colleague hasn't done my review yet."
+- "Can you believe my colleague wants me to change something in my code? Please explain to them that the delicate balance of the universe will be disrupted if my pristine, elegant code is altered in any way."
+
+### Why do we do code reviews? ###
+
+Let us remember, first of all, why we do code reviews. One of the most important goals of any professional software developer is to continually improve the quality of their work. Even if your team is packed with talented programmers, you aren't going to distinguish yourselves from a capable freelancer unless you work as a team. Code reviews are one of the most important ways to achieve this. In particular, they:
+
+- provide a second pair of eyes to find defects and better ways of doing something.
+- ensure that at least one other person is familiar with your code.
+- help train new staff by exposing them to the code of more experienced developers.
+- promote knowledge sharing by exposing both the reviewer and reviewee to the good ideas and practices of the other.
+- encourage developers to be more thorough in their work since they know it will be reviewed by one of their colleagues.
+
+### Doing thorough reviews ###
+
+However, these goals cannot be achieved unless appropriate time and care are devoted to reviews.
+Just scrolling through a patch, making sure that the indentation is correct and that all the variables use lower camel case, does not constitute a thorough code review. It is instructive to consider pair programming, which is a fairly popular practice and adds an overhead of 100% to all development time, as the baseline for code review effort. You can spend a lot of time on code reviews and still use much less overall engineer time than pair programming.
+
+My feeling is that something around 25% of the original development time should be spent on code reviews. For example, if a developer takes two days to implement a story, the reviewer should spend roughly four hours reviewing it.
+
+Of course, it isn't primarily important how much time you spend on a review as long as the review is done correctly. Specifically, you must understand the code you are reviewing. This doesn't just mean that you know the syntax of the language it is written in. It means that you must understand how the code fits into the larger context of the application, component or library it is part of. If you don't grasp all the implications of every line of code, then your reviews are not going to be very valuable. This is why good reviews cannot be done quickly: it takes time to investigate the various code paths that can trigger a given function, to ensure that third-party APIs are used correctly (including any edge cases) and so forth.
+
+In addition to looking for defects or other problems in the code you are reviewing, you should ensure that:
+
+- All necessary tests are included.
+- Appropriate design documentation has been written.
+
+Even developers who are good about writing tests and documentation don't always remember to update them when they change their code. A gentle nudge from the code reviewer when appropriate is vital to ensure that they don't go stale over time.
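The 25% rule of thumb above is easy to turn into a quick budget calculation; here is a tiny sketch (the 8-hour working day is an assumption, not part of the original article):

```python
def review_hours(dev_hours, review_ratio=0.25):
    """Estimated review effort for a change that took `dev_hours` to write."""
    return dev_hours * review_ratio

# A story that took two 8-hour days to implement:
print(review_hours(2 * 8))  # hours the reviewer should budget
```

This matches the worked example in the text: two days of development implies roughly four hours of review.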
+
+### Preventing code review overload ###
+
+If your team does mandatory code reviews, there is the danger that your code review backlog will build up to the point where it is unmanageable. If you don't do any reviews for two weeks, you can easily have several days of reviews to catch up on. This means that your own development work will take a large and unexpected hit when you finally decide to deal with them. It also makes it a lot harder to do good reviews, since proper code reviews require intense and sustained mental effort. It is difficult to keep this up for days on end.
+
+For this reason, developers should strive to empty their review backlog every day. One approach is to tackle reviews first thing in the morning. By doing all outstanding reviews before you start your own development work, you can keep the review situation from getting out of hand. Some might prefer to do reviews before or after the midday break or at the end of the day. Whenever you do them, by considering code reviews as part of your regular daily work and not a distraction, you avoid:
+
+- Not having time to deal with your review backlog.
+- Delaying a release because your reviews aren't done yet.
+- Posting reviews that are no longer relevant since the code has changed so much in the meantime.
+- Doing poor reviews since you have to rush through them at the last minute.
+
+### Writing reviewable code ###
+
+The reviewer is not always the one responsible for out-of-control review backlogs. If my colleague spends a week adding code willy-nilly across a large project, then the patch they post is going to be really hard to review. There will be too much to get through in one session. It will be difficult to understand the purpose and underlying architecture of the code.
+
+This is one of many reasons why it is important to split your work into manageable units. We use the scrum methodology, so the appropriate unit for us is the story.
By making an effort to organize our work by story and submit reviews that pertain only to the specific story we are +working on, we write code that is much easier to review. Your team may use another methodology but the principle is the same. + +There are other prerequisites to writing reviewable code. If there are tricky architectural decisions to be made, it makes sense to meet +with the reviewer beforehand to discuss them. This will make it much easier for the reviewer to understand your code, since they will know +what you are trying to achieve and how you plan to achieve it. This also helps avoid the situation where you have to rewrite large swathes +of code after the reviewer suggests a different and better approach. + +Project architecture should be described in detail in your design documentation. This is important anyway since it enables a new project +member to get up to speed and understand the existing code base. It has the further advantage of helping a reviewer to do their job +properly. Unit tests are also helpful in illustrating to the reviewer how components should be used. + +If you are including third-party code in your patch, commit it separately. It is much harder to review code properly when 9000 lines of +jQuery are dropped into the middle. + +One of the most important steps for creating reviewable code is to annotate your code reviews. This means that you go through the review +yourself and add comments anywhere you feel that this will help the reviewer to understand what is going on. I have found that annotating +code takes relatively little time (often just a few minutes) and makes a massive difference in how quickly and well the code can be +reviewed. Of course, code comments have many of the same advantages and should be used where appropriate, but often a review annotation +makes more sense. As a bonus, studies have shown that developers find many defects in their own code while rereading and annotating it. 
+
+### Large code refactorings ###
+
+Sometimes it is necessary to refactor a code base in a way that affects many components. In the case of a large application, this can take several days (or more) and result in a huge patch. In these cases a standard code review may be impractical.
+
+The best solution is to refactor code incrementally. Figure out a partial change of reasonable scope that results in a working code base and moves you in the direction you want to go. Once that change has been completed and a review posted, proceed to a second incremental change and so forth until the full refactoring has been completed. This might not always be possible, but with thought and planning it is usually realistic to avoid massive monolithic patches when refactoring. It might take more time for the developer to refactor in this way, but it also leads to better quality code as well as making reviews much easier.
+
+If it really isn't possible to refactor code incrementally (which probably says something about how well the original code was written and organized), one solution might be to do pair programming instead of code reviews while working on the refactoring.
+
+### Resolving disputes ###
+
+Your team is doubtless made up of intelligent professionals, and in almost all cases it should be possible to come to an agreement when opinions about a specific coding question differ. As a developer, keep an open mind and be prepared to compromise if your reviewer prefers a different approach. Don't take a proprietary attitude to your code and don't take review comments personally. Just because someone feels that you should refactor some duplicated code into a reusable function, it doesn't mean that you are any less of an attractive, brilliant and charming individual.
+
+As a reviewer, be tactful. Before suggesting changes, consider whether your proposal is really better or just a matter of taste.
You will have more success if you choose your battles and concentrate on areas where the original code clearly requires improvement. Say things like
+"it might be worth considering..." or "some people recommend..." instead of "my pet hamster could write a more efficient sorting algorithm
+than this."
+
+If you really can't find middle ground, ask a third developer who both of you respect to take a look and give their opinion.
+
+--------------------------------------------------------------------------------
+
+via: http://blog.salsitasoft.com/practical-lessons-in-peer-code-review/
+作者:[Matt][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+
diff --git a/sources/tech/20141008 How to configure HTTP load balancer with HAProxy on Linux.md b/sources/tech/20141008 How to configure HTTP load balancer with HAProxy on Linux.md
new file mode 100644
index 0000000000..e6b66f4563
--- /dev/null
+++ b/sources/tech/20141008 How to configure HTTP load balancer with HAProxy on Linux.md
@@ -0,0 +1,273 @@
+How to configure HTTP load balancer with HAProxy on Linux
+================================================================================
+Increased demand on web-based applications and services is putting more and more weight on the shoulders of IT administrators. When faced with unexpected traffic spikes, organic traffic growth, or internal challenges such as hardware failures and urgent maintenance, your web application must remain available, no matter what. Even modern devops and continuous delivery practices can threaten the reliability and consistent performance of your web service.
+
+Unpredictability or inconsistent performance is not something you can afford. But how can we eliminate these downsides? In most cases a proper load balancing solution will do the job. And today I will show you how to set up an HTTP load balancer using [HAProxy][1].
+
+### What is HTTP load balancing? ###
+
+HTTP load balancing is a networking solution responsible for distributing incoming HTTP or HTTPS traffic among servers hosting the same application content. By balancing application requests across multiple available servers, a load balancer prevents any application server from becoming a single point of failure, thus improving overall application availability and responsiveness. It also allows you to easily scale an application deployment in or out by adding or removing extra application servers as workloads change.
+
+### Where and when to use load balancing? ###
+
+As load balancers improve server utilization and maximize availability, you should use one whenever your servers start to come under high load. Or, if you are just planning the architecture for a bigger project, it's a good habit to plan for a load balancer upfront. It will prove itself useful in the future when you need to scale your environment.
+
+### What is HAProxy? ###
+
+HAProxy is a popular open-source load balancer and proxy for TCP/HTTP servers on GNU/Linux platforms. Designed with a single-threaded, event-driven architecture, HAProxy is capable of handling [10G NIC line rate][2] easily, and is being extensively used in many production environments. Its features include automatic health checks, customizable load balancing algorithms, HTTPS/SSL support, session rate limiting, and more.
+
+### What are we going to achieve in this tutorial? ###
+
+In this tutorial, we will go through the process of configuring a HAProxy-based load balancer for HTTP web servers.
+
+### Prerequisites ###
+
+You will need at least one, and preferably two, web servers to verify the functionality of your load balancer. We assume that the backend HTTP web servers are already [up and running][3].
+
+### Install HAProxy on Linux ###
+
+For most distributions, we can install HAProxy using the distribution's package manager.
+
+#### Install HAProxy on Debian ####
+
+In Debian we need to add backports for Wheezy. To do that, please create a new file called "backports.list" in /etc/apt/sources.list.d, with the following content:
+
+    deb http://cdn.debian.net/debian wheezy-backports main
+
+Refresh your repository data and install HAProxy.
+
+    # apt-get update
+    # apt-get install haproxy
+
+#### Install HAProxy on Ubuntu ####
+
+    # apt-get install haproxy
+
+#### Install HAProxy on CentOS and RHEL ####
+
+    # yum install haproxy
+
+### Configure HAProxy ###
+
+In this tutorial, we assume that there are two HTTP web servers up and running with IP addresses 192.168.100.2 and 192.168.100.3. We also assume that the load balancer will be configured on a server with IP address 192.168.100.4.
+
+To make HAProxy functional, you need to change a number of items in /etc/haproxy/haproxy.cfg. These changes are described in this section. Where the configuration differs between GNU/Linux distributions, it is noted in the relevant paragraph.
+
+#### 1. Configure Logging ####
+
+One of the first things you should do is to set up proper logging for HAProxy, which will be useful for future debugging. Log configuration can be found in the global section of /etc/haproxy/haproxy.cfg. The following are distro-specific instructions for configuring logging for HAProxy.
+
+**CentOS or RHEL:**
+
+To enable logging on CentOS/RHEL, replace:
+
+    log 127.0.0.1 local2
+
+with:
+
+    log 127.0.0.1 local0
+
+The next step is to set up separate log files for HAProxy in /var/log. For that, we need to modify our current rsyslog configuration. To make the configuration simple and clear, we will create a new file called haproxy.conf in /etc/rsyslog.d/ with the following content.
+
+    $ModLoad imudp
+    $UDPServerRun 514
+    $template Haproxy,"%msg%\n"
+    local0.=info -/var/log/haproxy.log;Haproxy
+    local0.notice -/var/log/haproxy-status.log;Haproxy
+    local0.* ~
+
+This configuration will separate all HAProxy messages based on the $template into log files in /var/log. Now restart rsyslog to apply the changes.
+
+    # service rsyslog restart
+
+**Debian or Ubuntu:**
+
+To enable logging for HAProxy on Debian or Ubuntu, replace:
+
+    log /dev/log local0
+    log /dev/log local1 notice
+
+with:
+
+    log 127.0.0.1 local0
+
+Next, to configure separate log files for HAProxy, edit a file called haproxy.conf (or 49-haproxy.conf in Debian) in /etc/rsyslog.d/ with the following content.
+
+    $ModLoad imudp
+    $UDPServerRun 514
+    $template Haproxy,"%msg%\n"
+    local0.=info -/var/log/haproxy.log;Haproxy
+    local0.notice -/var/log/haproxy-status.log;Haproxy
+    local0.* ~
+
+This configuration will separate all HAProxy messages based on the $template into log files in /var/log. Now restart rsyslog to apply the changes.
+
+    # service rsyslog restart
+
+#### 2. Setting Defaults ####
+
+The next step is to set default variables for HAProxy. Find the defaults section in /etc/haproxy/haproxy.cfg, and replace it with the following configuration.
+
+    defaults
+        log global
+        mode http
+        option httplog
+        option dontlognull
+        retries 3
+        option redispatch
+        maxconn 20000
+        contimeout 5000
+        clitimeout 50000
+        srvtimeout 50000
+
+The configuration stated above is recommended for HTTP load balancer use, but it may not be the optimal solution for your environment. In that case, feel free to explore the HAProxy man pages to tweak it.
+
+#### 3. Webfarm Configuration ####
+
+The webfarm configuration defines the pool of available HTTP servers. Most of the settings for our load balancer will be placed here. Now we will create some basic configuration, where our nodes will be defined.
Replace all of the configuration from the frontend section to the end of the file with the following code:
+
+    listen webfarm *:80
+        mode http
+        stats enable
+        stats uri /haproxy?stats
+        stats realm Haproxy\ Statistics
+        stats auth haproxy:stats
+        balance roundrobin
+        cookie LBN insert indirect nocache
+        option httpclose
+        option forwardfor
+        server web01 192.168.100.2:80 cookie node1 check
+        server web02 192.168.100.3:80 cookie node2 check
+
+The line "listen webfarm *:80" defines which interfaces our load balancer will listen on. For the sake of the tutorial, I've set that to "*", which makes the load balancer listen on all our interfaces. In a real world scenario, this might be undesirable and should be replaced with an interface that is accessible from the internet.
+
+    stats enable
+    stats uri /haproxy?stats
+    stats realm Haproxy\ Statistics
+    stats auth haproxy:stats
+
+The above settings declare that our load balancer statistics can be accessed on http://192.168.100.4/haproxy?stats (the address of the load balancer itself). The access is secured with simple HTTP authentication with login name "haproxy" and password "stats". These settings should be replaced with your own credentials. If you don't need to have these statistics available, then completely disable them.
+
+Here is an example of HAProxy statistics.
+
+![](https://farm4.staticflickr.com/3928/15416835905_a678c8f286_c.jpg)
+
+The line "balance roundrobin" defines the type of load balancing we will use. In this tutorial we will use the simple round robin algorithm, which is fully sufficient for HTTP load balancing. HAProxy also offers other types of load balancing:
+
+- **leastconn**: gives connections to the server with the lowest number of connections.
+- **source**: hashes the source IP address, and divides it by the total weight of the running servers to decide which server will receive the request.
+- **uri**: the left part of the URI (before the question mark) is hashed and divided by the total weight of the running servers.
The result determines which server will receive the request.
+- **url_param**: the URL parameter specified in the argument will be looked up in the query string of each HTTP GET request. You can basically lock a request to a specific load balancer node using a crafted URL.
+- **hdr(name)**: the named HTTP header will be looked up in each HTTP request, and the request directed to a specific node.
+
+The line "cookie LBN insert indirect nocache" makes our load balancer store persistent cookies, which allows us to pinpoint which node from the pool is used for a particular session. These node cookies will be stored with a defined name. In our case, I used "LBN", but you can specify any name you like. The node will store its string as the value for this cookie.
+
+    server web01 192.168.100.2:80 cookie node1 check
+    server web02 192.168.100.3:80 cookie node2 check
+
+The above part is the definition of our pool of web server nodes. Each server is represented by its internal name (e.g., web01, web02), IP address, and unique cookie string. The cookie string can be defined as anything you want. I am using simple node1, node2 ... node(n).
+
+### Start HAProxy ###
+
+When you are done with the configuration, it's time to start HAProxy and verify that everything is working as intended.
+
+#### Start HAProxy on CentOS/RHEL ####
+
+Enable HAProxy to be started after boot and turn it on using:
+
+    # chkconfig haproxy on
+    # service haproxy start
+
+And of course don't forget to enable port 80 in the firewall as follows.
+
+**Firewall on CentOS/RHEL 7:**
+
+    # firewall-cmd --permanent --zone=public --add-port=80/tcp
+    # firewall-cmd --reload
+
+**Firewall on CentOS/RHEL 6:**
+
+Add the following line into the INPUT rules section of /etc/sysconfig/iptables:
+
+    -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
+
+and restart **iptables**:
+
+    # service iptables restart
+
+#### Start HAProxy on Debian ####
+
+Start HAProxy with:
+
+    # service haproxy start
+
+Don't forget to enable port 80 in the firewall by adding the following line into /etc/iptables.up.rules:
+
+    -A INPUT -p tcp --dport 80 -j ACCEPT
+
+#### Start HAProxy on Ubuntu ####
+
+Enable HAProxy to be started after boot by setting the "ENABLED" option to "1" in /etc/default/haproxy:
+
+    ENABLED=1
+
+Start HAProxy:
+
+    # service haproxy start
+
+and enable port 80 in the firewall:
+
+    # ufw allow 80
+
+### Test HAProxy ###
+
+To check whether HAProxy is working properly, we can do the following.
+
+First, prepare a test.php file with the following content:
+
+    <?php
+    echo "Server IP: ".$_SERVER['SERVER_ADDR'];
+    echo "\nX-Forwarded-for: ".$_SERVER['HTTP_X_FORWARDED_FOR'];
+    ?>
+
+This PHP file will tell us which server (i.e., load balancer) forwarded the request, and what backend web server actually handled the request.
+
+Place this PHP file in the root directory of both backend web servers. Now use the curl command to fetch this PHP file from the load balancer (192.168.100.4).
+
+    $ curl http://192.168.100.4/test.php
+
+When we run this command multiple times, we should see the following two outputs alternate (due to the round robin algorithm).
+
+    Server IP: 192.168.100.2
+    X-Forwarded-for: 192.168.100.4
+
+----------
+
+    Server IP: 192.168.100.3
+    X-Forwarded-for: 192.168.100.4
+
+If we stop one of the two backend web servers, the curl command should still work, directing requests to the other available web server.
+
+### Summary ###
+
+By now you should have a fully operational load balancer that supplies your web nodes with requests in round robin mode.
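The manual curl checks can also be wrapped in a quick script. The sketch below is hypothetical and uses canned responses so the counting logic is visible; the real curl loop (assuming the 192.168.100.4 balancer from this tutorial) is shown commented out.

```shell
# Hypothetical smoke test: count distinct backend IPs reported by test.php.
responses="Server IP: 192.168.100.2
Server IP: 192.168.100.3
Server IP: 192.168.100.2
Server IP: 192.168.100.3"
# responses=$(for i in 1 2 3 4; do curl -s http://192.168.100.4/test.php; done | grep 'Server IP')
distinct=$(printf '%s\n' "$responses" | sort -u | grep -c 'Server IP')
echo "distinct backends: $distinct"
```

With both backends up and round robin in effect, the count should be 2; a count of 1 suggests one node is down or sticky cookies are pinning the client.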
As always, feel free to experiment with the configuration to make it more suitable for your infrastructure. I hope this tutorial helped you to make your web projects more resilient and available.
+
+As most of you have already noticed, this tutorial contains settings for only one load balancer, which means that we have just replaced one single point of failure with another. In real life scenarios you should deploy at least two or three load balancers to cover for any failures that might happen, but that is out of the scope of this tutorial right now.
+
+If you have any questions or suggestions feel free to post them in the comments and I will do my best to answer or advise.
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/haproxy-http-load-balancer-linux.html
+
+作者:[Jaroslav Štěpánek][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/jaroslav
+[1]:http://www.haproxy.org/
+[2]:http://www.haproxy.org/10g.html
+[3]:http://xmodulo.com/how-to-install-lamp-server-on-ubuntu.html
\ No newline at end of file
diff --git a/sources/tech/20141008 How to speed up slow apt-get install on Debian or Ubuntu.md b/sources/tech/20141008 How to speed up slow apt-get install on Debian or Ubuntu.md
new file mode 100644
index 0000000000..4fe7667081
--- /dev/null
+++ b/sources/tech/20141008 How to speed up slow apt-get install on Debian or Ubuntu.md
@@ -0,0 +1,119 @@
+su-kaiyao translating
+
+How to speed up slow apt-get install on Debian or Ubuntu
+================================================================================
+If you feel that package installation by **apt-get** or **aptitude** is often too slow on your Debian or Ubuntu system, there are several ways to improve the situation. Have you considered switching the default mirror sites being used?
Have you checked the upstream bandwidth of your Internet connection to see if that is the bottleneck?
+
+If nothing else works, you can try a third option: the [apt-fast][1] tool. apt-fast is actually a shell script wrapper written around apt-get and aptitude which can accelerate package download speed. Internally, apt-fast uses the [aria2][2] download utility which can download a file in "chunked" form from multiple mirrors simultaneously (like in BitTorrent download).
+
+### Install apt-fast on Debian or Ubuntu ###
+
+Here are the steps to install apt-fast on Debian-based Linux.
+
+#### Debian ####
+
+    $ sudo apt-get install aria2
+    $ wget https://github.com/ilikenwf/apt-fast/archive/master.zip
+    $ unzip master.zip
+    $ cd apt-fast-master
+    $ sudo cp apt-fast /usr/bin
+    $ sudo cp apt-fast.conf /etc
+    $ sudo cp ./man/apt-fast.8 /usr/share/man/man8
+    $ sudo gzip /usr/share/man/man8/apt-fast.8
+    $ sudo cp ./man/apt-fast.conf.5 /usr/share/man/man5
+    $ sudo gzip /usr/share/man/man5/apt-fast.conf.5
+
+#### Ubuntu 14.04 and higher ####
+
+    $ sudo add-apt-repository ppa:saiarcot895/myppa
+    $ sudo apt-get update
+    $ sudo apt-get install apt-fast
+
+#### Ubuntu 11.04 to 13.10 ####
+
+    $ sudo add-apt-repository ppa:apt-fast/stable
+    $ sudo apt-get update
+    $ sudo apt-get install apt-fast
+
+During installation on Ubuntu, you will be asked to choose a default package manager (e.g., apt-get, aptitude) and other settings. You can always change the settings later by editing the configuration file /etc/apt-fast.conf.
+
+![](https://farm6.staticflickr.com/5615/15285526898_1b18f64d58_z.jpg)
+
+![](https://farm3.staticflickr.com/2949/15449069896_76ee00851b_z.jpg)
+
+![](https://farm6.staticflickr.com/5600/15471817412_9ef7f16096_z.jpg)
+
+### Configure apt-fast ###
+
+After installation, you need to configure a list of mirrors used by **apt-fast** in /etc/apt-fast.conf.
+
+You can find a list of Debian/Ubuntu mirrors to choose from at the following URLs.
+ +- **Debian**: [http://www.debian.org/mirror/list][3] +- **Ubuntu**: [https://launchpad.net/ubuntu/+archivemirrors][4] + +After choosing mirrors which are geographically close to your location, add those chosen mirrors to /etc/apt-fast.conf in the following format. + + $ sudo vi /etc/apt-fast.conf + +Debian: + + MIRRORS=('http://ftp.us.debian.org/debian/,http://carroll.aset.psu.edu/pub/linux/distributions/debian/,http://debian.gtisc.gatech.edu/debian/,http://debian.lcs.mit.edu/debian/,http://mirror.cc.columbia.edu/debian/') + +Ubuntu/Mint: + + MIRRORS=('http://us.archive.ubuntu.com/ubuntu,http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive/,http://mirror.cc.vt.edu/pub2/ubuntu/,http://mirror.umd.edu/ubuntu/,http://mirrors.mit.edu/ubuntu/') + +As shown above, individual mirrors for a particular archive should be separated by commas. It is recommended that you include the default mirror site specified in /etc/apt/sources.list in the MIRRORS string. + +### Install a Package with apt-fast ### + +Now you are ready to test the power of apt-fast. Here is the command-line usage of **apt-fast**: + + apt-fast [apt-get options and arguments] + apt-fast [aptitude options and arguments] + apt-fast { { install | upgrade | dist-upgrade | build-dep | download | source } [ -y | --yes | --assume-yes | --assume-no ] ... | clean } + +To install a package with **apt-fast**: + + $ sudo apt-fast install texlive-full + +To download a package in the current directory without installing it: + + $ sudo apt-fast download texlive-full + +![](http://farm8.staticflickr.com/7309/10585846956_6c98c6dcc9_z.jpg) + +As mentioned earlier, parallel downloading of apt-fast is done by aria2. You can verify parallel downloads from multiple mirrors as follows. + + $ sudo netstat -nap | grep aria2c + +![](http://farm8.staticflickr.com/7328/10585846886_4744a0e021_z.jpg) + +Note that **apt-fast** does not make "apt-get update" faster. 
Parallel downloading gets triggered only for "install", "upgrade", "dist-upgrade" and "build-dep" operations. For other operations, apt-fast simply falls back to the default package manager **apt-get** or **aptitude**. + +### How Fast is apt-fast? ### + +To compare apt-fast and apt-get, I tried installing several packages using two methods on two identical Ubuntu instances. The following graph shows total package installation time (in seconds). + +![](http://farm4.staticflickr.com/3810/10585846986_504d07b4a7_z.jpg) + +As you can see, **apt-fast** is substantially faster (e.g., 3--4 times faster) than **apt-get**, especially when a bulky package is installed. + +Be aware that performance improvement will of course vary, depending on your upstream Internet connectivity. In my case, I had ample spare bandwidth to leverage in my upstream connection, and that's why I see dramatic improvement by using parallel download. + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/speed-slow-apt-get-install-debian-ubuntu.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:https://github.com/ilikenwf/apt-fast +[2]:http://aria2.sourceforge.net/ +[3]:http://www.debian.org/mirror/list +[4]:https://launchpad.net/ubuntu/+archivemirrors diff --git a/sources/tech/20141008 The Why and How of Ansible and Docker.md b/sources/tech/20141008 The Why and How of Ansible and Docker.md new file mode 100644 index 0000000000..5f9684ed06 --- /dev/null +++ b/sources/tech/20141008 The Why and How of Ansible and Docker.md @@ -0,0 +1,103 @@ +The Why and How of Ansible and Docker +================================================================================ +There is a lot of interest from the tech community in both [Docker][1] and [Ansible][2], I am hoping that 
after reading this article you will share our enthusiasm. You will also gain a practical insight into using Ansible and Docker for setting up a complete server environment for a Rails application. + +Many reading this might be asking, “Why don’t you just use Heroku?”. First of all, I can run Docker and Ansible on any host, with any provider. Secondly, I prefer flexibility over convenience. I can run anything in this manner, not just web applications. Last but not least, because I am a tinkerer at heart, I get a kick from understanding how all the pieces fit together. The fundamental building block of Heroku is the Linux Container. The same technology lies at the heart of Docker’s versatility. As a matter of fact, one of Docker’s mottoes is: “Containerization is the new virtualization”. + +### Why Ansible? ### + +After 4 years of heavy Chef usage, the **infrastructure as code** mentality becomes really tedious. I was spending most of my time with the code that was managing my infrastructure, not with the infrastructure itself. Any change, regardless how small, would require a considerable amount of effort for a relatively small gain. With [Ansible][3], there’s data describing infrastructure on one hand, and the constraints of the interactions between various components on the other hand. It’s a much simpler model that enables me to move quicker by letting me focus on what makes my infrastructure personal. Similar to the Unix model, Ansible provides simple modules with a single responsibility that can be combined in endless ways. + +Ansible has no dependencies other than Python and SSH. It doesn’t require any agents to be set up on the remote hosts and it doesn’t leave any traces after it runs either. What’s more, it comes with an extensive, built-in library of modules for controlling everything from package managers to cloud providers, to databases and everything else in between. + +### Why Docker? 
###
+
+[Docker][4] is establishing itself as the most reliable and convenient way of deploying a process on a host. This can be anything from mysqld to redis, to a Rails application. Just like git snapshots and distributes code in the most efficient way, Docker does the same with processes. It guarantees that everything required to run that process will be available regardless of the host that it runs on.
+
+A common but understandable mistake is to treat a Docker container as a VM. The [Single Responsibility Principle][5] still applies: running a single process per container gives it a single reason to change, makes it re-usable, and makes it easy to reason about. This model has stood the test of time in the form of the Unix philosophy, and it makes for a solid foundation to build on.
+
+### The Setup ###
+
+Without leaving my terminal, I can have Ansible provision a new instance for me with any of the following: Amazon Web Services, Linode, Rackspace or DigitalOcean. To be more specific, I can have Ansible create a new DigitalOcean 2GB droplet in the Amsterdam 2 region in precisely 1 minute and 25 seconds. In a further 1 minute and 50 seconds I can have the system set up with Docker and a few other personal preferences. Once I have this base system in place, I can deploy my application. Notice that I didn't set up any database or programming language. Docker will handle all application dependencies.
+
+Ansible runs all remote commands via SSH. My SSH keys stored in the local ssh-agent will be shared remotely during Ansible's SSH sessions. When my application code is cloned or updated on remote hosts, no git credentials are required; the forwarded ssh-agent is used to authenticate with the git host.
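Since the whole workflow leans on agent forwarding, it can be worth confirming that the local ssh-agent actually holds a key before kicking off a playbook. A minimal pre-flight sketch (the status strings are my own, not part of Ansible):

```shell
# Hypothetical pre-flight check: agent forwarding only helps if the
# local ssh-agent holds at least one key. ssh-add -l exits non-zero
# when the agent is empty or unreachable.
if ssh-add -l >/dev/null 2>&1; then
    status="agent ready"
else
    status="no keys loaded - run ssh-add first"
fi
echo "$status"
```

If the check fails, `ssh-add ~/.ssh/id_rsa` (or your key of choice) loads the key into the agent for the duration of the session.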
+ +### Docker and application dependencies ### + +I find it amusing that most developers are specific about the version of the programming language which their application needs, the version of the dependencies in the form of Python packages, Ruby gems or node.js modules, but when it comes to something as important as the database or the message queue, they just use whatever is available in the environment that the application runs. I believe this is one of the reasons behind the devops movement, developers taking responsibility for the application’s environment. Docker makes this task easier and more straightforward by adding a layer of pragmatism and confidence to the existing practices. + +My application defines dependencies on processes such as MySQL 5.5 and Redis 2.8 by including the following `.docker_container_dependencies` file: + + gerhard/mysql:5.5 + gerhard/redis:2.8 + +The Ansible playbook will notice this file and will instruct Docker to pull the correct images from the Docker index and start them as containers. It also links these service containers to my application container. If you want to find out how Docker container linking works, refer to the [Docker 0.6.5 announcement][6]. + +My application also comes with a Dockerfile which is specific about the Ruby Docker image that is required. As this is already built, the steps in my Dockerfile will have the guarantee that the correct Ruby version will be available to them. + + FROM howareyou/ruby:2.0.0-p353 + + ADD ./ /terrabox + + RUN \ + . /.profile ;\ + rm -fr /terrabox/.git ;\ + cd /terrabox ;\ + bundle install --local ;\ + echo '. /.profile && cd /terrabox && RAILS_ENV=test bundle exec rake db:create db:migrate && bundle exec rspec' > /test-terrabox ;\ + echo '. 
/.profile && cd /terrabox && export RAILS_ENV=production && rake db:create db:migrate && bundle exec unicorn -c config/unicorn.rails.conf.rb' > /run-terrabox ;\ + # END RUN + + ENTRYPOINT ["/bin/bash"] + CMD ["/run-terrabox"] + + EXPOSE 3000 + +The first step is to copy all my application’s code into the Docker image and load the global environment variables added by previous images. The Ruby Docker image for example will append PATH configuration which ensures that the correct Ruby version gets loaded. + +Next, I remove the git history as this is not useful in the context of a Docker container. I install all the gems and then create a `/test-terrabox` command which will be run by the test-only container. The purpose of this is to have a “canary” which ensures that the application and all its dependencies are properly resolved, that the Docker containers are linked correctly and all tests pass before the actual application container will be started. + +The command that gets run when a new web application container gets started is defined in the CMD step. The `/run-terrabox` command was defined part of the build process, right after the test one. + +The last instruction in this application’s Dockerfile maps port 3000 from inside the container to an automatically allocated port on the host that runs Docker. This is the port that the reverse proxy or load balancer will use when proxying public requests to my application running inside the Docker container. + +### Running a Rails application inside a Docker container ### + +For a medium-sized Rails application, with about 100 gems and just as many integration tests running under Rails, this takes 8 minutes and 16 seconds on a 2GB and 2 core instance, without any local Docker images. If I already had Ruby, MySQL & Redis Docker images on that host, this would take 4 minutes and 45 seconds. 
Furthermore, if I had a master application image on which to base a new Docker image build of the same application, this would take a mere 2 minutes and 23 seconds. To put this into perspective, it takes me just over 2 minutes to deploy a new version of my Rails application, including dependent services such as MySQL and Redis.
+
+I would like to point out that my application deploys also run a full test suite which alone takes about a minute end-to-end. Without intending it, Docker became a simple Continuous Integration environment that leaves test-only containers behind for inspection when tests fail, or starts a new application container with the latest version of my application when the test suite passes. All of a sudden, I can validate new code with my customers in minutes, with the guarantee that different versions of my application are isolated from one another, all the way down to the operating system. Unlike traditional VMs which take minutes to boot, a Docker container takes under a second. Furthermore, once a Docker image is built and tests pass for a specific version of my application, I can have this image pushed into a private Docker registry, waiting to be pulled by other Docker hosts and started as a new Docker container, all within seconds.
+
+### Conclusion ###
+
+Ansible made me re-discover the joy of managing infrastructures. Docker gives me confidence and stability when dealing with the most important step of application development, the delivery phase. In combination, they are unmatched.
+
+To go from no server to a fully deployed Rails application in just under 12 minutes is impressive by any standard. To get a very basic Continuous Integration system for free, and to be able to preview different versions of an application side-by-side without affecting the "live" version which runs on the same hosts in any way, is incredibly powerful. This makes me very excited, and having reached the end of the article, I can only hope that you share my excitement.
+
+I gave a talk at the January 2014 London Docker meetup on this subject, and [I have shared the slides on Speakerdeck][7].
+
+For more Ansible and Docker content, subscribe to [The Changelog Weekly][8] — it ships every Saturday and regularly includes the week’s best links for both topics.
+
+[Use the Draft repo][9] if you’d like to write a post like this for The Changelog. They’ll work with you through the process too.
+
+Until next time, [Gerhard][a].
+
+--------------------------------------------------------------------------------
+
+via: http://thechangelog.com/ansible-docker/
+
+作者:[Gerhard Lazu][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:https://twitter.com/gerhardlazu
+[1]:https://www.docker.io/
+[2]:https://github.com/ansible/ansible
+[3]:http://ansible.com/
+[4]:http://docker.io/
+[5]:http://en.wikipedia.org/wiki/Single_responsibility_principle
+[6]:http://blog.docker.io/2013/10/docker-0-6-5-links-container-naming-advanced-port-redirects-host-integration/
+[7]:https://speakerdeck.com/gerhardlazu/ansible-and-docker-the-path-to-continuous-delivery-part-1
+[8]:http://thechangelog.com/weekly/
+[9]:https://github.com/thechangelog/draft
\ No newline at end of file
diff --git a/sources/tech/20141009 How to convert image audio and video formats on Ubuntu.md b/sources/tech/20141009 How to convert image audio and video formats on Ubuntu.md
new file mode 100644
index 0000000000..baa145b56c
--- /dev/null
+++ b/sources/tech/20141009 How to convert image audio and video formats on Ubuntu.md
@@ -0,0 +1,86 @@
+How to convert image, audio and video formats on Ubuntu
+================================================================================
+If you need to work with a variety of image, audio and video files encoded in all sorts of different formats, you are probably using more than one tool to convert among all those heterogeneous
media formats. If there is a versatile all-in-one media conversion tool capable of dealing with all the different image/audio/video formats, that would be awesome.
+
+[Format Junkie][1] is one such all-in-one media conversion tool with an extremely user-friendly GUI. Better yet, it is free software! With Format Junkie, you can convert image, audio, video and archive files of pretty much all the popular formats simply with a few mouse clicks.
+
+### Install Format Junkie on Ubuntu 12.04, 12.10 and 13.04 ###
+
+Format Junkie is available for installation via the Ubuntu PPA format-junkie-team. This PPA supports Ubuntu 12.04, 12.10 and 13.04. To install Format Junkie on one of those Ubuntu releases, simply run the following.
+
+    $ sudo add-apt-repository ppa:format-junkie-team/release
+    $ sudo apt-get update
+    $ sudo apt-get install formatjunkie
+    $ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
+
+### Install Format Junkie on Ubuntu 13.10 ###
+
+If you are running Ubuntu 13.10 (Saucy Salamander), you can download and install the .deb package for Ubuntu 13.04 as follows. Since the .deb package for Format Junkie requires quite a few dependent packages, install it using the [gdebi deb installer][2].
+ +On 32-bit Ubuntu 13.10: + + $ wget https://launchpad.net/~format-junkie-team/+archive/release/+files/formatjunkie_1.07-1~raring0.2_i386.deb + $ sudo gdebi formatjunkie_1.07-1~raring0.2_i386.deb + $ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie + +On 64-bit Ubuntu 13.10: + + $ wget https://launchpad.net/~format-junkie-team/+archive/release/+files/formatjunkie_1.07-1~raring0.2_amd64.deb + $ sudo gdebi formatjunkie_1.07-1~raring0.2_amd64.deb + $ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie + +### Install Format Junkie on Ubuntu 14.04 or Later ### + +The currently available official Format Junkie .deb file requires libavcodec-extra-53 which has become obsolete starting from Ubuntu 14.04. Thus if you want to install Format Junkie on Ubuntu 14.04 or later, you can use the following third-party PPA repositories instead. + + $ sudo add-apt-repository ppa:jon-severinsson/ffmpeg + $ sudo add-apt-repository ppa:noobslab/apps + $ sudo apt-get update + $ sudo apt-get install formatjunkie + +### How to Use Format Junkie ### + +To start Format Junkie after installation, simply run: + + $ formatjunkie + +#### Convert audio, video, image and archive formats with Format Junkie #### + +The user interface of Format Junkie is pretty simple and intuitive, as shown below. To choose among audio, video, image and iso media, click on one of four tabs at the top. You can add as many files as you want for batch conversion. After you add files, and select output format, simply click on "Start Converting" button to convert. + +![](http://farm9.staticflickr.com/8107/8643695905_082b323059.jpg) + +Format Junkie supports conversion among the following media formats: + +- **Audio**: mp3, wav, ogg, wma, flac, m4r, aac, m4a, mp2. +- **Video**: avi, ogv, vob, mp4, 3gp, wmv, mkv, mpg, mov, flv, webm. +- **Image**: jpg, png, ico, bmp, svg, tif, pcx, pdf, tga, pnm. +- **Archive**: iso, cso. 
+
+#### Subtitle encoding with Format Junkie ####
+
+Besides media conversion, Format Junkie also provides a GUI for subtitle encoding. The actual subtitle encoding is done by MEncoder. In order to do subtitle encoding via the Format Junkie interface, first you need to install MEncoder.
+
+    $ sudo apt-get install mencoder
+
+Then click on the "Advanced" tab in Format Junkie. Choose the AVI/subtitle files to use for encoding, as shown below.
+
+![](http://farm9.staticflickr.com/8100/8644791396_bfe602cd16.jpg)
+
+Overall, Format Junkie is an extremely easy-to-use and versatile media conversion tool. One drawback, though, is that it does not allow any sort of customization during conversion (e.g., bitrate, fps, sampling frequency, image quality, size). So this tool is recommended for newbies who are looking for an easy-to-use simple media conversion tool.
+
+Enjoyed this post? I will appreciate your like/share buttons on Facebook, Twitter and Google+.
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/how-to-convert-image-audio-and-video-formats-on-ubuntu.html
+
+作者:[Dan Nanni][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/nanni
+[1]:https://launchpad.net/format-junkie
+[2]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html
\ No newline at end of file
diff --git a/sources/tech/20141009 How to set up RAID 10 for high performance and fault tolerant disk I or O on Linux.md b/sources/tech/20141009 How to set up RAID 10 for high performance and fault tolerant disk I or O on Linux.md
new file mode 100644
index 0000000000..88fc1ead47
--- /dev/null
+++ b/sources/tech/20141009 How to set up RAID 10 for high performance and fault tolerant disk I or O on Linux.md
@@ -0,0 +1,140 @@
+[translating by KayGuoWhu]
+How to set up RAID 10 for high performance and fault
tolerant disk I/O on Linux +================================================================================ +A RAID 10 (aka RAID 1+0 or stripe of mirrors) array provides high performance and fault-tolerant disk I/O operations by combining features of RAID 0 (where read/write operations are performed in parallel across multiple drives) and RAID 1 (where data is written identically to two or more drives). + +In this tutorial, I'll show you how to set up a software RAID 10 array using five identical 8 GiB disks. While the minimum number of disks for setting up a RAID 10 array is four (e.g., a striped set of two mirrors), we will add an extra spare drive should one of the main drives become faulty. We will also share some tools that you can later use to analyze the performance of your RAID array. + +Please note that going through all the pros and cons of RAID 10 and other partitioning schemes (with different-sized drives and filesystems) is beyond the scope of this post. + +### How Does a Raid 10 Array Work? ### + +If you need to implement a storage solution that supports I/O-intensive operations (such as database, email, and web servers), RAID 10 is the way to go. Let me show you why. Let's refer to the below image. + +![](https://farm4.staticflickr.com/3844/15179003008_e48806b3ef_o.png) + +Imagine a file that is composed of blocks A, B, C, D, E, and F in the above diagram. Each RAID 1 mirror set (e.g., Mirror 1 or 2) replicates blocks on each of its two devices. Because of this configuration, write performance is reduced because every block has to be written twice, once for each disk, whereas read performance remains unchanged compared to reading from single disks. The bright side is that this setup provides redundancy in that unless more than one of the disks in each mirror fail, normal disk I/O operations can be maintained. 
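The fault-tolerance claim can be made concrete with a tiny sketch. The disk labels a–d below are hypothetical stand-ins for the two mirror pairs, not the tutorial's actual /dev/sd* devices:

```shell
# Illustrative sketch: in a 4-disk RAID 10 with mirror 1 = disks a,b and
# mirror 2 = disks c,d, a double failure is fatal only when both halves
# of the same mirror are lost.
survives() {   # $1 and $2 are the two failed disks
    case "$1$2" in
        ab|ba|cd|dc) echo "no"  ;;  # one mirror completely gone
        *)           echo "yes" ;;  # each mirror still has a live copy
    esac
}
survives a c   # failures land in different mirrors
survives a b   # both copies of mirror 1 lost
```

Running it prints "yes" for the cross-mirror failure and "no" when a whole mirror is lost, which is exactly the behavior described above.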
+
+The RAID 0 stripe works by dividing data into blocks and writing block A to Mirror 1, block B to Mirror 2 (and so on) simultaneously, thereby improving the overall read and write performance. On the other hand, none of the mirrors contains the entire information for any piece of data committed to the main set. This means that if one of the mirrors fails, the entire RAID 0 component (and therefore the RAID 10 set) is rendered inoperable, with unrecoverable loss of data.
+
+### Setting up a RAID 10 Array ###
+
+There are two possible setups for a RAID 10 array: complex (built in one step) or nested (built by creating two or more RAID 1 arrays, and then using them as component devices in a RAID 0). In this tutorial, we will cover the creation of a complex RAID 10 array, due to the fact that it allows us to create an array using either an even or odd number of disks, and it can be managed as a single RAID device, as opposed to the nested setup (which only permits an even number of drives, and must be managed as a nested device, dealing with RAID 1 and RAID 0 separately).
+
+It is assumed that you have mdadm installed, and the daemon running on your system. Refer to [this tutorial][1] for details. It is also assumed that a primary partition sd[bcdef]1 has been created on each disk. Thus, the output of:
+
+    ls -l /dev | grep sd[bcdef]
+
+should be like:
+
+![](https://farm3.staticflickr.com/2944/15365276992_db79cac82a.jpg)
+
+Let's go ahead and create a RAID 10 array with the following command:
+
+    # mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]1 --spare-devices=1 /dev/sdf1
+
+![](https://farm3.staticflickr.com/2946/15365277042_28a100baa2_z.jpg)
+
+When the array has been created (it should not take more than a few minutes), the output of:
+
+    # mdadm --detail /dev/md0
+
+should look like:
+
+![](https://farm3.staticflickr.com/2946/15362417891_7984c6a05f_o.png)
+
+A couple of things to note before we proceed further.
+
+1.
**Used Dev Space** indicates the capacity of each member device used by the array. + +2. **Array Size** is the total size of the array. For a RAID 10 array, this is equal to (N*C)/M, where N: number of active devices, C: capacity of active devices, M: number of devices in each mirror. So in this case, (N*C)/M equals to (4*8GiB)/2 = 16GiB. + +3. **Layout** refers to the fine details of data layout. The possible layout values are as follows. + +---------- + +- **n** (default option): means near copies. Multiple copies of one data block are at similar offsets in different devices. This layout yields similar read and write performance than that of a RAID 0 array. + +![](https://farm3.staticflickr.com/2941/15365413092_0aa41505c2_o.png) + +- **o** indicates offset copies. Rather than the chunks being duplicated within a stripe, whole stripes are duplicated, but are rotated by one device so duplicate blocks are on different devices. Thus subsequent copies of a block are in the next drive, one chunk further down. To use this layout for your RAID 10 array, add --layout=o2 to the command that is used to create the array. + +![](https://farm3.staticflickr.com/2944/15178897580_6ef923a1cb_o.png) + +- **f** represents far copies (multiple copies with very different offsets). This layout provides better read performance but worse write performance. Thus, it is the best option for systems that will need to support far more reads than writes. To use this layout for your RAID 10 array, add --layout=f2 to the command that is used to create the array. + +![](https://farm3.staticflickr.com/2948/15179140458_4a803bb194_o.png) + +The number that follows the **n**, **f**, and **o** in the --layout option indicates the number of replicas of each data block that are required. The default value is 2, but it can be 2 to the number of devices in the array. By providing an adequate number of replicas, you can minimize I/O impact on individual drives. + +4. 
**Chunk Size**, as per the [Linux RAID wiki][2], is the smallest unit of data that can be written to the devices. The optimal chunk size depends on the rate of I/O operations and the size of the files involved. For large writes, you may see lower overhead by having fairly large chunks, whereas arrays that are primarily holding small files may benefit more from a smaller chunk size. To specify a certain chunk size for your RAID 10 array, add **--chunk=desired_chunk_size** to the command that is used to create the array.
+
+Unfortunately, there is no one-size-fits-all formula to improve performance. Here are a few guidelines to consider.
+
+- Filesystem: overall, [XFS][3] is said to be the best, while EXT4 remains a good choice.
+- Optimal layout: far layout improves read performance, but worsens write performance.
+- Number of replicas: more replicas minimize I/O impact, but increase costs as more disks will be needed.
+- Hardware: SSDs are more likely to show increased performance (under the same context) than traditional (spinning) disks.
+
+### RAID Performance Tests using DD ###
+
+The following benchmarking tests can be used to check the performance of our RAID 10 array (/dev/md0).
+
+#### 1. Write operation ####
+
+A single file of 256MB is written to the device:
+
+    # dd if=/dev/zero of=/dev/md0 bs=256M count=1 oflag=dsync
+
+512 bytes are written 1000 times:
+
+    # dd if=/dev/zero of=/dev/md0 bs=512 count=1000 oflag=dsync
+
+With the dsync flag, dd bypasses the filesystem cache and performs synchronized writes to the RAID array. This eliminates caching effects during RAID performance tests.
+
+#### 2. 
Read operation ####
+
+256KiB*15000 (3.9 GB) are copied from the array to /dev/null:
+
+    # dd if=/dev/md0 of=/dev/null bs=256K count=15000
+
+### RAID Performance Tests Using Iozone ###
+
+[Iozone][4] is a filesystem benchmark tool that allows us to measure a variety of disk I/O operations, including random read/write, sequential read/write, and re-read/re-write. It can export the results to a Microsoft Excel or LibreOffice Calc file.
+
+#### Installing Iozone on CentOS/RHEL 7 ####
+
+Enable [Repoforge][5]. Then:
+
+    # yum install iozone
+
+#### Installing Iozone on Debian 7 ####
+
+    # aptitude install iozone3
+
+The iozone command below will perform all tests on the RAID 10 array:
+
+    # iozone -Ra /dev/md0 -b /tmp/md0.xls
+
+- **-R**: generates an Excel-compatible report to standard out.
+- **-a**: runs iozone in full automatic mode, covering all tests and possible record/file sizes. Record sizes: 4k-16M and file sizes: 64k-512M.
+- **-b /tmp/md0.xls**: stores test results in the specified file.
+
+Hope this helps. Feel free to share your thoughts or tips on how to improve the performance of RAID 10. 
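To double-check the **Array Size** formula discussed earlier, here is a quick Python sketch (illustrative only; the figures are the example values from this tutorial, not read from a live array):

```python
def raid10_array_size(active_devices, device_capacity_gib, devices_per_mirror):
    """Array Size for RAID 10 as reported by mdadm --detail: (N*C)/M."""
    return (active_devices * device_capacity_gib) / devices_per_mirror

# Our example array: N=4 active devices, C=8 GiB each, M=2 devices per mirror.
print(raid10_array_size(4, 8, 2))  # 16.0 (GiB)
```

With near or offset layouts and 2 replicas, this matches the value mdadm reports in the screenshot above.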
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/setup-raid10-linux.html
+
+作者:[Gabriel Cánepa][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/gabriel
+[1]:http://xmodulo.com/create-software-raid1-array-mdadm-linux.html
+[2]:https://raid.wiki.kernel.org/
+[3]:http://ask.xmodulo.com/create-mount-xfs-file-system-linux.html
+[4]:http://www.iozone.org/
+[5]:http://xmodulo.com/how-to-set-up-rpmforge-repoforge-repository-on-centos.html
\ No newline at end of file
diff --git a/translated/news/20141008 Linux Calendar App California 0.2 Released.md b/translated/news/20141008 Linux Calendar App California 0.2 Released.md
new file mode 100644
index 0000000000..959288c383
--- /dev/null
+++ b/translated/news/20141008 Linux Calendar App California 0.2 Released.md
@@ -0,0 +1,57 @@
+Linux日历程序California 0.2 发布了
+================================================================================
+**随着[上月的Geary和Shotwell的更新][1],非盈利软件团队 Yorba 又回来了,这次带来的是新的[California][2]日历程序的发布。**
+
+一个合格的桌面日历是工作井井有条(和想要井井有条)的人的必备工具。[广受欢迎的Chrome Web Store上的Sunrise应用][3]的发布意味着选择并不像以前那么少了,而California让选择又多了一个。
+
+Yorba的Jim Nelson在Yorba博客上写道:“发生了很多变化”,接着写道:“初次发布比我想的加入了更多的特性。”
+
+![California 0.2 Looks Great on GNOME](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/california-point-2.jpg)
+
+California 0.2在GNOME上看上去棒极了。
+
+最突出的是添加了“自然语言”解析器,这使得添加事件更容易。你可以直接输入“**在下午2点与Sam会面吃玉米片**”,California就会自动把它安排在接下来的星期一的下午两点,而不必你手动输入各项信息(日期、时间等等)。
+
+
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/05/Screen-Shot-2014-05-15-at-21.26.20.png)
+
+当我们在5月份评测开发版本时这个特性就已经能工作了,现在还修复了一个问题:重复事件。
+
+要创建一个重复事件(比如:“每个星期四搜索自己的名字”),你需要在日期前包含文字“every”(每个)。要确保地点也包含在内(比如:中午12点和Samba De Amigo在Boston Tea Party喝咖啡),条目中需要有“at”或者“@”。
+
+至于详细信息,可以参见[GNOME Wiki上的快速添加页面][4]。
+
+其他的改变包括:
+
+- 通过‘月’和‘周’查看事件
+- 添加/删除 
Google、CalDAV和web(.ics)日历
+- 改进数据服务器整合
+- 添加/编辑/删除远程事件(包括重复事件)
+- 自然语言日程安排
+- F1在线帮助快捷键
+- 新的动画和弹出窗口
+
+### 在Ubuntu 14.10上安装 California 0.2 ###
+
+由于是GNOME 3的程序,可以理解,它在GNOME桌面下的外观和体验最好。
+
+Yorba没有忽略Ubuntu用户。他们已经努力(也可以说是耐心)地解决了Ubuntu需要同时支持GTK+和GNOME主题的问题。结果就是在Ubuntu上程序可能看上去有点错位,但是同样工作得很好。
+
+California 0.2在[Yorba稳定版软件PPA][5]中可以下载,且只针对Ubuntu 14.10。
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2014/10/california-calendar-natural-language-parser
+
+作者:[Joey-Elijah Sneddon][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:http://www.omgubuntu.co.uk/2014/09/new-shotwell-geary-stable-release-available-to-downed
+[2]:https://wiki.gnome.org/Apps/California
+[3]:http://www.omgchrome.com/sunrise-calendar-app-for-google-chrome/
+[4]:https://wiki.gnome.org/Apps/California/HowToUseQuickAdd
+[5]:https://launchpad.net/~yorba/+archive/ubuntu/ppa?field.series_filter=utopic
\ No newline at end of file
diff --git a/translated/news/20141008 Linux Kernel 3.17 Is Out With Plenty of New Features.md b/translated/news/20141008 Linux Kernel 3.17 Is Out With Plenty of New Features.md
new file mode 100644
index 0000000000..50a5676d15
--- /dev/null
+++ b/translated/news/20141008 Linux Kernel 3.17 Is Out With Plenty of New Features.md
@@ -0,0 +1,54 @@
+Linux Kernel 3.17 带来了很多新特性
+================================================================================
+Linus Torvalds已经发布了最新的稳定版内核3.17。
+
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2011/07/Tux-psd3894.jpg)
+
+Torvalds以他典型的[放任式][1]的口吻在Linux内核邮件列表中解释说:
+
+> “过去的一周很平静,我对3.17的如期发布没有疑虑(相对于乐观的“我应该早一周发布么”的计划而言)。”
+
+由于假期,Linus 说他还没有开始合并3.18的改变:
+
+> “我马上要去旅行了,这是我期盼早点发布时希望避免的事情。这意味着在3.17发布后,我不会在下周非常活跃地合并新的东西,并且下下周是LinuxCon EU。”
+
+### Linux 3.17有哪些新的? 
###
+
+作为一个新的发布,Linux 3.17 加入了最新的改进、硬件支持和修复等等。范围从比较生僻的 - 比如:[memfd 和文件密封补丁][2] - 到大多数人感兴趣的,比如最新硬件的支持。
+
+下面是这次发布的一些亮点的列表,但它们并不详尽。
+
+- Microsoft Xbox One 控制器支持 (没有震动)
+- 额外的Sony SIXAXIS支持改进
+- 东芝 “Active Protection Sensor” 支持
+- 新的包括Rockchip RK3288和AllWinner A23 SoC的ARM芯片支持
+- 安全计算设备上的“跨线程过滤设置”
+- 基于Broadcom BCM7XXX板卡的支持(用在不同的机顶盒上)
+- 增强的AMD Radeon R9 290支持
+- Nouveau 驱动改进,包括Kepler GPU修复
+- 包含Intel Broadwell超级本上的Wildcatpoint Audio DSP音频支持
+
+### 在Ubuntu上安装 Linux 3.17 ###
+
+虽然被列为稳定版,但是目前对于大多数人而言,并没有什么必须“现在就安装”的功能。
+
+但是如果你很耐心,**更重要的是**,有足够的技能去处理由此导致的问题,你可以通过在由Canonical维护的主线内核存档中安装一系列合适的包,在你的Ubuntu 14.10中安装Linux 3.17。
+
+**除非你知道你正在做什么,不要尝试从下面的链接中安装任何东西。**
+
+- [访问Ubuntu内核主线存档][3]
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2014/10/linux-kernel-3-17-whats-new-improved
+
+作者:[Joey-Elijah Sneddon][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:http://lkml.iu.edu/hypermail/linux/kernel/1410.0/02818.html
+[2]:http://lwn.net/Articles/607627/
+[3]:http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D
\ No newline at end of file
diff --git a/translated/share/README.md b/translated/share/README.md
new file mode 100644
index 0000000000..e5e225858e
--- /dev/null
+++ b/translated/share/README.md
@@ -0,0 +1 @@
+这里放分享类文章,包括各种软件的简单介绍、有用的书籍和网站等。
diff --git a/translated/talk/20140818 Can Ubuntu Do This--Answers to The 4 Questions New Users Ask Most.md b/translated/talk/20140818 Can Ubuntu Do This--Answers to The 4 Questions New Users Ask Most.md
new file mode 100644
index 0000000000..65afcb84f4
--- /dev/null
+++ b/translated/talk/20140818 Can Ubuntu Do This--Answers to The 4 Questions New Users Ask Most.md
@@ -0,0 +1,79 @@
+
+Ubuntu可以实现这个功能吗?- 回答4个新用户最常问的问题 
+================================================================================
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/Screen-Shot-2014-08-13-at-14.31.42.png)
+
+**在谷歌输入‘Can Ubunt[u]’,一系列的自动建议会展现在你面前。这些建议都是根据最近用户最频繁的检索而形成的。**
+
+对于Linux老用户来说,他们都能胸有成竹地回答这些问题。但是对于新用户,或者那些还在考察类似Ubuntu的系统是否值得一试的人来说,他们并不十分清楚这些答案。这些都是中肯、真实而且基本的问题。
+
+所以,在这篇文章里,我将会回答4个最常被搜索到的“Can Ubuntu...?”问题。
+
+### Ubuntu可以取代Windows吗?###
+
+![Windows isn’t to everyones tastes — or needs](http://www.omgubuntu.co.uk/wp-content/uploads/2014/07/windows-9-desktop-rumour.png)
+Windows 并不是每个人都喜欢的,或者说是必须的。
+
+是的。Ubuntu(和其他Linux发行版)可以安装到任何一台能够运行微软系统的电脑上。
+
+至于**应不应该**取代它,则取决于你自己的需求。
+
+例如,你在上大学,所需的软件都只能在Windows上运行。暂时而言,你就不需要完全更换你的系统。对于工作也是同样的道理。如果你工作所用到的软件只是微软Office、Adobe Creative Suite 或者是某个AutoCAD应用程序,那么不太建议你更换系统,坚持使用你现在所用的软件就足够了。
+
+但是对于那些用Ubuntu完全取代微软系统的我们,Ubuntu 提供了一个安全的桌面工作环境,它可以运行在范围很广的硬件上。基本上,每种需求都有软件的支持,从办公套件到网页浏览器,从视频、音乐应用程序到游戏。
+
+### Ubuntu 可以运行 .exe文件吗?###
+
+![你可以在Ubuntu运行一些Windows应用程序。](http://www.omgubuntu.co.uk/wp-content/uploads/2013/01/adobe-photoshop-cs2-free-linux.png)
+你可以在Ubuntu运行一些Windows应用程序。
+
+是可以的,尽管这些程序不是一步安装到位,也不能保证安装成功。这是因为这些软件本来就是为Windows编写的,天生与其他桌面操作系统不兼容,包括Mac OS X 或者 Android(安卓系统)。
+
+那些专门为Ubuntu(和其他Linux发行版本)制作的软件安装包都带有“.deb”的文件后缀名。它们的安装过程与安装 .exe 程序是一样的:双击安装包,然后根据屏幕提示完成安装。
+
+但是Linux是很多样化的。使用一个名为“Wine”的兼容层,可以运行许多当下很流行的Windows应用程序。(Wine 并不是模拟器,不过为了简单起见,可以把它当作模拟器来理解。)这些程序不会像在Windows下运行得那么顺畅,也未必有出色的用户界面。然而,它足以满足日常的工作要求。
+
+一些很出名的Windows软件可以通过Wine运行在Ubuntu操作系统上,这包括老版本的Photoshop和微软Office。有关兼容软件的列表,[参照Wine应用程序数据库][1]。 
+
+### Ubuntu会有病毒吗?###
+
+![它可能有错误,但是它并没有病毒](http://www.omgubuntu.co.uk/wp-content/uploads/2014/04/errors.jpg)
+它可能有错误,但是它并没有病毒。
+
+理论上,它会有病毒。但是,实际上它没有。
+
+Linux发行版是建立在一个让病毒、蠕虫、隐匿程序都很难被安装、运行或者造成很大影响的环境之下的。
+
+例如,很多应用程序都不需要特别的管理权限,以普通用户权限运行。病毒要访问系统的关键部分,同样需要管理员权限。而且大多数软件都来自维护良好且集中的软件库,例如Ubuntu软件中心,而不是一些不知名的网站。由于这样的管理,安装到受感染软件的几率可以忽略不计。
+
+你应不应该在Ubuntu系统安装杀毒软件?这取决于你自己。为了自己的安心,或者如果你经常通过Wine来使用Windows软件,或者使用双系统,你可以安装ClamAV。它是一个免费开源的病毒扫描应用程序,你可以在Ubuntu软件中心找到它。
+
+你可以在[Ubuntu 维基][2]上了解更多关于Linux和Ubuntu上病毒的信息。
+
+### 在Ubuntu上可以玩游戏吗?###
+
+![Steam有着上百个专门为Linux设计的高质量游戏。](http://www.omgubuntu.co.uk/wp-content/uploads/2012/11/steambeta.jpg)
+Steam有着上百个专门为Linux设计的高质量游戏。
+
+当然可以!Ubuntu有着多样化的游戏,从传统简单的2D象棋、拼字游戏和扫雷游戏,到很现代化的、对显卡要求很高的AAA级游戏。
+
+你可以首先去 **Ubuntu 软件中心**。这里你会找到很多免费、开源和付费的游戏,包括广受好评的独立制作游戏,像World of Goo 和Braid。当然也有其他传统游戏,例如Pychess(国际象棋)、four-in-a-row(四子棋)和Scrabble clones(拼字游戏)。
+
+对于游戏狂热爱好者,你可以安装**Steam for Linux**。在这里你可以找到各种各样最新最好玩的游戏。
+
+另外,记得留意[Humble Bundle][3]这个网站。这些“随心付”的游戏套餐每个月推出一次,持续两周。作为游戏平台,它对Linux特别友好,每当有新的游戏套餐推出时,基本都能找到Linux版本。
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2014/08/ubuntu-can-play-games-replace-windows-questions
+
+作者:[Joey-Elijah Sneddon][a]
+译者:[Shaohao Lin](https://github.com/shaohaolin)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:https://appdb.winehq.org/
+[2]:https://help.ubuntu.com/community/Antivirus
+[3]:https://www.humblebundle.com/
\ No newline at end of file
diff --git a/translated/talk/20140910 Why Do Some Old Programming Languages Never Die.md b/translated/talk/20140910 Why Do Some Old Programming Languages Never Die.md
new file mode 100644
index 0000000000..5ed8a5513d
--- /dev/null
+++ b/translated/talk/20140910 Why Do Some Old Programming Languages Never Die.md
@@ 
-0,0 +1,85 @@ +为什么一些古老的编程语言不会消亡? +================================================================================ +> 我们中意于我们所知道的。 + +![](http://a4.files.readwrite.com/image/upload/c_fill,h_900,q_70,w_1600/MTIzMDQ5NjY0MTUxMjU4NjM2.jpg) + +当今许多知名的编程语言已经都非常古老了。PHP 语言20年、Python 语言23年、HTML 语言21年、Ruby 语言和 JavaScript 语言已经19年,C 语言更是高达42年之久。 + +这是没人能预料得到的,即使是计算机科学家 [Brian Kernighan][1] 也一样。他是写著第一本关于 C 语言的作者之一,只到今天这本书还在印刷着。(C 语言本身的发明者 [Dennis Ritchie][2] 是 Kernighan 的合著者,他于 2011 年已辞世。) + +“我依稀记得早期跟编辑们的谈话,告诉他们我们已经卖出了5000册左右的量,”最近采访 Kernighan 时他告诉我说。“我们设法做的更好。我没有想到的是在2014年的教科书里学生仍然在使用第一个版本的书。” + +关于 C 语言的持久性特别显著的就是 Google 开发出了新的语言 Go,解决同一问题比用 C 语言更有效率。 + +“大多数语言并不会消失或者至少很大一部分用户承认它们不会消失,”他说。“C 语言仍然在一定的领域独领风骚,所以它很接地气。” + +### 编写所熟悉的 ### + +为什么某些计算机编程语言要比其它的更流行?因为开发者都选择使用它们。逻辑上来说,这解释已经足够,但还想深入了解为什么开发人员会选择使用它们呢,这就有点棘手了。 + +分别来自普林斯顿大学和加州大学伯克利分校的研究者 Ari Rabkin 和 Leo Meyerovich 花费了两年时间来研究解决上面的问题。他们的研究报告,[《编程语言使用情况实例分析》][3],记录了对超过 200,000 个 Sourceforge 项目和超过 13,000 个程序员投票结果的分析。 + +他们主要的发现呢?大多数时候程序员选择的编程语言都是他们所熟悉的。 + +“存在着我们使用的语言是因为我们经常使用他们,” Rabkin 告诉我。“例如:天文学家就经常使用 IDL [交互式数据语言]来开发他们的计算机程序,并不是因为它具有什么特殊的星级功能或其它特点,而是因为用它形成习惯了。他们已经用些语言构建出很优秀的程序了,并且想保持原状。” + +换句话说,它部分要归功于创建其的语言的的知名度仍保留较大劲头。当然,这并不意味着流行的语言不会变化。Rabkin 指出我们今天在使用的 C 语言就跟 Kernighan 第一次创建时的一点都不同,那时的 C 编译器跟现代的也不是完全兼容。 + +“有一个古老的,关于工程师的笑话。工程师被问到哪一种编程语言人们会使用30年,他说,‘我不知道,但它总会被叫做 Fortran’,” Rabkin 说到。“长期存活的语言跟他们在70年代和80年代刚设计出来的时候不一样了。人们通常都是在上面增加功能,而不会删除功能,因为要保持向后兼容,但有些功能会被修正。” + +向后兼容意思就是当语言升级后,程序员不仅可以使用升级语言的新特性,也不用回去重写已经实现的老代码块。老的“遗留代码”的语法规则已经不用了,但舍弃是要花成本的。只要它们存在,我们就有理由相信相关的语言也会存在。 + +### PHP: 存活长久语言的一个案例学习 ### + +遗留代码指的是用过时的源代码编写的程序或部分程序。想想看,一个企业或工程项目的关键程序功能部分是用没人维护的编程语言写出来的。因为它们仍起着作用,用现代的源代码重写非常困难或着代价太高,所以它们不得不保留下来,即使其它部分的代码都变动了,程序员也必须不断折腾以保证它们能正常工作。 + +任何的编程语言,存在了超过几十年时间都具有某种形式的遗留代码问题, PHP 也不加例外。PHP 是一个很有趣的例子,因为它的遗留代码跟现在的代码明显不同,支持者或评论家都承认这是一个巨大的进步。 + +Andi Gutmans 是 已经成为 PHP4 的标准编译器的 Zend Engine 的发明者之一。Gutmans 说他和搭档本来是想改进完善 PHP3 的,他们的工作如此成功,以至于 PHP 的原发明者 Rasmus Lerdorf 也加入他们的项目。结果就成为了 PHP4 和他的后续者 PHP5 的编译器。 + +因此,当今的 PHP 
与它的祖先即最开始的 PHP 是完全不同的。然而,在 Gutmans 看来,在用古老的 PHP 语言版本写的遗留代码的地方一直存在着偏见以至于上升到整个语言的高度。比如 PHP 充满着安全漏洞或没有“集群”功能来支持大规模的计算任务等概念。 + +“批评 PHP 的人们通常批评的是在 1998 年时候的 PHP 版本,”他说。“这些人都没有与时俱进。当今的 PHP 已经有了很成熟的生态系统了。” + +如今,Gutmans 说,他作为一个管理者最重要的事情就是鼓励人们升级到最新版本。“PHP有个很大的社区,足以支持您的遗留代码的问题,”他说。“但总的来说,我们的社区大部分都在 PHP5.3 及以上的。” + +问题是,任何语言用户都不会全部升级到最新版本。这就是为什么 Python 用户仍在使用 2000 年发布的 Python 2,而不是使用 2008 年发布的 Python 3 的原因。甚至是已经六年了喜欢 Google 的大多数用户仍没有升级。这种情况是多种原因造成的,但它使得很多开发者在承担风险。 + +“任何东西都不会消亡的,”Rabkin 说。“任何语言的遗留代码都会一直存在。重写的代价是非常高昂的,如果它们不出问题就不要去改动。” + +### 开发者是稀缺的资源 ### + +当然,开发者是不会选择那些仅仅只是为了维护老旧代码的的程序语言的。当谈论到对语言选择的偏好时,Rabkin 和 Meyerovich 发现年龄仅仅只代表个数字。Rabkin 告诉我说: + +> 有一件事使我们被深深震撼到了。这事最重要的就是我们给人们按年龄分组,然后询问他们知道多少编程语言。我们主观的认为随着年龄的增长知道的会越来越多,但实际上却不是,25岁年龄组和45岁年龄组知道的语言数目是一样的。几个反复询问的问题这里持续不变的。您知道一种语言的几率并不与您的年龄挂钩。 + +换句话说,不仅仅里年长的开发者坚持传统,年轻的程序员会承认并采用古老的编程语言作为他们的第一们语言。这可能是因为这些语言具有很有趣的开发库及功能特点,也可能是因为在社区里开发者都是一个组的都喜爱这种开发语言。 + +“在全球程序员关注的语言的数量是有定数的,” Rabkin 说。“如果一们语言表现出足够独特的价值,人们将会学习和使用它。如果是和您交流代码和知识的的某个人分享一门编程语言,您将会学习它。因此,例如,只要那些开发库是 Python 库和社区特长是 Python 语言的经验,那么 Python 将会大行其道。” + +研究人员发现关于语言实现的功能,社区是一个巨大的因素。虽然像 Python 和 Ruby 这样的高级语言并没有太大的差别,但,例如程序员就更容易觉得一种比另一种优越。 + +“Rails 不一定要用 Ruby 语言编写,但它用了,这就是社会因素在起作用,” Rabkin 说。“例如,复活 Objective-C 语言这件事就是苹果的工程师团队说‘让我们使用它吧,’ 他们就没得选择了。” + +通观社会的影响及老旧代码这些问题,我们发现最古老的和最新的计算机语言都有巨大的惰性。Go 语言怎么样能超越 C 语言呢?如果有合适的人或公司说它超越它就超越。 + +“它归结为谁传播的更好谁就好,” Rabkin 说。 + +开始的图片来自 [Blake Patterson][4] + +-------------------------------------------------------------------------------- + +via: http://readwrite.com/2014/09/02/programming-language-coding-lifetime + +作者:[Lauren Orsini][a] +译者:[runningwater](https://github.com/runningwater) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://readwrite.com/author/lauren-orsini +[1]:http://en.wikipedia.org/wiki/Brian_Kernighan +[2]:http://en.wikipedia.org/wiki/Dennis_Ritchie +[3]:http://asrabkin.bitbucket.org/papers/oopsla13.pdf 
+[4]:https://www.flickr.com/photos/blakespot/2444037775/ \ No newline at end of file diff --git a/translated/talk/20140912 What' s wrong with IPv4 and Why we are moving to IPv6.md b/translated/talk/20140912 What' s wrong with IPv4 and Why we are moving to IPv6.md new file mode 100644 index 0000000000..902765e77a --- /dev/null +++ b/translated/talk/20140912 What' s wrong with IPv4 and Why we are moving to IPv6.md @@ -0,0 +1,88 @@ +IPv6:IPv4犯的罪,为什么要我来弥补 +================================================================================ +(LCTT:标题党了一把,哈哈哈好过瘾,求不拍砖) + +在过去的十年间,IPv6 本来应该得到很大的发展,但事实上这种好事并没有降临。由此导致了一个结果,那就是大部分人都不了解 IPv6 的一些知识:它是什么,怎么使用,以及,为什么它会存在?(LCTT:这是要回答蒙田的“我是谁”哲学思考题吗?) + +![IPv4 and IPv6 Comparison](http://www.tecmint.com/wp-content/uploads/2014/09/ipv4-ipv6.gif) + +IPv4 和 IPv6 的区别 + +### IPv4 做错了什么? ### + +自从1981年发布了 RFC 791 标准以来我们就一直在使用 **IPv4**。在那个时候,电脑又大又贵还不多见,而 IPv4 号称能提供**40亿条 IP 地址**,在当时看来,这个数字好大好大。不幸的是,这么多的 IP 地址并没有被充分利用起来,地址与地址之间存在间隙。举个例子,一家公司可能有**254(2^8-2)**条地址,但只使用其中的25条,剩下的229条被空占着,以备将来之需。于是这些空闲着的地址不能服务于真正需要它们的用户,原因就是网络路由规则的限制。最终的结果是在1981年看起来那个好大好大的数字,在2014年看起来变得好小好小。 + +互联网工程任务组(**IETF**)在90年代指出了这个问题,并提供了两套解决方案:无类型域间选路(**CIDR**)以及私有地址。在 CIDR 出现之前,你只能选择三种网络地址长度:**24 位** (共可用16,777,214个地址), **20位** (共可用1,048,574个地址)以及**16位** (共可用65,534个地址)。CIDR 出现之后,你可以将一个网络再划分成多个子网。 + +举个例子,如果你需要**5个 IP 地址**,你的 ISP 会为你提供一个子网,里面的主机地址长度为3位,也就是说你最多能得到**6个地址**(LCTT:抛开子网的网络号,3位主机地址长度可以表示0~7共8个地址,但第0个和第7个有特殊用途,不能被用户使用,所以你最多能得到6个地址)。这种方法让 ISP 能尽最大效率分配 IP 地址。“私有地址”这套解决方案的效果是,你可以自己创建一个网络,里面的主机可以访问外网的主机,但外网的主机很难访问到你创建的那个网络上的主机,因为你的网络是私有的、别人不可见的。你可以创建一个非常大的网络,因为你可以使用16,777,214个主机地址,并且你可以将这个网络分割成更小的子网,方便自己管理。 + +也许你现在正在使用私有地址。看看你自己的 IP 地址,如果这个地址在这些范围内:**10.0.0.0 – 10.255.255.255**、**172.16.0.0 – 172.31.255.255**或**192.168.0.0 – 192.168.255.255**,就说明你在使用私有地址。这两套方案有效地将“IP 地址用尽”这个灾难延迟了好长时间,但这毕竟只是权宜之计,现在我们正面临最终的审判。 + +**IPv4** 还有另外一个问题,那就是这个协议的消息头长度可变。如果数据通过软件来路由,这个问题还好说。但现在路由器功能都是由硬件提供的,处理变长消息头对硬件来说是一件困难的事情。一个大的路由器需要处理来自世界各地的大量数据包,这个时候路由器的负载是非常大的。所以很明显,我们需要固定消息头的长度。 + 
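上文提到“3 位主机地址长度最多提供 6 个可用地址”,下面用 Python 标准库的 ipaddress 模块简单验证这笔账(网段 203.0.113.0/29 只是文档示例地址,并非文中 ISP 的真实分配):

```python
import ipaddress

# 一个 /29 子网:32 - 29 = 3 位主机地址长度
subnet = ipaddress.ip_network("203.0.113.0/29")

# hosts() 会自动去掉网络地址和广播地址,剩下的才能分配给主机
hosts = list(subnet.hosts())
print(len(hosts))           # 6
print(hosts[0], hosts[-1])  # 203.0.113.1 203.0.113.6
```

可以看到,8 个地址中第 0 个(网络地址)和第 7 个(广播地址)有特殊用途,用户最多能得到 6 个地址,与上文的计算一致。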
+还有一个关于 IP 地址分配的问题:美国人发明了因特网(LCTT:这个万恶的资本主义国家占用了大量 IP 地址),其他国家只得到了 IP 地址的碎片。我们需要重新定制一个架构,让连续的 IP 地址能在地理位置上集中分布,这样一来路由表可以做得更小(LCTT:想想吧,网速肯定更快)。
+
+还有一个问题,这个问题你听起来可能还不大相信,就是 IPv4 配置起来比较困难,而且还不好改变。你可能不会碰到这个问题,因为你的路由器为你做了这些事情,不用你去操心。但是你的 ISP 对此一直是很头疼的。
+
+下一代因特网需要考虑上述的所有问题。
+
+### IPv6 和它的优点 ###
+
+**IETF** 在1995年12月公布了下一代 IP 地址标准,名字叫 IPv6,为什么不是 IPv5?因为某个错误原因,“版本5”这个编号被其他项目用去了。IPv6 的优点如下:
+
+- 128位地址长度(共有3.402823669×10³⁸个地址)
+- 这个架构下的地址在逻辑上聚合
+- 消息头长度固定
+- 支持自动配置和修改你的网络。
+
+我们一项一项地分析这些特点:
+
+#### 地址 ####
+
+人们谈到 **IPv6** 时,第一件注意到的事情就是它的地址好多好多。为什么要这么多?因为设计者考虑到地址不能被充分利用起来,我们必须提供足够多的地址,让用户去挥霍,从而达到一些特殊目的。所以如果你想架设自己的 IPv6 网络,你的 ISP 可以给你分配拥有**64位**主机地址长度的网络(可以分配1.844674407×10¹⁹台主机),你想怎么玩就怎么玩。
+
+#### 聚合 ####
+
+有这么多的地址,地址可以被稀稀拉拉地分配给主机,从而更高效地路由数据包。算一笔账,你的 ISP 拿到一个**80位**地址长度的网络空间,其中16位是 ISP 的子网地址,剩下64位分给你作为主机地址。这样一来,你的 ISP 可以分配65,534个子网。
+
+然而,这些地址分配不是一成不变的,如果 ISP 想拥有更多的小子网,完全可以做到(当然,土豪 ISP 可能会要求再来一个80位网络空间)。最高的48位地址是相互独立的,也就是说 ISP 与 ISP 之间虽然可能分到相同的80位网络空间,但是这两个空间是相互隔离的,好处就是一个网络空间里面的地址会聚合在一起。
+
+#### 固定的消息头长度 ####
+
+**IPv4** 消息头长度可变,但 **IPv6** 消息头长度被固定为40字节。IPv4 会由于额外的参数导致消息头变长,IPv6 中,如果有额外参数,这些信息会被放到一个紧挨着消息头的地方,不会被路由器处理,当消息到达目的地时,这些额外参数会被软件提取出来。
+
+IPv6 消息头有一个部分叫“flow”,是一个20位伪随机数,用于简化路由器对数据包的路由过程。如果一个数据包存在“flow”,路由器就可以根据这个值作为索引查找路由表,不必慢吞吞地遍历整张路由表来查询路由路径。这个优点使 **IPv6** 更容易被路由。
+
+#### 自动配置 ####
+
+**IPv6** 中,当主机开机时,会检查本地网络,看看有没有其他主机使用了自己的 IP 地址。如果地址没有被使用,就接着查询本地的 IPv6 路由器,找到后就向它请求一个 IPv6 地址。然后这台主机就可以连上互联网了 —— 它有自己的 IP 地址,和自己的默认路由器。
+
+如果这台默认路由器宕机,主机就会接着找其他路由器,作为备用路由器。这个功能在 IPv4 协议里实现起来非常困难。同样地,假如路由器想改变自己的地址,自己改掉就好了。主机会自动搜索路由器,并自动更新路由器地址。路由器会同时保存新老地址,直到所有主机都把自己的路由器地址更新成新地址。
+
+IPv6 自动配置还不是一个完整的解决方案。想要有效地使用互联网,一台主机还需要另外的东西:域名服务器、时间同步服务器、或者还需要一台文件服务器。于是 **dhcp6** 出现了,提供与 dhcp 一样的服务,唯一的区别是 dhcp6 的机器可以在可路由的状态下启动,一个 dhcp 进程可以为大量网络提供服务。
+
+#### 唯一的大问题 ####
+
+如果 IPv6 真的比 IPv4 好那么多,为什么它还没有被广泛使用起来(Google 在**2014年5月份**估计 IPv6 的市场占有率为**4%**)?一个最基本的原因是“先有鸡还是先有蛋”问题,用户需要让自己的服务器能为尽可能多的客户提供服务,这就意味着他们必须部署一个 **IPv4** 地址。
+
+当然,他们可以同时使用 IPv4 和 IPv6 两套地址,但很少有客户会用到 IPv6,并且你还需要对你的软件做一些小修改来适应 
IPv6。另外比较头疼的一点是,很多家庭的路由器压根不支持 IPv6。还有就是 ISP 也不愿意支持 IPv6,我问过我的 ISP 这个问题,得到的回答是:只有客户明确指出要部署这个时,他们才会用 IPv6。然后我问了现在有多少人有这个需求,答案是:包括我在内,共有1个。 + +与这种现实状况呈明显对比的是,所有主流操作系统:Windows、OS X、Linux 都默认支持 IPv6 好多年了。这些操作系统甚至提供软件让 IPv6 的数据包披上 IPv4 的皮来骗过那些会丢弃 IPv6 数据包的主机,从而达到传输数据的目的(LCTT:呃,这是高科技偷渡?)。 + +#### 总结 #### + +IPv4 已经为我们服务了好长时间。但是它的缺陷会在不远的将来遭遇不可克服的困难。IPv6 通过改变地址分配规则、简化数据包路由过程、简化首次加入网络时的配置过程等策略,可以完美解决这个问题。 + +问题是,大众在接受和使用 IPv6 的过程中进展缓慢,因为改变代价太大了。好消息是所有操作系统都支持 IPv6,所以当你有一天想做出改变,你的电脑只需要改变一点点东西,就能转到全新的架构体系中去。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/ipv4-and-ipv6-comparison/ + +作者:[Jeff Silverman][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/jeffsilverm/ diff --git a/translated/talk/20140926 ChromeOS vs Linux--The Good, the Bad and the Ugly.md b/translated/talk/20140926 ChromeOS vs Linux--The Good, the Bad and the Ugly.md new file mode 100644 index 0000000000..2a193224b1 --- /dev/null +++ b/translated/talk/20140926 ChromeOS vs Linux--The Good, the Bad and the Ugly.md @@ -0,0 +1,82 @@ +ChromeOS 对战 Linux : 孰优孰劣,仁者见仁,智者见智 +================================================================================ +> 在 ChromeOS 和 Linux 的斗争过程中,不管是哪一家的操作系统都是有优有劣。 + +任何不关注Google 的人都不会相信Google在桌面用户当中扮演着一个很重要的角色。在近几年,我们见到的[ChromeOS][1]制造的[Google Chromebook][2]相当的轰动。和同期的人气火爆的Amazon 一样,似乎ChromeOS 势不可挡。 + +在本文中,我们要了解的是ChromeOS 的概念市场,ChromeOS 怎么影响着Linux 的份额,和整个 ChromeOS 对于linux 社区来说,是好事还是坏事。另外,我将会谈到一些重大的事情,和为什么没人去为他做点什么事情。 + +### ChromeOS 并非真正的Linux ### + +每当有朋友问我说是否ChromeOS 是否是Linux 的一个版本时,我都会这样回答:ChromeOS 对于Linux 就好像是 OS X 对于BSD 。换句话说,我认为,ChromeOS 是linux 的一个派生操作系统,运行于Linux 内核的引擎之下。而很多操作系统就组成了Google 的专利代码和软件。 + +尽管ChromeOS 是利用了Linux 内核引擎,但是它仍然有很大的不同和现在流行的Linux 分支版本。 + +尽管ChromeOS 的差异化越来越明显,是在于它给终端用户提供的app,包括Web 应用。因为ChromeOS 的每一个操作都是开始于浏览器窗口,这对于Linux 
用户来说,可能会有很多不一样的感受,但是,对于没有Linux 经验的用户来说,这与他们使用的旧电脑并没有什么不同。 + +就是说,每一个以Google-centric 为生活方式的人来说,在ChromeOS上的感觉将会非常良好,就好像是回家一样。这样的优势就是这个人已经接受了Chrome 浏览器,Google 驱动器和Gmail 。久而久之,他们的亲朋好友使用ChromeOs 也就是很自然的事情了,就好像是他们很容易接受Chrome 浏览器,因为他们觉得早已经用过。 + +然而,对于Linux 爱好者来说,这样的约束就立即带来了不适应。因为软件的选择被限制,有范围的,在加上要想玩游戏和VoIP 是完全不可能的。那么对不起,因为[GooglePlus Hangouts][3]是代替不了VoIP 软件的。甚至在很长的一段时间里。 + +### ChromeOS 还是Linux 桌面 ### + +有人断言,ChromeOS 要是想在桌面系统的浪潮中对Linux 产生影响,只有在Linux 停下来浮出水面栖息的时候或者是满足某个非技术用户的时候。 + +是的,桌面Linux 对于大多数休闲型的用户来说绝对是一个好东西。然而,它必须有专人帮助你安装操作系统,并且提供“维修”服务,从windows 和 OS X 的阵营来看。但是,令人失望的是,在美国Linux 正好在这个方面很缺乏。所以,我们看到,ChromeOS 正慢慢的走入我们的视线。 + +我发现Linux 桌面系统最适合做网上技术支持来管理。比如说:家里的高级用户可以操作和处理更新政府和学校的IT 部门。Linux 还可以应用于这样的环境,Linux桌面系统可以被配置给任何技能水平和背景的人使用。 + +相比之下,ChromeOS 是建立在完全免维护的初衷之下的,因此,不需要第三者的帮忙,你只需要允许更新,然后让他静默完成即可。这在一定程度上可能是由于ChromeOS 是为某些特定的硬件结构设计的,这与苹果开发自己的PC 电脑也有异曲同工之妙。因为Google 的ChromeOS 附带一个硬件脉冲,它允许“犯错误”。对于某些人来说,这是一个很奇妙的地方。 + +滑稽的是,有些人却宣称,ChomeOs 的远期的市场存在很多问题。简言之,这只是一些Linux 激情的爱好者在找对于ChomeOS 的抱怨罢了。在我看来,停止造谣这些子虚乌有的事情才是关键。 + +问题是:ChromeOS 的市场份额和Linux 桌面系统在很长的一段时间内是不同的。这个存在可能会在将来被打破,然而在现在,仍然会是两军对峙的局面。 + +### ChromeOS 的使用率正在增长 ### + +不管你对ChromeOS 有怎么样的看法,事实是,ChromeOS 的使用率正在增长。专门针对ChromeOS 的电脑也一直有发布。最近,戴尔(Dell)也发布了一款针对ChromeOS 的电脑。命名为[Dell Chromebox][5],这款ChromeOS 设备将会是另一些传统设备的终结者。它没有软件光驱,没有反病毒软件,offers 能够无缝的在屏幕后面自动更新。对于一般的用户,Chromebox 和Chromebook 正逐渐成为那些工作在web 浏览器上的人的一个选择。 + +尽管增长速度很快,ChromeOS 设备仍然面临着一个很严峻的问题 - 存储。受限于有限的硬盘的大小和严重依赖于云存储,并且ChromeOS 不会为了任何使用它们电脑的人消减基本的web 浏览器的功能。 + +### ChromeOS 和Linux 的异同点 ### + +以前,我注意到ChromeOS 和Linux 桌面系统分别占有着两个完全不同的市场。出现这样的情况是源于,Linux 社区的致力于提升Linux 桌面系统的脱机性能。 + +是的,偶然的,有些人可能会第一时间发现这个“Linux 的问题”。但是,并没有一个人接着跟进这些问题,确保得到问题的答案,确保他们得到Linux 最多的帮助。 + +事实上,脱机故障可能是这样发现的: + +- 有些用户偶然的在Linux 本地事件发现了Linux 的问题。 +- 他们带回了DVD/USB 设备,并尝试安装这个操作系统。 +- 当然,有些人很幸运的成功的安装成功了这个进程,但是,据我所知大多数的人并没有那么幸运。 +- 令人失望的是,这些人希望在网上论坛里搜索帮助。很难做一个主计算机,没有网络和视频的问题。 +- 我真的是受够了,后来有很多失望的用户拿着他们的电脑到windows 商店来“维修”。除了重装一个windows 操作系统,他们很多时候都会听到一句话,“Linux 并不适合你们”,应该尽量避免。 + 
+有些人肯定会说,上面的举例肯定夸大其词了。让我来告诉你:这是发生在我身边真实的事的,而且是经常发生。醒醒吧,Linux 社区的人们,我们的这种模式已经过时了。 + +### 伟大的平台,强大的营销和结论 ### + +如果非要说ChromeOS 和Linux 桌面系统相同的地方,除了它们都使用了Linux 内核,就是它们都伟大的产品却拥有极其差劲的市场营销。而Google 的好处就是,他们投入大量的资金在网上构建大面积存储空间。 + +Google 相信他们拥有“网上的优势”,而线下的影响不是很重要。这真是一个让人难以置信的目光短浅,这也成了Google 历史上最大的一个失误之一。相信,如果你没有接触到他们在线的努力,你不值得困扰,仅仅就当是他们在是在选择网上存储空间上做出反击。 + +我的建议是:通过Google 的线下影响,提供Linux 桌面系统给ChromeOS 的市场。这就意味着Linux 社区的人需要筹集资金来出席县博览会、商场展览,在节日季节,和在社区中进行免费的教学课程。这会立即使Linux 桌面系统走入人们的视线,否则,最终将会是一个ChromeOS 设备出现在人们的面前。 + +如果说本地的线下市场并没有想我说的这样,别担心。Linux 桌面系统的市场仍然会像ChromeOS 一样增长。最坏也能保持现在这种两军对峙的市场局面。 + +-------------------------------------------------------------------------------- + +via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-and-the-ugly-1.html + +作者:[Matt Hartley][a] +译者:[barney-ro](https://github.com/barney-ro) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.datamation.com/author/Matt-Hartley-3080.html +[1]:http://en.wikipedia.org/wiki/Chrome_OS +[2]:http://www.google.com/chrome/devices/features/ +[3]:https://plus.google.com/hangouts +[4]:http://en.wikipedia.org/wiki/Voice_over_IP +[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html diff --git a/translated/talk/20140928 What is a good subtitle editor on Linux.md b/translated/talk/20140928 What is a good subtitle editor on Linux.md new file mode 100644 index 0000000000..8ae39fd098 --- /dev/null +++ b/translated/talk/20140928 What is a good subtitle editor on Linux.md @@ -0,0 +1,64 @@ +Linux 上好用的几款字幕编辑器介绍 +================================================================================ +如果你经常看国外的大片,你应该会喜欢带字幕版本而不是有国语配音的版本。在法国长大,我的童年记忆里充满了迪斯尼电影。但是这些电影因为有了法语的配音而听起来很怪。如果现在有机会能看原始的版本,我知道,对于大多数的人来说,字幕还是必须的。我很高兴能为家人制作字幕。最让我感到希望的是,Linux 也不无花哨,而且有很多开源的字幕编辑器。总之一句话,这篇文章并不是一个详尽的Linux上字幕编辑器的列表。你可以告诉我那一款是你认为最好的字幕编辑器。 + +### 1. 
Gnome Subtitles ###
+
+![](https://farm6.staticflickr.com/5596/15323769611_59bc5fb4b7_z.jpg)
+
+[Gnome Subtitles][1] 是我需要快速编辑字幕时的选择。你可以载入视频,载入字幕文本,然后就可以即刻开始了。我很欣赏其对于易用性和高级特性之间的平衡性。它带有一个同步工具以及一个拼写检查工具。最后但同样重要的是,它这么好用最主要的是因为它的快捷键:当你编辑很多的台词的时候,你最好把你的手放在键盘上,使用其内置的快捷键来移动。
+
+### 2. Aegisub ###
+
+![](https://farm3.staticflickr.com/2944/15323964121_59e9b26ba5_z.jpg)
+
+[Aegisub][2] 的复杂性要高一个级别,它的界面也反映了这条学习曲线。但是,除了它吓人的样子以外,Aegisub 是一个非常完整的软件,提供的工具远远超出你能想象的。和Gnome Subtitles 一样,Aegisub也采用了所见即所得(WYSIWYG:what you see is what you get)的处理方式,但是达到了一个全新的高度:可以在屏幕上任意拖动字幕,也可以在另一边查看音频的频谱,并且可以利用快捷键做任何的事情。除此以外,它还带有一个汉字工具,有一个卡拉OK模式,并且你可以导入 Lua 脚本让它自动完成一些任务。我希望你在用之前,先去阅读下它的[指南][3]。
+
+### 3. Gaupol ###
+
+![](https://farm3.staticflickr.com/2942/15326817292_6702cc63fc_z.jpg)
+
+另一个复杂度相反的软件是[Gaupol][4]。不像Aegisub,Gaupol 很容易上手,而且采用了一个和Gnome Subtitles 很像的界面。但是在这相对简单的背后,它拥有很多必要的工具:快捷键、第三方扩展、拼写检查,甚至是语音识别(由[CMU Sphinx][5]提供)。这里也提一个缺点,我注意到在测试的时候,软件偶尔会有反应迟钝的表现,不是很严重,但是也足以让我更有理由喜欢Gnome Subtitles了。
+
+### 4. Subtitle Editor ###
+
+![](https://farm4.staticflickr.com/3914/15323911521_8e33126610_z.jpg)
+
+[Subtitle Editor][6]和Gaupol 很像。但是,它的界面有点不太直观,特性也只是稍微高级一点点。我很欣赏的一点是,它可以定义“关键帧”,而且提供所有的同步选项。然而,如果界面里多一点图标、少一点文字,会更好用。值得一提的是,Subtitle Editor 可以模仿“打字机”打字的效果,虽然我不知道它是否有用。最后但并非不重要,重定义快捷键的功能很实用。
+
+### 5. Jubler ###
+
+![](https://farm4.staticflickr.com/3912/15323769701_3d94ca8884_z.jpg)
+
+用Java 写就的[Jubler][7]是一个多平台支持的字幕编辑器。我对它的界面印象特别深刻。在上面我确实看出了Java 风格的东西,但是,它仍然是经过精心构造和构思的。像Aegisub 一样,你可以在屏幕上任意拖动字幕,让你有愉快的体验而不单单是打字。它也可以为字幕自定义一个风格,在另外的一个轨道播放音频,翻译字幕,或者是做拼写检查。不过你必须要注意,如果你想完整地使用Jubler,必须事先安装好媒体播放器并且正确配置。我把这些归功于在[官方页面][8]下载了脚本以后其简便的安装方式。
+
+### 6. 
Subtitle Composer ### + +![](https://farm6.staticflickr.com/5578/15323769711_6c6dfbe405_z.jpg) + +被视为“KDE里的字幕作曲家”,[Subtitle Composer][9]能够唤起对很多传统功能的回忆。伴随着KDE界面,我们很期望。很自然的我们就会说到快捷键,我特别喜欢这个功能。除此之外,Subtitle Composer 与上面提到的编辑器最大的不同地方就在于,它可以执行用JavaScript,Python,甚至是Ruby写成的脚本。软件带有几个例子,肯定能够帮助你很好的学习使用这些特性的语法。 + +最后,不管你喜不喜欢我,都要为你的家庭编辑几个字幕,重新同步整个轨道,或者是一切从头开始,那么Linux 有很好的工具给你。对我来说,快捷键和易用性使得各个工具有差异,想要更高级别的使用体验,脚本和语音识别就成了很便利的一个功能。 + +你会使用哪个字幕编辑器,为什么?你认为还有没有更好用的字幕编辑器这里没有提到的?在评论里告诉我们。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/good-subtitle-editor-linux.html + +作者:[Adrien Brochard][a] +译者:[barney-ro](https://github.com/barney-ro) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/adrien +[1]:http://gnomesubtitles.org/ +[2]:http://www.aegisub.org/ +[3]:http://docs.aegisub.org/3.2/Main_Page/ +[4]:http://home.gna.org/gaupol/ +[5]:http://cmusphinx.sourceforge.net/ +[6]:http://home.gna.org/subtitleeditor/ +[7]:http://www.jubler.org/ +[8]:http://www.jubler.org/download.html +[9]:http://sourceforge.net/projects/subcomposer/ diff --git a/translated/talk/The history of Android/07 - The history of Android.md b/translated/talk/The history of Android/07 - The history of Android.md new file mode 100644 index 0000000000..583e847d6e --- /dev/null +++ b/translated/talk/The history of Android/07 - The history of Android.md @@ -0,0 +1,109 @@ +安卓编年史 +================================================================================ +![电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。](http://cdn.arstechnica.net/wp-content/uploads/2014/01/email2lol.png) +电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。 +Ron Amadeo供图 + +邮件视图是——令人惊讶的!——白色。安卓的电子邮件应用从历史角度来说算是个打了折扣的Gmail应用,你可以在这里看到紧密的联系。读邮件以及写邮件视图几乎没有任何修改地就从Gmail那里直接取过来使用。 + +![即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/IM2.png) 
+即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。 +Ron Amadeo供图 + +在Google Hangouts之前,甚至是Google Talk之前,就有“IM”——安卓1.0带来的唯一一个即时通讯客户端。令人惊奇的是,它支持多种IM服务:用户可以从AIM,Google Talk,Windows Live Messenger以及Yahoo中挑选。还记得操作系统开发者什么时候关心过互通性吗? + +朋友列表是聊天中带有白色聊天气泡的黑色背景界面。状态用一个带颜色的圆形来指示,右侧的小安卓机器人指示出某人正在使用移动设备。IM应用相比Google Hangouts远比它有沟通性,这真是十分神奇的。绿色代表着某人正在使用设备并且已经登录,黄色代表着他们登录了但处于空闲状态,红色代表他们手动设置状态为忙,不想被打扰,灰色表示离线。现在Hangouts只显示用户是否打开了应用。 + +聊天对话界面明显基于信息应用,聊天的背景从白色和蓝色被换成了白色和绿色。但是没人更改信息输入框的颜色,所以加上橙色的高亮效果,界面共使用了白色,绿色,蓝色和橙色。 + +![安卓1.0上的YouTube。截图展示了主界面,打开菜单的主界面,分类界面,视频播放界面。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt5000.png) +安卓1.0上的YouTube。截图展示了主界面,打开菜单的主界面,分类界面,视频播放界面。 +Ron Amadeo供图 + +YouTube仅仅以G1的320p屏幕和3G网络速度可能不会有今天这样的移动意识,但谷歌的视频服务在安卓1.0上就被置入发布了。主界面看起来就像是安卓市场调整过的版本,顶部带有一个横向滚动选择部分,下面有垂直滚动分类列表。谷歌的一些分类选择还真是奇怪:“最热门”和“最多观看”有什么区别? + +一个谷歌没有意识到YouTube最终能达到多庞大的标志——有一个视频分类是“最近更新”。在今天,每分钟有[100小时时长的视频][1]上传到Youtube上,如果这个分类能正常工作的话,它会是一个快速滚动的视频列表,快到以至于变为一片无法阅读的模糊。 + +菜单含有搜索,喜爱,分类,设置。设置(没有图片)是有史以来最简陋的,只有个清除搜索历史的选项。分类都是一样的平淡,仅仅是个黑色的文本列表。 + +最后一张截图展示了视频播放界面,只支持横屏模式。尽管自动隐藏的播放控制有个进度条,但它还是很奇怪地包含了后退和前进按钮。 + +![YouTube的视频菜单,描述页面,评论。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt3.png) +YouTube的视频菜单,描述页面,评论。 +Ron Amadeo供图 + +每个视频的更多选项可以通过点击菜单按钮来打开。在这里你可以把视频标记为喜爱,查看详细信息,以及阅读评论。所有的这些界面,和视频播放一样,是锁定横屏模式的。 + +然而“共享”不会打开一个对话框,它只是向Gmail邮件中加入了视频的链接。想要把链接通过短信或即时消息发送给别人是不可能的。你可以阅读评论,但是没办法评价他们或发表自己的评论。你同样无法给视频评分或赞。 + +![相机应用的拍照界面,菜单,照片浏览模式。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/camera.png) +相机应用的拍照界面,菜单,照片浏览模式。 +Ron Amadeo供图 + +在实体机上跑上真正的安卓意味着相机功能可以正常运作,即便那里没什么太多可关注的。左边的黑色方块是相机的界面,原本应该显示取景器图像,但SDK的截图工具没办法捕捉下来。G1有个硬件实体的拍照键(还记得吗?),所以相机没必要有个屏幕上的快门键。相机没有曝光,白平衡,或HDR设置——你可以拍摄照片,仅此而已。 + +菜单按钮显示两个选项:跳转到相册应用和带有两个选项的设置界面。第一个设置选项是是否给照片加上地理标记,第二个是在每次拍摄后显示提示菜单,你可以在上面右边看到截图。同样的,你目前还只能拍照——还不支持视频拍摄。 + +![日历的月视图,打开菜单的周视图,日视图,以及日程。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/calviews.png) +日历的月视图,打开菜单的周视图,日视图,以及日程。 +Ron Amadeo供图 + 
+就像这个时期的大多数应用一样,日历的主命令界面是菜单。菜单用来切换视图,添加新事件,导航至当天,选择要显示的日程,以及打开设置。菜单扮演着每个单独按钮的入口的作用。 + +月视图不能显示约会事件的文字。每个日期旁边有个侧边,约会会显示为侧边上的绿色部分,通过位置来表示约会是在一天中的什么时候。周视图同样不能显示预约文字——G1的320×480的显示屏像素还不够密——所以你会在日历中看到一个带有颜色指示条的白块。唯一一个显示文字的是日程和日视图。你可以用滑动来切换日期——左右滑动切换周和日,上下滑动切换月份和日程。 + +![设置主界面,无线设置,关于页面的底部。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/settings.png) +设置主界面,无线设置,关于页面的底部。 +Ron Amadeo供图 + +安卓1.0最终带来了设置界面。这个界面是个带有文字的黑白界面,粗略地分为各个部分。每个列表项边的下箭头让人误以为点击它会展开折叠的更多东西,但是触摸列表项的任何位置只会加载下一屏幕。所有的界面看起来确实无趣,都差不多一样,但是嘿,这可是设置啊。 + +任何带有开/关状态的选项都使用了卡通风的复选框。安卓1.0最初的复选框真是奇怪——就算是在“未选中”状态时,它们还是有个灰色的勾选标记在里面。安卓把勾选标记当作了灯泡,打开时亮起来,关闭的时候变得黯淡,但这不是复选框的工作方式。然而我们最终还是见到了“关于”页面。安卓1.0运行Linux内核2.6.25版本。 + +设置界面意味着我们终于可以打开安全设置并更改锁屏。安卓1.0只有两种风格,安卓0.9那样的灰色方形锁屏,以及需要你在9个点组成的网格中画出图案的图形解锁。像这样的滑动图案相比PIN码更加容易记忆和输入,尽管它没有增加多少安全性。 + +![语音拨号,图形锁屏,电池低电量警告,时间设置。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/grabbag.png) +语音拨号,图形锁屏,电池低电量警告,时间设置。 +Ron Amadeo供图 + +语音功能和语音拨号一同来到了1.0。这个特性以各种功能实现在AOSP徘徊了一段时间,然而它是一个简单的拨打号码和联系人的语音命令应用。语音拨号是个和谷歌未来的语音产品完全无关的应用,但是,它的工作方式和非智能机上的语音拨号一样。 + +关于最后一个值得注意的,当电池电量低于百分之十五的时候会触发低电量弹窗。这是个有趣的图案,它把电源线错误的一端插入手机。谷歌,那可不是(现在依然不是)手机应该有的充电方式。 + +安卓1.0是个伟大的开头,但是功能上仍然有许多缺失。实体键盘和大量硬件按钮被强制要求配备,因为不带有十字方向键或轨迹球的安卓设备依然不被允许销售。另外,基本的智能手机功能比如自动旋转依然缺失。内置应用不可能像今天这样通过安卓市场来更新。所有的谷歌系应用和系统交织在一起。如果谷歌想要升级一个单独的应用,需要通过运营商推送整个系统的更新。安卓依然还有许多工作要做。 + +### 安卓1.1——第一个真正的增量更新 ### + +![安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/11.png) +安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。 +Ron Amadeo供图 + +安卓1.0发布四个半月后,2009年2月,安卓在安卓1.1中得到了它的第一个公开更新。系统方面没有太多变化,谷歌向1.1中添加新东西现如今也都已被关闭。谷歌语音搜索是安卓向云端语音搜索的第一个突击,它在应用抽屉里有自己的图标。尽管这个应用已经不能与谷歌服务器通讯,你可以[在iPhone上][2]看到它以前是怎么工作的。它还没有语音操作,但你可以说出想要搜索的,结果会显示在一个简单的谷歌搜索中。 + +安卓市场添加了对付费应用的支持,但是就像beta客户端中一样,这个版本的安卓市场不再能够连接Google Play服务器。我们最多能够看到分类界面,你可以在免费应用,付费应用和全部应用中选择。 + +地图添加了[谷歌纵横][3],一个向朋友分享自己位置的方法。纵横在几个月前为了支持Google+而被关闭并且不再能够工作。地图菜单里有个纵横的选项,但点击它现在只会打开一个带载入中圆圈的画面,并永远停留在这里。 + 
+为了让安卓世界的系统更新来得更加迅速——或者至少提供一条在运营商和OEM推送之前获得更新的途径——谷歌向“关于手机”界面添加了检查系统更新的按钮。
+
+----------
+
+![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
+
+[Ron Amadeo][a] / Ron是Ars Technica的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
+
+[@RonAmadeo][t]
+
+--------------------------------------------------------------------------------
+
+via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/
+
+译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[1]:http://www.youtube.com/yt/press/statistics.html
+[2]:http://www.youtube.com/watch?v=y3z7Tw1K17A
+[3]:http://arstechnica.com/information-technology/2009/02/google-tries-location-based-social-networking-with-latitude/
+[a]:http://arstechnica.com/author/ronamadeo
+[t]:https://twitter.com/RonAmadeo
diff --git a/translated/tech/20140819 Build a Raspberry Pi Arcade Machine.md b/translated/tech/20140819 Build a Raspberry Pi Arcade Machine.md
new file mode 100644
index 0000000000..498a05b8f3
--- /dev/null
+++ b/translated/tech/20140819 Build a Raspberry Pi Arcade Machine.md
@@ -0,0 +1,135 @@
+自制一台树莓派街机
+================================================================================
+**借助当代的神奇设备,重温80年代的黄金岁月。**
+
+### 你需要以下硬件 ###
+
+- 一台树莓派以及一张4GB SD卡
+- 一台支持HDMI的LCD显示屏
+- 游戏手柄或者...
+- 一个JAMMA街机游戏机外壳机箱 +- J-Pac或者I-Pac + +80年代有太多难忘的记忆;冷战结束,Quatro碳酸饮料,Korg Polysix合成器,以及Commodore 64家用电脑。但对于某些年轻人来说,这些都没有街机游戏机那样有说服力,或那种甜蜜的叛逆。笼罩着烟味和此起彼伏的8比特音效,它们就是在挤出来的时间里去探索的洞穴:50分钱和一份代币能让你消耗整个午餐时间,在这些游戏上磨练着你的技能:小蜜蜂,城市大金刚,蜈蚣,行星射击,吃豆小姐,火凤凰,R-Rype,大金刚,雷霆计划,铁手套,街头霸王,超越赛车,防卫者争战...这个列表太长了。 + +这些游戏,以及玩这些游戏的街机机器,仍然像30年前那样有吸引力。不像年轻时候那样,现在可以不用装一兜零钱就能玩了,最终让你超越那些有钱的孩子以及他们无休止的‘继续游戏’。所以是时候打造一个你自己的基于Linux的街机游戏机了,然后挑战一下过去的最高分。 + +我们将会覆盖所有的步骤,来将一个便宜的街机游戏机器外壳变成一台Linux驱动的多平台复古游戏系统。但是这并不意味着你就一定要搭建一个同样的系统。比如说,你可以放弃那个又大又重还有潜在致癌性外壳的箱子本身,而是将内部控制核心装进一个旧游戏主机或同等大小的盒子里。或者说,你也可以简单地放弃小巧的树莓派,而将系统的大脑换成一台更强劲的Linux主机。举个例子,它可以作为运行SteamOS的一个理想平台,用来玩那些更优秀的现代街机游戏。 + +在之后的几个页面里,我们将搭建一台基于树莓派的街机游戏机,你应该也能从其中发现很多点子应用到你自己的项目上,即使它们和我们这个项目不太一样。然后因为我们是用无比强大的MAME来做这件事情,你几乎可以让它在任意平台上运行。 + +![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade3.png) + +我们在B+型号出来以前完成的这个项目。它应该也可以同样工作在更新的主板上,你应该不用一个带电源的USB Hub也可以(点击看大图)。 + +### 声明 ### + +强调一下,我们捣腾的电子器件可能会让你受到电击。请确保你做的任何改动都是有资质的电子工程师帮你检查过的。我们也不会深入讨论如何获取游戏,但是有很多合法的资源,例如基于MAME模拟器的老游戏,以及较新的商业游戏。 + +#### 第一步:街机机柜 #### + +街机机柜本身就是最大的挑战。我们在eBay上淘了个二手的90年代初的双人泡泡龙游戏机。然后花了£220装在一台旅行车后面送过来。类似这种机柜的价格并不确定。我们看到过很多在£100以内的。而另一方面,还有很多人愿意花数千块钱去买原版侧面贴纸完整的机器。 + +决定买一个街机机柜,主要有两个考虑。第一个是它的体积:这东西又大又重。又占地方,而且需要至少两个人才能搬动。如果你不缺钱的话,还可以买DIY机柜或者全新的小一点的,例如适合摆在桌子上的那种。然后,酒柜也能很合适。 + +![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade4.jpg) + +这种机柜可能很便宜,但是他们都很重。不要一个人去搬。一些更古老的机器可能还会需要一点小关怀,例如重新喷个漆以及一些修理工作(点击看大图)。 + +除了获得更加真实的游戏体验以外,购买原版的街机机柜的一个绝佳理由是可以使用原版的控制器。从eBay上买到的大多数机器都支持两个人同时玩,有两个摇杆以及每个玩家各自的一些按钮,再加上玩家一和玩家二的选择按钮。为了兼容更多游戏,我们建议您找一台每个玩家都有6个按键,这个是通用配置。也许你还想看看支持超过两位玩家的控制台,或者有空间放其他游戏控制器的,比如说街机轨迹球(类似疯狂弹珠这种游戏需要的),或者一个旋钮(打砖块)。这些待会都可以轻松装上去,因为有现成的现代USB设备。 + +控制器是第二考虑的,而且我们认为是最重要的,因为要通过它把你的摇动和拍打转变成游戏里的动作。当你准备买一个机柜时需要考虑一种叫JAMMA的东西,它是日本娱乐机械制造商协会(Japan Amusement Machinery Manufacturers Association)的缩写。JAMMA是街机游戏机里的行业标准,定义了包含游戏芯片的电路板和游戏控制器的连接方式,以及投币机制。它是一个连接两个玩家的摇杆和按钮的所有线缆的接口电路,把它们统一到一个标准的连接头。JAMMA就是这个连接头的大小以及引脚定义,这就意味着不管你安装的主板是什么,按钮和控制器都将会连接到相同功能接口,所以街机的主人只需要再更换下机柜上的外观图片,就可以招揽新玩家了。 + 
+但是首先,提醒一下:JAMMA连接头上带有12V电压供电,通常由大多数街机里都有的电源模块供给。为了避免意外短路或是不小心掉个螺丝刀什么的造成损坏,我们完全切断了这个供电。在本教程后面的任何阶段,我们也不会用到这个连接头上的任何电源脚。 + +![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade2.png) + +#### 第二步:J-PAC #### + +有一点非常方便,你可以买到这样一种设备,连接街机机柜里的JAMMA接头和电脑的USB端口,将机柜上的摇杆和按键动作都转换成(可配置的)键盘命令,它们可以在Linux里用来控制任何想玩的游戏。这个设备就叫J-Pac([www.ultimarc.com/jpac.html][1] – 大概£54)。 + +它最大的特点不是它的连接性;而是它处理和转换输入信号的方式,因为它比标准的USB手柄强太多太多了。每一个输入都有自己独立的中断,而且没有限制同时按下或按住的按钮或摇杆方向的数量。这对于类似街头霸王的游戏来说非常关键,因为他们依赖于同时迅速按下的组合键,而且用来对那些发飙后按下自己所有按键的不良对手发出致命一击时也必不可少。许多其他控制器,特别是那些生成键盘输入的,受到他们所采用的USB控制器的同时六个输入的限制,以及一堆的Alt,Shift和Ctrl键的特殊处理的限制。J-Pac还可以接入倾角传感器,甚至某些投币装置,不用预先配置就可以在Linux下工作了。 + +另外的选择是一个类似的叫I-Pac的设备。它做了和J-Pac相同的事情,只不过不支持JAMMA接头。这意味着你不能把JAMMA控制器接上去,但同时也就是说你可以设计你自己的控制器布局,再把每个控制接到I-Pac上去。这对第一个项目来说也许有点小难,但是这却是许多街机迷们选择的方式,特别是他们想设计一个支持四个玩家的控制板的时候,或者是一个整合许多不同类型控制的面板的时候。我们采用的方式并不是我们推荐必须要做的,我们改造了一个输入有问题的二手X-Arcade Tankstick控制面板,换上了新的摇杆和按钮,再接到新的JAMMA接口,这样有一个非常好的地方就是可以用便宜的价格(£8)买到所有用到的线材包括电路板边缘插头。 + +![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade5.jpg) + +我们的已经装到机柜上的J-Pac。右边的蓝色和红色导线接到我们的机柜上额外的1号和2号玩家按钮(点击看大图)。 + +不管你选择的是I-Pac或是J-Pac,它们产生的按键都是MAME的默认值。也就是说运行模拟器之后不需要手动调整输入。例如玩家1,会默认将键盘方向键映射成上下左右,以及将左边的Ctrl,左边的ALT,空格和左边的Shift键映射到按钮1-4。但是真正实用的功能是,对于我们来说,是双键快捷方式。当按下并按住玩家1按钮后,就可以通过把玩家1的摇杆拉到下的位置发出用来暂停游戏的P按键,推到上的位置调整音量,以及推到右的位置来进入MAME自己的设置界面。这些特殊组合键设计的很巧妙,不会对正常玩游戏带来任何干扰,因为他们只有在按住玩家1按钮后才会生效,然后可以让你正在运行游戏的时候也能做任何需要的事情。例如,你可以完全地重新配置MAME,使用它自己的菜单,在玩游戏的时候改变输入绑定和灵敏度。 + +最后,按住玩家1按钮然后按下玩家2按钮就可以退出MAME,如果你使用了启动菜单或MAME管理器的话就很有用了,因为他们会自动启动游戏,然后你就可以用最快的速度开始玩另一个游戏了。 + +对于显示屏我们采取了比较保守的方式,拿掉了街机原装的笨重的而且已经坏掉的CRT,换成一个低成本的LCD显示器。这样做有很多好处。首先,这个显示器有HDMI接口,这样他就可以轻易地直接连接到树莓派或是现代的显卡上。第二,你也不用去设定驱动街机屏幕所需要的低频率刷新模式,也不需要驱动它的专用图形硬件。第三,这也是最安全的方式,因为街机屏幕往往在机身背后没有保护措施,让很高的电压离你的手只有几英寸的距离。也不是说你完全不能用CRT,如果那就是你追求的体验的话 – 这也是获得所追求的游戏体验的最真实的方式,但是我们在软件里充分细调了CRT模拟部分,我们对输出已经很满意了,而且不需要用那个古老的CRT更是让我们高兴。 + 
+你也许还需要考虑用一个老式的4:3长宽比的LCD,而不是那种宽屏的现代产品,因为4:3模式用来玩竖屏或横屏的游戏更实用。比如说玩竖屏的射击游戏,例如雷电,如果使用宽屏显示器的话,会在屏幕两边都有一个黑条。这些黑条一般会用来显示一些游戏指引,或者你也可以把屏幕翻转90度,这样就可以用上每个像素了,但这却不实用,除非你只玩竖屏游戏或者有一个容易操作的旋转支座。 + +装载显示屏也很重要。如果你拿掉了CRT的话,没有现成的地方安装LCD。我们的方式是买了一些中密度纤维板(MDF)并切割成适合原来摆放CRT的地方。固定以好,我们把一个便宜的VESA支座放在中间。VESA底座可以用来挂载大多数屏幕,大的或小的。最后,因为我们的机柜前面有烟玻璃,我们必须保证亮度和对比度都设置的足够高。 + +### 第三步:装配 ### + +现在几个硬件大件都选好了,而且也基本上确定了最终街机机柜要摆放的地方,把这几个配件装到一起并没有太大难度。我们安全地把机柜后面的电源输入部分拆开,然后在背后的空间接了一个符合插座。接在了电源开关之后的电线上。 + +几乎所有的街机机柜右上角都有个电源开关,但通常在机柜靠下一点的地方有大量的导线铰接在它上面,也就是说我们的设备可以使用普通的电源连接头。我们的机柜上还有一个荧光管,用做机器上边灯罩的背光,之前是直接连接到电源上的,我们可以用一个普通插头让它保持和电源连接。当你打开机柜上的电源开关的时候,电流会流入机柜里的各个部件 - 你的树莓派和显示屏都会开机,所有一切就都准备好了。 + +J-Pac模块直接插到JAMMA接口上,但你可能还需要一点手动调整。标准的JAMMA只支持每个玩家最多三个按键(尽管许多非正式的支持四个),而J-Pac可以支持六个。为了连接额外的按钮,你需要把按钮开关的一端接到J-Pac的GND上,另一端接到J-Pac板边有螺丝固定的输入上。它们被标记成1SW4,1SW5,1SW6,2SW4,2SW5和2SW6。J-Pac也有声音的直通连接,但是我们发现杂音太多没法用。改成把机柜上的喇叭连接到一个二手的SoundBlaster功放上,再接到树莓派的音频输出端口。声音不一定要纯正,但音量一定要足够大。 + +![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade6.jpg) + +我们的树莓派已经接到J-Pac左边,也已经连接了显示屏和USB hub(点击看大图)。 + +然后把J-Pac或I-Pac模块通过PS2转USB连接线接到你的PC或树莓派,也可以直接接到PC的PS2接口。要用旧的PS2接头的话额外还有个要求,你的电脑得足够古老还有这个,但是我们测试发现用USB性能是一样的。当然,这个不能用于不带PS2的树莓派,而且别忘了树莓派也需要供电。我们一般建议使用一个带电源的USB hub,因为没有供电是树莓派不工作最常见的错误。你还需要保证树莓派的网络正常,要么通过以太网(也许使用一个藏到机柜里的电力线适配器),或者通过无线USB设备。网络很关键是因为在树莓派被藏到机柜里后你还可以重新配置它,不用接键盘或鼠标就可以让你调整设置以及执行管理任务。 + +> ### 投币装置 ### + +> 在街机模拟社区里,让投币装置工作在模拟器上工作就会和商业产品太接近了。这就意味着你有潜在的可能对使用你机器的人收取费用。这不仅仅只是不正当,考虑到运行在你自己街机上的那些游戏的来源,这将会是非法的。这很显然违背了模拟的精神。不过,我们和其他热爱者觉得一个能工作的投币装置更进一步地靠近了街机的真实,而且值得付出努力来营造对那个过去街机的怀念。丢个10便士硬币到投币口然后再听到机器发出增加点数的声音,没有什么比得上。 + +> 实际上难度也不大。取决于你街机上的投币装置,以及它如何发信号通知投了几个币。大多数投币装置分为两个部分。较大的一部分是硬币接收和验证装置。这是投币过程的物理部分,用于检测硬币是否真实以及确定它的价值。这是通过一个游戏点数逻辑电路板来实现的,通常用一个排线连接,上边还带有很多DIP开关。这些开关用来决定接受哪种硬币,以及一个硬币能产生多少点数。然后就是简单地找到输出开关,每个点数都会触发它一次,然后把它接到JAMMA连接头的投币输入上,或者直接接到J-Pac。我们的投币装置型号是Mars MS111,在90年代早期的英国很常见,网上有大量关于每个DIP开关作用的信息,也有如何重新编程控制器来接受新硬币的方法。我们还能在这个装置的12V上接个小灯用来照亮投币孔。 + +#### 第四步:软件 #### + 
+MAME是这种规模项目唯一可行的模拟器,它如今支持运行在数不清的不同平台上的各种各样的游戏,从第一代街机到一些最近的机器。从这个项目中还孕育出了MESS,一个多模拟器的超级系统,针对的平台是80到90年代的家庭电脑以及电视游戏机。 + +如何配置MAME本身都可以写上六页的文章了。它是一个复杂的,无序的,伟大的软件程序,模拟了如此之多的CPU,声卡,芯片,控制器以及那么多的选项,就像MythTV,你都永远不能真正停止配置它。 + +但是也有个相对省事的方式,一个特别为树莓派构建的版本。它叫PiMAME。它是一个可下载的发布版和脚本,基于Raspbian,这是树莓派的默认发布版。它不仅仅会把MAME装到树莓派上(这很有用因为没有哪个默认仓库里有这个),还会安装其他一些精选出来的模拟器,并通过一个前端来管理他们。MAME,举个例子,是一个有数十个参数的命令行应用。但是PiMAME还有一个妙招 - 它安装了一个简单的网页服务器,可以在连接上网络后让你通过浏览器来安装新游戏。这是一个很好的优点,因为把游戏文件放到正确的目录下是使用MAME的困难之一,这还能让你连接到树莓派的存储设备得到最优使用。还有,PiMAME会通过用来安装它的脚本更新自己,所以保持最新版本就太简单了。目前来说这个非常有用,因为在编写这个项目的时候,正好在0.8版这样一个重大更新发布的时间点上。我们在三月份早期时发现有一些轻微的不稳定,但是我们确定在你读到这篇文章的时候一切都会解决。 + +安装PiMAME最好的方式就是先装Raspbian。你可以通过NOOBS安装,使用电脑上的图形工具,或者通过dd命令把Raspbian的内容直接写入到你的SD卡中。就像我们上个月的BrewPi教程里曾提到的,这个过程在之前已经被记录过很多次,所以就不再浪费口水了。想简单点就装一下NOOBS,参照树莓派网站上的指引。在装好Raspbian并跑起来以后,请确保使用配置工具来释放SD卡上的空间,以及确保系统已经更新到最新(`sudo apt-get update; sudo apt-get upgrade`)。然后再确保已经安装好了git工具包。当前任意Raspbian版本都会自带git,不过你仍然可以通过命令`sudo apt-get install git`检查一下。 + +然后再在终端里输入下面的命令把PiMAME安装器从项目的GitHub仓库克隆到本地: + + git clone https://github.com/ssilverm/pimame_installer + +之后,如果命令工作正常的话你应该能看到如下的反馈输出: + + Cloning into ‘pimame_installer’... + remote: Reusing existing pack: 2306, done. + remote: Total 2306 (delta 0), reused 0 (delta 0) + Receiving objects: 100% (2306/2306), 4.61 MiB | 11 KiB/s, done. + Resolving deltas: 100% (823/823), done. 
+ +这个命令会创建一个叫‘pimame_installer’的新目录,然后下一步就是进入这个目录再执行它里面的脚本: + + cd pimame_installer/ + sudo ./install.sh + +这个命令会安装和配置很多软件。所需的时间长短也取决于你的因特网速度,因为需要下载大量的包。我们那个简陋的树莓派加15Mb因特网连接用了差不多45分钟来执行完这个脚本,在这之后你会收到重启机器的提示。你现在可以安全的通过输入`sudo shutdown -r`来重启了,因为这个命令会自动处理剩下的SD卡写入操作。 + +这就是安装的全部事情了。在重启树莓派后,就会自动登录,然后会出现PiMAME启动菜单。在0.8版本里这是个非常漂亮的界面,有每个支持平台的图片,还有红色图标提示已经安装了多少个游戏。现在应该可以用控制器来操作了。如果需要检查控制器是否正确连接,可以用SSH连接到树莓派然后检查一下文件**/dev/input/by-id/usb-Ultimarc_I-PAC_Ultimarc_I-PAC-event-kbd**是否存在。 + +默认的按键配置就可以让你选择要在你的街机上运行哪个模拟器。我们最感兴趣的就是第一个,名字叫‘AdvMAME’,不过你也许会很惊讶看到还有一个MAME可选的,MAME4ALL。MAME4ALL是特别为树莓派构建的,使用了旧版的MAME源代码,所以它所支持的ROMS的性能也是最佳的。这是很合理的,因为你的树莓派不可能玩那些要求很高的游戏,所以没有理由苛求模拟器的没必要的兼容性。现在剩下的事情就是找些游戏装到你的系统里(参考下面的方法),然后尽情享受吧! + +![](http://www.linuxvoice.com/wp-content/uploads/2014/08/picade1.png) + +-------------------------------------------------------------------------------- + +via: http://www.linuxvoice.com/arcade-machine/ + +作者:[Ben Everard][a] +译者:[zpl1025](https://github.com/zpl1025) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.linuxvoice.com/author/ben_everard/ +[1]:http://www.ultimarc.com/jpac.html diff --git a/translated/tech/20140826 20 Postfix Interview Questions and Answers.md b/translated/tech/20140826 20 Postfix Interview Questions and Answers.md new file mode 100644 index 0000000000..4ae9f2ad38 --- /dev/null +++ b/translated/tech/20140826 20 Postfix Interview Questions and Answers.md @@ -0,0 +1,122 @@ +Postfix的20个问答题 +================================================================================ +### 问题1:什么是 Postfix,它的默认端口是多少? ### + +答:Postfix 是一个开源的 MTA(邮件传送代理,英文名:Mail Transfer Agent),用于转发 email。相信很多人知道 Sendmail,而 Postfix 是它的替代品。默认端口是25。 + +### 问题2:Postfix 和 Sendmail 有什么区别? ### + +答:Postfix 使用模块化设计,由多个独立的可执行程序组成;而 Sendmail 被设计成有一个强大的后台进程提供所有服务。 + +### 问题3:什么是 MTA,它在邮件系统中扮演什么角色? 
### + +答:MTA 是 Mail Transfer Agent 的缩写。MTA 负责接收和发送邮件、确定发送路径和地址重写(LCTT:address rewriting,就是完善发送地址,比如将“username”这个地址重写为“username@example.com”)。本地转发就是将邮件发送给 MDA。Qmail、Postix、Sendmail 都是 MTA。 + +### 问题4:什么是 MDA? ### + +答:MDA 是 Mail Delivery Agent 的缩写。MDA 这个程序用于从 MTA 获取邮件并传送至本地接受者的邮箱。MDA 通常可以过滤邮件或为邮件分类。一个 MDA 也能决定一封邮件是否需要转发到另一个邮箱地址。Procmail 就是一个 MDA。 + +### 问题5:什么是 MUA? ### + +答:MUA 是 Mail User Agent 的缩写。MUA 是一个邮件客户端软件,可以用来写邮件、发送邮件、接收邮件。发送邮件时使用的是 MTA;接收邮件时可以从邮件存储区直接收取,也可以通过 POP/IMAP 服务器间接收取。Outlook、Thunkerbird、Evolution 都是 MUA。 + +### 问题6:Mailserver 里 postmaster 的作用是什么? ### + +答:邮件管理者一般就是 postmaster。一个 postmaster 的责任就是保证邮件系统正常工作、更新系统配置、添加/删除邮箱帐号,以及其他。每个域中必须存在一个 postmaster 的别名(LCTT:postmaster 别名的作用就是能让你的邮件系统以外的用户往邮件系统里面的用户发邮件,当然也能接收来自系统内部用户发送出来的邮件),用于将邮件发往正确的用户。 + +### 问题7:Postfix 都有些什么重要的进程? ### + +答:以下是 Postfix 邮件系统里最重要的后台进程列表: + +- **master**:这条进程是 Postfix 邮件系统的大脑,它产生所有其他进程。 +- **smtpd**:作为服务器端程序处理所有外部连进来的请求。 +- **smtp**:作为客户端程序处理所有对外发起连接的请求。 +- **qmgr**:它是 Postfix 邮件系统的心脏,处理和控制邮件列表里面的所有消息。 +- **local**:这是 Postfix 自有的本地传送代理,就是它负责把邮件保存到邮箱里。 + +### 问题8:Postfix 服务器的配置什么是什么? ### + +答:有两个主要配置文件: + +- **/etc/postfix/main.cf**:这个文件保存全局配置信息,所有进程都会用到,除非这些配置在 master.cf 文件中被重新设置了。 +- **/etc/postfix/master.cf**:这个文件保存了额外的进程运行时环境参数,在 main.cf 文件中定义的配置可能会被本文件的配置覆盖掉。 + +### 问题9:如何将 Postfix 重启以及设为开机启动? ### + +答:使用这个命令重启:`service postfix restart`;使用这个命令设为开机启动:`chkconfig postfix on` + +### 问题10:怎么查看 Postfix 的邮件列表? ### + +答:Postfix 维护两个列表:未决邮件队列(pending mails queue)和等待邮件队列(deferred mail queue)。等待队列包含了暂时发送失败、需要重新发送的邮件,Postfix 会定期重发(默认5分钟,可自定义设置)。(LCTT:其实 Postfix 维护5个队列:输入队列,邮件进入 Postfix 系统的第一站;活动队列,qmgr 将输入队列的邮件移到活动队列;等待队列,保存暂时不能发送出去的邮件;故障队列,保存受损或无法解读的邮件;保留队列,将邮件无限期留在 Postfix 队列系统中。) + +列出邮件队列里面所有邮件: + + # postqueue -p + +保存邮件队列名单: + + # postqueue -p > /mnt/queue-backup.txt + +让 Postfix 马上处理队列: + + # postqueue -f + +### 问题11:如何删除邮件队列里面的邮件? ### + +答:以下命令删除所有邮件: + + # postsuper -d ALL + +以下命令只删除等待队列中的邮件: + + # postsuper -d ALL deferred + +### 问题12:如何通过命令来检查 Postfix 配置信息? 
### + +答:使用`postconf -n`命令可以查看,它会过滤掉配置文件里面被注释掉的配置信息。 + +### 问题13:实时查看邮件日志要用什么命令? ### + +答:`tail -f /var/log/maillog` 或 `tailf /var/log/maillog` + +### 问题14:如何通过命令行发送测试邮件? ### + +答:参考下面的命令: + + # echo "Test mail from postfix" | mail -s "Plz ignore" info@something.com + +### 问题15:什么是“开放邮件转发”? ### + +答:开放邮件转发是 SMTP 服务器的一项设定,允许因特网上其他用户能通过该服务器转发邮件,而不是直接发送到某个帐号。过去,这项功能在许多邮件服务器中都是默认开启的,但是现在已经不再流行了,因为邮件转发会导致大量垃圾邮件和病毒邮件在网络上肆虐。 + +### 问题16:什么是 Postfix 上的邮件转发主机? ### + +答:转发主机是 SMTP 的地址,如果在配置文件中有配置,那么所有输入邮件都将被 SMTP 服务器转发。 + +### 问题17:什么是灰名单? ### + +答:灰名单(LCTT:介于白名单和黑名单之间)用于拦截垃圾邮件。一个 MTA 使用灰名单时就会“暂时拒绝”未被识别的发送者发来的所有邮件。如果邮件是正当合理的,发起者会在一段时间后重新发送,然后这份邮件就能被接收。(LCTT:灰名单基于这样一个事实,就是大多数的垃圾邮件服务器和僵尸网络的邮件只发送一次,而会忽略要求它们在一定的时间间隔后再次发送的请求。) + +### 问题18:邮件系统中 SPF 记录有什么重要作用? ### + +答:SPF 是 Sender Policy Framework 的缩写,用于帮助域的拥有者确认发送方是否来自他们的域,目的是其他邮件系统能够保证发送方在发送邮件时是否经过授权 —— 这种方法可以减小遇到邮件地址欺骗、网络钓鱼和垃圾邮件的风险。 + +### 问题19:邮件系统中 DKIM 有什么用处? ### + +答:域名密匙是一套电子邮件身份认证系统,用于验证邮件发送方的 DNS 域和邮件的完整性。域名密匙规范采用互联网电子邮件认证技术,建立了一套加强版协议:域名密匙识别邮件(就是 DKIM)。 + +### 问题20:邮件系统中 ASSP 的规则是什么? 
### + +答:ASSP(Anti-Spam SMTP Proxy,反垃圾代理) 是一个网关服务器,安装在你的 MTA 前面,通过自建白名单、自动学习贝叶斯算法、灰名单、DNS 黑名单(DNSBL)、DNS 白名单(DNSWL)、URI黑名单(URIBL)、SPF、SRS、Backscatter、病毒扫描功能、附件阻拦功能、基于发送方等多种方法来反垃圾邮件。 + +-------------------------------------------------------------------------------- + +via: http://www.linuxtechi.com/postfix-interview-questions-answers/ + +作者:[Pradeep Kumar][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.linuxtechi.com/author/pradeep/ diff --git a/translated/tech/20140828 Setup Thin Provisioning Volumes in Logical Volume Management (LVM)--Part IV.md b/translated/tech/20140828 Setup Thin Provisioning Volumes in Logical Volume Management (LVM)--Part IV.md new file mode 100644 index 0000000000..63a86c598d --- /dev/null +++ b/translated/tech/20140828 Setup Thin Provisioning Volumes in Logical Volume Management (LVM)--Part IV.md @@ -0,0 +1,212 @@ +在逻辑卷管理中设置精简资源调配卷——第四部分 +================================================================================ +逻辑卷管理有许多特性,比如像快照和精简资源调配。在先前(第三部分中),我们已经介绍了如何为逻辑卷创建快照。在本文中,我们将了解如何在LVM中设置精简资源调配。 + +![Setup Thin Provisioning in LVM](http://www.tecmint.com/wp-content/uploads/2014/08/Setup-Thin-Provisioning-in-LVM.jpg) +在LVM中设置精简资源调配 + +### 精简资源调配是什么? ### +精简资源调配用于lvm以在精简池中创建虚拟磁盘。我们假定我服务器上有**15GB**的存储容量,而我已经有2个客户各自占去了5GB存储空间。你是第三个客户,你也请求5GB的存储空间。在以前,我们会提供整个5GB的空间(富卷)。然而,你可能只使用5GB中的2GB,其它3GB以后再去填满它。 + +而在精简资源调配中我们所做的是,在其中一个大卷组中定义一个精简池,再在精简池中定义一个精简卷。这样,不管你写入什么文件,它都会保存进去,而你的存储空间看上去就是5GB。然而,这所有5GB空间不会全部铺满整个硬盘。对其它客户也进行同样的操作,就像我说的,那儿已经有两个客户,你是第三个客户。 + +那么,让我们想想,我到底为客户分配了总计多少GB的空间呢?所有15GB的空间已经全部分配完了,如果现在有某个人来问我是否能提供5GB空间,我还可以分配给他么?答案是“可以”。在精简资源调配中,我可以为第四位客户分配5GB空间,即使我已经把那15GB的空间分配完了。 + +**警告**:从那15GB空间中,如果我们对资源调配超过15GB了,那就是过度资源调配了。 + +### 它是怎么工作的?我们又是怎样为客户提供存储空间的? 
### + +我已经提供给你5GB空间,但是你可能只用了2GB,而其它3GB还空闲着。在富资源调配中,我们不能这么做,因为它一开始就分配了整个空间。 + +在精简资源调配中,如果我为你定义了5GB空间,它就不会在定义卷时就将整个磁盘空间全部分配,它会根据你的数据写入而增长,希望你看懂了!跟你一样,其它客户也不会使用全部卷,所以还是有机会为一个新客户分配5GB空间的,这称之为过度资源调配。 + +但是,必须对各个卷的增长情况进行监控,否则结局会是个灾难。在过度资源调配完成后,如果所有4个客户都极度地写入数据到磁盘,你将碰到问题了。因为这个动作会填满15GB的存储空间,甚至溢出,从而导致这些卷下线。 + +### 需求 ### + +注:此三篇文章如果发布后可换成发布后链接,原文在前几天更新中 + +- [使用LVM在Linux中创建逻辑卷——第一部分][1] +- [在Linux中扩展/缩减LVM——第二部分][2] +- [在LVM中创建/恢复逻辑卷快照——第三部分][3] + +#### 我的服务器设置 #### + + 操作系统 — 安装有LVM的CentOS 6.5 + 服务器IP — 192.168.0.200 + +### 步骤1: 设置精简池和卷 ### + +理论讲太多了,让我们还是来点实际的吧,我们一起来设置精简池和精简卷。首先,我们需要一个大尺寸的卷组。这里,我创建了一个**15GB**的卷组用于演示。现在,用下面的命令来列出卷组。 + + # vgcreate -s 32M vg_thin /dev/sdb1 + +![Listing Volume Group](http://www.tecmint.com/wp-content/uploads/2014/08/Listing-Volume-Group.jpg) +列出卷组 + +接下来,在创建精简池和精简卷之前,检查逻辑卷有多少空间可用。 + + # vgs + # lvs + +![Check Logical Volume](http://www.tecmint.com/wp-content/uploads/2014/08/check-Logical-Volume.jpg) +检查逻辑卷 + +我们可以在上面的lvs命令输出中看到,只显示了一些默认逻辑用于文件系统和交换分区。 + +### 创建精简池 ### + +使用以下命令在卷组(vg_thin)中创建一个15GB的精简池。 + + # lvcreate -L 15G --thinpool tp_tecmint_pool vg_thin + +- **-L** – 卷组大小 +- **–thinpool** – 创建精简池 +- **tp_tecmint_poolThin** - 精简池名称 +- **vg_thin** – 我们需要创建精简池的卷组名称 + +![Create Thin Pool](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Pool.jpg) +创建精简池 + +使用‘lvdisplay’命令来查看详细信息。 + + # lvdisplay vg_thin/tp_tecmint_pool + +![Logical Volume Information](http://www.tecmint.com/wp-content/uploads/2014/08/Logical-Volume-Information.jpg) +逻辑卷信息 + +这里,我们还没有在该精简池中创建虚拟精简卷。在图片中,我们可以看到分配的精简池数据为**0.00%**。 + +### 创建精简卷 ### + +现在,我们可以在带有-V(Virtual)选项的‘lvcreate’命令的帮助下,在精简池中定义精简卷了。 + + # lvcreate -V 5G --thin -n thin_vol_client1 vg_thin/tp_tecmint_pool + +我已经在我的**vg_thin**卷组中的**tp_tecmint_pool**内创建了一个精简虚拟卷,取名为**thin_vol_client1**。现在,使用下面的命令来列出逻辑卷。 + + # lvs + +![List Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/List-Logical-Volumes.jpg) +列出逻辑卷 + +刚才,我们已经在上面创建了精简卷,这就是为什么没有数据,显示为**0.00%M**。 + 
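前面说过,使用精简卷时必须持续监控精简池的增长情况,否则结局会是个灾难。下面是一个简单的示意脚本,演示阈值判断的思路(其中的卷组名 vg_thin、精简池名 tp_tecmint_pool 沿用本文的例子,阈值 80 只是一个假设值):

```shell
#!/bin/sh
# 判断使用率是否达到阈值(纯算术比较,便于单独演示和测试)
pool_over_threshold() {
    # $1:当前使用率(百分比数值),$2:阈值
    awk -v u="$1" -v t="$2" 'BEGIN { exit !(u >= t) }'
}

# 实际使用时可以从 lvs 读取精简池的 data_percent 字段(需要 root 权限),例如:
#   usage=$(lvs --noheadings -o data_percent vg_thin/tp_tecmint_pool | tr -d ' ')
usage=59.19   # 这里直接用文中出现过的示例值来演示

if pool_over_threshold "$usage" 80; then
    echo "WARNING: thin pool usage ${usage}%"
else
    echo "OK: thin pool usage ${usage}%"
fi
```

把这样的脚本放进 cron 定期执行,就可以在精简池快被写满之前收到提醒,避免卷溢出、下线。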
+好吧,让我为其它2个客户再创建2个精简卷。这里,你可以看到在精简池(**tp_tecmint_pool**)下有3个精简卷了。所以,从这一点上看,我们开始明白,我已经把15GB的精简池全部分配出去了。
+
+![Create Thin Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/Create-Thin-Volumes.jpg)
+
+### 创建文件系统 ###
+
+现在,使用下面的命令为这3个精简卷创建挂载点并挂载,然后拷贝一些文件进去。
+
+    # mkdir -p /mnt/client1 /mnt/client2 /mnt/client3
+
+列出创建的目录。
+
+    # ls -l /mnt/
+
+![Creating Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Creating-Mount-Points.jpg)
+创建挂载点
+
+使用‘mkfs’命令为这些创建的精简卷创建文件系统。
+
+    # mkfs.ext4 /dev/vg_thin/thin_vol_client1 && mkfs.ext4 /dev/vg_thin/thin_vol_client2 && mkfs.ext4 /dev/vg_thin/thin_vol_client3
+
+![Create File System](http://www.tecmint.com/wp-content/uploads/2014/08/Create-File-System.jpg)
+创建文件系统
+
+使用‘mount’命令来挂载所有3个客户卷到创建的挂载点。
+
+    # mount /dev/vg_thin/thin_vol_client1 /mnt/client1/ && mount /dev/vg_thin/thin_vol_client2 /mnt/client2/ && mount /dev/vg_thin/thin_vol_client3 /mnt/client3/
+
+使用‘df’命令来列出挂载点。
+
+    # df -h
+
+![Print Mount Points](http://www.tecmint.com/wp-content/uploads/2014/08/Print-Mount-Points.jpg)
+打印挂载点
+
+这里,我们可以看到所有3个客户卷已经挂载了,而每个客户卷只使用了3%的数据空间。那么,让我们从桌面添加一些文件到这3个挂载点,以填充一些空间。
+
+![Add Files To Volumes](http://www.tecmint.com/wp-content/uploads/2014/08/Add-Files-To-Volumes.jpg)
+添加文件到卷
+
+现在列出挂载点,并查看每个精简卷使用的空间,然后列出精简池来查看池中已使用的大小。
+
+    # df -h
+    # lvdisplay vg_thin/tp_tecmint_pool
+
+![Check Mount Point Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Mount-Point-Size.jpg)
+检查挂载点大小
+
+![Check Thin Pool Size](http://www.tecmint.com/wp-content/uploads/2014/08/Check-Thin-Pool-Size.jpg)
+检查精简池大小
+
+上面的命令显示了3个挂载点及其使用大小百分比。
+
+    13% of datas used out of 5GB for client1
+    29% of datas used out of 5GB for client2
+    49% of datas used out of 5GB for client3
+
+在查看精简池时,我们看到总共只有**30%**的数据被写入,这是上面3个客户虚拟卷的总使用量。
+
+### 过度资源调配 ###
+
+现在,**第四个**客户来申请5GB的存储空间。我能给他吗?因为我已经把15GB的池分配给了3个客户。能不能再给另外一个客户分配5GB的空间呢?可以,这完全可能。在我们使用**过度资源调配**时,就可以实现。过度资源调配可以给我们比我们所拥有的更大的空间。
+
+让我来为第四位客户创建5GB的空间,然后再验证一下大小吧。
+
+    # lvcreate -V 5G --thin -n thin_vol_client4 vg_thin/tp_tecmint_pool
+    # lvs
+
+![Create thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Create-thin-Storage.jpg)
+创建精简存储
+
+在精简池中,我只有15GB大小的空间,但是我已经在精简池中创建了4个卷,其总量达到了20GB。如果4个客户都开始写入数据到他们的卷,并将空间填满,到那时我们将面对严峻的形势。如果不填满空间,那不会有问题。
+
+现在,我已经在**thin_vol_client4**中创建了文件系统,然后挂载到了**/mnt/client4**下,并且拷贝了一些文件到里头。
+
+    # lvs
+
+![Verify Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-Thing-Storage.jpg)
+验证精简存储
+
+我们可以在上面的图片中看到,新创建的client 4总计使用空间达到了**89.34%**,而精简池的已用空间达到了**59.19%**。如果所有这些用户不再过度对卷写入,那么它就不会溢出,下线。要避免溢出,我们需要扩展精简池大小。
+
+**重要**:精简池只是一个逻辑卷,因此,如果我们需要对其进行扩展,我们可以使用和扩展逻辑卷一样的命令,但我们不能缩减精简池大小。
+
+    # lvextend
+
+这里,我们可以看到怎样来扩展逻辑精简池(**tp_tecmint_pool**)。
+
+    # lvextend -L +15G /dev/vg_thin/tp_tecmint_pool
+
+![Extend Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Extend-Thin-Storage.jpg)
+扩展精简存储
+
+接下来,列出精简池大小。
+
+    # lvs
+
+![Verify Thin Storage](http://www.tecmint.com/wp-content/uploads/2014/08/Verify-Thin-Storage.jpg)
+验证精简存储
+
+前面,我们的**tp_tecmint_pool**大小为15GB,而在对第四个精简卷进行过度资源配置后达到了20GB。现在,它扩展到了30GB,所以我们的过度资源配置又回归常态,而精简卷也不会溢出,下线了。通过这种方式,我们可以添加更多的精简卷到精简池中。
+
+在本文中,我们已经了解了怎样来使用一个大尺寸的卷组创建一个精简池,以及怎样通过过度资源配置在精简池中创建精简卷和扩展精简池。在下一篇文章中,我们将介绍怎样来移除逻辑卷。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/setup-thin-provisioning-volumes-in-lvm/
+
+作者:[Babin Lonston][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:http://www.tecmint.com/create-lvm-storage-in-linux/
+[2]:http://www.tecmint.com/extend-and-reduce-lvms-in-linux/
+[3]:http://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-lvm/
diff --git a/translated/tech/20140901 How to install and configure ownCloud on Debian.md b/translated/tech/20140901 How to install and configure ownCloud on Debian.md
new file mode 100644 index 0000000000..84755fe8e1 --- /dev/null +++ b/translated/tech/20140901 How to install and configure ownCloud on Debian.md @@ -0,0 +1,209 @@ +如何在Debian上安装配置ownCloud +================================================================================ +据其官方网站,ownCloud可以让你通过一个网络接口或者WebDAV访问你的文件。它还提供了一个平台,可以轻松地查看、编辑和同步您所有设备的通讯录、日历和书签。尽管ownCloud与广泛使用Dropbox非常相似,但主要区别在于ownCloud是免费的,开源的,从而可以自己的服务器上建立与Dropbox类似的云存储服务。使用ownCloud你可以完整地访问和控制您的私人数据而对存储空间没有限制(除了硬盘容量)或者连客户端的连接数量。 + +ownCloud提供了社区版(免费)和企业版(面向企业的有偿支持)。预编译的ownCloud社区版可以提供了CentOS、Debian、Fedora、openSUSE、,SLE和Ubuntu版本。本教程将演示如何在Debian Wheezy上安装和在配置ownCloud社区版。 + +### 在Debian上安装 ownCloud ### + +进入官方网站:[http://owncloud.org][1],并点击‘Install’按钮(右上角)。 + +![](https://farm4.staticflickr.com/3885/14884771598_323f2fc01c_z.jpg) + +为当前的版本选择“Packages for auto updates”(下面的图是v7)。这可以让你轻松的让你使用的ownCloud与Debian的包管理系统保持一致,包是由ownCloud社区维护的。 + +![](https://farm6.staticflickr.com/5589/15071372505_298a796ff6_z.jpg) + +在下一屏职工点击继续: + +![](https://farm6.staticflickr.com/5589/14884818527_554d1483f9_z.jpg) + +在可用的操作系统列表中选择Debian 7 [Wheezy]: + +![](https://farm6.staticflickr.com/5581/14884669449_433e3334e0_z.jpg) + +加入ownCloud的官方Debian仓库: + + # echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/Debian_7.0/ /' >> /etc/apt/sources.list.d/owncloud.list + +加入仓库密钥到apt中: + + # wget http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/Release.key + # apt-key add - < Release.key + +继续安装ownCLoud: + + # aptitude update + # aptitude install owncloud + +打开你的浏览器并定位到你的ownCloud实例中,地址是http:///owncloud: + +![](https://farm4.staticflickr.com/3869/15071011092_f8f32ffe11_z.jpg) + +注意ownCloud可能会包一个Apache配置错误的警告。使用下面的步骤来解决这个错误来摆脱这些错误信息。 + +a) 编辑 the /etc/apache2/apache2.conf (设置 AllowOverride 为 All): + + + Options Indexes FollowSymLinks + AllowOverride All + Order allow,deny + Allow from all + + +b) 编辑 the /etc/apache2/conf.d/owncloud.conf + + + Options Indexes FollowSymLinks MultiViews + AllowOverride 
All + Order allow,deny + Allow from all + + +c) 重启web服务器: + + # service apache2 restart + +d) 刷新浏览器,确认安全警告已经消失 + +![](https://farm6.staticflickr.com/5562/14884771428_fc9c063418_z.jpg) + +### 设置数据库 ### + +是时候为ownCloud设置数据库了。 + +首先登录本地的MySQL/MariaDB数据库: + + $ mysql -u root -h localhost -p + +为ownCloud创建数据库和用户账户。 + + mysql> CREATE DATABASE owncloud_DB; + mysql> CREATE USER ‘owncloud-web’@'localhost' IDENTIFIED BY ‘whateverpasswordyouchoose’; + mysql> GRANT ALL PRIVILEGES ON owncloud_DB.* TO ‘owncloud-web’@'localhost'; + mysql> FLUSH PRIVILEGES; + +通过http:///owncloud 进入ownCloud页面,并选择‘Storage & database’ 选项。输入所需的信息(MySQL/MariaDB用户名,密码,数据库和主机名),并点击完成按钮。 + +![](https://farm6.staticflickr.com/5584/15071010982_b76c23c384_z.jpg) + +### 为ownCloud配置SSL连接 ### + +在你开始使用ownCloud之前,强烈建议你在ownCloud中启用SSL支持。使用SSL可以提供重要的安全好处,比如加密ownCloud流量并提供适当的验证。在本教程中,将会为SSL使用一个自签名的证书。 + +创建一个储存服务器密钥和证书的目录: + + # mkdir /etc/apache2/ssl + +创建一个证书(并有一个密钥来保护它),它有一年的有效期。 + + # openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt + +![](https://farm6.staticflickr.com/5587/15068784081_f281b54b72_z.jpg) + +编辑/etc/apache2/conf.d/owncloud.conf 启用HTTPS。对于余下的NC、R和L重写规则的意义,你可以参考[Apache 文档][2]: + + Alias /owncloud /var/www/owncloud + + + RewriteEngine on + ReWriteCond %{SERVER_PORT} !^443$ + RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L] + + + + SSLEngine on + SSLCertificateFile /etc/apache2/ssl/apache.crt + SSLCertificateKeyFile /etc/apache2/ssl/apache.key + DocumentRoot /var/www/owncloud/ + + Options Indexes FollowSymLinks MultiViews + AllowOverride All + Order allow,deny + Allow from all + + + +启用重写模块并重启Apache: + + # a2enmod rewrite + # service apache2 restart + +打开你的ownCloud实例。注意一下,即使你尝试使用HTTP,你也会自动被重定向到HTTPS。 + +注意,即使你已经按照上述步骤做了,在你启动ownCloud你仍将看到一条错误消息,指出该证书尚未被受信的机构颁发(那是因为我们创建了一个自签名证书)。您可以放心地忽略此消息,但如果你考虑在生产服务器上部署ownCloud,你可以从一个值得信赖的公司购买证书。 + +### 创建一个账号 ### + +现在我们准备创建一个ownCloud管理员帐号了。 + 
+
+![](https://farm6.staticflickr.com/5587/15048366536_430b4fd64e.jpg)
+
+欢迎来到你的个人云!注意你可以安装一个桌面或者移动端app来同步你的文件、日历、通讯录或者更多了。
+
+![](https://farm4.staticflickr.com/3862/15071372425_c391d912f5_z.jpg)
+
+在右上角,点击你的用户名,会显示一个下拉菜单:
+
+![](https://farm4.staticflickr.com/3897/15071372355_3de08d2847.jpg)
+
+点击Personal来改变你的设置,比如密码,显示名,email地址、头像还有更多。
+
+### ownCloud 使用案例:访问日历 ###
+
+让我开始添加一个事件到日历中并稍后下载。
+
+点击左上角的下拉菜单并选择日历。
+
+![](https://farm4.staticflickr.com/3891/15048366346_7dcc388244.jpg)
+
+添加一个事件并保存到你的日历中。
+
+![](https://farm4.staticflickr.com/3882/14884818197_f55154fd91_z.jpg)
+
+通过 'Event and Tasks' -> 'Import...' -> 'Select file' 下载你的日历并添加到你的Thunderbird日历中:
+
+![](https://farm4.staticflickr.com/3840/14884818217_16a53400f0_z.jpg)
+
+![](https://farm4.staticflickr.com/3871/15048366356_a7f98ca63d_z.jpg)
+
+提示:你还需要设置你的时区以便在其他程序中成功地导入你的日历(默认情况下,日历程序将使用UTC+00:00时区)。要更改时区,在左下角点击小齿轮图标,接着日历设置菜单就会出现,你就可以选择时区了:
+
+![](https://farm4.staticflickr.com/3858/14884669029_4e0cd3e366.jpg)
+
+### ownCloud 使用案例:上传一个文件 ###
+
+接下来,我们会从本机上传一个文件。
+
+进入文件菜单(左上角)并点击向上箭头来打开一个选择文件对话框。
+
+![](https://farm4.staticflickr.com/3851/14884818067_4a4cc73b40.jpg)
+
+选择一个文件并点击打开。
+
+![](https://farm6.staticflickr.com/5591/14884669039_5a9dd00ca9_z.jpg)
+
+接下来你就可以打开/编辑选中的文件,把它移到另外一个文件夹或者删除它了。
+
+![](https://farm4.staticflickr.com/3909/14884771088_d0b8a20ae2_o.png)
+
+### 总结 ###
+
+ownCloud是一个灵活而强大的云存储服务,可以从其他供应商快速、简便、无痛地过渡。此外,它是开源软件,你只需要花很少的时间和精力对其进行配置,就能满足你的所有需求。欲了解更多信息,可以随时参考[用户][3]、[管理][4]或[开发][5]手册。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/2014/08/install-configure-owncloud-debian.html
+
+作者:[Gabriel Cánepa][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://www.gabrielcanepa.com.ar/
+[1]:http://owncloud.org/
+[2]:http://httpd.apache.org/docs/2.2/rewrite/flags.html
+[3]:http://doc.owncloud.org/server/7.0/ownCloudUserManual.pdf +[4]:http://doc.owncloud.org/server/7.0/ownCloudAdminManual.pdf +[5]:http://doc.owncloud.org/server/7.0/ownCloudDeveloperManual.pdf \ No newline at end of file diff --git a/translated/tech/20140910 How to create a cloud-based encrypted file system on Linux.md b/translated/tech/20140910 How to create a cloud-based encrypted file system on Linux.md new file mode 100644 index 0000000000..0cc76306ca --- /dev/null +++ b/translated/tech/20140910 How to create a cloud-based encrypted file system on Linux.md @@ -0,0 +1,156 @@ +如何在 Linux 系统中创建一个云端的加密文件系统 +================================================================================ +[Amazon S3][1] 和 [Google Cloud Storage][2] 之类的商业云存储服务以能承受的价格提供了高可用性、可扩展、无限容量的对象存储服务。为了加速这些云产品的广泛采用,这些提供商为他们的产品基于明确的 API 和 SDK 培养了一个良好的开发者生态系统。而基于云的文件系统便是这些活跃的开发者社区中的典型产品,已经有了好几个开源的实现。 + +[S3QL][3] 便是最流行的开源云端文件系统之一。它是一个基于 FUSE 的文件系统,提供了好几个商业或开源的云存储后端,比如 Amazon S3、Google Cloud Storage、Rackspace CloudFiles,还有 OpenStack。作为一个功能完整的文件系统,S3QL 拥有不少强大的功能:最大 2T 的文件大小、压缩、UNIX 属性、加密、基于写入时复制的快照、不可变树、重复数据删除,以及软、硬链接支持等等。写入 S3QL 文件系统任何数据都将首先被本地压缩、加密,之后才会传输到云后端。当你试图从 S3QL 文件系统中取出内容的时候,如果它们不在本地缓存中,相应的对象会从云端下载回来,然后再即时地解密、解压缩。 + +需要明确的是,S3QL 的确也有它的限制。比如,你不能把同一个 S3FS 文件系统在几个不同的电脑上同时挂载,只能有一台电脑同时访问它。另外,ACL(访问控制列表)也并没有被支持。 + +在这篇教程中,我将会描述“如何基于 Amazon S3 用 S3QL 配置一个加密文件系统”。作为一个使用范例,我还会说明如何在挂载的 S3QL 文件系统上运行 rsync 备份工具。 + +### 准备工作 ### + +本教程首先需要你创建一个 [Amazon AWS 帐号][4](注册是免费的,但是需要一张有效的信用卡)。 + +然后 [创建一个 AWS access key][4](access key ID 和 secret access key),S3QL 使用这些信息来访问你的 AWS 帐号。 + +之后通过 AWS 管理面板访问 AWS S3,并为 S3QL 创建一个新的空 bucket。 + +![](https://farm4.staticflickr.com/3841/15170673701_7d0660e11f_c.jpg) + +为最佳性能考虑,请选择一个地理上距离你最近的区域。 + +![](https://farm4.staticflickr.com/3902/15150663516_4928d757fc_b.jpg) + +### 在 Linux 上安装 S3QL ### + +在大多数 Linux 发行版中都有预先编译好的 S3QL 软件包。 + +#### 对于 Debian、Ubuntu 或 Linux Mint:#### + + $ sudo apt-get install s3ql + +#### 对于 Fedora:#### + + $ sudo yum install s3ql + +对于 Arch Linux,使用 
[AUR][6]。 + +### 首次配置 S3QL ### + +在 ~/.s3ql 目录中创建 autoinfo2 文件,它是 S3QL 的一个默认的配置文件。这个文件里的信息包括必须的 AWS access key,S3 bucket 名,以及加密口令。这个加密口令将被用来加密一个随机生成的主密钥,而主密钥将被用来实际地加密 S3QL 文件系统数据。 + + $ mkdir ~/.s3ql + $ vi ~/.s3ql/authinfo2 + +---------- + + [s3] + storage-url: s3://[bucket-name] + backend-login: [your-access-key-id] + backend-password: [your-secret-access-key] + fs-passphrase: [your-encryption-passphrase] + +指定的 AWS S3 bucket 需要预先通过 AWS 管理面板来创建。 + +为了安全起见,让 authinfo2 文件仅对你可访问。 + + $ chmod 600 ~/.s3ql/authinfo2 + +### 创建 S3QL 文件系统 ### + +现在你已经准备好可以在 AWS S3 上创建一个 S3QL 文件系统了。 + +使用 mkfs.s3ql 工具来创建一个新的 S3QL 文件系统。这个命令中的 bucket 名应该与 authinfo2 文件中所指定的相符。使用“--ssl”参数将强制使用 SSL 连接到后端存储服务器。默认情况下,mkfs.s3ql 命令会在 S3QL 文件系统中启用压缩和加密。 + + $ mkfs.s3ql s3://[bucket-name] --ssl + +你会被要求输入一个加密口令。请输入你在 ~/.s3ql/autoinfo2 中通过“fs-passphrase”指定的那个口令。 + +如果一个新文件系统被成功创建,你将会看到这样的输出: + +![](https://farm6.staticflickr.com/5582/14988587230_e182ca3abd_z.jpg) + +### 挂载 S3QL 文件系统 ### + +当你创建了一个 S3QL 文件系统之后,下一步便是要挂载它。 + +首先创建一个本地的挂载点,然后使用 mount.s3ql 命令来挂载 S3QL 文件系统。 + + $ mkdir ~/mnt_s3ql + $ mount.s3ql s3://[bucket-name] ~/mnt_s3ql + +挂载一个 S3QL 文件系统不需要特权用户,只要确定你对该挂载点有写权限即可。 + +视情况,你可以使用“--compress”参数来指定一个压缩算法(如 lzma、bzip2、zlib)。在不指定的情况下,lzma 将被默认使用。注意如果你指定了一个自定义的压缩算法,它将只会应用到新创建的数据对象上,并不会影响已经存在的数据对象。 + + $ mount.s3ql --compress bzip2 s3://[bucket-name] ~/mnt_s3ql + +因为性能原因,S3QL 文件系统维护了一份本地文件缓存,里面包括了最近访问的(部分或全部的)文件。你可以通过“--cachesize”和“--max-cache-entries”选项来自定义文件缓存的大小。 + +如果想要除你以外的用户访问一个已挂载的 S3QL 文件系统,请使用“--allow-other”选项。 + +如果你想通过 NFS 导出已挂载的 S3QL 文件系统到其他机器,请使用“--nfs”选项。 + +运行 mount.s3ql 之后,检查 S3QL 文件系统是否被成功挂载了: + + $ df ~/mnt_s3ql + $ mount | grep s3ql + +![](https://farm4.staticflickr.com/3863/15174861482_27a842da3e_z.jpg) + +### 卸载 S3QL 文件系统 ### + +想要安全地卸载一个(可能含有未提交数据的)S3QL 文件系统,请使用 umount.s3ql 命令。它将会等待所有数据(包括本地文件系统缓存中的部分)成功传输到后端服务器。取决于等待写的数据的多少,这个过程可能需要一些时间。 + + $ umount.s3ql ~/mnt_s3ql + +### 查看 S3QL 文件系统统计信息及修复 S3QL 文件系统 ### + +若要查看 S3QL 文件系统统计信息,你可以使用 s3qlstat 
命令,它将会显示诸如总的数据、元数据大小、重复文件删除率和压缩率等信息。 + + $ s3qlstat ~/mnt_s3ql + +![](https://farm6.staticflickr.com/5559/15184926905_4815e5827a_z.jpg) + +你可以使用 fsck.s3ql 命令来检查和修复 S3QL 文件系统。与 fsck 命令类似,待检查的文件系统必须首先被卸载。 + + $ fsck.s3ql s3://[bucket-name] + +### S3QL 使用案例:Rsync 备份 ### + +让我用一个流行的使用案例来结束这篇教程:本地文件系统备份。为此,我推荐使用 rsync 增量备份工具,特别是因为 S3QL 提供了一个 rsync 的封装脚本(/usr/lib/s3ql/pcp.py)。这个脚本允许你使用多个 rsync 进程递归地复制目录树到 S3QL 目标。 + + $ /usr/lib/s3ql/pcp.py -h + +![](https://farm4.staticflickr.com/3873/14998096829_d3a64749d0_z.jpg) + +下面这个命令将会使用 4 个并发的 rsync 连接来备份 ~/Documents 里的所有内容到一个 S3QL 文件系统。 + + $ /usr/lib/s3ql/pcp.py -a --quiet --processes=4 ~/Documents ~/mnt_s3ql + +这些文件将首先被复制到本地文件缓存中,然后在后台再逐步地同步到后端服务器。 + +若想了解与 S3QL 有关的更多信息,如自动挂载、快照、不可变树,我强烈推荐阅读 [官方用户指南][7]。欢迎告诉我你对 S3QL 怎么看,以及你对任何其他工具的使用经验。 + + + + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/2014/09/create-cloud-based-encrypted-file-system-linux.html + +作者:[Dan Nanni][a] +译者:[felixonmars](https://github.com/felixonmars) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://aws.amazon.com/s3 +[2]:http://code.google.com/apis/storage/ +[3]:https://bitbucket.org/nikratio/s3ql/ +[4]:http://aws.amazon.com/ +[5]:http://ask.xmodulo.com/create-amazon-aws-access-key.html +[6]:https://aur.archlinux.org/packages/s3ql/ +[7]:http://www.rath.org/s3ql-docs/ diff --git a/translated/tech/20140910 How to download GOG games from the command line on Linux.md b/translated/tech/20140910 How to download GOG games from the command line on Linux.md new file mode 100644 index 0000000000..8ed3e8e792 --- /dev/null +++ b/translated/tech/20140910 How to download GOG games from the command line on Linux.md @@ -0,0 +1,75 @@ +如何在Linux命令行中下载GOG游戏 +================================================================================ 
+如果你是一个玩家同时也是一个Linux用户,你可能很高兴[GOG][1]在几个月前宣布它会在你最喜欢的操作系统上推出游戏。如果你之前从来没有听说过GOG,我鼓励你看看他们产品目录中那些“很棒的老游戏”:价格合理,无DRM限制,而且充满了很棒的东西。然而,GOG的Windows版下载器已经存在了很长时间,正式的Linux版本却是无处可见。因此,如果你不想等待官方的正式版本,可以试试一个名为LGOGDownloader的非官方开源项目,它能让你在命令行中访问你的游戏库。
+
+![](https://farm4.staticflickr.com/3843/15121593356_b13309c70f_z.jpg)
+
+### 在Linux中安装 LGOGDownloader ###
+
+对于Ubuntu用户来说,[官方页面][2]建议您下载源代码并执行:
+
+    $ sudo apt-get install build-essential libcurl4-openssl-dev liboauth-dev libjsoncpp-dev libhtmlcxx-dev libboost-system-dev libboost-filesystem-dev libboost-regex-dev libboost-program-options-dev libboost-date-time-dev libtinyxml-dev librhash-dev help2man
+    $ tar -xvzf lgogdownloader-2.17.tar.gz
+    $ cd lgogdownloader-2.17
+    $ make release
+    $ sudo make install
+
+如果你是Arch Linux用户,有一个[AUR 包][3]等着你。
+
+### LGOGDownloader 的使用 ###
+
+一旦安装了该程序,你需要用下面的命令登录:
+
+    $ lgogdownloader --login
+
+![](https://farm6.staticflickr.com/5593/15121593346_9c5d02d5ce_z.jpg)
+
+如果你需要配置文件,那它在这里:~/.config/lgogdownloader/config.cfg
+
+验证通过后,你可以列出你库中所有的游戏:
+
+    $ lgogdownloader --list
+
+![](https://farm6.staticflickr.com/5581/14958040387_8321bb71cf.jpg)
+
+用下面的命令下载游戏:
+
+    $ lgogdownloader --download --game [game name]
+
+![](https://farm6.staticflickr.com/5585/14958040367_b1c584a2d1_z.jpg)
+
+你可能会注意到lgogdownloader允许你恢复之前中断的下载,当要下载的游戏容量不小时,这一点很有用。
+
+像每一个正规的命令行实用程序一样,你可以添加各种选项:
+
+- **--platform [number]** 选择你的操作系统,1是Windows,4是Linux。
+- **--directory [destination]** 下载安装包到指定的目录。
+- **--language [number]** 下载特定的语言包(根据你的语言查阅手册上对应的数字)。
+- **--limit-rate [speed]** 限制下载速度。
+
+作为一个额外的福利,lgogdownloader同样可以检查GOG网站上的更新:
+
+    $ lgogdownloader --update-check
+
+![](https://farm4.staticflickr.com/3882/14958035568_7889acaef0.jpg)
+
+检查结果会列出论坛上收到的私人邮件的数量,以及有更新的游戏数量。
+
+最后,lgogdownloader是一个非常标准的命令行实用工具。我甚至可以说,它是清晰和连贯的一个典范。我们距离Steam的Linux客户端的确还差得很远,但在另一方面,官方的GOG Windows客户端能做的也不比这个非官方的Linux版本更多。换句话说,lgogdownloader是一个完美的替代品。我等不及想看到GOG上出现更多兼容Linux的游戏了,尤其是他们最近宣布会提供无DRM的电影,并且初期会以游戏题材的纪录片为主。希望电影库上线时,客户端也能随之更新。
+
+你觉得GOG怎么样?你会使用这个非官方的Linux客户端么?请在评论中告诉我们你的想法。
+
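作为补充,下面是一个假设性的 shell 封装脚本草图,把上文提到的 --download、--game、--platform 和 --directory 选项组合起来批量处理多个游戏。其中的函数名 download_gog_games 与示例游戏名都是为演示虚构的;默认只打印将要执行的命令,设置 RUN=1 才会真正调用 lgogdownloader:

```shell
#!/bin/sh
# download_gog_games:为每个给定的游戏名构造一条 lgogdownloader 命令。
# --platform 4 表示 Linux 安装包,$dest 是下载目录。
# 默认只把命令打印出来(便于先检查);设置 RUN=1 才真正执行。
download_gog_games() {
    dest=$1; shift
    for game in "$@"; do
        cmd="lgogdownloader --download --game $game --platform 4 --directory $dest"
        if [ "${RUN:-0}" = "1" ]; then
            $cmd
        else
            echo "$cmd"
        fi
    done
}

# 演示:打印(而不执行)两个游戏对应的下载命令
download_gog_games ~/GOG beneath_a_steel_sky broken_sword
```

实际的游戏名可以先用 lgogdownloader --list 查到,再填进来。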
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/2014/09/download-gog-games-command-line-linux.html
+
+作者:[Adrien Brochard][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/adrien
+[1]:http://www.gog.com/
+[2]:https://sites.google.com/site/gogdownloader/home
+[3]:https://aur.archlinux.org/packages/lgogdownloader/
\ No newline at end of file
diff --git a/translated/tech/20140910 How to monitor server memory usage with Nagios Remote Plugin Executor (NRPE).md b/translated/tech/20140910 How to monitor server memory usage with Nagios Remote Plugin Executor (NRPE).md
new file mode 100644
index 0000000000..d0ace6a56b
--- /dev/null
+++ b/translated/tech/20140910 How to monitor server memory usage with Nagios Remote Plugin Executor (NRPE).md
@@ -0,0 +1,155 @@
+如何用Nagios远程执行插件(NRPE)来检测服务器内存使用率
+================================================================================
+在[先前的教程中][1](注:此篇文章在同一个更新中,如果也翻译了,发布的时候可修改相应的链接),我们已经了解了如何在 Nagios 中设置 Nagios 远程执行插件(NRPE)。然而,原生的 Nagios 并没有自带监控内存使用率的脚本和插件。本篇中,我们会看到如何配置NRPE来监控远程服务器上的内存使用率。
+
+我们要用的监控内存的脚本在[Nagios 市场][2]上,也在创建者的[Github仓库][3]中。
+
+假设我们已经安装了NRPE,我们首先在我们想要监控的服务器上下载脚本。
+
+### 准备远程服务器 ###
+
+#### 在 Debian/Ubuntu 中: ####
+
+    # cd /usr/lib/nagios/plugins/
+    # wget https://raw.githubusercontent.com/justintime/nagios-plugins/master/check_mem/check_mem.pl
+    # mv check_mem.pl check_mem
+    # chmod +x check_mem
+
+#### 在 RHEL/CentOS 中: ####
+
+    # cd /usr/lib64/nagios/plugins/ (32 位系统使用 /usr/lib/nagios/plugins/)
+    # wget https://raw.githubusercontent.com/justintime/nagios-plugins/master/check_mem/check_mem.pl
+    # mv check_mem.pl check_mem
+    # chmod +x check_mem
+
+你可以通过手工在本地运行下面的命令来检查脚本的输出是否正常。当使用NRPE时,这条命令应该会检测空闲的内存,当可用内存小于20%时会发出警告,并且在可用内存小于10%时会生成一个严重警告。
+
+    # ./check_mem -f -w 20 -c 10
+
+----------
+
+    OK - 34.0%
(2735744 kB) free.|TOTAL=8035340KB;;;; USED=5299596KB;6428272;7231806;; FREE=2735744KB;;;; CACHES=2703504KB;;;; + +如果你看到像上面那样的输出,那就意味这命令正常工作着。 + +现在脚本已经准备好了,我们要定义NRPE检查内存使用率的命令了。如上所述,命令会检查可用内存,在可用率小于20%时发出警报,小于10%时发出严重警告。 + + # vim /etc/nagios/nrpe.cfg + +#### 对于 Debian/Ubuntu: #### + + command[check_mem]=/usr/lib/nagios/plugins/check_mem -f -w 20 -c 10 + +#### 对于 RHEL/CentOS 32 bit: #### + + command[check_mem]=/usr/lib/nagios/plugins/check_mem -f -w 20 -c 10 + +#### 对于 RHEL/CentOS 64 bit: #### + + command[check_mem]=/usr/lib64/nagios/plugins/check_mem -f -w 20 -c 10 + +### 准备 Nagios 服务器 ### + +在Nagios服务器中,我们为NRPE定义了一条自定义命令。该命令可存储在Nagios内的任何目录中。为了让本教程简单,我们会将命令定义放在/etc/nagios目录中。 + +#### 对于 Debian/Ubuntu: #### + + # vim /etc/nagios3/conf.d/nrpe_command.cfg + +---------- + + define command{ + command_name check_nrpe + command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$' + } + +#### 对于 RHEL/CentOS 32 bit: #### + + # vim /etc/nagios/objects/nrpe_command.cfg + +---------- + + define command{ + command_name check_nrpe + command_line /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ + } + +#### 对于 RHEL/CentOS 64 bit: #### + + # vim /etc/nagios/objects/nrpe_command.cfg + +---------- + + define command{ + command_name check_nrpe + command_line /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ + } + +现在我们定义Nagios的服务检查 + +#### 在 Debian/Ubuntu 上: #### + + # vim /etc/nagios3/conf.d/nrpe_service_check.cfg + +---------- + + define service{ + use local-service + host_name remote-server + service_description Check RAM + check_command check_nrpe!check_mem + } + +#### 在 RHEL/CentOS 上: #### + + # vim /etc/nagios/objects/nrpe_service_check.cfg + +---------- + + define service{ + use local-service + host_name remote-server + service_description Check RAM + check_command check_nrpe!check_mem + } + +最后我们重启Nagios服务 + +#### 在 Debian/Ubuntu 上: #### + + # service nagios3 restart + +#### 在 RHEL/CentOS 6 上: #### + + # service nagios restart + 
+#### 在 RHEL/CentOS 7 上: ####
+
+    # systemctl restart nagios.service
+
+### 故障排除 ###
+
+Nagios应该开始在使用NRPE的远程服务器上检查内存使用率了。如果你遇到任何问题,可以检查下面这些情况。
+
+
+- 确保远程主机上放行了NRPE的端口。默认的NRPE端口是TCP 5666。
+- 你可以尝试执行check_nrpe 命令来手工检查NRPE是否正常工作:/usr/lib/nagios/plugins/check_nrpe -H remote-server
+- 你同样可以尝试运行check_mem 命令:/usr/lib/nagios/plugins/check_nrpe -H remote-server -c check_mem
+- 在远程服务器上,在/etc/nagios/nrpe.cfg中设置debug=1。重启NRPE服务并检查这些日志文件:/var/log/messages(RHEL/CentOS)或者/var/log/syslog(Debian/Ubuntu)。如果有任何的配置或者权限错误,日志中应该包含了相关的信息。如果日志中没有反映出什么,很有可能是请求在某些端口上被过滤,没有到达远程服务器。
+
+总结一下,这篇教程描述了我们该如何设置NRPE来监控远程服务器的内存使用率。整个过程只需要下载脚本、定义命令和重启服务就行了。希望这对你们有帮助。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/2014/09/monitor-server-memory-usage-nagios-remote-plugin-executor.html
+
+作者:[Sarmed Rahman][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/sarmed
+[1]:http://xmodulo.com/2014/03/nagios-remote-plugin-executor-nrpe-linux.html
+[2]:http://exchange.nagios.org/directory/Plugins/Operating-Systems/Solaris/check_mem-2Epl/details
+[3]:https://github.com/justintime/nagios-plugins/blob/master/check_mem/check_mem.pl
\ No newline at end of file
diff --git a/translated/tech/20140910 How to set up Nagios Remote Plugin Executor (NRPE) in Linux.md b/translated/tech/20140910 How to set up Nagios Remote Plugin Executor (NRPE) in Linux.md
new file mode 100644
index 0000000000..611d69ad8c
--- /dev/null
+++ b/translated/tech/20140910 How to set up Nagios Remote Plugin Executor (NRPE) in Linux.md
@@ -0,0 +1,236 @@
+如何在 Linux 环境下配置 Nagios Remote Plugin Executor (NRPE)
+================================================================================
+就网络管理而言,Nagios 是最强大的工具之一。Nagios 可以监控远程主机的可访问性,以及其中正在运行的服务的状态。不过,如果我们想要监控远程主机中网络服务以外的东西呢?比方说,我们可能想要监控远程主机上的磁盘利用率或者 [CPU 处理器负载][1]。Nagios
Remote Plugin Executor(NRPE)便是一个可以帮助你完成这些操作的工具。NRPE 允许你执行在远程主机上安装的 Nagios 插件,并且将它们集成到一个[已经存在的 Nagios 服务器][2]里。 + +本教程将会介绍如何在一个已经部署好的 Nagios 中配置 NRPE。本教程主要分为两部分: + +- 配置远程主机。 +- 配置 Nagios 监控服务器。 + +之后我们会以定义一些可以被 NRPE 使用的自定义命令来结束本教程。 + +### 为 NRPE 配置远程主机 ### + +#### 第一步:安装 NRPE 服务 #### + +你需要在你想要使用 NRPE 监控的每一台远程主机上安装 NRPE 服务。每一台远程主机上的 NRPE 服务守护进程将会与一台 Nagios 监控服务器进行通信。 + +取决于所在的平台, NRPE 服务所需要的软件包可以很容易地用 apt-get 或者 yum 来安装。对于 CentOS 来说,由于 NRPE 并不在 CentOS 的仓库中,我们需要[添加 Repoforge 仓库][3]。 + +**对于 Debian、Ubuntu 或者 Linux Mint:** + + # apt-get install nagios-nrpe-server + +**对于 CentOS、Fedora 或者 RHEL:** + + # yum install nagios-nrpe + +#### 第二步:准备配置文件 #### + +配置文件 /etc/nagios/nrpe.cfg 在基于 Debian 或者 RedHat 的系统中比较相近。让我们备份并修改配置文件: + + # vim /etc/nagios/nrpe.cfg + +---------- + + ## NRPE 服务端口是可以自定义的 ## + server_port=5666 + + ## 允许 Nagios 监控服务器访问 ## + ## 注意:逗号后面没有空格 ## + allowed_hosts=127.0.0.1,X.X.X.X-IP_v4_of_Nagios_server + + ## 下面的例子中我们硬编码了参数。 + ## 这些参数可以按需修改。 + + ## 注意:对于 CentOS 64 位用户,请使用 /usr/lib64 替代 /usr/lib ## + + command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 + command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 + command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 + command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z + command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 + +现在配置文件已经准备好了,NRPE 服务已经可以启动了。 + +#### 第三步:初始化 NRPE 服务 #### + +对于基于 RedHat 的系统,NRPE 服务需要被添加为启动服务。 + +**对于 Debian、Ubuntu、Linux Mint:** + + # service nagios-nrpe-server restart + +**对于 CentOS、Fedora 或者 RHEL:** + + # service nrpe restart + # chkconfig nrpe on + +#### 第四步:验证 NRPE 服务状态 #### + +NRPE 守护进程的状态信息可以在系统日志中找到。对于基于 Debian 的系统,日志文件在 /var/log/syslog,而基于 RedHat 的系统的日志文件则是 /var/log/messages。下面提供一段样例日志以供参考: + + nrpe[19723]: Starting up daemon + nrpe[19723]: Listening for connections on port 5666 + nrpe[19723]: Allowing connections from: 127.0.0.1,X.X.X.X + +如果使用了防火墙,被 NRPE 
守护进程使用的 TCP 端口 5666 应该被开启。 + + # netstat -tpln | grep 5666 + +---------- + + tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN 19885/nrpe + +### 为 NRPE 配置 Nagios 监控服务器 ### + +为 NRPE 配置已有的 Nagios 监控服务器的第一步是在服务器上安装 NRPE 插件。 + +#### 第一步:安装 NRPE 插件 #### + +当 Nagios 服务器运行在基于 Debian 的系统(Debian、Ubuntu 或者 Linux Mint)上时,需要的软件宝可以通过 apt-get 安装。 + + # apt-get install nagios-nrpe-plugin + +插件安装完成后,对随插件安装的 check_nrpe 命令稍作修改。 + + # vim /etc/nagios-plugins/config/check_nrpe.cfg + +---------- + + ## 默认命令会被覆盖 ## + define command{ + command_name check_nrpe + command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$' + } + +如果 Nagios 服务器运行在基于 RedHat 的系统(CentOS、Fedora 或者 RHEL)上,你可以通过 yum 安装 NRPE 插件。对于 CentOS,[添加 Repoforge 仓库][4] 是必要的。 + + # yum install nagios-plugins-nrpe + +现在 NRPE 插件已经安装完成,继续下面的步骤以配置一台 Nagios 服务器。 + +#### 第二步:为 NRPE 插件定义 Nagios 命令 #### + +我们需要首先在 Nagios 中定义一个命令来使用 NRPE。 + + # vim /etc/nagios/objects/commands.cfg + +---------- + + ## 注意:对于 CentOS 64 位用户,请使用 /usr/lib64 替代 /usr/lib ## + define command{ + command_name check_nrpe + command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$' + } + +#### 第三步:添加主机与命令定义 #### + +接下来定义远程主机以及我们将要在它们上面运行的命令。 + +下面的例子为一台远程主机定义了一个可以在上面执行的命令。一般来说,你的配置需要按照你的需求来改变。配置文件的路径在基于 Debian 和基于 RedHat 的系统上略有不同,不过文件的内容是完全一样的。 + +**对于 Debian、Ubuntu 或者 Linux Mint:** + + # vim /etc/nagios3/conf.d/nrpe.cfg + +**对于 CentOS、Fedora 或者 RHEL:** + + # vim /etc/nagios/objects/nrpe.cfg + +---------- + + define host{ + use linux-server + host_name server-1 + alias server-1 + address X.X.X.X-IPv4_address_of_remote_host + } + + define service { + host_name server-1 + service_description Check Load + check_command check_nrpe!check_load + check_interval 1 + use generic-service + } + +#### 第四步:重启 Nagios 服务 #### + +在重启 Nagios 之前,可以通过测试来验证配置。 + +**对于 Ubuntu、Debian 或者 Linux Mint:** + + # nagios3 -v /etc/nagios3/nagios.cfg + +**对于 CentOS、Fedora 或者 RHEL:** + + # nagios -v /etc/nagios/nagios.cfg + +如果一切正常,我们就可以重启 Nagios 服务了。 + + # service 
nagios restart + +![](https://farm8.staticflickr.com/7024/13330387845_0bde8b6db5_z.jpg) + +### 为 NRPE 配置自定义命令 ### + +#### 远程服务器上的配置 #### + +下面列出了一些可以用于 NRPE 的自定义命令。这些命令在远程服务器的 /etc/nagios/nrpe.cfg 文件中定义。 + + ## 当 1、5、15 分钟的平均负载分别超过 1、2、1 时进入警告状态 + ## 当 1、5、15 分钟的平均负载分别超过 3、5、3 时进入严重警告状态 + command[check_load]=/usr/lib/nagios/plugins/check_load -w 1,2,1 -c 3,5,3 + + ## 对于 /home 目录的可用空间设置了警告级别为 25%,以及严重警告级别为 10%。 + ## 可以定制为监控任何分区(比如 /dev/sdb1、/、/var、/home) + command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 25% -c 10% -p /home + + ## 当 process_ABC 的实例数量超过 10 时警告,超过 20 时严重警告 ## + command[check_process_ABC]=/usr/lib/nagios/plugins/check_procs -w 1:10 -c 1:20 -C process_ABC + + ## 当 process_ABC 的实例数量跌到 1 以下时严重警告 ## + command[check_process_XYZ]=/usr/lib/nagios/plugins/check_procs -w 1: -c 1: -C process_XYZ + +#### Nagios 监控服务器上的配置 #### + +我们通过修改 Nagios 监控服务器里的服务定义来应用上面定义的自定义命令。服务定义可以写在所有服务被定义的地方(比如 /etc/nagios/objects/nrpe.cfg 或 /etc/nagios3/conf.d/nrpe.cfg) + + ## 示例 1:检查进程 XYZ ## + define service { + host_name server-1 + service_description Check Process XYZ + check_command check_nrpe!check_process_XYZ + check_interval 1 + use generic-service + } + + ## 示例 2:检查磁盘状态 ## + define service { + host_name server-1 + service_description Check Process XYZ + check_command check_nrpe!check_disk + check_interval 1 + use generic-service + } + +总而言之,NRPE 是 Nagios 的一个强大的扩展,它提供了高度可定制的远程服务器监控方案。使用 NRPE,我们可以监控系统的负载、运行的进程、已登录的用户、磁盘状态,以及其它的指标。 + +希望这些可以帮到你。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/2014/03/nagios-remote-plugin-executor-nrpe-linux.html + +作者:[Sarmed Rahman][a] +译者:[felixonmars](https://github.com/felixonmars) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/sarmed +[1]:http://xmodulo.com/2012/08/how-to-measure-average-cpu-utilization.html 
+[2]:http://xmodulo.com/2013/12/install-configure-nagios-linux.html +[3]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html +[4]:http://xmodulo.com/2013/01/how-to-set-up-rpmforge-repoforge-repository-on-centos.html diff --git a/sources/tech/20140924 Unix----stat -- more than ls.md b/translated/tech/20140924 Unix----stat -- more than ls.md similarity index 58% rename from sources/tech/20140924 Unix----stat -- more than ls.md rename to translated/tech/20140924 Unix----stat -- more than ls.md index f2c5abf45b..d877492881 100644 --- a/sources/tech/20140924 Unix----stat -- more than ls.md +++ b/translated/tech/20140924 Unix----stat -- more than ls.md @@ -1,21 +1,19 @@ -wangjiezhe translating - -Unix: stat -- more than ls +Unix: stat -- 获取比 ls 更多的信息 ================================================================================ -> Tired of ls and want to see more interesting information on your files? Try stat! +> 厌倦了 ls 命令, 并且想查看更多有关你的文件的有趣的信息? 试一试 stat! ![](http://www.itworld.com/sites/default/files/imagecache/large_thumb_150x113/stats.jpg) -The ls command is probably one of the first commands that anyone using Unix learns, but it only shows a small portion of the information that is available with the stat command. +ls 命令可能是每一个 Unix 使用者第一个学习的命令之一, 但它仅仅显示了 stat 命令能给出的信息的一小部分. -The stat command pulls information from the file's inode. As you might be aware, there are actually three sets of dates and times that are stored for every file on your system. These include the date the file was last modified (i.e., the date and time that you see when you use the ls -l command), the time the file was last changed (which includes renaming the file), and the time that file was last accessed. +stat 命令从文件的索引节点获取信息. 正如你可能已经了解的那样, 每一个系统里的文件都存有三组日期和时间, 它们包括最近修改时间(即使用 ls -l 命令时显示的日期和时间), 最近状态改变时间(包括重命名文件)和最近访问时间. 
-View a long listing for a file and you will see something like this: +使用长列表模式查看文件信息, 你会看到类似下面的内容: $ ls -l trythis -rwx------ 1 shs unixdweebs 109 Nov 11 2013 trythis -Use the stat command and you see all this: +使用 stat 命令, 你会看到下面这些: $ stat trythis File: `trythis' @@ -26,11 +24,11 @@ Use the stat command and you see all this: Modify: 2013-11-11 08:40:10.000000000 -0500 Change: 2013-11-11 08:40:10.000000000 -0500 -The file's change and modify dates/times are the same in this case, while the access time is fairly recent. We can also see that the file is using 8 blocks and we see the permissions in each of the two formats -- the octal (0700) format and the rwx format. The inode number, shown in the third line of the output, is 12731681. There are no additional hard links (Links: 1). And the file is a regular file. +在上面的情形中, 文件的状态改变和文件修改的日期/时间是相同的, 而访问时间则是相当近的时间. 我们还可以看到文件使用了 8 个块, 以及两种格式显示的文件权限 -- 八进制(0700)格式和 rwx 格式. 在第三行显示的索引节点是 12731681. 文件没有其它的硬链接(Links: 1). 而且, 这个文件是一个常规文件. -Rename the file and you will see that the change time will be updated. +重命名文件, 你会看到状态改变时间发生变化. -This, the ctime information, was originally intended to hold the creation date and time for the file, but the field was turned into the change time field somewhere a while back. +这里的 ctime 信息, 最早设计用来存储文件的创建日期和时间, 但之前的某个时间变为用来存储状态修改时间. $ mv trythis trythat $ stat trythat @@ -42,9 +40,9 @@ This, the ctime information, was originally intended to hold the creation date a Modify: 2013-11-11 08:40:10.000000000 -0500 Change: 2014-09-21 12:46:22.000000000 -0400 -Changing the file's permissions would also register in the ctime field. +改变文件的权限也会改变 ctime 域. 
-You can also use wilcards with the stat command and list your files' stats in a group:
+你也可以配合通配符来使用 stat 命令, 以列出一组文件的状态:
 
     $ stat myfile*
     File: `myfile'
@@ -69,18 +67,18 @@ You can also use wilcards with the stat command and list your files' stats in a
     Modify: 2014-08-22 12:03:59.000000000 -0400
     Change: 2014-08-22 12:03:59.000000000 -0400
 
-We can get some of this information with other commands if we like.
+如果我们愿意的话, 也可以通过其他命令来获取其中的一些信息.
 
-Add the "u" option to a long listing and you'll see something like this. Notice this shows us the last access time while adding "c" shows us the change time (in this example, the time when we renamed the file).
+向 ls -l 命令添加 "u" 选项, 你会获得下面的结果. 注意这个选项会显示最后访问时间, 而添加 "c" 选项则会显示状态改变时间(在本例中, 是我们重命名文件的时间).
 
     $ ls -lu trythat
     -rwx------ 1 shs unixdweebs 109 Sep  9 19:27 trythat
     $ ls -lc trythat
     -rwx------ 1 shs unixdweebs 109 Sep 21 12:46 trythat
 
-The stat command can also work against directories.
+stat 命令也可应用于目录.
 
-In this case, we see that there are a number of links.
+在这个例子中, 我们可以看到该目录有许多的链接.
 
     $ stat bin
     File: `bin'
@@ -91,7 +89,7 @@ In this case, we see that there are a number of links.
     Modify: 2014-09-15 17:54:41.000000000 -0400
     Change: 2014-09-15 17:54:41.000000000 -0400
 
-Here, we're looking at a file system.
+在这里, 我们查看一个文件系统.
 
     $ stat -f /dev/cciss/c0d0p2
     File: "/dev/cciss/c0d0p2"
@@ -100,16 +98,24 @@ Here, we're looking at a file system.
     Blocks: Total: 259366     Free: 259337     Available: 259337
     Inodes: Total: 223834     Free: 223531
 
-Notice the Namelen (name length) field. Good luck if you had your heart set on file names with greater than 255 characters!
+注意 Namelen (文件名长度)域. 如果你执意要使用长度超过 255 个字符的文件名, 那就只能祝你好运了!
 
-The stat command can also display some of its information a field at a time for those times when that's all you want to see, In the example below, we just want to see the file type and then the number of hard links.
+在你只想查看某一项信息时, stat 命令还可以一次只显示其中的一个字段. 下面的例子中, 我们只想查看文件类型, 然后是硬链接数.
    $ stat --format=%F trythat
    regular file
    $ stat --format=%h trythat
    1
 
-In the examples below, we look at permissions -- in each of the two available formats -- and then the file's SELinux security context.
+在下面的例子中, 我们查看了文件权限 -- 分别以两种可用的格式 -- 然后是文件的 SELinux 安全环境.
+
+译者注: 原文到这里就结束了, 但很明显缺少结尾. 最后一段的例子可以分别用
+
+    $ stat --format=%a trythat
+    $ stat --format=%A trythat
+    $ stat --format=%C trythat
+
+来实现.

--------------------------------------------------------------------------------
diff --git a/translated/tech/20140926 How to manage configurations in Linux with Puppet and Augeas.md b/translated/tech/20140926 How to manage configurations in Linux with Puppet and Augeas.md
new file mode 100644
index 0000000000..17e7e21f5d
--- /dev/null
+++ b/translated/tech/20140926 How to manage configurations in Linux with Puppet and Augeas.md
@@ -0,0 +1,152 @@
+如何用Puppet和Augeas管理Linux配置
+================================================================================
+虽然[Puppet][1](注:此文链接的原文之前已经做过,文件名:“20140808 How to install Puppet server and client on CentOS and RHEL.md”,如果翻译发布过,可修改此链接为发布地址)是一个非常独特和有用的工具,但在有些情况下你可以使用一点不同的方法。比如,要修改的配置文件已经存在于几个不同的服务器上,且它们此时互不相同。Puppet实验室的人也意识到了这一点,并集成了一个很棒的工具,称之为[Augeas][2],它是专为这种使用场景而设计的。
+
+
+Augeas可被认为填补了Puppet能力的缺陷,比如在某个特定对象的资源类型(如用来处理/etc/hosts中条目的host资源)还不可用的时候。在这个文档中,您将学习如何使用Augeas来减轻你管理配置文件的负担。
+
+### Augeas是什么? ###
+
+Augeas基本上就是一个配置编辑工具。它以配置文件原生的格式解析它们,并将它们转换成树。配置的更改可以通过操作这棵树来完成,并可以以原生配置文件格式保存回去。
+
+### 这篇教程要达成什么目的?
###
+
+我们会安装并配置Augeas用于我们之前构建的Puppet服务器。我们会使用这个工具创建并测试几个不同的配置文件,并学习如何适当地使用它来管理我们的系统配置。
+
+### 先决条件 ###
+
+我们需要一台工作的Puppet服务器和客户端。如果你还没有,请先按照我先前的教程来。
+
+Augeas安装包可以在标准CentOS/RHEL仓库中找到。不幸的是,Puppet用到的ruby封装的Augeas只在puppetlabs仓库(或者[EPEL][4])中才有。如果你系统中还没有这个仓库,请使用下面的命令:
+
+在CentOS/RHEL 6.5上:
+
+    # rpm -ivh https://yum.puppetlabs.com/el/6.5/products/x86_64/puppetlabs-release-6-10.noarch.rpm
+
+在CentOS/RHEL 7上:
+
+    # rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-10.noarch.rpm
+
+在你成功地安装了这个仓库后,在你的系统中安装ruby-augeas:
+
+    # yum install ruby-augeas
+
+或者如果你是从我的上一篇教程中继续的,使用puppet的方法安装这个包。在/etc/puppet/manifests/site.pp中修改你的custom_utils类,在packages这行中加入“ruby-augeas”。
+
+    class custom_utils {
+        package { ["nmap","telnet","vim-enhanced","traceroute","ruby-augeas"]:
+            ensure => latest,
+            allow_virtual => false,
+        }
+    }
+
+### 不带Puppet的Augeas ###
+
+如我先前所说,最初Augeas并不是来自Puppet实验室,这意味着即使没有Puppet本身我们仍然可以使用它。这样,你可以在将修改部署到Puppet环境之前,先验证你的修改和想法是否正确。要做到这一点,你需要在你的系统中安装一个额外的软件包。请执行以下命令:
+
+    # yum install augeas
+
+### Puppet Augeas 示例 ###
+
+作为演示,这里有几个Augeas使用案例。
+
+#### 管理 /etc/sudoers 文件 ####
+
+1. 给wheel组加上sudo权限。
+
+这个例子会向你展示如何在你的GNU/Linux系统中为%wheel组加上sudo权限。
+
+    # 安装sudo包
+    package { 'sudo':
+        ensure => installed, # 确保sudo包已安装
+    }
+
+    # 允许属于wheel组的用户使用sudo
+    augeas { 'sudo_wheel':
+        context => '/files/etc/sudoers', # 目标文件是 /etc/sudoers
+        changes => [
+            # 允许wheel用户使用sudo
+            'set spec[user = "%wheel"]/user %wheel',
+            'set spec[user = "%wheel"]/host_group/host ALL',
+            'set spec[user = "%wheel"]/host_group/command ALL',
+            'set spec[user = "%wheel"]/host_group/command/runas_user ALL',
+        ]
+    }
+
+现在来解释这些代码做了什么:**spec**定义了/etc/sudoers中的用户段,**[user]**定义了数组中给定的用户,而斜杠(/)之后的所有内容都是该用户定义的子部分。因此在典型的配置中这可以这么表达:
+
+    user host_group/host host_group/command host_group/command/runas_user
+
+这将被转换成/etc/sudoers下的这一行:
+
+    %wheel ALL = (ALL) ALL
+
+2.
添加命令别称
+
+下面这部分会向你展示如何定义命令别名,它可以在你的sudoer文件中使用。
+
+    # 创建新的SERVICES别名,包含了一些基本的特权命令。
+    augeas { 'sudo_cmdalias':
+        context => '/files/etc/sudoers', # 目标文件是 /etc/sudoers
+        changes => [
+            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/name SERVICES",
+            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[1] /sbin/service",
+            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[2] /sbin/chkconfig",
+            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[3] /bin/hostname",
+            "set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[4] /sbin/shutdown",
+        ]
+    }
+
+sudo命令别名的语法很简单:**Cmnd_Alias**定义了命令别名字段,**[alias/name]**绑定给定的别名,/alias/name **SERVICES** 定义了真实的别名,而 alias/command 是属于该别名的所有命令的数组。
+
+    Cmnd_Alias SERVICES = /sbin/service , /sbin/chkconfig , /bin/hostname , /sbin/shutdown
+
+关于/etc/sudoers的更多信息,请访问[官方文档][5]。
+
+#### 向一个组中加入用户 ####
+
+要使用Augeas向组中添加用户,你也许需要添加一个新用户,可以加在gid字段之后,也可以加在最后一个用户之后。我们在这个例子中使用组SVN。这可以通过下面的命令达成:
+
+在Puppet中:
+
+    augeas { 'augeas_mod_group':
+        context => '/files/etc/group', # 目标文件是 /etc/group
+        changes => [
+            "ins user after svn/*[self::gid or self::user][last()]",
+            "set svn/user[last()] john",
+        ]
+    }
+
+使用 augtool:
+
+    augtool> ins user after /files/etc/group/svn/*[self::gid or self::user][last()]
+    augtool> set /files/etc/group/svn/user[last()] john
+
+### 总结 ###
+
+目前为止,你应该对如何在Puppet项目中使用Augeas有一个大概的了解了。随意地试一下吧,你肯定还会需要查阅官方的Augeas文档。它会帮助你了解如何在你的个人项目中正确地使用Augeas,并且会向你展示你可以用它节省多少时间。
+
+如有任何问题,欢迎在下面的评论中发布,我会尽力解答并给你建议。
+
+### 有用的链接 ###
+
+- [http://www.watzmann.net/categories/augeas.html][6]:包含许多关于 Augeas 用法的教程。
+- [http://projects.puppetlabs.com/projects/1/wiki/puppet_augeas][7]:Puppet wiki,带有大量实用的例子。
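作为附录,再补充一个小技巧:在把 augeas 资源写进 Puppet 清单之前,可以先用 augeas 软件包自带的 augtool 以只读方式查看 Augeas 解析出来的配置树,确认你的路径表达式写得是否正确。下面是一个假设性的演示脚本(未安装 augtool 时只打印提示,不会报错):

```shell
#!/bin/sh
# 只读地打印 Augeas 为 /etc/hosts 生成的配置树的前几行;
# 如果系统里没有 augtool,则给出提示而不是让脚本出错。
if command -v augtool >/dev/null 2>&1; then
    augtool print /files/etc/hosts | head -n 5
else
    echo "augtool 不可用(请先安装 augeas 软件包)"
fi
```

确认过树的结构之后,再把对应的 set/ins 语句放进 Puppet 的 changes 数组里会稳妥得多。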
+ +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/2014/09/manage-configurations-linux-puppet-augeas.html + +作者:[Jaroslav Štěpánek][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/jaroslav +[1]:http://xmodulo.com/2014/08/install-puppet-server-client-centos-rhel.html +[2]:http://augeas.net/ +[3]:http://xmodulo.com/manage-configurations-linux-puppet-augeas.html +[4]:http://xmodulo.com/2013/03/how-to-set-up-epel-repository-on-centos.html +[5]:http://augeas.net/docs/references/lenses/files/sudoers-aug.html +[6]:http://www.watzmann.net/categories/augeas.html +[7]:http://projects.puppetlabs.com/projects/1/wiki/puppet_augeas \ No newline at end of file diff --git a/translated/tech/20140926 How to monitor user login history on CentOS with utmpdump.md b/translated/tech/20140926 How to monitor user login history on CentOS with utmpdump.md new file mode 100644 index 0000000000..1fcfaa5f55 --- /dev/null +++ b/translated/tech/20140926 How to monitor user login history on CentOS with utmpdump.md @@ -0,0 +1,120 @@ +CentOS监控用户登录历史之utmpdump +================================================================================ +保留、维护和分析日志(如某个特定时期内发生过的,或正在发生的帐号事件),是Linux系统管理员最基础和最重要的任务之一。对于用户管理,检查用户的登入和登出日志(不管是失败的,还是成功的)可以让我们对任何潜在的安全隐患或未经授权使用系统的情况保持警惕。例如,工作时间之外或例假期间的来自未知IP地址或帐号的远程登录应当发出红色警报。 + +在CentOS系统上,用户登录历史存储在以下这些文件中: + +- /var/run/utmp(用于记录当前打开的会话)who和w工具用来记录当前有谁登录以及他们正在做什么,而uptime用来记录系统启动时间。 +- /var/log/wtmp (用于存储系统连接历史记录)last工具用来记录最后登录的用户的列表。 +- /var/log/btmp(记录失败的登录尝试)lastb工具用来记录最后失败的登录尝试的列表。 + +![](https://farm4.staticflickr.com/3871/15106743340_bd13fcfe1c_o.png) + +在本帖中,我将介绍如何使用utmpdump,这个小程序来自sysvinit-tools包,可以用于转储二进制日志文件到文本格式的文件以便检查。此工具默认在CentOS 6和7家族上可用。utmpdump收集到的信息比先前提到过的工具的输出要更全面,这让它成为一个胜任该工作的很不错的工具。除此之外,utmpdump可以用于修改utmp或wtmp。如果你想要修复二进制日志中的任何损坏条目,它会很有用。 
+ +### Utmpdump的使用及其输出说明 ### + +正如我们之前提到的,这些日志文件,与我们大多数人熟悉的其它日志相比(如/var/log/messages,/var/log/cron,/var/log/maillog),是以二进制格式存储的,因而我们不能使用像less或more这样的文件命令来查看它们的内容。那样看来,utmpdump拯救了世界。 + +为了要显示/var/run/utmp的内容,请运行以下命令: + + # utmpdump /var/run/utmp + +![](https://farm6.staticflickr.com/5595/15106696599_60134e3488_z.jpg) + +同样要显示/var/log/wtmp的内容: + + # utmpdump /var/log/wtmp + +![](https://farm6.staticflickr.com/5591/15106868718_6321c6ff11_z.jpg) + +最后,对于/var/log/btmp: + + # utmpdump /var/log/btmp + +![](https://farm6.staticflickr.com/5562/15293066352_c40bc98ca4_z.jpg) + +正如你所能看到的,三种情况下的输出结果是一样的,除了utmp和btmp的记录是按时间排序,而wtmp的顺序是颠倒的这个原因外。 + +每个日志行格式化成了多列,说明如下。第一个字段显示了会话识别符,而第二个字段则是PID。第三个字段可以是以下值:~~(表示运行等级改变或系统重启),bw(启动守候进程),数字(表示TTY编号),或者字符和数字(表示伪终端)。第四个字段可以为空或用户名、重启或运行级别。第五个字段是主TTY或PTY(伪终端),如果此信息可获得的话。第六个字段是远程主机名(如果是本地登录,该字段为空,运行级别信息除外,它会返回内核版本)。第七个字段是远程系统的IP地址(如果是本地登录,该字段为0.0.0.0)。如果没有提供DNS解析,第六和第七字段会显示相同的信息(远程系统的IP地址)。最后一个(第八)字段指明了记录创建的日期和时间。 + +### Utmpdump使用样例 ### + +下面提供了一些utmpdump的简单使用情况。 + +1. 检查8月18日到9月17日之间某个特定用户(如gacanepa)的登录次数。 + + # utmpdump /var/log/wtmp | grep gacanepa + +![](https://farm4.staticflickr.com/3857/15293066362_fb2dd566df_z.jpg) + +如果你需要回顾先前日期的登录信息,你可以检查/var/log下的wtmp-YYYYMMDD(或wtmp.[1...N])和btmp-YYYYMMDD(或btmp.[1...N])文件,这些是由[logrotate][1]生成的旧wtmp和btmp的归档文件。 + +2. 统计来自IP地址192.168.0.101的登录次数。 + + # utmpdump /var/log/wtmp | grep 192.168.0.101 + +![](https://farm4.staticflickr.com/3842/15106743480_55ce84c9fd_z.jpg) + +3. 显示失败的登录尝试。 + + # utmpdump /var/log/btmp + +![](https://farm4.staticflickr.com/3858/15293065292_e1d2562206_z.jpg) + +在/var/log/btmp输出中,每个日志行都与一个失败的登录尝试相关(如使用不正确的密码,或者一个不存在的用户ID)。上面图片中高亮部分显示了使用不存在的用户ID登录,这警告你有人尝试猜测常用帐号名来闯入系统。这在使用tty1的情况下是个极其严重的问题,因为这意味着某人对你机器上的终端具有访问权限(该检查一下谁拿到了进入你数据中心的钥匙了,也许吧?) + +4. 
显示每个用户会话的登入和登出信息
+
+    # utmpdump /var/log/wtmp
+
+![](https://farm4.staticflickr.com/3835/15293065312_c762360791_z.jpg)
+
+在/var/log/wtmp中,一次新的登录事件的特征是,第一个字段为‘7’,第三个字段是一个终端编号(或伪终端id),第四个字段为用户名。相关的登出事件会在第一个字段显示‘8’,第二个字段显示与登录一样的PID,而终端编号字段空白。例如,仔细观察上面图片中PID 1463的行。
+
+- 在 [Fri Sep 19 11:57:40 2014 ART],登录提示出现在 tty1 中。
+- 在 [Fri Sep 19 12:04:21 2014 ART],root 用户登录。
+- 在 [Fri Sep 19 12:07:24 2014 ART],root 用户登出。
+
+旁注:第四个字段的LOGIN意味着出现了一次登录到第五字段指定的终端的提示。
+
+到目前为止,我介绍的都是一些比较简单的例子。你可以将utmpdump与其它一些文本处理工具,如awk、sed、grep或cut组合,来产生过滤和增强的输出。
+
+例如,你可以使用以下命令来列出某个特定用户(如gacanepa)的所有登录事件,并将输出结果发送到.csv文件,它可以用像LibreOffice Calc或Microsoft Excel之类的文字或工作簿应用程序打开查看。让我们只显示PID、用户名、IP地址和时间戳:
+
+    # utmpdump /var/log/wtmp | grep -E "\[7].*gacanepa" | awk -v OFS="," 'BEGIN {FS="] "}; {print $2,$4,$7,$8}' | sed -e 's/\[//g' -e 's/\]//g'
+
+![](https://farm4.staticflickr.com/3851/15293065352_91e1c1e4b6_z.jpg)
+
+就像上面图片中三个块描绘的那样,过滤逻辑操作是由三个管道步骤组成的。第一步用于查找由用户gacanepa触发的登录事件([7]);第二步和第三步用于选择期望的字段,移除utmpdump输出中的方括号,并设置输出字段分隔符为逗号。
+
+当然,如果你想要在以后打开来看,你需要重定向上面的命令输出到文件(添加“>[文件名].csv”到命令后面)。
+
+![](https://farm4.staticflickr.com/3889/15106867768_0e37881a25_z.jpg)
+
+在更为复杂的例子中,如果你想要知道在特定时间内哪些用户(在/etc/passwd中列出)没有登录,你可以从/etc/passwd中提取用户名,然后运行grep命令来获取/var/log/wtmp输出中对应用户的列表。就像你看到的那样,有着无限可能。
+
+在进行总结之前,让我们简要地展示一下utmpdump的另外一种使用情况:修改utmp或wtmp。由于这些都是二进制日志文件,你不能像编辑普通文本文件一样来编辑它们。取而代之的是,你可以将其内容输出成为文本格式,修改该文本输出,然后将修改后的内容导入回二进制日志中。如下:
+
+    # utmpdump /var/log/utmp > tmp_output
+
+    # utmpdump -r tmp_output > /var/log/utmp
+
+这在你想要移除或修复二进制日志中的任何伪造条目时很有用。
+
+下面小结一下:utmpdump可以转储utmp、wtmp和btmp日志文件(也包括轮循的旧归档文件)中详细的登录事件,以此弥补who、w、uptime、last、lastb之类标准工具的不足,这也使得它成为一个很棒的工具。
+
+你可以随意添加评论以加强本帖的含金量。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/2014/09/monitor-user-login-history-centos-utmpdump.html
+
+作者:[Gabriel Cánepa][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由
[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/gabriel
+[1]:http://xmodulo.com/2014/09/logrotate-manage-log-files-linux.html
diff --git a/translated/tech/20140928 How to turn your CentOS box into an OSPF router using Quagga.md b/translated/tech/20140928 How to turn your CentOS box into an OSPF router using Quagga.md
new file mode 100644
index 0000000000..bfa2898e8c
--- /dev/null
+++ b/translated/tech/20140928 How to turn your CentOS box into an OSPF router using Quagga.md
@@ -0,0 +1,224 @@
+如何使用Quagga将你的CentOS变成一台OSPF路由器
+================================================================================
+[Quagga][1]是一个开源路由软件套件,可以将Linux变成支持RIP、OSPF、BGP和IS-IS等主要路由协议的路由器。它对IPv4和IPv6提供了完整的支持,并支持路由/前缀过滤。当你的生产路由器宕机,而你又没有备用设备、只能等待更换时,Quagga可能就是你的救星。通过适当的配置,Quagga甚至可以作为生产路由器。
+
+本教程中,我们将连接两个假设的分支机构网络(例如192.168.1.0/24和172.16.1.0/24),它们之间有一条专线连接。
+
+![](https://farm4.staticflickr.com/3861/15172727969_13cb7f037f_b.jpg)
+
+我们的CentOS分别位于所述专线的两端,两台主机的主机名分别设置为“site-A-RTR”和“site-B-RTR”。下面是IP地址的详细信息。
+
+- **Site-A**: 192.168.1.0/24
+- **Site-B**: 172.16.1.0/24
+- **两台Linux机器之间的对等网络**: 10.10.10.0/30
+
+Quagga包括了几个协同工作的守护进程。在本教程中,我们将重点建立以下守护进程。
+
+1. **Zebra**: 核心守护进程,负责内核接口和静态路由。
+1. **Ospfd**: IPv4 OSPF 守护进程。
+
+### 在CentOS上安装Quagga ###
+
+我们使用yum安装Quagga。
+
+    # yum install quagga
+
+在CentOS 7上,SELinux默认会阻止/usr/sbin/zebra写入它的配置目录。这个SELinux策略会干扰我们接下来要介绍的安装过程,所以我们要放宽此策略。对于这一点,你可以选择[关闭SELinux][2](这里不推荐),也可以像下面这样启用“zebra_write_config”布尔值。如果你使用的是CentOS 6,请跳过此步骤。
+
+    # setsebool -P zebra_write_config 1
+
+如果没有这个改变,在我们尝试在Quagga命令行中保存配置的时候会看到如下错误:
+
+    Can't open configuration file /etc/quagga/zebra.conf.OS1Uu5.
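顺带补充一个小的验证步骤草图:在执行上面的 setsebool 之后,可以确认该布尔值当前的取值。下面的写法是演示性质的,做了防御性检测,在未启用 SELinux 或没有安装相关工具的系统上只打印提示而不会报错:

```shell
#!/bin/sh
# 查询 zebra_write_config 布尔值的当前取值;
# getsebool 不存在或布尔值不存在时,打印相应提示而不是直接报错。
if command -v getsebool >/dev/null 2>&1; then
    getsebool zebra_write_config 2>/dev/null \
        || echo "SELinux 中没有 zebra_write_config 布尔值(SELinux 可能未启用)"
else
    echo "getsebool 不可用(未安装 SELinux 策略工具)"
fi
```

在启用了 SELinux 的 CentOS 7 上,正常情况下它应该打印出类似 zebra_write_config --> on 的结果。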
+
+安装完Quagga后,我们要配置必要的对等IP地址,并更新OSPF设置。Quagga自带了一个叫做vtysh的命令行工具,vtysh里面用到的Quagga命令与思科和Juniper等主要路由器厂商的命令是相似的。
+
+### 步骤 1: 配置 Zebra ###
+
+我们首先创建Zebra配置文件,并启动Zebra守护进程。
+
+    # cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
+    # service zebra start
+    # chkconfig zebra on
+
+启动vtysh命令行:
+
+    # vtysh
+
+首先,我们为Zebra指定日志文件。输入下面的命令进入vtysh的全局配置模式:
+
+    site-A-RTR# configure terminal
+
+指定日志文件的位置,接着退出该模式:
+
+    site-A-RTR(config)# log file /var/log/quagga/quagga.log
+    site-A-RTR(config)# exit
+
+永久保存配置:
+
+    site-A-RTR# write
+
+接下来,我们要确定可用的接口并按需配置它们的IP地址。
+
+    site-A-RTR# show interface
+
+----------
+
+    Interface eth0 is up, line protocol detection is disabled
+    . . . . .
+    Interface eth1 is up, line protocol detection is disabled
+    . . . . .
+
+配置eth0参数:
+
+    site-A-RTR# configure terminal
+    site-A-RTR(config)# interface eth0
+    site-A-RTR(config-if)# ip address 10.10.10.1/30
+    site-A-RTR(config-if)# description to-site-B
+    site-A-RTR(config-if)# no shutdown
+
+继续配置eth1参数:
+
+    site-A-RTR(config)# interface eth1
+    site-A-RTR(config-if)# ip address 192.168.1.1/24
+    site-A-RTR(config-if)# description to-site-A-LAN
+    site-A-RTR(config-if)# no shutdown
+
+现在验证配置:
+
+    site-A-RTR(config-if)# do show interface
+
+----------
+
+    Interface eth0 is up, line protocol detection is disabled
+    . . . . .
+    inet 10.10.10.1/30 broadcast 10.10.10.3
+    . . . . .
+    Interface eth1 is up, line protocol detection is disabled
+    . . . . .
+    inet 192.168.1.1/24 broadcast 192.168.1.255
+    . . . . .
+
+----------
+
+    site-A-RTR(config-if)# do show interface description
+
+----------
+
+    Interface      Status  Protocol  Description
+    eth0           up      unknown   to-site-B
+    eth1           up      unknown   to-site-A-LAN
+
+永久保存配置:
+
+    site-A-RTR(config-if)# do write
+
+在site-B上重复上面配置IP地址的步骤。
+
+如果一切顺利,你应该可以在site-A的服务器上ping通site-B上的对等IP地址10.10.10.2了。
+
+注意,一旦Zebra守护进程启动了,在vtysh命令行中做出的任何改变都会立即生效,因此没有必要在更改配置后重启Zebra守护进程。
+
+### 步骤 2: 配置OSPF ###
+
+我们首先创建OSPF配置文件,并启动OSPF守护进程:
+
+    # cp /usr/share/doc/quagga-XXXXX/ospfd.conf.sample /etc/quagga/ospfd.conf
+    # service ospfd start
+    # chkconfig ospfd on
+
+现在启动vtysh命令行来继续OSPF配置:
+
+    # vtysh
+
+进入路由配置模式:
+
+    site-A-RTR# configure terminal
+    site-A-RTR(config)# router ospf
+
+(可选)配置路由器id:
+
+    site-A-RTR(config-router)# router-id 10.10.10.1
+
+添加要在OSPF中通告的网络:
+
+    site-A-RTR(config-router)# network 10.10.10.0/30 area 0
+    site-A-RTR(config-router)# network 192.168.1.0/24 area 0
+
+永久保存配置:
+
+    site-A-RTR(config-router)# do write
+
+在site-B上重复与上面相似的OSPF配置:
+
+    site-B-RTR(config-router)# network 10.10.10.0/30 area 0
+    site-B-RTR(config-router)# network 172.16.1.0/24 area 0
+    site-B-RTR(config-router)# do write
+
+OSPF邻居现在应该已经建立起来了。只要ospfd在运行,通过vtysh做出的任何OSPF相关的配置改变都会立即生效,而不必重启ospfd。
+
+下一节,我们会验证我们的Quagga设置。
+
+### 验证 ###
+
+#### 1. 通过ping测试 ####
+
+首先你应该可以从site-A ping通site-B的LAN子网。确保你的防火墙没有阻止ping的流量。
+
+    [root@site-A-RTR ~]# ping 172.16.1.1 -c 2
+
+#### 2. 
检查路由表 ####
+
+必要的路由应该同时出现在内核路由表与Quagga路由表中。
+
+    [root@site-A-RTR ~]# ip route
+
+----------
+
+    10.10.10.0/30 dev eth0  proto kernel  scope link  src 10.10.10.1
+    172.16.1.0/30 via 10.10.10.2 dev eth0  proto zebra  metric 20
+    192.168.1.0/24 dev eth1  proto kernel  scope link  src 192.168.1.1
+
+----------
+
+    [root@site-A-RTR ~]# vtysh
+    site-A-RTR# show ip route
+
+----------
+
+    Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
+           I - ISIS, B - BGP, > - selected route, * - FIB route
+
+    O   10.10.10.0/30 [110/10] is directly connected, eth0, 00:14:29
+    C>* 10.10.10.0/30 is directly connected, eth0
+    C>* 127.0.0.0/8 is directly connected, lo
+    O>* 172.16.1.0/30 [110/20] via 10.10.10.2, eth0, 00:14:14
+    C>* 192.168.1.0/24 is directly connected, eth1
+
+#### 3. 验证OSPF邻居和路由 ####
+
+在vtysh命令行中,你可以检查必要的邻居是否在线,以及是否已经学习到了合适的路由。
+
+    [root@site-A-RTR ~]# vtysh
+    site-A-RTR# show ip ospf neighbor
+
+![](https://farm3.staticflickr.com/2943/15160942468_d348241bd5_z.jpg)
+
+本教程中,我们把重点放在了使用Quagga配置基本的OSPF上。在一般情况下,Quagga让我们能够轻松地在一台普通的Linux机器上配置动态路由协议,如OSPF、RIP或BGP。启用了Quagga的机器可以与你网络中的其他路由器进行通信和交换路由信息。由于它支持主要的开放标准路由协议,它或许是许多情况下的首选。更重要的是,Quagga的命令行界面与思科、Juniper等主要路由器厂商几乎是相同的,这使得部署和维护Quagga机器变得非常容易。
+
+希望这些对你们有帮助。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/turn-centos-box-into-ospf-router-quagga.html
+
+作者:[Sarmed Rahman][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/sarmed
+[1]:http://www.nongnu.org/quagga/
+[2]:http://xmodulo.com/how-to-disable-selinux.html
\ No newline at end of file
diff --git a/translated/tech/20140928 How to use xargs command in Linux.md b/translated/tech/20140928 How to use xargs command in 
Linux.md
@@ -0,0 +1,116 @@
+如何在Linux里使用xargs命令
+================================================================================
+你是否遇到过需要一遍又一遍地对多个文件执行同样操作的情况?如果有过,那你肯定深有体会这有多么无聊、多么低效。还好有种简单的方式可以解决这个烦恼,那就是在类Unix操作系统中使用xargs命令。通过这个命令,你可以高效地处理多个文件,节省时间和精力。在这篇教程中,你可以学到如何一次性对多个文件执行命令或脚本操作,再也不用担心像单独处理无数个日志或数据文件那样吓人的任务了。
+
+xargs命令有两个要点。第一,你必须列出目标文件。第二,你必须指定对每个文件需要执行的命令或脚本。
+
+这篇教程会涉及三个应用场景,其中xargs命令被用来处理分布在不同目录下的文件:
+
+1. 计算所有文件的行数
+1. 打印指定文件的第一行
+1. 对每个文件执行一个自定义脚本
+
+请看下面这个叫xargstest的目录(用tree命令加上-i和-f选项显示目录树结构,这样可以不按缩进显示,而且每个文件都会带有完整路径):
+
+    $ tree -if xargstest/
+
+![](https://farm3.staticflickr.com/2942/15334985981_ce1a192def.jpg)
+
+这六个文件的内容分别如下:
+
+![](https://farm4.staticflickr.com/3882/15346287662_a3084a8e4f_o.png)
+
+这个**xargstest**目录,以及它包含的子目录和文件,将用在下面的例子中。
+
+### 场景1:计算所有文件的行数 ###
+
+就像之前提到的,使用xargs命令的第一个要点是提供一个文件列表,以便对其运行命令或脚本。我们可以用find命令来确定并列出目标文件。选项**-name 'file??'**指定了只有xargstest目录下名字以“file”开头、后跟两个任意字符的文件才算匹配。这个搜索默认是递归的,也就是说,find命令会在xargstest及其子目录下搜索匹配的文件。
+
+    $ find xargstest/ -name 'file??'
+
+----------
+
+    xargstest/dir3/file3B
+    xargstest/dir3/file3A
+    xargstest/dir1/file1A
+    xargstest/dir1/file1B
+    xargstest/dir2/file2B
+    xargstest/dir2/file2A
+
+我们可以通过管道把结果发给sort命令,让文件名按顺序排列:
+
+    $ find xargstest/ -name 'file??' | sort
+
+----------
+
+    xargstest/dir1/file1A
+    xargstest/dir1/file1B
+    xargstest/dir2/file2A
+    xargstest/dir2/file2B
+    xargstest/dir3/file3A
+    xargstest/dir3/file3B
+
+然后是第二个要点:需要执行的命令。我们使用带-l选项的wc命令来计算每个文件包含的换行符数目(它会打印在输出中每一行的前面):
+
+    $ find xargstest/ -name 'file??' 
| sort | xargs wc -l
+
+----------
+
+    1 xargstest/dir1/file1A
+    2 xargstest/dir1/file1B
+    3 xargstest/dir2/file2A
+    4 xargstest/dir2/file2B
+    5 xargstest/dir3/file3A
+    6 xargstest/dir3/file3B
+    21 total
+
+可以看到,不用对每个文件手动执行一次wc -l命令,xargs命令让你在一步里完成所有操作。那些之前看起来无法完成的任务,例如单独处理数百个文件,现在可以轻松完成了。
+
+### 场景2:打印指定文件的第一行 ###
+
+既然你对xargs命令的使用已经有一点基础了,接下来可以自由选择要执行的命令。有时,你也许希望只对一部分文件执行操作而忽略其他的。在这种情况下,你可以使用find命令的-name选项以及?通配符(匹配任意单个字符)来选中特定文件,并通过管道输出给xargs命令。举个例子,如果你想打印以“B”字符结尾的文件的第一行,而忽略以“A”结尾的文件,可以使用下面的find、xargs和head命令组合来完成(head -n1会打印一个文件的第一行):
+
+    $ find xargstest/ -name 'file?B' | sort | xargs head -n1
+
+----------
+
+    ==> xargstest/dir1/file1B <==
+    one
+
+    ==> xargstest/dir2/file2B <==
+    one
+
+    ==> xargstest/dir3/file3B <==
+    one
+
+你将看到只有以“B”结尾的文件会被处理,而所有以“A”结尾的文件都被忽略了。
+
+### 场景3:对每个文件执行一个自定义脚本 ###
+
+最后,你也许希望对一些文件执行一个自定义脚本(例如Bash、Python或Perl脚本)。要做到这一点,只需简单地把前面例子中的wc和head命令替换成你的自定义脚本名字就好了:
+
+    $ find xargstest/ -name 'file??' | xargs myscript.sh
+
+自定义脚本**myscript.sh**需要写成接受一个文件名作为参数并处理这个文件的形式。上面的命令将为find命令找到的每个文件分别调用该脚本。
+
+注意,上面例子中的文件名都不包含空格。通常来说,在Linux环境下操作没有空格的文件名会舒服很多。如果你实在需要处理名字中带有空格的文件,上面的命令就不能直接用了,需要稍微调整一下。这可以通过find命令的-print0选项(它会打印完整的文件名到标准输出,并以空字符结尾),以及xargs命令的-0选项(它会以空字符作为字符串结束标记)来实现,就像下面的例子:
+
+    $ find xargstest/ -name 'file*' -print0 | xargs -0 myscript.sh
+
+注意,-name选项所跟的参数已经改为'file\*',意思是所有以"file"开头、后面是任意字符的文件都会被选中。
+
+### 总结 ###
+
+在看完这篇教程后,你应该已经理解了xargs命令的作用,以及如何把它应用到自己的工作中。很快你就可以享受这个命令带来的高效率,而不用把时间花在一些重复的任务上了。想了解更详细的信息以及更多的选项,你可以在终端中输入'man xargs'命令来查看xargs的文档。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/xargs-command-linux.html
+
+作者:[Joshua Reed][a]
+译者:[zpl1025](https://github.com/zpl1025)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/joshua
diff --git a/translated/tech/20140929 Git Rebase Tutorial--Going Back in Time with Git Rebase.md 
b/translated/tech/20140929 Git Rebase Tutorial--Going Back in Time with Git Rebase.md
new file mode 100644
index 0000000000..ad1938d8ca
--- /dev/null
+++ b/translated/tech/20140929 Git Rebase Tutorial--Going Back in Time with Git Rebase.md
@@ -0,0 +1,109 @@
+Git Rebase教程: 用Git Rebase让时光倒流
+================================================================================
+![](https://www.gravatar.com/avatar/7c148ace0d63306091cc79ed9d9e77b4?d=mm&s=200)
+
+Christoph Burgdorf从10岁起就是一名程序员,他是HannoverJS Meetup的创始人,并且一直活跃在AngularJS社区。他也非常了解git的里里外外,为此他在[thoughtram][1]举办工作坊来帮助初学者掌握这项技术。
+
+下面的教程最初发表在他的[博客][2]上。
+
+----------
+
+### 教程: Git Rebase ###
+
+想象一下,你正在开发一个激进的新功能。它将会非常棒,但需要一段时间才能完成。这几天,也许是几个星期,你一直在做这个功能。
+
+你的功能分支已经领先master有6个提交了。你是一个优秀的开发人员,做的都是有意义的语义化提交。但有一个问题:你开始慢慢意识到,这个庞然大物还需要更多的时间才能真正准备好合并回主分支。
+
+    m1-m2-m3-m4 (master)
+               \
+                f1-f2-f3-f4-f5-f6(feature)
+
+同时你也意识到,新功能中的一些部分与其余部分的耦合较少,它们可以更早地合并到主分支。不幸的是,你想提前合并到主分支的那部分内容,位于你六个提交中间的某一个里。更糟糕的是,那个提交也包含了依赖于该功能分支更早提交的改动。有人可能会说,你一开始就应该把它拆成两次提交,但没有人是完美的。
+
+    m1-m2-m3-m4 (master)
+               \
+                f1-f2-f3-f4-f5-f6(feature)
+                       ^
+                       |
+                  mixed commit
+
+在当初提交的时候,你并没有预见到可能要把这个功能逐步合并入主分支。哎!你没想到这件事会拖这么久。
+
+你需要的是一种方法,回到历史中去,把那个提交拆分成两次提交,这样就可以把要提前合并的代码安全地分离出来,并移植到master分支。
+
+用图说话,就是我们需要这样:
+
+    m1-m2-m3-m4 (master)
+               \
+                f1-f2-f3a-f3b-f4-f5-f6(feature)
+
+在将工作拆分成两个提交之后,我们就可以把前一部分cherry-pick到主分支了。
+
+原来,Git自带了一个功能强大的命令git rebase -i,它可以让我们做到这件事:改变历史。改变历史可能会产生问题,根据经验法则,应当尽量避免改写已经与他人共享的历史。不过在我们的例子中,我们只是改变本地功能分支的历史,没有人会受到伤害。那就这么做吧!
+
+好吧,让我们来仔细看看f3提交究竟修改了什么。原来我们一共修改了两个文件:userService.js和wishlistService.js。比方说,userService.js的更改可以直接合入主分支,而wishlistService.js的不行,因为wishlistService.js在主分支上甚至还不存在——它是在f1提交中才引入的。
+
+>专家提示:即使改动都在同一个文件中,git也可以搞定。但在这篇博客中,我们让事情保持简单。
+
+我们已经建立了一个[公开演示仓库][3],将用它来练习。为了便于跟踪,每一个提交信息都以上面图表中使用的假SHA作为前缀。以下是git在拆分f3提交之前的分支图。
+
+![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git1.png)
+
+现在,我们要做的第一件事就是用git checkout检出我们的功能分支,然后用git rebase -i master开始rebase。
+
+接下来,git会用配置的编辑器(默认为Vim)打开一个临时文件。
+
+![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git2.png)
+
+该文件为你提供了一些rebase选项,并带有一个提示(蓝色文字)。对于每一个提交,我们可以选择的动作有pick、reword、edit、squash、fixup和exec。每一个动作也可以通过它的缩写形式p、r、e、s、f和x来引用。逐一描述这些选项超出了本文的范畴,所以让我们专注于我们的具体任务。
+
+我们要为f3提交选择edit动作,因此我们把文件内容改成这样。
+
+现在我们保存文件(在Vim中是按下Esc后输入:wq,最后按下回车)。接下来我们会注意到,git在我们标记为edit的那个提交处停止了rebase。
+
+这意味着git像普通rebase那样依次应用了f1、f2、f3,但是在f3**之后**停止了。事实上,看一眼停下来的位置的日志,就可以证明这一点。
+
+要将f3拆分成两个提交,我们所要做的是将git的指针重置到前一个提交(f2),同时保持工作目录和现在一样。这就是git reset的混合模式所做的事情。由于混合模式是git reset的默认模式,我们可以直接用git reset head~1。照做,并在运行后用git status看看发生了什么。
+
+git status告诉我们userService.js和wishlistService.js被修改了。如果我们运行git diff,就可以看到f3里面确切地做了哪些更改。
+
+如果我们看一眼日志,会发现f3已经消失了。
+
+现在,原先f3提交的改动已经回到了工作区,等待重新提交,而f3提交本身已经消失了。不过要记住,我们仍处在rebase的中间过程,f4、f5、f6提交并没有丢失,它们稍后就会回来。
+
+让我们创建两个新的提交:首先为可以合入主分支的userService.js的改动创建一个提交。运行git add userService.js,接着运行git commit -m "f3a: add updateUser method"。
+
+太棒了!让我们为wishlistService.js的改动创建另外一个提交。运行git add wishlistService.js,接着运行git commit -m "f3b: add addItems method"。
+
+让我们再看一眼日志。
+
+这正是我们想要的,只是f4、f5、f6仍然缺失。这是因为我们仍处在交互式rebase的中间,需要告诉git继续rebase。用下面的命令继续:git rebase --continue。
+
+让我们再次检查一下日志。
+
+就是这样,我们现在已经得到了想要的历史:先前的f3提交被拆分成了两个提交f3a和f3b。剩下的最后一件事,是把f3a提交cherry-pick到主分支上。
+
+为了完成最后一步,我们首先用git checkout master切换到主分支。现在我们就可以用cherry-pick命令来拾取f3a提交了。本例中我们可以用它的SHA值bd47ee1来引用它。
+
+现在f3a这个提交就在主分支的最上面了。这就是我们需要的!
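上面的整个拆分流程可以浓缩成下面这个可以直接运行的小脚本。它在一个临时目录里新建一个演示仓库(仓库内容和提交信息都是为演示而假设的,并非正文中的示例仓库),并用GIT_SEQUENCE_EDITOR把todo文件里f3对应的pick自动改成edit,代替手工编辑:

```shell
#!/bin/sh
# 演示:把混合提交 f3 拆分成 f3a 与 f3b
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > README && git add . && git commit -qm m1 && git tag base
echo 1 > userService.js     && git add . && git commit -qm f1
echo 1 > wishlistService.js && git add . && git commit -qm f2
echo 2 > userService.js && echo 2 > wishlistService.js
git add . && git commit -qm "f3: mixed commit"
# 非交互地把 todo 文件中第 3 个 pick 改成 edit,相当于在编辑器里手工修改
GIT_SEQUENCE_EDITOR="sed -i '3s/^pick/edit/'" git rebase -i base
git reset -q HEAD~1                  # 撤销 f3,改动保留在工作区
git add userService.js     && git commit -qm "f3a: add updateUser method"
git add wishlistService.js && git commit -qm "f3b: add addItems method"
git rebase --continue
git log --oneline
```

脚本结束后,git log应该显示f3已被f3a和f3b取代,这时就可以像正文中那样把f3a cherry-pick到master了。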
+
+这篇文章看起来很长,仿佛要花费很大的功夫,但实际上对于一个git高级用户而言,这只是一小会儿的事。
+
+>注:Christoph目前正在与Pascal Precht合写一本关于[Git rebase][4]的书,您可以在leanpub上订阅它,以便在它准备出版时获得通知。
+
+--------------------------------------------------------------------------------
+
+via: https://www.codementor.io/git-tutorial/git-rebase-split-old-commit-master
+
+作者:[cburgdorf][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:https://www.codementor.io/cburgdorf
+[1]:http://thoughtram.io/
+[2]:http://blog.thoughtram.io/posts/going-back-in-time-to-split-older-commits/
+[3]:https://github.com/thoughtram/interactive-rebase-demo
+[4]:https://leanpub.com/rebase-the-complete-guide-on-rebasing-in-git
\ No newline at end of file
diff --git a/translated/tech/20140929 Using GIT to backup your website files on linux.md b/translated/tech/20140929 Using GIT to backup your website files on linux.md
new file mode 100644
index 0000000000..a72a547097
--- /dev/null
+++ b/translated/tech/20140929 Using GIT to backup your website files on linux.md
@@ -0,0 +1,119 @@
+使用 Git 备份 Linux 上的网站文件
+================================================================================
+![](http://techarena51.com/wp-content/uploads/2014/09/git_logo-1024x480-580x271.png)
+
+BUP 并不单纯是 Git, 而是一款基于 Git 的软件. 一般情况下, 我使用 rsync 来备份我的文件, 而且迄今为止一直工作得很好. 唯一的不足就是无法把文件恢复到某个特定的时间点. 因此, 我开始寻找替代品, 结果发现了 BUP, 一款基于 git 的软件, 它将数据存储在一个仓库中, 并且有将数据恢复到特定时间点的选项.
+
+要使用 BUP, 你先要初始化一个空的仓库, 然后备份所有文件. 当 BUP 完成一次备份时, 它会创建一个还原点, 你可以过后还原到这里. 它还会创建所有文件的索引, 包括文件的属性和校验和. 当要进行下一次备份时, BUP 会对比文件的属性和校验和, 只保存发生变化的数据. 这样可以节省很多空间.
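上面说到 BUP "只保存发生变化的数据". 它背后按内容寻址、相同数据块只存一份的思路, 可以用下面这个极简的 shell 脚本来体会 (纯属示意, 与 BUP 的真实实现无关; BUP 实际使用滚动校验和来切分数据块):

```shell
#!/bin/sh
# 示意: 按内容的 sha1 存放数据块, 内容相同的块只占一份空间
set -e
store="$(mktemp -d)"
save_chunk() {                  # 存入一个"块", 文件名就是它的 sha1
    h=$(printf %s "$1" | sha1sum | cut -d' ' -f1)
    printf %s "$1" > "$store/$h"
    echo "$h"
}
# 第一次"备份": 三个块
save_chunk aaaa; save_chunk bbbb; save_chunk cccc
# 第二次"备份": 只有 dddd 是新块, 前两个块被自动去重
save_chunk aaaa; save_chunk bbbb; save_chunk dddd
ls "$store" | wc -l             # 六次写入只产生四个块
```

真实的 BUP 在此之上还维护索引和还原点, 把每次备份记录成指向这些数据块的快照.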
+
+### 安装 BUP (在 CentOS 6 & 7 上测试通过) ###
+
+首先确保你已经安装了 RPMFORGE 和 EPEL 仓库
+
+    [techarena51@vps ~]$ sudo yum groupinstall "Development Tools"
+    [techarena51@vps ~]$ sudo yum install python python-devel
+    [techarena51@vps ~]$ sudo yum install fuse-python pyxattr pylibacl
+    [techarena51@vps ~]$ sudo yum install perl-Time-HiRes
+    [techarena51@vps ~]$ git clone git://github.com/bup/bup
+    [techarena51@vps ~]$ cd bup
+    [techarena51@vps ~]$ make
+    [techarena51@vps ~]$ make test
+    [techarena51@vps ~]$ sudo make install
+
+对于 debian/ubuntu 用户, 你可以使用 "apt-get build-dep bup". 要获得更多的信息, 可以查看 https://github.com/bup/bup
+
+在 CentOS 7 上, 当你运行 "make test" 时可能会出错, 但你可以继续运行 "make install".
+
+第一步是初始化一个空的仓库, 就像 git 一样.
+
+    [techarena51@vps ~]$ bup init
+
+默认情况下, bup 会把仓库存储在 "~/.bup" 中, 但你可以通过设置环境变量 "export BUP_DIR=/mnt/user/bup" 来改变位置.
+
+然后, 创建所有文件的索引. 这个索引, 就像之前讲过的那样, 存储了文件列表和它们的属性以及 git 对象 id (sha1 哈希值). (属性包括了软链接, 权限和不可变位)
+
+    bup index /path/to/file
+    bup save -n nameofbackup /path/to/file
+
+    #Example
+    [techarena51@vps ~]$ bup index /var/www/html
+    Indexing: 7973, done (4398 paths/s).
+    bup: merging indexes (7980/7980), done.
+
+    [techarena51@vps ~]$ bup save -n techarena51 /var/www/html
+
+    Reading index: 28, done.
+    Saving: 100.00% (4/4k, 28/28 files), done.
+    bloom: adding 1 file (7 objects).
+    Receiving index from server: 1268/1268, done.
+    bloom: adding 1 file (7 objects).
+
+"bup save" 会把所有内容分块, 然后把它们作为对象储存. "-n" 选项指定备份名.
+
+你可以查看备份列表和已备份的文件.
+
+    [techarena51@vps ~]$ bup ls
+    local-etc techarena51 test
+    #Check for a list of backups available for my site
+    [techarena51@vps ~]$ bup ls techarena51
+    2014-09-24-064416 2014-09-24-071814 latest
+    #Check for the files available in these backups
+    [techarena51@vps ~]$ bup ls techarena51/2014-09-24-064416/var/www/html
+    apc.php techarena51.com wp-config-sample.php wp-load.php
+
+在同一个服务器上备份文件从来不是一个好的选择. BUP 允许你远程备份网站文件, 但你必须保证你的 SSH 密钥和 BUP 都已经安装在远程服务器上.
+ + bup index path/to/dir + bup save-r remote-vps.com -n backupname path/to/dir + +### 例子: 备份 "/var/www/html" 文件夹 ### + + [techarena51@vps ~]$bup index /var/www/html + [techarena51@vps ~]$ bup save -r user@remotelinuxvps.com: -n techarena51 /var/www/html + Reading index: 28, done. + Saving: 100.00% (4/4k, 28/28 files), done. + bloom: adding 1 file (7 objects). + Receiving index from server: 1268/1268, done. + bloom: adding 1 file (7 objects). + +### 恢复备份 ### + +登入远程服务器并输入下面的命令 + + [techarena51@vps ~]$bup restore -C ./backup techarena51/latest + + #Restore an older version of the entire working dir elsewhere + [techarena51@vps ~]$bup restore -C /tmp/bup-out /testrepo/2013-09-29-195827 + #Restore one individual file from an old backup + [techarena51@vps ~]$bup restore -C /tmp/bup-out /testrepo/2013-09-29-201328/root/testbup/binfile1.bin + +唯一的缺点是你不能把文件恢复到另一个服务器, 你必须通过 SCP 或者 rsync 手动复制文件. + +通过集成的 web 服务器查看备份 + + bup web + #specific port + bup web :8181 + +你可以使用 shell 脚本来运行 bup, 并建立一个每日运行的定时任务 + + #!/bin/bash + + bup index /var/www/html + bup save -r user@remote-vps.com: -n techarena51 /var/www/html + +BUP 并不完美, 但它的确能够很好地完成任务. 我当然非常愿意看到这个项目的进一步开发, 希望以后能够增加远程恢复的功能. + +你也许喜欢阅读 使用[inotify-tools][1], 一篇关于实时文件同步的文章. 
+
+--------------------------------------------------------------------------------
+
+via: http://techarena51.com/index.php/using-git-backup-website-files-on-linux/
+
+作者:[Leo G][a]
+译者:[wangjiezhe](https://github.com/wangjiezhe)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://techarena51.com/
+[1]:http://techarena51.com/index.php/inotify-tools-example/
diff --git a/translated/tech/20140930 How to Boot Linux ISO Images Directly From Your Hard Drive.md b/translated/tech/20140930 How to Boot Linux ISO Images Directly From Your Hard Drive.md
new file mode 100644
index 0000000000..3dcc1a95c3
--- /dev/null
+++ b/translated/tech/20140930 How to Boot Linux ISO Images Directly From Your Hard Drive.md
@@ -0,0 +1,94 @@
+直接从硬盘启动Linux ISO镜像
+================================================================================
+
+
0fl29HX+GxSpTZqy0alz41KJzdyf1TlZMzkL8eMM6aKj5V5LHyGqGlNgiINfUIgIz0Ta6rwOTbxXKilzoXDVqVMG5GbwfgT+eOwXRIp9WKx55r8cWosZ346xfnOZUyle1ysbOT88XttmYefWfr1DkpSljJelz9yjKJX0/pk3j/ycd5Hr8/uZsIaR76Y8Zr8UYXZ02paa8n7Ryyin0DHmuasJY3X5Y88mMLvZ2NYpxwb3SvNssZr8kf6riOtUzpJZQfj0Rs7y8YhT0qRP/qxYWgVsD/WYZ3St6OKRv1KxkvyRw57L41KT41maeMV+WO/gk5Gm49WTidjht7xgrHnuvwxRhvjemOlKxse8dqlpVe4vbvv7JIx/Lr88bqjpxc3XpI/Js/DkZt9AKMRbvRnjUbjIfcPS7+nKLL2J7FLjKU/769DjORMI7VRm+l56c/KTYHOVggzjs9L5zTZ+jzaG5UEY3l2rtK5vNF44/ENGHMj5VhPjZSpunzW56tKyzQq345K0Jihc9bj89JkLDmNFWSs9Pi8tMsJ/ed3STEcOQWMMP6EUcs20nwyGFNEmwvi46QdU0TtS4x05VG81lGqka+A5PXHFBnjBf2xzyn8WkqCjPGSduz4ejiaqZNkjBcd634lNk3GeL1R8pgxIXuUHHvcvZYaw5FTLGDcttK2/2B8XWPWPog23kyMd5u77C6TZswyMsbtoc1O2UmWkUx32e/Z15b2Mo1//EumzYlsm5M3ttKMf58yf3P90bffQ/uXOOPXLDvdbMh4t2HjQyayHdvsFPthbE9x/XFiFDmuszBmNlKNFMMp6rjY7W0yYzhyigWMyMM/mHF8HUcu0mhGLr5qqEi6DvnN9cfeqFS8+jHVWsC8sVRPhkXWUrkz8oy5sjoaqRzaUcky8t/l0nWGVGbjUaCRr4UcjKnWIX9kNCOj0jKP9dho5BnDX9nLNHaW/hdAFf4rAZXpyh5ZMRw5BYwwwggjjDDCCCOMMMIII4wwwggjjDDCCCOMMMIII4wwwggjjDDCCCOMMMIII4wwwggjjDDCCCOMMMIII4wwwggjjDDCCCOMMMIIo3TjG9j+B4tUkGfI5p/jAAAAAElFTkSuQmCC) + +Linux的GRUB2启动加载器可以直接从硬盘启动Linux ISO文件,可以启动Live CD,甚至可以不用烧录到磁盘来安装Linux到另外一个硬盘分区或从USB驱动启动。 + +我们在Ubuntu 14.04上实施了该过程——Ubuntu及基于Ubuntu的Linux版本对此支持良好。[其它Linux发行版][1]上的工作原理也类似。 + +### 获取Linux ISO文件 ### + +这一密技需要你的硬盘驱动器上安装有Linux系统,你的计算机必须使用[GRUB2启动加载器][2],这是大多数Linux系统的标准启动加载器。不好意思,你是不能使用Windows启动加载器来直接启动一个Linux ISO文件的。 + +下载你想要使用的ISO文件,并放到你的Linux分区中。GRUB2应该支持大多数Linux系统的。如果你想要在live环境中使用它们,而不想将它们安装到硬盘驱动器上,请确认你下载的是各个Linux ISO的“[live CD][3]”版本。很多基于Linux的可启动工具盘也应该可以工作。 + +### 检查ISO文件内容 ### + +你可能需要检查ISO文件来明确确定指定的文件在哪里。例如,你可以通过使用Ubuntu及其它基于GNOME的桌面环境中的归档管理器/File Roller文件管理器这些图形化应用程序来打开ISO文件来完成此项工作。在Nautilus文件管理器中,右击ISO文件并选择使用归档管理器打开。 + +定位内核文件和initrd映像。如果你正在使用Ubuntu ISO文件,你会在卡斯帕文件夹中找到这些文件——vmlinuz文件时Linux内核,而initrd文件是initrd映像。后面,你需要知道它们在ISO文件中所处的位置。 + 
+![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x350xvmlinuz-and-initrd-file-locations.png.pagespeed.ic.hB1yMlHMr2.png)
+
+### 检查硬盘分区路径 ###
+
+GRUB使用与Linux不同的“设备命名”结构。在Linux系统中,/dev/sda1是第一块硬盘上的第一个分区——**a**指第一块硬盘,而**1**指它上面的第一个分区。在GRUB中,与/dev/sda1相对应的是(hd0,1)。**0**指第一块硬盘,而**1**指它上面的第一个分区。换句话说,在GRUB设备名中,磁盘编号从0开始计数,而分区编号则从1开始计数——是啊,这确实令人困惑。例如,(hd3,6)是指第四块磁盘上的第六个分区。
+
+你可以使用**fdisk -l**命令来查看该信息。在Ubuntu上,打开终端并运行以下命令:
+
+    sudo fdisk -l
+
+你将看到一个Linux设备路径列表,你可以自行将它们转换成GRUB设备名。例如,在下面的图片中,我们可以看到有个系统分区是/dev/sda1——那么,对于GRUB而言,它就是(hd0,1)。
+
+![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x410xfdisk-l-command.png.pagespeed.ic.yW7uP1_G0C.png)
+
+### 创建GRUB2启动条目 ###
+
+添加自定义启动条目的最简单的方式是编辑/etc/grub.d/40_custom脚本,该文件就是为用户自行添加启动条目而设计的。编辑该文件后,/etc/default/grub文件和/etc/grub.d/下各脚本的内容将会被合并,生成/boot/grub/grub.cfg文件——你不应该手工编辑grub.cfg,它是根据你在其它文件中指定的设置自动生成的。
+
+你需要以root权限打开/etc/grub.d/40_custom文件来编辑。在Ubuntu上,你可以通过打开终端窗口,并运行以下命令来完成:
+
+    sudo gedit /etc/grub.d/40_custom
+
+当然,你也可以用你喜爱的文本编辑器打开该文件。例如,你可以把命令中的“gedit”替换为“nano”,在[Nano文本编辑器][4]中打开它。
+
+除非你已经添加过其它自定义启动条目,否则你看到的应该是一个几乎空的文件。你需要在[注释][5]行下面添加一个或多个ISO启动部分。
+
+![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x300xadd-custom-boot-menu-entries-to-grub.png.pagespeed.ic.uUT-Yls8xf.png)
+
+下面展示了怎样从ISO文件启动Ubuntu或基于Ubuntu的发行版,我们在Ubuntu 14.04下作了测试:
+
+    menuentry "Ubuntu 14.04 ISO" {
+    set isofile="/home/name/Downloads/ubuntu-14.04.1-desktop-amd64.iso"
+    loopback loop (hd0,1)$isofile
+    linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash
+    initrd (loop)/casper/initrd.lz
+    }
+
+自定义启动条目时,把菜单条目名称、计算机上ISO文件的正确路径,以及包含ISO文件的硬盘和分区设备名,都改成你自己的。如果vmlinuz和initrd文件的名称或路径不同,请为这些文件指定正确的路径。
+
+(如果你有一个独立的/home分区,则要去掉路径中的/home部分,像这样:**set isofile="/name/Downloads/${isoname}"**)。
+
+**重要说明**:不同的Linux发行版需要带有不同启动选项的不同启动条目。GRUB Live ISO Multiboot项目提供了[适用于不同Linux发行版的菜单条目][6]的多种示例。你应当可以参照这些示例菜单条目,为你想要启动的ISO文件做出调整。你也可以直接在网上搜索你想要启动的Linux发行版的名称和版本号,加上“从ISO启动”“GRUB”之类的关键词,以获取更多信息。
+
+![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x392xadd-a-linux-iso-file-to-grub-boot-loader.png.pagespeed.ic.2FR0nOtugC.png)
+
+如果你想要添加更多的ISO启动选项,请在该文件中添加额外的条目。
+
+完成后保存文件,返回终端窗口并运行以下命令:
+
+    sudo update-grub
+
+![](http://cdn8.howtogeek.com/wp-content/uploads/2014/09/650x249xgenerate-grub.cfg-on-ubuntu.png.pagespeed.ic.5I70sH4ZRs.png)
+
+再次启动计算机时,你将看到ISO启动条目,选择它就可以启动ISO文件了。启动时,你可能需要按住Shift键才能显示GRUB菜单。
+
+如果在尝试启动ISO文件时看到错误信息或黑屏,那就说明你的启动条目多半配置错了。即使ISO文件路径和设备名都正确,ISO文件中vmlinuz和initrd文件的路径也可能不对,或者你所启动的Linux系统需要不同的启动选项。
+
+--------------------------------------------------------------------------------
+
+via: http://www.howtogeek.com/196933/how-to-boot-linux-iso-images-directly-from-your-hard-drive/
+
+作者:[Chris Hoffman][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://www.howtogeek.com/author/chrishoffman/
+[1]:http://www.howtogeek.com/191207/10-of-the-most-popular-linux-distributions-compared/
+[2]:http://www.howtogeek.com/196655/how-to-configure-the-grub2-boot-loaders-settings/
+[3]:http://www.howtogeek.com/172810/take-a-secure-desktop-everywhere-everything-you-need-to-know-about-linux-live-cds-and-usb-drives/
+[4]:http://www.howtogeek.com/howto/42980/the-beginners-guide-to-nano-the-linux-command-line-text-editor/
+[5]:http://www.howtogeek.com/118389/how-to-comment-out-and-uncomment-lines-in-a-configuration-file/
+[6]:http://git.marmotte.net/git/glim/tree/grub2
diff --git a/translated/tech/20141008 How to configure a host intrusion detection system on CentOS.md b/translated/tech/20141008 How to configure a host intrusion detection system on CentOS.md
new file mode 100644
index 0000000000..90ec639b97
--- /dev/null
+++ b/translated/tech/20141008 How to configure a host intrusion detection system on CentOS.md
@@ -0,0 +1,125 @@
+在CentOS上配置主机入侵检测系统
+================================================================================
+每个系统管理员都希望在生产服务器上首先部署的安全手段之一,就是检测文件篡改的机制——不仅仅是文件内容,还包括它们的属性。
+
+[AIDE][1](“高级入侵检测环境”的缩写)是一个开源的基于主机的入侵检测系统。AIDE通过检查大量文件属性的不一致性来检验系统二进制文件和基本配置文件的完整性,这些属性包括权限、文件类型、索引节点、链接数、链接名、用户、组、文件大小、块计数、修改时间、添加时间、创建时间、acl、SELinux安全上下文、xattrs,以及md5/sha校验和。
+
+AIDE先扫描一台(未被篡改的)Linux服务器的文件系统,构建文件属性数据库;以后再将服务器的文件属性与数据库中的记录进行比对,对服务器运行期间被修改过的已索引文件发出警告。出于这个原因,在系统更新后或配置文件发生合法修改后,AIDE必须重新对受保护的文件建立索引。
+
+对于某些客户,他们可能会根据自己的安全策略,要求在他们的服务器上安装某种入侵检测系统。但是,不管客户是否要求,系统管理员都应该部署一个入侵检测系统,这通常是一个很好的做法。
+
+### 安装AIDE到CentOS或RHEL ###
+
+AIDE的初始安装(以及首次运行)最好是在系统刚安装完后进行,此时没有任何服务暴露在互联网甚至局域网中。在这个早期阶段,我们可以将来自外部的一切闯入和破坏风险降到最低限度。事实上,这也是确保AIDE构建初始数据库时系统保持干净的唯一途径。
+
+出于上面的原因,我们先执行下面的命令安装AIDE:
+
+    # yum install aide
+
+安装之后,我们需要将机器从网络断开,并实施下面所述的一些基本配置任务。
+
+### 配置AIDE ###
+
+默认配置文件是/etc/aide.conf,该文件里有几个示例保护规则(如FIPSR、NORMAL、DIR、DATAONLY),每个规则后面跟着一个等号以及要检查的文件属性列表,或者某些预定义的规则(由+分隔)。你也可以使用这种格式自定义规则。
+
+![](https://farm3.staticflickr.com/2947/15446746115_7d0a291b0a_o.png)
+
+    FIPSR = p+i+n+u+g+s+m+c+acl+selinux+xattrs+sha256
+    NORMAL = FIPSR+sha512
+
+例如,上面的例子说明,NORMAL规则将检查下列属性的不一致性:权限(p)、索引节点(i)、链接数(n)、用户(u)、组(g)、大小(s)、修改时间(m)、创建时间(c)、ACL(acl)、SELinux(selinux)、xattrs(xattr)、SHA256/SHA512校验和(sha256和sha512)。
+
+定义好的规则可以灵活地应用于不同的目录和文件(用正则表达式表示)。
+
+![](https://farm6.staticflickr.com/5601/15259978179_f93b757c56_o.png)
+
+条目之前的感叹号(!)告诉AIDE忽略该子目录(或目录中的文件),对于这些内容可以另外定义规则。
+
+在上面的例子中,PERMS是用于/etc及其子目录和文件的默认规则。而对/etc中的备份文件(如/etc/.*~)和/etc/mtab文件,则不应用任何规则。对于/etc中另外一些选定的子目录或文件,应用NORMAL规则,覆盖默认规则PERMS。
+
+为系统中正确的位置定义并应用正确的规则,是使用AIDE最难的部分,但作出好的判断是一个良好的开始。首要的一条规则是:不要检查不必要的属性。例如,检查/var/log或/var/spool里面文件的修改时间将导致大量误报,因为许多应用程序和守护进程经常会写入内容到这些位置,而这些写入都没有问题。此外,检查多个校验和可能会加强安全性,但随之而来的是AIDE运行时间的增加。
+
+如果你用MAILTO变量指定了电子邮件地址,就可以将检查结果发送到你的邮箱。将下面这一行放到/etc/aide.conf中的任何位置即可。
+
+    MAILTO=root@localhost
+
+### 首次运行AIDE ###
+
+运行以下命令来初始化AIDE数据库:
+
+    # aide --init
+
+![](https://farm3.staticflickr.com/2942/15446399402_198472e983_o.png)
+
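补充一个上文所述规则语法的最小示例(规则名MYRULE与路径/usr/local/bin都只是演示用的假设;修改aide.conf之后,需要重新运行--init或--update来重建数据库):

```
# /etc/aide.conf 片段:自定义一个规则并应用到某个目录
# MYRULE 检查权限、inode、链接数、属主、属组、大小、mtime 和 sha256
MYRULE = p+i+n+u+g+s+m+sha256
/usr/local/bin MYRULE
!/usr/local/bin/tmp
```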
+根据/etc/aide.conf生成的/var/lib/aide/aide.db.new.gz文件需要被重命名为/var/lib/aide/aide.db.gz,以便AIDE能读取它:
+
+    # mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
+
+现在,是时候将我们的系统与数据库进行第一次校对了。任务很简单,只需运行:
+
+    # aide
+
+在没有选项时,AIDE假定使用了--check选项。
+
+如果在数据库创建后没有对系统做过任何修改,AIDE将会以OK信息来结束本次校对。
+
+![](https://farm3.staticflickr.com/2948/15260041950_f568b3996a_o.png)
+
+### 生产环境中管理AIDE ###
+
+在构建了初始的AIDE数据库后,随着日常系统管理活动的进行,你常常会出于某些合法的理由更新受保护的服务器。每次服务器更新后,你必须重新构建AIDE数据库,把改动加入数据库。要完成该任务,请执行以下命令:
+
+    # aide --update
+
+要使用AIDE保护生产系统,最好通过计划任务周期性调用AIDE来检查不一致性。例如,要让AIDE每天运行一次,并将结果发送到邮箱:
+
+    # crontab -e
+
+----------
+
+    0 0 * * * /usr/sbin/aide --check | /usr/bin/mail -s "AIDE run for $HOSTNAME" your@email.com
+
+### 测试AIDE检查文件篡改 ###
+
+下面的测试场景将演示AIDE是如何检查文件的完整性的。
+
+#### 测试场景 1 ####
+
+让我们添加一个新文件(如/etc/fake)。
+
+    # cat /dev/null > /etc/fake
+
+![](https://farm3.staticflickr.com/2941/15260140358_f1d758d354_o.png)
+
+#### 测试场景 2 ####
+
+让我们修改文件权限,然后看看它是否被检测到。
+
+    # chmod 644 /etc/aide.conf
+
+#### 测试场景 3 ####
+
+最后,让我们修改文件内容(如,添加一个注释行到/etc/aide.conf)。
+
+    echo "#This is a comment" >> /etc/aide.conf
+
+![](https://farm4.staticflickr.com/3936/15259978229_3ff1ea950e_b.jpg)
+
+上面的截图中,第一栏显示了文件的属性,第二栏是AIDE数据库中的值,而第三栏是更新后的值。第三栏中空白的部分表示该属性没有改动(如本例中的ACL)。
+
+### 结尾 ###
+
+如果你曾经发现自己有很好的理由相信系统被入侵了,但第一眼又不能确定到底哪些东西被改动了,那么像AIDE这样的基于主机的入侵检测系统就会很有帮助,因为它可以帮助你很快识别出哪些东西被改动过,而不是通过猜测来浪费宝贵的时间。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/host-intrusion-detection-system-centos.html
+
+作者:[Gabriel Cánepa][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/gabriel
+[1]:http://aide.sourceforge.net/
diff --git a/translated/tech/20141009 Linux Terminal--An lsof Primer.md b/translated/tech/20141009 Linux Terminal--An lsof Primer.md
new file mode 100644
index 0000000000..2485801eb3
--- 
/dev/null
+++ b/translated/tech/20141009 Linux Terminal--An lsof Primer.md
@@ -0,0 +1,252 @@
+Linux终端:lsof入门
+================================================================================
+![](http://cdn.linuxaria.com/wp-content/uploads/2011/06/tux-terminal.jpg)
+
+本文由Daniel Miessler撰写,首发于他的[博客][1]。
+
+**lsof**是系统管理/[安全][2]领域的超级工具。我大多数时候用它来从系统获取与[网络][3]连接相关的信息,但那只是这个强大而又鲜为人知的工具的冰山一角。这个工具叫lsof真是名副其实,因为它的意思就是“**列出打开的文件(lists open files)**”。而有一点要切记,在Unix中一切(包括网络套接口)都是文件。
+
+有趣的是,lsof也是开关最多的Linux/Unix命令之一。它的开关多到不得不同时用上-和+两种前缀。
+
+    usage: [-?abhlnNoOPRstUvV] [+|-c c] [+|-d s] [+D D] [+|-f[cgG]]
+    [-F [f]] [-g [s]] [-i [i]] [+|-L [l]] [+|-M] [-o [o]]
+    [-p s] [+|-r [t]] [-S [t]] [-T [t]] [-u s] [+|-w] [-x [fl]] [--] [names]
+
+正如你所见,lsof的选项数量多得惊人。你可以用它来获取系统上设备的信息,了解指定的用户正在对哪些文件和网络连接动手,甚至是某个进程正在使用哪些文件或网络连接。
+
+对于我,lsof完全取代了netstat和ps的工作。它能带来那些工具所能带来的一切,而且还要多得多。那么,让我们来看看它的一些基本用法吧:
+
+### 关键选项 ###
+
+理解lsof工作方式的几个关键点很重要。最重要的一点是,当你传递多个选项时,默认行为是对结果取或(OR)。因此,如果你用-i拉出一个端口列表,同时又用-p拉出一个进程列表,那么默认情况下你会获得两者的结果。
+
+下面是一些其它需要牢记的要点:
+
+- **default** : 没有选项时,lsof列出活跃进程的所有打开文件
+- **grouping** : 选项可以分组书写,如-abc,但要当心哪些选项需要参数
+- **-a** : 对结果进行与运算(而不是或)
+- **-l** : 在输出中显示用户ID而不是用户名
+- **-h** : 获得帮助
+- **-t** : 仅获取进程ID
+- **-U** : 获取UNIX套接口地址
+- **-F** : 格式化输出结果,以便其它命令处理,可以通过多种方式格式化,如-F pcfn(用于进程id、命令名、文件描述符、文件名,并以空字符结尾)
+
+#### 获取网络信息 ####
+
+正如我所说的,我主要将lsof用于获取系统如何与网络交互的信息。这里是相关的一些用法:
+
+### 使用-i显示所有连接 ###
+
+有些人喜欢用netstat来获取网络连接,但是我更喜欢用lsof。结果的呈现方式对我来说很直观,而且我知道,只需稍微改变语法,就可以通过同样的命令获取更多信息。
+
+    # lsof -i
+
+    COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
+    dhcpcd 6061 root 4u IPv4 4510 UDP *:bootpc
+    sshd 7703 root 3u IPv6 6499 TCP *:ssh (LISTEN)
+    sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)
+
+### 使用-i 6仅获取IPv6流量 ###
+
+    # lsof -i 6
+
+### 仅显示TCP连接(同理可获得UDP连接) ###
+
+你也可以通过在-i后提供对应的协议来只显示TCP或者UDP连接信息。
+
+    # lsof -iTCP
+
+    COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
+    sshd 7703 root 3u IPv6 6499 TCP *:ssh (LISTEN)
+    sshd 7892 root 
3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED) + +### 使用-i:port来显示与指定端口相关的网络信息 ### + +或者,你也可以通过端口搜索,这对于要找出什么阻止了另外一个应用绑定到指定端口实在是太棒了。 + +# lsof -i :22 + + COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME + sshd 7703 root 3u IPv6 6499 TCP *:ssh (LISTEN) + sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED) + +### 使用@host来显示指定到指定主机的连接 ### + +这对于你在检查是否开放连接到网络中或互联网上某个指定主机的连接时十分有用。 + +# lsof -i@172.16.12.5 + + sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->172.16.12.5:49901 (ESTABLISHED) + +### 使用@host:port显示基于主机与端口的连接 ### + +你也可以组合主机与端口的显示信息。 + +# lsof -i@172.16.12.5:22 + + sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED) + +### 找出监听端口 ### + +找出正等候连接的端口。 + +# lsof -i -sTCP:LISTEN + +你也可以grep “LISTEN”来完成该任务。 + +# lsof -i | grep -i LISTEN + + iTunes 400 daniel 16u IPv4 0x4575228 0t0 TCP *:daap (LISTEN) + +### 找出已建立的连接 ### + +你也可以显示任何已经连接的连接。 + +# lsof -i -sTCP:ESTABLISHED + +你也可以通过grep搜索“ESTABLISHED”来完成该任务。 + +# lsof -i | grep -i ESTABLISHED + + firefox-b 169 daniel 49u IPv4 0t0 TCP 1.2.3.3:1863->1.2.3.4:http (ESTABLISHED) + +#### 用户信息 #### + +你也可以获取各种用户的信息,以及它们在系统上正干着的事情,包括它们的网络活动、对文件的操作等。 + +### 使用-u显示指定用户打开了什么 ### + +# lsof -u daniel + + -- snipped -- + Dock 155 daniel txt REG 14,2 2798436 823208 /usr/lib/libicucore.A.dylib + Dock 155 daniel txt REG 14,2 1580212 823126 /usr/lib/libobjc.A.dylib + Dock 155 daniel txt REG 14,2 2934184 823498 /usr/lib/libstdc++.6.0.4.dylib + Dock 155 daniel txt REG 14,2 132008 823505 /usr/lib/libgcc_s.1.dylib + Dock 155 daniel txt REG 14,2 212160 823214 /usr/lib/libauto.dylib + -- snipped -- + +### 使用-u ^user来显示除指定用户以外的其它所有用户所做的事情 ### + +# lsof -u ^daniel + + -- snipped -- + Dock 155 jim txt REG 14,2 2798436 823208 /usr/lib/libicucore.A.dylib + Dock 155 jim txt REG 14,2 1580212 823126 /usr/lib/libobjc.A.dylib + Dock 155 jim txt REG 14,2 2934184 823498 /usr/lib/libstdc++.6.0.4.dylib + Dock 155 jim txt REG 14,2 132008 823505 /usr/lib/libgcc_s.1.dylib + Dock 155 jim txt 
REG 14,2 212160 823214 /usr/lib/libauto.dylib + -- snipped -- + +### 杀死指定用户所做的一切事情 ### + +可以消灭指定用户运行的所有东西,这真不错。 + +# kill -9 `lsof -t -u daniel` + +#### 命令和进程 #### + +可以查看指定程序或进程由什么决定,这通常会很有用,而你可以使用lsof通过名称或进程ID过滤来完成这个任务。下面列出了一些选项: + +### 使用-c查看指定的命令正在使用的文件和网络连接 ### + +# lsof -c syslog-ng + + COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME + syslog-ng 7547 root cwd DIR 3,3 4096 2 / + syslog-ng 7547 root rtd DIR 3,3 4096 2 / + syslog-ng 7547 root txt REG 3,3 113524 1064970 /usr/sbin/syslog-ng + -- snipped -- + +### 使用-p查看指定进程ID已打开的内容 ### + +# lsof -p 10075 + + -- snipped -- + sshd 10068 root mem REG 3,3 34808 850407 /lib/libnss_files-2.4.so + sshd 10068 root mem REG 3,3 34924 850409 /lib/libnss_nis-2.4.so + sshd 10068 root mem REG 3,3 26596 850405 /lib/libnss_compat-2.4.so + sshd 10068 root mem REG 3,3 200152 509940 /usr/lib/libssl.so.0.9.7 + sshd 10068 root mem REG 3,3 46216 510014 /usr/lib/liblber-2.3 + sshd 10068 root mem REG 3,3 59868 850413 /lib/libresolv-2.4.so + sshd 10068 root mem REG 3,3 1197180 850396 /lib/libc-2.4.so + sshd 10068 root mem REG 3,3 22168 850398 /lib/libcrypt-2.4.so + sshd 10068 root mem REG 3,3 72784 850404 /lib/libnsl-2.4.so + sshd 10068 root mem REG 3,3 70632 850417 /lib/libz.so.1.2.3 + sshd 10068 root mem REG 3,3 9992 850416 /lib/libutil-2.4.so + -- snipped -- + +### -t选项只返回PID ### + +# lsof -t -c Mail + + 350 + +#### 文件和目录 #### + +通过查看指定文件或目录,你可以看到系统上所有正与其交互的资源——包括用户、进程等。 + +#### 显示与指定目录交互的所有一切 #### + +# lsof /var/log/messages/ + + COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME + syslog-ng 7547 root 4w REG 3,3 217309 834024 /var/log/messages + +### 显示与指定文件交互的所有一切 ### + +# lsof /home/daniel/firewall_whitelist.txt + +#### 高级用法 #### + +与[tcpdump][4]类似,当你开始组合查询时,它就显示了它强大的功能。 + +### 显示daniel连接到1.1.1.1所做的一切 ### + +# lsof -u daniel -i @1.1.1.1 + + bkdr 1893 daniel 3u IPv6 3456 TCP 10.10.1.10:1234->1.1.1.1:31337 (ESTABLISHED) + +### 同时使用-t和-c选项以挂起进程 ### + +# kill -HUP `lsof -t -c sshd` + +### lsof +L1显示所有打开的链接数小于1的文件 ### + 
+这通常(当不总是)表示某个攻击者正尝试通过取消文件链接来隐藏文件。 + +# lsof +L1 + + (hopefully nothing) + +### 显示某个端口范围的开放连接 ### + + # lsof -i @fw.google.com:2150=2180 + +#### 结尾 #### + +This primer just scratches the surface of lsof‘s functionality. For a full reference, run man lsof or check out [the online version][5]. I hope this has been useful to you, and as always,[comments and corrections are welcomed][6]. +本入门教程只是管窥了lsof功能的一斑,要查看完整参考,运行man lsof命令或查看[在线版本][5]。希望本文对你有所助益,也随时[欢迎你的评论和指正][6]。 + +### 资源 ### + +- lsof手册页:[http://www.netadmintools.com/html/lsof.man.html][7] + +-------------------------------------------------------------------------------- + +via: http://linuxaria.com/howto/linux-terminal-an-lsof-primer + +作者:[Daniel Miessler][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/101727609700016666852/posts?rel=author +[1]:http://danielmiessler.com/study/lsof/ +[2]:http://linuxaria.com/tag/security +[3]:http://linuxaria.com/tag/network +[4]:http://danielmiessler.com/study/tcpdump/ +[5]:http://www.netadmintools.com/html/lsof.man.html +[6]:http://danielmiessler.com/connect/ +[7]:http://www.netadmintools.com/html/lsof.man.html diff --git a/translated/tech/20141009 Linux or UNIX wget command with practical examples.md b/translated/tech/20141009 Linux or UNIX wget command with practical examples.md new file mode 100644 index 0000000000..7a720c725b --- /dev/null +++ b/translated/tech/20141009 Linux or UNIX wget command with practical examples.md @@ -0,0 +1,127 @@ +Linux/Unix wget命令实例 +================================================================================ +wget是Linux/Unix命令行**文件下载器**,它是下载网站上文件的免费的非交互下载工具,它支持**HTTP**、**HTTPS**和**FTP**协议,也支持通过HTTP代理检索。Wget是非交互的,这就是说它可以在用户没有登录到系统时在后台工作。 + +在本帖中,我们将讨论wget命令的一些不同使用实例。 + +### 实例:1 下载单个文件 ### + + # wget 
http://mirror.nbrc.ac.in/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-DVD.iso + +该命令会下载CentOS 7 ISO文件到用户当前工作目录中。 + +### 实例:2 续传分段下载文件 ### + +总有那么一些场景,当我们开始下载一个大文件时,中途互联网却断开了。那样的话,我们可以使用wget命令的‘**-c**’选项,让下载从断点续传。 + + # wget -c http://mirror.nbrc.ac.in/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-DVD.iso + +![](http://www.linuxtechi.com/wp-content/uploads/2014/09/wget-resume-download-1024x111-1.jpg) + +### 实例:3 后台下载文件 ### + +我们可以通过在wget命令中使用‘-b’选项来让它在后台下载文件。 + + linuxtechi@localhost:~$ wget -b http://mirror.nbrc.ac.in/centos/7.0.1406/isos/x86_64/ + CentOS-7.0-1406-x86_64-DVD.iso + Continuing in background, pid 4505. + Output will be written to ‘wget-log’. + +正如我们上面所见,下载进程被捕获到用户当前目录中的‘wget-log’文件中。 + + linuxtechi@localhost:~$ tail -f wget-log + 2300K ………. ………. ………. ………. ………. 0% 48.1K 18h5m + 2350K ………. ………. ………. ………. ………. 0% 53.7K 18h9m + 2400K ………. ………. ………. ………. ………. 0% 52.1K 18h13m + 2450K ………. ………. ………. ………. ………. 0% 58.3K 18h14m + 2500K ………. ………. ………. ………. ………. 0% 63.6K 18h14m + 2550K ………. ………. ………. ………. ………. 0% 63.4K 18h13m + 2600K ………. ………. ………. ………. ………. 0% 72.8K 18h10m + 2650K ………. ………. ………. ………. ………. 0% 59.8K 18h11m + 2700K ………. ………. ………. ………. ………. 0% 52.8K 18h14m + 2750K ………. ………. ………. ………. ………. 0% 58.4K 18h15m + 2800K ………. ………. ………. ………. ………. 0% 58.2K 18h16m + 2850K ………. ………. ………. ………. ………. 
0% 52.2K 18h20m + +### 实例:4 限制下载速率 ### + +默认情况下,wget命令尝试以全速下载,但是有时候你可能使用的是共享互联网,那么如果你尝试使用wget来下载庞大的文件时,就会把其它用户的网络拖慢。这时,你如果使用‘-limit-rate’选项来限制下载速率,就可以避免这种情况的发生。 + + #wget --limit-rate=100k http://mirror.nbrc.ac.in/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-DVD.iso + +在上例中,下载速率被限制到了100k。 + +### 实例:5 使用‘-i’选项来下载多个文件 ### + +如果你想要使用wget命令来下载多个文件,那么首先要创建一个文本文件,并将所有的URL添加到该文件中。 + + # cat download-list.txt + url1 + url2 + url3 + url4 + +现在,发出以下命令吧: + + # wget -i download-list.txt + +### 实例:6 增加重试次数 ### + +我们可以使用‘-tries’选项来增加重试次数。默认情况下,wget命令会重试20次以使下载成功。 + +该选项在你下载一个大文件的过程中互联网连接发生问题时十分有用,因为在那种情况下,会增加下载失败的几率。 + + # wget --tries=75 http://mirror.nbrc.ac.in/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-DVD.iso + +### 实例:7 使用-o选项来重定向wget日志到文件 ### + +我们可以使用‘-o’选项来重定向wget命令的日志到一个日志文件。 + + #wget -o download.log http://mirror.nbrc.ac.in/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-DVD.iso + +上面的命令会在用户当前目录下创建download.log文件。 + +### 实例:8 下载整个网站用于本地查看 ### + + # wget --mirror -p --convert-links -P ./ website-url + +鉴于 + +- **–mirror** : 开启适用于镜像的选项。 +- **-p** : 下载所有能正确显示指定HTML网页的全部必要文件。 +- **–convert-links** : 下载完成后,转换文档中的链接以用于本地查看。 +- -**P ./Local-Folder** : 保存所有文件和目录到指定的目录。 + +### 实例:9 下载过程中拒绝文件类型 ### + +当你正打算下载整个网站时,我们可以使用‘-reject’选项来强制wget不下载图片。 + + # wget --reject=png Website-To-Be-Downloaded + +### 实例:10 使用wget -Q设置下载配额 ### + +我们可以使用‘-Q’选项强制wget命令在下载大小超过特定大小时退出下载。 + + # wget -Q10m -i download-list.txt + +注意,配额不会对单个文件的下载产生影响。所以,如果你指定wget -Q10m ftp://wuarchive.wustl.edu/ls-lR.gz,ls-lR的全部内容都会被下载。这在下载命令行指定的多个URL时也一样。然而,在递归或从一个输入文件检索时,还是值得一用。因此,你可以安全地输入‘wget -Q10m -i download-list.txt’,在超过配额时,下载会退出。 + +### 实例:11 从密码保护的网站下载文件 ### + + # wget --ftp-user= --ftp-password= Download-URL + +另外一种指定用户名和密码的方式是在URL中。 + +任一方法都将你的密码揭露给了那些运行“ps”命令的人。要防止密码被查看到,将它们存储到.wgetrc或.netrc中,并使用“chmod”设置合适的权限来保护这些文件不让其他用户查看到。如果密码真的很重要,不要在它们还在文件里躺着的时候走开,在Wget开始下载后,编辑该文件,或者删除它们。 + +-------------------------------------------------------------------------------- + +via: 
http://www.linuxtechi.com/wget-command-practical-examples/
+
+作者:[Pradeep Kumar][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxtechi.com/author/pradeep/
diff --git a/translated/tech/How to configure SNMPv3 on ubuntu 14.04 server.md b/translated/tech/How to configure SNMPv3 on ubuntu 14.04 server.md
new file mode 100644
index 0000000000..ee8af99cca
--- /dev/null
+++ b/translated/tech/How to configure SNMPv3 on ubuntu 14.04 server.md
@@ -0,0 +1,97 @@
+在ubuntu14.04上配置SNMPv3
+============================================
+简单网络管理协议(SNMP)是用于IP网络设备管理的标准协议。典型的支持SNMP协议的设备有路由器、交换机、服务器、工作站、打印机及数据机柜等等。网络管理系统一般使用SNMP来监视网络附加设备上那些需要管理员介入处理的状况。SNMP是因特网协议套件中的一个组成部分,由IETF定义,它包含一系列的网络管理标准,其中有一个应用层协议、一个数据库架构以及一组数据对象。
+
+SNMP将管理数据以变量的形式暴露出来,这些变量描述了系统配置。同时这些变量可以被管理应用查询(或者设置)。
+
+### 为什么需要使用SNMPv3 ###
+
+尽管SNMPv3所增加的加密功能并不影响协议层面,但是新的文本惯例、概念及术语使得它看起来很不一样。
+
+SNMPv3在SNMP的基础之上增强了安全性以及远程配置功能。
+
+最初,SNMP最大的缺点就是安全性弱。SNMP的第一与第二个版本中,身份验证仅仅是在管理员与代理间传送一个明文的密码而已。现在,每一条SNMPv3消息都包含了以八位组字符串(octet string)编码的安全参数,这些安全参数的具体意义由所选用的安全模型决定。
+
+SNMPv3提供了重要的安全特性:
+
+保密性 -- 加密数据包,以防止未经授权的源监听。
+
+完整性 -- 数据完整性特性确保数据在传输的时候没有被篡改,并且包含了可选的防重放保护机制。
+
+身份验证 -- 检查数据是否来自一个合法的源。
+
+### 在ubuntu中安装SNMP服务器及客户端 ###
+
+打开终端运行下列命令:
+
+    sudo apt-get install snmpd snmp
+
+安装完成后需要做如下改变。
+
+###配置SNMPv3###
+
+允许从外部访问守护进程
+
+默认的安装仅提供本地的访问权限,如果想要获得外部访问权限的话,需要编辑文件 /etc/default/snmpd。
+
+    sudo vi /etc/default/snmpd
+
+改变下列内容
+
+将
+
+    SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /var/run/snmpd.pid'
+
+改为
+
+    SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf'
+
+最后重启 snmpd
+
+    sudo /etc/init.d/snmpd restart
+
+###定义 SNMPv3 用户,身份验证以及加密参数 ###
+
+SNMPv3 可以工作在多种不同的安全级别下,由“securityLevel”参数决定。
+
+noAuthNoPriv -- 不认证也不加密,没有任何安全保护;authNoPriv -- 需要身份认证,但是不对数据进行加密;authPriv -- 最健壮的模式,既要求身份认证,又对数据进行加密。
+
+snmpd 
的配置以及设置都保存在文件 /etc/snmp/snmpd.conf 中。使用编辑器编辑该文件:
+
+    sudo vi /etc/snmp/snmpd.conf
+
+在文件末尾添加以下内容:
+
+    #
+    createUser user1
+    createUser user2 MD5 user2password
+    createUser user3 MD5 user3password DES user3encryption
+    #
+    rouser user1 noauth 1.3.6.1.2.1.1
+    rouser user2 auth 1.3.6.1.2.1
+    rwuser user3 priv 1.3.6.1.2.1
+
+注:如果你需要使用自己的用户名/密码对的话,请注意密码及加密短语的最小长度是8个字符。
+
+同时,你需要做如下的配置,以便snmpd可以监听来自任何接口的连接请求。
+
+将
+
+    #agentAddress udp:161,udp6:[::1]:161
+
+改为
+
+    agentAddress udp:161,udp6:[::1]:161
+
+保存改变后的snmpd.conf文件,并重启守护进程:
+
+    sudo /etc/init.d/snmpd restart
+
+--------------------------------------------------------------------------------
+
+via: http://www.ubuntugeek.com/how-to-configure-snmpv3-on-ubuntu-14-04-server.html
+
+译者:[SPccman](https://github.com/SPccman)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
diff --git a/translated/tech/How-to-debug-a-C or C++ program with GDB command-line debugger.md b/translated/tech/How-to-debug-a-C or C++ program with GDB command-line debugger.md
new file mode 100644
index 0000000000..b96bcab09b
--- /dev/null
+++ b/translated/tech/How-to-debug-a-C or C++ program with GDB command-line debugger.md
@@ -0,0 +1,166 @@
+使用GDB命令行调试器调试C/C++程序
+============================================================
+没有调试器的情况下编写程序时,最糟糕的状况是什么?编译时跪着祈祷不要出错?像献祭一样虔诚地运行可执行程序?或者在每一行代码间添加printf("test")语句来定位错误点?如你所知,编写程序时不使用调试器是很不方便的。幸好,Linux下调试还是很方便的。大多数人使用的IDE都集成了调试器,但Linux下最著名的调试器是命令行形式的C/C++调试器GDB。然而,与其他命令行工具一样,GDB需要一定的练习才能完全掌握。这里,我会告诉你GDB的基本情况及使用方法。
+
+###安装GDB###
+
+大多数的发行版仓库中都有GDB
+
+Debian 或 Ubuntu
+
+    $ sudo apt-get install gdb
+
+Arch Linux
+
+    $ sudo pacman -S gdb
+
+Fedora、CentOS 或 RHEL:
+
+    $ sudo yum install gdb
+
+如果在仓库中找不到的话,可以从[官方页面][1]下载。
+
+###示例代码###
+
+当学习GDB时,最好有一份代码,动手试验。下列代码是我编写的简单例子,它可以很好地体现GDB的特性。将它拷贝下来并且进行实验,这是最好的方法。
+
+    #include <stdio.h>
+    #include <stdlib.h>
+
+    int main(int argc, char **argv)
+    {
+        int i;
+        int a=0, b=0, c=0;
+        double d;
+        for (i=0; i<100; i++)
+        {
+            a++;
+            if (i>97)
+                d = i / 2.0;
+            b++;
+        }
+        return 0;
+    }
+
+###GDB的使用###
+
+首先最重要的是,你需要使用编译器的“-g”选项来编译程序,这样可执行程序才能通过GDB来调试。通过下列语句开始调试:
+
+    $ gdb -tui [executable's name]
+
+使用“-tui”选项可以将代码显示在一个窗口内(被称为“文本界面”),在这个窗口内可以使用光标来操控,同时在下面输入GDB shell命令。
+
+![](https://farm3.staticflickr.com/2947/15397534362_ac0b5692c8_z.jpg)
+
+现在我们可以在程序的任何地方设置断点。你可以通过下列命令来为当前源文件的某一行设置断点:
+
+    break [line number]
+
+或者为一个特定的函数设置断点:
+
+    break [function name]
+
+甚至可以设置条件断点:
+
+    break [line number] if [condition]
+
+例如,在我们的示例代码中,可以设置如下:
+
+    break 11 if i > 97
+
+![](https://farm3.staticflickr.com/2948/15374839066_8c7c0eb8a4_o.png)
+
+这样,程序循环97次之后会停留在“a++”语句上。这样是非常方便的,避免了我们手动执行97次循环。
+
+最后但同样重要的是,我们可以设置一个“观察断点”,当被观察的变量发生变化时,程序会停止运行:
+
+    watch [variable]
+
+可以设置如下:
+
+    watch d
+
+当d的值发生变化时程序会停止运行(例如,当i>97为真时)。
+
+设置好断点后,使用“run”命令开始运行程序,或按如下所示:
+
+    r [程序的输入参数(如果有的话)]
+
+gdb中,大多数的命令都可以简写为一个字母。
+
+不出意外,程序会停留在第11行。这里,我们可以做些有趣的事情。下列命令:
+
+    bt
+
+即回溯(backtrace)功能,可以让我们知道程序是如何到达这条语句的。
+
+![](https://farm3.staticflickr.com/2943/15211202760_1e77a3bb2e_z.jpg)
+
+    info locals
+
+这条语句会显示所有的局部变量以及它们的值(你可以看到,我没有为d设置初始值,所以它现在的值是任意值)。
+
+当然:
+
+![](https://farm4.staticflickr.com/3843/15374838916_8b65e4e3c7_z.jpg)
+
+    p [variable]
+
+这可以显示特定变量的值,但是还有更好的:
+
+    ptype [variable]
+
+可以显示变量的类型。所以这里可以确定d是double型。
+
+![](https://farm4.staticflickr.com/3881/15397534242_3cb6163252_o.jpg)
+
+既然已经到这一步了,我们不妨这么做:
+
+    set var [variable] = [new value]
+
+这样会覆盖变量的值。不过需要注意,你不能创建一个新的变量或改变变量的类型。我们可以这样做:
+
+    set var a = 0
+
+![](https://farm3.staticflickr.com/2949/15211357497_d28963a9eb_o.png)
+
+如其他优秀的调试器一样,我们可以单步调试:
+
+    step
+
+使用如上命令,运行到下一条语句,有可能进入到一个函数里面。或者使用:
+
+    next
+
+这会直接执行下一条语句,并且不进入子函数内部。
+
+![](https://farm4.staticflickr.com/3927/15397863215_fb2f5912ac_o.jpg)
+
+结束测试后,删除断点:
+
+    delete [line number]
+
+从当前断点继续运行程序:
+
+    continue
+
+退出GDB:
+
+    quit
+
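上面的整套流程也可以串成一个可复现的批处理调试会话。下面是一个示意脚本(其中的文件名 sample.c、sample 均为本例的假设),它把示例代码写入文件、用 -g 编译,然后以批处理模式运行 GDB,在第11行设置条件断点并打印回溯与局部变量:

```shell
# 示意脚本:自动重现上文的调试流程(sample.c / sample 为假设的文件名)
cat > sample.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int i;
    int a=0, b=0, c=0;
    double d;
    for (i=0; i<100; i++)
    {
        a++;
        if (i>97)
            d = i / 2.0;
        b++;
    }
    return 0;
}
EOF
# 必须带 -g 编译,GDB 才能把机器码对应回源码行号
gcc -g sample.c -o sample
# -batch 模式下依次执行:设条件断点、运行、回溯、查看局部变量
gdb -batch -ex 'break sample.c:11 if i > 97' \
    -ex 'run' -ex 'bt' -ex 'info locals' ./sample
```

程序会在 i 等于98时停在第11行的“a++”上,此时 info locals 中 a 与 b 都还是98,与交互式调试看到的结果一致。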
+总结:有了GDB,编译时不用再跪着祈祷了,运行时也不用再献祭了,更不用到处写printf("test")了。当然,这里所讲的并不完整,而且GDB的功能远不止于此,所以我强烈建议你更加深入地学习它。我现在感兴趣的是将GDB整合到Vim中。同时,这里有一个[备忘录][2]记录了GDB所有的命令,以供查阅。
+
+你对GDB有什么看法?你会将它与图形调试器对比吗?它有什么优势呢?对于将GDB集成到Vim有什么看法呢?将你的想法写到评论里。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/gdb-command-line-debugger.html
+
+作者:[Adrien Brochard][a]
+译者:[SPccman](https://github.com/SPccman)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/adrien
+[1]:https://www.gnu.org/software/gdb/
+[2]:http://users.ece.utexas.edu/~adnan/gdb-refcard.pdf
diff --git a/translated/tech/Sysstat – All-in-One System Performance and Usage Activity Monitoring Tool For Linux.md b/translated/tech/Sysstat – All-in-One System Performance and Usage Activity Monitoring Tool For Linux.md
new file mode 100644
index 0000000000..2c28c2c7d2
--- /dev/null
+++ b/translated/tech/Sysstat – All-in-One System Performance and Usage Activity Monitoring Tool For Linux.md
@@ -0,0 +1,122 @@
+集所有功能于一身的Linux系统性能和使用活动监控工具-Sysstat
+===========================================================================
+**Sysstat**是一个非常方便的工具包,它带有众多的系统资源监控工具,用于监控系统的性能和使用情况。我们日常使用的工具中有相当一部分都来自sysstat工具包。同时,它还提供了一种利用cron计划任务来定期收集性能和活动数据的机制。
+
+![Install Sysstat in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/sysstat.png)
+
+在Linux系统中安装Sysstat
+
+下面列出的是包含在sysstat包中的工具:
+
+- [**iostat**][1]: 输出CPU的统计信息和所有I/O设备的输入输出(I/O)统计信息。
+- **mpstat**: 关于所有CPU的详细信息(单独输出或者分组输出)。
+- **pidstat**: 关于运行中的进程/任务、CPU、内存等的统计信息。
+- **sar**: 保存并输出不同系统资源(CPU、内存、I/O、网络、内核等)的详细信息。
+- **sadc**: 系统活动数据收集器,作为sar的后端,负责收集数据。
+- **sa1**: 收集sadc的输出并以二进制格式存入数据文件,与sadc工具配合使用。
+- **sa2**: 配合sar工具使用,产生每日的摘要报告。
+- **sadf**: 用于以不同的数据格式(CSV或者XML)来格式化sar工具的输出。
+- **Sysstat**: sysstat工具的man帮助页面。
+- **nfsiostat**: NFS(Network File System)的I/O统计信息。
+- **cifsiostat**: CIFS(Common Internet File System)的统计信息。
+
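前面提到的“利用cron制定数据收集计划”,通常就是由上面列出的sa1与sa2两个工具配合完成的。下面是一个示意性的cron配置片段(文件路径 /etc/cron.d/sysstat 与 /usr/lib64/sa/ 因发行版而异,此处均为假设):

```
# 每 10 分钟由 sa1 采集一次系统活动数据,写入 /var/log/sa/saDD
*/10 * * * * root /usr/lib64/sa/sa1 1 1
# 每天 23:53 由 sa2 汇总生成当日摘要报告 sarDD
53 23 * * * root /usr/lib64/sa/sa2 -A
```

之后便可以用 `sar -f /var/log/sa/saDD` 的方式回看当天任意时段的历史数据。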
+最近(在2014年6月17日),**sysstat 11.0.0**(稳定版)已经发布了,其中新增了一些有趣的特性,如下:
+
+pidstat命令新增了一些新的选项:首先是“**-R**”选项,该选项会输出有关调度策略和任务优先级的信息;然后是“**-G**”选项,通过这个选项我们可以按名称搜索进程,并列出所有匹配的线程。
+
+sar、sadc和sadf命令在数据文件方面同样带来了一些功能上的增强。以往数据文件只能命名为“**saDD**”,现在使用**-D**选项可以将其命名为“**saYYYYMMDD**”。同样的,现在的数据文件也不必一定放在“**/var/log/sa**”目录中,我们可以使用“SA_DIR”变量来定义新的目录,该变量将应用于sa1和sa2命令。
+
+###在Linux系统中安装Sysstat###
+
+在主流的Linux发行版中,‘**Sysstat**’工具包可以从默认的软件仓库中安装。然而,默认软件仓库中的版本通常有点旧,因此,我们将会下载源代码包,编译安装最新版本(**11.0.0**版本)。
+
+首先,使用下面的链接下载最新版本的sysstat包,或者你可以使用**wget**命令直接在终端中下载。
+
+- [http://sebastien.godard.pagesperso-orange.fr/download.html][2]
+
+    # wget http://pagesperso-orange.fr/sebastien.godard/sysstat-11.0.0.tar.gz
+
+![Download Sysstat Package](http://www.tecmint.com/wp-content/uploads/2014/08/Download-Sysstat.png)
+
+下载Sysstat包
+
+然后解压缩下载下来的包,进入该目录,开始编译安装:
+
+    # tar -xvf sysstat-11.0.0.tar.gz
+    # cd sysstat-11.0.0/
+
+这里,你有两种编译安装的方法:
+
+a). 第一,你可以使用**iconfig**(这将会给予你很大的灵活性,你可以选择/输入每个参数的自定义值):
+
+    # ./iconfig
+
+![Sysstat iconfig Command](http://www.tecmint.com/wp-content/uploads/2014/08/Sysstat-iconfig-Command.png)
+
+Sysstat的iconfig命令
+
+b). 第二,你可以使用标准的**configure**命令,在命令行中定义所有选项。你可以运行 **./configure --help** 命令来列出所支持的所有选项:
+
+    # ./configure --help
+
+![Sysstat Configure Help](http://www.tecmint.com/wp-content/uploads/2014/08/Configure-Help.png)
+
+Sysstat的configure --help
+
+在这里,我们使用标准的**./configure**命令来编译安装sysstat工具包:
+
+    # ./configure
+    # make
+    # make install
+
+![Configure Sysstat in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/Configure-Sysstat.png)
+
+在Linux系统中配置sysstat
+
+编译完成后,我们将会看到一些类似于上图的输出。现在运行如下命令来查看sysstat的版本:
+
+    # mpstat -V
+
+    sysstat version 11.0.0
+    (C) Sebastien Godard (sysstat orange.fr)
+
+###在Linux系统中更新sysstat###
+
+默认情况下,sysstat使用“**/usr/local**”作为其安装目录前缀。因此,所有的二进制文件/工具都会安装在“**/usr/local/bin**”目录中。如果你的系统已经安装过sysstat工具包,则上面提到的二进制文件/工具有可能在“**/usr/bin**”目录中。
+
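在动手更新之前,可以先确认现有的二进制文件到底装在哪里、shell实际解析到的是哪一份。下面是一段纯检查用的示意命令(以mpstat为例,补充示例,并非原文步骤):

```shell
# 检查 /usr/bin 与 /usr/local/bin 下是否各有一份 mpstat
for p in /usr/bin/mpstat /usr/local/bin/mpstat; do
    if [ -x "$p" ]; then
        echo "found: $p"
    fi
done
# 查看 $PATH 实际解析到的是哪一份(未安装时给出提示)
if command -v mpstat >/dev/null 2>&1; then
    echo "PATH 解析到: $(command -v mpstat)"
else
    echo "PATH 中未找到 mpstat"
fi
```

如果两处各有一份,而$PATH先解析到旧版本,更新后看到的就仍会是旧版本号。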
+因为“**$PATH**”变量不包含“**/usr/local/bin**”路径,你在更新时可能会失败。因此,确保“**/usr/local/bin**”路径包含在“$PATH”环境变量中,或者在更新前,在编译和卸载旧版本时将**-prefix**选项指定值为“**/usr**”。 + + # yum remove sysstat [On RedHat based System] + # apt-get remove sysstat [On Debian based System] + +---------- + + # ./configure --prefix=/usr + # make + # make install + +现在,使用‘mpstat’命令的‘**-V**’选项查看更新后的版本。 + + # mpstat -V + + sysstat version 11.0.0 + (C) Sebastien Godard (sysstat orange.fr) + +**参考**: 更多详细的信息请到 [Sysstat Documentation][3] + +在我的下一篇文章中,我将会展示一些sysstat命令使用的实际例子,敬请关注更新。别忘了在下面评论框中留下您宝贵的意见。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/install-sysstat-in-linux/ + +作者:[Kuldeep Sharma][a] +译者:[cvsher](https://github.com/cvsher) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/kuldeepsharma47/ +[1]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/ +[2]:http://sebastien.godard.pagesperso-orange.fr/download.html +[3]:http://sebastien.godard.pagesperso-orange.fr/documentation.html diff --git a/translated/tech/[翻译完成]20140929 Learning Vim in 2014--Working with Files.md b/translated/tech/[翻译完成]20140929 Learning Vim in 2014--Working with Files.md new file mode 100644 index 0000000000..0a1eb2a3f3 --- /dev/null +++ b/translated/tech/[翻译完成]20140929 Learning Vim in 2014--Working with Files.md @@ -0,0 +1,113 @@ +2014年学习如何使用vim处理文件工作 +================================================================================ + +作为一名开发者你不能够只化时间去写你想要的代码。其中最难以处理的部分是我的工作只使用vim来处理文本。我感觉到很无语与无比的蛋疼,vim没有自己额外文件查看系统与内部打开与切换文件功能。因此,继vim之后,我主要使用Eclipse 和 Sublime Text. 
+
+后来,我投入时间深度定制了vim的文件管理功能,打造出的工作环境甚至比那些可视化编辑器还要好用。因为是纯键盘操作,我可以更快地在代码间穿梭。这篇文章会先介绍vim内建的文件处理功能,再带你认识一些更高级的文件管理插件。
+
+### 基础篇:打开新文件 ###
+
+学习vim的最大障碍之一是缺少可视化提示。不像现代的GUI编辑器,在终端打开一个新的vim时,没有任何明显的提示告诉你该做什么,所有事情都靠键盘完成,也没有多少友好的界面交互。vim新手需要习惯靠自己查找一些基本的操作指令。好吧,让我们开始学习基础吧。
+
+创建新文件的命令是**:e**或**:e <文件名>**,它会打开一个新缓冲区来保存文件内容。如果文件不存在,它也会开辟一个缓冲区,用来保存与修改你指定的文件。缓冲区是vim的术语,意为“保存在内存中的文本块”。文本可以与已存在的文件关联,你打开的每个文件都对应一个缓冲区。
+
+打开文件并修改之后,你可以使用**:w**命令把缓冲区的内容保存到文件里。如果缓冲区还没有关联文件,或者你想保存到另外一个地方,就需要使用**:w <文件名>**来保存到指定位置。
+
+这些是vim处理文件的基础知识,很多开发者都掌握了这些命令,这些技巧你都需要掌握。在此之上,vim还提供了很多值得深挖的技巧。
+
+### 缓冲区管理 ###
+
+基础掌握了,就让我来说说更多关于缓冲区的东西。vim处理打开的文件与其他编辑器有点不同:打开的文件不会以标签的形式排在一个显眼的地方,而是同一时刻只显示部分缓冲区。vim允许你同时打开多个缓冲区,一些显示出来,另一些则不显示。你可以用**:ls**来查看已经打开的缓冲区,这个命令会列出每个打开的缓冲区及其序号。你可以通过序号用**:b <缓冲区序号>**来切换,或者使用顺序移动命令**:bnext**和**:bprevious**,也可以使用它们的缩写**:bn**和**:bp**。
+
+这些命令是vim管理文件缓冲区的基础,但我发现它们并不符合我的思维方式。我不想关心缓冲区的顺序,我只想按照我的思路去到那个文件,或者留在当前文件。因此必须了解vim更深层的缓冲区模型。我不推荐你把这些内置命令作为主要的文件管理方案,但它们确实是强大可行的选择。
+
+![](http://benmccormick.org/content/images/2014/Jul/skitch.jpeg)
+
+### 分屏 ###
+
+分屏是vim最好用的文件管理功能之一。在vim中,你可以将当前窗口分割成2个窗口,并按照你的喜好重设它们的大小和排布。也就是说,我可以同时打开6个文件,每个文件都拥有自己大小的窗口。
+
+你可以通过命令**:sp <文件名>**来新建水平分割窗口,或者用**:vs <文件名>**新建垂直分割窗口,然后用快捷键把窗口调整成你想要的大小。老实说,我喜欢用鼠标处理这类任务,因为鼠标能够给我更加准确的宽度,而不需要去猜。
+
+创建新的分屏后,你需要使用**ctrl-w [h|j|k|l]**在分屏之间来回切换。这有一点笨拙,但这个操作非常重要、常用而高效。如果你经常使用分屏,我建议你在.vimrc里使用以下配置把它们映射为**ctrl-h**、**ctrl-j**等:
+
+    nnoremap <C-J> <C-W><C-J> "Ctrl-j to move down a split
+    nnoremap <C-K> <C-W><C-K> "Ctrl-k to move up a split
+    nnoremap <C-L> <C-W><C-L> "Ctrl-l to move right a split
+    nnoremap <C-H> <C-W><C-H> "Ctrl-h to move left a split
+
+### 跳转表 ###
+
+分屏解决了同时查看多个关联文件的问题,但我们仍然不能在打开的文件与隐藏的文件之间快速移动。这时,跳转表就是一个能解决问题的工具。
+
+跳转表初听起来陌生而奇怪,其实很简单:vim会追踪你的每一次跳转命令和你正在修改的文件。每次从一个分屏窗口跳到另一个,vim都会把这次跳转记录到跳转表里。它记录了你去过的所有地方,这样你就不需要担心之前的文件在哪里,可以使用快捷键快速追溯你的足迹。**ctrl-o**允许你返回上一次所在的地方,重复操作几次就能回到你最初编写代码的位置;**ctrl-i**则可以向前跳转。当你在多个文件之间调试或切换时,它们能让你移动得飞快。
+
+### 插件 ###
+
+如果你想让vim像Sublime Text或者Atom一样好用,我得先让你认清现实:直接上手的体验中有不少难懂、可怕和低效的地方。例如大家会抱怨:“Sublime有模糊查找,为什么vim一定要输入全路径才能够打开文件?”“没有侧边栏显示目录树,我怎样查看项目结构?”但vim有自己的解决方案,而且不需要破坏vim的核心理念,我只需要调整配置并添加一些新的插件。下面有3个实用的插件,可以让你像Sublime一样管理文件:
+
+- [CtrlP][1] 是一个与Sublime的“Go to Anything”栏类似的模糊查找插件。它快如闪电并且非常容易配置。我主要用它来打开文件:只需知道文件名的一部分,而不需要记住整个项目结构,就可以找到文件。
+
+- [The NERDTree][2] 是一个文件浏览器插件,它复刻了许多编辑器都有的侧边栏文件浏览功能。实际上我很少用它,对我来说模糊查找更快;但在接手一个新项目、想了解项目结构与可用资源时,它非常方便。NERDTree可以充分自定义配置,安装后可以代替vim内置的目录浏览工具。
+
+- [Ack.vim][3] 是一个专为vim打造的代码搜索插件,它允许你跨项目搜索文本。它封装了Ack与Ag这[两个极好用的搜索工具][4],允许你在任何时候在项目中快速搜索、跳转。
+
+依靠核心功能加上插件生态系统,vim提供了足够的工具,允许你构建想要的工作流程。文件管理是软件开发的最核心部分之一,你值得为此获得更好的体验。
+
+开始时需要花很长的时间去理解它们,找到你感觉舒服的工作流程,之后再往上添加工具。但这依然值得:当你不再需要绞尽脑汁去想“怎么操作”时,写代码自然更轻松。
+
+### 更多插件资源 ###
+
+- [Seamlessly Navigate Vim & Tmux Splits][5] 对每个同时使用vim分屏与[tmux][6]的人来说,这个插件都值得了解,它让tmux面板间的切换和vim分屏一样简单好用。
+
+- [Using Tab Pages][7] 名字里虽然带“tab”,容易让人误解,但vim的标签页并不是文件管理器式的标签。vim wiki上的这篇文章很好地概述了如何用“tab pages”在多个工作区之间切换。
+
+- [Vimcasts: The edit command][8] 一般来说,Vimcasts是学习vim的一个好资源。这集视频很好地讲述了之前提到的文件操作知识与一些内置的工作流程。
+
+### 订阅 ###
+
+这是“2014年学习vim”系列的第三篇文章。如果你喜欢这篇文章,可以通过[feed][9]订阅,或者通过[mailing list][10]邮件列表关注我。在这个星期的javascript小插曲之后,下星期我会继续介绍[vim的配置][11]。你也可以先看看本系列的前两篇文章:[基础篇][12]与[vim与vi的语言][13]。
+
+--------------------------------------------------------------------------------
+
+via: http://benmccormick.org/2014/07/07/learning-vim-in-2014-working-with-files/
+
+作者:[Ben McCormick][a]
+译者:[haimingfg](https://github.com/haimingfg)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://benmccormick.org/2014/07/07/learning-vim-in-2014-working-with-files/
+[1]:https://github.com/kien/ctrlp.vim
+[2]:https://github.com/scrooloose/nerdtree
+[3]:https://github.com/mileszs/ack.vim
+[4]:http://benmccormick.org/2013/11/25/a-look-at-ack/
+[5]:http://robots.thoughtbot.com/seamlessly-navigate-vim-and-tmux-splits
+[6]:http://tmux.sourceforge.net/
+[7]:http://vim.wikia.com/wiki/Using_tab_pages +[8]:http://vimcasts.org/episodes/the-edit-command/ +[9]:http://feedpress.me/benmccormick +[10]:http://eepurl.com/WFYon +[11]:http://benmccormick.org/2014/07/14/learning-vim-in-2014-configuring-vim/ +[12]:http://benmccormick.org/2014/06/30/learning-vim-in-2014-the-basics/ +[13]:http://benmccormick.org/2014/07/02/learning-vim-in-2014-vim-as-language/