Merge pull request #6 from LCTT/master

update
This commit is contained in:
Kevin Sicong Jiang 2015-07-13 14:45:34 -05:00
commit fc712fcf17
23 changed files with 1779 additions and 1538 deletions

View File

@ -1,26 +1,26 @@
Lolcat 一个在 Linux 终端中输出彩虹特效的命令行工具
Lolcat 一个在 Linux 终端中输出彩虹特效的命令行工具
================================================================================
那些相信 Linux 命令行是单调无聊且没有任何乐趣的人们,你们错了,这里有一些有关 Linux 的文章,它们展示着 Linux 是如何的有趣和“淘气” 。
- [20 个有趣的 Linux 命令或在终端中 Linux 是有趣的][1]
- [6 个有趣的好玩 Linux 命令(在终端中的乐趣)][2]
- [在 Linux 终端中的乐趣 把玩文字和字符计数][3]
- [Linux命令及Linux终端的20个趣事][1]
- [终端中的乐趣6个有趣的Linux命令行工具][2]
- [Linux终端的乐趣之把玩字词计数][3]
在本文中我将讨论一个名为“lolcat”的应用 在终端中生成彩虹般的颜色。
在本文中我将讨论一个名为“lolcat”的小工具 它可以在终端中生成彩虹般的颜色。
![为终端生成彩虹般颜色的输出的 Lolcat 命令](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Lolcat.png)
为终端生成彩虹般颜色的输出的 Lolcat 命令
*为终端生成彩虹般颜色的输出的 Lolcat 命令*
#### 何为 lolcat ? ####
Lolcat 是一个针对 Linux、BSD 和 OSX 平台的应用,它类似于 [cat 命令][4],并为 `cat` 的输出添加彩虹般的色彩。 Lolcat 原本用于在 Linux 终端中为文本添加彩虹般的色彩。
Lolcat 是一个针对 Linux、BSD 和 OSX 平台的工具,它类似于 [cat 命令][4],并为 `cat` 的输出添加彩虹般的色彩。 Lolcat 主要用于在 Linux 终端中为文本添加彩虹般的色彩。
### 在 Linux 中安装 Lolcat ###
**1. Lolcat 应用在许多 Linux 发行版本的软件仓库中都可获取到,但可获得的版本都有些陈旧,而你可以通过 git 仓库下载和安装最新版本的 lolcat。**
**1. Lolcat 工具在许多 Linux 发行版的软件仓库中都可获取到,但可获得的版本都有些陈旧,而你可以通过 git 仓库下载和安装最新版本的 lolcat。**
由于 Lolcat 是一个 ruby gem 程序,所以在你的系统中安装有最新版本的 RUBY 是必须的
由于 Lolcat 是一个 ruby gem 程序,所以在你的系统中必须安装有最新版本的 RUBY。
# apt-get install ruby [在基于 APT 的系统中]
# yum install ruby [在基于 Yum 的系统中]
@ -53,7 +53,7 @@ Lolcat 是一个针对 LinuxBSD 和 OSX 平台的应用,它类似于 [cat
![Lolcat 的帮助文档](http://www.tecmint.com/wp-content/uploads/2015/06/Lolcat-Help1.png)
Lolcat 的帮助文档
*Lolcat 的帮助文档*
**4. 接着, 通过管道连接 lolcat 和其他命令,例如 ps, date 和 cal:**
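这种管道用法大致如下(命令组合只是示意;若系统中尚未安装 lolcat,下面的写法会退回到普通的 cat,以便管道仍然可用):

```shell
# 若 lolcat 不存在则退回到 cat(仅为示意,实际使用时直接写 lolcat 即可)
COLORIZER=$(command -v lolcat || command -v cat)
ps aux | head -n 5 | "$COLORIZER"
date | "$COLORIZER"
cal | "$COLORIZER"
```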
@ -63,15 +63,15 @@ Lolcat 的帮助文档
![ps 命令的输出](http://www.tecmint.com/wp-content/uploads/2015/06/ps-command-output.png)
ps 命令的输出
*ps 命令的输出*
![Date 的输出](http://www.tecmint.com/wp-content/uploads/2015/06/Date.png)
Date 的输出
*Date 的输出*
![Calendar 的输出](http://www.tecmint.com/wp-content/uploads/2015/06/Cal.png)
Calendar 的输出
*Calendar 的输出*
**5. 使用 lolcat 来展示一个脚本文件的代码:**
@ -79,18 +79,18 @@ Calendar 的输出
![用 lolcat 来展示代码](http://www.tecmint.com/wp-content/uploads/2015/06/Script-Output.png)
用 lolcat 来展示代码
*用 lolcat 来展示代码*
**6. 通过管道连接 lolcat 和 figlet 命令。Figlet 是一个展示由常规的屏幕字符组成的巨大字符串的应用。我们可以通过管道将 figlet 的输出连接到 lolcat 中来出如下的多彩输出:**
**6. 通过管道连接 lolcat 和 figlet 命令。Figlet 是一个展示由常规的屏幕字符组成的巨大字符串的应用。我们可以通过管道将 figlet 的输出连接到 lolcat 中来展示出如下的多彩输出:**
# echo I ❤ Tecmint | lolcat
# figlet I Love Tecmint | lolcat
![多彩的文字](http://www.tecmint.com/wp-content/uploads/2015/06/Colorful-Text.png)
多彩的文字
*多彩的文字*
**注**: 毫无疑问 ❤ 是一个 unicode 字符并且为了安装 figlet你需要像下面那样使用 yum 和 apt 来得到这个软件包:
**注**: 注意, ❤ 是一个 unicode 字符。要安装 figlet你需要像下面那样使用 yum 和 apt 来得到这个软件包:
# apt-get install figlet
# yum install figlet
@ -102,7 +102,7 @@ Calendar 的输出
![动的文本](http://www.tecmint.com/wp-content/uploads/2015/06/Animated-Text.gif)
动的文本
*动的文本*
这里选项 `-a` 指的是 Animation(动画) `-d` 指的是 duration(持续时间)。在上面的例子中,持续 500 次动画。
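这两个选项合起来的用法大致如下(文件路径仅为示意;这里加了一个判断,未安装 lolcat 时给出提示而不是报错):

```shell
# -a 开启动画,-d 指定动画持续的次数(这里是 500 次)
if command -v lolcat >/dev/null 2>&1; then
    lolcat -a -d 500 /etc/hosts
else
    echo "lolcat not installed"
fi
```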
@ -112,7 +112,7 @@ Calendar 的输出
![多彩地显示文件](http://www.tecmint.com/wp-content/uploads/2015/06/List-Files-Colorfully.png)
多彩地显示文件
*多彩地显示文件*
**9. 通过管道连接 lolcat 和 cowsay。cowsay 是一个可配置的正在思考或说话的奶牛,这个程序也支持其他的动物。**
@ -136,15 +136,15 @@ Calendar 的输出
skeleton snowman sodomized-sheep stegosaurus stimpy suse three-eyes turkey
turtle tux unipony unipony-smaller vader vader-koala www
通过管道连接 lolcat 和 cowsay 后的输出并且使用了gnucowfile。
通过管道连接 lolcat 和 cowsay 后的输出并且使用了gnu形象的 cowfile。
# cowsay -f gnu ☛ Tecmint ☚ is the best Linux Resource Available online | lolcat
![使用 Lolcat 的 Cowsay](http://www.tecmint.com/wp-content/uploads/2015/06/Cowsay-with-Lolcat.png)
使用 Lolcat 的 Cowsay
*使用 Lolcat 的 Cowsay*
**注**: 你可以在管道中使用 lolcat 和其他任何命令来在终端中得到彩色的输出。
**注**: 你可以将 lolcat 和其他任何命令用管道连接起来,在终端中得到彩色的输出。
**10. 你可以为最常用的命令创建别名来使得命令的输出呈现出彩虹般的色彩。你可以像下面那样为 ls -l 命令创建别名,这个命令输出一个目录中包含内容的列表。**
@ -153,23 +153,24 @@ Calendar 的输出
![多彩的 Alias 命令](http://www.tecmint.com/wp-content/uploads/2015/06/Alias-Commands-with-Colorful.png)
多彩的 Alias 命令
*多彩的 Alias 命令*
你可以像上面建议的那样,为任何命令创建别名。为了使得别名永久生效,你必须添加相关的代码(上面的代码是 ls -l 的别名) 到 ~/.bashrc 文件中,并确保登出后再重新登录来使得更改生效。
你可以像上面建议的那样,为任何命令创建别名。为了使得别名永久生效,你需要添加相关的代码(上面的代码是 ls -l 的别名) 到 ~/.bashrc 文件中,并登出后再重新登录来使得更改生效。
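具体做法可以这样示意(别名的名字 `lolls` 是随意取的;为了演示安全起见,这里写入一个临时文件,实际使用时换成 ~/.bashrc 即可):

```shell
# 为 ls -l 定义一个彩虹别名并追加到 rc 文件中使其持久化
RC=$(mktemp)                                 # 演示用临时文件;实际应为 ~/.bashrc
echo "alias lolls='ls -l | lolcat'" >> "$RC"
grep "alias lolls" "$RC"                     # 输出: alias lolls='ls -l | lolcat'
rm -f "$RC"
```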
现在就是这些了。我想知道你是否曾经注意过 lolcat 这个工具?你是否喜欢这篇文章?欢迎在下面的评论环节中给出你的建议和反馈。喜欢并分享我们,帮助我们传播。
现在就是这些了。我想知道你是否曾经注意过 lolcat 这个应用?你是否喜欢这篇文章?欢迎在下面的评论环节中给出你的建议和反馈。喜欢并分享我们,帮助我们传播。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/lolcat-command-to-output-rainbow-of-colors-in-linux-terminal/
作者:[Avishek Kumar][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/20-funny-commands-of-linux-or-linux-is-fun-in-terminal/
[2]:http://www.tecmint.com/linux-funny-commands/
[3]:http://www.tecmint.com/play-with-word-and-character-counts-in-linux/
[1]:https://linux.cn/article-2831-1.html
[2]:https://linux.cn/article-4128-1.html
[3]:https://linux.cn/article-4088-1.html
[4]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/

View File

@ -0,0 +1,93 @@
LFS中文版手册发布如何打造自己的 Linux 发行版
================================================================================
您是否想过打造您自己的 Linux 发行版?每个 Linux 用户在他们使用 Linux 的过程中都想过做一个他们自己的发行版,至少一次。我也不例外,作为一个 Linux 菜鸟,我也考虑过开发一个自己的 Linux 发行版。从头开发一个 Linux 发行版这件事情被称作 Linux From Scratch(LFS)。
在开始之前,我总结了一些有关 LFS 的内容,如下:
**1. 那些想要打造他们自己的 Linux 发行版的人应该了解打造一个 Linux 发行版(打造意味着从头开始)与配置一个已有的 Linux 发行版的不同**
如果您只是想调整下启动屏幕、定制登录页面以及拥有更好的外观和使用体验。您可以选择任何一个 Linux 发行版并且按照您的喜好进行个性化配置。此外,有许多配置工具可以帮助您。
如果您想打包所有必须的文件、引导加载器和内核,并选择什么该被包括进来,然后依靠自己编译这一切东西。那么您需要的就是 Linux From Scratch(LFS)。
**注意**:如果您只想要定制 Linux 系统的外表和体验,这个指南并不适合您。但如果您真的想打造一个 Linux 发行版,并且想了解怎么开始以及一些其他的信息,那么这个指南正是为您而写。
**2. 打造一个 Linux 发行版(LFS)的好处**
- 您将了解 Linux 系统的内部工作机制
- 您将开发一个灵活的适应您需求的系统
- 您开发的系统(LFS)将会非常紧凑,因为您对该包含/不该包含什么拥有绝对的掌控
- 您开发的系统(LFS)在安全性上会更好
**3. 打造一个 Linux 发行版(LFS)的坏处**
打造一个 Linux 系统意味着将所有需要的东西放在一起并且编译之。这需要许多查阅、耐心和时间。而且您需要一个可用的 Linux 系统和足够的磁盘空间来打造 LFS。
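动手之前,可以先粗略检查宿主系统的基本工具链是否齐全(下面只是一个大幅简化的示意;LFS 手册本身附带了一个更完整的宿主系统版本检查脚本):

```shell
# 检查打造 LFS 所需的几个基础工具是否存在(清单经过大幅删减)
for tool in bash gcc make tar; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: $("$tool" --version 2>/dev/null | head -n 1)"
    else
        echo "$tool: 未找到"
    fi
done
```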
**4. 有趣的是Gentoo/GNU Linux 在某种意义上最接近于 LFS。Gentoo 和 LFS 都是完全从源码编译的定制的 Linux 系统**
**5. 您应该是一个有经验的 Linux 用户,对编译包、解决依赖有相当的了解,并且是个 shell 脚本的专家。**
了解一门编程语言(最好是 C 语言)将会使事情变得容易些。但哪怕您是一个新手,只要您是一个优秀的学习者,可以很快的掌握知识,您也可以开始。最重要的是不要在 LFS 过程中丢失您的热情。
如果您不够坚定,恐怕会在 LFS 进行到一半时放弃。
**6. 现在您需要一步一步的指导来打造一个 Linux。LFS 手册是打造 LFS 的官方指南。我们的合作站点 tradepub 也为我们的读者制作了 LFS 的指南,这同样是免费的。**
您可以从下面的链接下载 Linux From Scratch 的电子书:
[![](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-From-Scratch.gif)][1]
下载: [Linux From Scratch][1]
**7. 当前 LFS 的版本是 7.7,分为 systemd 版本和非 systemd 版本**
LFS 的官方网站是: http://www.linuxfromscratch.org/
您可以在官网在线浏览 LFS 以及类似 BLFS 这样的相关项目的手册,也可以下载不同格式的版本。
- LFS (非 systemd 版本):
- PDF 版本: http://www.linuxfromscratch.org/lfs/downloads/stable/LFS-BOOK-7.7.pdf
- 单一 HTML 版本: http://www.linuxfromscratch.org/lfs/downloads/stable/LFS-BOOK-7.7-NOCHUNKS.html
- 打包的多页 HTML 版本: http://www.linuxfromscratch.org/lfs/downloads/stable/LFS-BOOK-7.7.tar.bz2
- LFS(systemd 版本):
- PDF 版本: http://www.linuxfromscratch.org/lfs/downloads/7.7-systemd/LFS-BOOK-7.7-systemd.pdf
- 单一 HTML 版本: http://www.linuxfromscratch.org/lfs/downloads/7.7-systemd/LFS-BOOK-7.7-systemd-NOCHUNKS.html
- 打包的多页 HTML 版本: http://www.linuxfromscratch.org/lfs/downloads/7.7-systemd/LFS-BOOK-7.7-systemd.tar.bz2
**8. Linux 中国/LCTT 翻译了一份 LFS 手册(7.7,systemd 版本)**
经过 LCTT 成员的努力,我们终于完成了对 LFS 7.7 systemd 版本手册的翻译。
手册在线访问地址https://linux.cn/lfs/LFS-BOOK-7.7-systemd/index.html 。
其它格式的版本稍后推出。
感谢参与翻译的成员: wxy, ictlyh, dongfengweixiao, zpl1025, H-mudcup, Yuking-net, kevinSJ 。
### 关于 Linux From Scratch ###
这本手册是由 LFS 的项目领头人 Gerard Beekmans 创作的,Matthew Burgess 和 Bruce Dubbs 参与编辑,两人都是 LFS 项目的联合领导人。这本书内容很广泛,有 338 页之多。
手册中内容包括:介绍 LFS、准备构建、构建 LFS、建立启动脚本、使 LFS 可以引导,以及附录。其中涵盖了您想知道的 LFS 项目中的所有东西。
这本手册还给出了编译一个包的预估时间。预估的时间以编译第一个包的时间作为参考。所有的东西都以易于理解的方式呈现,甚至对于新手来说也是这样。
如果您有充裕的时间并且真正对构建自己的 Linux 发行版感兴趣,那么您绝对不会错过下载这个电子书(免费下载)的机会。您需要做的,便是照着这本手册在一个工作的 Linux 系统(任何 Linux 发行版,足够的磁盘空间即可)中开始构建您自己的 Linux 系统,付出时间和热情。
如果 Linux 使您着迷,如果您想自己动手构建一个自己的 Linux 发行版,这便是现阶段您应该知道的全部了,其他的信息您可以参考上面链接的手册中的内容。
请让我了解您阅读/使用这本手册的经历,这本详尽的 LFS 指南的使用是否足够简单?如果您已经构建了一个 LFS 并且想给我们的读者一些建议,欢迎留言和反馈。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-custom-linux-distribution-from-scratch/
作者:[Avishek Kumar][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://tecmint.tradepub.com/free/w_linu01/prgm.cgi

View File

@ -0,0 +1,33 @@
Linus Torvalds 说那些对人工智能奇点深信不疑的人显然磕了药
================================================================================
*像往常一样, 他的评论不能只看字面意思*
![](http://i1-news.softpedia-static.com/images/news2/linus-torvalds-says-people-who-believe-in-an-ai-singularity-are-on-drugs-486373-2.jpg)
**人工智能是一个非常热门的话题,许多高端人士,包括特斯拉的 CEO 埃隆·马斯克,就曾表示有情感的人工智能技术即将到来,同时这一技术将发展到危险的门槛上。不过 Linus Torvalds 显然不这么认为,他认为那只是差劲的科幻小说。**
人工智能激发了人们的创造力已经不是什么新鲜的想法了,不过近段时间关于所谓的人工智能奇点的讨论,引起了诸如埃隆·马斯克和斯蒂芬·霍金等人的关注,他们认为可能会创造出一个怪兽。不只是他们,论坛和评论区也充斥着杞人忧天者,他们不知道该相信谁,或是盲从那些比他们聪明得多的人的观点。
事实证明,Linux 项目创始人 Linus Torvalds 在这件事上显然有完全不同的观点。他说事实上什么都不会发生,我们也更有理由相信他。人工智能意味着有人编写了它的代码,而 Linus 深知编写人工智能代码的能力与障碍所在。他很可能已经猜到了其中会涉及什么,并且明白为什么人工智能不会成为威胁。
### Linus Torvalds与人工智能 ###
Linus Torvalds 在 [slashdot.org][1] 上回答了一些社区中的问题,他的所有观点都十分有趣。他曾对[游戏的未来和 Valve][2]发表看法,这次则谈到了人工智能。虽然他被问到的通常是一些有关内核和开源的问题,但他对其他话题也有自己的见解。事实上,人工智能正是一个他可以从程序员的角度来讨论的话题。
“所以我期待的是更多有针对性的(和相当棒的)AI,而不是它有多像人。像语言识别、模式识别这样的东西。我根本找不出在你洗碗的时候,洗碗机和你讨论 Sartre(萨特,法国哲学家、小说家、剧作家)有什么危害。真的有奇点这种事吗?是的,我认为那只是科幻小说,还不是好的那种。无休止的指数增长?我说真的,这些人嗑了什么药了吧?” Linus 在 Slashdot 写道。
选择相信埃隆·马斯克还是 Linus 是你的决定,但如果要我打赌,我会把钱押在 Linus 身上。
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/linus-torvalds-says-people-who-believe-in-an-ai-singularity-are-on-drugs-486373.shtml
作者:[Silviu Stahie][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://classic.slashdot.org/story/15/06/30/0058243
[2]:http://news.softpedia.com/news/linus-torvalds-said-valve-is-exploring-a-second-source-against-microsoft-486266.shtml

View File

@ -1,33 +0,0 @@
Linus Torvalds Says People Who Believe in AI Singularity Are on Drugs
================================================================================
*As usual, his comments should not be taken literally*
![](http://i1-news.softpedia-static.com/images/news2/linus-torvalds-says-people-who-believe-in-an-ai-singularity-are-on-drugs-486373-2.jpg)
**AI is a very hot topic right now, and many high profile people, including Elon Musk, the head of Tesla, have said that we're going to get sentient AIs soon and that it's going to be a dangerous threshold. It seems that Linus Torvalds doesn't feel the same way, and he thinks that it's just bad Sci-Fi.**
The idea of AIs turning on their human creators is not something new, but recently the so-called AI singularity has been discussed, and people like Elon Musk and Stephen Hawking expressed concerns about the possibility of creating a monster. And it's not just them, forums and comments sections are full of alarmist people who don't know what to believe or who take for granted the opinions of much smarter people.
As it turns out, Linus Torvalds, the creator of the Linux project, has a completely different opinion on this matter. In fact, he says that nothing like this will happen, and we have a much better reason to believe him. AI means that someone wrote its code, and Linus knows the power and the obstacles of writing AI code. He's likely to guess what's involved and to understand why an AI won't be a threat.
### Linus Torvalds and AIs ###
Linus Torvalds answered some questions from the community on [slashdot.org][1], and all his ideas were very interesting. He talked about the [future of gaming and Valve][2], but he also tackled stuff like AI. He's usually asked about the kernel or open source, but he has opinions on other topics as well. As a matter of fact, the AI subject is something that he can actually talk about as a programmer.
"So I'd expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that. I just don't see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you. The whole 'Singularity' kind of event? Yeah, it's science fiction, and not very good Sci-Fi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really" wrote Linus on Slashdot.
It's your choice whether to believe Elon Musk or Linus, but if betting were involved, I would put my money on Linus.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/linus-torvalds-says-people-who-believe-in-an-ai-singularity-are-on-drugs-486373.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://classic.slashdot.org/story/15/06/30/0058243
[2]:http://news.softpedia.com/news/linus-torvalds-said-valve-is-exploring-a-second-source-against-microsoft-486266.shtml

View File

@ -1,113 +0,0 @@
Animated Wallpaper Adds Live Backgrounds To Linux Distros
================================================================================
**We know a lot of you love having a stylish Ubuntu desktop to show off.**
![Live Wallpaper](http://i.imgur.com/9JIUw5p.gif)
Live Wallpaper
And as Linux makes it so easy to create a stunning workspace with a minimal effort, thats understandable!
Today, were highlighting — [re-highlighting][1] for those of you with long memories — a free, open-source tool that can add extra bling to your OS screenshots and screencasts.
Its called **Live Wallpaper** and (as you can probably guess) it will replace the standard static desktop background with an animated alternative powered by OpenGL.
And the best bit: it can be installed in Ubuntu very easily.
### Animated Wallpaper Themes ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/animated-wallpaper-ubuntu-750x383.jpg)
Live Wallpaper is not the only app of this type, but it is one of the best.
It comes with a number of different themes out of the box.
These range from the subtle (noise) to the frenetic (nexus), and cater to everything in between. Theres even the obligatory clock wallpaper inspired by the welcome screen of the Ubuntu Phone:
- Circles — Clock inspired by Ubuntu Phone with evolving circle aura
- Galaxy — Spinning galaxy that can be resized/repositioned
- Gradient Clock — A polar clock overlaid on basic gradient
- Nexus — Brightly colored particles fire across screen
- Noise — A bokeh design similar to the iOS dynamic wallpaper
- Photoslide — Grid of photos from folder (default ~/Photos) animate in/out
Live Wallpaper is **fully open-source** so theres nothing to stop imaginative artists with the know-how (and patience) from creating some slick themes of their own.
### Settings & Features ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/live-wallpaper-gui-settings.jpg)
Each theme can be configured or customised in some way, though certain themes have more options than others.
For example, in Nexus (pictured above) you can change the number and colour of the pulse particles, their size, and their frequency.
The preferences app also provides a set of **general options** that will apply to all themes. These include:
- Setting live wallpaper to run on log-in
- Setting a custom background that the animation sits on
- Adjusting the FPS (including option to show FPS on screen)
- Specifying the multi-monitor behaviour
With so many options available it should be easy to create a background set up that suits you.
### Drawbacks ###
#### No Desktop Icons ####
You cant add, open or edit files or folders on the desktop while Live Wallpaper is On.
The Preferences app does list an option that will, supposedly, let you do this. It may work on older releases, but in our testing on Ubuntu 14.10 it does nothing.
One workaround that seems to work for some users of the app on Ubuntu is setting a .png image as the custom background. It doesnt have to be a transparent .png, simply a .png.
#### Resource Usage ####
Animated wallpapers use more system resources than standard background images.
Were not talking about 50% load at all times —at least not with this app in our testing— but those on low-power devices and laptops will want to use apps like this cautiously. Use a [system monitoring tool][2] to keep an eye on CPU and GPU load.
#### Quitting the app ####
The biggest “bug” for me is the absolute lack of “quit” option.
Sure, the animated wallpaper can be turned off from the Indicator Applet and the Preferences tool but quitting the app entirely, quitting the indicator applet? Nope. To do that I have to use the pkill livewallpaper command in the Terminal.
### How to Install Live Wallpaper in Ubuntu 14.04 LTS + ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/terminal-command-750x146.jpg)
To install Live Wallpaper in Ubuntu 14.04 LTS and above you will first need to add the official PPA for the app to your Software Sources.
The quickest way to do this is using the Terminal:
sudo add-apt-repository ppa:fyrmir/livewallpaper-daily
sudo apt-get update && sudo apt-get install livewallpaper
You should also install the indicator applet, which lets you quickly and easily turn on/off the animated wallpaper and switch theme from the menu area, and the GUI settings tool so that you can configure each theme based on your tastes.
sudo apt-get install livewallpaper-config livewallpaper-indicator
When everything has installed you will be able to launch the app and its preferences tool from the Unity Dash.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/live-wallpaper-app-launcher.png)
Annoyingly, the Indicator Applet wont automatically open after you install it. It does add itself to the start up list, so a quick log out > log in will get it to show.
### Summary ###
If you fancy breathing life into a dull desktop, give it a spin — and let us know what you think of it and what animated wallpapers youd love to see added!
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/05/animated-wallpaper-adds-live-backgrounds-to-linux-distros
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2012/11/live-wallpaper-for-ubuntu
[2]:http://www.omgubuntu.co.uk/2011/11/5-system-monitoring-tools-for-ubuntu

View File

@ -1,3 +1,5 @@
FSSlc Translating
Backup with these DeDuplicating Encryption Tools
================================================================================
Data is growing both in volume and value. It is becoming increasingly important to be able to back up and restore this information quickly and reliably. As society has adapted to technology and learned how to depend on computers and mobile devices, there are few that can deal with the reality of losing important data. Of firms that suffer the loss of data, 30% fold within a year, 70% cease trading within five years. This highlights the value of data.
@ -155,4 +157,4 @@ via: http://www.linuxlinks.com/article/20150628060000607/BackupTools.html
[3]:http://obnam.org/
[4]:http://duplicity.nongnu.org/
[5]:http://zbackup.org/
[6]:https://bup.github.io/
[6]:https://bup.github.io/

View File

@ -1,3 +1,4 @@
Translating by ZTinoZ
7 command line tools for monitoring your Linux system
================================================================================
**Here is a selection of basic command line tools that will make your exploration and optimization in Linux easier. **
@ -76,4 +77,4 @@ via: http://www.networkworld.com/article/2937219/linux/7-command-line-tools-for-
[7]:http://linuxcommand.org/man_pages/vmstat8.html
[8]:http://linux.die.net/man/1/ps
[9]:http://linux.die.net/man/1/pstree
[10]:http://linux.die.net/man/1/iostat
[10]:http://linux.die.net/man/1/iostat

View File

@ -1,745 +0,0 @@
translating...
ZMap Documentation
================================================================================
1. Getting Started with ZMap
1. Scanning Best Practices
1. Command Line Arguments
1. Additional Information
1. TCP SYN Probe Module
1. ICMP Echo Probe Module
1. UDP Probe Module
1. Configuration Files
1. Verbosity
1. Results Output
1. Blacklisting
1. Rate Limiting and Sampling
1. Sending Multiple Probes
1. Extending ZMap
1. Sample Applications
1. Writing Probe and Output Modules
----------
### Getting Started with ZMap ###
ZMap is designed to perform comprehensive scans of the IPv4 address space or large portions of it. While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space at over 1.4 million packets per second. Before performing even small scans, we encourage users to contact their local network administrators and consult our list of scanning best practices.
By default, ZMap will perform a TCP SYN scan on the specified port at the maximum rate possible. A more conservative configuration that will scan 10,000 random addresses on port 80 at a maximum 10 Mbps can be run as follows:
$ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.csv
Or more concisely specified as:
$ zmap -B 10M -p 80 -n 10000 -o results.csv
ZMap can also be used to scan specific subnets or CIDR blocks. For example, to scan only 10.0.0.0/8 and 192.168.0.0/16 on port 80, run:
$ zmap -p 80 -o results.csv 10.0.0.0/8 192.168.0.0/16
If the scan started successfully, ZMap will output status updates every one second similar to the following:
0% (1h51m left); send: 28777 562 Kp/s (560 Kp/s avg); recv: 1192 248 p/s (231 p/s avg); hits: 0.04%
0% (1h51m left); send: 34320 554 Kp/s (559 Kp/s avg); recv: 1442 249 p/s (234 p/s avg); hits: 0.04%
0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%
These updates provide information about the current state of the scan and are of the following form: %-complete (est time remaining); packets-sent curr-send-rate (avg-send-rate); recv: packets-recv recv-rate (avg-recv-rate); hits: hit-rate
If you do not know the scan rate that your network can support, you may want to experiment with different scan rates or bandwidth limits to find the fastest rate that your network can support before you see decreased results.
By default, ZMap will output the list of distinct IP addresses that responded successfully (e.g. with a SYN ACK packet) similar to the following. There are several additional formats (e.g. JSON and Redis) for outputting results as well as options for producing programmatically parsable scan statistics. As well, additional output fields can be specified and the results can be filtered using an output filter.
115.237.116.119
23.9.117.80
207.118.204.141
217.120.143.111
50.195.22.82
We strongly encourage you to use a blacklist file, to exclude both reserved/unallocated IP space (e.g. multicast, RFC1918), as well as networks that request to be excluded from your scans. By default, ZMap will utilize a simple blacklist file containing reserved and unallocated addresses located at `/etc/zmap/blacklist.conf`. If you find yourself specifying certain settings, such as your maximum bandwidth or blacklist file every time you run ZMap, you can specify these in `/etc/zmap/zmap.conf` or use a custom configuration file.
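As a sketch, a minimal blacklist plus persistent config might look like the following (the entries are illustrative, and `zmap.conf` takes long option names without the leading dashes):

```
# /etc/zmap/blacklist.conf: one CIDR block per line
10.0.0.0/8        # RFC1918
172.16.0.0/12     # RFC1918
192.168.0.0/16    # RFC1918
224.0.0.0/4       # multicast

# /etc/zmap/zmap.conf: options applied to every run
bandwidth 10M
blacklist-file /etc/zmap/blacklist.conf
```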
If you are attempting to troubleshoot scan related issues, there are several options to help debug. First, it is possible to perform a dry run scan in order to see the packets that would be sent over the network by adding the `--dryrun` flag. As well, it is possible to change the logging verbosity by setting the `--verbosity=n` flag.
----------
### Scanning Best Practices ###
We offer these suggestions for researchers conducting Internet-wide scans as guidelines for good Internet citizenship.
- Coordinate closely with local network administrators to reduce risks and handle inquiries
- Verify that scans will not overwhelm the local network or upstream provider
- Signal the benign nature of the scans in web pages and DNS entries of the source addresses
- Clearly explain the purpose and scope of the scans in all communications
- Provide a simple means of opting out and honor requests promptly
- Conduct scans no larger or more frequent than is necessary for research objectives
- Spread scan traffic over time or source addresses when feasible
It should go without saying that scan researchers should refrain from exploiting vulnerabilities or accessing protected resources, and should comply with any special legal requirements in their jurisdictions.
----------
### Command Line Arguments ###
#### Common Options ####
These options are the most common options when performing a simple scan. We note that some options are dependent on the probe module or output module used (e.g. target port is not used when performing an ICMP Echo Scan).
**-p, --target-port=port**
TCP port number to scan (e.g. 443)
**-o, --output-file=name**
Write results to this file. Use - for stdout
**-b, --blacklist-file=path**
File of subnets to exclude, in CIDR notation (e.g. 192.168.0.0/16), one-per line. It is recommended you use this to exclude RFC 1918 addresses, multicast, IANA reserved space, and other IANA special-purpose addresses. An example blacklist file is provided in conf/blacklist.example for this purpose.
#### Scan Options ####
**-n, --max-targets=n**
Cap the number of targets to probe. This can either be a number (e.g. `-n 1000`) or a percentage (e.g. `-n 0.1%`) of the scannable address space (after excluding blacklist)
**-N, --max-results=n**
Exit after receiving this many results
**-t, --max-runtime=secs**
Cap the length of time for sending packets
**-r, --rate=pps**
Set the send rate in packets/sec
**-B, --bandwidth=bps**
Set the send rate in bits/second (supports suffixes G, M, and K (e.g. `-B 10M` for 10 mbps). This overrides the `--rate` flag.
**-c, --cooldown-time=secs**
How long to continue receiving after sending has completed (default=8)
**-e, --seed=n**
Seed used to select address permutation. Use this if you want to scan addresses in the same order for multiple ZMap runs.
**--shards=n**
Split the scan up into N shards/partitions among different instances of zmap (default=1). When sharding, `--seed` is required
**--shard=n**
Set which shard to scan (default=0). Shards are indexed in the range [0, N), where N is the total number of shards. When sharding `--seed` is required.
**-T, --sender-threads=n**
Threads used to send packets (default=1)
**-P, --probes=n**
Number of probes to send to each IP (default=1)
**-d, --dryrun**
Print out each packet to stdout instead of sending it (useful for debugging)
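As a sketch of how the sharding flags above combine, splitting one scan across two machines might look like this (the port, seed, and file names are placeholders; both invocations must agree on `--seed` and `--shards`):

```shell
# Machine A takes shard 0, machine B takes shard 1 of the same permutation;
# together they cover the target address space exactly once.
zmap -p 443 --seed=1234 --shards=2 --shard=0 -o shard0.csv   # machine A
zmap -p 443 --seed=1234 --shards=2 --shard=1 -o shard1.csv   # machine B
```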
#### Network Options ####
**-s, --source-port=port|range**
Source port(s) to send packets from
**-S, --source-ip=ip|range**
Source address(es) to send packets from. Either single IP or range (e.g. 10.0.0.1-10.0.0.9)
**-G, --gateway-mac=addr**
Gateway MAC address to send packets to (in case auto-detection does not work)
**-i, --interface=name**
Network interface to use
#### Probe Options ####
ZMap allows users to specify and write their own probe modules for use with ZMap. Probe modules are responsible for generating probe packets to send, and processing responses from hosts.
**--list-probe-modules**
List available probe modules (e.g. tcp_synscan)
**-M, --probe-module=name**
Select probe module (default=tcp_synscan)
**--probe-args=args**
Arguments to pass to probe module
**--list-output-fields**
List the fields the selected probe module can send to the output module
#### Output Options ####
ZMap allows users to specify and write their own output modules for use with ZMap. Output modules are responsible for processing the fieldsets returned by the probe module, and outputting them to the user. Users can specify output fields, and write filters over the output fields.
**--list-output-modules**
List available output modules (e.g. csv)
**-O, --output-module=name**
Select output module (default=csv)
**--output-args=args**
Arguments to pass to output module
**-f, --output-fields=fields**
Comma-separated list of fields to output
**--output-filter**
Specify an output filter over the fields defined by the probe module
#### Additional Options ####
**-C, --config=filename**
Read a configuration file, which can specify any other options.
**-q, --quiet**
Do not print status updates once per second
**-g, --summary**
Print configuration and summary of results at the end of the scan
**-v, --verbosity=n**
Level of log detail (0-5, default=3)
**-h, --help**
Print help and exit
**-V, --version**
Print version and exit
----------
### Additional Information ###
#### TCP SYN Scans ####
When performing a TCP SYN scan, ZMap requires a single target port and supports specifying a range of source ports from which the scan will originate.
**-p, --target-port=port**
TCP port number to scan (e.g. 443)
**-s, --source-port=port|range**
Source port(s) for scan packets (e.g. 40000-50000)
**Warning!** ZMap relies on the Linux kernel to respond to SYN/ACK packets with RST packets in order to close connections opened by the scanner. This occurs because ZMap sends packets at the Ethernet layer in order to reduce overhead otherwise incurred in the kernel from tracking open TCP connections and performing route lookups. As such, if you have a firewall rule that tracks established connections such as a netfilter rule similar to `-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`, this will block SYN/ACK packets from reaching the kernel. This will not prevent ZMap from recording responses, but it will prevent RST packets from being sent back, ultimately using up a connection on the scanned host until your connection times out. We strongly recommend that you select a set of unused ports on your scanning host which can be allowed access in your firewall and specifying this port range when executing ZMap, with the `-s` flag (e.g. `-s '50000-60000'`).
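One way to follow that advice is a netfilter rule that admits packets destined for the chosen source-port range before the connection-tracking rule can drop them; a sketch, assuming the `-s '50000-60000'` range above:

```shell
# Let SYN/ACKs aimed at the scanner's source ports reach the kernel,
# so it can answer with RST and close the scanned hosts' connections.
iptables -I INPUT -p tcp --dport 50000:60000 -j ACCEPT
```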
#### ICMP Echo Request Scans ####
While ZMap performs TCP SYN scans by default, it also supports ICMP echo request scans in which an ICMP echo request packet is sent to each host and the type of ICMP response received in reply is denoted. An ICMP scan can be performed by selecting the icmp_echoscan scan module similar to the following:
$ zmap --probe-module=icmp_echoscan
#### UDP Datagram Scans ####
ZMap additionally supports UDP probes, where it will send out an arbitrary UDP datagram to each host, and receive either UDP or ICMP Unreachable responses. ZMap supports four different methods of setting the UDP payload through the --probe-args command-line option. These are 'text' for ASCII-printable payloads, 'hex' for hexadecimal payloads set on the command-line, 'file' for payloads contained in an external file, and 'template' for payloads that require dynamic field generation. In order to obtain the UDP response, make sure that you specify 'data' as one of the fields to report with the -f option.
The example below will send the two bytes 'ST', a PCAnywhere 'status' request, to UDP port 5632.
$ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -
The example below will send the byte '0x02', a SQL Server 'client broadcast' request, to UDP port 1434.
$ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -
The example below will send a NetBIOS status request to UDP port 137. This uses a payload file that is included with the ZMap distribution.
$ zmap -M udp -p 137 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -
The example below will send a SIP 'OPTIONS' request to UDP port 5060. This uses a template file that is included with the ZMap distribution.
$ zmap -M udp -p 5060 --probe-args=template:sip_options.tpl -N 100 -f saddr,data -o -
UDP payload templates are still experimental. You may encounter crashes when using more than one send thread (-T), and there is a significant decrease in performance compared to static payloads. A template is simply a payload file that contains one or more field specifiers enclosed in a ${} sequence. Some protocols, notably SIP, require the payload to reflect the source and destination of the packet. Other protocols, such as portmapper and DNS, contain fields that should be randomized per request, or risk being dropped by multi-homed systems scanned by ZMap.
The payload template below will send a SIP OPTIONS request to every destination:
OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
From: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT};tag=${RAND_DIGIT=8}
To: sip:${RAND_ALPHA=8}@${DADDR}
Call-ID: ${RAND_DIGIT=10}@${SADDR}
CSeq: 1 OPTIONS
Contact: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT}
Content-Length: 0
Max-Forwards: 20
User-Agent: ${RAND_ALPHA=8}
Accept: text/plain
In the example above, note that line endings are \r\n and that the end of this request must contain \r\n\r\n for most SIP implementations to correctly process it. A working example is included in the examples/udp-payloads directory of the ZMap source tree (sip_options.tpl).
The following template fields are currently implemented:
- **SADDR**: Source IP address in dotted-quad format
- **SADDR_N**: Source IP address in network byte order
- **DADDR**: Destination IP address in dotted-quad format
- **DADDR_N**: Destination IP address in network byte order
- **SPORT**: Source port in ASCII format
- **SPORT_N**: Source port in network byte order
- **DPORT**: Destination port in ASCII format
- **DPORT_N**: Destination port in network byte order
- **RAND_BYTE**: Random bytes (0-255), length specified with =(length) parameter
- **RAND_DIGIT**: Random digits from 0-9, length specified with =(length) parameter
- **RAND_ALPHA**: Random mixed-case letters from A-Z, length specified with =(length) parameter
- **RAND_ALPHANUM**: Random mixed-case letters from A-Z and digits from 0-9, length specified with =(length) parameter
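As a rough illustration of how these specifiers behave, here is a minimal Python sketch of a template expander (an illustration under assumed semantics, not ZMap's actual template engine; the network-byte-order fields are omitted for simplicity):

```python
import random
import re
import string

def expand_template(template, saddr, daddr, sport, dport):
    """Expand ${FIELD} / ${FIELD=len} specifiers in a payload template."""
    fixed = {"SADDR": saddr, "DADDR": daddr,
             "SPORT": str(sport), "DPORT": str(dport)}
    alphabets = {"RAND_DIGIT": string.digits,
                 "RAND_ALPHA": string.ascii_letters,
                 "RAND_ALPHANUM": string.ascii_letters + string.digits}

    def repl(match):
        name, length = match.group(1), match.group(2)
        if name in fixed:
            return fixed[name]
        n = int(length or 1)
        if name == "RAND_BYTE":
            return "".join(chr(random.randrange(256)) for _ in range(n))
        return "".join(random.choice(alphabets[name]) for _ in range(n))

    return re.sub(r"\$\{(\w+)(?:=(\d+))?\}", repl, template)

# Expand the first line of the SIP OPTIONS template shown above
line = expand_template("OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0",
                       "10.0.0.9", "1.2.3.4", 48000, 5060)
print(line)
```

Each run produces a different random tag, which is exactly why templated payloads cost more per packet than static ones.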
### Configuration Files ###
ZMap supports configuration files instead of requiring all options to be specified on the command-line. A configuration can be created by specifying one long-name option and the value per line such as:
interface "eth1"
source-ip 1.1.1.4-1.1.1.8
gateway-mac b4:23:f9:28:fa:2d # upstream gateway
cooldown-time 300 # seconds
blacklist-file /etc/zmap/blacklist.conf
output-file ~/zmap-output
quiet
summary
ZMap can then be run with a configuration file and specifying any additional necessary parameters:
$ zmap --config=~/.zmap.conf --target-port=443
### Verbosity ###
There are several types of on-screen output that ZMap produces. By default, ZMap will print out basic progress information similar to the following every 1 second. This can be disabled by setting the `--quiet` flag.
0:01 12%; send: 10000 done (15.1 Kp/s avg); recv: 144 143 p/s (141 p/s avg); hits: 1.44%
ZMap also prints out informational messages during scanner configuration such as the following, which can be controlled with the `--verbosity` argument.
Aug 11 16:16:12.813 [INFO] zmap: started
Aug 11 16:16:12.817 [DEBUG] zmap: no interface provided. will use eth0
Aug 11 16:17:03.971 [DEBUG] cyclic: primitive root: 3489180582
Aug 11 16:17:03.971 [DEBUG] cyclic: starting point: 46588
Aug 11 16:17:03.975 [DEBUG] blacklist: 3717595507 addresses allowed to be scanned
Aug 11 16:17:03.975 [DEBUG] send: will send from 1 address on 28233 source ports
Aug 11 16:17:03.975 [DEBUG] send: using bandwidth 10000000 bits/s, rate set to 14880 pkt/s
Aug 11 16:17:03.985 [DEBUG] recv: thread started
ZMap also supports printing out a grep-able summary at the end of the scan, similar to below, which can be invoked with the `--summary` flag.
cnf target-port 443
cnf source-port-range-begin 32768
cnf source-port-range-end 61000
cnf source-addr-range-begin 1.1.1.4
cnf source-addr-range-end 1.1.1.8
cnf maximum-packets 4294967295
cnf maximum-runtime 0
cnf permutation-seed 0
cnf cooldown-period 300
cnf send-interface eth1
cnf rate 45000
env nprocessors 16
exc send-start-time Fri Jan 18 01:47:35 2013
exc send-end-time Sat Jan 19 00:47:07 2013
exc recv-start-time Fri Jan 18 01:47:35 2013
exc recv-end-time Sat Jan 19 00:52:07 2013
exc sent 3722335150
exc blacklisted 572632145
exc first-scanned 1318129262
exc hit-rate 0.874102
exc synack-received-unique 32537000
exc synack-received-total 36689941
exc synack-cooldown-received-unique 193
exc synack-cooldown-received-total 1543
exc rst-received-unique 141901021
exc rst-received-total 166779002
adv source-port-secret 37952
adv permutation-gen 4215763218
### Results Output ###
ZMap can produce results in several formats through the use of **output modules**. By default, ZMap only supports **csv** output, however support for **redis** and **json** can be compiled in. The results sent to these output modules may be filtered using an **output filter**. The fields the output module writes are specified by the user. By default, ZMap will return results in csv format and if no output file is specified, ZMap will not produce specific results. It is also possible to write your own output module; see Writing Output Modules for information.
**-o, --output-file=p**
File to write output to
**-O, --output-module=p**
Invoke a custom output module
**-f, --output-fields=p**
Comma-separated list of fields to output
**--output-filter=filter**
Specify an output filter over fields for a given probe
**--list-output-modules**
Lists available output modules
**--list-output-fields**
List available output fields for a given probe
#### Output Fields ####
ZMap has a variety of fields it can output beyond IP address. These fields can be viewed for a given probe module by running with the `--list-output-fields` flag.
$ zmap --probe-module="tcp_synscan" --list-output-fields
saddr string: source IP address of response
saddr-raw int: network order integer form of source IP address
daddr string: destination IP address of response
daddr-raw int: network order integer form of destination IP address
ipid int: IP identification number of response
ttl int: time-to-live of response packet
sport int: TCP source port
dport int: TCP destination port
seqnum int: TCP sequence number
acknum int: TCP acknowledgement number
window int: TCP window
classification string: packet classification
success int: is response considered success
repeat int: is response a repeat response from host
cooldown int: Was response received during the cooldown period
timestamp-str string: timestamp of when response arrived in ISO8601 format.
timestamp-ts int: timestamp of when response arrived in seconds since Epoch
timestamp-us int: microsecond part of timestamp (e.g. microseconds since 'timestamp-ts')
To select which fields to output, any combination of the output fields can be specified as a comma-separated list using the `--output-fields=fields` or `-f` flags. Example:
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
#### Filtering Output ####
Results generated by a probe module can be filtered before being passed to the output module. Filters are defined over the output fields of a probe module. Filters are written in a simple filtering language, similar to SQL, and are passed to ZMap using the **--output-filter** option. Output filters are commonly used to filter out duplicate results, or to pass only successful responses to the output module.
Filter expressions are of the form `<fieldname> <operation> <value>`. The type of `<value>` must be either a string or unsigned integer literal, and match the type of `<fieldname>`. The valid operations for integer comparisons are `=`, `!=`, `<`, `>`, `<=`, `>=`. The operations for string comparisons are `=` and `!=`. The `--list-output-fields` flag will print the fields and types available for the selected probe module, and then exit.
Compound filter expressions may be constructed by combining filter expressions using parentheses to specify order of operations, together with the `&&` (logical AND) and `||` (logical OR) operators.
**Examples**
Write a filter for only successful, non-duplicate responses
--output-filter="success = 1 && repeat = 0"
Filter for packets that have classification RST and a TTL greater than 10, or for packets with classification SYNACK
--output-filter="(classification = rst && ttl > 10) || classification = synack"
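The grammar is simple enough that its behavior can be sketched in a few lines of Python (an illustration only; ZMap's filter engine is implemented in C):

```python
import re

# Comparison operators from the filter language
OPS = {"=": lambda a, b: a == b, "!=": lambda a, b: a != b,
       "<": lambda a, b: a < b, ">": lambda a, b: a > b,
       "<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}

def tokenize(s):
    return re.findall(r"\(|\)|&&|\|\||!=|<=|>=|[=<>]|[\w.]+", s)

def evaluate(tokens, record):
    # expr := term (|| term)* ; term := factor (&& factor)*
    # factor := '(' expr ')' | field op value
    def factor():
        if tokens[0] == "(":
            tokens.pop(0)
            v = expr()
            tokens.pop(0)  # consume ')'
            return v
        field, op, value = tokens.pop(0), tokens.pop(0), tokens.pop(0)
        left = record[field]
        right = int(value) if isinstance(left, int) else value
        return OPS[op](left, right)
    def term():
        v = factor()
        while tokens and tokens[0] == "&&":
            tokens.pop(0)
            v = factor() and v  # factor() runs first, so tokens are consumed
        return v
    def expr():
        v = term()
        while tokens and tokens[0] == "||":
            tokens.pop(0)
            v = term() or v
        return v
    return expr()

def matches(filter_str, record):
    return evaluate(tokenize(filter_str), record)

rst = {"classification": "rst", "ttl": 52}
print(matches("(classification = rst && ttl > 10) || classification = synack", rst))
```

Applied to the two example filters above, a record either passes through to the output module or is silently dropped.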
#### CSV ####
The csv module will produce a comma-separated value file of the requested output fields. For example, the following command produces the CSV below in a file called `output.csv`.
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
----------
response, saddr, daddr, sport, dport, seq, ack, in_cooldown, is_repeat, timestamp
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
rst, 148.8.49.150, 10.0.0.9, 80, 41672, 0, 1135824975, 0, 0,2013-08-15 18:55:47.692
rst, 50.165.166.206, 10.0.0.9, 80, 38858, 0, 535206863, 0, 0,2013-08-15 18:55:47.694
rst, 65.55.203.135, 10.0.0.9, 80, 50008, 0, 4071709905, 0, 0,2013-08-15 18:55:47.700
synack, 50.57.166.186, 10.0.0.9, 80, 60650, 2813653162, 993314545, 0, 0,2013-08-15 18:55:47.704
synack, 152.75.208.114, 10.0.0.9, 80, 52498, 460383682, 4040786862, 0, 0,2013-08-15 18:55:47.707
synack, 23.72.138.74, 10.0.0.9, 80, 33480, 810393698, 486476355, 0, 0,2013-08-15 18:55:47.710
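Because the output is plain CSV, it is easy to post-process with standard tools. For instance, a small Python sketch tallying classifications (using two rows copied from the sample above):

```python
import collections
import csv
import io

# Two rows copied from the sample output above
sample = """response,saddr,daddr,sport,dport,seq,ack,in_cooldown,is_repeat,timestamp
synack,159.174.153.144,10.0.0.9,80,40555,3050964427,3515084203,0,0,2013-08-15 18:55:47.681
rst,141.209.175.1,10.0.0.9,80,40136,0,3272553764,0,0,2013-08-15 18:55:47.683
"""

with io.StringIO(sample) as f:
    counts = collections.Counter(row["response"] for row in csv.DictReader(f))

print(counts)  # Counter({'synack': 1, 'rst': 1})
```

In a real run you would open `output.csv` instead of the inline sample string.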
#### Redis ####
The redis output module allows addresses to be added to a Redis queue instead of being saved to a file, which allows ZMap to be integrated with post-processing tools.
**Heads Up!** ZMap does not build with Redis support by default. If you are building ZMap from source, you can build with Redis support by running CMake with `-DWITH_REDIS=ON`.
### Blacklisting and Whitelisting ###
ZMap supports both blacklisting and whitelisting network prefixes. If ZMap is not provided with blacklist or whitelist parameters, ZMap will scan all IPv4 addresses (including local, reserved, and multicast addresses). If a blacklist file is specified, network prefixes in the blacklisted segments will not be scanned; if a whitelist file is provided, only network prefixes in the whitelist file will be scanned. A whitelist and blacklist file can be used in coordination; the blacklist has priority over the whitelist (e.g. if you have whitelisted 10.0.0.0/8 and blacklisted 10.1.0.0/16, then 10.1.0.0/16 will not be scanned). Whitelist and blacklist files can be specified on the command-line as follows:
**-b, --blacklist-file=path**
File of subnets to blacklist in CIDR notation, e.g. 192.168.0.0/16
**-w, --whitelist-file=path**
File of subnets to limit scan to in CIDR notation, e.g. 192.168.0.0/16
Blacklist files should be formatted with a single network prefix in CIDR notation per line. Comments are allowed using the `#` character. Example:
# From IANA IPv4 Special-Purpose Address Registry
# http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
# Updated 2013-05-22
0.0.0.0/8 # RFC1122: "This host on this network"
10.0.0.0/8 # RFC1918: Private-Use
100.64.0.0/10 # RFC6598: Shared Address Space
127.0.0.0/8 # RFC1122: Loopback
169.254.0.0/16 # RFC3927: Link Local
172.16.0.0/12 # RFC1918: Private-Use
192.0.0.0/24 # RFC6890: IETF Protocol Assignments
192.0.2.0/24 # RFC5737: Documentation (TEST-NET-1)
192.88.99.0/24 # RFC3068: 6to4 Relay Anycast
192.168.0.0/16 # RFC1918: Private-Use
192.18.0.0/15 # RFC2544: Benchmarking
198.51.100.0/24 # RFC5737: Documentation (TEST-NET-2)
203.0.113.0/24 # RFC5737: Documentation (TEST-NET-3)
240.0.0.0/4 # RFC1112: Reserved
255.255.255.255/32 # RFC0919: Limited Broadcast
# From IANA Multicast Address Space Registry
# http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
# Updated 2013-06-25
224.0.0.0/4 # RFC5771: Multicast/Reserved
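The precedence rule described above (the blacklist beats the whitelist) can be sketched with Python's standard `ipaddress` module; this illustrates the semantics, not ZMap's implementation:

```python
import ipaddress

def is_scannable(addr, whitelist, blacklist):
    """Blacklist entries take priority over whitelist entries."""
    ip = ipaddress.ip_address(addr)
    if any(ip in net for net in map(ipaddress.ip_network, blacklist)):
        return False
    if whitelist:  # if a whitelist is given, only whitelisted prefixes are scanned
        return any(ip in net for net in map(ipaddress.ip_network, whitelist))
    return True

whitelist = ["10.0.0.0/8"]
blacklist = ["10.1.0.0/16"]
print(is_scannable("10.2.3.4", whitelist, blacklist))   # True: whitelisted, not blacklisted
print(is_scannable("10.1.3.4", whitelist, blacklist))   # False: blacklist wins
print(is_scannable("192.0.2.1", whitelist, blacklist))  # False: not in the whitelist
```

This reproduces the 10.0.0.0/8 vs. 10.1.0.0/16 example given above.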
If you are looking to scan only a random portion of the internet, check out Sampling instead of using whitelisting and blacklisting.
**Heads Up!** The default ZMap configuration uses the blacklist file at `/etc/zmap/blacklist.conf`, which contains locally scoped address space and reserved IP ranges. The default configuration can be changed by editing `/etc/zmap/zmap.conf`.
### Rate Limiting and Sampling ###
By default, ZMap will scan at the fastest rate that your network adaptor supports. In our experience on commodity hardware, this is generally around 95-98% of the theoretical speed of gigabit Ethernet, which may be faster than your upstream provider can handle. ZMap will not automatically adjust its send rate based on your upstream provider; you may need to adjust it manually to reduce packet drops and incorrect results.
**-r, --rate=pps**
Set maximum send rate in packets/sec
**-B, --bandwidth=bps**
Set send rate in bits/sec (supports suffixes G, M, and K). This overrides the --rate flag.
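How a bandwidth cap translates into a packet rate can be sketched as follows. The sketch assumes, as the earlier debug line `using bandwidth 10000000 bits/s, rate set to 14880 pkt/s` suggests, that ZMap accounts for the full on-wire cost of a minimum-size frame (60-byte frame + 4-byte FCS + 20 bytes of preamble and inter-frame gap = 84 bytes):

```python
SUFFIX = {"G": 10**9, "M": 10**6, "K": 10**3}

def bandwidth_to_pps(bw, wire_bytes=84):
    """Convert a -B/--bandwidth value such as '10M' to packets per second,
    assuming wire_bytes on the wire per probe (minimum-size Ethernet frame
    plus FCS, preamble, and inter-frame gap)."""
    if bw[-1].upper() in SUFFIX:
        bits = int(bw[:-1]) * SUFFIX[bw[-1].upper()]
    else:
        bits = int(bw)
    return bits // (wire_bytes * 8)

print(bandwidth_to_pps("10M"))  # 14880, matching the debug output above
```

Larger probes (e.g. UDP payloads) occupy more bytes on the wire, so the same bandwidth cap yields a lower packet rate.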
ZMap also allows random sampling of the IPv4 address space by specifying max-targets and/or max-runtime. Because hosts are scanned in a random permutation generated per scan instantiation, limiting a scan to n hosts will perform a random sampling of n hosts. Command-line options:
**-n, --max-targets=n**
Cap number of targets to probe
**-N, --max-results=n**
Cap number of results (exit after receiving this many positive results)
**-t, --max-runtime=s**
Cap length of time for sending packets (in seconds)
**-e, --seed=n**
Seed used to select address permutation. Specify the same seed in order to scan addresses in the same order across different ZMap runs.
For example, if you wanted to scan the same one million hosts on the Internet for multiple scans, you could set a predetermined seed and cap the number of scanned hosts, similar to the following:
zmap -p 443 -e 3 -n 1000000 -o results
In order to determine which one million hosts were going to be scanned, you could run the scan in dry-run mode, which prints out the packets that would be sent instead of performing the actual scan.
zmap -p 443 -e 3 -n 1000000 --dryrun | grep daddr
| awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
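The "random permutation" mentioned above is derived from a multiplicative cyclic group (note the `cyclic: primitive root` debug line earlier). Here is a toy Python sketch of the idea, with a tiny modulus for readability; ZMap's actual group is over a prime larger than 2^32:

```python
def cyclic_permutation(p, g, start):
    """Yield every element of the multiplicative group mod prime p exactly
    once, in scrambled order, by repeated multiplication with a primitive
    root g, starting from an arbitrary group element."""
    x = start
    while True:
        yield x
        x = (x * g) % p
        if x == start:
            return

# Toy example: p = 11 with primitive root g = 2 covers 1..10 once each.
order = list(cyclic_permutation(11, 2, 5))
print(order)          # [5, 10, 9, 7, 3, 6, 1, 2, 4, 8]
print(sorted(order))  # full coverage, no repeats
```

Because the walk visits every group element exactly once, stopping after n hosts yields a random sample of n hosts, and reusing the same seed (starting point) reproduces the same order.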
### Sending Multiple Packets ###
ZMap supports sending multiple probes to each host. Increasing this number increases both scan time and the number of hosts reached. However, we find that the increase in scan time (~100% per additional probe) greatly outweighs the increase in hosts reached (~1% per additional probe).
**-P, --probes=n**
The number of unique probes to send to each IP (default=1)
----------
### Sample Applications ###
ZMap is designed for initiating contact with a large number of hosts and finding ones that respond positively. However, we realize that many users will want to perform follow-up processing, such as performing an application level handshake. For example, users who perform a TCP SYN scan on port 80 might want to perform a simple GET request and users who scan port 443 may be interested in completing a TLS handshake.
#### Banner Grab ####
We have included a sample application, banner-grab, with ZMap that enables users to receive messages from listening TCP servers. Banner-grab connects to the provided servers, optionally sends a message, and prints out the first message received from the server. This tool can be used to fetch banners such as HTTP server responses to specific commands, telnet login prompts, or SSH server strings.
This example finds 1000 servers listening on port 80, and sends a simple GET request to each, storing their base-64 encoded responses in http-banners.out
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > out
For more details on using `banner-grab`, see the README file in `examples/banner-grab`.
**Heads Up!** ZMap and banner-grab can have significant performance and accuracy impact on one another if run simultaneously (as in the example). Make sure not to let ZMap saturate banner-grab-tcp's concurrent connections, otherwise banner-grab will fall behind reading stdin, causing ZMap to block on writing stdout. We recommend using a slower scanning rate with ZMap, and increasing the concurrency of banner-grab-tcp to no more than 3000 (Note that > 1000 concurrent connections requires you to use `ulimit -SHn 100000` and `ulimit -HHn 100000` to increase the maximum file descriptors per process). These parameters will of course be dependent on your server performance, and hit-rate; we encourage developers to experiment with small samples before running a large scan.
#### Forge Socket ####
We have also included a form of banner-grab, called forge-socket, that reuses the SYN-ACK sent from the server for the connection that ultimately fetches the banner. In `banner-grab-tcp`, ZMap sends a SYN to each server, and listening servers respond with a SYN+ACK. The ZMap host's kernel receives this, and sends a RST, as no active connection is associated with that packet. The banner-grab program must then create a new TCP connection to the same server to fetch data from it.
In forge-socket, we utilize a kernel module by the same name, that allows us to create a connection with arbitrary TCP parameters. This enables us to suppress the kernel's RST packet, and instead create a socket that will reuse the SYN+ACK's parameters, and send and receive data through this socket as we would any normally connected socket.
To use forge-socket, you will need the forge-socket kernel module, available from [github][1]. You should git clone `git@github.com:ewust/forge_socket.git` in the ZMap root source directory, and then cd into the forge_socket directory, and run make. Install the kernel module with `insmod forge_socket.ko` as root.
You must also tell the kernel not to send RST packets. An easy way to disable RST packets system-wide is to use **iptables**. Running `iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root will do this, though you may also add an optional --dport X to limit this to the port (X) you are scanning. To remove this after your scan completes, you can run `iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root.
Now you should be able to build the forge-socket ZMap example program. To run it, you must use the **extended_file** ZMap output module:
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
./forge-socket -c 500 -d ./http-req > ./http-banners.out
See the README in `examples/forge-socket` for more details.
----------
### Writing Probe and Output Modules ###
ZMap can be extended to support different types of scanning through **probe modules**, and additional types of results output through **output modules**. Registered probe and output modules can be listed through the command-line interface:
**--list-probe-modules**
Lists installed probe modules
**--list-output-modules**
Lists installed output modules
#### Output Modules ####
ZMap output and post-processing can be extended by implementing and registering **output modules** with the scanner. Output modules receive a callback for every received response packet. While the modules provided by default produce simple output, they are also capable of performing additional post-processing (e.g. tracking duplicates or outputting numbers in terms of AS instead of IP address).
Output modules are created by defining a new output_module struct and registering it in [output_modules.c][2]:
typedef struct output_module {
const char *name; // how is output module referenced in the CLI
unsigned update_interval; // how often is update called in seconds
output_init_cb init; // called at scanner initialization
output_update_cb start; // called at the beginning of scanner
output_update_cb update; // called every update_interval seconds
output_update_cb close; // called at scanner termination
output_packet_cb process_ip; // called when a response is received
const char *helptext; // Printed when --list-output-modules is called
} output_module_t;
Output modules must have a name, which is how they are referenced on the command-line, and generally implement the `success_ip` and oftentimes `other_ip` callbacks. The `process_ip` callback is called for every response packet that is received and passed through the output filter by the current **probe module**. The response may or may not be considered a success (e.g. it could be a TCP RST). These callbacks must define functions that match the `output_packet_cb` definition:
int (*output_packet_cb) (
ipaddr_n_t saddr, // IP address of scanned host in network-order
ipaddr_n_t daddr, // destination IP address in network-order
const char* response_type, // send-module classification of packet
int is_repeat, // {0: first response from host, 1: subsequent responses}
int in_cooldown, // {0: not in cooldown state, 1: scanner in cooldown state}
const u_char* packet, // pointer to struct iphdr of IP packet
size_t packet_len // length of packet in bytes
);
An output module can also register callbacks to be executed at scanner initialization (tasks such as opening an output file), start of the scan (tasks such as documenting blacklisted addresses), during regular intervals during the scan (tasks such as progress updates), and close (tasks such as closing any open file descriptors). These callbacks are provided with complete access to the scan configuration and current state:
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
which are defined in [output_modules.h][3]. An example is available at [src/output_modules/module_csv.c][4].
#### Probe Modules ####
Packets are constructed using probe modules which allow abstracted packet creation and response classification. ZMap comes with two scan modules by default: `tcp_synscan` and `icmp_echoscan`. By default, ZMap uses `tcp_synscan`, which sends TCP SYN packets, and classifies responses from each host as open (received SYN+ACK) or closed (received RST). ZMap also allows developers to write their own probe modules for use with ZMap, using the following API.
Each type of scan is implemented by developing and registering the necessary callbacks in a `send_module_t` struct:
typedef struct probe_module {
const char *name; // how scan is invoked on command-line
size_t packet_length; // how long is probe packet (must be static size)
const char *pcap_filter; // PCAP filter for collecting responses
size_t pcap_snaplen; // maximum number of bytes for libpcap to capture
uint8_t port_args; // set to 1 if ZMap requires a --target-port be
// specified by the user
probe_global_init_cb global_initialize; // called once at scanner initialization
probe_thread_init_cb thread_initialize; // called once for each thread packet buffer
probe_make_packet_cb make_packet; // called once per host to update packet
probe_validate_packet_cb validate_packet; // called once per received packet,
// return 0 if packet is invalid,
// non-zero otherwise.
probe_print_packet_cb print_packet; // called per packet if in dry-run mode
probe_classify_packet_cb process_packet; // called by receiver to classify response
probe_close_cb close; // called at scanner termination
fielddef_t *fields; // Definitions of the fields specific to this module
int numfields; // Number of fields
} probe_module_t;
At scanner initialization, `global_initialize` is called once and can be utilized to perform any necessary global configuration or initialization. However, `global_initialize` does not have access to the packet buffer, which is thread-specific. Instead, `thread_initialize` is called at the initialization of each sender thread and is provided with access to the buffer that will be used for constructing probe packets, along with global source and destination values. This callback should be used to construct the host-agnostic packet structure such that only specific values (e.g. destination host and checksum) need to be updated for each host. For example, the Ethernet header will not change between hosts (minus the checksum, which is calculated in hardware by the NIC) and can therefore be defined ahead of time in order to reduce overhead at scan time.
The `make_packet` callback is called for each host that is scanned to allow the **probe module** to update host-specific values, and is provided with the IP address values, an opaque validation string, and the probe number (shown below). The probe module is responsible for placing as much of the validation string into the probe as possible, in such a way that when a valid response is returned by a server, the probe module can verify that it is present. For example, for a TCP SYN scan, the tcp_synscan probe module can use the TCP source port and sequence number to store the validation string. Response packets (SYN+ACKs) will contain the expected values in the destination port and acknowledgement number.
int make_packet(
void *packetbuf, // packet buffer
ipaddr_n_t src_ip, // source IP in network-order
ipaddr_n_t dst_ip, // destination IP in network-order
uint32_t *validation, // validation string to place in probe
int probe_num // if sending multiple probes per host,
// this will be which probe number for this
// host we are currently sending
);
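For instance, here is a simplified Python sketch (not the actual tcp_synscan code) of how validation data can be folded into the SYN's source port and sequence number and later checked against a SYN+ACK; the 32768-61000 port range matches the source-port defaults shown in the summary output earlier:

```python
def encode_probe(validation, port_range=(32768, 61000)):
    """Hide validation data in the SYN's source port and sequence number."""
    lo, hi = port_range
    sport = lo + (validation[0] % (hi - lo))
    seq = validation[1] & 0xFFFFFFFF
    return sport, seq

def validate_response(validation, resp_dport, resp_acknum):
    """A SYN+ACK is valid only if it echoes our source port (as its
    destination port) and acknowledges our sequence number + 1."""
    sport, seq = encode_probe(validation)
    return resp_dport == sport and resp_acknum == ((seq + 1) & 0xFFFFFFFF)

# In ZMap the validation values are derived per host from a scan secret
validation = (0xDEADBEEF, 0x12345678)
sport, seq = encode_probe(validation)
print(validate_response(validation, sport, seq + 1))    # True: genuine reply
print(validate_response(validation, sport, seq + 999))  # False: wrong ack
```

Because the validation is derived from a secret, unsolicited or spoofed packets fail this check and are discarded before classification.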
Scan modules must also define `pcap_filter`, `validate_packet`, and `process_packet`. Only packets that match the PCAP filter will be considered by the scanner. For example, in the case of a TCP SYN scan, we only want to investigate TCP SYN/ACK or TCP RST packets and would utilize a filter similar to `tcp && tcp[13] & 4 != 0 || tcp[13] == 18`. The `validate_packet` function will be called for every packet that fulfills this PCAP filter. If the validation returns non-zero, the `process_packet` function will be called, and will populate a fieldset using fields defined in `fields` with data from the packet. For example, the following code processes a packet for the TCP synscan probe module.
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
{
struct iphdr *ip_hdr = (struct iphdr *)&packet[sizeof(struct ethhdr)];
struct tcphdr *tcp = (struct tcphdr*)((char *)ip_hdr
+ (sizeof(struct iphdr)));
fs_add_uint64(fs, "sport", (uint64_t) ntohs(tcp->source));
fs_add_uint64(fs, "dport", (uint64_t) ntohs(tcp->dest));
fs_add_uint64(fs, "seqnum", (uint64_t) ntohl(tcp->seq));
fs_add_uint64(fs, "acknum", (uint64_t) ntohl(tcp->ack_seq));
fs_add_uint64(fs, "window", (uint64_t) ntohs(tcp->window));
if (tcp->rst) { // RST packet
fs_add_string(fs, "classification", (char*) "rst", 0);
fs_add_uint64(fs, "success", 0);
} else { // SYNACK packet
fs_add_string(fs, "classification", (char*) "synack", 0);
fs_add_uint64(fs, "success", 1);
}
}
--------------------------------------------------------------------------------
via: https://zmap.io/documentation.html
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](http://linux.cn/)
[1]:https://github.com/ewust/forge_socket/
[2]:https://github.com/zmap/zmap/blob/v1.0.0/src/output_modules/output_modules.c
[3]:https://github.com/zmap/zmap/blob/master/src/output_modules/output_modules.h
[4]:https://github.com/zmap/zmap/blob/master/src/output_modules/module_csv.c
FSSlc Translating
Autojump An Advanced cd Command to Quickly Navigate Linux Filesystem
================================================================================
Those Linux users who work mainly with the Linux command line via a console/terminal feel the real power of Linux. However, it may sometimes be painful to navigate the Linux hierarchical file system, especially for newbies.
There is a Linux command-line utility called autojump, written in Python, which is an advanced version of the Linux [cd][1] command.
![Autojump Command](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Command.jpg)
Autojump: The Fastest Way to Navigate the Linux File System
This application was originally written by Joël Schaerer and is now maintained by +William Ting.
The autojump utility learns from the user and helps with easy directory navigation from the Linux command line. It navigates to the required directory more quickly than the traditional cd command.
#### Features of autojump ####
- A free and open source application, distributed under the GPL V3
- A self-learning utility that learns from the user's navigation habits.
- Faster navigation; no need to include sub-directory names.
- Available in the repositories of most standard Linux distributions, including Debian (testing/unstable), Ubuntu, Mint, Arch, Gentoo, Slackware, CentOS, RedHat and Fedora.
- Available for other platforms as well, like OS X (using Homebrew) and Windows (enabled by clink)
- Using autojump, you may jump to any specific directory or to a child directory. You may also open directories in a file manager and see statistics about how much time you spend in which directories.
#### Prerequisites ####
- Python Version 2.6+
### Step 1: Do a Full System Update ###
1. Do a system update/upgrade as the **root** user to ensure you have the latest version of Python installed.
# apt-get update && apt-get upgrade && apt-get dist-upgrade [APT based systems]
# yum update && yum upgrade [YUM based systems]
# dnf update && dnf upgrade [DNF based systems]
**Note**: It is important to note here that on YUM or DNF based systems, update and upgrade perform the same thing and are interchangeable most of the time, unlike on APT based systems.
### Step 2: Download and Install Autojump ###
2. As stated above, autojump is already available in the repositories of most Linux distributions. You may just install it using the package manager. However, if you want to install it from source, you need to clone the source code and execute the Python script, as follows:
#### Installing From Source ####
Install git, if it is not installed already; it is required to clone the repository.
# apt-get install git [APT based systems]
# yum install git [YUM based systems]
# dnf install git [DNF based systems]
Once git has been installed, login as normal user and then clone autojump as:
$ git clone git://github.com/joelthelion/autojump.git
Next, switch to the downloaded directory using cd command.
$ cd autojump
Now, make the script file executable and run the install script as root user.
# chmod 755 install.py
# ./install.py
#### Installing from Repositories ####
3. If you don't want to get your hands dirty with the source code, you may just install it from the repository as the **root** user:
Install autojump on Debian, Ubuntu, Mint and alike systems:
# apt-get install autojump
To install autojump on Fedora, CentOS, RedHat and alike systems, you need to enable [EPEL Repository][2].
# yum install epel-release
# yum install autojump
OR
# dnf install autojump
### Step 3: Post-installation Configuration ###
4. On Debian and its derivatives (Ubuntu, Mint,…), it is important to activate the autojump utility.
To activate the autojump utility temporarily, i.e., effective until you close the current session or open a new one, run the following command as a normal user:
$ source /usr/share/autojump/autojump.sh
To permanently activate it for the BASH shell, run the command below.
$ echo '. /usr/share/autojump/autojump.sh' >> ~/.bashrc
### Step 4: Autojump Pretesting and Usage ###
5. As said earlier, autojump will jump only to those directories that have been `cd`'ed to earlier. So before we start testing, we are going to `cd` into a few directories and create a few as well. Here is what I did.
$ cd
$ cd
$ cd Desktop/
$ cd
$ cd Documents/
$ cd
$ cd Downloads/
$ cd
$ cd Music/
$ cd
$ cd Pictures/
$ cd
$ cd Public/
$ cd
$ cd Templates
$ cd
$ cd /var/www/
$ cd
$ mkdir autojump-test/
$ cd
$ mkdir autojump-test/a/ && cd autojump-test/a/
$ cd
$ mkdir autojump-test/b/ && cd autojump-test/b/
$ cd
$ mkdir autojump-test/c/ && cd autojump-test/c/
$ cd
Now that we have `cd`'ed to the above directories and created a few for testing, we are ready to go.

**Point to Remember**: `j` is a wrapper around autojump. You may use `j` in place of the autojump command and vice versa.
6. Check the version of the installed autojump using the `-v` option.
$ j -v
or
$ autojump -v
![Check Autojump Version](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Autojump-Version.png)
Check Autojump Version
7. Jump to a previously visited directory /var/www.
$ j www
![Jump To Directory](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-To-Directory.png)
Jump To Directory
8. Jump to the previously visited child directory /home/avi/autojump-test/b without typing the full path.
$ jc b
![Jump to Child Directory](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Child-Directory.png)
Jump to Child Directory
9. Instead of just jumping to a directory, you can open it in a file manager (say, GNOME Nautilus) from the command line, using the following command.
$ jo www
![Jump to Directory](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Direcotory.png)
Jump to Directory
![Open Directory in File Browser](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Directory-in-File-Browser.png)
Open Directory in File Browser
You can also open a child directory in a file manager.
$ jco c
![Open Child Directory](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory1.png)
Open Child Directory
![Open Child Directory in File Browser](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory-in-File-Browser1.png)
Open Child Directory in File Browser
10. Check the stats: each folder's key weight and the overall key weight, along with the total directory weight. A folder's key weight represents the total time spent in that folder, and the directory weight is the number of directories in the list.
$ j --stat
![Check Directory Statistics](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Statistics.png)
Check Directory Statistics
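Under the hood, the idea behind these weights is simple: the higher a directory's weight, the more likely autojump is to pick it for a partial match. The snippet below is a hypothetical, much-simplified sketch of that idea; autojump's real algorithm is more involved, and the paths and weights here are made up for illustration:

```shell
# Toy "database": weight<TAB>path, as in autojump's stats output.
db=$(mktemp)
printf '%s\t%s\n' \
    '10.0' '/home/avi/Music' \
    '34.5' '/var/www' \
    '20.0' '/home/avi/autojump-test/b' > "$db"

# For a query, keep entries whose path contains it, then take the
# highest-weight match -- roughly what "j www" does.
query='www'
best=$(awk -F'\t' -v q="$query" 'index($2, q)' "$db" \
        | sort -rn | head -n 1 | cut -f2)
echo "$best"   # -> /var/www
rm -f "$db"
```

This also explains why frequently visited directories win partial matches over rarely visited ones with similar names.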
**Tip**: autojump stores its run log and error log files in the folder `~/.local/share/autojump/`. Don't overwrite these files, or you may lose all your stats.
$ ls -l ~/.local/share/autojump/
![Autojump Logs](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Logs.png)
Autojump Logs
11. If required, you can get help simply with:
$ j --help
![Autojump Help and Options](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-help-options.png)
Autojump Help and Options
### Functionality Requirements and Known Conflicts ###
- autojump lets you jump only to those directories to which you have already `cd`'ed. Once you `cd` to a particular directory, it gets logged in the autojump database, and thereafter autojump can work with it. After setting up autojump, you cannot jump to a directory you have never visited, no matter what.
- You cannot jump to a directory whose name begins with a dash (-). You may consider reading my post on the [manipulation of files and directories][3] that start with '-' or other special characters.
- In the BASH shell, autojump keeps track of directories by modifying `$PROMPT_COMMAND`. It is strongly recommended not to overwrite `$PROMPT_COMMAND`. If you have to add other commands to the existing `$PROMPT_COMMAND`, append them to the end of the existing `$PROMPT_COMMAND`.
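For the `$PROMPT_COMMAND` point above, a safe append pattern looks like this; `autojump_hook` is a made-up stand-in for whatever autojump actually installs there:

```shell
# Pretend autojump has already set its hook:
PROMPT_COMMAND='autojump_hook'

# Append our own command without clobbering the existing value.
# ${VAR:+...} expands to "...;" only when VAR is non-empty.
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND;}history -a"

echo "$PROMPT_COMMAND"   # -> autojump_hook;history -a
```

The `${PROMPT_COMMAND:+...}` expansion keeps this safe even when `$PROMPT_COMMAND` was empty to begin with.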
### Conclusion: ###
autojump is a must-have utility if you are a command-line user. It eases a lot of things and makes browsing Linux directories on the command line fast. Try it yourself and let me know your valuable feedback in the comments below. Keep connected, keep sharing. Like and share us and help us spread.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/autojump-a-quickest-way-to-navigate-linux-filesystem/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/cd-command-in-linux/
[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[3]:http://www.tecmint.com/manage-linux-filenames-with-special-characters/
Install Google Hangouts Desktop Client In Linux
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/google-hangouts-header-664x374.jpg)
Earlier, we saw how to [install Facebook Messenger in Linux][1] and the [WhatsApp desktop client in Linux][2]. Both of these were unofficial apps. I have one more unofficial app for today, and it is [Google Hangouts][3].

Of course, you can use Google Hangouts in the web browser, but it is more fun to use the desktop client than the web version. Curious? Let's see how to **install the Google Hangouts desktop client in Linux** and how to use it.
### Install Google Hangouts in Linux ###
We are going to use an open source project called [yakyak][4], which is an unofficial Google Hangouts client for Linux, Windows and OS X. I'll show you how to use yakyak in Ubuntu, but I believe you can use the same method in other Linux distributions. Before we see how to use it, let's first take a look at the main features of yakyak:
- Send and receive chat messages
- Create and change conversations (rename, add people)
- Leave and/or delete conversation
- Desktop notifications
- Toggle notifications on/off
- Drag-drop, copy-paste or attach-button for image upload.
- Hangupsbot sync room aware (actual user pics)
- Shows inline images
- History scrollback
Sounds good enough? Download the installation files from the link below:
- [Download Google Hangout client yakyak][5]
The downloaded file is compressed. Extract it and you will see a directory like linux-x64 or linux-x32, depending on your system. Go into this directory and you should see a file named yakyak. Double-click on it to run it.
![Run Google Hangout in Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_3.jpeg)
You'll have to enter your Google account credentials, of course.
![Set up Google Hangouts in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_2.jpeg)
Once you are through, you'll see a screen like the one below where you can chat with your Google contacts.
![Google_Hangout_Linux_4](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_4.jpeg)
If you want to show profile pictures of the contacts, you can select View->Show conversation thumbnails.
![Google hangouts thumbnails](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_5.jpeg)
You'll also get desktop notifications for new messages.
![desktop notifications for Google Hangouts in Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_1.jpeg)
### Worth a try? ###
I'll let you give it a try and decide whether or not it is worth it to **install the Google Hangouts client in Linux**. If you want official apps, take a look at these [instant messaging applications with native Linux clients][6]. Don't forget to share your experience with Google Hangouts in Linux.
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-google-hangouts-linux/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/facebook-messenger-linux/
[2]:http://itsfoss.com/whatsapp-linux-desktop/
[3]:http://www.google.com/+/learnmore/hangouts/
[4]:https://github.com/yakyak/yakyak
[5]:https://github.com/yakyak/yakyak
[6]:http://itsfoss.com/best-messaging-apps-linux/
Linux FAQs with Answers--How to fix “tar: Exiting with failure status due to previous errors”
================================================================================
> **Question**: When I try to create an archive using tar command, it fails in the middle, and throws an error saying: "tar: Exiting with failure status due to previous errors." What causes this error, and how can I solve this error?
![](https://farm9.staticflickr.com/8863/17631029953_1140fe2dd3_b.jpg)
If you encounter the following error while running the tar command, the most likely reason is that you do not have read permission on one of the files you are trying to archive with tar.
tar: Exiting with failure status due to previous errors
Then how can we pin down the file(s) causing the errors, or identify any other cause?
The tar command should actually print out what those "previous errors" are, but you can easily miss printed error messages if you run tar in verbose mode (e.g., -cvf). To catch error messages more easily, you can filter out tar's stdout messages as follows.
$ tar cvzf backup.tgz my_program/ > /dev/null
You will then see only error messages sent by tar to stderr.
tar: my_program/src/lib/.conf.db.~lock~: Cannot open: Permission denied
tar: Exiting with failure status due to previous errors
As you can see in the above example, the cause for the errors is indeed "denied read permission."
To solve the problem, simply adjust the permission of the problematic file (or remove it), and re-run tar.
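The redirection trick above is generic and easy to verify without tar. In this sketch, the made-up `noisy` function stands in for a verbose tar run that writes its file listing to stdout and its errors to stderr:

```shell
# Stand-in for "tar cvzf ...": listing on stdout, error on stderr.
noisy() {
    echo 'my_program/src/main.c'                     # verbose listing -> stdout
    echo 'tar: Cannot open: Permission denied' >&2   # error message  -> stderr
}

noisy > /dev/null               # only the stderr line reaches the terminal

err=$(noisy 2>&1 >/dev/null)    # capture stderr alone into a variable
echo "$err"                     # -> tar: Cannot open: Permission denied
```

Note the order in `2>&1 >/dev/null`: stderr is first duplicated onto the captured stdout, and only then is stdout itself discarded.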
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/tar-exiting-with-failure-status-due-to-previous-errors.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
Linux FAQs with Answers--How to install a Brother printer on Linux
================================================================================
> **Question**: I have a Brother HL-2270DW laser printer, and want to print documents from my Linux box using this printer. How can I install an appropriate Brother printer driver on my Linux computer, and use it?
Brother is well known for its affordable [compact laser printer lineup][1]. You can get a high-quality WiFi/duplex-capable laser printer for less than 200 USD, and the price keeps going down. On top of that, they provide reasonably good Linux support, so you can download and install their printer driver on your Linux computer. I bought the [HL-2270DW][2] model more than a year ago, and I have been more than happy with its performance and reliability.
Here is how to install and configure a Brother printer driver on Linux. In this tutorial, I am demonstrating the installation of a USB driver for Brother HL-2270DW laser printer. So first connect your printer to a Linux computer via USB cable.
### Preparation ###
In this preparation step, go to the official [Brother support website][3], and search for the driver for your Brother printer by typing the printer model name (e.g., HL-2270DW).
![](https://farm1.staticflickr.com/301/18970034829_6f3a48d817_c.jpg)
Once you go to the download page for your Brother printer, choose your Linux platform. For Debian, Ubuntu or their derivatives, choose "Linux (deb)". For Fedora, CentOS or RHEL, choose "Linux (rpm)".
![](https://farm1.staticflickr.com/380/18535558583_cb43240f8a_c.jpg)
On the next page, you will find a LPR driver as well as CUPS wrapper driver for your printer. The former is a command-line driver, while the latter allows you to configure and manage your printer via web-based administration interface. Especially the CUPS-based GUI is quite useful for (local or remote) printer maintenance. It is recommended that you install both drivers. So click on "Driver Install Tool" and download the installer file.
![](https://farm1.staticflickr.com/329/19130013736_1850b0d61e_c.jpg)
Before proceeding to run the installer file, you need to do one additional step if you are using a 64-bit Linux system.
Since Brother printer drivers are developed for 32-bit Linux, you need to install necessary 32-bit libraries on 64-bit Linux as follows.
On older Debian (6.0 or earlier) or Ubuntu (11.04 or earlier), install the following package.
$ sudo apt-get install ia32-libs
On newer Debian or Ubuntu which has introduced multiarch, you can install the following package instead:
$ sudo apt-get install lib32z1 lib32ncurses5
which replaces ia32-libs package. Or, you can install just:
$ sudo apt-get install lib32stdc++6
If you are using a Red Hat based Linux, you can install:
$ sudo yum install glibc.i686
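If you are unsure which case applies to you, checking the machine architecture first is a quick way to decide; a simple sketch (the package choices above still apply):

```shell
# A 64-bit kernel reports x86_64 and needs the 32-bit compatibility
# libraries listed above; i686/i386 systems do not.
arch=$(uname -m)
if [ "$arch" = 'x86_64' ]; then
    echo "64-bit system: install the 32-bit libraries first"
else
    echo "$arch system: no extra compatibility libraries needed"
fi
```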
### Driver Installation ###
Now go ahead and extract the downloaded driver installer file.
$ gunzip linux-brprinter-installer-2.0.0-1.gz
Next, run the driver installer file as follows.
$ sudo sh ./linux-brprinter-installer-2.0.0-1
You will be prompted to type a printer model name. Type the model name of your printer, for example "HL-2270DW".
![](https://farm1.staticflickr.com/292/18535599323_1a94f6dae5_b.jpg)
After agreeing to GPL license agreement, accept default answers to any subsequent questions.
![](https://farm1.staticflickr.com/526/19130014316_5835939501_b.jpg)
Now LPR/CUPS printer drivers are installed. Proceed to configure your printer next.
### Printer Configuration ###
We are going to configure and manage the Brother printer via the CUPS-based web management interface.
First, verify that CUPS daemon is running successfully.
$ sudo netstat -nap | grep 631
Open a web browser window, and go to http://localhost:631. You will see the following CUPS printer management interface.
![](https://farm1.staticflickr.com/324/18968588688_202086fc72_c.jpg)
Go to "Administration" tab, and click on "Manage Printers" under Printers section.
![](https://farm1.staticflickr.com/484/18533632074_0526cccb86_c.jpg)
You should see your printer (HL-2270DW) listed on the next page. Click on the printer name.
![](https://farm1.staticflickr.com/501/19159651111_95f6937693_c.jpg)
In the dropdown menu titled "Administration", choose "Set As Server Default" option. This will make your printer system-wide default.
![](https://farm1.staticflickr.com/472/19150412212_b37987c359_c.jpg)
When asked to authenticate yourself, type in your Linux login information.
![](https://farm1.staticflickr.com/511/18968590168_807e807f73_c.jpg)
Now the basic configuration is mostly done. To test printing, open any document viewer application (e.g., a PDF viewer), and print the document. You will see "HL-2270DW" listed and chosen by default in the printer settings.
![](https://farm4.staticflickr.com/3872/18970034679_6d41d75bf9_c.jpg)
Printing should work now. You can see the printer status and manage print jobs via the same CUPS web interface.
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-brother-printer-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/brother_printers
[2]:http://xmodulo.com/go/hl_2270dw
[3]:http://support.brother.com/
Translating by wyangsun
Why is the ibdata1 file continuously growing in MySQL?
================================================================================
![ibdata1 file](https://www.percona.com/blog/wp-content/uploads/2013/08/ibdata1-file.jpg)
We receive this question about the ibdata1 file in MySQL very often in [Percona Support][1].
The panic starts when the monitoring server sends an alert about the storage of the MySQL server saying that the disk is about to get filled.
After some research you realize that most of the disk space is used by InnoDB's shared tablespace ibdata1. You have [innodb_file_per_table][2] enabled, so the question is:
### What is stored in ibdata1? ###
When you have innodb_file_per_table enabled, the tables are stored in their own tablespaces, but the shared tablespace is still used to store other InnoDB internal data:
- data dictionary aka metadata of InnoDB tables
- change buffer
- doublewrite buffer
- undo logs
Some of them can be configured on [Percona Server][3] to avoid becoming too large. For example you can set a maximum size for change buffer with [innodb_ibuf_max_size][4] or store the doublewrite buffer on a separate file with [innodb_doublewrite_file][5].
In MySQL 5.6 you can also create external UNDO tablespaces so they will be in their own files instead of being stored inside ibdata1. Check the following [documentation link][6].
### What is causing the ibdata1 to grow that fast? ###
Usually the first command that we need to run when there is a MySQL problem is:
SHOW ENGINE INNODB STATUS\G
That will show us very valuable information. We start checking the **TRANSACTIONS** section and we find this:
---TRANSACTION 36E, ACTIVE 1256288 sec
MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
show engine innodb status
Trx read view will not see trx with id >= 36F, sees < 36F
This is the most common reason: a pretty old transaction, created 14 days ago. The status is **ACTIVE**, which means InnoDB has created a snapshot of the data, so it needs to maintain old pages in **undo** to be able to provide a consistent view of the database since that transaction was started. If your database is heavily write-loaded, that means lots of undo pages are being stored.

If you don't find any long-running transaction, you can also monitor another variable from the INNODB STATUS output, the "**History list length**". It shows the number of pending purge operations. In this case the problem is usually caused by the purge thread (or the master thread in older versions) not being able to process undo records as fast as they come in.
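If you want to watch that variable over time, you can extract it from the status output with standard tools. The sketch below parses a saved copy of the output (the sample text here is fabricated for illustration); on a live server you would feed the same awk a `mysql -e 'SHOW ENGINE INNODB STATUS\G'` run instead:

```shell
# Fabricated excerpt of SHOW ENGINE INNODB STATUS output:
status=$(mktemp)
cat > "$status" <<'EOF'
------------
TRANSACTIONS
------------
Trx id counter 370
History list length 19272
EOF

# Pull out the pending-purge counter (4th field of that line):
hll=$(awk '/^History list length/ { print $4 }' "$status")
echo "$hll"   # -> 19272
rm -f "$status"
```

A steadily growing value here is a hint that purge cannot keep up with the write load.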
### How can I check what is being stored in the ibdata1? ###
Unfortunately MySQL doesn't provide information about what is stored in the ibdata1 shared tablespace, but there are two tools that can be very helpful. First, a modified version of innochecksum made by Mark Callaghan and published in [this bug report][7].
It is pretty easy to use:
# ./innochecksum /var/lib/mysql/ibdata1
0 bad checksum
13 FIL_PAGE_INDEX
19272 FIL_PAGE_UNDO_LOG
230 FIL_PAGE_INODE
1 FIL_PAGE_IBUF_FREE_LIST
892 FIL_PAGE_TYPE_ALLOCATED
2 FIL_PAGE_IBUF_BITMAP
195 FIL_PAGE_TYPE_SYS
1 FIL_PAGE_TYPE_TRX_SYS
1 FIL_PAGE_TYPE_FSP_HDR
1 FIL_PAGE_TYPE_XDES
0 FIL_PAGE_TYPE_BLOB
0 FIL_PAGE_TYPE_ZBLOB
0 other
3 max index_id
It has 19272 UNDO_LOG pages out of a total of 20608. **That's 93% of the tablespace**.
The second way to check the content of a tablespace is the [InnoDB Ruby Tools][8] made by Jeremy Cole. It is a more advanced tool to examine the internals of InnoDB. For example, we can use the space-summary parameter to get a list with every page and its data type. We can use standard Unix tools to count the number of **UNDO_LOG** pages:
# innodb_space -f /var/lib/mysql/ibdata1 space-summary | grep UNDO_LOG | wc -l
19272
Although in this particular case innochecksum is faster and easier to use, I recommend playing with Jeremy's tools to learn more about the data distribution inside InnoDB and its internals.
OK, now we know where the problem is. The next question:
### How can I solve the problem? ###
The answer to this question is easy. If you can still commit that transaction, do it. If not, you'll have to kill the thread to start the rollback process. That will just stop ibdata1 from growing, but it is clear that your software has a bug or someone made a mistake. Now that you know how to identify where the problem is, you need to find who or what is causing it, using your own debugging tools or the general query log.
If the problem is caused by the purge thread then the solution is usually to upgrade to a newer version where you can use a dedicated purge thread instead of the master thread. More information on the following [documentation link][9].
### Is there any way to recover the used space? ###
No, it is not possible, at least not in an easy and fast way. InnoDB tablespaces never shrink; see the following [10-year-old bug report][10] recently updated by James Day (thanks):

When you delete some rows, the pages are marked as deleted for later reuse, but the space is never recovered. The only way is to start the database with a fresh ibdata1. To do that, you would need to take a full logical backup with mysqldump. Then stop MySQL and remove all the databases and the ib_logfile* and ibdata* files. When you start MySQL again, it will create a new fresh shared tablespace. Then, restore the logical dump.
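As a reading aid, here is an echo-only dry run of that procedure; nothing is executed, each step is only printed, and the paths and flags are illustrative rather than exact:

```shell
# Print each step instead of running it:
run() { echo "+ $1"; }

run 'mysqldump --all-databases --routines --events > /root/full_dump.sql'
run 'service mysql stop'
run 'rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*'
run 'service mysql start    # a fresh shared tablespace is created'
run 'mysql < /root/full_dump.sql'
```

On a real server, take the dump and verify it before touching any files, and adapt the paths to your datadir.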
### Summary ###
When the ibdata1 file is growing too fast within MySQL, it is usually caused by a long-running transaction that we have forgotten about. Try to solve the problem as fast as possible (by committing or killing the transaction), because you won't be able to recover the wasted disk space without the painfully slow mysqldump process.

Monitoring the database to avoid these kinds of problems is also highly recommended. Our [MySQL Monitoring Plugins][11] include a Nagios script that can alert you if it finds a transaction that has been running for too long.
--------------------------------------------------------------------------------
via: https://www.percona.com/blog/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/
作者:[Miguel Angel Nieto][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.percona.com/blog/author/miguelangelnieto/
[1]:https://www.percona.com/products/mysql-support
[2]:http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
[3]:https://www.percona.com/software/percona-server
[4]:https://www.percona.com/doc/percona-server/5.5/scalability/innodb_insert_buffer.html#innodb_ibuf_max_size
[5]:https://www.percona.com/doc/percona-server/5.5/performance/innodb_doublewrite_path.html?id=percona-server:features:percona_innodb_doublewrite_path#innodb_doublewrite_file
[6]:http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html#innodb-undo-tablespace
[7]:http://bugs.mysql.com/bug.php?id=57611
[8]:https://github.com/jeremycole/innodb_ruby
[9]:http://dev.mysql.com/doc/innodb/1.1/en/innodb-improved-purge-scheduling.html
[10]:http://bugs.mysql.com/bug.php?id=1341
[11]:https://www.percona.com/software/percona-monitoring-plugins
How To Fix System Program Problem Detected In Ubuntu 14.04
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg)
For the last couple of weeks, (almost) every time I was greeted with **"system program problem detected" on startup in Ubuntu 15.04**. I ignored it for some time, but it was quite annoying after a certain point. You won't be too happy either if you are greeted by a pop-up displaying this every time you boot into the system:
> System program problem detected
>
> Do you want to report the problem now?
>
> ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/System_Program_Problem_Detected.png)
I know that if you are an Ubuntu user, you have surely faced this annoying pop-up at some point. In this post we are going to see what to do with the "system program problem detected" report in Ubuntu 14.04 and 15.04.
### What to do with “system program problem detected” error in Ubuntu? ###
#### So what exactly is this notifier all about? ####
Basically, this notifies you of a crash in your system. Don't panic at the word "crash". It's not a major issue, and your system is very much usable. It's just that some program crashed at some point in the past, and Ubuntu wants you to decide whether or not you want to report this crash to the developers so that they can fix the issue.
#### So, we click on Report problem and it will vanish? ####
No, not really. Even if you click on "Report problem", you'll ultimately be greeted with a pop-up like this:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Internal_error.png)
[Sorry, Ubuntu has experienced an internal error][1] is Apport, which will then open a web browser where you can file a bug report by logging in to, or creating an account with, [Launchpad][2]. You see, it is a complicated procedure that takes around four steps to complete.
#### But, I want to help developers and let them know of the bugs! ####
That's very thoughtful of you, and the right thing to do. But there are two issues here. First, chances are high that the bug has already been reported. Second, even if you take the pains of reporting the crash, there is no guarantee that you won't see it again.
#### So, are you suggesting not to report the crash? ####

Yes and no. Report the crash when you see it for the first time, if you want. You can see the crashing program under "Show Details" in the above picture. But if you see it repeatedly, or if you do not want to report the bug, I advise you to get rid of the system crash reports once and for all.
### Fix “system program problem detected” error in Ubuntu ###
The crash reports are stored in the /var/crash directory in Ubuntu. If you look into this directory, you should see some files ending in .crash.
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Crash_reports_Ubuntu.jpeg)
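If you just want to know how many reports have piled up, a quick count works; it is safe even when the directory is empty or missing:

```shell
# Count pending crash report files; errors (e.g. no such directory,
# no matches) are silenced so the count simply comes out as 0.
n=$(ls /var/crash/*.crash 2>/dev/null | wc -l)
echo "$n crash report(s) pending"
```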
What I suggest is that you delete these crash reports. Open a terminal and use the following command:
sudo rm /var/crash/*
This will delete all the contents of the /var/crash directory. This way you won't be annoyed by pop-ups for program crashes that happened in the past. But if a program crashes again, you'll again see the "system program problem detected" error. You can either remove the crash reports again, like we just did, or you can disable Apport (the debug tool) and permanently get rid of the pop-ups.
#### Permanently get rid of system error pop up in Ubuntu ####
If you do this, you'll never be notified about any program crash that happens in the system. If you ask my view, I would say it's not that bad a thing, unless you are willing to file bug reports. If you have no intention of filing a bug report, the crash notifications, or their absence, will make no difference.

To disable Apport and get rid of the system crash reports completely, open a terminal and use the following command to edit the Apport settings file:
gksu gedit /etc/default/apport
The content of the file is:
# set this to 0 to disable apport, or to 1 to enable it
# you can temporarily override this with
# sudo service apport start force_start=1
enabled=1
Change **enabled=1** to **enabled=0**. Save and close the file. You won't see any pop-ups for crash reports after doing this. Obviously, if you want to enable the crash reports again, just edit the same file and set enabled to 1 again.
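The same edit can be scripted. The sketch below performs it on a throwaway sample file; on a real system you would point sed (with sudo) at /etc/default/apport instead:

```shell
# Recreate the relevant part of the config in a scratch file:
cat > /tmp/apport.sample <<'EOF'
# set this to 0 to disable apport, or to 1 to enable it
enabled=1
EOF

# Flip the switch in place:
sed -i 's/^enabled=1$/enabled=0/' /tmp/apport.sample

grep '^enabled=' /tmp/apport.sample   # -> enabled=0
```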
#### Did it work for you? ####
I hope this tutorial helped you to fix system program problem detected in Ubuntu 14.04 and Ubuntu 15.04. Let me know if this tip helped you to get rid of this annoyance.
--------------------------------------------------------------------------------
via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/how-to-solve-sorry-ubuntu-12-04-has-experienced-an-internal-error/
[2]:https://launchpad.net/
How to manage Vim plugins
================================================================================
Vim is a versatile, lightweight text editor on Linux. While its initial learning curve can be overwhelming for an average Linux user, the benefits are completely worth it. As far as functionality goes, Vim is fully customizable by means of plugins. Due to its high level of configurability, though, you need to spend some time with its plugin system to be able to personalize Vim in an effective way. Luckily, we have several tools that make our life with Vim plugins easier. The one I use on a daily basis is Vundle.
### What is Vundle? ###
[Vundle][1], which stands for Vim Bundle, is a Vim plugin manager. Vundle allows you to install, update, search and clean up Vim plugins very easily. It can also manage your runtime and help with tags. In this tutorial, I am going to show how to install and use Vundle.
### Installing Vundle ###
First, [install Git][2] if you don't have it on your Linux system.
Next, create a directory where Vim plugins will be downloaded and installed. By default, this directory is located at ~/.vim/bundle.
$ mkdir -p ~/.vim/bundle
Now go ahead and install Vundle as follows. Note that Vundle itself is another Vim plugin. Thus we install Vundle under the ~/.vim/bundle directory we created earlier.
$ git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim
### Configuring Vundle ###
Now set up your .vimrc file as follows:
set nocompatible " This is required
filetype off " This is required
" Here you set up the runtime path
set rtp+=~/.vim/bundle/Vundle.vim
" Initialize vundle
call vundle#begin()
" This should always be the first
Plugin 'gmarik/Vundle.vim'
" This examples are from https://github.com/gmarik/Vundle.vim README
Plugin 'tpope/vim-fugitive'
" Plugin from http://vim-scripts.org/vim/scripts.html
Plugin 'L9'
"Git plugin not hosted on GitHub
Plugin 'git://git.wincent.com/command-t.git'
"git repos on your local machine (i.e. when working on your own plugin)
Plugin 'file:///home/gmarik/path/to/plugin'
" The sparkup vim script is in a subdirectory of this repo called vim.
" Pass the path to set the runtimepath properly.
Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}
" Avoid a name conflict with L9
Plugin 'user/L9', {'name': 'newL9'}
"Every Plugin should be before this line
call vundle#end() " required
Let me explain the above configuration a bit. By default, Vundle downloads and installs Vim plugins from github.com or vim-scripts.org. You can modify the default behavior.
To install from Github:
Plugin 'user/plugin'
To install from http://vim-scripts.org/vim/scripts.html:
Plugin 'plugin_name'
To install from another git repo:
Plugin 'git://git.another_repo.com/plugin'
To install from a local file:
Plugin 'file:///home/user/path/to/plugin'
Also, you can customize other things, such as the runtime path of your plugins, which is really useful if you are programming a plugin yourself, or just want to load it from another directory that is not ~/.vim.
Plugin 'rstacruz/sparkup', {'rtp': 'another_vim_path/'}
If you have plugins with the same name, you can rename your plugin so that it doesn't conflict.
Plugin 'user/plugin', {'name': 'newPlugin'}
### Using Vundle Commands ###

Once you have set up your plugins with Vundle, you can use it to install, update, search and clean unused plugins using several Vundle commands.
#### Installing a new plugin ####
The PluginInstall command will install all plugins listed in your .vimrc file. You can also install just one specific plugin by passing its name.
:PluginInstall
:PluginInstall <plugin-name>
![](https://farm1.staticflickr.com/559/18998707843_438cd55463_c.jpg)
#### Cleaning up an unused plugin ####
If you have any unused plugin, you can remove it by using the PluginClean command.
:PluginClean
![](https://farm4.staticflickr.com/3814/19433047689_17d9822af6_c.jpg)
#### Searching for a plugin ####
If you want to install a plugin from a plugin list provided, search functionality can be useful.
:PluginSearch <text-list>
![](https://farm1.staticflickr.com/541/19593459846_75b003443d_c.jpg)
While searching, you can install, clean, re-search, or reload the same list in the interactive split. Installing a plugin won't load it automatically; to do so, add it to your .vimrc file.
### Conclusion ###
Vim is an amazing tool. Not only can it be a great default text editor that makes your workflow faster and smoother, but it can also be turned into an IDE for almost any programming language available. Vundle can be a big help in personalizing the powerful Vim environment quickly and easily.
Note that there are several sites that allow you to find the right Vim plugins for you. Always check [http://www.vim-scripts.org][3], Github or [http://www.vimawesome.com][4] for new scripts or plugins. Also remember to consult the help provided with your plugins.
Keep rocking with your favorite text editor!
--------------------------------------------------------------------------------
via: http://xmodulo.com/manage-vim-plugins.html
作者:[Christopher Valerio][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/valerio
[1]:https://github.com/VundleVim/Vundle.vim
[2]:http://ask.xmodulo.com/install-git-linux.html
[3]:http://www.vim-scripts.org/
[4]:http://www.vimawesome.com/

Translating by Love-xuan
动态壁纸给linux发行版添加活力背景
================================================================================
**我们知道你想拥有一个有格调的 Ubuntu 桌面来炫耀一下**
![Live Wallpaper](http://i.imgur.com/9JIUw5p.gif)
*Live Wallpaper*
在 Linux 上费一点点劲搭建一个出色的工作环境是很简单的。
今天,我们[重新介绍][1]一款长驻你脑海中的东西:一款自由、开源、能够给你的桌面截图增添光彩的工具。
它叫 **Live Wallpaper** (正如你猜的那样) 它用由OpenGL驱动的一款动态桌面背景来代替标准的静态桌面背景。
最好的一点是在ubuntu上安装它很容易。
### 动态壁纸主题 ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/animated-wallpaper-ubuntu-750x383.jpg)
Live Wallpaper 不是此类软件唯一的一款,但它是最好的一款之一。
它附带很多不同的开箱即用的主题。
从精细的noise到狂热的nexus包罗万象甚至还有一款受 Ubuntu Phone 欢迎屏幕启发的必备锁屏壁纸。
- Circles — 带有“演化圆圈”风格的时钟灵感来自于Ubuntu Phone
- Galaxy — 支持自定义大小,位置的星系
- Gradient Clock — 叠加在渐变背景上的时钟
- Nexus — 亮色粒子火花穿越屏幕
- Noise — 类似于iOS动态壁纸的Bokeh设计
- Photoslide — 由文件夹(默认为 ~/Photos内的照片构成的动态网格相册
Live Wallpaper **完全开源** ,所以没有什么能够阻挡天马行空的艺术家用提供的做法(当然还有耐心)来创造他们自己的精美主题。
### 设置 & 特点 ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/live-wallpaper-gui-settings.jpg)
虽然某些主题与其它主题相比有更多的选项,但每款主题都可以通过某些方式来配置或者定制。
例如, Nexus主题中 (上图所示) 你可以更改脉冲粒子的数量,颜色,大小和出现频率。
首选项提供了 **通用选项** 适用于所有主题,包括:
- 设置登录界面的动态壁纸
- 自定义动画背景
- 调节 FPS (包括在屏幕上显示FPS)
- 指定多显示器行为
有如此多的选项,定制出适合你自己的桌面背景是很容易的。
### 缺陷 ###
#### 没有桌面图标 ####
Live Wallpaper 运行时,你无法在桌面上添加、打开或者编辑文件和文件夹。
首选项程序提供了一个选项,猜测是用来让你这样做的。也许它只能在老版本中使用,在我们的测试中(测试环境为 Ubuntu 14.10),它并没有起作用。
在测试中发现,当把桌面壁纸设置成 png 格式的图片文件时这个选项才有用。png 图片不需要是透明的,只要是 png 格式就行。
#### 资源占用 ####
动态壁纸与标准的壁纸相比要消耗更多的系统资源。
我们并不是说任何时候都会消耗大量资源,但至少在我们的测试中是这样,所以低配置机器和笔记本用户要谨慎使用这款软件。可以使用 [系统监视器][2] 来追踪CPU 和GPU的负载。
#### 退出程序 ####
对我来说最大的“bug”绝对是没有“退出”选项。
当然,动态壁纸可以通过托盘图标和首选项完全退出,那怎么退出托盘图标呢?没办法,只能在终端执行命令 `pkill livewallpaper`。
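如果经常需要彻底退出,可以在 `~/.bashrc` 中加一个类似下面的小函数(假设性示例,进程名以实际安装为准):

```shell
# 彻底退出动态壁纸(连同托盘图标);进程不存在时给出提示
lw_quit() {
    if pkill livewallpaper 2>/dev/null; then
        echo "livewallpaper 已退出"
    else
        echo "livewallpaper 未在运行"
    fi
}

lw_quit
```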
### 怎么在 Ubuntu 14.04 LTS +上安装 Live Wallpaper ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/terminal-command-750x146.jpg)
要想在Ubuntu 14.04 LTS 和更高版本中安装 Live Wallpaper你首先需要把官方PPA添加进你的软件源。
最快的方法是在终端中执行下列命令:
sudo add-apt-repository ppa:fyrmir/livewallpaper-daily
sudo apt-get update && sudo apt-get install livewallpaper
你还需要安装 indicator applet这样可以方便快速地打开或关闭动态壁纸、从菜单选择主题另外图形配置工具可以让你根据自己的口味来配置每款主题。
sudo apt-get install livewallpaper-config livewallpaper-indicator
所有都安装好之后你就可以通过Unity Dash来启动它和它的首选项工具了。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/live-wallpaper-app-launcher.png)
让人不爽的是,安装完成后,程序不会自动打开托盘图标,而仅仅将它自己加入自动启动项,所以,快速来个注销 > 登录,它就会出现啦。
### 总结 ###
如果你正处在无聊呆板的桌面中,幻想有一个更有活力的桌面,不妨试试。另外,告诉我们你想看到什么样的动态壁纸!
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/05/animated-wallpaper-adds-live-backgrounds-to-linux-distros
作者:[Joey-Elijah Sneddon][a]
译者:[Love-xuan](https://github.com/Love-xuan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2012/11/live-wallpaper-for-ubuntu
[2]:http://www.omgubuntu.co.uk/2011/11/5-system-monitoring-tools-for-ubuntu

如何打造自己的Linux发行版
================================================================================
您是否想过打造您自己的Linux发行版每个Linux用户在他们使用Linux的过程中都想过做一个他们自己的发行版至少一次。我也不例外作为一个Linux菜鸟我也考虑过开发一个自己的Linux发行版。开发一个Linux发行版被叫做Linux From Scratch (LFS)。
在开始之前我总结了一些LFS的内容如下
### 1. 那些想要打造他们自己的Linux发行版的人应该了解打造一个Linux发行版打造意味着从头开始与配置一个已有的Linux发行版的不同 ###
如果您只是想调整下屏幕显示、定制登录以及拥有更好的外表和使用体验。您可以选择任何一个Linux发行版并且按照您的喜好进行个性化配置。此外有许多配置工具可以帮助您。
如果您想打包所有必需的文件、引导加载程序boot-loader和内核选择什么该被包括进来然后依靠自己编译这一切那么您需要 Linux From Scratch (LFS)。
**注意**如果您只想要定制Linux系统的外表和体验这个指南不适合您。但如果您真的想打造一个Linux发行版并且想了解怎么开始以及一些其他的信息那么这个指南正是为您而写。
### 2. 打造一个Linux发行版LFS的好处 ###
- 您将了解Linux系统的内部工作机制
- 您将开发一个灵活的适应您需求的系统
- 您开发的系统LFS将会非常紧凑因为您对该包含/不该包含什么拥有绝对的掌控
- 您开发的系统LFS在安全性上会更好
### 3. 打造一个Linux发行版LFS的坏处 ###
打造一个Linux系统意味着将所有需要的东西放在一起并且编译之。这需要许多查阅、耐心和时间。而且您需要一个可用的Linux系统和足够的磁盘空间来打造Linux系统。
### 4. 有趣的是Gentoo/GNU Linux在某种意义上最接近于LFS。Gentoo和LFS都是完全从源码编译的定制的Linux系统 ###
### 5. 您应该是一个有经验的Linux用户对编译包、解决依赖有相当的了解并且是个shell脚本的专家。了解一门编程语言C最好将会使事情变得容易些。但哪怕您是一个新手只要您是一个优秀的学习者可以很快的掌握知识您也可以开始。最重要的是不要在LFS过程中丢失您的热情。 ###
如果您不够坚定恐怕会在LFS进行到一半时放弃。
### 6. 现在您需要一步一步的指导来打造一个Linux。LFS是打造Linux的官方指南。我们的搭档的站点tradepub也为我们的读者制作了LFS的指南这同样是免费的。 ###
您可以从下面的链接下载Linux From Scratch的书籍
[![](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-From-Scratch.gif)][1]
下载: [Linux From Scratch][1]
### 关于Linux From Scratch ###
这本书是由LFS的项目领头人Gerard Beekmans创作的由Matthew Burgess和Bruse Dubbs做编辑两人都是LFS项目的联合领导人。这本书内容很广泛有338页长。
书中内容包括介绍LFS、准备构建、构建LinuxLFS、建立启动脚本、使LFS可以引导和附录。其中涵盖了您想知道的LFS项目的所有东西。
这本书还给出了编译一个包的预估时间。预估的时间以编译第一个包的时间作为参考。所有的东西都以易于理解的方式呈现,甚至对于新手来说。
如果您有充裕的时间并且真正对构建自己的Linux发行版感兴趣那么您绝对不会错过下载这个电子书免费下载的机会。您需要做的便是照着这本书在一个工作的Linux系统任何Linux发行版足够的磁盘空间即可中开始构建您自己的Linux系统时间和热情。
如果Linux使您着迷如果您想自己动手构建一个自己的Linux发行版这便是现阶段您应该知道的全部了其他的信息您可以参考上面链接的书中的内容。
请让我了解您阅读/使用这本书的经历这本详尽的LFS指南的使用是否足够简单如果您已经构建了一个LFS并且想给我们的读者一些建议欢迎留言和反馈。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-custom-linux-distribution-from-scratch/
作者:[Avishek Kumar][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://tecmint.tradepub.com/free/w_linu01/prgm.cgi

ZMap 文档
================================================================================
1. 初识 ZMap
1. 最佳扫描习惯
1. 命令行参数
1. 附加信息
1. TCP SYN 探测模块
1. ICMP Echo 探测模块
1. UDP 探测模块
1. 配置文件
1. 详细
1. 结果输出
1. 黑名单
1. 速度限制与抽样
1. 发送多个探测
1. ZMap 扩展
1. 示例应用程序
1. 编写探测和输出模块
----------
### 初识 ZMap ###
ZMap 是一款被设计用来针对整个 IPv4 地址空间或其中的大部分实施综合扫描的工具。ZMap是研究者手中的利器但在运行ZMap时请注意您很有可能正在以每秒140万个包的速度扫描整个IPv4地址空间 。我们建议用户在实施即使小范围扫描之前,也联系一下本地网络的管理员并参考我们列举的最佳扫描习惯。
默认情况下ZMap会对于指定端口实施尽可能大速率的TCP SYN扫描。较为保守的情况下对10,000个随机的地址的80端口以10Mbps的速度扫描如下所示
$ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.csv
或者更加简洁地写成:
$ zmap -B 10M -p 80 -n 10000 -o results.csv
ZMap也可用于扫描特定子网或CIDR地址块。例如仅扫描10.0.0.0/8和192.168.0.0/16的80端口运行指令如下
zmap -p 80 -o results.csv 10.0.0.0/8 192.168.0.0/16
如果扫描进行的顺利ZMap会每秒输出类似以下内容的状态更新
0% (1h51m left); send: 28777 562 Kp/s (560 Kp/s avg); recv: 1192 248 p/s (231 p/s avg); hits: 0.04%
0% (1h51m left); send: 34320 554 Kp/s (559 Kp/s avg); recv: 1442 249 p/s (234 p/s avg); hits: 0.04%
0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%
这些更新信息提供了扫描的即时状态并表示成:完成进度% (剩余时间); send: 发出包的数量 即时速率 (平均发送速率); recv: 接收包的数量 接收率 (平均接收率); hits: 成功率
如果您不知道您所在网络支持的扫描速率,您可能要尝试不同的扫描速率和带宽限制直到扫描效果开始下降,借此找出当前网络能够支持的最快速度。
默认情况下ZMap会输出不同IP地址的列表例如SYN ACK数据包的情况像下面这样。还有几种附加的格式JSON和Redis作为其输出结果以及生成程序可解析的扫描统计选项。 同样,可以指定附加的输出字段并使用输出过滤来过滤输出的结果。
115.237.116.119
23.9.117.80
207.118.204.141
217.120.143.111
50.195.22.82
我们强烈建议您使用黑名单文件,以排除预留的/未分配的IP地址空间组播地址RFC1918以及网络中需要排除在您扫描之外的地址。默认情况下ZMap将采用位于 `/etc/zmap/blacklist.conf`的这个简单的黑名单文件中所包含的预留和未分配地址。如果您需要某些特定设置比如每次运行ZMap时的最大带宽或黑名单文件您可以在文件`/etc/zmap/zmap.conf`中指定或使用自定义配置文件。
如果您正试图解决扫描的相关问题,有几个选项可以帮助您调试。首先,您可以通过添加`--dryrun`实施预扫,以此来分析包可能会发送到网络的何处。此外,还可以通过设置`--verbosity=n`来更改日志详细程度。
----------
### 最佳扫描习惯 ###
我们为针对互联网进行扫描的研究者提供了一些建议,以此来引导养成良好的互联网合作氛围
- 密切协同本地的网络管理员,以减少风险和调查
- 确认扫描不会使本地网络或上游供应商瘫痪
- 标记出在扫描中呈良性的网页和DNS条目的源地址
- 明确注明扫描中所有连接的目的和范围
- 提供一个简单的退出方法并及时响应请求
- 实施扫描时,不使用比研究对象需求更大的扫描范围或更快的扫描频率
- 如果可以通过时间或源地址来传播扫描流量
即使不声明,使用扫描的研究者也应该避免利用漏洞或访问受保护的资源,并遵守其辖区内任何特殊的法律规定。
----------
### 命令行参数 ###
#### 通用选项 ####
这些选项是实施简单扫描时最常用的选项。我们注意到某些选项取决于所使用的探测模块或输出模块在实施ICMP Echo扫描时是不需要使用目的端口的
**-p, --target-port=port**
用来扫描的TCP端口号例如443
**-o, --output-file=name**
使用标准输出将结果写入该文件。
**-b, --blacklist-file=path**
文件中被排除的子网使用CIDR表示法如192.168.0.0/16一个一行。建议您使用此方法排除RFC 1918地址组播地址IANA预留空间等IANA专用地址。在conf/blacklist.example中提供了一个以此为目的示例黑名单文件。
#### 扫描选项 ####
**-n, --max-targets=n**
限制探测目标的数量。后面跟的可以是一个数字(例如'-n 1000`)或百分比(例如,`-n 0.1`)当然都是针对可扫描地址空间而言的(不包括黑名单)
**-N, --max-results=n**
收到多少结果后退出
**-t, --max-runtime=secs**
限制发送报文的时间
**-r, --rate=pps**
设置传输速率,以包/秒为单位
**-B, --bandwidth=bps**
以比特/秒设置传输速率支持使用后缀GM或K如`-B 10M`就是速度10 mbps的。设置会覆盖`--rate`。
**-c, --cooldown-time=secs**
发送完成后多久继续接收(默认值= 8
**-e, --seed=n**
地址排序种子。如果要用多个ZMap以相同的顺序扫描地址那么就可以使用这个参数。
**--shards=n**
将扫描分为若干分片使其可在多个ZMap实例中执行默认值= 1。启用分片时`--seed`参数是必需的。
**--shard=n**
选择执行扫描的分片(默认值= 0。n的范围在[0N)其中N为分片的总数。启用分片时`--seed`参数是必需的。
**-T, --sender-threads=n**
用于发送数据包的线程数(默认值= 1
**-P, --probes=n**
发送到每个IP的探测数默认值= 1
**-d, --dryrun**
用标准输出打印出每个包,而不是将其发送(用于调试)
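作为示意,下面这段 shell 脚本(假设性示例,端口与文件名均为演示用)演示了如何为多台机器生成相互一致的分片扫描命令,注意各分片必须使用相同的种子:

```shell
# 假设性示例:为 N 个分片生成 zmap 命令,各分片可在不同机器上运行
# 启用 --shards 时,所有分片必须指定相同的种子(-e / --seed)
gen_shard_cmds() {
    shards=$1
    seed=$2
    i=0
    while [ "$i" -lt "$shards" ]; do
        echo "zmap -p 443 -e $seed --shards=$shards --shard=$i -o results-shard$i.csv"
        i=$((i + 1))
    done
}

gen_shard_cmds 3 12345
```

把打印出的三条命令分别放到三台机器上执行,即可合力覆盖同一随机排列的地址空间而互不重叠。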
#### 网络选项 ####
**-s, --source-port=port|range**
发送数据包的源端口
**-S, --source-ip=ip|range**
发送数据包的源地址。可以仅仅是一个IP也可以是一个范围10.0.0.1-10.0.0.9
**-G, --gateway-mac=addr**
数据包发送到的网关MAC地址用以防止自动检测不工作的情况
**-i, --interface=name**
使用的网络接口
#### 探测选项 ####
ZMap允许用户指定并添加自己所需要探测的模块。 探测模块的职责就是生成主机回复的响应包。
**--list-probe-modules**
列出可用探测模块如tcp_synscan
**-M, --probe-module=name**
选择探测模块(默认值= tcp_synscan
**--probe-args=args**
向模块传递参数
**--list-output-fields**
列出可用的输出字段
#### 输出选项 ####
ZMap允许用户选择指定的输出模块。输出模块负责处理由探测模块返回的字段并将它们交给用户。用户可以指定输出的范围并过滤相应字段。
**--list-output-modules**
列出可用输出模块如csv
**-O, --output-module=name**
选择输出模块默认值为csv
**--output-args=args**
传递给输出模块的参数
**-f, --output-fields=fields**
输出列表,以逗号分割
**--output-filter**
通过指定相应的探测模块来过滤输出字段
#### 附加选项 ####
**-C, --config=filename**
加载配置文件,可以指定其他路径。
**-q, --quiet**
不再是每秒刷新输出
**-g, --summary**
在扫描结束后打印配置和结果汇总信息
**-v, --verbosity=n**
日志详细程度0-5默认值= 3
**-h, --help**
打印帮助并退出
**-V, --version**
打印版本并退出
----------
### 附加信息 ###
#### TCP SYN 扫描 ####
在执行TCP SYN扫描时ZMap需要指定一个目标端口和以供扫描的源端口范围。
**-p, --target-port=port**
扫描的TCP端口例如 443
**-s, --source-port=port|range**
发送扫描数据包的源端口(例如 40000-50000
**警示!** ZMap 依赖 Linux 内核以 RST 包应答 SYN/ACK 包从而关闭扫描打开的连接。ZMap 是在以太网层完成包的发送的,这样做是为了减少跟踪打开的 TCP 连接和路由寻路带来的内核开销。因此如果您有跟踪连接建立的防火墙规则如netfilter的规则类似于`-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`将阻止SYN/ACK包到达内核。这不会妨碍到ZMap记录应答但它会阻止RST包被送回最终连接会在超时后断开。我们强烈建议您在执行ZMap时选择一组主机上未使用且防火墙允许访问的端口加在`-s`后(如 `-s '50000-60000'` )。
#### ICMP Echo 请求扫描 ####
虽然在默认情况下ZMap执行的是TCP SYN扫描但它也支持使用ICMP echo请求扫描。在这种扫描方式下ICMP echo请求包被发送到每个主机并以收到ICMP 应答包作为答复。实施ICMP扫描可以通过选择icmp_echoscan扫描模块来执行如下
$ zmap --probe-module=icmp_echoscan
#### UDP 数据报扫描 ####
ZMap 还额外支持 UDP 探测,它会发出任意 UDP 数据报给每个主机并能接收UDP或ICMP不可达的应答。ZMap 支持通过 `--probe-args` 命令行选项来选择四种不同的 UDP payload 方式可在命令行打印的ASCII文本text、十六进制hex、外部文件file以及需要动态生成字段的模板template。为了得到 UDP 响应,请使用 `-f` 参数,确保您指定的 'data' 字段处于输出字段范围内。
下面的例子将发送两个字节 'ST'即PCAnywhere的'status'请求到UDP端口5632。
$ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -
下面的例子将发送字节“0X02”即SQL服务器的 'client broadcast'请求到UDP端口1434。
$ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -
下面的例子将发送一个NetBIOS状态请求到UDP端口137。使用一个ZMap自带的payload文件。
    $ zmap -M udp -p 137 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -
下面的例子将发送SIP的'OPTIONS'请求到UDP端口5060。使用ZMap自带的模板文件。
    $ zmap -M udp -p 5060 --probe-args=file:sip_options.tpl -N 100 -f saddr,data -o -
UDP payload 模板仍处于实验阶段。当您使用一个以上的发送线程(`-T`)时,可能会遇到崩溃,并且相比静态 payload 会有明显的性能下降。模板仅仅是一个 payload 文件,其中将一个或多个字段说明封装在 `${}` 序列中。某些协议特别是SIP需要 payload 来反映包中的源和目的地址其他协议如端口映射和DNS则包含每次请求都需随机生成的字段否则 ZMap 扫描多宿主系统时将会发出警告。
以下的payload模板将发送SIP OPTIONS请求到每一个目的地
OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
From: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT};tag=${RAND_DIGIT=8}
To: sip:${RAND_ALPHA=8}@${DADDR}
Call-ID: ${RAND_DIGIT=10}@${SADDR}
CSeq: 1 OPTIONS
Contact: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT}
Content-Length: 0
Max-Forwards: 20
User-Agent: ${RAND_ALPHA=8}
Accept: text/plain
就像在上面的例子中展示的那样对于大多数SIP的正常实现会在每行行末添加\r\n并且在请求的末尾一定包含\r\n\r\n。ZMap 的 examples/udp-payloads 目录下有一个可以使用的示例sip_options.tpl
下面的字段正在如今的模板中实施:
- **SADDR**: 源IP地址的点分十进制格式
- **SADDR_N**: 源IP地址的网络字节序格式
- **DADDR**: 目的IP地址的点分十进制格式
- **DADDR_N**: 目的IP地址的网络字节序格式
- **SPORT**: 源端口的ascii格式
- **SPORT_N**: 源端口的网络字节序格式
- **DPORT**: 目的端口的ascii格式
- **DPORT_N**: 目的端口的网络字节序格式
- **RAND_BYTE**: 随机字节(0-255),长度由=(长度) 参数决定
- **RAND_DIGIT**: 随机数字0-9长度由=(长度) 参数决定
- **RAND_ALPHA**: 随机大写字母A-Z长度由=(长度) 参数决定
- **RAND_ALPHANUM**: 随机大写字母A-Z和随机数字0-9长度由=(长度) 参数决定
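作为示意,下面这段 shell 片段(假设性示例,并非 ZMap 模板引擎的实际实现)演示了与 `RAND_ALPHA` 和 `RAND_DIGIT` 字段等价的随机串生成方式:

```shell
# 生成与 ${RAND_ALPHA=n} 等价的 n 个随机大写字母
rand_alpha() {
    LC_ALL=C tr -dc 'A-Z' < /dev/urandom | head -c "$1"
}

# 生成与 ${RAND_DIGIT=n} 等价的 n 个随机数字
rand_digit() {
    LC_ALL=C tr -dc '0-9' < /dev/urandom | head -c "$1"
}

# 拼出类似上文模板中 branch 参数的随机值
echo "branch=$(rand_alpha 6).$(rand_digit 10)"
```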
### 配置文件 ###
ZMap支持使用配置文件代替在命令行上指定所有的需求选项。配置中可以通过每行指定一个长名称的选项和对应的值来创建
interface "eth1"
source-ip 1.1.1.4-1.1.1.8
gateway-mac b4:23:f9:28:fa:2d # upstream gateway
cooldown-time 300 # seconds
blacklist-file /etc/zmap/blacklist.conf
output-file ~/zmap-output
quiet
summary
然后ZMap就可以按照配置文件和一些必要的附加参数运行了
$ zmap --config=~/.zmap.conf --target-port=443
### 详细 ###
ZMap可以在屏幕上生成多种类型的输出。默认情况下Zmap将每隔1秒打印出相似的基本进度信息。可以通过设置`--quiet`来禁用。
0:01 12%; send: 10000 done (15.1 Kp/s avg); recv: 144 143 p/s (141 p/s avg); hits: 1.44%
ZMap同样也可以根据扫描配置打印如下消息可以通过`--verbosity`参数加以控制。
Aug 11 16:16:12.813 [INFO] zmap: started
Aug 11 16:16:12.817 [DEBUG] zmap: no interface provided. will use eth0
Aug 11 16:17:03.971 [DEBUG] cyclic: primitive root: 3489180582
Aug 11 16:17:03.971 [DEBUG] cyclic: starting point: 46588
Aug 11 16:17:03.975 [DEBUG] blacklist: 3717595507 addresses allowed to be scanned
Aug 11 16:17:03.975 [DEBUG] send: will send from 1 address on 28233 source ports
Aug 11 16:17:03.975 [DEBUG] send: using bandwidth 10000000 bits/s, rate set to 14880 pkt/s
Aug 11 16:17:03.985 [DEBUG] recv: thread started
ZMap还支持在扫描之后打印出一个可grep的汇总信息类似于下面这样可以通过调用`--summary`来实现。
cnf target-port 443
cnf source-port-range-begin 32768
cnf source-port-range-end 61000
cnf source-addr-range-begin 1.1.1.4
cnf source-addr-range-end 1.1.1.8
cnf maximum-packets 4294967295
cnf maximum-runtime 0
cnf permutation-seed 0
cnf cooldown-period 300
cnf send-interface eth1
cnf rate 45000
env nprocessors 16
exc send-start-time Fri Jan 18 01:47:35 2013
exc send-end-time Sat Jan 19 00:47:07 2013
exc recv-start-time Fri Jan 18 01:47:35 2013
exc recv-end-time Sat Jan 19 00:52:07 2013
exc sent 3722335150
exc blacklisted 572632145
exc first-scanned 1318129262
exc hit-rate 0.874102
exc synack-received-unique 32537000
exc synack-received-total 36689941
exc synack-cooldown-received-unique 193
exc synack-cooldown-received-total 1543
exc rst-received-unique 141901021
exc rst-received-total 166779002
adv source-port-secret 37952
adv permutation-gen 4215763218
### 结果输出 ###
ZMap 可以通过**输出模块**生成不同格式的结果。默认情况下ZMap 只支持 **csv** 输出,但可以通过编译支持 **redis** 和 **json**。可以使用**输出过滤**来过滤发送到输出模块的结果,输出哪些字段由用户指定。如果没有指定输出文件ZMap 将以 csv 格式把结果输出到标准输出,而不会产生结果文件。您也可以编写自己的输出模块;请参阅“编写探测和输出模块”。
**-o, --output-file=p**
输出写入文件地址
**-O, --output-module=p**
调用自定义输出模块
**-f, --output-fields=p**
输出以逗号分隔各字段的列表
**--output-filter=filter**
在给定的探测字段上实施输出过滤
**--list-output-modules**
列出可用输出模块
**--list-output-fields**
列出给定探测模块可用的输出字段
#### 输出字段 ####
ZMap有很多可以基于应答包输出的字段。这些字段可以通过在给定探测模块上运行`--list-output-fields`来查看。
$ zmap --probe-module="tcp_synscan" --list-output-fields
saddr string: 应答包中的源IP地址
saddr-raw int: 网络提供的整形形式的源IP地址
daddr string: 应答包中的目的IP地址
daddr-raw int: 网络提供的整形形式的目的IP地址
ipid int: 应答包中的IP识别号
ttl int: 应答包中的ttl存活时间
sport int: TCP 源端口
dport int: TCP 目的端口
seqnum int: TCP 序列号
acknum int: TCP Ack号
window int: TCP 窗口
classification string: 包类型
success int: 应答包是否成功
repeat int: 是否是来自主机的重复响应
cooldown int: 是否是在冷却时间内收到的响应
timestamp-str string: 响应抵达时的时间戳使用ISO8601格式
timestamp-ts int: 响应抵达时的时间戳使用纪元开始的秒数
timestamp-us int: 时间戳的微秒部分(例如 从'timestamp-ts'的几微秒)
可以通过使用`--output-fields=fields`或`-f`来选择输出字段,任意组合的输出字段可以被指定为逗号分隔的列表。例如:
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
#### 过滤输出 ####
在传到输出模块之前探测模块生成的结果可以先过滤。过滤被实施在探测模块的输出字段上。过滤使用简单的过滤语法写成类似于SQL通过ZMap的**--output-filter**选项来实施。输出过滤通常用于过滤掉重复的结果或仅传输成功的响应到输出模块。
过滤表达式的形式为`<字段名> <操作> <>`。`<>`的类型必须是一个字符串或无符号整数,并且与`<字段名>`的类型匹配。对于整数比较,有效的操作是`=`、`!=`、`<`、`>`、`<=`、`>=`;字符串比较的操作是`=`和`!=`。`--list-output-fields`会打印那些可供探测模块选择的字段和类型,然后退出。
复合型的过滤操作,可以通过使用`&&`(逻辑与)和`||`(逻辑或)这样的运算符来组合出特殊的过滤操作。
**示例**
书写一个过滤器,过滤掉重复应答,仅显示成功的应答:
--output-filter="success = 1 && repeat = 0"
过滤出包中含RST并且TTL大于10的分类或者包中含SYNACK的分类
--output-filter="(classification = rst && ttl > 10) || classification = synack"
#### CSV ####
csv模块将会生成以逗号分隔各输出请求字段的文件。例如以下的指令将生成下面的CSV至名为`output.csv`的文件。
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
----------
响应, 源地址, 目的地址, 源端口, 目的端口, 序列号, 应答, 是否是冷却模式, 是否重复, 时间戳
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
rst, 148.8.49.150, 10.0.0.9, 80, 41672, 0, 1135824975, 0, 0,2013-08-15 18:55:47.692
rst, 50.165.166.206, 10.0.0.9, 80, 38858, 0, 535206863, 0, 0,2013-08-15 18:55:47.694
rst, 65.55.203.135, 10.0.0.9, 80, 50008, 0, 4071709905, 0, 0,2013-08-15 18:55:47.700
synack, 50.57.166.186, 10.0.0.9, 80, 60650, 2813653162, 993314545, 0, 0,2013-08-15 18:55:47.704
synack, 152.75.208.114, 10.0.0.9, 80, 52498, 460383682, 4040786862, 0, 0,2013-08-15 18:55:47.707
synack, 23.72.138.74, 10.0.0.9, 80, 33480, 810393698, 486476355, 0, 0,2013-08-15 18:55:47.710
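这样的 CSV 结果很容易用标准命令行工具做后续统计。下面是一个可运行的示意(先构造一个与上例格式一致的小样本,再用 awk 按分类计数):

```shell
# 构造一个与上文格式一致的小样本(另存为 sample.csv避免覆盖真实结果
cat > sample.csv <<'EOF'
classification, saddr, daddr, sport, dport, seqnum, acknum, in_cooldown, is_repeat, timestamp
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0, 2013-08-15 18:55:47.681
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0, 2013-08-15 18:55:47.683
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0, 2013-08-15 18:55:47.691
EOF

# 按第一列(分类)计数,跳过表头
awk -F', *' 'NR > 1 { count[$1]++ } END { for (c in count) print c, count[c] }' sample.csv
```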
#### Redis ####
Redis 输出模块允许将地址添加到 Redis 队列中,而不是保存到文件,从而方便 ZMap 与后续的处理工具结合使用。
**注意!** ZMap默认不会编译Redis功能。如果您想要将Redis功能编译进ZMap源码中可以在CMake的时候加上`-DWITH_REDIS=ON`。
### 黑名单和白名单 ###
ZMap 同时支持对网络前缀做黑名单和白名单。如果 ZMap 不加黑名单和白名单参数,它将会扫描所有的 IPv4 地址(包括本地的、保留的以及组播地址)。如果指定了黑名单文件,那么在黑名单中的网络前缀将不再扫描;如果指定了白名单文件,只有那些在白名单内的网络前缀才会扫描。白名单和黑名单文件可以协同使用黑名单优先于白名单例如如果您在白名单中指定了10.0.0.0/8并在黑名单中指定了10.1.0.0/16那么10.1.0.0/16将不会扫描。白名单和黑名单文件可以在命令行中指定,如下所示:
**-b, --blacklist-file=path**
文件用于记录黑名单子网以CIDR无类域间路由的表示法例如192.168.0.0/16
**-w, --whitelist-file=path**
文件用于记录限制扫描的子网以CIDR的表示法例如192.168.0.0/16
黑名单文件的每行都需要以CIDR的表示格式书写一个单一的网络前缀。允许使用`#`加以备注。例如:
# IANA英特网编号管理局记录的用于特殊目的的IPv4地址
# http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
# 更新于2013-05-22
0.0.0.0/8 # RFC1122: 网络中的所有主机
10.0.0.0/8 # RFC1918: 私有地址
100.64.0.0/10 # RFC6598: 共享地址空间
127.0.0.0/8 # RFC1122: 回环地址
169.254.0.0/16 # RFC3927: 本地链路地址
172.16.0.0/12 # RFC1918: 私有地址
192.0.0.0/24 # RFC6890: IETF协议预留
192.0.2.0/24 # RFC5737: 测试地址
192.88.99.0/24 # RFC3068: IPv6转换到IPv4的任意播
192.168.0.0/16 # RFC1918: 私有地址
192.18.0.0/15 # RFC2544: 检测地址
198.51.100.0/24 # RFC5737: 测试地址
203.0.113.0/24 # RFC5737: 测试地址
240.0.0.0/4 # RFC1112: 预留地址
255.255.255.255/32 # RFC0919: 广播地址
# IANA记录的用于组播的地址空间
# http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
# 更新于2013-06-25
224.0.0.0/4 # RFC5771: 组播/预留地址
如果您只是想扫描因特网中随机的一部分地址,请使用随机采样的方式,而不是使用白名单和黑名单。
**注意!**ZMap默认设置使用`/etc/zmap/blacklist.conf`作为黑名单文件其中包含有本地的地址空间和预留的IP空间。通过编辑`/etc/zmap/zmap.conf`可以改变默认的配置。
### 速度限制与抽样 ###
默认情况下ZMap将以您当前网络所能支持的最快速度扫描。以我们对于常用硬件的经验这普遍是理论上Gbit以太网速度的95-98%这可能比您的上游提供商可处理的速度还要快。ZMap是不会自动的根据您的上游提供商来调整发送速率的。您可能需要手动的调整发送速率来减少丢包和错误结果。
**-r, --rate=pps**
设置最大发送速率以包/秒为单位
**-B, --bandwidth=bps**
设置发送速率以比特/秒为单位支持G、M和K后缀。该设置会覆盖`--rate`参数。
ZMap 同样支持对 IPv4 地址空间进行随机采样,可以指定最大目标数和/或最长运行时间。由于对主机的扫描次序是随机排列生成的,将扫描限制在 N 个目标,就相当于对 IPv4 地址空间随机抽样 N 个主机。命令选项如下:
**-n, --max-targets=n**
探测目标上限数量
**-N, --max-results=n**
结果上限数量(累积收到这么多结果后退出)
**-t, --max-runtime=s**
发送数据包时间长度上限(以秒为单位)
**-s, --seed=n**
种子用以选择地址的排列方式。使用不同ZMap执行扫描操作时将种子设成相同的值可以保证相同的扫描顺序。
举个例子,如果您想要多次扫描同样的一百万个互联网主机,您可以设定排序种子和扫描主机的上限数量,大致如下所示:
zmap -p 443 -s 3 -n 1000000 -o results
为了确定哪一百万主机将要被扫描,您可以执行预扫,只打印数据包而不真正发送:
    zmap -p 443 -s 3 -n 1000000 --dryrun | grep daddr \
        | awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
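这条管道中的文本处理部分可以先离线验证。下面用一行假设格式的 dryrun 输出演示同样的 grep/awk/sed 提取过程(示例行仅为示意,实际格式以 `--dryrun` 的输出为准):

```shell
# 一行示意性的 dryrun 输出(具体格式为假设)
line='tcp { source: 40555 | dest: 443 } ip { saddr: 10.0.0.9 | daddr: 115.237.116.119 | checksum: 0x0 }'

# 与上文相同的提取方式:取 "daddr: " 之后、下一个 " |" 之前的内容
echo "$line" | grep daddr | awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
```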
### 发送多个数据包 ###
ZMap支持向每个主机发送多个探测。增加这个数量既会增加扫描时间又会增加到达的主机数量。然而我们发现增加的扫描时间每多一次探测约增加100远远大于增加到达的主机数量每多一次探测约增加1
**-P, --probes=n**
向每个IP发出的独立扫描个数默认值=1
----------
### 示例应用程序 ###
ZMap 专为向大量主机发起连接并寻找那些正确响应的主机而设计。然而,我们意识到许多用户需要执行一些后续处理,如执行应用程序级别的握手。例如,在 80 端口实施 TCP SYN 扫描的用户可能只是想要实施一个简单的 GET 请求,而扫描 443 端口的用户可能对 TLS 握手如何完成感兴趣。
#### Banner获取 ####
我们收录了一个示例程序banner-grab伴随ZMap使用可以让用户从监听状态的TCP服务器上接收到消息。banner-grab连接到服务器上可选地发送一个消息然后打印出收到的第一个消息。这个工具可以用来获取banner例如HTTP服务的回复的具体指令、telnet的登录提示或SSH服务的字符串。
这个例子寻找了1000个监听80端口的服务器并向每个服务器发送一个简单的GET请求将它们经base64编码的响应存储至http-banners.out
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > out
如果想知道更多使用`banner-grab`的细节,可以参考`examples/banner-grab`中的README文件。
**注意!** 如例子中那样同时运行 ZMap 和 banner-grab可能会显著影响对方的表现和精度。确保不要让 ZMap 占满 banner-grab-tcp 的并发连接,否则 banner-grab 的标准输入读取将会落后,导致标准输出的写入被阻塞。我们推荐使用较慢的 ZMap 扫描速率,同时将 banner-grab-tcp 的并发提升至 3000 以内(注意,并发连接数 >1000 时需要您使用 `ulimit -SHn 100000` 和 `ulimit -HHn 100000` 来增加每个进程的最大文件描述符数量)。当然这些参数取决于您服务器的性能和连接成功率hit-rate我们鼓励开发者在运行大型扫描之前先进行小样本的试验。
#### 建立套接字 ####
我们也收录了另一种形式的banner-grab就是forge-socket它重复利用服务器发回的SYN+ACK连接并最终取得banner。在`banner-grab-tcp`中ZMap向每个服务器发送一个SYN并监听服务器发回的带有SYN+ACK的应答。运行ZMap的主机的内核收到应答后会发送RST因为没有处于活动状态的连接与该包关联。这之后程序banner-grab必须创建一个新的TCP连接到同一服务器来获取数据。
在forge-socket中我们利用同名的内核模块它使我们可以创建任意参数的TCP连接。该模块可以抑制内核发送RST包并且通过创建套接字来重用SYN+ACK的参数通过这个套接字收发数据和我们平时使用的连接套接字并没有什么不同。
要使用forge-socket您需要forge-socket内核模块它可从[github][1]上获得。您需要 `git clone git@github.com:ewust/forge_socket.git` 至ZMap源码根目录然后cd进入forge_socket目录运行make最后以root身份运行 `insmod forge_socket.ko` 安装该内核模块。
您也需要告知内核不要发送RST包。一个简单的在全系统禁用RST包的方法是**iptables**。以root身份运行`iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`即可,当然您也可以加上`--dport X`将禁用局限于所扫描的端口X上。扫描完成后以root身份运行`iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`移除这项设置即可。
现在应该可以建立forge-socket的ZMap示例程序了。运行需要使用**extended_file**ZMap输出模块
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
./forge-socket -c 500 -d ./http-req > ./http-banners.out
详细内容可以参考`examples/forge-socket`目录下的README。
----------
### 编写探测和输出模块 ###
ZMap可以通过**probe modules**扩展支持不同类型的扫描,通过**output modules**追加不同类型的输出结果。注册过的探测和输出模块可以在命令行中列出:
**--list-probe-modules**
列出安装过的探测模块
**--list-output-modules**
列出安装过的输出模块
#### 输出模块 ####
ZMap 的输出和输出后处理可以通过实现和注册扫描器的**output modules**来扩展。输出模块在接收每一个应答包时都会收到一个回调。虽然默认提供的模块仅有简单的输出,但这些模块同样支持更复杂的扫描后处理例如去除重复或输出AS号码来代替IP地址
通过定义一个新的output_module结构体来创建输出模块并在[output_modules.c][2]中注册:
typedef struct output_module {
const char *name; // 在命令行如何引出输出模块
unsigned update_interval; // 以秒为单位的更新间隔
output_init_cb init; // 在扫描初始化的时候调用
output_update_cb start; // 在开始的扫描的时候调用
output_update_cb update; // 每次更新间隔调用,秒为单位
output_update_cb close; // 扫描终止后调用
output_packet_cb process_ip; // 接收到应答时调用
const char *helptext; // 会在--list-output-modules时打印在屏幕上
} output_module_t;
输出模块必须有一个名称,以便在命令行中引用。每收到一个经**probe module**过滤的应答包,都会调用`process_ip`回调。应答是否被认定为成功并不确定比如它可以是一个TCP的RST。这些回调必须定义成匹配`output_packet_cb`定义的函数:
int (*output_packet_cb) (
ipaddr_n_t saddr, // network-order格式的扫描主机IP地址
ipaddr_n_t daddr, // network-order格式的目的IP地址
const char* response_type, // 发送模块的数据包分类
int is_repeat, // {0: 主机的第一个应答, 1: 后续的应答}
int in_cooldown, // {0: 非冷却状态, 1: 扫描处于冷却中}
const u_char* packet, // 指向结构体iphdr中IP包的指针
size_t packet_len // 包的长度以字节为单位
);
输出模块还可以注册回调,在扫描初始化的时候(诸如打开输出文件的任务)、扫描开始阶段(诸如记录黑名单的任务)、常规间隔(诸如状态更新的任务)以及关闭的时候(诸如关掉所有打开的文件描述符)执行。这些回调提供完整的扫描配置入口和实时状态:
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
被定义在[output_modules.h][3]中。在[src/output_modules/module_csv.c][4]中有可用示例。
#### 探测模块 ####
数据包由探测模块构造由此可以创建抽象包并对应答分类。ZMap默认拥有两个扫描模块`tcp_synscan`和`icmp_echoscan`。默认情况下ZMap使用`tcp_synscan`来发送TCP SYN包并对每个主机的响应分类如打开时收到SYN+ACK或关闭时收到RST。ZMap允许开发者编写自己的探测模块使用如下的API
任何类型的扫描的实施都需要在`send_module_t`结构体内开发和注册必要的回调:
typedef struct probe_module {
const char *name; // 如何在命令行调用扫描
size_t packet_length; // 探测包有多长(必须是静态的)
const char *pcap_filter; // 对收到的响应实施PCAP过滤
size_t pcap_snaplen; // libpcap可捕获的最大字节数
uint8_t port_args; // 设为1如果需要使用ZMap的--target-port
// 用户指定
probe_global_init_cb global_initialize; // 在扫描初始化会时被调用一次
probe_thread_init_cb thread_initialize; // 每个包缓存区的线程中被调用一次
probe_make_packet_cb make_packet; // 每个主机更新包的时候被调用一次
probe_validate_packet_cb validate_packet; // 每收到一个包被调用一次,
// 如果包无效返回0
// 非零则覆盖。
probe_print_packet_cb print_packet; // 如果在dry-run模式下被每个包都调用
probe_classify_packet_cb process_packet; // 由区分响应的接收器调用
probe_close_cb close; // 扫描终止后被调用
fielddef_t *fields // 该模块指定的字段的定义
int numfields // 字段的数量
} probe_module_t;
在扫描操作初始化时会调用一次`global_initialize`,可以用来实施一些必要的全局配置和初始化操作。然而,`global_initialize`并不能访问包缓冲区,那是线程指定的。代替的是,`thread_initialize`在每个发送线程初始化的时候被调用,提供对于缓冲区的访问,可以用来构建探测包和全局的源和目的值。此回调应用于构建与主机无关的数据包结构,只有特定值(目的主机和校验和)需要随着每个主机更新。例如,以太网头部信息在扫描中不会变更除了由NIC硬件计算的校验和因此可以事先定义以减少扫描时的开销。
回调`make_packet`会为每个被扫描的主机调用,让**probe module**更新主机指定的值同时提供IP地址、一个非透明的验证字符串和探测数目如下所示。探测模块负责在探测包中放置尽可能多的验证字符串以便当服务器返回的应答信息很少时探测模块也能验证它。例如针对TCP SYN扫描tcp_synscan探测模块会使用TCP源端口和序列号的格式存储验证字符串响应包SYN+ACK将包含预期的目的端口和确认号。
int make_packet(
void *packetbuf, // 包的缓冲区
ipaddr_n_t src_ip, // network-order格式源IP
ipaddr_n_t dst_ip, // network-order格式目的IP
uint32_t *validation, // 探测中的确认字符串
int probe_num // 如果向每个主机发送多重探测,
// 该值为对于主机我们
// 正在实施的探测数目
);
扫描模块也应该定义`pcap_filter`、`validate_packet`和`process_packet`。只有符合PCAP过滤的包才会被扫描。举个例子在一个TCP SYN扫描的情况下我们只想要调查TCP SYN / ACK或RST TCP数据包并利用类似`tcp && tcp[13] & 4 != 0 || tcp[13] == 18`的过滤方法。`validate_packet`函数将会被每个满足PCAP过滤条件的包调用。如果验证返回的值非零将会调用`process_packet`函数,并使用包中被定义成的`fields`字段和数据填充字段集。如下代码为TCP synscan探测模块处理了一个数据包。
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
{
struct iphdr *ip_hdr = (struct iphdr *)&packet[sizeof(struct ethhdr)];
struct tcphdr *tcp = (struct tcphdr*)((char *)ip_hdr
+ (sizeof(struct iphdr)));
fs_add_uint64(fs, "sport", (uint64_t) ntohs(tcp->source));
fs_add_uint64(fs, "dport", (uint64_t) ntohs(tcp->dest));
fs_add_uint64(fs, "seqnum", (uint64_t) ntohl(tcp->seq));
fs_add_uint64(fs, "acknum", (uint64_t) ntohl(tcp->ack_seq));
fs_add_uint64(fs, "window", (uint64_t) ntohs(tcp->window));
if (tcp->rst) { // RST packet
fs_add_string(fs, "classification", (char*) "rst", 0);
fs_add_uint64(fs, "success", 0);
} else { // SYNACK packet
fs_add_string(fs, "classification", (char*) "synack", 0);
fs_add_uint64(fs, "success", 1);
}
}
--------------------------------------------------------------------------------
via: https://zmap.io/documentation.html
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://github.com/ewust/forge_socket/
[2]:https://github.com/zmap/zmap/blob/v1.0.0/src/output_modules/output_modules.c
[3]:https://github.com/zmap/zmap/blob/master/src/output_modules/output_modules.h
[4]:https://github.com/zmap/zmap/blob/master/src/output_modules/module_csv.c

Autojump 一个高级的cd命令用以快速浏览 Linux 文件系统
================================================================================
对于那些主要通过控制台或终端使用 Linux 命令行来工作的 Linux 用户来说,他们真切地感受到了 Linux 的强大。 然而在 Linux 的分层文件系统中进行浏览有时或许是一件头疼的事,尤其是对于那些新手来说。
现在,有一个用 Python 写的名为 `autojump` 的 Linux 命令行实用程序,它是 Linux [cd][1] 命令的高级版本。
![Autojump 命令](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Command.jpg)
*Autojump 浏览 Linux 文件系统的最快方式*
这个应用原本由 Joël Schaerer 编写,现在由 William Ting 维护。
Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行更轻松的目录浏览。与传统的 `cd` 命令相比autojump 能够更加快速地浏览至目的目录。
#### autojump 的特色 ####
- 免费且开源的应用,在 GPL V3 协议下发布。
- 自主学习的应用,从用户的浏览习惯中学习。
- 更快速地浏览。不必包含子目录的名称。
- 对于大多数的标准 Linux 发行版本,能够在软件仓库中下载得到,包括 Debian (testing/unstable)、Ubuntu、Mint、Arch、Gentoo、Slackware、CentOS、RedHat 和 Fedora。
- 也能在其他平台中使用,例如 OS X(使用 Homebrew) 和 Windows (通过 Clink 来实现)
- 使用 autojump 你可以跳至任何特定的目录或一个子目录。你还可以打开文件管理器来到达某个目录,并查看你在某个目录中所待时间的统计数据。
#### 前提 ####
- 版本号不低于 2.6 的 Python
### 第 1 步: 做一次全局系统升级 ###
1. 以 **root** 用户的身份,做一次系统更新或升级,以此保证你安装有最新版本的 Python。
    # apt-get update && apt-get upgrade && apt-get dist-upgrade [在基于 APT 的系统中]
    # yum update && yum upgrade [在基于 YUM 的系统中]
    # dnf update && dnf upgrade [在基于 DNF 的系统中]
**注** : 这里特别提醒,在基于 YUM 或 DNF 的系统中,更新和升级执行相同的行动,大多数时间里它们是通用的,这点与基于 APT 的系统不同。
### 第 2 步: 下载和安装 Autojump ###
2. 正如前面所言,在大多数的 Linux 发行版本的软件仓库中, autojump 都可获取到。通过包管理器你就可以安装它。但若你想从源代码开始来安装它,你需要克隆源代码并执行 python 脚本,如下面所示:
#### 从源代码安装 ####
若没有安装 git请安装它。我们需要使用它来克隆 git 仓库。
    # apt-get install git [在基于 APT 的系统中]
    # yum install git [在基于 YUM 的系统中]
    # dnf install git [在基于 DNF 的系统中]
一旦安装完 git以常规用户身份登录然后像下面那样来克隆 autojump
$ git clone git://github.com/joelthelion/autojump.git
接着,使用 `cd` 命令切换到下载目录。
$ cd autojump
赋予安装脚本可执行权限,并以 root 用户身份来运行它。
# chmod 755 install.py
# ./install.py
#### 从软件仓库中安装 ####
3. 假如你不想麻烦,你可以以 **root** 用户身份从软件仓库中直接安装它:
在 Debian, Ubuntu, Mint 及类似系统中安装 autojump :
    # apt-get install autojump
为了在 Fedora, CentOS, RedHat 及类似系统中安装 autojump, 你需要启用 [EPEL 软件仓库][2]。
# yum install epel-release
# yum install autojump
或
# dnf install autojump
### 第 3 步: 安装后的配置 ###
4. 在 Debian 及其衍生系统 (Ubuntu, Mint,…) 中, 激活 autojump 应用是非常重要的。
为了暂时激活 autojump 应用,即直到你关闭当前会话或打开一个新的会话之前让 autojump 均有效,你需要以常规用户身份运行下面的命令:
    $ source /usr/share/autojump/autojump.sh
为了使得 autojump 在 BASH shell 中永久有效,你需要运行下面的命令。
$ echo '. /usr/share/autojump/autojump.sh' >> ~/.bashrc
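为避免多次运行时在 `~/.bashrc` 中重复追加同一行,可以用下面这个小片段(示意写法)先检查再追加:

```shell
# 只在目标文件尚未包含该行时才追加,重复执行也不会写入两次
AJ_LINE='. /usr/share/autojump/autojump.sh'
RC_FILE="$HOME/.bashrc"
grep -qxF "$AJ_LINE" "$RC_FILE" 2>/dev/null || echo "$AJ_LINE" >> "$RC_FILE"
```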
### 第 4 步: Autojump 的预测试和使用 ###
5. 如先前所言, autojump 将只跳到先前 `cd` 命令到过的目录。所以在我们开始测试之前,我们要使用 `cd` 切换到一些目录中去,并创建一些目录。下面是我所执行的命令。
$ cd
$ cd
$ cd Desktop/
$ cd
$ cd Documents/
$ cd
$ cd Downloads/
$ cd
$ cd Music/
$ cd
$ cd Pictures/
$ cd
$ cd Public/
$ cd
$ cd Templates
$ cd
$ cd /var/www/
$ cd
$ mkdir autojump-test/
$ cd
$ mkdir autojump-test/a/ && cd autojump-test/a/
$ cd
$ mkdir autojump-test/b/ && cd autojump-test/b/
$ cd
$ mkdir autojump-test/c/ && cd autojump-test/c/
$ cd
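上面这一长串命令也可以写成循环(效果等价的示意写法)。注意 autojump 只记录交互式 shell 中的 `cd` 行为,脚本中的 `cd` 不会进入其数据库,此处仅演示测试目录的创建:

```shell
# 创建测试用的目录结构 ~/autojump-test/{a,b,c}
for sub in a b c; do
    mkdir -p "$HOME/autojump-test/$sub"
done

# 列出创建结果
ls "$HOME/autojump-test"
```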
现在,我们已经切换到过上面所列的目录,并为了测试创建了一些目录,一切准备就绪,让我们开始吧。
**需要记住的一点** : `j` 是 autojump 的一个封装,你可以使用 j 来代替 autojump反之亦然。
6. 使用 -v 选项查看安装的 autojump 的版本。
$ j -v
or
$ autojump -v
![查看 Autojump 的版本](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Autojump-Version.png)
*查看 Autojump 的版本*
7. 跳到先前到过的目录 /var/www
$ j www
![跳到目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-To-Directory.png)
*跳到目录*
8. 跳到先前到过的子目录 /home/avi/autojump-test/b而不键入子目录的全名。
$ jc b
![跳到子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Child-Directory.png)
*跳到子目录*
9. 使用下面的命令,你就可以从命令行打开一个文件管理器,例如 GNOME Nautilus ,而不是跳到一个目录。
$ jo www
![跳到目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Direcotory.png)
*跳到目录*
![在文件管理器中打开目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Directory-in-File-Browser.png)
*在文件管理器中打开目录*
你也可以在一个文件管理器中打开一个子目录。
$ jco c
![打开子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory1.png)
*打开子目录*
![在文件管理器中打开子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory-in-File-Browser1.png)
*在文件管理器中打开子目录*
10. 查看每个文件夹的关键权重,以及所有目录的总权重等统计数据。文件夹的关键权重代表在这个文件夹中所花的总时间,目录权重是列表中目录的数目。
$ j --stat
![查看目录统计数据](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Statistics.png)
*查看目录统计数据*
**提醒** : autojump 存储其运行日志和错误日志的地方是文件夹 `~/.local/share/autojump/`。千万不要重写这些文件,否则你将失去你所有的统计状态结果。
$ ls -l ~/.local/share/autojump/
![Autojump 的日志](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Logs.png)
*Autojump 的日志*
11. 假如需要,你只需运行下面的命令就可以查看帮助 :
$ j --help
![Autojump 的帮助和选项](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-help-options.png)
*Autojump 的帮助和选项*
### 功能需求和已知的冲突 ###
- autojump 只能让你跳到那些你已经用 `cd` 到过的目录。一旦你用 `cd` 切换到一个特定的目录,这个行为就会被记录到 autojump 的数据库中,这样 autojump 才能工作。不管怎样,在你设定了 autojump 后,你不能跳到那些你没有用 `cd` 到过的目录。
- 你不能跳到名称以破折号(-)开头的目录。你可以考虑阅读我有关[操作文件或目录][3]的文章,尤其是操作那些以 `-` 或其他特殊字符开头的文件和目录的内容。
- 在 BASH shell 中autojump 通过修改 `$PROMPT_COMMAND` 环境变量来跟踪目录的行为,所以强烈建议不要去重写 `$PROMPT_COMMAND` 这个环境变量。若你需要添加其他的命令到现存的 `$PROMPT_COMMAND` 环境变量中,请添加到`$PROMPT_COMMAND` 环境变量的最后。
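下面是一个安全地向现有 `$PROMPT_COMMAND` 末尾追加命令的示例写法(其中 `history -a` 只是举例用的自定义命令,可换成你自己需要的命令):

```shell
# 在现有 $PROMPT_COMMAND 的末尾追加命令,而不是整个覆盖它
# ${PROMPT_COMMAND:+$PROMPT_COMMAND;} 的含义:若变量非空,则展开为“原值加分号”
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND;}history -a"
```

这样即使 autojump 已经修改过 `$PROMPT_COMMAND`,它的目录跟踪功能也不会被破坏。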
### 结论: ###
假如你是一个命令行用户, autojump 是你必备的实用程序。它可以简化许多事情。它是一个在命令行中浏览 Linux 目录的绝佳的程序。请自行尝试它,并在下面的评论框中让我知晓你宝贵的反馈。保持联系,保持分享。喜爱并分享,帮助我们更好地传播。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/autojump-a-quickest-way-to-navigate-linux-filesystem/
作者:[Avishek Kumar][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/cd-command-in-linux/
[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[3]:http://www.tecmint.com/manage-linux-filenames-with-special-characters/
在 Linux 中安装 Google 环聊桌面客户端
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/google-hangouts-header-664x374.jpg)
先前,我们已经介绍了如何[在 Linux 中安装 Facebook Messenger][1] 和[WhatsApp 桌面客户端][2]。这些应用都是非官方的应用。今天,我将为你推荐另一款非官方的应用,它就是 [Google 环聊][3]
当然,你可以在 Web 浏览器中使用 Google 环聊,但相比于此,使用桌面客户端会更加有趣。好奇吗?那就跟着我看看如何 **在 Linux 中安装 Google 环聊** 以及如何使用它吧。
### 在 Linux 中安装 Google 环聊 ###
我们将使用一个名为 [yakyak][4] 的开源项目,它是一个针对 LinuxWindows 和 OS X 平台的非官方 Google 环聊客户端。我将向你展示如何在 Ubuntu 中使用 yakyak但我相信在其他的 Linux 发行版本中,你可以使用同样的方法来使用它。在了解如何使用它之前,让我们先看看 yakyak 的主要特点:
- 发送和接收聊天信息
- 创建和更改对话(重命名、添加成员)
- 离开或删除对话
- 桌面提醒通知
- 打开或关闭通知
- 对于图片上传,支持拖放、复制粘贴或使用上传按钮
- Hangupsbot 房间同步(显示真实的用户头像)
- 显示行内图片
- 回放历史消息
听起来不错吧,你可以从下面的链接下载到该软件的安装文件:
- [下载 Google 环聊客户端 yakyak][5]
下载的文件是压缩的。解压后,你将看到一个名称类似于 linux-x64 或 linux-x32 的目录,其名称取决于你的系统。进入这个目录,你应该可以看到一个名为 yakyak 的文件。双击这个文件来启动它。
![在 Linux 中运行 Run Google 环聊](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_3.jpeg)
当然,你需要键入你的 Google 账号来认证。
![在 Ubuntu 中设置 Google 环聊](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_2.jpeg)
一旦你通过认证后,你将看到如下的画面,在这里你可以和你的 Google 联系人进行聊天。
![Google_Hangout_Linux_4](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_4.jpeg)
假如你想看看会话参与者的头像,你可以选择 `查看 -> 展示对话缩略图`。
![Google 环聊缩略图](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_5.jpeg)
当有新的信息时,你将得到桌面提醒。
![在 Ubuntu 中 Google 环聊的桌面提醒](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_1.jpeg)
### 值得一试吗? ###
我让你尝试一下,并决定 **在 Linux 中安装 Google 环聊客户端** 是否值得。若你想要官方的应用,你可以看看这些 [拥有原生 Linux 客户端的即时消息应用程序][6]。不要忘记分享你在 Linux 中使用 Google 环聊的体验。
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-google-hangouts-linux/
作者:[Abhishek][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/facebook-messenger-linux/
[2]:http://itsfoss.com/whatsapp-linux-desktop/
[3]:http://www.google.com/+/learnmore/hangouts/
[4]:https://github.com/yakyak/yakyak
[5]:https://github.com/yakyak/yakyak
[6]:http://itsfoss.com/best-messaging-apps-linux/
Linux 常见问题解答:如何修复 tar 错误“由于之前的错误tar 以失败状态退出”Exiting with failure status due to previous errors
================================================================================
> **问题**:当我试着用 tar 命令创建一个压缩文件时,总是在执行过程中失败,并且报错“Exiting with failure status due to previous errors”由于之前的错误tar 以失败状态退出)。是什么导致了这个错误,该如何解决?
![](https://farm9.staticflickr.com/8863/17631029953_1140fe2dd3_b.jpg)
如果你在执行 tar 命令时遇到了下面的错误,那么最有可能的原因是:你对想用 tar 命令打包的某个文件不具备读权限。
tar: Exiting with failure status due to previous errors
那么我们要如何确定引起错误的这个(些)文件呢?或者如何确定其它的错误根源?
事实上tar 命令会打印出所谓的“之前的错误”previous errors到底是什么但如果你让 tar 运行在详细模式verbose mode例如 -cvf那么你很容易在大量输出中错过这些信息。要找到这些信息你可以像下面那样把 tar 的标准输出stdout过滤掉。
    $ tar cvzf backup.tgz my_program/ > /dev/null
然后你会看到tar输出的标准错误(stderr)信息。
tar: my_program/src/lib/.conf.db.~lock~: Cannot open: Permission denied
tar: Exiting with failure status due to previous errors
你可以从上面的例子中看到引起错误的原因的确是“读权限被拒绝”Permission denied
要解决这个问题,只要简单地更改(或移除)问题文件的权限然后重新执行tar命令即可。
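下面是一个示意性的操作流程(`my_program/` 为假设的目录名),先用 `find` 按权限位找出所有者缺少读权限的文件,补上权限后再重新打包:

```shell
# 找出所有者没有读权限的普通文件(按权限位判断)
find my_program/ -type f ! -perm -u+r

# 为这些文件补上读权限,然后重新执行 tar
chmod -R u+r my_program/
tar czf backup.tgz my_program/
```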
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/tar-exiting-with-failure-status-due-to-previous-errors.html
作者:[Dan Nanni][a]
译者:[XLCYun(袖里藏云)](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
Linux 有问必答:如何在 Linux 中安装兄弟牌打印机
================================================================================
> **提问**:我有一台兄弟牌 HL-2270DW 激光打印机,我想从我的 Linux 机器上打印文档。我该如何在我的电脑上安装合适的驱动并使用它?
兄弟牌以买得起的[紧凑型激光打印机][1]而闻名。你可以用低于 200 美元的价格买到高质量的、支持 WiFi 和双面打印的激光打印机,而且价格还在下降。最棒的是,它们还提供良好的 Linux 支持,因此你可以在 Linux 中下载并安装它们的打印机驱动。我在一年前买了台 [HL-2270DW][2],我对它的性能和可靠性都很满意。
下面是如何在Linux中安装和配置兄弟打印机驱动。本篇教程中我会演示安装HL-2270DW激光打印机的USB驱动。首先通过USB线连接你的打印机到Linux上。
### 准备 ###
在准备阶段,进入[兄弟牌官方支持网站][3]输入你的打印机型号比如HL-2270DW进行搜索。
![](https://farm1.staticflickr.com/301/18970034829_6f3a48d817_c.jpg)
进入下面页面后选择你的Linux平台。对于Debian、Ubuntu或者其他衍生版选择“Linux (deb)”。对于Fedora、CentOS或者RHEL选择“Linux (rpm)”。
![](https://farm1.staticflickr.com/380/18535558583_cb43240f8a_c.jpg)
下一页你会找到你打印机的LPR驱动和CUPS包装器驱动。前者是命令行驱动后者允许你通过网页管理和配置你的打印机。尤其是基于CUPS的GUI对本地、远程打印机维护非常有用。建议你安装这两个驱动。点击“Driver Install Tool”下载安装文件。
![](https://farm1.staticflickr.com/329/19130013736_1850b0d61e_c.jpg)
运行安装文件之前你需要在64位的Linux系统上做另外一件事情。
因为兄弟打印机驱动是为32位的Linux系统开发的,因此你需要按照下面的方法安装32位的库。
在早期的 Debian6.0 或更早)或者 Ubuntu11.04 或更早)上,安装下面的包。
$ sudo apt-get install ia32-libs
对于已经引入多架构的新的Debian或者Ubuntu而言你可以安装下面的包
$ sudo apt-get install lib32z1 lib32ncurses5
上面的包代替了ia32-libs包。或者你只需要安装
$ sudo apt-get install lib32stdc++6
如果你使用的是基于Red Hat的Linux你可以安装
$ sudo yum install glibc.i686
### 驱动安装 ###
现在解压下载的驱动文件。
$ gunzip linux-brprinter-installer-2.0.0-1.gz
接下来像下面这样运行安装文件。
$ sudo sh ./linux-brprinter-installer-2.0.0-1
你会被要求输入打印机的型号。输入你打印机的型号比如“HL-2270DW”。
![](https://farm1.staticflickr.com/292/18535599323_1a94f6dae5_b.jpg)
同意 GPL 协议之后,接受接下来的任何默认设置。
![](https://farm1.staticflickr.com/526/19130014316_5835939501_b.jpg)
现在LPR/CUPS打印机驱动已经安装好了。接下来要配置你的打印机了。
### 打印机配置 ###
我接下来就要通过基于CUPS的网页管理和配置兄弟打印机了。
首先验证CUPS守护进程已经启动。
$ sudo netstat -nap | grep 631
打开一个浏览器,输入 http://localhost:631。你会看到如下的打印机管理界面。
![](https://farm1.staticflickr.com/324/18968588688_202086fc72_c.jpg)
进入“Administration”选项卡点击打印机选项下的“Manage Printers”。
![](https://farm1.staticflickr.com/484/18533632074_0526cccb86_c.jpg)
你应该可以在下面的页面中看到你的打印机HL-2270DW。点击打印机名。
在下拉菜单“Administration”中选择“Set As Server Default”。这会将你的打印机设置为系统默认打印机。
![](https://farm1.staticflickr.com/472/19150412212_b37987c359_c.jpg)
当被要求验证时输入你的Linux登录信息。
![](https://farm1.staticflickr.com/511/18968590168_807e807f73_c.jpg)
现在基本的配置已经完成了。为了测试打印打开任何文档浏览程序比如PDF浏览器并打印。你会看到“HL-2270DW”被列出并被设置为默认打印机。
![](https://farm4.staticflickr.com/3872/18970034679_6d41d75bf9_c.jpg)
打印机应该可以工作了。你可以通过CUPS的网页看到打印机状态和管理打印机任务。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-brother-printer-linux.html
作者:[Dan Nanni][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/brother_printers
[2]:http://xmodulo.com/go/hl_2270dw
[3]:http://support.brother.com/
为什么 MySQL 里的 ibdata1 文件不断增长?
================================================================================
![ibdata1 file](https://www.percona.com/blog/wp-content/uploads/2013/08/ibdata1-file.jpg)
我们在 [Percona 支持团队][1]经常收到关于 MySQL 的 ibdata1 文件的这类问题。
当监控服务器发出一个关于 MySQL 服务器存储的报警时,恐慌就开始了:磁盘快要满了。
一番调查后,你意识到大多数磁盘空间被 InnoDB 的共享表空间 ibdata1 占用。而你已经启用了 [innodb_file_per_table][2],所以问题是:
### ibdata1 里存储了什么? ###
当你启用了 innodb_file_per_table 后,表会被存储在它们各自的表空间里,但是共享表空间仍然用于存储其它 InnoDB 的内部数据:
- 数据字典也就是InnoDB表的元数据
- 更改缓冲区change buffer
- 双写缓冲区
- 撤销日志
在 [Percona 服务器][3]上,有一些参数可以用来避免这部分数据增长过大。例如,你可以通过 [innodb_ibuf_max_size][4] 限制更改缓冲区的最大大小,或通过 [innodb_doublewrite_file][5] 把双写缓冲区存储到一个单独的文件中。
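在 Percona 服务器 5.5 上,一个示意性的 my.cnf 配置片段如下(参数取值仅为示例,请根据实际负载调整;`innodb_doublewrite_file` 的路径也是假设的):

```ini
[mysqld]
# 限制更改缓冲区的最大大小Percona 服务器特有参数)
innodb_ibuf_max_size = 100M
# 把双写缓冲区放到单独的文件中,避免其占用 ibdata1
innodb_doublewrite_file = /var/lib/mysql/ib_doublewrite
```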
在 MySQL 5.6 版中,你也可以创建额外的撤销表空间,这样撤销日志将拥有自己的文件,而不再存储到 ibdata1 中。详情请查看[文档][6]。
### 是什么引起 ibdata1 急剧增长? ###
当MySQL出现问题通常我们需要执行的第一个命令是
    SHOW ENGINE INNODB STATUS\G
这将展示给我们一些很有价值的信息。我们开始检查**事务**部分,然后我们会发现这个:
---TRANSACTION 36E, ACTIVE 1256288 sec
MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
show engine innodb status
Trx read view will not see trx with id >= 36F, sees < 36F
这是一个最常见的原因:一个创建于 14 天前的相当老的事务。它的状态是**活动的**,这意味着 InnoDB 已经为它创建了数据快照,因此需要在**撤销**日志中维护旧页面,以保证从事务开始那一刻起数据库的一致性视图。如果你的数据库写入量很大,那就意味着存储了大量的撤销页。
如果你找不到任何长时间运行的事务,你也可以监控 InnoDB 状态中的另一个变量“**History list length历史记录列表长度**”,它展示了等待清除操作的数量。在这种情况下,问题经常发生的原因是:清除线程(或者老版本中的主线程)处理撤销记录的速度赶不上这些记录进来的速度。
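一个示意性的提取方法(假设你已经把 `SHOW ENGINE INNODB STATUS\G` 的输出保存到了 innodb_status.txt 文件中):

```shell
# 从保存的状态输出中提取 History list length 的数值,便于脚本化监控
awk '/History list length/ {print $NF}' innodb_status.txt
```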
### 我怎么检查 ibdata1 里存储了什么? ###
很不幸MySQL 没有提供查看 ibdata1 共享表空间中存储了什么的方法但有两个工具会很有帮助。第一个是马克·卡拉汉Mark Callaghan制作的一个修改版 innochecksum发布在[这个缺陷报告][7]中。
它相当易于使用:
# ./innochecksum /var/lib/mysql/ibdata1
0 bad checksum
13 FIL_PAGE_INDEX
19272 FIL_PAGE_UNDO_LOG
230 FIL_PAGE_INODE
1 FIL_PAGE_IBUF_FREE_LIST
892 FIL_PAGE_TYPE_ALLOCATED
2 FIL_PAGE_IBUF_BITMAP
195 FIL_PAGE_TYPE_SYS
1 FIL_PAGE_TYPE_TRX_SYS
1 FIL_PAGE_TYPE_FSP_HDR
1 FIL_PAGE_TYPE_XDES
0 FIL_PAGE_TYPE_BLOB
0 FIL_PAGE_TYPE_ZBLOB
0 other
3 max index_id
全部 20608 个页中有 19272 个撤销日志页。**这占了表空间的 93%**。
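这个百分比可以这样简单计算出来(页数取自上面 innochecksum 的示例输出):

```shell
# 19272 个撤销日志页占全部 20608 个页的百分比(向下取整)
echo "19272 20608" | awk '{printf "%d%%\n", int($1/$2*100)}'
# 输出93%
```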
第二个检查表空间内容的方式是杰里米·科尔Jeremy Cole制作的 [InnoDB Ruby 工具][8]。它是一个检查 InnoDB 内部结构的更高级的工具。例如,我们可以使用 space-summary 参数得到每个页面及其数据类型的列表,然后用标准的 Unix 工具统计**撤销日志**页的数量:
# innodb_space -f /var/lib/mysql/ibdata1 space-summary | grep UNDO_LOG | wc -l
19272
尽管在这种特殊情况下 innochecksum 更快也更容易使用,我还是推荐你使用杰里米的工具去了解更多关于 InnoDB 内部的数据分布及其内部结构的知识。
好,现在我们知道问题所在。下一个问题:
### 我该怎么解决这个问题? ###
这个问题的答案很简单:如果那个事务还能提交,就提交吧。如果不能,你就必须杀掉对应的线程,这会启动回滚过程。回滚会停止 ibdata1 的增长,但显然你的软件存在缺陷或者出了一些错误。既然你已经知道如何鉴定问题所在,接下来就需要使用你自己的调试工具或通用查询日志找出是谁或者什么引起的问题。
如果问题发生在清除线程上,解决方法通常是升级到新版本,新版中使用一个独立的清除线程替代主线程。更多信息请查看[文档][9]。
### 有什么方法可以回收已使用的空间么? ###
没有目前不可能有一个容易并且快速的方法。InnoDB表空间从不收缩...见[10年老漏洞报告][10]最新更新自詹姆斯(谢谢):
当你删除一些行时,这些页会被标记为已删除以便稍后重用,但这些空间从不会被回收。唯一的办法是使用新的 ibdata1 重建数据库。要做到这一点,你需要先用 mysqldump 做一个逻辑全备份,然后停止 MySQL 并删除所有数据库文件、ib_logfile*、ibdata1* 文件。当你再次启动 MySQL 的时候,它将会创建一个新的共享表空间。最后,恢复逻辑备份。
### 总结 ###
当 ibdata1 文件增长太快时,通常是 MySQL 里长时间运行的、被遗忘的事务引起的。尝试尽快解决问题(提交或者杀死事务),因为不经过痛苦而缓慢的 mysqldump 过程,你就无法回收浪费掉的磁盘空间。
也非常推荐监控数据库以避免这些问题。我们的 [MySQL 监控插件][11]包括一个 Nagios 脚本,如果发现了一个运行时间过长的事务,它可以提醒你。
--------------------------------------------------------------------------------
via: https://www.percona.com/blog/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/
作者:[Miguel Angel Nieto][a]
译者:[wyangsun](https://github.com/wyangsun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.percona.com/blog/author/miguelangelnieto/
[1]:https://www.percona.com/products/mysql-support
[2]:http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
[3]:https://www.percona.com/software/percona-server
[4]:https://www.percona.com/doc/percona-server/5.5/scalability/innodb_insert_buffer.html#innodb_ibuf_max_size
[5]:https://www.percona.com/doc/percona-server/5.5/performance/innodb_doublewrite_path.html?id=percona-server:features:percona_innodb_doublewrite_path#innodb_doublewrite_file
[6]:http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html#innodb-undo-tablespace
[7]:http://bugs.mysql.com/bug.php?id=57611
[8]:https://github.com/jeremycole/innodb_ruby
[9]:http://dev.mysql.com/doc/innodb/1.1/en/innodb-improved-purge-scheduling.html
[10]:http://bugs.mysql.com/bug.php?id=1341
[11]:https://www.percona.com/software/percona-monitoring-plugins