Merge pull request #2 from LCTT/master

Update for July 14, 2015
This commit is contained in:
XLCYun 2015-07-14 19:15:31 +08:00
commit c1027ccd6e
36 changed files with 3781 additions and 1900 deletions

View File

@ -1,26 +1,26 @@
Lolcat: a command-line tool that outputs rainbow colors in the Linux terminal
================================================================================
Those who believe the Linux command line is dull, boring and devoid of any fun are mistaken. Here are a few articles about Linux that show just how fun and "mischievous" Linux can be.
- [20 Fun Things About Linux Commands and the Linux Terminal][1]
- [Fun in the Terminal: 6 Amusing Linux Command-Line Tools][2]
- [Fun in the Linux Terminal: Playing with Word and Character Counts][3]
In this article I will introduce a small utility called "lolcat", which produces rainbow-colored output in the terminal.
![The Lolcat command producing rainbow-colored output in the terminal](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Lolcat.png)
*The Lolcat command producing rainbow-colored output in the terminal*
#### What is lolcat? ####
Lolcat is a utility for the Linux, BSD and OSX platforms. It is similar to the [cat command][4] and adds rainbow coloring to `cat`-style output. Lolcat is mainly used to add rainbow colors to text in the Linux terminal.
### Installing Lolcat in Linux ###
**1. The lolcat utility is available in the software repositories of many Linux distributions, but the packaged versions are rather old; you can download and install the latest version of lolcat from its git repository.**
Since lolcat is a ruby gem, an up-to-date version of Ruby must be installed on your system:
# apt-get install ruby      [on APT-based systems]
# yum install ruby          [on YUM-based systems]
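The article's exact build steps are not shown in this excerpt; as a rough sketch, once Ruby is in place the gem itself can be installed directly from rubygems.org (assuming it is published there under the name lolcat):
# gem install lolcat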
@ -53,7 +53,7 @@ Lolcat is a utility for the Linux, BSD and OSX platforms. It is similar to the [cat
![Lolcat help documentation](http://www.tecmint.com/wp-content/uploads/2015/06/Lolcat-Help1.png)
*Lolcat help documentation*
**4. Next, pipe lolcat together with other commands, such as ps, date and cal:**
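The article's exact commands are not shown in this excerpt; typical invocations look like this:
# ps aux | lolcat
# date | lolcat
# cal | lolcat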
@ -63,15 +63,15 @@ Lolcat help documentation
![Output of the ps command](http://www.tecmint.com/wp-content/uploads/2015/06/ps-command-output.png)
*Output of the ps command*
![Output of date](http://www.tecmint.com/wp-content/uploads/2015/06/Date.png)
*Output of date*
![Output of cal](http://www.tecmint.com/wp-content/uploads/2015/06/Cal.png)
*Output of cal*
**5. Use lolcat to display the code of a script file:**
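For example (the file name is just a placeholder):
# lolcat /path/to/script.sh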
@ -79,18 +79,18 @@ Calendar output
![Displaying code with lolcat](http://www.tecmint.com/wp-content/uploads/2015/06/Script-Output.png)
*Displaying code with lolcat*
**6. Pipe lolcat together with the figlet command. Figlet is an application that renders large banner text made up of ordinary screen characters. We can pipe the output of figlet into lolcat to get colorful output like the following:**
# echo I ❤ Tecmint | lolcat
# figlet I Love Tecmint | lolcat
![Colorful text](http://www.tecmint.com/wp-content/uploads/2015/06/Colorful-Text.png)
*Colorful text*
**Note**: ❤ is a unicode character. To install figlet, get the package with apt or yum as shown below:
# apt-get install figlet
# yum install figlet
@ -102,7 +102,7 @@ Calendar output
![Animated text](http://www.tecmint.com/wp-content/uploads/2015/06/Animated-Text.gif)
*Animated text*
Here the `-a` option stands for Animation and `-d` for duration. In the example above, the animation runs 500 times.
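A command along these lines produces that kind of animated output (the file name is illustrative):
# lolcat -a -d 500 script.sh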
@ -112,7 +112,7 @@ Calendar output
![Listing files in color](http://www.tecmint.com/wp-content/uploads/2015/06/List-Files-Colorfully.png)
*Listing files in color*
**9. Pipe lolcat together with cowsay. cowsay is a configurable thinking or talking cow; the program supports other animals as well.**
@ -136,15 +136,15 @@ Calendar output
skeleton snowman sodomized-sheep stegosaurus stimpy suse three-eyes turkey
turtle tux unipony unipony-smaller vader vader-koala www
Output of cowsay piped into lolcat, using the gnu cowfile:
# cowsay -f gnu ☛ Tecmint ☚ is the best Linux Resource Available online | lolcat
![Cowsay with Lolcat](http://www.tecmint.com/wp-content/uploads/2015/06/Cowsay-with-Lolcat.png)
*Cowsay with Lolcat*
**Note**: You can pipe lolcat together with any other command to get colorful output in the terminal.
**10. You can create aliases for your most frequently used commands to give their output rainbow colors. Below, an alias is created for the ls -l command, which lists the contents of a directory.**
@ -153,23 +153,24 @@ Calendar output
![Colorful alias commands](http://www.tecmint.com/wp-content/uploads/2015/06/Alias-Commands-with-Colorful.png)
*Colorful alias commands*
You can create an alias for any command as suggested above. To make an alias permanent, you need to add the corresponding line (the one above is the alias for ls -l) to your ~/.bashrc file, then log out and log back in for the change to take effect.
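A minimal sketch of such an alias and of making it permanent (the alias name is illustrative; the article's exact line sits in the collapsed part of the diff):
# alias lolls='ls -l | lolcat'
# echo "alias lolls='ls -l | lolcat'" >> ~/.bashrc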
That's all for now. I'd like to know: had you ever noticed the lolcat utility before? Did you like this article? Your suggestions and feedback are welcome in the comments section below. Like and share us, and help us spread the word.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/lolcat-command-to-output-rainbow-of-colors-in-linux-terminal/
Author: [Avishek Kumar][a]
Translator: [FSSlc](https://github.com/FSSlc)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.tecmint.com/author/avishek/
[1]:https://linux.cn/article-2831-1.html
[2]:https://linux.cn/article-4128-1.html
[3]:https://linux.cn/article-4088-1.html
[4]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/

View File

@ -0,0 +1,108 @@
Animated Wallpaper Adds Live Backgrounds to Linux Distros
================================================================================
**We know you want a stylish Ubuntu desktop to show off.**
![Live Wallpaper](http://i.imgur.com/9JIUw5p.gif)
*Live Wallpaper*
It takes only a little effort to put together a great-looking work environment on Linux. Today we take [another look][2] at something that may have been sitting in the back of your mind: a free, open-source tool that can add extra sparkle to your screenshots and screencasts.
It's called **Live Wallpaper** and (as you can probably guess) it replaces the standard static desktop background with an animated one powered by OpenGL.
Best of all, it is very easy to install on Ubuntu.
### Animated Wallpaper Themes ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/animated-wallpaper-ubuntu-750x383.jpg)
Live Wallpaper is not the only app of its kind, but it is one of the best.
It comes with a number of different themes out of the box.
These range from the subtle (Noise) to the frenetic (Nexus) and cover everything in between; there is even an obligatory clock wallpaper inspired by the Ubuntu Phone welcome screen:
- Circles — a clock surrounded by an evolving circle aura, inspired by the Ubuntu Phone
- Galaxy — a spinning galaxy whose size and position can be customised
- Gradient Clock — a polar clock overlaid on a basic gradient
- Nexus — brightly coloured particles firing across the screen
- Noise — a bokeh design similar to the iOS dynamic wallpaper
- Photoslide — an animated grid of photos from a folder (~/Photos by default)
Live Wallpaper is **fully open source**, so there is nothing to stop imaginative artists with the know-how (and the patience) from creating gorgeous themes of their own.
### Settings & Features ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/live-wallpaper-gui-settings.jpg)
Every theme can be configured or customised in some way, though some themes have more options than others.
In the Nexus theme, for example (pictured above), you can change the number, colour, size and frequency of the pulse particles.
The preferences app also provides a set of **general options** that apply to all themes, including:
- Running the live wallpaper at the login screen
- Setting a custom background for the animation
- Adjusting the FPS (including showing the FPS on screen)
- Specifying the multi-monitor behaviour
With so many options, it is easy to put together a desktop background that suits you.
### Drawbacks ###
#### No Desktop Icons ####
While Live Wallpaper is running you cannot add, open or edit files or folders on the desktop.
The Preferences app does list an option that supposedly lets you do this. It may only work on older releases; in our testing on Ubuntu 14.10 it did nothing. In testing we did find that the option works when the desktop wallpaper is set to a .png image file; it does not have to be a transparent .png, any .png file will do.
#### Resource Usage ####
Animated wallpapers consume more system resources than standard wallpapers.
We are not saying it will consume a large amount of resources at all times, but it did in our testing, so users of low-spec machines and laptops should use this kind of software with caution. Use a [system monitoring tool][2] to keep an eye on CPU and GPU load.
#### Quitting the App ####
For me the biggest "bug" is, without question, the lack of a "quit" option.
Sure, the animated wallpaper can be fully turned off from the indicator applet and the Preferences tool, but quitting the indicator applet itself? There is no way, short of running the command `pkill livewallpaper` in a terminal.
### How to Install Live Wallpaper in Ubuntu 14.04 LTS+ ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/terminal-command-750x146.jpg)
To install Live Wallpaper on Ubuntu 14.04 LTS and above, you first need to add the official PPA to your software sources.
The quickest way to do this is to run the following commands in a terminal:
sudo add-apt-repository ppa:fyrmir/livewallpaper-daily
sudo apt-get update && sudo apt-get install livewallpaper
You should also install the indicator applet, which lets you quickly turn the animated wallpaper on or off and pick a theme from the menu, as well as the graphical configuration tool, which lets you configure each theme to your own taste.
sudo apt-get install livewallpaper-config livewallpaper-indicator
Once everything is installed, you can launch Live Wallpaper and its preferences tool from the Unity Dash.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/live-wallpaper-app-launcher.png)
Annoyingly, the program does not open the indicator applet automatically after installation; it merely adds itself to the startup items, so a quick log out and log back in will make it appear.
### Summary ###
If you are stuck with a dull, static desktop and dream of something livelier, give it a spin. Also, tell us what kind of animated wallpapers you would like to see!
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/05/animated-wallpaper-adds-live-backgrounds-to-linux-distros
Author: [Joey-Elijah Sneddon][a]
Translator: [Love-xuan](https://github.com/Love-xuan)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2012/11/live-wallpaper-for-ubuntu
[2]:http://www.omgubuntu.co.uk/2011/11/5-system-monitoring-tools-for-ubuntu

View File

@ -0,0 +1,93 @@
The Chinese Edition of the LFS Book Is Released: How to Build Your Own Linux Distribution
================================================================================
Have you ever thought about building your own Linux distribution? Every Linux user thinks about making a distribution of their own at some point, at least once. I am no exception: as a Linux newbie I also thought about developing my own distribution. Building a Linux distribution from the ground up is known as Linux From Scratch (LFS).
Before starting, here is a summary of what I have learned about LFS:
**1. Those who want to build their own Linux distribution should understand the difference between building a distribution (building means starting from scratch) and configuring an existing one.**
If you just want to tweak the boot screen, customise the login page and get a better look and feel, you can pick any Linux distribution and personalise it to your liking. There are plenty of configuration tools to help you.
If you want to package up all the necessary files, the boot loader and the kernel, choose what gets included, and then compile everything yourself, then what you need is Linux From Scratch (LFS).
**Note**: If you only want to customise the look and feel of a Linux system, this guide is not for you. But if you really want to build a Linux distribution and want to understand how to get started, along with some other information, then this guide is written exactly for you.
**2. Benefits of building a Linux distribution (LFS)**
- You will learn how a Linux system works internally
- You will build a flexible system that adapts to your needs
- The system you build (LFS) will be very compact, because you have absolute control over what is and is not included
- Your LFS system will be more secure
**3. Drawbacks of building a Linux distribution (LFS)**
Building a Linux system means putting everything needed together and compiling it. This requires a lot of reading, patience and time, and you need a working Linux system and enough disk space to build LFS.
**4. Interestingly, Gentoo/GNU Linux is in some sense the closest thing to LFS. Both Gentoo and LFS are customised Linux systems compiled entirely from source.**
**5. You should be an experienced Linux user, with a solid understanding of compiling packages and resolving dependencies, and you should be an expert at shell scripting.**
Knowing a programming language (preferably C) will make things easier. But even if you are a newcomer, as long as you are a good learner who can pick things up quickly, you can get started too. The most important thing is not to lose your enthusiasm along the way.
If you are not determined enough, I am afraid you will give up halfway through the LFS process.
**6. Now you need step-by-step instructions for building a Linux system. The LFS book is the official guide to building LFS. Our partner site tradepub has also produced an LFS guide for our readers, which is likewise free.**
You can download the Linux From Scratch ebook from the link below:
[![](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-From-Scratch.gif)][1]
Download: [Linux From Scratch][1]
**7. The current version of LFS is 7.7, available in a systemd edition and a non-systemd edition**
The official LFS website is: http://www.linuxfromscratch.org/
On the official site you can browse the LFS book, as well as the books of related projects such as BLFS, online, or download them in different formats.
- LFS (non-systemd edition):
- PDF version: http://www.linuxfromscratch.org/lfs/downloads/stable/LFS-BOOK-7.7.pdf
- Single-page HTML version: http://www.linuxfromscratch.org/lfs/downloads/stable/LFS-BOOK-7.7-NOCHUNKS.html
- Packaged multi-page HTML version: http://www.linuxfromscratch.org/lfs/downloads/stable/LFS-BOOK-7.7.tar.bz2
- LFS (systemd edition):
- PDF version: http://www.linuxfromscratch.org/lfs/downloads/7.7-systemd/LFS-BOOK-7.7-systemd.pdf
- Single-page HTML version: http://www.linuxfromscratch.org/lfs/downloads/7.7-systemd/LFS-BOOK-7.7-systemd-NOCHUNKS.html
- Packaged multi-page HTML version: http://www.linuxfromscratch.org/lfs/downloads/7.7-systemd/LFS-BOOK-7.7-systemd.tar.bz2
**8. Linux 中国/LCTT has translated the LFS book (7.7, systemd edition) into Chinese**
Thanks to the efforts of LCTT members, we have finished translating the LFS 7.7 systemd edition of the book.
The book can be read online at: https://linux.cn/lfs/LFS-BOOK-7.7-systemd/index.html .
Versions in other formats will be released later.
Thanks to the members who took part in the translation: wxy, ictlyh, dongfengweixiao, zpl1025, H-mudcup, Yuking-net, kevinSJ.
### About Linux From Scratch ###
The book was created by Gerard Beekmans, the LFS project leader, and edited by Matthew Burgess and Bruce Dubbs, both co-leaders of the LFS project. The book covers a lot of ground, running to 338 pages.
It covers: an introduction to LFS, preparing for the build, building LFS, setting up the boot scripts, making LFS bootable, and the appendices. It contains everything you could want to know about the LFS project.
The book also gives an estimated compile time for each package, using the time taken to compile the first package as a reference. Everything is presented in a way that is easy to understand, even for newcomers.
If you have plenty of time and are genuinely interested in building your own Linux distribution, you should definitely not miss the chance to download this ebook (it is a free download). All you need to do is follow the book and start building your own Linux system on a working Linux system (any distribution, with enough disk space), investing your time and your enthusiasm.
If Linux fascinates you and you want to build your own Linux distribution with your own hands, this is all you need to know for now; for everything else, refer to the book linked above.
Please let me know about your experience reading and using the book: is this exhaustive LFS guide easy enough to follow? If you have already built an LFS system and have some advice for our readers, comments and feedback are welcome.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-custom-linux-distribution-from-scratch/
Author: [Avishek Kumar][a]
Translator: [wwy-hust](https://github.com/wwy-hust)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.tecmint.com/author/avishek/
[1]:http://tecmint.tradepub.com/free/w_linu01/prgm.cgi

View File

@ -1,6 +1,7 @@
Installing Ruby on Rails on Ubuntu 15.04
================================================================================
In this article we will learn how to install Ruby on Rails on Ubuntu 15.04 using rbenv. We chose Ubuntu as the operating system because Ubuntu is a Linux distribution that ships with plenty of packages and complete documentation, so I think it is the right choice. If you have not installed the latest Ubuntu yet, you can start by [downloading the ISO file][1].
### Installing Ruby ###
@ -9,9 +10,9 @@
sudo apt-get update
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
There are three ways to install Ruby: rbenv, rvm, and building from source. Each has its own advantages, but these days developers tend to prefer rbenv over rvm and source builds. We will install the latest Ruby version, 2.2.2.
Installing with rbenv takes just two simple steps: first install rbenv, and then ruby-build:
cd
git clone git://github.com/sstephenson/rbenv.git .rbenv
@ -28,23 +29,23 @@
rbenv global 2.2.2
ruby -v
We need to install Bundler, but before doing so we will tell rubygems not to install local documentation for every package:
echo "gem: --no-ri --no-rdoc" > ~/.gemrc
gem install bundler
### Configuring Git ###
Before configuring git you should create a GitHub account; you can register for a [GitHub account][2]. We need git as the version control system, so we will set it up to match the GitHub account.
Replace **Name** and **Email address** below with those of your GitHub account:
git config --global color.ui true
git config --global user.name "YOUR NAME"
git config --global user.email "YOUR@EMAIL.com"
ssh-keygen -t rsa -C "YOUR@EMAIL.com"
Next, add the newly generated SSH key to your GitHub account. To do that, copy the output of the command below and [paste it into your GitHub settings page][3]:
cat ~/.ssh/id_rsa.pub
@ -58,7 +59,7 @@
### Installing Rails ###
We need to install a JavaScript runtime such as NodeJS, because these days Rails pulls in a lot of dependencies. This lets us combine and minify your JavaScript to provide a faster production environment.
We need to add a PPA in order to install NodeJS:
@ -66,7 +67,7 @@
sudo apt-get update
sudo apt-get install nodejs
If you run into problems during the update, you can try this command instead:
# Note the new setup script name for Node.js v0.12
curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
@ -74,15 +75,15 @@
# Then install with:
sudo apt-get install -y nodejs
Next, install Rails with this command:
gem install rails -v 4.2.1
Because we are using rbenv, run the following command to make the rails executable available:
rbenv rehash
To make sure Rails has been installed correctly, run rails -v; it should display something like the following:
rails -v
# Rails 4.2.1
@ -91,25 +92,25 @@
### Setting up MySQL ###
You may already be familiar with MySQL. You can install both the MySQL client and server from the Ubuntu repositories, and you can set the root user's password during installation. That information will later go into your Rails application's database.yml file. Install MySQL with the following command:
sudo apt-get install mysql-server mysql-client libmysqlclient-dev
Installing libmysqlclient-dev provides the files needed to compile the mysql2 gem, which Rails uses to connect to MySQL when you set up your application.
### Final Step ###
Let's try creating your first Rails application:
# Use MySQL
rails new myapp -d mysql
# Move into the application directory
cd myapp
# Create Database
rake db:create
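From here the usual Rails workflow applies; for example, you can start the development server and open http://localhost:3000 in a browser (this step is not part of the excerpt above):
rails server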
@ -125,7 +126,7 @@
nano config/database.yml
Then enter your MySQL root user's password.
![](http://blog.linoxide.com/wp-content/uploads/2015/05/root_passw.png)
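The relevant part of config/database.yml ends up looking roughly like this (the values shown are placeholders):
default: &default
  adapter: mysql2
  encoding: utf8
  pool: 5
  username: root
  password: your_mysql_root_password
  socket: /var/run/mysqld/mysqld.sock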
@ -133,7 +134,7 @@
### Summary ###
Rails is written in Ruby, which is also the programming language you use when working with Rails. On Ubuntu 15.04, Ruby on Rails can be installed with rbenv, with rvm, or from source. In this article we used rbenv and MySQL as the database. Please leave any questions or suggestions in the comments section.
--------------------------------------------------------------------------------
@ -141,7 +142,7 @@ via: http://linoxide.com/ubuntu-how-to/installing-ruby-rails-using-rbenv-ubuntu-
Author: [Obet][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

View File

@ -0,0 +1,33 @@
Linus Torvalds Says People Who Believe in an AI Singularity Are Clearly on Drugs
================================================================================
*As usual, his comments should not be taken literally*
![](http://i1-news.softpedia-static.com/images/news2/linus-torvalds-says-people-who-believe-in-an-ai-singularity-are-on-drugs-486373-2.jpg)
**AI is a very hot topic right now, and many high-profile people, including Tesla CEO Elon Musk, have said that sentient AI technology is coming soon and that it will develop to a dangerous threshold. Linus Torvalds clearly does not think so; in his view it is just bad science fiction.**
The idea of AI firing people's imaginations is nothing new, but the recent discussion of a so-called AI singularity has drawn concern from the likes of Elon Musk and Stephen Hawking, who worry that we could create a monster. It is not just them either: forums and comment sections are full of alarmists who do not know whom to believe, or which of the smarter people offering opinions to trust.
As it turns out, Linus Torvalds, the creator of the Linux project, clearly has a completely different view on the matter. He says that in fact nothing of the sort will happen, and we have good reason to believe him. An AI needs someone to write its code, and Linus knows the resistance and obstacles involved in writing AI code. He has most likely already guessed what would be required, and he understands why AI will not become a threat.
### Linus Torvalds and AI ###
Linus Torvalds answered some questions from the community on [slashdot.org][1], and all of his views were very interesting. He has shared his take on [the future of gaming and Valve][2], and this time on AI as well. Although he is usually asked about the kernel and open source, he has opinions on other topics too, and AI engineering happens to be a question he can discuss from a programmer's perspective.
"So I'd expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that. I just don't see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you. The whole 'Singularity' kind of event? Yeah, it's science fiction, and not very good Sci-Fi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really," Linus wrote on Slashdot.
It is your choice whether to believe Elon Musk or Linus, but if there were a bet involved, I would put my money on Linus.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/linus-torvalds-says-people-who-believe-in-an-ai-singularity-are-on-drugs-486373.shtml
Author: [Silviu Stahie][a]
Translator: [martin2011qi](https://github.com/martin2011qi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://classic.slashdot.org/story/15/06/30/0058243
[2]:http://news.softpedia.com/news/linus-torvalds-said-valve-is-exploring-a-second-source-against-microsoft-486266.shtml

View File

@ -1,33 +0,0 @@
Linus Torvalds Says People Who Believe in AI Singularity Are on Drugs
================================================================================
*As usual, his comments should not be taken literally*
![](http://i1-news.softpedia-static.com/images/news2/linus-torvalds-says-people-who-believe-in-an-ai-singularity-are-on-drugs-486373-2.jpg)
**AI is a very hot topic right now, and many high profile people, including Elon Musk, the head of Tesla, have said that we're going to get sentient AIs soon and that it's going to be a dangerous threshold. It seems that Linus Torvalds doesn't feel the same way, and he thinks that it's just bad Sci-Fi.**
The idea of AIs turning on their human creators is not something new, but recently the so-called AI singularity has been discussed, and people like Elon Musk and Stephen Hawking expressed concerns about the possibility of creating a monster. And it's not just them, forums and comments sections are full of alarmist people who don't know what to believe or who take for granted the opinions of much smarter people.
As it turns out, Linus Torvalds, the creator of the Linux project, has a completely different opinion on this matter. In fact, he says that nothing like this will happen, and we have a much better reason to believe him. AI means that someone wrote its code, and Linus knows the power and the obstacles of writing AI code. He's quite likely to guess what's involved and to understand why an AI won't be a threat.
### Linus Torvalds and AIs ###
Linus Torvalds answered some questions from the community on [slashdot.org][1], and all his ideas were very interesting. He talked about the [future of gaming and Valve][2], but he also tackled stuff like AI. He's usually asked stuff about the kernel or open source, but he has opinions on other topics as well. As a matter of fact, the AI subject is something that he can actually talk about as a programmer.
"So I'd expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that. I just don't see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you. The whole 'Singularity' kind of event? Yeah, it's science fiction, and not very good Sci-Fi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really" wrote Linus on Slashdot.
It's your choice whether to believe Elon Musk or Linus, but if betting were involved, I would put my money on Linus.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/linus-torvalds-says-people-who-believe-in-an-ai-singularity-are-on-drugs-486373.shtml
Author: [Silviu Stahie][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://classic.slashdot.org/story/15/06/30/0058243
[2]:http://news.softpedia.com/news/linus-torvalds-said-valve-is-exploring-a-second-source-against-microsoft-486266.shtml

View File

@ -1,155 +0,0 @@
Translating by H-mudcup
10 Truly Amusing Easter Eggs in Linux
================================================================================
![](http://en.wikipedia.org/wiki/File:Adventure_Easteregg.PNG)
The programmer working on Adventure slipped a secret feature into the game. Instead of getting upset about it, Atari decided to give these sorts of “secret features” a name -- “Easter Eggs” because… you know… you hunt for them. Image credit: Wikipedia.
Back in 1979, a video game was being developed for the Atari 2600 -- [Adventure][1].
The programmer working on Adventure slipped a secret feature into the game which, when the user moved an “invisible square” to a particular wall, allowed entry into a “secret room”. That room contained a simple phrase: “Created by [Warren Robinett][2]”.
Atari had a policy against putting author credits in their games, so this intrepid programmer put his John Hancock on the game by being, well, sneaky. Atari only found out about the “secret room” after Warren Robinett had left the company. Instead of getting upset about it, Atari decided to give these sorts of “secret features” a name -- “Easter Eggs” because… you know… you hunt for them -- and declared that they would be putting more of these “Easter Eggs” in future games.
This wasn't the first such “hidden feature” built into a piece of software (that distinction goes to an operating system for the [PDP-10][3] from 1966), but this was the first time it was given a name. And it was the first time it really grabbed the attention of most computer users and gamers.
Linux (and Linux related software) has not been left out. Some truly amusing Easter Eggs have been created for our beloved operating system over the years. Here are some of my personal favorites -- with how to achieve them.
You'll notice, rather quickly, that most of these are experienced via a terminal. That's on purpose. Because terminals are cool. [I should also take this moment to say that if you try to run an application I list, and you do not have it installed, it will not work. You should install it first. Because… computers.]
### Arch : Pac-Man in pacman ###
We're going to start with one just for the [Arch Linux][4] fans out there. You can add a [Pac-Man][5]-esque character to your progress bars in “[pacman][6]” (the Arch package manager). Why this isn't enabled by default is beyond me.
To do this you'll want to edit “/etc/pacman.conf” in your favorite text editor. Under the “# Misc options” section, remove the “#” in front of “Color” and add the line “ILoveCandy”. Because Pac-Man loves candy.
That's it! Next time you fire up a terminal and run pacman, you'll help the little yellow guy get some lunch (or at least some candy).
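After the edit, the relevant part of “/etc/pacman.conf” looks something like this:
# Misc options
Color
ILoveCandy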
### GNU Emacs : Tetris and such ###
![emacs Tetris](http://www.linux.com/images/stories/41373/emacsTetris.jpg)
I don't like emacs. Not even a little bit. But it does play Tetris.
I have a confession to make: I don't like [emacs][7]. Not even a little bit.
Some things fill my heart with gladness. Some things take away all my sadness. Some things ease my troubles. That's [not what emacs does][8].
But it does play Tetris. And that's not nothing. Here's how:
Step 1) Launch emacs. (When in doubt, type “emacs”.)
Step 2) Hit Escape then X on your keyboard.
Step 3) Type “tetris” and hit Enter.
Bored of Tetris? Try “pong”, “snake” and a whole host of other little games (and novelties). Take a look in “/usr/share/emacs/*/lisp/play” for the full list.
### Animals Saying Things ###
The Linux world has a long and glorious history of animals saying things in a terminal. Here are the ones that are the most important to know by heart.
On a Debian-based distro? Try typing “apt-get moo".
![apt-get moo](http://www.linux.com/images/stories/41373/AptGetMoo.jpg)
apt-get moo
Simple, sure. But it's a talking cow. So we like it. Then try “aptitude moo”. It will inform you that “There are no Easter Eggs in this program”.
If there's one thing you should know about [aptitude][9], it's that it's a dirty, filthy liar. If aptitude were wearing pants, the fire could be seen from space. Add a “-v” option to that same command. Keep adding more v's until you force aptitude to come clean.
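For example, each of these tells you a little more:
aptitude -v moo
aptitude -vv moo
aptitude -vvvvv moo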
![](http://www.linux.com/images/stories/41373/AptitudeMoo.jpg)
I think we can all agree, that this is probably the most important feature in aptitude.
I think we can all agree, that this is probably the most important feature in aptitude. But what if you want to put your own words into the mouth of a cow? That's where “cowsay” comes in.
And, don't let the name “cowsay” fool you. You can put words into so much more than just a cow. Like an elephant, Calvin, Beavis and even the Ghostbusters logo. Just do a “cowsay -l” from the terminal to get a complete list of options.
![](http://www.linux.com/images/stories/41373/cowsay.jpg)
You can put words into so much more than just a cow.
Want to get really tricky? You can pipe the output of other applications into cowsay. Try “fortune | cowsay”. Lots of fun can be had.
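For instance, combining it with one of the cowfiles listed by “cowsay -l”:
fortune | cowsay
fortune | cowsay -f tux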
### Sudo Insult Me Please ###
Raise your hand if you've always wanted your computer to insult you when you do something wrong. Hell. I know I have. Try this:
Type “sudo visudo” to open the “sudoers” file. In the top of that file you'll likely see a few lines that start with “Defaults”. At the bottom of that list add “Defaults insults” and save the file.
Now, whenever you mistype your sudo password, your system will lob insults at you. Confidence-boosting phrases such as “Listen, burrito brains, I don't have time to listen to this trash.”, “Are you on drugs?” and “Your mind just hasn't been the same since the electro-shocks, has it?”.
This one has the side-effect of being a rather fun thing to set on a co-worker's computer.
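That “Defaults” block then looks something like this (the env_reset line is a typical pre-existing entry; always edit the file via “sudo visudo”, never directly):
Defaults        env_reset
Defaults        insults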
### Firefox is cheeky ###
Here's one that isn't done from the Terminal! Huzzah!
Open up Firefox. In the URL bar type “about:about”. That will give you a list of all of the “about” pages in Firefox. Nothing too fancy there, right?
Now try “about:mozilla” and you'll be greeted with a quote from the “[Book of Mozilla][10]” -- the holy book of web browsing. One of my other favorites, “about:robots”, is also quite excellent.
![](http://www.linux.com/images/stories/41373/About-Mozilla550.jpg)
The “Book of Mozilla” -- the holy book of web browsing.
### Carefully Crafted Calendar Concoctions ###
Tired of the boring old [Gregorian Calendar][11]? Ready to mix things up a little bit? Try typing “ddate”. This will print the current date on the [Discordian Calendar][12]. You will be greeted by something that looks like this:
“Today is Sweetmorn, the 18th day of Discord in the YOLD 3181”
I hear what you're saying, “But, this isn't an Easter Egg!” Shush. I'll call it an Easter Egg if I want to.
### Instant l33t Hacker Mode ###
Want to feel like you're a super-hacker from a movie? Try setting nmap into “[Script Kiddie][13]” mode (by adding “-oS”) and all of the output will be rendered in the most 3l33t [h@x0r-y way][14] possible.
Example: “nmap -oS - google.com”
Do it. You know you want to. Angelina Jolie would be [super impressed][15].
### The lolcat Rainbow ###
Having awesome Easter Eggs and goodies in your Linux terminal is fine and dandy… but what if you want it to have a little more… pizazz? Enter: lolcat. Take the text output of any program and pipe it through lolcat to super-duper-rainbow-ize it.
![](http://www.linux.com/images/stories/41373/lolcat.jpg)
Take the text output of any program and pipe it through lolcat to super-duper-rainbow-ize it.
### Cursor Chasing Critter ###
![oneko cat](http://www.linux.com/images/stories/41373/onekocat.jpg)
“Oneko” -- the Linux port of the classic “Neko”.
“Oneko” -- the Linux port of the classic “[Neko][16]”.
And that brings us to “oneko” -- the Linux port of the classic “Neko”. Basically a little cat that chases your cursor around the screen.
While this may not qualify as an “Easter Egg” in the strictest sense of the word, it's still fun. And it feels Easter Egg-y.
You can also use different options (such as “oneko -dog”) to use a little dog instead of a cat and a few other tweaks and options. Lots of possibilities for annoying co-workers with this one.
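For example (killall sends the critter home):
oneko &
oneko -dog &
killall oneko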
There you have it! A list of my favorite Linux Easter Eggs (and things of that ilk). Feel free to add your own favorite in the comments section below. Because this is the Internet. And you can do that sort of thing.
--------------------------------------------------------------------------------
via: http://www.linux.com/news/software/applications/820944-10-truly-amusing-linux-easter-eggs-
Author: [Bryan Lunduke][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.linux.com/community/forums/person/56734
[1]:http://en.wikipedia.org/wiki/Adventure_(Atari_2600)
[2]:http://en.wikipedia.org/wiki/Warren_Robinett
[3]:http://en.wikipedia.org/wiki/PDP-10
[4]:http://en.wikipedia.org/wiki/Arch_Linux
[5]:http://en.wikipedia.org/wiki/Pac-Man
[6]:http://www.linux.com/news/software/applications/820944-10-truly-amusing-linux-easter-eggs-#Pacman
[7]:http://en.wikipedia.org/wiki/GNU_Emacs
[8]:https://www.youtube.com/watch?v=AQ4NAZPi2js
[9]:https://wiki.debian.org/Aptitude
[10]:http://en.wikipedia.org/wiki/The_Book_of_Mozilla
[11]:http://en.wikipedia.org/wiki/Gregorian_calendar
[12]:http://en.wikipedia.org/wiki/Discordian_calendar
[13]:http://nmap.org/book/output-formats-script-kiddie.html
[14]:http://nmap.org/book/output-formats-script-kiddie.html
[15]:https://www.youtube.com/watch?v=Ql1uLyuWra8
[16]:http://en.wikipedia.org/wiki/Neko_%28computer_program%29

View File

@ -1,113 +0,0 @@
Animated Wallpaper Adds Live Backgrounds To Linux Distros
================================================================================
**We know a lot of you love having a stylish Ubuntu desktop to show off.**
![Live Wallpaper](http://i.imgur.com/9JIUw5p.gif)
Live Wallpaper
And as Linux makes it so easy to create a stunning workspace with minimal effort, that's understandable!
Today, we're highlighting — [re-highlighting][2] for those of you with long memories — a free, open-source tool that can add extra bling to your OS screenshots and screencasts.
It's called **Live Wallpaper** and (as you can probably guess) it will replace the standard static desktop background with an animated alternative powered by OpenGL.
And the best bit: it can be installed in Ubuntu very easily.
### Animated Wallpaper Themes ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/animated-wallpaper-ubuntu-750x383.jpg)
Live Wallpaper is not the only app of this type, but it is one of the best.
It comes with a number of different themes out of the box.
These range from the subtle (noise) to the frenetic (nexus), and cater to everything in between. There's even the obligatory clock wallpaper inspired by the welcome screen of the Ubuntu Phone:
- Circles — Clock inspired by Ubuntu Phone with evolving circle aura
- Galaxy — Spinning galaxy that can be resized/repositioned
- Gradient Clock — A polar clock overlaid on basic gradient
- Nexus — Brightly colored particles fire across screen
- Noise — A bokeh design similar to the iOS dynamic wallpaper
- Photoslide — Grid of photos from folder (default ~/Photos) animate in/out
Live Wallpaper is **fully open-source** so there's nothing to stop imaginative artists with the know-how (and patience) from creating some slick themes of their own.
### Settings & Features ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/live-wallpaper-gui-settings.jpg)
Each theme can be configured or customised in some way, though certain themes have more options than others.
For example, in Nexus (pictured above) you can change the number and colour of the pulse particles, their size, and their frequency.
The preferences app also provides a set of **general options** that will apply to all themes. These include:
- Setting live wallpaper to run on log-in
- Setting a custom background that the animation sits on
- Adjusting the FPS (including option to show FPS on screen)
- Specifying the multi-monitor behaviour
With so many options available it should be easy to create a background set up that suits you.
### Drawbacks ###
#### No Desktop Icons ####
You can't add, open or edit files or folders on the desktop while Live Wallpaper is on.
The Preferences app does list an option that will, supposedly, let you do this. It may work on really old releases but in our testing, on Ubuntu 14.10, it does nothing.
One workaround that seems to work for some users of the app on Ubuntu is setting a .png image as the custom background. It doesn't have to be a transparent .png, simply a .png.
#### Resource Usage ####
Animated wallpapers use more system resources than standard background images.
We're not talking about 50% load at all times — at least not with this app in our testing — but those on low-power devices and laptops will want to use apps like this cautiously. Use a [system monitoring tool][2] to keep an eye on CPU and GPU load.
#### Quitting the app ####
The biggest “bug” for me is the absolute lack of a “quit” option.
Sure, the animated wallpaper can be turned off from the Indicator Applet and the Preferences tool but quitting the app entirely, quitting the indicator applet? Nope. To do that I have to use the pkill livewallpaper command in the Terminal.
### How to Install Live Wallpaper in Ubuntu 14.04 LTS + ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/terminal-command-750x146.jpg)
To install Live Wallpaper in Ubuntu 14.04 LTS and above you will first need to add the official PPA for the app to your Software Sources.
The quickest way to do this is using the Terminal:
sudo add-apt-repository ppa:fyrmir/livewallpaper-daily
sudo apt-get update && sudo apt-get install livewallpaper
You should also install the indicator applet, which lets you quickly and easily turn on/off the animated wallpaper and switch theme from the menu area, and the GUI settings tool so that you can configure each theme based on your tastes.
sudo apt-get install livewallpaper-config livewallpaper-indicator
When everything has installed you will be able to launch the app and its preferences tool from the Unity Dash.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/live-wallpaper-app-launcher.png)
Annoyingly, the Indicator Applet won't automatically open after you install it. It does add itself to the startup list, so a quick log out > log in will get it to show.
### Summary ###
If you fancy breathing life into a dull desktop, give it a spin — and let us know what you think of it and what animated wallpapers you'd love to see added!
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/05/animated-wallpaper-adds-live-backgrounds-to-linux-distros
Author: [Joey-Elijah Sneddon][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2012/11/live-wallpaper-for-ubuntu
[2]:http://www.omgubuntu.co.uk/2011/11/5-system-monitoring-tools-for-ubuntu

View File

@ -1,3 +1,5 @@
FSSlc Translating
Backup with these DeDuplicating Encryption Tools
================================================================================
Data is growing both in volume and value. It is becoming increasingly important to be able to back up and restore this information quickly and reliably. As society has adapted to technology and learned how to depend on computers and mobile devices, there are few that can deal with the reality of losing important data. Of firms that suffer the loss of data, 30% fold within a year and 70% cease trading within five years. This highlights the value of data.
@ -155,4 +157,4 @@ via: http://www.linuxlinks.com/article/20150628060000607/BackupTools.html
[3]:http://obnam.org/
[4]:http://duplicity.nongnu.org/
[5]:http://zbackup.org/
[6]:https://bup.github.io/
[6]:https://bup.github.io/

View File

@ -1,3 +1,4 @@
Translating by ZTinoZ
7 command line tools for monitoring your Linux system
================================================================================
**Here is a selection of basic command line tools that will make your exploration and optimization in Linux easier. **
@ -76,4 +77,4 @@ via: http://www.networkworld.com/article/2937219/linux/7-command-line-tools-for-
[7]:http://linuxcommand.org/man_pages/vmstat8.html
[8]:http://linux.die.net/man/1/ps
[9]:http://linux.die.net/man/1/pstree
[10]:http://linux.die.net/man/1/iostat
[10]:http://linux.die.net/man/1/iostat

View File

@ -1,745 +0,0 @@
translating...
ZMap Documentation
================================================================================
1. Getting Started with ZMap
1. Scanning Best Practices
1. Command Line Arguments
1. Additional Information
1. TCP SYN Probe Module
1. ICMP Echo Probe Module
1. UDP Probe Module
1. Configuration Files
1. Verbosity
1. Results Output
1. Blacklisting
1. Rate Limiting and Sampling
1. Sending Multiple Probes
1. Extending ZMap
1. Sample Applications
1. Writing Probe and Output Modules
----------
### Getting Started with ZMap ###
ZMap is designed to perform comprehensive scans of the IPv4 address space or large portions of it. While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space at over 1.4 million packets per second. Before performing even small scans, we encourage users to contact their local network administrators and consult our list of scanning best practices.
By default, ZMap will perform a TCP SYN scan on the specified port at the maximum rate possible. A more conservative configuration that will scan 10,000 random addresses on port 80 at a maximum 10 Mbps can be run as follows:
$ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.csv
Or more concisely specified as:
$ zmap -B 10M -p 80 -n 10000 -o results.csv
ZMap can also be used to scan specific subnets or CIDR blocks. For example, to scan only 10.0.0.0/8 and 192.168.0.0/16 on port 80, run:
zmap -p 80 -o results.csv 10.0.0.0/8 192.168.0.0/16
If the scan started successfully, ZMap will output status updates every one second similar to the following:
0% (1h51m left); send: 28777 562 Kp/s (560 Kp/s avg); recv: 1192 248 p/s (231 p/s avg); hits: 0.04%
0% (1h51m left); send: 34320 554 Kp/s (559 Kp/s avg); recv: 1442 249 p/s (234 p/s avg); hits: 0.04%
0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%
These updates provide information about the current state of the scan and are of the following form: %-complete (est time remaining); packets-sent curr-send-rate (avg-send-rate); recv: packets-recv recv-rate (avg-recv-rate); hits: hit-rate
If you do not know the scan rate that your network can support, you may want to experiment with different scan rates or bandwidth limits to find the fastest rate that your network can support before you see decreased results.
By default, ZMap will output the list of distinct IP addresses that responded successfully (e.g. with a SYN ACK packet) similar to the following. There are several additional formats (e.g. JSON and Redis) for outputting results as well as options for producing programmatically parsable scan statistics. As well, additional output fields can be specified and the results can be filtered using an output filter.
115.237.116.119
23.9.117.80
207.118.204.141
217.120.143.111
50.195.22.82
We strongly encourage you to use a blacklist file, to exclude both reserved/unallocated IP space (e.g. multicast, RFC1918), as well as networks that request to be excluded from your scans. By default, ZMap will utilize a simple blacklist file containing reserved and unallocated addresses located at `/etc/zmap/blacklist.conf`. If you find yourself specifying certain settings, such as your maximum bandwidth or blacklist file every time you run ZMap, you can specify these in `/etc/zmap/zmap.conf` or use a custom configuration file.
If you are attempting to troubleshoot scan related issues, there are several options to help debug. First, you can perform a dry run scan in order to see the packets that would be sent over the network by adding the `--dryrun` flag. As well, it is possible to change the logging verbosity by setting the `--verbosity=n` flag.
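For example, the following illustrative invocation prints the probe packets for 100 targets instead of sending them, with more detailed logging:
$ zmap -p 443 -n 100 --dryrun --verbosity=4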
----------
### Scanning Best Practices ###
We offer these suggestions for researchers conducting Internet-wide scans as guidelines for good Internet citizenship.
- Coordinate closely with local network administrators to reduce risks and handle inquiries
- Verify that scans will not overwhelm the local network or upstream provider
- Signal the benign nature of the scans in web pages and DNS entries of the source addresses
- Clearly explain the purpose and scope of the scans in all communications
- Provide a simple means of opting out and honor requests promptly
- Conduct scans no larger or more frequent than is necessary for research objectives
- Spread scan traffic over time or source addresses when feasible
It should go without saying that scan researchers should refrain from exploiting vulnerabilities or accessing protected resources, and should comply with any special legal requirements in their jurisdictions.
----------
### Command Line Arguments ###
#### Common Options ####
These options are the most common options when performing a simple scan. We note that some options are dependent on the probe module or output module used (e.g. target port is not used when performing an ICMP Echo Scan).
**-p, --target-port=port**
TCP port number to scan (e.g. 443)
**-o, --output-file=name**
Write results to this file. Use - for stdout
**-b, --blacklist-file=path**
File of subnets to exclude, in CIDR notation (e.g. 192.168.0.0/16), one-per line. It is recommended you use this to exclude RFC 1918 addresses, multicast, IANA reserved space, and other IANA special-purpose addresses. An example blacklist file is provided in conf/blacklist.example for this purpose.
#### Scan Options ####
**-n, --max-targets=n**
Cap the number of targets to probe. This can either be a number (e.g. `-n 1000`) or a percentage (e.g. `-n 0.1%`) of the scannable address space (after excluding blacklist)
**-N, --max-results=n**
Exit after receiving this many results
**-t, --max-runtime=secs**
Cap the length of time for sending packets
**-r, --rate=pps**
Set the send rate in packets/sec
**-B, --bandwidth=bps**
Set the send rate in bits/second (supports suffixes G, M, and K (e.g. `-B 10M` for 10 mbps). This overrides the `--rate` flag.
**-c, --cooldown-time=secs**
How long to continue receiving after sending has completed (default=8)
**-e, --seed=n**
Seed used to select address permutation. Use this if you want to scan addresses in the same order for multiple ZMap runs.
**--shards=n**
Split the scan up into N shards/partitions among different instances of zmap (default=1). When sharding, `--seed` is required
**--shard=n**
Set which shard to scan (default=0). Shards are indexed in the range [0, N), where N is the total number of shards. When sharding `--seed` is required; see the example at the end of this section.
**-T, --sender-threads=n**
Threads used to send packets (default=1)
**-P, --probes=n**
Number of probes to send to each IP (default=1)
**-d, --dryrun**
Print out each packet to stdout instead of sending it (useful for debugging)
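As an illustration of the sharding options above, two cooperating ZMap instances (for example on separate machines) could split a single scan like this; the port, seed and output file names are arbitrary:
$ zmap -p 443 --shards=2 --shard=0 --seed=1234567890 -o shard0.csv
$ zmap -p 443 --shards=2 --shard=1 --seed=1234567890 -o shard1.csv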
#### Network Options ####
**-s, --source-port=port|range**
Source port(s) to send packets from
**-S, --source-ip=ip|range**
Source address(es) to send packets from. Either single IP or range (e.g. 10.0.0.1-10.0.0.9)
**-G, --gateway-mac=addr**
Gateway MAC address to send packets to (in case auto-detection does not work)
**-i, --interface=name**
Network interface to use
#### Probe Options ####
ZMap allows users to specify and write their own probe modules for use with ZMap. Probe modules are responsible for generating probe packets to send, and processing responses from hosts.
**--list-probe-modules**
List available probe modules (e.g. tcp_synscan)
**-M, --probe-module=name**
Select probe module (default=tcp_synscan)
**--probe-args=args**
Arguments to pass to probe module
**--list-output-fields**
List the fields the selected probe module can send to the output module
#### Output Options ####
ZMap allows users to specify and write their own output modules for use with ZMap. Output modules are responsible for processing the fieldsets returned by the probe module, and outputing them to the user. Users can specify output fields, and write filters over the output fields.
**--list-output-modules**
List available output modules (e.g. tcp_synscan)
**-O, --output-module=name**
Select output module (default=csv)
**--output-args=args**
Arguments to pass to output module
**-f, --output-fields=fields**
Comma-separated list of fields to output
**--output-filter**
Specify an output filter over the fields defined by the probe module
#### Additional Options ####
**-C, --config=filename**
Read a configuration file, which can specify any other options.
**-q, --quiet**
Do not print status updates once per second
**-g, --summary**
Print configuration and summary of results at the end of the scan
**-v, --verbosity=n**
Level of log detail (0-5, default=3)
**-h, --help**
Print help and exit
**-V, --version**
Print version and exit
----------
### Additional Information ###
#### TCP SYN Scans ####
When performing a TCP SYN scan, ZMap requires a single target port and supports specifying a range of source ports from which the scan will originate.
**-p, --target-port=port**
TCP port number to scan (e.g. 443)
**-s, --source-port=port|range**
Source port(s) for scan packets (e.g. 40000-50000)
**Warning!** ZMap relies on the Linux kernel to respond to SYN/ACK packets with RST packets in order to close connections opened by the scanner. This occurs because ZMap sends packets at the Ethernet layer in order to reduce overhead otherwise incurred in the kernel from tracking open TCP connections and performing route lookups. As such, if you have a firewall rule that tracks established connections such as a netfilter rule similar to `-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`, this will block SYN/ACK packets from reaching the kernel. This will not prevent ZMap from recording responses, but it will prevent RST packets from being sent back, ultimately using up a connection on the scanned host until your connection times out. We strongly recommend that you select a set of unused ports on your scanning host which can be allowed access in your firewall and specifying this port range when executing ZMap, with the `-s` flag (e.g. `-s '50000-60000'`).
#### ICMP Echo Request Scans ####
While ZMap performs TCP SYN scans by default, it also supports ICMP echo request scans in which an ICMP echo request packet is sent to each host and the type of ICMP response received in reply is denoted. An ICMP scan can be performed by selecting the icmp_echoscan scan module similar to the following:
$ zmap --probe-module=icmp_echoscan
#### UDP Datagram Scans ####
ZMap additionally supports UDP probes, where it will send out an arbitrary UDP datagram to each host, and receive either UDP or ICMP Unreachable responses. ZMap supports four different methods of setting the UDP payload through the --probe-args command-line option. These are 'text' for ASCII-printable payloads, 'hex' for hexadecimal payloads set on the command-line, 'file' for payloads contained in an external file, and 'template' for payloads that require dynamic field generation. In order to obtain the UDP response, make sure that you specify 'data' as one of the fields to report with the -f option.
The example below will send the two bytes 'ST', a PCAnwywhere 'status' request, to UDP port 5632.
$ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -
The example below will send the byte '0x02', a SQL Server 'client broadcast' request, to UDP port 1434.
$ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -
The example below will send a NetBIOS status request to UDP port 137. This uses a payload file that is included with the ZMap distribution.
$ zmap -M udp -p 1434 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -
The example below will send a SIP 'OPTIONS' request to UDP port 5060. This uses a template file that is included with the ZMap distribution.
$ zmap -M udp -p 1434 --probe-args=file:sip_options.tpl -N 100 -f saddr,data -o -
UDP payload templates are still experimental. You may encounter crashes when using more than one send thread (-T), and there is a significant decrease in performance compared to static payloads. A template is simply a payload file that contains one or more field specifiers enclosed in a ${} sequence. Some protocols, notably SIP, require the payload to reflect the source and destination of the packet. Other protocols, such as portmapper and DNS, contain fields that should be randomized per request or risk being dropped by multi-homed systems scanned by ZMap.
The payload template below will send a SIP OPTIONS request to every destination:
OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
From: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT};tag=${RAND_DIGIT=8}
To: sip:${RAND_ALPHA=8}@${DADDR}
Call-ID: ${RAND_DIGIT=10}@${SADDR}
CSeq: 1 OPTIONS
Contact: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT}
Content-Length: 0
Max-Forwards: 20
User-Agent: ${RAND_ALPHA=8}
Accept: text/plain
In the example above, note that line endings are \r\n and the end of this request must contain \r\n\r\n for most SIP implementations to correctly process it. A working example is included in the examples/udp-payloads directory of the ZMap source tree (sip_options.tpl).
The following template fields are currently implemented:
- **SADDR**: Source IP address in dotted-quad format
- **SADDR_N**: Source IP address in network byte order
- **DADDR**: Destination IP address in dotted-quad format
- **DADDR_N**: Destination IP address in network byte order
- **SPORT**: Source port in ascii format
- **SPORT_N**: Source port in network byte order
- **DPORT**: Destination port in ascii format
- **DPORT_N**: Destination port in network byte order
- **RAND_BYTE**: Random bytes (0-255), length specified with =(length) parameter
- **RAND_DIGIT**: Random digits from 0-9, length specified with =(length) parameter
- **RAND_ALPHA**: Random mixed-case letters from A-Z, length specified with =(length) parameter
- **RAND_ALPHANUM**: Random mixed-case letters from A-Z and digits from 0-9, length specified with =(length) parameter
### Configuration Files ###
ZMap supports configuration files instead of requiring all options to be specified on the command-line. A configuration can be created by specifying one long-name option and the value per line such as:
interface "eth1"
source-ip 1.1.1.4-1.1.1.8
gateway-mac b4:23:f9:28:fa:2d # upstream gateway
cooldown-time 300 # seconds
blacklist-file /etc/zmap/blacklist.conf
output-file ~/zmap-output
quiet
summary
ZMap can then be run with a configuration file and specifying any additional necessary parameters:
$ zmap --config=~/.zmap.conf --target-port=443
### Verbosity ###
There are several types of on-screen output that ZMap produces. By default, ZMap will print out basic progress information similar to the following every 1 second. This can be disabled by setting the `--quiet` flag.
0:01 12%; send: 10000 done (15.1 Kp/s avg); recv: 144 143 p/s (141 p/s avg); hits: 1.44%
ZMap also prints out informational messages during scanner configuration such as the following, which can be controlled with the `--verbosity` argument.
Aug 11 16:16:12.813 [INFO] zmap: started
Aug 11 16:16:12.817 [DEBUG] zmap: no interface provided. will use eth0
Aug 11 16:17:03.971 [DEBUG] cyclic: primitive root: 3489180582
Aug 11 16:17:03.971 [DEBUG] cyclic: starting point: 46588
Aug 11 16:17:03.975 [DEBUG] blacklist: 3717595507 addresses allowed to be scanned
Aug 11 16:17:03.975 [DEBUG] send: will send from 1 address on 28233 source ports
Aug 11 16:17:03.975 [DEBUG] send: using bandwidth 10000000 bits/s, rate set to 14880 pkt/s
Aug 11 16:17:03.985 [DEBUG] recv: thread started
ZMap also supports printing out a grep-able summary at the end of the scan, similar to below, which can be invoked with the `--summary` flag.
cnf target-port 443
cnf source-port-range-begin 32768
cnf source-port-range-end 61000
cnf source-addr-range-begin 1.1.1.4
cnf source-addr-range-end 1.1.1.8
cnf maximum-packets 4294967295
cnf maximum-runtime 0
cnf permutation-seed 0
cnf cooldown-period 300
cnf send-interface eth1
cnf rate 45000
env nprocessors 16
exc send-start-time Fri Jan 18 01:47:35 2013
exc send-end-time Sat Jan 19 00:47:07 2013
exc recv-start-time Fri Jan 18 01:47:35 2013
exc recv-end-time Sat Jan 19 00:52:07 2013
exc sent 3722335150
exc blacklisted 572632145
exc first-scanned 1318129262
exc hit-rate 0.874102
exc synack-received-unique 32537000
exc synack-received-total 36689941
exc synack-cooldown-received-unique 193
exc synack-cooldown-received-total 1543
exc rst-received-unique 141901021
exc rst-received-total 166779002
adv source-port-secret 37952
adv permutation-gen 4215763218
### Results Output ###
ZMap can produce results in several formats through the use of **output modules**. By default, ZMap only supports **csv** output, however support for **redis** and **json** can be compiled in. The results sent to these output modules may be filtered using an **output filter**. The fields the output module writes are specified by the user. By default, ZMap will return results in csv format and if no output file is specified, ZMap will not produce specific results. It is also possible to write your own output module; see Writing Output Modules for information.
**-o, --output-file=p**
File to write output to
**-O, --output-module=p**
Invoke a custom output module
**-f, --output-fields=p**
Comma-separated list of fields to output
**--output-filter=filter**
Specify an output filter over fields for a given probe
**--list-output-modules**
Lists available output modules
**--list-output-fields**
List available output fields for a given probe
#### Output Fields ####
ZMap has a variety of fields it can output beyond IP address. These fields can be viewed for a given probe module by running with the `--list-output-fields` flag.
$ zmap --probe-module="tcp_synscan" --list-output-fields
saddr string: source IP address of response
saddr-raw int: network order integer form of source IP address
daddr string: destination IP address of response
daddr-raw int: network order integer form of destination IP address
ipid int: IP identification number of response
ttl int: time-to-live of response packet
sport int: TCP source port
dport int: TCP destination port
seqnum int: TCP sequence number
acknum int: TCP acknowledgement number
window int: TCP window
classification string: packet classification
success int: is response considered success
repeat int: is response a repeat response from host
cooldown int: Was response received during the cooldown period
timestamp-str string: timestamp of when response arrived in ISO8601 format.
timestamp-ts int: timestamp of when response arrived in seconds since Epoch
timestamp-us int: microsecond part of timestamp (e.g. microseconds since 'timestamp-ts')
To select which fields to output, any combination of the output fields can be specified as a comma-separated list using the `--output-fields=fields` or `-f` flags. Example:
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
#### Filtering Output ####
Results generated by a probe module can be filtered before being passed to the output module. Filters are defined over the output fields of a probe module. Filters are written in a simple filtering language, similar to SQL, and are passed to ZMap using the **--output-filter** option. Output filters are commonly used to filter out duplicate results, or to pass only successful responses to the output module.
Filter expressions are of the form `<fieldname> <operation> <value>`. The type of `<value>` must be either a string or unsigned integer literal, and match the type of `<fieldname>`. The valid operations for integer comparisons are `=, !=, <, >, <=, >=`. The operations for string comparisons are `=` and `!=`. The `--list-output-fields` flag will print what fields and types are available for the selected probe module, and then exit.
Compound filter expressions may be constructed by combining filter expressions with the `&&` (logical AND) and `||` (logical OR) operators, using parentheses to specify the order of operations.
**Examples**
Write a filter for only successful, non-duplicate responses
--output-filter="success = 1 && repeat = 0"
Filter for packets that have classification RST and a TTL greater than 10, or for packets with classification SYNACK
--output-filter="(classification = rst && ttl > 10) || classification = synack"
#### CSV ####
The csv module will produce a comma-separated value file of the output fields requested. For example, the following command produces the following CSV in a file called `output.csv`.
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
----------
response, saddr, daddr, sport, dport, seq, ack, in_cooldown, is_repeat, timestamp
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
rst, 148.8.49.150, 10.0.0.9, 80, 41672, 0, 1135824975, 0, 0,2013-08-15 18:55:47.692
rst, 50.165.166.206, 10.0.0.9, 80, 38858, 0, 535206863, 0, 0,2013-08-15 18:55:47.694
rst, 65.55.203.135, 10.0.0.9, 80, 50008, 0, 4071709905, 0, 0,2013-08-15 18:55:47.700
synack, 50.57.166.186, 10.0.0.9, 80, 60650, 2813653162, 993314545, 0, 0,2013-08-15 18:55:47.704
synack, 152.75.208.114, 10.0.0.9, 80, 52498, 460383682, 4040786862, 0, 0,2013-08-15 18:55:47.707
synack, 23.72.138.74, 10.0.0.9, 80, 33480, 810393698, 486476355, 0, 0,2013-08-15 18:55:47.710
#### Redis ####
The redis output module allows addresses to be added to a Redis queue instead of being saved to a file, which allows ZMap to be integrated with post-processing tools.
**Heads Up!** ZMap does not build with Redis support by default. If you are building ZMap from source, you can build with Redis support by running CMake with `-DWITH_REDIS=ON`.
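As a sketch, building from a source checkout with Redis support enabled would look roughly like this (assuming the usual CMake build workflow; the checkout directory name is an assumption):

    $ cd zmap
    $ cmake -DWITH_REDIS=ON .
    $ make
    # make install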
### Blacklisting and Whitelisting ###
ZMap supports both blacklisting and whitelisting network prefixes. If ZMap is not provided with blacklist or whitelist parameters, ZMap will scan all IPv4 addresses (including local, reserved, and multicast addresses). If a blacklist file is specified, network prefixes in the blacklisted segments will not be scanned; if a whitelist file is provided, only network prefixes in the whitelist file will be scanned. A whitelist and blacklist file can be used in coordination; the blacklist has priority over the whitelist (e.g. if you have whitelisted 10.0.0.0/8 and blacklisted 10.1.0.0/16, then 10.1.0.0/16 will not be scanned). Whitelist and blacklist files can be specified on the command-line as follows:
**-b, --blacklist-file=path**
File of subnets to blacklist in CIDR notation, e.g. 192.168.0.0/16
**-w, --whitelist-file=path**
File of subnets to limit scan to in CIDR notation, e.g. 192.168.0.0/16
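For example, a hypothetical scan restricted to a whitelist while still honoring the default blacklist might look like the following (`whitelist.conf` and `results.csv` are assumed file names, not part of the ZMap distribution):

    $ zmap -p 80 -w whitelist.conf -b /etc/zmap/blacklist.conf -o results.csv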
Blacklist files should be formatted with a single network prefix in CIDR notation per line. Comments are allowed using the `#` character. Example:
# From IANA IPv4 Special-Purpose Address Registry
# http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
# Updated 2013-05-22
0.0.0.0/8 # RFC1122: "This host on this network"
10.0.0.0/8 # RFC1918: Private-Use
100.64.0.0/10 # RFC6598: Shared Address Space
127.0.0.0/8 # RFC1122: Loopback
169.254.0.0/16 # RFC3927: Link Local
172.16.0.0/12 # RFC1918: Private-Use
192.0.0.0/24 # RFC6890: IETF Protocol Assignments
192.0.2.0/24 # RFC5737: Documentation (TEST-NET-1)
192.88.99.0/24 # RFC3068: 6to4 Relay Anycast
192.168.0.0/16 # RFC1918: Private-Use
198.18.0.0/15 # RFC2544: Benchmarking
198.51.100.0/24 # RFC5737: Documentation (TEST-NET-2)
203.0.113.0/24 # RFC5737: Documentation (TEST-NET-3)
240.0.0.0/4 # RFC1112: Reserved
255.255.255.255/32 # RFC0919: Limited Broadcast
# From IANA Multicast Address Space Registry
# http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
# Updated 2013-06-25
224.0.0.0/4 # RFC5771: Multicast/Reserved
If you are looking to scan only a random portion of the internet, check out Sampling instead of using whitelisting and blacklisting.
**Heads Up!** The default ZMap configuration uses the blacklist file at `/etc/zmap/blacklist.conf`, which contains locally scoped address space and reserved IP ranges. The default configuration can be changed by editing `/etc/zmap/zmap.conf`.
### Rate Limiting and Sampling ###
By default, ZMap will scan at the fastest rate that your network adaptor supports. In our experience on commodity hardware, this is generally around 95-98% of the theoretical speed of gigabit Ethernet, which may be faster than your upstream provider can handle. ZMap will not automatically adjust its send rate based on your upstream provider. You may need to manually adjust your send rate to reduce packet drops and incorrect results.
**-r, --rate=pps**
Set maximum send rate in packets/sec
**-B, --bandwidth=bps**
Set send rate in bits/sec (supports suffixes G, M, and K). This overrides the --rate flag.
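For instance, the following hypothetical invocation caps the send rate at roughly 10 Mbps instead of saturating the link (`results.csv` is an assumed output file name):

    $ zmap -p 80 -B 10M -o results.csv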
ZMap also allows random sampling of the IPv4 address space by specifying max-targets and/or max-runtime. Because hosts are scanned in a random permutation generated per scan instantiation, limiting a scan to n hosts will perform a random sampling of n hosts. Command-line options:
**-n, --max-targets=n**
Cap number of targets to probe
**-N, --max-results=n**
Cap number of results (exit after receiving this many positive results)
**-t, --max-runtime=s**
Cap length of time for sending packets (in seconds)
**-s, --seed=n**
Seed used to select address permutation. Specify the same seed in order to scan addresses in the same order for different ZMap runs.
For example, if you wanted to scan the same one million hosts on the Internet for multiple scans, you could set a predetermined seed and cap the number of scanned hosts similar to the following:
zmap -p 443 -s 3 -n 1000000 -o results
In order to determine which one million hosts were going to be scanned, you could run the scan in dry-run mode which will print out the packets that would be sent instead of performing the actual scan.
zmap -p 443 -s 3 -n 1000000 --dryrun | grep daddr | awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
### Sending Multiple Packets ###
ZMap supports sending multiple probes to each host. Increasing this number increases both scan time and the number of hosts reached. However, we find that the increase in scan time (~100% per additional probe) greatly outweighs the increase in hosts reached (~1% per additional probe).
**-P, --probes=n**
The number of unique probes to send to each IP (default=1)
----------
### Sample Applications ###
ZMap is designed for initiating contact with a large number of hosts and finding ones that respond positively. However, we realize that many users will want to perform follow-up processing, such as performing an application level handshake. For example, users who perform a TCP SYN scan on port 80 might want to perform a simple GET request and users who scan port 443 may be interested in completing a TLS handshake.
#### Banner Grab ####
We have included a sample application, banner-grab, with ZMap that enables users to receive messages from listening TCP servers. Banner-grab connects to the provided servers, optionally sends a message, and prints out the first message received from the server. This tool can be used to fetch banners such as HTTP server responses to specific commands, telnet login prompts, or SSH server strings.
This example finds 1000 servers listening on port 80, and sends a simple GET request to each, storing their base-64 encoded responses in http-banners.out
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > http-banners.out
For more details on using `banner-grab`, see the README file in `examples/banner-grab`.
**Heads Up!** ZMap and banner-grab can have significant performance and accuracy impact on one another if run simultaneously (as in the example). Make sure not to let ZMap saturate banner-grab-tcp's concurrent connections, otherwise banner-grab will fall behind reading stdin, causing ZMap to block on writing stdout. We recommend using a slower scanning rate with ZMap, and increasing the concurrency of banner-grab-tcp to no more than 3000 (Note that > 1000 concurrent connections requires you to use `ulimit -SHn 100000` and `ulimit -HHn 100000` to increase the maximum file descriptors per process). These parameters will of course be dependent on your server performance, and hit-rate; we encourage developers to experiment with small samples before running a large scan.
#### Forge Socket ####
We have also included a form of banner-grab, called forge-socket, that reuses the SYN-ACK sent from the server for the connection that ultimately fetches the banner. In `banner-grab-tcp`, ZMap sends a SYN to each server, and listening servers respond with a SYN+ACK. The ZMap host's kernel receives this, and sends a RST, as no active connection is associated with that packet. The banner-grab program must then create a new TCP connection to the same server to fetch data from it.
In forge-socket, we utilize a kernel module by the same name, that allows us to create a connection with arbitrary TCP parameters. This enables us to suppress the kernel's RST packet, and instead create a socket that will reuse the SYN+ACK's parameters, and send and receive data through this socket as we would any normally connected socket.
To use forge-socket, you will need the forge-socket kernel module, available from [github][1]. You should git clone `git@github.com:ewust/forge_socket.git` in the ZMap root source directory, and then cd into the forge_socket directory, and run make. Install the kernel module with `insmod forge_socket.ko` as root.
You must also tell the kernel not to send RST packets. An easy way to disable RST packets system wide is to use **iptables**. `iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root will do this, though you may also add an optional --dport X to limit this to the port (X) you are scanning. To remove this after your scan completes, you can run `iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root.
Now you should be able to build the forge-socket ZMap example program. To run it, you must use the **extended_file** ZMap output module:
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
./forge-socket -c 500 -d ./http-req > ./http-banners.out
See the README in `examples/forge-socket` for more details.
----------
### Writing Probe and Output Modules ###
ZMap can be extended to support different types of scanning through **probe modules** and additional types of results output through **output modules**. Registered probe and output modules can be listed through the command-line interface:
**--list-probe-modules**
Lists installed probe modules
**--list-output-modules**
Lists installed output modules
#### Output Modules ####
ZMap output and post-processing can be extended by implementing and registering **output modules** with the scanner. Output modules receive a callback for every received response packet. While the provided default modules produce simple output, these modules are also capable of performing additional post-processing (e.g. tracking duplicates or outputting numbers in terms of AS instead of IP address).
Output modules are created by defining a new output_module struct and registering it in [output_modules.c][2]:
typedef struct output_module {
const char *name; // how is output module referenced in the CLI
unsigned update_interval; // how often is update called in seconds
output_init_cb init; // called at scanner initialization
output_update_cb start; // called at the beginning of scanner
output_update_cb update; // called every update_interval seconds
output_update_cb close; // called at scanner termination
output_packet_cb process_ip; // called when a response is received
const char *helptext; // Printed when --list-output-modules is called
} output_module_t;
Output modules must have a name, which is how they are referenced on the command-line, and generally implement the `success_ip` and oftentimes the `other_ip` callbacks. The `process_ip` callback is called for every response packet that is received and passed through the output filter by the current **probe module**. The response may or may not be considered a success (e.g. it could be a TCP RST). These callbacks must define functions that match the `output_packet_cb` definition:
int (*output_packet_cb) (
ipaddr_n_t saddr, // IP address of scanned host in network-order
ipaddr_n_t daddr, // destination IP address in network-order
const char* response_type, // send-module classification of packet
int is_repeat, // {0: first response from host, 1: subsequent responses}
int in_cooldown, // {0: not in cooldown state, 1: scanner in cooldown state}
const u_char* packet, // pointer to struct iphdr of IP packet
size_t packet_len // length of packet in bytes
);
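For concreteness, here is a minimal, purely illustrative `process_ip` callback matching the `output_packet_cb` signature above: it just prints the source address of first-time successful responses. It is a sketch, not ZMap's actual CSV module; the hard-coded "synack" check, the `return 0` success convention, and the surrounding includes (ZMap's own headers would provide `ipaddr_n_t` and `u_char`) are assumptions.

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* Illustrative only: print the source address of every first-time,
     * successful ("synack") response. ipaddr_n_t and u_char are assumed
     * to come from the scanner's own headers. */
    static int my_process_ip(ipaddr_n_t saddr, ipaddr_n_t daddr,
            const char *response_type, int is_repeat, int in_cooldown,
            const u_char *packet, size_t packet_len)
    {
        (void) daddr; (void) in_cooldown; (void) packet; (void) packet_len;
        if (!is_repeat && strcmp(response_type, "synack") == 0) {
            struct in_addr addr = { .s_addr = saddr };  /* already network byte order */
            printf("%s\n", inet_ntoa(addr));
        }
        return 0;  /* assumed: 0 signals success to the scanner */
    }

To hook such a function up, you would set it as the `process_ip` member of an `output_module_t` and register that struct in output_modules.c, as described above.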
An output module can also register callbacks to be executed at scanner initialization (tasks such as opening an output file), at the start of the scan (tasks such as documenting blacklisted addresses), at regular intervals during the scan (tasks such as progress updates), and at close (tasks such as closing any open file descriptors). These callbacks are provided with complete access to the scan configuration and current state:
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
which are defined in [output_modules.h][3]. An example is available at [src/output_modules/module_csv.c][4].
#### Probe Modules ####
Packets are constructed using probe modules which allow abstracted packet creation and response classification. ZMap comes with two scan modules by default: `tcp_synscan` and `icmp_echoscan`. By default, ZMap uses `tcp_synscan`, which sends TCP SYN packets, and classifies responses from each host as open (received SYN+ACK) or closed (received RST). ZMap also allows developers to write their own probe modules for use with ZMap, using the following API.
Each type of scan is implemented by developing and registering the necessary callbacks in a `send_module_t` struct:
typedef struct probe_module {
const char *name; // how scan is invoked on command-line
size_t packet_length; // how long is probe packet (must be static size)
const char *pcap_filter; // PCAP filter for collecting responses
size_t pcap_snaplen; // maximum number of bytes for libpcap to capture
uint8_t port_args; // set to 1 if ZMap requires a --target-port be
// specified by the user
probe_global_init_cb global_initialize; // called once at scanner initialization
probe_thread_init_cb thread_initialize; // called once for each thread packet buffer
probe_make_packet_cb make_packet; // called once per host to update packet
probe_validate_packet_cb validate_packet; // called once per received packet,
// return 0 if packet is invalid,
// non-zero otherwise.
probe_print_packet_cb print_packet; // called per packet if in dry-run mode
probe_classify_packet_cb process_packet; // called by receiver to classify response
probe_close_cb close; // called at scanner termination
fielddef_t *fields; // Definitions of the fields specific to this module
int numfields; // Number of fields
} probe_module_t;
At scanner initialization, `global_initialize` is called once and can be utilized to perform any necessary global configuration or initialization. However, `global_initialize` does not have access to the packet buffer, which is thread-specific. Instead, `thread_initialize` is called at the initialization of each sender thread and is provided with access to the buffer that will be used for constructing probe packets, along with global source and destination values. This callback should be used to construct the host-agnostic packet structure such that only specific values (e.g. destination host and checksum) need to be updated for each host. For example, the Ethernet header will not change between probes (minus the checksum, which is calculated in hardware by the NIC) and can therefore be defined ahead of time in order to reduce overhead at scan time.
The `make_packet` callback is called for each host that is scanned to allow the **probe module** to update host-specific values, and is provided with IP address values, an opaque validation string, and the probe number (shown below). The probe module is responsible for placing as much of the validation string into the probe as possible, in such a way that when a valid response is returned by a server, the probe module can verify that it is present. For example, for a TCP SYN scan, the tcp_synscan probe module can use the TCP source port and sequence number to store the validation string. Response packets (SYN+ACKs) will contain the expected values in the destination port and acknowledgement number.
int make_packet(
void *packetbuf, // packet buffer
ipaddr_n_t src_ip, // source IP in network-order
ipaddr_n_t dst_ip, // destination IP in network-order
uint32_t *validation, // validation string to place in probe
int probe_num // if sending multiple probes per host,
// this will be which probe number for this
// host we are currently sending
);
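As a rough sketch (again illustrative, not the actual tcp_synscan implementation), a `make_packet` that follows the strategy just described might look like this. The port derivation formula, the return convention, and the omission of checksum recomputation are assumptions made for brevity:

    #include <stdint.h>
    #include <arpa/inet.h>
    #include <netinet/ip.h>
    #include <netinet/tcp.h>
    #include <linux/if_ether.h>

    /* Illustrative sketch only: stash part of the validation string in the
     * TCP source port and sequence number, as described above. IP and TCP
     * checksum recomputation is omitted for brevity. */
    int my_make_packet(void *packetbuf, ipaddr_n_t src_ip, ipaddr_n_t dst_ip,
            uint32_t *validation, int probe_num)
    {
        struct iphdr *ip_hdr = (struct iphdr *)((char *) packetbuf + sizeof(struct ethhdr));
        struct tcphdr *tcp = (struct tcphdr *)((char *) ip_hdr + sizeof(struct iphdr));

        ip_hdr->saddr = src_ip;   /* already in network order */
        ip_hdr->daddr = dst_ip;
        tcp->source = htons(32768 + ((validation[1] + probe_num) % 28232)); /* assumed derivation */
        tcp->seq = validation[0]; /* echoed back (+1) in the SYN+ACK acknowledgement number */
        /* ip_hdr->check and tcp->check would be recomputed here. */
        return 0;                 /* assumed success convention */
    }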
Scan modules must also define `pcap_filter`, `validate_packet`, and `process_packet`. Only packets that match the PCAP filter will be considered by the scanner. For example, in the case of a TCP SYN scan, we only want to investigate TCP SYN/ACK or TCP RST packets and would utilize a filter similar to `tcp && tcp[13] & 4 != 0 || tcp[13] == 18`. The `validate_packet` function will be called for every packet that fulfills this PCAP filter. If the validation returns non-zero, the `process_packet` function will be called, and will populate a fieldset using fields defined in `fields` with data from the packet. For example, the following code processes a packet for the TCP synscan probe module.
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
{
struct iphdr *ip_hdr = (struct iphdr *)&packet[sizeof(struct ethhdr)];
struct tcphdr *tcp = (struct tcphdr*)((char *)ip_hdr
+ (sizeof(struct iphdr)));
fs_add_uint64(fs, "sport", (uint64_t) ntohs(tcp->source));
fs_add_uint64(fs, "dport", (uint64_t) ntohs(tcp->dest));
fs_add_uint64(fs, "seqnum", (uint64_t) ntohl(tcp->seq));
fs_add_uint64(fs, "acknum", (uint64_t) ntohl(tcp->ack_seq));
fs_add_uint64(fs, "window", (uint64_t) ntohs(tcp->window));
if (tcp->rst) { // RST packet
fs_add_string(fs, "classification", (char*) "rst", 0);
fs_add_uint64(fs, "success", 0);
} else { // SYNACK packet
fs_add_string(fs, "classification", (char*) "synack", 0);
fs_add_uint64(fs, "success", 1);
}
}
--------------------------------------------------------------------------------
via: https://zmap.io/documentation.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://github.com/ewust/forge_socket/
[2]:https://github.com/zmap/zmap/blob/v1.0.0/src/output_modules/output_modules.c
[3]:https://github.com/zmap/zmap/blob/master/src/output_modules/output_modules.h
[4]:https://github.com/zmap/zmap/blob/master/src/output_modules/module_csv.c
@ -1,224 +0,0 @@
FSSlc Translating
Autojump An Advanced cd Command to Quickly Navigate Linux Filesystem
================================================================================
Those Linux users who mainly work with the Linux command line via a console/terminal feel the real power of Linux. However, it may sometimes be painful to navigate the Linux hierarchical file system, especially for newbies.
There is a Linux Command-line utility called autojump written in Python, which is an advanced version of Linux [cd][1] command.
![Autojump Command](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Command.jpg)
Autojump The Fastest Way to Navigate the Linux File System
This application was originally written by Joël Schaerer and now maintained by +William Ting.
The autojump utility learns from the user and helps with easy directory navigation from the Linux command line. Autojump navigates to the required directory more quickly than the traditional cd command.
#### Features of autojump ####
- Free and open source application and distributed under GPL V3
- A self-learning utility that learns from the user's navigation habits.
- Faster navigation. No need to include sub-directory names.
- Available in the repositories of most standard Linux distributions, including Debian (testing/unstable), Ubuntu, Mint, Arch, Gentoo, Slackware, CentOS, RedHat and Fedora.
- Available for other platforms as well, like OS X (using Homebrew) and Windows (enabled by clink)
- Using autojump, you may jump to any specific directory or to a child directory. You may also open directories in a file manager and see statistics about how much time you spend in which directories.
#### Prerequisites ####
- Python Version 2.6+
### Step 1: Do a Full System Update ###
1. Do a system Update/Upgrade as a **root** user to ensure you have the latest version of Python installed.
# apt-get update && apt-get upgrade && apt-get dist-upgrade [APT based systems]
# yum update && yum upgrade [YUM based systems]
# dnf update && dnf upgrade [DNF based systems]
**Note**: It is important to note here that, on YUM or DNF based systems, update and upgrade perform the same thing and are most of the time interchangeable, unlike on APT based systems.
### Step 2: Download and Install Autojump ###
2. As stated above, autojump is already available in the repositories of most Linux distributions. You may just install it using the package manager. However, if you want to install it from source, you need to clone the source code and execute the Python script, as follows:
#### Installing From Source ####
Install git, if it is not installed. It is required to clone the repository.
# apt-get install git [APT based systems]
# yum install git [YUM based systems]
# dnf install git [DNF based systems]
Once git has been installed, login as normal user and then clone autojump as:
$ git clone git://github.com/joelthelion/autojump.git
Next, switch to the downloaded directory using cd command.
$ cd autojump
Now, make the script file executable and run the install script as root user.
# chmod 755 install.py
# ./install.py
#### Installing from Repositories ####
3. If you don't want to get your hands dirty with source code, you may just install it from the repository as the **root** user:
Install autojump on Debian, Ubuntu, Mint and alike systems:
# apt-get install autojump
To install autojump on Fedora, CentOS, RedHat and alike systems, you need to enable [EPEL Repository][2].
# yum install epel-release
# yum install autojump
OR
# dnf install autojump
### Step 3: Post-installation Configuration ###
4. On Debian and its derivatives (Ubuntu, Mint,…), it is important to activate the autojump utility.
To activate the autojump utility temporarily (i.e., effective until you close the current session or open a new session), run the following command as a normal user:
$ source /usr/share/autojump/autojump.sh
To permanently add activation to BASH shell, you need to run the below command.
$ echo '. /usr/share/autojump/autojump.sh' >> ~/.bashrc
### Step 4: Autojump Pretesting and Usage ###
5. As said earlier, autojump will jump only to those directories which have been `cd`'d to earlier. So before we start testing, we are going to cd into a few directories and create a few as well. Here is what I did.
$ cd
$ cd
$ cd Desktop/
$ cd
$ cd Documents/
$ cd
$ cd Downloads/
$ cd
$ cd Music/
$ cd
$ cd Pictures/
$ cd
$ cd Public/
$ cd
$ cd Templates
$ cd
$ cd /var/www/
$ cd
$ mkdir autojump-test/
$ cd
$ mkdir autojump-test/a/ && cd autojump-test/a/
$ cd
$ mkdir autojump-test/b/ && cd autojump-test/b/
$ cd
$ mkdir autojump-test/c/ && cd autojump-test/c/
$ cd
Now that we have cd'd into the above directories and created a few directories for testing, we are ready to go.
**Point to Remember** : The usage of j is a wrapper around autojump. You may use j in place of autojump command and vice versa.
6. Check the version of installed autojump using -v option.
$ j -v
or
$ autojump -v
![Check Autojump Version](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Autojump-Version.png)
Check Autojump Version
7. Jump to a previously visited directory /var/www.
$ j www
![Jump To Directory](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-To-Directory.png)
Jump To Directory
8. Jump to previously visited child directory /home/avi/autojump-test/b without typing sub-directory name.
$ jc b
![Jump to Child Directory](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Child-Directory.png)
Jump to Child Directory
9. You can open a directory in a file manager (say, GNOME Nautilus) from the command line, instead of jumping to it, using the following command.
$ jo www
![Jump to Directory](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Direcotory.png)
Jump to Directory
![Open Directory in File Browser](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Directory-in-File-Browser.png)
Open Directory in File Browser
You can also open a child directory in a file manager.
$ jco c
![Open Child Directory](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory1.png)
Open Child Directory
![Open Child Directory in File Browser](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory-in-File-Browser1.png)
Open Child Directory in File Browser
10. Check the stats of each folder's key weight and the overall key weight, along with the total directory weight. The folder key weight represents the total time spent in that folder. The directory weight is the number of directories in the list.
$ j --stat
![Check Directory Statistics](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Statistics.png)
Check Directory Statistics
**Tip**: autojump stores its run log and error log files in the folder `~/.local/share/autojump/`. Don't overwrite these files, or else you may lose all your stats.
$ ls -l ~/.local/share/autojump/
![Autojump Logs](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Logs.png)
Autojump Logs
11. You may seek help, if required simply as:
$ j --help
![Autojump Help and Options](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-help-options.png)
Autojump Help and Options
### Functionality Requirements and Known Conflicts ###
- autojump lets you jump only to those directories to which you have already cd'd. Once you cd into a particular directory, it gets logged into the autojump database and thereafter autojump can work with it. You cannot jump to a directory you have never cd'd into after setting up autojump, no matter what.
- You cannot jump to a directory whose name begins with a dash (-). You may consider reading my post on [manipulating files and directories][3] that start with '-' or other special characters.
- In the BASH shell, autojump keeps track of directories by modifying $PROMPT_COMMAND. It is strongly recommended not to overwrite $PROMPT_COMMAND. If you have to add other commands to the existing $PROMPT_COMMAND, append them to the end of the existing $PROMPT_COMMAND (see the sketch right after this list).
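For instance, a common way to append your own command (here `history -a`, used only as an example) to $PROMPT_COMMAND without clobbering whatever autojump has already put there is:

    export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND ;} history -a"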
### Conclusion: ###
autojump is a must-have utility if you are a command-line user. It eases a lot of things. It is a wonderful utility that makes browsing Linux directories fast on the command line. Try it yourself and let me know your valuable feedback in the comments below. Keep connected, keep sharing. Like and share us and help us spread.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/autojump-a-quickest-way-to-navigate-linux-filesystem/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/cd-command-in-linux/
[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[3]:http://www.tecmint.com/manage-linux-filenames-with-special-characters/
@ -1,67 +0,0 @@
Install Google Hangouts Desktop Client In Linux
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/google-hangouts-header-664x374.jpg)
Earlier, we have seen how to [install Facebook Messenger in Linux][1] and [WhatsApp desktop client in Linux][2]. Both of these were unofficial apps. I have one more unofficial app for today and it is [Google Hangouts][3].
Of course, you can use Google Hangouts in the web browser, but it is more fun to use the desktop client than the web version. Curious? Let's see how to **install the Google Hangouts desktop client in Linux** and how to use it.
### Install Google Hangouts in Linux ###
We are going to use an open source project called [yakyak][4], which is an unofficial Google Hangouts client for Linux, Windows and OS X. I'll show you how to use yakyak in Ubuntu, but I believe that you can use the same method in other Linux distributions. Before we see how to use it, let's first take a look at the main features of yakyak:
- Send and receive chat messages
- Create and change conversations (rename, add people)
- Leave and/or delete conversation
- Desktop notifications
- Toggle notifications on/off
- Drag-drop, copy-paste or attach-button for image upload.
- Hangupsbot sync room aware (actual user pics)
- Shows inline images
- History scrollback
Sounds good enough? Download the installation files from the link below:
- [Download Google Hangout client yakyak][5]
The downloaded file will be compressed. Extract it and you will see a directory like linux-x64 or linux-x32, based on your system. Go into this directory and you should see a file named yakyak. Double click on it to run it, or launch it from the terminal as sketched below.
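If you prefer the terminal over double-clicking, a rough equivalent is shown below; the archive name is an assumption and may differ for your download:

    $ unzip yakyak-*-linux-x64.zip    # or tar xf ..., depending on the archive format
    $ cd linux-x64
    $ ./yakyak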
![Run Google Hangout in Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_3.jpeg)
You'll have to enter your Google Account credentials, of course.
![Set up Google Hangouts in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_2.jpeg)
Once you are through, you'll see a screen like the one below where you can chat with your Google contacts.
![Google_Hangout_Linux_4](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_4.jpeg)
If you want to show profile pictures of the contacts, you can select View->Show conversation thumbnails.
![Google hangouts thumbnails](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_5.jpeg)
You'll also get desktop notifications for new messages.
![desktop notifications for Google Hangouts in Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_1.jpeg)
### Worth a try? ###
I'll let you give it a try and decide whether or not it is worth it to **install the Google Hangouts client in Linux**. If you want official apps, take a look at these [instant messaging applications with native Linux clients][6]. Don't forget to share your experience with Google Hangouts in Linux.
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-google-hangouts-linux/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/facebook-messenger-linux/
[2]:http://itsfoss.com/whatsapp-linux-desktop/
[3]:http://www.google.com/+/learnmore/hangouts/
[4]:https://github.com/yakyak/yakyak
[5]:https://github.com/yakyak/yakyak
[6]:http://itsfoss.com/best-messaging-apps-linux/
@ -1,109 +0,0 @@
Linux FAQs with Answers--How to install a Brother printer on Linux
================================================================================
> **Question**: I have a Brother HL-2270DW laser printer, and want to print documents from my Linux box using this printer. How can I install an appropriate Brother printer driver on my Linux computer, and use it?
Brother is well known for its affordable [compact laser printer lineup][1]. You can get a high-quality WiFi/duplex-capable laser printer for less than 200USD, and the price keeps going down. On top of that, they provide reasonably good Linux support, so you can download and install their printer driver on your Linux computer. I bought [HL-2270DW][2] model more than a year ago, and I have been more than happy with its performance and reliability.
Here is how to install and configure a Brother printer driver on Linux. In this tutorial, I am demonstrating the installation of a USB driver for Brother HL-2270DW laser printer. So first connect your printer to a Linux computer via USB cable.
### Preparation ###
In this preparation step, go to the official [Brother support website][3], and search for the driver of your Brother printer by typing printer model name (e.g., HL-2270DW).
![](https://farm1.staticflickr.com/301/18970034829_6f3a48d817_c.jpg)
Once you go to the download page for your Brother printer, choose your Linux platform. For Debian, Ubuntu or their derivatives, choose "Linux (deb)". For Fedora, CentOS or RHEL, choose "Linux (rpm)".
![](https://farm1.staticflickr.com/380/18535558583_cb43240f8a_c.jpg)
On the next page, you will find an LPR driver as well as a CUPS wrapper driver for your printer. The former is a command-line driver, while the latter allows you to configure and manage your printer via a web-based administration interface. The CUPS-based GUI in particular is quite useful for (local or remote) printer maintenance. It is recommended that you install both drivers. So click on "Driver Install Tool" and download the installer file.
![](https://farm1.staticflickr.com/329/19130013736_1850b0d61e_c.jpg)
Before proceeding to run the installer file, you need to do one additional step if you are using a 64-bit Linux system.
Since Brother printer drivers are developed for 32-bit Linux, you need to install necessary 32-bit libraries on 64-bit Linux as follows.
On older Debian (6.0 or earlier) or Ubuntu (11.04 or earlier), install the following package.
$ sudo apt-get install ia32-libs
On newer Debian or Ubuntu which has introduced multiarch, you can install the following package instead:
$ sudo apt-get install lib32z1 lib32ncurses5
which replaces ia32-libs package. Or, you can install just:
$ sudo apt-get install lib32stdc++6
If you are using a Red Hat based Linux, you can install:
$ sudo yum install glibc.i686
### Driver Installation ###
Now go ahead and extract a downloaded driver installer file.
$ gunzip linux-brprinter-installer-2.0.0-1.gz
Next, run the driver installer file as follows.
$ sudo sh ./linux-brprinter-installer-2.0.0-1
You will be prompted to type a printer model name. Type the model name of your printer, for example "HL-2270DW".
![](https://farm1.staticflickr.com/292/18535599323_1a94f6dae5_b.jpg)
After agreeing to GPL license agreement, accept default answers to any subsequent questions.
![](https://farm1.staticflickr.com/526/19130014316_5835939501_b.jpg)
Now LPR/CUPS printer drivers are installed. Proceed to configure your printer next.
### Printer Configuration ###
We are going to configure and manage the Brother printer via the CUPS-based web management interface.
First, verify that CUPS daemon is running successfully.
$ sudo netstat -nap | grep 631
Open a web browser window, and go to http://localhost:631. You will see the following CUPS printer management interface.
![](https://farm1.staticflickr.com/324/18968588688_202086fc72_c.jpg)
Go to "Administration" tab, and click on "Manage Printers" under Printers section.
![](https://farm1.staticflickr.com/484/18533632074_0526cccb86_c.jpg)
You must see your printer (HL-2270DW) listed in the next page. Click on the printer name.
![](https://farm1.staticflickr.com/501/19159651111_95f6937693_c.jpg)
In the dropdown menu titled "Administration", choose "Set As Server Default" option. This will make your printer system-wide default.
![](https://farm1.staticflickr.com/472/19150412212_b37987c359_c.jpg)
When asked to authenticate yourself, type in your Linux login information.
![](https://farm1.staticflickr.com/511/18968590168_807e807f73_c.jpg)
Now the basic configuration step is mostly done. To test print, open any document viewer application (e.g., a PDF viewer), and print it. You will see "HL-2270DW" listed and chosen by default in the printer settings.
![](https://farm4.staticflickr.com/3872/18970034679_6d41d75bf9_c.jpg)
Print should work now. You can see the printer status and manage printer jobs via the same CUPS web interface.
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-brother-printer-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/brother_printers
[2]:http://xmodulo.com/go/hl_2270dw
[3]:http://support.brother.com/
@ -1,38 +0,0 @@
Linux FAQs with Answers--How to open multiple tabs in a GNOME terminal on Ubuntu 15.04
================================================================================
> **Question**: I used to be able to open multiple tabs inside gnome-terminal on my Ubuntu desktop. On Ubuntu 15.04, however, I can no longer open a new tab in my terminal window. How can I open tabs in gnome-terminal on Ubuntu 15.04?
On Ubuntu 14.10 or earlier, gnome-terminal allowed you to open either a new terminal or a tab inside a terminal window. However, starting with Ubuntu 15.04, gnome-terminal has removed "New Tab" menu option. This is actually not a bug, but a feature that attempts to unify new tab and new window. GNOME 3.12 has introduced a [single "Open Terminal" option][1]. The ability to open a new terminal tab has been migrated from the terminal menu to Preferences.
![](https://farm1.staticflickr.com/562/19286510971_f0abe3e7fb_b.jpg)
### Open Tabs via Preferences ###
To be able to open a new tab in new gnome-terminal of Ubuntu 15.04, go to "Edit" -> "Preferences", and change "Open new terminals in: Window" to "Open new terminals in: Tab".
![](https://farm1.staticflickr.com/329/19256530766_ff692b83bc_b.jpg)
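If you would rather not click through the menus, the same preference can usually be flipped from the command line with gsettings; the schema and key below are an assumption based on the stock gnome-terminal package shipped with GNOME 3.12 and later:

    $ gsettings set org.gnome.Terminal.Legacy.Settings new-terminal-mode 'tab'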
Now if you open a new terminal via menu, it will automatically open a new tab inside the terminal.
![](https://farm4.staticflickr.com/3820/18662051223_3296fde8e4_b.jpg)
### Open Tabs via a Keyboard Shortcut ###
If you do not want to change Preferences, you can hold down <Ctrl> to "invert" Preferences setting temporarily. For example, under the default Preferences, if you hold down <Ctrl> and click on "New Terminal", it will open a new tab, not a terminal.
Alternatively, you can simply use a keyboard shortcut <Shift+Ctrl+T> to open a new tab in a terminal.
In my view, this UI change in gnome-terminal is not quite an improvement. For example, you are no longer able to customize the names of individual terminal tabs. This feature is useful when you have many tabs open in a terminal. With the default tab name fixed to the current prompt (whose length can grow quickly), you cannot easily see the whole prompt string in the limited tab name space. Hopefully this feature will become available again soon.
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/open-multiple-tabs-gnome-terminal-ubuntu.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://worldofgnome.org/opening-a-new-terminal-tabwindow-in-gnome-3-12/
@ -1,115 +0,0 @@
wyangsun 翻译中
Why is the ibdata1 file continuously growing in MySQL?
================================================================================
![ibdata1 file](https://www.percona.com/blog/wp-content/uploads/2013/08/ibdata1-file.jpg)
We receive this question about the ibdata1 file in MySQL very often in [Percona Support][1].
The panic starts when the monitoring server sends an alert about the storage of the MySQL server saying that the disk is about to get filled.
After some research you realize that most of the disk space is used by InnoDB's shared tablespace ibdata1. You have [innodb_file_per_table][2] enabled, so the question is:
### What is stored in ibdata1? ###
When you have innodb_file_per_table enabled, the tables are stored in their own tablespaces, but the shared tablespace is still used to store other InnoDB internal data:
- data dictionary aka metadata of InnoDB tables
- change buffer
- doublewrite buffer
- undo logs
Some of them can be configured on [Percona Server][3] to avoid becoming too large. For example you can set a maximum size for change buffer with [innodb_ibuf_max_size][4] or store the doublewrite buffer on a separate file with [innodb_doublewrite_file][5].
In MySQL 5.6 you can also create external UNDO tablespaces so they will be in their own files instead of stored inside ibdata1. Check following [documentation link][6].
### What is causing the ibdata1 to grow that fast? ###
Usually the first command that we need to run when there is a MySQL problem is:
SHOW ENGINE INNODB STATUS\G
That will show us very valuable information. We start checking the **TRANSACTIONS** section and we find this:
---TRANSACTION 36E, ACTIVE 1256288 sec
MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
show engine innodb status
Trx read view will not see trx with id >= 36F, sees < 36F
This is the most common reason: a pretty old transaction, created 14 days ago. The status is **ACTIVE**, which means InnoDB has created a snapshot of the data, so it needs to maintain old pages in **undo** to be able to provide a consistent view of the database since that transaction was started. If your database is heavily write-loaded, that means lots of undo pages are being stored.
If you don't find any long-running transaction, you can also monitor another variable from the INNODB STATUS, the "**History list length**". It shows the number of pending purge operations. In this case the problem is usually caused because the purge thread (or master thread in older versions) is not able to process undo records as fast as they come in.
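On MySQL 5.5 and later (or 5.1 with the InnoDB plugin) you can also list open InnoDB transactions directly from information_schema, which makes old ones easy to spot. A minimal query along these lines (the column selection is just an example):

    SELECT trx_id, trx_started, trx_mysql_thread_id
      FROM information_schema.INNODB_TRX
     ORDER BY trx_started
     LIMIT 10;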
### How can I check what is being stored in the ibdata1? ###
Unfortunately MySQL doesn't provide information about what is being stored in the ibdata1 shared tablespace, but there are two tools that will be very helpful. The first is a modified version of innochecksum made by Mark Callaghan and published in [this bug report][7].
It is pretty easy to use:
# ./innochecksum /var/lib/mysql/ibdata1
0 bad checksum
13 FIL_PAGE_INDEX
19272 FIL_PAGE_UNDO_LOG
230 FIL_PAGE_INODE
1 FIL_PAGE_IBUF_FREE_LIST
892 FIL_PAGE_TYPE_ALLOCATED
2 FIL_PAGE_IBUF_BITMAP
195 FIL_PAGE_TYPE_SYS
1 FIL_PAGE_TYPE_TRX_SYS
1 FIL_PAGE_TYPE_FSP_HDR
1 FIL_PAGE_TYPE_XDES
0 FIL_PAGE_TYPE_BLOB
0 FIL_PAGE_TYPE_ZBLOB
0 other
3 max index_id
It has 19272 UNDO_LOG pages out of a total of 20608. **That's 93% of the tablespace**.
The second way to check the contents of a tablespace is the [InnoDB Ruby Tools][8] made by Jeremy Cole. It is a more advanced tool to examine the internals of InnoDB. For example, we can use the space-summary parameter to get a list with every page and its data type. We can use standard Unix tools to get the number of **UNDO_LOG** pages:
# innodb_space -f /var/lib/mysql/ibdata1 space-summary | grep UNDO_LOG | wc -l
19272
Although in this particular case innochecksum is faster and easier to use, I recommend you play with Jeremy's tools to learn more about the data distribution inside InnoDB and its internals.
OK, now we know where the problem is. The next question:
### How can I solve the problem? ###
The answer to this question is easy. If you can still commit that transaction, do it. If not, you'll have to kill the thread to start the rollback process. That will just stop ibdata1 from growing, but it is clear that your software has a bug or someone made a mistake. Now that you know how to identify where the problem is, you need to find who or what is causing it, using your own debugging tools or the general query log.
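For example, for the stale transaction shown earlier, whose connection is MySQL thread id 42, killing it would look roughly like this (the rollback itself may still take a while):

    mysql> KILL 42;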
If the problem is caused by the purge thread then the solution is usually to upgrade to a newer version where you can use a dedicated purge thread instead of the master thread. More information on the following [documentation link][9].
### Is there any way to recover the used space? ###
No, it is not possible, at least not in an easy and fast way. InnoDB tablespaces never shrink; see the following [10-year-old bug report][10], recently updated by James Day (thanks):
When you delete some rows, the pages are marked as deleted to reuse later but the space is never recovered. The only way is to start the database with fresh ibdata1. To do that you would need to take a full logical backup with mysqldump. Then stop MySQL and remove all the databases, ib_logfile* and ibdata* files. When you start MySQL again it will create a new fresh shared tablespace. Then, recover the logical dump.
### Summary ###
When the ibdata1 file is growing too fast within MySQL, it is usually caused by a long-running transaction that we have forgotten about. Try to solve the problem as fast as possible (committing or killing the transaction) because you won't be able to recover the wasted disk space without the painfully slow mysqldump process.
Monitoring the database to avoid these kinds of problems is also highly recommended. Our [MySQL Monitoring Plugins][11] include a Nagios script that can alert you if it finds a transaction that has been running for too long.
--------------------------------------------------------------------------------
via: https://www.percona.com/blog/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/
作者:[Miguel Angel Nieto][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.percona.com/blog/author/miguelangelnieto/
[1]:https://www.percona.com/products/mysql-support
[2]:http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
[3]:https://www.percona.com/software/percona-server
[4]:https://www.percona.com/doc/percona-server/5.5/scalability/innodb_insert_buffer.html#innodb_ibuf_max_size
[5]:https://www.percona.com/doc/percona-server/5.5/performance/innodb_doublewrite_path.html?id=percona-server:features:percona_innodb_doublewrite_path#innodb_doublewrite_file
[6]:http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html#innodb-undo-tablespace
[7]:http://bugs.mysql.com/bug.php?id=57611
[8]:https://github.com/jeremycole/innodb_ruby
[9]:http://dev.mysql.com/doc/innodb/1.1/en/innodb-improved-purge-scheduling.html
[10]:http://bugs.mysql.com/bug.php?id=1341
[11]:https://www.percona.com/software/percona-monitoring-plugins
@ -0,0 +1,82 @@
How To Fix System Program Problem Detected In Ubuntu 14.04
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg)
For the last couple of weeks, (almost) every time I started up I was greeted with **system program problem detected on startup in Ubuntu 15.04**. I ignored it for some time, but it was quite annoying after a certain point. You won't be too happy either if you are greeted by a pop-up displaying this every time you boot into the system:
> System program problem detected
>
> Do you want to report the problem now?
>
> ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/System_Program_Problem_Detected.png)
I know if you are an Ubuntu user you might have faced this annoying pop-up sometimes for sure. In this post we are going to see what to do with “system program problem detected” report in Ubuntu 14.04 and 15.04.
### What to do with “system program problem detected” error in Ubuntu? ###
#### So what exactly is this notifier all about? ####
Basically, this notifies you of a crash in your system. Don't panic at the word "crash". It's not a major issue and your system is very much usable. It's just that some program crashed at some time in the past and Ubuntu wants you to decide whether or not you want to report this crash to the developers so that they can fix the issue.
#### So, we click on Report problem and it will vanish? ####
No, not really. Even if you click on "Report problem", you'll ultimately be greeted with a pop-up like this:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Internal_error.png)
[Sorry, Ubuntu has experienced an internal error][1] is apport, which will then open a web browser where you can file a bug report by logging in to, or creating an account with, [Launchpad][2]. You see, it is a complicated procedure which takes around four steps to complete.
#### But, I want to help developers and let them know of the bugs! ####
That's very thoughtful of you and the right thing to do. But there are two issues here. First, there is a high chance that the bug has already been reported. Second, even if you take the pain of reporting the crash, it's no guarantee that you won't see it again.
#### So, you suggesting to not report the crash? ####
Yes and no. Report the crash when you see it the first time, if you want. You can see the crashing program under "Show Details" in the above picture. But if you see it repeatedly or if you do not want to report the bug, I advise you to get rid of the system crash reports once and for all.
### Fix “system program problem detected” error in Ubuntu ###
The crash reports are stored in the /var/crash directory in Ubuntu. If you look into this directory, you should see some files ending with .crash.
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Crash_reports_Ubuntu.jpeg)
What I suggest is that you delete these crash reports. Open a terminal and use the following command:
sudo rm /var/crash/*
This will delete all the contents of the directory /var/crash. This way you won't be annoyed by pop-ups for program crashes that happened in the past. But if a program crashes again, you'll see the "system program problem detected" error again. You can either remove the crash reports again, like we just did, or you can disable Apport (the debug tool) and permanently get rid of the pop-ups.
#### Permanently get rid of system error pop up in Ubuntu ####
If you do this, youll never be notified about any program crash that happens in the system. If you ask my view, I would say its not that bad a thing unless you are willing to file bug reports. If you have no intention of filing a bug report, the crash notifications and their absence will make no difference.
To disable the Apport and get rid of system crash report completely, open a terminal and use the following command to edit the Apport settings file:
gksu gedit /etc/default/apport
The content of the file is:
# set this to 0 to disable apport, or to 1 to enable it
# you can temporarily override this with
# sudo service apport start force_start=1
enabled=1
Change **enabled=1** to **enabled=0**. Save and close the file. You won't see any pop-ups for crash reports after doing this. Obviously, if you want to enable the crash reports again, you just need to change the same file and set enabled to 1 again.
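If you prefer a one-liner to hand-editing, something like the following should do the same thing (a sketch, assuming the stock file shown above):

    sudo sed -i 's/^enabled=1/enabled=0/' /etc/default/apport
    sudo service apport stop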
#### Did it work for you? ####
I hope this tutorial helped you to fix system program problem detected in Ubuntu 14.04 and Ubuntu 15.04. Let me know if this tip helped you to get rid of this annoyance.
--------------------------------------------------------------------------------
via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/how-to-solve-sorry-ubuntu-12-04-has-experienced-an-internal-error/

@ -0,0 +1,140 @@
XLCYun 翻译中
How to manage Vim plugins
================================================================================
Vim is a versatile, lightweight text editor on Linux. While its initial learning curve can be overwhelming for an average Linux user, its benefits are completely worth it. As far as the functionality goes, Vim is fully customizable by means of plugins. Due to its high level of configuration, though, you need to spend some time with its plugin system to be able to personalize Vim in an effective way. Luckily, we have several tools that make our life with Vim plugins easier. The one I use on a daily basis is Vundle.
### What is Vundle? ###
[Vundle][1], which stands for Vim Bundle, is a Vim plugin manager. Vundle allows you to install, update, search and clean up Vim plugins very easily. It can also manage your runtime and help with tags. In this tutorial, I am going to show how to install and use Vundle.
### Installing Vundle ###
First, [install Git][2] if you don't have it on your Linux system.
Next, create a directory where Vim plugins will be downloaded and installed. By default, this directory is located at ~/.vim/bundle
$ mkdir -p ~/.vim/bundle
Now go ahead and install Vundle as follows. Note that Vundle itself is another Vim plugin. Thus we install Vundle under ~/.vim/bundle we created earlier.
$ git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim
### Configuring Vundle ###
Now set up your .vimrc file as follows:
set nocompatible " This is required
filetype off " This is required
" Here you set up the runtime path
set rtp+=~/.vim/bundle/Vundle.vim
" Initialize vundle
call vundle#begin()
" This should always be the first
Plugin 'gmarik/Vundle.vim'
" This examples are from https://github.com/gmarik/Vundle.vim README
Plugin 'tpope/vim-fugitive'
" Plugin from http://vim-scripts.org/vim/scripts.html
Plugin 'L9'
"Git plugin not hosted on GitHub
Plugin 'git://git.wincent.com/command-t.git'
"git repos on your local machine (i.e. when working on your own plugin)
Plugin 'file:///home/gmarik/path/to/plugin'
" The sparkup vim script is in a subdirectory of this repo called vim.
" Pass the path to set the runtimepath properly.
Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}
" Avoid a name conflict with L9
Plugin 'user/L9', {'name': 'newL9'}
"Every Plugin should be before this line
call vundle#end() " required
Let me explain the above configuration a bit. By default, Vundle downloads and installs Vim plugins from github.com or vim-scripts.org. You can modify the default behavior.
To install from Github:
Plugin 'user/plugin'
To install from http://vim-scripts.org/vim/scripts.html:
Plugin 'plugin_name'
To install from another git repo:
Plugin 'git://git.another_repo.com/plugin'
To install from a local file:
Plugin 'file:///home/user/path/to/plugin'
You can also customize other things, such as the runtime path of your plugins, which is really useful if you are programming a plugin yourself, or just want to load it from another directory that is not ~/.vim.
Plugin 'rstacruz/sparkup', {'rtp': 'another_vim_path/'}
If you have plugins with the same names, you can rename your plugin so that it doesn't conflict.
Plugin 'user/plugin', {'name': 'newPlugin'}
### Using Vundle Commands ###
Once you have set up your plugins with Vundle, you can use it to install, update, search for and clean unused plugins using several Vundle commands.
#### Installing a new plugin ####
The PluginInstall command will install all plugins listed in your .vimrc file. You can also install just one specific plugin by passing its name.
:PluginInstall
:PluginInstall <plugin-name>
![](https://farm1.staticflickr.com/559/18998707843_438cd55463_c.jpg)
#### Cleaning up an unused plugin ####
If you have any unused plugin, you can remove it by using the PluginClean command.
:PluginClean
![](https://farm4.staticflickr.com/3814/19433047689_17d9822af6_c.jpg)
#### Searching for a plugin ####
If you want to install a plugin from a plugin list provided, search functionality can be useful.
:PluginSearch <text-list>
![](https://farm1.staticflickr.com/541/19593459846_75b003443d_c.jpg)
While searching, you can install, clean, re-search or reload the same list on the interactive split. Installing plugins won't load your plugins automatically. To do so, add them to your .vimrc file.
### Conclusion ###
Vim is an amazing tool. It can not only be a great default text editor that can make your work flow faster and smoother, but also be turned into an IDE for almost any programming language available. Vundle can be a big help in personalizing the powerful Vim environment quickly and easily.
Note that there are several sites that help you find the right Vim plugins. Always check [http://www.vim-scripts.org][3], Github or [http://www.vimawesome.com][4] for new scripts or plugins. Also remember to read the help provided with your plugins.
Keep rocking with your favorite text editor!
--------------------------------------------------------------------------------
via: http://xmodulo.com/manage-vim-plugins.html
作者:[Christopher Valerio][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/valerio
[1]:https://github.com/VundleVim/Vundle.vim
[2]:http://ask.xmodulo.com/install-git-linux.html
[3]:http://www.vim-scripts.org/
[4]:http://www.vimawesome.com/

View File

@ -0,0 +1,144 @@
struggling 翻译中
Introduction to RAID, Concepts of RAID and RAID Levels Part 1
================================================================================
RAID stands for Redundant Array of Inexpensive Disks, though nowadays it is usually read as Redundant Array of Independent Drives. In the past even a small disk was quite costly, whereas today a large disk can be bought for the same money. RAID is simply a collection of disks pooled together to form a logical volume.
![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg)
Understanding RAID Setups in Linux
RAID is organized into groups, sets or arrays: a combination of drives forms a RAID array or RAID set. A minimum of two disks connected to a RAID controller can be made into a logical volume, and more drives can be added to the group. Only one RAID level can be applied to a given group of disks. RAID is used when we need excellent performance; depending on the selected RAID level, performance will differ. Our data is protected through fault tolerance and high availability.
This series, titled Preparation for Setting up RAID, runs through Parts 1-9 and covers the following topics:
- Part 1: Introduction to RAID, Concepts of RAID and RAID Levels
- Part 2: How to setup RAID0 (Stripe) in Linux
- Part 3: How to setup RAID1 (Mirror) in Linux
- Part 4: How to setup RAID5 (Striping with Distributed Parity) in Linux
- Part 5: How to setup RAID6 (Striping with Double Distributed Parity) in Linux
- Part 6: Setting Up RAID 10 or 1+0 (Nested) in Linux
- Part 7: Growing an Existing RAID Array and Removing Failed Disks in Raid
- Part 8: Recovering (Rebuilding) failed drives in RAID
- Part 9: Managing RAID in Linux
This is Part 1 of a 9-tutorial series. Here we will cover the introduction to RAID, RAID concepts and the RAID levels that are required for setting up RAID in Linux.
### Software RAID and Hardware RAID ###
Software RAID has lower performance, because it consumes resources from the host. The RAID software has to be loaded before data can be read from software RAID volumes, and the OS needs to boot before the RAID software can load. No physical hardware is needed for software RAID, so the investment cost is zero.
Hardware RAID has high performance. It uses a dedicated RAID controller, physically built as a PCI Express card, so it does not consume host resources. The controller has NVRAM to cache reads and writes, and it preserves the cache during a rebuild even on power failure by using battery backup. It requires a very costly investment at large scale.
A hardware RAID card looks like this:
![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg)
Hardware RAID
#### Featured Concepts of RAID ####
- The parity method in RAID regenerates lost content from the saved parity information. RAID 5 and RAID 6 are based on parity.
- Striping spreads data across multiple disks, so no single disk holds the complete data. With two disks, half of the data ends up on each disk.
- Mirroring is used in RAID 1 and RAID 10. Mirroring makes a copy of the same data: in RAID 1 the same content is written to the other disk as well.
- A hot spare is simply a spare drive in our server that can automatically replace a failed drive. If any drive in the array fails, the hot spare takes over and the rebuild starts automatically.
- A chunk is just a block of data, with a minimum size of 4 KB or more. By tuning the chunk size we can improve I/O performance.
RAID comes in various levels. Here we will look only at the RAID levels that are most commonly used in real environments.
- RAID0 = Striping
- RAID1 = Mirroring
- RAID5 = Single Disk Distributed Parity
- RAID6 = Double Disk Distributed Parity
- RAID10 = Combination of Mirroring & Striping (Nested RAID)
RAID is managed using the mdadm package in most Linux distributions. Let us take a brief look at each RAID level.
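As a taste of what is coming in the later parts, creating an array with mdadm looks roughly like this; the device names, chunk size and spare count here are only illustrative, and the exact commands are covered step by step in Parts 2-6:

    # mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=64 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # cat /proc/mdstat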
#### RAID 0 (or) Striping ####
Striping has excellent performance. In RAID 0 (striping), data is written to the disks in a shared fashion: half of the content goes to one disk and the other half to the other disk.
Let us assume we have 2 disk drives. If we write the data “TECMINT” to the logical volume, “T” will be saved on the first disk, “E” on the second disk, “C” on the first disk again, “M” on the second disk, and so on in a round-robin process.
In this situation, if any one of the drives fails we lose our data, because the half of the data left on the other disk cannot be used to rebuild the array. In terms of write speed and performance, however, RAID 0 is excellent. We need a minimum of 2 disks to create a RAID 0 (striping) array. Do not use this RAID level if you care about your data.
- High Performance.
- There is Zero Capacity Loss in RAID 0
- Zero Fault Tolerance.
- Both write and read performance are good.
#### RAID 1 (or) Mirroring ####
Mirroring has good performance. Mirroring makes a copy of the same data we already have. Assuming we have two 2TB hard drives, 4TB in total, once the drives sit behind the RAID controller and form a logical drive we only see a single 2TB logical drive.
Whenever we save data, it is written to both 2TB drives. A minimum of two drives is needed to create a RAID 1 (mirror). If a disk failure occurs, we can restore the RAID set by swapping in a new disk. If any one disk fails in RAID 1, we can get the data from the other one, since it holds a copy of the same content, so there is zero data loss.
- Good Performance.
- Half of the total capacity is lost.
- Full fault tolerance.
- Rebuilds are fast.
- Write performance is slower.
- Read performance is good.
- Can be used for operating systems and small-scale databases.
#### RAID 5 (or) Distributed Parity ####
RAID 5 is mostly used at the enterprise level. RAID 5 works on a distributed-parity method: the parity information is used to rebuild the data from the information left on the remaining good drives, which protects our data from drive failure.
Assume we have 4 drives: if one drive fails, when we replace the failed drive we can rebuild it from the parity information, which is stored across all 4 drives. With four 1TB hard drives, 256GB on each drive holds parity information and the remaining 768GB on each drive is available to the user. RAID 5 survives a single drive failure; if more than one drive fails, data is lost.
- Excellent Performance
- Read speed is extremely good.
- Write speed is average, and slow without a hardware RAID controller.
- Rebuilds use the parity information from all drives.
- Full fault tolerance.
- One disk's worth of space is used for parity.
- Can be used for file servers, web servers and very important backups.
#### RAID 6 (or) Two Distributed Parity ####
RAID 6 is the same as RAID 5 but with two sets of distributed parity. It is mostly used in large arrays. We need a minimum of 4 drives; even if 2 drives fail, we can rebuild the data after replacing them with new drives.
It is slower than RAID 5 because it writes data to all the drives at the same time; speed is average when using a hardware RAID controller. If we have six 1TB hard drives, four drives' worth of space is used for data and two drives' worth for parity.
- Poor Performance.
- Read Performance will be good.
- Write performance is poor without a hardware RAID controller.
- Rebuilds from the two sets of parity.
- Full fault tolerance.
- Two disks' worth of space is used for parity.
- Can be used in large arrays.
- Can be used for backups, video streaming and other large-scale purposes.
#### RAID 10 (or) Mirror & Stripe ####
RAID 10 can also be called 1+0 or 0+1. It does the work of both mirroring and striping. In RAID 10 mirroring comes first and striping second, whereas in RAID 01 striping comes first and mirroring second. RAID 10 is better than RAID 01.
Assume we have 4 drives. When I write some data to the logical volume, it is saved across all 4 drives using both the mirror and stripe methods.
If I write the data “TECMINT” to a RAID 10 volume, it is saved as follows: first “T” is written to both disks, then “E” is written to both disks, and this repeats for every write, so a copy of every piece of data is kept on the other disk too.
At the same time it uses the RAID 0 method and writes the data as follows: “T” goes to the first disk and “E” to the second disk, then “C” to the first disk and “M” to the second.
- Good read and write performance.
- Half of the total capacity is lost.
- Fault tolerance.
- Rebuilds are fast, since the data is simply copied.
- Can be used in Database storage for high performance and availability.
### Conclusion ###
In this article we have seen what RAID is and which RAID levels are most commonly used in real environments. I hope this write-up has given you the basic understanding of RAID you need before setting it up.
In the upcoming articles I am going to cover how to set up and create RAID arrays at the various levels, grow a RAID group (array), troubleshoot failed drives, and much more.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/understanding-raid-setup-in-linux/
作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/

View File

@ -0,0 +1,219 @@
struggling 翻译中
Creating Software RAID0 (Stripe) on Two Devices Using mdadm Tool in Linux Part 2
================================================================================
RAID stands for Redundant Array of Inexpensive Disks; it is used for high availability and reliability in large-scale environments, where data needs more protection than in normal use. RAID is just a collection of disks pooled together to become a logical volume that contains an array; a combination of drives makes an array, also called a set (or group).
RAID can be created if there is a minimum of 2 disks connected to a RAID controller to form a logical volume, and more drives can be added to the array according to the chosen RAID level. RAID set up without physical hardware is called software RAID, sometimes nicknamed poor man's RAID.
![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg)
Setup RAID0 in Linux
The main idea behind RAID is to protect data from a single point of failure: if we store data on a single disk and that disk fails, there is no chance of getting the data back. To prevent such data loss we need fault tolerance, so we use a collection of disks to form a RAID set.
#### What is Stripe in RAID 0? ####
Striping writes data across multiple disks at the same time by dividing the content. Assume we have two disks: when we save content to the logical volume, it is divided and stored across both physical disks. RAID 0 is used for better performance, but we cannot recover the data if one of the drives fails, so it is not good practice to rely on RAID 0 alone; if you do install the operating system on RAID 0 logical volumes, keep your important files safely backed up elsewhere.
- RAID 0 has High Performance.
- Zero Capacity Loss in RAID 0. No Space will be wasted.
- Zero fault tolerance (the data cannot be recovered if any one disk fails).
- Both write and read performance are excellent.
#### Requirements ####
The minimum number of disks needed to create RAID 0 is 2, but you can add more, preferably in even multiples such as 2, 4, 6 or 8. If you have a physical RAID card with enough ports, you can add more disks.
Here we are not using hardware RAID; this setup depends only on software RAID. If we had a physical hardware RAID card, we could access it from its utility UI. Some motherboards have a built-in RAID feature whose UI can be accessed by pressing the Ctrl+I keys.
If you're new to RAID setups, please read our earlier article, where we've covered a basic introduction to RAID.
- [Introduction to RAID and RAID Concepts][1]
**My Server Setup**
Operating System : CentOS 6.5 Final
IP Address : 192.168.0.225
Two Disks : 20 GB each
This article is Part 2 of a 9-tutorial RAID series. In this part, we are going to see how to create and set up software RAID 0 (striping) on Linux systems or servers, using two 20GB disks named sdb and sdc.
### Step 1: Updating System and Installing mdadm for Managing RAID ###
1. Before setting up RAID 0 in Linux, let's do a system update and then install the mdadm package. mdadm is a small program which allows us to configure and manage RAID devices in Linux.
# yum clean all && yum update
# yum install mdadm -y
![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png)
Install mdadm Tool
### Step 2: Verify Attached Two 20GB Drives ###
2. Before creating RAID 0, verify that the two attached hard drives are detected, using the following command.
# ls -l /dev | grep sd
![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png)
Check Hard Drives
3. Once the new hard drives are detected, it's time to check whether the attached drives are already part of an existing RAID, with the help of the following mdadm command.
# mdadm --examine /dev/sd[b-c]
![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png)
Check RAID Devices
From the above output we can see that no RAID has been applied to the two sdb and sdc drives.
### Step 3: Creating Partitions for RAID ###
4. Now create partitions for RAID on sdb and sdc with the help of the following fdisk command. Here, I will show how to create a partition on the sdb drive.
# fdisk /dev/sdb
Follow the instructions below to create the partition.
- Press n to create a new partition.
- Then choose P for primary partition.
- Next select the partition number as 1.
- Accept the default values by pressing the Enter key twice.
- Next press p to print the defined partition.
![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png)
Create Partitions
Follow the instructions below to set the partition type to Linux raid auto.
- Press L to list all available types.
- Type t to change the partition type.
- Choose fd for Linux raid auto and press Enter to apply.
- Then use p again to print the changes we have made.
- Use w to write the changes.
![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png)
Create RAID Partitions in Linux
**Note**: Please follow the same instructions above to create a partition on the sdc drive now.
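If you prefer a non-interactive way of doing the same thing, a roughly equivalent sketch using parted (assuming you want a single whole-disk partition with an MBR label and the RAID flag set) would be:

    # parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100% set 1 raid on
    # parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 100% set 1 raid on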
5. After creating the partitions, verify that both drives are correctly defined for RAID using the following commands.
# mdadm --examine /dev/sd[b-c]
# mdadm --examine /dev/sd[b-c]1
![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png)
Verify RAID Partitions
### Step 4: Creating RAID md Devices ###
6. Now create the md device (i.e. /dev/md0) and apply the RAID level using the command below.
# mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1
# mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1
- -C : create a new array
- -l : set the RAID level
- -n : the number of raid devices
7. Once the md device has been created, verify the RAID level, the devices and the array used, with the help of the following series of commands.
# cat /proc/mdstat
![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png)
Verify RAID Level
# mdadm -E /dev/sd[b-c]1
![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png)
Verify RAID Device
# mdadm --detail /dev/md0
![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png)
Verify RAID Array
### Step 5: Assigning RAID Devices to Filesystem ###
8. Create an ext4 filesystem on the RAID device /dev/md0 and mount it under /mnt/raid0.
# mkfs.ext4 /dev/md0
![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png)
Create ext4 Filesystem
9. Once the ext4 filesystem has been created on the RAID device, create a mount point directory (i.e. /mnt/raid0) and mount the device /dev/md0 under it.
# mkdir /mnt/raid0
# mount /dev/md0 /mnt/raid0/
10. Next, verify that the device /dev/md0 is mounted under /mnt/raid0 directory using df command.
# df -h
11. Next, create a file called tecmint.txt under the mount point /mnt/raid0, add some content to the created file and view the content of a file and directory.
# touch /mnt/raid0/tecmint.txt
# echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt
# cat /mnt/raid0/tecmint.txt
# ls -l /mnt/raid0/
![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png)
Verify Mount Device
12. Once you've verified the mount point, it's time to create an fstab entry in the /etc/fstab file.
# vim /etc/fstab
Add the following entry as described; it may vary according to your mount location and the filesystem you are using.
/dev/md0        /mnt/raid0              ext4    defaults        0 0
![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png)
Add Device to Fstab
13. Run mount -av to check whether there are any errors in the fstab entry.
# mount -av
![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png)
Check Errors in Fstab
### Step 6: Saving RAID Configurations ###
14. Finally, save the RAID configuration to a file so that it is kept for future use. Again we use the mdadm command with the -s (scan) and -v (verbose) options, as shown.
# mdadm -E -s -v >> /etc/mdadm.conf
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
# cat /etc/mdadm.conf
![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png)
Save RAID Configurations
That's it. We have seen here how to configure RAID 0 (striping) using two hard disks. In the next article, we will see how to set up RAID 5.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-raid0-in-linux/
作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/

View File

@ -0,0 +1,213 @@
struggling 翻译中
Setting up RAID 1 (Mirroring) using Two Disks in Linux Part 3
================================================================================
RAID mirroring means an exact clone (or mirror) of the same data written to two drives. A minimum of two disks is required in an array to create RAID 1, and it is useful only when read performance or reliability matters more than storage capacity.
![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg)
Setup Raid1 in Linux
Mirrors are created to protect against data loss due to disk failure. Each disk in a mirror holds an exact copy of the data. When one disk fails, the same data can be retrieved from the other functioning disk, and the failed drive can be replaced in the running computer without any interruption to the user.
### Features of RAID 1 ###
- Mirror has Good Performance.
- 50% of the space is lost: if we have two 500GB disks, 1TB in total, mirroring will only show us 500GB.
- No data loss in mirroring if one disk fails, because we have the same content on both disks.
- Read performance is better than write performance.
#### Requirements ####
A minimum of two disks is needed to create RAID 1, but you can add more disks in even multiples: 2, 4, 6, 8. To add more disks, your system must have a physical RAID adapter (hardware card).
Here we're using software RAID, not hardware RAID; if your system has an inbuilt physical hardware RAID card, you can access it from its utility UI or by pressing the Ctrl+I keys.
Read Also: [Basic Concepts of RAID in Linux][1]
#### My Server Setup ####
Operating System : CentOS 6.5 Final
IP Address : 192.168.0.226
Hostname : rd1.tecmintlocal.com
Disk 1 [20GB] : /dev/sdb
Disk 2 [20GB] : /dev/sdc
This article will guide you through step-by-step instructions on how to set up software RAID 1 (mirror) using mdadm (which creates and manages RAID) on the Linux platform. The same instructions also work on other Linux distributions such as RedHat, CentOS, Fedora, etc.
### Step 1: Installing Prerequisites and Examine Drives ###
1. As I said above, we're using the mdadm utility for creating and managing RAID in Linux. So, let's install the mdadm software package using the yum or apt-get package manager.
# yum install mdadm [on RedHat systems]
# apt-get install mdadm [on Debain systems]
2. Once the mdadm package has been installed, we need to examine our disk drives to see whether any RAID is already configured, using the following command.
# mdadm -E /dev/sd[b-c]
![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png)
Check RAID on Disks
As you can see from the above screen, no super-block has been detected yet, which means no RAID is defined.
### Step 2: Drive Partitioning for RAID ###
3. As I mentioned above, we're using a minimum of two partitions, /dev/sdb and /dev/sdc, to create RAID 1. Let's create partitions on these two drives using the fdisk command and change their type to raid during partition creation.
# fdisk /dev/sdb
Follow the instructions below:
- Press n to create a new partition.
- Then choose P for primary partition.
- Next select the partition number as 1.
- Accept the default full size by pressing the Enter key twice.
- Next press p to print the defined partition.
- Press L to list all available types.
- Type t to change the partition type.
- Choose fd for Linux raid auto and press Enter to apply.
- Then use p again to print the changes we have made.
- Use w to write the changes.
![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png)
Create Disk Partitions
After the /dev/sdb partition has been created, follow the same instructions to create a new partition on the /dev/sdc drive.
# fdisk /dev/sdc
![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png)
Create Second Partitions
4. Once both partitions have been created successfully, verify the changes on both the sdb and sdc drives using the same mdadm command, and also confirm the RAID type, as shown in the following screen grabs.
# mdadm -E /dev/sd[b-c]
![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png)
Verify Partitions Changes
![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png)
Check RAID Type
**Note**: As you can see in the above picture, no RAID has been defined on the sdb1 and sdc1 partitions so far, which is why no super-blocks are detected.
### Step 3: Creating RAID1 Devices ###
5. Next, create the RAID 1 device called /dev/md0 using the following command and verify it.
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1
# cat /proc/mdstat
![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png)
Create RAID Device
6. Next, check the RAID device type and the RAID array using the following commands.
# mdadm -E /dev/sd[b-c]1
# mdadm --detail /dev/md0
![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png)
Check RAID Device type
![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png)
Check RAID Device Array
From the above pictures, one can easily understand that RAID 1 has been created using the /dev/sdb1 and /dev/sdc1 partitions, and you can also see the status as resyncing.
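While that initial resync is running, you can watch its progress refresh every second with, for example:

    # watch -n1 cat /proc/mdstat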
### Step 4: Creating File System on RAID Device ###
7. Create an ext4 file system on md0 and mount it under /mnt/raid1.
# mkfs.ext4 /dev/md0
![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png)
Create RAID Device Filesystem
8. Next, mount the newly created filesystem under /mnt/raid1 and create some files and verify the contents under mount point.
# mkdir /mnt/raid1
# mount /dev/md0 /mnt/raid1/
# touch /mnt/raid1/tecmint.txt
# echo "tecmint raid setups" > /mnt/raid1/tecmint.txt
![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png)
Mount Raid Device
9. To auto-mount RAID 1 on system reboot, you need to make an entry in the fstab file. Open the /etc/fstab file and add the following line at the bottom.
/dev/md0 /mnt/raid1 ext4 defaults 0 0
![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png)
Raid Automount Device
10. Run mount -av to check whether there are any errors in the fstab entry.
# mount -av
![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png)
Check Errors in fstab
11. Next, save the RAID configuration manually to the mdadm.conf file using the command below.
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png)
Save Raid Configuration
The above configuration file is read by the system at reboot time, and the RAID devices are loaded from it.
### Step 5: Verify Data After Disk Failure ###
12. Our main purpose is that the data stays available even after a hard disk fails or crashes. Let's see what happens when one of the disks in the array becomes unavailable.
# mdadm --detail /dev/md0
![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png)
Raid Device Verify
In the above image, we can see that there are 2 devices in our RAID and that the active devices are 2. Now let us see what happens when a disk is unplugged (I removed the sdc disk) or fails.
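If you cannot physically unplug a drive from your machine, you can simulate the failure in software instead; a minimal sketch (assuming /dev/sdc1 is the member you want to fail) is:

    # mdadm --manage /dev/md0 --fail /dev/sdc1
    # mdadm --manage /dev/md0 --remove /dev/sdc1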
# ls -l /dev | grep sd
# mdadm --detail /dev/md0
![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png)
Test RAID Devices
Now in the above image you can see that one of our drives is lost. I unplugged one of the drives from my virtual machine. Now let us check our precious data.
# cd /mnt/raid1/
# cat tecmint.txt
![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png)
Verify RAID Data
Did you see? Our data is still available. From this we learn the advantage of RAID 1 (mirror). In the next article, we will see how to set up RAID 5 (striping with distributed parity). I hope this helps you understand how RAID 1 (mirror) works.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-raid1-in-linux/
作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/

View File

@ -0,0 +1,286 @@
struggling 翻译中
Creating RAID 5 (Striping with Distributed Parity) in Linux Part 4
================================================================================
In RAID 5, data is striped across multiple drives with distributed parity. Striping with distributed parity means that both the parity information and the data are spread over multiple disks, which gives good data redundancy.
![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg)
Setup Raid 5 in Linux
This RAID level needs at least three hard drives or more. RAID 5 is used in large-scale production environments because it is cost-effective and provides both performance and redundancy.
#### What is Parity? ####
Parity is the simplest common method of detecting errors in data storage. Parity information is stored on every disk: say we have 4 disks, then roughly one disk's worth of space is split across all the disks to store the parity information. If any one of the disks fails, we can still get the data back by rebuilding from the parity information after replacing the failed disk.
#### Pros and Cons of RAID 5 ####
- Gives better performance
- Supports redundancy and fault tolerance.
- Supports hot spare options.
- Loses a single disk's worth of capacity to parity information.
- No data loss if a single disk fails; we can rebuild from parity after replacing the failed disk.
- Suits transaction-oriented environments, as reads are faster.
- Due to the parity overhead, writes are slow.
- Rebuilds take a long time.
#### Requirements ####
A minimum of 3 hard drives is required to create RAID 5, but you can add more disks if you have a dedicated hardware RAID controller with multiple ports. Here, we are using software RAID and the mdadm package to create the array.
mdadm is a package which allows us to configure and manage RAID devices in Linux. By default there is no configuration file for RAID; we must save the configuration to a separate file called mdadm.conf after creating and configuring the RAID setup.
Before moving further, I suggest you go through the following articles to understand the basics of RAID in Linux.
- [Basic Concepts of RAID in Linux Part 1][1]
- [Creating RAID 0 (Stripe) in Linux Part 2][2]
- [Setting up RAID 1 (Mirroring) in Linux Part 3][3]
#### My Server Setup ####
Operating System : CentOS 6.5 Final
IP Address : 192.168.0.227
Hostname : rd5.tecmintlocal.com
Disk 1 [20GB] : /dev/sdb
Disk 2 [20GB] : /dev/sdc
Disk 3 [20GB] : /dev/sdd
This article is Part 4 of a 9-tutorial RAID series; here we are going to set up software RAID 5 with distributed parity on Linux systems or servers, using three 20GB disks named /dev/sdb, /dev/sdc and /dev/sdd.
### Step 1: Installing mdadm and Verify Drives ###
1. As we said earlier, we're using the CentOS 6.5 Final release for this RAID setup, but the same steps can be followed on any Linux-based distribution.
# lsb_release -a
# ifconfig | grep inet
![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png)
CentOS 6.5 Summary
2. If you're following our RAID series, we assume that you've already installed the mdadm package; if not, use the following command according to your Linux distribution to install it.
# yum install mdadm [on RedHat systems]
# apt-get install mdadm [on Debain systems]
3. After the mdadm package installation, let's list the three 20GB disks which we have added to our system, using the fdisk command.
# fdisk -l | grep sd
![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png)
Install mdadm Tool
4. Now it's time to examine the three attached drives for any existing RAID blocks, using the following commands.
# mdadm -E /dev/sd[b-d]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd
![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png)
Examine Drives For Raid
**Note**: The above image shows that no super-block has been detected yet, so no RAID is defined on any of the three drives. Let us start creating one now.
### Step 2: Partitioning the Disks for RAID ###
5. First and foremost, we have to partition the disks (/dev/sdb, /dev/sdc and /dev/sdd) before adding them to a RAID, so let us define the partitions using the fdisk command before moving on to the next steps.
# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd
#### Create /dev/sdb Partition ####
Please follow the instructions below to create the partition on the /dev/sdb drive.
- Press n to create a new partition.
- Then choose P for primary partition. Here we are choosing primary because there are no partitions defined yet.
- Then choose 1 to be the first partition number; by default it will be 1.
- For the cylinder size we don't have to choose anything specific, because we need the whole disk for RAID, so just press Enter twice to accept the default full size.
- Next press p to print the created partition.
- Change the type; if you need to see all available types, press L.
- Here we are selecting fd, since the type we want is Linux raid auto.
- Next press p to print the defined partition.
- Then use p again to print the changes we have made.
- Use w to write the changes.
![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png)
Create sdb Partition
**Note**: We have to follow the steps mentioned above to create partitions for sdc & sdd drives too.
#### Create /dev/sdc Partition ####
Now partition the sdc and sdd drives by following the steps shown in the screenshots, or simply repeat the steps above.
# fdisk /dev/sdc
![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png)
Create sdc Partition
#### Create /dev/sdd Partition ####
# fdisk /dev/sdd
![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png)
Create sdd Partition
6. After creating partitions, check for changes in all three drives sdb, sdc, & sdd.
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd
or
# mdadm -E /dev/sd[b-d]
![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png)
Check Partition Changes
**Note**: The above picture shows that the partition type is fd, i.e. Linux raid auto.
7. Now check for RAID blocks in the newly created partitions. If no super-blocks are detected, then we can move forward and create a new RAID 5 setup on these drives.
![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png)
Check Raid on Partition
### Step 3: Creating md device md0 ###
8. Now create the RAID device md0 (i.e. /dev/md0) and apply the RAID level across all the newly created partitions (sdb1, sdc1 and sdd1) using the commands below.
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
or
# mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1
9. After creating the RAID device, check and verify the RAID level, the devices included and the array state from the mdstat output.
# cat /proc/mdstat
![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png)
Verify Raid Device
If you want to monitor the build process as it runs, you can use the watch command: just run cat /proc/mdstat through watch and the screen will refresh every second.
# watch -n1 cat /proc/mdstat
![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png)
Monitor Raid 5 Process
![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png)
Raid 5 Process Summary
10. After the RAID has been created, verify the RAID devices using the following command.
# mdadm -E /dev/sd[b-d]1
![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png)
Verify Raid Level
**Note**: The output of the above command will be a little long, as it prints the information for all three drives.
11. Next, verify the RAID array to confirm that the devices we've included are running and have started to re-sync.
# mdadm --detail /dev/md0
![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png)
Verify Raid Array
### Step 4: Creating file system for md0 ###
12. Create a file system for md0 device using ext4 before mounting.
# mkfs.ext4 /dev/md0
![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png)
Create md0 Filesystem
13. Now create a directory under /mnt, mount the created filesystem under /mnt/raid5 and check the files under the mount point; you will see a lost+found directory.
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5/
# ls -l /mnt/raid5/
14. Create a few files under the mount point /mnt/raid5 and append some text to one of them to verify the content.
# touch /mnt/raid5/raid5_tecmint_{1..5}
# ls -l /mnt/raid5/
# echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1
# cat /mnt/raid5/raid5_tecmint_1
# cat /proc/mdstat
![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png)
Mount Raid Device
15. We need to add an entry in fstab, otherwise our mount point will not come back after a system reboot. To add the entry, edit the fstab file and append the following line as shown below. The mount point may differ according to your environment.
# vim /etc/fstab
/dev/md0 /mnt/raid5 ext4 defaults 0 0
![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png)
Raid 5 Automount
16. Next, run the mount -av command to check whether there are any errors in the fstab entry.
# mount -av
![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png)
Check Fstab Errors
### Step 5: Save Raid 5 Configuration ###
17. As mentioned earlier in the requirements section, RAID has no config file by default; we have to save it manually. If this step is not followed, the RAID device will not come up as md0 after a reboot, but under some other random number.
So we must save the configuration before the system reboots. If the configuration is saved, it will be loaded by the kernel during boot and the RAID will be loaded as well.
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png)
Save Raid 5 Configuration
Note: Saving the configuration keeps the RAID array stable on the md0 device across reboots.
### Step 6: Adding Spare Drives ###
18. What is the use of adding a spare drive? It is very useful: if any one of the disks in our array fails, the spare drive becomes active, the rebuild process starts and the data is synced from the other disks, so we get extra redundancy here.
For more instructions on how to add a spare drive and check RAID 5 fault tolerance, read Step 6 and Step 7 in the following article.
- [Add Spare Drive to Raid 5 Setup][4]
### Conclusion ###
In this article we have seen how to set up RAID 5 using three disks. In my upcoming articles, we will see how to troubleshoot when a disk fails in RAID 5 and how to replace it for recovery.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-raid-5-in-linux/
作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
[2]:http://www.tecmint.com/create-raid0-in-linux/
[3]:http://www.tecmint.com/create-raid1-in-linux/
[4]:http://www.tecmint.com/create-raid-6-in-linux/

View File

@ -0,0 +1,321 @@
struggling 翻译中
Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux Part 5
================================================================================
RAID 6 is an upgraded version of RAID 5 with two sets of distributed parity, which provides fault tolerance even after two drives fail. Mission-critical systems stay operational even in the case of two concurrent disk failures. It is similar to RAID 5 but more robust, because it uses one more disk for parity.
In our earlier article we saw distributed parity in RAID 5; in this article we are going to look at RAID 6 with double distributed parity. Don't expect extra performance compared with other RAID levels, unless you also install a dedicated RAID controller. With RAID 6, even if we lose 2 disks we can get the data back by swapping in a spare drive and rebuilding from parity.
![Setup RAID 6 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Setup-RAID-6-in-Linux.jpg)
Setup RAID 6 in Linux
To set up RAID 6, a minimum of 4 disks (or more) in a set is required. RAID 6 reads from all the drives at once, so reading is faster, whereas writing is poorer because it has to stripe data and parity over multiple disks.
Many of us will wonder why we should use RAID 6 when it does not perform as well as other RAID levels. Those who raise this question should know that RAID 6 is the choice when high fault tolerance is needed. In high-availability environments for databases, RAID 6 is used because the database is the most important asset and has to be safe at any cost; it can also be useful for video-streaming environments.
#### Pros and Cons of RAID 6 ####
- Performance is good.
- RAID 6 is expensive, as it requires two independent drives to be used for parity functions.
- Loses two disks' worth of capacity to parity information (double parity).
- No data loss, even after two disks fail; we can rebuild from parity after replacing the failed disks.
- Reading is better than in RAID 5, because it reads from multiple disks, but write performance is very poor without a dedicated RAID controller.
#### Requirements ####
A minimum of 4 disks is required to create RAID 6. If you want to add more disks you can, but you should have a dedicated RAID controller; with software RAID we won't get better performance from RAID 6, so a physical RAID controller is needed for that.
If you are new to RAID setups, we recommend going through the RAID articles below.
- [Basic Concepts of RAID in Linux Part 1][1]
- [Creating Software RAID 0 (Stripe) in Linux Part 2][2]
- [Setting up RAID 1 (Mirroring) in Linux Part 3][3]
#### My Server Setup ####
Operating System : CentOS 6.5 Final
IP Address : 192.168.0.228
Hostname : rd6.tecmintlocal.com
Disk 1 [20GB] : /dev/sdb
Disk 2 [20GB] : /dev/sdc
Disk 3 [20GB] : /dev/sdd
Disk 4 [20GB] : /dev/sde
This article is Part 5 of a 9-tutorial RAID series; here we are going to see how to create and set up software RAID 6 (striping with double distributed parity) on Linux systems or servers, using four 20GB disks named /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.
### Step 1: Installing mdadm Tool and Examine Drives ###
1. If you're following our last two RAID articles (Part 2 and Part 3), you've already seen how to install the mdadm tool. If you're new to this series, let me explain that mdadm is a tool to create and manage RAID on Linux systems. Let's install the tool using the following command according to your Linux distribution.
# yum install mdadm [on RedHat systems]
# apt-get install mdadm [on Debain systems]
2. After installing the tool, it's time to verify the four attached drives that we are going to use for RAID creation, using the following fdisk command.
# fdisk -l | grep sd
![Check Hard Disk in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Linux-Disks.png)
Check Disks in Linux
3. Before creating the RAID, always examine the disk drives to check whether any RAID has already been created on them.
# mdadm -E /dev/sd[b-e]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
![Check Raid on Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Disk-Raid.png)
Check Raid on Disk
**Note**: The above image shows that no super-block has been detected, i.e. no RAID is defined on the four disk drives. We can move on and start creating RAID 6.
### Step 2: Drive Partitioning for RAID 6 ###
4. Now create partitions for RAID on /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde with the help of the following fdisk command. Here we will show how to create a partition on the sdb drive; the same steps are then followed for the rest of the drives.
**Create /dev/sdb Partition**
# fdisk /dev/sdb
Please follow the instructions shown below to create the partition.
- Press n to create a new partition.
- Then choose P for primary partition.
- Next choose the partition number as 1.
- Accept the default values by pressing the Enter key twice.
- Next press p to print the defined partition.
- Press L to list all available types.
- Type t to change the partition type.
- Choose fd for Linux raid auto and press Enter to apply.
- Then use p again to print the changes we have made.
- Use w to write the changes.
![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition.png)
Create /dev/sdb Partition
**Create /dev/sdc Partition**
# fdisk /dev/sdc
![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition.png)
Create /dev/sdc Partition
**Create /dev/sdd Partition**
# fdisk /dev/sdd
![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition.png)
Create /dev/sdd Partition
**Create /dev/sde Partition**
# fdisk /dev/sde
![Create sde Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sde-Partition.png)
Create /dev/sde Partition
5. After creating the partitions, it is always a good habit to examine the drives for super-blocks. If no super-blocks exist, we can go ahead and create a new RAID setup.
# mdadm -E /dev/sd[b-e]1
or
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
![Check Raid on New Partitions](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Partitions.png)
Check Raid on New Partitions
### Step 3: Creating md device (RAID) ###
6. Now it's time to create the RAID device md0 (i.e. /dev/md0), apply the RAID level to all the newly created partitions and confirm the RAID, using the following commands.
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# cat /proc/mdstat
![Create Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Raid-6-Device.png)
Create Raid 6 Device
7. You can also watch the current progress of the RAID build using the watch command, as shown in the screen grab below.
# watch -n1 cat /proc/mdstat
![Check Raid 6 Process](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Process.png)
Check Raid 6 Process
8. Verify the raid devices using the following command.
# mdadm -E /dev/sd[b-e]1
**Note**: The above command displays the information for all four disks, which is quite long, so it is not possible to post the output or a screen grab here.
9. Next, verify the RAID array to confirm that re-syncing has started.
# mdadm --detail /dev/md0
![Check Raid 6 Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Array.png)
Check Raid 6 Array
### Step 4: Creating FileSystem on Raid Device ###
10. Create an ext4 filesystem on /dev/md0 and mount it under /mnt/raid6. Here we've used ext4, but you can use any type of filesystem as per your choice.
# mkfs.ext4 /dev/md0
![Create File System on Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Create-File-System-on-Raid.png)
Create File System on Raid 6
11. Mount the created filesystem under /mnt/raid6 and check the files under the mount point; you will see a lost+found directory.
# mkdir /mnt/raid6
# mount /dev/md0 /mnt/raid6/
# ls -l /mnt/raid6/
12. Create some files under the mount point and append some text to one of them to verify the content.
# touch /mnt/raid6/raid6_test.txt
# ls -l /mnt/raid6/
# echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt
# cat /mnt/raid6/raid6_test.txt
![Verify Raid Content](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Content.png)
Verify Raid Content
13. Add an entry in /etc/fstab to auto-mount the device at system startup, appending the line below. The mount point may differ according to your environment.
# vim /etc/fstab
/dev/md0 /mnt/raid6 ext4 defaults 0 0
![Automount Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Automount-Raid-Device.png)
Automount Raid 6 Device
14. Next, execute the mount -av command to verify whether there are any errors in the fstab entry.
# mount -av
![Verify Raid Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Automount-Raid-Devices.png)
Verify Raid Automount
### Step 5: Save RAID 6 Configuration ###
15. Please note that by default RAID has no config file; we have to save it manually using the command below, and then verify the status of the device /dev/md0.
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
# mdadm --detail /dev/md0
![Save Raid 6 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png)
Save Raid 6 Configuration
![Check Raid 6 Status](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png)
Check Raid 6 Status
### Step 6: Adding a Spare Drive ###
16. The array now has 4 disks and two sets of parity information available. In some cases, even if one of the disks fails we can still get the data, because there is double parity in RAID 6.
Even if a second disk fails, we can add a new one before losing a third disk. It is possible to add a spare drive while creating the RAID set, but I did not define one at creation time. A spare drive can be added either after a drive failure or while creating the RAID set. Since we have already created the RAID set, let me now add a spare drive for demonstration.
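For reference, a spare can also be declared when the array is first created; a sketch of what that would have looked like (assuming a fifth prepared partition such as /dev/sdf1) is:

    # mdadm --create /dev/md0 --level=6 --raid-devices=4 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1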
For demonstration purposes, I've hot-plugged a new HDD (i.e. /dev/sdf); let's verify the attached disk.
# ls -l /dev/ | grep sd
![Check New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-New-Disk.png)
Check New Disk
17. Now confirm again whether the newly attached disk already has any RAID configured, using the same mdadm command.
# mdadm --examine /dev/sdf
![Check Raid on New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Disk.png)
Check Raid on New Disk
**Note**: As usual, just as we created partitions for the four disks earlier, we have to create a new partition on the newly plugged-in disk using the fdisk command.
# fdisk /dev/sdf
![Create sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Partition-on-sdf.png)
Create /dev/sdf Partition
18. After creating the new partition on /dev/sdf, confirm that there is no RAID on the partition, add the spare drive to the /dev/md0 RAID device and verify the added device.
# mdadm --examine /dev/sdf
# mdadm --examine /dev/sdf1
# mdadm --add /dev/md0 /dev/sdf1
# mdadm --detail /dev/md0
![Verify Raid on sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-on-sdf.png)
Verify Raid on sdf Partition
![Add sdf Partition to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Add-sdf-Partition-to-Raid.png)
Add sdf Partition to Raid
![Verify sdf Partition Details](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-sdf-Details.png)
Verify sdf Partition Details
### Step 7: Check Raid 6 Fault Tolerance ###
19. Now let us check whether the spare drive kicks in automatically if one of the disks in our array fails. For testing, I have manually marked one of the drives as failed.
Here, we're going to mark /dev/sdd1 as the failed drive.
# mdadm --manage --fail /dev/md0 /dev/sdd1
![Check Raid 6 Fault Tolerance](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Failover.png)
Check Raid 6 Fault Tolerance
20. Let me get the details of the RAID set now and check whether our spare has started to sync.
# mdadm --detail /dev/md0
![Check Auto Raid Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Auto-Raid-Syncing.png)
Check Auto Raid Syncing
**Hurray!** Here we can see that the spare has been activated and the rebuild process has started. At the bottom we can see the faulty drive /dev/sdd1 listed as faulty. We can monitor the rebuild process using the following command.
# cat /proc/mdstat
![Raid 6 Auto Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-6-Auto-Syncing.png)
Raid 6 Auto Syncing
### Conclusion: ###
Here we have seen how to set up RAID 6 using four disks. This RAID level is one of the more expensive setups, with high redundancy. We will see how to set up nested RAID 10 and much more in the next articles. Until then, stay connected with TECMINT.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-raid-6-in-linux/
作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
[2]:http://www.tecmint.com/create-raid0-in-linux/
[3]:http://www.tecmint.com/create-raid1-in-linux/

View File

@ -0,0 +1,276 @@
struggling 翻译中
Setting Up RAID 10 or 1+0 (Nested) in Linux Part 6
================================================================================
RAID 10 is a combination of RAID 0 and RAID 1. To set up RAID 10, we need at least 4 disks. In our earlier articles we've seen how to set up RAID 0 and RAID 1, each with a minimum of 2 disks.
Here we will use both RAID 0 and RAID 1 to perform a RAID 10 setup with a minimum of 4 drives. Assume we have some data saved to a logical volume created with RAID 10. For example, if we save the data “apple”, it will be stored across all 4 disks by the following method.
![Create Raid 10 in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg)
Create Raid 10 in Linux
Using RAID 0 it saves “A” on the first disk and “p” on the second disk, then “p” on the first disk again and “l” on the second, then “e” on the first, continuing this round-robin process to save the data. From this we know that RAID 0 writes half of the data to the first disk and the other half to the second disk.
With the RAID 1 method, the same data is written to the other 2 disks as follows: “A” is written to both the first and second disks, “p” is written to both disks, the next “p” is written to both disks again, and so on. Thus RAID 1 writes to both disks, and this continues in a round-robin process.
Now you know how RAID 10 works by combining RAID 0 and RAID 1. If we have four 20GB disks, 80GB in total, we will get only 40GB of usable storage capacity; half of the total capacity is lost in building RAID 10.
#### Pros and Cons of RAID 10 ####
- Gives better performance.
- We lose two disks' worth of capacity in RAID 10.
- Reading and writing are both very good, because it reads from and writes to all 4 disks at the same time.
- It can be used for database solutions that need high disk I/O.
#### Requirements ####
In RAID 10 we need a minimum of 4 disks: the first 2 disks for RAID 0 and the other 2 disks for RAID 1. As I said before, RAID 10 is just a combination of RAID 0 and 1. If we need to extend the RAID group, we must add disks in multiples of 4.
**My Server Setup**
Operating System : CentOS 6.5 Final
IP Address : 192.168.0.229
Hostname : rd10.tecmintlocal.com
Disk 1 [20GB] : /dev/sdb
Disk 2 [20GB] : /dev/sdc
Disk 3 [20GB] : /dev/sdd
Disk 4 [20GB] : /dev/sde
There are two ways to set up RAID 10. I am going to show you both methods here, but I recommend following the first one, which makes the work of setting up RAID 10 a lot easier.
### Method 1: Setting Up Raid 10 ###
1. First, verify that all 4 added disks are detected, using the following command.
# ls -l /dev | grep sd
2. Once the four disks are detected, it's time to check whether any RAID already exists on the drives before creating a new one.
# mdadm -E /dev/sd[b-e]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png)
Verify 4 Added Disks
**Note**: In the above output, you can see that there isn't any super-block detected yet, which means no RAID is defined on any of the 4 drives.
#### Step 1: Drive Partitioning for RAID ####
3. Now create a new partition on all 4 disks (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) using the fdisk tool.
# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde
**Create /dev/sdb Partition**
Let me show you how to partition one of the disks (/dev/sdb) using fdisk; the steps are the same for all the other disks.
# fdisk /dev/sdb
Please use the steps below to create a new partition on the /dev/sdb drive.
- Press n for creating new partition.
- Then choose P for Primary partition.
- Then choose 1 to be the first partition.
- Next press p to print the created partition.
- Change the type. Press L if you need to see all available types.
- Here we select fd, as the type we need is Linux raid autodetect.
- Next press p to print the defined partition.
- Then use p again to print the changes we have made.
- Use w to write the changes.
![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png)
Disk sdb Partition
**Note**: Please use the same instructions above to create partitions on the other disks (sdc, sdd and sde). A scripted sketch of the same keystrokes follows below.
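For reference, the same keystrokes can also be scripted so that all four disks get partitioned in one go. Treat this as a rough sketch only: it assumes blank disks and the default prompts shown above, so review it before running.

    for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde
    do
        # keystrokes: n, p, 1, default first sector, default last sector, t, fd, p, w
        echo -e "n\np\n1\n\n\nt\nfd\np\nw" | fdisk $disk
    done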
4. After creating all 4 partitions, again you need to examine the drives for any already existing raid using the following command.
# mdadm -E /dev/sd[b-e]
# mdadm -E /dev/sd[b-e]1
OR
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
![Check All Disks for Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png)
Check All Disks for Raid
**Note**: The above output shows that no super-block has been detected on any of the four newly created partitions, which means we can move forward and create RAID 10 on these drives.
#### Step 2: Creating md RAID Device ####
5. Now it's time to create an md device (i.e. /dev/md0) using the mdadm RAID management tool. Before creating the device, your system must have the mdadm tool installed; if not, install it first.
# yum install mdadm [on RedHat systems]
    # apt-get install mdadm [on Debian systems]
Once the mdadm tool is installed, you can create an md RAID device using the following command.
# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
6. Next verify the newly created raid device using the cat command.
# cat /proc/mdstat
![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png)
Create md raid Device
7. Next, examine all the 4 drives using the below command. The output of the below command will be long as it displays the information of all 4 disks.
# mdadm --examine /dev/sd[b-e]1
8. Next, check the details of Raid Array with the help of following command.
# mdadm --detail /dev/md0
![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png)
Check Raid Array Details
**Note**: You can see in the above results that the status of the RAID is active and re-syncing.
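While the array is re-syncing, you can optionally watch the progress refresh every second:

    # watch -n1 cat /proc/mdstat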
#### Step 3: Creating Filesystem ####
9. Create an ext4 file system on md0 and mount it under /mnt/raid10. Here I've used ext4, but you can use any filesystem type you want.
# mkfs.ext4 /dev/md0
![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png)
Create md Filesystem
10. After creating filesystem, mount the created file-system under /mnt/raid10 and list the contents of the mount point using ls -l command.
# mkdir /mnt/raid10
# mount /dev/md0 /mnt/raid10/
# ls -l /mnt/raid10/
Next, add some files under the mount point, append some text to one of them, and check the content.
# touch /mnt/raid10/raid10_files.txt
# ls -l /mnt/raid10/
# echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt
# cat /mnt/raid10/raid10_files.txt
![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png)
Mount md Device
11. For automounting, open the /etc/fstab file and append the entry below; the mount point may differ according to your environment. Save and quit using wq!.
# vim /etc/fstab
/dev/md0 /mnt/raid10 ext4 defaults 0 0
![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png)
AutoMount md Device
12. Next, verify the /etc/fstab file for any errors with the mount -a command before restarting the system.
# mount -av
![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png)
Check Errors in Fstab
#### Step 4: Save RAID Configuration ####
13. By default RAID doesn't have a config file, so after completing all the above steps we need to save the configuration manually to preserve these settings across system boots.
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png)
Save Raid10 Configuration
That's it, we have created RAID 10 using method 1, which is the easier one. Now let's move forward and set up RAID 10 using method 2.
### Method 2: Creating RAID 10 ###
1. In method 2, we have to define 2 sets of RAID 1 and then define a RAID 0 on top of those RAID 1 sets. In other words, we first create 2 mirrors (RAID 1) and then stripe over them (RAID 0).
First, list the disks which are all available for creating RAID 10.
# ls -l /dev | grep sd
![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png)
List 4 Devices
2. Partition all 4 disks using the fdisk command. For partitioning, you can follow step 3 above.
# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde
3. After partitioning all 4 disks, examine them for any existing RAID super-blocks.
# mdadm --examine /dev/sd[b-e]
# mdadm --examine /dev/sd[b-e]1
![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png)
Examine 4 Disks
#### Step 1: Creating RAID 1 ####
4. First let me create 2 sets of RAID 1 using the 4 disks: one set using sdb1 and sdc1, and the other set using sdd1 and sde1.
# mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
# mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
# cat /proc/mdstat
![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)
Creating Raid 1
![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)
Check Details of Raid 1
#### Step 2: Creating RAID 0 ####
5. Next, create the RAID 0 using md1 and md2 devices.
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
# cat /proc/mdstat
![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png)
Creating Raid 0
#### Step 3: Save RAID Configuration ####
6. We need to save the configuration under /etc/mdadm.conf so that all RAID devices are loaded on every reboot.
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
After this, we need to follow step 3 (Creating Filesystem) of method 1.
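For convenience, here is a compressed recap of those filesystem steps from method 1 (the fstab entry is appended with echo here instead of an editor):

    # mkfs.ext4 /dev/md0
    # mkdir -p /mnt/raid10
    # mount /dev/md0 /mnt/raid10/
    # echo "/dev/md0 /mnt/raid10 ext4 defaults 0 0" >> /etc/fstab
    # mount -av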
That's it! We have created RAID 1+0 using method 2. We lose half of the disk space here, but the performance will be excellent compared to other RAID setups.
### Conclusion ###
Here we have created RAID 10 using two methods. RAID 10 offers both good performance and redundancy. I hope this helps you understand the nested RAID 10 level. We will see how to grow an existing RAID array and much more in my upcoming articles.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-raid-10-in-linux/
作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/

View File

@ -0,0 +1,180 @@
struggling 翻译中
Growing an Existing RAID Array and Removing Failed Disks in Raid Part 7
================================================================================
Many newbies get confused by the word array. An array is just a collection of disks; in other words, we can call an array a set or group, just like a carton holding 6 eggs. Likewise, a RAID array contains a number of disks, which may be 2, 4, 6, 8, 12, 16 and so on. Hopefully you now know what an array is.
Here we will see how to grow (extend) an existing array or RAID group. For example, if we are using 2 disks in an array to form a RAID 1 set and we need more space in that group, we can extend the array using the mdadm --grow command, simply by adding a disk to the existing array. After growing (adding a disk to an existing array), we will see how to remove a failed disk from the array.
![Grow Raid Array in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Growing-Raid-Array.jpg)
Growing Raid Array and Removing Failed Disks
Assume that one of the disks is getting weak and needs to be removed. Until it actually fails we can keep it in use, but we need to add a spare drive and grow the mirror before it fails, because we need to protect our data. When the weak disk fails, we can remove it from the array; this is the concept we are going to cover in this topic.
#### Features of RAID Growth ####
- We can grow (extend) the size of any raid set.
- We can remove the faulty disk after growing raid array with new disk.
- We can grow raid array without any downtime.
#### Requirements ####
- To grow a RAID array, we need an existing RAID set (array).
- We need extra disks to grow the array.
- Here I'm using 1 disk to grow the existing array.
Before we learn about growing and recovering of Array, we have to know about the basics of RAID levels and setups. Follow the below links to know about those setups.
- [Understanding Basic RAID Concepts Part 1][1]
- [Creating a Software Raid 0 in Linux Part 2][2]
#### My Server Setup ####
Operating System : CentOS 6.5 Final
IP Address : 192.168.0.230
Hostname : grow.tecmintlocal.com
2 Existing Disks : 1 GB
1 Additional Disk : 1 GB
Here, my existing RAID array has 2 disks of 1 GB each, and we are now adding one more 1 GB disk to it.
### Growing an Existing RAID Array ###
1. Before growing an array, first list the existing Raid array using the following command.
# mdadm --detail /dev/md0
![Check Existing Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Existing-Raid-Array.png)
Check Existing Raid Array
**Note**: The above output shows that I already have two disks in the RAID array at the raid1 level. Now we are going to add one more disk to this existing array.
2. Now let's add the new disk “sdd” and create a partition on it using the fdisk command.
# fdisk /dev/sdd
Please use the below instructions to create a partition on /dev/sdd drive.
- Press n for creating new partition.
- Then choose P for Primary partition.
- Then choose 1 to be the first partition.
- Next press p to print the created partition.
- Here we select fd, as the type we need is Linux raid autodetect.
- Next press p to print the defined partition.
- Then again use p to print the changes what we have made.
- Use w to write the changes.
![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Create-New-sdd-Partition.png)
Create New sdd Partition
3. Once the new sdd partition is created, you can verify it using the command below.
# ls -l /dev/ | grep sd
![Confirm sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-sdd-Partition.png)
Confirm sdd Partition
4. Next, examine the newly created partition for any existing RAID before adding it to the array.
# mdadm --examine /dev/sdd1
![Check Raid on sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-sdd-Partition.png)
Check Raid on sdd Partition
**Note**: The above output shows that no super-blocks were detected on the disk, which means we can move forward and add the new disk to the existing array.
5. To add the new partition /dev/sdd1 to the existing array md0, use the following command.
# mdadm --manage /dev/md0 --add /dev/sdd1
![Add Disk To Raid-Array](http://www.tecmint.com/wp-content/uploads/2014/11/Add-Disk-To-Raid-Array.png)
Add Disk To Raid-Array
6. Once the new disk has been added, check for it in our array using the following command.
# mdadm --detail /dev/md0
![Confirm Disk Added to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Disk-Added-To-Raid.png)
Confirm Disk Added to Raid
**Note**: In the above output, you can see the drive has been added as a spare. We already have 2 disks in the array, but we are expecting 3 active devices, so we need to grow the array.
7. To grow the array, we have to use the command below.
# mdadm --grow --raid-devices=3 /dev/md0
![Grow Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Raid-Array.png)
Grow Raid Array
Now we can see that the third disk (sdd1) has been added to the array; after the third disk is added, it syncs the data from the other two disks.
# mdadm --detail /dev/md0
![Confirm Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Raid-Array.png)
Confirm Raid Array
**Note**: For large disks it will take hours to sync the contents. Here I have used a 1 GB virtual disk, so it completed very quickly, within seconds.
### Removing Disks from Array ###
8. After the data has been synced to the new disk sdd1 from the other two disks, all three disks now have the same contents.
As mentioned earlier, let's assume that one of the disks is weak and needs to be removed before it fails. So now assume that disk sdc1 is weak and needs to be removed from the existing array.
Before removing a disk, we have to mark it as failed; only then can we remove it.
# mdadm --fail /dev/md0 /dev/sdc1
# mdadm --detail /dev/md0
![Disk Fail in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-Fail-in-Raid-Array.png)
Disk Fail in Raid Array
From the above output, we can clearly see that the disk is marked as faulty at the bottom. Even though it is faulty, we can see that the number of RAID devices is 3, failed devices 1, and the state is degraded.
Now we have to remove the faulty drive from the array and "grow" the array back down to 2 devices, so that the number of RAID devices is set to 2 as before.
# mdadm --remove /dev/md0 /dev/sdc1
![Remove Disk in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Remove-Disk-in-Raid-Array.png)
Remove Disk in Raid Array
9. Once the faulty drive is removed, we have to grow the RAID array back down to 2 disks.
# mdadm --grow --raid-devices=2 /dev/md0
# mdadm --detail /dev/md0
![Grow Disks in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Disks-in-Raid-Array.png)
Grow Disks in Raid Array
From the above output, you can see that our array now has only 2 devices. If you need to grow the array again, follow the same steps as described above. If you want to keep a drive as a hot spare, add it as a spare so that if a disk fails, it will automatically become active and rebuild.
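For example, a spare can be added with the same --add option used earlier; as long as the array already has its two active devices, the extra disk simply stays as a spare until it is needed (the device name below is only an example):

    # mdadm --manage /dev/md0 --add /dev/sdc1
    # mdadm --detail /dev/md0

In the --detail output, the extra device should now show up as a spare.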
### Conclusion ###
In this article, we've seen how to grow an existing RAID set and how to remove a faulty disk from an array after re-syncing the existing contents. All these steps can be done without any downtime. During data syncing, system users, files and applications are not affected in any way.
In the next article I will show you how to manage RAID; till then stay tuned for updates and don't forget to add your comments.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/grow-raid-array-in-linux/
作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
[2]:http://www.tecmint.com/create-raid0-in-linux/

View File

@ -1,184 +0,0 @@
[translating by KevinSJ]
Linux_Logo A Command Line Tool to Print Color ANSI Logos of Linux Distributions
================================================================================
linuxlogo or linux_logo is a Linux command line utility that generates a color ANSI picture of a distribution logo along with some system information.
![Linux_Logo - Prints Color ANSI Logos of Linux Distro](http://www.tecmint.com/wp-content/uploads/2015/06/Linux_Logo.png)
Linux_Logo Prints Color ANSI Logos of Linux Distro
This utility obtains System Information from /proc Filesystem. linuxlogo is capable of showing color ANSI image of various logos other than the host distribution logo.
The System information associated with logo includes Linux Kernel Version, Time when Kernel was last Compiled, Number/core of processor, Speed, Manufacturer and processor Generation. It also show information about total physical RAM.
It is worth mentioning here that screenfetch is another tool of a similar kind, which shows the distribution logo along with more detailed and nicely formatted system information. We have already covered screenfetch long ago, which you may refer to at:
- [ScreenFetch Generates Linux System Information][15]
linux_logo and screenfetch should not be compared with each other: the output of screenfetch is more formatted and detailed, while linux_logo produces a larger number of color ANSI logos and offers options to format its output.
linux_logo is written primarily in C programming Language, which displays linux logo in an X Window System and hence User Interface X11 aka X Window System should be installed. The software is released under GNU General Public License Version 2.0.
For the purpose of this article, were using following testing environment to test the linux_logo utility.
Operating System : Debian Jessie
Processor : i3 / x86_64
### Installing Linux Logo Utility in Linux ###
**1. The linuxlogo package (stable version 5.11) is available to install from default package repository under all Linux distributions using apt, yum or dnf package manager as shown below.**
# apt-get install linux_logo [On APT based Systems]
# yum install linux_logo [On Yum based Systems]
# dnf install linux_logo [On DNF based Systems]
OR
# dnf install linux_logo.x86_64 [For 64-bit architecture]
**2. Once the linuxlogo package has been installed, you can run the command `linuxlogo` to get the default logo for the distribution you are using.**
# linux_logo
OR
# linuxlogo
![Get Default OS Logo](http://www.tecmint.com/wp-content/uploads/2015/06/Get-Default-OS-Logo.png)
Get Default OS Logo
**3. Use the option `[-a]`, not to print any fancy color. Useful if viewing linux_logo over black and white terminal.**
# linux_logo -a
![Black and White Linux Logo](http://www.tecmint.com/wp-content/uploads/2015/06/Black-and-White-Linux-Logo.png)
Black and White Linux Logo
**4. Use option `[-l]` to print LOGO only and exclude all other System Information.**
# linux_logo -l
![Print Distribution Logo](http://www.tecmint.com/wp-content/uploads/2015/06/Print-Distribution-Logo.png)
Print Distribution Logo
**5. The `[-u]` switch will display system uptime.**
# linux_logo -u
![Print System Uptime](http://www.tecmint.com/wp-content/uploads/2015/06/Print-System-Uptime.png)
Print System Uptime
**6. If you are interested in Load Average, use option `[-y]`. You may use more than one option at a time.**
# linux_logo -y
![Print System Load Average](http://www.tecmint.com/wp-content/uploads/2015/06/Print-System-Load-Average.png)
Print System Load Average
For more options and help on them, you may like to run.
# linux_logo -h
![Linuxlogo Options and Help](http://www.tecmint.com/wp-content/uploads/2015/06/linuxlogo-options.png)
Linuxlogo Options and Help
**7. There are a lots of built-in Logos for various Linux distributions. You may see all those logos using option `-L list` switch.**
# linux_logo -L list
![List of Linux Logos](http://www.tecmint.com/wp-content/uploads/2015/06/List-of-Linux-Logos.png)
List of Linux Logos
Now you want to print any of the logo from the list, you may use `-L NUM` or `-L NAME` to display selected logo.
- -L NUM will print logo with number NUM (deprecated).
- -L NAME will print the logo with name NAME.
For example, to display AIX Logo, you may use command as:
# linux_logo -L 1
OR
# linux_logo -L aix
![Print AIX Logo](http://www.tecmint.com/wp-content/uploads/2015/06/Print-AIX-Logo.png)
Print AIX Logo
**Notice**: The `-L 1` in the command where 1 is the number at which AIX logo appears in the list, where `-L aix` is the name at which AIX logo appears in the list.
Similarly, you may print any logo using these options, few examples to see..
# linux_logo -L 27
# linux_logo -L 21
![Various Linux Logos](http://www.tecmint.com/wp-content/uploads/2015/06/Various-Linux-Logos.png)
Various Linux Logos
This way, you can use any of the logos just by using the number or name, that is against it.
### Some Useful Tricks of Linux_logo ###
**8. You may like to print your Linux distribution logo at login. To print default logo at login you may add the below line at the end of `~/.bashrc` file.**
if [ -f /usr/bin/linux_logo ]; then linux_logo; fi
**Notice**: If there isn't any `~/.bashrc` file, you may need to create one under the user's home directory.
**9. After adding above line, just logout and re-login again to see the default logo of your Linux distribution.**
![Print Logo on User Login](http://www.tecmint.com/wp-content/uploads/2015/06/Print-Logo-on-Login.png)
Print Logo on User Login
Also note, that you may print any logo, after login, simply by adding the below line.
if [ -f /usr/bin/linux_logo ]; then linux_logo -L num; fi
**Important**: Dont forget to replace num with the number that is against the logo, you want to use.
**10. You can also print your own logo by simply specifying the location of the logo as shown below.**
# linux_logo -D /path/to/ASCII/logo
**11. Print logo on Network Login.**
# /usr/local/bin/linux_logo > /etc/issue.net
You may like to use ASCII logo if there is no support for color filled ANSI Logo as:
# /usr/local/bin/linux_logo -a > /etc/issue.net
**12. Create a Penguin port, a port set up to answer connections. To create the Penguin port, add the line below to the /etc/services file.**
penguin 4444/tcp penguin
Here 4444 is the port number which is currently free and not used by any resource. You may use a different port.
Also add the below line to file /etc/inetd.conf file.
penguin stream tcp nowait root /usr/local/bin/linux_logo
Restart the service inetd as:
# killall -HUP inetd
Moreover linux_logo can be used in bootup script to fool the attacker as well as you can play a prank with your friend. This is a nice tool and I might use it in some of my scripts to get output as per distribution basis.
Try it once and you wont regret. Let us know what you think of this utility and how it can be useful for you. Keep Connected! Keep Commenting. Like and share us and help us get spread.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/

View File

@ -0,0 +1,155 @@
Translating by H-mudcup
Linux上绝对有趣的10个彩蛋
================================================================================
![](http://en.wikipedia.org/wiki/File:Adventure_Easteregg.PNG)
制作 Adventure 的程序员悄悄的把一个秘密的功能塞进了游戏里。Atari 并没有对此感到生气,而是给这类“秘密功能”起了个名字——“彩蛋”,因为——你懂的——你会像找复活节彩蛋一样寻找它们。图片来自: Wikipedia。
在1979年的时候Atari 公司为 Atari 2600 开发了一个电子游戏——[Adventure][1]。
制作 Adventure 的程序员悄悄的把这样的一个功能放进了游戏里当用户把一个“隐形方块”移动到特定的一面墙上时会让用户进入一个“密室”。那个房间里只有一句话“Created by [Warren Robinett][2]”——意思是,由 [Warren Robinett][2] 创建。
Atari 有一项反对作者将自己的名字放进他们的游戏里的政策所以这个无畏的程序员只能偷偷的把自己的名字放进游戏里。Atari 在 Warren Robinett 离开公司之后才发现这个“密室”。Atari 并没有对此感到生气而是给这类“秘密功能”起了个名字——“彩蛋”因为——你懂的——你会寻找它们。Atari 还宣布将在之后的游戏中加入更多的“彩蛋”。
这种软件里的“隐藏功能”并不是第一次出现这类特性的首次出现是在1966年[PDP-10][3]的操作系统上),但这是它第一次有了名字,同时也是第一次真正的被众多电脑用户和游戏玩家所注意。
Linux以及和Linux相关的软件并没有被遗忘。这些年来人们为这个倍受喜爱的操作系统创作了很多非常有趣的彩蛋。下面将介绍我个人最喜爱的彩蛋——以及如何得到它们。
你将迅速意识到这些彩蛋大多需要通过终端才能体验到。这是故意的。因为终端比较酷。【我应该借此机会提醒你一下,如果你想运行我所列出的应用,然而你却还没有安装它们,你是绝对无法运行成功的。你应该先安装好它们的。因为……毕竟只是计算机。】
### Arch : 包管理器pacman里的吃豆人Pac-Man ###
为了广大的 [Arch Linux][4] 粉丝,我们将以此开篇。你们可以将“[pacman][6]” (Arch 的包管理器)的进度条变成吃豆人吃豆的样子。别问我为什么这不是默认设置。
你需要在你最喜欢的文本编辑器里编辑“/etc/pacman.conf”文件。在“# Misc options”区下面删除“Color”前的“#”添加一行“ILoveCandy”。因为吃豆人喜欢糖豆。
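下面是一个等价的命令行改法(仅作示意,假设用的是默认的 /etc/pacman.conf改之前最好先备份

    # cp /etc/pacman.conf /etc/pacman.conf.bak
    # sed -i 's/^#Color/Color/' /etc/pacman.conf
    # sed -i '/^# Misc options/a ILoveCandy' /etc/pacman.conf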
没错这样就行了下次你在终端里运行pacman管理器时你就会让这个黄颜色的小家伙吃到些午餐至少能吃些糖豆
### GNU Emacs : 俄罗斯方块Tetris以及…… ###
![emacs Tetris](http://www.linux.com/images/stories/41373/emacsTetris.jpg)
我不喜欢 emacs。一点也不喜欢。但是它确实能玩俄罗斯方块。
我要坦白一件事:我不喜欢[emacs][7]。一点也不喜欢。
有些东西让我满心欢喜。有些东西能带走我所有伤痛。有些东西能解决我的烦恼。这些[绝对跟 emacs 无关][8]。
但是它确实能玩俄罗斯方块。这可不是件小事。方法如下:
第一步)打开 emacs。有疑问输入“emacs”。
第二步按下键盘上的Esc和X键。
第三步输入“tetris”然后按下“Enter”。
玩腻了俄罗斯方块试试“pong”、“snake”还有其他一堆小游戏或奇怪的东西。在“/usr/share/emacs/*/lisp/play”文件中可以看见完整的清单。
### 动物说话了 ###
让动物在终端里说话在 Linux 世界里有着悠久而辉煌的历史。下面这些真的是最应该知道的。
在基于 Debian 的发行版上试试输入“apt-get moo”。
![apt-get moo](http://www.linux.com/images/stories/41373/AptGetMoo.jpg)
apt-get moo
的确简单,但这可是一头会说话的牛,所以我们喜欢它。再试试“aptitude moo”它会告诉你“There are no Easter Eggs in this program这个程序里没有彩蛋”。
关于 [aptitude][9] 有一件事你一定要知道,它是个肮脏、下流的骗子。如果 aptitude 是匹诺曹,那它的鼻子能刺穿月球。在这条命令中添加“-v”选项。不停的添加 v直到它被逼得投降。
![](http://www.linux.com/images/stories/41373/AptitudeMoo.jpg)
我猜大家都同意,这是 aptitude 中最重要的功能。
我猜大家都同意,这是 aptitude 中最重要的功能。但是万一你想把自己的话让一头牛说出来怎么办这时我们就需要“cowsay”了。
还有别让“cowsay牛说”这个名字把你给骗了。你可以让你的话从各种东西的嘴里说出来。比如一头大象CalvinBeavis 甚至可以是Ghostbusters捉鬼敢死队的标志。只需在终端输入“cowsay -l”就能看到所有选项的列表。
![](http://www.linux.com/images/stories/41373/cowsay.jpg)
你可以让你的话从各种东西的嘴里说出来
想玩高端点的?你可以用管道把其他应用的输出放到 cowsay 中。试试“fortune | cowsay”。非常有趣。
### Sudo 请无情的侮辱我 ###
当你做错事时希望你的电脑骂你的人请举手。反正,我这样想过。试试这个:
输入“sudo visudo”以打开“sudoers”文件。在文件的开头你很可能会看见几行以“Defaults”开头的文字。在那几行后面添加“Defaults insults”并保存文件。
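改完之后,文件开头那几行大致会像下面这样(仅作示意,原有的 Defaults 行因发行版而异):

    Defaults        env_reset      # 系统原有的一行(示意)
    Defaults        insults        # 新加的一行:输错密码时嘲讽你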
现在,只要你输错了你的 sudo 密码,你的系统就会骂你。这些提高自信的语句包括“听着,煎饼脑袋,我可没时间听这些垃圾。”,“你吃错药了吧?”以及“你被电过以后大脑就跟以前不太一样了是不是?”
把这个设在同事的电脑上会有非常有趣。
### Firefox 是个厚脸皮 ###
这一个不需要终端!太棒了!
打开火狐浏览器。在地址栏填上“about:about”。你将得到火狐浏览器中所有的“about”页。一点也不炫酷是不是
现在试试“about:mozilla”浏览器就会回应你一条从“[Book of MozillaMozilla 之书)][10]”——浏览网页的圣经——里引用的话。我的另一个最爱是“about:robots”这个也很有趣。
![](http://www.linux.com/images/stories/41373/About-Mozilla550.jpg)
“[Book of MozillaMozilla 之书)][10]”——浏览网页的圣经。
### 精心调制的混搭日历 ###
是否厌倦了千百年不变的 [Gregorian Calendar罗马教历][11]准备好乱入了吗试试输入“ddate”。这样会把当前日历以[Discordian Calendar不和教历][12]的方式显示出来。你会遇见这样的语句:
“今天是Sweetmorn甜美的清晨3181年Discord不和季的第18天。”
我听见你在说什么了,“但这根本不是什么彩蛋!”嘘~,闭嘴。只要我想,我就可以把它叫做彩蛋。
### 快速进入黑客行话模式 ###
想不想尝试一下电影里超级黑客的感觉?试试 nmap通过添加“-oS”选项把扫描器设置成“[Script Kiddie][13]”模式然后所有的输出都会变成最 3l33t 的[黑客范][14]。
例如: “nmap -oS - google.com”
赶快试试。我知道你有多想这么做。你一定会让安吉丽娜·朱莉Angelina Jolie[印象深刻][15]
### lolcat彩虹 ###
在你的Linux终端里有很多彩蛋真真是极好的……但是如果你还想要变得……更有魅力些怎么办输入lolcat。把任何一个程序的文本输出通过管道输入到lolcat里。你会得到它的超级无敌彩虹版。
![](http://www.linux.com/images/stories/41373/lolcat.jpg)
把任何一个程序的文本输出通过管道输入到lolcat里。你会得到它的超级无敌彩虹版。
### 追光标的小家伙 ###
![oneko cat](http://www.linux.com/images/stories/41373/onekocat.jpg)
“Oneko” -- 经典“Neko”的 Linux 移植版。
“Oneko” -- 经典“[Neko][16]”的 Linux 移植版。
接下来是“Oneko” -- 经典“[Neko][16]”的 Linux 移植版。基本上就是个满屏幕追着你的光标跑的小猫。
虽然严格来它并不算是“彩蛋”,它还是很有趣的。而且感觉上也是很“彩蛋”的。
你还可以用不同的选项比如“oneko -dog”把小猫替代成小狗或是调成其他样式。用这个对付讨厌的同事有着无限的可能。
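比如像下面这样玩(放到后台运行,玩够了再结束它):

    $ oneko &             # 小猫追着光标跑
    $ oneko -dog &        # 换成小狗来追
    $ killall oneko       # 收工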
就是这些了!以上是一份我最喜欢的 Linux 彩蛋(或类似东西)的清单。请尽情地在下面的评论区留下你的最爱。因为这是互联网,你就是可以做这种事。
--------------------------------------------------------------------------------
via: http://www.linux.com/news/software/applications/820944-10-truly-amusing-linux-easter-eggs-
作者:[Bryan Lunduke][a]
译者:[H-mudcup](https://github.com/H-mudcup)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linux.com/community/forums/person/56734
[1]:http://en.wikipedia.org/wiki/Adventure_(Atari_2600)
[2]:http://en.wikipedia.org/wiki/Warren_Robinett
[3]:http://en.wikipedia.org/wiki/PDP-10
[4]:http://en.wikipedia.org/wiki/Arch_Linux
[5]:http://en.wikipedia.org/wiki/Pac-Man
[6]:http://www.linux.com/news/software/applications/820944-10-truly-amusing-linux-easter-eggs-#Pacman
[7]:http://en.wikipedia.org/wiki/GNU_Emacs
[8]:https://www.youtube.com/watch?v=AQ4NAZPi2js
[9]:https://wiki.debian.org/Aptitude
[10]:http://en.wikipedia.org/wiki/The_Book_of_Mozilla
[11]:http://en.wikipedia.org/wiki/Gregorian_calendar
[12]:http://en.wikipedia.org/wiki/Discordian_calendar
[13]:http://nmap.org/book/output-formats-script-kiddie.html
[14]:http://nmap.org/book/output-formats-script-kiddie.html
[15]:https://www.youtube.com/watch?v=Ql1uLyuWra8
[16]:http://en.wikipedia.org/wiki/Neko_%28computer_program%29

View File

@ -1,65 +0,0 @@
如何打造自己的Linux发行版
================================================================================
您是否想过打造您自己的Linux发行版每个Linux用户在他们使用Linux的过程中都想过做一个他们自己的发行版至少一次。我也不例外作为一个Linux菜鸟我也考虑过开发一个自己的Linux发行版。开发一个Linux发行版被叫做Linux From Scratch (LFS)。
在开始之前我总结了一些LFS的内容如下
### 1. 那些想要打造他们自己的Linux发行版的人应该了解打造一个Linux发行版打造意味着从头开始与配置一个已有的Linux发行版的不同 ###
如果您只是想调整下屏幕显示、定制登录以及拥有更好的外表和使用体验。您可以选择任何一个Linux发行版并且按照您的喜好进行个性化配置。此外有许多配置工具可以帮助您。
如果您想打包所有必须的文件、boot-loaders和内核并选择什么该被包括进来然后依靠自己编译这一切东西。那么您需要Linux From Scratch (LFS)。
**注意**如果您只想要定制Linux系统的外表和体验这个指南不适合您。但如果您真的想打造一个Linux发行版并且想了解怎么开始以及一些其他的信息那么这个指南正是为您而写。
### 2. 打造一个Linux发行版LFS的好处 ###
- 您将了解Linux系统的内部工作机制
- 您将开发一个灵活的适应您需求的系统
- 您开发的系统LFS将会非常紧凑因为您对该包含/不该包含什么拥有绝对的掌控
- 您开发的系统LFS在安全性上会更好
### 3. 打造一个Linux发行版LFS的坏处 ###
打造一个Linux系统意味着将所有需要的东西放在一起并且编译之。这需要许多查阅、耐心和时间。而且您需要一个可用的Linux系统和足够的磁盘空间来打造Linux系统。
### 4. 有趣的是Gentoo/GNU Linux在某种意义上最接近于LFS。Gentoo和LFS都是完全从源码编译的定制的Linux系统 ###
### 5. 您应该是一个有经验的Linux用户对编译包、解决依赖有相当的了解并且是个shell脚本的专家。了解一门编程语言C最好将会使事情变得容易些。但哪怕您是一个新手只要您是一个优秀的学习者可以很快的掌握知识您也可以开始。最重要的是不要在LFS过程中丢失您的热情。 ###
如果您不够坚定恐怕会在LFS进行到一半时放弃。
### 6. 现在您需要一步一步的指导来打造一个Linux。LFS是打造Linux的官方指南。我们的搭档的站点tradepub也为我们的读者制作了LFS的指南这同样是免费的。 ###
您可以从下面的链接下载Linux From Scratch的书籍
[![](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-From-Scratch.gif)][1]
下载: [Linux From Scratch][1]
### 关于Linux From Scratch ###
这本书是由LFS的项目领头人Gerard Beekmans创作的由Matthew Burgess和Bruse Dubbs做编辑两人都是LFS项目的联合领导人。这本书内容很广泛有338页长。
书中内容包括介绍LFS、准备构建、构建LinuxLFS、建立启动脚本、使LFS可以引导和附录。其中涵盖了您想知道的LFS项目的所有东西。
这本书还给出了编译一个包的预估时间。预估的时间以编译第一个包的时间作为参考。所有的东西都以易于理解的方式呈现,甚至对于新手来说。
如果您有充裕的时间并且真正对构建自己的Linux发行版感兴趣那么您绝对不会错过下载这个电子书免费下载的机会。您需要做的便是照着这本书在一个工作的Linux系统任何Linux发行版足够的磁盘空间即可中开始构建您自己的Linux系统时间和热情。
如果Linux使您着迷如果您想自己动手构建一个自己的Linux发行版这便是现阶段您应该知道的全部了其他的信息您可以参考上面链接的书中的内容。
请让我了解您阅读/使用这本书的经历这本详尽的LFS指南的使用是否足够简单如果您已经构建了一个LFS并且想给我们的读者一些建议欢迎留言和反馈。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-custom-linux-distribution-from-scratch/
作者:[Avishek Kumar][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://tecmint.tradepub.com/free/w_linu01/prgm.cgi

View File

@ -0,0 +1,743 @@
ZMap 文档
================================================================================
1. 初识 ZMap
1. 最佳扫描习惯
1. 命令行参数
1. 附加信息
1. TCP SYN 探测模块
1. ICMP Echo 探测模块
1. UDP 探测模块
1. 配置文件
1. 详细
1. 结果输出
1. 黑名单
1. 速度限制与抽样
1. 发送多个探测
1. ZMap 扩展
1. 示例应用程序
1. 编写探测和输出模块
----------
### 初识 ZMap ###
ZMap 是一个被设计用来针对 IPv4 所有地址或其中大部分实施综合扫描的工具。ZMap 是研究者手中的利器,但在运行 ZMap 时请注意您很有可能正在以每秒140万个包的速度扫描整个 IPv4 地址空间。我们建议用户即使在实施小范围扫描之前,也联系一下本地网络的管理员,并参考我们列举的最佳扫描习惯。
默认情况下ZMap会对于指定端口实施尽可能大速率的TCP SYN扫描。较为保守的情况下对10,000个随机的地址的80端口以10Mbps的速度扫描如下所示
$ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.csv
或者更加简洁地写成:
$ zmap -B 10M -p 80 -n 10000 -o results.csv
ZMap也可用于扫描特定子网或CIDR地址块。例如仅扫描10.0.0.0/8和192.168.0.0/16的80端口运行指令如下
zmap -p 80 -o results.csv 10.0.0.0/8 192.168.0.0/16
如果扫描进行的顺利ZMap会每秒输出类似以下内容的状态更新
0% (1h51m left); send: 28777 562 Kp/s (560 Kp/s avg); recv: 1192 248 p/s (231 p/s avg); hits: 0.04%
0% (1h51m left); send: 34320 554 Kp/s (559 Kp/s avg); recv: 1442 249 p/s (234 p/s avg); hits: 0.04%
0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%
这些更新信息提供了扫描的即时状态并表示成:完成进度% (剩余时间); send: 发出包的数量 即时速率 (平均发送速率); recv: 接收包的数量 接收率 (平均接收率); hits: 成功率
如果您不知道您所在网络支持的扫描速率,您可能要尝试不同的扫描速率和带宽限制直到扫描效果开始下降,借此找出当前网络能够支持的最快速度。
默认情况下ZMap会输出不同IP地址的列表例如SYN ACK数据包的情况像下面这样。还有几种附加的格式JSON和Redis作为其输出结果以及生成程序可解析的扫描统计选项。 同样,可以指定附加的输出字段并使用输出过滤来过滤输出的结果。
115.237.116.119
23.9.117.80
207.118.204.141
217.120.143.111
50.195.22.82
我们强烈建议您使用黑名单文件,以排除预留的/未分配的IP地址空间组播地址RFC1918以及网络中需要排除在您扫描之外的地址。默认情况下ZMap将采用位于 `/etc/zmap/blacklist.conf`的这个简单的黑名单文件中所包含的预留和未分配地址。如果您需要某些特定设置比如每次运行ZMap时的最大带宽或黑名单文件您可以在文件`/etc/zmap/zmap.conf`中指定或使用自定义配置文件。
如果您正试图解决扫描的相关问题,有几个选项可以帮助您调试。首先,您可以通过添加 `--dryrun` 实施预扫,以此来分析包可能会发送到网络的何处。此外,还可以通过设置 `--verbosity=n` 来更改日志详细程度。
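例如,下面这条命令只打印将要发送的包而不真正发包,同时调高了日志详细程度(参数值仅作示意):

    $ zmap -p 80 -n 100 --dryrun --verbosity=4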
----------
### 最佳扫描习惯 ###
我们为针对互联网进行扫描的研究者提供了一些建议,以此来引导养成良好的互联网合作氛围
- 密切协同本地的网络管理员,以减少风险和调查
- 确认扫描不会使本地网络或上游供应商瘫痪
- 标记出在扫描中呈良性的网页和DNS条目的源地址
- 明确注明扫描中所有连接的目的和范围
- 提供一个简单的退出方法并及时响应请求
- 实施扫描时,不使用比研究对象需求更大的扫描范围或更快的扫描频率
- 如果可以通过时间或源地址来传播扫描流量
即使不声明,使用扫描的研究者也应该避免利用漏洞或访问受保护的资源,并遵守其辖区内任何特殊的法律规定。
----------
### 命令行参数 ###
#### 通用选项 ####
这些选项是实施简单扫描时最常用的选项。我们注意到某些选项取决于所使用的探测模块或输出模块在实施ICMP Echo扫描时是不需要使用目的端口的
**-p, --target-port=port**
用来扫描的TCP端口号例如443
**-o, --output-file=name**
使用标准输出将结果写入该文件。
**-b, --blacklist-file=path**
文件中被排除的子网使用CIDR表示法如192.168.0.0/16一个一行。建议您使用此方法排除RFC 1918地址组播地址IANA预留空间等IANA专用地址。在conf/blacklist.example中提供了一个以此为目的示例黑名单文件。
#### 扫描选项 ####
**-n, --max-targets=n**
限制探测目标的数量。后面跟的可以是一个数字(例如'-n 1000`)或百分比(例如,`-n 0.1`)当然都是针对可扫描地址空间而言的(不包括黑名单)
**-N, --max-results=n**
收到多少结果后退出
**-t, --max-runtime=secs**
限制发送报文的时间
**-r, --rate=pps**
设置传输速率,以包/秒为单位
**-B, --bandwidth=bps**
以比特/秒设置传输速率支持使用后缀GM或K如`-B 10M`就是速度10 mbps的。设置会覆盖`--rate`。
**-c, --cooldown-time=secs**
发送完成后多久继续接收(默认值= 8
**-e, --seed=n**
地址排序种子。如果要用多个ZMap以相同的顺序扫描地址那么就可以使用这个参数。
**--shards=n**
将扫描分成 n 个分片,使其可以在多个 ZMap 实例中执行(默认值= 1。启用分片时`--seed`参数是必需的。
**--shard=n**
选择扫描的分片(默认值= 0。n的范围在[0N)其中N为碎片的总数。启用分片时`--seed`参数是必需的。
**-T, --sender-threads=n**
用于发送数据包的线程数(默认值= 1
**-P, --probes=n**
发送到每个IP的探测数默认值= 1
**-d, --dryrun**
用标准输出打印出每个包,而不是将其发送(用于调试)
#### 网络选项 ####
**-s, --source-port=port|range**
发送数据包的源端口
**-S, --source-ip=ip|range**
发送数据包的源地址。可以仅仅是一个IP也可以是一个范围10.0.0.1-10.0.0.9
**-G, --gateway-mac=addr**
数据包发送到的网关MAC地址用以防止自动检测不工作的情况
**-i, --interface=name**
使用的网络接口
#### 探测选项 ####
ZMap允许用户指定并添加自己所需要探测的模块。 探测模块的职责就是生成主机回复的响应包。
**--list-probe-modules**
列出可用探测模块如tcp_synscan
**-M, --probe-module=name**
选择探测模块(默认值= tcp_synscan
**--probe-args=args**
向模块传递参数
**--list-output-fields**
列出所选探测模块可输出的字段
#### 输出选项 ####
ZMap允许用户选择指定的输出模块。输出模块负责处理由探测模块返回的字段并将它们交给用户。用户可以指定输出的范围并过滤相应字段。
**--list-output-modules**
列出可用输出模块如csv
**-O, --output-module=name**
选择输出模块默认值为csv
**--output-args=args**
传递给输出模块的参数
**-f, --output-fields=fields**
输出列表,以逗号分割
**--output-filter**
通过指定相应的探测模块来过滤输出字段
#### 附加选项 ####
**-C, --config=filename**
加载配置文件,可以指定其他路径。
**-q, --quiet**
不再是每秒刷新输出
**-g, --summary**
在扫描结束后打印配置和结果汇总信息
**-v, --verbosity=n**
日志详细程度0-5默认值= 3
**-h, --help**
打印帮助并退出
**-V, --version**
打印版本并退出
----------
### 附加信息 ###
#### TCP SYN 扫描 ####
在执行TCP SYN扫描时ZMap需要指定一个目标端口和以供扫描的源端口范围。
**-p, --target-port=port**
扫描的TCP端口例如 443
**-s, --source-port=port|range**
发送扫描数据包的源端口(例如 40000-50000
**警示!** ZMap 依靠 Linux 内核用 RST 包应答 SYN/ACK 包以关闭扫描中打开的连接。ZMap 是在 Ethernet 层完成包的发送的这样做是为了减少跟踪打开的TCP连接和路由寻路带来的内核开销。因此如果您有跟踪连接建立的防火墙规则如netfilter的规则类似于`-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`将阻止SYN/ACK包到达内核。这不会妨碍到ZMap记录应答但它会阻止RST包被送回最终连接会在超时后断开。我们强烈建议您在执行ZMap时选择一组主机上未使用且防火墙允许访问的端口加在`-s`后(如 `-s '50000-60000' ` )。
#### ICMP Echo 请求扫描 ####
虽然在默认情况下ZMap执行的是TCP SYN扫描但它也支持使用ICMP echo请求扫描。在这种扫描方式下ICMP echo请求包被发送到每个主机并以收到ICMP 应答包作为答复。实施ICMP扫描可以通过选择icmp_echoscan扫描模块来执行如下
$ zmap --probe-module=icmp_echoscan
#### UDP 数据报扫描 ####
ZMap 还额外支持 UDP 探测它会发出任意UDP数据报给每个主机并能接收UDP或ICMP不可达的应答。ZMap 支持通过 --probe-args 命令行选项选择四种不同的 UDP payload 方式text用于命令行指定的可打印 payloadhex用于命令行指定的十六进制 payloadfile用于从外部文件中读取 payload以及 template用于需要动态生成字段的 payload。为了得到 UDP 响应,请使用 -f 参数确保您指定的“data”字段处于输出范围。
下面的例子将发送两个字节'ST'即PC的'status'请求到UDP端口5632。
$ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -
下面的例子将发送字节“0X02”即SQL服务器的 'client broadcast'请求到UDP端口1434。
$ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -
下面的例子将发送一个NetBIOS状态请求到UDP端口137。使用一个ZMap自带的payload文件。
$ zmap -M udp -p 1434 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -
下面的例子将发送SIP的'OPTIONS'请求到UDP端口5060。使用附ZMap自带的模板文件。
$ zmap -M udp -p 1434 --probe-args=file:sip_options.tpl -N 100 -f saddr,data -o -
UDP payload 模板仍处于实验阶段。当您使用一个以上的发送线程(-T时可能会遇到崩溃而且与静态 payload 相比性能会明显降低。模板仅仅是一个 payload 文件,其中用 ${} 封装了一个或多个需要动态替换的字段。某些协议特别是SIP需要 payload 来反射包中的源和目的地址其他协议如端口映射和DNS则包含应当随每次请求随机生成的字段否则在ZMap 扫描多宿主系统时会报错。
以下的payload模板将发送SIP OPTIONS请求到每一个目的地
OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
From: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT};tag=${RAND_DIGIT=8}
To: sip:${RAND_ALPHA=8}@${DADDR}
Call-ID: ${RAND_DIGIT=10}@${SADDR}
CSeq: 1 OPTIONS
Contact: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT}
Content-Length: 0
Max-Forwards: 20
User-Agent: ${RAND_ALPHA=8}
Accept: text/plain
就像在上面的例子中展示的那样对于大多数正常的SIP实现会在每行行末添加 \r\n并且在请求的末尾一定包含 \r\n\r\n。ZMap 的 examples/udp-payloads 目录下有一个可以直接使用的例子sip_options.tpl)。
下面的字段正在如今的模板中实施:
- **SADDR**: 源IP地址的点分十进制格式
- **SADDR_N**: 源IP地址的网络字节序格式
- **DADDR**: 目的IP地址的点分十进制格式
- **DADDR_N**: 目的IP地址的网络字节序格式
- **SPORT**: 源端口的ascii格式
- **SPORT_N**: 源端口的网络字节序格式
- **DPORT**: 目的端口的ascii格式
- **DPORT_N**: 目的端口的网络字节序格式
- **RAND_BYTE**: 随机字节(0-255),长度由=(长度) 参数决定
- **RAND_DIGIT**: 随机数字0-9长度由=(长度) 参数决定
- **RAND_ALPHA**: 随机大写字母A-Z长度由=(长度) 参数决定
- **RAND_ALPHANUM**: 随机大写字母A-Z和随机数字0-9长度由=(长度) 参数决定
### 配置文件 ###
ZMap支持使用配置文件代替在命令行上指定所有的需求选项。配置中可以通过每行指定一个长名称的选项和对应的值来创建
interface "eth1"
source-ip 1.1.1.4-1.1.1.8
gateway-mac b4:23:f9:28:fa:2d # upstream gateway
cooldown-time 300 # seconds
blacklist-file /etc/zmap/blacklist.conf
output-file ~/zmap-output
quiet
summary
然后ZMap就可以按照配置文件和一些必要的附加参数运行了
$ zmap --config=~/.zmap.conf --target-port=443
### 详细 ###
ZMap可以在屏幕上生成多种类型的输出。默认情况下Zmap将每隔1秒打印出相似的基本进度信息。可以通过设置`--quiet`来禁用。
0:01 12%; send: 10000 done (15.1 Kp/s avg); recv: 144 143 p/s (141 p/s avg); hits: 1.44%
ZMap同样也可以根据扫描配置打印如下消息可以通过'--verbosity`参数加以控制。
Aug 11 16:16:12.813 [INFO] zmap: started
Aug 11 16:16:12.817 [DEBUG] zmap: no interface provided. will use eth0
Aug 11 16:17:03.971 [DEBUG] cyclic: primitive root: 3489180582
Aug 11 16:17:03.971 [DEBUG] cyclic: starting point: 46588
Aug 11 16:17:03.975 [DEBUG] blacklist: 3717595507 addresses allowed to be scanned
Aug 11 16:17:03.975 [DEBUG] send: will send from 1 address on 28233 source ports
Aug 11 16:17:03.975 [DEBUG] send: using bandwidth 10000000 bits/s, rate set to 14880 pkt/s
Aug 11 16:17:03.985 [DEBUG] recv: thread started
ZMap还支持在扫描之后打印出一个的可grep的汇总信息类似于下面这样可以通过调用`--summary`来实现。
cnf target-port 443
cnf source-port-range-begin 32768
cnf source-port-range-end 61000
cnf source-addr-range-begin 1.1.1.4
cnf source-addr-range-end 1.1.1.8
cnf maximum-packets 4294967295
cnf maximum-runtime 0
cnf permutation-seed 0
cnf cooldown-period 300
cnf send-interface eth1
cnf rate 45000
env nprocessors 16
exc send-start-time Fri Jan 18 01:47:35 2013
exc send-end-time Sat Jan 19 00:47:07 2013
exc recv-start-time Fri Jan 18 01:47:35 2013
exc recv-end-time Sat Jan 19 00:52:07 2013
exc sent 3722335150
exc blacklisted 572632145
exc first-scanned 1318129262
exc hit-rate 0.874102
exc synack-received-unique 32537000
exc synack-received-total 36689941
exc synack-cooldown-received-unique 193
exc synack-cooldown-received-total 1543
exc rst-received-unique 141901021
exc rst-received-total 166779002
adv source-port-secret 37952
adv permutation-gen 4215763218
### 结果输出 ###
ZMap可以通过**输出模块**生成不同格式的结果。默认情况下ZMap只支持**csv**的输出,但是可以通过编译支持**redis**和**json** 。可以使用**输出过滤**来过滤这些发送到输出模块上的结果。输出模块的范围由用户指定。默认情况如果没有指定输出文件ZMap将以csv格式返回结果ZMap不会产生特定结果。也可以编写自己的输出模块;请参阅编写输出模块。
**-o, --output-file=p**
输出写入文件地址
**-O, --output-module=p**
调用自定义输出模块
**-f, --output-fields=p**
输出以逗号分隔各字段的列表
**--output-filter=filter**
在给定的探测区域实施输出过滤
**--list-output-modules**
列出可用输出模块
**--list-output-fields**
列出可用的给定探测区域
#### 输出字段 ####
ZMap有很多区域它可以基于IP地址输出。这些区域可以通过在给定探测模块上运行`--list-output-fields`来查看。
$ zmap --probe-module="tcp_synscan" --list-output-fields
saddr string: 应答包中的源IP地址
saddr-raw int: 网络提供的整形形式的源IP地址
daddr string: 应答包中的目的IP地址
daddr-raw int: 网络提供的整形形式的目的IP地址
ipid int: 应答包中的IP识别号
ttl int: 应答包中的ttl存活时间
sport int: TCP 源端口
dport int: TCP 目的端口
seqnum int: TCP 序列号
acknum int: TCP Ack号
window int: TCP 窗口
classification string: 包类型
success int: 是应答包成功
repeat int: 是否是来自主机的重复响应
cooldown int: 是否是在冷却时间内收到的响应
timestamp-str string: 响应抵达时的时间戳使用ISO8601格式
timestamp-ts int: 响应抵达时的时间戳使用纪元开始的秒数
timestamp-us int: 时间戳的微秒部分(例如 从'timestamp-ts'的几微秒)
可以通过使用`--output-fields=fields`或`-f`来选择选择输出字段,任意组合的输出字段可以被指定为逗号分隔的列表。例如:
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
#### 过滤输出 ####
在传到输出模块之前探测模块生成的结果可以先过滤。过滤被实施在探测模块的输出字段上。过滤使用简单的过滤语法写成类似于SQL通过ZMap的**--output-filter**选项来实施。输出过滤通常用于过滤掉重复的结果或仅传输成功的响应到输出模块。
过滤表达式的形式为`<字段名> <操作> <>`。`<>`的类型必须是一个字符串或一串无符号整数并且匹配`<字段名>`类型。对于整数比较有效的操作是`= !=, <, >, <=, >=`。字符串比较的操作是=!=。`--list-output-fields`会打印那些可供探测模块选择的字段和类型,然后退出。
复合型的过滤操作,可以通过使用`&&`(逻辑与)和`||`(逻辑或)这样的运算符来组合出特殊的过滤操作。
**示例**
书写一则过滤仅显示成功,过滤重复应答
--output-filter="success = 1 && repeat = 0"
过滤出包中含RST并且TTL大于10的分类或者包中含SYNACK的分类
--output-filter="(classification = rst && ttl > 10) || classification = synack"
#### CSV ####
csv模块将会生成以逗号分隔各输出请求字段的文件。例如以下的指令将生成下面的CSV至名为`output.csv`的文件。
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
----------
响应, 源地址, 目的地址, 源端口, 目的端口, 序列号, 应答, 是否是冷却模式, 是否重复, 时间戳
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
rst, 148.8.49.150, 10.0.0.9, 80, 41672, 0, 1135824975, 0, 0,2013-08-15 18:55:47.692
rst, 50.165.166.206, 10.0.0.9, 80, 38858, 0, 535206863, 0, 0,2013-08-15 18:55:47.694
rst, 65.55.203.135, 10.0.0.9, 80, 50008, 0, 4071709905, 0, 0,2013-08-15 18:55:47.700
synack, 50.57.166.186, 10.0.0.9, 80, 60650, 2813653162, 993314545, 0, 0,2013-08-15 18:55:47.704
synack, 152.75.208.114, 10.0.0.9, 80, 52498, 460383682, 4040786862, 0, 0,2013-08-15 18:55:47.707
synack, 23.72.138.74, 10.0.0.9, 80, 33480, 810393698, 486476355, 0, 0,2013-08-15 18:55:47.710
#### Redis ####
Redis 输出模块允许将地址添加到一个 Redis 队列中,而不是保存到文件,这使得 ZMap 可以与后续的处理工具结合使用。
**注意!** ZMap默认不会编译Redis功能。如果您想要将Redis功能编译进ZMap源码中可以在CMake的时候加上`-DWITH_REDIS=ON`。
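也就是说,编译时大致像下面这样(仅作示意,假设已位于 ZMap 源码目录中):

    $ cmake -DWITH_REDIS=ON .
    $ make
    # make install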
### 黑名单和白名单 ###
ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名单和白名单参数他将会扫描所有的IPv4地址包括本地的保留的以及组播地址。如果指定了黑名单文件那么在黑名单中的网络前缀将不再扫描如果指定了白名单文件只有那些网络前缀在白名单内的才会扫描。白名单和黑名单文件可以协同使用黑名单运用于白名单上例如如果您在白名单中指定了10.0.0.0/8并在黑名单中指定了10.1.0.0/16那么10.1.0.0/16将不会扫描。白名单和黑名单文件可以在命令行中指定如下所示
**-b, --blacklist-file=path**
文件用于记录黑名单子网以CIDR无类域间路由的表示法例如192.168.0.0/16
**-w, --whitelist-file=path**
文件用于记录限制扫描的子网以CIDR的表示法例如192.168.0.0/16
黑名单文件的每行都需要以CIDR的表示格式书写一个单一的网络前缀。允许使用`#`加以备注。例如:
# IANA英特网编号管理局记录的用于特殊目的的IPv4地址
# http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
# 更新于2013-05-22
0.0.0.0/8 # RFC1122: 网络中的所有主机
10.0.0.0/8 # RFC1918: 私有地址
100.64.0.0/10 # RFC6598: 共享地址空间
127.0.0.0/8 # RFC1122: 回环地址
169.254.0.0/16 # RFC3927: 本地链路地址
172.16.0.0/12 # RFC1918: 私有地址
192.0.0.0/24 # RFC6890: IETF协议预留
192.0.2.0/24 # RFC5737: 测试地址
192.88.99.0/24 # RFC3068: IPv6转换到IPv4的任意播
192.168.0.0/16 # RFC1918: 私有地址
192.18.0.0/15 # RFC2544: 检测地址
198.51.100.0/24 # RFC5737: 测试地址
203.0.113.0/24 # RFC5737: 测试地址
240.0.0.0/4 # RFC1112: 预留地址
255.255.255.255/32 # RFC0919: 广播地址
# IANA记录的用于组播的地址空间
# http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
# 更新于2013-06-25
224.0.0.0/4 # RFC5771: 组播/预留地址ed
如果您只是想扫描因特网中随机的一部分地址,使用采样检出,来代替使用白名单和黑名单。
**注意!**ZMap默认设置使用`/etc/zmap/blacklist.conf`作为黑名单文件其中包含有本地的地址空间和预留的IP空间。通过编辑`/etc/zmap/zmap.conf`可以改变默认的配置。
### 速度限制与抽样 ###
默认情况下ZMap将以您当前网络所能支持的最快速度扫描。以我们对于常用硬件的经验这普遍是理论上Gbit以太网速度的95-98%这可能比您的上游提供商可处理的速度还要快。ZMap是不会自动的根据您的上游提供商来调整发送速率的。您可能需要手动的调整发送速率来减少丢包和错误结果。
**-r, --rate=pps**
设置最大发送速率以包/秒为单位
**-B, --bandwidth=bps**
设置发送速率以比特/秒(支持G,M和K后缀)。也同样作用于--rate的参数。
ZMap同样支持对IPv4地址空间进行指定最大目标数和/或最长运行时间的随机采样。由于针对主机的扫描是通过随机排序生成的实例限制扫描的主机个数为N就会随机抽选N个主机。命令选项如下
**-n, --max-targets=n**
探测目标上限数量
**-N, --max-results=n**
结果上限数量(累积收到这么多结果后退出)
**-t, --max-runtime=s**
发送数据包时间长度上限(以秒为单位)
**-s, --seed=n**
种子用以选择地址的排列方式。使用不同ZMap执行扫描操作时将种子设成相同的值可以保证相同的扫描顺序。
举个例子,如果您想要多次扫描同样的一百万个互联网主机,您可以设定排序种子和扫描主机的上限数量,大致如下所示:
zmap -p 443 -s 3 -n 1000000 -o results
为了确定哪一百万台主机将要被扫描,您可以执行预扫:只打印数据包而不真正发送。
zmap -p 443 -s 3 -n 1000000 --dryrun | grep daddr
| awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
### 发送多个数据包 ###
ZMap 支持向每个主机发送多个探测。增加这个数量既会增加扫描时间,又会增加到达的主机数量。然而,我们发现,扫描时间的增加(每个额外探测约增加 100%远远大于到达主机数量的增加每个额外探测约增加 1%)。
**-P, --probes=n**
向每个IP发出的独立扫描个数默认值=1
----------
### 示例应用程序 ###
ZMap 专为向大量主机发起连接并寻找那些给出正确响应的主机而设计。然而我们意识到许多用户需要执行一些后续处理如执行应用程序级别的握手。例如用户在80端口实施 TCP SYN 扫描可能只是想要实施一个简单的 GET 请求还有用户扫描443端口可能是对 TLS 握手如何完成感兴趣。
#### Banner获取 ####
我们随 ZMap 收录了一个示例程序 banner-grab它可以让用户从处于监听状态的 TCP 服务器上接收消息。banner-grab 连接到服务上,(可选地)发送一个消息,然后打印出收到的第一个消息。这个工具可以用来获取 banner例如 HTTP 服务对特定指令的回复、telnet 登录提示或 SSH 服务的版本字符串。
这个例子寻找了1000个监听80端口的服务器并向每个发送一个简单的GET请求存储他们的64位编码响应至http-banners.out
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > out
如果想知道更多使用`banner-grab`的细节,可以参考`examples/banner-grab`中的README文件。
**注意!** ZMap 和 banner-grab如例子中那样同时运行时可能会显著影响对方的性能和精度。确保不让 ZMap 占满 banner-grab-tcp 的并发连接,不然 banner-grab 将会来不及读取标准输入导致阻塞ZMap 写入标准输出)。我们推荐使用较慢扫描速率的 ZMap同时将 banner-grab-tcp 的并发性提升至 3000 以内(注意:并发连接数 >1000 需要您使用 `ulimit -SHn 100000` 和 `ulimit -HHn 100000` 来增加每个进程的最大文件描述符数量。当然这些参数取决于您服务器的性能和连接成功率hit-rate我们鼓励开发者在运行大型扫描之前先进行小样本的试验。
#### 建立套接字 ####
我们也收录了另一种形式的 banner-grab就是 forge-socket它会重复利用服务器发出的 SYN-ACK 连接并最终取得 banner。在 `banner-grab-tcp` 中ZMap 向每个服务器发送一个 SYN并监听服务器发回的带有 SYN+ACK 的应答。运行 ZMap 主机的内核接受应答后会发送 RST因为没有处于活动状态的连接与该包关联。程序 banner-grab 必须在这之后创建一个新的 TCP 连接,以从服务器获取数据。
在forge-socket中我们以同样的名字利用内核模块这使我们可以创建任意参数的TCP连接。这样可以抑制内核的RST包并且通过创建套接字取代它可以重用SYN+ACK的参数通过这个套接字收发数据和我们平时使用的连接套接字并没有什么不同。
要使用 forge-socket您需要 forge-socket 内核模块,它可以从 [github][1] 上获得。您需要将 `git@github.com:ewust/forge_socket.git` 克隆git clone至 ZMap 源码根目录,然后 cd 进入 forge_socket 目录并运行 make。最后以 root 身份运行 `insmod forge_socket.ko` 来安装内核模块。
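把上面这段话整理成命令,大致如下(仅作示意,请在 ZMap 源码根目录中执行):

    $ git clone git@github.com:ewust/forge_socket.git
    $ cd forge_socket
    $ make
    # insmod forge_socket.ko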
您也需要告知内核不要发送RST包。一个简单的在全系统禁用RST包的方法是**iptables**。以root身份运行`iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`即可,当然您也可以加上一项--dport X将禁用局限于所扫描的端口X上。扫描完成后移除这项设置以root身份运行`iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`即可。
现在应该可以建立forge-socket的ZMap示例程序了。运行需要使用**extended_file**ZMap输出模块
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
./forge-socket -c 500 -d ./http-req > ./http-banners.out
详细内容可以参考`examples/forge-socket`目录下的README。
----------
### 编写探测和输出模块 ###
ZMap可以通过**probe modules**扩展支持不同类型的扫描,通过**output modules**追加不同类型的输出结果。注册过的探测和输出模块可以在命令行中列出:
**--list-probe-modules**
列出安装过的探测模块
**--list-output-modules**
列出安装过的输出模块
#### 输出模块 ####
ZMap的输出和输出后处理可以通过执行和注册扫描的**output modules**来扩展。输出模块在接收每一个应答包时都会收到一个回调。然而提供的默认模块仅提供简单的输出这些模块同样支持扩展扫描后处理例如重复跟踪或输出AS号码来代替IP地址
通过定义一个新的 output_module 结构体来创建输出模块,并在[output_modules.c][2]中注册:
typedef struct output_module {
const char *name; // 在命令行如何引出输出模块
unsigned update_interval; // 以秒为单位的更新间隔
output_init_cb init; // 在扫描初始化的时候调用
output_update_cb start; // 在开始的扫描的时候调用
output_update_cb update; // 每次更新间隔调用,秒为单位
output_update_cb close; // 扫描终止后调用
output_packet_cb process_ip; // 接收到应答时调用
        const char *helptext; // 会在 --list-output-modules 时打印在屏幕上
} output_module_t;
输出模块必须有名称(用于在命令行中引用该模块),并且通常要实现 `success_ip` 回调,可选地实现 `other_ip` 回调。每个收到的、经由 **probe module** 过滤的应答包都会调用 process_ip 回调。应答不一定会被认定为成功例如它也可能是一个TCP RST。这些回调必须定义成匹配 `output_packet_cb` 定义的函数:
int (*output_packet_cb) (
ipaddr_n_t saddr, // network-order格式的扫描主机IP地址
ipaddr_n_t daddr, // network-order格式的目的IP地址
const char* response_type, // 发送模块的数据包分类
int is_repeat, // {0: 主机的第一个应答, 1: 后续的应答}
int in_cooldown, // {0: 非冷却状态, 1: 扫描处于冷却中}
const u_char* packet, // 指向结构体iphdr中IP包的指针
size_t packet_len // 包的长度以字节为单位
);
输出模块还可以通过注册回调执行在扫描初始化的时候(诸如打开输出文件的任务),扫描开始阶段(诸如记录黑名单的任务),在常规间隔实施(诸如程序升级的任务)在关闭的时候(诸如关掉所有打开的文件描述符。这些回调提供完整的扫描配置入口和实时状态:
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
被定义在[output_modules.h][3]中。在[src/output_modules/module_csv.c][4]中有可用示例。
#### 探测模块 ####
数据包由探测模块构造由此可以创建抽象包并对应答分类。ZMap默认拥有两个扫描模块`tcp_synscan`和`icmp_echoscan`。默认情况下ZMap使用`tcp_synscan`来发送TCP SYN包并对每个主机的并对每个主机的响应分类如打开时收到SYN+ACK或关闭时收到RST。ZMap允许开发者编写自己的ZMap探测模块使用如下的API
任何类型的扫描的实施都需要在`send_module_t`结构体内开发和注册必要的回调:
typedef struct probe_module {
const char *name; // 如何在命令行调用扫描
size_t packet_length; // 探测包有多长(必须是静态的)
const char *pcap_filter; // 对收到的响应实施PCAP过滤
        size_t pcap_snaplen; // libpcap 捕获的最大字节数
uint8_t port_args; // 设为1如果需要使用ZMap的--target-port
// 用户指定
probe_global_init_cb global_initialize; // 在扫描初始化会时被调用一次
probe_thread_init_cb thread_initialize; // 每个包缓存区的线程中被调用一次
probe_make_packet_cb make_packet; // 每个主机更新包的时候被调用一次
probe_validate_packet_cb validate_packet; // 每收到一个包被调用一次,
// 如果包无效返回0
// 非零则覆盖。
probe_print_packet_cb print_packet; // 如果在dry-run模式下被每个包都调用
probe_classify_packet_cb process_packet; // 由区分响应的接收器调用
probe_close_cb close; // 扫描终止后被调用
fielddef_t *fields // 该模块指定的区域的定义
int numfields // 区域的数量
} probe_module_t;
在扫描操作初始化时会调用一次 `global_initialize`,可以用来实施一些必要的全局配置和初始化操作。然而,`global_initialize` 并不能访问包缓冲区,包缓冲区是线程级的。作为代替,`thread_initialize` 会在每个发送线程初始化的时候被调用,提供对缓冲区的访问,可以用来构建探测包以及全局的源和目的值。此回调应用于构建与具体主机无关的包结构,使得每个主机只需更新特定值(如目的地址和校验和)。例如,以太网头部信息在扫描过程中不会变更(校验和除外,它由 NIC 硬件计算),因此可以事先定义以减少扫描时的开销。
对每个被扫描的主机都会调用 `make_packet` 回调,以便 **probe module** 更新该主机对应的值同时提供IP地址、一个不透明的验证字符串和探测序号如下所示。探测模块负责在探测包中尽可能多地放入验证字符串以便即使服务器返回的应答内容为空探测模块也能对其进行验证。例如针对TCP SYN扫描tcp_synscan 探测模块会把验证字符串编码到TCP源端口和序列号中而响应包SYN+ACK的目的端口和确认号中就会包含这些预期的值。
int make_packet(
void *packetbuf, // 包的缓冲区
ipaddr_n_t src_ip, // network-order格式源IP
ipaddr_n_t dst_ip, // network-order格式目的IP
uint32_t *validation, // 探测中的确认字符串
int probe_num // 如果向每个主机发送多重探测,
// 该值为对于主机我们
// 正在实施的探测数目
);
扫描模块也应该定义`pcap_filter`、`validate_packet`和`process_packet`。只有符合PCAP过滤的包才会被扫描。举个例子在一个TCP SYN扫描的情况下我们只想要调查TCP SYN / ACK或RST TCP数据包并利用类似`tcp && tcp[13] & 4 != 0 || tcp[13] == 18`的过滤方法。`validate_packet`函数将会被每个满足PCAP过滤条件的包调用。如果验证返回的值非零将会调用`process_packet`函数,并使用包中被定义成的`fields`字段和数据填充字段集。如下代码为TCP synscan探测模块处理了一个数据包。
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
{
struct iphdr *ip_hdr = (struct iphdr *)&packet[sizeof(struct ethhdr)];
struct tcphdr *tcp = (struct tcphdr*)((char *)ip_hdr
+ (sizeof(struct iphdr)));
fs_add_uint64(fs, "sport", (uint64_t) ntohs(tcp->source));
fs_add_uint64(fs, "dport", (uint64_t) ntohs(tcp->dest));
fs_add_uint64(fs, "seqnum", (uint64_t) ntohl(tcp->seq));
fs_add_uint64(fs, "acknum", (uint64_t) ntohl(tcp->ack_seq));
fs_add_uint64(fs, "window", (uint64_t) ntohs(tcp->window));
if (tcp->rst) { // RST packet
fs_add_string(fs, "classification", (char*) "rst", 0);
fs_add_uint64(fs, "success", 0);
} else { // SYNACK packet
fs_add_string(fs, "classification", (char*) "synack", 0);
fs_add_uint64(fs, "success", 1);
}
}
--------------------------------------------------------------------------------
via: https://zmap.io/documentation.html
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://github.com/ewust/forge_socket/
[2]:https://github.com/zmap/zmap/blob/v1.0.0/src/output_modules/output_modules.c
[3]:https://github.com/zmap/zmap/blob/master/src/output_modules/output_modules.h
[4]:https://github.com/zmap/zmap/blob/master/src/output_modules/module_csv.c

View File

@ -0,0 +1,183 @@
Linux_Logo 输出彩色 ANSI Linux 发行版徽标的命令行工具
================================================================================
linuxlogo 或 linux_logo 是一款在Linux命令行下生成附带系统信息的彩色 ANSI 发行版徽标的工具。
![Linux_Logo 输出彩色 ANSI Linux 发行版徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Linux_Logo.png)
Linux_Logo 输出彩色 ANSI Linux 发行版徽标
这个小工具可以从 /proc 文件系统中获取系统信息并可以显示包括主机发行版在内的其他很多发行版的徽标。
与徽标一同显示的系统信息包括 Linux 内核版本最近一次编译Linux内核的时间处理器/核心数量,速度,制造商,以及哪一代处理器。它还能显示总共的物理内存大小。
值得一提的是screenfetch是一个拥有类似功能的工具它也能显示发行版徽标同时还提供更加详细美观的系统信息。我们之前已经介绍过这个工具你可以参考一下链接
- [ScreenFetch Generates Linux System Information][1]
linux_logo 和 Screenfetch 并不能相提并论。尽管 screenfetch 的输出较为整洁并提供更多细节, linux_logo 则提供了更多的彩色 ANSI 图标, 并且提供了格式化输出的选项。
linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统中,因此需要安装图形界面 X11 或 X 系统。这个软件以 GNU GPL 2.0 协议发布。
本文中,我们将使用以下环境测试 linux_logo 工具。
操作系统 : Debian Jessie
处理器 : i3 / x86_64
### 在 Linux 中安装 Linux Logo工具 ###
**1. linuxlogo软件包 ( 5.11 稳定版) 可通过如下方式使用 apt, yum,或 dnf 在所有发行版中使用默认的软件仓库进行安装**
# apt-get install linux_logo [用于基于 Apt 的系统] 译者注Ubuntu中该软件包名为linuxlogo
# yum install linux_logo [用于基于 Yum 的系统]
# dnf install linux_logo [用于基于 Dnf 的系统]
# dnf install linux_logo.x86_64 [用于 64 位系统]
**2. 装好 linuxlogo 软件包之后,你可以使用命令 `linuxlogo` 来获取你当前使用的发行版的默认徽标。**
# linux_logo
# linuxlogo
![获取默认系统徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Get-Default-OS-Logo.png)
获取默认系统徽标
**3. 使用 `[-a]` 选项可以输出没有颜色的徽标。当在黑白终端里使用 linux_logo 时,这个选项会很有用。**
# linux_logo -a
![黑白 Linux 徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Black-and-White-Linux-Logo.png)
黑白 Linux 徽标
**4. 使用 `[-l]` 选项可以仅输出徽标而不包含系统信息。**
# linux_logo -l
![输出发行版徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Print-Distribution-Logo.png)
输出发行版徽标
**5. `[-u]` 选项可以显示系统运行时间。**
# linux_logo -u
![输出系统运行时间](http://www.tecmint.com/wp-content/uploads/2015/06/Print-System-Uptime.png)
输出系统运行时间
**6. 如果你对系统平均负载感兴趣,可以使用 `[-y]` 选项。你可以同时使用多个选项。**
# linux_logo -y
![输出系统平均负载](http://www.tecmint.com/wp-content/uploads/2015/06/Print-System-Load-Average.png)
输出系统平均负载
如需查看更多选项并获取相关帮助,你可以使用如下命令。
# linux_logo -h
![Linuxlogo 选项及帮助](http://www.tecmint.com/wp-content/uploads/2015/06/linuxlogo-options.png)
Linuxlogo选项及帮助
**7. 此工具内置了很多不同发行版的徽标。你可以使用 `[-L list]` 选项查看在这些徽标的列表。**
# linux_logo -L list
![Linux 徽标列表](http://www.tecmint.com/wp-content/uploads/2015/06/List-of-Linux-Logos.png)
Linux 徽标列表
如果你想输出这个列表中的任意徽标,可以使用 `-L NUM``-L NAME` 来显示想要选中的图标。
- -L NUM 会输出列表中序号为 NUM 的图标 (不推荐).
- -L NAME 会输出列表中名为 NAME 的图标。
例如,如果想要显示 AIX 的徽标,你可以使用如下命令
# linux_logo -L 1
# linux_logo -L aix
![输出 AIX 图标](http://www.tecmint.com/wp-content/uploads/2015/06/Print-AIX-Logo.png)
输出 AIX 图标
**注**: 命令中使用 `-L 1` 是因为 AIX 徽标在列表中的编号是 1而使用 `-L aix` 则是因为 AIX 徽标在列表中的名称为 aix。
同样的,你还可以使用这些选项输出任何图标,以下是一些例子..
# linux_logo -L 27
# linux_logo -L 21
![各种 Linux 徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Various-Linux-Logos.png)
各种 Linux 徽标
你可以通过徽标对应的编号或名字使用任意徽标
### 一些使用 Linux_logo 的建议和提示###
**8. 你可以在登录界面输出你的 Linux 发行版徽标。要输出默认徽标,你可以在 `~/.bashrc` 文件的最后添加以下内容。**
if [ -f /usr/bin/linux_logo ]; then linux_logo; fi
**注**: 如没有` ~/.bashrc` 文件,你需要在当前用户的 home 目录下新建一个。
**9. 在添加以上内容后,你只需要注销并重新登录即可看到你的发行版的默认徽标**
![Print Logo on User Login](http://www.tecmint.com/wp-content/uploads/2015/06/Print-Logo-on-Login.png)
在用户登录时输出徽标
其实你也可以在登录后输出任意图标,只需加入以下内容
if [ -f /usr/bin/linux_logo ]; then linux_logo -L num; fi
**重要**: 不要忘了将 num 替换成你想使用的图标。
**10. 你还可以通过指定徽标文件所在的位置来输出你自己的徽标,如下所示。**
# linux_logo -D /path/to/ASCII/logo
**11. 在远程登录时输出图标。**
# /usr/local/bin/linux_logo > /etc/issue.net
如果你想使用 ASCII 徽标而不是含有颜色的 ANSI 徽标,则使用如下命令
# /usr/local/bin/linux_logo -a > /etc/issue.net
**12. 创建一个 Penguin 端口,即一个用于回应连接的端口。要创建 Penguin 端口,则需在 /etc/services 文件中加入以下内容:**
penguin 4444/tcp penguin
这里的 `4444` 是一个未被任何其他资源使用的空闲端口。你也可以使用其他端口。
你还需要在 /etc/inetd.conf中加入以下内容
penguin stream tcp nowait root /usr/local/bin/linux_logo
并使用以下命令重启 inetd 服务
# killall -HUP inetd
linux_logo 还可以用做启动脚本来愚弄攻击者或对你朋友使用恶作剧。这是一个我经常在我的脚本中用来获取不同发行版输出的好工具。
试过一次后,你就不会忘记的。让我们知道你对这个工具的想法及它对你的作用吧。 不要忘记给评论、点赞或分享!
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
作者:[Avishek Kumar][a]
译者:[KevSJ](https://github.com/KevSJ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/

View File

@ -0,0 +1,221 @@
Autojump 一个高级的cd命令用以快速浏览 Linux 文件系统
================================================================================
对于那些主要通过控制台或终端使用 Linux 命令行来工作的 Linux 用户来说,他们真切地感受到了 Linux 的强大。 然而在 Linux 的分层文件系统中进行浏览有时或许是一件头疼的事,尤其是对于那些新手来说。
现在,有一个用 Python 写的名为 `autojump` 的 Linux 命令行实用程序,它是 Linux [cd][1] 命令的高级版本。
![Autojump 命令](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Command.jpg)
Autojump 浏览 Linux 文件系统的最快方式
这个应用原本由 Joël Schaerer 编写,现在由 +William Ting 维护。
Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行更轻松的目录浏览。与传统的 `cd` 命令相比autojump 能够更加快速地浏览至目的目录。
#### autojump 的特色 ####
- 免费且开源的应用,在 GPL V3 协议下发布。
- 自主学习的应用,从用户的浏览习惯中学习。
- 更快速地浏览。不必包含子目录的名称。
- 对于大多数的标准 Linux 发行版本,能够在软件仓库中下载得到,它们包括 Debian (testing/unstable)、Ubuntu、Mint、Arch、Gentoo、Slackware、CentOS、RedHat 和 Fedora。
- 也能在其他平台中使用,例如 OS X(使用 Homebrew) 和 Windows (通过 Clink 来实现)
- 使用 autojump 你可以跳至任何特定的目录或一个子目录。你还可以打开文件管理器来到达某个目录,并查看你在某个目录中所待时间的统计数据。
#### 前提 ####
- 版本号不低于 2.6 的 Python
### 第 1 步: 做一次全局系统升级 ###
1. 以 **root** 用户的身份,做一次系统更新或升级,以此保证你安装有最新版本的 Python。
    # apt-get update && apt-get upgrade && apt-get dist-upgrade [基于 APT 的系统]
    # yum update && yum upgrade [基于 YUM 的系统]
    # dnf update && dnf upgrade [基于 DNF 的系统]
**注** : 这里特别提醒,在基于 YUM 或 DNF 的系统中,更新和升级执行的是相同的操作,大多数时候它们可以互换使用,这点与基于 APT 的系统不同。
### 第 2 步: 下载和安装 Autojump ###
2. 正如前面所言,在大多数的 Linux 发行版本的软件仓库中, autojump 都可获取到。通过包管理器你就可以安装它。但若你想从源代码开始来安装它,你需要克隆源代码并执行 python 脚本,如下面所示:
#### 从源代码安装 ####
若没有安装 git请安装它。我们需要使用它来克隆 git 仓库。
    # apt-get install git [基于 APT 的系统]
    # yum install git [基于 YUM 的系统]
    # dnf install git [基于 DNF 的系统]
一旦安装完 git以常规用户身份登录然后像下面那样来克隆 autojump
$ git clone git://github.com/joelthelion/autojump.git
接着,使用 `cd` 命令切换到下载目录。
$ cd autojump
然后,赋予安装脚本可执行权限,并以 root 用户身份来运行它。
# chmod 755 install.py
# ./install.py
#### 从软件仓库中安装 ####
3. 假如你不想麻烦,你可以以 **root** 用户身份从软件仓库中直接安装它:
在 Debian, Ubuntu, Mint 及类似系统中安装 autojump :
    # apt-get install autojump
为了在 Fedora, CentOS, RedHat 及类似系统中安装 autojump, 你需要启用 [EPEL 软件仓库][2]。
# yum install epel-release
# yum install autojump
OR
# dnf install autojump
### 第 3 步: 安装后的配置 ###
4. 在 Debian 及其衍生系统 (Ubuntu, Mint,…) 中, 激活 autojump 应用是非常重要的。
为了暂时激活 autojump 应用,即直到你关闭当前会话或打开一个新的会话之前让 autojump 均有效,你需要以常规用户身份运行下面的命令:
    $ source /usr/share/autojump/autojump.sh
为了使得 autojump 在 BASH shell 中永久有效,你需要运行下面的命令。
$ echo '. /usr/share/autojump/autojump.sh' >> ~/.bashrc
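
为了让这个改动在当前会话中立即生效,可以重新加载 `~/.bashrc`

    $ source ~/.bashrc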
### 第 4 步: Autojump 的预测试和使用 ###
5. 如先前所言, autojump 将只跳到先前 `cd` 命令到过的目录。所以在我们开始测试之前,我们要使用 `cd` 切换到一些目录中去,并创建一些目录。下面是我所执行的命令。
$ cd
$ cd
$ cd Desktop/
$ cd
$ cd Documents/
$ cd
$ cd Downloads/
$ cd
$ cd Music/
$ cd
$ cd Pictures/
$ cd
$ cd Public/
$ cd
$ cd Templates
$ cd
$ cd /var/www/
$ cd
$ mkdir autojump-test/
$ cd
$ mkdir autojump-test/a/ && cd autojump-test/a/
$ cd
$ mkdir autojump-test/b/ && cd autojump-test/b/
$ cd
$ mkdir autojump-test/c/ && cd autojump-test/c/
$ cd
现在,我们已经切换到过上面所列的目录,并为了测试创建了一些目录,一切准备就绪,让我们开始吧。
**需要记住的一点** : `j` 是 autojump 的一个封装,你可以用 j 来代替 autojump反之亦然。
6. 使用 -v 选项查看安装的 autojump 的版本。
$ j -v
or
$ autojump -v
![查看 Autojump 的版本](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Autojump-Version.png)
查看 Autojump 的版本
7. 跳到先前到过的目录 /var/www
$ j www
![跳到目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-To-Directory.png)
跳到目录
8. 跳到先前到过的子目录 /home/avi/autojump-test/b而不必键入子目录的全名。
$ jc b
![跳到子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Child-Directory.png)
跳到子目录
9. 使用下面的命令,你就可以从命令行打开一个文件管理器,例如 GNOME Nautilus ,而不是跳到一个目录。
$ jo www
![跳到目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Direcotory.png)
跳到目录
![在文件管理器中打开目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Directory-in-File-Browser.png)
在文件管理器中打开目录
你也可以在一个文件管理器中打开一个子目录。
$ jco c
![打开子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory1.png)
打开子目录
![在文件管理器中打开子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory-in-File-Browser1.png)
在文件管理器中打开子目录
10. 查看每个文件夹的权重key weight以及全部目录的总权重的统计数据。文件夹的权重代表在这个文件夹中所花费的总时间目录的总权重则是列表中目录的数目。
$ j --stat
![查看目录统计数据](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Statistics.png)
查看目录统计数据
**提醒** : autojump 存储其运行日志和错误日志的地方是文件夹 `~/.local/share/autojump/`。千万不要重写这些文件,否则你将失去你所有的统计状态结果。
$ ls -l ~/.local/share/autojump/
![Autojump 的日志](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Logs.png)
Autojump 的日志
11. 假如需要,你只需运行下面的命令就可以查看帮助 :
$ j --help
![Autojump 的帮助和选项](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-help-options.png)
Autojump 的帮助和选项
### 功能需求和已知的冲突 ###
- autojump 只能让你跳到那些你已经用 `cd` 到过的目录。一旦你用 `cd` 切换到一个特定的目录,这个行为就会被记录到 autojump 的数据库中,这样 autojump 才能工作。不管怎样,在你设定了 autojump 后,你不能跳到那些你没有用 `cd` 到过的目录。
- 你不能跳到名称以破折号 (-) 开头的目录。或许你可以考虑阅读我的有关[操作文件或目录][3]的文章,尤其是有关操作那些以 - 或其他特殊字符开头的文件和目录的内容。
- 在 BASH shell 中autojump 通过修改 `$PROMPT_COMMAND` 环境变量来跟踪目录的切换,所以强烈建议不要去重写 `$PROMPT_COMMAND`。若你需要添加其他的命令到现存的 `$PROMPT_COMMAND` 中,请把它追加到 `$PROMPT_COMMAND` 的最后(可以参考下面的示例)。
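
下面是一个只追加、不覆盖 `$PROMPT_COMMAND` 的小示例(这里以追加 `history -a` 为例,该命令只是演示用,可以换成你需要的任何命令):

    $ echo $PROMPT_COMMAND
    $ export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND ;} history -a"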
### 结论: ###
假如你是一个命令行用户, autojump 是你必备的实用程序。它可以简化许多事情。它是一个在命令行中浏览 Linux 目录的绝佳的程序。请自行尝试它,并在下面的评论框中让我知晓你宝贵的反馈。保持联系,保持分享。喜爱并分享,帮助我们更好地传播。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/autojump-a-quickest-way-to-navigate-linux-filesystem/
作者:[Avishek Kumar][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/cd-command-in-linux/
[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[3]:http://www.tecmint.com/manage-linux-filenames-with-special-characters/

View File

@ -0,0 +1,67 @@
在 Linux 中安装 Google 环聊桌面客户端
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/google-hangouts-header-664x374.jpg)
先前,我们已经介绍了如何[在 Linux 中安装 Facebook Messenger][1] 和[WhatsApp 桌面客户端][2]。这些应用都是非官方的应用。今天,我将为你推荐另一款非官方的应用,它就是 [Google 环聊][3]
当然,你可以在 Web 浏览器中使用 Google 环聊,但相比于此,使用桌面客户端会更加有趣。好奇吗?那就跟着我看看如何 **在 Linux 中安装 Google 环聊** 以及如何使用它吧。
### 在 Linux 中安装 Google 环聊 ###
我们将使用一个名为 [yakyak][4] 的开源项目,它是一个针对 LinuxWindows 和 OS X 平台的非官方 Google 环聊客户端。我将向你展示如何在 Ubuntu 中使用 yakyak但我相信在其他的 Linux 发行版本中,你可以使用同样的方法来使用它。在了解如何使用它之前,让我们先看看 yakyak 的主要特点:
- 发送和接受聊天信息
- 创建和更改对话 (重命名, 添加人物)
- 离开或删除对话
- 桌面提醒通知
- 打开或关闭通知
- 针对图片上传,支持拖放,复制粘贴或使用上传按钮
- 与 Hangupsbot 同步聊天室(显示真实的用户头像)
- 展示行内图片
- 历史回放
听起来不错吧,你可以从下面的链接下载到该软件的安装文件:
- [下载 Google 环聊客户端 yakyak][5]
下载的文件是压缩的。解压后,你将看到一个名称类似于 linux-x64 或 linux-x32 的目录,其名称取决于你的系统。进入这个目录,你应该可以看到一个名为 yakyak 的文件。双击这个文件来启动它。
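
如果你更喜欢在终端中操作,下面是解压并启动 yakyak 的一个示例。这里假设压缩包下载到了 ~/Downloads 目录且为 zip 格式(若是 tar.gz 包,请改用 tar 解压),文件名仅为示意,请以实际下载到的文件为准:

    $ cd ~/Downloads
    $ unzip yakyak-*-linux-x64.zip
    $ cd linux-x64
    $ ./yakyak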
![在 Linux 中运行 Google 环聊](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_3.jpeg)
当然,你需要键入你的 Google 账号来认证。
![在 Ubuntu 中设置 Google 环聊](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_2.jpeg)
一旦你通过认证后,你将看到如下的画面,在这里你可以和你的 Google 联系人进行聊天。
![Google_Hangout_Linux_4](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_4.jpeg)
假如你想查看对话的缩略图,你可以选择 `查看 -> 展示对话缩略图`。
![Google 环聊缩略图](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_5.jpeg)
当有新的信息时,你将得到桌面提醒。
![在 Ubuntu 中 Google 环聊的桌面提醒](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_1.jpeg)
### 值得一试吗? ###
我让你尝试一下,并决定 **在 Linux 中安装 Google 环聊客户端** 是否值得。若你想要官方的应用,你可以看看这些 [拥有原生 Linux 客户端的即时消息应用程序][6]。不要忘记分享你在 Linux 中使用 Google 环聊的体验。
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-google-hangouts-linux/
作者:[Abhishek][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/facebook-messenger-linux/
[2]:http://itsfoss.com/whatsapp-linux-desktop/
[3]:http://www.google.com/+/learnmore/hangouts/
[4]:https://github.com/yakyak/yakyak
[5]:https://github.com/yakyak/yakyak
[6]:http://itsfoss.com/best-messaging-apps-linux/

View File

@ -0,0 +1,108 @@
Linux 有问必答:如何在 Linux 中安装兄弟牌打印机
================================================================================
> **提问**: 我有一台兄弟牌 HL-2270DW 激光打印机,我想从我的 Linux 机器上打印文档。我该如何在我的电脑上安装合适的驱动并使用它?
兄弟牌以价格实惠的[紧凑型激光打印机][1]而闻名。你可以用低于 200 美元的价格买到高质量的、支持 WiFi 和双面打印的激光打印机,而且价格还在下降。最棒的是,它们还提供良好的 Linux 支持,因此你可以在 Linux 中下载并安装它们的打印机驱动。我在一年前买了台 [HL-2270DW][2],我对它的性能和可靠性都很满意。
下面是如何在 Linux 中安装和配置兄弟牌打印机驱动。本篇教程中,我会演示安装 HL-2270DW 激光打印机的 USB 驱动。首先,通过 USB 线把你的打印机连接到 Linux 机器上。
### 准备 ###
在准备阶段,进入[兄弟官方支持网站][3],输入你的打印机型号(比如 HL-2270DW进行搜索。
![](https://farm1.staticflickr.com/301/18970034829_6f3a48d817_c.jpg)
进入下面页面后选择你的Linux平台。对于Debian、Ubuntu或者其他衍生版选择“Linux (deb)”。对于Fedora、CentOS或者RHEL选择“Linux (rpm)”。
![](https://farm1.staticflickr.com/380/18535558583_cb43240f8a_c.jpg)
下一页你会找到你打印机的LPR驱动和CUPS包装器驱动。前者是命令行驱动后者允许你通过网页管理和配置你的打印机。尤其是基于CUPS的GUI对本地、远程打印机维护非常有用。建议你安装这两个驱动。点击“Driver Install Tool”下载安装文件。
![](https://farm1.staticflickr.com/329/19130013736_1850b0d61e_c.jpg)
运行安装文件之前你需要在64位的Linux系统上做另外一件事情。
因为兄弟打印机驱动是为32位的Linux系统开发的,因此你需要按照下面的方法安装32位的库。
在早期的 Debian6.0 或更早)或者 Ubuntu11.04 或更早)中,安装下面的包。
$ sudo apt-get install ia32-libs
对于已经引入多架构的新的Debian或者Ubuntu而言你可以安装下面的包
$ sudo apt-get install lib32z1 lib32ncurses5
上面的包代替了ia32-libs包。或者你只需要安装
$ sudo apt-get install lib32stdc++6
如果你使用的是基于Red Hat的Linux你可以安装
$ sudo yum install glibc.i686
### 驱动安装 ###
现在解压下载的驱动文件。
$ gunzip linux-brprinter-installer-2.0.0-1.gz
接下来像下面这样运行安装文件。
$ sudo sh ./linux-brprinter-installer-2.0.0-1
你会被要求输入打印机的型号。输入你打印机的型号比如“HL-2270DW”。
![](https://farm1.staticflickr.com/292/18535599323_1a94f6dae5_b.jpg)
同意 GPL 协议,之后对接下来的问题都接受默认选项即可。
![](https://farm1.staticflickr.com/526/19130014316_5835939501_b.jpg)
现在LPR/CUPS打印机驱动已经安装好了。接下来要配置你的打印机了。
### 打印机配置 ###
我接下来就要通过基于CUPS的网页管理和配置兄弟打印机了。
首先验证CUPS守护进程已经启动。
$ sudo netstat -nap | grep 631
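
如果你的发行版使用 systemd也可以用下面的命令查看 CUPS 服务的状态(这里假设服务名为 cups具体名称以你的系统为准

    $ sudo systemctl status cups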
打开一个浏览器,输入 http://localhost:631。你会看到如下的打印机管理界面。
![](https://farm1.staticflickr.com/324/18968588688_202086fc72_c.jpg)
进入“Administration”选项卡点击打印机选项下的“Manage Printers”。
![](https://farm1.staticflickr.com/484/18533632074_0526cccb86_c.jpg)
你应该能在接下来的页面中看到你的打印机HL-2270DW。点击这个打印机的名称。
在 “Administration” 下拉菜单中选择 “Set As Server Default”。这会将你的打印机设置为系统的默认打印机。
![](https://farm1.staticflickr.com/472/19150412212_b37987c359_c.jpg)
当被要求验证时输入你的Linux登录信息。
![](https://farm1.staticflickr.com/511/18968590168_807e807f73_c.jpg)
现在基础配置已经基本完成了。为了测试打印打开任意一个文档查看程序比如PDF 阅读器)并尝试打印。你会看到 “HL-2270DW” 被列出,并被设置为默认打印机。
![](https://farm4.staticflickr.com/3872/18970034679_6d41d75bf9_c.jpg)
打印机应该可以工作了。你可以通过CUPS的网页看到打印机状态和管理打印机任务。
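
你也可以直接在命令行里用 CUPS 自带的 lpstat 和 lp 命令做一个快速测试。下面假设打印队列的名字就是 HL-2270DW实际名称请以 lpstat 的输出为准:

    $ lpstat -p -d                  # 列出打印机,并显示默认打印机
    $ lp -d HL-2270DW /etc/hosts    # 把一个文本文件发送到指定打印机打印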
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-brother-printer-linux.html
作者:[Dan Nanni][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/brother_printers
[2]:http://xmodulo.com/go/hl_2270dw
[3]:http://support.brother.com/

View File

@ -0,0 +1,38 @@
Linux 答疑--如何在 Ubuntu 15.04 的 GNOME 终端中开启多个标签
================================================================================
> **问**: 我以前可以在我的 Ubuntu 台式机中的 gnome-terminal 中开启多个标签。但升到 Ubuntu 15.04 后,我就无法再在 gnome-terminal 窗口中打开新标签了。要怎样做才能在 Ubuntu 15.04 的 gnome-terminal 中打开标签呢?
在 Ubuntu 14.10 或之前的版本中gnome-terminal 允许你在终端窗口中开启一个新标签或一个终端窗口。但从 Ubuntu 15.04开始gnome-terminal 移除了“新标签”选项。这实际上并不是一个 bug而是一个合并新标签和新窗口的举措。GNOME 3.12 引入了 [单独的“开启终端”选项][1]。开启新终端标签的功能从终端菜单移动到了首选项中。
![](https://farm1.staticflickr.com/562/19286510971_f0abe3e7fb_b.jpg)
### 偏好设置中的开启新标签 ###
要在 Ubuntu 15.04 的 gnome-terminal 中开启新标签,选择“编辑” -> “首选项”,并把“开启新终端:窗口”改为“开启新终端:标签”。
![](https://farm1.staticflickr.com/329/19256530766_ff692b83bc_b.jpg)
如果现在你通过菜单开启新终端,就会显示在当前终端中的一个新标签页中。
![](https://farm4.staticflickr.com/3820/18662051223_3296fde8e4_b.jpg)
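
如果你更习惯命令行,也可以尝试用 gsettings 来修改这个选项。下面的示例假设该设置对应的是 org.gnome.Terminal.Legacy.Settings 下的 new-terminal-mode 键,在不同的 GNOME 终端版本中键名可能略有差异:

    $ gsettings set org.gnome.Terminal.Legacy.Settings new-terminal-mode tab
    $ gsettings get org.gnome.Terminal.Legacy.Settings new-terminal-mode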
### 通过键盘快捷键开启标签 ###
如果你不想更改首选项,你可以按住 <Ctrl> 临时改变设置。比如,在默认情况下,在点击“新终端”的同时按住 <Ctrl>,终端就会在新标签中打开而不是开启新的终端。
另外,你还可以使用键盘快捷键 <Shift+Ctrl+T> 在终端中开启新标签。
在我看来gnome-terminal 此番在 UI 上的改变并非一个进步。比如,你无法再自定义终端中各个标签的标题了。当你在一个终端中打开了多个标签时,这个功能非常有用。而如果所有标签都保持默认标题(并且标题不断变长),你就没法在有限的标题空间里把它们区分开了。希望这个功能能尽早被加回来。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/open-multiple-tabs-gnome-terminal-ubuntu.html
作者:[Dan Nanni][a]
译者:[KevSJ](https://github.com/KevSJ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://worldofgnome.org/opening-a-new-terminal-tabwindow-in-gnome-3-12/

View File

@ -0,0 +1,114 @@
为什么 MySQL 里的 ibdata1 文件不断增长?
================================================================================
![ibdata1 file](https://www.percona.com/blog/wp-content/uploads/2013/08/ibdata1-file.jpg)
我们在 [Percona 支持][1]经常收到关于 MySQL 的 ibdata1 文件的这类问题。
当监控服务器发来一条关于 MySQL 服务器存储空间的报警时,恐慌便开始了:磁盘快要满了。
一番调查后,你意识到大多数磁盘空间被 InnoDB 的共享表空间 ibdata1 占用了。而你已经启用了 [innodb_file_per_table][2],所以问题是:
### ibdata1存了什么 ###
当你启用了 innodb_file_per_table表会被存储在它们各自的表空间里但是共享表空间仍然用于存储其它的 InnoDB 内部数据:
- 数据字典也就是InnoDB表的元数据
- 插入缓冲区insert buffer
- 双写缓冲区
- 撤销日志
在 [Percona 服务器][3]中,上面其中一些数据是可以配置成避免增长过大的。例如,你可以通过 [innodb_ibuf_max_size][4] 设置插入缓冲区的最大大小,或者通过 [innodb_doublewrite_file][5] 把双写缓冲区存储到一个单独的文件中。
在 MySQL 5.6 版中,你也可以创建额外的撤销表空间,这样撤销日志就会存放在它们自己的文件中,而不再存储到 ibdata1 里。详情请查看[文档][6]。
### 是什么引起 ibdata1 迅速增长? ###
当MySQL出现问题通常我们需要执行的第一个命令是
    SHOW ENGINE INNODB STATUS\G
这会展示给我们一些非常有价值的信息。我们先检查**事务TRANSACTIONS**部分,然后会发现下面这样的内容:
---TRANSACTION 36E, ACTIVE 1256288 sec
MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
show engine innodb status
Trx read view will not see trx with id >= 36F, sees < 36F
这是最常见的一个原因:一个 14 天前创建的相当老的事务。它的状态是**活动的**ACTIVE这意味着 InnoDB 已经创建了数据的快照,因此需要在**撤销**日志中维护旧的页面,以保证自事务开始以来数据库的一致性视图。如果你的数据库写负载很大,那就意味着存储了大量的撤销页。
如果你找不到任何长时间运行的事务,你还可以监控 InnoDB 状态输出中的另一个变量:“**History list length历史记录列表长度**”,它显示了等待清除purge的记录数量。这种情况下问题经常是因为清除线程或者老版本中的主线程处理撤销记录的速度跟不上这些记录进来的速度。
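
在 MySQL 5.5 及更高版本中,你还可以直接查询 information_schema 里的 INNODB_TRX 表来找出这些长时间运行的事务,并根据 trx_mysql_thread_id 决定是否用 KILL 杀掉对应的连接。下面的命令仅作示意其中的线程号 42 取自上面示例输出中的 “MySQL thread id 42”

    $ mysql -e "SELECT trx_id, trx_started, trx_mysql_thread_id FROM information_schema.INNODB_TRX ORDER BY trx_started;"
    $ mysql -e "KILL 42;"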
### 我怎么检查 ibdata1 里存储了什么? ###
很不幸MySQL 没有提供查看 ibdata1 共享表空间中存储了什么内容的手段但有两个工具会很有帮助。第一个是马克·卡拉汉Mark Callaghan制作的一个修改版 innochecksum它发布在[这个 bug 报告][7]中。
它相当易于使用:
# ./innochecksum /var/lib/mysql/ibdata1
0 bad checksum
13 FIL_PAGE_INDEX
19272 FIL_PAGE_UNDO_LOG
230 FIL_PAGE_INODE
1 FIL_PAGE_IBUF_FREE_LIST
892 FIL_PAGE_TYPE_ALLOCATED
2 FIL_PAGE_IBUF_BITMAP
195 FIL_PAGE_TYPE_SYS
1 FIL_PAGE_TYPE_TRX_SYS
1 FIL_PAGE_TYPE_FSP_HDR
1 FIL_PAGE_TYPE_XDES
0 FIL_PAGE_TYPE_BLOB
0 FIL_PAGE_TYPE_ZBLOB
0 other
3 max index_id
全部 20608 个页中有 19272 个是撤销日志页。**这占了表空间的 93%**。
第二个检查表空间内容的方式是杰里米·科尔Jeremy Cole制作的 [InnoDB Ruby 工具][8]。它是一个更高级的、用来检查 InnoDB 内部结构的工具。例如,我们可以使用 space-summary 参数来得到每个页面及其数据类型的列表。然后,我们可以用标准的 Unix 工具来统计**撤销日志**页的数量:
# innodb_space -f /var/lib/mysql/ibdata1 space-summary | grep UNDO_LOG | wc -l
19272
尽管在这种特殊情况下 innochecksum 更快、更容易使用,但我还是推荐你使用杰里米的工具,去学习更多关于 InnoDB 数据分布及其内部结构的知识。
好,现在我们知道问题所在。下一个问题:
### 我能怎么解决问题? ###
这个问题的答案很简单。如果这个事务还能提交,就提交它吧。如果不能,你就必须杀掉它来启动回滚过程。那会停止 ibdata1 的增长,但很显然,你的软件存在 bug或者哪里出了错。既然你已经知道了如何鉴别问题所在接下来就需要用你自己的调试工具或者通用查询日志general query log找出是谁、或者是什么引起了问题。
如果问题是发生在清除线程上,解决方法通常是升级到新版本,新版本中使用一个独立的清除线程来替代主线程。更多信息请查看[文档][9]。
### 有什么方法恢复已使用的空间么? ###
没有目前不可能有一个容易并且快速的方法。InnoDB表空间从不收缩...见[10年老漏洞报告][10]最新更新自詹姆斯(谢谢):
当你删除一些行这个页被标为已删除稍后重用但是这个空间从不会被恢复。只有一种方式来启动数据库使用新的ibdata1。做这个你应该需要使用mysqldump做一个逻辑全备份。然后停止MySQL并删除所有数据库、ib_logfile*、ibdata1*文件。当你再启动MySQL的时候将会创建一个新的共享表空间。然后恢复逻辑仓库。
### 总结 ###
当 ibdata1 文件增长太快时通常是由 MySQL 里长时间运行的、被遗忘的事务引起的。尽快解决这个问题(提交或者杀掉事务),因为若不经历痛苦而缓慢的 mysqldump 过程,你就无法回收已经浪费掉的磁盘空间。
另外,非常推荐对数据库进行监控以避免这类问题。我们的 [MySQL 监控插件][11]中包括一个 Nagios 脚本,如果发现了运行时间太长的事务,它可以提醒你。
--------------------------------------------------------------------------------
via: https://www.percona.com/blog/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/
作者:[Miguel Angel Nieto][a]
译者:[wyangsun](https://github.com/wyangsun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.percona.com/blog/author/miguelangelnieto/
[1]:https://www.percona.com/products/mysql-support
[2]:http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
[3]:https://www.percona.com/software/percona-server
[4]:https://www.percona.com/doc/percona-server/5.5/scalability/innodb_insert_buffer.html#innodb_ibuf_max_size
[5]:https://www.percona.com/doc/percona-server/5.5/performance/innodb_doublewrite_path.html?id=percona-server:features:percona_innodb_doublewrite_path#innodb_doublewrite_file
[6]:http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html#innodb-undo-tablespace
[7]:http://bugs.mysql.com/bug.php?id=57611
[8]:https://github.com/jeremycole/innodb_ruby
[9]:http://dev.mysql.com/doc/innodb/1.1/en/innodb-improved-purge-scheduling.html
[10]:http://bugs.mysql.com/bug.php?id=1341
[11]:https://www.percona.com/software/percona-monitoring-plugins