Merge pull request #14 from LCTT/master

Update 20150401
martin qi 2015-04-01 23:27:40 +08:00
commit d7f86a392a
79 changed files with 1156 additions and 408 deletions

View File

@ -1,10 +1,10 @@
Fixing the "Cannot Empty Trash" Problem in Ubuntu 14.04
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/empty-the-trash.jpg)
### The Problem ###
**I ran into a problem where I could not empty the trash in Ubuntu 14.04.** I right-clicked the trash icon and chose Empty Trash, just as I always do. I saw a progress bar showing files being deleted for a while, but then it stalled and the Nautilus file manager stopped responding as well. I had to kill it from the terminal.
This is painful, because the files are still sitting in the trash, and every time I tried to empty it again the window froze.
@ -18,7 +18,7 @@
Pay attention to what you type here: you are running a delete command with superuser privileges. I trust you will not delete other files or directories by mistake.
The command above deletes all files in the trash directory; in other words, it empties the trash from the command line. After running it, you will see that the trash is empty, and once all the files have been cleared out you should no longer see Nautilus crash.
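The delete command this hunk refers to is not shown in the excerpt; on a default GNOME/Ubuntu setup it is presumably something along these lines (a sketch, trash path assumed):
$ sudo rm -rf ~/.local/share/Trash/*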
### Did It Work for You? ###
@ -30,7 +30,7 @@ via: http://itsfoss.com/fix-empty-trash-ubuntu/
Author: [Abhishek][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).

View File

@ -15,13 +15,13 @@ MariaDB is a fork developed by the MySQL community and an enhanced replacement for it. It
Now, let's migrate to MariaDB!
Let's create a sample database named **linoxidedb**, **for testing purposes**.
Log in to MySQL as the root user with the following command:
$ mysql -u root -p
After entering the MySQL root user's password, you will be at the **mysql command prompt**.
**Create the test database:**
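The creation commands themselves fall outside this hunk; they are presumably along these lines (a sketch, database name taken from the article):
mysql> CREATE DATABASE linoxidedb;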
@ -54,7 +54,8 @@ MariaDB is a fork developed by the MySQL community and an enhanced replacement for it. It
$ mysqldump: Error: Binlogging on server not active
![](http://blog.linoxide.com/wp-content/uploads/2015/01/mysqldump-error.png)
*mysqldump error*
To fix this error, we need to make a small change to the **my.cnf** file.
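The exact change only appears in the screenshot below; it presumably amounts to enabling binary logging with something like the following in the [mysqld] section (a sketch, log basename assumed):
[mysqld]
log-bin=mysql-bin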
@ -68,7 +69,7 @@ mysqldump error
![configuring my.cnf](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-my.cnf_.png)
Good. After saving and closing the file, we need to restart the MySQL service. Run the following command to restart it:
$ sudo /etc/init.d/mysql restart
@ -77,17 +78,18 @@ mysqldump error
$ mysqldump --all-databases --user=root --password --master-data > backupdatabase.sql
![](http://blog.linoxide.com/wp-content/uploads/2015/01/crearing-bakup-file.png)
*dumping databases*
The command above backs up all of the databases, storing them in a file called **backupdatabase.sql** in the current directory.
### 2. Uninstalling MySQL ###
First, we need to **move the my.cnf file to a safe place**.
**Note**: the my.cnf file is not deleted automatically when you uninstall the MySQL packages; we do this just in case. During the MariaDB installation, the installer will ask whether to keep the existing my.cnf file or to use the version shipped with the package (i.e. a new my.cnf file).
Enter the following command in a shell or terminal to back up the my.cnf file:
$ sudo cp /etc/mysql/my.cnf my.cnf.bak
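The removal commands themselves are outside this hunk; on the Ubuntu/Debian setup used later in this article they would presumably be something like the following (a sketch):
$ sudo apt-get remove --purge mysql-server mysql-client mysql-common
$ sudo apt-get autoremove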
@ -111,7 +113,7 @@ dumping databases
![adding mariadb repo](http://blog.linoxide.com/wp-content/uploads/2015/01/adding-repo-mariadb.png)
Once the key has been imported and the repository added, you can install MariaDB with the following commands:
$ sudo apt-get update
$ sudo apt-get install mariadb-server
@ -120,7 +122,7 @@ dumping databases
![my.conf configuration prompt](http://blog.linoxide.com/wp-content/uploads/2015/01/my.conf-configuration-prompt.png)
Remember that during the MariaDB installation it will ask whether to use the existing my.cnf file or the version shipped with the package. You can use either the previous my.cnf or the bundled one. Even if you want to use the new my.cnf file directly, you can still restore the previously backed-up settings into it later (don't forget we copied it to a safe place). So we simply chose the default option, "N". If you need to install a different version, refer to the [official MariaDB repositories][2].
### 4. Restoring the Configuration File ###
@ -136,7 +138,7 @@ dumping databases
And with that, we have successfully imported the earlier databases.
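The restore and import commands themselves sit outside this hunk; using the backups created earlier, they would presumably look like this (a sketch):
$ sudo cp my.cnf.bak /etc/mysql/my.cnf
$ mysql -u root -p < backupdatabase.sql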
Now, let's log in to the mysql command line and check whether the databases really have been imported:
$ mysql -u root -p
@ -152,15 +154,15 @@ dumping databases
### Conclusion ###
Finally, we have successfully migrated from MySQL to the MariaDB database management system. MariaDB is better than MySQL: although MySQL is still faster in terms of raw performance, MariaDB's advantages lie in its additional features and its licensing, which ensures it is free and open source (FOSS) and will stay that way. MySQL, in contrast, carries many extra plugins, some whose code is not freely usable, some without an open development process, and some that may stop being free and open source in the near future. If you have any questions, comments or feedback for us, don't hesitate to leave your thoughts in the comments section. Thank you for reading this tutorial, and we hope you enjoy MariaDB.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/migrate-mysql-mariadb-linux/
Author: [Arun Pyasi][a]
Translator: [martin2011qi](https://github.com/martin2011qi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).

View File

@ -1,41 +1,40 @@
11 tar command examples for creating and extracting archives in Linux
================================================================================
### The tar command in Linux ###
The tar (tape archive) command is frequently used on Linux systems to pack files into an archive file.
Common file extensions include .tar.gz and .tar.bz2, which indicate that the archive has additionally been compressed with the gzip or bzip algorithm, respectively.
In this tutorial we will take a quick look at examples of using the tar command on Linux desktop and server editions for the everyday tasks of creating and extracting archives.
### Using the tar command ###
The tar command is available by default on most Linux systems, so you don't need to install it separately.
> The tar command works with two compression formats, gzip and bzip: the "z" option selects gzip and the "j" option selects bzip. Uncompressed archives can be created as well.
#### 1. Extracting a tar.gz archive ####
The most common use is extracting archives; the following command extracts the files from a tar.gz archive.
$ tar -xvzf tarfile.tar.gz
Here is a quick explanation of the options used:
> x - extract files from the archive.
> v - verbose, print each file name as it is extracted.
> z - the archive is compressed with gzip.
> f - operate on the tar archive file that follows.
These are the important options to remember.
**Extracting tar.bz2/bzip archives**
Files with the bz2 extension are compressed with the bzip algorithm; tar can handle them as well, but you need to use the "j" option in place of the "z" option.
$ tar -xvjf archivefile.tar.bz2
@ -47,25 +46,25 @@ The tar command is available by default on most Linux systems, so you don't
First make sure the target directory exists, since tar will not create it for you; the command will fail if the target directory does not exist.
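The extraction command itself is just outside this hunk; a typical form is the following (a sketch, paths assumed):
$ mkdir -p /opt/extracted
$ tar -xvzf abc.tar.gz -C /opt/extracted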
#### 3. Extracting a single file ####
To extract a single file from an archive, simply append the file name to the command as shown below.
$ tar -xz -f abc.tar.gz "./new/abc.txt"
Multiple files can be specified in the same command, as follows.
$ tar -xv -f abc.tar.gz "./new/cde.txt" "./new/abc.txt"
$ tar -xz -f abc.tar.gz "./new/cde.txt" "./new/abc.txt"
#### 4. Extracting multiple files with wildcards ####
Wildcards can be used to extract a batch of files matching the given pattern, for example all files with the ".txt" extension.
$ tar -xz -f abc.tar.gz --wildcards "*.txt"
#### 5. Listing and searching the contents of a tar archive ####
If you only want to list the contents of a tar archive rather than extract them, use the "-t" (list) option. The following command prints the contents of a gzip-compressed tar archive.
$ tar -tz -f abc.tar.gz
./new/
@ -75,7 +74,7 @@ The tar command is available by default on most Linux systems, so you don't
./new/abc.txt
...
You can pipe the output to grep to search for a file, or to the less command to browse the listing. Adding the "v" verbose option prints additional details for each file.
For tar.bz2/bzip files, use the "j" option instead.
@ -84,11 +83,10 @@ The tar command is available by default on most Linux systems, so you don't
$ tar -tvz -f abc.tar.gz | grep abc.txt
-rw-rw-r-- enlightened/enlightened 0 2015-01-13 11:40 ./new/abc.txt
#### 6. Creating a tar/tar.gz archive ####
Now that we have learned how to extract a tar archive, it is time to start creating new ones. The tar command can pack selected files, or an entire directory, into an archive file; here are the corresponding examples.
The following command creates a tar archive from a directory, adding all of its files and subdirectories to the archive.
$ tar -cvf abc.tar ./new/
@ -102,14 +100,13 @@ The tar command is available by default on most Linux systems, so you don't
$ tar -cvzf abc.tar.gz ./new/
> The file extension does not actually make any real difference. "tar.gz" and "tgz" are common extensions for files compressed with gzip; "tar.bz2" and "tbz" are common extensions for files compressed with bzip. (LCTT translator's note: whether an archive is compressed, and with which algorithm, does not depend on its extension; the extension is only there to make the file easy to identify.)
#### 7. Confirming each file before adding it ####
A useful option is "w", which makes tar ask for confirmation before adding each file to the archive; this can come in handy at times.
With this option, a file is only added to the archive when you answer "y"; if you enter nothing, the default answer is "n".
# Add the specified files
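# (The interactive command elided from this hunk is presumably along these lines:
#  a sketch using tar's -w/--interactive option; archive and file names are assumed.)
$ tar -czw -f abc.tar.gz ./new/*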
@ -137,7 +134,7 @@ The tar command is available by default on most Linux systems, so you don't
#### 9. Adding files to a compressed archive (tar.gz/tar.bz2) ####
As mentioned earlier, it is not possible to add files directly to a compressed archive, yet it can still be done with a simple trick: use the gunzip command to decompress the archive, add the file to the plain tar archive, and then compress it again.
$ gunzip archive.tar.gz
$ tar -rf archive.tar ./path/to/file
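The recompression step falls just outside this hunk; presumably it is simply:
$ gzip archive.tar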
@ -147,16 +144,15 @@ The tar command is available by default on most Linux systems, so you don't
#### 10. Backups with tar ####
A real-world scenario is backing up directories at regular intervals; the tar command can be scheduled via cron to carry out such backups. Here is an example:
$ tar -cvz -f archive-$(date +%Y%m%d).tar.gz ./new/
Running the above command from cron will keep creating backup files with names like 'archive-20150218.tar.gz'.
Of course, make sure that the ever-growing archives do not end up filling the disk.
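A matching crontab entry might look like this (a sketch; the schedule and paths are assumptions, and note that % has to be escaped in crontab):
# run every day at 02:00 and write the archive to /backup
0 2 * * * tar -cvz -f /backup/archive-$(date +\%Y\%m\%d).tar.gz /home/user/new/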
#### 11. Verifying files while creating the archive ####
The "W" option can be used to verify the files after the archive has been written; here is a simple example.
@ -174,9 +170,9 @@ The tar command is available by default on most Linux systems, so you don't
Verify ./new/newfile.txt
Verify ./new/abc.txt
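The command that produced the verification output above is elided from this hunk; it is presumably something like (a sketch, names taken from earlier examples):
$ tar -cvW -f abc.tar ./new/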
Note that the verification cannot be done on a compressed archive; it only works on uncompressed tar archives.
That's all for now; you can read the tar manual with the "man tar" command.
--------------------------------------------------------------------------------
@ -184,7 +180,7 @@ via: http://www.binarytides.com/linux-tar-command/
Author: [Silver Moon][a]
Translator: [theo-l](https://github.com/theo-l)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).

View File

@ -1,8 +1,8 @@
How to Hide the PHP Version on a Linux Server
================================================================================
Typically, most web servers installed with default settings leak information, and PHP is one of the culprits. PHP is one of today's popular server-side, HTML-embedded languages. In these challenging times, many attackers will try to find vulnerabilities on your server, so I will briefly describe how to hide PHP information on a Linux server.
**expose_php** is turned on by default. Turning the "expose_php" parameter off makes PHP hide its version information.
[root@centos66 ~]# vi /etc/php.ini
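The directive to change (elided from this hunk) is presumably just the following line in php.ini:
expose_php = Off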
@ -26,9 +26,9 @@
X-Page-Speed: 1.9.32.2-4321
Cache-Control: max-age=0, no-cache
After making the change and restarting the web service, PHP no longer shows its version in the web server headers:
```
[root@centos66 ~]# curl -I http://www.ehowstuff.com/
HTTP/1.1 200 OK
Server: nginx
@ -39,8 +39,9 @@ X-Pingback: http://www.ehowstuff.com/xmlrpc.php
Date: Wed, 11 Feb 2015 14:10:43 GMT
X-Page-Speed: 1.9.32.2-4321
Cache-Control: max-age=0, no-cache
```
If you need any help, reach out on Twitter @ehowstuff or leave a comment below. [Click here for more articles][1]
(LCTT translator's note: besides the PHP version, the web server itself also leaks its version number by default. If you use Apache, [follow this article to turn off the Apache version display][2]; if you use Nginx, add the `server_tokens off;` directive inside the http block. Remember to restart the relevant service after making these changes.)
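For the Nginx case mentioned in the note, the change would look roughly like this fragment of nginx.conf (a sketch):
http {
    server_tokens off;
    # ... rest of the http block ...
}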
--------------------------------------------------------------------------------
@ -48,9 +49,10 @@ via: http://www.ehowstuff.com/how-to-hide-php-version-in-linux/
Author: [skytech][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://www.ehowstuff.com/author/mhstar/
[1]:http://www.ehowstuff.com/archives/
[2]:http://linux.cn/article-3642-1.html

View File

@ -0,0 +1,41 @@
Nmap -- Not Just for Doing Evil!
================================================================================
If SSH is the "Swiss Army knife" of the sysadmin world, Nmap is a box of dynamite. Dynamite is easy to misuse and blow your feet off with, but it is also a very powerful tool that can pull off jobs that seem impossible otherwise.
When most people think of Nmap, they think of scanning servers for open ports to mount an attack. Over the years, however, that same superpower has proven extremely useful when you run into problems with the servers or computers you manage. Whether you are trying to figure out what kind of server is using a given IP address on your network, trying to lock down a new NAS device, or just scanning the network, it comes in very handy.
The image below shows a network scan of my QNAP NAS. The only thing I use the device for is NFS and SMB file sharing, but as you can see, it has a whole pile of ports wide open. Without Nmap it would be hard to discover what the machine is actually running.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11825nmapf1.jpg)
*Network scan*
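The scan in the screenshot was presumably produced by simply pointing nmap at the NAS's address, something like (a sketch, IP address assumed):
nmap 192.168.1.50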
Another use you may not have thought of is scanning an entire network. You don't even need root access, and specifying the network block you want to scan is as easy as typing, for example:
nmap 192.168.1.0/24
The command above scans all 254 usable IP addresses on my LAN, letting me know which hosts answer a ping and which ports are open. If you have just added a new piece of hardware to the network but don't know what IP address it got via DHCP, Nmap is priceless. For example, the command above revealed this on my network:
Nmap scan report for TIVO-8480001903CCDDB.brainofshawn.com (192.168.1.220)
Host is up (0.0083s latency).
Not shown: 995 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
2190/tcp open tivoconnect
2191/tcp open tvbus
9080/tcp closed glrpc
Not only does it show the new Tivo device, it also tells me which ports are open. Because of its reliability, its usefulness and its borderline "black hat" capabilities, Nmap earns this month's Editors' Choice award. It is not a new program, but if you are a Linux user, you should be playing with it.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/nmap%E2%80%94not-just-evil
Author: [Shawn Powers][a]
Translator: [theo-l](https://github.com/theo-l)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://www.linuxjournal.com/users/shawn-powers

View File

@ -1,161 +0,0 @@
How To Install / Configure VNC Server On CentOS 7.0
================================================================================
Hi there, this tutorial is all about how to install and set up a [VNC][1] server on your CentOS 7 machine. The tutorial also works fine on RHEL 7. In this tutorial, we'll learn what VNC is and how to install and configure a [VNC server][1] on CentOS 7.
As we know, most of the time we, as system administrators, manage our servers over the network. It is very rare that we need physical access to any of our managed servers; in most cases all we need is to SSH in remotely to do our administration tasks. In this article we will configure a GUI alternative for remote access to our CentOS 7 server: VNC. VNC allows us to open a remote GUI session to our server, providing us with a full graphical interface accessible from any remote location.
A VNC server is free and open source software designed to give a VNC client remote access to the server's desktop environment, while a VNC viewer is used on the remote computer to connect to the server.
**Some Benefits of VNC server are listed below:**
- Remote GUI administration makes work easy and convenient.
- Clipboard sharing between the host CentOS server and the VNC client machine.
- GUI tools can be installed on the host CentOS server to make administration more powerful.
- The host CentOS server can be administered from any OS that has a VNC client installed.
- More reliable than SSH graphics forwarding and RDP connections.
So, let's start our journey towards installing the VNC server. We need to follow the steps below to get a working VNC setup.
First of all we'll need a working desktop environment (X Windows); if we don't have one running, we'll need to install it first.
**Note: The commands below must be run with root privileges. To switch to root, please execute "sudo -s" (without quotes) in a shell or terminal.**
### 1. Installing X-Windows ###
First of all, to install [X Windows][2] we'll need to execute the commands below in a shell or terminal. It will take a few minutes to install the packages.
# yum check-update
# yum groupinstall "X Window System"
![installing x windows](http://blog.linoxide.com/wp-content/uploads/2015/01/installing-x-windows.png)
# yum install gnome-classic-session gnome-terminal nautilus-open-terminal control-center liberation-mono-fonts
![install gnome classic session](http://blog.linoxide.com/wp-content/uploads/2015/01/gnome-classic-session-install.png)
# unlink /etc/systemd/system/default.target
# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target
![configuring graphics](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-graphics.png)
# reboot
After our machine restarts, we'll get a working CentOS 7 Desktop.
Now, we'll install VNC Server on our machine.
### 2. Installing VNC Server Package ###
Now, we'll install the VNC server package on our CentOS 7 machine. To install the VNC server, we'll need to execute the following command.
# yum install tigervnc-server -y
![vnc server](http://blog.linoxide.com/wp-content/uploads/2015/01/install-tigervnc.png)
### 3. Configuring VNC ###
Then, we'll need to create a configuration file under the **/etc/systemd/system/** directory. We can create the **vncserver@:1.service** file by copying the example file **/lib/systemd/system/vncserver@.service**.
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
![copying vnc server configuration](http://blog.linoxide.com/wp-content/uploads/2015/01/copying-configuration.png)
Now we'll open **/etc/systemd/system/vncserver@:1.service** in our favorite text editor (here, we'll use **nano**). Then find the lines of text below in that file and replace <USER> with your username. In my case it's linoxide, so I am replacing <USER> with linoxide, and it finally looks like the following.
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
PIDFile=/home/<USER>/.vnc/%H%i.pid
TO
ExecStart=/sbin/runuser -l linoxide -c "/usr/bin/vncserver %i"
PIDFile=/home/linoxide/.vnc/%H%i.pid
If you are creating it for the root user, then:
ExecStart=/sbin/runuser -l root -c "/usr/bin/vncserver %i"
PIDFile=/root/.vnc/%H%i.pid
![configuring user](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-user.png)
Now, we'll need to reload our systemd.
# systemctl daemon-reload
Finally, we'll create a VNC password for the user. To do so, first make sure you have sudo access to the user; here I will log in as the user "linoxide" and then execute the following. To log in as linoxide we'll run "**su linoxide**" (without quotes).
# su linoxide
$ sudo vncpasswd
![setting vnc password](http://blog.linoxide.com/wp-content/uploads/2015/01/vncpassword.png)
**Make sure that the password you enter is more than 6 characters long.**
### 4. Enabling and Starting the service ###
To enable the service at startup (permanently), execute the command shown below.
$ sudo systemctl enable vncserver@:1.service
Then, start the service.
$ sudo systemctl start vncserver@:1.service
### 5. Allowing VNC through the Firewall ###
We'll need to allow the VNC service through the firewall now.
$ sudo firewall-cmd --permanent --add-service vnc-server
$ sudo systemctl restart firewalld.service
![allowing firewalld](http://blog.linoxide.com/wp-content/uploads/2015/01/allowing-firewalld.png)
Now you will be able to connect to the VNC server using its IP and port (e.g. ip-address:1).
### 6. Connecting the machine with VNC Client ###
Finally, we are done installing the VNC server. Now we'll want to connect to the server machine and access it remotely. For that we'll need a VNC client installed on our computer, which will enable us to remotely access the server machine.
![remote access vncserver from vncviewer](http://blog.linoxide.com/wp-content/uploads/2015/01/vncviewer.png)
You can use a VNC client like [TightVNC viewer][3] or [RealVNC viewer][4] to connect to the server.
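From a Linux client with a TigerVNC/TightVNC viewer installed, the same connection can also be made from the command line (a sketch, using the address and display number from this article):
$ vncviewer 96.126.120.92:1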
To connect additional users, create unit files with different ports: go back to step 3 to configure and add a new user and port. You'll need to create **vncserver@:2.service**, replace the username in the config file, and continue the steps, changing the service name for each additional port. **Please make sure you are logged in as that particular user when creating the VNC password**.
VNC by itself runs on port 5900. Since each user will run their own VNC server, each user will have to connect via a separate port. The number added to the file name tells VNC to run that service on a sub-port of 5900. So in our case, the first user's VNC service will run on port 5901 (5900 + 1), and each further user's will run on 5900 + x, where x is the number used when creating the config file **vncserver@:x.service** for that user.
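Putting the multi-user steps together, adding a second user on display :2 would presumably look like this (a sketch, username assumed):
$ sudo cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:2.service
# edit the new unit and replace <USER> with the second user's name, then:
$ sudo systemctl daemon-reload
$ su - seconduser -c vncpasswd
$ sudo systemctl enable vncserver@:2.service
$ sudo systemctl start vncserver@:2.service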
We'll need to know the IP address and port of the server to connect with the client. An IP address is the unique identifying number of a machine. Here, my IP address is 96.126.120.92 and the port for this user is 1. We can get the public IP address by executing the command below in a shell or terminal on the machine where the VNC server is installed.
# curl -s checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
### Conclusion ###
Finally, we have installed and configured a VNC server on a machine running CentOS 7 / RHEL 7 (Red Hat Enterprise Linux). VNC is one of the easiest FOSS tools for remote access and a good alternative to TeamViewer Remote Access. VNC allows a user with a VNC client installed to control the machine running the VNC server. Listed below are some additional commands that are highly useful with VNC. Enjoy!!
#### Additional Commands : ####
- To stop the VNC service:
# systemctl stop vncserver@:1.service
- To disable the VNC service at startup:
# systemctl disable vncserver@:1.service
- To stop the firewall:
# systemctl stop firewalld.service
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-configure-vnc-server-centos-7-0/
Author: [Arun Pyasi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://linoxide.com/author/arunp/
[1]:http://en.wikipedia.org/wiki/Virtual_Network_Computing
[2]:http://en.wikipedia.org/wiki/X_Window_System
[3]:http://www.tightvnc.com/
[4]:https://www.realvnc.com/

View File

@ -1,151 +0,0 @@
zpl1025
Systemd Boot Process a Close Look in Linux
================================================================================
The way a Linux system boots up is quite complex, and there has always been a need to optimize the way it works. The traditional boot process of a Linux system is mainly handled by the well-known init process (also known as the SysV init boot system). Inefficiencies have been identified in the init-based boot system, and systemd is another boot manager for Linux-based systems which claims to overcome the shortcomings of the [traditional Linux SysV init][2] based system. We will focus our discussion on the features and controversies of systemd, but in order to understand it, let's first see how the Linux boot process is handled by a traditional SysV init based system. Kindly note that systemd is still in a testing phase, and future releases of Linux operating systems are preparing to replace their current boot process with the systemd boot manager.
### Understanding Linux Boot Process ###
Init is the very first process that starts when we power on our Linux system. The init process is assigned PID 1 and is the parent process of all other processes on the system. When a Linux computer is started, the processor looks for the BIOS in system memory; the BIOS then tests system resources and finds the first boot device, usually set to the hard disk. It looks for the Master Boot Record (MBR) on the hard disk, loads its contents into memory and passes control to it; the rest of the boot process is then controlled by the MBR.
The Master Boot Record initiates the boot loader (Linux has two well-known boot loaders, GRUB and LILO; about 80% of Linux systems use GRUB). This is when GRUB or LILO loads the kernel. The kernel immediately looks for "init" in /sbin and executes it. That is where init becomes the parent process of the Linux system. The very first file read by init is /etc/inittab; from there init decides the run level of our Linux operating system. It reads partition information from the /etc/fstab file and mounts partitions accordingly. Init then launches all the services/scripts specified in the /etc/init.d directory for the default run level. This is the step where all services are initialized by init, one by one; all services/daemons run in the background and init keeps managing them.
The shutdown process works pretty much in reverse: first init stops all services, and the filesystems are unmounted in the last stage.
The process described above has some shortcomings, and the need to replace traditional init with something better has been felt for a long time now. Several replacements have been developed and implemented as well. The well-known replacements for the init-based system are Upstart, Epoch, Mudar and systemd. Systemd is the one that has received the most attention and is considered the best of the available alternatives.
### Understanding Systemd ###
Reducing boot time and computational overhead is the main objective of systemd. Systemd (System Manager Daemon), originally developed under the GNU General Public License, is now under the GNU Lesser General Public License, and it is the most frequently discussed boot and service manager these days. If your Linux system is configured to use the systemd boot manager, the startup process will be handled by systemd instead of traditional SysV init. One of the core features of systemd is that it also supports the post-boot scripts of SysV init.
Systemd introduces the concept of parallelized boot: it creates a socket for each daemon that needs to be started, and these sockets are abstracted from the processes that use them, which allows daemons to interact with each other. Systemd creates new processes and assigns every process to a control group. Processes in different control groups use the kernel to communicate with each other. The way [systemd handles the startup process][2] is quite neat and much more optimized compared to the traditional init-based system. Let's review some of the core features of systemd.
- The boot process is much simpler as compared to the init
- Systemd provides concurrent and parallel processing of system boot, so it ensures better boot speed
- Processes are tracked using control groups, not by PIDs
- Improved handling of boot and service dependencies.
- Capability of system snapshots and restore
- Monitoring of started services; also capable of restarting any crashed service
- Includes systemd-login module to control user logins.
- Ability to add and remove components
- Low memory footprint and the ability to do job scheduling
- Journald module for event logging and syslogd module for system log.
Systemd handles the system shutdown process in a well-organized way as well. It has three service units located inside the /usr/lib/systemd/ directory, named systemd-halt.service, systemd-poweroff.service and systemd-reboot.service. These are executed when the user chooses to shut down, reboot or halt the Linux system. On shutdown, systemd first unmounts all file systems, disables all swap devices, detaches the storage devices and kills the remaining processes.
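On a systemd machine, these units sit behind the usual power-management commands; as a quick sketch, the user-facing equivalents are:
$ systemctl halt
$ systemctl poweroff
$ systemctl reboot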
![](http://images.linoxide.com/systemd-boot-process.jpg)
### Structural Overview of Systemd ###
Let's review the Linux boot process with some structural details when systemd is used as the boot and service manager. For the sake of simplicity, we list the process in steps below:
**1.** The very first step when you power on your system is BIOS initialization. The BIOS reads the boot device settings, locates the MBR and hands control over to it (assuming the hard disk is set as the first boot device).
**2.** The MBR reads information from the GRUB or LILO boot loader and initializes the kernel. GRUB or LILO specifies how to handle the rest of the boot. If you have specified systemd as the boot manager in the GRUB configuration file, the rest of the boot process is handled by systemd. Systemd handles boot and service management using "targets". The "target" files in systemd are used to group different boot units and to synchronize startup processes.
**3.** The very first target executed by systemd is **default.target**, but default.target is actually a symlink to **graphical.target**. Symlinks in Linux work just like shortcuts in Windows. The graphical.target file is located at /usr/lib/systemd/system/graphical.target. We show the contents of the graphical.target file in the following screenshot.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/graphical1.png)
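You can inspect this chain on your own system with a couple of standard commands (a sketch; systemctl get-default requires a systemd-based system):
$ ls -l /etc/systemd/system/default.target
$ systemctl get-default
$ cat /usr/lib/systemd/system/graphical.target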
**4.** At this stage, **multi-user.target** is invoked; this target keeps its sub-units inside the "/etc/systemd/system/multi-user.target.wants" directory. This target sets up the environment for multi-user support. Non-root users are enabled at this stage of the boot process, and firewall-related services are started at this stage as well.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/multi-user-target1.png)
"multi-user.target" passes control to another layer, "**basic.target**".
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Basic-Target.png)
**5.** "basic.target" unit is the one that starts usual services specially graphical manager service. It uses /etc/systemd/system/basic.target.wants directory to decide which services need to be started, basic.target passes on control to **sysinit.target**.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Sysint-Target.png)
**6.** "sysinit.target" starts important system services like file System mounting, swap spaces and devices, kernel additional options etc. sysinit.target passes on startup process to **local-fs.target**. The contents of this target unit are shown in the following screenshot.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/local-FS-Target.png)
**7.** local-fs.target: no user-related services are started by this target unit; it handles core low-level services only. This target performs actions based on the /etc/fstab and /etc/inittab files.
### Analyzing System Boot Performance ###
Systemd offers a tool to identify and troubleshoot boot-related issues and performance concerns. **systemd-analyze** is a built-in command which lets you examine the boot process. You can find the units that ran into errors during boot, and further trace and correct boot component issues. Some useful systemd-analyze commands are listed below.
**systemd-analyze time** shows the time spent in the kernel and in user space.
$ systemd-analyze time
Startup finished in 1440ms (kernel) + 3444ms (userspace)
**systemd-analyze blame** prints a list of all running units, sorted by the time they took to initialize; this way you can get an idea of which services take a long time to start during boot.
$ systemd-analyze blame
2001ms mysqld.service
234ms httpd.service
191ms vmms.service
**systemd-analyze verify** shows whether there are any syntax errors in the system units. **systemd-analyze plot** can be used to write the whole startup process to an SVG format file. The whole boot process is lengthy to read, so with this command we can dump the output of the whole boot sequence into a file and then read and analyze it further. The following command takes care of this.
systemd-analyze plot > boot.svg
### Systemd Controversies ###
Systemd has not been lucky enough to receive love from everyone; some professionals and administrators have different opinions on how it works and on its development. According to critics of systemd, it is "not Unix-like" because it tries to replace some system services. Some professionals also don't like the idea of using binary configuration files. It is said that editing the systemd configuration is not an easy task and there are no graphical tools available for the purpose.
### Test Systemd on Ubuntu 14.04 and 12.04 ###
Originally, Ubuntu decided to replace its current boot process with systemd in Ubuntu 16.04 LTS. Ubuntu 16.04 is supposed to be released in April 2016, but considering the popularity of and demand for systemd, the upcoming **Ubuntu 15.04** will have it as its default boot manager. The good news is that users of Ubuntu 14.04 Trusty Tahr and Ubuntu 12.04 Precise Pangolin can still test systemd on their machines. The test process is not very complex: all you need to do is add the related PPA to the system, update the repositories and perform a system upgrade.
**Disclaimer**: Please note that it is still in the testing and development stages for Ubuntu. Testing packages might have unknown issues, and in the worst-case scenario they might break your system configuration. Make sure you back up your important data before trying this upgrade.
Run the following command in the terminal to add the PPA to your Ubuntu system:
sudo add-apt-repository ppa:pitti/systemd
You will see a warning message here because we are trying to use a temporary/testing PPA, which is not recommended for production machines.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/PPA-Systemd1.png)
Now update the APT Package Manager repositories by running the following command.
sudo apt-get update
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Update-APT1.png)
Perform system upgrade by running the following command.
sudo apt-get dist-upgrade
![](http://blog.linoxide.com/wp-content/uploads/2015/03/System-Upgrade.png)
That's all; you should be able to see systemd's configuration files on your Ubuntu system now. Just browse to the /lib/systemd/ directory and look at the files there.
Alright, it's time to edit the GRUB configuration file and specify systemd as the default boot manager. Edit the grub file using the Gedit text editor.
sudo gedit /etc/default/grub
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Edit-Grub.png)
Here, edit the GRUB_CMDLINE_LINUX_DEFAULT parameter in this file and set its value to: "**init=/lib/systemd/systemd**"
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Grub-Systemd.png)
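The resulting line in /etc/default/grub would look roughly like the following (a sketch; the update-grub step afterwards is an assumption, it is not shown in the article):
GRUB_CMDLINE_LINUX_DEFAULT="init=/lib/systemd/systemd"
# regenerate the GRUB configuration afterwards
sudo update-grub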
That's all; your Ubuntu system is no longer using the traditional boot manager, it is using systemd now. Reboot your system and watch the systemd boot process.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Sytemd-Boot.png)
### Conclusion ###
Systemd is no doubt a step forward in improving the Linux boot process; it is an awesome suite of libraries and daemons that together improve the system boot and shutdown process. Many Linux distributions are preparing to support it as their official boot manager, and in future releases of Linux distros we can hope to see systemd startup. On the other hand, in order to succeed and be adopted on a wide scale, systemd should address the concerns of its critics as well.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/systemd-boot-process/
Author: [Aun Raza][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).
[a]:http://linoxide.com/author/arunrz/
[1]:http://linoxide.com/booting/boot-process-of-linux-in-detail/
[2]:http://0pointer.de/blog/projects/self-documented-boot.html

View File

@ -0,0 +1,743 @@
ZMap Documentation
================================================================================
1. Getting Started with ZMap
1. Scanning Best Practices
1. Command Line Arguments
1. Additional Information
1. TCP SYN Probe Module
1. ICMP Echo Probe Module
1. UDP Probe Module
1. Configuration Files
1. Verbosity
1. Results Output
1. Blacklisting
1. Rate Limiting and Sampling
1. Sending Multiple Probes
1. Extending ZMap
1. Sample Applications
1. Writing Probe and Output Modules
----------
### Getting Started with ZMap ###
ZMap is designed to perform comprehensive scans of the IPv4 address space or large portions of it. While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space at over 1.4 million packets per second. Before performing even small scans, we encourage users to contact their local network administrators and consult our list of scanning best practices.
By default, ZMap will perform a TCP SYN scan on the specified port at the maximum rate possible. A more conservative configuration that will scan 10,000 random addresses on port 80 at a maximum 10 Mbps can be run as follows:
$ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.csv
Or more concisely specified as:
$ zmap -B 10M -p 80 -n 10000 -o results.csv
ZMap can also be used to scan specific subnets or CIDR blocks. For example, to scan only 10.0.0.0/8 and 192.168.0.0/16 on port 80, run:
zmap -p 80 -o results.csv 10.0.0.0/8 192.168.0.0/16
If the scan started successfully, ZMap will output status updates every one second similar to the following:
0% (1h51m left); send: 28777 562 Kp/s (560 Kp/s avg); recv: 1192 248 p/s (231 p/s avg); hits: 0.04%
0% (1h51m left); send: 34320 554 Kp/s (559 Kp/s avg); recv: 1442 249 p/s (234 p/s avg); hits: 0.04%
0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%
These updates provide information about the current state of the scan and are of the following form: %-complete (est time remaining); packets-sent curr-send-rate (avg-send-rate); recv: packets-recv recv-rate (avg-recv-rate); hits: hit-rate
If you do not know the scan rate that your network can support, you may want to experiment with different scan rates or bandwidth limits to find the fastest rate that your network can support before you see decreased results.
By default, ZMap will output the list of distinct IP addresses that responded successfully (e.g. with a SYN ACK packet), similar to the following. There are several additional formats (e.g. JSON and Redis) for outputting results, as well as options for producing programmatically parsable scan statistics. As well, additional output fields can be specified and the results can be filtered using an output filter.
115.237.116.119
23.9.117.80
207.118.204.141
217.120.143.111
50.195.22.82
We strongly encourage you to use a blacklist file, to exclude both reserved/unallocated IP space (e.g. multicast, RFC1918), as well as networks that request to be excluded from your scans. By default, ZMap will utilize a simple blacklist file containing reserved and unallocated addresses located at `/etc/zmap/blacklist.conf`. If you find yourself specifying certain settings, such as your maximum bandwidth or blacklist file every time you run ZMap, you can specify these in `/etc/zmap/zmap.conf` or use a custom configuration file.
If you are attempting to troubleshoot scan related issues, there are several options to help debug. First, it is possible to perform a dry-run scan in order to see the packets that would be sent over the network, by adding the `--dryrun` flag. It is also possible to change the logging verbosity by setting the `--verbosity=n` flag.
----------
### Scanning Best Practices ###
We offer these suggestions for researchers conducting Internet-wide scans as guidelines for good Internet citizenship.
- Coordinate closely with local network administrators to reduce risks and handle inquiries
- Verify that scans will not overwhelm the local network or upstream provider
- Signal the benign nature of the scans in web pages and DNS entries of the source addresses
- Clearly explain the purpose and scope of the scans in all communications
- Provide a simple means of opting out and honor requests promptly
- Conduct scans no larger or more frequent than is necessary for research objectives
- Spread scan traffic over time or source addresses when feasible
It should go without saying that scan researchers should refrain from exploiting vulnerabilities or accessing protected resources, and should comply with any special legal requirements in their jurisdictions.
----------
### Command Line Arguments ###
#### Common Options ####
These options are the most common options when performing a simple scan. We note that some options are dependent on the probe module or output module used (e.g. target port is not used when performing an ICMP Echo Scan).
**-p, --target-port=port**
TCP port number to scan (e.g. 443)
**-o, --output-file=name**
Write results to this file. Use - for stdout
**-b, --blacklist-file=path**
File of subnets to exclude, in CIDR notation (e.g. 192.168.0.0/16), one-per line. It is recommended you use this to exclude RFC 1918 addresses, multicast, IANA reserved space, and other IANA special-purpose addresses. An example blacklist file is provided in conf/blacklist.example for this purpose.
#### Scan Options ####
**-n, --max-targets=n**
Cap the number of targets to probe. This can either be a number (e.g. `-n 1000`) or a percentage (e.g. `-n 0.1%`) of the scannable address space (after excluding blacklist)
**-N, --max-results=n**
Exit after receiving this many results
**-t, --max-runtime=secs**
Cap the length of time for sending packets
**-r, --rate=pps**
Set the send rate in packets/sec
**-B, --bandwidth=bps**
Set the send rate in bits/second (supports suffixes G, M, and K; e.g. `-B 10M` for 10 Mbps). This overrides the `--rate` flag.
**-c, --cooldown-time=secs**
How long to continue receiving after sending has completed (default=8)
**-e, --seed=n**
Seed used to select address permutation. Use this if you want to scan addresses in the same order for multiple ZMap runs.
**--shards=n**
Split the scan up into N shards/partitions among different instances of zmap (default=1). When sharding, `--seed` is required
**--shard=n**
Set which shard to scan (default=0). Shards are indexed in the range [0, N), where N is the total number of shards. When sharding, `--seed` is required.
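For example, splitting a port 443 scan across two machines would look something like this (a sketch; the seed must be identical on both instances):
zmap -p 443 --shards=2 --shard=0 --seed=1234 -o results-shard0.csv
zmap -p 443 --shards=2 --shard=1 --seed=1234 -o results-shard1.csv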
**-T, --sender-threads=n**
Threads used to send packets (default=1)
**-P, --probes=n**
Number of probes to send to each IP (default=1)
**-d, --dryrun**
Print out each packet to stdout instead of sending it (useful for debugging)
#### Network Options ####
**-s, --source-port=port|range**
Source port(s) to send packets from
**-S, --source-ip=ip|range**
Source address(es) to send packets from. Either single IP or range (e.g. 10.0.0.1-10.0.0.9)
**-G, --gateway-mac=addr**
Gateway MAC address to send packets to (in case auto-detection does not work)
**-i, --interface=name**
Network interface to use
#### Probe Options ####
ZMap allows users to specify and write their own probe modules for use with ZMap. Probe modules are responsible for generating probe packets to send, and processing responses from hosts.
**--list-probe-modules**
List available probe modules (e.g. tcp_synscan)
**-M, --probe-module=name**
Select probe module (default=tcp_synscan)
**--probe-args=args**
Arguments to pass to probe module
**--list-output-fields**
List the fields the selected probe module can send to the output module
#### Output Options ####
ZMap allows users to specify and write their own output modules for use with ZMap. Output modules are responsible for processing the fieldsets returned by the probe module, and outputting them to the user. Users can specify output fields, and write filters over the output fields.
**--list-output-modules**
List available output modules (e.g. csv)
**-O, --output-module=name**
Select output module (default=csv)
**--output-args=args**
Arguments to pass to output module
**-f, --output-fields=fields**
Comma-separated list of fields to output
**--output-filter**
Specify an output filter over the fields defined by the probe module
#### Additional Options ####
**-C, --config=filename**
Read a configuration file, which can specify any other options.
**-q, --quiet**
Do not print status updates once per second
**-g, --summary**
Print configuration and summary of results at the end of the scan
**-v, --verbosity=n**
Level of log detail (0-5, default=3)
**-h, --help**
Print help and exit
**-V, --version**
Print version and exit
----------
### Additional Information ###
#### TCP SYN Scans ####
When performing a TCP SYN scan, ZMap requires a single target port and supports specifying a range of source ports from which the scan will originate.
**-p, --target-port=port**
TCP port number to scan (e.g. 443)
**-s, --source-port=port|range**
Source port(s) for scan packets (e.g. 40000-50000)
**Warning!** ZMap relies on the Linux kernel to respond to SYN/ACK packets with RST packets in order to close connections opened by the scanner. This occurs because ZMap sends packets at the Ethernet layer in order to reduce overhead otherwise incurred in the kernel from tracking open TCP connections and performing route lookups. As such, if you have a firewall rule that tracks established connections such as a netfilter rule similar to `-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`, this will block SYN/ACK packets from reaching the kernel. This will not prevent ZMap from recording responses, but it will prevent RST packets from being sent back, ultimately using up a connection on the scanned host until your connection times out. We strongly recommend that you select a set of unused ports on your scanning host which can be allowed access in your firewall and specifying this port range when executing ZMap, with the `-s` flag (e.g. `-s '50000-60000'`).
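One way to carve out such a range ahead of a connection-tracking rule is a netfilter rule like the following (a sketch, an assumption rather than something spelled out in this documentation; adapt it to your own firewall setup):
iptables -I INPUT -p tcp --dport 50000:60000 -j ACCEPT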
#### ICMP Echo Request Scans ####
While ZMap performs TCP SYN scans by default, it also supports ICMP echo request scans in which an ICMP echo request packet is sent to each host and the type of ICMP response received in reply is denoted. An ICMP scan can be performed by selecting the icmp_echoscan scan module similar to the following:
$ zmap --probe-module=icmp_echoscan
#### UDP Datagram Scans ####
ZMap additionally supports UDP probes, where it will send out an arbitrary UDP datagram to each host, and receive either UDP or ICMP Unreachable responses. ZMap supports four different methods of setting the UDP payload through the --probe-args command-line option. These are 'text' for ASCII-printable payloads, 'hex' for hexadecimal payloads set on the command-line, 'file' for payloads contained in an external file, and 'template' for payloads that require dynamic field generation. In order to obtain the UDP response, make sure that you specify 'data' as one of the fields to report with the -f option.
The example below will send the two bytes 'ST', a PCAnywhere 'status' request, to UDP port 5632.
$ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -
The example below will send the byte '0x02', a SQL Server 'client broadcast' request, to UDP port 1434.
$ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -
The example below will send a NetBIOS status request to UDP port 137. This uses a payload file that is included with the ZMap distribution.
$ zmap -M udp -p 137 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -
The example below will send a SIP 'OPTIONS' request to UDP port 5060. This uses a template file that is included with the ZMap distribution.
$ zmap -M udp -p 5060 --probe-args=template:sip_options.tpl -N 100 -f saddr,data -o -
UDP payload templates are still experimental. You may encounter crashes when using more than one send thread (-T), and there is a significant decrease in performance compared to static payloads. A template is simply a payload file that contains one or more field specifiers enclosed in a ${} sequence. Some protocols, notably SIP, require the payload to reflect the source and destination of the packet. Other protocols, such as portmapper and DNS, contain fields that should be randomized per request or risk being dropped by multi-homed systems scanned by ZMap.
The payload template below will send a SIP OPTIONS request to every destination:
OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
From: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT};tag=${RAND_DIGIT=8}
To: sip:${RAND_ALPHA=8}@${DADDR}
Call-ID: ${RAND_DIGIT=10}@${SADDR}
CSeq: 1 OPTIONS
Contact: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT}
Content-Length: 0
Max-Forwards: 20
User-Agent: ${RAND_ALPHA=8}
Accept: text/plain
In the example above, note that line endings are \r\n and the end of this request must contain \r\n\r\n for most SIP implementations to correctly process it. A working example is included in the examples/udp-payloads directory of the ZMap source tree (sip_options.tpl).
The following template fields are currently implemented:
- **SADDR**: Source IP address in dotted-quad format
- **SADDR_N**: Source IP address in network byte order
- **DADDR**: Destination IP address in dotted-quad format
- **DADDR_N**: Destination IP address in network byte order
- **SPORT**: Source port in ascii format
- **SPORT_N**: Source port in network byte order
- **DPORT**: Destination port in ascii format
- **DPORT_N**: Destination port in network byte order
- **RAND_BYTE**: Random bytes (0-255), length specified with =(length) parameter
- **RAND_DIGIT**: Random digits from 0-9, length specified with =(length) parameter
- **RAND_ALPHA**: Random mixed-case letters from A-Z, length specified with =(length) parameter
- **RAND_ALPHANUM**: Random mixed-case letters from A-Z and digits from 0-9, length specified with =(length) parameter
### Configuration Files ###
ZMap supports configuration files instead of requiring all options to be specified on the command-line. A configuration can be created by specifying one long-name option and the value per line such as:
interface "eth1"
source-ip 1.1.1.4-1.1.1.8
gateway-mac b4:23:f9:28:fa:2d # upstream gateway
cooldown-time 300 # seconds
blacklist-file /etc/zmap/blacklist.conf
output-file ~/zmap-output
quiet
summary
ZMap can then be run with a configuration file and specifying any additional necessary parameters:
$ zmap --config=~/.zmap.conf --target-port=443
### Verbosity ###
There are several types of on-screen output that ZMap produces. By default, ZMap will print out basic progress information similar to the following every 1 second. This can be disabled by setting the `--quiet` flag.
0:01 12%; send: 10000 done (15.1 Kp/s avg); recv: 144 143 p/s (141 p/s avg); hits: 1.44%
ZMap also prints out informational messages during scanner configuration such as the following, which can be controlled with the `--verbosity` argument.
Aug 11 16:16:12.813 [INFO] zmap: started
Aug 11 16:16:12.817 [DEBUG] zmap: no interface provided. will use eth0
Aug 11 16:17:03.971 [DEBUG] cyclic: primitive root: 3489180582
Aug 11 16:17:03.971 [DEBUG] cyclic: starting point: 46588
Aug 11 16:17:03.975 [DEBUG] blacklist: 3717595507 addresses allowed to be scanned
Aug 11 16:17:03.975 [DEBUG] send: will send from 1 address on 28233 source ports
Aug 11 16:17:03.975 [DEBUG] send: using bandwidth 10000000 bits/s, rate set to 14880 pkt/s
Aug 11 16:17:03.985 [DEBUG] recv: thread started
ZMap also supports printing out a grep-able summary at the end of the scan, similar to below, which can be invoked with the `--summary` flag.
cnf target-port 443
cnf source-port-range-begin 32768
cnf source-port-range-end 61000
cnf source-addr-range-begin 1.1.1.4
cnf source-addr-range-end 1.1.1.8
cnf maximum-packets 4294967295
cnf maximum-runtime 0
cnf permutation-seed 0
cnf cooldown-period 300
cnf send-interface eth1
cnf rate 45000
env nprocessors 16
exc send-start-time Fri Jan 18 01:47:35 2013
exc send-end-time Sat Jan 19 00:47:07 2013
exc recv-start-time Fri Jan 18 01:47:35 2013
exc recv-end-time Sat Jan 19 00:52:07 2013
exc sent 3722335150
exc blacklisted 572632145
exc first-scanned 1318129262
exc hit-rate 0.874102
exc synack-received-unique 32537000
exc synack-received-total 36689941
exc synack-cooldown-received-unique 193
exc synack-cooldown-received-total 1543
exc rst-received-unique 141901021
exc rst-received-total 166779002
adv source-port-secret 37952
adv permutation-gen 4215763218
### Results Output ###
ZMap can produce results in several formats through the use of **output modules**. By default, ZMap only supports **csv** output, however support for **redis** and **json** can be compiled in. The results sent to these output modules may be filtered using an **output filter**. The fields the output module writes are specified by the user. By default, ZMap will return results in csv format and if no output file is specified, ZMap will not produce specific results. It is also possible to write your own output module; see Writing Output Modules for information.
**-o, --output-file=p**
File to write output to
**-O, --output-module=p**
Invoke a custom output module
**-f, --output-fields=p**
Comma-separated list of fields to output
**--output-filter=filter**
Specify an output filter over fields for a given probe
**--list-output-modules**
Lists available output modules
**--list-output-fields**
List available output fields for a given probe
#### Output Fields ####
ZMap has a variety of fields it can output beyond IP address. These fields can be viewed for a given probe module by running with the `--list-output-fields` flag.
$ zmap --probe-module="tcp_synscan" --list-output-fields
saddr string: source IP address of response
saddr-raw int: network order integer form of source IP address
daddr string: destination IP address of response
daddr-raw int: network order integer form of destination IP address
ipid int: IP identification number of response
ttl int: time-to-live of response packet
sport int: TCP source port
dport int: TCP destination port
seqnum int: TCP sequence number
acknum int: TCP acknowledgement number
window int: TCP window
classification string: packet classification
success int: is response considered success
repeat int: is response a repeat response from host
cooldown int: Was response received during the cooldown period
timestamp-str string: timestamp of when response arrived in ISO8601 format.
timestamp-ts int: timestamp of when response arrived in seconds since Epoch
timestamp-us int: microsecond part of timestamp (e.g. microseconds since 'timestamp-ts')
To select which fields to output, any combination of the output fields can be specified as a comma-separated list using the `--output-fields=fields` or `-f` flags. Example:
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
#### Filtering Output ####
Results generated by a probe module can be filtered before being passed to the output module. Filters are defined over the output fields of a probe module. Filters are written in a simple filtering language, similar to SQL, and are passed to ZMap using the **--output-filter** option. Output filters are commonly used to filter out duplicate results, or to pass only successful responses to the output module.
Filter expressions are of the form `<fieldname> <operation> <value>`. The type of `<value>` must be either a string or unsigned integer literal, and match the type of `<fieldname>`. The valid operations for integer comparisons are `=, !=, <, >, <=, >=`. The operations for string comparisons are `=` and `!=`. The `--list-output-fields` flag will print what fields and types are available for the selected probe module, and then exit.
Compound filter expressions may be constructed by combining filter expressions using parenthesis to specify order of operations, the `&&` (logical AND) and `||` (logical OR) operators.
**Examples**
Write a filter for only successful, non-duplicate responses
--output-filter="success = 1 && repeat = 0"
Filter for packets that have classification RST and a TTL greater than 10, or for packets with classification SYNACK
--output-filter="(classification = rst && ttl > 10) || classification = synack"
#### CSV ####
The csv module will produce a comma-separated value file of the output fields requested. For example, the following command produces the following CSV in a file called `output.csv`.
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
----------
response, saddr, daddr, sport, dport, seq, ack, in_cooldown, is_repeat, timestamp
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
rst, 148.8.49.150, 10.0.0.9, 80, 41672, 0, 1135824975, 0, 0,2013-08-15 18:55:47.692
rst, 50.165.166.206, 10.0.0.9, 80, 38858, 0, 535206863, 0, 0,2013-08-15 18:55:47.694
rst, 65.55.203.135, 10.0.0.9, 80, 50008, 0, 4071709905, 0, 0,2013-08-15 18:55:47.700
synack, 50.57.166.186, 10.0.0.9, 80, 60650, 2813653162, 993314545, 0, 0,2013-08-15 18:55:47.704
synack, 152.75.208.114, 10.0.0.9, 80, 52498, 460383682, 4040786862, 0, 0,2013-08-15 18:55:47.707
synack, 23.72.138.74, 10.0.0.9, 80, 33480, 810393698, 486476355, 0, 0,2013-08-15 18:55:47.710
#### Redis ####
The redis output module allows addresses to be added to a Redis queue instead of being saved to a file, which ultimately allows ZMap to be integrated with post-processing tools.
**Heads Up!** ZMap does not build with Redis support by default. If you are building ZMap from source, you can build with Redis support by running CMake with `-DWITH_REDIS=ON`.
### Blacklisting and Whitelisting ###
ZMap supports both blacklisting and whitelisting network prefixes. If ZMap is not provided with blacklist or whitelist parameters, ZMap will scan all IPv4 addresses (including local, reserved, and multicast addresses). If a blacklist file is specified, network prefixes in the blacklisted segments will not be scanned; if a whitelist file is provided, only network prefixes in the whitelist file will be scanned. A whitelist and blacklist file can be used in coordination; the blacklist has priority over the whitelist (e.g. if you have whitelisted 10.0.0.0/8 and blacklisted 10.1.0.0/16, then 10.1.0.0/16 will not be scanned). Whitelist and blacklist files can be specified on the command-line as follows:
**-b, --blacklist-file=path**
File of subnets to blacklist in CIDR notation, e.g. 192.168.0.0/16
**-w, --whitelist-file=path**
File of subnets to limit scan to in CIDR notation, e.g. 192.168.0.0/16
Blacklist files should be formatted with a single network prefix in CIDR notation per line. Comments are allowed using the `#` character. Example:
# From IANA IPv4 Special-Purpose Address Registry
# http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
# Updated 2013-05-22
0.0.0.0/8 # RFC1122: "This host on this network"
10.0.0.0/8 # RFC1918: Private-Use
100.64.0.0/10 # RFC6598: Shared Address Space
127.0.0.0/8 # RFC1122: Loopback
169.254.0.0/16 # RFC3927: Link Local
172.16.0.0/12 # RFC1918: Private-Use
192.0.0.0/24 # RFC6890: IETF Protocol Assignments
192.0.2.0/24 # RFC5737: Documentation (TEST-NET-1)
192.88.99.0/24 # RFC3068: 6to4 Relay Anycast
192.168.0.0/16 # RFC1918: Private-Use
198.18.0.0/15 # RFC2544: Benchmarking
198.51.100.0/24 # RFC5737: Documentation (TEST-NET-2)
203.0.113.0/24 # RFC5737: Documentation (TEST-NET-3)
240.0.0.0/4 # RFC1112: Reserved
255.255.255.255/32 # RFC0919: Limited Broadcast
# From IANA Multicast Address Space Registry
# http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
# Updated 2013-06-25
224.0.0.0/4 # RFC5771: Multicast/Reserved
If you are looking to scan only a random portion of the Internet, check out Sampling instead of using whitelisting and blacklisting.
**Heads Up!** The default ZMap configuration uses the blacklist file at `/etc/zmap/blacklist.conf`, which contains locally scoped address space and reserved IP ranges. The default configuration can be changed by editing `/etc/zmap/zmap.conf`.
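For example, to restrict a scan to the prefixes listed in a whitelist file while still honoring the default blacklist (the file names below are placeholders for your own files):

    $ zmap -p 443 -w whitelist.txt -b /etc/zmap/blacklist.conf -o results.csv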
### Rate Limiting and Sampling ###
By default, ZMap will scan at the fastest rate that your network adaptor supports. In our experience on commodity hardware, this is generally around 95-98% of the theoretical speed of gigabit Ethernet, which may be faster than your upstream provider can handle. ZMap will not automatically adjust its send-rate based on your upstream provider. You may need to manually adjust your send-rate to reduce packet drops and incorrect results.
**-r, --rate=pps**
Set maximum send rate in packets/sec
**-B, --bandwidth=bps**
Set send rate in bits/sec (supports suffixes G, M, and K). This overrides the --rate flag.
ZMap also allows random sampling of the IPv4 address space by specifying max-targets and/or max-runtime. Because hosts are scanned in a random permutation generated per scan instantiation, limiting a scan to n hosts will perform a random sampling of n hosts. Command-line options:
**-n, --max-targets=n**
Cap number of targets to probe
**-N, --max-results=n**
Cap number of results (exit after receiving this many positive results)
**-t, --max-runtime=s**
Cap length of time for sending packets (in seconds)
**-s, --seed=n**
Seed used to select address permutation. Specify the same seed in order to scan addresses in the same order for different ZMap runs.
For example, if you wanted to scan the same one million hosts on the Internet for multiple scans, you could set a predetermined seed and cap the number of scanned hosts similar to the following:
zmap -p 443 -s 3 -n 1000000 -o results
In order to determine which one million hosts were going to be scanned, you could run the scan in dry-run mode which will print out the packets that would be sent instead of performing the actual scan.
zmap -p 443 -s 3 -n 1000000 --dryrun | grep daddr \
| awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
### Sending Multiple Packets ###
ZMap supports sending multiple probes to each host. Increasing this number increases both the scan time and the number of hosts reached. However, we find that the increase in scan time (~100% per additional probe) greatly outweighs the increase in hosts reached (~1% per additional probe).
**-P, --probes=n**
The number of unique probes to send to each IP (default=1)
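For example, a sketch of a scan that sends three SYN probes to each target (the bandwidth cap and output file are arbitrary choices for illustration):

    $ zmap -p 80 -P 3 -B 10M -o results.csv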
----------
### Sample Applications ###
ZMap is designed for initiating contact with a large number of hosts and finding ones that respond positively. However, we realize that many users will want to perform follow-up processing, such as performing an application level handshake. For example, users who perform a TCP SYN scan on port 80 might want to perform a simple GET request and users who scan port 443 may be interested in completing a TLS handshake.
#### Banner Grab ####
We have included a sample application, banner-grab, with ZMap that enables users to receive messages from listening TCP servers. Banner-grab connects to the provided servers, optionally sends a message, and prints out the first message received from the server. This tool can be used to fetch banners such as HTTP server responses to specific commands, telnet login prompts, or SSH server strings.
This example finds 1000 servers listening on port 80, and sends a simple GET request to each, storing their base-64 encoded responses in http-banners.out
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > http-banners.out
For more details on using `banner-grab`, see the README file in `examples/banner-grab`.
**Heads Up!** ZMap and banner-grab can have significant performance and accuracy impact on one another if run simultaneously (as in the example). Make sure not to let ZMap saturate banner-grab-tcp's concurrent connections, otherwise banner-grab will fall behind reading stdin, causing ZMap to block on writing stdout. We recommend using a slower scanning rate with ZMap, and increasing the concurrency of banner-grab-tcp to no more than 3000 (Note that > 1000 concurrent connections requires you to use `ulimit -SHn 100000` and `ulimit -HHn 100000` to increase the maximum file descriptors per process). These parameters will of course be dependent on your server performance, and hit-rate; we encourage developers to experiment with small samples before running a large scan.
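A throttled invocation along these lines might look like the following sketch (the target count, bandwidth cap, and concurrency are illustrative starting points, not tuned recommendations):

    $ ulimit -SHn 100000 && ulimit -HHn 100000
    $ zmap -p 80 -N 10000 -B 1M -o - | ./banner-grab-tcp -p 80 -c 3000 -d ./http-req > http-banners.out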
#### Forge Socket ####
We have also included a form of banner-grab, called forge-socket, that reuses the SYN-ACK sent from the server for the connection that ultimately fetches the banner. In `banner-grab-tcp`, ZMap sends a SYN to each server, and listening servers respond with a SYN+ACK. The ZMap host's kernel receives this, and sends a RST, as no active connection is associated with that packet. The banner-grab program must then create a new TCP connection to the same server to fetch data from it.
In forge-socket, we utilize a kernel module by the same name, that allows us to create a connection with arbitrary TCP parameters. This enables us to suppress the kernel's RST packet, and instead create a socket that will reuse the SYN+ACK's parameters, and send and receive data through this socket as we would any normally connected socket.
To use forge-socket, you will need the forge-socket kernel module, available from [github][1]. You should git clone `git@github.com:ewust/forge_socket.git` in the ZMap root source directory, and then cd into the forge_socket directory, and run make. Install the kernel module with `insmod forge_socket.ko` as root.
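In sketch form, the build and install steps described above look roughly like this (run from the ZMap source root, with the final step as root):

    $ git clone git@github.com:ewust/forge_socket.git
    $ cd forge_socket
    $ make
    $ sudo insmod forge_socket.ko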
You must also tell the kernel not to send RST packets. An easy way to disable RST packets system wide is to use **iptables**. `iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root will do this, though you may also add an optional --dport X to limit this to the port (X) you are scanning. To remove this after your scan completes, you can run `iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root.
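For example, to suppress RSTs only for a hypothetical scan of port 80, and to remove the rule again once the scan finishes, the two commands (run as root) would look like this:

    # iptables -A OUTPUT -p tcp -m tcp --dport 80 --tcp-flags RST,RST RST,RST -j DROP
    # iptables -D OUTPUT -p tcp -m tcp --dport 80 --tcp-flags RST,RST RST,RST -j DROP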
Now you should be able to build the forge-socket ZMap example program. To run it, you must use the **extended_file** ZMap output module:
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
./forge-socket -c 500 -d ./http-req > ./http-banners.out
See the README in `examples/forge-socket` for more details.
----------
### Writing Probe and Output Modules ###
ZMap can be extended to support different types of scanning through **probe modules** and additional types of results output through **output modules**. Registered probe and output modules can be listed through the command-line interface:
**--list-probe-modules**
Lists installed probe modules
**--list-output-modules**
Lists installed output modules
#### Output Modules ####
ZMap output and post-processing can be extended by implementing and registering **output modules** with the scanner. Output modules receive a callback for every received response packet. While the default provided modules provide simple output, these modules are also capable of performing additional post-processing (e.g. tracking duplicates or outputting numbers in terms of AS instead of IP address).
Output modules are created by defining a new output_module struct and registering it in [output_modules.c][2]:
typedef struct output_module {
const char *name; // how is output module referenced in the CLI
unsigned update_interval; // how often is update called in seconds
output_init_cb init; // called at scanner initialization
output_update_cb start; // called at the beginning of scanner
output_update_cb update; // called every update_interval seconds
output_update_cb close; // called at scanner termination
output_packet_cb process_ip; // called when a response is received
const char *helptext; // Printed when --list-output-modules is called
} output_module_t;
Output modules must have a name, which is how they are referenced on the command-line, and generally implement a `success_ip` and oftentimes an `other_ip` callback. The process_ip callback is called for every response packet that is received and passed through the output filter by the current **probe module**. The response may or may not be considered a success (e.g. it could be a TCP RST). These callbacks must define functions that match the `output_packet_cb` definition:
int (*output_packet_cb) (
ipaddr_n_t saddr, // IP address of scanned host in network-order
ipaddr_n_t daddr, // destination IP address in network-order
const char* response_type, // send-module classification of packet
int is_repeat, // {0: first response from host, 1: subsequent responses}
int in_cooldown, // {0: not in cooldown state, 1: scanner in cooldown state}
const u_char* packet, // pointer to struct iphdr of IP packet
size_t packet_len // length of packet in bytes
);
An output module can also register callbacks to be executed at scanner initialization (tasks such as opening an output file), start of the scan (tasks such as documenting blacklisted addresses), during regular intervals during the scan (tasks such as progress updates), and close (tasks such as closing any open file descriptors). These callbacks are provided with complete access to the scan configuration and current state:
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
which are defined in [output_modules.h][3]. An example is available at [src/output_modules/module_csv.c][4].
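As an illustration only (this is not code from the ZMap tree), a skeletal output module that counts unique responding hosts could be wired up roughly as follows, using the struct and callback definitions above; it would still need to be added to the registration list in output_modules.c:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/types.h>

    #include "output_modules.h" // output_module_t and the callback typedefs

    static uint64_t responding_hosts = 0;

    // process_ip callback: invoked for each response that passes the output filter
    static int example_process_ip(ipaddr_n_t saddr, ipaddr_n_t daddr,
            const char *response_type, int is_repeat, int in_cooldown,
            const u_char *packet, size_t packet_len)
    {
        (void) saddr; (void) daddr; (void) response_type;
        (void) in_cooldown; (void) packet; (void) packet_len;
        if (!is_repeat) {
            responding_hosts++; // count only the first response from each host
        }
        return EXIT_SUCCESS;
    }

    // close callback: report the final count at scanner termination
    static int example_close(struct state_conf *conf,
            struct state_send *send, struct state_recv *recv)
    {
        (void) conf; (void) send; (void) recv;
        fprintf(stderr, "example: %llu responding hosts\n",
                (unsigned long long) responding_hosts);
        return EXIT_SUCCESS;
    }

    output_module_t module_example = {
        .name = "example",
        .update_interval = 0,
        .close = example_close,
        .process_ip = example_process_ip,
        .helptext = "Counts unique responding hosts (illustrative sketch only)"
    };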
#### Probe Modules ####
Packets are constructed using probe modules which allow abstracted packet creation and response classification. ZMap comes with two scan modules by default: `tcp_synscan` and `icmp_echoscan`. By default, ZMap uses `tcp_synscan`, which sends TCP SYN packets, and classifies responses from each host as open (received SYN+ACK) or closed (received RST). ZMap also allows developers to write their own probe modules for use with ZMap, using the following API.
Each type of scan is implemented by developing and registering the necessary callbacks in a `probe_module_t` struct:
typedef struct probe_module {
const char *name; // how scan is invoked on command-line
size_t packet_length; // how long is probe packet (must be static size)
const char *pcap_filter; // PCAP filter for collecting responses
size_t pcap_snaplen; // maximum number of bytes for libpcap to capture
uint8_t port_args; // set to 1 if ZMap requires a --target-port be
// specified by the user
probe_global_init_cb global_initialize; // called once at scanner initialization
probe_thread_init_cb thread_initialize; // called once for each thread packet buffer
probe_make_packet_cb make_packet; // called once per host to update packet
probe_validate_packet_cb validate_packet; // called once per received packet,
// return 0 if packet is invalid,
// non-zero otherwise.
probe_print_packet_cb print_packet; // called per packet if in dry-run mode
probe_classify_packet_cb process_packet; // called by receiver to classify response
probe_close_cb close; // called at scanner termination
fielddef_t *fields; // Definitions of the fields specific to this module
int numfields; // Number of fields
} probe_module_t;
At scanner initialization, `global_initialize` is called once and can be utilized to perform any necessary global configuration or initialization. However, `global_initialize` does not have access to the packet buffer, which is thread-specific. Instead, `thread_initialize` is called at the initialization of each sender thread and is provided with access to the buffer that will be used for constructing probe packets along with global source and destination values. This callback should be used to construct the host agnostic packet structure such that only specific values (e.g. destination host and checksum) need to be updated for each host. For example, the Ethernet header will not change between scanned hosts (minus the checksum, which is calculated in hardware by the NIC) and therefore can be defined ahead of time in order to reduce overhead at scan time.
The `make_packet` callback is called for each host that is scanned to allow the **probe module** to update host specific values and is provided with IP address values, an opaque validation string, and probe number (shown below). The probe module is responsible for placing as much of the validation string into the probe as possible, in such a way that when a valid response is returned by a server, the probe module can verify that it is present. For example, for a TCP SYN scan, the tcp_synscan probe module can use the TCP source port and sequence number to store the validation string. Response packets (SYN+ACKs) will contain the expected values in the destination port and acknowledgement number.
int make_packet(
void *packetbuf, // packet buffer
ipaddr_n_t src_ip, // source IP in network-order
ipaddr_n_t dst_ip, // destination IP in network-order
uint32_t *validation, // validation string to place in probe
int probe_num // if sending multiple probes per host,
// this will be which probe number for this
// host we are currently sending
);
Scan modules must also define `pcap_filter`, `validate_packet`, and `process_packet`. Only packets that match the PCAP filter will be considered by the scanner. For example, in the case of a TCP SYN scan, we only want to investigate TCP SYN/ACK or TCP RST packets and would utilize a filter similar to `tcp && tcp[13] & 4 != 0 || tcp[13] == 18`. The `validate_packet` function will be called for every packet that fulfills this PCAP filter. If the validation returns non-zero, the `process_packet` function will be called, and will populate a fieldset using fields defined in `fields` with data from the packet. For example, the following code processes a packet for the TCP synscan probe module.
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
{
struct iphdr *ip_hdr = (struct iphdr *)&packet[sizeof(struct ethhdr)];
struct tcphdr *tcp = (struct tcphdr*)((char *)ip_hdr
+ (sizeof(struct iphdr)));
fs_add_uint64(fs, "sport", (uint64_t) ntohs(tcp->source));
fs_add_uint64(fs, "dport", (uint64_t) ntohs(tcp->dest));
fs_add_uint64(fs, "seqnum", (uint64_t) ntohl(tcp->seq));
fs_add_uint64(fs, "acknum", (uint64_t) ntohl(tcp->ack_seq));
fs_add_uint64(fs, "window", (uint64_t) ntohs(tcp->window));
if (tcp->rst) { // RST packet
fs_add_string(fs, "classification", (char*) "rst", 0);
fs_add_uint64(fs, "success", 0);
} else { // SYNACK packet
fs_add_string(fs, "classification", (char*) "synack", 0);
fs_add_uint64(fs, "success", 1);
}
}
--------------------------------------------------------------------------------
via: https://zmap.io/documentation.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://github.com/ewust/forge_socket/
[2]:https://github.com/zmap/zmap/blob/v1.0.0/src/output_modules/output_modules.c
[3]:https://github.com/zmap/zmap/blob/master/src/output_modules/output_modules.h
[4]:https://github.com/zmap/zmap/blob/master/src/output_modules/module_csv.c

View File

@ -0,0 +1,167 @@
怎样在CentOS 7.0上安装/配置VNC服务器
================================================================================
这是一个关于怎样在你的 CentOS 7 上安装配置 [VNC][1] 服务的教程。当然这个教程也适合 RHEL 7 。在这个教程里我们将学习什么是VNC以及怎样在 CentOS 7 上安装配置 [VNC 服务器][1]。
我们都知道,作为一个系统管理员,大多数时间是通过网络管理服务器的。在管理服务器的过程中很少会用到图形界面,多数情况下我们只是用 SSH 来完成我们的管理任务。在这篇文章里,我们将配置 VNC 来提供一个连接我们 CentOS 7 服务器的方法。VNC 允许我们开启一个远程图形会话来连接我们的服务器,这样我们就可以通过网络远程访问服务器的图形界面了。
VNC 服务器是一个自由且开源的软件,它可以让用户可以远程访问服务器的桌面环境。另外连接 VNC 服务器需要使用 VNC viewer 这个客户端。
**一些 VNC 服务器的优点:**
- 远程的图形管理方式让工作变得简单方便。
- 剪贴板可以在 CentOS 服务器主机和 VNC 客户端机器之间共享。
- CentOS 服务器上也可以安装图形工具,让管理能力变得更强大。
- 只要安装了 VNC 客户端,任何操作系统都可以管理 CentOS 服务器了。
- 比基于 SSH 的图形转发和 RDP 连接更可靠。
那么,让我们开始安装 VNC 服务器之旅吧。我们需要按照下面的步骤一步一步来搭建一个有效的 VNC。
首先我们需要一个有效的桌面环境X-Window如果没有的话要先安装一个。
**注意:以下命令必须以 root 权限运行。要切换到 root请在终端下运行“sudo -s”不包括双引号**
### 1. 安装 X-Window ###
首先我们需要安装 [X-Window][2],在终端中运行下面的命令,安装会花费一点时间。
# yum check-update
# yum groupinstall "X Window System"
![installing x windows](http://blog.linoxide.com/wp-content/uploads/2015/01/installing-x-windows.png)
# yum install gnome-classic-session gnome-terminal nautilus-open-terminal control-center liberation-mono-fonts
![install gnome classic session](http://blog.linoxide.com/wp-content/uploads/2015/01/gnome-classic-session-install.png)
# unlink /etc/systemd/system/default.target
# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target
![configuring graphics](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-graphics.png)
# reboot
在服务器重启之后,我们就有了一个工作着的 CentOS 7 桌面环境了。
现在,我们要在服务器上安装 VNC 服务器了。
### 2. 安装 VNC 服务器 ###
现在要在我们的 CentOS 7 上安装 VNC 服务器了。我们需要执行下面的命令。
# yum install tigervnc-server -y
![vnc server](http://blog.linoxide.com/wp-content/uploads/2015/01/install-tigervnc.png)
### 3. 配置 VNC ###
然后,我们需要在 **/etc/systemd/system/** 目录里创建一个配置文件。我们可以从 **/lib/systemd/system/vncserver@.service** 拷贝一份配置文件范例过来。
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
![copying vnc server configuration](http://blog.linoxide.com/wp-content/uploads/2015/01/copying-configuration.png)
接着我们用自己最喜欢的编辑器(这儿我们用的 **nano** )打开 **/etc/systemd/system/vncserver@:1.service** ,找到下面这几行,用自己的用户名替换掉 <USER> 。举例来说,我的用户名是 linoxide 所以我用 linoxide 来替换掉 <USER>
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
PIDFile=/home/<USER>/.vnc/%H%i.pid
替换成
ExecStart=/sbin/runuser -l linoxide -c "/usr/bin/vncserver %i"
PIDFile=/home/linoxide/.vnc/%H%i.pid
如果是 root 用户则
ExecStart=/sbin/runuser -l root -c "/usr/bin/vncserver %i"
PIDFile=/root/.vnc/%H%i.pid
![configuring user](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-user.png)
好了,下面重启 systemd 。
# systemctl daemon-reload
最后还要设置一下用户的 VNC 密码。要设置某个用户的密码,必须要获得该用户的权限,这里我用 linoxide 的权限,执行“**su linoxide**”就可以了。
# su linoxide
$ sudo vncpasswd
![setting vnc password](http://blog.linoxide.com/wp-content/uploads/2015/01/vncpassword.png)
**确保你输入的密码不少于 6 个字符**
### 4. 开启服务 ###
用下面的命令(永久地)开启服务:
$ sudo systemctl enable vncserver@:1.service
启动服务。
$ sudo systemctl start vncserver@:1.service
### 5. 防火墙设置 ###
我们需要配置防火墙来让 VNC 服务正常工作。
$ sudo firewall-cmd --permanent --add-service vnc-server
$ sudo systemctl restart firewalld.service
![allowing firewalld](http://blog.linoxide.com/wp-content/uploads/2015/01/allowing-firewalld.png)
现在就可以用 IP 和端口号(例如 192.168.1.1:1 ,这里的端口不是服务器的端口,而是视 VNC 连接数的多少从1开始排序——译注来连接 VNC 服务器了。
### 6. 用 VNC 客户端连接服务器 ###
好了,现在已经完成了 VNC 服务器的安装了。要使用 VNC 连接服务器,我们还需要在本地计算机上安装一个 VNC 客户端,用来连接远程服务器。
![remote access vncserver from vncviewer](http://blog.linoxide.com/wp-content/uploads/2015/01/vncviewer.png)
你可以用像 [Tightvnc viewer][3] 和 [Realvnc viewer][4] 的客户端来连接到服务器。
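如果你习惯使用命令行,也可以直接用 VNC 客户端自带的 vncviewer 命令(以 TigerVNC/TightVNC 为例)指定“IP:端口号”来连接。下面以本文服务器的地址为例(请替换成你自己的 IP 和端口):

    $ vncviewer 96.126.120.92:1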
要用其他用户和端口连接 VNC 服务器请回到第3步添加一个新的用户和端口。你需要创建 **vncserver@:2.service** 并替换配置文件里的用户名和之后步骤里相应的文件名、端口号(下面给出一个示意片段)。**请确保你登录 VNC 服务器用的是你之前配置 VNC 密码的时候使用的那个用户名**
VNC 服务本身使用的是5900端口。鉴于有不同的用户使用 VNC ,每个人的连接都会获得不同的端口。配置文件名里面的数字告诉 VNC 服务器把服务运行在5900的子端口上。在我们这个例子里第一个 VNC 服务会运行在59015900 + 1端口上之后的依次增加运行在5900 + x 号端口上。其中 x 是指之后用户的配置文件名 **vncserver@:x.service** 里面的 x 。
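下面是一个示意性的片段,假设第二个用户名为 user2用户名和家目录路径请按你的实际情况替换把它写进 **vncserver@:2.service** 里,对应的就是 5902 端口:

    ExecStart=/sbin/runuser -l user2 -c "/usr/bin/vncserver %i"
    PIDFile=/home/user2/.vnc/%H%i.pid

修改完成后,同样需要重新加载 systemd 并启用、启动这个新服务:

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable vncserver@:2.service
    $ sudo systemctl start vncserver@:2.service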
在建立连接之前,我们需要知道服务器的 IP 地址和端口。IP 地址是一台计算机在网络中的独特的识别号码。我的服务器的 IP 地址是96.126.120.92VNC 用户端口是1。执行下面的命令可以获得服务器的公网 IP 地址。
# curl -s checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
### 总结 ###
好了,现在我们已经在运行 CentOS 7 / RHEL 7 (Red Hat Enterprises Linux)的服务器上安装配置好了 VNC 服务器。VNC 是自由开源软件中实现服务器远程控制的最简单的工具之一,也是 Teamviewer Remote Access 的一款优秀的替代品。VNC 允许一个安装了 VNC 客户端的用户远程控制一台安装了 VNC 服务的服务器。下面还有一些经常使用的相关命令。好好玩!
#### 其他命令: ####
- 关闭 VNC 服务。
# systemctl stop vncserver@:1.service
- 禁止 VNC 服务开机启动。
# systemctl disable vncserver@:1.service
- 关闭防火墙。
# systemctl stop firewalld.service
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-configure-vnc-server-centos-7-0/
作者:[Arun Pyasi][a]
译者:[boredivan](https://github.com/boredivan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://en.wikipedia.org/wiki/Virtual_Network_Computing
[2]:http://en.wikipedia.org/wiki/X_Window_System
[3]:http://www.tightvnc.com/
[4]:https://www.realvnc.com/

View File

@ -1,41 +0,0 @@
Nmap不只是邪恶
================================================================================
如果说 SSH 是系统管理员世界的“瑞士军刀”,那么 Nmap 就是一箱炸药。炸药很容易被误用、伤到自己,但它也是一个威力强大的工具,能够完成一些看似不可能的任务。
大多数人想到 Nmap 时,想到的是用它扫描服务器、查找开放端口来实施攻击。然而这些年来,同样的“超能力”在你管理服务器或排查计算机问题时也会变得极其有用。无论是想找出网络上哪种类型的服务器占用了指定的 IP 地址,还是想定位一台新的 NAS 设备并扫描网络,它都非常有用。
图 1 显示了对我的 QNAP NAS 的网络扫描。我使用这台设备的唯一目的是 NFS 和 SMB 文件共享,但你可以看到,它开放着一大堆端口。如果没有 Nmap很难知道这台机器上到底运行着什么。
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11825nmapf1.jpg)
### 图1 网络扫描 ###
另一个让人意想不到的用处是用它来扫描整个网段。你甚至不需要 root 权限,而且可以非常容易地指定你想要扫描的网络块,例如,输入:
nmap 192.168.1.0/24
上述命令会扫描我本地网络中全部 254 个可用的 IP 地址,让我知道哪些地址可以 ping 通,以及哪些端口是开放的。如果你刚刚接入一台新硬件,但不知道它通过 DHCP 获取到的 IP 地址,此时 Nmap 就是无价之宝。例如,上述命令在我的网络中发现了下面这台设备。
Nmap scan report for TIVO-8480001903CCDDB.brainofshawn.com (192.168.1.220)
Host is up (0.0083s latency).
Not shown: 995 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
2190/tcp open tivoconnect
2191/tcp open tvbus
9080/tcp closed glrpc
它不仅显示出了新的 Tivo 设备,还告诉我哪些端口是开放的。凭借它的可靠性、可用性以及“黑帽”能力Nmap 获得了本月的“编辑推荐”奖。这并不是一个新程序,但如果你是 Linux 用户,你应该玩玩它。
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/nmap%E2%80%94not-just-evil
作者:[Shawn Powers][a]
译者:[theo-l](https://github.com/theo-l)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/users/shawn-powers

View File

@ -0,0 +1,150 @@
走进Linux之systemd启动过程
================================================================================
Linux系统的启动方式有点复杂而且总是有需要优化的地方。传统的Linux系统启动过程主要由著名的init进程也被称为SysV init启动系统处理而基于init的启动系统一直被认为存在效率不足的问题。systemd是Linux系统的另一种启动方式宣称弥补了以[传统Linux SysV init][1]为基础的系统的缺点。在这里我们将着重讨论systemd的特性和争议但是为了更好地理解它也会看一下传统的以SysV init为基础的Linux启动过程是什么样的。友情提醒一下systemd仍然处在测试阶段而未来发布的Linux操作系统也正准备用systemd启动管理程序替代当前的启动过程。
### 理解Linux启动过程 ###
在我们打开Linux电脑的电源后第一个启动的进程就是init。分配给init进程的PID是1。它是系统其他所有进程的父进程。当一台Linux电脑启动后处理器会先在系统存储中查找BIOS之后BIOS会测试系统资源然后找到第一个引导设备通常设置为硬盘然后会查找硬盘的主引导记录MBR然后加载到内存中并把控制权交给它以后的启动过程就由MBR控制。
主引导记录会初始化引导程序Linux上有两个著名的引导程序GRUB和LILO80%的Linux系统在用GRUB引导程序这个时候GRUB或LILO会加载内核模块。内核会马上查找/sbin下的init进程并执行它。从这里开始init成为了Linux系统的父进程。init读取的第一个文件是/etc/inittab通过它init会确定我们Linux操作系统的运行级别。它会从文件/etc/fstab里查找分区表信息然后做相应的挂载。然后init会启动/etc/init.d里指定的默认启动级别的所有服务/脚本。所有服务在这里通过init一个一个被初始化。在这个过程里init每次只启动一个服务所有服务/守护进程都在后台执行并由init来管理。
关机过程差不多是相反的过程首先init停止所有服务最后阶段会卸载文件系统。
以上提到的启动过程有一些不足的地方。而用一种更好的方式来替代传统init的需求已经存在很长时间了。也产生了许多替代方案。其中比较著名的有UpstartEpochMuda和Systemd。而Systemd获得最多关注并被认为是目前最佳的方案。
### 理解Systemd ###
开发Systemd的主要目的就是减少系统引导时间和计算开销。Systemd系统管理守护进程最开始以GNU GPL协议授权开发现在已转为使用GNU LGPL协议它是如今讨论最热烈的引导和服务管理程序。如果你的Linux系统配置为使用Systemd引导程序那么将由systemd替代传统的SysV init来处理启动过程。Systemd的一个核心功能是它同时兼容SysV init的启动脚本。
Systemd引入了并行启动的概念它会为每个需要启动的守护进程建立一个管道套接字这些套接字对于使用它们的进程来说是抽象的这样它们可以允许不同守护进程之间进行交互。Systemd会创建新进程并为每个进程分配一个控制组。处于不同控制组的进程之间可以通过内核来互相通信。[systemd处理开机启动进程][2]的方式非常漂亮和传统基于init的系统比起来优化了太多。让我们看下Systemd的一些核心功能。
- 和init比起来引导过程简化了很多
- Systemd支持并发引导过程从而可以更快启动
- 通过控制组来追踪进程而不是PID
- 优化了处理引导过程和服务之间依赖的方式
- 支持系统快照和恢复
- 监控已启动的服务;也支持重启已崩溃服务
- 包含了systemd-login模块用于控制用户登录
- 支持加载和卸载组件
- 较低的内存占用以及任务调度能力
- 记录事件的Journald模块和记录系统日志的syslogd模块
Systemd同时也清晰地处理了系统关机过程。它在/usr/lib/systemd/目录下有三个脚本分别叫systemd-halt.servicesystemd-poweroff.servicesystemd-reboot.service。这几个脚本会在用户选择关机重启或待机时执行。在接收到关机事件时systemd首先卸载所有文件系统并停止所有内存交换设备断开存储设备之后停止所有剩下的进程。
![](http://images.linoxide.com/systemd-boot-process.jpg)
### Systemd结构概览 ###
让我们看一下Linux系统在使用systemd作为引导程序时的开机启动过程的结构性细节。为了简单我们将在下面按步骤列出来这个过程
**1.** 当你打开电源后电脑所做的第一件事情就是BIOS初始化。BIOS会读取引导设备设定定位并传递系统控制权给MBR假设硬盘是第一引导设备
**2.** MBR从Grub或LILO引导程序读取相关信息并初始化内核。接下来将由Grub或LILO继续引导系统。如果你在grub配置文件里指定了systemd作为引导管理程序之后的引导过程将由systemd完成。Systemd使用“target”来处理引导和服务管理过程。这些systemd里的“target”文件被用于分组不同的引导单元以及启动同步进程。
**3.** systemd执行的第一个目标是**default.target**。但实际上default.target是指向**graphical.target**的软链接。Linux里的软链接用起来和Windows下的快捷方式一样。文件Graphical.target的实际位置是/usr/lib/systemd/system/graphical.target。在下面的截图里显示了graphical.target文件的内容。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/graphical1.png)
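你也可以直接用 systemctl 查看当前的默认 target在启用了图形界面的系统上通常会得到类似下面的输出

    $ systemctl get-default
    graphical.target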
**4.** 在这个阶段,会启动**multi-user.target**而这个target将自己的子单元放在目录“/etc/systemd/system/multi-user.target.wants”里。这个target为多用户支持设定系统环境。非root用户会在这个阶段的引导过程中启用。防火墙相关的服务也会在这个阶段启动。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/multi-user-target1.png)
"multi-user.target"会将控制权交给另一层“**basic.target**”。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Basic-Target.png)
**5.** "basic.target"单元用于启动普通服务特别是图形管理服务。它通过/etc/systemd/system/basic.target.wants目录来决定哪些服务会被启动basic.target之后将控制权交给**sysinit.target**.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Sysint-Target.png)
**6.** "sysinit.target"会启动重要的系统服务例如系统挂载内存交换空间和设备内核补充选项等等。sysinit.target在启动过程中会传递给**local-fs.target**。这个target单元的内容如下面截图里所展示。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/local-FS-Target.png)
**7.** local-fs.target这个target单元不会启动用户相关的服务它只处理底层核心服务。这个target会根据/etc/fstab和/etc/inittab来执行相关操作。
### 系统引导性能分析 ###
Systemd提供了工具用于识别和定位引导相关的问题或性能影响。**Systemd-analyze**是一个内建的命令可以用来检测引导过程。你可以找出在启动过程中出错的单元然后跟踪并改正引导组件的问题。在下面列出一些常用的systemd-analyze命令。
**systemd-analyze time** 用于显示内核和普通用户空间启动时所花的时间。
$ systemd-analyze time
Startup finished in 1440ms (kernel) + 3444ms (userspace)
**systemd-analyze blame** 会列出所有正在运行的单元,按从初始化开始到当前所花的时间排序,通过这种方式你就知道哪些服务在引导过程中要花较长时间来启动。
$ systemd-analyze blame
2001ms mysqld.service
234ms httpd.service
191ms vmms.service
**systemd-analyze verify** 显示在所有系统单元中是否有语法错误。**systemd-analyze plot** 可以用来把整个引导过程写入一个SVG格式文件里。整个引导过程非常长不方便阅读所以通过这个命令我们可以把输出写入一个文件之后再查看和分析。下面这个命令就是做这个。
systemd-analyze plot > boot.svg
### Systemd的争议 ###
Systemd并没有幸运地获得所有人的青睐一些专家和管理员对于它的工作方式和开发有不同意见。根据对于Systemd的批评它不是“类Unix”方式因为它试着替换一些系统服务。一些专家也不喜欢使用二进制配置文件的想法。据说编辑systemd配置非常困难而且没有一个可用的图形工具。
### 在Ubuntu 14.04和12.04上测试Systemd ###
本来Ubuntu决定从Ubuntu 16.04 LTS开始使用Systemd来替换当前的引导过程。Ubuntu 16.04预计在2016年4月发布但是考虑到Systemd的流行和需求即将发布的**Ubuntu 15.04**将采用它作为默认引导程序。好消息是Ubuntu 14.04 Trusty Tahr和Ubuntu 12.04 Precise Pangolin的用户可以在他们的机器上测试Systemd。测试过程并不复杂你所要做的只是把相关的PPA包含到系统中更新仓库并升级系统。
**声明**请注意它仍然处于Ubuntu的测试和开发阶段。升级测试包可能会带来一些未知错误最坏的情况下有可能损坏你的系统配置。请确保在尝试升级前已经备份好重要数据。
在终端里运行下面的命令来添加PPA到你的Ubuntu系统里
sudo add-apt-repository ppa:pitti/systemd
你将会看到警告信息因为我们尝试使用临时/测试PPA而它们是不建议用于实际工作机器上的。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/PPA-Systemd1.png)
然后运行下面的命令更新APT包管理仓库。
sudo apt-get update
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Update-APT1.png)
运行下面的命令升级系统。
sudo apt-get dist-upgrade
![](http://blog.linoxide.com/wp-content/uploads/2015/03/System-Upgrade.png)
就这些你应该已经可以在你的Ubuntu系统里看到Systemd配置文件了打开/lib/systemd/目录可以看到这些文件。
好吧现在让我们编辑一下grub配置文件指定systemd作为默认引导程序。可以使用Gedit文字编辑器编辑grub配置文件。
sudo gedit /etc/default/grub
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Edit-Grub.png)
在文件里修改GRUB_CMDLINE_LINUX_DEFAULT项设定它的参数为“**init=/lib/systemd/systemd**”
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Grub-Systemd.png)
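修改后的那一行大致如下所示这里假设原有参数是默认的“quiet splash”请保留你系统上已有的其它参数

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/lib/systemd/systemd"

保存文件之后,记得运行 update-grub 重新生成引导配置,让修改生效:

    $ sudo update-grub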
就这样你的Ubuntu系统已经不再使用传统的引导程序了改为使用Systemd管理器。重启你的机器然后查看systemd引导过程吧。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Sytemd-Boot.png)
### 结论 ###
Systemd毫无疑问为改进Linux引导过程前进了一大步它包含了一套漂亮的库和守护进程配合工作来优化系统引导和关闭过程。许多Linux发行版正准备将它作为自己的正式引导程序。在以后的Linux发行版中我们将有望看到systemd开机。但是另一方面为了获得成功并广泛应用systemd仍需要认真处理批评意见。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/systemd-boot-process/
作者:[Aun Raza][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:http://linoxide.com/booting/boot-process-of-linux-in-detail/
[2]:http://0pointer.de/blog/projects/self-documented-boot.html