Merge pull request #2 from LCTT/master

更新 20150402
This commit is contained in:
Chang Liu 2015-04-02 18:16:37 +08:00
commit 1874ae51aa
103 changed files with 3571 additions and 1172 deletions

View File

@ -1,10 +1,10 @@
在Ubuntu 14.04 中修复无法修复回收站[快速提示]
在Ubuntu 14.04 中修复无法清空回收站的问题
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/empty-the-trash.jpg)
### 问题 ###
**无法在Ubuntu 14.04中清空回收站的问题**。我右键回收站图标并选择清空回收站就像我一直做的那样。我看到进度条显示删除文件中过了一段时间。但是它停止了并且Nautilus文件管理也停止了。我不得不在终端中停止了它。
**我遇到了无法在Ubuntu 14.04中清空回收站的问题**。我右键回收站图标并选择清空回收站就像我一直做的那样。我看到进度条显示删除文件中过了一段时间。但是它停止了并且Nautilus文件管理也停止了。我不得不在终端中停止了它。
但是这很痛苦因为文件还在垃圾箱中。并且我反复尝试清空后窗口都冻结了。
@ -18,7 +18,7 @@
这里注意你的输入。你使用超级管理员权限来运行删除命令。我相信你不会删除其他文件或者目录。
上面的命令会删除回收站目录下的所有文件。换句话说,这是用命令清空垃圾箱。使用玩上面的命令后,你会看到垃圾箱已经清空了。如果你删除了所有文件你不应该在看到Nautilus崩溃的问题了。
上面的命令会删除回收站目录下的所有文件。换句话说,这是用命令清空垃圾箱。使用完上面的命令后,你会看到垃圾箱已经清空了。如果你清空了所有文件,就不应该再看到Nautilus崩溃的问题了。
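LCTT 补充:上文所说的清空回收站的命令通常类似下面这样(示例,请注意核对路径再执行):

    sudo rm -rf ~/.local/share/Trash/*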
### 对你有用么? ###
@ -30,7 +30,7 @@ via: http://itsfoss.com/fix-empty-trash-ubuntu/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -15,13 +15,13 @@ MariaDB是MySQL社区开发的分支也是一个增强型的替代品。它
现在让我们迁移到MariaDB吧
**以测试为目的**让我们创建一个叫**linoxidedb**的示例数据库。
让我们创建一个叫**linoxidedb**的**用于测试的**示例数据库。
使用以下命令用root账户登陆MySQL
$ mysql -u root -p
输入mysql root用户密码后,你将进入**mysql的命令行**
输入mysql root 用户密码后,你将进入**mysql的命令行**
**创建测试数据库:**
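LCTT 补充:也可以直接在 shell 中用一条命令创建这个测试库(示例,库名沿用上文):

    $ mysql -u root -p -e "CREATE DATABASE linoxidedb;"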
@ -54,7 +54,8 @@ MariaDB是MySQL社区开发的分支也是一个增强型的替代品。它
$ mysqldump: Error: Binlogging on server not active
![](http://blog.linoxide.com/wp-content/uploads/2015/01/mysqldump-error.png)
mysqldump error
*mysqldump error*
为了修复这个错误,我们需要对**my.cnf**文件做一些小改动。
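LCTT 补充:这里所说的小改动通常是在 [mysqld] 段里开启二进制日志,大致如下(示例,日志文件名可自定):

    [mysqld]
    log-bin=mysql-bin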
@ -68,7 +69,7 @@ mysqldump error
![configuring my.cnf](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-my.cnf_.png)
好了在保存并关闭文件后我们需要重启一下mysql服务。运行以下命令重启
好了在保存并关闭文件后我们需要重启一下mysql服务。运行以下命令重启
$ sudo /etc/init.d/mysql restart
@ -77,17 +78,18 @@ mysqldump error
$ mysqldump --all-databases --user=root --password --master-data > backupdatabase.sql
![](http://blog.linoxide.com/wp-content/uploads/2015/01/crearing-bakup-file.png)
dumping databases
*dumping databases*
上面的命令将会备份所有的数据库,把它们存储在当前目录下的**backupdatabase.sql**文件中。
### 2. 卸载MySQL ###
首先,我们得把**my.cnt文件挪到安全的地方去**。
首先,我们得把**my.cnf文件挪到安全的地方去**。
**注**my.cnf文件将不会在你卸载MySQL包的时候被删除我们这样做只是以防万一。在MariaDB安装时它会询问我们是保持现存的my.cnf文件还是使用包中自带的版本即新my.cnf文件
**注**在你卸载MySQL包的时候不会自动删除my.cnf文件我们这样做只是以防万一。在MariaDB安装时它会询问我们是保持现存的my.cnf文件还是使用包中自带的版本即新my.cnf文件
在shell或终端中输入如下命令来备份my.cnt文件:
在shell或终端中输入如下命令来备份my.cnf文件:
$ sudo cp /etc/mysql/my.cnf my.cnf.bak
@ -111,7 +113,7 @@ dumping databases
![adding mariadb repo](http://blog.linoxide.com/wp-content/uploads/2015/01/adding-repo-mariadb.png)
键值导入并且添加完仓库后你就可以用以下命令安装MariaDB了
键值导入并且添加完仓库后你就可以用以下命令安装MariaDB了
$ sudo apt-get update
$ sudo apt-get install mariadb-server
@ -120,7 +122,7 @@ dumping databases
![my.conf configuration prompt](http://blog.linoxide.com/wp-content/uploads/2015/01/my.conf-configuration-prompt.png)
我们应该还没忘记在MariaDB安装时它会问你是使用现有的my.cnf文件还是包中自带的版本。你可以使用以前的my.cnf也可以用包中自带的。即使你想直接使用新的my.cnf文件你依然可以晚点将以前的备份内容还原进去别忘了我们已经将它复制到安全的地方那个去。所以我们直接选择了默认的选项“N”。如果需要安装其他版本请参考[MariaDB官方仓库][2]。
我们应该还没忘记在MariaDB安装时它会问你是使用现有的my.cnf文件还是包中自带的版本。你可以使用以前的my.cnf也可以用包中自带的。即使你想直接使用新的my.cnf文件你依然可以稍后将以前的备份内容还原进去别忘了我们已经将它复制到安全的地方了。所以我们直接选择了默认的选项“N”。如果需要安装其他版本请参考[MariaDB官方仓库][2]。
### 4. 恢复配置文件 ###
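LCTT 补充:恢复动作大致就是把之前备份的 my.cnf 拷回去并重启服务(示例命令,备份文件名沿用上文):

    $ sudo cp my.cnf.bak /etc/mysql/my.cnf
    $ sudo /etc/init.d/mysql restart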
@ -136,7 +138,7 @@ dumping databases
就这样,我们已成功将之前的数据库导入了进来。
来,让我们登一下mysql命令行检查一下数据库是否真的已经导入了
来,让我们登一下mysql命令行检查一下数据库是否真的已经导入了
$ mysql -u root -p
@ -152,15 +154,15 @@ dumping databases
### 总结 ###
最后我们已经成功地从MySQL迁移到了MariaDB数据库管理系统。MariaDB比MySQL好虽然在性能方面MySQL还是比它更快但是MariaDB的优点在于它额外的特性与支持的许可证。这能够确保它自由开源FOSS并永久自由开源相比之下MySQL还有许多额外的插件有些不能自由使用代码、有些没有公开的开发进程、有些在不久的将来会变的不再自由开源。如果你有任何的问题、评论、反馈给我们不要犹豫直接在评论区留下你的看法。谢谢观看本教程希望你能喜欢MariaDB。
最后我们已经成功地从MySQL迁移到了MariaDB数据库管理系统。MariaDB比MySQL好虽然在性能方面MySQL还是比它更快但是MariaDB的优点在于它额外的特性与支持的许可证。这能够确保它自由开源FOSS并永久自由开源相比之下MySQL还有许多额外的插件有些不能自由使用代码、有些没有公开的开发进程、有些在不久的将来会变得不再自由开源。如果你有任何的问题、评论、反馈给我们不要犹豫直接在评论区留下你的看法。谢谢观看本教程希望你能喜欢MariaDB。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/migrate-mysql-mariadb-linux/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -2,13 +2,13 @@
================================================================================
大家好今天我们会学习如何在Linux PC或者服务器上找出和删除重复文件。这里有一款工具你可以工具自己的需要使用。
无论你是否正在使用Linux桌面或者服务器有一些很好的工具能帮你扫描系统中的重复文件并删除它们来释放空间。图形界面和命令行界面的都有。重复文件是磁盘空间不必要的浪费。毕竟,如果你的确需要在不同的位置享有同一个文件,你可以使用软链接或者硬链接,这样就可以这样就可以在磁盘的一处地方存储数据了。
无论你是否正在使用Linux桌面或者服务器有一些很好的工具能帮你扫描系统中的重复文件并删除它们来释放空间。图形界面和命令行界面的都有。重复文件是磁盘空间不必要的浪费。毕竟,如果你的确需要在不同的位置享有同一个文件,你可以使用软链接或者硬链接,这样就可以在磁盘的一个地方存储数据了。
### FSlint ###
[FSlint][1] 在不同的Linux发行二进制仓库中都有包括Ubuntu、Debian、Fedora和Red Hat。只需你运行你的包管理器并安装“fslint”包就行。这个工具默认提供了一个简单的图形化界面同样也有包含各种功能的命令行版本。
[FSlint][1] 在不同Linux发行版的二进制仓库中都有包括Ubuntu、Debian、Fedora和Red Hat。只需运行你的包管理器并安装“fslint”包就行。这个工具默认提供了一个简单的图形化界面同样也有包含各种功能的命令行版本。
不要让它让你害怕使用FSlint的图形化界面。默认情况下它会自动选中Duplicate窗格并以你的家目录作为搜索路径。
不要担心FSlint的图形化界面太复杂。默认情况下它会自动选中Duplicate窗格并以你的家目录作为搜索路径。
要安装fslint若像我这样运行的是Ubuntu这里是默认的命令
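LCTT 补充Ubuntu 下通常就是直接安装 fslint 包:

    sudo apt-get install fslint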
@ -27,7 +27,7 @@ Fedora
sudo yum install fslint
For OpenSuse
OpenSuse
[ -f /etc/mandrake-release ] && pkg=rpm
[ -f /etc/SuSE-release ] && pkg=packages
@ -51,11 +51,11 @@ For OpenSuse
![Delete Duplicate files with Fslint](http://blog.linoxide.com/wp-content/uploads/2015/01/delete-duplicates-fslint.png)
使用按钮来删除任何你要删除的文件,并且可以双击预览。
点击按钮来删除任何你要删除的文件,并且可以双击预览。
完成这一切后,我们就成功地删除你系统中的重复文件了。
**注意** 的是命令行工具默认不在环境的路径中你不能像典型的命令那样运行它。在Ubuntu中你可以在/usr/share/fslint/fslint下找到它。因此如果你要在一个单独的目录运行fslint完整扫描下面是Ubuntu中的运行命令
**注意** 命令行工具默认不在环境的路径中你不能像典型的命令那样运行它。在Ubuntu中你可以在/usr/share/fslint/fslint下找到它。因此如果你要在一个单独的目录运行fslint完整扫描下面是Ubuntu中的运行命令
cd /usr/share/fslint/fslint
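    # LCTT 补充:然后对要扫描的目录运行完整扫描(目录仅为示例)
    ./fslint /path/to/directory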
@ -84,7 +84,7 @@ via: http://linoxide.com/file-system/find-remove-duplicate-files-linux/
作者:[Arun Pyasi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -2,7 +2,7 @@
================================================================================
现在使用[树莓派摄像头模组][1]("raspi cam"),也可以像使用卡片相机那样,给拍摄的照片增加各种各样的图片特效。 raspistill命令行工具为您的树莓派提供了丰富的图片特效选项来美化处理你的图片。
你可以用[这3个命令行工具][2]来[抓取raspicam拍摄的照片或者视频][3]在这文章中将重点介绍其中的raspstill工具。raspstill工具提供了丰富的控制选项来处理图片比如说锐度sharpness、对比度contrast、亮度brightness、饱和度saturation、ISO、自动白平衡AWB、以及图片特效image effect等。
有[三个命令行工具][2]可以用于[抓取raspicam拍摄的照片或者视频][3]在这文章中将重点介绍其中的raspstill工具。raspstill工具提供了丰富的控制选项来处理图片比如说锐度sharpness、对比度contrast、亮度brightness、饱和度saturation、ISO、自动白平衡AWB、以及图片特效image effect等。
在这篇文章中将介绍如何使用raspstill工具以及raspicam摄像头模组来控制照片的曝光、AWB以及其他的图片效果。我写了一个简单的python脚本来自动拍摄照片并在这些照片上自动应用各种图片特效。raspicam的帮助文档中介绍了该摄像头模组所支持的曝光模式、AWB和图片特效。总的来说raspicam一共支持16种图片特效、12种曝光模式以及10种AWB选项。
@ -27,7 +27,6 @@ Python脚本很简单如下所示 。
time.sleep(0.25)
print "End of image capture"
The Python script operates as follows. First, create three array/list variable for the exposure, AWB and image effects. In the example, we use 2 types of exposure, 3 types of AWB, and 13 types of image effects values. Then make nested loops for applying the value of the three variables that we have. Inside the nested loop, execute the raspistill application. We specify (1) the output filename; (2) exposure value; (3) AWB value; (4) image effect value; (5) the time to take a photo, which is set to 1 second; and (6) the size of the photo, which is set to 640x480px. This Python script will create 78 different versions of a captured photo with a combination of 2 types of exposure, 3 types of AWB, and 13 types of image effects.
这个脚本完成了以下几个工作。首先脚本中定义了3个列表分别用于枚举曝光模式、AWB模式以及图片特效。在这个实例中我们将使用到2种曝光模式、3种AWB模式以及13种图片特效。脚本会遍历上述3种选项的各种组合并使用这些参数组合来运行raspistill工具。传入的参数共6个分别为1输出文件名2曝光模式3AWB模式4图片特效模式5拍照时间设为1秒6图片尺寸设为640x480。脚本会自动拍摄78张照片每张照片会应用不同的特效参数。
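LCTT 补充:脚本中每次调用的 raspistill 命令单独拿出来大致是这个样子(参数取值与输出文件名仅为示意):

    raspistill -o photo_night_auto_sketch.jpg -ex night -awb auto -ifx sketch -t 1000 -w 640 -h 480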
@ -41,7 +40,7 @@ The Python script operates as follows. First, create three array/list variable f
### 小福利 ###
除了使用raspistill命令行工具来操控raspicam摄像模组以外还有其他的方法可以用哦。[Picamera][4]是一个python库它提供了操控raspicam摄像模组的的API接口这样就可以便捷地构建更加复杂的应用程序。如果你精通python那么picamera一定是你项目的好伙伴。picamera已经被默认集成到Raspbian最新版本的的镜像中。当然如果你用的不是最新的Raspbian或者是使用其他的操作系统版本你可以通过下面的方法来进行手动安装。
除了使用raspistill命令行工具来操控raspicam摄像模组以外还有其他的方法可以用哦。[Picamera][4]是一个python库它提供了操控raspicam摄像模组的的API接口这样就可以便捷地构建更加复杂的应用程序。如果你精通python那么picamera一定是你的 hack 项目的好伙伴。picamera已经被默认集成到Raspbian最新版本的的镜像中。当然如果你用的不是最新的Raspbian或者是使用其他的操作系统版本你可以通过下面的方法来进行手动安装。
首先在你的系统上安装pip详见[指导][6]。
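LCTT 补充:然后用 pip 安装 picamera 即可(示例命令):

    $ sudo pip install picamera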
@ -57,7 +56,7 @@ via: http://xmodulo.com/apply-image-effects-pictures-raspberrypi.html
作者:[Kristophorus Hadiono][a]
译者:[coloka](https://github.com/coloka)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,10 +1,10 @@
Linux 有问必答: 如何在Ubuntu或者Debian中下载和安装ixgbe驱动
Linux 有问必答:如何在Ubuntu或者Debian中编译安装ixgbe驱动
================================================================================
> **提问** 我想为我的Intel 10G网卡下载安装最新的ixgbe。我该如何在Ubuntu或者Debian中安装ixgbe驱动
> **提问** 我想为我的Intel 10G网卡下载安装最新的ixgbe驱动。我该如何在Ubuntu或者Debian中安装ixgbe驱动
Intel的10G网卡比如82598、 82599、 x540由ixgbe驱动支持。现代的Linux发版已经将ixgbe作为一个可加载模块。然而有些情况你不想要你机器上的已经编译和安装的ixgbe驱动。比如你想要体验ixbge驱动的最新特性。同样自带内核中的ixgbe中的一个默认问题是不允许你自定义旭东内核参数。如果你想要完全自动一ixgbe驱动比如 RSS、多队列、中断阈值等等你需要手动从源码编译ixgbe驱动。
Intel的10G网卡比如82598、 82599、 x540由ixgbe驱动支持。现代的Linux发行版已经带有了ixgbe驱动通过可加载模块的方式使用。然而有些情况下你希望在你的机器上自己编译安装ixgbe驱动比如你想要体验ixgbe驱动的最新特性时。同样内核默认自带的ixgbe驱动中的一个问题是不允许你自定义驱动的参数。如果你想要一个完全定制的ixgbe驱动比如 RSS、多队列、中断阈值等等你需要手动从源码编译ixgbe驱动。
这里是如何在Ubuntu、Debian或者它们的衍生版中下载安装ixgbe驱动。
这里是如何在Ubuntu、Debian或者它们的衍生版中下载安装ixgbe驱动的教程
### 第一步: 安装前提 ###
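LCTT 补充:这一步通常是安装内核头文件和基本的编译工具,大致如下(示例,具体以你的系统为准):

    $ sudo apt-get install linux-headers-$(uname -r) gcc make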
@ -29,7 +29,7 @@ Intel的10G网卡比如82598、 82599、 x540由ixgbe驱动支持。现
编译之后你会看到在ixgbe-3.23.2/src目录下创建了**ixgbe.ko**。这就是会加载到内核之中的ixgbe驱动。
用modinfo命令检查内核模块的信息。注意你需要指定模块的绝对路径比如 ./ixgbe.ko 或者 /home/xmodulo/ixgbe/ixgbe-3.23.2/src/ixgbe.ko。输出中会显示ixgbe内核的版本。
用modinfo命令检查内核模块的信息。注意你需要指定模块文件的绝对路径(比如 ./ixgbe.ko 或者 /home/xmodulo/ixgbe/ixgbe-3.23.2/src/ixgbe.ko。输出中会显示ixgbe内核的版本。
$ modinfo ./ixgbe.ko
@ -120,24 +120,24 @@ Intel的10G网卡比如82598、 82599、 x540由ixgbe驱动支持。现
### 第五步: 安装Ixgbe驱动 ###
一旦你验证新的ixgbe驱动已经成功家在,最后一步是在你的系统中安装驱动。
一旦你验证新的ixgbe驱动可以成功加载,最后一步是在你的系统中安装驱动。
$ sudo make install
**ixgbe.ko** 接着会安装在/lib/modules/<kernel-version>/kernel/drivers/net/ethernet/intel/ixgbe 下。
**ixgbe.ko** 会安装在/lib/modules/<kernel-version>/kernel/drivers/net/ethernet/intel/ixgbe 下。
这一步起你可以用下面的modprobe命令加载ixgbe驱动了。注意你不必再指定绝对路径。
这一步起你可以用下面的modprobe命令加载ixgbe驱动了。注意你不必再指定绝对路径。
$ sudo modprobe ixgbe
如果你希望在启动时家在ixgbe驱动你可以在/etc/modules的最后加入“ixgbe”。
如果你希望在启动时加载ixgbe驱动你可以在/etc/modules的最后加入“ixgbe”。
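LCTT 补充:对应的命令大致是(示例):

    $ echo "ixgbe" | sudo tee -a /etc/modules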
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/download-install-ixgbe-driver-ubuntu-debian.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,12 +1,12 @@
使用最近通知工具保持通知历史
使用最近通知工具保持桌面通知历史
================================================================================
![How to see recent notifications in Ubuntu 14.04](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/recent_notifications_Ubuntu_14.jpeg)
大多数桌面环境像Unity和Gnome都有通知特性。我很喜欢其中一些。它尤其当我[在Ubuntu上收听流媒体][1]时帮到我。默认上通知会在桌面的顶部显示几秒接着就会小时。如果你听见了通知的声音但是没有看到内容怎么办?你该如何知道通知的内容?
大多数桌面环境像Unity和Gnome都有通知特性。我很喜欢其中一些。它尤其当我[在Ubuntu上收听流媒体][1]时帮到我。默认上通知会在桌面的顶部显示几秒接着就会消失。如果你听见了通知的声音但是没有看到内容怎么办?你该如何知道通知的内容?
如果你可以看到最近所有通知的历史会很棒吧是的我知道这很棒。你可以在Ubuntu Unity或者Gnome中使用最近**通知小工具**来追踪所有的最近通知。
最近通知位于顶部面板,并且最近所有通知的历史。当它捕获到新的通知后,它就会变绿来表明你有未读的通知。
最近通知位于顶部面板,并且记录了最近所有通知的历史。当它捕获到新的通知后,它就会变绿来表明你有未读的通知。
![Recent notifications in Ubuntu 14.04](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/recent_notifications_Ubuntu.jpeg)
@ -24,7 +24,7 @@
sudo apt-get update
sudo apt-get install indicator-notifications
安装完成后,重新登录后你就可以用了。现在它是没有通知的状态。很方便的小工具,不是么?
安装完成后,重新登录后你就可以用了。现在妈妈再也不用担心我的通知没看到了。很方便的小工具,不是么?
--------------------------------------------------------------------------------
@ -32,7 +32,7 @@ via: http://itsfoss.com/notifications-appindicator/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,6 +1,6 @@
Shell入门掌握LinuxOS XUnix的Shell环境
================================================================================
在Linux或类Unix系统中每个用户和进程都运行在一个特定环境中。这个环境包含了变量,设置,别名,函数以及更多的。下面是对Shell环境下一些常用命令的简单介绍包括每个命令如何使用的例子以及在命令行下设定你自己的环境来提高效率。
在Linux或类Unix系统中每个用户和进程都运行在一个特定环境中。这个环境包含了变量、设置、别名、函数以及更多的东西。下面是对Shell环境下一些常用命令的简单介绍包括每个命令如何使用的例子以及在命令行下设定你自己的环境来提高效率。
![](http://s0.cyberciti.org/uploads/cms/2015/01/bash-shell-welcome-image.jpg)
@ -18,7 +18,8 @@ Shell入门掌握LinuxOS XUnix的Shell环境
输出范例:
[![图1: Finding out your shell name](http://s0.cyberciti.org/uploads/cms/2015/01/finding-your-shell-like-a-pro.jpg)][1]
图1找出当前的shell
*图1找出当前的shell*
### 找出所有已安装的shell ###
@ -32,9 +33,10 @@ Shell入门掌握LinuxOS XUnix的Shell环境
输出范例:
[![Fig.02: Finding out your shell path](http://s0.cyberciti.org/uploads/cms/2015/01/finding-and-verifying-shell-path.jpg)][2]
图2找出shell的路径
文件/etc/shells里包含了系统支持的shell列表。每一行代表一个shell是相对根目录的完整路径。用这个[cat命令][3]来查看这些数据:
*图2找出shell的路径*
文件/etc/shells里包含了系统所支持的shell列表。每一行代表一个shell是相对根目录的完整路径。用这个[cat命令][3]来查看这些数据:
cat /etc/shells
@ -71,7 +73,8 @@ Shell入门掌握LinuxOS XUnix的Shell环境
示例输出:
[![Fig. 03: Bash shell nesting level (subshell numbers)](http://s0.cyberciti.org/uploads/cms/2015/01/a-nested-shell-level-command.jpg)][4]
图3Bash shell嵌套层级子shell数目
*图3Bash shell嵌套层级子shell数目*
### 通过chsh命令永久变更系统shell ###
@ -83,9 +86,9 @@ Shell入门掌握LinuxOS XUnix的Shell环境
sudo chsh -s /bin/ksh userNameHere
### 查看当前的环境 ###
### 查看当前的环境变量 ###
你需要用到
你需要用到
env
env | more
@ -118,7 +121,8 @@ Shell入门掌握LinuxOS XUnix的Shell环境
下面是bash shell里一些常见变量的列表
![Fig.04: Common bash environment variables](http://s0.cyberciti.org/uploads/cms/2015/01/common-shell-vars.jpg)
图4常见bash环境变量
*图4常见bash环境变量*
> **注意**下面这些环境变量没事不要乱改。很可能会造成不稳定的shell会话
>
@ -157,7 +161,7 @@ Shell入门掌握LinuxOS XUnix的Shell环境
/home/vivek
### 增加或设定一个新变量 ###
### 增加或设定一个新环境变量 ###
下面是bashzshsh和ksh的语法
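LCTT 补充:大致的语法如下(变量名与取值仅为示意):

    # bash、zsh、ksh 下:
    export EDITOR=vim
    # 传统 Bourne sh 下:
    EDITOR=vim; export EDITOR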
@ -225,7 +229,8 @@ Shell入门掌握LinuxOS XUnix的Shell环境
示例输出:
[![Fig.05: List all bash environment configuration files](http://s0.cyberciti.org/uploads/cms/2015/01/list-bash-enviroment-variables.jpg)][5]
图5列出bash的所有配置文件
*图5列出bash的所有配置文件*
要查看所有的bash配置文件输入
@ -241,7 +246,7 @@ Shell入门掌握LinuxOS XUnix的Shell环境
sudo cp -v /etc/bashrc /etc/bashrc.bak.22_jan_15
########################################################################
## 然后随心所欲随便改吧好好玩玩shell环境或者提高一下效率:) ##
## 然后随心所欲随便改吧好好玩玩shell环境或者提高一下效率:) ##
########################################################################
sudo vim /etc/bashrc
@ -326,14 +331,15 @@ zsh的[wiki][6]中建议用下面的命令:
示例输出:
[![Fig.06: View session history in the bash shell using history command](http://s0.cyberciti.org/uploads/cms/2015/01/history-outputs.jpg)][7]
图6在bash shell中使用history命令查看会话历史
你可以重复使用命令。简单地按下[上]或[下]方向键就可以查看之前的命令。在shell提示符下按下[CTRL-R]可以向后搜索历史缓存或文件来查找命令。重复最后一次命令只需要在shell提示符下输入!!就好了:
*图6在bash shell中使用history命令查看会话历史*
你可以重复使用之前的命令。简单地按下[上]或[下]方向键就可以查看之前的命令。在shell提示符下按下[CTRL-R]可以向后搜索历史缓存或文件来查找命令。重复最后一次命令只需要在shell提示符下输入!!就好了:
ls -l /foo/bar
!!
在以上的历史记录中查看命令#93 (hddtemp /dev/sda),输入:
在以上的历史记录中找到命令#93 (hddtemp /dev/sda),输入:
!93
@ -483,7 +489,7 @@ Bash/ksh/zsh函数允许你更进一步地配置shell环境。在这个例子中
最后,[打开bash命令补齐][12]
source /etc/bash_completio
source /etc/bash_completion
#### #2: 设定bash命令提示符 ####
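LCTT 补充:一个常见的提示符设定大致像这样(仅为示意):

    export PS1='\u@\h:\w\$ '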
@ -511,7 +517,7 @@ Bash/ksh/zsh函数允许你更进一步地配置shell环境。在这个例子中
# 为命令历史文件增加时间戳
export HISTTIMEFORMAT="%F %T "
# 附加到命令历史文件,不是覆盖
# 附加到命令历史文件,不是覆盖
shopt -s histappend
#### #5: 设定shell会话的时区 ####
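LCTT 补充:示例(时区值请按需替换):

    export TZ=Asia/Shanghai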
@ -561,7 +567,7 @@ Bash/ksh/zsh函数允许你更进一步地配置shell环境。在这个例子中
# 清理那些.DS_Store文件
alias dsclean='find . -type f -name .DS_Store -delete'
#### #8: 让世界充满色彩 ####
#### #8: 寡人好色 ####
# 彩色的grep输出
alias grep='grep --color=auto'
@ -669,7 +675,7 @@ via: http://www.cyberciti.biz/howto/shell-primer-configuring-your-linux-unix-osx
作者:[nixCraft][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,17 +1,18 @@
如何不用重启在CentOS 7/ RHEL 7中添加一块新硬盘
如何不用重启在CentOS 7/ RHEL 7虚拟机中添加一块新硬盘
================================================================================
通常在虚拟机中添加一块新硬盘时你可能会看到新硬盘没有自动加载。这是因为连接硬盘的SCSI总线需要重新扫描来使得新硬盘可见。这里有一个简单的命令来重新扫描SCSI总线和SCSI设备。下面这几步在CentOS 7 和RHEL 7 中测试过。
1. 在ESXi或者vCenter中添加一块新的20G硬盘
![](http://www.ehowstuff.com/wp-content/uploads/2015/01/Create-new-LVM-CentOS7-1.png)
![](http://www.ehowstuff.com/wp-content/uploads/2015/01/Create-new-LVM-CentOS7-1.png)
2. 显示当前磁盘分区:
[root@centos7 ~]# fdisk -l
[root@centos7 ~]# fdisk -l
----------
----------
```
Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
@ -33,20 +34,22 @@
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
```
3. 确定主机总线号
[root@centos7 ~]# ls /sys/class/scsi_host/
host0 host1 host2
[root@centos7 ~]# ls /sys/class/scsi_host/
host0 host1 host2
4. 重新扫描SCSI总线来添加设备
[root@centos7 ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@centos7 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@centos7 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@centos7 ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@centos7 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@centos7 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
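    # LCTT 补充:如果主机总线较多,也可以用下面的循环一次性重扫(补充示例)
    [root@centos7 ~]# for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done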
5. 验证磁盘和分区并确保20GB硬盘已经添加了。在本例中出现了下面这行 “Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors” 并且确认新盘添加后没有重启服务器:
5. 验证磁盘和分区并确保20GB硬盘已经添加了。在本例中出现了下面这行 “`Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors`” 并且可以确认没有重启服务器就添加了新盘
```
[root@centos7 ~]# fdisk -l
Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
@ -76,14 +79,14 @@
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
```
--------------------------------------------------------------------------------
via: http://www.ehowstuff.com/how-to-add-a-new-hard-disk-without-rebooting-on-centos-7-rhel-7/
作者:[skytech][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,6 +1,6 @@
Linux 基础如何在Ubuntu上检查是否已经安装了一个包
Linux 基础如何在Ubuntu上检查一个软件包是否安装
================================================================================
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/04/ubuntu-790x558.png)
![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2014/04/ubuntu-790x558.png)
如果你正在管理Debian或者Ubuntu服务器你也许会经常使用**dpkg** 或者 **apt-get**命令。这两个命令用来安装、卸载和更新包。
@ -51,7 +51,7 @@ Linux 基础如何在Ubuntu上检查是否已经安装了一个包
+++-====================================-=======================-=======================-=============================================================================
ii firefox 35.0+build3-0ubuntu0.14 amd64 Safe and easy web browser from Mozilla
要列出你系统中安装的包,输入下面的命令:
要列出你系统中安装的所有包,输入下面的命令:
dpkg --get-selections
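    # LCTT 补充:可以结合 grep 过滤特定的包(包名仅为示意)
    dpkg --get-selections | grep -i firefox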
@ -97,7 +97,7 @@ Linux 基础如何在Ubuntu上检查是否已经安装了一个包
libgcc1:amd64 install
libgcc1:i386 install
额外的,你可以使用“**-L**”参数来找出包中文件的位置。
此外,你可以使用“**-L**”参数来找出包中文件的位置。
dpkg -L gcc-4.8
@ -130,7 +130,7 @@ via: http://www.unixmen.com/linux-basics-check-package-installed-not-ubuntu/
作者:[SK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,9 +1,8 @@
如何在CentOS 7.0上为Subverison安装Websvn
如何在CentOS 7.0 安装 Websvn
================================================================================
大家好今天我们会在CentOS 7.0 上为subversion安装WebSVN。
WebSVN提供了Svbverion中的各种方法来查看你的仓库。我们可以看到任何给定版本的任何文件或者目录的日志并且看到所有文件改动、添加、删除的列表。我们同样可以看到两个版本间的不同来知道特定版本改动了什么。
大家好今天我们会在CentOS 7.0 上为 SubversionSVN安装 Web 界面 WebSVN。Subversion 是 Apache 的顶级项目,也称为 Apache SVN 或 SVN。
WebSVN 将 Subversion 仓库的各种操作功能通过 Web 界面提供出来。通过它,我们可以看到任何给定版本的任何文件或者目录的日志,并且可以看到所有文件改动、添加、删除的列表。我们同样可以查看两个版本间的差异来知道特定版本改动了什么。
### 特性 ###
@ -12,20 +11,20 @@ WebSVN提供了下面这些特性:
- 易于使用的用户界面
- 可定制的模板系统
- 色彩化的文件列表
- blame 视图
- 追溯视图
- 日志信息查询
- RSS支持
- [更多][1]
由于使用PHP写成WebSVN同样易于移植和安装。
由于使用PHP写成WebSVN同样易于移植和安装。
现在我们将为Subverison(Apache SVN)安装WebSVN。请确保你的服务器上已经安装了Apache SVN。如果你还没有安装你可以在本教程中安装。
现在我们将为Subverison安装WebSVN。请确保你的服务器上已经安装了 SVN。如果你还没有安装你可以按[本教程][2]安装。
After you installed Apache SVN(Subversion), you'll need to follow the easy steps below.安装完Apache SVNSubversion后,你需要以下几步。
安装完SVN后你需要以下几步。
### 1. 下载 WebSVN ###
你可以从官方网站http://www.websvn.info/download/中下载WebSVN。我们首先进入/var/www/html/并在这里下载安装包。
你可以从官方网站 http://www.websvn.info/download/ 中下载 WebSVN。我们首先进入 /var/www/html/ 并在这里下载安装包。
$ sudo -s
@ -36,7 +35,7 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
![downloading websvn package](http://blog.linoxide.com/wp-content/uploads/2015/01/downloading-websvn.png)
这里我下载的是最新的2.3.3版本的websvn。你可以从这个网站得到链接。你可以用你要安装的包的链接来替换上面的链接。
这里我下载的是最新的2.3.3版本的 websvn。你可以从上面这个网站找到下载链接用适合你的包的链接来替换上面的链接。
### 2. 解压下载的zip ###
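LCTT 补充:解压命令大致如下(文件名请按你实际下载的版本调整,这里沿用上文的 2.3.3 作为示例):

    # unzip websvn-2.3.3.zip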
@ -54,7 +53,7 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
### 4. 编辑WebSVN配置 ###
现在,我们需要拷贝位于/var/www/html/websvn/include的distconfig.php为config.php,并且接着编辑配置文件。
现在,我们需要拷贝位于 /var/www/html/websvn/include 的 distconfig.php 为 config.php并且接着编辑该配置文件。
# cd /var/www/html/websvn/include
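LCTT 补充:复制配置文件的命令即(示例):

    # cp distconfig.php config.php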
@ -62,7 +61,7 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
# nano config.php
现在我们需要按如下改变文件。这完成之后,请保存病退出。
现在我们需要按如下改变文件。完成之后,请保存并退出。
// Configure these lines if your commands aren't on your path.
//
@ -100,7 +99,7 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
# systemctl restart httpd.service
接着我们在浏览器中打开WebSVN输入http://Ip-address/websvn或者你在本地的话你可以输入http://localhost/websvn
接着我们在浏览器中打开WebSVN输入 http://IP地址/websvn或者你在本地的话你可以输入 http://localhost/websvn
![websvn successfully installed](http://blog.linoxide.com/wp-content/uploads/2015/01/websvn-success.png)
@ -108,7 +107,9 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
### 总结 ###
好了我们已经在CentOS 7上哇安城WebSVN的安装了。这个教程同样适用于RHEL 7。WebSVN提供了Svbverion中的各种方法来查看你的仓库。你可以看到任何给定版本的任何文件或者目录的日志并且看到所有文件改动、添加、删除的列表。如果你有任何问题、评论、反馈请在下面的评论栏中留下来让我们知道该添加什么和改进。谢谢享受WebSVN吧。:-)
好了我们已经在CentOS 7上完成WebSVN的安装了。这个教程同样适用于RHEL 7。WebSVN 提供了 Subverion 中的各种功能来查看你的仓库。你可以看到任何给定版本的任何文件或者目录的日志,并且看到所有文件改动、添加、删除的列表。
如果你有任何问题、评论、反馈请在下面的评论栏中留下,来让我们知道该添加什么和改进。谢谢! 用用看吧。:-)
--------------------------------------------------------------------------------
@ -116,9 +117,10 @@ via: http://linoxide.com/linux-how-to/install-websvn-subversion-centos-7/
作者:[Arun Pyasi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://www.websvn.info/features/
[2]:http://linoxide.com/linux-how-to/install-apache-svn-subversion-centos-7/

View File

@ -1,41 +1,40 @@
linux中创建和解压文档的10个快速tar命令样例
在linux中创建和解压文档的11个 tar 命令例子
================================================================================
### linux中的tar命令###
tar磁带归档命令是linux系统中被经常用来将文件存入到一个归档文件中的命令。
常见的文件扩展包括:.tar.gz 和 .tar.bz2, 分别表示通过gzip或bzip算法进一步压缩的磁带归档文件扩展
常见的文件扩展包括:.tar.gz 和 .tar.bz2, 分别表示通过gzip或bzip算法进一步进行了压缩。
在本教程中我们会管中窥豹一下在linux桌面或服务器版本中使用tar命令来处理一些创建和解压归档文件的日常工作的例子。
在该教程中我们会窥探一下在linux桌面或服务器版本中使用tar命令来处理一些日常创建和解压归档文件的工作样例。
### 使用tar命令###
tar命令在大部分linux系统默认情况下都是可用的所以你不用单独安装该软件。
> tar命令具有两个压缩格式gzip和bzip该命令的“z”选项用来指定gzip“j”选项用来指定bzip。同时也可哟用来创建非压缩归档文件。
> tar命令具有两个压缩格式gzip和bzip该命令的“z”选项用来指定gzip“j”选项用来指定bzip。同时也可创建非压缩归档文件。
#### 1.解压一个tar.gz归档 ####
#### 1.解压一个tar.gz归档 ####
一般常见的用法是用来解压归档文件下面的命令将会把文件从一个tar.gz归档文件中解压出来。
$ tar -xvzf tarfile.tar.gz
这里对这些参数做一个简单解释-
> x - 解压文件
> v - 繁琐,在解压每个文件时打印出文件的名称。
> v - 冗长模式,在解压每个文件时打印出文件的名称。
> z - 该文件是一个使用 gzip压缩的文件。
> z - 该文件是一个使用 gzip 压缩的文件。
> f - 使用接下来的tar归档来进行操作。
这些就是一些需要记住的重要选项。
**解压 tar.bz2/bzip 归档文件 **
**解压 tar.bz2/bzip 归档文件**
具有bz2扩展名的文件是使用bzip算法进行压缩的但是tar命令也可以对其进行处理但是通过使用“j”选项来替换“z”选项。
具有bz2扩展名的文件是使用bzip算法进行压缩的tar命令也可以对其进行处理但需要用“j”选项来替换“z”选项。
$ tar -xvjf archivefile.tar.bz2
@ -47,25 +46,25 @@ tar命令在大部分linux系统默认情况下都是可用的所以你不用
不过首先需要确认目标目录是否存在毕竟tar命令并不会为你创建目录如果目标目录不存在该命令就会失败。
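LCTT 补充:解压到指定目录一般用 -C 选项,形式如下(目录仅为示例):

    $ tar -xvzf abc.tar.gz -C /opt/folder/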
####3. 解压出单个文件 ####
####3. 提取出单个文件 ####
为了从一个归档文件中解压出单个文件,只需要将文件名按照以下方式将其放置在命令后面。
为了从一个归档文件中提取出单个文件,只需要将文件名按照以下方式将其放置在命令后面。
$ tar -xz -f abc.tar.gz "./new/abc.txt"
在上述命令中,可以按照以下方式来指定多个文件。
$ tar -xv -f abc.tar.gz "./new/cde.txt" "./new/abc.txt"
$ tar -xz -f abc.tar.gz "./new/cde.txt" "./new/abc.txt"
#### 4.使用通配符来解压多个文件 ####
通配符可以用来解压于给定通配符匹配的一批文件,例如所有以".txt"作为扩展名的文件。
$ tar -xv -f abc.tar.gz --wildcards "*.txt"
$ tar -xz -f abc.tar.gz --wildcards "*.txt"
#### 5. 列出并检索tar归档文件中的内容 ####
#### 5. 列出并检索tar归档文件中的内容 ####
如果你仅仅想要列出而不是解压tar归档文件的中的内容使用“-t”选项 下面的命令用来打印一个使用gzip压缩过的tar归档文件中的内容。
如果你仅仅想要列出而不是解压tar归档文件中的内容使用“-t”list选项下面的命令用来打印一个使用gzip压缩过的tar归档文件中的内容。
$ tar -tz -f abc.tar.gz
./new/
@ -75,7 +74,7 @@ tar命令在大部分linux系统默认情况下都是可用的所以你不用
./new/abc.txt
...
将输出通过管道定向到grep来搜索一个文件或者定向到less命令来浏览内容列表。 使用"v"繁琐选项将会打印出每个文件的额外详细信息。
可以将输出通过管道定向到grep来搜索一个文件或者定向到less命令来浏览内容列表。 使用"v"冗长选项将会打印出每个文件的额外详细信息。
对于 tar.bz2/bzip文件需要使用"j"选项。
@ -84,11 +83,10 @@ tar命令在大部分linux系统默认情况下都是可用的所以你不用
$ tar -tvz -f abc.tar.gz | grep abc.txt
-rw-rw-r-- enlightened/enlightened 0 2015-01-13 11:40 ./new/abc.txt
#### 6.创建一个tar/tar.gz归档文件 ####
#### 6.创建一个tar/tar.gz归档文件 ####
现在我们已经学过了如何解压一个tar归档文件是时候开始创建一个新的tar归档文件了。tar命令可以用来将所选的文件或整个目录放入到一个归档文件中以下是相应的样例。
下面的命令使用一个目录来创建一个tar归档文件它会将该目录中所有的文件和子目录都加入到归档文件中。
$ tar -cvf abc.tar ./new/
@ -102,14 +100,13 @@ tar命令在大部分linux系统默认情况下都是可用的所以你不用
$ tar -cvzf abc.tar.gz ./new/
> 文件的扩展名其实并不真正有什么影响。“tar.gz” 和tgz是gzip压缩算法压缩文件的常见扩展名。 “tar.bz2”和“tbz”是bzip压缩算法压缩文件的常见扩展名。
> 文件的扩展名其实并不真正有什么影响。“tar.gz” 和“tgz”是gzip压缩算法压缩文件的常见扩展名。 “tar.bz2”和“tbz”是bzip压缩算法压缩文件的常见扩展名LCTT 译注:归档是否是压缩的和采用哪种压缩方式并不取决于其扩展名,扩展名只是为了便于辨识。)。
#### 7. 在添加文件之前进行确认 ####
一个有用的选项是“w”该选项使得tar命令在添加每个文件到归档文件之前来让用户进行确认有时候这会很有用。
使用该选项时,只有用户输入yes时的文件才会被加入到归档文件中如果你输入任何东西默认的回答是一个“No”。
使用该选项时,只有用户输入“y”时,对应的文件才会被加入到归档文件中;如果你不输入任何东西,默认等同于回答“n”。
# 添加指定文件
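    # LCTT 补充示例:-w 会在添加每个文件前询问确认(文件名沿用上文,仅为示意)
    $ tar -rvw -f abc.tar ./new/abc.txt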
@ -137,7 +134,7 @@ tar命令在大部分linux系统默认情况下都是可用的所以你不用
#### 9. 将文件加入到压缩的归档文件中tar.gz/tar.bz2) ####
之前已经提到了不可能将文件加入到已压缩的归档文件中,然依然可以通过简单的一些把戏来完成。使用gunzip命令来解压缩归档文件然后将文件加入到归档文件中后重新进行压缩。
之前已经提到过,不可能直接将文件加入到已压缩的归档文件中,不过依然可以通过一些简单的技巧来完成使用gunzip命令解压缩归档文件把文件加入到归档文件中之后再重新进行压缩。
$ gunzip archive.tar.gz
$ tar -rf archive.tar ./path/to/file
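    # LCTT 补充:最后重新压缩,即上文所说的第三步
    $ gzip archive.tar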
@ -147,16 +144,15 @@ tar命令在大部分linux系统默认情况下都是可用的所以你不用
#### 10.通过tar来进行备份 ####
一个真实的场景是在规则的间隔内来备份目录tar命令可以通过cron调度来实现这样的一个备份以下是一个样例 -
一个真实的场景是在固定的时间间隔内来备份目录tar命令可以通过cron调度来实现这样的一个备份以下是一个样例
$ tar -cvz -f archive-$(date +%Y%m%d).tar.gz ./new/
使用cron来运行上述的命令会保持创建类似以下名称的备份文件 -
'archive-20150218.tar.gz'.
使用cron来运行上述的命令会持续创建类似以下名称的备份文件'archive-20150218.tar.gz'。
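LCTT 补充:对应的 crontab 条目大致可以这样写(注意 crontab 中的 % 需要转义;时间与路径仅为示例):

    0 2 * * * tar -cvz -f /backup/archive-$(date +\%Y\%m\%d).tar.gz /path/to/new/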
当然,需要确保日益增长的归档文件不会导致磁盘空间的溢出。
#### 11. 在创建归档文件进行验证 ####
#### 11. 在创建归档文件后进行验证 ####
"W"选项可以用来在创建归档文件之后进行验证,以下是一个简单例子。
@ -174,9 +170,9 @@ tar命令在大部分linux系统默认情况下都是可用的所以你不用
Verify ./new/newfile.txt
Verify ./new/abc.txt
需要注意的是验证动作不能呢该在压缩过的归档文件上进行只能在非压缩的tar归档文件上执行。
需要注意的是验证动作不能在压缩过的归档文件上进行只能在非压缩的tar归档文件上执行。
现在就先到此为止可以通过“man tar”命令来查看tar命令的的手册。
这次就先到此为止可以通过“man tar”命令来查看tar命令的手册。
--------------------------------------------------------------------------------
@ -184,7 +180,7 @@ via: http://www.binarytides.com/linux-tar-command/
作者:[Silver Moon][a]
译者:[theo-l](https://github.com/theo-l)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,8 +1,8 @@
如何在Linux中隐藏PHP版本
如何在Linux服务器中隐藏PHP版本
================================================================================
通常大多数默认设置安装的web服务器存在信息泄露。这其中之一是PHP。PHP超文本预处理器是如今流行的服务端html嵌入式语言。在如今这个充满挑战的时代有许多攻击者会尝试发现你服务端的漏洞。因此我会简单描述如何在Linux服务器中隐藏PHP信息。
通常大多数默认设置安装的web服务器存在信息泄露这其中之一就是PHP。PHP 是如今流行的服务端html嵌入式语言之一。在如今这个充满挑战的时代有许多攻击者会尝试发现你服务端的漏洞。因此我会简单描述如何在Linux服务器中隐藏PHP信息。
默认上**exposr_php**默认是开的。关闭“expose_php”参数可以使php隐藏它的版本信息。
默认情况下,**expose_php**是开启的。关闭“expose_php”参数可以让php隐藏它的版本信息。
[root@centos66 ~]# vi /etc/php.ini
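LCTT 补充:在打开的 php.ini 中找到下面这个参数并把它设置为 Off 即可(示例):

    expose_php = Off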
@ -26,9 +26,9 @@
X-Page-Speed: 1.9.32.2-4321
Cache-Control: max-age=0, no-cache
更改php就不会在web服务头中显示版本了
更改并重启 Web 服务php就不会在web服务头中显示版本了
[root@centos66 ~]# curl -I http://www.ehowstuff.com/
```
[root@centos66 ~]# curl -I http://www.ehowstuff.com/
HTTP/1.1 200 OK
Server: nginx
@ -39,8 +39,9 @@ X-Pingback: http://www.ehowstuff.com/xmlrpc.php
Date: Wed, 11 Feb 2015 14:10:43 GMT
X-Page-Speed: 1.9.32.2-4321
Cache-Control: max-age=0, no-cache
```
有任何需要帮助的请到twiiter @ehowstuff,或在下面留下你的评论。[点此获取更多历史文章][1]
LCTT译注除了 PHP 的版本之外Web 服务器也会默认泄露版本号。如果使用 Apache 服务器,请[参照此文章关闭Apache 版本显示][2];如果使用 Nginx 服务器,请在 http 段内加入`server_tokens off;` 配置。以上修改请记得重启相关服务。
--------------------------------------------------------------------------------
@ -48,9 +49,10 @@ via: http://www.ehowstuff.com/how-to-hide-php-version-in-linux/
作者:[skytech][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.ehowstuff.com/author/mhstar/
[1]:http://www.ehowstuff.com/archives/
[2]:http://linux.cn/article-3642-1.html

View File

@ -681,15 +681,15 @@ Linux基础如何找出你的系统所支持的最大内存
Handle 0x0031, DMI type 127, 4 bytes
End Of Table
好了,就是这样。周末愉快!
好了,就是这样。
--------------------------------------------------------------------------------
via: https://www.unixmen.com/linux-basics-how-to-find-maximum-supported-ram-by-your-system/
via: http://www.unixmen.com/linux-basics-how-to-find-maximum-supported-ram-by-your-system/
作者:[SK][0]
译者:[mr-ping](https://github.com/mr-ping)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,24 +1,24 @@
一款在Gnome桌面中显示Android通知的程序
================================================================================
**你很快就可以在GNOME桌面中看到Andorid通知了这都要归功于一个正在开发中的新程序。**
![Fancy seeing your Android alerts here? You can.](http://www.omgubuntu.co.uk/wp-content/uploads/2015/02/Screen-Shot-2015-02-24-at-17.47.48.png)
在这里看到Android通知很棒么你可以
在这里看到Android通知是不是很棒你可以做到的~
**你很快就可以在GNOME桌面中看到Android通知了这都要归功于一个在开发中的新程序。**
这个新的项目叫“Numtius”这可以让在Andorid手机上收到的通知显示在GNOME桌面上。它会集成在GNOME 3.16中,并且它[重新设计了通知系统][1]这个app和特性会用在其他更多的地方。
这个新的项目叫“Nuntius”它可以让Android手机上收到的通知显示在GNOME桌面上。它会集成在GNOME 3.16及其[重新设计的通知系统][1]中这个app和该特性也会用在其他更多的地方。
这个app的开发者希望在这个月GNOME 3.16发布之前可以完成它。它通过蓝牙工作,因此通知不会传给外部的系统或者存放到在线存储上。这意味着你的手机必须离GNOME桌面足够近这个功能才可用。
它现在还不能回复短消息或者对提醒采取操作。
开发团队警告说**这是一个早期发布版本**,那些打算重度使用的人们现在应该做好最少功能的准备。
开发团队警告说**这是一个早期发布版本**,期望很高的人要有它暂时只能提供部分功能的心理准备。
在GNOME桌面上看Android通知的移动端app现在已经在[Google Play商店][2]了GNOME程序已经在Fedora的仓库中了。
这个用来配合在GNOME桌面上看Android通知的移动端app现在已经在[Google Play商店][2]找到了,GNOME程序已经在Fedora的仓库中了。
开发者已经在Gituhb上开源了Android和GNOME接收端的程序
开发者已经在GitHub上开源了Android和GNOME接收端的程序。
一个相似的工具[已经在KDE桌面上有了][3] - KDE Connect - 已经有一两年了通过Pushbullet来为使用Chrome的iOS和Android平台在Windows、MAC和Linux桌面上提供相似的功能
在一两年前,[KDE桌面上已经有了][3]一个相似的工具 - KDE Connect它通过Pushbullet来为使用Chrome的iOS和Android提供相似的功能支持Windows、MAC和Linux桌面
- [Nuntius for Android & GNOME on GitHub][4]
@ -28,7 +28,7 @@ via: http://www.omgubuntu.co.uk/2015/03/new-app-brings-android-notifications-to-
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,41 @@
Nmap--不是只能用于做坏事!
================================================================================
如果SSH是系统管理员世界的"瑞士军刀"的话那么Nmap就是一盒炸药。炸药很容易被误用然后将你的双脚崩掉但是也是一个很有威力的工具能够胜任一些看似无法完成的任务。
大多数人想到Nmap时他们想到的是扫描服务器查找开放端口来实施攻击。然而在过去的这些年中这样的超能力当你管理服务器或计算机遇到问题时也是非常有用的。无论是你试图找出在你的网络上有哪些类型的服务器使用了指定的IP地址或者尝试锁定一个新的NAS设备以及扫描网络等都会非常有用。
下图显示了我的QNAP NAS的网络扫描结果。我使用该设备的唯一目的是为了NFS和SMB文件共享但是你可以看到它包含了一大堆大开大敞的端口。如果没有Nmap很难发现机器到底在运行着什么玩意儿。
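LCTT 补充:对单台设备做这种扫描的命令大致如下IP 仅为示意,请换成你自己设备的地址):

    nmap 192.168.1.50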
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11825nmapf1.jpg)
*网络扫描*
另外一个可能你没想到的用途是用它来扫描一个网络。你甚至根本不需要root的访问权限而且你也可以非常容易地来指定你想要扫描的网络地址块例如输入
nmap 192.168.1.0/24
上述命令会扫描我的局域网中全部的254个可用的IP地址让我可以知道哪些主机是可以Ping通的以及哪些端口是开放的。如果你刚刚在网络上添加一个新的硬件但是不知道它通过DHCP获取的IP地址是什么那么此时Nmap就是无价之宝。例如上述命令在我的网络中就发现了下面这台设备
Nmap scan report for TIVO-8480001903CCDDB.brainofshawn.com (192.168.1.220)
Host is up (0.0083s latency).
Not shown: 995 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
2190/tcp open tivoconnect
2191/tcp open tvbus
9080/tcp closed glrpc
它不仅显示了新的Tivo设备而且还告诉我哪些端口是开放的。由于它的可靠性、可用性以及近乎“黑帽子”的能力Nmap获得了本月的《编辑推荐》奖。这不是一个新的程序但是如果你是一个Linux用户的话你应该玩玩它。
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/nmap%E2%80%94not-just-evil
作者:[Shawn Powers][a]
译者:[theo-l](https://github.com/theo-l)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/users/shawn-powers

View File

@ -1,14 +1,15 @@
Fedora GNOME快捷键
Fedora GNOME 的常用快捷键
================================================================================
在Fedora,为了获得最好的[GNOME桌面] [1]体验,你需要了解并掌握一些驾驭系统的快捷键。
在Fedora中,为了获得最好的[GNOME桌面][1]体验,你需要了解并掌握一些驾驭系统的快捷键。
这篇文章将列举我们日常使用中使用频率最高的快捷键。
![GNOME Keyboard Shortcuts - The Super Key. ](http://f.tqn.com/y/linux/1/L/o/K/1/gnomekeyboardshortcut1.png)
GNOME 快捷键 - super键.
#### 1. Super键 ####
![GNOME Keyboard Shortcuts - The Super Key. ](http://f.tqn.com/y/linux/1/L/o/K/1/gnomekeyboardshortcut1.png)
*GNOME 快捷键 - super键*
[“super”键][2]是如今驾驭操作系统的好朋友。
在传统的笔记本电脑中“super”键坐落于最后一列就在“alt”键的旁边就是徽标键
@ -17,116 +18,122 @@ GNOME 快捷键 - super键.
同时按下 "ALT" 和"F1"一样可以达到这样的效果。
![GNOME Run Command.](http://f.tqn.com/y/linux/1/L/p/K/1/runcommand.png)
GNOME 指令运行.
### 2. 如何快速执行一条命令 ###
### 2. 如何快速执行一条指令 ###
![GNOME Run Command.](http://f.tqn.com/y/linux/1/L/p/K/1/runcommand.png)
*GNOME 运行某命令*
如果你需要快速的执行一条指令,你可以按下"ALT"+"F2",这样就会出现指令运行对话框了。
你就可以在窗口中输入你想要执行的指令了,回车执行。
![TAB Through Applications.](http://f.tqn.com/y/linux/1/L/q/K/1/tabthroughwindows.png)
使用TAB在应用中切换。
现在你就可以在窗口中输入你想要执行的指令了,回车执行。
### 3. 快速切换到另一个打开的应用 ###
就像微软的Windows一样你可以使用"ALT"和"TAB" 的组合键在应用程序之间切换。
![TAB Through Applications.](http://f.tqn.com/y/linux/1/L/q/K/1/tabthroughwindows.png)
在一些键盘上tab键是这样的**|<- ->|**而有些则是简单的"TAB"字母。
*使用TAB在应用中切换*
GNOME应用间切换随着你的切换显示的是简单的图标和应用的名字
就像在微软的Windows下一样你可以使用"ALT"和"TAB" 的组合键在应用程序之间切换。
如果你按下"shift"+"tab"将反过来切换应用
在一些键盘上tab键上画的是这样的**|<- ->|**,而有些则是简单的"TAB"字母
![Switch Windows In The Same Application.](http://f.tqn.com/y/linux/1/L/r/K/1/switchwindowsinsameapplication.png)
在应用中切换不同窗口。
GNOME应用切换器随着你的切换显示简单的图标和应用的名字。
如果你按下"shift"+"tab"将以反序切换应用。
### 4. 在同一应用中快速切换不同的窗口 ###
![Switch Windows In The Same Application.](http://f.tqn.com/y/linux/1/L/r/K/1/switchwindowsinsameapplication.png)
*在应用中切换不同窗口*
如果你像我一样经常同时打开五六个Firefox窗口。
你已经知道通过"Alt"+"Tab"实现应用间的切换。
你已经知道通过"Alt"+"Tab"实现应用间的切换。有两种方法可以在同一个应用中所有打开的窗口中切换。
有两种方法可以在同应用中所有打开的窗口中切换
第一种是按"Alt"+"Tab"让选框停留在你所要切换窗口的应用图标上。短暂的停留等到下拉窗口出现你就能用鼠标选择窗口了
一种是按"Alt"+"Tab"让选框停留在你所要切换窗口的应用图标上。短暂的停留等到下拉窗出现你就能用鼠标选择窗口了
二种也是比较推荐的方式是按"Alt"+"Tab"让选框停留在你所要切换窗口的应用图标上,然后按"super"+"`"在此应用打开的窗口间切换
第二种也是比较推荐的方式是按"Alt"+"Tab"让选框停留在你所要切换窗口的应用图标上然后按"super"+"`"在此应用打开的窗口间切换。
**注释:"\`"就是tab键上面的那个键。无论你使用的那种键盘排布用于切换的键一直都是tab上面的那个键所以也有可能不是"\`"键。**
**注释"\`"就是tab键上面的那个键。用于切换的键一直都是tab上面的那个键无论你使用的那种键盘排布也有可能不是"`"键。**
如果你的手很灵活(或者是我称之为的忍者手)那你也可以同时按"shift", "`"和"super"键来反向切换窗口。
![Switch Keyboard Focus.](http://f.tqn.com/y/linux/1/L/s/K/1/switchkeyboardfocus.png)
切换键盘焦点。
如果你的手很灵活(或者是我称之为忍者手的)那你也可以同时按"shift", "\`"和"super"键来反向切换窗口。
### 5. 切换键盘焦点 ###
![Switch Keyboard Focus.](http://f.tqn.com/y/linux/1/L/s/K/1/switchkeyboardfocus.png)
*切换键盘焦点*
这个键盘快捷键并不是必须掌握的,但是还是最好掌握。
如若你想将输入的焦点放到搜索栏或者一个应用窗口上,你可以同时按下"CTRL", "ALT"和"TAB",这样就会出现一个让你选择切换区域的列表。
然后就可以按方向键做出选择了。
![Show All Applications.](http://f.tqn.com/y/linux/1/L/t/K/1/showapplications.png)
显示所有应用程序。
### 6. 显示所有应用程序列表 ###
![Show All Applications.](http://f.tqn.com/y/linux/1/L/t/K/1/showapplications.png)
*显示所有应用程序*
如果恰巧最后一个应用就是你想要找的,那么这样做真的会帮你省很多时间。
按"super"和"A"键来快速浏览这个包含你系统上所有应用的列表。
![Switch Workspaces.](http://f.tqn.com/y/linux/1/L/u/K/1/switchworkspaces.png)
切换工作区。
按"super"和"A"键来快速切换到这个包含你系统上所有应用的列表上。
### 7. 切换工作区 ###
![Switch Workspaces.](http://f.tqn.com/y/linux/1/L/u/K/1/switchworkspaces.png)
*切换工作区*
如果你已经使用linux有一段时间了那么这种[多工作区切换][3]的工作方式一定深得你心了吧。
举个例子,你在第一个工作区里做开发,第二个中浏览网页而把你邮件的客户端开在第三个工作区中。
举个例子,你在第一个工作区里做开发,第二个中浏览网页而把你邮件的客户端开在第三个工作区中。
工作区切换你可以使用"super"+"Page Up" (PGUP)键朝一个方向切,也可以按"super"+"Page Down" (PGDN)键朝另一个方向切。
工作区切换你可以使用"super"+"Page Up" (向上翻页)键朝一个方向切,也可以按"super"+"Page Down" (向下翻页)键朝另一个方向切。
还有一个比较麻烦的备选方案就是按"super"显示打开的应用,然后在屏幕的右侧选择你所要切换的工作区。
![Move Application To Another Workspace.](http://f.tqn.com/y/linux/1/L/v/K/1/movetoanewworkspace.png)
将应用移至另一个工作区。
### 8. 将一些项目移至一个新的工作区 ###
如果这个工作区已经被搞得杂乱无章了没准你会想将手头的应用转到一个全新的工作区,请按组合键"super", "shift"和"page up"或"super", "shift"和"page down" key。
![Move Application To Another Workspace.](http://f.tqn.com/y/linux/1/L/v/K/1/movetoanewworkspace.png)
*将应用移至另一个工作区*
如果这个工作区已经被搞得杂乱无章了,没准你会想将手头的应用转到一个全新的工作区,请按组合键"super", "shift"和"page up"或"super", "shift"和"page down" 键。
备选方案按"super"键,然后在应用列表中找到你想要移动的应用拖到屏幕右侧的工作区。
### 9. 显示信息托盘 ###
![Show The Message Tray.](http://f.tqn.com/y/linux/1/L/w/K/1/showmessagetray.png)
显示信息栏。
### 9. 显示信息栏 ###
*显示信息托盘*
消息栏会提供一些通知。
按"super"+"M"呼出消息栏。
消息托盘会提供一个通知列表。按"super"+"M"呼出消息托盘。
备选方法是鼠标移动到屏幕右下角。
![Lock The Screen.](http://f.tqn.com/y/linux/1/L/x/K/1/lockscreen.png)
锁屏。
### 10. 锁屏 ###
![Lock The Screen.](http://f.tqn.com/y/linux/1/L/x/K/1/lockscreen.png)
*锁屏*
想要休息一会喝杯咖啡?不想误触键盘?
无论何时只要离开你的电脑应该习惯性的按下"super"+"L"锁屏。
解锁方法是从屏幕的下方向上拽,输入密码即可。
![Control Alt Delete Within Fedora.](http://f.tqn.com/y/linux/1/L/y/K/1/poweroff.png)
Fedora中Control+Alt+Delete
### 11. 关机 ###
![Control Alt Delete Within Fedora.](http://f.tqn.com/y/linux/1/L/y/K/1/poweroff.png)
*Fedora中Control+Alt+Delete*
如果你曾是windows的用户你一定记得著名的三指快捷操作CTRL+ALT+DELETE。
如果在键盘上同时按下CTRL+ALT+DELETEFedora就会弹出一则消息提示你的电脑将在60秒后关闭。
@ -158,18 +165,17 @@ Fedora中Control+Alt+Delete
[录制的内容][4]将以[webm][5]格式保存于当前用户家目录下的录像文件夹中。
![Put Windows Side By Side.](http://f.tqn.com/y/linux/1/L/z/K/1/splitwindows.png)
并排显示窗口。
### 14. 并排显示窗口 ###
![Put Windows Side By Side.](http://f.tqn.com/y/linux/1/L/z/K/1/splitwindows.png)
*并排显示窗口*
你可以将一个窗口靠左占满左半屏,另一个窗口靠右占满右半屏,让两个窗口并排显示。
也可以按"Super"+"←"让当前应用占满左半屏。
也可以按"Super"+"←"(左箭头)让当前应用占满左半屏。按"Super"+"→"(右箭头)让当前应用占满右半屏。
按"Super"+"→"让当前应用占满右半屏。
### 15. 窗口的最大化, 最小化和恢复 ###
### 15. 窗口的最大化,最小化和恢复 ###
双击标题栏可以最大化窗口。
@ -177,11 +183,12 @@ Fedora中Control+Alt+Delete
右键菜单选择"最小化"就可以最小化了。
![GNOME Keyboard Shortcut Cheat Sheet. ](http://f.tqn.com/y/linux/1/L/-/L/1/gnomekeyboardshortcuts.png)
GNOME快捷键速查表。
### 16. 总结 ###
![GNOME Keyboard Shortcut Cheat Sheet. ](http://f.tqn.com/y/linux/1/L/-/L/1/gnomekeyboardshortcuts.png)
*GNOME快捷键速查表*
我做了一份快捷键速查表,你可以打印出来贴在墙上,这样一定能够更快上手。
当你掌握了这些快捷键后,你一定会感慨这个桌面环境使用起来是如此的顺手。
@ -196,8 +203,8 @@ GNOME快捷键速查表。
via: http://linux.about.com/od/howtos/tp/Fedora-GNOME-Keyboard-Shortcuts.htm
作者:[Gary Newell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,57 +0,0 @@
Papyrus: An Open Source Note Manager
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_4.jpeg)
In the last post, we saw an [open source to-do app, Go For It!][1]. In a similar vein, today we'll see an **open source note-taking application: Papyrus**.
[Papyrus][2] is a fork of [Kaqaz note manager][3] and is built on QT5. It brings a clean, polished user interface and is security focused (as it claims). Emphasizing on simplicity, I find Papyrus similar to OneNote. You organize your notes in paper and add them a label for grouping those papers. Simple enough!
### Papyrus features: ###
Though Papyrus focuses on simplicity, it still has plenty of features up its sleeves. Some of the main features are:
- Note management with labels and categories
- Advanced search options
- Touch mode available
- Full screen option
- Back up to Dropbox/hard drive/external
- Password protection for selective papers
- Sharing papers with other applications
- Encrypted synchronization via Dropbox
- Available for Android, Windows and OS X apart from Linux
### Install Papyrus ###
Papyrus has APK available for Android users. There are installer files for Windows and OS X. Linux users can get source code of the application. Ubuntu and other Ubuntu based distributions can use the .deb packages. Based on your OS and preference, you can get the respective files from the Papyrus download page:
- [Download Papyrus][4]
### Screenshots ###
Here are some screenshots of the application:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_3-700x450_c.jpeg)
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_2-700x450_c.jpeg)
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_1-700x450_c.jpeg)
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux-700x450_c.jpeg)
Give Papyrus a try and see if you like it. Do share your experience with it with the rest of us here.
--------------------------------------------------------------------------------
via: http://itsfoss.com/papyrus-open-source-note-manager/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/go-for-it-to-do-app-in-linux/
[2]:http://aseman.co/en/products/papyrus/
[3]:https://github.com/sialan-labs/kaqaz/
[4]:http://aseman.co/en/products/papyrus/

View File

@ -0,0 +1,102 @@
Picty: Managing Photos Made Easy
================================================================================
![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/03/picty_002-790x429.png)
### About Picty ###
**Picty** is a free, simple, yet powerful photo collection manager that will help you to manage your photos. It is designed around managing **metadata** and a **lossless** approach to image handling. Picty currently supports both online (web-based) and offline (local) collections. In local collections, the images will be stored in a local folder and its sub-folders. A database will be maintained in the user's home folder to speed up image queries. In online (web-based) collections, you can upload and share images through a web browser. Any user with proper rights can share photos with other people, each user can have multiple collections open at once, and collections can be shared by multiple users. There is a simple interface for transferring images between collections using a transfer plugin.
You can download any number of photos from your Camera or any devices. Also, Picty allows you to browse photo collections from your Camera before downloading it. Picty is lightweight application, and has snappy interface. It supports Linux, and Windows platforms.
### Features ###
- Supports big photo collections (20,000 plus images).
- Open more than one collection at a time and transfer images between them.
- Collections are:
- Folders of images in your local file system.
- Images on cameras, phones and other media devices.
- Photo hosting services (Flickr currently supported).
- picty does not “Import” photos into its own database, it simply provides an interface for accessing them wherever they are. To keep things snappy and to allow you to browse even if you are offline, picty maintains a cache of thumbnails and metadata.
- Reads and writes metadata in industry standard formats Exif, IPTC and Xmp
- Lossless approach:
- picty writes all changes including image edits as metadata, e.g. an image crop is stored as an instruction; the original pixels remain in the file
- Changes are stored in pictys collection cache until you save your metadata changes to the images. You can easily revert unsaved changes that you dont like.
- Basic image editing:
- Current support for basic image enhancements such as brightness, contrast, color, cropping, and straightening.
- Improvements to those tools and other tools coming soon (red eye reduction, levels, curves, noise reduction)
- Image tagging:
- Use standard IPTC and Xmp keywords for image tags
- A tag tree view lets you easily manage your tags and navigate your collection
- Folder view:
- Navigate the directory hierarchy of your image collection
- Multi-monitor support
- picty can be configured to let you browse your collection on one screen and view full screen images on another.
- Customizable
- Create launchers for external tools
- Supports plugins many of the current features (tagging and folder views, and all of the image editing tools) are provided by plugins
- Written in python batteries included!
### Installation ###
#### 1. Install from PPA ####
Picty developers has a PPA for Debian based distributions, like Ubuntu, to make the installation much easier.
To install in Ubuntu and derivatives, run:
sudo add-apt-repository ppa:damien-moore/ppa
sudo apt-get update
sudo apt-get install picty
#### 2. Install from Source ####
Also, you can install it from Source files. First, install the following dependencies.
sudo apt-get install bzr python-pyinotify python-pyexiv2 python-gtk2 python-gnome2 dcraw python-osmgpsmap python-flickrapi
Then, get the latest version using command:
bzr branch lp:picty
To run picty, change to the picty directory, and enter:
cd picty
bin/picty
To update to the latest version, run:
cd picty
bzr pull
### Usage ###
Launch Picty either from Menu or Unity Dash.
![picty_001](http://www.unixmen.com/wp-content/uploads/2015/03/picty_001.png)
You can either choose an existing collection, device or directory. Let us create a **new collection**. To do that, click the New Collection button. Enter the collection name, and browse to the path where you have the images stored. Finally, click the **Create** button.
![Create a Collection_001](http://www.unixmen.com/wp-content/uploads/2015/03/Create-a-Collection_001.png)
![picty_002](http://www.unixmen.com/wp-content/uploads/2015/03/picty_002.png)
You can modify, rotate, add/remove tags, set descriptive info of each images. To do that, just right click any image and do the actions of your choice.
Visit the following Google group to get more information and support about Picty Photo manager.
- [http://groups.google.com/group/pictyphotomanager][1]
Cheers!
--------------------------------------------------------------------------------
via: http://www.unixmen.com/picty-managing-photos-made-easy/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/sk/
[1]:http://groups.google.com/group/pictyphotomanager

View File

@ -1,161 +0,0 @@
How To Install / Configure VNC Server On CentOS 7.0
================================================================================
Hi there, this tutorial is all about how to install or setup [VNC][1] Server on your very CentOS 7. This tutorial also works fine in RHEL 7. In this tutorial, we'll learn what is VNC and how to install or setup [VNC Server][1] on CentOS 7
As we know, most of the time as a system administrator we are managing our servers over the network. It is very rare that we will need to have a physical access to any of our managed servers. In most cases all we need is to SSH remotely to do our administration tasks. In this article we will configure a GUI alternative to a remote access to our CentOS 7 server, which is VNC. VNC allows us to open a remote GUI session to our server and thus providing us with a full graphical interface accessible from any remote location.
VNC server is a Free and Open Source Software which is designed for allowing remote access to the Desktop Environment of the server to the VNC Client whereas VNC viewer is used on remote computer to connect to the server .
**Some Benefits of VNC server are listed below:**
- Remote GUI administration makes work easy & convenient.
- Clipboard sharing between host CentOS server & VNC-client machine.
- GUI tools can be installed on the host CentOS server to make the administration more powerful.
- Host CentOS server can be administered through any OS having the VNC-client installed.
- More reliable over SSH graphics and RDP connections.
So, now let's start our journey towards the installation of VNC Server. We need to follow the steps below to set it up and get a working VNC.
First of all we'll need a working Desktop Environment (X-Windows), if we don't have a working GUI Desktop Environment (X Windows) running, we'll need to install it first.
**Note: The commands below must be running under root privilege. To switch to root please execute "sudo -s" under a shell or terminal without quotes("")**
### 1. Installing X-Windows ###
First of all to install [X-Windows][2] we'll need to execute the below commands in a shell or terminal. It will take few minutes to install its packages.
# yum check-update
# yum groupinstall "X Window System"
![installing x windows](http://blog.linoxide.com/wp-content/uploads/2015/01/installing-x-windows.png)
#yum install gnome-classic-session gnome-terminal nautilus-open-terminal control-center liberation-mono-fonts
![install gnome classic session](http://blog.linoxide.com/wp-content/uploads/2015/01/gnome-classic-session-install.png)
# unlink /etc/systemd/system/default.target
# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target
![configuring graphics](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-graphics.png)
# reboot
After our machine restarts, we'll get a working CentOS 7 Desktop.
Now, we'll install VNC Server on our machine.
### 2. Installing VNC Server Package ###
Now, we'll install VNC Server package in our CentOS 7 machine. To install VNC Server, we'll need to execute the following command.
# yum install tigervnc-server -y
![vnc server](http://blog.linoxide.com/wp-content/uploads/2015/01/install-tigervnc.png)
### 3. Configuring VNC ###
Then, we'll need to create a configuration file under **/etc/systemd/system/** directory. We can copy the **vncserver@:1.service** file from example file from **/lib/systemd/system/vncserver@.service**
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
![copying vnc server configuration](http://blog.linoxide.com/wp-content/uploads/2015/01/copying-configuration.png)
Now we'll open **/etc/systemd/system/vncserver@:1.service** in our favorite text editor (here, we're gonna use **nano**). Then find the below lines of text in that file and replace <USER> with your username. Here, in my case its linoxide so I am replacing <USER> with linoxide and finally looks like below.
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
PIDFile=/home/<USER>/.vnc/%H%i.pid
TO
ExecStart=/sbin/runuser -l linoxide -c "/usr/bin/vncserver %i"
PIDFile=/home/linoxide/.vnc/%H%i.pid
If you are creating for root user then
ExecStart=/sbin/runuser -l root -c "/usr/bin/vncserver %i"
PIDFile=/root/.vnc/%H%i.pid
![configuring user](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-user.png)
Now, we'll need to reload our systemd.
# systemctl daemon-reload
Finally, we'll create VNC password for the user . To do so, first you'll need to be sure that you have sudo access to the user, here I will login to user "linoxide" then, execute the following. To login to linoxide we'll run "**su linoxide" without quotes** .
# su linoxide
$ sudo vncpasswd
![setting vnc password](http://blog.linoxide.com/wp-content/uploads/2015/01/vncpassword.png)
**Make sure that you enter passwords more than 6 characters.**
### 4. Enabling and Starting the service ###
To enable service at startup ( Permanent ) execute the commands shown below.
$ sudo systemctl enable vncserver@:1.service
Then, start the service.
$ sudo systemctl start vncserver@:1.service
### 5. Allowing Firewalls ###
We'll need to allow VNC services in Firewall now.
$ sudo firewall-cmd --permanent --add-service vnc-server
$ sudo systemctl restart firewalld.service
![allowing firewalld](http://blog.linoxide.com/wp-content/uploads/2015/01/allowing-firewalld.png)
Now you can able to connect VNC server using IP and Port ( Eg : ip-address:1 )
### 6. Connecting the machine with VNC Client ###
Finally, we are done installing VNC Server. Now, we'll want to connect to the server machine and access it remotely. For that we'll need a VNC client installed on our computer, which will enable us to remotely access the server machine.
![remote access vncserver from vncviewer](http://blog.linoxide.com/wp-content/uploads/2015/01/vncviewer.png)
You can use VNC client like [Tightvnc viewer][3] and [Realvnc viewer][4] to connect Server.
To connect with additional users create files with different ports, please go to step 3 to configure and add a new user and port, You'll need to create **vncserver@:2.service** and replace the username in config file and continue the steps by replacing service name for different ports. **Please make sure you logged in as that particular user for creating vnc password**.
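For example, the copy step for a second user's service file looks like this (a sketch mirroring step 3; edit the username inside the new file the same way as before):

    # cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:2.service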
VNC by itself runs on port 5900. Since each user will run their own VNC server, each user will have to connect via a separate port. The addition of a number in the file name tells VNC to run that service as a sub-port of 5900. So in our case, arun's VNC service will run on port 5901 (5900 + 1) and further will run on 5900 + x. Where, x denotes the port specified when creating config file **vncserver@:x.service for the further users**.
We'll need to know the IP Address and Port of the server to connect with the client. IP addresses are the unique identity number of the machine. Here, my IP address is 96.126.120.92 and port for this user is 1. We can get the public IP address by executing the below command in a shell or terminal of the machine where VNC Server is installed.
# curl -s checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
### Conclusion ###
Finally, we installed and configured VNC Server in the machine running CentOS 7 / RHEL 7 (Red Hat Enterprises Linux) . VNC is the most easy FOSS tool for the remote access and also a good alternative to Teamviewer Remote Access. VNC allows a user with VNC client installed to control the machine with VNC Server installed. Here are some commands listed below that are highly useful in VNC . Enjoy !!
#### Additional Commands : ####
- To stop VNC service .
# systemctl stop vncserver@:1.service
- To disable VNC service from startup.
# systemctl disable vncserver@:1.service
- To stop firewall.
# systemctl stop firewalld.service
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-configure-vnc-server-centos-7-0/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://en.wikipedia.org/wiki/Virtual_Network_Computing
[2]:http://en.wikipedia.org/wiki/X_Window_System
[3]:http://www.tightvnc.com/
[4]:https://www.realvnc.com/

View File

@ -1,144 +0,0 @@
disylee 来一篇~
How to analyze and view Apache web server logs interactively on Linux
================================================================================
Whether you are in the web hosting business, or run a few web sites on a VPS yourself, chances are you want to display visitor statistics such as top visitors, requested files (dynamic or static), used bandwidth, client browsers, and referring sites, and so forth.
[GoAccess][1] is a command-line log analyzer and interactive viewer for Apache or Nginx web server. With this tool, you will not only be able to browse the data mentioned earlier, but also parse the web server logs to dig for further data as well - and **all of this within a terminal window in real time**. Since as of today [most web servers][2] use either a Debian derivative or a Red Hat based distribution as the underlying operating system, I will show you how to install and use GoAccess in Debian and CentOS.
### Installing GoAccess on Linux ###
In Debian, Ubuntu and derivatives, run the following command to install GoAccess:
# aptitude install goaccess
In CentOS, you'll need to enable the [EPEL repository][3] and then:
# yum install goaccess
In Fedora, simply use the yum command:
# yum install goaccess
If you want to install GoAccess from the source to enable further options (such as GeoIP location), install [required dependencies][4] for your operating system, and then follow these steps:
# wget http://tar.goaccess.io/goaccess-0.8.5.tar.gz
# tar -xzvf goaccess-0.8.5.tar.gz
# cd goaccess-0.8.5/
# ./configure --enable-geoip
# make
# make install
That will install version 0.8.5, but you can always check what the latest version is on the [Downloads page][5] of the project's web site.
Since GoAccess does not require any further configurations, once it's installed you are ready to go.
### Running GoAccess ###
To start using GoAccess, just run it against your Apache access log.
For Debian and derivatives:
# goaccess -f /var/log/apache2/access.log
For Red Hat based distros:
# goaccess -f /var/log/httpd/access_log
When you first launch GoAccess, you will be presented with the following screen to choose the date and log format. As explained, you can toggle between options using the spacebar and proceed with F10. As for the date and log formats, you may want to refer to the [Apache documentation][6] if you need to refresh your memory.
In this case, choose the Common Log Format (CLF):
![](https://farm8.staticflickr.com/7422/15868350373_30c16d7c30.jpg)
and then press F10. You will be presented with the statistics screen. For the sake of brevity, only the header, which shows the summary of the log file, is shown in the next image:
![](https://farm9.staticflickr.com/8683/16486742901_7a35b5df69_b.jpg)
### Browsing Web Server Statistics with GoAccess ###
As you scroll down the page with the down arrow, you will find the following sections, sorted by requests. The order of the categories presented here may vary depending on your distribution or your preferred installation method (from repositories or from source):
1. Unique visitors per day (HTTP requests with the same IP, same date and same agent are considered a unique visit)
![](https://farm8.staticflickr.com/7308/16488483965_a439dbc5e2_b.jpg)
2. Requested files (Pages-URL)
![](https://farm9.staticflickr.com/8651/16488483975_66d05dce51_b.jpg)
3. Requested static files (e.g., .png, .js, etc)
4. Referrers URLs (the URLs where each request came from)
5. HTTP 404 Not Found response code
![](https://farm9.staticflickr.com/8669/16486742951_436539b0da_b.jpg)
6. Operating Systems
7. Browsers
8. Hosts (client IPs)
![](https://farm8.staticflickr.com/7392/16488483995_56e706d77c_z.jpg)
9. HTTP status codes
![](https://farm8.staticflickr.com/7282/16462493896_77b856f670_b.jpg)
10. Top referring sites
11. Top keyphrases used on Google's search engine
If you also want to inspect the archived logs, you can pipe them to GoAccess as follows.
For Debian and derivatives:
# zcat -f /var/log/apache2/access.log* | goaccess
For Red Hat based distributions:
# cat /var/log/httpd/access* | goaccess
Should you need a more detailed report of any of the above (1 through 11), press the desired section number and then O (uppercase o) to bring up what is called the Detailed View. The following image shows the output of 5-O (press 5, then press O):
![](https://farm8.staticflickr.com/7382/16302213429_48d9233f40_b.jpg)
To display GeoIP location information, open the Detail View in the Hosts section, as explained earlier, and you will see the location of the client IPs that performed requests to your web server:
![](https://farm8.staticflickr.com/7393/16488484075_d778aa91a2_z.jpg)
If your system has not been very busy lately, some of the above sections will not show a great deal of information, but that situation can change as more and more requests are made to your web server.
### Saving Reports for Offline Analysis ###
There will be times when you don't want to inspect your system's stats in real time, but save it to a file for offline analysis or printing. To generate an HTML report, simply redirect the output of the GoAccess commands mentioned earlier to an HTML file. Then just point your web browser to the file to open it.
# zcat -f /var/log/apache2/access.log* | goaccess > /var/www/webserverstats.html
Once the report is displayed, you will need to click on the Expand link to show the detail view on each category:
![](https://farm9.staticflickr.com/8658/16486743041_bd8a80794d_o.png)
Note: embedded YouTube video below
<iframe width="615" height="346" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/UVbLuaOpYdg?feature=oembed"></iframe>
As we have discussed throughout this article, GoAccess is an invaluable tool that will provide you, as a system administrator, with HTTP statistics in a visual report on the fly. Although GoAccess by default presents its results to the standard output, you can also save them to JSON, HTML, or CSV files. This makes GoAccess an incredibly useful tool for monitoring and displaying statistics of a web server.
--------------------------------------------------------------------------------
via: http://xmodulo.com/interactive-apache-web-server-log-analyzer-linux.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://goaccess.io/
[2]:http://w3techs.com/technologies/details/os-linux/all/all
[3]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
[4]:http://goaccess.io/download#dependencies
[5]:http://goaccess.io/download
[6]:http://httpd.apache.org/docs/2.4/logs.html

View File

@ -1,3 +1,5 @@
FSSlc translating
How to access Gmail from the command line on Linux with Alpine
================================================================================
If you are a command-line lover, I am sure that you welcome with open arms any tool that allows you to perform at least one of your daily tasks using that powerful work environment, e.g., from [scheduling appointments][1] and [managing finances][2] to accessing [Facebook][3] and [Twitter][4].
@ -100,4 +102,4 @@ via: http://xmodulo.com/gmail-command-line-linux-alpine.html
[2]:http://xmodulo.com/manage-personal-expenses-command-line.html
[3]:http://xmodulo.com/access-facebook-command-line-linux.html
[4]:http://xmodulo.com/access-twitter-command-line-linux.html
[5]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
[5]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html

View File

@ -1,151 +0,0 @@
zpl1025
Systemd Boot Process a Close Look in Linux
================================================================================
The way a Linux system boots up is quite complex, and there has always been a need to optimize it. The traditional boot process of a Linux system is mainly handled by the well-known init process (also known as the SysV init boot system). Inefficiencies have been identified in the init-based boot system, and systemd is another boot manager for Linux-based systems that claims to overcome the shortcomings of the [traditional Linux SysV init][1] system. We will focus our discussion on the features and controversies of systemd, but in order to understand it, let's first see how the Linux boot process is handled by the traditional SysV init system. Kindly note that systemd is still in a testing phase, and future releases of Linux operating systems are preparing to replace their current boot process with the systemd boot manager.
### Understanding Linux Boot Process ###
Init is the very first process that starts when we power on a Linux system. The init process is assigned PID 1 and is the parent of all other processes on the system. When a Linux computer is started, the processor looks for the BIOS in system memory; the BIOS then tests system resources and finds the first boot device, usually set to the hard disk. It looks for the Master Boot Record (MBR) on the hard disk, loads its contents into memory and passes control to it; the rest of the boot process is controlled by the MBR.
The Master Boot Record initiates the boot loader (Linux has two well-known boot loaders, GRUB and LILO; about 80% of Linux systems use GRUB). GRUB or LILO then loads the kernel, which immediately looks for “init” in /sbin and executes it. That is how init becomes the parent process of the Linux system. The very first file read by init is /etc/inittab; from there init decides the run level of the Linux operating system. It finds the partition information in the /etc/fstab file and mounts partitions accordingly. Init then launches all the services/scripts specified in the /etc/init.d directory for the default run level. This is the step where all services are initialized by init, one by one: one service at a time is started, all services/daemons run in the background, and init keeps managing them.
The shutdown process works in pretty much the reverse order: first init stops all services, and then the filesystems are unmounted in the last stage.
The above-mentioned process has some shortcomings, and the need to replace traditional init with something better has been felt for a long time now. Several replacements have been developed and implemented. The well-known replacements for this init-based system are Upstart, Epoch, Mudar and systemd. Systemd is the one that has received the most attention and is considered the best of the available alternatives.
### Understanding Systemd ###
Reducing boot time and computational overhead was the main objective of developing systemd. Systemd (system management daemon), originally developed under the GNU General Public License, is now released under the GNU Lesser General Public License and is the most frequently discussed boot and service manager these days. If your Linux system is configured to use the systemd boot manager, the startup process will be handled by systemd instead of the traditional SysV init. One of the core features of systemd is that it also supports the post-boot scripts of SysV init.
Systemd introduces the concept of parallelized boot: it creates a socket for each daemon that needs to be started, and these sockets are abstracted from the processes that use them, allowing daemons to interact with each other. Systemd creates new processes and assigns every process to a control group. Processes in different control groups use the kernel to communicate with each other. The way [systemd handles the startup process][2] is quite neat and much more optimized than the traditional init-based system. Let's review some of the core features of systemd.
- The boot process is much simpler compared to init
- Systemd provides concurrent and parallel processing of the system boot, ensuring better boot speed
- Processes are tracked using control groups, not by PIDs
- Improved handling of boot and service dependencies
- Capability of taking system snapshots and restoring from them
- Monitoring of started services; also capable of restarting crashed services
- Includes the systemd-logind module to control user logins
- Ability to add and remove components
- Low memory footprint and the ability to schedule jobs
- The journald module for event logging and the syslogd module for system logging
Systemd handles the system shutdown process in a well-organized way as well. It has three units located inside the /usr/lib/systemd/ directory, named systemd-halt.service, systemd-poweroff.service and systemd-reboot.service. These are executed when the user chooses to shut down, reboot or halt the Linux system. In the event of a shutdown, systemd first unmounts all file systems, disables all swap devices, detaches the storage devices and kills the remaining processes.
![](http://images.linoxide.com/systemd-boot-process.jpg)
### Structural Overview of Systemd ###
Let's review the Linux boot process with some structural details when systemd is used as the boot and service manager. For the sake of simplicity, the process is listed in steps below:
**1.** The very first step when you power on your system is BIOS initialization. The BIOS reads the boot device settings, locates the MBR and hands over control to it (assuming the hard disk is set as the first boot device).
**2.** The MBR reads information from the GRUB or LILO boot loader and initializes the kernel. GRUB or LILO specifies how to handle the rest of the system boot. If you have specified systemd as the boot manager in the GRUB configuration file, the further boot process will be handled by systemd. Systemd handles the boot and service management process using “targets”. The “target” files in systemd are used for grouping different boot units and synchronizing startup processes.
**3.** The very first target executed by systemd is **default.target**, but default.target is actually a symlink to **graphical.target**. A symlink in Linux works just like a shortcut in Windows. The graphical.target file is located at /usr/lib/systemd/system/graphical.target. The contents of the graphical.target file are shown in the following screenshot.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/graphical1.png)
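You can confirm this on a running systemd machine; as a small sketch (the exact symlink location can vary between distributions):
    $ systemctl get-default                        # prints the name of the default target
    $ ls -l /etc/systemd/system/default.target     # on many systems this symlink points to graphical.target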
**4.** At this stage, **multi-user.target** has been invoked, and this target keeps its further sub-units inside the “/etc/systemd/system/multi-user.target.wants” directory. This target sets up the environment for multi-user support. Non-root users are enabled at this stage of the boot process, and firewall-related services are started at this stage as well.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/multi-user-target1.png)
"multi-user.target" passes control to another layer “**basic.target**”.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Basic-Target.png)
**5.** The "basic.target" unit is the one that starts the usual services, especially the graphical manager service. It uses the /etc/systemd/system/basic.target.wants directory to decide which services need to be started; basic.target then passes control on to **sysinit.target**.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Sysint-Target.png)
**6.** "sysinit.target" starts important system services such as file system mounting, swap spaces and devices, additional kernel options, etc. sysinit.target then passes the startup process on to **local-fs.target**. The contents of this target unit are shown in the following screenshot.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/local-FS-Target.png)
**7.** local-fs.target: no user-related services are started by this target unit; it handles core low-level services only. This target performs actions on the basis of the /etc/fstab and /etc/inittab files.
### Analyzing System Boot Performance ###
Systemd offers a tool to identify and troubleshoot boot-related issues and performance concerns. **systemd-analyze** is a built-in command that lets you examine the boot process. You can find out which units ran into errors during boot and can further trace and correct boot component issues. Some useful systemd-analyze commands are listed below.
**systemd-analyze time** shows the time spent in the kernel and in normal user space.
$ systemd-analyze time
Startup finished in 1440ms (kernel) + 3444ms (userspace)
**systemd-analyze blame** prints a list of all running units, sorted by the time they took to initialize; this way you can get an idea of which services take a long time to start during boot.
$ systemd-analyze blame
2001ms mysqld.service
234ms httpd.service
191ms vmms.service
**systemd-analyze verify** shows whether there are any syntax errors in the system units. **systemd-analyze plot** can be used to write the whole startup process to an SVG file. The whole boot process is very lengthy to read, so with this command we can dump the output of the whole boot process into a file and then read and analyze it further. The following command will take care of this.
systemd-analyze plot > boot.svg
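For completeness, a quick sketch of the verify sub-command, using sshd.service as an arbitrary example unit:
    $ systemd-analyze verify /usr/lib/systemd/system/sshd.service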
### Systemd Controversies ###
Systemd has not been lucky enough to receive love from everyone; some professionals and administrators have different opinions on how it works and develops. According to critics of systemd, it is “not Unix-like” because it tries to replace some system services. Some professionals also don't like the idea of using binary configuration files. It is said that editing the systemd configuration is not an easy task and that there are no graphical tools available for this purpose.
### Test Systemd on Ubuntu 14.04 and 12.04 ###
Originally, Ubuntu decided to replace its current boot process with systemd in Ubuntu 16.04 LTS. Ubuntu 16.04 is supposed to be released in April 2016, but considering the popularity of and demand for systemd, the upcoming **Ubuntu 15.04** will have it as its default boot manager. The good news is that users of Ubuntu 14.04 Trusty Tahr and Ubuntu 12.04 Precise Pangolin can still test systemd on their machines. The test process is not very complex: all you need to do is add the related PPA to the system, update the repositories and perform a system upgrade.
**Disclaimer**: Please note that systemd is still in the testing and development stages for Ubuntu. Testing packages might have unknown issues and, in the worst-case scenario, might break your system configuration. Make sure you back up your important data before trying this upgrade.
Run the following command in the terminal to add the PPA to your Ubuntu system:
sudo add-apt-repository ppa:pitti/systemd
You will see a warning message here because we are trying to use a temporary/testing PPA, which is not recommended for production machines.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/PPA-Systemd1.png)
Now update the APT Package Manager repositories by running the following command.
sudo apt-get update
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Update-APT1.png)
Perform system upgrade by running the following command.
sudo apt-get dist-upgrade
![](http://blog.linoxide.com/wp-content/uploads/2015/03/System-Upgrade.png)
That's all; you should be able to see systemd's files on your Ubuntu system now. Just browse to the /lib/systemd/ directory and look at the files there.
Alright, it's time to edit the GRUB configuration file and specify systemd as the default boot manager. Edit the grub file using the Gedit text editor.
sudo gedit /etc/default/grub
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Edit-Grub.png)
Here, edit the GRUB_CMDLINE_LINUX_DEFAULT parameter in this file and set its value to: "**init=/lib/systemd/systemd**"
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Grub-Systemd.png)
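As a minimal sketch, the edited line should end up looking roughly like this:
    # in /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="init=/lib/systemd/systemd"
Then run `sudo update-grub` to regenerate the boot configuration so the change takes effect on the next boot.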
That's all; your Ubuntu system is no longer using its traditional boot manager, it's using systemd now. Reboot your system and watch the systemd boot process.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Sytemd-Boot.png)
### Conclusion ###
Systemd is no doubt a step forward towards improving the Linux boot process; it is an awesome suite of libraries and daemons that together improve the system boot and shutdown process. Many Linux distributions are preparing to support it as their official boot manager, and in future releases of Linux distros we can hope to see systemd at startup. On the other hand, in order to succeed and be adopted on a wide scale, systemd should address the concerns of its critics as well.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/systemd-boot-process/
作者:[Aun Raza][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:http://linoxide.com/booting/boot-process-of-linux-in-detail/
[2]:http://0pointer.de/blog/projects/self-documented-boot.html

View File

@ -1,266 +0,0 @@
11 Linux Terminal Commands That Will Rock Your World
================================================================================
I have been using Linux for about 10 years and what I am going to show you in this article is a list of Linux commands, tools and clever little tricks that I wish somebody had shown me from the outset instead of stumbling upon them as I went along.
![Linux Keyboard Shortcuts.](http://f.tqn.com/y/linux/1/L/m/J/1/keyboardshortcuts.png)
Linux Keyboard Shortcuts.
### 1. Useful Command Line Keyboard Shortcuts ###
The following keyboard shortcuts are incredibly useful and will save you loads of time:
- CTRL + U - Cuts text up until the cursor.
- CTRL + K - Cuts text from the cursor until the end of the line
- CTRL + Y - Pastes text
- CTRL + E - Move cursor to end of line
- CTRL + A - Move cursor to the beginning of the line
- ALT + F - Jump forward to next space
- ALT + B - Skip back to previous space
- ALT + Backspace - Delete previous word
- CTRL + W - Cut word behind cursor
- Shift + Insert - Pastes text into terminal
Just so that the shortcuts above make sense, look at the next line of text.
sudo apt-get intall programname
As you can see I have a spelling error and for the command to work I would need to change "intall" to "install".
Imagine the cursor is at the end of the line. There are various ways to get back to the word install to change it.
I could press ALT + B twice which would put the cursor in the following position (denoted by the ^ symbol):
sudo apt-get^intall programname
Now you could use the cursor keys to move to the right spot and insert the missing 's' to turn "intall" into "install".
Another useful shortcut is "Shift + Insert", especially if you need to copy text from a browser into the terminal.
![](http://f.tqn.com/y/linux/1/L/n/J/1/sudotricks2.png)
### 2. SUDO !! ###
You are really going to thank me for the next command if you don't already know it, because until you know it exists, you will curse yourself every time you enter a command and the words "permission denied" appear.
- sudo !!
How do you use sudo !!? Simple. Imagine you have entered the following command:
apt-get install ranger
The words "Permission denied" will appear unless you are logged in with elevated privileges.
sudo !! runs the previous command as sudo. So the previous command now becomes:
sudo apt-get install ranger
If you don't know what sudo is [start here][1].
![Pause Terminal Applications.](http://f.tqn.com/y/linux/1/L/o/J/1/pauseapps.png)
Pause Terminal Applications.
### 3. Pausing Commands And Running Commands In The Background ###
I have already written a guide showing how to run terminal commands in the background.
- CTRL + Z - Pauses an application
- fg - Returns you to the application
So what is this tip about?
Imagine you have opened a file in nano as follows:
sudo nano abc.txt
Halfway through typing text into the file you realise that you quickly want to type another command into the terminal but you can't because you opened nano in foreground mode.
You may think your only option is to save the file, exit nano, run the command and then re-open nano.
All you have to do is press CTRL + Z and the foreground application will pause and you will be returned to the command line. You can then run any command you like and when you have finished return to your previously paused session by entering "fg" into the terminal window and pressing return.
An interesting thing to try out is to open a file in nano, enter some text and pause the session. Now open another file in nano, enter some text and pause the session. If you now enter "fg" you return to the second file you opened in nano. If you exit nano and enter "fg" again you return to the first file you opened within nano.
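If you lose track of which sessions are paused, the shell's job control commands can help; a small sketch:
    jobs      # list paused and background jobs with their job numbers
    fg %1     # bring job number 1 back to the foreground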
![nohup.](http://f.tqn.com/y/linux/1/L/p/J/1/nohup3.png)
nohup.
### 4. Use nohup To Run Commands After You Log Out Of An SSH Session ###
The [nohup command][2] is really useful if you use the ssh command to log onto other machines.
So what does nohup do?
Imagine you are logged on to another computer remotely using ssh and you want to run a command that takes a long time, then exit the ssh session but leave the command running even though you are no longer connected. nohup lets you do just that.
For instance I use my [Raspberry PI][3] to download distributions for review purposes.
I never have my Raspberry PI connected to a display nor do I have a keyboard and mouse connected to it.
I always connect to the Raspberry PI via [ssh][4] from a laptop. If I started downloading a large file on the Raspberry PI without using the nohup command then I would have to wait for the download to finish before logging off the ssh session and before shutting down the laptop. If I did this then I may as well have not used the Raspberry PI to download the file at all.
To use nohup all I have to type is nohup followed by the command as follows:
nohup wget http://mirror.is.co.za/mirrors/linuxmint.com/iso//stable/17.1/linuxmint-17.1-cinnamon-64bit.iso &
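One thing worth knowing: when its output is not redirected elsewhere, nohup appends it to a file called nohup.out in the directory the command was started from, so on your next login you can check progress with something like:
    tail -f nohup.out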
![Schedule tasks with at.](http://f.tqn.com/y/linux/1/L/q/J/1/at.png)
Schedule tasks with at.
### 5. Running A Linux Command 'AT' A Specific Time ###
The 'nohup' command is good if you are connected to an SSH server and you want the command to remain running after logging out of the SSH session.
Imagine you want to run that same command at a specific point in time.
The 'at' command allows you to do just that. 'at' can be used as follows.
at 10:38 PM Fri
at> cowsay 'hello'
at> CTRL + D
The above command will run the program [cowsay][5] at 10:38 PM on Friday evening.
The syntax is 'at' followed by the date and time to run.
When the at> prompt appears enter the command you want to run at the specified time.
Pressing CTRL + D saves the job and returns you to the command line.
There are lots of different date and time formats and it is worth checking the man pages for more ways to use 'at'.
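A few more 'at'-related commands worth keeping in mind (a quick sketch):
    at now + 1 hour    # schedule a job one hour from now
    atq                # list your pending jobs
    atrm 3             # remove the pending job with number 3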
![](http://f.tqn.com/y/linux/1/L/l/J/1/manmost.png)
### 6. Man Pages ###
Man pages give you an outline of what commands are supposed to do and the switches that can be used with them.
The man pages are kind of dull on their own. (I guess they weren't designed to excite us).
You can however do things to make your usage of man more appealing.
export PAGER=most
You will need to install 'most' for this to work, but when you do, it makes your man pages more colourful.
You can limit the width of the man page to a certain number of columns using the following command:
export MANWIDTH=80
Finally, if you have a browser available you can open any man page in the default browser by using the -H switch as follows:
man -H <command>
Note that this only works if you have a default browser set up in the $BROWSER environment variable.
![View Processes With htop.](http://f.tqn.com/y/linux/1/L/r/J/1/nohup2.png)
View Processes With htop.
### 7. Use htop To View And Manage Processes ###
Which command do you currently use to find out which processes are running on your computer? My bet is that you are using '[ps][6]' and that you are using various switches to get the output you desire.
Install '[htop][7]'. It is definitely a tool you will wish you had installed earlier.
htop provides a list of all running processes in the terminal, much like the task manager in Windows.
You can use a mixture of function keys to change the sort order and the columns that are displayed. You can also kill processes from within htop.
To run htop simply type the following into the terminal window:
htop
![Command Line File Manager - Ranger.](http://f.tqn.com/y/linux/1/L/s/J/1/ranger.png)
Command Line File Manager - Ranger.
### 8. Navigate The File System Using ranger ###
If htop is immensely useful for controlling the processes running via the command line then [ranger][8] is immensely useful for navigating the file system using the command line.
You will probably need to install ranger to be able to use it but once installed you can run it simply by typing the following into the terminal:
ranger
The command-line window will look much like any other file manager, but it works left to right rather than top to bottom, meaning that the left arrow key moves you up the folder structure and the right arrow key moves you down into it.
It is worth reading the man pages before using ranger so that you can get used to all the keyboard shortcuts that are available.
![Cancel Linux Shutdown.](http://f.tqn.com/y/linux/1/L/t/J/1/shutdown.png)
Cancel Linux Shutdown.
### 9. Cancel A Shutdown ###
So you started the [shutdown][9] either via the command line or from the GUI and you realised that you really didn't want to do that.
shutdown -c
Note that if the shutdown has already started then it may be too late to stop the shutdown.
Another command to try is as follows:
- [pkill][10] shutdown
![Kill Hung Processes With XKill.](http://f.tqn.com/y/linux/1/L/u/J/1/killhungprocesses.png)
Kill Hung Processes With XKill.
### 10. Killing Hung Processes The Easy Way ###
Imagine you are running an application and for whatever reason it hangs.
You could use 'ps -ef' to find the process and then kill the process or you could use 'htop'.
There is a quicker and easier command that you will love called [xkill][11].
Simply type the following into a terminal and then click on the window of the application you want to kill.
xkill
What happens though if the whole system is hanging?
Hold down the 'alt' and 'sysrq' keys on your keyboard and whilst they are held down type the following slowly:
- [REISUB][12]
This will restart your computer without having to hold in the power button.
![youtube-dl.](http://f.tqn.com/y/linux/1/L/v/J/1/youtubedl2.png)
youtube-dl.
### 11. Download Youtube Videos ###
Generally speaking most of us are quite happy for Youtube to host the videos and we watch them by streaming them through our chosen media player.
If you know you are going to be offline for a while (e.g. due to a plane journey or travelling between the south of Scotland and the north of England) then you may wish to download a few videos onto a pen drive and watch them at your leisure.
All you have to do is install youtube-dl from your package manager.
You can use youtube-dl as follows:
youtube-dl url-to-video
You can get the url to any video on Youtube by clicking the share link on the video's page. Simply copy the link and paste it into the command line (using the shift + insert shortcut).
### Summary ###
I hope that you found this list useful and that you are thinking "I didn't know you could do that" for at least one of the 11 items listed.
--------------------------------------------------------------------------------
via: http://linux.about.com/od/commands/tp/11-Linux-Terminal-Commands-That-Will-Rock-Your-World.htm
作者:[Gary Newell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linux.about.com/bio/Gary-Newell-132058.htm
[1]:http://linux.about.com/cs/linux101/g/sudo.htm
[2]:http://linux.about.com/library/cmd/blcmdl1_nohup.htm
[3]:http://linux.about.com/od/mobiledevicesother/a/Raspberry-Pi-Computer-Running-Linux.htm
[4]:http://linux.about.com/od/commands/l/blcmdl1_ssh.htm
[5]:http://linux.about.com/cs/linux101/g/cowsay.htm
[6]:http://linux.about.com/od/commands/l/blcmdl1_ps.htm
[7]:http://www.linux.com/community/blogs/133-general-linux/745323-5-commands-to-check-memory-usage-on-linux
[8]:http://ranger.nongnu.org/
[9]:http://linux.about.com/od/commands/l/blcmdl8_shutdow.htm
[10]:http://linux.about.com/library/cmd/blcmdl1_pkill.htm
[11]:http://linux.about.com/od/funnymanpages/a/funman_xkill.htm
[12]:http://blog.kember.net/articles/reisub-the-gentle-linux-restart/

View File

@ -1,42 +0,0 @@
[translating by KayGuoWhu]
How to enable ssh login without entering password
================================================================================
Assume that you are a user "aliceA" on hostA, and wish to ssh to hostB as user "aliceB" without entering a password on hostB. You can follow this guide to **enable ssh login without entering a password**.
First of all, you need to be logged in as user "aliceA" on hostA.
Generate a public/private rsa key pair using ssh-keygen. The generated key pair will be stored in the ~/.ssh directory.
$ ssh-keygen -t rsa
Then, create the ~/.ssh directory under aliceB's account on the destination hostB by running the following command. This step can be omitted if there is already a .ssh directory at aliceB@hostB.
$ ssh aliceB@hostB mkdir -p .ssh
Finally, copy the public key of user "aliceA" on hostA to aliceB@hostB to enable password-less ssh.
$ cat .ssh/id_rsa.pub | ssh aliceB@hostB 'cat >> .ssh/authorized_keys'
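On systems where it is available, the same result can usually be achieved in a single step with ssh-copy-id (a sketch of the equivalent command):
    $ ssh-copy-id aliceB@hostB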
From this point on, you no longer need to type in password to ssh to aliceB@hostB from aliceA@hostA.
### Troubleshooting ###
1. You are still asked for an SSH password even after enabling key authentication. In this case, check the system logs (e.g., /var/log/secure) to see whether something like the following appears.
Authentication refused: bad ownership or modes for file /home/aliceB/.ssh/authorized_keys
In this case, the failure of key authentication is due to the fact that the permissions or ownership of the ~/.ssh/authorized_keys file are not correct. Typically this error happens if ~/.ssh/authorized_keys is readable by anyone but yourself. To fix this problem, change the file permissions as follows.
$ chmod 700 ~/.ssh/authorized_keys
--------------------------------------------------------------------------------
via: http://xmodulo.com/how-to-enable-ssh-login-without.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni

View File

@ -1,100 +0,0 @@
translating wi-cuckoo LLAP
How to Interactively Create a Docker Container
================================================================================
Hi everyone, today we'll learn how we can interactively create a docker container using a docker image. Once we start a process in Docker from an image, Docker fetches the image and its parent image, and repeats the process until it reaches the base image. Then the Union File System adds a read-write layer on top. That read-write layer, plus the information about its parent image and some other information such as its unique id, networking configuration, and resource limits, is called a **container**. Containers have state, as they can change from a **running** to an **exited** state. A container in the **running** state includes a tree of processes running on the CPU, isolated from the other processes running on the host, whereas **exited** is the state of the file system with its exit value preserved. You can start, stop, and restart a container.
Docker technology has brought a remarkable change to the field of IT, enabling cloud services for sharing applications and automating workflows, letting apps be quickly assembled from components, and eliminating the friction between development, QA, and production environments. In this article, we'll build a Fedora instance in which we'll host a website running under the Apache web server.
Here is a quick and easy tutorial on how we can create a container interactively using an interactive shell.
### 1. Running a Docker Instance ###
Docker initially tries to fetch and run the required image locally, and if it's not found on the local host, it pulls it from the [Docker Public Registry Hub][1]. Here, we'll fetch a Fedora image, create an instance in a Docker container and attach a bash shell to the tty.
# docker run -i -t fedora bash
![Downloading Fedora Base Image](http://blog.linoxide.com/wp-content/uploads/2015/03/downloading-fedora-base-image.png)
### 2. Installing Apache Web Server ###
Now that our Fedora base image instance is ready, we're going to install the Apache web server interactively, without creating a Dockerfile for it. To do so, we'll need to run the following commands in a terminal or shell.
# yum update
![Updating Fedora Base Image](http://blog.linoxide.com/wp-content/uploads/2015/03/updating-fedora-base-image.png)
# yum install httpd
![Installing httpd](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-httpd2.png)
# exit
### 3. Saving the Image ###
Now we're going to save the changes we made to the Fedora instance. To do that, we first need to know the container ID of the instance. To get it, we'll run the following command.
# docker ps -a
![Docker Running Container](http://blog.linoxide.com/wp-content/uploads/2015/03/docker-running-container.png)
Then, we'll save the changes as a new image by running the below command.
# docker commit c16378f943fe fedora-httpd
![committing fedora httpd](http://blog.linoxide.com/wp-content/uploads/2015/03/committing-fedora-httpd.png)
Here, the changes are saved using the container ID and the image name fedora-httpd. To verify that the new image has been created, we'll run the following command.
# docker images
![view docker images](http://blog.linoxide.com/wp-content/uploads/2015/03/view-docker-images.png)
### 4. Adding the Contents to the new image ###
Now that our new Fedora Apache image has been built successfully, we'll want to add the web content, including our website, to the Apache web server so that the website works out of the box. To do so, we'll need to create a new Dockerfile that handles everything from copying the web content to exposing port 80. We'll create the Dockerfile using our favorite text editor as shown below.
# nano Dockerfile
Now, we'll need to add the following lines into that file.
FROM fedora-httpd
ADD mysite.tar /tmp/
RUN mv /tmp/mysite/* /var/www/html
EXPOSE 80
ENTRYPOINT [ "/usr/sbin/httpd" ]
CMD [ "-D", "FOREGROUND" ]
![configuring Dockerfile](http://blog.linoxide.com/wp-content/uploads/2015/03/configuring-Dockerfile.png)
In the Dockerfile above, the web content we have in mysite.tar will be automatically extracted to the /tmp/ folder. Then, the entire site is moved to the Apache web root, i.e. /var/www/html/, and EXPOSE 80 opens port 80 so that the website will be reachable normally. Finally, the entrypoint is set to /usr/sbin/httpd so that the Apache server executes.
### 5. Building and running a Container ###
Now, we'll build our container image using the Dockerfile we just created in order to add our website to it. To do so, we'll need to run the following command.
    # docker build --rm -t mysite .
![Building mysite Image](http://blog.linoxide.com/wp-content/uploads/2015/03/building-mysite-image.png)
After building our new container, we'll want to run the container using the command below.
# docker run -d -P mysite
![Running mysite Container](http://blog.linoxide.com/wp-content/uploads/2015/03/running-mysite-container.png)
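Because -P maps the exposed port 80 to a random high port on the host, a quick way to find out where the site is actually reachable is to inspect the container (a sketch, using the container ID shown by docker ps):
    # docker ps
    # docker port <container-id> 80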
### Conclusion ###
Finally, we've successfully built a Docker container interactively. With this method, we build our containers and images directly via interactive shell commands. It is quite an easy and quick way to build and deploy images and containers. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/interactively-create-docker-container/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://registry.hub.docker.com/

View File

@ -1,3 +1,5 @@
Vic020
A Peep into Process Management Commands in Linux
================================================================================
A program in execution is called a process. While a program is an executable file present in storage and is passive, a process is a dynamic entity comprising of allocated system resources, memory, security attributes and has a state associated with it. There can be multiple processes associated with the same program and operating simultaneously without interfering with each other. The operating system efficiently manages and keeps track of all the processes running in the system.
@ -188,4 +190,4 @@ via: http://linoxide.com/linux-command/process-management-commands-linux/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/bnpoornima/
[a]:http://linoxide.com/author/bnpoornima/

View File

@ -0,0 +1,57 @@
2 Ways to Create Your Own Docker Base Image
================================================================================
Greetings everyone, today we'll learn about Docker base images and how we can build our own. [Docker][1] is an open source project that provides an open platform to pack, ship and run any application as a lightweight container. It has no boundaries of language support, frameworks or packaging systems and can be run anywhere, anytime, from small home computers to high-end servers. This makes containers great building blocks for deploying and scaling web apps, databases, and back-end services without depending on a particular stack or provider.
A Docker image is a read-only layer that never changes. Docker uses a **Union File System** to add a read-write file system over the read-only file system. All changes go to the top-most writeable layer, and underneath, the original files in the read-only image remain unchanged. Since images don't change, they do not have state. Base images are images that have no parent. The major benefit of building one is that it allows us to have a separate Linux OS running.
Here are the ways we can create a custom base image.
### 1. Creating Docker Base Image using Tar ###
We can create our own base image using tar. We start with a working Linux distribution that we want to package as the base image. The process may differ depending on which distribution we are trying to build. On Debian-based distributions of Linux, debootstrap may already be installed; if not, we'll need to install it before starting the process below. Debootstrap is used to fetch the required packages to build the base system. Here, we'll create an image based on Ubuntu 14.04 "Trusty". To do so, we'll need to run the following commands in a terminal or shell.
$ sudo debootstrap trusty trusty > /dev/null
$ sudo tar -C trusty -c . | sudo docker import - trusty
![creating docker base image using debootstrap](http://blog.linoxide.com/wp-content/uploads/2015/03/creating-base-image-debootstrap.png)
Here, the first command bootstraps a Trusty system into the trusty directory, and the second creates a tar archive of that directory and writes it to STDOUT, where "docker import - trusty" reads it from STDIN and creates a base image called trusty from it. Then, we'll run a test command inside that image as follows.
$ docker run trusty cat /etc/lsb-release
There are some example scripts in the [Docker GitHub repo][2] that will allow us to build quick base images.
### 2. Creating Base Image using Scratch ###
In the Docker registry, there is a special repository known as Scratch, which was created using an empty tar file:
$ tar cv --files-from /dev/null | docker import - scratch
![creating docker base image using scratch](http://blog.linoxide.com/wp-content/uploads/2015/03/creating-base-image-using-scratch.png)
We can use that image as the base (FROM) for our new minimal containers:
FROM scratch
ADD script.sh /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]
The above Dockerfile builds an extremely minimal image. It starts with a totally blank filesystem, then copies the script.sh we created to /usr/local/bin/run.sh, and finally runs /usr/local/bin/run.sh. Note that an image built from scratch contains no shell or libraries, so run.sh would have to be a statically linked executable (or the required interpreter and libraries would have to be added to the image) for the container to actually start.
### Conclusion ###
In this tutorial, we learned how to build a custom Docker base image out of the box. Building a Docker base image is an easy task because sets of packages and scripts are already available for it, and it is very useful if we want to install only what we need in the image. So, if you have any questions, suggestions or feedback, please write them in the comment box below. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/2-ways-create-docker-base-image/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://www.docker.com/
[2]:https://github.com/docker/docker/blob/master/contrib/mkimage-busybox.sh

View File

@ -0,0 +1,101 @@
How to Serve Git Repositories Using Gitblit Tool in Linux
================================================================================
Hi friends, today we'll be learning how to install Gitblit on your Linux server or PC. So, let's check out what Git is, its features, and the steps to install Gitblit. [Git is a distributed revision control system][1] with an emphasis on speed, data integrity, and support for distributed, non-linear workflows. It was initially designed and developed by Linus Torvalds for Linux kernel development in 2005, under the terms of the GNU General Public License version 2, and has since become the most widely adopted version control system for software development.
[Gitblit is a free and open source][2] tool built on a pure Java stack, designed to serve Git repositories and handle everything from small to very large projects with speed and efficiency. It is easy to learn and has a tiny footprint with lightning-fast performance. It outclasses SCM tools like Subversion, CVS, Perforce, and ClearCase with features like cheap local branching, convenient staging areas, and multiple workflows.
#### Features of Gitblit ####
- It can be used as a dumb repository viewer with no administrative controls or user accounts.
- It can be used as a complete Git stack for cloning, pushing, and repository access control.
- It can be used without any other Git tooling (including actual Git) or it can cooperate with your established tools.
### 1. Creating Gitblit install directory ###
First of all, we'll create a directory on our server in which we'll install the latest Gitblit.
$ sudo mkdir -p /opt/gitblit
$ cd /opt/gitblit
![creating directory gitblit](http://blog.linoxide.com/wp-content/uploads/2015/01/creating-directory-gitblit.png)
### 2. Downloading and Extracting ###
Now, we'll download the latest Gitblit from the official site. Here, the version of Gitblit we are going to install is 1.6.2, so please change it to the version you are going to install on your system.
$ sudo wget http://dl.bintray.com/gitblit/releases/gitblit-1.6.2.tar.gz
![downloading gitblit package](http://blog.linoxide.com/wp-content/uploads/2015/01/downloading-gitblit.png)
Now, we'll extract the downloaded tarball package into our current folder, i.e. /opt/gitblit/:
$ sudo tar -zxvf gitblit-1.6.2.tar.gz
![extracting gitblit tar](http://blog.linoxide.com/wp-content/uploads/2015/01/extracting-gitblit-tar.png)
### 3. Configuring and Running ###
Now, we'll configure Gitblit. If you want to customize the behavior of the Gitblit server, you can do so by modifying `gitblit/data/gitblit.properties`. Once you are done with the configuration, we finally want to run Gitblit. We have two options for running it; the first is to run it manually with the command below:
$ sudo java -jar gitblit.jar --baseFolder data
The second is to add and use Gitblit as a service. Here are the steps we need to follow to use Gitblit as a service on Linux.
As I am running Ubuntu, the command used below is install-service-ubuntu.sh; please change the script name to match the distribution you are currently running.
$ sudo ./install-service-ubuntu.sh
$ sudo service gitblit start
![starting gitblit service](http://blog.linoxide.com/wp-content/uploads/2015/01/starting-gitblit-service.png)
Open your browser to http://localhost:8080 or https://localhost:8443, or replace "localhost" with the IP address of the machine, depending on your system configuration. Enter the default administrator credentials admin / admin and click the Login button.
![gitblit welcome](http://blog.linoxide.com/wp-content/uploads/2015/01/gitblit-welcome.png)
Now, we'll want to add a new user. First you'll need to log in as the admin with the default administrator credentials: username = **admin** and password = **admin**.
Then go to the user icon > users > (+) new user and create a new user as shown in the figure below.
![add new user](http://blog.linoxide.com/wp-content/uploads/2015/01/add-user.png)
Now, we'll create a new repository out of the box. Go to repositories > (+) new repository. Then, add the new repository as shown below.
![add new repository](http://blog.linoxide.com/wp-content/uploads/2015/01/add-new-repository.png)
#### Create a new repository on the command-line ####
touch README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin ssh://arunlinoxide@localhost:29418/linoxide.com.git
git push -u origin master
Please replace the username arunlinoxide with the user you added.
#### Push an existing repository from the command-line ####
git remote add origin ssh://arunlinoxide@localhost:29418/linoxide.com.git
git push -u origin master
**Note**: It is highly recommended that everyone change the default password of the "admin" user.
### Conclusion ###
Hurray, we have finally installed the latest Gitblit on our Linux computer. We can now enjoy such a beautiful version control system for our projects, whether they are small or large. With Gitblit, version control becomes easy: it is easy to learn and has a tiny footprint with lightning-fast performance. So, if you have any questions, suggestions or feedback, please write them in the comment box below.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/serve-git-repositories-gitblit/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://git-scm.com/
[2]:http://gitblit.com/

View File

@ -0,0 +1,180 @@
How to secure SSH login with one-time passwords on Linux
================================================================================
As someone said, security is not a product, but a process. While the SSH protocol itself is cryptographically secure by design, someone can wreak havoc on your SSH service if it is not administered properly, be it weak passwords, compromised keys or an outdated SSH client.
As far as SSH authentication is concerned, [public key authentication][1] is in general considered more secure than password authentication. However, key authentication is actually undesirable, or even less secure, if you are logging in from a public or shared computer, where things like a stealth keylogger or memory scraper are always a possibility. If you cannot trust the local computer, it is better to use something else. This is when "one-time passwords" come in handy. As the name implies, each one-time password is for single use only. Such disposable passwords can be safely used in untrusted environments as they cannot be re-used even when they are stolen.
One way to generate disposable passwords is [Google Authenticator][2]. In this tutorial, I am going to demonstrate another way to create one-time passwords for SSH login: [OTPW][3], a one-time password login package. Unlike Google Authenticator, you do not rely on any third party for one-time password generation and verification.
### What is OTPW? ###
OTPW consists of a one-time password generator and PAM-integrated verification routines. In OTPW, one-time passwords are generated a priori with the generator and carried by the user securely (e.g., printed on a paper sheet). Cryptographic hashes of the generated passwords are then stored on the SSH server host. When a user logs in with a one-time password, OTPW's PAM module verifies the password and invalidates it to prevent re-use.
### Step One: Install and Configure OTPW on Linux ###
#### Debian, Ubuntu or Linux Mint ####
Install OTPW packages with apt-get.
$ sudo apt-get install libpam-otpw otpw-bin
Open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).
#@include common-auth
and add the following two lines (to enable one-time password authentication):
auth required pam_otpw.so
session optional pam_otpw.so
![](https://farm8.staticflickr.com/7599/16775121360_d1f93feefa_b.jpg)
#### Fedora or CentOS/RHEL ####
OTPW is not available as a prebuilt package on Red Hat based systems. So let's install OTPW by building it from the source.
First, install the prerequisites:
    $ sudo yum install git gcc pam-devel
$ git clone https://www.cl.cam.ac.uk/~mgk25/git/otpw
$ cd otpw
Open Makefile with a text editor, and edit a line that starts with "PAMLIB=" as follows.
On 64-bit system:
PAMLIB=/usr/lib64/security
On 32-bit system:
PAMLIB=/usr/lib/security
Compile and install it. Note that installation will automatically restart an SSH server. So be ready to be disconnected if you are on an SSH connection.
$ make
$ sudo make install
Now you need to update the SELinux policy, since /usr/sbin/sshd tries to write to the user's home directory, which is not allowed by the default SELinux policy. The following commands will do that. If you are not using SELinux, skip this step.
$ sudo grep sshd /var/log/audit/audit.log | audit2allow -M mypol
$ sudo semodule -i mypol.pp
Next, open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).
#auth substack password-auth
and add the following two lines (to enable one-time password authentication):
auth required pam_otpw.so
session optional pam_otpw.so
#### Step Two: Configure SSH Server for One-time Passwords ####
The next step is to configure an SSH server to accept one-time passwords.
Open /etc/ssh/sshd_config with a text editor, and set the following three parameters. Make sure that you do not add these lines more than once, because that will cause an SSH server to fail.
UsePrivilegeSeparation yes
ChallengeResponseAuthentication yes
UsePAM yes
You also need to disable default password authentication. Optionally, enable public key authentication, so that you can fall back to key-based authentication in case you do not have one-time passwords.
PubkeyAuthentication yes
PasswordAuthentication no
Now restart SSH server.
Debian, Ubuntu or Linux Mint:
$ sudo service ssh restart
Fedora or CentOS/RHEL 7:
$ sudo systemctl restart sshd
#### Step Three: Generate One-time Passwords with OTPW ####
As mentioned earlier, you need to create one-time passwords beforehand, and have them stored on the remote SSH server host. For this, run otpw-gen tool as the user you will be logging in as.
$ cd ~
$ otpw-gen > temporary_password.txt
![](https://farm9.staticflickr.com/8751/16961258882_c49cfe03fb_b.jpg)
It will ask you to set a prefix password. When you later log in, you need to type this prefix password AND a one-time password. Essentially the prefix password is another layer of protection: even if the password sheet falls into the wrong hands, the prefix password forces an attacker to brute-force it.
Once the prefix password is set, the command will generate 280 one-time passwords, and store them in the output text file (e.g., temporary_password.txt). Each password (length of 8 characters by default) is preceded by a three-digit index number. You are supposed to print the file in a sheet and carry it with you.
![](https://farm8.staticflickr.com/7281/16962594055_c2696d5ae1_b.jpg)
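Once the sheet is printed, it is a good idea to remove the plaintext file from the server. A small sketch, assuming a printer is configured through CUPS:
    $ lpr temporary_password.txt       # send the sheet to the default printer
    $ shred -u temporary_password.txt  # then securely delete the plaintext copy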
You will also see the ~/.otpw file created, where cryptographic hashes of these passwords are stored. The first three digits on each line indicate the index number of the password that will be used for SSH login.
$ more ~/.otpw
----------
OTPW1
280 3 12 8
191ai+:ENwmMqwn
218tYRZc%PIY27a
241ve8ns%NsHFmf
055W4/YCauQJkr:
102ZnJ4VWLFrk5N
2273Xww55hteJ8Y
1509d4b5=A64jBT
168FWBXY%ztm9j%
000rWUSdBYr%8UE
037NvyryzcI+YRX
122rEwA3GXvOk=z
### Test One-time Passwords for SSH Login ###
Now let's login to an SSH server in a usual way:
$ ssh user@remote_host
If OTPW is successfully set up, you will see a slightly different password prompt:
Password 191:
Now open up your password sheet, and look for index number "191" in the sheet.
023 kBvp tq/G 079 jKEw /HRM 135 oW/c /UeB 191 fOO+ PeiD 247 vAnZ EgUt
According to sheet above, the one-time password for number "191" is "fOO+PeiD". You need to prepend your prefix password to it. For example, if your prefix password is "000", the actual one-time password you need to type is "000fOO+PeiD".
Once you successfully log in, the password used is automatically invalidated. If you check ~/.otpw, you will notice that the first line is replaced with "---------------", meaning that password "191" has been voided.
OTPW1
280 3 12 8
---------------
218tYRZc%PIY27a
241ve8ns%NsHFmf
055W4/YCauQJkr:
102ZnJ4VWLFrk5N
2273Xww55hteJ8Y
1509d4b5=A64jBT
168FWBXY%ztm9j%
000rWUSdBYr%8UE
037NvyryzcI+YRX
122rEwA3GXvOk=z
### Conclusion ###
In this tutorial, I demonstrated how to set up one-time password login for SSH using the OTPW package. You may have realized that a printed sheet can be considered a less fancy version of the security token used in two-factor authentication. Yet, it is simpler, and you do not rely on any third party for its implementation. Whatever mechanism you are using to create disposable passwords, it can be helpful when you need to log in to an SSH server from an untrusted public computer. Feel free to share your experience or opinion on this topic.
--------------------------------------------------------------------------------
via: http://xmodulo.com/secure-ssh-login-one-time-passwords-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-force-ssh-login-via-public-key-authentication.html
[2]:http://xmodulo.com/two-factor-authentication-ssh-login-linux.html
[3]:http://www.cl.cam.ac.uk/~mgk25/otpw.html

View File

@ -0,0 +1,147 @@
Conky The Ultimate X Based System Monitor Application
================================================================================
Conky is a system monitor application written in the C programming language and released under the GNU General Public License and BSD License. It is available for Linux and BSD operating systems. The application is X (GUI) based and was originally forked from [Torsmo][1].
#### Features ####
- Simple User Interface
- High degree of configurability
- It can show system stats using built-in objects (300+) as well as external scripts, either on the desktop or in its own container.
- Low on resource utilization
- Shows system stats for a wide range of system variables, including but not restricted to CPU, memory, swap, temperature, processes, disk, network, battery, email, system messages, music player, weather, breaking news, updates and much more.
- Available in the default installation of distributions like CrunchBang Linux and Pinguy OS.
#### Lesser Known Facts about Conky ####
- The name Conky was derived from a Canadian television show.
- It has already been ported to the Nokia N900.
- It is no longer officially maintained.
### Conky Installation and Usage in Linux ###
Before we install conky, we need to install packages like lm-sensors, curl and hddtemp using the following command.
# apt-get install lm-sensors curl hddtemp
Time to detect the sensors.
# sensors-detect
**Note**: Answer Yes when prompted!
Check all the detected sensors.
# sensors
#### Sample Output ####
acpitz-virtual-0
Adapter: Virtual device
temp1: +49.5°C (crit = +99.0°C)
coretemp-isa-0000
Adapter: ISA adapter
Physical id 0: +49.0°C (high = +100.0°C, crit = +100.0°C)
Core 0: +49.0°C (high = +100.0°C, crit = +100.0°C)
Core 1: +49.0°C (high = +100.0°C, crit = +100.0°C)
Conky can be installed from the repositories, or it can be compiled from source.
# yum install conky [On RedHat systems]
# apt-get install conky-all [On Debian systems]
**Note**: Before you install conky on Fedora/CentOS, you must have the [EPEL repository][2] enabled.
After conky has been installed, just issue the following command to start it.
$ conky &
![Conky Monitor in Action](http://www.tecmint.com/wp-content/uploads/2015/03/Start-Conkey.jpeg)
Conky Monitor in Action
It will run conky in a popup-like window. It uses the basic conky configuration file located at /etc/conky/conky.conf.
You will probably want to integrate conky with the desktop and not have a popup-like window appear every time. Here is what you need to do.
Copy the configuration file /etc/conky/conky.conf to your home directory and rename it as `.conkyrc`. The dot (.) at the beginning ensures that the configuration file is hidden.
$ cp /etc/conky/conky.conf /home/$USER/.conkyrc
Now restart conky to pick up the new changes.
$ killall -SIGUSR1 conky
![Conky Monitor Window](http://www.tecmint.com/wp-content/uploads/2015/03/Restart-Conky.jpeg)
Conky Monitor Window
You may edit the conky configuration file located in your home directory. The configuration file is very easy to understand.
Here is a sample configuration of conky.
![Conky Configuration](http://www.tecmint.com/wp-content/uploads/2015/03/Conky-Configuration.jpeg)
Conky Configuration
From the above window you can modify color, borders, size, scale, background, alignment and several other properties. By setting different alignments for different conky windows, we can run more than one conky script at a time, as sketched below.
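For instance, here is a minimal sketch of a second, stripped-down configuration that could run alongside your main `~/.conkyrc` with a different alignment. The file name `~/.conkyrc_clock` and the variables shown are just illustrative examples, written in the pre-1.10 syntax that conky 1.9 understands:

    # ~/.conkyrc_clock -- a second, minimal conky configuration (pre-1.10 syntax)
    # the main ~/.conkyrc can keep its own alignment, e.g. top_right
    alignment top_left
    own_window yes
    own_window_type desktop
    background no
    update_interval 1.0
    use_xft yes

    TEXT
    ${time %H:%M:%S}
    Uptime: $uptime
    CPU: ${cpu}%
    RAM: ${memperc}%

You would then start it as an additional instance with `conky -c ~/.conkyrc_clock &`, while the default instance keeps reading `~/.conkyrc`.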
**Using a script other than the default for conky, and where to find one?**
You may write your own conky script or use one that is available over the Internet. We don't suggest using just any script you find on the web, which can be potentially dangerous, unless you know what you are doing. However, a few well-known threads and pages have conky scripts that you can trust, as mentioned below.
- [http://ubuntuforums.org/showthread.php?t=281865][3]
- [http://conky.sourceforge.net/screenshots.html][4]
At the above URLs, you will find that every screenshot has a hyperlink, which redirects to the script file.
#### Testing Conky Script ####
Here I will be running a third-party conky script on my Debian Jessie machine, to test it.
$ wget https://github.com/alexbel/conky/archive/master.zip
$ unzip master.zip
Change current working directory to just extracted directory.
$ cd conky-master
Rename the secrets.yml.example to secrets.yml.
$ mv secrets.yml.example secrets.yml
Install Ruby before you run this (Ruby) script.
$ sudo apt-get install ruby
$ ruby starter.rb
![Conky Fancy Look](http://www.tecmint.com/wp-content/uploads/2015/03/Conky-Fancy-Look.jpeg)
Conky Fancy Look
**Note**: This script can be modified to show your current weather, temperature, etc.
If you want to start conky at boot, add the one-liner below to Startup Applications.
conky --pause 10
Save and exit.
And finally… such a lightweight and useful eye-candy GUI package is no longer under active development and is not officially maintained anymore. The last stable release was conky 1.9.0, released on May 03, 2012. A thread on the Ubuntu forum has grown to over 2k pages of users sharing configurations. (link to forum: [http://ubuntuforums.org/showthread.php?t=281865/][5])
- [Conky Homepage][6]
That's all for now. Keep connected. Keep commenting. Share your thoughts and configurations in the comments below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-conky-in-ubuntu-debian-fedora/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://torsmo.sourceforge.net/
[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[3]:http://ubuntuforums.org/showthread.php?t=281865
[4]:http://conky.sourceforge.net/screenshots.html
[5]:http://ubuntuforums.org/showthread.php?t=281865/
[6]:http://conky.sourceforge.net/

View File

@ -0,0 +1,104 @@
How to Generate/Encrypt/Decrypt Random Passwords in Linux
================================================================================
We have taken the initiative to produce a Linux tips and tricks series. If you've missed the last article of this series, you may like to visit the link below.
(Note: the article linked below has been used as a source article before.)
- [5 Interesting Command Line Tips and Tricks in Linux][1]
In this article, we will share some interesting Linux tips and tricks to generate random passwords and also how to encrypt and decrypt passwords, with or without the salt method.
Security is one of the major concerns of the digital age. We put passwords on computers, email, cloud, phone, documents and what not. We all know the basics of choosing a password that is easy to remember and hard to guess. What about some sort of automatic, machine-based password generation? Believe me, Linux is very good at this.
**1. Generate a random, unique password of length equal to 10 characters using the command pwgen. If you have not installed pwgen yet, use Apt or YUM to get it.**
$ pwgen 10 1
![Generate Random Unique Password](http://www.tecmint.com/wp-content/uploads/2015/03/Generate-Random-Unique-Password-in-Linux.gif)
Generate Random Unique Password
Generate several random unique passwords of character length 50 in one go!
$ pwgen 50
![Generate Multiple Random Passwords](http://www.tecmint.com/wp-content/uploads/2015/03/Generate-Multiple-Random-Passwords.gif)
Generate Multiple Random Passwords
**2. You may use makepasswd to generate a random, unique password of a given length of your choice. Before you fire the makepasswd command, make sure you have installed it. If not, try installing the makepasswd package using Apt or YUM.**
Generate a random password of character length 10. The default value is 10.
$ makepasswd
![makepasswd Generate Unique Password](http://www.tecmint.com/wp-content/uploads/2015/03/mkpasswd-generate-unique-password.gif)
makepasswd Generate Unique Password
Generate a random password of character length 50.
$ makepasswd --char 50
![Generate Length 50 Password](http://www.tecmint.com/wp-content/uploads/2015/03/Random-Password-Generate.gif)
Generate Length 50 Password
Generate 7 random passwords of 20 characters each.
$ makepasswd --char 20 --count 7
![](http://www.tecmint.com/wp-content/uploads/2015/03/Generate-20-Character-Password.gif)
**3. Encrypt a password using crypt along with a salt. Provide the salt manually as well as automatically.**
For those who may not be aware of salt,
Salt is random data which serves as an additional input to a one-way function in order to protect passwords against dictionary attacks.
Make sure you have mkpasswd installed before proceeding.
The command below will encrypt the password with a salt. The salt value is chosen randomly and automatically. Hence, every time you run this command it will generate a different output, because it accepts a random value for the salt each time.
$ mkpasswd tecmint
![Encrypt Password Using Crypt](http://www.tecmint.com/wp-content/uploads/2015/03/Encrypt-Password-in-Linux.gif)
Encrypt Password Using Crypt
Now let's define the salt. It will output the same result every time. Note that you can input anything of your choice as the salt.
$ mkpasswd tecmint -s tt
![Encrypt Password Using Salt](http://www.tecmint.com/wp-content/uploads/2015/03/Encrypt-Password-Using-Salt.gif)
Encrypt Password Using Salt
Moreover, mkpasswd is interactive, and if you don't provide the password along with the command, it will ask for the password interactively.
**4. Encrypt a string, say “Tecmint-is-a-Linux-Community”, using aes-256-cbc encryption with the password “tecmint” and a salt.**
# echo Tecmint-is-a-Linux-Community | openssl enc -aes-256-cbc -a -salt -pass pass:tecmint
![Encrypt A String in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/Encrypt-A-String-in-Linux.gif)
Encrypt A String in Linux
Here in the above example, the output of the [echo command][2] (note: the linked article has also been used as a source article before) is piped to the openssl command, which passes the input to be encrypted using Encoding with Cipher (enc) with the aes-256-cbc encryption algorithm, and finally, with a salt, it is encrypted using the password (tecmint).
**5. Decrypt the above string using the openssl command with aes-256-cbc decryption.**
# echo U2FsdGVkX18Zgoc+dfAdpIK58JbcEYFdJBPMINU91DKPeVVrU2k9oXWsgpvpdO/Z | openssl enc -aes-256-cbc -a -d -salt -pass pass:tecmint
![Decrypt String in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/Decrypt-String-in-Linux.gif)
Decrypt String in Linux
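The same openssl options also work on whole files rather than strings. Below is a minimal sketch (the file names plain.txt and plain.txt.enc are just examples) of how you might encrypt and then decrypt a file with the same aes-256-cbc cipher, salt and password:

    # encrypt plain.txt into a base64-encoded, salted file
    $ openssl enc -aes-256-cbc -a -salt -pass pass:tecmint -in plain.txt -out plain.txt.enc

    # decrypt it back into a new file
    $ openssl enc -aes-256-cbc -a -d -salt -pass pass:tecmint -in plain.txt.enc -out plain.decrypted.txt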
That's all for now. If you know any such tips and tricks, you may send them to us at admin@tecmint.com; your tip will be published under your name and we will also include it in a future article.
Keep connected. Keep connecting. Stay tuned. Don't forget to provide us with your valuable feedback in the comments below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/generate-encrypt-decrypt-random-passwords-in-linux/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/5-linux-command-line-tricks/
[2]:http://www.tecmint.com/echo-command-in-linux/

View File

@ -0,0 +1,348 @@
How to Install WordPress with Nginx in a Docker Container
================================================================================
Hi all, today we'll learn how to install WordPress running on the Nginx web server in a Docker container. WordPress is an awesome free and open source content management system running thousands of websites throughout the globe. [Docker][1] is an open source project that provides an open platform to pack, ship and run any application as a lightweight container. It has no boundaries of language support, frameworks or packaging systems, and can be run anywhere, anytime, from small home computers to high-end servers. This makes containers great building blocks for deploying and scaling web apps, databases, and back-end services without depending on a particular stack or provider.
Today, we'll deploy a docker container with the latest WordPress package and the necessary prerequisites, i.e. the Nginx web server, PHP5, MariaDB server, etc. Here are some short and sweet steps to successfully install WordPress running on Nginx in a Docker container.
### 1. Installing Docker ###
Before we really start, we'll need to make sure that we have Docker installed on our Linux machine. Here, we are running CentOS 7 as the host, so we'll be using the yum package manager to install docker with the command below.
# yum install docker
![Installing Docker](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-docker.png)
# systemctl restart docker.service
### 2. Creating WordPress Dockerfile ###
We'll need to create a Dockerfile which will automate the installation of WordPress and its necessary prerequisites. This Dockerfile will be used to build the image of the WordPress installation. The Dockerfile fetches a CentOS 7 image from the Docker Registry Hub and updates the system with the latest available packages. It then installs the necessary software, like the Nginx web server, PHP, MariaDB, OpenSSH server and more, which is essential for the Docker container to work. Finally it executes a script which will initialize the installation of WordPress out of the box.
# nano Dockerfile
Then, we'll need to add the following lines of configuration inside that Dockerfile.
FROM centos:centos7
MAINTAINER The CentOS Project <cloud-ops@centos.org>
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install mariadb mariadb-server mariadb-client nginx php-fpm php-cli php-mysql php-gd php-imap php-ldap php-odbc php-pear php-xml php-xmlrpc php-magickwand php-magpierss php-mbstring php-mcrypt php-mssql php-shout php-snmp php-soap php-tidy php-apc pwgen python-setuptools curl git tar; yum clean all
ADD ./start.sh /start.sh
ADD ./nginx-site.conf /nginx.conf
RUN mv /nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
RUN /usr/bin/easy_install supervisor
RUN /usr/bin/easy_install supervisor-stdout
ADD ./supervisord.conf /etc/supervisord.conf
RUN echo %sudo ALL=NOPASSWD: ALL >> /etc/sudoers
ADD http://wordpress.org/latest.tar.gz /wordpress.tar.gz
RUN tar xvzf /wordpress.tar.gz
RUN mv /wordpress/* /usr/share/nginx/html/.
RUN chown -R apache:apache /usr/share/nginx/
RUN chmod 755 /start.sh
RUN mkdir /var/run/sshd
EXPOSE 80
EXPOSE 22
CMD ["/bin/bash", "/start.sh"]
![Wordpress Dockerfile](http://blog.linoxide.com/wp-content/uploads/2015/03/Dockerfile-wordpress.png)
### 3. Creating Start script ###
After we create our Dockerfile, we'll need to create a script named start.sh which will run and configure our WordPress installation. It will create and configure the database and passwords for WordPress. To create it, we'll need to open start.sh with our favorite text editor.
# nano start.sh
After opening start.sh, we'll need to add the following lines of configuration into it.
#!/bin/bash
__check() {
if [ -f /usr/share/nginx/html/wp-config.php ]; then
exit
fi
}
__create_user() {
# Create a user to SSH into as.
SSH_USERPASS=`pwgen -c -n -1 8`
useradd -G wheel user
echo user:$SSH_USERPASS | chpasswd
echo ssh user password: $SSH_USERPASS
}
__mysql_config() {
# Hack to get MySQL up and running... I need to look into it more.
yum -y erase mariadb mariadb-server
rm -rf /var/lib/mysql/ /etc/my.cnf
yum -y install mariadb mariadb-server
mysql_install_db
chown -R mysql:mysql /var/lib/mysql
/usr/bin/mysqld_safe &
sleep 10
}
__handle_passwords() {
# Here we generate random passwords (thank you pwgen!). The first two are for mysql users, the last batch for random keys in wp-config.php
WORDPRESS_DB="wordpress"
MYSQL_PASSWORD=`pwgen -c -n -1 12`
WORDPRESS_PASSWORD=`pwgen -c -n -1 12`
# This is so the passwords show up in logs.
echo mysql root password: $MYSQL_PASSWORD
echo wordpress password: $WORDPRESS_PASSWORD
echo $MYSQL_PASSWORD > /mysql-root-pw.txt
echo $WORDPRESS_PASSWORD > /wordpress-db-pw.txt
# There used to be a huge ugly line of sed and cat and pipe and stuff below,
# but thanks to @djfiander's thing at https://gist.github.com/djfiander/6141138
# there isn't now.
sed -e "s/database_name_here/$WORDPRESS_DB/
s/username_here/$WORDPRESS_DB/
s/password_here/$WORDPRESS_PASSWORD/
/'AUTH_KEY'/s/put your unique phrase here/`pwgen -c -n -1 65`/
/'SECURE_AUTH_KEY'/s/put your unique phrase here/`pwgen -c -n -1 65`/
/'LOGGED_IN_KEY'/s/put your unique phrase here/`pwgen -c -n -1 65`/
/'NONCE_KEY'/s/put your unique phrase here/`pwgen -c -n -1 65`/
/'AUTH_SALT'/s/put your unique phrase here/`pwgen -c -n -1 65`/
/'SECURE_AUTH_SALT'/s/put your unique phrase here/`pwgen -c -n -1 65`/
/'LOGGED_IN_SALT'/s/put your unique phrase here/`pwgen -c -n -1 65`/
/'NONCE_SALT'/s/put your unique phrase here/`pwgen -c -n -1 65`/" /usr/share/nginx/html/wp-config-sample.php > /usr/share/nginx/html/wp-config.php
}
__httpd_perms() {
chown apache:apache /usr/share/nginx/html/wp-config.php
}
__start_mysql() {
# systemctl start mysqld.service
mysqladmin -u root password $MYSQL_PASSWORD
mysql -uroot -p$MYSQL_PASSWORD -e "CREATE DATABASE wordpress; GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost' IDENTIFIED BY '$WORDPRESS_PASSWORD'; FLUSH PRIVILEGES;"
killall mysqld
sleep 10
}
__run_supervisor() {
supervisord -n
}
# Call all functions
__check
__create_user
__mysql_config
__handle_passwords
__httpd_perms
__start_mysql
__run_supervisor
![Start Script](http://blog.linoxide.com/wp-content/uploads/2015/03/start-script.png)
After adding the above configuration, we'll need to save it and then exit.
### 4. Creating Configuration files ###
Now, we'll need to create a configuration file for the Nginx web server, named nginx-site.conf.
# nano nginx-site.conf
Then, we'll add the following configuration to the config file.
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
index index.html index.htm index.php;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
root /usr/share/nginx/html;
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
root /usr/share/nginx/html;
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
}
![Nginx configuration](http://blog.linoxide.com/wp-content/uploads/2015/03/nginx-conf.png)
Now, we'll create the supervisord.conf file and add the following lines as shown below.
# nano supervisord.conf
Then, add the following lines.
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:php-fpm]
command=/usr/sbin/php-fpm -c /etc/php/fpm
stdout_events_enabled=true
stderr_events_enabled=true
[program:php-fpm-log]
command=tail -f /var/log/php-fpm/php-fpm.log
stdout_events_enabled=true
stderr_events_enabled=true
[program:mysql]
command=/usr/bin/mysql --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
stdout_events_enabled=true
stderr_events_enabled=true
[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
![Supervisord Configuration](http://blog.linoxide.com/wp-content/uploads/2015/03/supervisord.png)
After adding, we'll save and exit the file.
### 5. Building WordPress Container ###
Now that we are done creating the configurations and scripts, we'll finally use the Dockerfile to build our desired container, with the latest WordPress CMS installed and configured accordingly. To do so, we'll run the following command in that directory.
# docker build --rm -t wordpress:centos7 .
![Building WordPress Container](http://blog.linoxide.com/wp-content/uploads/2015/03/building-wordpress-container.png)
### 6. Running WordPress Container ###
Now, to run our newly built container and publish port 80 for the Nginx web server (the image also exposes port 22; add a second -p mapping if you want SSH access from outside as well), we'll run the following command.
# CID=$(docker run -d -p 80:80 wordpress:centos7)
![Run WordPress Docker](http://blog.linoxide.com/wp-content/uploads/2015/03/run-wordpress-docker.png)
To check the process and commands executed inside the container, we'll run the following command.
# echo "$(docker logs $CID )"
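Since the start.sh script echoes the generated MySQL root, WordPress database and SSH user passwords to the container's standard output (and also writes them to /mysql-root-pw.txt and /wordpress-db-pw.txt inside the container), a quick way to fish them out of the log is a simple grep. A minimal sketch:

    # docker logs $CID | grep -i password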
To check whether the port mapping is correct, run the following command.
# docker ps
![docker state](http://blog.linoxide.com/wp-content/uploads/2015/03/docker-state.png)
### 7. Web Interface ###
Finally, if everything went accordingly, we'll be welcomed by WordPress when pointing the browser to http://ip-address/ or http://mywebsite.com/ .
![Wordpress Start](http://blog.linoxide.com/wp-content/uploads/2015/03/wordpress-start.png)
Now, we'll go step by step through the web interface and set up the WordPress configuration, username and password for the WordPress panel.
![Wordpress Welcome](http://blog.linoxide.com/wp-content/uploads/2015/03/wordpress-welcome.png)
Then, use the username and password entered above on the WordPress login page.
![wordpress login](http://blog.linoxide.com/wp-content/uploads/2015/03/wordpress-login.png)
### Conclusion ###
We successfully built and ran the WordPress CMS under a LEMP stack, with CentOS 7 as the operating system inside the docker container. Running WordPress inside a container makes the host system a lot safer and more secure from a security perspective. This article enables one to completely configure WordPress to run under a Docker container with the Nginx web server. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-wordpress-nginx-docker-container/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://docker.io/

View File

@ -0,0 +1,137 @@
How to set up remote desktop on Linux VPS using x2go
================================================================================
As everything moves to the cloud, virtualized remote desktops are becoming increasingly popular in the industry as a way to enhance employees' productivity. Especially for those who need to roam constantly across multiple locations and devices, a remote desktop allows them to stay seamlessly connected to their work environment. Remote desktops are attractive for employers as well, achieving increased agility and flexibility in work environments, lower IT costs due to hardware consolidation, desktop security hardening, and so on.
In the world of Linux, of course, there is no shortage of choices for setting up a remote desktop environment, with many protocols (e.g., RDP, RFB, NX) and server/client implementations (e.g., [TigerVNC][1], RealVNC, FreeNX, x2go, X11vnc, TeamViewer) available.
Standing out from the pack is [X2Go][2], an open-source (GPLv2) implementation of an NX-based remote desktop server and client. In this tutorial, I am going to demonstrate **how to set up a remote desktop environment for a [Linux VPS][3] using X2Go**.
### What is X2Go? ###
The history of X2Go goes back to NoMachine's NX technology. The NX remote desktop protocol was designed to deal with low-bandwidth and high-latency network connections by leveraging aggressive compression and caching. Later, NX was turned closed-source while the NX libraries were GPL-ed. This led to open-source implementations of several NX-based remote desktop solutions, and one of them is X2Go.
What benefits does X2Go bring to the table, compared to other solutions such as VNC? X2Go inherits all the advanced features of NX technology, so naturally it works well over slow network connections. Besides, X2Go boasts an excellent track record of ensuring security with its built-in SSH-based encryption; there is no longer any need to set up an SSH tunnel [manually][4]. X2Go comes with audio support out of the box, which means that music playback at the remote desktop is delivered (via PulseAudio) over the network and fed into the local speakers. On the usability front, an application that you run on the remote desktop can be seamlessly rendered as a separate window on your local desktop, giving you the illusion that the application is actually running on the local desktop. As you can see, these are some of [its powerful features][5] lacking in VNC-based solutions.
### X2GO's Desktop Environment Compatibility ###
As with other remote desktop servers, there are [known compatibility issues][6] for the X2Go server. Desktop environments like KDE3/4, Xfce, MATE and LXDE are the most friendly to the X2Go server. However, your mileage may vary with other desktop managers. For example, later versions of GNOME 3, KDE5 and Unity are known not to be compatible with X2Go. If the desktop manager of your remote host is compatible with X2Go, you can follow the rest of the tutorial.
### Install X2Go Server on Linux ###
X2Go consists of remote desktop server and client components. Let's start with X2Go server installation. I assume that you already have an X2Go-compatible desktop manager up and running on a remote host, where we will be installing X2Go server.
Note that the X2Go server component does not have a separate service that needs to be started at boot. You just need to make sure that the SSH service is up and running.
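If you are not sure whether SSH is running on the remote host, a quick check along the following lines should confirm (and start) it before you continue. This is just a sketch; the service name differs slightly between distributions.

Fedora or CentOS/RHEL 7:

    $ sudo systemctl status sshd
    $ sudo systemctl start sshd

Debian, Ubuntu or Linux Mint:

    $ sudo service ssh status
    $ sudo service ssh start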
#### Ubuntu or Linux Mint: ####
Configure X2Go PPA repository. X2Go PPA is available for Ubuntu 14.04 and higher.
$ sudo add-apt-repository ppa:x2go/stable
$ sudo apt-get update
$ sudo apt-get install x2goserver x2goserver-xsession
#### Debian (Wheezy): ####
$ sudo apt-key adv --recv-keys --keyserver keys.gnupg.net E1F958385BFE2B6E
$ sudo sh -c "echo deb http://packages.x2go.org/debian wheezy main > /etc/apt/sources.list.d/x2go.list"
$ sudo sh -c "echo deb-src http://packages.x2go.org/debian wheezy main >> /etc/apt/sources.list.d/x2go.list"
$ sudo apt-get update
$ sudo apt-get install x2goserver x2goserver-xsession
#### Fedora: ####
$ sudo yum install x2goserver x2goserver-xsession
#### CentOS/RHEL: ####
Enable the [EPEL repository][7] first, and then run:
$ sudo yum install x2goserver x2goserver-xsession
### Install X2Go Client on Linux ###
On the local host from which you will be connecting to the remote desktop, install the X2Go client as follows.
#### Ubuntu or Linux Mint: ####
Configure X2Go PPA repository. X2Go PPA is available for Ubuntu 14.04 and higher.
$ sudo add-apt-repository ppa:x2go/stable
$ sudo apt-get update
$ sudo apt-get install x2goclient
#### Debian (Wheezy): ####
$ sudo apt-key adv --recv-keys --keyserver keys.gnupg.net E1F958385BFE2B6E
$ sudo sh -c "echo deb http://packages.x2go.org/debian wheezy main > /etc/apt/sources.list.d/x2go.list"
$ sudo sh -c "echo deb-src http://packages.x2go.org/debian wheezy main >> /etc/apt/sources.list.d/x2go.list"
$ sudo apt-get update
$ sudo apt-get install x2goclient
#### Fedora: ####
$ sudo yum install x2goclient
#### CentOS/RHEL: ####
Enable the EPEL repository first, and then run:
$ sudo yum install x2goclient
### Connect to Remote Desktop with X2Go Client ###
Now it's time to connect to your remote desktop. On the local host, simply run the following command or use the desktop launcher to start the X2Go client.
$ x2goclient
Enter the remote host's IP address and SSH user name. Also, specify the session type (i.e., the desktop manager of the remote host).
![](https://farm9.staticflickr.com/8730/16365755693_75f3d544e9_b.jpg)
If you want, you can customize other things (by pressing other tabs), like connection speed, compression, screen resolution, and so on.
![](https://farm9.staticflickr.com/8699/16984498482_665b975eca_b.jpg)
![](https://farm9.staticflickr.com/8694/16985838755_1b7df1eb78_b.jpg)
When you initiate a remote desktop connection, you will be asked to log in. Type your SSH login and password.
![](https://farm9.staticflickr.com/8754/16984498432_1c8068b817_b.jpg)
Upon successful login, you will see the remote desktop screen.
![](https://farm9.staticflickr.com/8752/16798126858_1ab083ba80_c.jpg)
If you want to test X2Go's seamless window feature, choose "Single application" as the session type, and specify the path to an executable on the remote host. In this example, I chose the Dolphin file manager on a remote KDE host.
![](https://farm8.staticflickr.com/7584/16798393920_128c3af9c5_b.jpg)
Once you are successfully connected, you will see a remote application window open on your local desktop, not the entire remote desktop screen.
![](https://farm9.staticflickr.com/8742/16365755713_7b90cf65f0_c.jpg)
### Conclusion ###
In this tutorial, I demonstrated how to set up an X2Go remote desktop on a [Linux VPS][8] instance. As you can see, the whole setup process is pretty much painless (if you are using the right desktop environment). While there is some desktop-specific quirkiness, X2Go is a solid remote desktop solution which is secure, feature-rich, fast, and free.
Which X2Go feature is the most appealing to you? Please share your thoughts.
--------------------------------------------------------------------------------
via: http://xmodulo.com/x2go-remote-desktop-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://ask.xmodulo.com/centos-remote-desktop-vps.html
[2]:http://wiki.x2go.org/
[3]:http://xmodulo.com/go/digitalocean
[4]:http://xmodulo.com/how-to-set-up-vnc-over-ssh.html
[5]:http://wiki.x2go.org/doku.php/doc:newtox2go
[6]:http://wiki.x2go.org/doku.php/doc:de-compat
[7]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
[8]:http://xmodulo.com/go/digitalocean

View File

@ -0,0 +1,324 @@
translating wi-cuckoo LLAP
It's Now Worth Trying to Install PHP 7.0 on CentOS 7.x / Fedora 21
================================================================================
PHP is a well-known general purpose, server-side web scripting language. A vast majority of online websites are coded in this language. PHP is an ever-evolving, feature-rich, easy to use and well organized scripting language. Currently, the PHP development team is working on the next major release of PHP, named PHP 7. The current production PHP version is PHP 5.6. As you might already know, PHP 6 was aborted in the past, and the supporters of PHP 7 did not want the next important PHP version to be confused with that long-dead branch. So it has been decided to name the next major release PHP 7 instead of 6. PHP 7.0 is supposed to be released in November this year.
Here are some of the prominent features in the next major PHP release.
- In order to improve performance and reduce memory footprint, the PHPNG feature has been added to this new release.
- A JIT engine has been included to dynamically compile Zend opcodes into native machine code in order to achieve faster processing. This feature will allow subsequent calls to the same code to run much faster.
- AST (Abstract Syntax Tree) is a newly added feature which will enhance support for PHP extensions and userland applications.
- The asynchronous programming feature will add support for parallel tasks within the same request.
- The new version will support a standalone multi-threaded web server so that it may handle many simultaneous requests using a single memory pool.
### Installing PHP 7 on Centos / Fedora ###
Let's see how we can install PHP 7 on CentOS 7 and Fedora 21. In order to install PHP 7, we will first need to clone the php-src repository. Once the cloning process is complete, we will configure and compile it. Before we proceed, let's ensure that we have the following installed on our Linux system, otherwise the PHP compile process will return errors and abort.
- Git
- autoconf
- gcc
- bison
All of the above mentioned prerequisites can be installed using the Yum package manager. The following single command should take care of this:
yum install git autoconf gcc bison
Ready to start the PHP 7 installation process? Let's first create a PHP 7 directory and make it your working directory.
mkdir php7
cd php7
Now clone the php-src repo by running the following command in the terminal.
git clone https://git.php.net/repository/php-src.git
The process should complete in a few minutes; here is sample output which you should see at the completion of this task.
[root@localhost php7]# git clone https://git.php.net/repository/php-src.git
Cloning into 'php-src'...
remote: Counting objects: 615064, done.
remote: Compressing objects: 100% (127800/127800), done.
remote: Total 615064 (delta 492063), reused 608718 (delta 485944)
Receiving objects: 100% (615064/615064), 152.32 MiB | 16.97 MiB/s, done.
Resolving deltas: 100% (492063/492063), done.
Let's configure and compile PHP 7. Run the following commands in the terminal to start the configuration process:
cd php-src
./buildconf
Here is sample output for the ./buildconf command.
[root@localhost php-src]# ./buildconf
buildconf: checking installation...
buildconf: autoconf version 2.69 (ok)
rebuilding aclocal.m4
rebuilding configure
rebuilding main/php_config.h.in
Proceed further with the configuration process using the following command:
./configure \
--prefix=$HOME/php7/usr \
--with-config-file-path=$HOME/php7/usr/etc \
--enable-mbstring \
--enable-zip \
--enable-bcmath \
--enable-pcntl \
--enable-ftp \
--enable-exif \
--enable-calendar \
--enable-sysvmsg \
--enable-sysvsem \
--enable-sysvshm \
--enable-wddx \
--with-curl \
--with-mcrypt \
--with-iconv \
--with-gmp \
--with-pspell \
--with-gd \
--with-jpeg-dir=/usr \
--with-png-dir=/usr \
--with-zlib-dir=/usr \
--with-xpm-dir=/usr \
--with-freetype-dir=/usr \
--with-t1lib=/usr \
--enable-gd-native-ttf \
--enable-gd-jis-conv \
--with-openssl \
--with-mysql=/usr \
--with-pdo-mysql=/usr \
--with-gettext=/usr \
--with-zlib=/usr \
--with-bz2=/usr \
--with-recode=/usr \
--with-mysqli=/usr/bin/mysql_config
It will take a fair amount of time; once completed, you should see output like this:
creating libtool
appending configuration tag "CXX" to libtool
Generating files
configure: creating ./config.status
creating main/internal_functions.c
creating main/internal_functions_cli.c
+--------------------------------------------------------------------+
| License: |
| This software is subject to the PHP License, available in this |
| distribution in the file LICENSE. By continuing this installation |
| process, you are bound by the terms of this license agreement. |
| If you do not agree with the terms of this license, you must abort |
| the installation process at this point. |
+--------------------------------------------------------------------+
Thank you for using PHP.
config.status: creating php7.spec
config.status: creating main/build-defs.h
config.status: creating scripts/phpize
config.status: creating scripts/man1/phpize.1
config.status: creating scripts/php-config
config.status: creating scripts/man1/php-config.1
config.status: creating sapi/cli/php.1
config.status: creating sapi/cgi/php-cgi.1
config.status: creating ext/phar/phar.1
config.status: creating ext/phar/phar.phar.1
config.status: creating main/php_config.h
config.status: executing default commands
Run the following command to complete the compilation process.
make
Sample output for the “make” command is shown below:
Generating phar.php
Generating phar.phar
PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
clicommand.inc
directorytreeiterator.inc
directorygraphiterator.inc
pharcommand.inc
invertedregexiterator.inc
phar.inc
Build complete.
Don't forget to run 'make test'.
That's all; it's time to install PHP 7 now. Run the following to install it.
make install
Sample output for a successful install process should look like this:
[root@localhost php-src]# make install
Installing shared extensions: /root/php7/usr/lib/php/extensions/no-debug-non-zts-20141001/
Installing PHP CLI binary: /root/php7/usr/bin/
Installing PHP CLI man page: /root/php7/usr/php/man/man1/
Installing PHP CGI binary: /root/php7/usr/bin/
Installing PHP CGI man page: /root/php7/usr/php/man/man1/
Installing build environment: /root/php7/usr/lib/php/build/
Installing header files: /root/php7/usr/include/php/
Installing helper programs: /root/php7/usr/bin/
program: phpize
program: php-config
Installing man pages: /root/php7/usr/php/man/man1/
page: phpize.1
page: php-config.1
Installing PEAR environment: /root/php7/usr/lib/php/
[PEAR] Archive_Tar - installed: 1.3.13
[PEAR] Console_Getopt - installed: 1.3.1
[PEAR] Structures_Graph- installed: 1.0.4
[PEAR] XML_Util - installed: 1.2.3
[PEAR] PEAR - installed: 1.9.5
Wrote PEAR system config file at: /root/php7/usr/etc/pear.conf
You may want to add: /root/php7/usr/lib/php to your php.ini include_path
/root/php7/php-src/build/shtool install -c ext/phar/phar.phar /root/php7/usr/bin
ln -s -f /root/php7/usr/bin/phar.phar /root/php7/usr/bin/phar
Installing PDO headers: /root/php7/usr/include/php/ext/pdo/
Congratulations, PHP 7 has now been installed on your Linux system. Once the installation is complete, move to the sapi/cli directory inside the php7 installation folder.
cd sapi/cli
and verify the PHP version from there.
[root@localhost cli]# ./php -v
PHP 7.0.0-dev (cli) (built: Mar 28 2015 00:54:11)
Copyright (c) 1997-2015 The PHP Group
Zend Engine v3.0.0-dev, Copyright (c) 1998-2015 Zend Technologies
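As a further quick sanity check (a minimal sketch, run from the same sapi/cli directory), you can execute a PHP one-liner with the freshly built binary; it should print the same 7.0.0-dev version string reported by ./php -v.

    $ ./php -r 'echo "Running PHP ", PHP_VERSION, PHP_EOL;'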
### Conclusion ###
PHP 7 has also been [added to the remi repositories][1]. This upcoming version is mainly focused on performance improvements, and its new features are aimed at making PHP a good fit for modern programming needs and trends. PHP 7.0 will have many new features and some deprecations of old items. We hope to see details about the new features and deprecations in the coming months. Enjoy!
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-php-7-centos-7-fedora-21/
作者:[Aun Raza][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:http://blog.famillecollet.com/post/2015/03/25/PHP-7.0-as-Software-Collection

View File

@ -0,0 +1,70 @@
Linux Email App Geary Updated — How To Install It In Ubuntu
================================================================================
**Geary, the popular desktop email client for Linux, has been updated to version 0.10 — and it gains a glut of new features in the process.**
![An older version of Geary running in elementary OS](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/geary.jpg)
An older version of Geary running in elementary OS
Geary 0.10 features some welcome user interface improvements and additional UI options, including:
- New: Ability to Undo Archive, Trash and Move actions
- New: Option to switch between a 2 column or 3 column layout
- New “split header bar” — improves message list, composer layouts
- New shortcut keys — use j/k to navigate next/previous conversations
This update also introduces a **brand new full-text search algorithm** designed to improve the search experience in Geary, according to Yorba.
This introduction should calm some complaints about the app's search prowess, which often sees Geary return a slew of search results that are, to quote the software outfit themselves, “…seemingly unrelated to the search query.”
> Yorba recommends that all users of the client upgrade to this release
*“Although not all search problems are fixed in 0.10, Geary should be more conservative about displaying results that match the user's query,” [the team notes][1].*
Last but by no means least on the main feature front is something sure to find favour with power users: **support for multiple/alternate e-mail addresses per account**.
If your main Gmail account is set up in Geary to pull in your Yahoo, Outlook and KittyMail messages too, then you should now see them all kept neatly together and be given the option of picking which identity you send from when using the composer 'From' field. No, it's not the sexiest feature, but it is one that has been requested often.
Rounding out this release of the popular Linux email client is the usual gamut of bug fixes, performance optimisations and miscellaneous improvements.
Yorba recommends that all users of the client upgrade to this release.
### Install Geary 0.10 in Ubuntu 14.04, 14.10 & 15.04 ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/geary-inline-composor.jpg)
The latest version of Geary is available to download as source from GNOME Git, ready for compiling. But let's be honest: that's a bit of a hassle, right?
Ubuntu users wondering how to install Geary 0.10 in **14.04, 14.10** and (for early birds) **15.04** have things easy.
The official Yorba PPA contains the **latest versions of Geary** as well as those for Shotwell (photo manager) and [California][2] (calendar app). Be aware that any existing versions of these apps installed on your computer may/will be upgraded to a more recent version by adding this PPA.
Capiche? Coolio.
To install Geary in Ubuntu you first need to add the Yorba PPA to your Software Sources. To do this, just open a new Terminal window and carefully enter the following two commands:
sudo add-apt-repository ppa:yorba/ppa
sudo apt-get update && sudo apt-get install geary
After hitting return/enter on the last one you'll be prompted to enter your password. Do this, and then let the installation complete.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/20130320161830-geary-yorba.png)
Once done, open your desktop environment's app launcher and seek out the Geary icon. Click it, add your account(s) and discover [what the email mail man has dropped off through the information superhighway][3] and into the easy-to-use graphical interface.
**Don't forget: you can always tip us with news, app suggestions, and anything else you'd like to see us cover by using the power of electronic mail. Direct your key punches to joey [at] oho [dot] io.**
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/03/install-geary-ubuntu-linux-email-update
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://wiki.gnome.org/Apps/Geary/FullTextSearchStrategy
[2]:http://www.omgubuntu.co.uk/2014/10/california-calendar-natural-language-parser
[3]:https://www.youtube.com/watch?v=rxM8C71GB8w

View File

@ -0,0 +1,743 @@
ZMap Documentation
================================================================================
1. Getting Started with ZMap
1. Scanning Best Practices
1. Command Line Arguments
1. Additional Information
1. TCP SYN Probe Module
1. ICMP Echo Probe Module
1. UDP Probe Module
1. Configuration Files
1. Verbosity
1. Results Output
1. Blacklisting
1. Rate Limiting and Sampling
1. Sending Multiple Probes
1. Extending ZMap
1. Sample Applications
1. Writing Probe and Output Modules
----------
### Getting Started with ZMap ###
ZMap is designed to perform comprehensive scans of the IPv4 address space or large portions of it. While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space at over 1.4 million packets per second. Before performing even small scans, we encourage users to contact their local network administrators and consult our list of scanning best practices.
By default, ZMap will perform a TCP SYN scan on the specified port at the maximum rate possible. A more conservative configuration that will scan 10,000 random addresses on port 80 at a maximum 10 Mbps can be run as follows:
$ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.csv
Or more concisely specified as:
$ zmap -B 10M -p 80 -n 10000 -o results.csv
ZMap can also be used to scan specific subnets or CIDR blocks. For example, to scan only 10.0.0.0/8 and 192.168.0.0/16 on port 80, run:
zmap -p 80 -o results.csv 10.0.0.0/8 192.168.0.0/16
If the scan started successfully, ZMap will output status updates every one second similar to the following:
0% (1h51m left); send: 28777 562 Kp/s (560 Kp/s avg); recv: 1192 248 p/s (231 p/s avg); hits: 0.04%
0% (1h51m left); send: 34320 554 Kp/s (559 Kp/s avg); recv: 1442 249 p/s (234 p/s avg); hits: 0.04%
0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%
These updates provide information about the current state of the scan and are of the following form: %-complete (est time remaining); packets-sent curr-send-rate (avg-send-rate); recv: packets-recv recv-rate (avg-recv-rate); hits: hit-rate
If you do not know the scan rate that your network can support, you may want to experiment with different scan rates or bandwidth limits to find the fastest rate that your network can support before you see decreased results.
By default, ZMap will output the list of distinct IP addresses that responded successfully (e.g. with a SYN ACK packet), similar to the following. There are several additional formats (e.g. JSON and Redis) for outputting results, as well as options for producing programmatically parsable scan statistics. As well, additional output fields can be specified and the results can be filtered using an output filter.
115.237.116.119
23.9.117.80
207.118.204.141
217.120.143.111
50.195.22.82
We strongly encourage you to use a blacklist file, to exclude both reserved/unallocated IP space (e.g. multicast, RFC1918), as well as networks that request to be excluded from your scans. By default, ZMap will utilize a simple blacklist file containing reserved and unallocated addresses located at `/etc/zmap/blacklist.conf`. If you find yourself specifying certain settings, such as your maximum bandwidth or blacklist file every time you run ZMap, you can specify these in `/etc/zmap/zmap.conf` or use a custom configuration file.
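For reference, entries in a blacklist file are simply one CIDR block per line; comments (lines starting with `#`) are allowed, as in the example file shipped with ZMap. A minimal sketch of such a file, where the RFC1918, multicast and opt-out ranges shown are just common examples:

    # RFC1918 private address space
    10.0.0.0/8
    172.16.0.0/12
    192.168.0.0/16
    # multicast
    224.0.0.0/4
    # a network that asked to be excluded from scans
    203.0.113.0/24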
If you are attempting to troubleshoot scan-related issues, there are several options to help debug. First, it is possible to perform a dry-run scan in order to see the packets that would be sent over the network by adding the `--dryrun` flag. As well, it is possible to change the logging verbosity by setting the `--verbosity=n` flag.
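For example, here is a sketch of a harmless debugging run that prints the first few probe packets at maximum verbosity instead of sending them (the target block 192.0.2.0/24 is just a documentation example range):

    $ zmap --dryrun --verbosity=5 -p 80 -n 10 192.0.2.0/24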
----------
### Scanning Best Practices ###
We offer these suggestions for researchers conducting Internet-wide scans as guidelines for good Internet citizenship.
- Coordinate closely with local network administrators to reduce risks and handle inquiries
- Verify that scans will not overwhelm the local network or upstream provider
- Signal the benign nature of the scans in web pages and DNS entries of the source addresses
- Clearly explain the purpose and scope of the scans in all communications
- Provide a simple means of opting out and honor requests promptly
- Conduct scans no larger or more frequent than is necessary for research objectives
- Spread scan traffic over time or source addresses when feasible
It should go without saying that scan researchers should refrain from exploiting vulnerabilities or accessing protected resources, and should comply with any special legal requirements in their jurisdictions.
----------
### Command Line Arguments ###
#### Common Options ####
These options are the most common options when performing a simple scan. We note that some options are dependent on the probe module or output module used (e.g. target port is not used when performing an ICMP Echo Scan).
**-p, --target-port=port**
TCP port number to scan (e.g. 443)
**-o, --output-file=name**
Write results to this file. Use - for stdout
**-b, --blacklist-file=path**
File of subnets to exclude, in CIDR notation (e.g. 192.168.0.0/16), one per line. It is recommended you use this to exclude RFC 1918 addresses, multicast, IANA reserved space, and other IANA special-purpose addresses. An example blacklist file is provided in conf/blacklist.example for this purpose.
#### Scan Options ####
**-n, --max-targets=n**
Cap the number of targets to probe. This can either be a number (e.g. `-n 1000`) or a percentage (e.g. `-n 0.1%`) of the scannable address space (after excluding blacklist)
**-N, --max-results=n**
Exit after receiving this many results
**-t, --max-runtime=secs**
Cap the length of time for sending packets
**-r, --rate=pps**
Set the send rate in packets/sec
**-B, --bandwidth=bps**
Set the send rate in bits/second (supports suffixes G, M, and K (e.g. `-B 10M` for 10 mbps). This overrides the `--rate` flag.
**-c, --cooldown-time=secs**
How long to continue receiving after sending has completed (default=8)
**-e, --seed=n**
Seed used to select address permutation. Use this if you want to scan addresses in the same order for multiple ZMap runs.
**--shards=n**
Split the scan up into N shards/partitions among different instances of zmap (default=1). When sharding, `--seed` is required
**--shard=n**
Set which shard to scan (default=0). Shards are indexed in the range [0, N), where N is the total number of shards. When sharding, `--seed` is required (see the sharding sketch after this list of options).
**-T, --sender-threads=n**
Threads used to send packets (default=1)
**-P, --probes=n**
Number of probes to send to each IP (default=1)
**-d, --dryrun**
Print out each packet to stdout instead of sending it (useful for debugging)
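To illustrate how a few of the scan options above fit together, here is a sketch of splitting a single scan of port 443 across two machines using shards; the bandwidth cap, seed value and output file names are arbitrary examples:

    $ zmap -p 443 -B 10M --seed=1234 --shards=2 --shard=0 -o shard0.csv
    $ zmap -p 443 -B 10M --seed=1234 --shards=2 --shard=1 -o shard1.csv

Both invocations must use the same seed so that they agree on the same address permutation and probe disjoint subsets of it.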
#### Network Options ####
**-s, --source-port=port|range**
Source port(s) to send packets from
**-S, --source-ip=ip|range**
Source address(es) to send packets from. Either single IP or range (e.g. 10.0.0.1-10.0.0.9)
**-G, --gateway-mac=addr**
Gateway MAC address to send packets to (in case auto-detection does not work)
**-i, --interface=name**
Network interface to use
#### Probe Options ####
ZMap allows users to specify and write their own probe modules for use with ZMap. Probe modules are responsible for generating probe packets to send, and processing responses from hosts.
**--list-probe-modules**
List available probe modules (e.g. tcp_synscan)
**-M, --probe-module=name**
Select probe module (default=tcp_synscan)
**--probe-args=args**
Arguments to pass to probe module
**--list-output-fields**
List the fields the selected probe module can send to the output module
#### Output Options ####
ZMap allows users to specify and write their own output modules for use with ZMap. Output modules are responsible for processing the fieldsets returned by the probe module, and outputting them to the user. Users can specify output fields, and write filters over the output fields.
**--list-output-modules**
List available output modules (e.g. csv)
**-O, --output-module=name**
Select output module (default=csv)
**--output-args=args**
Arguments to pass to output module
**-f, --output-fields=fields**
Comma-separated list of fields to output
**--output-filter**
Specify an output filter over the fields defined by the probe module
#### Additional Options ####
**-C, --config=filename**
Read a configuration file, which can specify any other options.
**-q, --quiet**
Do not print status updates once per second
**-g, --summary**
Print configuration and summary of results at the end of the scan
**-v, --verbosity=n**
Level of log detail (0-5, default=3)
**-h, --help**
Print help and exit
**-V, --version**
Print version and exit
----------
### Additional Information ###
#### TCP SYN Scans ####
When performing a TCP SYN scan, ZMap requires a single target port and supports specifying a range of source ports from which the scan will originate.
**-p, --target-port=port**
TCP port number to scan (e.g. 443)
**-s, --source-port=port|range**
Source port(s) for scan packets (e.g. 40000-50000)
**Warning!** ZMap relies on the Linux kernel to respond to SYN/ACK packets with RST packets in order to close connections opened by the scanner. This occurs because ZMap sends packets at the Ethernet layer in order to reduce overhead otherwise incurred in the kernel from tracking open TCP connections and performing route lookups. As such, if you have a firewall rule that tracks established connections such as a netfilter rule similar to `-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`, this will block SYN/ACK packets from reaching the kernel. This will not prevent ZMap from recording responses, but it will prevent RST packets from being sent back, ultimately using up a connection on the scanned host until your connection times out. We strongly recommend that you select a set of unused ports on your scanning host which can be allowed access in your firewall and specifying this port range when executing ZMap, with the `-s` flag (e.g. `-s '50000-60000'`).
#### ICMP Echo Request Scans ####
While ZMap performs TCP SYN scans by default, it also supports ICMP echo request scans in which an ICMP echo request packet is sent to each host and the type of ICMP response received in reply is denoted. An ICMP scan can be performed by selecting the icmp_echoscan scan module similar to the following:
$ zmap --probe-module=icmp_echoscan
#### UDP Datagram Scans ####
ZMap additionally supports UDP probes, where it will send out an arbitrary UDP datagram to each host, and receive either UDP or ICMP Unreachable responses. ZMap supports four different methods of setting the UDP payload through the --probe-args command-line option. These are 'text' for ASCII-printable payloads, 'hex' for hexadecimal payloads set on the command-line, 'file' for payloads contained in an external file, and 'template' for payloads that require dynamic field generation. In order to obtain the UDP response, make sure that you specify 'data' as one of the fields to report with the -f option.
The example below will send the two bytes 'ST', a PCAnywhere 'status' request, to UDP port 5632.
$ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -
The example below will send the byte '0x02', a SQL Server 'client broadcast' request, to UDP port 1434.
$ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -
The example below will send a NetBIOS status request to UDP port 137. This uses a payload file that is included with the ZMap distribution.
$ zmap -M udp -p 137 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -
The example below will send a SIP 'OPTIONS' request to UDP port 5060. This uses a template file that is included with the ZMap distribution.
$ zmap -M udp -p 5060 --probe-args=template:sip_options.tpl -N 100 -f saddr,data -o -
UDP payload templates are still experimental. You may encounter crashes when using more than one send thread (-T), and there is a significant decrease in performance compared to static payloads. A template is simply a payload file that contains one or more field specifiers enclosed in a ${} sequence. Some protocols, notably SIP, require the payload to reflect the source and destination of the packet. Other protocols, such as portmapper and DNS, contain fields that should be randomized per request or risk being dropped by multi-homed systems scanned by ZMap.
The payload template below will send a SIP OPTIONS request to every destination:
OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
From: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT};tag=${RAND_DIGIT=8}
To: sip:${RAND_ALPHA=8}@${DADDR}
Call-ID: ${RAND_DIGIT=10}@${SADDR}
CSeq: 1 OPTIONS
Contact: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT}
Content-Length: 0
Max-Forwards: 20
User-Agent: ${RAND_ALPHA=8}
Accept: text/plain
In the example above, note that line endings are \r\n and the end of this request must contain \r\n\r\n for most SIP implementations to correctly process it. A working example is included in the examples/udp-payloads directory of the ZMap source tree (sip_options.tpl).
The following template fields are currently implemented:
- **SADDR**: Source IP address in dotted-quad format
- **SADDR_N**: Source IP address in network byte order
- **DADDR**: Destination IP address in dotted-quad format
- **DADDR_N**: Destination IP address in network byte order
- **SPORT**: Source port in ascii format
- **SPORT_N**: Source port in network byte order
- **DPORT**: Destination port in ascii format
- **DPORT_N**: Destination port in network byte order
- **RAND_BYTE**: Random bytes (0-255), length specified with =(length) parameter
- **RAND_DIGIT**: Random digits from 0-9, length specified with =(length) parameter
- **RAND_ALPHA**: Random mixed-case letters from A-Z, length specified with =(length) parameter
- **RAND_ALPHANUM**: Random mixed-case letters from A-Z and digits from 0-9, length specified with =(length) parameter
### Configuration Files ###
ZMap supports configuration files instead of requiring all options to be specified on the command-line. A configuration can be created by specifying one long-name option and the value per line such as:
interface "eth1"
source-ip 1.1.1.4-1.1.1.8
gateway-mac b4:23:f9:28:fa:2d # upstream gateway
cooldown-time 300 # seconds
blacklist-file /etc/zmap/blacklist.conf
output-file ~/zmap-output
quiet
summary
ZMap can then be run with a configuration file and specifying any additional necessary parameters:
$ zmap --config=~/.zmap.conf --target-port=443
### Verbosity ###
There are several types of on-screen output that ZMap produces. By default, ZMap will print out basic progress information similar to the following every 1 second. This can be disabled by setting the `--quiet` flag.
0:01 12%; send: 10000 done (15.1 Kp/s avg); recv: 144 143 p/s (141 p/s avg); hits: 1.44%
ZMap also prints out informational messages during scanner configuration such as the following, which can be controlled with the `--verbosity` argument.
Aug 11 16:16:12.813 [INFO] zmap: started
Aug 11 16:16:12.817 [DEBUG] zmap: no interface provided. will use eth0
Aug 11 16:17:03.971 [DEBUG] cyclic: primitive root: 3489180582
Aug 11 16:17:03.971 [DEBUG] cyclic: starting point: 46588
Aug 11 16:17:03.975 [DEBUG] blacklist: 3717595507 addresses allowed to be scanned
Aug 11 16:17:03.975 [DEBUG] send: will send from 1 address on 28233 source ports
Aug 11 16:17:03.975 [DEBUG] send: using bandwidth 10000000 bits/s, rate set to 14880 pkt/s
Aug 11 16:17:03.985 [DEBUG] recv: thread started
ZMap also supports printing out a grep-able summary at the end of the scan, similar to below, which can be invoked with the `--summary` flag.
cnf target-port 443
cnf source-port-range-begin 32768
cnf source-port-range-end 61000
cnf source-addr-range-begin 1.1.1.4
cnf source-addr-range-end 1.1.1.8
cnf maximum-packets 4294967295
cnf maximum-runtime 0
cnf permutation-seed 0
cnf cooldown-period 300
cnf send-interface eth1
cnf rate 45000
env nprocessors 16
exc send-start-time Fri Jan 18 01:47:35 2013
exc send-end-time Sat Jan 19 00:47:07 2013
exc recv-start-time Fri Jan 18 01:47:35 2013
exc recv-end-time Sat Jan 19 00:52:07 2013
exc sent 3722335150
exc blacklisted 572632145
exc first-scanned 1318129262
exc hit-rate 0.874102
exc synack-received-unique 32537000
exc synack-received-total 36689941
exc synack-cooldown-received-unique 193
exc synack-cooldown-received-total 1543
exc rst-received-unique 141901021
exc rst-received-total 166779002
adv source-port-secret 37952
adv permutation-gen 4215763218
### Results Output ###
ZMap can produce results in several formats through the use of **output modules**. By default, ZMap only supports **csv** output, however support for **redis** and **json** can be compiled in. The results sent to these output modules may be filtered using an **output filter**. The fields the output module writes are specified by the user. By default, ZMap will return results in csv format and if no output file is specified, ZMap will not produce specific results. It is also possible to write your own output module; see Writing Output Modules for information.
**-o, --output-file=p**
File to write output to
**-O, --output-module=p**
Invoke a custom output module
**-f, --output-fields=p**
Comma-separated list of fields to output
**--output-filter=filter**
Specify an output filter over fields for a given probe
**--list-output-modules**
Lists available output modules
**--list-output-fields**
List available output fields for a given probe
#### Output Fields ####
ZMap has a variety of fields it can output beyond IP address. These fields can be viewed for a given probe module by running with the `--list-output-fields` flag.
$ zmap --probe-module="tcp_synscan" --list-output-fields
saddr string: source IP address of response
saddr-raw int: network order integer form of source IP address
daddr string: destination IP address of response
daddr-raw int: network order integer form of destination IP address
ipid int: IP identification number of response
ttl int: time-to-live of response packet
sport int: TCP source port
dport int: TCP destination port
seqnum int: TCP sequence number
acknum int: TCP acknowledgement number
window int: TCP window
classification string: packet classification
success int: is response considered success
repeat int: is response a repeat response from host
cooldown int: Was response received during the cooldown period
timestamp-str string: timestamp of when response arrived in ISO8601 format.
timestamp-ts int: timestamp of when response arrived in seconds since Epoch
timestamp-us int: microsecond part of timestamp (e.g. microseconds since 'timestamp-ts')
To select which fields to output, any combination of the output fields can be specified as a comma-separated list using the `--output-fields=fields` or `-f` flags. Example:
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
#### Filtering Output ####
Results generated by a probe module can be filtered before being passed to the output module. Filters are defined over the output fields of a probe module. Filters are written in a simple filtering language, similar to SQL, and are passed to ZMap using the **--output-filter** option. Output filters are commonly used to filter out duplicate results, or to pass only successful responses to the output module.
Filter expressions are of the form `<fieldname> <operation> <value>`. The type of `<value>` must be either a string or unsigned integer literal, and match the type of `<fieldname>`. The valid operations for integer comparisons are `=, !=, <, >, <=, >=`. The operations for string comparisons are `=` and `!=`. The `--list-output-fields` flag will print what fields and types are available for the selected probe module, and then exit.
Compound filter expressions may be constructed by combining filter expressions using parenthesis to specify order of operations, the `&&` (logical AND) and `||` (logical OR) operators.
**Examples**
Write a filter for only successful, non-duplicate responses
--output-filter="success = 1 && repeat = 0"
Filter for packets that have classification RST and a TTL greater than 10, or for packets with classification SYNACK
--output-filter="(classification = rst && ttl > 10) || classification = synack"
#### CSV ####
The csv module will produce a comma-separated value file of the output fields requested. For example, the following command produces the following CSV in a file called `output.csv`.
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
----------
response, saddr, daddr, sport, dport, seq, ack, in_cooldown, is_repeat, timestamp
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
rst, 148.8.49.150, 10.0.0.9, 80, 41672, 0, 1135824975, 0, 0,2013-08-15 18:55:47.692
rst, 50.165.166.206, 10.0.0.9, 80, 38858, 0, 535206863, 0, 0,2013-08-15 18:55:47.694
rst, 65.55.203.135, 10.0.0.9, 80, 50008, 0, 4071709905, 0, 0,2013-08-15 18:55:47.700
synack, 50.57.166.186, 10.0.0.9, 80, 60650, 2813653162, 993314545, 0, 0,2013-08-15 18:55:47.704
synack, 152.75.208.114, 10.0.0.9, 80, 52498, 460383682, 4040786862, 0, 0,2013-08-15 18:55:47.707
synack, 23.72.138.74, 10.0.0.9, 80, 33480, 810393698, 486476355, 0, 0,2013-08-15 18:55:47.710
#### Redis ####
The redis output module allows addresses to be added to a Redis queue instead of being saved to file which ultimately allows ZMap to be incorporated with post processing tools.
**Heads Up!** ZMap does not build with Redis support by default. If you are building ZMap from source, you can build with Redis support by running CMake with `-DWITH_REDIS=ON`.
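As a rough sketch (assuming a CMake-based checkout of the ZMap sources), enabling Redis support looks like this:

    $ cmake -DWITH_REDIS=ON .
    $ make
    $ sudo make install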
### Blacklisting and Whitelisting ###
ZMap supports both blacklisting and whitelisting network prefixes. If ZMap is not provided with blacklist or whitelist parameters, ZMap will scan all IPv4 addresses (including local, reserved, and multicast addresses). If a blacklist file is specified, network prefixes in the blacklisted segments will not be scanned; if a whitelist file is provided, only network prefixes in the whitelist file will be scanned. A whitelist and blacklist file can be used in coordination; the blacklist has priority over the whitelist (e.g. if you have whitelisted 10.0.0.0/8 and blacklisted 10.1.0.0/16, then 10.1.0.0/16 will not be scanned). Whitelist and blacklist files can be specified on the command-line as follows:
**-b, --blacklist-file=path**
File of subnets to blacklist in CIDR notation, e.g. 192.168.0.0/16
**-w, --whitelist-file=path**
File of subnets to limit scan to in CIDR notation, e.g. 192.168.0.0/16
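For example, to restrict a scan to a single whitelisted network (the file name and port below are placeholders):

    $ zmap -p 443 -w whitelist.conf -o results.csv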
Blacklist files should be formatted with a single network prefix in CIDR notation per line. Comments are allowed using the `#` character. Example:
# From IANA IPv4 Special-Purpose Address Registry
# http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
# Updated 2013-05-22
0.0.0.0/8 # RFC1122: "This host on this network"
10.0.0.0/8 # RFC1918: Private-Use
100.64.0.0/10 # RFC6598: Shared Address Space
127.0.0.0/8 # RFC1122: Loopback
169.254.0.0/16 # RFC3927: Link Local
172.16.0.0/12 # RFC1918: Private-Use
192.0.0.0/24 # RFC6890: IETF Protocol Assignments
192.0.2.0/24 # RFC5737: Documentation (TEST-NET-1)
192.88.99.0/24 # RFC3068: 6to4 Relay Anycast
192.168.0.0/16 # RFC1918: Private-Use
192.18.0.0/15 # RFC2544: Benchmarking
198.51.100.0/24 # RFC5737: Documentation (TEST-NET-2)
203.0.113.0/24 # RFC5737: Documentation (TEST-NET-3)
240.0.0.0/4 # RFC1112: Reserved
255.255.255.255/32 # RFC0919: Limited Broadcast
# From IANA Multicast Address Space Registry
# http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
# Updated 2013-06-25
224.0.0.0/4 # RFC5771: Multicast/Reserved
If you are looking to scan only a random portion of the internet, check out Sampling instead of using whitelisting and blacklisting.
**Heads Up!** The default ZMap configuration uses the blacklist file at `/etc/zmap/blacklist.conf`, which contains locally scoped address space and reserved IP ranges. The default configuration can be changed by editing `/etc/zmap/zmap.conf`.
### Rate Limiting and Sampling ###
By default, ZMap will scan at the fastest rate that your network adaptor supports. In our experience on commodity hardware, this is generally around 95-98% of the theoretical speed of gigabit Ethernet, which may be faster than your upstream provider can handle. ZMap will not automatically adjust its send rate based on your upstream provider. You may need to manually adjust your send rate to reduce packet drops and incorrect results.
**-r, --rate=pps**
Set maximum send rate in packets/sec
**-B, --bandwidth=bps**
Set send rate in bits/sec (supports suffixes G, M, and K). This overrides the --rate flag.
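For example, either of the following hypothetical invocations caps the send rate well below gigabit line rate:

    $ zmap -p 443 -r 10000 -o results.csv      # at most 10,000 packets/sec
    $ zmap -p 443 -B 10M -o results.csv        # at most 10 Mbps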
ZMap also allows random sampling of the IPv4 address space by specifying max-targets and/or max-runtime. Because hosts are scanned in a random permutation generated per scan instantiation, limiting a scan to n hosts will perform a random sampling of n hosts. Command-line options:
**-n, --max-targets=n**
Cap number of targets to probe
**-N, --max-results=n**
Cap number of results (exit after receiving this many positive results)
**-t, --max-runtime=s**
Cap length of time for sending packets (in seconds)
**-s, --seed=n**
Seed used to select address permutation. Specify the same seed in order to scan addresses in the same order for different ZMap runs.
For example, if you wanted to scan the same one million hosts on the Internet for multiple scans, you could set a predetermined seed and cap the number of scanned hosts similar to the following:
zmap -p 443 -s 3 -n 1000000 -o results
In order to determine which one million hosts were going to be scanned, you could run the scan in dry-run mode which will print out the packets that would be sent instead of performing the actual scan.
zmap -p 443 -s 3 -n 1000000 --dryrun | grep daddr \
    | awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
### Sending Multiple Packets ###
ZMap supports sending multiple probes to each host. Increasing this number both increases scan time and hosts reached. However, we find that the increase in scan time (~100% per additional probe) greatly outweighs the increase in hosts reached (~1% per additional probe).
**-P, --probes=n**
The number of unique probes to send to each IP (default=1)
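For example, to send three probes to each host (the values here are illustrative only):

    $ zmap -p 443 -P 3 -o results.csv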
----------
### Sample Applications ###
ZMap is designed for initiating contact with a large number of hosts and finding ones that respond positively. However, we realize that many users will want to perform follow-up processing, such as performing an application level handshake. For example, users who perform a TCP SYN scan on port 80 might want to perform a simple GET request and users who scan port 443 may be interested in completing a TLS handshake.
#### Banner Grab ####
We have included a sample application, banner-grab, with ZMap that enables users to receive messages from listening TCP servers. Banner-grab connects to the provided servers, optionally sends a message, and prints out the first message received from the server. This tool can be used to fetch banners such as HTTP server responses to specific commands, telnet login prompts, or SSH server strings.
This example finds 1000 servers listening on port 80, and sends a simple GET request to each, storing their base-64 encoded responses in http-banners.out
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > out
For more details on using `banner-grab`, see the README file in `examples/banner-grab`.
**Heads Up!** ZMap and banner-grab can have significant performance and accuracy impact on one another if run simultaneously (as in the example). Make sure not to let ZMap saturate banner-grab-tcp's concurrent connections, otherwise banner-grab will fall behind reading stdin, causing ZMap to block on writing stdout. We recommend using a slower scanning rate with ZMap, and increasing the concurrency of banner-grab-tcp to no more than 3000 (Note that > 1000 concurrent connections requires you to use `ulimit -SHn 100000` and `ulimit -HHn 100000` to increase the maximum file descriptors per process). These parameters will of course be dependent on your server performance, and hit-rate; we encourage developers to experiment with small samples before running a large scan.
#### Forge Socket ####
We have also included a form of banner-grab, called forge-socket, that reuses the SYN-ACK sent from the server for the connection that ultimately fetches the banner. In `banner-grab-tcp`, ZMap sends a SYN to each server, and listening servers respond with a SYN+ACK. The ZMap host's kernel receives this, and sends a RST, as no active connection is associated with that packet. The banner-grab program must then create a new TCP connection to the same server to fetch data from it.
In forge-socket, we utilize a kernel module by the same name, that allows us to create a connection with arbitrary TCP parameters. This enables us to suppress the kernel's RST packet, and instead create a socket that will reuse the SYN+ACK's parameters, and send and receive data through this socket as we would any normally connected socket.
To use forge-socket, you will need the forge-socket kernel module, available from [github][1]. You should git clone `git@github.com:ewust/forge_socket.git` in the ZMap root source directory, and then cd into the forge_socket directory, and run make. Install the kernel module with `insmod forge_socket.ko` as root.
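Collected as a sketch, the steps described above look roughly like this (run the insmod step as root):

    $ git clone git@github.com:ewust/forge_socket.git
    $ cd forge_socket
    $ make
    # insmod forge_socket.ko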
You must also tell the kernel not to send RST packets. An easy way to disable RST packets system wide is to use **iptables**. `iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root will do this, though you may also add an optional --dport X to limit this to the port (X) you are scanning. To remove this after your scan completes, you can run `iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root.
Now you should be able to build the forge-socket ZMap example program. To run it, you must use the **extended_file** ZMap output module:
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
./forge-socket -c 500 -d ./http-req > ./http-banners.out
See the README in `examples/forge-socket` for more details.
----------
### Writing Probe and Output Modules ###
ZMap can be extended to support different types of scanning through **probe modules** and additional types of results output through **output modules**. Registered probe and output modules can be listed through the command-line interface:
**--list-probe-modules**
Lists installed probe modules
**--list-output-modules**
Lists installed output modules
#### Output Modules ####
ZMap output and post-processing can be extended by implementing and registering **output modules** with the scanner. Output modules receive a callback for every received response packet. While the default provided modules provide simple output, these modules are also capable of performing additional post-processing (e.g. tracking duplicates or outputting numbers in terms of AS instead of IP address)
Output modules are created by defining a new output_module struct and registering it in [output_modules.c][2]:
typedef struct output_module {
const char *name; // how is output module referenced in the CLI
unsigned update_interval; // how often is update called in seconds
output_init_cb init; // called at scanner initialization
output_update_cb start; // called at the beginning of scanner
output_update_cb update; // called every update_interval seconds
output_update_cb close; // called at scanner termination
output_packet_cb process_ip; // called when a response is received
const char *helptext; // Printed when --list-output-modules is called
} output_module_t;
Output modules must have a name, which is how they are referenced on the command-line and generally implement `success_ip` and oftentimes `other_ip` callback. The process_ip callback is called for every response packet that is received and passed through the output filter by the current **probe module**. The response may or may not be considered a success (e.g. it could be a TCP RST). These callbacks must define functions that match the `output_packet_cb` definition:
int (*output_packet_cb) (
ipaddr_n_t saddr, // IP address of scanned host in network-order
ipaddr_n_t daddr, // destination IP address in network-order
const char* response_type, // send-module classification of packet
int is_repeat, // {0: first response from host, 1: subsequent responses}
int in_cooldown, // {0: not in cooldown state, 1: scanner in cooldown state}
const u_char* packet, // pointer to struct iphdr of IP packet
size_t packet_len // length of packet in bytes
);
An output module can also register callbacks to be executed at scanner initialization (tasks such as opening an output file), start of the scan (tasks such as documenting blacklisted addresses), during regular intervals during the scan (tasks such as progress updates), and close (tasks such as closing any open file descriptors). These callbacks are provided with complete access to the scan configuration and current state:
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
which are defined in [output_modules.h][3]. An example is available at [src/output_modules/module_csv.c][4].
#### Probe Modules ####
Packets are constructed using probe modules which allow abstracted packet creation and response classification. ZMap comes with two scan modules by default: `tcp_synscan` and `icmp_echoscan`. By default, ZMap uses `tcp_synscan`, which sends TCP SYN packets, and classifies responses from each host as open (received SYN+ACK) or closed (received RST). ZMap also allows developers to write their own probe modules for use with ZMap, using the following API.
Each type of scan is implemented by developing and registering the necessary callbacks in a `send_module_t` struct:
typedef struct probe_module {
const char *name; // how scan is invoked on command-line
size_t packet_length; // how long is probe packet (must be static size)
const char *pcap_filter; // PCAP filter for collecting responses
size_t pcap_snaplen; // maximum number of bytes for libpcap to capture
uint8_t port_args; // set to 1 if ZMap requires a --target-port be
// specified by the user
probe_global_init_cb global_initialize; // called once at scanner initialization
probe_thread_init_cb thread_initialize; // called once for each thread packet buffer
probe_make_packet_cb make_packet; // called once per host to update packet
probe_validate_packet_cb validate_packet; // called once per received packet,
// return 0 if packet is invalid,
// non-zero otherwise.
probe_print_packet_cb print_packet; // called per packet if in dry-run mode
probe_classify_packet_cb process_packet; // called by receiver to classify response
probe_close_cb close; // called at scanner termination
fielddef_t *fields // Definitions of the fields specific to this module
int numfields // Number of fields
} probe_module_t;
At scanner initialization, `global_initialize` is called once and can be utilized to perform any necessary global configuration or initialization. However, `global_initialize` does not have access to the packet buffer, which is thread-specific. Instead, `thread_initialize` is called at the initialization of each sender thread and is provided with access to the buffer that will be used for constructing probe packets, along with global source and destination values. This callback should be used to construct the host-agnostic packet structure such that only specific values (e.g. destination host and checksum) need to be updated for each host. For example, the Ethernet header will not change between packets (minus checksum which is calculated in hardware by the NIC) and therefore can be defined ahead of time in order to reduce overhead at scan time.
The `make_packet` callback is called for each host that is scanned to allow the **probe module** to update host specific values and is provided with IP address values, an opaque validation string, and probe number (shown below). The probe module is responsible for placing as much of the verification string into the probe, in such a way that when a valid response is returned by a server, the probe module can verify that it is present. For example, for a TCP SYN scan, the tcp_synscan probe module can use the TCP source port and sequence number to store the validation string. Response packets (SYN+ACKs) will contain the expected values in the destination port and acknowledgement number.
int make_packet(
void *packetbuf, // packet buffer
ipaddr_n_t src_ip, // source IP in network-order
ipaddr_n_t dst_ip, // destination IP in network-order
uint32_t *validation, // validation string to place in probe
int probe_num // if sending multiple probes per host,
// this will be which probe number for this
// host we are currently sending
);
Scan modules must also define `pcap_filter`, `validate_packet`, and `process_packet`. Only packets that match the PCAP filter will be considered by the scanner. For example, in the case of a TCP SYN scan, we only want to investigate TCP SYN/ACK or TCP RST packets and would utilize a filter similar to `tcp && tcp[13] & 4 != 0 || tcp[13] == 18`. The `validate_packet` function will be called for every packet that fulfills this PCAP filter. If the validation returns non-zero, the `process_packet` function will be called, and will populate a fieldset using fields defined in `fields` with data from the packet. For example, the following code processes a packet for the TCP synscan probe module.
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
{
struct iphdr *ip_hdr = (struct iphdr *)&packet[sizeof(struct ethhdr)];
struct tcphdr *tcp = (struct tcphdr*)((char *)ip_hdr
+ (sizeof(struct iphdr)));
fs_add_uint64(fs, "sport", (uint64_t) ntohs(tcp->source));
fs_add_uint64(fs, "dport", (uint64_t) ntohs(tcp->dest));
fs_add_uint64(fs, "seqnum", (uint64_t) ntohl(tcp->seq));
fs_add_uint64(fs, "acknum", (uint64_t) ntohl(tcp->ack_seq));
fs_add_uint64(fs, "window", (uint64_t) ntohs(tcp->window));
if (tcp->rst) { // RST packet
fs_add_string(fs, "classification", (char*) "rst", 0);
fs_add_uint64(fs, "success", 0);
} else { // SYNACK packet
fs_add_string(fs, "classification", (char*) "synack", 0);
fs_add_uint64(fs, "success", 1);
}
}
--------------------------------------------------------------------------------
via: https://zmap.io/documentation.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://github.com/ewust/forge_socket/
[2]:https://github.com/zmap/zmap/blob/v1.0.0/src/output_modules/output_modules.c
[3]:https://github.com/zmap/zmap/blob/master/src/output_modules/output_modules.h
[4]:https://github.com/zmap/zmap/blob/master/src/output_modules/module_csv.c

View File

@ -0,0 +1,61 @@
Papyrus:开源笔记管理工具
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_4.jpeg)
在上一篇帖子中,我们介绍了[任务管理软件Go For It!][1]。今天我们将介绍一款名为**Papyrus**的开源笔记软件。
[Papyrus][2] 是[Kaqaz笔记管理][3]的变体基于Qt5构建。它不仅有简洁、易用的界面还具备较好的安全性。由于强调简洁我觉得Papyrus与OneNote比较相像。你可以将你的笔记像“纸张”一样分类整理,还可以给它们添加标签进行分组。够简单的吧!
### Papyrus的功能: ###
虽然Papyrus强调简洁它依然有很多丰富的功能。它的一些主要功能如下
- 按类别和标签管理笔记
- 高级搜索选项
- 触屏模式
- 全屏选项
- 备份至Dropbox/硬盘
- 某些页面允许加密
- 可与其他软件共享笔记
- 与Dropbox加密同步
- 除Linux外还可在Android、Windows和OS X上使用
### 安装Papyrus ###
Papyrus为Android用户提供了APK安装包Windows和OS X也有安装文件。Linux用户还可以获取程序的源码。Ubuntu及其他基于Ubuntu的发行版可以使用.deb包进行安装下方给出了一个示意性的安装命令。根据你的系统及习惯你可以从Papyrus的下载页面中获取不同的文件
- [下载 Papyrus][4]
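以Ubuntu为例下面是一个示意性的.deb安装命令文件名仅为假设请以实际下载到的文件名为准

    sudo dpkg -i papyrus*.deb
    sudo apt-get install -f    # 补全可能缺失的依赖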
### 软件截图 ###
以下是此软件的一些截图:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_3-700x450_c.jpeg)
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_2-700x450_c.jpeg)
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_1-700x450_c.jpeg)
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux-700x450_c.jpeg)
试试Papyrus吧你会喜欢上它的。
(译者注:此软件暂无中文版)
--------------------------------------------------------------------------------
via: http://itsfoss.com/papyrus-open-source-note-manager/
作者:[Abhishek][a]
译者:[KevinSJ](https://github.com/KevinSJ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/go-for-it-to-do-app-in-linux/
[2]:http://aseman.co/en/products/papyrus/
[3]:https://github.com/sialan-labs/kaqaz/
[4]:http://aseman.co/en/products/papyrus/

View File

@ -0,0 +1,167 @@
怎样在CentOS 7.0上安装/配置VNC服务器
================================================================================
这是一个关于怎样在你的 CentOS 7 上安装配置 [VNC][1] 服务的教程。当然这个教程也适合 RHEL 7 。在这个教程里我们将学习什么是VNC以及怎样在 CentOS 7 上安装配置 [VNC 服务器][1]。
我们都知道,作为一个系统管理员,大多数时间是通过网络管理服务器的。在管理服务器的过程中很少会用到图形界面,多数情况下我们只是用 SSH 来完成我们的管理任务。在这篇文章里,我们将配置 VNC 来提供一个连接我们 CentOS 7 服务器的方法。VNC 允许我们开启一个远程图形会话来连接我们的服务器,这样我们就可以通过网络远程访问服务器的图形界面了。
VNC 服务器是一个自由且开源的软件,它可以让用户可以远程访问服务器的桌面环境。另外连接 VNC 服务器需要使用 VNC viewer 这个客户端。
**一些 VNC 服务器的优点:**

- 远程的图形管理方式让工作变得简单方便。
- 剪贴板可以在 CentOS 服务器主机和 VNC 客户端机器之间共享。
- CentOS 服务器上也可以安装图形工具,让管理能力变得更强大。
- 只要安装了 VNC 客户端,任何操作系统都可以管理 CentOS 服务器了。
- 比 ssh 图形和 RDP 连接更可靠。
那么,让我们开始安装 VNC 服务器之旅吧。我们需要按照下面的步骤一步一步来搭建一个有效的 VNC。
首先我们需要一个有效的桌面环境X-Window如果没有的话要先安装一个。
**注意:以下命令必须以 root 权限运行。要切换到 root 请在终端下运行“sudo -s”当然不包括双引号“”**
### 1. 安装 X-Window ###
首先我们需要安装 [X-Window][2],在终端中运行下面的命令,安装会花费一点时间。
# yum check-update
# yum groupinstall "X Window System"
![installing x windows](http://blog.linoxide.com/wp-content/uploads/2015/01/installing-x-windows.png)
# yum install gnome-classic-session gnome-terminal nautilus-open-terminal control-center liberation-mono-fonts
![install gnome classic session](http://blog.linoxide.com/wp-content/uploads/2015/01/gnome-classic-session-install.png)
# unlink /etc/systemd/system/default.target
# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target
![configuring graphics](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-graphics.png)
# reboot
在服务器重启之后,我们就有了一个工作着的 CentOS 7 桌面环境了。
现在,我们要在服务器上安装 VNC 服务器了。
### 2. 安装 VNC 服务器 ###
现在要在我们的 CentOS 7 上安装 VNC 服务器了。我们需要执行下面的命令。
# yum install tigervnc-server -y
![vnc server](http://blog.linoxide.com/wp-content/uploads/2015/01/install-tigervnc.png)
### 3. 配置 VNC ###
然后,我们需要在 **/etc/systemd/system/** 目录里创建一个配置文件。我们可以从 **/lib/systemd/system/vncserver@.service** 拷贝一份配置文件范例过来。
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
![copying vnc server configuration](http://blog.linoxide.com/wp-content/uploads/2015/01/copying-configuration.png)
接着我们用自己最喜欢的编辑器(这儿我们用的 **nano** )打开 **/etc/systemd/system/vncserver@:1.service** ,找到下面这几行,用自己的用户名替换掉 <USER> 。举例来说,我的用户名是 linoxide 所以我用 linoxide 来替换掉 <USER>
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
PIDFile=/home/<USER>/.vnc/%H%i.pid
替换成
ExecStart=/sbin/runuser -l linoxide -c "/usr/bin/vncserver %i"
PIDFile=/home/linoxide/.vnc/%H%i.pid
如果是 root 用户则
ExecStart=/sbin/runuser -l root -c "/usr/bin/vncserver %i"
PIDFile=/root/.vnc/%H%i.pid
![configuring user](http://blog.linoxide.com/wp-content/uploads/2015/01/configuring-user.png)
好了,下面重启 systemd 。
# systemctl daemon-reload
最后还要设置一下用户的 VNC 密码。要设置某个用户的密码,必须要获得该用户的权限,这里我用 linoxide 的权限,执行“**su linoxide**”就可以了。
# su linoxide
$ sudo vncpasswd
![setting vnc password](http://blog.linoxide.com/wp-content/uploads/2015/01/vncpassword.png)
**确保你输入的密码多于6个字符**
### 4. 开启服务 ###
用下面的命令(永久地)开启服务:
$ sudo systemctl enable vncserver@:1.service
启动服务。
$ sudo systemctl start vncserver@:1.service
### 5. 防火墙设置 ###
我们需要配置防火墙来让 VNC 服务正常工作。
$ sudo firewall-cmd --permanent --add-service vnc-server
$ sudo systemctl restart firewalld.service
![allowing firewalld](http://blog.linoxide.com/wp-content/uploads/2015/01/allowing-firewalld.png)
现在就可以用 IP 和端口号(例如 192.168.1.1:1 ,这里的端口不是服务器的端口,而是视 VNC 连接数的多少从1开始排序——译注来连接 VNC 服务器了。
### 6. 用 VNC 客户端连接服务器 ###
好了,现在已经完成了 VNC 服务器的安装了。要使用 VNC 连接服务器,我们还需要一个在本地计算机上安装的仅供连接远程计算机使用的 VNC 客户端。
![remote access vncserver from vncviewer](http://blog.linoxide.com/wp-content/uploads/2015/01/vncviewer.png)
你可以用像 [Tightvnc viewer][3] 和 [Realvnc viewer][4] 的客户端来连接到服务器。
要用其他用户和端口连接 VNC 服务器请回到第3步添加一个新的用户和端口。你需要创建 **vncserver@:2.service** 并替换配置文件里的用户名和之后步骤里相应的文件名、端口号。**请确保你登录 VNC 服务器用的是你之前配置 VNC 密码的时候使用的那个用户名**
VNC 服务本身使用的是5900端口。鉴于有不同的用户使用 VNC ,每个人的连接都会获得不同的端口。配置文件名里面的数字告诉 VNC 服务器把服务运行在5900的子端口上。在我们这个例子里第一个 VNC 服务会运行在59015900 + 1端口上之后的依次增加运行在5900 + x 号端口上。其中 x 是指之后用户的配置文件名 **vncserver@:x.service** 里面的 x 。
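下面是一个示意性的操作流程假设要添加的是第二个用户对应端口5902所用命令与前面第3、4步相同复制出 vncserver@:2.service 之后,记得把文件里的 <USER> 换成新用户的用户名,并用该用户执行 vncpasswd 设置密码:

    # cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:2.service
    # systemctl daemon-reload
    # systemctl enable vncserver@:2.service
    # systemctl start vncserver@:2.service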
在建立连接之前,我们需要知道服务器的 IP 地址和端口。IP 地址是一台计算机在网络中的独特的识别号码。我的服务器的 IP 地址是96.126.120.92VNC 用户端口是1。执行下面的命令可以获得服务器的公网 IP 地址。
# curl -s checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
### 总结 ###
好了,现在我们已经在运行 CentOS 7 / RHEL 7 Red Hat Enterprise Linux的服务器上安装配置好了 VNC 服务器。VNC 是能实现远程控制服务器的最简单的自由开源工具之一,也是 Teamviewer Remote Access 的一款优秀的替代品。VNC 允许一个安装了 VNC 客户端的用户远程控制一台安装了 VNC 服务的服务器。下面还有一些经常使用的相关命令。好好玩!
#### 其他命令: ####
- 关闭 VNC 服务。
# systemctl stop vncserver@:1.service
- 禁止 VNC 服务开机启动。
# systemctl disable vncserver@:1.service
- 关闭防火墙。
# systemctl stop firewalld.service
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-configure-vnc-server-centos-7-0/
作者:[Arun Pyasi][a]
译者:[boredivan](https://github.com/boredivan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://en.wikipedia.org/wiki/Virtual_Network_Computing
[2]:http://en.wikipedia.org/wiki/X_Window_System
[3]:http://www.tightvnc.com/
[4]:https://www.realvnc.com/

View File

@ -0,0 +1,167 @@
如何在Linux中以交互方式分析和查看Apache web服务器日志?
================================================================================
无论你是从事网站托管业务还是在自己的VPS上运行着几个网站你总会有需要查看访客统计数据的时候例如排名靠前的访客、被请求的文件无论是动态的还是静态的、带宽使用情况、客户端的浏览器、来源站点等等。
[GoAccess][1] 是一款用于Apache或Nginx的命令行日志分析器和交互式查看器。有了这款工具你不仅可以浏览到之前提及的相关数据还可以分析网站服务器日志来进一步挖掘数据而且**这一切都可以在一个终端窗口里实时输出**。由于今天的[大多数web服务器][2]使用Debian的衍生版或者基于红帽的发行版作为底层操作系统我将会告诉你如何在Debian和CentOS中安装和使用GoAccess。
### 在Linux系统安装GoAccess ###
在Debian、Ubuntu及其衍生版本中运行以下命令来安装GoAccess
# aptitude install goaccess
在CentOS中你需要先启用[EPEL 仓库][3],然后执行以下命令:
# yum install goaccess
在Fedora中同样使用yum命令
# yum install goaccess
如果你想从源码安装GoAccess以启用额外的功能例如 GeoIP 定位),请先为你的操作系统安装[必需的依赖包][4],然后按以下步骤进行:
# wget http://tar.goaccess.io/goaccess-0.8.5.tar.gz
# tar -xzvf goaccess-0.8.5.tar.gz
# cd goaccess-0.8.5/
# ./configure --enable-geoip
# make
# make install
以上安装的版本是 0.8.5,但是你也可以在该软件的网站[下载页][5]确认是否是最新版本。
由于GoAccess不需要后续的配置一旦安装你就可以马上使用。
### 运行 GoAccess ###
要开始使用GoAccess只需要让它读取你的Apache访问日志即可。
对于Debian及其衍生版本
# goaccess -f /var/log/apache2/access.log
对于基于红帽的发行版:
# goaccess -f /var/log/httpd/access_log
当你第一次启动GoAccess你将会在下方的屏幕中选择日期和日志格式。你可以用空格键选择选项然后按F10确认。至于日期和日志格式你可能希望参考[Apache 文档][6]来刷新你的记忆。
在这个例子中选择通用日志格式CLF
![](https://farm8.staticflickr.com/7422/15868350373_30c16d7c30.jpg)
然后按F10你将会在屏幕中看到统计数据。为了简洁起见这里只显示头部也就是日志文件的总体摘要如下图所示
![](https://farm9.staticflickr.com/8683/16486742901_7a35b5df69_b.jpg)
### 通过 GoAccess来浏览网站服务器统计数据 ###
当你用向下的箭头键滚动页面时,你会看到以下章节。这里提及的章节顺序可能会根据你的发行版或者你首选的安装方式(从源码或是软件仓库)而有所不同:
1. 每天的唯一访客数具有同样IP、同一日期和相同用户代理的请求被视为同一个访客
![](https://farm8.staticflickr.com/7308/16488483965_a439dbc5e2_b.jpg)
2. 请求的文件网页URL
![](https://farm9.staticflickr.com/8651/16488483975_66d05dce51_b.jpg)
3. 请求的静态文件(例如,.png文件.js文件等等
4. 来源URLs每个请求来自的URL
5. HTTP 404 不能找到响应的代码
![](https://farm9.staticflickr.com/8669/16486742951_436539b0da_b.jpg)
6. 操作系统
7. 浏览器
8. 主机客户端IP地址
![](https://farm8.staticflickr.com/7392/16488483995_56e706d77c_z.jpg)
9. HTTP 状态代码
![](https://farm8.staticflickr.com/7282/16462493896_77b856f670_b.jpg)
10. 排名靠前的来源站点
11. 在谷歌搜索引擎上使用的排名靠前的关键字
如果你还想分析已经归档的日志可以像下面这样通过管道把它们传给GoAccess。
在Debian及其衍生版本
# zcat -f /var/log/apache2/access.log* | goaccess
在基于红帽的发行版中:
# cat /var/log/httpd/access* | goaccess
如果你需要查看以上任一章节1至11项的详细报告只需先按下对应的章节序号再按O大写o就可以显示出你需要的详细视图。下面的图像显示5-O的输出先按5再按O
![](https://farm8.staticflickr.com/7382/16302213429_48d9233f40_b.jpg)
如果要显示GeoIP位置信息请打开主机部分的详细视图如前面所述你将会看到向你的web服务器发起请求的客户端IP地址所在的地理位置。
![](https://farm8.staticflickr.com/7393/16488484075_d778aa91a2_z.jpg)
如果你的服务器还不是很繁忙以上提及的某些章节将不会显示太多信息但随着你的web服务器收到的请求越来越多这种情形会发生改变。
### 在线保存分析的报告 ###
当然有时候你不想每次都去实时查看系统状态这时保存一份分析文件或者打印版的报告就很有必要了。要生成一个HTML报告只需将之前提到的GoAccess命令的输出重定向到一个HTML文件即可。然后用web浏览器打开这份报告就可以了。
# zcat -f /var/log/apache2/access.log* | goaccess > /var/www/webserverstats.html
一旦报告生成,你将需要点击展开的链接来显示每个类别详细的视图信息:
![](https://farm9.staticflickr.com/8658/16486743041_bd8a80794d_o.png)
注释youtube视频
<iframe width="615" height="346" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/UVbLuaOpYdg?feature=oembed"></iframe>
正如我们在这篇文章中讨论的GoAccess是一个非常宝贵的工具它能为忙碌的系统管理员提供直观的HTTP统计报告。虽然GoAccess默认将结果输出到标准输出但你也可以将结果保存到JSON、HTML或者CSV文件。通过这些转换GoAccess成为了监控和展示web服务器统计数据的一个非常有用的工具。
--------------------------------------------------------------------------------
via: http://xmodulo.com/interactive-apache-web-server-log-analyzer-linux.html
作者:[Gabriel Cánepa][a]
译者:[disylee](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://goaccess.io/
[2]:http://w3techs.com/technologies/details/os-linux/all/all
[3]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
[4]:http://goaccess.io/download#dependencies
[5]:http://goaccess.io/download
[6]:http://httpd.apache.org/docs/2.4/logs.html

View File

@ -1,41 +0,0 @@
Nmap不只是邪恶
================================================================================
如果说SSH是系统管理员世界的“瑞士军刀”的话那么Nmap就是一盒炸药。炸药很容易被误用甚至把你自己的双脚炸掉但它也是一个很有威力的工具能够胜任一些看似无法完成的任务。
大多数人想到Nmap时想到的是扫描服务器、查找开放端口来发起攻击。然而这些年来这种强大的能力在你管理服务器或排查计算机问题时同样会变得难以置信的有用。无论是想弄清你网络上某个IP地址到底是哪种类型的服务器在用还是想摸清一台新的NAS设备或是扫描整个网络它都非常有用。
图1显示了对我的QNAP NAS进行网络扫描的结果。我使用这台设备的唯一目的是NFS和SMB文件共享但是你可以看到它开放着一大堆端口。如果没有Nmap很难发现这台机器到底在运行着什么。
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11825nmapf1.jpg)
### 图1 网络扫描 ###
另外一个让人意想不到的用处是用它来扫描整个网络。你甚至根本不需要root的访问权限而且你可以非常容易地指定你想要扫描的网络块例如输入
nmap 192.168.1.0/24
上述命令会扫描我本地网络中全部254个可用的IP地址让我知道哪些是可以Ping通的以及哪些端口是开放的。如果你刚刚接入一个新的硬件但不知道它通过DHCP获取到的IP地址那么此时Nmap就是无价之宝。例如上述命令在我的网络中发现了下面这台设备。
Nmap scan report for TIVO-8480001903CCDDB.brainofshawn.com (192.168.1.220)
Host is up (0.0083s latency).
Not shown: 995 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
2190/tcp open tivoconnect
2191/tcp open tvbus
9080/tcp closed glrpc
它不仅显示出了这台新的Tivo设备还告诉我哪些端口是开放的。由于它的可靠性、可用性以及“黑帽”能力Nmap获得了本月的“编辑推荐”奖。这不是一个新的程序但如果你是一个Linux用户的话你应该玩玩它。
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/nmap%E2%80%94not-just-evil
作者:[Shawn Powers][a]
译者:[theo-l](https://github.com/theo-l)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/users/shawn-powers

View File

@ -0,0 +1,150 @@
走进Linux之systemd启动过程
================================================================================
Linux系统的启动方式有点复杂而且总是有需要优化的地方。传统的Linux系统启动过程主要由著名的init进程也被称为SysV init启动系统处理而基于init的启动系统被认为有效率不足的问题。systemd是Linux系统的另一种启动方式宣称弥补了以[传统Linux SysV init][2]为基础的系统的缺点。在这里我们将着重讨论systemd的特性和争议但是为了更好地理解它也会看一下传统的以SysV init为基础的Linux启动过程是什么样的。友情提醒一下systemd仍然处在测试阶段而未来发布的Linux操作系统也正准备用systemd启动管理程序替代当前的启动过程。
### 理解Linux启动过程 ###
在我们打开Linux电脑的电源后第一个启动的进程就是init。分配给init进程的PID是1。它是系统其他所有进程的父进程。当一台Linux电脑启动后处理器会先在系统存储中查找BIOS之后BIOS会测试系统资源然后找到第一个引导设备通常设置为硬盘然后会查找硬盘的主引导记录MBR然后加载到内存中并把控制权交给它以后的启动过程就由MBR控制。
主引导记录会初始化引导程序Linux上有两个著名的引导程序GRUB和LILO80%的Linux系统在用GRUB引导程序这个时候GRUB或LILO会加载内核模块。内核会马上查找/sbin下的init进程并执行它。从这里开始init成为了Linux系统的父进程。init读取的第一个文件是/etc/inittab通过它init会确定我们Linux操作系统的运行级别。它会从文件/etc/fstab里查找分区表信息然后做相应的挂载。然后init会启动/etc/init.d里指定的默认启动级别的所有服务/脚本。所有服务在这里通过init一个一个被初始化。在这个过程里init每次只启动一个服务所有服务/守护进程都在后台执行并由init来管理。
关机过程差不多是相反的过程首先init停止所有服务最后阶段会卸载文件系统。
以上提到的启动过程有一些不足的地方。而用一种更好的方式来替代传统init的需求已经存在很长时间了。也产生了许多替代方案。其中比较著名的有Upstart、Epoch、Muda和Systemd。而Systemd获得最多关注并被认为是目前最佳的方案。
### 理解Systemd ###
开发Systemd的主要目的就是减少系统引导时间和计算开销。Systemd系统管理守护进程最开始以GNU GPL协议授权开发现在已转为使用GNU LGPL协议它是如今讨论最热烈的引导和服务管理程序。如果你的Linux系统配置为使用Systemd引导程序那么启动过程将交由systemd处理而不再使用传统的SysV init。Systemd的一个核心功能是它同时兼容SysV init的启动脚本。
Systemd引入了并行启动的概念它会为每个需要启动的守护进程建立一个管道套接字这些套接字对于使用它们的进程来说是抽象的这样它们可以允许不同守护进程之间进行交互。Systemd会创建新进程并为每个进程分配一个控制组。处于不同控制组的进程之间可以通过内核来互相通信。[systemd处理开机启动进程][2]的方式非常漂亮和传统基于init的系统比起来优化了太多。让我们看下Systemd的一些核心功能。
- 和init比起来引导过程简化了很多
- Systemd支持并发引导过程从而可以更快启动
- 通过控制组来追踪进程而不是PID
- 优化了处理引导过程和服务之间依赖的方式
- 支持系统快照和恢复
- 监控已启动的服务;也支持重启已崩溃服务
- 包含了systemd-login模块用于控制用户登录
- 支持加载和卸载组件
- 较低的内存占用以及任务调度能力
- 记录事件的Journald模块和记录系统日志的syslogd模块
Systemd同时也清晰地处理了系统关机过程。它在/usr/lib/systemd/目录下有三个脚本分别叫systemd-halt.servicesystemd-poweroff.servicesystemd-reboot.service。这几个脚本会在用户选择关机重启或待机时执行。在接收到关机事件时systemd首先卸载所有文件系统并停止所有内存交换设备断开存储设备之后停止所有剩下的进程。
![](http://images.linoxide.com/systemd-boot-process.jpg)
### Systemd结构概览 ###
让我们看一下Linux系统在使用systemd作为引导程序时的开机启动过程的结构性细节。为了简单我们将在下面按步骤列出来这个过程
**1.** 当你打开电源后电脑所做的第一件事情就是BIOS初始化。BIOS会读取引导设备设定定位并传递系统控制权给MBR假设硬盘是第一引导设备
**2.** MBR从Grub或LILO引导程序读取相关信息并初始化内核。接下来将由Grub或LILO继续引导系统。如果你在grub配置文件里指定了systemd作为引导管理程序之后的引导过程将由systemd完成。Systemd使用“target”来处理引导和服务管理过程。这些systemd里的“target”文件被用于分组不同的引导单元以及启动同步进程。
**3.** systemd执行的第一个目标是**default.target**。但实际上default.target是指向**graphical.target**的软链接。Linux里的软链接用起来和Windows下的快捷方式一样。文件Graphical.target的实际位置是/usr/lib/systemd/system/graphical.target。在下面的截图里显示了graphical.target文件的内容。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/graphical1.png)
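如果想在自己的系统上确认当前的默认target下面是一个简单的查看方式假设系统已经在使用systemddefault.target的具体路径可能因发行版而异

    $ systemctl get-default
    $ ls -l /usr/lib/systemd/system/default.target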
**4.** 在这个阶段,会启动**multi-user.target**而这个target将自己的子单元放在目录“/etc/systemd/system/multi-user.target.wants”里。这个target为多用户支持设定系统环境。非root用户会在这个阶段的引导过程中启用。防火墙相关的服务也会在这个阶段启动。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/multi-user-target1.png)
"multi-user.target"会将控制权交给另一层“**basic.target**”。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Basic-Target.png)
**5.** "basic.target"单元用于启动普通服务特别是图形管理服务。它通过/etc/systemd/system/basic.target.wants目录来决定哪些服务会被启动basic.target之后将控制权交给**sysinit.target**.
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Sysint-Target.png)
**6.** "sysinit.target"会启动重要的系统服务例如系统挂载内存交换空间和设备内核补充选项等等。sysinit.target在启动过程中会传递给**local-fs.target**。这个target单元的内容如下面截图里所展示。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/local-FS-Target.png)
**7.** local-fs.target这个target单元不会启动用户相关的服务它只处理底层核心服务。这个target会根据/etc/fstab和/etc/inittab来执行相关操作。
### 系统引导性能分析 ###
Systemd提供了工具用于识别和定位引导相关的问题或性能影响。**Systemd-analyze**是一个内建的命令可以用来检测引导过程。你可以找出在启动过程中出错的单元然后跟踪并改正引导组件的问题。在下面列出一些常用的systemd-analyze命令。
**systemd-analyze time** 用于显示内核和普通用户空间启动时所花的时间。
$ systemd-analyze time
Startup finished in 1440ms (kernel) + 3444ms (userspace)
**systemd-analyze blame** 会列出所有正在运行的单元,按它们初始化所花费的时间排序,通过这种方式你就能知道哪些服务在引导过程中要花较长时间来启动。
$ systemd-analyze blame
2001ms mysqld.service
234ms httpd.service
191ms vmms.service
**systemd-analyze verify** 显示在所有系统单元中是否有语法错误。**systemd-analyze plot** 可以用来把整个引导过程写入一个SVG格式文件里。整个引导过程非常长不方便阅读所以通过这个命令我们可以把输出写入一个文件之后再查看和分析。下面这个命令就是做这个。
systemd-analyze plot > boot.svg
### Systemd的争议 ###
Systemd并没有幸运地获得所有人的青睐一些专家和管理员对于它的工作方式和开发有不同意见。根据对于Systemd的批评它不是“类Unix”方式因为它试着替换一些系统服务。一些专家也不喜欢使用二进制配置文件的想法。据说编辑systemd配置非常困难而且没有一个可用的图形工具。
### 在Ubuntu 14.04和12.04上测试Systemd ###
本来Ubuntu决定从Ubuntu 16.04 LTS开始使用Systemd来替换当前的引导过程。Ubuntu 16.04预计在2016年4月发布但是考虑到Systemd的流行和需求即将发布的**Ubuntu 15.04**将采用它作为默认引导程序。好消息是Ubuntu 14.04 Trusty Tahr和Ubuntu 12.04 Precise Pangolin的用户可以在他们的机器上测试Systemd。测试过程并不复杂你所要做的只是把相关的PPA包含到系统中更新仓库并升级系统。
**声明**请注意它仍然处于Ubuntu的测试和开发阶段。升级测试包可能会带来一些未知错误最坏的情况下有可能损坏你的系统配置。请确保在尝试升级前已经备份好重要数据。
在终端里运行下面的命令来添加PPA到你的Ubuntu系统里
sudo add-apt-repository ppa:pitti/systemd
你将会看到警告信息因为我们尝试使用临时/测试PPA而它们是不建议用于实际工作机器上的。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/PPA-Systemd1.png)
然后运行下面的命令更新APT包管理仓库。
sudo apt-get update
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Update-APT1.png)
运行下面的命令升级系统。
sudo apt-get dist-upgrade
![](http://blog.linoxide.com/wp-content/uploads/2015/03/System-Upgrade.png)
就这些你应该已经可以在你的Ubuntu系统里看到Systemd配置文件了打开/lib/systemd/目录可以看到这些文件。
好吧现在让我们编辑一下grub配置文件指定systemd作为默认引导程序。可以使用Gedit文字编辑器编辑grub配置文件。
sudo gedit /etc/default/grub
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Edit-Grub.png)
在文件里修改GRUB_CMDLINE_LINUX_DEFAULT项设定它的参数为“**init=/lib/systemd/systemd**”
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Grub-Systemd.png)
就这样你的Ubuntu系统已经不再使用传统的引导程序了改为使用Systemd管理器。重启你的机器然后查看systemd引导过程吧。
![](http://blog.linoxide.com/wp-content/uploads/2015/03/Sytemd-Boot.png)
### 结论 ###
Systemd毫无疑问为改进Linux引导过程前进了一大步它包含了一套漂亮的库和守护进程它们配合工作来优化系统的引导和关闭过程。许多Linux发行版正准备将它作为自己的正式引导程序。在以后的Linux发行版中我们将有望看到systemd成为默认的引导管理程序。但是另一方面为了获得成功并得到广泛应用systemd仍需要认真处理那些批评意见。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/systemd-boot-process/
作者:[Aun Raza][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:http://linoxide.com/booting/boot-process-of-linux-in-detail/
[2]:http://0pointer.de/blog/projects/self-documented-boot.html

View File

@ -0,0 +1,266 @@
11个Linux终端命令让你的世界摇滚起来
================================================================================
我已经用了十年的Linux了通过今天这篇文章我将向大家展示一系列的Linux命令、工具和技巧这些都是我希望一开始就有人教给我而不是在成长道路上绊住我之后才学会的。
![Linux Keyboard Shortcuts.](http://f.tqn.com/y/linux/1/L/m/J/1/keyboardshortcuts.png)
Linux的快捷键。
### 1. 命令行日常系快捷键 ###
如下的快捷方式非常有用,能够极大的提升你的工作效率:
- CTRL + U - 剪切光标前的内容
- CTRL + K - 剪切光标至行末的内容
- CTRL + Y - 粘贴
- CTRL + E - 移动光标到行末
- CTRL + A - 移动光标到行首
- ALT + F - 跳向下一个空格
- ALT + B - 跳回上一个空格
- ALT + Backspace - 删除前一个字
- CTRL + W - 剪切光标后一个字
- Shift + Insert - 向终端内粘贴文本
那么,为了让上述内容更易理解,来看下面的这行命令。
sudo apt-get intall programname
如你所见命令中存在拼写错误为了正常执行需要把“intall”替换成“install”。
想象现在光标正在行末我们有很多方法让光标回到单词intall处并修正它。
我可以按两次ALT+B这样光标就会在如下的位置这里用^代替光标的位置)。
sudo apt-get^intall programname
现在你可以按两下右方向键再输入“s”把intall修正为install。
如果你想将浏览器中的文本复制到终端,可以使用快捷键"shift + insert"。
![](http://f.tqn.com/y/linux/1/L/n/J/1/sudotricks2.png)
### 2. SUDO !! ###
这个命令如果你还不知道,我觉得你应该好好感谢我,因为如果你不知道它,那每次在输入完长串命令后看到“permission denied”时你一定会懊恼不堪。
- sudo !!
如何使用sudo !!?很简单。试想你刚输入了如下命令:
apt-get install ranger
一定会出现“Permission denied”除非你登录了权限足够高的账户。
sudo !!就会用sudo的形式运行上一条命令。所以上一条命令可以看成是这样
sudo apt-get install ranger
如果你不知道什么是sudo[戳这里][1]。
![Pause Terminal Applications.](http://f.tqn.com/y/linux/1/L/o/J/1/pauseapps.png)
暂停终端运行的应用程序。
### 3. 暂停并在后台运行命令 ###
我曾经写过一篇如何在终端后台运行命令的指南。
- CTRL + Z - 暂停应用程序
- fg - 重新将程序唤到前台
如何使用这个技巧呢?
试想你正用nano编辑一个文件
sudo nano abc.txt
文件编辑到一半你意识到你需要马上在终端输入些命令但是nano在前台运行让你不能输入。
你可能觉得唯一的方法就是保存文件退出nano运行完命令后再重新打开nano。
其实你只要按CTRL + Z前台的命令就会暂停画面就切回到命令行了。然后你就能运行你想要运行的命令等命令运行完后在终端窗口输入“fg”就可以回到先前暂停的任务。
有一个非常有趣的尝试就是用nano打开文件输入一些东西然后暂停会话。再用nano打开另一个文件输入一些什么后再暂停会话。如果你输入“fg”你将回到第二个用nano打开的文件。只有退出nano再输入“fg”你才会回到第一个用nano打开的文件。
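下面是一个简单的操作示意其中jobs和fg %1是标准的shell作业控制命令原文没有提到这里仅作补充

    $ sudo nano abc.txt    # 打开文件,编辑到一半时按 CTRL + Z 暂停
    $ jobs                 # 查看被暂停的任务及其编号
    $ fg %1                # 回到编号为1的任务即nano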
![nohup.](http://f.tqn.com/y/linux/1/L/p/J/1/nohup3.png)
nohup.
### 4. 使用nohup在登出SSH会话后仍运行命令 ###
如果你用ssh登录别的机器那么[nohup命令][2]真的非常有用。
那么怎么使用nohup呢
想象一下你使用ssh远程登录到另一台电脑上你运行了一条非常耗时的命令然后退出了ssh会话不过命令仍在执行。而nohup可以将这一场景变成现实。
举个例子以测试为目的我用[树莓派][3]来下载发行版。
我绝对不会给我的树莓派外接显示器、键盘或鼠标。
一般我总是用[SSH][4]从笔记本电脑连接到树莓派。如果我在不用nohup的情况下使用树莓派下载大型文件那我就必须等到下载完成后才能登出ssh会话关掉笔记本。如果是这样那我为什么要使用树莓派下文件呢
使用nohup的方法也很简单只需如下例中在nohup后输入要执行的命令即可
nohup wget http://mirror.is.co.za/mirrors/linuxmint.com/iso//stable/17.1/linuxmint-17.1-cinnamon-64bit.iso &
![Schedule tasks with at.](http://f.tqn.com/y/linux/1/L/q/J/1/at.png)
At管理任务日程
### 5. 特定的时间运行Linux命令 ###
nohup命令可以让你在登出SSH会话后服务器上的任务仍然继续执行这种时候它十分有用。
想一下如果你需要在特定的时间执行同一个命令,这种情况该怎么办呢?
命令at就能妥善解决这一情况。以下是at使用示例。
at 10:38 PM Fri
at> cowsay 'hello'
at> CTRL + D
上面的命令能在周五下午10时38分运行程序[cowsay][5]。
使用的语法就是at后追加日期时间。
当at>提示符出现后就可以输入你想在那个时间运行的命令了。
CTRL + D返回终端。
还有许多可用的日期和时间格式值得你好好翻一翻at的man手册来找到更多的使用方式。
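另外at还自带了两个很实用的配套命令原文没有提到这里仅作补充atq用来列出已排队的任务atrm用来按编号删除任务

    atq      # 列出当前排队的at任务
    atrm 2   # 删除编号为2的任务编号以atq的输出为准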
![](http://f.tqn.com/y/linux/1/L/l/J/1/manmost.png)
### 6. Man手册 ###
Man手册会为你列出命令和参数的使用大纲教你如何使用她们。
Man手册看起来沉闷呆板。我思忖它们也不是被设计来娱乐我们的
不过这不代表你不能做些什么来使它们变得性感点。
export PAGER=most
你需要先安装 most它会使你的man手册的色彩更加绚丽。
你可以用以下命令给man手册设定指定的行宽
export MANWIDTH=80
最后,如果你有浏览器,你可以使用-H在默认浏览器中打开任意的man页。
man -H <command>
注意啦,以上的命令只有在你已经将默认浏览器设置到环境变量$BROWSER中之后才有效果哟。
![View Processes With htop.](http://f.tqn.com/y/linux/1/L/r/J/1/nohup2.png)
使用htop查看进程。
### 7. 使用htop查看和管理进程 ###
你用哪个命令找出电脑上正在运行的进程的呢?我敢打赌是‘[ps][6]’并在其后加不同的参数来得到你所想要的不同输出。
安装‘[htop][7]’吧!绝对让你相见恨晚。
htop在终端中将进程以列表的方式呈现有点类似于Windows中的任务管理器。
你可以使用功能键的组合来切换排列的方式和展示出来的项。你也可以在htop中直接杀死进程。
在终端中简单的输入htop即可运行。
htop
![Command Line File Manager - Ranger.](http://f.tqn.com/y/linux/1/L/s/J/1/ranger.png)
命令行文件管理 - Ranger.
### 8. 使用ranger浏览文件系统 ###
如果说htop是命令行进程控制的好帮手那么[ranger][8]就是命令行浏览文件系统的好帮手。
你在用之前可能需要先安装,不过一旦安装了以后就可以在命令行输入以下命令启动她:
ranger
在命令行窗口中ranger和一些别的文件管理器很像但它是左右结构而不是上下结构的这意味着按左方向键你会返回上一级文件夹而按右方向键则会进入下一级文件夹。
在使用前ranger的man手册还是值得一读的这样你就可以用快捷键操作ranger了。
![Cancel Linux Shutdown.](http://f.tqn.com/y/linux/1/L/t/J/1/shutdown.png)
Linux取消关机。
### 9. 取消关机 ###
无论是在命令行还是在图形用户界面中启动了[关机][9],你都有可能发现自己其实并不是真的想要关机。这时可以运行下面的命令取消:
shutdown -c
需要注意的是,如果关机已经开始则有可能来不及停止关机。
以下是另一个可以尝试命令:
- [pkill][10] shutdown
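一个更从容的做法是在计划关机时就留出缓冲时间,这样随时可以取消(下面的时间只是示例):

    sudo shutdown -h +10    # 10分钟后关机
    sudo shutdown -c        # 在这10分钟内随时取消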
![Kill Hung Processes With XKill.](http://f.tqn.com/y/linux/1/L/u/J/1/killhungprocesses.png)
使用XKill杀死挂起进程。
### 10. 杀死挂起进程的简单方法 ###
想象一下,你正在运行的应用程序不明原因的僵死了。
你可以使用ps -ef找到该进程然后杀掉它或者使用htop。
有一个更快、更容易的命令叫做[xkill][11]。
简单的在终端中输入以下命令并在窗口中点击你想杀死的应用程序。
xkill
那如果整个系统挂掉了怎么办呢?
按住键盘上的alt和sysrq键不放然后依次输入
- [REISUB][12]
这样不按电源键你的计算机也能重启了。
![youtube-dl.](http://f.tqn.com/y/linux/1/L/v/J/1/youtubedl2.png)
youtube-dl.
### 11. 下载Youtube视频 ###
一般来说我们大多数人都喜欢看Youtube的视频也会通过钟爱的播放器播放Youtube的流。
如果你需要离线一段时间(比如:从苏格兰南部坐飞机到英格兰南部旅游的这段时间)那么你可能希望下载一些视频到存储设备中,到闲暇时观看。
你所要做的就是从包管理器中安装youtube-dl。
你可以用以下命令使用youtube-dl
youtube-dl url-to-video
你能在Youtube视频页面点击分享链接得到视频的url。只要简单地复制链接再粘贴到命令行就行了要用shift + insert快捷键哟
### 总结 ###
希望你在这篇文章中得到帮助并且在这11条中找到至少一条让你惊叹“原来可以这样”的技巧。
--------------------------------------------------------------------------------
via: http://linux.about.com/od/commands/tp/11-Linux-Terminal-Commands-That-Will-Rock-Your-World.htm
作者:[Gary Newell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linux.about.com/bio/Gary-Newell-132058.htm
[1]:http://linux.about.com/cs/linux101/g/sudo.htm
[2]:http://linux.about.com/library/cmd/blcmdl1_nohup.htm
[3]:http://linux.about.com/od/mobiledevicesother/a/Raspberry-Pi-Computer-Running-Linux.htm
[4]:http://linux.about.com/od/commands/l/blcmdl1_ssh.htm
[5]:http://linux.about.com/cs/linux101/g/cowsay.htm
[6]:http://linux.about.com/od/commands/l/blcmdl1_ps.htm
[7]:http://www.linux.com/community/blogs/133-general-linux/745323-5-commands-to-check-memory-usage-on-linux
[8]:http://ranger.nongnu.org/
[9]:http://linux.about.com/od/commands/l/blcmdl8_shutdow.htm
[10]:http://linux.about.com/library/cmd/blcmdl1_pkill.htm
[11]:http://linux.about.com/od/funnymanpages/a/funman_xkill.htm
[12]:http://blog.kember.net/articles/reisub-the-gentle-linux-restart/

Some files were not shown because too many files have changed in this diff Show More