Merge branch 'master' of https://github.com/LCTT/TranslateProject
commit f70e65a8e8
@ -1,15 +1,15 @@
|
||||
C语言数据类型是如何被大多数计算机系统所支持?
|
||||
---------
|
||||
========================
|
||||
|
||||
#问题:
|
||||
###问题:
|
||||
|
||||
在读K&R版的*The C Programming Language*一书时,我在[介绍,第3页]看到这样一条说明:
|
||||
在读K&R版的*The C Programming Language*一书时,我在[介绍,第3页]看到这样一条说明:
|
||||
|
||||
>因为C语言提供的数据类型和控制结构可以直接被大部分计算机系统所支持,所以在实现自包含程序时所需要的运行库文件一般很小。
|
||||
>**因为C语言提供的数据类型和控制结构可以直接被大部分计算机系统所支持,所以在实现自包含程序时所需要的运行库文件一般很小。**
|
||||
|
||||
这段黑体说明了什么?能找到一个例子来说明C语言中的某种数据类型或控制结构并不能被一种计算机系统所支持呢?
|
||||
这段黑体说明了什么?能否找到一个例子来说明C语言中的某种数据类型或控制结构不被某种计算机系统直接支持呢?
|
||||
|
||||
#回答:
|
||||
###回答:
|
||||
|
||||
事实上,C语言中确实有不被直接支持的数据类型。
|
||||
|
||||
@ -31,13 +31,13 @@ return _float_add(x, y);
|
||||
|
||||
另一个常见的例子是64位整型数(C语言标准中'long long'类型是1999年之后才出现的),这种类型在32位系统上也不能直接使用。古董级的SPARC系统则不支持整型乘法,所以在运行时必须提供乘法的实现。当然,还有一些其它例子。
|
||||
|
||||
##其它语言
|
||||
####其它语言
|
||||
|
||||
相比起来,其它编程语言有更加复杂的基本类型。
|
||||
|
||||
比如,Lisp中的symbol需要大量的运行时实现支持,就像Lua中的tables、Python中的strings、Fortran中的arrays,等等。在C语言中等价的类型通常要么不属于标准库(C语言没有标准symbols或tables),要么更加简单,而且并不需要那么多的运行时支持(C语言中的array基本上就是指针,以NULL结尾的字符串实现起来也很简单)。
|
||||
比如,Lisp中的symbol需要大量的运行时实现支持,就像Lua中的table、Python中的string、Fortran中的array,等等。在C语言中等价的类型通常要么不属于标准库(C语言没有标准symbol或table),要么更加简单,而且并不需要那么多的运行时支持(C语言中的array基本上就是指针,以NULL结尾的字符串实现起来也很简单)。
|
||||
|
||||
##控制结构
|
||||
####控制结构
|
||||
|
||||
异常处理是C语言中没有的一种控制结构。C语言里的非局部跳转只有'setjmp()'和'longjmp()'两个函数,它们只能保存和恢复处理器状态中的某些部分。相比之下,C++运行时环境在抛出异常时必须先回溯函数调用栈,再依次调用析构函数和异常处理函数。
|
||||
|
||||
@ -46,7 +46,7 @@ via:[stackoverflow](http://stackoverflow.com/questions/27977522/how-are-c-data-t
|
||||
|
||||
作者:[Dietrich Epp][a]
|
||||
译者:[KayGuoWhu](https://github.com/KayGuoWhu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,10 +1,10 @@
|
||||
在Ubuntu 14.04 中修复无法修复回收站[快速提示]
|
||||
在Ubuntu 14.04 中修复无法清空回收站的问题
|
||||
================================================================================
|
||||

|
||||
|
||||
### 问题 ###
|
||||
|
||||
**无法在Ubuntu 14.04中清空回收站的问题**。我右键回收站图标并选择清空回收站,就像我一直做的那样。我看到进度条显示删除文件中过了一段时间。但是它停止了,并且Nautilus文件管理也停止了。我不得不在终端中停止了它。
|
||||
**我遇到了无法在Ubuntu 14.04中清空回收站的问题**。我右键回收站图标并选择清空回收站,就像我一直做的那样。我看到进度条显示正在删除文件,但过了一段时间它就停住了,Nautilus文件管理器也失去了响应,我不得不在终端中强行结束它。
|
||||
|
||||
但这很让人烦恼,因为文件还留在回收站中,而且我反复尝试清空之后窗口都会冻结。
|
||||
|
||||
@ -18,7 +18,7 @@
|
||||
|
||||
这里请注意你的输入:你是在用超级管理员权限运行删除命令,请确认不会误删其他文件或者目录。
|
||||
|
||||
上面的命令会删除回收站目录下的所有文件。换句话说,这是用命令清空垃圾箱。使用玩上面的命令后,你会看到垃圾箱已经清空了。如果你删除了所有文件,你不应该在看到Nautilus崩溃的问题了。
|
||||
上面的命令会删除回收站目录下的所有文件。换句话说,这是用命令清空回收站。使用完上面的命令后,你会看到回收站已经清空了。如果你清空了所有文件,就不应该再看到Nautilus崩溃的问题了。
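上文被省略的“清空回收站”命令通常类似下面这样(假设性示例,并非原文逐字内容;注意 rm -rf 具有破坏性,请先核对路径):

    $ sudo rm -rf ~/.local/share/Trash/*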
|
||||
|
||||
### 对你有用么? ###
|
||||
|
||||
@ -30,7 +30,7 @@ via: http://itsfoss.com/fix-empty-trash-ubuntu/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -15,13 +15,13 @@ MariaDB是MySQL社区开发的分支,也是一个增强型的替代品。它
|
||||
|
||||
现在,让我们迁移到MariaDB吧!
|
||||
|
||||
**以测试为目的**让我们创建一个叫**linoxidedb**的示例数据库。
|
||||
让我们创建一个叫**linoxidedb**的**用于测试的**示例数据库。
|
||||
|
||||
使用以下命令用root账户登陆MySQL:
|
||||
|
||||
$ mysql -u root -p
|
||||
|
||||
输入mysql root用户的密码后,你将进入**mysql的命令行**
|
||||
输入mysql 的 root 用户密码后,你将进入**mysql的命令行**
|
||||
|
||||
**创建测试数据库:**
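原文此处省略了具体的建库语句;下面是一个假设性的示例(数据库名沿用上文的 linoxidedb):

    mysql> CREATE DATABASE linoxidedb;
    mysql> SHOW DATABASES;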
|
||||
|
||||
@ -54,7 +54,8 @@ MariaDB是MySQL社区开发的分支,也是一个增强型的替代品。它
|
||||
$ mysqldump: Error: Binlogging on server not active
|
||||
|
||||

|
||||
mysqldump error
|
||||
|
||||
*mysqldump error*
|
||||
|
||||
为了修复这个错误,我们需要对**my.cnf**文件做一些小改动。
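这个错误的原因是带 --master-data 参数备份时需要服务器开启二进制日志;一个假设性的改动示例如下(具体行请以原文与你的发行版为准),即在 my.cnf 的 [mysqld] 段中启用:

    # /etc/mysql/my.cnf 的 [mysqld] 段(示例)
    log_bin = /var/log/mysql/mysql-bin.log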
|
||||
|
||||
@ -68,7 +69,7 @@ mysqldump error
|
||||
|
||||

|
||||
|
||||
好了在保存并关闭文件后,我们需要重启一下mysql服务。运行以下命令重启:
|
||||
好了,在保存并关闭文件后,我们需要重启一下mysql服务。运行以下命令重启:
|
||||
|
||||
$ sudo /etc/init.d/mysql restart
|
||||
|
||||
@ -77,17 +78,18 @@ mysqldump error
|
||||
$ mysqldump --all-databases --user=root --password --master-data > backupdatabase.sql
|
||||
|
||||

|
||||
dumping databases
|
||||
|
||||
*dumping databases*
|
||||
|
||||
上面的命令将会备份所有的数据库,把它们存储在当前目录下的**backupdatabase.sql**文件中。
|
||||
|
||||
### 2. 卸载MySQL ###
|
||||
|
||||
首先,我们得把**my.cnt文件挪到安全的地方去**。
|
||||
首先,我们得把**my.cnf文件挪到安全的地方去**。
|
||||
|
||||
**注**:my.cnf文件将不会在你卸载MySQL包的时候被删除,我们这样做只是以防万一。在MariaDB安装时,它会询问我们是保持现存的my.cnf文件,还是使用包中自带的版本(即新my.cnf文件)。
|
||||
**注**:在你卸载MySQL包的时候不会自动删除my.cnf文件,我们这样做只是以防万一。在MariaDB安装时,它会询问我们是保持现存的my.cnf文件,还是使用包中自带的版本(即新my.cnf文件)。
|
||||
|
||||
在shell或终端中输入如下命令来备份my.cnt文件:
|
||||
在shell或终端中输入如下命令来备份my.cnf文件:
|
||||
|
||||
$ sudo cp /etc/mysql/my.cnf my.cnf.bak
|
||||
|
||||
@ -111,7 +113,7 @@ dumping databases
|
||||
|
||||

|
||||
|
||||
键值导入并且添加完仓库后你就可以用以下命令安装MariaDB了:
|
||||
键值导入并且添加完仓库后,你就可以用以下命令安装MariaDB了:
|
||||
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install mariadb-server
|
||||
@ -120,7 +122,7 @@ dumping databases
|
||||
|
||||

|
||||
|
||||
我们应该还没忘记在MariaDB安装时,它会问你是使用现有的my.cnf文件,还是包中自带的版本。你可以使用以前的my.cnf也可以用包中自带的。即使你想直接使用新的my.cnf文件,你依然可以晚点将以前的备份内容还原进去(别忘了我们已经将它复制到安全的地方那个去了)。所以,我们直接选择了默认的选项“N”。如果需要安装其他版本,请参考[MariaDB官方仓库][2]。
|
||||
我们应该还没忘记在MariaDB安装时,它会问你是使用现有的my.cnf文件,还是包中自带的版本。你可以使用以前的my.cnf也可以用包中自带的。即使你想直接使用新的my.cnf文件,你依然可以晚点时候将以前的备份内容还原进去(别忘了我们已经将它复制到安全的地方了)。所以,我们直接选择了默认的选项“N”。如果需要安装其他版本,请参考[MariaDB官方仓库][2]。
|
||||
|
||||
### 4. 恢复配置文件 ###
|
||||
|
||||
@ -136,7 +138,7 @@ dumping databases
|
||||
|
||||
就这样,我们已成功将之前的数据库导入了进来。
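原文此处省略了具体的导入命令;一个假设性的示例如下(以前文的备份文件 backupdatabase.sql 为例):

    $ mysql -u root -p < backupdatabase.sql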
|
||||
|
||||
来,让我们登陆一下mysql命令行,检查一下数据库是否真的已经导入了:
|
||||
来,让我们登录一下mysql命令行,检查一下数据库是否真的已经导入了:
|
||||
|
||||
$ mysql -u root -p
|
||||
|
||||
@ -152,15 +154,15 @@ dumping databases
|
||||
|
||||
### 总结 ###
|
||||
|
||||
最后,我们已经成功地从MySQL迁移到了MariaDB数据库管理系统。MariaDB比MySQL好,虽然在性能方面MySQL还是比它更快,但是MariaDB的优点在于它额外的特性与支持的许可证。这能够确保它自由开源(FOSS),并永久自由开源,相比之下MySQL还有许多额外的插件,有些不能自由使用代码、有些没有公开的开发进程、有些在不久的将来会变的不再自由开源。如果你有任何的问题、评论、反馈给我们,不要犹豫直接在评论区留下你的看法。谢谢观看本教程,希望那你能喜欢MariaDB。
|
||||
最后,我们已经成功地从MySQL迁移到了MariaDB数据库管理系统。MariaDB比MySQL好,虽然在性能方面MySQL还是比它更快,但是MariaDB的优点在于它额外的特性与支持的许可证。这能够确保它自由开源(FOSS),并永久自由开源,相比之下MySQL还有许多额外的插件,有些不能自由使用代码、有些没有公开的开发进程、有些在不久的将来会变的不再自由开源。如果你有任何的问题、评论、反馈给我们,不要犹豫直接在评论区留下你的看法。谢谢观看本教程,希望你能喜欢MariaDB。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/migrate-mysql-mariadb-linux/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[martin2011qi](https://github.com/martin2011qi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,10 +1,10 @@
|
||||
如何使用btsync通过网络实现电脑间文件共享
|
||||
如何使用btsync通过网络实现计算机间的文件共享
|
||||
================================================================================
|
||||
如果你是使用各式设备在网上工作的这类人,我相信你肯定需要一个在不同设备间同步文件及目录的方法,至少是非常渴望有这种功能。
|
||||
如果你是那种在多台设备上办公的人,我相信你肯定需要一个在不同设备间同步文件及目录的方法,至少会非常渴望有这种功能。
|
||||
|
||||
BitTorrent Sync简称btsync,是一个基于BitTorrent(著名P2P文件分享协议)的免费跨平台同步工具。与传统BitTorrent客户端不同的是btsync用于传输加密和访问授权的是不同操作系统及设备中自动生成的键。
|
||||
BitTorrent Sync简称btsync,是一个基于BitTorrent(著名P2P文件分享协议)的免费跨平台同步工具。与传统BitTorrent客户端不同的是,btsync可以在不同操作系统及设备之间加密数据传输和基于自动生成的密钥来授予访问共享文件的权限。
|
||||
|
||||
更具体点,当你想要通过btsync共享一些文件或文件夹,相应的读/写键(所谓的秘密编码)已经创建。这些键将会通过不同的途径例如HTTPS链接,电子邮件,二维码等被分享。一旦两台设备通过一个键配对成功,链接内容将会直接在其间同步。如果没有事先设置,传输将不会有文件大小和速度的限制。你可以在btsync中创建账号,以此来创建和管理通过网络分享的键和文件。
|
||||
更具体点,当你想要通过btsync共享一些文件或文件夹,相应的读/写密钥(所谓的密码)就创建好了。这些密钥可以通过HTTPS链接,电子邮件,二维码等在不同的设备间共享传递。一旦两台设备通过一个密钥配对成功,其所对应的内容将会直接在其间同步。如果没有事先设置,传输将不会有文件大小和速度的限制。你可以在btsync中创建账号,这样你可以通过 web 界面来创建和管理通过网络分享的密钥和文件。
|
||||
|
||||
BitTorrent Sync可以在许多的操作系统上运行,包括Linux,MacOS X,Windows,在 [Android][1]和[iOS][2]上也可以使用。在这里,我们将教你在Linux环境(一台家用服务器)与Windows环境(一台笔记本电脑)之间如何使用BitTorrent Sync来同步文件。
|
||||
|
||||
@ -12,7 +12,7 @@ BitTorrent Sync可以在许多的操作系统上运行,包括Linux,MacOS X
|
||||
|
||||
BitTorrent Sync可以在[项目主页][3]直接下载。由于Windows版本的BitTorrent Sync安装起来十分简单,所以我们假设笔记本上已经安装了。我们把焦点放到Linux服务器上的安装和配置。
|
||||
|
||||
在下载页面中选择你的系统架构,右键相应链接,选择复制连接地址(或者简单的依靠浏览器判断),将链接粘贴到在终端中用wget下载,如下:
|
||||
在下载页面中选择你的系统架构,右键相应链接,复制链接地址(不同浏览器中该功能的名称可能略有差异),将链接粘贴到终端中,用wget下载,如下:
|
||||
|
||||
**64位Linux:**
|
||||
|
||||
@ -36,11 +36,11 @@ BitTorrent Sync可以在[项目主页][3]直接下载。由于Windows版本的Bi
|
||||
|
||||
export PATH=$PATH:/usr/local/bin/btsync
|
||||
|
||||
或者在在该文件夹中运行btsync的二进制文件。我们推荐使用第一种方式,虽需要少量的输入但更容易记忆。
|
||||
或者在该文件夹中运行btsync的二进制文件。我们推荐使用第一种方式,虽需要少量的输入但更容易记忆。
|
||||
|
||||
### 配置Btsync ###
|
||||
### 配置btsync ###
|
||||
|
||||
Btsync带有一个内置的网络服务器被用作其管理接口。想要使用这个接口你需要创建一个配置文件。你可以使用以下命令来创建:
|
||||
btsync带有一个内置的网络服务器,用作其管理接口。想要使用这个接口你需要创建一个配置文件。你可以使用以下命令来创建:
|
||||
|
||||
# btsync --dump-sample-config > btsync.config
|
||||
|
||||
@ -54,19 +54,21 @@ Btsync带有一个内置的网络服务器被用作其管理接口。想要使
|
||||
|
||||

|
||||
|
||||
如果你将来想要优化一下它的配置,可以看一下 /usr/local/bin/btsync 目录下的 README 文件,不过现在我们先继续下面的步骤。
|
||||
|
||||
### 第一次运行btsync ###
|
||||
|
||||
作为系统管理员,我们离不开日志文件!所以在启动btsync之前,我们先为它创建一个日志文件。
|
||||
|
||||
# touch /var/log/btsync.log
|
||||
|
||||
最后,让我们开启btsync:
|
||||
最后,让我们启动btsync:
|
||||
|
||||
# btsync --config /usr/local/bin/btsync/btsync.config --log /var/log/btsync.log
|
||||
|
||||

|
||||
|
||||
现在在你的浏览器中输入正在运行btsync监听的服务器IP地址和端口(我这是192.168.0.15:8888),同意隐私政策,条款和最终用户许可协议:
|
||||
现在在你的浏览器中输入正在运行的btsync所监听的服务器IP地址和端口(我这是192.168.0.15:8888),同意其隐私政策,条款和最终用户许可协议:
|
||||
|
||||

|
||||
|
||||
@ -80,33 +82,29 @@ Btsync带有一个内置的网络服务器被用作其管理接口。想要使
|
||||
|
||||
现在这样就够了。在运行接下来的步骤之前,请先在Windows主机(或你想使用的其他Linux设备)上安装BitTorrent Sync。
|
||||
|
||||
### Btsync分享文件 ###
|
||||
### btsync分享文件 ###
|
||||
|
||||
下方的视频将会展示如何在安装Windows8的电脑[192.168.0.106]上分享现有的文件夹。在添加好想要同步的文件夹后,你会得到它的键,通过“Enter a key or link”菜单(上面的图已经展示过了)添加到你安装到的Linux机器上,并开始同步:
|
||||
这个[视频][5](需要翻墙)展示了如何在安装Windows8的电脑[192.168.0.106]上分享现有的文件夹。在添加好想要同步的文件夹后,你会得到它的密钥,通过“Enter a key or link”菜单(上面的图已经展示过了)添加到你安装到的Linux机器上,并开始同步。
|
||||
|
||||
注释:youtube视频
|
||||
<iframe width="615" height="346" frameborder="0" allowfullscreen="" src="http://www.youtube.com/embed/f7kLM0lAqF4?feature=oembed"></iframe>
|
||||
现在用别的设备试试吧;找一个想要分享的文件夹或是一些文件,并通过Linux服务器的网络接口将密钥导入到你安装的“中央”btsync中。
|
||||
|
||||
现在用别的设备试试吧;找一个想要分享的文件夹或是一些文件,并通过Linux服务器的网络接口将键导入到你安装的“核心”btsync中。
|
||||
|
||||
### 使用常规用户开机自动运行btsync ###
|
||||
### 以常规用户开机自动运行btsync ###
|
||||
|
||||
你们可能注意到了,视频中在同步文件时是使用'root'组的用户创建/btsync目录的。那是因为我们使用超级用户手动启动BitTorrent Sync的原因。在通常情况下,你会希望它开机自动使用无权限用户(www_data或是专门为此创建的账户,例如btsync)启动。
|
||||
|
||||
所以,我们创建了一个叫做btsync的用户,并在/etc/rc.local文件(exit 0行前)添加如下字段:
|
||||
所以,我们创建了一个叫做btsync的用户,并在/etc/rc.local文件(exit 0 这一行前)添加如下字段:
|
||||
|
||||
sudo -u btsync /usr/local/bin/btsync/btsync --config /usr/local/bin/btsync/btsync.config --log /var/log/btsync.log
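顺带一提,上文提到的 btsync 用户如果尚未创建,可以用类似下面的命令创建(假设性示例,非原文命令):

    # useradd -r -m btsync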
|
||||
|
||||
最后,创建pid文件:
|
||||
|
||||
# touch /usr/local/bin/btsync/.sync//sync.pid
|
||||
# touch /usr/local/bin/btsync/.sync/sync.pid
|
||||
|
||||
并递归更改/usr/local/bin/btsync的所属用户:
|
||||
并递归更改 /usr/local/bin/btsync的所属用户:
|
||||
|
||||
# chown -R btsync:root /usr/local/bin/btsync
|
||||
|
||||
现在重启试试,看看btsync是否正在由预期中的用户运行:
|
||||
Now reboot and verify that btsync is running as the intended user:
|
||||
|
||||

|
||||
|
||||
@ -114,7 +112,7 @@ Now reboot and verify that btsync is running as the intended user:
|
||||
|
||||
### 尾注 ###
|
||||
|
||||
如你所见,BitTorrent Sync对你几乎就像一个无服务器的Dropbox。我说“几乎”的原因是:当你在局域网内同步数据时,同步在两个设备之间直接进行。然而如果你想要跨网段同步数据,并且你的设备可能要穿过防火墙的限制来配对,那就只能通过一个提供BitTorrent的第三方中继服务器来完成同步传输。虽然声称传输经过 [AES加密][4],你还是可能遇到不想放生的状况。为了你的隐私着想,务必在你共享的每个文件夹中关掉中继/跟踪选项。
|
||||
如你所见,BitTorrent Sync对你而言几乎就像一个无服务器的Dropbox。我说“几乎”的原因是:当你在局域网内同步数据时,同步在两个设备之间直接进行。然而如果你想要跨网段同步数据,并且你的设备可能要穿过防火墙的限制来配对,那就只能通过一个提供BitTorrent的第三方中继服务器来完成同步传输。虽然声称传输经过 [AES加密][4],你还是可能会遇到不想发生的状况。为了你的隐私着想,务必在你共享的每个文件夹中关掉中继/跟踪选项。
|
||||
|
||||
希望这些对你有用!分享愉快!
|
||||
|
||||
@ -123,8 +121,8 @@ Now reboot and verify that btsync is running as the intended user:
|
||||
via: http://xmodulo.com/share-files-between-computers-over-network.html
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[martin2011qi](https://github.com/martin2011qi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
@ -133,3 +131,4 @@ via: http://xmodulo.com/share-files-between-computers-over-network.html
|
||||
[2]:https://itunes.apple.com/us/app/bittorrent-sync/id665156116
|
||||
[3]:http://www.getsync.com/
|
||||
[4]:http://www.getsync.com/tech-specs
|
||||
[5]:https://youtu.be/f7kLM0lAqF4
|
@ -1,9 +1,8 @@
|
||||
如何在CentOS 7.0上为Subverison安装Websvn
|
||||
如何在CentOS 7.0 安装 Websvn
|
||||
================================================================================
|
||||
大家好,今天我们会在CentOS 7.0 上为subversion安装WebSVN。
|
||||
|
||||
WebSVN提供了Svbverion中的各种方法来查看你的仓库。我们可以看到任何给定版本的任何文件或者目录的日志并且看到所有文件改动、添加、删除的列表。我们同样可以看到两个版本间的不同来知道特定版本改动了什么。
|
||||
大家好,今天我们会在CentOS 7.0 上为 subversion(SVN)安装Web 界面 WebSVN。(subverion 是 apache 的顶级项目,也称为 Apache SVN 或 SVN)
|
||||
|
||||
WebSVN 将 Svbverion 的操作你的仓库的各种功能通过 Web 界面提供出来。通过它,我们可以看到任何给定版本的任何文件或者目录的日志,并且可看到所有文件改动、添加、删除的列表。我们同样可以查看两个版本间的差异来知道特定版本改动了什么。
|
||||
|
||||
### 特性 ###
|
||||
|
||||
@ -12,20 +11,20 @@ WebSVN提供了下面这些特性:
|
||||
- 易于使用的用户界面
|
||||
- 可定制的模板系统
|
||||
- 色彩化的文件列表
|
||||
- blame 视图
|
||||
- 追溯视图
|
||||
- 日志信息查询
|
||||
- RSS支持
|
||||
- [更多][1]
|
||||
|
||||
由于使用PHP写成,WebSVN同样易于移植和安装。
|
||||
由于其使用PHP写成,WebSVN同样易于移植和安装。
|
||||
|
||||
现在我们将为Subverison(Apache SVN)安装WebSVN。请确保你的服务器上已经安装了Apache SVN。如果你还没有安装,你可以在本教程中安装。
|
||||
现在我们将为Subverison安装WebSVN。请确保你的服务器上已经安装了 SVN。如果你还没有安装,你可以按[本教程][2]安装。
|
||||
|
||||
After you installed Apache SVN(Subversion), you'll need to follow the easy steps below.安装完Apache SVN(Subversion)后,你需要以下几步。
|
||||
安装完SVN后,你需要以下几步。
|
||||
|
||||
### 1. 下载 WebSVN ###
|
||||
|
||||
你可以从官方网站http://www.websvn.info/download/中下载WebSVN。我们首先进入/var/www/html/并在这里下载安装包。
|
||||
你可以从官方网站 http://www.websvn.info/download/ 中下载 WebSVN。我们首先进入 /var/www/html/ 并在这里下载安装包。
|
||||
|
||||
$ sudo -s
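原文此处省略了进入目录和下载的具体命令;一个假设性的示例如下(下载链接仅为演示,请以下载页面上的实际地址为准):

    # cd /var/www/html/
    # wget http://www.websvn.info/download/websvn-2.3.3.zip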
|
||||
|
||||
@ -36,7 +35,7 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
|
||||
|
||||

|
||||
|
||||
这里,我下载的是最新的2.3.3版本的websvn。你可以从这个网站得到链接。你可以用你要安装的包的链接来替换上面的链接。
|
||||
这里,我下载的是最新的2.3.3版本的 websvn。你可以从上面这个网站找到下载链接,用适合你的包的链接来替换上面的链接。
|
||||
|
||||
### 2. 解压下载的zip ###
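原文此处省略了解压命令;一个假设性的示例如下(文件名与目录名沿用上一步的示例,mv 一步为推测):

    # unzip websvn-2.3.3.zip
    # mv websvn-2.3.3 websvn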
|
||||
|
||||
@ -54,7 +53,7 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
|
||||
|
||||
### 4. 编辑WebSVN配置 ###
|
||||
|
||||
现在,我们需要拷贝位于/var/www/html/websvn/include的distconfig.php为config.php,并且接着编辑配置文件。
|
||||
现在,我们需要拷贝位于 /var/www/html/websvn/include 的 distconfig.php 为 config.php,并且接着编辑该配置文件。
|
||||
|
||||
# cd /var/www/html/websvn/include
|
||||
|
||||
@ -62,7 +61,7 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
|
||||
|
||||
# nano config.php
|
||||
|
||||
现在我们需要按如下改变文件。这完成之后,请保存病退出。
|
||||
现在我们需要按如下改变文件。完成之后,请保存并退出。
|
||||
|
||||
// Configure these lines if your commands aren't on your path.
|
||||
//
|
||||
@ -100,7 +99,7 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
|
||||
|
||||
# systemctl restart httpd.service
|
||||
|
||||
接着我们在浏览器中打开WebSVN,输入http://Ip-address/websvn,或者你在本地的话,你可以输入http://localhost/websvn。
|
||||
接着我们在浏览器中打开WebSVN,输入 http:// IP地址/websvn ,或者你在本地的话,你可以输入 http://localhost/websvn 。
|
||||
|
||||

|
||||
|
||||
@ -108,7 +107,9 @@ After you installed Apache SVN(Subversion), you'll need to follow the easy steps
|
||||
|
||||
### 总结 ###
|
||||
|
||||
好了,我们已经在CentOS 7上哇安城WebSVN的安装了。这个教程同样适用于RHEL 7。WebSVN提供了Svbverion中的各种方法来查看你的仓库。你可以看到任何给定版本的任何文件或者目录的日志并且看到所有文件改动、添加、删除的列表。如果你有任何问题、评论、反馈请在下面的评论栏中留下,来让我们知道该添加什么和改进。谢谢!享受WebSVN吧。:-)
|
||||
好了,我们已经在CentOS 7上完成WebSVN的安装了。这个教程同样适用于RHEL 7。WebSVN 提供了 Subverion 中的各种功能来查看你的仓库。你可以看到任何给定版本的任何文件或者目录的日志,并且看到所有文件改动、添加、删除的列表。
|
||||
|
||||
如果你有任何问题、评论、反馈请在下面的评论栏中留下,来让我们知道该添加什么和改进。谢谢! 用用看吧。:-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -116,9 +117,10 @@ via: http://linoxide.com/linux-how-to/install-websvn-subversion-centos-7/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:http://www.websvn.info/features/
|
||||
[2]:http://linoxide.com/linux-how-to/install-apache-svn-subversion-centos-7/
|
@ -1,41 +1,40 @@
|
||||
linux中创建和解压文档的10个快速tar命令样例
|
||||
在linux中创建和解压文档的11个 tar 命令例子
|
||||
================================================================================
|
||||
### linux中的tar命令###
|
||||
|
||||
tar(磁带归档)命令是linux系统中被经常用来将文件存入到一个归档文件中的命令。
|
||||
|
||||
常见的文件扩展包括:.tar.gz 和 .tar.bz2, 分别表示通过gzip或bzip算法进一步压缩的磁带归档文件扩展。
|
||||
其常见的文件扩展包括:.tar.gz 和 .tar.bz2, 分别表示通过了gzip或bzip算法进一步进行了压缩。
|
||||
|
||||
在本教程中我们会管中窥豹一下在linux桌面或服务器版本中使用tar命令来处理一些创建和解压归档文件的日常工作的例子。
|
||||
|
||||
在该教程中我们会窥探一下在linux桌面或服务器版本中使用tar命令来处理一些日常创建和解压归档文件的工作样例。
|
||||
### 使用tar命令###
|
||||
|
||||
tar命令在大部分linux系统默认情况下都是可用的,所以你不用单独安装该软件。
|
||||
|
||||
> tar命令具有两个压缩格式,gzip和bzip,该命令的“z”选项用来指定gzip,“j”选项用来指定bzip。同时也可哟用来创建非压缩归档文件。
|
||||
> tar命令支持gzip和bzip两种压缩格式,该命令的“z”选项用来指定gzip,“j”选项用来指定bzip。同时也可以创建非压缩的归档文件。
|
||||
|
||||
#### 1.解压一个tar.gz归档 ####
|
||||
#### 1.解压一个tar.gz归档 ####
|
||||
|
||||
一般常见的用法是用来解压归档文件,下面的命令将会把文件从一个tar.gz归档文件中解压出来。
|
||||
|
||||
|
||||
$ tar -xvzf tarfile.tar.gz
|
||||
|
||||
这里对这些参数做一个简单解释-
|
||||
|
||||
> x - 解压文件
|
||||
|
||||
> v - 繁琐,在解压每个文件时打印出文件的名称。
|
||||
> v - 冗长模式,在解压每个文件时打印出文件的名称。
|
||||
|
||||
> z - 该文件是一个使用 gzip压缩的文件。
|
||||
> z - 该文件是一个使用 gzip 压缩的文件。
|
||||
|
||||
> f - 使用接下来的tar归档来进行操作。
|
||||
|
||||
这些就是一些需要记住的重要选项。
|
||||
|
||||
**解压 tar.bz2/bzip 归档文件 **
|
||||
**解压 tar.bz2/bzip 归档文件**
|
||||
|
||||
具有bz2扩展名的文件是使用bzip算法进行压缩的,但是tar命令也可以对其进行处理,但是是通过使用“j”选项来替换“z”选项。
|
||||
具有bz2扩展名的文件是使用bzip算法进行压缩的,但是tar命令也可以对其进行处理,但是需要通过使用“j”选项来替换“z”选项。
|
||||
|
||||
$ tar -xvjf archivefile.tar.bz2
|
||||
|
||||
@ -47,25 +46,25 @@ tar命令在大部分linux系统默认情况下都是可用的,所以你不用
|
||||
|
||||
另外,需要先确认目标目录是否存在,毕竟tar命令并不会为你创建目录,如果目标目录不存在,该命令就会失败。
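作为补充,下面给出一个示例(目录名为假设):先创建目标目录,再用 -C 选项把归档解压到该目录。

    $ mkdir -p /opt/extract
    $ tar -xvzf tarfile.tar.gz -C /opt/extract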
|
||||
|
||||
####3. 解压出单个文件 ####
|
||||
####3. 提取出单个文件 ####
|
||||
|
||||
为了从一个归档文件中解压出单个文件,只需要将文件名按照以下方式将其放置在命令后面。
|
||||
为了从一个归档文件中提取出单个文件,只需要将文件名按照以下方式将其放置在命令后面。
|
||||
|
||||
$ tar -xz -f abc.tar.gz "./new/abc.txt"
|
||||
|
||||
在上述命令中,可以按照以下方式来指定多个文件。
|
||||
|
||||
$ tar -xv -f abc.tar.gz "./new/cde.txt" "./new/abc.txt"
|
||||
$ tar -xz -f abc.tar.gz "./new/cde.txt" "./new/abc.txt"
|
||||
|
||||
#### 4.使用通配符来解压多个文件 ####
|
||||
|
||||
通配符可以用来解压于给定通配符匹配的一批文件,例如所有以".txt"作为扩展名的文件。
|
||||
|
||||
$ tar -xv -f abc.tar.gz --wildcards "*.txt"
|
||||
$ tar -xz -f abc.tar.gz --wildcards "*.txt"
|
||||
|
||||
#### 5. 列出并检索tar归档文件中的内容 ####
|
||||
#### 5. 列出并检索tar归档文件中的内容 ####
|
||||
|
||||
如果你仅仅想要列出而不是解压tar归档文件的中的内容,使用“-t”选项, 下面的命令用来打印一个使用gzip压缩过的tar归档文件中的内容。
|
||||
如果你仅仅想要列出而不是解压tar归档文件的中的内容,使用“-t”(test)选项, 下面的命令用来打印一个使用gzip压缩过的tar归档文件中的内容。
|
||||
|
||||
$ tar -tz -f abc.tar.gz
|
||||
./new/
|
||||
@ -75,7 +74,7 @@ tar命令在大部分linux系统默认情况下都是可用的,所以你不用
|
||||
./new/abc.txt
|
||||
...
|
||||
|
||||
将输出通过管道定向到grep来搜索一个文件或者定向到less命令来浏览内容列表。 使用"v"繁琐选项将会打印出每个文件的额外详细信息。
|
||||
可以将输出通过管道定向到grep来搜索一个文件,或者定向到less命令来浏览内容列表。 使用"v"冗长选项将会打印出每个文件的额外详细信息。
|
||||
|
||||
对于 tar.bz2/bzip文件,需要使用"j"选项。
|
||||
|
||||
@ -84,11 +83,10 @@ tar命令在大部分linux系统默认情况下都是可用的,所以你不用
|
||||
$ tar -tvz -f abc.tar.gz | grep abc.txt
|
||||
-rw-rw-r-- enlightened/enlightened 0 2015-01-13 11:40 ./new/abc.txt
|
||||
|
||||
#### 6.创建一个tar/tar.gz归档文件 ####
|
||||
#### 6.创建一个tar/tar.gz归档文件 ####
|
||||
|
||||
现在我们已经学过了如何解压一个tar归档文件,是时候开始创建一个新的tar归档文件了。tar命令可以用来将所选的文件或整个目录放入到一个归档文件中,以下是相应的样例。
|
||||
|
||||
|
||||
下面的命令使用一个目录来创建一个tar归档文件,它会将该目录中所有的文件和子目录都加入到归档文件中。
|
||||
|
||||
$ tar -cvf abc.tar ./new/
|
||||
@ -102,14 +100,13 @@ tar命令在大部分linux系统默认情况下都是可用的,所以你不用
|
||||
|
||||
$ tar -cvzf abc.tar.gz ./new/
|
||||
|
||||
> 文件的扩展名其实并不真正有什么影响。“tar.gz” 和tgz是gzip压缩算法压缩文件的常见扩展名。 “tar.bz2”和“tbz”是bzip压缩算法压缩文件的常见扩展名。
|
||||
|
||||
> 文件的扩展名其实并不真正有什么影响。“tar.gz” 和“tgz”是gzip压缩算法压缩文件的常见扩展名。 “tar.bz2”和“tbz”是bzip压缩算法压缩文件的常见扩展名(LCTT 译注:归档是否是压缩的和采用哪种压缩方式并不取决于其扩展名,扩展名只是为了便于辨识。)。
|
||||
|
||||
#### 7. 在添加文件之前进行确认 ####
|
||||
|
||||
一个有用的选项是“w”,该选项使得tar命令在添加每个文件到归档文件之前来让用户进行确认,有时候这会很有用。
|
||||
|
||||
使用该选项时,只有用户输入yes时的文件才会被加入到归档文件中,如果你输入任何东西,默认的回答是一个“No”。
|
||||
使用该选项时,只有用户输入“y”时的文件才会被加入到归档文件中,如果你不输入任何东西,其默认表示是一个“n”。
|
||||
|
||||
# 添加指定文件
|
||||
|
||||
@ -137,7 +134,7 @@ tar命令在大部分linux系统默认情况下都是可用的,所以你不用
|
||||
|
||||
#### 9. 将文件加入到压缩的归档文件中(tar.gz/tar.bz2) ####
|
||||
|
||||
之前已经提到了不可能将文件加入到已压缩的归档文件中,然和依然可以通过简单的一些把戏来完成。使用gunzip命令来解压缩归档文件,然后将文件加入到归档文件中后重新进行压缩。
|
||||
之前已经提到了不可能将文件加入到已压缩的归档文件中,然而依然可以通过简单的一些把戏来完成。使用gunzip命令来解压缩归档文件,然后将文件加入到归档文件中后重新进行压缩。
|
||||
|
||||
$ gunzip archive.tar.gz
|
||||
$ tar -rf archive.tar ./path/to/file
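添加完文件之后重新压缩的一步这里未展示,通常类似于(示例):

    $ gzip archive.tar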
|
||||
@ -147,16 +144,15 @@ tar命令在大部分linux系统默认情况下都是可用的,所以你不用
|
||||
|
||||
#### 10.通过tar来进行备份 ####
|
||||
|
||||
一个真实的场景是在规则的间隔内来备份目录,tar命令可以通过cron调度来实现这样的一个备份,以下是一个样例 -
|
||||
一个真实的场景是在固定的时间间隔内来备份目录,tar命令可以通过cron调度来实现这样的一个备份,以下是一个样例 :
|
||||
|
||||
$ tar -cvz -f archive-$(date +%Y%m%d).tar.gz ./new/
|
||||
|
||||
使用cron来运行上述的命令会保持创建类似以下名称的备份文件 -
|
||||
'archive-20150218.tar.gz'.
|
||||
使用cron定期运行上述命令,就会持续生成类似 'archive-20150218.tar.gz' 这样命名的备份文件。
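下面是一个假设性的 crontab 条目示例(路径均为假设;注意在 crontab 中 % 需要用反斜杠转义):

    # 每天凌晨 2 点备份 /home/user/new/ 目录
    0 2 * * * tar -cvz -f /backup/archive-$(date +\%Y\%m\%d).tar.gz /home/user/new/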
|
||||
|
||||
当然,需要确保日益增长的归档文件不会导致磁盘空间的溢出。
|
||||
|
||||
#### 11. 在创建归档文件是进行验证 ####
|
||||
#### 11. 在创建归档文件时进行验证 ####
|
||||
|
||||
"W"选项可以用来在创建归档文件之后进行验证,以下是一个简单例子。
|
||||
|
||||
@ -174,9 +170,9 @@ tar命令在大部分linux系统默认情况下都是可用的,所以你不用
|
||||
Verify ./new/newfile.txt
|
||||
Verify ./new/abc.txt
|
||||
|
||||
需要注意的是验证动作不能呢该在压缩过的归档文件上进行,只能在非压缩的tar归档文件上执行。
|
||||
需要注意的是验证动作不能在压缩过的归档文件上进行,只能在非压缩的tar归档文件上执行。
|
||||
|
||||
现在就先到此为止,可以通过“man tar”命令来查看tar命令的的手册。
|
||||
这次就先到此为止,可以通过“man tar”命令来查看tar命令的的手册。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -184,7 +180,7 @@ via: http://www.binarytides.com/linux-tar-command/
|
||||
|
||||
作者:[Silver Moon][a]
|
||||
译者:[theo-l](https://github.com/theo-l)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,8 @@
|
||||
如何在Linux中隐藏PHP版本
|
||||
如何在Linux服务器中隐藏PHP版本
|
||||
================================================================================
|
||||
通常上,大多数默认设置安装的web服务器存在信息泄露。这其中之一是PHP。PHP(超文本预处理器)是如今流行的服务端html嵌入式语言。在如今这个充满挑战的时代,有许多攻击者会尝试发现你服务端的漏洞。因此,我会简单描述如何在Linux服务器中隐藏PHP信息。
|
||||
通常,大多数按默认设置安装的web服务器都存在信息泄露,这其中之一就是PHP。PHP 是如今流行的服务端嵌入式 HTML 脚本语言之一。在如今这个充满挑战的时代,有许多攻击者会尝试发现你服务端的漏洞。因此,我会简单描述如何在Linux服务器中隐藏PHP信息。
|
||||
|
||||
默认上**exposr_php**默认是开的。关闭“expose_php”参数可以使php隐藏它的版本信息。
|
||||
**expose_php** 参数默认是开启的。关闭“expose_php”参数可以使php隐藏它的版本信息。
|
||||
|
||||
[root@centos66 ~]# vi /etc/php.ini
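在该文件中要修改的就是下面这一行(示例写法,具体位置以你的 php.ini 为准;Off 即关闭版本信息显示):

    expose_php = Off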
|
||||
|
||||
@ -26,9 +26,9 @@
|
||||
X-Page-Speed: 1.9.32.2-4321
|
||||
Cache-Control: max-age=0, no-cache
|
||||
|
||||
更改之后,php就不会在web服务头中显示版本了:
|
||||
更改并重启 Web 服务后,php就不会在web服务头中显示版本了:
|
||||
|
||||
[root@centos66 ~]# curl -I http://www.ehowstuff.com/
|
||||
```[root@centos66 ~]# curl -I http://www.ehowstuff.com/
|
||||
|
||||
HTTP/1.1 200 OK
|
||||
Server: nginx
|
||||
@ -39,8 +39,9 @@ X-Pingback: http://www.ehowstuff.com/xmlrpc.php
|
||||
Date: Wed, 11 Feb 2015 14:10:43 GMT
|
||||
X-Page-Speed: 1.9.32.2-4321
|
||||
Cache-Control: max-age=0, no-cache
|
||||
```
|
||||
|
||||
有任何需要帮助的请到twiiter @ehowstuff,或在下面留下你的评论。[点此获取更多历史文章][1]
|
||||
LCTT译注:除了 PHP 的版本之外,Web 服务器也会默认泄露版本号。如果使用 Apache 服务器,请[参照此文章关闭Apache 版本显示][2];如果使用 Nginx 服务器,请在 http 段内加入`server_tokens off;` 配置。以上修改请记得重启相关服务。
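作为补充示例(非原文内容),Apache 下通常是在主配置文件(如 httpd.conf)中加入如下两行来隐藏版本信息:

    ServerTokens Prod
    ServerSignature Off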
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -48,9 +49,10 @@ via: http://www.ehowstuff.com/how-to-hide-php-version-in-linux/
|
||||
|
||||
作者:[skytech][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.ehowstuff.com/author/mhstar/
|
||||
[1]:http://www.ehowstuff.com/archives/
|
||||
[2]:http://linux.cn/article-3642-1.html
|
@ -681,15 +681,15 @@ Linux基础:如何找出你的系统所支持的最大内存
|
||||
Handle 0x0031, DMI type 127, 4 bytes
|
||||
End Of Table
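上面这段输出来自 dmidecode;如果只想查看系统支持的最大内存容量,可以用类似下面的命令过滤(示例,非原文命令):

    $ sudo dmidecode -t memory | grep -i "Maximum Capacity"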
|
||||
|
||||
好了,就是这样。周末愉快!
|
||||
好了,就是这样。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.unixmen.com/linux-basics-how-to-find-maximum-supported-ram-by-your-system/
|
||||
via: http://www.unixmen.com/linux-basics-how-to-find-maximum-supported-ram-by-your-system/
|
||||
|
||||
作者:[SK][0]
|
||||
译者:[mr-ping](https://github.com/mr-ping)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
published/20150306 Nmap--Not Just for Evil.md (new file, 41 lines)
@ -0,0 +1,41 @@
|
||||
Nmap--不是只能用于做坏事!
|
||||
================================================================================
|
||||
如果SSH是系统管理员世界的"瑞士军刀"的话,那么Nmap就是一盒炸药。炸药很容易被误用然后将你的双脚崩掉,但是也是一个很有威力的工具,能够胜任一些看似无法完成的任务。
|
||||
|
||||
大多数人想到Nmap时,他们想到的是扫描服务器,查找开放端口来实施攻击。然而,在过去的这些年中,这样的超能力在当你管理服务器或计算机遇到问题时也是非常的有用。无论是你试图找出在你的网络上有哪些类型的服务器使用了指定的IP地址,或者尝试锁定一个新的NAS设备,以及扫描网络等,都会非常有用。
|
||||
|
||||
下图显示了我的QNAP NAS的网络扫描结果。我使用该设备的唯一目的是为了NFS和SMB文件共享,但是你可以看到,它包含了一大堆大开大敞的端口。如果没有Nmap,很难发现机器到底在运行着什么玩意儿。
|
||||
|
||||

|
||||
|
||||
*网络扫描*
|
||||
|
||||
另外一个可能你没想到的用途是用它来扫描一个网络。你甚至根本不需要root的访问权限,而且你也可以非常容易地来指定你想要扫描的网络地址块,例如输入:
|
||||
|
||||
nmap 192.168.1.0/24
|
||||
|
||||
上述命令会扫描我的局域网中全部的254个可用的IP地址,让我可以知道哪些主机是可以Ping通的,以及哪些端口是开放的。如果你刚刚在网络上添加一个新的硬件,但是不知道它通过DHCP获取的IP地址是什么,那么此时Nmap就是无价之宝。例如,上述命令在我的网络中揭示了这个问题:
|
||||
|
||||
Nmap scan report for TIVO-8480001903CCDDB.brainofshawn.com (192.168.1.220)
|
||||
Host is up (0.0083s latency).
|
||||
Not shown: 995 filtered ports
|
||||
PORT STATE SERVICE
|
||||
80/tcp open http
|
||||
443/tcp open https
|
||||
2190/tcp open tivoconnect
|
||||
2191/tcp open tvbus
|
||||
9080/tcp closed glrpc
|
||||
|
||||
它不仅显示了新的Tivo 设备,而且还告诉我哪些端口是开放的。由于它的可靠性、实用性以及“黑帽工具”般的气质,Nmap获得了本月的《编辑推荐》奖。这不是一个新的程序,但是如果你是一个linux用户的话,你应该玩玩它。
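补充一个假设性的用法示例(非原文内容):如果只想知道哪些主机在线而不做端口扫描,可以用 nmap 的 ping 扫描:

    # 只做主机发现(ping 扫描),不扫描端口
    nmap -sn 192.168.1.0/24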
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxjournal.com/content/nmap%E2%80%94not-just-evil
|
||||
|
||||
作者:[Shawn Powers][a]
|
||||
译者:[theo-l](https://github.com/theo-l)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxjournal.com/users/shawn-powers
|
@ -1,14 +1,15 @@
|
||||
Fedora GNOME快捷键
|
||||
Fedora GNOME 的常用快捷键
|
||||
================================================================================
|
||||
在Fedora,为了获得最好的[GNOME桌面] [1]体验,你需要了解并掌握一些驾驭系统的快捷键。
|
||||
在Fedora中,为了获得最好的[GNOME桌面][1]体验,你需要了解并掌握一些驾驭系统的快捷键。
|
||||
|
||||
这篇文章将列举我们日常使用中使用频率最高的快捷键。
|
||||
|
||||

|
||||
GNOME 快捷键 - super键.
|
||||
|
||||
#### 1. Super键 ####
|
||||
|
||||

|
||||
|
||||
*GNOME 快捷键 - super键*
|
||||
|
||||
[“super”键][2]是如今驾驭操作系统的好朋友。
|
||||
|
||||
在传统的笔记本电脑中“super”键坐落于最后一列就在“alt”键的旁边(就是徽标键)。
|
||||
@ -17,116 +18,122 @@ GNOME 快捷键 - super键.
|
||||
|
||||
同时按下 "ALT" 和"F1"一样可以达到这样的效果。
|
||||
|
||||

|
||||
GNOME 指令运行.
|
||||
### 2. 如何快速执行一条命令 ###
|
||||
|
||||
### 2. 如何快速执行一条指令 ###
|
||||

|
||||
|
||||
*GNOME 运行某命令*
|
||||
|
||||
如果你需要快速的执行一条指令,你可以按下"ALT"+"F2",这样就会出现指令运行对话框了。
|
||||
|
||||
你就可以在窗口中输入你想要执行的指令了,回车执行。
|
||||
|
||||

|
||||
使用TAB在应用中切换。
|
||||
现在你就可以在窗口中输入你想要执行的指令了,回车执行。
|
||||
|
||||
### 3. 快速切换到另一个打开的应用 ###
|
||||
|
||||
就像微软的Windows一样你可以使用"ALT"和"TAB" 的组合键在应用程序之间切换。
|
||||

|
||||
|
||||
在一些键盘上tab键是这样的**|<- ->|**而有些则是简单的"TAB"字母。
|
||||
*使用TAB在应用中切换*
|
||||
|
||||
GNOME应用间切换随着你的切换显示的是简单的图标和应用的名字
|
||||
就像在微软的Windows下一样你可以使用"ALT"和"TAB" 的组合键在应用程序之间切换。
|
||||
|
||||
如果你按下"shift"+"tab"将反过来切换应用。
|
||||
在一些键盘上tab键上画的是这样的**|<- ->|**,而有些则是简单的"TAB"字母。
|
||||
|
||||

|
||||
在应用中切换不同窗口。
|
||||
GNOME应用切换器随着你的切换,显示简单的图标和应用的名字。
|
||||
|
||||
如果你按下"shift"+"tab"将以反序切换应用。
|
||||
|
||||
### 4. 在同一应用中快速切换不同的窗口 ###
|
||||
|
||||

|
||||
|
||||
*在应用中切换不同窗口*
|
||||
|
||||
如果你像我一样经常打开五六个Firefox。
|
||||
|
||||
你已经知道通过"Alt"+"Tab"实现应用间的切换。
|
||||
你已经知道通过"Alt"+"Tab"实现应用间的切换。有两种方法可以在同一个应用中所有打开的窗口中切换。
|
||||
|
||||
有两种方法可以在同应用中所有打开的窗口中切换。
|
||||
第一种是按"Alt"+"Tab"让选框停留在你所要切换窗口的应用图标上。短暂的停留等到下拉窗口出现你就能用鼠标选择窗口了。
|
||||
|
||||
第一种是按"Alt"+"Tab"让选框停留在你所要切换窗口的应用图标上。短暂的停留等到下拉窗出现你就能用鼠标选择窗口了。
|
||||
第二种也是比较推荐的方式是按"Alt"+"Tab"让选框停留在你所要切换窗口的应用图标上,然后按"super"+"`"在此应用打开的窗口间切换。
|
||||
|
||||
第二种也是比较推荐的方式是按"Alt"+"Tab"让选框停留在你所要切换窗口的应用图标上然后按"super"+"`"在此应用打开的窗口间切换。
|
||||
**注释:"\`"就是tab键上面的那个键。无论你使用的那种键盘排布,用于切换的键一直都是tab上面的那个键,所以也有可能不是"\`"键。**
|
||||
|
||||
**注释"\`"就是tab键上面的那个键。用于切换的键一直都是tab上面的那个键,无论你使用的那种键盘排布,也有可能不是"`"键。**
|
||||
|
||||
如果你的手很灵活(或者是我称之为的忍者手)那你也可以同时按"shift", "`"和"super"键来反向切换窗口。
|
||||
|
||||

|
||||
切换键盘焦点。
|
||||
如果你的手很灵活(或者是我称之为忍者手的)那你也可以同时按"shift", "\`"和"super"键来反向切换窗口。
|
||||
|
||||
### 5. 切换键盘焦点 ###
|
||||
|
||||

|
||||
|
||||
*切换键盘焦点*
|
||||
|
||||
这个键盘快捷键并不是必须掌握的,但是还是最好掌握。
|
||||
|
||||
如若你想将输入的焦点放到搜索栏或者一个应用窗口上,你可以同时按下"CTRL", "ALT"和"TAB",这样就会出现一个让你选择切换区域的列表。
|
||||
|
||||
然后就可以按方向键做出选择了。
|
||||
|
||||

|
||||
显示所有应用程序。
|
||||
|
||||
### 6. 显示所有应用程序列表 ###
|
||||
|
||||

|
||||
|
||||
*显示所有应用程序*
|
||||
|
||||
如果恰巧最后一个应用就是你想要找的,那么这样做真的会帮你省很多时间。
|
||||
|
||||
按"super"和"A"键来快速浏览这个包含你系统上所有应用的列表。
|
||||
|
||||

|
||||
切换工作区。
|
||||
按"super"和"A"键来快速切换到这个包含你系统上所有应用的列表上。
|
||||
|
||||
### 7. 切换工作区 ###
|
||||
|
||||

|
||||
|
||||
*切换工作区*
|
||||
|
||||
如果你已经使用linux有一段时间了,那么这种[多工作区切换][3]的工作方式一定深得你心了吧。
|
||||
|
||||
举个例子,你在第一个工作区里做开发,第二个中浏览网页而把你邮件的客户端开在第三个工作区中。
|
||||
举个例子,你在第一个工作区里做开发,在第二个工作区中浏览网页,而把你的邮件客户端开在第三个工作区中。
|
||||
|
||||
工作区切换你可以使用"super"+"Page Up" (PGUP)键朝一个方向切,也可以按"super"+"Page Down" (PGDN)键朝另一个方向切。
|
||||
工作区切换你可以使用"super"+"Page Up" (向上翻页)键朝一个方向切,也可以按"super"+"Page Down" (向下翻页)键朝另一个方向切。
|
||||
|
||||
还有一个比较麻烦的备选方案就是按"super"显示打开的应用,然后在屏幕的右侧选择你所要切换的工作区。
|
||||
|
||||

|
||||
将应用移至另一个工作区。
|
||||
|
||||
### 8. 将一些项目移至一个新的工作区 ###
|
||||
|
||||
如果这个工作区已经被搞得杂乱无章了没准你会想将手头的应用转到一个全新的工作区,请按组合键"super", "shift"和"page up"或"super", "shift"和"page down" key。
|
||||

|
||||
|
||||
*将应用移至另一个工作区*
|
||||
|
||||
如果这个工作区已经被搞得杂乱无章了,没准你会想将手头的应用转到一个全新的工作区,请按组合键"super", "shift"和"page up"或"super", "shift"和"page down" 键。
|
||||
|
||||
备选方案按"super"键,然后在应用列表中找到你想要移动的应用拖到屏幕右侧的工作区。
|
||||
|
||||
### 9. 显示信息托盘 ###
|
||||
|
||||

|
||||
显示信息栏。
|
||||
|
||||
### 9. 显示信息栏 ###
|
||||
*显示信息托盘*
|
||||
|
||||
消息栏会提供一些通知。
|
||||
|
||||
按"super"+"M"呼出消息栏。
|
||||
消息托盘会提供一个通知列表。按"super"+"M"呼出消息托盘。
|
||||
|
||||
备选方法是鼠标移动到屏幕右下角。
|
||||
|
||||

|
||||
锁屏。
|
||||
|
||||
### 10. 锁屏 ###
|
||||
|
||||

|
||||
|
||||
*锁屏*
|
||||
|
||||
想要休息一会喝杯咖啡?不想误触键盘?
|
||||
|
||||
无论何时只要离开你的电脑应该习惯性的按下"super"+"L"锁屏。
|
||||
|
||||
解锁方法是从屏幕的下方向上拽,输入密码即可。
|
||||
|
||||

|
||||
Fedora中Control+Alt+Delete
|
||||
|
||||
### 11. 关机 ###
|
||||
|
||||

|
||||
|
||||
*Fedora中Control+Alt+Delete*
|
||||
|
||||
如果你曾是windows的用户,你一定记得著名的三指快捷操作CTRL+ALT+DELETE。
|
||||
|
||||
如果在键盘上同时按下CTRL+ALT+DELETE,Fedora就会弹出一则消息,提示你的电脑将在60秒后关闭。
|
||||
@ -158,18 +165,17 @@ Fedora中Control+Alt+Delete
|
||||
|
||||
[录制的内容][4]将以[webm][5]格式保存于当前用户家目录下的录像文件夹中。
|
||||
|
||||

|
||||
并排显示窗口。
|
||||
|
||||
### 14. 并排显示窗口 ###
|
||||
|
||||

|
||||
|
||||
*并排显示窗口*
|
||||
|
||||
你可以将一个窗口靠左占满左半屏,另一个窗口靠右占满右半屏,让两个窗口并排显示。
|
||||
|
||||
也可以按"Super"+"←"让当前应用占满左半屏。
|
||||
也可以按"Super"+"←"(左箭头)让当前应用占满左半屏。按"Super"+"→"(右箭头)让当前应用占满右半屏。
|
||||
|
||||
按"Super"+"→"让当前应用占满右半屏。
|
||||
|
||||
### 15. 窗口的最大化, 最小化和恢复 ###
|
||||
### 15. 窗口的最大化,最小化和恢复 ###
|
||||
|
||||
双击标题栏可以最大化窗口。
|
||||
|
||||
@ -177,11 +183,12 @@ Fedora中Control+Alt+Delete
|
||||
|
||||
右键菜单选择"最小化"就可以最小化了。
|
||||
|
||||

|
||||
GNOME快捷键速查表。
|
||||
|
||||
### 16. 总结 ###
|
||||
|
||||

|
||||
|
||||
*GNOME快捷键速查表*
|
||||
|
||||
我做了一份快捷键速查表,你可以打印出来贴在墙上,这样一定能够更快上手。
|
||||
|
||||
当你掌握了这些快捷键后,你一定会感慨这个桌面环境使用起来是如此的顺手。
|
||||
@ -196,8 +203,8 @@ GNOME快捷键速查表。
|
||||
via: http://linux.about.com/od/howtos/tp/Fedora-GNOME-Keyboard-Shortcuts.htm
|
||||
|
||||
作者:[Gary Newell][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[martin2011qi](https://github.com/martin2011qi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,35 @@
|
||||
The VirtualBox 5.0 beta is finally here
|
||||
=======================================
|
||||
**Oracle's desktop virtualization software gets its first major point revision in almost five years, but the changes are more evolutionary than revolutionary.**
|
||||
|
||||
VirtualBox, the open source virtualization system originally created by Sun and now under Oracle's stewardship, has released its first revision to the left of the decimal point in nearly five years.
|
||||
|
||||
Don't expect anything truly revolutionary, though, judging from the release notes for and the behavior of the beta itself. With this release, VirtualBox picks up a bit more polish, both visually and technologically, but its main advantage over VMware remains with its offer of a free incarnation of many of the same core features.
|
||||
|
||||
The last major version of VirtualBox 4.0 was released in December 2010, and it delivered a heavily reworked version of the program with a new GUI, new virtual hardware, and a reorganized project design. But the pace of major releases for the project was slow, with the last major release (version 4.3) arriving in late 2013. Everything since then has been officially designated as a "maintenance" release.
|
||||
|
||||
**VirtualBox 5.0**
|
||||
|
||||
*The first beta of VirtualBox 5.0 adds features like the ability to edit the menus and shortcut icons for VM windows, as shown here.*
|
||||
|
||||
Among the biggest changes for VirtualBox 5.0 is support for more instruction set extensions that run with hardware-assisted virtualization. The AES-NI instruction set, typically used for hardware acceleration of encryption, and the SSE 4.1 and SSE 4.2 instructions sets were included among them. Also new is paravirtualization support for Windows and Linux guests, a new architecture for abstracting host audio, and support for USB 3 (xHCI) controller in guests.
|
||||
|
||||
Most of the usability updates are improvements to the VirtualBox GUI. One big change is the ability to customize the menus and the toolbar for individual virtual machines so that little- or never-used options can be removed entirely. Another major addition is the ability to encrypt virtual volumes from within the VirtualBox interface, rather than relying on the guest OS's own disk-encryption system (assuming it has one).
|
||||
|
||||
Oracle warns that this is beta software and should be treated accordingly. Sure enough, the main GUI and the guest OS windows all sport black-and-red Beta warnings in one corner. But a Windows 10 VM created with the previous VirtualBox release (4.3.26) booted and ran fine, and the 5.0 version of the VirtualBox Guest Additions -- for better video support, bidirectional copy and paste, and other features -- installed without issues. (Fixes to better support Windows 10 have been showing up since version 4.3.18.)
|
||||
|
||||
No word has been given yet on when the final version of 5.0 will be out, but Oracle [encourages users][1] to download and try out the beta -- in a nonproduction environment -- and file bug reports with their [beta feedback forum][2].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.infoworld.com/article/2905098/virtualization/oracle-virtualbox-5-0-beta-is-finally-here.html
|
||||
|
||||
作者:[Serdar Yegulalp][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.infoworld.com/author/Serdar-Yegulalp/
|
||||
[1]:https://forums.virtualbox.org/viewtopic.php?f=15&t=66904
|
||||
[2]:https://forums.virtualbox.org/viewforum.php?f=15
|
@ -1,3 +1,5 @@
|
||||
Translating by FSSlc
|
||||
|
||||
Chess in a Few Bytes
|
||||
================================================================================
|
||||
I am showing my age by mentioning that my introduction to computing was a ZX81, a home computer produced by a UK developer (Sinclair Research) which had a whopping 1KB of RAM. The 1KB is not a typographical error, the home computer really shipped with a mere 1KB of onboard memory. But this memory limitation did not prevent enthusiasts producing a huge variety of software. In fact the machine sparked a generation of programming wizards who were forced to get to grips with its workings. The machine was upgradable with a 16KB RAM pack which offered so many more coding possibilities. But the unexpanded 1KB machine still inspired programmers to release remarkable software.
|
||||
@ -111,4 +113,4 @@ via: http://www.linuxlinks.com/article/20150222033906262/ChessBytes.html
|
||||
|
||||
[1]:http://nanochess.org/chess6.html
|
||||
[2]:http://www.pouet.net/prod.php?which=64962
|
||||
[3]:http://home.hccnet.nl/h.g.muller/max-src2.html
|
||||
[3]:http://home.hccnet.nl/h.g.muller/max-src2.html
|
||||
|
@ -1,57 +0,0 @@
|
||||
Papyrus: An Open Source Note Manager
|
||||
================================================================================
|
||||

|
||||
|
||||
In last post, we saw an [open source to-do app Go For It!][1]. In a similar article, today we’ll see an **open source note taking application Papyrus**.
|
||||
|
||||
[Papyrus][2] is a fork of [Kaqaz note manager][3] and is built on QT5. It brings a clean, polished user interface and is security focused (as it claims). Emphasizing on simplicity, I find Papyrus similar to OneNote. You organize your notes in ‘paper’ and add them a label for grouping those papers. Simple enough!
|
||||
|
||||
### Papyrus features: ###
|
||||
|
||||
Though Papyrus focuses on simplicity, it still has plenty of features up its sleeves. Some of the main features are:
|
||||
|
||||
- Note management with labels and categories
|
||||
- Advanced search options
|
||||
- Touch mode available
|
||||
- Full screen option
|
||||
- Back up to Dropbox/hard drive/external
|
||||
- Password protection for selective papers
|
||||
- Sharing papers with other applications
|
||||
- Encrypted synchronization via Dropbox
|
||||
- Available for Android, Windows and OS X apart from Linux
|
||||
|
||||
### Install Papyrus ###
|
||||
|
||||
Papyrus has APK available for Android users. There are installer files for Windows and OS X. Linux users can get source code of the application. Ubuntu and other Ubuntu based distributions can use the .deb packages. Based on your OS and preference, you can get the respective files from the Papyrus download page:
|
||||
|
||||
- [Download Papyrus][4]
|
||||
|
||||
### Screenshots ###
|
||||
|
||||
Here are some screenshots of the application:
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
Give Papyrus a try and see if you like it. Do share your experience with it with the rest of us here.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/papyrus-open-source-note-manager/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/go-for-it-to-do-app-in-linux/
|
||||
[2]:http://aseman.co/en/products/papyrus/
|
||||
[3]:https://github.com/sialan-labs/kaqaz/
|
||||
[4]:http://aseman.co/en/products/papyrus/
|
@ -1,161 +0,0 @@
|
||||
How To Install / Configure VNC Server On CentOS 7.0
|
||||
================================================================================
|
||||
Hi there, this tutorial is all about how to install or setup [VNC][1] Server on your very CentOS 7. This tutorial also works fine in RHEL 7. In this tutorial, we'll learn what is VNC and how to install or setup [VNC Server][1] on CentOS 7
|
||||
|
||||
As we know, most of the time as a system administrator we are managing our servers over the network. It is very rare that we will need to have a physical access to any of our managed servers. In most cases all we need is to SSH remotely to do our administration tasks. In this article we will configure a GUI alternative to a remote access to our CentOS 7 server, which is VNC. VNC allows us to open a remote GUI session to our server and thus providing us with a full graphical interface accessible from any remote location.
|
||||
|
||||
VNC server is a Free and Open Source Software which is designed for allowing remote access to the Desktop Environment of the server to the VNC Client whereas VNC viewer is used on remote computer to connect to the server .
|
||||
|
||||
**Some Benefits of VNC server are listed below:**
|
||||
|
||||
Remote GUI administration makes work easy & convenient.
|
||||
Clipboard sharing between host CentOS server & VNC-client machine.
|
||||
GUI tools can be installed on the host CentOS server to make the administration more powerful
|
||||
Host CentOS server can be administered through any OS having the VNC-client installed.
|
||||
More reliable over ssh graphics and RDP connections.
|
||||
|
||||
So, now lets start our journey towards the installation of VNC Server. We need to follow the steps below to setup and to get a working VNC.
|
||||
|
||||
First of all we'll need a working Desktop Environment (X-Windows), if we don't have a working GUI Desktop Environment (X Windows) running, we'll need to install it first.
|
||||
|
||||
**Note: The commands below must be running under root privilege. To switch to root please execute "sudo -s" under a shell or terminal without quotes("")**
|
||||
|
||||
### 1. Installing X-Windows ###
|
||||
|
||||
First of all to install [X-Windows][2] we'll need to execute the below commands in a shell or terminal. It will take few minutes to install its packages.
|
||||
|
||||
# yum check-update
|
||||
# yum groupinstall "X Window System"
|
||||
|
||||

|
||||
|
||||
#yum install gnome-classic-session gnome-terminal nautilus-open-terminal control-center liberation-mono-fonts
|
||||
|
||||

|
||||
|
||||
# unlink /etc/systemd/system/default.target
|
||||
# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target
|
||||
|
||||

|
||||
|
||||
# reboot
|
||||
|
||||
After our machine restarts, we'll get a working CentOS 7 Desktop.
|
||||
|
||||
Now, we'll install VNC Server on our machine.
|
||||
|
||||
### 2. Installing VNC Server Package ###
|
||||
|
||||
Now, we'll install VNC Server package in our CentOS 7 machine. To install VNC Server, we'll need to execute the following command.
|
||||
|
||||
# yum install tigervnc-server -y
|
||||
|
||||

|
||||
|
||||
### 3. Configuring VNC ###
|
||||
|
||||
Then, we'll need to create a configuration file under **/etc/systemd/system/** directory. We can copy the **vncserver@:1.service** file from example file from **/lib/systemd/system/vncserver@.service**
|
||||
|
||||
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
|
||||
|
||||

|
||||
|
||||
Now we'll open **/etc/systemd/system/vncserver@:1.service** in our favorite text editor (here, we're gonna use **nano**). Then find the below lines of text in that file and replace <USER> with your username. Here, in my case its linoxide so I am replacing <USER> with linoxide and finally looks like below.
|
||||
|
||||
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
|
||||
PIDFile=/home/<USER>/.vnc/%H%i.pid
|
||||
|
||||
TO
|
||||
|
||||
ExecStart=/sbin/runuser -l linoxide -c "/usr/bin/vncserver %i"
|
||||
PIDFile=/home/linoxide/.vnc/%H%i.pid
|
||||
|
||||
If you are creating for root user then
|
||||
|
||||
ExecStart=/sbin/runuser -l root -c "/usr/bin/vncserver %i"
|
||||
PIDFile=/root/.vnc/%H%i.pid
|
||||
|
||||

|
||||
|
||||
Now, we'll need to reload our systemd.
|
||||
|
||||
# systemctl daemon-reload
|
||||
|
||||
Finally, we'll create VNC password for the user . To do so, first you'll need to be sure that you have sudo access to the user, here I will login to user "linoxide" then, execute the following. To login to linoxide we'll run "**su linoxide" without quotes** .
|
||||
|
||||
# su linoxide
|
||||
$ sudo vncpasswd
|
||||
|
||||

|
||||
|
||||
**Make sure that you enter passwords more than 6 characters.**
|
||||
|
||||
### 4. Enabling and Starting the service ###
|
||||
|
||||
To enable service at startup ( Permanent ) execute the commands shown below.
|
||||
|
||||
$ sudo systemctl enable vncserver@:1.service
|
||||
|
||||
Then, start the service.
|
||||
|
||||
$ sudo systemctl start vncserver@:1.service
|
||||
|
||||
### 5. Allowing Firewalls ###
|
||||
|
||||
We'll need to allow VNC services in Firewall now.
|
||||
|
||||
$ sudo firewall-cmd --permanent --add-service vnc-server
|
||||
$ sudo systemctl restart firewalld.service
|
||||
|
||||

|
||||
|
||||
Now you can able to connect VNC server using IP and Port ( Eg : ip-address:1 )
|
||||
|
||||
### 6. Connecting the machine with VNC Client ###
|
||||
|
||||
Finally, we are done installing VNC Server. No, we'll wanna connect the server machine and remotely access it. For that we'll need a VNC Client installed in our computer which will only enable us to remote access the server machine.
|
||||
|
||||

|
||||
|
||||
You can use VNC client like [Tightvnc viewer][3] and [Realvnc viewer][4] to connect Server.
|
||||
To connect with additional users create files with different ports, please go to step 3 to configure and add a new user and port, You'll need to create **vncserver@:2.service** and replace the username in config file and continue the steps by replacing service name for different ports. **Please make sure you logged in as that particular user for creating vnc password**.
|
||||
|
||||
VNC by itself runs on port 5900. Since each user will run their own VNC server, each user will have to connect via a separate port. The addition of a number in the file name tells VNC to run that service as a sub-port of 5900. So in our case, arun's VNC service will run on port 5901 (5900 + 1) and further will run on 5900 + x. Where, x denotes the port specified when creating config file **vncserver@:x.service for the further users**.
|
||||
|
||||
We'll need to know the IP Address and Port of the server to connect with the client. IP addresses are the unique identity number of the machine. Here, my IP address is 96.126.120.92 and port for this user is 1. We can get the public IP address by executing the below command in a shell or terminal of the machine where VNC Server is installed.
|
||||
|
||||
# curl -s checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
Finally, we installed and configured VNC Server in the machine running CentOS 7 / RHEL 7 (Red Hat Enterprises Linux) . VNC is the most easy FOSS tool for the remote access and also a good alternative to Teamviewer Remote Access. VNC allows a user with VNC client installed to control the machine with VNC Server installed. Here are some commands listed below that are highly useful in VNC . Enjoy !!
|
||||
|
||||
#### Additional Commands : ####
|
||||
|
||||
- To stop VNC service .
|
||||
|
||||
# systemctl stop vncserver@:1.service
|
||||
|
||||
- To disable VNC service from startup.
|
||||
|
||||
# systemctl disable vncserver@:1.service
|
||||
|
||||
- To stop firewall.
|
||||
|
||||
# systemctl stop firewalld.service
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-configure-vnc-server-centos-7-0/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:http://en.wikipedia.org/wiki/Virtual_Network_Computing
|
||||
[2]:http://en.wikipedia.org/wiki/X_Window_System
|
||||
[3]:http://www.tightvnc.com/
|
||||
[4]:https://www.realvnc.com/
|
@ -1,105 +0,0 @@
|
||||
FSSlc translating
|
||||
|
||||
How to access Gmail from the command line on Linux with Alpine
|
||||
================================================================================
|
||||
If you are a command-line lover, I am sure that you welcome with open arms any tool that allows you to perform at least one of your daily tasks using that powerful work environment, e.g., from [scheduling appointments][1] and [managing finances][2] to accessing [Facebook][3] and [Twitter][4].
|
||||
|
||||
In this post I will show you yet another pretty neat use case of Linux command-line: **accessing Google's Gmail service**. To do so, we are going to use Alpine, a versatile ncurses-based, command-line email client (not to be confused with Alpine Linux). We will configure Gmail's IMAP and SMTP settings in Alpine to receive and send email messages via Google mail servers in a terminal environment. At the end of this tutorial, you will realize that it will only take a few minimum steps to use any other mail servers in Alpine.
|
||||
|
||||
Granted there are already outstanding GUI-based email clients such as Thunderbird, Evolution or even web interface. So why would anyone be interested in using a command-line email client to access Gmail? The answer is simple. You need to get something done quickly and want to avoid using system resources unnecessarily. Or you are accessing a minimal headless server that does not have the X server installed. Or the X server on your desktop crashed, and you need to send emails urgently before fixing it. In all these situations Alpine can come in handy and get you going in no time.
|
||||
|
||||
Beyond simple editing, sending and receiving of text-based email messages, Alpine is able to encrypt, decrypt, and digitally sign email messages, and integrate seamlessly with TLS.
|
||||
|
||||
### Installing Alpine on Linux ###
|
||||
|
||||
In Red Hat-based distributions, install Alpine as follows. Note that on RHEL/CentOS, you need to enable [EPEL repository][5] first.
|
||||
|
||||
# yum install alpine
|
||||
|
||||
In Debian, Ubuntu or their derivatives, you will do:
|
||||
|
||||
# aptitude install alpine
|
||||
|
||||
After the installation is complete, you can launch the email client by running:
|
||||
|
||||
# alpine
|
||||
|
||||
The first time you run alpine, it will create a mail directory for the current user inside his/her home directory (~/mail), and bring up the main interface, as shown in the following screencast.
|
||||
|
||||
注:youtube视频,发布的时候做个链接吧
|
||||
<iframe width="615" height="346" frameborder="0" allowfullscreen="" src="http://www.youtube.com/embed/kuKiv3uze4U?feature=oembed"></iframe>
|
||||
|
||||
The user interface has the following sections:
|
||||
|
||||

|
||||
|
||||
Feel free to browse around a bit in order to become acquainted with Alpine. You can always return to the command prompt by hitting the 'Q' key any time. Note that all screens have context-related help available at the bottom of the screen.
|
||||
|
||||
Before proceeding further, we will create a default configuration file for Alpine. In order to do so, quit Alpine, and execute the following command from the command line:
|
||||
|
||||
# alpine -conf > /etc/pine.conf
|
||||
|
||||
### Configuring Alpine to Use a Gmail Account ###
|
||||
|
||||
Once you have installed Alpine and spent at least a few minutes to feel comfortable with its interface and menus, it's time to actually configure it to use an existing Gmail account.
|
||||
|
||||
Before following these steps in Alpine, remember to enable IMAP in your Gmail settings from the webmail interface. Once IMAP access is enabled in your Gmail account, proceed to the following steps to enable reading Gmail messages on Alpine.
|
||||
|
||||
First, launch Alpine.
|
||||
|
||||
Press 'S' for Setup, and then 'L' for collection lists to define groups of folders to help you better organize your mail:
|
||||
|
||||

|
||||
|
||||
Add a new folder by pressing 'A' and fill the required information:
|
||||
|
||||
- **Nickname**: whatever name of your choice.
|
||||
- **Server**: imap.gmail.com/ssl/user=yourgmailusername@gmail.com
|
||||
|
||||
You may leave Path and View blank.
|
||||
|
||||
Then press Ctrl+X and enter your password when prompted:
|
||||
|
||||

|
||||
|
||||
If everything goes as expected, there should be a new folder named after the nickname that you chose earlier. You should find your Gmail mailboxes there:
|
||||
|
||||

|
||||
|
||||
For verification, you can compare the contents of your Alpine's "Gmail Sent" mailbox with those of the web client:
|
||||
|
||||

|
||||
|
||||
By default new mail checking/notification occurs automatically every 150 seconds. You can change this value, along with many others, in the /etc/pine.conf file. This configuration file is heavily commented for clarity. To set the desired mail check interval to 10 seconds, for example, you will need to do:
|
||||
|
||||
# The approximate number of seconds between checks for new mail
|
||||
mail-check-interval=10
|
||||
|
||||
Finally, we need to configure an SMTP server to send email messages via Alpine. Go back to the Alpine's setup screen as explained earlier, and press 'C' to set the address of a Google's SMTP server. You will need to edit the value of the SMTP Server (for sending) line as follows:
|
||||
|
||||
smtp.gmail.com:587/tls/user=yourgmailusername@gmail.com
|
||||
|
||||
You will be prompted to save changes when you press 'E' to exit setup. Once you save the changes, you are on your way to sending emails through Alpine! To do that, just go to Compose in the main menu, and start enjoying your Gmail account from the command line.
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this post we have discussed how to access Gmail in a terminal environment via a lightweight and powerful command-line email client called Alpine. Alpine is free software released under the Apache Software License 2.0, which is a software license compatible with the GPL. Alpine takes pride in being friendly for new users, yet powerful for seasoned system administrators at the same time. I hope that after reading this article you have come to realize how true that last statement is.
|
||||
|
||||
Feel free to leave your comments or questions using the form below. I look forward to hearing from you!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/gmail-command-line-linux-alpine.html
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/gabriel
|
||||
[1]:http://xmodulo.com/schedule-appointments-todo-tasks-linux-terminal.html
|
||||
[2]:http://xmodulo.com/manage-personal-expenses-command-line.html
|
||||
[3]:http://xmodulo.com/access-facebook-command-line-linux.html
|
||||
[4]:http://xmodulo.com/access-twitter-command-line-linux.html
|
||||
[5]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
|
@ -1,151 +0,0 @@
|
||||
zpl1025
|
||||
Systemd Boot Process a Close Look in Linux
|
||||
================================================================================
|
||||
The way Linux system boots up is quite complex and there have always been need to optimize the way it works. The traditional boot up process of Linux system is mainly handled by the well know init process (also known as SysV init boot system), while there have been identified inefficiencies in the init based boot system, systemd on the other hand is another boot up manager for Linux based systems which claims to overcome the shortcomings of [traditional Linux SysV init][2] based system. We will be focusing our discussion on the features and controversies of systemd , but in order to understand it, let’s see how Linux boot process is handled by traditional SysV init based system. Kindly note that Systemd is still in testing phase and future releases of Linux operating systems are preparing to replace their current boot process with Systemd Boot manager.
|
||||
|
||||
### Understanding Linux Boot Process ###
|
||||
|
||||
Init is the very first process that starts when we power on a Linux system, and it is assigned PID 1. It is the parent process of all other processes on the system. When a Linux computer is started, the processor looks for the BIOS in system memory; the BIOS then tests system resources and finds the first boot device, usually set to the hard disk. It looks for the Master Boot Record (MBR) on the hard disk, loads its contents into memory and passes control to it; the rest of the boot process is then controlled by the MBR.
|
||||
|
||||
The Master Boot Record initiates the boot loader (Linux has two well-known boot loaders, GRUB and LILO; around 80% of Linux systems use GRUB). This is when GRUB or LILO loads the kernel. The kernel immediately looks for 'init' in /sbin and executes it. That is where init becomes the parent process of the Linux system. The very first file read by init is /etc/inittab, from which init decides the run level of the operating system. It finds partition information in the /etc/fstab file and mounts the partitions accordingly. Init then launches all the services/scripts specified in the /etc/init.d directory for the default run level. This is the step where all services are initialized by init, one at a time; each service runs as a daemon in the background while init keeps managing them.
|
||||
|
||||
The shutdown process works in pretty much the reverse order: init first stops all services, and the filesystems are unmounted at the last stage.
|
||||
|
||||
The process described above has some shortcomings, and the need to replace traditional init with something better has been felt for a long time now. Several replacements have been developed and implemented; the best-known replacements for the init-based system are Upstart, Epoch, Mudur and systemd. Systemd is the one that has received the most attention and is considered the best of the available alternatives.
|
||||
|
||||
### Understanding Systemd ###
|
||||
|
||||
Reducing boot time and computational overhead is the main objective of systemd. Systemd (System Manager Daemon), originally released under the GNU General Public License, is now under the GNU Lesser General Public License, and it is the most frequently discussed boot and service manager these days. If your Linux system is configured to use the systemd boot manager, the startup process will be handled by systemd instead of traditional SysV init. One of the core features of systemd is that it also supports the post-boot scripts of SysV init.
|
||||
|
||||
Systemd introduces the concept of parallelized boot: it creates a socket for each daemon that needs to be started, and these sockets are abstracted from the processes that use them, which allows daemons to interact with each other. Systemd creates new processes and assigns every process to a control group; processes in different control groups use the kernel to communicate with each other. The way [systemd handles the start-up process][2] is quite neat, and much more optimized compared to the traditional init-based system. Let's review some of the core features of systemd.
|
||||
|
||||
- The boot process is much simpler as compared to the init
|
||||
- Systemd starts system boot processes concurrently and in parallel, which ensures better boot speed
|
||||
- Processes are tracked using control groups, not by PIDs
|
||||
- Improved ways to handle boot and services dependencies.
|
||||
- Capability of system snapshots and restore
|
||||
- Monitoring of started services; also capable of restarting any crashed service
|
||||
- Includes systemd-login module to control user logins.
|
||||
- Ability to add and remove components
|
||||
- Low memory footprint and job scheduling capability
|
||||
- The journald module for event logging, with integration with syslog for the system log.
|
||||
|
||||
Systemd handles the system shutdown process in a well-organized way as well. It has three units located inside the /usr/lib/systemd/ directory, named systemd-halt.service, systemd-poweroff.service and systemd-reboot.service. These are executed when the user chooses to shut down, reboot or halt the Linux system. On shutdown, systemd first unmounts all file systems, disables all swap devices, detaches the storage devices and kills the remaining processes.
|
||||
|
||||

|
||||
|
||||
### Structural Overview of Systemd ###
|
||||
|
||||
Let’s review Linux system boot process with some structural details when it is using systemd as boot and services manager. For the sake of simplicity, we are listing the process in steps below:
|
||||
|
||||
**1.** The very first step when you power on your system is BIOS initialization. The BIOS reads the boot device settings, locates the MBR and hands control over to it (assuming the hard disk is set as the first boot device).
|
||||
|
||||
**2.** The MBR reads information from the GRUB or LILO boot loader and initializes the kernel. GRUB or LILO specifies how the rest of the system boot is handled. If you have specified systemd as the boot manager in the GRUB configuration file, then the rest of the boot process is handled by systemd. Systemd handles the boot and service management process using "targets". The "target" files in systemd are used for grouping different boot units and synchronizing start-up processes.
|
||||
|
||||
**3.** The very first target executed by systemd is **default.target**, but default.target is actually a symlink to **graphical.target**. A symlink in Linux works just like a shortcut in Windows. The graphical.target file is located at /usr/lib/systemd/system/graphical.target. We have shown the contents of the graphical.target file in the following screenshot.
|
||||
|
||||

|
||||
|
||||
**4.** At this stage, **multi-user.target** has been invoked, and this target keeps its further sub-units inside the "/etc/systemd/system/multi-user.target.wants" directory. This target sets up the environment for multi-user support. Non-root users are enabled at this stage of the boot process, and firewall-related services are started at this stage as well.
|
||||
|
||||

|
||||
|
||||
"multi-user.target" passes control to another layer “**basic.target**”.
|
||||
|
||||

|
||||
|
||||
**5.** "basic.target" unit is the one that starts usual services specially graphical manager service. It uses /etc/systemd/system/basic.target.wants directory to decide which services need to be started, basic.target passes on control to **sysinit.target**.
|
||||
|
||||

|
||||
|
||||
**6.** "sysinit.target" starts important system services like file System mounting, swap spaces and devices, kernel additional options etc. sysinit.target passes on startup process to **local-fs.target**. The contents of this target unit are shown in the following screenshot.
|
||||
|
||||

|
||||
|
||||
**7.** local-fs.target starts no user-related services; it handles core low-level services only. This is the target that performs actions on the basis of the /etc/fstab and /etc/inittab files.
|
||||
|
||||
### Analyzing System Boot Performance ###
|
||||
|
||||
Systemd offers tools to identify and troubleshoot boot-related issues and performance concerns. **systemd-analyze** is a built-in command which lets you examine the boot process. You can find out which units ran into errors during boot and further trace and correct boot component issues. Some useful systemd-analyze commands are listed below.
|
||||
|
||||
**systemd-analyze time** shows the time spent in kernel, and normal user space.
|
||||
|
||||
$ systemd-analyze time
|
||||
|
||||
Startup finished in 1440ms (kernel) + 3444ms (userspace)
|
||||
|
||||
**systemd-analyze blame** prints a list of all running units, sorted by the time they took to initialize; this way you can see which services take a long time to start during boot.
|
||||
|
||||
$ systemd-analyze blame
|
||||
|
||||
2001ms mysqld.service
|
||||
234ms httpd.service
|
||||
191ms vmms.service
|
||||
|
||||
**systemd-analyze verify** shows whether there are any syntax errors in the system units. **systemd-analyze plot** can be used to write the whole startup sequence to an SVG format file. The whole boot process is very lengthy to read, so with this command we can dump the output of the whole boot process into a file and then read and analyze it further. The following command takes care of this.
|
||||
|
||||
systemd-analyze plot > boot.svg
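Newer systemd releases also provide **systemd-analyze critical-chain**, which prints just the chain of units that gated the boot together with the time each one became active. If your systemd version supports it, it is often quicker to read than the full SVG plot:

    $ systemd-analyze critical-chain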
|
||||
|
||||
### Systemd Controversies ###
|
||||
|
||||
Systemd has not been lucky enough to receive love from everyone; some professionals and administrators have different opinions on how it works and how it is developed. According to critics of systemd, it is "not Unix-like" because it tries to replace some system services. Some professionals also don't like the idea of binary configuration files. It is said that editing systemd configuration is not an easy task and that there are no graphical tools available for this purpose.
|
||||
|
||||
### Test Systemd on Ubuntu 14.04 and 12.04 ###
|
||||
|
||||
Originally, Ubuntu decided to replace its current boot process with systemd in Ubuntu 16.04 LTS. Ubuntu 16.04 is supposed to be released in April 2016, but considering the popularity of and demand for systemd, the upcoming **Ubuntu 15.04** will have it as its default boot manager. The good news is that users of Ubuntu 14.04 Trusty Tahr and Ubuntu 12.04 Precise Pangolin can already test systemd on their machines. The test process is not very complex; all you need to do is add the related PPA to the system, update the repository and perform a system upgrade.
|
||||
|
||||
**Disclaimer**: Please note that systemd is still in the testing and development stage for Ubuntu. Testing packages might have unknown issues, and in the worst-case scenario they might break your system configuration. Make sure you back up your important data before trying this upgrade.
|
||||
|
||||
Run the following command in the terminal to add the PPA to your Ubuntu system:
|
||||
|
||||
sudo add-apt-repository ppa:pitti/systemd
|
||||
|
||||
You will see a warning message here because we are trying to use a temporary/testing PPA, which is not recommended for production machines.
|
||||
|
||||

|
||||
|
||||
Now update the APT Package Manager repositories by running the following command.
|
||||
|
||||
sudo apt-get update
|
||||
|
||||

|
||||
|
||||
Perform system upgrade by running the following command.
|
||||
|
||||
sudo apt-get dist-upgrade
|
||||
|
||||

|
||||
|
||||
That’s all, you should be able to see configuration files of systemd on your ubuntu system now, just browse to the /lib/systemd/ directory and see the files there.
|
||||
|
||||
Alright, it's time to edit the GRUB configuration file and specify systemd as the default boot manager. Edit the grub file using the Gedit text editor.
|
||||
|
||||
sudo gedit /etc/default/grub
|
||||
|
||||

|
||||
|
||||
Here, edit the GRUB_CMDLINE_LINUX_DEFAULT parameter in this file and include "**init=/lib/systemd/systemd**" in its value.
|
||||
|
||||

|
||||
|
||||
That's all; your Ubuntu system is no longer using its traditional boot manager, it's using systemd now. Reboot your system and watch the systemd boot process.
|
||||
|
||||

|
||||
|
||||
### Conclusion ###
|
||||
|
||||
Systemd is no doubt a step forward in improving the Linux boot process; it's an awesome suite of libraries and daemons that together improve the system boot and shutdown process. Many Linux distributions are preparing to support it as their official boot manager, and in future releases of Linux distros we can expect to see systemd startup. On the other hand, in order to succeed and be adopted on a wide scale, systemd should also address the concerns of its critics.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/systemd-boot-process/
|
||||
|
||||
作者:[Aun Raza][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunrz/
|
||||
[1]:http://linoxide.com/booting/boot-process-of-linux-in-detail/
|
||||
[2]:http://0pointer.de/blog/projects/self-documented-boot.html
|
@ -1,268 +0,0 @@
|
||||
translating by martin.
|
||||
|
||||
11 Linux Terminal Commands That Will Rock Your World
|
||||
================================================================================
|
||||
I have been using Linux for about 10 years and what I am going to show you in this article is a list of Linux commands, tools and clever little tricks that I wish somebody had shown me from the outset instead of stumbling upon them as I went along.
|
||||
|
||||

|
||||
Linux Keyboard Shortcuts.
|
||||
|
||||
### 1. Useful Command Line Keyboard Shortcuts ###
|
||||
|
||||
The following keyboard shortcuts are incredibly useful and will save you loads of time:
|
||||
|
||||
- CTRL + U - Cuts text up until the cursor.
|
||||
- CTRL + K - Cuts text from the cursor until the end of the line
|
||||
- CTRL + Y - Pastes text
|
||||
- CTRL + E - Move cursor to end of line
|
||||
- CTRL + A - Move cursor to the beginning of the line
|
||||
- ALT + F - Jump forward to next space
|
||||
- ALT + B - Skip back to previous space
|
||||
- ALT + Backspace - Delete previous word
|
||||
- CTRL + W - Cut word behind cursor
|
||||
- Shift + Insert - Pastes text into terminal
|
||||
|
||||
Just so that the commands above make sense look at the next line of text.
|
||||
|
||||
sudo apt-get intall programname
|
||||
|
||||
As you can see I have a spelling error and for the command to work I would need to change "intall" to "install".
|
||||
|
||||
Imagine the cursor is at the end of the line. There are various ways to get back to the word install to change it.
|
||||
|
||||
I could press ALT + B twice which would put the cursor in the following position (denoted by the ^ symbol):
|
||||
|
||||
sudo apt-get^intall programname
|
||||
|
||||
Now you could use the cursor keys to move along and insert the missing 's' into "intall".
|
||||
|
||||
Another useful shortcut is "shift + insert", especially if you need to copy text from a browser into the terminal.
|
||||
|
||||

|
||||
|
||||
### 2. SUDO !! ###
|
||||
|
||||
You are going to really thank me for the next command if you don't already know it, because until you know it exists you will curse yourself every time you enter a command and the words "permission denied" appear.
|
||||
|
||||
- sudo !!
|
||||
|
||||
How do you use sudo !!? Simply. Imagine you have entered the following command:
|
||||
|
||||
apt-get install ranger
|
||||
|
||||
The words "Permission denied" will appear unless you are logged in with elevated privileges.
|
||||
|
||||
sudo !! runs the previous command as sudo. So the previous command now becomes:
|
||||
|
||||
sudo apt-get install ranger
|
||||
|
||||
If you don't know what sudo is [start here][1].
|
||||
|
||||

|
||||
Pause Terminal Applications.
|
||||
|
||||
### 3. Pausing Commands And Running Commands In The Background ###
|
||||
|
||||
I have already written a guide showing how to run terminal commands in the background.
|
||||
|
||||
- CTRL + Z - Pauses an application
|
||||
- fg - Returns you to the application
|
||||
|
||||
So what is this tip about?
|
||||
|
||||
Imagine you have opened a file in nano as follows:
|
||||
|
||||
sudo nano abc.txt
|
||||
|
||||
Halfway through typing text into the file you realise that you quickly want to type another command into the terminal but you can't because you opened nano in foreground mode.
|
||||
|
||||
You may think your only option is to save the file, exit nano, run the command and then re-open nano.
|
||||
|
||||
All you have to do is press CTRL + Z and the foreground application will pause and you will be returned to the command line. You can then run any command you like and when you have finished return to your previously paused session by entering "fg" into the terminal window and pressing return.
|
||||
|
||||
An interesting thing to try out is to open a file in nano, enter some text and pause the session. Now open another file in nano, enter some text and pause the session. If you now enter "fg" you return to the second file you opened in nano. If you exit nano and enter "fg" again you return to the first file you opened within nano.
|
||||
|
||||

|
||||
nohup.
|
||||
|
||||
### 4. Use nohup To Run Commands After You Log Out Of An SSH Session ###
|
||||
|
||||
The [nohup command][2] is really useful if you use the ssh command to log onto other machines.
|
||||
|
||||
So what does nohup do?
|
||||
|
||||
Imagine you are logged on to another computer remotely using ssh and you want to run a command that takes a long time, then exit the ssh session but leave the command running even though you are no longer connected. nohup lets you do just that.
|
||||
|
||||
For instance I use my [Raspberry PI][3] to download distributions for review purposes.
|
||||
|
||||
I never have my Raspberry PI connected to a display nor do I have a keyboard and mouse connected to it.
|
||||
|
||||
I always connect to the Raspberry PI via [ssh][4] from a laptop. If I started downloading a large file on the Raspberry PI without using the nohup command then I would have to wait for the download to finish before logging off the ssh session and before shutting down the laptop. If I did this then I may as well have not used the Raspberry PI to download the file at all.
|
||||
|
||||
To use nohup all I have to type is nohup followed by the command as follows:
|
||||
|
||||
nohup wget http://mirror.is.co.za/mirrors/linuxmint.com/iso//stable/17.1/linuxmint-17.1-cinnamon-64bit.iso &
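Unless you redirect the output yourself, nohup appends whatever the command prints to a file called nohup.out in the directory you started it from, so the next time you log in you can check how the download went:

    tail nohup.out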
|
||||
|
||||

|
||||
Schedule tasks with at.
|
||||
|
||||
### 5. Running A Linux Command 'AT' A Specific Time ###
|
||||
|
||||
The 'nohup' command is good if you are connected to an SSH server and you want the command to remain running after logging out of the SSH session.
|
||||
|
||||
Imagine you want to run that same command at a specific point in time.
|
||||
|
||||
The 'at' command allows you to do just that. 'at' can be used as follows.
|
||||
|
||||
at 10:38 PM Fri
|
||||
at> cowsay 'hello'
|
||||
at> CTRL + D
|
||||
|
||||
The above command will run the program [cowsay][5] at 10:38 PM on Friday evening.
|
||||
|
||||
The syntax is 'at' followed by the date and time to run.
|
||||
|
||||
When the at> prompt appears enter the command you want to run at the specified time.
|
||||
|
||||
Pressing CTRL + D ends input, schedules the job and returns you to the normal prompt.
|
||||
|
||||
There are lots of different date and time formats and it is worth checking the man pages for more ways to use 'at'.
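A few other forms 'at' understands, plus the companion commands for listing and removing queued jobs, are shown below; the job number given to atrm is just an example taken from atq's output:

    at now + 2 hours
    at 09:00 tomorrow
    atq        # list the jobs still waiting to run
    atrm 3     # remove queued job number 3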
|
||||
|
||||

|
||||
|
||||
### 6. Man Pages ###
|
||||
|
||||
Man pages give you an outline of what commands are supposed to do and the switches that can be used with them.
|
||||
|
||||
The man pages are kind of dull on their own. (I guess they weren't designed to excite us).
|
||||
|
||||
You can however do things to make your usage of man more appealing.
|
||||
|
||||
export PAGER=most
|
||||
|
||||
You will need to install 'most' for this to work, but when you do it makes your man pages more colourful.
|
||||
|
||||
You can limit the width of the man page to a certain number of columns using the following command:
|
||||
|
||||
export MANWIDTH=80
|
||||
|
||||
Finally, if you have a browser available you can open any man page in the default browser by using the -H switch as follows:
|
||||
|
||||
man -H <command>
|
||||
|
||||
Note this only works if you have a default browser set up within the $BROWSER environment variable.
|
||||
|
||||

|
||||
View Processes With htop.
|
||||
|
||||
### 7. Use htop To View And Manage Processes ###
|
||||
|
||||
Which command do you currently use to find out which processes are running on your computer? My bet is that you are using '[ps][6]' and that you are using various switches to get the output you desire.
|
||||
|
||||
Install '[htop][7]'. It is definitely a tool you will wish that you installed earlier.
|
||||
|
||||
htop provides a list of all running processes in the terminal, much like Task Manager in Windows.
|
||||
|
||||
You can use a mixture of function keys to change the sort order and the columns that are displayed. You can also kill processes from within htop.
|
||||
|
||||
To run htop simply type the following into the terminal window:
|
||||
|
||||
htop
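htop also accepts a few command-line switches worth knowing; check 'man htop' for your version, and note that the user name below is only an example:

    htop -u www-data    # show only processes owned by the user www-data
    htop -d 50          # refresh every 5 seconds (the delay is given in tenths of a second)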
|
||||
|
||||

|
||||
Command Line File Manager - Ranger.
|
||||
|
||||
### 8. Navigate The File System Using ranger ###
|
||||
|
||||
If htop is immensely useful for controlling the processes running via the command line then [ranger][8] is immensely useful for navigating the file system using the command line.
|
||||
|
||||
You will probably need to install ranger to be able to use it but once installed you can run it simply by typing the following into the terminal:
|
||||
|
||||
ranger
|
||||
|
||||
The command line window will be much like any other file manager but it works left to right rather than top to bottom meaning that if you use the left arrow key you work your way up the folder structure and the right arrow key works down the folder structure.
|
||||
|
||||
It is worth reading the man pages before using ranger so that you can get used to all keyboard switches that are available.
|
||||
|
||||

|
||||
Cancel Linux Shutdown.
|
||||
|
||||
### 9. Cancel A Shutdown ###
|
||||
|
||||
So you started the [shutdown][9] either via the command line or from the GUI and you realised that you really didn't want to do that.
|
||||
|
||||
shutdown -c
|
||||
|
||||
Note that if the shutdown has already started then it may be too late to stop the shutdown.
|
||||
|
||||
Another command to try is as follows:
|
||||
|
||||
- [pkill][10] shutdown
|
||||
|
||||

|
||||
Kill Hung Processes With XKill.
|
||||
|
||||
### 10. Killing Hung Processes The Easy Way ###
|
||||
|
||||
Imagine you are running an application and for whatever reason it hangs.
|
||||
|
||||
You could use 'ps -ef' to find the process and then kill the process or you could use 'htop'.
|
||||
|
||||
There is a quicker and easier command that you will love called [xkill][11].
|
||||
|
||||
Simply type the following into a terminal and then click on the window of the application you want to kill.
|
||||
|
||||
xkill
|
||||
|
||||
What happens though if the whole system is hanging?
|
||||
|
||||
Hold down the 'alt' and 'sysrq' keys on your keyboard and whilst they are held down type the following slowly:
|
||||
|
||||
- [REISUB][12]
|
||||
|
||||
This will restart your computer without having to hold in the power button.
|
||||
|
||||

|
||||
youtube-dl.
|
||||
|
||||
### 11. Download Youtube Videos ###
|
||||
|
||||
Generally speaking most of us are quite happy for Youtube to host the videos and we watch them by streaming them through our chosen media player.
|
||||
|
||||
If you know you are going to be offline for a while (i.e. due to a plane journey or travelling between the south of Scotland and the north of England) then you may wish to download a few videos onto a pen drive and watch them at your leisure.
|
||||
|
||||
All you have to do is install youtube-dl from your package manager.
|
||||
|
||||
You can use youtube-dl as follows:
|
||||
|
||||
youtube-dl url-to-video
|
||||
|
||||
You can get the url to any video on Youtube by clicking the share link on the video's page. Simply copy the link and paste it into the command line (using the shift + insert shortcut).
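If you want something other than the default quality, youtube-dl can also list the formats a video offers and let you pick one. The format code 22 below is only an example; the available codes vary from video to video:

    youtube-dl -F url-to-video      # list the available format codes
    youtube-dl -f 22 url-to-video   # download the format with code 22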
|
||||
|
||||
### Summary ###
|
||||
|
||||
I hope that you found this list useful and that you are thinking "I didn't know you could do that" for at least 1 of the 11 items listed.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linux.about.com/od/commands/tp/11-Linux-Terminal-Commands-That-Will-Rock-Your-World.htm
|
||||
|
||||
作者:[Gary Newell][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linux.about.com/bio/Gary-Newell-132058.htm
|
||||
[1]:http://linux.about.com/cs/linux101/g/sudo.htm
|
||||
[2]:http://linux.about.com/library/cmd/blcmdl1_nohup.htm
|
||||
[3]:http://linux.about.com/od/mobiledevicesother/a/Raspberry-Pi-Computer-Running-Linux.htm
|
||||
[4]:http://linux.about.com/od/commands/l/blcmdl1_ssh.htm
|
||||
[5]:http://linux.about.com/cs/linux101/g/cowsay.htm
|
||||
[6]:http://linux.about.com/od/commands/l/blcmdl1_ps.htm
|
||||
[7]:http://www.linux.com/community/blogs/133-general-linux/745323-5-commands-to-check-memory-usage-on-linux
|
||||
[8]:http://ranger.nongnu.org/
|
||||
[9]:http://linux.about.com/od/commands/l/blcmdl8_shutdow.htm
|
||||
[10]:http://linux.about.com/library/cmd/blcmdl1_pkill.htm
|
||||
[11]:http://linux.about.com/od/funnymanpages/a/funman_xkill.htm
|
||||
[12]:http://blog.kember.net/articles/reisub-the-gentle-linux-restart/
|
@ -1,100 +0,0 @@
|
||||
translating wi-cuckoo LLAP
|
||||
How to Interactively Create a Docker Container
|
||||
================================================================================
|
||||
Hi everyone, today we'll learn how to interactively create a docker container from a docker image. Once we start a process in Docker from an image, Docker fetches the image and its parent image, and repeats the process until it reaches the base image. Then the Union File System adds a read-write layer on top. That read-write layer, plus the information about its parent image and some other information like its unique id, networking configuration and resource limits, is called a **container**. Containers have states, as they can change from the **running** to the **exited** state. A container in the **running** state includes a tree of processes running on the CPU, isolated from the other processes running on the host, whereas **exited** is the state of the file system with its exit value preserved. You can start, stop, and restart a container.
|
||||
|
||||
Docker technology has brought a remarkable change to the field of IT, enabling cloud services for sharing applications and automating workflows, allowing apps to be quickly assembled from components, and eliminating the friction between development, QA and production environments. In this article, we'll build a Fedora instance in which we'll host a website running under the Apache Web Server.
|
||||
|
||||
Here is quick and easy tutorial on how we can create a container in an interactive method using an interactive shell.
|
||||
|
||||
### 1. Running a Docker Instance ###
|
||||
|
||||
Docker first tries to fetch and run the required image locally, and if it's not found on the local host it pulls it from the [Docker Public Registry Hub][1]. Here, we'll fetch a Fedora image, create an instance in a Docker container and attach a bash shell to the tty.
|
||||
|
||||
# docker run -i -t fedora bash
|
||||
|
||||

|
||||
|
||||
### 2. Installing Apache Web Server ###
|
||||
|
||||
Now that our Fedora base image instance is ready, we'll install the Apache Web Server interactively, without writing a Dockerfile for it. To do so, we'll need to run the following commands in a terminal or shell.
|
||||
|
||||
# yum update
|
||||
|
||||

|
||||
|
||||
# yum install httpd
|
||||
|
||||

|
||||
|
||||
# exit
|
||||
|
||||
### 3. Saving the Image ###
|
||||
|
||||
Now, we'll save the changes we made to the Fedora instance. To do that, we first need to know the Container ID of the instance. To get it, we'll run the following command.
|
||||
|
||||
# docker ps -a
|
||||
|
||||

|
||||
|
||||
Then, we'll save the changes as a new image by running the below command.
|
||||
|
||||
# docker commit c16378f943fe fedora-httpd
|
||||
|
||||

|
||||
|
||||
Here, the changes are saved using the Container ID and the image name fedora-httpd. To confirm that the new image has been created, we'll run the following command.
|
||||
|
||||
# docker images
|
||||
|
||||

|
||||
|
||||
### 4. Adding the Contents to the new image ###
|
||||
|
||||
As our new Fedora Apache image has been created successfully, we now want to add the web content, including our website, to the Apache Web Server so that the site works out of the box. To do so, we'll create a new Dockerfile which handles copying the web content and exposing port 80. We'll create the Dockerfile using our favorite text editor as shown below.
|
||||
|
||||
# nano Dockerfile
|
||||
|
||||
Now, we'll need to add the following lines into that file.
|
||||
|
||||
FROM fedora-httpd
|
||||
ADD mysite.tar /tmp/
|
||||
RUN mv /tmp/mysite/* /var/www/html
|
||||
EXPOSE 80
|
||||
ENTRYPOINT [ "/usr/sbin/httpd" ]
|
||||
CMD [ "-D", "FOREGROUND" ]
|
||||
|
||||

|
||||
|
||||
Here, in the above Dockerfile, the web content we have in mysite.tar gets automatically extracted to the /tmp/ folder. Then, the entire site is moved to the Apache web root, i.e. /var/www/html/, and EXPOSE 80 opens port 80 so that the website is reachable normally. The entrypoint is set to /usr/sbin/httpd so that the Apache server executes.
|
||||
|
||||
### 5. Building and running a Container ###
|
||||
|
||||
Now, we'll build our Container using the Dockerfile we just created in order to add our website on it. To do so, we'll need to run the following command.
|
||||
|
||||
    # docker build --rm -t mysite .
|
||||
|
||||

|
||||
|
||||
After building our new container, we'll want to run the container using the command below.
|
||||
|
||||
# docker run -d -P mysite
|
||||
|
||||

|
||||
|
||||
### Conclusion ###
|
||||
|
||||
Finally, we've successfully built a Docker container interactively. With this method, we build our containers and images directly via interactive shell commands, which makes it quick and easy to build and deploy images and containers. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/interactively-create-docker-container/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://registry.hub.docker.com/
|
@ -1,35 +0,0 @@
|
||||
Linux FAQs with Answers--How to upgrade Docker on Ubuntu
|
||||
================================================================================
|
||||
> **Question**: I installed Docker on Ubuntu using its standard repositories. However, the default Docker installation does not meet the version requirement of another application of mine that relies on Docker. How can I upgrade Docker to the latest version on Ubuntu?
|
||||
|
||||
Since Docker was first released in 2013, it has been evolving fast into a full-blown open platform for distributed applications. To meet the industry's expectations, Docker is being aggressively developed and constantly upgraded with new features. Chances are that the stock Docker that comes with your Ubuntu distribution is quickly outdated. For example, Ubuntu 14.10 Utopic comes with Docker version 1.2.0, while the latest Docker version is 1.5.0.
|
||||
|
||||

|
||||
|
||||
For those of you who want to stay up-to-date with Docker's latest developments, Canonical maintains a separate PPA for Docker. Using this PPA repository, you can easily upgrade Docker to the latest version on Ubuntu.
|
||||
|
||||
Here is how to set up Docker PPA and upgrade Docker.
|
||||
|
||||
$ sudo add-apt-repository ppa:docker-maint/testing
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install docker.io
|
||||
|
||||
Now check the version of installed Docker:
|
||||
|
||||
$ docker --version
|
||||
|
||||
----------
|
||||
|
||||
Docker version 1.5.0-dev, build a78ce5c
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/upgrade-docker-ubuntu.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
@ -0,0 +1,147 @@
|
||||
Conky – The Ultimate X Based System Monitor Application
|
||||
================================================================================
|
||||
Conky is a system monitor application written in ‘C’ Programming Language and released under GNU General Public License and BSD License. It is available for Linux and BSD Operating System. The application is X (GUI) based that was originally forked from [Torsmo][1].
|
||||
|
||||
#### Features ####
|
||||
|
||||
- Simple User Interface
|
||||
- Higher Degree of configuration
|
||||
- It can show system stats using built-in objects (300+) as well as external scripts, either on the desktop or in its own container.
|
||||
- Low on Resource Utilization
|
||||
- Shows system stats for a wide range of system variables which includes but not restricted to CPU, memory, swap, Temperature, Processes, Disk, Network, Battery, email, System messages, Music player, weather, breaking news, updates and blah..blah..blah
|
||||
- Available in Default installation of OS like CrunchBang Linux and Pinguy OS.
|
||||
|
||||
#### Lesser Known Facts about Conky ####
|
||||
|
||||
- The Name conky was derived from a Canadian Television Show.
|
||||
- It has already been ported to Nokia N900.
|
||||
- It is no longer officially maintained.
|
||||
|
||||
### Conky Installation and Usage in Linux ###
|
||||
|
||||
Before we install conky, we need to install packages like lm-sensors, curl and hddtemp using following command.
|
||||
|
||||
# apt-get install lm-sensors curl hddtemp
|
||||
|
||||
Time to detect the sensors.
|
||||
|
||||
# sensors-detect
|
||||
|
||||
**Note**: Answer ‘Yes‘ when prompted!
|
||||
|
||||
Check all the detected sensors.
|
||||
|
||||
# sensors
|
||||
|
||||
#### Sample Output ####
|
||||
|
||||
acpitz-virtual-0
|
||||
Adapter: Virtual device
|
||||
temp1: +49.5°C (crit = +99.0°C)
|
||||
|
||||
coretemp-isa-0000
|
||||
Adapter: ISA adapter
|
||||
Physical id 0: +49.0°C (high = +100.0°C, crit = +100.0°C)
|
||||
Core 0: +49.0°C (high = +100.0°C, crit = +100.0°C)
|
||||
Core 1: +49.0°C (high = +100.0°C, crit = +100.0°C)
|
||||
|
||||
Conky can be installed from the repositories, or it can be compiled from source.
|
||||
|
||||
# yum install conky [On RedHat systems]
|
||||
# apt-get install conky-all [On Debian systems]
|
||||
|
||||
**Note**: Before you install conky on Fedora/CentOS, you must have enabled [EPEL repository][2].
|
||||
|
||||
After conky has been installed, just issue following command to start it.
|
||||
|
||||
$ conky &
|
||||
|
||||

|
||||
Conky Monitor in Action
|
||||
|
||||
It will run conky in a popup-like window. It uses the basic conky configuration file located at /etc/conky/conky.conf.
|
||||
|
||||
You may want to integrate conky with the desktop and not have a popup-like window appear every time. Here is what you need to do.
|
||||
|
||||
Copy the configuration file /etc/conky/conky.conf to your home directory and rename it as ‘`.conkyrc`‘. The dot (.) at the beginning ensures that the configuration file is hidden.
|
||||
|
||||
$ cp /etc/conky/conky.conf /home/$USER/.conkyrc
|
||||
|
||||
Now restart conky to take new changes.
|
||||
|
||||
$ killall -SIGUSR1 conky
|
||||
|
||||

|
||||
Conky Monitor Window
|
||||
|
||||
You may edit the conky configuration file located in your home directory. The configuration file is very easy to understand.
|
||||
|
||||
Here is a sample configuration of conky.
|
||||
|
||||

|
||||
Conky Configuration
|
||||
|
||||
From the above window you can modify color, borders, size, scale, background, alignment and several other properties. By setting different alignments to different conky window, we can run more than one conky script at a time.
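As a rough illustration only, here is a minimal sketch in the pre-1.10 configuration syntax that conky 1.9 uses; the settings and variables below are common examples rather than a recommended setup:

    alignment top_right
    background no
    update_interval 1.0
    own_window yes
    own_window_type normal
    default_color white

    TEXT
    ${time %a %d %b %H:%M}
    Uptime: $uptime
    CPU: ${cpu}% ${cpubar 4}
    RAM: $mem / $memmax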
|
||||
|
||||
**Using script other than the default for conky and where to find it?**
|
||||
|
||||
You may write your own conky script or use one that is available on the Internet. We don't suggest you use any script you find on the web, which can be potentially dangerous, unless you know what you are doing. However, a few well-known threads and pages have conky scripts that you can trust, as mentioned below.
|
||||
|
||||
- [http://ubuntuforums.org/showthread.php?t=281865][3]
|
||||
- [http://conky.sourceforge.net/screenshots.html][4]
|
||||
|
||||
At the above URLs, you will find that every screenshot has a hyperlink which redirects to the script file.
|
||||
|
||||
#### Testing Conky Script ####
|
||||
|
||||
Here I will be running a third-party conky script on my Debian Jessie machine, to test.
|
||||
|
||||
$ wget https://github.com/alexbel/conky/archive/master.zip
|
||||
$ unzip master.zip
|
||||
|
||||
Change current working directory to just extracted directory.
|
||||
|
||||
$ cd conky-master
|
||||
|
||||
Rename the secrets.yml.example to secrets.yml.
|
||||
|
||||
$ mv secrets.yml.example secrets.yml
|
||||
|
||||
Install Ruby before you run this (Ruby) script.
|
||||
|
||||
$ sudo apt-get install ruby
|
||||
$ ruby starter.rb
|
||||
|
||||

|
||||
Conky Fancy Look
|
||||
|
||||
**Note**: This script can be modified to show your current weather, temperature, etc.
|
||||
|
||||
If you want to start conky at boot, add the one-liner below to Startup Applications.
|
||||
|
||||
conky --pause 10
|
||||
save and exit.
|
||||
|
||||
And finally… such a lightweight and useful piece of GUI eye candy is no longer in active development and is not officially maintained anymore. The last stable release was conky 1.9.0, released on May 03, 2012. A thread on the Ubuntu forums where users share their configurations has grown past 2,000 pages. (link to forum: [http://ubuntuforums.org/showthread.php?t=281865/][5])
|
||||
|
||||
- [Conky Homepage][6]
|
||||
|
||||
That’s all for now. Keep connected. Keep commenting. Share your thoughts and configuration in comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/install-conky-in-ubuntu-debian-fedora/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://torsmo.sourceforge.net/
|
||||
[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
|
||||
[3]:http://ubuntuforums.org/showthread.php?t=281865
|
||||
[4]:http://conky.sourceforge.net/screenshots.html
|
||||
[5]:http://ubuntuforums.org/showthread.php?t=281865/
|
||||
[6]:http://conky.sourceforge.net/
|
@ -0,0 +1,104 @@
|
||||
How to Generate/Encrypt/Decrypt Random Passwords in Linux
|
||||
================================================================================
|
||||
We have taken the initiative to produce a Linux tips and tricks series. If you've missed the previous article of this series, you may like to visit the link below.
|
||||
|
||||
|
||||
- [5 Interesting Command Line Tips and Tricks in Linux][1]
|
||||
|
||||
In this article, we will share some interesting Linux tips and tricks to generate random passwords and also how to encrypt and decrypt passwords, with or without the salt method.
|
||||
|
||||
Security is one of the major concerns of the digital age. We put passwords on computers, email, cloud, phone, documents and what not. We all know the basics of choosing a password that is easy to remember and hard to guess. What about some sort of automatic, machine-based password generation? Believe me, Linux is very good at this.
|
||||
|
||||
**1. Generate a random unique password of length equal to 10 characters using the command 'pwgen'. If you have not installed pwgen yet, use Apt or YUM to get it.**
|
||||
|
||||
$ pwgen 10 1
|
||||
|
||||

|
||||
Generate Random Unique Password
|
||||
|
||||
Generate several random unique passwords of character length 50 in one go!
|
||||
|
||||
$ pwgen 50
|
||||
|
||||

|
||||
Generate Multiple Random Passwords
|
||||
|
||||
**2. You may use 'makepasswd' to generate a random, unique password of a given length of your choice. Before you can fire the makepasswd command, make sure you have installed it. If not, try installing the package 'makepasswd' using Apt or YUM.**
|
||||
|
||||
Generate a random password of character length 10. Default Value is 10.
|
||||
|
||||
$ makepasswd
|
||||
|
||||

|
||||
makepasswd Generate Unique Password
|
||||
|
||||
Generate a random password of character length 50.
|
||||
|
||||
$ makepasswd --char 50
|
||||
|
||||

|
||||
Generate Length 50 Password
|
||||
|
||||
Generate 7 random password of 20 characters.
|
||||
|
||||
$ makepasswd --char 20 --count 7
|
||||
|
||||

|
||||
|
||||
**3. Encrypt a password using crypt along with a salt. The salt can be provided manually or automatically.**
|
||||
|
||||
For those who may not be aware of salt:
|
||||
|
||||
A salt is random data which serves as an additional input to a one-way function, in order to protect passwords against dictionary attacks.
|
||||
|
||||
Make sure you have mkpasswd installed before proceeding.
|
||||
|
||||
The command below will encrypt the password with a salt. The salt value is chosen randomly and automatically, so every time you run the command it will generate different output, because it accepts a random value for the salt each time.
|
||||
|
||||
$ mkpasswd tecmint
|
||||
|
||||

|
||||
Encrypt Password Using Crypt
|
||||
|
||||
Now let's define the salt ourselves. It will output the same result every time. Note that you can input anything of your choice as the salt.
|
||||
|
||||
$ mkpasswd tecmint -s tt
|
||||
|
||||

|
||||
Encrypt Password Using Salt
|
||||
|
||||
Moreover, mkpasswd is interactive: if you don't provide the password along with the command, it will ask for the password interactively.
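If mkpasswd happens to be unavailable on your distribution, openssl can produce a comparable salted hash. Note that the example below uses the MD5-based crypt scheme (-1), so the output format differs from mkpasswd's default DES-style output:

    $ openssl passwd -1 -salt tt tecmint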
|
||||
|
||||
**4. Encrypt a string, say "Tecmint-is-a-Linux-Community", using aes-256-cbc encryption with the password "tecmint" and a salt.**
|
||||
|
||||
# echo Tecmint-is-a-Linux-Community | openssl enc -aes-256-cbc -a -salt -pass pass:tecmint
|
||||
|
||||

|
||||
Encrypt A String in Linux
|
||||
|
||||
Here, in the above example, the output of the [echo command][2] is piped to the openssl command, which encrypts the input using Encoding with Cipher (enc) with the aes-256-cbc encryption algorithm and, finally, with a salt, using the password (tecmint).
|
||||
|
||||
**5. Decrypt the above string using the openssl command with aes-256-cbc decryption.**
|
||||
|
||||
# echo U2FsdGVkX18Zgoc+dfAdpIK58JbcEYFdJBPMINU91DKPeVVrU2k9oXWsgpvpdO/Z | openssl enc -aes-256-cbc -a -d -salt -pass pass:tecmint
|
||||
|
||||

|
||||
Decrypt String in Linux
|
||||
|
||||
That’s all for now. If you know any such tips and tricks you may send us your tips at admin@tecmint.com, your tip will be published under your name and also we will include it in our future article.
|
||||
|
||||
Keep connected. Keep Connecting. Stay Tuned. Don’t forget to provide us with your valuable feedback in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/generate-encrypt-decrypt-random-passwords-in-linux/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/5-linux-command-line-tricks/
|
||||
[2]:http://www.tecmint.com/echo-command-in-linux/
|
@ -0,0 +1,348 @@
|
||||
How to Install WordPress with Nginx in a Docker Container
|
||||
================================================================================
|
||||
Hi all, today we'll learn how to install WordPress with the Nginx web server in a Docker container. WordPress is an awesome free and open source content management system running thousands of websites throughout the globe. [Docker][1] is an open source project that provides an open platform to pack, ship and run any application as a lightweight container. It has no boundaries of language support, frameworks or packaging system and can be run anywhere, anytime, from small home computers to high-end servers. This makes containers great building blocks for deploying and scaling web apps, databases and back-end services without depending on a particular stack or provider.
|
||||
|
||||
Today, we'll deploy a docker container with the latest WordPress package and its necessary prerequisites, i.e. the Nginx web server, PHP5, MariaDB server, etc. Here are some short and sweet steps to successfully install WordPress running on Nginx in a Docker container.
|
||||
|
||||
### 1. Installing Docker ###
|
||||
|
||||
Before we really start, we'll need to make sure that we have Docker installed on our Linux machine. Here, we are running CentOS 7 as the host, so we'll use the yum package manager to install docker with the command below.
|
||||
|
||||
# yum install docker
|
||||
|
||||

|
||||
|
||||
# systemctl restart docker.service
|
||||
|
||||
### 2. Creating WordPress Dockerfile ###
|
||||
|
||||
We'll need to create a Dockerfile which will automate the installation of WordPress and its necessary prerequisites. This Dockerfile will be used to build the image of the WordPress installation we create. The WordPress Dockerfile fetches a CentOS 7 image from the Docker Registry Hub and updates the system with the latest available packages. It then installs the necessary software, such as the Nginx web server, PHP, MariaDB and the OpenSSH server, which is essential for the Docker container to work. It then executes a script which initializes the WordPress installation out of the box.
|
||||
|
||||
# nano Dockerfile
|
||||
|
||||
Then, we'll need to add the following lines of configuration inside that Dockerfile.
|
||||
|
||||
FROM centos:centos7
|
||||
MAINTAINER The CentOS Project <cloud-ops@centos.org>
|
||||
|
||||
RUN yum -y update; yum clean all
|
||||
RUN yum -y install epel-release; yum clean all
|
||||
RUN yum -y install mariadb mariadb-server mariadb-client nginx php-fpm php-cli php-mysql php-gd php-imap php-ldap php-odbc php-pear php-xml php-xmlrpc php-magickwand php-magpierss php-mbstring php-mcrypt php-mssql php-shout php-snmp php-soap php-tidy php-apc pwgen python-setuptools curl git tar; yum clean all
|
||||
ADD ./start.sh /start.sh
|
||||
ADD ./nginx-site.conf /nginx.conf
|
||||
RUN mv /nginx.conf /etc/nginx/nginx.conf
|
||||
RUN rm -rf /usr/share/nginx/html/*
|
||||
RUN /usr/bin/easy_install supervisor
|
||||
RUN /usr/bin/easy_install supervisor-stdout
|
||||
ADD ./supervisord.conf /etc/supervisord.conf
|
||||
RUN echo %sudo ALL=NOPASSWD: ALL >> /etc/sudoers
|
||||
ADD http://wordpress.org/latest.tar.gz /wordpress.tar.gz
|
||||
RUN tar xvzf /wordpress.tar.gz
|
||||
RUN mv /wordpress/* /usr/share/nginx/html/.
|
||||
RUN chown -R apache:apache /usr/share/nginx/
|
||||
RUN chmod 755 /start.sh
|
||||
RUN mkdir /var/run/sshd
|
||||
|
||||
EXPOSE 80
|
||||
EXPOSE 22
|
||||
|
||||
CMD ["/bin/bash", "/start.sh"]
|
||||
|
||||

|
||||
|
||||
### 3. Creating Start script ###
|
||||
|
||||
After we create our Dockerfile, we'll need to create a script named start.sh which will run and configure our WordPress installation. It creates and configures the database and the passwords for WordPress. To create it, we'll need to open start.sh with our favorite text editor.
|
||||
|
||||
# nano start.sh
|
||||
|
||||
After opening start.sh, we'll need to add the following lines of configuration into it.
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
__check() {
|
||||
if [ -f /usr/share/nginx/html/wp-config.php ]; then
|
||||
exit
|
||||
fi
|
||||
}
|
||||
|
||||
__create_user() {
|
||||
# Create a user to SSH into as.
|
||||
SSH_USERPASS=`pwgen -c -n -1 8`
|
||||
useradd -G wheel user
|
||||
echo user:$SSH_USERPASS | chpasswd
|
||||
echo ssh user password: $SSH_USERPASS
|
||||
}
|
||||
|
||||
__mysql_config() {
|
||||
# Hack to get MySQL up and running... I need to look into it more.
|
||||
yum -y erase mariadb mariadb-server
|
||||
rm -rf /var/lib/mysql/ /etc/my.cnf
|
||||
yum -y install mariadb mariadb-server
|
||||
mysql_install_db
|
||||
chown -R mysql:mysql /var/lib/mysql
|
||||
/usr/bin/mysqld_safe &
|
||||
sleep 10
|
||||
}
|
||||
|
||||
__handle_passwords() {
|
||||
# Here we generate random passwords (thank you pwgen!). The first two are for mysql users, the last batch for random keys in wp-config.php
|
||||
WORDPRESS_DB="wordpress"
|
||||
MYSQL_PASSWORD=`pwgen -c -n -1 12`
|
||||
WORDPRESS_PASSWORD=`pwgen -c -n -1 12`
|
||||
# This is so the passwords show up in logs.
|
||||
echo mysql root password: $MYSQL_PASSWORD
|
||||
echo wordpress password: $WORDPRESS_PASSWORD
|
||||
echo $MYSQL_PASSWORD > /mysql-root-pw.txt
|
||||
echo $WORDPRESS_PASSWORD > /wordpress-db-pw.txt
|
||||
# There used to be a huge ugly line of sed and cat and pipe and stuff below,
|
||||
# but thanks to @djfiander's thing at https://gist.github.com/djfiander/6141138
|
||||
# there isn't now.
|
||||
sed -e "s/database_name_here/$WORDPRESS_DB/
|
||||
s/username_here/$WORDPRESS_DB/
|
||||
s/password_here/$WORDPRESS_PASSWORD/
|
||||
/'AUTH_KEY'/s/put your unique phrase here/`pwgen -c -n -1 65`/
|
||||
/'SECURE_AUTH_KEY'/s/put your unique phrase here/`pwgen -c -n -1 65`/
|
||||
/'LOGGED_IN_KEY'/s/put your unique phrase here/`pwgen -c -n -1 65`/
|
||||
/'NONCE_KEY'/s/put your unique phrase here/`pwgen -c -n -1 65`/
|
||||
/'AUTH_SALT'/s/put your unique phrase here/`pwgen -c -n -1 65`/
|
||||
/'SECURE_AUTH_SALT'/s/put your unique phrase here/`pwgen -c -n -1 65`/
|
||||
/'LOGGED_IN_SALT'/s/put your unique phrase here/`pwgen -c -n -1 65`/
|
||||
/'NONCE_SALT'/s/put your unique phrase here/`pwgen -c -n -1 65`/" /usr/share/nginx/html/wp-config-sample.php > /usr/share/nginx/html/wp-config.php
|
||||
}
|
||||
|
||||
__httpd_perms() {
|
||||
chown apache:apache /usr/share/nginx/html/wp-config.php
|
||||
}
|
||||
|
||||
__start_mysql() {
|
||||
# systemctl start mysqld.service
|
||||
mysqladmin -u root password $MYSQL_PASSWORD
|
||||
mysql -uroot -p$MYSQL_PASSWORD -e "CREATE DATABASE wordpress; GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost' IDENTIFIED BY '$WORDPRESS_PASSWORD'; FLUSH PRIVILEGES;"
|
||||
killall mysqld
|
||||
sleep 10
|
||||
}
|
||||
|
||||
__run_supervisor() {
|
||||
supervisord -n
|
||||
}
|
||||
|
||||
# Call all functions
|
||||
__check
|
||||
__create_user
|
||||
__mysql_config
|
||||
__handle_passwords
|
||||
__httpd_perms
|
||||
__start_mysql
|
||||
__run_supervisor
|
||||
|
||||

|
||||
|
||||
After adding the above configuration, we'll need to save it and then exit.
|
||||
|
||||
### 4. Creating Configuration files ###
|
||||
|
||||
Now, we'll need to create the configuration file for the Nginx web server, named nginx-site.conf.
|
||||
|
||||
# nano nginx-site.conf
|
||||
|
||||
Then, we'll add the following configuration to the config file.
|
||||
|
||||
user nginx;
|
||||
worker_processes 1;
|
||||
|
||||
error_log /var/log/nginx/error.log;
|
||||
#error_log /var/log/nginx/error.log notice;
|
||||
#error_log /var/log/nginx/error.log info;
|
||||
|
||||
pid /run/nginx.pid;
|
||||
events {
|
||||
worker_connections 1024;
|
||||
}
|
||||
http {
|
||||
include /etc/nginx/mime.types;
|
||||
default_type application/octet-stream;
|
||||
|
||||
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
|
||||
'$status $body_bytes_sent "$http_referer" '
|
||||
'"$http_user_agent" "$http_x_forwarded_for"';
|
||||
|
||||
access_log /var/log/nginx/access.log main;
|
||||
|
||||
sendfile on;
|
||||
#tcp_nopush on;
|
||||
|
||||
#keepalive_timeout 0;
|
||||
keepalive_timeout 65;
|
||||
|
||||
#gzip on;
|
||||
|
||||
index index.html index.htm index.php;
|
||||
|
||||
# Load modular configuration files from the /etc/nginx/conf.d directory.
|
||||
# See http://nginx.org/en/docs/ngx_core_module.html#include
|
||||
# for more information.
|
||||
include /etc/nginx/conf.d/*.conf;
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
server_name localhost;
|
||||
|
||||
#charset koi8-r;
|
||||
|
||||
#access_log logs/host.access.log main;
|
||||
root /usr/share/nginx/html;
|
||||
|
||||
#error_page 404 /404.html;
|
||||
|
||||
# redirect server error pages to the static page /50x.html
|
||||
#
|
||||
error_page 500 502 503 504 /50x.html;
|
||||
location = /50x.html {
|
||||
root html;
|
||||
}
|
||||
|
||||
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
|
||||
#
|
||||
#location ~ \.php$ {
|
||||
# proxy_pass http://127.0.0.1;
|
||||
#}
|
||||
|
||||
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
|
||||
#
|
||||
location ~ \.php$ {
|
||||
|
||||
root /usr/share/nginx/html;
|
||||
try_files $uri =404;
|
||||
fastcgi_pass 127.0.0.1:9000;
|
||||
fastcgi_index index.php;
|
||||
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
|
||||
include fastcgi_params;
|
||||
}
|
||||
|
||||
# deny access to .htaccess files, if Apache's document root
|
||||
# concurs with nginx's one
|
||||
#
|
||||
#location ~ /\.ht {
|
||||
# deny all;
|
||||
#}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||

|
||||
|
||||
Now, we'll create supervisord.conf file and add the following lines as shown below.
|
||||
|
||||
# nano supervisord.conf
|
||||
|
||||
Then, add the following lines.
|
||||
|
||||
[unix_http_server]
|
||||
file=/tmp/supervisor.sock ; (the path to the socket file)
|
||||
|
||||
[supervisord]
|
||||
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
|
||||
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
|
||||
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
|
||||
loglevel=info ; (log level;default info; others: debug,warn,trace)
|
||||
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
|
||||
nodaemon=false ; (start in foreground if true;default false)
|
||||
minfds=1024 ; (min. avail startup file descriptors;default 1024)
|
||||
minprocs=200 ; (min. avail process descriptors;default 200)
|
||||
|
||||
; the below section must remain in the config file for RPC
|
||||
; (supervisorctl/web interface) to work, additional interfaces may be
|
||||
; added by defining them in separate rpcinterface: sections
|
||||
[rpcinterface:supervisor]
|
||||
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
|
||||
|
||||
[supervisorctl]
|
||||
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
|
||||
|
||||
[program:php-fpm]
|
||||
command=/usr/sbin/php-fpm -c /etc/php/fpm
|
||||
stdout_events_enabled=true
|
||||
stderr_events_enabled=true
|
||||
|
||||
[program:php-fpm-log]
|
||||
command=tail -f /var/log/php-fpm/php-fpm.log
|
||||
stdout_events_enabled=true
|
||||
stderr_events_enabled=true
|
||||
|
||||
[program:mysql]
|
||||
command=/usr/bin/mysql --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
|
||||
stdout_events_enabled=true
|
||||
stderr_events_enabled=true
|
||||
|
||||
[program:nginx]
|
||||
command=/usr/sbin/nginx
|
||||
stdout_events_enabled=true
|
||||
stderr_events_enabled=true
|
||||
|
||||
[eventlistener:stdout]
|
||||
command = supervisor_stdout
|
||||
buffer_size = 100
|
||||
events = PROCESS_LOG
|
||||
result_handler = supervisor_stdout:event_handler
|
||||
|
||||

|
||||
|
||||
After adding, we'll save and exit the file.
|
||||
|
||||
### 5. Building WordPress Container ###
|
||||
|
||||
Now, after creating the configurations and scripts, we'll finally use the Dockerfile to build our desired container with the latest WordPress CMS installed and configured according to that configuration. To do so, we'll run the following command in that directory.
|
||||
|
||||
# docker build --rm -t wordpress:centos7 .
|
||||
|
||||

|
||||
|
||||
### 6. Running WordPress Container ###
|
||||
|
||||
Now, to run our newly built container and publish port 80 for the Nginx web server, we'll run the following command. (Port 22 for SSH access can be published the same way; see the note after the command below.)
|
||||
|
||||
# CID=$(docker run -d -p 80:80 wordpress:centos7)
|
||||
|
||||

|
||||
|
||||
To check the process and commands executed inside the container, we'll run the following command.
|
||||
|
||||
# echo "$(docker logs $CID )"
|
||||
|
||||
To check whether the port mapping is correct, run the following command.
|
||||
|
||||
# docker ps
|
||||
|
||||

|
||||
|
||||
### 7. Web Interface ###
|
||||
|
||||
Finally, if everything went as expected, we'll be welcomed by WordPress when pointing the browser to http://ip-address/ or http://mywebsite.com/ .
|
||||
|
||||

|
||||
|
||||
Now, we'll go step by step through the web interface and set up the WordPress configuration, username and password for the WordPress panel.
|
||||
|
||||

|
||||
|
||||
Then, use the username and password entered above to log in on the WordPress login page.
|
||||
|
||||

|
||||
|
||||
### Conclusion ###
|
||||
|
||||
We successfully built and ran the WordPress CMS on a LEMP stack inside a Docker container running CentOS 7. Running WordPress inside a container keeps the host system much safer from a security perspective. This article shows how to completely configure WordPress to run inside a Docker container with the Nginx web server. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-wordpress-nginx-docker-container/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:http://docker.io/
|
@ -0,0 +1,137 @@
|
||||
How to set up remote desktop on Linux VPS using x2go
|
||||
================================================================================
|
||||
As everything moves to the cloud, virtualized remote desktops are becoming increasingly popular in the industry as a way to enhance employees' productivity. Especially for those who need to roam constantly across multiple locations and devices, a remote desktop allows them to stay seamlessly connected to their work environment. Remote desktops are attractive to employers as well, offering increased agility and flexibility in work environments, lower IT costs due to hardware consolidation, desktop security hardening, and so on.
|
||||
|
||||
In the world of Linux, there is of course no shortage of choices for setting up a remote desktop environment, with many protocols (e.g., RDP, RFB, NX) and server/client implementations (e.g., [TigerVNC][1], RealVNC, FreeNX, x2go, X11vnc, TeamViewer) available.
|
||||
|
||||
Standing out from the pack is [X2Go][2], an open-source (GPLv2) implementation of NX-based remote desktop server and client. In this tutorial, I am going to demonstrate **how to set up remote desktop environment for [Linux VPS][3] using X2Go**.
|
||||
|
||||
### What is X2Go? ###
|
||||
|
||||
The history of X2Go goes back to NoMachine's NX technology. The NX remote desktop protocol was designed to deal with low-bandwidth, high-latency network connections by leveraging aggressive compression and caching. Later, NX was turned into closed-source software, while the NX libraries were released under the GPL. This led to open-source implementations of several NX-based remote desktop solutions, one of which is X2Go.
|
||||
|
||||
What benefits does X2Go bring to the table, compared to other solutions such as VNC? X2Go inherits all the advanced features of NX technology, so naturally it works well over slow network connections. Besides, X2Go boasts an excellent track record of ensuring security with its built-in SSH-based encryption. There is no longer any need to set up an SSH tunnel [manually][4]. X2Go comes with audio support out of the box, which means that music playback on the remote desktop is delivered (via PulseAudio) over the network and fed into the local speakers. On the usability front, an application that you run on the remote desktop can be seamlessly rendered as a separate window on your local desktop, giving you the illusion that the application is actually running locally. As you can see, these are some of [its powerful features][5] lacking in VNC-based solutions.
|
||||
|
||||
### X2GO's Desktop Environment Compatibility ###
|
||||
|
||||
As with other remote desktop servers, there are [known compatibility issues][6] for X2Go server. Desktop environments like KDE3/4, Xfce, MATE and LXDE are the most friendly to the X2Go server. However, your mileage may vary with other desktop managers. For example, later versions of GNOME 3, KDE5 and Unity are known not to be compatible with X2Go. If the desktop manager of your remote host is compatible with X2Go, you can follow the rest of the tutorial.
|
||||
|
||||
### Install X2Go Server on Linux ###
|
||||
|
||||
X2Go consists of remote desktop server and client components. Let's start with X2Go server installation. I assume that you already have an X2Go-compatible desktop manager up and running on a remote host, where we will be installing X2Go server.
|
||||
|
||||
Note that the X2Go server component does not have a separate service that needs to be started upon boot. You just need to make sure that the SSH service is up and running.
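For example, on a systemd-based remote host you can check the SSH service along these lines (the unit is typically called sshd on Fedora/CentOS and ssh on Debian/Ubuntu):

    $ sudo systemctl status sshd    # use "ssh" instead of "sshd" on Debian/Ubuntu
    $ sudo systemctl start sshd     # start it if it is not already running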
|
||||
|
||||
#### Ubuntu or Linux Mint: ####
|
||||
|
||||
Configure X2Go PPA repository. X2Go PPA is available for Ubuntu 14.04 and higher.
|
||||
|
||||
$ sudo add-apt-repository ppa:x2go/stable
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install x2goserver x2goserver-xsession
|
||||
|
||||
#### Debian (Wheezy): ####
|
||||
|
||||
$ sudo apt-key adv --recv-keys --keyserver keys.gnupg.net E1F958385BFE2B6E
|
||||
$ sudo sh -c "echo deb http://packages.x2go.org/debian wheezy main > /etc/apt/sources.list.d/x2go.list"
|
||||
$ sudo sh -c "echo deb-src http://packages.x2go.org/debian wheezy main >> /etc/apt/sources.list.d/x2go.list"
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install x2goserver x2goserver-xsession
|
||||
|
||||
#### Fedora: ####
|
||||
|
||||
$ sudo yum install x2goserver x2goserver-xsession
|
||||
|
||||
#### CentOS/RHEL: ####
|
||||
|
||||
Enable the [EPEL repository][7] first (see the note below the command if you still need to add it), and then run:
|
||||
|
||||
$ sudo yum install x2goserver x2goserver-xsession
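If EPEL is not yet enabled on your system, on CentOS 7 this usually just means installing the epel-release package (an assumption; see the linked EPEL guide for other releases):

    $ sudo yum install epel-release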
|
||||
|
||||
### Install X2Go Client on Linux ###
|
||||
|
||||
On the local host from which you will be connecting to the remote desktop, install the X2Go client as follows.
|
||||
|
||||
#### Ubuntu or Linux Mint: ####
|
||||
|
||||
Configure X2Go PPA repository. X2Go PPA is available for Ubuntu 14.04 and higher.
|
||||
|
||||
$ sudo add-apt-repository ppa:x2go/stable
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install x2goclient
|
||||
|
||||
#### Debian (Wheezy): ####
|
||||
|
||||
$ sudo apt-key adv --recv-keys --keyserver keys.gnupg.net E1F958385BFE2B6E
|
||||
$ sudo sh -c "echo deb http://packages.x2go.org/debian wheezy main > /etc/apt/sources.list.d/x2go.list"
|
||||
$ sudo sh -c "echo deb-src http://packages.x2go.org/debian wheezy main >> /etc/apt/sources.list.d/x2go.list"
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install x2goclient
|
||||
|
||||
#### Fedora: ####
|
||||
|
||||
$ sudo yum install x2goclient
|
||||
|
||||
#### CentOS/RHEL: ####
|
||||
|
||||
Enable the EPEL repository first, and then run:
|
||||
|
||||
$ sudo yum install x2goclient
|
||||
|
||||
### Connect to Remote Desktop with X2Go Client ###
|
||||
|
||||
Now it's time to connect to your remote desktop. On the local host, simply run the following command or use the desktop launcher to start the X2Go client.
|
||||
|
||||
$ x2goclient
|
||||
|
||||
Enter the remote host's IP address and SSH user name. Also, specify session type (i.e., desktop manager of a remote host).
|
||||
|
||||

|
||||
|
||||
If you want, you can customize other things (by pressing other tabs), like connection speed, compression, screen resolution, and so on.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
When you initiate a remote desktop connection, you will be asked to log in. Type your SSH login and password.
|
||||
|
||||

|
||||
|
||||
Upon successful login, you will see the remote desktop screen.
|
||||
|
||||

|
||||
|
||||
If you want to test X2Go's seamless window feature, choose "Single application" as session type, and specify the path to an executable on the remote host. In this example, I choose Dolphin file manager on a remote KDE host.
|
||||
|
||||

|
||||
|
||||
Once you are successfully connected, you will see a remote application window open on your local desktop, not the entire remote desktop screen.
|
||||
|
||||

|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this tutorial, I demonstrated how to set up an X2Go remote desktop on a [Linux VPS][8] instance. As you can see, the whole setup process is pretty much painless (if you are using the right desktop environment). While there is some desktop-specific quirkiness, X2Go is a solid remote desktop solution which is secure, feature-rich, fast, and free.
|
||||
|
||||
Which X2Go feature do you find the most appealing? Please share your thoughts.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/x2go-remote-desktop-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/nanni
|
||||
[1]:http://ask.xmodulo.com/centos-remote-desktop-vps.html
|
||||
[2]:http://wiki.x2go.org/
|
||||
[3]:http://xmodulo.com/go/digitalocean
|
||||
[4]:http://xmodulo.com/how-to-set-up-vnc-over-ssh.html
|
||||
[5]:http://wiki.x2go.org/doku.php/doc:newtox2go
|
||||
[6]:http://wiki.x2go.org/doku.php/doc:de-compat
|
||||
[7]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
|
||||
[8]:http://xmodulo.com/go/digitalocean
|
@ -0,0 +1,324 @@
|
||||
translating wi-cuckoo LLAP
|
||||
It's Now Worth Trying to Install PHP 7.0 on CentOS 7.x / Fedora 21
|
||||
================================================================================
|
||||
PHP is a well-known general-purpose, server-side web scripting language. The vast majority of online websites are coded in this language. PHP is an ever-evolving, feature-rich, easy-to-use and well-organized scripting language. Currently the PHP development team is working on the next major release of PHP, named PHP 7. The current production PHP version is PHP 5.6. As you might already know, PHP 6 was aborted in the past, and the supporters of PHP 7 did not want the next important PHP version to be confused with that long-dead branch. So it was decided to name the next major release PHP 7 instead of 6. PHP 7.0 is supposed to be released in November this year.
|
||||
|
||||
Here are some of the prominent features in next major PHP release.
|
||||
|
||||
- In order to improve performance and reduce memory footprint, the PHPNG feature has been added to this new release.
|
||||
- A JIT engine has been included to dynamically compile Zend opcodes into native machine code in order to achieve faster processing. This feature allows subsequent calls to the same code to run much faster.
|
||||
- AST (Abstract Syntax Tree) is a newly added feature which will enhance support for PHP extensions and userland applications.
|
||||
- The asynchronous programming feature will add support for parallel tasks within the same request.
|
||||
- The new version will support a standalone multi-threaded web server, so that it may handle many simultaneous requests using a single memory pool.
|
||||
|
||||
### Installing PHP 7 on CentOS / Fedora ###
|
||||
|
||||
Let's see how we can install PHP 7 on CentOS 7 and Fedora 21. In order to install PHP 7 we first need to clone the php-src repository. Once the cloning process is complete, we will configure and compile it. Before we proceed, let's ensure that we have the following installed on our Linux system, otherwise the PHP compile process will return errors and abort.
|
||||
|
||||
- Git
|
||||
- autoconf
|
||||
- gcc
|
||||
- bison
|
||||
|
||||
All of the above-mentioned prerequisites can be installed using the Yum package manager. The following single command should take care of this:
|
||||
|
||||
yum install git autoconf gcc bison
|
||||
|
||||
Ready to start the PHP 7 installation process? Let's first create a PHP 7 directory and make it our working directory.
|
||||
|
||||
mkdir php7
|
||||
|
||||
cd php7
|
||||
|
||||
Now clone the php-src repo by running the following command in the terminal.
|
||||
|
||||
git clone https://git.php.net/repository/php-src.git
|
||||
|
||||
The process should complete in a few minutes; here is the sample output you should see at the completion of this task.
|
||||
|
||||
[root@localhost php7]# git clone https://git.php.net/repository/php-src.git
|
||||
|
||||
Cloning into 'php-src'...
|
||||
|
||||
remote: Counting objects: 615064, done.
|
||||
|
||||
remote: Compressing objects: 100% (127800/127800), done.
|
||||
|
||||
remote: Total 615064 (delta 492063), reused 608718 (delta 485944)
|
||||
|
||||
Receiving objects: 100% (615064/615064), 152.32 MiB | 16.97 MiB/s, done.
|
||||
|
||||
Resolving deltas: 100% (492063/492063), done.
|
||||
|
||||
Let's configure and compile PHP 7. Run the following commands in the terminal to start the configuration process:
|
||||
|
||||
cd php-src
|
||||
|
||||
./buildconf
|
||||
|
||||
Here is the sample output of the ./buildconf command.
|
||||
|
||||
[root@localhost php-src]# ./buildconf
|
||||
|
||||
buildconf: checking installation...
|
||||
|
||||
buildconf: autoconf version 2.69 (ok)
|
||||
|
||||
rebuilding aclocal.m4
|
||||
|
||||
rebuilding configure
|
||||
|
||||
rebuilding main/php_config.h.in
|
||||
|
||||
Proceed further with the configuration process using the following command:
|
||||
|
||||
./configure \
|
||||
|
||||
--prefix=$HOME/php7/usr \
|
||||
|
||||
--with-config-file-path=$HOME/php7/usr/etc \
|
||||
|
||||
--enable-mbstring \
|
||||
|
||||
--enable-zip \
|
||||
|
||||
--enable-bcmath \
|
||||
|
||||
--enable-pcntl \
|
||||
|
||||
--enable-ftp \
|
||||
|
||||
--enable-exif \
|
||||
|
||||
--enable-calendar \
|
||||
|
||||
--enable-sysvmsg \
|
||||
|
||||
--enable-sysvsem \
|
||||
|
||||
--enable-sysvshm \
|
||||
|
||||
--enable-wddx \
|
||||
|
||||
--with-curl \
|
||||
|
||||
--with-mcrypt \
|
||||
|
||||
--with-iconv \
|
||||
|
||||
--with-gmp \
|
||||
|
||||
--with-pspell \
|
||||
|
||||
--with-gd \
|
||||
|
||||
--with-jpeg-dir=/usr \
|
||||
|
||||
--with-png-dir=/usr \
|
||||
|
||||
--with-zlib-dir=/usr \
|
||||
|
||||
--with-xpm-dir=/usr \
|
||||
|
||||
--with-freetype-dir=/usr \
|
||||
|
||||
--with-t1lib=/usr \
|
||||
|
||||
--enable-gd-native-ttf \
|
||||
|
||||
--enable-gd-jis-conv \
|
||||
|
||||
--with-openssl \
|
||||
|
||||
--with-mysql=/usr \
|
||||
|
||||
--with-pdo-mysql=/usr \
|
||||
|
||||
--with-gettext=/usr \
|
||||
|
||||
--with-zlib=/usr \
|
||||
|
||||
--with-bz2=/usr \
|
||||
|
||||
--with-recode=/usr \
|
||||
|
||||
--with-mysqli=/usr/bin/mysql_config
|
||||
|
||||
It will take a fair amount of time; once completed, you should see output like this:
|
||||
|
||||
creating libtool
|
||||
|
||||
appending configuration tag "CXX" to libtool
|
||||
|
||||
Generating files
|
||||
|
||||
configure: creating ./config.status
|
||||
|
||||
creating main/internal_functions.c
|
||||
|
||||
creating main/internal_functions_cli.c
|
||||
|
||||
+--------------------------------------------------------------------+
|
||||
|
||||
| License: |
|
||||
|
||||
| This software is subject to the PHP License, available in this |
|
||||
|
||||
| distribution in the file LICENSE. By continuing this installation |
|
||||
|
||||
| process, you are bound by the terms of this license agreement. |
|
||||
|
||||
| If you do not agree with the terms of this license, you must abort |
|
||||
|
||||
| the installation process at this point. |
|
||||
|
||||
+--------------------------------------------------------------------+
|
||||
|
||||
|
||||
|
||||
Thank you for using PHP.
|
||||
|
||||
|
||||
|
||||
config.status: creating php7.spec
|
||||
|
||||
config.status: creating main/build-defs.h
|
||||
|
||||
config.status: creating scripts/phpize
|
||||
|
||||
config.status: creating scripts/man1/phpize.1
|
||||
|
||||
config.status: creating scripts/php-config
|
||||
|
||||
config.status: creating scripts/man1/php-config.1
|
||||
|
||||
config.status: creating sapi/cli/php.1
|
||||
|
||||
config.status: creating sapi/cgi/php-cgi.1
|
||||
|
||||
config.status: creating ext/phar/phar.1
|
||||
|
||||
config.status: creating ext/phar/phar.phar.1
|
||||
|
||||
config.status: creating main/php_config.h
|
||||
|
||||
config.status: executing default commands
|
||||
|
||||
|
||||
|
||||
Run the following command to complete the compilation process.
|
||||
|
||||
make
|
||||
|
||||
Sample output of the “make” command is shown below:
|
||||
|
||||
Generating phar.php
|
||||
|
||||
Generating phar.phar
|
||||
|
||||
PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
|
||||
|
||||
clicommand.inc
|
||||
|
||||
directorytreeiterator.inc
|
||||
|
||||
directorygraphiterator.inc
|
||||
|
||||
pharcommand.inc
|
||||
|
||||
invertedregexiterator.inc
|
||||
|
||||
phar.inc
|
||||
|
||||
|
||||
|
||||
Build complete.
|
||||
|
||||
Don't forget to run 'make test'.
|
||||
|
||||
That’s all; it’s time to install PHP 7 now. Run the following to install it:
|
||||
|
||||
make install
|
||||
|
||||
Sample output of a successful install process should look like this:
|
||||
|
||||
[root@localhost php-src]# make install
|
||||
|
||||
Installing shared extensions: /root/php7/usr/lib/php/extensions/no-debug-non-zts-20141001/
|
||||
|
||||
Installing PHP CLI binary: /root/php7/usr/bin/
|
||||
|
||||
Installing PHP CLI man page: /root/php7/usr/php/man/man1/
|
||||
|
||||
Installing PHP CGI binary: /root/php7/usr/bin/
|
||||
|
||||
Installing PHP CGI man page: /root/php7/usr/php/man/man1/
|
||||
|
||||
Installing build environment: /root/php7/usr/lib/php/build/
|
||||
|
||||
Installing header files: /root/php7/usr/include/php/
|
||||
|
||||
Installing helper programs: /root/php7/usr/bin/
|
||||
|
||||
program: phpize
|
||||
|
||||
program: php-config
|
||||
|
||||
Installing man pages: /root/php7/usr/php/man/man1/
|
||||
|
||||
page: phpize.1
|
||||
|
||||
page: php-config.1
|
||||
|
||||
Installing PEAR environment: /root/php7/usr/lib/php/
|
||||
|
||||
[PEAR] Archive_Tar - installed: 1.3.13
|
||||
|
||||
[PEAR] Console_Getopt - installed: 1.3.1
|
||||
|
||||
[PEAR] Structures_Graph- installed: 1.0.4
|
||||
|
||||
[PEAR] XML_Util - installed: 1.2.3
|
||||
|
||||
[PEAR] PEAR - installed: 1.9.5
|
||||
|
||||
Wrote PEAR system config file at: /root/php7/usr/etc/pear.conf
|
||||
|
||||
You may want to add: /root/php7/usr/lib/php to your php.ini include_path
|
||||
|
||||
/root/php7/php-src/build/shtool install -c ext/phar/phar.phar /root/php7/usr/bin
|
||||
|
||||
ln -s -f /root/php7/usr/bin/phar.phar /root/php7/usr/bin/phar
|
||||
|
||||
Installing PDO headers: /root/php7/usr/include/php/ext/pdo/
|
||||
|
||||
Congratulations, PHP 7 is now installed on your Linux system. Once installation is complete, move to the sapi/cli directory inside the php7 installation folder.
|
||||
|
||||
cd sapi/cli
|
||||
|
||||
and verify the PHP version from there.
|
||||
|
||||
[root@localhost cli]# ./php -v
|
||||
|
||||
PHP 7.0.0-dev (cli) (built: Mar 28 2015 00:54:11)
|
||||
|
||||
Copyright (c) 1997-2015 The PHP Group
|
||||
|
||||
Zend Engine v3.0.0-dev, Copyright (c) 1998-2015 Zend Technologies
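As an extra sanity check, you can ask the freshly built binary to execute a one-liner with the `-r` option:

    [root@localhost cli]# ./php -r 'echo "Running PHP ", PHP_VERSION, PHP_EOL;'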
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
PHP 7 has also been [added to the remi repositories][1]. This upcoming version is mainly focused on performance improvements, and its new features aim to make PHP a good fit for modern programming needs and trends. PHP 7.0 will have many new features and will deprecate some old items. We hope to see details about the new features and deprecations in the coming months. Enjoy!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-php-7-centos-7-fedora-21/
|
||||
|
||||
作者:[Aun Raza][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunrz/
|
||||
[1]:http://blog.famillecollet.com/post/2015/03/25/PHP-7.0-as-Software-Collection
|
@ -0,0 +1,70 @@
|
||||
Linux Email App Geary Updated — How To Install It In Ubuntu
|
||||
================================================================================
|
||||
**Geary, the popular desktop email client for Linux, has been updated to version 0.10 — and it gains a glut of new features in the process.**
|
||||
|
||||

|
||||
An older version of Geary running in elementary OS
|
||||
|
||||
Geary 0.10 features some welcome user interface improvements and additional UI options, including:
|
||||
|
||||
- New: Ability to ‘Undo’ Archive, Trash and Move actions
|
||||
- New: Option to switch between a 2 column or 3 column layout
|
||||
- New “split header bar” — improves message list, composer layouts
|
||||
- New shortcut keys — use j/k to navigate next/previous conversations
|
||||
|
||||
This update also introduces a **brand new full-text search algorithm** designed to improve the search experience in Geary, according to Yorba.
|
||||
|
||||
This introduction should calm some complaints about the app’s search prowess, which often sees Geary return a slew of search results that are, to quote the software outfit themselves, “…seemingly unrelated to the search query.”
|
||||
|
||||
> ‘Yorba recommends that all users of the client upgrade to this release’
|
||||
|
||||
*“Although not all search problems are fixed in 0.10, Geary should be more conservative about displaying results that match the user’s query,” [the team notes][1]. *
|
||||
|
||||
Last but by no means least on the main feature front is something sure to find favour with power users: **support for multiple/alternate e-mail addresses per account**.
|
||||
|
||||
If your main Gmail account is set up in Geary to pull in your Yahoo, Outlook and KittyMail messages too, then you should now see them all kept neatly together and be given the option of picking which identity you send from when using the composer ‘From’ field. No, it’s not the sexiest feature but it is one that has been requested often.
|
||||
|
||||
Rounding out this release of the popular Linux email client is the usual gamut of bug fixes, performance optimisations and miscellaneous improvements.
|
||||
|
||||
Yorba recommends that all users of the client upgrade to this release.
|
||||
|
||||
### Install Geary 0.10 in Ubuntu 14.04, 14.10 & 15.04 ###
|
||||
|
||||

|
||||
|
||||
The latest version of Geary is available to download as source, ready for compiling from GNOME Git. But let’s be honest: that’s a bit of a hassle, right?
|
||||
|
||||
Ubuntu users wondering how to install Geary 0.10 in **14.04, 14.10** and (for early birds) **15.04** have things easy.
|
||||
|
||||
The official Yorba PPA contains the **latest versions of Geary** as well as those for Shotwell (photo manager) and [California][2] (calendar app). Be aware that any existing versions of these apps installed on your computer may/will be upgraded to a more recent version by adding this PPA.
|
||||
|
||||
Capiche? Coolio.
|
||||
|
||||
To install Geary in Ubuntu you first need to add the Yorba PPA to your Software Sources. To do this just open a new Terminal window and carefully enter the following two commands:
|
||||
|
||||
sudo add-apt-repository ppa:yorba/ppa
|
||||
|
||||
sudo apt-get update && sudo apt-get install geary
|
||||
|
||||
After hitting return/enter on the last command you’ll be prompted to enter your password. Do this, and then let the installation complete.
|
||||
|
||||

|
||||
|
||||
Once done, open your desktop environment’s app launcher and seek out the ‘Geary’ icon. Click it, add your account(s) and discover [what the email mail man has dropped off through the information superhighway][3] and into the easy to use graphical interface.
|
||||
|
||||
**Don’t forget: you can always tip us with news, app suggestions, and anything else you’d like to see us cover by using the power of electronic mail. Direct your key punches to joey [at] oho [dot] io.**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/03/install-geary-ubuntu-linux-email-update
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:https://wiki.gnome.org/Apps/Geary/FullTextSearchStrategy
|
||||
[2]:http://www.omgubuntu.co.uk/2014/10/california-calendar-natural-language-parser
|
||||
[3]:https://www.youtube.com/watch?v=rxM8C71GB8w
|
743
sources/tech/20150401 ZMap Documentation.md
Normal file
@ -0,0 +1,743 @@
|
||||
ZMap Documentation
|
||||
================================================================================
|
||||
1. Getting Started with ZMap
|
||||
1. Scanning Best Practices
|
||||
1. Command Line Arguments
|
||||
1. Additional Information
|
||||
1. TCP SYN Probe Module
|
||||
1. ICMP Echo Probe Module
|
||||
1. UDP Probe Module
|
||||
1. Configuration Files
|
||||
1. Verbosity
|
||||
1. Results Output
|
||||
1. Blacklisting
|
||||
1. Rate Limiting and Sampling
|
||||
1. Sending Multiple Probes
|
||||
1. Extending ZMap
|
||||
1. Sample Applications
|
||||
1. Writing Probe and Output Modules
|
||||
|
||||
----------
|
||||
|
||||
### Getting Started with ZMap ###
|
||||
|
||||
ZMap is designed to perform comprehensive scans of the IPv4 address space or large portions of it. While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space at over 1.4 million packets per second. Before performing even small scans, we encourage users to contact their local network administrators and consult our list of scanning best practices.
|
||||
|
||||
By default, ZMap will perform a TCP SYN scan on the specified port at the maximum rate possible. A more conservative configuration that will scan 10,000 random addresses on port 80 at a maximum 10 Mbps can be run as follows:
|
||||
|
||||
$ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.csv
|
||||
|
||||
Or more concisely specified as:
|
||||
|
||||
$ zmap -B 10M -p 80 -n 10000 -o results.csv
|
||||
|
||||
ZMap can also be used to scan specific subnets or CIDR blocks. For example, to scan only 10.0.0.0/8 and 192.168.0.0/16 on port 80, run:
|
||||
|
||||
zmap -p 80 -o results.csv 10.0.0.0/8 192.168.0.0/16
|
||||
|
||||
If the scan started successfully, ZMap will output status updates every one second similar to the following:
|
||||
|
||||
0% (1h51m left); send: 28777 562 Kp/s (560 Kp/s avg); recv: 1192 248 p/s (231 p/s avg); hits: 0.04%
|
||||
0% (1h51m left); send: 34320 554 Kp/s (559 Kp/s avg); recv: 1442 249 p/s (234 p/s avg); hits: 0.04%
|
||||
0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
|
||||
0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%
|
||||
|
||||
These updates provide information about the current state of the scan and are of the following form: %-complete (est time remaining); packets-sent curr-send-rate (avg-send-rate); recv: packets-recv recv-rate (avg-recv-rate); hits: hit-rate
|
||||
|
||||
If you do not know the scan rate that your network can support, you may want to experiment with different scan rates or bandwidth limits to find the fastest rate that your network can support before you see decreased results.
|
||||
|
||||
By default, ZMap will output the list of distinct IP addresses that responded successfully (e.g. with a SYN ACK packet) similar to the following. There are several additional formats (e.g. JSON and Redis) for outputting results as well as options for producing programmatically parsable scan statistics. As well, additional output fields can be specified and the results can be filtered using an output filter.
|
||||
|
||||
115.237.116.119
|
||||
23.9.117.80
|
||||
207.118.204.141
|
||||
217.120.143.111
|
||||
50.195.22.82
|
||||
|
||||
We strongly encourage you to use a blacklist file, to exclude both reserved/unallocated IP space (e.g. multicast, RFC1918), as well as networks that request to be excluded from your scans. By default, ZMap will utilize a simple blacklist file containing reserved and unallocated addresses located at `/etc/zmap/blacklist.conf`. If you find yourself specifying certain settings, such as your maximum bandwidth or blacklist file every time you run ZMap, you can specify these in `/etc/zmap/zmap.conf` or use a custom configuration file.
|
||||
|
||||
If you are attempting to troubleshoot scan-related issues, there are several options to help debug. First, it is possible to perform a dry-run scan in order to see the packets that would be sent over the network by adding the `--dryrun` flag. As well, it is possible to change the logging verbosity by setting the `--verbosity=n` flag.
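For example, the following invocation prints the first ten probe packets that would be generated for a port 80 scan, with more detailed logging, without sending anything onto the network:

    $ zmap -p 80 -n 10 --dryrun --verbosity=5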
|
||||
|
||||
----------
|
||||
|
||||
### Scanning Best Practices ###
|
||||
|
||||
We offer these suggestions for researchers conducting Internet-wide scans as guidelines for good Internet citizenship.
|
||||
|
||||
- Coordinate closely with local network administrators to reduce risks and handle inquiries
|
||||
- Verify that scans will not overwhelm the local network or upstream provider
|
||||
- Signal the benign nature of the scans in web pages and DNS entries of the source addresses
|
||||
- Clearly explain the purpose and scope of the scans in all communications
|
||||
- Provide a simple means of opting out and honor requests promptly
|
||||
- Conduct scans no larger or more frequent than is necessary for research objectives
|
||||
- Spread scan traffic over time or source addresses when feasible
|
||||
|
||||
It should go without saying that scan researchers should refrain from exploiting vulnerabilities or accessing protected resources, and should comply with any special legal requirements in their jurisdictions.
|
||||
|
||||
----------
|
||||
|
||||
### Command Line Arguments ###
|
||||
|
||||
#### Common Options ####
|
||||
|
||||
These options are the most common options when performing a simple scan. We note that some options are dependent on the probe module or output module used (e.g. target port is not used when performing an ICMP Echo Scan).
|
||||
|
||||
|
||||
**-p, --target-port=port**
|
||||
|
||||
TCP port number to scan (e.g. 443)
|
||||
|
||||
**-o, --output-file=name**
|
||||
|
||||
Write results to this file. Use - for stdout
|
||||
|
||||
**-b, --blacklist-file=path**
|
||||
|
||||
File of subnets to exclude, in CIDR notation (e.g. 192.168.0.0/16), one per line. It is recommended you use this to exclude RFC 1918 addresses, multicast, IANA reserved space, and other IANA special-purpose addresses. An example blacklist file is provided in conf/blacklist.example for this purpose.
|
||||
|
||||
#### Scan Options ####
|
||||
|
||||
**-n, --max-targets=n**
|
||||
|
||||
Cap the number of targets to probe. This can either be a number (e.g. `-n 1000`) or a percentage (e.g. `-n 0.1%`) of the scannable address space (after excluding blacklist)
|
||||
|
||||
**-N, --max-results=n**
|
||||
|
||||
Exit after receiving this many results
|
||||
|
||||
**-t, --max-runtime=secs**
|
||||
|
||||
Cap the length of time for sending packets
|
||||
|
||||
**-r, --rate=pps**
|
||||
|
||||
Set the send rate in packets/sec
|
||||
|
||||
**-B, --bandwidth=bps**
|
||||
|
||||
Set the send rate in bits/second (supports suffixes G, M, and K (e.g. `-B 10M` for 10 mbps). This overrides the `--rate` flag.
|
||||
|
||||
**-c, --cooldown-time=secs**
|
||||
|
||||
How long to continue receiving after sending has completed (default=8)
|
||||
|
||||
**-e, --seed=n**
|
||||
|
||||
Seed used to select address permutation. Use this if you want to scan addresses in the same order for multiple ZMap runs.
|
||||
|
||||
**--shards=n**
|
||||
|
||||
Split the scan up into N shards/partitions among different instances of zmap (default=1). When sharding, `--seed` is required
|
||||
|
||||
**--shard=n**
|
||||
|
||||
Set which shard to scan (default=0). Shards are indexed in the range [0, N), where N is the total number of shards. When sharding `--seed` is required.
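As a sketch, two cooperating ZMap instances (for example, on two different machines) could split a port 443 scan between them like this; both must be given the same seed so that the shards do not overlap, and the output file names are only illustrative:

    $ zmap -p 443 --shards=2 --shard=0 --seed=12345 -o shard0.csv
    $ zmap -p 443 --shards=2 --shard=1 --seed=12345 -o shard1.csv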
|
||||
|
||||
**-T, --sender-threads=n**
|
||||
|
||||
Threads used to send packets (default=1)
|
||||
|
||||
**-P, --probes=n**
|
||||
|
||||
Number of probes to send to each IP (default=1)
|
||||
|
||||
**-d, --dryrun**
|
||||
|
||||
Print out each packet to stdout instead of sending it (useful for debugging)
|
||||
|
||||
#### Network Options ####
|
||||
|
||||
**-s, --source-port=port|range**
|
||||
|
||||
Source port(s) to send packets from
|
||||
|
||||
**-S, --source-ip=ip|range**
|
||||
|
||||
Source address(es) to send packets from. Either single IP or range (e.g. 10.0.0.1-10.0.0.9)
|
||||
|
||||
**-G, --gateway-mac=addr**
|
||||
|
||||
Gateway MAC address to send packets to (in case auto-detection does not work)
|
||||
|
||||
**-i, --interface=name**
|
||||
|
||||
Network interface to use
|
||||
|
||||
#### Probe Options ####
|
||||
|
||||
ZMap allows users to specify and write their own probe modules for use with ZMap. Probe modules are responsible for generating probe packets to send, and processing responses from hosts.
|
||||
|
||||
**--list-probe-modules**
|
||||
|
||||
List available probe modules (e.g. tcp_synscan)
|
||||
|
||||
**-M, --probe-module=name**
|
||||
|
||||
Select probe module (default=tcp_synscan)
|
||||
|
||||
**--probe-args=args**
|
||||
|
||||
Arguments to pass to probe module
|
||||
|
||||
**--list-output-fields**
|
||||
|
||||
List the fields the selected probe module can send to the output module
|
||||
|
||||
#### Output Options ####
|
||||
|
||||
ZMap allows users to specify and write their own output modules for use with ZMap. Output modules are responsible for processing the fieldsets returned by the probe module, and outputting them to the user. Users can specify output fields, and write filters over the output fields.
|
||||
|
||||
**--list-output-modules**
|
||||
|
||||
List available output modules (e.g. csv)
|
||||
|
||||
**-O, --output-module=name**
|
||||
|
||||
Select output module (default=csv)
|
||||
|
||||
**--output-args=args**
|
||||
|
||||
Arguments to pass to output module
|
||||
|
||||
**-f, --output-fields=fields**
|
||||
|
||||
Comma-separated list of fields to output
|
||||
|
||||
**--output-filter**
|
||||
|
||||
Specify an output filter over the fields defined by the probe module
|
||||
|
||||
#### Additional Options ####
|
||||
|
||||
**-C, --config=filename**
|
||||
|
||||
Read a configuration file, which can specify any other options.
|
||||
|
||||
**-q, --quiet**
|
||||
|
||||
Do not print status updates once per second
|
||||
|
||||
**-g, --summary**
|
||||
|
||||
Print configuration and summary of results at the end of the scan
|
||||
|
||||
**-v, --verbosity=n**
|
||||
|
||||
Level of log detail (0-5, default=3)
|
||||
|
||||
**-h, --help**
|
||||
|
||||
Print help and exit
|
||||
|
||||
**-V, --version**
|
||||
|
||||
Print version and exit
|
||||
|
||||
----------
|
||||
|
||||
### Additional Information ###
|
||||
|
||||
#### TCP SYN Scans ####
|
||||
|
||||
When performing a TCP SYN scan, ZMap requires a single target port and supports specifying a range of source ports from which the scan will originate.
|
||||
|
||||
**-p, --target-port=port**
|
||||
|
||||
TCP port number to scan (e.g. 443)
|
||||
|
||||
**-s, --source-port=port|range**
|
||||
|
||||
Source port(s) for scan packets (e.g. 40000-50000)
|
||||
|
||||
**Warning!** ZMap relies on the Linux kernel to respond to SYN/ACK packets with RST packets in order to close connections opened by the scanner. This occurs because ZMap sends packets at the Ethernet layer in order to reduce overhead otherwise incurred in the kernel from tracking open TCP connections and performing route lookups. As such, if you have a firewall rule that tracks established connections, such as a netfilter rule similar to `-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`, this will block SYN/ACK packets from reaching the kernel. This will not prevent ZMap from recording responses, but it will prevent RST packets from being sent back, ultimately using up a connection on the scanned host until your connection times out. We strongly recommend that you select a set of unused ports on your scanning host, allow access to them in your firewall, and specify this port range when executing ZMap with the `-s` flag (e.g. `-s '50000-60000'`).
|
||||
|
||||
#### ICMP Echo Request Scans ####
|
||||
|
||||
While ZMap performs TCP SYN scans by default, it also supports ICMP echo request scans in which an ICMP echo request packet is sent to each host and the type of ICMP response received in reply is denoted. An ICMP scan can be performed by selecting the icmp_echoscan scan module similar to the following:
|
||||
|
||||
$ zmap --probe-module=icmp_echoscan
|
||||
|
||||
#### UDP Datagram Scans ####
|
||||
|
||||
ZMap additionally supports UDP probes, where it will send out an arbitrary UDP datagram to each host, and receive either UDP or ICMP Unreachable responses. ZMap supports four different methods of setting the UDP payload through the --probe-args command-line option. These are 'text' for ASCII-printable payloads, 'hex' for hexadecimal payloads set on the command-line, 'file' for payloads contained in an external file, and 'template' for payloads that require dynamic field generation. In order to obtain the UDP response, make sure that you specify 'data' as one of the fields to report with the -f option.
|
||||
|
||||
The example below will send the two bytes 'ST', a PCAnywhere 'status' request, to UDP port 5632.
|
||||
|
||||
$ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -
|
||||
|
||||
The example below will send the byte '0x02', a SQL Server 'client broadcast' request, to UDP port 1434.
|
||||
|
||||
$ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -
|
||||
|
||||
The example below will send a NetBIOS status request to UDP port 137. This uses a payload file that is included with the ZMap distribution.
|
||||
|
||||
$ zmap -M udp -p 137 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -
|
||||
|
||||
The example below will send a SIP 'OPTIONS' request to UDP port 5060. This uses a template file that is included with the ZMap distribution.
|
||||
|
||||
$ zmap -M udp -p 5060 --probe-args=template:sip_options.tpl -N 100 -f saddr,data -o -
|
||||
|
||||
UDP payload templates are still experimental. You may encounter crashes when using more than one send thread (-T), and there is a significant decrease in performance compared to static payloads. A template is simply a payload file that contains one or more field specifiers enclosed in a ${} sequence. Some protocols, notably SIP, require the payload to reflect the source and destination of the packet. Other protocols, such as portmapper and DNS, contain fields that should be randomized per request or risk being dropped by multi-homed systems scanned by ZMap.
|
||||
|
||||
The payload template below will send a SIP OPTIONS request to every destination:
|
||||
|
||||
OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
|
||||
Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
|
||||
From: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT};tag=${RAND_DIGIT=8}
|
||||
To: sip:${RAND_ALPHA=8}@${DADDR}
|
||||
Call-ID: ${RAND_DIGIT=10}@${SADDR}
|
||||
CSeq: 1 OPTIONS
|
||||
Contact: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT}
|
||||
Content-Length: 0
|
||||
Max-Forwards: 20
|
||||
User-Agent: ${RAND_ALPHA=8}
|
||||
Accept: text/plain
|
||||
|
||||
In the example above, note that line endings are \r\n and the end of this request must contain \r\n\r\n for most SIP implementations to correctly process it. A working example is included in the examples/udp-payloads directory of the ZMap source tree (sip_options.tpl).
|
||||
|
||||
The following template fields are currently implemented:
|
||||
|
||||
|
||||
- **SADDR**: Source IP address in dotted-quad format
|
||||
- **SADDR_N**: Source IP address in network byte order
|
||||
- **DADDR**: Destination IP address in dotted-quad format
|
||||
- **DADDR_N**: Destination IP address in network byte order
|
||||
- **SPORT**: Source port in ascii format
|
||||
- **SPORT_N**: Source port in network byte order
|
||||
- **DPORT**: Destination port in ascii format
|
||||
- **DPORT_N**: Destination port in network byte order
|
||||
- **RAND_BYTE**: Random bytes (0-255), length specified with =(length) parameter
|
||||
- **RAND_DIGIT**: Random digits from 0-9, length specified with =(length) parameter
|
||||
- **RAND_ALPHA**: Random mixed-case letters from A-Z, length specified with =(length) parameter
|
||||
- **RAND_ALPHANUM**: Random mixed-case letters from A-Z and digits from 0-9, length specified with =(length) parameter
|
||||
|
||||
### Configuration Files ###
|
||||
|
||||
ZMap supports configuration files instead of requiring all options to be specified on the command-line. A configuration can be created by specifying one long-name option and the value per line such as:
|
||||
|
||||
interface "eth1"
|
||||
source-ip 1.1.1.4-1.1.1.8
|
||||
gateway-mac b4:23:f9:28:fa:2d # upstream gateway
|
||||
cooldown-time 300 # seconds
|
||||
blacklist-file /etc/zmap/blacklist.conf
|
||||
output-file ~/zmap-output
|
||||
quiet
|
||||
summary
|
||||
|
||||
ZMap can then be run with a configuration file, specifying any additional necessary parameters:
|
||||
|
||||
$ zmap --config=~/.zmap.conf --target-port=443
|
||||
|
||||
### Verbosity ###
|
||||
|
||||
There are several types of on-screen output that ZMap produces. By default, ZMap will print out basic progress information similar to the following every 1 second. This can be disabled by setting the `--quiet` flag.
|
||||
|
||||
0:01 12%; send: 10000 done (15.1 Kp/s avg); recv: 144 143 p/s (141 p/s avg); hits: 1.44%
|
||||
|
||||
ZMap also prints out informational messages during scanner configuration such as the following, which can be controlled with the `--verbosity` argument.
|
||||
|
||||
Aug 11 16:16:12.813 [INFO] zmap: started
|
||||
Aug 11 16:16:12.817 [DEBUG] zmap: no interface provided. will use eth0
|
||||
Aug 11 16:17:03.971 [DEBUG] cyclic: primitive root: 3489180582
|
||||
Aug 11 16:17:03.971 [DEBUG] cyclic: starting point: 46588
|
||||
Aug 11 16:17:03.975 [DEBUG] blacklist: 3717595507 addresses allowed to be scanned
|
||||
Aug 11 16:17:03.975 [DEBUG] send: will send from 1 address on 28233 source ports
|
||||
Aug 11 16:17:03.975 [DEBUG] send: using bandwidth 10000000 bits/s, rate set to 14880 pkt/s
|
||||
Aug 11 16:17:03.985 [DEBUG] recv: thread started
|
||||
|
||||
ZMap also supports printing out a grep-able summary at the end of the scan, similar to below, which can be invoked with the `--summary` flag.
|
||||
|
||||
cnf target-port 443
|
||||
cnf source-port-range-begin 32768
|
||||
cnf source-port-range-end 61000
|
||||
cnf source-addr-range-begin 1.1.1.4
|
||||
cnf source-addr-range-end 1.1.1.8
|
||||
cnf maximum-packets 4294967295
|
||||
cnf maximum-runtime 0
|
||||
cnf permutation-seed 0
|
||||
cnf cooldown-period 300
|
||||
cnf send-interface eth1
|
||||
cnf rate 45000
|
||||
env nprocessors 16
|
||||
exc send-start-time Fri Jan 18 01:47:35 2013
|
||||
exc send-end-time Sat Jan 19 00:47:07 2013
|
||||
exc recv-start-time Fri Jan 18 01:47:35 2013
|
||||
exc recv-end-time Sat Jan 19 00:52:07 2013
|
||||
exc sent 3722335150
|
||||
exc blacklisted 572632145
|
||||
exc first-scanned 1318129262
|
||||
exc hit-rate 0.874102
|
||||
exc synack-received-unique 32537000
|
||||
exc synack-received-total 36689941
|
||||
exc synack-cooldown-received-unique 193
|
||||
exc synack-cooldown-received-total 1543
|
||||
exc rst-received-unique 141901021
|
||||
exc rst-received-total 166779002
|
||||
adv source-port-secret 37952
|
||||
adv permutation-gen 4215763218
|
||||
|
||||
### Results Output ###
|
||||
|
||||
ZMap can produce results in several formats through the use of **output modules**. By default, ZMap only supports **csv** output, however support for **redis** and **json** can be compiled in. The results sent to these output modules may be filtered using an **output filter**. The fields the output module writes are specified by the user. By default, ZMap will return results in csv format and if no output file is specified, ZMap will not produce specific results. It is also possible to write your own output module; see Writing Output Modules for information.
|
||||
|
||||
**-o, --output-file=p**
|
||||
|
||||
File to write output to
|
||||
|
||||
**-O, --output-module=p**
|
||||
|
||||
Invoke a custom output module
|
||||
|
||||
|
||||
**-f, --output-fields=p**
|
||||
|
||||
Comma-separated list of fields to output
|
||||
|
||||
**--output-filter=filter**
|
||||
|
||||
Specify an output filter over fields for a given probe
|
||||
|
||||
**--list-output-modules**
|
||||
|
||||
Lists available output modules
|
||||
|
||||
**--list-output-fields**
|
||||
|
||||
List available output fields for a given probe
|
||||
|
||||
#### Output Fields ####
|
||||
|
||||
ZMap has a variety of fields it can output beyond IP address. These fields can be viewed for a given probe module by running with the `--list-output-fields` flag.
|
||||
|
||||
$ zmap --probe-module="tcp_synscan" --list-output-fields
|
||||
saddr string: source IP address of response
|
||||
saddr-raw int: network order integer form of source IP address
|
||||
daddr string: destination IP address of response
|
||||
daddr-raw int: network order integer form of destination IP address
|
||||
ipid int: IP identification number of response
|
||||
ttl int: time-to-live of response packet
|
||||
sport int: TCP source port
|
||||
dport int: TCP destination port
|
||||
seqnum int: TCP sequence number
|
||||
acknum int: TCP acknowledgement number
|
||||
window int: TCP window
|
||||
classification string: packet classification
|
||||
success int: is response considered success
|
||||
repeat int: is response a repeat response from host
|
||||
cooldown int: Was response received during the cooldown period
|
||||
timestamp-str string: timestamp of when response arrived in ISO8601 format.
|
||||
timestamp-ts int: timestamp of when response arrived in seconds since Epoch
|
||||
timestamp-us int: microsecond part of timestamp (e.g. microseconds since 'timestamp-ts')
|
||||
|
||||
To select which fields to output, any combination of the output fields can be specified as a comma-separated list using the `--output-fields=fields` or `-f` flags. Example:
|
||||
|
||||
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
|
||||
|
||||
#### Filtering Output ####
|
||||
|
||||
Results generated by a probe module can be filtered before being passed to the output module. Filters are defined over the output fields of a probe module. Filters are written in a simple filtering language, similar to SQL, and are passed to ZMap using the **--output-filter** option. Output filters are commonly used to filter out duplicate results, or to pass only successful responses to the output module.
|
||||
|
||||
Filter expressions are of the form `<fieldname> <operation> <value>`. The type of `<value>` must be either a string or unsigned integer literal, and match the type of `<fieldname>`. The valid operations for integer comparisons are `=, !=, <, >, <=, >=`. The operations for string comparisons are `=, !=`. The `--list-output-fields` flag will print what fields and types are available for the selected probe module, and then exit.
|
||||
|
||||
Compound filter expressions may be constructed by combining filter expressions using parenthesis to specify order of operations, the `&&` (logical AND) and `||` (logical OR) operators.
|
||||
|
||||
**Examples**
|
||||
|
||||
Write a filter for only successful, non-duplicate responses
|
||||
|
||||
--output-filter="success = 1 && repeat = 0"
|
||||
|
||||
Filter for packets that have classification RST and a TTL greater than 10, or for packets with classification SYNACK
|
||||
|
||||
--output-filter="(classification = rst && ttl > 10) || classification = synack"
|
||||
|
||||
#### CSV ####
|
||||
|
||||
The csv module will produce a comma-separated value file of the output fields requested. For example, the following command produces the following CSV in a file called `output.csv`.
|
||||
|
||||
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
|
||||
|
||||
----------
|
||||
|
||||
response, saddr, daddr, sport, dport, seq, ack, in_cooldown, is_repeat, timestamp
|
||||
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
|
||||
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
|
||||
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
|
||||
rst, 148.8.49.150, 10.0.0.9, 80, 41672, 0, 1135824975, 0, 0,2013-08-15 18:55:47.692
|
||||
rst, 50.165.166.206, 10.0.0.9, 80, 38858, 0, 535206863, 0, 0,2013-08-15 18:55:47.694
|
||||
rst, 65.55.203.135, 10.0.0.9, 80, 50008, 0, 4071709905, 0, 0,2013-08-15 18:55:47.700
|
||||
synack, 50.57.166.186, 10.0.0.9, 80, 60650, 2813653162, 993314545, 0, 0,2013-08-15 18:55:47.704
|
||||
synack, 152.75.208.114, 10.0.0.9, 80, 52498, 460383682, 4040786862, 0, 0,2013-08-15 18:55:47.707
|
||||
synack, 23.72.138.74, 10.0.0.9, 80, 33480, 810393698, 486476355, 0, 0,2013-08-15 18:55:47.710
|
||||
|
||||
#### Redis ####
|
||||
|
||||
The redis output module allows addresses to be added to a Redis queue instead of being saved to a file, which ultimately allows ZMap to be integrated with post-processing tools.
|
||||
|
||||
**Heads Up!** ZMap does not build with Redis support by default. If you are building ZMap from source, you can build with Redis support by running CMake with `-DWITH_REDIS=ON`.
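A minimal build sketch, assuming you are building from the ZMap source tree and have the hiredis development headers installed (the exact package name varies by distribution):

    $ cmake -DWITH_REDIS=ON .
    $ make
    $ sudo make install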
|
||||
|
||||
### Blacklisting and Whitelisting ###
|
||||
|
||||
ZMap supports both blacklisting and whitelisting network prefixes. If ZMap is not provided with blacklist or whitelist parameters, ZMap will scan all IPv4 addresses (including local, reserved, and multicast addresses). If a blacklist file is specified, network prefixes in the blacklisted segments will not be scanned; if a whitelist file is provided, only network prefixes in the whitelist file will be scanned. A whitelist and blacklist file can be used in coordination; the blacklist has priority over the whitelist (e.g. if you have whitelisted 10.0.0.0/8 and blacklisted 10.1.0.0/16, then 10.1.0.0/16 will not be scanned). Whitelist and blacklist files can be specified on the command-line as follows:
|
||||
|
||||
**-b, --blacklist-file=path**
|
||||
|
||||
File of subnets to blacklist in CIDR notation, e.g. 192.168.0.0/16
|
||||
|
||||
**-w, --whitelist-file=path**
|
||||
|
||||
File of subnets to limit scan to in CIDR notation, e.g. 192.168.0.0/16
|
||||
|
||||
Blacklist files should be formatted with a single network prefix in CIDR notation per line. Comments are allowed using the `#` character. Example:
|
||||
|
||||
# From IANA IPv4 Special-Purpose Address Registry
|
||||
# http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
|
||||
# Updated 2013-05-22
|
||||
|
||||
0.0.0.0/8 # RFC1122: "This host on this network"
|
||||
10.0.0.0/8 # RFC1918: Private-Use
|
||||
100.64.0.0/10 # RFC6598: Shared Address Space
|
||||
127.0.0.0/8 # RFC1122: Loopback
|
||||
169.254.0.0/16 # RFC3927: Link Local
|
||||
172.16.0.0/12 # RFC1918: Private-Use
|
||||
192.0.0.0/24 # RFC6890: IETF Protocol Assignments
|
||||
192.0.2.0/24 # RFC5737: Documentation (TEST-NET-1)
|
||||
192.88.99.0/24 # RFC3068: 6to4 Relay Anycast
|
||||
192.168.0.0/16 # RFC1918: Private-Use
|
||||
192.18.0.0/15 # RFC2544: Benchmarking
|
||||
198.51.100.0/24 # RFC5737: Documentation (TEST-NET-2)
|
||||
203.0.113.0/24 # RFC5737: Documentation (TEST-NET-3)
|
||||
240.0.0.0/4 # RFC1112: Reserved
|
||||
255.255.255.255/32 # RFC0919: Limited Broadcast
|
||||
|
||||
# From IANA Multicast Address Space Registry
|
||||
# http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
|
||||
# Updated 2013-06-25
|
||||
|
||||
224.0.0.0/4 # RFC5771: Multicast/Reserved
|
||||
|
||||
If you are looking to scan only a random portion of the Internet, check out Sampling instead of using whitelisting and blacklisting.
|
||||
|
||||
**Heads Up!** The default ZMap configuration uses the blacklist file at `/etc/zmap/blacklist.conf`, which contains locally scoped address space and reserved IP ranges. The default configuration can be changed by editing `/etc/zmap/zmap.conf`.
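For example, to scan port 80 only within a whitelisted set of networks while still honoring the default blacklist, a command along the following lines could be used (the whitelist file name is illustrative):

    $ zmap -p 80 -w campus-nets.txt -b /etc/zmap/blacklist.conf -o results.csv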
|
||||
|
||||
### Rate Limiting and Sampling ###
|
||||
|
||||
By default, ZMap will scan at the fastest rate that your network adapter supports. In our experience on commodity hardware, this is generally around 95-98% of the theoretical speed of gigabit Ethernet, which may be faster than your upstream provider can handle. ZMap will not automatically adjust its send rate based on your upstream provider. You may need to manually adjust your send rate to reduce packet drops and incorrect results.
|
||||
|
||||
**-r, --rate=pps**
|
||||
|
||||
Set maximum send rate in packets/sec
|
||||
|
||||
**-B, --bandwidth=bps**
|
||||
|
||||
Set send rate in bits/sec (supports suffixes G, M, and K). This overrides the --rate flag.
|
||||
|
||||
ZMap also allows random sampling of the IPv4 address space by specifying max-targets and/or max-runtime. Because hosts are scanned in a random permutation generated per scan instantiation, limiting a scan to n hosts will perform a random sampling of n hosts. Command-line options:
|
||||
|
||||
**-n, --max-targets=n**
|
||||
|
||||
Cap number of targets to probe
|
||||
|
||||
**-N, --max-results=n**
|
||||
|
||||
Cap number of results (exit after receiving this many positive results)
|
||||
|
||||
**-t, --max-runtime=s**
|
||||
|
||||
Cap length of time for sending packets (in seconds)
|
||||
|
||||
**-e, --seed=n**
|
||||
|
||||
Seed used to select address permutation. Specify the same seed in order to scan addresses in the same order for different ZMap runs.
|
||||
|
||||
For example, if you wanted to scan the same one million hosts on the Internet for multiple scans, you could set a predetermined seed and cap the number of scanned hosts similar to the following:
|
||||
|
||||
zmap -p 443 -e 3 -n 1000000 -o results
|
||||
|
||||
In order to determine which one million hosts were going to be scanned, you could run the scan in dry-run mode which will print out the packets that would be sent instead of performing the actual scan.
|
||||
|
||||
zmap -p 443 -e 3 -n 1000000 --dryrun | grep daddr
|
||||
| awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
|
||||
|
||||
### Sending Multiple Packets ###
|
||||
|
||||
ZMap supports sending multiple probes to each host. Increasing this number increases both the scan time and the number of hosts reached. However, we find that the increase in scan time (~100% per additional probe) greatly outweighs the increase in hosts reached (~1% per additional probe).
|
||||
|
||||
**-P, --probes=n**
|
||||
|
||||
The number of unique probes to send to each IP (default=1)
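For instance, to send three SYN probes to each host on port 443 and write the responders to a file:

    $ zmap -p 443 -P 3 -o results.csv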
|
||||
|
||||
----------
|
||||
|
||||
### Sample Applications ###
|
||||
|
||||
ZMap is designed for initiating contact with a large number of hosts and finding ones that respond positively. However, we realize that many users will want to perform follow-up processing, such as performing an application level handshake. For example, users who perform a TCP SYN scan on port 80 might want to perform a simple GET request and users who scan port 443 may be interested in completing a TLS handshake.
|
||||
|
||||
#### Banner Grab ####
|
||||
|
||||
We have included a sample application, banner-grab, with ZMap that enables users to receive messages from listening TCP servers. Banner-grab connects to the provided servers, optionally sends a message, and prints out the first message received from the server. This tool can be used to fetch banners such as HTTP server responses to specific commands, telnet login prompts, or SSH server strings.
|
||||
|
||||
This example finds 1000 servers listening on port 80, and sends a simple GET request to each, storing their base-64 encoded responses in http-banners.out
|
||||
|
||||
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > http-banners.out
|
||||
|
||||
For more details on using `banner-grab`, see the README file in `examples/banner-grab`.
|
||||
|
||||
**Heads Up!** ZMap and banner-grab can have significant performance and accuracy impact on one another if run simultaneously (as in the example). Make sure not to let ZMap saturate banner-grab-tcp's concurrent connections, otherwise banner-grab will fall behind reading stdin, causing ZMap to block on writing stdout. We recommend using a slower scanning rate with ZMap, and increasing the concurrency of banner-grab-tcp to no more than 3000 (Note that > 1000 concurrent connections requires you to use `ulimit -SHn 100000` and `ulimit -HHn 100000` to increase the maximum file descriptors per process). These parameters will of course be dependent on your server performance, and hit-rate; we encourage developers to experiment with small samples before running a large scan.
|
||||
|
||||
#### Forge Socket ####
|
||||
|
||||
We have also included a form of banner-grab, called forge-socket, that reuses the SYN-ACK sent from the server for the connection that ultimately fetches the banner. In `banner-grab-tcp`, ZMap sends a SYN to each server, and listening servers respond with a SYN+ACK. The ZMap host's kernel receives this, and sends a RST, as no active connection is associated with that packet. The banner-grab program must then create a new TCP connection to the same server to fetch data from it.
|
||||
|
||||
In forge-socket, we utilize a kernel module by the same name, that allows us to create a connection with arbitrary TCP parameters. This enables us to suppress the kernel's RST packet, and instead create a socket that will reuse the SYN+ACK's parameters, and send and receive data through this socket as we would any normally connected socket.
|
||||
|
||||
To use forge-socket, you will need the forge-socket kernel module, available from [github][1]. You should git clone `git@github.com:ewust/forge_socket.git` in the ZMap root source directory, and then cd into the forge_socket directory, and run make. Install the kernel module with `insmod forge_socket.ko` as root.
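Put together, the build-and-install steps described above look roughly like this (run from the ZMap source root; treat this as a sketch rather than exact output):

    git clone git@github.com:ewust/forge_socket.git
    cd forge_socket
    make
    sudo insmod forge_socket.ko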
|
||||
|
||||
You must also tell the kernel not to send RST packets. An easy way to disable RST packets system wide is to use **iptables**. `iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root will do this, though you may also add an optional --dport X to limit this to the port (X) you are scanning. To remove this after your scan completes, you can run `iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root.
|
||||
|
||||
Now you should be able to build the forge-socket ZMap example program. To run it, you must use the **extended_file** ZMap output module:
|
||||
|
||||
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
|
||||
./forge-socket -c 500 -d ./http-req > ./http-banners.out
|
||||
|
||||
See the README in `examples/forge-socket` for more details.
|
||||
|
||||
----------
|
||||
|
||||
### Writing Probe and Output Modules ###
|
||||
|
||||
ZMap can be extended to support different types of scanning through **probe modules** and additional types of results output through **output modules**. Registered probe and output modules can be listed through the command-line interface:
|
||||
|
||||
**--list-probe-modules**
|
||||
|
||||
Lists installed probe modules
|
||||
|
||||
**--list-output-modules**
|
||||
|
||||
Lists installed output modules
|
||||
|
||||
#### Output Modules ####
|
||||
|
||||
ZMap output and post-processing can be extended by implementing and registering **output modules** with the scanner. Output modules receive a callback for every received response packet. While the default provided modules provide simple output, these modules are also capable of performing additional post-processing (e.g. tracking duplicates or outputting numbers in terms of AS instead of IP address)
|
||||
|
||||
Output modules are created by defining a new output_module struct and registering it in [output_modules.c][2]:
|
||||
|
||||
typedef struct output_module {
|
||||
const char *name; // how is output module referenced in the CLI
|
||||
unsigned update_interval; // how often is update called in seconds
|
||||
|
||||
output_init_cb init; // called at scanner initialization
|
||||
output_update_cb start; // called at the beginning of scanner
|
||||
output_update_cb update; // called every update_interval seconds
|
||||
output_update_cb close; // called at scanner termination
|
||||
|
||||
output_packet_cb process_ip; // called when a response is received
|
||||
|
||||
const char *helptext; // Printed when --list-output-modules is called
|
||||
|
||||
} output_module_t;
|
||||
|
||||
Output modules must have a name, which is how they are referenced on the command-line and generally implement `success_ip` and oftentimes `other_ip` callback. The process_ip callback is called for every response packet that is received and passed through the output filter by the current **probe module**. The response may or may not be considered a success (e.g. it could be a TCP RST). These callbacks must define functions that match the `output_packet_cb` definition:
|
||||
|
||||
int (*output_packet_cb) (
|
||||
|
||||
ipaddr_n_t saddr, // IP address of scanned host in network-order
|
||||
ipaddr_n_t daddr, // destination IP address in network-order
|
||||
|
||||
const char* response_type, // send-module classification of packet
|
||||
|
||||
int is_repeat, // {0: first response from host, 1: subsequent responses}
|
||||
int in_cooldown, // {0: not in cooldown state, 1: scanner in cooldown state}
|
||||
|
||||
const u_char* packet, // pointer to struct iphdr of IP packet
|
||||
size_t packet_len // length of packet in bytes
|
||||
);
|
||||
|
||||
An output module can also register callbacks to be executed at scanner initialization (tasks such as opening an output file), start of the scan (tasks such as documenting blacklisted addresses), during regular intervals during the scan (tasks such as progress updates), and close (tasks such as closing any open file descriptors). These callbacks are provided with complete access to the scan configuration and current state:
|
||||
|
||||
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
|
||||
|
||||
which are defined in [output_modules.h][3]. An example is available at [src/output_modules/module_csv.c][4].
|
||||
|
||||
#### Probe Modules ####
|
||||
|
||||
Packets are constructed using probe modules which allow abstracted packet creation and response classification. ZMap comes with two scan modules by default: `tcp_synscan` and `icmp_echoscan`. By default, ZMap uses `tcp_synscan`, which sends TCP SYN packets, and classifies responses from each host as open (received SYN+ACK) or closed (received RST). ZMap also allows developers to write their own probe modules for use with ZMap, using the following API.
|
||||
|
||||
Each type of scan is implemented by developing and registering the necessary callbacks in a `send_module_t` struct:
|
||||
|
||||
typedef struct probe_module {
|
||||
const char *name; // how scan is invoked on command-line
|
||||
size_t packet_length; // how long is probe packet (must be static size)
|
||||
|
||||
const char *pcap_filter; // PCAP filter for collecting responses
|
||||
size_t pcap_snaplen; // maximum number of bytes for libpcap to capture
|
||||
|
||||
uint8_t port_args; // set to 1 if ZMap requires a --target-port be
|
||||
// specified by the user
|
||||
|
||||
probe_global_init_cb global_initialize; // called once at scanner initialization
|
||||
probe_thread_init_cb thread_initialize; // called once for each thread packet buffer
|
||||
probe_make_packet_cb make_packet; // called once per host to update packet
|
||||
probe_validate_packet_cb validate_packet; // called once per received packet,
|
||||
// return 0 if packet is invalid,
|
||||
// non-zero otherwise.
|
||||
|
||||
probe_print_packet_cb print_packet; // called per packet if in dry-run mode
|
||||
probe_classify_packet_cb process_packet; // called by receiver to classify response
|
||||
probe_close_cb close; // called at scanner termination
|
||||
|
||||
fielddef_t *fields; // Definitions of the fields specific to this module
|
||||
int numfields; // Number of fields
|
||||
|
||||
} probe_module_t;
|
||||
|
||||
At scanner initialization, `global_initialize` is called once and can be utilized to perform any necessary global configuration or initialization. However, `global_initialize` does not have access to the packet buffer which is thread-specific. Instead, `thread_initialize` is called at the initialization of each sender thread and is provided with access to the buffer that will be used for constructing probe packets along with global source and destination values. This callback should be used to construct the host agnostic packet structure such that only specific values (e.g. destination host and checksum) need to be updated for each host. For example, the Ethernet header will not change between hosts (minus checksum which is calculated in hardware by the NIC) and therefore can be defined ahead of time in order to reduce overhead at scan time.
|
||||
|
||||
The `make_packet` callback is called for each host that is scanned to allow the **probe module** to update host specific values and is provided with IP address values, an opaque validation string, and probe number (shown below). The probe module is responsible for placing as much of the verification string into the probe, in such a way that when a valid response is returned by a server, the probe module can verify that it is present. For example, for a TCP SYN scan, the tcp_synscan probe module can use the TCP source port and sequence number to store the validation string. Response packets (SYN+ACKs) will contain the expected values in the destination port and acknowledgement number.
|
||||
|
||||
int make_packet(
|
||||
void *packetbuf, // packet buffer
|
||||
ipaddr_n_t src_ip, // source IP in network-order
|
||||
ipaddr_n_t dst_ip, // destination IP in network-order
|
||||
uint32_t *validation, // validation string to place in probe
|
||||
int probe_num // if sending multiple probes per host,
|
||||
// this will be which probe number for this
|
||||
// host we are currently sending
|
||||
);
|
||||
|
||||
Scan modules must also define `pcap_filter`, `validate_packet`, and `process_packet`. Only packets that match the PCAP filter will be considered by the scanner. For example, in the case of a TCP SYN scan, we only want to investigate TCP SYN/ACK or TCP RST packets and would utilize a filter similar to `tcp && tcp[13] & 4 != 0 || tcp[13] == 18`. The `validate_packet` function will be called for every packet that fulfills this PCAP filter. If the validation returns non-zero, the `process_packet` function will be called, and will populate a fieldset using fields defined in `fields` with data from the packet. For example, the following code processes a packet for the TCP synscan probe module.
|
||||
|
||||
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
|
||||
{
|
||||
struct iphdr *ip_hdr = (struct iphdr *)&packet[sizeof(struct ethhdr)];
|
||||
struct tcphdr *tcp = (struct tcphdr*)((char *)ip_hdr
|
||||
+ (sizeof(struct iphdr)));
|
||||
|
||||
fs_add_uint64(fs, "sport", (uint64_t) ntohs(tcp->source));
|
||||
fs_add_uint64(fs, "dport", (uint64_t) ntohs(tcp->dest));
|
||||
fs_add_uint64(fs, "seqnum", (uint64_t) ntohl(tcp->seq));
|
||||
fs_add_uint64(fs, "acknum", (uint64_t) ntohl(tcp->ack_seq));
|
||||
fs_add_uint64(fs, "window", (uint64_t) ntohs(tcp->window));
|
||||
|
||||
if (tcp->rst) { // RST packet
|
||||
fs_add_string(fs, "classification", (char*) "rst", 0);
|
||||
fs_add_uint64(fs, "success", 0);
|
||||
} else { // SYNACK packet
|
||||
fs_add_string(fs, "classification", (char*) "synack", 0);
|
||||
fs_add_uint64(fs, "success", 1);
|
||||
}
|
||||
}
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://zmap.io/documentation.html
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://github.com/ewust/forge_socket/
|
||||
[2]:https://github.com/zmap/zmap/blob/v1.0.0/src/output_modules/output_modules.c
|
||||
[3]:https://github.com/zmap/zmap/blob/master/src/output_modules/output_modules.h
|
||||
[4]:https://github.com/zmap/zmap/blob/master/src/output_modules/module_csv.c
|
@ -0,0 +1,61 @@
|
||||
Papyrus:开源笔记管理工具
|
||||
================================================================================
|
||||

|
||||
|
||||
在上一篇帖子中,我们介绍了[任务管理软件Go For It!][1]。今天我们将介绍一款名为 **Papyrus** 的开源笔记软件。
|
||||
|
||||
[Papyrus][2] 是[Kaqaz笔记管理][3]的变体,使用了Qt5。它不仅有简洁、易用的界面,还具备了较好的安全性。由于强调简洁,我觉得Papyrus与OneNote比较相像。你可以将你的笔记像“纸张”一样分类整理,还可以给它们添加标签进行分组。够简单的吧!
|
||||
|
||||
### Papyrus的功能: ###
|
||||
|
||||
|
||||
|
||||
虽然Papyrus强调简洁,它依然有很多丰富的功能。它的一些主要功能如下:
|
||||
- 按类别和标签管理笔记
|
||||
- 高级搜索选项
|
||||
- 触屏模式
|
||||
- 全屏选项
|
||||
- 备份至Dropbox/硬盘
|
||||
- 某些页面允许加密
|
||||
- 可与其他软件共享笔记
|
||||
- 与Dropbox加密同步
|
||||
- 除Linux外,还可在Android,Windows和OS X使用
|
||||
|
||||
### 安装 Papyrus ###
|
||||
|
||||
Papyrus为Android用户提供了APK安装包。Windows和OS X也有安装文件。Linux用户还可以获取程序的源码。使用Ubuntu及其他基于Ubuntu的发行版可以使用.deb包进行安装。根据你的系统及习惯,你可以从Papyrus的下载页面中获取不同的文件:
|
||||
|
||||
- [下载 Papyrus][4]
|
||||
|
||||
### 软件截图 ###
|
||||
|
||||
以下是此软件的一些截图:
|
||||
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
试试Papyrus吧,你会喜欢上它的。
|
||||
|
||||
(译者注:此软件暂无中文版)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/papyrus-open-source-note-manager/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[KevinSJ](https://github.com/KevinSJ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/go-for-it-to-do-app-in-linux/
|
||||
[2]:http://aseman.co/en/products/papyrus/
|
||||
[3]:https://github.com/sialan-labs/kaqaz/
|
||||
[4]:http://aseman.co/en/products/papyrus/
|
@ -0,0 +1,167 @@
|
||||
|
||||
怎样在CentOS 7.0上安装/配置VNC服务器
|
||||
================================================================================
|
||||
这是一个关于怎样在你的 CentOS 7 上安装配置 [VNC][1] 服务的教程。当然这个教程也适合 RHEL 7 。在这个教程里,我们将学习什么是VNC以及怎样在 CentOS 7 上安装配置 [VNC 服务器][1]。
|
||||
|
||||
我们都知道,作为一个系统管理员,大多数时间是通过网络管理服务器的。在管理服务器的过程中很少会用到图形界面,多数情况下我们只是用 SSH 来完成我们的管理任务。在这篇文章里,我们将配置 VNC 来提供一个连接我们 CentOS 7 服务器的方法。VNC 允许我们开启一个远程图形会话来连接我们的服务器,这样我们就可以通过网络远程访问服务器的图形界面了。
|
||||
|
||||
VNC 服务器是一个自由且开源的软件,它可以让用户可以远程访问服务器的桌面环境。另外连接 VNC 服务器需要使用 VNC viewer 这个客户端。
|
||||
|
||||
**一些 VNC 服务器的优点:**

- 远程的图形管理方式让工作变得简单方便。
- 剪贴板可以在 CentOS 服务器主机和 VNC 客户端机器之间共享。
- CentOS 服务器上也可以安装图形工具,让管理能力变得更强大。
- 只要安装了 VNC 客户端,任何操作系统都可以管理 CentOS 服务器了。
- 比 ssh 图形和 RDP 连接更可靠。
|
||||
|
||||
那么,让我们开始安装 VNC 服务器之旅吧。我们需要按照下面的步骤一步一步来搭建一个有效的 VNC。
|
||||
|
||||
|
||||
首先,我们需要一个有效的桌面环境(X-Window),如果没有的话要先安装一个。
|
||||
|
||||
**注意:以下命令必须以 root 权限运行。要切换到 root ,请在终端下运行“sudo -s”,当然不包括双引号(“”)**
|
||||
|
||||
### 1. 安装 X-Window ###
|
||||
|
||||
首先我们需要安装 [X-Window][2],在终端中运行下面的命令,安装会花费一点时间。
|
||||
|
||||
# yum check-update
|
||||
# yum groupinstall "X Window System"
|
||||
|
||||

|
||||
|
||||
# yum install gnome-classic-session gnome-terminal nautilus-open-terminal control-center liberation-mono-fonts
|
||||
|
||||

|
||||
|
||||
# unlink /etc/systemd/system/default.target
|
||||
# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target
|
||||
|
||||

|
||||
|
||||
# reboot
|
||||
|
||||
在服务器重启之后,我们就有了一个工作着的 CentOS 7 桌面环境了。
|
||||
|
||||
现在,我们要在服务器上安装 VNC 服务器了。
|
||||
|
||||
### 2. 安装 VNC 服务器 ###
|
||||
|
||||
现在要在我们的 CentOS 7 上安装 VNC 服务器了。我们需要执行下面的命令。
|
||||
|
||||
# yum install tigervnc-server -y
|
||||
|
||||

|
||||
|
||||
### 3. 配置 VNC ###
|
||||
|
||||
然后,我们需要在 **/etc/systemd/system/** 目录里创建一个配置文件。我们可以从 **/lib/systemd/system/vncserver@.service** 拷贝一份配置文件范例过来。
|
||||
|
||||
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
|
||||
|
||||

|
||||
|
||||
接着我们用自己最喜欢的编辑器(这儿我们用的 **nano** )打开 **/etc/systemd/system/vncserver@:1.service** ,找到下面这几行,用自己的用户名替换掉 <USER> 。举例来说,我的用户名是 linoxide 所以我用 linoxide 来替换掉 <USER> :
|
||||
|
||||
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
|
||||
PIDFile=/home/<USER>/.vnc/%H%i.pid
|
||||
|
||||
替换成
|
||||
|
||||
ExecStart=/sbin/runuser -l linoxide -c "/usr/bin/vncserver %i"
|
||||
PIDFile=/home/linoxide/.vnc/%H%i.pid
|
||||
|
||||
如果是 root 用户则
|
||||
|
||||
ExecStart=/sbin/runuser -l root -c "/usr/bin/vncserver %i"
|
||||
PIDFile=/root/.vnc/%H%i.pid
|
||||
|
||||

|
||||
|
||||
好了,下面重启 systemd 。
|
||||
|
||||
# systemctl daemon-reload
|
||||
|
||||
|
||||
最后还要设置一下用户的 VNC 密码。要设置某个用户的密码,必须要获得该用户的权限,这里我用 linoxide 的权限,执行“**su linoxide**”就可以了。
|
||||
|
||||
# su linoxide
|
||||
$ sudo vncpasswd
|
||||
|
||||

|
||||
|
||||
**确保你输入的密码多于6个字符**
|
||||
|
||||
### 4. 开启服务 ###
|
||||
|
||||
用下面的命令(永久地)开启服务:
|
||||
|
||||
$ sudo systemctl enable vncserver@:1.service
|
||||
|
||||
启动服务。
|
||||
|
||||
$ sudo systemctl start vncserver@:1.service
|
||||
|
||||
### 5. 防火墙设置 ###
|
||||
|
||||
我们需要配置防火墙来让 VNC 服务正常工作。
|
||||
|
||||
$ sudo firewall-cmd --permanent --add-service vnc-server
|
||||
$ sudo systemctl restart firewalld.service
|
||||
|
||||

|
||||
|
||||
现在就可以用 IP 和端口号(例如 192.168.1.1:1 ,这里的端口不是服务器的端口,而是视 VNC 连接数的多少从1开始排序——译注)来连接 VNC 服务器了。
|
||||
|
||||
### 6. 用 VNC 客户端连接服务器 ###
|
||||
|
||||
好了,现在已经完成了 VNC 服务器的安装。要使用 VNC 连接服务器,我们还需要在本地计算机上安装一个 VNC 客户端,用它来连接远程服务器。
|
||||
|
||||

|
||||
|
||||
你可以用像 [Tightvnc viewer][3] 和 [Realvnc viewer][4] 的客户端来连接到服务器。
|
||||
|
||||
要用其他用户和端口连接 VNC 服务器,请回到第3步,添加一个新的用户和端口。你需要创建 **vncserver@:2.service** 并替换配置文件里的用户名和之后步骤里相应的文件名、端口号。**请确保你登录 VNC 服务器用的是你之前配置 VNC 密码的时候使用的那个用户名。**
|
||||
|
||||
|
||||
|
||||
VNC 服务本身使用的是5900端口。鉴于有不同的用户使用 VNC ,每个人的连接都会获得不同的端口。配置文件名里面的数字告诉 VNC 服务器把服务运行在5900的子端口上。在我们这个例子里,第一个 VNC 服务会运行在5901(5900 + 1)端口上,之后的依次增加,运行在5900 + x 号端口上。其中 x 是指之后用户的配置文件名 **vncserver@:x.service** 里面的 x 。
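例如,假设要为第二个用户(这里假设用户名为 alice)再开一个监听 5902 端口的 VNC 会话,大致操作如下(命令仅作示意,细节请参照前面的第 3、4 步):

    # cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:2.service
    # nano /etc/systemd/system/vncserver@:2.service    # 将其中的 <USER> 替换为 alice
    # systemctl daemon-reload
    # systemctl enable vncserver@:2.service
    # systemctl start vncserver@:2.service

别忘了切换到 alice 用户再执行一次 vncpasswd,为她设置 VNC 密码。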
|
||||
|
||||
在建立连接之前,我们需要知道服务器的 IP 地址和端口。IP 地址是一台计算机在网络中的独特的识别号码。我的服务器的 IP 地址是96.126.120.92,VNC 用户端口是1。执行下面的命令可以获得服务器的公网 IP 地址。
|
||||
|
||||
# curl -s checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
|
||||
|
||||
### 总结 ###
|
||||
|
||||
好了,现在我们已经在运行 CentOS 7 / RHEL 7 (Red Hat Enterprises Linux)的服务器上安装配置好了 VNC 服务器。VNC 是自由开源软件中能实现远程控制服务器的最简单的一种工具,也是 Teamviewer Remote Access 的一款优秀的替代品。VNC 允许一个安装了 VNC 客户端的用户远程控制一台安装了 VNC 服务的服务器。下面还有一些经常使用的相关命令。好好玩!
|
||||
|
||||
#### 其他命令: ####
|
||||
|
||||
- 关闭 VNC 服务。
|
||||
|
||||
# systemctl stop vncserver@:1.service
|
||||
|
||||
- 禁止 VNC 服务开机启动。
|
||||
|
||||
# systemctl disable vncserver@:1.service
|
||||
|
||||
- 关闭防火墙。
|
||||
|
||||
# systemctl stop firewalld.service
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-configure-vnc-server-centos-7-0/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[boredivan](https://github.com/boredivan)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:http://en.wikipedia.org/wiki/Virtual_Network_Computing
|
||||
[2]:http://en.wikipedia.org/wiki/X_Window_System
|
||||
[3]:http://www.tightvnc.com/
|
||||
[4]:https://www.realvnc.com/
|
@ -1,41 +0,0 @@
|
||||
Nmap--不只是邪恶.
|
||||
================================================================================
|
||||
如果SSH是系统管理员世界的"瑞士军刀"的话,那么Nmap就是一盒炸药. 炸药很容易被误用然后将你的双脚崩掉,但是也是一个很有威力的工具,能够胜任一些看似无法完成的任务.
|
||||
|
||||
大多数人想到Nmap时,他们想到的是扫描服务器,查找开放端口来实施工具. 然而,在过去的这些年中,同样的超能力在当你管理服务器或计算机遇到问题时变得难以置信的有用.无论是你试图找出在你的网络上有哪些类型的服务器使用了指定的IP地址,或者尝试锁定一个新的NAS设备,以及扫描网络等,都会非常有用.
|
||||
|
||||
图1显示了我的QNAP NAS的网络扫描.我使用该单元的唯一目的是为了NFS和SMB文件共享,但是你可以看到,它包含了一大堆大开大敞的端口.如果没有Nmap,很难发现机器到底在运行着什么玩意儿.
|
||||
|
||||

|
||||
|
||||
### 图1 网络扫描 ###
|
||||
|
||||
另外一个无法想象的用处是用它来扫描一个网络.你甚至根本不需要root的访问权限,而且你也可以非常容易地来指定你想要扫描的网络块,例如,输入:
|
||||
|
||||
nmap 192.168.1.0/24
|
||||
|
||||
上述命令会扫描我局部网络中全部的254个可用的IP地址,让我可以知道那个使可以Ping的,以及那些端口时开放的.如果你刚刚插入一片新的硬件,但是不知道它通过DHCP获取的IP地址,那么此时Nmap就是无价之宝. 例如,上述命令在我的网络中揭示了这个问题.
|
||||
|
||||
Nmap scan report for TIVO-8480001903CCDDB.brainofshawn.com (192.168.1.220)
|
||||
Host is up (0.0083s latency).
|
||||
Not shown: 995 filtered ports
|
||||
PORT STATE SERVICE
|
||||
80/tcp open http
|
||||
443/tcp open https
|
||||
2190/tcp open tivoconnect
|
||||
2191/tcp open tvbus
|
||||
9080/tcp closed glrpc
|
||||
|
||||
它不仅显示了新的Tivo单元,而且还告诉我那些端口是开放的. 由于它的可靠性,可用性以及黑色边框帽子的能力,Nmap获得了本月的 <<编辑推荐>>奖. 这不是一个新的程序,但是如果你是一个linux用户的话,你应该玩玩它.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxjournal.com/content/nmap%E2%80%94not-just-evil
|
||||
|
||||
作者:[Shawn Powers][a]
|
||||
译者:[theo-l](https://github.com/theo-l)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxjournal.com/users/shawn-powers
|
@ -0,0 +1,103 @@
|
||||
如何在 Linux 中使用 Alpine 在命令行里获取 Gmail
|
||||
================================================================================
|
||||
假如你是一个命令行爱好者,我很确信你将张开双臂欢迎任何可以使你使用这个强大的工作环境来执行哪怕一项日常任务的工具,例如从 [安排日程][1] 、 [管理财务][2] 到 获取 [Facebook][3] 、[Twitter][4]等任务。
|
||||
|
||||
在这个帖子中,我将为你展示 Linux 命令行的另一个漂亮干练的使用案例:**获取 Google 的 Gmail 服务**,为此,我们将使用 Alpine,一个基于 ncurses 的多功能命令行邮件客户端(不要和 Alpine Linux 搞混淆)。我们将在 Alpine 中配置 Gmail 的 IMAP 和 SMTP 设定来通过 Google 的邮件服务器在终端环境中收取和发送邮件。在这个教程的最后,你将意识到只需几步就可以在 Alpine 中使用其他的邮件服务。
|
||||
|
||||
诚然,已有许多卓越的基于 GUI 的邮件客户端存在,例如 Thunderbird、Evolution,甚至是 Web 界面,那么为什么还有人对使用命令行的邮件客户端来收取 Gmail 这样的事感兴趣呢?答案很简单。假如你需要快速地处理好事情并想避免占用不必要的系统资源;或者你正工作在一台最小化安装、没有安装 X 窗口系统的服务器上;又或者是 X 服务在你的桌面上崩溃了,而你需要在解决这个问题之前急切地发送一些邮件。在上述所有的情况下,Alpine 都可以派上用场并在任何时间满足你的需求。
|
||||
|
||||
除了简单的编辑、发送和接收文本邮件等功能外,Alpine 还可以对邮件信息进行加密、解密和数字签名,以及与 TLS(传输层安全协议)无缝集成。
|
||||
|
||||
### 在 Linux 上安装 Alpine ###
|
||||
|
||||
在基于 Red Hat 的发行版本上,可以像下面那样来安装 Alpine。需要注意的是,在 RHEL 或 CentOS 上,你需要首先启用 [EPEL 软件仓库][5]。
|
||||
|
||||
# yum install alpine
|
||||
|
||||
在 Debian,Ubuntu 或它们的衍生发行版本上,你可以这样做:
|
||||
|
||||
# aptitude install alpine
|
||||
|
||||
在安装完成后,你可以运行下面的命令来启动该邮件客户端:
|
||||
|
||||
# alpine
|
||||
|
||||
在你第一次启用 Alpine 时,它将在当前用户的家目录下创建一个邮件文件夹(`~/mail`),并显现出主界面,正如下面的截屏所显示的那样:
|
||||
|
||||
注:youtube视频,发布的时候做个链接吧(注:这里我不知道该如何操作,不过我已经下载了该视频,如有需要,可以发送)
|
||||
<iframe width="615" height="346" frameborder="0" allowfullscreen="" src="http://www.youtube.com/embed/kuKiv3uze4U?feature=oembed"></iframe>
|
||||
|
||||
它的用户界面有下列几个模块:
|
||||
|
||||

|
||||
|
||||
请随意地浏览、操作,来熟悉 Alpine。你随时可以通过敲 'Q' 回到命令提示符界面。请注意,所有的字符界面下方都有与操作相关的帮助。
|
||||
|
||||
在进一步深入之前,我们将为 Alpine 创建一个默认的配置文件。为此,请关闭 Alpine,然后在命令行中执行下面的命令:
|
||||
|
||||
# alpine -conf > /etc/pine.conf
|
||||
|
||||
### 配置 Alpine 来使用 Gmail 账号 ###
|
||||
|
||||
一旦你安装了 Alpine 并至少花费了几分钟的时间来熟悉它的界面和菜单,下面便是实际配置它来使用一个已有的 Gmail 账户的时候了。
|
||||
|
||||
在 Alpine 中执行下面的步骤之前,记得要通过你的 Web 邮件界面,在你的 Gmail 设定里启用 IMAP 协议。一旦在你的 Gmail 账户中 IMAP 被启用,执行下面的步骤来在 Alpine 中启用阅读 Gmail 信息的功能。
|
||||
|
||||
首先,启动 Alpine。
|
||||
|
||||
按 'S' 来进行设置,再按 'L' 选择 `collectionLists` 选项来定义不同的文件夹类别以帮助你更好地组织你的邮件:
|
||||
|
||||

|
||||
|
||||
按 'A' 来新建一个文件夹并填写必要的信息:
|
||||
|
||||
- **昵称**: 填写任何你想写的名字;
|
||||
- **服务器**: imap.gmail.com/ssl/user=yourgmailusername@gmail.com
|
||||
|
||||
你可以将 `Path` 和 `View` 留白不填。
|
||||
|
||||
然后按 `Ctrl+X` 并在有提示时输入你的 Gmail 密码:
|
||||
|
||||

|
||||
|
||||
假如一切如预期一样进展顺利,就会出现一个以你先前填写的昵称来命名的新文件夹。你应该可以在这里找到你的 Gmail 信箱:
|
||||
|
||||

|
||||
|
||||
为了验证,你可以比较在 Alpine 中显示的 "Gmail Sent" 信箱和在 Web 界面下的信箱:
|
||||
|
||||

|
||||
|
||||
默认情况下,每隔 150 秒,它将自动检查新邮件或提示,你可以在文件 `/etc/pine.conf`中改变这个值,同时你还可以修改许多其他设定。这个配置文件拥有详细且清晰的注释。例如,为了将检查新邮件的时间间隔设定为 10 秒,你需要这样设定:
|
||||
|
||||
# The approximate number of seconds between checks for new mail
|
||||
mail-check-interval=10
|
||||
|
||||
最后,我们需要配置一个 SMTP 服务器来通过 Alpine 发送邮件信息。回到先前解释过的 Alpine 的设置界面,然后按 'C' 来设定一个 Google 的 SMTP 服务器地址,你需要像下面这样编辑 `SMTP Server`(为了发送) 这一行内容:
|
||||
|
||||
smtp.gmail.com:587/tls/user=yourgmailusername@gmail.com
|
||||
|
||||
当你按 'E' 离开设定界面时,将会提醒你保存更改。一旦你保存了更改,马上你就可以通过 Alpine 来发送邮件了!为此,来到主菜单中的 `Compose` 选项,接着开始从命令行中操作你的 Gmail 吧。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在这个帖子里,我们讨论了在终端环境中如何通过一个名为 Alpine 的轻量且强大的命令行邮件客户端来收发 Gmail。Alpine 是一个发布在 Apache Software License 2.0 协议下的自由软件,该协议与 GPL 协议相兼容。Alpine 引以为豪的是:它既对新手友好,又能让经验丰富的系统管理员觉得它足够强大。我希望在你阅读完这篇文章后,你能体会到我最后一个论断是多么正确。
|
||||
|
||||
非常欢迎使用下面的输入框来留下你的评论或问题。我期待着你们的反馈!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/gmail-command-line-linux-alpine.html
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/gabriel
|
||||
[1]:http://xmodulo.com/schedule-appointments-todo-tasks-linux-terminal.html
|
||||
[2]:http://xmodulo.com/manage-personal-expenses-command-line.html
|
||||
[3]:http://xmodulo.com/access-facebook-command-line-linux.html
|
||||
[4]:http://xmodulo.com/access-twitter-command-line-linux.html
|
||||
[5]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
|
@ -0,0 +1,150 @@
|
||||
走进Linux之systemd启动过程
|
||||
================================================================================
|
||||
Linux系统的启动方式有点复杂,而且总是有需要优化的地方。传统的Linux系统启动过程主要由著名的init进程(也被称为SysV init启动系统)处理,而基于init的启动系统被认为效率不足。systemd是Linux系统的另一种启动方式,宣称弥补了以[传统Linux SysV init][1]为基础的系统的缺点。在这里我们将着重讨论systemd的特性和争议,但是为了更好地理解它,也会看一下传统的以SysV init为基础的Linux启动过程是什么样的。友情提醒一下,systemd仍然处在测试阶段,而未来发布的Linux操作系统也正准备用systemd启动管理程序替代当前的启动过程。
|
||||
|
||||
### 理解Linux启动过程 ###
|
||||
|
||||
在我们打开Linux电脑的电源后第一个启动的进程就是init。分配给init进程的PID是1。它是系统其他所有进程的父进程。当一台Linux电脑启动后,处理器会先在系统存储中查找BIOS,之后BIOS会测试系统资源然后找到第一个引导设备,通常设置为硬盘,然后会查找硬盘的主引导记录(MBR),然后加载到内存中并把控制权交给它,以后的启动过程就由MBR控制。
|
||||
|
||||
主引导记录会初始化引导程序(Linux上有两个著名的引导程序,GRUB和LILO,80%的Linux系统在用GRUB引导程序),这个时候GRUB或LILO会加载内核模块。内核会马上查找/sbin下的init进程并执行它。从这里开始init成为了Linux系统的父进程。init读取的第一个文件是/etc/inittab,通过它init会确定我们Linux操作系统的运行级别。它会从文件/etc/fstab里查找分区表信息然后做相应的挂载。然后init会启动/etc/init.d里指定的默认启动级别的所有服务/脚本。所有服务在这里通过init一个一个被初始化。在这个过程里,init每次只启动一个服务,所有服务/守护进程都在后台执行并由init来管理。
|
||||
|
||||
关机过程差不多是相反的过程,首先init停止所有服务,最后阶段会卸载文件系统。
|
||||
|
||||
以上提到的启动过程有一些不足的地方。而用一种更好的方式来替代传统init的需求已经存在很长时间了。也产生了许多替代方案。其中比较著名的有Upstart,Epoch,Muda和Systemd。而Systemd获得最多关注并被认为是目前最佳的方案。
|
||||
|
||||
### 理解Systemd ###
|
||||
|
||||
开发Systemd的主要目的就是减少系统引导时间和计算开销。Systemd(系统管理守护进程),最开始以GNU GPL协议授权开发,现在已转为使用GNU LGPL协议,它是如今讨论最热烈的引导和服务管理程序。如果你的Linux系统配置为使用Systemd引导程序,那么启动过程将交给systemd处理,而不再是传统的SysV init。Systemd的一个核心功能是它同时兼容SysV init的启动脚本。
|
||||
|
||||
Systemd引入了并行启动的概念,它会为每个需要启动的守护进程建立一个管道套接字,这些套接字对于使用它们的进程来说是抽象的,这样它们可以允许不同守护进程之间进行交互。Systemd会创建新进程并为每个进程分配一个控制组。处于不同控制组的进程之间可以通过内核来互相通信。[systemd处理开机启动进程][2]的方式非常漂亮,和传统基于init的系统比起来优化了太多。让我们看下Systemd的一些核心功能。
|
||||
|
||||
- 和init比起来引导过程简化了很多
|
||||
- Systemd支持并发引导过程从而可以更快启动
|
||||
- 通过控制组来追踪进程,而不是PID
|
||||
- 优化了处理引导过程和服务之间依赖的方式
|
||||
- 支持系统快照和恢复
|
||||
- 监控已启动的服务;也支持重启已崩溃服务
|
||||
- 包含了systemd-login模块用于控制用户登录
|
||||
- 支持加载和卸载组件
|
||||
- 较低的内存占用以及任务调度能力
|
||||
- 记录事件的Journald模块和记录系统日志的syslogd模块
|
||||
|
||||
Systemd同时也清晰地处理了系统关机过程。它在/usr/lib/systemd/目录下有三个脚本,分别叫systemd-halt.service,systemd-poweroff.service,systemd-reboot.service。这几个脚本会在用户选择关机,重启或待机时执行。在接收到关机事件时,systemd首先卸载所有文件系统并停止所有内存交换设备,断开存储设备,之后停止所有剩下的进程。
|
||||
|
||||

|
||||
|
||||
### Systemd结构概览 ###
|
||||
|
||||
让我们看一下Linux系统在使用systemd作为引导程序时的开机启动过程的结构性细节。为了简单,我们将在下面按步骤列出来这个过程:
|
||||
|
||||
**1.** 当你打开电源后电脑所做的第一件事情就是BIOS初始化。BIOS会读取引导设备设定,定位并传递系统控制权给MBR(假设硬盘是第一引导设备)。
|
||||
|
||||
**2.** MBR从Grub或LILO引导程序读取相关信息并初始化内核。接下来将由Grub或LILO继续引导系统。如果你在grub配置文件里指定了systemd作为引导管理程序,之后的引导过程将由systemd完成。Systemd使用“target”来处理引导和服务管理过程。这些systemd里的“target”文件被用于分组不同的引导单元以及启动同步进程。
|
||||
|
||||
**3.** systemd执行的第一个目标是**default.target**。但实际上default.target是指向**graphical.target**的软链接。Linux里的软链接用起来和Windows下的快捷方式一样。文件Graphical.target的实际位置是/usr/lib/systemd/system/graphical.target。在下面的截图里显示了graphical.target文件的内容。
|
||||
|
||||

|
||||
|
||||
**4.** 在这个阶段,会启动**multi-user.target**而这个target将自己的子单元放在目录“/etc/systemd/system/multi-user.target.wants”里。这个target为多用户支持设定系统环境。非root用户会在这个阶段的引导过程中启用。防火墙相关的服务也会在这个阶段启动。
|
||||
|
||||

|
||||
|
||||
"multi-user.target"会将控制权交给另一层“**basic.target**”。
|
||||
|
||||

|
||||
|
||||
**5.** "basic.target"单元用于启动普通服务特别是图形管理服务。它通过/etc/systemd/system/basic.target.wants目录来决定哪些服务会被启动,basic.target之后将控制权交给**sysinit.target**.
|
||||
|
||||

|
||||
|
||||
**6.** "sysinit.target"会启动重要的系统服务例如系统挂载,内存交换空间和设备,内核补充选项等等。sysinit.target在启动过程中会传递给**local-fs.target**。这个target单元的内容如下面截图里所展示。
|
||||
|
||||

|
||||
|
||||
**7.** local-fs.target,这个target单元不会启动用户相关的服务,它只处理底层核心服务。这个target会根据/etc/fstab和/etc/inittab来执行相关操作。
|
||||
|
||||
### 系统引导性能分析 ###
|
||||
|
||||
Systemd提供了工具用于识别和定位引导相关的问题或性能影响。**Systemd-analyze**是一个内建的命令,可以用来检测引导过程。你可以找出在启动过程中出错的单元,然后跟踪并改正引导组件的问题。在下面列出一些常用的systemd-analyze命令。
|
||||
|
||||
**systemd-analyze time** 用于显示内核和普通用户空间启动时所花的时间。
|
||||
|
||||
$ systemd-analyze time
|
||||
|
||||
Startup finished in 1440ms (kernel) + 3444ms (userspace)
|
||||
|
||||
**systemd-analyze blame** 会列出所有正在运行的单元,按从初始化开始到当前所花的时间排序,通过这种方式你就知道哪些服务在引导过程中要花较长时间来启动。
|
||||
|
||||
$ systemd-analyze blame
|
||||
|
||||
2001ms mysqld.service
|
||||
234ms httpd.service
|
||||
191ms vmms.service
|
||||
|
||||
**systemd-analyze verify** 显示在所有系统单元中是否有语法错误。**systemd-analyze plot** 可以用来把整个引导过程写入一个SVG格式文件里。整个引导过程非常长不方便阅读,所以通过这个命令我们可以把输出写入一个文件,之后再查看和分析。下面这个命令就是做这个。
|
||||
|
||||
systemd-analyze plot > boot.svg
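另一个实用的子命令是 **systemd-analyze critical-chain**(较新版本的 systemd 提供),它会按耗时打印出启动过程中的关键单元链,输出大致如下(数值仅为示意):

    $ systemd-analyze critical-chain

    graphical.target @4.884s
    └─multi-user.target @4.884s
      └─mysqld.service @2.881s +2.001s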
|
||||
|
||||
### Systemd的争议 ###
|
||||
|
||||
Systemd并没有幸运地获得所有人的青睐,一些专家和管理员对于它的工作方式和开发有不同意见。根据对于Systemd的批评,它不是“类Unix”方式因为它试着替换一些系统服务。一些专家也不喜欢使用二进制配置文件的想法。据说编辑systemd配置非常困难而且没有一个可用的图形工具。
|
||||
|
||||
### 在Ubuntu 14.04和12.04上测试Systemd ###
|
||||
|
||||
本来,Ubuntu决定从Ubuntu 16.04 LTS开始使用Systemd来替换当前的引导过程。Ubuntu 16.04预计在2016年4月发布,但是考虑到Systemd的流行和需求,即将发布的**Ubuntu 15.04**将采用它作为默认引导程序。好消息是Ubuntu 14.04 Trusty Tahr和Ubuntu 12.04 Precise Pangolin的用户可以在他们的机器上测试Systemd。测试过程并不复杂,你所要做的只是把相关的PPA包含到系统中,更新仓库并升级系统。
|
||||
|
||||
**声明**:请注意它仍然处于Ubuntu的测试和开发阶段。升级测试包可能会带来一些未知错误,最坏的情况下有可能损坏你的系统配置。请确保在尝试升级前已经备份好重要数据。
|
||||
|
||||
在终端里运行下面的命令来添加PPA到你的Ubuntu系统里:
|
||||
|
||||
sudo add-apt-repository ppa:pitti/systemd
|
||||
|
||||
你将会看到警告信息因为我们尝试使用临时/测试PPA,而它们是不建议用于实际工作机器上的。
|
||||
|
||||

|
||||
|
||||
然后运行下面的命令更新APT包管理仓库。
|
||||
|
||||
sudo apt-get update
|
||||
|
||||

|
||||
|
||||
运行下面的命令升级系统。
|
||||
|
||||
sudo apt-get dist-upgrade
|
||||
|
||||

|
||||
|
||||
就这些,你应该已经可以在你的Ubuntu系统里看到Systemd配置文件了,打开/lib/systemd/目录可以看到这些文件。
|
||||
|
||||
好吧,现在让我们编辑一下grub配置文件指定systemd作为默认引导程序。可以使用Gedit文字编辑器编辑grub配置文件。
|
||||
|
||||
sudo gedit /etc/default/grub
|
||||
|
||||

|
||||
|
||||
在文件里修改GRUB_CMDLINE_LINUX_DEFAULT项,设定它的参数为:“**init=/lib/systemd/systemd**”
|
||||
|
||||

|
||||
|
||||
就这样,你的Ubuntu系统已经不再使用传统的引导程序了,改为使用Systemd管理器。重启你的机器然后查看systemd引导过程吧。
|
||||
|
||||

|
||||
|
||||
### 结论 ###
|
||||
|
||||
Systemd毫无疑问为改进Linux引导过程前进了一大步;它包含了一套漂亮的库和守护进程,它们配合工作来优化系统引导和关闭过程。许多Linux发行版正准备将它作为自己的正式引导程序。在以后的Linux发行版中,我们有望看到systemd成为默认的引导管理程序。但是另一方面,为了获得成功并广泛应用,systemd仍需要认真处理批评意见。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/systemd-boot-process/
|
||||
|
||||
作者:[Aun Raza][a]
|
||||
译者:[zpl1025](https://github.com/zpl1025)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunrz/
|
||||
[1]:http://linoxide.com/booting/boot-process-of-linux-in-detail/
|
||||
[2]:http://0pointer.de/blog/projects/self-documented-boot.html
|
@ -0,0 +1,266 @@
|
||||
11个Linux终端命令,让你的世界摇滚起来
|
||||
================================================================================
|
||||
我已经用了十年的Linux了。通过今天这篇文章,我将向大家展示一系列Linux命令、工具和技巧,我多希望这些东西在我一开始用Linux时就有人教给我,而不是让我在成长道路上自己跌跌撞撞地摸索。
|
||||
|
||||

|
||||
Linux的快捷键。
|
||||
|
||||
### 1. 命令行日常系快捷键 ###
|
||||
|
||||
如下的快捷方式非常有用,能够极大的提升你的工作效率:
|
||||
|
||||
- CTRL + U - 剪切光标前的内容
|
||||
- CTRL + K - 剪切光标至行末的内容
|
||||
- CTRL + Y - 粘贴
|
||||
- CTRL + E - 移动光标到行末
|
||||
- CTRL + A - 移动光标到行首
|
||||
- ALT + F - 跳向下一个空格
|
||||
- ALT + B - 跳回上一个空格
|
||||
- ALT + Backspace - 删除前一个字
|
||||
- CTRL + W - 剪切光标后一个字
|
||||
- Shift + Insert - 向终端内粘贴文本
|
||||
|
||||
那么为了让上述内容更易理解,来看下面的这行命令。
|
||||
|
||||
sudo apt-get intall programname
|
||||
|
||||
如你所见,命令中存在拼写错误,为了正常执行需要把“intall”替换成“install”。
|
||||
|
||||
想象现在光标正在行末,我们有很多方法可以将它移回到单词 intall 处并修正它。
|
||||
|
||||
我可以按两次ALT+B这样光标就会在如下的位置(这里用^代替光标的位置)。
|
||||
|
||||
sudo apt-get^intall programname
|
||||
|
||||
现在你可以按两下右方向键,再把“s”插入进去,这样 intall 就被改成 install 了。
|
||||
|
||||
如果你想将浏览器中的文本复制到终端,可以使用快捷键"shift + insert"。
|
||||
|
||||

|
||||
|
||||
### 2. SUDO !! ###
|
||||
|
||||
如果你还不知道这个命令,那我觉得你应该好好感谢我,因为不知道它的话,每次输完一长串命令后看到“permission denied”时,你一定会懊恼不堪。
|
||||
|
||||
- sudo !!
|
||||
|
||||
如何使用sudo !!?很简单。试想你刚输入了如下命令:
|
||||
|
||||
apt-get install ranger
|
||||
|
||||
除非你登录的是拥有足够高权限的账户,否则一定会出现“Permission denied”。
|
||||
|
||||
sudo !!就会用sudo的形式运行上一条命令。所以上一条命令可以看成是这样:
|
||||
|
||||
sudo apt-get install ranger
|
||||
|
||||
如果你不知道什么是sudo[戳这里][1]。
|
||||
|
||||

|
||||
暂停终端运行的应用程序。
|
||||
|
||||
### 3. 暂停并在后台运行命令 ###
|
||||
|
||||
我曾经写过一篇如何在终端后台运行命令的指南。
|
||||
|
||||
- CTRL + Z - 暂停应用程序
|
||||
- fg - 重新将程序唤到前台
|
||||
|
||||
如何使用这个技巧呢?
|
||||
|
||||
试想你正用nano编辑一个文件:
|
||||
|
||||
sudo nano abc.txt
|
||||
|
||||
文件编辑到一半你意识到你需要马上在终端输入些命令,但是nano在前台运行让你不能输入。
|
||||
|
||||
你可能觉得唯一的方法就是保存文件,退出 nano,运行命令以后再重新打开 nano。
|
||||
|
||||
其实你只要按 CTRL + Z,前台的命令就会暂停,画面就切回到命令行了。然后你就能运行你想要运行的命令,等命令运行完后,在终端窗口输入“fg”就可以回到先前暂停的任务。
|
||||
|
||||
有一个非常有趣的尝试:用 nano 打开文件,输入一些东西然后暂停会话;再用 nano 打开另一个文件,输入一些什么后再暂停会话。此时输入“fg”,你将回到第二个用 nano 打开的文件;只有退出 nano 再输入“fg”,你才会回到第一个用 nano 打开的文件。
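如果你同时暂停了多个任务,可以用 jobs 命令把它们列出来,再用“fg %编号”唤回指定的那一个(下面的输出只是示意):

    $ jobs
    [1]-  Stopped    sudo nano abc.txt
    [2]+  Stopped    sudo nano def.txt
    $ fg %1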
|
||||
|
||||

|
||||
nohup.
|
||||
|
||||
### 4. 使用nohup在登出SSH会话后仍运行命令 ###
|
||||
|
||||
当你用 ssh 登录到别的机器上时,[nohup 命令][2]真的非常有用。
|
||||
|
||||
那么怎么使用nohup呢?
|
||||
|
||||
想象一下你使用ssh远程登录到另一台电脑上,你运行了一条非常耗时的命令然后退出了ssh会话,不过命令仍在执行。而nohup可以将这一场景变成现实。
|
||||
|
||||
举个例子,为了测试,我用[树莓派][3]来下载发行版镜像。
|
||||
|
||||
我绝对不会给我的树莓派外接显示器、键盘或鼠标。
|
||||
|
||||
一般我总是用[SSH][4]从笔记本电脑连接到树莓派。如果我在不用 nohup 的情况下使用树莓派下载大型文件,那我就必须等到下载完成后,才能登出 ssh 会话、关掉笔记本。可如果是这样,那我为什么还要用树莓派来下载文件呢?
|
||||
|
||||
使用nohup的方法也很简单,只需如下例中在nohup后输入要执行的命令即可:
|
||||
|
||||
nohup wget http://mirror.is.co.za/mirrors/linuxmint.com/iso//stable/17.1/linuxmint-17.1-cinnamon-64bit.iso &
|
||||
|
||||

|
||||
At管理任务日程
|
||||
|
||||
### 5. ‘在’特定的时间运行Linux命令 ###
|
||||
|
||||
‘nohup’命令在你用 SSH 连接到服务器,并希望登出后任务仍然继续执行的时候十分有用。
|
||||
|
||||
想一下如果你需要在特定的时间执行同一个命令,这种情况该怎么办呢?
|
||||
|
||||
命令‘at’就能妥善解决这一情况。以下是‘at’使用示例。
|
||||
|
||||
at 10:38 PM Fri
|
||||
at> cowsay 'hello'
|
||||
at> CTRL + D
|
||||
|
||||
上面的命令能在周五晚上10点38分运行程序 [cowsay][5]。
|
||||
|
||||
使用的语法就是‘at’后追加日期时间。
|
||||
|
||||
当at>提示符出现后就可以输入你想在那个时间运行的命令了。
|
||||
|
||||
按 CTRL + D 返回终端。
|
||||
|
||||
还有许多日期和时间的格式可用,值得你好好翻一翻‘at’的 man 手册,来找到更多的使用方式。
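顺带一提,你还可以用 atq 列出已排定的任务,用 atrm 加任务编号来删除任务(输出仅为示意):

    $ atq
    1       Fri Feb 27 22:38:00 2015 a gary
    $ atrm 1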
|
||||
|
||||

|
||||
|
||||
### 6. Man手册 ###
|
||||
|
||||
Man手册会为你列出命令和参数的使用大纲,教你如何使用它们。
|
||||
|
||||
Man手册看起来沉闷呆板。(我思忖它们也不是被设计来娱乐我们的)。
|
||||
|
||||
不过这不代表你不能做些什么来让它们变得性感一点。
|
||||
|
||||
export PAGER=most
|
||||
|
||||
你需要先安装‘most’,它会使你的 man 手册的色彩更加绚丽。
|
||||
|
||||
你可以用以下命令给 man 手册设定指定的行宽:
|
||||
|
||||
export MANWIDTH=80
|
||||
|
||||
最后,如果你有浏览器,你可以使用-H在默认浏览器中打开任意的man页。
|
||||
|
||||
man -H <command>
|
||||
|
||||
注意啦,以上的命令只有在你将默认浏览器设置到环境变量 $BROWSER 中之后才有效果哟。
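如果你还没有设置过 $BROWSER,可以先这样设置再试(这里以 firefox 为例,仅作示意):

    export BROWSER=firefox
    man -H ls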
|
||||
|
||||

|
||||
使用htop查看进程。
|
||||
|
||||
### 7. 使用htop查看和管理进程 ###
|
||||
|
||||
你用哪个命令找出电脑上正在运行的进程的呢?我敢打赌是‘[ps][6]’并在其后加不同的参数来得到你所想要的不同输出。
|
||||
|
||||
安装‘[htop][7]’吧!绝对让你相见恨晚。
|
||||
|
||||
htop在终端中将进程以列表的方式呈现,有点类似于Windows中的任务管理器。
|
||||
|
||||
你可以使用功能键的组合来切换排列的方式和展示出来的项。你也可以在htop中直接杀死进程。
|
||||
|
||||
在终端中简单的输入htop即可运行。
|
||||
|
||||
htop
|
||||
|
||||

|
||||
命令行文件管理 - Ranger.
|
||||
|
||||
### 8. 使用ranger浏览文件系统 ###
|
||||
|
||||
如果说htop是命令行进程控制的好帮手那么[ranger][8]就是命令行浏览文件系统的好帮手。
|
||||
|
||||
你在用之前可能需要先安装它,不过一旦安装了,就可以在命令行输入以下命令启动它:
|
||||
|
||||
ranger
|
||||
|
||||
在命令行窗口中,ranger 和一些别的文件管理器很像,但它是左右结构而不是上下结构的:按左方向键会返回上一级目录,按右方向键则会进入下一级目录。
|
||||
|
||||
在使用前ranger的man手册还是值得一读的,这样你就可以用快捷键操作ranger了。
|
||||
|
||||

|
||||
Linux取消关机。
|
||||
|
||||
### 9. 取消关机 ###
|
||||
|
||||
无论是在命令行还是图形用户界面下[关机][9]之后,你都可能发现自己其实并不是真的想要关机。这时可以用下面的命令取消关机:
|
||||
|
||||
shutdown -c
|
||||
|
||||
需要注意的是,如果关机已经开始则有可能来不及停止关机。
|
||||
|
||||
以下是另一个可以尝试的命令:
|
||||
|
||||
- [pkill][10] shutdown
|
||||
|
||||

|
||||
使用XKill杀死挂起进程。
|
||||
|
||||
### 10. 杀死挂起进程的简单方法 ###
|
||||
|
||||
想象一下,你正在运行的应用程序不明原因的僵死了。
|
||||
|
||||
你可以使用‘ps -ef’找到该进程然后杀掉它,或者使用‘htop’。
|
||||
|
||||
有一个更快、更容易的命令叫做[xkill][11]。
|
||||
|
||||
简单的在终端中输入以下命令并在窗口中点击你想杀死的应用程序。
|
||||
|
||||
xkill
|
||||
|
||||
那如果整个系统挂掉了怎么办呢?
|
||||
|
||||
按住键盘上的‘alt’和‘sysrq’不放,然后依次输入:
|
||||
|
||||
- [REISUB][12]
|
||||
|
||||
这样不按电源键你的计算机也能重启了。
|
||||
|
||||

|
||||
youtube-dl.
|
||||
|
||||
### 11. 下载Youtube视频 ###
|
||||
|
||||
一般来说我们大多数人都喜欢看Youtube的视频,也会通过钟爱的播放器播放Youtube的流。
|
||||
|
||||
如果你需要离线一段时间(比如:从苏格兰南部坐飞机到英格兰南部旅游的这段时间)那么你可能希望下载一些视频到存储设备中,到闲暇时观看。
|
||||
|
||||
你所要做的就是从包管理器中安装youtube-dl。
|
||||
|
||||
你可以用以下命令使用youtube-dl:
|
||||
|
||||
youtube-dl url-to-video
|
||||
|
||||
你能在 YouTube 视频页面点击分享链接得到视频的 url。只要简单地复制链接,再粘贴到命令行就行了(要用 shift + insert 快捷键哟)。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
希望你在这篇文章中得到帮助,并且在这11条中找到至少一条让你惊叹“原来可以这样”的技巧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linux.about.com/od/commands/tp/11-Linux-Terminal-Commands-That-Will-Rock-Your-World.htm
|
||||
|
||||
作者:[Gary Newell][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linux.about.com/bio/Gary-Newell-132058.htm
|
||||
[1]:http://linux.about.com/cs/linux101/g/sudo.htm
|
||||
[2]:http://linux.about.com/library/cmd/blcmdl1_nohup.htm
|
||||
[3]:http://linux.about.com/od/mobiledevicesother/a/Raspberry-Pi-Computer-Running-Linux.htm
|
||||
[4]:http://linux.about.com/od/commands/l/blcmdl1_ssh.htm
|
||||
[5]:http://linux.about.com/cs/linux101/g/cowsay.htm
|
||||
[6]:http://linux.about.com/od/commands/l/blcmdl1_ps.htm
|
||||
[7]:http://www.linux.com/community/blogs/133-general-linux/745323-5-commands-to-check-memory-usage-on-linux
|
||||
[8]:http://ranger.nongnu.org/
|
||||
[9]:http://linux.about.com/od/commands/l/blcmdl8_shutdow.htm
|
||||
[10]:http://linux.about.com/library/cmd/blcmdl1_pkill.htm
|
||||
[11]:http://linux.about.com/od/funnymanpages/a/funman_xkill.htm
|
||||
[12]:http://blog.kember.net/articles/reisub-the-gentle-linux-restart/
|
@ -0,0 +1,99 @@
|
||||
如何交互式地创建一个Docker容器
|
||||
===============================================================================
|
||||
大家好,今天我们来学习如何使用一个 docker 镜像交互式地创建一个 Docker 容器。一旦我们从某个镜像启动一个 Docker 进程,Docker 就会获取该镜像及其父镜像,并重复这个过程,直到到达基础镜像。然后联合文件系统(Union File System)会在最顶层添加一个读写层。这个读写层,加上它的父镜像信息和一些其他信息(如唯一的 ID、网络配置和资源限制),就构成了一个 **容器(Container)**。容器是有状态的,可以从 **running** 状态切换到 **exited** 状态:处于 **running** 状态的容器包含了一棵在 CPU 上运行的进程树,独立于主机上运行的其他进程;而 **exited** 则只是文件系统的一种状态,并保留着它的退出值。你可以基于它来启动、停止和重启一个容器。
|
||||
|
||||
Docker技术为IT界带来了巨大的改变,它使得云服务可以用来共享应用和自动化工作流程,使得应用可以从组件快速组合,也消除了开发、质量保证(QA)和生产环境间的摩擦。在这篇文章中,我们将会建立CentOS环境,然后架设一个网站,运行在Apache网络服务器上。
|
||||
|
||||
这是一篇快速而简单的教程,讨论我们怎样使用交互式的 shell,以交互的方式创建一个容器。
|
||||
|
||||
### 1. 运行一个Docker实例 ###
|
||||
|
||||
Docker一开始会尝试从本地取得和运行所需的镜像,如果在本地主机上没有发现,它就会从[Docker公共注册中心][1]拉取。这里,我们将会在一个Docker容器里取得并创建一个fedora实例,并把一个bash shell附加到容器的tty上。
|
||||
|
||||
# docker run -i -t fedora bash
|
||||
|
||||

|
||||
|
||||
### 2.安装Apache网络服务器 ###
|
||||
|
||||
现在,在我们的Fedora基本镜像准备好后,我们将会开始交互式地安装Apache网络服务器,而不必为它创建Dockerfile。为了做到这点,我们需要在终端或者shell运行以下命令。
|
||||
|
||||
# yum update
|
||||
|
||||

|
||||
|
||||
# yum install httpd
|
||||
|
||||

|
||||
|
||||
# exit
|
||||
|
||||
### 3. 保存镜像 ###
|
||||
|
||||
现在,我们要去保存在Fedora实例里做的修改。要做到这个,我们首先需要知道实例的容器ID。而为了得到ID,我们又需要运行以下命令。
|
||||
|
||||
# docker ps -a
|
||||
|
||||

|
||||
|
||||
然后,我们会保存这些改变为一个新的镜像,请运行以下命令。
|
||||
|
||||
# docker commit c16378f943fe fedora-httpd
|
||||
|
||||

|
||||
|
||||
这里,修改已经通过容器ID保存成了一个新镜像,镜像名字叫fedora-httpd。为了确认新的镜像是否已经生成,我们将运行以下命令。
|
||||
|
||||
# docker images
|
||||
|
||||

|
||||
|
||||
### 4. 添加内容到新的镜像 ###
|
||||
|
||||
我们新的Fedora Apache镜像已经成功运行,现在我们想添加一些网页内容到Apache网络服务器上,也就是我们的网站,让网站开箱即可正常运行。为做到这点,我们需要创建一个新的Dockerfile,它会处理从复制网页内容到开放80端口的所有操作。为此,我们需要使用我们最喜欢的文本编辑器创建Dockerfile文件,像下面演示的一样。
|
||||
|
||||
# nano Dockerfile
|
||||
|
||||
现在,我们需要添加以下的命令行到文件中。
|
||||
|
||||
FROM fedora-httpd
|
||||
ADD mysite.tar /tmp/
|
||||
RUN mv /tmp/mysite/* /var/www/html
|
||||
EXPOSE 80
|
||||
ENTRYPOINT [ "/usr/sbin/httpd" ]
|
||||
CMD [ "-D", "FOREGROUND" ]
|
||||
|
||||

|
||||
|
||||
这里,运行Dockerfile时,mysite.tar里的网页内容会自动解压到/tmp/文件夹里。然后,整个网站文件会被转移到Apache网页根目录/var/www/html/,命令EXPOSE 80会打开80端口,这样网站就能正常访问。其次,入口点设置成了/usr/sbin/httpd,保证Apache服务器能够执行。
|
||||
|
||||
### 5. 建立并运行一个容器 ###
|
||||
|
||||
现在,为了把我们的网站放上去,我们要用刚刚创建的Dockerfile构建我们的镜像。为做到这点,我们需要运行以下命令。
|
||||
|
||||
# docker build -rm -t mysite .
|
||||
|
||||

|
||||
|
||||
我们的镜像构建好之后,就可以用下面的命令来运行容器了。
|
||||
|
||||
# docker run -d -P mysite
|
||||
|
||||

|
||||
|
||||
### 总结 ###
|
||||
|
||||
最后,我们已经成功地以交互式的方式建立了一个Docker容器。在本文的方法中,我们是直接通过交互的shell命令建立我们的容器和镜像的。这种方法在建立与配置镜像和容器方面十分简单快速。如果你有任何问题、建议和反馈,请在下方的评论框里写下来,以便我们可以提升或者更新我们的文章。谢谢!祝生活快乐 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/interactively-create-docker-container/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://registry.hub.docker.com/
|
@ -0,0 +1,35 @@
|
||||
Linux有问必答-- 如何在Ubuntu中升级Docker
|
||||
================================================================================
|
||||
> **提问**: 我使用了Ubuntu的标准仓库安装了Docker。然而,默认安装的Docker不能满足我另外一个依赖Docker程序的版本需要。我该如何在Ubuntu中升级到Docker的最新版本?
|
||||
|
||||
Docker第一次发布是在2013年,它快速地演变成了一个针对分布式程序的开发平台。为了满足业界的期望,Docker正在密集地开发并持续地带来新特性的升级,这样一来,Ubuntu发行版中的Docker版本可能很快就会过时。比如,Ubuntu 14.10 Utopic 中的Docker版本是1.2.0,然而最新的Docker版本是1.5.0。
|
||||
|
||||

|
||||
|
||||
对于那些想要跟随Docker的最新开发的人而言,Canonical为Docker维护了一个独立的PPA。使用这个PPA仓库,你可以很容易地在Ubuntu上升级到最新的Docker版本。
|
||||
|
||||
下面是如何设置Docker的PPA和升级Docker。
|
||||
|
||||
$ sudo add-apt-repository ppa:docker-maint/testing
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install docker.io
|
||||
|
||||
检查安装的Docker版本:
|
||||
|
||||
$ docker --version
|
||||
|
||||
----------
|
||||
|
||||
Docker version 1.5.0-dev, build a78ce5c
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/upgrade-docker-ubuntu.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|