How to Analyze Linux Logs
==============================================================================

![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-Copy@2x1.png)

There is a great deal of information stored in your logs, although it is not always as easy to extract as you might hope. In this article we look at some basic log analysis you can do right now (searching is all it takes), and at some more advanced analysis that requires up-front setup work but saves you a lot of time later. Examples of advanced analysis include generating summary counts and filtering on field values.

We first show you how to do this on the command line with several different tools, and then show how a log management tool can automate much of the heavy lifting and make log analysis far simpler.

### Searching with Grep ###

Searching for text is the most basic way to find information. The most common tool for searching text is [grep][1]. This command-line tool, available in most Linux distributions, lets you search your logs with regular expressions. A regular expression is a pattern, written in a special language, that identifies matching text. The simplest pattern is the string you want to find, surrounded by quotes.

#### Regular Expressions ####

Here is an example that searches the authentication log on an Ubuntu system for "user hoover":

    $ grep "user hoover" /var/log/auth.log
    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
    pam_unix(sshd:session): session opened for user hoover by (uid=0)
    pam_unix(sshd:session): session closed for user hoover

It can be hard to construct regular expressions that are precise. For example, if we wanted to search for a number like the port "4792", it could also match timestamps, URLs and other undesired data. In the following Ubuntu example it matched an Apache log line that we did not want:

    $ grep "4792" /var/log/auth.log
    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
    74.91.21.46 - - [31/Mar/2015:19:44:32 +0000] "GET /scripts/samples/search?q=4972 HTTP/1.0" 404 545 "-" "-"
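To make the match precise, you can anchor the number to its surrounding context. Here is a small self-contained sketch; the two sample lines are hypothetical stand-ins for /var/log/auth.log entries:

```shell
# Two hypothetical log lines: an SSH login and an Apache request that
# both happen to contain the number 4792.
log='Accepted password for hoover from 10.0.2.2 port 4792 ssh2
74.91.21.46 - - [31/Mar/2015:19:44:32 +0000] "GET /search?q=4792 HTTP/1.0" 404 545'

# A bare number matches both lines.
printf '%s\n' "$log" | grep -c '4792'

# Anchoring the number to the literal "port " prefix keeps only the SSH line.
printf '%s\n' "$log" | grep 'port 4792'
```

The first grep counts 2 (both lines match), while the anchored pattern returns only the SSH line.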
#### Surround Search ####

Another useful tip is surround search with grep, which shows you the lines before and after a match. It can help you debug what led up to an error or problem. The `-B` option gives you the lines before a match, and `-A` gives you the lines after. For example, we know that when someone fails to log in as an admin and their IP does not reverse-resolve, it means they may not have a valid domain name. This is pretty suspicious!

    $ grep -B 3 -A 2 'Invalid user' /var/log/auth.log
    Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: reverse mapping checking getaddrinfo for 216-19-2-8.commspeed.net [216.19.2.8] failed - POSSIBLE BREAK-IN ATTEMPT!
    Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
    Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Invalid user admin from 216.19.2.8
    Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: input_userauth_request: invalid user admin [preauth]
    Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
#### Tail ####

You can also combine grep with [tail][2] to get the last few lines of a file, or to follow the log and print lines in real time. This is useful when you are making interactive changes, such as starting a server or testing a code change.

    $ tail -f /var/log/auth.log | grep 'Invalid user'
    Apr 30 19:49:48 ip-172-31-11-241 sshd[6512]: Invalid user ubnt from 219.140.64.136
    Apr 30 19:49:49 ip-172-31-11-241 sshd[6514]: Invalid user admin from 219.140.64.136

A detailed introduction to grep and regular expressions is outside the scope of this guide, but [Ryan's Tutorials][3] covers them in more depth.

Log management systems offer higher performance and more powerful searching. They often index the data and run queries in parallel, so you can search gigabytes or terabytes of logs in seconds; by comparison, grep can take minutes or, in extreme cases, even hours. Log management systems also use query languages such as [Lucene][4], which provide a simpler syntax for searching on numbers, fields and more.

### Parsing with Cut, AWK and Grok ###

#### Command Line Tools ####

Linux offers several command-line tools for text parsing and analysis. They are very useful when you want to quickly parse a small amount of data, but processing large volumes can take a long time.

#### Cut ####

The [cut][5] command lets you parse fields from delimited logs. Delimiters are characters, such as equal signs or commas, that separate fields or key-value pairs.

Let's say we want to parse the user out of this log line:

    pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost= user=root

We can use cut as follows to get the text of the eighth field when the line is split on equal signs. This example is from an Ubuntu system:

    $ grep "authentication failure" /var/log/auth.log | cut -d '=' -f 8
    root
    hoover
    root
    nagios
    nagios
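The same pipeline can be tried anywhere by feeding cut an inline copy of the sample line instead of reading /var/log/auth.log:

```shell
# Inline copy of the pam_unix sample line.
line='pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost= user=root'

# Splitting on '=' makes "root" the eighth field: the seventh
# '='-separated piece ends with " user", so field 8 is the value of user=.
printf '%s\n' "$line" | cut -d '=' -f 8
```

This prints `root`. Note how positional field numbers depend on every '=' in the line, which is why cut works best on rigidly formatted logs.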
#### AWK ####

Alternatively, you can use [awk][6], which offers more powerful field parsing. It provides a scripting language with which you can filter out almost anything irrelevant.

For example, suppose we have the following log line on an Ubuntu system, and we want to extract the name of the user whose login failed:

    Mar 24 08:28:18 ip-172-31-11-241 sshd[32701]: input_userauth_request: invalid user guest [preauth]

You can use awk like this. First, the regular expression /sshd.*invalid user/ matches the sshd invalid user lines. Then { print $9 } prints the ninth field, based on the default delimiter of whitespace. That outputs the user names.

    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
    guest
    admin
    info
    test
    ubnt
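The summary counts mentioned in the introduction drop out of the same awk command by piping its output through sort and uniq. A self-contained sketch against a hypothetical excerpt (/tmp/auth-sample.log is an assumed scratch path):

```shell
# Hypothetical auth.log excerpt; field 9 is the attempted user name,
# as in the example above.
cat > /tmp/auth-sample.log <<'EOF'
Mar 24 08:28:18 ip-172-31-11-241 sshd[32701]: input_userauth_request: invalid user guest [preauth]
Mar 24 08:28:19 ip-172-31-11-241 sshd[32703]: input_userauth_request: invalid user admin [preauth]
Mar 24 08:28:21 ip-172-31-11-241 sshd[32705]: input_userauth_request: invalid user admin [preauth]
EOF

# Count how many times each user name was tried, most frequent first.
awk '/sshd.*invalid user/ { print $9 }' /tmp/auth-sample.log | sort | uniq -c | sort -rn
```

Here `admin` comes out on top with a count of 2.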
You can read more about using regular expressions and printing fields in the [Awk User's Guide][7].

#### Log Management Systems ####

Log management systems make parsing easier and let users analyze large numbers of log files quickly. They can automatically parse standard log formats such as common Linux logs and web server logs. That saves a lot of time, because you do not have to think about writing your own parsing logic while troubleshooting a system problem.

Below is an example of an sshd log message parsed out into a remoteHost and a user for each entry. This is a screenshot from Loggly, a cloud-based log management service.

![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.25.09-AM.png)

You can also define custom parsing for non-standard formats. A commonly used tool is [Grok][8], which uses a library of common regular expressions to parse raw text into structured JSON. Here is a sample Grok configuration for Logstash that parses kernel log files:

    filter{
      grok {
        match => {"message" => "%{CISCOTIMESTAMP:timestamp} %{HOST:host} %{WORD:program}%{NOTSPACE} %{NOTSPACE}%{NUMBER:duration}%{NOTSPACE} %{GREEDYDATA:kernel_logs}"
        }
      }
    }

Here is what the parsed Grok output looks like:

![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.30.37-AM.png)
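Grok itself runs inside Logstash, but the underlying idea (naming the pieces of a known layout and emitting structured output) can be sketched with plain awk. This is a rough stand-in, not Grok's pattern engine, and the sample line is hypothetical:

```shell
# Hypothetical sshd event; awk names the whitespace-separated fields
# and prints them as a small JSON object.
line='Apr 30 19:49:48 ip-172-31-11-241 sshd[6512]: Invalid user ubnt from 219.140.64.136'

printf '%s\n' "$line" | awk '{
    sub(/:$/, "", $5)  # strip the trailing colon from "sshd[6512]:"
    printf "{\"timestamp\":\"%s %s %s\",\"host\":\"%s\",\"program\":\"%s\",\"user\":\"%s\",\"remoteHost\":\"%s\"}\n",
           $1, $2, $3, $4, $5, $8, $10
}'
```

Once events are in a structured form like this, filtering on a single field becomes trivial, which is exactly what the next section is about.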
### Filtering with Rsyslog and AWK ###

Filtering lets you search on a specific field value instead of doing a full-text search. This makes your log analysis more accurate, because it ignores undesired matches from other parts of the log message. To search on a field value, you first need to parse your logs, or at least have a way of searching based on the event structure.

#### How to Filter on One App ####

Often, you only want to see the logs from just one application. This is easy if your application always logs to a single file. It is more complicated if you need to filter one application out of an aggregated or centralized log. Here are several ways to do it:

1. Use the rsyslog daemon to parse and filter logs. The following example writes the logs from the sshd application to a file named sshd-messages, then discards the event so it is not repeated elsewhere. You can try this example by adding it to your rsyslog.conf file.

        :programname, isequal, "sshd" /var/log/sshd-messages
        &~
2. Use a command-line tool like awk to extract the values of a particular field, such as the sshd user name. This example is from an Ubuntu system:

        $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
        guest
        admin
        info
        test
        ubnt

3. Use a log management system that automatically parses your logs, then click to filter on the desired application name. Here is a screenshot showing the syslog fields in the Loggly log management service. We are filtering on the application name "sshd", as indicated by the Venn-diagram icon.

![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.02-AM.png)
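A side note on option 1: the `&~` discard action shown there is rsyslog's legacy syntax. On rsyslog v7 and later, the same filter can be written in the current RainerScript style. This is an untested config sketch, with the file path carried over from the example above:

```
if $programname == 'sshd' then {
    action(type="omfile" file="/var/log/sshd-messages")
    stop
}
```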
#### How to Filter on Errors ####

One of the things people most want to see in their logs is errors. Unfortunately, the default syslog configuration does not output the severity of errors directly, which makes it difficult to filter on them.

There are two ways to solve this problem. First, you can modify your rsyslog configuration to output the severity in the log file, making it easy to read and search. In your rsyslog configuration you can add a [template][9] with pri-text, such as the following:

    "<%pri-text%> : %timegenerated%,%HOSTNAME%,%syslogtag%,%msg%n"

This example outputs logs in the following format. You can see the err that indicates the severity in this message:

    <authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
You can use awk or grep to search on just the error messages. In this Ubuntu example, we use some syntax features, the . and the >, that only match this field:

    $ grep '.err>' /var/log/auth.log
    <authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
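Once the severity tag is in each line, you can also tally events by facility and severity. A self-contained sketch over a hypothetical excerpt written in the pri-text format above (/tmp/pri-sample.log is an assumed scratch path):

```shell
# Hypothetical excerpt in the "<facility.severity> : ..." template format.
cat > /tmp/pri-sample.log <<'EOF'
<authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
<authpriv.info> : Mar 11 18:19:12,hoover-VirtualBox,sshd[5032]:, Accepted password for hoover
<authpriv.err> : Mar 11 18:20:01,hoover-VirtualBox,su[5040]:, pam_authenticate: Authentication failure
EOF

# Extract the leading <facility.severity> tag and count each combination.
grep -o '^<[^>]*>' /tmp/pri-sample.log | sort | uniq -c
```

Here `<authpriv.err>` is counted twice and `<authpriv.info>` once.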
Your second option is to use a log management system. Good ones automatically parse syslog messages and extract the severity field. They also let you filter on specific error messages with a single click.

Here is a screenshot from Loggly showing the syslog fields with the error severity highlighted, indicating that we are filtering on errors:

![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.00.36-AM.png)

--------------------------------------------------------------------------------

via: http://www.loggly.com/ultimate-guide/logging/analyzing-linux-logs/

Authors: [Jason Skowronski][a], [Amy Echeverri][b], [Sadequl Hussain][c]
Translator: [ictlyh](https://github.com/ictlyh)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

[a]:https://www.linkedin.com/in/jasonskowronski
[b]:https://www.linkedin.com/in/amyecheverri
[c]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:http://linux.die.net/man/1/grep
[2]:http://linux.die.net/man/1/tail
[3]:http://ryanstutorials.net/linuxtutorial/grep.php
[4]:https://lucene.apache.org/core/2_9_4/queryparsersyntax.html
[5]:http://linux.die.net/man/1/cut
[6]:http://linux.die.net/man/1/awk
[7]:http://www.delorie.com/gnu/docs/gawk/gawk_26.html#IDX155
[8]:http://logstash.net/docs/1.4.2/filters/grok
[9]:http://www.rsyslog.com/doc/v8-stable/configuration/templates.html
Configuring an OpenVPN Server and Client on Ubuntu 15.04
================================================================================

A virtual private network (VPN) is a common name for several technologies used to connect to another network. It is called virtual because the nodes are not connected by physical lines, and private because without proper authorization from the network's owner it cannot be accessed publicly.

![](http://blog.linoxide.com/wp-content/uploads/2015/05/vpn_custom_illustration.jpg)

The [OpenVPN][1] software transmits data over the TCP and UDP protocols with the help of the TUN/TAP driver. The UDP protocol and the TUN driver allow users behind NAT to establish a connection to an OpenVPN server. In addition, OpenVPN lets you specify a custom port. It offers flexible configuration that can help you get around firewall restrictions.

In OpenVPN, security and encryption are provided by the OpenSSL library and the Transport Layer Security (TLS) protocol. TLS is an improved version of the SSL protocol.

OpenSSL offers two encryption methods: symmetric and asymmetric. Below, we show how to configure the server side of OpenVPN, and how to set up asymmetric encryption and the TLS protocol with a public key infrastructure (PKI).

### Server-Side Configuration ###

First, we must install the OpenVPN software. On Ubuntu 15.04 and other Unix systems with the 'apt' package manager, it can be installed with the following command:

    sudo apt-get install openvpn

We also need the easy-rsa package:

    sudo apt-get install easy-rsa

**Note**: All of the following commands must be executed with superuser privileges, for example after running `sudo -i`; alternatively, you can prefix every command below with `sudo -E`.

Before we start, we need to copy "easy-rsa" into the openvpn folder, and then change into the easy-rsa directory:

    cd /etc/openvpn/easy-rsa/2.0

Here we start the key generation process.

First, we edit the "vars" file. To simplify the generation process, we specify our data in it. Here is a sample "vars" file:

    export KEY_COUNTRY="CN"
    export KEY_PROVINCE="BJ"
    export KEY_CITY="Beijing"
    export KEY_ORG="Linux.CN"
    export KEY_EMAIL="open@vpn.linux.cn"
    export KEY_OU=server

Hopefully these field names are self-explanatory and need no further explanation.

Next, we build the certificate authority:

    ./build-ca

In the dialog we can see the default values, which we specified earlier in "vars". We can check them, edit them if necessary, and press Enter a few times. The dialog looks like this:

    Generating a 2048 bit RSA private key
    .............................................+++
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [CN]:
    State or Province Name (full name) [BJ]:
    Locality Name (eg, city) [Beijing]:
    Organization Name (eg, company) [Linux.CN]:
    Organizational Unit Name (eg, section) [Tech]:
    Common Name (eg, your name or your server's hostname) [Linux.CN CA]:
    Name [EasyRSA]:
    Email Address [open@vpn.linux.cn]:

Next, we need to generate a server key. The dialog is similar:

    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [CN]:
    State or Province Name (full name) [BJ]:
    Locality Name (eg, city) [Beijing]:
    Organization Name (eg, company) [Linux.CN]:
    Organizational Unit Name (eg, section) [Tech]:
    Common Name (eg, your name or your server's hostname) [Linux.CN server]:
    Name [EasyRSA]:
    Email Address [open@vpn.linux.cn]:

    Please enter the following 'extra' attributes
    to be sent with your certificate request
    Check that the request matches the signature
    Signature ok
    The Subject's Distinguished Name is as follows
    countryName :PRINTABLE:'CN'
    stateOrProvinceName :PRINTABLE:'BJ'
    localityName :PRINTABLE:'Beijing'
    organizationName :PRINTABLE:'Linux.CN'
    organizationalUnitName:PRINTABLE:'Tech'
    commonName :PRINTABLE:'Linux.CN server'
    name :PRINTABLE:'EasyRSA'
    emailAddress :IA5STRING:'open@vpn.linux.cn'
    Certificate is to be certified until May 22 19:00:25 2025 GMT (3650 days)
    Sign the certificate? [y/n]:y
    1 out of 1 certificate requests certified, commit? [y/n]y
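If you want to double-check what was just signed, OpenSSL can print a certificate's subject. The sketch below generates a throwaway self-signed certificate to run the inspection against, since the real path (e.g. keys/server.crt) depends on your easy-rsa layout:

```shell
# Throwaway self-signed certificate standing in for the real server.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/C=CN/ST=BJ/L=Beijing/O=Linux.CN/CN=server" \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Print the subject: the fields should match what was set in "vars".
openssl x509 -noout -subject -in /tmp/demo.crt
```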
    Generating DH parameters, 2048 bit long safe prime, generator 2
    This is going to take a long time
    ................................+................<and many many dots>

After a long wait, we can continue and generate the final key, which is used for TLS authentication.

### Client Configuration on Unix ###

Assume we have a device running a Unix-like operating system, such as Ubuntu 15.04, with OpenVPN installed. We want to connect to the OpenVPN server we set up above. First, we need to generate a key for the client. To do that, go to the corresponding directory on the server:

    cd /etc/openvpn/easy-rsa/2.0

Part of the client configuration file looks like this:

    dev tun
    proto udp

    # IP and port of the remote host with the OpenVPN server
    remote 111.222.333.444 1194

    resolv-retry infinite

The OpenVPN configuration on an Android device is very similar to the configuration on a Unix system. We need a package containing a configuration file, keys and certificates. The file list is:

- the configuration file (.ovpn extension),
- ca.crt,
- dh2048.pem,
- client.crt,

The relevant part of the configuration file is the same as on Unix:

    dev tun
    proto udp

    # IP and port of the remote host with the OpenVPN server
    remote 111.222.333.444 1194

    resolv-retry infinite

All of these files must be moved to the device's SD card.

Then, we need to install the [OpenVPN Connect][2] app.

Next, the configuration procedure is quite simple:

- open OpenVPN and select the "Import" option
- select "Import Profile from SD card"
- in the window that opens, navigate to the folder with the prepared files and select the .ovpn file
- the app will offer to create a new profile
- tap the "Connect" button and wait a moment

Done. Now our Android device is connected to our private network over a secure VPN connection.

### Conclusion ###

Although the initial configuration of OpenVPN takes a fair amount of time, the easy client setup makes up for it, and it gives us the ability to connect from any device. In addition, OpenVPN provides a high level of security and the ability to connect from different places, including clients behind NAT. OpenVPN can therefore be used both at home and in the enterprise.

--------------------------------------------------------------------------------

via: http://linoxide.com/ubuntu-how-to/configure-openvpn-server-client-ubuntu-15

Author: [Ivan Zabrovskiy][a]
Translator: [GOLinux](https://github.com/GOLinux)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).
Syncthing: A Private and Secure Tool to Sync Files/Folders Between Computers
================================================================================

### Introduction ###

**Syncthing** is a free, open-source tool that synchronizes files/folders between your networked computers. Unlike other synchronization tools such as **BitTorrent Sync** or **Dropbox**, Syncthing transfers data directly from one system to another, and it is completely open source, secure and private. All of your precious data is stored on your own systems, so you have full control over your files and folders, and none of them are stored on any third-party system. You also get to decide where your data resides, whether it is shared with a third party, and how it travels over the Internet.

All communication is encrypted with TLS, so your data is safe from prying eyes. Syncthing has a powerful, responsive web administration interface (WebGUI) that makes it easy to add, remove and manage the folders synchronized over the network. With Syncthing you can sync multiple folders to multiple systems at once. Syncthing is a portable, simple but powerful tool to install and use. Since files/folders are transferred directly from one computer to another, you do not need to pay a cloud provider for extra space. All you need is a stable LAN/WAN connection and enough disk space on your systems. It supports all modern operating systems, including GNU/Linux, Windows, Mac OS X and, of course, Android.

### Installation ###

### System 1 details: ###

- **OS**: Ubuntu 14.04 LTS server;
- **Hostname**: **server1**.unixmen.local;
- **IP address**: 192.168.1.150.
- **System user**: sk (you can use your own system user)
- **Sync folder**: /home/Sync/ (created by Syncthing by default)

### System 2 details ###

- **OS**: Ubuntu 14.10 server;
- **Hostname**: **server**.unixmen.local;
- **IP address**: 192.168.1.151.
- **System user**: sk (you can use your own system user)
- **Sync folder**: /home/Sync/ (created by Syncthing by default)

    cd syncthing-linux-amd64-v0.10.20/

Copy the executable file "syncthing" to **$PATH**:

    sudo cp syncthing /usr/local/bin/

Then run syncthing for the first time:

    syncthing

When you execute the above command, syncthing generates a configuration and some keys, and then opens the admin interface in your browser.

Sample output:

    [BQXVO] 15:41:07 INFO: Device BQXVO3D-VEBIDRE-MVMMGJI-ECD2PC3-T5LT3JB-OK4Z45E-MPIDWHI-IRW3NAZ is "server1" at [dynamic]
    [BQXVO] 15:41:07 INFO: Completed initial scan (rw) of folder default

Syncthing has been successfully initialized, and the web admin interface can be reached in a browser at **http://localhost:8080**. As the output above shows, Syncthing automatically created a folder called **default** for you, in the **Sync** directory under your home directory.

By default, the Syncthing WebGUI is only reachable from localhost; to access it from a remote system, you need to do the following on both systems:

First, press CTRL+C to stop the Syncthing initialization process. You are now back at the terminal.

Edit the **config.xml** file,
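The change in question (an assumption based on the config layout of Syncthing v0.10, so check your own config.xml) is the GUI listen address inside the `<gui>` section, switching it from localhost to all interfaces. A sed sketch against a sample file:

```shell
# Minimal stand-in for the <gui> section of Syncthing's config.xml.
cat > /tmp/config-sample.xml <<'EOF'
<gui enabled="true" tls="false">
    <address>127.0.0.1:8080</address>
</gui>
EOF

# Listen on all interfaces instead of localhost only.
sed -i 's|127\.0\.0\.1:8080|0.0.0.0:8080|' /tmp/config-sample.xml
grep '<address>' /tmp/config-sample.xml
```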
Now open **http://ip-address:8080/** in your browser. You will see the following screen:

![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server1-Mozilla-Firefox_001.png)

The web admin interface has two panes. In the left pane you should see the list of folders being synchronized. As mentioned earlier, the folder **default** was created automatically when you initialized Syncthing. If you want to sync more folders, click the **Add Folder** button.

In the right pane you can see the number of connected devices. Right now there is only one, the computer you are currently working on.

### Setting up Syncthing in the web admin interface ###

To improve security, let's enable TLS and set up an administrative user and password for the web admin interface. To do that, click the gear button in the top right corner and select **Settings**

![](http://www.unixmen.com/wp-content/uploads/2015/01/Menu_002.png)

Enter the admin username/password. In my case it is admin/Ubuntu. You should use a more complex password.

![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server1-Mozilla-Firefox_004.png)

![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_010.png)

The following screen appears next. Paste the **System 1 ID** in the Device section and enter a device name (optional). In the Addresses field, you can enter the IP address of the other system (LCTT translator's note: i.e. System 1, the system the pasted ID belongs to), or use the default value. The default value is **dynamic**. Finally, select the folder to sync. In our example, the sync folder is **default**.

![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_009.png)

![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_018.png)

Now put any file or folder into the "**default**" folder on either system. You should see the files/folders automatically synchronized to the other system.

--------------------------------------------------------------------------------

via: http://www.unixmen.com/syncthing-private-secure-tool-sync-filesfolders-comp

Author: [SK][a]
Translator: [XLCYun](https://github.com/XLCYun)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](http://linux.cn/).
Fixing the "Minimal BASH like line editing is supported" GRUB Error in Linux
================================================================================

A couple of days back, I [installed Elementary OS in dual boot with Windows][1] and encountered a Grub error at boot time. The command line showed the following message:

**Minimal BASH like line editing is supported. For the first word, TAB lists possible command completions. anywhere else TAB lists possible device or file completions.**

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu_Linux_1.jpeg)

Indeed, this is not an error specific to Elementary OS. It is a common [Grub][2] error that occurs on Linux OSes such as Ubuntu, Fedora and Linux Mint.

In this article we will learn **how to fix the "minimal BASH like line editing is supported" Grub error in Ubuntu**-based Linux systems.

> You can read this tutorial to fix a similar, common problem: [error: no such partition grub rescue in Linux][3].

### Prerequisites ###

To fix this issue, you will need the following:

- a live USB or disk of the same OS and the same version
- a working Internet connection in the current session

Once you have made sure you have the prerequisites, let's see how to fix the black screen of death of Linux (if I may call it that ;)).

### How to fix the "minimal BASH like line editing is supported" Grub error in Ubuntu-based Linux ###

You may wonder why I emphasize Ubuntu-based distributions when this Grub error is not limited to them. The reason is that we are going to take an easy approach here and use a tool called **Boot Repair** to fix our problem, and I am not sure whether this tool is available for other distributions such as Fedora. Without wasting any more time, let's see how to fix the error.

### Step 1: Boot into the live session ###

Plug in the live USB and boot into the live session.

### Step 2: Install Boot Repair ###

Once you are in the live session, open a terminal and use the following commands to install Boot Repair:

    sudo add-apt-repository ppa:yannubuntu/boot-repair
    sudo apt-get update
    sudo apt-get install boot-repair

Note: if you run into a "failed to fetch cdrom" issue while running these commands, follow this tutorial: [how to fix the "apt-get update cannot add new CD-ROM" error][4].

### Step 3: Repair the boot with Boot Repair ###

Once Boot Repair is installed, launch it from the command line with:

    boot-repair &

It is actually very straightforward: just follow the instructions provided by the Boot Repair tool. First, click the **Recommended repair** option in Boot Repair.

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu.png)

Boot Repair takes some time to analyze the problems with the boot and Grub. Afterwards, it presents some commands that can be run directly on the command line. Run them one by one in the terminal. Mine showed the following:

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu_1.png)

After you enter these commands, it runs for a while:

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu_2.png)

When the process finishes, it provides a URL to a web page containing the Boot Repair logs. If your boot issue is still not fixed, you can go to a community forum or email the development team and provide that URL as a reference. Cool, isn't it?

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Final_Ubuntu.png)

After Boot Repair finishes successfully, shut down your computer, remove the USB and boot again. For me it booted successfully, although it added two extra lines to the Grub screen. That was insignificant compared to the joy of seeing the system boot normally again.

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu_Linux_2.jpeg)

### Did it work for you? ###

This is how I fixed the **"minimal BASH like line editing is supported" Grub error in Elementary OS Freya**. How about you? Did it work for you? Feel free to ask questions or drop suggestions in the comments below.

--------------------------------------------------------------------------------

via: http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux/

Author: [Abhishek][a]
Translator: [martin2011qi](https://github.com/martin2011qi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](http://linux.cn/).

[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/guide-install-elementary-os-luna/
[2]:http://www.gnu.org/software/grub/
[3]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/
[4]:http://itsfoss.com/fix-failed-fetch-cdrom-aptget-update-add-cdroms/
FreeBSD 和 Linux 有什么不同?
================================================================================

![](https://1102047360.rsc.cdn77.org/wp-content/uploads/2015/03/FreeBSD-790x494.jpg)

### 简介 ###

BSD最初从UNIX继承而来,目前,有许多的类Unix操作系统是基于BSD的。FreeBSD是使用最广泛的开源的伯克利软件发行版(即 BSD 发行版)。就像它隐含的意思一样,它是一个自由开源的类Unix操作系统,并且是公共服务器平台。FreeBSD源代码通常以宽松的BSD许可证发布。它与Linux有很多相似的地方,但我们得承认它们在很多方面仍有不同。

本文的其余部分组织如下:FreeBSD的描述在第一部分,FreeBSD和Linux的相似点在第二部分,它们的区别将在第三部分讨论,对它们功能的讨论和总结在最后一节。

### FreeBSD描述 ###

#### 历史 ####

- FreeBSD的第一个版本发布于1993年,它的第一张CD-ROM是FreeBSD 1.0,发行于1993年12月。接下来,FreeBSD 2.1.0在1995年发布,并且获得了所有用户的青睐。实际上许多IT公司都使用FreeBSD并且很满意,我们可以列出其中的一些:IBM、Nokia、NetApp和Juniper Network。

#### 许可证 ####

- 关于它的许可证,FreeBSD以多种开源许可证进行发布,它的名为Kernel的最新代码以两句版BSD许可证进行了发布,给予使用和重新发布FreeBSD的绝对自由。其它的代码则以三句版或四句版BSD许可证进行发布,有些是以GPL和CDDL的许可证发布的。

(LCTT 译注:BSD 许可证与 GPL 许可证相比,相当简短,最初只有四句规则;1999年应 RMS 请求,删除了第三句,新的许可证称作“新 BSD”或三句版BSD;原来的 BSD 许可证称作“旧 BSD”、“修订的 BSD”或四句版BSD;也有一种删除了第三、第四两句的版本,称之为两句版 BSD,等价于 MIT 许可证。)

#### 用户 ####

- FreeBSD的重要特点之一就是它的用途多样性。实际上,FreeBSD可以作为邮件服务器、Web 服务器、FTP 服务器以及路由器等,您只需要在它上面运行相应的服务软件即可。而且FreeBSD还支持ARM、PowerPC、MIPS、x86、x86-64架构。

### FreeBSD和Linux的相似处 ###

FreeBSD和Linux都是自由开源的软件。实际上,它们的用户可以很容易地检查并修改源代码,用户拥有绝对的自由。而且,FreeBSD和Linux都是类Unix系统,它们的内核、内部组件、库程序都使用从历史上的AT&T Unix继承来的算法。FreeBSD从根基上更像Unix系统,而Linux是作为自由的类Unix系统发布的。许多工具应用都可以在FreeBSD和Linux中找到,实际上,它们几乎有同样的功能。

此外,FreeBSD能够运行大量的Linux应用。它可以安装一个Linux的兼容层,这个兼容层可以在编译FreeBSD时加入AAC Compact Linux得到,或通过下载已编译了Linux兼容层的FreeBSD系统,其中会包括兼容程序:aac_linux.ko。不同于FreeBSD的是,Linux无法运行FreeBSD的软件。

最后,我们注意到虽然二者有同样的目标,但二者还是有一些不同之处,我们在下一节中列出。

### FreeBSD和Linux的区别 ###

目前对于大多数用户来说,并没有一个在FreeBSD和Linux之间做出选择的明确准则,因为它们有着很多相同的应用程序,也都被称作类Unix系统。

在这一节,我们将列出这两种系统的一些重要的不同之处。

#### 许可证 ####

- 两个系统的区别首先在于它们的许可证。Linux以GPL许可证发行,它为用户提供阅读、发行和修改源代码的自由,GPL许可证帮助用户避免仅仅发行二进制。而FreeBSD以BSD许可证发布,BSD许可证比GPL更宽容,因为其衍生著作不需要仍以该许可证发布。这意味着任何用户能够使用、发布、修改代码,并且不需要维持之前的许可证。
- 您可以依据您的需求,在两种许可证中选择一种。首先是BSD许可证,由于其特殊的条款,它更受用户青睐。实际上,这个许可证使用户在保证源代码的封闭性的同时,可以售卖以该许可证发布的软件。再说说GPL,它需要每个使用以该许可证发布的软件的用户多加注意。
- 如果想在以不同许可证发布的两种软件中做出选择,您需要了解它们各自的许可证,以及它们开发中的方法论,从而能了解它们特性的区别,来选择更适合自己需求的。

#### 控制 ####

- 由于FreeBSD和Linux是以不同的许可证发布的,Linus Torvalds控制着Linux的内核,而FreeBSD却与Linux不同,它并未被控制。我个人更倾向于使用FreeBSD而不是Linux,这是因为FreeBSD才是绝对自由的软件,没有任何控制许可证的存在。Linux和FreeBSD还有其他的不同之处,我建议您先不急着做出选择,等读完本文后再做出您的选择。

#### 操作系统 ####

- Linux主要指内核系统,这与FreeBSD不同,FreeBSD的整个系统都被维护着。FreeBSD的内核和一组由FreeBSD团队开发的软件被作为一个整体进行维护。实际上,FreeBSD开发人员能够远程且高效地管理核心操作系统。
- 而Linux方面,在管理系统方面有一些困难。由于不同的组件由不同的源维护,Linux开发者需要将它们汇集起来,才能达到同样的功能。
- FreeBSD和Linux都给了用户大量的可选软件和发行版,但它们管理的方式不同。FreeBSD是统一的管理方式,而Linux需要被分别维护。

#### 硬件支持 ####

- 说到硬件支持,Linux比FreeBSD做得更好。但这不意味着FreeBSD没有像Linux那样支持硬件的能力。它们只是在管理的方式上不同,这通常还依赖于您的需求。因此,如果您在寻找最新的解决方案,FreeBSD更适合您;但如果您在寻找更多的普适性,那最好使用Linux。

#### 原生FreeBSD Vs 原生Linux ####

- 两者的原生系统的区别又有不同。就像我之前说的,Linux是一个Unix的替代系统,由Linus Torvalds编写,并由网络上的许多极客一起协助实现的。Linux有一个现代系统所需要的全部功能,诸如虚拟内存、共享库、动态加载、优秀的内存管理等。它以GPL许可证发布。
- FreeBSD也继承了Unix的许多重要的特性。FreeBSD是在加州大学开发的BSD的一种发行版。开发BSD的最重要的原因是用一个开源的系统来替代AT&T操作系统,从而给用户无需AT&T许可证便可使用的能力。
- 许可证的问题是开发者们最关心的问题。他们试图提供一个最大化克隆Unix的开源系统。这影响了用户的选择,由于FreeBSD使用BSD许可证进行发布,因而相比Linux更加自由。

#### 支持的软件包 ####

- 从用户的角度来看,另一个二者不同的地方便是软件包以及从源码安装的软件的可用性和支持。Linux只提供了预编译的二进制包,这与FreeBSD不同,它不但提供预编译的包,而且还提供从源码编译和安装的构建系统。使用它的 ports 工具,FreeBSD给了您选择使用预编译的软件包(默认)和在编译时定制您软件的能力。(LCTT 译注:此处说明有误。Linux 也提供了源代码方式的包,并支持自己构建。)
- 这些 ports 允许您构建所有支持FreeBSD的软件。而且,它们的管理还是层次化的,您可以在/usr/ports下找到源文件的地址以及一些正确使用FreeBSD的文档。
- 这些提到的 ports 给予你产生不同软件包版本的可能性。FreeBSD给了您通过源代码构建以及预编译两种软件获取方式,而不是像Linux一样只有预编译的软件包。您可以使用两种安装方式管理您的系统。

#### FreeBSD 和 Linux 常用工具比较 ####

- 有大量的常用工具在FreeBSD上可用,并且有趣的是它们由FreeBSD的团队所拥有。相反的,Linux工具来自GNU,这就是为什么在使用中有一些限制。(LCTT 译注:这也是 Linux 正式的名称被称作“GNU/Linux”的原因,因为本质上 Linux 其实只是指内核。)
- 实际上FreeBSD采用的BSD许可证非常有益且有用。因此,您有能力维护核心操作系统,控制这些应用程序的开发。有一些工具类似于它们的祖先 - BSD和Unix的工具,但不同于GNU的套件,GNU套件只想做到最小的向后兼容。

#### 标准 Shell ####

- FreeBSD默认使用tcsh。它是csh的增强版,由于FreeBSD以BSD许可证发行,因此不建议您在其中使用GNU的组件 bash shell。bash和tcsh的区别仅仅在于tcsh的脚本功能。实际上,我们更推荐在FreeBSD中使用sh shell来编写脚本,因为它更加可靠,可以避免一些使用tcsh和csh时出现的脚本问题。

#### 一个更加层次化的文件系统 ####

- 像之前提到的一样,使用FreeBSD时,基础操作系统以及可选组件可以被很容易地区别开来。这导致了一些管理它们的标准。在Linux下,/bin,/sbin,/usr/bin或者/usr/sbin都是存放可执行文件的目录。FreeBSD不同,它有一些附加的组织规范:通过 ports 安装的可选组件会被放在/usr/local/bin或者/usr/local/sbin目录下。这种方法可以帮助管理和区分基础操作系统和可选组件。

### 结论 ###

FreeBSD和Linux都是自由且开源的系统,它们有相似点也有不同点。上面列出的内容并不能说明哪个系统比另一个更好。实际上,FreeBSD和Linux都有自己的特点和技术规格,这使它们与别的系统区别开来。那么,您有什么看法呢?您已经在使用它们中的某个系统了么?如果答案为是的话,请给我们您的反馈;如果答案是否的话,在读完我们的描述后,您怎么看?请在留言处发表您的观点。

--------------------------------------------------------------------------------

via: http://www.unixmen.com/comparative-introduction-freebsd-linux-users/

作者:[anismaj][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:https://www.unixmen.com/author/anis/
@ -1,4 +1,4 @@
互联网扫描器 ZMap 完全手册
================================================================================
1. 初识 ZMap
1. 最佳扫描习惯
@ -21,7 +21,7 @@ ZMap 文档

### 初识 ZMap ###

ZMap被设计为针对整个IPv4地址空间或其中的大部分实施综合扫描的工具。ZMap是研究者手中的利器,但在运行ZMap时,请注意,您很有可能正在以每秒140万个包的速度扫描整个IPv4地址空间。我们建议用户即使在实施小范围扫描之前,也联系一下本地网络的管理员,并参考我们列举的[最佳扫描体验](#bestpractices)。

默认情况下,ZMap会对指定端口实施尽可能大速率的TCP SYN扫描。较为保守的情况下,对10,000个随机的地址的80端口以10Mbps的速度扫描,如下所示:
@ -42,11 +42,13 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如,仅扫描10.0.0.0/

    0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
    0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%

这些更新信息提供了扫描的即时状态并表示成:

    完成进度% (剩余时间); send: 发出包的数量 即时速率 (平均发送速率); recv: 接收包的数量 接收率 (平均接收率); hits: 命中率

如果您不知道您所在网络能支持的扫描速率,您可能要尝试不同的扫描速率和带宽限制,直到扫描效果开始下降,借此找出当前网络能够支持的最快速度。

默认情况下,ZMap会输出不同IP地址的列表(例如,根据SYN ACK数据包的情况),像下面这样。其[输出结果](#output)还有几种附加的格式(如,JSON和Redis),可以用作生成[程序可解析的扫描统计](#verbosity)。同样,可以指定附加的[输出字段](#outputfields)并使用[输出过滤](#outputfilter)来过滤输出的结果。

    115.237.116.119
    23.9.117.80
@ -54,52 +56,49 @@
    217.120.143.111
    50.195.22.82

我们强烈建议您使用[黑名单文件](#blacklisting),以排除预留的/未分配的IP地址空间(如,RFC1918 规定的私有地址、组播地址),以及网络中需要排除在您扫描之外的地址。默认情况下,ZMap将采用位于 `/etc/zmap/blacklist.conf` 的这个简单的[黑名单文件](#blacklisting)中所包含的预留和未分配地址。如果您需要某些特定设置,比如每次运行ZMap时的最大带宽或[黑名单文件](#blacklisting),您可以在文件`/etc/zmap/zmap.conf`中指定,或使用自定义[配置文件](#config)。

如果您正试图解决扫描的相关问题,有几个选项可以帮助您调试。首先,您可以通过添加`--dryrun`实施[预扫](#dryrun),以此来分析包可能会发送到网络的何处。此外,还可以通过设置`--verbosity=n`来更改[日志详细程度](#verbosity)。

### 最佳扫描体验 ###
<a name="bestpractices" ></a>

我们为针对互联网进行扫描的研究者提供了一些建议,以此来引导养成良好的互联网合作氛围。

- 密切协同本地的网络管理员,以减少风险和调查
- 确认扫描不会使本地网络或上游供应商瘫痪
- 在发起扫描的源地址的网页和DNS条目中申明你的扫描是善意的
- 明确解释你的扫描中所有连接的目的和范围
- 提供一个简单的退出扫描的方法并及时响应请求
- 实施扫描时,不使用比研究对象需求更大的扫描范围或更快的扫描频率
- 如果可以,将扫描流量分布到不同的时间或源地址上

即使不声明,使用扫描的研究者也应该避免利用漏洞或访问受保护的资源,并遵守其辖区内任何特殊的法律规定。

### 命令行参数 ###
<a name="args" ></a>

#### 通用选项 ####

这些选项是实施简单扫描时最常用的选项。我们注意到某些选项取决于所使用的[探测模块](#probemodule)或[输出模块](#outputmodule)(如,在实施ICMP Echo扫描时是不需要使用目的端口的)。

**-p, --target-port=port**

要扫描的目标TCP端口号(例如,443)

**-o, --output-file=name**

将结果写入该文件,使用`-`代表输出到标准输出。

**-b, --blacklist-file=path**

文件中被排除的子网使用CIDR表示法(如192.168.0.0/16),一行一个。建议您使用此方法排除RFC 1918地址、组播地址、IANA预留空间等IANA专用地址。在conf/blacklist.example中提供了一个以此为目的的示例黑名单文件。

#### 扫描选项 ####

**-n, --max-targets=n**

限制探测目标的数量。后面跟的可以是一个数字(例如`-n 1000`),或可扫描地址空间的百分比(例如,`-n 0.1%`,不包括黑名单)

**-N, --max-results=n**
@ -111,7 +110,7 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如,仅扫描10.0.0.0/

**-r, --rate=pps**

设置发包速率,以包/秒为单位

**-B, --bandwidth=bps**
@ -119,7 +118,7 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如,仅扫描10.0.0.0/

**-c, --cooldown-time=secs**

发送完成后等待多久继续接收回包(默认值= 8)

**-e, --seed=n**
@ -127,7 +126,7 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如,仅扫描10.0.0.0/

**--shards=n**

将扫描分片/分区,使其可以在多个ZMap实例中执行(默认值= 1)。启用分片时,`--seed`参数是必需的。

**--shard=n**
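分片的效果可以用一小段 Python 直观地模拟:所有实例使用相同的种子得到同一份随机排列,第 i 个分片按下标取模领取自己的目标。注意这只是教学示意(函数 `shard_targets` 和这个 /24 的玩具地址空间都是假设的),ZMap 实际是用乘法循环群来生成整个 IPv4 空间的排列,而不是像这里一样在内存中打乱列表。

```python
import random

def shard_targets(addresses, shards, shard, seed):
    """模拟 --shards/--shard 的划分:相同 seed 产生相同排列,
    第 shard 个实例取排列中每隔 shards 个的目标。"""
    rng = random.Random(seed)
    perm = addresses[:]
    rng.shuffle(perm)          # ZMap 实际用循环群排列,这里用洗牌示意
    return perm[shard::shards]

# 一个 /24 的玩具地址空间
space = ["10.0.0.%d" % i for i in range(256)]
parts = [shard_targets(space, 3, i, seed=42) for i in range(3)]
```

各分片互不重叠,合起来正好覆盖整个地址空间,这正是多实例协同扫描时需要的性质。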
@ -165,7 +164,7 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如,仅扫描10.0.0.0/

#### 探测选项 ####

ZMap允许用户指定并添加自己所需要的[探测模块](#probemodule)。探测模块的职责就是生成要发送的探测包,并处理主机回复的响应包。

**--list-probe-modules**
@ -173,7 +172,7 @@ ZMap允许用户指定并添加自己所需要探测的模块。 探测模块的

**-M, --probe-module=name**

选择[探测模块](#probemodule)(默认值= tcp_synscan)

**--probe-args=args**
@ -185,7 +184,7 @@ ZMap允许用户指定并添加自己所需要探测的模块。 探测模块的

#### 输出选项 ####

ZMap允许用户指定和编写他们自己的[输出模块](#outputmodule)。输出模块负责处理由探测模块返回的字段,并将它们输出给用户。用户可以指定输出的字段,并过滤相应字段。

**--list-output-modules**
@ -193,7 +192,7 @@ ZMap允许用户选择指定的输出模块。输出模块负责处理由探测

**-O, --output-module=name**

选择[输出模块](#outputmodule)(默认值为csv)

**--output-args=args**
@ -201,21 +200,21 @@ ZMap允许用户选择指定的输出模块。输出模块负责处理由探测

**-f, --output-fields=fields**

输出的字段列表,以逗号分割

**--output-filter**

指定输出[过滤器](#outputfilter),对[探测模块](#probemodule)定义的字段进行过滤

#### 附加选项 ####

**-C, --config=filename**

加载[配置文件](#config),可以指定其他路径。

**-q, --quiet**

不必每秒刷新输出

**-g, --summary**
@ -233,13 +232,12 @@ ZMap允许用户选择指定的输出模块。输出模块负责处理由探测

打印版本并退出

### 附加信息 ###
<a name="additional"></a>

#### TCP SYN 扫描 ####

在执行TCP SYN扫描时,ZMap需要指定一个目标端口,也支持指定发起扫描的源端口范围。

**-p, --target-port=port**
@ -249,27 +247,27 @@ ZMap允许用户选择指定的输出模块。输出模块负责处理由探测

发送扫描数据包的源端口(例如 40000-50000)

**警示!** ZMap基于Linux内核使用RST包来应答SYN/ACK包响应,以关闭扫描器打开的连接。ZMap是在Ethernet层完成包的发送的,这样做是为了减少跟踪打开的TCP连接和路由寻路带来的内核开销。因此,如果您有跟踪连接建立的防火墙规则,如类似于`-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`的netfilter规则,将阻止SYN/ACK包到达内核。这不会妨碍到ZMap记录应答,但它会阻止RST包被送回,最终被扫描主机的连接会一直打开,直到超时后断开。我们强烈建议您在执行ZMap时,选择一组主机上未使用且防火墙允许访问的端口,加在`-s`后(如 `-s '50000-60000'`)。

#### ICMP Echo 请求扫描 ####

虽然在默认情况下ZMap执行的是TCP SYN扫描,但它也支持使用ICMP echo请求扫描。在这种扫描方式下ICMP echo请求包被发送到每个主机,并以收到ICMP应答包作为答复。实施ICMP扫描可以通过选择icmp_echoscan扫描模块来执行,如下:

    $ zmap --probe-module=icmp_echoscan

#### UDP 数据报扫描 ####

ZMap还额外支持UDP探测,它会发出任意UDP数据报给每个主机,并接收UDP或ICMP不可达的应答。ZMap可以通过使用--probe-args命令行选项来设置四种不同的UDP载荷:可在命令行设置可打印ASCII码载荷的“text”、十六进制载荷的“hex”、从外部文件读取载荷的“file”,以及通过动态字段生成载荷的“template”。为了得到UDP响应,请使用-f参数确保您指定的“data”字段处于输出范围。

下面的例子将发送两个字节“ST”,即PCAnywhere的“status”请求,到UDP端口5632。

    $ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -

下面的例子将发送字节“0x02”,即SQL Server的“client broadcast”请求,到UDP端口1434。

    $ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -

下面的例子将发送一个NetBIOS状态请求到UDP端口137,使用一个ZMap自带的载荷文件。

    $ zmap -M udp -p 137 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -
@ -277,9 +275,9 @@ ZMap还额外支持UDP探测,它会发出任意UDP数据报给每个主机,

    $ zmap -M udp -p 1434 --probe-args=file:sip_options.tpl -N 100 -f saddr,data -o -

UDP载荷模板仍处于实验阶段。当您同时使用多个发送线程(-T)时可能会遇到崩溃,并且相比静态载荷会有明显的性能下降。模板就是一个载荷文件,其中用 ${} 封装了一个或多个需要填充的字段。某些协议,特别是SIP,需要在载荷中反映出包的源地址和目的地址;其他协议,如portmapper和DNS,每个请求中的某些字段应当随机生成,否则多宿主系统的重复应答就可能被ZMap丢弃。

以下的载荷模板将发送SIP OPTIONS请求到每一个目的地:

    OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
    Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
@ -293,10 +291,9 @@ UDP payload 模板仍处于实验阶段。当您在更多的使用一个以上
    User-Agent: ${RAND_ALPHA=8}
    Accept: text/plain

就像在上面的例子中展示的那样,注意每行行末以\r\n结尾,请求以\r\n\r\n结尾,大多数SIP实现都可以正确处理它。一个可以工作的例子放在ZMap的examples/udp-payloads目录下(sip_options.tpl)。

当前实现了下面的模板字段:

- **SADDR**: 源IP地址的点分十进制格式
- **SADDR_N**: 源IP地址的网络字节序格式
@ -306,14 +303,15 @@ UDP payload 模板仍处于实验阶段。当您在更多的使用一个以上
- **SPORT_N**: 源端口的网络字节序格式
- **DPORT**: 目的端口的ascii格式
- **DPORT_N**: 目的端口的网络字节序格式
- **RAND_BYTE**: 随机字节(0-255),长度由=(length) 参数决定
- **RAND_DIGIT**: 随机数字0-9,长度由=(length) 参数决定
- **RAND_ALPHA**: 随机大写字母A-Z,长度由=(length) 参数决定
- **RAND_ALPHANUM**: 随机大写字母A-Z和随机数字0-9,长度由=(length) 参数决定
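这种字段替换的工作方式可以用几行 Python 来示意:把模板中的 `${NAME}` 或 `${RAND_XXX=n}` 占位符换成具体值。下面只是一个教学性的最简实现(其中 `FIELDS` 里的地址和端口取值是假设的示例,且只实现了其中几个字段),并非 ZMap 的实际代码:

```python
import random
import re
import string

# 假设的字段取值,仅作演示
FIELDS = {"SADDR": "10.0.0.9", "DADDR": "1.2.3.4",
          "SPORT": "40000", "DPORT": "5060"}
rng = random.Random(0)   # 固定种子便于复现

def render(template):
    """把 ${NAME} 或 ${RAND_ALPHA=n} / ${RAND_DIGIT=n} 替换为具体值。"""
    def sub(match):
        name, length = match.group(1), match.group(2)
        if name == "RAND_ALPHA":
            return "".join(rng.choice(string.ascii_uppercase)
                           for _ in range(int(length)))
        if name == "RAND_DIGIT":
            return "".join(rng.choice(string.digits)
                           for _ in range(int(length)))
        return FIELDS[name]
    return re.sub(r"\$\{([A-Z_]+)(?:=(\d+))?\}", sub, template)

line = render("OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0")
```

比如上面的 SIP 模板首行渲染后,`sip:` 与 `@` 之间会是 8 个随机大写字母,`${DADDR}` 则被替换成目的地址。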
### 配置文件 ###
<a name="config"></a>

ZMap支持使用配置文件来代替在命令行上指定所有要求的选项。配置中可以通过每行指定一个长名称的选项和对应的值来创建:

    interface "eth1"
    source-ip 1.1.1.4-1.1.1.8
@ -324,11 +322,12 @@ ZMap支持使用配置文件代替在命令行上指定所有的需求选项。
    quiet
    summary

然后ZMap就可以按照配置文件并指定一些必要的附加参数运行了:

    $ zmap --config=~/.zmap.conf --target-port=443
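这种“每行一个长选项名加可选值”的格式解析起来很简单,下面用一小段 Python 示意(`parse_zmap_conf` 是为演示虚构的函数名,并非 ZMap 提供的接口;它把带引号的值去引号,把没有值的行当作布尔开关):

```python
def parse_zmap_conf(text):
    """解析 "选项 值" 形式的配置:忽略空行和 # 注释,
    去掉值两侧的引号,没有值的选项(如 quiet)视为 True。"""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        value = value.strip().strip('"')
        opts[key] = value if value else True
    return opts

conf = parse_zmap_conf('interface "eth1"\n'
                       'source-ip 1.1.1.4-1.1.1.8\n'
                       'quiet\n'
                       'summary\n')
```

解析上文的示例配置后,`conf["interface"]` 为 `eth1`,`quiet` 与 `summary` 都是布尔开关。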
### 详细 ###
<a name="verbosity" ></a>

ZMap可以在屏幕上生成多种类型的输出。默认情况下,ZMap将每隔1秒打印出相似的基本进度信息,可以通过设置`--quiet`来禁用。
@ -377,8 +376,9 @@ ZMap还支持在扫描之后打印出一个的可grep的汇总信息,类似于

    adv permutation-gen 4215763218

### 结果输出 ###
<a name="output" />

ZMap可以通过**输出模块**生成不同格式的结果。默认情况下,ZMap只支持**csv**的输出,但是可以通过编译支持**redis**和**json**。可以使用**输出过滤**来过滤这些发送到输出模块上的结果。输出模块输出的字段由用户指定。默认情况如果没有指定输出文件,ZMap将以csv格式返回结果,而不会生成特定结果。也可以编写自己的输出模块;请参阅[编写输出模块](#exteding)。

**-o, --output-file=p**
@ -388,14 +388,13 @@ ZMap可以通过**输出模块**生成不同格式的结果。默认情况下,

调用自定义输出模块

**-f, --output-fields=p**

以逗号分隔的输出的字段列表

**--output-filter=filter**

对给定探测模块的输出字段实施过滤

**--list-output-modules**
@ -403,17 +402,17 @@ ZMap有很多区域,它可以基于IP地址输出。这些区域可以通过

**--list-output-fields**

列出给定的探测模块的可用输出字段

#### 输出字段 ####

除了IP地址之外,ZMap还有很多输出字段。这些字段可以通过在给定探测模块上运行`--list-output-fields`来查看。

    $ zmap --probe-module="tcp_synscan" --list-output-fields
    saddr           string: 应答包中的源IP地址
    saddr-raw          int: 网络字节格式的源IP地址
    daddr           string: 应答包中的目的IP地址
    daddr-raw          int: 网络字节格式的目的IP地址
    ipid               int: 应答包中的IP识别号
    ttl                int: 应答包中的ttl(存活时间)值
    sport              int: TCP 源端口
@ -426,7 +425,7 @@ ZMap有很多区域,它可以基于IP地址输出。这些区域可以通过
    repeat             int: 是否是来自主机的重复响应
    cooldown           int: 是否是在冷却时间内收到的响应
    timestamp-str   string: 响应抵达时的时间戳,使用ISO8601格式
    timestamp-ts       int: 响应抵达时的时间戳,使用UNIX纪元开始的秒数
    timestamp-us       int: 时间戳的微秒部分(即 'timestamp-ts' 之后的微秒数)

可以通过使用`--output-fields=fields`或`-f`来选择输出字段,任意组合的输出字段可以被指定为逗号分隔的列表。例如:
@ -435,31 +434,32 @@ ZMap有很多区域,它可以基于IP地址输出。这些区域可以通过

#### 过滤输出 ####

在传到输出模块之前,探测模块生成的结果可以先过滤。过滤是针对探测模块的输出字段的。过滤使用类似于SQL的简单过滤语法写成,通过ZMap的**--output-filter**选项来指定。输出过滤通常用于过滤掉重复的结果,或仅传输成功的响应到输出模块。

过滤表达式的形式为`<字段名> <操作符> <值>`。`<值>`的类型必须是一个字符串或一串无符号整数,并且与`<字段名>`的类型匹配。对于整数比较,有效的操作符是`=, !=, <, >, <=, >=`;字符串比较的操作符是`=`和`!=`。`--list-output-fields`可以打印那些可供探测模块选择的字段和类型,然后退出。

复合型的过滤操作,可以通过使用`&&`(逻辑与)和`||`(逻辑或)这样的运算符来组合出特殊的过滤操作。

**示例**

书写一则过滤,仅显示成功的、不重复的应答:

    --output-filter="success = 1 && repeat = 0"

过滤出RST分类并且TTL大于10的包,或者SYNACK分类的包:

    --output-filter="(classification = rst && ttl > 10) || classification = synack"
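这种过滤语法的求值逻辑可以用一个很小的递归下降解析器来说明。下面是一个教学用的 Python 极简实现(`evaluate` 这个函数是为演示虚构的,并非 ZMap 内部 C 解析器的移植;它只支持上文描述的比较操作符、`&&`/`||` 与括号):

```python
import re

TOKEN = re.compile(r"\s*(\(|\)|&&|\|\||!=|<=|>=|<|>|=|[\w.]+)")

def evaluate(expr, record):
    """对一条扫描结果(dict)求值 ZMap 风格的输出过滤表达式。"""
    expr = expr.strip()
    toks, pos = [], 0
    while pos < len(expr):                 # 词法分析
        m = TOKEN.match(expr, pos)
        if not m:
            raise ValueError("无法解析: " + expr[pos:])
        toks.append(m.group(1))
        pos = m.end()

    def atom(i):                           # <字段> <操作符> <值> 或括号子式
        if toks[i] == "(":
            val, i = orexpr(i + 1)
            return val, i + 1              # 跳过右括号
        field, op, raw = toks[i], toks[i + 1], toks[i + 2]
        lhs = record[field]
        rhs = int(raw) if isinstance(lhs, int) else raw
        result = {"=": lhs == rhs, "!=": lhs != rhs,
                  "<": lhs < rhs, ">": lhs > rhs,
                  "<=": lhs <= rhs, ">=": lhs >= rhs}[op]
        return result, i + 3

    def andexpr(i):                        # && 优先于 ||
        val, i = atom(i)
        while i < len(toks) and toks[i] == "&&":
            rhs, i = atom(i + 1)
            val = val and rhs
        return val, i

    def orexpr(i):
        val, i = andexpr(i)
        while i < len(toks) and toks[i] == "||":
            rhs, i = andexpr(i + 1)
            val = val or rhs
        return val, i

    return orexpr(0)[0]
```

用上文的两个示例表达式试一下:`success = 1 && repeat = 0` 只放行成功且不重复的记录;带括号的复合表达式则放行 SYNACK 分类、或 TTL 大于 10 的 RST 分类。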
#### CSV ####

csv模块将会生成以逗号分隔各个要求输出字段的文件。例如,以下的指令将生成名为`output.csv`的CSV文件。

    $ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv

----------

    #响应, 源地址, 目的地址, 源端口, 目的端口, 序列号, 应答, 是否是冷却模式, 是否重复, 时间戳
    response, saddr, daddr, sport, dport, seq, ack, in_cooldown, is_repeat, timestamp
    synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
    rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
    rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
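这种带表头的 CSV 输出可以直接交给任何 CSV 库做后续处理。下面用 Python 标准库解析上面的示例输出(这里把示例行内嵌成字符串,实际使用时换成读取 `output.csv` 即可;`skipinitialspace=True` 用来吃掉逗号后的空格):

```python
import csv
import io

# 上文示例中的 ZMap CSV 输出样本
sample = """\
response, saddr, daddr, sport, dport, seq, ack, in_cooldown, is_repeat, timestamp
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
"""

rows = list(csv.DictReader(io.StringIO(sample), skipinitialspace=True))
# 只挑出回了 SYN+ACK 的主机地址
synack_hosts = [r["saddr"] for r in rows if r["response"] == "synack"]
```

对实际扫描结果做同样的筛选,就得到了所有确认开放该端口的主机列表。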
@ -472,13 +472,20 @@ csv模块将会生成以逗号分隔各输出请求字段的文件。例如,

#### Redis ####

Redis输出模块允许将地址添加到一个Redis队列,而不是保存到文件,这让ZMap可以与之后的处理工具结合使用。

**注意!** ZMap默认不会编译Redis功能。如果你从源码构建ZMap,可以在CMake的时候加上`-DWITH_REDIS=ON`来增加Redis支持。

#### JSON ####

JSON输出模块用起来类似于CSV模块,只是以JSON格式写入到文件。JSON文件能轻松地导入到其它可以读取JSON的程序中。

**注意!** ZMap默认不会编译JSON功能。如果你从源码构建ZMap,可以在CMake的时候加上`-DWITH_JSON=ON`来增加JSON支持。

### 黑名单和白名单 ###
<a name="blacklisting"></a>

ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名单和白名单参数,它将会扫描所有的IPv4地址(包括本地的、保留的以及组播地址)。如果指定了黑名单文件,那么在黑名单中的网络前缀将不再扫描;如果指定了白名单文件,只有那些网络前缀在白名单内的才会扫描。白名单和黑名单文件可以协同使用;黑名单优先于白名单(例如:如果您在白名单中指定了10.0.0.0/8并在黑名单中指定了10.1.0.0/16,那么10.1.0.0/16将不会扫描)。白名单和黑名单文件可以在命令行中指定,如下所示:

**-b, --blacklist-file=path**
@ -488,7 +495,7 @@ ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名

文件用于记录限制扫描的子网,以CIDR的表示法,例如192.168.0.0/16

黑名单文件的每行都需要以CIDR的表示格式书写,一行一个网络前缀。允许使用`#`加以备注。例如:

    # IANA(英特网编号管理局)记录的用于特殊目的的IPv4地址
    # http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
@ -501,14 +508,14 @@ ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名
    169.254.0.0/16 # RFC3927: 本地链路地址
    172.16.0.0/12 # RFC1918: 私有地址
    192.0.0.0/24 # RFC6890: IETF协议预留
    192.0.2.0/24 # RFC5737: 测试地址1
    192.88.99.0/24 # RFC3068: IPv6转换到IPv4的任播
    192.168.0.0/16 # RFC1918: 私有地址
    192.18.0.0/15 # RFC2544: 检测地址
    198.51.100.0/24 # RFC5737: 测试地址2
    203.0.113.0/24 # RFC5737: 测试地址3
    240.0.0.0/4 # RFC1112: 预留地址
    255.255.255.255/32 # RFC0919: 受限广播地址

    # IANA记录的用于组播的地址空间
    # http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
@ -516,13 +523,14 @@ ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名
    224.0.0.0/4 # RFC5771: 组播/预留地址

如果您只是想扫描因特网中随机的一部分地址,使用[抽样](#ratelimiting)检出,来代替使用白名单和黑名单。

**注意!** ZMap默认设置使用`/etc/zmap/blacklist.conf`作为黑名单文件,其中包含有本地的地址空间和预留的IP空间。通过编辑`/etc/zmap/zmap.conf`可以改变默认的配置。
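“黑名单优先于白名单”这条判定规则,可以用 Python 标准库的 `ipaddress` 模块几行就验证清楚。下面的 `should_scan` 是为演示虚构的函数(名单内容取自上文的 10.0.0.0/8 白名单、10.1.0.0/16 黑名单的例子),并非 ZMap 的实际实现:

```python
import ipaddress

# 上文例子:白名单 10.0.0.0/8,黑名单 10.1.0.0/16(再加一个组播段)
whitelist = [ipaddress.ip_network("10.0.0.0/8")]
blacklist = [ipaddress.ip_network("10.1.0.0/16"),
             ipaddress.ip_network("224.0.0.0/4")]

def should_scan(addr):
    """黑名单优先于白名单;白名单为空时视为允许所有地址。"""
    ip = ipaddress.ip_address(addr)
    if any(ip in net for net in blacklist):
        return False
    return not whitelist or any(ip in net for net in whitelist)
```

于是 10.2.3.4 会被扫描(在白名单内、不在黑名单内),而 10.1.3.4(黑名单命中)和 8.8.8.8(不在白名单内)都会被排除。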
### 速度限制与抽样 ###
<a name="ratelimiting"></a>

默认情况下,ZMap将以您当前网卡所能支持的最快速度扫描。以我们对于常用硬件的经验,这通常是理论上Gbit以太网速度的95-98%,这可能比您的上游提供商可处理的速度还要快。ZMap不会自动地根据您的上游提供商来调整发送速率,您可能需要手动调整发送速率来减少丢包和错误结果。

**-r, --rate=pps**
@ -530,9 +538,9 @@ ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名

**-B, --bandwidth=bps**

设置发送速率,以比特/秒为单位(支持G、M和K后缀)。这会覆盖--rate参数。
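从带宽换算到发包速率只是一个除法:速率 = 带宽 ÷ 单个探测包占用的线路比特数。下面的小函数是一个示意(函数名和 84 字节这一包大小取值都是假设的,仅用来说明量级;实际占用取决于探测模块和链路层开销):

```python
def bandwidth_to_pps(bandwidth, packet_bits):
    """把 -B 风格的带宽字符串(支持 G/M/K 后缀)换算成发包速率(包/秒)。
    packet_bits 是单个探测包占用的线路比特数。"""
    units = {"G": 10 ** 9, "M": 10 ** 6, "K": 10 ** 3}
    suffix = bandwidth[-1].upper()
    if suffix in units:
        bps = float(bandwidth[:-1]) * units[suffix]
    else:
        bps = float(bandwidth)
    return int(bps // packet_bits)

# 假设一个 TCP SYN 探测在线路上占 84 字节(最小以太网帧加开销)
rate = bandwidth_to_pps("10M", packet_bits=84 * 8)
```

按这个假设,`-B 10M` 大约相当于 `-r 14880`;这也解释了为何千兆链路上能达到每秒上百万包的量级。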
ZMap同样支持对IPv4地址空间进行指定最大目标数和/或最长运行时间的随机采样。由于每次对主机的扫描次序都是通过随机排列生成的,限制扫描的主机个数为N就相当于随机抽选N个主机。命令选项如下:

**-n, --max-targets=n**
@ -540,7 +548,7 @@ ZMap同样支持对IPv4地址空间进行指定最大目标数和/或最长运

**-N, --max-results=n**

结果上限数量(累积收到这么多结果后退出)

**-t, --max-runtime=s**
@ -554,48 +562,46 @@ ZMap同样支持对IPv4地址空间进行指定最大目标数和/或最长运

    zmap -p 443 -s 3 -n 1000000 -o results

为了确定哪一百万主机将要被扫描,您可以执行预扫:只打印数据包而非发送,并非真的实施扫描。

    zmap -p 443 -s 3 -n 1000000 --dryrun | grep daddr
        | awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
### 发送多个数据包 ###
|
### 发送多个数据包 ###
|
||||||
|
|
||||||
ZMap支持想每个主机发送多个扫描。增加这个数量既增加了扫描时间又增加了到达的主机数量。然而,我们发现,增加扫描时间(每个额外扫描的增加近100%)远远大于到达的主机数量(每个额外扫描的增加近1%)。
|
ZMap支持向每个主机发送多个探测。增加这个数量既增加了扫描时间又增加了到达的主机数量。然而,我们发现,增加的扫描时间(每个额外扫描的增加近100%)远远大于到达的主机数量(每个额外扫描的增加近1%)。
|
||||||
|
|
||||||
**-P, --probes=n**
|
**-P, --probes=n**
|
||||||
|
|
||||||
向每个IP发出的独立扫描个数(默认值=1)
|
向每个IP发出的独立探测个数(默认值=1)
|
||||||
|
|
||||||
----------
|
### 示例应用 ###
|
||||||
|
|
||||||
### 示例应用程序 ###
|
ZMap专为向大量主机发起连接并寻找那些正确响应而设计。然而,我们意识到许多用户需要执行一些后续处理,如执行应用程序级别的握手。例如,用户在80端口实施TCP SYN扫描也许想要实施一个简单的GET请求,还有用户扫描443端口可能希望完成TLS握手。
|
||||||
|
|
||||||
ZMap专为向大量主机发启连接并寻找那些正确响应而设计。然而,我们意识到许多用户需要执行一些后续处理,如执行应用程序级别的握手。例如,用户在80端口实施TCP SYN扫描可能只是想要实施一个简单的GET请求,还有用户扫描443端口可能是对TLS握手如何完成感兴趣。
|
|
||||||
|
|
||||||
#### Banner获取 ####
|
#### Banner获取 ####
|
||||||
|
|
||||||
我们收录了一个示例程序,banner-grab,伴随ZMap使用可以让用户从监听状态的TCP服务器上接收到消息。Banner-grab连接到服务上,任意的发送一个消息,然后打印出收到的第一个消息。这个工具可以用来获取banners例如HTTP服务的回复的具体指令,telnet登陆提示,或SSH服务的字符串。
|
我们收录了一个示例程序,banner-grab,伴随ZMap使用可以让用户从监听状态的TCP服务器上接收到消息。Banner-grab连接到提供的服务器上,发送一个可选的消息,然后打印出收到的第一个消息。这个工具可以用来获取banner,例如HTTP服务的回复的具体指令,telnet登陆提示,或SSH服务的字符串。
|
||||||
|
|
||||||
这个例子寻找了1000个监听80端口的服务器,并向每个发送一个简单的GET请求,存储他们的64位编码响应至http-banners.out
|
下面的例子寻找了1000个监听80端口的服务器,并向每个发送一个简单的GET请求,存储他们的 base64 编码响应至http-banners.out
|
||||||
|
|
||||||
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > out
|
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > out
|
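banner-grab-tcp 保存的响应是 base64 编码的,读取结果时需要先解码。下面是一个解码示例(其中的字符串是笔者构造的 "HTTP/1.1 200 OK" 的 base64 编码,并非真实扫描数据):

```shell
banner='SFRUUC8xLjEgMjAwIE9L'   # "HTTP/1.1 200 OK" 的 base64 编码(示例数据)
echo "$banner" | base64 -d
```

处理整份输出文件时,可以对每行的 banner 字段做同样的 `base64 -d` 解码。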
||||||
|
|
||||||
如果想知道更多使用`banner-grab`的细节,可以参考`examples/banner-grab`中的README文件。
|
如果想知道更多使用`banner-grab`的细节,可以参考`examples/banner-grab`中的README文件。
|
||||||
|
|
||||||
**注意!** ZMap和banner-grab(如例子中)同时运行可能会比较显著的影响对方的表现和精度。确保不让ZMap充满banner-grab-tcp的并发连接,不然banner-grab将会落后于标准输入的读入,导致屏蔽编写标准输出。我们推荐使用较慢扫描速率的ZMap,同时提升banner-grab-tcp的并发性至3000以内(注意 并发连接>1000需要您使用`ulimit -SHn 100000`和`ulimit -HHn 100000`来增加每个问进程的最大文件描述)。当然,这些参数取决于您服务器的性能,连接成功率(hit-rate);我们鼓励开发者在运行大型扫描之前先进行小样本的试验。
|
**注意!** ZMap和banner-grab(如例子中)同时运行可能会比较显著地影响对方的性能和精度。确保不让ZMap占满banner-grab-tcp的并发连接,不然banner-grab将会落后于标准输入的读入,导致阻塞ZMap的输出写入。我们推荐使用较慢扫描速率的ZMap,同时提升banner-grab-tcp的并发性至3000以内(注意 并发连接>1000需要您使用`ulimit -SHn 100000`和`ulimit -HHn 100000`来增加每个进程的最大文件描述符数量)。当然,这些参数取决于您服务器的性能、连接成功率(hit-rate);我们鼓励开发者在运行大型扫描之前先进行小样本的试验。
|
||||||
|
|
||||||
#### 建立套接字 ####
|
#### 建立套接字 ####
|
||||||
|
|
||||||
我们也收录了另一种形式的banner-grab,就是forge-socket, 重复利用服务器发出的SYN-ACK,连接并最终取得banner。在`banner-grab-tcp`中,ZMap向每个服务器发送一个SYN,并监听服务器发回的带有SYN+ACK的应答。运行ZMap主机的内核接受应答后发送RST,因为有没有处于活动状态的连接与该包关联。程序banner-grab必须在这之后创建一个新的TCP连接到从服务器获取数据。
|
我们也收录了另一种形式的banner-grab,就是forge-socket,它重复利用服务器发出的SYN+ACK,连接并最终取得banner。在`banner-grab-tcp`中,ZMap向每个服务器发送一个SYN,并监听服务器发回的带有SYN+ACK的应答。运行ZMap主机的内核接受应答后发送RST,这样就没有与该包关联的活动连接。程序banner-grab必须在这之后创建一个新的TCP连接,再从服务器获取数据。
|
||||||
|
|
||||||
在forge-socket中,我们以同样的名字利用内核模块,这使我们可以创建任意参数的TCP连接。这样可以抑制内核的RST包,并且通过创建套接字取代它可以重用SYN+ACK的参数,通过这个套接字收发数据和我们平时使用的连接套接字并没有什么不同。
|
在forge-socket中,我们利用内核中同名的模块,使我们可以创建任意参数的TCP连接。可以通过抑制内核的RST包,并重用SYN+ACK的参数取代该包而创建套接字,通过这个套接字收发数据和我们平时使用的连接套接字并没有什么不同。
|
||||||
|
|
||||||
要使用forge-socket,您需要forge-socket内核模块,从[github][1]上可以获得。您需要git clone `git@github.com:ewust/forge_socket.git`至ZMap源码根目录,然后cd进入forge_socket 目录,运行make。以root身份安装带有`insmod forge_socket.ko` 的内核模块。
|
要使用forge-socket,您需要forge-socket内核模块,从[github][1]上可以获得。您需要`git clone git@github.com:ewust/forge_socket.git`至ZMap源码根目录,然后cd进入forge_socket目录,运行make。以root身份运行`insmod forge_socket.ko` 来安装该内核模块。
|
||||||
|
|
||||||
您也需要告知内核不要发送RST包。一个简单的在全系统禁用RST包的方法是**iptables**。以root身份运行`iptables -A OUTPUT -p tcp -m tcp --tcp-flgas RST,RST RST,RST -j DROP`即可,当然您也可以加上一项--dport X将禁用局限于所扫描的端口(X)上。扫描完成后移除这项设置,以root身份运行`iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`即可。
|
您也需要告知内核不要发送RST包。一个简单的在全系统禁用RST包的方法是使用**iptables**。以root身份运行`iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`即可,当然您也可以加上一项`--dport X`将禁用局限于所扫描的端口(X)上。扫描完成后移除这项设置,以root身份运行`iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`即可。
|
||||||
|
|
||||||
现在应该可以建立forge-socket的ZMap示例程序了。运行需要使用**extended_file**ZMap输出模块:
|
现在应该可以建立forge-socket的ZMap示例程序了。运行时需要使用 ZMap 的 **extended_file** [输出模块](#outputmodule):
|
||||||
|
|
||||||
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
|
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
|
||||||
./forge-socket -c 500 -d ./http-req > ./http-banners.out
|
./forge-socket -c 500 -d ./http-req > ./http-banners.out
|
||||||
@ -605,8 +611,9 @@ ZMap专为向大量主机发启连接并寻找那些正确响应而设计。然
|
|||||||
----------
|
----------
|
||||||
|
|
||||||
### 编写探测和输出模块 ###
|
### 编写探测和输出模块 ###
|
||||||
|
<a name="extending"></a>
|
||||||
|
|
||||||
ZMap可以通过**probe modules**扩展支持不同类型的扫描,通过**output modules**追加不同类型的输出结果。注册过的探测和输出模块可以在命令行中列出:
|
ZMap可以通过**探测模块**来扩展支持不同类型的扫描,通过**输出模块**增加不同类型的输出结果。注册过的探测和输出模块可以在命令行中列出:
|
||||||
|
|
||||||
**--list-probe-modules**
|
**--list-probe-modules**
|
||||||
|
|
||||||
@ -617,95 +624,95 @@ ZMap可以通过**probe modules**扩展支持不同类型的扫描,通过**out
|
|||||||
列出安装过的输出模块
|
列出安装过的输出模块
|
||||||
|
|
||||||
#### 输出模块 ####
|
#### 输出模块 ####
|
||||||
|
<a name="outputmodule"></a>
|
||||||
|
|
||||||
ZMap的输出和输出后处理可以通过执行和注册扫描的**output modules**来扩展。输出模块在接收每一个应答包时都会收到一个回调。然而提供的默认模块仅提供简单的输出,这些模块同样支持扩展扫描后处理(例如:重复跟踪或输出AS号码来代替IP地址)
|
ZMap的输出和输出后处理可以通过实现和注册扫描器的**输出模块**来扩展。输出模块在接收每一个应答包时都会收到一个回调。然而默认提供的模块仅提供简单的输出,这些模块同样支持更多的输出后处理(例如:重复跟踪或输出AS号码来代替IP地址)。
|
||||||
|
|
||||||
通过定义一个新的output_module机构体来创建输出模块,并在[output_modules.c][2]中注册:
|
通过定义一个新的output_module结构来创建输出模块,并在[output_modules.c][2]中注册:
|
||||||
|
|
||||||
typedef struct output_module {
|
typedef struct output_module {
|
||||||
const char *name; // 在命令行如何引出输出模块
|
const char *name; // 在命令行如何引用输出模块
|
||||||
unsigned update_interval; // 以秒为单位的更新间隔
|
unsigned update_interval; // 以秒为单位的更新间隔
|
||||||
|
|
||||||
output_init_cb init; // 在扫描初始化的时候调用
|
output_init_cb init; // 在扫描器初始化的时候调用
|
||||||
output_update_cb start; // 在开始的扫描的时候调用
|
output_update_cb start; // 在扫描器开始的时候调用
|
||||||
output_update_cb update; // 每次更新间隔调用,秒为单位
|
output_update_cb update; // 每次更新间隔调用,秒为单位
|
||||||
output_update_cb close; // 扫描终止后调用
|
output_update_cb close; // 扫描终止后调用
|
||||||
|
|
||||||
output_packet_cb process_ip; // 接收到应答时调用
|
output_packet_cb process_ip; // 接收到应答时调用
|
||||||
|
|
||||||
const char *helptext; // 会在--list-output-modules时打印在屏幕啥
|
const char *helptext; // 会在--list-output-modules时打印在屏幕上
|
||||||
|
|
||||||
} output_module_t;
|
} output_module_t;
|
||||||
|
|
||||||
输出模块必须有名称,通过名称可以在命令行、普遍实施的`success_ip`和常见的`other_ip`回调中使用模块。process_ip的回调由每个收到的或经由**probe module**过滤的应答包调用。应答是否被认定为成功并不确定(比如,他可以是一个TCP的RST)。这些回调必须定义匹配`output_packet_cb`定义的函数:
|
输出模块必须有名称,通过名称可以在命令行调用,并且通常会实现`success_ip`和常见的`other_ip`回调。process_ip的回调由每个收到并经由**探测模块**过滤的应答包调用。应答是否被认定为成功并不确定(比如,它可以是一个TCP的RST)。这些回调必须定义匹配`output_packet_cb`定义的函数:
|
||||||
|
|
||||||
int (*output_packet_cb) (
|
int (*output_packet_cb) (
|
||||||
|
|
||||||
ipaddr_n_t saddr, // network-order格式的扫描主机IP地址
|
ipaddr_n_t saddr, // 网络字节序的发起扫描主机IP地址
|
||||||
ipaddr_n_t daddr, // network-order格式的目的IP地址
|
ipaddr_n_t daddr, // 网络字节序的目的IP地址
|
||||||
|
|
||||||
const char* response_type, // 发送模块的数据包分类
|
const char* response_type, // 发送模块的数据包分类
|
||||||
|
|
||||||
int is_repeat, // {0: 主机的第一个应答, 1: 后续的应答}
|
int is_repeat, // {0: 主机的第一个应答, 1: 后续的应答}
|
||||||
int in_cooldown, // {0: 非冷却状态, 1: 扫描处于冷却中}
|
int in_cooldown, // {0: 非冷却状态, 1: 扫描器处于冷却中}
|
||||||
|
|
||||||
const u_char* packet, // 指向结构体iphdr中IP包的指针
|
const u_char* packet, // 指向IP包的iphdr结构体的指针
|
||||||
size_t packet_len // 包的长度以字节为单位
|
size_t packet_len // 包的长度,以字节为单位
|
||||||
);
|
);
|
||||||
|
|
||||||
输出模块还可以通过注册回调执行在扫描初始化的时候(诸如打开输出文件的任务),扫描开始阶段(诸如记录黑名单的任务),在常规间隔实施(诸如程序升级的任务)在关闭的时候(诸如关掉所有打开的文件描述符。这些回调提供完整的扫描配置入口和实时状态:
|
输出模块还可以通过注册回调,在扫描初始化的时候(诸如打开输出文件)、在扫描开始阶段(诸如记录黑名单)、在扫描的常规间隔(诸如状态更新)以及在关闭的时候(诸如关掉所有打开的文件描述符)执行任务。提供的这些回调可以完整地访问扫描配置和当前状态:
|
||||||
|
|
||||||
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
|
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
|
||||||
|
|
||||||
被定义在[output_modules.h][3]中。在[src/output_modules/module_csv.c][4]中有可用示例。
|
这些定义在[output_modules.h][3]中。在[src/output_modules/module_csv.c][4]中有可用示例。
|
||||||
|
|
||||||
#### 探测模块 ####
|
#### 探测模块 ####
|
||||||
|
<a name="probemodule"></a>
|
||||||
|
|
||||||
数据包由探测模块构造,由此可以创建抽象包并对应答分类。ZMap默认拥有两个扫描模块:`tcp_synscan`和`icmp_echoscan`。默认情况下,ZMap使用`tcp_synscan`来发送TCP SYN包并对每个主机的并对每个主机的响应分类,如打开时(收到SYN+ACK)或关闭时(收到RST)。ZMap允许开发者编写自己的ZMap探测模块,使用如下的API:
|
数据包由**探测模块**构造,探测模块负责抽象地创建数据包并对应答进行分类。ZMap默认拥有两个扫描模块:`tcp_synscan`和`icmp_echoscan`。默认情况下,ZMap使用`tcp_synscan`来发送TCP SYN包,并对每个主机的响应进行分类,如开放(收到SYN+ACK)或关闭(收到RST)。ZMap允许开发者编写自己的ZMap探测模块,使用如下的API:
|
||||||
|
|
||||||
任何类型的扫描的实施都需要在`send_module_t`结构体内开发和注册必要的回调:
|
任何类型的扫描都必须通过开发和注册`send_module_t`结构中的回调来实现:
|
||||||
|
|
||||||
typedef struct probe_module {
|
typedef struct probe_module {
|
||||||
const char *name; // 如何在命令行调用扫描
|
const char *name; // 如何在命令行调用扫描
|
||||||
size_t packet_length; // 探测包有多长(必须是静态的)
|
size_t packet_length; // 探测包有多长(必须是静态的)
|
||||||
|
|
||||||
const char *pcap_filter; // 对收到的响应实施PCAP过滤
|
const char *pcap_filter; // 对收到的响应实施PCAP过滤
|
||||||
size_t pcap_snaplen; // maximum number of bytes for libpcap to capture
|
size_t pcap_snaplen; // libpcap 捕获的最大字节数
|
||||||
|
uint8_t port_args; // 设为1,如果ZMap需要用户指定--target-port
|
||||||
uint8_t port_args; // 设为1,如果需要使用ZMap的--target-port
|
|
||||||
// 用户指定
|
|
||||||
|
|
||||||
probe_global_init_cb global_initialize; // 在扫描初始化会时被调用一次
|
probe_global_init_cb global_initialize; // 在扫描初始化时被调用一次
|
||||||
probe_thread_init_cb thread_initialize; // 每个包缓存区的线程中被调用一次
|
probe_thread_init_cb thread_initialize; // 在每个(线程特定的)包缓冲区初始化时被调用一次
|
||||||
probe_make_packet_cb make_packet; // 每个主机更新包的时候被调用一次
|
probe_make_packet_cb make_packet; // 每个主机更新包的时候被调用一次
|
||||||
probe_validate_packet_cb validate_packet; // 每收到一个包被调用一次,
|
probe_validate_packet_cb validate_packet; // 每收到一个包被调用一次,
|
||||||
// 如果包无效返回0,
|
// 如果包无效返回0,
|
||||||
// 非零则覆盖。
|
// 非零则有效。
|
||||||
|
|
||||||
probe_print_packet_cb print_packet; // 如果在dry-run模式下被每个包都调用
|
probe_print_packet_cb print_packet; // 预扫(dry-run)模式下对每个包都会调用
|
||||||
probe_classify_packet_cb process_packet; // 由区分响应的接收器调用
|
probe_classify_packet_cb process_packet; // 由区分响应的接收器调用
|
||||||
probe_close_cb close; // 扫描终止后被调用
|
probe_close_cb close; // 扫描终止后被调用
|
||||||
|
|
||||||
fielddef_t *fields // 该模块指定的区域的定义
|
fielddef_t *fields // 该模块指定的字段的定义
|
||||||
int numfields // 区域的数量
|
int numfields // 字段的数量
|
||||||
|
|
||||||
} probe_module_t;
|
} probe_module_t;
|
||||||
|
|
||||||
在扫描操作初始化时会调用一次`global_initialize`,可以用来实施一些必要的全局配置和初始化操作。然而,`global_initialize`并不能访问报缓冲区,那里由线程指定。用以代替的,`thread_initialize`在每个发送线程初始化的时候被调用,提供对于缓冲区的访问,可以用来构建探测包和全局的源和目的值。此回调应用于构建宿主不可知分组结构,甚至只有特定值(如:目的主机和校验和),需要随着每个主机更新。例如,以太网头部信息在交换时不会变更(减去校验和是由NIC硬件计算的)因此可以事先定义以减少扫描时间开销。
|
在扫描操作初始化时会调用一次`global_initialize`,可以用来实施一些必要的全局配置和初始化操作。然而,`global_initialize`并不能访问包缓冲区,那是线程特定的。相反,`thread_initialize`在每个发送线程初始化的时候被调用,提供对于缓冲区的访问,可以用来构建探测包以及全局的源和目的值。此回调应用于构建与主机无关的包结构,只有特定值(如:目的主机和校验和)需要随着每个主机更新。例如,以太网头部信息在整个扫描中不会变更(除了由NIC硬件计算的校验和),因此可以事先定义以减少扫描时间开销。
|
||||||
|
|
||||||
调用回调参数`make_packet是为了让被扫描的主机允许**probe module**更新主机指定的值,同时提供IP地址、一个非透明的验证字符串和探测数目(如下所示)。探测模块负责在探测中放置尽可能多的验证字符串,以至于当服务器返回的应答为空时,探测模块也能验证它的当前状态。例如,针对TCP SYN扫描,tcp_synscan探测模块会使用TCP源端口和序列号的格式存储验证字符串。响应包(SYN+ACKs)将包含预期的值包含目的端口和确认号。
|
`make_packet`回调会为每个被扫描的主机调用,以便**探测模块**更新主机特定的值,调用时会提供IP地址、一个不透明的验证字符串和探测序号(如下所示)。探测模块负责在探测中放置尽可能多的验证字符串,这样即便服务器返回的应答内容很少,探测模块也能验证其有效性。例如,针对TCP SYN扫描,tcp_synscan探测模块会把验证字符串编码进TCP源端口和序列号。响应包(SYN+ACK)将在目的端口和确认号中包含预期的值。
|
||||||
|
|
||||||
int make_packet(
|
int make_packet(
|
||||||
void *packetbuf, // 包的缓冲区
|
void *packetbuf, // 包的缓冲区
|
||||||
ipaddr_n_t src_ip, // network-order格式源IP
|
ipaddr_n_t src_ip, // 网络字节序的源IP
|
||||||
ipaddr_n_t dst_ip, // network-order格式目的IP
|
ipaddr_n_t dst_ip, // 网络字节序的目的IP
|
||||||
uint32_t *validation, // 探测中的确认字符串
|
uint32_t *validation, // 探测中的验证字符串
|
||||||
int probe_num // 如果向每个主机发送多重探测,
|
int probe_num // 如果向每个主机发送多重探测,
|
||||||
// 该值为对于主机我们
|
// 该值为我们对于该主机
|
||||||
// 正在实施的探测数目
|
// 正在发送的探测数目
|
||||||
);
|
);
|
||||||
|
|
||||||
扫描模块也应该定义`pcap_filter`、`validate_packet`和`process_packet`。只有符合PCAP过滤的包才会被扫描。举个例子,在一个TCP SYN扫描的情况下,我们只想要调查TCP SYN / ACK或RST TCP数据包,并利用类似`tcp && tcp[13] & 4 != 0 || tcp[13] == 18`的过滤方法。`validate_packet`函数将会被每个满足PCAP过滤条件的包调用。如果验证返回的值非零,将会调用`process_packet`函数,并使用包中被定义成的`fields`字段和数据填充字段集。如下代码为TCP synscan探测模块处理了一个数据包。
|
扫描模块也应该定义`pcap_filter`、`validate_packet`和`process_packet`。只有符合PCAP过滤器的包才会被扫描器处理。举个例子,在一个TCP SYN扫描的情况下,我们只想要调查TCP SYN/ACK或TCP RST数据包,可以利用类似`tcp && tcp[13] & 4 != 0 || tcp[13] == 18`的过滤器。`validate_packet`函数将会被每个满足PCAP过滤条件的包调用。如果验证返回的值非零,将会调用`process_packet`函数,并使用`fields`定义的字段和包中的数据填充字段集。举个例子,如下代码为TCP synscan探测模块处理了一个数据包。
|
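过滤式中的 `tcp[13]` 指 TCP 头第 13 个字节,即标志位字节。下面的小片段(与 ZMap 代码无关,仅用 shell 算术演示)验证 SYN+ACK 与 RST 对应的取值:

```shell
# TCP 标志位:FIN=0x01 SYN=0x02 RST=0x04 PSH=0x08 ACK=0x10
echo $(( 0x02 | 0x10 ))   # SYN+ACK 的标志位字节,对应 tcp[13] == 18
echo $(( 0x04 & 4 ))      # RST 置位时 tcp[13] & 4 != 0
```

第一行输出 `18`,第二行输出 `4`(非零),正好对应过滤式的两个分支。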
||||||
|
|
||||||
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
|
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
|
||||||
{
|
{
|
||||||
@ -733,7 +740,7 @@ ZMap的输出和输出后处理可以通过执行和注册扫描的**output modu
|
|||||||
via: https://zmap.io/documentation.html
|
via: https://zmap.io/documentation.html
|
||||||
|
|
||||||
译者:[martin2011qi](https://github.com/martin2011qi)
|
译者:[martin2011qi](https://github.com/martin2011qi)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||||
|
|
@ -1,80 +1,78 @@
|
|||||||
sevenot translating
|
10 大帮助你获得理想的职业的操作系统技能
|
||||||
10 Top Distributions in Demand to Get Your Dream Job
|
|
||||||
================================================================================
|
================================================================================
|
||||||
We are coming up with a series of five articles which aims at making you aware of the top skills which will help you in getting yours dream job. In this competitive world you can not rely on one skill. You need to have balanced set of skills. There is no measure of a balanced skill set except a few conventions and statistics which changes from time-to-time.
|
我们用5篇系列文章,来让人们了解那些可以帮助他们获得理想职业的顶级技能。在这个充满竞争的社会里,你不能仅仅依赖一项技能,你需要在多个职业技能上都有所涉猎。我们并不能权衡这些技能,但是我们可以参考这些几乎不变的惯例和统计数据。
|
||||||
|
|
||||||
The below article and remaining to follow is the result of close study of job boards, posting and requirements of various IT Companies across the globe of last three months. The statistics keeps on changing as the demand and market changes. We will try our best to update the list when there is any major changes.
|
下面的文章和紧跟其后的内容,是针对全球各大IT公司上一季度对员工技能要求的详细调查报告。统计数据真实的反映了需求和市场的变化。我们会尽力让这份报告保持时效性,特别是有明显变化的时候。这五篇系列文章是:
|
||||||
The Five articles of this series are…
|
|
||||||
|
|
||||||
- 10 Distributions in Demand to Get Your Dream Job
|
- 10大帮助你获得理想的职业的需求分布
|
||||||
- [10 Famous IT Skills in Demand That Will Get You Hired][1]
|
- [10大帮助你获得职位的著名 IT 技能][1]
|
||||||
- 10 Programming Skills That Will Help You to Get Dream Job
|
- [10大帮助你获得理想职位的项目技能][2]
|
||||||
- 10 IT Networking Protocols Skills to Land Your Dream Job
|
- [10大帮助你赢得理想职位的网络技能][3]
|
||||||
- 10 Professional Certifications in Demand That Will Get You Hired
|
- [10大帮助你获得理想职位的个人认证][4]
|
||||||
|
|
||||||
### 1. Windows ###
|
### 1. Windows ###
|
||||||
|
|
||||||
The operating System developed by Microsoft not only dominates the PC market but it is also the most sought OS skill from job perspective irrespective of all the odds and criticism that follows. It has shown a growth in demand which equals to 0.1% in the last quarter.
|
微软研发的Windows操作系统不仅在PC市场上占据龙头地位,而且不管随之而来的非议和批评如何,从职位角度来看它也是最抢手的操作系统技能。有资料显示上一季度其需求增长了0.1%。
|
||||||
|
|
||||||
Latest Stable Release : Windows 8.1
|
最新版本 : Windows 8.1
|
||||||
|
|
||||||
### 2. Red Hat Enterprise Linux ###
|
### 2. Red Hat Enterprise Linux ###
|
||||||
|
|
||||||
Red Hat Enterprise Linux is a commercial Linux Distribution developed by Red Hat Inc. It is one of the Most widely used Linux distribution specially in corporates and production. It comes at number two having a overall growth in demand which equals to 17% in the last quarter.
|
Red Hat Enterprise Linux 是一个商业的Linux发行版本,它由红帽公司研发。它是应用最广泛的Linux发行版本之一,特别是在企业和生产环境中。上一季度其整体需求上涨17%,位列第二。
|
||||||
|
|
||||||
Latest Stable Release : RedHat Enterprise Linux 7.1
|
最新版本 : RedHat Enterprise Linux 7.1
|
||||||
|
|
||||||
### 3. Solaris ###
|
### 3. Solaris ###
|
||||||
|
|
||||||
The UNIX Operating System developed by Sun Microsystems and now owned by Oracle Inc. comes at number three. It has shown a growth in demand which equals to 14% in the last quarter.
|
排在第三的是 Solaris UNIX操作系统,最初由Sun Microsystems公司研发,现由Oracle公司负责继续研发。其需求率在上一季度上涨了14%。
|
||||||
|
|
||||||
Latest Stable Release : Oracle Solaris 10 1/13
|
最新版本:Oracle Solaris 10 1/13
|
||||||
|
|
||||||
### 4. AIX ###
|
### 4. AIX ###
|
||||||
|
|
||||||
Advanced Interactive eXecutive is a Proprietary Unix Operating System by IBM stands at number four. It has shown a growth in demand which equals to 11% in the last quarter.
|
排在第四的是AIX,这是一款由IBM研发的专用 Unix 操作系统。在上一季度需求率上涨11%。
|
||||||
|
|
||||||
Latest Stable Release : AIX 7
|
最新版本 : AIX 7
|
||||||
|
|
||||||
### 5. Android ###
|
### 5. Android ###
|
||||||
|
|
||||||
One of the most widely used open source operating system designed specially for mobile, tablet computers and wearable gadgets is now owned by Google Inc. comes at number five. It has shown a growth in demand which equals to 4% in the last quarter.
|
排在第5的是谷歌公司研发的安卓系统,它是一款使用非常广泛的开源操作系统,专门为手机、平板电脑、可穿戴设备设计的。在上一季度需求率上涨4%。
|
||||||
|
|
||||||
Latest Stable Release : Android 5.1 aka Lollipop
|
最新版本 : Android 5.1(代号 Lollipop)
|
||||||
|
|
||||||
### 6. CentOS ###
|
### 6. CentOS ###
|
||||||
|
|
||||||
Community Enterprise Operating System is a Linux distribution derived from RedHat Enterprise Linux. It comes at sixth position in the list. Market has shown a growth in demand which is nearly 22% for CentOS, in the last quarter.
|
排在第6的是 CentOS,它是从 RedHat Enterprise Linux 衍生出的一个发行版本。在上一季度需求率上涨接近22%。
|
||||||
|
|
||||||
Latest Stable Release : CentOS 7
|
最新版本 : CentOS 7
|
||||||
|
|
||||||
### 7. Ubuntu ###
|
### 7. Ubuntu ###
|
||||||
|
|
||||||
The Linux Operating System designed for Humans and designed by Canonicals Ltd. Ubuntu comes at position seventh. It has shown a growth in demand which equals to 11% in the last quarter.
|
排在第7的是Ubuntu,这是一款由Canonical公司研发设计、面向个人用户的Linux系统。上一季度其需求率上涨11%。
|
||||||
Latest Stable Release :
|
|
||||||
|
|
||||||
- Ubuntu 14.10 (9 months security and maintenance update).
|
最新版本 :
|
||||||
|
|
||||||
|
- Ubuntu 14.10 (提供九个月的安全和维护更新)
|
||||||
- Ubuntu 14.04.2 LTS
|
- Ubuntu 14.04.2 LTS
|
||||||
|
|
||||||
### 8. Suse ###
|
### 8. Suse ###
|
||||||
|
|
||||||
Suse is a Linux operating System owned by Novell. The Linux distribution is famous for YaST configuration tool. It comes at position eight. It has shown a growth in demand which equals to 8% in the last quarter.
|
排在第8的是由Novell研发的 Suse,这款发行版本的Linux操作系统因为YaST 配置工具而闻名。其上一季度需求率上涨8%。
|
||||||
|
|
||||||
Latest Stable Release : 13.2
|
最新版本 : 13.2
|
||||||
|
|
||||||
### 9. Debian ###
|
### 9. Debian ###
|
||||||
|
|
||||||
The very famous Linux Operating System, mother of 100’s of Distro and closest to GNU comes at number nine. It has shown a decline in demand which is nearly 9% in the last quarter.
|
排在第9的是非常有名的 Linux 操作系统Debian,它是上百种Linux发行版之母,非常接近GNU理念。其需求率在上一季度下降了近9%。
|
||||||
|
|
||||||
Latest Stable Release : Debian 7.8
|
最新版本: Debian 7.8
|
||||||
|
|
||||||
### 10. HP-UX ###
|
### 10. HP-UX ###
|
||||||
|
|
||||||
The Proprietary UNIX Operating System designed by Hewlett-Packard comes at number ten. It has shown a decline in the last quarter by 5%.
|
排在第10的是Hewlett-Packard公司研发的专有 UNIX 操作系统HP-UX,其需求率在上一季度下降了5%。
|
||||||
|
|
||||||
Latest Stable Release : 11i v3 Update 13
|
最新版本 : 11i v3 Update 13
|
||||||
|
|
||||||
注:表格数据--不需要翻译--开始
|
|
||||||
<table border="0" cellspacing="0">
|
<table border="0" cellspacing="0">
|
||||||
<colgroup width="107"></colgroup>
|
<colgroup width="107"></colgroup>
|
||||||
<colgroup width="92"></colgroup>
|
<colgroup width="92"></colgroup>
|
||||||
@ -132,19 +130,21 @@ Latest Stable Release : 11i v3 Update 13
|
|||||||
</tr>
|
</tr>
|
||||||
</tbody>
|
</tbody>
|
||||||
</table>
|
</table>
|
||||||
注:表格数据--不需要翻译--结束
|
|
||||||
|
|
||||||
That’s all for now. I’ll be coming up with the next article of this series very soon. Till then stay tuned and connected to Tecmint. Don’t forget to provide us with your valuable feedback in the comments below. Like and share us and help us get spread.
|
以上便是全部信息,我会尽快推出本系列的下一篇文章,敬请关注Tecmint。不要忘了在下面的评论中留下您宝贵的反馈。喜欢的话请点赞并分享,帮助我们传播。
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
via: http://www.tecmint.com/top-distributions-in-demand-to-get-your-dream-job/
|
via: http://www.tecmint.com/top-distributions-in-demand-to-get-your-dream-job/
|
||||||
|
|
||||||
作者:[Avishek Kumar][a]
|
作者:[Avishek Kumar][a]
|
||||||
译者:[weychen](https://github.com/weychen)
|
译者:[sevenot](https://github.com/sevenot)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
[a]:http://www.tecmint.com/author/avishek/
|
[a]:http://www.tecmint.com/author/avishek/
|
||||||
[1]:http://www.tecmint.com/top-distributions-in-demand-to-get-your-dream-job/www.tecmint.com/famous-it-skills-in-demand-that-will-get-you-hired/
|
[1]:http://www.tecmint.com/famous-it-skills-in-demand-that-will-get-you-hired/
|
||||||
|
[2]:https://linux.cn/article-5303-1.html
|
||||||
|
[3]:http://www.tecmint.com/networking-protocols-skills-to-land-your-dream-job/
|
||||||
|
[4]:http://www.tecmint.com/professional-certifications-in-demand-that-will-get-you-hired/
|
@ -1,13 +1,14 @@
|
|||||||
在Linux中使用‘Systemctl’管理‘Systemd’服务和单元
|
systemctl 完全指南
|
||||||
================================================================================
|
================================================================================
|
||||||
Systemctl是一个systemd工具,主要负责控制systemd系统和服务管理器。
|
Systemctl是一个systemd工具,主要负责控制systemd系统和服务管理器。
|
||||||
|
|
||||||
Systemd是一个系统管理守护进程、工具和库的集合,用于取代System V初始进程。Systemd的功能是用于集中管理和配置类UNIX系统。
|
Systemd是一个系统管理守护进程、工具和库的集合,用于取代System V初始进程。Systemd的功能是用于集中管理和配置类UNIX系统。
|
||||||
|
|
||||||
在Linux生态系统中,Systemd被部署到了大多数的标准Linux发行版中,只有位数不多的几个尚未部署。Systemd通常是所有其它守护进程的父进程,但并非总是如此。
|
在Linux生态系统中,Systemd被部署到了大多数的标准Linux发行版中,只有为数不多的几个发行版尚未部署。Systemd通常是所有其它守护进程的父进程,但并非总是如此。
|
||||||
|
|
||||||
![Manage Linux Services Using Systemctl](http://www.tecmint.com/wp-content/uploads/2015/04/Manage-Linux-Services-Using-Systemctl.jpg)
|
![Manage Linux Services Using Systemctl](http://www.tecmint.com/wp-content/uploads/2015/04/Manage-Linux-Services-Using-Systemctl.jpg)
|
||||||
使用Systemctl管理Linux服务
|
|
||||||
|
*使用Systemctl管理Linux服务*
|
||||||
|
|
||||||
本文旨在阐明在运行systemd的系统上“如何控制系统和服务”。
|
本文旨在阐明在运行systemd的系统上“如何控制系统和服务”。
|
||||||
|
|
||||||
@ -41,11 +42,9 @@ Systemd是一个系统管理守护进程、工具和库的集合,用于取代S
|
|||||||
root 555 1 0 16:27 ? 00:00:00 /usr/lib/systemd/systemd-logind
|
root 555 1 0 16:27 ? 00:00:00 /usr/lib/systemd/systemd-logind
|
||||||
dbus 556 1 0 16:27 ? 00:00:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
|
dbus 556 1 0 16:27 ? 00:00:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
|
||||||
|
|
||||||
**注意**:systemd是作为父进程(PID=1)运行的。在上面带(-e)参数的ps命令输出中,选择所有进程,(-
|
**注意**:systemd是作为父进程(PID=1)运行的。在上面带(-e)参数的ps命令输出中,选择所有进程,(-a)选择除会话前导外的所有进程,并使用(-f)参数输出完整格式列表(即 -eaf)。
|
||||||
|
|
||||||
a)选择除会话前导外的所有进程,并使用(-f)参数输出完整格式列表(如 -eaf)。
|
也请注意上例中后随的方括号和例子中剩余部分。方括号表达式是grep的字符类表达式的一部分。
|
||||||
|
|
||||||
也请注意上例中后随的方括号和样例剩余部分。方括号表达式是grep的字符类表达式的一部分。
|
|
||||||
|
|
||||||
#### 4. 分析systemd启动进程 ####
|
#### 4. 分析systemd启动进程 ####
|
||||||
|
|
||||||
@ -147,7 +146,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
|||||||
1 loaded units listed. Pass --all to see loaded but inactive units, too.
|
1 loaded units listed. Pass --all to see loaded but inactive units, too.
|
||||||
To show all installed unit files use 'systemctl list-unit-files'.
|
To show all installed unit files use 'systemctl list-unit-files'.
|
||||||
|
|
||||||
#### 10. 检查某个单元(cron.service)是否启用 ####
|
#### 10. 检查某个单元(如 cron.service)是否启用 ####
|
||||||
|
|
||||||
# systemctl is-enabled crond.service
|
# systemctl is-enabled crond.service
|
||||||
|
|
||||||
@ -187,7 +186,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
|||||||
dbus-org.fedoraproject.FirewallD1.service enabled
|
dbus-org.fedoraproject.FirewallD1.service enabled
|
||||||
....
|
....
|
||||||
|
|
||||||
#### 13. Linux中如何启动、重启、停止、重载服务以及检查服务(httpd.service)状态 ####
|
#### 13. Linux中如何启动、重启、停止、重载服务以及检查服务(如 httpd.service)状态 ####
|
||||||
|
|
||||||
# systemctl start httpd.service
|
# systemctl start httpd.service
|
||||||
# systemctl restart httpd.service
|
# systemctl restart httpd.service
|
||||||
@ -214,15 +213,15 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
|||||||
Apr 28 17:21:30 tecmint systemd[1]: Started The Apache HTTP Server.
|
Apr 28 17:21:30 tecmint systemd[1]: Started The Apache HTTP Server.
|
||||||
Hint: Some lines were ellipsized, use -l to show in full.
|
Hint: Some lines were ellipsized, use -l to show in full.
|
||||||
|
|
||||||
**注意**:当我们使用systemctl的start,restart,stop和reload命令时,我们不会不会从终端获取到任何输出内容,只有status命令可以打印输出。
|
**注意**:当我们使用systemctl的start,restart,stop和reload命令时,我们不会从终端获取到任何输出内容,只有status命令可以打印输出。
|
||||||
|
|
||||||
#### 14. 如何激活服务并在启动时启用或禁用服务(系统启动时自动启动服务) ####
|
#### 14. 如何激活服务并在启动时启用或禁用服务(即系统启动时自动启动服务) ####
|
||||||
|
|
||||||
# systemctl is-active httpd.service
|
# systemctl is-active httpd.service
|
||||||
# systemctl enable httpd.service
|
# systemctl enable httpd.service
|
||||||
# systemctl disable httpd.service
|
# systemctl disable httpd.service
|
||||||
|
|
||||||
#### 15. 如何屏蔽(让它不能启动)或显示服务(httpd.service) ####
|
#### 15. 如何屏蔽(使之不能启动)或取消屏蔽服务(如 httpd.service) ####
|
||||||
|
|
||||||
# systemctl mask httpd.service
|
# systemctl mask httpd.service
|
||||||
ln -s '/dev/null' '/etc/systemd/system/httpd.service'
|
ln -s '/dev/null' '/etc/systemd/system/httpd.service'
|
||||||
@ -297,7 +296,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
|||||||
# systemctl enable tmp.mount
|
# systemctl enable tmp.mount
|
||||||
# systemctl disable tmp.mount
|
# systemctl disable tmp.mount
|
||||||
|
|
||||||
#### 20. 在Linux中屏蔽(让它不能启动)或显示挂载点 ####
|
#### 20. 在Linux中屏蔽(使之不能启用)或取消屏蔽挂载点 ####
|
||||||
|
|
||||||
# systemctl mask tmp.mount
|
# systemctl mask tmp.mount
|
||||||
|
|
||||||
@ -375,7 +374,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
|||||||
|
|
||||||
CPUShares=2000
|
CPUShares=2000
|
||||||
|
|
||||||
**注意**:当你为某个服务设置CPUShares,会自动创建一个以服务名命名的目录(httpd.service),里面包含了一个名为90-CPUShares.conf的文件,该文件含有CPUShare限制信息,你可以通过以下方式查看该文件:
|
**注意**:当你为某个服务设置CPUShares,会自动创建一个以服务名命名的目录(如 httpd.service),里面包含了一个名为90-CPUShares.conf的文件,该文件含有CPUShare限制信息,你可以通过以下方式查看该文件:
|
||||||
|
|
||||||
# vi /etc/systemd/system/httpd.service.d/90-CPUShares.conf
|
# vi /etc/systemd/system/httpd.service.d/90-CPUShares.conf
|
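该 drop-in 文件的内容大致如下(示例内容,实际以你系统上自动生成的文件为准):

```shell
# 模拟输出 90-CPUShares.conf 的内容(示例,并非读取真实系统文件)
printf '[Service]\nCPUShares=2000\n'
```

也就是说,systemd 把资源限制以标准单元配置片段的形式存放,手工编辑后执行 `systemctl daemon-reload` 即可生效。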
||||||
|
|
||||||
@ -528,13 +527,13 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
|||||||
#### 35. 启动运行等级5,即图形模式 ####
|
#### 35. 启动运行等级5,即图形模式 ####
|
||||||
|
|
||||||
# systemctl isolate runlevel5.target
|
# systemctl isolate runlevel5.target
|
||||||
OR
|
或
|
||||||
# systemctl isolate graphical.target
|
# systemctl isolate graphical.target
|
||||||
|
|
||||||
#### 36. 启动运行等级3,即多用户模式(命令行) ####
|
#### 36. 启动运行等级3,即多用户模式(命令行) ####
|
||||||
|
|
||||||
# systemctl isolate runlevel3.target
|
# systemctl isolate runlevel3.target
|
||||||
OR
|
或
|
||||||
# systemctl isolate multiuser.target
|
# systemctl isolate multiuser.target
|
||||||
|
|
||||||
#### 36. 设置多用户模式或图形模式为默认运行等级 ####
|
#### 36. 设置多用户模式或图形模式为默认运行等级 ####
|
||||||
@ -572,7 +571,7 @@ via: http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux
|
|||||||
|
|
||||||
作者:[Avishek Kumar][a]
|
作者:[Avishek Kumar][a]
|
||||||
译者:[GOLinux](https://github.com/GOLinux)
|
译者:[GOLinux](https://github.com/GOLinux)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
@ -0,0 +1,57 @@
|
|||||||
|
GNU、开源和 Apple 的那些黑历史
|
||||||
|
==============================================================================
|
||||||
|
> 自由软件/开源社区与 Apple 之间的争论可以回溯到上世纪80年代,当时 Linux 的创始人称 Mac OS X 的核心就是“一堆废物”。还有其他一些软件史上的轶事。
|
||||||
|
|
||||||
|
![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/05/untitled_2.png)
|
||||||
|
|
||||||
|
开源拥护者们与微软之间有着长期而跌宕起伏的关系。每个人都知道这一点。但是,在许多方面,自由或者开源软件的支持者们与 Apple 之间的争执则更加突出——尽管这很少受到媒体的关注。
|
||||||
|
|
||||||
|
需要说明的是,并不是所有的开源拥护者都厌恶苹果。从各种轶闻来看,我见过很多 Linux 黑客把玩 iPhone 和 iPad。实际上,许多 Linux 用户十分喜欢 Apple 的 OS X 系统,以至于他们[创造了很多Linux的发行版][1],都设计得看起来像OS X。(顺便说下,[朝鲜政府][2]就这样做了。)
|
||||||
|
|
||||||
|
但是 Mac 信徒与企鹅信徒(即 Linux 社区,更不用说自由与开源软件世界的其他部分)之间的关系,并不总是完全和谐的。而且这绝不是一个新现象,这是我在研究 Linux 和自由软件基金会的历史时发现的。
|
||||||
|
|
||||||
|
### GNU vs. Apple ###
|
||||||
|
|
||||||
|
这场战争至少可以回溯到上世纪80年代后期。1988年6月,Richard Stallman 发起的 [GNU][3] 项目(旨在建立一个源代码可以自由共享的、完全自由的类 Unix 操作系统)[强烈指责][4] Apple 对 [Hewlett-Packard][5](HPQ)和 [Microsoft][6](MSFT)的诉讼,称 Apple 所声称的别人抄袭了 Macintosh 操作系统界面和体验的说法是站不住脚的。GNU 警告说,如果 Apple 胜诉,这家公司“将会借助这种大众的新力量终结掉为取代商业软件而生的自由软件”。
|
||||||
|
|
||||||
|
那个时候,GNU 通过发布[“让你的律师远离我的电脑”按钮][7]来对抗 Apple 的诉讼(十分讽刺的是,这意味着 GNU 站在了 Microsoft 一边,尽管两者的立场并不相同)。GNU 同时呼吁支持者们抵制 Apple,并警告说,虽然 Macintosh 看起来是不错的计算机,但 Apple 一旦赢得诉讼就会给市场带来垄断,这会极大地提高计算机的售价。
|
||||||
|
|
||||||
|
Apple 最终[输掉了这场诉讼][8],但是直到1994年之后,GNU 才[撤销对 Apple 的抵制][9]。这期间,GNU 一直不断指责 Apple。在上世纪90年代早期甚至之后,GNU 开始发展 GNU 软件项目,可以在其他个人电脑平台包括 MS-DOS 计算机上使用。[GNU 宣称][10],除非 Apple 停止在计算机领域垄断的野心,让用户界面可以模仿 Macintosh 的一些东西,否则“我们不会提供任何对 Apple 机器的支持。”(因此讽刺的是 Apple 在90年代后期开发的类 UNIX 系统 OS X 有一大堆软件来自GNU。但是那是另外的故事了。)
|
||||||
|
|
||||||
|
### Torvalds 与 Jobs ###
|
||||||
|
|
||||||
|
尽管 Linux 内核的创造者 Linus Torvalds 平时态度比较随性,但相较于 Stallman 和 GNU,他过去对 Apple 要和善得多。在他 2001 年出版的《Just For Fun: The Story of an Accidental Revolutionary》一书中,Torvalds 描述了大约在 1997 年应 Steve Jobs 的邀请与之会面,讨论当时 Apple 正在开发、尚未公开发布的 Mac OS X 的情形。
|
||||||
|
|
||||||
|
“基本上,Jobs 一开始就试图告诉我,桌面上的玩家就两个,Microsoft 和 Apple,而且他认为我能为 Linux 做的最好的事,就是与 Apple 结盟,努力让开源用户去支持 Mac OS X。”Torvalds 写道。
|
||||||
|
|
||||||
|
这次会谈显然让 Torvalds 很不爽。争执的一个焦点是 Torvalds 对 Mach(Apple 用来构建新的 OS X 操作系统的内核技术)的藐视,Torvalds 称其为“一堆废物。它包含了所有你能犯的设计错误,并且甚至只打算弥补其中一小部分。”
|
||||||
|
|
||||||
|
但是更令人不快的,显然是 Jobs 在开发 OS X 时对待开源的方式(OS X 的核心里有很多开源程序):“他多少贬低了这种结构上的瑕疵:如果最上面是 Mac 层,谁还会在乎基础操作系统这种真正底层的东西是不是开源呢?”
|
||||||
|
|
||||||
|
总而言之,Torvalds 总结道,Jobs“并没有太多争论。他仅仅很简单地说着,胸有成竹地认为我会对与 Apple 合作感兴趣”。“他一无所知,无法想象还会有人并不关心 Mac 市场份额的增长。我认为当我表现出对 Mac 市场有多大、或者 Microsoft 市场有多大毫不关心时,他真的感到惊讶了。”
|
||||||
|
|
||||||
|
当然,Torvalds 的看法并不能代表所有 Linux 用户,而且他对 OS X 和 Apple 的看法从 2001 年开始也渐渐软化了。但是,Linux 社区的领军人物在 2000 年前后对 Apple 及其高层的傲慢所表现出的深深鄙视,透露出一些重要的东西:Apple 世界和开源/自由软件世界之间的矛盾是多么根深蒂固。
|
||||||
|
|
||||||
|
从以上两则历史轶事中,可以看到关于 Apple 产品价值的重大争议:该公司究竟是致力于提升其软硬件的质量,还是仅仅凭借市场上的小聪明获利,让 Apple 产品卖出高于其实际价值的价钱。不管怎样,这个问题我会暂时置身讨论之外。
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------

via: http://thevarguy.com/open-source-application-software-companies/051815/linux-better-os-x-gnu-open-source-and-apple-

作者:[Christopher Tozzi][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://thevarguy.com/author/christopher-tozzi
[1]:https://www.linux.com/news/software/applications/773516-the-mac-ifying-of-the-linux-desktop/
[2]:http://thevarguy.com/open-source-application-software-companies/010615/north-koreas-red-star-linux-os-made-apples-image
[3]:http://gnu.org/
[4]:https://www.gnu.org/bulletins/bull5.html
[5]:http://www.hp.com/
[6]:http://www.microsoft.com/
[7]:http://www.duntemann.com/AppleSnakeButton.jpg
[8]:http://www.freibrun.com/articles/articl12.htm
[9]:https://www.gnu.org/bulletins/bull18.html#SEC6
[10]:https://www.gnu.org/bulletins/bull12.html
@@ -1,28 +1,29 @@
 如何在Fedora 22上面配置Apache的Docker容器
 =============================================================================
-在这篇文章中,我们将会学习关于Docker的一些知识,如何使用Docker部署Apache httpd服务,并且共享到Docker Hub上面去。首先,我们学习怎样拉取和使用Docker Hub里面的镜像,然后交互式地安装Apache到一个Fedora 22的镜像里去,之后我们将会学习如何用一个Dockerfile文件来制作一个镜像,以一种更快,更优雅的方式。最后,我们会在Docker Hub上公开我们创建地镜像,这样以后任何人都可以下载并使用它。
+在这篇文章中,我们将会学习关于Docker的一些知识,如何使用Docker部署Apache httpd服务,并且共享到Docker Hub上面去。首先,我们学习怎样拉取和使用Docker Hub里面的镜像,然后在一个Fedora 22的镜像上交互式地安装Apache,之后我们将会学习如何用一个Dockerfile文件来以一种更快,更优雅的方式制作一个镜像。最后,我们将我们创建的镜像发布到Docker Hub上,这样以后任何人都可以下载并使用它。
 
-### 安装Docker,运行hello world ###
+### 安装并初体验Docker ###
 
 **要求**
 
-运行Docker,里至少需要满足这些:
+运行Docker,你至少需要满足这些:
 
 - 你需要一个64位的内核,版本3.10或者更高
-- Iptables 1.4 - Docker会用来做网络配置,如网络地址转换(NAT)
+- Iptables 1.4 - Docker会用它来做网络配置,如网络地址转换(NAT)
 - Git 1.7 - Docker会使用Git来与仓库交流,如Docker Hub
 - ps - 在大多数环境中这个工具都存在,在procps包里有提供
-- root - 防止一般用户可以通过TCP或者其他方式运行Docker,为了简化,我们会假定你就是root
+- root - 尽管一般用户可以通过TCP或者其他方式来运行Docker,但是为了简化,我们会假定你就是root
 
-### 使用dnf安装docker ###
+#### 使用dnf安装docker ####
 
 以下的命令会安装Docker
 
     dnf update && dnf install docker
 
-**注意**:在Fedora 22里,你仍然可以使用Yum命令,但是被DNF取代了,而且在纯净安装时不可用了。
+**注意**:在Fedora 22里,你仍然可以使用Yum命令,但是它被DNF取代了,而且在纯净安装时不可用了。
 
-### 检查安装 ###
+#### 检查安装 ####
 
 我们将要使用的第一个命令是docker info,这会输出很多信息给你:
@@ -32,25 +33,29 @@
     docker version
 
-### 启动Dcoker为守护进程 ###
+#### 以守护进程方式启动Docker ####
 
 你应该启动一个docker实例,然后它会处理我们的请求。
 
     docker -d
 
+现在我们设置 docker 随系统启动,以便我们不需要每次重启都需要运行上述命令。
+
+    chkconfig docker on
+
 让我们用Busybox来打印hello world:
 
    docker run -t busybox /bin/echo "hello world"
 
-这个命令里,我们告诉Docker执行 /bin/echo "hello world",在Busybox镜像的一个实例/容器里。Busybox是一个小型的POSIX环境,将许多小工具都结合到了一个单独的可执行程序里。
+这个命令里,我们告诉Docker在Busybox镜像的一个实例/容器里执行 /bin/echo "hello world"。Busybox是一个小型的POSIX环境,将许多小工具都结合到了一个单独的可执行程序里。
 
 如果Docker不能在你的系统里找到本地的Busybox镜像,它就会自动从Docker Hub里拉取镜像,正如下面的快照所示:
 
 ![Hello world with Busybox](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-hello-world-busybox-complete.png)
 
-Hello world with Busybox
+*Hello world with Busybox*
 
-再次尝试相同的命令,这次由于Docker已经有了本地的Busybox镜像,所有你将会看到的就是echo的输出:
+再次尝试相同的命令,这次由于Docker已经有了本地的Busybox镜像,你将会看到的全部就是echo的输出:
 
    docker run -t busybox /bin/echo "hello world"
@@ -66,31 +71,31 @@ Hello world with Busybox
    docker pull fedora:22
 
-起一个容器在后台运行:
+启动一个容器在后台运行:
 
    docker run -d -t fedora:22 /bin/bash
 
-列出正在运行的容器,并用名字标识,如下
+列出正在运行的容器及其名字标识,如下
 
    docker ps
 
 ![listing with docker ps and attaching with docker attach](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-ps-with-docker-attach-highlight.png)
 
-使用docker ps列出,并使用docker attach进入一个容器里
+*使用docker ps列出,并使用docker attach进入一个容器里*
 
-angry_noble是docker分配给我们容器的名字,所以我们来附上去:
+angry_noble是docker分配给我们容器的名字,所以我们来连接上去:
 
    docker attach angry_noble
 
-注意:每次你起一个容器,就会被给与一个新的名字,如果你的容器需要一个固定的名字,你应该在 docker run 命令里使用 -name 参数。
+注意:每次你启动一个容器,就会被给与一个新的名字,如果你的容器需要一个固定的名字,你应该在 docker run 命令里使用 -name 参数。
 
-### 安装Apache ###
+#### 安装Apache ####
 
 下面的命令会更新DNF的数据库,下载安装Apache(httpd包)并清理dnf缓存使镜像尽量小
 
    dnf -y update && dnf -y install httpd && dnf -y clean all
 
-配置Apache
+**配置Apache**
 
 我们需要修改httpd.conf的唯一地方就是ServerName,这会使Apache停止抱怨
@@ -98,7 +103,7 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
 **设定环境**
 
-为了使Apache运行为单机模式,你必须以环境变量的格式提供一些信息,并且你也需要在这些变量里的目录设定,所以我们将会用一个小的shell脚本干这个工作,当然也会启动Apache
+为了使Apache运行为独立模式,你必须以环境变量的格式提供一些信息,并且你也需要创建这些变量里的目录,所以我们将会用一个小的shell脚本干这个工作,当然也会启动Apache
 
    vi /etc/httpd/run_apache_foreground
@@ -106,7 +111,7 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
    #!/bin/bash
 
-   #set variables
+   #设置环境变量
    APACHE_LOG_DIR="/var/log/httpd"
    APACHE_LOCK_DIR="/var/lock/httpd"
    APACHE_RUN_USER="apache"
@@ -114,12 +119,12 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
    APACHE_PID_FILE="/var/run/httpd/httpd.pid"
    APACHE_RUN_DIR="/var/run/httpd"
 
-   #create directories if necessary
+   #如果需要的话,创建目录
    if ! [ -d /var/run/httpd ]; then mkdir /var/run/httpd;fi
    if ! [ -d /var/log/httpd ]; then mkdir /var/log/httpd;fi
    if ! [ -d /var/lock/httpd ]; then mkdir /var/lock/httpd;fi
 
-   #run Apache
+   #运行 Apache
    httpd -D FOREGROUND
 
 **另外**,你可以粘贴这个片段代码到容器shell里并运行:
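顺带一提,上面脚本里“目录不存在时才创建”的写法,可以用任何语言复现出来做个直观的验证。下面是一个假设性的 Python 小示意(目录名取自脚本,基准目录用临时目录代替,不会碰系统路径):

```python
import os
import tempfile

def ensure_runtime_dirs(base):
    """仿照 run_apache_foreground 脚本:目录不存在时才创建,返回本次新建的目录。"""
    created = []
    for sub in ("run/httpd", "log/httpd", "lock/httpd"):
        path = os.path.join(base, sub)
        if not os.path.isdir(path):      # 对应脚本里的 if ! [ -d ... ]
            os.makedirs(path)            # 对应 mkdir(这里一并创建父目录)
            created.append(sub)
    return created

base = tempfile.mkdtemp()
print(ensure_runtime_dirs(base))   # 第一次调用:三个目录都会被创建
print(ensure_runtime_dirs(base))   # 第二次调用:目录都已存在,返回空列表
```

和 shell 版本一样,重复执行是幂等的,这正是把它放进容器启动脚本里所需要的性质。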
@@ -130,11 +135,11 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
 **保存你的容器状态**
 
-你的容器现在可以运行Apache,是时候保存容器当前的状态为一个镜像,以备你需要的时候使用。
+你的容器现在准备好运行Apache,是时候保存容器当前的状态为一个镜像,以备你需要的时候使用。
 
 为了离开容器环境,你必须顺序按下 **Ctrl+q** 和 **Ctrl+p**,如果你仅仅在shell执行exit,你同时也会停止容器,失去目前为止你做过的所有工作。
 
-回到Docker主机,使用 **docker commit** 加容器和你期望的仓库名字/标签:
+回到Docker主机,使用 **docker commit** 及容器名和你想要的仓库名字/标签:
 
    docker commit angry_noble gaiada/apache
@@ -144,17 +149,15 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
 **运行并测试你的镜像**
 
-最后,从你的新镜像起一个容器,并且重定向80端口到容器:
+最后,从你的新镜像启动一个容器,并且重定向80端口到该容器:
 
    docker run -p 80:80 -d -t gaiada/apache /etc/httpd/run_apache_foreground
 
-
-
 到目前,你正在你的容器里运行Apache,打开你的浏览器访问该服务,在[http://localhost][2],你将会看到如下Apache默认的页面
 
 ![Apache default page running from Docker container](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-apache-running.png)
 
-在容器里运行的Apache默认页面
+*在容器里运行的Apache默认页面*
 
 ### 使用Dockerfile Docker化Apache ###
@@ -190,21 +193,14 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
    CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
 
 我们一起来看看Dockerfile里面有什么:
 
-**FROM** - 这告诉docker,我们将要使用Fedora 22作为基础镜像
-
-**MAINTAINER** 和 **LABLE** - 这些命令对镜像没有直接作用,属于标记信息
-
-**RUN** - 自动完成我们之前交互式做的工作,安装Apache,新建目录并编辑httpd.conf
-
-**ENV** - 设置环境变量,现在我们再不需要run_apache_foreground脚本
-
-**EXPOSE** - 暴露80端口给外网
-
-**CMD** - 设置默认的命令启动httpd服务,这样我们就不需要每次起一个新的容器都重复这个工作
+- **FROM** - 这告诉docker,我们将要使用Fedora 22作为基础镜像
+- **MAINTAINER** 和 **LABLE** - 这些命令对镜像没有直接作用,属于标记信息
+- **RUN** - 自动完成我们之前交互式做的工作,安装Apache,新建目录并编辑httpd.conf
+- **ENV** - 设置环境变量,现在我们再不需要run_apache_foreground脚本
+- **EXPOSE** - 暴露80端口给外网
+- **CMD** - 设置默认的命令启动httpd服务,这样我们就不需要每次启动一个新的容器都重复这个工作
 
 **建立该镜像**
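上面这组指令逐行拼起来就是一个完整的 Dockerfile。下面用一个假设性的 Python 小片段把正文描述的指令组装成文本并核对一下各条指令(维护者名字和环境变量取值均为示意,不代表真实文件内容):

```python
# 把正文里讲到的六类指令按顺序拼成一个 Dockerfile 文本(仅作示意,不会真正构建)
dockerfile = "\n".join([
    "FROM fedora:22",                                                # 基础镜像
    "MAINTAINER gaiada",                                             # 标记信息(示意)
    "RUN dnf -y update && dnf -y install httpd && dnf -y clean all", # 安装 Apache
    'ENV APACHE_RUN_DIR "/var/run/httpd"',                           # 环境变量(示意)
    "EXPOSE 80",                                                     # 暴露 80 端口
    'CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]',                   # 默认启动命令
])

# 每行的第一个单词就是指令名,顺序与正文的解释一一对应
directives = [line.split()[0] for line in dockerfile.splitlines()]
print(directives)
# → ['FROM', 'MAINTAINER', 'RUN', 'ENV', 'EXPOSE', 'CMD']
```

这样的小检查在用脚本批量生成 Dockerfile 时很方便,可以保证 FROM 永远在第一行、CMD 只出现一次。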
@@ -214,7 +210,7 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
 ![docker build complete](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-build-complete.png)
 
-docker完成创建
+*docker完成创建*
 
 使用 **docker images** 列出本地镜像,查看是否存在你新建的镜像:
@@ -226,7 +222,7 @@ docker完成创建
 这就是Dockerfile的工作,使用这项功能会使得事情更加容易,快速并且可重复生成。
 
-### 公开你的镜像 ###
+### 发布你的镜像 ###
 
 直到现在,你仅仅是从Docker Hub拉取了镜像,但是你也可以推送你的镜像,以后需要也可以再次拉取他们。实际上,其他人也可以下载你的镜像,在他们的系统中使用它而不需要改变任何东西。现在我们将要学习如何使我们的镜像对世界上的其他人可用。
@@ -236,7 +232,7 @@ 发布你的镜像
 ![Docker Hub signup page](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-hub-signup.png)
 
-Docker Hub 注册页面
+*Docker Hub 注册页面*
 
 **登录**
@@ -256,11 +252,11 @@ Docker Hub 注册页面
 ![Docker push Apache image complete](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-pushing-apachedf-complete.png)
 
-Docker推送Apache镜像完成
+*Docker推送Apache镜像完成*
 
 ### 结论 ###
 
-现在,你知道如何Docker化Apache,试一试包含其他一些组块,Perl,PHP,proxy,HTTPS,或者任何你需要的东西。我希望你们这些家伙喜欢她,并推送你们自己的镜像到Docker Hub。
+现在,你知道如何Docker化Apache,试一试包含其他一些组件,Perl,PHP,proxy,HTTPS,或者任何你需要的东西。我希望你们这些家伙喜欢它,并推送你们自己的镜像到Docker Hub。
 
 --------------------------------------------------------------------------------
@@ -268,7 +264,7 @@ via: http://linoxide.com/linux-how-to/configure-apache-containers-docker-fedora-
 作者:[Carlos Alberto][a]
 译者:[wi-cuckoo](https://github.com/wi-cuckoo)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -0,0 +1,98 @@
如何配置一个 Docker Swarm 原生集群
================================================================================

嗨,大家好。今天我们来学一学Swarm相关的内容吧,我们将学习通过Swarm来创建Docker原生集群。[Docker Swarm][1]是用于Docker的原生集群项目,它可以将一个Docker主机池转换成单个的虚拟主机。Swarm工作于标准的Docker API,所以任何可以和Docker守护进程通信的工具都可以使用Swarm来透明地伸缩到多个主机上。就像其它Docker项目一样,Swarm遵循“内置电池,并可拆卸”的原则(LCTT 译注:batteries included,内置电池原来是 Python 圈里面对 Python 的一种赞誉,指自给自足,无需外求的丰富环境;but removable,并可拆卸应该指的是非强制耦合)。它附带有一个开箱即用的简单的后端调度程序,而且作为初始开发套件,也为其开发了一个可插拔不同后端的API。其目标在于为一些简单的使用情况提供一个平滑的、开箱即用的体验,并且它允许切换为更强大的后端,如Mesos,以用于大规模生产环境部署。Swarm配置和使用极其简单。

这里给大家介绍 Swarm 0.2 开箱即用的一些特性:

1. Swarm 0.2.0大约85%与Docker引擎兼容。
2. 它支持资源管理。
3. 它具有一些带有限制和类同功能的高级调度特性。
4. 它支持多个发现后端(hubs,consul,etcd,zookeeper)
5. 它使用TLS加密方法进行安全通信和验证。

那么,我们来看一看Swarm的一些相当简单而实用的使用步骤吧。

### 1. 运行Swarm的先决条件 ###

我们必须在所有节点安装Docker 1.4.0或更高版本。虽然各个节点的IP地址不需要是公共地址,但是Swarm管理器必须可以通过网络访问各个节点。

**注意**:Swarm当前还处于beta版本,因此功能特性等还有可能发生改变,我们不推荐你在生产环境中使用。

### 2. 创建Swarm集群 ###

现在,我们将通过运行下面的命令来创建Swarm集群。各个节点都将运行一个swarm节点代理,该代理会注册、监控相关的Docker守护进程,并更新发现后端获取的节点状态。下面的命令会返回一个唯一的集群ID标记,在启动节点上的Swarm代理时会用到它。

在集群管理器中:

    # docker run swarm create

![Creating Swarm Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-swarm-cluster.png)

### 3. 启动各个节点上的Docker守护进程 ###

我们需要登录进我们将用来创建集群的每个节点,并在其上使用-H标记启动Docker守护进程。它会保证Swarm管理器能够通过TCP访问到各个节点上的Docker远程API。要启动Docker守护进程,我们需要在各个节点内部运行以下命令。

    # docker -H tcp://0.0.0.0:2375 -d

![Starting Docker Daemon](http://blog.linoxide.com/wp-content/uploads/2015/05/starting-docker-daemon.png)

### 4. 添加节点 ###

在启用Docker守护进程后,我们需要添加Swarm节点到发现服务,我们必须确保节点IP可从Swarm管理器访问到。要完成该操作,我们需要运行以下命令。

    # docker run -d swarm join --addr=<node_ip>:2375 token://<cluster_id>

![Adding Nodes to Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/adding-nodes-to-cluster.png)

**注意**:我们需要用节点的IP地址和步骤2中获取到的集群ID替换这里的<node_ip>和<cluster_id>。
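当节点较多时,逐台手敲上面的 join 命令容易出错。下面是一个假设性的 Python 小脚本,按正文的命令格式为每个节点生成 join 命令、为管理器生成 manage 命令(IP 地址和集群 ID 都是虚构的示例值,实际值以你自己的环境为准):

```python
# 按正文的命令格式批量拼出 swarm join / manage 命令字符串(仅生成文本,不执行)
def join_cmd(node_ip, cluster_id):
    return ("docker run -d swarm join "
            "--addr={ip}:2375 token://{cid}".format(ip=node_ip, cid=cluster_id))

def manage_cmd(swarm_port, cluster_id):
    return ("docker run -d -p {port}:2375 "
            "swarm manage token://{cid}".format(port=swarm_port, cid=cluster_id))

cluster_id = "4765653423fdecadee742da33a38f85c"   # 假设这是步骤 2 返回的集群 ID
for ip in ("10.0.0.11", "10.0.0.12"):             # 假设的两个节点 IP
    print(join_cmd(ip, cluster_id))
print(manage_cmd(2376, cluster_id))
```

把输出粘贴到各个节点上执行即可;同一个 token://<cluster_id> 把这些节点归到同一个集群。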
### 5. 开启Swarm管理器 ###

现在,由于我们已经获得了连接到集群的节点,我们将启动swarm管理器。我们需要在节点中运行以下命令。

    # docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>

![Starting Swarm Manager](http://blog.linoxide.com/wp-content/uploads/2015/05/starting-swarm-manager.png)

### 6. 检查配置 ###

一旦管理器运行起来后,我们可以通过运行以下命令来检查配置。

    # docker -H tcp://<manager_ip:manager_port> info

![Accessing Swarm Clusters](http://blog.linoxide.com/wp-content/uploads/2015/05/accessing-swarm-cluster.png)

**注意**:我们需要替换<manager_ip:manager_port>为运行swarm管理器的主机的IP地址和端口。

### 7. 使用docker CLI来访问节点 ###

在一切都像上面说得那样完美地完成后,这一部分是Docker Swarm最为重要的部分。我们可以使用Docker CLI来访问节点,并在节点上运行容器。

    # docker -H tcp://<manager_ip:manager_port> info
    # docker -H tcp://<manager_ip:manager_port> run ...

### 8. 列出集群中的节点 ###

我们可以使用swarm list命令来获取所有运行中节点的列表。

    # docker run --rm swarm list token://<cluster_id>

![Listing Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/listing-swarm-nodes.png)

### 尾声 ###

Swarm真的是一个相当不错的docker工具,它可以用于创建和管理集群。它相当易于配置和使用,当我们在它上面使用限制器和类同器时它更为出色。高级调度程序是一个相当不错的特性,它可以应用过滤器来通过端口、标签、健康状况来排除节点,并且它使用策略来挑选最佳节点。那么,如果你有任何问题、评论、反馈,请在下面的评论框中写出来吧,好让我们知道哪些材料需要补充或改进。谢谢大家了!尽情享受吧 :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/configure-swarm-clustering-docker/

作者:[Arun Pyasi][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:https://docs.docker.com/swarm/
177
published/201507/20150616 LINUX 101--POWER UP YOUR SHELL.md
Normal file
@@ -0,0 +1,177 @@
LINUX 101: 让你的 SHELL 更强大
================================================================================

> 在我们的关于 shell 基础的指导下,得到一个更灵活、功能更强大且多彩的命令行界面。

**为何要这样做?**

- 使得在 shell 提示符下过得更轻松、高效
- 在失去连接后恢复先前的会话
- 不用再挪动那只碍事的鼠标了!

![bash1](http://www.linuxvoice.com/wp-content/uploads/2015/02/bash1-large15.png)

这是我的命令行提示符的设置。对于这个小的终端窗口来说,这或许有些长。但你可以根据你的喜好来调整它。

作为一个 Linux 用户,你可能熟悉 shell(又名为命令行)。或许你需要时不时地打开终端来完成那些不能在 GUI 下处理的必要任务,抑或是因为你处在一个将窗口铺满桌面的环境中,而 shell 是你与你的 linux 机器交互的主要方式。

在上面那些情况下,你可能正在使用你所使用的发行版本自带的 Bash 配置。尽管对于大多数的任务而言,它足够好了,但它可以更加强大。在本教程中,我们将向你展示如何使得你的 shell 提供更多有用信息、更加实用且更适合工作。我们将对提示符进行自定义,让它比默认情况下提供更好的反馈,并向你展示如何使用炫酷的 `tmux` 工具来管理会话并同时运行多个程序。并且,为了让眼睛舒服一点,我们还将关注配色方案。那么,进击吧,少女!

### 让提示符更美妙 ###

大多数的发行版本配置有一个非常简单的提示符,它们大多向你展示了一些基本信息,但提示符可以为你提供更多的内容。例如,在 Debian 7 下,默认的提示符是这样的:

    mike@somebox:~$

上面的提示符展示出了用户、主机名、当前目录和账户类型符号(假如你切换到 root 账户,**$** 会变为 **#**)。那这些信息是在哪里存储的呢?答案是:在 **PS1** 环境变量中。假如你键入 `echo $PS1`,你将会在这个命令的输出字符串的最后看到如下的字符:

    \u@\h:\w$

这看起来有一些丑陋,并在瞥见它的第一眼时,你可能会开始尖叫,认为它是令人恐惧的正则表达式,但我们不打算用这些复杂的字符来煎熬我们的大脑。这不是正则表达式,这里的斜杠是转义序列,它告诉提示符进行一些特别的处理。例如,上面的 **\u** 部分,告诉提示符展示用户名,而 **\w** 则展示工作路径。

下面是一些你可以在提示符中用到的字符的列表:

- \d 当前的日期
- \h 主机名
- \n 代表换行的字符
- \A 当前的时间 (HH:MM)
- \u 当前的用户
- \w (小写) 整个工作路径的全称
- \W (大写) 工作路径的简短名称
- \$ 一个提示符号,对于 root 用户为 # 号
- \! 当前命令在 shell 历史记录中的序号

下面解释 **\w** 和 **\W** 选项的区别:对于前者,你将看到你所在的工作路径的完整地址(例如 **/usr/local/bin**),而对于后者,它则只显示 **bin** 这一部分。

现在,我们该怎样改变提示符呢?你需要更改 **PS1** 环境变量的内容,试试下面这个:

    export PS1="I am \u and it is \A $"

现在,你的提示符将会像下面这样:

    I am mike and it is 11:26 $

从这个例子出发,你就可以按照你的想法来试验一下上面列出的其他转义序列。但等等 – 当你登出后,你的这些努力都将消失,因为在你每次打开终端时,**PS1** 环境变量的值都会被重置。解决这个问题的最简单方式是打开 **.bashrc** 配置文件(在你的家目录下)并在这个文件的最下方添加上完整的 `export` 命令。在每次你启动一个新的 shell 会话时,这个 **.bashrc** 会被 `Bash` 读取,所以你的加强的提示符就可以一直出现。你还可以使用额外的颜色来装扮提示符。刚开始,这将有点棘手,因为你必须使用一些相当奇怪的转义序列,但结果是非常漂亮的。将下面的字符添加到你的 **PS1** 字符串中的某个位置,最终这将把文本变为红色:

    \[\e[31m\]

你可以将这里的 31 更改为其他的数字来获得不同的颜色:

- 30 黑色
- 32 绿色
- 33 黄色
- 34 蓝色
- 35 洋红色
- 36 青色
- 37 白色

所以,让我们使用先前看到的转义序列和颜色来创造一个提示符,以此来结束这一小节的内容。深吸一口气,弯曲你的手指,然后键入下面这只“野兽”:

    export PS1="(\!) \[\e[31m\] \[\A\] \[\e[32m\]\u@\h \[\e[34m\]\w \[\e[30m\]$"

上面的命令提供了一个 Bash 命令历史序号、当前的时间、彩色的用户或主机名组合、以及工作路径。假如你“野心勃勃”,利用一些惊人的组合,你还可以更改提示符的背景色和前景色。非常有用的 Arch wiki 有一个关于颜色代码的完整列表:[http://tinyurl.com/3gvz4ec][1]。
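顺便说一下,`\[\e[31m\]` 里真正起作用的是 ANSI 转义序列 `ESC[31m`,它并不是 Bash 专有的。下面用一个假设性的 Python 小片段直接输出这些序列,方便你逐个验证上面列出的颜色编号(终端需要支持 ANSI 颜色):

```python
# \e 就是 ESC 字符(\033);30–37 对应上文列出的八种前景色,0 用于重置属性
COLORS = {"黑色": 30, "红色": 31, "绿色": 32, "黄色": 33,
          "蓝色": 34, "洋红色": 35, "青色": 36, "白色": 37}

def colored(text, code):
    """把文本包在“设色……重置”两个 ANSI 序列之间。"""
    return "\033[{}m{}\033[0m".format(code, text)

for name, code in sorted(COLORS.items(), key=lambda kv: kv[1]):
    print(colored(name, code))          # 每种颜色打印一行自己的名字

# PS1 里的写法只是多了 \[ 和 \],用来告诉 Bash 这些字符不占显示宽度
print(colored("error", COLORS["红色"]) == "\033[31merror\033[0m")   # True
```

在提示符之外,同样的序列也常用来给脚本输出加颜色,比如把错误信息印成红色。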
> **Shell 精要**
>
> 假如你是一个彻底的 Linux 新手并第一次阅读这份杂志,或许你会发觉阅读这些教程有些吃力。所以这里有一些基础知识来让你熟悉一些 shell。通常在你的菜单中,shell 指的是 Terminal、XTerm 或 Konsole,当你启动它后,最为实用的命令有这些:
>
> **ls**(列出文件名);**cp one.txt two.txt**(复制文件);**rm file.txt**(移除文件);**mv old.txt new.txt**(移动或重命名文件);
>
> **cd /some/directory**(改变目录);**cd ..**(回到上级目录);**./program**(在当前目录下运行一个程序);**ls > list.txt**(重定向输出到一个文件)。
>
> 几乎每个命令都有一个手册页用来解释其选项(例如 **man ls** – 按 Q 来退出)。在那里,你可以知晓命令的选项,这样你就知道 **ls -la** 展示一个详细的列表,其中也列出了隐藏文件,并且在键入一个文件或目录的名字的一部分后,可以使用 Tab 键来自动补全。

### Tmux: 针对 shell 的窗口管理器 ###

在文本模式的环境中使用一个窗口管理器 – 这听起来有点不可思议,是吧?然而,你应该记得当 Web 浏览器第一次实现分页浏览的时候吧?在当时,这是在可用性上的一个重大进步,它减少了桌面任务栏的杂乱无章和繁多的窗口列表。对于你的浏览器来说,你只需要一个按钮便可以在浏览器中切换到你打开的每个单独网站,而不是针对每个网站都有一个任务栏或导航图标。这个功能非常有意义。

若有时你同时运行着几个虚拟终端,你便会遇到相似的情况;在这些终端之间跳转,或每次在任务栏或窗口列表中找到你所需要的那一个终端,都可能会让你觉得麻烦。拥有一个文本模式的窗口管理器不仅可以让你在同一个终端窗口中运行多个 shell 会话,而且你甚至还可以将这些窗口排列在一起。

另外,这样还有另一个好处:可以将这些窗口进行分离和重新连接。想要看看这是如何运行的最好方式是自己尝试一下。在一个终端窗口中,输入 `screen`(在大多数发行版本中,它已经默认安装了或者可以在软件包仓库中找到)。某些欢迎文字将会出现 – 只需敲击 Enter 键这些文字就会消失。现在运行一个交互式的文本模式的程序,例如 `nano`,并关闭这个终端窗口。

在一个正常的 shell 会话中,关闭窗口将会终止所有在该终端中运行的进程 – 所以刚才的 Nano 编辑会话也就被终止了,但对于 screen 来说,并不是这样的。打开一个新的终端并输入如下命令:

    screen -r

瞧,你刚才打开的 Nano 会话又回来了!

当刚才你运行 **screen** 时,它会创建一个新的独立的 shell 会话,它不与某个特定的终端窗口绑定在一起,所以可以在后面被分离并重新连接(即 **-r** 选项)。

当你正使用 SSH 去连接另一台机器并做着某些工作时,但并不想因为一个脆弱的连接而影响你的进度,这个方法尤其有用。假如你在一个 **screen** 会话中做着某些工作,并且你的连接突然中断了(或者你的笔记本没电了,又或者你的电脑报废了——不至于这么悲催吧),你只需重新连接或给电脑充电或重新买一台电脑,接着运行 **screen -r** 来重新连接到远程的电脑,并在刚才掉线的地方接着开始。

现在,我们都一直在讨论 GNU 的 **screen**,但这个小节的标题提到的是 tmux。实质上,**tmux**(terminal multiplexer)就像是 **screen** 的一个进阶版本,带有许多有用的额外功能,所以现在我们开始关注 tmux。某些发行版本默认包含了 **tmux**;在其他的发行版本上,通常只需要一个 **apt-get、yum install** 或 **pacman -S** 命令便可以安装它。

一旦你安装了它过后,键入 **tmux** 来启动它。接着你将注意到,在终端窗口的底部有一条绿色的信息栏,它非常像传统的窗口管理器中的任务栏:上面显示着一个运行着的程序的列表、机器的主机名、当前时间和日期。现在运行一个程序,同样以 Nano 为例,敲击 Ctrl+B 后接着按 C 键,这将在 tmux 会话中创建一个新的窗口,你便可以在终端的底部的任务栏中看到如下的信息:

    0:nano- 1:bash*

每一个窗口都有一个数字,当前呈现的程序被一个星号所标记。Ctrl+B 是与 tmux 交互的标准方式,所以若你敲击这个按键组合并带上一个窗口序号,那么就会切换到对应的那个窗口。你也可以使用 Ctrl+B 再加上 N 或 P 来分别切换到下一个或上一个窗口 – 或者使用 Ctrl+B 加上 L 来在最近使用的两个窗口之间来进行切换(有点类似于桌面中的经典的 Alt+Tab 组合键的效果)。若需要知道窗口列表,使用 Ctrl+B 再加上 W。

目前为止,一切都还好:现在你可以在一个单独的终端窗口中运行多个程序,避免混乱(尤其是当你经常与同一个远程主机保持多个 SSH 连接时)。当想同时看两个程序又该怎么办呢?

针对这种情况,可以使用 tmux 中的窗格。敲击 Ctrl+B 再加上 %,则当前窗口将分为两个部分:一个在左一个在右。你可以使用 Ctrl+B 再加上 O 来在这两个部分之间切换。这尤其在你想同时看两个东西时非常实用 – 例如一个窗格看指导手册,另一个窗格里用编辑器看一个配置文件。

有时,你想对一个单独的窗格进行缩放,而这需要一定的技巧。首先你需要敲击 Ctrl+B 再加上一个 :(冒号),这将使得位于底部的 tmux 栏变为深橙色。现在,你进入了命令模式,在这里你可以输入命令来操作 tmux。输入 **resize-pane -R** 来使当前窗格向右扩大一个字符的宽度,或使用 **-L** 来向左扩大。对于一个简单的操作,这些命令似乎有些长,但请注意,在 tmux 的命令模式(前面提到的以冒号开始的模式)下,可以使用 Tab 键来补全命令。另外需要提及的是,**tmux** 同样也有一个命令历史记录,所以若你想重复刚才的缩放操作,可以先敲击 Ctrl+B 再跟上一个冒号,并使用向上的箭头来取回刚才输入的命令。

最后,让我们看一下分离和重新连接 - 即我们刚才介绍的 screen 的特色功能。在 tmux 中,敲击 Ctrl+B 再加上 D 来从当前的终端窗口中分离当前的 tmux 会话。这使得这个会话的一切工作都在后台中运行,使用 `tmux a` 可以再重新连接到刚才的会话。但若你同时有多个 tmux 会话在运行时,又该怎么办呢?我们可以使用下面的命令来列出它们:

    tmux ls

这个命令将为每个会话分配一个序号;假如你想重新连接到会话 1,可以使用 `tmux a -t 1`。tmux 是可以高度定制的,你可以自定义按键绑定并更改配色方案,所以一旦你适应了它的主要功能,请钻研指导手册以了解更多的内容。

![tmux](http://www.linuxvoice.com/wp-content/uploads/2015/02/tmux-large13.jpg)

上图中,tmux 开启了两个窗格:左边是 Vim 正在编辑一个配置文件,而右边则展示着指导手册页。
> **Zsh: 另一个 shell**
>
> 选择是好的,但标准同样重要。你要知道几乎每个主流的 Linux 发行版本都默认使用 Bash shell – 尽管还存在其他的 shell。Bash 为你提供了一个 shell 能够给你提供的几乎任何功能,包括命令历史记录、文件名补全和许多脚本编程的能力。它成熟、可靠并文档丰富 – 但它不是你唯一的选择。
>
> 许多高级用户热衷于 Zsh,即 Z shell。这是 Bash 的一个替代品,提供了 Bash 的几乎所有功能,另外还提供了一些额外的功能。例如,在 Zsh 中,你输入 **ls** 并敲击 Tab 键,就可以得到 **ls** 可用的各种不同选项的一个大致描述,而不需要再打开 man page 了!
>
> Zsh 还支持其他强大的自动补全功能:例如,输入 **cd /u/lo/bi** 再敲击 Tab 键,则完整的路径名 **/usr/local/bin** 就会出现(这里假设没有其他的路径包含 **u**、**lo** 和 **bi** 等字符)。或者只输入 **cd** 再跟上 Tab 键,则你将看到着色后的路径名的列表 – 这比 Bash 给出的简单的结果好看得多。
>
> Zsh 在大多数的主要发行版本上都可以得到;安装它后输入 **zsh** 便可启动它。要将你的默认 shell 从 Bash 改为 Zsh,可以使用 **chsh** 命令。若需了解更多的信息,请访问 [www.zsh.org][2]。

### “未来”的终端 ###

你或许会好奇为什么包含你的命令行提示符的应用被叫做终端。这需要追溯到 Unix 的早期,那时人们一般工作在一个多用户的机器上,这个巨大的电脑主机将占据一座建筑中的一个房间,人们通过某些线路,使用屏幕和键盘来连接到这个主机,这些终端机通常被称为“哑终端”,因为它们不能靠自己做任何重要的执行任务 – 它们只展示通过线路从主机传来的信息,并输送回从键盘的敲击中得到的输入信息。

今天,我们在自己的机器上执行几乎所有的实际操作,所以我们的电脑不是传统意义下的终端,这就是为什么诸如 **XTerm**、Gnome Terminal、Konsole 等程序被称为“终端模拟器”的原因 – 它们提供了同昔日的物理终端一样的功能。事实上,在许多方面它们并没有改变多少。诚然,现在我们有了反锯齿字体,更好的颜色和点击网址的能力,但总的来说,几十年来我们一直以同样的方式在工作。

所以某些程序员正尝试改变这个状况。**Terminology**([http://tinyurl.com/osopjv9][3]),它来自于超级时髦的 Enlightenment 窗口管理器背后的团队,旨在让终端步入到 21 世纪,例如带有在线媒体显示功能。你可以在一个充满图片的目录里输入 **ls** 命令,便可以看到它们的缩略图,或甚至可以直接在你的终端里播放视频。这使得一个终端有点类似于一个文件管理器,意味着你可以快速地检查媒体文件的内容而不必用另一个应用来打开它们。

接着还有 Xiki([www.xiki.org][4]),它自身的描述为“命令的革新”。它就像是一个传统的 shell、一个 GUI 和一个 wiki 之间的过渡;你可以在任何地方输入命令,并在后面将它们的输出存储为笔记以作为参考,并可以创建非常强大的自定义命令。用几句话很难描述它,所以作者们已经创作了一个视频来展示它的潜力是多么的巨大(请看 **Xiki** 网站的截屏视频部分)。

并且 Xiki 绝不是那种在几个月之内就消亡的昙花一现的项目,作者们成功地进行了一次 Kickstarter 众筹,在七月底已募集到超过 $84,000。是的,你没有看错 – $84K 来支持一个终端模拟器。这可能是最不寻常的集资活动了,因为某些疯狂的家伙已经决定开始创办他们自己的 Linux 杂志……

### 下一代终端 ###

许多命令行和基于文本的程序在功能上与它们的 GUI 程序是相同的,并且常常更加快速和高效。我们的推荐有:**Irssi**(IRC 客户端);**Mutt**(mail 客户端);**rTorrent**(BitTorrent);**Ranger**(文件管理器);**htop**(进程监视器)。若限定在终端下进行 Web 浏览,Elinks 确实做得很好,尤其适合阅读那些以文字为主的网站,例如 Wikipedia。

> **微调配色方案**
>
> 在《Linux Voice》杂志社中,我们并不迷恋那些养眼的东西,但当你每天花费几个小时盯着屏幕看东西时,我们确实认识到美学的重要性。我们中的许多人都喜欢调整我们的桌面和窗口管理器来达到完美的效果,调整阴影效果、摆弄不同的配色方案,直到我们 100% 的满意(然后出于习惯,摆弄更多的东西)。
>
> 但我们倾向于忽视终端窗口,它理应也获得我们的喜爱。在 [http://ciembor.github.io/4bit][5] 你将看到一个极其棒的配色方案设计器,它可以为所有流行的终端模拟器(**XTerm、Gnome Terminal、Konsole 和 Xfce4 Terminal 等**)导出配置。移动滑块,直到你调出最佳的配色方案,然后点击位于该页面右上角的“得到方案”按钮。
>
> 相似的,假如你在一个文本编辑器,如 Vim 或 Emacs 上花费了很多的时间,使用一个精心设计的调色板也是非常值得的。**Solarized**([http://ethanschoonover.com/solarized][6])是一个卓越的方案,它不仅漂亮,而且因追求最大的可用性而设计,在其背后有着大量的研究和测试。

--------------------------------------------------------------------------------

via: http://www.linuxvoice.com/linux-101-power-up-your-shell-8/

作者:[Ben Everard][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxvoice.com/author/ben_everard/
[1]:http://tinyurl.com/3gvz4ec
[2]:http://www.zsh.org/
[3]:http://tinyurl.com/osopjv9
[4]:http://www.xiki.org/
[5]:http://ciembor.github.io/4bit
[6]:http://ethanschoonover.com/solarized
|
|||||||
XBMC:自制遥控
|
为 Kodi 自制遥控器
|
||||||
================================================================================
|
================================================================================
|
||||||
**通过运行在 Android 手机上的自制遥控器来控制你的家庭媒体播放器。**
|
**通过运行在 Android 手机上的自制遥控器来控制你的家庭媒体播放器。**
|
||||||
|
|
||||||
**XBMC** 一款很优秀的软件,能够将几乎所有电脑变身成媒体中心。它可以播放音乐和视频,显示图片,甚至还能显示天气预报。为了在配置成家庭影院后方便使用,你可以通过手机 app 访问运行在已连接到 Wi-Fi 的 XBMC 机器上的服务来控制它。可以找到很多这种工具,几乎覆盖所有智能手机系统。
|
**Kodi** 是一款很优秀的软件,能够将几乎所有电脑变身成媒体中心。它可以播放音乐和视频,显示图片,甚至还能显示天气预报。为了在配置成家庭影院后方便使用,你可以通过手机 app 访问运行在连接到 Wi-Fi 的 XBMC 机器上的服务来控制它。可以找到很多这种工具,几乎覆盖所有智能手机系统。
|
||||||
|
|
||||||
> ### Kodi ###
|
> **XBMC**
|
||||||
>
|
>
|
||||||
> 在你阅读这篇文章的时候,**XBMC** 可能已经成为历史。因为法律原因(因为名字 **XBMC** 或 X**-Box Media Center** 里引用了不再支持的过时硬件)项目组决定使用新的名字 **Kodi**。不过,除了名字,其他的都会保持原样。或者说除开通常新版本中所期待的大量新改进。这一般不会影响到遥控软件,它应该能在已有的 **XBMC** 系统和新的 Kodi 系统上都能工作。
|
> Kodi 原名叫做 XBMC,在你阅读这篇文章的时候,**XBMC** 已经成为历史。因为法律原因(因为名字 **XBMC** 或 X**-Box Media Center** 里引用了不再支持的过时硬件)项目组决定使用新的名字 **Kodi**。不过,除了名字,其他的都会保持原样。或者说除开通常新版本中所期待的大量新改进。这一般不会影响到遥控软件,它应该能在已有的 **XBMC** 系统和新的 Kodi 系统上都能工作。
|
||||||
|
|
||||||
我们目前配置了一个 **XBMC** 系统用于播放音乐,不过我们找到的所有 XBMC 遥控没一个好用的,特别是和媒体中心连接的电视没打开的时候。它们都有点太复杂了,集成了太多功能在手机的小屏幕上。我们希望能有这样的系统,从最开始就是设计成只用于访问音乐库和电台插件,所以我们决定自己实现一个。它不需要用到 XBMC 的所有功能,因为除了音乐以外的任务,我们可以简单地切换使用通用的 XBMC 遥控。我们的测试系统是一个刷了 RaspBMC 发行版的树莓派,但是我们要做的工具并不受限于树莓派或那个发行版,它应该可以匹配任何安装了相关插件的基于 Linux 的 XBMC 系统。
|
我们目前已经配置好了一个用于播放音乐的 **Kodi** 系统,不过我们找到的所有 Kodi 遥控没一个好用的,特别是和媒体中心连接的电视没打开的时候。它们都有点太复杂了,集成了太多功能在手机的小屏幕上。我们希望能有这样的系统,从最开始就是设计成只用于访问音乐库和电台插件,所以我们决定自己实现一个。它不需要用到 Kodi 的所有功能,因为除了音乐以外的任务,我们可以简单地切换使用通用的 Kodi 遥控。我们的测试系统是一个刷了 RaspBMC 发行版的树莓派,但是我们要做的工具并不受限于树莓派或Kodi那个发行版,它应该可以匹配任何安装了相关插件的基于 Linux 的 Kodi 系统。
|
||||||
|
|
||||||
首先,遥控程序需要一个用户界面。大多数 XBMC 遥控程序都是独立的 app。不过对于我们要做的这个音乐控制程序,我们希望用户可以不用安装任何东西就可以使用。显然我们需要使用网页界面。XBMC 本身自带网页服务器,但是为了获得更多权限,我们还是使用了独立的网页框架。在同一台电脑上跑两个以上网页服务器没有问题,只不过它们不能使用相同的端口。
|
首先,遥控程序需要一个用户界面。大多数 Kodi 遥控程序都是独立的 app。不过对于我们要做的这个音乐控制程序,我们希望用户可以不用安装任何东西就可以使用。显然我们需要使用网页界面。Kodi 本身自带网页服务器,但是为了获得更多权限,我们还是使用了独立的网页框架。在同一台电脑上跑两个以上网页服务器没有问题,只不过它们不能使用相同的端口。
|
||||||
|
|
||||||
有几个网页框架可以使用。而我们选用 Bottle 是因为它是一个简单高效的框架,而且我们也确实用不到任何高级功能。Bottle 是一个 Python 模块,所以这也将是我们编写服务器模块的语言。
|
有几个网页框架可以使用。而我们选用 Bottle 是因为它是一个简单高效的框架,而且我们也确实用不到任何高级功能。Bottle 是一个 Python 模块,所以这也将是我们编写服务器模块的语言。
|
||||||
|
|
||||||
@ -18,7 +18,7 @@ XBMC:自制遥控
|
|||||||
|
|
||||||
sudo apt-get install python-bottle
|
sudo apt-get install python-bottle
|
||||||
|
|
||||||
遥控程序实际上只是连接用户和系统的中间层。Bottle 提供了和用户交互的方式,而我们将通过 JSON API 来和 **XBMC** 交互。这样可以让我们通过发送 JSON 格式消息的方式去控制媒体播放器。
|
遥控程序实际上只是连接用户和系统的中间层。Bottle 提供了和用户交互的方式,而我们将通过 JSON API 来和 **Kodi** 交互。这样可以让我们通过发送 JSON 格式消息的方式去控制媒体播放器。
|
||||||
|
|
||||||
我们将用到一个叫做 xbmcjson 的简单 XBMC JASON API 封装。足够用来发送控制请求,而不需要关心实际的 JSON 格式以及和服务器通讯的无聊事。它没有包含在 PIP 包管理中,所以你得直接从 **GitHub** 安装:
|
我们将用到一个叫做 xbmcjson 的简单 XBMC JASON API 封装。足够用来发送控制请求,而不需要关心实际的 JSON 格式以及和服务器通讯的无聊事。它没有包含在 PIP 包管理中,所以你得直接从 **GitHub** 安装:
|
||||||
|
|
||||||
@ -35,13 +35,13 @@ XBMC:自制遥控
|
|||||||
from xbmcjson import XBMC
|
from xbmcjson import XBMC
|
||||||
from bottle import route, run, template, redirect, static_file, request
|
from bottle import route, run, template, redirect, static_file, request
|
||||||
import os
|
import os
|
||||||
xbmc = XBMC(“http://192.168.0.5/jsonrpc”, “xbmc”, “xbmc”)
|
xbmc = XBMC("http://192.168.0.5/jsonrpc", "xbmc", "xbmc")
|
||||||
@route(‘/hello/<name>’)
|
@route('/hello/<name>')
|
||||||
def index(name):
|
def index(name):
|
||||||
return template(‘<h1>Hello {{name}}!</h1>’, name=name)
|
return template('<h1>Hello {{name}}!</h1>', name=name)
|
||||||
run(host=”0.0.0.0”, port=8000)
|
run(host="0.0.0.0", port=8000)
|
||||||
|
|
||||||
这样程序将连接到 **XBMC**(不过实际上用不到);然后 Bottle 会开始伺服网站。在我们的代码里,它将监听主机 0.0.0.0(意味着允许所有主机连接)的端口 8000。它只设定了一个站点,就是 /hello/XXXX,这里的 XXXX 可以是任何内容。不管 XXXX 是什么都将作为参数名传递给 index()。然后再替换进去 HTML 网页模版。
|
这样程序将连接到 **Kodi**(不过实际上用不到);然后 Bottle 会开始提供网站服务。在我们的代码里,它将监听主机 0.0.0.0(意味着允许所有主机连接)的端口 8000。它只设定了一个站点,就是 /hello/XXXX,这里的 XXXX 可以是任何内容。不管 XXXX 是什么都将作为参数名传递给 index()。然后再替换进去 HTML 网页模版。
|
||||||
|
|
||||||
你可以先试着把上面内容写到一个文件(我们取的名字是 remote.py),然后用下面的命令启动:
|
你可以先试着把上面内容写到一个文件(我们取的名字是 remote.py),然后用下面的命令启动:
|
||||||
|
|
||||||
@ -51,56 +51,56 @@ XBMC:自制遥控
|
|||||||
|
|
||||||
@route() 用来设定网页服务器的路径,而函数 index() 会返回该路径的数据。通常是返回由模版生成的 HTML 页面,但是并不是说只能这样(后面会看到)。
|
@route() 用来设定网页服务器的路径,而函数 index() 会返回该路径的数据。通常是返回由模版生成的 HTML 页面,但是并不是说只能这样(后面会看到)。
|
||||||
|
|
||||||
随后,我们将给应用添加更多页面入口,让它变成一个全功能的 XBMC 遥控,但仍将采用相同代码结构。
|
随后,我们将给应用添加更多页面入口,让它变成一个全功能的 Kodi 遥控,但仍将采用相同代码结构。
|
||||||
|
|
||||||
XBMC JSON API 接口可以从和 XBMC 机器同网段的任意电脑上访问。也就是说你可以在自己的笔记本上开发,然后再布置到媒体中心上,而不需要浪费时间上传每次改动。
|
XBMC JSON API 接口可以从和 Kodi 机器同网段的任意电脑上访问。也就是说你可以在自己的笔记本上开发,然后再布置到媒体中心上,而不需要浪费时间上传每次改动。
|
||||||
|
|
||||||
模版 - 比如前面例子里的那个简单模版 - 是一种结合 Python 和 HTML 来控制输出的方式。理论上,这俩能做很多很多事,但是会非常混乱。我们将只是用它们来生成正确格式的数据。不过,在开始动手之前,我们先得准备点数据。
|
模版 - 比如前面例子里的那个简单模版 - 是一种结合 Python 和 HTML 来控制输出的方式。理论上,这俩能做很多很多事,但是会非常混乱。我们将只是用它们来生成正确格式的数据。不过,在开始动手之前,我们先得准备点数据。
|
||||||
|
|
||||||
> ### Paste ###
|
> **Paste**
|
||||||
>
|
>
|
||||||
> Bottle 自带网页服务器,就是我们用来测试遥控程序的。不过,我们发现它性能有时不够好。当我们的遥控程序正式上线时,我们希望页面能更快一点显示出来。Bottle 可以和很多不同的网页服务器配合工作,而我们发现 Paste 用起来非常不错。而要使用的话,只要简单地安装(Debian 系统里的 python-paste 包),然后修改一下代码里的 run 调用:
|
> Bottle 自带网页服务器,我们用它来测试遥控程序。不过,我们发现它性能有时不够好。当我们的遥控程序正式上线时,我们希望页面能更快一点显示出来。Bottle 可以和很多不同的网页服务器配合工作,而我们发现 Paste 用起来非常不错。而要使用的话,只要简单地安装(Debian 系统里的 python-paste 包),然后修改一下代码里的 run 调用:
|
||||||
>
|
>
|
||||||
> run(host=hostname, port=hostport, server=”paste”)
|
> run(host=hostname, port=hostport, server="paste")
|
||||||
>
|
>
|
||||||
> 你可以在 [http://bottlepy.org/docs/dev/deployment.html][1] 找到如何使用其他服务器的相关细节。
|
> 你可以在 [http://bottlepy.org/docs/dev/deployment.html][1] 找到如何使用其他服务器的相关细节。
|
||||||
|
|
||||||
#### 从 XBMC 获取数据 ####
|
#### 从 Kodi 获取数据 ####
|
||||||
|
|
||||||
XBMC JSON API 分成 14 个命名空间:JSONRPC, Player, Playlist, Files, AudioLibrary, VideoLibrary, Input, Application, System, Favourites, Profiles, Settings, Textures 和 XBMC。每个都可以通过 Python 的 XBMC 对象访问(Favourites 除外,明显是个疏忽)。每个命名空间都包含许多方法用于对程序的控制。例如,Playlist.GetItems() 可以用来获取某个特定播放列表的内容。服务器会返回给我们 JSON 格式的数据,但 xbmcjson 模块会为我们转化成 Python 词典。
|
XBMC JSON API 分成 14 个命名空间:JSONRPC, Player, Playlist, Files, AudioLibrary, VideoLibrary, Input, Application, System, Favourites, Profiles, Settings, Textures 和 XBMC。每个都可以通过 Python 的 XBMC 对象访问(Favourites 除外,明显是个疏忽)。每个命名空间都包含许多方法用于对程序的控制。例如,Playlist.GetItems() 可以用来获取某个特定播放列表的内容。服务器会返回给我们 JSON 格式的数据,但 xbmcjson 模块会为我们转化成 Python 词典。
|
||||||
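作为示意,xbmcjson 在底层发送的就是一个标准的 JSON-RPC 2.0 请求体。下面这个极简例子(纯属演示,真实的 URL 和认证信息取决于你的 Kodi/XBMC 配置)展示了 `xbmc.Player.GetActivePlayers()` 这类调用对应的原始请求和响应长什么样:

```python
import json

# 构造一个 JSON-RPC 2.0 请求体,等价于 xbmc.Player.GetActivePlayers()
# (示意代码:xbmcjson 会把它 POST 到 http://<主机>/jsonrpc)
def make_rpc_payload(method, params=None, request_id=1):
    payload = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        payload["params"] = params
    return json.dumps(payload)

body = make_rpc_payload("Player.GetActivePlayers")

# 服务器返回的也是 JSON,xbmcjson 会把它解析成 Python 字典
response = json.loads(
    '{"id":1,"jsonrpc":"2.0","result":[{"playerid":0,"type":"audio"}]}'
)
```

有了这层封装,我们在后面的代码里只需要调用方法、读取返回的字典即可,而不用手工拼接请求。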
|
|
||||||
我们需要用到 XBMC 里的两个组件来控制播放:播放器和播放列表。播放器带有播放列表并在每首歌结束时从列表里取下一首。为了查看当前正在播放的内容,我们需要获取正在工作的播放器的 ID,然后根据它找到当前播放列表的 ID。这个可以通过下面的代码来实现:
|
我们需要用到 Kodi 里的两个组件来控制播放:播放器和播放列表。播放器处理播放列表并在每首歌结束时从列表里取下一首。为了查看当前正在播放的内容,我们需要获取正在工作的播放器的 ID,然后根据它找到当前播放列表的 ID。这个可以通过下面的代码来实现:
|
||||||
|
|
||||||
def get_playlistid():
|
def get_playlistid():
|
||||||
player = xbmc.Player.GetActivePlayers()
|
player = xbmc.Player.GetActivePlayers()
|
||||||
if len(player[‘result’]) > 0:
|
if len(player['result']) > 0:
|
||||||
playlist_data = xbmc.Player.GetProperties({“playerid”:0, “properties”:[“playlistid”]})
|
playlist_data = xbmc.Player.GetProperties({"playerid":0, "properties":["playlistid"]})
|
||||||
if len(playlist_data[‘result’]) > 0 and “playlistid” in playlist_data[‘result’].keys():
|
if len(playlist_data['result']) > 0 and "playlistid" in playlist_data['result'].keys():
|
||||||
return playlist_data[‘result’][‘playlistid’]
|
return playlist_data['result']['playlistid']
|
||||||
return -1
|
return -1
|
||||||
|
|
||||||
如果当前没有播放器在工作(就是说,返回数据的结果部分的长度是 0),或者当前播放器不带播放列表,这样的话函数会返回 -1。其他时候,它会返回当前播放列表的数字 ID。
|
如果当前没有播放器在工作(就是说,返回数据的结果部分的长度是 0),或者当前播放器没有处理播放列表,这样的话函数会返回 -1。其他时候,它会返回当前播放列表的数字 ID。
|
||||||
|
|
||||||
当我们拿到当前播放列表的 ID 后,就可以获取列表的细节内容。按照我们的需求,有两个重要的地方:播放列表里包含的项,以及当前播放所处的位置(已经播放过的项并不会从播放列表移除,只是移动当前播放位置)。
|
当我们拿到当前播放列表的 ID 后,就可以获取该列表的细节内容。按照我们的需求,有两个重要的地方:播放列表里包含的项,以及当前播放所处的位置(已经播放过的项并不会从播放列表移除,只是移动当前播放位置)。
|
||||||
|
|
||||||
def get_playlist():
|
def get_playlist():
|
||||||
playlistid = get_playlistid()
|
playlistid = get_playlistid()
|
||||||
if playlistid >= 0:
|
if playlistid >= 0:
|
||||||
data = xbmc.Playlist.GetItems({“playlistid”:playlistid, “properties”: [“title”, “album”, “artist”, “file”]})
|
data = xbmc.Playlist.GetItems({"playlistid":playlistid, "properties": ["title", "album", "artist", "file"]})
|
||||||
position_data = xbmc.Player.GetProperties({“playerid”:0, ‘properties’:[“position”]})
|
position_data = xbmc.Player.GetProperties({"playerid":0, 'properties':["position"]})
|
||||||
position = int(position_data[‘result’][‘position’])
|
position = int(position_data['result']['position'])
|
||||||
return data[‘result’][‘items’][position:], position
|
return data['result']['items'][position:], position
|
||||||
return [], -1
|
return [], -1
|
||||||
|
|
||||||
这样可以返回以正在播放的项开始的列表(因为我们并不关心已经播放过的内容),而且也包含了位置信息用来从列表里移除项。
|
这样可以返回以正在播放的项开始的列表(因为我们并不关心已经播放过的内容),而且也包含了用来从列表里移除项的位置信息。
|
||||||
|
|
||||||
![Image](http://www.linuxvoice.com/wp-content/uploads/2015/04/xbmc2-large.jpg)
|
![Image](http://www.linuxvoice.com/wp-content/uploads/2015/04/xbmc2-large.jpg)
|
||||||
|
|
||||||
API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列出了所有支持的函数,但是关于具体如何使用的描述有点太简单了。
|
API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列出了所有支持的函数,但是关于具体如何使用的描述有点太简单了。
|
||||||
|
|
||||||
> ### JSON ###
|
> **JSON**
|
||||||
>
|
>
|
||||||
> JSON 是 JavaScript Object Notation 的缩写,开始设计用于 JavaScript 对象的序列化。目前仍然起到这个作用,但是它也是用来编码任意数据的一种很好用的方式。
|
> JSON 是 JavaScript Object Notation 的缩写,最初设计用于 JavaScript 对象的序列化。目前仍然起到这个作用,但是它也是用来编码任意数据的一种很好用的方式。
|
||||||
>
|
>
|
||||||
> JSON 对象都是这样的格式:
|
> JSON 对象都是这样的格式:
|
||||||
>
|
>
|
||||||
@ -110,18 +110,18 @@ API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列
|
|||||||
>
|
>
|
||||||
> 在字典数据结构里,值本身可以是另一个 JSON 对象,或者一个列表,所以下面的格式也是正确的:
|
> 在字典数据结构里,值本身可以是另一个 JSON 对象,或者一个列表,所以下面的格式也是正确的:
|
||||||
>
|
>
|
||||||
> {“name”:“Ben”, “jobs”:[“cook”, “bottle-washer”], “appearance”: {“height”:195, “skin”:“fair”}}
|
> {"name":"Ben", "jobs":["cook", "bottle-washer"], "appearance": {"height":195, "skin":"fair"}}
|
||||||
>
|
>
|
||||||
> JSON 通常在网络服务中用来发送和接收数据,并且大多数编程语言都能很好地支持,所以如果你熟悉 Python 的话,你应该可以使用你熟悉的编程语言调用相同的接口来轻松地控制 XBMC。
|
> JSON 通常在网络服务中用来发送和接收数据,并且大多数编程语言都能很好地支持,所以如果你熟悉 Python 的话,你应该可以使用你熟悉的编程语言调用相同的接口来轻松地控制 Kodi。
|
||||||
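上面侧边栏里的例子可以直接用 Python 标准库的 json 模块来验证(这只是一个独立的小演示,与遥控程序本身无关):

```python
import json

# 把侧边栏里的 JSON 字符串解析成 Python 字典
text = ('{"name":"Ben", "jobs":["cook", "bottle-washer"],'
        ' "appearance": {"height":195, "skin":"fair"}}')
person = json.loads(text)

# 嵌套的对象变成嵌套的字典,列表保持为列表
height = person["appearance"]["height"]

# 反过来,json.dumps() 把字典序列化回 JSON 字符串,可以无损往返
round_trip = json.loads(json.dumps(person))
```

正因为两边都是这种结构化数据,XBMC 返回的结果在 Python 里用起来就和普通字典一样自然。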
|
|
||||||
#### 整合到一起 ####
|
#### 整合到一起 ####
|
||||||
|
|
||||||
把之前的功能连接到 HTML 页面很简单:
|
把之前的功能连接到 HTML 页面很简单:
|
||||||
|
|
||||||
@route(‘/juke’)
|
@route('/juke')
|
||||||
def index():
|
def index():
|
||||||
current_playlist, position = get_playlist()
|
current_playlist, position = get_playlist()
|
||||||
return template(‘list’, playlist=current_playlist, offset = position)
|
return template('list', playlist=current_playlist, offset = position)
|
||||||
|
|
||||||
只需要抓取播放列表(调用我们之前定义的函数),然后将结果传递给负责显示的模版。
|
只需要抓取播放列表(调用我们之前定义的函数),然后将结果传递给负责显示的模版。
|
||||||
|
|
||||||
@ -131,57 +131,57 @@ API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列
|
|||||||
% if playlist is not None:
|
% if playlist is not None:
|
||||||
% position = offset
|
% position = offset
|
||||||
% for song in playlist:
|
% for song in playlist:
|
||||||
<strong> {{song[‘title’]}} </strong>
|
<strong> {{song['title']}} </strong>
|
||||||
% if song[‘type’] == ‘unknown’:
|
% if song['type'] == 'unknown':
|
||||||
Radio
|
Radio
|
||||||
% else:
|
% else:
|
||||||
{{song[‘artist’][0]}}
|
{{song['artist'][0]}}
|
||||||
% end
|
% end
|
||||||
% if position != offset:
|
% if position != offset:
|
||||||
<a href=”/remove/{{position}}”>remove</a>
|
<a href="/remove/{{position}}">remove</a>
|
||||||
% else:
|
% else:
|
||||||
<a href=”/skip/{{position}}”>skip</a>
|
<a href="/skip/{{position}}">skip</a>
|
||||||
% end
|
% end
|
||||||
<br>
|
<br>
|
||||||
% position += 1
|
% position += 1
|
||||||
% end
|
% end
|
||||||
|
|
||||||
可以看到,模版大部分是用 HTML 写的,只有一小部分用来控制输出的其他代码。用两个括号括起来的变量是输出位置(像我们在第一个 ‘hello world’ 例子里看到的)。你也可以嵌入以百分号开头的 Python 代码。因为没有缩进,你需要用一个 % end 来结束当前的代码块(就像循环或 if 语句)。
|
可以看到,模版大部分是用 HTML 写的,只有一小部分用来控制输出的其他代码。用两个大括号括起来的变量是输出位置(像我们在第一个 'hello world' 例子里看到的)。你也可以嵌入以百分号开头的 Python 代码。因为没有缩进,你需要用一个 `% end` 来结束当前的代码块(就像循环或 if 语句)。
|
||||||
|
|
||||||
这个模版首先检查列表是否为空,然后遍历里面的每一项。每一项会用粗体显示歌曲名字,然后是艺术家名字,然后是一个是否跳过(如果是当前正在播的歌曲)或从列表移除的链接。所有歌曲的类型都是 ‘song’,如果类型是 ‘unknown’,那就不是歌曲而是网络电台。
|
这个模版首先检查列表是否为空,然后遍历里面的每一项。每一项会用粗体显示歌曲名字,然后是艺术家名字,然后是一个是否跳过(如果是当前正在播的歌曲)或从列表移除的链接。所有歌曲的类型都是 'song',如果类型是 'unknown',那就不是歌曲而是网络电台。
|
||||||
|
|
||||||
/remove/ 和 /skip/ 路径只是简单地封装了 XBMC 控制功能,在改动生效后重新加载 /juke:
|
/remove/ 和 /skip/ 路径只是简单地封装了 XBMC 控制功能,在改动生效后重新加载 /juke:
|
||||||
|
|
||||||
@route(‘/skip/<position>’)
|
@route('/skip/<position>')
|
||||||
def index(position):
|
def index(position):
|
||||||
print xbmc.Player.GoTo({‘playerid’:0, ‘to’:’next’})
|
print xbmc.Player.GoTo({'playerid':0, 'to':'next'})
|
||||||
redirect(“/juke”)
|
redirect("/juke")
|
||||||
@route(‘/remove/<position>’)
|
@route('/remove/<position>')
|
||||||
def index(position):
|
def index(position):
|
||||||
playlistid = get_playlistid()
|
playlistid = get_playlistid()
|
||||||
if playlistid >= 0:
|
if playlistid >= 0:
|
||||||
xbmc.Playlist.Remove({‘playlistid’:int(playlistid), ‘position’:int(position)})
|
xbmc.Playlist.Remove({'playlistid':int(playlistid), 'position':int(position)})
|
||||||
redirect(“/juke”)
|
redirect("/juke")
|
||||||
|
|
||||||
当然,如果不能往列表里添加歌曲的话那这个列表管理功能也不行。
|
当然,如果不能往列表里添加歌曲的话那这个列表管理功能也不行。
|
||||||
|
|
||||||
因为一旦播放列表结束,它就消失了,所以你需要重新创建一个,这会让事情复杂一些。而且有点让人迷惑的是,播放列表是通过调用 Playlist.Clear() 方法来创建的。这个方法也还用来删除包含网络电台(类型是 unknown)的播放列表。另一个麻烦的地方是列表里的网络电台开始播放后就不会停,所以如果当前在播网络电台,也会需要清除播放列表。
|
因为一旦播放列表结束,它就消失了,所以你需要重新创建一个,这会让事情复杂一些。而且有点让人迷惑的是,播放列表是通过调用 Playlist.Clear() 方法来创建的。这个方法也用来删除包含网络电台(类型是 unknown)的播放列表。另一个麻烦的地方是列表里的网络电台开始播放后就不会停,所以如果当前在播网络电台,也会需要清除播放列表。
|
||||||
|
|
||||||
这些页面包含了指向 /play/<songid> 的链接来播放歌曲。通过下面的代码处理:
|
这些页面包含了指向 /play/\<songid> 的链接来播放歌曲。通过下面的代码处理:
|
||||||
|
|
||||||
@route(‘/play/<id>’)
|
@route('/play/<id>')
|
||||||
def index(id):
|
def index(id):
|
||||||
playlistid = get_playlistid()
|
playlistid = get_playlistid()
|
||||||
playlist, not_needed= get_playlist()
|
playlist, not_needed= get_playlist()
|
||||||
if playlistid < 0 or playlist[0][‘type’] == ‘unknown’:
|
if playlistid < 0 or playlist[0]['type'] == 'unknown':
|
||||||
xbmc.Playlist.Clear({“playlistid”:0})
|
xbmc.Playlist.Clear({"playlistid":0})
|
||||||
xbmc.Playlist.Add({“playlistid”:0, “item”:{“songid”:int(id)}})
|
xbmc.Playlist.Add({"playlistid":0, "item":{"songid":int(id)}})
|
||||||
xbmc.Player.open({“item”:{“playlistid”:0}})
|
xbmc.Player.open({"item":{"playlistid":0}})
|
||||||
playlistid = 0
|
playlistid = 0
|
||||||
else:
|
else:
|
||||||
xbmc.Playlist.Add({“playlistid”:playlistid, “item”:{“songid”:int(id)}})
|
xbmc.Playlist.Add({"playlistid":playlistid, "item":{"songid":int(id)}})
|
||||||
remove_duplicates(playlistid)
|
remove_duplicates(playlistid)
|
||||||
redirect(“/juke”)
|
redirect("/juke")
|
||||||
|
|
||||||
最后一件事情是实现 remove_duplicates 调用。这并不是必须的 - 而且还有人并不喜欢这个 - 不过可以保证同一首歌不会多次出现在播放列表里。
|
最后一件事情是实现 remove_duplicates 调用。这并不是必须的 - 而且还有人并不喜欢这个 - 不过可以保证同一首歌不会多次出现在播放列表里。
|
||||||
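文中没有贴出 remove_duplicates 的具体实现。一种可行的思路(纯属示意,函数名和细节都是假设的)是先按歌曲 ID 找出播放列表里重复项的位置,再把这些位置逐个传给 xbmc.Playlist.Remove() 删除:

```python
# 示意代码:给定播放列表项(字典列表),返回应当移除的重复项位置
# 实际实现中,这些位置会逐个传给 xbmc.Playlist.Remove()
def duplicate_positions(items):
    seen = set()
    positions = []
    for pos, item in enumerate(items):
        songid = item.get("id")
        if songid in seen:
            positions.append(pos)
        else:
            seen.add(songid)
    # 从后往前删除,避免前面的项被删掉后位置发生偏移
    return sorted(positions, reverse=True)

playlist = [{"id": 1}, {"id": 2}, {"id": 1}, {"id": 3}, {"id": 2}]
to_remove = duplicate_positions(playlist)
```

注意这里特意按从大到小的顺序返回位置:如果从前往后删,后面项的位置会在每次删除后前移,导致删错。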
|
|
||||||
@ -191,40 +191,40 @@ API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列
|
|||||||
|
|
||||||
还需要处理一下 UI,不过功能已经有了。
|
还需要处理一下 UI,不过功能已经有了。
|
||||||
|
|
||||||
> ### 日志 ###
|
> **日志**
|
||||||
>
|
>
|
||||||
> 通常拿到 XBMC JSON API 并不清楚能用来做什么,而且它的文档也有点模糊。找出如何使用的一种方式是看别的遥控程序是怎么做的。如果打开日志功能,就可以在使用其他遥控程序的时候看到哪个 API 被调用了,然后就可以应用到在自己的代码里。
|
> 通常拿到 XBMC JSON API 并不清楚能用来做什么,而且它的文档也有点模糊。找出如何使用的一种方式是看别的遥控程序是怎么做的。如果打开日志功能,就可以在使用其他遥控程序的时候看到哪个 API 被调用了,然后就可以应用到在自己的代码里。
|
||||||
>
|
>
|
||||||
> 要打开日志功能,把 XBMC 媒体中心 接到显示器上,再依次进入设置 > 系统 > 调试,打开允许调试日志。在打开日志功能后,还需要登录到 XBMC 机器上(比如通过 SSH),然后就可以查看日志了。日志文件的位置应该显示在 XBMC 界面左上角。在 RaspBMC 系统里,文件位置是 /home/pi/.xbmc/temp/xbmc.log。你可以通过下面的命令实时监视哪个 API 接口被调用了:
|
> 要打开日志功能,把 Kodi 媒体中心 接到显示器上,再依次进入设置 > 系统 > 调试,打开允许调试日志。在打开日志功能后,还需要登录到 Kodi 机器上(比如通过 SSH),然后就可以查看日志了。日志文件的位置应该显示在 Kodi 界面左上角。在 RaspBMC 系统里,文件位置是 /home/pi/.xbmc/temp/xbmc.log。你可以通过下面的命令实时监视哪个 API 接口被调用了:
|
||||||
>
|
>
|
||||||
> cd /home/pi/.xbmc/temp
|
> cd /home/pi/.xbmc/temp
|
||||||
> tail -f xbmc.log | grep “JSON”
|
> tail -f xbmc.log | grep "JSON"
|
||||||
|
|
||||||
#### 增加功能 ####
|
#### 增加功能 ####
|
||||||
|
|
||||||
上面的代码都是用来播放 XBMC 媒体库里的歌曲的,但我们还希望能播放网络电台。每个插件都有自己的独立 URL 可以通过普通的 XBMC JSON 命令来获取信息。举个例子,要从电台插件里获取选中的电台,可以使用;
|
上面的代码都是用来播放 Kodi 媒体库里的歌曲的,但我们还希望能播放网络电台。每个插件都有自己的独立 URL,可以通过普通的 XBMC JSON 命令来获取信息。举个例子,要从电台插件里获取选中的电台,可以使用:
|
||||||
|
|
||||||
@route(‘/radio/’)
|
@route('/radio/')
|
||||||
def index():
|
def index():
|
||||||
my_stations = xbmc.Files.GetDirectory({“directory”:”plugin://plugin.audio.radio_de/stations/my/”, “properties”:
|
my_stations = xbmc.Files.GetDirectory({"directory":"plugin://plugin.audio.radio_de/stations/my/", "properties":
|
||||||
[“title”,”thumbnail”,”playcount”,”artist”,”album”,”episode”,”season”,”showtitle”]})
|
["title","thumbnail","playcount","artist","album","episode","season","showtitle"]})
|
||||||
if ‘result’ in my_stations.keys():
|
if 'result' in my_stations.keys():
|
||||||
return template(‘radio’, stations=my_stations[‘result’][‘files’])
|
return template('radio', stations=my_stations['result']['files'])
|
||||||
else:
|
else:
|
||||||
return template(‘error’, error=’radio’)
|
return template('error', error='radio')
|
||||||
|
|
||||||
这样可以返回一个可以和歌曲一样能添加到播放列表的文件。不过,这些文件能一直播下去,所以(之前说过)在添加其他歌曲的时候需要重新创建列表。
|
这样可以返回一个可以和歌曲一样能添加到播放列表的文件。不过,这些文件能一直播下去,所以(之前说过)在添加其他歌曲的时候需要重新创建列表。
|
||||||
|
|
||||||
#### 共享歌曲 ####
|
#### 共享歌曲 ####
|
||||||
|
|
||||||
除了伺服页面模版,Bottle 还支持静态文件。方便用于那些不会因为用户输入而改变的内容。可以是 CSS 文件,一张图片或是一首 MP3 歌曲。在我们的简单遥控程序里(目前)还没有任何用来美化的 CSS 或图片,不过我们增加了一个下载歌曲的途径。这个可以让媒体中心变成一个存放歌曲的 NAS 盒子。在需要传输大量数据的时候,最好还是用类似 Samba 的功能,但只是下几首歌到手机上的话使用静态文件也是很好的方式。
|
除了伺服页面模版,Bottle 还支持静态文件,方便用于那些不会因为用户输入而改变的内容。可以是 CSS 文件,一张图片或是一首 MP3 歌曲。在我们的简单遥控程序里(目前)还没有任何用来美化的 CSS 或图片,不过我们增加了一个下载歌曲的途径。这个可以让媒体中心变成一个存放歌曲的 NAS 盒子。在需要传输大量数据的时候,最好还是用类似 Samba 的功能,但只是下几首歌到手机上的话使用静态文件也是很好的方式。
|
||||||
|
|
||||||
通过歌曲 ID 来下载的 Bottle 代码:
|
通过歌曲 ID 来下载的 Bottle 代码:
|
||||||
|
|
||||||
@route(‘/download/<id>’)
|
@route('/download/<id>')
|
||||||
def index(id):
|
def index(id):
|
||||||
data = xbmc.AudioLibrary.GetSongDetails({“songid”:int(id), “properties”:[“file”]})
|
data = xbmc.AudioLibrary.GetSongDetails({"songid":int(id), "properties":["file"]})
|
||||||
full_filename = data[‘result’][‘songdetails’][‘file’]
|
full_filename = data['result']['songdetails']['file']
|
||||||
path, filename = os.path.split(full_filename)
|
path, filename = os.path.split(full_filename)
|
||||||
return static_file(filename, root=path, download=True)
|
return static_file(filename, root=path, download=True)
|
||||||
|
|
||||||
@ -232,13 +232,13 @@ API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列
|
|||||||
|
|
||||||
我们已经把所有的代码过了一遍,不过还需要一点工作来把它们集合到一起。可以自己去 GitHub 页面 [https://github.com/ben-ev/xbmc-remote][3] 看下。
|
我们已经把所有的代码过了一遍,不过还需要一点工作来把它们集合到一起。可以自己去 GitHub 页面 [https://github.com/ben-ev/xbmc-remote][3] 看下。
|
||||||
|
|
||||||
> ### 设置 ###
|
> **设置**
|
||||||
>
|
>
|
||||||
> 我们的遥控程序已经开发完成,还需要保证让它在媒体中心每次开机的时候都能启动。有几种方式,最简单的是在 /etc/rc.local 里增加一行命令来启动。我们的文件位置在 /opt/xbmc-remote/remote.py,其他文件也和它一起。然后在 /etc/rc.local 最后的 exit 0 之前增加了下面一行。
|
> 我们的遥控程序已经开发完成,还需要保证让它在媒体中心每次开机的时候都能启动。有几种方式,最简单的是在 /etc/rc.local 里增加一行命令来启动。我们的文件位置在 /opt/xbmc-remote/remote.py,其他文件也和它一起。然后在 /etc/rc.local 最后的 exit 0 之前增加了下面一行。
|
||||||
>
|
>
|
||||||
> cd /opt/xbmc-remote && python remote.py &
|
> cd /opt/xbmc-remote && python remote.py &
|
||||||
|
|
||||||
> ### GitHub ###
|
> **GitHub**
|
||||||
>
|
>
|
||||||
> 这个项目目前还只是个架子,但是 - 我们运营杂志就意味着没有太多自由时间来编程。不过,我们启动了一个 GitHub 项目,希望能持续完善, 而如果你觉得这个项目有用的话,欢迎做出贡献。
|
> 这个项目目前还只是个架子,但是我们运营杂志,就意味着没有太多自由时间来编程。不过,我们启动了一个 GitHub 项目,希望能持续完善,而如果你觉得这个项目有用的话,欢迎做出贡献。
|
||||||
>
|
>
|
||||||
@ -252,7 +252,7 @@ via: http://www.linuxvoice.com/xbmc-build-a-remote-control/
|
|||||||
|
|
||||||
作者:[Ben Everard][a]
|
作者:[Ben Everard][a]
|
||||||
译者:[zpl1025](https://github.com/zpl1025)
|
译者:[zpl1025](https://github.com/zpl1025)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
@ -1,95 +1,92 @@
|
|||||||
用这些专用工具让你截图更简单
|
用这些专用工具让你截图更简单
|
||||||
================================================================================
|
================================================================================
|
||||||
"一图胜过千万句",这句二十世纪早期在美国应运而生的名言,说的是一张单一的静止图片所蕴含的信息足以匹敌大量的描述性文字。本质上说,图片所传递的信息量的确是比文字更有效更高效。
|
“一图胜千言”,这句二十世纪早期在美国应运而生的名言,说的是一张单一的静止图片所蕴含的信息足以匹敌大量的描述性文字。本质上说,图片所传递的信息量的确是比文字更有效更高效。
|
||||||
|
|
||||||
截图(或抓帧)是一种捕捉自计算机所录制可视化设备输出的快照或图片,屏幕捕捉软件能让计算机获取到截图。此类软件有很多用处,因为一张图片能很好地说明计算机软件的操作,截图在软件开发过程和文档中扮演了一个很重要的角色。或者,如果你的电脑有了个技术性问题,一张截图能让技术支持理解你碰到的这个问题。要写好计算机相关的文章、文档和教程,没有一款好的截图工具是几乎不可能的。如果你想在保存屏幕上任意一些零星的信息,特别是不方便打字时,截图也很有用。
|
截图(或抓帧)是一种捕捉自计算机的快照或图片,用来记录可视设备的输出。屏幕捕捉软件能从计算机中获取到截图。此类软件有很多用处,因为一张图片能很好地说明计算机软件的操作,截图在软件开发过程和文档中扮演了一个很重要的角色。或者,如果你的电脑有了技术性问题,一张截图能让技术支持理解你碰到的这个问题。要写好计算机相关的文章、文档和教程,没有一款好的截图工具是几乎不可能的。如果你想保存你放在屏幕上的一些零星的信息,特别是不方便打字时,截图也很有用。
|
||||||
|
|
||||||
在开源世界,Linux有许多专注于截图功能的工具供选择,基于图形的和控制台的都有。如果要说一个功能丰富的专用截图工具,那就来看看Shutter吧。这款工具是小型开源工具的杰出代表,当然也有其它的选择。
|
在开源世界,Linux有许多专注于截图功能的工具供选择,基于图形的和控制台的都有。如果要说一个功能丰富的专用截图工具,看起来没有能超过Shutter的。这款工具是小型开源工具的杰出代表,但是也有其它的不错替代品可以选择。
|
||||||
|
|
||||||
屏幕捕捉功能不仅仅只有专业的工具提供,GIMP和ImageMagick这两款主攻图像处理的工具,也能提供像样的屏幕捕捉功能。
|
屏幕捕捉功能不仅仅只有专门的工具提供,GIMP和ImageMagick这两款主攻图像处理的工具,也能提供像样的屏幕捕捉功能。
|
||||||
|
|
||||||
----------
|
|
||||||
|
|
||||||
### Shutter ###
|
### Shutter ###
|
||||||
|
|
||||||
![Shutter in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-Shutter1.png)
|
![Shutter in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-Shutter1.png)
|
||||||
|
|
||||||
Shutter是一款功能丰富的截图软件。你可以给你的特殊区域、窗口、整个屏幕甚至是网站截图 - 在其中应用不用的效果,比如用高亮的点在上面绘图,然后上传至一个图片托管网站,一切尽在这个小窗口内。
|
Shutter是一款功能丰富的截图软件。你可以对特定区域、窗口、整个屏幕甚至是网站截图 - 并为其应用不同的效果,比如用高亮的点在上面绘图,然后上传至一个图片托管网站,一切尽在这个小窗口内。
|
||||||
|
|
||||||
包含特性:
|
包含特性:
|
||||||
|
|
||||||
|
|
||||||
- 截图范围:
|
- 截图范围:
|
||||||
- 一块特殊区域
|
- 一个特定区域
|
||||||
- 窗口
|
- 窗口
|
||||||
- 完整的桌面
|
- 完整的桌面
|
||||||
- 脚本生成的网页
|
- 脚本生成的网页
|
||||||
- 在截图中应用不同效果
|
- 在截图中应用不同效果
|
||||||
- 热键
|
- 热键
|
||||||
- 打印
|
- 打印
|
||||||
- 直接截图或指定一个延迟时间
|
- 直接截图或指定延迟时间截图
|
||||||
- 将截图保存至一个指定目录并用一个简便方法重命名它(用特殊的通配符)
|
- 将截图保存至一个指定目录并用一个简便方法重命名它(用指定通配符)
|
||||||
- 完成集成在GNOME桌面中(TrayIcon等等)
|
- 完全集成在GNOME桌面中(TrayIcon等等)
|
||||||
- 当你截了一张图并以%设置了尺寸时,直接生成缩略图
|
- 截图时可直接按尺寸的百分比生成缩略图
|
||||||
- Shutter会话选项:
|
- Shutter会话集:
|
||||||
- 会话中保持所有截图的痕迹
|
- 跟踪会话中所有的截图
|
||||||
- 复制截图至剪贴板
|
- 复制截图至剪贴板
|
||||||
- 打印截图
|
- 打印截图
|
||||||
- 删除截图
|
- 删除截图
|
||||||
- 重命名文件
|
- 重命名文件
|
||||||
- 直接上传你的文件至图像托管网站(比如http://ubuntu-pics.de),取回所有需要的图像并将它们与其他人分享
|
- 直接上传你的文件至图像托管网站(比如 http://ubuntu-pics.de ),得到链接并将它们与其他人分享
|
||||||
- 用内置的绘画工具直接编辑截图
|
- 用内置的绘画工具直接编辑截图
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
- 主页: [shutter-project.org][1]
|
- 主页: [shutter-project.org][1]
|
||||||
- 开发者: Mario Kemper和Shutter团队
|
- 开发者: Mario Kemper和Shutter团队
|
||||||
- 许可证: GNU GPL v3
|
- 许可证: GNU GPL v3
|
||||||
- 版本号: 0.93.1
|
- 版本号: 0.93.1
|
||||||
|
|
||||||
----------
|
|
||||||
|
|
||||||
### HotShots ###
|
### HotShots ###
|
||||||
|
|
||||||
![HotShots in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-HotShots.png)
|
![HotShots in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-HotShots.png)
|
||||||
|
|
||||||
HotShots是一款捕捉屏幕并能以各种图片格式保存的软件,同时也能添加注释和图形数据(箭头、行、文本, ...)。
|
HotShots是一款捕捉屏幕并能以各种图片格式保存的软件,同时也能添加注释和图形数据(箭头、线条、文本等)。
|
||||||
|
|
||||||
你也可以把你的作品上传到网上(FTP/一些web服务),HotShots是用Qt开发而成的。
|
你也可以把你的作品上传到网上(FTP/一些web服务),HotShots是用Qt开发而成的。
|
||||||
|
|
||||||
HotShots无法从Ubuntu的Software Center中获取,不过用以下命令可以轻松地来安装它:
|
HotShots无法从Ubuntu的Software Center中获取,不过用以下命令可以轻松地来安装它:
|
||||||
|
|
||||||
sudo add-apt-repository ppa:ubuntuhandbook1/apps
|
sudo add-apt-repository ppa:ubuntuhandbook1/apps
|
||||||
|
|
||||||
sudo apt-get update
|
sudo apt-get update
|
||||||
|
|
||||||
sudo apt-get install hotshots
|
sudo apt-get install hotshots
|
||||||
|
|
||||||
包含特性:
|
包含特性:
|
||||||
|
|
||||||
- 简单易用
|
- 简单易用
|
||||||
- 全功能使用
|
- 功能完整
|
||||||
- 嵌入式编辑器
|
- 内置编辑器
|
||||||
- 热键
|
- 热键
|
||||||
- 内置放大功能
|
- 内置放大功能
|
||||||
- 徒手和多屏捕捉
|
- 手动控制和多屏捕捉
|
||||||
- 支持输出格式:Black & Whte (bw), Encapsulated PostScript (eps, epsf), Encapsulated PostScript Interchange (epsi), OpenEXR (exr), PC Paintbrush Exchange (pcx), Photoshop Document (psd), ras, rgb, rgba, Irix RGB (sgi), Truevision Targa (tga), eXperimental Computing Facility (xcf), Windows Bitmap (bmp), DirectDraw Surface (dds), Graphic Interchange Format (gif), Icon Image (ico), Joint Photographic Experts Group 2000 (jp2), Joint Photographic Experts Group (jpeg, jpg), Multiple-image Network Graphics (mng), Portable Pixmap (ppm), Scalable Vector Graphics (svg), svgz, Tagged Image File Format (tif, tiff), webp, X11 Bitmap (xbm), X11 Pixmap (xpm), and Khoros Visualization (xv)
|
- 支持输出格式:Black & Whte (bw), Encapsulated PostScript (eps, epsf), Encapsulated PostScript Interchange (epsi), OpenEXR (exr), PC Paintbrush Exchange (pcx), Photoshop Document (psd), ras, rgb, rgba, Irix RGB (sgi), Truevision Targa (tga), eXperimental Computing Facility (xcf), Windows Bitmap (bmp), DirectDraw Surface (dds), Graphic Interchange Format (gif), Icon Image (ico), Joint Photographic Experts Group 2000 (jp2), Joint Photographic Experts Group (jpeg, jpg), Multiple-image Network Graphics (mng), Portable Pixmap (ppm), Scalable Vector Graphics (svg), svgz, Tagged Image File Format (tif, tiff), webp, X11 Bitmap (xbm), X11 Pixmap (xpm), and Khoros Visualization (xv)
|
||||||
- 国际化支持:巴斯克语、中文、捷克语、法语、加利西亚语、德语、希腊语、意大利语、日语、立陶宛语、波兰语、葡萄牙语、罗马尼亚语、俄罗斯语、塞尔维亚语、僧伽罗语、斯洛伐克语、西班牙语、土耳其语、乌克兰语和越南语
|
- 国际化支持:巴斯克语、中文、捷克语、法语、加利西亚语、德语、希腊语、意大利语、日语、立陶宛语、波兰语、葡萄牙语、罗马尼亚语、俄罗斯语、塞尔维亚语、僧伽罗语、斯洛伐克语、西班牙语、土耳其语、乌克兰语和越南语
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
- 主页: [thehive.xbee.net][2]
|
- 主页: [thehive.xbee.net][2]
|
||||||
- 开发者 xbee
|
- 开发者 xbee
|
||||||
- 许可证: GNU GPL v2
|
- 许可证: GNU GPL v2
|
||||||
- 版本号: 2.2.0
|
- 版本号: 2.2.0
|
||||||
|
|
||||||
----------
|
|
||||||
|
|
||||||
### ScreenCloud ###
|
### ScreenCloud ###
|
||||||
|
|
||||||
![ScreenCloud in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-ScreenCloud.png)
|
![ScreenCloud in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-ScreenCloud.png)
|
||||||
|
|
||||||
ScreenCloud是一款易于使用的开源截图工具。
|
ScreenCloud是一款易于使用的开源截图工具。
|
||||||
|
|
||||||
在这款软件中,用户可以用三个热键中的其中一个或只需点击ScreenCloud托盘图标就能进行截图,用户也可以自行选择保存截图的地址。
|
在这款软件中,用户可以用三个热键之一或只需点击ScreenCloud托盘图标就能进行截图,用户也可以自行选择保存截图的地址。
|
||||||
|
|
||||||
如果你选择上传你的截图到screencloud主页,链接会自动复制到你的剪贴板上,你能通过email或在一个聊天对话框里和你的朋友同事分享它,他们肯定会点击这个链接来看你的截图的。
|
如果你选择上传你的截图到screencloud网站,链接会自动复制到你的剪贴板上,你能通过email或在一个聊天对话框里和你的朋友同事分享它,他们肯定会点击这个链接来看你的截图的。
|
||||||
|
|
||||||
包含特性:
|
包含特性:
|
||||||
|
|
||||||
@ -106,18 +103,18 @@ ScreenCloud是一款易于使用的开源截图工具。
|
|||||||
- 插件支持:保存至Dropbox,Imgur等等
|
- 插件支持:保存至Dropbox,Imgur等等
|
||||||
- 支持上传至FTP和SFTP服务器
|
- 支持上传至FTP和SFTP服务器
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
- 主页: [screencloud.net][3]
|
- 主页: [screencloud.net][3]
|
||||||
- 开发者: Olav S Thoresen
|
- 开发者: Olav S Thoresen
|
||||||
- 许可证: GNU GPL v2
|
- 许可证: GNU GPL v2
|
||||||
- 版本号: 1.2.1
|
- 版本号: 1.2.1
|
||||||
|
|
||||||
----------
|
|
||||||
|
|
||||||
### KSnapshot ###
|
### KSnapshot ###
|
||||||
|
|
||||||
![KSnapShot in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-KSnapshot.png)
|
![KSnapShot in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-KSnapshot.png)
|
||||||
|
|
||||||
KSnapshot也是一款易于使用的截图工具,它能给整个桌面、一个单一窗口、窗口的一部分或一块所选区域捕捉图像。,图像能以各种不用的格式保存。
|
KSnapshot也是一款易于使用的截图工具,它能给整个桌面、单一窗口、窗口的一部分或一块所选区域捕捉图像。图像能以各种不同格式保存。
|
||||||
|
|
||||||
KSnapshot也允许用户用热键来进行截图。除了保存截图之外,它也可以被复制到剪贴板或用任何与图像文件关联的程序打开。
|
KSnapshot也允许用户用热键来进行截图。除了保存截图之外,它也可以被复制到剪贴板或用任何与图像文件关联的程序打开。
|
||||||
|
|
||||||
@ -127,10 +124,12 @@ KSnapshot是KDE 4图形模块的一部分。
|
|||||||
|
|
||||||
- 以多种格式保存截图
|
- 以多种格式保存截图
|
||||||
- 延迟截图
|
- 延迟截图
|
||||||
- 剔除窗口装饰图案
|
- 剔除窗口装饰(边框、菜单等)
|
||||||
- 复制截图至剪贴板
|
- 复制截图至剪贴板
|
||||||
- 热键
|
- 热键
|
||||||
- 能用它的D-Bus界面进行脚本化
|
- 能用它的D-Bus接口进行脚本化
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
- 主页: [www.kde.org][4]
|
- 主页: [www.kde.org][4]
|
||||||
- 开发者: KDE, Richard J. Moore, Aaron J. Seigo, Matthias Ettrich
|
- 开发者: KDE, Richard J. Moore, Aaron J. Seigo, Matthias Ettrich
|
||||||
@ -142,7 +141,7 @@ KSnapshot是KDE 4图形模块的一部分。
|
|||||||
via: http://www.linuxlinks.com/article/2015062316235249/ScreenCapture.html
|
via: http://www.linuxlinks.com/article/2015062316235249/ScreenCapture.html
|
||||||
|
|
||||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
@ -4,7 +4,7 @@ Linux mkdir、tar 和 kill 命令的 4 个有用小技巧
|
|||||||
|
|
||||||
![有用的 Linux 小技巧](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Useful-Tips.jpg)
|
![有用的 Linux 小技巧](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Useful-Tips.jpg)
|
||||||
|
|
||||||
4 个有用的 Linux 小技巧
|
*4 个有用的 Linux 小技巧*
|
||||||
|
|
||||||
### 1. 假设你要创建一个类似于下面很长的/复杂的目录树。实现这最有效的方法是什么呢? ###
|
### 1. 假设你要创建一个类似于下面很长的/复杂的目录树。实现这最有效的方法是什么呢? ###
|
||||||
|
|
||||||
@ -37,9 +37,9 @@ Linux mkdir、tar 和 kill 命令的 4 个有用小技巧
|
|||||||
|
|
||||||
![检查目录结构](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Directory-Structure.png)
|
![检查目录结构](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Directory-Structure.png)
|
||||||
|
|
||||||
检查目录结构
|
*检查目录结构*
|
||||||
|
|
||||||
我们可以用上面的方式创建任意复制的目录树结构。注意这仅仅是一个普通的命令,但是用 ‘{}’ 来创建层级目录。需要的时候如果在 shell 脚本中使用是非常有用的。
|
我们可以用上面的方式创建任意复杂的目录树结构。注意这仅仅是一个普通的命令,但是用 ‘{}’ 来创建层级目录。需要的时候如果在 shell 脚本中使用是非常有用的。
|
||||||
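如果是在 Python 脚本而不是 shell 脚本里,也可以用标准库的 os.makedirs() 一次性建出整棵层级目录,效果类似带花括号展开的 mkdir -p(示意代码,目录名是随意取的):

```python
import os
import tempfile

# 示意:一条调用创建整个层级,效果类似 mkdir -p tecmint/{etc,opt,var/log}
base = tempfile.mkdtemp()
for leaf in ("tecmint/etc", "tecmint/opt", "tecmint/var/log"):
    os.makedirs(os.path.join(base, leaf), exist_ok=True)

created = os.path.isdir(os.path.join(base, "tecmint", "var", "log"))
```

exist_ok=True 让已存在的目录不报错,这一点和 mkdir 的 -p 选项是对应的。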
|
|
||||||
### 2. 在桌面(/home/$USER/Desktop)创建一个文件(例如 test)并填入以下内容。 ###
|
### 2. 在桌面(/home/$USER/Desktop)创建一个文件(例如 test)并填入以下内容。 ###
|
||||||
|
|
||||||
@ -109,7 +109,7 @@ c. 解压 tar 包。
|
|||||||
|
|
||||||
我们也可以采用另外一种方式。
|
我们也可以采用另外一种方式。
|
||||||
|
|
||||||
我们可以在 Tar 包所在位置解压并复制/移动解压后的文件到所需的目标位置,例如:
|
我们也可以在 Tar 包所在位置解压并复制/移动解压后的文件到所需的目标位置,例如:
|
||||||
|
|
||||||
$ tar -jxvf firefox-37.0.2.tar.bz2
|
$ tar -jxvf firefox-37.0.2.tar.bz2
|
||||||
$ cp -R firefox/ /opt/
|
$ cp -R firefox/ /opt/
|
||||||
@ -122,7 +122,7 @@ c. 解压 tar 包。
|
|||||||
|
|
||||||
-C 选项提取文件到指定目录(这里是 /opt/)。
|
-C 选项提取文件到指定目录(这里是 /opt/)。
|
||||||
|
|
||||||
这并不是关于选项(-C)的问题,而是习惯的问题。养成使用带 -C 选项 tar 命令的习惯。这会使你的工作更加轻松。从现在开始不要再移动归档文件或复制/移动解压后的文件了,在 Downloads 文件夹保存 tar 包并解压到你想要的任何地方吧。
|
这并不是关于选项(-C)的问题,**而是习惯的问题**。养成使用带 -C 选项 tar 命令的习惯。这会使你的工作更加轻松。从现在开始不要再移动归档文件或复制/移动解压后的文件了,在 Downloads 文件夹保存 tar 包并解压到你想要的任何地方吧。
|
||||||
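tar 的 -C 选项在 Python 标准库的 tarfile 模块里对应 extractall() 的 path 参数。下面的小例子(文件名是随意取的)先制作一个演示用的压缩包,再演示同样的"解压到指定目录"的习惯:

```python
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "demo.tar.bz2")

# 先制作一个演示用的 tar.bz2 包
payload = os.path.join(workdir, "hello.txt")
with open(payload, "w") as f:
    f.write("hello")
with tarfile.open(archive, "w:bz2") as tar:
    tar.add(payload, arcname="demo/hello.txt")

# 相当于 tar -jxvf demo.tar.bz2 -C <目标目录>
target = os.path.join(workdir, "opt")
with tarfile.open(archive, "r:bz2") as tar:
    tar.extractall(path=target)

extracted = os.path.exists(os.path.join(target, "demo", "hello.txt"))
```

和命令行一样,压缩包本身留在原处,解压结果直接落在目标目录里,不需要事先移动归档文件。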
|
|
||||||
### 4. 常规方式我们怎样杀掉一个进程? ###
|
### 4. 常规方式我们怎样杀掉一个进程? ###
|
||||||
|
|
||||||
@ -132,7 +132,7 @@ c. 解压 tar 包。
|
|||||||
|
|
||||||
#### 输出样例 ####
|
#### 输出样例 ####
|
||||||
|
|
||||||
1006 ? 00:00:00 apache2
|
1006 ? 00:00:00 apache2
|
||||||
2702 ? 00:00:00 apache2
|
2702 ? 00:00:00 apache2
|
||||||
2703 ? 00:00:00 apache2
|
2703 ? 00:00:00 apache2
|
||||||
2704 ? 00:00:00 apache2
|
2704 ? 00:00:00 apache2
|
||||||
@ -188,7 +188,7 @@ c. 解压 tar 包。
|
|||||||
|
|
||||||
它没有输出任何东西并返回到窗口意味着没有名称中包含 apache2 的进程在运行。
|
它没有输出任何东西并返回到窗口意味着没有名称中包含 apache2 的进程在运行。
|
||||||
|
|
||||||
这就是我要说的所有东西。上面讨论的点肯定远远不够,但也肯定对你有所帮助。我们不仅仅是介绍教程使你学到一些新的东西,更重要的是想告诉你 ‘在同样的情况下如何变得更有效率’。在下面的评论框中告诉我们你的反馈吧。保持联系,继续评论。
|
这就是我要说的所有东西。上面讨论的点肯定远远不够,但也肯定对你有所帮助。我们不仅仅是介绍教程使你学到一些新的东西,更重要的是想告诉你 ‘**在同样的情况下如何变得更有效率**’。在下面的评论框中告诉我们你的反馈吧。保持联系,继续评论。
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
@ -196,7 +196,7 @@ via: http://www.tecmint.com/mkdir-tar-and-kill-commands-in-linux/
|
|||||||
|
|
||||||
作者:[Avishek Kumar][a]
|
作者:[Avishek Kumar][a]
|
||||||
译者:[ictlyh](https://github.com/ictlyh)
|
译者:[ictlyh](https://github.com/ictlyh)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
@ -1,47 +1,46 @@
|
|||||||
使用去重加密工具来备份
|
使用这些去重加密工具来备份你的数据
|
||||||
================================================================================
|
================================================================================
|
||||||
在体积和价值方面,数据都在增长。快速而可靠地备份和恢复数据正变得越来越重要。社会已经适应了技术的广泛使用,并懂得了如何依靠电脑和移动设备,但很少有人能够处理丢失重要数据的现实。在遭受数据损失的公司中,30% 的公司将在一年内损失一半市值,70% 的公司将在五年内停止交易。这更加凸显了数据的价值。
|
|
||||||
|
|
||||||
随着数据在体积上的增长,提高存储利用率尤为重要。In Computing(注:这里不知如何翻译),数据去重是一种特别的数据压缩技术,因为它可以消除重复数据的拷贝,所以这个技术可以提高存储利用率。
|
无论是体积还是价值,数据都在不断增长。快速而可靠地备份和恢复数据正变得越来越重要。社会已经适应了技术的广泛使用,并懂得了如何依靠电脑和移动设备,但很少有人能够面对丢失重要数据的现实。在遭受数据损失的公司中,30% 的公司将在一年内损失一半市值,70% 的公司将在五年内停止交易。这更加凸显了数据的价值。
|
||||||
|
|
||||||
数据并不仅仅只有其创造者感兴趣。政府、竞争者、犯罪分子、偷窥者可能都热衷于获取你的数据。他们或许想偷取你的数据,从你那里进行敲诈,或看你正在做什么。对于保护你的数据,加密是非常必要的。
|
随着数据在体积上的增长,提高存储利用率尤为重要。从计算机的角度说,数据去重是一种特别的数据压缩技术,因为它可以消除重复数据的拷贝,所以这个技术可以提高存储利用率。
|
||||||
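数据去重的基本思路可以用几行 Python 来示意:把数据切成块,用内容哈希做键,相同的块只存一份(这只是原理演示,真实备份工具的分块和哈希策略要复杂得多):

```python
import hashlib

# 示意:固定大小分块 + SHA-256 内容寻址存储
def store_chunks(data, store, chunk_size=4):
    refs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # 重复的块不会再占空间
        refs.append(digest)
    return refs

store = {}
refs_a = store_chunks(b"abcdabcdabcd", store)   # 三个内容相同的块
refs_b = store_chunks(b"abcdxyz1", store)       # 第一块与上面重复
unique_chunks = len(store)
```

每次备份只需记录一串哈希引用,真正新增的存储量只来自从未见过的块,这正是去重备份适合每日运行的原因。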
|
|
||||||
所以,解决方法是我们需要一个去重加密备份软件。
|
数据并不仅仅只有其创造者感兴趣。政府、竞争者、犯罪分子、偷窥者可能都热衷于获取你的数据。他们或许想偷取你的数据,从你那里进行敲诈,或看你正在做什么。因此,对于保护你的数据,加密是非常必要的。
|
||||||
|
|
||||||
对于所有的用户而言,做文件备份是一件非常必要的事,至今为止许多用户还没有采取足够的措施来保护他们的数据。一台电脑不论是工作在一个合作的环境中,还是供私人使用,机器的硬盘可能在没有任何警告的情况下挂掉。另外,有些数据丢失可能是人为的错误所引发的。如果没有做经常性的备份,数据也可能不可避免地失去掉,即使请了专业的数据恢复公司来帮忙。
|
所以,解决方法是我们需要一个可以去重的加密备份软件。
|
||||||
|
|
||||||
|
对于所有的用户而言,做文件备份是一件非常必要的事,至今为止许多用户还没有采取足够的措施来保护他们的数据。一台电脑不论是工作在一个合作的环境中,还是供私人使用,机器的硬盘可能在没有任何警告的情况下挂掉。另外,有些数据丢失可能是人为的错误所引发的。如果没有做经常性的备份,数据也可能不可避免地丢失,即使请了专业的数据恢复公司来帮忙。
|
||||||
|
|
||||||
这篇文章将对 6 个去重加密备份工具进行简要的介绍。
|
这篇文章将对 6 个去重加密备份工具进行简要的介绍。
|
||||||
----------
|
|
||||||
|
|
||||||
### Attic ###
|
### Attic ###
|
||||||
|
|
||||||
Attic 是一个可用于去重、加密,验证完整性的用 Python 写的压缩备份程序。Attic 的主要目标是提供一个高效且安全的方式来备份数据。Attic 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。
|
Attic 是一个可用于去重、加密,验证完整性的压缩备份程序,它是用 Python 写的。Attic 的主要目标是提供一个高效且安全的方式来备份数据。Attic 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。
|
||||||
|
|
||||||
其特点有:
|
其特点有:
|
||||||
|
|
||||||
- 易用
|
- 易用
|
||||||
- 可高效利用存储空间,通过检查冗余的数据,数据块大小的去重被用来减少存储所用的空间
|
- 可高效利用存储空间,通过检查冗余的数据,对可变块大小的去重可以减少存储所用的空间
|
||||||
- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来检查
|
- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来校验
|
||||||
- 使用 SDSH 来进行离线备份
|
- 使用 SDSH 来进行离线备份
|
||||||
- 备份可作为文件系统来挂载
|
- 备份可作为文件系统来挂载
|
||||||
|
|
||||||
网站: [attic-backup.org][1]
|
网站: [attic-backup.org][1]
|
||||||
|
|
||||||
----------
|
|
||||||
|
|
||||||
### Borg ###
|
### Borg ###
|
||||||
|
|
||||||
Borg 是 Attic 的分支。它是一个安全的开源备份程序,被设计用来高效地存储那些新的或修改过的数据。
|
Borg 是 Attic 的一个分支。它是一个安全的开源备份程序,被设计用来高效地存储那些新的或修改过的数据。
|
||||||
|
|
||||||
Borg 的主要目标是提供一个高效、安全的方式来存储数据。Borg 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。认证加密使得它适用于不完全可信的目标的存储。
|
Borg 的主要目标是提供一个高效、安全的方式来存储数据。Borg 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。认证加密使得它适用于存储在不完全可信的位置。
|
||||||
|
|
||||||
Borg 由 Python 写成。Borg 于 2015 年 5 月被创造出来,为了回应让新的代码或重大的改变带入 Attic 的困难。
|
Borg 由 Python 写成。Borg 于 2015 年 5 月被创造出来,是为了解决让新的代码或重大的改变带入 Attic 的困难。
|
||||||
|
|
||||||
其特点包括:
|
其特点包括:
|
||||||
|
|
||||||
- 易用
|
- 易用
|
||||||
- 可高效利用存储空间,通过检查冗余的数据,数据块大小的去重被用来减少存储所用的空间
|
- 可高效利用存储空间,通过检查冗余的数据,对可变块大小的去重可以减少存储所用的空间
|
||||||
- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来检查
|
- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来校验
|
||||||
- 使用 SDSH 来进行离线备份
|
- 使用 SDSH 来进行离线备份
|
||||||
- 备份可作为文件系统来挂载
|
- 备份可作为文件系统来挂载
|
||||||
|
|
||||||
@ -49,36 +48,32 @@ Borg 与 Attic 不兼容。
|
|||||||
|
|
||||||
网站: [borgbackup.github.io/borgbackup][2]
|
网站: [borgbackup.github.io/borgbackup][2]
|
||||||
|
|
||||||
----------
|
|
||||||
|
|
||||||
### Obnam ###
|
### Obnam ###
|
||||||
|
|
||||||
Obnam (OBligatory NAMe) 是一个易用、安全的基于 Python 的备份程序。备份可被存储在本地硬盘或通过 SSH SFTP 协议存储到网上。若使用了备份服务器,它并不需要任何特殊的软件,只需要使用 SSH 即可。
|
Obnam (OBligatory NAMe) 是一个易用、安全的基于 Python 的备份程序。备份可被存储在本地硬盘或通过 SSH SFTP 协议存储到网上。若使用了备份服务器,它并不需要任何特殊的软件,只需要使用 SSH 即可。
|
||||||
|
|
||||||
Obnam 通过将数据数据分成数据块,并单独存储它们来达到去重的目的,每次通过增量备份来生成备份,每次备份的生成就像是一次新的快照,但事实上是真正的增量备份。Obnam 由 Lars Wirzenius 开发。
|
Obnam 通过将数据分成数据块,并单独存储它们来达到去重的目的,每次通过增量备份来生成备份,每次备份的生成就像是一次新的快照,但事实上是真正的增量备份。Obnam 由 Lars Wirzenius 开发。
|
||||||
|
|
||||||
其特点有:
|
其特点有:
|
||||||
|
|
||||||
- 易用
|
- 易用
|
||||||
- 快照备份
|
- 快照备份
|
||||||
- 数据去重,跨文件,生成备份
|
- 数据去重,跨文件,然后生成备份
|
||||||
- 可使用 GnuPG 来加密备份
|
- 可使用 GnuPG 来加密备份
|
||||||
- 向一个单独的仓库中备份多个客户端的数据
|
- 向一个单独的仓库中备份多个客户端的数据
|
||||||
- 备份检查点 (创建一个保存点,以每 100MB 或其他容量)
|
- 备份检查点(每 100MB 或其他容量创建一个保存点)
|
||||||
- 包含多个选项来调整性能,包括调整 lru-size 或 upload-queue-size
|
- 包含多个选项来调整性能,包括调整 lru-size 或 upload-queue-size
|
||||||
- 支持 MD5 校验和算法来识别重复的数据块
|
- 支持 MD5 校验算法来识别重复的数据块
|
||||||
- 通过 SFTP 将备份存储到一个服务器上
|
- 通过 SFTP 将备份存储到一个服务器上
|
||||||
- 同时支持 push(即在客户端上运行) 和 pull(即在服务器上运行)
|
- 同时支持 push(即在客户端上运行) 和 pull(即在服务器上运行)
|
||||||
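Obnam "将数据分成数据块并单独存储"的去重思路,可以用一个以 MD5 摘要为键的小型块仓库来示意。以下是纯 Python 草图,块大小等均为随意取值,并非 Obnam 的真实存储格式:

```python
import hashlib

class ChunkStore:
    """以数据块的 MD5 摘要为键识别重复块的最简示意。"""

    def __init__(self, chunk_size: int = 64):
        self.chunk_size = chunk_size
        self.chunks = {}                          # MD5 摘要 -> 数据块

    def put(self, data: bytes) -> list:
        """存入数据,返回可用于重建数据的摘要列表。"""
        refs = []
        for i in range(0, len(data), self.chunk_size):
            block = data[i:i + self.chunk_size]
            digest = hashlib.md5(block).hexdigest()
            self.chunks.setdefault(digest, block)  # 重复块只存一份
            refs.append(digest)
        return refs

    def get(self, refs: list) -> bytes:
        """按摘要列表取回并拼接出原始数据。"""
        return b"".join(self.chunks[d] for d in refs)
```

把同一份数据备份两次,仓库里不会新增任何块,只会多出一份很小的摘要列表,这正是每次备份"像新快照、实为增量"的原因。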
|
|
||||||
网站: [obnam.org][3]
|
网站: [obnam.org][3]
|
||||||
|
|
||||||
----------
|
|
||||||
|
|
||||||
### Duplicity ###
|
### Duplicity ###
|
||||||
|
|
||||||
Duplicity 持续地以 tar 文件格式备份文件和目录,并使用 GnuPG 来进行加密,同时将它们上传到远程(或本地)的文件服务器上。它可以使用 ssh/scp, 本地文件获取, rsync, ftp, 和 Amazon S3 等来传递数据。
|
Duplicity 以 tar 文件格式增量备份文件和目录,并使用 GnuPG 来进行加密,同时将它们上传到远程(或本地)的文件服务器上。它可以使用 ssh/scp、本地文件获取、rsync、 ftp 和 Amazon S3 等来传递数据。
|
||||||
|
|
||||||
因为 duplicity 使用了 librsync, 增加的存档高效地利用了存储空间,且只记录自从上次备份依赖改变的那部分文件。由于该软件使用 GnuPG 来机密或对这些归档文件进行进行签名,这使得它们免于服务器的监视或修改。
|
因为 duplicity 使用了 librsync,增量存档可以高效地利用存储空间,且只记录自从上次备份以来改变的那部分文件。由于该软件使用 GnuPG 来加密这些归档文件或对其进行签名,这使得它们免于服务器的监视或修改。
|
||||||
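"只记录自上次备份以来改变的那部分"可以用固定块签名比对来粗略示意。librsync 实际使用滚动校验和,比这精细得多;以下仅是概念草图,块大小为随意假设:

```python
import hashlib

BLOCK = 32  # 随意假设的块大小

def signatures(data: bytes) -> list:
    """为每个固定大小的数据块计算签名(这里简单用 MD5)。"""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(old_sigs: list, new: bytes) -> dict:
    """返回 {块号: 新数据块},只包含与旧签名不同的块。"""
    changes = {}
    for n, i in enumerate(range(0, len(new), BLOCK)):
        block = new[i:i + BLOCK]
        if n >= len(old_sigs) or old_sigs[n] != hashlib.md5(block).hexdigest():
            changes[n] = block                    # 只有变化的块进入增量
    return changes
```

对一个 128 字节的文件改动 1 个字节,产生的增量只包含受影响的那一个块,而不是整个文件。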
|
|
||||||
当前 duplicity 支持备份删除的文件,全部的 unix 权限,目录,符号链接, fifo 等。
|
当前 duplicity 支持备份删除的文件、全部的 unix 权限、目录、符号链接、fifo 等。
|
||||||
|
|
||||||
@ -101,39 +96,36 @@ duplicity 软件包还包含有 rdiffdir 工具。 Rdiffdir 是 librsync 的 rdi
|
|||||||
|
|
||||||
网站: [duplicity.nongnu.org][4]
|
网站: [duplicity.nongnu.org][4]
|
||||||
|
|
||||||
----------
|
|
||||||
|
|
||||||
### ZBackup ###
|
### ZBackup ###
|
||||||
|
|
||||||
ZBackup 是一个通用的全局去重备份工具。
|
ZBackup 是一个通用的全局去重备份工具。
|
||||||
|
|
||||||
其特点包括:
|
其特点包括:
|
||||||
|
|
||||||
- 存储数据的并行 LZMA 或 LZO 压缩,在一个仓库中,你还可以混合使用 LZMA 和 LZO
|
- 对存储数据并行进行 LZMA 或 LZO 压缩,在一个仓库中,你还可以混合使用 LZMA 和 LZO
|
||||||
- 内置对存储数据的 AES 加密
|
- 内置对存储数据的 AES 加密
|
||||||
- 可选择地删除旧的备份数据
|
- 能够删除旧的备份数据
|
||||||
- 可以使用 64 位的滚动哈希算法,使得文件冲突的数量几乎为零
|
- 可以使用 64 位的滚动哈希算法,使得文件冲突的数量几乎为零
|
||||||
- Repository consists of immutable files. No existing files are ever modified ====
|
- 仓库中存储的文件是不可修改的,已备份的文件不会被修改。
|
||||||
- 用 C++ 写成,只需少量的库文件依赖
|
- 用 C++ 写成,只需少量的库文件依赖
|
||||||
- 在生成环境中可以安全使用
|
- 在生产环境中可以安全使用
|
||||||
- 可以在不同仓库中进行数据交换而不必再进行压缩
|
- 可以在不同仓库中进行数据交换而不必再进行压缩
|
||||||
- 可以使用 64 位改进型 Rabin-Karp 滚动哈希算法
|
- 使用 64 位改进型 Rabin-Karp 滚动哈希算法
|
||||||
|
|
||||||
网站: [zbackup.org][5]
|
网站: [zbackup.org][5]
|
||||||
|
|
||||||
----------
|
|
||||||
|
|
||||||
### bup ###
|
### bup ###
|
||||||
|
|
||||||
bup 是一个用 Python 写的备份程序,其名称是 "backup" 的缩写。在 git packfile 文件的基础上, bup 提供了一个高效的方式来备份一个系统,提供快速的增量备份和全局去重(在文件中或文件里,甚至包括虚拟机镜像)。
|
bup 是一个用 Python 写的备份程序,其名称是 "backup" 的缩写。基于 git packfile 文件格式, bup 提供了一个高效的方式来备份一个系统,提供快速的增量备份和全局去重(在文件之间或文件内部,甚至包括虚拟机镜像)。
|
||||||
|
|
||||||
bup 在 LGPL 版本 2 协议下发行。
|
bup 在 LGPL 版本 2 协议下发行。
|
||||||
|
|
||||||
其特点包括:
|
其特点包括:
|
||||||
|
|
||||||
- 全局去重 (在文件中或文件里,甚至包括虚拟机镜像)
|
- 全局去重 (在文件之间或文件内部,甚至包括虚拟机镜像)
|
||||||
- 使用一个滚动的校验和算法(类似于 rsync) 来将大文件分为多个数据块
|
- 使用一个滚动的校验和算法(类似于 rsync) 来将大文件分为多个数据块
|
||||||
- 使用来自 git 的 packfile 格式
|
- 使用来自 git 的 packfile 文件格式
|
||||||
- 直接写入 packfile 文件,以此提供快速的增量备份
|
- 直接写入 packfile 文件,以此提供快速的增量备份
|
||||||
- 可以使用 "par2" 冗余来恢复冲突的备份
|
- 可以使用 "par2" 冗余来恢复冲突的备份
|
||||||
- 可以作为一个 FUSE 文件系统来挂载你的 bup 仓库
|
- 可以作为一个 FUSE 文件系统来挂载你的 bup 仓库
|
||||||
@ -145,7 +137,7 @@ bup 在 LGPL 版本 2 协议下发行。
|
|||||||
via: http://www.linuxlinks.com/article/20150628060000607/BackupTools.html
|
via: http://www.linuxlinks.com/article/20150628060000607/BackupTools.html
|
||||||
|
|
||||||
译者:[FSSlc](https://github.com/FSSlc)
|
译者:[FSSlc](https://github.com/FSSlc)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
@ -1,24 +1,24 @@
|
|||||||
PHP 安全
|
PHP 安全编程建议
|
||||||
================================================================================
|
================================================================================
|
||||||
![](http://www.codeproject.com/KB/PHP/363897/php_security.jpg)
|
![](http://www.codeproject.com/KB/PHP/363897/php_security.jpg)
|
||||||
|
|
||||||
### 简介 ###
|
### 简介 ###
|
||||||
|
|
||||||
为提供互联网服务,当你在开发代码的时候必须时刻保持安全意识。可能大部分 PHP 脚本都对安全问题不敏感;这很大程度上是因为有大量的无经验程序员在使用这门语言。但是,没有理由让你基于粗略估计你代码的影响性而有不一致的安全策略。当你在服务器上放任何经济相关的东西时,就有可能会有人尝试破解它。创建一个论坛程序或者任何形式的购物车,被攻击的可能性就上升到了无穷大。
|
要提供互联网服务,当你在开发代码的时候必须时刻保持安全意识。可能大部分 PHP 脚本对安全问题都不在意,这很大程度上是因为有大量的*无经验程序员*在使用这门语言。但是,没有理由让你因为对你的代码的不确定性而导致不一致的安全策略。当你在服务器上放任何涉及到钱的东西时,就有可能会有人尝试破解它。创建一个论坛程序或者任何形式的购物车,被攻击的可能性就上升到了无穷大。
|
||||||
|
|
||||||
### 背景 ###
|
### 背景 ###
|
||||||
|
|
||||||
为了确保你的 web 内容安全,这里有一些一般的安全准则:
|
为了确保你的 web 内容安全,这里有一些常规的安全准则:
|
||||||
|
|
||||||
#### 别相信表单 ####
|
#### 别相信表单 ####
|
||||||
|
|
||||||
攻击表单很简单。通过使用一个简单的 JavaScript 技巧,你可以限制你的表单只允许在评分域中填写 1 到 5 的数字。如果有人关闭了他们浏览器的 JavaScript 功能或者提交自定义的表单数据,你客户端的验证就失败了。
|
攻击表单很简单。通过使用一个简单的 JavaScript 技巧,你可以限制你的表单只允许在评分域中填写 1 到 5 的数字。如果有人关闭了他们浏览器的 JavaScript 功能或者提交自定义的表单数据,你客户端的验证就失败了。
|
||||||
|
|
||||||
用户主要通过表单参数和你的脚本交互,因此他们是最大的安全风险。你应该学到什么呢?总是要验证 PHP 脚本中传递到其它任何 PHP 脚本的数据。在本文中,我们向你演示了如何分析和防范跨站点脚本(XSS)攻击,它可能劫持用户凭据(甚至更严重)。你也会看到如何防止会玷污或毁坏你数据的 MySQL 注入攻击。
|
用户主要通过表单参数和你的脚本交互,因此他们是最大的安全风险。你应该学到什么呢?在 PHP 脚本中,总是要验证传递给任何 PHP 脚本的数据。在本文中,我们向你演示了如何分析和防范跨站脚本(XSS)攻击,它可能会劫持用户凭据(甚至更严重)。你也会看到如何防止会玷污或毁坏你数据的 MySQL 注入攻击。
|
||||||
|
|
||||||
#### 别相信用户 ####
|
#### 别相信用户 ####
|
||||||
|
|
||||||
假设你网站获取的每一份数据都充满了有害的代码。清理每一部分,就算你相信没有人会尝试攻击你的站点。
|
假定你网站获取的每一份数据都充满了有害的代码。清理每一部分,即便你相信没有人会尝试攻击你的站点。
|
||||||
|
|
||||||
#### 关闭全局变量 ####
|
#### 关闭全局变量 ####
|
||||||
|
|
||||||
@ -32,9 +32,9 @@ PHP 安全
|
|||||||
|
|
||||||
<input name="username" type="text" size="15" maxlength="64">
|
<input name="username" type="text" size="15" maxlength="64">
|
||||||
|
|
||||||
运行 process.php 的时候,启用了注册全局变量的 PHP 会为该参数赋值为 $username 变量。这会比通过 **$\_POST['username']** 或 **$\_GET['username']** 访问它节省敲击次数。不幸的是,这也会给你留下安全问题,因为 PHP 设置该变量的值为通过 GET 或 POST 参数发送到脚本的任何值,如果你没有显示地初始化该变量并且你不希望任何人去操作它,这就会有一个大问题。
|
运行 process.php 的时候,启用了注册全局变量的 PHP 会将该参数赋值到 $username 变量。这会比通过 **$\_POST['username']** 或 **$\_GET['username']** 访问它节省击键次数。不幸的是,这也会给你留下安全问题,因为 PHP 会设置该变量的值为通过 GET 或 POST 的参数发送到脚本的任何值,如果你没有显示地初始化该变量并且你不希望任何人去操作它,这就会有一个大问题。
|
||||||
|
|
||||||
看下面的脚本,假如 $authorized 变量的值为 true,它会给用户显示验证数据。正常情况下,只有当用户正确通过了假想的 authenticated\_user() 函数验证,$authorized 变量的值才会被设置为真。但是如果你启用了 **register\_globals**,任何人都可以发送一个 GET 参数,例如 authorized=1 去覆盖它:
|
看下面的脚本,假如 $authorized 变量的值为 true,它会给用户显示通过验证的数据。正常情况下,只有当用户正确通过了这个假想的 authenticated\_user() 函数验证,$authorized 变量的值才会被设置为真。但是如果你启用了 **register\_globals**,任何人都可以发送一个 GET 参数,例如 authorized=1 去覆盖它:
|
||||||
|
|
||||||
<?php
|
<?php
|
||||||
// Define $authorized = true only if user is authenticated
|
// Define $authorized = true only if user is authenticated
|
||||||
@ -45,7 +45,7 @@ PHP 安全
|
|||||||
|
|
||||||
这个故事的寓意是,你应该从预定义的服务器变量中获取表单数据。所有通过 post 表单传递到你 web 页面的数据都会自动保存到一个称为 **$\_POST** 的大数组中,所有的 GET 数据都保存在 **$\_GET** 大数组中。文件上传信息保存在一个称为 **$\_FILES** 的特殊数据中。另外,还有一个称为 **$\_REQUEST** 的复合变量。
|
这个故事的寓意是,你应该从预定义的服务器变量中获取表单数据。所有通过 post 表单传递到你 web 页面的数据都会自动保存到一个称为 **$\_POST** 的大数组中,所有的 GET 数据都保存在 **$\_GET** 大数组中。文件上传信息保存在一个称为 **$\_FILES** 的特殊数据中。另外,还有一个称为 **$\_REQUEST** 的复合变量。
|
||||||
|
|
||||||
要从一个 POST 方法表单中访问 username 域,可以使用 **$\_POST['username']**。如果 username 在 URL 中就使用 **$\_GET['username']**。如果你不确定值来自哪里,用 **$\_REQUEST['username']**。
|
要从一个 POST 方法表单中访问 username 字段,可以使用 **$\_POST['username']**。如果 username 在 URL 中就使用 **$\_GET['username']**。如果你不确定值来自哪里,用 **$\_REQUEST['username']**。
|
||||||
|
|
||||||
<?php
|
<?php
|
||||||
$post_value = $_POST['post_value'];
|
$post_value = $_POST['post_value'];
|
||||||
@ -61,12 +61,12 @@ $\_REQUEST 是 $\_GET、$\_POST、和 $\_COOKIE 数组的结合。如果你有
|
|||||||
|
|
||||||
- **register\_globals** 设置为 off
|
- **register\_globals** 设置为 off
|
||||||
- **safe\_mode** 设置为 off
|
- **safe\_mode** 设置为 off
|
||||||
- **error\_reporting** 设置为 off。如果出现错误了,这会向用户浏览器发送可见的错误报告信息。对于生产服务器,使用错误日志代替。开发服务器如果在防火墙后面就可以启用错误日志。
|
- **error\_reporting** 设置为 off。如果出现错误了,这会向用户浏览器发送可见的错误报告信息。对于生产服务器,使用错误日志代替。开发服务器如果在防火墙后面就可以启用错误日志。(LCTT 译注:此处据原文逻辑和常识,应该是“开发服务器如果在防火墙后面就可以启用错误报告,即 on。”)
|
||||||
- 停用这些函数:system()、exec()、passthru()、shell\_exec()、proc\_open()、和 popen()。
|
- 停用这些函数:system()、exec()、passthru()、shell\_exec()、proc\_open()、和 popen()。
|
||||||
- **open\_basedir** 为 /tmp(以便保存会话信息)目录和 web 根目录设置值,以便脚本不能访问选定区域外的文件。
|
- **open\_basedir** 为 /tmp(以便保存会话信息)目录和 web 根目录,以便脚本不能访问这些选定区域外的文件。
|
||||||
- **expose\_php** 设置为 off。该功能会向 Apache 头添加包含版本数字的 PHP 签名。
|
- **expose\_php** 设置为 off。该功能会向 Apache 头添加包含版本号的 PHP 签名。
|
||||||
- **allow\_url\_fopen** 设置为 off。如果你小心在你代码中访问文件的方式-也就是你验证所有输入参数,这并不严格需要。
|
- **allow\_url\_fopen** 设置为 off。如果你能够注意你代码中访问文件的方式-也就是你验证所有输入参数,这并不严格需要。
|
||||||
- **allow\_url\_include** 设置为 off。这实在没有明智的理由任何人会想要通过 HTTP 访问包含的文件。
|
- **allow\_url\_include** 设置为 off。对于任何人来说,实在没有明智的理由会想要访问通过 HTTP 包含的文件。
|
||||||
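上面这份清单可以写成一个小脚本来自动检查。下面的 Python 草图按本文建议的键名扫描 php.ini 风格的文本;解析逻辑被大幅简化(例如没有处理 error_reporting 的级别取值),仅作演示:

```python
# 本文建议设置为 off 的配置项(键名取自上面的清单)
RECOMMENDED_OFF = {"register_globals", "safe_mode", "error_reporting",
                   "expose_php", "allow_url_fopen", "allow_url_include"}

def risky_settings(ini_text: str) -> list:
    """返回被设置为 on/1/true 但本文建议关闭的配置项。"""
    risky = []
    for line in ini_text.splitlines():
        line = line.split(";")[0].strip()         # 去掉 ini 注释
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip().lower(), value.strip().lower()
        if key in RECOMMENDED_OFF and value in ("on", "1", "true"):
            risky.append(key)
    return risky
```

把服务器的 php.ini 内容喂给它,返回的列表为空才算通过这份清单的检查。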
|
|
||||||
一般来说,如果你发现想要使用这些功能的代码,你就不应该相信它。尤其要小心会使用类似 system() 函数的代码-它几乎肯定有缺陷。
|
一般来说,如果你发现想要使用这些功能的代码,你就不应该相信它。尤其要小心会使用类似 system() 函数的代码-它几乎肯定有缺陷。
|
||||||
|
|
||||||
@ -74,13 +74,13 @@ $\_REQUEST 是 $\_GET、$\_POST、和 $\_COOKIE 数组的结合。如果你有
|
|||||||
|
|
||||||
### SQL 注入攻击 ###
|
### SQL 注入攻击 ###
|
||||||
|
|
||||||
由于 PHP 传递到 MySQL 数据库的查询语句是按照强大的 SQL 编程语言编写的,你就有某些人通过在 web 查询参数中使用 MySQL 语句尝试 SQL 注入攻击的风险。通过在参数中插入有害的 SQL 代码片段,攻击者会尝试进入(或破坏)你的服务器。
|
由于 PHP 传递到 MySQL 数据库的查询语句是用强大的 SQL 编程语言编写的,就有了某些人通过在 web 查询参数中使用 MySQL 语句尝试 SQL 注入攻击的风险。通过在参数中插入有害的 SQL 代码片段,攻击者会尝试进入(或破坏)你的服务器。
|
||||||
|
|
||||||
假如说你有一个最终会放入变量 $product 的表单参数,你使用了类似下面的 SQL 语句:
|
假如说你有一个最终会放入变量 $product 的表单参数,你使用了类似下面的 SQL 语句:
|
||||||
|
|
||||||
$sql = "select * from pinfo where product = '$product'";
|
$sql = "select * from pinfo where product = '$product'";
|
||||||
|
|
||||||
如果参数是直接从表单中获得的,使用 PHP 自带的数据库特定转义函数,类似:
|
如果参数是直接从表单中获得的,应该使用 PHP 自带的数据库特定转义函数,类似:
|
||||||
|
|
||||||
$sql = 'Select * from pinfo where product = '"'
|
$sql = 'Select * from pinfo where product = '"'
|
||||||
mysql_real_escape_string($product) . '"';
|
mysql_real_escape_string($product) . '"';
|
||||||
@ -89,7 +89,7 @@ $\_REQUEST 是 $\_GET、$\_POST、和 $\_COOKIE 数组的结合。如果你有
|
|||||||
|
|
||||||
39'; DROP pinfo; SELECT 'FOO
|
39'; DROP pinfo; SELECT 'FOO
|
||||||
|
|
||||||
$sql 的结果就是:
|
那么 $sql 的结果就是:
|
||||||
|
|
||||||
select product from pinfo where product = '39'; DROP pinfo; SELECT 'FOO'
|
select product from pinfo where product = '39'; DROP pinfo; SELECT 'FOO'
|
||||||
|
|
||||||
@ -110,15 +110,15 @@ $sql 的结果就是:
|
|||||||
|
|
||||||
**注意:要自动转义任何表单数据,可以启用魔术引号(Magic Quotes)。**
|
**注意:要自动转义任何表单数据,可以启用魔术引号(Magic Quotes)。**
|
||||||
|
|
||||||
一些 MySQL 破坏可以通过限制 MySQL 用户权限避免。任何 MySQL 账户可以限制为只允许对选定的表进行特定类型的查询。例如,你可以创建只能选择行的 MySQL 用户。但是,这对于动态数据并不十分有用,另外,如果你有敏感的用户信息,可能某些人能访问一些数据,但你并不希望如此。例如,一个访问账户数据的用户可能会尝试注入访问另一个账户号码的代码,而不是为当前会话指定的号码。
|
一些 MySQL 破坏可以通过限制 MySQL 用户权限避免。任何 MySQL 账户可以限制为只允许对选定的表进行特定类型的查询。例如,你可以创建只能选择行的 MySQL 用户。但是,这对于动态数据并不十分有用,另外,如果你有敏感的用户信息,可能某些人能访问其中一些数据,但你并不希望如此。例如,一个访问账户数据的用户可能会尝试注入访问另一个人的账户号码的代码,而不是为当前会话指定的号码。
|
||||||
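除了转义,防注入还有一种更通用的做法:参数化查询(占位符)。这里换用 Python 自带的 sqlite3 来演示这一思想,pinfo 表沿用文中的例子,占位符的具体语法因数据库而异:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pinfo (product TEXT)")
conn.execute("INSERT INTO pinfo VALUES ('39')")

evil = "39'; DROP TABLE pinfo; SELECT 'FOO"    # 文中的注入字符串

# 占位符让用户输入永远被当作数据,而不是 SQL 语句的一部分
hit = conn.execute("SELECT * FROM pinfo WHERE product = ?", (evil,)).fetchall()
ok = conn.execute("SELECT * FROM pinfo WHERE product = ?", ("39",)).fetchall()
print(hit, ok)
```

注入字符串只会被当作一个查不到任何行的普通取值,pinfo 表安然无恙;正常取值则照常命中。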
|
|
||||||
### 防止基本的 XSS 攻击 ###
|
### 防止基本的 XSS 攻击 ###
|
||||||
|
|
||||||
XSS 表示跨站点脚本。不像大部分攻击,该漏洞发生在客户端。XSS 最常见的基本形式是在用户提交的内容中放入 JavaScript 以便偷取用户 cookie 中的数据。由于大部分站点使用 cookie 和 session 验证访客,偷取的数据可用于模拟该用于-如果是一个典型的用户账户就会深受麻烦,如果是管理员账户甚至是彻底的惨败。如果你不在站点中使用 cookie 和 session ID,你的用户就不容易被攻击,但你仍然应该明白这种攻击是如何工作的。
|
XSS 表示跨站脚本。不像大部分攻击,该漏洞发生在客户端。XSS 最常见的基本形式是在用户提交的内容中放入 JavaScript 以便偷取用户 cookie 中的数据。由于大部分站点使用 cookie 和 session 验证访客,偷取的数据可用于模拟该用户-如果是一个常见的用户账户就会深受麻烦,如果是管理员账户甚至是彻底的惨败。如果你不在站点中使用 cookie 和 session ID,你的用户就不容易被攻击,但你仍然应该明白这种攻击是如何工作的。
|
||||||
|
|
||||||
不像 MySQL 注入攻击,XSS 攻击很难预防。Yahoo、eBay、Apple、以及 Microsoft 都曾经受 XSS 影响。尽管攻击不包含 PHP,你可以使用 PHP 来剥离用户数据以防止攻击。为了防止 XSS 攻击,你应该限制和过滤用户提交给你站点的数据。正是因为这个原因大部分在线公告板都不允许在提交的数据中使用 HTML 标签,而是用自定义的标签格式代替,例如 **[b]** 和 **[linkto]**。
|
不像 MySQL 注入攻击,XSS 攻击很难预防。Yahoo、eBay、Apple、以及 Microsoft 都曾经受 XSS 影响。尽管攻击不包含 PHP,但你可以使用 PHP 来剥离用户数据以防止攻击。为了防止 XSS 攻击,你应该限制和过滤用户提交给你站点的数据。正是因为这个原因,大部分在线公告板都不允许在提交的数据中使用 HTML 标签,而是用自定义的标签格式代替,例如 **[b]** 和 **[linkto]**。
|
||||||
|
|
||||||
让我们来看一个如何防止这类攻击的简单脚本。对于更完善的解决办法,可以使用 SafeHHTML,本文的后面部分会讨论到。
|
让我们来看一个如何防止这类攻击的简单脚本。对于更完善的解决办法,可以使用 SafeHTML,本文的后面部分会讨论到。
|
||||||
|
|
||||||
function transform_HTML($string, $length = null) {
|
function transform_HTML($string, $length = null) {
|
||||||
// Helps prevent XSS attacks
|
// Helps prevent XSS attacks
|
||||||
@ -137,23 +137,21 @@ XSS 表示跨站点脚本。不像大部分攻击,该漏洞发生在客户端
|
|||||||
return $string;
|
return $string;
|
||||||
}
|
}
|
||||||
|
|
||||||
这个函数将 HTML 特定字符转换为 HTML 字面字符。一个浏览器对任何通过这个脚本的 HTML 以无标记的文本呈现。例如,考虑下面的 HTML 字符串:
|
这个函数将 HTML 特定的字符转换为 HTML 字面字符。一个浏览器对任何通过这个脚本的 HTML 以非标记的文本呈现。例如,考虑下面的 HTML 字符串:
|
||||||
|
|
||||||
<STRONG>Bold Text</STRONG>
|
<STRONG>Bold Text</STRONG>
|
||||||
|
|
||||||
一般情况下,HTML 会显示为:
|
一般情况下,HTML 会显示为:**Bold Text**
|
||||||
|
|
||||||
Bold Text
|
但是,通过 **transform\_HTML()** 后,它就像原始输入一样呈现。原因是标签字符在处理过程中被转换成了 HTML 实体。**transform\_HTML()** 的结果字符串的纯文本看起来像下面这样:
|
||||||
|
|
||||||
但是,通过 **transform\_HTML()** 后,它就像初始输入一样呈现。原因是处理的字符串中标签字符串是 HTML 条目。**transform\_HTML()** 结果字符串的纯文本看起来像下面这样:
|
|
||||||
|
|
||||||
<STRONG>Bold Text</STRONG>
|
<STRONG>Bold Text</STRONG>
|
||||||
|
|
||||||
该函数的实质是 htmlentities() 函数调用,它会将 <、>、和 & 转换为 **<**、**>**、和 **&**。尽管这会处理大部分的普通攻击,有经验的 XSS 攻击者有另一种把戏:用十六进制或 UTF-8 编码恶意脚本,而不是采用普通的 ASCII 文本,从而希望能饶过你的过滤器。他们可以在 URL 的 GET 变量中发送代码,例如,“这是十六进制代码,你能帮我运行吗?” 一个十六进制例子看起来像这样:
|
该函数的实质是 htmlentities() 函数调用,它会将 <、>、和 & 转换为 **\<**、**\>**、和 **\&**。尽管这会处理大部分的普通攻击,但有经验的 XSS 攻击者有另一种把戏:用十六进制或 UTF-8 编码恶意脚本,而不是采用普通的 ASCII 文本,从而希望能绕过你的过滤器。他们可以在 URL 的 GET 变量中发送代码,告诉浏览器,“这是十六进制代码,你能帮我运行吗?” 一个十六进制例子看起来像这样:
|
||||||
|
|
||||||
<a href="http://host/a.php?variable=%22%3e %3c%53%43%52%49%50%54%3e%44%6f%73%6f%6d%65%74%68%69%6e%67%6d%61%6c%69%63%69%6f%75%73%3c%2f%53%43%52%49%50%54%3e">
|
<a href="http://host/a.php?variable=%22%3e %3c%53%43%52%49%50%54%3e%44%6f%73%6f%6d%65%74%68%69%6e%67%6d%61%6c%69%63%69%6f%75%73%3c%2f%53%43%52%49%50%54%3e">
|
||||||
|
|
||||||
浏览器渲染这信息的时候,结果就是:
|
浏览器渲染这个信息的时候,结果就是:
|
||||||
|
|
||||||
<a href="http://host/a.php?variable="> <SCRIPT>Dosomethingmalicious</SCRIPT>
|
<a href="http://host/a.php?variable="> <SCRIPT>Dosomethingmalicious</SCRIPT>
|
||||||
|
|
||||||
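transform_HTML() 的核心就是实体转义。如果想快速体会它的效果,可以用 Python 标准库里作用类似的 html.escape() 做个对照(仅作类比,并非 PHP 的 htmlentities 本身):

```python
import html

payload = "<SCRIPT>Dosomethingmalicious</SCRIPT>"
safe = html.escape(payload)   # 默认 quote=True,引号也会一并转义
print(safe)
```

转义后的字符串被浏览器当作纯文本显示,脚本不会被执行。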
@ -163,20 +161,20 @@ XSS 表示跨站点脚本。不像大部分攻击,该漏洞发生在客户端
|
|||||||
|
|
||||||
### 使用 SafeHTML ###
|
### 使用 SafeHTML ###
|
||||||
|
|
||||||
之前脚本的问题比较简单,它不允许任何类型的用户标记。不幸的是,这里有上百种方法能使 JavaScript 跳过用户的过滤器,从用户输入中剥离 HTML,没有方法可以防止这种情况。
|
之前脚本的问题比较简单,它不允许任何类型的用户标记。不幸的是,这里有上百种方法能使 JavaScript 跳过用户的过滤器,并且要从用户输入中剥离全部 HTML,还没有方法可以防止这种情况。
|
||||||
|
|
||||||
当前,没有任何一个脚本能保证无法被破解,尽管有一些确实比大部分要好。有白名单和黑名单两种方法加固安全,白名单比较简单而且更加有效。
|
当前,没有任何一个脚本能保证无法被破解,尽管有一些确实比大部分要好。有白名单和黑名单两种方法加固安全,白名单比较简单而且更加有效。
|
||||||
|
|
||||||
一个白名单解决方案是 PixelApes 的 SafeHTML 反跨站点脚本解析器。
|
一个白名单解决方案是 PixelApes 的 SafeHTML 反跨站脚本解析器。
|
||||||
|
|
||||||
SafeHTML 能识别有效 HTML,能追踪并剥离任何危险标签。它用另一个称为 HTMLSax 的软件包进行解析。
|
SafeHTML 能识别有效 HTML,能追踪并剥离任何危险标签。它用另一个称为 HTMLSax 的软件包进行解析。
|
||||||
|
|
||||||
按照下面步骤安装和使用 SafeHTML:
|
按照下面步骤安装和使用 SafeHTML:
|
||||||
|
|
||||||
1. 到 [http://pixel-apes.com/safehtml/?page=safehtml][1] 下载最新版本的 SafeHTML。
|
1. 到 [http://pixel-apes.com/safehtml/?page=safehtml][1] 下载最新版本的 SafeHTML。
|
||||||
1. 把文件放到你服务器的类文件夹。该文件夹包括 SafeHTML 和 HTMLSax 起作用需要的所有东西。
|
1. 把文件放到你服务器的类文件夹。该文件夹包括 SafeHTML 和 HTMLSax 功能所需的所有东西。
|
||||||
1. 在脚本中包含 SafeHTML 类文件(safehtml.php)。
|
1. 在脚本中 `include` SafeHTML 类文件(safehtml.php)。
|
||||||
1. 创建称为 $safehtml 的新 SafeHTML 对象。
|
1. 创建一个名为 $safehtml 的新 SafeHTML 对象。
|
||||||
1. 用 $safehtml->parse() 方法清理你的数据。
|
1. 用 $safehtml->parse() 方法清理你的数据。
|
||||||
|
|
||||||
这是一个完整的例子:
|
这是一个完整的例子:
|
||||||
@ -203,45 +201,45 @@ SafeHTML 能识别有效 HTML,能追踪并剥离任何危险标签。它用另
|
|||||||
|
|
||||||
你可能犯的最大错误是假设这个类能完全避免 XSS 攻击。SafeHTML 是一个相当复杂的脚本,几乎能检查所有事情,但没有什么是能保证的。你仍然需要对你的站点做参数验证。例如,该类不能检查给定变量的长度以确保能适应数据库的字段。它也不检查缓冲溢出问题。
|
你可能犯的最大错误是假设这个类能完全避免 XSS 攻击。SafeHTML 是一个相当复杂的脚本,几乎能检查所有事情,但没有什么是能保证的。你仍然需要对你的站点做参数验证。例如,该类不能检查给定变量的长度以确保能适应数据库的字段。它也不检查缓冲溢出问题。
|
||||||
|
|
||||||
XSS 攻击者很有创造力,他们使用各种各样的方法来尝试达到他们的目标。可以阅读 RSnake 的 XSS 教程[http://ha.ckers.org/xss.html][2] 看一下这里有多少种方法尝试使代码跳过过滤器。SafeHTML 项目有很好的程序员一直在尝试阻止 XSS 攻击,但无法保证某些人不会想起一些奇怪和新奇的方法来跳过过滤器。
|
XSS 攻击者很有创造力,他们使用各种各样的方法来尝试达到他们的目标。可以阅读 RSnake 的 XSS 教程[http://ha.ckers.org/xss.html][2] ,看一下这里有多少种方法尝试使代码跳过过滤器。SafeHTML 项目有很好的程序员一直在尝试阻止 XSS 攻击,但无法保证某些人不会想起一些奇怪和新奇的方法来跳过过滤器。
|
||||||
|
|
||||||
**注意:XSS 攻击严重影响的一个例子 [http://namb.la/popular/tech.html][3],其中显示了如何一步一步创建会超载 MySpace 服务器的 JavaScript XSS 蠕虫。**
|
**注意:XSS 攻击严重影响的一个例子 [http://namb.la/popular/tech.html][3],其中显示了如何一步一步创建一个让 MySpace 服务器过载的 JavaScript XSS 蠕虫。**
|
||||||
|
|
||||||
### 用单向哈希保护数据 ###
|
### 用单向哈希保护数据 ###
|
||||||
|
|
||||||
该脚本对输入的数据进行单向转换-换句话说,它能对某人的密码产生哈希签名,但不能解码获得原始密码。为什么你希望这样呢?应用程序会存储密码。一个管理员不需要知道用户的密码-事实上,只有用户知道他的/她的密码是个好主意。系统(也仅有系统)应该能识别一个正确的密码;这是 Unix 多年来的密码安全模型。单向密码安全按照下面的方式工作:
|
该脚本对输入的数据进行单向转换,换句话说,它能对某人的密码产生哈希签名,但不能解码获得原始密码。为什么你希望这样呢?应用程序会存储密码。一个管理员不需要知道用户的密码,事实上,只有用户知道他/她自己的密码是个好主意。系统(也仅有系统)应该能识别一个正确的密码;这是 Unix 多年来的密码安全模型。单向密码安全按照下面的方式工作:
|
||||||
|
|
||||||
1. 当一个用户或管理员创建或更改一个账户密码时,系统对密码进行哈希并保存结果。主机系统忽视明文密码。
|
1. 当一个用户或管理员创建或更改一个账户密码时,系统对密码进行哈希并保存结果。主机系统会丢弃明文密码。
|
||||||
2. 当用户通过任何方式登录到系统时,再次对输入的密码进行哈希。
|
2. 当用户通过任何方式登录到系统时,再次对输入的密码进行哈希。
|
||||||
3. 主机系统抛弃输入的明文密码。
|
3. 主机系统丢弃输入的明文密码。
|
||||||
4. 当前新哈希的密码和之前保存的哈希相比较。
|
4. 当前新哈希的密码和之前保存的哈希相比较。
|
||||||
5. 如果哈希的密码相匹配,系统就会授予访问权限。
|
5. 如果哈希的密码相匹配,系统就会授予访问权限。
|
||||||
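上面的 5 个步骤可以压缩成一段很小的可运行草图。为贴合原文这里沿用 MD5,函数名与示例口令都是随意取的演示值:

```python
import hashlib

def hash_password(plain: str) -> str:
    """对口令做单向哈希(文中用的 MD5,仅为贴合原文演示)。"""
    return hashlib.md5(plain.encode()).hexdigest()

# 步骤 1-2:建账户时只保存哈希,明文随即丢弃
stored = hash_password("s3cret-dog")

def login(attempt: str) -> bool:
    # 步骤 3-5:对输入再做一次哈希,与保存的哈希比较
    return hash_password(attempt) == stored

print(login("s3cret-dog"), login("wrong"))
```

系统自始至终只拿哈希值做比较,从不需要保存或知道原始口令。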
|
|
||||||
主机系统完成这些并不需要知道原始密码;事实上,原始值完全不相关。一个副作用是,如果某人侵入系统并盗取了密码数据库,入侵者会获得很多哈希后的密码,但无法把它们反向转换为原始密码。当然,给足够时间、计算能力,以及弱用户密码,一个攻击者还是有可能采用字典攻击找出密码。因此,别轻易让人碰你的密码数据库,如果确实有人这样做了,让每个用户更改他们的密码。
|
主机系统完成这些并不需要知道原始密码;事实上,原始密码完全无所谓。一个副作用是,如果某人侵入系统并盗取了密码数据库,入侵者会获得很多哈希后的密码,但无法把它们反向转换为原始密码。当然,给足够时间、计算能力,以及弱用户密码,一个攻击者还是有可能采用字典攻击找出密码。因此,别轻易让人碰你的密码数据库,如果确实有人这样做了,让每个用户更改他们的密码。
|
||||||
|
|
||||||
#### 加密 Vs 哈希 ####
|
#### 加密 Vs 哈希 ####
|
||||||
|
|
||||||
技术上来来说,这过程并不是加密。哈希和加密是不相同的,这有两个理由:
|
技术上来来说,哈希过程并不是加密。哈希和加密是不同的,这有两个理由:
|
||||||
|
|
||||||
不像加密,数据不能被解密。
|
不像加密,哈希数据不能被解密。
|
||||||
|
|
||||||
是有可能(但很不常见)两个不同的字符串会产生相同的哈希。并不能保证哈希是唯一的,因此别像数据库中的唯一键那样使用哈希。
|
是有可能(但非常罕见)两个不同的字符串会产生相同的哈希。并不能保证哈希是唯一的,因此别像数据库中的唯一键那样使用哈希。
|
||||||
|
|
||||||
function hash_ish($string) {
|
function hash_ish($string) {
|
||||||
return md5($string);
|
return md5($string);
|
||||||
}
|
}
|
||||||
|
|
||||||
md5() 函数基于 RSA 数据安全公司的消息摘要算法(即 MD5)返回一个由 32 个字符组成的十六进制串。然后你可以将那个 32 位字符串插入到数据库中,和另一个 md5 字符串相比较,或者就用这 32 个字符。
|
上面的 md5() 函数基于 RSA 数据安全公司的消息摘要算法(即 MD5)返回一个由 32 个字符组成的十六进制串。然后你可以将那个 32 位字符串插入到数据库中和另一个 md5 字符串相比较,或者直接使用这 32 个字符。
|
||||||
|
|
||||||
#### 破解脚本 ####
|
#### 破解脚本 ####
|
||||||
|
|
||||||
几乎不可能解密 MD5 数据。或者说很难。但是,你仍然需要好的密码,因为根据整个字典生成哈希数据库仍然很简单。这里有在线 MD5 字典,当你输入 **06d80eb0c50b49a509b49f2424e8c805** 后会得到结果 “dog”。因此,尽管技术上 MD5 不能被解密,这里仍然有漏洞-如果某人获得了你的密码数据库,你可以肯定他们肯定会使用 MD5 字典破译。因此,当你创建基于密码的系统的时候尤其要注意密码长度(最小 6 个字符,8 个或许会更好)和包括字母和数字。并确保字典中没有这个密码。
|
几乎不可能解密 MD5 数据。或者说很难。但是,你仍然需要好的密码,因为用一整个字典生成哈希数据库仍然很简单。有一些在线 MD5 字典,当你输入 **06d80eb0c50b49a509b49f2424e8c805** 后会得到结果 “dog”。因此,尽管技术上 MD5 不能被解密,这里仍然有漏洞,如果某人获得了你的密码数据库,你可以肯定他们肯定会使用 MD5 字典破译。因此,当你创建基于密码的系统的时候尤其要注意密码长度(最小 6 个字符,8 个或许会更好)和包括字母和数字。并确保这个密码不在字典中。
|
||||||
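文中说的"MD5 字典"原理很简单:对候选口令逐一求摘要,再反向查表。几行 Python 就能示意为什么弱口令挡不住这种攻击(词表是随意举例的):

```python
import hashlib

# 攻击者预先对常见口令求好 MD5,建成"摘要 -> 口令"的反查表
wordlist = ["password", "123456", "dog", "letmein"]
rainbow = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

leaked = "06d80eb0c50b49a509b49f2424e8c805"   # 文中的示例摘要
print(rainbow.get(leaked))                     # 弱口令会被立刻反查出来
```

摘要本身没被"解密",只是弱口令早就在攻击者的表里;这正是要求口令更长、混合字母数字且不在字典中的原因。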
|
|
||||||
### 用 Mcrypt 加密数据 ###
|
### 用 Mcrypt 加密数据 ###
|
||||||
|
|
||||||
如果你不需要以可阅读形式查看密码,采用 MD5 就足够了。不幸的是,这里并不总是有可选项-如果你提供以加密形式存储某人的信用卡信息,你可能需要在后面的某个点进行解密。
|
如果你不需要以可阅读形式查看密码,采用 MD5 就足够了。不幸的是,这里并不总是有可选项,如果你提供以加密形式存储某人的信用卡信息,你可能需要在后面的某个地方进行解密。
|
||||||
|
|
||||||
最早的一个解决方案是 Mcrypt 模块,用于允许 PHP 高速加密的附件。Mcrypt 库提供了超过 30 种计算方法用于加密,并且提供短语确保只有你(或者你的用户)可以解密数据。
|
最早的一个解决方案是 Mcrypt 模块,这是一个用于允许 PHP 高速加密的插件。Mcrypt 库提供了超过 30 种用于加密的计算方法,并且提供口令确保只有你(或者你的用户)可以解密数据。
|
||||||
|
|
||||||
让我们来看看使用方法。下面的脚本包含了使用 Mcrypt 加密和解密数据的函数:
|
让我们来看看使用方法。下面的脚本包含了使用 Mcrypt 加密和解密数据的函数:
|
||||||
|
|
||||||
@ -282,21 +280,21 @@ md5() 函数基于 RSA 数据安全公司的消息摘要算法(即 MD5)返
|
|||||||
**mcrypt()** 函数需要几个信息:
|
**mcrypt()** 函数需要几个信息:
|
||||||
|
|
||||||
- 需要加密的数据
|
- 需要加密的数据
|
||||||
- 用于加密和解锁数据的短语,也称为键。
|
- 用于加密和解锁数据的口令,也称为键。
|
||||||
- 用于加密数据的计算方法,也就是用于加密数据的算法。该脚本使用了 **MCRYPT\_SERPENT\_256**,但你可以从很多算法中选择,包括 **MCRYPT\_TWOFISH192**、**MCRYPT\_RC2**、**MCRYPT\_DES**、和 **MCRYPT\_LOKI97**。
|
- 用于加密数据的计算方法,也就是用于加密数据的算法。该脚本使用了 **MCRYPT\_SERPENT\_256**,但你可以从很多算法中选择,包括 **MCRYPT\_TWOFISH192**、**MCRYPT\_RC2**、**MCRYPT\_DES**、和 **MCRYPT\_LOKI97**。
|
||||||
- 加密数据的模式。这里有几个你可以使用的模式,包括电子密码本(Electronic Codebook) 和加密反馈(Cipher Feedback)。该脚本使用 **MCRYPT\_MODE\_CBC** 密码块链接。
|
- 加密数据的模式。这里有几个你可以使用的模式,包括电子密码本(Electronic Codebook) 和加密反馈(Cipher Feedback)。该脚本使用 **MCRYPT\_MODE\_CBC** 密码块链接。
|
||||||
- 一个 **初始化向量**-也称为 IV,或着一个种子-用于为加密算法设置种子的额外二进制位。也就是使算法更难于破解的额外信息。
|
- 一个 **初始化向量**-也称为 IV 或者种子,用于为加密算法设置种子的额外二进制位。也就是使算法更难于破解的额外信息。
|
||||||
- 键和 IV 字符串的长度,这可能随着加密和块而不同。使用 **mcrypt\_get\_key\_size()** 和 **mcrypt\_get\_block\_size()** 函数获取合适的长度;然后用 **substr()** 函数将键的值截取为合适的长度。(如果键的长度比要求的短,别担心-Mcrypt 会用 0 填充。)
|
- 键和 IV 字符串的长度,这可能随着加密和块而不同。使用 **mcrypt\_get\_key\_size()** 和 **mcrypt\_get\_block\_size()** 函数获取合适的长度;然后用 **substr()** 函数将键的值截取为合适的长度。(如果键的长度比要求的短,别担心,Mcrypt 会用 0 填充。)
|
||||||
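为了直观感受"密钥 + IV"各自的角色,下面用纯 Python 标准库拼一个玩具级的流加密草图:用 MD5(口令 + IV + 计数器) 生成密钥流做 XOR,再按文中后面的做法转成 base64 以便存库。再次强调,这只是演示参数的角色,没有任何安全性,真实数据请使用 Mcrypt/OpenSSL 提供的 AES 等算法:

```python
import base64
import hashlib

def keystream(key: str, iv: bytes, n: int) -> bytes:
    """由口令和 IV 派生 n 字节密钥流(玩具实现,切勿实际使用)。"""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.md5(key.encode() + iv + bytes([counter])).digest()
        counter += 1
    return out[:n]

def encrypt(plain: bytes, key: str, iv: bytes) -> str:
    cipher = bytes(p ^ k for p, k in zip(plain, keystream(key, iv, len(plain))))
    return base64.b64encode(cipher).decode()       # 便于存入 SQL 字段

def decrypt(stored: str, key: str, iv: bytes) -> bytes:
    cipher = base64.b64decode(stored)
    return bytes(c ^ k for c, k in zip(cipher, keystream(key, iv, len(cipher))))
```

同样的口令和 IV 才能还原明文,换一个 IV 就得到完全不同的密文,这正是 IV 作为"额外种子"的意义。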
|
|
||||||
如果有人窃取了你的数据和短语,他们只能一个个尝试加密算法直到找到正确的那一个。因此,在使用它之前我们通过对键使用 **md5()** 函数增加安全,就算他们获取了数据和短语,入侵者也不能获得想要的东西。
|
如果有人窃取了你的数据和短语,他们只能一个个尝试加密算法直到找到正确的那一个。因此,在使用它之前我们通过对键使用 **md5()** 函数增加安全,就算他们获取了数据和短语,入侵者也不能获得想要的东西。
|
||||||
|
|
||||||
入侵者同时需要函数,数据和短语-如果真是如此,他们可能获得了对你服务器的完整访问,你只能大清洗了。
|
入侵者同时需要函数,数据和口令,如果真是如此,他们可能获得了对你服务器的完整访问,你只能大清洗了。
|
||||||
|
|
||||||
这里还有一个数据存储格式的小问题。Mcrypt 以难懂的二进制形式返回加密后的数据,这使得当你将其存储到 MySQL 字段的时候可能出现可怕错误。因此,我们使用 **base64encode()** 和 **base64decode()** 函数转换为和 SQL 兼容的字母格式和检索行。
|
这里还有一个数据存储格式的小问题。Mcrypt 以难懂的二进制形式返回加密后的数据,这使得当你将其存储到 MySQL 字段的时候可能出现可怕错误。因此,我们使用 **base64\_encode()** 和 **base64\_decode()** 函数将其转换为与 SQL 兼容的字母格式,以便存储和检索。
|
||||||
|
|
||||||
#### 破解脚本 ####
|
#### 破解脚本 ####
|
||||||
|
|
||||||
除了实验多种加密方法,你还可以在脚本中添加一些便利。例如,不是每次都提供键和模式,而是在包含的文件中声明为全局常量。
|
除了实验多种加密方法,你还可以在脚本中添加一些便利。例如,不用每次都提供键和模式,而是在包含的文件中声明为全局常量。
|
||||||
|
|
||||||
### 生成随机密码 ###
|
### 生成随机密码 ###
|
||||||
|
|
||||||
@ -331,8 +329,8 @@ md5() 函数基于 RSA 数据安全公司的消息摘要算法(即 MD5)返
|
|||||||
函数按照下面步骤工作:
|
函数按照下面步骤工作:
|
||||||
|
|
||||||
- 函数确保 **$num\_chars** 是非零的正整数。
|
- 函数确保 **$num\_chars** 是非零的正整数。
|
||||||
- 函数初始化 **$accepted\_chars** 变量为密码可能包含的字符列表。该脚本使用所有小写字母和数字 0 到 9,但你可以使用你喜欢的任何字符集合。
|
- 函数初始化 **$accepted\_chars** 变量为密码可能包含的字符列表。该脚本使用所有小写字母和数字 0 到 9,但你可以使用你喜欢的任何字符集合。(LCTT 译注:有时候为了便于肉眼识别,你可以将其中的 0 和 O,1 和 l 之类的都去掉。)
|
||||||
- 随机数生成器需要一个种子,从而获得一系列类随机值(PHP 4.2 及之后版本中并不严格要求)。
|
- 随机数生成器需要一个种子,从而获得一系列类随机值(PHP 4.2 及之后版本中并不需要,会自动播种)。
|
||||||
- 函数循环 **$num\_chars** 次,每次迭代生成密码中的一个字符。
|
- 函数循环 **$num\_chars** 次,每次迭代生成密码中的一个字符。
|
||||||
- 对于每个新字符,脚本查看 **$accepted_chars** 的长度,选择 0 和长度之间的一个数字,然后添加 **$accepted\_chars** 中该数字为索引值的字符到 $password。
|
- 对于每个新字符,脚本查看 **$accepted_chars** 的长度,选择 0 和长度之间的一个数字,然后添加 **$accepted\_chars** 中该数字为索引值的字符到 $password。
|
||||||
- 循环结束后,函数返回 **$password**。
|
- 循环结束后,函数返回 **$password**。
|
||||||
@ -347,7 +345,7 @@ via: http://www.codeproject.com/Articles/363897/PHP-Security
|
|||||||
|
|
||||||
作者:[SamarRizvi][a]
|
作者:[SamarRizvi][a]
|
||||||
译者:[ictlyh](https://github.com/ictlyh)
|
译者:[ictlyh](https://github.com/ictlyh)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
@ -0,0 +1,83 @@
|
|||||||
|
监控 Linux 系统的 7 个命令行工具
|
||||||
|
================================================================================
|
||||||
|
**这里有一些基本的命令行工具,让你能更简单地探索和操作Linux。**
|
||||||
|
|
||||||
|
![Image courtesy Meltys-stock](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-1-100591899-orig.png)
|
||||||
|
|
||||||
|
### 深入 ###
|
||||||
|
|
||||||
|
Linux最棒的地方之一,是你能深入操作系统内部,探索它是如何工作的,并寻找机会来微调性能或诊断问题。这里有一些基本的命令行工具,让你能更简单地探索和操作Linux。大多数的这些命令在你的Linux系统中已经内建,但假如没有的话,就用谷歌搜索命令名和你的发行版名吧,你会找到哪些包需要安装(注意,一些命令是和其它命令捆绑起来打成一个包的,你所找的包可能写的是其它的名字)。如果你知道一些你所使用的其它工具,欢迎评论。
|
||||||
|
|
||||||
|
|
||||||
|
### 我们怎么开始 ###
|
||||||
|
|
||||||
|
![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-2-100591901-orig.png)
|
||||||
|
|
||||||
|
须知: 本文中的截图取自一台[Debian Linux 8.1][1] (“Jessie”),其运行在[OS X 10.10.3][3] (“Yosemite”)操作系统下的[Oracle VirtualBox 4.3.28][2]中的一台虚拟机里。想要建立你的Debian虚拟机,可以看看我的这篇教程——“[如何在 VirtualBox VM 下安装 Debian][4]”。
|
||||||
|
|
||||||
|
|
||||||
|
### Top ###
|
||||||
|
|
||||||
|
![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-3-100591902-orig.png)
|
||||||
|
|
||||||
|
作为Linux系统监控工具中比较易用的一个,**top命令**能带我们一览Linux中的几乎每一处。以下这张图是它的默认界面,但是按“z”键可以切换不同的显示颜色。其它热键和命令则有其它的功能,例如显示概要信息和内存信息(第四行第二个),根据各种不一样的条件排序、终止进程任务等等(你可以在[这里][5]找到完整的列表)。
|
||||||
|
|
||||||
|
|
||||||
|
### htop ###
|
||||||
|
|
||||||
|
![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-4-100591904-orig.png)
|
||||||
|
|
||||||
|
相比top,它的替代品Htop则更为精致。维基百科是这样描述的:"用户经常在Unix top不能提供足够的系统进程信息时部署htop,比如说在尝试发现应用程序里一个小的内存泄露问题的时候。Htop一般也能作为一个系统监视器来使用。相比top,它提供了一个更方便的光标控制界面来向进程发送信号。"(想了解更多细节猛戳[这里][6])
|
||||||
|
|
||||||
|
|
||||||
|
### Vmstat ###
|
||||||
|
|
||||||
|
![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-5-100591903-orig.png)
|
||||||
|
|
||||||
|
Vmstat是一款监控Linux系统性能数据的简易工具,这让它很适合在shell脚本中使用。使出你的正则表达式绝招,用vmstat和cron作业来做一些激动人心的事情吧。"第一份报告给出的是上一次系统重启之后的均值,其后的报告给出的则是自上一份报告以来的采样间隔中的信息。进程和内存报告在两种情况下都是即时值"(猛戳[这里][7]获取更多信息)。
|
||||||
|
|
||||||
|
### ps ###
|
||||||
|
|
||||||
|
![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-6-100591905-orig.png)
|
||||||
|
|
||||||
|
ps命令展现的是正在运行中的进程列表。在这种情况下,我们用“-e”选项来显示每个进程,也就是所有正在运行的进程了(我把列表滚动到了前面,否则列名就看不到了)。这个命令有很多选项允许你去按需格式化输出。只要使用上述一点点的正则表达式技巧,你就能得到一个强大的工具了。猛戳[这里][8]获取更多信息。
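顺带一提,ps 列出的进程信息其实来自 /proc 文件系统。在 Linux 上用几行 Python 直接数一数 /proc 下的数字目录,就能得到和 ps -e 相近的进程总数(仅在 Linux 上有效):

```python
import os

# /proc 下每个数字目录对应一个正在运行的进程
pids = [d for d in os.listdir("/proc") if d.isdigit()]
print("进程数:", len(pids))
```

把这段逻辑放进脚本里,就能在不解析 ps 输出的情况下做简单的进程统计。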
|
||||||
|
|
||||||
|
### Pstree ###
|
||||||
|
|
||||||
|
![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-7-100591906-orig.png)
|
||||||
|
|
||||||
|
Pstree“以树状图显示正在运行中的进程。这个进程树是以某个 pid 为根节点的,如果pid被省略的话那树是以init为根节点的。如果指定用户名,那所有进程树都会以该用户所属的进程为父进程进行显示。”以树状图来帮你将进程之间的所属关系进行分类,这的确是个很有效的工具(戳[这里][9])。
|
||||||
|
|
||||||
|
### pmap ###
|
||||||
|
|
||||||
|
![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-8-100591907-orig.png)
|
||||||
|
|
||||||
|
在调试过程中,理解一个应用程序如何使用内存是至关重要的,而pmap的作用就是当给出一个进程ID时显示出相关信息。上面的截图展示的是使用“-x”选项所产生的部分输出,你也可以用pmap的“-X”选项来获取更多的细节信息,但是前提是你要有个更宽的终端窗口。
|
||||||
|
|
||||||
|
### iostat ###
|
||||||
|
|
||||||
|
![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-9-100591900-orig.png)
|
||||||
|
|
||||||
|
Linux系统的一个至关重要的性能指标是处理器和存储的使用率,它也是iostat命令所报告的内容。如同ps命令一样,iostat有很多选项允许你选择你需要的输出格式,除此之外还可以在某一段时间内重复采样几次。详情请戳[这里][10]。
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: http://www.networkworld.com/article/2937219/linux/7-command-line-tools-for-monitoring-your-linux-system.html
|
||||||
|
|
||||||
|
作者:[Mark Gibbs][a]
|
||||||
|
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||||
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:http://www.networkworld.com/author/Mark-Gibbs/
|
||||||
|
[1]:https://www.debian.org/releases/stable/
|
||||||
|
[2]:https://www.virtualbox.org/
|
||||||
|
[3]:http://www.apple.com/osx/
|
||||||
|
[4]:http://www.networkworld.com/article/2937148/how-to-install-debian-linux-8-1-in-a-virtualbox-vm
|
||||||
|
[5]:http://linux.die.net/man/1/top
|
||||||
|
[6]:http://linux.die.net/man/1/htop
|
||||||
|
[7]:http://linuxcommand.org/man_pages/vmstat8.html
|
||||||
|
[8]:http://linux.die.net/man/1/ps
|
||||||
|
[9]:http://linux.die.net/man/1/pstree
|
||||||
|
[10]:http://linux.die.net/man/1/iostat
|
@ -1,24 +1,25 @@
|
|||||||
在 Linux 中安装 Google 环聊桌面客户端
|
在 Linux 中安装 Google 环聊桌面客户端
|
||||||
================================================================================
|
================================================================================
|
||||||
|
|
||||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/google-hangouts-header-664x374.jpg)
|
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/google-hangouts-header-664x374.jpg)
|
||||||
|
|
||||||
先前,我们已经介绍了如何[在 Linux 中安装 Facebook Messenger][1] 和[WhatsApp 桌面客户端][2]。这些应用都是非官方的应用。今天,我将为你推荐另一款非官方的应用,它就是 [Google 环聊][3]
|
先前,我们已经介绍了如何[在 Linux 中安装 Facebook Messenger][1] 和[WhatsApp 桌面客户端][2]。这些应用都是非官方的应用。今天,我将为你推荐另一款非官方的应用,它就是 [Google 环聊][3]
|
||||||
|
|
||||||
当然,你可以在 Web 浏览器中使用 Google 环聊,但相比于此,使用桌面客户端会更加有趣。好奇吗?那就跟着我看看如何 **在 Linux 中安装 Google 环聊** 以及如何使用它把。
|
当然,你可以在 Web 浏览器中使用 Google 环聊,但相比于此,使用桌面客户端会更加有趣。好奇吗?那就跟着我看看如何 **在 Linux 中安装 Google 环聊** 以及如何使用它吧。
|
||||||
|
|
||||||
### 在 Linux 中安装 Google 环聊 ###
|
### 在 Linux 中安装 Google 环聊 ###
|
||||||
|
|
||||||
我们将使用一个名为 [yakyak][4] 的开源项目,它是一个针对 Linux,Windows 和 OS X 平台的非官方 Google 环聊客户端。我将向你展示如何在 Ubuntu 中使用 yakyak,但我相信在其他的 Linux 发行版本中,你可以使用同样的方法来使用它。在了解如何使用它之前,让我们先看看 yakyak 的主要特点:
|
我们将使用一个名为 [yakyak][4] 的开源项目,它是一个针对 Linux,Windows 和 OS X 平台的非官方 Google 环聊客户端。我将向你展示如何在 Ubuntu 中使用 yakyak,但我相信在其他的 Linux 发行版本中,你可以使用同样的方法来使用它。在了解如何使用它之前,让我们先看看 yakyak 的主要特点:
|
||||||
|
|
||||||
- 发送和接受聊天信息
|
- 发送和接受聊天信息
|
||||||
- 创建和更改对话 (重命名, 添加人物)
|
- 创建和更改对话 (重命名, 添加参与者)
|
||||||
- 离开或删除对话
|
- 离开或删除对话
|
||||||
- 桌面提醒通知
|
- 桌面提醒通知
|
||||||
- 打开或关闭通知
|
- 打开或关闭通知
|
||||||
- 针对图片上传,支持拖放,复制粘贴或使用上传按钮
|
- 对于图片上传,支持拖放,复制粘贴或使用上传按钮
|
||||||
- Hangupsbot 房间同步(实际的用户图片) (注: 这里翻译不到位,希望改善一下)
|
- Hangupsbot 房间同步(使用用户实际的图片)
|
||||||
- 展示行内图片
|
- 展示行内图片
|
||||||
- 历史回放
|
- 翻阅历史
|
||||||
|
|
||||||
听起来不错吧,你可以从下面的链接下载到该软件的安装文件:
|
听起来不错吧,你可以从下面的链接下载到该软件的安装文件:
|
||||||
|
|
||||||
@ -36,7 +37,7 @@
|
|||||||
|
|
||||||
![Google_Hangout_Linux_4](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_4.jpeg)
|
![Google_Hangout_Linux_4](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_4.jpeg)
|
||||||
|
|
||||||
假如你想看看对话的配置图,你可以选择 `查看-> 展示对话缩略图`
|
假如你想在联系人里面显示用户头像,你可以选择 `查看-> 展示对话缩略图`
|
||||||
|
|
||||||
![Google 环聊缩略图](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_5.jpeg)
|
![Google 环聊缩略图](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_5.jpeg)
|
||||||
|
|
||||||
@ -54,7 +55,7 @@ via: http://itsfoss.com/install-google-hangouts-linux/
|
|||||||
|
|
||||||
作者:[Abhishek][a]
|
作者:[Abhishek][a]
|
||||||
译者:[FSSlc](https://github.com/FSSlc)
|
译者:[FSSlc](https://github.com/FSSlc)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
@ -1,10 +1,12 @@
|
|||||||
Linux有问必答-- 如何为在Linux中安装兄弟打印机
|
Linux有问必答:如何为在Linux中安装兄弟牌打印机
|
||||||
================================================================================
|
================================================================================
|
||||||
> **提问**: 我有一台兄弟HL-2270DW激光打印机,我想从我的Linux机器上答应文档。我该如何在我的电脑上安装合适的驱动并使用它?
|
> **提问**: 我有一台兄弟牌HL-2270DW激光打印机,我想从我的Linux机器上打印文档。我该如何在我的电脑上安装合适的驱动并使用它?
|
||||||
|
|
||||||
兄弟牌以买得起的[紧凑型激光打印机][1]而闻名。你可以用低于200美元的价格得到高质量的WiFi/双工激光打印机,而且价格还在下降。最棒的是,它们还提供良好的Linux支持,因此你可以在Linux中下载并安装它们的打印机驱动。我在一年前买了台[HL-2270DW][2],我对它的性能和可靠性都很满意。
|
兄弟牌以买得起的[紧凑型激光打印机][1]而闻名。你可以用低于200美元的价格得到高质量的WiFi/双工激光打印机,而且价格还在下降。最棒的是,它们还提供良好的Linux支持,因此你可以在Linux中下载并安装它们的打印机驱动。我在一年前买了台[HL-2270DW][2],我对它的性能和可靠性都很满意。
|
||||||
|
|
||||||
下面是如何在Linux中安装和配置兄弟打印机驱动。本篇教程中,我会演示安装HL-2270DW激光打印机的USB驱动。首先通过USB线连接你的打印机到Linux上。
|
下面是如何在Linux中安装和配置兄弟打印机驱动。本篇教程中,我会演示安装HL-2270DW激光打印机的USB驱动。
|
||||||
|
|
||||||
|
首先通过USB线连接你的打印机到Linux上。
|
||||||
|
|
||||||
### 准备 ###
|
### 准备 ###
|
||||||
|
|
||||||
@ -16,13 +18,13 @@ Linux有问必答-- 如何为在Linux中安装兄弟打印机
|
|||||||
|
|
||||||
![](https://farm1.staticflickr.com/380/18535558583_cb43240f8a_c.jpg)
|
![](https://farm1.staticflickr.com/380/18535558583_cb43240f8a_c.jpg)
|
||||||
|
|
||||||
下一页,你会找到你打印机的LPR驱动和CUPS包装器驱动。前者是命令行驱动后者允许你通过网页管理和配置你的打印机。尤其是基于CUPS的GUI对(本地、远程)打印机维护非常有用。建议你安装这两个驱动。点击“Driver Install Tool”下载安装文件。
|
下一页,你会找到你打印机的LPR驱动和CUPS包装器驱动。前者是命令行驱动,后者允许你通过网页管理和配置你的打印机。尤其是基于CUPS的图形界面对(本地、远程)打印机维护非常有用。建议你安装这两个驱动。点击“Driver Install Tool”下载安装文件。
|
||||||
|
|
||||||
![](https://farm1.staticflickr.com/329/19130013736_1850b0d61e_c.jpg)
|
![](https://farm1.staticflickr.com/329/19130013736_1850b0d61e_c.jpg)
|
||||||
|
|
||||||
运行安装文件之前,你需要在64位的Linux系统上做另外一件事情。
|
运行安装文件之前,你需要在64位的Linux系统上做另外一件事情。
|
||||||
|
|
||||||
因为兄弟打印机驱动是为32位的Linux系统开发的,因此你需要按照下面的方法安装32位的库。
|
因为兄弟打印机驱动是为32位的Linux系统开发的,因此你需要按照下面的方法安装32位的库。
|
||||||
|
|
||||||
在早期的Debian(6.0或者更早期)或者Ubuntu(11.04或者更早期),安装下面的包。
|
在早期的Debian(6.0或者更早期)或者Ubuntu(11.04或者更早期),安装下面的包。
|
||||||
|
|
||||||
@ -54,7 +56,7 @@ Linux有问必答-- 如何为在Linux中安装兄弟打印机
|
|||||||
|
|
||||||
![](https://farm1.staticflickr.com/292/18535599323_1a94f6dae5_b.jpg)
|
![](https://farm1.staticflickr.com/292/18535599323_1a94f6dae5_b.jpg)
|
||||||
|
|
||||||
同意GPL协议直呼,接受接下来的任何默认问题。
|
同意GPL协议之后,接受接下来的任何默认问题。
|
||||||
|
|
||||||
![](https://farm1.staticflickr.com/526/19130014316_5835939501_b.jpg)
|
![](https://farm1.staticflickr.com/526/19130014316_5835939501_b.jpg)
|
||||||
|
|
||||||
@ -68,7 +70,7 @@ Linux有问必答-- 如何为在Linux中安装兄弟打印机
|
|||||||
|
|
||||||
$ sudo netstat -nap | grep 631
|
$ sudo netstat -nap | grep 631
|
||||||
|
|
||||||
打开一个浏览器输入http://localhost:631。你会下面的打印机管理界面。
|
打开一个浏览器输入 http://localhost:631 。你会看到下面的打印机管理界面。
|
||||||
|
|
||||||
![](https://farm1.staticflickr.com/324/18968588688_202086fc72_c.jpg)
|
![](https://farm1.staticflickr.com/324/18968588688_202086fc72_c.jpg)
|
||||||
|
|
||||||
@ -98,7 +100,7 @@ via: http://ask.xmodulo.com/install-brother-printer-linux.html
|
|||||||
|
|
||||||
作者:[Dan Nanni][a]
|
作者:[Dan Nanni][a]
|
||||||
译者:[geekpi](https://github.com/geekpi)
|
译者:[geekpi](https://github.com/geekpi)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
@ -1,10 +1,9 @@
|
|||||||
|
如何修复 ubuntu 中检测到系统程序错误的问题
|
||||||
如何修复ubuntu 14.04中检测到系统程序错误的问题
|
|
||||||
================================================================================
|
================================================================================
|
||||||
|
|
||||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg)
|
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg)
|
||||||
|
|
||||||
|
在过去的几个星期,(几乎)每次都有消息 **Ubuntu 15.04在启动时检测到系统程序错误** 跑出来“欢迎”我。那时我是直接忽略掉它的,但是这种情况到了某个时刻,它就让人觉得非常烦人了!
|
||||||
在过去的几个星期,(几乎)每次都有消息 **Ubuntu 15.04在启动时检测到系统程序错误(system program problem detected on startup in Ubuntu 15.04)** 跑出来“欢迎”我。那时我是直接忽略掉它的,但是这种情况到了某个时刻,它就让人觉得非常烦人了!
|
|
||||||
|
|
||||||
> 检测到系统程序错误(System program problem detected)
|
> 检测到系统程序错误(System program problem detected)
|
||||||
>
|
>
|
||||||
@ -18,15 +17,16 @@
|
|||||||
|
|
||||||
#### 那么这个通知到底是关于什么的? ####
|
#### 那么这个通知到底是关于什么的? ####
|
||||||
|
|
||||||
大体上讲,它是在告知你,你的系统的一部分崩溃了。可别因为“崩溃”这个词而恐慌。这不是一个严重的问题,你的系统还是完完全全可用的。只是在以前的某个时刻某个程序崩溃了,而Ubuntu想让你决定要不要把这个问题报告给开发者,这样他们就能够修复这个问题。
|
大体上讲,它是在告知你,你的系统的一部分崩溃了。可别因为“崩溃”这个词而恐慌。这不是一个严重的问题,你的系统还是完完全全可用的。只是在之前的某个时刻某个程序崩溃了,而Ubuntu想让你决定要不要把这个问题报告给开发者,这样他们就能够修复这个问题。
|
||||||
|
|
||||||
#### 那么,我们点了“报告错误”的按钮后,它以后就不再显示了?####
|
#### 那么,我们点了“报告错误”的按钮后,它以后就不再显示了?####
|
||||||
|
|
||||||
|
不,不是的!即使你点了“报告错误”按钮,最后你还是会被一个如下的弹窗再次“欢迎”一下:
|
||||||
|
|
||||||
不,不是的!即使你点了“报告错误”按钮,最后你还是会被一个如下的弹窗再次“欢迎”:
|
|
||||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Internal_error.png)
|
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Internal_error.png)
|
||||||
|
|
||||||
[对不起,Ubuntu发生了一个内部错误(Sorry, Ubuntu has experienced an internal error)][1]是一个Apport(Apport是Ubuntu中错误信息的收集报告系统,详见Ubuntu Wiki中的Apport篇,译者注),它将会进一步的打开网页浏览器,然后你可以通过登录或创建[Launchpad][2]帐户来填写一份漏洞(Bug)报告文件。你看,这是一个复杂的过程,它要花整整四步来完成.
|
[对不起,Ubuntu发生了一个内部错误][1]是个Apport(LCTT 译注:Apport是Ubuntu中错误信息的收集报告系统,详见Ubuntu Wiki中的Apport篇),它将会进一步的打开网页浏览器,然后你可以通过登录或创建[Launchpad][2]帐户来填写一份漏洞(Bug)报告文件。你看,这是一个复杂的过程,它要花整整四步来完成。
|
||||||
|
|
||||||
#### 但是我想帮助开发者,让他们知道这个漏洞啊 !####
|
#### 但是我想帮助开发者,让他们知道这个漏洞啊 !####
|
||||||
|
|
||||||
你这样想的确非常地周到体贴,而且这样做也是正确的。但是这样做的话,存在两个问题。第一,存在非常高的概率,这个漏洞已经被报告过了;第二,即使你报告了个这次崩溃,也无法保证你不会再看到它。
|
你这样想的确非常地周到体贴,而且这样做也是正确的。但是这样做的话,存在两个问题。第一,存在非常高的概率,这个漏洞已经被报告过了;第二,即使你报告了个这次崩溃,也无法保证你不会再看到它。
|
||||||
@ -34,35 +34,38 @@
|
|||||||
#### 那么,你的意思就是说别报告这次崩溃了?####
|
#### 那么,你的意思就是说别报告这次崩溃了?####
|
||||||
|
|
||||||
对,也不对。如果你想的话,在你第一次看到它的时候报告它。你可以在上面图片显示的“显示细节(Show Details)”中,查看崩溃的程序。但是如果你总是看到它,或者你不想报告漏洞(Bug),那么我建议你还是一次性摆脱这个问题吧。
|
对,也不对。如果你想的话,在你第一次看到它的时候报告它。你可以在上面图片显示的“显示细节(Show Details)”中,查看崩溃的程序。但是如果你总是看到它,或者你不想报告漏洞(Bug),那么我建议你还是一次性摆脱这个问题吧。
|
||||||
|
|
||||||
### 修复Ubuntu中“检测到系统程序错误”的错误 ###
|
### 修复Ubuntu中“检测到系统程序错误”的错误 ###
|
||||||
|
|
||||||
这些错误报告被存放在Ubuntu中目录/var/crash中。如果你翻看这个目录的话,应该可以看到有一些以crash结尾的文件。
|
这些错误报告被存放在Ubuntu中目录/var/crash中。如果你翻看这个目录的话,应该可以看到有一些以crash结尾的文件。
|
||||||
|
|
||||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Crash_reports_Ubuntu.jpeg)
|
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Crash_reports_Ubuntu.jpeg)
|
||||||
|
|
||||||
我的建议是删除这些错误报告。打开一个终端,执行下面的命令:
|
我的建议是删除这些错误报告。打开一个终端,执行下面的命令:
|
||||||
|
|
||||||
sudo rm /var/crash/*
|
sudo rm /var/crash/*
|
||||||
|
|
||||||
这个操作会删除所有在/var/crash目录下的所有内容。这样你就不会再被这些报告以前程序错误的弹窗所扰。但是如果有一个程序又崩溃了,你就会再次看到“检测到系统程序错误”的错误。你可以再次删除这些报告文件,或者你可以禁用Apport来彻底地摆脱这个错误弹窗。
|
这个操作会删除/var/crash目录下的所有内容。这样你就不会再被这些报告以前程序错误的弹窗所扰。但是如果又有一个程序崩溃了,你就会再次看到“检测到系统程序错误”的错误。你可以再次删除这些报告文件,或者你可以禁用Apport来彻底地摆脱这个错误弹窗。
|
||||||
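在动手删除之前,可以先确认一下有多少份报告文件。下面是一个示意性的 shell 演示(为避免误删真实报告,这里在一个临时目录中模拟;实际操作时把目录换成 /var/crash 并配合 sudo 执行):

```shell
# 在临时目录中模拟:先统计 .crash 报告数,再删除(对应 sudo rm /var/crash/*)
demo=$(mktemp -d)
touch "$demo/firefox.crash" "$demo/gnome-shell.crash"   # 模拟两个崩溃报告
ls "$demo"/*.crash | wc -l                              # 删除前:2
rm -f "$demo"/*.crash
ls "$demo" | wc -l                                      # 删除后:0
rmdir "$demo"
```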
|
|
||||||
#### 彻底地摆脱Ubuntu中的系统错误弹窗 ####
|
#### 彻底地摆脱Ubuntu中的系统错误弹窗 ####
|
||||||
|
|
||||||
如果你这样做,系统中任何程序崩溃时,系统都不会再通知你。如果你想问问我的看法的话,我会说,这不是一件坏事,除非你愿意填写错误报告。如果你不想填写错误报告,那么这些错误通知存不存在都不会有什么区别。
|
如果你这样做,系统中任何程序崩溃时,系统都不会再通知你。如果你想问问我的看法的话,我会说,这不是一件坏事,除非你愿意填写错误报告。如果你不想填写错误报告,那么这些错误通知存不存在都不会有什么区别。
|
||||||
|
|
||||||
要禁止Apport,并且彻底地摆脱Ubuntu系统中的程序崩溃报告,打开一个终端,输入以下命令:
|
要禁止Apport,并且彻底地摆脱Ubuntu系统中的程序崩溃报告,打开一个终端,输入以下命令:
|
||||||
|
|
||||||
gksu gedit /etc/default/apport
|
gksu gedit /etc/default/apport
|
||||||
|
|
||||||
这个文件的内容是:
|
这个文件的内容是:
|
||||||
|
|
||||||
# set this to 0 to disable apport, or to 1 to enable it
|
# 设置为0表示禁用Apport,设置为1表示启用它。
|
||||||
# 设置0表示禁用Apportw,或者1开启它。译者注,下同。
|
|
||||||
# you can temporarily override this with
|
|
||||||
# 你可以用下面的命令暂时关闭它:
|
# 你可以用下面的命令暂时关闭它:
|
||||||
# sudo service apport start force_start=1
|
# sudo service apport start force_start=1
|
||||||
enabled=1
|
enabled=1
|
||||||
|
|
||||||
把**enabled=1**改为**enabled=0**.保存并关闭文件。完成之后你就再也不会看到弹窗报告错误了。很显然,如果我们想重新开启错误报告功能,只要再打开这个文件,把enabled设置为1就可以了。
|
把**enabled=1**改为**enabled=0**。保存并关闭文件。完成之后你就再也不会看到弹窗报告错误了。很显然,如果我们想重新开启错误报告功能,只要再打开这个文件,把enabled设置为1就可以了。
|
||||||
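如果不想手动打开编辑器,也可以用 sed 一条命令完成这个修改。下面是一个示意性的演示(为安全起见在一个临时副本上进行;实际系统上应以 sudo 对 /etc/default/apport 执行同样的 sed 替换):

```shell
# 在文件副本上演示:用 sed 把 enabled=1 改为 enabled=0
f=$(mktemp)
printf 'enabled=1\n' > "$f"               # 模拟 /etc/default/apport 中的相关行
sed -i 's/^enabled=1/enabled=0/' "$f"
grep '^enabled=' "$f"                     # 输出:enabled=0
rm -f "$f"
```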
|
|
||||||
#### 对你有效吗? ####
|
#### 对你有效吗? ####
|
||||||
|
|
||||||
我希望这篇教程能够帮助你修复Ubuntu 14.04和Ubuntu 15.04中检测到系统程序错误的问题。如果这个小窍门帮你摆脱了这个烦人的问题,请让我知道。
|
我希望这篇教程能够帮助你修复Ubuntu 14.04和Ubuntu 15.04中检测到系统程序错误的问题。如果这个小窍门帮你摆脱了这个烦人的问题,请让我知道。
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
--------------------------------------------------------------------------------
|
||||||
@ -71,7 +74,7 @@ via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/
|
|||||||
|
|
||||||
作者:[Abhishek][a]
|
作者:[Abhishek][a]
|
||||||
译者:[XLCYun](https://github.com/XLCYun)
|
译者:[XLCYun](https://github.com/XLCYun)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
140
published/201507/20150713 How to manage Vim plugins.md
Normal file
140
published/201507/20150713 How to manage Vim plugins.md
Normal file
@ -0,0 +1,140 @@
|
|||||||
|
如何管理 Vim 插件
|
||||||
|
================================================================================
|
||||||
|
|
||||||
|
Vim是Linux上一个轻量级的通用文本编辑器。虽然对一般的Linux用户来说,它最初的学习曲线可能比较陡峭,但比起它带来的好处,这些付出完全是值得的。Vim 可以通过完全可定制的插件来增加越来越多的功能。但是,由于它的功能配置比较难,你需要花一些时间去了解它的插件系统,然后才能够有效地去个性化定制Vim。幸运的是,我们已经有一些工具能够使我们在使用Vim插件时更加轻松。而我日常所使用的就是Vundle。
|
||||||
|
|
||||||
|
### 什么是Vundle ###
|
||||||
|
|
||||||
|
[Vundle][1]意即Vim Bundle,是一个vim插件管理器。Vundle能让你很简单地实现插件的安装、升级、搜索或者清除。它还能管理你的运行环境并且在标签方面提供帮助。在本教程中我们将展示如何安装和使用Vundle。
|
||||||
|
|
||||||
|
### 安装Vundle ###
|
||||||
|
|
||||||
|
首先,如果你的Linux系统上没有Git的话,先[安装Git][2]。
|
||||||
|
|
||||||
|
接着,创建一个目录,Vim的插件将会被下载并且安装在这个目录上。默认情况下,这个目录为~/.vim/bundle。
|
||||||
|
|
||||||
|
$ mkdir -p ~/.vim/bundle
|
||||||
|
|
||||||
|
现在,使用如下指令安装Vundle。注意Vundle本身也是一个vim插件。因此我们同样把vundle安装到之前创建的目录~/.vim/bundle下。
|
||||||
|
|
||||||
|
$ git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim
|
||||||
|
|
||||||
|
### 配置Vundle ###
|
||||||
|
|
||||||
|
现在配置你的.vimrc文件如下:
|
||||||
|
|
||||||
|
set nocompatible " 必需。
|
||||||
|
filetype off " 必须。
|
||||||
|
|
||||||
|
" 在这里设置你的运行时环境的路径。
|
||||||
|
set rtp+=~/.vim/bundle/Vundle.vim
|
||||||
|
|
||||||
|
" 初始化vundle
|
||||||
|
call vundle#begin()
|
||||||
|
|
||||||
|
" 这一行应该永远放在开头。
|
||||||
|
Plugin 'gmarik/Vundle.vim'
|
||||||
|
|
||||||
|
" 这个示范来自https://github.com/gmarik/Vundle.vim README
|
||||||
|
Plugin 'tpope/vim-fugitive'
|
||||||
|
|
||||||
|
" 取自http://vim-scripts.org/vim/scripts.html的插件
|
||||||
|
Plugin 'L9'
|
||||||
|
|
||||||
|
" 该Git插件没有放在GitHub上。
|
||||||
|
Plugin 'git://git.wincent.com/command-t.git'
|
||||||
|
|
||||||
|
"本地计算机上的Git仓库路径 (例如,当你在开发你自己的插件时)
|
||||||
|
Plugin 'file:///home/gmarik/path/to/plugin'
|
||||||
|
|
||||||
|
" vim脚本sparkup存放在这个名叫vim的仓库下的一个子目录中。
|
||||||
|
" 将这个路径正确地设置为runtimepath。
|
||||||
|
Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}
|
||||||
|
|
||||||
|
" 避免与L9发生名字上的冲突
|
||||||
|
Plugin 'user/L9', {'name': 'newL9'}
|
||||||
|
|
||||||
|
"所有的插件都应该在这一行之前。
|
||||||
|
call vundle#end() " 必需。
|
||||||
|
|
||||||
|
容我简单解释一下上面的设置:默认情况下,Vundle将从github.com或者vim-scripts.org下载和安装vim插件。你也可以改变这个默认行为。
|
||||||
|
|
||||||
|
要从github安装插件:
|
||||||
|
|
||||||
|
Plugin 'user/plugin'
|
||||||
|
|
||||||
|
要从 http://vim-scripts.org/vim/scripts.html 处安装:
|
||||||
|
|
||||||
|
Plugin 'plugin_name'
|
||||||
|
|
||||||
|
要从另外一个git仓库中安装:
|
||||||
|
|
||||||
|
Plugin 'git://git.another_repo.com/plugin'
|
||||||
|
|
||||||
|
从本地文件中安装:
|
||||||
|
|
||||||
|
Plugin 'file:///home/user/path/to/plugin'
|
||||||
|
|
||||||
|
你同样可以定制其它东西,例如你的插件的运行时路径,当你自己在编写一个插件时,或者你只是想从其它目录——而不是~/.vim——中加载插件时,这样做就非常有用。
|
||||||
|
|
||||||
|
Plugin 'rstacruz/sparkup', {'rtp': 'another_vim_path/'}
|
||||||
|
|
||||||
|
如果你有同名的插件,你可以重命名你的插件,这样它们就不会发生冲突了。
|
||||||
|
|
||||||
|
Plugin 'user/plugin', {'name': 'newPlugin'}
|
||||||
|
|
||||||
|
### 使用Vundle命令 ###
|
||||||
|
|
||||||
|
一旦你用vundle设置好你的插件,你就可以通过几个vundle命令来安装、升级、搜索插件,或者清除没有用的插件。
|
||||||
|
|
||||||
|
#### 安装一个新的插件 ####
|
||||||
|
|
||||||
|
`PluginInstall`命令将会安装所有列在你的.vimrc文件中的插件。你也可以通过传递一个插件名给它,来安装某个特定的插件。
|
||||||
|
|
||||||
|
:PluginInstall
|
||||||
|
:PluginInstall <插件名>
|
||||||
|
|
||||||
|
![](https://farm1.staticflickr.com/559/18998707843_438cd55463_c.jpg)
|
||||||
|
|
||||||
|
#### 清除没有用的插件 ####
|
||||||
|
|
||||||
|
如果你有任何没有用到的插件,你可以通过`PluginClean`命令来删除它。
|
||||||
|
|
||||||
|
:PluginClean
|
||||||
|
|
||||||
|
![](https://farm4.staticflickr.com/3814/19433047689_17d9822af6_c.jpg)
|
||||||
|
|
||||||
|
#### 查找一个插件 ####
|
||||||
|
|
||||||
|
如果你想从提供的插件清单中安装一个插件,搜索功能会很有用。
|
||||||
|
|
||||||
|
:PluginSearch <文本>
|
||||||
|
|
||||||
|
![](https://farm1.staticflickr.com/541/19593459846_75b003443d_c.jpg)
|
||||||
|
|
||||||
|
在搜索的时候,你可以在交互式分割窗口中安装、清除、重新搜索或者重新加载插件清单。安装后的插件不会自动加载生效,要使其加载生效,可以将它们添加进你的.vimrc文件中。
|
||||||
|
|
||||||
|
### 总结 ###
|
||||||
|
|
||||||
|
Vim是一个妙不可言的工具。它不单单是一个能够使你的工作更加顺畅高效的默认文本编辑器,同时它还能够摇身一变,成为现存的几乎任何一门编程语言的IDE。
|
||||||
|
|
||||||
|
注意,有一些网站能帮你找到适合的vim插件。猛击 [http://www.vim-scripts.org][3], Github或者 [http://www.vimawesome.com][4] 获取新的脚本或插件。同时记得使用为你的插件提供的帮助。
|
||||||
|
|
||||||
|
和你最爱的编辑器一起嗨起来吧!
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: http://xmodulo.com/manage-vim-plugins.html
|
||||||
|
|
||||||
|
作者:[Christopher Valerio][a]
|
||||||
|
译者:[XLCYun(袖里藏云)](https://github.com/XLCYun)
|
||||||
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:http://xmodulo.com/author/valerio
|
||||||
|
[1]:https://github.com/VundleVim/Vundle.vim
|
||||||
|
[2]:http://ask.xmodulo.com/install-git-linux.html
|
||||||
|
[3]:http://www.vim-scripts.org/
|
||||||
|
[4]:http://www.vimawesome.com/
|
||||||
|
|
@ -0,0 +1,118 @@
|
|||||||
|
Ubuntu 下 CCleaner 的 4 个替代品
|
||||||
|
================================================================================
|
||||||
|
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/ccleaner-10-700x393.jpg)
|
||||||
|
|
||||||
|
回首我使用 Windows 的那些日子,[CCleaner][1] 是我用来释放空间、删除垃圾文件和加速 Windows 的最喜爱的工具。我知道,当从 Windows 切换到 Linux 时,我并不是唯一期望 CCleaner 拥有 Linux 版本的人。假如你正在寻找 Linux 下 CCleaner 的替代品,我将在下面列举 4 个这样的应用,它们可以用来清理 Ubuntu 或基于 Ubuntu 的 Linux 发行版本。但在我们看这个清单之前,先让我们考虑一下 Linux 是否需要系统清理工具这个问题。
|
||||||
|
|
||||||
|
### Linux 需要像 CCleaner 那样的系统清理工具吗? ###
|
||||||
|
|
||||||
|
为了得到答案,让我们看看 CCleaner 做了什么。正如 [How-To Geek][2] 的这篇文章中所提到的那样:
|
||||||
|
|
||||||
|
> CCleaner 有两个主要的功能。一是:它扫描并删除无用的文件,释放磁盘空间。二是:它擦除隐私的数据,例如你的浏览记录和在各种软件中最近打开的文件列表。
|
||||||
|
|
||||||
|
所以,概括起来,它在系统范围内清理在你的浏览器或媒体播放器中的临时文件。你或许知道 Windows 有在系统中保存垃圾文件的喜好,那 Linux 呢?它是如何处理临时文件的呢?
|
||||||
|
|
||||||
|
与 Windows 不同, Linux 自动地清理所有的临时文件(在 `/tmp` 中存储)。在 Linux 中没有注册表,这进一步减轻了头痛。在最坏情况下,你可能会有一些损坏的不再需要的软件包,以及丢失一些网络浏览历史记录, cookies ,缓存等。
|
||||||
|
|
||||||
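如果你好奇 /tmp 中实际存放了什么、占了多少空间,可以用下面两条命令查看一下(这只是一个查看示例,不会删除任何东西):

```shell
# 查看 /tmp 的总占用和其中的部分条目
du -sh /tmp 2>/dev/null
ls /tmp | head -n 5
```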
|
### 这意味着 Linux 不必需要系统清理工具了吗? ###
|
||||||
|
|
||||||
|
- 假如你可以运行某些命令来清理偶尔使用的软件包,手动删除浏览历史记录等,那么答案是:不需要;
|
||||||
|
- 假如你不想不断地从一个地方跳到另一个地方来运行命令,并想用一个工具来删除所有可通过一次或多次点击所选择的东西,那么答案是:需要。
|
||||||
|
|
||||||
|
假如你的答案是“需要”,就让我们继续看看一些类似于 CCleaner 的工具,用它们清理你的 Ubuntu 系统。
|
||||||
|
|
||||||
|
### Ubuntu 下 CCleaner 的替代品 ###
|
||||||
|
|
||||||
|
请注意,我使用的系统是 Ubuntu,因为下面讨论的一些工具只存在于基于 Ubuntu 的 Linux 发行版本中,而另外一些在所有的 Linux 发行版本中都可使用。
|
||||||
|
|
||||||
|
#### 1. BleachBit ####
|
||||||
|
|
||||||
|
![BleachBit 针对 Linux 的系统清理工具](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/BleachBit_Cleaning_Tool_Ubuntu.jpeg)
|
||||||
|
|
||||||
|
[BleachBit][3] 是一个跨平台的应用程序,在 Windows 和 Linux 平台下都可使用。它有一个很长的支持清理的程序的列表,这样可以让你选择性的清理缓存,cookies 和日志文件。让我们快速浏览它的特点:
|
||||||
|
|
||||||
|
- 简洁的图形界面:勾选你想要的项目,即可预览或删除
|
||||||
|
- 支持多平台: Linux 和 Windows
|
||||||
|
- 免费且开源
|
||||||
|
- 粉碎文件以隐藏它们的内容并防止数据恢复
|
||||||
|
- 重写空闲的磁盘空间来隐藏先前删除的文件内容
|
||||||
|
- 也拥有命令行界面
|
||||||
|
|
||||||
|
默认情况下,在 Ubuntu 14.04 和 15.04 的软件仓库中都可以获取到 BleachBit,你可以在终端中使用下面的命令来安装:
|
||||||
|
|
||||||
|
sudo apt-get install bleachbit
|
||||||
|
|
||||||
|
对于所有主流的 Linux 发行版本, BleachBit 提供有二进制程序,你可以从下面的链接中下载到 BleachBit:
|
||||||
|
|
||||||
|
- [下载 BleachBit 的 Linux 版本][4]
|
||||||
|
|
||||||
|
#### 2. Sweeper ####
|
||||||
|
|
||||||
|
![Sweeper 针对 Ubuntu 的系统清理工具](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/sweeper.jpeg)
|
||||||
|
|
||||||
|
Sweeper 是一个系统清理工具,它是[KDE SC utilities][5] 模块的一部分。它的主要特点有:
|
||||||
|
|
||||||
|
- 移除与网络相关的痕迹: cookies, 历史,缓存等
|
||||||
|
- 移除图形缩略图缓存
|
||||||
|
- 清理应用和文件的历史记录
|
||||||
|
|
||||||
|
默认情况下,Sweeper 在 Ubuntu 的软件仓库中可以得到。可以在终端中使用下面的命令来安装 Sweeper:
|
||||||
|
|
||||||
|
sudo apt-get install sweeper
|
||||||
|
|
||||||
|
#### 3. Ubuntu Tweak ####
|
||||||
|
|
||||||
|
![清理 Ubuntu 系统的 Ubuntu Tweak 工具 ](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Tweak_Janitor.jpeg)
|
||||||
|
|
||||||
|
正如它的名称所说的那样,[Ubuntu Tweak][6] 是一个调整工具,而不仅仅是一个清理应用。除了调整诸如 compiz 设置,面板的配置,开机启动程序的控制,电源管理等,Ubuntu Tweak 还提供一个清理选项,它可以让你:
|
||||||
|
|
||||||
|
- 清理浏览器缓存
|
||||||
|
- 清理 Ubuntu 软件中心缓存
|
||||||
|
- 清理缩略图缓存
|
||||||
|
- 清理 apt 仓库缓存
|
||||||
|
- 清理旧的内核文件
|
||||||
|
- 清理软件包配置
|
||||||
|
|
||||||
|
你可以从下面的链接中得到 Ubuntu Tweak 的 `.deb` 安装文件:
|
||||||
|
|
||||||
|
- [下载 Ubuntu Tweak][7]
|
||||||
|
|
||||||
|
#### 4. GCleaner (beta) ####
|
||||||
|
|
||||||
|
![GCleaner 类似 CCleaner 的工具](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/GCleaner.jpeg)
|
||||||
|
|
||||||
|
作为 elementary OS Freya 的第三方应用, GCleaner 旨在成为 GNU 世界的 CCleaner,其界面与 CCleaner 非常相似。它的一些主要特点有:
|
||||||
|
|
||||||
|
- 清理浏览器历史记录
|
||||||
|
- 清理应用缓存
|
||||||
|
- 清理软件包及其配置
|
||||||
|
- 清理最近使用的文件历史记录
|
||||||
|
- 清空垃圾箱
|
||||||
|
|
||||||
|
在书写本文时, GCleaner 仍处于开发阶段,你可以查看这个项目的网站,并得到源代码来编译和使用 GCleaner。
|
||||||
|
|
||||||
|
- [更多地了解 GCleaner][8]
|
||||||
|
|
||||||
|
### 你的选择呢? ###
|
||||||
|
|
||||||
|
我已经向你列举了一些可能的选项,由你自己决定使用哪个工具来清理 Ubuntu 14.04。但我可以肯定的是,若你正在寻找一个类似 CCleaner 的应用,你将在上面列举的 4 个工具中做出最后的选择。
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: http://itsfoss.com/ccleaner-alternatives-ubuntu-linux/
|
||||||
|
|
||||||
|
作者:[Abhishek][a]
|
||||||
|
译者:[FSSlc](https://github.com/FSSlc)
|
||||||
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:http://itsfoss.com/author/abhishek/
|
||||||
|
[1]:https://www.piriform.com/ccleaner/download
|
||||||
|
[2]:http://www.howtogeek.com/172820/beginner-geek-what-does-ccleaner-do-and-should-you-use-it/
|
||||||
|
[3]:http://bleachbit.sourceforge.net/
|
||||||
|
[4]:http://bleachbit.sourceforge.net/download/linux
|
||||||
|
[5]:https://www.kde.org/applications/utilities/
|
||||||
|
[6]:http://ubuntu-tweak.com/
|
||||||
|
[7]:http://ubuntu-tweak.com/
|
||||||
|
[8]:https://quassy.github.io/elementary-apps/GCleaner/
|
@ -0,0 +1,162 @@
|
|||||||
|
|
||||||
|
在 RHEL/CentOS 上为Web服务器架设 “XR”(Crossroads) 负载均衡器
|
||||||
|
================================================================================
|
||||||
|
Crossroads 是一个独立的服务,它是一个用于Linux和TCP服务的开源负载均衡和故障转移实用程序。它可用于HTTP,HTTPS,SSH,SMTP 和 DNS 等,它也是一个多线程的工具,在提供负载均衡服务时,它可以只使用一块内存空间以此来提高性能。
|
||||||
|
|
||||||
|
首先来看看 XR 是如何工作的。我们可以将 XR 放到网络客户端和服务器之间,它可以将客户端的请求分配到服务器上以平衡负载。
|
||||||
|
|
||||||
|
如果一台服务器宕机,XR 会转发客户端请求到另一个服务器,所以客户感觉不到停顿。看看下面的图来了解什么样的情况下,我们要使用 XR 处理。
|
||||||
|
|
||||||
|
![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.jpg)
|
||||||
|
|
||||||
|
*安装 XR Crossroads 负载均衡器*
|
||||||
|
|
||||||
|
这里有两个 Web 服务器,一个网关服务器,我们将在网关服务器上安装和设置 XR 以接收客户端请求,并分发到服务器。
|
||||||
|
|
||||||
|
XR Crossroads 网关服务器:172.16.1.204
|
||||||
|
|
||||||
|
Web 服务器01:172.16.1.222
|
||||||
|
|
||||||
|
Web 服务器02:192.168.1.161
|
||||||
|
|
||||||
|
在上述情况下,我们网关服务器(即 XR Crossroads)的IP地址是172.16.1.204,webserver01 为172.16.1.222,它监听8888端口,webserver02 是192.168.1.161,它监听端口5555。
|
||||||
|
|
||||||
|
现在,我们需要做的是均衡所有请求:通过 XR 网关接收来自互联网的请求,然后把它们分发到两个web服务器上,以达到负载均衡。
|
||||||
|
|
||||||
|
### 第1步:在网关服务器上安装 XR Crossroads 负载均衡器 ###
|
||||||
|
|
||||||
|
**1. 不幸的是,没有为 crossroads 提供可用的 RPM 包,我们只能从源码安装。**
|
||||||
|
|
||||||
|
要编译 XR,你必须在系统上安装 C++ 编译器和 GNU make 组件,才能避免安装错误。
|
||||||
|
|
||||||
|
# yum install gcc gcc-c++ make
|
||||||
|
|
||||||
|
接下来,去他们的官方网站([https://crossroads.e-tunity.com] [1])下载此压缩包(即 crossroads-stable.tar.gz)。
|
||||||
|
|
||||||
|
或者,您可以使用 wget 去下载包然后解压在任何位置(如:/usr/src/),进入解压目录,并使用 “make install” 命令安装。
|
||||||
|
|
||||||
|
# wget https://crossroads.e-tunity.com/downloads/crossroads-stable.tar.gz
|
||||||
|
# tar -xvf crossroads-stable.tar.gz
|
||||||
|
# cd crossroads-2.74/
|
||||||
|
# make install
|
||||||
|
|
||||||
|
![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.png)
|
||||||
|
|
||||||
|
*安装 XR Crossroads 负载均衡器*
|
||||||
|
|
||||||
|
安装完成后,二进制文件安装在 /usr/sbin 目录下,XR 的配置文件在 /etc 下名为 “xrctl.xml” 。
|
||||||
|
|
||||||
|
**2. 最后一个条件,你需要两个web服务器。为了方便使用,我在一台服务器中创建两个 Python SimpleHTTPServer 实例。**
|
||||||
|
|
||||||
|
要了解如何设置一个 python SimpleHTTPServer,请阅读我们此处的文章 [使用 SimpleHTTPServer 轻松创建两个 web 服务器][2].
|
||||||
|
|
||||||
|
正如我所说的,我们要使用两个web服务器,webserver01 通过8888端口运行在172.16.1.222上,webserver02 通过5555端口运行在192.168.1.161上。
|
||||||
|
|
||||||
|
![XR WebServer 01](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer01.jpg)
|
||||||
|
|
||||||
|
*XR WebServer 01*
|
||||||
|
|
||||||
|
![XR WebServer 02](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer02.jpg)
|
||||||
|
|
||||||
|
*XR WebServer 02*
|
||||||
|
|
||||||
|
### 第2步: 配置 XR Crossroads 负载均衡器 ###
|
||||||
|
|
||||||
|
**3. 所需都已经就绪。现在我们要做的就是配置`xrctl.xml` 文件并通过 XR 服务器接受来自互联网的请求分发到 web 服务器上。**
|
||||||
|
|
||||||
|
现在用 [vi/vim 编辑器][3]打开`xrctl.xml`文件。
|
||||||
|
|
||||||
|
# vim /etc/xrctl.xml
|
||||||
|
|
||||||
|
并作如下修改。
|
||||||
|
|
||||||
|
<?xml version="1.0" encoding="UTF-8"?>
|
||||||
|
<configuration>
|
||||||
|
<system>
|
||||||
|
<uselogger>true</uselogger>
|
||||||
|
<logdir>/tmp</logdir>
|
||||||
|
</system>
|
||||||
|
<service>
|
||||||
|
<name>Tecmint</name>
|
||||||
|
<server>
|
||||||
|
<address>172.16.1.204:8080</address>
|
||||||
|
<type>tcp</type>
|
||||||
|
<webinterface>0:8010</webinterface>
|
||||||
|
<verbose>yes</verbose>
|
||||||
|
<clientreadtimeout>0</clientreadtimeout>
|
||||||
|
<clientwritetimeout>0</clientwritetimeout>
|
||||||
|
<backendreadtimeout>0</backendreadtimeout>
|
||||||
|
<backendwritetimeout>0</backendwritetimeout>
|
||||||
|
</server>
|
||||||
|
<backend>
|
||||||
|
<address>172.16.1.222:8888</address>
|
||||||
|
</backend>
|
||||||
|
<backend>
|
||||||
|
<address>192.168.1.161:5555</address>
|
||||||
|
</backend>
|
||||||
|
</service>
|
||||||
|
</configuration>
|
||||||
|
|
||||||
|
![Configure XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Configure-XR-Crossroads-Load-Balancer.jpg)
|
||||||
|
|
||||||
|
*配置 XR Crossroads 负载均衡器*
|
||||||
|
|
||||||
|
在这里,你可以看到在 xrctl.xml 中配置了一个非常基本的 XR 。我已经定义了 XR 服务器在哪里,XR 的后端服务和端口及 XR 的 web 管理界面是什么。
|
||||||
|
|
||||||
|
**4. 现在,你需要通过以下命令来启动该 XR 守护进程。**
|
||||||
|
|
||||||
|
# xrctl start
|
||||||
|
# xrctl status
|
||||||
|
|
||||||
|
![Start XR Crossroads](http://www.tecmint.com/wp-content/uploads/2015/07/Start-XR-Crossroads.jpg)
|
||||||
|
|
||||||
|
*启动 XR Crossroads*
|
||||||
|
|
||||||
|
**5. 好的。现在是时候来检查该配置是否可以工作正常了。打开两个网页浏览器,输入 XR 服务器的 IP 地址和端口,并查看输出。**
|
||||||
|
|
||||||
|
![Verify Web Server Load Balancing](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Web-Server-Load-Balancing.jpg)
|
||||||
|
|
||||||
|
*验证 Web 服务器负载均衡*
|
||||||
|
|
||||||
|
太棒了。它工作正常。是时候玩玩 XR 了。(LCTT 译注:可以看到两个请求分别分配到了不同服务器。)
|
||||||
|
|
||||||
|
**6. 现在可以通过我们配置的网络管理界面的端口来登录到 XR Crossroads 仪表盘。在浏览器输入你的 XR 服务器的 IP 地址和你配置在 xrctl.xml 中的管理端口。**
|
||||||
|
|
||||||
|
http://172.16.1.204:8010
|
||||||
|
|
||||||
|
![XR Crossroads Dashboard](http://www.tecmint.com/wp-content/uploads/2015/07/XR-Crossroads-Dashboard.jpg)
|
||||||
|
|
||||||
|
*XR Crossroads 仪表盘*
|
||||||
|
|
||||||
|
看起来像上面一样。它容易理解,用户界面友好,易于使用。它在右上角显示每个服务器能容纳多少个连接,以及关于接收该请求的附加细节。你也可以设置每个服务器承担的负载量,最大连接数和平均负载等。
|
||||||
|
|
||||||
|
最大的好处是,即使没有配置文件 xrctl.xml,你也可以做到这一点。你唯一要做的就是运行以下命令,它就会把这一切搞定。
|
||||||
|
|
||||||
|
# xr --verbose --server tcp:172.16.1.204:8080 --backend 172.16.1.222:8888 --backend 192.168.1.161:5555
|
||||||
|
|
||||||
|
上面语法的详细说明:
|
||||||
|
|
||||||
|
- -verbose 将显示命令执行后的信息。
|
||||||
|
- -server 定义你在安装包中的 XR 服务器。
|
||||||
|
- -backend 定义你需要平衡分配到 Web 服务器的流量。
|
||||||
|
- tcp 说明我们使用 TCP 服务。
|
||||||
|
|
||||||
|
欲了解更多详情,有关文件及 CROSSROADS 的配置,请访问他们的官方网站: [https://crossroads.e-tunity.com/][4].
|
||||||
|
|
||||||
|
XR Crossroads 使用许多方法来提高服务器性能,避免宕机,让你的管理任务更轻松,更简便。希望你喜欢此文章,并随时在下面发表你的评论和建议,方便与我们保持联系。
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: http://www.tecmint.com/setting-up-xr-crossroads-load-balancer-for-web-servers-on-rhel-centos/
|
||||||
|
|
||||||
|
作者:[Thilina Uvindasiri][a]
|
||||||
|
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||||
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:http://www.tecmint.com/author/thilidhanushka/
|
||||||
|
[1]:https://crossroads.e-tunity.com/
|
||||||
|
[2]:http://www.tecmint.com/python-simplehttpserver-to-create-webserver-or-serve-files-instantly/
|
||||||
|
[3]:http://www.tecmint.com/vi-editor-usage/
|
||||||
|
[4]:https://crossroads.e-tunity.com/
|
@ -0,0 +1,201 @@
|
|||||||
|
在 Linux 命令行中使用和执行 PHP 代码(二):12 个 PHP 交互性 shell 的用法
|
||||||
|
================================================================================
|
||||||
|
在上一篇文章“[在 Linux 命令行中使用和执行 PHP 代码(一)][1]”中,我同时着重讨论了直接在Linux命令行中运行PHP代码以及在Linux终端中执行PHP脚本文件。
|
||||||
|
|
||||||
|
![Run PHP Codes in Linux Commandline](http://www.tecmint.com/wp-content/uploads/2015/07/Run-PHP-Codes-in-Linux-Commandline.jpeg)
|
||||||
|
|
||||||
|
本文旨在让你了解一些相当不错的Linux终端中的PHP交互性 shell 的用法特性。
|
||||||
|
|
||||||
|
让我们先在PHP 的交互shell中来对`php.ini`设置进行一些配置吧。
|
||||||
|
|
||||||
|
**6. 设置PHP命令行提示符**
|
||||||
|
|
||||||
|
要设置PHP命令行提示,你需要在Linux终端中使用下面的php -a(启用PHP交互模式)命令开启一个PHP交互shell。
|
||||||
|
|
||||||
|
$ php -a
|
||||||
|
|
||||||
|
然后,设置任何东西(比如说Hi Tecmint ::)作为PHP交互shell的命令提示符,操作如下:
|
||||||
|
|
||||||
|
php > #cli.prompt=Hi Tecmint ::
|
||||||
|
|
||||||
|
![Enable PHP Interactive Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Enable-PHP-Interactive-Shell.png)
|
||||||
|
|
||||||
|
*启用PHP交互Shell*
|
||||||
|
|
||||||
|
同时,你也可以设置当前时间作为你的命令行提示符,操作如下:
|
||||||
|
|
||||||
|
php > #cli.prompt=`echo date('H:m:s');` >
|
||||||
|
|
||||||
|
22:15:43 >
|
||||||
|
|
||||||
|
**7. 每次输出一屏**
|
||||||
|
|
||||||
|
在我们上一篇文章中,我们已经在原始命令中通过管道在很多地方使用了`less`命令。通过该操作,我们可以在那些不能一屏全部输出的地方获得分屏显示。但是,我们可以通过配置php.ini文件,设置pager的值为less以每次输出一屏,操作如下:
|
||||||
|
|
||||||
|
$ php -a
|
||||||
|
php > #cli.pager=less
|
||||||
|
|
||||||
|
![Fix PHP Screen Output](http://www.tecmint.com/wp-content/uploads/2015/07/Fix-PHP-Screen-Output.png)
|
||||||
|
|
||||||
|
*限制PHP屏幕输出*
|
||||||
|
|
||||||
|
这样,下次当你运行一个输出内容太过庞大、无法在一屏内显示完的命令(比如说调试器`phpinfo();`)时,它就会自动按你当前屏幕的大小分屏输出。
|
||||||
|
|
||||||
|
php > phpinfo();
|
||||||
|
|
||||||
|
![PHP Info Output](http://www.tecmint.com/wp-content/uploads/2015/07/PHP-Info-Output.png)
|
||||||
|
|
||||||
|
*PHP信息输出*
|
||||||
|
|
||||||
|
**8. 建议和TAB补全**
|
||||||
|
|
||||||
|
PHP shell足够智能,它可以显示给你建议和进行TAB补全,你可以通过TAB键来使用该功能。如果对于你想要用TAB补全的字符串而言有多个选项,那么你需要使用两次TAB键来完成,其它情况则使用一次即可。
|
||||||
|
|
||||||
|
如果有超过一个的可能性,请使用两次TAB键。
|
||||||
|
|
||||||
|
php > ZIP [TAB] [TAB]
|
||||||
|
|
||||||
|
如果只有一个可能性,只要使用一次TAB键。
|
||||||
|
|
||||||
|
php > #cli.pager [TAB]
|
||||||
|
|
||||||
|
你可以一直按TAB键来获得建议的补全,直到该值满足要求。所有的行为都将记录到`~/.php_history`文件。
|
||||||
|
|
||||||
|
要检查你的PHP交互shell活动日志,你可以执行:
|
||||||
|
|
||||||
|
$ less ~/.php_history
|
||||||
|
|
||||||
|
![Check PHP Interactive Shell Logs](http://www.tecmint.com/wp-content/uploads/2015/07/Check-PHP-Interactive-Shell-Logs.png)
|
||||||
|
|
||||||
|
*检查PHP交互Shell日志*
|
||||||
|
|
||||||
|
**9. 你可以在PHP交互shell中使用颜色,你所需要知道的仅仅是颜色代码。**
|
||||||
|
|
||||||
|
使用echo来打印各种颜色的输出结果,类似这样:
|
||||||
|
|
||||||
|
php > echo "color_code1 TEXT second_color_code";
|
||||||
|
|
||||||
|
具体来说是:
|
||||||
|
|
||||||
|
php > echo "\033[0;31m Hi Tecmint \x1B[0m";
|
||||||
|
|
||||||
|
![Enable Colors in PHP Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Enable-Colors-in-PHP-Shell.png)
|
||||||
|
|
||||||
|
*在PHP Shell中启用彩色*
|
||||||
|
|
||||||
|
到目前为止,我们已经看到,按回车键意味着执行命令,然而PHP Shell中各个命令结尾的分号是必须的。
|
||||||
|
|
||||||
|
**10. 在PHP shell中用basename()输出路径中最后一部分**
|
||||||
|
|
||||||
|
PHP shell中的basename()函数可以从给出的文件或目录路径中取得路径的最后一部分。
|
||||||
|
|
||||||
|
basename()样例#1和#2。
|
||||||
|
|
||||||
|
php > echo basename("/var/www/html/wp/wp-content/plugins");
|
||||||
|
php > echo basename("www.tecmint.com/contact-us.html");
|
||||||
|
|
||||||
|
上述两个样例将输出:
|
||||||
|
|
||||||
|
plugins
|
||||||
|
contact-us.html
|
||||||
|
|
||||||
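顺带一提,coreutils 中的 basename 命令和 PHP 的 basename() 行为类似,可以在 bash 中交叉验证上面的结果:

```shell
# 用 coreutils 的 basename 交叉验证 PHP basename() 的输出
basename /var/www/html/wp/wp-content/plugins    # plugins
basename www.tecmint.com/contact-us.html        # contact-us.html
```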
|
![Print Base Name in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Print-Base-Name-in-PHP.png)
|
||||||
|
|
||||||
|
*在PHP中打印基本名称*
|
||||||
|
|
||||||
|
**11. 你可以使用PHP交互shell在你的桌面创建文件(比如说test1.txt),就像下面这么简单**
|
||||||
|
|
||||||
|
php> touch("/home/avi/Desktop/test1.txt");
|
||||||
|
|
||||||
|
我们已经见识了PHP交互shell在数学运算中有多优秀,这里还有更多一些例子会令你吃惊。
|
||||||
|
|
||||||
|
**12. 使用PHP交互shell打印比如像tecmint.com这样的字符串的长度**
|
||||||
|
|
||||||
|
strlen函数用于获取指定字符串的长度。
|
||||||
|
|
||||||
|
php > echo strlen("tecmint.com");
|
||||||
|
|
||||||
|
![Print Length String in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Print-Length-String-in-PHP.png)
|
||||||
|
|
||||||
|
*在PHP中打印字符串长度*
|
||||||
|
|
||||||
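同样地,可以在 bash 中用 wc -c 交叉验证这个长度(printf '%s' 不会输出末尾换行,因此字节数正好等于字符串长度):

```shell
# 统计 "tecmint.com" 的字节数,与 PHP strlen() 的结果一致
printf '%s' 'tecmint.com' | wc -c    # 11
```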
|
**13. PHP交互shell可以对数组排序,是的,你没听错**
|
||||||
|
|
||||||
|
声明变量a,并将其值设置为array(7,9,2,5,10)。
|
||||||
|
|
||||||
|
php > $a=array(7,9,2,5,10);
|
||||||
|
|
||||||
|
对数组中的数字进行排序。
|
||||||
|
|
||||||
|
php > sort($a);
|
||||||
|
|
||||||
|
以排序后的顺序打印数组中的数字,同时打印序号,第一个为[0]。
|
||||||
|
|
||||||
|
php > print_r($a);
|
||||||
|
Array
|
||||||
|
(
|
||||||
|
[0] => 2
|
||||||
|
[1] => 5
|
||||||
|
[2] => 7
|
||||||
|
[3] => 9
|
||||||
|
[4] => 10
|
||||||
|
)
|
||||||
|
|
||||||
|
![Sort Arrays in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Sort-Arrays-in-PHP.png)
|
||||||
|
|
||||||
|
*在PHP中对数组排序*
|
||||||
|
|
||||||
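在 bash 中用 sort -n 对同一组数字做数值排序,可以得到同样的顺序:

```shell
# 对同样的数字 7 9 2 5 10 做数值排序,依次输出 2 5 7 9 10
printf '7\n9\n2\n5\n10\n' | sort -n
```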
|
**14. 在PHP交互Shell中获取π的值**
|
||||||
|
|
||||||
|
php > echo pi();
|
||||||
|
|
||||||
|
3.1415926535898
|
||||||
|
|
||||||
|
**15. 打印某个数(比如150)的平方根**
|
||||||
|
|
||||||
|
php > echo sqrt(150);
|
||||||
|
|
||||||
|
12.247448713916
|
||||||
|
|
||||||
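也可以在 bash 中用 awk 交叉验证这个平方根(awk 内建了 sqrt() 函数):

```shell
# 用 awk 计算 sqrt(150),保留 6 位小数
awk 'BEGIN { printf "%.6f\n", sqrt(150) }'    # 12.247449
```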
|
**16. 从0-10的范围内挑选一个随机数**
|
||||||
|
|
||||||
|
php > echo rand(0, 10);
|
||||||
|
|
||||||
|
![Get Random Number in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Random-Number-in-PHP.png)
|
||||||
|
|
||||||
|
*在PHP中获取随机数*
|
||||||
|
|
||||||
|
**17. 获取某个指定字符串的md5校验和sha1校验,例如,让我们在PHP Shell中检查某个字符串(比如说avi)的md5校验和sha1校验,并交叉校验bash shell生成的md5校验和sha1校验的结果。**
|
||||||
|
|
||||||
|
php > echo md5(avi);
|
||||||
|
3fca379b3f0e322b7b7967bfcfb948ad
|
||||||
|
|
||||||
|
php > echo sha1(avi);
|
||||||
|
8f920f22884d6fea9df883843c4a8095a2e5ac6f
|
||||||
|
|
||||||
|
----------
|
||||||
|
|
||||||
|
$ echo -n avi | md5sum
|
||||||
|
3fca379b3f0e322b7b7967bfcfb948ad -
|
||||||
|
|
||||||
|
$ echo -n avi | sha1sum
|
||||||
|
8f920f22884d6fea9df883843c4a8095a2e5ac6f -
|
||||||
|
|
||||||
|
![Check md5sum and sha1sum in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Check-md5sum-and-sha1sum.png)
|
||||||
|
|
||||||
|
*在PHP中检查md5校验和sha1校验*
|
||||||
|
|
||||||
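上面的 bash 校验也可以用 printf 改写:printf '%s' 与 echo -n 一样不会附加换行,所以得到的散列值与 PHP 中的结果完全一致:

```shell
# 与 PHP 的 md5()/sha1() 输出交叉验证
printf '%s' avi | md5sum     # 3fca379b3f0e322b7b7967bfcfb948ad  -
printf '%s' avi | sha1sum    # 8f920f22884d6fea9df883843c4a8095a2e5ac6f  -
```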
|
这里只是PHP Shell中所能获取的功能和PHP Shell的交互特性的惊鸿一瞥,这些就是到现在为止我所讨论的一切。保持连线,在评论中为我们提供你有价值的反馈吧。为我们点赞并分享,帮助我们扩散哦。
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: http://www.tecmint.com/execute-php-codes-functions-in-linux-commandline/
|
||||||
|
|
||||||
|
作者:[Avishek Kumar][a]
|
||||||
|
译者:[GOLinux](https://github.com/GOLinux)
|
||||||
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:http://www.tecmint.com/author/avishek/
|
||||||
|
[1]:https://linux.cn/article-5906-1.html
|
@ -0,0 +1,183 @@
|
|||||||
|
在 Linux 命令行中使用和执行 PHP 代码(一)
|
||||||
|
================================================================================
|
||||||
|
PHP是一个开源服务器端脚本语言,最初这三个字母代表的是“Personal Home Page”,而现在则代表的是“PHP:Hypertext Preprocessor”,它是个递归首字母缩写。它是一个跨平台脚本语言,深受C、C++和Java的影响。
|
||||||
|
|
||||||
|
![Run PHP Codes in Linux Command Line](http://www.tecmint.com/wp-content/uploads/2015/07/php-command-line-usage.jpeg)
|
||||||
|
|
||||||
|
*在 Linux 命令行中运行 PHP 代码*
|
||||||
|
|
||||||
|
PHP的语法和C、Java以及Perl等编程语言的语法十分相似,同时带有一些自己的特性。它目前大约被2.6亿个网站所使用,当前最新的稳定版本是PHP 5.6.10。
|
||||||
|
|
||||||
|
PHP是嵌入在HTML中的脚本语言,它便于开发人员快速写出动态生成的页面。PHP主要用于服务器端(而Javascript则用于客户端)以通过HTTP生成动态网页,然而,当你知道可以在Linux终端中不需要网页浏览器来执行PHP时,你或许会大为惊讶。
|
||||||
|
|
||||||
|
本文将阐述PHP脚本语言的命令行方面。
|
||||||
|
|
||||||
|
**1. 在安装完PHP和Apache2后,我们需要安装PHP命令行解释器。**
|
||||||
|
|
||||||
|
# apt-get install php5-cli [Debian 及类似系统]
|
||||||
|
# yum install php-cli [CentOS 及类似系统]
|
||||||
|
|
||||||
|
接下来我们通常要做的是,在`/var/www/html`(这是 Apache2 在大多数发行版中的工作目录)这个位置创建一个内容为 `<?php phpinfo(); ?>`,名为 `infophp.php` 的文件来测试(PHP是否安装正确),执行以下命令即可。
    # echo '<?php phpinfo(); ?>' > /var/www/html/infophp.php

然后,在浏览器中访问 http://127.0.0.1/infophp.php ,这将会在网页浏览器中打开该文件。

![Check PHP Info](http://www.tecmint.com/wp-content/uploads/2015/07/Check-PHP-Info.png)

*检查 PHP 信息*

不需要任何浏览器,在 Linux 终端中也可以获得相同的结果。在 Linux 命令行中执行 `/var/www/html/infophp.php`,如:

    # php -f /var/www/html/infophp.php

![Check PHP info from Commandline](http://www.tecmint.com/wp-content/uploads/2015/07/Check-PHP-info-from-Commandline.png)

*从命令行检查 PHP 信息*

由于输出结果太大,我们可以通过管道将上述输出结果输送给 `less` 命令,这样就可以一次输出一屏了,命令如下:

    # php -f /var/www/html/infophp.php | less

![Check All PHP Info](http://www.tecmint.com/wp-content/uploads/2015/07/Check-All-PHP-Info.png)

*检查所有 PHP 信息*

这里,`-f` 选项的作用是解析并执行命令后面所跟的文件。
**2. 我们可以直接在 Linux 命令行使用 `phpinfo()` 这个十分有价值的调试工具,而不需要从文件来调用,只需执行以下命令:**

    # php -r 'phpinfo();'

![PHP Debugging Tool](http://www.tecmint.com/wp-content/uploads/2015/07/PHP-Debugging-Tool.png)

*PHP 调试工具*

这里,`-r` 选项会让 PHP 代码在 Linux 终端中不带 `<?php` 和 `?>` 标记直接执行。
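例如,下面这个 shell 片段(示意性示例,先检测 `php` 命令是否存在,以便在未安装 PHP 的机器上也能安全运行)用 `-r` 直接执行一个简单表达式:

```shell
# 用 -r 直接执行一段不带 <?php ?> 标记的 PHP 代码;
# 若系统未安装 php,则给出提示而不是报错。
if command -v php >/dev/null 2>&1; then
    php -r 'echo 2 + 3, "\n";'
else
    echo "php not installed"
fi
```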
**3. 以交互模式运行 PHP 并做一些数学运算。这里,`-a` 选项用于以交互模式运行 PHP。**

    # php -a

    Interactive shell

    php > echo 2+3;
    5
    php > echo 9-6;
    3
    php > echo 5*4;
    20
    php > echo 12/3;
    4
    php > echo 12/5;
    2.4
    php > echo 2+3-1;
    4
    php > echo 2+3-1*3;
    2
    php > exit

输入 `exit` 或者按下 `ctrl+c` 来关闭 PHP 交互模式。

![Enable PHP Interactive Mode](http://www.tecmint.com/wp-content/uploads/2015/07/Enable-PHP-interactive-mode1.png)

*启用 PHP 交互模式*
**4. 你可以将 PHP 脚本仅仅当做 shell 脚本来运行。首先,在你当前工作目录中创建一个 PHP 样例脚本。**

    # echo -e '#!/usr/bin/php\n<?php phpinfo(); ?>' > phpscript.php

注意,我们在该 PHP 脚本的第一行使用 `#!/usr/bin/php`,就像在 shell 脚本中使用 `#!/bin/bash` 那样。第一行的 `#!/usr/bin/php` 告诉 Linux 命令行用 PHP 解释器来解析该脚本文件。

其次,让该脚本可执行:

    # chmod 755 phpscript.php

接着来运行它:

    # ./phpscript.php
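shebang 的工作机制对任何解释器都是一样的。下面用 `/bin/sh` 演示同样的“创建 → 授权 → 运行”三步(示意性示例,脚本路径 `/tmp/demo_script.sh` 是假设的;把第一行换成 `#!/usr/bin/php` 就是正文中的 PHP 版本):

```shell
# 1) 创建一个带 shebang 行的脚本
cat > /tmp/demo_script.sh <<'EOF'
#!/bin/sh
echo "hello from script"
EOF
# 2) 赋予可执行权限
chmod 755 /tmp/demo_script.sh
# 3) 像普通命令一样直接运行,内核根据 shebang 行选择解释器
/tmp/demo_script.sh
```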
**5. 你可以通过交互 shell 完全靠自己来创建简单的函数,这一定会让你大吃一惊。下面是循序渐进的指南。**

开启 PHP 交互模式。

    # php -a

创建一个函数,将它命名为 `addition`。同时,声明两个变量 `$a` 和 `$b`。

    php > function addition ($a, $b)

使用花括号来在其间为该函数定义规则。

    php > {

定义规则。这里,该规则是将这两个变量相加。

    php { echo $a + $b;

所有规则定义完毕,通过闭合花括号来结束该函数。

    php {}

测试该函数,将数字 4 和 3 相加,命令如下:

    php > var_dump (addition(4,3));

#### 样例输出 ####

    7NULL

你可以运行以下代码来执行该函数,测试不同的值,你想来多少次都行。将里面的 a 和 b 替换成你自己的值。

    php > var_dump (addition(a,b));

----------

    php > var_dump (addition(9,3.3));

#### 样例输出 ####

    12.3NULL
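如果手边没有 PHP,也可以用一个等价的 shell 函数来体会同样的定义与调用流程(示意性示例;注意 shell 算术只支持整数,9 + 3.3 这样的浮点例子无法照搬):

```shell
# 与正文中的 addition 函数等价的 shell 版本
addition() {
    echo $(( $1 + $2 ))
}
addition 4 3   # 输出 7
```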
![Create PHP Functions](http://www.tecmint.com/wp-content/uploads/2015/07/Create-PHP-Functions.png)

*创建 PHP 函数*

你可以一直运行该函数,直至退出交互模式(`ctrl+c`)。同时,你也应该注意到了,上面输出结果中返回的数据类型为 NULL。这个问题可以通过让 PHP 交互 shell 用 return 代替 echo 返回结果来修复。

只需要将上面函数中的 `echo` 语句用 `return` 来替换即可。

替换

    php { echo $a + $b;

为

    php { return $a + $b;

剩下的东西和原理仍然一样。

这里是一个样例,在该样例的输出结果中返回了正确的数据类型。

![PHP Functions](http://www.tecmint.com/wp-content/uploads/2015/07/PHP-Functions.png)

*PHP 函数*

永远都记住,用户定义的函数不会从一个 shell 会话保留到下一个 shell 会话,因此,一旦你退出交互 shell,它就会丢失了。

希望你喜欢此次教程。请保持关注,你会获得更多此类文章。请在下面的评论中为我们提供有价值的反馈。点赞并分享,帮助我们传播。

还请阅读: [12 个 Linux 终端中有用的 PHP 命令行用法——第二部分][1]

--------------------------------------------------------------------------------

via: http://www.tecmint.com/run-php-codes-from-linux-commandline/

作者:[Avishek Kumar][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/execute-php-codes-functions-in-linux-commandline/
@ -0,0 +1,188 @@
如何在 Ubuntu/CentOS 7.1/Fedora 22 上安装 Plex Media Server
================================================================================

在本文中我们将会向你展示如何容易地在主流的最新 Linux 发行版上安装 Plex Media Server。在 Plex 安装成功后,你将可以使用你的集中式家庭媒体播放系统,该系统能让多个 Plex 播放器 App 共享它的媒体资源,并且该系统允许你设置你的环境,增加你的设备以及设置一个可以一起使用 Plex 的用户组。让我们首先在 Ubuntu 15.04 上开始 Plex 的安装。

### 基本的系统资源 ###

系统资源主要取决于你打算用来连接服务的设备类型和数量,所以根据我们的需求,我们将会在一个单独的服务器上使用以下系统资源。

<table width="666" style="height: 181px;">
<tbody>
<tr>
<td width="670" colspan="2"><b>Plex Media Server</b></td>
</tr>
<tr>
<td width="236"><b>基础操作系统</b></td>
<td width="425">Ubuntu 15.04 / CentOS 7.1 / Fedora 22 Work Station</td>
</tr>
<tr>
<td width="236"><b>Plex Media Server</b></td>
<td width="425">Version 0.9.12.3.1173-937aac3</td>
</tr>
<tr>
<td width="236"><b>RAM 和 CPU</b></td>
<td width="425">1 GB , 2.0 GHZ</td>
</tr>
<tr>
<td width="236"><b>硬盘</b></td>
<td width="425">30 GB</td>
</tr>
</tbody>
</table>
### 在 Ubuntu 15.04 上安装 Plex Media Server 0.9.12.3 ###

我们现在准备开始在 Ubuntu 上安装 Plex Media Server,让我们从下面的步骤开始来让 Plex 做好准备。

#### 步骤 1: 系统更新 ####

用 root 权限登录你的服务器。确保你的系统是最新的,如果不是就使用下面的命令。

    root@ubuntu-15:~# apt-get update

#### 步骤 2: 下载最新的 Plex Media Server 包 ####

创建一个新目录,用 wget 命令从 [Plex 官网](https://plex.tv/)下载为 Ubuntu 提供的 .deb 包并放入该目录中。

    root@ubuntu-15:~# cd /plex/
    root@ubuntu-15:/plex#
    root@ubuntu-15:/plex# wget https://downloads.plex.tv/plex-media-server/0.9.12.3.1173-937aac3/plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb

#### 步骤 3: 安装 Plex Media Server 的 Debian 包 ####

现在在相同的目录下执行下面的命令来开始 Debian 包的安装,然后检查 plexmediaserver 服务的状态。

    root@ubuntu-15:/plex# dpkg -i plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb

----------

    root@ubuntu-15:~# service plexmediaserver status

![Plexmediaserver Service](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-status.png)
### 在 Ubuntu 15.04 上设置 Plex Media Web 应用 ###

让我们在你的本地网络主机中打开 web 浏览器,并用你的本地主机 IP 以及端口 32400 来打开 Web 界面,并完成以下步骤来配置 Plex。

    http://172.25.10.179:32400/web
    http://localhost:32400/web

#### 步骤 1: 登录前先注册 ####

在你访问到 Plex Media Server 的 Web 界面之后,确保注册并填上你的用户名和密码来登录。

![Plex Sign In](http://blog.linoxide.com/wp-content/uploads/2015/06/PMS-Login.png)

#### 输入你的 PIN 码来保护你的 Plex Media 用户 ####

![Plex User Pin](http://blog.linoxide.com/wp-content/uploads/2015/06/333.png)

现在你已经成功地在 Plex Media 下配置了你的用户。

![Welcome To Plex](http://blog.linoxide.com/wp-content/uploads/2015/06/3333.png)

### 在设备上而不是本地服务器上打开 Plex Web 应用 ###

如我们在 Plex Media 主页看到的提示“你没有权限访问这个服务”,这说明我们跟服务器计算机不在同一个网络中。

![Plex Server Permissions](http://blog.linoxide.com/wp-content/uploads/2015/06/33.png)

现在我们需要解决这个权限问题,以便我们能通过设备访问服务器,而不是只能在服务器上访问。通过下面的步骤来完成。

### 设置 SSH 隧道使 Windows 系统可以访问到 Linux 服务器 ###

首先我们需要建立一条 SSH 隧道,以便我们访问远程服务器资源,就好像资源在本地一样。这仅仅是必要的初始设置。

如果你正在使用 Windows 作为你的本地系统,Linux 作为服务器,那么我们可以参照下图通过 Putty 来设置 SSH 隧道。(LCTT 译注:首先要在 Putty 的 Session 中用 Plex 服务器 IP 配置一个 SSH 的会话,才能进行下面的隧道转发规则配置。然后点击“Open”,输入远端服务器用户名密码,来保持 SSH 会话连接。)

![Plex SSH Tunnel](http://blog.linoxide.com/wp-content/uploads/2015/06/ssh-tunnel.png)
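如果你的本地系统是 Linux 或 macOS,则不需要 Putty,一条 `ssh` 命令即可建立等价的隧道(示意性示例,其中用户名 `user` 与服务器 IP 均为占位符,请换成你自己的):

    ssh -N -L 8888:localhost:32400 user@172.25.10.179

`-L 8888:localhost:32400` 将本地 8888 端口转发到服务器上的 32400 端口;`-N` 表示只做端口转发、不打开远程 shell。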
**一旦你完成 SSH 隧道设置:**

打开你的 Web 浏览器窗口,并在地址栏输入下面的 URL。

    http://localhost:8888/web

浏览器将会连接到 Plex 服务器,并且加载与服务器本地功能一致的 Plex Web 应用。同意服务条款并开始。

![Agree to Plex term](http://blog.linoxide.com/wp-content/uploads/2015/06/5.png)

现在一个功能齐全的 Plex Media Server 已经准备好添加新的媒体库、频道、播放列表等资源。

![PMS Settings](http://blog.linoxide.com/wp-content/uploads/2015/06/8.png)
### 在 CentOS 7.1 上安装 Plex Media Server 0.9.12.3 ###

我们将会按照上述在 Ubuntu 15.04 上安装 Plex Media Server 的步骤,来将 Plex 安装到 CentOS 7.1 上。

让我们从安装 Plex Media Server 开始。

#### 步骤 1: 安装 Plex Media Server ####

为了在 CentOS 7.1 上安装 Plex Media Server,我们需要从 Plex 官网下载 rpm 安装包。因此我们使用 wget 命令来将 rpm 包下载到一个新的目录下。

    [root@linux-tutorials ~]# cd /plex
    [root@linux-tutorials plex]# wget https://downloads.plex.tv/plex-media-server/0.9.12.3.1173-937aac3/plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm

#### 步骤 2: 安装 RPM 包 ####

在完成安装包的完整下载之后,我们将会使用 rpm 命令在相同的目录下安装这个 rpm 包。

    [root@linux-tutorials plex]# ls
    plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm
    [root@linux-tutorials plex]# rpm -i plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm

#### 步骤 3: 启动 plexmediaserver 服务 ####

我们已经成功地安装了 Plex Media Server,现在我们只需要启动它的服务,并将它设置为开机自动启动。

    [root@linux-tutorials plex]# systemctl start plexmediaserver.service
    [root@linux-tutorials plex]# systemctl enable plexmediaserver.service
    [root@linux-tutorials plex]# systemctl status plexmediaserver.service
### 在 CentOS 7.1 上设置 Plex Media Web 应用 ###

现在我们只需要重复在 Ubuntu 上设置 Plex Web 应用的所有步骤就可以了。让我们在 Web 浏览器上打开一个新窗口,并用 localhost 或者 Plex 服务器的 IP 来访问 Plex Media Web 应用。

    http://172.20.3.174:32400/web
    http://localhost:32400/web

为了获取服务的完整权限,你需要重复创建 SSH 隧道的步骤。在你用新账户注册后,我们将可以访问到服务的所有特性,并且可以添加新用户、添加新的媒体库以及根据我们的需求来设置它。

![Plex Device Centos](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-devices-centos.png)

### 在 Fedora 22 工作站上安装 Plex Media Server 0.9.12.3 ###

下载和安装 Plex Media Server 的步骤基本跟在 CentOS 7.1 上安装的步骤一致。我们只需要下载对应的 rpm 包,然后用 rpm 命令来安装它。

![PMS Installation](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-on-fedora.png)

### 在 Fedora 22 工作站上配置 Plex Media Web 应用 ###

我们在(与 Plex 服务器)相同的主机上配置 Plex Media Server,因此不需要设置 SSH 隧道。只要在你的 Fedora 22 工作站上用 Plex Media Server 的默认端口号 32400 打开 Web 浏览器,并同意 Plex 的服务条款即可。

![Plex Agreement](http://blog.linoxide.com/wp-content/uploads/2015/06/Plex-Terms.png)

*欢迎来到 Fedora 22 工作站上的 Plex Media Server*

让我们用你的 Plex 账户登录,并且开始将你喜欢的电影频道添加到媒体库、创建你的播放列表、添加你的图片以及享用更多其他的特性。

![Plex Add Libraries](http://blog.linoxide.com/wp-content/uploads/2015/06/create-library.png)

### 总结 ###

我们已经成功完成了 Plex Media Server 在主流 Linux 发行版上的安装和配置。Plex Media Server 永远都是媒体管理的最佳选择。它在跨平台上的设置是如此简单,就像我们在 Ubuntu、CentOS 以及 Fedora 上的设置一样。它简化了你组织媒体内容的工作,并将媒体内容“流”向其他计算机以及设备,以便你跟你的朋友分享媒体内容。

--------------------------------------------------------------------------------

via: http://linoxide.com/tools/install-plex-media-server-ubuntu-centos-7-1-fedora-22/

作者:[Kashif Siddique][a]
译者:[dingdongnigetou](https://github.com/dingdongnigetou)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/
@ -1,120 +0,0 @@
FSSlc translating

4 CCleaner Alternatives For Ubuntu Linux
================================================================================

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/ccleaner-10-700x393.jpg)

Back in my Windows days, [CCleaner][1] was my favorite tool for freeing up space, deleting junk files and speeding up Windows. I know I am not the only one who looked for a CCleaner for Linux when I switched from Windows. If you are looking for a CCleaner alternative in Linux, I am going to list here four such applications that you can use to clean up Ubuntu or Ubuntu-based Linux distributions. But before we see the list, let's ponder over whether Linux requires system clean up tools or not.

### Does Linux need system clean up utilities like CCleaner? ###

To get this answer, let's first see what CCleaner does. As per [How-To Geek][2]:

> CCleaner has two main uses. One, it scans for and deletes useless files, freeing up space. Two, it erases private data like your browsing history and list of most recently opened files in various programs.

So in short, it performs a system-wide clean up of temporary files, be it in your web browser or in your media player. You might know that Windows has a habit of keeping junk files in the system practically forever, but what about Linux? What does it do with the temporary files?

Unlike Windows, Linux cleans up all the temporary files (stored in /tmp) automatically. You don't have a registry in Linux, which further reduces the headache. At worst, you might have some broken packages, packages that are not needed anymore, and internet browsing history, cookies and cache.
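You can observe /tmp's scratch-area nature directly from a terminal; the sticky bit on the directory is what lets every user create files there safely. A small illustrative sketch:

```shell
# /tmp is a world-writable scratch area with the sticky bit set (drwxrwxrwt),
# which the system clears automatically (on reboot, or via systemd-tmpfiles).
tmpfile=$(mktemp /tmp/demo.XXXXXX)  # create a throwaway file the safe way
ls -ld /tmp                         # note the trailing 't' in the permissions
rm -f "$tmpfile"                    # clean up now; the OS would do it eventually
```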
### Does it mean that Linux does not need system clean up utilities? ###

- The answer is no if you can run a few commands for occasional package cleaning, manually delete browser history etc.
- The answer is yes if you don't want to run from place to place and want one tool to rule them all, where you can clean up all the suggested things in one (or a few) click(s).

If your answer is yes, let's move on to see some CCleaner-like utilities to clean up your Ubuntu Linux.
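For reference, the "few commands" mentioned above usually boil down to something like this on Debian/Ubuntu systems (an illustrative sketch; the apt-get calls need root privileges):

    sudo apt-get autoremove --purge   # remove packages nothing depends on any more
    sudo apt-get clean                # empty the package cache in /var/cache/apt/archives
    rm -rf ~/.cache/thumbnails        # clear the thumbnail cache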
### CCleaner alternatives for Ubuntu ###

Please note that I am using Ubuntu here, because some tools discussed here exist only for Ubuntu-based Linux distributions while some are available for all Linux distributions.

#### 1. BleachBit ####

![BleachBit System Cleaning Tool for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/BleachBit_Cleaning_Tool_Ubuntu.jpeg)

[BleachBit][3] is a cross-platform app available for both Windows and Linux. It has a long list of applications that it supports for cleaning, thus giving you the option of cleaning cache, cookies and log files. A quick look at its features:

- Simple GUI: check the boxes you want, preview and delete
- Multi-platform: Linux and Windows
- Free and open source
- Shred files to hide their contents and prevent data recovery
- Overwrite free disk space to hide previously deleted files
- Command line interface also available

BleachBit is available by default in Ubuntu 14.04 and 15.04. You can install it using the command below in a terminal:

    sudo apt-get install bleachbit

BleachBit has binaries available for all major Linux distributions. You can download BleachBit from the link below:

- [Download BleachBit for Linux][4]
#### 2. Sweeper ####

![Sweeper system clean up tool for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/sweeper.jpeg)

Sweeper is a system clean up utility which is a part of the [KDE SC utilities][5] module. Its main features are:

- remove web-related traces: cookies, history, cache
- remove the image thumbnail cache
- clean the application and document history

Sweeper is available by default in the Ubuntu repository. Use the command below in a terminal to install Sweeper:

    sudo apt-get install sweeper
#### 3. Ubuntu Tweak ####

![Ubuntu Tweak Tool for cleaning up Ubuntu system](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Tweak_Janitor.jpeg)

As the name suggests, [Ubuntu Tweak][6] is more of a tweaking tool than a cleaning utility. But along with tweaking things like compiz settings, panel configuration, start up program control, power management etc., Ubuntu Tweak also provides a Janitor tab that lets you:

- clean browser cache
- clean Ubuntu Software Center cache
- clean thumbnail cache
- clean apt repository cache
- clean old kernel files
- clean package configs

You can get the .deb installer for Ubuntu Tweak from the link below:

- [Download Ubuntu Tweak][7]
#### 4. GCleaner (beta) ####

![GCleaner CCleaner like tool](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/GCleaner.jpeg)

One of the third-party apps for elementary OS Freya, GCleaner aims to be the CCleaner of the GNU world. Its interface closely resembles CCleaner's. Some of the main features of GCleaner are:

- clean browser history
- clean app cache
- clean packages and configs
- clean recent document history
- empty recycle bin

At the time of writing this article, GCleaner is in heavy development. You can check the project website and get the source code to build and use GCleaner.

- [Know More About GCleaner][8]

### Your choice? ###

I have listed down the possibilities for you. I'll let you decide which tool you would use to clean Ubuntu 14.04. But I am certain that if you were looking for a CCleaner-like application, one of these four will end your search.

--------------------------------------------------------------------------------

via: http://itsfoss.com/ccleaner-alternatives-ubuntu-linux/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:https://www.piriform.com/ccleaner/download
[2]:http://www.howtogeek.com/172820/beginner-geek-what-does-ccleaner-do-and-should-you-use-it/
[3]:http://bleachbit.sourceforge.net/
[4]:http://bleachbit.sourceforge.net/download/linux
[5]:https://www.kde.org/applications/utilities/
[6]:http://ubuntu-tweak.com/
[7]:http://ubuntu-tweak.com/
[8]:https://quassy.github.io/elementary-apps/GCleaner/
@ -1,74 +0,0 @@
2015 will be the year Linux takes over the enterprise (and other predictions)
================================================================================

> Jack Wallen removes his rose-colored glasses and peers into the crystal ball to predict what 2015 has in store for Linux.

![](http://tr1.cbsistatic.com/hub/i/r/2014/12/15/f79d21fe-f1d1-416d-ba22-7e757dfcdb31/resize/620x485/52a10d26d34c3fc4201c5daa8ff277ff/linux2015hero.jpg)

The crystal ball has been vague and fuzzy for quite some time. Every pundit and voice has opined on what the upcoming year will mean to whatever topic it is they hold dear to their heart. In my case, we're talking Linux and open source.

In previous years, I'd don the rose-colored glasses and make predictions that would shine a fantastic light over the Linux landscape and proclaim 20** will be the year of Linux on the _____ (name your platform). Many times, those predictions were wrong, and Linux would wind up grinding on in the background.

This coming year, however, there are some fairly bold predictions to be made, some of which are sure things. Read on and see if you agree.

### Linux takes over big data ###

This should come as no surprise, considering the advancements Linux and open source have made over the previous few years. With the help of SuSE, Red Hat, and SAP Hana, Linux will hold powerful sway over big data in 2015. In-memory computing and live kernel patching will be the things that catapult big data into realms of uptime and reliability never before known. SuSE will lead this charge like a warrior rushing into a battle it cannot possibly lose.

This rise of Linux in the world of big data will have a serious trickle-down effect over the rest of the business world. We already know how fond enterprise businesses are of Linux and big data. What we don't know is how this relationship will alter the course of Linux with regards to the rest of the business world.

My prediction is that the success of Linux with big data will skyrocket the popularity of Linux throughout the business landscape. More contracts for SuSE and Red Hat will equate to more deployments of Linux servers that handle more tasks within the business world. This will especially apply to the cloud, where OpenStack should easily become an overwhelming leader.

As 2015 draws to a close, Linux will continue its takeover of more backend services, which may include the likes of collaboration servers, security, and much more.

### Smart machines ###

Linux is already leading the trend for making homes and autos more intelligent. With improvements in the likes of Nest (which currently uses an embedded Linux), the open source platform is poised to take over your machines. Because 2015 should see a massive rise in smart machines, it goes without saying that Linux will be a huge part of that growth. I firmly believe more homes and businesses will take advantage of such smart controls, and that will lead to more innovations (all of which will be built on Linux).

One of the issues facing Nest, however, is that it was purchased by Google. What does this mean for the thermostat controller? Will Google continue using the Linux platform -- or will it opt to scrap that in favor of Android? Of course, a switch would set the Nest platform back a bit.

The upcoming year will see Linux lead the rise in popularity of home automation. Wink, Iris, Q Station, Staples Connect, and more (similar) systems will help to bridge Linux and home users together.

### The desktop ###

The big question, as always, is one that tends to hang over the heads of the Linux community like a dark cloud. That question is in relation to the desktop. Unfortunately, my predictions here aren't nearly as positive. I believe that the year 2015 will remain quite stagnant for Linux on the desktop. That complacency will center around Ubuntu.

As much as I love Ubuntu (and the Unity desktop), this particular distribution will continue to drag the Linux desktop down. Why?

Convergence... or the lack thereof.

Canonical has been so headstrong about converging the desktop and mobile experience that they are neglecting the current state of the desktop. The last two releases of Ubuntu (one being an LTS release) have been stagnant (at best). The past year saw two of the most unexciting releases of Ubuntu that I can recall. The reason? Because the developers of Ubuntu are desperately trying to make Unity 8/Mir and the ubiquitous Ubuntu Phone a reality. The vaporware that is the Ubuntu Phone will continue on through 2015, and Unity 8/Mir may or may not be released.

When the new iteration of the Ubuntu Unity desktop is finally released, it will suffer a serious setback, because there will be so little hardware available to truly show it off. [System76][1] will sell their outstanding [Sable Touch][2], which will probably become the flagship system for Unity 8/Mir. As for the Ubuntu Phone? How many reports have you read that proclaimed "Ubuntu Phone will ship this year"?

I'm now going on the record to predict that the Ubuntu Phone will not ship in 2015. Why? Canonical created partnerships with two OEMs over a year ago. Those partnerships have yet to produce a single shippable product. The closest thing to a shippable product is the Meizu MX4 phone. The "Pro" version of that phone was supposed to have a formal launch on Sept 25. Like everything associated with the Ubuntu Phone, it didn't happen.

Unless Canonical stops putting all of its eggs in one vaporware basket, desktop Linux will take a major hit in 2015. Ubuntu needs to release something major -- something to make heads turn -- otherwise, 2015 will be just another year where we all look back and think "we could have done something special."

Outside of Ubuntu, I do believe there are some outside chances that Linux could still make some noise on the desktop. I think two distributions, in particular, will bring something rather special to the table:

- [Evolve OS][3] -- a ChromeOS-like Linux distribution
- [Quantum OS][4] -- a Linux distribution that uses Android's Material Design specs

Both of these projects are quite exciting and offer unique, user-friendly takes on the Linux desktop. This is quickly becoming a necessity in a landscape being dragged down by out-of-date design standards (think the likes of Cinnamon, Mate, XFCE, LXCE -- all desperately clinging to the past).

This is not to say that Linux on the desktop doesn't have a chance in 2015. It does. In order to grasp the reins of that chance, it will have to move beyond the past and drop the anchors that prevent it from moving out to deeper, more viable waters.

Linux stands to make more waves in 2015 than it has in a very long time. From enterprise to home automation -- the world could be the oyster that Linux uses as a springboard to the desktop and beyond.

What are your predictions for Linux and open source in 2015? Share your thoughts in the discussion thread below.

--------------------------------------------------------------------------------

via: http://www.techrepublic.com/article/2015-will-be-the-year-linux-takes-over-the-enterprise-and-other-predictions/

作者:[Jack Wallen][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.techrepublic.com/search/?a=jack+wallen
[1]:https://system76.com/
[2]:https://system76.com/desktops/sable
[3]:https://evolve-os.com/
[4]:http://quantum-os.github.io/
@ -1,120 +0,0 @@
The Curious Case of the Disappearing Distros
================================================================================

![](http://www.linuxinsider.com/ai/828896/linux-distros.jpg)

"Linux is a big game now, with billions of dollars of profit, and it's the best thing since sliced bread, but corporations are taking control, and slowly but systematically, community distros are being killed," said Google+ blogger Alessandro Ebersol. "Linux is slowly becoming just like BSD, where companies use and abuse it and give very little in return."

Well the holidays are pretty much upon us at last here in the Linux blogosphere, and there's nowhere left to hide. The next two weeks or so promise little more than a blur of forced social occasions and too-large meals, punctuated only -- for the luckier ones among us -- by occasional respite down at the Broken Windows Lounge.

Perhaps that's why Linux bloggers seized with such glee upon the good old-fashioned mystery that came up recently -- delivered in the nick of time, as if on cue.

"Why is the Number of Linux Distros Declining?" is the [question][1] posed over at Datamation, and it's just the distraction so many FOSS fans have been needing.

"Until about 2011, the number of active distributions slowly increased by a few each year," wrote author Bruce Byfield. "By contrast, the last three years have seen a 12 percent decline -- a decrease too high to be likely to be coincidence.

"So what's happening?" Byfield wondered.

It would be difficult to imagine a more thought-provoking question with which to spend the Northern hemisphere's shortest days.

### 'There Are Too Many Distros' ###

![](http://www.linuxinsider.com/images/article_images/linuxgirl_bg_pinkswirl_150x245.jpg)

"That's an easy question," began blogger [Robert Pogson][2]. "There are too many distros."

After all, "if a fanatic like me can enjoy life having sampled only a dozen distros, why have any more?" Pogson explained. "If someone has a concept different from the dozen or so most common distros, that concept can likely be demonstrated by documenting the tweaks and package-lists and, perhaps, some code."

Trying to compete with some 40,000 package repositories like Debian's, however, is "just silly," he said.

"No startup can compete with such a distro," Pogson asserted. "Why try? Just use it to do what you want and tell the world about it."

### 'I Don't Distro-Hop Anymore' ###

The major existing distros are doing a good job, so "we don't need so many derivative works," Google+ blogger Kevin O'Brien agreed.

"I know I don't 'distro-hop' anymore, and my focus is on using my computer to get work done," O'Brien added.

"If my apps run fine every day, that is all that I need," he said. "Right now I am sticking with Ubuntu LTS 14.04, and probably will until 2016."

### 'The More Distros, the Better' ###

It stands to reason that "as distros get better, there will be less reasons to roll your own," concurred [Linux Rants][3] blogger Mike Stone.

"I think the modern Linux distros cover the bases of a larger portion of the Linux-using crowd, so fewer and fewer people are starting their own distribution to compensate for something that the others aren't satisfying," he explained. "Add to that the fact that corporations are more heavily involved in the development of Linux now than they ever have been, and they're going to focus their resources."

So, the decline isn't necessarily a bad thing, as it only points to the strength of the current offerings, he asserted.

At the same time, "I do think there are some negative consequences as well," Stone added. "Variation in the distros is a way that Linux grows and evolves, and with a narrower field, we're seeing less opportunity to put new ideas out there. In my mind, the more distros, the better -- hopefully the trend reverses soon."

### 'I Hope Some Diversity Survives' ###

Indeed, "the era of novelty and experimentation is over," Google+ blogger Gonzalo Velasco C. told Linux Girl.

"Linux is 20+ years old and got professional," he noted. "There is always room for experimentation, but the top 20 are here since more than a decade ago.

"Godspeed GNU/Linux," he added. "I hope some diversity survives -- especially distros without Systemd; on the other hand, some standards are reached through consensus."
||||||
|
|
||||||
### A Question of Package Managers ###
|
|
||||||
|
|
||||||
There are two trends at work here, suggested consultant and [Slashdot][4] blogger Gerhard Mack.
|
|
||||||
|
|
||||||
First, "there are fewer reasons to start a new distro," he said. "The basic nuts and bolts are mostly done, installation is pretty easy across most distros, and it's not difficult on most hardware to get a working system without having to resort to using the command line."
|
|
||||||
|
|
||||||
The second thing is that "we are seeing a reduction of distros with inferior package managers," Mack suggested. "It is clear that .deb-based distros had fewer losses and ended up with a larger overall share."
|
|
||||||
|
|
||||||
### Survival of the Fittest ###
|
|
||||||
|
|
||||||
It's like survival of the fittest, suggested consultant Rodolfo Saenz, who is certified in Linux, IBM Tivoli Storage Manager and Microsoft Active Directory.
|
|
||||||
|
|
||||||
"I prefer to see a strong Linux with less distros," Saenz added. "Too many distros dilutes development efforts and can confuse potential future users."
|
|
||||||
|
|
||||||
Fewer distros, on the other hand, "focuses development efforts into the stronger distros and also attracts new potential users with clear choices for their needs," he said.
|
|
||||||
|
|
||||||
### All About the Money ###
|
|
||||||
|
|
||||||
Google+ blogger Alessandro Ebersol also saw survival of the fittest at play, but he took a darker view.
|
|
||||||
|
|
||||||
"Linux is a big game now, with billions of dollars of profit, and it's the best thing since sliced bread," Ebersol began. "But corporations are taking control, and slowly but systematically, community distros are being killed."
|
|
||||||
|
|
||||||
It's difficult for community distros to keep pace with the ever-changing field, and cash is a necessity, he conceded.
|
|
||||||
|
|
||||||
Still, "Linux is slowly becoming just like BSD, where companies use and abuse it and give very little in return," Ebersol said. "It saddens me, but GNU/Linux's best days were 10 years ago, circa 2002 to 2004. Now, it's the survival of the fittest -- and of course, the ones with more money will prevail."
|
|
||||||
|
|
||||||
### 'Fewer Devs Care' ###
|
|
||||||
|
|
||||||
SoylentNews blogger hairyfeet focused on today's altered computing landscape.
|
|
||||||
|
|
||||||
"The reason there are fewer distros is simple: With everybody moving to the Google Playwall of Android, and Windows 10 looking to be the next XP, fewer devs care," hairyfeet said.
|
|
||||||
|
|
||||||
"Why should they?" he went on. "The desktop wars are over, MSFT won, and the mobile wars are gonna be proprietary Google, proprietary Apple and proprietary MSFT. The money is in apps and services, and with a slow economy, there just isn't time for pulling a Taco Bell and rerolling yet another distro.
|
|
||||||
|
|
||||||
"For the few that care about Linux desktops you have Ubuntu, Mint and Cent, and that is plenty," hairyfeet said.
|
|
||||||
|
|
||||||
### 'No Less Diversity' ###
|
|
||||||
|
|
||||||
Last but not least, Chris Travers, a [blogger][5] who works on the [LedgerSMB][6] project, took an optimistic view.
|
|
||||||
|
|
||||||
"Ever since I have been around Linux, there have been a few main families -- [SuSE][7], [Red Hat][8], Debian, Gentoo, Slackware -- and a number of forks of these," Travers said. "The number of major families of distros has been declining for some time -- Mandrake and Connectiva merging, for example, Caldera disappearing -- but each of these families is ending up with fewer members as well.
|
|
||||||
|
|
||||||
"I think this is a good thing," he concluded.
|
|
||||||
|
|
||||||
"The big community distros -- Debian, Slackware, Gentoo, Fedora -- are going strong and picking up a lot of the niche users that other distros catered to," he pointed out. "Many of these distros are making it easier to come up with customized variants for niche markets. So what you have is a greater connectedness within the big distros, and no less diversity."
|
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
via: http://www.linuxinsider.com/story/The-Curious-Case-of-the-Disappearing-Distros-81518.html
|
|
||||||
|
|
||||||
作者:Katherine Noyes
|
|
||||||
译者:[译者ID](https://github.com/译者ID)
|
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
|
||||||
|
|
||||||
[1]:http://www.datamation.com/open-source/why-is-the-number-of-linux-distros-declining.html
|
|
||||||
[2]:http://mrpogson.com/
|
|
||||||
[3]:http://linuxrants.com/
|
|
||||||
[4]:http://slashdot.org/
|
|
||||||
[5]:http://ledgersmbdev.blogspot.com/
|
|
||||||
[6]:http://www.ledgersmb.org/
|
|
||||||
[7]:http://www.novell.com/linux
|
|
||||||
[8]:http://www.redhat.com/
|
|
@@ -1,36 +0,0 @@
diff -u: What's New in Kernel Development
================================================================================
**David Drysdale** wanted to add Capsicum security features to Linux after he noticed that FreeBSD already had Capsicum support. Capsicum defines fine-grained security privileges, not unlike filesystem capabilities. But as David discovered, Capsicum also has some controversy surrounding it.

Capsicum has been around for a while and was described in a USENIX paper in 2010: [http://www.cl.cam.ac.uk/research/security/capsicum/papers/2010usenix-security-capsicum-website.pdf][1].

Part of the controversy is just because of the similarity with capabilities. As Eric Biderman pointed out during the discussion, it would be possible to implement features approaching Capsicum's as an extension of capabilities, but implementing Capsicum directly would involve creating a whole new (and extensive) abstraction layer in the kernel. David argued, though, that capabilities couldn't actually be extended far enough to match Capsicum's fine-grained security controls.

Capsicum also was controversial within its own developer community. For example, as Eric described, it lacked a specification for how to revoke privileges. And, David pointed out that this was because the community couldn't agree on how that could best be done. David quoted an e-mail sent by Ben Laurie to the cl-capsicum-discuss mailing list in 2011, where Ben said, "It would require additional book-keeping to find and revoke outstanding capabilities, which requires knowing how to reach capabilities, and then whether they are derived from the capability being revoked. It also requires an authorization model for revocation. The former two points mean additional overhead in terms of data structure operations and synchronisation."

Given the ongoing controversy within the Capsicum developer community and the corresponding lack of specification of key features, and given the existence of capabilities that already perform a similar function in the kernel and the invasiveness of Capsicum patches, Eric was opposed to David implementing Capsicum in Linux.

But, given the fact that capabilities are much coarser-grained than Capsicum's security features, to the point that capabilities can't really be extended far enough to mimic Capsicum's features, and given that FreeBSD already has Capsicum implemented in its kernel, showing that it can be done and that people might want it, it seems there will remain a lot of folks interested in getting Capsicum into the Linux kernel.

Sometimes it's unclear whether there's a bug in the code or just a bug in the written specification. Henrique de Moraes Holschuh noticed that the Intel Software Developer Manual (vol. 3A, section 9.11.6) said quite clearly that microcode updates required 16-byte alignment for the P6 family of CPUs, the Pentium 4 and the Xeon. But, the code in the kernel's microcode driver didn't enforce that alignment.

In fact, Henrique's investigation uncovered the fact that some Intel chips, like the Xeon X5550 and the second-generation i5 chips, needed only 4-byte alignment in practice, and not 16. However, to conform to the documented specification, he suggested fixing the kernel code to match the spec.

Borislav Petkov objected to this. He said Henrique was looking for problems where there weren't any. He said that Henrique simply had discovered a bug in Intel's documentation, because the alignment issue clearly wasn't a problem in the real world. He suggested alerting the Intel folks to the documentation problem and moving on. As he put it, "If the processor accepts the non-16-byte-aligned update, why do you care?"

But, as H. Peter Anvin remarked, the written spec was Intel's guarantee that certain behaviors would work. If the kernel ignored the spec, it could lead to subtle bugs later on. And, Bill Davidsen said that if the kernel ignored the alignment requirement, and "if the requirement is enforced in some future revision, and updates then fail in some insane way, the vendor is justified in claiming 'I told you so'."

The end result was that Henrique sent in some patches to make the microcode driver enforce the 16-byte alignment requirement.

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/diff-u-whats-new-kernel-development-6

作者:[Zack Brown][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/user/801501
[1]:http://www.cl.cam.ac.uk/research/security/capsicum/papers/2010usenix-security-capsicum-website.pdf
@@ -1,91 +0,0 @@
Did this JavaScript break the console?
---------

# Q:

Just doing some JavaScript stuff in Google Chrome (don't want to try in other browsers for now, in case this is really doing real damage), and I'm not sure why it seemed to break my console.

```javascript
>var x = "http://www.foo.bar/q?name=%%this%%";
<undefined
>x
```

After `x` (and Enter) the console stops working... I restarted Chrome, and now when I do a simple

```javascript
console.clear();
```

It's giving me

```javascript
Console was cleared
```

And not clearing the console. Now in my scripts `console.log` calls do not register, and I'm wondering what is going on. I'm 99% sure it has to do with the double percent signs (%%).

Anyone know what I did wrong, or better yet, how to fix the console?

[A bug report for this issue has been filed here.][1]

Edit: Feeling pretty dumb, but I had Preserve log checked... That's why the console wasn't clearing.

# A:

As discussed in the comments, there are actually many different ways of constructing a string that causes this issue, and it is not necessary for there to be two percent signs in most cases.

```TXT
http://example.com/%
http://%%%
http://ab%
http://%ab
http://%zz
```

However, it's not just the presence of a percent sign that breaks the Chrome console: when we enter the following well-formed URL, the console continues to work properly and produces a clickable link.

```TXT
http://ab%20cd
```

Additionally, the strings `http://%` and `http://%%` will also print properly, since Chrome will not auto-link a URL-like string unless the [`http://`][2] is followed by at least 3 characters.

From here I hypothesized that the issue must be in the process of linking a URL string in the console, likely in the process of decoding a malformed URL. I remembered that the JavaScript function `decodeURI` will throw an exception if given a malformed URL, and since Chrome's developer tools are largely written in JavaScript, could this be the issue that is evidently crashing the developer console?
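The throwing behavior itself is easy to verify outside of Chrome's console. Here is a minimal sketch (plain JavaScript, runnable in Node.js or any browser) that runs `decodeURI` over the example strings above and reports which ones throw; the helper name `throwsOnDecode` is just for illustration.

```javascript
// Check whether decodeURI throws on a given string.
// decodeURI raises URIError when it hits a malformed percent escape.
function throwsOnDecode(url) {
  try {
    decodeURI(url);
    return false;
  } catch (e) {
    return e instanceof URIError;
  }
}

// The same malformed and well-formed URLs discussed above.
const malformed = ["http://example.com/%", "http://%%%", "http://ab%", "http://%ab", "http://%zz"];
const wellFormed = ["http://ab%20cd"];

malformed.forEach(u => console.log(u, "throws:", throwsOnDecode(u)));
wellFormed.forEach(u => console.log(u, "throws:", throwsOnDecode(u)));
```

Every string in the malformed list throws `URIError`, while the well-formed `http://ab%20cd` (a valid `%20` escape) decodes cleanly -- matching exactly which inputs break the console and which do not.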
To test this theory, I ran Chrome from the command line, to see if any errors were being logged.

Indeed, the same error you would see if you ran `decodeURI` on a malformed URL (i.e. `decodeURI('http://example.com/%')`) was being printed to the console:

> [4810:1287:0107/164725:ERROR:CONSOLE(683)] "Uncaught URIError: URI malformed", source: chrome-devtools://devtools/bundled/devtools.js (683)

So, I opened the URL `chrome-devtools://devtools/bundled/devtools.js` in Chrome, and on line 683, I found the following.

```javascript
{var parsedURL=new WebInspector.ParsedURL(decodeURI(url));var origin;var folderPath;var name;if(parsedURL.isValid){origin=parsedURL.scheme+"://"+parsedURL.host;if(parsedURL.port)
```

As we can see, `decodeURI(url)` is being called on the URL without any error checking, thus throwing the exception and crashing the developer console.
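For illustration only -- this is not Chromium's actual patch -- here is a sketch of the kind of guard that would prevent the crash: wrapping the `decodeURI` call so malformed input falls back to the raw string instead of throwing an uncaught `URIError`.

```javascript
// Sketch of defensive decoding (hypothetical helper, not Chromium code):
// return the decoded URL, or the raw string if it is malformed.
function safeDecodeURI(url) {
  try {
    return decodeURI(url);
  } catch (e) {
    if (e instanceof URIError) {
      return url; // leave the malformed URL as-is rather than crash
    }
    throw e; // anything else is unexpected; re-throw it
  }
}

console.log(safeDecodeURI("http://ab%20cd"));       // valid escape decodes
console.log(safeDecodeURI("http://example.com/%")); // malformed input passes through
```

With a guard like this, a malformed URL would simply render undecoded instead of taking down the console.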
A real fix for this issue will come from adding error handling to the Chrome console code, but in the meantime, one way to avoid the issue would be to wrap the string in a complex data type like an array to prevent parsing when logging.

```javascript
var x = "http://example.com/%";
console.log([x]);
```

Thankfully, the broken console issue does not persist once the tab is closed, and will not affect other tabs.

### Update:

Apparently, the issue can persist across tabs and restarts if Preserve Log is checked. Uncheck this if you are having this issue.

via: [stackoverflow](http://stackoverflow.com/questions/27828804/did-this-javascript-break-the-console/27830948#27830948)

作者:[Alexander O'Mara][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://stackoverflow.com/users/3155639/alexander-omara
[1]:https://code.google.com/p/chromium/issues/detail?id=446975
[2]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURI
@@ -1,3 +1,5 @@
FSSlc Translating

7 communities driving open source development
================================================================================
Not so long ago, the open source model was the rebellious kid on the block, viewed with suspicion by established industry players. Today, open initiatives and foundations are flourishing with long lists of vendor committers who see the model as a key to innovation.
@@ -83,4 +85,4 @@ via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities
[4]:http://www.openstack.org/foundation/
[5]:http://www.opendaylight.org/
[6]:http://www.apache.org/
[7]:http://www.opencompute.org/
@@ -1,66 +0,0 @@
Revealed: The best and worst of Docker
================================================================================
![](http://images.techhive.com/images/article/2015/01/best_worst_places_to_work-100564193-primary.idge.jpg)
Credit: [Shutterstock][1]

> Docker experts talk about the good, the bad, and the ugly of the ubiquitous application container system

No question about it: Docker's app container system has made its mark and become a staple in many IT environments. With its accelerating adoption, it's bound to stick around for a good long time.

But there's no end to the debate about what Docker's best for, where it falls short, or how to most sensibly move it forward without alienating its existing users or damaging its utility. Here, we've turned to a few of the folks who have made Docker their business to get their takes on Docker's good, bad, and ugly sides.

### The good ###

One hardly expects Steve Francia, chief of operations of the Docker open source project, to speak of Docker in anything less than glowing terms. When asked by email about Docker's best attributes, he didn't disappoint: "I think the best thing about Docker is that it enables people, enables developers, enables users to very easily run an application anywhere," he said. "It's almost like the Holy Grail of development in that you can run an application on your desktop, and the exact same application without any changes can run on the server. That's never been done before."

Alexis Richardson of [Weaveworks][2], a virtual networking product, praised Docker for enabling simplicity. "Docker offers immense potential to radically simplify and speed up how software gets built," he replied in an email. "This is why it has delivered record-breaking initial mind share and traction."

Bob Quillin, CEO of [StackEngine][3], which makes Docker management and automation solutions, noted in an email that Docker (the company) has done a fine job of maintaining Docker's (the product) appeal to its audience. "Docker has been best at delivering strong developer support and focused investment in its product," he wrote. "Clearly, they know they have to keep the momentum, and they are doing that by putting intense effort into product functionality." He also mentioned that Docker's commitment to open source has accelerated adoption by "[allowing] people to build around their features as they are being built."

Though containerization itself isn't new, as Rob Markovich of IT monitoring-service makers [Moogsoft][4] pointed out, Docker's implementation makes it new. "Docker is considered a next-generation virtualization technology given its more modern, lightweight form [of containerization]," he wrote in an email. "[It] brings an opportunity for an order-of-magnitude leap forward for software development teams seeking to deploy code faster."

### The bad ###

What's less appealing about Docker boils down to two issues: the complexity of using the product, and the direction of the company behind it.

Samir Ghosh, CEO of enterprise PaaS outfit [WaveMaker][5], gave Docker a thumbs-up for simplifying the complex scripting typically needed for continuous delivery. That said, he added, "That doesn't mean Docker is simple. Implementing Docker is complicated. There are a lot of supporting technologies needed for things like container management, orchestration, app stack packaging, intercontainer networking, data snapshots, and so on."

Ghosh noted that the ones who feel that pain most are enterprises that want to leverage Docker for continuous delivery, but "it's even more complicated for enterprises that have diverse workloads, various app stacks, heterogenous infrastructures, and limited resources, not to mention unique IT needs for visibility, control and security."

Complexity also becomes an issue in troubleshooting and analysis, and Markovich cited the fact that Docker provides application abstraction as the reason why. "It is nearly impossible to relate problems with application performance running on Docker to the performance of the underlying infrastructure domains," he said in an email. "IT teams are going to need visibility -- a new class of monitoring and analysis tools that can correlate across and relate how everything is working up and down the Docker stack, from the applications down to the private or public infrastructure."

Quillin is most concerned about Docker's direction vis-à-vis its partner community: "Where will Docker make money, and where will their partners? If [Docker] wants to be the next VMware, it will need to take a page out of VMware's playbook in how to build and support a thriving partner ecosystem.

"Additionally, to drive broader adoption, especially in the enterprise, Docker needs to start acting like a market leader by releasing more fully formed capabilities that organizations can count on, versus announcements of features with 'some assembly required,' that don't exist yet, or that require you to 'submit a pull request' to fix it yourself."

Francia pointed to Docker's rapid ascent for creating its own difficulties. "[Docker] caught on so quickly that there's definitely places that we're focused on to add some features that a lot of users are looking forward to."

One such feature, he noted, was having a GUI. "Right now to use Docker," he said, "you have to be comfortable with the command line. There's no visual interface to using Docker. Right now it's all command line-based. And we know if we want to really be as successful as we think we can be, we need to be more approachable and a lot of people when they see a command line, it's a bit intimidating for a lot of users."

### The future ###

In that last respect, Docker recently started to make advances. Last week it [bought the startup Kitematic][6], whose product gave Docker a convenient GUI on Mac OS X (and will eventually do the same for Windows). Another acqui-hire, [SocketPlane][7], is being spun in to work on Docker's networking.

What remains to be seen is whether Docker's proposed solutions to its problems will be adopted, or whether another party -- say, [Red Hat][8] -- will provide a more immediately useful solution for enterprise customers who can't wait around for the chips to stop falling.

"Good technology is hard and takes time to build," said Richardson. "The big risk is that expectations spin wildly out of control and customers are disappointed."

--------------------------------------------------------------------------------

via: http://www.infoworld.com/article/2896895/application-virtualization/best-and-worst-about-docker.html

作者:[Serdar Yegulalp][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.infoworld.com/author/Serdar-Yegulalp/
[1]:http://shutterstock.com/
[2]:http://weave.works/
[3]:http://stackengine.com/
[4]:http://www.moogsoft.com/
[5]:http://www.wavemaker.com/
[6]:http://www.infoworld.com/article/2896099/application-virtualization/dockers-new-acquisition-does-containers-on-the-desktop.html
[7]:http://www.infoworld.com/article/2892916/application-virtualization/docker-snaps-up-socketplane-to-fix-networking-flaws.html
[8]:http://www.infoworld.com/article/2895804/application-virtualization/red-hat-wants-to-do-for-containers-what-its-done-for-linux.html
@@ -1,3 +1,4 @@
zpl1025
Interviews: Linus Torvalds Answers Your Question
================================================================================
Last Thursday you had a chance to [ask Linus Torvalds][1] about programming, hardware, and all things Linux. You can read his answers to those questions below. If you'd like to see what he had to say the last time we sat down with him, [you can do so here][2].
@@ -181,4 +182,4 @@ via: http://linux.slashdot.org/story/15/06/30/0058243/interviews-linus-torvalds-
[a]:samzenpus@slashdot.org
[1]:http://interviews.slashdot.org/story/15/06/24/1718247/interview-ask-linus-torvalds-a-question
[2]:http://meta.slashdot.org/story/12/10/11/0030249/linus-torvalds-answers-your-questions
[3]:https://lwn.net/Articles/604695/
@ -1,80 +0,0 @@
|
|||||||
Translating by ZTinoZ
|
|
||||||
7 command line tools for monitoring your Linux system
|
|
||||||
================================================================================
|
|
||||||
**Here is a selection of basic command line tools that will make your exploration and optimization in Linux easier. **
|
|
||||||
|
|
||||||
![Image courtesy Meltys-stock](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-1-100591899-orig.png)
|
|
||||||
|
|
||||||
### Dive on in ###
|
|
||||||
|
|
||||||
One of the great things about Linux is how deeply you can dive into the system to explore how it works and to look for opportunities to fine tune performance or diagnose problems. Here is a selection of basic command line tools that will make your exploration and optimization easier. Most of these commands are already built into your Linux system, but in case they aren’t, just Google “install”, the command name, and the name of your distro and you’ll find which package needs installing (note that some commands are bundled with other commands in a package that has a different name from the one you’re looking for). If you have any other tools you use, let me know for our next Linux Tools roundup.
|
|
||||||
|
|
||||||
![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-2-100591901-orig.png)
|
|
||||||
|
|
||||||
### How we did it ###
|
|
||||||
|
|
||||||
FYI: The screenshots in this collection were created on [Debian Linux 8.1][1] (“Jessie”) running in a virtual machine under [Oracle VirtualBox 4.3.28][2] under [OS X 10.10.3][3] (“Yosemite”). See my next slideshow “[How to install Debian Linux in a VirtualBox VM][4]” for a tutorial on how to build your own Debian VM.
|
|
||||||
|
|
||||||
![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-3-100591902-orig.png)
|
|
||||||
|
|
||||||
### Top command ###
|
|
||||||
|
|
||||||
One of the simpler Linux system monitoring tools, the **top command** comes with pretty much every flavor of Linux. This is the default display, but pressing the “z” key switches the display to color. Other hot keys and command line switches control things such as the display of summary and memory information (the second through fourth lines), sorting the list according to various criteria, killing tasks, and so on (you can find the complete list at [here][5]).
|
|
||||||
|
|
||||||
![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-4-100591904-orig.png)
|
|
||||||
|
|
||||||
### htop ###
|
|
||||||
|
|
||||||
Htop is a more sophisticated alternative to top. Wikipedia: “Users often deploy htop in cases where Unix top does not provide enough information about the systems processes, for example when trying to find minor memory leaks in applications. Htop is also popularly used interactively as a system monitor. Compared to top, it provides a more convenient, cursor-controlled interface for sending signals to processes.” (For more detail go [here][6].)
|
|
||||||
|
|
||||||
![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-5-100591903-orig.png)
|
|
||||||
|
|
||||||
### Vmstat ###
|
|
||||||
|
|
||||||
Vmstat is a simpler tool for monitoring your Linux system performance statistics but that makes it highly suitable for use in shell scripts. Fire up your regex-fu and you can do some amazing things with vmstat and cron jobs. “The first report produced gives averages since the last reboot. Additional reports give information on a sampling period of length delay. The process and memory reports are instantaneous in either case” (go [here][7] for more info.).
![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-6-100591905-orig.png)

### ps ###
The ps command shows a list of running processes. In this case, I’ve used the “-e” switch to show everything, that is, all processes running (I’ve scrolled back to the top of the output, otherwise the column names wouldn’t be visible). This command has a lot of switches that allow you to format the output as needed. Add a little of the aforementioned regex-fu and you’ve got a powerful tool. Go [here][8] for the full details.
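A few hedged examples of the formatting switches mentioned above, using procps syntax:

```shell
# Every process, full format:
ps -ef | head

# Choose columns and sort order yourself (procps syntax):
ps -eo pid,comm,%mem,%cpu --sort=-%mem | head -6

# A touch of regex-fu: count command names ending in "d",
# a rough stand-in for "how many daemons are running?".
ps -eo comm= | grep -c 'd$' || true  # || true: grep -c exits 1 when the count is 0
```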
![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-7-100591906-orig.png)

### Pstree ###
Pstree “shows running processes as a tree. The tree is rooted at either pid or init if pid is omitted. If a user name is specified, all process trees rooted at processes owned by that user are shown.” This is a really useful tool, as the tree helps you sort out which process is dependent on which process (go [here][9]).
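A sketch of the rooting behavior quoted above; pstree comes from the psmisc package, so the commands are guarded in case it isn't installed:

```shell
# pstree comes from the psmisc package and may need installing first.
if command -v pstree >/dev/null 2>&1; then
    pstree -p | head -20    # whole tree, PIDs attached
    pstree -p 1 | head -5   # tree rooted at PID 1 (init/systemd)
    pstree root | head -5   # trees rooted at root-owned processes
fi
```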
![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-8-100591907-orig.png)

### pmap ###
Understanding just how an app uses memory is often crucial in debugging, and the pmap command produces just such information when given a process ID (PID). The screenshot shows the medium-weight output generated by using the “-x” switch. You can get pmap to produce even more detailed information using the “-X” switch, but you’ll need a much wider terminal window.
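A quick way to try the "-x" output without hunting for a PID is to inspect the current shell itself:

```shell
# Extended ("-x") memory map of the current shell; $$ is its PID.
pmap -x $$

# The final line totals the mapped memory, handy for quick checks.
pmap -x $$ | tail -1
```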
![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-9-100591900-orig.png)

### iostat ###
A crucial factor in your Linux system’s performance is processor and storage usage, which are what the iostat command reports on. As with the ps command, iostat has loads of switches that allow you to select the output format you need as well as sample performance over a time period and then repeat that sampling a number of times before reporting. See [here][10].
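A sketch of the sampling-and-repeat behavior described above; iostat ships in the sysstat package (the package name is an assumption for your distribution), so the commands are guarded:

```shell
# iostat ships in the sysstat package:
#   Debian/Ubuntu: sudo apt-get install sysstat
if command -v iostat >/dev/null 2>&1; then
    # Three extended-stats reports at two-second intervals;
    # the first report averages since boot.
    iostat -x 2 3

    # CPU utilization only:
    iostat -c
fi
```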
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/2937219/linux/7-command-line-tools-for-monitoring-your-linux-system.html
作者:[Mark Gibbs][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.networkworld.com/author/Mark-Gibbs/
[1]:https://www.debian.org/releases/stable/
[2]:https://www.virtualbox.org/
[3]:http://www.apple.com/osx/
[4]:http://www.networkworld.com/article/2937148/how-to-install-debian-linux-8-1-in-a-virtualbox-vm
[5]:http://linux.die.net/man/1/top
[6]:http://linux.die.net/man/1/htop
[7]:http://linuxcommand.org/man_pages/vmstat8.html
[8]:http://linux.die.net/man/1/ps
[9]:http://linux.die.net/man/1/pstree
[10]:http://linux.die.net/man/1/iostat
alim0x translating
The history of Android
================================================================================
![Gingerbread's new keyboard, text selection UI, overscroll effect, and new checkboxes.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/3kb-high-over-check.png)
Gingerbread's new keyboard, text selection UI, overscroll effect, and new checkboxes.
Photo by Ron Amadeo
One of the most important additions to Android 2.3 was the system-wide text selection interface, which you can see in the Google search bar in the left screenshot. Long pressing a word would highlight it in orange and make draggable handles appear on either side of the highlight. You could then adjust the highlight using the handles and long press on the highlight to bring up options for cut, copy, and paste. Previous methods used tiny controls that relied on a trackball or D-Pad, but with this first finger-driven text selection method, the Nexus S didn’t need the extra hardware controls.
The right set of images shows the new checkbox design and overscroll effect. The Froyo checkbox worked like a light bulb—it would show a green check when on and a gray check when off. Gingerbread now displayed an empty box when an option was turned off—which made much more sense. Gingerbread was the first version to have an overscroll effect. An orange glow appeared when you hit the end of a list and grew larger as you pulled more against the dead end. Bounce scrolling would probably have made the most sense, but that was patented by Apple.

![The new dialer and dialog box design.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/dialdialog.png)
The new dialer and dialog box design.
Photo by Ron Amadeo
The dialer received a little more love in Gingerbread. It became darker, and Google finally addressed the combination of sharp corners, rounded corners, and complete circles that it had going on. Now every corner was a sharp right angle. All the dial pad buttons were replaced with a weird underline, like some faint leftovers of what used to be a button. You were never really sure if you were supposed to see a button or not—our brains wanted to imagine the rest of the square.
The Wi-Fi network dialog is pictured to show off the rest of the system-wide changes. All the dialog box titles were changed from gray to black, every dialog box, dropdown, and button corner was sharpened up, and everything was a little bit darker. All these system-wide changes made all of Gingerbread look a lot less bubbly and more mature. The "all black everything" look wasn't necessarily the most welcoming color palette, but it certainly looked better than Android's previous gray-and-beige color scheme.

![The new Market, which added a massive green header.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/4market.png)
The new Market, which added a massive green header.
Photo by Ron Amadeo
While not exclusive to Gingerbread, with the launch of the new OS came "Android Market 2.0." Most of the list design was the same, but Google covered the top third of the screen with a massive green banner that was used for featured apps and navigation. The primary design inspiration here was probably the green Android mascot—the color is a perfect match. At a time when the OS was getting a darker design, the neon green banner and white list made the Market a lot brighter.
However, the same green background image was used across phones, which meant on lower resolution devices, the green banner was even bigger. Users complained so much about the wasted screen space that later updates would make the green banner scroll up with the content. At the time, horizontal mode was even worse—it would fill the left half of the screen with the static green banner.

![An app page from the Market showing the collapsible text section, the "My apps" section, and Google Books screenshots.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/5rest-of-market-and-books.png)
An app page from the Market showing the collapsible text section, the "My apps" section, and Google Books screenshots.
Photo by Ron Amadeo
App pages were redesigned with collapsible sections. Rather than having to scroll through a thousand-line description, text boxes were truncated to only the first few lines. After that, a "more" button needed to be tapped. This allowed users to easily scroll through the list and find things like pictures and "contact developer," which would usually be farther down the page.
The other parts of the Android Market wisely toned down the green monster. The rest of the app was mostly just the old Market with new green navigational elements. Any of the old tabbed interfaces were upgraded to swipeable tabs. In the right Gingerbread image, swiping right-to-left would switch from "Top Paid" to "Top Free," which made navigation a little easier.

Gingerbread came with the first of what would become the Google Play content stores: Google Books. The app was a basic book reader that would display books in a simple thumbnail grid. The "Get eBooks" link at the top of the screen opened the browser and loaded a mobile website where you could buy books.
Google Books and the "My apps" page of the Market were both examples of early precursors to the Action Bar. Just like the current guidelines, a stickied top bar featured the app icon, the name of the screen within the app, and a few controls. The layout of these two apps was actually pretty modern looking.
![The new Google Maps.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps1.png)
The new Google Maps.
Photo by Ron Amadeo
Google Maps (which, again, at this point was on the Android Market and not exclusive to this version of Android) now featured another action bar precursor in the form of a top-aligned control bar. This version of an early action bar featured a lot of experimenting. The majority of the bar was taken up with a search box, but you could never type into the bar. Tapping on it would open the old search interface from Android 1.x, with a totally different bar design and bubbly buttons. This 2.3 bar wasn't anything more than a really big search button.
![The new business pages, which switched from black to white.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps2-Im-hungry.png)
The new business pages, which switched from black to white.
Photo by Ron Amadeo
Along with Places' new top billing in the app drawer came a redesigned interface. Unlike the rest of Gingerbread, this switched from black to white. Google also kept the old buttons with rounded corners. This new version of Maps helpfully displayed the hours of operation of a business, and it offered advanced search options like places that were currently open or thresholds for ratings and price. Reviews were brought to the surface, allowing a user to easily get a feel for the current business. It was now also possible to "star" a location from the search results and save it for later.
![The new YouTube design, which, amazingly, sort of matches the old Maps business page design.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/youtube22.png)
The new YouTube design, which, amazingly, sort of matches the old Maps business page design.
Photo by Ron Amadeo
The YouTube app seemed completely separate from the rest of Android, as if whoever designed this had no idea what Gingerbread would end up looking like. Highlights were red and gray instead of green and orange, and rather than the flat black of Gingerbread, YouTube featured bubbly buttons, tabs, and bars with rounded corners and heavy gradients. The new app did get a few things right, though. All the tabs could be horizontally swiped through, and the app finally added a vertical viewing mode for videos. Android seemed like such an uncoordinated effort at this stage. It’s like someone told the YouTube team “make it black," and that was all the direction they were given. The only Android entity this seemed to match was the old Google Maps business page design.
Despite the weird design choices, the YouTube app had the best approximation yet of an action bar. Besides the bar at the top with an app logo and a few buttons, the rightmost button was labeled "more" and would bring up options that didn't fit in the bar. Today, this is called the "Overflow" button, and it's a standard UI piece.
![The new Google Talk, which supported voice and video calls, and the new Voice Actions interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/talkvoice.png)
The new Google Talk, which supported voice and video calls, and the new Voice Actions interface.
Photo by Ron Amadeo
One last update for Gingerbread came with Android 2.3.4, which brought a new version of Google Talk. Unlike the Nexus One, the Nexus S had a front-facing camera—and the redesigned version of Google Talk had voice and video calling. The colored indicators on the right of the friends list were used to indicate not only presence, but voice and video availability. A dot was text only, a microphone was text or voice, and a camera was text, voice, or video. If available, tapping on a voice or video icon would immediately start a call with that person.
Gingerbread is the oldest version of Android still supported by Google. Firing up a Gingerbread device and letting it sit for a few minutes will result in a ton of upgrades. Gingerbread will pull down Google Play Services, resulting in a ton of new API support, and it will upgrade to the very newest version of the Play Store. Open the Play Store and hit the update button, and just about every single Google app will be replaced with a modern version. We tried to keep this article authentic to the time Gingerbread was released, but a real user stuck on Gingerbread today will be treated to a flood of anachronisms.
Gingerbread is still supported because there are a good number of users still running the now ancient OS. Gingerbread's staying power is due to the extremely low system requirements, making it the go-to choice for slow, cheap phones. The next few versions of Android were much more exclusive and/or demanding on hardware. For instance, Android 3.0 Honeycomb is not open source, meaning it could only be ported to a device with Google's cooperation. It was also only for tablets, making Gingerbread the newest phone version of Android for a very long time. 4.0 Ice Cream Sandwich was the next phone release, but it significantly raised Android’s systems requirements, cutting off the low-end of the market. Google is hoping to get cheaper phones back on the update track with 4.4 KitKat, which brings the system requirements back down to 512MB of RAM. The passage of time helps, too—by now, even cheap SoCs have caught up to the demands of a 4.0-era version of Android.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/15/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
The history of Android
================================================================================
### Android 3.0 Honeycomb—tablets and a design renaissance ###
Despite all the changes made in Gingerbread, Android was still the ugly duckling of the mobile world. Compared to the iPhone, its level of polish and design just didn't hold up. On the other hand, one of the few operating systems that could stand up to iOS's aesthetic acumen was Palm's WebOS. WebOS was a cohesive, well-designed OS with several innovative features, and it was supposed to save the company from the relentless march of the iPhone.
A year after launch though, Palm was running out of cash. The company never saw the iPhone coming, and by the time WebOS was ready, it was too late. In April 2010, Hewlett-Packard purchased Palm for $1 billion. While HP bought a product with a great user interface, the lead designer of that interface, a man by the name of Matias Duarte, did not join HP. In May 2010, just before HP took control of Palm, Duarte jumped ship to Google. HP bought the bread, but Google hired the baker.
![The first Honeycomb device, the Motorola Xoom 10-inch tablet.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Motorola-XOOM-MZ604.jpg)
The first Honeycomb device, the Motorola Xoom 10-inch tablet.
At Google, Duarte was named the Director of Android User Experience. This was the first time someone was publicly in charge of the way Android looked. While Matias landed at Google during the launch of Android 2.2, the first version he truly impacted was Android 3.0, Honeycomb, released in February 2011.
By Google's own admission, Honeycomb was rushed out the door. Ten months prior, Apple modernized the tablet with the launch of the iPad, and Google wanted to respond as quickly as possible. Honeycomb was that response, a version of Android that ran on 10-inch touchscreens. Sadly, getting this OS to market was such a priority that corners were cut to save time.
The new OS was for tablets only—phones would not be updated to Honeycomb, which spared Google the difficult problem of making the OS work on wildly different screen sizes. But with phone support off the table, a Honeycomb source drop never happened. Previous Android versions were open source, enabling the hacking community to port the latest version to all sorts of different devices. Google didn't want app developers to feel pressured to support half-broken Honeycomb phone ports, so Google kept the source to itself and strictly controlled what could and couldn't have Honeycomb. The rushed development led to problems with the software, too. At launch, Honeycomb wasn't particularly stable, SD cards didn't work, and Adobe Flash—one of Android's big differentiators—wasn't supported.
One of the few devices that could have Honeycomb was [the Motorola Xoom][1], the flagship product for the new OS. The Xoom was a 10-inch, 16:9 tablet with 1GB of RAM and a dual-core, 1GHz Nvidia Tegra 2 processor. Despite being the launch device of a new version of Android where Google controlled the updates directly, the device wasn't called a "Nexus." The most likely reason for this was that Google didn't feel confident enough in the product to call it a flagship.
Nevertheless, Honeycomb was a major milestone for Android. With an experienced designer in charge, the entire Android user interface was rebuilt, and most of the erratic app designs were brought to heel. Android's default apps finally looked like pieces of a cohesive whole with similar layouts and theming across the board. Redesigning Android would be a multi-version project though—Honeycomb was just the start of getting Android whipped into shape. This first draft laid the groundwork for how future versions of Android would function, but it also used a heavy-handed sci-fi theme that Google would spend the next few versions toning down.
![The home screens of Honeycomb and Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/homeskreen.png)
The home screens of Honeycomb and Gingerbread.
Photo by Ron Amadeo
While Gingerbread only experimented with a sci-fi look in its photon wallpaper, Honeycomb went full sci-fi with a Tron-inspired theme for the entire OS. Everything was made black, and if you needed a contrasting color, you could choose from a few different shades of blue. Everything that was made blue was also given a "glow" effect, making the entire OS look like it was powered by alien technology. The default background was a holographic grid of hexagons (a Honeycomb! get it?) that looked like it was the floor of a teleport pad on a spaceship.
The most important change of Honeycomb was the addition of the system bar. The Motorola Xoom had no hardware buttons other than power and volume, so a large black bar was added along the bottom of the screen that housed the navigational buttons. This meant the default Android interface no longer needed specialized hardware buttons. Previously, Android couldn't function without hardware Back, Menu, and Home keys. Now, with the software supplying all the necessary buttons, anything with a touch screen was able to run Android.
The biggest benefit of the new software buttons was flexibility. The new app guidelines stated that apps should no longer require a hardware menu button, but for those that do, Honeycomb detects this and adds a fourth button to the system bar that allows these apps to work. The other flexibility attribute of software buttons was that they could change orientation with the device. Other than the power and volume buttons, the Xoom's orientation really wasn't important. The system bar always sat on the "bottom" of the device from the user's perspective. The trade off was that a big bar along the bottom of the screen definitely sucked up some screen real estate. To save space on 10-inch tablets, the status bar was merged into the system bar. All the usual status duties lived on the right side—there was battery and connectivity status, the time, and notification icons.
The whole layout of the home screen changed, placing UI pieces in each of the four corners of the device. The bottom left housed the previously discussed navigational buttons, the bottom right was for status and notifications, the top left displayed text search and voice search, and the top right had buttons for the app drawer and adding widgets.
![The new lock screen and Recent Apps interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/lockscreen-and-recent.png)
The new lock screen and Recent Apps interface.
Photo by Ron Amadeo
(Since the Xoom was a [heavy] 10-inch, 16:9 tablet, it was primarily meant to be used horizontally. Most apps also supported portrait mode, though, so for the sake of our formatting, we're using mostly portrait mode shots. Just keep in mind the Honeycomb shots come from a 10-inch tablet, and the Gingerbread shots come from a 3.7-inch phone. The densities of information are not directly comparable.)
The unlock screen—after switching from a menu button to a rotary dial to slide-to-unlock—removed any required accuracy from the unlock process by switching to a circle unlock. Swiping from the center outward in any direction would unlock the device. Like the rotary unlock, this was much nicer ergonomically than forcing your finger to follow a perfectly straight path.
The strip of thumbnails in the second picture was the interface brought up by the newly christened "Recent Apps" button, now living next to Back and Home. Rather than the group of icons brought up in Gingerbread by long-pressing on the home button, Honeycomb showed app icons and thumbnails on the screen, which made it a lot easier to switch between tasks. Recent Apps was clearly inspired by Duarte's "card" multitasking in WebOS, which used full-screen thumbnails to switch tasks. This design offered the same ease-of-recognition as WebOS's task switcher, but the smaller thumbnails allowed more apps to fit on screen at once.
While this implementation of Recent Apps may look like what you get on a current device, this version was very early. The list didn't scroll, meaning it showed seven apps in portrait mode and only five apps in horizontal mode. Anything beyond that was bumped off the list. You also couldn't swipe away thumbnails to close apps—this was just a static list.
Here we see the Tron influence in full effect: the thumbnails had blue outlines and an eerie glow around them. This screenshot also shows a benefit of software buttons—context. The back button closed the list of thumbnails, so instead of the normal arrow, this pointed down.
----------

![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/16/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://arstechnica.com/gadgets/2011/03/ars-reviews-the-motorola-xoom/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
The history of Android
================================================================================
![The Honeycomb app lineup lost a ton of apps. This also shows the notification panel and the new quick settings.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/apps-and-notifications2.png)
The Honeycomb app lineup lost a ton of apps. This also shows the notification panel and the new quick settings.
Photo by Ron Amadeo
The default app icons were slashed from 32 to 25, and two of those were third-party games. Since Honeycomb was not for phones and Google wanted the default apps to all be tablet-optimized, a lot of apps didn't make the cut. We lost the Amazon MP3 store, Car Home, Facebook, Google Goggles, Messaging, News and Weather, Phone, Twitter, Google Voice, and Voice Dialer. Google was quietly building a music service that would launch soon, so the Amazon MP3 store needed to go anyway. Car Home, Messaging, and Phone made little sense on a non-phone device, Facebook and Twitter still don't have tablet Android apps, and Goggles, News and Weather, and Voice Dialer were barely supported applications that most people wouldn't miss.
Almost every app icon was new. Just like the switch from the G1 to the Motorola Droid, the biggest impetus for change was probably the bump in resolution. The Nexus S had an 800×480 display, and Gingerbread came with art assets to match. The Xoom used a whopping 1280×800 10-inch display, which meant nearly every piece of art had to go. But again, this time a real designer was in charge, and things were a lot more cohesive. Honeycomb marked the switch from a vertically scrolling app drawer to paginated horizontal drawer. This change made sense on a horizontal device, but on phones it was still much faster to navigate the app drawer with a flingable, vertical list.
The second Honeycomb screenshot shows the new notification panel. The gray and black Gingerbread design was tossed for another straight-black panel that gave off a blue glow. At the top was a block showing the time, date, connection status, battery, and a shortcut to the notification quick settings, and below that were the actual notifications. Non-permanent notifications could now be dismissed by tapping on an "X" on the right side of the notification. Honeycomb was the first version to enable controls within a notification. The first (and at the launch of Honeycomb, only) app to take advantage of this was the new Google Music app, which placed previous, play/pause, and next buttons in its notification. These new controls could be accessed from any app and made controlling music a breeze.
!["Add to home screen" was given a zoomed-out interface for easy organizing. The search interface split auto suggest and universal search into different panes.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/widgetkeyboard.png)
"Add to home screen" was given a zoomed-out interface for easy organizing. The search interface split auto suggest and universal search into different panes.
Photo by Ron Amadeo
Pressing the plus button in the top right corner of the home screen or long pressing on the background would open the new home screen configuration interface. Honeycomb showed a zoomed-out view of all the home screens along the top of the screen, and it filled the bottom half of the screen with a tabbed drawer containing widgets and shortcuts. Items could be dragged out of the bottom drawer and into any of the five home screens. Gingerbread would just show a list of text, but Honeycomb showed full thumbnail previews of the widgets. This gave you a much better idea of what a widget would look like instead of an app-name-only description like "calendar."
The larger screen of the Motorola Xoom allowed the keyboard to take on a more PC-style layout, with keys like backspace, enter, shift, and tab put in the traditional locations. The keyboard took on a blueish tint and gained even more spacing between the keys. Google also added a dedicated smiley-face button. :-)
![Gmail on Honeycomb versus Gmail on Gingerbread with the menu open. Buttons were placed on the main screen for easier discovery.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/thebasics.png)
Gmail on Honeycomb versus Gmail on Gingerbread with the menu open. Buttons were placed on the main screen for easier discovery.
Photo by Ron Amadeo
Gmail demonstrated all the new UI concepts in Honeycomb. Android 3.0 did away with hiding all the controls behind a menu button. There was now a strip of icons along the top of the screen called the Action Bar, which lifted many useful controls to the main screen where users could see them. Gmail showed buttons for search, compose, and refresh, and it put less useful controls like settings, help, and feedback in a dropdown called the "overflow" button. Tapping checkboxes or selecting text would cause the entire action bar to change to icons relating to those actions—for instance, selecting text would bring up cut, copy, and select all buttons.
The app icon displayed in the top left corner doubled as a navigation button called "Up." While "Back" worked similarly to a browser back button, navigating to previously visited screens, "Up" would navigate up the app hierarchy. For instance, if you were in the Android Market, pressed the "Email developer" button, and Gmail opened, "Back" would take you back to the Android Market, but "Up" would take you to the Gmail inbox. "Back" might close the current app, but "Up" never would. Apps could control the "Back" button, and they usually reprogrammed it to replicate the "Up" functionality. In practice, there was rarely a difference between the two buttons.
Honeycomb also introduced the "Fragments" API, which allowed developers to use a single app for tablets and phones. A "Fragment" was a single pane of a user interface. In the Gmail picture above, the left folder list was one fragment and the inbox was another fragment. Phones would show one fragment per screen, and tablets could show two side-by-side. The developer defined the look of individual fragments, and Android would decide how they should be displayed based on the current device.

![The calculator finally used regular Android buttons, but someone spilled blue ink on the calendar.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/calculendar.png)

The calculator finally used regular Android buttons, but someone spilled blue ink on the calendar.

Photo by Ron Amadeo

For the first time in Android's history, the calculator got a makeover with non-custom buttons, so it actually looked like part of the OS. The bigger screen made room for more buttons, enough that all the calculator functionality could fit on one screen. The calendar greatly benefited from the extra space, gaining much more room for appointment text and controls. The action bar at the top of the screen held buttons to switch views, along with showing the current time span and common controls. Appointment blocks switched to a white background with the calendar color only showing in the top right corner. At the bottom (or side, in horizontal view) were boxes showing the month calendar and a list of displayed calendars.
The scale of the calendar could be adjusted, too. By performing a pinch zoom gesture, portrait week and day views could show between five and 19 hours of appointments on a single screen. The background of the calendar was made up of an uneven blue splotch, which didn't look particularly great and was tossed in later versions.

![The new camera interface, showing off the live "Negative" effect.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/camera.png)

The new camera interface, showing off the live "Negative" effect.

Photo by Ron Amadeo

The giant 10-inch Xoom tablet did have a camera, which meant that it also had a camera app. The Tron redesign finally got rid of the old faux-leather look that Google came up with in Android 1.6. The controls were laid out in a circle around the shutter button, bringing to mind the circular controls and dials on a real camera. The Cooliris-derived speech bubble popups were changed to glowing, semi-transparent black boxes. The Honeycomb screenshot shows the new "color effect" functionality, which applied a filter to the viewfinder in real time. Unlike the Gingerbread camera app, this didn't support a portrait orientation—it was limited to landscape only. Taking a portrait picture with a 10-inch tablet doesn't make much sense, but then neither does taking a landscape one.

![The clock app didn't get quite as much love as other areas. Google just threw it into a tiny box and called it a day.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/clocks.png)

The clock app didn't get quite as much love as other areas. Google just threw it into a tiny box and called it a day.

Photo by Ron Amadeo

Tons of functionality went out the door when it came time to remake the clock app. The entire "Deskclock" concept was kicked out the door, replaced with a simple large display of the time against a plain black background. The ability to launch other apps and view the weather was gone, as was the ability of the clock app to use your wallpaper. Google sometimes gave up when it came time to design a tablet-sized interface, like here, where it just threw the alarm interface into a tiny, centered dialog box.

![The Music app finally got the ground-up redesign it has needed forever.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/muzack.png)

The Music app finally got the ground-up redesign it has needed forever.

Photo by Ron Amadeo

While Music received a few minor additions during its life, this was really the first time since Android 0.9 that it received serious attention. The highlight of the redesign was a don't-call-it-coverflow scrolling 3D album art view, called "New and Recent." Instead of the tabs added in Android 2.1, navigation was handled by a drop-down box in the Action Bar. While "New and Recent" had 3D scrolling album art, "Albums" used a flat grid of album thumbnails. The other sections had totally different designs, too. "Songs" used a vertically scrolling list of text, and "Playlists," "Genres," and "Artists" used stacked album art.
In nearly every view, every single item had its own individual menu, usually little arrows in the bottom right corner of an item. For now, these would only show "Play" and "add to Playlist," but this version of Google Music was built for the future. Google was launching a Music service soon, and those individual menus would be needed for things like viewing other content from that artist in the Music Store and managing the cloud storage versus local storage options.
Just like the Cooliris Gallery in Android 2.1, Google Music would blow up one of your thumbnails and use it as a background. The bottom "Now Playing" bar now displayed the album art, playback controls, and a song progress bar.

![Some of the new Google Maps was really nice, and some of it was from Android 1.5.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps.png)

Some of the new Google Maps was really nice, and some of it was from Android 1.5.

Photo by Ron Amadeo

Google Maps received another redesign for the big screen. This one would stick around for a while and used a semi-transparent black action bar for all the controls. Search was again the primary function, given the first spot in the action bar, but this time it was an actual search bar you could type in, instead of a search bar-shaped button that launched a completely different interface. Google finally gave up on dedicating screen space to actual zoom buttons, relying on only gestures to control the map view. While the feature has since been ported to all old versions of Maps, Honeycomb was the first version to feature 3D building outlines on the map. Dragging two fingers down on the map would "tilt" the map view and show the sides of the buildings. You could freely rotate and the buildings would adjust, too.
Not every part of Maps was redesigned. Navigation was untouched from Gingerbread, and some core parts of the interface, like directions, were pulled straight from Android 1.6 and centered in a tiny box.

----------

![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)

[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.

[@RonAmadeo][t]

--------------------------------------------------------------------------------

via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/17/

Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)

[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

alim0x translating

The history of Android
================================================================================
![Yet another Android Market redesign dips its toe into the "cards" interface that would become a Google staple.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/play-store.png)

[1]:http://techcrunch.com/2014/03/03/gartner-195m-tablets-sold-in-2013-android-grabs-top-spot-from-ipad-with-62-share/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo


translating...

How to set up IPv6 BGP peering and filtering in Quagga BGP router
================================================================================
In the previous tutorials, we demonstrated how we can set up a [full-fledged BGP router][1] and configure [prefix filtering][2] with Quagga. In this tutorial, we are going to show you how we can set up IPv6 BGP peering and advertise IPv6 prefixes through BGP. We will also demonstrate how we can filter IPv6 prefixes advertised or received by using prefix-list and route-map features.

[1]:http://xmodulo.com/centos-bgp-router-quagga.html
[2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
[3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html
[4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html


translating...

Fix Minimal BASH like line editing is supported GRUB Error In Linux
================================================================================
The other day when I [installed Elementary OS in dual boot with Windows][1], I encountered a Grub error at reboot time. I was presented with a command line with this error message:

**Minimal BASH like line editing is supported. For the first word, TAB lists possible command completions. anywhere else TAB lists possible device or file completions.**

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu_Linux_1.jpeg)

Indeed, this is not an error specific to Elementary OS. It is a common [Grub][2] error that could occur with any Linux OS, be it Ubuntu, Fedora, Linux Mint, etc.

In this post we shall see **how to fix this “minimal BASH like line editing is supported” Grub error in Ubuntu**-based Linux systems.

> You can read this tutorial to fix a similar and more frequent issue, [error: no such partition grub rescue in Linux][3].

### Prerequisites ###

To fix this issue, you will need the following:

- A live USB or disk of the same OS and same version
- A working internet connection in the live session

Once you make sure that you have the prerequisites, let’s see how to fix the black screen of death for Linux (if I can call it that ;)).

### How to fix this “minimal BASH like line editing is supported” Grub error in Ubuntu based Linux ###

I know that you might point out that this Grub error is not exclusive to Ubuntu or Ubuntu-based Linux distributions, so why am I putting emphasis on the word Ubuntu? The reason is that here we will take an easy way out and use a tool called **Boot Repair** to fix our problem. I am not sure if this tool is available for other distributions like Fedora. Without wasting any more time, let’s see how to solve the minimal BASH like line editing is supported Grub error.

### Step 1: Boot into a live session ###

Plug in the live USB and boot into the live session.

### Step 2: Install Boot Repair ###

Once you are in the live session, open the terminal and use the following commands to install Boot Repair:

    sudo add-apt-repository ppa:yannubuntu/boot-repair
    sudo apt-get update
    sudo apt-get install boot-repair

Note: Follow this tutorial to [fix failed to fetch cdrom apt-get update cannot be used to add new CD-ROMs error][4], if you encounter it while running the above commands.

### Step 3: Repair boot with Boot Repair ###

Once you have installed Boot Repair, run it from the command line using the following command:

    boot-repair &

Actually, things are pretty straightforward from here. You just need to follow the instructions provided by the Boot Repair tool. First, click on the **Recommended repair** option in Boot Repair.

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu.png)

It will take a couple of minutes for Boot Repair to analyze the problem with the boot and Grub. Afterwards, it will provide you some commands to use in the command line. Copy the commands one by one into the terminal. For me, it showed a screen like this:

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu_1.png)

It will do some processing after you enter these commands:

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu_2.png)

Once the process finishes, it will provide you a URL which contains the logs of the boot repair. If your boot issue is still not fixed, you can go to the forum or email the dev team and provide them the URL as a reference. Cool, isn’t it?

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Final_Ubuntu.png)

After the boot repair finishes successfully, shut down your computer, remove the USB, and boot again. For me, it booted successfully but added two additional lines to the Grub screen. Something that was not of importance to me, as I was happy to see the system booting normally again.

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu_Linux_2.jpeg)

### Did it work for you? ###

So this is how I fixed the **minimal BASH like line editing is supported Grub error in Elementary OS Freya**. How about you? Did it work for you? Feel free to ask a question or drop a suggestion in the comment box below.

--------------------------------------------------------------------------------

via: http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux/

Author: [Abhishek][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)

[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/guide-install-elementary-os-luna/
[2]:http://www.gnu.org/software/grub/
[3]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/
[4]:http://itsfoss.com/fix-failed-fetch-cdrom-aptget-update-add-cdroms/


ictlyh Translating
How to access a Linux server behind NAT via reverse SSH tunnel
================================================================================
You are running a Linux server at home, which is behind a NAT router or restrictive firewall. Now you want to SSH to the home server while you are away from home. How would you set that up? SSH port forwarding will certainly be an option. However, port forwarding can become tricky if you are dealing with multiple nested NAT environment. Besides, it can be interfered with under various ISP-specific conditions, such as restrictive ISP firewalls which block forwarded ports, or carrier-grade NAT which shares IPv4 addresses among users.


Install Plex Media Server On Ubuntu / CentOS 7.1 / Fedora 22
================================================================================
In this article we will show you how easily you can set up Plex Home Media Server on major Linux distributions with their latest releases. After a successful installation of Plex, you will be able to use your centralized home media playback system, which streams its media to many Plex player apps; Plex Home also allows you to set up your environment by adding your devices and a group of users that can all use Plex together. So let’s start its installation, first on Ubuntu 15.04.

### Basic System Resources ###

System resources mainly depend on the type and number of devices that you are planning to connect with the server. So, according to our requirements, we will be using the following system resources and software for a standalone server.

Note: table

<table width="666" style="height: 181px;">
<tbody>
<tr>
<td width="670" colspan="2"><b>Plex Home Media Server</b></td>
</tr>
<tr>
<td width="236"><b>Base Operating System</b></td>
<td width="425">Ubuntu 15.04 / CentOS 7.1 / Fedora 22 Workstation</td>
</tr>
<tr>
<td width="236"><b>Plex Media Server</b></td>
<td width="425">Version 0.9.12.3.1173-937aac3</td>
</tr>
<tr>
<td width="236"><b>RAM and CPU</b></td>
<td width="425">1 GB, 2.0 GHz</td>
</tr>
<tr>
<td width="236"><b>Hard Disk</b></td>
<td width="425">30 GB</td>
</tr>
</tbody>
</table>

### Plex Media Server 0.9.12.3 on Ubuntu 15.04 ###

We are now ready to start the installation process of Plex Media Server on Ubuntu, so let’s start with the following steps to get it ready.

#### Step 1: System Update ####

Log in to your server with root privileges and make sure that your system is up to date; if it is not, update it using the command below.

    root@ubuntu-15:~# apt-get update

#### Step 2: Download the Latest Plex Media Server Package ####

Create a new directory and download the .deb Plex Media Server package into it from the official Plex website using the wget command.

    root@ubuntu-15:~# cd /plex/
    root@ubuntu-15:/plex#
    root@ubuntu-15:/plex# wget https://downloads.plex.tv/plex-media-server/0.9.12.3.1173-937aac3/plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb

#### Step 3: Install the Debian Package of Plex Media Server ####

Now, within the same directory, run the following command to install the Debian package and then check the status of plexmediaserver.

    root@ubuntu-15:/plex# dpkg -i plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb

----------

    root@ubuntu-15:~# service plexmediaserver status

![Plexmediaserver Service](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-status.png)

### Plex Home Media Web App Setup on Ubuntu 15.04 ###

Let’s open your web browser within your localhost network, open the web interface using your localhost IP and port 32400, and follow these steps to configure it:

    http://172.25.10.179:32400/web
    http://localhost:32400/web

#### Step 1: Sign Up before Login ####

After you have access to the web interface of Plex Media Server, make sure to sign up and set your username, email ID, and password to log in with.

![Plex Sign In](http://blog.linoxide.com/wp-content/uploads/2015/06/PMS-Login.png)

#### Step 2: Enter Your Pin to Secure Your Plex Media Home User ####

![Plex User Pin](http://blog.linoxide.com/wp-content/uploads/2015/06/333.png)

Now you have successfully configured your user under Plex Home Media.

![Welcome To Plex](http://blog.linoxide.com/wp-content/uploads/2015/06/3333.png)

### Opening Plex Web App on Devices Other than Localhost Server ###

As we have seen on our Plex media home page, it indicates "You do not have permissions to access this server". That's because we are on a different network than the server computer.

![Plex Server Permissions](http://blog.linoxide.com/wp-content/uploads/2015/06/33.png)

Now we need to resolve this permissions issue so that we can access the server on devices other than the hosting server, by doing the following setup.

### Set Up an SSH Tunnel from Windows to Access the Linux Server ###

First we need to set up an SSH tunnel so that we can access things as if they were local. This is only necessary for the initial setup.

If you are using Windows as your local system and the server runs Linux, then we can set up SSH tunneling using PuTTY as shown.

![Plex SSH Tunnel](http://blog.linoxide.com/wp-content/uploads/2015/06/ssh-tunnel.png)

**Once you have the SSH tunnel set up:**

Open your web browser window and type the following URL in the address bar.

    http://localhost:8888/web

The browser will connect to the server and load the Plex Web App with the same functionality as on the local machine.
Agree to the Terms of Service and start.

![Agree to Plex term](http://blog.linoxide.com/wp-content/uploads/2015/06/5.png)

Now a fully functional Plex Home Media Server is ready to add new media libraries, channels, playlists, etc.

![PMS Settings](http://blog.linoxide.com/wp-content/uploads/2015/06/8.png)

### Plex Media Server 0.9.12.3 on CentOS 7.1 ###

We will follow the same steps on CentOS 7.1 that we did for the installation of Plex Home Media Server on Ubuntu 15.04.

So let’s start with the Plex Media Server package installation.

#### Step 1: Plex Media Server Installation ####

To install Plex Media Server on CentOS 7.1, we need to download the .rpm package from the official Plex website. So we will use the wget command to download the .rpm package into a new directory.

    [root@linux-tutorials ~]# cd /plex
    [root@linux-tutorials plex]# wget https://downloads.plex.tv/plex-media-server/0.9.12.3.1173-937aac3/plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm

#### Step 2: Install the .RPM Package ####

After the download completes, we will install this package using the rpm command within the same directory where we downloaded the .rpm package.

    [root@linux-tutorials plex]# ls
    plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm
    [root@linux-tutorials plex]# rpm -i plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm

#### Step 3: Start the Plexmediaserver Service ####

We have successfully installed Plex Media Server; now we just need to start its service and then enable it permanently.

    [root@linux-tutorials plex]# systemctl start plexmediaserver.service
    [root@linux-tutorials plex]# systemctl enable plexmediaserver.service
    [root@linux-tutorials plex]# systemctl status plexmediaserver.service

### Plex Home Media Web App Setup on CentOS 7.1 ###

Now we just need to repeat all the steps that we performed during the web app setup on Ubuntu.
So let’s open a new window in your web browser and access the Plex Media Server Web App using localhost or the IP of your Plex server.

    http://172.20.3.174:32400/web
    http://localhost:32400/web

Then, to get full permissions on the server, you need to repeat the steps to create the SSH tunnel.
After signing up with a new user account, we will be able to access all of its features and can add new users, add new libraries, and set it up per our needs.

![Plex Device Centos](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-devices-centos.png)

### Plex Media Server 0.9.12.3 on Fedora 22 Workstation ###

The basic steps to download and install Plex Media Server are the same as we did for CentOS 7.1.
We just need to download its .rpm package and then install it with the rpm command.

![PMS Installation](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-on-fedora.png)

### Plex Home Media Web App Setup on Fedora 22 Workstation ###

We set up Plex Media Server on the same host, so we don't need to set up an SSH tunnel in this scenario. Just open the web browser on your Fedora 22 Workstation with the default port 32400 of Plex Home Media Server and accept the Plex Terms of Service agreement.

![Plex Agreement](http://blog.linoxide.com/wp-content/uploads/2015/06/Plex-Terms.png)

**Welcome to Plex Home Media Server on Fedora 22 Workstation**

Let’s log in with your Plex account and start adding your libraries for your favorite movie channels, create your playlists, add your photos, and enjoy many other features of Plex Home Media Server.

![Plex Add Libraries](http://blog.linoxide.com/wp-content/uploads/2015/06/create-library.png)

### Conclusion ###

We have successfully installed and configured Plex Media Server on the major Linux distributions. Plex Home Media Server has always been a great choice for media management. It is simple to set up across platforms, as we did for Ubuntu, CentOS, and Fedora. It simplifies the tasks of organizing your media content, streaming it to other computers and devices, and sharing it with your friends.

--------------------------------------------------------------------------------

via: http://linoxide.com/tools/install-plex-media-server-ubuntu-centos-7-1-fedora-22/

Author: [Kashif Siddique][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]:http://linoxide.com/author/kashifs/


translating by zhangboyue

Analyzing Linux Logs
================================================================================
There’s a great deal of information waiting for you within your logs, although it’s not always as easy as you’d like to extract it. In this section we will cover some examples of basic analysis you can do with your logs right away (just search what’s there). We’ll also cover more advanced analysis that may take some upfront effort to set up properly, but will save you time on the back end. Examples of advanced analysis you can do on parsed data include generating summary counts, filtering on field values, and more.

We’ll show you first how to do this yourself on the command line using several different tools and then show you how a log management tool can automate much of the grunt work and make this so much more streamlined.

### Searching with Grep ###

Searching for text is the most basic way to find what you’re looking for. The most common tool for searching text is [grep][1]. This command line tool, available on most Linux distributions, allows you to search your logs using regular expressions. A regular expression is a pattern written in a special language that can identify matching text. The simplest pattern is to put the string you’re searching for surrounded by quotes.

#### Regular Expressions ####

Here’s an example to find authentication logs for “user hoover” on an Ubuntu system:

    $ grep "user hoover" /var/log/auth.log
    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
    pam_unix(sshd:session): session opened for user hoover by (uid=0)
    pam_unix(sshd:session): session closed for user hoover

It can be hard to construct regular expressions that are accurate. For example, if we searched for a number like the port “4792” it could also match timestamps, URLs, and other undesired data. In the below example for Ubuntu, it matched an Apache log that we didn’t want.

    $ grep "4792" /var/log/auth.log
    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
    74.91.21.46 - - [31/Mar/2015:19:44:32 +0000] "GET /scripts/samples/search?q=4972 HTTP/1.0" 404 545 "-" "-"
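One way to tighten the match is to include surrounding context in the pattern itself. Below is a minimal sketch using hypothetical sample lines (the file path and log contents are made up for illustration); adding the literal text `port ` before the number excludes the Apache-style line:

```shell
# Hypothetical sample log mimicking the lines above (here the Apache line
# really does contain 4792, so a bare "4792" search would match both lines).
printf '%s\n' \
  'Accepted password for hoover from 10.0.2.2 port 4792 ssh2' \
  '74.91.21.46 - - [31/Mar/2015:19:44:32 +0000] "GET /search?q=4792 HTTP/1.0" 404 545' \
  > /tmp/sample_auth.log

# Including the surrounding text "port " narrows the match to the SSH line.
grep "port 4792" /tmp/sample_auth.log
# -> Accepted password for hoover from 10.0.2.2 port 4792 ssh2
```

Whole-word matching (`grep -w`) is another common way to avoid matching a number embedded inside a longer token such as a timestamp or URL.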

#### Surround Search ####

Another useful tip is that you can do surround search with grep. This will show you what happened a few lines before or after a match. It can help you debug what led up to a particular error or problem. The `-B` flag gives you lines before, and `-A` gives you lines after. For example, we can see that when someone failed to log in as an admin, they also failed the reverse mapping, which means they might not have a valid domain name. This is very suspicious!

$ grep -B 3 -A 2 'Invalid user' /var/log/auth.log
|
|
||||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: reverse mapping checking getaddrinfo for 216-19-2-8.commspeed.net [216.19.2.8] failed - POSSIBLE BREAK-IN ATTEMPT!
|
|
||||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
|
|
||||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Invalid user admin from 216.19.2.8
|
|
||||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: input_userauth_request: invalid user admin [preauth]
|
|
||||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
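
If you find yourself passing the same value to both flags, grep also accepts a `-C` option as a shorthand for symmetric context. Here is a small runnable sketch on sample data (the file path and log lines are made up for illustration):

```shell
# Create a small sample log to demonstrate context searching.
# /tmp/sample.log and its contents are sample data, not a real system log.
printf 'line one\nline two\nInvalid user admin\nline four\nline five\n' > /tmp/sample.log

# -C 1 prints one line of context before and after each match,
# equivalent to combining -B 1 and -A 1.
grep -C 1 'Invalid user' /tmp/sample.log
```

This prints the matching line together with the line before and after it.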

#### Tail ####

You can also pair grep with [tail][2] to get the last few lines of a file, or to follow the logs and print them in real time. This is useful if you are making interactive changes like starting a server or testing a code change.

    $ tail -f /var/log/auth.log | grep 'Invalid user'
    Apr 30 19:49:48 ip-172-31-11-241 sshd[6512]: Invalid user ubnt from 219.140.64.136
    Apr 30 19:49:49 ip-172-31-11-241 sshd[6514]: Invalid user admin from 219.140.64.136

A full introduction to grep and regular expressions is outside the scope of this guide, but [Ryan’s Tutorials][3] include more in-depth information.

Log management systems have higher performance and more powerful searching abilities. They often index their data and parallelize queries, so you can search gigabytes or terabytes of logs in seconds. In contrast, this would take minutes or, in extreme cases, hours with grep. Log management systems also use query languages like [Lucene][4] which offer an easier syntax for searching on numbers, fields, and more.
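
For a flavor of what such a query language looks like, here is a hypothetical Lucene-style query. The field names (`appName`, `user`, `port`) are assumptions for illustration; the actual fields depend on how your log management system parses events:

```
appName:sshd AND user:hoover AND port:[4000 TO 5000]
```

A fielded query like this matches only the parsed fields, so a number such as 4792 appearing in a timestamp or URL would not produce a false match.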

### Parsing with Cut, AWK, and Grok ###

#### Command Line Tools ####

Linux offers several command line tools for text parsing and analysis. They are great if you want to quickly parse a small amount of data but can take a long time to process large volumes of data.

#### Cut ####

The [cut][5] command allows you to parse fields from delimited logs. Delimiters are characters like equal signs or commas that break up fields or key-value pairs.

Let’s say we want to parse the user from this log:

    pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost= user=root

We can use the cut command like this to get the eighth field, which is the text after the seventh equal sign. This example is on an Ubuntu system:

    $ grep "authentication failure" /var/log/auth.log | cut -d '=' -f 8
    root
    hoover
    root
    nagios
    nagios
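
To see how the field numbering works without a real auth.log, you can feed cut a copy of the sample line directly. This is a self-contained sketch using the sample log line from above:

```shell
# Pipe a copy of the sample log line into cut.
# With '=' as the delimiter, field 8 is everything after the seventh '='.
echo 'pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost= user=root' \
  | cut -d '=' -f 8
# Prints: root
```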

#### AWK ####

Alternately, you can use [awk][6], which offers more powerful features for parsing out fields. It provides a scripting language, so you can filter out nearly everything that’s not relevant.

For example, let’s say we have the following log line on an Ubuntu system and we want to extract the username that failed to log in:

    Mar 24 08:28:18 ip-172-31-11-241 sshd[32701]: input_userauth_request: invalid user guest [preauth]

Here’s how you can use the awk command. First, use the regular expression /sshd.*invalid user/ to match the sshd invalid user lines. Then print the ninth field, using the default delimiter of a space, with { print $9 }. This outputs the usernames.

    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
    guest
    admin
    info
    test
    ubnt
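
Going one step further, you can pipe awk’s output through sort and uniq to produce summary counts of failed logins per username. This is a sketch against a few sample lines (made up for illustration) rather than a real auth.log:

```shell
# Count failed-login attempts per username from sample sshd log lines.
printf '%s\n' \
  'Mar 24 08:28:18 host sshd[1]: input_userauth_request: invalid user guest [preauth]' \
  'Mar 24 08:28:19 host sshd[2]: input_userauth_request: invalid user admin [preauth]' \
  'Mar 24 08:28:20 host sshd[3]: input_userauth_request: invalid user guest [preauth]' \
  | awk '/sshd.*invalid user/ { print $9 }' \
  | sort | uniq -c | sort -rn
# Prints (spacing may vary): 2 guest, then 1 admin
```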

You can read more about how to use regular expressions and print fields in the [Awk User’s Guide][7].

#### Log Management Systems ####

Log management systems make parsing easier and enable users to quickly analyze large collections of log files. They can automatically parse standard log formats like common Linux logs or web server logs. This saves a lot of time because you don’t have to think about writing your own parsing logic when troubleshooting a system problem.

Here you can see an example log message from sshd which has each of the fields remoteHost and user parsed out. This is a screenshot from Loggly, a cloud-based log management service.

![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.25.09-AM.png)

You can also do custom parsing for non-standard formats. A common tool to use is [Grok][8], which uses a library of common regular expressions to parse raw text into structured JSON. Here is an example Grok configuration to parse kernel log files inside Logstash:

    filter {
      grok {
        match => { "message" => "%{CISCOTIMESTAMP:timestamp} %{HOST:host} %{WORD:program}%{NOTSPACE} %{NOTSPACE}%{NUMBER:duration}%{NOTSPACE} %{GREEDYDATA:kernel_logs}" }
      }
    }

And here is what the parsed output looks like from Grok:

![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.30.37-AM.png)

### Filtering with Rsyslog and AWK ###

Filtering allows you to search on a specific field value instead of doing a full text search. This makes your log analysis more accurate because it will ignore undesired matches from other parts of the log message. In order to search on a field value, you need to parse your logs first, or at least have a way of searching based on the event structure.

#### How to Filter on One App ####

Often, you just want to see the logs from one application. This is easy if your application always logs to a single file. It’s more complicated if you need to filter one application among many in an aggregated or centralized log. Here are several ways to do this:

1. Use the rsyslog daemon to parse and filter logs. This example writes logs from the sshd application to a file named sshd-messages, then discards the event so it’s not repeated elsewhere. You can try this example by adding it to your rsyslog.conf file.

    :programname, isequal, "sshd" /var/log/sshd-messages
    &~
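
Newer versions of rsyslog (v7 and later) also accept an if-statement syntax for the same filter. A sketch, assuming the same sshd example; check your rsyslog version’s documentation before relying on it:

```
if $programname == 'sshd' then /var/log/sshd-messages
& stop
```

Here `& stop` replaces the legacy `&~` discard action.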

2. Use command line tools like awk to extract the values of a particular field, such as the sshd username. This example is from an Ubuntu system.

    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
    guest
    admin
    info
    test
    ubnt

3. Use a log management system that automatically parses your logs, then click to filter on the desired application name. Here is a screenshot showing the syslog fields in a log management service called Loggly. We are filtering on the appName “sshd”, as indicated by the Venn diagram icon.

![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.02-AM.png)

#### How to Filter on Errors ####

One of the most common things people want to see in their logs is errors. Unfortunately, the default syslog configuration doesn’t output the severity of errors directly, making it difficult to filter on them.

There are two ways you can solve this problem. First, you can modify your rsyslog configuration to output the severity in the log file to make it easier to read and search. In your rsyslog configuration you can add a [template][9] with pri-text, such as the following:

    "<%pri-text%> : %timegenerated%,%HOSTNAME%,%syslogtag%,%msg%n"

This example gives you output in the following format. You can see that the severity in this message is err.

    <authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure

You can use awk or grep to search for just the error messages. In this example for Ubuntu, we include some of the surrounding syntax, like the `.` and the `>`, so the pattern matches only this field.

    $ grep '.err>' /var/log/auth.log
    <authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
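
An awk version of the same filter can anchor the match to the first field of the line, so an “err” appearing elsewhere in the message can’t produce a false match. This is a sketch against sample lines rather than the real auth.log:

```shell
# Match only when the first field (the severity tag) ends in ".err>".
# The input lines are sample data in the format produced by the template above.
printf '%s\n' \
  '<authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure' \
  '<authpriv.info> : Mar 11 18:19:00,hoover-VirtualBox,su[5026]:, session opened' \
  | awk '$1 ~ /\.err>$/ { print }'
```

Only the first line is printed, because the `.info` line’s severity field does not match.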

Your second option is to use a log management system. Good log management systems automatically parse syslog messages and extract the severity field. They also allow you to filter on log messages of a certain severity with a single click.

Here is a screenshot from Loggly showing the syslog fields with the error severity highlighted to show we are filtering for errors:

![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.00.36-AM.png)

--------------------------------------------------------------------------------

via: http://www.loggly.com/ultimate-guide/logging/analyzing-linux-logs/

作者:[Jason Skowronski][a],[Amy Echeverri][b],[Sadequl Hussain][c]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linkedin.com/in/jasonskowronski
[b]:https://www.linkedin.com/in/amyecheverri
[c]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:http://linux.die.net/man/1/grep
[2]:http://linux.die.net/man/1/tail
[3]:http://ryanstutorials.net/linuxtutorial/grep.php
[4]:https://lucene.apache.org/core/2_9_4/queryparsersyntax.html
[5]:http://linux.die.net/man/1/cut
[6]:http://linux.die.net/man/1/awk
[7]:http://www.delorie.com/gnu/docs/gawk/gawk_26.html#IDX155
[8]:http://logstash.net/docs/1.4.2/filters/grok
[9]:http://www.rsyslog.com/doc/v8-stable/configuration/templates.html
@ -1,209 +0,0 @@
translating by wwy-hust

Nishita Agarwal Shares Her Interview Experience on Linux ‘iptables’ Firewall
================================================================================

Nishita Agarwal, a frequent Tecmint visitor, shared her experience (questions and answers) with us regarding a job interview she recently had at a privately owned hosting company in Pune, India. She was asked a lot of questions on a variety of topics; however, as she is an expert in iptables, she wanted to share those iptables-related questions and the answers she gave with others who may be interviewing in the near future.

![Linux Firewall Iptables Interview Questions](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg)

All the questions and their answers are rewritten based upon the memory of Nishita Agarwal.

> “Hello Friends! My name is **Nishita Agarwal**. I hold a Bachelor’s Degree in Technology. My area of specialization is UNIX, and variants of UNIX (BSD, Linux) have fascinated me since the first time I heard of them. I have 1+ years of experience in storage. I was looking for a job change, which ended with a hosting company in Pune, India.”

Here is the collection of what I was asked during the interview. I’ve documented only those questions and their answers that were related to iptables, based upon my memory. Hope this will help you in cracking your interview.

### 1. Have you heard of iptables and firewall in Linux? Any idea of what they are and what they are used for? ###

> **Answer** : I’ve been using iptables for quite a long time and I am aware of both iptables and firewalld. iptables is an application program, mostly written in the C programming language, released under the GNU General Public License and written from a system administration point of view. The latest stable release is iptables 1.4.21. iptables may be considered the firewall for UNIX-like operating systems, which can more accurately be called iptables/netfilter. The administrator interacts with iptables via console/GUI front-end tools to add and define firewall rules in predefined tables. Netfilter is a module built into the kernel that does the actual job of filtering.
>
> Firewalld is the latest implementation of filtering rules in RHEL/CentOS 7 (it may be implemented in other distributions which I may not be aware of). It has replaced the iptables interface and connects to netfilter.

### 2. Have you used any GUI based front-end tool for iptables, or the Linux command line? ###

> **Answer** : Though I have used GUI based front-end tools for iptables, like Shorewall in conjunction with [Webmin][1], as well as direct access to iptables via the console, I must admit that direct access to iptables via the Linux console gives a user immense power in the form of a higher degree of flexibility and a better understanding of what is going on in the background, if nothing else. GUI is for the novice administrator while the console is for the experienced.

### 3. What are the basic differences between iptables and firewalld? ###

> **Answer** : iptables and firewalld serve the same purpose (packet filtering) but with different approaches. iptables flushes the entire rule set each time a change is made, unlike firewalld. Typically the iptables configuration lies at ‘/etc/sysconfig/iptables‘, whereas the firewalld configuration lies at ‘/etc/firewalld/‘, which is a set of XML files. Configuring XML based firewalld is easier as compared to the configuration of iptables; however, the same packet filtering tasks can be achieved with both applications. Firewalld runs iptables under its hood, along with its own command line interface and the XML based configuration files mentioned above.

### 4. Would you replace iptables with firewalld on all your servers, if given a chance? ###

> **Answer** : I am familiar with iptables and how it works, and if there is nothing that requires the dynamic aspects of firewalld, there seems to be no reason to migrate all my configuration from iptables to firewalld. In most cases, so far, I have never seen iptables create an issue. Also, the general rule of information technology says “why fix it if it is not broken”. However, this is my personal thought and I would never mind implementing firewalld if the organization is going to replace iptables with firewalld.

### 5. You seem confident with iptables, and the plus point is that we are even using iptables on our servers. ###

What are the tables used in iptables? Give a brief description of the tables used in iptables and the chains they support.

> **Answer** : Thanks for the recognition. Moving to the question: there are four tables used in iptables, namely:
>
> Nat Table
> Mangle Table
> Filter Table
> Raw Table
>
> Nat Table : The nat table is primarily used for Network Address Translation. Masqueraded packets get their IP address altered as per the rules in the table. Packets in a stream traverse the nat table only once, i.e., if a packet from a stream of packets is masqueraded, the rest of the packets in the stream will not traverse this table again. It is recommended not to filter in this table. Chains supported by the nat table are the PREROUTING chain, POSTROUTING chain, and OUTPUT chain.
>
> Mangle Table : As the name suggests, this table serves for mangling packets. It is used for special packet alteration, and can be used to alter the content of different packets and their headers. The mangle table can’t be used for masquerading. Supported chains are the PREROUTING chain, OUTPUT chain, FORWARD chain, INPUT chain, and POSTROUTING chain.
>
> Filter Table : The filter table is the default table used in iptables. It is used for filtering packets. If no table is specified, the filter table is taken as the default, and filtering is done on the basis of this table. Supported chains are the INPUT chain, OUTPUT chain, and FORWARD chain.
>
> Raw Table : The raw table comes into action when we want to mark packets so they are exempted from connection tracking. It supports the PREROUTING chain and OUTPUT chain.

### 6. What are the target values (that can be specified in a target) in iptables, and what do they do? Be brief! ###

> **Answer** : Following are the target values that we can specify as a target in iptables:
>
> ACCEPT : Accept packets
> QUEUE : Pass packets to user space (the place where applications and drivers reside)
> DROP : Drop packets
> RETURN : Return control to the calling chain and stop executing the next set of rules for the current packets in the chain.

### 7. Let’s move to the technical aspects of iptables; by technical I mean practical. ###

How will you check for the iptables rpm that is required to install iptables on CentOS?

> **Answer** : The iptables rpm is included in the standard CentOS installation and we do not need to install it separately. We can check the rpm as:
>
> # rpm -qa iptables
>
> iptables-1.4.21-13.el7.x86_64
>
> If you need to install it, you may use yum to get it.
>
> # yum install iptables-services

### 8. How do you check and ensure that the iptables service is running? ###

> **Answer** : To check the status of iptables, you may run the following command on the terminal.
>
> # service iptables status [On CentOS 6/5]
> # systemctl status iptables [On CentOS 7]
>
> If it is not running, the below commands may be executed.
>
> ---------------- On CentOS 6/5 ----------------
> # chkconfig --level 35 iptables on
> # service iptables start
>
> ---------------- On CentOS 7 ----------------
> # systemctl enable iptables
> # systemctl start iptables
>
> We may also check whether the iptables module is loaded or not, as:
>
> # lsmod | grep ip_tables

### 9. How will you review the current rules defined in iptables? ###

> **Answer** : The current rules in iptables can be reviewed as simply as:
>
> # iptables -L
>
> Sample Output
>
> Chain INPUT (policy ACCEPT)
> target prot opt source destination
> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
> ACCEPT icmp -- anywhere anywhere
> ACCEPT all -- anywhere anywhere
> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source destination
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source destination

### 10. How will you flush all iptables rules or a particular chain? ###

> **Answer** : To flush a particular iptables chain, you may use the following command.
>
> # iptables --flush OUTPUT
>
> To flush all the iptables rules:
>
> # iptables --flush

### 11. Add a rule in iptables to accept packets from a trusted IP address (say 192.168.0.7) ###

> **Answer** : The above scenario can be achieved simply by running the below command.
>
> # iptables -A INPUT -s 192.168.0.7 -j ACCEPT
>
> We may also include standard slash notation or a subnet mask in the source, as:
>
> # iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT
> # iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT

### 12. How to add rules to ACCEPT, REJECT, DENY and DROP the ssh service in iptables. ###

> **Answer** : Assuming ssh is running on port 22, which is also the default port for ssh, we can add rules to iptables as follows.
>
> To ACCEPT tcp packets for the ssh service (port 22):
>
> # iptables -A INPUT -p tcp --dport 22 -j ACCEPT
>
> To REJECT tcp packets for the ssh service (port 22):
>
> # iptables -A INPUT -p tcp --dport 22 -j REJECT
>
> To DENY tcp packets for the ssh service (port 22). Note that DENY is not a valid iptables target, so denying is done by silently dropping the packets:
>
> # iptables -A INPUT -p tcp --dport 22 -j DROP
>
> To DROP tcp packets for the ssh service (port 22):
>
> # iptables -A INPUT -p tcp --dport 22 -j DROP

### 13. Let me give you a scenario. Say there is a machine whose local IP address is 192.168.0.6. You need to block connections on ports 22, 23, 80 and 8080 to your machine. What will you do? ###

> **Answer** : All I need is the ‘multiport‘ option of iptables followed by the port numbers to be blocked, and the above scenario can be achieved in a single go, as:
>
> # iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 22,23,80,8080 -j DROP
>
> The written rules can be checked using the below command.
>
> # iptables -L
>
> Chain INPUT (policy ACCEPT)
> target prot opt source destination
> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
> ACCEPT icmp -- anywhere anywhere
> ACCEPT all -- anywhere anywhere
> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
> DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source destination
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source destination

**Interviewer** : That’s all I wanted to ask. You are a valuable employee we wouldn’t like to miss. I will recommend your name to HR. If you have any questions, you may ask me.

As a candidate, I didn’t want to kill the conversation, hence I kept asking about the projects I would be handling if selected and what other openings there were in the company. Not to mention, the HR round was not difficult to crack, and I got the opportunity.

I would also like to thank Avishek and Ravi (with whom I have been friends for a long time) for taking the time to document my interview.

Friends! If you have given any such interview and you would like to share your interview experience with millions of Tecmint readers around the globe, then send your questions and answers to admin@tecmint.com.

Thank you! Keep connected. Also, let me know if I could have answered any question more correctly than I did.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/

作者:[Avishek Kumar][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/
@ -1,95 +0,0 @@

How to Configure Swarm Native Clustering for Docker
================================================================================

Hi everyone, today we’ll learn about Swarm and how we can create native clusters using Docker with Swarm. [Docker Swarm][1] is a native clustering program for Docker which turns a pool of Docker hosts into a single virtual host. Swarm serves the standard Docker API, so any tool which can communicate with a Docker daemon can use Swarm to transparently scale to multiple hosts. Swarm follows the “batteries included but removable” principle, as do other Docker projects. It ships with a simple scheduling backend out of the box, and as initial development settles, an API will be developed to enable pluggable backends. The goal is to provide a smooth out-of-the-box experience for simple use cases, and to allow swapping in more powerful backends, like Mesos, for large-scale production deployments. Swarm is extremely easy to set up and get started with.

So, here are some of the features of Swarm 0.2 out of the box.

1. Swarm 0.2.0 is about 85% compatible with the Docker Engine.
2. It supports resource management.
3. It has an advanced scheduling feature with constraints and affinities.
4. It supports multiple discovery backends (hubs, consul, etcd, zookeeper).
5. It uses the TLS encryption method for security and authentication.

So, here are some very simple and easy steps on how we can use Swarm.

### 1. Pre-requisites to run Swarm ###

We must install Docker 1.4.0 or later on all nodes. While each node’s IP need not be public, the Swarm manager must be able to access each node across the network.

Note: Swarm is currently in beta, so things are likely to change. We don’t recommend you use it in production yet.

### 2. Creating a Swarm Cluster ###

Now, we’ll create a swarm cluster by running the below command. Each node will run a swarm node agent. The agent registers the referenced Docker daemon, monitors it, and updates the discovery backend with the node’s status. The below command returns a token, which is a unique cluster ID; it will be used when starting the Swarm agent on the nodes.

    # docker run swarm create

![Creating Swarm Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-swarm-cluster.png)

### 3. Starting the Docker Daemon on each node ###

We’ll need to log in to each node that we’ll use to create the cluster and start the Docker daemon with the `-H` flag. This ensures that the Docker remote API on the node is available over TCP for the Swarm manager. To start the Docker daemon, we’ll need to run the following command on each of the nodes.

    # docker -H tcp://0.0.0.0:2375 -d

![Starting Docker Daemon](http://blog.linoxide.com/wp-content/uploads/2015/05/starting-docker-daemon.png)

### 4. Adding the Nodes ###

After enabling the Docker daemon, we’ll need to add the Swarm nodes to the discovery service. We must ensure that the node’s IP is accessible from the Swarm manager. To do so, we’ll need to run the following command.

    # docker run -d swarm join --addr=<node_ip>:2375 token://<cluster_id>

![Adding Nodes to Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/adding-nodes-to-cluster.png)

**Note**: Here, we’ll need to replace <node_ip> and <cluster_id> with the IP address of the node and the cluster ID we got from step 2.

### 5. Starting the Swarm Manager ###

Now that we have got the nodes connected to the cluster, we’ll start the Swarm manager. We’ll need to run the following command on the node that will act as the manager.

    # docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>

![Starting Swarm Manager](http://blog.linoxide.com/wp-content/uploads/2015/05/starting-swarm-manager.png)

### 6. Checking the Configuration ###

Once the manager is running, we can check the configuration by running the following command.

    # docker -H tcp://<manager_ip:manager_port> info

![Accessing Swarm Clusters](http://blog.linoxide.com/wp-content/uploads/2015/05/accessing-swarm-cluster.png)

**Note**: We’ll need to replace <manager_ip:manager_port> with the IP address and port of the host running the Swarm manager.

### 7. Using the Docker CLI to access nodes ###

After everything is done as explained above, this part is the most important part of Docker Swarm. We can use the Docker CLI to access the nodes and run containers on them.

    # docker -H tcp://<manager_ip:manager_port> info
    # docker -H tcp://<manager_ip:manager_port> run ...

### 8. Listing nodes in the cluster ###

We can get a list of all of the running nodes using the swarm list command.

    # docker run --rm swarm list token://<cluster_id>

![Listing Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/listing-swarm-nodes.png)

### Conclusion ###

Swarm is really an awesome feature of Docker that can be used for creating and managing clusters. It is pretty easy to set up and use, and it is even better when we use constraints and affinities on top of it. Advanced scheduling is an awesome feature of Swarm: it applies filters to exclude nodes based on ports, labels, and health, and it uses strategies to pick the best node. So, if you have any questions, comments, or feedback, please do write in the comment box below and let us know what needs to be added or improved. Thank you! Enjoy :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/configure-swarm-clustering-docker/

作者:[Arun Pyasi][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:https://docs.docker.com/swarm/
@ -1,125 +0,0 @@

How to Provision Swarm Clusters using Docker Machine
================================================================================

Hi all, today we’ll learn how we can deploy Swarm clusters using Docker Machine. Swarm serves the standard Docker API, so any tool which can communicate with a Docker daemon can use it to transparently scale to multiple hosts. Docker Machine is an application that helps to create Docker hosts on our computer, on cloud providers, and inside our own data center. It provides an easy solution for creating servers, installing Docker on them, and then configuring the Docker client according to the user’s configuration and requirements. We can provision swarm clusters with any driver we need, and they are highly secured with TLS encryption.
Here are some quick and easy steps on how to provision Swarm clusters with Docker Machine.

### 1. Installing Docker Machine ###

Docker Machine runs well on every major Linux operating system. First of all, we'll need to download the latest version of Docker Machine from its GitHub releases page. Here, we'll use curl to download the latest version of Docker Machine, i.e. 0.2.0.

For 64 Bit Operating System

    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
For 32 Bit Operating System

    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine

After downloading the latest release of Docker Machine, we'll make the file named docker-machine under /usr/local/bin/ executable using the command below.

    # chmod +x /usr/local/bin/docker-machine

After doing the above, we'll want to ensure that we have successfully installed docker-machine. To check, we can run docker-machine -v, which will output the version of docker-machine installed on our system.

    # docker-machine -v
![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png)

To enable Docker commands on our machines, make sure to install the Docker client as well by running the commands below.

    # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
    # chmod +x /usr/local/bin/docker

### 2. Creating Machine ###

After installing Machine on our working PC or device, we'll want to go forward and create a machine using Docker Machine. In this tutorial we'll deploy a machine on the DigitalOcean platform, so we'll use "digitalocean" as its driver API. Docker Swarm will then be running in that droplet, which will be further configured as the Swarm master, and another droplet will be created and configured as a Swarm node agent.

So, to create the machine, we'll need to run the following command.

    # docker-machine create --driver digitalocean --digitalocean-access-token <API-Token> linux-dev

**Note**: Here, linux-dev is the name of the machine we want to create. <API-Token> is a security key which can be generated from the DigitalOcean control panel by the account holder. To retrieve that key, we simply need to log in to our DigitalOcean control panel, click on API, then click on Generate New Token, give it a name, and tick both Read and Write. We'll then get a long hex key; that's the <API-Token>. Simply substitute it into the command above.
Now, to load the machine configuration into the shell we are running the commands in, run the following command.

    # eval "$(docker-machine env linux-dev)"

![Docker Machine Digitalocean Cloud](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-digitalocean-cloud.png)

Then, we'll mark our machine as ACTIVE by running the command below.

    # docker-machine active linux-dev

Now, we'll check whether it's been marked as ACTIVE "*" or not.

    # docker-machine ls

![Docker Machine Active List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-active-list.png)
### 3. Running Swarm Docker Image ###

Now that we have finished creating the required machine, we'll deploy the swarm Docker image on our active machine. This machine will run the Docker image and control the Swarm master and nodes. To run the image, we can simply run the command below.

    # docker run swarm create

![Docker Machine Swarm Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-create.png)

If you are trying to run the swarm Docker image using a **32 bit Operating System** on the computer where Docker Machine is running, you'll need to SSH into the droplet.

    # docker-machine ssh
    # docker run swarm create
    # exit
### 4. Creating Swarm Master ###

Now that our machine is running the swarm image, we'll create a Swarm master. This will also add the master as a node. To do so, here's the command below.

    # docker-machine create \
        -d digitalocean \
        --digitalocean-access-token <DIGITALOCEAN-TOKEN> \
        --swarm \
        --swarm-master \
        --swarm-discovery token://<CLUSTER-ID> \
        swarm-master

![Docker Machine Swarm Master Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-master-create.png)
### 5. Creating Swarm Nodes ###

Now, we'll create a swarm node which will connect to the Swarm master. The command below will create a new droplet named swarm-node and connect it to the Swarm master as a node. This will create a Swarm cluster across the two nodes.

    # docker-machine create \
        -d digitalocean \
        --digitalocean-access-token <DIGITALOCEAN-TOKEN> \
        --swarm \
        --swarm-discovery token://<TOKEN-FROM-ABOVE> \
        swarm-node

![Docker Machine Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-nodes.png)
### 6. Connecting to the Swarm Master ###

We now connect to the Swarm master so that we can deploy Docker containers across the nodes as per the requirements and configuration. To load the Swarm master's machine configuration into our environment, we can run the command below.

    # eval "$(docker-machine env --swarm swarm-master)"

After that, we can run the required containers of our choice across the nodes. Here, we'll check if everything went fine. So, we'll run **docker info** to check the information about the Swarm clusters.

    # docker info
### Conclusion ###

We can pretty easily create a Swarm cluster with Docker Machine. This method is very productive because it saves a lot of time for system administrators and users. In this article, we successfully provisioned clusters by creating a master and a node using Docker Machine with DigitalOcean as the driver. Clusters can be created using any driver, such as VirtualBox, Google Compute Engine, Amazon Web Services, Microsoft Azure and more, according to the needs and requirements of the user, and the connection is highly secured with TLS encryption. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-machine/

作者:[Arun Pyasi][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
@ -1,237 +0,0 @@
How to collect NGINX metrics - Part 2
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png)

### How to get the NGINX metrics you need ###

How you go about capturing metrics depends on which version of NGINX you are using, as well as which metrics you wish to access. (See [the companion article][1] for an in-depth exploration of NGINX metrics.) Free, open-source NGINX and the commercial product NGINX Plus both have status modules that report metrics, and NGINX can also be configured to report certain metrics in its logs:

注:表格
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: center;">
<col style="text-align: center;">
<col style="text-align: center;"> </colgroup>
<thead>
<tr>
<th rowspan="2" style="text-align: left;">Metric</th>
<th colspan="3" style="text-align: center;">Availability</th>
</tr>
<tr>
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source">NGINX (open-source)</a></th>
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus">NGINX Plus</a></th>
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#logs">NGINX logs</a></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">accepts / accepted</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">handled</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">dropped</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">active</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">requests / total</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">4xx codes</td>
<td style="text-align: center;"></td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
</tr>
<tr>
<td style="text-align: left;">5xx codes</td>
<td style="text-align: center;"></td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
</tr>
<tr>
<td style="text-align: left;">request time</td>
<td style="text-align: center;"></td>
<td style="text-align: center;"></td>
<td style="text-align: center;">x</td>
</tr>
</tbody>
</table>
#### Metrics collection: NGINX (open-source) ####

Open-source NGINX exposes several basic metrics about server activity on a simple status page, provided that you have the HTTP [stub status module][2] enabled. To check if the module is already enabled, run:

    nginx -V 2>&1 | grep -o with-http_stub_status_module

The status module is enabled if you see with-http_stub_status_module as output in the terminal.

If that command returns no output, you will need to enable the status module. You can use the --with-http_stub_status_module configuration parameter when [building NGINX from source][3]:

    ./configure \
    … \
    --with-http_stub_status_module
    make
    sudo make install

After verifying the module is enabled or enabling it yourself, you will also need to modify your NGINX configuration to set up a locally accessible URL (e.g., /nginx_status) for the status page:
    server {
        location /nginx_status {
            stub_status on;

            access_log off;
            allow 127.0.0.1;
            deny all;
        }
    }

Note: The server blocks of the NGINX config are usually found not in the master configuration file (e.g., /etc/nginx/nginx.conf) but in supplemental configuration files that are referenced by the master config. To find the relevant configuration files, first locate the master config by running:

    nginx -t

Open the master configuration file listed, and look for lines beginning with include near the end of the http block, such as:

    include /etc/nginx/conf.d/*.conf;
In one of the referenced config files you should find the main server block, which you can modify as above to configure NGINX metrics reporting. After changing any configurations, reload the configs by executing:

    nginx -s reload

Now you can view the status page to see your metrics:

    Active connections: 24
    server accepts handled requests
    1156958 1156958 4491319
    Reading: 0 Writing: 18 Waiting : 6
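If you want to consume these numbers programmatically rather than eyeball them, the plain-text page is straightforward to parse. Below is a minimal Python sketch of turning the stub_status output above into a dict of counters; the `parse_stub_status` helper is our own illustration, not part of NGINX or any monitoring tool.

```python
import re

def parse_stub_status(text):
    """Parse the plain-text output of NGINX's stub_status page into counters."""
    metrics = {}
    metrics["active"] = int(re.search(r"Active connections:\s+(\d+)", text).group(1))
    # The three numbers after the "server accepts handled requests" header.
    accepts, handled, requests = re.search(
        r"\s*(\d+)\s+(\d+)\s+(\d+)", text.split("server accepts handled requests")[1]
    ).groups()
    metrics.update(accepts=int(accepts), handled=int(handled), requests=int(requests))
    reading, writing, waiting = re.search(
        r"Reading:\s*(\d+)\s*Writing:\s*(\d+)\s*Waiting\s*:\s*(\d+)", text
    ).groups()
    metrics.update(reading=int(reading), writing=int(writing), waiting=int(waiting))
    # Dropped connections are the difference between accepts and handled.
    metrics["dropped"] = metrics["accepts"] - metrics["handled"]
    return metrics

sample = """Active connections: 24
server accepts handled requests
1156958 1156958 4491319
Reading: 0 Writing: 18 Waiting : 6"""

print(parse_stub_status(sample))
```

In practice you would fetch the page over HTTP at the interval your monitoring system expects, instead of using the hard-coded sample above.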
Note that if you are trying to access the status page from a remote machine, you will need to whitelist the remote machine’s IP address in your status configuration, just as 127.0.0.1 is whitelisted in the configuration snippet above.

The NGINX status page is an easy way to get a quick snapshot of your metrics, but for continuous monitoring you will need to automatically record that data at regular intervals. Parsers for the NGINX status page already exist for monitoring tools such as [Nagios][4] and [Datadog][5], as well as for the statistics collection daemon [collectD][6].

#### Metrics collection: NGINX Plus ####

The commercial NGINX Plus provides [many more metrics][7] through its ngx_http_status_module than are available in open-source NGINX. Among the additional metrics exposed by NGINX Plus are bytes streamed, as well as information about upstream systems and caches. NGINX Plus also reports counts of all HTTP status code types (1xx, 2xx, 3xx, 4xx, 5xx). A sample NGINX Plus status board is available [here][8].

![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png)

*Note: the “Active” connections on the NGINX Plus status dashboard are defined slightly differently than the Active state connections in the metrics collected via the open-source NGINX stub status module. In NGINX Plus metrics, Active connections do not include connections in the Waiting state (aka Idle connections).*

NGINX Plus also reports [metrics in JSON format][9] for easy integration with other monitoring systems. With NGINX Plus, you can see the metrics and health status [for a given upstream grouping of servers][10], or drill down to get a count of just the response codes [from a single server][11] in that upstream:

    {"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055}
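Because the counts come back as JSON, wiring them into your own tooling takes only a few lines. As an illustrative sketch (using the sample payload above rather than a live NGINX Plus endpoint), here is how you might derive an error rate from those counters in Python:

```python
import json

# Sample per-server response-code payload from above; in practice you
# would fetch this from the NGINX Plus JSON status endpoint.
payload = '{"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055}'

codes = json.loads(payload)
# Treat 4xx and 5xx responses as errors relative to all responses.
error_rate = (codes["4xx"] + codes["5xx"]) / codes["total"]
print(f"error rate: {error_rate:.6%}")
```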
To enable the NGINX Plus metrics dashboard, you can add a status server block inside the http block of your NGINX configuration. ([See the section above][12] on collecting metrics from open-source NGINX for instructions on locating the relevant config files.) For example, to set up a status dashboard at http://your.ip.address:8080/status.html and a JSON interface at http://your.ip.address:8080/status, you would add the following server block:

    server {
        listen 8080;
        root /usr/share/nginx/html;

        location /status {
            status;
        }

        location = /status.html {
        }
    }

The status pages should be live once you reload your NGINX configuration:

    nginx -s reload

The official NGINX Plus docs have [more details][13] on how to configure the expanded status module.
#### Metrics collection: NGINX logs ####

NGINX’s [log module][14] writes configurable access logs to a destination of your choosing. You can customize the format of your logs and the data they contain by [adding or subtracting variables][15]. The simplest way to capture detailed logs is to add the following line in the server block of your config file (see [the section][16] on collecting metrics from open-source NGINX for instructions on locating your config files):

    access_log logs/host.access.log combined;

After changing any NGINX configurations, reload the configs by executing:

    nginx -s reload

The “combined” log format, included by default, captures [a number of key data points][17], such as the actual HTTP request and the corresponding response code. In the example logs below, NGINX logged a 200 (success) status code for a request for /index.html and a 404 (not found) error for the nonexistent /fail.

    127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
    127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
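If you process these logs yourself rather than through one of the tools mentioned below, a regular expression can pull out the fields. A rough Python sketch follows; the pattern is our own approximation of the combined format, not an official definition.

```python
import re

# Approximate pattern for the "combined" log format:
# $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"
COMBINED = re.compile(
    r'(?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" '
        '404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) '
        'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"')

m = COMBINED.match(line)
print(m.group("status"), m.group("request"))
```

From the named groups you could, for example, tally status codes per minute to approximate the 4xx/5xx counts that NGINX Plus reports directly.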
You can log request processing time as well by adding a new log format to the http block of your NGINX config file:

    log_format nginx '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent $request_time '
                     '"$http_referer" "$http_user_agent"';

And by adding or modifying the access_log line in the server block of your config file:

    access_log logs/host.access.log nginx;

After reloading the updated configs (by running nginx -s reload), your access logs will include response times, as seen below. The units are seconds, with millisecond resolution. In this instance, the server received a request for /big.pdf, returning a 206 (partial content) status code after sending 33973115 bytes. Processing the request took 0.202 seconds (202 milliseconds):

    127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
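Once $request_time is in the logs, you can aggregate latency across requests. Here is an illustrative Python sketch that extracts the field by position, assuming the custom "nginx" format above; the first sample line is taken from the log excerpt, while the second is invented for the example.

```python
# Sample lines in the custom "nginx" log format defined above.
lines = [
    '127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "agent"',
    '127.0.0.1 - - [19/Feb/2015:15:50:41 -0500] "GET /index.html HTTP/1.1" 200 612 0.004 "-" "agent"',
]

times = []
for line in lines:
    # Field layout per the log_format:
    # ... "$request" $status $body_bytes_sent $request_time ...
    after_request = line.split('" ', 1)[1]            # text after the quoted request
    times.append(float(after_request.split(" ")[2]))  # third field is $request_time

print(f"{len(times)} requests, max request time {max(times):.3f}s")
```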
You can use a variety of tools and services to parse and analyze NGINX logs. For instance, [rsyslog][18] can monitor your logs and pass them to any number of log-analytics services; you can use a free, open-source tool such as [logstash][19] to collect and analyze logs; or you can use a unified logging layer such as [Fluentd][20] to collect and parse your NGINX logs.

### Conclusion ###

Which NGINX metrics you monitor will depend on the tools available to you, and whether the insight provided by a given metric justifies the overhead of monitoring that metric. For instance, is measuring error rates important enough to your organization to justify investing in NGINX Plus or implementing a system to capture and analyze logs?

At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][21], and get started right away with a [free trial of Datadog][22].

----------

Source Markdown for this post is available [on GitHub][23]. Questions, corrections, additions, etc.? Please [let us know][24].
--------------------------------------------------------------------------------

via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/

作者:K Young

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
[3]:http://wiki.nginx.org/InstallOptions
[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx
[5]:http://docs.datadoghq.com/integrations/nginx/
[6]:https://collectd.org/wiki/index.php/Plugin:nginx
[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data
[8]:http://demo.nginx.com/status.html
[9]:http://demo.nginx.com/status
[10]:http://demo.nginx.com/status/upstreams/demoupstreams
[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses
[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example
[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html
[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
[18]:http://www.rsyslog.com/
[19]:https://www.elastic.co/products/logstash
[20]:http://www.fluentd.org/
[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up
[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md
[24]:https://github.com/DataDog/the-monitor/issues
@ -1,3 +1,4 @@
translating wi-cuckoo
How to monitor NGINX with Datadog - Part 3
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)

@ -1,3 +1,4 @@
KevinSJ Translating
How to monitor NGINX - Part 1
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png)

@ -405,4 +406,4 @@ via: https://www.datadoghq.com/blog/how-to-monitor-nginx/
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
[20]:https://github.com/DataDog/the-monitor/issues
@ -1,188 +0,0 @@
zpl1025
Howto Configure FTP Server with Proftpd on Fedora 22
================================================================================
In this article, we'll learn about setting up an FTP server with Proftpd on a machine or server running Fedora 22. [ProFTPD][1] is a free and open source FTP daemon licensed under the GPL. It is among the most popular FTP servers on machines running Linux. Its primary design aims to be an FTP server with many advanced features, giving users more configuration options for easy customization. It includes a number of configuration options that are still not available in many other FTP daemons. It was initially developed as an alternative to the wu-ftpd server, with better security and configurability. An FTP server is a program that allows us to upload or download files and folders to and from a remote server using an FTP client. Some of the features of the ProFTPD daemon are listed below; you can find more features at [http://www.proftpd.org/features.html][2].

- It includes a per-directory ".ftpaccess" access configuration similar to Apache's ".htaccess".
- It features multiple virtual FTP servers with multiple user logins and anonymous FTP services.
- It can be run either as a stand-alone server or from inetd/xinetd.
- Its ownership, file/folder attributes and file/folder permissions are UNIX-based.
- It can be run in stand-alone mode in order to protect the system from damage that could be caused by root access.
- Its modular design makes it easily extensible with modules for LDAP servers, SSL/TLS encryption, RADIUS support, etc.
- IPv6 support is also included in the ProFTPD server.

Here are some easy steps we can follow to set up an FTP server with ProFTPD on the Fedora 22 operating system.
### 1. Installing ProFTPD ###

First of all, we'll install the Proftpd server on our box running Fedora 22 as its operating system. As the yum package manager has been deprecated, we'll use the latest and greatest package manager called dnf. DNF is a pretty easy to use and highly user-friendly package manager available in Fedora 22. We'll simply use it to install the proftpd daemon server. To do so, we'll need to run the following command in a terminal or console in sudo mode.

    $ sudo dnf -y install proftpd proftpd-utils
### 2. Configuring ProFTPD ###

Now, we'll make changes to some configurations of the daemon. To configure the daemon, we will need to edit /etc/proftpd.conf with a text editor. The main configuration file of the ProFTPD daemon is **/etc/proftpd.conf**, so any changes made to this file will affect the FTP server. Here are some changes we make in this initial step.

    $ sudo vi /etc/proftpd.conf

Next, after we open the file using a text editor, we'll want to change the ServerName and ServerAdmin to our hostname and email address respectively. Here's what we have changed those configs to.

    ServerName "ftp.linoxide.com"
    ServerAdmin arun@linoxide.com

After that, we'll add the following lines to the configuration file so that it logs access and auth events to their specified log files.

    ExtendedLog /var/log/proftpd/access.log WRITE,READ default
    ExtendedLog /var/log/proftpd/auth.log AUTH auth

![Configuring ProFTPD Config](http://blog.linoxide.com/wp-content/uploads/2015/06/configuring-proftpd-config.png)
### 3. Adding FTP users ###

After configuring the basics of the configuration file, we'll surely want to create an FTP user rooted at a specific directory of our choice. The current users that we use to log in to our machine are automatically enabled for the FTP service; we can even use them to log in to the FTP server. But, in this tutorial, we'll create a new user with a specified home directory for the FTP server.

Here, we'll create a new group named ftpgroup.

    $ sudo groupadd ftpgroup

Then, we'll add a new user arunftp into the group, with the home directory specified as /ftp-dir/

    $ sudo useradd -G ftpgroup arunftp -s /sbin/nologin -d /ftp-dir/

After the user has been created and added to the group, we'll set a password for the user arunftp.

    $ sudo passwd arunftp
    Changing password for user arunftp.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.

Now, we'll grant the FTP users read and write access to their home directories by setting the following SELinux booleans.

    $ sudo setsebool -P allow_ftpd_full_access=1
    $ sudo setsebool -P ftp_home_dir=1

Then, we'll set the sticky bit on that directory so that its contents cannot be removed or renamed by any other users.

    $ sudo chmod -R 1777 /ftp-dir/
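Mode 1777 combines world-writable permissions with the sticky bit: everyone may create files, but only a file's owner (or root) may delete or rename them, just as on /tmp. A small Python illustration using the standard stat module:

```python
import stat

# chmod 1777 = rwx for owner, group and others, plus the sticky bit.
mode = 0o1777
print(stat.filemode(mode | stat.S_IFDIR))  # the symbolic form ls -l would show

# The sticky bit (the trailing "t") is what restricts deletion and
# renaming of entries to each file's owner.
assert mode & stat.S_ISVTX
```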
### 4. Enabling TLS Support ###

FTP is considered less secure than the encryption methods used these days, as anybody sniffing the network card can read the data passing through FTP. So, we'll enable TLS encryption support in our FTP server. To do so, we'll need to edit the /etc/proftpd.conf configuration file. Before that, we'll back up our existing configuration file to make sure we can revert our configuration if anything unexpected happens.

    $ sudo cp /etc/proftpd.conf /etc/proftpd.conf.bak

Then, we'll edit the configuration file using our favorite text editor.

    $ sudo vi /etc/proftpd.conf

Then, we'll add the following lines just below the lines we configured in step 2.

    TLSEngine on
    TLSRequired on
    TLSProtocol SSLv23
    TLSLog /var/log/proftpd/tls.log
    TLSRSACertificateFile /etc/pki/tls/certs/proftpd.pem
    TLSRSACertificateKeyFile /etc/pki/tls/certs/proftpd.pem
![Enabling TLS Configuration](http://blog.linoxide.com/wp-content/uploads/2015/06/tls-configuration.png)

After finishing up with the configuration, we'll save and exit.

Next, we'll need to generate the SSL certificate as proftpd.pem inside the **/etc/pki/tls/certs/** directory. To do so, first we'll need to install openssl on our Fedora 22 machine.

    $ sudo dnf install openssl

Then, we'll generate the SSL certificate by running the following command.

    $ sudo openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/pki/tls/certs/proftpd.pem -out /etc/pki/tls/certs/proftpd.pem

We'll be asked for some information that will be associated with the certificate. After completing the required information, it will generate a 2048 bit RSA private key.

    Generating a 2048 bit RSA private key
    ...................+++
    ...................+++
    writing new private key to '/etc/pki/tls/certs/proftpd.pem'
    -----
You are about to be asked to enter information that will be incorporated
|
|
||||||
into your certificate request.
|
|
||||||
What you are about to enter is what is called a Distinguished Name or a DN.
|
|
||||||
There are quite a few fields but you can leave some blank
|
|
||||||
For some fields there will be a default value,
|
|
||||||
If you enter '.', the field will be left blank.
|
|
||||||
-----
|
|
||||||
Country Name (2 letter code) [XX]:NP
|
|
||||||
State or Province Name (full name) []:Narayani
|
|
||||||
Locality Name (eg, city) [Default City]:Bharatpur
|
|
||||||
Organization Name (eg, company) [Default Company Ltd]:Linoxide
|
|
||||||
Organizational Unit Name (eg, section) []:Linux Freedom
|
|
||||||
Common Name (eg, your name or your server's hostname) []:ftp.linoxide.com
|
|
||||||
Email Address []:arun@linoxide.com
|
|
||||||
|
|
||||||
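For scripted setups, the same certificate can be generated without the interactive prompts. This is only a sketch: the `-subj` string mirrors the answers above, `-days 365` is an assumed validity period not specified by this guide, and the `gen_cert` helper name is ours. It writes the key and certificate into one PEM file, as the ProFTPD configuration above expects:

```shell
# Sketch: non-interactive version of the certificate generation above.
# Key and certificate are concatenated into a single PEM file.
gen_cert() {
  local pem=$1 key crt
  key=$(mktemp) crt=$(mktemp)
  openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
      -keyout "$key" -out "$crt" \
      -subj "/C=NP/ST=Narayani/L=Bharatpur/O=Linoxide/OU=Linux Freedom/CN=ftp.linoxide.com" \
      2>/dev/null
  cat "$key" "$crt" > "$pem" && rm -f "$key" "$crt"
}
# e.g. gen_cert /etc/pki/tls/certs/proftpd.pem   (run as root)
```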
After that, we'll change the permissions of the generated certificate file in order to secure it.

    $ sudo chmod 600 /etc/pki/tls/certs/proftpd.pem
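A quick way to confirm the permission change took effect is to read the file mode back. A small sketch (the `cert_mode` helper is ours; `stat -c` is the GNU coreutils form):

```shell
# Sketch: print a file's octal mode so we can assert it is 600.
cert_mode() {
  stat -c '%a' "$1"
}
# e.g. test "$(cert_mode /etc/pki/tls/certs/proftpd.pem)" = 600 && echo OK
```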
### 5. Allowing FTP through Firewall ###

Now, we'll need to allow the FTP ports, which are usually blocked by the firewall by default. So, we'll open the ports and enable access to FTP through the firewall.

If **TLS/SSL encryption is enabled**, run the following commands.

    $ sudo firewall-cmd --add-port=1024-65534/tcp
    $ sudo firewall-cmd --add-port=1024-65534/tcp --permanent

If **TLS/SSL encryption is disabled**, run the following command.

    $ sudo firewall-cmd --permanent --zone=public --add-service=ftp

    success

Then, we'll need to reload the firewall configuration.

    $ sudo firewall-cmd --reload

    success
### 6. Starting and Enabling ProFTPD ###

After everything is set, we'll finally start ProFTPD and give it a try. To start the proftpd daemon, we'll need to run the following command.

    $ sudo systemctl start proftpd.service

Then, we'll enable proftpd to start on every boot.

    $ sudo systemctl enable proftpd.service

    Created symlink from /etc/systemd/system/multi-user.target.wants/proftpd.service to /usr/lib/systemd/system/proftpd.service.
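Those two commands can be wrapped in a small helper that fails loudly if the unit did not actually come up. This is a hedged sketch: the `ensure_service` name is ours, and it is a no-op on systems without systemd:

```shell
# Sketch: start a unit, enable it at boot, and verify both took effect.
ensure_service() {
  local unit=$1
  command -v systemctl >/dev/null || { echo "no systemd"; return 0; }
  sudo systemctl start "$unit"
  sudo systemctl enable "$unit"
  # Non-zero exit here means the unit is not active or not enabled.
  systemctl is-active --quiet "$unit" && systemctl is-enabled --quiet "$unit"
}
# e.g. ensure_service proftpd.service
```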
### 7. Logging into the FTP server ###

Now, if everything was configured as expected, we should be able to connect to the FTP server and log in with the details we set above. Here, we'll configure our FTP client, FileZilla, with Host as the **server's IP or URL**, Protocol as **FTP**, User as **arunftp**, and the password we set in step 3 above. If you followed step 4 to enable TLS support, set the Encryption type to **Require explicit FTP over TLS**; if you skipped step 4 and don't want TLS encryption, set the Encryption type to **Plain FTP**.

![FTP Login Details](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png)

To enter the above configuration, we'll need to go to the File menu, click on Site Manager, then click on New Site and configure it as illustrated above.

![FTP SSL Certificate](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-ssl-certificate.png)

Then, we're asked to accept the SSL certificate, which can be done by clicking OK. After that, we are able to upload and download the required files and folders from our FTP server.
### Conclusion ###

Finally, we have successfully installed and configured ProFTPD on our Fedora 22 box. ProFTPD is an awesomely powerful, highly configurable and extensible FTP daemon. The tutorial above illustrates how we can configure a secure FTP server with TLS encryption. Configuring the FTP server with TLS encryption is highly recommended, as it protects both the data transfer and the login with SSL certificate security. We haven't configured anonymous access to FTP here, because it is usually not recommended on a protected FTP system. FTP access makes it pretty easy for people to upload and download files with good performance. We can even change the ports users connect on for additional security. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:http://www.proftpd.org/
[2]:http://www.proftpd.org/features.html
translation by strugglingyouth
Setting Up ‘XR’ (Crossroads) Load Balancer for Web Servers on RHEL/CentOS
================================================================================

Crossroads is a service-independent, open source load-balancing and fail-over utility for Linux and TCP-based services. It can be used for HTTP, HTTPS, SSH, SMTP, DNS, etc. It is also a multi-threaded utility that runs in a single memory space, which increases performance when balancing load.

Let's have a look at how XR works. We can place XR between network clients and a nest of servers, where it dispatches client requests to the servers while balancing the load.

If a server is down, XR forwards the next client request to the next server in line, so the client experiences no downtime. Have a look at the diagram below to understand what kind of situation we are going to handle with XR.

![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.jpg)

Install XR Crossroads Load Balancer
There are two web servers and one gateway server, on which we install and set up XR to receive client requests and distribute them among the servers.

    XR Crossroads Gateway Server : 172.16.1.204
    Web Server 01 : 172.16.1.222
    Web Server 02 : 192.168.1.161

In the above scenario, my gateway server (i.e. XR Crossroads) bears the IP address 172.16.1.204, webserver01 is 172.16.1.222 and listens on port 8888, and webserver02 is 192.168.1.161 and listens on port 5555.

Now all I need is to balance the load of all the requests received by the XR gateway from the internet and distribute them between the two web servers.
### Step 1: Install XR Crossroads Load Balancer on Gateway Server ###

**1. Unfortunately, there aren't any binary RPM packages available for Crossroads; the only way to install XR Crossroads is from the source tarball.**

To compile XR, you must have the C++ compiler and GNU make utilities installed on the system in order to continue the installation error-free.

    # yum install gcc gcc-c++ make

Next, download the source tarball by going to the official site ([https://crossroads.e-tunity.com][1]) and grabbing the archived package (i.e. crossroads-stable.tar.gz).

Alternatively, you can use the wget utility to download the package, extract it in any location (e.g. /usr/src/), go to the unpacked directory and issue the "make install" command.

    # wget https://crossroads.e-tunity.com/downloads/crossroads-stable.tar.gz
    # tar -xvf crossroads-stable.tar.gz
    # cd crossroads-2.74/
    # make install
![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.png)

Install XR Crossroads Load Balancer

After the installation finishes, the binary files are created under /usr/sbin/ and the XR configuration within /etc, namely "xrctl.xml".

**2. As the last prerequisite, you need two web servers. For ease of use, I have created two python SimpleHTTPServer instances on one server.**

To see how to set up a python SimpleHTTPServer, read our article [Create Two Web Servers Easily Using SimpleHTTPServer][2].

As I said, we're using two web servers: webserver01 running on 172.16.1.222 on port 8888 and webserver02 running on 192.168.1.161 on port 5555.

![XR WebServer 01](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer01.jpg)

XR WebServer 01

![XR WebServer 02](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer02.jpg)

XR WebServer 02
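Backends like these can be imitated locally for testing. The sketch below uses python3's `http.server` (the modern replacement for the article's python SimpleHTTPServer); the directory names in the usage comments are just examples:

```shell
# Sketch: start a throwaway web server whose index page identifies it,
# so we can tell which backend answered a request.
start_backend() {
  local port=$1 dir=$2
  mkdir -p "$dir"
  echo "backend on port $port" > "$dir/index.html"
  (cd "$dir" && exec python3 -m http.server "$port" >/dev/null 2>&1) &
  echo $! > "$dir/pid"   # remember the server's PID for later cleanup
}
# start_backend 8888 /tmp/web01
# start_backend 5555 /tmp/web02
```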
### Step 2: Configure XR Crossroads Load Balancer ###

**3. All prerequisites are in place. Now what we have to do is configure the `xrctl.xml` file to distribute the load, received by the XR server from the internet, among the web servers.**

Now open the `xrctl.xml` file with the [vi/vim editor][3].

    # vim /etc/xrctl.xml

and make the changes as suggested below.
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system>
        <uselogger>true</uselogger>
        <logdir>/tmp</logdir>
      </system>
      <service>
        <name>Tecmint</name>
        <server>
          <address>172.16.1.204:8080</address>
          <type>tcp</type>
          <webinterface>0:8010</webinterface>
          <verbose>yes</verbose>
          <clientreadtimeout>0</clientreadtimeout>
          <clientwritetimeout>0</clientwritetimeout>
          <backendreadtimeout>0</backendreadtimeout>
          <backendwritetimeout>0</backendwritetimeout>
        </server>
        <backend>
          <address>172.16.1.222:8888</address>
        </backend>
        <backend>
          <address>192.168.1.161:5555</address>
        </backend>
      </service>
    </configuration>
![Configure XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Configure-XR-Crossroads-Load-Balancer.jpg)

Configure XR Crossroads Load Balancer

Here, you can see a very basic XR configuration done within xrctl.xml. I have defined what the XR server is, what the back-end servers and their ports are, and the web interface port for XR.

**4. Now you need to start the XR daemon by issuing the commands below.**

    # xrctl start
    # xrctl status

![Start XR Crossroads](http://www.tecmint.com/wp-content/uploads/2015/07/Start-XR-Crossroads.jpg)

Start XR Crossroads
**5. Okay great. Now it's time to check whether the configs are working fine. Open two web browsers, enter the IP address of the XR server with the port, and see the output.**

![Verify Web Server Load Balancing](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Web-Server-Load-Balancing.jpg)

Verify Web Server Load Balancing

Fantastic. It works fine. Now it's time to play with XR.

**6. Now it's time to log into the XR Crossroads dashboard and see the port we've configured for the web interface. Enter your XR server's IP address with the port number you configured for the web interface in xrctl.xml.**

    http://172.16.1.204:8010

![XR Crossroads Dashboard](http://www.tecmint.com/wp-content/uploads/2015/07/XR-Crossroads-Dashboard.jpg)

XR Crossroads Dashboard
This is what it looks like. It's easy to understand, user-friendly and easy to use. It shows how many connections each back-end server received in the top right corner, along with additional details about the incoming requests. You can even set the load weight each server should bear, the maximum number of connections, load average, etc.

The best part is that you can actually do this even without configuring xrctl.xml. The only thing you have to do is issue a command with the following syntax and it will get the job done.

    # xr --verbose --server tcp:172.16.1.204:8080 --backend 172.16.1.222:8888 --backend 192.168.1.161:5555

Explanation of the above syntax in detail:

- --verbose shows what happens as the command executes.
- --server defines the XR server you have installed the package on.
- --backend defines the web servers you need to balance the traffic to.
- tcp defines that it uses TCP services.
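XR's default dispatch is essentially round-robin. As a toy illustration of that idea (pure shell, not XR itself; the `next_backend` helper is ours), each call picks the next backend from the scenario above in rotation:

```shell
# Toy round-robin selector over the two backends used in this article.
backends=(172.16.1.222:8888 192.168.1.161:5555)
i=0
next_backend() {
  # Pick the current backend, then advance the rotation index.
  BACKEND=${backends[i % ${#backends[@]}]}
  i=$((i + 1))
}
# next_backend; echo "dispatching to $BACKEND"
```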
For more details about the documentation and configuration of Crossroads, please visit the official site: [https://crossroads.e-tunity.com/][4].

XR Crossroads enables many ways to enhance your server performance, guard against downtime, and make your admin tasks easier and handier. Hope you enjoyed the guide; feel free to comment below with suggestions and clarifications. Keep in touch with Tecmint for handy How-Tos.
--------------------------------------------------------------------------------

via: http://www.tecmint.com/setting-up-xr-crossroads-load-balancer-for-web-servers-on-rhel-centos/

作者:[Thilina Uvindasiri][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/thilidhanushka/
[1]:https://crossroads.e-tunity.com/
[2]:http://www.tecmint.com/python-simplehttpserver-to-create-webserver-or-serve-files-instantly/
[3]:http://www.tecmint.com/vi-editor-usage/
[4]:https://crossroads.e-tunity.com/
sources/tech/20150728 Process of the Linux kernel building.md
Process of the Linux kernel building
================================================================================

Introduction
--------------------------------------------------------------------------------

I will not tell you how to build and install a custom Linux kernel on your machine; you can find many [resources](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) that will help you do it. Instead, in this part we will look at what happens when you type `make` in the directory with the Linux kernel source code. When I just started to learn the source code of the Linux kernel, the [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) was the first file that I opened. And it was scary :) This [makefile](https://en.wikipedia.org/wiki/Make_%28software%29) contained `1591` lines of code at the time I wrote this part, which was the [third](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) release candidate.

This makefile is the top makefile in the Linux kernel source code, and the kernel build starts here. Yes, it is big, but moreover, if you've read the source code of the Linux kernel you may have noticed that every directory with source code has its own makefile. Of course it is not feasible to describe how every source file is compiled and linked, so we will only look at compilation for the standard case. You will not find here the building of the kernel's documentation, cleaning of the kernel source code, [tags](https://en.wikipedia.org/wiki/Ctags) generation, [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler) related stuff, etc. We will start from the `make` execution with the standard kernel configuration file and will finish with the building of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage).

It would be good if you're already familiar with the [make](https://en.wikipedia.org/wiki/Make_%28software%29) util, but I will try to describe every piece of code in this part anyway.

So let's start.
Preparation before the kernel compilation
---------------------------------------------------------------------------------

There are many things to prepare before the kernel compilation can start. The main points here are to find and configure the type of compilation, and to parse the command line arguments that are passed to the `make` util. So let's dive into the top `Makefile` of the Linux kernel.
The Linux kernel top `Makefile` is responsible for building two major products: [vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (the resident kernel image) and the modules (any module files). The [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel starts with the definition of the following variables:

```Makefile
VERSION = 4
PATCHLEVEL = 2
SUBLEVEL = 0
EXTRAVERSION = -rc3
NAME = Hurr durr I'ma sheep
```

These variables determine the current version of the Linux kernel and are used in different places, for example in the forming of the `KERNELVERSION` variable:

```Makefile
KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
```
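The same assembly can be sketched in plain shell. Note that make's `$(if ...)` treats any non-empty string, including `0`, as true, which is why `SUBLEVEL = 0` still appears in the version:

```shell
# Sketch: how KERNELVERSION is built from the four components above.
VERSION=4 PATCHLEVEL=2 SUBLEVEL=0 EXTRAVERSION=-rc3
KERNELVERSION="$VERSION${PATCHLEVEL:+.$PATCHLEVEL${SUBLEVEL:+.$SUBLEVEL}}$EXTRAVERSION"
echo "$KERNELVERSION"   # 4.2.0-rc3
```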
After this we can see a couple of `ifeq` conditions that check some of the parameters passed to `make`. The Linux kernel `makefiles` provide a special `make help` target that prints all available targets and some of the command line arguments that can be passed to `make`. For example: `make V=1` provides verbose builds. The first `ifeq` condition checks whether the `V=n` option is passed to make:

```Makefile
ifeq ("$(origin V)", "command line")
  KBUILD_VERBOSE = $(V)
endif
ifndef KBUILD_VERBOSE
  KBUILD_VERBOSE = 0
endif

ifeq ($(KBUILD_VERBOSE),1)
  quiet =
  Q =
else
  quiet=quiet_
  Q = @
endif

export quiet Q KBUILD_VERBOSE
```
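The quiet/Q mechanism from the `ifeq` block above can be reproduced with a tiny standalone Makefile (a sketch, not the kernel's makefile):

```shell
# Sketch: by default commands hide behind `@`; `make V=1` echoes them.
dir=$(mktemp -d)
printf 'ifeq ($(V),1)\n  Q =\nelse\n  Q = @\nendif\n\nall:\n\t$(Q)echo built\n' > "$dir/Makefile"
# make --no-print-directory -C "$dir"        -> prints only "built"
# make --no-print-directory -C "$dir" V=1    -> also echoes "echo built"
```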
If this option is passed to `make`, we set the `KBUILD_VERBOSE` variable to the value of the `V` option; otherwise we set it to zero. After this we check the value of the `KBUILD_VERBOSE` variable and set the values of the `quiet` and `Q` variables depending on it. The `@` symbol suppresses the output of a command, so if it is placed before a command we will see something like `CC scripts/mod/empty.o` instead of `Compiling .... scripts/mod/empty.o`. In the end we just export all of these variables. The next `ifeq` statement checks whether the `O=/dir` option was passed to `make`. This option allows locating all output files in the given `dir`:

```Makefile
ifeq ($(KBUILD_SRC),)

ifeq ("$(origin O)", "command line")
  KBUILD_OUTPUT := $(O)
endif

ifneq ($(KBUILD_OUTPUT),)
saved-output := $(KBUILD_OUTPUT)
KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \
                 && /bin/pwd)
$(if $(KBUILD_OUTPUT),, \
     $(error failed to create output directory "$(saved-output)"))

sub-make: FORCE
	$(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
	-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))

skip-makefile := 1
endif # ifneq ($(KBUILD_OUTPUT),)
endif # ifeq ($(KBUILD_SRC),)
```
We check the `KBUILD_SRC` variable that represents the top directory of the Linux kernel source code; if it is empty (and it is empty every time the makefile is executed for the first time), we set the `KBUILD_OUTPUT` variable to the value passed with the `O` option (if that option was passed). In the next step we check this `KBUILD_OUTPUT` variable, and if it is set, we do the following things:

* Store the value of `KBUILD_OUTPUT` in the temporary `saved-output` variable;
* Try to create the given output directory;
* Check that the directory was created, otherwise print an error;
* If the custom output directory was created successfully, execute `make` again with the new directory (see the `-C` option).

The next `ifeq` statements check whether the `C` or `M` options were passed to make:
```Makefile
ifeq ("$(origin C)", "command line")
  KBUILD_CHECKSRC = $(C)
endif
ifndef KBUILD_CHECKSRC
  KBUILD_CHECKSRC = 0
endif

ifeq ("$(origin M)", "command line")
  KBUILD_EXTMOD := $(M)
endif
```

The first, `C`, option tells the `makefile` that all `c` source code needs to be checked with the tool provided by the `$CHECK` environment variable; by default it is [sparse](https://en.wikipedia.org/wiki/Sparse). The second, `M`, option provides builds for external modules (we will not see this case in this part). After setting these variables, we check the `KBUILD_SRC` variable, and if it is not set we set the `srctree` variable to `.`:
```Makefile
ifeq ($(KBUILD_SRC),)
        srctree := .
endif

objtree := .
src     := $(srctree)
obj     := $(objtree)

export srctree objtree VPATH
```
That tells the `Makefile` that the source tree of the Linux kernel is in the current directory where the `make` command was executed. After this we set `objtree` and other variables to this directory and export these variables. The next step is getting a value for the `SUBARCH` variable that represents what the underlying architecture is:

```Makefile
SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
                                  -e s/sun4u/sparc64/ \
                                  -e s/arm.*/arm/ -e s/sa110/arm/ \
                                  -e s/s390x/s390/ -e s/parisc64/parisc/ \
                                  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
                                  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
```
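The `uname -m | sed` mapping can be exercised directly in the shell; here is a sketch covering a subset of the rules above (the `to_subarch` helper name is ours):

```shell
# Sketch: a subset of the SUBARCH mapping above, as a reusable function.
to_subarch() {
  echo "$1" | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
                  -e s/arm.*/arm/ -e s/sa110/arm/ \
                  -e s/aarch64.*/arm64/
}
# e.g. to_subarch "$(uname -m)"
```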
As you can see, it executes the [uname](https://en.wikipedia.org/wiki/Uname) util that prints information about the machine, operating system and architecture. It takes the output of `uname`, parses it and assigns the result to the `SUBARCH` variable. Once we have `SUBARCH`, we set the `SRCARCH` variable that provides the directory of the given architecture, and `hdr-arch` that provides the directory for the header files:

```Makefile
ifeq ($(ARCH),i386)
        SRCARCH := x86
endif
ifeq ($(ARCH),x86_64)
        SRCARCH := x86
endif

hdr-arch := $(SRCARCH)
```

Note that `ARCH` is an alias for `SUBARCH`. In the next step we set the `KCONFIG_CONFIG` variable that represents the path to the kernel configuration file; if it was not set before, it will be `.config` by default:
```Makefile
KCONFIG_CONFIG ?= .config
export KCONFIG_CONFIG
```

and the [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) that will be used during kernel compilation:

```Makefile
CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
	  else if [ -x /bin/bash ]; then echo /bin/bash; \
	  else echo sh; fi ; fi)
```
The next set of variables is related to the compilers that will be used during Linux kernel compilation. We set the host compilers for `c` and `c++` and the flags for them:

```Makefile
HOSTCC       = gcc
HOSTCXX      = g++
HOSTCFLAGS   = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89
HOSTCXXFLAGS = -O2
```

Next we will meet the `CC` variable that represents a compiler too, so why do we need the `HOST*` variables? `CC` is the target compiler that will be used during kernel compilation, while `HOSTCC` will be used during compilation of the set of `host` programs (we will see this soon). After this we can see the definitions of the `KBUILD_MODULES` and `KBUILD_BUILTIN` variables that are used to determine what to compile (kernel, modules or both):
```Makefile
KBUILD_MODULES :=
KBUILD_BUILTIN := 1

ifeq ($(MAKECMDGOALS),modules)
  KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1)
endif
```

Here we can see the definitions of these variables; if we pass only `modules` to `make`, the value of `KBUILD_BUILTIN` will depend on the `CONFIG_MODVERSIONS` kernel configuration parameter. The next step is the inclusion of the:
```Makefile
include scripts/Kbuild.include
```

`kbuild` file. [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt), or the `Kernel Build System`, is the special infrastructure that manages the building of the kernel and its modules. `kbuild` files have the same syntax as makefiles. The [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) file provides some generic definitions for the `kbuild` system. After including this `kbuild` file, we can see the definitions of the variables that are related to the different tools that will be used during kernel and modules compilation (like the linker, compilers, utils from [binutils](http://www.gnu.org/software/binutils/), etc.):

```Makefile
AS      = $(CROSS_COMPILE)as
LD      = $(CROSS_COMPILE)ld
CC      = $(CROSS_COMPILE)gcc
CPP     = $(CC) -E
AR      = $(CROSS_COMPILE)ar
NM      = $(CROSS_COMPILE)nm
STRIP   = $(CROSS_COMPILE)strip
OBJCOPY = $(CROSS_COMPILE)objcopy
OBJDUMP = $(CROSS_COMPILE)objdump
AWK     = awk
...
...
...
```
After the definition of these variables, we define two more: `USERINCLUDE` and `LINUXINCLUDE`. They contain the paths of the directories with headers (public headers for users in the first case and headers for the kernel in the second case):

```Makefile
USERINCLUDE    := \
		-I$(srctree)/arch/$(hdr-arch)/include/uapi \
		-Iarch/$(hdr-arch)/include/generated/uapi \
		-I$(srctree)/include/uapi \
		-Iinclude/generated/uapi \
		-include $(srctree)/include/linux/kconfig.h

LINUXINCLUDE    := \
		-I$(srctree)/arch/$(hdr-arch)/include \
		...
```
And the standard flags for the C compiler:

```Makefile
KBUILD_CFLAGS   := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
		   -fno-strict-aliasing -fno-common \
		   -Werror-implicit-function-declaration \
		   -Wno-format-security \
		   -std=gnu89
```

These are not the final compiler flags; they can be updated by other makefiles (for example, kbuilds from `arch/`). After all of this, all the variables are exported to be available in the other makefiles. The following two variables, `RCS_FIND_IGNORE` and `RCS_TAR_IGNORE`, contain files that are ignored in the version control system:
```Makefile
export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o \
			  -name CVS -o -name .pc -o -name .hg -o -name .git \) \
			  -prune -o
export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \
			 --exclude CVS --exclude .pc --exclude .hg --exclude .git
```
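The `-prune` idiom used by `RCS_FIND_IGNORE` can be tried directly with `find`; a small sketch (the `list_sources` helper is ours) that skips VCS directories while walking a tree:

```shell
# Sketch: list regular files while pruning version-control directories,
# the same way RCS_FIND_IGNORE does.
list_sources() {
  find "$1" \( -name .git -o -name .svn \) -prune -o -type f -print
}
# e.g. list_sources .
```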
That's all. We have finished with all the preparations; the next point is the building of `vmlinux`.

Directly to the kernel build
--------------------------------------------------------------------------------

As we have finished all the preparations, the next step in the root makefile is related to the kernel build. Until this moment nothing has appeared in our terminal after the execution of the `make` command, but now the first steps of the compilation start. At this point we need to go to line [598](https://github.com/torvalds/linux/blob/master/Makefile#L598) of the Linux kernel top makefile, where we find the `vmlinux` target:

```Makefile
all: vmlinux
include arch/$(SRCARCH)/Makefile
```

Don't worry that we have skipped many lines of the Makefile that are placed after `export RCS_FIND_IGNORE.....` and before `all: vmlinux.....`. That part of the makefile is responsible for the `make *.config` targets, and as I wrote in the beginning of this part we will only see the building of the kernel in a general way.
The `all:` target is the default when no target is given on the command line. You can see here that we include the architecture-specific makefile (in our case it is [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). From this moment we will continue from this makefile. As we can see, the `all` target depends on the `vmlinux` target that is defined a little lower in the top makefile:

```Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
```

`vmlinux` is the Linux kernel in a statically linked executable file format. The [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script combines the different compiled subsystems into vmlinux. The second target is `vmlinux-deps`, defined as:

```Makefile
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN)
```

and it consists of the set of `built-in.o` files from each top directory of the Linux kernel. Later, when we go through all the directories in the Linux kernel, `Kbuild` will compile all the `$(obj-y)` files. It then calls `$(LD) -r` to merge these files into one `built-in.o` file. For the moment we have no `vmlinux-deps`, so the `vmlinux` target will not be executed now. For me, `vmlinux-deps` contains the following files:
```
arch/x86/kernel/vmlinux.lds arch/x86/kernel/head_64.o
arch/x86/kernel/head64.o    arch/x86/kernel/head.o
init/built-in.o             usr/built-in.o
arch/x86/built-in.o         kernel/built-in.o
mm/built-in.o               fs/built-in.o
ipc/built-in.o              security/built-in.o
crypto/built-in.o           block/built-in.o
lib/lib.a                   arch/x86/lib/lib.a
lib/built-in.o              arch/x86/lib/built-in.o
drivers/built-in.o          sound/built-in.o
firmware/built-in.o         arch/x86/pci/built-in.o
arch/x86/power/built-in.o   arch/x86/video/built-in.o
net/built-in.o
```
The next targets that can be executed are the following:

```Makefile
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;

$(vmlinux-dirs): prepare scripts
	$(Q)$(MAKE) $(build)=$@
```

As we can see, `vmlinux-dirs` depends on two targets: `prepare` and `scripts`. The first, `prepare`, is defined in the top `Makefile` of the Linux kernel and executes three stages of preparations:

```Makefile
prepare: prepare0
prepare0: archprepare FORCE
	$(Q)$(MAKE) $(build)=.
archprepare: archheaders archscripts prepare1 scripts_basic

prepare1: prepare2 $(version_h) include/generated/utsrelease.h \
                   include/config/auto.conf
	$(cmd_crmodverdir)
prepare2: prepare3 outputmakefile asm-generic
```

The first `prepare0` expands to `archprepare`, which in turn expands to `archheaders` and `archscripts`, defined in the `x86_64`-specific [Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look at it. The `x86_64`-specific makefile starts with the definition of variables related to the architecture-specific configs ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs) etc.). After this it defines flags for compiling [16-bit](https://en.wikipedia.org/wiki/Real_mode) code, calculates the `BITS` variable that can be `32` for `i386` or `64` for `x86_64`, flags for the assembly source code, flags for the linker, and many, many more (you can find all the definitions in [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). The first target is `archheaders`, which generates the syscall table:

```Makefile
archheaders:
	$(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all
```

And the second target, `archscripts`, in this makefile is:

```Makefile
archscripts: scripts_basic
	$(Q)$(MAKE) $(build)=arch/x86/tools relocs
```

We can see that it depends on the `scripts_basic` target from the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile). First of all, the `scripts_basic` target executes make for the [scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) makefile:

```Makefile
scripts_basic:
	$(Q)$(MAKE) $(build)=scripts/basic
```

The `scripts/basic/Makefile` contains targets for the compilation of two host programs: `fixdep` and `bin2c`:

```Makefile
hostprogs-y := fixdep
hostprogs-$(CONFIG_BUILD_BIN2C) += bin2c
always := $(hostprogs-y)

$(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep
```

The first program is `fixdep`: it optimizes the list of dependencies generated by [gcc](https://gcc.gnu.org/) that tells make when to remake a source code file. The second program is `bin2c`, which depends on the value of the `CONFIG_BUILD_BIN2C` kernel configuration option and is a very small C program that converts a binary on stdin to a C include on stdout. You may note the strange notation here: `hostprogs-y`, etc. This notation is used in all `kbuild` files, and you can read more about it in the [documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt). In our case `hostprogs-y` tells `kbuild` that there is one host program named `fixdep` that will be built from `fixdep.c`, located in the same directory as the `Makefile`. The first output after we execute the `make` command in our terminal will be the result of this `kbuild` file:

```
$ make
  HOSTCC  scripts/basic/fixdep
```

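The idea behind `bin2c` can be illustrated with a few standard tools. This is only a sketch of the concept, not the kernel's actual `bin2c.c`; the array name `blob` is made up:

```shell
# Toy illustration of what bin2c does: turn bytes on stdin into a C
# array initializer on stdout ("OK" is 79 75 in decimal ASCII).
printf 'OK' | od -An -v -t u1 | tr -s ' ' '\n' | sed '/^$/d' | \
  awk 'BEGIN { printf "static const unsigned char blob[] = {" }
       { printf "%s%d", sep, $1; sep = "," }
       END { print "};" }'
```

For the input `OK` this prints `static const unsigned char blob[] = {79,75};`, which a C file can then `#include` to embed the binary data.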
As the `scripts_basic` target is executed, the `archscripts` target executes `make` for the [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) makefile with the `relocs` target:

```Makefile
	$(Q)$(MAKE) $(build)=arch/x86/tools relocs
```

The `relocs_32.c` and `relocs_64.c` files, which contain [relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) information, will be compiled, and we will see it in the `make` output:

```
  HOSTCC  arch/x86/tools/relocs_32.o
  HOSTCC  arch/x86/tools/relocs_64.o
  HOSTCC  arch/x86/tools/relocs_common.o
  HOSTLD  arch/x86/tools/relocs
```

After compiling `relocs.c` there is a check of `version.h`:

```Makefile
$(version_h): $(srctree)/Makefile FORCE
	$(call filechk,version.h)
	$(Q)rm -f $(old_version_h)
```

We can see it in the output:

```
  CHK     include/config/kernel.release
```

Next comes the building of the generic assembly headers with the `asm-generic` target into `arch/x86/include/generated/asm`, as specified in the top Makefile of the Linux kernel. After the `asm-generic` target, `archprepare` is done, so the `prepare0` target will be executed. As I wrote above:

```Makefile
prepare0: archprepare FORCE
	$(Q)$(MAKE) $(build)=.
```

Note the `build` variable here. It is defined in [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) and looks like this:

```Makefile
build := -f $(srctree)/scripts/Makefile.build obj
```

so in our case, where the target directory is the current source directory `.`, the call expands to:

```Makefile
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=.
```

The [scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) tries to find the `Kbuild` file in the directory given via the `obj` parameter and includes this `Kbuild` file:

```Makefile
include $(kbuild-file)
```

and builds targets from it. In our case `.` contains the [Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild) file that generates `kernel/bounds.s` and `arch/x86/kernel/asm-offsets.s`. After this the `prepare` target has finished its work. `vmlinux-dirs` also depends on the second target, `scripts`, which compiles the following programs: `file2alias`, `mk_elfconfig`, `modpost`, etc. After the compilation of these scripts/host programs our `vmlinux-dirs` target can be executed. First of all, let's try to understand what `vmlinux-dirs` contains. In my case it contains the paths of the following kernel directories:
```
init usr arch/x86 kernel mm fs ipc security crypto block
drivers sound firmware arch/x86/pci arch/x86/power
arch/x86/video net lib arch/x86/lib
```

We can find the definition of `vmlinux-dirs` in the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel:

```Makefile
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
                     $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
                     $(net-y) $(net-m) $(libs-y) $(libs-m)))

init-y      := init/
drivers-y   := drivers/ sound/ firmware/
net-y       := net/
libs-y      := lib/
...
...
...
```
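Before moving on, the `filter`/`patsubst` combination used in this definition can be tried standalone with a tiny throwaway makefile (the variable names `init-y`, `drivers-y`, and the file `demo.mk` below are made up for illustration):

```shell
# Toy makefile showing how $(filter %/, ...) keeps only "dir/" words
# and $(patsubst %/,%,...) strips the trailing slash, as vmlinux-dirs does.
cd "$(mktemp -d)"
printf 'init-y := init/\ndrivers-y := drivers/ sound/\ndirs := $(patsubst %%/,%%,$(filter %%/, $(init-y) $(drivers-y) extra.o))\nall:\n\t@echo $(dirs)\n' > demo.mk
make -s -f demo.mk
```

Note how `extra.o` is dropped by `filter %/` because it does not end with a slash, so only real directory entries survive.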
Here we remove the `/` symbol from each directory with the help of the `patsubst` and `filter` functions and put the result into `vmlinux-dirs`. So we have the list of directories in `vmlinux-dirs` and the following code:

```Makefile
$(vmlinux-dirs): prepare scripts
	$(Q)$(MAKE) $(build)=$@
```

The `$@` represents each member of `vmlinux-dirs` here, which means that make will go recursively over all directories from `vmlinux-dirs` and their internal directories (depending on configuration) and will execute `make` in there. We can see it in the output:
```
  CC      init/main.o
  CHK     include/generated/compile.h
  CC      init/version.o
  CC      init/do_mounts.o
  ...
  CC      arch/x86/crypto/glue_helper.o
  AS      arch/x86/crypto/aes-x86_64-asm_64.o
  CC      arch/x86/crypto/aes_glue.o
  ...
  AS      arch/x86/entry/entry_64.o
  AS      arch/x86/entry/thunk_64.o
  CC      arch/x86/entry/syscall_64.o
```

The source code in each directory will be compiled and linked into `built-in.o`:
```
$ find . -name built-in.o
./arch/x86/crypto/built-in.o
./arch/x86/crypto/sha-mb/built-in.o
./arch/x86/net/built-in.o
./init/built-in.o
./usr/built-in.o
...
...
```

Ok, all the `built-in.o` objects are built; now we can get back to the `vmlinux` target. As you remember, the `vmlinux` target is in the top Makefile of the Linux kernel. Before the linking of `vmlinux` it builds [samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation) etc., but I will not describe that here, as I wrote in the beginning of this part.
```Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
	...
	...
	+$(call if_changed,link-vmlinux)
```

As you can see, its main purpose is to call the [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script, which links all the `built-in.o` objects into one statically linked executable and creates the [System.map](https://en.wikipedia.org/wiki/System.map). In the end we will see the following output:
```
  LINK    vmlinux
  LD      vmlinux.o
  MODPOST vmlinux.o
  GEN     .version
  CHK     include/generated/compile.h
  UPD     include/generated/compile.h
  CC      init/version.o
  LD      init/built-in.o
  KSYM    .tmp_kallsyms1.o
  KSYM    .tmp_kallsyms2.o
  LD      vmlinux
  SORTEX  vmlinux
  SYSMAP  System.map
```

and find `vmlinux` and `System.map` in the root of the Linux kernel source tree:

```
$ ls vmlinux System.map
System.map  vmlinux
```
That's all, `vmlinux` is ready. The next step is the creation of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage).

Building bzImage
--------------------------------------------------------------------------------

The `bzImage` is the compressed Linux kernel image. We can get it by executing `make bzImage` after `vmlinux` is built. Alternatively, we can just execute `make` without arguments and get `bzImage` anyway, because it is the default image:
```Makefile
all: bzImage
```

in [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look at this target; it will help us understand how this image is built. As I already said, the `bzImage` target is defined in [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) and looks like this:
```Makefile
bzImage: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
	$(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
	$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@
```

We can see here that, first of all, `make` is called for the boot directory, which in our case is:
```Makefile
boot := arch/x86/boot
```

The main goal now is to build the source code in the `arch/x86/boot` and `arch/x86/boot/compressed` directories, build `setup.bin` and `vmlinux.bin`, and build the `bzImage` from them in the end. The first target in [arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) is `$(obj)/setup.elf`:
```Makefile
$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
	$(call if_changed,ld)
```

We already have the `setup.ld` linker script in the `arch/x86/boot` directory, and `SETUP_OBJS` expands to all the source files from the `boot` directory. We can see the first output:
```
  AS      arch/x86/boot/bioscall.o
  CC      arch/x86/boot/cmdline.o
  AS      arch/x86/boot/copy.o
  HOSTCC  arch/x86/boot/mkcpustr
  CPUSTR  arch/x86/boot/cpustr.h
  CC      arch/x86/boot/cpu.o
  CC      arch/x86/boot/cpuflags.o
  CC      arch/x86/boot/cpucheck.o
  CC      arch/x86/boot/early_serial_console.o
  CC      arch/x86/boot/edd.o
```

The next source code file is [arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S), but we can't build it yet because this target depends on the following two header files:
```Makefile
$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h
```

The first is `voffset.h`, generated by a `sed` script that extracts two addresses from `vmlinux` using the `nm` utility:

```C
#define VO__end 0xffffffff82ab0000
#define VO__text 0xffffffff81000000
```
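A rough sketch of how such defines can be derived from `nm` output (the `sed` expression here is illustrative, not the exact kernel script, and the addresses are canned sample data, not real kernel addresses):

```shell
# Canned nm-style output for two symbols (addresses are sample values).
nm_out='ffffffff81000000 T _text
ffffffff82ab0000 B _end'
# Turn "address TYPE symbol" lines into "#define VO_<symbol> 0x<address>".
voffset=$(printf '%s\n' "$nm_out" | \
  sed -n -E 's/^([0-9a-f]+) [A-Z] (_text|_end)$/#define VO_\2 0x\1/p')
printf '%s\n' "$voffset"
```

This prints a `#define` per matched symbol, which is essentially what lands in `voffset.h`.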
They are the start and the end of the kernel. The second is `zoffset.h`, which depends on the `vmlinux` target from [arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile):

```Makefile
$(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
	$(call if_changed,zoffset)
```

The `$(obj)/compressed/vmlinux` target depends on `vmlinux-objs-y`, which compiles the source code files from the [arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) directory, generates `vmlinux.bin` and `vmlinux.bin.bz2`, and compiles a program: `mkpiggy`. We can see this in the output:
```
  LDS     arch/x86/boot/compressed/vmlinux.lds
  AS      arch/x86/boot/compressed/head_64.o
  CC      arch/x86/boot/compressed/misc.o
  CC      arch/x86/boot/compressed/string.o
  CC      arch/x86/boot/compressed/cmdline.o
  OBJCOPY arch/x86/boot/compressed/vmlinux.bin
  BZIP2   arch/x86/boot/compressed/vmlinux.bin.bz2
  HOSTCC  arch/x86/boot/compressed/mkpiggy
```

Here `vmlinux.bin` is `vmlinux` with debugging information and comments stripped, and `vmlinux.bin.bz2` is the compressed `vmlinux.bin.all` plus the `u32` size of `vmlinux.bin.all`. `vmlinux.bin.all` is `vmlinux.bin + vmlinux.relocs`, where `vmlinux.relocs` is `vmlinux` processed by the `relocs` program (see above). Once we have these files, the `piggy.S` assembly file will be generated with the `mkpiggy` program and compiled:
```
  MKPIGGY arch/x86/boot/compressed/piggy.S
  AS      arch/x86/boot/compressed/piggy.o
```

This assembly file contains the computed offset of the compressed kernel. After this we can see that `zoffset.h` is generated:

```
  ZOFFSET arch/x86/boot/zoffset.h
```

As `zoffset.h` and `voffset.h` are now generated, the compilation of the source code files from [arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) can continue:
```
  AS      arch/x86/boot/header.o
  CC      arch/x86/boot/main.o
  CC      arch/x86/boot/mca.o
  CC      arch/x86/boot/memory.o
  CC      arch/x86/boot/pm.o
  AS      arch/x86/boot/pmjump.o
  CC      arch/x86/boot/printf.o
  CC      arch/x86/boot/regs.o
  CC      arch/x86/boot/string.o
  CC      arch/x86/boot/tty.o
  CC      arch/x86/boot/video.o
  CC      arch/x86/boot/video-mode.o
  CC      arch/x86/boot/video-vga.o
  CC      arch/x86/boot/video-vesa.o
  CC      arch/x86/boot/video-bios.o
```

Once all the source files are compiled, they are linked into `setup.elf`:
```
  LD      arch/x86/boot/setup.elf
```

or:

```
ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf
```
The last two steps are the creation of `setup.bin`, which contains the compiled code from the `arch/x86/boot/*` directory:

```
objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin
```

and the creation of `vmlinux.bin` from `vmlinux`:

```
objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin
```

In the end we compile the host program [arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c), which creates our `bzImage` from `setup.bin` and `vmlinux.bin`:
```
arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage
```

The `bzImage` is essentially the concatenation of `setup.bin` and `vmlinux.bin`. In the end we will see the output familiar to everyone who has ever built the Linux kernel from source:

```
Setup is 16268 bytes (padded to 16384 bytes).
System is 4704 kB
CRC 94a88f9a
Kernel: arch/x86/boot/bzImage is ready  (#5)
```
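The concatenation step described above can be mimicked with plain shell. This is a toy illustration only: the real `build` tool also patches fields in the setup header and pads `setup.bin` to a sector boundary, and the placeholder contents below are made up:

```shell
# Toy model of bzImage assembly: the image is setup.bin followed by
# vmlinux.bin (the real tool also fixes up the setup header).
cd "$(mktemp -d)"
printf 'SETUP-PART'  > setup.bin     # stands in for the real setup code
printf 'KERNEL-PART' > vmlinux.bin   # stands in for the compressed kernel
cat setup.bin vmlinux.bin > bzImage
wc -c setup.bin vmlinux.bin bzImage  # image size is the sum of the parts
```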
That's all.

Conclusion
================================================================================

This is the end of this part, and here we saw all the steps from the execution of the `make` command to the generation of the `bzImage`. I know the Linux kernel makefiles and the process of building the Linux kernel may seem confusing at first glance, but it is not so hard. I hope this part helps you understand the process of building the Linux kernel.
Links
================================================================================

* [GNU make util](https://en.wikipedia.org/wiki/Make_%28software%29)
* [Linux kernel top Makefile](https://github.com/torvalds/linux/blob/master/Makefile)
* [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler)
* [Ctags](https://en.wikipedia.org/wiki/Ctags)
* [sparse](https://en.wikipedia.org/wiki/Sparse)
* [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage)
* [uname](https://en.wikipedia.org/wiki/Uname)
* [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29)
* [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt)
* [binutils](http://www.gnu.org/software/binutils/)
* [gcc](https://gcc.gnu.org/)
* [Documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt)
* [System.map](https://en.wikipedia.org/wiki/System.map)
* [Relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29)
--------------------------------------------------------------------------------

via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled.md

Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
Translating by dingdongnigetou

Understanding Shell Commands Easily Using “Explain Shell” Script in Linux
================================================================================
While working on the Linux platform, all of us need help with shell commands at some point. Although built-in help such as man pages and the whatis command is useful, man page output is too lengthy, and unless one already has some experience with Linux it is very difficult to get any help from the massive man pages. The output of the whatis command is rarely more than one line, which is not sufficient for newbies.

![Explain Shell Commands in Linux Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Explain-Shell-Commands-in-Linux-Shell.jpeg)

Explain Shell Commands in Linux Shell
There are third-party applications like ‘cheat‘, which we have covered here: “[Commandline Cheat Sheet for Linux Users][1]”. Although Cheat is an exceptionally good application that shows help on shell commands even when the computer is not connected to the Internet, it shows help on predefined commands only.

There is a small piece of code written by Jackson which is able to explain shell commands within the bash shell very effectively, and the best part is that you don't need to install any third-party package. He named the file containing this piece of code `'explain.sh'`.
#### Features of Explain Utility ####

- Easy code embedding.
- No third-party utility needs to be installed.
- Outputs just enough information in the course of explanation.
- Requires an Internet connection to work.
- Pure command-line utility.
- Able to explain most shell commands in the bash shell.
- No root account involvement required.
**Prerequisite**

The only requirement is the `curl` package. In most of today's latest Linux distributions, the curl package comes pre-installed; if not, you can install it using your package manager as shown below.

    # apt-get install curl 	[On Debian systems]
    # yum install curl 		[On CentOS systems]
### Installation of explain.sh Utility in Linux ###

We have to insert the below piece of code as-is into the `~/.bashrc` file. The code should be inserted into each user's `.bashrc` file. It is suggested to insert the code into the user's .bashrc only, and not into the .bashrc of the root user.

Notice that the first line of code, which starts with a hash `(#)`, is optional and added just to differentiate the rest of the code in .bashrc.

    # explain.sh marks the beginning of the code we are inserting at the bottom of the .bashrc file.
    # explain.sh begins
    explain () {
        if [ "$#" -eq 0 ]; then
            while read -p "Command: " cmd; do
                curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$cmd"
            done
            echo "Bye!"
        elif [ "$#" -eq 1 ]; then
            curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$1"
        else
            echo "Usage"
            echo "explain                  interactive mode."
            echo "explain 'cmd -o | ...'   one quoted command to explain it."
        fi
    }
### Working of explain.sh Utility ###

After inserting the code and saving the file, you must log out of the current session and log back in for the changes to take effect. Everything is taken care of by the ‘curl’ command, which transfers the input command and the flags that need explanation to the mankier server and then prints just the necessary information to the Linux command line. Needless to say, to use this utility you must always be connected to the Internet.

Let's test a few examples of commands whose meaning I don't know with the explain.sh script.
**1. I forgot what ‘du -h‘ does. All I need to do is:**

    $ explain 'du -h'

![Get Help on du Command](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-du-Command.png)

Get Help on du Command
**2. If you forgot what ‘tar -zxvf‘ does, you may simply do:**

    $ explain 'tar -zxvf'

![Tar Command Help](http://www.tecmint.com/wp-content/uploads/2015/07/Tar-Command-Help.png)

Tar Command Help
**3. One of my friends often confuses the use of the ‘whatis‘ and ‘whereis‘ commands, so I advised him:**

Go to interactive mode by simply typing the explain command on the terminal.

    $ explain

and then type the commands one after another to see what they do in one window, as:

    Command: whatis
    Command: whereis

![Whatis Whereis Commands Help](http://www.tecmint.com/wp-content/uploads/2015/07/Whatis-Whereis-Commands-Help.png)

Whatis Whereis Commands Help
To exit interactive mode he just needs to press `Ctrl + C`.

**4. You can ask it to explain more than one command chained in a pipeline:**

    $ explain 'ls -l | grep -i Desktop'

![Get Help on Multiple Commands](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-Multiple-Commands.png)

Get Help on Multiple Commands
Similarly, you can ask your shell to explain any shell command. All you need is a working Internet connection. The output is generated from the explanation fetched from the server, and hence the output is not customizable.

For me this utility is really helpful, and it has earned a place in my .bashrc. Let me know your thoughts on this project. How can it be useful for you? Are the explanations satisfactory?

Provide us with your valuable feedback in the comments below. Like and share us, and help us spread the word.
--------------------------------------------------------------------------------

via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/

Author: [Avishek Kumar][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/cheat-command-line-cheat-sheet-for-linux-users/
How to Setup iTOP (IT Operational Portal) on CentOS 7
================================================================================

iTop is a simple, open source, web based IT service management tool. It has all of the ITIL functionality, including a service desk, configuration management, incident management, problem management, change management and service management. iTop relies on Apache/IIS, MySQL and PHP, so it can run on any operating system supporting these applications. Since iTop is a web based application, you don't need to deploy any client software on each user's PC. A simple web browser is enough to perform the day-to-day operations of an IT environment with iTop.

To install and configure iTop we will be using CentOS 7 as the base operating system, with a basic LAMP stack installed on it, which covers almost all of its prerequisites.
### Downloading iTOP ###

The iTop download package is hosted on SourceForge; we can get its link from the official website [link][1].

![itop download](http://blog.linoxide.com/wp-content/uploads/2015/07/1-itop-download.png)

We will take the download link from here and fetch the zipped file onto the server with the wget command, as below.

    [root@centos-007 ~]# wget http://downloads.sourceforge.net/project/itop/itop/2.1.0/iTop-2.1.0-2127.zip
### iTop Extensions and Web Setup ###

Using the unzip command, we will extract the downloaded package into the document root directory of our Apache web server, in a new directory named itop.

    [root@centos-7 ~]# ls
    iTop-2.1.0-2127.zip
    [root@centos-7 ~]# unzip iTop-2.1.0-2127.zip -d /var/www/html/itop/

List the folder to view the installation packages in it.

    [root@centos-7 ~]# ls -lh /var/www/html/itop/
    total 68K
    -rw-r--r--.  1 root root 1.4K Dec 17  2014 INSTALL
    -rw-r--r--.  1 root root  35K Dec 17  2014 LICENSE
    -rw-r--r--.  1 root root  23K Dec 17  2014 README
    drwxr-xr-x. 19 root root 4.0K Jul 14 13:10 web
Here is all the extensions that we can install.
|
||||||
|
|
||||||
|
[root@centos-7 2.x]# ls
|
||||||
|
authent-external itop-backup itop-config-mgmt itop-problem-mgmt itop-service-mgmt-provider itop-welcome-itil
|
||||||
|
authent-ldap itop-bridge-virtualization-storage itop-datacenter-mgmt itop-profiles-itil itop-sla-computation version.xml
|
||||||
|
authent-local itop-change-mgmt itop-endusers-devices itop-request-mgmt itop-storage-mgmt wizard-icons
|
||||||
|
installation.xml itop-change-mgmt-itil itop-incident-mgmt-itil itop-request-mgmt-itil itop-tickets
|
||||||
|
itop-attachments itop-config itop-knownerror-mgmt itop-service-mgmt itop-virtualization-mgmt
|
||||||
|
|
||||||
|
Now from the extracted web directory, moving through different data models we will migrate the required extensions from the datamodels into the web extensions directory of web document root directory with copy command.
|
||||||
|
|
||||||
|
[root@centos-7 2.x]# pwd
|
||||||
|
/var/www/html/itop/web/datamodels/2.x
|
||||||
|
[root@centos-7 2.x]# cp -r itop-request-mgmt itop-service-mgmt itop-service-mgmt itop-config itop-change-mgmt /var/www/html/itop/web/extensions/
|
||||||
|
|
||||||
|
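Before moving on, a quick sanity check can confirm that the copies landed where iTop expects them. This is a minimal sketch assuming the paths used in the steps above; on a machine without iTop installed the directories will simply be reported missing.

```shell
#!/bin/sh
# Check that each extension copied above now exists under the web extensions directory.
extdir=/var/www/html/itop/web/extensions
missing=0
for ext in itop-request-mgmt itop-service-mgmt itop-config itop-change-mgmt; do
    if [ -d "$extdir/$ext" ]; then
        echo "$ext: present"
    else
        echo "$ext: missing"
        missing=$((missing+1))
    fi
done
echo "missing count: $missing"
```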
### Installing iTop Web Interface ###

Most of our server-side settings and configurations are done. Finally, we need to complete the web interface installation process to finalize the setup.

Open your favorite web browser and access the iTop web directory using your server's IP address or FQDN, like:

    http://servers_ip_address/itop/web/

You will be redirected to the iTop web installation wizard. Let's configure it as per your requirements, as we did here in this tutorial.

#### Prerequisites Validation ####

At this stage you will be prompted with the welcome screen showing the prerequisites validation. If you get warnings, you have to resolve them by installing the missing prerequisites.

![mcrypt missing](http://blog.linoxide.com/wp-content/uploads/2015/07/2-itop-web-install.png)

At this stage one optional package, php-mcrypt, was missing. Download the following rpm packages and install them.

    [root@centos-7 ~]# yum localinstall php-mcrypt-5.3.3-1.el6.x86_64.rpm libmcrypt-2.5.8-9.el6.x86_64.rpm

After successful installation of the php-mcrypt library we need to restart the Apache web service, then reload the web page; this time the prerequisites validation should be OK.
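A quick way to confirm the fix before reloading the page is to ask PHP itself whether the extension is loaded. This is a small sketch, not part of the original walkthrough; it only reports status and assumes nothing beyond a standard php CLI.

```shell
#!/bin/sh
# Report whether the mcrypt PHP extension is loaded; php itself may be absent.
if command -v php >/dev/null 2>&1 && php -m 2>/dev/null | grep -qi '^mcrypt$'; then
    status="mcrypt present"
else
    status="mcrypt missing - install php-mcrypt and restart httpd"
fi
echo "$status"
```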
#### Install or Upgrade iTop ####

Here we will choose a fresh installation, as we have not installed iTop previously on this server.

![Install New iTop](http://blog.linoxide.com/wp-content/uploads/2015/07/3.png)

#### iTop License Agreement ####

Choose the option to accept the terms of the licenses of all the components of iTop and click "NEXT".

![License Agreement](http://blog.linoxide.com/wp-content/uploads/2015/07/4.png)

#### Database Configuration ####

Here we configure the database connection by giving our database server's credentials, and then choose the option to create a new database, as shown.

![DB Connection](http://blog.linoxide.com/wp-content/uploads/2015/07/5.png)

#### Administrator Account ####

In this step we will configure an admin account by filling out its login details.

![Admin Account](http://blog.linoxide.com/wp-content/uploads/2015/07/6.png)

#### Miscellaneous Parameters ####

Choose the additional parameters, deciding whether you want to install with demo contents or with a fresh database, and proceed.

![Misc Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/7.png)

### iTop Configuration Management ###

The options below allow you to configure the types of elements that are to be managed inside iTop, such as all the base objects that are mandatory in the iTop CMDB, data center devices, storage devices, and virtualization.

![Conf Management](http://blog.linoxide.com/wp-content/uploads/2015/07/8.png)

#### Service Management ####

Select the choice that best describes the relationships between the services and the IT infrastructure in your environment. We are choosing "Service Management for Service Providers" here.

![Service Management](http://blog.linoxide.com/wp-content/uploads/2015/07/9.png)

#### iTop Tickets Management ####

From the different available options, we will select the "ITIL Compliant Tickets Management" option to have different types of tickets for managing user requests and incidents.

![Ticket Management](http://blog.linoxide.com/wp-content/uploads/2015/07/10.png)

#### Change Management Options ####

Select the type of tickets you want to use in order to manage changes to the IT infrastructure from the available options. We are choosing the "ITIL change management" option here.

![ITIL Change](http://blog.linoxide.com/wp-content/uploads/2015/07/11.png)

#### iTop Extensions ####

In this section we can select the additional extensions to install, or uncheck the ones that we want to skip.

![iTop Extensions](http://blog.linoxide.com/wp-content/uploads/2015/07/13.png)
### Ready to Start Web Installation ###

Now we are ready to start installing the components that we chose in the previous steps. We can also expand the installation parameters drop-down to review our configuration.

Once you have confirmed the installation parameters, click on the install button.

![Installation Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/16.png)

Let's wait for the progress bar to complete the installation process. It might take a few minutes.

![iTop Installation Process](http://blog.linoxide.com/wp-content/uploads/2015/07/17.png)

### iTop Installation Done ###

Our iTop installation setup is complete; we just need to perform a simple manual operation as shown and then click to enter iTop.

![iTop Done](http://blog.linoxide.com/wp-content/uploads/2015/07/18.png)

### Welcome to iTop (IT Operational Portal) ###

![itop welcome note](http://blog.linoxide.com/wp-content/uploads/2015/07/20.png)

### iTop Dashboard ###

From here you can manage the configuration of everything: servers, computers, contacts, locations, contracts, network devices, and more; you can even create your own types. The installed CMDB module alone is great, and it is an essential part of any larger IT environment.

![iTop Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/07/19.png)

### Conclusion ###

iTop is one of the best open source service desk solutions. We have successfully installed and configured it on our CentOS 7 cloud host. The most powerful aspect of iTop is the ease with which it can be customized via its "extensions". Feel free to comment if you face any trouble during its setup.
--------------------------------------------------------------------------------

via: http://linoxide.com/tools/setup-itop-centos-7/

作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/
[1]:http://www.combodo.com/spip.php?page=rubrique&id_rubrique=8
Howto Configure Nginx as Reverse Proxy / Load Balancer with Weave and Docker
================================================================================

Hi everyone, today we'll learn how to configure Nginx as a reverse proxy / load balancer with Weave and Docker. Weave creates a virtual network that connects Docker containers with each other, deploys across multiple hosts and enables their automatic discovery. It allows us to focus on developing our application, rather than our infrastructure. It provides such an awesome environment that the application uses the network as if its containers were all plugged into the same network, without the need to configure ports, mappings, links, etc. The services of the application containers on the network are easily accessible to the external world, no matter where they are running. Here, in this tutorial, we'll be using Weave to quickly and easily deploy the nginx web server as a load balancer for a simple php application running in Docker containers on multiple nodes in Amazon Web Services. We will also be introduced to WeaveDNS, which provides a simple way for containers to find each other using hostnames, with no code changes, and tells other containers to connect to those names.

Here, in this tutorial, we will use Nginx to load balance requests to a set of containers running Apache. Below are the simple and easy steps to use Weave to configure nginx as a load balancer running in an Ubuntu Docker container.
### 1. Setting up AWS Instances ###

First of all, we'll need to set up Amazon Web Services instances so that we can run Docker containers with Weave, using Ubuntu as the operating system. We will use the [AWS CLI][1] to set up and configure two AWS EC2 instances. Here, in this tutorial, we'll use the smallest available instance type, t1.micro. We will need a valid **Amazon Web Services account** with the AWS CLI set up and configured. First, we'll clone the weave repository from GitHub by running the following commands in the AWS CLI.

    $ git clone http://github.com/fintanr/weave-gs
    $ cd weave-gs/aws-nginx-ubuntu-simple

After cloning the repository, we run the script that deploys two t1.micro instances running Weave and Docker on Ubuntu.

    $ sudo ./demo-aws-setup.sh

We'll need the IP addresses of these instances later in this tutorial. They are stored in an environment file, weavedemo.env, which is created during the execution of demo-aws-setup.sh. To get those IP addresses, we run the following command, which gives output similar to the one below.

    $ cat weavedemo.env

    export WEAVE_AWS_DEMO_HOST1=52.26.175.175
    export WEAVE_AWS_DEMO_HOST2=52.26.83.141
    export WEAVE_AWS_DEMO_HOSTCOUNT=2
    export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141)

Please note these will not be the IP addresses for your setup; AWS dynamically allocates IP addresses to the instances.

As we are using bash, we will just source this file using the command below.

    . ./weavedemo.env
### 2. Launching Weave and WeaveDNS ###

After deploying the instances, we'll launch weave and weavedns on each host. Weave and weavedns allow us to easily deploy our containers to a new infrastructure and configuration without changing any code, and without needing to understand concepts such as ambassador containers and links. Here are the commands to launch them on the first host.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
    $ sudo weave launch
    $ sudo weave launch-dns 10.2.1.1/24

Next, we'll launch them on our second host.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
    $ sudo weave launch $WEAVE_AWS_DEMO_HOST1
    $ sudo weave launch-dns 10.2.1.2/24
### 3. Launching Application Containers ###

Now we'll launch six containers across our two hosts, each running an Apache2 web server instance with our simple php site. First, we'll run the following commands, which start 3 Apache2 containers on our 1st instance.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
    $ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache
    $ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache
    $ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache

After that, we'll launch 3 more Apache2 containers on our 2nd instance, as shown below.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
    $ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache
    $ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache
    $ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache

Note: Here, the --with-dns option tells the container to use weavedns to resolve names, and -h x.weave.local makes the host resolvable via WeaveDNS.
### 4. Launching Nginx Container ###

Once our application containers are running as expected, we'll launch an nginx container whose nginx configuration round-robins across the servers, acting as a reverse proxy and load balancer. To run the nginx container, we'll use the following command.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
    $ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple

Our Nginx container is now publicly exposed as an http server on $WEAVE_AWS_DEMO_HOST1.
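To make the round-robin behavior concrete, here is a hypothetical sketch of what the nginx configuration inside an image like fintanr/weave-gs-nginx-simple could look like. The upstream name and layout are assumptions for illustration, not the image's actual file; the key idea is that nginx's default round-robin upstream balancing works directly against the WeaveDNS hostnames.

```nginx
# Hypothetical sketch: round-robin load balancing over WeaveDNS names.
upstream php_apache_backend {
    # WeaveDNS resolves these container hostnames on the weave network.
    server ws1.weave.local;
    server ws2.weave.local;
    server ws3.weave.local;
    server ws4.weave.local;
    server ws5.weave.local;
    server ws6.weave.local;
}

server {
    listen 80;
    location / {
        proxy_pass http://php_apache_backend;
    }
}
```

With no explicit balancing directive, nginx cycles through the `server` entries of an upstream in round-robin order, which is exactly the behavior the test in the next step demonstrates.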
### 5. Testing the Load Balancer ###

To test that our load balancer is working, we'll run a script that makes http requests to our nginx container. We'll make six requests so that we can see nginx moving through each of the web servers in round-robin order.

    $ ./access-aws-hosts.sh

    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws1.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws2.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws3.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws4.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws5.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws6.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
### Conclusion ###

Finally, we've successfully configured nginx as a reverse proxy and load balancer with Weave and Docker, running Ubuntu Server on AWS (Amazon Web Services) EC2. From the output in the step above, it is clear that we have configured it correctly: the requests are sent in round-robin order to the 6 application containers, each running a PHP app hosted on an Apache web server. Weave and weavedns made it easy to deploy a containerized PHP application using nginx across multiple hosts on AWS EC2 without code changes, connecting the containers to each other by hostname via weavedns. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:http://console.aws.amazon.com/
[translating by xiqingongzi]

RHCSA Series: Reviewing Essential Commands & System Documentation – Part 1
================================================================================

RHCSA (Red Hat Certified System Administrator) is a certification exam from Red Hat, the company that provides an open source operating system and software to the enterprise community, along with support, training, and consulting services for organizations.

![RHCSA Exam Guide](http://www.tecmint.com/wp-content/uploads/2015/02/RHCSA-Series-by-Tecmint.png)

RHCSA Exam Preparation Guide

The RHCSA certification is obtained from Red Hat Inc. after passing the exam (codename EX200). RHCSA is an upgrade of the RHCT (Red Hat Certified Technician) exam, and this upgrade became compulsory as Red Hat Enterprise Linux was upgraded. The main difference between RHCT and RHCSA is that the RHCT exam was based on RHEL 5, whereas the RHCSA certification is based on RHEL 6 and 7; the courseware of the two certifications also varies to a certain degree.

A Red Hat Certified System Administrator (RHCSA) is expected to perform the following core system administration tasks in Red Hat Enterprise Linux environments:

- Understand and use essential tools for handling files, directories, command-line environments, and system-wide / package documentation.
- Operate running systems, even at different run levels; identify and control processes; start and stop virtual machines.
- Set up local storage using partitions and logical volumes.
- Create and configure local and network file systems and their attributes (permissions, encryption, and ACLs).
- Set up, configure, and control systems, including installing, updating, and removing software.
- Manage system users and groups, along with the use of a centralized LDAP directory for authentication.
- Ensure system security, including basic firewall and SELinux configuration.

To view fees and register for an exam in your country, check the [RHCSA Certification page][1].
In this 15-article RHCSA series, titled Preparation for the RHCSA (Red Hat Certified System Administrator) exam, we are going to cover the following topics on the latest release of Red Hat Enterprise Linux 7.

- Part 1: Reviewing Essential Commands & System Documentation
- Part 2: How to Perform File and Directory Management in RHEL 7
- Part 3: How to Manage Users and Groups in RHEL 7
- Part 4: Editing Text Files with Nano and Vim / Analyzing text with grep and regexps
- Part 5: Process Management in RHEL 7: boot, shutdown, and everything in between
- Part 6: Using ‘Parted’ and ‘SSM’ to Configure and Encrypt System Storage
- Part 7: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares
- Part 8: Securing SSH, Setting Hostname and Enabling Network Services
- Part 9: Installing, Configuring and Securing a Web and FTP Server
- Part 10: Yum Package Management, Automating Tasks with Cron and Monitoring System Logs
- Part 11: Firewall Essentials and Control Network Traffic Using FirewallD and Iptables
- Part 12: Automate RHEL 7 Installations Using ‘Kickstart’
- Part 13: RHEL 7: What is SELinux and how it works?
- Part 14: Use LDAP-based authentication in RHEL 7
- Part 15: Virtualization in RHEL 7: KVM and Virtual machine management

In this Part 1 of the RHCSA series, we will explain how to enter and execute commands with the correct syntax in a shell prompt or terminal, and how to find, inspect, and use system documentation.
![RHCSA: Reviewing Essential Linux Commands – Part 1](http://www.tecmint.com/wp-content/uploads/2015/02/Reviewing-Essential-Linux-Commands.png)

RHCSA: Reviewing Essential Linux Commands – Part 1

#### Prerequisites: ####

At least a slight degree of familiarity with basic Linux commands such as:

- [cd command][2] (change directory)
- [ls command][3] (list directory)
- [cp command][4] (copy files)
- [mv command][5] (move or rename files)
- [touch command][6] (create empty files or update the timestamp of existing ones)
- rm command (delete files)
- mkdir command (make directory)

The correct usage of some of them is exemplified in this article, and you can find further information about each of them using the methods suggested here.

Though not strictly required to start, as we will be discussing general commands and methods for information search on a Linux system, you should try to install RHEL 7 as explained in the following article; it will make things easier down the road.

- [Red Hat Enterprise Linux (RHEL) 7 Installation Guide][7]
### Interacting with the Linux Shell ###

If we log into a Linux box using a text-mode login screen, chances are we will be dropped directly into our default shell. On the other hand, if we log in using a graphical user interface (GUI), we will have to open a shell manually by starting a terminal. Either way, we will be presented with the user prompt, where we can start typing and executing commands (a command is executed by pressing the Enter key after we have typed it).

Commands are composed of two parts:

- the name of the command itself, and
- arguments

Certain arguments, called options (usually preceded by a hyphen), alter the behavior of the command in a particular way, while other arguments specify the objects upon which the command operates.

The type command can help us identify whether a certain command is built into the shell or provided by a separate package. The need to make this distinction lies in where we will find more information about the command: for shell built-ins we need to look in the shell's man page, whereas for other binaries we can refer to their own man pages.

![Check Shell built in Commands](http://www.tecmint.com/wp-content/uploads/2015/02/Check-shell-built-in-Commands.png)

Check Shell built in Commands

In the examples above, cd and type are shell built-ins, while top and less are binaries external to the shell itself (in this case, the location of the command executable is returned by type).
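You can reproduce this distinction in any shell session. A short demonstration:

```shell
#!/bin/sh
# type tells us whether a name is a shell built-in or an external binary.
type cd        # reports a shell builtin
type ls        # reports an external binary (or an alias wrapping one)
type type      # type is itself a built-in
```

For built-ins, type prints "is a shell builtin"; for external commands, it prints the path of the executable that would run.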
Other well-known shell built-ins include:

- [echo command][8]: Displays strings of text.
- [pwd command][9]: Prints the current working directory.

![More Built in Shell Commands](http://www.tecmint.com/wp-content/uploads/2015/02/More-Built-in-Shell-Commands.png)

More Built in Shell Commands

**exec command**

Runs an external program that we specify. Note that in most cases, this is better accomplished by just typing the name of the program we want to run, but the exec command has one special feature: rather than creating a new process that runs alongside the shell, the new process replaces the shell, as can be verified with a subsequent:

    # ps -ef | grep [original PID of the shell process]

When the new process terminates, the shell terminates with it. Run exec top and then hit the q key to quit top. You will notice that the shell session ends when you do, as shown in the following screencast:

注:youtube视频
<iframe width="640" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/f02w4WT73LE"></iframe>
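The "replaces the shell" behavior can also be verified non-interactively: since exec swaps the process image in place, the PID stays the same before and after. A minimal sketch:

```shell
#!/bin/sh
# exec replaces the current process image, so the PID is preserved across it.
# The subshell first prints its own PID, then execs a new sh that prints $$ again.
out=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
pid_before=$(printf '%s\n' "$out" | sed -n 1p)
pid_after=$(printf '%s\n' "$out" | sed -n 2p)
echo "PID before exec: $pid_before, after exec: $pid_after"
```

Both printed PIDs are identical: the second sh did not get a new process, it took over the first one.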
**export command**

Exports variables to the environment of subsequently executed commands.
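The effect is easy to observe: a plain shell variable is invisible to child processes until it is exported. A short demonstration:

```shell
#!/bin/sh
# A plain shell variable is not inherited by child processes until exported.
GREETING="hello from parent"
before=$(sh -c 'printf "%s" "$GREETING"')   # child shell sees nothing yet
export GREETING                             # copy it into child environments
after=$(sh -c 'printf "%s" "$GREETING"')
echo "before export: '$before'"
echo "after export:  '$after'"
```

The first child prints an empty string; only after export does the child see the value.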
**history command**

Displays the command history list with line numbers. A command in the history list can be repeated by typing the command number preceded by an exclamation mark. If we need to edit a command from the history list before executing it, we can press Ctrl + r and start typing the first letters of the command. When we see the command completed automatically, we can edit it as per our current need:

注:youtube视频
<iframe width="640" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/69vafdSMfU4"></iframe>

This list of commands is kept in our home directory in a file called .bash_history. The history facility is a useful resource for reducing the amount of typing, especially when combined with command-line editing. By default, bash stores the last 500 commands you have entered, but this limit can be extended via the HISTSIZE environment variable:

![Linux history Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-history-Command.png)

Linux history Command

But a change performed as above will not persist across reboots. In order to preserve the change to the HISTSIZE variable, we need to edit the .bashrc file by hand:

    # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
    HISTSIZE=1000

**Important**: Keep in mind that these changes will not take effect until we restart our shell session.
**alias command**

With no arguments, or with the -p option, alias prints the list of aliases in the form alias name=value on standard output. When arguments are provided, an alias is defined for each name whose value is given.

With alias, we can make up our own commands or modify existing ones by including desired options. For example, suppose we want to alias ls to ls --color=auto so that the output will display regular files, directories, symlinks, and so on, in different colors:

    # alias ls='ls --color=auto'

![Linux alias Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-alias-Command.png)

Linux alias Command

**Note**: You can assign any name to your "new command" and enclose as many commands as desired between single quotes, but in that case you need to separate them with semicolons, as follows:

    # alias myNewCommand='cd /usr/bin; ls; cd; clear'

**exit command**

The exit and logout commands both terminate the shell. The exit command terminates any shell, but the logout command terminates only login shells, that is, those that are launched automatically when you initiate a text-mode login.

If we are ever in doubt as to what a program does, we can refer to its man page, which can be invoked using the man command. In addition, there are also man pages for important files (inittab, fstab, hosts, to name a few), library functions, shells, devices, and other features.
#### Examples: ####

- man uname (print system information, such as kernel name, processor, operating system type, architecture, and so on).
- man inittab (init daemon configuration).

Another important source of information is provided by the info command, which is used to read info documents. These documents often provide more information than the man page. It is invoked by using the info keyword followed by a command name, such as:

    # info ls
    # info cut

In addition, the /usr/share/doc directory contains several subdirectories where further documentation can be found, either as plain-text files or in other friendly formats.

Make it a habit to use these three methods to look up information about commands, and pay special and careful attention to the syntax of each of them, which is explained in detail in the documentation.

**Converting Tabs into Spaces with expand Command**

Sometimes text files contain tabs, but programs that need to process the files don't cope well with them. Or maybe we just want to convert tabs into spaces. That's where the expand tool (provided by the GNU coreutils package) comes in handy.

For example, given the file NumbersList.txt, let's run expand against it, changing tabs to one space each, and display the result on standard output.

    # expand --tabs=1 NumbersList.txt

![Linux expand Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-expand-Command.png)

Linux expand Command

The unexpand command performs the reverse operation (converts spaces into tabs).
**Display the first lines of a file with head and the last lines with tail**

By default, the head command followed by a filename displays the first 10 lines of the file. This behavior can be changed using the -n option, specifying a certain number of lines.

    # head -n3 /etc/passwd
    # tail -n3 /etc/passwd

![Linux head and tail Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-head-and-tail-Command.png)

Linux head and tail Command

One of the most interesting features of tail is its ability to display data (the last lines) as the input file grows (tail -f my.log, where my.log is the file under observation). This is particularly useful when monitoring a log to which data is being continually added.

Read More: [Manage Files Effectively using head and tail Commands][10]
**Merging Lines with paste**

The paste command merges files line by line, separating the lines from each file with tabs (by default), or another delimiter that can be specified (in the following example the fields in the output are separated by an equal sign).

    # paste -d= file1 file2

![Merge Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Merge-Files-in-Linux-with-paste-command.png)

Merge Files in Linux
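Here is a runnable sketch of the paste example; the two files and their contents are invented, since the article's file1 and file2 are not shown:

```shell
# Two made-up sample files standing in for file1 and file2:
printf 'one\ntwo\nthree\n' > file1
printf 'uno\ndos\ntres\n'  > file2

# Default: corresponding lines are joined with a tab.
paste file1 file2

# With -d, an equal sign separates the fields, as in the article:
paste -d= file1 file2
# one=uno
# two=dos
# three=tres
```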
**Breaking a file into pieces using split command**

The split command is used to split a file into two (or more) separate files, which are named according to a prefix of our choosing. The split can be defined by size, chunks, or number of lines, and the resulting files can have numeric or alphabetic suffixes. In the following example, we will split bash.pdf into files of 50 KB each (-b 50KB), using numeric suffixes (-d):

    # split -b 50KB -d bash.pdf bash_

![Split Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Split-Files-in-Linux-with-split-command.png)

Split Files in Linux

You can merge the pieces to recreate the original file with the following command:

    # cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf
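A self-contained way to try the split/merge round trip; we generate a random ~120 KB file as a stand-in for the article's bash.pdf, so our split produces three pieces rather than six:

```shell
# Generate a ~120 KB file as a stand-in for bash.pdf:
head -c 120000 /dev/urandom > bash.pdf

# Split into 50 KB pieces with numeric suffixes:
split -b 50KB -d bash.pdf bash_
ls bash_*
# lists bash_00, bash_01, bash_02

# Concatenating the pieces in order restores the original byte for byte;
# the shell glob expands in sorted order, so it lists the pieces correctly:
cat bash_* > rejoined.pdf
cmp -s bash.pdf rejoined.pdf && echo "files match"
# files match
```

cmp compares the files byte by byte, which is a stronger check than just comparing sizes.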
**Translating characters with tr command**

The tr command can be used to translate (change) characters on a one-by-one basis or using character ranges. In the following example we will use the same file2 as previously, and we will change:

- lowercase o’s to uppercase,
- and all lowercase to uppercase

    # cat file2 | tr o O
    # cat file2 | tr [a-z] [A-Z]

![Translate Characters in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Translate-characters-in-Linux-with-tr-command.png)

Translate Characters in Linux
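A runnable version of the two tr examples; the contents of file2 are made up here so the output is predictable:

```shell
printf 'hello world\n' > file2   # made-up contents for file2

# Change lowercase o's to uppercase:
tr o O < file2
# hellO wOrld

# Change all lowercase to uppercase; quoting the ranges keeps the
# shell from treating [a-z] as a filename glob:
tr '[a-z]' '[A-Z]' < file2
# HELLO WORLD
```

Redirecting with `<` also avoids the extra cat process used in the article's pipeline; both forms produce the same output.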
**Reporting or deleting duplicate lines with uniq and sort command**

The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text files).

By default, sort takes the first field (separated by spaces) as the key field. To specify a different key field, we need to use the -k option. Please note how the output returned by sort and uniq changes as we change the key field in the following example:

    # cat file3
    # sort file3 | uniq
    # sort -k2 file3 | uniq
    # sort -k3 file3 | uniq

![Remove Duplicate Lines in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Remove-Duplicate-Lines-in-file.png)

Remove Duplicate Lines in Linux
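The contents of the article's file3 are not shown, so here is a minimal invented file3 that demonstrates why sort must come first:

```shell
# A made-up file3 with three space-separated fields:
cat > file3 <<'EOF'
beta x 2
alpha y 1
beta x 2
EOF

# uniq alone would miss the duplicates because they are not adjacent;
# sorting first makes them adjacent so uniq can drop one:
sort file3 | uniq
# alpha y 1
# beta x 2

# Sorting on field 2 (-k2) changes the order uniq sees:
sort -k2 file3 | uniq

# -c reports how many times each line occurred:
sort file3 | uniq -c
```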
**Extracting text with cut command**

The cut command extracts portions of input lines (from stdin or files) and displays the result on standard output, based on number of bytes (-b), characters (-c), or fields (-f).

When using cut based on fields, the default field separator is a tab, but a different separator can be specified by using the -d option.

    # cut -d: -f1,3 /etc/passwd # Extract specific fields: 1 and 3 in this case
    # cut -d: -f2-4 /etc/passwd # Extract range of fields: 2 through 4 in this example

![Extract Text From a File in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Extract-Text-from-a-file.png)

Extract Text From a File in Linux

Note that the output of the two examples above was truncated for brevity.
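To make the field extraction concrete, the same commands can be run on an inline passwd-style record, where the result is predictable:

```shell
# Username (field 1) and UID (field 3) from the colon-separated passwd file:
cut -d: -f1,3 /etc/passwd | head -n3

# The same on an inline record, so the result is visible here:
printf 'root:x:0:0:root:/root:/bin/bash\n' | cut -d: -f1,3
# root:0

# A range of fields keeps the delimiter between them:
printf 'a:b:c:d:e\n' | cut -d: -f2-4
# b:c:d
```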
**Reformatting files with fmt command**

fmt is used to “clean up” files with a large amount of content or lines, or with varying degrees of indentation. The new paragraph formatting defaults to no more than 75 characters wide. You can change this with the -w (width) option, which sets the line length to the specified number of characters.

For example, let’s see what happens when we use fmt to display the /etc/passwd file with the width of each line set to 100 characters. Once again, output has been truncated for brevity.

    # fmt -w100 /etc/passwd

![File Reformatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Reformatting-in-Linux-with-fmt-command.png)

File Reformatting in Linux
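fmt is easier to see on a tiny invented input than on /etc/passwd; note that it both wraps long lines and joins short ones within a paragraph:

```shell
# Reflow one long line into lines of at most 25 characters;
# fmt breaks between words and never splits a word:
printf 'pack my box with five dozen liquor jugs\n' | fmt -w25

# fmt also joins short lines of the same paragraph back together:
printf 'one\ntwo\nthree\n' | fmt -w80
# one two three
```

Paragraphs are delimited by blank lines, so fmt never merges text across an empty line.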
**Formatting content for printing with pr command**

pr paginates one or more files and lays them out in columns for printing. In other words, pr formats a file to make it look better when printed. For example, the following command:

    # ls -a /etc | pr -n --columns=3 -h "Files in /etc"

shows a listing of all the files found in /etc in a printer-friendly format (3 columns) with a custom header (indicated by the -h option) and numbered lines (-n).

![File Formatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Formatting-in-Linux-with-pr-command.png)

File Formatting in Linux
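A small sketch of the column layout on a predictable input; -t is an assumption added here (it is not in the article's command) to suppress the page header and trailer so only the columns remain:

```shell
# Lay out the numbers 1-9 in three columns, filled top to bottom;
# -t drops the header and trailer pr normally adds for the printed page:
seq 9 | pr -t --columns=3

# The article's variant, with numbered lines (-n) and a custom header,
# trimmed to its first few lines:
ls -a /etc | pr -n --columns=3 -h "Files in /etc" | head -n5
```

Because pr fills columns top to bottom by default, the first output row of the seq example contains 1, 4, and 7.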
### Summary ###

In this article we have discussed how to enter and execute commands with the correct syntax in a shell prompt or terminal, and we have explained how to find, inspect, and use system documentation. As simple as it seems, it’s a big first step on your way to becoming a RHCSA.

If you would like to add other commands that you use on a regular basis and that have proven useful for your daily responsibilities, feel free to share them with the world using the comment form below. Questions are also welcome. We look forward to hearing from you!
--------------------------------------------------------------------------------

via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/

作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://www.redhat.com/en/services/certification/rhcsa
[2]:http://www.tecmint.com/cd-command-in-linux/
[3]:http://www.tecmint.com/ls-command-interview-questions/
[4]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/
[5]:http://www.tecmint.com/rename-multiple-files-in-linux/
[6]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/
[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/
[8]:http://www.tecmint.com/echo-command-in-linux/
[9]:http://www.tecmint.com/pwd-command-examples/
[10]:http://www.tecmint.com/view-contents-of-file-in-linux/
@ -1,3 +1,4 @@
[translating by xiqingongzi]
RHCSA Series: How to Perform File and Directory Management – Part 2
================================================================================
In this article, RHCSA Part 2: File and directory management, we will review some essential skills that are required in the day-to-day tasks of a system administrator.
@ -319,4 +320,4 @@ via: http://www.tecmint.com/file-and-directory-management-in-linux/
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/
[2]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
[3]:http://www.tecmint.com/18-tar-command-examples-in-linux/
@ -1,98 +0,0 @@
|
|||||||
以比较的方式向Linux用户介绍FreeBSD
|
|
||||||
================================================================================
|
|
||||||
![](https://1102047360.rsc.cdn77.org/wp-content/uploads/2015/03/FreeBSD-790x494.jpg)
|
|
||||||
|
|
||||||
### 简介 ###
|
|
||||||
|
|
||||||
BSD最初从UNIX继承而来,目前,有许多的类Unix操作系统是基于BSD的。FreeBSD是使用最广泛的开源伯克利软件发行版(BSD发行版)。就像它隐含的意思一样,它是一个免费开源的类Unix操作系统,并且是公共服务器的平台。FreeBSD源代码通常以宽松的BSD许可证发布。它与Linux有很多相似的地方,但我们得承认它们在很多方面仍有不同。
|
|
||||||
|
|
||||||
本文的其余部分组织如下:FreeBSD的描述在第一部分,FreeBSD和Linux的相似点在第二部分,它们的区别将在第三部分讨论,对他们功能的讨论和总结在最后一节。
|
|
||||||
|
|
||||||
### FreeBSD描述 ###
|
|
||||||
|
|
||||||
#### 历史 ####
|
|
||||||
|
|
||||||
- FreeBSD的第一个版本发布于1993年,它的第一张CD-ROM是FreeBSD1.0,也发行于1993年。接下来,FreeBSD 2.1.0在1995年发布并且获得了所有用户的青睐。实际上许多IT公司都使用FreeBSD并且很满意,我们可以列出其中的一些:IBM、Nokia、NetApp和Juniper Network。
|
|
||||||
|
|
||||||
#### 许可证 ####
|
|
||||||
|
|
||||||
- 关于它的许可证,FreeBSD以多种开源许可证进行发布。它最新的内核代码以两条款BSD许可证发布,给予使用和重新发布FreeBSD的绝对自由。其它代码则以三条款或四条款BSD许可证发布,还有一些以GPL和CDDL许可证发布。
|
|
||||||
|
|
||||||
#### 用户 ####
|
|
||||||
|
|
||||||
- FreeBSD的重要特点之一就是它多样的用户。实际上,FreeBSD可以作为邮件服务器、Web Server、FTP服务器以及路由器等,您只需要在它上运行服务相关的软件即可。而且FreeBSD还支持ARM、PowerPC、MIPS、x86、x86-64架构。
|
|
||||||
|
|
||||||
### FreeBSD和Linux的相似处 ###
|
|
||||||
|
|
||||||
FreeBSD和Linux是两个免费开源的软件。实际上,它们的用户可以很容易的检查并修改源代码,用户拥有绝对的自由。而且,FreeBSD和Linux都是类Unix系统,它们的内核、内部组件、库程序都使用从历史上的AT&T Unix处继承的算法。FreeBSD从根基上更像Unix系统,而Linux是作为免费的类Unix系统发布的。许多工具应用都可以在FreeBSD和Linux中找到,实际上,他们几乎有同样的功能。
|
|
||||||
|
|
||||||
此外,FreeBSD能够运行大量的Linux应用。它可以安装一个Linux的兼容层,这个兼容层可以在编译FreeBSD时加入AAC Compact Linux得到或通过下载已编译了Linux兼容层的FreeBSD系统,其中会包括兼容程序:aac_linux.ko。不同于FreeBSD的是,Linux无法运行FreeBSD的软件。
|
|
||||||
|
|
||||||
最后,我们注意到虽然二者有同样的目标,但二者还是有一些不同之处,我们在下一节中列出。
|
|
||||||
|
|
||||||
### FreeBSD和Linux的区别 ###
|
|
||||||
|
|
||||||
目前对于大多数用户来说并没有一个选择FreeBSD还是Linux的清楚的准则。因为他们有着很多同样的应用程序,因为他们都被称作类Unix系统。
|
|
||||||
|
|
||||||
在这一章,我们将列出这两种系统的一些重要的不同之处。
|
|
||||||
|
|
||||||
#### 许可证 ####
|
|
||||||
|
|
||||||
- 两个系统的区别首先在于它们的许可证。Linux以GPL许可证发行,它为用户提供阅读、发行和修改源代码的自由,GPL许可证帮助用户避免仅仅发行二进制。而FreeBSD以BSD许可证发布,BSD许可证比GPL更宽容,因为其衍生著作不需要仍以该许可证发布。这意味着任何用户能够使用、发布、修改代码,并且不需要维持之前的许可证。
|
|
||||||
- 您可以依据您的需求,在两种许可证中选择一种。首先是BSD许可证,由于其特殊的条款,它更受用户青睐。实际上,这个许可证使用户在保证源代码的封闭性的同时,可以售卖以该许可证发布的软件。再说说GPL,它需要每个使用以该许可证发布的软件的用户多加注意。
|
|
||||||
- 如果想在以不同许可证发布的两种软件中做出选择,您需要了解他们各自的许可证,以及他们开发中的方法论,从而能了解他们特性的区别,来选择更适合自己需求的。
|
|
||||||
|
|
||||||
#### 控制 ####
|
|
||||||
|
|
||||||
- 由于FreeBSD和Linux是以不同的许可证发布的,Linus Torvalds控制着Linux的内核,而FreeBSD却与Linux不同,它并未被控制。我个人更倾向于使用FreeBSD而不是Linux,这是因为FreeBSD才是绝对自由的软件,没有任何控制许可的存在。Linux和FreeBSD还有其他的不同之处,我建议您先不急着做出选择,等读完本文后再做出您的选择。
|
|
||||||
|
|
||||||
#### 操作系统 ####
|
|
||||||
|
|
||||||
- Linux聚焦于内核系统,这与FreeBSD不同,FreeBSD的整个系统都被维护着。FreeBSD的内核和一组由FreeBSD团队开发的软件被作为一个整体进行维护。实际上,FreeBSD开发人员能够远程且高效的管理核心操作系统。
|
|
||||||
- 而Linux方面,在管理系统方面有一些困难。由于不同的组件由不同的源维护,Linux开发者需要将它们汇集起来,才能达到同样的功能。
|
|
||||||
- FreeBSD和Linux都给了用户大量的可选软件和发行版,但他们管理的方式不同。FreeBSD是统一的管理方式,而Linux需要被分别维护。
|
|
||||||
|
|
||||||
#### 硬件支持 ####
|
|
||||||
|
|
||||||
- 说到硬件支持,Linux比FreeBSD做的更好。但这不意味着FreeBSD没有像Linux那样支持硬件的能力。他们只是在管理的方式不同,这通常还依赖于您的需求。因此,如果您在寻找最新的解决方案,FreeBSD更适应您;但如果您在寻找一幅宏大的画卷,那最好使用Linux。
|
|
||||||
|
|
||||||
#### 原生FreeBSD Vs 原生Linux ####
|
|
||||||
|
|
||||||
- 两者原生系统的起源也有所不同。就像我之前说的,Linux是一个Unix的替代系统,由Linus Torvalds编写,并由网络上的许多极客一起协助实现。Linux有一个现代系统所需要的全部功能,诸如虚拟内存、共享库、动态加载、优秀的内存管理等。它以GPL许可证发布。
|
|
||||||
- FreeBSD也继承了Unix的许多重要的特性。FreeBSD作为在加州大学开发的BSD的一种发行版。开发BSD的最重要的原因是用一个开源的系统来替代AT&T操作系统,从而给用户无需AT&T证书便可使用的能力。
|
|
||||||
- 许可证的问题是开发者们最关心的问题。他们试图提供一个最大化克隆Unix的开源系统。这影响了用户的选择,由于FreeBSD相比Linux使用BSD许可证进行发布,因而更加自由。
|
|
||||||
|
|
||||||
#### 支持的软件包 ####
|
|
||||||
|
|
||||||
- 从用户的角度来看,另一个二者不同的地方便是软件包以及对源码安装的软件的可用性和支持。Linux只提供了预编译的二进制包,这与FreeBSD不同,它不但提供预编译的包,而且还提供从源码编译和安装的构建系统。由于这样的移植,FreeBSD给了您选择使用预编译的软件包(默认)和在编译时定制您软件的能力。
|
|
||||||
- 这些可选组件允许您用FreeBSD构建所有的软件。而且,它们的管理还是层次化的,您可以在/usr/ports下找到源文件的地址以及一些正确使用FreeBSD的文档。
|
|
||||||
- 这些提到的可选组件给予了产生不同软件包版本的可能性。FreeBSD给了您通过源代码构建以及预编译的两种软件,而不是像Linux一样只有预编译的软件包。您可以使用两种安装方式管理您的系统。
|
|
||||||
|
|
||||||
#### FreeBSD 和 Linux 常用工具比较 ####
|
|
||||||
|
|
||||||
- 有大量的常用工具在FreeBSD上可用,并且有趣的是他们由FreeBSD的团队所拥有。相反的,Linux工具来自GNU,这就是为什么在使用中有一些限制。
|
|
||||||
- 实际上FreeBSD采用的BSD许可证非常有益且有用。因此,您有能力维护核心操作系统,控制这些应用程序的开发。有一些工具类似于它们的祖先 - BSD和Unix的工具,但不同于GNU的套件,GNU套件只想做到最小的向后兼容。
|
|
||||||
|
|
||||||
#### 标准 Shell ####
|
|
||||||
|
|
||||||
- FreeBSD默认使用tcsh。它是csh的增强版。由于FreeBSD以BSD许可证发行,因此不建议您在其中使用GNU组件bash shell。bash和tcsh的区别仅仅在于tcsh的脚本功能。实际上,我们更推荐在FreeBSD中使用sh shell,因为它更加可靠,可以避免一些使用tcsh和csh时出现的脚本问题。
|
|
||||||
|
|
||||||
#### 一个更加层次化的文件系统 ####
|
|
||||||
|
|
||||||
- 像之前提到的一样,使用FreeBSD时,基础操作系统以及可选组件可以被很容易的区别开来。这导致了一些管理它们的标准。在Linux下,/bin,/sbin,/usr/bin或者/usr/sbin都是存放可执行文件的目录。FreeBSD不同,它有一些附加的对其进行组织的规范。基础操作系统被放在/usr/local/bin或者/usr/local/sbin目录下。这种方法可以帮助管理和区分基础操作系统和可选组件。
|
|
||||||
|
|
||||||
### 结论 ###
|
|
||||||
|
|
||||||
FreeBSD和Linux都是免费且开源的系统,他们有相似点也有不同点。上面列出的内容并不能说哪个系统比另一个更好。实际上,FreeBSD和Linux都有自己的特点和技术规格,这使它们与别的系统区别开来。那么,您有什么看法呢?您已经有在使用它们中的某个系统了么?如果答案为是的话,请给我们您的反馈;如果答案是否的话,在读完我们的描述后,您怎么看?请在留言处发表您的观点。
|
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
via: https://www.unixmen.com/comparative-introduction-freebsd-linux-users/
|
|
||||||
|
|
||||||
作者:[anismaj][a]
|
|
||||||
译者:[wwy-hust](https://github.com/wwy-hust)
|
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
|
||||||
|
|
||||||
[a]:https://www.unixmen.com/author/anis/
|
|
@ -1,57 +0,0 @@
|
|||||||
Linux比Mac OS X更好吗?历史中的GNU,开源和Apple
|
|
||||||
==============================================================================
|
|
||||||
> 自由软件/开源社区与Apple之间的争论可以回溯到上世纪80年代,当时Linux的创始人称Mac OS X的核心就是"一个废物",还有其他一些软件历史上的轶事。
|
|
||||||
|
|
||||||
![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/05/untitled_2.png)
|
|
||||||
|
|
||||||
开源拥护者们与微软之间有着很长,而且摇摆的关系。每个人都知道这个。但是,在许多方面,自由或者开源软件的支持者们与Apple之间的紧张关系则更加突出——尽管这很少受到媒体的关注。
|
|
||||||
|
|
||||||
需要说明的是,并不是所有的开源拥护者都厌恶苹果。从个人见闻来说,我见过很多Linux黑客把玩iPhone和iPad。实际上,许多Linux用户十分喜欢Apple的OS X系统,以至于他们[创造了很多Linux的发行版][1],都设计得看起来像OS X。(顺便说下,[北朝鲜政府][2]就这样做了。)
|
|
||||||
|
|
||||||
但是Mac的信徒与企鹅——即Linux社区(未提及自由与开源软件世界的小众群体)的信徒之间的关系,并不一直是完全的和谐。并且这绝不是一个新的现象,在我研究Linux历史和开源基金会的时候就发现了。
|
|
||||||
|
|
||||||
### GNU vs. Apple ###
|
|
||||||
|
|
||||||
这场争端至少可以回溯到上世纪80年代后期。1988年6月,Richard Stallman发起的[GNU][3]项目(旨在建立一个源代码可自由共享的、完全自由的类Unix操作系统)[强烈指责][4]了Apple对[Hewlett-Packard][5](HPQ)和[Microsoft][6](MSFT)的诉讼。Apple声称二者抄袭了Macintosh操作系统的界面和体验。GNU警告说,如果Apple得逞,这家公司“将会借助大众的新力量终结掉自由软件,而自由软件本可以成为商业软件的替代品”。
|
|
||||||
|
|
||||||
那个时候,GNU通过发布[“让你的律师远离我的电脑”按钮][7]来反对Apple的这场诉讼(讽刺的是,这意味着GNU站在了Microsoft一边,尽管当时的背景与如今不同)。GNU同时呼吁支持者们抵制Apple,并警告说:虽然Macintosh看起来是不错的计算机,但Apple一旦赢得诉讼,就会给市场带来垄断,从而极大地提高计算机的售价。
|
|
||||||
|
|
||||||
Apple最终[输掉了诉讼][8],但是直到1994年之后,GNU才[撤销对Apple的抵制][9]。这期间,GNU一直不断指责Apple。在上世纪90年代早期甚至之后,GNU开始发展GNU软件项目,可以在其他个人电脑平台包括MS-DOS上使用。[GNU 宣称][10],除非Apple停止在计算机领域垄断的野心,让用户界面可以模仿Macintosh的一些东西,否则“我们不会提供任何对Apple机器的支持。”(因此讽刺的是一大堆软件都开发了OS X和类Unix系统的版本,于是Apple在90年代后期介绍这些软件来自GNU。但是那是另外的故事了。)
|
|
||||||
|
|
||||||
### Torvalds on Jobs ###
|
|
||||||
|
|
||||||
除了他对大多数发行版比较自由放任的态度之外,Linux内核的创造者Linus Torvalds对Apple的态度,并不比Stallman和GNU当年的态度仁慈多少。在他2001年出版的书《Just For Fun: The Story of an Accidental Revolutionary》中,Torvalds描述了与Steve Jobs的一次会面:大约在1997年,他收到后者的邀请,去讨论当时Apple正在开发、但尚未公开发布的Mac OS X。
|
|
||||||
|
|
||||||
“基本上,Jobs一开始就试图告诉我,桌面领域的玩家就两个:Microsoft和Apple,而且他认为我能为Linux做的最好的事,就是归顺Apple,努力让开源用户站到Mac OS X这边来,”Torvalds写道。
|
|
||||||
|
|
||||||
这次谈判显然让Torvalds很不爽。争论的焦点之一是Torvalds对Mach技术的藐视。对于Apple用来构建新OS X操作系统的这个内核,Torvalds称其为“一堆废物。它包含了所有你能犯的设计错误,并且甚至打算只弥补其中一小部分”。
|
|
||||||
|
|
||||||
但更让他不快的,显然是Jobs在开发OS X时对待开源的方式(OS X的核心里有很多开源程序):“他对结构上的瑕疵不以为意:只要最上面有Mac这一层,谁会在乎底层操作系统,那些真正核心的东西,是不是开源的呢?”
|
|
||||||
|
|
||||||
总而言之,Torvalds总结道,Jobs“并没有花太多力气去说服我。他仅仅很简单地说着,理所当然地认为我会对与Apple合作感兴趣”。“他完全没有意识到,还会有人并不关心Mac的市场份额。我想,当我表现出对Mac市场有多大、Microsoft市场有多大毫不在意时,他真的感到惊讶了。”
|
|
||||||
|
|
||||||
当然,Torvalds并不能代表所有Linux用户。他对OS X和Apple的看法从2001年起也逐渐软化。但实际上,早在2000年代初,Linux社区的领袖就对Apple及其高层的傲慢表现出深深的鄙视,这说明了一些重要的东西:Apple和开源/自由软件世界的矛盾是多么根深蒂固。
|
|
||||||
|
|
||||||
从以上两则历史轶事中,可以看到围绕Apple产品价值的重大争议:这家公司究竟是致力于提升软硬件的质量,还是仅仅凭借市场营销的小聪明获利,让产品卖出更高的价钱。但不管怎样,我会暂时置身这场讨论之外。
|
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
via: http://thevarguy.com/open-source-application-software-companies/051815/linux-better-os-x-gnu-open-source-and-apple-
|
|
||||||
|
|
||||||
作者:[Christopher Tozzi][a]
|
|
||||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
|
||||||
|
|
||||||
[a]:http://thevarguy.com/author/christopher-tozzi
|
|
||||||
[1]:https://www.linux.com/news/software/applications/773516-the-mac-ifying-of-the-linux-desktop/
|
|
||||||
[2]:http://thevarguy.com/open-source-application-software-companies/010615/north-koreas-red-star-linux-os-made-apples-image
|
|
||||||
[3]:http://gnu.org/
|
|
||||||
[4]:https://www.gnu.org/bulletins/bull5.html
|
|
||||||
[5]:http://www.hp.com/
|
|
||||||
[6]:http://www.microsoft.com/
|
|
||||||
[7]:http://www.duntemann.com/AppleSnakeButton.jpg
|
|
||||||
[8]:http://www.freibrun.com/articles/articl12.htm
|
|
||||||
[9]:https://www.gnu.org/bulletins/bull18.html#SEC6
|
|
||||||
[10]:https://www.gnu.org/bulletins/bull12.html
|
|
@ -0,0 +1,86 @@
|
|||||||
|
安卓编年史
|
||||||
|
================================================================================
|
||||||
|
![姜饼的新键盘,文本选择,边界回弹效果以及新复选框。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/3kb-high-over-check.png)
|
||||||
|
姜饼的新键盘,文本选择,边界回弹效果以及新复选框。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
安卓2.3最重要的新增功能就是系统全局文本选择界面,你可以在左侧截图的谷歌搜索栏看到它。长按一个词能使其变为橙色高亮,并且出现可拖拽的小标签,长按高亮部分会弹出剪切,复制和粘贴选项。之前的方法使用的是依赖于十字方向键的控制,但现在有了触摸文本选择,Nexus S 不再需要额外的硬件控件。
|
||||||
|
|
||||||
|
左侧截图右半边展示的是新的复选框设计和边界回弹效果。冻酸奶(2.2)的复选框像个灯泡——选中时显示一个绿色的勾,未选中的时候显示灰色的勾。姜饼在选项关闭的时候显示一个空的选框——这显得更有意义。姜饼是第一个拥有滚动到底发光效果的版本。当到达列表底部的时候会有一道橙色的光晕,你越往上拉光晕越明显。列表上拉滚动反弹也许最直观,但那是苹果的专利。
|
||||||
|
|
||||||
|
![新拨号界面和对话框设计。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/dialdialog.png)
|
||||||
|
新拨号界面和对话框设计。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
姜饼里的拨号受到了稍微多点的照顾。它变得更暗了,并且谷歌终于解决了原本的直角,圆角以及圆形的结合问题。现在所有的边角都是直角了。所有的拨号按键被替换成了带有奇怪下划线的样式,像是用边角料拼凑的。你永远无法确定是否看到了一个按钮——我们的大脑得想象出按钮形状的剩余部分。
|
||||||
|
|
||||||
|
图中的无线网络对话框可以看作是剩下的系统全局改动的样本。所有的对话框标题从灰色变为黑色,对话框,下拉框以及按钮边缘都变成了直角,各部分色调都变暗了一点。所有的这些全局变化使得姜饼看起来不像原来那样活泼,而是更加地成熟。“到处都是黑色”的外观必然不是最受欢迎的,但它无疑看起来比安卓之前的灰色和米色的配色方案好多了。
|
||||||
|
|
||||||
|
![新市场,添加了大块的绿色页面顶栏。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/4market.png)
|
||||||
|
新市场,添加了大块的绿色页面顶栏。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
新版系统带来了“安卓市场 2.0”,虽然它不是姜饼独占的。主要的列表设计和原来一致,但谷歌将屏幕上部三分之一覆盖上了大块的绿色横幅,用来展示热门应用以及导航。这里主要的设计灵感也许是绿色的安卓吉祥物——它们的颜色完美匹配。在系统设计偏向暗色系的时候,霓虹灯般的绿色横幅和白色列表让市场明快得多。
|
||||||
|
|
||||||
|
但是,相同的绿色背景图片被用在了不同的手机上,这意味着在低分辨率设备上,绿色横幅看起来更加的大。不少用户抱怨这浪费了屏幕空间,于是随后的更新使得绿色横幅跟随内容向上滚动。在那时,横屏模式更加糟糕——绿色横幅会填满剩下的半个屏幕。
|
||||||
|
|
||||||
|
![市场的一个带有可折叠描述的应用详情页面,“我的应用”界面,以及Google Books界面截图。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/5rest-of-market-and-books.png)
|
||||||
|
市场的一个带有可折叠描述的应用详情页面,“我的应用”界面,以及 Google Books 界面截图。
|
||||||
|
Ron Amadeo供图
|
||||||
|
|
||||||
|
应用详情页面经过重新设计有了可折叠部分。文本描述只截取前几行展示,向下滑动页面不用再穿过数千行的描述。简短的描述后有一个“更多”按钮可供点击来显示完整的描述。这让用户可以轻松地滑动过列表找到像是截图和“联系开发者”部分,这些部分通常在页面偏下部分。
|
||||||
|
|
||||||
|
安卓主屏的其它部分明智地淡化了绿色机器人元素。市场应用的剩余部分绝大多数仅仅只是旧版市场加上新的绿色导航元素。旧有的标签界面升级成了可滑动切换标签。在姜饼右侧截图中,从右向左滑动将会从“热门付费”切换至“热门免费”,这使得导航变得更加方便。
|
||||||
|
|
||||||
|
姜饼带来了将会成为 Google Play 内容商店第一位成员的应用:Google Books。这个应用是个基础的电子书阅读器,会将书籍以简单的预览图平铺展示。屏幕顶部的“获取 eBooks”链接会打开浏览器,然后加载一个你可以在上面购买电子书的移动网站。
|
||||||
|
|
||||||
|
Google Books 以及市场的“我的应用”页面都是 Action Bar 的原型。就像现在的指南中写的,页面有一个带应用图标的固定置顶栏,应用内页面的名称,以及一些控件。这两个应用的布局实际上看起来十分现代,和现在的界面相似。
|
||||||
|
|
||||||
|
![新版谷歌地图](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps1.png)
|
||||||
|
新版谷歌地图。
|
||||||
|
Ron Amadeo供图
|
||||||
|
|
||||||
|
谷歌地图(再重复一次,这时候的谷歌地图是在安卓市场中的,并且不是这个安卓版本独占的)拥有了另一个操作栏原型,是一个顶部对齐的控件栏。这个早期版本的操作栏拥有许多试验性功能。功能栏主要被一个搜索框所占据,但是你永远无法向其输入内容。点击搜索框会打开安卓 1.x 版本以来的旧搜索界面,它带有完全不同的操作栏设计和活泼的按钮。2.3 版本的顶栏仅仅只是个大号的搜索按钮而已。
|
||||||
|
|
||||||
|
![从黑变白的新 business 页面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps2-Im-hungry.png)
|
||||||
|
从黑变白的新 business 页面。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
应用抽屉里和地点一起到来的热门商家重新设计了界面。不像姜饼的其它部分,它从黑色转换成了白色。谷歌还给它保留了圆角的旧按钮。这个新版本的地图能显示商家的营业时间,并且提供高级搜索选项,比如正在营业或是通过评分或价格限定搜索范围。点评被调整到了商家详情页面,用户可以更容易地对当前商家有个直观感受。而且现在还可以从搜索结果中给某个地点加星,保存起来以后使用。
|
||||||
|
|
||||||
|
![新 YouTube 设计,神奇的是有点像旧版地图的商家页面的设计。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/youtube22.png)
|
||||||
|
新 YouTube 设计,神奇的是有点像旧版地图的商家页面的设计。
|
||||||
|
Ron Amadeo供图
|
||||||
|
|
||||||
|
YouTube 应用似乎完全与安卓的其它部分分离开来,就像是设计它的人完全不知道姜饼最终会是什么样子一样。高亮是红色和灰色方案,而不是绿色和橙色,而且不像扁平黑色风格的姜饼,Youtube 有着气泡状的,带有圆角并且大幅使用渐变效果的按钮,标签以及操作栏。尽管如此,新应用还是有一些正确的地方。所有的标签可以水平滑动切换,而且应用终于提供了竖屏观看视频模式。安卓在那个阶段似乎工作不是很一致。就像是有人告诉 Youtube 团队“把它做成黑色的”,然后这就是全部的指导方向一样。唯一一个与其相似的安卓实体就是旧版谷歌地图的商家页面的设计。
|
||||||
|
|
||||||
|
尽管有些奇怪的设计,Youtube 应用有着最接近操作栏的顶栏设计。除了顶部操作栏的应用图标和一些按钮,最右侧还有个标着“更多”字样的按钮,点击它可以打开因为过多而无法装进操作栏的选项。在今天,这被称作“更多操作”按钮,它是个标准界面控件。
|
||||||
|
|
||||||
|
![新 Google Talk,支持语音和视频通话,以及新语音命令界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/talkvoice.png)
|
||||||
|
新 Google Talk,支持语音和视频通话,以及新语音命令界面。
|
||||||
|
Ron Amadeo供图
|
||||||
|
|
||||||
|
姜饼的最后一个更新是安卓 2.3.4,它带来了新版 Google Talk。不像 Nexus One,Nexus S 带有前置摄像头——重新设计的 Google Talk 拥有语音和视频通话功能。好友列表右侧的彩色指示不仅指明在线状态,还显示了语音和视频的可用性。一个点表示仅文本信息,一个麦克风表示文本信息或语音,一个摄像机表示支持文本信息,语音以及视频。如果可用的话,点击语音或视频图标会立即向好友发起通话。
|
||||||
|
|
||||||
|
姜饼是谷歌仍然提供支持的最老的安卓版本。激活一部姜饼设备并放置一会儿会收到大量更新。姜饼会拉取 Google Play 服务,它会带来许多新的 API 支持,并且会升级到最新版本的 Play 商店。打开 Play 商店并点击更新按钮,几乎每个独立谷歌应用都会被替换为更加现代的版本。我们尝试着保持这篇文章讲述的是姜饼发布时的样子,但时至今日还停留在姜饼的用户会被认为有点跟不上时代了。
|
||||||
|
|
||||||
|
姜饼如今仍然能够得到支持,因为有数量可观的用户仍然在使用这个有点过时的系统。姜饼仍然存在的能量来自于它极低的系统要求,使得它成为了低端廉价设备的最佳选择。下个版本的安卓对硬件的要求变得更高了。举个例子,安卓 3.0 蜂巢不是开源的,这意味着它只能在谷歌的协助之下移植到一个设备上。同时它还是只为平板设计的,这让姜饼作为最新的手机安卓版本存在了很长一段时间。4.0 冰淇淋三明治是下一个手机版本,但它显著地提高了安卓系统要求,抛弃了低端市场。谷歌现在希望借 4.4 KitKat(奇巧巧克力)重回廉价手机市场,它的系统要求降回了 512MB 内存。时间的推移同样有所帮助——如今,就算是廉价的系统级芯片都能满足安卓 4.0 时代的系统要求。
|
||||||
|
|
||||||
|
----------
|
||||||
|
|
||||||
|
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
|
||||||
|
|
||||||
|
[Ron Amadeo][a] / Ron是Ars Technica的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
|
||||||
|
|
||||||
|
[@RonAmadeo][t]
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/15/
|
||||||
|
|
||||||
|
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:http://arstechnica.com/author/ronamadeo
|
||||||
|
[t]:https://twitter.com/RonAmadeo
|
@ -0,0 +1,66 @@
|
|||||||
|
安卓编年史
|
||||||
|
================================================================================
|
||||||
|
### 安卓 3.0 蜂巢—平板和设计复兴 ###
|
||||||
|
|
||||||
|
尽管姜饼中做了许多改变,安卓仍然是移动世界里的丑小鸭。相比于 iPhone,它的优雅程度和设计完全抬不起头。另一方面来说,为数不多的能与 iOS 的美学智慧相当的操作系统之一是 Palm 的 WebOS。WebOS 有着优秀的整体设计,创新的功能,而且被寄予期望能够从和 iPhone 的长期竞争中拯救公司。
|
||||||
|
|
||||||
|
尽管如此,一年之后,Palm 资金链断裂。Palm 公司从未看到 iPhone 的到来,到 WebOS 就绪的时候已经太晚了。2010年4月,惠普花费10亿美元收购了 Palm。尽管惠普收购了一个拥有优秀用户界面的产品,界面的首席设计师,Matias Duarte,并没有加入惠普公司。2010年5月,就在惠普接手 Palm 之前,Duarte 加入了谷歌。惠普买下了面包,但谷歌雇佣了它的烘培师。
|
||||||
|
|
||||||
|
![第一部蜂巢设备,摩托罗拉 Xoom 10英寸平板。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Motorola-XOOM-MZ604.jpg)
|
||||||
|
第一部蜂巢设备,摩托罗拉 Xoom 10英寸平板。
|
||||||
|
|
||||||
|
在谷歌,Duarte 被任命为安卓用户体验主管。这是第一次有人公开掌管安卓的外观。尽管 Matias 在安卓 2.2 发布时就来到了谷歌,第一个真正受他影响的安卓版本是 3.0 蜂巢,它在2011年2月发布。
|
||||||
|
|
||||||
|
按谷歌自己的说法,蜂巢是匆忙问世的。10个月前,苹果发布了 iPad,让平板变得更加现代,谷歌希望能够尽快做出回应。蜂巢就是那个回应,一个运行在10英寸触摸屏上的安卓版本。悲伤的是,将这个系统推向市场是如此优先的事项,以至于边边角角都被砍去了以节省时间。
|
||||||
|
|
||||||
|
新系统只用于平板——手机不能升级到蜂巢,这加大了谷歌让系统运行在差异巨大的不同尺寸屏幕上的难度。但是,仅支持平板而不支持手机使得蜂巢源码没有泄露。之前的安卓版本是开源的,这使得黑客社区能够将其最新版本移植到所有的不同设备之上。谷歌不希望应用开发者在支持不完美的蜂巢手机移植版本时感到压力,所以谷歌将源码留在自己手中,并且严格控制能够拥有蜂巢的设备。匆忙的开发还导致了软件问题。在发布时,蜂巢不是特别稳定,SD卡不能工作,Adobe Flash——安卓最大的特色之一——还不被支持。
|
||||||
|
|
||||||
|
[摩托罗拉 Xoom][1]是为数不多的拥有蜂巢的设备之一,它是这个新系统的旗舰产品。Xoom 是一个10英寸,16:9 的平板,拥有 1GB 内存和 1GHz Tegra 2 双核处理器。尽管是由谷歌直接控制更新的新版安卓发布设备,它并没有被叫做“Nexus”。对此最可能的原因是谷歌对它没有足够的信心称其为旗舰。
|
||||||
|
|
||||||
|
尽管如此,蜂巢是安卓的一个里程碑。在一个体验设计师的主管之下,整个安卓用户界面被重构,绝大多数奇怪的应用设计都得到改进。安卓的默认应用终于看起来像整体的一部分,不同的界面有着相似的布局和主题。然而重新设计安卓会是一个跨版本的项目——蜂巢只是将安卓塑造成型的开始。这第一份草稿为安卓未来版本的样子做了基础设计,但它也用了过多的科幻主题,谷歌将花费接下来的数个版本来淡化它。
|
||||||
|
|
||||||
|
![蜂巢和姜饼的主屏幕。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/homeskreen.png)
|
||||||
|
蜂巢和姜饼的主屏幕。
|
||||||
|
Ron Amadeo供图
|
||||||
|
|
||||||
|
姜饼只是在它的量子壁纸上试验了科幻外观,蜂巢整个系统的以电子为灵感的主题让它充满科幻意味。所有东西都是黑色的,如果你需要对比色,你可以从一些不同色调的蓝色中挑选。所有蓝色的东西还有“光晕”效果,让整个系统看起来像是外星科技创造的。默认背景是个六边形的全息方阵(一个蜂巢!明白了吗?),看起来像是一艘飞船上的传送阵的地板。
|
||||||
|
|
||||||
|
蜂巢最重要的变化是增加了系统栏。摩托罗拉 Xoom 除了电源和音量键之外没有配备实体按键,所以蜂巢添加了一个大黑色底栏到屏幕底部,用于放置导航按键。这意味着默认安卓界面不再需要特别的实体按键。在这之前,安卓没有实体的返回,菜单和 Home 键就不能正常工作。现在,软件提供了所有必需的按钮,任何带有触摸屏的设备都能够运行安卓。
|
||||||
|
|
||||||
|
新软件按键带来的最大的好处是灵活性。新的应用指南表明应用应不再要求实体菜单按键,需要用到的时候,蜂巢会自动检测并添加四个按钮到系统栏让应用正常工作。另一个软件按键的灵活属性是它们可以改变设备的屏幕方向。除了电源和音量键之外,Xoom 的方向实际上不是那么重要。从用户的角度来看,系统栏始终处于设备的“底部”。代价是系统栏明显占据了一些屏幕空间。为了在10英寸平板上节省空间,状态栏被合并到了系统栏中。所有的常用状态指示放在了右侧——有电源,连接状态,时间还有通知图标。
|
||||||
|
|
||||||
|
主屏幕的整个布局都改变了,用户界面部件放在了设备的四个角落。屏幕底部左侧放置着之前讨论过的导航按键,右侧用于状态指示和通知,顶部左侧显示的是文本搜索和语音搜索,右侧有应用抽屉和添加小部件的按钮。
|
||||||
|
|
||||||
|
![新锁屏界面和最近应用界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/lockscreen-and-recent.png)
|
||||||
|
新锁屏界面和最近应用界面。
|
||||||
|
Ron Amadeo供图
|
||||||
|
|
||||||
|
(因为 Xoom 是一部较重的10英寸、16:9平板设备,这意味着它主要是横屏使用。虽然大部分应用还支持竖屏模式,但是到目前为止,由于我们的版式限制,我们大部分使用的是竖屏模式的截图。请记住蜂巢的截图来自于10英寸的平板,而姜饼的截图来自3.7英寸的手机。二者所展现的信息密度是不能直接比较的。)
|
||||||
|
|
||||||
|
解锁界面——从菜单按钮到旋转式拨号盘再到滑动解锁——移除了解锁步骤的任何精度要求,它采用了一个环状解锁盘。从中间向任意方向向外滑动就能解锁设备。就像旋转式解锁,这种解锁方式更加符合人体工程学,而不用强迫你的手指完美地遵循一条笔直的解锁路径。
|
||||||
|
|
||||||
|
第二张图中略缩图条带是由新增的“最近应用”按钮打开的界面,现在处在返回和 Home 键旁边。不像姜饼中长按 Home 键显示一组最近应用的图标,蜂巢在屏幕上显示应用图标和略缩图,使得在任务间切换变得更加方便。最近应用的灵感明显来自于 Duarte 在 WebOS 中的“卡片式”多任务管理,其使用全屏略缩图来切换任务。这个设计提供和 WebOS 的任务切换一样的易识别体验,但更小的略缩图允许更多的应用一次性显示在屏幕上。
|
||||||
|
|
||||||
|
尽管最近应用的实现看起来和你现在的设备很像,这个版本实际上是非常早期的。这个列表不能滚动,这意味着竖屏下只能显示七个应用,横屏下只能显示五个。任何超出范围的应用会从列表中去除。而且你也不能通过滑动略缩图来关闭应用——这只是个静态的列表。
|
||||||
|
|
||||||
|
这里我们看到电子灵感影响的完整主题效果:略缩图的周围有蓝色的轮廓以及神秘的光晕。这张截图还展示软件按键的好处——上下文。返回按钮可以关闭略缩图列表,所以这里的箭头指向下方,而不是通常的样子。
|
||||||
|
|
||||||
|
----------
|
||||||
|
|
||||||
|
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
|
||||||
|
|
||||||
|
[Ron Amadeo][a] / Ron是Ars Technica的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
|
||||||
|
|
||||||
|
[@RonAmadeo][t]
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/16/
|
||||||
|
|
||||||
|
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[1]:http://arstechnica.com/gadgets/2011/03/ars-reviews-the-motorola-xoom/
|
||||||
|
[a]:http://arstechnica.com/author/ronamadeo
|
||||||
|
[t]:https://twitter.com/RonAmadeo
|
@ -0,0 +1,86 @@
|
|||||||
|
安卓编年史
|
||||||
|
================================================================================
|
||||||
|
![蜂巢的应用列表少了很多应用。上图还展示了通知中心和新的快速设置。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/apps-and-notifications2.png)
|
||||||
|
蜂巢的应用列表少了很多应用。上图还展示了通知中心和新的快速设置。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
默认的应用图标从32个减少到了25个,其中还有两个是第三方的游戏。因为蜂巢不是为手机设计的,而且谷歌希望默认应用都是为平板优化的,很多应用因此没有成为默认应用。被去掉的应用有亚马逊 MP3 商店,Car Home,Facebook,Google Goggles,信息,新闻与天气,电话,Twitter,谷歌语音,以及语音拨号。谷歌正在悄悄打造的音乐服务将于不久后面世,所以亚马逊 MP3 商店需要为它让路。Car Home,信息以及电话对一部不是手机的设备来说没有多大意义,Facebook 和 Twitter还没有平板版应用,Goggles,新闻与天气以及语音拨号几乎没什么人注意,就算移除了大多数人也不会想念它们的。
|
||||||
|
|
||||||
|
几乎每个应用图标都是全新设计的。就像是从 G1 切换到摩托罗拉 Droid,变化的最大动力是分辨率的提高。Nexus S 有一块800×480分辨率的显示屏,姜饼重新设计了图标等资源来适应它。Xoom 巨大的1280×800 10英寸显示屏意味着几乎所有设计都要重做。但是再说一次,这次是有真正的设计师在负责,所有东西看起来更有整体性了。蜂巢的应用列表从纵向滚动变为了横向分页式。这个变化对横屏设备有意义,而对手机来说,查找一个应用还是纵向滚动列表比较快。
|
||||||
|
|
||||||
|
第二张蜂巢截图展示的是新通知中心。姜饼中的灰色和黑色设计已经被抛弃了,现在是黑色面板带蓝色光晕。上面一块显示着日期时间,连接状态,电量和打开快速设置的按钮,下面是实际的通知。非持续性通知现在可以通过通知右侧的“X”来关闭。蜂巢是第一个支持通知内控制的版本。第一个(也是蜂巢发布时唯一一个)利用了此特性的应用是新的谷歌音乐,在它的通知上有上一曲,播放/暂停,下一曲按钮。这些控制可以在任何应用中访问到,这让控制音乐播放变成了一件轻而易举的事情。
|
||||||
|
|
||||||
|
![“添加到主屏幕”的缩小视图更易于组织布局。搜索界面将自动搜索建议和通用搜索分为两个面板显示。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/widgetkeyboard.png)
|
||||||
|
“添加到主屏幕”的缩小视图更易于组织布局。搜索界面将自动搜索建议和通用搜索分为两个面板显示。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
点击主屏幕右上角的加号或长按背景空白处就会打开新的主屏幕设置界面。蜂巢会在屏幕上半部分显示所有主屏的缩小视图,下半部分分页显示的是小部件和快捷方式。小部件或快捷方式可以从下半部分的抽屉中拖动到五个主屏幕中的任意一个上。姜饼只会显示一个文本列表,而蜂巢会显示小部件完整的略缩图预览。这让你更清楚一个小部件是什么样子的,而不是像原来的“日历”一样只是一个只有应用名称的描述。
|
||||||
|
|
||||||
|
摩托罗拉 Xoom 更大的屏幕让键盘的布局更加接近 PC 风格,退格,回车,shift 以及 tab 都在传统的位置上。键盘带有浅蓝色,并且键与键之间的空间更大了。谷歌还添加了一个专门的笑脸按钮。 :-)
|
||||||
|
|
||||||
|
![打开菜单的 Gmail 在蜂巢和姜饼上的效果。按钮布置在首屏更容易被发现。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/thebasics.png)
|
||||||
|
打开菜单的 Gmail 在蜂巢和姜饼上的效果。按钮布置在首屏更容易被发现。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
Gmail 示范了蜂巢所有的用户界面概念。安卓 3.0不再把所有控制都隐藏在菜单按钮之后。屏幕的顶部现在有一条带有图标的条带,叫做 Action Bar(操作栏),它将许多常用的控制选项提升到了主屏幕上,用户直接就能看到它们。Gmail 的操作栏显示着搜索,新邮件,刷新按钮,不常用的选项比如设置,帮助,以及反馈放在了“更多”按钮中。点击复选框或选中文本的时候时整个操作栏的图标会变成和操作相关的——举个例子,选择文本会出现复制,粘贴和全选按钮。
|
||||||
|
|
||||||
|
应用左上角显示的图标同时也作为称作“上一级”的导航按钮。“后退”的作用类似浏览器的后退按钮,导航到之前访问的页面,“上一级”则会导航至应用的上一层次。举例来说,如果你在安卓市场,点击“给开发者发邮件”,会打开 Gmail,“后退”会让你返回安卓市场,但是“上一级”会带你到 Gmail 的收件箱。“后退”可能会关闭当前应用,而“上一级”永远不会。应用可以控制“后退”按钮,它们往往重新定义它为“上一级”的功能。事实上,这两个按钮之间几乎没什么不同。
|
||||||
|
|
||||||
|
蜂巢还引入了 “Fragments” API,允许开发者开发同时适用于平板和手机的应用。一个 “Fragments”(格子) 是一个用户界面的面板。在上图的 Gmail 中,左边的文件夹列表是一个格子,收件箱是另一个格子。手机每屏显示一个格子,而平板则可以并列显示两个。开发者可以自行定义单独每个格子的外观,安卓会根据当前的设备决定如何显示它们。
|
||||||
|
|
||||||
|
![计算器使用了常规的安卓按钮,但日历看起来像是被谁打翻了蓝墨水。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/calculendar.png)
|
||||||
|
计算器使用了常规的安卓按钮,但日历看起来像是被谁打翻了蓝墨水。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
这是安卓历史上第一次计算器换上了没有特别定制的按钮,所以它看起来确实是系统的一部分。更大的屏幕有了更多空间容纳按钮,足够将计算器基本功能容纳在一个屏幕上。日历极大地受益于额外的显示空间,有了更多的空间显示事件文本和控制选项。顶部的操作栏有切换视图的按钮,显示当前时间跨度,以及常规按钮。事件块变成了白色背景,日历标识只在左上角显示。在底部(或横屏模式的侧边)显示的是月历和显示的日历列表。
|
||||||
|
|
||||||
|
日历的比例同样可以调整。通过两指缩放手势,纵向的周和日视图能够在一屏内显示五到十九小时的事件。日历的背景由不均匀的蓝色斑点组成,看起来不是特别棒,在随后的版本里就被抛弃了。
|
||||||
|
|
||||||
|
![新相机界面,取景器显示的是“负片”效果。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/camera.png)
|
||||||
|
新相机界面,取景器显示的是“负片”效果。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
巨大的10英寸 Xoom 平板有个摄像头,这意味着它同样有个相机应用。电子风格的重新设计终于甩掉了谷歌从安卓 1.6 以来使用的仿皮革外观。控制选项以环形排布在快门键周围,让人想起真正的相机上的圆形控制转盘。Cooliris 衍生的弹出对话气泡变成了带光晕的半透明黑色选框。蜂巢的截图显示的是新的“颜色效果”功能,它能给取景器实时加上滤镜效果。不像姜饼的相机应用,它不支持竖屏模式——它被限制在横屏状态。用10英寸的平板拍摄纵向照片没多大意义,但拍摄横向照片也没多大意义。
|
||||||
|
|
||||||
|
![时钟应用相比其它地方没受到多少关照。谷歌把它扔进一个小盒子里然后就收工了。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/clocks.png)
|
||||||
|
时钟应用相比其它地方没受到多少关照。谷歌把它扔进一个小盒子里然后就收工了。
|
||||||
|
Ron Amadeo 供图
|
||||||
|
|
||||||
|
无数功能已经成形了,现在是时候来重制一下时钟了。整个“桌面时钟”概念被踢出门外,取而代之的是在纯黑背景上显示的简单又巨大的时间数字。打开其它应用查看天气的功能不见了,随之而去的还有显示你的壁纸的功能。当要设计平板尺寸的界面时,有时候谷歌就放弃了,就像这里,就只是把时钟界面扔到了一个小小的,居中的对话框里。

![音乐应用终于得到了一直以来都需要的完全重新设计。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/muzack.png)

音乐应用终于得到了一直以来都需要的完全重新设计。

Ron Amadeo 供图

尽管音乐应用之前有得到一些小的加强,但这是自安卓 0.9 以来它第一次受到正视。重新设计的亮点是一个“别叫它封面流滚动 3D 专辑封面视图”,称作“最新和最近”。导航由操作栏的下拉框解决,取代了安卓 2.1 引入的标签页导航。尽管“最新和最近”有个 3D 滚动专辑封面,“专辑”使用的是专辑缩略图的平面方阵。另一个部分也有个完全不同的设计。“歌曲”使用了垂直滚动的文本列表,“播放列表”、“年代”和“艺术家”用的是堆砌专辑显示。

在几乎每个视图中,每个单独的项目有它自己单独的菜单,通常在每项的右下角有个小箭头。眼下这里只会显示“播放”和“添加到播放列表”,但这个版本的谷歌音乐是为未来搭建的。谷歌不久后就要发布音乐服务,这些独立菜单在像是在音乐商店里浏览该艺术家的其它内容,或是管理云存储和本地存储时将会是不可或缺的。

正如安卓 2.1 中的 Cooliris 风格的相册,谷歌音乐会将缩略图放大作为背景图片。底部的“正在播放”栏现在显示着专辑封面、播放控制,以及播放进度条。

![新谷歌地图的一些地方真的很棒,一些却是从安卓 1.5 来的。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps.png)

新谷歌地图的一些地方真的很棒,一些却是从安卓 1.5 来的。

Ron Amadeo 供图

谷歌地图也为大屏幕进行了重新设计。这个设计将会持续一段时间,它对所有的控制选项用了一个半透明的黑色操作栏。搜索再次成为主要功能,占据了操作栏显要位置,但这回可是真的搜索栏,你可以在里面输入关键字,不像以前那个搜索栏形状的按钮会打开完全不同的界面。谷歌最终还是放弃了给缩放控件留屏幕空间,仅仅依靠手势来控制地图显示。尽管 3D 建筑轮廓这个特性已经被移植到了旧版本的地图中,蜂巢依然是拥有这个特性的第一个版本。双指在地图上向下拖放会“倾斜”地图的视角,展示建筑的侧面。你可以随意旋转,建筑同样会跟着进行调整。

并不是所有部分都进行了重新设计。导航自姜饼以来就没动过,还有些界面的核心部分,像是路线,直接从安卓 1.6 的设计拿出来,放到一个小盒子里居中放置,仅此而已。

----------

![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)

[Ron Amadeo][a] / Ron是Ars Technica的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。

[@RonAmadeo][t]

--------------------------------------------------------------------------------

via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/17/

译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

LINUX 101: 让你的 SHELL 更强大
================================================================================

> 在我们的这篇有关 shell 基础的指导下,你可以得到一个更灵活、功能更强大且多彩的命令行界面。

**为何要这样做?**

- 使得在 shell 提示符下工作更轻松、高效
- 在失去连接后恢复先前的会话
- 丢开那只难用的鼠标吧!

![bash1](http://www.linuxvoice.com/wp-content/uploads/2015/02/bash1-large15.png)

这就是我们加强后“火力全开”的提示符。对于这个细小的终端窗口来说,这或许有些长,但你可以根据你的喜好来调整它的大小。

作为一个 Linux 用户,你可能对 shell(又名命令行)已经很熟悉了。或许你需要时不时地打开终端来完成那些不能在 GUI 下处理的必要任务,抑或是因为你处在一个平铺窗口管理器的环境中,而 shell 是你与 Linux 机器交互的主要方式。

在上面的任一情况下,你可能正在使用你所用发行版本自带的 Bash 配置。尽管对于大多数的任务而言,它足够强大,但它可以更加强大。在本教程中,我们将向你展示如何使得你的 shell 更具信息性、更加实用且更适于在其中工作。我们将对提示符进行自定义,让它比默认情况下提供更好的反馈,并向你展示如何使用炫酷的 `tmux` 工具来管理会话并同时运行多个程序。并且,为了让眼睛舒服一点,我们还将关注配色方案。接着,就让我们向前吧!

### 让提示符“唱歌” ###

大多数的发行版本配置有一个非常简单的提示符,它们大多向你展示了一些基本信息,但提示符可以为你提供更多的内容。例如,在 Debian 7 下,默认的提示符是这样的:

    mike@somebox:~$

上面的提示符展示出了用户、主机名、当前目录和账户类型符号(假如你切换到 root 账户,**$** 会变为 #)。那这些信息是在哪里存储的呢?答案是:在 **PS1** 环境变量中。假如你键入 **echo $PS1**,你将会在这个命令的输出字符串的最后看到如下的字符:

    \u@\h:\w$

这看起来有一些丑陋,并在瞥见它的第一眼时,你可能会开始尖叫,认为它是令人恐惧的正则表达式,但我们不打算用这些复杂的字符来煎熬我们的大脑。这不是正则表达式,这里的斜杠是转义序列,它告诉提示符进行一些特别的处理。例如,上面的 **\u** 部分,告诉提示符展示用户名,而 **\w** 则展示工作路径。

下面是一些你可以在提示符中用到的字符的列表:

- \d 当前的日期。
- \h 主机名。
- \n 代表新的一行的字符。
- \A 当前的时间(HH:MM)。
- \u 当前的用户。
- \w (小写)整个工作路径的全称。
- \W (大写)工作路径的简短名称。
- \$ 一个提示符号,对于 root 用户为 # 号。
- \! 当前命令在 shell 历史记录中的序号。

下面解释 **\w** 和 **\W** 选项的区别:对于前者,你将看到你所在的工作路径的完整地址(例如 **/usr/local/bin**),而对于后者,它则只显示 **bin** 这一部分。

现在,我们该怎样改变提示符呢?你需要更改 **PS1** 环境变量的内容,试试下面这个:

    export PS1="I am \u and it is \A $"

现在,你的提示符将会像下面这样:

    I am mike and it is 11:26 $

从这个例子出发,你就可以按照你的想法来试验一下上面列出的其他转义序列。但稍等片刻,当你登出后,你的这些努力都将消失,因为在你每次打开终端时,**PS1** 环境变量的值都会被重置。解决这个问题的最简单方式是打开 **.bashrc** 配置文件(在你的家目录下),并在这个文件的最下方添加上完整的 `export` 命令。在每次你启动一个新的 shell 会话时,这个 **.bashrc** 会被 `Bash` 读取,所以你的被加强了的提示符就可以一直出现。你还可以使用额外的颜色来装扮提示符。刚开始,这将有点棘手,因为你必须使用一些相当奇怪的转义序列,但结果是非常漂亮的。将下面的字符添加到你的 **PS1** 字符串中的某个位置,最终这将把文本变为红色:

    \[\e[31m\]

你可以将这里的 31 更改为其他的数字来获得不同的颜色:

- 30 黑色
- 32 绿色
- 33 黄色
- 34 蓝色
- 35 洋红色
- 36 青色
- 37 白色

所以,让我们使用先前看到的转义序列和颜色来创造一个提示符,以此来结束这一小节的内容。深吸一口气,弯曲一下你的手指,然后键入下面这只“野兽”:

    export PS1="(\!) \[\e[31m\] \[\A\] \[\e[32m\]\u@\h \[\e[34m\]\w \[\e[30m\]$"

上面的命令提供了一个 Bash 命令历史序号、当前的时间、彩色的用户名与主机名,以及工作路径。假如你“野心勃勃”,利用一些惊人的组合,你还可以更改提示符的背景色和前景色。一直很实用的 Arch wiki 上有一个关于颜色代码的完整列表:[http://tinyurl.com/3gvz4ec][1]。
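
在往 **PS1** 里填颜色之前,可以先在终端里把各个颜色代码预览一遍。下面是一个可以直接运行的小片段(示例,非原文内容),用 printf 依次打印 30 到 37 号前景色;注意这里的 \033 就是 Esc 字符,对应提示符写法中的 \e:

```shell
# 依次打印 30-37 号 ANSI 前景色的效果,便于挑选提示符颜色
for code in 30 31 32 33 34 35 36 37; do
    printf '\033[%sm color %s \033[0m\n' "$code" "$code"
done
```

末尾的 \033[0m 会把颜色重置回默认值,这也是提示符字符串里通常以重置序列收尾的原因。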

> ### Shell 精要 ###
>
> 假如你是一个彻底的 Linux 新手并第一次阅读这份杂志,或许你会发觉阅读这些教程有些吃力。所以这里有一些基础知识来让你熟悉一下 shell。通常在你的菜单中,shell 指的是 Terminal、XTerm 或 Konsole,但你启动它后,最为实用的命令有这些:
>
> **ls**(列出文件名);**cp one.txt two.txt**(复制文件);**rm file.txt**(移除文件);**mv old.txt new.txt**(移动或重命名文件);
>
> **cd /some/directory**(改变目录);**cd ..**(回到上级目录);**./program**(在当前目录下运行一个程序);**ls > list.txt**(重定向输出到一个文件)。
>
> 几乎每个命令都有一个手册页用来解释其选项(例如 **man ls**,按 Q 来退出)。在那里,你可以知晓命令的选项,这样你就知道 **ls -la** 会展示一个详细的列表,其中也列出了隐藏文件,并且在键入一个文件或目录的名字的一部分后,可以使用 Tab 键来自动补全。

### Tmux:针对 shell 的窗口管理器 ###

在文本模式的环境中使用一个窗口管理器,这听起来有点不可思议,是吧?然而,你应该记得 Web 浏览器第一次实现分页浏览的时候吧?在当时,这是在可用性上的一个重大进步,它减少了桌面任务栏的杂乱无章和繁多的窗口列表。对于你的浏览器来说,你只需要一个按钮便可以在浏览器中切换到你打开的每个单独网站,而不是针对每个网站都有一个任务栏或导航图标。这个功能非常有意义。

若有时你同时运行着几个虚拟终端,你便会遇到相似的情况;在这些终端之间跳转,或每次在任务栏或窗口列表中找到你所需要的那一个终端,都可能会让你觉得麻烦。拥有一个文本模式的窗口管理器不仅可以让你像在同一个终端窗口中一样运行多个 shell 会话,而且你甚至还可以将这些窗口排列在一起。

另外,这样还有另一个好处:可以将这些窗口进行分离和重新连接。想要看看这是如何运行的,最好的方式是自己尝试一下。在一个终端窗口中,输入 **screen**(在大多数发行版本中,它被默认安装了,或者可以在软件包仓库中找到)。某些欢迎的文字将会出现,只需敲击 Enter 键,这些文字就会消失。现在运行一个交互式的文本模式的程序,例如 **nano**,并关闭这个终端窗口。

在一个正常的 shell 对话中,关闭窗口将会终止所有在该终端中运行的进程,所以刚才的 Nano 编辑对话也就被终止了,但对于 screen 来说,并不是这样的。打开一个新的终端并输入如下命令:

    screen -r

瞧,你刚才打开的 Nano 会话又回来了!

当刚才你运行 **screen** 时,它创建了一个新的独立的 shell 会话,它不与某个特定的终端窗口绑定在一起,所以可以在后面被分离并重新连接(即 **-r** 选项)。

当你正使用 SSH 去连接另一台机器并做着某些工作,但并不想因为一个单独的连接断开而毁掉你的所有进程时,这个方法尤其有用。假如你在一个 **screen** 会话中做着某些工作,并且你的连接突然中断了(或者你的笔记本没电了,又或者你的电脑报废了),你只需重新连接一个新的电脑,或给电脑充电,或重新买一台电脑,接着运行 **screen -r** 来重新连接到远程的电脑,并在刚才掉线的地方接着开始。

现在,我们一直都在讨论 GNU 的 **screen**,但这个小节的标题提到的是 tmux。实质上,**tmux**(terminal multiplexer)就像是 **screen** 的一个进阶版本,带有许多有用的额外功能,所以现在我们开始关注 tmux。某些发行版本默认包含了 **tmux**;在其他的发行版本上,通常只需要一个 **apt-get**、**yum install** 或 **pacman -S** 命令便可以安装它。

一旦你安装了它过后,键入 **tmux** 来启动它。接着你将注意到,在终端窗口的底部有一条绿色的信息栏,它非常像传统的窗口管理器中的任务栏:上面显示着一个运行着的程序的列表、机器的主机名、当前时间和日期。现在运行一个程序,又以 Nano 为例,敲击 Ctrl+B 后接着按 C 键,这将在 tmux 会话中创建一个新的窗口,你便可以在终端的底部的任务栏中看到如下的信息:

    0:nano- 1:bash*

每一个窗口都有一个数字,当前呈现的程序被一个星号所标记。Ctrl+B 是与 tmux 交互的标准方式,所以若你敲击这个按键组合并带上一个窗口序号,那么就会切换到对应的那个窗口。你也可以使用 Ctrl+B 再加上 N 或 P 来分别切换到下一个或上一个窗口,或者使用 Ctrl+B 加上 L 来在最近使用的两个窗口之间进行切换(有点类似于桌面中的经典的 Alt+Tab 组合键的效果)。若需要知道窗口列表,使用 Ctrl+B 再加上 W。

目前为止,一切都还好:现在你可以在一个单独的终端窗口中运行多个程序,避免混乱(尤其是当你经常与同一个远程主机保持多个 SSH 连接时)。当想同时看两个程序又该怎么办呢?

针对这种情况,可以使用 tmux 中的窗格。敲击 Ctrl+B 再加上 %,则当前窗口将分为两个部分,一个在左一个在右。你可以使用 Ctrl+B 再加上 O 来在这两个部分之间切换。这尤其在你想同时看两个东西时非常实用,例如一个窗格看指导手册,另一个窗格里用编辑器看一个配置文件。

有时,你想对一个单独的窗格进行缩放,而这需要一定的技巧。首先你需要敲击 Ctrl+B 再加上一个 :(冒号),这将使得位于底部的 tmux 栏变为深橙色。现在,你进入了命令模式,在这里你可以输入命令来操作 tmux。输入 **resize-pane -R** 来使当前窗格向右扩大一个字符的宽度,或使用 **-L** 来向左扩大。对于一个简单的操作,这些命令似乎有些长,但请注意,在 tmux 的命令模式(以前面提到的冒号开始的模式)下,可以使用 Tab 键来补全命令。另外需要提及的是,**tmux** 同样也有一个命令历史记录,所以若你想重复刚才的缩放操作,可以先敲击 Ctrl+B 再跟上一个冒号,并使用向上的箭头来取回刚才输入的命令。

最后,让我们看一下分离和重新连接,即我们刚才介绍的 screen 的特色功能。在 tmux 中,敲击 Ctrl+B 再加上 D 来从当前的终端窗口中分离当前的 tmux 会话,这使得这个会话的一切工作都在后台中运行。使用 **tmux a** 可以再重新连接到刚才的会话。但若你同时有多个 tmux 会话在运行时,又该怎么办呢?我们可以使用下面的命令来列出它们:

    tmux ls

这个命令将为每个会话分配一个序号;假如你想重新连接到会话 1,可以使用 `tmux a -t 1`。tmux 是可以高度定制的,你可以自定义按键绑定并更改配色方案,所以一旦你适应了它的主要功能,请钻研指导手册以了解更多的内容。
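
上面提到的自定义通常从一个简单的 ~/.tmux.conf 文件开始。下面是一个假设性的配置示例(并非原文内容,只是常见的个人偏好;选项名以 tmux 手册为准,其中 mouse 选项需要 tmux 2.1 及以上版本):

```conf
# ~/.tmux.conf 示例片段
# 把前缀键从 Ctrl+B 换成更顺手的 Ctrl+A
set -g prefix C-a
unbind C-b
bind C-a send-prefix
# 启用鼠标选择窗格与调整窗格大小(tmux >= 2.1)
set -g mouse on
```

保存后,在已运行的 tmux 中进入命令模式执行 source-file ~/.tmux.conf 即可让配置生效。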

tmux:一个针对 shell 的窗口管理器

![tmux](http://www.linuxvoice.com/wp-content/uploads/2015/02/tmux-large13.jpg)

上图中,tmux 开启了两个窗格:左边是 Vim 正在编辑一个配置文件,而右边则展示着指导手册页。

> ### Zsh:另一个 shell ###
>
> 选择是好的,但标准同样重要。你要知道几乎每个主流的 Linux 发行版本都默认使用 Bash shell,尽管还存在其他的 shell。Bash 为你提供了一个 shell 能够给你提供的几乎任何功能,包括命令历史记录、文件名补全和许多脚本编程的能力。它成熟、可靠并且文档丰富,但它不是你唯一的选择。
>
> 许多高级用户热衷于 Zsh,即 Z shell。这是 Bash 的一个替代品,提供了 Bash 的几乎所有功能,另外还提供了一些额外的功能。例如,在 Zsh 中,你输入 **ls** 并敲击 Tab 键,可以得到 **ls** 可用的各种不同选项的简短描述,而不需要再打开 man page 了!
>
> Zsh 还支持其他强大的自动补全功能:例如,输入 **cd /u/lo/bi** 再敲击 Tab 键,则完整的路径名 **/usr/local/bin** 就会出现(这里假设没有其他的路径包含 **u**、**lo** 和 **bi** 等字符)。或者只输入 **cd** 再跟上 Tab 键,则你将看到着色后的路径名的列表,这比 Bash 给出的简单的结果好看得多。
>
> Zsh 在大多数的主要发行版本上都可以得到;安装它后并输入 **zsh** 便可启动它。要将你的默认 shell 从 Bash 改为 Zsh,可以使用 **chsh** 命令。若需了解更多的信息,请访问 [www.zsh.org][2]。

### “未来”的终端 ###

你或许会好奇为什么包含你的命令行提示符的应用被叫做终端。这需要追溯到 Unix 的早期,那时人们一般工作在一个多用户的机器上,这个巨大的电脑主机将占据一座建筑中的一个房间,人们在某些线路的配合下,使用屏幕和键盘来连接到这个主机。这些终端机通常被称为“哑终端”,因为它们不能靠自己执行任何重要的任务,它们只展示通过线路从主机传来的信息,并输送回从键盘的敲击中得到的输入信息。

今天,我们几乎都是在自己的机器上执行实际的操作,所以我们的电脑不是传统意义下的终端,这就是为什么诸如 **XTerm**、Gnome Terminal、Konsole 等程序被称为“终端模拟器”的原因,它们提供了同昔日的物理终端一样的功能。事实上,在许多方面它们并没有改变多少。诚然,现在我们有了反锯齿字体、更好的颜色和点击网址的能力,但总的来说,几十年来我们一直以同样的方式在工作。

所以某些程序员正尝试改变这个状况。**Terminology**([http://tinyurl.com/osopjv9][3])来自于超级时髦的 Enlightenment 窗口管理器背后的团队,旨在将终端引入 21 世纪,例如带有在线媒体显示功能。你可以在一个充满图片的目录里输入 **ls** 命令,便可以看到它们的缩略图,甚至可以直接在你的终端里播放视频。这使得一个终端有点类似于一个文件管理器,意味着你可以快速地检查媒体文件的内容而不必用另一个应用来打开它们。

接着还有 Xiki([www.xiki.org][4]),它对自身的描述为“命令的革新”。它就像是一个传统的 shell、一个 GUI 和一个 wiki 之间的过渡;你可以在任何地方输入命令,并在后面将它们的输出存储为笔记以作为参考,还可以创建非常强大的自定义命令。用几句话很难描述清楚它,所以作者们制作了一个视频来展示它的潜力是多么的巨大(请看 **Xiki** 网站的截屏视频部分)。

并且 Xiki 绝不是那种在几个月之内就消亡的昙花一现的项目,作者们成功地进行了一次 Kickstarter 众筹,在七月底已募集到超过 $84,000。是的,你没有看错,$84K 来支持一个终端模拟器。这可能是最不寻常的集资活动,因为某些疯狂的家伙已经决定开始创办他们自己的 Linux 杂志……

### 下一代终端 ###

许多命令行和基于文本的程序在功能上与它们的 GUI 版本是相同的,并且常常更加快速和高效。我们的推荐有:

**Irssi**(IRC 客户端);**Mutt**(mail 客户端);**rTorrent**(BitTorrent 客户端);**Ranger**(文件管理器);**htop**(进程监视器)。至于 Web 浏览,在终端的限制下,Elinks 确实做得很好,对于阅读那些以文字为主的网站,例如 Wikipedia,它非常实用。

> ### 微调配色方案 ###
>
> 在 Linux Voice 中,我们并不迷恋养眼的东西,但当你每天花费几个小时盯着屏幕看东西时,我们确实认识到美学的重要性。我们中的许多人都喜欢调整我们的桌面和窗口管理器来达到完美的效果,调整阴影效果,摆弄不同的配色方案,直到我们 100% 的满意。(然后出于习惯,摆弄更多的东西。)
>
> 但我们倾向于忽视终端窗口,它理应也获得我们的喜爱。在 [http://ciembor.github.io/4bit][5] 你将看到一个极其棒的配色方案设计器,它可以为所有受欢迎的终端模拟器生成配色设定(支持的应用包括 XTerm、Gnome Terminal、Konsole 和 Xfce4 Terminal 等)。移动滑动条,直到你对配色方案满意为止,然后点击位于该页面右上角的“得到方案”按钮。
>
> 相似的,假如你在一个文本编辑器,如 Vim 或 Emacs 上花费很多时间,使用一个精心设计的调色板也是非常值得的。[http://ethanschoonover.com/solarized][6] 上的 **Solarized** 是一个卓越的配色方案,它不仅漂亮,而且是为追求最大的可用性而设计的,在其背后有着大量的研究和测试。

--------------------------------------------------------------------------------

via: http://www.linuxvoice.com/linux-101-power-up-your-shell-8/

作者:[Ben Everard][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxvoice.com/author/ben_everard/
[1]:http://tinyurl.com/3gvz4ec
[2]:http://www.zsh.org/
[3]:http://tinyurl.com/osopjv9
[4]:http://www.xiki.org/
[5]:http://ciembor.github.io/4bit
[6]:http://ethanschoonover.com/solarized

如何使用Docker Machine部署Swarm集群
================================================================================

大家好,今天我们来研究一下如何使用 Docker Machine 部署 Swarm 集群。Docker Machine 提供了标准的 Docker API,所以任何可以与 Docker 守护进程进行交互的工具都可以使用 Swarm 来(透明地)扩展到多台主机上。Docker Machine 可以用来在个人电脑、云端以及数据中心里创建 Docker 主机。它为创建服务器、安装 Docker 以及根据用户设定配置 Docker 客户端提供了便捷的解决方案。我们可以使用任何驱动来部署 swarm 集群,并且由于使用了 TLS 加密,swarm 集群具有极好的安全性。

下面是我提供的简便方法。

### 1. 安装Docker Machine ###

Docker Machine 在各种 Linux 系统上都可以使用。首先,我们需要从 GitHub 上下载最新版本的 Docker Machine。我们使用 curl 命令来下载最新版本的 Docker Machine,即 0.2.0。

64位操作系统:

    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine

32位操作系统:

    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine

下载了最新版本的 Docker Machine 之后,我们需要对 /usr/local/bin/ 目录下的 docker-machine 文件的权限进行修改。命令如下:

    # chmod +x /usr/local/bin/docker-machine

在做完上面的事情以后,我们要确保 docker-machine 已经安装好。怎么检查呢?运行 docker-machine -v 指令,该指令将会给出我们系统上所安装的 docker-machine 版本。

    # docker-machine -v

![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png)

为了让 Docker 命令能够在我们的机器上运行,必须还要在机器上安装 Docker 客户端。命令如下:

    # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
    # chmod +x /usr/local/bin/docker
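
在继续之前,可以用一个小脚本确认两个二进制文件都已经安装妥当(示例,非原文内容):

```shell
# 检查 docker-machine 与 docker 是否都能在 PATH 中找到
for cmd in docker-machine docker; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "$cmd: ok"
    else
        echo "$cmd: missing"
    fi
done
```

任意一个显示 missing 时,请回到上面的下载和 chmod 步骤重新执行。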

### 2. 创建Machine ###

在将 Docker Machine 安装到我们的设备上之后,我们需要使用 Docker Machine 创建一个 machine。在这篇文章中,我们会将其部署在 Digital Ocean 平台上。所以我们将使用 “digitalocean” 作为它的驱动 API,然后将 docker swarm 运行在其中。这个 Droplet 会被设置为 Swarm 主节点,我们还要创建另外一个 Droplet,并将其设定为 Swarm 节点代理。

创建 machine 的命令如下:

    # docker-machine create --driver digitalocean --digitalocean-access-token <API-Token> linux-dev

**注意**:这里假设我们要创建一个名为 “linux-dev” 的 machine。<API-Token> 是用户在 Digital Ocean 云平台的控制面板中生成的密钥。为了获取这个密钥,我们需要登录 Digital Ocean 控制面板,然后点击 API 选项,之后点击 Generate New Token,起个名字,然后在 Read 和 Write 两个选项上打钩。之后我们将得到一个很长的十六进制密钥,这个就是 <API-Token> 了。用其替换上面那条命令中的 <API-Token> 字段。
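
为了避免把密钥明文留在命令行历史里,一种常见做法是先把它放进一个变量。下面的变量名与“干跑”方式只是示例(非原文内容);确认打印出的命令无误后,去掉 echo 即可真正执行:

```shell
# 示例占位值,请替换为你自己的 Digital Ocean API Token
DO_TOKEN="your-api-token"
# 先打印将要执行的命令进行确认
echo docker-machine create --driver digitalocean \
    --digitalocean-access-token "$DO_TOKEN" linux-dev
```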

现在,运行下面的指令,将 machine 的配置加载进 shell。

    # eval "$(docker-machine env linux-dev)"

![Docker Machine Digitalocean Cloud](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-digitalocean-cloud.png)

然后,我们使用如下命令将我们的 machine 标记为 ACTIVE 状态。

    # docker-machine active linux-dev

现在,我们检查它(指 machine)是否被标记为了 ACTIVE “*”。

    # docker-machine ls

![Docker Machine Active List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-active-list.png)

### 3. 运行Swarm Docker镜像 ###

现在,在我们创建完成了 machine 之后,我们需要将 swarm docker 镜像部署上去。这个 machine 将会运行这个 docker 镜像,并且控制 Swarm 主节点和从节点。使用下面的指令运行镜像:

    # docker run swarm create

![Docker Machine Swarm Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-create.png)

如果你想在安装有 Docker Machine 的电脑上,使用**32位操作系统**运行 swarm docker 镜像,你需要 SSH 登录到 Droplet 当中。

    # docker-machine ssh
    # docker run swarm create
    # exit

### 4. 创建Swarm主节点 ###

在 swarm 镜像已经运行在 machine 当中之后,我们将要创建一个 Swarm 主节点。使用下面的语句,添加一个主节点。

    # docker-machine create \
    -d digitalocean \
    --digitalocean-access-token <DIGITALOCEAN-TOKEN> \
    --swarm \
    --swarm-master \
    --swarm-discovery token://<CLUSTER-ID> \
    swarm-master

![Docker Machine Swarm Master Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-master-create.png)

### 5. 创建Swarm节点 ###

现在,我们将要创建一个 swarm 节点,此节点将与 Swarm 主节点相连接。下面的指令将创建一个新的名为 swarm-node 的 Droplet,其与 Swarm 主节点相连。到此,我们就拥有了一个两节点的 swarm 集群了。

    # docker-machine create \
    -d digitalocean \
    --digitalocean-access-token <DIGITALOCEAN-TOKEN> \
    --swarm \
    --swarm-discovery token://<TOKEN-FROM-ABOVE> \
    swarm-node

![Docker Machine Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-nodes.png)

### 6. 与Swarm主节点连接 ###

现在,我们连接 Swarm 主节点,以便我们可以依照需求和配置文件在各节点间部署 Docker 容器。运行下列命令将 Swarm 主节点的 machine 配置加载到环境当中。

    # eval "$(docker-machine env --swarm swarm-master)"
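
docker-machine env 实际上只是导出几个 DOCKER_* 环境变量(例如指向远端守护进程的地址和 TLS 证书路径)。载入之后可以这样查看它们(示例,非原文内容):

```shell
# 列出当前 shell 中所有 DOCKER_ 开头的环境变量
env | grep '^DOCKER_' || echo "当前环境中没有 DOCKER_* 变量"
```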

然后,我们就可以跨节点地运行我们所需的容器了。在这里,我们还要检查一下是否一切正常。所以,运行 **docker info** 命令来检查 Swarm 集群的信息。

    # docker info

### 总结 ###

我们可以用 Docker Machine 轻而易举地创建 Swarm 集群。这种方法有非常高的效率,因为它极大地减少了系统管理员和用户的时间消耗。在这篇文章中,我们以 Digital Ocean 作为驱动,通过创建一个主节点和一个从节点成功地部署了集群。其他类似的驱动还有 VirtualBox、Google Cloud Computing、Amazon Web Services、Microsoft Azure 等等。这些连接都是通过 TLS 进行加密的,具有很高的安全性。如果你有任何的疑问、建议、反馈,欢迎在下面的评论框中写下来,以便我们可以更好地提高文章的质量!

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-machine/

作者:[Arun Pyasi][a]
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/

如何管理Vim插件
================================================================================

Vim 是 Linux 上一个轻量级的通用文本编辑器。虽然它开始时的学习曲线对于一般的 Linux 用户来说可能很困难,但比起它的好处,这些付出完全是值得的。随着功能的增长,在插件工具的应用下,Vim 是完全可定制的。但是,由于它的插件配置比较高级,你需要花一些时间去了解它的插件系统,然后才能够有效地去个性化定制 Vim。幸运的是,我们已经有一些工具能够使我们在使用 Vim 插件时更加轻松,而我日常所使用的就是 Vundle。

### 什么是Vundle ###

[Vundle][1] 是一个 Vim 插件管理器,用于支持 Vim 包。Vundle 能让你很简单地实现插件的安装、升级、搜索或者清除。它还能管理你的运行环境并且在标签方面提供帮助。

### 安装Vundle ###

首先,如果你的 Linux 系统上没有 Git 的话,先[安装 Git][2]。

接着,创建一个目录,Vim 的插件将会被下载并且安装在这个目录上。默认情况下,这个目录为 ~/.vim/bundle。

    $ mkdir -p ~/.vim/bundle

现在,按如下方式安装 Vundle。注意 Vundle 本身也是一个 vim 插件,因此我们同样把 Vundle 安装到之前创建的目录 ~/.vim/bundle 下。

    $ git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim
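
克隆完成后,可以顺手确认目录已经就位(示例片段,非原文内容):

```shell
# 检查 Vundle 是否已经克隆到位
if [ -d "$HOME/.vim/bundle/Vundle.vim" ]; then
    echo "Vundle 已安装"
else
    echo "Vundle 尚未安装,请先执行上面的 git clone"
fi
```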

### 配置Vundle ###

现在配置你的 .vimrc 文件如下:

    set nocompatible " This is required
    " 这是必需的。(译注:中文注释为译者所加,下同。)
    filetype off " This is required
    " 这是必需的。

    " Here you set up the runtime path
    " 在这里设置你的运行时环境的路径。
    set rtp+=~/.vim/bundle/Vundle.vim

    " Initialize vundle
    " 初始化 vundle
    call vundle#begin()

    " This should always be the first
    " 这一行应该永远放在前面。
    Plugin 'gmarik/Vundle.vim'

    " This examples are from https://github.com/gmarik/Vundle.vim README
    " 这些示例来自 https://github.com/gmarik/Vundle.vim README
    Plugin 'tpope/vim-fugitive'

    " Plugin from http://vim-scripts.org/vim/scripts.html
    " 取自 http://vim-scripts.org/vim/scripts.html 的插件
    Plugin 'L9'

    " Git plugin not hosted on GitHub
    " Git 插件,但并不托管在 GitHub 上。
    Plugin 'git://git.wincent.com/command-t.git'

    " git repos on your local machine (i.e. when working on your own plugin)
    " 本地计算机上的 Git 仓库路径(例如,当你在开发你自己的插件时)
    Plugin 'file:///home/gmarik/path/to/plugin'

    " The sparkup vim script is in a subdirectory of this repo called vim.
    " Pass the path to set the runtimepath properly.
    " vim 脚本 sparkup 存放在这个仓库下一个名叫 vim 的子目录中。
    " 传入该路径来正确地设置运行时路径。
    Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}

    " Avoid a name conflict with L9
    " 避免与 L9 发生名字上的冲突
    Plugin 'user/L9', {'name': 'newL9'}

    " Every Plugin should be before this line
    " 所有的插件都应该在这一行之前。
    call vundle#end() " required 必需

容我简单解释一下上面的设置:默认情况下,Vundle 将从 github.com 或者 vim-scripts.org 下载和安装 vim 插件。你也可以改变这个默认行为。

要从 GitHub 安装插件:

    Plugin 'user/plugin'

要从 http://vim-scripts.org/vim/scripts.html 处安装:

    Plugin 'plugin_name'

要从另外一个 git 仓库中安装:

    Plugin 'git://git.another_repo.com/plugin'

从本地文件中安装:

    Plugin 'file:///home/user/path/to/plugin'

你同样可以定制其它东西,例如你的插件的运行时路径,当你自己在编写一个插件时,或者你只是想从其它目录——而不是 ~/.vim——中加载插件时,这样做就非常有用。

    Plugin 'rstacruz/sparkup', {'rtp': 'another_vim_path/'}

如果你有同名的插件,你可以重命名你的插件,这样它们就不会发生冲突了。

    Plugin 'user/plugin', {'name': 'newPlugin'}

### 使用Vundle命令 ###

一旦你用 Vundle 设置好你的插件,你就可以通过几个 Vundle 命令来安装、升级、搜索插件,或者清除没有用的插件。

#### 安装一个新的插件 ####

运行 PluginInstall 命令,将会安装所有列在你的 .vimrc 文件中的插件。你也可以传递一个插件名给它,来安装某个特定的插件。

    :PluginInstall
    :PluginInstall <插件名>

![](https://farm1.staticflickr.com/559/18998707843_438cd55463_c.jpg)

#### 清除没有用的插件 ####

如果你有任何没有用到的插件,你可以通过 PluginClean 命令来删除它。

    :PluginClean

![](https://farm4.staticflickr.com/3814/19433047689_17d9822af6_c.jpg)

#### 查找一个插件 ####

如果你想从提供的插件清单中安装一个插件,搜索功能会很有用。

    :PluginSearch <文本>

![](https://farm1.staticflickr.com/541/19593459846_75b003443d_c.jpg)

在搜索的时候,你可以在交互式的分割窗口中安装、清除、重新搜索或者重新加载插件清单。安装后的插件不会自动加载生效,要使其加载生效,可以将它们添加进你的 .vimrc 文件中。

### 总结 ###

Vim 是一个妙不可言的工具。它不单单是一个能够使你的工作更加顺畅高效的默认文本编辑器,同时它还能够摇身一变,成为现存的几乎任何一门编程语言的 IDE。

注意,有一些网站能帮你找到适合的 vim 插件。猛击 [http://www.vim-scripts.org][3]、GitHub 或者 [http://www.vimawesome.com][4] 来获取新的脚本或插件。同时记得查阅你所用插件的帮助文档。

和你最爱的编辑器一起嗨起来吧!

--------------------------------------------------------------------------------

via: http://xmodulo.com/manage-vim-plugins.html

作者:[Christopher Valerio][a]
译者:[XLCYun(袖里藏云)](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/valerio
[1]:https://github.com/VundleVim/Vundle.vim
[2]:http://ask.xmodulo.com/install-git-linux.html
[3]:http://www.vim-scripts.org/
[4]:http://www.vimawesome.com/
@ -1,24 +1,23 @@
|
|||||||
How to Configure Chef (server/client) on Ubuntu 14.04 / 15.04
|
如何在Ubuntu 14.04/15.04上配置Chef(服务端/客户端)
|
||||||
================================================================================
|
================================================================================
|
||||||
Chef is a configuration management and automation tool for information technology professionals that configures and manages your infrastructure whether it is on-premises or in the cloud. It can be used to speed up application deployment and to coordinate the work of multiple system administrators and developers involving hundreds, or even thousands, of servers and applications to support a large customer base. The key to Chef’s power is that it turns infrastructure into code. Once you master Chef, you will be able to enable web IT with first class support for managing your cloud infrastructure with an easy automation of your internal deployments or end users systems.
|
Chef是面向IT专业人员的一款配置管理和自动化工具,无论你的基础设施是部署在本地还是在云端,它都可以进行配置和管理。它可以用于加速应用部署,并协调多个系统管理员和开发人员的工作,管理成百上千台服务器和应用程序,以支持庞大的客户群。Chef 的强大之处在于它将基础设施变成了代码。一旦你掌握了 Chef,你就可以通过轻松自动化内部部署或终端用户系统,获得一流的 IT 支持来管理你的云端基础设施。
|
||||||
|
|
||||||
Here are the major components of Chef that we are going to setup and configure in this tutorial.
|
下面是我们将要在本篇中要设置和配置Chef的主要组件。
|
||||||
chef components
|
|
||||||
|
|
||||||
![](http://blog.linoxide.com/wp-content/uploads/2015/07/chef.png)
|
![](http://blog.linoxide.com/wp-content/uploads/2015/07/chef.png)
|
||||||
|
|
||||||
### Chef Prerequisites and Versions ###
|
### 安装Chef的要求和版本 ###
|
||||||
|
|
||||||
We are going to setup Chef configuration management system under the following basic environment.
|
我们将在下面的基础环境下设置Chef配置管理系统。
|
||||||
|
|
||||||
注:表格
|
注:表格
|
||||||
<table width="701" style="height: 284px;">
|
<table width="701" style="height: 284px;">
|
||||||
<tbody>
|
<tbody>
|
||||||
<tr>
|
<tr>
|
||||||
<td width="660" colspan="2"><strong>Chef, Configuration Management Tool</strong></td>
|
<td width="660" colspan="2"><strong>管理和配置工具:Chef</strong></td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td width="220"><strong>Base Operating System</strong></td>
|
<td width="220"><strong>基础操作系统</strong></td>
|
||||||
<td width="492">Ubuntu 14.04.1 LTS (x86_64)</td>
|
<td width="492">Ubuntu 14.04.1 LTS (x86_64)</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
@ -34,135 +33,138 @@ We are going to setup Chef configuration management system under the following b
|
|||||||
<td width="492">Version 0.6.2</td>
|
<td width="492">Version 0.6.2</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td width="220"><strong>RAM and CPU</strong></td>
|
<td width="220"><strong>内存和CPU</strong></td>
|
||||||
<td width="492">4 GB , 2.0+2.0 GHZ</td>
|
<td width="492">4 GB , 2.0+2.0 GHZ</td>
|
||||||
</tr>
|
</tr>
|
||||||
</tbody>
|
</tbody>
|
||||||
</table>
|
</table>
|
||||||
|
|
||||||
### Chef Server's Installation and Configurations ###
|
### Chef服务端的安装和配置 ###
|
||||||
|
|
||||||
Chef Server is central core component that stores recipes as well as other configuration data and interact with the workstations and nodes. let's download the installation media by selecting the latest version of chef server from its official web link.
|
Chef服务端是核心组件,它存储配置以及其他和工作站交互的配置数据。让我们在他们的官网下载最新的安装文件。
|
||||||
|
|
||||||
We will get its installation package and install it by using following commands.
|
我使用下面的命令来下载和安装它。
|
||||||
|
|
||||||
**1) Downloading Chef Server**
|
**1) 下载Chef服务端**
|
||||||
|
|
||||||
root@ubuntu-14-chef:/tmp# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.1.0-1_amd64.deb
|
root@ubuntu-14-chef:/tmp# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.1.0-1_amd64.deb
|
||||||
|
|
||||||
**2) To install Chef Server**
|
**2) 安装Chef服务端**
|
||||||
|
|
||||||
root@ubuntu-14-chef:/tmp# dpkg -i chef-server-core_12.1.0-1_amd64.deb
|
root@ubuntu-14-chef:/tmp# dpkg -i chef-server-core_12.1.0-1_amd64.deb
|
||||||
|
|
||||||
**3) Reconfigure Chef Server**
|
**3) 重新配置Chef服务端**
|
||||||
|
|
||||||
Now Run the following command to start all of the chef server services ,this step may take a few minutes to complete as its composed of many different services that work together to create a functioning system.
|
现在运行下面的命令来启动所有的chef服务端服务,这一步可能会花费一些时间,因为它由许多协同工作的不同服务组成,以构建一个正常运作的系统。
|
||||||
|
|
||||||
root@ubuntu-14-chef:/tmp# chef-server-ctl reconfigure
|
root@ubuntu-14-chef:/tmp# chef-server-ctl reconfigure
|
||||||
|
|
||||||
The chef server startup command 'chef-server-ctl reconfigure' needs to be run twice so that installation ends with the following completion output.
|
chef服务端启动命令'chef-server-ctl reconfigure'需要运行两次,这样就会在安装后看到这样的输出。
|
||||||
|
|
||||||
Chef Client finished, 342/350 resources updated in 113.71139964 seconds
|
Chef Client finished, 342/350 resources updated in 113.71139964 seconds
|
||||||
opscode Reconfigured!
|
opscode Reconfigured!
|
||||||
|
|
||||||
**4) Reboot OS**
|
**4) 重启系统**
|
||||||
|
|
||||||
Once the installation complete reboot the operating system for the best working without doing this we you might get the following SSL_connect error during creation of User.
|
安装完成后,重启系统以使系统达到最佳的工作状态;如果不这样做,我们或许会在创建用户的时候看到下面的SSL连接错误。
|
||||||
|
|
||||||
ERROR: Errno::ECONNRESET: Connection reset by peer - SSL_connect
|
ERROR: Errno::ECONNRESET: Connection reset by peer - SSL_connect
|
||||||
|
|
||||||
**5) Create new Admin User**
|
**5) 创建新的管理员**
|
||||||
|
|
||||||
Run the following command to create a new administrator user with its profile settings. During its creation user’s RSA private key is generated automatically that should be saved to a safe location. The --filename option will save the RSA private key to a specified path.
|
运行下面的命令来创建一个新的、带有自己配置的管理员账户。创建过程中,用户的RSA私钥会自动生成,它应该被保存到一个安全的地方。--filename 选项会把RSA私钥保存到指定的路径下。
|
||||||
|
|
||||||
root@ubuntu-14-chef:/tmp# chef-server-ctl user-create kashi kashi kashi kashif.fareedi@gmail.com kashi123 --filename /root/kashi.pem
|
root@ubuntu-14-chef:/tmp# chef-server-ctl user-create kashi kashi kashi kashif.fareedi@gmail.com kashi123 --filename /root/kashi.pem
|
||||||
|
|
||||||
### Chef Manage Setup on Chef Server ###
|
### Chef服务端的管理设置 ###
|
||||||
|
|
||||||
Chef Manage is a management console for Enterprise Chef that enables a web-based user interface for visualizing and managing nodes, data bags, roles, environments, cookbooks and role-based access control (RBAC).
|
Chef Manage是一个针对企业版Chef的管理控制台,它提供了基于Web的可视化用户界面,可以管理节点、数据袋(data bag)、角色、环境、菜谱(cookbook)以及基于角色的访问控制(RBAC)。
|
||||||
|
|
||||||
**1) Downloading Chef Manage**
|
**1) 下载Chef Manage**
|
||||||
|
|
||||||
Copy the link for Chef Manage from the official web site and download the chef manage package.
|
从官网复制Chef Manage的下载链接,并下载它的安装包。
|
||||||
|
|
||||||
root@ubuntu-14-chef:~# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/opscode-manage_1.17.0-1_amd64.deb
|
root@ubuntu-14-chef:~# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/opscode-manage_1.17.0-1_amd64.deb
|
||||||
|
|
||||||
**2) Installing Chef Manage**
|
**2) 安装Chef Manage**
|
||||||
|
|
||||||
Let's install it into the root's home directory with below command.
|
使用下面的命令在root的家目录下安装它。
|
||||||
|
|
||||||
root@ubuntu-14-chef:~# chef-server-ctl install opscode-manage --path /root
|
root@ubuntu-14-chef:~# chef-server-ctl install opscode-manage --path /root
|
||||||
|
|
||||||
**3) Restart Chef Manage and Server**
|
**3) 重启Chef Manage和服务端**
|
||||||
|
|
||||||
Once the installation is complete we need to restart chef manage and chef server services by executing following commands.
|
安装完成后我们需要运行下面的命令来重启chef manage和服务端。
|
||||||
|
|
||||||
root@ubuntu-14-chef:~# opscode-manage-ctl reconfigure
|
root@ubuntu-14-chef:~# opscode-manage-ctl reconfigure
|
||||||
root@ubuntu-14-chef:~# chef-server-ctl reconfigure
|
root@ubuntu-14-chef:~# chef-server-ctl reconfigure
|
||||||
|
|
||||||
### Chef Manage Web Console ###
|
### Chef Manage网页控制台 ###
|
||||||
|
|
||||||
We can access the chef manage web console from localhost as well as its FQDN, and log in with the already created admin user account.
|
我们可以通过localhost或其FQDN访问chef manage网页控制台,并用之前创建的管理员账户登录。
|
||||||
|
|
||||||
![chef amanage](http://blog.linoxide.com/wp-content/uploads/2015/07/5-chef-web.png)
|
![chef amanage](http://blog.linoxide.com/wp-content/uploads/2015/07/5-chef-web.png)
|
||||||
|
|
||||||
**1) Create New Organization with Chef Manage**
|
**1) 用Chef Manage创建新的组织**
|
||||||
|
|
||||||
You would be asked to create new organization or accept the invitation from the organizations. Let's create a new organization by providing its short and full name as shown.
|
你会被要求创建新的组织,或者接受来自其他组织的邀请。如下所示,提供组织的简称和全名来创建一个新的组织。
|
||||||
|
|
||||||
![Create Org](http://blog.linoxide.com/wp-content/uploads/2015/07/7-create-org.png)
|
![Create Org](http://blog.linoxide.com/wp-content/uploads/2015/07/7-create-org.png)
|
||||||
|
|
||||||
**2) Create New Organization with Command line**
|
**2) 用命令行创建新的组织**
|
||||||
|
|
||||||
We can also create new Organization from the command line by executing the following command.
|
We can also create new Organization from the command line by executing the following command.
|
||||||
|
我们同样也可以运行下面的命令来创建新的组织。
|
||||||
|
|
||||||
root@ubuntu-14-chef:~# chef-server-ctl org-create linux Linoxide Linux Org. --association_user kashi --filename linux.pem
|
root@ubuntu-14-chef:~# chef-server-ctl org-create linux Linoxide Linux Org. --association_user kashi --filename linux.pem
|
||||||
|
|
||||||
### Configuration and setup of Workstation ###
|
### 设置工作站 ###
|
||||||
|
|
||||||
As we are done with the successful installation of the chef server, now we are going to set up its workstation to create and configure any recipes, cookbooks, attributes, and other changes that we want to make to our Chef configurations.
|
我们已经成功安装了Chef服务端,现在要设置它的工作站,用来创建和配置recipe、cookbook、属性,以及其他任何我们想对Chef配置所做的修改。
|
||||||
|
|
||||||
**1) Create New User and Organization on Chef Server**
|
**1) 在Chef服务端上创建新的用户和组织**
|
||||||
|
|
||||||
In order to setup workstation we create a new user and an organization for this from the command line.
|
为了设置工作站,我们用命令行创建一个新的用户和组织。
|
||||||
|
|
||||||
root@ubuntu-14-chef:~# chef-server-ctl user-create bloger Bloger Kashif bloger.kashif@gmail.com bloger123 --filename bloger.pem
|
root@ubuntu-14-chef:~# chef-server-ctl user-create bloger Bloger Kashif bloger.kashif@gmail.com bloger123 --filename bloger.pem
|
||||||
|
|
||||||
root@ubuntu-14-chef:~# chef-server-ctl org-create blogs Linoxide Blogs Inc. --association_user bloger --filename blogs.pem
|
root@ubuntu-14-chef:~# chef-server-ctl org-create blogs Linoxide Blogs Inc. --association_user bloger --filename blogs.pem
|
||||||
|
|
||||||
**2) Download Starter Kit for Workstation**
|
**2) 下载工作站入门套件**
|
||||||
|
|
||||||
Now Download and Save starter-kit from the chef manage web console on a workstation and use it to work with Chef server.
|
Now Download and Save starter-kit from the chef manage web console on a workstation and use it to work with Chef server.
|
||||||
|
现在,在工作站上从chef manage网页控制台下载并保存入门套件(starter kit),用它来与Chef服务端协同工作。
|
||||||
|
|
||||||
![Starter Kit](http://blog.linoxide.com/wp-content/uploads/2015/07/8-download-kit.png)
|
![Starter Kit](http://blog.linoxide.com/wp-content/uploads/2015/07/8-download-kit.png)
|
||||||
|
|
||||||
**3) Click to "Proceed" with starter kit download**
|
**3) 点击"Proceed"下载套件**
|
||||||
|
|
||||||
![starter kit](http://blog.linoxide.com/wp-content/uploads/2015/07/9-download-kit.png)
|
![starter kit](http://blog.linoxide.com/wp-content/uploads/2015/07/9-download-kit.png)
|
||||||
|
|
||||||
### Chef Development Kit Setup for Workstation ###
|
### 为工作站设置Chef开发套件 ###
|
||||||
|
|
||||||
Chef Development Kit is a software package suite with all the development tools needed to code Chef. It bundles the best-of-breed tools developed by the Chef community together with the Chef Client.
|
Chef开发套件是一款包含所有开发Chef所需工具的软件包,它将Chef社区开发的精选工具与Chef客户端打包在一起。
|
||||||
|
|
||||||
**1) Downloading Chef DK**
|
**1) 下载 Chef DK**
|
||||||
|
|
||||||
We can Download chef development kit from its official web link and choose the required operating system to get its chef development tool kit.
|
We can Download chef development kit from its official web link and choose the required operating system to get its chef development tool kit.
|
||||||
|
我们可以从官网链接下载Chef开发套件,选择对应的操作系统即可获取安装包。
|
||||||
|
|
||||||
![Chef DK](http://blog.linoxide.com/wp-content/uploads/2015/07/10-CDK.png)
|
![Chef DK](http://blog.linoxide.com/wp-content/uploads/2015/07/10-CDK.png)
|
||||||
|
|
||||||
Copy the link and download it with wget command.
|
复制链接并用wget下载。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:~# wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.6.2-1_amd64.deb
|
root@ubuntu-15-WKS:~# wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.6.2-1_amd64.deb
|
||||||
|
|
||||||
**2) Chef Development Kit Installation**
|
**2) 安装Chef开发套件**
|
||||||
|
|
||||||
Install chef-development kit using dpkg command
|
使用dpkg命令安装Chef开发套件。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:~# dpkg -i chefdk_0.6.2-1_amd64.deb
|
root@ubuntu-15-WKS:~# dpkg -i chefdk_0.6.2-1_amd64.deb
|
||||||
|
|
||||||
**3) Chef DK Verification**
|
**3) Chef DK 验证**
|
||||||
|
|
||||||
Verify using the below command that the client got installed properly.
|
使用下面的命令验证客户端是否已经正确安装。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:~# chef verify
|
root@ubuntu-15-WKS:~# chef verify
|
||||||
|
|
||||||
@ -193,9 +195,9 @@ Verify using the below command that the client got installed properly.
|
|||||||
Verification of component 'chefspec' succeeded.
|
Verification of component 'chefspec' succeeded.
|
||||||
Verification of component 'package installation' succeeded.
|
Verification of component 'package installation' succeeded.
|
||||||
|
|
||||||
**Connecting to Chef Server**
|
**连接Chef服务端**
|
||||||
|
|
||||||
We will Create ~/.chef and copy the two user and organization pem files to this folder from chef server.
|
我们将创建 ~/.chef 目录,并把用户和组织的两个pem文件从chef服务端复制到这个目录下。
|
||||||
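scp不会自动创建目标目录,所以在复制pem文件之前,可以先在工作站上把 .chef 目录建好。下面是一个最小的示意(假设把目录放在当前用户的家目录下,即正文提到的 ~/.chef):

```shell
# 在工作站上创建存放 knife 凭证的 .chef 目录
# 因为其中将存放私钥文件,顺便收紧目录权限
mkdir -p ~/.chef
chmod 700 ~/.chef
ls -ld ~/.chef
```

目录建好后,再执行下面的scp命令复制pem文件。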
|
|
||||||
root@ubuntu-14-chef:~# scp bloger.pem blogs.pem kashi.pem linux.pem root@172.25.10.172:/.chef/
|
root@ubuntu-14-chef:~# scp bloger.pem blogs.pem kashi.pem linux.pem root@172.25.10.172:/.chef/
|
||||||
|
|
||||||
@ -207,9 +209,9 @@ We will Create ~/.chef and copy the two user and organization pem files to this
|
|||||||
kashi.pem 100% 1678 1.6KB/s 00:00
|
kashi.pem 100% 1678 1.6KB/s 00:00
|
||||||
linux.pem 100% 1678 1.6KB/s 00:00
|
linux.pem 100% 1678 1.6KB/s 00:00
|
||||||
|
|
||||||
**Knife Configurations to Manage your Chef Environment**
|
**用Knife配置来管理你的Chef环境**
|
||||||
|
|
||||||
Now create "~/.chef/knife.rb" with following content as configured in previous steps.
|
现在,按照之前步骤中的配置,用下面的内容创建"~/.chef/knife.rb"。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:/.chef# vim knife.rb
|
root@ubuntu-15-WKS:/.chef# vim knife.rb
|
||||||
current_dir = File.dirname(__FILE__)
|
current_dir = File.dirname(__FILE__)
|
||||||
@ -225,17 +227,17 @@ Now create "~/.chef/knife.rb" with following content as configured in previous s
|
|||||||
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
|
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
|
||||||
cookbook_path ["#{current_dir}/../cookbooks"]
|
cookbook_path ["#{current_dir}/../cookbooks"]
|
||||||
|
|
||||||
Create "~/cookbooks" folder for cookbooks as specified knife.rb file.
|
创建knife.rb中指定的“~/cookbooks”文件夹。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:/# mkdir cookbooks
|
root@ubuntu-15-WKS:/# mkdir cookbooks
|
||||||
|
|
||||||
**Testing with Knife Configurations**
|
**测试Knife配置**
|
||||||
|
|
||||||
Run "knife user list" and "knife client list" commands to verify whether knife configuration is working.
|
运行“knife user list”和“knife client list”来验证knife是否在工作。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:/.chef# knife user list
|
root@ubuntu-15-WKS:/.chef# knife user list
|
||||||
|
|
||||||
You might get the following error the first time you run this command. This occurs because we do not have our Chef server's SSL certificate on our workstation.
|
第一次运行这个命令的时候可能会得到下面的错误,这是因为工作站上还没有chef服务端的SSL证书。
|
||||||
|
|
||||||
ERROR: SSL Validation failure connecting to host: 172.25.10.173 - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
|
ERROR: SSL Validation failure connecting to host: 172.25.10.173 - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
|
||||||
ERROR: Could not establish a secure connection to the server.
|
ERROR: Could not establish a secure connection to the server.
|
||||||
@ -243,27 +245,26 @@ You might get the following error while first time you run this command.This occ
|
|||||||
If your Chef Server uses a self-signed certificate, you can use
|
If your Chef Server uses a self-signed certificate, you can use
|
||||||
`knife ssl fetch` to make knife trust the server's certificates.
|
`knife ssl fetch` to make knife trust the server's certificates.
|
||||||
|
|
||||||
To recover from the above error, run the following command to fetch the SSL certificates, and then run the knife user and client list commands again; they should work fine then.
|
要解决上面的错误,先运行下面的命令来获取SSL证书,然后重新运行knife user list和knife client list命令,这时候应该就没问题了。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:/.chef# knife ssl fetch
|
root@ubuntu-15-WKS:/.chef# knife ssl fetch
|
||||||
WARNING: Certificates from 172.25.10.173 will be fetched and placed in your trusted_cert
|
WARNING: Certificates from 172.25.10.173 will be fetched and placed in your trusted_cert
|
||||||
directory (/.chef/trusted_certs).
|
directory (/.chef/trusted_certs).
|
||||||
|
|
||||||
Knife has no means to verify these are the correct certificates. You should
|
knife没有办法验证这些是否是正确的证书。你应该在下载之后验证这些证书的真实性。
|
||||||
verify the authenticity of these certificates after downloading.
|
|
||||||
|
|
||||||
Adding certificate for ubuntu-14-chef.test.com in /.chef/trusted_certs/ubuntu-14-chef_test_com.crt
|
正在将ubuntu-14-chef.test.com的证书添加到 /.chef/trusted_certs/ubuntu-14-chef_test_com.crt。
|
||||||
|
|
||||||
Now after fetching ssl certs with above command, let's again run the below command.
|
在上面的命令取得ssl证书后,接着运行下面的命令。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:/.chef#knife client list
|
root@ubuntu-15-WKS:/.chef#knife client list
|
||||||
kashi-linux
|
kashi-linux
|
||||||
|
|
||||||
### New Node Configuration to interact with chef-server ###
|
### 配置与chef服务端交互的新节点 ###
|
||||||
|
|
||||||
Nodes run the chef-client, which performs all the infrastructure automation. So, after we have configured the chef server and the knife workstation, it is time to begin adding new servers to our chef environment by configuring a new node to interact with the chef server.
|
节点上运行着chef-client,由它来执行所有的基础设施自动化。因此,在配置完chef服务端和knife工作站之后,是时候通过配置新的节点与chef服务端交互,来向我们的chef环境中添加新的服务器了。
|
||||||
|
|
||||||
To configure a new node to work with chef server use below command.
|
使用下面的命令来配置新的节点,使其与chef服务端协同工作。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:~# knife bootstrap 172.25.10.170 --ssh-user root --ssh-password kashi123 --node-name mydns
|
root@ubuntu-15-WKS:~# knife bootstrap 172.25.10.170 --ssh-user root --ssh-password kashi123 --node-name mydns
|
||||||
|
|
||||||
@ -290,25 +291,25 @@ To configure a new node to work with chef server use below command.
|
|||||||
172.25.10.170 to file /tmp/install.sh.26024/metadata.txt
|
172.25.10.170 to file /tmp/install.sh.26024/metadata.txt
|
||||||
172.25.10.170 trying wget...
|
172.25.10.170 trying wget...
|
||||||
|
|
||||||
After all this we can see the newly created node under the knife node list, and the client list will show a new client as well, since a new client is created along with the node.
|
完成之后,我们可以在knife的节点列表中看到新创建的节点;由于创建节点时也会一并创建一个新的客户端,因此在客户端列表中也能看到它。
|
||||||
|
|
||||||
root@ubuntu-15-WKS:~# knife node list
|
root@ubuntu-15-WKS:~# knife node list
|
||||||
mydns
|
mydns
|
||||||
|
|
||||||
Similarly we can add multiple nodes to our chef infrastructure by providing the ssh credentials with the same knife bootstrap command above.
|
类似地,只要提供相应的ssh凭证,用上面同样的knife bootstrap命令就可以向我们的chef基础设施中添加多个节点。
|
||||||
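批量添加节点时,可以用一个简单的循环逐台执行knife bootstrap。下面是一个演练性质的示意(nodes.txt文件名、其中的第二台IP和节点名都是为演示虚构的;为安全起见这里用echo只打印将要执行的命令,确认无误后去掉echo即可真正执行):

```shell
# 假设待添加节点的 IP 和名称保存在 nodes.txt 中,每行一条:"IP 节点名"
cat > nodes.txt <<'EOF'
172.25.10.170 mydns
172.25.10.171 myweb
EOF

# 逐行读取并打印(演练模式)对应的 knife bootstrap 命令
while read -r ip name; do
  echo knife bootstrap "$ip" --ssh-user root --ssh-password 'kashi123' --node-name "$name"
done < nodes.txt
```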
|
|
||||||
### Conclusion ###
|
### 总结 ###
|
||||||
|
|
||||||
In this detailed article we learnt about the Chef Configuration Management tool with its basic understanding and overview of its components with installation and configuration settings. We hope you have enjoyed learning the installation and configuration of Chef server with its workstation and client nodes.
|
在这篇详细的文章中,我们了解了Chef配置管理工具的基础知识,概览了它的组件,并完成了安装和配置。希望你在学习安装和配置Chef服务端及其工作站和客户端节点的过程中获得乐趣。
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
via: http://linoxide.com/ubuntu-how-to/install-configure-chef-ubuntu-14-04-15-04/
|
via: http://linoxide.com/ubuntu-how-to/install-configure-chef-ubuntu-14-04-15-04/
|
||||||
|
|
||||||
作者:[Kashif Siddique][a]
|
作者:[Kashif Siddique][a]
|
||||||
译者:[译者ID](https://github.com/译者ID)
|
译者:[geekpi](https://github.com/geekpi)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
[a]:http://linoxide.com/author/kashifs/
|
[a]:http://linoxide.com/author/kashifs/
|
@ -0,0 +1,237 @@
|
|||||||
|
|
||||||
|
如何收集NGINX指标 - 第2部分
|
||||||
|
================================================================================
|
||||||
|
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png)
|
||||||
|
|
||||||
|
### 如何获取你所需要的NGINX指标 ###
|
||||||
|
|
||||||
|
如何获取你所需要的指标,取决于你正在使用的 NGINX 版本。(参见[相关的文章][1],其中深入探讨了这些 NGINX 指标。)免费开源版的 NGINX 和商业版的 NGINX Plus 都有报告指标的状态模块,NGINX 也可以配置为在日志中记录指标:
|
||||||
|
|
||||||
|
|
||||||
|
<table>
|
||||||
|
<colgroup>
|
||||||
|
<col style="text-align: left;">
|
||||||
|
<col style="text-align: center;">
|
||||||
|
<col style="text-align: center;">
|
||||||
|
<col style="text-align: center;"> </colgroup>
|
||||||
|
<thead>
|
||||||
|
<tr>
|
||||||
|
<th rowspan="2" style="text-align: left;">Metric</th>
|
||||||
|
<th colspan="3" style="text-align: center;">Availability</th>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source">NGINX (open-source)</a></th>
|
||||||
|
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus">NGINX Plus</a></th>
|
||||||
|
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#logs">NGINX logs</a></th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody>
|
||||||
|
<tr>
|
||||||
|
<td style="text-align: left;">accepts / accepted</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;"></td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td style="text-align: left;">handled</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;"></td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td style="text-align: left;">dropped</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;"></td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td style="text-align: left;">active</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;"></td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td style="text-align: left;">requests / total</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;"></td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td style="text-align: left;">4xx codes</td>
|
||||||
|
<td style="text-align: center;"></td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td style="text-align: left;">5xx codes</td>
|
||||||
|
<td style="text-align: center;"></td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td style="text-align: left;">request time</td>
|
||||||
|
<td style="text-align: center;"></td>
|
||||||
|
<td style="text-align: center;"></td>
|
||||||
|
<td style="text-align: center;">x</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
|
||||||
|
#### 指标收集:NGINX(开源版) ####
|
||||||
|
|
||||||
|
只要启用了 HTTP [stub status 模块][2],开源版的 NGINX 就会在一个状态页面上显示几个与服务器状态有关的基本指标。要检查该模块是否已编译进 NGINX,运行以下命令:
|
||||||
|
|
||||||
|
nginx -V 2>&1 | grep -o with-http_stub_status_module
|
||||||
|
|
||||||
|
如果终端中输出了 http_stub_status_module,说明状态模块已启用。
|
||||||
|
|
||||||
|
如果该命令没有任何输出,你就需要启用状态模块。[从源码编译 NGINX][3] 时,可以在配置阶段加上 --with-http_stub_status_module 参数:
|
||||||
|
|
||||||
|
./configure \
|
||||||
|
… \
|
||||||
|
--with-http_stub_status_module
|
||||||
|
make
|
||||||
|
sudo make install
|
||||||
|
|
||||||
|
在确认(或自己启用)该模块之后,你还需要修改 NGINX 配置文件,为状态页面设置一个本地可访问的 URL(例如 /nginx_status):
|
||||||
|
|
||||||
|
server {
|
||||||
|
location /nginx_status {
|
||||||
|
stub_status on;
|
||||||
|
|
||||||
|
access_log off;
|
||||||
|
allow 127.0.0.1;
|
||||||
|
deny all;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
注:nginx 配置中的 server 块通常并不在主配置文件中(例如,/etc/nginx/nginx.conf),但主配置中会加载补充的配置文件。要找到主配置文件,首先运行以下命令:
|
||||||
|
|
||||||
|
nginx -t
|
||||||
|
|
||||||
|
打开主配置文件,在 http 块的结尾附近查找以 include 开头的行,如:
|
||||||
|
|
||||||
|
include /etc/nginx/conf.d/*.conf;
|
||||||
|
|
||||||
|
在被包含的某个配置文件中,你应该能找到主 server 块,按上面所示的方式修改它,就可以配置 NGINX 的指标报告。更改任何配置后,执行以下命令重新加载配置文件:
|
||||||
|
|
||||||
|
nginx -s reload
|
||||||
|
|
||||||
|
现在,你可以查看指标的状态页:
|
||||||
|
|
||||||
|
Active connections: 24
|
||||||
|
server accepts handled requests
|
||||||
|
1156958 1156958 4491319
|
||||||
|
Reading: 0 Writing: 18 Waiting : 6
|
||||||
|
|
||||||
|
请注意,如果你想从远程计算机访问这个状态页面,需要把远程计算机的 IP 地址添加到状态配置的白名单中;上面的配置文件中只把 127.0.0.1 加入了白名单。
|
||||||
|
|
||||||
|
NGINX 的状态页面是一种快速又简单的查看指标的方法,但要做持续监控,你需要每隔一段时间自动记录这些数据,然后用 [Nagios][4] 或 [Datadog][5] 这样的监控工具,或者 [collectD][6] 这样的统计信息收集服务,来分析保存下来的 NGINX 状态信息。
|
||||||
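要自动记录这些数据,可以先用 awk 从状态页输出中提取出感兴趣的字段。下面是一个示意,直接使用上文状态页的示例输出;实际使用时可以把这段文本换成对状态页 URL(例如上面配置的 /nginx_status,具体地址取决于你的配置)执行 curl 得到的输出:

```shell
# 上文 stub_status 状态页的示例输出
status='Active connections: 24
server accepts handled requests
 1156958 1156958 4491319
Reading: 0 Writing: 18 Waiting: 6'

# 第一行第 3 个字段是活动连接数;第三行第 3 个字段是累计请求数
active=$(echo "$status" | awk '/Active connections/ {print $3}')
requests=$(echo "$status" | awk 'NR==3 {print $3}')
echo "active=$active requests=$requests"
```

把这样的提取逻辑放进 cron 任务,就能按固定间隔把指标追加到文件里供后续分析。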
|
|
||||||
|
#### 指标收集: NGINX Plus ####
|
||||||
|
|
||||||
|
商业版的 NGINX Plus 通过 ngx_http_status_module 提供了比开源版 NGINX [多得多的指标][7]。除了更多的字节流指标之外,NGINX Plus 还提供负载均衡系统和高速缓存的信息,并报告所有 HTTP 状态码类型(1xx、2xx、3xx、4xx、5xx)的计数。[这里][8]有一个 NGINX Plus 状态仪表盘的示例。
|
||||||
|
|
||||||
|
![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png)
|
||||||
|
|
||||||
|
*注:NGINX Plus 状态仪表盘中 "Active" 连接的定义,与开源版 NGINX 状态模块所收集的指标略有不同。在 NGINX Plus 的指标中,活动连接不包括处于等待状态的连接(即空闲连接)。*
|
||||||
|
|
||||||
|
NGINX Plus 还能以 [JSON 格式][9]报告指标,便于集成到其他监控系统中。使用 NGINX Plus 时,你可以查看[负载均衡服务器组][10]的指标和健康状况,也可以进一步下钻,只查看负载均衡组中[单台服务器][11]的响应码计数:
|
||||||
|
{"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055}
|
||||||
|
|
||||||
|
要启用 NGINX Plus 的指标仪表盘,可以在 NGINX 配置文件的 http 块内添加一个状态 server 块。(如何定位相关的配置文件,[参见上一节][12]中收集开源版 NGINX 指标的说明。)例如,要在 http://your.ip.address:8080/status.html 设立状态仪表盘,并在 http://your.ip.address:8080/status 提供 JSON 接口,可以添加以下 server 块:
|
||||||
|
|
||||||
|
server {
|
||||||
|
listen 8080;
|
||||||
|
root /usr/share/nginx/html;
|
||||||
|
|
||||||
|
location /status {
|
||||||
|
status;
|
||||||
|
}
|
||||||
|
|
||||||
|
location = /status.html {
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
一旦你重新加载 NGINX 配置,状态页就会被加载:
|
||||||
|
|
||||||
|
nginx -s reload
|
||||||
|
|
||||||
|
关于如何配置扩展状态模块,官方 NGINX Plus 文档有 [详细介绍][13] 。
|
||||||
|
|
||||||
|
#### 指标收集:NGINX日志 ####
|
||||||
|
|
||||||
|
NGINX 的[日志模块][14]可以把可自定义的访问日志写到指定的文件。你可以通过[添加或移除变量][15]来自定义日志的格式以及记录的数据。要在日志中捕获详细的信息,最简单的方法是在配置文件的 server 块中添加下面一行(定位配置文件的方法,参见收集开源版 NGINX 指标的[那一节][16]):
|
||||||
|
|
||||||
|
access_log logs/host.access.log combined;
|
||||||
|
|
||||||
|
更改 NGINX 配置文件后,必须要重新加载配置文件:
|
||||||
|
|
||||||
|
nginx -s reload
|
||||||
|
|
||||||
|
默认的 "combined" 日志格式包含[一些关键数据][17],如实际的 HTTP 请求和相应的响应码。在下面的示例日志中,当请求 /index.html 时,NGINX 记录了 200(成功)状态码;当请求不存在的文件 /fail 时,记录了 404(未找到)错误。
|
||||||
|
|
||||||
|
127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari 537.36"
|
||||||
|
|
||||||
|
127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
|
||||||
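有了 combined 格式的日志,就可以用 awk 按状态码汇总请求数:combined 格式中,状态码是每行的第 9 个字段。下面的示意直接用上文的两行示例日志构造输入,文件名 access.log 只是演示用的假设:

```shell
# 用上文的示例日志构造一个演示用的 access.log
cat > access.log <<'EOF'
127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"
127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0"
EOF

# 第 9 个字段是 HTTP 状态码,按状态码累计请求数
awk '{codes[$9]++} END {for (c in codes) print c, codes[c]}' access.log
```

这一类单行 awk 统计很适合放在 cron 里定期运行,或者作为更完整日志分析流水线的第一步。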
|
|
||||||
|
要记录请求的处理时间,可以在 NGINX 配置文件的 http 块中添加一个新的日志格式:
|
||||||
|
|
||||||
|
log_format nginx '$remote_addr - $remote_user [$time_local] '
|
||||||
|
'"$request" $status $body_bytes_sent $request_time '
|
||||||
|
'"$http_referer" "$http_user_agent"';
|
||||||
|
|
||||||
|
然后,修改配置文件中 server 块的 access_log 行来使用这个格式:
|
||||||
|
|
||||||
|
access_log logs/host.access.log nginx;
|
||||||
|
|
||||||
|
重新加载配置文件(运行 nginx -s reload)后,你的访问日志将包含响应时间,如下所示,单位为秒,精确到毫秒。在这个例子中,服务器接收到对 /big.pdf 的请求,发送 33973115 字节后返回 206(部分内容)状态码,处理该请求用时 0.202 秒(202 毫秒):
|
||||||
|
|
||||||
|
127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
|
||||||
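按照上面定义的 nginx 日志格式,$request_time 是每行的第 11 个字段,可以用 awk 直接算出平均响应时间。下面是一个示意(timed.log 文件名以及其中第二行日志是为演示补充的假设数据):

```shell
# 构造两行使用上述自定义格式的演示日志
cat > timed.log <<'EOF'
127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "UA"
127.0.0.1 - - [19/Feb/2015:15:50:37 -0500] "GET /index.html HTTP/1.1" 200 612 0.004 "-" "UA"
EOF

# 第 11 个字段是 $request_time(秒),求其平均值
awk '{sum += $11; n++} END {printf "avg=%.3f\n", sum / n}' timed.log
```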
|
|
||||||
|
你可以使用各种工具和服务来收集和分析 NGINX 日志。例如,[rsyslog][18] 可以监视你的日志,并将其传递给多个日志分析服务;你也可以使用免费的开源工具,如[logstash][19]来收集和分析日志;或者你可以使用一个统一日志记录层,如[Fluentd][20]来收集和分析你的 NGINX 日志。
|
||||||
|
|
||||||
|
### 结论 ###
|
||||||
|
|
||||||
|
监控 NGINX 的哪些指标,取决于你可用的工具,以及某个指标提供的信息是否值得为其付出监控的开销。例如,如果通过收集和分析日志来定位问题对你很重要,那么无论运行的是 NGINX Plus 还是开源版 NGINX,日志采集的开销都是值得的。
|
||||||
|
|
||||||
|
Datadog 已经内置了 NGINX 和 NGINX Plus 的集成,你只需极少的设置就可以收集并监控所有 Web 服务器的指标。[本文][21]介绍了如何用 Datadog 监控 NGINX,你还可以[免费试用 Datadog][22]。
|
||||||
|
|
||||||
|
----------
|
||||||
|
|
||||||
|
本文原文[在 GitHub 上][23]。如有问题、更正或补充,请[告诉我们][24]。
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||||
|
|
||||||
|
作者:K Young
|
||||||
|
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
|
||||||
|
[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
|
||||||
|
[3]:http://wiki.nginx.org/InstallOptions
|
||||||
|
[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx
|
||||||
|
[5]:http://docs.datadoghq.com/integrations/nginx/
|
||||||
|
[6]:https://collectd.org/wiki/index.php/Plugin:nginx
|
||||||
|
[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data
|
||||||
|
[8]:http://demo.nginx.com/status.html
|
||||||
|
[9]:http://demo.nginx.com/status
|
||||||
|
[10]:http://demo.nginx.com/status/upstreams/demoupstreams
|
||||||
|
[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses
|
||||||
|
[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
|
||||||
|
[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example
|
||||||
|
[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html
|
||||||
|
[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
|
||||||
|
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
|
||||||
|
[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
|
||||||
|
[18]:http://www.rsyslog.com/
|
||||||
|
[19]:https://www.elastic.co/products/logstash
|
||||||
|
[20]:http://www.fluentd.org/
|
||||||
|
[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
|
||||||
|
[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up
|
||||||
|
[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md
|
||||||
|
[24]:https://github.com/DataDog/the-monitor/issues
|
@ -0,0 +1,187 @@
|
|||||||
|
如何在 Fedora 22 上配置 Proftpd 服务器
|
||||||
|
================================================================================
|
||||||
|
在本文中,我们将了解如何在运行 Fedora 22 的电脑或服务器上使用 Proftpd 架设 FTP 服务器。[ProFTPD][1] 是一款免费的、基于 GPL 授权的开源 FTP 服务器软件,是 Linux 上的主流 FTP 服务器。它的主要设计目标是具备许多高级功能,并提供丰富的配置选项,便于用户轻松定制;它的许多配置选项是其他一些 FTP 服务器软件所没有的。最初,它是作为 wu-ftpd 服务器的一个更安全、更容易配置的替代品而开发的。FTP 服务器是这样一种软件:用户可以通过 FTP 客户端从安装了它的远端服务器上传或下载文件和目录。下面是 ProFTPD 服务器的一些主要功能,更详细的资料可以访问 [http://www.proftpd.org/features.html][2]。
|
||||||
|
|
||||||
|
- 每个目录都包含 ".ftpaccess" 文件用于访问控制,类似 Apache 的 ".htaccess"
|
||||||
|
- 支持多个虚拟 FTP 服务器以及多用户登录和匿名 FTP 服务。
|
||||||
|
- 可以作为独立进程启动服务或者通过 inetd/xinetd 启动
|
||||||
|
- 它的文件/目录属性、属主和权限采用类 UNIX 方式。
|
||||||
|
- 它可以独立运行,保护系统避免 root 访问可能带来的损坏。
|
||||||
|
- 模块化的设计让它可以轻松扩展其他模块,比如 LDAP 服务器,SSL/TLS 加密,RADIUS 支持,等等。
|
||||||
|
- ProFTPD 服务器还支持 IPv6。
|
||||||
|
|
||||||
|
下面是如何在运行 Fedora 22 操作系统的计算机上使用 ProFTPD 架设 FTP 服务器的一些简单步骤。
|
||||||
|
|
||||||
|
### 1. 安装 ProFTPD ###
|
||||||
|
|
||||||
|
首先,我们将在运行 Fedora 22 的机器上安装 Proftpd 软件。因为 yum 包管理器已经被抛弃了,我们将使用最新最好的包管理器 dnf。DNF 很容易使用,是 Fedora 22 上采用的非常人性化的包管理器。我们将用它来安装 proftpd 软件。这需要在终端或控制台里用 sudo 模式运行下面的命令。
|
||||||
|
|
||||||
|
$ sudo dnf -y install proftpd proftpd-utils
|
||||||
|
|
||||||
|
### 2. 配置 ProFTPD ###
|
||||||
|
|
||||||
|
现在,我们将修改软件的一些配置。要配置它,我们需要用文本编辑器编辑 /etc/proftpd.conf 文件。**/etc/proftpd.conf** 文件是 ProFTPD 软件的主要配置文件,所以,这个文件的任何改动都会影响到 FTP 服务器。在这里,是我们在初始步骤里做出的改动。
|
||||||
|
|
||||||
|
$ sudo vi /etc/proftpd.conf
|
||||||
|
|
||||||
|
之后,用文本编辑器打开这个文件,我们要修改 ServerName 和 ServerAdmin,分别填入自己的域名和 email 地址。下面是我们所做的修改。
|
||||||
|
|
||||||
|
ServerName "ftp.linoxide.com"
|
||||||
|
ServerAdmin arun@linoxide.com
|
||||||
|
|
||||||
|
在这之后,我们将把下面的设定加到配置文件里,这样可以让服务器将访问和授权记录到相应的日志文件里。
|
||||||
|
|
||||||
|
ExtendedLog /var/log/proftpd/access.log WRITE,READ default
|
||||||
|
ExtendedLog /var/log/proftpd/auth.log AUTH auth
|
||||||
|
|
||||||
|
![调整 ProFTPD 设置](http://blog.linoxide.com/wp-content/uploads/2015/06/configuring-proftpd-config.png)
|
||||||
|
|
||||||
|
### 3. 添加 FTP 用户 ###
|
||||||
|
|
||||||
|
在设定好了基本的配置文件后,我们很自然地希望为指定目录添加 FTP 用户。系统中已有的用户可以直接用来登录 FTP 服务器,但在这篇教程里,我们将创建一个以 FTP 服务器上指定目录为主目录的新用户。
|
||||||
|
|
||||||
|
下面,我们将建立一个名字是 ftpgroup 的新用户组。
|
||||||
|
|
||||||
|
$ sudo groupadd ftpgroup
|
||||||
|
|
||||||
|
然后,我们将以目录 /ftp-dir/ 作为主目录增加一个新用户 arunftp 并加入这个组中。
|
||||||
|
|
||||||
|
$ sudo useradd -G ftpgroup arunftp -s /sbin/nologin -d /ftp-dir/
|
||||||
|
|
||||||
|
在创建好用户并加入用户组后,我们将为用户 arunftp 设置一个密码。
|
||||||
|
|
||||||
|
$ sudo passwd arunftp
|
||||||
|
|
||||||
|
Changing password for user arunftp.
|
||||||
|
New password:
|
||||||
|
Retype new password:
|
||||||
|
passwd: all authentication tokens updated successfully.
|
||||||
|
|
||||||
|
现在,我们将通过下面的命令设置 SELinux,允许这个 ftp 用户读写其主目录。
|
||||||
|
|
||||||
|
$ sudo setsebool -P allow_ftpd_full_access=1
|
||||||
|
$ sudo setsebool -P ftp_home_dir=1
|
||||||
|
|
||||||
|
然后,我们会设定不允许其他用户移动或重命名这个目录以及里面的内容。
|
||||||
|
|
||||||
|
$ sudo chmod -R 1777 /ftp-dir/
|
||||||
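这里权限 1777 开头的 1 就是粘滞位(sticky bit):目录对所有人可读写,但只有文件属主才能删除或重命名自己的文件。可以在一个临时目录上验证这个权限的效果(目录名 ftp-demo 只是演示用的假设,真实环境对应的是 /ftp-dir/):

```shell
# 创建演示目录并设置 1777 权限
mkdir -p ftp-demo
chmod 1777 ftp-demo

# 查看八进制权限和符号表示:八进制应为 1777,符号末位应为 t(粘滞位)
stat -c '%a %A' ftp-demo
```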
|
|
||||||
|
### 4. 打开 TLS 支持 ###
|
||||||
|
|
||||||
|
目前 FTP 所用的加密手段并不安全,任何人都可以通过监听网卡来读取 FTP 传输的数据。所以,我们将为自己的服务器打开 TLS 加密支持。这样的话,需要编辑 /etc/proftpd.conf 配置文件。在这之前,我们先备份一下当前的配置文件,可以保证在改出问题后还可以恢复。
|
||||||
|
|
||||||
|
$ sudo cp /etc/proftpd.conf /etc/proftpd.conf.bak
|
||||||
|
|
||||||
|
然后,我们可以用自己喜欢的文本编辑器修改配置文件。
|
||||||
|
|
||||||
|
$ sudo vi /etc/proftpd.conf
|
||||||
|
|
||||||
|
然后,把下面几行附加到我们在第 2 步中所增加内容的后面。
|
||||||
|
|
||||||
|
TLSEngine on
|
||||||
|
TLSRequired on
|
||||||
|
TLSProtocol SSLv23
|
||||||
|
TLSLog /var/log/proftpd/tls.log
|
||||||
|
TLSRSACertificateFile /etc/pki/tls/certs/proftpd.pem
|
||||||
|
TLSRSACertificateKeyFile /etc/pki/tls/certs/proftpd.pem
|
||||||
|
|
||||||
|
![打开 TLS 配置](http://blog.linoxide.com/wp-content/uploads/2015/06/tls-configuration.png)
|
||||||
|
|
||||||
|
完成上面的设定后,保存退出。
|
||||||
|
|
||||||
|
然后,我们需要生成 SSL 凭证 proftpd.pem 并放到 **/etc/pki/tls/certs/** 目录里。这样的话,首先需要在 Fedora 22 上安装 openssl。
|
||||||
|
|
||||||
|
$ sudo dnf install openssl
|
||||||
|
|
||||||
|
然后,可以通过执行下面的命令生成 SSL 凭证。
|
||||||
|
|
||||||
|
$ sudo openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/pki/tls/certs/proftpd.pem -out /etc/pki/tls/certs/proftpd.pem
|
||||||
|
|
||||||
|
系统会询问一些将写入凭证里的基本信息。在填完资料后,就会生成一个 2048 位的 RSA 私钥。
|
||||||
|
|
||||||
|
Generating a 2048 bit RSA private key
|
||||||
|
...................+++
|
||||||
|
...................+++
|
||||||
|
writing new private key to '/etc/pki/tls/certs/proftpd.pem'
|
||||||
|
-----
|
||||||
|
You are about to be asked to enter information that will be incorporated
|
||||||
|
into your certificate request.
|
||||||
|
What you are about to enter is what is called a Distinguished Name or a DN.
|
||||||
|
There are quite a few fields but you can leave some blank
|
||||||
|
For some fields there will be a default value,
|
||||||
|
If you enter '.', the field will be left blank.
|
||||||
|
-----
|
||||||
|
Country Name (2 letter code) [XX]:NP
|
||||||
|
State or Province Name (full name) []:Narayani
|
||||||
|
Locality Name (eg, city) [Default City]:Bharatpur
|
||||||
|
Organization Name (eg, company) [Default Company Ltd]:Linoxide
|
||||||
|
Organizational Unit Name (eg, section) []:Linux Freedom
|
||||||
|
Common Name (eg, your name or your server's hostname) []:ftp.linoxide.com
|
||||||
|
Email Address []:arun@linoxide.com
|
||||||
|
|
||||||
|
在这之后,我们要改变所生成凭证文件的权限以增加安全性。
|
||||||
|
|
||||||
|
$ sudo chmod 600 /etc/pki/tls/certs/proftpd.pem
|
||||||
|
|
||||||
|
### 5. 允许 FTP 通过 Firewall ###
|
||||||
|
|
||||||
|
现在,需要在防火墙上放行 ftp 端口,它们默认是被阻止的。
|
||||||
|
|
||||||
|
如果 **打开了 TLS/SSL 加密**,执行下面的命令。
|
||||||
|
|
||||||
|
$ sudo firewall-cmd --add-port=1024-65534/tcp
|
||||||
|
$ sudo firewall-cmd --add-port=1024-65534/tcp --permanent
|
||||||
|
|
||||||
|
如果 **没有打开 TLS/SSL 加密**,执行下面的命令。
|
||||||
|
|
||||||
|
$ sudo firewall-cmd --permanent --zone=public --add-service=ftp
|
||||||
|
|
||||||
|
success
|
||||||
|
|
||||||
|
然后,重新加载防火墙设定。
|
||||||
|
|
||||||
|
$ sudo firewall-cmd --reload
|
||||||
|
|
||||||
|
success
|
||||||
|
|
||||||
|
### 6. 启动并激活 ProFTPD ###
|
||||||
|
|
||||||
|
全部设定好后,最后就是启动 ProFTPD 并试一下。可以运行下面的命令来启动 proftpd ftp 守护程序。
|
||||||
|
|
||||||
|
$ sudo systemctl start proftpd.service
|
||||||
|
|
||||||
|
然后,我们可以设定开机启动。
|
||||||
|
|
||||||
|
$ sudo systemctl enable proftpd.service
|
||||||
|
|
||||||
|
Created symlink from /etc/systemd/system/multi-user.target.wants/proftpd.service to /usr/lib/systemd/system/proftpd.service.
|
||||||
|
|
||||||
|
### 7. 登录到 FTP 服务器 ###
|
||||||
|
|
||||||
|
现在,如果都是按照本教程设置好的,我们一定可以连接到 FTP 服务器,并使用以上设置的信息登录。在这里,我们将配置 FTP 客户端 filezilla,使用**服务器的 IP 或 URL** 作为主机名,协议选择 **FTP**,用户名填入 **arunftp**,密码是在上面第 3 步中设定的密码。如果你按照第 4 步中的方式打开了 TLS 支持,加密类型还需要选择**显式要求基于 TLS 的 FTP**;如果没有打开,也不想使用 TLS 加密,加密类型就选择**普通 FTP**。
|
||||||
|
|
||||||
|
![FTP 登录细节](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png)
|
||||||
|
|
||||||
|
要做上述设定,需要打开菜单里的文件,点击站点管理器,然后点击新建站点,再按上面的方式设置。
|
||||||
|
|
||||||
|
![FTP SSL 凭证](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-ssl-certificate.png)
|
||||||
|
|
||||||
|
随后系统会要求允许 SSL 凭证,点确定。之后,就可以从我们的 FTP 服务器上传下载文件和文件夹了。
|
||||||
|
|
||||||
|
### 总结 ###

最后,我们成功地在 Fedora 22 机器上安装并配置好了 Proftpd FTP 服务器。Proftpd 是一个超级强大、高度可配置和可扩展的 FTP 守护软件。上面的教程展示了如何配置一个采用 TLS 加密的安全 FTP 服务器。强烈建议为 FTP 服务器开启 TLS 加密,因为它可以使用 SSL 凭证加密数据传输和登录过程。本文中,我们也没有配置 FTP 的匿名访问,因为一般受保护的 FTP 系统不建议这样做。FTP 访问让人们的上传和下载变得非常简单也更高效。我们还可以更改服务端口来增加安全性。如果你有任何疑问、建议、反馈,请在下面评论区留言,这样我们就能够改善并更新文章内容。谢谢!玩的开心 :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/

作者:[Arun Pyasi][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:http://www.proftpd.org/
[2]:http://www.proftpd.org/features.html

Ubuntu 14.04中修复“update information is outdated”错误
================================================================================

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Fix_update_information_is_outdated.jpeg)

看到Ubuntu 14.04的顶部面板上那个显示下面这个错误的红色三角形了吗?

> 更新信息过时。该错误可能是由网络问题,或者某个仓库不再可用而造成的。请通过从指示器菜单中选择‘显示更新’来手动更新,然后查看是否存在有失败的仓库。

它看起来像这样:

它看起来像这样:

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Update_error_Ubuntu.jpeg)

这里的粉红色感叹号标记就是原来的红色三角形,因为我使用了[最佳的Ubuntu图标主题][1]之一,Numix。让我们回到该错误中,这是一个常见的更新问题,你也许时不时地会碰到它。现在,你或许想知道的是,到底是什么导致了这个更新错误的出现。

### 引起‘update information is outdated’错误的原因 ###

导致该错误的原因在其自身的错误描述中就讲得相当明白,它告诉你“这可能是由网络问题或者某个不再可用的仓库导致的”。所以,可能是你更新系统时某些仓库或 PPA 已不再受到支持,或者你正面对着类似的其它问题。

虽然错误本身就讲得很明白,而它给出的建议操作“请通过从指示器菜单选择‘显示更新’来手动更新以查看失败的仓库”却并不能很好地解决问题。如果你点击显示更新,你所能看见的仅仅是系统已经更新。

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/System_updated_Ubuntu.png)

很奇怪,不是吗?我们怎样才能找出是什么出错了,哪里出错了,以及为什么出错呢?

### 修复‘update information is outdated’错误 ###

这里讨论的‘解决方案’可能对Ubuntu的这些版本有用:Ubuntu 14.04 或 12.04。你所要做的仅仅是打开终端(Ctrl+Alt+T),然后使用下面的命令:

    sudo apt-get update

等待命令结束,然后查看其结果。这里插个快速提示,你可以[在终端中添加通知][2],这样当一个耗时很长的命令结束执行时就会通知你。在该命令的最后几行中,可以看到你的系统正面临什么样的错误。是的,你肯定会看到一个错误。
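`apt-get update` 的输出可能很长,可以用 grep 直接过滤出其中的警告和错误行。下面是一个自包含的小演示(示例中的仓库地址和报错内容都是虚构的,仅用于说明过滤方法;实际使用时执行 `sudo apt-get update 2>&1 | grep -E '^(W|E|Err)'` 即可):

```shell
#!/bin/sh
# 从 apt-get update 的输出里过滤警告(W:)、错误(E:)和获取失败(Err)的行
# 这里用一段虚构的示例输出演示;实际使用:sudo apt-get update 2>&1 | grep -E '^(W|E|Err)'
sample_output='Hit http://archive.ubuntu.com trusty InRelease
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65 kB]
Err http://ppa.example.com trusty InRelease
W: GPG error: http://ppa.example.com trusty InRelease: NO_PUBKEY 1234567890ABCDEF
E: Some index files failed to download.'

echo "$sample_output" | grep -E '^(W|E|Err)'
```

这样就只剩下真正出问题的那几行,不用在整屏输出里找了。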

在我这里,我看到了有名的[GPG error: The following could not be verified][3]错误。很明显,[在Ubuntu 15.04中安装声破天][4]有点问题。

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Update_error_Ubuntu_1.jpeg)

很可能你看到的不是像我一样的GPG错误,那样的话,我建议你读一读我写的这篇文章[修复Ubuntu中的各种常见更新错误][5]。

我知道有不少人,尤其是初学者,很是讨厌命令行,但是如果你正在使用Linux,你就无可避免会使用到终端。此外,那东西并没你想象的那么可怕。试试吧,你会很快上手的。

我希望这个快速提示对于你修复Ubuntu中的“update information is outdated”错误有帮助。如果你有任何问题或建议,请不吝提出,我们将无任欢迎。

--------------------------------------------------------------------------------

via: http://itsfoss.com/fix-update-information-outdated-ubuntu/

作者:[Abhishek][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/best-icon-themes-ubuntu-1404/
[2]:http://itsfoss.com/notification-terminal-command-completion-ubuntu/
[3]:http://itsfoss.com/solve-gpg-error-signatures-verified-ubuntu/
[4]:http://itsfoss.com/install-spotify-ubuntu-1504/
[5]:http://itsfoss.com/fix-update-errors-ubuntu-1404/

如何在 Ubuntu 中管理开机启动应用
================================================================================

![在 Ubuntu 中管理开机启动应用](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Manage-Start-up-apps-in-Ubuntu.png)

你曾经考虑过 **在 Ubuntu 中管理开机启动应用** 吗?如果在开机时,你的 Ubuntu 系统启动得非常缓慢,那么你就需要考虑这个问题了。

每当你开机进入一个操作系统,一系列的应用将会自动启动。这些应用被称为‘开机启动应用’ 或‘开机启动程序’。随着时间的推移,当你在系统中安装了足够多的应用时,你将发现有太多的‘开机启动应用’在开机时自动地启动了,它们吃掉了很多的系统资源,并将你的系统拖慢。这可能会让你感觉卡顿,我想这种情况并不是你想要的。

让 Ubuntu 变得更快的方法之一是对这些开机启动应用进行控制。 Ubuntu 为你提供了一个 GUI 工具来让你发现这些开机启动应用,然后完全禁止或延迟它们的启动,这样就可以不让每个应用在开机时同时运行。

在这篇文章中,我们将看到 **在 Ubuntu 中,如何控制开机启动应用,如何让一个应用在开机时启动以及如何发现隐藏的开机启动应用。** 这里提供的指导对所有的 Ubuntu 版本均适用,例如 Ubuntu 12.04、Ubuntu 14.04 和 Ubuntu 15.04。

### 在 Ubuntu 中管理开机启动应用 ###

默认情况下, Ubuntu 提供了一个`开机启动应用工具`来供你使用,你不必再进行安装。只需到 Unity 面板中就可以查找到该工具。

![ubuntu 中的开机启动应用工具](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_Ubuntu.jpeg)

点击它来启动。下面是我的`开机启动应用`的样子:

![在 Ubuntu 中查看开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Screenshot-from-2015-07-18-122550.png)

### 在 Ubuntu 中移除开机启动应用 ###

现在由你来发现哪个程序对你用处不大。对我来说,是 [Caribou][1] 这个软件,它是一个屏幕键盘程序,在开机时它并没有什么用处,所以我想将它从开机启动程序的列表中移除。

你可以选择阻止某个程序在开机时启动,同时在开机启动程序列表中保留该条目,以便以后再启用它。点击 `关闭` 按钮来保存你的设置。

![在 Ubuntu 中移除开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_ubuntu_2.png)

要将一个程序从开机启动程序列表中彻底移除,先选择对应的条目,然后点击窗口右边面板中的 `移除` 按钮。

![在 Ubuntu 中将程序从开机启动列表中移除](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_Ubuntu_1.jpeg)

需要提醒的是,这并不会将该程序卸载掉,只是让该程序不再在每次开机时自动启动。你可以对所有你不喜欢的程序做类似的处理。

### 让开机启动程序延迟启动 ###

若你并不想把程序从开机启动列表中移除,但又担心系统性能问题,那么你可以给程序添加一个延迟启动命令,这样所有的程序就不会在开机时同时启动。

选择一个程序然后点击 `编辑` 按钮。

![编辑开机启动应用列表](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_ubuntu_3.png)

这将展示出运行这个特定的程序所需的命令。

![在开机启动列表的程序运行所需的命令](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_ubuntu_4.jpg)

所有你需要做的就是在程序运行命令前添加一句 `sleep XX;`,这样实际的启动命令就会被推迟 `XX` 秒才执行。例如,假如我想让 Variety [壁纸管理应用][2] 延迟启动 2 分钟,我就需要像下面那样在命令前添加 `sleep 120;`:

![在 Ubuntu 中延迟开机启动的程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_ubuntu_5.png)

保存并关闭设置。你将在下一次启动时看到效果。
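`sleep` 前缀的延迟效果可以先在终端里验证。下面是一个小演示,用 1 秒代替正文中的 120 秒(`date +%s` 取当前的秒级时间戳):

```shell
#!/bin/sh
# 演示 "sleep XX; 命令" 的效果:后面的命令会被推迟 XX 秒才执行
# 这里用 1 秒代替正文中的 120 秒
start=$(date +%s)
sleep 1; echo "程序在 $(( $(date +%s) - start )) 秒后才真正启动"
```

开机启动项里的命令也是同样道理,只是延迟发生在登录之后。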

### 增添一个程序到开机启动应用列表中 ###

这对于新手来说需要一点技巧。我们知道,在 Linux 的底层都是一些命令,在上一节我们看到这些开机启动程序只是在每次开机时运行一些命令。假如你想在开机启动列表中添加一个新的程序,你需要知道运行该应用所需的命令。

#### 第 1 步:如何查找运行一个程序所需的命令? ####

首先来到 Unity Dash 面板然后搜索 `Main Menu`:

![Ubuntu 下的程序菜单](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Program_menu_Ubuntu.jpg)

这将展示出在各种类别下你安装的所有程序。在 Ubuntu 的低版本中,你将看到一个相似的菜单,通过它来选择并运行应用。

![Ubuntu 下的 main menu ](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Main_menu_ubuntu.jpeg)

在各种类别下找到你找寻的应用,然后点击 `属性` 按钮来查看运行该应用所需的命令。例如,我想在开机时运行 `Transmission Torrent 客户端`。

![在 Ubuntu 下查找运行程序所需的命令](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Main_menu_ubuntu_1.jpeg)

这就会给出运行 `Transmission` 应用的命令:

![在 Ubuntu 下查找运行某个程序所需的命令](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_programs_commands.png)

接着,我将用相同的信息来将 `Transmission` 应用添加到开机启动列表中。

#### 第 2 步: 添加一个程序到开机启动列表中 ####

再次来到开机启动应用工具中并点击 `添加` 按钮。这将让你输入一个应用的名称,对应的命令和相关的描述。其中命令最为重要,你可以使用任何你想用的名称和描述。使用上一步得到的命令然后点击 `添加` 按钮。

![在 Ubuntu 中添加一个开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Add_startup_program_Ubuntu.jpg)

就这样,你将在下一次开机时看到这个程序会自动运行。这就是在 Ubuntu 中你能做的关于开机启动应用的所有事情。
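开机启动应用工具做的事情,本质上是在 `~/.config/autostart/` 目录下生成一个 `.desktop` 文件。下面的草稿用命令行完成同样的事情(仍以 Transmission 为例;为了不影响真实配置,这里先写入一个临时目录,实际使用时把 `AUTOSTART_DIR` 换成 `$HOME/.config/autostart` 即可):

```shell
#!/bin/sh
# 手工创建一个开机启动项,也就是一个 .desktop 文件
# AUTOSTART_DIR 这里用临时目录仅作演示;实际应为 $HOME/.config/autostart
AUTOSTART_DIR=$(mktemp -d)

cat > "$AUTOSTART_DIR/transmission.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Transmission
Comment=Start Transmission at login
Exec=transmission-gtk
X-GNOME-Autostart-enabled=true
EOF

cat "$AUTOSTART_DIR/transmission.desktop"
```

其中 `Exec=` 一行填的就是上一步查到的命令;想延迟启动,把它改成 `Exec=sh -c "sleep 120; transmission-gtk"` 这样的形式即可。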
到现在为止,我们已经讨论在开机时可见的应用,但仍有更多的服务,守护进程和程序并不在`开机启动应用工具`中可见。下一节中,我们将看到如何在 Ubuntu 中查看这些隐藏的开机启动程序。

### 在 Ubuntu 中查看隐藏的开机启动程序 ###

要查看在开机时哪些服务在运行,可以打开一个终端并使用下面的命令:

    sudo sed -i 's/NoDisplay=true/NoDisplay=false/g' /etc/xdg/autostart/*.desktop

上面的命令是一个快速查找和替换命令,它将所有自动启动程序配置中的 `NoDisplay=true` 改为 `NoDisplay=false`,一旦执行了这个命令后,再次打开`开机启动应用工具`,现在你应该可以看到更多的程序:

![在 Ubuntu 中查看隐藏的开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Hidden_startup_program_Ubuntu.jpg)

你可以像先前我们讨论的那样管理这些开机启动应用。我希望这篇教程可以帮助你在 Ubuntu 中控制开机启动程序。任何的问题或建议总是欢迎的。
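注意 `sed -i` 会就地改写 `/etc/xdg/autostart/` 下的系统文件,如果不放心,可以先在副本上验证替换效果。下面是一个自包含的小演示(示例 `.desktop` 文件的内容是虚构的):

```shell
#!/bin/sh
# 在临时副本上验证 NoDisplay=true 改为 NoDisplay=false 的替换,不触碰系统文件
workdir=$(mktemp -d)
printf '[Desktop Entry]\nName=Hidden Service\nNoDisplay=true\n' > "$workdir/demo.desktop"

sed -i 's/NoDisplay=true/NoDisplay=false/g' "$workdir"/*.desktop

cat "$workdir/demo.desktop"
```

确认替换符合预期后,再对 `/etc/xdg/autostart/*.desktop` 执行正文中的命令。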

--------------------------------------------------------------------------------

via: http://itsfoss.com/manage-startup-applications-ubuntu/

作者:[Abhishek][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:https://wiki.gnome.org/action/show/Projects/Caribou?action=show&redirect=Caribou
[2]:http://itsfoss.com/applications-manage-wallpapers-ubuntu/

如何在 Docker 中通过 Kitematic 交互式执行任务
================================================================================

在本篇文章中,我们会学习如何在 Windows 操作系统上安装 Kitematic 以及部署一个 Hello World Nginx Web 服务器。Kitematic 是一个自由开源软件,它有着现代化的界面设计,允许我们在 Docker 中交互式地执行任务。Kitematic 设计非常漂亮、界面也很不错。我们可以简单快速地开箱搭建我们的容器而不需要输入命令,只需在图形用户界面中简单点击,就能在容器上部署我们的应用。Kitematic 集成了 Docker Hub,允许我们搜索、拉取任何需要的镜像,并在上面部署应用。它同时也能很好地切换到命令行用户接口模式。目前,它包括了自动映射端口、可视化更改环境变量、配置卷、精简日志以及其它功能。

下面是在 Windows 上安装 Kitematic 并部署 Hello World Nginx Web 服务器的 3 个简单步骤。

### 1. 下载 Kitematic ###

首先,我们需要从 github 仓库 [https://github.com/kitematic/kitematic/releases][1] 中下载适用于 Windows 操作系统的最新 Kitematic 发行版。我们用下载器或者 web 浏览器下载它的可执行 EXE 文件。下载完成后,我们需要双击该可执行应用文件。

![运行 Kitematic](http://blog.linoxide.com/wp-content/uploads/2015/06/running-kitematic.png)

双击应用文件之后,会出现一个安全提示,我们只需要点击 OK 按钮,如下图所示。

![安全警告](http://blog.linoxide.com/wp-content/uploads/2015/06/security-warning.png)

### 2. 安装 Kitematic ###

下载好可执行安装程序之后,我们现在可以在 Windows 操作系统上安装 Kitematic 了。安装程序会开始下载并安装运行 Kitematic 需要的依赖,包括 Virtual Box 和 Docker。如果已经在系统上安装了 Virtual Box,它会把 Virtual Box 升级到最新版本。安装程序会在几分钟内完成,具体取决于你的网络和系统速度。如果你还没有安装 Virtual Box,它会问你是否安装 Virtual Box 网络驱动,建议安装,因为它有助于 Virtual Box 的网络功能。

![安装 Kitematic](http://blog.linoxide.com/wp-content/uploads/2015/06/installing-kitematic.png)

需要的依赖 Docker 和 Virtual Box 安装完成并运行后,会让我们登录到 Docker Hub。如果我们还没有账户或者还不想登录,可以点击 **SKIP FOR NOW** 继续后面的步骤。

![登录 Docker Hub](http://blog.linoxide.com/wp-content/uploads/2015/06/login-docker-hub.jpg)

如果你还没有账户,你可以在应用程序上点击注册链接并在 Docker Hub 上创建账户。

完成之后,就会出现 Kitematic 应用程序的第一个界面,正如下面看到的这样,我们可以搜索可用的 docker 镜像。

![启动 Kitematic](http://blog.linoxide.com/wp-content/uploads/2015/07/kitematic-app-launched.jpg)

### 3. 部署 Nginx Hello World 容器 ###

现在,成功安装完 Kitematic 之后,我们就可以部署容器了。要运行一个容器,只需要在搜索区域中搜索镜像,然后点击 Create 按钮部署容器。在这篇教程中,我们会部署一个包含了 Hello World 主页的小型 Nginx Web 服务器。为此,我们在搜索区域中搜索 Hello World Nginx,看到容器信息之后,点击 Create 来部署容器。

![运行 Hello World Nginx](http://blog.linoxide.com/wp-content/uploads/2015/06/hello-world-nginx-run.jpg)

镜像下载完成之后,它会自动部署。我们可以查看 Kitematic 部署容器的命令日志,也可以在 Kitematic 界面上预览 web 页面。现在,我们通过点击预览在 web 浏览器中查看我们的 Hello World 页面。

![浏览 Nginx Hello World](http://blog.linoxide.com/wp-content/uploads/2015/07/nginx-hello-world-browser.jpg)

如果我们想切换到命令行接口并用它管理 docker,这里有个称为 Docker CLI 的按钮,它会打开一个 PowerShell,在里面我们可以执行 docker 命令。
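在这个命令行里,Kitematic 图形界面上“搜索镜像并点击 Create”的过程,大致等价于几条 docker 命令。下面的草稿按演练模式只打印这些命令(镜像名 `kitematic/hello-world-nginx`、容器名和主机端口 8080 都是示例假设,去掉 `echo` 即可真正执行):

```shell
#!/bin/sh
# Kitematic 图形操作对应的 docker 命令(演练模式:仅打印,去掉 echo 即可执行)
IMAGE=kitematic/hello-world-nginx   # 示例镜像名
NAME=hello-nginx                    # 示例容器名

echo "docker search hello-world-nginx"              # 对应界面上的搜索
echo "docker pull $IMAGE"                           # 对应下载镜像
echo "docker run -d --name $NAME -p 8080:80 $IMAGE" # 对应点击 Create 部署
echo "docker logs $NAME"                            # 对应查看容器日志
```

部署完成后,在浏览器里访问 `http://localhost:8080` 就相当于界面上的“预览”。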

![Docker CLI PowerShell](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-cli-powershell.png)

现在,如果我们想配置我们的容器并执行类似更改容器名称、设置环境变量、指定端口、配置容器存储以及其它高级功能的任务,我们可以在容器设置页面做到这些。

![设置 Kitematic Container](http://blog.linoxide.com/wp-content/uploads/2015/07/kitematic-container-settings.png)

### 总结 ###

我们终于成功在 Windows 操作系统上安装了 Kitematic 并部署了一个 Hello World Nginx 服务器。推荐大家总是下载安装最新发行版的 Kitematic,因为新版本会增加很多高级功能。由于 Docker 运行在 64 位平台,当前 Kitematic 也是为 64 位操作系统构建,只能在 Windows 7 以及更高版本上运行。在这篇教程中,我们部署了一个 Nginx Web 服务器;类似地,我们可以在 Kitematic 中通过简单的点击部署任何 docker 容器。Kitematic 已经有可用的 Mac OS X 和 Windows 版本,Linux 版本也在开发中,很快就会发布。如果你有任何疑问、建议或者反馈,请在下面的评论框中写下来,以便我们更好地改进或更新我们的内容。非常感谢!Enjoy :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/interactively-docker-kitematic/

作者:[Arun Pyasi][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:https://github.com/kitematic/kitematic/releases