Merge pull request #8 from LCTT/master

update
This commit is contained in:
Kevin Sicong Jiang 2015-07-20 14:34:10 -05:00
commit cd03d0c462
31 changed files with 3554 additions and 1065 deletions


@ -1,4 +1,4 @@
ZMap 文档
互联网扫描器 ZMap 完全手册
================================================================================
1. 初识 ZMap
1. 最佳扫描习惯
@ -21,7 +21,7 @@ ZMap 文档
### 初识 ZMap ###
ZMap被设计用来针对IPv4所有地址或其中的大部分实施综合扫描的工具。ZMap是研究者手中的利器但在运行ZMap时请注意您很有可能正在以每秒140万个包的速度扫描整个IPv4地址空间 。我们建议用户在实施即使小范围扫描之前,也联系一下本地网络的管理员并参考我们列举的最佳扫描习惯
ZMap是一款被设计用来针对整个IPv4地址空间或其中大部分实施综合扫描的工具。ZMap是研究者手中的利器但在运行ZMap时请注意您很有可能正在以每秒140万个包的速度扫描整个IPv4地址空间。我们建议用户即使在实施小范围扫描之前也联系一下本地网络的管理员并参考我们列举的[最佳扫描习惯](#bestpractices)。
默认情况下ZMap会对于指定端口实施尽可能大速率的TCP SYN扫描。较为保守的情况下对10,000个随机的地址的80端口以10Mbps的速度扫描如下所示
@ -42,11 +42,13 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如仅扫描10.0.0.0/
0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%
这些更新信息提供了扫描的即时状态并表示成:完成进度% (剩余时间); send: 发出包的数量 即时速率 (平均发送速率); recv: 接收包的数量 接收率 (平均接收率); hits: 成功率
这些更新信息提供了扫描的即时状态并表示成:
如果您不知道您所在网络支持的扫描速率,您可能要尝试不同的扫描速率和带宽限制直到扫描效果开始下降,借此找出当前网络能够支持的最快速度。
完成进度% (剩余时间); send: 发出包的数量 即时速率 (平均发送速率); recv: 接收包的数量 接收率 (平均接收率); hits: 命中率
默认情况下ZMap会输出不同IP地址的列表例如SYN ACK数据包的情况像下面这样。还有几种附加的格式JSON和Redis作为其输出结果以及生成程序可解析的扫描统计选项。 同样,可以指定附加的输出字段并使用输出过滤来过滤输出的结果。
如果您不知道您所在网络能支持的扫描速率,您可能要尝试不同的扫描速率和带宽限制直到扫描效果开始下降,借此找出当前网络能够支持的最快速度。
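例如,可以从较保守的速率开始,逐步提高带宽上限并观察命中率的变化(以下数值仅为示意):
$ zmap -p 80 -n 10000 -B 10M -o test-10M.csv
$ zmap -p 80 -n 10000 -B 100M -o test-100M.csv
$ zmap -p 80 -n 10000 -B 1G -o test-1G.csv    # 若命中率明显下降,则退回上一档速率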
默认情况下ZMap会输出不同IP地址的列表例如根据SYN ACK数据包的情况像下面这样。其[输出结果](#output)还有几种附加的格式JSON和Redis可以用作生成[程序可解析的扫描统计](#verbosity)。 同样,可以指定附加的[输出字段](#outputfields)并使用[输出过滤](#outputfilter)来过滤输出的结果。
115.237.116.119
23.9.117.80
@ -54,52 +56,49 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如仅扫描10.0.0.0/
217.120.143.111
50.195.22.82
我们强烈建议您使用黑名单文件,以排除预留的/未分配的IP地址空间组播地址,RFC1918以及网络中需要排除在您扫描之外的地址。默认情况下ZMap将采用位于 `/etc/zmap/blacklist.conf`的这个简单的黑名单文件中所包含的预留和未分配地址。如果您需要某些特定设置比如每次运行ZMap时的最大带宽或黑名单文件您可以在文件`/etc/zmap/zmap.conf`中指定或使用自定义配置文件。
我们强烈建议您使用[黑名单文件](#blacklisting),以排除预留的/未分配的IP地址空间RFC1918 规定的私有地址、组播地址以及网络中需要排除在您扫描之外的地址。默认情况下ZMap将采用位于 `/etc/zmap/blacklist.conf`的这个简单的[黑名单文件](#blacklisting)中所包含的预留和未分配地址。如果您需要某些特定设置比如每次运行ZMap时的最大带宽或[黑名单文件](#blacklisting),您可以在文件`/etc/zmap/zmap.conf`中指定或使用自定义[配置文件](#config)
如果您正试图解决扫描的相关问题,有几个选项可以帮助您调试。首先,您可以通过添加`--dryrun`实施预扫,以此来分析包可能会发送到网络的何处。此外,还可以通过设置'--verbosity=n`来更改日志详细程度。
如果您正试图解决扫描的相关问题,有几个选项可以帮助您调试。首先,您可以通过添加`--dryrun`实施[预扫](#dryrun),以此来分析包可能会发送到网络的何处。此外,还可以通过设置`--verbosity=n`来更改[日志详细程度](#verbosity)。
----------
### 最佳扫描体验 ###
<a name="bestpractices" ></a>
### 最佳扫描习惯 ###
我们为针对互联网进行扫描的研究者提供了一些建议,以此来引导养成良好的互联网合作氛围
我们为针对互联网进行扫描的研究者提供了一些建议,以此来引导养成良好的互联网合作氛围。
- 密切协同本地的网络管理员,以减少风险和调查
- 确认扫描不会使本地网络或上游供应商瘫痪
- 标记出在扫描中呈良性的网页和DNS条目的源地址
- 明确注明扫描中所有连接的目的和范围
- 提供一个简单的退出方法并及时响应请求
- 在发起扫描的源地址的网页和DNS条目中申明你的扫描是善意的
- 明确解释你的扫描中所有连接的目的和范围
- 提供一个简单的退出扫描的方法并及时响应请求
- 实施扫描时,不使用比研究对象需求更大的扫描范围或更快的扫描频率
- 如果可以通过时间或源地址来传播扫描流量
- 如果可以,将扫描流量分布到不同的时间或源地址上
即使不声明,使用扫描的研究者也应该避免利用漏洞或访问受保护的资源,并遵守其辖区内任何特殊的法律规定。
----------
### 命令行参数 ###
<a name="args" ></a>
#### 通用选项 ####
这些选项是实施简单扫描时最常用的选项。我们注意到某些选项取决于所使用的探测模块或输出模块在实施ICMP Echo扫描时是不需要使用目的端口的
这些选项是实施简单扫描时最常用的选项。我们注意到某些选项取决于所使用的[探测模块](#probemodule)或[输出模块](#outputmodule)在实施ICMP Echo扫描时是不需要使用目的端口的
**-p, --target-port=port**
用来扫描的TCP端口号例如443
要扫描的目标TCP端口号例如443
**-o, --output-file=name**
使用标准输出将结果写入该文件。
将结果写入该文件,使用`-`代表输出到标准输出
**-b, --blacklist-file=path**
文件中被排除的子网使用CIDR表示法如192.168.0.0/16一个一行。建议您使用此方法排除RFC 1918地址,组播地址,IANA预留空间等IANA专用地址。在conf/blacklist.example中提供了一个以此为目的示例黑名单文件。
文件中被排除的子网使用CIDR表示法如192.168.0.0/16一个一行。建议您使用此方法排除RFC 1918地址、组播地址、IANA预留空间等IANA专用地址。在conf/blacklist.example中提供了一个以此为目的示例黑名单文件。
#### 扫描选项 ####
**-n, --max-targets=n**
限制探测目标的数量。后面跟的可以是一个数字(例如'-n 1000`或百分比(例如,`-n 0.1`)当然都是针对可扫描地址空间而言的(不包括黑名单)
限制探测目标的数量。后面跟的可以是一个数字(例如`-n 1000`),或可扫描地址空间的百分比(例如,`-n 0.1%`)(不包括黑名单)
**-N, --max-results=n**
@ -111,7 +110,7 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如仅扫描10.0.0.0/
**-r, --rate=pps**
设置传输速率,以包/秒为单位
设置发包速率,以包/秒为单位
**-B, --bandwidth=bps**
@ -119,7 +118,7 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如仅扫描10.0.0.0/
**-c, --cooldown-time=secs**
发送完成后多久继续接收(默认值= 8
发送完成后等待多久继续接收回包(默认值= 8
**-e, --seed=n**
@ -127,7 +126,7 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如仅扫描10.0.0.0/
**--shards=n**
将扫描分片/区使其可多个ZMap中执行默认值= 1。启用分片时`--seed`参数是必需的。
将扫描分片/区使其可多个ZMap中执行默认值= 1。启用分片时`--seed`参数是必需的。
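一个假设的例子(需配合下面的`--shard`选项):要把同一次扫描平分到两台机器上执行,可以分别运行(两边的种子必须一致):
$ zmap -p 443 --seed=1234 --shards=2 --shard=0 -o shard0.csv
$ zmap -p 443 --seed=1234 --shards=2 --shard=1 -o shard1.csv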
**--shard=n**
@ -165,7 +164,7 @@ ZMap也可用于扫描特定子网或CIDR地址块。例如仅扫描10.0.0.0/
#### 探测选项 ####
ZMap允许用户指定并添加自己所需要探测模块。 探测模块的职责就是生成主机回复的响应包。
ZMap允许用户指定并添加自己所需要的[探测模块](#probemodule)。 探测模块的职责就是生成要发送的探测包,并处理主机回复的响应包。
**--list-probe-modules**
@ -173,7 +172,7 @@ ZMap允许用户指定并添加自己所需要探测的模块。 探测模块的
**-M, --probe-module=name**
选择探探测模块(默认值= tcp_synscan
选择[探测模块](#probemodule)(默认值= tcp_synscan
**--probe-args=args**
@ -185,7 +184,7 @@ ZMap允许用户指定并添加自己所需要探测的模块。 探测模块的
#### 输出选项 ####
ZMap允许用户选择指定的输出模块。输出模块负责处理由探测模块返回的字段,并将它们交给用户。用户可以指定输出的范围,并过滤相应字段。
ZMap允许用户指定和编写他们自己[输出模块](#outputmodule)。输出模块负责处理由探测模块返回的字段,并将它们输出给用户。用户可以指定输出的字段,并过滤相应字段。
**--list-output-modules**
@ -193,7 +192,7 @@ ZMap允许用户选择指定的输出模块。输出模块负责处理由探测
**-O, --output-module=name**
选择输出模块默认值为csv
选择[输出模块](#outputmodule)默认值为csv
**--output-args=args**
@ -201,21 +200,21 @@ ZMap允许用户选择指定的输出模块。输出模块负责处理由探测
**-f, --output-fields=fields**
输出列表,以逗号分割
输出的字段列表,以逗号分隔
**--output-filter**
通过指定相应的探测模块来过滤输出字段
指定输出[过滤器](#outputfilter),对[探测模块](#probemodule)定义的字段进行过滤
#### 附加选项 ####
**-C, --config=filename**
加载配置文件,可以指定其他路径。
加载[配置文件](#config),可以指定其他路径。
**-q, --quiet**
再是每秒刷新输出
不再每秒刷新输出状态
**-g, --summary**
@ -233,13 +232,12 @@ ZMap允许用户选择指定的输出模块。输出模块负责处理由探测
打印版本并退出
----------
### 附加信息 ###
<a name="additional"></a>
#### TCP SYN 扫描 ####
在执行TCP SYN扫描时ZMap需要指定一个目标端口和以供扫描的源端口范围。
在执行TCP SYN扫描时ZMap需要指定一个目标端口,也支持指定发起扫描的源端口范围。
**-p, --target-port=port**
@ -249,27 +247,27 @@ ZMap允许用户选择指定的输出模块。输出模块负责处理由探测
发送扫描数据包的源端口(例如 40000-50000
**警示!** ZMAP基于Linux内核使用SYN/ACK包应答RST包关闭扫描打开的连接。ZMap是在Ethernet层完成包的发送的这样做时为了减少跟踪打开的TCP连接和路由寻路带来的内核开销。因此如果您有跟踪连接建立的防火墙规则netfilter的规则类似于`-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`将阻止SYN/ACK包到达内核。这不会妨碍到ZMap记录应答但它会阻止RST包被送回最终连接会在超时后断开。我们强烈建议您在执行ZMap时选择一组主机上未使用且防火墙允许访问的端口加在`-s`后(如 `-s '50000-60000' ` )。
**警示!** ZMap基于Linux内核使用RST包来应答SYN/ACK包响应以关闭扫描器打开的连接。ZMap是在Ethernet层完成包的发送的这样做是为了减少跟踪打开的TCP连接和路由寻路带来的内核开销。因此如果您有跟踪连接建立的防火墙规则如类似于`-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`的netfilter规则将阻止SYN/ACK包到达内核。这不会妨碍到ZMap记录应答但它会阻止RST包被送回最终被扫描主机的连接会一直打开,直到超时后断开。我们强烈建议您在执行ZMap时选择一组主机上未使用且防火墙允许访问的端口加在`-s`后(如 `-s '50000-60000' ` )。
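下面是一个示意性的完整调用(源端口范围仅为演示),同时请确认防火墙放行了该范围:
$ zmap -p 443 -s '50000-60000' -o results.csv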
#### ICMP Echo 请求扫描 ####
虽然在默认情况下ZMap执行的是TCP SYN扫描但它也支持使用ICMP echo请求扫描。在这种扫描方式下ICMP echo请求包被发送到每个主机并以收到ICMP 应答包作为答复。实施ICMP扫描可以通过选择icmp_echoscan扫描模块来执行如下
虽然在默认情况下ZMap执行的是TCP SYN扫描但它也支持使用ICMP echo请求扫描。在这种扫描方式下ICMP echo请求包被发送到每个主机并以收到ICMP应答包作为答复。实施ICMP扫描可以通过选择icmp_echoscan扫描模块来执行如下
$ zmap --probe-module=icmp_echoscan
#### UDP 数据报扫描 ####
ZMap还额外支持UDP探测它会发出任意UDP数据报给每个主机能在无论UDP或是ICMP任何一个不可达的情况下接受应答。ZMap支持通过使用--probe-args命令行选择四种不同的UDP payload方式。这些都有可列印payload的文本用于命令行的十六进制payload的hex外部文件中包含payload的file和需要动态区域生成的payload的template。为了得到UDP响应请使用-f参数确保您指定的“data”领域处于汇报范围。
ZMap还额外支持UDP探测它会发出任意UDP数据报给每个主机接收UDP或ICMP不可达的应答。ZMap可以通过使用--probe-args命令行选项来设置四种不同的UDP载荷可在命令行设置的可打印ASCII码载荷text、十六进制载荷hex、包含载荷的外部文件file以及通过动态字段生成的载荷模板template。为了得到UDP响应请使用-f参数确保您指定的“data”字段处于输出范围。
下面的例子将发送两个字节'ST'即PC的'status'请求到UDP端口5632。
下面的例子将发送两个字节'ST'即pcAnywhere的'status'请求到UDP端口5632。
$ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -
下面的例子将发送字节“0X02”即SQL服务器的 'client broadcast'请求到UDP端口1434。
下面的例子将发送字节“0X02”即SQL Server的'client broadcast'请求到UDP端口1434。
$ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -
下面的例子将发送一个NetBIOS状态请求到UDP端口137。使用一个ZMap自带的payload文件。
下面的例子将发送一个NetBIOS状态请求到UDP端口137。使用一个ZMap自带的载荷文件。
$ zmap -M udp -p 137 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -
@ -277,9 +275,9 @@ ZMap还额外支持UDP探测它会发出任意UDP数据报给每个主机
$ zmap -M udp -p 1434 --probe-args=file:sip_options.tpl -N 100 -f saddr,data -o -
UDP payload 模板仍处于实验阶段。当您在更多的使用一个以上的发送线程(-T时可能会遇到崩溃和一个明显的相比静态payload性能降低的表现。模板仅仅是一个由一个或多个使用$ {}将字段说明封装成序列构成的payload文件。某些协议特别是SIP需要payload来反射包中的源和目的地址。其他协议如端口映射和DNS包含范围伴随每一次请求随机生成或Zamp扫描的多宿主系统将会抛出危险警告
UDP载荷模板仍处于实验阶段。当您使用一个以上的发送线程-T可能会遇到崩溃并且相比静态载荷会有明显的性能下降。模板仅仅是一个载荷文件其中用${}封装起一个或多个字段说明。某些协议特别是SIP需要载荷来反射包中的源和目的地址。其他协议如portmapper和DNS每个请求中包含的字段应该随机生成否则来自多宿主系统的应答可能会被ZMap当作重复包处理。
以下的payload模板将发送SIP OPTIONS请求到每一个目的地
以下的载荷模板将发送SIP OPTIONS请求到每一个目的地
OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
@ -293,10 +291,9 @@ UDP payload 模板仍处于实验阶段。当您在更多的使用一个以上
User-Agent: ${RAND_ALPHA=8}
Accept: text/plain
就像在上面的例子中展示的那样对于大多数SIP正常的实现会在在每行行末添加\r\n并且在请求的末尾一定包含\r\n\r\n。一个可以使用的在ZMap的examples/udp-payloads目录下的例子 (sip_options.tpl).
下面的字段正在如今的模板中实施:
就像在上面的例子中展示的那样注意每行行末以\r\n结尾请求以\r\n\r\n结尾大多数SIP实现都可以正确处理它。一个可以工作的例子放在ZMap的examples/udp-payloads目录下sip_options.tpl
当前实现了下面的模板字段:
- **SADDR**: 源IP地址的点分十进制格式
- **SADDR_N**: 源IP地址的网络字节序格式
@ -306,14 +303,15 @@ UDP payload 模板仍处于实验阶段。当您在更多的使用一个以上
- **SPORT_N**: 源端口的网络字节序格式
- **DPORT**: 目的端口的ascii格式
- **DPORT_N**: 目的端口的网络字节序格式
- **RAND_BYTE**: 随机字节(0-255),长度由=(长度) 参数决定
- **RAND_DIGIT**: 随机数字0-9长度由=(长度) 参数决定
- **RAND_ALPHA**: 随机大写字母A-Z长度由=(长度) 参数决定
- **RAND_ALPHANUM**: 随机大写字母A-Z和随机数字0-9长度由=(长度) 参数决定
- **RAND_BYTE**: 随机字节(0-255),长度由=(length) 参数决定
- **RAND_DIGIT**: 随机数字0-9长度由=(length) 参数决定
- **RAND_ALPHA**: 随机大写字母A-Z长度由=(length) 参数决定
- **RAND_ALPHANUM**: 随机大写字母A-Z和随机数字0-9长度由=(length) 参数决定
### 配置文件 ###
<a name="config"></a>
ZMap支持使用配置文件代替在命令行上指定所有的需求选项。配置中可以通过每行指定一个长名称的选项和对应的值来创建:
ZMap支持使用配置文件来代替在命令行上指定所有要求的选项。配置中可以通过每行指定一个长名称的选项和对应的值来创建:
interface "eth1"
source-ip 1.1.1.4-1.1.1.8
@ -324,11 +322,12 @@ ZMap支持使用配置文件代替在命令行上指定所有的需求选项。
quiet
summary
然后ZMap就可以按照配置文件一些必要的附加参数运行了:
然后ZMap就可以按照配置文件并指定一些必要的附加参数运行了:
$ zmap --config=~/.zmap.conf --target-port=443
### 详细 ###
<a name="verbosity" ></a>
ZMap可以在屏幕上生成多种类型的输出。默认情况下ZMap将每隔1秒打印出相似的基本进度信息。可以通过设置`--quiet`来禁用。
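例如,下面的调用(参数仅为示意)会关闭每秒一次的进度刷新,并降低日志详细程度:
$ zmap -p 80 -n 10000 --quiet --verbosity=1 -o results.csv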
@ -377,8 +376,9 @@ ZMap还支持在扫描之后打印出一个的可grep的汇总信息类似于
adv permutation-gen 4215763218
### 结果输出 ###
<a name="output" />
ZMap可以通过**输出模块**生成不同格式的结果。默认情况下ZMap只支持**csv**的输出,但是可以通过编译支持**redis**和**json** 。可以使用**输出过滤**来过滤这些发送到输出模块上的结果。输出模块的范围由用户指定。默认情况如果没有指定输出文件ZMap将以csv格式返回结果ZMap不会产生特定结果。也可以编写自己的输出模块;请参阅编写输出模块。
ZMap可以通过**输出模块**生成不同格式的结果。默认情况下ZMap只支持**csv**的输出,但是可以通过编译支持**redis**和**json**。可以使用**输出过滤**来过滤这些发送到输出模块上的结果。输出模块输出的字段由用户指定。默认情况如果没有指定输出文件ZMap将以csv格式返回结果而不会生成特定结果。也可以编写自己的输出模块请参阅[编写输出模块](#extending)。
**-o, --output-file=p**
@ -388,14 +388,13 @@ ZMap可以通过**输出模块**生成不同格式的结果。默认情况下,
调用自定义输出模块
**-f, --output-fields=p**
输出以逗号分隔各字段的列表
以逗号分隔的输出字段列表
**--output-filter=filter**
在给定的探测区域实施输出过滤
对给定探测的指定字段实施输出过滤
**--list-output-modules**
@ -403,17 +402,17 @@ ZMap可以通过**输出模块**生成不同格式的结果。默认情况下,
**--list-output-fields**
列出可用的给定探测区域
列出给定的探测的可用输出字段
#### 输出字段 ####
ZMap有很多区域它可以基于IP地址输出。这些区域可以通过在给定探测模块上运行`--list-output-fields`来查看。
除了IP地址之外ZMap有很多字段。这些字段可以通过在给定探测模块上运行`--list-output-fields`来查看。
$ zmap --probe-module="tcp_synscan" --list-output-fields
saddr string: 应答包中的源IP地址
saddr-raw int: 网络提供的整形形式的源IP地址
saddr-raw int: 网络字节格式的源IP地址
daddr string: 应答包中的目的IP地址
daddr-raw int: 网络提供的整形形式的目的IP地址
daddr-raw int: 网络字节格式的目的IP地址
ipid int: 应答包中的IP识别号
ttl int: 应答包中的ttl存活时间
sport int: TCP 源端口
@ -426,7 +425,7 @@ ZMap有很多区域它可以基于IP地址输出。这些区域可以通过
repeat int: 是否是来自主机的重复响应
cooldown int: 是否是在冷却时间内收到的响应
timestamp-str string: 响应抵达时的时间戳使用ISO8601格式
timestamp-ts int: 响应抵达时的时间戳使用纪元开始的秒数
timestamp-ts int: 响应抵达时的时间戳使用UNIX纪元开始的秒数
timestamp-us int: 时间戳的微秒部分(即自'timestamp-ts'以来的微秒数)
可以通过使用`--output-fields=fields`或`-f`来选择输出字段,任意组合的输出字段可以被指定为逗号分隔的列表。例如:
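一个示意性的组合(字段名取自上文的字段列表):
$ zmap -p 80 -f "saddr,classification,timestamp-str" -o results.csv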
@ -435,31 +434,32 @@ ZMap有很多区域它可以基于IP地址输出。这些区域可以通过
#### 过滤输出 ####
在传到输出模块之前,探测模块生成的结果可以先过滤。过滤被实施在探测模块的输出字段上。过滤使用简单的过滤语法写成类似于SQL通过ZMap的**--output-filter**选项来实施。输出过滤通常用于过滤掉重复的结果或仅传输成功的响应到输出模块。
在传到输出模块之前,探测模块生成的结果可以先过滤。过滤是针对探测模块的输出字段的。过滤使用类似于SQL的简单过滤语法写成通过ZMap的**--output-filter**选项来指定。输出过滤通常用于过滤掉重复的结果或仅传输成功的响应到输出模块。
过滤表达式的形式为`<字段名> <操作> <>`。`<>`的类型必须是一个字符串或一串无符号整数并且匹配`<字段名>`类型。对于整数比较有效的操作是`= !=, <, >, <=, >=`。字符串比较的操作是=!=。`--list-output-fields`打印那些可供探测模块选择的字段和类型,然后退出。
过滤表达式的形式为`<字段名> <操作> <值>`。`<值>`的类型必须是一个字符串或无符号整数,并且与`<字段名>`的类型匹配。对于整数比较,有效的操作是`=, !=, <, >, <=, >=`。字符串比较的操作是`=`和`!=`。`--list-output-fields`可以打印那些可供探测模块选择的字段和类型,然后退出。
复合型的过滤操作,可以通过使用`&&`(逻辑与)和`||`(逻辑或)这样的运算符来组合出特殊的过滤操作。
**示例**
书写一则过滤仅显示成功,过滤重复应答
书写一则过滤仅显示成功的、不重复的应答
--output-filter="success = 1 && repeat = 0"
过滤出包中含RST并且TTL大于10的分类或者包中含SYNACK的分类
过滤出RST分类并且TTL大于10的包或者SYNACK分类的包
--output-filter="(classification = rst && ttl > 10) || classification = synack"
#### CSV ####
csv模块将会生成以逗号分隔各输出请求字段的文件。例如,以下的指令将生成下面的CSV至名为`output.csv`的文件。
csv模块将会生成以逗号分隔各个要求输出的字段的文件。例如,以下的指令将生成名为`output.csv`的CSV文件。
$ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv
----------
响应, 源地址, 目的地址, 源端口, 目的端口, 序列号, 应答, 是否是冷却模式, 是否重复, 时间戳
#响应, 源地址, 目的地址, 源端口, 目的端口, 序列号, 应答, 是否是冷却模式, 是否重复, 时间戳
response, saddr, daddr, sport, dport, seq, ack, in_cooldown, is_repeat, timestamp
synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
@ -472,13 +472,20 @@ csv模块将会生成以逗号分隔各输出请求字段的文件。例如
#### Redis ####
Redis的输出模块允许地址被添加到一个Redis的队列,不是被保存到文件,允许ZMap将它与之后的处理工具结合使用。
Redis的输出模块允许地址被添加到一个Redis的队列,而不是保存到文件,允许ZMap将它与之后的处理工具结合使用。
**注意!** ZMap默认不会编译Redis功能。如果您想要将Redis功能编译进ZMap源码中可以在CMake的时候加上`-DWITH_REDIS=ON`。
**注意!** ZMap默认不会编译Redis功能。如果你从源码构建ZMap可以在CMake的时候加上`-DWITH_REDIS=ON`来增加Redis支持。
#### JSON ####
JSON输出模块用起来类似于CSV模块只是以JSON格式写入到文件。JSON文件能轻松地导入到其它可以读取JSON的程序中。
**注意!**ZMap默认不会编译JSON功能。如果你从源码构建ZMap可以在CMake的时候加上`-DWITH_JSON=ON`来增加JSON支持。
### 黑名单和白名单 ###
<a name="blacklisting"></a>
ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名单和白名单参数他将会扫描所有的IPv4地址包括本地的保留的以及组播地址。如果指定了黑名单文件那么在黑名单中的网络前缀将不再扫描如果指定了白名单文件只有那些网络前缀在白名单内的才会扫描。白名单和黑名单文件可以协同使用黑名单运用于白名单上例如如果您在白名单中指定了10.0.0.0/8并在黑名单中指定了10.1.0.0/16那么10.1.0.0/16将不会扫描。白名单和黑名单文件可以在命令行中指定如下所示
ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名单和白名单参数他将会扫描所有的IPv4地址包括本地的保留的以及组播地址。如果指定了黑名单文件那么在黑名单中的网络前缀将不再扫描如果指定了白名单文件只有那些网络前缀在白名单内的才会扫描。白名单和黑名单文件可以协同使用黑名单优先于白名单例如如果您在白名单中指定了10.0.0.0/8并在黑名单中指定了10.1.0.0/16那么10.1.0.0/16将不会扫描。白名单和黑名单文件可以在命令行中指定如下所示
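例如(`-w`为白名单文件选项,文件路径仅为示意),下面的扫描只会覆盖白名单中排除掉黑名单之后的部分:
$ zmap -p 80 -w whitelist.txt -b blacklist.txt -o results.csv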
**-b, --blacklist-file=path**
@ -488,7 +495,7 @@ ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名
文件用于记录限制扫描的子网以CIDR的表示法例如192.168.0.0/16
黑名单文件的每行都需要以CIDR的表示格式书写一个单一的网络前缀。允许使用`#`加以备注。例如:
黑名单文件的每行都需要以CIDR的表示格式书写,一行单一的网络前缀。允许使用`#`加以备注。例如:
# IANA英特网编号管理局记录的用于特殊目的的IPv4地址
# http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
@ -501,14 +508,14 @@ ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名
169.254.0.0/16 # RFC3927: 本地链路地址
172.16.0.0/12 # RFC1918: 私有地址
192.0.0.0/24 # RFC6890: IETF协议预留
192.0.2.0/24 # RFC5737: 测试地址
192.88.99.0/24 # RFC3068: IPv6转换到IPv4的任
192.0.2.0/24 # RFC5737: 测试地址1
192.88.99.0/24 # RFC3068: IPv6转换到IPv4的任播
192.168.0.0/16 # RFC1918: 私有地址
192.18.0.0/15 # RFC2544: 检测地址
198.51.100.0/24 # RFC5737: 测试地址
203.0.113.0/24 # RFC5737: 测试地址
198.51.100.0/24 # RFC5737: 测试地址2
203.0.113.0/24 # RFC5737: 测试地址3
240.0.0.0/4 # RFC1112: 预留地址
255.255.255.255/32 # RFC0919: 广播地址
255.255.255.255/32 # RFC0919: 限制广播地址
# IANA记录的用于组播的地址空间
# http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
@ -516,13 +523,14 @@ ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名
224.0.0.0/4 # RFC5771: 组播/预留地址
如果您只是想扫描因特网中随机的一部分地址,使用采样检出,来代替使用白名单和黑名单。
如果您只是想扫描因特网中随机的一部分地址,使用[抽样](#ratelimiting)检出,来代替使用白名单和黑名单。
**注意!**ZMap默认设置使用`/etc/zmap/blacklist.conf`作为黑名单文件其中包含有本地的地址空间和预留的IP空间。通过编辑`/etc/zmap/zmap.conf`可以改变默认的配置。
### 速度限制与抽样 ###
<a name="ratelimiting"></a>
默认情况下ZMap将以您当前网络所能支持的最快速度扫描。以我们对于常用硬件的经验,这普遍是理论上Gbit以太网速度的95-98%这可能比您的上游提供商可处理的速度还要快。ZMap是不会自动的根据您的上游提供商来调整发送速率的。您可能需要手动的调整发送速率来减少丢包和错误结果。
默认情况下ZMap将以您当前网卡所能支持的最快速度扫描。以我们对于常用硬件的经验,这通常是理论上Gbit以太网速度的95-98%这可能比您的上游提供商可处理的速度还要快。ZMap是不会自动的根据您的上游提供商来调整发送速率的。您可能需要手动的调整发送速率来减少丢包和错误结果。
**-r, --rate=pps**
@ -530,9 +538,9 @@ ZMap同时支持对网络前缀做黑名单和白名单。如果ZMap不加黑名
**-B, --bandwidth=bps**
设置发送速率以比特/秒(支持G,M和K后缀)。也同样作用于--rate的参数。
设置发送速率以比特/秒(支持G,M和K后缀)。这会覆盖--rate参数。
ZMap同样支持对IPv4地址空间进行指定最大目标数和/或最长运行时间的随机采样。由于针对主机的扫描是通过随机排序生成的实例限制扫描的主机个数为N就会随机抽选N个主机。命令选项如下
ZMap同样支持对IPv4地址空间进行指定最大目标数和/或最长运行时间的随机采样。由于每次对主机的扫描是通过随机排序生成的限制扫描的主机个数为N就会随机抽选N个主机。命令选项如下
**-n, --max-targets=n**
@ -540,7 +548,7 @@ ZMap同样支持对IPv4地址空间进行指定最大目标数和/或最长运
**-N, --max-results=n**
结果上限数量(累积收到这么多结果后出)
结果上限数量(累积收到这么多结果后退出)
**-t, --max-runtime=s**
@ -554,48 +562,46 @@ ZMap同样支持对IPv4地址空间进行指定最大目标数和/或最长运
zmap -p 443 -s 3 -n 1000000 -o results
为了确定哪一百万主机将要被扫描,您可以执行预扫,只印数据包而非发送,并非真的实施扫描。
为了确定哪一百万台主机将要被扫描,您可以执行预扫:只打印数据包而不真正发送,并非真的实施扫描。
zmap -p 443 -s 3 -n 1000000 --dryrun | grep daddr
| awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
### 发送多个数据包 ###
ZMap支持想每个主机发送多个扫描。增加这个数量既增加了扫描时间又增加了到达的主机数量。然而我们发现增加扫描时间每个额外扫描的增加近100远远大于到达的主机数量每个额外扫描的增加近1
ZMap支持向每个主机发送多个探测。增加这个数量既增加了扫描时间又增加了到达的主机数量。然而,我们发现,增加扫描时间每个额外扫描的增加近100远远大于到达的主机数量每个额外扫描的增加近1
**-P, --probes=n**
向每个IP发出的独立扫描个数(默认值=1
向每个IP发出的独立探测个数(默认值=1
----------
### 示例应用 ###
### 示例应用程序 ###
ZMap专为向大量主机发启连接并寻找那些正确响应而设计。然而我们意识到许多用户需要执行一些后续处理如执行应用程序级别的握手。例如用户在80端口实施TCP SYN扫描可能只是想要实施一个简单的GET请求还有用户扫描443端口可能是对TLS握手如何完成感兴趣。
ZMap专为向大量主机发起连接并寻找那些正确响应而设计。然而我们意识到许多用户需要执行一些后续处理如执行应用程序级别的握手。例如用户在80端口实施TCP SYN扫描也许想要实施一个简单的GET请求还有用户扫描443端口可能希望完成TLS握手。
#### Banner获取 ####
我们收录了一个示例程序banner-grab伴随ZMap使用可以让用户从监听状态的TCP服务器上接收到消息。Banner-grab连接到服务上任意的发送一个消息然后打印出收到的第一个消息。这个工具可以用来获取banners例如HTTP服务的回复的具体指令telnet登陆提示或SSH服务的字符串。
我们收录了一个示例程序banner-grab伴随ZMap使用可以让用户从监听状态的TCP服务器上接收到消息。Banner-grab连接到提供的服务上,发送一个可选的消息然后打印出收到的第一个消息。这个工具可以用来获取banner例如HTTP服务的回复的具体指令telnet登陆提示或SSH服务的字符串。
这个例子寻找了1000个监听80端口的服务器并向每个发送一个简单的GET请求存储他们的64位编码响应至http-banners.out
下面的例子寻找了1000个监听80端口的服务器并向每个发送一个简单的GET请求存储他们的64位编码响应至http-banners.out
$ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > out
如果想知道更多使用`banner-grab`的细节,可以参考`examples/banner-grab`中的README文件。
**注意!** ZMap和banner-grab如例子中同时运行可能会比较显著的影响对方的表现和精度。确保不让ZMap充满banner-grab-tcp的并发连接不然banner-grab将会落后于标准输入的读入导致屏蔽编写标准输出。我们推荐使用较慢扫描速率的ZMap同时提升banner-grab-tcp的并发性至3000以内注意 并发连接>1000需要您使用`ulimit -SHn 100000`和`ulimit -HHn 100000`来增加每个进程的最大文件描述)。当然,这些参数取决于您服务器的性能连接成功率hit-rate我们鼓励开发者在运行大型扫描之前先进行小样本的试验。
**注意!** ZMap和banner-grab如例子中同时运行可能会比较显著的影响对方的表现和精度。确保不让ZMap占满banner-grab-tcp的并发连接不然banner-grab将会落后于标准输入的读入导致阻塞ZMap的输出写入。我们推荐使用较慢扫描速率的ZMap同时提升banner-grab-tcp的并发性至3000以内注意 并发连接>1000需要您使用`ulimit -SHn 100000`和`ulimit -HHn 100000`来增加每个进程的最大文件描述符数量)。当然,这些参数取决于您服务器的性能连接成功率hit-rate我们鼓励开发者在运行大型扫描之前先进行小样本的试验。
#### 建立套接字 ####
我们也收录了另一种形式的banner-grab就是forge-socket 重复利用服务器发出的SYN-ACK连接并最终取得banner。在`banner-grab-tcp`中ZMap向每个服务器发送一个SYN并监听服务器发回的带有SYN+ACK的应答。运行ZMap主机的内核接受应答后发送RST因为有没有处于活动状态的连接与该包关联。程序banner-grab必须在这之后创建一个新的TCP连接到从服务器获取数据。
我们也收录了另一种形式的banner-grab就是forge-socket 重复利用服务器发出的SYN-ACK连接并最终取得banner。在`banner-grab-tcp`中ZMap向每个服务器发送一个SYN并监听服务器发回的带有SYN+ACK的应答。运行ZMap主机的内核接受应答后发送RST这样就没有与该包关联活动连接。程序banner-grab必须在这之后创建一个新的TCP连接到从服务器获取数据。
在forge-socket中我们以同样的名字利用内核模块,使我们可以创建任意参数的TCP连接。这样可以抑制内核的RST包且通过创建套接字取代它可以重用SYN+ACK的参数通过这个套接字收发数据和我们平时使用的连接套接字并没有什么不同。
在forge-socket中我们利用内核中同名的模块使我们可以创建任意参数的TCP连接。可以通过抑制内核的RST包并重用SYN+ACK的参数取代该包而创建套接字,通过这个套接字收发数据和我们平时使用的连接套接字并没有什么不同。
要使用forge-socket您需要forge-socket内核模块从[github][1]上可以获得。您需要git clone `git@github.com:ewust/forge_socket.git`至ZMap源码根目录然后cd进入forge_socket 目录运行make。以root身份安装带有`insmod forge_socket.ko` 的内核模块。
要使用forge-socket您需要forge-socket内核模块从[github][1]上可以获得。您需要`git clone git@github.com:ewust/forge_socket.git`至ZMap源码根目录然后cd进入forge_socket目录运行make。以root身份运行`insmod forge_socket.ko` 来安装该内核模块。
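把上述步骤整理成命令大致如下假设当前目录为ZMap源码根目录
$ git clone git@github.com:ewust/forge_socket.git
$ cd forge_socket
$ make
# insmod forge_socket.ko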
您也需要告知内核不要发送RST包。一个简单的在全系统禁用RST包的方法是**iptables**。以root身份运行`iptables -A OUTPUT -p tcp -m tcp --tcp-flgas RST,RST RST,RST -j DROP`即可,当然您也可以加上一项--dport X将禁用局限于所扫描的端口X上。扫描完成后移除这项设置以root身份运行`iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`即可。
您也需要告知内核不要发送RST包。一个简单的在全系统禁用RST包的方法是使用**iptables**。以root身份运行`iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`即可,当然您也可以加上一项`--dport X`将禁用局限于所扫描的端口X上。扫描完成后移除这项设置以root身份运行`iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP`即可。
现在应该可以建立forge-socket的ZMap示例程序了。运行需要使用**extended_file**ZMap输出模块
现在应该可以建立forge-socket的ZMap示例程序了。运行需要使用**extended_file**ZMap[输出模块](#outputmodule)
$ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
./forge-socket -c 500 -d ./http-req > ./http-banners.out
@ -605,8 +611,9 @@ ZMap专为向大量主机发启连接并寻找那些正确响应而设计。然
----------
### 编写探测和输出模块 ###
<a name="extending"></a>
ZMap可以通过**probe modules**扩展支持不同类型的扫描,通过**output modules**追加不同类型的输出结果。注册过的探测和输出模块可以在命令行中列出:
ZMap可以通过**探测模块**来扩展支持不同类型的扫描,通过**输出模块**增加不同类型的输出结果。注册过的探测和输出模块可以在命令行中列出:
**--list-probe-modules**
@ -617,95 +624,95 @@ ZMap可以通过**probe modules**扩展支持不同类型的扫描,通过**out
列出安装过的输出模块
#### 输出模块 ####
<a name="outputmodule"></a>
ZMap的输出和输出后处理可以通过执行和注册扫描的**output modules**来扩展。输出模块在接收每一个应答包时都会收到一个回调。然而提供的默认模块仅提供简单的输出,这些模块同样支持扩展扫描后处理例如重复跟踪或输出AS号码来代替IP地址
ZMap的输出和输出后处理可以通过实现和注册扫描器的**输出模块**来扩展。输出模块在接收每一个应答包时都会收到一个回调。然而默认提供的模块仅提供简单的输出,这些模块同样支持更多的输出后处理例如重复跟踪或输出AS号码来代替IP地址
通过定义一个新的output_module机构体来创建输出模块,并在[output_modules.c][2]中注册:
通过定义一个新的output_module结构来创建输出模块,并在[output_modules.c][2]中注册:
typedef struct output_module {
const char *name; // 在命令行如何引输出模块
const char *name; // 在命令行如何引用输出模块
unsigned update_interval; // 以秒为单位的更新间隔
output_init_cb init; // 在扫描初始化的时候调用
output_update_cb start; // 在开始的扫描的时候调用
output_init_cb init; // 在扫描初始化的时候调用
output_update_cb start; // 在扫描器开始的时候调用
output_update_cb update; // 每次更新间隔调用,秒为单位
output_update_cb close; // 扫描终止后调用
output_packet_cb process_ip; // 接收到应答时调用
const char *helptext; // 会在--list-output-modules时打印在屏幕
const char *helptext; // 会在--list-output-modules时打印在屏幕上
} output_module_t;
输出模块必须有名称,通过名称可以在命令行、普遍实施的`success_ip`和常见的`other_ip`回调中使用模块。process_ip的回调由每个收到的或经由**probe module**过滤的应答包调用。应答是否被认定为成功并不确定(比如,可以是一个TCP的RST。这些回调必须定义匹配`output_packet_cb`定义的函数:
输出模块必须有名称,通过名称可以在命令行调用,并且通常会实现`success_ip`和常见的`other_ip`回调。process_ip回调会在收到每个经**探测模块**过滤的应答包时被调用。应答是否被认定为成功并不确定比如可以是一个TCP的RST。这些回调必须定义匹配`output_packet_cb`定义的函数:
int (*output_packet_cb) (
ipaddr_n_t saddr, // network-order格式的扫描主机IP地址
ipaddr_n_t daddr, // network-order格式的目的IP地址
ipaddr_n_t saddr, // 网络字节格式的发起扫描主机IP地址
ipaddr_n_t daddr, // 网络字节格式的目的IP地址
const char* response_type, // 发送模块的数据包分类
int is_repeat, // {0: 主机的第一个应答, 1: 后续的应答}
int in_cooldown, // {0: 非冷却状态, 1: 扫描处于冷却中}
int in_cooldown, // {0: 非冷却状态, 1: 扫描处于冷却中}
const u_char* packet, // 指向结构体iphdr中IP包的指针
size_t packet_len // 包的长度以字节为单位
const u_char* packet, // 指向IP包的iphdr结构体的指针
size_t packet_len // 包的长度以字节为单位
);
输出模块还可以通过注册回调执行在扫描初始化的时候(诸如打开输出文件的任务),扫描开始阶段(诸如记录黑名单的任务),在常规间隔实施(诸如程序升级的任务)在关闭的时候(诸如关掉所有打开的文件描述符。这些回调提供完整的扫描配置入口和实时状态:
输出模块还可以通过注册回调执行在扫描初始化的时候(诸如打开输出文件的任务)、在扫描开始阶段(诸如记录黑名单的任务)、在扫描的常规间隔(诸如状态更新的任务)、在关闭的时候(诸如关掉所有打开的文件描述符)。提供的这些回调可以完整的访问扫描配置和当前状态:
int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);
定义在[output_modules.h][3]中。在[src/output_modules/module_csv.c][4]中有可用示例。
这些定义在[output_modules.h][3]中。在[src/output_modules/module_csv.c][4]中有可用示例。
#### 探测模块 ####
<a name="probemodule"></a>
数据包由探测模块构造,由此可以创建抽象包并对应答分类。ZMap默认拥有两个扫描模块`tcp_synscan`和`icmp_echoscan`。默认情况下ZMap使用`tcp_synscan`来发送TCP SYN包并对每个主机的并对每个主机的响应分类如打开时收到SYN+ACK或关闭时收到RST。ZMap允许开发者编写自己的ZMap探测模块使用如下的API
数据包由**探测模块**构造,它可以创建各种包和不同类型的响应。ZMap默认拥有两个扫描模块`tcp_synscan`和`icmp_echoscan`。默认情况下ZMap使用`tcp_synscan`来发送TCP SYN包并对每个主机的响应分类如打开时收到SYN+ACK或关闭时收到RST。ZMap允许开发者编写自己的ZMap探测模块使用如下的API
任何类型的扫描的实施都需要在`send_module_t`结构体内开发和注册必要的回调
任何类型的扫描都必须通过开发和注册`send_module_t`结构中的回调来实现
typedef struct probe_module {
const char *name; // 如何在命令行调用扫描
size_t packet_length; // 探测包有多长(必须是静态的)
const char *pcap_filter; // 对收到的响应实施PCAP过滤
size_t pcap_snaplen; // maximum number of bytes for libpcap to capture
uint8_t port_args; // 设为1如果需要使用ZMap的--target-port
// 用户指定
size_t pcap_snaplen; // libpcap 捕获的最大字节数
uint8_t port_args; // 设为1如果ZMap需要用户指定--target-port
probe_global_init_cb global_initialize; // 在扫描初始化会时被调用一次
probe_thread_init_cb thread_initialize; // 每个包缓存区的线程中被调用一次
probe_make_packet_cb make_packet; // 每个主机更新包的时候被调用一次
probe_validate_packet_cb validate_packet; // 每收到一个包被调用一次,
// 如果包无效返回0
// 非零则覆盖
// 非零则有效
probe_print_packet_cb print_packet; // 如果在dry-run模式下被每个包都调用
probe_classify_packet_cb process_packet; // 由区分响应的接收器调用
probe_print_packet_cb print_packet; // 如果在预扫模式下被每个包都调用
probe_classify_packet_cb process_packet; // 由区分响应的接收器调用
probe_close_cb close; // 扫描终止后被调用
fielddef_t *fields // 该模块指定的区域的定义
int numfields // 区域的数量
fielddef_t *fields // 该模块指定的字段的定义
int numfields // 字段的数量
} probe_module_t;
在扫描操作初始化时会调用一次`global_initialize`,可以用来实施一些必要的全局配置和初始化操作。然而,`global_initialize`并不能访问报缓冲区,那里由线程指定。用以代替的,`thread_initialize`在每个发送线程初始化的时候被调用,提供对于缓冲区的访问,可以用来构建探测包和全局的源和目的值。此回调应用于构建宿主不可知分组结构甚至只有特定值目的主机和校验和需要随着每个主机更新。例如以太网头部信息在交换时不会变更减去校验和是由NIC硬件计算的因此可以事先定义以减少扫描时间开销。
在扫描操作初始化时会调用一次`global_initialize`,可以用来实施一些必要的全局配置和初始化操作。然而,`global_initialize`并不能访问包缓冲区,那里是线程特定的。代替的,`thread_initialize`在每个发送线程初始化的时候被调用,提供对于缓冲区的访问,可以用来构建探测包和全局的源和目的值。此回调应用于构建主机不可知的包结构甚至只有特定值目的主机和校验和需要随着每个主机更新。例如以太网头部信息在交换时不会变更减去校验和是由NIC硬件计算的因此可以事先定义以减少扫描时间开销。
调用回调参数`make_packet是为了让被扫描的主机允许**probe module**更新主机指定的值同时提供IP地址、一个非透明的验证字符串和探测数目如下所示。探测模块负责在探测中放置尽可能多的验证字符串以至于当服务器返回的应答为空时探测模块也能验证它的当前状态。例如针对TCP SYN扫描tcp_synscan探测模块会使用TCP源端口和序列号的格式存储验证字符串。响应包SYN+ACKs)将包含预期的值包含目的端口和确认号。
对每个被扫描的主机都会调用`make_packet`回调,以便**探测模块**更新主机相关的值同时提供IP地址、一个不透明的验证字符串和探测数目如下所示。探测模块负责在探测包中尽可能多地放入验证字符串即便当服务器返回的应答为空时探测模块也能验证它的当前状态。例如针对TCP SYN扫描tcp_synscan探测模块会使用TCP源端口和序列号的格式存储验证字符串。响应包SYN+ACK将包含目的端口和确认号的预期值。
int make_packet(
void *packetbuf, // 包的缓冲区
ipaddr_n_t src_ip, // network-order格式源IP
ipaddr_n_t dst_ip, // network-order格式目的IP
uint32_t *validation, // 探测中的确认字符串
ipaddr_n_t src_ip, // 网络字节格式源IP
ipaddr_n_t dst_ip, // 网络字节格式目的IP
uint32_t *validation, // 探测中的有效字符串
int probe_num // 如果向每个主机发送多重探测,
// 该值为对于主机我们
// 正在实施的探测数目
// 该值为我们对于主机
// 正在发送的探测数目
);
扫描模块也应该定义`pcap_filter`、`validate_packet`和`process_packet`。只有符合PCAP过滤的包才会被扫描。举个例子在一个TCP SYN扫描的情况下我们只想要调查TCP SYN / ACK或RST TCP数据包并利用类似`tcp && tcp[13] & 4 != 0 || tcp[13] == 18`的过滤方法。`validate_packet`函数将会被每个满足PCAP过滤条件的包调用。如果验证返回的值非零将会调用`process_packet`函数,并使用包中被定义成的`fields`字段和数据填充字段集。如下代码为TCP synscan探测模块处理了一个数据包。
扫描模块也应该定义`pcap_filter`、`validate_packet`和`process_packet`。只有符合PCAP过滤的包才会被扫描。举个例子在一个TCP SYN扫描的情况下我们只想要调查TCP SYN / ACK或RST TCP数据包并利用类似`tcp && tcp[13] & 4 != 0 || tcp[13] == 18`的过滤方法。`validate_packet`函数将会被每个满足PCAP过滤条件的包调用。如果验证返回的值非零将会调用`process_packet`函数,并使用`fields`定义的字段和包中的数据填充字段集。举个例子,如下代码为TCP synscan探测模块处理了一个数据包。
void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
{
@ -733,7 +740,7 @@ ZMap的输出和输出后处理可以通过执行和注册扫描的**output modu
via: https://zmap.io/documentation.html
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -1,28 +1,29 @@
如何在Fedora 22上面配置Apache的Docker容器
=============================================================================
在这篇文章中,我们将会学习关于Docker的一些知识如何使用Docker部署Apache httpd服务并且共享到Docker Hub上面去。首先我们学习怎样拉取和使用Docker Hub里面的镜像然后交互式地安装Apache到一个Fedora 22的镜像里去之后我们将会学习如何用一个Dockerfile文件来制作一个镜像以一种更快更优雅的方式。最后我们会在Docker Hub上公开我们创建地镜像这样以后任何人都可以下载并使用它。
### 安装Docker运行hello world ###
在这篇文章中我们将会学习关于Docker的一些知识如何使用Docker部署Apache httpd服务并且共享到Docker Hub上面去。首先我们学习怎样拉取和使用Docker Hub里面的镜像然后在一个Fedora 22的镜像上交互式地安装Apache之后我们将会学习如何用一个Dockerfile文件来以一种更快更优雅的方式制作一个镜像。最后我们将我们创建的镜像发布到Docker Hub上这样以后任何人都可以下载并使用它。
### 安装并初体验Docker ###
**要求**
运行Docker至少需要满足这些:
运行Docker至少需要满足这些:
- 你需要一个64位的内核版本3.10或者更高
- Iptables 1.4 - Docker会用来做网络配置如网络地址转换NAT
- Iptables 1.4 - Docker会用来做网络配置如网络地址转换NAT
- Git 1.7 - Docker会使用Git来与仓库交流如Docker Hub
- ps - 在大多数环境中这个工具都存在在procps包里有提供
- root - 防止一般用户可以通过TCP或者其他方式运行Docker为了简化我们会假定你就是root
- root - 尽管一般用户可以通过TCP或者其他方式来运行Docker但是为了简化我们会假定你就是root
### 使用dnf安装docker ###
#### 使用dnf安装docker ####
以下的命令会安装Docker
dnf update && dnf install docker
**注意**在Fedora 22里你仍然可以使用Yum命令但是被DNF取代了而且在纯净安装时不可用了。
**注意**在Fedora 22里你仍然可以使用Yum命令但是被DNF取代了而且在纯净安装时不可用了。
### 检查安装 ###
#### 检查安装 ####
我们将要使用的第一个命令是docker info这会输出很多信息给你
@ -32,25 +33,29 @@
docker version
### 启动Dcoker为守护进程 ###
#### 以守护进程方式启动Docker ####
你应该启动一个docker实例然后它会处理我们的请求。
docker -d
现在我们设置 docker 随系统启动,以便每次重启后不需要再运行上述命令。
chkconfig docker on
让我们用Busybox来打印hello world
docker run -t busybox /bin/echo "hello world"
这个命令里我们告诉Docker执行 /bin/echo "hello world"在Busybox镜像的一个实例/容器里。Busybox是一个小型的POSIX环境将许多小工具都结合到了一个单独的可执行程序里。
这个命令里我们告诉Docker在Busybox镜像的一个实例/容器里执行 /bin/echo "hello world"。Busybox是一个小型的POSIX环境将许多小工具都结合到了一个单独的可执行程序里。
如果Docker不能在你的系统里找到本地的Busybox镜像它就会自动从Docker Hub里拉取镜像正如下面的截图所示
![Hello world with Busybox](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-hello-world-busybox-complete.png)
Hello world with Busybox
*Hello world with Busybox*
再次尝试相同的命令这次由于Docker已经有了本地的Busybox镜像所有你将会看到的就是echo的输出
再次尝试相同的命令这次由于Docker已经有了本地的Busybox镜像你将会看到的全部就是echo的输出
docker run -t busybox /bin/echo "hello world"
@ -66,31 +71,31 @@ Hello world with Busybox
docker pull fedora:22
一个容器在后台运行:
启动一个容器在后台运行:
docker run -d -t fedora:22 /bin/bash
列出正在运行地容器,并用名字标识,如下
列出正在运行的容器及其名字标识,如下
docker ps
![listing with docker ps and attaching with docker attach](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-ps-with-docker-attach-highlight.png)
使用docker ps列出并使用docker attach进入一个容器里
*使用docker ps列出并使用docker attach进入一个容器里*
angry_noble是docker分配给我们容器的名字所以我们来上去:
angry_noble是docker分配给我们容器的名字所以我们来连接上去:
docker attach angry_noble
注意:每次你一个容器,就会被给与一个新的名字,如果你的容器需要一个固定的名字,你应该在 docker run 命令里使用 -name 参数。
注意:每次你启动一个容器,就会被赋予一个新的名字,如果你的容器需要一个固定的名字,你应该在 docker run 命令里使用 --name 参数。
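一个假设的示例(容器名 myfedora 仅为演示):
docker run -d -t --name myfedora fedora:22 /bin/bash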
### 安装Apache ###
#### 安装Apache ####
下面的命令会更新DNF的数据库下载安装Apachehttpd并清理dnf缓存使镜像尽量小。
dnf -y update && dnf -y install httpd && dnf -y clean all
配置Apache
**配置Apache**
我们需要修改httpd.conf的唯一地方就是ServerName这会使Apache停止抱怨
@ -98,7 +103,7 @@ angry_noble是docker分配给我们容器的名字所以我们来附上去
**设定环境**
为了使Apache运行为单机模式,你必须以环境变量的格式提供一些信息,并且你也需要在这些变量里的目录设定所以我们将会用一个小的shell脚本干这个工作当然也会启动Apache
为了使Apache运行为独立模式,你必须以环境变量的格式提供一些信息,并且你也需要创建这些变量里的目录所以我们将会用一个小的shell脚本干这个工作当然也会启动Apache
vi /etc/httpd/run_apache_foreground
@ -106,7 +111,7 @@ angry_noble是docker分配给我们容器的名字所以我们来附上去
#!/bin/bash
#set variables
#设置环境变量
APACHE_LOG_DIR="/var/log/httpd"
APACHE_LOCK_DIR="/var/lock/httpd"
APACHE_RUN_USER="apache"
@ -114,12 +119,12 @@ angry_noble是docker分配给我们容器的名字所以我们来附上去
APACHE_PID_FILE="/var/run/httpd/httpd.pid"
APACHE_RUN_DIR="/var/run/httpd"
#create directories if necessary
#如果需要的话,创建目录
if ! [ -d /var/run/httpd ]; then mkdir /var/run/httpd;fi
if ! [ -d /var/log/httpd ]; then mkdir /var/log/httpd;fi
if ! [ -d /var/lock/httpd ]; then mkdir /var/lock/httpd;fi
#run Apache
#运行 Apache
httpd -D FOREGROUND
**另外**你也可以将这段代码粘贴到容器的shell里并运行:
@ -130,11 +135,11 @@ angry_noble是docker分配给我们容器的名字所以我们来附上去
**保存你的容器状态**
你的容器现在可以运行Apache是时候保存容器当前的状态为一个镜像以备你需要的时候使用。
你的容器现在准备好运行Apache是时候保存容器当前的状态为一个镜像以备你需要的时候使用。
为了离开容器环境,你必须顺序按下 **Ctrl+p** 和 **Ctrl+q**如果你仅仅在shell执行exit你同时也会停止容器失去目前为止你做过的所有工作。
回到Docker主机使用 **docker commit** 加容器和你期望的仓库名字/标签:
回到Docker主机使用 **docker commit** 及容器名和你想要的仓库名字/标签:
docker commit angry_noble gaiada/apache
@ -144,17 +149,15 @@ angry_noble是docker分配给我们容器的名字所以我们来附上去
**运行并测试你的镜像**
最后,从你的新镜像起一个容器并且重定向80端口到容器:
最后,从你的新镜像启动一个容器并且重定向80端口到该容器:
docker run -p 80:80 -d -t gaiada/apache /etc/httpd/run_apache_foreground
到目前你正在你的容器里运行Apache打开你的浏览器访问该服务在[http://localhost][2]你将会看到如下Apache默认的页面
![Apache default page running from Docker container](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-apache-running.png)
在容器里运行的Apache默认页面
*在容器里运行的Apache默认页面*
### 使用Dockerfile Docker化Apache ###
@ -190,21 +193,14 @@ angry_noble是docker分配给我们容器的名字所以我们来附上去
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
我们一起来看看Dockerfile里面有什么
**FROM** - 这告诉docker我们将要使用Fedora 22作为基础镜像
**MAINTAINER** 和 **LABLE** - 这些命令对镜像没有直接作用,属于标记信息
**RUN** - 自动完成我们之前交互式做的工作安装Apache新建目录并编辑httpd.conf
**ENV** - 设置环境变量现在我们再不需要run_apache_foreground脚本
**EXPOSE** - 暴露80端口给外网
**CMD** - 设置默认的命令启动httpd服务这样我们就不需要每次起一个新的容器都重复这个工作
- **FROM** - 这告诉docker我们将要使用Fedora 22作为基础镜像
- **MAINTAINER** 和 **LABEL** - 这些命令对镜像没有直接作用,属于标记信息
- **RUN** - 自动完成我们之前交互式做的工作安装Apache新建目录并编辑httpd.conf
- **ENV** - 设置环境变量这次我们不再需要run_apache_foreground脚本
- **EXPOSE** - 暴露80端口给外网
- **CMD** - 设置默认的命令启动httpd服务这样我们就不需要每次起一个新的容器都重复这个工作
**建立该镜像**
@ -214,7 +210,7 @@ angry_noble是docker分配给我们容器的名字所以我们来附上去
![docker build complete](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-build-complete.png)
docker完成创建
*docker完成创建*
使用 **docker images** 列出本地镜像,查看是否存在你新建的镜像:
@ -226,7 +222,7 @@ docker完成创建
这就是Dockerfile的工作使用这项功能会使得事情更加容易快速并且可重复生成。
### 公开你的镜像 ###
### 发布你的镜像 ###
直到现在你仅仅是从Docker Hub拉取了镜像但是你也可以推送你的镜像以后需要也可以再次拉取他们。实际上其他人也可以下载你的镜像在他们的系统中使用它而不需要改变任何东西。现在我们将要学习如何使我们的镜像对世界上的其他人可用。
@ -236,7 +232,7 @@ docker完成创建
![Docker Hub signup page](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-hub-signup.png)
Docker Hub 注册页面
*Docker Hub 注册页面*
**登录**
@ -256,11 +252,11 @@ Docker Hub 注册页面
![Docker push Apache image complete](http://blog.linoxide.com/wp-content/uploads/2015/06/docker-pushing-apachedf-complete.png)
Docker推送Apache镜像完成
*Docker推送Apache镜像完成*
### 结论 ###
现在你知道如何Docker化Apache试一试包含其他一些组PerlPHPproxyHTTPS或者任何你需要的东西。我希望你们这些家伙喜欢她并推送你们自己的镜像到Docker Hub。
现在你知道如何Docker化Apache了试一试包含其他一些组件PerlPHPproxyHTTPS或者任何你需要的东西。我希望你们喜欢它并推送你们自己的镜像到Docker Hub。
--------------------------------------------------------------------------------
@ -268,7 +264,7 @@ via: http://linoxide.com/linux-how-to/configure-apache-containers-docker-fedora-
作者:[Carlos Alberto][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,23 +1,22 @@
Linux_Logo 输出彩色 ANSI Linux 发行版徽标的命令行工具
================================================================================
linuxlogo 或 linux_logo 是一款在Linux命令行下生成附带系统信息的彩色 ANSI 发行版徽标的工具。
linuxlogo或叫 linux_logo是一款在Linux命令行下用彩色 ANSI 代码生成附带有系统信息的发行版徽标的工具。
![Linux_Logo 输出彩色 ANSI Linux 发行版徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Linux_Logo.png)
Linux_Logo 输出彩色 ANSI Linux 发行版徽标
*Linux_Logo 输出彩色 ANSI Linux 发行版徽标*
这个小工具可以从 /proc 文件系统中获取系统信息并可以显示包括主机发行版在内的其他很多发行版的徽标。
这个小工具可以从 /proc 文件系统中获取系统信息并可以显示包括主机上安装的发行版在内的很多发行版的徽标。
与徽标一同显示的系统信息包括 Linux 内核版本最近一次编译Linux内核的时间处理器/核心数量,速度,制造商,以及哪一代处理器。它还能显示总共的物理内存大小。
与徽标一同显示的系统信息包括 Linux 内核版本、最近一次编译 Linux 内核的时间、处理器/核心数量、速度、制造商,以及哪一代处理器。它还能显示总共的物理内存大小。
值得一提的是screenfetch是一个拥有类似功能的工具它也能显示发行版徽标同时还提供更加详细美观的系统信息。我们之前已经介绍过这个工具你可以参考一下链接
- [ScreenFetch Generates Linux System Information][1]
无独有偶screenfetch是一个拥有类似功能的工具它也能显示发行版徽标同时还提供更加详细美观的系统信息。我们之前已经介绍过这个工具你可以参考一下链接
- [screenFetch: 命令行信息截图工具][1]
linux_logo 和 Screenfetch 并不能相提并论。尽管 screenfetch 的输出较为整洁并提供更多细节, linux_logo 则提供了更多的彩色 ANSI 图标, 并且提供了格式化输出的选项。
linux\_logo 和 Screenfetch 并不完全一样。尽管 screenfetch 的输出较为整洁并提供更多细节, linux\_logo 则提供了更多的彩色 ANSI 图标, 并且提供了格式化输出的选项。
linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统中因此需要安装图形界面 X11 或 X 系统。这个软件使用GNU 2.0协议。
linux\_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统中因此需要安装图形界面 X11 或 X 系统LCTT 译注:此处应是错误的。按说不需要任何图形界面支持,并且译者从其官方站 http://www.deater.net/weave/vmwprod/linux_logo 也没找到任何相关 X11的信息。这个软件使用GNU 2.0协议。
本文中,我们将使用以下环境测试 linux_logo 工具。
@ -26,7 +25,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
### 在 Linux 中安装 Linux Logo工具 ###
**1. linuxlogo软件包 ( 5.11 稳定版) 可通过如下方式使用 apt, yum,或 dnf 在所有发行版中使用默认的软件仓库进行安装**
**1. linuxlogo软件包 ( 5.11 稳定版) 可通过如下方式使用 apt, yum 或 dnf 在所有发行版中使用默认的软件仓库进行安装**
# apt-get install linux_logo [用于基于 Apt 的系统] 译者注Ubuntu中该软件包名为linuxlogo
# yum install linux_logo [用于基于 Yum 的系统]
@ -42,7 +41,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
![获取默认系统徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Get-Default-OS-Logo.png)
获取默认系统徽标
*获取默认系统徽标*
**3. 使用 `[-a]` 选项可以输出没有颜色的徽标。当在黑白终端里使用 linux_logo 时,这个选项会很有用。**
@ -50,7 +49,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
![黑白 Linux 徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Black-and-White-Linux-Logo.png)
黑白 Linux 徽标
*黑白 Linux 徽标*
**4. 使用 `[-l]` 选项可以仅输出徽标而不包含系统信息。**
@ -58,7 +57,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
![输出发行版徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Print-Distribution-Logo.png)
输出发行版徽标
*输出发行版徽标*
**5. `[-u]` 选项可以显示系统运行时间。**
@ -66,7 +65,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
![输出系统运行时间](http://www.tecmint.com/wp-content/uploads/2015/06/Print-System-Uptime.png)
输出系统运行时间
*输出系统运行时间*
**6. 如果你对系统平均负载感兴趣,可以使用 `[-y]` 选项。你可以同时使用多个选项。**
@ -74,7 +73,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
![输出系统平均负载](http://www.tecmint.com/wp-content/uploads/2015/06/Print-System-Load-Average.png)
输出系统平均负载
*输出系统平均负载*
如需查看更多选项并获取相关帮助,你可以使用如下命令。
@ -82,7 +81,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
![Linuxlogo 选项及帮助](http://www.tecmint.com/wp-content/uploads/2015/06/linuxlogo-options.png)
Linuxlogo选项及帮助
*Linuxlogo选项及帮助*
**7. 此工具内置了很多不同发行版的徽标。你可以使用 `[-L list]` 选项查看在这些徽标的列表。**
@ -90,7 +89,7 @@ Linuxlogo选项及帮助
![Linux 徽标列表](http://www.tecmint.com/wp-content/uploads/2015/06/List-of-Linux-Logos.png)
Linux 徽标列表
*Linux 徽标列表*
如果你想输出这个列表中的任意徽标,可以使用 `-L NUM``-L NAME` 来显示想要选中的图标。
@ -105,7 +104,7 @@ Linux 徽标列表
![输出 AIX 图标](http://www.tecmint.com/wp-content/uploads/2015/06/Print-AIX-Logo.png)
输出 AIX 图标
*输出 AIX 图标*
**注**: 命令中的使用 `-L 1` 是因为 AIX 徽标在列表中的编号是1而使用 `-L aix` 则是因为 AIX 徽标在列表中的名称为 aix
@ -116,13 +115,13 @@ Linux 徽标列表
![各种 Linux 徽标](http://www.tecmint.com/wp-content/uploads/2015/06/Various-Linux-Logos.png)
各种 Linux 徽标
*各种 Linux 徽标*
你可以通过徽标对应的编号或名字使用任意徽标
你可以通过徽标对应的编号或名字使用任意徽标
### 一些使用 Linux_logo 的建议和提示 ###
**8. 你可以在登录界面输出你的 Linux 发行版徽标。要输出默认徽标,你可以在 ` ~/.bashrc`` 文件的最后添加以下内容。**
**8. 你可以在登录界面输出你的 Linux 发行版徽标。要输出默认徽标,你可以在 `~/.bashrc` 文件的最后添加以下内容。**
if [ -f /usr/bin/linux_logo ]; then linux_logo; fi
@ -132,15 +131,15 @@ Linux 徽标列表
![Print Logo on User Login](http://www.tecmint.com/wp-content/uploads/2015/06/Print-Logo-on-Login.png)
在用户登录时输出徽标
*在用户登录时输出徽标*
其实你也可以在登录后输出任意图标,只需加入以下内容
其实你也可以在登录后输出任意图标,只需加入以下内容
if [ -f /usr/bin/linux_logo ]; then linux_logo -L num; fi
**重要**: 不要忘了将 num 替换成你想使用的图标。
**10. You can also print your own logo by simply specifying the location of the logo as shown below.**
**10. 你也能直接指定徽标所在的位置来显示你自己的徽标。**
# linux_logo -D /path/to/ASCII/logo
@ -152,12 +151,11 @@ Linux 徽标列表
# /usr/local/bin/linux_logo -a > /etc/issue.net
**12. 创建一个 Penguin 端口 - 用于回应连接的端口。要创建 Penguin 端口, 则需在 /etc/services 文件中加入以下内容 **
**12. 创建一个企鹅penguin端口 - 一个在被连接时回应徽标的端口。要创建该端口, 则需在 /etc/services 文件中加入以下内容**
penguin 4444/tcp penguin
这里的 `4444` 是一个未被任何其他资源使用的空闲端口。你也可以使用其他端口。
你还需要在 /etc/inetd.conf中加入以下内容
这里的 `4444` 是一个未被任何其他资源使用的空闲端口。你也可以使用其他端口。你还需要在 /etc/inetd.conf中加入以下内容
penguin stream tcp nowait root /usr/local/bin/linux_logo
@ -165,6 +163,8 @@ Linux 徽标列表
# killall -HUP inetd
LCTT 译注:然后你就可以远程或本地连接到这个端口,并显示这个徽标了。)
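例如假设使用了上面的4444端口可以像下面这样连接本机来查看徽标
$ telnet localhost 4444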
linux_logo 还可以用作启动脚本来愚弄攻击者或对你朋友使用恶作剧。这是一个我经常在我的脚本中用来获取不同发行版输出的好工具。
试过一次后,你就不会忘记的。让我们知道你对这个工具的想法及它对你的作用吧。不要忘记评论、点赞或分享!
@ -174,10 +174,10 @@ linux_logo 还可以用做启动脚本来愚弄攻击者或对你朋友使用恶
via: http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
作者:[Avishek Kumar][a]
译者:[KevSJ](https://github.com/KevSJ)
校对:[校对者ID](https://github.com/校对者ID)
译者:[KevinSJ](https://github.com/KevinSJ)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[1]:https://linux.cn/article-1947-1.html


@ -1,95 +1,92 @@
用这些专用工具让你截图更简单
================================================================================
"一图胜过千万句",这句二十世纪早期在美国应运而生的名言,说的是一张单一的静止图片所蕴含的信息足以匹敌大量的描述性文字。本质上说,图片所传递的信息量的确是比文字更有效更高效。
“一图胜千言”,这句二十世纪早期在美国应运而生的名言,说的是一张单一的静止图片所蕴含的信息足以匹敌大量的描述性文字。本质上说,图片所传递的信息量的确是比文字更有效更高效。
截图(或抓帧)是一种捕捉自计算机所录制可视化设备输出的快照或图片,屏幕捕捉软件能让计算机获取到截图。此类软件有很多用处,因为一张图片能很好地说明计算机软件的操作,截图在软件开发过程和文档中扮演了一个很重要的角色。或者,如果你的电脑有了技术性问题,一张截图能让技术支持理解你碰到的这个问题。要写好计算机相关的文章、文档和教程,没有一款好的截图工具是几乎不可能的。如果你想在保存屏幕上任意一些零星的信息,特别是不方便打字时,截图也很有用。
截图(或抓帧)是一种捕捉自计算机的快照或图片,用来记录可视设备的输出。屏幕捕捉软件能从计算机中获取到截图。此类软件有很多用处,因为一张图片能很好地说明计算机软件的操作,截图在软件开发过程和文档中扮演了一个很重要的角色。或者,如果你的电脑有了技术性问题,一张截图能让技术支持理解你碰到的这个问题。要写好计算机相关的文章、文档和教程,没有一款好的截图工具是几乎不可能的。如果你想保存你放在屏幕上的一些零星的信息,特别是不方便打字时,截图也很有用。
在开源世界Linux有许多专注于截图功能的工具供选择基于图形的和控制台的都有。如果要说一个功能丰富的专用截图工具那就来看看Shutter吧。这款工具是小型开源工具的杰出代表当然也有其它的选择。
在开源世界Linux有许多专注于截图功能的工具供选择基于图形的和控制台的都有。如果要说一个功能丰富的专用截图工具看起来没有能超过Shutter的。这款工具是小型开源工具的杰出代表但是也有其它的不错替代品可以选择。
屏幕捕捉功能不仅仅只有专的工具提供GIMP和ImageMagick这两款主攻图像处理的工具也能提供像样的屏幕捕捉功能。
屏幕捕捉功能不仅仅只有专的工具提供GIMP和ImageMagick这两款主攻图像处理的工具也能提供像样的屏幕捕捉功能。
----------
### Shutter ###
![Shutter in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-Shutter1.png)
Shutter是一款功能丰富的截图软件。你可以给你的特殊区域、窗口、整个屏幕甚至是网站截图 - 在其中应用不用的效果,比如用高亮的点在上面绘图,然后上传至一个图片托管网站,一切尽在这个小窗口内。
Shutter是一款功能丰富的截图软件。你可以对特定区域、窗口、整个屏幕甚至是网站截图 - 并为其应用不同的效果,比如用高亮的点在上面绘图,然后上传至一个图片托管网站,一切尽在这个小窗口内。
包含特性:
- 截图范围:
- 一块特殊区域
- 一个特定区域
- 窗口
- 完整的桌面
- 脚本生成的网页
- 在截图中应用不同效果
- 热键
- 打印
- 直接截图或指定一个延迟时间
- 将截图保存至一个指定目录并用一个简便方法重命名它(用特殊的通配符)
- 完成集成在GNOME桌面中(TrayIcon等等)
- 当你截了一张图并以%设置了尺寸时,直接生成缩略图
- Shutter会话选项
- 会话中保持所有截图的痕迹
- 直接截图或指定延迟时间截图
- 将截图保存至一个指定目录并用一个简便方法重命名它(用指定通配符)
- 完全集成在GNOME桌面中TrayIcon等等
- 当你截了一张图并根据尺寸的百分比直接生成缩略图
- Shutter会话
- 跟踪会话中所有截图
- 复制截图至剪贴板
- 打印截图
- 删除截图
- 重命名文件
- 直接上传你的文件至图像托管网站(比如http://ubuntu-pics.de),取回所有需要的图像并将它们与其他人分享
- 直接上传你的文件至图像托管网站(比如 http://ubuntu-pics.de ),得到链接并将它们与其他人分享
- 用内置的绘画工具直接编辑截图
---
- 主页: [shutter-project.org][1]
- 开发者: Mario Kemper和Shutter团队
- 许可证: GNU GPL v3
- 版本号: 0.93.1
----------
### HotShots ###
![HotShots in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-HotShots.png)
HotShots是一款捕捉屏幕并能以各种图片格式保存的软件同时也能添加注释和图形数据(箭头、行、文本, ...)
HotShots是一款捕捉屏幕并能以各种图片格式保存的软件同时也能添加注释和图形数据(箭头、行、文本 ...
你也可以把你的作品上传到网上(FTP/一些web服务)HotShots是用Qt开发而成的。
你也可以把你的作品上传到网上FTP/一些web服务HotShots是用Qt开发而成的。
HotShots无法从Ubuntu的Software Center中获取不过用以下命令可以轻松地来安装它
sudo add-apt-repository ppa:ubuntuhandbook1/apps
sudo add-apt-repository ppa:ubuntuhandbook1/apps
sudo apt-get update
sudo apt-get install hotshots
包含特性:
- 简单易用
- 全功能使用
- 嵌入式编辑器
- 功能完整
- 内置编辑器
- 热键
- 内置放大功能
- 手和多屏捕捉
- 手动控制和多屏捕捉
- 支持输出格式Black & Whte (bw), Encapsulated PostScript (eps, epsf), Encapsulated PostScript Interchange (epsi), OpenEXR (exr), PC Paintbrush Exchange (pcx), Photoshop Document (psd), ras, rgb, rgba, Irix RGB (sgi), Truevision Targa (tga), eXperimental Computing Facility (xcf), Windows Bitmap (bmp), DirectDraw Surface (dds), Graphic Interchange Format (gif), Icon Image (ico), Joint Photographic Experts Group 2000 (jp2), Joint Photographic Experts Group (jpeg, jpg), Multiple-image Network Graphics (mng), Portable Pixmap (ppm), Scalable Vector Graphics (svg), svgz, Tagged Image File Format (tif, tiff), webp, X11 Bitmap (xbm), X11 Pixmap (xpm), and Khoros Visualization (xv)
- 国际化支持:巴斯克语、中文、捷克语、法语、加利西亚语、德语、希腊语、意大利语、日语、立陶宛语、波兰语、葡萄牙语、罗马尼亚语、俄罗斯语、塞尔维亚语、僧伽罗语、斯洛伐克语、西班牙语、土耳其语、乌克兰语和越南语
---
- 主页: [thehive.xbee.net][2]
- 开发者 xbee
- 许可证: GNU GPL v2
- 版本号: 2.2.0
----------
### ScreenCloud ###
![ScreenCloud in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-ScreenCloud.png)
ScreenCloud是一款易于使用的开源截图工具。
在这款软件中,用户可以用三个热键中的其中一个或只需点击ScreenCloud托盘图标就能进行截图用户也可以自行选择保存截图的地址。
在这款软件中,用户可以用三个热键之一或只需点击ScreenCloud托盘图标就能进行截图用户也可以自行选择保存截图的地址。
如果你选择上传你的截图到screencloud主页链接会自动复制到你的剪贴板上你能通过email或在一个聊天对话框里和你的朋友同事分享它他们肯定会点击这个链接来看你的截图的。
如果你选择上传你的截图到screencloud网站链接会自动复制到你的剪贴板上你能通过email或在一个聊天对话框里和你的朋友同事分享它他们肯定会点击这个链接来看你的截图的。
包含特性:
@ -106,18 +103,18 @@ ScreenCloud是一款易于使用的开源截图工具。
- 插件支持保存至DropboxImgur等等
- 支持上传至FTP和SFTP服务器
---
- 主页: [screencloud.net][3]
- 开发者: Olav S Thoresen
- 许可证: GNU GPL v2
- 版本号: 1.2.1
----------
### KSnapshot ###
![KSnapShot in action](http://www.linuxlinks.com/portal/content/reviews/Graphics/Screenshot-KSnapshot.png)
KSnapshot也是一款易于使用的截图工具它能给整个桌面、一个单一窗口、窗口的一部分或一块所选区域捕捉图像。,图像能以各种不用的格式保存。
KSnapshot也是一款易于使用的截图工具它能给整个桌面、单一窗口、窗口的一部分或一块所选区域捕捉图像。图像能以各种不同格式保存。
KSnapshot也允许用户用热键来进行截图。除了保存截图之外它也可以被复制到剪贴板或用任何与图像文件关联的程序打开。
@ -127,10 +124,12 @@ KSnapshot是KDE 4图形模块的一部分。
- 以多种格式保存截图
- 延迟截图
- 剔除窗口装饰图案
- 剔除窗口装饰(边框、菜单等)
- 复制截图至剪贴板
- 热键
- 能用它的D-Bus界面进行脚本化
- 能用它的D-Bus接口进行脚本化
---
- 主页: [www.kde.org][4]
- 开发者: KDE, Richard J. Moore, Aaron J. Seigo, Matthias Ettrich
@ -142,7 +141,7 @@ KSnapshot是KDE 4图形模块的一部分。
via: http://www.linuxlinks.com/article/2015062316235249/ScreenCapture.html
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,10 +1,12 @@
Linux有问必答-- 如何为在Linux中安装兄弟打印机
Linux有问必答如何在Linux中安装兄弟牌打印机
================================================================================
> **提问**: 我有一台兄弟HL-2270DW激光打印机我想从我的Linux机器上答应文档。我该如何在我的电脑上安装合适的驱动并使用它?
> **提问**: 我有一台兄弟HL-2270DW激光打印机我想从我的Linux机器上打印文档。我该如何在我的电脑上安装合适的驱动并使用它?
兄弟牌以买得起的[紧凑型激光打印机][1]而闻名。你可以用低于200美元的价格得到高质量的WiFi/双面打印激光打印机而且价格还在下降。最棒的是它们还提供良好的Linux支持因此你可以在Linux中下载并安装它们的打印机驱动。我在一年前买了台[HL-2270DW][2],我对它的性能和可靠性都很满意。
下面是如何在Linux中安装和配置兄弟打印机驱动。本篇教程中我会演示安装HL-2270DW激光打印机的USB驱动。首先通过USB线连接你的打印机到Linux上。
下面是如何在Linux中安装和配置兄弟打印机驱动。本篇教程中我会演示安装HL-2270DW激光打印机的USB驱动。
首先通过USB线连接你的打印机到Linux上。
### 准备 ###
@ -16,13 +18,13 @@ Linux有问必答-- 如何为在Linux中安装兄弟打印机
![](https://farm1.staticflickr.com/380/18535558583_cb43240f8a_c.jpg)
下一页你会找到你打印机的LPR驱动和CUPS包装器驱动。前者是命令行驱动后者允许你通过网页管理和配置你的打印机。尤其是基于CUPS的GUI本地、远程打印机维护非常有用。建议你安装这两个驱动。点击“Driver Install Tool”下载安装文件。
下一页你会找到你打印机的LPR驱动和CUPS包装器驱动。前者是命令行驱动后者允许你通过网页管理和配置你的打印机。尤其是基于CUPS的图形界面本地、远程打印机维护非常有用。建议你安装这两个驱动。点击“Driver Install Tool”下载安装文件。
![](https://farm1.staticflickr.com/329/19130013736_1850b0d61e_c.jpg)
运行安装文件之前你需要在64位的Linux系统上做另外一件事情。
因为兄弟打印机驱动是为32位的Linux系统开发的,因此你需要按照下面的方法安装32位的库。
因为兄弟打印机驱动是为32位的Linux系统开发的因此你需要按照下面的方法安装32位的库。
在早期的Debian(6.0或者更早期)或者Ubuntu11.04或者更早期),安装下面的包。
@ -54,7 +56,7 @@ Linux有问必答-- 如何为在Linux中安装兄弟打印机
![](https://farm1.staticflickr.com/292/18535599323_1a94f6dae5_b.jpg)
同意GPL协议直呼,接受接下来的任何默认问题。
同意GPL协议之后,对后续的问题都按默认选择接受即可。
![](https://farm1.staticflickr.com/526/19130014316_5835939501_b.jpg)
@ -68,7 +70,7 @@ Linux有问必答-- 如何为在Linux中安装兄弟打印机
$ sudo netstat -nap | grep 631
打开一个浏览器输入http://localhost:631。你会下面的打印机管理界面。
打开一个浏览器输入 http://localhost:631 。你会看到下面的打印机管理界面。
![](https://farm1.staticflickr.com/324/18968588688_202086fc72_c.jpg)
@ -98,7 +100,7 @@ via: http://ask.xmodulo.com/install-brother-printer-linux.html
作者:[Dan Nanni][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -6,7 +6,7 @@
当监控服务器发送一个关于 MySQL 服务器存储的报警时,恐慌就开始了 —— 就是说磁盘快要满了。
一番调查后你意识到大多数地盘空间被 InnoDB 的共享表空间 ibdata1 使用。而你已经启用了 [innodb_file_per_table][2],所以问题是:
一番调查后你意识到大多数磁盘空间被 InnoDB 的共享表空间 ibdata1 使用。而你已经启用了 [innodb\_file\_per\_table][2],所以问题是:
### ibdata1存了什么 ###
@ -17,7 +17,7 @@
- 双写缓冲区
- 撤销日志
其中的一些在 [Percona 服务器][3]上可以被配置来避免增长过大的。例如你可以通过 [innodb_ibuf_max_size][4] 设置最大变更缓冲区,或设置 [innodb_doublewrite_file][5] 来将双写缓冲区存储到一个分离的文件。
其中的一些在 [Percona 服务器][3]上可以被配置来避免增长过大的。例如你可以通过 [innodb\_ibuf\_max\_size][4] 设置最大变更缓冲区,或设置 [innodb\_doublewrite\_file][5] 来将双写缓冲区存储到一个分离的文件。
MySQL 5.6 版中你也可以创建外部的撤销表空间,所以它们可以放到自己的文件来替代存储到 ibdata1。可以看看这个[文档][6]。
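动手之前,可以先确认一下 ibdata1 当前占用的空间(路径请按你的数据目录调整):
$ du -h /var/lib/mysql/ibdata1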
@ -82,7 +82,7 @@ MySQL 5.6 版中你也可以创建外部的撤销表空间,所以它们可以
没有,目前还没有一个容易并且快速的方法。InnoDB 表空间从不收缩……参见这个[10 年之久的漏洞报告][10],最新更新来自詹姆斯·戴(谢谢):
当你删除一些行,这个页被标为已删除稍后重用,但是这个空间从不会被回收。唯一的方法是使用新的 ibdata1 启动数据库。要做这个你应该需要使用 mysqldump 做一个逻辑全备份,然后停止 MySQL 并删除所有数据库、ib_logfile*、ibdata1* 文件。当你再启动 MySQL 的时候将会创建一个新的共享表空间。然后恢复逻辑备份。
当你删除一些行,这个页被标为已删除稍后重用,但是这个空间从不会被回收。唯一的方法是使用新的 ibdata1 启动数据库。要做这个你应该需要使用 mysqldump 做一个逻辑全备份,然后停止 MySQL 并删除所有数据库、ib_logfile\*、ibdata1\* 文件。当你再启动 MySQL 的时候将会创建一个新的共享表空间。然后恢复逻辑备份。
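下面把这个过程整理成一个极简的示意(路径和文件名仅为演示,操作前务必另行做好完整备份):
$ mysqldump --all-databases --routines --events > full_dump.sql
$ sudo service mysql stop
$ sudo rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
$ # 按上文所述,各数据库的数据文件也需一并删除
$ sudo service mysql start
$ mysql < full_dump.sql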
### 总结 ###


@ -0,0 +1,163 @@
在 RHEL/CentOS 上为Web服务器设置 “XR”Crossroads 负载均衡器
================================================================================
Crossroads 是一个独立的服务是一个基于Linux和TCP服务的开源负载均衡和故障转移实用程序。它可用于HTTP、HTTPS、SSH、SMTP和DNS等。它还是一个多线程的工具只占用一份内存空间以此来提高负载均衡的性能。
首先来看看 XR 是如何工作的。XR 位于网络客户端和服务器之间,将客户端的请求分派到各台服务器上,以实现负载均衡。
如果一台服务器宕机XR 会转发客户端请求到另一个服务器,所以客户感觉不到停机时间。看看下面的图来了解什么样的情况下,我们要使用 XR 处理。
![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.jpg)
*安装 XR Crossroads 负载均衡器*
有两个 Web 服务器,一个网关服务器,我们安装和设置 XR 接收客户端请求,分发到服务器之间。
- XR Crossroads 网关服务器172.16.1.204
- Web 服务器01172.16.1.222
- Web 服务器02192.168.1.161
在上述情况下,我们网关服务器(即 XR Crossroads的IP地址是172.16.1.204webserver01 为172.16.1.222它监听8888端口webserver02 是192.168.1.161它监听端口5555。
现在,我们需要的是均衡所有的请求,通过 XR 网关从网上接收请求然后分发它到两个web服务器以达到负载均衡。
### 第1步在网关服务器上安装 XR Crossroads 负载均衡器 ###
**1. 不幸的是,没有为 crossroads 提供可用的 RPM 包,我们只能从源码安装。**
要编译 XR你必须在系统上安装 C++ 编译器和 GNU make 组件,才能继续无差错的安装。
# yum install gcc gcc-c++ make
接下来,去他们的官方网站([https://crossroads.e-tunity.com][1])下载此压缩包(即 crossroads-stable.tar.gz
或者,您可以使用 wget 去下载包然后解压在任何位置(如:/usr/src/),进入解压目录,并使用 “make install” 命令安装。
# wget https://crossroads.e-tunity.com/downloads/crossroads-stable.tar.gz
# tar -xvf crossroads-stable.tar.gz
# cd crossroads-2.74/
# make install
![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.png)
*安装 XR Crossroads 负载均衡器*
安装完成后,二进制文件安装在 /usr/sbin 目录下XR 的配置文件在 /etc 下名为 “xrctl.xml” 。
**2. 作为最后一个前提你需要两个web服务器。为了方便使用我在一台服务器中创建了两个 Python SimpleHTTPServer 实例。**
要了解如何设置一个 python SimpleHTTPServer请阅读我们此处的文章 [Create Two Web Servers Easily Using SimpleHTTPServer][2].
正如我所说的我们要使用两个web服务器webserver01 通过8888端口运行在172.16.1.222上webserver02 通过5555端口运行在192.168.1.161上。
![XR WebServer 01](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer01.jpg)
*XR WebServer 01*
![XR WebServer 02](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer02.jpg)
*XR WebServer 02*
### 第2步: 配置 XR Crossroads 负载均衡器 ###
**3. 这一步是最关键的。现在我们要做的就是配置`xrctl.xml`文件,让 XR 服务器接受来自互联网的请求并分发到 web 服务器之间。**
现在用 [vi/vim 编辑器][3]打开`xrctl.xml`文件。
# vim /etc/xrctl.xml
并作如下修改。
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system>
<uselogger>true</uselogger>
<logdir>/tmp</logdir>
</system>
<service>
<name>Tecmint</name>
<server>
<address>172.16.1.204:8080</address>
<type>tcp</type>
<webinterface>0:8010</webinterface>
<verbose>yes</verbose>
<clientreadtimeout>0</clientreadtimeout>
<clientwritetimeout>0</clientwritetimeout>
<backendreadtimeout>0</backendreadtimeout>
<backendwritetimeout>0</backendwritetimeout>
</server>
<backend>
<address>172.16.1.222:8888</address>
</backend>
<backend>
<address>192.168.1.161:5555</address>
</backend>
</service>
</configuration>
![Configure XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Configure-XR-Crossroads-Load-Balancer.jpg)
*配置 XR Crossroads 负载均衡器*
在这里,你可以看到在 xrctl.xml 中做了一个非常基本的 XR 配置。我定义了 XR 服务器是什么、XR 的后端服务和端口,以及 XR 的网页管理界面端口是什么。
**4. 现在,你需要通过以下命令来启动该 XR 守护进程。**
# xrctl start
# xrctl status
![Start XR Crossroads](http://www.tecmint.com/wp-content/uploads/2015/07/Start-XR-Crossroads.jpg)
*启动 XR Crossroads*
**5. 好的。现在是时候来检查该配置是否可以工作正常了。打开两个网页浏览器,输入 XR 服务器端口的 IP 地址,并查看输出。**
![Verify Web Server Load Balancing](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Web-Server-Load-Balancing.jpg)
*验证 Web 服务器负载均衡*
太棒了。它工作正常。是时候玩玩 XR 了。
**6. 现在可以登录到 XR Crossroads 仪表盘,看看我们已经配置的网络接口的端口。在网络接口输入你的 XR 服务器的 IP 地址和端口你将看到在 xrctl.xml 中的配置。**
http://172.16.1.204:8010
![XR Crossroads Dashboard](http://www.tecmint.com/wp-content/uploads/2015/07/XR-Crossroads-Dashboard.jpg)
*XR Crossroads 仪表盘*
看起来很像了。它容易理解,用户界面友好,易于使用。它在右上角显示每个服务器能容纳多少个连接,以及关于接收请求的附加细节。你也可以设置每个服务器承担的负载量、最大连接数和平均负载等。
最大的好处是,即使没有配置文件 xrctl.xml你也可以做到这一点。你唯一要做的就是运行以下命令它会做这项工作。
# xr --verbose --server tcp:172.16.1.204:8080 --backend 172.16.1.222:8888 --backend 192.168.1.161:5555
上面语法的详细说明:
- --verbose 将显示命令执行后的信息。
- --server 定义 XR 服务器(即接收请求的一端)。
- --backend 定义你需要把流量均衡到的 Web 服务器。
- tcp 表示使用 TCP 服务。
欲了解更多详情,有关文档及 Crossroads 的配置,请访问他们的官方网站: [https://crossroads.e-tunity.com/][4] 。
XR Crossroads 使用许多方法来提高服务器性能、避免宕机,让你的管理任务更轻松、更简便。希望你喜欢此文章,并随时在下面发表你的评论和建议,方便与 Tecmint 保持联系。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/setting-up-xr-crossroads-load-balancer-for-web-servers-on-rhel-centos/
作者:[Thilina Uvindasiri][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/thilidhanushka/
[1]:https://crossroads.e-tunity.com/
[2]:http://www.tecmint.com/python-simplehttpserver-to-create-webserver-or-serve-files-instantly/
[3]:http://www.tecmint.com/vi-editor-usage/
[4]:https://crossroads.e-tunity.com/


@ -0,0 +1,626 @@
PHP 7.0 升级备注
===============
1. 向后不兼容的变化
2. 新功能
3. SAPI 模块中的变化
4. 废弃的功能
5. 变更的函数
6. 新增的函数
7. 新的类和接口
8. 移除的扩展和 SAPI
9. 扩展的其它变化
10. 新的全局常量
11. INI 文件处理的变化
12. Windows 支持
13. 其它变化
## 1. 向后不兼容的变化
### 语言变化
#### 变量处理的变化
* 间接变量、属性和方法引用现在以从左到右的语义进行解释。一些例子:
$$foo['bar']['baz'] // 解释做 ($$foo)['bar']['baz']
$foo->$bar['baz'] // 解释做 ($foo->$bar)['baz']
$foo->$bar['baz']() // 解释做 ($foo->$bar)['baz']()
Foo::$bar['baz']() // 解释做 (Foo::$bar)['baz']()
要恢复以前的行为,需要显式地加大括号:
${$foo['bar']['baz']}
$foo->{$bar['baz']}
$foo->{$bar['baz']}()
Foo::{$bar['baz']}()
* global 关键字现在只接受简单变量。像以前的
global $$foo->bar;
现在要求如下写法:
global ${$foo->bar};
* 变量或函数调用的前后加上括号不再有任何影响。例如下列代码,函数调用结果以引用的方式传给一个函数
function getArray() { return [1, 2, 3]; }
$last = array_pop(getArray());
// Strict Standards: 只有变量可以用引用方式传递
$last = array_pop((getArray()));
// Strict Standards: 只有变量可以用引用方式传递
现在无论是否使用括号,都会抛出一个严格标准错误。以前在第二种调用方式下不会有提示。
* 在按引用赋值过程中自动创建的数组元素或对象属性,现在的创建结果顺序将不同。例如:
$array = [];
$array["a"] =& $array["b"];
$array["b"] = 1;
var_dump($array);
现在结果是 ["a" => 1, "b" => 1],而以前的结果是 ["b" => 1, "a" => 1]。
相关的 RFC
* https://wiki.php.net/rfc/uniform_variable_syntax
* https://wiki.php.net/rfc/abstract_syntax_tree
#### list() 的变化
* list() 不再以反序赋值,例如:
list($array[], $array[], $array[]) = [1, 2, 3];
var_dump($array);
现在结果是 $array == [1, 2, 3] ,而不是 [3, 2, 1]。注意仅赋值**顺序**变化了而赋值仍然一致LCTT 译注:即以前的 list()行为是从后面的变量开始逐一赋值,这样对与上述用法就会产生 [3,2,1] 这样的结果了。)。例如,类似如下的常规用法
list($a, $b, $c) = [1, 2, 3];
// $a = 1; $b = 2; $c = 3;
仍然保持当前的行为。
* 不再允许对空的 list() 赋值。如下全是无效的:
list() = $a;
list(,,) = $a;
list($x, list(), $y) = $a;
* list() 不再支持对字符串的拆分(以前也只在某些情况下支持)。如下代码:
$string = "xy";
list($x, $y) = $string;
现在的结果是: $x == null 和 $y == null (没有提示),而以前的结果是:
$x == "x" 和 $y == "y" 。此外, list() 现在总是可以处理实现了 ArrayAccess 的对象,例如:
list($a, $b) = (object) new ArrayObject([0, 1]);
现在的结果是: $a == 0 和 $b == 1。 以前 $a 和 $b 都是 null。
相关 RFC:
* https://wiki.php.net/rfc/abstract_syntax_tree#changes_to_list
* https://wiki.php.net/rfc/fix_list_behavior_inconsistency
#### foreach 的变化
* foreach() 迭代不再影响数组内部指针,数组指针可通过 current()/next() 等系列的函数访问。例如:
$array = [0, 1, 2];
foreach ($array as &$val) {
var_dump(current($array));
}
现在将指向值 int(0) 三次。以前的输出是 int(1)、int(2) 和 bool(false)。
* 在对数组按值迭代时foreach 总是在对数组副本进行操作,在迭代中任何对数组的操作都不会影响到迭代行为。例如:
$array = [0, 1, 2];
$ref =& $array; // Necessary to trigger the old behavior
foreach ($array as $val) {
var_dump($val);
unset($array[1]);
}
现在将打印出全部三个元素 (0 1 2),而以前第二个元素 1 会跳过 (0 2)。
* 在对数组按引用迭代时,对数组的修改将继续会影响到迭代。不过,现在 PHP 在使用数字作为键时可以更好的维护数组内的位置。例如,在按引用迭代过程中添加数组元素:
$array = [0];
foreach ($array as &$val) {
var_dump($val);
$array[1] = 1;
}
现在迭代会正确的添加了元素。如上代码输出是 "int(0) int(1)",而以前只是 "int(0)"。
* 对普通(不可遍历的)对象按值或按引用迭代的行为类似于对数组进行按引用迭代。这符合以前的行为,除了如上一点所述的更精确的位置管理的改进。
* 对可遍历对象的迭代行为保持不变。
相关 RFC: https://wiki.php.net/rfc/php7_foreach
#### 参数处理的变化
* 不能定义两个同名的函数参数。例如,下面的方法将会触发编译时错误:
public function foo($a, $b, $unused, $unused) {
// ...
}
如上的代码应该修改使用不同的参数名,如:
public function foo($a, $b, $unused1, $unused2) {
// ...
}
* func\_get\_arg() 和 func\_get\_args() 函数不再返回传递给参数的原始值,而是返回其当前值(也许会被修改)。例如:
function foo($x) {
$x++;
var_dump(func_get_arg(0));
}
foo(1);
将会打印 "2" 而不是 "1"。代码应该改成仅在调用 func\_get\_arg(s) 后进行修改操作。
function foo($x) {
var_dump(func_get_arg(0));
$x++;
}
或者应该避免修改参数:
function foo($x) {
$newX = $x + 1;
var_dump(func_get_arg(0));
}
* 类似的,异常回溯也不再显示传递给函数的原始值,而是修改后的值。例如:
function foo($x) {
$x = 42;
throw new Exception;
}
foo("string");
现在堆栈跟踪的结果是:
Stack trace:
#0 file.php(4): foo(42)
#1 {main}
而以前是:
Stack trace:
#0 file.php(4): foo('string')
#1 {main}
这并不会影响到你的代码的运行时行为,值得注意的是在调试时会有所不同。
同样的限制也会影响到 debug\_backtrace() 及其它检查函数参数的函数。
相关 RFC: https://wiki.php.net/phpng
#### 整数处理的变化
* 无效的八进制表示包含大于7的数字现在会产生编译错误。例如下列代码不再有效
$i = 0781; // 8 不是一个有效的八进制数字!
以前,无效的数字(以及无效数字后的任何数字)会简单的忽略。以前如上 $i 的值是 7因为后两位数字会被悄悄丢弃。
* 以负数进行位移现在会抛出一个算术错误ArithmeticError
var_dump(1 >> -1);
// ArithmeticError: 以负数进行位移
* 向左位移的位数超出了整型宽度时,结果总是 0。
var_dump(1 << 64); // int(0)
以前上述代码的结果依赖于所用的 CPU 架构。例如,在 x86包括 x86-64 上结果是 int(1),因为其位移操作数在范围内。
* 类似的,向右位移的位数超出了整型宽度时,其结果总是 0 或 -1 (依赖于符号):
var_dump(1 >> 64); // int(0)
var_dump(-1 >> 64); // int(-1)
相关 RFC: https://wiki.php.net/rfc/integer_semantics
#### 字符串处理的变化
* 包含十六进制数字的字符串不会再被当做数字,也不会被特殊处理。参见例子中的新行为:
var_dump("0x123" == "291"); // bool(false) (以前是 true)
var_dump(is_numeric("0x123")); // bool(false) (以前是 true)
var_dump("0xe" + "0x1"); // int(0) (以前是 16)
var_dump(substr("foo", "0x1")); // string(3) "foo" (以前是 "oo")
// 注意:遇到了一个非正常格式的数字
filter\_var() 可以用来检查一个字符串是否包含了十六进制数字,或这个字符串是否能转换为整数:
$str = "0xffff";
$int = filter_var($str, FILTER_VALIDATE_INT, FILTER_FLAG_ALLOW_HEX);
if (false === $int) {
throw new Exception("Invalid integer!");
}
var_dump($int); // int(65535)
* 由于给双引号字符串和 HERE 文档增加了 Unicode 码点转义格式Unicode Codepoint Escape Syntax 所以带有无效序列的 "\u{" 现在会造成错误:
$str = "\u{xyz}"; // 致命错误:无效的 UTF-8 码点转义序列
要避免这种情况,需要转义开头的反斜杠:
$str = "\\u{xyz}"; // 正确
不过,不跟随 { 的 "\u" 不受影响。如下代码不会生成错误,和前面的一样工作:
$str = "\u202e"; // 正确
相关 RFC:
* https://wiki.php.net/rfc/remove_hex_support_in_numeric_strings
* https://wiki.php.net/rfc/unicode_escape
####错误处理的变化
* 现在有两个异常类: Exception 和 Error ,这两个类都实现了一个新接口: Throwable 。异常处理代码中的类型约束可能需要修改来处理这种情况。
* 一些致命错误和可恢复的致命错误现在改为抛出一个 Error 。由于 Error 是一个独立于 Exception 的类,这些异常不会被已有的只捕获 Exception 的 try/catch 块捕获(参见本小节末尾的示例)。
可恢复的致命错误被转换为异常后,不能再在错误处理器里被悄悄忽略。特别是,类型约束失败不再能被忽略。
* 解析错误现在会抛出一个继承自 Error 的 ParseError 。对某些可能包含无效代码的 eval() 的错误处理,应该改为捕获 ParseError而不是以前的基于返回值 / error_get_last() 的处理。
* 内部类的构造函数在失败时总是会抛出一个异常。以前一些构造函数会返回 NULL 或一个不可用的对象。
* 一些 E_STRICT 提示的错误级别改变了。
相关 RFC:
* https://wiki.php.net/rfc/engine_exceptions_for_php7
* https://wiki.php.net/rfc/throwable-interface
* https://wiki.php.net/rfc/internal_constructor_behaviour
* https://wiki.php.net/rfc/reclassify_e_strict
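下面是一个简单的示意(借助后文提到的 intdiv() 触发 DivisionByZeroError它是 Error 的子类,并用 eval() 触发 ParseError

    try {
        intdiv(1, 0); // 抛出 DivisionByZeroErrorError 的子类)
    } catch (Error $e) {
        // 只捕获 Exception 的旧代码无法捕获这里的 Error
        echo "捕获到 Error: ", $e->getMessage(), "\n";
    }

    try {
        eval('$x = ;'); // 无效代码,抛出 ParseError
    } catch (Throwable $t) {
        // Throwable 接口可同时匹配 Error 和 Exception
        echo get_class($t), "\n"; // ParseError
    }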
####其它的语言变化
* 不再支持从不兼容的 $this 上下文以静态方式调用非静态方法。这种情况下,$this 是未定义的,但对方法的调用仍被允许,并带有一个废弃提示。例子:
class A {
public function test() { var_dump($this); }
}
// 注意:没有从类 A 进行扩展
class B {
public function callNonStaticMethodOfA() { A::test(); }
}
(new B)->callNonStaticMethodOfA();
// 废弃:非静态方法 A::test() 不应该被静态调用
// 提示:未定义的变量 $this
NULL
注意,这仅出现在来自不兼容上下文的调用上。如果类 B 扩展自类 A ,调用会被允许,没有任何提示。
* 不能使用下列类名、接口名和 trait 名(不区分大小写):
bool
int
float
string
null
false
true
这适用于 class/interface/trait 声明、 class_alias() 和 use 语句中(见下面的示例)。
此外,下列类名、接口名和 trait 名保留给将来使用,但目前使用时尚不会抛出错误:
resource
object
mixed
numeric
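一个简单的示意(类名仅为演示,以下每行单独来看):

    class bool {}                    // 致命错误:不能使用保留名 "bool" 作为类名
    class_alias('stdClass', 'Int');  // 同样无效:保留名不区分大小写
    class object {}                  // 目前尚不报错,但 object 已保留给将来使用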
* yield 结构用在表达式上下文时,不再需要括号。它现在是一个优先级介于 “print” 和 “=>” 之间的右结合操作符。在某些情况下这会导致不同的行为,例如:
echo yield -1;
// 以前被解释如下
echo (yield) - 1;
// 现在被解释如下
echo yield (-1);
yield $foo or die;
// 以前被解释如下
yield ($foo or die);
// 现在被解释如下
(yield $foo) or die;
这种情况可以通过增加括号来解决。
* 移除了 ASP (\<%) 和 script (\<script language=php>) 标签。
RFC: https://wiki.php.net/rfc/remove_alternative_php_tags
* 不支持以引用的方式对 new 的结果赋值。
* 不再支持从不兼容的 $this 上下文对非静态方法进行范围化调用scoped call。细节参见 https://wiki.php.net/rfc/incompat_ctx 。
* ini 文件中不再支持 # 风格的注释,使用 ; 风格的注释替代。
* $HTTP\_RAW\_POST\_DATA 不再可用,使用 php://input 流替代。
###标准库的变化
* call\_user\_method() 和 call\_user\_method\_array() 不再存在。
* 当输出缓冲区是在输出缓冲处理器里被创建时, ob\_start() 不再发出 E\_ERROR而是发出 E\_RECOVERABLE\_ERROR。
* zend\_qsort 改用混合hybrid排序算法以获得更好的性能并改名为 zend\_sort。
* 增加了稳定排序算法 zend\_insert\_sort。
* 移除 fpm-fcgi 的 dl() 函数。
* setcookie() 如果 cookie 名为空会触发一个 WARNING ,而不是发出一个空的 set-cookie 头。
###其它
- Curl:
- 去除对禁用 CURLOPT\_SAFE\_UPLOAD 选项的支持。所有的 curl 文件上载必须使用 curl\_file / CURLFile API。
- Date:
- 从 mktime() 和 gmmktime() 中移除 $is\_dst 参数
- DBA
- 对于 inifile 处理器,如果未找到键, dba\_delete() 现在也会返回 false。
- GMP
- 现在要求 libgmp 版本 4.2 或更新。
- gmp\_setbit() 和 gmp\_clrbit() 对于负的位索引返回 FALSE和其它的 GMP 函数一致。
- Intl:
- 移除废弃的别名 datefmt\_set\_timezone\_id() 和 IntlDateFormatter::setTimeZoneID()。替代使用 datefmt\_set\_timezone() 和 IntlDateFormatter::setTimeZone()。
- libxml:
- 增加 LIBXML\_BIGLINES 解析器选项。它从 libxml 2.9.0 开始可用,增加了在错误报告中对大于 16 位的行号的支持。
- Mcrypt
- 移除等同于 mcrypt\_generic\_deinit() 的废弃别名 mcrypt\_generic\_end()。
- 移除废弃的 mcrypt\_ecb()、 mcrypt\_cbc()、 mcrypt\_cfb() 和 mcrypt\_ofb() 函数,它们等同于使用 MCRYPT\_MODE\_* 标志的 mcrypt\_encrypt() 和 mcrypt\_decrypt() 。
- Session
- session\_start() 可以以数组方式接受所有的 INI 设置。例如, ['cache\_limiter'=>'private'] 会设置 session.cache\_limiter=private 。也支持 'read\_and\_close' 选项,以在读取数据后立即关闭会话(见下面的示例)。
- 会话保存处理器接受使用 validate\_sid() 和 update\_timestamp() 来校验会话 ID 是否存在、更新会话时间戳。对旧式的用户定义的会话保存处理器继续兼容。
- 增加了 SessionUpdateTimestampHandlerInterface 接口, validateSid() 和 updateTimestamp() 定义在该接口里面。
- session.lazy\_write(默认是 On) 的 INI 设置支持仅在会话数据更新时写入。
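例如(一个简单的示意):

    // 以数组传入 INI 设置,并在读取数据后立即关闭会话
    session_start([
        'cache_limiter'  => 'private',
        'read_and_close' => true,
    ]);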
- Opcache
- 移除 opcache.load\_comments 配置语句。现在载入文件内注释没有开销,并且总是启用的。
- OpenSSL:
- 移除 "rsa\_key\_size" SSL 上下文选项,按给出的协商的加密算法自动设置适当的大小。
- 移除 "CN\_match" 和 "SNI\_server\_name" SSL 上下文选项。使用自动侦测或 "peer\_name" 选项替代。
- PCRE:
- 移除对 /e (PREG\_REPLACE\_EVAL) 修饰符的支持,使用 preg\_replace\_callback() 替代。
- PDO\_pgsql:
- 移除 PGSQL\_ATTR\_DISABLE\_NATIVE\_PREPARED\_STATEMENT 属性,等同于 ATTR\_EMULATE\_PREPARES。
- Standard:
- 移除 setlocale() 中的字符串类目支持,使用 LC_* 常量替代。
- 移除 set\_magic\_quotes\_runtime() 及其别名 magic\_quotes\_runtime()。
- JSON:
- 拒绝 json_decode 中的 RFC 7159 不兼容数字格式 - 顶层 (07, 0xff, .1, -.1) 和所有层的 ([1.], [1.e1])
- 以单个参数调用 json\_decode 时,若参数等价于空的 PHP 字符串、或其转换为字符串后为空NULL、FALSE结果是 JSON 语法错误。
- Stream:
- 移除 set\_socket\_blocking() ,等同于其别名 stream\_set\_blocking()。
- XSL:
- 移除 xsl.security\_prefs ini 选项,使用 XsltProcessor::setSecurityPrefs() 替代。
##2. 新功能
- Core
- 增加了组式 use 声明。
(RFC: https://wiki.php.net/rfc/group_use_declarations)
- 增加了 null 合并操作符 (??)。
(RFC: https://wiki.php.net/rfc/isset_ternary)
- 在 64 位架构上支持长度 >= 2^31 字节的字符串。
- 增加了 Closure::call() 方法(仅工作在用户侧的类)。
- 在双引号字符串和 here 文档中增加了 \u{xxxxxx} Unicode 码点转义格式。
- define() 现在支持以数组作为常量值,修复了此前 const 语法支持数组而 define() 不支持的疏漏。
- 增加了比较操作符 (<=>),即太空船操作符。
(RFC: https://wiki.php.net/rfc/combined-comparison-operator)
- 为委托生成器添加了类似协程的 yield from 操作符。
(RFC: https://wiki.php.net/rfc/generator-delegation)
- 保留的关键字现在可以用在几种新的上下文中。
(RFC: https://wiki.php.net/rfc/context_sensitive_lexer)
- 增加了标量类型声明的支持,并可以使用 declare(strict\_types=1) 声明启用严格模式(本列表之后附有一个综合示例)。
(RFC: https://wiki.php.net/rfc/scalar_type_hints_v5)
- 增加了对加密级安全的用户侧的随机数发生器的支持。
(RFC: https://wiki.php.net/rfc/easy_userland_csprng)
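下面是一个简单的综合示意(其中的函数 add() 为假设的演示函数),涵盖标量类型声明、null 合并操作符、组合比较操作符和 Unicode 码点转义:

    <?php
    declare(strict_types=1); // 声明启用严格模式

    // 标量类型声明(参数与返回值)
    function add(int $a, int $b): int {
        return $a + $b;
    }

    // null 合并操作符:左侧存在且不为 null 时取左侧,否则取右侧
    $user = $_GET['user'] ?? 'nobody';

    // 组合比较(太空船)操作符:小于、等于、大于分别返回 -1、0、1
    var_dump(1 <=> 2); // int(-1)
    var_dump(2 <=> 2); // int(0)

    // Unicode 码点转义
    echo "\u{1F418}"; // 输出大象表情符号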
- Opcache
- 增加了基于文件的二级 opcode 缓存实验性——默认禁用。要启用它PHP 需要使用 --enable-opcache-file 配置和构建,然后 opcache.file\_cache=\<DIR> 配置指令就可以设置在 php.ini 中。二级缓存也许可以提升服务器重启或 SHM 重置时的性能。此外,也可以设置 opcache.file\_cache\_only=1 来使用文件缓存而根本不用 SHM也许对于共享主机有用设置 opcache.file\_cache\_consistency\_checks=0 来禁用文件缓存一致性检查,以加速载入过程,有安全风险。
- OpenSSL
- 当用 OpenSSL 1.0.2 及更新构建时,增加了 "alpn\_protocols" SSL 上下文选项来允许加密的客户端/服务器流使用 ALPN TLS 扩展去协商替代的协议。协商后的协议信息可以通过 stream\_get\_meta\_data() 输出访问。
- Reflection
- 增加了一个 ReflectionGenerator 类,用于检查生成器( yield from 跟踪、当前文件/行等等)。
- 增加了一个 ReflectionType 类来更好的支持新的返回类型和标量类型声明功能。新的 ReflectionParameter::getType() 和 ReflectionFunctionAbstract::getReturnType() 方法都返回一个 ReflectionType 实例。
- Stream
- 添加了新的仅用于 Windows 的流上下文选项以允许阻塞式的管道读取。要启用该功能,在创建流上下文时传递 array("pipe" => array("blocking" => true)) 。要注意的是,该选项可能导致管道缓冲区死锁,不过它在几个命令行场景中很有用。
##3. SAPI 模块的变化
- FPM
- 修复错误 #65933不能设置超过 1024 字节的配置行)。
- Listen = port 现在监听在所有地址上IPv6 和 IPv4 映射的)。
##4. 废弃的功能
- Core
- 废弃了 PHP 4 风格的构造函数(即构造函数名与类名相同)。
- 废弃了对非静态方法的静态调用。
- OpenSSL
- 废弃了 "capture\_session\_meta" SSL 上下文选项。 在流资源上活动的加密相关的元数据可以通过 stream\_get\_meta\_data() 的返回值访问。
##5. 函数的变化
- parse\_ini\_file():
- parse\_ini\_string():
- 添加了扫描模式 INI_SCANNER_TYPED以得到带有类型typed的 .ini 值(见下面的示例)。
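一个简单的示意:

    // INI_SCANNER_TYPED 会把 "true"/"false"/数字等解析为相应的类型
    $conf = parse_ini_string("debug = true\nport = 8080", false, INI_SCANNER_TYPED);
    var_dump($conf['debug']); // bool(true)
    var_dump($conf['port']);  // int(8080)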
- unserialize():
- 给 unserialize 函数添加了第二个参数
(RFC: https://wiki.php.net/rfc/secure_unserialize) 来指定可接受的类:
unserialize($foo, ["allowed_classes" => ["MyClass", "MyClass2"]]);
- proc\_open():
- 可以被 proc\_open() 使用的最大管道数以前被硬编码地限制为 16。现在去除了这个限制只受限于 PHP 的可用内存大小。
- 新添加的仅用于 Windows 的配置选项 "blocking\_pipes" 可以用于强制阻塞对子进程管道的读取。这在几种命令行应用场景中有用,但它可能导致死锁。此外,这与新的流的管道上下文选项相关。
- array_column():
- 该函数现在支持把对象数组当做二维数组处理。只有公开属性会被处理,对象里面使用 \_\_get() 的动态属性必须也实现 \_\_isset() 才行(见下面的示例)。
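一个简单的示意(类 User 为假设的演示类):

    class User {
        public $id;
        public $name;
        public function __construct($id, $name) {
            $this->id = $id;
            $this->name = $name;
        }
    }

    $users = [new User(1, 'Alice'), new User(2, 'Bob')];
    print_r(array_column($users, 'name', 'id')); // Array ( [1] => Alice [2] => Bob )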
- stream\_context\_create()
- 现在可以接受一个仅 Windows 可用的配置 array("pipe" => array("blocking" => \<boolean>)) 来强制阻塞管道读取。该选项应该小心使用,因为在该平台上有可能导致管道缓冲区死锁。
##6. 新函数
- GMP
- 添加了 gmp\_random\_seed()。
- PCRE:
- 添加了 preg\_replace\_callback\_array() 函数(见下面的示例)。
(RFC: https://wiki.php.net/rfc/preg_replace_callback_array)
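一个简单的示意(模式与回调仅为演示):

    // 在一次调用中按不同模式分派不同的回调
    $result = preg_replace_callback_array([
        '~[a-z]+~i' => function ($m) { return strtoupper($m[0]); },
        '~\d+~'     => function ($m) { return '#' . $m[0]; },
    ], 'Aaa 123 bbb');
    echo $result; // "AAA #123 BBB"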
- Standard
- 添加了整数除法 intdiv() 函数。
- 添加了重置错误状态的 error\_clear\_last() 函数。
- Zlib:
- 添加了 deflate\_init()、 deflate\_add()、 inflate\_init()、 inflate\_add() 函数来进行增量式和流式的压缩/解压(见下面的示例)。
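一个简单的示意(假设数据分块到达):

    // 增量压缩两个数据块,然后增量解压还原
    $defl = deflate_init(ZLIB_ENCODING_GZIP);
    $gz  = deflate_add($defl, "chunk 1, ", ZLIB_NO_FLUSH);
    $gz .= deflate_add($defl, "chunk 2", ZLIB_FINISH);

    $infl = inflate_init(ZLIB_ENCODING_GZIP);
    echo inflate_add($infl, $gz, ZLIB_FINISH); // "chunk 1, chunk 2"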
##7. 新的类和接口
(暂无)
##8. 移除的扩展和 SAPI
- sapi/aolserver
- sapi/apache
- sapi/apache_hooks
- sapi/apache2filter
- sapi/caudium
- sapi/continuity
- sapi/isapi
- sapi/milter
- sapi/nsapi
- sapi/phttpd
- sapi/pi3web
- sapi/roxen
- sapi/thttpd
- sapi/tux
- sapi/webjames
- ext/mssql
- ext/mysql
- ext/sybase_ct
- ext/ereg
更多细节参见:
- https://wiki.php.net/rfc/removal_of_dead_sapis_and_exts
- https://wiki.php.net/rfc/remove_deprecated_functionality_in_php7
注意NSAPI 并没有在该 RFC 中投票,但它随后也被移除了,因为其相关的 SDK 已不可用,无法再构建。
##9. 扩展的其它变化
- Mhash
- Mhash 今后不再是一个扩展了,使用 function\_exists("mhash") 来检查其是否可用。
##10. 新的全局常量
- Core
- 添加了 PHP\_INT\_MIN 常量。
- Zlib
- 添加了如下常量,用于控制新的增量压缩/解压函数 deflate\_add() 和 inflate\_add() 的刷新行为:
- ZLIB\_NO\_FLUSH
- ZLIB\_PARTIAL\_FLUSH
- ZLIB\_SYNC\_FLUSH
- ZLIB\_FULL\_FLUSH
- ZLIB\_BLOCK
- ZLIB\_FINISH
- GD
- 移除了 T1Lib 支持,因此如下依赖 T1Lib 的可选功能不再可用:
函数:
- imagepsbbox()
- imagepsencodefont()
- imagepsextendedfont()
- imagepsfreefont()
- imagepsloadfont()
- imagepsslantfont()
- imagepstext()
资源:
- 'gd PS font'
- 'gd PS encoding'
##11. INI 文件处理的变化
- Core
- 移除了 asp\_tags ini 指令。如果启用它会导致致命错误。
- 移除了 always\_populate\_raw\_post\_data ini 指令。
##12. Windows 支持
- Core
- 在 64 位系统上支持原生的 64 位整数。
- 在 64 位系统上支持大文件。
- 支持 getrusage()。
- ftp
- 所带的 ftp 扩展总是共享库的。
- 对于 SSL 支持,取消了对 openssl 扩展的依赖,改为仅依赖 openssl 库。如果编译时启用了 openssl会自动启用 ftp\_ssl\_connect()。
- odbc
- 所带的 odbc 扩展总是共享库的。
##13. 其它变化
- Core
- NaN 和 Infinity 转换为整数时结果总是 0而不再是未定义的、依赖平台的行为。
- 对非对象调用方法会触发一个可捕获错误,而不是致命错误;参见: https://wiki.php.net/rfc/catchable-call-to-member-of-non-object
- zend\_parse\_parameters、类型提示和转换现在总是用 "integer" 和 "float",而不是 "long" 和 "double"。
- 如果 ignore\_user\_abort 设置为 true即使连接已中断输出缓冲仍会继续工作。
--------------------------------------------------------------------------------
via: https://github.com/php/php-src/blob/php-7.0.0beta1/UPGRADING
作者:[php][a]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://github.com/php
View File
@ -0,0 +1,120 @@
FSSlc translating
4 CCleaner Alternatives For Ubuntu Linux
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/ccleaner-10-700x393.jpg)
Back in my Windows days, [CCleaner][1] was my favorite tool for freeing up space, deleting junk files and speeding up Windows. I know I am not the only one who looked for a CCleaner for Linux when switching from Windows. If you are looking for a CCleaner alternative in Linux, I am going to list here four such applications that you can use to clean up Ubuntu or Ubuntu-based Linux distributions. But before we see the list, let's ponder over whether Linux requires system clean up tools or not.
### Does Linux need system clean up utilities like CCleaner? ###
To get this answer, let's first see what CCleaner does. As per [How-To Geek][2]:
> CCleaner has two main uses. One, it scans for and deletes useless files, freeing up space. Two, it erases private data like your browsing history and list of most recently opened files in various programs.
So in short, it performs a system-wide clean up of temporary files, be it in your web browser or in your media player. You might know that Windows has had an affection for keeping junk files in the system since forever, but what about Linux? What does it do with the temporary files?
Unlike Windows, Linux cleans up all the temporary files (stored in /tmp) automatically. You don't have a registry in Linux, which further reduces the headache. At worst, you might have some broken packages, packages that are not needed anymore, and internet browsing history, cookies and cache.
### Does it mean that Linux does not need system clean up utilities? ###
- The answer is no, if you can run a few commands for occasional package cleaning, manually delete browser history, etc.
- The answer is yes, if you don't want to run from place to place and want one tool to rule them all, where you can clean up all the suggested things in one (or a few) click(s).
If your answer is yes, let's move on to see some CCleaner-like utilities to clean up your Ubuntu Linux.
### CCleaner alternatives for Ubuntu ###
Please note that I am using Ubuntu here, because some tools discussed here exist only for Ubuntu-based Linux distributions while some are available for all Linux distributions.
#### 1. BleachBit ####
![BleachBit System Cleaning Tool for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/BleachBit_Cleaning_Tool_Ubuntu.jpeg)
[BleachBit][3] is a cross-platform app available for both Windows and Linux. It supports a long list of applications for cleaning, giving you options to clean cache, cookies and log files. A quick look at its features:
- Simple GUI: check the boxes you want, preview the result and delete.
- Multi-platform: Linux and Windows
- Free and open source
- Shred files to hide their contents and prevent data recovery
- Overwrite free disk space to hide previously deleted files
- Command line interface also available
BleachBit is available in the default repositories of Ubuntu 14.04 and 15.04. You can install it using the command below in a terminal:
sudo apt-get install bleachbit
BleachBit has binaries available for all major Linux distributions. You can download BleachBit from the link below:
- [Download BleachBit for Linux][4]
#### 2. Sweeper ####
![Sweeper system clean up tool for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/sweeper.jpeg)
Sweeper is a system clean up utility which is a part of the [KDE SC utilities][5] module. Its main features are:
- remove web-related traces: cookies, history, cache
- remove the image thumbnails cache
- clean the application and document history
Sweeper is available in the default Ubuntu repositories. Use the command below in a terminal to install it:
sudo apt-get install sweeper
#### 3. Ubuntu Tweak ####
![Ubuntu Tweak Tool for cleaning up Ubuntu system](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Tweak_Janitor.jpeg)
As the name suggests, [Ubuntu Tweak][6] is more of a tweaking tool than a cleaning utility. But along with tweaking things like Compiz settings, panel configuration, startup program control, power management etc., Ubuntu Tweak also provides a Janitor tab that lets you:
- clean browser cache
- clean Ubuntu Software Center cache
- clean thumbnail cache
- clean apt repository cache
- clean old kernel files
- clean package configs
You can get the .deb installer for Ubuntu Tweak from the link below:
- [Download Ubuntu Tweak][7]
#### 4. GCleaner (beta) ####
![GCleaner CCleaner like tool](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/GCleaner.jpeg)
One of the third-party apps for elementaryOS Freya, GCleaner aims to be the CCleaner of the GNU world. Its interface closely resembles CCleaner's. Some of the main features of GCleaner are:
- clean browser history
- clean app cache
- clean packages and configs
- clean recent document history
- empty recycle bin
At the time of writing this article, GCleaner is under heavy development. You can check the project website and get the source code to build and use GCleaner.
- [Know More About GCleaner][8]
### Your choice? ###
I have listed the possibilities for you, and I'll let you decide which tool you would use to clean Ubuntu 14.04. But I am certain that if you were looking for a CCleaner-like application, one of these four will end your search.
--------------------------------------------------------------------------------
via: http://itsfoss.com/ccleaner-alternatives-ubuntu-linux/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:https://www.piriform.com/ccleaner/download
[2]:http://www.howtogeek.com/172820/beginner-geek-what-does-ccleaner-do-and-should-you-use-it/
[3]:http://bleachbit.sourceforge.net/
[4]:http://bleachbit.sourceforge.net/download/linux
[5]:https://www.kde.org/applications/utilities/
[6]:http://ubuntu-tweak.com/
[7]:http://ubuntu-tweak.com/
[8]:https://quassy.github.io/elementary-apps/GCleaner/
View File
@ -0,0 +1,125 @@
Interview: Larry Wall
================================================================================
> Perl 6 has been 15 years in the making, and is now due to be released at the end of this year. We speak to its creator to find out what's going on.
Larry Wall is a fascinating man. He's the creator of Perl, a programming language that's widely regarded as the glue holding the internet together, and mocked by some as being a “write-only” language due to its density and liberal use of non-alphanumeric characters. Larry also has a background in linguistics, and is well known for delivering entertaining “State of the Onion” presentations about the future of Perl.
At FOSDEM 2015 in Brussels, we caught up with Larry to ask him why Perl 6 has taken so long (Perl 5 was released in 1994), how difficult it is to manage a project when everyone has strong opinions and is pulling in different directions, and how his background in linguistics influenced the design of Perl from the start. Get ready for some intriguing diversions…
![](http://www.linuxvoice.com/wp-content/uploads/2015/07/wall1.jpg)
**Linux Voice: You once had a plan to go and find an undocumented language somewhere in the world and create a written script for it, but you never had the opportunity to fulfil this plan. Is that something youd like to go back and do now?**
Larry Wall: You have to be kind of young to be able to carry that off! It's actually a lot of hard work, and organisations that do these things don't tend to take people in when they're over a certain age. Partly this is down to health and vigour, but also because people are much better at picking up new languages when they're younger, and you have to learn the language before making a script for it.
I started trying to teach myself Japanese about 10 years ago, and I could speak it quite well because of my phonology and phonetics training, but it's very hard for me to understand what anybody says. So I can go to Japan and ask for directions, but I can't really understand the answers!
> “With Perl 6, we found some ways to make the computer more sure about what the user is talking about.”
So usually learning a language well enough to develop a writing system, and to at least be conversational in the language, takes some period of years before you can get to the point where you can actually do literacy and start educating people on their own culture, as it were. And then you teach them to write about their own culture as well.
Of course, if you have language helpers (and we were told not to call them “language informants”, or everyone would think we were working for the CIA!), you can get them to come in and help you learn the foreign language. They are not teachers, but there are ways of eliciting things from someone who's not a language teacher; they can still teach you how to speak. They can take a stick and point to it and say “that's a stick”, and drop it and say “the stick falls”. Then you start writing things down and systematising things.
The motivation that most people have, going out to these groups, is to translate the Bible into their languages. But that's only one part of it; the other is also culture preservation. Missionaries get kind of a bad rep on that, because anthropologists think they should be left to sit there in their own culture. But somebody is probably going to change their culture anyway; it's usually the army, or businesses coming in, like Coca Cola or the sewing machine people, or missionaries. And of those three, the missionaries are the least damaging, if they're doing their job right.
**LV: Many writing systems are based on existing scripts, and then you have invented ones like Greenlandic…**
LW: The Cherokee invented their own just by copying letters, and they don't map much onto what we think of as letters; it's fairly arbitrary in that sense. It just has to represent how the people themselves think of the language, and sufficiently well to communicate. Often there will be variations on Western orthography, using characters from Latin where possible. Tonal languages have to mark the tones somehow, by accents or by numbers.
As soon as you start leaning towards a phonetic or phonological representation, then you also start to lose dialectal differences, or you have to write the dialectal differences. Or you have conventional spelling like we have in English, but pronunciation that doesn't really match it.
**LV: When you started working on Perl, what did you take from your background in linguistics that made you think: “this is really important in a programming language”?**
LW: I thought a lot about how people use languages. In real languages, you have a system of nouns and verbs and adjectives, and you kind of know which words are which type. And in real natural languages, you have a lot of instances of shoving one word into a different slot. The linguistic theory I studied was called tagmemics, and it accounts for how this works in a natural language: you could have something that you think of as a noun, but you can verb it, and people do that all the time.
You can pretty much shove anything in any slot, and you can communicate. One of my favourite examples is shoving an entire sentence in as an adjective. The sentence goes like this: “I don't like your I-can-use-anything-as-an-adjective attitude”!
So natural language is very flexible this way, because you have a very intelligent listener (at least, compared with a computer) who you can rely on to figure out what you must have meant, in case of ambiguity. Of course, in a computer language you have to manage the ambiguity much more closely.
Arguably in Perl 1 through to 5 we didn't manage it quite adequately enough. Sometimes the computer was confused when it really shouldn't be. With Perl 6, we discovered some ways to make the computer more sure about what the user is talking about, even if the user is confused about whether something is really a string or a number. The computer knows the exact type of it. We figured out ways of having stronger typing internally but still have the allomorphic “you can use this as that” idea.
![](http://www.linuxvoice.com/wp-content/uploads/2015/07/wall2.jpg)
**LV: For a long time Perl was seen as the “glue” language of the internet, for fitting bits and pieces together. Do you see Perl 6 as a release to satisfy the needs of existing users, or as a way to bring in new people, and bring about a resurgence in the language?**
LW: The initial intent was to make a better Perl for Perl programmers. But as we looked at some of the inadequacies of Perl 5, it became apparent that if we fixed these inadequacies, Perl 6 would be more applicable, as I mentioned in my talk; like how J. R. R. Tolkien talked about applicability [see http://tinyurl.com/nhpr8g2].
The idea that “easy things should be easy and hard things should be possible” goes way back, to the boundary between Perl 2 and Perl 3. In Perl 2, we couldn't handle binary data or embedded nulls; it was just C-style strings. I said then that “Perl is just a text processing language; you don't need those things in a text processing language”.
But it occurred to me at the time that there were a large number of problems that were mostly text, and had a little bit of binary data in them: network addresses and things like that. You use binary data to open the socket but then text to process it. So the applicability of the language more than doubled by making it possible to handle binary data.
That began a trade-off about what things should be easy in a language. Nowadays we have a principle in Perl, and we stole the phrase Huffman coding for it, from the bit encoding system where you have different sizes for characters. Common characters are encoded in fewer bits, and rarer characters are encoded in more bits.
> “There had to be a very careful balancing act. There were just so many good ideas at the beginning.”
We stole that idea as a general principle for Perl, for things that are commonly used, or when you have to type them very often: the common things need to be shorter or more succinct. Another bit of that, however, is that they're allowed to be more irregular. In natural language, it's actually the most commonly used verbs that tend to be the most irregular.
And there's a reason for that, because you need more differentiation of them. One of my favourite books is called The Search for the Perfect Language by Umberto Eco, and it's not about computer languages; it's about philosophical languages, and the whole idea that maybe some ancient language was the perfect language and we should get back to it.
All of those languages make the mistake of thinking that similar things should always be encoded similarly. But that's not how you communicate. If you have a bunch of barnyard animals, and they all have related names, and you say “Go out and kill the Blerfoo”, but you really wanted them to kill the Blerfee, you might get a cow killed when you want a chicken killed.
So in realms like that it's actually better to differentiate the words, for more redundancy in the communication channel. The common words need to have more of that differentiation. It's all about communicating efficiently, and then there's also this idea of self-clocking codes. If you look at a UPC label on a product (a barcode), that's actually a self-clocking code where each pair of bars and spaces is always in a unit of seven columns wide. You rely on that; you know the width of the bars will always add up to that. So it's self-clocking.
There are other self-clocking codes used in electronics. In the old transmission serial protocols there were stop and start bits so you could keep things synced up. Natural languages also do this. For instance, in the writing of Japanese, they don't use spaces. Because of the way they write it, they will have a Kanji character from Chinese at the head of each phrase, and then the endings are written in a syllabary.
**LV: Hiragana, right?**
LW: Yes, Hiragana. So naturally the head of each phrase really stands out with this system. Similarly, in ancient Greek, most of the verbs were declined or conjugated. So they had standard endings that were sort of a clocking mechanism. Spaces were optional in their writing system as well; it was a more modern invention to put the spaces in.
So similarly in computer languages, there's value in having a self-clocking code. We rely on this heavily in Perl, and even more heavily in Perl 6 than in previous releases. The idea is that when you're parsing an expression, you're either expecting a term or an infix operator. When you're expecting a term you might also get a prefix operator (that's kind of in the same expectation slot), and when you're expecting an infix you might also get a postfix for the previous term.
But it flips back and forth. And if the compiler actually knows which it is expecting, you can overload those a little bit, and Perl does this. So a slash when it's expecting a term will introduce a regular expression, whereas a slash when you're expecting an infix will be division. On the other hand, we don't want to overload everything, because then you lose the self-clocking redundancy.
Most of our best error messages, for syntax errors, actually come out of noticing that you have two terms in a row. And then we try to figure out why there are two terms in a row: “oh, you must have left a semicolon out on the previous line”. So we can produce much better error messages than the more ad-hoc parsers.
![](http://www.linuxvoice.com/wp-content/uploads/2015/07/wall3.jpg)
**LV: Why has Perl 6 taken fifteen years? It must be hard overseeing a language when everyone has different opinions about things, and there's not always a clear right way and wrong way to do things.**
LW: There had to be a very careful balancing act. There were just so many good ideas at the beginning; well, I don't want to say they were all good ideas. There were so many pain points, like there were 361 RFCs [feature proposal documents] when I expected maybe 20. We had to sit back and actually look at them all, and ignore the proposed solutions, because they were all over the map and all had tunnel vision. Each one may have just changed one thing, but if we had done them all, it would've been a complete mess.
So we had to re-rationalise based on how people were actually hurting when they tried to use Perl 5. We started to look at the unifying, underlying ideas. Many of these RFCs were based on the fact that we had an inadequate type system. By introducing a more coherent type system we could fix many problems in a sane fashion and a cohesive fashion.
And we started noticing other ways we could unify the feature sets and start reusing ideas in different areas. Not necessarily that they were the same thing underneath. We have a standard way of writing pairs (well, two ways in Perl!), and the way of writing pairs with a colon could also be reused for radix notation, or for literal numbers in any base. It could also be used for various alternative forms of quoting. We say in Perl that it's “strangely consistent”.
> “People who made early implementations of Perl 6 came back to me, cap in hand, and said “We really need a language designer.””
Similar ideas pop up, and you say “I'm already familiar with how that syntax works, but I see it's being used for something else”. So it took some unity of vision to find these unifications. People who had the various ideas and made early implementations of Perl 6 came back to me, cap-in-hand, and said “We really need a language designer. Could you be our benevolent dictator?”
So I was the language designer, but I was almost explicitly told: “Stay out of the implementation! We saw what you made out of Perl 5, and we don't like it!” It was really funny, because the innards of the new implementation started looking a whole lot like Perl 5 inside, and maybe that's why some of the early implementations didn't work well.
Because we were still feeling our way into the whole design, the implementations made a lot of assumptions about what a VM should and shouldn't do, so we ended up with something like an object oriented assembly language. That sort of problem was fairly pervasive at the beginning. Then the Pugs guys came along and said “Let's use Haskell, because it makes you think very clearly about what you're doing. Let's use it to clarify our semantic model underneath.”
So we nailed down some of those semantic models, but more importantly, we started building the test suite at that point, to be consistent with those semantic models. Then after that, the Parrot VM continued developing, and then another implementation, Niecza, came along; it was based on .NET. It was by a young fellow who was very smart and implemented a large subset of Perl 6, but he was kind of a loner and didn't really figure out a way to get other people involved in his project.
At the same time the Parrot project was getting too big for anyone to really manage, and very difficult to refactor. At that point the fellows working on Rakudo decided that we probably needed to be on more platforms than just the Parrot VM. So they invented a portability layer called NQP, which stands for “Not Quite Perl”. They ported it to run, first of all, on the JVM (Java Virtual Machine), and while they were doing that they were also secretly working on a new VM called MoarVM. That became public a little over a year ago.
Both MoarVM and the JVM run a pretty much equivalent set of regression tests; Parrot is kind-of trailing back in some areas. So that has been very good for flushing out VM-specific assumptions, and we're starting to think about NQP targeting other things. There was a Google Summer of Code project to target NQP to JavaScript, and that might fit right in, because MoarVM also uses Node.js for much of its more mundane processing.
We probably need to concentrate on MoarVM for the rest of this year, until we actually define 6.0, and then the rest will catch up.
**LV: Last year in the UK, the government kicked off the Year of Code, an attempt to get young people interested in programming. There are lots of opinions about how this should be done; like whether you should teach low-level languages at the start, so that people really understand memory usage, or a high-level language. What's your take on that?**
LW: Up until now, the Python community has done a much better job of getting into the lower levels of education than we have. We'd like to do something in that space too, and that's partly why we have the butterfly logo; because it's going to be appealing to seven-year-old girls!
But we do think that Perl 6 will be learnable as a first language. A number of people have surprised us by learning Perl 5 as their first language. And you know, there are a number of fairly powerful concepts even in Perl 5, like closures, lexical scoping, and features you generally get from functional programming. Even more so in Perl 6.
> “Until now, the Python community has done a much better job of getting into the lower levels of education.”
Part of the reason Perl 6 has taken so long is that we have around 50 different principles we try to stick to, and in language design you end up juggling everything and saying “what's really the most important principle here?” There has been a lot of discussion about a lot of different things. Sometimes we commit to a decision, work with it for a while, and then realise it wasn't quite the right decision.
We didn't design or specify pretty much anything about concurrent programming until someone came along who was smart enough about it and knew what the different trade-offs were, and that's Jonathan Worthington. He has blended together ideas from other languages like Go and C#, with concurrent primitives that compose well. Composability is important in the rest of the language.
There are an awful lot of concurrent and parallel programming systems that don't compose well, like threads and locks, and there have been lots of ways to do it poorly. So in one sense, it's been worth waiting this extra time to see some of these languages like Go and C# develop really good high-level primitives (that's sort of a contradiction in terms) that compose well.
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/interview-larry-wall/
作者:[Mike Saunders][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxvoice.com/author/mike/
View File
@ -1,88 +0,0 @@
alim0x translating
The history of Android
================================================================================
![Gingerbread's new keyboard, text selection UI, overscroll effect, and new checkboxes.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/3kb-high-over-check.png)
Gingerbread's new keyboard, text selection UI, overscroll effect, and new checkboxes.
Photo by Ron Amadeo
One of the most important additions to Android 2.3 was the system-wide text selection interface, which you can see in the Google search bar in the left screenshot. Long pressing a word would highlight it in orange and make draggable handles appear on either side of the highlight. You could then adjust the highlight using the handles and long press on the highlight to bring up options for cut, copy, and paste. Previous methods used tiny controls that relied on a trackball or D-Pad, but with this first finger-driven text selection method, the Nexus S didn't need the extra hardware controls.
The right set of images shows the new checkbox design and overscroll effect. The Froyo checkbox worked like a light bulb—it would show a green check when on and a gray check when off. Gingerbread now displayed an empty box when an option is turned off—which made much more sense. Gingerbread was the first version to have an overscroll effect. An orange glow appeared when you hit the end of a list and grew larger as you pulled more against the dead end. Bounce scrolling would probably have made the most sense, but that was patented by Apple.
![The new dialer and dialog box design.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/dialdialog.png)
The new dialer and dialog box design.
Photo by Ron Amadeo
The dialer received a little more love in Gingerbread. It became darker, and Google finally addressed the combination of sharp corners, rounded corners, and complete circles that it had going on. Now every corner was a sharp right angle. All the dial pad buttons were replaced with a weird underline, like some faint leftovers of what used to be a button. You were never really sure if you were supposed to see a button or not—our brains wanted to imagine the rest of the square.
The Wi-Fi network dialog is pictured to show off the rest of the system-wide changes. All the dialog box titles were changed from gray to black, every dialog box, dropdown, and button corner was sharpened up, and everything was a little bit darker. All these system-wide changes made all of Gingerbread look a lot less bubbly and more mature. The "all black everything" look wasn't necessarily the most welcoming color palette, but it certainly looked better than Android's previous gray-and-beige color scheme.
![The new Market, which added a massive green header.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/4market.png)
The new Market, which added a massive green header.
Photo by Ron Amadeo
While not exclusive to Gingerbread, with the launch of the new OS came "Android Market 2.0." Most of the list design was the same, but Google covered the top third of the screen with a massive green banner that was used for featured apps and navigation. The primary design inspiration here was probably the green Android mascot—the color is a perfect match. At a time when the OS was getting a darker design, the neon green banner and white list made the Market a lot brighter.
However, the same green background image was used across phones, which meant on lower resolution devices, the green banner was even bigger. Users complained so much about the wasted screen space that later updates would make the green banner scroll up with the content. At the time, horizontal mode was even worse—it would fill the left half of the screen with the static green banner.
![An app page from the Market showing the collapsible text section, the "My apps" section, and Google Books screenshots.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/5rest-of-market-and-books.png)
An app page from the Market showing the collapsible text section, the "My apps" section, and Google Books screenshots.
Photo by Ron Amadeo
App pages were redesigned with collapsible sections. Rather than having to scroll through a thousand-line description, text boxes were truncated to only the first few lines. After that, a "more" button needed to be tapped. This allowed users to easily scroll through the list and find things like pictures and "contact developer," which would usually be farther down the page.
The other parts of the Android homescreen wisely toned down the green monster. The rest of the app was mostly just the old Market with new green navigational elements. Any of the old tabbed interfaces were upgraded to swipeable tabs. In the right Gingerbread image, swiping right-to-left would switch from "Top Paid" to "Top Free," which made navigation a little easier.
Gingerbread came with the first of what would become the Google Play content stores: Google Books. The app was a basic book reader that would display books in a simple thumbnail grid. The "Get eBooks" link at the top of the screen opened the browser and loaded a mobile website where you could buy books.
Google Books and the “My Apps" page of the Market were both examples of early precursors to the Action Bar. Just like the current guidelines, a stickied top bar featured the app icon, the name of the screen within the app, and a few controls. The layout of these two apps was actually pretty modern looking.
![The new Google Maps.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps1.png)
The new Google Maps.
Photo by Ron Amadeo
Google Maps (which, again, at this point was on the Android Market and not exclusive to this version of Android) now featured another action bar precursor in the form of a top-aligned control bar. This version of an early action bar featured a lot of experimenting. The majority of the bar was taken up with a search box, but you could never type into the bar. Tapping on it would open the old search interface from Android 1.x, with a totally different bar design and bubbly buttons. This 2.3 bar wasn't anything more than a really big search button.
![The new business pages, which switched from black to white.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps2-Im-hungry.png)
The new business pages, which switched from black to white.
Photo by Ron Amadeo
Along with Places' new top billing in the app drawer came a redesigned interface. Unlike the rest of Gingerbread, this switched from black to white. Google also kept the old buttons with rounded corners. This new version of Maps helpfully displayed the hours of operation of a business, and it offered advanced search options like places that were currently open or thresholds for ratings and price. Reviews were brought to the surface, allowing a user to easily get a feel for the current business. It was now also possible to "star" a location from the search results and save it for later.
![The new YouTube design, which, amazingly, sort of matches the old Maps business page design.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/youtube22.png)
The new YouTube design, which, amazingly, sort of matches the old Maps business page design.
Photo by Ron Amadeo
The YouTube app seemed completely separate from the rest of Android, as if whoever designed this had no idea what Gingerbread would end up looking like. Highlights were red and gray instead of green and orange, and rather than the flat black of Gingerbread, YouTube featured bubbly buttons, tabs, and bars with rounded corners and heavy gradients. The new app did get a few things right, though. All the tabs could be horizontally swiped through, and the app finally added a vertical viewing mode for videos. Android seemed like such an uncoordinated effort at this stage. It's like someone told the YouTube team “make it black," and that was all the direction they were given. The only Android entity this seemed to match was the old Google Maps business page design.
Despite the weird design choices, the YouTube app had the best approximation yet of an action bar. Besides the bar at the top with an app logo and a few buttons, the rightmost button was labeled “more" and would bring up options that didnt fit in the bar. Today, this is called the “Overflow" button, and it's a standard UI piece.
![The new Google Talk, which supported voice and video calls, and the new Voice Actions interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/talkvoice.png)
The new Google Talk, which supported voice and video calls, and the new Voice Actions interface.
Photo by Ron Amadeo
One last update for Gingerbread came with Android 2.3.4, which brought a new version of Google Talk. Unlike the Nexus One, the Nexus S had a front-facing camera—and the redesigned version of Google Talk had voice and video calling. The colored indicators on the right of the friends list were used to indicate not only presence, but voice and video availability. A dot was text only, a microphone was text or voice, and a camera was text, voice, or video. If available, tapping on a voice or video icon would immediately start a call with that person.
Gingerbread is the oldest version of Android still supported by Google. Firing up a Gingerbread device and letting it sit for a few minutes will result in a ton of upgrades. Gingerbread will pull down Google Play Services, resulting in a ton of new API support, and it will upgrade to the very newest version of the Play Store. Open the Play Store and hit the update button, and just about every single Google app will be replaced with a modern version. We tried to keep this article authentic to the time Gingerbread was released, but a real user stuck on Gingerbread today will be treated to a flood of anachronisms.
Gingerbread is still supported because there are a good number of users still running the now ancient OS. Gingerbread's staying power is due to the extremely low system requirements, making it the go-to choice for slow, cheap phones. The next few versions of Android were much more exclusive and/or demanding on hardware. For instance, Android 3.0 Honeycomb is not open source, meaning it could only be ported to a device with Google's cooperation. It was also only for tablets, making Gingerbread the newest phone version of Android for a very long time. 4.0 Ice Cream Sandwich was the next phone release, but it significantly raised Android's system requirements, cutting off the low-end of the market. Google is hoping to get cheaper phones back on the update track with 4.4 KitKat, which brings the system requirements back down to 512MB of RAM. The passage of time helps, too—by now, even cheap SoCs have caught up to the demands of a 4.0-era version of Android.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/15/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
View File
@ -1,66 +0,0 @@
The history of Android
================================================================================
### Android 3.0 Honeycomb—tablets and a design renaissance ###
Despite all the changes made in Gingerbread, Android was still the ugly duckling of the mobile world. Compared to the iPhone, its level of polish and design just didn't hold up. On the other hand, one of the few operating systems that could stand up to iOS's aesthetic acumen was Palm's WebOS. WebOS was a cohesive, well-designed OS with several innovative features, and it was supposed to save the company from the relentless march of the iPhone.
A year after launch though, Palm was running out of cash. The company never saw the iPhone coming, and by the time WebOS was ready, it was too late. In April 2010, Hewlett-Packard purchased Palm for $1 billion. While HP bought a product with a great user interface, the lead designer of that interface, a man by the name of Matias Duarte, did not join HP. In May 2010, just before HP took control of Palm, Duarte jumped ship to Google. HP bought the bread, but Google hired the baker.
![The first Honeycomb device, the Motorola Xoom 10-inch tablet.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Motorola-XOOM-MZ604.jpg)
The first Honeycomb device, the Motorola Xoom 10-inch tablet.
At Google, Duarte was named the Director of Android User Experience. This was the first time someone was publicly in charge of the way Android looked. While Matias landed at Google during the launch of Android 2.2, the first version he truly impacted was Android 3.0, Honeycomb, released in February 2011.
By Google's own admission, Honeycomb was rushed out the door. Ten months prior, Apple modernized the tablet with the launch of the iPad, and Google wanted to respond as quickly as possible. Honeycomb was that response, a version of Android that ran on 10-inch touchscreens. Sadly, getting this OS to market was such a priority that corners were cut to save time.
The new OS was for tablets only—phones would not be updated to Honeycomb, which spared Google the difficult problem of making the OS work on wildly different screen sizes. But with phone support off the table, a Honeycomb source drop never happened. Previous Android versions were open source, enabling the hacking community to port the latest version to all sorts of different devices. Google didn't want app developers to feel pressured to support half-broken Honeycomb phone ports, so Google kept the source to itself and strictly controlled what could and couldn't have Honeycomb. The rushed development led to problems with the software, too. At launch, Honeycomb wasn't particularly stable, SD cards didn't work, and Adobe Flash—one of Android's big differentiators—wasn't supported.
One of the few devices that could have Honeycomb was [the Motorola Xoom][1], the flagship product for the new OS. The Xoom was a 10-inch, 16:9 tablet with 1GB of RAM and a dual-core, 1GHz Nvidia Tegra 2 processor. Despite being the launch device of a new version of Android where Google controlled the updates directly, the device wasn't called a "Nexus." The most likely reason for this was that Google didn't feel confident enough in the product to call it a flagship.
Nevertheless, Honeycomb was a major milestone for Android. With an experienced designer in charge, the entire Android user interface was rebuilt, and most of the erratic app designs were brought to heel. Android's default apps finally looked like pieces of a cohesive whole with similar layouts and theming across the board. Redesigning Android would be a multi-version project though—Honeycomb was just the start of getting Android whipped into shape. This first draft laid the groundwork for how future versions of Android would function, but it also used a heavy-handed sci-fi theme that Google would spend the next few versions toning down.
![The home screens of Honeycomb and Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/homeskreen.png)
The home screens of Honeycomb and Gingerbread.
Photo by Ron Amadeo
While Gingerbread only experimented with a sci-fi look in its photon wallpaper, Honeycomb went full sci-fi with a Tron-inspired theme for the entire OS. Everything was made black, and if you needed a contrasting color, you could choose from a few different shades of blue. Everything that was made blue was also given a "glow" effect, making the entire OS look like it was powered by alien technology. The default background was a holographic grid of hexagons (a Honeycomb! get it?) that looked like it was the floor of a teleport pad on a spaceship.
The most important change of Honeycomb was the addition of the system bar. The Motorola Xoom had no hardware buttons other than power and volume, so a large black bar was added along the bottom of the screen that housed the navigational buttons. This meant the default Android interface no longer needed specialized hardware buttons. Previously, Android couldn't function without hardware Back, Menu, and Home keys. Now, with the software supplying all the necessary buttons, anything with a touch screen was able to run Android.
The biggest benefit of the new software buttons was flexibility. The new app guidelines stated that apps should no longer require a hardware menu button, but for those that do, Honeycomb detects this and adds a fourth button to the system bar that allows these apps to work. The other flexibility attribute of software buttons was that they could change orientation with the device. Other than the power and volume buttons, the Xoom's orientation really wasn't important. The system bar always sat on the "bottom" of the device from the user's perspective. The trade off was that a big bar along the bottom of the screen definitely sucked up some screen real estate. To save space on 10-inch tablets, the status bar was merged into the system bar. All the usual status duties lived on the right side—there was battery and connectivity status, the time, and notification icons.
The whole layout of the home screen changed, placing UI pieces in each of the four corners of the device. The bottom left housed the previously discussed navigational buttons, the bottom right was for status and notifications, the top left displayed text search and voice search, and the top right had buttons for the app drawer and adding widgets.
![The new lock screen and Recent Apps interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/lockscreen-and-recent.png)
The new lock screen and Recent Apps interface.
Photo by Ron Amadeo
(Since the Xoom was a [heavy] 10-inch, 16:9 tablet, it was primarily meant to be used horizontally. Most apps also supported portrait mode, though, so for the sake of our formatting, we're using mostly portrait mode shots. Just keep in mind the Honeycomb shots come from a 10-inch tablet, and the Gingerbread shots come from a 3.7-inch phone. The densities of information are not directly comparable.)
The unlock screen—after switching from a menu button to a rotary dial to slide-to-unlock—removed any required accuracy from the unlock process by switching to a circle unlock. Swiping from the center outward in any direction would unlock the device. Like the rotary unlock, this was much nicer ergonomically than forcing your finger to follow a perfectly straight path.
The strip of thumbnails in the second picture was the interface brought up by the newly christened "Recent Apps" button, now living next to Back and Home. Rather than the group of icons brought up in Gingerbread by long-pressing on the home button, Honeycomb showed app icons and thumbnails on the screen, which made it a lot easier to switch between tasks. Recent Apps was clearly inspired by Duarte's "card" multitasking in WebOS, which used full-screen thumbnails to switch tasks. This design offered the same ease-of-recognition as WebOS's task switcher, but the smaller thumbnails allowed more apps to fit on screen at once.
While this implementation of Recent Apps may look like what you get on a current device, this version was very early. The list didn't scroll, meaning it showed seven apps in portrait mode and only five apps in horizontal mode. Anything beyond that was bumped off the list. You also couldn't swipe away thumbnails to close apps—this was just a static list.
Here we see the Tron influence in full effect: the thumbnails had blue outlines and an eerie glow around them. This screenshot also shows a benefit of software buttons—context. The back button closed the list of thumbnails, so instead of the normal arrow, this pointed down.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/16/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2011/03/ars-reviews-the-motorola-xoom/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
View File
@ -1,208 +0,0 @@
XLCYun translating.
Syncthing: A Private, And Secure Tool To Sync Files/Folders Between Computers
================================================================================
### Introduction ###
**Syncthing** is a free, open source tool that can be used to sync files/folders between your networked computers. Unlike other sync tools, such as **BitTorrent Sync** or **Dropbox**, Syncthing transfers data directly from one system to another, and it is completely open source, secure and private. All of your precious data will be stored on your own systems, so that you have full control over your files and folders, and none of it is stored on any third party systems. You also get to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the Internet.
All communication is encrypted using TLS, so your data is safe from prying eyes. Syncthing has a responsive and powerful WebGUI which helps users easily add, delete and manage the directories to be synced over the network. Using Syncthing, you can sync multiple folders to multiple systems at a time. Syncthing is a simple, portable, yet powerful tool in terms of installation and usage. Since all files/folders are transferred directly from one computer to another, you don't have to worry about purchasing extra space from your cloud provider. All you need is a stable LAN/WAN connection and enough disk space on your systems. It supports all modern operating systems, including GNU/Linux, Windows, Mac OS X, and of course Android.
### Installation ###
For the purpose of this tutorial, we will be using two systems, one running Ubuntu 14.04 LTS, and the other running Ubuntu 14.10 server. To easily distinguish these two systems, we will call them **system1** and **system2**.
### System1 Details: ###
- **OS**: Ubuntu 14.04 LTS server;
- **Hostname**: server1.unixmen.local;
- **IP Address**: 192.168.1.150.
- **System user**: sk (You can use your own)
- **Sync Directory**: /home/Sync/ (Will be created by default by Syncthing)
### System2 Details: ###
- **OS**: Ubuntu 14.10 server;
- **Hostname**: server.unixmen.local;
- **IP Address**: 192.168.1.151.
- **System user**: sk (You can use your own)
- **Sync Directory**: /home/Sync/ (Will be created by default by Syncthing)
### Creating a User For Syncthing On System 1 & System 2: ###
Run the following commands on both systems to create the user for Syncthing and the directory to be synced between the two systems:
sudo useradd sk
sudo passwd sk
### Install Syncthing On System1 And System2: ###
You should do the following steps on both System 1 and System 2.
Download the latest version from the [official download page][1]. As I am using a 64-bit system, I downloaded the 64-bit package.
wget https://github.com/syncthing/syncthing/releases/download/v0.10.20/syncthing-linux-amd64-v0.10.20.tar.gz
Extract the downloaded file:
tar xzvf syncthing-linux-amd64-v0.10.20.tar.gz
cd into the extracted folder:
cd syncthing-linux-amd64-v0.10.20/
Copy the executable file "syncthing" to a directory in your **$PATH**:
sudo cp syncthing /usr/local/bin/
Now, run the following command to start Syncthing for the first time.
syncthing
When you run the above command, syncthing will generate a configuration and some keys and then start the admin GUI in your browser. You should see something like below.
Sample output:
[monitor] 15:40:27 INFO: Starting syncthing
15:40:27 INFO: Generating RSA key and certificate for syncthing...
[BQXVO] 15:40:34 INFO: syncthing v0.10.20 (go1.4 linux-386 default) unknown-user@syncthing-builder 2015-01-13 16:27:47 UTC
[BQXVO] 15:40:34 INFO: My ID: BQXVO3D-VEBIDRE-MVMMGJI-ECD2PC3-T5LT3JB-OK4Z45E-MPIDWHI-IRW3NAZ
[BQXVO] 15:40:34 INFO: No config file; starting with empty defaults
[BQXVO] 15:40:34 INFO: Edit /home/sk/.config/syncthing/config.xml to taste or use the GUI
[BQXVO] 15:40:34 INFO: Starting web GUI on http://127.0.0.1:8080/
[BQXVO] 15:40:34 INFO: Loading HTTPS certificate: open /home/sk/.config/syncthing/https-cert.pem: no such file or directory
[BQXVO] 15:40:34 INFO: Creating new HTTPS certificate
[BQXVO] 15:40:34 INFO: Generating RSA key and certificate for server1...
[BQXVO] 15:41:01 INFO: Starting UPnP discovery...
[BQXVO] 15:41:07 INFO: Starting local discovery announcements
[BQXVO] 15:41:07 INFO: Starting global discovery announcements
[BQXVO] 15:41:07 OK: Ready to synchronize default (read-write)
[BQXVO] 15:41:07 INFO: Device BQXVO3D-VEBIDRE-MVMMGJI-ECD2PC3-T5LT3JB-OK4Z45E-MPIDWHI-IRW3NAZ is "server1" at [dynamic]
[BQXVO] 15:41:07 INFO: Completed initial scan (rw) of folder default
Syncthing has been successfully initialized, and the web admin interface can be accessed from your browser at **http://localhost:8080**. As you can see in the above output, Syncthing has automatically created a folder called **default** for you, in a directory called **Sync** in your **home** directory.
By default, the Syncthing WebGUI can only be accessed from localhost itself. To access the WebGUI from remote systems, you need to make the following change on both systems.
First, stop the Syncthing initialization process by pressing CTRL+C. You will be returned to the terminal.
Edit the file **config.xml**:
sudo nano ~/.config/syncthing/config.xml
Find this directive:
[...]
<gui enabled="true" tls="false">
    <address>127.0.0.1:8080</address>
    <apikey>-Su9v0lW80JWybGjK9vNK00YDraxXHGP</apikey>
</gui>
[...]
In the **<address>** field, change **127.0.0.1:8080** to **0.0.0.0:8080**. Your config.xml should then look like this:
<gui enabled="true" tls="false">
    <address>0.0.0.0:8080</address>
    <apikey>-Su9v0lW80JWybGjK9vNK00YDraxXHGP</apikey>
</gui>
Save and close the file.
Now, restart Syncthing on both systems by entering the following command:
syncthing
### Access the WebGUI ###
Now, open **http://ip-address:8080/** in your browser. You will see the following screen:
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Syncthing-server1-Mozilla-Firefox_001.png)
The WebGUI has two panes. In the left pane, you see the list of folders to be synced. As I mentioned before, the folder **default** was automatically created for you while initializing Syncthing. If you want to sync more folders, you can add them using the **Add Folder** button.
In the right pane, you see the number of devices connected. Currently there is only one device, the computer you are running this on.
### Configure Syncthing Web GUI ###
To enhance security, let us enable TLS and set up an administrative user and password for accessing the WebGUI. To do that, click on the gear button in the top right corner and select **Settings**.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Menu_002.png)
Enter an admin username and password. In my case it is admin/ubuntu; you should use a strong password. Also, check the box that says **Use HTTPS for GUI**.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Syncthing-server1-Mozilla-Firefox_004.png)
Click the Save button. You'll now be asked to restart Syncthing to activate the changes. Click Restart.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Selection_005.png)
Refresh your web browser. You'll see an SSL warning like the one below. Click the button that says **I Understand the Risks**, then click the Add Exception button to add this page to the browser's trusted list.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Untrusted-Connection-Mozilla-Firefox_006.png)
Enter the administrative user and password which we configured in the previous steps. In my case it's **admin/ubuntu**.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Authentication-Required_007.png)
We have now secured the WebGUI. Don't forget to do the same steps on both servers.
### Connect Servers To Each Other ###
To sync folders between systems, you must tell them about each other. This is accomplished by exchanging "device IDs". You can find a system's ID in the web GUI by selecting the "gear menu" (top right) and "Show ID".
For example, here is my System 1 ID.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Syncthing-server1-Mozilla-Firefox_008.png)
Copy the ID, and go to the other system's (System 2) WebGUI. There, click on Add Device on the right side.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_010.png)
The following screen should appear. Paste the **System 1 ID** into the Device section. Enter the device name (optional). In the Addresses field, you can either enter the IP address of the other system or leave it as the default. The default value is **dynamic**. Finally, select the folder to be synced. In our case, the sync folder is **default**.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_009.png)
Once you are done, click on the Save button. You'll be asked to restart Syncthing; click the Restart button to activate the changes.
Now, go to the **System 1** WebGUI, where you'll see that a request to connect and sync has been sent from System 2. Click the **Add** button. Now System 2 will ask System 1 to share and sync the folder called "default". Click the **Share** button.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Selection_013.png)
Next, restart the Syncthing service on System 1 to activate the changes.
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Selection_014.png)
Wait a few seconds, up to about 60, and you'll see the two systems successfully connected and synced to each other.
You can verify it under the Add Device section of the WebGUI.
System 1 WebGUI console after adding System 2:
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_016.png)
System 2 WebGUI console after adding System 1:
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_018.png)
Now, put any file or folder in either system's "**default**" folder. You will see the file/folder synced to the other system automatically.
That's it! Happy Syncing!!
Cheers!!!
- [Syncthing Website][2]
--------------------------------------------------------------------------------
via: http://www.unixmen.com/syncthing-private-secure-tool-sync-filesfolders-computers/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/sk/
[1]:https://github.com/syncthing/syncthing/releases/tag/v0.10.20
[2]:http://syncthing.net/

View File

@ -1,3 +1,5 @@
translating...
Fix Minimal BASH like line editing is supported GRUB Error In Linux
================================================================================
The other day when I [installed Elementary OS in dual boot with Windows][1], I encountered a Grub error at reboot time. I was presented with a command line showing this error message:
@ -83,4 +85,4 @@ via: http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux
[1]:http://itsfoss.com/guide-install-elementary-os-luna/
[2]:http://www.gnu.org/software/grub/
[3]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/
[4]:http://itsfoss.com/fix-failed-fetch-cdrom-aptget-update-add-cdroms/

View File

@ -1,3 +1,5 @@
Translating by DongShuaike
How to Provision Swarm Clusters using Docker Machine
================================================================================
Hi all, today we'll learn how we can deploy Swarm clusters using Docker Machine. Swarm serves the standard Docker API, so any tool that can communicate with a Docker daemon can use it to transparently scale to multiple hosts. Docker Machine is an application that helps create Docker hosts on our computer, on cloud providers, and inside our own data center. It provides an easy solution for creating servers, installing Docker on them, and then configuring the Docker client according to the user's configuration and requirements. We can provision Swarm clusters with any driver we need, and they are highly secured with TLS encryption.
@ -122,4 +124,4 @@ via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-mach
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/

View File

@ -1,360 +0,0 @@
ictlyh Translating
PHP Security
================================================================================
![](http://www.codeproject.com/KB/PHP/363897/php_security.jpg)
### Introduction ###
When offering an Internet service, you must always keep security in mind as you develop your code. It may appear that most PHP scripts aren't sensitive to security concerns; this is mainly due to the large number of inexperienced programmers working in the language. However, there is no reason for you to have an inconsistent security policy based on a rough guess at your code's significance. The moment you put anything financially interesting on your server, it becomes likely that someone will try to casually hack it. Create a forum program or any sort of shopping cart, and the probability of attack rises to a dead certainty.
### Background ###
Here are a few general security guidelines for securing your web content:
#### Don't trust forms ####
Hacking forms is trivial. Yes, by using a silly JavaScript trick, you may be able to limit your form to allow only the numbers 1 through 5 in a rating field. The moment someone turns JavaScript off in their browser or posts custom form data, your client-side validation flies out the window.
Users interact with your scripts primarily through form parameters, and therefore they're the biggest security risk. What's the lesson? Always validate the data that gets passed to any PHP script in the PHP script. In this article, we show you how to analyze and protect against cross-site scripting (XSS) attacks, which can hijack your user's credentials (and worse). You'll also see how to prevent the MySQL injection attacks that can taint or destroy your data.
#### Don't trust users ####
Assume that every piece of data your website gathers is laden with harmful code. Sanitize every piece, even if you're positive that nobody would ever try to attack your site.
#### Turn off global variables ####
The biggest security hole you can have is the register_globals configuration parameter being enabled. Mercifully, it's turned off by default in PHP 4.2 and later. If **register_globals** is on, then you can disable this feature by setting the register_globals variable to Off in your server's php.ini file:
register_globals = Off
Novice programmers view registered globals as a convenience, but they don't realize how dangerous this setting is. A server with global variables enabled automatically assigns global variables to any form parameters. For an idea of how this works and why this is dangerous, let's look at an example.
Let's say that you have a script named process.php that enters form data into your user database. The original form looked like this:
<input name="username" type="text" size="15" maxlength="64">
When running process.php, PHP with registered globals enabled places the value of this parameter into the $username variable. This saves some typing over accessing them through **$_POST['username']** or **$_GET['username']**. Unfortunately, this also leaves you open to security problems, because PHP sets a variable for any value sent to the script via a GET or POST parameter, and that is a big problem if you didn't explicitly initialize the variable and you don't want someone to manipulate it.
Take the script below, for example—if the $authorized variable is true, it shows confidential data to the user. Under normal circumstances, the $authorized variable is set to true only if the user has been properly authenticated via the hypothetical authenticated_user() function. But if you have **register_globals** active, anyone could send a GET parameter such as authorized=1 to override this:
<?php
// Define $authorized = true only if user is authenticated
if (authenticated_user()) {
    $authorized = true;
}
?>
The moral of the story is that you should pull form data from predefined server variables. All data passed on to your web page via a posted form is automatically stored in a large array called $_POST, and all GET data is stored in a large array called **$_GET**. File upload information is stored in a special array called $_FILES. In addition, there is a combined variable called $_REQUEST.
To access the username field from a POST method form, use **$_POST['username']**. Use **$_GET['username']** if the username is in the URL. If you don't care where the value came from, use **$_REQUEST['username']**.
<?php
$post_value = $_POST['post_value'];
$get_value = $_GET['get_value'];
$some_variable = $_REQUEST['some_value'];
?>
$_REQUEST is a union of the $_GET, $_POST, and $_COOKIE arrays. If you have two or more values of the same parameter name, be careful of which one PHP uses. The default order is cookie, POST, then GET.
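As a quick, hypothetical illustration of why relying on **$_REQUEST** can surprise you (the parameter name here is made up, and the exact precedence depends on your php.ini's variables_order/request_order settings):

<?php
// Hypothetical request: /page.php?id=42 arriving with a cookie named "id".
// Depending on the ini settings above, $_REQUEST['id'] may hold the
// client-controlled cookie value rather than the query-string value.
// The safest habit is to read from the superglobal you actually mean:
$id = isset($_GET['id']) ? $_GET['id'] : null;
?>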
#### Recommended Security Configuration Options ####
There are several PHP configuration settings that affect security features. Here are the ones that should obviously be used for production servers:
- **register_globals** set to off
- **safe_mode** set to off
- **display_errors** set to off. This is the setting that controls visible error reporting, which sends a message to the user's browser if something goes wrong (the **error_reporting** directive only selects which errors are raised). For production servers, use error logging instead. Development servers can enable visible error reporting as long as they're behind a firewall.
- Disable these functions: system(), exec(), passthru(), shell_exec(), proc_open(), and popen().
- **open_basedir** set for both the /tmp directory (so that session information can be stored) and the web root so that scripts cannot access files outside a selected area.
- **expose_php** set to off. This feature adds a PHP signature that includes the version number to the Apache headers.
- **allow_url_fopen** set to off. This isn't strictly necessary if you're careful about how you access files in your code—that is, you validate all input parameters.
- **allow_url_include** set to off. There's really no sane reason for anyone to want to access include files via HTTP.
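Taken together, these recommendations amount to a php.ini fragment along the following lines. Treat this as a sketch rather than a drop-in config: the directive names are standard, but the open_basedir paths are assumptions you must adapt, and some directives (register_globals, safe_mode) exist only on older PHP versions:

; Hardened production settings (sketch; verify against your PHP version)
register_globals = Off
safe_mode = Off
display_errors = Off
log_errors = On
disable_functions = system,exec,passthru,shell_exec,proc_open,popen
open_basedir = "/tmp/:/var/www/html/"
expose_php = Off
allow_url_fopen = Off
allow_url_include = Off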
In general, if you find code that wants to use these features, you shouldn't trust it. Be especially careful of anything that wants to use a function such as system()—it's almost certainly flawed.
With these settings now behind us, let's look at some specific attacks and the methods that will help you protect your server.
### SQL Injection Attacks ###
Because the queries that PHP passes to MySQL databases are written in the powerful SQL programming language, you run the risk of someone attempting an SQL injection attack by using MySQL in web query parameters. By inserting malicious SQL code fragments into form parameters, an attacker attempts to break into (or disable) your server.
Let's say that you have a form parameter that you eventually place into a variable named $product, and you create some SQL like this:
$sql = "select * from pinfo where product = '$product'";
If that parameter came straight from the form, use database-specific escapes with PHP's native functions, like this:
$sql = "select * from pinfo where product = '" .
       mysql_real_escape_string($product) . "'";
If you don't, someone might just decide to throw this fragment into the form parameter:
39'; DROP pinfo; SELECT 'FOO
Then the result of $sql is:
select * from pinfo where product = '39'; DROP pinfo; SELECT 'FOO'
Because the semicolon is MySQL's statement delimiter, the database processes these three statements:
select * from pinfo where product = '39'
DROP pinfo
SELECT 'FOO'
Well, there goes your table.
Note that this particular syntax won't actually work with PHP and MySQL, because the **mysql_query()** function allows just one statement to be processed per request. However, a subquery will still work.
To prevent SQL injection attacks, do two things:
- Always validate all parameters. For example, if something needs to be a number, make sure that it's a number.
- Always use the mysql_real_escape_string() function on the data to escape any single or double quotes it contains.
**Note: To automatically escape any form data, you can turn on Magic Quotes, but be aware that this feature is deprecated as of PHP 5.3 and removed in PHP 5.4, so don't rely on it.**
Some MySQL damage can be avoided by restricting your MySQL user privileges. Any MySQL account can be restricted to only do certain kinds of queries on selected tables. For example, you could create a MySQL user who can select rows but nothing else. However, this is not terribly useful for dynamic data, and, furthermore, if you have sensitive customer information, it might be possible for someone to have access to some data that you didn't intend to make available. For example, a user accessing account data could try to inject some code that accesses another account number instead of the one assigned to the current session.
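Putting those two rules together, a minimal sketch of safely building the earlier pinfo query might look like this; the alphanumeric validation pattern is an assumption for illustration, and an open MySQL connection is assumed since mysql_real_escape_string() requires one:

<?php
// Sketch: validate first, then escape, before building the query.
$product = isset($_POST['product']) ? $_POST['product'] : '';
// 1. Validate: assume product codes are short and alphanumeric.
if (!preg_match('/^[A-Za-z0-9_-]{1,64}$/', $product)) {
    die('Invalid product code.');
}
// 2. Escape anyway, in case the validation rule is ever loosened.
$sql = "select * from pinfo where product = '" .
       mysql_real_escape_string($product) . "'";
?>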
### Preventing Basic XSS Attacks ###
XSS stands for cross-site scripting. Unlike most attacks, this exploit works on the client side. The most basic form of XSS is to put some JavaScript in user-submitted content to steal the data in a user's cookie. Since most sites use cookies and sessions to identify visitors, the stolen data can then be used to impersonate that user—which is deeply troublesome when it's a typical user account, and downright disastrous if it's the administrative account. If you don't use cookies or session IDs on your site, your users aren't vulnerable, but you should still be aware of how this attack works.
Unlike MySQL injection attacks, XSS attacks are difficult to prevent. Yahoo!, eBay, Apple, and Microsoft have all been affected by XSS. Although the attack doesn't involve PHP, you can use PHP to strip user data in order to prevent attacks. To stop an XSS attack, you have to restrict and filter the data a user submits to your site. It is for this precise reason that most online bulletin boards don't allow the use of HTML tags in posts and instead replace them with custom tag formats such as **[b]** and **[linkto]**.
Let's look at a simple script that illustrates how to prevent some of these attacks. For a more complete solution, use SafeHTML, discussed later in this article.
function transform_HTML($string, $length = null) {
    // Helps prevent XSS attacks
    // Remove dead space.
    $string = trim($string);
    // Prevent potential Unicode codec problems.
    $string = utf8_decode($string);
    // HTMLize HTML-specific characters.
    $string = htmlentities($string, ENT_NOQUOTES);
    $string = str_replace("#", "&#35;", $string);
    $string = str_replace("%", "&#37;", $string);
    $length = intval($length);
    if ($length > 0) {
        $string = substr($string, 0, $length);
    }
    return $string;
}
This function transforms HTML-specific characters into HTML literals. A browser renders any HTML run through this script as text with no markup. For example, consider this HTML string:
<STRONG>Bold Text</STRONG>
Normally, this HTML would render as follows:
Bold Text
However, when run through **transform_HTML()**, it renders as the original input. The reason is that the tag characters are HTML entities in the processed string. The resulting string from **transform_HTML()** in plaintext looks like this:
&lt;STRONG&gt;Bold Text&lt;/STRONG&gt;
The essential piece of this function is the htmlentities() function call that transforms <, >, and & into their entity equivalents of **&lt;**, **&gt;**, and **&amp;**. Although this takes care of the most common attacks, experienced XSS hackers have another sneaky trick up their sleeve: Encoding their malicious scripts in hexadecimal or UTF-8 instead of normal ASCII text, hoping to circumvent your filters. They can send the code along as a GET variable in the URL, saying, "Hey, this is hexadecimal code, but could you run it for me anyway?" A hexadecimal example looks something like this:
<a href="http://host/a.php?variable=%22%3e %3c%53%43%52%49%50%54%3e%44%6f%73%6f%6d%65%74%68%69%6e%67%6d%61%6c%69%63%69%6f%75%73%3c%2f%53%43%52%49%50%54%3e">
But when the browser renders that information, it turns out to be:
<a href="http://host/a.php?variable="> <SCRIPT>Dosomethingmalicious</SCRIPT>
To prevent this, transform_HTML() takes the additional steps of converting # and % signs into their entity equivalents, shutting down hex attacks, and converting UTF-8 encoded data.
Finally, just in case someone tries to overload a string with a very long input, hoping to crash something, you can add an optional $length parameter to trim the string to the maximum length you specify.
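Here is a short, hypothetical usage sketch of the function above:

<?php
// Assumes transform_HTML() from above is already defined.
$comment = '<STRONG>Nice post!</STRONG> <script>alert("XSS")</script>';
// Render the user input as inert text, trimmed to at most 200 characters.
echo transform_HTML($comment, 200);
// The browser now displays the tags literally instead of executing them.
?>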
### Using SafeHTML ###
The problem with the previous script is that it is simple, and it does not allow for any kind of user markup. Unfortunately, there are hundreds of ways to try to sneak JavaScript past someone's filters, and short of stripping all HTML from someone's input, there's no way of stopping it.
Currently, there's no single script that's guaranteed to be unbreakable, though there are some that are better than most. There are two approaches to security, whitelisting and blacklisting, and whitelisting tends to be less complicated and more effective.
One whitelisting solution is the SafeHTML anti-XSS parser from PixelApes.
SafeHTML is smart enough to recognize valid HTML, so it can hunt and strip any dangerous tags. It does its parsing with another package called HTMLSax.
To install and use SafeHTML, do the following:
1. Go to [http://pixel-apes.com/safehtml/?page=safehtml][1] and download the latest version of SafeHTML.
1. Put the files in the classes directory on your server. This directory contains everything that SafeHTML and HTMLSax need to function.
1. Include the SafeHTML class file (safehtml.php) in your script.
1. Create a new SafeHTML object called $safehtml.
1. Sanitize your data with the $safehtml->parse() method.
Here's a complete example:
<?php
/* If you're storing the HTMLSax3.php in the /classes directory, along
   with the safehtml.php script, define XML_HTMLSAX3 as a null string. */
define('XML_HTMLSAX3', '');
// Include the class file.
require_once('classes/safehtml.php');
// Define some sample bad code.
$data = "This data would raise an alert <script>alert('XSS Attack')</script>";
// Create a safehtml object.
$safehtml = new safehtml();
// Parse and sanitize the data.
$safe_data = $safehtml->parse($data);
// Display result.
echo 'The sanitized data is <br />' . $safe_data;
?>
If you want to sanitize any other data in your script, you don't have to create a new object; just use the $safehtml->parse() method throughout your script.
#### What Can Go Wrong? ####
The biggest mistake you can make is assuming that this class completely shuts down XSS attacks. SafeHTML is a fairly complex script that checks for almost everything, but nothing is guaranteed. You still want to do the parameter validation that applies to your site. For example, this class doesn't check the length of a given variable to ensure that it fits into a database field. It doesn't check for buffer overflow problems.
XSS hackers are creative and use a variety of approaches to try to accomplish their objectives. Just look at RSnake's XSS tutorial at [http://ha.ckers.org/xss.html][2] to see how many ways there are to try to sneak code past someone's filters. The SafeHTML project has good programmers working overtime to try to stop XSS attacks, and it has a solid approach, but there's no guarantee that someone won't come up with some weird and fresh approach that could short-circuit its filters.
**Note: For an example of the powerful effects of XSS attacks, check out [http://namb.la/popular/tech.html][3], which shows a step-by-step approach to creating the JavaScript XSS worm that overloaded the MySpace servers.**
### Protecting Data with a One-Way Hash ###
This script performs a one-way transformation on data—in other words, it can make a hash signature of someone's password, but you can't ever decrypt it and go back to the original password. Why would you want to do that? The classic application is storing passwords. An administrator doesn't need to know users' passwords—in fact, it's a good idea that only the user knows his or her password. The system (and the system alone) should be able to identify a correct password; this has been the Unix password security model for years. One-way password security works as follows:
1. When a user or administrator creates or changes an account password, the system hashes the password and stores the result. The host system discards the plaintext password.
1. When the user logs in to a system via any means, the entered password is again hashed.
1. The host system throws away the plaintext password entered.
1. This newly hashed password is compared against the stored hash.
1. If the hashed passwords match, then the system grants access.
The host system does this without ever knowing the original password; in fact, the original value is completely irrelevant. As a side effect, should someone break into your system and steal your password database, the intruder will have a bunch of hashed passwords without any way of reversing them to find the originals. Of course, given enough time, computer power, and poorly chosen user passwords, an attacker could probably use a dictionary attack to figure out the passwords. Therefore, don't make it easy for people to get their hands on your password database, and if someone does, have everyone change their passwords.
#### Encryption Vs Hashing ####
Technically speaking, this process is not encryption. It is a hash, which is different from encryption for two reasons:
- Unlike in encryption, the data cannot be decrypted.
- It's possible (but extremely unlikely) that two different strings will produce the same hash. There's no guarantee that a hash is unique, so don't try to use a hash as something like a unique key in a database.
function hash_ish($string) {
    return md5($string);
}
The md5() function returns a 32-character hexadecimal string, based on the RSA Data Security Inc. Message-Digest Algorithm (also known, conveniently enough, as MD5). You can then insert that 32-character string into your database, compare it against other md5'd strings, or just adore its 32-character perfection.
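As a sketch of how the login flow described above might use hash_ish(), assuming a hypothetical helper fetch_stored_hash_for() that reads the stored 32-character hash for a user:

<?php
// Sketch: verify a password without ever storing the plaintext.
function password_matches($username, $entered_password) {
    $stored_hash = fetch_stored_hash_for($username); // hypothetical lookup
    return $stored_hash === hash_ish($entered_password);
}
?>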
#### Hacking the Script ####
It is virtually impossible to decrypt MD5 data. That is, it's very hard. However, you still need good passwords, because it's still easy to make a database of hashes for the entire dictionary. There are online MD5 dictionaries where you can enter **06d80eb0c50b49a509b49f2424e8c805** and get a result of "dog." Thus, even though MD5s can't technically be decrypted, they're still vulnerable—and if someone gets your password database, you can be sure that they'll be consulting an MD5 dictionary. Thus, it's in your best interests when creating password-based systems that the passwords are long (a minimum of six characters and preferably eight) and contain both letters and numbers. And make sure that the password isn't in the dictionary.
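One common hardening step the article doesn't cover is salting: mixing a random, per-user value into the hash so that precomputed MD5 dictionaries stop working. A hedged sketch of the idea, with an assumed "salt:hash" storage format:

<?php
// Sketch: per-user salted hashing. The salt is stored with the hash.
function make_salted_hash($password) {
    $salt = substr(md5(uniqid(rand(), true)), 0, 8); // random 8-char salt
    return $salt . ':' . md5($salt . $password);
}
function check_salted_hash($password, $stored) {
    list($salt, $hash) = explode(':', $stored);
    return $hash === md5($salt . $password);
}
?>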
### Encrypting Data with Mcrypt ###
MD5 hashes work just fine if you never need to see your data in readable form. Unfortunately, that's not always an option—if you offer to store someone's credit card information in encrypted format, you need to decrypt it at some later point.
One of the easiest solutions is the Mcrypt module, an add-in for PHP that allows high-grade encryption. The Mcrypt library offers more than 30 ciphers to use in encryption and the possibility of a passphrase that ensures that only you (or, optionally, your users) can decrypt data.
Let's see some hands-on use. The following script contains functions that use Mcrypt to encrypt and decrypt data:
<?php
$data = "Stuff you want encrypted";
$key = "Secret passphrase used to encrypt your data";
// Pass the mcrypt constants themselves, not quoted constant names.
$cipher = MCRYPT_SERPENT;
$mode = MCRYPT_MODE_CBC;
function encrypt($data, $key, $cipher, $mode) {
    // Encrypt data
    return (string) base64_encode(
        mcrypt_encrypt(
            $cipher,
            substr(md5($key), 0, mcrypt_get_key_size($cipher, $mode)),
            $data,
            $mode,
            substr(md5($key), 0, mcrypt_get_block_size($cipher, $mode))
        )
    );
}
function decrypt($data, $key, $cipher, $mode) {
    // Decrypt data
    return (string) mcrypt_decrypt(
        $cipher,
        substr(md5($key), 0, mcrypt_get_key_size($cipher, $mode)),
        base64_decode($data),
        $mode,
        substr(md5($key), 0, mcrypt_get_block_size($cipher, $mode))
    );
}
?>
The **mcrypt_encrypt()** function requires several pieces of information:
- The data to be encrypted.
- The passphrase used to encrypt and unlock your data, also known as the key.
- The cipher used to encrypt the data, which is the specific algorithm used to encrypt the data. This script uses **MCRYPT_SERPENT**, but you can choose from an array of fancy-sounding ciphers, including **MCRYPT_TWOFISH**, **MCRYPT_RC2**, **MCRYPT_DES**, and **MCRYPT_LOKI97**.
- The mode used to encrypt the data. There are several modes you can use, including Electronic Codebook and Cipher Feedback. This script uses **MCRYPT_MODE_CBC**, Cipher Block Chaining.
- An **initialization vector**—also known as an IV, or a seed—an additional bit of binary data used to seed the encryption algorithm. That is, it's something extra thrown in to make the algorithm harder to crack.
- The length of the string needed for the key and IV, which vary by cipher and block. Use the **mcrypt_get_key_size()** and **mcrypt_get_block_size()** functions to find the appropriate length; then trim the key value to the appropriate length with a handy **substr()** function. (If the key is shorter than the required value, don't worry—Mcrypt pads it with zeros.)
If someone steals both your data and your passphrase, they can just cycle through the ciphers until finding the one that works. Thus, we apply the additional security of using the **md5()** function on the key before we use it, so even having both data and passphrase won't get the intruder what she wants.
An intruder would need the function, the data, and the passphrase all at once—and if that is the case, they probably have complete access to your server, and you're hosed anyway.
There's a small data-storage format problem here. Mcrypt returns its encrypted data in an ugly binary format that causes horrific errors when you try to store it in certain MySQL fields. Therefore, we use the **base64_encode()** and **base64_decode()** functions to transform the data into an SQL-compatible alphabetical format for storing and retrieving rows.
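Tying it together, a round trip through the two functions above might look like this sketch:

<?php
// Sketch: encrypt, store/fetch, then decrypt, using the variables and
// functions defined in the script above.
$secret = encrypt($data, $key, $cipher, $mode);
// ... store $secret in a TEXT column, read it back later ...
echo decrypt($secret, $key, $cipher, $mode); // "Stuff you want encrypted"
?>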
#### Hacking the Script ####
In addition to experimenting with various encryption methods, you can add some convenience to this script. For example, rather than providing the key and mode every time, you could declare them as global constants in an included file.
### Generating Random Passwords ###
Random (but difficult-to-guess) strings are important in user security. For example, if someone loses a password and you're using MD5 hashes, you won't be able to, nor should you want to, look it up. Instead, you should generate a secure random password and send that to the user. Another application for random number generation is creating activation links in order to access your site's services. Here is a function that creates a password:
<?php
function make_password($num_chars) {
    if (is_numeric($num_chars) && ($num_chars > 0)) {
        $password = '';
        $accepted_chars = 'abcdefghijklmnopqrstuvwxyz1234567890';
        // Seed the generator if necessary.
        srand((int)((double)microtime() * 1000003));
        // Loop exactly $num_chars times ("<", not "<=", which would
        // produce one character too many).
        for ($i = 0; $i < $num_chars; $i++) {
            $random_number = rand(0, strlen($accepted_chars) - 1);
            $password .= $accepted_chars[$random_number];
        }
        return $password;
    }
}
?>
#### Using the Script ####
The **make_password()** function returns a string, so all you need to do is supply the length of the string as an argument:
<?php
$fifteen_character_password = make_password(15);
?>
The function works as follows:
- The function makes sure that **$num_chars** is a positive nonzero integer.
- The function initializes the **$password** variable to an empty string.
- The function initializes the **$accepted_chars** variable to the list of characters the password may contain. This script uses all lowercase letters and the numbers 0 through 9, but you can choose any set of characters you like.
- The random number generator needs a seed, so the function supplies one derived from the current microsecond timestamp. (This isn't strictly necessary on PHP 4.2 and later, which seeds the generator automatically.)
- The function loops **$num_chars** times, one iteration for each character in the password to generate.
- For each new character, the script looks at the length of **$accepted_chars**, chooses a number between 0 and the length minus one, and adds the character at that index in **$accepted_chars** to $password.
- After the loop completes, the function returns **$password**.
### License ###
This article, along with any associated source code and files, is licensed under [The Code Project Open License (CPOL)][4]
--------------------------------------------------------------------------------
via: http://www.codeproject.com/Articles/363897/PHP-Security
作者:[SamarRizvi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.codeproject.com/script/Membership/View.aspx?mid=7483622
[1]:http://pixel-apes.com/safehtml/?page=safehtml
[2]:http://ha.ckers.org/xss.html
[3]:http://namb.la/popular/tech.html
[4]:http://www.codeproject.com/info/cpol10.aspx

View File

@ -0,0 +1,55 @@
Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 1 - Introduction
================================================================================
*Author's Note: If by some miracle you managed to click this article without reading the title then I want to reiterate something... This is an editorial. These are my opinions. They are not representative of Phoronix or Michael; these are my own thoughts.*
Additionally, yes... This is quite possibly a flame-bait article. I hope the community is better than that, because I do want to start a discussion and give feedback to both the KDE and Gnome communities. For that reason, when I point out what I see as a flaw, I will try to be specific and direct so that any discussion can be equally specific and direct. For the record: the alternative title for this article was "Death By A Thousand [Paper Cuts][1]".
Now, with that out of the way... Onto the article.
![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920)
When I sent the [Fedora 22 KDE Review][2] off to Michael I did it with a bit of a bad taste in my mouth. It wasn't because I didn't like KDE, or hadn't been enjoying Fedora, far from it. In fact, I started to transition my T450s over to Arch Linux but quickly decided against that, as I enjoyed the level of convenience that Fedora brings to me for many things.
The reason I had a bad taste in my mouth was because the Fedora developers put a lot of time and effort into their "Workstation" product and I wasn't seeing any of it. I wasn't using Fedora the way the main developers had intended it to be used and therefore wasn't getting the "Fedora Experience." It felt like someone reviewing Ubuntu by using Kubuntu, using a Hackintosh to review OS X, or reviewing Gentoo by using Sabayon. A lot of readers in the forums bash on Michael for reviewing distributions in their default configurations-- myself included. While I still do believe that reviews should be done under 'real-world' configurations, I do see the value in reviewing something in the condition it was given to you-- for better or worse.
It was with that attitude in mind that I decided to take a dip in the Gnome pool.
I do, however, need to add one more disclaimer... I am looking at KDE and Gnome as they are packaged in Fedora. OpenSUSE, Kubuntu, Arch, etc, might all have different implementations of each desktop that will change whether my specific 'pain points' are relevant to your distribution. Furthermore, despite the title, this is going to be a VERY KDE heavy article. I called the article what I did because it was actually USING Gnome that made me realize how many "paper cuts" KDE actually has.
### Login Screen ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920)
I normally don't mind distributions shipping distro-specific themes, because most of them make the desktop look nicer. I finally found my exception.
First impressions count for a lot, right? Well, GDM definitely gets this one right. The login screen is incredibly clean, with a consistent design language through every single part of it. The use of common-language icons instead of text boxes helps in that regard.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920)
That is not to say that the Fedora 22 KDE login screen-- now SDDM rather than KDM-- looks 'bad' per se, but it's definitely more jarring.
Where's the fault? The top bar. Look at the Gnome screenshot-- you select a user and you get a tiny little gear symbol for selecting what session you want to log into. The design is clean, it gets out of your way, and you could honestly miss it completely if you weren't paying attention. Now look at the blue KDE screenshot: the bar doesn't look like it was even rendered using the same widgets, and its entire placement feels like an afterthought of "Well shit, we need to throw this option somewhere..."
The same can be said for the Reboot and Shutdown options in the top right. Why not just a power button that opens a drop-down menu with Reboot, Shutdown, and Suspend? Having the buttons be different colors than the background certainly makes them stick out and be noticeable... but I don't think in a good way. Again, they feel like an afterthought.
GDM is also far more useful from a practical standpoint. Look again along the top row: the time is listed, there's a volume control so that if you are trying to be quiet you can mute all sounds before you even log in, and there's an accessibility button for things like high contrast, zooming, text to speech, etc, all available via simple toggle buttons.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920)
Swap it to upstream's Breeze theme and... suddenly most of my complaints are fixed. Common-language icons, everything is in the center of the screen, but the less important stuff is off to the sides. This creates a nice harmony between the top and bottom of the screen since they are equally empty. You still have a text box for the session switcher, but I can forgive that since the power buttons are now common-language icons. Current time is available, which is a nice touch, as is a battery life indicator. Sure, Gnome still has a few nice additions, such as the volume applet and the accessibility buttons, but Breeze is a step up from Fedora's KDE theme.
Go to Windows (pre-Windows 8 & 10...) or OS X and you will see similar things: very clean, get-out-of-your-way lock screens and login screens that are devoid of text boxes or other widgets that distract the eye. It's a design that works and that is non-distracting. Fedora... ship Breeze by default. The VDG got the design of the Breeze theme right. Don't mess it up.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts
[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1

View File

@ -0,0 +1,32 @@
Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 2 - The GNOME Desktop
================================================================================
### The Desktop ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920)
I spent the first five days of my week logging into Gnome manually-- not turning on automatic login. On the night of the fifth day I got annoyed with having to log in by hand and so I went into the User Manager and turned on automatic login. The next time I logged in I got a prompt: "Your keychain was not unlocked. Please enter your password to unlock your keychain." That was when I realized something... Gnome had been automatically unlocking my keychain-- my wallet in KDE speak-- every time I logged in via GDM. It was only when I bypassed GDM's login that Gnome had to step in and make me do it manually.
Now, I am under the personal belief that if you enable automatic login then your key chain should be unlocked automatically as well-- otherwise what's the point? Either way you still have to type in your password and at least if you hit the GDM Login screen you have a chance to change your session if you want to.
But, regardless of that, it was at that moment that I realized it was such a simple thing that made the desktop feel so much more like it was working WITH ME. When I log into KDE via SDDM? Before the splash screen is even finished loading there is a window popping up over top the splash animation-- thereby disrupting the splash screen-- prompting me to unlock my KDE wallet or GPG keyring.
If a wallet doesn't exist already you get prompted to create a wallet-- why couldn't one have been created for me at user creation?-- and then get asked to pick between two encryption methods, where one is even implied as insecure (Blowfish), why are you letting me pick something that's insecure for my security? Author's Note: If you install the actual KDE spin and don't just install KDE after-the-fact then a wallet is created for you at user creation. Unfortunately it's not unlocked for you automatically, and it seems to use the older Blowfish method rather than the new, and more secure, GPG method.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920)
If you DO pick the secure one (GPG) then it tries to load a GPG key... which I hope you had created already, because if you don't, you get yelled at. How do you create one? Well, it doesn't offer to make one for you... nor does it tell you how... and if you do manage TO figure out that you are supposed to use KGpg to create the key, then you get taken through several menus and prompts that are nothing but confusing to new users. Why are you asking me where the GPG binary is located? How on earth am I supposed to know? Can't you just use the most recent one if there's more than one? And if there IS only one then, I ask again, why are you prompting me?
Why are you asking me what key size and encryption algorithm to use? You select 2048 and RSA/RSA by default, so why not just use those? If you want to have those options available then throw them under the "Expert mode" button that is right there. This isn't just about having configuration options available, it's about needless things that get thrown in the user's face by default. This is going to be a theme for the rest of the article... KDE needs better sane defaults. Configuration is great, I love the configuration I get with KDE, but it needs to learn when to and when not to prompt. It also needs to learn that "Well, it's configurable" is no excuse for bad defaults. Defaults are what users see initially, and bad defaults will lose users.
Let's move on from the key chain issue though, because I think I made my point.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,62 @@
Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 3 - GNOME Applications
================================================================================
### Applications ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920)
This is the one area where things are basically a wash. Each environment has a few applications that are really nice, and a few that are not so great. Once again though, Gnome gets the little things right in a way that KDE completely misses. None of KDE's applications are bad or broken, that's not what I'm saying. They function. But that's about it. To use an analogy: they passed the test, but they sure didn't get anywhere close to 100% on it.
Gnome on the left, KDE on the right. Dragon performs perfectly fine; it has clearly marked buttons for playing a file, URL, or a disc, just as you can do under Gnome Videos... but Gnome takes it one extra little step further in the name of convenience and user friendliness: it shows all the videos detected on your system by default, without you having to do anything. KDE has Baloo-- just as it had Nepomuk before that-- so why not use them? They've got a list of video files that's freely accessible... but KDE doesn't make use of the feature.
Moving on... Music Players.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920)
Both of these applications, Rhythmbox on the left and Amarok on the right, were opened up and then a screenshot was immediately taken; nothing was clicked or altered. See the difference? Rhythmbox looks like a music player. It's direct, there are obvious ways to sort the results, and it knows what it's trying to be and what its job is: to play music.
Amarok feels like one of those tech demos, or library demos, where someone puts every option and extension they possibly can inside one application in order to show them off-- it's never something that gets shipped as production, it's just there to show off bits and pieces. And that's exactly what Amarok feels like: it's someone trying to show off every single possible cool thing they can shove into a media player without ever stopping to think "Wait, what were we trying to write again? An app to play music?"
Just look at the default layout. What is front and center for the user? A visualizer and Wikipedia integration-- the largest and most prominent column on the page. What's the second largest? Playlist list. Third largest, aka smallest? The actual music listing. How on earth are these sane defaults for a core application?
Software Managers! Something that has seen a lot of push in recent years and will likely only see a bigger push in the months to come. Unfortunately, it's another area where KDE was so close... and then fell on its face right at the finish line.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920)
Gnome Software is probably my new favorite software center, minus one gripe which I will get to in a bit. Muon, I wanted to like you. I really did. But you are a design nightmare. When the VDG was drawing up plans for you (mockup below), you looked pretty slick. Good use of white space, clean design, a nice category listing, and your whole not-being-split-into-two-applications thing.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920)
Then someone got around to coding you and doing your actual UI, and I can only guess they were drunk while they did it.
Let's look at Gnome Software. What's smack dab in the middle? The application, its screenshots, its description, etc. What's smack dab in the middle of Muon? Gigantic waste of white space. Gnome Software also includes the lovely convenience feature of putting a "Launch" button right there in case you already have an application installed. Convenience and ease of use are important, people. Honestly, JUST having things in Muon be center-aligned would probably make things look better already.
What's along the top edge of Gnome Software, like a tab listing? All Software, Installed, Updates. Clean language, direct, to the point. Muon? Well, we have "Discover", which works okay as far as language goes, and then we have Installed, and then nothing. Where's updates?
Well.. the developers decided to split updates off into its own application, thus requiring you to open two applications to handle your software-- one to install it, and one to update it-- going against every Software Center paradigm that has ever existed since the Synaptic graphical package manager.
I'm not going to show it in a screenshot, just because I don't want to have to clean up my system afterwards, but if you go into Muon and start installing something, the way it shows this is by adding a little tab to the bottom of your screen with the application's name. That tab doesn't go away when the application is done installing either, so if you're installing a lot of applications at a single time then you'll just slowly accumulate tabs along the bottom that you then have to go through and clean up manually, because if you don't then they grow off the screen and you have to swipe through them all to get to the most recent ones. Think: opening 50 tabs in Firefox. Major annoyance, major inconvenience.
I did say I would bash on Gnome a bit, and I meant it. Muon does get one thing very right that Gnome Software doesn't. Under the settings bar Muon has an option for "Show Technical Packages" aka: compilers, software libraries, non-graphical applications, applications without AppData, etc. Gnome doesn't. If you want to install any of those you have to drop down to the terminal. I think that's wrong. I certainly understand wanting to push AppData but I think they pushed it too soon. What made me realize Gnome didn't have this setting was when I went to install PowerTop and couldn't get Gnome to display it-- no AppData, no "Show Technical Packages" setting.
Doubly unfortunate is the fact that you can't "just use apper" if you're under KDE since...
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920)
Apper's support for installing local packages has been broken since Fedora 19 or so, almost two years. I love the attention to detail and quality.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,52 @@
Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 4 - GNOME Settings
================================================================================
### Settings ###
There are a few specific KDE Control modules that I am going to pick at, mostly because they are so laughably horrible compared to their Gnome counterparts that it's honestly pathetic.
First one up? Printers.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920)
Gnome is on the left, KDE is on the right. You know what the difference is between the printer applet on the left, and the one on the right? When I opened up Gnome Control Center and hit "Printers" the applet popped up and nothing happened. When I opened up KDE System Settings and hit "Printers" I got a password prompt. Before I was even allowed to LOOK at the printers I had to give up ROOT'S password.
Let me just reiterate that. In this, the days of PolicyKit and Logind, I am still being asked for root's password for what should be a sudo operation. I didn't even SET UP root's password when I installed the system. I had to drop down to Konsole and run 'sudo passwd root' so that I could GIVE root a password so that I could go back into System Settings' printer applet and then give up root's password to even LOOK at what printers were available. Once I did that, I got prompted for root's password AGAIN when I hit "Add Printer", then I got prompted for root's password AGAIN after I went through and selected a printer and driver. Three times I got asked for ROOT'S password just to add a printer to the system.
When I added a printer under Gnome I didn't get prompted for my SUDO password until I hit "Unlock" in the printer applet. I got asked once, then I never got asked again. KDE, I am begging you... adopt Gnome's "Unlock" methodology. Do not prompt for a password until you really need one. Furthermore, whatever library is out there that allows KDE applications to bypass PolicyKit / Logind (if it's available) and prompt directly for root... bin that code. If this was a multi-user system, I would either have to give up root's password, or be there every second of every day in order to put it in any time a user might have to update, change, or add a new printer. Both options are completely unacceptable.
One more thing...
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920)
Question to the forums: What looks cleaner to you? I had this realization when I was writing this article: Gnome's applet makes it very clear where any additional printers are going to go, they set aside a column on the left to list them. Before I added a second printer to KDE, and it suddenly grew a left side column, I had this nightmare-image in my head of the applet just shoving another icon into the screen and them being listed out like preview images in a folder of pictures. I was pleasantly surprised to see that I was wrong but the fact that the applet just 'grew' another column that didn't exist before and drastically altered its presentation is not really 'good' either. It's a design that's confusing, shocking, and non-intuitive.
Enough about printers though... Next KDE System Setting that is up for my public stoning? Multimedia, Aka Phonon.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920)
As always, Gnome's on the left, KDE is on the right. Let's just run through the Gnome settings first... The eyes go left to right, top to bottom, right? So let's do the same. First up: the volume control slider. The blue hint against the empty bar, with 100% clearly marked, removes all confusion about which way is "volume up." Immediately after the slider is an easy On/Off toggle that functions as a mute on/off. Points to Gnome for remembering what the volume was set to BEFORE I muted sound, and returning to that same level AFTER I press volume-up to un-mute. Kmixer, you amnesiac piece of crap, I wish I could say as much about you.
Moving on! Tabbed options for Output, Input and Applications? With per-application volume controls within easy reach? Gnome, I love you more and more with every passing second. Balance options, sound profiles, and a clearly marked "Test Speakers" option.
I'm not sure how this could have been implemented in a cleaner, more concise way. Yes, it's just a Gnome-ized Pavucontrol, but I think that's the point. Pavucontrol got it mostly right to begin with; the Sound applet in Gnome Control Center just refines it slightly to make it even closer to perfect.
Phonon, you're up. And let me start by saying: What the fsck am I looking at? -I- get that I am looking at the priority list for the audio devices on the system, but the way it is presented is a bit of a nightmare. Also, where are the things the user probably cares about? A priority list is a great thing to have, it SHOULD be available, but it's something the user messes with once or twice and then never touches again. It's not important, or common, enough to warrant being front and center. Where's the volume slider? Where are the per-application controls? The things that users will be using more frequently? Well... those are under Kmix, a separate program, with its own settings and configuration... not under the System Settings... which kind of makes System Settings a bit of a misnomer. And in that same vein, let's hop over to network settings.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920)
Presented above is the Gnome Network Settings. KDE's isn't included because of the reason I'm about to hit on. If you go to KDE's System Settings and hit any of the three options under the "Network" section you get tons of options: Bluetooth settings, default username and password for Samba shares (seriously, "Connectivity" only has 2 options: username and password for SMB shares. How the fsck does THAT deserve the all-inclusive title "Connectivity"?), controls for Browser Identification (which only works for Konqueror... a dead project), proxy settings, etc... Where are my wifi settings? They aren't there. Where are they? Well, they are in the network applet's private settings... not under Network Settings...
KDE, you're killing me. You have "System Settings" USE IT!
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,40 @@
Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 5 - Conclusion
================================================================================
### User Experience and Closing Thoughts ###
When Gnome 2.x and KDE 4.x were going head to head, I jumped between the two quite happily. Some things I loved, some things I hated, but overall they were both a pleasure to use. Then Gnome 3.x came around, and with it all of the drama of Gnome Shell. I swore off Gnome and avoided it every chance I could. It wasn't user friendly, it was non-intuitive, it broke an established paradigm in preparation for tablets taking over the world... a future that, judging from the dropping sales of tablets, will never come.
Eight releases of Gnome 3 later and the unimaginable happened. Gnome got user friendly. Gnome got intuitive. Is it perfect? Of course not. I still hate the paradigm it tries to push, I hate how it tries to force a work flow onto me, but both of those things can be gotten used to with time and patience. Once you have managed to look past Gnome Shell's alien appearance and you start interacting with it and the other parts of Gnome (Control Center especially) you see what Gnome has definitely gotten right: the little things. The attention to detail.
People can adapt to new paradigms, people can adapt to new work flows-- the iPhone and iPad proved that-- but what will always bother them are the paper cuts.
Which brings up an important distinction between KDE and Gnome. Gnome feels like a product. It feels like a singular experience. When you use it, it feels like it is complete and that everything you need is at your fingertips. It feels like THE Linux desktop in the same way that Windows or OS X have THE desktop experience: what you need is there and it was all written by the same guys working on the same team towards the same goal. Hell, even an application prompting for sudo access feels like an intentional part of the desktop under Gnome, much the way that it is under Windows. In KDE it's just some random-looking window popup that any application could have created. It doesn't feel like a part of the system stopping and going "Hey! Something has requested administrative rights! Do you want to let it go through?" in an official capacity.
KDE doesn't feel like a cohesive experience. KDE doesn't feel like it has a direction it's moving in, it doesn't feel like a full experience. KDE feels like it's a bunch of pieces that are moving in a bunch of different directions, that just happen to have a shared toolkit beneath them. If that's what the developers are happy with, then fine, good for them, but if the developers still have the hope of offering the best experience possible then the little stuff needs to matter. The user experience and being intuitive needs to be at the forefront of every single application, and there needs to be a vision of what KDE wants to offer -and- how it should look.
Is there anything stopping me from using Gnome Disks under KDE? Rhythmbox? Evolution? Nope. Nope. Nope. But that misses the point. Gnome and KDE both market themselves as "Desktop Environments." They are supposed to be full -environments-; that means all the pieces come together and fit, that you use that environment's tools because they are saying "We support everything you need to have a full desktop." Honestly? Only Gnome seems to fit the bill of being complete. KDE feels half-finished when it comes to the "coming together" part, let alone offering everything you need for a "full experience". There's no counterpart to Gnome Disks-- kpartitionmanager prompts for root. No "First Time User" run-through, and it only just now got a user manager in Kubuntu. Hell, Gnome even provides Maps, Notes, Calendar and Clock applications. Do all of these applications matter 100%? No, of course not. But the fact that Gnome has them helps to push the idea that Gnome is a full and complete experience.
My complaints about KDE are not impossible to fix, not by a long shot. But it requires people to care. It requires developers to take pride in their work beyond just function-- form counts for a whole hell of a lot. Don't take away the user's ability to configure things-- the lack of configuration is one of my biggest gripes with GNOME 3.x-- but don't use "Well you can configure it however you want" as an excuse for not providing sane defaults. The defaults are what users are going to see; they are what users are going to judge from the first moment they open your application. Make it a good impression.
I know the KDE developers know design matters, that is WHY the Visual Design Group exists, but it feels like they aren't using the VDG to its fullest. And therein lies KDE's hamartia. It's not that KDE can't be complete, it's not that it can't come together and fix the downfalls, it's just that they haven't. They aimed for the bullseye... but they missed.
And before anyone says it... Don't say "Patches are welcome." Because while I can happily submit patches for the individual annoyances, more will just keep coming as developers keep on their merry way of doing things in non-intuitive ways. This isn't about Muon not being center-aligned. This isn't about Amarok having an ugly UI. This isn't about the volume and brightness pop-up notifiers taking up a large chunk of my screen real-estate every time I hit my hotkeys (seriously, someone shrink those things).
This is about a mentality of apathy, this is about developers apparently not thinking things through when they make the UI for their applications. Everything the KDE Community does works fine. Amarok plays music. Dragon Player plays videos. Kwin / Qt & kdelibs are seemingly more power-efficient than Mutter / gtk (according to my battery life times; non-scientific testing). Those things are all well and good, and important... but the presentation matters too. Arguably, the presentation matters the most, because that is what users see and interact with.
To KDE application developers... Get the VDG involved. Make every single 'core' application get its design vetted and approved by the VDG, and have a UI/UX expert from the VDG go through the usage patterns and usage flow of your application to make sure it's intuitive. Hell, even just posting a mock-up to the VDG forums and asking for feedback would probably get you some nice pointers and feedback for whatever application you're working on. You have this great resource there; now actually use it.
I am not trying to sound ungrateful. I love KDE, I love the work and effort that volunteers put into giving Linux users a viable desktop, and an alternative to Gnome. And it is because I care that I write this article. Because I want to see KDE excel, I want to see it go further and farther than it has before. But doing that requires work on everyone's part, and it requires that people don't hold back criticism. It requires that people are honest about their interaction with the system and where it falls apart. If we can't give direct criticism, if we can't say "This sucks!" then it will never get better.
Will I still use Gnome after this week? Probably not, no. Gnome is still trying to force a workflow on me that I don't want to follow or abide by; I feel less productive when I'm using it because it doesn't follow my paradigm. For my friends though, when they ask me "What desktop environment should I use?" I'm probably going to recommend Gnome, especially if they are less technical users who want things to "just work." And that is probably the most damning assessment I could make in regards to the current state of KDE.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,314 @@
How to Configure Chef (server/client) on Ubuntu 14.04 / 15.04
================================================================================
Chef is a configuration management and automation tool for information technology professionals that configures and manages your infrastructure, whether it is on-premises or in the cloud. It can be used to speed up application deployment and to coordinate the work of multiple system administrators and developers involving hundreds, or even thousands, of servers and applications supporting a large customer base. The key to Chef's power is that it turns infrastructure into code. Once you master Chef, you will be able to manage your cloud infrastructure with first-class support, easily automating your internal deployments and end users' systems.
Here are the major components of Chef that we are going to set up and configure in this tutorial.
![](http://blog.linoxide.com/wp-content/uploads/2015/07/chef.png)
### Chef Prerequisites and Versions ###
We are going to set up the Chef configuration management system in the following basic environment.
注:表格
<table width="701" style="height: 284px;">
<tbody>
<tr>
<td width="660" colspan="2"><strong>Chef, Configuration Management Tool</strong></td>
</tr>
<tr>
<td width="220"><strong>Base Operating System</strong></td>
<td width="492">Ubuntu 14.04.1 LTS&nbsp;(x86_64)</td>
</tr>
<tr>
<td width="220"><strong>Chef Server</strong></td>
<td width="492">Version 12.1.0</td>
</tr>
<tr>
<td width="220"><strong>Chef Manage</strong></td>
<td width="492">Version 1.17.0</td>
</tr>
<tr>
<td width="220"><strong>Chef Development Kit</strong></td>
<td width="492">Version 0.6.2</td>
</tr>
<tr>
<td width="220"><strong>RAM and CPU</strong></td>
<td width="492">4 GB&nbsp; , 2.0+2.0 GHZ</td>
</tr>
</tbody>
</table>
### Chef Server's Installation and Configurations ###
Chef Server is the central core component; it stores recipes as well as other configuration data and interacts with the workstations and nodes. Let's download the installation media by selecting the latest version of Chef Server from its official web link.
We will get its installation package and install it using the following commands.
**1) Downloading Chef Server**
root@ubuntu-14-chef:/tmp# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.1.0-1_amd64.deb
**2) To install Chef Server**
root@ubuntu-14-chef:/tmp# dpkg -i chef-server-core_12.1.0-1_amd64.deb
**3) Reconfigure Chef Server**
Now run the following command to start all of the Chef Server services. This step may take a few minutes to complete, as the server is composed of many different services that work together to create a functioning system.
root@ubuntu-14-chef:/tmp# chef-server-ctl reconfigure
The Chef Server startup command 'chef-server-ctl reconfigure' needs to be run twice, so that the installation ends with the following completion output.
Chef Client finished, 342/350 resources updated in 113.71139964 seconds
opscode Reconfigured!
**4) Reboot OS**
Once the installation is complete, reboot the operating system so that everything works properly; without doing this, you might get the following SSL_connect error during the creation of a user.
ERROR: Errno::ECONNRESET: Connection reset by peer - SSL_connect
**5) Create new Admin User**
Run the following command to create a new administrator user with its profile settings. During creation, the user's RSA private key is generated automatically and should be saved to a safe location. The --filename option saves the RSA private key to the specified path.
root@ubuntu-14-chef:/tmp# chef-server-ctl user-create kashi kashi kashi kashif.fareedi@gmail.com kashi123 --filename /root/kashi.pem
### Chef Manage Setup on Chef Server ###
Chef Manage is a management console for Enterprise Chef that enables a web-based user interface for visualizing and managing nodes, data bags, roles, environments, cookbooks and role-based access control (RBAC).
**1) Downloading Chef Manage**
Copy the link for Chef Manage from the official web site and download the chef manage package.
root@ubuntu-14-chef:~# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/opscode-manage_1.17.0-1_amd64.deb
**2) Installing Chef Manage**
Let's install it into root's home directory with the command below.
root@ubuntu-14-chef:~# chef-server-ctl install opscode-manage --path /root
**3) Restart Chef Manage and Server**
Once the installation is complete, we need to restart the Chef Manage and Chef Server services by executing the following commands.
root@ubuntu-14-chef:~# opscode-manage-ctl reconfigure
root@ubuntu-14-chef:~# chef-server-ctl reconfigure
### Chef Manage Web Console ###
We can access the Chef Manage web console from localhost as well as its FQDN, and log in with the admin user account we created earlier.
![chef amanage](http://blog.linoxide.com/wp-content/uploads/2015/07/5-chef-web.png)
**1) Create New Organization with Chef Manage**
You will be asked to create a new organization or accept an invitation from existing organizations. Let's create a new organization by providing its short and full names as shown.
![Create Org](http://blog.linoxide.com/wp-content/uploads/2015/07/7-create-org.png)
**2) Create New Organization with Command line**
We can also create a new organization from the command line by executing the following command.
root@ubuntu-14-chef:~# chef-server-ctl org-create linux Linoxide Linux Org. --association_user kashi --filename linux.pem
### Configuration and setup of Workstation ###
Having successfully installed the Chef Server, we are now going to set up a workstation to create and configure the recipes, cookbooks, attributes, and other changes that we want to make to our Chef configuration.
**1) Create New User and Organization on Chef Server**
To set up the workstation, we create a new user and an organization for it from the command line.
root@ubuntu-14-chef:~# chef-server-ctl user-create bloger Bloger Kashif bloger.kashif@gmail.com bloger123 --filename bloger.pem
root@ubuntu-14-chef:~# chef-server-ctl org-create blogs Linoxide Blogs Inc. --association_user bloger --filename blogs.pem
**2) Download Starter Kit for Workstation**
Now download and save the Starter Kit from the Chef Manage web console onto the workstation; it will be used to work with the Chef Server.
![Starter Kit](http://blog.linoxide.com/wp-content/uploads/2015/07/8-download-kit.png)
**3) Click to "Proceed" with starter kit download**
![starter kit](http://blog.linoxide.com/wp-content/uploads/2015/07/9-download-kit.png)
### Chef Development Kit Setup for Workstation ###
The Chef Development Kit is a software suite containing all the development tools needed to code for Chef. It combines the best-of-breed tools developed by the Chef community with the Chef Client.
**1) Downloading Chef DK**
We can download the Chef Development Kit from its official web link, choosing the required operating system.
![Chef DK](http://blog.linoxide.com/wp-content/uploads/2015/07/10-CDK.png)
Copy the link and download it with the wget command.
root@ubuntu-15-WKS:~# wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.6.2-1_amd64.deb
**2) Installing Chef DK**
Install the Chef Development Kit using the dpkg command.
root@ubuntu-15-WKS:~# dpkg -i chefdk_0.6.2-1_amd64.deb
**3) Chef DK Verification**
Verify that the client was installed properly using the command below.
root@ubuntu-15-WKS:~# chef verify
----------
Running verification for component 'berkshelf'
Running verification for component 'test-kitchen'
Running verification for component 'chef-client'
Running verification for component 'chef-dk'
Running verification for component 'chefspec'
Running verification for component 'rubocop'
Running verification for component 'fauxhai'
Running verification for component 'knife-spork'
Running verification for component 'kitchen-vagrant'
Running verification for component 'package installation'
Running verification for component 'openssl'
..............
---------------------------------------------
Verification of component 'rubocop' succeeded.
Verification of component 'knife-spork' succeeded.
Verification of component 'openssl' succeeded.
Verification of component 'berkshelf' succeeded.
Verification of component 'chef-dk' succeeded.
Verification of component 'fauxhai' succeeded.
Verification of component 'test-kitchen' succeeded.
Verification of component 'kitchen-vagrant' succeeded.
Verification of component 'chef-client' succeeded.
Verification of component 'chefspec' succeeded.
Verification of component 'package installation' succeeded.
**Connecting to Chef Server**
We will create a ~/.chef directory and copy the two user and organization .pem files into this folder from the Chef Server.
root@ubuntu-14-chef:~# scp bloger.pem blogs.pem kashi.pem linux.pem root@172.25.10.172:/.chef/
----------
root@172.25.10.172's password:
bloger.pem 100% 1674 1.6KB/s 00:00
blogs.pem 100% 1674 1.6KB/s 00:00
kashi.pem 100% 1678 1.6KB/s 00:00
linux.pem 100% 1678 1.6KB/s 00:00
**Knife Configurations to Manage your Chef Environment**
Now create "~/.chef/knife.rb" with following content as configured in previous steps.
root@ubuntu-15-WKS:/.chef# vim knife.rb
current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "kashi"
client_key "#{current_dir}/kashi.pem"
validation_client_name "kashi-linux"
validation_key "#{current_dir}/linux.pem"
chef_server_url "https://172.25.10.173/organizations/linux"
cache_type 'BasicFile'
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
cookbook_path ["#{current_dir}/../cookbooks"]
Create "~/cookbooks" folder for cookbooks as specified knife.rb file.
root@ubuntu-15-WKS:/# mkdir cookbooks
**Testing with Knife Configurations**
Run "knife user list" and "knife client list" commands to verify whether knife configuration is working.
root@ubuntu-15-WKS:/.chef# knife user list
You might get the following error the first time you run this command. This occurs because we do not have the Chef Server's SSL certificate on our workstation.
ERROR: SSL Validation failure connecting to host: 172.25.10.173 - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
ERROR: Could not establish a secure connection to the server.
Use `knife ssl check` to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
`knife ssl fetch` to make knife trust the server's certificates.
To recover from the above error, run the following command to fetch the SSL certificates, then run the knife user and client list commands again; they should work fine now.
root@ubuntu-15-WKS:/.chef# knife ssl fetch
WARNING: Certificates from 172.25.10.173 will be fetched and placed in your trusted_cert
directory (/.chef/trusted_certs).
Knife has no means to verify these are the correct certificates. You should
verify the authenticity of these certificates after downloading.
Adding certificate for ubuntu-14-chef.test.com in /.chef/trusted_certs/ubuntu-14-chef_test_com.crt
Now that we have fetched the SSL certificates with the above command, let's run the command below again.
root@ubuntu-15-WKS:/.chef# knife client list
kashi-linux
### New Node Configuration to interact with chef-server ###
Nodes run chef-client, which performs all the infrastructure automation. Now that we have configured the Chef Server and knife workstation combination, it's time to begin adding new servers to our Chef environment by configuring a new node to interact with the Chef Server.
To configure a new node to work with the Chef Server, use the command below.
root@ubuntu-15-WKS:~# knife bootstrap 172.25.10.170 --ssh-user root --ssh-password kashi123 --node-name mydns
----------
Doing old-style registration with the validation key at /.chef/linux.pem...
Delete your validation key in order to use your user credentials instead
Connecting to 172.25.10.170
172.25.10.170 Installing Chef Client...
172.25.10.170 --2015-07-04 22:21:16-- https://www.opscode.com/chef/install.sh
172.25.10.170 Resolving www.opscode.com (www.opscode.com)... 184.106.28.91
172.25.10.170 Connecting to www.opscode.com (www.opscode.com)|184.106.28.91|:443... connected.
172.25.10.170 HTTP request sent, awaiting response... 200 OK
172.25.10.170 Length: 18736 (18K) [application/x-sh]
172.25.10.170 Saving to: STDOUT
172.25.10.170
100%[======================================>] 18,736 --.-K/s in 0s
172.25.10.170
172.25.10.170 2015-07-04 22:21:17 (200 MB/s) - written to stdout [18736/18736]
172.25.10.170
172.25.10.170 Downloading Chef 12 for ubuntu...
172.25.10.170 downloading https://www.opscode.com/chef/metadata?v=12&prerelease=false&nightlies=false&p=ubuntu&pv=14.04&m=x86_64
172.25.10.170 to file /tmp/install.sh.26024/metadata.txt
172.25.10.170 trying wget...
Afterwards we can see the newly created node in the knife node list, and in the new client list as well, since bootstrapping also creates a new client along with the node.
root@ubuntu-15-WKS:~# knife node list
mydns
Similarly, we can add any number of nodes to our Chef infrastructure by providing SSH credentials to the same knife bootstrap command, as sketched below.
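Since each bootstrap run only needs an address and credentials, a simple shell loop can enroll several nodes in one pass. A minimal sketch; the IPs and node-name scheme below are illustrative placeholders, not hosts from this setup:

    # Hypothetical example: bootstrap several nodes in one pass.
    # Replace the IPs, password and node-name scheme with your own.
    for ip in 172.25.10.180 172.25.10.181; do
        knife bootstrap $ip --ssh-user root --ssh-password kashi123 --node-name node-${ip##*.}
    done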
### Conclusion ###
In this detailed article we learned about the Chef configuration management tool, gaining a basic understanding of its components along with their installation and configuration. We hope you have enjoyed learning about the setup of the Chef Server with its workstation and client nodes.
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/install-configure-chef-ubuntu-14-04-15-04/
作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/kashifs/


@ -0,0 +1,237 @@
How to collect NGINX metrics - Part 2
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png)
### How to get the NGINX metrics you need ###
How you go about capturing metrics depends on which version of NGINX you are using, as well as which metrics you wish to access. (See [the companion article][1] for an in-depth exploration of NGINX metrics.) Free, open-source NGINX and the commercial product NGINX Plus both have status modules that report metrics, and NGINX can also be configured to report certain metrics in its logs:
注:表格
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: center;">
<col style="text-align: center;">
<col style="text-align: center;"> </colgroup>
<thead>
<tr>
<th rowspan="2" style="text-align: left;">Metric</th>
<th colspan="3" style="text-align: center;">Availability</th>
</tr>
<tr>
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source">NGINX (open-source)</a></th>
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus">NGINX Plus</a></th>
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#logs">NGINX logs</a></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">accepts / accepted</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">handled</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">dropped</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">active</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">requests / total</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">4xx codes</td>
<td style="text-align: center;"></td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
</tr>
<tr>
<td style="text-align: left;">5xx codes</td>
<td style="text-align: center;"></td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
</tr>
<tr>
<td style="text-align: left;">request time</td>
<td style="text-align: center;"></td>
<td style="text-align: center;"></td>
<td style="text-align: center;">x</td>
</tr>
</tbody>
</table>
#### Metrics collection: NGINX (open-source) ####
Open-source NGINX exposes several basic metrics about server activity on a simple status page, provided that you have the HTTP [stub status module][2] enabled. To check if the module is already enabled, run:
nginx -V 2>&1 | grep -o with-http_stub_status_module
The status module is enabled if you see with-http_stub_status_module as output in the terminal.
If that command returns no output, you will need to enable the status module. You can use the --with-http_stub_status_module configuration parameter when [building NGINX from source][3]:
./configure \
… \
--with-http_stub_status_module
make
sudo make install
After verifying the module is enabled or enabling it yourself, you will also need to modify your NGINX configuration to set up a locally accessible URL (e.g., /nginx_status) for the status page:
server {
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
}
Note: The server blocks of the NGINX config are usually found not in the master configuration file (e.g., /etc/nginx/nginx.conf) but in supplemental configuration files that are referenced by the master config. To find the relevant configuration files, first locate the master config by running:
nginx -t
Open the master configuration file listed, and look for lines beginning with include near the end of the http block, such as:
include /etc/nginx/conf.d/*.conf;
In one of the referenced config files you should find the main server block, which you can modify as above to configure NGINX metrics reporting. After changing any configurations, reload the configs by executing:
nginx -s reload
Now you can view the status page to see your metrics:
Active connections: 24
server accepts handled requests
1156958 1156958 4491319
Reading: 0 Writing: 18 Waiting: 6
Note that if you are trying to access the status page from a remote machine, you will need to whitelist the remote machine's IP address in your status configuration, just as 127.0.0.1 is whitelisted in the configuration snippet above.
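For example, to let a hypothetical monitoring host at 192.0.2.10 poll the page, the allow list can be extended like this (the address is a placeholder):

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        allow 192.0.2.10;  # placeholder address of a remote monitoring machine
        deny all;
    }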
The NGINX status page is an easy way to get a quick snapshot of your metrics, but for continuous monitoring you will need to automatically record that data at regular intervals. Parsers for the NGINX status page already exist for monitoring tools such as [Nagios][4] and [Datadog][5], as well as for the statistics collection daemon [collectD][6].
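If you just want a quick-and-dirty recorder before adopting one of those tools, a small shell loop that samples the page at a fixed interval is enough. A rough sketch, assuming the /nginx_status location configured earlier (the URL and interval are placeholders):

    # Sample the stub status page every 10 seconds, one timestamped line per sample.
    while true; do
        echo "$(date -u +%FT%TZ) $(curl -s http://127.0.0.1/nginx_status | tr '\n' ' ')"
        sleep 10
    done >> nginx_status.log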
#### Metrics collection: NGINX Plus ####
The commercial NGINX Plus provides [many more metrics][7] through its ngx_http_status_module than are available in open-source NGINX. Among the additional metrics exposed by NGINX Plus are bytes streamed, as well as information about upstream systems and caches. NGINX Plus also reports counts of all HTTP status code types (1xx, 2xx, 3xx, 4xx, 5xx). A sample NGINX Plus status board is available [here][8].
![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png)
*Note: the “Active” connections on the NGINX Plus status dashboard are defined slightly differently than the Active state connections in the metrics collected via the open-source NGINX stub status module. In NGINX Plus metrics, Active connections do not include connections in the Waiting state (aka Idle connections).*
NGINX Plus also reports [metrics in JSON format][9] for easy integration with other monitoring systems. With NGINX Plus, you can see the metrics and health status [for a given upstream grouping of servers][10], or drill down to get a count of just the response codes [from a single server][11] in that upstream:
{"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055}
To enable the NGINX Plus metrics dashboard, you can add a status server block inside the http block of your NGINX configuration. ([See the section above][12] on collecting metrics from open-source NGINX for instructions on locating the relevant config files.) For example, to set up a status dashboard at http://your.ip.address:8080/status.html and a JSON interface at http://your.ip.address:8080/status, you would add the following server block:
server {
listen 8080;
root /usr/share/nginx/html;
location /status {
status;
}
location = /status.html {
}
}
The status pages should be live once you reload your NGINX configuration:
nginx -s reload
The official NGINX Plus docs have [more details][13] on how to configure the expanded status module.
#### Metrics collection: NGINX logs ####
NGINX's [log module][14] writes configurable access logs to a destination of your choosing. You can customize the format of your logs and the data they contain by [adding or subtracting variables][15]. The simplest way to capture detailed logs is to add the following line in the server block of your config file (see [the section][16] on collecting metrics from open-source NGINX for instructions on locating your config files):
access_log logs/host.access.log combined;
After changing any NGINX configurations, reload the configs by executing:
nginx -s reload
The “combined” log format, included by default, captures [a number of key data points][17], such as the actual HTTP request and the corresponding response code. In the example logs below, NGINX logged a 200 (success) status code for a request for /index.html and a 404 (not found) error for the nonexistent /fail.
127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
You can log request processing time as well by adding a new log format to the http block of your NGINX config file:
log_format nginx '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent $request_time '
'"$http_referer" "$http_user_agent"';
And by adding or modifying the access_log line in the server block of your config file:
access_log logs/host.access.log nginx;
After reloading the updated configs (by running nginx -s reload), your access logs will include response times, as seen below. The units are seconds, with millisecond resolution. In this instance, the server received a request for /big.pdf, returning a 206 (success) status code after sending 33973115 bytes. Processing the request took 0.202 seconds (202 milliseconds):
127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
You can use a variety of tools and services to parse and analyze NGINX logs. For instance, [rsyslog][18] can monitor your logs and pass them to any number of log-analytics services; you can use a free, open-source tool such as [logstash][19] to collect and analyze logs; or you can use a unified logging layer such as [Fluentd][20] to collect and parse your NGINX logs.
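Even without a full log pipeline, standard text tools can pull basic error metrics out of the combined format. A small sketch, assuming the access log path used earlier; in the default "combined" format the status code is the ninth whitespace-separated field:

    # Count responses per status class (2xx/3xx/4xx/5xx) in an NGINX access log.
    awk '{ print substr($9, 1, 1) "xx" }' logs/host.access.log | sort | uniq -c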
### Conclusion ###
Which NGINX metrics you monitor will depend on the tools available to you, and whether the insight provided by a given metric justifies the overhead of monitoring that metric. For instance, is measuring error rates important enough to your organization to justify investing in NGINX Plus or implementing a system to capture and analyze logs?
At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][21], and get started right away with a [free trial of Datadog][22].
----------
Source Markdown for this post is available [on GitHub][23]. Questions, corrections, additions, etc.? Please [let us know][24].
--------------------------------------------------------------------------------
via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
作者K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
[3]:http://wiki.nginx.org/InstallOptions
[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx
[5]:http://docs.datadoghq.com/integrations/nginx/
[6]:https://collectd.org/wiki/index.php/Plugin:nginx
[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data
[8]:http://demo.nginx.com/status.html
[9]:http://demo.nginx.com/status
[10]:http://demo.nginx.com/status/upstreams/demoupstreams
[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses
[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example
[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html
[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
[18]:http://www.rsyslog.com/
[19]:https://www.elastic.co/products/logstash
[20]:http://www.fluentd.org/
[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up
[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md
[24]:https://github.com/DataDog/the-monitor/issues


@ -0,0 +1,150 @@
How to monitor NGINX with Datadog - Part 3
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
If you've already read [our post on monitoring NGINX][1], you know how much information you can gain about your web environment from just a handful of metrics. And you've also seen just how easy it is to start collecting metrics from NGINX on an ad hoc basis. But to implement comprehensive, ongoing NGINX monitoring, you will need a robust monitoring system to store and visualize your metrics, and to alert you when anomalies happen. In this post, we'll show you how to set up NGINX monitoring in Datadog so that you can view your metrics on customizable dashboards like this:
![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png)
Datadog allows you to build graphs and alerts around individual hosts, services, processes, metrics—or virtually any combination thereof. For instance, you can monitor all of your NGINX hosts, or all hosts in a certain availability zone, or you can monitor a single key metric being reported by all hosts with a certain tag. This post will show you how to:
- Monitor NGINX metrics on Datadog dashboards, alongside all your other systems
- Set up automated alerts to notify you when a key metric changes dramatically
### Configuring NGINX ###
To collect metrics from NGINX, you first need to ensure that NGINX has an enabled status module and a URL for reporting its status metrics. Step-by-step instructions [for configuring open-source NGINX][2] and [NGINX Plus][3] are available in our companion post on metric collection.
### Integrating Datadog and NGINX ###
#### Install the Datadog Agent ####
The Datadog Agent is [the open-source software][4] that collects and reports metrics from your hosts so that you can view and monitor them in Datadog. Installing the agent usually takes [just a single command][5].
As soon as your Agent is up and running, you should see your host reporting metrics [in your Datadog account][6].
![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png)
#### Configure the Agent ####
Next you'll need to create a simple NGINX configuration file for the Agent. The location of the Agent's configuration directory for your OS can be found [here][7].
Inside that directory, at conf.d/nginx.yaml.example, you will find [a sample NGINX config file][8] that you can edit to provide the status URL and optional tags for each of your NGINX instances:
init_config:
instances:
- nginx_status_url: http://localhost/nginx_status/
tags:
- instance:foo
Once you have supplied the status URLs and any tags, save the config file as conf.d/nginx.yaml.
#### Restart the Agent ####
You must restart the Agent to load your new configuration file. The restart command varies somewhat by platform—see the specific commands for your platform [here][9].
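On Linux hosts running the version 5 Agent of that era, this was typically a standard init-script invocation; treat the exact command as an assumption and confirm it against the linked docs for your platform:

    # Common form on Debian/Ubuntu hosts with Datadog Agent v5; other platforms differ.
    sudo /etc/init.d/datadog-agent restart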
#### Verify the configuration settings ####
To check that Datadog and NGINX are properly integrated, run the Datadog info command. The command for each platform is available [here][10].
If the configuration is correct, you will see a section like this in the output:
Checks
======
[...]
nginx
-----
- instance #0 [OK]
- Collected 8 metrics & 0 events
#### Install the integration ####
Finally, switch on the NGINX integration inside your Datadog account. It's as simple as clicking the "Install Integration" button under the Configuration tab in the [NGINX integration settings][11].
![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png)
### Metrics! ###
Once the Agent begins reporting NGINX metrics, you will see [an NGINX dashboard][12] among your list of available dashboards in Datadog.
The basic NGINX dashboard displays a handful of graphs encapsulating most of the key metrics highlighted [in our introduction to NGINX monitoring][13]. (Some metrics, notably request processing time, require log analysis and are not available in Datadog.)
You can easily create a comprehensive dashboard for monitoring your entire web stack by adding additional graphs with important metrics from outside NGINX. For example, you might want to monitor host-level metrics on your NGINX hosts, such as system load. To start building a custom dashboard, simply clone the default NGINX dashboard by clicking on the gear near the upper right of the dashboard and selecting “Clone Dash”.
![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png)
You can also monitor your NGINX instances at a higher level using Datadog's [Host Maps][14]—for instance, color-coding all your NGINX hosts by CPU usage to identify potential hotspots.
![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png)
### Alerting on NGINX metrics ###
Once Datadog is capturing and visualizing your metrics, you will likely want to set up some monitors to automatically keep tabs on your metrics—and to alert you when there are problems. Below we'll walk through a representative example: a metric monitor that alerts on sudden drops in NGINX throughput.
#### Monitor your NGINX throughput ####
Datadog metric alerts can be threshold-based (alert when the metric exceeds a set value) or change-based (alert when the metric changes by a certain amount). In this case we'll take the latter approach, alerting when our incoming requests per second drop precipitously. Such drops are often indicative of problems.
1. **Create a new metric monitor**. Select "New Monitor" from the "Monitors" dropdown in Datadog. Select "Metric" as the monitor type.
![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png)
2. **Define your metric monitor**. We want to know when our total NGINX requests per second drop by a certain amount. So we define the metric of interest to be the sum of nginx.net.request_per_s across our infrastructure.
![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png)
3. **Set metric alert conditions**. Since we want to alert on a change, rather than on a fixed threshold, we select "Change Alert." We'll set the monitor to alert us whenever the request volume drops by 30 percent or more. Here we use a one-minute window of data to represent the metric's value "now" and alert on the average change across that interval, as compared to the metric's value 10 minutes prior.
![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png)
4. **Customize the notification**. If our NGINX request volume drops, we want to notify our team. In this case we will post a notification in the ops team's chat room and page the engineer on call. In "Say what's happening", we name the monitor and add a short message that will accompany the notification to suggest a first step for investigation. We @mention the Slack channel that we use for ops and use @pagerduty to [route the alert to PagerDuty][15].
![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png)
5. **Save the integration monitor**. Click the "Save" button at the bottom of the page. You're now monitoring a key NGINX [work metric][16], and your on-call engineer will be paged anytime it drops rapidly.
### Conclusion ###
In this post we've walked you through integrating NGINX with Datadog to visualize your key metrics and notify your team when your web infrastructure shows signs of trouble.
If you've followed along using your own Datadog account, you should now have greatly improved visibility into what's happening in your web environment, as well as the ability to create automated monitors tailored to your environment, your usage patterns, and the metrics that are most valuable to your organization.
If you don't yet have a Datadog account, you can sign up for [a free trial][17] and start monitoring your infrastructure, your applications, and your services today.
----------
Source Markdown for this post is available [on GitHub][18]. Questions, corrections, additions, etc.? Please [let us know][19].
------------------------------------------------------------
via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
作者K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus
[4]:https://github.com/DataDog/dd-agent
[5]:https://app.datadoghq.com/account/settings#agent
[6]:https://app.datadoghq.com/infrastructure
[7]:http://docs.datadoghq.com/guides/basic_agent_usage/
[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example
[9]:http://docs.datadoghq.com/guides/basic_agent_usage/
[10]:http://docs.datadoghq.com/guides/basic_agent_usage/
[11]:https://app.datadoghq.com/account/settings#integrations/nginx
[12]:https://app.datadoghq.com/dash/integration/nginx
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/
[15]:https://www.datadoghq.com/blog/pagerduty/
[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
[19]:https://github.com/DataDog/the-monitor/issues


@ -0,0 +1,408 @@
How to monitor NGINX - Part 1
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png)
### What is NGINX? ###
[NGINX][1] (pronounced “engine X”) is a popular HTTP server and reverse proxy server. As an HTTP server, NGINX serves static content very efficiently and reliably, using relatively little memory. As a [reverse proxy][2], it can be used as a single, controlled point of access for multiple back-end servers or for additional applications such as caching and load balancing. NGINX is available as a free, open-source product or in a more full-featured, commercially distributed version called NGINX Plus.
NGINX can also be used as a mail proxy and a generic TCP proxy, but this article does not directly address NGINX monitoring for these use cases.
### Key NGINX metrics ###
By monitoring NGINX you can catch two categories of issues: resource issues within NGINX itself, and also problems developing elsewhere in your web infrastructure. Some of the metrics most NGINX users will benefit from monitoring include **requests per second**, which provides a high-level view of combined end-user activity; **server error rate**, which indicates how often your servers are failing to process seemingly valid requests; and **request processing time**, which describes how long your servers are taking to process client requests (and which can point to slowdowns or other problems in your environment).
More generally, there are at least three key categories of metrics to watch:
- Basic activity metrics
- Error metrics
- Performance metrics
Below we'll break down a few of the most important NGINX metrics in each category, as well as metrics for a fairly common use case that deserves special mention: using NGINX Plus for reverse proxying. We will also describe how you can monitor all of these metrics with your graphing or monitoring tools of choice.
This article references metric terminology [introduced in our Monitoring 101 series][3], which provides a framework for metric collection and alerting.
#### Basic activity metrics ####
Whatever your NGINX use case, you will no doubt want to monitor how many client requests your servers are receiving and how those requests are being processed.
NGINX Plus can report basic activity metrics exactly like open-source NGINX, but it also provides a secondary module that reports metrics slightly differently. We discuss open-source NGINX first, then the additional reporting capabilities provided by NGINX Plus.
**NGINX**
The diagram below shows the lifecycle of a client connection and how the open-source version of NGINX collects metrics during a connection.
![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png)
Accepts, handled, and requests are ever-increasing counters. Active, waiting, reading, and writing grow and shrink with request volume.
注:表格
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">accepts</td>
<td style="text-align: left;">Count of client connections attempted by NGINX</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">handled</td>
<td style="text-align: left;">Count of successful client connections</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">active</td>
<td style="text-align: left;">Currently active client connections</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">dropped (calculated)</td>
<td style="text-align: left;">Count of dropped connections (accepts &ndash; handled)</td>
<td style="text-align: left;">Work: Errors*</td>
</tr>
<tr>
<td style="text-align: left;">requests</td>
<td style="text-align: left;">Count of client requests</td>
<td style="text-align: left;">Work: Throughput</td>
</tr>
<tr>
<td colspan="3" style="text-align: left;">*<em>Strictly speaking, dropped connections is <a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics">a metric of resource saturation</a>, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as <a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics">a work metric</a>.</em></td>
</tr>
</tbody>
</table>
The **accepts** counter is incremented when an NGINX worker picks up a request for a connection from the OS, whereas **handled** is incremented when the worker actually gets a connection for the request (by establishing a new connection or reusing an open one). These two counts are usually the same—any divergence indicates that connections are being **dropped**, often because a resource limit, such as NGINX's [worker_connections][4] limit, has been reached.
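If accepts and handled start to diverge, one common remedy is raising that limit in the events block of nginx.conf; the value below is purely illustrative:

    # nginx.conf: raise the per-worker connection cap (illustrative value).
    events {
        worker_connections  4096;
    }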
Once NGINX successfully handles a connection, the connection moves to an **active** state, where it remains as client requests are processed:
Active state
- **Waiting**: An active connection may also be in a Waiting substate if there is no active request at the moment. New connections can bypass this state and move directly to Reading, most commonly when using “accept filter” or “deferred accept”, in which case NGINX does not receive notice of work until it has enough data to begin working on the response. Connections will also be in the Waiting state after sending a response if the connection is set to keep-alive.
- **Reading**: When a request is received, the connection moves out of the waiting state, and the request itself is counted as Reading. In this state NGINX is reading a client request header. Request headers are lightweight, so this is usually a fast operation.
- **Writing**: After the request is read, it is counted as Writing, and remains in that state until a response is returned to the client. That means that the request is Writing while NGINX is waiting for results from upstream systems (systems “behind” NGINX), and while NGINX is operating on the response. Requests will often spend the majority of their time in the Writing state.
Often a connection will only support one request at a time. In this case, the number of Active connections == Waiting connections + Reading requests + Writing requests. However, the newer SPDY and HTTP/2 protocols allow multiple concurrent requests/responses to be multiplexed over a connection, so Active may be less than the sum of Waiting, Reading, and Writing. (As of this writing, NGINX does not support HTTP/2, but expects to add support during 2015.)
**NGINX Plus**
As mentioned above, all of open-source NGINX's metrics are available within NGINX Plus, but Plus can also report additional metrics. This section covers the metrics that are only available from NGINX Plus.
![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png)
Accepted, dropped, and total are ever-increasing counters. Active, idle, and current track the current number of connections or requests in each of those states, so they grow and shrink with request volume.
注:表格
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">accepted</td>
<td style="text-align: left;">Count of client connections attempted by NGINX</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">dropped</td>
<td style="text-align: left;">Count of dropped connections</td>
<td style="text-align: left;">Work: Errors*</td>
</tr>
<tr>
<td style="text-align: left;">active</td>
<td style="text-align: left;">Currently active client connections</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">idle</td>
<td style="text-align: left;">Client connections with zero current requests</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">total</td>
<td style="text-align: left;">Count of client requests</td>
<td style="text-align: left;">Work: Throughput</td>
</tr>
<tr>
<td colspan="3" style="text-align: left;">*<em>Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.</em></td>
</tr>
</tbody>
</table>
The **accepted** counter is incremented when an NGINX Plus worker picks up a request for a connection from the OS. If the worker fails to get a connection for the request (by establishing a new connection or reusing an open one), then the connection is dropped and **dropped** is incremented. Ordinarily connections are dropped because a resource limit, such as NGINX Plus's [worker_connections][4] limit, has been reached.
**Active** and **idle** are the same as “active” and “waiting” states in open-source NGINX as described [above][5], with one key exception: in open-source NGINX, “waiting” falls under the “active” umbrella, whereas in NGINX Plus “idle” connections are excluded from the “active” count. **Current** is the same as the combined “reading + writing” states in open-source NGINX.
**Total** is a cumulative count of client requests. Note that a single client connection can involve multiple requests, so this number may be significantly larger than the cumulative number of connections. In fact, (total / accepted) yields the average number of requests per connection.
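Assuming the Plus status module's JSON layout (top-level connections and requests objects), that ratio can be computed in one line with curl and jq; verify the field paths against your own /status output:

    # Average requests per connection from the NGINX Plus JSON API.
    # Field paths are assumed from the status module's JSON layout.
    curl -s http://demo.nginx.com/status | jq '.requests.total / .connections.accepted'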
**Metric differences between Open-Source and Plus**
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;">NGINX (open-source)</th>
<th style="text-align: left;">NGINX Plus</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">accepts</td>
<td style="text-align: left;">accepted</td>
</tr>
<tr>
<td style="text-align: left;">dropped must be calculated</td>
<td style="text-align: left;">dropped is reported directly</td>
</tr>
<tr>
<td style="text-align: left;">reading + writing</td>
<td style="text-align: left;">current</td>
</tr>
<tr>
<td style="text-align: left;">waiting</td>
<td style="text-align: left;">idle</td>
</tr>
<tr>
<td style="text-align: left;">active (includes “waiting” states)</td>
<td style="text-align: left;">active (excludes “idle” states)</td>
</tr>
<tr>
<td style="text-align: left;">requests</td>
<td style="text-align: left;">total</td>
</tr>
</tbody>
</table>
**Metric to alert on: Dropped connections**
The number of connections that have been dropped is equal to the difference between accepts and handled (NGINX) or is exposed directly as a standard metric (NGINX Plus). Under normal circumstances, dropped connections should be zero. If your rate of dropped connections per unit time starts to rise, look for possible resource saturation.
![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png)
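As a rough sketch of what such a check can look like with open-source NGINX: assuming the stub_status page described under "Collecting activity metrics" below is exposed at the hypothetical local URL http://127.0.0.1/nginx_status, the third line of its output holds the cumulative accepts, handled, and requests counters, so the dropped count is simply their difference:

    # dropped = accepts - handled; on a healthy server this prints 0
    curl -s http://127.0.0.1/nginx_status | awk 'NR == 3 { print "dropped:", $1 - $2 }'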
**Metric to alert on: Requests per second**
Sampling your request data (**requests** in open-source, or **total** in Plus) with a fixed time interval provides you with the number of requests you're receiving per unit of time, often minutes or seconds. Monitoring this metric can alert you to spikes in incoming web traffic, whether legitimate or nefarious, or sudden drops, which are usually indicative of problems. A drastic change in requests per second can alert you to problems brewing somewhere in your environment, even if it cannot tell you exactly where those problems lie. Note that all requests are counted the same, regardless of their URLs.
![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png)
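As an illustration only, and again assuming the hypothetical local stub_status URL from the previous note, requests per second can be approximated by sampling the cumulative requests counter twice and dividing by the interval:

    # requests/sec over a 60-second window
    r1=$(curl -s http://127.0.0.1/nginx_status | awk 'NR == 3 { print $3 }')
    sleep 60
    r2=$(curl -s http://127.0.0.1/nginx_status | awk 'NR == 3 { print $3 }')
    echo "requests/sec: $(( (r2 - r1) / 60 ))"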
**Collecting activity metrics**
Open-source NGINX exposes these basic server metrics on a simple status page. Because the status information is displayed in a standardized form, virtually any graphing or monitoring tool can be configured to parse the relevant data for analysis, visualization, or alerting. NGINX Plus provides a JSON feed with much richer data. Read the companion post on [NGINX metrics collection][6] for instructions on enabling metrics collection.
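For reference, a minimal configuration sketch that exposes that status page might look like the following; the location name /nginx_status is just a convention, not a requirement, and your NGINX build must include the stub_status module:

    server {
        listen 80;
        location /nginx_status {
            stub_status on;     # serves the basic metrics described above
            access_log  off;
            allow 127.0.0.1;    # restrict the page to local monitoring agents
            deny  all;
        }
    }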
#### Error metrics ####
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
<th style="text-align: left;"><strong>Availability</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">4xx codes</td>
<td style="text-align: left;">Count of client errors</td>
<td style="text-align: left;">Work: Errors</td>
<td style="text-align: left;">NGINX logs, NGINX Plus</td>
</tr>
<tr>
<td style="text-align: left;">5xx codes</td>
<td style="text-align: left;">Count of server errors</td>
<td style="text-align: left;">Work: Errors</td>
<td style="text-align: left;">NGINX logs, NGINX Plus</td>
</tr>
</tbody>
</table>
NGINX error metrics tell you how often your servers are returning errors instead of producing useful work. Client errors are represented by 4xx status codes, server errors by 5xx status codes.
**Metric to alert on: Server error rate**
Your server error rate is equal to the number of 5xx errors divided by the total number of [status codes][7] (1xx, 2xx, 3xx, 4xx, 5xx), per unit of time (often one to five minutes). If your error rate starts to climb over time, investigation may be in order. If it spikes suddenly, urgent action may be required, as clients are likely to report errors to the end user.
![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png)
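As a quick sketch of that calculation, assuming your status codes are being written to the access log in the stock "combined" format (where the status code is the ninth whitespace-separated field; adjust the field number for custom formats):

    # share of 5xx responses among all logged responses
    awk '{ total++; if ($9 ~ /^5/) errors++ }
         END { if (total > 0) printf "5xx rate: %.2f%%\n", 100 * errors / total }' /var/log/nginx/access.log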
A note on client errors: while it is tempting to monitor 4xx, there is limited information you can derive from that metric since it measures client behavior without offering any insight into particular URLs. In other words, a change in 4xx could be noise, e.g. web scanners blindly looking for vulnerabilities.
**Collecting error metrics**
Although open-source NGINX does not make error rates immediately available for monitoring, there are at least two ways to capture that information:
- Use the expanded status module available with commercially supported NGINX Plus
- Configure NGINXs log module to write response codes in access logs
Read the companion post on NGINX metrics collection for detailed instructions on both approaches.
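As a sketch of the second approach: the log_format and access_log directives are standard NGINX, while the format name status_codes and the exact field layout here are just example choices:

    http {
        log_format status_codes '$remote_addr [$time_local] "$request" $status';
        access_log /var/log/nginx/access.log status_codes;
    }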
#### Performance metrics ####
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
<th style="text-align: left;"><strong>Availability</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">request time</td>
<td style="text-align: left;">Time to process each request, in seconds</td>
<td style="text-align: left;">Work: Performance</td>
<td style="text-align: left;">NGINX logs</td>
</tr>
</tbody>
</table>
**Metric to alert on: Request processing time**
The request time metric logged by NGINX records the processing time for each request, from the reading of the first client bytes to fulfilling the request. Long response times can point to problems upstream.
**Collecting processing time metrics**
NGINX and NGINX Plus users can capture data on processing time by adding the $request_time variable to the access log format. More details on configuring logs for monitoring are available in our companion post on [NGINX metrics collection][8].
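A minimal sketch of that change is below; $request_time is the real NGINX variable (reported in seconds, with millisecond resolution), while the format name timed is just an example:

    http {
        log_format timed '$remote_addr [$time_local] "$request" $status $request_time';
        access_log /var/log/nginx/access.log timed;
    }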
#### Reverse proxy metrics ####
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
<th style="text-align: left;"><strong>Availability</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Active connections by upstream server</td>
<td style="text-align: left;">Currently active client connections</td>
<td style="text-align: left;">Resource: Utilization</td>
<td style="text-align: left;">NGINX Plus</td>
</tr>
<tr>
<td style="text-align: left;">5xx codes by upstream server</td>
<td style="text-align: left;">Server errors</td>
<td style="text-align: left;">Work: Errors</td>
<td style="text-align: left;">NGINX Plus</td>
</tr>
<tr>
<td style="text-align: left;">Available servers per upstream group</td>
<td style="text-align: left;">Servers passing health checks</td>
<td style="text-align: left;">Resource: Availability</td>
<td style="text-align: left;">NGINX Plus</td>
</tr>
</tbody>
</table>
One of the most common ways to use NGINX is as a [reverse proxy][9]. The commercially supported NGINX Plus exposes a large number of metrics about backend (or “upstream”) servers, which are relevant to a reverse proxy setup. This section highlights a few of the key upstream metrics that are available to users of NGINX Plus.
NGINX Plus segments its upstream metrics first by group, and then by individual server. So if, for example, your reverse proxy is distributing requests to five upstream web servers, you can see at a glance whether any of those individual servers is overburdened, and also whether you have enough healthy servers in the upstream group to ensure good response times.
**Activity metrics**
The number of **active connections per upstream server** can help you verify that your reverse proxy is properly distributing work across your server group. If you are using NGINX as a load balancer, significant deviations in the number of connections handled by any one server can indicate that the server is struggling to process requests in a timely manner, or that the load-balancing method (e.g., [round-robin or IP hashing][10]) you have configured is not optimal for your traffic patterns.
**Error metrics**
Recall from the error metric section above that 5xx (server error) codes are a valuable metric to monitor, particularly as a share of total response codes. NGINX Plus allows you to easily extract the number of **5xx codes per upstream server**, as well as the total number of responses, to determine that particular servers error rate.
**Availability metrics**
For another view of the health of your web servers, NGINX Plus also makes it simple to monitor the health of your upstream groups via the total number of **servers currently available within each group**. In a large reverse proxy setup, you may not care very much about the current state of any one server, just as long as your pool of available servers is capable of handling the load. But monitoring the total number of servers that are up within each upstream group can provide a very high-level view of the aggregate health of your web servers.
**Collecting upstream metrics**
NGINX Plus upstream metrics are exposed on the internal NGINX Plus monitoring dashboard, and are also available via a JSON interface that can serve up metrics into virtually any external monitoring platform. See examples in our companion post on [collecting NGINX metrics][11].
### Conclusion ###
In this post we've touched on some of the most useful metrics you can monitor to keep tabs on your NGINX servers. If you are just getting started with NGINX, monitoring most or all of the metrics in the list below will provide good visibility into the health and activity levels of your web infrastructure:
- [Dropped connections][12]
- [Requests per second][13]
- [Server error rate][14]
- [Request processing time][15]
Eventually you will recognize additional, more specialized metrics that are particularly relevant to your own infrastructure and use cases. Of course, what you monitor will depend on the tools you have and the metrics available to you. See the companion post for [step-by-step instructions on metric collection][16], whether you use NGINX or NGINX Plus.
At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][17], and get started right away with [a free trial of Datadog][18].
### Acknowledgments ###
Many thanks to the NGINX team for reviewing this article prior to publication and providing important feedback and clarifications.
----------
Source Markdown for this post is available [on GitHub][19]. Questions, corrections, additions, etc.? Please [let us know][20].
--------------------------------------------------------------------------------
via: https://www.datadoghq.com/blog/how-to-monitor-nginx/
作者K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://nginx.org/en/
[2]:http://nginx.com/resources/glossary/reverse-proxy-server/
[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/
[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections
[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state
[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[9]:https://en.wikipedia.org/wiki/Reverse_proxy
[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/
[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second
[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate
[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
[20]:https://github.com/DataDog/the-monitor/issues

View File

@ -0,0 +1,188 @@
zpl1025
Howto Configure FTP Server with Proftpd on Fedora 22
================================================================================
In this article, we'll learn how to set up an FTP server with Proftpd on a machine or server running Fedora 22. [ProFTPD][1] is a free and open source FTP daemon licensed under the GPL, and it is among the most popular FTP servers on Linux machines. Its primary design goal is an FTP server with many advanced features and plenty of configuration options for easy customization, including a number of options that are still not available in many other FTP daemons. It was initially developed as an alternative to the wu-ftpd server with better security and configurability. An FTP server is a program that, once set up, allows us to upload and download files and folders from a remote server using an FTP client. Some of the features of the ProFTPD daemon are listed below; you can find more at [http://www.proftpd.org/features.html][2].
- It includes a per directory ".ftpaccess" access configuration similar to Apache's ".htaccess"
- It supports multiple virtual FTP servers, multiple user logins, and anonymous FTP services.
- It can be run either as a stand-alone server or from inetd/xinetd.
- Its ownership, file/folder attributes and file/folder permissions are UNIX-based.
- It can be run in standalone mode in order to protect the system from damage that could be caused by root access.
- Its modular design makes it easily extensible with modules for LDAP authentication, SSL/TLS encryption, RADIUS support, and so on.
- IPv6 support is also included in the ProFTPD server.
Here are some easy steps we can follow to set up an FTP server with ProFTPD on the Fedora 22 operating system.
### 1. Installing ProFTPD ###
First of all, we'll install the Proftpd server on our box running Fedora 22 as its operating system. As the yum package manager has been deprecated, we'll use the latest package manager, called DNF. DNF is an easy-to-use, highly user-friendly package manager available in Fedora 22. We'll simply use it to install the proftpd daemon server. To do so, we'll run the following command in a terminal or console in sudo mode.
$ sudo dnf -y install proftpd proftpd-utils
### 2. Configuring ProFTPD ###
Now, we'll make changes to some of the daemon's configuration. The main configuration file of the ProFTPD daemon is **/etc/proftpd.conf**, so any changes made to this file will affect the FTP server. Here are some changes we make in this initial step; first, open the file with a text editor.
$ sudo vi /etc/proftpd.conf
Next, after opening the file in a text editor, we'll set ServerName and ServerAdmin to our hostname and email address, respectively. Here are the changes we made to those settings.
ServerName "ftp.linoxide.com"
ServerAdmin arun@linoxide.com
After that, we'll add the following lines to the configuration file so that it logs access and auth events to their specified log files.
ExtendedLog /var/log/proftpd/access.log WRITE,READ default
ExtendedLog /var/log/proftpd/auth.log AUTH auth
![Configuring ProFTPD Config](http://blog.linoxide.com/wp-content/uploads/2015/06/configuring-proftpd-config.png)
### 3. Adding FTP users ###
After configuring the basics of the configuration file, we'll want to create an FTP user rooted at a specific directory. The current users that we use to log into our machine are automatically enabled for the FTP service, and we can even use them to log into the FTP server. But in this tutorial, we'll create a new user with a specified home directory for the FTP server.
Here, we'll create a new group named ftpgroup.
$ sudo groupadd ftpgroup
Then, we'll add a new user arunftp to the group, with the home directory set to /ftp-dir/ and the shell set to /sbin/nologin so that the account cannot be used for interactive logins.
$ sudo useradd -G ftpgroup arunftp -s /sbin/nologin -d /ftp-dir/
After the user has been created and added to the group, we'll set a password for the user arunftp.
$ sudo passwd arunftp
Changing password for user arunftp.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Now, we'll enable the SELinux booleans that give FTP users read and write access to their home directories by executing the following commands.
$ sudo setsebool -P allow_ftpd_full_access=1
$ sudo setsebool -P ftp_home_dir=1
Then, we'll set the sticky bit on that directory so that its contents cannot be removed or renamed by anyone other than their owners.
$ sudo chmod -R 1777 /ftp-dir/
### 4. Enabling TLS Support ###
FTP is considered less secure than the encryption methods in common use these days, as anybody sniffing the network can read the data passing through FTP. So, we'll enable TLS encryption support in our FTP server. To do so, we'll need to edit the /etc/proftpd.conf configuration file. Before that, we'll back up our existing configuration file to make sure we can revert the configuration if anything unexpected happens.
$ sudo cp /etc/proftpd.conf /etc/proftpd.conf.bak
Then, we'll edit the configuration file using our favorite text editor.
$ sudo vi /etc/proftpd.conf
Then, we'll add the following lines just below the lines we configured in step 2.
TLSEngine on
TLSRequired on
TLSProtocol SSLv23
TLSLog /var/log/proftpd/tls.log
TLSRSACertificateFile /etc/pki/tls/certs/proftpd.pem
TLSRSACertificateKeyFile /etc/pki/tls/certs/proftpd.pem
![Enabling TLS Configuration](http://blog.linoxide.com/wp-content/uploads/2015/06/tls-configuration.png)
After finishing up with the configuration, we'll save and exit.
Next, we'll need to generate an SSL certificate as proftpd.pem inside the **/etc/pki/tls/certs/** directory. To do so, first we'll need to install openssl on our Fedora 22 machine.
$ sudo dnf install openssl
Then, we'll generate the SSL certificate by running the following command.
$ sudo openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/pki/tls/certs/proftpd.pem -out /etc/pki/tls/certs/proftpd.pem
We'll be asked for some information that will be incorporated into the certificate. After we complete the required information, it will generate a 2048-bit RSA private key.
Generating a 2048 bit RSA private key
...................+++
...................+++
writing new private key to '/etc/pki/tls/certs/proftpd.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:NP
State or Province Name (full name) []:Narayani
Locality Name (eg, city) [Default City]:Bharatpur
Organization Name (eg, company) [Default Company Ltd]:Linoxide
Organizational Unit Name (eg, section) []:Linux Freedom
Common Name (eg, your name or your server's hostname) []:ftp.linoxide.com
Email Address []:arun@linoxide.com
After that, we'll restrict the permissions of the generated certificate file in order to secure it.
$ sudo chmod 600 /etc/pki/tls/certs/proftpd.pem
### 5. Allowing FTP through Firewall ###
Now, we'll need to open the FTP ports, which are usually blocked by the firewall by default. So, we'll allow the ports and enable access to FTP through the firewall.
If **TLS/SSL encryption is enabled**, run the following commands.
$ sudo firewall-cmd --add-port=1024-65534/tcp
$ sudo firewall-cmd --add-port=1024-65534/tcp --permanent
If **TLS/SSL encryption is disabled**, run the following command.
$ sudo firewall-cmd --permanent --zone=public --add-service=ftp
success
Then, we'll need to reload the firewall configuration.
$ sudo firewall-cmd --reload
success
### 6. Starting and Enabling ProFTPD ###
After everything is set, we'll finally start our ProFTPD server and give it a try. To start the proftpd daemon, we'll run the following command.
$ sudo systemctl start proftpd.service
Then, we'll enable proftpd to start on every boot.
$ sudo systemctl enable proftpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/proftpd.service to /usr/lib/systemd/system/proftpd.service.
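Before moving on, it may be worth confirming that the daemon actually came up; this uses standard systemd tooling, nothing Proftpd-specific.

    $ sudo systemctl status proftpd.service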
### 7. Logging into the FTP server ###
Now, if everything was configured as expected, we should be able to connect to the FTP server and log in with the details we set above. Here, we'll configure our FTP client, FileZilla, with the hostname set to the **server's IP or URL**, the protocol set to **FTP**, the user set to **arunftp**, and the password set to the one from step 3 above. If you followed step 4 to enable TLS support, you'll need to set the encryption type to **Require explicit FTP over TLS**; if you didn't follow step 4 and don't want to use TLS encryption, set the encryption type to **Plain FTP**.
![FTP Login Details](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png)
To enter the above configuration, we'll need to go to the File menu, click on Site Manager, and then click on New Site, where we can configure the site as illustrated above.
![FTP SSL Certificate](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-ssl-certificate.png)
Then, we're asked to accept the SSL certificate, which can be done by clicking OK. After that, we are able to upload and download the required files and folders from our FTP server.
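If we prefer to test from a terminal instead of FileZilla, a quick sketch using the lftp client works too (assuming lftp is installed; the ssl-force and certificate settings below match the self-signed TLS setup from step 4):

    $ lftp -u arunftp -e "set ftp:ssl-force true; set ssl:verify-certificate no; ls; bye" ftp.linoxide.com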
### Conclusion ###
Finally, we have successfully installed and configured a Proftpd FTP server on our Fedora 22 box. Proftpd is a powerful, highly configurable and extensible FTP daemon. The tutorial above illustrates how we can configure a secure FTP server with TLS encryption. It is highly recommended to configure an FTP server with TLS encryption, as it secures the data transfer and login with SSL certificates. Here, we haven't configured anonymous access to the FTP server, because it is usually not recommended on a protected FTP system. FTP access makes it pretty easy for people to upload and download with good performance. We can even change the ports used, for additional security. So, if you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://www.proftpd.org/
[2]:http://www.proftpd.org/features.html

View File

@ -1,85 +0,0 @@
XLCYun translating.
How To Fix System Program Problem Detected In Ubuntu 14.04
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg)
For the last couple of weeks, (almost) every time I booted I was greeted with **system program problem detected on startup in Ubuntu 15.04**. I ignored it for some time, but it was quite annoying after a certain point. You won't be too happy either if you are greeted by a pop-up displaying this every time you boot into the system:
> System program problem detected
>
> Do you want to report the problem now?
>
> ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/System_Program_Problem_Detected.png)
If you are an Ubuntu user, you have surely faced this annoying pop-up at some point. In this post we are going to see what to do with the "system program problem detected" report in Ubuntu 14.04 and 15.04.
### What to do with “system program problem detected” error in Ubuntu? ###
#### So what exactly is this notifier all about? ####
Basically, this notifies you of a crash in your system. Don't panic at the word crash. It's not a major issue and your system is very much usable. It's just that some program crashed at some time in the past, and Ubuntu wants you to decide whether or not you want to report this crash to the developers so that they can fix the issue.
#### So, we click on Report problem and it will vanish? ####
No, not really. Even if you click on report problem, you'll ultimately be greeted with a pop-up like this:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Internal_error.png)
[Sorry, Ubuntu has experienced an internal error][1] is the Apport dialog that will then open a web browser, where you can file a bug report by logging in to or creating an account with [Launchpad][2]. You see, it is a complicated procedure which takes around four steps to complete.
#### But, I want to help developers and let them know of the bugs! ####
That's very thoughtful of you and the right thing to do. But there are two issues here. First, there is a high chance that the bug has already been reported. Second, even if you take the pain of reporting the crash, there is no guarantee that you won't see it again.
#### So, you suggesting to not report the crash? ####
Yes and no. Report the crash when you see it the first time, if you want. You can see the crashing program under "Show Details" in the above picture. But if you see it repeatedly, or if you do not want to report the bug, I advise you to get rid of the system crash report once and for all.
### Fix “system program problem detected” error in Ubuntu ###
The crash reports are stored in the /var/crash directory in Ubuntu. If you look into this directory, you should see some files ending with .crash.
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Crash_reports_Ubuntu.jpeg)
What I suggest is that you delete these crash reports. Open a terminal and use the following command:
sudo rm /var/crash/*
This will delete all the contents of the /var/crash directory. This way you won't be annoyed by pop-ups for program crashes that happened in the past. But if a program crashes again, you'll once more see the "system program problem detected" error. You can either remove the crash reports again, like we just did, or you can disable Apport (the debug tool) and permanently get rid of the pop-ups.
#### Permanently get rid of system error pop up in Ubuntu ####
If you do this, you'll never be notified about any program crash that happens on the system. If you ask my view, I would say that's not such a bad thing unless you are willing to file bug reports. If you have no intention of filing bug reports, the absence of crash notifications will make no difference.
To disable Apport and get rid of system crash reports completely, open a terminal and use the following command to edit the Apport settings file:
gksu gedit /etc/default/apport
The content of the file is:
# set this to 0 to disable apport, or to 1 to enable it
# you can temporarily override this with
# sudo service apport start force_start=1
enabled=1
Change **enabled=1** to **enabled=0**. Save and close the file. You won't see any pop-ups for crash reports after doing this. Obviously, if you want to enable the crash reports again, you just need to edit the same file and set enabled back to 1.
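If you'd rather make the same change from the terminal, a one-line sketch (assuming the stock file contents shown above) is:

    sudo sed -i 's/enabled=1/enabled=0/' /etc/default/apport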
#### Did it work for you? ####
I hope this tutorial helped you fix the "system program problem detected" error in Ubuntu 14.04 and Ubuntu 15.04. Let me know if this tip helped you get rid of this annoyance.
--------------------------------------------------------------------------------
via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/how-to-solve-sorry-ubuntu-12-04-has-experienced-an-internal-error/
[2]:https://launchpad.net/

View File

@ -0,0 +1,86 @@
安卓编年史
================================================================================
![姜饼的新键盘,文本选择,边界回弹效果以及新复选框。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/3kb-high-over-check.png)
姜饼的新键盘,文本选择,边界回弹效果以及新复选框。
Ron Amadeo 供图
安卓2.3最重要的新增功能就是系统全局文本选择界面你可以在左侧截图的谷歌搜索栏看到它。长按一个词能使其变为橙色高亮并且出现可拖拽的小标签长按高亮部分会弹出剪切复制和粘贴选项。之前的方法使用的是依赖于十字方向键的控制但现在有了触摸文本选择Nexus S 不再需要额外的硬件控件。
左侧截图右半边展示的是新的复选框设计和边界回弹效果。冻酸奶2.2的复选框像个灯泡——选中时显示一个绿色的勾,未选中的时候显示灰色的勾。姜饼在选项关闭的时候显示一个空的选框——这显得更有意义。姜饼是第一个拥有滚动到底发光效果的版本。当到达列表底部的时候会有一道橙色的光晕,你越往上拉光晕越明显。列表上拉滚动反弹也许最直观,但那是苹果的专利。
![新拨号界面和对话框设计。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/dialdialog.png)
新拨号界面和对话框设计。
Ron Amadeo 供图
姜饼里的拨号受到了稍微多点的照顾。它变得更暗了,并且谷歌终于解决了原本的直角,圆角以及圆形的结合问题。现在所有的边角都是直角了。所有的拨号按键被替换成了带有奇怪下划线的样式,像是用边角料拼凑的。你永远无法确定是否看到了一个按钮——我们的大脑得想象出按钮形状的剩余部分。
图中的无线网络对话框可以看作是剩下的系统全局改动的样本。所有的对话框标题从灰色变为黑色,对话框,下拉框以及按钮边缘都变成了直角,各部分色调都变暗了一点。所有的这些全局变化使得姜饼看起来不像原来那样活泼,而是更加地成熟。“到处都是黑色”的外观必然不是最受欢迎的,但它无疑看起来比安卓之前的灰色和米色的配色方案好多了。
![新市场,添加了大块的绿色页面顶栏。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/4market.png)
新市场,添加了大块的绿色页面顶栏。
Ron Amadeo 供图
新版系统带来了“安卓市场 2.0”,虽然它不是姜饼独占的。主要的列表设计和原来一致,但谷歌将屏幕上部三分之一覆盖上了大块的绿色横幅,用来展示热门应用以及导航。这里主要的设计灵感也许是绿色的安卓吉祥物——它们的颜色完美匹配。在系统设计偏向暗色系的时候,霓虹灯般的绿色横幅和白色列表让市场明快得多。
但是,相同的绿色背景图片被用在了不同的手机上,这意味着在低分辨率设备上,绿色横幅看起来更加的大。不少用户抱怨这浪费了屏幕空间,于是随后的更新使得绿色横幅跟随内容向上滚动。在那时,横屏模式更加糟糕——绿色横幅会填满剩下的半个屏幕。
![市场的一个带有可折叠描述的应用详情页面“我的应用”界面以及Google Books界面截图。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/5rest-of-market-and-books.png)
市场的一个带有可折叠描述的应用详情页面,“我的应用”界面,以及 Google Books 界面截图。
Ron Amadeo供图
应用详情页面经过重新设计有了可折叠部分。文本描述只截取前几行展示,向下滑动页面不用再穿过数千行的描述。简短的描述后有一个“更多”按钮可供点击来显示完整的描述。这让用户可以轻松地滑动过列表找到像是截图和“联系开发者”部分,这些部分通常在页面偏下部分。
安卓主屏的其它部分明智地淡化了绿色机器人元素。市场应用的剩余部分绝大多数仅仅只是旧版市场加上新的绿色导航元素。旧有的标签界面升级成了可滑动切换标签。在姜饼右侧截图中,从右向左滑动将会从“热门付费”切换至“热门免费”,这使得导航变得更加方便。
姜饼带来了将会成为 Google Play 内容商店第一位成员的应用Google Books。这个应用是个基础的电子书阅读器会将书籍以简单的预览图平铺展示。屏幕顶部的“获取 eBooks”链接会打开浏览器然后加载一个你可以在上面购买电子书的移动网站。
Google Books 以及市场的“我的应用”页面都是 Action Bar 的原型。就像现在的指南中写的,页面有一个带应用图标的固定置顶栏,应用内页面的名称,以及一些控件。这两个应用的布局实际上看起来十分现代,和现在的界面相似。
![新版谷歌地图](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps1.png)
新版谷歌地图。
Ron Amadeo供图
谷歌地图(再重复一次,这时候的谷歌地图是在安卓市场中的,并且不是这个安卓版本独占的)拥有了另一个操作栏原型,是一个顶部对齐的控件栏。这个早期版本的操作栏拥有许多试验性功能。功能栏主要被一个搜索框所占据,但是你永远无法向其输入内容。点击搜索框会打开安卓 1.x 版本以来的旧搜索界面它带有完全不同的操作栏设计和活泼的按钮。2.3 版本的顶栏仅仅只是个大号的搜索按钮而已。
![从黑变白的新 business 页面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps2-Im-hungry.png)
从黑变白的新 business 页面。
Ron Amadeo 供图
应用抽屉里和地点一起到来的热门商家重新设计了界面。不像姜饼的其它部分,它从黑色转换成了白色。谷歌还给它保留了圆角的旧按钮。这个新版本的地图能显示商家的营业时间,并且提供高级搜索选项,比如正在营业或是通过评分或价格限定搜索范围。点评被调整到了商家详情页面,用户可以更容易地对当前商家有个直观感受。而且现在还可以从搜索结果中给某个地点加星,保存起来以后使用。
![新 YouTube 设计,神奇的是有点像旧版地图的商家页面的设计。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/youtube22.png)
新 YouTube 设计,神奇的是有点像旧版地图的商家页面的设计。
Ron Amadeo供图
YouTube 应用似乎完全与安卓的其它部分分离开来就像是设计它的人完全不知道姜饼最终会是什么样子一样。高亮是红色和灰色方案而不是绿色和橙色而且不像扁平黑色风格的姜饼Youtube 有着气泡状的,带有圆角并且大幅使用渐变效果的按钮,标签以及操作栏。尽管如此,新应用还是有一些正确的地方。所有的标签可以水平滑动切换,而且应用终于提供了竖屏观看视频模式。安卓在那个阶段似乎工作不是很一致。就像是有人告诉 Youtube 团队“把它做成黑色的”,然后这就是全部的指导方向一样。唯一一个与其相似的安卓实体就是旧版谷歌地图的商家页面的设计。
尽管有些奇怪的设计Youtube 应用有着最接近操作栏的顶栏设计。除了顶部操作栏的应用图标和一些按钮,最右侧还有个标着“更多”字样的按钮,点击它可以打开因为过多而无法装进操作栏的选项。在今天,这被称作“更多操作”按钮,它是个标准界面控件。
![新 Google Talk支持语音和视频通话以及新语音命令界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/talkvoice.png)
新 Google Talk支持语音和视频通话以及新语音命令界面。
Ron Amadeo供图
姜饼的最后一个更新是安卓 2.3.4,它带来了新版 Google Talk。不像 Nexus OneNexus S 带有前置摄像头——重新设计的 Google Talk 拥有语音和视频通话功能。好友列表右侧的彩色指示不仅指明在线状态,还显示了语音和视频的可用性。一个点表示仅文本信息,一个麦克风表示文本信息或语音,一个摄像机表示支持文本信息,语音以及视频。如果可用的话,点击语音或视频图标会立即向好友发起通话。
姜饼是谷歌仍然提供支持的最老的安卓版本。激活一部姜饼设备并放置一会儿会收到大量更新。姜饼会拉取 Google Play 服务,它会带来许多新的 API 支持,并且会升级到最新版本的 Play 商店。打开 Play 商店并点击更新按钮,几乎每个独立谷歌应用都会被替换为更加现代的版本。我们尝试着保持这篇文章讲述的是姜饼发布时的样子,但时至今日还停留在姜饼的用户会被认为有点跟不上时代了。
姜饼如今仍然能够得到支持,因为有数量可观的用户仍然在使用这个有点过时的系统。姜饼仍然存在的能量来自于它极低的系统要求,使得它成为了低端廉价设备的最佳选择。下个版本的安卓对硬件的要求变得更高了。举个例子,安卓 3.0 蜂巢不是开源的这意味着它只能在谷歌的协助之下移植到一个设备上。同时它还是只为平板设计的这让姜饼作为最新的手机安卓版本存在了很长一段时间。4.0 冰淇淋三明治是下一个手机版本,但它显著地提高了安卓系统要求,抛弃了低端市场。谷歌现在希望借 4.4 KitKat奇巧巧克力重回廉价手机市场它的系统要求降回了 512MB 内存。时间的推移同样有所帮助——如今,就算是廉价的系统级芯片都能满足安卓 4.0 时代的系统要求。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron是Ars Technica的评论编辑专注于安卓系统和谷歌产品。他总是在追寻新鲜事物还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/15/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -0,0 +1,66 @@
安卓编年史
================================================================================
### 安卓 3.0 蜂巢—平板和设计复兴 ###
尽管姜饼中做了许多改变,安卓仍然是移动世界里的丑小鸭。相比于 iPhone它的优雅程度和设计完全抬不起头。另一方面来说为数不多的能与 iOS 的美学智慧相当的操作系统之一是 Palm 的 WebOS。WebOS 有着优秀的整体设计,创新的功能,而且被寄予期望能够从和 iPhone 的长期竞争中拯救公司。
尽管如此一年之后Palm 资金链断裂。Palm 公司从未看到 iPhone 的到来,到 WebOS 就绪的时候已经太晚了。2010年4月惠普花费10亿美元收购了 Palm。尽管惠普收购了一个拥有优秀用户界面的产品界面的首席设计师Matias Duarte并没有加入惠普公司。2010年5月就在惠普接手 Palm 之前Duarte 加入了谷歌。惠普买下了面包,但谷歌雇佣了它的烘培师。
![第一部蜂巢设备,摩托罗拉 Xoom 10英寸平板。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Motorola-XOOM-MZ604.jpg)
第一部蜂巢设备,摩托罗拉 Xoom 10英寸平板。
在谷歌Duarte 被任命为安卓用户体验主管。这是第一次有人公开掌管安卓的外观。尽管 Matias 在安卓 2.2 发布时就来到了谷歌,第一个真正受他影响的安卓版本是 3.0 蜂巢它在2011年2月发布。
按谷歌自己的说法蜂巢是匆忙问世的。10个月前苹果发布了 iPad让平板变得更加现代谷歌希望能够尽快做出回应。蜂巢就是那个回应一个运行在10英寸触摸屏上的安卓版本。悲伤的是将这个系统推向市场是如此优先的事项以至于边边角角都被砍去了以节省时间。
新系统只用于平板——手机不能升级到蜂巢这加大了谷歌让系统运行在差异巨大的不同尺寸屏幕上的难度。但是仅支持平板而不支持手机使得蜂巢源码没有泄露。之前的安卓版本是开源的这使得黑客社区能够将其最新版本移植到所有的不同设备之上。谷歌不希望应用开发者在支持不完美的蜂巢手机移植版本时感到压力所以谷歌将源码留在自己手中并且严格控制能够拥有蜂巢的设备。匆忙的开发还导致了软件问题。在发布时蜂巢不是特别稳定SD卡不能工作Adobe Flash——安卓最大的特色之一——还不被支持。
[摩托罗拉 Xoom][1]是为数不多的拥有蜂巢的设备之一它是这个新系统的旗舰产品。Xoom 是一个10英寸16:9 的平板,拥有 1GB 内存和 1GHz Tegra 2 双核处理器。尽管是由谷歌直接控制更新的新版安卓发布设备它并没有被叫做“Nexus”。对此最可能的原因是谷歌对它没有足够的信心称其为旗舰。
尽管如此,蜂巢是安卓的一个里程碑。在一个体验设计师的主管之下,整个安卓用户界面被重构,绝大多数奇怪的应用设计都得到改进。安卓的默认应用终于看起来像整体的一部分,不同的界面有着相似的布局和主题。然而重新设计安卓会是一个跨版本的项目——蜂巢只是将安卓塑造成型的开始。这第一份草稿为安卓未来版本的样子做了基础设计,但它也用了过多的科幻主题,谷歌将花费接下来的数个版本来淡化它。
![蜂巢和姜饼的主屏幕。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/homeskreen.png)
蜂巢和姜饼的主屏幕。
Ron Amadeo供图
姜饼只是在它的量子壁纸上试验了科幻外观,蜂巢整个系统的以电子为灵感的主题让它充满科幻意味。所有东西都是黑色的,如果你需要对比色,你可以从一些不同色调的蓝色中挑选。所有蓝色的东西还有“光晕”效果,让整个系统看起来像是外星科技创造的。默认背景是个六边形的全息方阵(一个蜂巢!明白了吗?),看起来像是一艘飞船上的传送阵的地板。
蜂巢最重要的变化是增加了系统栏。摩托罗拉 Xoom 除了电源和音量键之外没有配备实体按键,所以蜂巢添加了一个大黑色底栏到屏幕底部,用于放置导航按键。这意味着默认安卓界面不再需要特别的实体按键。在这之前,安卓没有实体的返回,菜单和 Home 键就不能正常工作。现在,软件提供了所有必需的按钮,任何带有触摸屏的设备都能够运行安卓。
新软件按键带来的最大的好处是灵活性。新的应用指南表明应用应不再要求实体菜单按键需要用到的时候蜂巢会自动检测并添加四个按钮到系统栏让应用正常工作。另一个软件按键的灵活属性是它们可以改变设备的屏幕方向。除了电源和音量键之外Xoom 的方向实际上不是那么重要。从用户的角度来看系统栏始终处于设备的“底部”。代价是系统栏明显占据了一些屏幕空间。为了在10英寸平板上节省空间状态栏被合并到了系统栏中。所有的常用状态指示放在了右侧——有电源连接状态时间还有通知图标。
主屏幕的整个布局都改变了,用户界面部件放在了设备的四个角落。屏幕底部左侧放置着之前讨论过的导航按键,右侧用于状态指示和通知,顶部左侧显示的是文本搜索和语音搜索,右侧有应用抽屉和添加小部件的按钮。
![新锁屏界面和最近应用界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/lockscreen-and-recent.png)
新锁屏界面和最近应用界面。
Ron Amadeo供图
(因为 Xoom 是一部 [较重] 的10英寸16:9平板设备这意味着它主要是横屏使用。虽然大部分应用还支持竖屏模式但是到目前为止由于我们的版式限制我们大部分使用的是竖屏模式的截图。请记住蜂巢的截图来自于10英寸的平板而姜饼的截图来自3.7英寸的手机。二者所展现的信息密度是不能直接比较的。)
解锁界面——从菜单按钮到旋转式拨号盘再到滑动解锁——移除了解锁步骤的任何精度要求,它采用了一个环状解锁盘。从中间向任意方向向外滑动就能解锁设备。就像旋转式解锁,这种解锁方式更加符合人体工程学,而不用强迫你的手指完美地遵循一条笔直的解锁路径。
第二张图中略缩图条带是由新增的“最近应用”按钮打开的界面,现在处在返回和 Home 键旁边。不像姜饼中长按 Home 键显示一组最近应用的图标,蜂巢在屏幕上显示应用图标和略缩图,使得在任务间切换变得更加方便。最近应用的灵感明显来自于 Duarte 在 WebOS 中的“卡片式”多任务管理,其使用全屏略缩图来切换任务。这个设计提供和 WebOS 的任务切换一样的易识别体验,但更小的略缩图允许更多的应用一次性显示在屏幕上。
尽管最近应用的实现看起来和你现在的设备很像,这个版本实际上是非常早期的。这个列表不能滚动,这意味着竖屏下只能显示七个应用,横屏下只能显示五个。任何超出范围的应用会从列表中去除。而且你也不能通过滑动略缩图来关闭应用——这只是个静态的列表。
这里我们看到电子灵感影响的完整主题效果:略缩图的周围有蓝色的轮廓以及神秘的光晕。这张截图还展示软件按键的好处——上下文。返回按钮可以关闭略缩图列表,所以这里的箭头指向下方,而不是通常的样子。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron是Ars Technica的评论编辑专注于安卓系统和谷歌产品。他总是在追寻新鲜事物还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/16/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2011/03/ars-reviews-the-motorola-xoom/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -0,0 +1,206 @@
Syncthing: 一个跨计算机的私人的文件/文件夹安全同步工具
================================================================================
### 简介 ###
**Syncthing** 是一个免费开源的工具,它能在你的各个网络计算机间同步文件/文件夹。它不像其它的同步工具,如**BitTorrent Sync**和**Dropbox**那样,它的同步数据是直接从一个系统中直接传输到另一个系统的,并且它是完全开源的,安全且私有的。你所有的珍贵数据都会被存储在你的系统中,这样你就能对你的文件和文件夹拥有全面的控制权,没有任何的文件或文件夹会被存储在第三方系统中。此外,你有权决定这些数据该存于何处,是否要分享到第三方,或这些数据在互联网上的传输方式。
所有的信息通讯都使用TLS进行加密这样你的数据便能十分安全地逃离窥探。Syncthing有一个强大的响应式的网页管理界面(WebGUI下同)它能够帮助用户简便地添加删除和管理那些通过网络进行同步的文件夹。通过使用Syncthing你可以在多个系统上一次同步多个文件夹。在安装和使用上Syncthing是一个可移植的简单但强大的工具。即然文件或文件夹是从一部计算机中直接传输到另一计算机中的那么你就无需考虑向云服务供应商支付金钱来获取额外的云空间。你所需要的仅仅是非常稳定的LAN/WAN连接和你的系统中足够的硬盘空间。它支持所有的现代操作系统包括GNU/Linux, Windows, Mac OS X, 当然还有Android。
### 安装 ###
基于本文的目的我们将使用两个系统一个是Ubuntu 14.04 LTS, 一个是Ubuntu 14.10 server。为了简单辨别这两个系统我们将分别称其为**系统1**和**系统2**。
### 系统1细节 ###
- **操作系统**: Ubuntu 14.04 LTS server;
- **主机名**: server1.unixmen.local;
- **IP地址**: 192.168.1.150.
- **系统用户**: sk (你可以使用你自己的系统用户)
- **同步文件夹**: /home/Sync/ (Syncthing会默认创建)
### 系统2细节 ###
- **操作系统**: Ubuntu 14.10 server;
- **主机名**: server.unixmen.local;
- **IP地址**: 192.168.1.151.
- **系统用户**: sk (你可以使用你自己的系统用户)
- **同步文件夹**: /home/Sync/ (Syncthing会默认创建)
### 在系统1和系统2上为Syncthing创建用户 ###
在两个系统上运行下面的命令来为Syncthing创建用户以及两系统间的同步文件夹。
sudo useradd sk
sudo passwd sk
### 为系统1和系统2安装Syncthing ###
在系统1和系统2上遵循以下步骤进行操作。
从[官方下载页][1]上下载最新版本。我使用的是64位版本因此下载64位版的软件包。
wget https://github.com/syncthing/syncthing/releases/download/v0.10.20/syncthing-linux-amd64-v0.10.20.tar.gz
解压缩下载的文件:
tar xzvf syncthing-linux-amd64-v0.10.20.tar.gz
切换到解压缩出来的文件夹:
cd syncthing-linux-amd64-v0.10.20/
复制可执行文件 syncthing 到 **$PATH** 所包含的目录中:
sudo cp syncthing /usr/local/bin/
现在执行下列命令来首次运行Syncthing
syncthing
当你执行上述命令后syncthing会生成一个配置以及一些密钥keys并且在你的浏览器上打开一个管理界面。
输入示例:
[monitor] 15:40:27 INFO: Starting syncthing
15:40:27 INFO: Generating RSA key and certificate for syncthing...
[BQXVO] 15:40:34 INFO: syncthing v0.10.20 (go1.4 linux-386 default) unknown-user@syncthing-builder 2015-01-13 16:27:47 UTC
[BQXVO] 15:40:34 INFO: My ID: BQXVO3D-VEBIDRE-MVMMGJI-ECD2PC3-T5LT3JB-OK4Z45E-MPIDWHI-IRW3NAZ
[BQXVO] 15:40:34 INFO: No config file; starting with empty defaults
[BQXVO] 15:40:34 INFO: Edit /home/sk/.config/syncthing/config.xml to taste or use the GUI
[BQXVO] 15:40:34 INFO: Starting web GUI on http://127.0.0.1:8080/
[BQXVO] 15:40:34 INFO: Loading HTTPS certificate: open /home/sk/.config/syncthing/https-cert.pem: no such file or directory
[BQXVO] 15:40:34 INFO: Creating new HTTPS certificate
[BQXVO] 15:40:34 INFO: Generating RSA key and certificate for server1...
[BQXVO] 15:41:01 INFO: Starting UPnP discovery...
[BQXVO] 15:41:07 INFO: Starting local discovery announcements
[BQXVO] 15:41:07 INFO: Starting global discovery announcements
[BQXVO] 15:41:07 OK: Ready to synchronize default (read-write)
[BQXVO] 15:41:07 INFO: Device BQXVO3D-VEBIDRE-MVMMGJI-ECD2PC3-T5LT3JB-OK4Z45E-MPIDWHI-IRW3NAZ is "server1" at [dynamic]
[BQXVO] 15:41:07 INFO: Completed initial scan (rw) of folder default
Syncthing已经被成功地初始化了网页管理界面也可以通过浏览器在URL**http://localhost:8080** 进行访问了。如上面输出所看到的Syncthing在你的 **home** 目录中的 **Sync** 目录下自动为你创建了一个名为 **default** 的文件夹。
默认情况下Syncthing的网页管理界面WebGUI只能从本地localhost访问你需要在两个系统中进行以下操作
首先按下CTRL+C键来停止Syncthing初始化进程。现在你回到了终端界面。
编辑**config.xml**文件,
sudo nano ~/.config/syncthing/config.xml
找到下面的指令:
[...]
<gui enabled="true" tls="false">
<address>127.0.0.1:8080</address>
<apikey>-Su9v0lW80JWybGjK9vNK00YDraxXHGP</apikey>
</gui>
[...]
在**<address>**区域中,把**127.0.0.1:8080**改为**0.0.0.0:8080**。结果你的config.xml看起来会是这样的
<gui enabled="true" tls="false">
<address>0.0.0.0:8080</address>
<apikey>-Su9v0lW80JWybGjK9vNK00YDraxXHGP</apikey>
</gui>
保存并关闭文件。
在两个系统上再次执行下述命令:
syncthing
### 访问网页管理界面 ###
现在,在你的浏览器上打开**http://ip-address:8080/**。你会看到下面的界面:
![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server1-Mozilla-Firefox_001.png)
网页管理界面分为两个窗格,在左窗格中,你应该可以看到同步的文件夹列表。如前所述,文件夹**default**在你初始化Syncthing时被自动创建。如果你想同步更多文件夹点击**Add Folder**按钮。
在右窗格中,你可以看到已连接的设备数。现在这里只有一个,就是你现在正在操作的计算机。
### 网页管理界面(WebGUI)上设置Syncthing ###
为了提高安全性让我们启用TLS并且设置访问网页管理界面的管理员用户和密码。要做到这点点击右上角的齿轮按钮然后选择**Settings**
![](http://www.unixmen.com/wp-content/uploads/2015/01/Menu_002.png)
输入管理员的帐户名/密码。我设置的是admin/ubuntu。你可以使用一些更复杂的密码。
![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server1-Mozilla-Firefox_004.png)
点击Save按钮现在你会被要求重启Syncthing使更改生效。点击Restart。
![](http://www.unixmen.com/wp-content/uploads/2015/01/Selection_005.png)
刷新你的网页浏览器。你可以看到一个像下面一样的SSL警告。点击显示**我了解风险I understand the Risks**的按钮。接着,点击“添加例外Add Exception”按钮把当前页面添加进浏览器的信任列表中。
![](http://www.unixmen.com/wp-content/uploads/2015/01/Untrusted-Connection-Mozilla-Firefox_006.png)
输入前面几步设置的管理员用户和密码。我设置的是**admin/ubuntu**。
![](http://www.unixmen.com/wp-content/uploads/2015/01/Authentication-Required_007.png)
现在,我们提高了网页管理界面的安全性。别忘了两个系统都要执行上面同样的步骤。
### 连接到其它服务器 ###
要在各个系统之间同步文件你必须告诉它们彼此的信息。这是通过交换设备IDdevice ID来实现的。你可以通过右上角的“齿轮菜单”gear menu中的“Show ID”显示ID来找到它。
例如下面是我的系统1的ID。
![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server1-Mozilla-Firefox_008.png)
复制这个ID然后到另外一个系统系统2的网页管理界面在右边窗格点击Add Device按钮。
![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_010.png)
接着会出现下面的界面。在Device区域粘贴**系统1的ID**。输入设备名称可选。在地址区域你可以输入其它系统译者注即粘贴的ID所属的系统此应为系统1的IP地址或者使用默认值。默认值为**dynamic**。最后,选择要同步的文件夹。在我们的例子中,同步文件夹为**default**。
![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_009.png)
一旦完成了点击save按钮。你会被要求重启Syncthing。点击Restart按钮重启使更改生效。
现在,我们到**系统1**的网页管理界面你会看到来自系统2的连接和同步请求。点击**Add**按钮。现在系统会询问是否分享和同步名为default的文件夹。
![](http://www.unixmen.com/wp-content/uploads/2015/01/Selection_013.png)
接着重启系统的Syncthing服务使更改生效。
![](http://www.unixmen.com/wp-content/uploads/2015/01/Selection_014.png)
等待大概60秒,接着你会看到两个系统之间已成功连接并同步。
你可以在网页管理界面中的Add Device区域核实该情况。
添加系统2后,系统1网页管理界面中的控制窗口如下:
![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_016.png)
添加系统1后,系统2网页管理界面中的控制窗口如下:
![](http://www.unixmen.com/wp-content/uploads/2015/01/Syncthing-server-Mozilla-Firefox_018.png)
现在,在任一个系统中的“**default**”文件夹中放进任意文件或文件夹。你应该可以看到这些文件/文件夹被自动同步到其它系统。
本文完!祝同步愉快!
噢耶!!!
- [Syncthing网站][2]
--------------------------------------------------------------------------------
via: http://www.unixmen.com/syncthing-private-secure-tool-sync-filesfolders-computers/
作者:[SK][a]
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/sk/
[1]:https://github.com/syncthing/syncthing/releases/tag/v0.10.20
[2]:http://syncthing.net/

View File

@ -0,0 +1,358 @@
PHP 安全
================================================================================
![](http://www.codeproject.com/KB/PHP/363897/php_security.jpg)
### 简介 ###
为提供互联网服务,当你在开发代码的时候必须时刻保持安全意识。可能大部分 PHP 脚本都对安全问题不敏感;这很大程度上是因为有大量的无经验程序员在使用这门语言。但是,你没有理由仅凭对代码重要性的粗略估计,就采用不一致的安全策略。当你在服务器上放任何涉及金钱的东西时,就有可能会有人尝试破解它。创建一个论坛程序或者任何形式的购物车,被攻击的可能性就上升到了无穷大。
### 背景 ###
为了确保你的 web 内容安全,这里有一些一般的安全准则:
#### 别相信表单 ####
攻击表单很简单。通过使用一个简单的 JavaScript 技巧,你可以限制你的表单只允许在评分域中填写 1 到 5 的数字。如果有人关闭了他们浏览器的 JavaScript 功能或者提交自定义的表单数据,你客户端的验证就失败了。
用户主要通过表单参数和你的脚本交互,因此他们是最大的安全风险。你应该学到什么呢?总是要验证 PHP 脚本中传递到其它任何 PHP 脚本的数据。在本文中我们向你演示了如何分析和防范跨站点脚本XSS攻击它可能劫持用户凭据甚至更严重。你也会看到如何防止会玷污或毁坏你数据的 MySQL 注入攻击。
#### 别相信用户 ####
假设你网站获取的每一份数据都充满了有害的代码。清理每一部分,就算你相信没有人会尝试攻击你的站点。
#### 关闭全局变量 ####
你可能会有的最大安全漏洞是启用了 register\_globals 配置参数。幸运的是PHP 4.2 及以后版本默认关闭了这个配置。如果打开了 **register\_globals**,你可以在你的 php.ini 文件中通过改变 register\_globals 变量为 Off 关闭该功能:
register_globals = Off
新手程序员觉得注册全局变量很方便,但他们不会意识到这个设置有多么危险。一个启用了全局变量的服务器会自动为全局变量赋任何形式的参数。为了了解它如何工作以及为什么有危险,让我们来看一个例子。
假设你有一个称为 process.php 的脚本,它会向你的数据库插入表单数据。初始的表单像下面这样:
<input name="username" type="text" size="15" maxlength="64">
运行 process.php 的时候,启用了注册全局变量的 PHP 会为该参数赋值为 $username 变量。这会比通过 **$\_POST['username']** 或 **$\_GET['username']** 访问它节省敲击次数。不幸的是,这也会给你留下安全问题,因为 PHP 设置该变量的值为通过 GET 或 POST 参数发送到脚本的任何值,如果你没有显式地初始化该变量并且你不希望任何人去操作它,这就会有一个大问题。
看下面的脚本,假如 $authorized 变量的值为 true它会给用户显示验证数据。正常情况下只有当用户正确通过了假想的 authenticated\_user() 函数验证,$authorized 变量的值才会被设置为真。但是如果你启用了 **register\_globals**,任何人都可以发送一个 GET 参数,例如 authorized=1 去覆盖它:
<?php
// Define $authorized = true only if user is authenticated
if (authenticated_user()) {
$authorized = true;
}
?>
这个故事的寓意是,你应该从预定义的服务器变量中获取表单数据。所有通过 post 表单传递到你 web 页面的数据都会自动保存到一个称为 **$\_POST** 的大数组中,所有的 GET 数据都保存在 **$\_GET** 大数组中。文件上传信息保存在一个称为 **$\_FILES** 的特殊数组中。另外,还有一个称为 **$\_REQUEST** 的复合变量。
要从一个 POST 方法表单中访问 username 域,可以使用 **$\_POST['username']**。如果 username 在 URL 中就使用 **$\_GET['username']**。如果你不确定值来自哪里,用 **$\_REQUEST['username']**。
<?php
$post_value = $_POST['post_value'];
$get_value = $_GET['get_value'];
$some_variable = $_REQUEST['some_value'];
?>
$\_REQUEST 是 $\_GET、$\_POST、和 $\_COOKIE 数组的结合。如果你有两个或多个值有相同的参数名称,注意 PHP 会使用哪个。默认的顺序是 cookie、POST、然后是 GET。
#### 推荐安全配置选项 ####
这里有几个会影响安全功能的 PHP 配置设置。下面是一些显然应该用于生产服务器的:
- **register\_globals** 设置为 off
- **safe\_mode** 设置为 off
- **error\_reporting** 设置为 off。如果出现错误了这会向用户浏览器发送可见的错误报告信息。对于生产服务器使用错误日志代替。开发服务器如果在防火墙后面就可以启用错误日志。
- 停用这些函数system()、exec()、passthru()、shell\_exec()、proc\_open()、和 popen()。
- **open\_basedir** 设置为包含 /tmp以便可以保存会话信息和 web 根目录的值,使脚本不能访问这些选定区域之外的文件。
- **expose\_php** 设置为 off。该功能会向 Apache 头添加包含版本号的 PHP 签名。
- **allow\_url\_fopen** 设置为 off。如果你在代码中谨慎地访问文件也就是说你会验证所有输入参数这一项并不是严格必需的。
- **allow\_url\_include** 设置为 off。实在没有什么明智的理由让任何人想要通过 HTTP 访问被包含的文件。
一般来说,如果你发现想要使用这些功能的代码,你就不应该相信它。尤其要小心会使用类似 system() 函数的代码-它几乎肯定有缺陷。
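作为参考,下面把上述建议整理成一个 php.ini 片段仅为示意取值需按你的环境调整其中display_errors/log_errors 对应上文“不向用户显示错误、改用错误日志”的意图open_basedir 的路径请换成你自己的目录):

    ; 生产服务器的安全相关设置示例(按需调整)
    register_globals  = Off
    safe_mode         = Off
    display_errors    = Off      ; 不向浏览器输出错误信息
    log_errors        = On       ; 改为记录到错误日志
    disable_functions = system, exec, passthru, shell_exec, proc_open, popen
    open_basedir      = "/tmp/:/var/www/"
    expose_php        = Off
    allow_url_fopen   = Off
    allow_url_include = Off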
启用了这些设置后,让我们来看看一些特定的攻击以及能帮助你保护你服务器的方法。
### SQL 注入攻击 ###
由于 PHP 传递到 MySQL 数据库的查询语句是按照强大的 SQL 编程语言编写的,你就有某些人通过在 web 查询参数中使用 MySQL 语句尝试 SQL 注入攻击的风险。通过在参数中插入有害的 SQL 代码片段,攻击者会尝试进入(或破坏)你的服务器。
假如说你有一个最终会放入变量 $product 的表单参数,你使用了类似下面的 SQL 语句:
$sql = "select * from pinfo where product = '$product'";
如果参数是直接从表单中获得的,使用 PHP 自带的数据库特定转义函数,类似:
$sql = "Select * from pinfo where product = '" .
mysql_real_escape_string($product) . "'";
如果不这样做的话,有人也许会把下面的代码段放到表单参数中:
39'; DROP pinfo; SELECT 'FOO
$sql 的结果就是:
select * from pinfo where product = '39'; DROP pinfo; SELECT 'FOO'
由于分号是 MySQL 的语句分隔符,数据库会运行下面三条语句:
select * from pinfo where product = '39'
DROP pinfo
SELECT 'FOO'
好了,你丢失了你的表。
注意实际上 PHP 和 MySQL 不会运行这种特殊语法,因为 **mysql\_query()** 函数只允许每个请求处理一个语句。但是,一个子查询仍然会生效。
要防止 SQL 注入攻击,做这两件事:
- 总是验证所有参数。例如,如果需要一个数字,就要确保它是一个数字。
- 总是对数据使用 mysql\_real\_escape\_string() 函数转义数据中的任何引号和双引号。
**注意要自动转义任何表单数据可以启用魔术引号Magic Quotes。**
一些 MySQL 破坏可以通过限制 MySQL 用户权限避免。任何 MySQL 账户可以限制为只允许对选定的表进行特定类型的查询。例如,你可以创建只能选择行的 MySQL 用户。但是,这对于动态数据并不十分有用,另外,如果你有敏感的用户信息,可能某些人能访问一些数据,但你并不希望如此。例如,一个访问账户数据的用户可能会尝试注入访问另一个账户号码的代码,而不是为当前会话指定的号码。
### 防止基本的 XSS 攻击 ###
XSS 表示跨站点脚本。不像大部分攻击该漏洞发生在客户端。XSS 最常见的基本形式是在用户提交的内容中放入 JavaScript 以便偷取用户 cookie 中的数据。由于大部分站点使用 cookie 和 session 验证访客,偷取的数据可用于模拟该用户-如果是一个普通的用户账户就已经够麻烦了,如果是管理员账户更是彻底的灾难。如果你不在站点中使用 cookie 和 session ID你的用户就不容易被攻击但你仍然应该明白这种攻击是如何工作的。
不像 MySQL 注入攻击XSS 攻击很难预防。Yahoo、eBay、Apple、以及 Microsoft 都曾经受 XSS 影响。尽管攻击不包含 PHP你可以使用 PHP 来剥离用户数据以防止攻击。为了防止 XSS 攻击,你应该限制和过滤用户提交给你站点的数据。正是因为这个原因大部分在线公告板都不允许在提交的数据中使用 HTML 标签,而是用自定义的标签格式代替,例如 **[b]** 和 **[linkto]**。
让我们来看一个如何防止这类攻击的简单脚本。对于更完善的解决办法,可以使用 SafeHHTML本文的后面部分会讨论到。
function transform_HTML($string, $length = null) {
// Helps prevent XSS attacks
// Remove dead space.
$string = trim($string);
// Prevent potential Unicode codec problems.
$string = utf8_decode($string);
// HTMLize HTML-specific characters.
$string = htmlentities($string, ENT_NOQUOTES);
$string = str_replace("#", "&#35;", $string);
$string = str_replace("%", "&#37;", $string);
$length = intval($length);
if ($length > 0) {
$string = substr($string, 0, $length);
}
return $string;
}
这个函数将 HTML 特定字符转换为 HTML 字面字符。一个浏览器对任何通过这个脚本的 HTML 以无标记的文本呈现。例如,考虑下面的 HTML 字符串:
<STRONG>Bold Text</STRONG>
一般情况下HTML 会显示为:
Bold Text
但是,通过 **transform\_HTML()** 后,它就像初始输入一样呈现。原因是处理的字符串中标签字符串是 HTML 条目。**transform\_HTML()** 结果字符串的纯文本看起来像下面这样:
&lt;STRONG&gt;Bold Text&lt;/STRONG&gt;
该函数的实质是 htmlentities() 函数调用,它会将 <、>、和 & 转换为 **&lt;**、**&gt;**、和 **&amp;**。尽管这会处理大部分的普通攻击,有经验的 XSS 攻击者有另一种把戏:用十六进制或 UTF-8 编码恶意脚本,而不是采用普通的 ASCII 文本,从而希望能饶过你的过滤器。他们可以在 URL 的 GET 变量中发送代码,例如,“这是十六进制代码,你能帮我运行吗?” 一个十六进制例子看起来像这样:
<a href="http://host/a.php?variable=%22%3e %3c%53%43%52%49%50%54%3e%44%6f%73%6f%6d%65%74%68%69%6e%67%6d%61%6c%69%63%69%6f%75%73%3c%2f%53%43%52%49%50%54%3e">
浏览器渲染这信息的时候,结果就是:
<a href="http://host/a.php?variable="> <SCRIPT>Dosomethingmalicious</SCRIPT>
为了防止这种情况transform\_HTML() 采用额外的步骤把 # 和 % 符号转换为它们的实体,从而避免十六进制攻击,并转换 UTF-8 编码的数据。
最后,为了防止某些人用很长的输入超载字符串从而导致某些东西崩溃,你可以添加一个可选的 $length 参数来截取你指定最大长度的字符串。
### 使用 SafeHTML ###
之前脚本的问题比较简单,它不允许任何类型的用户标记。不幸的是,这里有上百种方法能使 JavaScript 跳过用户的过滤器,从用户输入中剥离 HTML没有方法可以防止这种情况。
当前,没有任何一个脚本能保证无法被破解,尽管有一些确实比大部分要好。有白名单和黑名单两种方法加固安全,白名单比较简单而且更加有效。
一个白名单解决方案是 PixelApes 的 SafeHTML 反跨站点脚本解析器。
SafeHTML 能识别有效 HTML能追踪并剥离任何危险标签。它用另一个称为 HTMLSax 的软件包进行解析。
按照下面步骤安装和使用 SafeHTML
1. 到 [http://pixel-apes.com/safehtml/?page=safehtml][1] 下载最新版本的 SafeHTML。
1. 把文件放到你服务器的类文件夹。该文件夹包括 SafeHTML 和 HTMLSax 起作用需要的所有东西。
1. 在脚本中包含 SafeHTML 类文件safehtml.php
1. 创建称为 $safehtml 的新 SafeHTML 对象。
1. 用 $safehtml->parse() 方法清理你的数据。
这是一个完整的例子:
<?php
/* If you're storing the HTMLSax3.php in the /classes directory, along
with the safehtml.php script, define XML_HTMLSAX3 as a null string. */
define(XML_HTMLSAX3, '');
// Include the class file.
require_once('classes/safehtml.php');
// Define some sample bad code.
$data = "This data would raise an alert <script>alert('XSS Attack')</script>";
// Create a safehtml object.
$safehtml = new safehtml();
// Parse and sanitize the data.
$safe_data = $safehtml->parse($data);
// Display result.
echo 'The sanitized data is <br />' . $safe_data;
?>
如果你想清理脚本中的任何其它数据,你不需要创建一个新的对象;在你的整个脚本中只需要使用 $safehtml->parse() 方法。
#### 什么可能会出现问题? ####
你可能犯的最大错误是假设这个类能完全避免 XSS 攻击。SafeHTML 是一个相当复杂的脚本,几乎能检查所有事情,但没有什么是能保证的。你仍然需要对你的站点做参数验证。例如,该类不能检查给定变量的长度以确保能适应数据库的字段。它也不检查缓冲溢出问题。
XSS 攻击者很有创造力,他们使用各种各样的方法来尝试达到他们的目标。可以阅读 RSnake 的 XSS 教程[http://ha.ckers.org/xss.html][2] 看一下这里有多少种方法尝试使代码跳过过滤器。SafeHTML 项目有很好的程序员一直在尝试阻止 XSS 攻击,但无法保证某些人不会想起一些奇怪和新奇的方法来跳过过滤器。
**注意XSS 攻击严重影响的一个例子 [http://namb.la/popular/tech.html][3],其中显示了如何一步一步创建会超载 MySpace 服务器的 JavaScript XSS 蠕虫。**
### 用单向哈希保护数据 ###
该脚本对输入的数据进行单向转换-换句话说,它能对某人的密码产生哈希签名,但不能解码获得原始密码。为什么你希望这样呢?应用程序会存储密码。一个管理员不需要知道用户的密码-事实上,只有用户知道他的/她的密码是个好主意。系统(也仅有系统)应该能识别一个正确的密码;这是 Unix 多年来的密码安全模型。单向密码安全按照下面的方式工作:
1. 当一个用户或管理员创建或更改一个账户密码时,系统对密码进行哈希并保存结果。主机系统忽视明文密码。
2. 当用户通过任何方式登录到系统时,再次对输入的密码进行哈希。
3. 主机系统抛弃输入的明文密码。
4. 当前新哈希的密码和之前保存的哈希相比较。
5. 如果哈希的密码相匹配,系统就会授予访问权限。
主机系统完成这些并不需要知道原始密码;事实上,原始值完全不相关。一个副作用是,如果某人侵入系统并盗取了密码数据库,入侵者会获得很多哈希后的密码,但无法把它们反向转换为原始密码。当然,给足够时间、计算能力,以及弱用户密码,一个攻击者还是有可能采用字典攻击找出密码。因此,别轻易让人碰你的密码数据库,如果确实有人这样做了,让每个用户更改他们的密码。
#### 加密 Vs 哈希 ####
技术上来来说,这过程并不是加密。哈希和加密是不相同的,这有两个理由:
- 不像加密,数据不能被解密。
- 有可能(但很不常见)两个不同的字符串会产生相同的哈希。并不能保证哈希是唯一的,因此别像数据库中的唯一键那样使用哈希。
function hash_ish($string) {
return md5($string);
}
md5() 函数基于 RSA 数据安全公司的消息摘要算法(即 MD5返回一个由 32 个字符组成的十六进制串。然后你可以将那个 32 位字符串插入到数据库中,和另一个 md5 字符串相比较,或者就用这 32 个字符。
#### 破解脚本 ####
几乎不可能解密 MD5 数据。或者说很难。但是,你仍然需要好的密码,因为根据整个字典生成哈希数据库仍然很简单。这里有在线 MD5 字典,当你输入 **06d80eb0c50b49a509b49f2424e8c805** 后会得到结果 “dog”。因此尽管技术上 MD5 不能被解密,这里仍然有漏洞-如果某人获得了你的密码数据库,你可以肯定他们肯定会使用 MD5 字典破译。因此,当你创建基于密码的系统的时候尤其要注意密码长度(最小 6 个字符8 个或许会更好)和包括字母和数字。并确保字典中没有这个密码。
### 用 Mcrypt 加密数据 ###
如果你不需要以可阅读形式查看密码,采用 MD5 就足够了。不幸的是,这里并不总是有可选项-如果你提供以加密形式存储某人的信用卡信息,你可能需要在后面的某个点进行解密。
最早的一个解决方案是 Mcrypt 模块,用于允许 PHP 高速加密的附件。Mcrypt 库提供了超过 30 种计算方法用于加密,并且提供短语确保只有你(或者你的用户)可以解密数据。
让我们来看看使用方法。下面的脚本包含了使用 Mcrypt 加密和解密数据的函数:
<?php
$data = "Stuff you want encrypted";
$key = "Secret passphrase used to encrypt your data";
$cipher = "MCRYPT_SERPENT_256";
$mode = "MCRYPT_MODE_CBC";
function encrypt($data, $key, $cipher, $mode) {
// Encrypt data
return (string)
base64_encode
(
mcrypt_encrypt
(
$cipher,
substr(md5($key),0,mcrypt_get_key_size($cipher, $mode)),
$data,
$mode,
substr(md5($key),0,mcrypt_get_block_size($cipher, $mode))
)
);
}
function decrypt($data, $key, $cipher, $mode) {
// Decrypt data
return (string)
mcrypt_decrypt
(
$cipher,
substr(md5($key),0,mcrypt_get_key_size($cipher, $mode)),
base64_decode($data),
$mode,
substr(md5($key),0,mcrypt_get_block_size($cipher, $mode))
);
}
?>
**mcrypt()** 函数需要几个信息:
- 需要加密的数据
- 用于加密和解锁数据的短语,也称为键。
- 用于加密数据的计算方法,也就是用于加密数据的算法。该脚本使用了 **MCRYPT\_SERPENT\_256**,但你可以从很多算法中选择,包括 **MCRYPT\_TWOFISH192**、**MCRYPT\_RC2**、**MCRYPT\_DES**、和 **MCRYPT\_LOKI97**
- 加密数据的模式。这里有几个你可以使用的模式包括电子密码本Electronic Codebook 和加密反馈Cipher Feedback。该脚本使用 **MCRYPT\_MODE\_CBC** 密码块链接。
- 一个 **初始化向量**-也称为 IV或着一个种子-用于为加密算法设置种子的额外二进制位。也就是使算法更难于破解的额外信息。
- 键和 IV 字符串的长度,这可能随着加密和块而不同。使用 **mcrypt\_get\_key\_size()****mcrypt\_get\_block\_size()** 函数获取合适的长度;然后用 **substr()** 函数将键的值截取为合适的长度。(如果键的长度比要求的短,别担心-Mcrypt 会用 0 填充。)
如果有人窃取了你的数据和短语,他们只能一个个尝试加密算法直到找到正确的那一个。因此,在使用它之前我们通过对键使用 **md5()** 函数增加安全,就算他们获取了数据和短语,入侵者也不能获得想要的东西。
入侵者同时需要函数,数据和短语-如果真是如此,他们可能获得了对你服务器的完整访问,你只能大清洗了。
这里还有一个数据存储格式的小问题。Mcrypt 以难懂的二进制形式返回加密后的数据,这使得当你将其存储到 MySQL 字段的时候可能出现可怕错误。因此,我们使用 **base64\_encode()** 和 **base64\_decode()** 函数将其转换为和 SQL 兼容的字母格式,以便存储和检索。
#### 破解脚本 ####
除了实验多种加密方法,你还可以在脚本中添加一些便利。例如,不是每次都提供键和模式,而是在包含的文件中声明为全局常量。
### 生成随机密码 ###
随机(但难以猜测)字符串在用户安全中很重要。例如,如果某人丢失了密码并且你使用 MD5 哈希,你不可能,也不希望查找回来。而是应该生成一个安全的随机密码并发送给用户。为了访问你站点的服务,另外一个用于生成随机数字的应用程序会创建有效链接。下面是创建密码的一个函数:
<?php
function make_password($num_chars) {
if ((is_numeric($num_chars)) &&
($num_chars > 0) &&
(! is_null($num_chars))) {
$password = '';
$accepted_chars = 'abcdefghijklmnopqrstuvwxyz1234567890';
// Seed the generator if necessary.
srand(((int)((double)microtime()*1000003)) );
for ($i=0; $i<$num_chars; $i++) {
$random_number = rand(0, (strlen($accepted_chars) -1));
$password .= $accepted_chars[$random_number] ;
}
return $password;
}
}
?>
#### 使用脚本 ####
**make_password()** 函数返回一个字符串,因此你需要做的就是提供字符串的长度作为参数:
<?php
$fifteen_character_password = make_password(15);
?>
函数按照下面步骤工作:
- 函数确保 **$num\_chars** 是非零的正整数。
- 函数初始化 **$accepted\_chars** 变量为密码可能包含的字符列表。该脚本使用所有小写字母和数字 0 到 9但你可以使用你喜欢的任何字符集合。
- 随机数生成器需要一个种子从而获得一系列类随机值PHP 4.2 及之后版本中并不严格要求)。
- 函数循环 **$num\_chars** 次,每次迭代生成密码中的一个字符。
- 对于每个新字符,脚本查看 **$accepted_chars** 的长度,选择 0 和长度之间的一个数字,然后添加 **$accepted\_chars** 中该数字为索引值的字符到 $password。
- 循环结束后,函数返回 **$password**。
### 许可证 ###
本篇文章,包括相关的源代码和文件,都是在 [The Code Project Open License (CPOL)][4] 协议下发布。
--------------------------------------------------------------------------------
via: http://www.codeproject.com/Articles/363897/PHP-Security
作者:[SamarRizvi][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.codeproject.com/script/Membership/View.aspx?mid=7483622
[1]:http://pixel-apes.com/safehtml/?page=safehtml
[2]:http://ha.ckers.org/xss.html
[3]:http://namb.la/popular/tech.html
[4]:http://www.codeproject.com/info/cpol10.aspx