Syncthing: 一个在计算机之间同步文件/文件夹的私密安全同步工具
================================================================================

### 简介 ###

**Syncthing** 是一个免费开源的工具,它能在你的各个网络计算机间同步文件/文件夹。它不像 **BitTorrent Sync** 和 **Dropbox** 那样的同步工具,它的同步数据是直接从一个系统传输到另一个系统的,并且它是完全开源的,安全且私密的。你所有的珍贵数据都会被存储在你自己的系统中,这样你就能对你的文件和文件夹拥有全面的控制权,没有任何文件或文件夹会被存储在第三方系统中。此外,你有权决定这些数据该存于何处,是否要分享到第三方,以及这些数据在互联网上的传输方式。

所有的信息通讯都使用 TLS 进行加密,这样你的数据便能十分安全地免受窥探。Syncthing 有一个强大的响应式的网页管理界面(WebGUI,下同),它能够帮助用户简便地添加、删除和管理那些通过网络进行同步的文件夹。通过使用 Syncthing,你可以在多个系统上一次同步多个文件夹。在安装和使用上,Syncthing 是一个可移植的、简单而强大的工具。既然文件或文件夹是从一台计算机直接传输到另一台计算机的,你就无需考虑向云服务供应商支付费用来获取额外的云空间。你所需要的仅仅是稳定的 LAN/WAN 连接,以及系统中足够的硬盘空间。它支持所有的现代操作系统,包括 GNU/Linux、Windows、Mac OS X,当然还有 Android。

### 安装 ###

基于本文的目的,我们将使用两个系统,一个是 Ubuntu 14.04 LTS,一个是 Ubuntu 14.10 server。为了简单辨别这两个系统,我们将分别称其为**系统1**和**系统2**。

### 系统1细节 ###

- **操作系统**: Ubuntu 14.04 LTS server
- **主机名**: **server1**.unixmen.local
- **IP地址**: 192.168.1.150
- **系统用户**: sk(你可以使用你自己的系统用户)
- **同步文件夹**: /home/Sync/(Syncthing 会默认创建)

### 系统2细节 ###

- **操作系统**: Ubuntu 14.10 server
- **主机名**: **server2**.unixmen.local
- **IP地址**: 192.168.1.151
- **系统用户**: sk(你可以使用你自己的系统用户)
- **同步文件夹**: /home/Sync/(Syncthing 会默认创建)

### 在系统1和系统2上为Syncthing创建用户 ###

在两个系统上运行下面的命令,为 Syncthing 创建用户以及两系统间的同步文件夹。

    sudo useradd sk
    sudo passwd sk

### 为系统1和系统2安装Syncthing ###

在系统1和系统2上遵循以下步骤进行操作。

从[官方下载页][1]上下载最新版本。我使用的是64位系统,因此下载64位版的软件包。

    wget https://github.com/syncthing/syncthing/releases/download/v0.10.20/syncthing-linux-amd64-v0.10.20.tar.gz

解压缩下载的文件:

    tar xzvf syncthing-linux-amd64-v0.10.20.tar.gz

切换到解压缩出来的文件夹:

    cd syncthing-linux-amd64-v0.10.20/

复制可执行文件 syncthing 到 **$PATH** 中的某个目录(如 /usr/local/bin):

    sudo cp syncthing /usr/local/bin/

现在,执行下列命令来首次运行 Syncthing:

    syncthing

当你执行上述命令后,syncthing 会生成配置以及一些密钥,并且在你的浏览器上打开一个管理界面。

输出示例:

    [monitor] 15:40:27 INFO: Starting syncthing
    15:40:27 INFO: Generating RSA key and certificate for syncthing...
    [BQXVO] 15:40:34 INFO: syncthing v0.10.20 (go1.4 linux-386 default) unknown-user@syncthing-builder 2015-01-13 16:27:47 UTC
    [BQXVO] 15:40:34 INFO: My ID: BQXVO3D-VEBIDRE-MVMMGJI-ECD2PC3-T5LT3JB-OK4Z45E-MPIDWHI-IRW3NAZ
    [BQXVO] 15:40:34 INFO: No config file; starting with empty defaults
    [BQXVO] 15:40:34 INFO: Edit /home/sk/.config/syncthing/config.xml to taste or use the GUI
    [BQXVO] 15:40:34 INFO: Starting web GUI on http://127.0.0.1:8080/
    [BQXVO] 15:40:34 INFO: Loading HTTPS certificate: open /home/sk/.config/syncthing/https-cert.pem: no such file or directory
    [BQXVO] 15:40:34 INFO: Creating new HTTPS certificate
    [BQXVO] 15:40:34 INFO: Generating RSA key and certificate for server1...
    [BQXVO] 15:41:01 INFO: Starting UPnP discovery...
    [BQXVO] 15:41:07 INFO: Starting local discovery announcements
    [BQXVO] 15:41:07 INFO: Starting global discovery announcements
    [BQXVO] 15:41:07 OK: Ready to synchronize default (read-write)
    [BQXVO] 15:41:07 INFO: Device BQXVO3D-VEBIDRE-MVMMGJI-ECD2PC3-T5LT3JB-OK4Z45E-MPIDWHI-IRW3NAZ is "server1" at [dynamic]
    [BQXVO] 15:41:07 INFO: Completed initial scan (rw) of folder default

Syncthing 已经成功地初始化了,可以通过浏览器访问 **http://localhost:8080** 来打开网页管理界面。如上面的输出所示,Syncthing 在你的 **home** 目录中的 **Sync** 目录下自动为你创建了一个名为 **default** 的文件夹。

默认情况下,Syncthing 的网页管理界面只能从本机(localhost)访问,要从远程访问,你需要在两个系统中进行以下操作:

首先,按下 CTRL+C 组合键终止 Syncthing 初始化进程,回到终端界面。

编辑 **config.xml** 文件:

    sudo nano ~/.config/syncthing/config.xml

找到下面的指令:

    [...]
    <gui enabled="true" tls="false">
        <address>127.0.0.1:8080</address>
        <apikey>-Su9v0lW80JWybGjK9vNK00YDraxXHGP</apikey>
    </gui>
    [...]

在 **<address>** 区域中,把 **127.0.0.1:8080** 改为 **0.0.0.0:8080**。修改后,你的 config.xml 看起来会是这样的:

    <gui enabled="true" tls="false">
        <address>0.0.0.0:8080</address>
        <apikey>-Su9v0lW80JWybGjK9vNK00YDraxXHGP</apikey>
    </gui>

保存并关闭文件。
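
如果你更喜欢非交互式的方式,也可以用 sed 一行完成同样的修改(仅为示例,假设配置文件位于上述默认路径):

    sed -i 's|<address>127.0.0.1:8080</address>|<address>0.0.0.0:8080</address>|' ~/.config/syncthing/config.xml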

在两个系统上再次执行下述命令:

    syncthing
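
如果你希望注销或关闭终端之后 Syncthing 仍在后台运行,可以用 nohup 方式启动(这只是一种简单的示例做法,你也可以自行将其配置为系统服务):

    nohup syncthing > /dev/null 2>&1 &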

### 访问网页管理界面 ###

现在,在你的浏览器上打开 **http://ip-address:8080/**。你会看到下面的界面:



网页管理界面分为两个窗格,在左窗格中,你应该可以看到同步的文件夹列表。如前所述,文件夹 **default** 在你初始化 Syncthing 时被自动创建。如果你想同步更多文件夹,点击 **Add Folder** 按钮。

在右窗格中,你可以看到已连接的设备数。现在这里只有一个,就是你现在正在操作的计算机。

### 在网页管理界面上设置Syncthing ###

为了提高安全性,让我们启用 TLS,并且设置访问网页管理界面的管理员用户和密码。要做到这点,点击右上角的齿轮按钮,然后选择 **Settings**。



输入管理员的帐户名/密码。我设置的是 admin/ubuntu。你应该使用一些更复杂的密码。



点击 Save 按钮。现在,你会被要求重启 Syncthing 使更改生效。点击 Restart。



刷新你的网页浏览器。你会看到一个像下面一样的 SSL 警告。点击显示**我了解风险(I understand the Risks)**的按钮。接着,点击“添加例外(Add Exception)”按钮把当前页面添加进浏览器的信任列表中。



输入前面几步设置的管理员用户和密码。我设置的是 **admin/ubuntu**。



现在,我们提高了网页管理界面的安全性。别忘了在两个系统上都执行上面同样的步骤。

### 连接到其它服务器 ###

要在各个系统之间同步文件,你必须各自告诉它们其它服务器的信息。这是通过交换设备 ID(device ID)来实现的。你可以在“齿轮菜单(gear menu)”(在右上角)中选择“Show ID(显示ID)”来找到它。

例如,下面是我的系统1的 ID。



复制这个 ID,然后到另外一个系统(系统2)的网页管理界面,在右边窗格点击 Add Device 按钮。



接着会出现下面的界面。在 Device 区域粘贴**系统1的 ID**。输入设备名称(可选)。在地址区域,你可以输入其它系统(LCTT 译注:即粘贴的 ID 所属的系统,此处应为系统1)的 IP 地址,或者使用默认值。默认值为 **dynamic**。最后,选择要同步的文件夹。在我们的例子中,同步文件夹为 **default**。



一旦完成了,点击 Save 按钮。你会被要求重启 Syncthing。点击 Restart 按钮重启使更改生效。

现在,我们到**系统1**的网页管理界面,你会看到来自系统2的连接和同步请求。点击 **Add** 按钮。现在,系统2会要求系统1分享和同步名为 default 的文件夹。



接着重启系统1的 Syncthing 服务使更改生效。



等待大概60秒,接着你会看到两个系统之间已成功连接并同步。

你可以在网页管理界面中的 Add Device 区域核实该情况。

添加系统2后,系统1网页管理界面中的控制窗口如下:



添加系统1后,系统2网页管理界面中的控制窗口如下:



现在,在任一个系统中的 **default** 文件夹中放进任意文件或文件夹。你应该可以看到这些文件/文件夹被自动同步到其它系统。

本文完!祝同步愉快!

噢耶!!!

- [Syncthing网站][2]

--------------------------------------------------------------------------------

via: http://www.unixmen.com/syncthing-private-secure-tool-sync-filesfolders-computers/

作者:[SK][a]
译者:[XLCYun](https://github.com/XLCYun)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.unixmen.com/author/sk/
[1]:https://github.com/syncthing/syncthing/releases/tag/v0.10.20
[2]:http://syncthing.net/

互联网扫描器 ZMap 完全手册
================================================================================

1. 初识 ZMap
1. 最佳扫描体验
1. 命令行参数
1. 附加信息
1. TCP SYN 探测模块
1. ICMP Echo 探测模块
1. UDP 探测模块
1. 配置文件
1. 详细
1. 结果输出
1. 黑名单
1. 速度限制与抽样
1. 发送多个探测
1. ZMap 扩展
1. 示例应用程序
1. 编写探测和输出模块

----------

### 初识 ZMap ###

ZMap 是一个被设计用来针对整个 IPv4 地址空间或其中大部分实施综合扫描的工具。ZMap 是研究者手中的利器,但在运行 ZMap 时,请注意,您很有可能正在以每秒140万个包的速度扫描整个 IPv4 地址空间。我们建议用户即使在实施小范围扫描之前,也联系一下本地网络的管理员,并参考我们列举的[最佳扫描体验](#bestpractices)。

默认情况下,ZMap 会对指定端口以尽可能大的速率实施 TCP SYN 扫描。较为保守的做法是,以 10Mbps 的速度对 10,000 个随机地址的 80 端口进行扫描,如下所示:

    $ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.csv

或者更加简洁地写成:

    $ zmap -B 10M -p 80 -n 10000 -o results.csv

ZMap 也可用于扫描特定子网或 CIDR 地址块。例如,仅扫描 10.0.0.0/8 和 192.168.0.0/16 的 80 端口,运行指令如下:

    zmap -p 80 -o results.csv 10.0.0.0/8 192.168.0.0/16

如果扫描进行得顺利,ZMap 会每秒输出类似以下内容的状态更新:

    0% (1h51m left); send: 28777 562 Kp/s (560 Kp/s avg); recv: 1192 248 p/s (231 p/s avg); hits: 0.04%
    0% (1h51m left); send: 34320 554 Kp/s (559 Kp/s avg); recv: 1442 249 p/s (234 p/s avg); hits: 0.04%
    0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
    0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%

这些更新信息提供了扫描的即时状态,格式为:

    完成进度% (剩余时间); send: 发出包的数量 即时速率 (平均发送速率); recv: 接收包的数量 接收率 (平均接收率); hits: 命中率

如果您不知道您所在网络能支持的扫描速率,您可能要尝试不同的扫描速率和带宽限制,直到扫描效果开始下降,借此找出当前网络能够支持的最快速度。

默认情况下,ZMap 会输出不同 IP 地址的列表(例如,根据 SYN ACK 数据包的情况),像下面这样。其[输出结果](#output)还有几种附加的格式(如 JSON 和 Redis),可以用于生成[程序可解析的扫描统计](#verbosity)。同样,可以指定附加的[输出字段](#outputfields),并使用[输出过滤](#outputfilter)来过滤输出的结果。

    115.237.116.119
    23.9.117.80
    207.118.204.141
    217.120.143.111
    50.195.22.82

我们强烈建议您使用[黑名单文件](#blacklisting),以排除预留的/未分配的 IP 地址空间(如 RFC1918 规定的私有地址、组播地址),以及网络中需要排除在您扫描之外的地址。默认情况下,ZMap 将采用位于 `/etc/zmap/blacklist.conf` 的这个简单的[黑名单文件](#blacklisting)中所包含的预留和未分配地址。如果您需要某些特定设置,比如每次运行 ZMap 时的最大带宽或[黑名单文件](#blacklisting),您可以在文件 `/etc/zmap/zmap.conf` 中指定,或使用自定义[配置文件](#config)。

如果您正试图解决扫描的相关问题,有几个选项可以帮助您调试。首先,您可以通过添加 `--dryrun` 实施[预扫](#dryrun),以此来分析包可能会发送到网络的何处。此外,还可以通过设置 `--verbosity=n` 来更改[日志详细程度](#verbosity)。
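
例如,下面这条命令(仅为演示而构造的示例)只会把要发往 10 个目标的探测包打印到标准输出,而不会真正发送:

    $ zmap -p 80 -n 10 --dryrun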

### 最佳扫描体验 ###

<a name="bestpractices"></a>

我们为针对互联网进行扫描的研究者提供了一些建议,以此来引导养成良好的互联网合作氛围。

- 密切协同本地的网络管理员,以减少风险和调查
- 确认扫描不会使本地网络或上游供应商瘫痪
- 在发起扫描的源地址的网页和 DNS 条目中申明你的扫描是善意的
- 明确解释你的扫描中所有连接的目的和范围
- 提供一个简单的退出扫描的方法并及时响应请求
- 实施扫描时,不使用比研究对象需求更大的扫描范围或更快的扫描频率
- 如果可以,将扫描流量分布到不同的时间或源地址上

即使不声明,使用扫描的研究者也应该避免利用漏洞或访问受保护的资源,并遵守其辖区内任何特殊的法律规定。

### 命令行参数 ###

<a name="args"></a>

#### 通用选项 ####

这些选项是实施简单扫描时最常用的选项。我们注意到某些选项取决于所使用的[探测模块](#probemodule)或[输出模块](#outputmodule)(如,在实施 ICMP Echo 扫描时是不需要使用目的端口的)。

**-p, --target-port=port**

要扫描的目标 TCP 端口号(例如,443)

**-o, --output-file=name**

将结果写入该文件,使用 `-` 代表输出到标准输出。

**-b, --blacklist-file=path**

文件中被排除的子网使用 CIDR 表示法(如 192.168.0.0/16),一行一个。建议您使用此方法排除 RFC 1918 地址、组播地址、IANA 预留空间等 IANA 专用地址。在 conf/blacklist.example 中提供了一个以此为目的的示例黑名单文件。

#### 扫描选项 ####

**-n, --max-targets=n**

限制探测目标的数量。后面跟的可以是一个数字(例如 `-n 1000`),或可扫描地址空间的百分比(例如 `-n 0.1%`,不包括黑名单)

**-N, --max-results=n**

收到多少结果后退出

**-t, --max-runtime=secs**

限制发送报文的时间

**-r, --rate=pps**

设置发包速率,以包/秒为单位

**-B, --bandwidth=bps**

以比特/秒设置传输速率(支持使用后缀 G、M 或 K,如 `-B 10M` 就是 10 Mbps 的速度)。该设置会覆盖 `--rate`。

**-c, --cooldown-time=secs**

发送完成后等待多久继续接收回包(默认值 = 8)

**-e, --seed=n**

地址排序种子。如果要用多个 ZMap 以相同的顺序扫描地址,那么就可以使用这个参数。

**--shards=n**

将扫描分成多个分片,使其可在多个 ZMap 进程中执行(默认值 = 1)。启用分片时,`--seed` 参数是必需的。

**--shard=n**

选择扫描的分片(默认值 = 0)。n 的范围是 [0, N),其中 N 为分片的总数。启用分片时,`--seed` 参数是必需的。
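
例如,下面两条命令(仅为示例)把同一次扫描分成两个分片,可分别运行在两个进程或两台机器上;注意两者必须指定相同的种子:

    $ zmap -p 443 -e 123 --shards=2 --shard=0 -o shard0.csv
    $ zmap -p 443 -e 123 --shards=2 --shard=1 -o shard1.csv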

**-T, --sender-threads=n**

用于发送数据包的线程数(默认值 = 1)

**-P, --probes=n**

发送到每个 IP 的探测数(默认值 = 1)

**-d, --dryrun**

用标准输出打印出每个包,而不是将其发送(用于调试)

#### 网络选项 ####

**-s, --source-port=port|range**

发送数据包的源端口

**-S, --source-ip=ip|range**

发送数据包的源地址。可以仅仅是一个 IP,也可以是一个范围(如,10.0.0.1-10.0.0.9)

**-G, --gateway-mac=addr**

数据包发送到的网关 MAC 地址(用于自动检测失效的情况)

**-i, --interface=name**

使用的网络接口

#### 探测选项 ####

ZMap 允许用户指定并添加自己所需要的[探测模块](#probemodule)。探测模块的职责就是生成要发送的探测包,并处理主机回复的响应包。

**--list-probe-modules**

列出可用探测模块(如 tcp_synscan)

**-M, --probe-module=name**

选择[探测模块](#probemodule)(默认值 = tcp_synscan)

**--probe-args=args**

向模块传递参数

**--list-output-fields**

列出可用的输出字段

#### 输出选项 ####

ZMap 允许用户指定和编写他们自己的[输出模块](#outputmodule)。输出模块负责处理由探测模块返回的字段,并将它们输出给用户。用户可以指定输出的字段,并过滤相应字段。

**--list-output-modules**

列出可用输出模块(如 csv)

**-O, --output-module=name**

选择[输出模块](#outputmodule)(默认值为 csv)

**--output-args=args**

传递给输出模块的参数

**-f, --output-fields=fields**

输出的字段列表,以逗号分割

**--output-filter**

指定输出[过滤器](#outputfilter),对[探测模块](#probemodule)定义的字段进行过滤
#### 附加选项 ####
|
||||
|
||||
**-C, --config=filename**
|
||||
|
||||
加载[配置文件](#config),可以指定其他路径。
|
||||
|
||||
**-q, --quiet**
|
||||
|
||||
不必每秒刷新输出
|
||||
|
||||
**-g, --summary**
|
||||
|
||||
在扫描结束后打印配置和结果汇总信息
|
||||
|
||||
**-v, --verbosity=n**
|
||||
|
||||
日志详细程度(0-5,默认值= 3)
|
||||
|
||||
**-h, --help**
|
||||
|
||||
打印帮助并退出
|
||||
|
||||
**-V, --version**
|
||||
|
||||
打印版本并退出
|
||||
|
||||
### 附加信息 ###
|
||||
<a name="additional"></a>
|
||||
|
||||

#### TCP SYN 扫描 ####

在执行 TCP SYN 扫描时,ZMap 需要指定一个目标端口,也支持指定发起扫描的源端口范围。

**-p, --target-port=port**

扫描的 TCP 端口(例如 443)

**-s, --source-port=port|range**

发送扫描数据包的源端口(例如 40000-50000)

**警示!** ZMap 基于 Linux 内核使用 RST 包来应答 SYN/ACK 包响应,以关闭扫描器打开的连接。ZMap 是在 Ethernet 层完成包的发送的,这样做是为了减少跟踪打开的 TCP 连接和路由寻路带来的内核开销。因此,如果您有跟踪连接建立的防火墙规则,如类似于 `-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT` 的 netfilter 规则,将阻止 SYN/ACK 包到达内核。这不会妨碍到 ZMap 记录应答,但它会阻止 RST 包被送回,最终被扫描主机的连接会一直打开,直到超时后断开。我们强烈建议您在执行 ZMap 时,选择一组主机上未使用且防火墙允许访问的端口,加在 `-s` 后(如 `-s '50000-60000'`)。

#### ICMP Echo 请求扫描 ####

虽然在默认情况下 ZMap 执行的是 TCP SYN 扫描,但它也支持使用 ICMP echo 请求扫描。在这种扫描方式下,ICMP echo 请求包被发送到每个主机,并以收到 ICMP 应答包作为答复。实施 ICMP 扫描可以通过选择 icmp_echoscan 扫描模块来执行,如下:

    $ zmap --probe-module=icmp_echoscan

#### UDP 数据报扫描 ####

ZMap 还额外支持 UDP 探测,它会发出任意 UDP 数据报给每个主机,并接收 UDP 或 ICMP 不可达的应答。ZMap 可以通过使用 --probe-args 命令行选项来设置四种不同的 UDP 载荷方式:可在命令行设置可打印 ASCII 码载荷的 'text'、十六进制载荷的 'hex'、从外部文件读入载荷的 'file',以及通过动态字段生成载荷的 'template'。为了得到 UDP 响应,请使用 -f 参数确保您指定的 "data" 字段处于输出范围。

下面的例子将发送两个字节 'ST',即 PCAnywhere 的 'status' 请求,到 UDP 端口 5632:

    $ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -

下面的例子将发送字节 '0x02',即 SQL Server 的 'client broadcast' 请求,到 UDP 端口 1434:

    $ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -

下面的例子将发送一个 NetBIOS 状态请求到 UDP 端口 137,使用 ZMap 自带的载荷文件:

    $ zmap -M udp -p 137 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -

下面的例子将发送 SIP 的 'OPTIONS' 请求到 UDP 端口 5060,使用 ZMap 自带的模板文件:

    $ zmap -M udp -p 5060 --probe-args=template:sip_options.tpl -N 100 -f saddr,data -o -

UDP 载荷模板仍处于实验阶段。当您使用一个以上的发送线程(-T)时可能会遇到崩溃,并且和静态载荷相比,性能会有明显下降。模板仅仅是一个载荷文件,其中用 ${} 封装了一个或多个字段说明。某些协议,特别是 SIP,需要载荷中反射出包的源地址和目的地址;其他协议,如 portmapper 和 DNS,每个请求中包含的字段应该是随机的,以降低被 ZMap 多次扫描的多宿主系统带来的风险。

以下的载荷模板将发送 SIP OPTIONS 请求到每一个目的地:

    OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
    Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
    From: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT};tag=${RAND_DIGIT=8}
    To: sip:${RAND_ALPHA=8}@${DADDR}
    Call-ID: ${RAND_DIGIT=10}@${SADDR}
    CSeq: 1 OPTIONS
    Contact: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT}
    Content-Length: 0
    Max-Forwards: 20
    User-Agent: ${RAND_ALPHA=8}
    Accept: text/plain

就像在上面的例子中展示的那样,注意每行行末以 \r\n 结尾,请求以 \r\n\r\n 结尾,大多数 SIP 实现都可以正确处理它。一个可以工作的例子放在 ZMap 的 examples/udp-payloads 目录下(sip_options.tpl)。

当前实现了下面的模板字段:

- **SADDR**: 源IP地址的点分十进制格式
- **SADDR_N**: 源IP地址的网络字节序格式
- **DADDR**: 目的IP地址的点分十进制格式
- **DADDR_N**: 目的IP地址的网络字节序格式
- **SPORT**: 源端口的 ASCII 格式
- **SPORT_N**: 源端口的网络字节序格式
- **DPORT**: 目的端口的 ASCII 格式
- **DPORT_N**: 目的端口的网络字节序格式
- **RAND_BYTE**: 随机字节(0-255),长度由 =(length) 参数决定
- **RAND_DIGIT**: 随机数字 0-9,长度由 =(length) 参数决定
- **RAND_ALPHA**: 随机大写字母 A-Z,长度由 =(length) 参数决定
- **RAND_ALPHANUM**: 随机大写字母 A-Z 和随机数字 0-9,长度由 =(length) 参数决定

### 配置文件 ###

<a name="config"></a>

ZMap 支持使用配置文件来代替在命令行上指定所有选项。配置中可以通过每行指定一个长名称的选项和对应的值来创建:

    interface "eth1"
    source-ip 1.1.1.4-1.1.1.8
    gateway-mac b4:23:f9:28:fa:2d   # upstream gateway
    cooldown-time 300               # seconds
    blacklist-file /etc/zmap/blacklist.conf
    output-file ~/zmap-output
    quiet
    summary

然后 ZMap 就可以按照配置文件,再指定一些必要的附加参数运行了:

    $ zmap --config=~/.zmap.conf --target-port=443

### 详细 ###

<a name="verbosity"></a>

ZMap 可以在屏幕上生成多种类型的输出。默认情况下,ZMap 将每隔 1 秒打印出类似下面的基本进度信息,可以通过设置 `--quiet` 来禁用。

    0:01 12%; send: 10000 done (15.1 Kp/s avg); recv: 144 143 p/s (141 p/s avg); hits: 1.44%

ZMap 同样也可以根据扫描配置打印如下日志,可以通过 `--verbosity` 参数加以控制。

    Aug 11 16:16:12.813 [INFO] zmap: started
    Aug 11 16:16:12.817 [DEBUG] zmap: no interface provided. will use eth0
    Aug 11 16:17:03.971 [DEBUG] cyclic: primitive root: 3489180582
    Aug 11 16:17:03.971 [DEBUG] cyclic: starting point: 46588
    Aug 11 16:17:03.975 [DEBUG] blacklist: 3717595507 addresses allowed to be scanned
    Aug 11 16:17:03.975 [DEBUG] send: will send from 1 address on 28233 source ports
    Aug 11 16:17:03.975 [DEBUG] send: using bandwidth 10000000 bits/s, rate set to 14880 pkt/s
    Aug 11 16:17:03.985 [DEBUG] recv: thread started

ZMap 还支持在扫描之后打印出一个可 grep 的汇总信息,类似于下面这样,可以通过调用 `--summary` 来实现。

    cnf target-port                  443
    cnf source-port-range-begin      32768
    cnf source-port-range-end        61000
    cnf source-addr-range-begin      1.1.1.4
    cnf source-addr-range-end        1.1.1.8
    cnf maximum-packets              4294967295
    cnf maximum-runtime              0
    cnf permutation-seed             0
    cnf cooldown-period              300
    cnf send-interface               eth1
    cnf rate                         45000
    env nprocessors                  16
    exc send-start-time              Fri Jan 18 01:47:35 2013
    exc send-end-time                Sat Jan 19 00:47:07 2013
    exc recv-start-time              Fri Jan 18 01:47:35 2013
    exc recv-end-time                Sat Jan 19 00:52:07 2013
    exc sent                         3722335150
    exc blacklisted                  572632145
    exc first-scanned                1318129262
    exc hit-rate                     0.874102
    exc synack-received-unique       32537000
    exc synack-received-total        36689941
    exc synack-cooldown-received-unique 193
    exc synack-cooldown-received-total  1543
    exc rst-received-unique          141901021
    exc rst-received-total           166779002
    adv source-port-secret           37952
    adv permutation-gen              4215763218

### 结果输出 ###

<a name="output"></a>

ZMap 可以通过**输出模块**生成不同格式的结果。默认情况下,ZMap 只支持 **csv** 输出,但是可以通过编译支持 **redis** 和 **json**。可以使用**输出过滤**来过滤这些发送到输出模块上的结果。输出模块输出的字段由用户指定。默认情况下,如果没有指定输出文件,ZMap 将以 csv 格式把结果输出到标准输出。也可以编写自己的输出模块;请参阅[编写输出模块](#extending)。

**-o, --output-file=p**

输出写入的文件地址

**-O, --output-module=p**

调用自定义输出模块

**-f, --output-fields=p**

以逗号分隔的输出的字段列表

**--output-filter=filter**

对给定的探测模块指定字段输出过滤

**--list-output-modules**

列出可用输出模块

**--list-output-fields**

列出给定的探测模块的可用输出字段

#### 输出字段 ####

除了 IP 地址之外,ZMap 还有很多字段。这些字段可以通过在给定探测模块上运行 `--list-output-fields` 来查看。

    $ zmap --probe-module="tcp_synscan" --list-output-fields
    saddr           string: 应答包中的源 IP 地址
    saddr-raw          int: 网络字节格式的源 IP 地址
    daddr           string: 应答包中的目的 IP 地址
    daddr-raw          int: 网络字节格式的目的 IP 地址
    ipid               int: 应答包中的 IP 识别号
    ttl                int: 应答包中的 TTL(存活时间)值
    sport              int: TCP 源端口
    dport              int: TCP 目的端口
    seqnum             int: TCP 序列号
    acknum             int: TCP Ack 号
    window             int: TCP 窗口
    classification  string: 包类型
    success            int: 应答包是否成功
    repeat             int: 是否是来自主机的重复响应
    cooldown           int: 是否是在冷却时间内收到的响应
    timestamp-str   string: 响应抵达时的时间戳,使用 ISO8601 格式
    timestamp-ts       int: 响应抵达时的时间戳,使用 UNIX 纪元开始的秒数
    timestamp-us       int: 时间戳的微秒部分(即 'timestamp-ts' 之后经过的微秒数)

可以通过使用 `--output-fields=fields` 或 `-f` 来选择输出字段,任意组合的输出字段可以被指定为逗号分隔的列表。例如:

    $ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv

#### 过滤输出 ####

在传到输出模块之前,探测模块生成的结果可以先过滤。过滤是针对探测模块的输出字段进行的。过滤器使用类似于 SQL 的简单过滤语法写成,通过 ZMap 的 **--output-filter** 选项来指定。输出过滤通常用于过滤掉重复的结果,或仅把成功的响应传输给输出模块。

过滤表达式的形式为 `<字段名> <操作符> <值>`。`<值>` 的类型必须是一个字符串或一串无符号整数,并且匹配 `<字段名>` 的类型。对于整数比较,有效的操作符是 `=, !=, <, >, <=, >=`;字符串比较的操作符是 `=, !=`。`--list-output-fields` 可以打印那些可供探测模块选择的字段和类型,然后退出。

复合型的过滤操作,可以通过使用 `&&`(逻辑与)和 `||`(逻辑或)这样的运算符来组合出特殊的过滤操作。

**示例**

书写一则过滤,仅显示成功的、不重复的应答:

    --output-filter="success = 1 && repeat = 0"

过滤出 RST 分类并且 TTL 大于 10 的包,或者 SYNACK 分类的包:

    --output-filter="(classification = rst && ttl > 10) || classification = synack"

#### CSV ####

csv 模块将会生成以逗号分隔各个要求输出的字段的文件。例如,以下的指令将生成名为 `output.csv` 的 CSV 文件。

    $ zmap -p 80 -f "response,saddr,daddr,sport,seq,ack,in_cooldown,is_repeat,timestamp" -o output.csv

----------

    #响应, 源地址, 目的地址, 源端口, 目的端口, 序列号, 应答, 是否是冷却模式, 是否重复, 时间戳
    response, saddr, daddr, sport, dport, seq, ack, in_cooldown, is_repeat, timestamp
    synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0,2013-08-15 18:55:47.681
    rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0,2013-08-15 18:55:47.683
    rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0,2013-08-15 18:55:47.691
    rst, 148.8.49.150, 10.0.0.9, 80, 41672, 0, 1135824975, 0, 0,2013-08-15 18:55:47.692
    rst, 50.165.166.206, 10.0.0.9, 80, 38858, 0, 535206863, 0, 0,2013-08-15 18:55:47.694
    rst, 65.55.203.135, 10.0.0.9, 80, 50008, 0, 4071709905, 0, 0,2013-08-15 18:55:47.700
    synack, 50.57.166.186, 10.0.0.9, 80, 60650, 2813653162, 993314545, 0, 0,2013-08-15 18:55:47.704
    synack, 152.75.208.114, 10.0.0.9, 80, 52498, 460383682, 4040786862, 0, 0,2013-08-15 18:55:47.707
    synack, 23.72.138.74, 10.0.0.9, 80, 33480, 810393698, 486476355, 0, 0,2013-08-15 18:55:47.710

#### Redis ####

Redis 输出模块允许地址被添加到一个 Redis 队列,而不是保存到文件,这让 ZMap 可以与之后的处理工具结合使用。

**注意!** ZMap 默认不会编译 Redis 功能。如果你从源码构建 ZMap,可以在 CMake 的时候加上 `-DWITH_REDIS=ON` 来增加 Redis 支持。

#### JSON ####

JSON 输出模块用起来类似于 CSV 模块,只是以 JSON 格式写入到文件。JSON 文件能轻松地导入到其它可以读取 JSON 的程序中。

**注意!** ZMap 默认不会编译 JSON 功能。如果你从源码构建 ZMap,可以在 CMake 的时候加上 `-DWITH_JSON=ON` 来增加 JSON 支持。

### 黑名单和白名单 ###

<a name="blacklisting"></a>

ZMap 同时支持对网络前缀做黑名单和白名单。如果 ZMap 不加黑名单和白名单参数,它将会扫描所有的 IPv4 地址(包括本地的、保留的以及组播地址)。如果指定了黑名单文件,那么在黑名单中的网络前缀将不再扫描;如果指定了白名单文件,只有那些网络前缀在白名单内的才会扫描。白名单和黑名单文件可以协同使用;黑名单优先于白名单(例如:如果您在白名单中指定了 10.0.0.0/8 并在黑名单中指定了 10.1.0.0/16,那么 10.1.0.0/16 将不会扫描)。白名单和黑名单文件可以在命令行中指定,如下所示:

**-b, --blacklist-file=path**

该文件用于记录黑名单子网,以 CIDR(无类域间路由)表示法书写,例如 192.168.0.0/16

**-w, --whitelist-file=path**

该文件用于记录限制扫描的子网,以 CIDR 表示法书写,例如 192.168.0.0/16

黑名单文件的每行都需要以 CIDR 的表示格式书写,一行一个网络前缀。允许使用 `#` 加以备注。例如:

    # IANA(互联网编号管理局)记录的用于特殊目的的 IPv4 地址
    # http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
    # 更新于2013-05-22

    0.0.0.0/8           # RFC1122: 网络中的所有主机
    10.0.0.0/8          # RFC1918: 私有地址
    100.64.0.0/10       # RFC6598: 共享地址空间
    127.0.0.0/8         # RFC1122: 回环地址
    169.254.0.0/16      # RFC3927: 本地链路地址
    172.16.0.0/12       # RFC1918: 私有地址
    192.0.0.0/24        # RFC6890: IETF 协议预留
    192.0.2.0/24        # RFC5737: 测试地址1
    192.88.99.0/24      # RFC3068: IPv6 转换到 IPv4 的任播
    192.168.0.0/16      # RFC1918: 私有地址
    192.18.0.0/15       # RFC2544: 检测地址
    198.51.100.0/24     # RFC5737: 测试地址2
    203.0.113.0/24      # RFC5737: 测试地址3
    240.0.0.0/4         # RFC1112: 预留地址
    255.255.255.255/32  # RFC0919: 受限广播地址

    # IANA 记录的用于组播的地址空间
    # http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
    # 更新于2013-06-25

    224.0.0.0/4         # RFC5771: 组播/预留地址

如果您只是想扫描因特网中随机的一部分地址,请使用[抽样](#ratelimiting),来代替使用白名单和黑名单。

**注意!** ZMap 默认使用 `/etc/zmap/blacklist.conf` 作为黑名单文件,其中包含有本地的地址空间和预留的 IP 空间。通过编辑 `/etc/zmap/zmap.conf` 可以改变默认的配置。

### 速度限制与抽样 ###

<a name="ratelimiting"></a>

默认情况下,ZMap 将以您当前网卡所能支持的最快速度扫描。以我们对于常用硬件的经验,这通常是理论上 Gbit 以太网速度的 95-98%,这可能比您的上游提供商可处理的速度还要快。ZMap 不会自动根据您的上游提供商来调整发送速率,您可能需要手动调整发送速率来减少丢包和错误结果。

**-r, --rate=pps**

设置最大发送速率,以包/秒为单位

**-B, --bandwidth=bps**

设置发送速率,以比特/秒为单位(支持 G、M 和 K 后缀)。这会覆盖 --rate 参数。

ZMap 同样支持对 IPv4 地址空间进行指定最大目标数和/或最长运行时间的随机采样。由于每次对主机的扫描顺序都是通过随机排序生成的,将扫描的主机个数限制为 N,就相当于随机抽选 N 个主机。命令选项如下:

**-n, --max-targets=n**

探测目标上限数量

**-N, --max-results=n**

结果上限数量(累积收到这么多结果后退出)

**-t, --max-runtime=secs**

发送数据包时间长度上限(以秒为单位)

**-e, --seed=n**

种子,用以选择地址的排列方式。使用不同 ZMap 执行扫描操作时将种子设成相同的值,可以保证相同的扫描顺序。

举个例子,如果您想要多次扫描同样的一百万个互联网主机,您可以设定排序种子和扫描主机的上限数量,大致如下所示:

    zmap -p 443 -e 3 -n 1000000 -o results

为了确定哪一百万主机将要被扫描,您可以执行预扫,只打印数据包而非发送,并非真的实施扫描。

    zmap -p 443 -e 3 -n 1000000 --dryrun | grep daddr \
        | awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'

### 发送多个数据包 ###

ZMap 支持向每个主机发送多个探测。增加这个数量既增加了扫描时间,又增加了到达的主机数量。然而,我们发现,增加的扫描时间(每个额外探测增加近 100%)远远大于到达主机数量的增长(每个额外探测增加近 1%)。

**-P, --probes=n**

向每个 IP 发出的独立探测个数(默认值 = 1)

### 示例应用 ###

ZMap 专为向大量主机发起连接并寻找那些正确响应而设计。然而,我们意识到许多用户需要执行一些后续处理,如执行应用程序级别的握手。例如,用户在 80 端口实施 TCP SYN 扫描也许想要实施一个简单的 GET 请求,还有用户扫描 443 端口可能希望完成 TLS 握手。

#### Banner 获取 ####

我们收录了一个示例程序,banner-grab,伴随 ZMap 使用可以让用户从监听状态的 TCP 服务器上接收到消息。banner-grab 连接到提供的服务器上,发送一个可选的消息,然后打印出收到的第一个消息。这个工具可以用来获取 banner,例如 HTTP 服务回复的具体指令、telnet 登陆提示,或 SSH 服务的字符串。

下面的例子寻找了 1000 个监听 80 端口的服务器,并向每个发送一个简单的 GET 请求,存储它们 base64 编码的响应至 http-banners.out:

    $ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > http-banners.out

如果想知道更多使用 `banner-grab` 的细节,可以参考 `examples/banner-grab` 中的 README 文件。

**注意!** ZMap 和 banner-grab(如例子中)同时运行可能会比较显著地影响对方的表现和精度。确保不让 ZMap 占满 banner-grab-tcp 的并发连接,不然 banner-grab 将会落后于标准输入的读入,导致阻塞 ZMap 的输出写入。我们推荐使用较慢扫描速率的 ZMap,同时提升 banner-grab-tcp 的并发性至 3000 以内(注意,并发连接数超过 1000 需要您使用 `ulimit -SHn 100000` 和 `ulimit -HHn 100000` 来增加每个进程的最大文件描述符数量)。当然,这些参数取决于您服务器的性能和连接成功率(hit-rate);我们鼓励开发者在运行大型扫描之前先进行小样本的试验。

#### 建立套接字 ####

我们也收录了另一种形式的 banner-grab,就是 forge-socket,它重复利用服务器发出的 SYN-ACK 来连接并最终取得 banner。在 `banner-grab-tcp` 中,ZMap 向每个服务器发送一个 SYN,并监听服务器发回的带有 SYN+ACK 的应答。运行 ZMap 主机的内核接收应答后会发送 RST,这样就没有与该包关联的活动连接。程序 banner-grab 必须在这之后创建一个新的 TCP 连接,才能从服务器获取数据。

在 forge-socket 中,我们利用内核中同名的模块,使我们可以创建任意参数的 TCP 连接。通过抑制内核的 RST 包,并以重用 SYN+ACK 的参数创建套接字来取代它,通过这个套接字收发数据和我们平时使用的连接套接字并没有什么不同。

要使用 forge-socket,您需要 forge-socket 内核模块,从 [github][1] 上可以获得。您需要 `git clone git@github.com:ewust/forge_socket.git` 至 ZMap 源码根目录,然后 cd 进入 forge_socket 目录,运行 make。以 root 身份运行 `insmod forge_socket.ko` 来安装该内核模块。

您也需要告知内核不要发送 RST 包。一个简单的在全系统禁用 RST 包的方法是使用 **iptables**。以 root 身份运行 `iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` 即可,当然您也可以加上一项 `--dport X` 将禁用局限于所扫描的端口(X)上。扫描完成后移除这项设置,以 root 身份运行 `iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` 即可。

现在应该可以构建 forge-socket 的 ZMap 示例程序了。运行它需要使用 **extended_file** 这个 ZMap [输出模块](#outputmodule):

    $ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
        ./forge-socket -c 500 -d ./http-req > ./http-banners.out

详细内容可以参考 `examples/forge-socket` 目录下的 README。

----------

### 编写探测和输出模块 ###

<a name="extending"></a>

ZMap 可以通过**探测模块**来扩展支持不同类型的扫描,通过**输出模块**增加不同类型的输出结果。注册过的探测和输出模块可以在命令行中列出:

**--list-probe-modules**

列出安装过的探测模块

**--list-output-modules**

列出安装过的输出模块

#### 输出模块 ####

<a name="outputmodule"></a>

ZMap 的输出和输出后处理可以通过实现和注册扫描器的**输出模块**来扩展。输出模块在接收每一个应答包时都会收到一个回调。默认提供的模块仅提供简单的输出,但输出模块同样可以支持更多的输出后处理(例如:跟踪重复包,或输出 AS 号码来代替 IP 地址)。

通过定义一个新的 output_module 结构来创建输出模块,并在 [output_modules.c][2] 中注册:

    typedef struct output_module {
        const char      *name;             // 在命令行如何引用输出模块
        unsigned        update_interval;   // 以秒为单位的更新间隔

        output_init_cb    init;            // 在扫描器初始化的时候调用
        output_update_cb  start;           // 在扫描器开始的时候调用
        output_update_cb  update;          // 每次更新间隔调用,秒为单位
        output_update_cb  close;           // 扫描终止后调用

        output_packet_cb  process_ip;      // 接收到应答时调用

        const char      *helptext;         // 会在 --list-output-modules 时打印在屏幕上
    } output_module_t;

输出模块必须有名称,通过名称可以在命令行调用,并且通常会实现 `success_ip` 和可选的 `other_ip` 回调。process_ip 回调会在每个收到并经由**探测模块**过滤的应答包上被调用。应答是否被认定为成功并不确定(比如,它可以是一个 TCP 的 RST)。这些回调必须定义匹配 `output_packet_cb` 定义的函数:

    int (*output_packet_cb) (
        ipaddr_n_t    saddr,          // 网络字节格式的被扫描主机(应答来源)IP 地址
        ipaddr_n_t    daddr,          // 网络字节格式的目的 IP 地址

        const char*   response_type,  // 探测模块的数据包分类

        int           is_repeat,      // {0: 主机的第一个应答, 1: 后续的应答}
        int           in_cooldown,    // {0: 非冷却状态, 1: 扫描器处于冷却中}

        const u_char* packet,         // 指向 IP 包的 iphdr 结构体的指针
        size_t        packet_len      // 包的长度,以字节为单位
    );

输出模块还可以通过注册回调,在扫描初始化的时候(诸如打开输出文件的任务)、在扫描开始阶段(诸如记录黑名单的任务)、在扫描的常规间隔(诸如状态更新的任务)、在关闭的时候(诸如关掉所有打开的文件描述符)执行相应操作。提供的这些回调可以完整地访问扫描配置和当前状态:

    int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);

这些定义在 [output_modules.h][3] 中。在 [src/output_modules/module_csv.c][4] 中有可用示例。

#### 探测模块 ####

<a name="probemodule"></a>

数据包由**探测模块**构造,它可以创建各种包和不同类型的响应。ZMap 默认拥有两个扫描模块:`tcp_synscan` 和 `icmp_echoscan`。默认情况下,ZMap 使用 `tcp_synscan` 来发送 TCP SYN 包,并对每个主机的响应分类,如打开(收到 SYN+ACK)或关闭(收到 RST)。ZMap 允许开发者编写自己的 ZMap 探测模块,使用如下的 API:

任何类型的扫描都必须通过开发和注册 `probe_module_t` 结构中的回调来实现:

    typedef struct probe_module {
        const char               *name;            // 如何在命令行调用扫描
        size_t                   packet_length;    // 探测包有多长(必须是静态的)

        const char               *pcap_filter;     // 对收到的响应实施 PCAP 过滤
        size_t                   pcap_snaplen;     // libpcap 捕获的最大字节数
        uint8_t                  port_args;        // 设为 1,如果 ZMap 需要用户指定 --target-port

        probe_global_init_cb     global_initialize;  // 在扫描初始化时被调用一次
        probe_thread_init_cb     thread_initialize;  // 在每个包缓存区的线程中被调用一次
        probe_make_packet_cb     make_packet;        // 每个主机更新包的时候被调用一次
        probe_validate_packet_cb validate_packet;    // 每收到一个包被调用一次,
                                                     // 如果包无效返回 0,
                                                     // 非零则有效。

        probe_print_packet_cb    print_packet;       // 如果在预扫模式下,每个包都调用
        probe_classify_packet_cb process_packet;     // 由区分响应的接收器调用
        probe_close_cb           close;              // 扫描终止后被调用

        fielddef_t               *fields;            // 该模块指定的字段的定义
        int                      numfields;          // 字段的数量
    } probe_module_t;

在扫描操作初始化时会调用一次 `global_initialize`,可以用来实施一些必要的全局配置和初始化操作。然而,`global_initialize` 并不能访问包缓冲区,那是线程特定的。作为代替,`thread_initialize` 在每个发送线程初始化的时候被调用,提供对于缓冲区的访问,可以用来构建探测包以及全局的源和目的值。此回调应用于构建与主机无关的包结构,使得只有特定值(如:目的主机和校验和)需要随着每个主机更新。例如,以太网头部信息在发包过程中不会变更(除了由 NIC 硬件计算的校验和),因此可以事先定义以减少扫描时的开销。

调用回调 `make_packet` 是为了针对每个被扫描的主机,让**探测模块**更新主机相关的值,同时提供 IP 地址、一个非透明的验证字符串和探测数目(如下所示)。探测模块负责把验证字符串尽可能多地编码进探测包中,以便收到应答时能够验证其确实对应之前发出的探测。例如,针对 TCP SYN 扫描,tcp_synscan 探测模块会使用 TCP 源端口和序列号来存储验证字符串,响应包(SYN+ACK)的目的端口和确认号中将包含预期的值。

    int make_packet(
        void        *packetbuf,  // 包的缓冲区
        ipaddr_n_t  src_ip,      // 网络字节格式源 IP
        ipaddr_n_t  dst_ip,      // 网络字节格式目的 IP
        uint32_t    *validation, // 探测中的验证字符串
        int         probe_num    // 如果向每个主机发送多个探测,
                                 // 该值为我们对于该主机正在发送的探测数目
    );

扫描模块也应该定义 `pcap_filter`、`validate_packet` 和 `process_packet`。只有符合 PCAP 过滤器的包才会被扫描。举个例子,在一个 TCP SYN 扫描的情况下,我们只想要调查 TCP SYN/ACK 或 TCP RST 数据包,因此使用类似 `tcp && tcp[13] & 4 != 0 || tcp[13] == 18` 的过滤器。`validate_packet` 函数将会被每个满足 PCAP 过滤条件的包调用。如果验证返回的值非零,将会调用 `process_packet` 函数,并使用 `fields` 定义的字段和包中的数据填充字段集。举个例子,如下代码为 TCP synscan 探测模块处理了一个数据包。

    void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
    {
        struct iphdr *ip_hdr = (struct iphdr *)&packet[sizeof(struct ethhdr)];
        struct tcphdr *tcp = (struct tcphdr*)((char *)ip_hdr
                        + (sizeof(struct iphdr)));

        fs_add_uint64(fs, "sport", (uint64_t) ntohs(tcp->source));
        fs_add_uint64(fs, "dport", (uint64_t) ntohs(tcp->dest));
        fs_add_uint64(fs, "seqnum", (uint64_t) ntohl(tcp->seq));
        fs_add_uint64(fs, "acknum", (uint64_t) ntohl(tcp->ack_seq));
        fs_add_uint64(fs, "window", (uint64_t) ntohs(tcp->window));

        if (tcp->rst) { // RST packet
            fs_add_string(fs, "classification", (char*) "rst", 0);
            fs_add_uint64(fs, "success", 0);
        } else { // SYNACK packet
            fs_add_string(fs, "classification", (char*) "synack", 0);
            fs_add_uint64(fs, "success", 1);
        }
    }

--------------------------------------------------------------------------------

via: https://zmap.io/documentation.html

译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:https://github.com/ewust/forge_socket/
[2]:https://github.com/zmap/zmap/blob/v1.0.0/src/output_modules/output_modules.c
[3]:https://github.com/zmap/zmap/blob/master/src/output_modules/output_modules.h
[4]:https://github.com/zmap/zmap/blob/master/src/output_modules/module_csv.c

十个非常有趣的 Linux 彩蛋
================================================================================

*制作 Adventure 的程序员悄悄地把一个秘密功能塞进了游戏里。Atari 并没有对此感到生气,而是给这类“秘密功能”起了个名字——“彩蛋”,因为——你懂的——你会像找复活节彩蛋一样寻找它们。*



*图片来自: Wikipedia*

在 1979 年的时候,Atari 公司为 Atari 2600 开发了一个电子游戏——[Adventure][1]。

制作 Adventure 的程序员悄悄地把这样的一个功能放进了游戏里:当用户把一个“隐形方块”移动到特定的一面墙上时,会进入一个“密室”。那个房间里只有一句话:“Created by [Warren Robinett][2]”——意思是,由 [Warren Robinett][2] 创建。

Atari 有一项反对作者将自己的名字放进他们的游戏里的政策,所以这个无畏的程序员只能偷偷地把自己的名字放进游戏里。Atari 在 Warren Robinett 离开公司之后才发现这个“密室”。Atari 并没有对此感到生气,而是给这类“秘密功能”起了个名字——“彩蛋”,因为——你懂的——你会像找复活节彩蛋一样寻找它们。Atari 还宣布将在之后的游戏中加入更多的“彩蛋”。

这种软件里的“隐藏功能”并不是第一次出现(这类特性的首次出现是在 1966 年 [PDP-10][3] 的操作系统上),但这是它第一次有了名字,同时也是第一次真正地被众多电脑用户和游戏玩家所注意。

Linux(以及和 Linux 相关的软件)也没有被遗忘。这些年来,人们为这个倍受喜爱的操作系统创作了很多非常有趣的彩蛋。下面将介绍我个人最喜爱的彩蛋——以及如何找到它们。

你很快就会发现这些彩蛋大多需要通过终端才能体验到。这是故意的,因为终端比较酷。【我应该借此机会提醒你一下,如果你想运行我所列出的应用,然而你却还没有安装它们,你是绝对无法运行成功的。你应该先安装好它们。因为……毕竟只是计算机。】

### Arch : 包管理器(pacman)里的吃豆人(Pac-Man) ###

为了广大的 [Arch Linux][4] 粉丝,我们将以此开篇。你们可以将 [pacman][6](Arch 的包管理器)的进度条变成[吃豆人][5]吃豆的样子。别问我为什么这不是默认设置。

你需要用你最喜欢的文本编辑器编辑 /etc/pacman.conf 文件。在 “# Misc options” 区下面,删除 “Color” 前的 “#”,并添加一行 “ILoveCandy”。因为吃豆人喜欢糖豆。
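
修改完成后,该部分看起来大致如下(示例):

    # Misc options
    Color
    ILoveCandy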

没错,这样就行了!下次你在终端里运行 pacman 时,你就会让这个黄颜色的小家伙吃到些午餐(至少能吃些糖豆)。

### GNU Emacs : 俄罗斯方块(Tetris)以及…… ###



*我不喜欢 emacs。一点也不喜欢。但是它确实能玩俄罗斯方块。*

我要坦白一件事:我不喜欢 [emacs][7]。一点也不喜欢。

有些东西让我满心欢喜。有些东西能带走我所有伤痛。有些东西能解决我的烦恼。这些[绝对跟 emacs 无关][8]。

但是它确实能玩俄罗斯方块。这可不是件小事。方法如下:

第一步)打开 emacs。(有疑问?输入“emacs”。)

第二步)按下键盘上的 Esc 和 X 键。

第三步)输入“tetris”然后按下“Enter”。

玩腻了俄罗斯方块?试试“pong”、“snake”还有其他一堆小游戏(或奇怪的东西)。在“/usr/share/emacs/*/lisp/play”目录中可以看见完整的清单。

### 动物说话了 ###

让动物在终端里说话,在 Linux 世界里有着悠久而辉煌的历史。下面这些真的是最应该知道的。

在用基于 Debian 的发行版?试试输入“apt-get moo”。



*apt-get moo*

简单?的确。但这是只会说话的牛,所以惹我们喜欢。再试试“aptitude moo”。它会告诉你“There are no Easter Eggs in this program(这个程序里没有彩蛋)”。

关于 [aptitude][9] 有一件事你一定要知道,它是个肮脏、下流的骗子。如果 aptitude 是匹诺曹,那它的鼻子能刺穿月球。在这条命令中添加“-v”选项。不停地添加 v,直到它被逼得投降。



*我猜大家都同意,这是 aptitude 中最重要的功能。*

我猜大家都同意,这是 aptitude 中最重要的功能。但是万一你想把自己的话让一头牛说出来怎么办?这时我们就需要“cowsay”了。

还有,别让“cowsay(牛说)”这个名字把你给骗了。你可以让你的话从各种东西的嘴里说出来。比如一头大象、Calvin、Beavis,甚至可以是 Ghostbusters(捉鬼敢死队)的标志。只需在终端输入“cowsay -l”就能看到所有选项的列表。



*你可以让你的话从各种东西的嘴里说出来*

想玩高端点的?你可以用管道把其他应用的输出放到 cowsay 中。试试“fortune | cowsay”。非常有趣。

### Sudo 请无情地侮辱我 ###

当你做错事时希望你的电脑骂你的人请举手。反正,我这样想过。试试这个:

输入“sudo visudo”以打开“sudoers”文件。在文件的开头你很可能会看见几行以“Defaults”开头的文字。在那几行后面添加“Defaults insults”并保存文件。
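
也就是在那几行 “Defaults” 之后加上这样一行(示例):

    Defaults insults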

现在,只要你输错了 sudo 密码,你的系统就会骂你。这些可以提高自信的语句包括“听着,煎饼脑袋,我可没时间听这些垃圾。”、“你吃错药了吧?”以及“你被电过以后大脑就跟以前不太一样了是不是?”

把这个设在同事的电脑上会非常有趣。

### Firefox 是个厚脸皮 ###

这一个不需要终端!太棒了!

打开火狐浏览器。在地址栏填上“about:about”。你将得到火狐浏览器中所有的“about”页。一点也不炫酷,是不是?

现在试试“about:mozilla”,浏览器就会回应你一条从“[Book of Mozilla(Mozilla 之书)][10]”——这本浏览网页的圣经——里引用的话。我的另一个最爱是“about:robots”,这个也很有趣。



*“[Book of Mozilla(Mozilla 之书)][10]”——浏览网页的圣经。*

### 精心调制的混搭日历 ###

是否厌倦了千百年不变的[Gregorian Calendar(格里高利历)][11]?准备好乱入了吗?试试输入“ddate”。这样会把当前日期以 [Discordian Calendar(不和教历)][12]的方式显示出来。你会遇见这样的语句:

“今天是 Sweetmorn(甜美的清晨),3181 年 Discord(不和)季的第 18 天。”

我听见你在说什么了,“但这根本不是什么彩蛋!”嘘~,闭嘴。只要我想,我就可以把它叫做彩蛋。

### 快速进入黑客行话模式 ###

想不想尝试一下电影里超级黑客的感觉?试试(通过添加“-oS”)把扫描器 nmap 设置成“[Script Kiddie][13]”模式。然后所有的输出都会变成最 3l33t 的[黑客范][14]。

例如: “nmap -oS - google.com”

赶快试试。我知道你有多想这么做。你一定会让安吉丽娜·朱莉(Angelina Jolie)[印象深刻][15]。

### lolcat 彩虹 ###

在你的 Linux 终端里有很多彩蛋真真是极好的……但是如果你还想要变得……更有魅力些怎么办?输入:lolcat。把任何一个程序的文本输出通过管道输入到 lolcat 里,你会得到它的超级无敌彩虹版。



*把任何一个程序的文本输出通过管道输入到 lolcat 里。你会得到它的超级无敌彩虹版。*

### 追光标的小家伙 ###



*“Oneko” —— 经典“Neko”的 Linux 移植版本。*

接下来是“Oneko” —— 经典“[Neko][16]”的 Linux 移植版本。基本上就是个满屏幕追着你的光标跑的小猫。

虽然严格来说它并不算是“彩蛋”,它还是很有趣的。而且感觉上也是很彩蛋的。

你还可以用不同的选项(比如“oneko -dog”)把小猫替换成小狗,或是调成其他样式。用这个对付讨厌的同事有着无限的可能。

就是这些了!一个我最喜欢的 Linux 彩蛋(或是类似东西)的清单。请尽情地在下面的评论区留下你的最爱。因为这是互联网,你就能做这些事。

--------------------------------------------------------------------------------

via: http://www.linux.com/news/software/applications/820944-10-truly-amusing-linux-easter-eggs-

作者:[Bryan Lunduke][a]
译者:[H-mudcup](https://github.com/H-mudcup)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.linux.com/community/forums/person/56734
[1]:http://en.wikipedia.org/wiki/Adventure_(Atari_2600)
[2]:http://en.wikipedia.org/wiki/Warren_Robinett
[3]:http://en.wikipedia.org/wiki/PDP-10
[4]:http://en.wikipedia.org/wiki/Arch_Linux
[5]:http://en.wikipedia.org/wiki/Pac-Man
[6]:http://www.linux.com/news/software/applications/820944-10-truly-amusing-linux-easter-eggs-#Pacman
[7]:http://en.wikipedia.org/wiki/GNU_Emacs
[8]:https://www.youtube.com/watch?v=AQ4NAZPi2js
[9]:https://wiki.debian.org/Aptitude
[10]:http://en.wikipedia.org/wiki/The_Book_of_Mozilla
[11]:http://en.wikipedia.org/wiki/Gregorian_calendar
[12]:http://en.wikipedia.org/wiki/Discordian_calendar
[13]:http://nmap.org/book/output-formats-script-kiddie.html
[14]:http://nmap.org/book/output-formats-script-kiddie.html
[15]:https://www.youtube.com/watch?v=Ql1uLyuWra8
[16]:http://en.wikipedia.org/wiki/Neko_%28computer_program%29

10 大帮助你获得理想职业的操作系统技能
================================================================================

我们用了5篇系列文章,来让人们意识到那些可以帮助他们获得理想职业的顶级技能。在这个充满竞争的社会里,你不能仅仅依赖一项技能,你需要在多个职业技能上都有所涉猎。我们并不能权衡这些技能,但是我们可以参考这些几乎不变的惯例和统计数据。

下面的文章和紧跟其后的内容,是针对全球各大 IT 公司上一季度对员工技能要求的详细调查报告。统计数据真实地反映了需求和市场的变化。我们会尽力让这份报告保持时效性,特别是有明显变化的时候。这五篇系列文章是:

- 10大帮助你获得理想职业的操作系统技能
- [10大帮助你获得职位的著名 IT 技能][1]
- [10大帮助你获得理想职位的项目技能][2]
- [10大帮助你赢得理想职位的网络技能][3]
- [10大帮助你获得理想职位的个人认证][4]

### 1. Windows ###

微软研发的 Windows 操作系统不仅在 PC 市场上占据龙头地位,而且从职位视角来看,不管你是赞成还是反对,它也是最抢手的操作系统技能。有资料显示上一季度其需求增长达到 0.1%。

最新版本 : Windows 8.1

### 2. Red Hat Enterprise Linux ###

Red Hat Enterprise Linux 是一个商业的 Linux 发行版,它由红帽公司研发。它是世界上运用最广的 Linux 发行版之一,特别是在生产环境和协同工作方面。上一季度其整体需求上涨 17%,位列第二。

最新版本 : RedHat Enterprise Linux 7.1

### 3. Solaris ###

排在第三的是 Solaris UNIX 操作系统,最初由 Sun Microsystems 公司研发,现由 Oracle 公司负责继续研发。在上一季度其需求率上涨 14%。

最新版本 : Oracle Solaris 10 1/13

### 4. AIX ###

排在第四的是 AIX,这是一款由 IBM 研发的专有 Unix 操作系统。在上一季度需求率上涨 11%。

最新版本 : AIX 7

### 5. Android ###

排在第五的是谷歌公司研发的安卓系统,它是一款使用非常广泛的开源操作系统,专门为手机、平板电脑、可穿戴设备设计。在上一季度需求率上涨 4%。

最新版本 : Android 5.1(即 Lollipop)

### 6. CentOS ###

排在第六的是 CentOS,它是从 RedHat Enterprise Linux 衍生出的一个发行版。在上一季度需求率上涨接近 22%。

最新版本 : CentOS 7

### 7. Ubuntu ###

排在第七的是 Ubuntu,这是一款由 Canonical 公司研发设计的 Linux 系统,旨在服务于个人。上一季度需求率上涨 11%。

最新版本 :

- Ubuntu 14.10(提供九个月的安全和维护更新)
- Ubuntu 14.04.2 LTS

### 8. Suse ###

排在第八的是由 Novell 研发的 Suse,这款 Linux 发行版因为 YaST 配置工具而闻名。其上一季度需求率上涨 8%。

最新版本 : 13.2

### 9. Debian ###

排在第九的是非常有名的 Linux 操作系统 Debian,它是上百种 Linux 发行版之母,非常接近 GNU 理念。其上一季度需求率下降了近 9%。

最新版本 : Debian 7.8

### 10. HP-UX ###

排在第十的是 Hewlett-Packard 公司研发的专有 UNIX 操作系统 HP-UX,上一季度需求率下降 5%。

最新版本 : 11i v3 Update 13

以上便是全部信息,我会尽快推出下一篇系列文章,敬请关注 Tecmint。不要忘了留下您宝贵的评论。如果您喜欢我们的文章并且愿意与我们分享您的见解,这对我们的工作是一种鼓励。

--------------------------------------------------------------------------------

via: http://www.tecmint.com/top-distributions-in-demand-to-get-your-dream-job/

作者:[Avishek Kumar][a]
译者:[sevenot](https://github.com/sevenot)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/famous-it-skills-in-demand-that-will-get-you-hired/
[2]:https://linux.cn/article-5303-1.html
[3]:http://www.tecmint.com/networking-protocols-skills-to-land-your-dream-job/
[4]:http://www.tecmint.com/professional-certifications-in-demand-that-will-get-you-hired/

Lolcat :一个在 Linux 终端中输出彩虹特效的命令行工具
================================================================================

那些相信 Linux 命令行是单调无聊且没有任何乐趣的人们,你们错了,这里有一些有关 Linux 的文章,它们展示着 Linux 是如何的有趣和“淘气”:

- [Linux命令及Linux终端的20个趣事][1]
- [终端中的乐趣:6个有趣的Linux命令行工具][2]
- [Linux终端的乐趣之把玩字词计数][3]

在本文中,我将讨论一个名为“lolcat”的小工具——它可以在终端中生成彩虹般的颜色。



*为终端生成彩虹般颜色的输出的 Lolcat 命令*

#### 何为 lolcat ? ####

Lolcat 是一个针对 Linux、BSD 和 OSX 平台的工具,它类似于 [cat 命令][4],并为 `cat` 的输出添加彩虹般的色彩。Lolcat 主要用于在 Linux 终端中为文本添加彩虹般的色彩。

### 在 Linux 中安装 Lolcat ###

**1. Lolcat 工具在许多 Linux 发行版的软件仓库中都可获取到,但可获得的版本都有些陈旧,而你可以通过 git 仓库下载和安装最新版本的 lolcat。**

由于 Lolcat 是一个 ruby gem 程序,所以在你的系统中必须安装有最新版本的 RUBY。

    # apt-get install ruby      [在基于 APT 的系统中]
    # yum install ruby          [在基于 Yum 的系统中]

[...]



*Lolcat 的帮助文档*

**4. 接着,通过管道连接 lolcat 和其他命令,例如 ps、date 和 cal:**

[...]



*ps 命令的输出*



*Date 的输出*



*Calendar 的输出*

**5. 使用 lolcat 来展示一个脚本文件的代码:**

[...]



*用 lolcat 来展示代码*

**6. 通过管道连接 lolcat 和 figlet 命令。Figlet 是一个展示由常规的屏幕字符组成的巨大字符串的应用。我们可以通过管道将 figlet 的输出连接到 lolcat 中来展示出如下的多彩输出:**

    # echo I ❤ Tecmint | lolcat
    # figlet I Love Tecmint | lolcat



*多彩的文字*

**注**: 注意,❤ 是一个 unicode 字符。要安装 figlet,你需要像下面那样使用 yum 和 apt 来得到这个软件包:

    # apt-get install figlet
    # yum install figlet

[...]



*动的文本*

这里选项 `-a` 指的是 Animation(动画),`-d` 指的是 duration(持续时间)。在上面的例子中,持续 500 次动画。

[...]



*多彩地显示文件*

**9. 通过管道连接 lolcat 和 cowsay。cowsay 是一个可配置的正在思考或说话的奶牛,这个程序也支持其他的动物。**

[...]

    skeleton snowman sodomized-sheep stegosaurus stimpy suse three-eyes turkey
    turtle tux unipony unipony-smaller vader vader-koala www

通过管道连接 lolcat 和 cowsay 后的输出,并且使用了‘gnu’形象的 cowfile:

    # cowsay -f gnu ☛ Tecmint ☚ is the best Linux Resource Available online | lolcat



*使用 Lolcat 的 Cowsay*

**注**: 你可以将 lolcat 和其他任何命令用管道连接起来,在终端中得到彩色的输出。

**10. 你可以为最常用的命令创建别名,来使得命令的输出呈现出彩虹般的色彩。你可以像下面那样为 ‘ls -l‘ 命令创建别名,这个命令输出一个目录中包含内容的列表。**

[...]



*多彩的 Alias 命令*

你可以像上面建议的那样,为任何命令创建别名。为了使得别名永久生效,你需要添加相关的代码(上面的代码是 ls -l 的别名)到 ~/.bashrc 文件中,并登出后再重新登录来使得更改生效。
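
例如像这样(仅为示例,别名的名字可以自行选择):

    alias lolls='ls -l | lolcat'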

现在就是这些了。我想知道你是否曾经注意过 lolcat 这个工具?你是否喜欢这篇文章?欢迎在下面的评论环节中给出你的建议和反馈。喜欢并分享我们,帮助我们传播。

--------------------------------------------------------------------------------

via: http://www.tecmint.com/lolcat-command-to-output-rainbow-of-colors-in-linux-terminal/

作者:[Avishek Kumar][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:https://linux.cn/article-2831-1.html
[2]:https://linux.cn/article-4128-1.html
[3]:https://linux.cn/article-4088-1.html
[4]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/

GNU、开源和 Apple 的那些黑历史
==============================================================================

> 自由软件/开源社区与 Apple 之间的争论可以回溯到上世纪80年代,Linux 的创始人曾称 Mac OS X 的核心是“一堆废物”。还有其他一些软件史上的轶事。



开源拥护者们与微软之间有着漫长而摇摆的关系,这一点每个人都知道。但是,在许多方面,自由或者开源软件的支持者们与 Apple 之间的争执则更加突出——尽管这很少受到媒体的关注。

需要说明的是,并不是所有的开源拥护者都厌恶苹果。从各种轶事中,我已经见过很多 Linux 的黑客把玩 iPhone 和 iPad。实际上,许多 Linux 用户是十分喜欢 Apple 的 OS X 系统的,以至于他们[创造了很多 Linux 的发行版][1],都设计得看起来像 OS X。(顺便说下,[北朝鲜政府][2]就这样做了。)

但是 Mac 的信徒与企鹅的信徒——即 Linux 社区(不包括别的,仅指自由与开源软件世界中的这一小部分)之间的关系,并不一直是完全的和谐。并且这绝不是一个新的现象,这是我在研究 Linux 和自由软件基金会的历史时发现的。

### GNU vs. Apple ###

这场争端至少可以回溯到上世纪80年代后期。1988年6月,发起 [GNU][3] 项目(该项目旨在建立一个完全自由的、源代码可以自由共享的类 Unix 操作系统)的 Richard Stallman [强烈指责][4]了 Apple 对 [Hewlett-Packard][5](HPQ)和 [Microsoft][6](MSFT)的诉讼——Apple 在诉讼中声称这两家公司抄袭了 Macintosh 操作系统的界面和观感。GNU 警告说,如果 Apple 胜诉,这家公司“将会借助这种大众不曾了解的新型权力,终结掉用来取代商业软件的自由软件”。

那个时候,GNU 通过发布[“让你的律师远离我的电脑”按钮(徽章)][7]来对抗 Apple 的诉讼(这意味着,十分讽刺的是,GNU 当时站在了 Microsoft 一边,尽管彼时的情况与今天不同)。同时,GNU 呼吁它的支持者们抵制 Apple,并警告说虽然 Macintosh 看起来是不错的计算机,但 Apple 一旦赢得了诉讼就会形成市场垄断,这会极大地提高计算机的售价。

Apple 最终[输掉了这场诉讼][8],但是直到1994年之后,GNU 才[撤销对 Apple 的抵制][9]。这期间,GNU 一直不断指责 Apple。在上世纪90年代早期甚至之后,GNU 持续将 GNU 软件项目移植到其他个人计算机平台,包括 MS-DOS 计算机。[GNU 宣称][10],除非 Apple 停止在计算机领域垄断的野心,不再坚持对模仿 Macintosh 的用户界面主张独占权利,否则“我们不会提供任何对 Apple 机器的支持”。(因此讽刺的是,Apple 在90年代后期开发的类 UNIX 系统 OS X 中有一大堆软件来自 GNU。但是那是另外的故事了。)

### Torvalds 与 Jobs ###

除去他对大多数发行版比较自由放任的态度,Linux 内核的创造者 Linus Torvalds 相较于 Stallman 和 GNU 过去对 Apple 的态度要和善得多。在他 2001 年出版的书 "Just For Fun: The Story of an Accidental Revolutionary" 中,Torvalds 描述了与 Steve Jobs 的一次会面:大约是 1997 年,他收到后者的邀请,去讨论当时还在开发中、尚未公开发布的 Mac OS X。

“基本上,Jobs 一开始就试图告诉我,桌面上的玩家就两个,Microsoft 和 Apple,而且他认为我能为 Linux 做的最好的事,就是从了 Apple,努力让开源用户去支持 Mac OS X。” Torvalds 写道。

这次会谈显然让 Torvalds 很不爽。争吵的一点集中在 Torvalds 对 Mach 技术上的藐视——对于 Apple 用于构建新的 OS X 操作系统的这个内核,Torvalds 称其为“一堆废物。它包含了所有你能做到的设计错误,并且甚至打算只弥补一小部分”。

但是更令人不快的,显然是 Jobs 在开发 OS X 时对待开源的方式(OS X 的核心里有很多开源程序):“他对这个结构性问题有点轻描淡写:谁在乎基础操作系统——这个真正底层的东西——是不是开源,只要上面盖着的是不开源的 Mac 层?”

总而言之,Torvalds 总结道,Jobs“并没有用太多的论据。他仅仅很简单地说着,胸有成竹地认为我会对与 Apple 合作感兴趣”。“他一无所知,无法想像还会有人并不关心 Mac 市场份额的增长。我认为,当我表现出对 Mac 的市场有多大、或者 Microsoft 的市场有多大毫不关心时,他真的感到惊讶了。”

当然,Torvalds 并不能代表所有的 Linux 用户。他对于 OS X 和 Apple 的看法从 2001 年开始就渐渐软化了。但实际上,早在2000年,Linux 社区的领军人物就表现出对 Apple 及其高层的傲慢的深深的鄙视,这可以让我们看到 Apple 世界和开源/自由软件世界的矛盾是多么的根深蒂固。

从以上两则历史上的花边新闻中,可以看到关于 Apple 产品价值的重大争议:该公司究竟是致力于提升其创造的软硬件的质量,还是仅仅借市场的小聪明获利,让 Apple 产品卖出高于其功能价值的价钱。但是不管怎样,我会暂时置身于这个讨论之外。

--------------------------------------------------------------------------------

via: http://thevarguy.com/open-source-application-software-companies/051815/linux-better-os-x-gnu-open-source-and-apple-

作者:[Christopher Tozzi][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://thevarguy.com/author/christopher-tozzi
[1]:https://www.linux.com/news/software/applications/773516-the-mac-ifying-of-the-linux-desktop/
[2]:http://thevarguy.com/open-source-application-software-companies/010615/north-koreas-red-star-linux-os-made-apples-image
[3]:http://gnu.org/
[4]:https://www.gnu.org/bulletins/bull5.html
[5]:http://www.hp.com/
[6]:http://www.microsoft.com/
[7]:http://www.duntemann.com/AppleSnakeButton.jpg
[8]:http://www.freibrun.com/articles/articl12.htm
[9]:https://www.gnu.org/bulletins/bull18.html#SEC6
[10]:https://www.gnu.org/bulletins/bull12.html
动态壁纸:给 Linux 发行版添加活力背景
|
||||
================================================================================
|
||||
**我们知道你想拥有一个有格调的ubuntu桌面来炫耀一下 :)**
|
||||
|
||||

|
||||
|
||||
*Live Wallpaper*
|
||||
|
||||
在 Linux 上,只需费一点点劲就能搭建出一个出色的工作环境。今天,我们([再次][1])着重来探讨长驻你脑海中的那种东西:一款自由开源、能够给你的截图增添光彩的工具。
|
||||
|
||||
它叫 **Live Wallpaper**(正如你猜的那样),它用一款由 OpenGL 驱动的动态桌面背景来代替标准的静态桌面背景。
|
||||
|
||||
最好的一点是:在ubuntu上安装它很容易。
|
||||
|
||||
### 动态壁纸主题 ###
|
||||
|
||||

|
||||
|
||||
Live Wallpaper 不是此类软件唯一的一款,但它是最好的一款之一。
|
||||
|
||||
它附带很多不同的开箱即用的主题。
|
||||
|
||||
从精细的(‘noise’)到狂热的(‘nexus’),包罗万象,甚至还有少不了的、受 Ubuntu Phone 欢迎屏幕启发的锁屏风格壁纸。
|
||||
|
||||
- Circles — 带着‘evolving circle’风格的时钟,灵感来自于Ubuntu Phone
|
||||
- Galaxy — 支持自定义大小,位置的旋转星系
|
||||
- Gradient Clock — 叠加在倾斜平面上的极坐标时钟
|
||||
- Nexus — 亮色粒子火花穿越屏幕
|
||||
- Noise — 类似于iOS动态壁纸的Bokeh设计
|
||||
- Photoslide — 由文件夹(默认为 ~/Photos)内照片构成的动态网格相册
|
||||
|
||||
Live Wallpaper **完全开源**,所以没有什么能够阻挡天马行空的艺术家们用诀窍(当然还有耐心)来创造他们自己的精美主题。
|
||||
|
||||
### 设置 & 特点 ###
|
||||
|
||||

|
||||
|
||||
虽然某些主题与其它主题相比有更多的选项,但每款主题都可以通过某些方式来配置或者定制。
|
||||
|
||||
例如,Nexus主题中 (上图所示) 你可以更改脉冲粒子的数量,颜色,大小和出现频率。
|
||||
|
||||
首选项提供了 **通用选项** 适用于所有主题,包括:
|
||||
|
||||
- 设置登录界面的动态壁纸
|
||||
- 自定义动画背景
|
||||
- 调节 FPS (包括在屏幕上显示FPS)
|
||||
- 指定多显示器的行为
|
||||
|
||||
有如此多的选项,为你自己的桌面背景做一番 DIY 是很容易的。
|
||||
|
||||
### 缺陷 ###
|
||||
|
||||
#### 没有桌面图标 ####
|
||||
|
||||
Live Wallpaper在运行时,你无法在桌面添加,打开或者是编辑文件和文件夹。
|
||||
|
||||
首选项程序里提供了一个(按说)可以让你这样做的选项。也许它只能在老版本中使用;在我们的测试中(测试环境为 Ubuntu 14.10),它并不起作用。但在测试中发现,当把桌面壁纸设置成 png 格式的图片文件时,这个选项有用。不需要是透明的 png 图片文件,只要是 png 图片文件就行了。
|
||||
|
||||
#### 资源占用 ####
|
||||
|
||||
动态壁纸与标准的壁纸相比要消耗更多的系统资源。
|
||||
|
||||
我们并不是说任何时候都会消耗大量资源,但至少在我们的测试中是这样,所以低配置机器和笔记本用户要谨慎使用这款软件。可以使用 [系统监视器][2] 来追踪CPU 和GPU的负载。
|
||||
|
||||
#### 退出程序 ####
|
||||
|
||||
对我来说最大的“bug”绝对是没有“退出”选项。
|
||||
|
||||
当然,动态壁纸可以通过托盘图标和首选项完全退出,那退出托盘图标呢?没办法。只能在终端执行命令‘pkill livewallpaper’。
|
||||
|
||||
### 怎么在 Ubuntu 14.04 LTS+ 上安装 Live Wallpaper ###
|
||||
|
||||

|
||||
|
||||
要想在Ubuntu 14.04 LTS 和更高版本中安装 Live Wallpaper,你首先需要把官方PPA添加进你的软件源。
|
||||
最快的方法是在终端中执行下列命令:
|
||||
|
||||
sudo add-apt-repository ppa:fyrmir/livewallpaper-daily
|
||||
|
||||
sudo apt-get update && sudo apt-get install livewallpaper
|
||||
|
||||
你还需要安装 indicator applet,这样可以方便快速的打开或是关闭动态壁纸,从菜单选择主题,另外图形配置工具可以让你基于你自己的口味来配置每款主题。
|
||||
|
||||
sudo apt-get install livewallpaper-config livewallpaper-indicator
|
||||
|
||||
所有都安装好之后你就可以通过Unity Dash来启动它和它的首选项工具了。
|
||||
|
||||

|
||||
|
||||
让人不爽的是,安装完成后,程序不会自动打开托盘图标,而仅仅将它自己加入自动启动项。所以,快速注销再登录,它就会出现啦。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
如果你正处在无聊呆板的桌面中,幻想有一个更有活力的生活,不妨试试。另外,告诉我们你想看到什么样的动态壁纸!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/05/animated-wallpaper-adds-live-backgrounds-to-linux-distros
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[Love-xuan](https://github.com/Love-xuan)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:http://www.omgubuntu.co.uk/2012/11/live-wallpaper-for-ubuntu
|
||||
[2]:http://www.omgubuntu.co.uk/2011/11/5-system-monitoring-tools-for-ubuntu
|
@ -0,0 +1,93 @@
|
||||
LFS中文版手册发布:如何打造自己的 Linux 发行版
|
||||
================================================================================
|
||||
您是否想过打造您自己的 Linux 发行版?每个 Linux 用户在他们使用 Linux 的过程中都想过做一个他们自己的发行版,至少一次。我也不例外,作为一个 Linux 菜鸟,我也考虑过开发一个自己的 Linux 发行版。从头开发一个 Linux 发行版这件事情被称作 Linux From Scratch (LFS)。
|
||||
|
||||
在开始之前,我总结了一些有关 LFS 的内容,如下:
|
||||
|
||||
**1. 那些想要打造他们自己的 Linux 发行版的人应该了解打造一个 Linux 发行版(打造意味着从头开始)与配置一个已有的 Linux 发行版的不同**
|
||||
|
||||
如果您只是想调整下启动屏幕、定制登录页面以及拥有更好的外观和使用体验。您可以选择任何一个 Linux 发行版并且按照您的喜好进行个性化配置。此外,有许多配置工具可以帮助您。
|
||||
|
||||
如果您想打包所有必须的文件、引导加载器和内核,并选择什么该被包括进来,然后依靠自己编译这一切东西。那么您需要的就是 Linux From Scratch (LFS)。
|
||||
|
||||
**注意**:如果您只想要定制 Linux 系统的外观和体验,这个指南并不适合您。但如果您真的想打造一个 Linux 发行版,并且想了解怎么开始以及一些其他的信息,那么这个指南正是为您而写。
|
||||
|
||||
**2. 打造一个 Linux 发行版(LFS)的好处**
|
||||
|
||||
- 您将了解 Linux 系统的内部工作机制
|
||||
- 您将开发一个灵活的适应您需求的系统
|
||||
- 您开发的系统(LFS)将会非常紧凑,因为您对该包含/不该包含什么拥有绝对的掌控
|
||||
- 您开发的系统(LFS)在安全性上会更好
|
||||
|
||||
**3. 打造一个Linux发行版(LFS)的坏处**
|
||||
|
||||
打造一个 Linux 系统意味着将所有需要的东西放在一起并且编译之。这需要许多查阅、耐心和时间。而且您需要一个可用的 Linux 系统和足够的磁盘空间来打造 LFS。
|
||||
|
||||
**4. 有趣的是,Gentoo/GNU Linux 在某种意义上最接近于 LFS。Gentoo 和 LFS 都是完全从源码编译的定制的 Linux 系统**
|
||||
|
||||
**5. 您应该是一个有经验的Linux用户,对编译包、解决依赖有相当的了解,并且是个 shell 脚本的专家。**
|
||||
|
||||
了解一门编程语言(最好是 C 语言)将会使事情变得容易些。但哪怕您是一个新手,只要您是一个优秀的学习者,可以很快的掌握知识,您也可以开始。最重要的是不要在 LFS 过程中丢失您的热情。
|
||||
|
||||
如果您不够坚定,恐怕会在 LFS 进行到一半时放弃。
|
||||
|
||||
**6. 现在您需要一步一步的指导来打造一个 Linux。LFS 手册是打造 LFS 的官方指南。我们的合作站点 tradepub 也为我们的读者制作了 LFS 的指南,这同样是免费的。**
|
||||
|
||||
您可以从下面的链接下载 Linux From Scratch 的电子书:
|
||||
|
||||
|
||||
|
||||
下载: [Linux From Scratch][1]
|
||||
|
||||
**7. 当前 LFS 的版本是 7.7,分为 systemd 版本和非 systemd 版本**
|
||||
|
||||
LFS 的官方网站是: http://www.linuxfromscratch.org/
|
||||
|
||||
您可以在官网在线浏览 LFS 以及类似 BLFS 这样的相关项目的手册,也可以下载不同格式的版本。
|
||||
|
||||
- LFS (非 systemd 版本):
|
||||
- PDF 版本: http://www.linuxfromscratch.org/lfs/downloads/stable/LFS-BOOK-7.7.pdf
|
||||
- 单一 HTML 版本: http://www.linuxfromscratch.org/lfs/downloads/stable/LFS-BOOK-7.7-NOCHUNKS.html
|
||||
- 打包的多页 HTML 版本: http://www.linuxfromscratch.org/lfs/downloads/stable/LFS-BOOK-7.7.tar.bz2
|
||||
- LFS (systemd 版本):
|
||||
- PDF 版本: http://www.linuxfromscratch.org/lfs/downloads/7.7-systemd/LFS-BOOK-7.7-systemd.pdf
|
||||
- 单一 HTML 版本: http://www.linuxfromscratch.org/lfs/downloads/7.7-systemd/LFS-BOOK-7.7-systemd-NOCHUNKS.html
|
||||
- 打包的多页 HTML 版本: http://www.linuxfromscratch.org/lfs/downloads/7.7-systemd/LFS-BOOK-7.7-systemd.tar.bz2
|
||||
|
||||
**8. Linux 中国/LCTT 翻译了一份 LFS 手册(7.7,systemd 版本)**
|
||||
|
||||
经过 LCTT 成员的努力,我们终于完成了对 LFS 7.7 systemd 版本手册的翻译。
|
||||
|
||||
手册在线访问地址:https://linux.cn/lfs/LFS-BOOK-7.7-systemd/index.html 。
|
||||
|
||||
其它格式的版本稍后推出。
|
||||
|
||||
感谢参与翻译的成员: wxy, ictlyh, dongfengweixiao, zpl1025, H-mudcup, Yuking-net, kevinSJ 。
|
||||
|
||||
|
||||
### 关于:Linux From Scratch ###
|
||||
|
||||
这本手册是由 LFS 的项目领头人 Gerard Beekmans 创作的,Matthew Burgess 和 Bruce Dubbs 参与编辑,两人都是 LFS 项目的联合领导人。这本书内容很广泛,有 338 页之多。
|
||||
|
||||
手册中内容包括:介绍 LFS、准备构建、构建 LFS、建立启动脚本、使 LFS 可以引导,以及附录。其中涵盖了您想知道的 LFS 项目中的所有东西。
|
||||
|
||||
这本手册还给出了编译各个软件包的预估时间,这些预估时间以编译第一个包所花的时间为基准单位。例如(数字仅为示意):假设在您的机器上编译第一个包用了 3 分钟,那么一个标注为 4 个单位的包大约需要 12 分钟。所有的东西都以易于理解的方式呈现,甚至对于新手来说也是这样。
|
||||
|
||||
如果您有充裕的时间并且真正对构建自己的 Linux 发行版感兴趣,那么您绝对不会错过下载这个电子书(免费下载)的机会。您需要做的,便是照着这本手册在一个工作的 Linux 系统(任何 Linux 发行版,足够的磁盘空间即可)中开始构建您自己的 Linux 系统,付出时间和热情。
|
||||
|
||||
如果 Linux 使您着迷,如果您想自己动手构建一个自己的 Linux 发行版,这便是现阶段您应该知道的全部了,其他的信息您可以参考上面链接的手册中的内容。
|
||||
|
||||
请让我了解您阅读/使用这本手册的经历,这本详尽的 LFS 指南的使用是否足够简单?如果您已经构建了一个 LFS 并且想给我们的读者一些建议,欢迎留言和反馈。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/create-custom-linux-distribution-from-scratch/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[wwy-hust](https://github.com/wwy-hust)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://tecmint.tradepub.com/free/w_linu01/prgm.cgi
|
@ -1,6 +1,7 @@
|
||||
在Ubuntu 15.04中安装RUby on Rails
|
||||
在Ubuntu 15.04中安装Ruby on Rails
|
||||
================================================================================
|
||||
本篇我们会学习如何用rbenv在Ubuntu 15.04中安装Ruby on Rails。我们选择Ubuntu作为操作系统因为Ubuntu是Linux发行版中自带很多包和完整文档的操作系统,因此我认为这是正确的选择。如果你不想安装最新的Ubuntu,你可以从[下载iso文件][1]开始。
|
||||
|
||||
本篇我们会学习如何用rbenv在Ubuntu 15.04中安装Ruby on Rails。我们选择Ubuntu作为操作系统是因为Ubuntu是Linux发行版中自带很多包和完整文档的操作系统,因此我认为这是正确的选择。如果你还没有安装最新的Ubuntu,你可以从[下载iso文件][1]开始。
|
||||
|
||||
### 安装 Ruby ###
|
||||
|
||||
@ -9,9 +10,9 @@
|
||||
sudo apt-get update
|
||||
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
|
||||
|
||||
有三种方法来安装Ruby比如rbenv,rvm和从源码安装。每种都有各自的好处,但是这些天开发者们更倾向使用rbenv而不是rvm和源码来安装。我们将安装最新的Ruby版本,2.2.2。
|
||||
有三种方法来安装Ruby:rbenv、rvm和从源码安装。每种都有各自的好处,但是近来开发者们更倾向使用rbenv而不是rvm和源码来安装。我们将安装最新的Ruby版本,2.2.2。
|
||||
|
||||
用rbenv来安装只有简单的两步。第一步安装rbenv接着是ruby-build:
|
||||
用rbenv来安装只有简单的两步。第一步安装rbenv,接着是ruby-build:
|
||||
|
||||
cd
|
||||
git clone git://github.com/sstephenson/rbenv.git .rbenv
|
||||
@ -28,23 +29,23 @@
|
||||
rbenv global 2.2.2
|
||||
ruby -v
|
||||
|
||||
我们需要安装Bundler但是我们要在安装之前告诉rubygems不要为每个包本地安装文档。
|
||||
我们需要安装Bundler,但是我们要在安装之前告诉rubygems不要为每个包安装本地文档。
|
||||
|
||||
echo "gem: --no-ri --no-rdoc" > ~/.gemrc
|
||||
gem install bundler
|
||||
|
||||
### 配置 GIT ###
|
||||
|
||||
配置git之前,你要创建一个github账号,你可以注册[git][2]。我们需要git作为版本控制系统,因此我们要设置来匹配github账号。
|
||||
配置git之前,你要创建一个github账号,你可以注册一个[github 账号][2]。我们需要git作为版本控制系统,因此我们要设置它来匹配github账号。
|
||||
|
||||
用户的github账号来代替下面的**Name** 和 **Email address** 。
|
||||
用户的github账号来替换下面的**Name** 和 **Email address** 。
|
||||
|
||||
git config --global color.ui true
|
||||
git config --global user.name "YOUR NAME"
|
||||
git config --global user.email "YOUR@EMAIL.com"
|
||||
ssh-keygen -t rsa -C "YOUR@EMAIL.com"
|
||||
|
||||
接下来用新生成的ssh key添加到github账号中。这样你需要复制下面命令的输出并[粘贴在这][3]。
|
||||
接下来用新生成的ssh key添加到github账号中。这样你需要复制下面命令的输出并[粘贴在Github的设置页面里面][3]。
|
||||
|
||||
cat ~/.ssh/id_rsa.pub
|
||||
|
||||
@ -58,7 +59,7 @@
|
||||
|
||||
### 安装 Rails ###
|
||||
|
||||
我们需要安装javascript运行时,像NodeJS因为这些天Rails带来很多依赖。这样我们可以结合并缩小你的javascript来提供一个更快的生产环境。
|
||||
我们需要安装像NodeJS这样的javascript运行时环境,因为近来Rails的依赖越来越多了。这样我们可以合并和压缩你的javascript,从而提供一个更快的生产环境。
|
||||
|
||||
我们需要添加PPA来安装nodeJS。
|
||||
|
||||
@ -66,7 +67,7 @@
|
||||
sudo apt-get update
|
||||
sudo apt-get install nodejs
|
||||
|
||||
如果在更新是晕倒了问题,你可以试试这个命令:
|
||||
如果在更新时遇到了问题,你可以试试这个命令:
|
||||
|
||||
# Note the new setup script name for Node.js v0.12
|
||||
curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
|
||||
@ -74,15 +75,15 @@
|
||||
# Then install with:
|
||||
sudo apt-get install -y nodejs
|
||||
|
||||
下一步,用这个命令:
|
||||
下一步,用这个命令安装 rails:
|
||||
|
||||
gem install rails -v 4.2.1
|
||||
|
||||
因为我们正在使用rbenv,用下面的命令来安装rails。
|
||||
因为我们正在使用rbenv,用下面的命令来让rails的可执行程序可用。
|
||||
|
||||
rbenv rehash
|
||||
|
||||
要确保rails已经正确安炸u哪个,你可以运行rails -v,显示如下:
|
||||
要确保rails已经正确安装,你可以运行rails -v,显示如下:
|
||||
|
||||
rails -v
|
||||
# Rails 4.2.1
|
||||
@ -91,25 +92,25 @@
|
||||
|
||||
### 设置 MySQL ###
|
||||
|
||||
或许你已经熟悉MySQL了,你可以从Ubuntu的仓库中安装MySQL的客户端与服务端。你可以在安装时设置root用户密码。这个信息将来会进入你rails程序的database.yml文件中、用下面的命令来安装mysql。
|
||||
或许你已经熟悉MySQL了,你可以从Ubuntu的仓库中安装MySQL的客户端与服务端。你可以在安装时设置root用户密码。这个信息将来会进入你rails程序的database.yml文件中。用下面的命令来安装mysql。
|
||||
|
||||
sudo apt-get install mysql-server mysql-client libmysqlclient-dev
|
||||
|
||||
安装libmysqlclient-dev用于提供在设置rails程序时,rails在连接mysql所需要用到的用于编译mysql2 gem的文件。
|
||||
安装libmysqlclient-dev用于mysql2 gem的编译;在设置rails程序时,rails通过它来连接mysql。
|
||||
|
||||
### 最后一步 ###
|
||||
|
||||
让我们尝试创建你的第一个rails程序:
|
||||
|
||||
# Use MySQL
|
||||
# 使用 MySQL 数据库
|
||||
|
||||
rails new myapp -d mysql
|
||||
|
||||
# Move into the application directory
|
||||
# 进入到应用目录
|
||||
|
||||
cd myapp
|
||||
|
||||
# Create Database
|
||||
# 创建数据库
|
||||
|
||||
rake db:create
|
||||
|
||||
@ -125,7 +126,7 @@
|
||||
|
||||
nano config/database.yml
|
||||
|
||||
接着输入MySql root用户的密码。
|
||||
接着填入MySql root用户的密码。
|
||||
|
||||

|
||||
|
||||
@ -133,7 +134,7 @@
|
||||
|
||||
### 总结 ###
|
||||
|
||||
Rails是用Ruby写的, 也就是随着rails一起使用的编程语言。在Ubuntu 15.04中Ruby on Rails可以用rbenv、 rvm和源码的方式来安装。本篇我们使用的是rbenv方式并用了MySQL作为数据库。有任何的问题或建议,请在评论栏指出。
|
||||
Rails是用Ruby写的, 也是随着rails一起使用的编程语言。在Ubuntu 15.04中Ruby on Rails可以用rbenv、 rvm和源码的方式来安装。本篇我们使用的是rbenv方式并用了MySQL作为数据库。有任何的问题或建议,请在评论栏指出。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -141,7 +142,7 @@ via: http://linoxide.com/ubuntu-how-to/installing-ruby-rails-using-rbenv-ubuntu-
|
||||
|
||||
作者:[Obet][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,28 +1,29 @@
|
||||
如何在Fedora 22上面配置Apache的Docker容器
|
||||
=============================================================================
|
||||
在这篇文章中,我们将会学习关于Docker的一些知识,如何使用Docker部署Apache httpd服务,并且共享到Docker Hub上面去。首先,我们学习怎样拉取和使用Docker Hub里面的镜像,然后交互式地安装Apache到一个Fedora 22的镜像里去,之后我们将会学习如何用一个Dockerfile文件来制作一个镜像,以一种更快,更优雅的方式。最后,我们会在Docker Hub上公开我们创建地镜像,这样以后任何人都可以下载并使用它。
|
||||
|
||||
### 安装Docker,运行hello world ###
|
||||
在这篇文章中,我们将会学习关于Docker的一些知识,如何使用Docker部署Apache httpd服务,并且共享到Docker Hub上面去。首先,我们学习怎样拉取和使用Docker Hub里面的镜像,然后在一个Fedora 22的镜像上交互式地安装Apache,之后我们将会学习如何用一个Dockerfile文件来以一种更快,更优雅的方式制作一个镜像。最后,我们将我们创建的镜像发布到Docker Hub上,这样以后任何人都可以下载并使用它。
|
||||
|
||||
### 安装并初体验Docker ###
|
||||
|
||||
**要求**
|
||||
|
||||
运行Docker,里至少需要满足这些:
|
||||
运行Docker,你至少需要满足这些:
|
||||
|
||||
- 你需要一个64位的内核,版本3.10或者更高
|
||||
- Iptables 1.4 - Docker会用来做网络配置,如网络地址转换(NAT)
|
||||
- Iptables 1.4 - Docker会用它来做网络配置,如网络地址转换(NAT)
|
||||
- Git 1.7 - Docker会使用Git来与仓库交流,如Docker Hub
|
||||
- ps - 在大多数环境中这个工具都存在,在procps包里有提供
|
||||
- root - 防止一般用户可以通过TCP或者其他方式运行Docker,为了简化,我们会假定你就是root
|
||||
- root - 尽管一般用户可以通过TCP或者其他方式来运行Docker,但是为了简化,我们会假定你就是root
|
||||
|
||||
### 使用dnf安装docker ###
|
||||
#### 使用dnf安装docker ####
|
||||
|
||||
以下的命令会安装Docker
|
||||
|
||||
dnf update && dnf install docker
|
||||
|
||||
**注意**:在Fedora 22里,你仍然可以使用Yum命令,但是被DNF取代了,而且在纯净安装时不可用了。
|
||||
**注意**:在Fedora 22里,你仍然可以使用Yum命令,但是它被DNF取代了,而且在纯净安装时不可用了。
|
||||
|
||||
### 检查安装 ###
|
||||
#### 检查安装 ####
|
||||
|
||||
我们将要使用的第一个命令是docker info,这会输出很多信息给你:
|
||||
|
||||
@ -32,25 +33,29 @@
|
||||
|
||||
docker version
|
||||
|
||||
### 启动Dcoker为守护进程 ###
|
||||
#### 以守护进程方式启动 Docker ####
|
||||
|
||||
你应该启动一个 docker 实例,它会处理我们的请求。
|
||||
|
||||
docker -d
|
||||
|
||||
现在我们设置 docker 随系统启动,以便我们不需要每次重启都需要运行上述命令。
|
||||
|
||||
chkconfig docker on
|
||||
|
||||
让我们用Busybox来打印hello world:
|
||||
|
||||
docker run -t busybox /bin/echo "hello world"
|
||||
|
||||
这个命令里,我们告诉Docker执行 /bin/echo "hello world",在Busybox镜像的一个实例/容器里。Busybox是一个小型的POSIX环境,将许多小工具都结合到了一个单独的可执行程序里。
|
||||
这个命令里,我们告诉Docker在Busybox镜像的一个实例/容器里执行 /bin/echo "hello world"。Busybox是一个小型的POSIX环境,将许多小工具都结合到了一个单独的可执行程序里。
|
||||
|
||||
如果Docker不能在你的系统里找到本地的Busybox镜像,它就会自动从Docker Hub里拉取镜像,正如下面的截图所示:
|
||||
|
||||

|
||||
|
||||
Hello world with Busybox
|
||||
*Hello world with Busybox*
|
||||
|
||||
再次尝试相同的命令,这次由于Docker已经有了本地的Busybox镜像,所有你将会看到的就是echo的输出:
|
||||
再次尝试相同的命令,这次由于Docker已经有了本地的Busybox镜像,你将会看到的全部就是echo的输出:
|
||||
|
||||
docker run -t busybox /bin/echo "hello world"
|
||||
|
||||
@ -66,31 +71,31 @@ Hello world with Busybox
|
||||
|
||||
docker pull fedora:22
|
||||
|
||||
起一个容器在后台运行:
|
||||
启动一个容器在后台运行:
|
||||
|
||||
docker run -d -t fedora:22 /bin/bash
|
||||
|
||||
列出正在运行地容器,并用名字标识,如下
|
||||
列出正在运行的容器及其名字标识,如下
|
||||
|
||||
docker ps
|
||||
|
||||

|
||||
|
||||
使用docker ps列出,并使用docker attach进入一个容器里
|
||||
*使用docker ps列出,并使用docker attach进入一个容器里*
|
||||
|
||||
angry_noble是docker分配给我们容器的名字,所以我们来附上去:
|
||||
angry_noble是docker分配给我们容器的名字,所以我们来连接上去:
|
||||
|
||||
docker attach angry_noble
|
||||
|
||||
注意:每次你起一个容器,就会被给与一个新的名字,如果你的容器需要一个固定的名字,你应该在 docker run 命令里使用 -name 参数。
|
||||
注意:每次你启动一个容器,就会被赋予一个新的名字,如果你的容器需要一个固定的名字,你应该在 docker run 命令里使用 --name 参数。
|
||||
|
||||
### 安装Apache ###
|
||||
#### 安装Apache ####
|
||||
|
||||
下面的命令会更新DNF的数据库,下载安装Apache(httpd包)并清理dnf缓存使镜像尽量小
|
||||
|
||||
dnf -y update && dnf -y install httpd && dnf -y clean all
|
||||
|
||||
配置Apache
|
||||
**配置Apache**
|
||||
|
||||
我们需要修改httpd.conf的唯一地方就是ServerName,这会使Apache停止抱怨
|
||||
|
||||
@ -98,7 +103,7 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
|
||||
|
||||
**设定环境**
|
||||
|
||||
为了使Apache运行为单机模式,你必须以环境变量的格式提供一些信息,并且你也需要在这些变量里的目录设定,所以我们将会用一个小的shell脚本干这个工作,当然也会启动Apache
|
||||
为了使Apache运行为独立模式,你必须以环境变量的格式提供一些信息,并且你也需要创建这些变量里的目录,所以我们将会用一个小的shell脚本干这个工作,当然也会启动Apache
|
||||
|
||||
vi /etc/httpd/run_apache_foreground
|
||||
|
||||
@ -106,7 +111,7 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
#set variables
|
||||
#设置环境变量
|
||||
APACHE_LOG_DIR="/var/log/httpd"
|
||||
APACHE_LOCK_DIR="/var/lock/httpd"
|
||||
APACHE_RUN_USER="apache"
|
||||
@ -114,12 +119,12 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
|
||||
APACHE_PID_FILE="/var/run/httpd/httpd.pid"
|
||||
APACHE_RUN_DIR="/var/run/httpd"
|
||||
|
||||
#create directories if necessary
|
||||
#如果需要的话,创建目录
|
||||
if ! [ -d /var/run/httpd ]; then mkdir /var/run/httpd;fi
|
||||
if ! [ -d /var/log/httpd ]; then mkdir /var/log/httpd;fi
|
||||
if ! [ -d /var/lock/httpd ]; then mkdir /var/lock/httpd;fi
|
||||
|
||||
#run Apache
|
||||
#运行 Apache
|
||||
httpd -D FOREGROUND
|
||||
|
||||
**或者**,你也可以把这段代码粘贴到容器的 shell 里并运行:
|
||||
@ -130,11 +135,11 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
|
||||
|
||||
**保存你的容器状态**
|
||||
|
||||
你的容器现在可以运行Apache,是时候保存容器当前的状态为一个镜像,以备你需要的时候使用。
|
||||
你的容器现在准备好运行Apache,是时候保存容器当前的状态为一个镜像,以备你需要的时候使用。
|
||||
|
||||
为了离开容器环境,你必须顺序按下 **Ctrl+q** 和 **Ctrl+p**,如果你仅仅在shell执行exit,你同时也会停止容器,失去目前为止你做过的所有工作。
|
||||
|
||||
回到Docker主机,使用 **docker commit** 加容器和你期望的仓库名字/标签:
|
||||
回到Docker主机,使用 **docker commit** 及容器名和你想要的仓库名字/标签:
|
||||
|
||||
docker commit angry_noble gaiada/apache
|
||||
|
||||
@ -144,17 +149,15 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
|
||||
|
||||
**运行并测试你的镜像**
|
||||
|
||||
最后,从你的新镜像起一个容器,并且重定向80端口到容器:
|
||||
最后,从你的新镜像启动一个容器,并且重定向80端口到该容器:
|
||||
|
||||
docker run -p 80:80 -d -t gaiada/apache /etc/httpd/run_apache_foreground
|
||||
|
||||
|
||||
|
||||
现在,Apache 已经运行在你的容器里了。打开你的浏览器访问 [http://localhost][2],你将会看到如下的 Apache 默认页面
|
||||
|
||||

|
||||
|
||||
在容器里运行的Apache默认页面
|
||||
*在容器里运行的Apache默认页面*
|
||||
|
||||
### 使用Dockerfile Docker化Apache ###
|
||||
|
||||
@ -190,21 +193,14 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
|
||||
|
||||
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
|
||||
|
||||
|
||||
|
||||
我们一起来看看Dockerfile里面有什么:
|
||||
|
||||
**FROM** - 这告诉docker,我们将要使用Fedora 22作为基础镜像
|
||||
|
||||
**MAINTAINER** 和 **LABLE** - 这些命令对镜像没有直接作用,属于标记信息
|
||||
|
||||
**RUN** - 自动完成我们之前交互式做的工作,安装Apache,新建目录并编辑httpd.conf
|
||||
|
||||
**ENV** - 设置环境变量,现在我们再不需要run_apache_foreground脚本
|
||||
|
||||
**EXPOSE** - 暴露80端口给外网
|
||||
|
||||
**CMD** - 设置默认的命令启动httpd服务,这样我们就不需要每次起一个新的容器都重复这个工作
|
||||
- **FROM** - 这告诉docker,我们将要使用Fedora 22作为基础镜像
|
||||
- **MAINTAINER** 和 **LABEL** - 这些命令对镜像没有直接作用,属于标记信息
|
||||
- **RUN** - 自动完成我们之前交互式做的工作,安装Apache,新建目录并编辑httpd.conf
|
||||
- **ENV** - 设置环境变量,现在我们再不需要run_apache_foreground脚本
|
||||
- **EXPOSE** - 暴露80端口给外网
|
||||
- **CMD** - 设置默认的命令启动httpd服务,这样我们就不需要每次起一个新的容器都重复这个工作
|
||||
|
||||
**建立该镜像**
|
||||
|
||||
@ -214,7 +210,7 @@ angry_noble是docker分配给我们容器的名字,所以我们来附上去:
|
||||
|
||||

|
||||
|
||||
docker完成创建
|
||||
*docker完成创建*
|
||||
|
||||
使用 **docker images** 列出本地镜像,查看是否存在你新建的镜像:
|
||||
|
||||
@ -226,7 +222,7 @@ docker完成创建
|
||||
|
||||
这就是Dockerfile的工作,使用这项功能会使得事情更加容易,快速并且可重复生成。
|
||||
|
||||
### 公开你的镜像 ###
|
||||
### 发布你的镜像 ###
|
||||
|
||||
直到现在,你仅仅是从Docker Hub拉取了镜像,但是你也可以推送你的镜像,以后需要也可以再次拉取他们。实际上,其他人也可以下载你的镜像,在他们的系统中使用它而不需要改变任何东西。现在我们将要学习如何使我们的镜像对世界上的其他人可用。
|
||||
|
||||
@ -236,7 +232,7 @@ docker完成创建
|
||||
|
||||

|
||||
|
||||
Docker Hub 注册页面
|
||||
*Docker Hub 注册页面*
|
||||
|
||||
**登录**
|
||||
|
||||
@ -256,11 +252,11 @@ Docker Hub 注册页面
|
||||
|
||||

|
||||
|
||||
Docker推送Apache镜像完成
|
||||
*Docker推送Apache镜像完成*
|
||||
|
||||
### 结论 ###
|
||||
|
||||
现在,你知道如何Docker化Apache,试一试包含其他一些组块,Perl,PHP,proxy,HTTPS,或者任何你需要的东西。我希望你们这些家伙喜欢她,并推送你们自己的镜像到Docker Hub。
|
||||
现在,你知道如何 Docker 化 Apache 了,试一试加入其他一些组件吧,Perl、PHP、proxy、HTTPS,或者任何你需要的东西。我希望你们喜欢它,并推送你们自己的镜像到 Docker Hub。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -268,7 +264,7 @@ via: http://linoxide.com/linux-how-to/configure-apache-containers-docker-fedora-
|
||||
|
||||
作者:[Carlos Alberto][a]
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,23 +1,22 @@
|
||||
Linux_Logo – 输出彩色 ANSI Linux 发行版徽标的命令行工具
|
||||
================================================================================
|
||||
linuxlogo 或 linux_logo 是一款在Linux命令行下生成附带系统信息的彩色 ANSI 发行版徽标的工具。
|
||||
|
||||
linuxlogo(或叫 linux_logo)是一款在Linux命令行下用彩色 ANSI 代码生成附带有系统信息的发行版徽标的工具。
|
||||
|
||||

|
||||
|
||||
Linux_Logo – 输出彩色 ANSI Linux 发行版徽标
|
||||
*Linux_Logo – 输出彩色 ANSI Linux 发行版徽标*
|
||||
|
||||
这个小工具可以从 /proc 文件系统中获取系统信息并可以显示包括主机发行版在内的其他很多发行版的徽标。
|
||||
这个小工具可以从 /proc 文件系统中获取系统信息并可以显示包括主机上安装的发行版在内的很多发行版的徽标。
|
||||
|
||||
与徽标一同显示的系统信息包括 – Linux 内核版本,最近一次编译Linux内核的时间,处理器/核心数量,速度,制造商,以及哪一代处理器。它还能显示总共的物理内存大小。
|
||||
与徽标一同显示的系统信息包括 : Linux 内核版本,最近一次编译Linux内核的时间,处理器/核心数量,速度,制造商,以及哪一代处理器。它还能显示总共的物理内存大小。
|
||||
|
||||
值得一提的是,screenfetch是一个拥有类似功能的工具,它也能显示发行版徽标,同时还提供更加详细美观的系统信息。我们之前已经介绍过这个工具,你可以参考一下链接:
|
||||
- [ScreenFetch – Generates Linux System Information][1]
|
||||
无独有偶,screenfetch是一个拥有类似功能的工具,它也能显示发行版徽标,同时还提供更加详细美观的系统信息。我们之前已经介绍过这个工具,你可以参考一下链接:
|
||||
|
||||
- [screenFetch: 命令行信息截图工具][1]
|
||||
|
||||
linux_logo 和 Screenfetch 并不能相提并论。尽管 screenfetch 的输出较为整洁并提供更多细节, linux_logo 则提供了更多的彩色 ANSI 图标, 并且提供了格式化输出的选项。
|
||||
linux\_logo 和 Screenfetch 并不完全一样。尽管 screenfetch 的输出较为整洁并提供更多细节, 但 linux\_logo 则提供了更多的彩色 ANSI 图标, 并且提供了格式化输出的选项。
|
||||
|
||||
linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统中因此需要安装图形界面 X11 或 X 系统。这个软件使用GNU 2.0协议。
|
||||
linux\_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统中因此需要安装图形界面 X11 或 X 系统(LCTT 译注:此处应是错误的。按说不需要任何图形界面支持,并且译者从其官方站 http://www.deater.net/weave/vmwprod/linux_logo 也没找到任何相关 X11的信息)。这个软件使用GNU 2.0协议。
|
||||
|
||||
本文中,我们将使用以下环境测试 linux_logo 工具。
|
||||
|
||||
@ -26,7 +25,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
|
||||
|
||||
### 在 Linux 中安装 Linux Logo工具 ###
|
||||
|
||||
**1. linuxlogo软件包 ( 5.11 稳定版) 可通过如下方式使用 apt, yum,或 dnf 在所有发行版中使用默认的软件仓库进行安装**
|
||||
**1. linuxlogo软件包 ( 5.11 稳定版) 可通过如下方式使用 apt, yum 或 dnf 在所有发行版中使用默认的软件仓库进行安装**
|
||||
|
||||
# apt-get install linux_logo [用于基于 Apt 的系统] (译者注:Ubuntu中,该软件包名为linuxlogo)
|
||||
# yum install linux_logo [用于基于 Yum 的系统]
|
||||
@ -42,7 +41,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
|
||||
|
||||

|
||||
|
||||
获取默认系统徽标
|
||||
*获取默认系统徽标*
|
||||
|
||||
**3. 使用 `[-a]` 选项可以输出没有颜色的徽标。当在黑白终端里使用 linux_logo 时,这个选项会很有用。**
|
||||
|
||||
@ -50,7 +49,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
|
||||
|
||||

|
||||
|
||||
黑白 Linux 徽标
|
||||
*黑白 Linux 徽标*
|
||||
|
||||
**4. 使用 `[-l]` 选项可以仅输出徽标而不包含系统信息。**
|
||||
|
||||
@ -58,7 +57,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
|
||||
|
||||

|
||||
|
||||
输出发行版徽标
|
||||
*输出发行版徽标*
|
||||
|
||||
**5. `[-u]` 选项可以显示系统运行时间。**
|
||||
|
||||
@ -66,7 +65,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
|
||||
|
||||

|
||||
|
||||
输出系统运行时间
|
||||
*输出系统运行时间*
|
||||
|
||||
**6. 如果你对系统平均负载感兴趣,可以使用 `[-y]` 选项。你可以同时使用多个选项。**
|
||||
|
||||
@ -74,7 +73,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
|
||||
|
||||

|
||||
|
||||
输出系统平均负载
|
||||
*输出系统平均负载*
|
||||
|
||||
如需查看更多选项并获取相关帮助,你可以使用如下命令。
|
||||
|
||||
@ -82,7 +81,7 @@ linux_logo 主要使用C语言编写并将 linux 徽标呈现在 X 窗口系统
|
||||
|
||||

|
||||
|
||||
Linuxlogo选项及帮助
|
||||
*Linuxlogo选项及帮助*
|
||||
|
||||
**7. 此工具内置了很多不同发行版的徽标。你可以使用 `[-L list]` 选项查看在这些徽标的列表。**
|
||||
|
||||
@ -90,7 +89,7 @@ Linuxlogo选项及帮助
|
||||
|
||||

|
||||
|
||||
Linux 徽标列表
|
||||
*Linux 徽标列表*
|
||||
|
||||
如果你想输出这个列表中的任意徽标,可以使用 `-L NUM` 或 `-L NAME` 来显示想要选中的图标。
|
||||
|
||||
@ -105,7 +104,7 @@ Linux 徽标列表
|
||||
|
||||

|
||||
|
||||
输出 AIX 图标
|
||||
*输出 AIX 图标*
|
||||
|
||||
**注**: 命令中的使用 `-L 1` 是因为 AIX 徽标在列表中的编号是1,而使用 `-L aix` 则是因为 AIX 徽标在列表中的名称为 aix
|
||||
|
||||
@ -116,13 +115,13 @@ Linux 徽标列表
|
||||
|
||||

|
||||
|
||||
各种 Linux 徽标
|
||||
*各种 Linux 徽标*
|
||||
|
||||
你可以通过徽标对应的编号或名字使用任意徽标
|
||||
你可以通过徽标对应的编号或名字使用任意徽标。
|
||||
|
||||
### 一些使用 Linux_logo 的建议和提示###
|
||||
|
||||
**8. 你可以在登录界面输出你的 Linux 发行版徽标。要输出默认徽标,你可以在 ` ~/.bashrc`` 文件的最后添加以下内容。**
|
||||
**8. 你可以在登录界面输出你的 Linux 发行版徽标。要输出默认徽标,你可以在 ` ~/.bashrc` 文件的最后添加以下内容。**
|
||||
|
||||
if [ -f /usr/bin/linux_logo ]; then linux_logo; fi
|
||||
|
||||
@ -132,15 +131,15 @@ Linux 徽标列表
|
||||
|
||||

|
||||
|
||||
在用户登录时输出徽标
|
||||
*在用户登录时输出徽标*
|
||||
|
||||
其实你也可以在登录后输出任意图标,只需加入以下内容
|
||||
其实你也可以在登录后输出任意图标,只需加入以下内容:
|
||||
|
||||
if [ -f /usr/bin/linux_logo ]; then linux_logo -L num; fi
|
||||
|
||||
**重要**: 不要忘了将 num 替换成你想使用的图标。
|
||||
|
||||
**10. You can also print your own logo by simply specifying the location of the logo as shown below.**
|
||||
**10. 你也能直接指定徽标所在的位置来显示你自己的徽标。**
|
||||
|
||||
# linux_logo -D /path/to/ASCII/logo
|
||||
|
||||
@ -152,12 +151,11 @@ Linux 徽标列表
|
||||
|
||||
# /usr/local/bin/linux_logo -a > /etc/issue.net
|
||||
|
||||
**12. 创建一个 Penguin 端口 - 用于回应连接的端口。要创建 Penguin 端口, 则需在 /etc/services 文件中加入以下内容 **
|
||||
**12. 创建一个在被连接时回应的“徽标端口”。要创建这样的端口,则需在 /etc/services 文件中加入以下内容**
|
||||
|
||||
penguin 4444/tcp penguin
|
||||
|
||||
这里的 `4444` 是一个未被任何其他资源使用的空闲端口。你也可以使用其他端口。
|
||||
你还需要在 /etc/inetd.conf中加入以下内容
|
||||
这里的 `4444` 是一个未被任何其他资源使用的空闲端口。你也可以使用其他端口。你还需要在 /etc/inetd.conf中加入以下内容:
|
||||
|
||||
penguin stream tcp nowait root /usr/local/bin/linux_logo
|
||||
|
||||
@ -165,6 +163,8 @@ Linux 徽标列表
|
||||
|
||||
# killall -HUP inetd
|
||||
|
||||
(LCTT 译注:然后你就可以远程或本地连接到这个端口,并显示这个徽标了。)
|
||||
|
||||
linux_logo 还可以用作启动脚本来愚弄攻击者,或者用来捉弄你的朋友。这是一个我经常在我的脚本中用来获取不同发行版输出的好工具。
|
||||
|
||||
试过一次后,你就不会忘记的。让我们知道你对这个工具的想法及它对你的作用吧。 不要忘记给评论、点赞或分享!
|
||||
@ -174,10 +174,10 @@ linux_logo 还可以用做启动脚本来愚弄攻击者或对你朋友使用恶
|
||||
via: http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[KevSJ](https://github.com/KevSJ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[KevinSJ](https://github.com/KevinSJ)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
|
||||
[1]:https://linux.cn/article-1947-1.html
|
@ -1,16 +1,16 @@
|
||||
XBMC:自制遥控
|
||||
为 Kodi 自制遥控器
|
||||
================================================================================
|
||||
**通过运行在 Android 手机上的自制遥控器来控制你的家庭媒体播放器。**
|
||||
|
||||
**XBMC** 一款很优秀的软件,能够将几乎所有电脑变身成媒体中心。它可以播放音乐和视频,显示图片,甚至还能显示天气预报。为了在配置成家庭影院后方便使用,你可以通过手机 app 访问运行在已连接到 Wi-Fi 的 XBMC 机器上的服务来控制它。可以找到很多这种工具,几乎覆盖所有智能手机系统。
|
||||
**Kodi** 是一款很优秀的软件,能够将几乎所有电脑变身成媒体中心。它可以播放音乐和视频,显示图片,甚至还能显示天气预报。为了在配置成家庭影院后方便使用,你可以通过手机 app 访问运行在连接到 Wi-Fi 的 XBMC 机器上的服务来控制它。可以找到很多这种工具,几乎覆盖所有智能手机系统。
|
||||
|
||||
> ### Kodi ###
|
||||
> **XBMC**
|
||||
>
|
||||
> 在你阅读这篇文章的时候,**XBMC** 可能已经成为历史。因为法律原因(因为名字 **XBMC** 或 X**-Box Media Center** 里引用了不再支持的过时硬件)项目组决定使用新的名字 **Kodi**。不过,除了名字,其他的都会保持原样。或者说除开通常新版本中所期待的大量新改进。这一般不会影响到遥控软件,它应该能在已有的 **XBMC** 系统和新的 Kodi 系统上都能工作。
|
||||
> Kodi 原名叫做 XBMC,在你阅读这篇文章的时候,**XBMC** 已经成为历史。因为法律原因(因为名字 **XBMC** 或 X**-Box Media Center** 里引用了不再支持的过时硬件)项目组决定使用新的名字 **Kodi**。不过,除了名字,其他的都会保持原样。或者说除开通常新版本中所期待的大量新改进。这一般不会影响到遥控软件,它应该能在已有的 **XBMC** 系统和新的 Kodi 系统上都能工作。
|
||||
|
||||
我们目前配置了一个 **XBMC** 系统用于播放音乐,不过我们找到的所有 XBMC 遥控没一个好用的,特别是和媒体中心连接的电视没打开的时候。它们都有点太复杂了,集成了太多功能在手机的小屏幕上。我们希望能有这样的系统,从最开始就是设计成只用于访问音乐库和电台插件,所以我们决定自己实现一个。它不需要用到 XBMC 的所有功能,因为除了音乐以外的任务,我们可以简单地切换使用通用的 XBMC 遥控。我们的测试系统是一个刷了 RaspBMC 发行版的树莓派,但是我们要做的工具并不受限于树莓派或那个发行版,它应该可以匹配任何安装了相关插件的基于 Linux 的 XBMC 系统。
|
||||
我们目前已经配置好了一个用于播放音乐的 **Kodi** 系统,不过我们找到的所有 Kodi 遥控没一个好用的,特别是和媒体中心连接的电视没打开的时候。它们都有点太复杂了,集成了太多功能在手机的小屏幕上。我们希望能有这样的系统,从最开始就是设计成只用于访问音乐库和电台插件,所以我们决定自己实现一个。它不需要用到 Kodi 的所有功能,因为除了音乐以外的任务,我们可以简单地切换使用通用的 Kodi 遥控。我们的测试系统是一个刷了 RaspBMC 发行版的树莓派,但是我们要做的工具并不受限于树莓派或Kodi那个发行版,它应该可以匹配任何安装了相关插件的基于 Linux 的 Kodi 系统。
|
||||
|
||||
首先,遥控程序需要一个用户界面。大多数 XBMC 遥控程序都是独立的 app。不过对于我们要做的这个音乐控制程序,我们希望用户可以不用安装任何东西就可以使用。显然我们需要使用网页界面。XBMC 本身自带网页服务器,但是为了获得更多权限,我们还是使用了独立的网页框架。在同一台电脑上跑两个以上网页服务器没有问题,只不过它们不能使用相同的端口。
|
||||
首先,遥控程序需要一个用户界面。大多数 Kodi 遥控程序都是独立的 app。不过对于我们要做的这个音乐控制程序,我们希望用户可以不用安装任何东西就可以使用。显然我们需要使用网页界面。Kodi 本身自带网页服务器,但是为了获得更多权限,我们还是使用了独立的网页框架。在同一台电脑上跑两个以上网页服务器没有问题,只不过它们不能使用相同的端口。
|
||||
|
||||
有几个网页框架可以使用。而我们选用 Bottle 是因为它是一个简单高效的框架,而且我们也确实用不到任何高级功能。Bottle 是一个 Python 模块,所以这也将是我们编写服务器模块的语言。
|
||||
|
||||
@ -18,7 +18,7 @@ XBMC:自制遥控
|
||||
|
||||
sudo apt-get install python-bottle
|
||||
|
||||
遥控程序实际上只是连接用户和系统的中间层。Bottle 提供了和用户交互的方式,而我们将通过 JSON API 来和 **XBMC** 交互。这样可以让我们通过发送 JSON 格式消息的方式去控制媒体播放器。
|
||||
遥控程序实际上只是连接用户和系统的中间层。Bottle 提供了和用户交互的方式,而我们将通过 JSON API 来和 **Kodi** 交互。这样可以让我们通过发送 JSON 格式消息的方式去控制媒体播放器。
|
||||
|
||||
我们将用到一个叫做 xbmcjson 的简单 XBMC JSON API 封装。它足够用来发送控制请求,而不需要关心实际的 JSON 格式以及和服务器通讯的无聊细节。它没有包含在 PIP 包管理中,所以你得直接从 **GitHub** 安装:
|
||||
|
||||
@ -35,13 +35,13 @@ XBMC:自制遥控
|
||||
from xbmcjson import XBMC
|
||||
from bottle import route, run, template, redirect, static_file, request
|
||||
import os
|
||||
xbmc = XBMC(“http://192.168.0.5/jsonrpc”, “xbmc”, “xbmc”)
|
||||
@route(‘/hello/<name>’)
|
||||
xbmc = XBMC("http://192.168.0.5/jsonrpc", "xbmc", "xbmc")
|
||||
@route('/hello/<name>')
|
||||
def index(name):
|
||||
return template(‘<h1>Hello {{name}}!</h1>’, name=name)
|
||||
run(host=”0.0.0.0”, port=8000)
|
||||
return template('<h1>Hello {{name}}!</h1>', name=name)
|
||||
run(host="0.0.0.0", port=8000)
|
||||
|
||||
这样程序将连接到 **XBMC**(不过实际上用不到);然后 Bottle 会开始伺服网站。在我们的代码里,它将监听主机 0.0.0.0(意味着允许所有主机连接)的端口 8000。它只设定了一个站点,就是 /hello/XXXX,这里的 XXXX 可以是任何内容。不管 XXXX 是什么都将作为参数名传递给 index()。然后再替换进去 HTML 网页模版。
|
||||
这样程序将连接到 **Kodi**(不过实际上用不到);然后 Bottle 会开始提供网站服务。在我们的代码里,它将监听主机 0.0.0.0(意味着允许所有主机连接)的端口 8000。它只设定了一个站点,就是 /hello/XXXX,这里的 XXXX 可以是任何内容。不管 XXXX 是什么都将作为参数名传递给 index()。然后再替换进去 HTML 网页模版。
|
||||
|
||||
你可以先试着把上面内容写到一个文件(我们取的名字是 remote.py),然后用下面的命令启动:
|
||||
|
||||
@ -51,56 +51,56 @@ XBMC:自制遥控
|
||||
|
||||
@route() 用来设定网页服务器的路径,而函数 index() 会返回该路径的数据。通常是返回由模版生成的 HTML 页面,但是并不是说只能这样(后面会看到)。
|
||||
|
||||
随后,我们将给应用添加更多页面入口,让它变成一个全功能的 XBMC 遥控,但仍将采用相同代码结构。
|
||||
随后,我们将给应用添加更多页面入口,让它变成一个全功能的 Kodi 遥控,但仍将采用相同代码结构。
|
||||
|
||||
XBMC JSON API 接口可以从和 XBMC 机器同网段的任意电脑上访问。也就是说你可以在自己的笔记本上开发,然后再布置到媒体中心上,而不需要浪费时间上传每次改动。
|
||||
XBMC JSON API 接口可以从和 Kodi 机器同网段的任意电脑上访问。也就是说你可以在自己的笔记本上开发,然后再布置到媒体中心上,而不需要浪费时间上传每次改动。
|
||||
|
||||
模版 - 比如前面例子里的那个简单模版 - 是一种结合 Python 和 HTML 来控制输出的方式。理论上,这俩能做很多很多事,但是会非常混乱。我们将只是用它们来生成正确格式的数据。不过,在开始动手之前,我们先得准备点数据。
|
||||
|
||||
> ### Paste ###
|
||||
> **Paste**
|
||||
>
|
||||
> Bottle 自带网页服务器,就是我们用来测试遥控程序的。不过,我们发现它性能有时不够好。当我们的遥控程序正式上线时,我们希望页面能更快一点显示出来。Bottle 可以和很多不同的网页服务器配合工作,而我们发现 Paste 用起来非常不错。而要使用的话,只要简单地安装(Debian 系统里的 python-paste 包),然后修改一下代码里的 run 调用:
|
||||
> Bottle 自带网页服务器,我们用它来测试遥控程序。不过,我们发现它性能有时不够好。当我们的遥控程序正式上线时,我们希望页面能更快一点显示出来。Bottle 可以和很多不同的网页服务器配合工作,而我们发现 Paste 用起来非常不错。而要使用的话,只要简单地安装(Debian 系统里的 python-paste 包),然后修改一下代码里的 run 调用:
|
||||
>
|
||||
> run(host=hostname, port=hostport, server=”paste”)
|
||||
> run(host=hostname, port=hostport, server="paste")
|
||||
>
|
||||
> 你可以在 [http://bottlepy.org/docs/dev/deployment.html][1] 找到如何使用其他服务器的相关细节。
|
||||
|
||||
#### 从 XBMC 获取数据 ####
|
||||
#### 从 Kodi 获取数据 ####
|
||||
|
||||
XBMC JSON API 分成 14 个命名空间:JSONRPC, Player, Playlist, Files, AudioLibrary, VideoLibrary, Input, Application, System, Favourites, Profiles, Settings, Textures 和 XBMC。每个都可以通过 Python 的 XBMC 对象访问(Favourites 除外,明显是个疏忽)。每个命名空间都包含许多方法用于对程序的控制。例如,Playlist.GetItems() 可以用来获取某个特定播放列表的内容。服务器会返回给我们 JSON 格式的数据,但 xbmcjson 模块会为我们转化成 Python 词典。
|
||||
|
||||
我们需要用到 XBMC 里的两个组件来控制播放:播放器和播放列表。播放器带有播放列表并在每首歌结束时从列表里取下一首。为了查看当前正在播放的内容,我们需要获取正在工作的播放器的 ID,然后根据它找到当前播放列表的 ID。这个可以通过下面的代码来实现:
|
||||
我们需要用到 Kodi 里的两个组件来控制播放:播放器和播放列表。播放器处理播放列表并在每首歌结束时从列表里取下一首。为了查看当前正在播放的内容,我们需要获取正在工作的播放器的 ID,然后根据它找到当前播放列表的 ID。这个可以通过下面的代码来实现:
|
||||
|
||||
def get_playlistid():
|
||||
player = xbmc.Player.GetActivePlayers()
|
||||
if len(player[‘result’]) > 0:
|
||||
playlist_data = xbmc.Player.GetProperties({“playerid”:0, “properties”:[“playlistid”]})
|
||||
if len(playlist_data[‘result’]) > 0 and “playlistid” in playlist_data[‘result’].keys():
|
||||
return playlist_data[‘result’][‘playlistid’]
|
||||
if len(player['result']) > 0:
|
||||
playlist_data = xbmc.Player.GetProperties({"playerid":0, "properties":["playlistid"]})
|
||||
if len(playlist_data['result']) > 0 and "playlistid" in playlist_data['result'].keys():
|
||||
return playlist_data['result']['playlistid']
|
||||
return -1
|
||||
|
||||
如果当前没有播放器在工作(就是说,返回数据的结果部分的长度是 0),或者当前播放器不带播放列表,这样的话函数会返回 -1。其他时候,它会返回当前播放列表的数字 ID。
|
||||
如果当前没有播放器在工作(就是说,返回数据的结果部分的长度是 0),或者当前播放器没有处理播放列表,这样的话函数会返回 -1。其他时候,它会返回当前播放列表的数字 ID。
|
||||
|
||||
当我们拿到当前播放列表的 ID 后,就可以获取列表的细节内容。按照我们的需求,有两个重要的地方:播放列表里包含的项,以及当前播放所处的位置(已经播放过的项并不会从播放列表移除,只是移动当前播放位置)。
|
||||
当我们拿到当前播放列表的 ID 后,就可以获取该列表的细节内容。按照我们的需求,有两个重要的地方:播放列表里包含的项,以及当前播放所处的位置(已经播放过的项并不会从播放列表移除,只是移动当前播放位置)。
|
||||
|
||||
def get_playlist():
|
||||
playlistid = get_playlistid()
|
||||
if playlistid >= 0:
|
||||
data = xbmc.Playlist.GetItems({“playlistid”:playlistid, “properties”: [“title”, “album”, “artist”, “file”]})
|
||||
position_data = xbmc.Player.GetProperties({“playerid”:0, ‘properties’:[“position”]})
|
||||
position = int(position_data[‘result’][‘position’])
|
||||
return data[‘result’][‘items’][position:], position
|
||||
data = xbmc.Playlist.GetItems({"playlistid":playlistid, "properties": ["title", "album", "artist", "file"]})
|
||||
position_data = xbmc.Player.GetProperties({"playerid":0, 'properties':["position"]})
|
||||
position = int(position_data['result']['position'])
|
||||
return data['result']['items'][position:], position
|
||||
return [], -1
|
||||
|
||||
这样可以返回以正在播放的项开始的列表(因为我们并不关心已经播放过的内容),而且也包含了位置信息用来从列表里移除项。
|
||||
这样可以返回正在播放的项开始的列表(因为我们并不关心已经播放过的内容),而且也包含了用来从列表里移除项的位置信息。
|
||||
|
||||

|
||||
|
||||
API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列出了所有支持的函数,但是关于具体如何使用的描述有点太简单了。
|
||||
|
||||
> ### JSON ###
|
||||
> **JSON**
|
||||
>
|
||||
> JSON 是 JavaScript Object Notation 的缩写,开始设计用于 JavaScript 对象的序列化。目前仍然起到这个作用,但是它也是用来编码任意数据的一种很好用的方式。
|
||||
> JSON 是 JavaScript Object Notation 的缩写,最初设计用于 JavaScript 对象的序列化。目前仍然起到这个作用,但是它也是用来编码任意数据的一种很好用的方式。
|
||||
>
|
||||
> JSON 对象都是这样的格式:
|
||||
>
|
||||
@ -110,18 +110,18 @@ API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列
|
||||
>
|
||||
> 在字典数据结构里,值本身可以是另一个 JSON 对象,或者一个列表,所以下面的格式也是正确的:
|
||||
>
|
||||
> {“name”:“Ben”, “jobs”:[“cook”, “bottle-washer”], “appearance”: {“height”:195, “skin”:“fair”}}
|
||||
> {"name":"Ben", "jobs":["cook", "bottle-washer"], "appearance": {"height":195, "skin":"fair"}}
|
||||
>
|
||||
> JSON 通常在网络服务中用来发送和接收数据,并且大多数编程语言都能很好地支持,所以如果你熟悉 Python 的话,你应该可以使用你熟悉的编程语言调用相同的接口来轻松地控制 XBMC。
|
||||
> JSON 通常在网络服务中用来发送和接收数据,并且大多数编程语言都能很好地支持,所以如果你熟悉 Python 的话,你应该可以使用你熟悉的编程语言调用相同的接口来轻松地控制 Kodi。
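作为补充示意(并非原文内容),下面用 Python 直接向 Kodi 的 JSON-RPC 接口发送一个请求,而不借助 xbmcjson 封装。这里假设系统中装有 requests 库,地址与用户名/密码沿用正文中假定的 192.168.0.5 和 xbmc/xbmc:

    # 直接调用 JSON-RPC 接口的示意:查询当前活动的播放器
    import requests

    payload = {"jsonrpc": "2.0", "method": "Player.GetActivePlayers", "id": 1}
    resp = requests.post("http://192.168.0.5/jsonrpc",
                         json=payload, auth=("xbmc", "xbmc"))
    print(resp.json())    # 形如 {'id': 1, 'jsonrpc': '2.0', 'result': [...]}

可以看到,xbmcjson 之类的封装做的事情,本质上就是替你拼好这样的 JSON 消息并解析返回值。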
|
||||
|
||||
#### 整合到一起 ####
|
||||
|
||||
把之前的功能连接到 HTML 页面很简单:
|
||||
|
||||
@route(‘/juke’)
|
||||
@route('/juke')
|
||||
def index():
|
||||
current_playlist, position = get_playlist()
|
||||
return template(‘list’, playlist=current_playlist, offset = position)
|
||||
return template('list', playlist=current_playlist, offset = position)
|
||||
|
||||
只需要抓取播放列表(调用我们之前定义的函数),然后将结果传递给负责显示的模版。
|
||||
|
||||
@ -131,57 +131,57 @@ API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列
|
||||
% if playlist is not None:
|
||||
% position = offset
|
||||
% for song in playlist:
|
||||
<strong> {{song[‘title’]}} </strong>
|
||||
% if song[‘type’] == ‘unknown’:
|
||||
<strong> {{song['title']}} </strong>
|
||||
% if song['type'] == 'unknown':
|
||||
Radio
|
||||
% else:
|
||||
{{song[‘artist’][0]}}
|
||||
{{song['artist'][0]}}
|
||||
% end
|
||||
% if position != offset:
|
||||
<a href=”/remove/{{position}}”>remove</a>
|
||||
<a href="/remove/{{position}}">remove</a>
|
||||
% else:
|
||||
<a href=”/skip/{{position}}”>skip</a>
|
||||
<a href="/skip/{{position}}">skip</a>
|
||||
% end
|
||||
<br>
|
||||
% position += 1
|
||||
% end
|
||||
|
||||
可以看到,模版大部分是用 HTML 写的,只有一小部分用来控制输出的其他代码。用两个括号括起来的变量是输出位置(像我们在第一个 ‘hello world’ 例子里看到的)。你也可以嵌入以百分号开头的 Python 代码。因为没有缩进,你需要用一个 % end 来结束当前的代码块(就像循环或 if 语句)。
|
||||
可以看到,模版大部分是用 HTML 写的,只有一小部分用来控制输出的其他代码。用两个大括号括起来的变量是输出位置(像我们在第一个 'hello world' 例子里看到的)。你也可以嵌入以百分号开头的 Python 代码。因为没有缩进,你需要用一个 `% end` 来结束当前的代码块(就像循环或 if 语句)。
|
||||
|
||||
这个模版首先检查列表是否为空,然后遍历里面的每一项。每一项会用粗体显示歌曲名字,然后是艺术家名字,然后是一个是否跳过(如果是当前正在播的歌曲)或从列表移除的链接。所有歌曲的类型都是 ‘song’,如果类型是 ‘unknown’,那就不是歌曲而是网络电台。
|
||||
这个模版首先检查列表是否为空,然后遍历里面的每一项。每一项会用粗体显示歌曲名字,然后是艺术家名字,然后是一个是否跳过(如果是当前正在播的歌曲)或从列表移除的链接。所有歌曲的类型都是 'song',如果类型是 'unknown',那就不是歌曲而是网络电台。
|
||||
|
||||
/remove/ 和 /skip/ 路径只是简单地封装了 XBMC 控制功能,在改动生效后重新加载 /juke:
|
||||
|
||||
@route(‘/skip/<position>’)
|
||||
@route('/skip/<position>')
|
||||
def index(position):
|
||||
print xbmc.Player.GoTo({‘playerid’:0, ‘to’:’next’})
|
||||
redirect(“/juke”)
|
||||
@route(‘/remove/<position>’)
|
||||
print xbmc.Player.GoTo({'playerid':0, 'to':'next'})
|
||||
redirect("/juke")
|
||||
@route('/remove/<position>')
|
||||
def index(position):
|
||||
playlistid = get_playlistid()
|
||||
if playlistid >= 0:
|
||||
xbmc.Playlist.Remove({‘playlistid’:int(playlistid), ‘position’:int(position)})
|
||||
redirect(“/juke”)
|
||||
xbmc.Playlist.Remove({'playlistid':int(playlistid), 'position':int(position)})
|
||||
redirect("/juke")
|
||||
|
||||
当然,如果不能往列表里添加歌曲的话那这个列表管理功能也不行。
|
||||
|
||||
因为一旦播放列表结束,它就消失了,所以你需要重新创建一个,这会让事情复杂一些。而且有点让人迷惑的是,播放列表是通过调用 Playlist.Clear() 方法来创建的。这个方法也还用来删除包含网络电台(类型是 unknown)的播放列表。另一个麻烦的地方是列表里的网络电台开始播放后就不会停,所以如果当前在播网络电台,也会需要清除播放列表。
|
||||
|
||||
这些页面包含了指向 /play/<songid> 的链接来播放歌曲。通过下面的代码处理:
|
||||
这些页面包含了指向 /play/\<songid> 的链接来播放歌曲。通过下面的代码处理:
|
||||
|
||||
@route(‘/play/<id>’)
|
||||
@route('/play/<id>')
|
||||
def index(id):
|
||||
playlistid = get_playlistid()
|
||||
playlist, not_needed= get_playlist()
|
||||
if playlistid < 0 or playlist[0][‘type’] == ‘unknown’:
|
||||
xbmc.Playlist.Clear({“playlistid”:0})
|
||||
xbmc.Playlist.Add({“playlistid”:0, “item”:{“songid”:int(id)}})
|
||||
xbmc.Player.open({“item”:{“playlistid”:0}})
|
||||
if playlistid < 0 or playlist[0]['type'] == 'unknown':
|
||||
xbmc.Playlist.Clear({"playlistid":0})
|
||||
xbmc.Playlist.Add({"playlistid":0, "item":{"songid":int(id)}})
|
||||
xbmc.Player.open({"item":{"playlistid":0}})
|
||||
playlistid = 0
|
||||
else:
|
||||
xbmc.Playlist.Add({“playlistid”:playlistid, “item”:{“songid”:int(id)}})
|
||||
xbmc.Playlist.Add({"playlistid":playlistid, "item":{"songid":int(id)}})
|
||||
remove_duplicates(playlistid)
|
||||
redirect(“/juke”)
|
||||
redirect("/juke")
|
||||
|
||||
最后一件事情是实现 remove_duplicates 调用。这并不是必须的 - 而且还有人并不喜欢这个 - 不过可以保证同一首歌不会多次出现在播放列表里。
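remove_duplicates 的实现并未出现在这段摘录中;作为参考,下面是一个假设性的简要示意(并非原作者的实现),演示一种可能的做法:按歌曲文件路径保留首次出现的项,并从后往前删除其余重复项(xbmc 对象与正文中的相同):

    # 假设性示意:按文件路径去除播放列表中的重复歌曲
    def remove_duplicates(playlistid):
        data = xbmc.Playlist.GetItems({"playlistid": playlistid, "properties": ["file"]})
        items = data["result"].get("items", [])
        seen = set()
        dupes = []
        for position, item in enumerate(items):
            if item["file"] in seen:
                dupes.append(position)
            else:
                seen.add(item["file"])
        # 从后往前删除,前面的位置才不会因删除而偏移
        for position in reversed(dupes):
            xbmc.Playlist.Remove({"playlistid": playlistid, "position": position})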
|
||||
|
||||
@ -191,40 +191,40 @@ API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列
|
||||
|
||||
还需要处理一下 UI,不过功能已经有了。
|
||||
|
||||
> ### 日志 ###
|
||||
> **日志**
|
||||
>
|
||||
> 通常拿到 XBMC JSON API 并不清楚能用来做什么,而且它的文档也有点模糊。找出如何使用的一种方式是看别的遥控程序是怎么做的。如果打开日志功能,就可以在使用其他遥控程序的时候看到哪个 API 被调用了,然后就可以应用到在自己的代码里。
|
||||
>
|
||||
> 要打开日志功能,把 XBMC 媒体中心 接到显示器上,再依次进入设置 > 系统 > 调试,打开允许调试日志。在打开日志功能后,还需要登录到 XBMC 机器上(比如通过 SSH),然后就可以查看日志了。日志文件的位置应该显示在 XBMC 界面左上角。在 RaspBMC 系统里,文件位置是 /home/pi/.xbmc/temp/xbmc.log。你可以通过下面的命令实时监视哪个 API 接口被调用了:
|
||||
> 要打开日志功能,把 Kodi 媒体中心 接到显示器上,再依次进入设置 > 系统 > 调试,打开允许调试日志。在打开日志功能后,还需要登录到 Kodi 机器上(比如通过 SSH),然后就可以查看日志了。日志文件的位置应该显示在 Kodi 界面左上角。在 RaspBMC 系统里,文件位置是 /home/pi/.xbmc/temp/xbmc.log。你可以通过下面的命令实时监视哪个 API 接口被调用了:
|
||||
>
|
||||
> cd /home/pi/.xbmc/temp
|
||||
> tail -f xbmc.log | grep “JSON”
|
||||
> tail -f xbmc.log | grep "JSON"
|
||||
|
||||
#### 增加功能 ####
|
||||
|
||||
上面的代码都是用来播放 XBMC 媒体库里的歌曲的,但我们还希望能播放网络电台。每个插件都有自己的独立 URL 可以通过普通的 XBMC JSON 命令来获取信息。举个例子,要从电台插件里获取选中的电台,可以使用;
|
||||
上面的代码都是用来播放 Kodi 媒体库里的歌曲的,但我们还希望能播放网络电台。每个插件都有自己的独立 URL 可以通过普通的 XBMC JSON 命令来获取信息。举个例子,要从电台插件里获取选中的电台,可以使用;
|
||||
|
||||
@route(‘/radio/’)
|
||||
@route('/radio/')
|
||||
def index():
|
||||
my_stations = xbmc.Files.GetDirectory({“directory”:”plugin://plugin.audio.radio_de/stations/my/”, “properties”:
|
||||
[“title”,”thumbnail”,”playcount”,”artist”,”album”,”episode”,”season”,”showtitle”]})
|
||||
if ‘result’ in my_stations.keys():
|
||||
return template(‘radio’, stations=my_stations[‘result’][‘files’])
|
||||
my_stations = xbmc.Files.GetDirectory({"directory":"plugin://plugin.audio.radio_de/stations/my/", "properties":
|
||||
["title","thumbnail","playcount","artist","album","episode","season","showtitle"]})
|
||||
if 'result' in my_stations.keys():
|
||||
return template('radio', stations=my_stations['result']['files'])
|
||||
else:
|
||||
return template(‘error’, error=’radio’)
|
||||
return template('error', error='radio')
|
||||
|
||||
这样可以返回一个可以和歌曲一样能添加到播放列表的文件。不过,这些文件能一直播下去,所以(之前说过)在添加其他歌曲的时候需要重新创建列表。
|
||||
|
||||
#### 共享歌曲 ####
|
||||
|
||||
除了伺服页面模版,Bottle 还支持静态文件。方便用于那些不会因为用户输入而改变的内容。可以是 CSS 文件,一张图片或是一首 MP3 歌曲。在我们的简单遥控程序里(目前)还没有任何用来美化的 CSS 或图片,不过我们增加了一个下载歌曲的途径。这个可以让媒体中心变成一个存放歌曲的 NAS 盒子。在需要传输大量数据的时候,最好还是用类似 Samba 的功能,但只是下几首歌到手机上的话使用静态文件也是很好的方式。
|
||||
除了伺服页面模版,Bottle 还支持静态文件,方便用于那些不会因为用户输入而改变的内容。可以是 CSS 文件,一张图片或是一首 MP3 歌曲。在我们的简单遥控程序里(目前)还没有任何用来美化的 CSS 或图片,不过我们增加了一个下载歌曲的途径。这个可以让媒体中心变成一个存放歌曲的 NAS 盒子。在需要传输大量数据的时候,最好还是用类似 Samba 的功能,但只是下几首歌到手机上的话使用静态文件也是很好的方式。
|
||||
|
||||
通过歌曲 ID 来下载的 Bottle 代码:
|
||||
|
||||
@route(‘/download/<id>’)
|
||||
@route('/download/<id>')
|
||||
def index(id):
|
||||
data = xbmc.AudioLibrary.GetSongDetails({“songid”:int(id), “properties”:[“file”]})
|
||||
full_filename = data[‘result’][‘songdetails’][‘file’]
|
||||
data = xbmc.AudioLibrary.GetSongDetails({"songid":int(id), "properties":["file"]})
|
||||
full_filename = data['result']['songdetails']['file']
|
||||
path, filename = os.path.split(full_filename)
|
||||
return static_file(filename, root=path, download=True)
|
||||
|
||||
@ -232,13 +232,13 @@ API 文档在这里:[http://wiki.xbmc.org/?title=JSON-RPC_API/v6][2]。它列
|
||||
|
||||
我们已经把所有的代码过了一遍,不过还需要一点工作来把它们集合到一起。可以自己去 GitHub 页面 [https://github.com/ben-ev/xbmc-remote][3] 看下。
|
||||
|
||||
> ### 设置 ###
|
||||
> **设置**
|
||||
>
|
||||
> 我们的遥控程序已经开发完成,还需要保证让它在媒体中心每次开机的时候都能启动。有几种方式,最简单的是在 /etc/rc.local 里增加一行命令来启动。我们的文件位置在 /opt/xbmc-remote/remote.py,其他文件也和它一起。然后在 /etc/rc.local 最后的 exit 0 之前增加了下面一行。
|
||||
>
|
||||
> cd /opt/xbmc-remote && python remote.py &
|
||||
|
||||
> ### GitHub ###
|
||||
> **GitHub**
|
||||
>
|
||||
> 这个项目目前还只是个架子,但是 - 我们运营杂志就意味着没有太多自由时间来编程。不过,我们启动了一个 GitHub 项目,希望能持续完善, 而如果你觉得这个项目有用的话,欢迎做出贡献。
|
||||
>
|
||||
@ -252,7 +252,7 @@ via: http://www.linuxvoice.com/xbmc-build-a-remote-control/
|
||||
|
||||
作者:[Ben Everard][a]
|
||||
译者:[zpl1025](https://github.com/zpl1025)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,95 +1,92 @@
|
||||
用这些专用工具让你截图更简单
|
||||
================================================================================
|
||||
"一图胜过千万句",这句二十世纪早期在美国应运而生的名言,说的是一张单一的静止图片所蕴含的信息足以匹敌大量的描述性文字。本质上说,图片所传递的信息量的确是比文字更有效更高效。
|
||||
“一图胜千言”,这句二十世纪早期在美国应运而生的名言,说的是一张单一的静止图片所蕴含的信息足以匹敌大量的描述性文字。本质上说,图片所传递的信息量的确是比文字更有效更高效。
|
||||
|
||||
截图(或抓帧)是一种捕捉自计算机所录制可视化设备输出的快照或图片,屏幕捕捉软件能让计算机获取到截图。此类软件有很多用处,因为一张图片能很好地说明计算机软件的操作,截图在软件开发过程和文档中扮演了一个很重要的角色。或者,如果你的电脑有了个技术性问题,一张截图能让技术支持理解你碰到的这个问题。要写好计算机相关的文章、文档和教程,没有一款好的截图工具是几乎不可能的。如果你想在保存屏幕上任意一些零星的信息,特别是不方便打字时,截图也很有用。
|
||||
截图(或抓帧)是一种捕捉自计算机的快照或图片,用来记录可视设备的输出。屏幕捕捉软件能从计算机中获取到截图。此类软件有很多用处,因为一张图片能很好地说明计算机软件的操作,截图在软件开发过程和文档中扮演了一个很重要的角色。或者,如果你的电脑有了技术性问题,一张截图能让技术支持理解你碰到的这个问题。要写好计算机相关的文章、文档和教程,没有一款好的截图工具是几乎不可能的。如果你想保存屏幕上显示的一些零星信息,特别是在不方便打字记录时,截图也很有用。
|
||||
|
||||
在开源世界,Linux有许多专注于截图功能的工具供选择,基于图形的和控制台的都有。如果要说一个功能丰富的专用截图工具,那就来看看Shutter吧。这款工具是小型开源工具的杰出代表,当然也有其它的选择。
|
||||
在开源世界,Linux有许多专注于截图功能的工具供选择,基于图形的和控制台的都有。如果要说一个功能丰富的专用截图工具,看起来没有能超过Shutter的。这款工具是小型开源工具的杰出代表,但是也有其它的不错替代品可以选择。
|
||||
|
||||
屏幕捕捉功能不仅仅只有专业的工具提供,GIMP和ImageMagick这两款主攻图像处理的工具,也能提供像样的屏幕捕捉功能。
|
||||
屏幕捕捉功能不仅仅只有专门的工具提供,GIMP和ImageMagick这两款主攻图像处理的工具,也能提供像样的屏幕捕捉功能。
|
||||
|
||||
----------
|
||||
|
||||
### Shutter ###
|
||||
|
||||

|
||||
|
||||
Shutter是一款功能丰富的截图软件。你可以给你的特殊区域、窗口、整个屏幕甚至是网站截图 - 在其中应用不用的效果,比如用高亮的点在上面绘图,然后上传至一个图片托管网站,一切尽在这个小窗口内。
|
||||
Shutter是一款功能丰富的截图软件。你可以对特定区域、窗口、整个屏幕甚至是网站截图 - 并为其应用不同的效果,比如用高亮的点在上面绘图,然后上传至一个图片托管网站,一切尽在这个小窗口内。
|
||||
|
||||
包含特性:
|
||||
|
||||
|
||||
- 截图范围:
|
||||
- 一块特殊区域
|
||||
- 一个特定区域
|
||||
- 窗口
|
||||
- 完整的桌面
|
||||
- 脚本生成的网页
|
||||
- 在截图中应用不同效果
|
||||
- 热键
|
||||
- 打印
|
||||
- 直接截图或指定一个延迟时间
|
||||
- 将截图保存至一个指定目录并用一个简便方法重命名它(用特殊的通配符)
|
||||
- 完成集成在GNOME桌面中(TrayIcon等等)
|
||||
- 当你截了一张图并以%设置了尺寸时,直接生成缩略图
|
||||
- Shutter会话选项:
|
||||
- 会话中保持所有截图的痕迹
|
||||
- 直接截图或指定延迟时间截图
|
||||
- 将截图保存至一个指定目录并用一个简便方法重命名它(用指定通配符)
|
||||
- 完全集成在GNOME桌面中(TrayIcon等等)
|
||||
- 当你截了一张图并根据尺寸的百分比直接生成缩略图
|
||||
- Shutter会话集:
|
||||
- 跟踪会话中所有的截图
|
||||
- 复制截图至剪贴板
|
||||
- 打印截图
|
||||
- 删除截图
|
||||
- 重命名文件
|
||||
- 直接上传你的文件至图像托管网站(比如http://ubuntu-pics.de),取回所有需要的图像并将它们与其他人分享
|
||||
- 直接上传你的文件至图像托管网站(比如 http://ubuntu-pics.de ),得到链接并将它们与其他人分享
|
||||
- 用内置的绘画工具直接编辑截图
|
||||
|
||||
---
|
||||
|
||||
- 主页: [shutter-project.org][1]
|
||||
- 开发者: Mario Kemper和Shutter团队
|
||||
- 许可证: GNU GPL v3
|
||||
- 版本号: 0.93.1
|
||||
|
||||
----------
|
||||
|
||||
### HotShots ###
|
||||
|
||||

|
||||
|
||||
HotShots是一款捕捉屏幕并能以各种图片格式保存的软件,同时也能添加注释和图形数据(箭头、行、文本, ...)。
|
||||
HotShots是一款捕捉屏幕并能以各种图片格式保存的软件,同时也能添加注释和图形数据(箭头、线条、文本 ...)。
|
||||
|
||||
你也可以把你的作品上传到网上(FTP/一些web服务),HotShots是用Qt开发而成的。
|
||||
你也可以把你的作品上传到网上(FTP/一些web服务),HotShots是用Qt开发而成的。
|
||||
|
||||
HotShots无法从Ubuntu的Software Center中获取,不过用以下命令可以轻松地来安装它:
|
||||
|
||||
sudo add-apt-repository ppa:ubuntuhandbook1/apps
|
||||
|
||||
sudo add-apt-repository ppa:ubuntuhandbook1/apps
|
||||
sudo apt-get update
|
||||
|
||||
sudo apt-get install hotshots
|
||||
|
||||
包含特性:
|
||||
|
||||
- 简单易用
|
||||
- 全功能使用
|
||||
- 嵌入式编辑器
|
||||
- 功能完整
|
||||
- 内置编辑器
|
||||
- 热键
|
||||
- 内置放大功能
|
||||
- 徒手和多屏捕捉
|
||||
- 手动控制和多屏捕捉
|
||||
- 支持输出格式:Black & Whte (bw), Encapsulated PostScript (eps, epsf), Encapsulated PostScript Interchange (epsi), OpenEXR (exr), PC Paintbrush Exchange (pcx), Photoshop Document (psd), ras, rgb, rgba, Irix RGB (sgi), Truevision Targa (tga), eXperimental Computing Facility (xcf), Windows Bitmap (bmp), DirectDraw Surface (dds), Graphic Interchange Format (gif), Icon Image (ico), Joint Photographic Experts Group 2000 (jp2), Joint Photographic Experts Group (jpeg, jpg), Multiple-image Network Graphics (mng), Portable Pixmap (ppm), Scalable Vector Graphics (svg), svgz, Tagged Image File Format (tif, tiff), webp, X11 Bitmap (xbm), X11 Pixmap (xpm), and Khoros Visualization (xv)
|
||||
- 国际化支持:巴斯克语、中文、捷克语、法语、加利西亚语、德语、希腊语、意大利语、日语、立陶宛语、波兰语、葡萄牙语、罗马尼亚语、俄罗斯语、塞尔维亚语、僧伽罗语、斯洛伐克语、西班牙语、土耳其语、乌克兰语和越南语
|
||||
|
||||
---
|
||||
|
||||
- 主页: [thehive.xbee.net][2]
|
||||
- 开发者 xbee
|
||||
- 许可证: GNU GPL v2
|
||||
- 版本号: 2.2.0
|
||||
|
||||
----------
|
||||
|
||||
### ScreenCloud ###
|
||||
|
||||

|
||||
|
||||
ScreenCloud是一款易于使用的开源截图工具。
|
||||
|
||||
在这款软件中,用户可以用三个热键中的其中一个或只需点击ScreenCloud托盘图标就能进行截图,用户也可以自行选择保存截图的地址。
|
||||
在这款软件中,用户可以用三个热键之一或只需点击ScreenCloud托盘图标就能进行截图,用户也可以自行选择保存截图的地址。
|
||||
|
||||
如果你选择上传你的截图到screencloud主页,链接会自动复制到你的剪贴板上,你能通过email或在一个聊天对话框里和你的朋友同事分享它,他们肯定会点击这个链接来看你的截图的。
|
||||
如果你选择上传你的截图到screencloud网站,链接会自动复制到你的剪贴板上,你能通过email或在一个聊天对话框里和你的朋友同事分享它,他们肯定会点击这个链接来看你的截图的。
|
||||
|
||||
包含特性:
|
||||
|
||||
@ -106,18 +103,18 @@ ScreenCloud是一款易于使用的开源截图工具。
|
||||
- 插件支持:保存至Dropbox,Imgur等等
|
||||
- 支持上传至FTP和SFTP服务器
|
||||
|
||||
---
|
||||
|
||||
- 主页: [screencloud.net][3]
|
||||
- 开发者: Olav S Thoresen
|
||||
- 许可证: GNU GPL v2
|
||||
- 版本号: 1.2.1
|
||||
|
||||
----------
|
||||
|
||||
### KSnapshot ###
|
||||
|
||||

|
||||
|
||||
KSnapshot也是一款易于使用的截图工具,它能给整个桌面、一个单一窗口、窗口的一部分或一块所选区域捕捉图像。,图像能以各种不用的格式保存。
|
||||
KSnapshot也是一款易于使用的截图工具,它能给整个桌面、单一窗口、窗口的一部分或一块所选区域捕捉图像。图像能以各种不同格式保存。
|
||||
|
||||
KSnapshot也允许用户用热键来进行截图。除了保存截图之外,它也可以被复制到剪贴板或用任何与图像文件关联的程序打开。
|
||||
|
||||
@ -127,10 +124,12 @@ KSnapshot是KDE 4图形模块的一部分。
|
||||
|
||||
- 以多种格式保存截图
|
||||
- 延迟截图
|
||||
- 剔除窗口装饰图案
|
||||
- 剔除窗口装饰(边框、菜单等)
|
||||
- 复制截图至剪贴板
|
||||
- 热键
|
||||
- 能用它的D-Bus界面进行脚本化
|
||||
- 能用它的D-Bus接口进行脚本化
|
||||
|
||||
---
|
||||
|
||||
- 主页: [www.kde.org][4]
|
||||
- 开发者: KDE, Richard J. Moore, Aaron J. Seigo, Matthias Ettrich
|
||||
@ -142,7 +141,7 @@ KSnapshot是KDE 4图形模块的一部分。
|
||||
via: http://www.linuxlinks.com/article/2015062316235249/ScreenCapture.html
|
||||
|
||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -4,7 +4,7 @@ Linux mkdir、tar 和 kill 命令的 4 个有用小技巧
|
||||
|
||||

|
||||
|
||||
4 个有用的 Linux 小技巧
|
||||
*4 个有用的 Linux 小技巧*
|
||||
|
||||
### 1. 假设你要创建一个类似于下面很长的/复杂的目录树。实现这最有效的方法是什么呢? ###
|
||||
|
||||
@ -37,9 +37,9 @@ Linux mkdir、tar 和 kill 命令的 4 个有用小技巧
|
||||
|
||||

|
||||
|
||||
检查目录结构
|
||||
*检查目录结构*
|
||||
|
||||
我们可以用上面的方式创建任意复制的目录树结构。注意这仅仅是一个普通的命令,但是用 ‘{}’ 来创建层级目录。需要的时候如果在 shell 脚本中使用是非常有用的。
|
||||
我们可以用上面的方式创建任意复杂的目录树结构。注意这仅仅是一个普通的命令,但是用 ‘{}’ 来创建层级目录。需要的时候如果在 shell 脚本中使用是非常有用的。
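顺带补充一个示意(并非原文内容):如果是在 Python 脚本而不是 shell 里,同样的“一条命令建出整棵目录树”可以用 os.makedirs 来实现。下面的目录名只是示例:

    # 递归创建多级目录,效果相当于 mkdir -p 配合花括号展开
    import os

    for top in ("etc", "usr", "var"):
        for leaf in ("log", "tmp"):
            os.makedirs(os.path.join("demo_tree", top, leaf), exist_ok=True)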
|
||||
|
||||
### 2. 在桌面(/home/$USER/Desktop)创建一个文件(例如 test)并填入以下内容。 ###
|
||||
|
||||
@ -109,7 +109,7 @@ c. 解压 tar 包。
|
||||
|
||||
我们也可以采用另外一种方式。
|
||||
|
||||
我们可以在 Tar 包所在位置解压并复制/移动解压后的文件到所需的目标位置,例如:
|
||||
我们也可以在 Tar 包所在位置解压并复制/移动解压后的文件到所需的目标位置,例如:
|
||||
|
||||
$ tar -jxvf firefox-37.0.2.tar.bz2
|
||||
$ cp -R firefox/ /opt/
|
||||
@ -122,7 +122,7 @@ c. 解压 tar 包。
|
||||
|
||||
-C 选项提取文件到指定目录(这里是 /opt/)。
|
||||
|
||||
这并不是关于选项(-C)的问题,而是习惯的问题。养成使用带 -C 选项 tar 命令的习惯。这会使你的工作更加轻松。从现在开始不要再移动归档文件或复制/移动解压后的文件了,在 Downloads 文件夹保存 tar 包并解压到你想要的任何地方吧。
|
||||
这并不是关于选项(-C)的问题,**而是习惯的问题**。养成使用带 -C 选项 tar 命令的习惯。这会使你的工作更加轻松。从现在开始不要再移动归档文件或复制/移动解压后的文件了,在 Downloads 文件夹保存 tar 包并解压到你想要的任何地方吧。
|
||||
|
||||
### 4. 常规方式我们怎样杀掉一个进程? ###
|
||||
|
||||
@ -132,7 +132,7 @@ c. 解压 tar 包。
|
||||
|
||||
#### 输出样例 ####
|
||||
|
||||
1006 ? 00:00:00 apache2
|
||||
1006 ? 00:00:00 apache2
|
||||
2702 ? 00:00:00 apache2
|
||||
2703 ? 00:00:00 apache2
|
||||
2704 ? 00:00:00 apache2
|
||||
@ -188,7 +188,7 @@ c. 解压 tar 包。
|
||||
|
||||
它没有输出任何东西并返回到窗口意味着没有名称中包含 apache2 的进程在运行。
|
||||
|
||||
这就是我要说的所有东西。上面讨论的点肯定远远不够,但也肯定对你有所帮助。我们不仅仅是介绍教程使你学到一些新的东西,更重要的是想告诉你 ‘在同样的情况下如何变得更有效率’。在下面的评论框中告诉我们你的反馈吧。保持联系,继续评论。
|
||||
这就是我要说的所有东西。上面讨论的点肯定远远不够,但也肯定对你有所帮助。我们不仅仅是介绍教程使你学到一些新的东西,更重要的是想告诉你 ‘**在同样的情况下如何变得更有效率**’。在下面的评论框中告诉我们你的反馈吧。保持联系,继续评论。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -196,7 +196,7 @@ via: http://www.tecmint.com/mkdir-tar-and-kill-commands-in-linux/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,149 @@
|
||||
使用这些去重加密工具来备份你的数据
|
||||
================================================================================
|
||||
|
||||
无论是体积还是价值,数据都在不断增长。快速而可靠地备份和恢复数据正变得越来越重要。社会已经适应了技术的广泛使用,并懂得了如何依靠电脑和移动设备,但很少有人能够面对丢失重要数据的现实。在遭受数据损失的公司中,30% 的公司将在一年内损失一半市值,70% 的公司将在五年内停止交易。这更加凸显了数据的价值。
|
||||
|
||||
随着数据在体积上的增长,提高存储利用率尤为重要。从计算机的角度说,数据去重是一种特别的数据压缩技术,因为它可以消除重复数据的拷贝,所以这个技术可以提高存储利用率。
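为了直观说明“数据去重”的基本思路,这里给出一个极简的 Python 示意(并非下文任何一款工具的实现;定长分块与块大小都是为演示而做的假设,真实工具多采用可变块大小):文件被切成块,以 SHA-256 作为块的指纹,相同内容的块只存储一份:

    # 去重示意:只存储指纹未出现过的块,重复块仅记录引用
    import hashlib

    def dedup_file(path, store, chunk_size=4096):
        """store: 指纹 -> 块内容;返回重建该文件所需的指纹序列。"""
        refs = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                digest = hashlib.sha256(chunk).hexdigest()
                if digest not in store:        # 只有新块才真正占用存储空间
                    store[digest] = chunk
                refs.append(digest)
        return refs

这样,备份同一文件的多个版本时,未改动的块不会重复占用空间,这正是这类工具适合每日备份的原因。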
|
||||
|
||||
数据并不仅仅只有其创造者感兴趣。政府、竞争者、犯罪分子、偷窥者可能都热衷于获取你的数据。他们或许想偷取你的数据,从你那里进行敲诈,或看你正在做什么。因此,对于保护你的数据,加密是非常必要的。
|
||||
|
||||
所以,解决方法是我们需要一个可以去重的加密备份软件。
|
||||
|
||||
对于所有的用户而言,做文件备份是一件非常必要的事,至今为止许多用户还没有采取足够的措施来保护他们的数据。一台电脑不论是工作在一个合作的环境中,还是供私人使用,机器的硬盘可能在没有任何警告的情况下挂掉。另外,有些数据丢失可能是人为的错误所引发的。如果没有做经常性的备份,数据也可能不可避免地丢失,即使请了专业的数据恢复公司来帮忙。
|
||||
|
||||
这篇文章将对 6 个去重加密备份工具进行简要的介绍。
|
||||
|
||||
### Attic ###
|
||||
|
||||
Attic 是一个可用于去重、加密,验证完整性的压缩备份程序,它是用 Python 写的。Attic 的主要目标是提供一个高效且安全的方式来备份数据。Attic 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。
|
||||
|
||||
其特点有:
|
||||
|
||||
- 易用
|
||||
- 可高效利用存储空间,通过检查冗余的数据,对可变块大小的去重可以减少存储所用的空间
|
||||
- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来校验
|
||||
- 可通过 SSH 进行异地备份
|
||||
- 备份可作为文件系统来挂载
|
||||
|
||||
网站: [attic-backup.org][1]
|
||||
|
||||
|
||||
### Borg ###
|
||||
|
||||
Borg 是 Attic 的一个分支。它是一个安全的开源备份程序,被设计用来高效地存储那些新的或修改过的数据。
|
||||
|
||||
Borg 的主要目标是提供一个高效、安全的方式来存储数据。Borg 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。认证加密使得它适用于存储在不完全可信的位置。
|
||||
|
||||
Borg 由 Python 写成。Borg 于 2015 年 5 月被创造出来,是为了解决让新的代码或重大的改变带入 Attic 的困难。
|
||||
|
||||
其特点包括:
|
||||
|
||||
- 易用
|
||||
- 可高效利用存储空间,通过检查冗余的数据,对可变块大小的去重被用来减少存储所用的空间
|
||||
- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来校验
|
||||
- 可通过 SSH 进行异地备份
|
||||
- 备份可作为文件系统来挂载
|
||||
|
||||
Borg 与 Attic 不兼容。
|
||||
|
||||
网站: [borgbackup.github.io/borgbackup][2]
|
||||
|
||||
### Obnam ###
|
||||
|
||||
Obnam (OBligatory NAMe) 是一个易用、安全的基于 Python 的备份程序。备份可被存储在本地硬盘或通过 SSH SFTP 协议存储到网上。若使用了备份服务器,它并不需要任何特殊的软件,只需要使用 SSH 即可。
|
||||
|
||||
Obnam 通过将数据分成数据块,并单独存储它们来达到去重的目的,每次通过增量备份来生成备份,每次备份的生成就像是一次新的快照,但事实上是真正的增量备份。Obnam 由 Lars Wirzenius 开发。
|
||||
|
||||
其特点有:
|
||||
|
||||
- 易用
|
||||
- 快照备份
|
||||
- 数据去重,跨文件,然后生成备份
|
||||
- 可使用 GnuPG 来加密备份
|
||||
- 向一个单独的仓库中备份多个客户端的数据
|
||||
- 备份检查点(每 100MB 或其他设定容量创建一个保存点)
|
||||
- 包含多个选项来调整性能,包括调整 lru-size 或 upload-queue-size
|
||||
- 支持 MD5 校验算法来识别重复的数据块
|
||||
- 通过 SFTP 将备份存储到一个服务器上
|
||||
- 同时支持 push(即在客户端上运行) 和 pull(即在服务器上运行)
|
||||
|
||||
网站: [obnam.org][3]
|
||||
|
||||
### Duplicity ###
|
||||
|
||||
Duplicity 以 tar 文件格式增量备份文件和目录,并使用 GnuPG 来进行加密,同时将它们上传到远程(或本地)的文件服务器上。它可以使用 ssh/scp、本地文件获取、rsync、 ftp 和 Amazon S3 等来传递数据。
|
||||
|
||||
因为 duplicity 使用了 librsync,增量存档可以高效地利用存储空间,且只记录自上次备份以来改变的那部分文件。由于该软件使用 GnuPG 来加密或对这些归档文件进行签名,这使得它们免于服务器的监视或修改。
|
||||
|
||||
当前 duplicity 支持备份删除的文件,全部的 unix 权限,目录,符号链接, fifo 等。
|
||||
|
||||
duplicity 软件包还包含有 rdiffdir 工具。 Rdiffdir 是 librsync 的 rdiff 针对目录的扩展。它可以用来生成对目录的签名和差异,对普通文件也有效。
|
||||
|
||||
其特点有:
|
||||
|
||||
- 使用简单
|
||||
- 对归档进行加密和签名(使用 GnuPG)
|
||||
- 高效使用带宽和存储空间,使用 rsync 的算法
|
||||
- 标准的文件格式
|
||||
- 可选择多种远程协议
|
||||
- 本地存储
|
||||
- scp/ssh
|
||||
- ftp
|
||||
- rsync
|
||||
- HSI
|
||||
- WebDAV
|
||||
- Amazon S3
|
||||
|
||||
网站: [duplicity.nongnu.org][4]
|
||||
|
||||
### ZBackup ###
|
||||
|
||||
ZBackup 是一个通用的全局去重备份工具。
|
||||
|
||||
其特点包括:
|
||||
|
||||
- 对存储数据并行进行 LZMA 或 LZO 压缩,在一个仓库中,你还可以混合使用 LZMA 和 LZO
|
||||
- 内置对存储数据的 AES 加密
|
||||
- 能够删除旧的备份数据
|
||||
- 可以使用 64 位的滚动哈希算法,使得文件冲突的数量几乎为零
|
||||
- 仓库中存储的文件是不可修改的,已备份的文件不会被修改。
|
||||
- 用 C++ 写成,只需少量的库文件依赖
|
||||
- 在生产环境中可以安全使用
|
||||
- 可以在不同仓库中进行数据交换而不必再进行压缩
|
||||
- 使用 64 位改进型 Rabin-Karp 滚动哈希算法
|
||||
|
||||
网站: [zbackup.org][5]
|
||||
|
||||
|
||||
### bup ###
|
||||
|
||||
bup 是一个用 Python 写的备份程序,其名称是 "backup" 的缩写。基于 git packfile 文件格式, bup 提供了一个高效的方式来备份一个系统,提供快速的增量备份和全局去重(在文件之间或文件内部,甚至包括虚拟机镜像)。
|
||||
|
||||
bup 在 LGPL 版本 2 协议下发行。
|
||||
|
||||
其特点包括:
|
||||
|
||||
- 全局去重 (在文件之间或文件内部,甚至包括虚拟机镜像)
|
||||
- 使用一个滚动的校验和算法(类似于 rsync)来将大文件分为多个数据块(见列表后的简要示意)
|
||||
- 使用来自 git 的 packfile 文件格式
|
||||
- 直接写入 packfile 文件,以此提供快速的增量备份
|
||||
- 可以使用 "par2" 冗余来恢复冲突的备份
|
||||
- 可以作为一个 FUSE 文件系统来挂载你的 bup 仓库
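上面列表中提到的“滚动校验和”的核心想法,可以用下面这个极简的 Python 示意来理解(这不是 bup 的实际实现;为简化,只演示类似 rsync 弱校验和中求和分量的 O(1) 滑动更新,窗口大小为假设值):

    # 简化的滚动校验和示意:窗口每滑动一个字节,只需 O(1) 更新
    def rolling_checksums(data, window=16):
        assert len(data) >= window
        a = sum(data[:window])              # 初始窗口内的字节和
        sums = [a]
        for i in range(window, len(data)):
            a += data[i] - data[i - window] # 加入新字节,移除最旧字节
            sums.append(a)
        return sums

    # 示例:对 bytes 数据计算所有窗口的校验和
    print(rolling_checksums(b"the quick brown fox jumps")[:5])

正是这种廉价的滑动更新,让工具可以在任意位置寻找“块边界”,从而把大文件切成内容稳定的数据块。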
|
||||
|
||||
网站: [bup.github.io][6]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxlinks.com/article/20150628060000607/BackupTools.html
|
||||
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://attic-backup.org/
|
||||
[2]:https://borgbackup.github.io/borgbackup/
|
||||
[3]:http://obnam.org/
|
||||
[4]:http://duplicity.nongnu.org/
|
||||
[5]:http://zbackup.org/
|
||||
[6]:https://bup.github.io/
|
@ -0,0 +1,69 @@
|
||||
原子(Atom)代码编辑器的视频短片介绍
|
||||
================================================================================
|
||||

|
||||
|
||||
[Atom 1.0][1] 时代来临。作为[最好的开源代码编辑器][2]之一,Atom 已公开使用快一年了,近段时间第一个稳定版本的发布引起了广大用户的关注。这个托管在 [Github][4] 上、口号为“为 21 世纪破解文本编辑器”的项目,已经被下载了 150 万余次,积累了 35 万活跃用户。
|
||||
|
||||
### 这是个漫长的过程 ###
|
||||
|
||||
滴水穿石,非一日之功,Atom 同样经历了一个漫长的过程。从 2008 年首次提出概念,到这个月第一个稳定版本的发布,主创人员和全球各地的贡献者这几年来一直致力于 Atom 核心的开发。我们通过下面这张图来了解一下 Atom 的发展历程:
|
||||
|
||||

|
||||
|
||||
*图片来源:Atom*
|
||||
|
||||
### 回到未来 ###
|
||||
|
||||
Atom 1.0 通过流行的视频发布方式,展示了这款编辑器的潜能。这个视频就像 70 年代的科幻连续剧一样,今天你将会看到一个极其酷炫的视频:
|
||||
|
||||
(注:以下为 YouTube 嵌入视频。)
|
||||
<iframe width="640" height="390" frameborder="0" allowfullscreen="true" src="http://www.youtube.com/embed/Y7aEiVwBAdk?version=3&rel=1&fs=1&showsearch=0&showinfo=1&iv_load_policy=1&wmode=transparent" type="text/html" class="youtube-player"></iframe>
|
||||
|
||||
### 原子编辑器特点 ###
|
||||
|
||||
- 跨平台编辑
|
||||
- 实现包管理
|
||||
- 智能化、自动化
|
||||
- 文件系统视图
|
||||
- 多窗操作
|
||||
- 支持查找更换
|
||||
- 高度个性化
|
||||
- 界面更新颖
|
||||
|
||||
### 获取 Atom 1.0 ###
|
||||
|
||||
Atom 1.0 支持Linux,Windows和Mac OS X。对于基于Debian的Linux,例如Ubuntu和Linux Mint,Atom提供了deb包。对于Fedora,同样有rpm包。如果你愿意,你可以下载源代码。通过下面的链接下载最新的版本。
|
||||
|
||||
- [Atom .deb][5]
|
||||
- [Atom .rpm][6]
|
||||
- [Atom Source Code][7]
|
||||
|
||||
如果你愿意,你可以[通过PPA在Ubuntu上安装Atom][8]。不过 PPA 并不是官方提供的方案。
|
||||
|
||||
(注:原文此处是一个投票调查。)
|
||||
|
||||
#### 你对Atom感兴趣吗? ####
|
||||
|
||||
- 噢,当然!这是程序员的福音。
|
||||
- 我并不这样认为。我见过更好的编辑器。
|
||||
- 并不关心,我的默认编辑器就能胜任我的工作。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/atom-stable-released/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[sevenot](https://github.com/sevenot)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://blog.atom.io/2015/06/25/atom-1-0.html
|
||||
[2]:http://itsfoss.com/best-modern-open-source-code-editors-for-linux/
|
||||
[3]:https://atom.io/
|
||||
[4]:https://github.com/
|
||||
[5]:https://atom.io/download/deb
|
||||
[6]:https://atom.io/download/rpm
|
||||
[7]:https://github.com/atom/atom/blob/master/docs/build-instructions/linux.md
|
||||
[8]:http://itsfoss.com/install-atom-text-editor-ubuntu-1404-linux-mint-17/
|
356
published/20150706 PHP Security.md
Normal file
@ -0,0 +1,356 @@
|
||||
PHP 安全编程建议
|
||||
================================================================================
|
||||

|
||||
|
||||
### 简介 ###
|
||||
|
||||
要提供互联网服务,当你在开发代码的时候必须时刻保持安全意识。可能大部分 PHP 脚本对安全问题都不在意,这很大程度上是因为有大量的*无经验程序员*在使用这门语言。但是,你没有理由仅凭对代码漏洞的大致猜测,就采取不一致的安全策略。当你在服务器上放任何涉及到钱的东西时,就有可能会有人尝试破解它。创建一个论坛程序或者任何形式的购物车,被攻击的可能性就上升到了无穷大。
|
||||
|
||||
### 背景 ###
|
||||
|
||||
为了确保你的 web 内容安全,这里有一些常规的安全准则:
|
||||
|
||||
#### 别相信表单 ####
|
||||
|
||||
攻击表单很简单。通过使用一个简单的 JavaScript 技巧,你可以限制你的表单只允许在评分域中填写 1 到 5 的数字。如果有人关闭了他们浏览器的 JavaScript 功能或者提交自定义的表单数据,你客户端的验证就失败了。
|
||||
|
||||
用户主要通过表单参数和你的脚本交互,因此他们是最大的安全风险。你应该学到什么呢?在 PHP 脚本中,总是要验证传递给任何 PHP 脚本的数据。在本文中,我们向你演示了如何分析和防范跨站脚本(XSS)攻击,它可能会劫持用户凭据(甚至更严重)。你也会看到如何防止会玷污或毁坏你数据的 MySQL 注入攻击。
|
||||
|
||||
#### 别相信用户 ####
|
||||
|
||||
假定你网站获取的每一份数据都充满了有害的代码。清理每一部分,即便你相信没有人会尝试攻击你的站点。
|
||||
|
||||
#### 关闭全局变量 ####
|
||||
|
||||
你可能会有的最大安全漏洞是启用了 register\_globals 配置参数。幸运的是,PHP 4.2 及以后版本默认关闭了这个配置。如果打开了 **register\_globals**,你可以在你的 php.ini 文件中通过改变 register\_globals 变量为 Off 关闭该功能:
|
||||
|
||||
register_globals = Off
|
||||
|
||||
新手程序员觉得注册全局变量很方便,但他们不会意识到这个设置有多么危险。一个启用了 register\_globals 的服务器会自动把任何表单参数赋值给同名的全局变量。为了了解它如何工作以及为什么有危险,让我们来看一个例子。
|
||||
|
||||
假设你有一个称为 process.php 的脚本,它会向你的数据库插入表单数据。初始的表单像下面这样:
|
||||
|
||||
<input name="username" type="text" size="15" maxlength="64">
|
||||
|
||||
运行 process.php 的时候,启用了注册全局变量的 PHP 会将该参数赋值到 $username 变量。这会比通过 **$\_POST['username']** 或 **$\_GET['username']** 访问它节省击键次数。不幸的是,这也会给你留下安全问题,因为 PHP 会设置该变量的值为通过 GET 或 POST 的参数发送到脚本的任何值,如果你没有显式地初始化该变量并且你不希望任何人去操作它,这就会有一个大问题。
|
||||
|
||||
看下面的脚本,假如 $authorized 变量的值为 true,它会给用户显示通过验证的数据。正常情况下,只有当用户正确通过了这个假想的 authenticated\_user() 函数验证,$authorized 变量的值才会被设置为真。但是如果你启用了 **register\_globals**,任何人都可以发送一个 GET 参数,例如 authorized=1 去覆盖它:
|
||||
|
||||
<?php
|
||||
// Define $authorized = true only if user is authenticated
|
||||
if (authenticated_user()) {
|
||||
$authorized = true;
|
||||
}
|
||||
?>
|
||||
|
||||
这个故事的寓意是,你应该从预定义的服务器变量中获取表单数据。所有通过 post 表单传递到你 web 页面的数据都会自动保存到一个称为 **$\_POST** 的大数组中,所有的 GET 数据都保存在 **$\_GET** 大数组中。文件上传信息保存在一个称为 **$\_FILES** 的特殊数组中。另外,还有一个称为 **$\_REQUEST** 的复合变量。
|
||||
|
||||
要从一个 POST 方法表单中访问 username 字段,可以使用 **$\_POST['username']**。如果 username 在 URL 中就使用 **$\_GET['username']**。如果你不确定值来自哪里,用 **$\_REQUEST['username']**。
|
||||
|
||||
<?php
|
||||
$post_value = $_POST['post_value'];
|
||||
$get_value = $_GET['get_value'];
|
||||
$some_variable = $_REQUEST['some_value'];
|
||||
?>
|
||||
|
||||
$\_REQUEST 是 $\_GET、$\_POST、和 $\_COOKIE 数组的结合。如果你有两个或多个值有相同的参数名称,注意 PHP 会使用哪个。默认的顺序是 cookie、POST、然后是 GET。
|
||||
|
||||
#### 推荐安全配置选项 ####
|
||||
|
||||
这里有几个会影响安全功能的 PHP 配置设置。下面是一些显然应该用于生产服务器的(列表之后附有一条核对当前配置的命令):
|
||||
|
||||
- **register\_globals** 设置为 off
|
||||
- **safe\_mode** 设置为 off
|
||||
- **error\_reporting** 设置为 off。如果出现错误了,这会向用户浏览器发送可见的错误报告信息。对于生产服务器,使用错误日志代替。开发服务器如果在防火墙后面就可以启用错误日志。(LCTT 译注:此处据原文逻辑和常识,应该是“开发服务器如果在防火墙后面就可以启用错误报告,即 on。”)
|
||||
- 停用这些函数:system()、exec()、passthru()、shell\_exec()、proc\_open()、和 popen()。
|
||||
- **open\_basedir** 为 /tmp(以便保存会话信息)目录和 web 根目录,以便脚本不能访问这些选定区域外的文件。
|
||||
- **expose\_php** 设置为 off。该功能会向 Apache 头添加包含版本号的 PHP 签名。
|
||||
- **allow\_url\_fopen** 设置为 off。如果你能够注意你代码中访问文件的方式-也就是你验证所有输入参数,这并不严格需要。
|
||||
- **allow\_url\_include** 设置为 off。对于任何人来说,实在没有明智的理由会想要访问通过 HTTP 包含的文件。
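
如果想快速核对当前环境里这些选项的实际取值,可以借助 php -i(即命令行版的 phpinfo())配合 grep,下面只是一个示意:

    $ php -i | grep -E 'register_globals|expose_php|allow_url_fopen|allow_url_include'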
|
||||
|
||||
一般来说,如果你发现想要使用这些功能的代码,你就不应该相信它。尤其要小心会使用类似 system() 函数的代码-它几乎肯定有缺陷。
|
||||
|
||||
启用了这些设置后,让我们来看看一些特定的攻击以及能帮助你保护你服务器的方法。
|
||||
|
||||
### SQL 注入攻击 ###
|
||||
|
||||
由于 PHP 传递到 MySQL 数据库的查询语句是用强大的 SQL 编程语言编写的,就有了某些人通过在 web 查询参数中使用 MySQL 语句尝试 SQL 注入攻击的风险。通过在参数中插入有害的 SQL 代码片段,攻击者会尝试进入(或破坏)你的服务器。
|
||||
|
||||
假如说你有一个最终会放入变量 $product 的表单参数,你使用了类似下面的 SQL 语句:
|
||||
|
||||
$sql = "select * from pinfo where product = '$product'";
|
||||
|
||||
如果参数是直接从表单中获得的,应该使用 PHP 自带的数据库特定转义函数,类似:
|
||||
|
||||
$sql = 'Select * from pinfo where product = '"'
|
||||
mysql_real_escape_string($product) . '"';
|
||||
|
||||
如果不这样做的话,有人也许会把下面的代码段放到表单参数中:
|
||||
|
||||
39'; DROP pinfo; SELECT 'FOO
|
||||
|
||||
那么 $sql 的结果就是:
|
||||
|
||||
select * from pinfo where product = '39'; DROP pinfo; SELECT 'FOO'
|
||||
|
||||
由于分号是 MySQL 的语句分隔符,数据库会运行下面三条语句:
|
||||
|
||||
select * from pinfo where product = '39'
|
||||
DROP pinfo
|
||||
SELECT 'FOO'
|
||||
|
||||
好了,你丢失了你的表。
|
||||
|
||||
注意实际上 PHP 和 MySQL 不会运行这种特殊语法,因为 **mysql\_query()** 函数只允许每个请求处理一个语句。但是,一个子查询仍然会生效。
|
||||
|
||||
要防止 SQL 注入攻击,做这两件事(列表之后还有一个转义效果的命令行演示):
|
||||
|
||||
- 总是验证所有参数。例如,如果需要一个数字,就要确保它是一个数字。
|
||||
- 总是对数据使用 mysql\_real\_escape\_string() 函数转义数据中的任何引号和双引号。
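
由于 mysql\_real\_escape\_string() 需要一个有效的数据库连接才能调用,这里改用行为类似的 addslashes() 在命令行演示转义的效果(注意:这只是示意,实际环境请使用数据库特定的转义函数):

    $ php -r 'echo addslashes("39\x27; DROP pinfo; SELECT \x27FOO"), PHP_EOL;'
    39\'; DROP pinfo; SELECT \'FOO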
|
||||
|
||||
**注意:要自动转义任何表单数据,可以启用魔术引号(Magic Quotes)。**
|
||||
|
||||
一些 MySQL 破坏可以通过限制 MySQL 用户权限避免。任何 MySQL 账户可以限制为只允许对选定的表进行特定类型的查询。例如,你可以创建只能选择行的 MySQL 用户。但是,这对于动态数据并不十分有用,另外,如果你有敏感的用户信息,可能某些人能访问其中一些数据,但你并不希望如此。例如,一个访问账户数据的用户可能会尝试注入访问另一个人的账户号码的代码,而不是为当前会话指定的号码。
|
||||
|
||||
### 防止基本的 XSS 攻击 ###
|
||||
|
||||
XSS 表示跨站脚本。不像大部分攻击,该漏洞发生在客户端。XSS 最常见的基本形式是在用户提交的内容中放入 JavaScript 以便偷取用户 cookie 中的数据。由于大部分站点使用 cookie 和 session 验证访客,偷取的数据可用于冒充该用户。如果是一个普通的用户账户,麻烦已经不小;如果是管理员账户,那更是彻底的灾难。如果你不在站点中使用 cookie 和 session ID,你的用户就不容易被攻击,但你仍然应该明白这种攻击是如何工作的。
|
||||
|
||||
不像 MySQL 注入攻击,XSS 攻击很难预防。Yahoo、eBay、Apple、以及 Microsoft 都曾经受 XSS 影响。尽管攻击不包含 PHP,但你可以使用 PHP 来剥离用户数据以防止攻击。为了防止 XSS 攻击,你应该限制和过滤用户提交给你站点的数据。正是因为这个原因,大部分在线公告板都不允许在提交的数据中使用 HTML 标签,而是用自定义的标签格式代替,例如 **[b]** 和 **[linkto]**。
|
||||
|
||||
让我们来看一个如何防止这类攻击的简单脚本。对于更完善的解决办法,可以使用 SafeHTML,本文的后面部分会讨论到。
|
||||
|
||||
function transform_HTML($string, $length = null) {
|
||||
// Helps prevent XSS attacks
|
||||
// Remove dead space.
|
||||
$string = trim($string);
|
||||
// Prevent potential Unicode codec problems.
|
||||
$string = utf8_decode($string);
|
||||
// HTMLize HTML-specific characters.
|
||||
$string = htmlentities($string, ENT_NOQUOTES);
|
||||
$string = str_replace("#", "#", $string);
|
||||
$string = str_replace("%", "%", $string);
|
||||
$length = intval($length);
|
||||
if ($length > 0) {
|
||||
$string = substr($string, 0, $length);
|
||||
}
|
||||
return $string;
|
||||
}
|
||||
|
||||
这个函数将 HTML 特定的字符转换为 HTML 字面字符。一个浏览器对任何通过这个脚本的 HTML 以非标记的文本呈现。例如,考虑下面的 HTML 字符串:
|
||||
|
||||
<STRONG>Bold Text</STRONG>
|
||||
|
||||
一般情况下,HTML 会显示为:**Bold Text**
|
||||
|
||||
但是,通过 **transform\_HTML()** 处理后,它会像原始输入的文本那样呈现。原因是经过处理的字符串中的标签字符被转换成了 HTML 实体。**transform\_HTML()** 的结果字符串的纯文本看起来像下面这样:
|
||||
|
||||
&lt;STRONG&gt;Bold Text&lt;/STRONG&gt;
|
||||
|
||||
该函数的实质是 htmlentities() 函数调用,它会将 <、>、和 & 转换为 **&lt;**、**&gt;** 和 **&amp;**。尽管这会处理大部分的普通攻击,但有经验的 XSS 攻击者有另一种把戏:用十六进制或 UTF-8 编码恶意脚本,而不是采用普通的 ASCII 文本,从而希望能绕过你的过滤器。他们可以在 URL 的 GET 变量中发送代码,告诉浏览器,"这是十六进制代码,你能帮我运行吗?" 一个十六进制例子看起来像这样:
|
||||
|
||||
<a href="http://host/a.php?variable=%22%3e %3c%53%43%52%49%50%54%3e%44%6f%73%6f%6d%65%74%68%69%6e%67%6d%61%6c%69%63%69%6f%75%73%3c%2f%53%43%52%49%50%54%3e">
|
||||
|
||||
浏览器渲染这个信息的时候,结果就是:
|
||||
|
||||
<a href="http://host/a.php?variable="> <SCRIPT>Dosomethingmalicious</SCRIPT>
|
||||
|
||||
为了防止这种情况,transform\_HTML() 采用额外的步骤把 # 和 % 符号转换为它们的实体,从而避免十六进制攻击,并转换 UTF-8 编码的数据。
|
||||
|
||||
最后,为了防止某些人用很长的输入超载字符串从而导致某些东西崩溃,你可以添加一个可选的 $length 参数来截取你指定最大长度的字符串。
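
在继续之前,可以在命令行里快速验证 transform\_HTML() 所依赖的 htmlentities() 核心调用的效果(假设系统里装有 PHP CLI):

    $ php -r 'echo htmlentities("<STRONG>Bold Text</STRONG>", ENT_NOQUOTES), PHP_EOL;'
    &lt;STRONG&gt;Bold Text&lt;/STRONG&gt;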
|
||||
|
||||
### 使用 SafeHTML ###
|
||||
|
||||
之前脚本的问题比较简单,它不允许任何类型的用户标记。不幸的是,有上百种方法能让 JavaScript 绕过用户的过滤器,除非从用户输入中剥离全部 HTML,否则没有办法完全防止这种情况。
|
||||
|
||||
当前,没有任何一个脚本能保证无法被破解,尽管有一些确实比大部分要好。有白名单和黑名单两种方法加固安全,白名单比较简单而且更加有效。
|
||||
|
||||
一个白名单解决方案是 PixelApes 的 SafeHTML 反跨站脚本解析器。
|
||||
|
||||
SafeHTML 能识别有效 HTML,能追踪并剥离任何危险标签。它用另一个称为 HTMLSax 的软件包进行解析。
|
||||
|
||||
按照下面步骤安装和使用 SafeHTML:
|
||||
|
||||
1. 到 [http://pixel-apes.com/safehtml/?page=safehtml][1] 下载最新版本的 SafeHTML。
|
||||
1. 把文件放到你服务器的类文件夹。该文件夹包括 SafeHTML 和 HTMLSax 功能所需的所有东西。
|
||||
1. 在脚本中 `include` SafeHTML 类文件(safehtml.php)。
|
||||
1. 创建一个名为 $safehtml 的新 SafeHTML 对象。
|
||||
1. 用 $safehtml->parse() 方法清理你的数据。
|
||||
|
||||
这是一个完整的例子:
|
||||
|
||||
<?php
|
||||
/* If you're storing the HTMLSax3.php in the /classes directory, along
|
||||
with the safehtml.php script, define XML_HTMLSAX3 as a null string. */
|
||||
define('XML_HTMLSAX3', '');
|
||||
// Include the class file.
|
||||
require_once('classes/safehtml.php');
|
||||
// Define some sample bad code.
|
||||
$data = "This data would raise an alert <script>alert('XSS Attack')</script>";
|
||||
// Create a safehtml object.
|
||||
$safehtml = new safehtml();
|
||||
// Parse and sanitize the data.
|
||||
$safe_data = $safehtml->parse($data);
|
||||
// Display result.
|
||||
echo 'The sanitized data is <br />' . $safe_data;
|
||||
?>
|
||||
|
||||
如果你想清理脚本中的任何其它数据,你不需要创建一个新的对象;在你的整个脚本中只需要使用 $safehtml->parse() 方法。
|
||||
|
||||
#### 什么可能会出现问题? ####
|
||||
|
||||
你可能犯的最大错误是假设这个类能完全避免 XSS 攻击。SafeHTML 是一个相当复杂的脚本,几乎能检查所有事情,但没有什么是能保证的。你仍然需要对你的站点做参数验证。例如,该类不能检查给定变量的长度以确保能适应数据库的字段。它也不检查缓冲溢出问题。
|
||||
|
||||
XSS 攻击者很有创造力,他们使用各种各样的方法来尝试达到他们的目标。可以阅读 RSnake 的 XSS 教程[http://ha.ckers.org/xss.html][2] ,看一下这里有多少种方法尝试使代码跳过过滤器。SafeHTML 项目有很好的程序员一直在尝试阻止 XSS 攻击,但无法保证某些人不会想起一些奇怪和新奇的方法来跳过过滤器。
|
||||
|
||||
**注意:XSS 攻击严重影响的一个例子 [http://namb.la/popular/tech.html][3],其中显示了如何一步一步创建一个让 MySpace 服务器过载的 JavaScript XSS 蠕虫。**
|
||||
|
||||
### 用单向哈希保护数据 ###
|
||||
|
||||
该脚本对输入的数据进行单向转换,换句话说,它能对某人的密码产生哈希签名,但不能解码获得原始密码。为什么你希望这样呢?应用程序会存储密码。一个管理员不需要知道用户的密码,事实上,只有用户知道他/她自己的密码是个好主意。系统(也仅有系统)应该能识别一个正确的密码;这是 Unix 多年来的密码安全模型。单向密码安全按照下面的方式工作:
|
||||
|
||||
1. 当一个用户或管理员创建或更改一个账户密码时,系统对密码进行哈希并保存结果。主机系统会丢弃明文密码。
|
||||
2. 当用户通过任何方式登录到系统时,再次对输入的密码进行哈希。
|
||||
3. 主机系统丢弃输入的明文密码。
|
||||
4. 当前新哈希的密码和之前保存的哈希相比较。
|
||||
5. 如果哈希的密码相匹配,系统就会授予访问权限。
|
||||
|
||||
主机系统完成这些并不需要知道原始密码;事实上,原始密码完全无所谓。一个副作用是,如果某人侵入系统并盗取了密码数据库,入侵者会获得很多哈希后的密码,但无法把它们反向转换为原始密码。当然,给足够时间、计算能力,以及弱用户密码,一个攻击者还是有可能采用字典攻击找出密码。因此,别轻易让人碰你的密码数据库,如果确实有人这样做了,让每个用户更改他们的密码。
|
||||
|
||||
#### 加密 Vs 哈希 ####
|
||||
|
||||
技术上来说,哈希过程并不是加密。哈希和加密是不同的,这有两个理由:
|
||||
|
||||
不像加密,哈希数据不能被解密。
|
||||
|
||||
是有可能(但非常罕见)两个不同的字符串会产生相同的哈希。并不能保证哈希是唯一的,因此别像数据库中的唯一键那样使用哈希。
|
||||
|
||||
function hash_ish($string) {
|
||||
return md5($string);
|
||||
}
|
||||
|
||||
上面的 md5() 函数基于 RSA 数据安全公司的消息摘要算法(即 MD5)返回一个由 32 个字符组成的十六进制串。然后你可以将那个 32 个字符的字符串插入到数据库中和另一个 md5 字符串相比较,或者直接用这 32 个字符。
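
可以在命令行里直观地验证一下,下面示例中 "dog" 的哈希值正是下文在线 MD5 字典一段里提到的那一个:

    $ php -r 'echo md5("dog"), PHP_EOL;'
    06d80eb0c50b49a509b49f2424e8c805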
|
||||
|
||||
#### 破解脚本 ####
|
||||
|
||||
几乎不可能解密 MD5 数据。或者说很难。但是,你仍然需要好的密码,因为用一整个字典生成哈希数据库仍然很简单。有一些在线 MD5 字典,当你输入 **06d80eb0c50b49a509b49f2424e8c805** 后会得到结果 “dog”。因此,尽管技术上 MD5 不能被解密,这里仍然有漏洞,如果某人获得了你的密码数据库,你可以肯定他们肯定会使用 MD5 字典破译。因此,当你创建基于密码的系统的时候尤其要注意密码长度(最小 6 个字符,8 个或许会更好)和包括字母和数字。并确保这个密码不在字典中。
|
||||
|
||||
### 用 Mcrypt 加密数据 ###
|
||||
|
||||
如果你不需要以可阅读形式查看密码,采用 MD5 就足够了。不幸的是,并不总是有这样的选择。如果你承诺以加密形式存储某人的信用卡信息,你可能需要在之后的某个时候再解密。
|
||||
|
||||
最早的一个解决方案是 Mcrypt 模块,这是一个用于允许 PHP 高速加密的插件。Mcrypt 库提供了超过 30 种用于加密的计算方法,并且提供口令确保只有你(或者你的用户)可以解密数据。
|
||||
|
||||
让我们来看看使用方法。下面的脚本包含了使用 Mcrypt 加密和解密数据的函数:
|
||||
|
||||
<?php
|
||||
$data = "Stuff you want encrypted";
|
||||
$key = "Secret passphrase used to encrypt your data";
|
||||
$cipher = "MCRYPT_SERPENT_256";
|
||||
$mode = "MCRYPT_MODE_CBC";
|
||||
function encrypt($data, $key, $cipher, $mode) {
|
||||
// Encrypt data
|
||||
return (string)
|
||||
base64_encode
|
||||
(
|
||||
mcrypt_encrypt
|
||||
(
|
||||
$cipher,
|
||||
substr(md5($key),0,mcrypt_get_key_size($cipher, $mode)),
|
||||
$data,
|
||||
$mode,
|
||||
substr(md5($key),0,mcrypt_get_block_size($cipher, $mode))
|
||||
)
|
||||
);
|
||||
}
|
||||
function decrypt($data, $key, $cipher, $mode) {
|
||||
// Decrypt data
|
||||
return (string)
|
||||
mcrypt_decrypt
|
||||
(
|
||||
$cipher,
|
||||
substr(md5($key),0,mcrypt_get_key_size($cipher, $mode)),
|
||||
base64_decode($data),
|
||||
$mode,
|
||||
substr(md5($key),0,mcrypt_get_block_size($cipher, $mode))
|
||||
);
|
||||
}
|
||||
?>
|
||||
|
||||
**mcrypt\_encrypt()** 函数需要几个信息:
|
||||
|
||||
- 需要加密的数据
|
||||
- 用于加密和解锁数据的口令,也称为键。
|
||||
- 用于加密数据的计算方法,也就是用于加密数据的算法。该脚本使用了 **MCRYPT\_SERPENT**(注意应传入 mcrypt 常量),但你可以从很多算法中选择,包括 **MCRYPT\_TWOFISH192**、**MCRYPT\_RC2**、**MCRYPT\_DES**、和 **MCRYPT\_LOKI97**。
|
||||
- 加密数据的模式。这里有几个你可以使用的模式,包括电子密码本(Electronic Codebook) 和加密反馈(Cipher Feedback)。该脚本使用 **MCRYPT\_MODE\_CBC** 密码块链接。
|
||||
- 一个 **初始化向量**-也称为 IV 或者种子,用于为加密算法设置种子的额外二进制位。也就是使算法更难于破解的额外信息。
|
||||
- 键和 IV 字符串的长度,这可能随着加密和块而不同。使用 **mcrypt\_get\_key\_size()** 和 **mcrypt\_get\_block\_size()** 函数获取合适的长度;然后用 **substr()** 函数将键的值截取为合适的长度。(如果键的长度比要求的短,别担心,Mcrypt 会用 0 填充。)
|
||||
|
||||
如果有人窃取了你的数据和口令,他们也只能一个个尝试加密算法,直到找到正确的那一个。此外,我们在使用密钥之前先对它做一次 **md5()** 来增加安全性,这样即使入侵者同时获取了数据和口令,也不能轻易得到想要的东西。
|
||||
|
||||
入侵者同时需要函数,数据和口令,如果真是如此,他们可能获得了对你服务器的完整访问,你只能大清洗了。
|
||||
|
||||
这里还有一个数据存储格式的小问题。Mcrypt 以难懂的二进制形式返回加密后的数据,这使得当你将其存储到 MySQL 字段的时候可能出现可怕错误。因此,我们使用 **base64\_encode()** 和 **base64\_decode()** 函数将其转换为与 SQL 兼容的字符格式,便于存储和检索。
|
||||
|
||||
#### 破解脚本 ####
|
||||
|
||||
除了实验多种加密方法,你还可以在脚本中添加一些便利。例如,不用每次都提供键和模式,而是在包含的文件中声明为全局常量。
|
||||
|
||||
### 生成随机密码 ###
|
||||
|
||||
随机(但难以猜测)字符串在用户安全中很重要。例如,如果某人丢失了密码并且你使用 MD5 哈希,你不可能,也不希望查找回来。而是应该生成一个安全的随机密码并发送给用户。随机数的另一个应用是创建激活链接,以便访问你站点的服务。下面是创建密码的一个函数:
|
||||
|
||||
<?php
|
||||
function make_password($num_chars) {
|
||||
if ((is_numeric($num_chars)) &&
|
||||
($num_chars > 0) &&
|
||||
(! is_null($num_chars))) {
|
||||
$password = '';
|
||||
$accepted_chars = 'abcdefghijklmnopqrstuvwxyz1234567890';
|
||||
// Seed the generator if necessary.
|
||||
srand(((int)((double)microtime()*1000003)) );
|
||||
for ($i=0; $i<$num_chars; $i++) { // 注意:这里用 < 而不是 <=,否则会多生成一个字符
|
||||
$random_number = rand(0, (strlen($accepted_chars) -1));
|
||||
$password .= $accepted_chars[$random_number] ;
|
||||
}
|
||||
return $password;
|
||||
}
|
||||
}
|
||||
?>
|
||||
|
||||
#### 使用脚本 ####
|
||||
|
||||
**make_password()** 函数返回一个字符串,因此你需要做的就是提供字符串的长度作为参数:
|
||||
|
||||
<?php
|
||||
$fifteen_character_password = make_password(15);
|
||||
?>
|
||||
|
||||
函数按照下面步骤工作:
|
||||
|
||||
- 函数确保 **$num\_chars** 是非零的正整数。
|
||||
- 函数初始化 **$accepted\_chars** 变量为密码可能包含的字符列表。该脚本使用所有小写字母和数字 0 到 9,但你可以使用你喜欢的任何字符集合。(LCTT 译注:有时候为了便于肉眼识别,你可以将其中的 0 和 O,1 和 l 之类的都去掉。)
|
||||
- 随机数生成器需要一个种子,从而获得一系列伪随机值(PHP 4.2 及之后版本中并不需要手动播种,会自动完成)。
|
||||
- 函数循环 **$num\_chars** 次,每次迭代生成密码中的一个字符。
|
||||
- 对于每个新字符,脚本查看 **$accepted_chars** 的长度,选择 0 和长度之间的一个数字,然后添加 **$accepted\_chars** 中该数字为索引值的字符到 $password。
|
||||
- 循环结束后,函数返回 **$password**。
|
||||
|
||||
### 许可证 ###
|
||||
|
||||
本篇文章,包括相关的源代码和文件,都是在 [The Code Project Open License (CPOL)][4] 协议下发布。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.codeproject.com/Articles/363897/PHP-Security
|
||||
|
||||
作者:[SamarRizvi][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.codeproject.com/script/Membership/View.aspx?mid=7483622
|
||||
[1]:http://pixel-apes.com/safehtml/?page=safehtml
|
||||
[2]:http://ha.ckers.org/xss.html
|
||||
[3]:http://namb.la/popular/tech.html
|
||||
[4]:http://www.codeproject.com/info/cpol10.aspx
|
@ -1,6 +1,7 @@
|
||||
Linux常见问题解答--如何修复"tar:由于前一个错误导致于失败状态中退出"("Exiting with failure status due to previous errors")
|
||||
Linux常见问题解答--如何修复"tar:由于前一个错误导致于失败状态中退出"
|
||||
================================================================================
|
||||
> **问题**: 当我想试着用tar命令来创建一个压缩文件时,总在执行过程中失败,并且抛出一个错误说明"tar:由于前一个错误导致于失败状态中退出"("Exiting with failure status due to previous errors"). 什么导致这个错误的发生,要如何解决?
|
||||
|
||||

|
||||
|
||||
如果当你执行tar命令时,遇到了下面的错误,那么最有可能的原因是:对于你想用tar命令打包的某个文件,你并不具备读取权限。
|
||||
@ -13,21 +14,20 @@ Linux常见问题解答--如何修复"tar:由于前一个错误导致于失败
|
||||
|
||||
$ tar cvzf backup.tgz my_program/ > /dev/null
|
||||
|
||||
然后你会看到tar输出的标准错误(stderr)信息。
|
||||
然后你会看到tar输出的标准错误(stderr)信息。(LCTT 译注:自然,不用 v 参数也可以。)
|
||||
|
||||
tar: my_program/src/lib/.conf.db.~lock~: Cannot open: Permission denied
|
||||
tar: Exiting with failure status due to previous errors
|
||||
|
||||
你可以从上面的例子中看到,引起错误的原因的确是“读权限不允许”(denied read permission.)
|
||||
要解决这个问题,只要简单地更改(或移除)问题文件的权限,然后重新执行tar命令即可。
|
||||
你可以从上面的例子中看到,引起错误的原因的确是“读权限不允许”(denied read permission.)要解决这个问题,只要简单地更改(或移除)问题文件的权限,然后重新执行tar命令即可。
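
下面是一个排查和修复的示意(文件路径取自上面的报错信息;find 的 -readable 测试需要 GNU findutils):

    # 找出打包目录下当前用户不可读的文件
    $ find my_program/ ! -readable
    # 为问题文件加上读取权限后重新打包
    $ chmod +r my_program/src/lib/.conf.db.~lock~
    $ tar cvzf backup.tgz my_program/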
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/tar-exiting-with-failure-status-due-to-previous-errors.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[XLCYun(袖里藏云)](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[XLCYun(袖里藏云)](https://github.com/XLCYun)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,110 @@
|
||||
Linux有问必答:如何在 Linux 中安装兄弟牌打印机
|
||||
================================================================================
|
||||
> **提问**: 我有一台兄弟牌HL-2270DW激光打印机,我想从我的Linux机器上打印文档。我该如何在我的电脑上安装合适的驱动并使用它?
|
||||
|
||||
兄弟牌以买得起的[紧凑型激光打印机][1]而闻名。你可以用低于200美元的价格得到高质量的WiFi/双工激光打印机,而且价格还在下降。最棒的是,它们还提供良好的Linux支持,因此你可以在Linux中下载并安装它们的打印机驱动。我在一年前买了台[HL-2270DW][2],我对它的性能和可靠性都很满意。
|
||||
|
||||
下面是如何在Linux中安装和配置兄弟打印机驱动。本篇教程中,我会演示安装HL-2270DW激光打印机的USB驱动。
|
||||
|
||||
首先通过USB线连接你的打印机到Linux上。
|
||||
|
||||
### 准备 ###
|
||||
|
||||
在准备阶段,进入[兄弟官方支持网站][3],输入你的型号(比如:HL-2270DW)搜索你的兄弟打印机型号。
|
||||
|
||||

|
||||
|
||||
进入下面页面后,选择你的Linux平台。对于Debian、Ubuntu或者其他衍生版,选择“Linux (deb)”。对于Fedora、CentOS或者RHEL选择“Linux (rpm)”。
|
||||
|
||||

|
||||
|
||||
下一页,你会找到你打印机的LPR驱动和CUPS包装器驱动。前者是命令行驱动,后者允许你通过网页管理和配置你的打印机。尤其是基于CUPS的图形界面对(本地、远程)打印机维护非常有用。建议你安装这两个驱动。点击“Driver Install Tool”下载安装文件。
|
||||
|
||||

|
||||
|
||||
运行安装文件之前,你需要在64位的Linux系统上做另外一件事情。
|
||||
|
||||
因为兄弟打印机驱动是为32位的Linux系统开发的,因此你需要按照下面的方法安装32位的库。
|
||||
|
||||
在早期的Debian(6.0或者更早期)或者Ubuntu(11.04或者更早期),安装下面的包。
|
||||
|
||||
$ sudo apt-get install ia32-libs
|
||||
|
||||
对于已经引入多架构的新的Debian或者Ubuntu而言,你可以安装下面的包:
|
||||
|
||||
$ sudo apt-get install lib32z1 lib32ncurses5
|
||||
|
||||
上面的包代替了ia32-libs包。或者你只需要安装:
|
||||
|
||||
$ sudo apt-get install lib32stdc++6
|
||||
|
||||
如果你使用的是基于Red Hat的Linux,你可以安装:
|
||||
|
||||
$ sudo yum install glibc.i686
|
||||
|
||||
### 驱动安装 ###
|
||||
|
||||
现在解压下载的驱动文件。
|
||||
|
||||
$ gunzip linux-brprinter-installer-2.0.0-1.gz
|
||||
|
||||
接下来像下面这样运行安装文件。
|
||||
|
||||
$ sudo sh ./linux-brprinter-installer-2.0.0-1
|
||||
|
||||
你会被要求输入打印机的型号。输入你打印机的型号,比如“HL-2270DW”。
|
||||
|
||||

|
||||
|
||||
同意GPL协议之后,对接下来的问题一路接受默认选项即可。
|
||||
|
||||

|
||||
|
||||
现在LPR/CUPS打印机驱动已经安装好了。接下来要配置你的打印机了。
|
||||
|
||||
### 打印机配置 ###
|
||||
|
||||
我接下来就要通过基于CUPS的网页管理和配置兄弟打印机了。
|
||||
|
||||
首先验证CUPS守护进程已经启动。
|
||||
|
||||
$ sudo netstat -nap | grep 631
|
||||
|
||||
打开一个浏览器输入 http://localhost:631 。你会看到下面的打印机管理界面。
|
||||
|
||||

|
||||
|
||||
进入“Administration”选项卡,点击打印机选项下的“Manage Printers”。
|
||||
|
||||

|
||||
|
||||
你一定在下面的页面中看到了你的打印机(HL-2270DW)。点击打印机名。
|
||||
|
||||
在下拉菜单“Administration”中,选择“Set As Server Default”。这会将你的打印机设置为系统默认打印机。
|
||||
|
||||

|
||||
|
||||
当被要求验证时,输入你的Linux登录信息。
|
||||
|
||||

|
||||
|
||||
现在基础配置已经基本完成了。为了测试打印,打开任何文档浏览程序(比如:PDF浏览器)并打印。你会看到“HL-2270DW”被列出并被作为默认的打印机设置。
|
||||
|
||||

|
||||
|
||||
打印机应该可以工作了。你可以通过CUPS的网页看到打印机状态和管理打印机任务。
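
除了网页界面,也可以在命令行里确认默认打印机并打印测试页(lpstat/lp 是 CUPS 自带的命令;打印机名以 lpstat 的输出为准,测试页路径在不同发行版上可能不同):

    # 查看打印机状态和系统默认打印机
    $ lpstat -p -d
    # 向指定打印机发送一个测试打印任务
    $ lp -d HL-2270DW /usr/share/cups/data/testprint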
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/install-brother-printer-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:http://xmodulo.com/go/brother_printers
|
||||
[2]:http://xmodulo.com/go/hl_2270dw
|
||||
[3]:http://support.brother.com/
|
@ -1,8 +1,8 @@
|
||||
Linux 答疑--如何在 Ubuntu 15.04 的 GNOME 终端中开启多个标签
|
||||
Linux有问必答:如何在 Ubuntu 15.04 的 GNOME 终端中开启多个标签
|
||||
================================================================================
|
||||
> **问**: 我以前可以在我的 Ubuntu 台式机中的 gnome-terminal 中开启多个标签。但升到 Ubuntu 15.04 后,我就无法再在 gnome-terminal 窗口中打开新标签了。要怎样做才能在 Ubuntu 15.04 的 gnome-terminal 中打开标签呢?
|
||||
|
||||
在 Ubuntu 14.10 或之前的版本中,gnome-terminal 允许你在终端窗口中开启一个新标签或一个终端窗口。但从 Ubuntu 15.04开始,gnome-terminal 移除了“新标签”选项。这实际上并不是一个 bug,而是一个合并新标签和新窗口的举措。GNOME 3.12 引入了 [单独的“开启终端”选项][1]。开启新终端标签的功能从终端菜单移动到了首选项中。
|
||||
在 Ubuntu 14.10 或之前的版本中,gnome-terminal 允许你在终端窗口中开启一个新标签或一个终端窗口。但从 Ubuntu 15.04开始,gnome-terminal 移除了“新标签”选项。这实际上并不是一个 bug,而是一个合并新标签和新窗口的举措。GNOME 3.12 引入了[单独的“开启终端”选项][1]。开启新终端标签的功能从终端菜单移动到了首选项中。
|
||||
|
||||

|
||||
|
||||
@ -29,8 +29,8 @@ Linux 答疑--如何在 Ubuntu 15.04 的 GNOME 终端中开启多个标签
|
||||
via: http://ask.xmodulo.com/open-multiple-tabs-gnome-terminal-ubuntu.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[KevSJ](https://github.com/KevSJ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[KevinSJ](https://github.com/KevinSJ)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,114 @@
|
||||
为什么 mysql 里的 ibdata1 文件不断的增长?
|
||||
================================================================================
|
||||

|
||||
|
||||
我们在 [Percona 支持栏目][1]经常收到关于 MySQL 的 ibdata1 文件的这个问题。
|
||||
|
||||
当监控服务器发送一个关于 MySQL 服务器存储的报警时,恐慌就开始了 —— 就是说磁盘快要满了。
|
||||
|
||||
一番调查后你意识到大多数磁盘空间被 InnoDB 的共享表空间 ibdata1 使用。而你已经启用了 [innodb\_file\_per\_table][2],所以问题是:
|
||||
|
||||
### ibdata1存了什么? ###
|
||||
|
||||
当你启用了 `innodb_file_per_table`,表被存储在他们自己的表空间里,但是共享表空间仍然在存储其它的 InnoDB 内部数据:
|
||||
|
||||
- 数据字典,也就是 InnoDB 表的元数据
|
||||
- 变更缓冲区
|
||||
- 双写缓冲区
|
||||
- 撤销日志
|
||||
|
||||
其中的一些在 [Percona 服务器][3]上可以被配置来避免增长过大的。例如你可以通过 [innodb\_ibuf\_max\_size][4] 设置最大变更缓冲区,或设置 [innodb\_doublewrite\_file][5] 来将双写缓冲区存储到一个分离的文件。
|
||||
|
||||
MySQL 5.6 版中你也可以创建外部的撤销表空间,所以它们可以放到自己的文件来替代存储到 ibdata1。可以看看这个[文档][6]。
|
||||
|
||||
### 什么引起 ibdata1 增长迅速? ###
|
||||
|
||||
当 MySQL 出现问题通常我们需要执行的第一个命令是:
|
||||
|
||||
SHOW ENGINE INNODB STATUS\G
|
||||
|
||||
这将展示给我们一些很有价值的信息。我们从 **TRANSACTION(事务)** 部分开始检查,然后我们会发现这个:
|
||||
|
||||
---TRANSACTION 36E, ACTIVE 1256288 sec
|
||||
MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
|
||||
show engine innodb status
|
||||
Trx read view will not see trx with id >= 36F, sees < 36F
|
||||
|
||||
这是一个最常见的原因,一个14天前创建的相当老的事务。这个状态是**活动的**,这意味着 InnoDB 已经创建了一个数据的快照,所以需要在**撤销**日志中维护旧页面,以保障从事务开始时刻算起的数据库一致性视图。如果你的数据库有大量的写入任务,那就意味着存储了大量的撤销页。
|
||||
|
||||
如果你找不到任何长时间运行的事务,你也可以监控INNODB STATUS 中的其他的变量,“**History list length(历史记录列表长度)**”展示了一些等待清除操作。这种情况下问题经常发生,因为清除线程(或者老版本的主线程)不能像这些记录进来的速度一样快地处理撤销。
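
作为辅助,MySQL 5.5 及以上版本还可以直接查询 information\_schema 来找出运行时间最久的事务(下面的查询只是一个示意):

    mysql> SELECT trx_id, trx_started, trx_mysql_thread_id
        ->   FROM information_schema.INNODB_TRX
        ->  ORDER BY trx_started LIMIT 5;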
|
||||
|
||||
### 我怎么检查什么被存储到了 ibdata1 里了? ###
|
||||
|
||||
很不幸,MySQL 不提供查看什么被存储到 ibdata1 共享表空间的信息,但是有两个工具将会很有帮助。第一个是马克·卡拉汉制作的一个修改版 innochecksum ,它发布在[这个漏洞报告][7]里。
|
||||
|
||||
它相当易于使用:
|
||||
|
||||
# ./innochecksum /var/lib/mysql/ibdata1
|
||||
0 bad checksum
|
||||
13 FIL_PAGE_INDEX
|
||||
19272 FIL_PAGE_UNDO_LOG
|
||||
230 FIL_PAGE_INODE
|
||||
1 FIL_PAGE_IBUF_FREE_LIST
|
||||
892 FIL_PAGE_TYPE_ALLOCATED
|
||||
2 FIL_PAGE_IBUF_BITMAP
|
||||
195 FIL_PAGE_TYPE_SYS
|
||||
1 FIL_PAGE_TYPE_TRX_SYS
|
||||
1 FIL_PAGE_TYPE_FSP_HDR
|
||||
1 FIL_PAGE_TYPE_XDES
|
||||
0 FIL_PAGE_TYPE_BLOB
|
||||
0 FIL_PAGE_TYPE_ZBLOB
|
||||
0 other
|
||||
3 max index_id
|
||||
|
||||
全部的 20608 中有 19272 个撤销日志页。**这占用了表空间的 93%**。
|
||||
|
||||
第二个检查表空间内容的方式是杰里米·科尔制作的 [InnoDB Ruby 工具][8]。它是个检查 InnoDB 的内部结构的更先进的工具。例如我们可以使用 space-summary 参数来得到每个页面及其数据类型的列表。我们可以使用标准的 Unix 工具来统计**撤销日志**页的数量:
|
||||
|
||||
# innodb_space -f /var/lib/mysql/ibdata1 space-summary | grep UNDO_LOG | wc -l
|
||||
19272
|
||||
|
||||
尽管在这种特殊的情况下,innochecksum 更快更容易使用,但是我推荐你使用杰里米的工具去了解更多的 InnoDB 内部的数据分布及其内部结构。
|
||||
|
||||
好,现在我们知道问题所在了。下一个问题:
|
||||
|
||||
### 我该怎么解决问题? ###
|
||||
|
||||
这个问题的答案很简单。如果你还能提交语句,就做吧。如果不能的话,你必须要杀掉线程并开始回滚过程。那将停止 ibdata1 的增长,但是很显然,你的软件中存在漏洞,或是有人出了差错。现在你知道如何去鉴定问题所在,你需要使用你自己的调试工具或普通的查询日志来找出谁或者什么引起的问题。
|
||||
|
||||
如果问题发生在清除线程,解决方法通常是升级到新版本,新版中使用一个独立的清除线程替代主线程。更多信息查看该[文档][9]
|
||||
|
||||
### 有什么方法回收已使用的空间么? ###
|
||||
|
||||
没有,目前还没有一个容易并且快速的方法。InnoDB 表空间从不收缩...参见[10 年之久的漏洞报告][10],最新更新自詹姆斯·戴(谢谢):
|
||||
|
||||
当你删除一些行,这个页被标为已删除稍后重用,但是这个空间从不会被回收。唯一的方法是使用新的 ibdata1 启动数据库。要做这个你应该需要使用 mysqldump 做一个逻辑全备份,然后停止 MySQL 并删除所有数据库、ib_logfile\*、ibdata1\* 文件。当你再启动 MySQL 的时候将会创建一个新的共享表空间。然后恢复逻辑备份。
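
把上面描述的重建流程整理成一个大致的操作示意如下(路径与服务名依发行版而定;文中提到的“删除所有数据库”这里未展开,操作前务必确认备份完整可用):

    # 1. 逻辑全备份
    $ mysqldump --all-databases --routines --events > full_dump.sql
    # 2. 停止 MySQL,删除共享表空间和重做日志
    $ sudo service mysql stop
    $ sudo rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
    # 3. 重新启动(会生成新的 ibdata1),然后恢复数据
    $ sudo service mysql start
    $ mysql < full_dump.sql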
|
||||
|
||||
### 总结 ###
|
||||
|
||||
当 ibdata1 文件增长太快,通常是 MySQL 里长时间运行的被遗忘的事务引起的。尝试去解决问题越快越好(提交或者杀死事务),因为不经过痛苦缓慢的 mysqldump 过程,你就不能回收浪费的磁盘空间。
|
||||
|
||||
同时也非常推荐对数据库进行监控以避免这些问题。我们的 [MySQL 监控插件][11]包括一个 Nagios 脚本,如果发现了一个太老的运行事务它可以提醒你。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.percona.com/blog/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/
|
||||
|
||||
作者:[Miguel Angel Nieto][a]
|
||||
译者:[wyangsun](https://github.com/wyangsun)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.percona.com/blog/author/miguelangelnieto/
|
||||
[1]:https://www.percona.com/products/mysql-support
|
||||
[2]:http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
|
||||
[3]:https://www.percona.com/software/percona-server
|
||||
[4]:https://www.percona.com/doc/percona-server/5.5/scalability/innodb_insert_buffer.html#innodb_ibuf_max_size
|
||||
[5]:https://www.percona.com/doc/percona-server/5.5/performance/innodb_doublewrite_path.html?id=percona-server:features:percona_innodb_doublewrite_path#innodb_doublewrite_file
|
||||
[6]:http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html#innodb-undo-tablespace
|
||||
[7]:http://bugs.mysql.com/bug.php?id=57611
|
||||
[8]:https://github.com/jeremycole/innodb_ruby
|
||||
[9]:http://dev.mysql.com/doc/innodb/1.1/en/innodb-improved-purge-scheduling.html
|
||||
[10]:http://bugs.mysql.com/bug.php?id=1341
|
||||
[11]:https://www.percona.com/software/percona-monitoring-plugins
|
140
published/20150713 How to manage Vim plugins.md
Normal file
@ -0,0 +1,140 @@
|
||||
如何管理 Vim 插件
|
||||
================================================================================
|
||||
|
||||
Vim是Linux上一个轻量级的通用文本编辑器。虽然对于一般的Linux用户来说,它最初的学习曲线可能比较陡峭,但比起它的好处,这些付出完全是值得的。vim 可以通过完全可定制的插件来增加越来越多的功能。但是,由于它的功能配置比较难,你需要花一些时间去了解它的插件系统,然后才能够有效地去个性化定制Vim。幸运的是,我们已经有一些工具能够使我们在使用Vim插件时更加轻松。而我日常所使用的就是Vundle。
|
||||
|
||||
### 什么是Vundle ###
|
||||
|
||||
[Vundle][1]意即Vim Bundle,是一个vim插件管理器。Vundle能让你很简单地实现插件的安装、升级、搜索或者清除。它还能管理你的运行环境并且在标签方面提供帮助。在本教程中我们将展示如何安装和使用Vundle。
|
||||
|
||||
### 安装Vundle ###
|
||||
|
||||
首先,如果你的Linux系统上没有Git的话,先[安装Git][2]。
|
||||
|
||||
接着,创建一个目录,Vim的插件将会被下载并且安装在这个目录上。默认情况下,这个目录为~/.vim/bundle。
|
||||
|
||||
$ mkdir -p ~/.vim/bundle
|
||||
|
||||
现在,使用如下指令安装Vundle。注意Vundle本身也是一个vim插件。因此我们同样把vundle安装到之前创建的目录~/.vim/bundle下。
|
||||
|
||||
$ git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim
|
||||
|
||||
### 配置Vundle ###
|
||||
|
||||
现在配置你的.vimrc文件如下:
|
||||
|
||||
set nocompatible " 必需。
|
||||
filetype off " 必须。
|
||||
|
||||
" 在这里设置你的运行时环境的路径。
|
||||
set rtp+=~/.vim/bundle/Vundle.vim
|
||||
|
||||
" 初始化vundle
|
||||
call vundle#begin()
|
||||
|
||||
" 这一行应该永远放在开头。
|
||||
Plugin 'gmarik/Vundle.vim'
|
||||
|
||||
" 这个示范来自https://github.com/gmarik/Vundle.vim README
|
||||
Plugin 'tpope/vim-fugitive'
|
||||
|
||||
" 取自http://vim-scripts.org/vim/scripts.html的插件
|
||||
Plugin 'L9'
|
||||
|
||||
" 该Git插件没有放在GitHub上。
|
||||
Plugin 'git://git.wincent.com/command-t.git'
|
||||
|
||||
"本地计算机上的Git仓库路径 (例如,当你在开发你自己的插件时)
|
||||
Plugin 'file:///home/gmarik/path/to/plugin'
|
||||
|
||||
" vim脚本sparkup存放在这个名叫vim的仓库下的一个子目录中。
|
||||
" 将这个路径正确地设置为runtimepath。
|
||||
Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}
|
||||
|
||||
" 避免与L9发生名字上的冲突
|
||||
Plugin 'user/L9', {'name': 'newL9'}
|
||||
|
||||
"所有的插件都应该在这一行之前。
|
||||
call vundle#end() " 必需。
|
||||
|
||||
容我简单解释一下上面的设置:默认情况下,Vundle将从github.com或者vim-scripts.org下载和安装vim插件。你也可以改变这个默认行为。
|
||||
|
||||
要从github安装插件:
|
||||
|
||||
Plugin 'user/plugin'
|
||||
|
||||
要从 http://vim-scripts.org/vim/scripts.html 处安装:
|
||||
|
||||
Plugin 'plugin_name'
|
||||
|
||||
要从另外一个git仓库中安装:
|
||||
|
||||
Plugin 'git://git.another_repo.com/plugin'
|
||||
|
||||
从本地文件中安装:
|
||||
|
||||
Plugin 'file:///home/user/path/to/plugin'
|
||||
|
||||
你同样可以定制其它东西,例如你的插件的运行时路径。当你自己在编写一个插件,或者你只是想从其它目录(而不是 ~/.vim)中加载插件时,这样做就非常有用。
|
||||
|
||||
Plugin 'rstacruz/sparkup', {'rtp': 'another_vim_path/'}
|
||||
|
||||
如果你有同名的插件,你可以重命名你的插件,这样它们就不会发生冲突了。
|
||||
|
||||
Plugin 'user/plugin', {'name': 'newPlugin'}
|
||||
|
||||
### 使用Vundle命令 ###
|
||||
|
||||
一旦你用vundle设置好你的插件,你就可以通过几个vundle命令来安装、升级、搜索插件,或者清除没有用的插件。
|
||||
|
||||
#### 安装一个新的插件 ####
|
||||
|
||||
`PluginInstall`命令将会安装所有列在你的.vimrc文件中的插件。你也可以通过传递一个插件名给它,来安装某个特定插件。
|
||||
|
||||
:PluginInstall
|
||||
:PluginInstall <插件名>
|
||||
|
||||

|
||||
|
||||
#### 清除没有用的插件 ####
|
||||
|
||||
如果你有任何没有用到的插件,你可以通过`PluginClean`命令来删除它。
|
||||
|
||||
:PluginClean
|
||||
|
||||

|
||||
|
||||
#### 查找一个插件 ####
|
||||
|
||||
如果你想从提供的插件清单中安装一个插件,搜索功能会很有用。
|
||||
|
||||
:PluginSearch <文本>
|
||||
|
||||

|
||||
|
||||
在搜索的时候,你可以在交互式分割窗口中安装、清除、重新搜索或者重新加载插件清单。安装后的插件不会自动加载生效,要使其加载生效,可以将它们添加进你的.vimrc文件中。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
Vim是一个妙不可言的工具。它不单单是一个能够使你的工作更加顺畅高效的默认文本编辑器,同时它还能够摇身一变,成为现存的几乎任何一门编程语言的IDE。
|
||||
|
||||
注意,有一些网站能帮你找到适合的vim插件。猛击 [http://www.vim-scripts.org][3], Github或者 [http://www.vimawesome.com][4] 获取新的脚本或插件。同时记得使用为你的插件提供的帮助。
|
||||
|
||||
和你最爱的编辑器一起嗨起来吧!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/manage-vim-plugins.html
|
||||
|
||||
作者:[Christopher Valerio][a]
|
||||
译者:[XLCYun(袖里藏云)](https://github.com/XLCYun)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/valerio
|
||||
[1]:https://github.com/VundleVim/Vundle.vim
|
||||
[2]:http://ask.xmodulo.com/install-git-linux.html
|
||||
[3]:http://www.vim-scripts.org/
|
||||
[4]:http://www.vimawesome.com/
|
||||
|
118
published/20150716 4 CCleaner Alternatives For Ubuntu Linux.md
Normal file
@ -0,0 +1,118 @@
|
||||
Ubuntu 下 CCleaner 的 4 个替代品
|
||||
================================================================================
|
||||

|
||||
|
||||
回首我使用 Windows 的那些日子,[CCleaner][1] 是我用来释放空间、删除垃圾文件和加速 Windows 的最喜爱的工具。我知道,当从 Windows 切换到 Linux 时,我并不是唯一期望 CCleaner 拥有 Linux 版本的人。假如你正在寻找 Linux 下 CCleaner 的替代品,我将在下面列举 4 个这样的应用,它们可以用来清理 Ubuntu 或基于 Ubuntu 的 Linux 发行版本。但在我们看这个清单之前,先让我们考虑一下 Linux 是否需要系统清理工具这个问题。
|
||||
|
||||
### Linux 需要像 CCleaner 那样的系统清理工具吗? ###
|
||||
|
||||
为了得到答案,让我们看看 CCleaner 做了什么。正如 [How-To Geek][2] 的这篇文章中所提到的那样:
|
||||
|
||||
> CCleaner 有两个主要的功能。一是:它扫描并删除无用的文件,释放磁盘空间。二是:它擦除隐私的数据,例如你的浏览记录和在各种软件中最近打开的文件列表。
|
||||
|
||||
所以,概括起来,它在系统范围内清理在你的浏览器或媒体播放器中的临时文件。你或许知道 Windows 有在系统中保存垃圾文件的喜好,那 Linux 呢?它是如何处理临时文件的呢?
|
||||
|
||||
与 Windows 不同, Linux 自动地清理所有的临时文件(在 `/tmp` 中存储)。在 Linux 中没有注册表,这进一步减轻了头痛。在最坏情况下,你可能会有一些损坏的不再需要的软件包,以及丢失一些网络浏览历史记录, cookies ,缓存等。
|
||||
|
||||
### 这意味着 Linux 不必需要系统清理工具了吗? ###
|
||||
|
||||
- 假如你可以运行某些命令来清理偶尔使用的软件包,手动删除浏览历史记录等(几条常用命令见下面的示例),那么答案是:不需要;
|
||||
- 假如你不想不断地从一个地方跳到另一个地方来运行命令,并想用一个工具来删除所有可通过一次或多次点击所选择的东西,那么答案是:需要。
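
对于第一种情况,下面给出几条 Ubuntu 下常用的手动清理命令作为示意(缩略图缓存的路径在不同版本上可能不同):

    # 清理不再需要的依赖包和 apt 软件包缓存
    $ sudo apt-get autoremove
    $ sudo apt-get clean
    # 清理缩略图缓存
    $ rm -rf ~/.cache/thumbnails/*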
|
||||
|
||||
假如你的答案是“需要”,就让我们继续看看一些类似于 CCleaner 的工具,用它们清理你的 Ubuntu 系统。
|
||||
|
||||
### Ubuntu 下 CCleaner 的替代品 ###
|
||||
|
||||
请注意,我使用的系统是 Ubuntu,因为下面讨论的一些工具只存在于基于 Ubuntu 的 Linux 发行版本中,而另外一些在所有的 Linux 发行版本中都可使用。
|
||||
|
||||
#### 1. BleachBit ####
|
||||
|
||||

|
||||
|
||||
[BleachBit][3] 是一个跨平台的应用程序,在 Windows 和 Linux 平台下都可使用。它有一个很长的支持清理的程序的列表,这样可以让你选择性的清理缓存,cookies 和日志文件。让我们快速浏览它的特点:
|
||||
|
||||
- 简洁的图形界面确认框,你可以预览或删除
|
||||
- 支持多平台: Linux 和 Windows
|
||||
- 免费且开源
|
||||
- 粉碎文件以隐藏它们的内容并防止数据恢复
|
||||
- 重写空闲的磁盘空间来隐藏先前删除的文件内容
|
||||
- 也拥有命令行界面
|
||||
|
||||
默认情况下,在 Ubuntu 14.04 and 15.04 中都可以获取到 BleachBit,你可以在终端中使用下面的命令来安装:
|
||||
|
||||
sudo apt-get install bleachbit
|
||||
|
||||
对于所有主流的 Linux 发行版本, BleachBit 提供有二进制程序,你可以从下面的链接中下载到 BleachBit:
|
||||
|
||||
- [下载 BleachBit 的 Linux 版本][4]
|
||||
|
||||
#### 2. Sweeper ####
|
||||
|
||||

|
||||
|
||||
Sweeper 是一个系统清理工具,它是[KDE SC utilities][5] 模块的一部分。它的主要特点有:
|
||||
|
||||
- 移除与网络相关的痕迹: cookies, 历史,缓存等
|
||||
- 移除图形缩略图缓存
|
||||
- 清理应用和文件的历史记录
|
||||
|
||||
默认情况下,Sweeper 在 Ubuntu 的软件仓库中可以得到。可以在终端中使用下面的命令来安装 Sweeper:
|
||||
|
||||
sudo apt-get install sweeper
|
||||
|
||||
#### 3. Ubuntu Tweak ####
|
||||
|
||||

|
||||
|
||||
正如它的名称所说的那样,[Ubuntu Tweak][6] 是一个调整工具,而不仅仅是一个清理应用。除了调整诸如 compiz 设置,面板的配置,开机启动程序的控制,电源管理等,Ubuntu Tweak 还提供一个清理选项,它可以让你:
|
||||
|
||||
- 清理浏览器缓存
|
||||
- 清理 Ubuntu 软件中心缓存
|
||||
- 清理缩略图缓存
|
||||
- 清理 apt 仓库缓存
|
||||
- 清理旧的内核文件
|
||||
- 清理软件包配置
|
||||
|
||||
你可以从下面的链接中得到 Ubuntu Tweak 的 `.deb` 安装文件:
|
||||
|
||||
- [下载 Ubuntu Tweak][7]
|
||||
|
||||
#### 4. GCleaner (beta) ####
|
||||
|
||||

|
||||
|
||||
作为 elementary OS Freya 的第三方应用, GCleaner 旨在成为 GNU 世界的 CCleaner,其界面与 CCleaner 非常相似。它的一些主要特点有:
|
||||
|
||||
- 清理浏览器历史记录
|
||||
- 清理应用缓存
|
||||
- 清理软件包及其配置
|
||||
- 清理最近使用的文件历史记录
|
||||
- 清空垃圾箱
|
||||
|
||||
在书写本文时, GCleaner 仍处于开发阶段,你可以查看这个项目的网站,并得到源代码来编译和使用 GCleaner。
|
||||
|
||||
- [更多地了解 GCleaner][8]
|
||||
|
||||
### 你的选择呢? ###
|
||||
|
||||
我已经向你列举了一些可能的选项,至于最终用哪个工具来清理 Ubuntu 14.04,由你自己决定。但我可以肯定的是,若你正在寻找一个类似 CCleaner 的应用,你将在上面列举的 4 个工具中做出最后的选择。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/ccleaner-alternatives-ubuntu-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:https://www.piriform.com/ccleaner/download
|
||||
[2]:http://www.howtogeek.com/172820/beginner-geek-what-does-ccleaner-do-and-should-you-use-it/
|
||||
[3]:http://bleachbit.sourceforge.net/
|
||||
[4]:http://bleachbit.sourceforge.net/download/linux
|
||||
[5]:https://www.kde.org/applications/utilities/
|
||||
[6]:http://ubuntu-tweak.com/
|
||||
[7]:http://ubuntu-tweak.com/
|
||||
[8]:https://quassy.github.io/elementary-apps/GCleaner/
|
@ -0,0 +1,162 @@
|
||||
|
||||
在 RHEL/CentOS 上为Web服务器架设 “XR”(Crossroads) 负载均衡器
|
||||
================================================================================
|
||||
Crossroads 是一个独立的服务,它是一个用于Linux和TCP服务的开源负载均衡和故障转移实用程序。它可用于HTTP,HTTPS,SSH,SMTP 和 DNS 等,它也是一个多线程的工具,在提供负载均衡服务时,它可以只使用一块内存空间以此来提高性能。
|
||||
|
||||
首先来看看 XR 是如何工作的。我们可以将 XR 放到网络客户端和服务器之间,它可以将客户端的请求分配到服务器上以平衡负载。
|
||||
|
||||
如果一台服务器宕机,XR 会转发客户端请求到另一个服务器,所以客户感觉不到停顿。看看下面的图来了解什么样的情况下,我们要使用 XR 处理。
|
||||
|
||||

|
||||
|
||||
*安装 XR Crossroads 负载均衡器*
|
||||
|
||||
这里有两个 Web 服务器,一个网关服务器,我们将在网关服务器上安装和设置 XR 以接收客户端请求,并分发到服务器。
|
||||
|
||||
XR Crossroads 网关服务器:172.16.1.204
|
||||
|
||||
Web 服务器01:172.16.1.222
|
||||
|
||||
Web 服务器02:192.168.1.161
|
||||
|
||||
在上述情况下,我们网关服务器(即 XR Crossroads)的IP地址是172.16.1.204,webserver01 为172.16.1.222,它监听8888端口,webserver02 是192.168.1.161,它监听5555端口。
|
||||
|
||||
现在,我们需要做的是均衡所有的请求:通过 XR 网关接收来自互联网的请求,然后把它们分发到两个web服务器上,以达到负载均衡。
|
||||
|
||||
### 第1步:在网关服务器上安装 XR Crossroads 负载均衡器 ###
|
||||
|
||||
**1. 不幸的是,没有为 crossroads 提供可用的 RPM 包,我们只能从源码安装。**
|
||||
|
||||
要编译 XR,你必须在系统上安装 C++ 编译器和 GNU make 组件,才能避免安装错误。
|
||||
|
||||
# yum install gcc gcc-c++ make
|
||||
|
||||
接下来,去他们的官方网站([https://crossroads.e-tunity.com][1])下载此压缩包(即 crossroads-stable.tar.gz)。
|
||||
|
||||
或者,您可以使用 wget 去下载包然后解压在任何位置(如:/usr/src/),进入解压目录,并使用 “make install” 命令安装。
|
||||
|
||||
# wget https://crossroads.e-tunity.com/downloads/crossroads-stable.tar.gz
|
||||
# tar -xvf crossroads-stable.tar.gz
|
||||
# cd crossroads-2.74/
|
||||
# make install
|
||||
|
||||

|
||||
|
||||
*安装 XR Crossroads 负载均衡器*
|
||||
|
||||
安装完成后,二进制文件安装在 /usr/sbin 目录下,XR 的配置文件在 /etc 下名为 “xrctl.xml” 。
|
||||
|
||||
**2. 最后一个条件,你需要两个web服务器。为了方便使用,我在一台服务器中创建两个 Python SimpleHTTPServer 实例。**
|
||||
|
||||
要了解如何设置一个 python SimpleHTTPServer,请阅读我们此处的文章 [使用 SimpleHTTPServer 轻松创建两个 web 服务器][2].
|
||||
|
||||
正如我所说的,我们要使用两个web服务器,webserver01 通过8888端口运行在172.16.1.222上,webserver02 通过5555端口运行在192.168.1.161上。
|
||||
|
||||

|
||||
|
||||
*XR WebServer 01*
|
||||
|
||||

|
||||
|
||||
*XR WebServer 02*
|
||||
|
||||
### 第2步: 配置 XR Crossroads 负载均衡器 ###
|
||||
|
||||
**3. 所需都已经就绪。现在我们要做的就是配置`xrctl.xml` 文件并通过 XR 服务器接受来自互联网的请求分发到 web 服务器上。**
|
||||
|
||||
现在用 [vi/vim 编辑器][3]打开`xrctl.xml`文件。
|
||||
|
||||
# vim /etc/xrctl.xml
|
||||
|
||||
并作如下修改。
|
||||
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<configuration>
|
||||
<system>
|
||||
<uselogger>true</uselogger>
|
||||
<logdir>/tmp</logdir>
|
||||
</system>
|
||||
<service>
|
||||
<name>Tecmint</name>
|
||||
<server>
|
||||
<address>172.16.1.204:8080</address>
|
||||
<type>tcp</type>
|
||||
<webinterface>0:8010</webinterface>
|
||||
<verbose>yes</verbose>
|
||||
<clientreadtimeout>0</clientreadtimeout>
|
||||
<clientwritetimeout>0</clientwritetimeout>
|
||||
<backendreadtimeout>0</backendreadtimeout>
|
||||
<backendwritetimeout>0</backendwritetimeout>
|
||||
</server>
|
||||
<backend>
|
||||
<address>172.16.1.222:8888</address>
|
||||
</backend>
|
||||
<backend>
|
||||
<address>192.168.1.161:5555</address>
|
||||
</backend>
|
||||
</service>
|
||||
</configuration>
|
||||
|
||||

|
||||
|
||||
*配置 XR Crossroads 负载均衡器*
|
||||
|
||||
在这里,你可以看到在 xrctl.xml 中配置了一个非常基本的 XR 。我已经定义了 XR 服务器在哪里,XR 的后端服务和端口及 XR 的 web 管理界面是什么。
|
||||
|
||||
**4. 现在,你需要通过以下命令来启动该 XR 守护进程。**
|
||||
|
||||
# xrctl start
|
||||
# xrctl status
|
||||
|
||||

|
||||
|
||||
*启动 XR Crossroads*
|
||||
|
||||
**5. 好的。现在是时候来检查该配置是否可以工作正常了。打开两个网页浏览器,输入 XR 服务器的 IP 地址和端口,并查看输出。**
|
||||
|
||||

|
||||
|
||||
*验证 Web 服务器负载均衡*
|
||||
|
||||
太棒了。它工作正常。是时候玩玩 XR 了。(LCTT 译注:可以看到两个请求分别分配到了不同服务器。)
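
除了浏览器,也可以用 curl 从命令行反复请求,观察响应轮流来自两台后端(地址和端口取自上面 xrctl.xml 中的配置):

    $ for i in 1 2 3 4; do curl -s http://172.16.1.204:8080/; done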
|
||||
|
||||
**6. 现在可以通过我们配置的网络管理界面的端口来登录到 XR Crossroads 仪表盘。在浏览器输入你的 XR 服务器的 IP 地址和你配置在 xrctl.xml 中的管理端口。**
|
||||
|
||||
http://172.16.1.204:8010
|
||||
|
||||

|
||||
|
||||
*XR Crossroads 仪表盘*
|
||||
|
||||
看起来像上面一样。它容易理解,用户界面友好,易于使用。它在右上角显示每个服务器能容纳多少个连接,以及关于接收该请求的附加细节。你也可以设置每个服务器承担的负载量,最大连接数和平均负载等。
|
||||
|
||||
最大的好处是,即使没有配置文件 xrctl.xml,你也可以做到这一点。你唯一要做的就是运行以下命令,它就会把这一切搞定。
|
||||
|
||||
# xr --verbose --server tcp:172.16.1.204:8080 --backend 172.16.1.222:8888 --backend 192.168.1.161:5555
|
||||
|
||||
上面语法的详细说明:
|
||||
|
||||
- `--verbose` 将显示命令执行后的详细信息。
|
||||
- `--server` 定义 XR 服务器监听的地址和端口。
|
||||
- `--backend` 定义需要把流量均衡分配过去的后端 Web 服务器。
|
||||
- `tcp` 说明我们使用的是 TCP 服务。
|
||||
|
||||
欲了解更多详情,有关文件及 CROSSROADS 的配置,请访问他们的官方网站: [https://crossroads.e-tunity.com/][4].
|
||||
|
||||
XR Corssroads 使用许多方法来提高服务器性能,避免宕机,让你的管理任务更轻松,更简便。希望你喜欢此文章,并随时在下面发表你的评论和建议,方便与我们保持联系。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/setting-up-xr-crossroads-load-balancer-for-web-servers-on-rhel-centos/
|
||||
|
||||
作者:[Thilina Uvindasiri][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/thilidhanushka/
|
||||
[1]:https://crossroads.e-tunity.com/
|
||||
[2]:http://www.tecmint.com/python-simplehttpserver-to-create-webserver-or-serve-files-instantly/
|
||||
[3]:http://www.tecmint.com/vi-editor-usage/
|
||||
[4]:https://crossroads.e-tunity.com/
|
626
published/PHP 7 upgrading.md
Normal file
@ -0,0 +1,626 @@
|
||||
PHP 7.0 升级备注
|
||||
===============
|
||||
|
||||
1. 向后不兼容的变化
|
||||
2. 新功能
|
||||
3. SAPI 模块中的变化
|
||||
4. 废弃的功能
|
||||
5. 变化的功能
|
||||
6. 新功能
|
||||
7. 新的类和接口
|
||||
8. 移除的扩展和 SAPI
|
||||
9. 扩展的其它变化
|
||||
10. 新的全局常量
|
||||
11. INI 文件处理的变化
|
||||
12. Windows 支持
|
||||
13. 其它变化
|
||||
|
||||
|
||||
##1. 向后不兼容的变化
|
||||
|
||||
###语言变化
|
||||
|
||||
####变量处理的变化
|
||||
|
||||
* 间接变量、属性和方法引用现在以从左到右的语义进行解释。一些例子:
|
||||
|
||||
$$foo['bar']['baz'] // 解释做 ($$foo)['bar']['baz']
|
||||
$foo->$bar['baz'] // 解释做 ($foo->$bar)['baz']
|
||||
$foo->$bar['baz']() // 解释做 ($foo->$bar)['baz']()
|
||||
Foo::$bar['baz']() // 解释做 (Foo::$bar)['baz']()
|
||||
|
||||
要恢复以前的行为,需要显式地加大括号:
|
||||
|
||||
${$foo['bar']['baz']}
|
||||
$foo->{$bar['baz']}
|
||||
$foo->{$bar['baz']}()
|
||||
Foo::{$bar['baz']}()
|
||||
|
||||
* 全局关键字现在只接受简单变量。像以前的
|
||||
|
||||
global $$foo->bar;
|
||||
|
||||
现在要求如下写法:
|
||||
|
||||
global ${$foo->bar};
|
||||
|
||||
* 变量或函数调用的前后加上括号不再有任何影响。例如下列代码,函数调用结果以引用的方式传给一个函数
|
||||
|
||||
function getArray() { return [1, 2, 3]; }
|
||||
|
||||
$last = array_pop(getArray());
|
||||
// Strict Standards: 只有变量可以用引用方式传递
|
||||
$last = array_pop((getArray()));
|
||||
// Strict Standards: 只有变量可以用引用方式传递
|
||||
|
||||
现在无论是否使用括号,都会抛出一个严格标准错误。以前在第二种调用方式下不会有提示。
|
||||
|
||||
* 在按引用赋值的过程中自动创建的数组元素或对象属性,其创建顺序改变了,结果顺序也随之不同。例如:
|
||||
|
||||
$array = [];
|
||||
$array["a"] =& $array["b"];
|
||||
$array["b"] = 1;
|
||||
var_dump($array);
|
||||
|
||||
现在结果是 ["a" => 1, "b" => 1],而以前的结果是 ["b" => 1, "a" => 1]。
|
||||
|
||||
相关的 RFC:
|
||||
* https://wiki.php.net/rfc/uniform_variable_syntax
|
||||
* https://wiki.php.net/rfc/abstract_syntax_tree
|
||||
|
||||
####list() 的变化
|
||||
|
||||
* list() 不再以反序赋值,例如:
|
||||
|
||||
list($array[], $array[], $array[]) = [1, 2, 3];
|
||||
var_dump($array);
|
||||
|
||||
现在结果是 $array == [1, 2, 3] ,而不是 [3, 2, 1]。注意仅赋值**顺序**变化了,而赋值仍然一致(LCTT 译注:即以前的 list()行为是从后面的变量开始逐一赋值,这样对与上述用法就会产生 [3,2,1] 这样的结果了。)。例如,类似如下的常规用法
|
||||
|
||||
list($a, $b, $c) = [1, 2, 3];
|
||||
// $a = 1; $b = 2; $c = 3;
|
||||
|
||||
仍然保持当前的行为。
|
||||
|
||||
* 不再允许对空的 list() 赋值。如下全是无效的:
|
||||
|
||||
list() = $a;
|
||||
list(,,) = $a;
|
||||
list($x, list(), $y) = $a;
|
||||
|
||||
* list() 不再支持对字符串的拆分(以前也只在某些情况下支持)。如下代码:
|
||||
|
||||
$string = "xy";
|
||||
list($x, $y) = $string;
|
||||
|
||||
现在的结果是: $x == null 和 $y == null (没有提示),而以前的结果是:
|
||||
$x == "x" 和 $y == "y" 。此外, list() 现在总是可以处理实现了 ArrayAccess 的对象,例如:
|
||||
|
||||
list($a, $b) = (object) new ArrayObject([0, 1]);
|
||||
|
||||
现在的结果是: $a == 0 和 $b == 1。 以前 $a 和 $b 都是 null。
|
||||
|
||||
相关 RFC:
|
||||
* https://wiki.php.net/rfc/abstract_syntax_tree#changes_to_list
|
||||
* https://wiki.php.net/rfc/fix_list_behavior_inconsistency
|
||||
|
||||
####foreach 的变化
|
||||
|
||||
* foreach() 迭代不再影响数组内部指针,数组指针可通过 current()/next() 等系列的函数访问。例如:
|
||||
|
||||
$array = [0, 1, 2];
|
||||
foreach ($array as &$val) {
|
||||
var_dump(current($array));
|
||||
}
|
||||
|
||||
现在将指向值 int(0) 三次。以前的输出是 int(1)、int(2) 和 bool(false)。
|
||||
|
||||
* 在对数组按值迭代时,foreach 总是在对数组副本进行操作,在迭代中任何对数组的操作都不会影响到迭代行为。例如:
|
||||
|
||||
$array = [0, 1, 2];
|
||||
$ref =& $array; // 必须这样做,才能触发旧式的行为
|
||||
foreach ($array as $val) {
|
||||
var_dump($val);
|
||||
unset($array[1]);
|
||||
}
|
||||
|
||||
现在将打印出全部三个元素 (0 1 2),而以前第二个元素 1 会跳过 (0 2)。
|
||||
|
||||
* 在对数组按引用迭代时,对数组的修改将继续会影响到迭代。不过,现在 PHP 在使用数字作为键时可以更好的维护数组内的位置。例如,在按引用迭代过程中添加数组元素:
|
||||
|
||||
$array = [0];
|
||||
foreach ($array as &$val) {
|
||||
var_dump($val);
|
||||
$array[1] = 1;
|
||||
}
|
||||
|
||||
现在迭代会正确的添加了元素。如上代码输出是 "int(0) int(1)",而以前只是 "int(0)"。
|
||||
|
||||
* 对普通(不可遍历的)对象按值或按引用迭代的行为类似于对数组进行按引用迭代。这符合以前的行为,除了如上一点所述的更精确的位置管理的改进。
|
||||
|
||||
* 对可遍历对象的迭代行为保持不变。
|
||||
|
||||
相关 RFC: https://wiki.php.net/rfc/php7_foreach
|
||||
|
||||
####参数处理的变化
|
||||
|
||||
* 不能定义两个同名的函数参数。例如,下面的方法将会触发编译时错误:
|
||||
|
||||
public function foo($a, $b, $unused, $unused) {
|
||||
// ...
|
||||
}
|
||||
|
||||
如上的代码应该修改使用不同的参数名,如:
|
||||
|
||||
public function foo($a, $b, $unused1, $unused2) {
|
||||
// ...
|
||||
}
|
||||
|
||||
* func\_get\_arg() 和 func\_get\_args() 函数不再返回传递给参数的原始值,而是返回其当前值(也许会被修改)。例如:
|
||||
|
||||
function foo($x) {
|
||||
$x++;
|
||||
var_dump(func_get_arg(0));
|
||||
}
|
||||
foo(1);
|
||||
|
||||
将会打印 "2" 而不是 "1"。代码应该改成仅在调用 func\_get\_arg(s) 后进行修改操作。
|
||||
|
||||
function foo($x) {
|
||||
var_dump(func_get_arg(0));
|
||||
$x++;
|
||||
}
|
||||
|
||||
或者应该避免修改参数:
|
||||
|
||||
function foo($x) {
|
||||
$newX = $x + 1;
|
||||
var_dump(func_get_arg(0));
|
||||
}
|
||||
|
||||
* 类似的,异常回溯也不再显示传递给函数的原始值,而是修改后的值。例如:
|
||||
|
||||
function foo($x) {
|
||||
$x = 42;
|
||||
throw new Exception;
|
||||
}
|
||||
foo("string");
|
||||
|
||||
现在堆栈跟踪的结果是:
|
||||
|
||||
Stack trace:
|
||||
#0 file.php(4): foo(42)
|
||||
#1 {main}
|
||||
|
||||
而以前是:
|
||||
|
||||
Stack trace:
|
||||
#0 file.php(4): foo('string')
|
||||
#1 {main}
|
||||
|
||||
这并不会影响到你的代码的运行时行为,值得注意的是在调试时会有所不同。
|
||||
|
||||
同样的限制也会影响到 debug\_backtrace() 及其它检查函数参数的函数。
|
||||
|
||||
相关 RFC: https://wiki.php.net/phpng
|
||||
|
||||
####整数处理的变化
|
||||
|
||||
* 无效的八进制表示(包含大于7的数字)现在会产生编译错误。例如,下列代码不再有效:
|
||||
|
||||
$i = 0781; // 8 不是一个有效的八进制数字!
|
||||
|
||||
以前,无效的数字(以及无效数字后的任何数字)会简单的忽略。以前如上 $i 的值是 7,因为后两位数字会被悄悄丢弃。
|
||||
|
||||
* 以负的位移数进行位移操作,现在会抛出一个算术错误(ArithmeticError):
|
||||
|
||||
var_dump(1 >> -1);
|
||||
// ArithmeticError: 以负数进行位移
|
||||
|
||||
* 向左位移的位数超出了整型宽度时,结果总是 0。
|
||||
|
||||
var_dump(1 << 64); // int(0)
|
||||
|
||||
以前上述代码的结果依赖于所用的 CPU 架构。例如,在 x86(包括 x86-64) 上结果是 int(1),因为其位移操作数在范围内。
|
||||
|
||||
* 类似的,向右位移的位数超出了整型宽度时,其结果总是 0 或 -1 (依赖于符号):
|
||||
|
||||
var_dump(1 >> 64); // int(0)
|
||||
var_dump(-1 >> 64); // int(-1)
|
||||
|
||||
相关 RFC: https://wiki.php.net/rfc/integer_semantics
|
||||
|
||||
####字符串处理的变化
|
||||
|
||||
* 包含十六进制数字的字符串不会再被当做数字,也不会被特殊处理。参见例子中的新行为:
|
||||
|
||||
var_dump("0x123" == "291"); // bool(false) (以前是 true)
|
||||
var_dump(is_numeric("0x123")); // bool(false) (以前是 true)
|
||||
var_dump("0xe" + "0x1"); // int(0) (以前是 16)
|
||||
|
||||
var_dump(substr("foo", "0x1")); // string(3) "foo" (以前是 "oo")
|
||||
// 注意:遇到了一个非正常格式的数字
|
||||
|
||||
filter\_var() 可以用来检查一个字符串是否包含了十六进制数字,或这个字符串是否能转换为整数:
|
||||
|
||||
$str = "0xffff";
|
||||
$int = filter_var($str, FILTER_VALIDATE_INT, FILTER_FLAG_ALLOW_HEX);
|
||||
if (false === $int) {
|
||||
throw new Exception("Invalid integer!");
|
||||
}
|
||||
var_dump($int); // int(65535)
|
||||
|
||||
* 由于给双引号字符串和 HERE 文档增加了 Unicode 码点转义格式(Unicode Codepoint Escape Syntax), 所以带有无效序列的 "\u{" 现在会造成错误:
|
||||
|
||||
$str = "\u{xyz}"; // 致命错误:无效的 UTF-8 码点转义序列
|
||||
|
||||
要避免这种情况,需要转义开头的反斜杠:
|
||||
|
||||
$str = "\\u{xyz}"; // 正确
|
||||
|
||||
不过,不跟随 { 的 "\u" 不受影响。如下代码不会生成错误,和前面的一样工作:
|
||||
|
||||
$str = "\u202e"; // 正确
|
||||
|
||||
相关 RFC:
|
||||
* https://wiki.php.net/rfc/remove_hex_support_in_numeric_strings
|
||||
* https://wiki.php.net/rfc/unicode_escape
|
||||
|
||||
####错误处理的变化
|
||||
|
||||
* 现在有两个异常类: Exception 和 Error 。这两个类都实现了一个新接口: Throwable 。在异常处理代码中的类型指示也许需要修改来处理这种情况。
|
||||
|
||||
* 一些致命错误和可恢复的致命错误现在改为抛出一个 Error 。由于 Error 是一个独立于 Exception 的类,这些异常不会被已有的 try/catch 块捕获。
|
||||
|
||||
可恢复的致命错误被转换为一个异常,所以它们不能在错误处理里面悄悄的忽略。部分情况下,类型指示失败不再能忽略。
|
||||
|
||||
* 解析错误现在会生成一个 Error 扩展的 ParseError 。除了以前的基于返回值 / error_get_last() 的处理,对某些可能无效的代码的 eval() 的错误处理应该改为捕获 ParseError 。
|
||||
|
||||
* 内部类的构造函数在失败时总是会抛出一个异常。以前一些构造函数会返回 NULL 或一个不可用的对象。
|
||||
|
||||
* 一些 E_STRICT 提示的错误级别改变了。
|
||||
|
||||
相关 RFC:
|
||||
* https://wiki.php.net/rfc/engine_exceptions_for_php7
|
||||
* https://wiki.php.net/rfc/throwable-interface
|
||||
* https://wiki.php.net/rfc/internal_constructor_behaviour
|
||||
* https://wiki.php.net/rfc/reclassify_e_strict
|
||||
|
||||
####其它的语言变化
|
||||
|
||||
* 不再支持在不兼容的 $this 上下文中以静态方式调用非静态方法。这种情况下,$this 是没有定义的,但是对它的调用是允许的,并带有一个废弃提示。例子:
|
||||
|
||||
class A {
|
||||
public function test() { var_dump($this); }
|
||||
}
|
||||
|
||||
// 注意:没有从类 A 进行扩展
|
||||
class B {
|
||||
public function callNonStaticMethodOfA() { A::test(); }
|
||||
}
|
||||
|
||||
(new B)->callNonStaticMethodOfA();
|
||||
|
||||
// 废弃:非静态方法 A::test() 不应该被静态调用
|
||||
// 提示:未定义的变量 $this
|
||||
NULL
|
||||
|
||||
注意,这仅出现在来自不兼容上下文的调用上。如果类 B 扩展自类 A ,调用会被允许,没有任何提示。
|
||||
|
||||
* 不能使用下列类名、接口名和特殊名(大小写敏感):
|
||||
|
||||
bool
|
||||
int
|
||||
float
|
||||
string
|
||||
null
|
||||
false
|
||||
true
|
||||
|
||||
这用于 class/interface/trait 声明、 class_alias() 和 use 语句中。
|
||||
|
||||
此外,下列类名、接口名和特殊名保留做将来使用,但是使用时尚不会抛出错误:
|
||||
|
||||
resource
|
||||
object
|
||||
mixed
|
||||
numeric
|
||||
|
||||
* yield 语句结构当用在一个表达式上下文时,不再要求括号。它现在是一个优先级在 “print” 和 “=>” 之间的右结合操作符。在某些情况下这会导致不同的行为,例如:
|
||||
|
||||
echo yield -1;
|
||||
// 以前被解释如下
|
||||
echo (yield) - 1;
|
||||
// 现在被解释如下
|
||||
echo yield (-1);
|
||||
|
||||
yield $foo or die;
|
||||
// 以前被解释如下
|
||||
yield ($foo or die);
|
||||
// 现在被解释如下
|
||||
(yield $foo) or die;
|
||||
|
||||
这种情况可以通过增加括号来解决。
|
||||
|
||||
* 移除了 ASP (\<%) 和 script (\<script language=php>) 标签。
|
||||
|
||||
RFC: https://wiki.php.net/rfc/remove_alternative_php_tags
|
||||
|
||||
* 不支持以引用的方式对 new 的结果赋值。
|
||||
|
||||
* 不支持对一个来自非兼容的 $this 上下文的非静态方法的域内调用。细节参见: https://wiki.php.net/rfc/incompat_ctx 。
|
||||
|
||||
* 不支持 ini 文件中的 # 风格的备注。使用 ; 风格的备注替代。
|
||||
|
||||
* $HTTP\_RAW\_POST\_DATA 不再可用,使用 php://input 流替代。
|
||||
|
||||
###标准库的变化
|
||||
|
||||
* call\_user\_method() 和 call\_user\_method\_array() 不再存在。
|
||||
|
||||
* 在一个输出缓冲区被创建在输出缓冲处理器里时, ob\_start() 不再发出 E\_ERROR,而是 E\_RECOVERABLE\_ERROR。
|
||||
|
||||
* 改进的 zend\_qsort (使用 hybrid 排序算法)性能更好,并改名为 zend\_sort。
|
||||
|
||||
- 增加了稳定的排序算法 zend\_insert\_sort。
|
||||
|
||||
* 移除 fpm-fcgi 的 dl() 函数。
|
||||
|
||||
* setcookie() 如果 cookie 名为空会触发一个 WARNING ,而不是发出一个空的 set-cookie 头。
|
||||
|
||||
###其它
|
||||
|
||||
- Curl:
|
||||
- 去除对禁用 CURLOPT\_SAFE\_UPLOAD 选项的支持。所有的 curl 文件上载必须使用 curl\_file / CURLFile API。
|
||||
|
||||
- Date:
|
||||
- 从 mktime() 和 gmmktime() 中移除 $is\_dst 参数
|
||||
|
||||
- DBA
|
||||
- 如果键也没有出现在 inifile 处理器中,dba\_delete() 现在会返回 false。
|
||||
|
||||
- GMP
|
||||
- 现在要求 libgmp 版本 4.2 或更新。
|
||||
- gmp\_setbit() 和 gmp\_clrbit() 对于负指标返回 FALSE,和其它的 GMP 函数一致。
|
||||
|
||||
- Intl:
|
||||
- 移除废弃的别名 datefmt\_set\_timezone\_id() 和 IntlDateFormatter::setTimeZoneID()。替代使用 datefmt\_set\_timezone() 和 IntlDateFormatter::setTimeZone()。
|
||||
|
||||
- libxml:
|
||||
- 增加 LIBXML\_BIGLINES 解析器选项。从 libxml 2.9.0 开始可用,并增加了在错误报告中行号大于 16 位的支持。
|
||||
|
||||
- Mcrypt
|
||||
- 移除等同于 mcrypt\_generic\_deinit() 的废弃别名 mcrypt\_generic\_end()。
|
||||
- 移除废弃的 mcrypt\_ecb()、 mcrypt\_cbc()、 mcrypt\_cfb() 和 mcrypt\_ofb() 函数,它们等同于使用 MCRYPT\_MODE\_* 标志的 mcrypt\_encrypt() 和 mcrypt\_decrypt() 。
|
||||
|
||||
- Session
|
||||
- session\_start() 以数组方式接受所有的 INI 设置。例如, ['cache\_limiter'=>'private'] 会设置 session.cache\_limiter=private 。也支持 'read\_and\_close' 以在读取数据后立即关闭会话数据。
|
||||
- 会话保存处理器接受使用 validate\_sid() 和 update\_timestamp() 来校验会话 ID 是否存在、更新会话时间戳。对旧式的用户定义的会话保存处理器继续兼容。
|
||||
- 增加了 SessionUpdateTimestampHandlerInterface 。 validateSid()、 updateTimestamp()
|
||||
定义在接口里面。
|
||||
- session.lazy\_write(默认是 On) 的 INI 设置支持仅在会话数据更新时写入。
|
||||
|
||||
- Opcache
|
||||
- 移除 opcache.load\_comments 配置语句。现在文件内备注载入无成本,并且总是启用的。
|
||||
|
||||
- OpenSSL:
|
||||
- 移除 "rsa\_key\_size" SSL 上下文选项,按给出的协商的加密算法自动设置适当的大小。
|
||||
- 移除 "CN\_match" 和 "SNI\_server\_name" SSL 上下文选项。使用自动侦测或 "peer\_name" 选项替代。
|
||||
|
||||
- PCRE:
|
||||
- 移除对 /e (PREG\_REPLACE\_EVAL) 修饰符的支持,使用 preg\_replace\_callback() 替代。
|
||||
|
||||
- PDO\_pgsql:
|
||||
- 移除 PGSQL\_ATTR\_DISABLE\_NATIVE\_PREPARED\_STATEMENT 属性,等同于 ATTR\_EMULATE\_PREPARES。
|
||||
|
||||
- Standard:
|
||||
- 移除 setlocale() 中的字符串类目支持。使用 LC_* 常量替代。
|
||||
|
||||
- 移除 set\_magic\_quotes\_runtime() 及其别名 magic\_quotes\_runtime()。
|
||||
|
||||
- JSON:
|
||||
- 拒绝 json_decode 中的 RFC 7159 不兼容数字格式 - 顶层 (07, 0xff, .1, -.1) 和所有层的 ([1.], [1.e1])
|
||||
- 用一个参数调用 json\_decode 等价于用空的 PHP 字符串或值调用,转换为空字符串(NULL, FALSE)的结果是 JSON 格式错误。
|
||||
|
||||
- Stream:
|
||||
- 移除 set\_socket\_blocking() ,等同于其别名 stream\_set\_blocking()。
|
||||
|
||||
- XSL:
|
||||
- 移除 xsl.security\_prefs ini 选项,使用 XsltProcessor::setSecurityPrefs() 替代。
|
||||
|
||||
##2. 新功能
|
||||
|
||||
- Core
|
||||
- 增加了组式 use 声明。
|
||||
(RFC: https://wiki.php.net/rfc/group_use_declarations)
|
||||
- 增加了 null 合并操作符 (??),命令行示例见本节列表之后。
|
||||
(RFC: https://wiki.php.net/rfc/isset_ternary)
|
||||
- 在 64 位架构上支持长度 >= 2^31 字节的字符串。
|
||||
- 增加了 Closure::call() 方法(仅工作在用户侧的类)。
|
||||
- 在双引号字符串和 here 文档中增加了 \u{xxxxxx} Unicode 码点转义格式。
|
||||
- define() 现在支持数组作为常量值,修复了此前 const 语法支持数组常量值而 define() 不支持的疏忽。
|
||||
- 增加了比较操作符 (<=>),即太空船操作符。
|
||||
(RFC: https://wiki.php.net/rfc/combined-comparison-operator)
|
||||
- 为委托生成器添加了类似协程的 yield from 操作符。
|
||||
(RFC: https://wiki.php.net/rfc/generator-delegation)
|
||||
- 保留的关键字现在可以用在几种新的上下文中。
|
||||
(RFC: https://wiki.php.net/rfc/context_sensitive_lexer)
|
||||
- 增加了标量类型的声明支持,并可以使用 declare(strict\_types=1) 的声明严格模式。
|
||||
(RFC: https://wiki.php.net/rfc/scalar_type_hints_v5)
|
||||
- 增加了对加密级安全的用户侧的随机数发生器的支持。
|
||||
(RFC: https://wiki.php.net/rfc/easy_userland_csprng)
|
||||
|
||||
- Opcache
|
||||
- 增加了基于文件的二级 opcode 缓存(实验性——默认禁用)。要启用它,PHP 需要使用 --enable-opcache-file 配置和构建,然后 opcache.file\_cache=\<DIR> 配置指令就可以设置在 php.ini 中。二级缓存也许可以提升服务器重启或 SHM 重置时的性能。此外,也可以设置 opcache.file\_cache\_only=1 来使用文件缓存而根本不用 SHM(也许对于共享主机有用);设置 opcache.file\_cache\_consistency\_checks=0 来禁用文件缓存一致性检查,以加速载入过程,有安全风险。
|
||||
|
||||
- OpenSSL
|
||||
- 当用 OpenSSL 1.0.2 及更新构建时,增加了 "alpn\_protocols" SSL 上下文选项来允许加密的客户端/服务器流使用 ALPN TLS 扩展去协商替代的协议。协商后的协议信息可以通过 stream\_get\_meta\_data() 输出访问。
|
||||
|
||||
- Reflection
|
||||
- 增加了一个 ReflectionGenerator 类(yield from Traces,当前文件/行等等)。
|
||||
- 增加了一个 ReflectionType 类来更好的支持新的返回类型和标量类型声明功能。新的 ReflectionParameter::getType() 和 ReflectionFunctionAbstract::getReturnType() 方法都返回一个 ReflectionType 实例。
|
||||
|
||||
- Stream
|
||||
- 添加了新的仅用于 Windows 的流上下文选项以允许阻塞管道读取。要启用该功能,当创建流上下文时,传递 array("pipe" => array("blocking" => true)) 。要注意的是,该选项会导致管道缓冲区的死锁,然而它在几个命令行场景中有用。
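
对于上面 Core 部分提到的 null 合并操作符 (??) 和太空船操作符 (<=>),可以用 PHP 7 的命令行快速体验一下(假设 php 可执行文件在 PATH 中):

    $ php -r 'var_dump(1 <=> 2, 2 <=> 2, 3 <=> 2);'
    int(-1)
    int(0)
    int(1)
    $ php -r '$a = []; echo $a["x"] ?? "default", PHP_EOL;'
    default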
|
||||
|
||||
##3. SAPI 模块的变化
|
||||
|
||||
- FPM
|
||||
- 修复错误 #65933 (不能设置超过1024字节的配置行)。
|
||||
- Listen = port 现在监听在所有地址上(IPv6 和 IPv4 映射的)。
|
||||
|
||||
##4. 废弃的功能
|
||||
|
||||
- Core
|
||||
- 废弃了 PHP 4 风格的构造函数(即构造函数名与类名相同)。
|
||||
- 废弃了对非静态方法的静态调用。
|
||||
|
||||
- OpenSSL
|
||||
- 废弃了 "capture\_session\_meta" SSL 上下文选项。 在流资源上活动的加密相关的元数据可以通过 stream\_get\_meta\_data() 的返回值访问。
|
||||
|
||||
##5. 函数的变化
|
||||
|
||||
- parse\_ini\_file():
|
||||
- parse\_ini\_string():
|
||||
- 添加了扫描模式 INI_SCANNER_TYPED,以得到带类型的 .ini 值。
|
||||
|
||||
- unserialize():
|
||||
- 给 unserialize 函数添加了第二个参数
|
||||
(RFC: https://wiki.php.net/rfc/secure_unserialize) 来指定可接受的类:
|
||||
unserialize($foo, ["allowed_classes" => ["MyClass", "MyClass2"]]);
|
||||
|
||||
- proc\_open():
|
||||
- 可以被 proc\_open() 使用的最大管道数以前被硬编码地限制为 16。现在去除了这个限制,只受限于 PHP 的可用内存大小。
|
||||
- 新添加的仅用于 Windows 的配置选项 "blocking\_pipes" 可以用于强制阻塞对子进程管道的读取。这可以用于几种命令行应用场景,但是它会导致死锁。此外,这与新的流的管道上下文选项相关。
|
||||
|
||||
- array_column():
|
||||
- 该函数现在支持把对象数组当做二维数组。只有公开属性会被处理,对象里面使用 \_\_get() 的动态属性必须也实现 \_\_isset() 才行。
|
||||
|
||||
- stream\_context\_create()
|
||||
- 现在可以接受一个仅 Windows 可用的配置 array("pipe" => array("blocking" => \<boolean>)) 来强制阻塞管道读取。该选项应该小心使用,该平台有可能导致管道缓冲区的死锁。
|
||||
|
||||
##6. 新函数
|
||||
|
||||
- GMP
|
||||
- 添加了 gmp\_random\_seed()。
|
||||
|
||||
- PCRE:
|
||||
- 添加了 preg\_replace\_callback\_array 函数。
|
||||
(RFC: https://wiki.php.net/rfc/preg_replace_callback_array)
|
||||
|
||||
- Standard
|
||||
- 添加了整数除法 intdiv() 函数(示例见本节列表之后)。
|
||||
- 添加了重置错误状态的 error\_clear\_last() 函数。
|
||||
|
||||
- Zlib:
|
||||
- 添加了 deflate\_init()、 deflate\_add()、 inflate\_init()、 inflate\_add() 函数来执行增量式和流式的压缩/解压。
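
其中 intdiv() 的行为可以在命令行里直接验证(假设已安装 PHP 7):

    $ php -r 'var_dump(intdiv(10, 3));'
    int(3)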
|
||||
|
||||
##7. 新的类和接口
|
||||
|
||||
(暂无)
|
||||
|
||||
##8. 移除的扩展和 SAPI
|
||||
|
||||
- sapi/aolserver
|
||||
- sapi/apache
|
||||
- sapi/apache_hooks
|
||||
- sapi/apache2filter
|
||||
- sapi/caudium
|
||||
- sapi/continuity
|
||||
- sapi/isapi
|
||||
- sapi/milter
|
||||
- sapi/nsapi
|
||||
- sapi/phttpd
|
||||
- sapi/pi3web
|
||||
- sapi/roxen
|
||||
- sapi/thttpd
|
||||
- sapi/tux
|
||||
- sapi/webjames
|
||||
- ext/mssql
|
||||
- ext/mysql
|
||||
- ext/sybase_ct
|
||||
- ext/ereg
|
||||
|
||||
更多细节参见:
|
||||
|
||||
- https://wiki.php.net/rfc/removal_of_dead_sapis_and_exts
|
||||
- https://wiki.php.net/rfc/remove_deprecated_functionality_in_php7
|
||||
|
||||
注意:NSAPI 没有在 RFC 中投票,不过它会在以后移除。这就是说,它相关的 SDK 今后不可用。
|
||||
|
||||
##9. 扩展的其它变化
|
||||
|
||||
- Mhash
|
||||
- Mhash 今后不再是一个扩展了,使用 function\_exists("mhash") 来检查其是否可用。
|
||||
|
||||
##10. 新的全局常量
|
||||
|
||||
- Core
|
||||
- 添加了 PHP\_INT\_MIN
|
||||
|
||||
- Zlib
|
||||
- 添加的这些常量用于控制新的增量式 deflate\_add() 和 inflate\_add() 函数的刷新行为:
|
||||
- ZLIB\_NO\_FLUSH
|
||||
- ZLIB\_PARTIAL\_FLUSH
|
||||
- ZLIB\_SYNC\_FLUSH
|
||||
- ZLIB\_FULL\_FLUSH
|
||||
- ZLIB\_BLOCK
|
||||
- ZLIB\_FINISH
|
||||
|
||||
- GD
|
||||
- 移除了 T1Lib 支持,因此以下依赖 T1Lib(此前为可选依赖)的功能不再可用:
|
||||
|
||||
函数:
|
||||
- imagepsbbox()
|
||||
- imagepsencodefont()
|
||||
- imagepsextendedfont()
|
||||
- imagepsfreefont()
|
||||
- imagepsloadfont()
|
||||
- imagepsslantfont()
|
||||
- imagepstext()
|
||||
|
||||
资源:
|
||||
- 'gd PS font'
|
||||
- 'gd PS encoding'
|
||||
|
||||
##11. INI 文件处理的变化
|
||||
|
||||
- Core
|
||||
- 移除了 asp\_tags ini 指令。如果启用它会导致致命错误。
|
||||
- 移除了 always\_populate\_raw\_post\_data ini 指令。
|
||||
|
||||
##12. Windows 支持
|
||||
|
||||
- Core
|
||||
- 在 64 位系统上支持原生的 64 位整数。
|
||||
- 在 64 位系统上支持大文件。
|
||||
- 支持 getrusage()。
|
||||
|
||||
- ftp
|
||||
- 所带的 ftp 扩展总是共享库的。
|
||||
- 对于 SSL 支持,取消了对 openssl 扩展的依赖,取而代之仅依赖 openssl 库。如果编译时 openssl 库可用,就会自动启用 ftp\_ssl\_connect()。
|
||||
|
||||
- odbc
|
||||
- 所带的 odbc 扩展总是共享库的。
|
||||
|
||||
##13. 其它变化
|
||||
|
||||
- Core
|
||||
- NaN 和 Infinity 转换为整数时总是 0,而不再是未定义的、依赖平台的行为。
|
||||
- 对非对象调用方法会触发一个可捕获的致命错误,而不是致命错误;参见: https://wiki.php.net/rfc/catchable-call-to-member-of-non-object
|
||||
- zend\_parse\_parameters、类型提示和转换的错误信息中,现在总是使用 "integer" 和 "float",而不是 "long" 和 "double"。
|
||||
- 如果 ignore\_user\_abort 设置为 true,则在连接中断后输出缓冲区会继续工作。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/php/php-src/blob/php-7.0.0beta1/UPGRADING
|
||||
|
||||
作者:[php][a]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/php
|
@ -1,155 +0,0 @@
|
||||
Translating by H-mudcup
|
||||
10 Truly Amusing Easter Eggs in Linux
|
||||
================================================================================
|
||||

|
||||
The programmer working on Adventure slipped a secret feature into the game. Instead of getting upset about it, Atari decided to give these sorts of “secret features” a name -- “Easter Eggs” because… you know… you hunt for them. Image credit: Wikipedia.
|
||||
|
||||
Back in 1979, a video game was being developed for the Atari 2600 -- [Adventure][1].
|
||||
|
||||
The programmer working on Adventure slipped a secret feature into the game which, when the user moved an “invisible square” to a particular wall, allowed entry into a “secret room”. That room contained a simple phrase: “Created by [Warren Robinett][2]”.
|
||||
|
||||
Atari had a policy against putting author credits in their games, so this intrepid programmer put his John Hancock on the game by being, well, sneaky. Atari only found out about the “secret room” after Warren Robinett had left the company. Instead of getting upset about it, Atari decided to give these sorts of “secret features” a name -- “Easter Eggs” because… you know… you hunt for them -- and declared that they would be putting more of these “Easter Eggs” in future games.
|
||||
|
||||
This wasn’t the first such “hidden feature” built into a piece of software (that distinction goes to an operating system for the [PDP-10][3] from 1966), but this was the first time it was given a name. And it was the first time it really grabbed the attention of most computer users and gamers.
|
||||
|
||||
Linux (and Linux related software) has not been left out. Some truly amusing Easter Eggs have been created for our beloved operating system over the years. Here are some of my personal favorites -- with how to achieve them.
|
||||
|
||||
You’ll notice, rather quickly, that most of these are experienced via a terminal. That’s on purpose. Because terminals are cool. [I should also take this moment to say that if you try to run an application I list, and you do not have it installed, it will not work. You should install it first. Because… computers.]
|
||||
|
||||
### Arch : Pac-Man in pacman ###
|
||||
|
||||
We’re going to start with one just for the [Arch Linux][4] fans out there. You can add a [Pac-Man][5]-esque character to your progress bars in “[pacman][6]” (the Arch package manager). Why this isn’t enabled by default is beyond me.
|
||||
|
||||
To do this you’ll want to edit “/etc/pacman.conf” in your favorite text editor. Under the “# Misc options” section, remove the “#” in front of “Color” and add the line “ILoveCandy”. Because Pac-Man loves candy.
|
||||
|
||||
That’s it! Next time you fire up a terminal and run pacman, you’ll help the little yellow guy get some lunch (or at least some candy).
|
||||
|
||||
### GNU Emacs : Tetris and such ###
|
||||
|
||||

|
||||
I don’t like emacs. Not even a little bit. But it does play Tetris.
|
||||
|
||||
I have a confession to make: I don’t like [emacs][7]. Not even a little bit.
|
||||
|
||||
Some things fill my heart with gladness. Some things take away all my sadness. Some things ease my troubles. That’s [not what emacs does][8].
|
||||
|
||||
But it does play Tetris. And that’s not nothing. Here’s how:
|
||||
|
||||
Step 1) Launch emacs. (When in doubt, type “emacs”.)
|
||||
|
||||
Step 2) Hit Escape then X on your keyboard.
|
||||
|
||||
Step 3) Type “tetris” and hit Enter.
|
||||
|
||||
Bored of Tetris? Try “pong”, “snake” and a whole host of other little games (and novelties). Take a look in “/usr/share/emacs/*/lisp/play” for the full list.
|
||||
|
||||
### Animals Saying Things ###
|
||||
|
||||
The Linux world has a long and glorious history of animals saying things in a terminal. Here are the ones that are the most important to know by heart.
|
||||
|
||||
On a Debian-based distro? Try typing “apt-get moo”.
|
||||
|
||||

|
||||
apt-get moo
|
||||
|
||||
Simple, sure. But it’s a talking cow. So we like it. Then try “aptitude moo”. It will inform you that “There are no Easter Eggs in this program”.
|
||||
|
||||
If there’s one thing you should know about [aptitude][9], it’s that it’s a dirty, filthy liar. If aptitude were wearing pants, the fire could be seen from space. Add a “-v” option to that same command. Keep adding more v’s until you force aptitude to come clean.
|
||||
|
||||

|
||||
I think we can all agree that this is probably the most important feature in aptitude.
|
||||
|
||||
I think we can all agree that this is probably the most important feature in aptitude. But what if you want to put your own words into the mouth of a cow? That’s where “cowsay” comes in.
|
||||
|
||||
And, don’t let the name “cowsay” fool you. You can put words into so much more than just a cow. Like an elephant, Calvin, Beavis and even the Ghostbusters logo. Just do a “cowsay -l” from the terminal to get a complete list of options.
|
||||
|
||||

|
||||
You can put words into so much more than just a cow.
|
||||
|
||||
Want to get really tricky? You can pipe the output of other applications into cowsay. Try “fortune | cowsay”. Lots of fun can be had.
|
||||
|
||||
### Sudo Insult Me Please ###
|
||||
|
||||
Raise your hand if you’ve always wanted your computer to insult you when you do something wrong. Hell. I know I have. Try this:
|
||||
|
||||
Type “sudo visudo” to open the “sudoers” file. In the top of that file you’ll likely see a few lines that start with “Defaults”. At the bottom of that list add “Defaults insults” and save the file.
|
||||
|
||||
Now, whenever you mistype your sudo password, your system will lob insults at you. Confidence-boosting phrases such as “Listen, burrito brains, I don’t have time to listen to this trash.”, “Are you on drugs?” and “Your mind just hasn’t been the same since the electro-shocks, has it?”.
|
||||
|
||||
This one has the side-effect of being a rather fun thing to set on a co-worker's computer.
|
||||
|
||||
### Firefox is cheeky ###
|
||||
|
||||
Here’s one that isn’t done from the Terminal! Huzzah!
|
||||
|
||||
Open up Firefox. In the URL bar type “about:about”. That will give you a list of all of the “about” pages in Firefox. Nothing too fancy there, right?
|
||||
|
||||
Now try “about:mozilla” and you’ll be greeted with a quote from the “[Book of Mozilla][10]” -- the holy book of web browsing. One of my other favorites, “about:robots”, is also quite excellent.
|
||||
|
||||

|
||||
The “Book of Mozilla” -- the holy book of web browsing.
|
||||
|
||||
### Carefully Crafted Calendar Concoctions ###
|
||||
|
||||
Tired of the boring old [Gregorian Calendar][11]? Ready to mix things up a little bit? Try typing “ddate”. This will print the current date on the [Discordian Calendar][12]. You will be greeted by something that looks like this:
|
||||
|
||||
“Today is Sweetmorn, the 18th day of Discord in the YOLD 3181”
|
||||
|
||||
I hear what you’re saying, “But, this isn’t an Easter Egg!” Shush. I’ll call it an Easter Egg if I want to.
|
||||
|
||||
### Instant l33t Hacker Mode ###
|
||||
|
||||
Want to feel like you’re a super-hacker from a movie? Try setting nmap into “[Script Kiddie][13]” mode (by adding “-oS”) and all of the output will be rendered in the most 3l33t [h@x0r-y way][14] possible.
|
||||
|
||||
Example: “nmap -oS - google.com”
|
||||
|
||||
Do it. You know you want to. Angelina Jolie would be [super impressed][15].
|
||||
|
||||
### The lolcat Rainbow ###
|
||||
|
||||
Having awesome Easter Eggs and goodies in your Linux terminal is fine and dandy… but what if you want it to have a little more… pizazz? Enter: lolcat. Take the text output of any program and pipe it through lolcat to super-duper-rainbow-ize it.
|
||||
|
||||

|
||||
Take the text output of any program and pipe it through lolcat to super-duper-rainbow-ize it.
|
||||
|
||||
### Cursor Chasing Critter ###
|
||||
|
||||

|
||||
“Oneko” -- the Linux port of the classic “Neko”.
|
||||
|
||||
“Oneko” -- the Linux port of the classic “[Neko][16]”.
|
||||
And that brings us to “oneko” -- the Linux port of the classic “Neko”. Basically a little cat that chases your cursor around the screen.
|
||||
|
||||
While this may not qualify as an “Easter Egg” in the strictest sense of the word, it’s still fun. And it feels Easter Egg-y.
|
||||
|
||||
You can also use different options (such as “oneko -dog”) to use a little dog instead of a cat and a few other tweaks and options. Lots of possibilities for annoying co-workers with this one.
|
||||
|
||||
There you have it! A list of my favorite Linux Easter Eggs (and things of that ilk). Feel free to add your own favorite in the comments section below. Because this is the Internet. And you can do that sort of thing.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linux.com/news/software/applications/820944-10-truly-amusing-linux-easter-eggs-
|
||||
|
||||
作者:[Bryan Lunduke][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linux.com/community/forums/person/56734
|
||||
[1]:http://en.wikipedia.org/wiki/Adventure_(Atari_2600)
|
||||
[2]:http://en.wikipedia.org/wiki/Warren_Robinett
|
||||
[3]:http://en.wikipedia.org/wiki/PDP-10
|
||||
[4]:http://en.wikipedia.org/wiki/Arch_Linux
|
||||
[5]:http://en.wikipedia.org/wiki/Pac-Man
|
||||
[6]:http://www.linux.com/news/software/applications/820944-10-truly-amusing-linux-easter-eggs-#Pacman
|
||||
[7]:http://en.wikipedia.org/wiki/GNU_Emacs
|
||||
[8]:https://www.youtube.com/watch?v=AQ4NAZPi2js
|
||||
[9]:https://wiki.debian.org/Aptitude
|
||||
[10]:http://en.wikipedia.org/wiki/The_Book_of_Mozilla
|
||||
[11]:http://en.wikipedia.org/wiki/Gregorian_calendar
|
||||
[12]:http://en.wikipedia.org/wiki/Discordian_calendar
|
||||
[13]:http://nmap.org/book/output-formats-script-kiddie.html
|
||||
[14]:http://nmap.org/book/output-formats-script-kiddie.html
|
||||
[15]:https://www.youtube.com/watch?v=Ql1uLyuWra8
|
||||
[16]:http://en.wikipedia.org/wiki/Neko_%28computer_program%29
|
@ -1,114 +0,0 @@
|
||||
Translating by Love-xuan
|
||||
Animated Wallpaper Adds Live Backgrounds To Linux Distros
|
||||
================================================================================
|
||||
**We know a lot of you love having a stylish Ubuntu desktop to show off.**
|
||||
|
||||

|
||||
|
||||
Live Wallpaper
|
||||
|
||||
And as Linux makes it so easy to create a stunning workspace with a minimal effort, that’s understandable!
|
||||
|
||||
Today, we’re highlighting — [re-highlighting][1] for those of you with long memories — a free, open-source tool that can add extra bling to your OS screenshots and screencasts.
|
||||
|
||||
It’s called **Live Wallpaper** and (as you can probably guess) it will replace the standard static desktop background with an animated alternative powered by OpenGL.
|
||||
|
||||
And the best bit: it can be installed in Ubuntu very easily.
|
||||
|
||||
### Animated Wallpaper Themes ###
|
||||
|
||||

|
||||
|
||||
Live Wallpaper is not the only app of this type, but it is one of the best.
|
||||
|
||||
It comes with a number of different themes out of the box.
|
||||
|
||||
These range from the subtle (‘noise’) to the frenetic (‘nexus’), and cater to everything in between. There’s even the obligatory clock wallpaper inspired by the welcome screen of the Ubuntu Phone:
|
||||
|
||||
- Circles — Clock inspired by Ubuntu Phone with ‘evolving circle’ aura
|
||||
- Galaxy — Spinning galaxy that can be resized/repositioned
|
||||
- Gradient Clock — A polar clock overlaid on basic gradient
|
||||
- Nexus — Brightly colored particles fire across screen
|
||||
- Noise — A bokeh design similar to the iOS dynamic wallpaper
|
||||
- Photoslide — Grid of photos from folder (default ~/Photos) animate in/out
|
||||
|
||||
Live Wallpaper is **fully open-source** so there’s nothing to stop imaginative artists with the know-how (and patience) from creating some slick themes of their own.
|
||||
|
||||
### Settings & Features ###
|
||||
|
||||

|
||||
|
||||
Each theme can be configured or customised in some way, though certain themes have more options than others.
|
||||
|
||||
For example, in Nexus (pictured above) you can change the number and colour of the pulse particles, their size, and their frequency.
|
||||
|
||||
The preferences app also provides a set of **general options** that will apply to all themes. These include:
|
||||
|
||||
- Setting live wallpaper to run on log-in
|
||||
- Setting a custom background that the animation sits on
|
||||
- Adjusting the FPS (including option to show FPS on screen)
|
||||
- Specifying the multi-monitor behaviour
|
||||
|
||||
With so many options available it should be easy to create a background set up that suits you.
|
||||
|
||||
### Drawbacks ###
|
||||
|
||||
#### No Desktop Icons ####
|
||||
|
||||
You can’t add, open or edit files or folders on the desktop while Live Wallpaper is ‘On’.
|
||||
|
||||
The Preferences app does list an option that will, supposedly, let you do this. It may work on much older releases but in our testing, on Ubuntu 14.10, it does nothing.
|
||||
|
||||
One workaround that seems to work for some users of the app on Ubuntu is setting a .png image as the custom background. It doesn’t have to be a transparent .png, simply a .png.
|
||||
|
||||
#### Resource Usage ####
|
||||
|
||||
Animated wallpapers use more system resources than standard background images.
|
||||
|
||||
We’re not talking about 50% load at all times — at least not with this app in our testing — but those on low-power devices and laptops will want to use apps like this cautiously. Use a [system monitoring tool][2] to keep an eye on CPU and GPU load.
|
||||
|
||||
#### Quitting the app ####
|
||||
|
||||
The biggest “bug” for me is the absolute lack of a “quit” option.
|
||||
|
||||
Sure, the animated wallpaper can be turned off from the Indicator Applet and the Preferences tool but quitting the app entirely, quitting the indicator applet? Nope. To do that I have to use the ‘pkill livewallpaper’ command in the Terminal.
|
||||
|
||||
### How to Install Live Wallpaper in Ubuntu 14.04 LTS + ###
|
||||
|
||||

|
||||
|
||||
To install Live Wallpaper in Ubuntu 14.04 LTS and above you will first need to add the official PPA for the app to your Software Sources.
|
||||
|
||||
The quickest way to do this is using the Terminal:
|
||||
|
||||
sudo add-apt-repository ppa:fyrmir/livewallpaper-daily
|
||||
|
||||
sudo apt-get update && sudo apt-get install livewallpaper
|
||||
|
||||
You should also install the indicator applet, which lets you quickly and easily turn on/off the animated wallpaper and switch theme from the menu area, and the GUI settings tool so that you can configure each theme based on your tastes.
|
||||
|
||||
sudo apt-get install livewallpaper-config livewallpaper-indicator
|
||||
|
||||
When everything has installed you will be able to launch the app and its preferences tool from the Unity Dash.
|
||||
|
||||

|
||||
|
||||
Annoyingly, the Indicator Applet won’t automatically open after you install it. It does add itself to the start up list, so a quick log out > log in will get it to show.
|
||||
|
||||
### Summary ###
|
||||
|
||||
If you fancy breathing life into a dull desktop, give it a spin — and let us know what you think of it and what animated wallpapers you’d love to see added!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/05/animated-wallpaper-adds-live-backgrounds-to-linux-distros
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:http://www.omgubuntu.co.uk/2012/11/live-wallpaper-for-ubuntu
|
||||
[2]:http://www.omgubuntu.co.uk/2011/11/5-system-monitoring-tools-for-ubuntu
|
@ -1,158 +0,0 @@
|
||||
Backup with these DeDuplicating Encryption Tools
|
||||
================================================================================
|
||||
Data is growing both in volume and value. It is becoming increasingly important to be able to back up and restore this information quickly and reliably. As society has adapted to technology and learned how to depend on computers and mobile devices, there are few that can deal with the reality of losing important data. Of firms that suffer the loss of data, 30% fold within a year, and 70% cease trading within five years. This highlights the value of data.
|
||||
|
||||
With data growing in volume, improving storage utilization is pretty important. In computing, data deduplication is a specialized data compression technique for eliminating duplicate copies of repeating data. This technique therefore improves storage utilization.
|
||||
|
||||
Data is not only of interest to its creator. Governments, competitors, criminals, and snoopers may be very keen to access your data. They might want to steal your data, extort money from you, or see what you are up to. Encryption is essential to protect your data.
|
||||
|
||||
So the solution is a deduplicating encrypting backup software.
|
||||
|
||||
Making file backups is an essential activity for all users, yet many users do not take adequate steps to protect their data. Whether a computer is being used in a corporate environment, or for private use, the machine's hard disk may fail without any warning signs. Alternatively, some data loss occurs as a result of human error. Without regular backups being made, data will inevitably be lost even if the services of a specialist recovery organisation are used.
|
||||
|
||||
This article provides a quick roundup of 6 deduplicating encryption backup tools.
|
||||
|
||||
----------
|
||||
|
||||
### Attic ###
|
||||
|
||||
Attic is a deduplicating, encrypted, authenticated and compressed backup program written in Python. The main goal of Attic is to provide an efficient and secure way to backup data. The data deduplication technique used makes Attic suitable for daily backups since only the changes are stored.
|
||||
|
||||
Features include:
|
||||
|
||||
- Easy to use
|
||||
- Space efficient storage: variable block size deduplication is used to reduce the number of bytes stored by detecting redundant data
|
||||
- Optional data encryption using 256-bit AES encryption. Data integrity and authenticity is verified using HMAC-SHA256
|
||||
- Off-site backups over SSH
|
||||
- Backups mountable as filesystems
|
||||
|
||||
Website: [attic-backup.org][1]
|
||||
|
||||
----------
|
||||
|
||||
### Borg ###
|
||||
|
||||
Borg is a fork of Attic. It is a secure open source backup program designed for efficient data storage where only new or modified data is stored.
|
||||
|
||||
The main goal of Borg is to provide an efficient and secure way to backup data. The data deduplication technique used makes Borg suitable for daily backups since only the changes are stored. The authenticated encryption makes it suitable for backups to not fully trusted targets.
|
||||
|
||||
Borg is written in Python. Borg was created in May 2015 in response to the difficulty of getting new code or larger changes incorporated into Attic.
|
||||
|
||||
Features include:
|
||||
|
||||
- Easy to use
|
||||
- Space efficient storage: variable block size deduplication is used to reduce the number of bytes stored by detecting redundant data
|
||||
- Optional data encryption using 256-bit AES encryption. Data integrity and authenticity is verified using HMAC-SHA256
|
||||
- Off-site backups over SSH
|
||||
- Backups mountable as filesystems
|
||||
|
||||
Borg is not compatible with Attic.
|
||||
|
||||
Website: [borgbackup.github.io/borgbackup][2]
|
||||
|
||||
----------
|
||||
|
||||
### Obnam ###
|
||||
|
||||
Obnam (OBligatory NAMe) is an easy to use, secure Python based backup program. Backups can be stored on local hard disks, or online via the SSH SFTP protocol. The backup server, if used, does not require any special software, on top of SSH.
|
||||
|
||||
Obnam performs de-duplication by splitting up file data into chunks, and storing those individually. Generations are incremental backups; every backup generation looks like a fresh snapshot, but is really incremental. Obnam is developed by Lars Wirzenius.
|
||||
|
||||
Features include:
|
||||
|
||||
- Easy to use
|
||||
- Snapshot backups
|
||||
- Data de-duplication, across files, and backup generations
|
||||
- Encrypted backups, using GnuPG
|
||||
- Backup multiple clients to a single repository
|
||||
- Backup checkpoints (creates a "save" every 100MB or so)
|
||||
- Number of options for performance tuning including lru-size and/or upload-queue-size
|
||||
- MD5 checksum algorithm for recognising duplicate data chunks
|
||||
- Store backups to a server via SFTP
|
||||
- Supports both push (i.e. run on the client) and pull (i.e. run on the server) methods
|
||||
|
||||
Website: [obnam.org][3]
|
||||
|
||||
----------
|
||||
|
||||
### Duplicity ###
|
||||
|
||||
Duplicity incrementally backs up files and directories by encrypting tar-format volumes with GnuPG and uploading them to a remote (or local) file server. To transmit data it can use ssh/scp, local file access, rsync, ftp, and Amazon S3.
|
||||
|
||||
Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. As the software uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.
|
||||
|
||||
Currently duplicity supports deleted files, full unix permissions, directories, symbolic links, fifos, etc.
|
||||
|
||||
The duplicity package also includes the rdiffdir utility. Rdiffdir is an extension of librsync's rdiff to directories; it can be used to produce signatures and deltas of directories as well as regular files.
|
||||
|
||||
Features include:
|
||||
|
||||
- Simple to use
|
||||
- Encrypted and signed archives (using GnuPG)
|
||||
- Bandwidth and space efficient, using the rsync algorithm
|
||||
- Standard file format
|
||||
- Choice of remote protocol
|
||||
- Local storage
|
||||
- scp/ssh
|
||||
- ftp
|
||||
- rsync
|
||||
- HSI
|
||||
- WebDAV
|
||||
- Amazon S3
|
||||
|
||||
Website: [duplicity.nongnu.org][4]
|
||||
|
||||
----------
|
||||
|
||||
### ZBackup ###
|
||||
|
||||
ZBackup is a versatile globally-deduplicating backup tool.
|
||||
|
||||
Features include:
|
||||
|
||||
- Parallel LZMA or LZO compression of the stored data. You can mix LZMA and LZO in a repository
|
||||
- Built-in AES encryption of the stored data
|
||||
- Possibility to delete old backup data
|
||||
- Use of a 64-bit rolling hash, keeping the amount of soft collisions to zero
|
||||
- Repository consists of immutable files. No existing files are ever modified
|
||||
- Written in C++ only with only modest library dependencies
|
||||
- Safe to use in production
|
||||
- Possibility to exchange data between repos without recompression
|
||||
- Uses a 64-bit modified Rabin-Karp rolling hash
|
||||
|
||||
Website: [zbackup.org][5]
|
||||
|
||||
----------
|
||||
|
||||
### bup ###
|
||||
|
||||
bup is a program written in Python that backs things up. It's short for "backup". It provides an efficient way to backup a system based on the git packfile format, providing fast incremental saves and global deduplication (among and within files, including virtual machine images).
|
||||
|
||||
bup is released under the LGPL version 2 license.
|
||||
|
||||
Features include:
|
||||
|
||||
- Global deduplication (among and within files, including virtual machine images)
|
||||
- Uses a rolling checksum algorithm (similar to rsync) to split large files into chunks
|
||||
- Uses the packfile format from git
|
||||
- Writes packfiles directly offering fast incremental saves
|
||||
- Can use "par2" redundancy to recover corrupted backups
|
||||
- Mount your bup repository as a FUSE filesystem
|
||||
|
||||
Website: [bup.github.io][6]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxlinks.com/article/20150628060000607/BackupTools.html
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://attic-backup.org/
|
||||
[2]:https://borgbackup.github.io/borgbackup/
|
||||
[3]:http://obnam.org/
|
||||
[4]:http://duplicity.nongnu.org/
|
||||
[5]:http://zbackup.org/
|
||||
[6]:https://bup.github.io/
|
@ -1,69 +0,0 @@
|
||||
sevenot translating
|
||||
First Stable Version Of Atom Code Editor Has Been Released
|
||||
================================================================================
|
||||

|
||||
|
||||
[Atom 1.0][1] is here. One of the [best open source code editors][2], [Atom][3] has been available for public use for almost a year, but this is the first stable version of the most talked about text/code editor of recent times. Promoted as the “hackable text editor for the 21st century”, this project of [Github][4] has already been downloaded 1.5 million times and currently has over 350,000 monthly active users.
|
||||
|
||||
### It’s been a long time ###
|
||||
|
||||
Rome was not built in a day and neither was Atom. Since it was first conceptualized in 2008 till the first stable release this month, it has taken several years and hundreds of contributors from across the globe, along with main developers working on Atom core. A quick look at the journey of Atom can be seen in the picture below:
|
||||
|
||||

|
||||
Image credit: Atom
|
||||
|
||||
### Back to the future ###
|
||||
|
||||
The launch of Atom 1.0 was announced with a retro video showing the capabilities of the editor. Resembling a 70s science fiction TV series, this will be the coolest video you are going to watch today :)
|
||||
|
||||
注:youtube视频,不行做个链接吧
|
||||
<iframe width="640" height="390" frameborder="0" allowfullscreen="true" src="http://www.youtube.com/embed/Y7aEiVwBAdk?version=3&rel=1&fs=1&showsearch=0&showinfo=1&iv_load_policy=1&wmode=transparent" type="text/html" class="youtube-player"></iframe>
|
||||
|
||||
### Features of Atom text editor ###
|
||||
|
||||
- Cross-platform editing
|
||||
- Built-in package manager
|
||||
- Smart autocompletion
|
||||
- File system browser
|
||||
- Multiple panes
|
||||
- Find and replace
|
||||
- Highly customizable
|
||||
- Modern look
|
||||
|
||||
### Get Atom 1.0 ###
|
||||
|
||||
Atom 1.0 is available for Linux, Windows and Mac OS X. For Debian based Linux distributions such as Ubuntu and Linux Mint, Atom provides .deb binaries. For Fedora, it also has .rpm binaries. You can also get the source code, if you like. The links below will let you download the latest stable version.
|
||||
|
||||
- [Atom .deb][5]
|
||||
- [Atom .rpm][6]
|
||||
- [Atom Source Code][7]
|
||||
|
||||
If you prefer, you can [install Atom in Ubuntu using PPA][8]. The PPA is not official though.
|
||||
|
||||
注:下面是一个调查,可以发布的时候在文章内发布个调查
|
||||
|
||||
#### Are you excited about Atom? ####
|
||||
|
||||
- Oh Yes! This is the best thing that could happen to programmers.
|
||||
- Not really. I have seen better editors.
|
||||
- Don't care. My default text editor does the job just fine.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/atom-stable-released/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://blog.atom.io/2015/06/25/atom-1-0.html
|
||||
[2]:http://itsfoss.com/best-modern-open-source-code-editors-for-linux/
|
||||
[3]:https://atom.io/
|
||||
[4]:https://github.com/
|
||||
[5]:https://atom.io/download/deb
|
||||
[6]:https://atom.io/download/rpm
|
||||
[7]:https://github.com/atom/atom/blob/master/docs/build-instructions/linux.md
|
||||
[8]:http://itsfoss.com/install-atom-text-editor-ubuntu-1404-linux-mint-17/
|
@ -1,3 +1,5 @@
|
||||
Translating by H-mudcup
|
||||
|
||||
Defending the Free Linux World
|
||||
================================================================================
|
||||

|
||||
@ -122,4 +124,4 @@ via: http://www.linuxinsider.com/story/Defending-the-Free-Linux-World-81512.html
|
||||
[2]:http://www.redhat.com/
|
||||
[3]:http://www.law.uh.edu/
|
||||
[4]:http://www.chaoticmoon.com/
|
||||
[5]:http://www.ieee.org/
|
||||
[5]:http://www.ieee.org/
|
||||
|
@ -1,3 +1,5 @@
|
||||
FSSlc Translating
|
||||
|
||||
7 communities driving open source development
|
||||
================================================================================
|
||||
Not so long ago, the open source model was the rebellious kid on the block, viewed with suspicion by established industry players. Today, open initiatives and foundations are flourishing with long lists of vendor committers who see the model as a key to innovation.
|
||||
@ -83,4 +85,4 @@ via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities
|
||||
[4]:http://www.openstack.org/foundation/
|
||||
[5]:http://www.opendaylight.org/
|
||||
[6]:http://www.apache.org/
|
||||
[7]:http://www.opencompute.org/
|
||||
[7]:http://www.opencompute.org/
|
||||
|
125
sources/talk/20150716 Interview--Larry Wall.md
Normal file
@ -0,0 +1,125 @@
|
||||
Interview: Larry Wall
|
||||
================================================================================
|
||||
> Perl 6 has been 15 years in the making, and is now due to be released at the end of this year. We speak to its creator to find out what’s going on.
|
||||
|
||||
Larry Wall is a fascinating man. He’s the creator of Perl, a programming language that’s widely regarded as the glue holding the internet together, and mocked by some as being a “write-only” language due to its density and liberal use of non-alphanumeric characters. Larry also has a background in linguistics, and is well known for delivering entertaining “State of the Onion” presentations about the future of Perl.
|
||||
|
||||
At FOSDEM 2015 in Brussels, we caught up with Larry to ask him why Perl 6 has taken so long (Perl 5 was released in 1994), how difficult it is to manage a project when everyone has strong opinions and pulling in different directions, and how his background in linguistics influenced the design of Perl from the start. Get ready for some intriguing diversions…
|
||||
|
||||

|
||||
|
||||
**Linux Voice: You once had a plan to go and find an undocumented language somewhere in the world and create a written script for it, but you never had the opportunity to fulfil this plan. Is that something you’d like to go back and do now?**
|
||||
|
||||
Larry Wall: You have to be kind of young to be able to carry that off! It’s actually a lot of hard work, and organisations that do these things don’t tend to take people in when they’re over a certain age. Partly this is down to health and vigour, but also because people are much better at picking up new languages when they’re younger, and you have to learn the language before making a script for it.
|
||||
|
||||
I started trying to teach myself Japanese about 10 years ago, and I could speak it quite well, because of my phonology and phonetics training – but it’s very hard for me to understand what anybody says. So I can go to Japan and ask for directions, but I can’t really understand the answers!
|
||||
|
||||
> “With Perl 6, we found some ways to make the computer more sure about what the user is talking about.”
|
||||
|
||||
So usually learning a language well enough to develop a writing system, and to at least be conversational in the language, takes some period of years before you can get to the point where you can actually do literacy and start educating people on their own culture, as it were. And then you teach them to write about their own culture as well.
|
||||
|
||||
Of course, if you have language helpers – and we were told not to call them “language informants”, or everyone would think we were working for the CIA! – if you have these people, you can get them to come in and help you learn the foreign language. They are not teachers but there are ways of eliciting things from someone who’s not a language teacher – they can still teach you how to speak. They can take a stick and point to it and say “that’s a stick”, and drop it and say “the stick falls”. Then you start writing things down and systematising things.
|
||||
|
||||
The motivation that most people have, going out to these groups, is to translate the Bible into their languages. But that’s only one part of it; the other is also culture preservation. Missionaries get kind of a bad rep on that, because anthropologists think they should be left to sit there in their own culture. But somebody is probably going to change their culture anyway – it’s usually the army, or businesses coming in, like Coca Cola or the sewing machine people, or missionaries. And of those three, the missionaries are the least damaging, if they’re doing their job right.
|
||||
|
||||
**LV: Many writing systems are based on existing scripts, and then you have invented ones like Greenlandic…**
|
||||
|
||||
LW: The Cherokee invented their own just by copying letters, and they don’t have much mapping to what we think of as letters, and it’s fairly arbitrary in that sense. It just has to represent how the people themselves think of the language, and sufficiently well to communicate. Often there will be variations on Western orthography, using characters from Latin where possible. Tonal languages have to mark the tones somehow, by accents or by numbers.
|
||||
|
||||
As soon as you start leaning towards a phonetic or phonological representation, then you also start to lose dialectical differences – or you have to write the dialectal differences. Or you have conventional spelling like we have in English, but pronunciation that doesn’t really match it.
|
||||
|
||||
**LV: When you started working on Perl, what did you take from your background in linguistics that made you think: “this is really important in a programming language”?**
|
||||
|
||||
LW: I thought a lot about how people use languages. In real languages, you have a system of nouns and verbs and adjectives, and you kind of know which words are which type. And in real natural languages, you have a lot of instances of shoving one word into a different slot. The linguistic theory I studied was called tagmemics, and it accounts for how this works in a natural language – that you could have something that you think of as a noun, but you can verb it, and people do that all the time.
|
||||
|
||||
You can pretty much shove anything in any slot, and you can communicate. One of my favourite examples is shoving an entire sentence in as an adjective. The sentence goes like this: “I don’t like your I-can-use-anything-as-an-adjective attitude”!
|
||||
|
||||
So natural language is very flexible this way because you have a very intelligent listener – or at least, compared with a computer – who you can rely on to figure out what you must have meant, in case of ambiguity. Of course, in a computer language you have to manage the ambiguity much more closely.
|
||||
|
||||
Arguably in Perl 1 through to 5 we didn’t manage it quite adequately enough. Sometimes the computer was confused when it really shouldn’t be. With Perl 6, we discovered some ways to make the computer more sure about what the user is talking about, even if the user is confused about whether something is really a string or a number. The computer knows the exact type of it. We figured out ways of having stronger typing internally but still have the allomorphic “you can use this as that” idea.
|
||||
|
||||

|
||||
|
||||
**LV: For a long time Perl was seen as the “glue” language of the internet, for fitting bits and pieces together. Do you see Perl 6 as a release to satisfy the needs of existing users, or as a way to bring in new people, and bring about a resurgence in the language?**
|
||||
|
||||
LW: The initial intent was to make a better Perl for Perl programmers. But as we looked at some of the inadequacies of Perl 5, it became apparent that if we fixed these inadequacies, Perl 6 would be more applicable, as I mentioned in my talk – like how J. R. R. Tolkien talked about applicability [see http://tinyurl.com/nhpr8g2].
|
||||
|
||||
The idea that “easy things should be easy and hard things should be possible” goes way back, to the boundary between Perl 2 and Perl 3. In Perl 2, we couldn’t handle binary data or embedded nulls – it was just C-style strings. I said then that “Perl is just a text processing language – you don’t need those things in a text processing language”.
|
||||
|
||||
But it occurred to me at the time that there were a large number of problems that were mostly text, and had a little bit of binary data in them – network addresses and things like that. You use binary data to open the socket but then text to process it. So the applicability of the language more than doubled by making it possible to handle binary data.
|
||||
|
||||
That began a trade-off about what things should be easy in a language. Nowadays we have a principle in Perl, and we stole the phrase Huffman coding for it, from the bit encoding system where you have different sizes for characters. Common characters are encoded in a fewer number of bits, and rarer characters are encoded in more bits.
|
||||
|
||||
> “There had to be a very careful balancing act. There were just so many good ideas at the beginning.”
|
||||
|
||||
We stole that idea as a general principle for Perl, for things that are commonly used, or when you have to type them very often – the common things need to be shorter or more succinct. Another bit of that, however, is that they’re allowed to be more irregular. In natural language, it’s actually the most commonly used verbs that tend to be the most irregular.
|
||||
|
||||
And there’s a reason for that, because you need more differentiation of them. One of my favourite books is called The Search for the Perfect Language by Umberto Eco, and it’s not about computer languages; it’s about philosophical languages, and the whole idea that maybe some ancient language was the perfect language and we should get back to it.
|
||||
|
||||
All of those languages make the mistake of thinking that similar things should always be encoded similarly. But that’s not how you communicate. If you have a bunch of barnyard animals, and they all have related names, and you say “Go out and kill the Blerfoo”, but you really wanted them to kill the Blerfee, you might get a cow killed when you want a chicken killed.
|
||||
|
||||
So in realms like that it’s actually better to differentiate the words, for more redundancy in the communication channel. The common words need to have more of that differentiation. It’s all about communicating efficiently, and then there’s also this idea of self-clocking codes. If you look at a UPC label on a product – a barcode – that’s actually a self-clocking code where each pair of bars and spaces is always in a unit of seven columns wide. You rely on that – you know the width of the bars will always add up to that. So it’s self-clocking.
|
||||
|
||||
There are other self-clocking codes used in electronics. In the old transmission serial protocols there were stop and start bits so you could keep things synced up. Natural languages also do this. For instance, in the writing of Japanese, they don’t use spaces. Because the way they write it, they will have a Kanji character from Chinese at the head of each phrase, and then the endings are written in a syllabary.
|
||||
|
||||
**LV: Hiragana, right?**
|
||||
|
||||
LW: Yes, Hiragana. So naturally the head of each phrase really stands out with this system. Similarly, in ancient Greek, most of the verbs were declined or conjugated. So they had standard endings that were sort-of a clocking mechanism. Spaces were optional in their writing system as well – it was a more modern invention to put the spaces in.
|
||||
|
||||
So similarly in computer languages, there’s value in having a self-clocking code. We rely on this heavily in Perl, and even more heavily in Perl 6 than in previous releases. The idea that when you’re parsing an expression, you’re either expecting a term or an infix operator. When you’re expecting a term you might also get a prefix operator – that’s kind-of in the same expectation slot – and when you’re expecting an infix you might also get a postfix for the previous term.
|
||||
|
||||
But it flips back and forth. And if the compiler actually knows which it is expecting, you can overload those a little bit, and Perl does this. So a slash when it’s expecting a term will introduce a regular expression, whereas a slash when you’re expecting an infix will be division. On the other hand, we don’t want to overload everything, because then you lose the self-clocking redundancy.
|
||||
|
||||
Most of our best error messages, for syntax errors, actually come out of noticing that you have two terms in a row. And then we try to figure out why there are two terms in a row – “oh, you must have left a semicolon out on the previous line”. So we can produce much better error messages than the more ad-hoc parsers.
|
||||
|
||||

|
||||
|
||||
**LV: Why has Perl 6 taken fifteen years? It must be hard overseeing a language when everyone has different opinions about things, and there’s not always the right way to do things, and the wrong way.**
|
||||
|
||||
LW: There had to be a very careful balancing act. There were just so many good ideas at the beginning – well, I don’t want to say they were all good ideas. There were so many pain points, like there were 361 RFCs [feature proposal documents] when I expected maybe 20. We had to sit back and actually look at them all, and ignore the proposed solutions, because they were all over the map and all had tunnel vision. Each one may have just changed one thing, but if we had done them all, it would’ve been a complete mess.
|
||||
|
||||
So we had to re-rationalise based on how people were actually hurting when they tried to use Perl 5. We started to look at the unifying, underlying ideas. Many of these RFCs were based on the fact that we had an inadequate type system. By introducing a more coherent type system we could fix many problems in a sane fashion and a cohesive fashion.
|
||||
|
||||
And we started noticing other ways how we could unify the featuresets and start reusing ideas in different areas. Not necessarily that they were the same thing underneath. We have a standard way of writing pairs – well, two ways in Perl! But the way of writing pairs with a colon could also be reused for radix notation, or for literal numbers in any base. It could also be used for various alternative forms of quoting. We say in Perl that it’s “strangely consistent”.
|
||||
|
||||
> “People who made early implementations of Perl 6 came back to me, cap in hand, and said “We really need a language designer.””
|
||||
|
||||
Similar ideas pop up, and you say “I’m already familiar with how that syntax works, but I see it’s being used for something else”. So it took some unity of vision to find these unifications. People who had the various ideas and made early implementations of Perl 6 came back to me, cap-in-hand, and said “We really need a language designer. Could you be our benevolent dictator?”
|
||||
|
||||
So I was the language designer, but I was almost explicitly told: “Stay out of the implementation! We saw what you did made out of Perl 5, and we don’t like it!” It was really funny because the innards of the new implementation started looking a whole lot like Perl 5 inside, and maybe that’s why some of the early implementations didn’t work well.
|
||||
|
||||
Because we were still feeling our way into the whole design, the implementations made a lot of assumptions about what VM should do and shouldn’t do, so we ended up with something like an object oriented assembly language. That sort of problem was fairly pervasive at the beginning. Then the Pugs guys came along and said “Let’s use Haskell, because it makes you think very clearly about what you’re doing. Let’s use it to clarify our semantic model underneath.”
|
||||
|
||||
So we nailed down some of those semantic models, but more importantly, we started building the test suite at that point, to be consistent with those semantic models. Then after that, the Parrot VM continued developing, and then another implementation, Niecza, came along and it was based on .NET. It was by a young fellow who was very smart and implemented a large subset of Perl 6, but he was kind of a loner, didn’t really figure out a way to get other people involved in his project.
|
||||
|
||||
At the same time the Parrot project was getting too big for anyone to really manage it inside, and very difficult to refactor. At that point the fellows working on Rakudo decided that we probably needed to be on more platforms than just the Parrot VM. So they invented a portability layer called NQP which stands for “Not Quite Perl”. They ported it to first of all run on the JVM (Java Virtual Machine), and while they were doing that they were also secretly working on a new VM called MoarVM. That became public a little over a year ago.
|
||||
|
||||
Both MoarVM and JVM run a pretty much equivalent set of regression tests – Parrot is kind-of trailing back in some areas. So that has been very good to flush out VM-specific assumptions, and we’re starting to think about NQP targeting other things. There was a Google Summer of Code project last year to target NQP to JavaScript, and that might fit right in, because MoarVM also uses Node.js for much of its more mundane processing.
|
||||
|
||||
We probably need to concentrate on MoarVM for the rest of this year, until we actually define 6.0, and then the rest will catch up.
|
||||
|
||||
**LV: Last year in the UK, the government kicked off the Year of Code, an attempt to get young people interested in programming. There are lots of opinions about how this should be done – like whether you should teach low-level languages at the start, so that people really understand memory usage, or a high-level language. What’s your take on that?**
|
||||
|
||||
LW: Up until now, the Python community has done a much better job of getting into the lower levels of education than we have. We’d like to do something in that space too, and that’s partly why we have the butterfly logo, because it’s going to be appealing to seven year old girls!
|
||||
|
||||
But we do think that Perl 6 will be learnable as a first language. A number of people have surprised us by learning Perl 5 as their first language. And you know, there are a number of fairly powerful concepts even in Perl 5, like closures, lexical scoping, and features you generally get from functional programming. Even more so in Perl 6.
|
||||
|
||||
> “Until now, the Python community has done a much better job of getting into the lower levels of education.”
|
||||
|
||||
Part of the reason Perl 6 has taken so long is that we have around 50 different principles we try to stick to, and in language design you end up juggling everything and saying “what’s really the most important principle here”? There has been a lot of discussion about a lot of different things. Sometimes we commit to a decision, work with it for a while, and then realise it wasn’t quite the right decision.
|
||||
|
||||
We didn’t design or specify pretty much anything about concurrent programming until someone came along who was smart enough about it and knew what the different trade-offs were, and that’s Jonathan Worthington. He has blended together ideas from other languages like Go and C#, with concurrent primitives that compose well. Composability is important in the rest of the language.
|
||||
|
||||
There are an awful lot of concurrent and parallel programming systems that don’t compose well – like threads and locks, and there have been lots of ways to do it poorly. So in one sense, it’s been worth waiting this extra time to see some of these languages like Go and C# develop really good high-level primitives – that’s sort of a contradiction in terms – that compose well.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxvoice.com/interview-larry-wall/
|
||||
|
||||
作者:[Mike Saunders][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxvoice.com/author/mike/
|
@ -1,88 +0,0 @@
|
||||
alim0x translating
|
||||
|
||||
The history of Android
|
||||
================================================================================
|
||||

|
||||
Gingerbread's new keyboard, text selection UI, overscroll effect, and new checkboxes.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
One of the most important additions to Android 2.3 was the system-wide text selection interface, which you can see in the Google search bar in the left screenshot. Long pressing a word would highlight it in orange and make draggable handles appear on either side of the highlight. You could then adjust the highlight using the handles and long press on the highlight to bring up options for cut, copy, and paste. Previous methods used tiny controls that relied on a trackball or D-Pad, but with this first finger-driven text selection method, the Nexus S didn’t need the extra hardware controls.
|
||||
|
||||
The right set of images shows the new checkbox design and overscroll effect. The Froyo checkbox worked like a light bulb—it would show a green check when on and a gray check when off. Gingerbread now displayed an empty box when an option is turned off—which made much more sense. Gingerbread was the first version to have an overscroll effect. An orange glow appeared when you hit the end of a list and grew larger as you pulled more against the dead end. Bounce scrolling would probably have made the most sense, but that was patented by Apple.
|
||||
|
||||

|
||||
The new dialer and dialog box design.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
The dialer received a little more love in Gingerbread. It became darker, and Google finally addressed the combination of sharp corners, rounded corners, and complete circles that it had going on. Now every corner was a sharp right angle. All the dial pad buttons were replaced with a weird underline, like some faint leftovers of what used to be a button. You were never really sure if you were supposed to see a button or not—our brains wanted to imagine the rest of the square.
|
||||
|
||||
The Wi-Fi network dialog is pictured to show off the rest of the system-wide changes. All the dialog box titles were changed from gray to black, every dialog box, dropdown, and button corner was sharpened up, and everything was a little bit darker. All these system-wide changes made all of Gingerbread look a lot less bubbly and more mature. The "all black everything" look wasn't necessarily the most welcoming color palette, but it certainly looked better than Android's previous gray-and-beige color scheme.
|
||||
|
||||

|
||||
The new Market, which added a massive green header.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
While not exclusive to Gingerbread, with the launch of the new OS came "Android Market 2.0." Most of the list design was the same, but Google covered the top third of the screen with a massive green banner that was used for featured apps and navigation. The primary design inspiration here was probably the green Android mascot—the color is a perfect match. At a time when the OS was getting a darker design, the neon green banner and white list made the Market a lot brighter.
|
||||
|
||||
However, the same green background image was used across phones, which meant on lower resolution devices, the green banner was even bigger. Users complained so much about the wasted screen space that later updates would make the green banner scroll up with the content. At the time, horizontal mode was even worse—it would fill the left half of the screen with the static green banner.
|
||||
|
||||

|
||||
An app page from the Market showing the collapsible text section, the "My apps" section, and Google Books screenshots.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
App pages were redesigned with collapsible sections. Rather than having to scroll through a thousand-line description, text boxes were truncated to only the first few lines. After that, a "more" button needed to be tapped. This allowed users to easily scroll through the list and find things like pictures and "contact developer," which would usually be farther down the page.
|
||||
|
||||
The other parts of the Market wisely toned down the green monster. The rest of the app was mostly just the old Market with new green navigational elements. Any of the old tabbed interfaces were upgraded to swipeable tabs. In the right Gingerbread image, swiping right-to-left would switch from "Top Paid" to "Top Free," which made navigation a little easier.
|
||||
|
||||
Gingerbread came with the first of what would become the Google Play content stores: Google Books. The app was a basic book reader that would display books in a simple thumbnail grid. The "Get eBooks" link at the top of the screen opened the browser and loaded a mobile website where you could buy books.
|
||||
|
||||
Google Books and the “My Apps" page of the Market were both examples of early precursors to the Action Bar. Just like the current guidelines, a stickied top bar featured the app icon, the name of the screen within the app, and a few controls. The layout of these two apps was actually pretty modern looking.
|
||||
|
||||

|
||||
The new Google Maps.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Google Maps (which, again, at this point was on the Android Market and not exclusive to this version of Android) now featured another action bar precursor in the form of a top-aligned control bar. This version of an early action bar featured a lot of experimenting. The majority of the bar was taken up with a search box, but you could never type into the bar. Tapping on it would open the old search interface from Android 1.x, with a totally different bar design and bubbly buttons. This 2.3 bar wasn't anything more than a really big search button.
|
||||
|
||||

|
||||
The new business pages, which switched from black to white.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Along with Places' new top billing in the app drawer came a redesigned interface. Unlike the rest of Gingerbread, this switched from black to white. Google also kept the old buttons with rounded corners. This new version of Maps helpfully displayed the hours of operation of a business, and it offered advanced search options like places that were currently open or thresholds for ratings and price. Reviews were brought to the surface, allowing a user to easily get a feel for the current business. It was now also possible to "star" a location from the search results and save it for later.
|
||||
|
||||

|
||||
The new YouTube design, which, amazingly, sort of matches the old Maps business page design.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
The YouTube app seemed completely separate from the rest of Android, as if whoever designed it had no idea what Gingerbread would end up looking like. Highlights were red and gray instead of green and orange, and rather than the flat black of Gingerbread, YouTube featured bubbly buttons, tabs, and bars with rounded corners and heavy gradients. The new app did get a few things right, though. All the tabs could be horizontally swiped through, and the app finally added a vertical viewing mode for videos. Android seemed like such an uncoordinated effort at this stage. It's as if someone told the YouTube team "make it black," and that was all the direction they were given. The only Android entity this seemed to match was the old Google Maps business page design.

Despite the weird design choices, the YouTube app had the best approximation yet of an action bar. Besides the bar at the top with an app logo and a few buttons, the rightmost button was labeled "more" and would bring up options that didn't fit in the bar. Today, this is called the "Overflow" button, and it's a standard UI piece.
|
||||
|
||||

|
||||
The new Google Talk, which supported voice and video calls, and the new Voice Actions interface.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
One last update for Gingerbread came with Android 2.3.4, which brought a new version of Google Talk. Unlike the Nexus One, the Nexus S had a front-facing camera—and the redesigned version of Google Talk had voice and video calling. The colored indicators on the right of the friends list were used to indicate not only presence, but voice and video availability. A dot was text only, a microphone was text or voice, and a camera was text, voice, or video. If available, tapping on a voice or video icon would immediately start a call with that person.
|
||||
|
||||
Gingerbread is the oldest version of Android still supported by Google. Firing up a Gingerbread device and letting it sit for a few minutes will result in a ton of upgrades. Gingerbread will pull down Google Play Services, which brings a raft of new API support, and it will upgrade to the very newest version of the Play Store. Open the Play Store and hit the update button, and just about every single Google app will be replaced with a modern version. We tried to keep this article authentic to the time Gingerbread was released, but a real user stuck on Gingerbread today will be treated to a flood of anachronisms.
|
||||
|
||||
Gingerbread is still supported because there are a good number of users still running the now-ancient OS. Gingerbread's staying power is due to its extremely low system requirements, making it the go-to choice for slow, cheap phones. The next few versions of Android were much more exclusive and/or demanding on hardware. For instance, Android 3.0 Honeycomb is not open source, meaning it could only be ported to a device with Google's cooperation. It was also only for tablets, making Gingerbread the newest phone version of Android for a very long time. 4.0 Ice Cream Sandwich was the next phone release, but it significantly raised Android's system requirements, cutting off the low end of the market. Google is hoping to get cheaper phones back on the update track with 4.4 KitKat, which brings the system requirements back down to 512MB of RAM. The passage of time helps, too—by now, even cheap SoCs have caught up to the demands of a 4.0-era version of Android.
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
|
||||
|
||||
[@RonAmadeo][t]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/15/
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
@@ -1,66 +0,0 @@
|
||||
The history of Android
|
||||
================================================================================
|
||||
### Android 3.0 Honeycomb—tablets and a design renaissance ###
|
||||
|
||||
Despite all the changes made in Gingerbread, Android was still the ugly duckling of the mobile world. Compared to the iPhone, its level of polish and design just didn't hold up. On the other hand, one of the few operating systems that could stand up to iOS's aesthetic acumen was Palm's WebOS. WebOS was a cohesive, well-designed OS with several innovative features, and it was supposed to save the company from the relentless march of the iPhone.
|
||||
|
||||
A year after launch though, Palm was running out of cash. The company never saw the iPhone coming, and by the time WebOS was ready, it was too late. In April 2010, Hewlett-Packard purchased Palm for $1 billion. While HP bought a product with a great user interface, the lead designer of that interface, a man by the name of Matias Duarte, did not join HP. In May 2010, just before HP took control of Palm, Duarte jumped ship to Google. HP bought the bread, but Google hired the baker.
|
||||
|
||||

|
||||
The first Honeycomb device, the Motorola Xoom 10-inch tablet.
|
||||
|
||||
At Google, Duarte was named the Director of Android User Experience. This was the first time someone was publicly in charge of the way Android looked. While Matias landed at Google during the launch of Android 2.2, the first version he truly impacted was Android 3.0, Honeycomb, released in February 2011.
|
||||
|
||||
By Google's own admission, Honeycomb was rushed out the door. Ten months prior, Apple modernized the tablet with the launch of the iPad, and Google wanted to respond as quickly as possible. Honeycomb was that response, a version of Android that ran on 10-inch touchscreens. Sadly, getting this OS to market was such a priority that corners were cut to save time.
|
||||
|
||||
The new OS was for tablets only—phones would not be updated to Honeycomb, which spared Google the difficult problem of making the OS work on wildly different screen sizes. But with phone support off the table, a Honeycomb source drop never happened. Previous Android versions were open source, enabling the hacking community to port the latest version to all sorts of different devices. Google didn't want app developers to feel pressured to support half-broken Honeycomb phone ports, so Google kept the source to itself and strictly controlled what could and couldn't have Honeycomb. The rushed development led to problems with the software, too. At launch, Honeycomb wasn't particularly stable, SD cards didn't work, and Adobe Flash—one of Android's big differentiators—wasn't supported.
|
||||
|
||||
One of the few devices that could have Honeycomb was [the Motorola Xoom][1], the flagship product for the new OS. The Xoom was a 10-inch, 16:9 tablet with 1GB of RAM and a dual-core, 1GHz Nvidia Tegra 2 processor. Despite being the launch device of a new version of Android where Google controlled the updates directly, the device wasn't called a "Nexus." The most likely reason for this was that Google didn't feel confident enough in the product to call it a flagship.
|
||||
|
||||
Nevertheless, Honeycomb was a major milestone for Android. With an experienced designer in charge, the entire Android user interface was rebuilt, and most of the erratic app designs were brought to heel. Android's default apps finally looked like pieces of a cohesive whole with similar layouts and theming across the board. Redesigning Android would be a multi-version project though—Honeycomb was just the start of getting Android whipped into shape. This first draft laid the groundwork for how future versions of Android would function, but it also used a heavy-handed sci-fi theme that Google would spend the next few versions toning down.
|
||||
|
||||

|
||||
The home screens of Honeycomb and Gingerbread.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
While Gingerbread only experimented with a sci-fi look in its photon wallpaper, Honeycomb went full sci-fi with a Tron-inspired theme for the entire OS. Everything was made black, and if you needed a contrasting color, you could choose from a few different shades of blue. Everything that was made blue was also given a "glow" effect, making the entire OS look like it was powered by alien technology. The default background was a holographic grid of hexagons (a Honeycomb! get it?) that looked like it was the floor of a teleport pad on a spaceship.
|
||||
|
||||
The most important change of Honeycomb was the addition of the system bar. The Motorola Xoom had no hardware buttons other than power and volume, so a large black bar was added along the bottom of the screen that housed the navigational buttons. This meant the default Android interface no longer needed specialized hardware buttons. Previously, Android couldn't function without hardware Back, Menu, and Home keys. Now, with the software supplying all the necessary buttons, anything with a touch screen was able to run Android.
|
||||
|
||||
The biggest benefit of the new software buttons was flexibility. The new app guidelines stated that apps should no longer require a hardware menu button, but for those that still did, Honeycomb detected this and added a fourth button to the system bar that allowed these apps to work. The other flexibility attribute of software buttons was that they could change orientation with the device. Other than the power and volume buttons, the Xoom's orientation really wasn't important. The system bar always sat on the "bottom" of the device from the user's perspective. The trade-off was that a big bar along the bottom of the screen definitely sucked up some screen real estate. To save space on 10-inch tablets, the status bar was merged into the system bar. All the usual status duties lived on the right side—there was battery and connectivity status, the time, and notification icons.
|
||||
|
||||
The whole layout of the home screen changed, placing UI pieces in each of the four corners of the device. The bottom left housed the previously discussed navigational buttons, the bottom right was for status and notifications, the top left displayed text search and voice search, and the top right had buttons for the app drawer and adding widgets.
|
||||
|
||||

|
||||
The new lock screen and Recent Apps interface.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
(Since the Xoom was a heavy, 10-inch, 16:9 tablet, it was primarily meant to be used horizontally. Most apps also supported portrait mode, though, so for the sake of our formatting, we're using mostly portrait mode shots. Just keep in mind the Honeycomb shots come from a 10-inch tablet, and the Gingerbread shots come from a 3.7-inch phone. The densities of information are not directly comparable.)
|
||||
|
||||
The unlock screen—after switching from a menu button to a rotary dial to slide-to-unlock—removed any required accuracy from the unlock process by switching to a circle unlock. Swiping from the center outward in any direction would unlock the device. Like the rotary unlock, this was much nicer ergonomically than forcing your finger to follow a perfectly straight path.
|
||||
|
||||
The strip of thumbnails in the second picture was the interface brought up by the newly christened "Recent Apps" button, now living next to Back and Home. Rather than the group of icons brought up in Gingerbread by long-pressing on the home button, Honeycomb showed app icons and thumbnails on the screen, which made it a lot easier to switch between tasks. Recent Apps was clearly inspired by Duarte's "card" multitasking in WebOS, which used full-screen thumbnails to switch tasks. This design offered the same ease-of-recognition as WebOS's task switcher, but the smaller thumbnails allowed more apps to fit on screen at once.
|
||||
|
||||
While this implementation of Recent Apps may look like what you get on a current device, this version was very early. The list didn't scroll, meaning it showed seven apps in portrait mode and only five apps in horizontal mode. Anything beyond that was bumped off the list. You also couldn't swipe away thumbnails to close apps—this was just a static list.
|
||||
|
||||
Here we see the Tron influence in full effect: the thumbnails had blue outlines and an eerie glow around them. This screenshot also shows a benefit of software buttons—context. The back button closed the list of thumbnails, so instead of the normal arrow, this pointed down.
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
|
||||
|
||||
[@RonAmadeo][t]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/16/
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://arstechnica.com/gadgets/2011/03/ars-reviews-the-motorola-xoom/
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
@@ -1,86 +0,0 @@
|
||||
The history of Android
|
||||
================================================================================
|
||||

|
||||
The Honeycomb app lineup lost a ton of apps. This also shows the notification panel and the new quick settings.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
The default app icons were slashed from 32 to 25, and two of those were third-party games. Since Honeycomb was not for phones and Google wanted the default apps to all be tablet-optimized, a lot of apps didn't make the cut. We lost the Amazon MP3 store, Car Home, Facebook, Google Goggles, Messaging, News and Weather, Phone, Twitter, Google Voice, and Voice Dialer. Google was quietly building a music service that would launch soon, so the Amazon MP3 store needed to go anyway. Car Home, Messaging, and Phone made little sense on a non-phone device, Facebook and Twitter still don't have tablet Android apps, and Goggles, News and Weather, and Voice Dialer were barely supported applications that most people wouldn't miss.
|
||||
|
||||
Almost every app icon was new. Just like the switch from the G1 to the Motorola Droid, the biggest impetus for change was probably the bump in resolution. The Nexus S had an 800×480 display, and Gingerbread came with art assets to match. The Xoom used a whopping 1280×800 10-inch display, which meant nearly every piece of art had to go. But again, this time a real designer was in charge, and things were a lot more cohesive. Honeycomb marked the switch from a vertically scrolling app drawer to a paginated horizontal drawer. This change made sense on a horizontal device, but on phones it was still much faster to navigate the app drawer with a flingable, vertical list.
|
||||
|
||||
The second Honeycomb screenshot shows the new notification panel. The gray and black Gingerbread design was tossed for another straight-black panel that gave off a blue glow. At the top was a block showing the time, date, connection status, battery, and a shortcut to the notification quick settings, and below that were the actual notifications. Non-permanent notifications could now be dismissed by tapping on an "X" on the right side of the notification. Honeycomb was the first version to enable controls within a notification. The first (and at the launch of Honeycomb, only) app to take advantage of this was the new Google Music app, which placed previous, play/pause, and next buttons in its notification. These new controls could be accessed from any app and made controlling music a breeze.
|
||||
|
||||

|
||||
"Add to home screen" was given a zoomed-out interface for easy organizing. The search interface split auto suggest and universal search into different panes.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Pressing the plus button in the top right corner of the home screen or long pressing on the background would open the new home screen configuration interface. Honeycomb showed a zoomed-out view of all the home screens along the top of the screen, and it filled the bottom half of the screen with a tabbed drawer containing widgets and shortcuts. Items could be dragged out of the bottom drawer and into any of the five home screens. Gingerbread would just show a list of text, but Honeycomb showed full thumbnail previews of the widgets. This gave you a much better idea of what a widget would look like instead of an app-name-only description like "calendar."
|
||||
|
||||
The larger screen of the Motorola Xoom allowed the keyboard to take on a more PC-style layout, with keys like backspace, enter, shift, and tab put in the traditional locations. The keyboard took on a blueish tint and gained even more spacing between the keys. Google also added a dedicated smiley-face button. :-)
|
||||
|
||||

|
||||
Gmail on Honeycomb versus Gmail on Gingerbread with the menu open. Buttons were placed on the main screen for easier discovery.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Gmail demonstrated all the new UI concepts in Honeycomb. Android 3.0 did away with hiding all the controls behind a menu button. There was now a strip of icons along the top of the screen called the Action Bar, which lifted many useful controls to the main screen where users could see them. Gmail showed buttons for search, compose, and refresh, and it put less useful controls like settings, help, and feedback in a dropdown called the "overflow" button. Tapping checkboxes or selecting text would cause the entire action bar to change to icons relating to those actions—for instance, selecting text would bring up cut, copy, and select all buttons.
|
||||
|
||||
The app icon displayed in the top left corner doubled as a navigation button called "Up." While "Back" worked similarly to a browser back button, navigating to previously visited screens, "Up" would navigate up the app hierarchy. For instance, if you were in the Android Market, pressed the "Email developer" button, and Gmail opened, "Back" would take you back to the Android Market, but "Up" would take you to the Gmail inbox. "Back" might close the current app, but "Up" never would. Apps could control the "Back" button, and they usually reprogrammed it to replicate the "Up" functionality. In practice, there was rarely a difference between the two buttons.
|
||||
|
||||
Honeycomb also introduced the "Fragments" API, which allowed developers to use a single app for tablets and phones. A "Fragment" was a single pane of a user interface. In the Gmail picture above, the left folder list was one fragment and the inbox was another fragment. Phones would show one fragment per screen, and tablets could show two side-by-side. The developer defined the look of individual fragments, and Android would decide how they should be displayed based on the current device.
|
||||
|
||||

|
||||
The calculator finally used regular Android buttons, but someone spilled blue ink on the calendar.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
For the first time in Android's history, the calculator got a makeover with non-custom buttons, so it actually looked like part of the OS. The bigger screen made room for more buttons, enough that all the calculator functionality could fit on one screen. The calendar greatly benefited from the extra space, gaining much more room for appointment text and controls. The action bar at the top of the screen held buttons to switch views, along with showing the current time span and common controls. Appointment blocks switched to a white background, with the calendar color showing only in the top right corner. At the bottom (or side, in horizontal view) were boxes showing the month calendar and a list of displayed calendars.
|
||||
|
||||
The scale of the calendar could be adjusted, too. By performing a pinch zoom gesture, portrait week and day views could show between five and 19 hours of appointments on a single screen. The background of the calendar was made up of an uneven blue splotch, which didn't look particularly great and was tossed out in later versions.
|
||||
|
||||

|
||||
The new camera interface, showing off the live "Negative" effect.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
The giant 10-inch Xoom tablet did have a camera, which meant that it also had a camera app. The Tron redesign finally got rid of the old faux-leather look that Google came up with in Android 1.6. The controls were laid out in a circle around the shutter button, bringing to mind the circular controls and dials on a real camera. The Cooliris-derived speech bubble popups were changed to glowing, semi-transparent black boxes. The Honeycomb screenshot shows the new "color effect" functionality, which applied a filter to the viewfinder in real time. Unlike the Gingerbread camera app, this didn't support a portrait orientation—it was limited to landscape only. Taking a portrait picture with a 10-inch tablet doesn't make much sense, but then neither does taking a landscape one.
|
||||
|
||||

|
||||
The clock app didn't get quite as much love as other areas. Google just threw it into a tiny box and called it a day.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Tons of functionality went out the door when it came time to remake the clock app. The entire "Deskclock" concept was kicked out the door, replaced with a simple large display of the time against a plain black background. The ability to launch other apps and view the weather was gone, as was the ability of the clock app to use your wallpaper. Google sometimes gave up when it came time to design a tablet-sized interface, like here, where it just threw the alarm interface into a tiny, centered dialog box.
|
||||
|
||||

|
||||
The Music app finally got the ground-up redesign it has needed forever.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
While the Music app received a few minor additions during its life, this was really the first time since Android 0.9 that it got serious attention. The highlight of the redesign was a don't-call-it-coverflow scrolling 3D album art view, called "New and Recent." Instead of the tabs added in Android 2.1, navigation was handled by a dropdown box in the Action Bar. While "New and Recent" had 3D scrolling album art, "Albums" used a flat grid of album thumbnails. The other sections had totally different designs, too. "Songs" used a vertically scrolling list of text, and "Playlists," "Genres," and "Artists" used stacked album art.
|
||||
|
||||
In nearly every view, every single item had its own individual menu, usually little arrows in the bottom right corner of an item. For now, these would only show "Play" and "add to Playlist," but this version of Google Music was built for the future. Google was launching a Music service soon, and those individual menus would be needed for things like viewing other content from that artist in the Music Store and managing the cloud storage versus local storage options.
|
||||
|
||||
Just like the Cooliris Gallery in Android 2.1, Google Music would blow up one of your thumbnails and use it as a background. The bottom "Now Playing" bar now displayed the album art, playback controls, and a song progress bar.
|
||||
|
||||

|
||||
Some of the new Google Maps was really nice, and some of it was from Android 1.5.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Google Maps received another redesign for the big screen. This one would stick around for a while and used a semi-transparent black action bar for all the controls. Search was again the primary function, given the first spot in the action bar, but this time it was an actual search bar you could type in, instead of a search bar-shaped button that launched a completely different interface. Google finally gave up on dedicating screen space to actual zoom buttons, relying on only gestures to control the map view. While the feature has since been ported to all old versions of Maps, Honeycomb was the first version to feature 3D building outlines on the map. Dragging two fingers down on the map would "tilt" the map view and show the sides of the buildings. You could freely rotate and the buildings would adjust, too.
|
||||
|
||||
Not every part of Maps was redesigned. Navigation was untouched from Gingerbread, and some core parts of the interface, like directions, were pulled straight from Android 1.6 and centered in a tiny box.
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
|
||||
|
||||
[@RonAmadeo][t]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/17/
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
@@ -1,3 +1,5 @@
|
||||
alim0x translating
|
||||
|
||||
The history of Android
|
||||
================================================================================
|
||||

|
||||
@@ -80,4 +82,4 @@ via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-histor
|
||||
|
||||
[1]:http://techcrunch.com/2014/03/03/gartner-195m-tablets-sold-in-2013-android-grabs-top-spot-from-ipad-with-62-share/
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
||||
|
@@ -1,207 +0,0 @@
|
||||
Syncthing: A Private, And Secure Tool To Sync Files/Folders Between Computers
|
||||
================================================================================
|
||||
### Introduction ###
|
||||
|
||||
**Syncthing** is a free, open source tool that can be used to sync files/folders between your networked computers. Unlike other sync tools, such as **BitTorrent Sync** or **Dropbox**, Syncthing transfers data directly from one system to another, and it is completely open source, secure, and private. All of your precious data is stored on your own systems, so you have full control over your files and folders, and none of them are stored on any third-party systems. You also decide where your data is stored, whether it is shared with any third party, and how it is transmitted over the Internet.

All communication is encrypted using TLS, so your data is safe from prying eyes. Syncthing has a responsive and powerful WebGUI which helps users easily add, delete, and manage the directories to be synced over the network. Using Syncthing, you can sync multiple folders to multiple systems at a time. Syncthing is a simple, portable, yet powerful tool in terms of installation and usage. Since all files/folders are transferred directly from one computer to another, you don't have to worry about purchasing extra space from a cloud provider. All you need is a stable LAN/WAN connection and enough disk space on your systems. It supports all modern operating systems, including GNU/Linux, Windows, Mac OS X, and of course Android.
|
||||
|
||||
### Installation ###
|
||||
|
||||
For the purpose of this tutorial, we will be using two systems, one running Ubuntu 14.04 LTS server and the other running Ubuntu 14.10 server. To easily distinguish these two systems, we will call them **system1** and **system2**.
|
||||
|
||||
### System1 Details: ###
|
||||
|
||||
- **OS**: Ubuntu 14.04 LTS server;
|
||||
- **Hostname**: server1.unixmen.local;
|
||||
- **IP Address**: 192.168.1.150.
|
||||
- **System user**: sk (You can use your own)
|
||||
- **Sync Directory**: /home/Sync/ (Will be created by default by Syncthing)
|
||||
|
||||
### System2 Details: ###
|
||||
|
||||
- **OS**: Ubuntu 14.10 server;
|
||||
- **Hostname**: server.unixmen.local;
|
||||
- **IP Address**: 192.168.1.151.
|
||||
- **System user**: sk (You can use your own)
|
||||
- **Sync Directory**: /home/Sync/ (Will be created by default by Syncthing)
|
||||
|
||||
### Creating User For Syncthing On System1 & System2: ###
|
||||
|
||||
Run the following commands on both systems to create the user that will run Syncthing (the directory to be synced will be created automatically by Syncthing on first run):
|
||||
|
||||
sudo useradd sk
|
||||
sudo passwd sk
|
||||
|
||||
### Install Syncthing On System1 And System2: ###
|
||||
|
||||
You should do the following steps on both System 1 and System 2.
|
||||
|
||||
Download the latest version from the [official download page][1]. As I am using a 64-bit system, I downloaded the 64-bit package.
|
||||
|
||||
wget https://github.com/syncthing/syncthing/releases/download/v0.10.20/syncthing-linux-amd64-v0.10.20.tar.gz
|
||||
|
||||
Extract the downloaded file:
|
||||
|
||||
tar xzvf syncthing-linux-amd64-v0.10.20.tar.gz
|
||||
|
||||
Change into the extracted folder:
|
||||
|
||||
cd syncthing-linux-amd64-v0.10.20/
|
||||
|
||||
Copy the executable file "syncthing" to a directory in your **$PATH**:
|
||||
|
||||
sudo cp syncthing /usr/local/bin/
|
||||
|
||||
Now, run the following command to start Syncthing for the first time:
|
||||
|
||||
syncthing
|
||||
|
||||
When you run the above command, Syncthing will generate a configuration and some keys and then start the admin GUI in your browser. You should see something like the output below.
|
||||
|
||||
Sample output:
|
||||
|
||||
[monitor] 15:40:27 INFO: Starting syncthing
|
||||
15:40:27 INFO: Generating RSA key and certificate for syncthing...
|
||||
[BQXVO] 15:40:34 INFO: syncthing v0.10.20 (go1.4 linux-386 default) unknown-user@syncthing-builder 2015-01-13 16:27:47 UTC
|
||||
[BQXVO] 15:40:34 INFO: My ID: BQXVO3D-VEBIDRE-MVMMGJI-ECD2PC3-T5LT3JB-OK4Z45E-MPIDWHI-IRW3NAZ
|
||||
[BQXVO] 15:40:34 INFO: No config file; starting with empty defaults
|
||||
[BQXVO] 15:40:34 INFO: Edit /home/sk/.config/syncthing/config.xml to taste or use the GUI
|
||||
[BQXVO] 15:40:34 INFO: Starting web GUI on http://127.0.0.1:8080/
|
||||
[BQXVO] 15:40:34 INFO: Loading HTTPS certificate: open /home/sk/.config/syncthing/https-cert.pem: no such file or directory
|
||||
[BQXVO] 15:40:34 INFO: Creating new HTTPS certificate
|
||||
[BQXVO] 15:40:34 INFO: Generating RSA key and certificate for server1...
|
||||
[BQXVO] 15:41:01 INFO: Starting UPnP discovery...
|
||||
[BQXVO] 15:41:07 INFO: Starting local discovery announcements
|
||||
[BQXVO] 15:41:07 INFO: Starting global discovery announcements
|
||||
[BQXVO] 15:41:07 OK: Ready to synchronize default (read-write)
|
||||
[BQXVO] 15:41:07 INFO: Device BQXVO3D-VEBIDRE-MVMMGJI-ECD2PC3-T5LT3JB-OK4Z45E-MPIDWHI-IRW3NAZ is "server1" at [dynamic]
|
||||
[BQXVO] 15:41:07 INFO: Completed initial scan (rw) of folder default
|
||||
|
||||
Syncthing has been successfully initialized, and the web admin interface can be accessed from your browser at **http://localhost:8080**. As you see in the above output, Syncthing has automatically created a folder called **default** for you, in a directory called **Sync** in your **home** directory.
|
||||
|
||||
By default, the Syncthing WebGUI can only be accessed from the localhost itself. To access the WebGUI from remote systems, you need to make the following change on both systems.
|
||||
|
||||
First, stop the Syncthing initialization process by pressing CTRL+C. You will be returned to the terminal.
|
||||
|
||||
Edit the file **config.xml**:
|
||||
|
||||
sudo nano ~/.config/syncthing/config.xml
|
||||
|
||||
Find this directive:
|
||||
|
||||
[...]
|
||||
<gui enabled="true" tls="false">
|
||||
<address>127.0.0.1:8080</address>
|
||||
<apikey>-Su9v0lW80JWybGjK9vNK00YDraxXHGP</apikey>
|
||||
</gui>
|
||||
[...]
|
||||
|
||||
In the **<address>** field, change **127.0.0.1:8080** to **0.0.0.0:8080**. So, your config.xml will look like below.
|
||||
|
||||
<gui enabled="true" tls="false">
|
||||
<address>0.0.0.0:8080</address>
|
||||
<apikey>-Su9v0lW80JWybGjK9vNK00YDraxXHGP</apikey>
|
||||
</gui>
|
||||
|
||||
Save and close the file.
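If you prefer to make the same change from the command line on both systems, a one-line sed command (a sketch, assuming the default config location used above) does the job:

    sed -i 's/127.0.0.1:8080/0.0.0.0:8080/' ~/.config/syncthing/config.xml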
|
||||
|
||||
Now, start Syncthing again on both systems by entering the following command:
|
||||
|
||||
syncthing
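Running it this way ties Syncthing to your terminal session. As a rough sketch, you could instead keep it running in the background and capture its output in a log file (the log path here is just an example):

    nohup syncthing > ~/syncthing.log 2>&1 &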
|
||||
|
||||
### Access the WebGUI ###
|
||||
|
||||
Now, open **http://ip-address:8080/** in your browser. You will see the following screen:
|
||||
|
||||

|
||||
|
||||
The WebGUI has two panes. In the left pane, you will see the list of folders to be synced. As I mentioned before, the folder **default** was created automatically for you when Syncthing was initialized. If you want to sync more folders, you can add them using the **Add Folder** button.
|
||||
|
||||
In the right pane, you see the number of devices connected. Currently there is only one device: the computer you are running Syncthing on.
|
||||
|
||||
### Configure Syncthing Web GUI ###
|
||||
|
||||
To enhance security, let us enable TLS and set up an administrative username and password for the WebGUI. To do that, click on the gear button in the top right corner and select **Settings**.
|
||||
|
||||

|
||||
|
||||
Enter the admin username and password. In my case it is admin/ubuntu, but you should use a strong password. Also, check the box that says **Use HTTPS for GUI**.
|
||||
|
||||

|
||||
|
||||
Click the Save button. You'll now be asked to restart Syncthing to activate the changes. Click Restart.
|
||||
|
||||

|
||||
|
||||
Refresh your web browser. You'll see an SSL warning like the one below. Click on the button that says **I Understand the Risks**, and then click the Add Exception button to add this page to the browser's trusted list.
|
||||
|
||||

|
||||
|
||||
Enter the administrative username and password we configured in the previous steps. In my case it's **admin/ubuntu**.
|
||||
|
||||

|
||||
|
||||
We have now secured the WebGUI. Don't forget to repeat the same steps on both servers.
|
||||
|
||||
### Connect Servers To Each Other ###
|
||||
|
||||
To sync folders between systems, you must tell them about each other. This is accomplished by exchanging "device IDs". You can find a system's ID in the web GUI by selecting the "gear menu" (top right) and "Show ID".
|
||||
|
||||
For example, here is my System 1 ID.
|
||||
|
||||

|
||||
|
||||
Copy the ID and go to the other system's (System 2) WebGUI. There, click on Add Device on the right side.
|
||||
|
||||

|
||||
|
||||
The following screen should appear. Paste the **System 1 ID** in the Device section. Enter the device name (optional). In the Addresses field, you can either enter the IP address of the other system or leave it at the default value, **dynamic**. Finally, select the folder to be synced. In our case, the sync folder is **default**.
|
||||
|
||||

|
||||
|
||||
Once you are done, click on the Save button. You'll be asked to restart Syncthing; click the Restart button to activate the changes.
|
||||
|
||||
Now, go to the **System 1** WebGUI; you'll see that a request to connect and sync has been sent from System 2. Click the **Add** button. System 2 will then ask System 1 to share and sync the folder called "default". Click the **Share** button.
|
||||
|
||||

|
||||
|
||||
Next, restart Syncthing on System 1 to activate the changes.
|
||||
|
||||

|
||||
|
||||
Wait a few seconds, approximately 60, and you'll see that the two systems have been successfully connected and are syncing with each other.
|
||||
|
||||
You can verify this under the Add Device section of the WebGUI.
|
||||
|
||||
System 1 WebGUI console after adding System 2:
|
||||
|
||||

|
||||
|
||||
System 2 WebGUI console after adding System 1:
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
Now, put any file or folder into either system's "**default**" folder. It will be synced to the other system automatically.
|
||||
|
||||
That’s it! Happy Sync’ing!!
|
||||
|
||||
Cheers!!!
|
||||
|
||||
- [Syncthing Website][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/syncthing-private-secure-tool-sync-filesfolders-computers/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/sk/
|
||||
[1]:https://github.com/syncthing/syncthing/releases/tag/v0.10.20
|
||||
[2]:http://syncthing.net/
|
@@ -1,3 +1,5 @@
|
||||
translating...
|
||||
|
||||
How to set up IPv6 BGP peering and filtering in Quagga BGP router
|
||||
================================================================================
|
||||
In the previous tutorials, we demonstrated how we can set up a [full-fledged BGP router][1] and configure [prefix filtering][2] with Quagga. In this tutorial, we are going to show you how we can set up IPv6 BGP peering and advertise IPv6 prefixes through BGP. We will also demonstrate how we can filter IPv6 prefixes advertised or received by using prefix-list and route-map features.
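To give a flavor of what the configuration looks like, here is a minimal sketch of an IPv6 BGP session with an inbound prefix-list in Quagga's bgpd; the AS numbers, addresses, and prefixes below are invented for illustration, and the full tutorial covers the details:

    router bgp 65001
     no bgp default ipv4-unicast
     neighbor 2001:db8:ffff::2 remote-as 65002
     address-family ipv6
      network 2001:db8:1::/48
      neighbor 2001:db8:ffff::2 activate
      neighbor 2001:db8:ffff::2 prefix-list ALLOWED-IN in
     exit-address-family
    !
    ipv6 prefix-list ALLOWED-IN seq 5 permit 2001:db8:2::/48 le 64
    ipv6 prefix-list ALLOWED-IN seq 100 deny any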
|
||||
@@ -255,4 +257,4 @@ via: http://xmodulo.com/ipv6-bgp-peering-filtering-quagga-bgp-router.html
|
||||
[1]:http://xmodulo.com/centos-bgp-router-quagga.html
|
||||
[2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
|
||||
[3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html
|
||||
[4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
|
||||
|
@@ -1,86 +0,0 @@
|
||||
Fix Minimal BASH like line editing is supported GRUB Error In Linux
|
||||
================================================================================
|
||||
The other day, when I [installed Elementary OS in dual boot with Windows][1], I encountered a Grub error at reboot time. I was presented with a command line showing the error message:
|
||||
|
||||
**Minimal BASH like line editing is supported. For the first word, TAB lists possible command completions. anywhere else TAB lists possible device or file completions.**
|
||||
|
||||

|
||||
|
||||
Indeed, this is not an error specific to Elementary OS. It is a common [Grub][2] error that can occur with any Linux OS, be it Ubuntu, Fedora, Linux Mint, etc.
|
||||
|
||||
In this post we shall see **how to fix this "minimal BASH like line editing is supported" Grub error in Ubuntu**-based Linux systems.
|
||||
|
||||
> You can read this tutorial to fix similar and more frequent issue, [error: no such partition grub rescue in Linux][3].
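Before reaching for a live USB, it is sometimes possible to boot the installed system once straight from this limited GRUB shell, and then repair Grub from inside it. A rough sketch follows; the partition and kernel file names here are assumptions, so use `ls` and TAB completion to find the ones on your disk:

    grub> ls
    grub> set root=(hd0,msdos1)
    grub> linux /boot/vmlinuz-3.16.0-29-generic root=/dev/sda1
    grub> initrd /boot/initrd.img-3.16.0-29-generic
    grub> boot

If that boots the system, running `sudo grub-install /dev/sda` followed by `sudo update-grub` from the booted system will often fix the problem permanently. If not, read on for the Boot Repair method.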
|
||||
|
||||
### Prerequisites ###
|
||||
|
||||
To fix this issue, you will need the following:
|
||||
|
||||
- A live USB or disk of the same OS and same version
|
||||
- A working internet connection in the live session
|
||||
|
||||
Once you have made sure that you have the prerequisites, let's see how to fix the black screen of death for Linux (if I can call it that ;)).
|
||||
|
||||
### How to fix this “minimal BASH like line editing is supported” Grub error in Ubuntu-based Linux ###
|
||||
|
||||
I know that you might point out that this Grub error is not exclusive to Ubuntu or Ubuntu-based Linux distributions, so why am I putting emphasis on the word Ubuntu? The reason is that here we will take an easy way out and use a tool called **Boot Repair** to fix our problem. I am not sure if this tool is available for other distributions like Fedora. Without wasting any more time, let's see how to solve the "minimal BASH like line editing is supported" Grub error.
|
||||
|
||||
### Step 1: Boot into a live session ###
|
||||
|
||||
Plug in the live USB and boot into the live session.
|
||||
|
||||
### Step 2: Install Boot Repair ###
|
||||
|
||||
Once you are in the live session, open the terminal and use the following commands to install Boot Repair:
|
||||
|
||||
sudo add-apt-repository ppa:yannubuntu/boot-repair
|
||||
sudo apt-get update
|
||||
sudo apt-get install boot-repair
|
||||
|
||||
Note: Follow this tutorial to [fix failed to fetch cdrom apt-get update cannot be used to add new CD-ROMs error][4], if you encounter it while running the above command.
|
||||
|
||||
### Step 3: Repair boot with Boot Repair ###
|
||||
|
||||
Once you have installed Boot Repair, run it from the command line using the following command:
|
||||
|
||||
boot-repair &
|
||||
|
||||
Things are pretty straightforward from here. You just need to follow the instructions provided by the Boot Repair tool. First, click on the **Recommended repair** option in Boot Repair.
|
||||
|
||||

|
||||
|
||||
It will take a couple of minutes for Boot Repair to analyze the problem with the boot process and Grub. Afterwards, it will provide you with some commands to use in the command line. Copy the commands one by one into the terminal. For me, it showed a screen like this:
|
||||
|
||||

|
||||
|
||||
It will run several processes after you enter these commands:
|
||||
|
||||

|
||||
|
||||
Once the process finishes, it will provide you with a URL which contains the logs of the boot repair. If your boot issue is not fixed even now, you can go to the forum or email the dev team and give them the URL as a reference. Cool, isn't it?
|
||||
|
||||

|
||||
|
||||
After the boot repair finishes successfully, shut down your computer, remove the USB, and boot again. For me it booted successfully, but it added two additional lines to the Grub screen. That was not of importance to me, as I was happy to see the system booting normally again.
|
||||
|
||||

|
||||
|
||||
### Did it work for you? ###
|
||||
|
||||
So this is how I fixed **minimal BASH like line editing is supported Grub error in Elementary OS Freya**. How about you? Did it work for you? Feel free to ask a question or drop a suggestion in the comment box below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/guide-install-elementary-os-luna/
|
||||
[2]:http://www.gnu.org/software/grub/
|
||||
[3]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/
|
||||
[4]:http://itsfoss.com/fix-failed-fetch-cdrom-aptget-update-add-cdroms/
|
@@ -1,745 +0,0 @@
|
||||
translating...
|
||||
|
||||
ZMap Documentation
|
||||
================================================================================
|
||||
1. Getting Started with ZMap
|
||||
1. Scanning Best Practices
|
||||
1. Command Line Arguments
|
||||
1. Additional Information
|
||||
1. TCP SYN Probe Module
|
||||
1. ICMP Echo Probe Module
|
||||
1. UDP Probe Module
|
||||
1. Configuration Files
|
||||
1. Verbosity
|
||||
1. Results Output
|
||||
1. Blacklisting
|
||||
1. Rate Limiting and Sampling
|
||||
1. Sending Multiple Probes
|
||||
1. Extending ZMap
|
||||
1. Sample Applications
|
||||
1. Writing Probe and Output Modules
|
||||
|
||||
----------
|
||||
|
||||
### Getting Started with ZMap ###
|
||||
|
||||
ZMap is designed to perform comprehensive scans of the IPv4 address space or large portions of it. While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space at over 1.4 million packets per second. Before performing even small scans, we encourage users to contact their local network administrators and consult our list of scanning best practices.
|
||||
|
||||
By default, ZMap will perform a TCP SYN scan on the specified port at the maximum rate possible. A more conservative configuration that will scan 10,000 random addresses on port 80 at a maximum 10 Mbps can be run as follows:
|
||||
|
||||
$ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.csv
|
||||
|
||||
Or more concisely specified as:
|
||||
|
||||
$ zmap -B 10M -p 80 -n 10000 -o results.csv
|
||||
|
||||
ZMap can also be used to scan specific subnets or CIDR blocks. For example, to scan only 10.0.0.0/8 and 192.168.0.0/16 on port 80, run:
|
||||
|
||||
    $ zmap -p 80 -o results.csv 10.0.0.0/8 192.168.0.0/16
|
||||
|
||||
If the scan started successfully, ZMap will output status updates every one second similar to the following:
|
||||
|
||||
0% (1h51m left); send: 28777 562 Kp/s (560 Kp/s avg); recv: 1192 248 p/s (231 p/s avg); hits: 0.04%
|
||||
0% (1h51m left); send: 34320 554 Kp/s (559 Kp/s avg); recv: 1442 249 p/s (234 p/s avg); hits: 0.04%
|
||||
0% (1h50m left); send: 39676 535 Kp/s (555 Kp/s avg); recv: 1663 220 p/s (232 p/s avg); hits: 0.04%
|
||||
0% (1h50m left); send: 45372 570 Kp/s (557 Kp/s avg); recv: 1890 226 p/s (232 p/s avg); hits: 0.04%
|
||||
|
||||
These updates provide information about the current state of the scan and are of the following form: %-complete (est time remaining); packets-sent curr-send-rate (avg-send-rate); recv: packets-recv recv-rate (avg-recv-rate); hits: hit-rate
|
||||
|
||||
If you do not know the scan rate that your network can support, you may want to experiment with different scan rates or bandwidth limits to find the fastest rate that your network can support before you see decreased results.
|
||||
|
||||
By default, ZMap will output the list of distinct IP addresses that responded successfully (e.g. with a SYN ACK packet), similar to the following. There are several additional formats (e.g. JSON and Redis) for outputting results, as well as options for producing programmatically parsable scan statistics. As well, additional output fields can be specified, and the results can be filtered using an output filter.
|
||||
|
||||
115.237.116.119
|
||||
23.9.117.80
|
||||
207.118.204.141
|
||||
217.120.143.111
|
||||
50.195.22.82
|
||||
|
||||
We strongly encourage you to use a blacklist file, to exclude both reserved/unallocated IP space (e.g. multicast, RFC1918), as well as networks that request to be excluded from your scans. By default, ZMap will utilize a simple blacklist file containing reserved and unallocated addresses located at `/etc/zmap/blacklist.conf`. If you find yourself specifying certain settings, such as your maximum bandwidth or blacklist file every time you run ZMap, you can specify these in `/etc/zmap/zmap.conf` or use a custom configuration file.
|
||||
|
||||
If you are attempting to troubleshoot scan related issues, there are several options to help debug. First, it is possible to perform a dry run scan in order to see the packets that would be sent over the network by adding the `--dryrun` flag. As well, it is possible to change the logging verbosity by setting the `--verbosity=n` flag.
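For example, a quick sanity check that sends nothing over the wire but prints the ten packets that would have been sent, with maximum log detail, might look like this:

    $ zmap -p 80 -n 10 --dryrun --verbosity=5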
|
||||
|
||||
----------
|
||||
|
||||
### Scanning Best Practices ###
|
||||
|
||||
We offer these suggestions for researchers conducting Internet-wide scans as guidelines for good Internet citizenship.
|
||||
|
||||
- Coordinate closely with local network administrators to reduce risks and handle inquiries
|
||||
- Verify that scans will not overwhelm the local network or upstream provider
|
||||
- Signal the benign nature of the scans in web pages and DNS entries of the source addresses
|
||||
- Clearly explain the purpose and scope of the scans in all communications
|
||||
- Provide a simple means of opting out and honor requests promptly
|
||||
- Conduct scans no larger or more frequent than is necessary for research objectives
|
||||
- Spread scan traffic over time or source addresses when feasible
|
||||
|
||||
It should go without saying that scan researchers should refrain from exploiting vulnerabilities or accessing protected resources, and should comply with any special legal requirements in their jurisdictions.
|
||||
|
||||
----------
|
||||
|
||||
### Command Line Arguments ###
|
||||
|
||||
#### Common Options ####
|
||||
|
||||
These options are the most common options when performing a simple scan. We note that some options are dependent on the probe module or output module used (e.g. target port is not used when performing an ICMP Echo Scan).
|
||||
|
||||
|
||||
**-p, --target-port=port**
|
||||
|
||||
TCP port number to scan (e.g. 443)
|
||||
|
||||
**-o, --output-file=name**
|
||||
|
||||
Write results to this file. Use - for stdout
|
||||
|
||||
**-b, --blacklist-file=path**
|
||||
|
||||
File of subnets to exclude, in CIDR notation (e.g. 192.168.0.0/16), one per line. It is recommended you use this to exclude RFC 1918 addresses, multicast, IANA reserved space, and other IANA special-purpose addresses. An example blacklist file is provided in conf/blacklist.example for this purpose.
|
||||
|
||||
#### Scan Options ####
|
||||
|
||||
**-n, --max-targets=n**
|
||||
|
||||
Cap the number of targets to probe. This can either be a number (e.g. `-n 1000`) or a percentage (e.g. `-n 0.1%`) of the scannable address space (after excluding blacklist)
|
||||
|
||||
**-N, --max-results=n**
|
||||
|
||||
Exit after receiving this many results
|
||||
|
||||
**-t, --max-runtime=secs**
|
||||
|
||||
Cap the length of time for sending packets
|
||||
|
||||
**-r, --rate=pps**
|
||||
|
||||
Set the send rate in packets/sec
|
||||
|
||||
**-B, --bandwidth=bps**
|
||||
|
||||
Set the send rate in bits/second (supports suffixes G, M, and K; e.g. `-B 10M` for 10 Mbps). This overrides the `--rate` flag.
|
||||
|
||||
**-c, --cooldown-time=secs**
|
||||
|
||||
How long to continue receiving after sending has completed (default=8)
|
||||
|
||||
**-e, --seed=n**
|
||||
|
||||
Seed used to select address permutation. Use this if you want to scan addresses in the same order for multiple ZMap runs.
|
||||
|
||||
**--shards=n**
|
||||
|
||||
Split the scan up into N shards/partitions among different instances of zmap (default=1). When sharding, `--seed` is required
|
||||
|
||||
**--shard=n**
|
||||
|
||||
Set which shard to scan (default=0). Shards are indexed in the range [0, N), where N is the total number of shards. When sharding `--seed` is required.
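For example, to split a single port 443 scan across two machines, each machine would run one shard with the same seed (the seed value here is arbitrary):

    $ zmap -p 443 --seed=1234 --shards=2 --shard=0 -o results-shard0.csv
    $ zmap -p 443 --seed=1234 --shards=2 --shard=1 -o results-shard1.csv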
|
||||
|
||||
**-T, --sender-threads=n**
|
||||
|
||||
Threads used to send packets (default=1)
|
||||
|
||||
**-P, --probes=n**
|
||||
|
||||
Number of probes to send to each IP (default=1)
|
||||
|
||||
**-d, --dryrun**
|
||||
|
||||
Print out each packet to stdout instead of sending it (useful for debugging)
|
||||
|
||||
#### Network Options ####
|
||||
|
||||
**-s, --source-port=port|range**
|
||||
|
||||
Source port(s) to send packets from
|
||||
|
||||
**-S, --source-ip=ip|range**
|
||||
|
||||
Source address(es) to send packets from. Either single IP or range (e.g. 10.0.0.1-10.0.0.9)
|
||||
|
||||
**-G, --gateway-mac=addr**
|
||||
|
||||
Gateway MAC address to send packets to (in case auto-detection does not work)
|
||||
|
||||
**-i, --interface=name**
|
||||
|
||||
Network interface to use
|
||||
|
||||
#### Probe Options ####
|
||||
|
||||
ZMap allows users to specify and write their own probe modules for use with ZMap. Probe modules are responsible for generating probe packets to send, and processing responses from hosts.
|
||||
|
||||
**--list-probe-modules**
|
||||
|
||||
List available probe modules (e.g. tcp_synscan)
|
||||
|
||||
**-M, --probe-module=name**
|
||||
|
||||
Select probe module (default=tcp_synscan)
|
||||
|
||||
**--probe-args=args**
|
||||
|
||||
Arguments to pass to probe module
|
||||
|
||||
**--list-output-fields**
|
||||
|
||||
List the fields the selected probe module can send to the output module
|
||||
|
||||
#### Output Options ####
|
||||
|
||||
ZMap allows users to specify and write their own output modules for use with ZMap. Output modules are responsible for processing the fieldsets returned by the probe module and outputting them to the user. Users can specify output fields and write filters over the output fields.
|
||||
|
||||
**--list-output-modules**
|
||||
|
||||
List available output modules (e.g. csv)
|
||||
|
||||
**-O, --output-module=name**
|
||||
|
||||
Select output module (default=csv)
|
||||
|
||||
**--output-args=args**
|
||||
|
||||
Arguments to pass to output module
|
||||
|
||||
**-f, --output-fields=fields**
|
||||
|
||||
Comma-separated list of fields to output
|
||||
|
||||
**--output-filter**
|
||||
|
||||
Specify an output filter over the fields defined by the probe module
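For example, to record only hosts that responded successfully, and only the first response from each, a filter over the tcp_synscan fields might look like the following (the exact field names available can be checked with `--list-output-fields`):

    $ zmap -p 80 -f "saddr,classification,success" --output-filter="success = 1 && repeat = 0" -o results.csv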
|
||||
|
||||
#### Additional Options ####
|
||||
|
||||
**-C, --config=filename**
|
||||
|
||||
Read a configuration file, which can specify any other options.
|
||||
|
||||
**-q, --quiet**
|
||||
|
||||
Do not print status updates once per second
|
||||
|
||||
**-g, --summary**
|
||||
|
||||
Print configuration and summary of results at the end of the scan
|
||||
|
||||
**-v, --verbosity=n**
|
||||
|
||||
Level of log detail (0-5, default=3)
|
||||
|
||||
**-h, --help**
|
||||
|
||||
Print help and exit
|
||||
|
||||
**-V, --version**
|
||||
|
||||
Print version and exit
|
||||
|
||||
----------
|
||||
|
||||
### Additional Information ###
|
||||
|
||||
#### TCP SYN Scans ####
|
||||
|
||||
When performing a TCP SYN scan, ZMap requires a single target port and supports specifying a range of source ports from which the scan will originate.
|
||||
|
||||
**-p, --target-port=port**
|
||||
|
||||
TCP port number to scan (e.g. 443)
|
||||
|
||||
**-s, --source-port=port|range**
|
||||
|
||||
Source port(s) for scan packets (e.g. 40000-50000)
|
||||
|
||||
**Warning!** ZMap relies on the Linux kernel to respond to SYN/ACK packets with RST packets in order to close connections opened by the scanner. This occurs because ZMap sends packets at the Ethernet layer in order to reduce overhead otherwise incurred in the kernel from tracking open TCP connections and performing route lookups. As such, if you have a firewall rule that tracks established connections such as a netfilter rule similar to `-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT`, this will block SYN/ACK packets from reaching the kernel. This will not prevent ZMap from recording responses, but it will prevent RST packets from being sent back, ultimately using up a connection on the scanned host until your connection times out. We strongly recommend that you select a set of unused ports on your scanning host which can be allowed access in your firewall and specifying this port range when executing ZMap, with the `-s` flag (e.g. `-s '50000-60000'`).
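As a sketch of that setup with iptables, a rule like the one below would be inserted ahead of any state-tracking rules so that responses to the chosen source-port range reach the kernel (and trigger the RSTs); adjust the range and policy to your own firewall:

    # allow responses to the scanner's source ports to reach the kernel
    iptables -I INPUT -p tcp --dport 50000:60000 -j ACCEPT

ZMap would then be run with the matching `-s '50000-60000'` flag.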
|
||||
|
||||
#### ICMP Echo Request Scans ####
|
||||
|
||||
While ZMap performs TCP SYN scans by default, it also supports ICMP echo request scans in which an ICMP echo request packet is sent to each host and the type of ICMP response received in reply is denoted. An ICMP scan can be performed by selecting the icmp_echoscan scan module similar to the following:
|
||||
|
||||
$ zmap --probe-module=icmp_echoscan
|
||||
|
||||
#### UDP Datagram Scans ####

ZMap additionally supports UDP probes, where it will send out an arbitrary UDP datagram to each host, and receive either UDP or ICMP Unreachable responses. ZMap supports four different methods of setting the UDP payload through the --probe-args command-line option. These are 'text' for ASCII-printable payloads, 'hex' for hexadecimal payloads set on the command-line, 'file' for payloads contained in an external file, and 'template' for payloads that require dynamic field generation. In order to obtain the UDP response, make sure that you specify 'data' as one of the fields to report with the -f option.

The example below will send the two bytes 'ST', a PCAnywhere 'status' request, to UDP port 5632.

    $ zmap -M udp -p 5632 --probe-args=text:ST -N 100 -f saddr,data -o -

The example below will send the byte '0x02', a SQL Server 'client broadcast' request, to UDP port 1434.

    $ zmap -M udp -p 1434 --probe-args=hex:02 -N 100 -f saddr,data -o -

The example below will send a NetBIOS status request to UDP port 137. This uses a payload file that is included with the ZMap distribution.

    $ zmap -M udp -p 137 --probe-args=file:netbios_137.pkt -N 100 -f saddr,data -o -

The example below will send a SIP 'OPTIONS' request to UDP port 5060. This uses a template file that is included with the ZMap distribution.

    $ zmap -M udp -p 5060 --probe-args=template:sip_options.tpl -N 100 -f saddr,data -o -
UDP payload templates are still experimental. You may encounter crashes when using more than one send thread (-T), and there is a significant decrease in performance compared to static payloads. A template is simply a payload file that contains one or more field specifiers enclosed in a ${} sequence. Some protocols, notably SIP, require the payload to reflect the source and destination of the packet. Other protocols, such as portmapper and DNS, contain fields that should be randomized per request, or risk being dropped by multi-homed systems scanned by ZMap.

The payload template below will send a SIP OPTIONS request to every destination:

    OPTIONS sip:${RAND_ALPHA=8}@${DADDR} SIP/2.0
    Via: SIP/2.0/UDP ${SADDR}:${SPORT};branch=${RAND_ALPHA=6}.${RAND_DIGIT=10};rport;alias
    From: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT};tag=${RAND_DIGIT=8}
    To: sip:${RAND_ALPHA=8}@${DADDR}
    Call-ID: ${RAND_DIGIT=10}@${SADDR}
    CSeq: 1 OPTIONS
    Contact: sip:${RAND_ALPHA=8}@${SADDR}:${SPORT}
    Content-Length: 0
    Max-Forwards: 20
    User-Agent: ${RAND_ALPHA=8}
    Accept: text/plain
In the example above, note that line endings are \r\n, and the end of this request must contain \r\n\r\n for most SIP implementations to correctly process it. A working example is included in the examples/udp-payloads directory of the ZMap source tree (sip_options.tpl).

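As a quick sanity check, a sketch (assuming you are in the ZMap source tree) that dumps the last four bytes of the template to confirm the trailing \r\n\r\n:

    $ tail -c 4 examples/udp-payloads/sip_options.tpl | xxd
    00000000: 0d0a 0d0a                                ....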
The following template fields are currently implemented:

- **SADDR**: Source IP address in dotted-quad format
- **SADDR_N**: Source IP address in network byte order
- **DADDR**: Destination IP address in dotted-quad format
- **DADDR_N**: Destination IP address in network byte order
- **SPORT**: Source port in ASCII format
- **SPORT_N**: Source port in network byte order
- **DPORT**: Destination port in ASCII format
- **DPORT_N**: Destination port in network byte order
- **RAND_BYTE**: Random bytes (0-255), length specified with =(length) parameter
- **RAND_DIGIT**: Random digits from 0-9, length specified with =(length) parameter
- **RAND_ALPHA**: Random mixed-case letters from A-Z, length specified with =(length) parameter
- **RAND_ALPHANUM**: Random mixed-case letters from A-Z and digits from 0-9, length specified with =(length) parameter
### Configuration Files ###

ZMap supports configuration files instead of requiring all options to be specified on the command-line. A configuration can be created by specifying one long-name option and its value per line, such as:

    interface "eth1"
    source-ip 1.1.1.4-1.1.1.8
    gateway-mac b4:23:f9:28:fa:2d # upstream gateway
    cooldown-time 300 # seconds
    blacklist-file /etc/zmap/blacklist.conf
    output-file ~/zmap-output
    quiet
    summary

ZMap can then be run with a configuration file, specifying any additional necessary parameters:

    $ zmap --config=~/.zmap.conf --target-port=443
### Verbosity ###

There are several types of on-screen output that ZMap produces. By default, ZMap will print out basic progress information similar to the following every second. This can be disabled by setting the `--quiet` flag.

    0:01 12%; send: 10000 done (15.1 Kp/s avg); recv: 144 143 p/s (141 p/s avg); hits: 1.44%

ZMap also prints out informational messages during scanner configuration, such as the following, which can be controlled with the `--verbosity` argument.

    Aug 11 16:16:12.813 [INFO] zmap: started
    Aug 11 16:16:12.817 [DEBUG] zmap: no interface provided. will use eth0
    Aug 11 16:17:03.971 [DEBUG] cyclic: primitive root: 3489180582
    Aug 11 16:17:03.971 [DEBUG] cyclic: starting point: 46588
    Aug 11 16:17:03.975 [DEBUG] blacklist: 3717595507 addresses allowed to be scanned
    Aug 11 16:17:03.975 [DEBUG] send: will send from 1 address on 28233 source ports
    Aug 11 16:17:03.975 [DEBUG] send: using bandwidth 10000000 bits/s, rate set to 14880 pkt/s
    Aug 11 16:17:03.985 [DEBUG] recv: thread started
ZMap also supports printing out a grep-able summary at the end of the scan, similar to below, which can be invoked with the `--summary` flag.

    cnf target-port 443
    cnf source-port-range-begin 32768
    cnf source-port-range-end 61000
    cnf source-addr-range-begin 1.1.1.4
    cnf source-addr-range-end 1.1.1.8
    cnf maximum-packets 4294967295
    cnf maximum-runtime 0
    cnf permutation-seed 0
    cnf cooldown-period 300
    cnf send-interface eth1
    cnf rate 45000
    env nprocessors 16
    exc send-start-time Fri Jan 18 01:47:35 2013
    exc send-end-time Sat Jan 19 00:47:07 2013
    exc recv-start-time Fri Jan 18 01:47:35 2013
    exc recv-end-time Sat Jan 19 00:52:07 2013
    exc sent 3722335150
    exc blacklisted 572632145
    exc first-scanned 1318129262
    exc hit-rate 0.874102
    exc synack-received-unique 32537000
    exc synack-received-total 36689941
    exc synack-cooldown-received-unique 193
    exc synack-cooldown-received-total 1543
    exc rst-received-unique 141901021
    exc rst-received-total 166779002
    adv source-port-secret 37952
    adv permutation-gen 4215763218
### Results Output ###

ZMap can produce results in several formats through the use of **output modules**. By default, ZMap only supports **csv** output; however, support for **redis** and **json** can be compiled in. The results sent to these output modules may be filtered using an **output filter**. The fields the output module writes are specified by the user. By default, ZMap will return results in csv format, and if no output file is specified, ZMap will not produce specific results. It is also possible to write your own output module; see Writing Output Modules for information.

**-o, --output-file=p**

File to write output to

**-O, --output-module=p**

Invoke a custom output module

**-f, --output-fields=p**

Comma-separated list of fields to output

**--output-filter=filter**

Specify an output filter over fields for a given probe

**--list-output-modules**

Lists available output modules

**--list-output-fields**

List available output fields for a given probe
#### Output Fields ####

ZMap has a variety of fields it can output beyond IP address. These fields can be viewed for a given probe module by running with the `--list-output-fields` flag.

    $ zmap --probe-module="tcp_synscan" --list-output-fields
    saddr           string: source IP address of response
    saddr-raw          int: network order integer form of source IP address
    daddr           string: destination IP address of response
    daddr-raw          int: network order integer form of destination IP address
    ipid               int: IP identification number of response
    ttl                int: time-to-live of response packet
    sport              int: TCP source port
    dport              int: TCP destination port
    seqnum             int: TCP sequence number
    acknum             int: TCP acknowledgement number
    window             int: TCP window
    classification  string: packet classification
    success            int: is response considered success
    repeat             int: is response a repeat response from host
    cooldown           int: was response received during the cooldown period
    timestamp-str   string: timestamp of when response arrived in ISO8601 format
    timestamp-ts       int: timestamp of when response arrived in seconds since Epoch
    timestamp-us       int: microsecond part of timestamp (e.g. microseconds since 'timestamp-ts')
To select which fields to output, any combination of the output fields can be specified as a comma-separated list using the `--output-fields=fields` or `-f` flags. Example:

    $ zmap -p 80 -f "classification,saddr,daddr,sport,dport,seqnum,acknum,cooldown,repeat,timestamp-str" -o output.csv
#### Filtering Output ####

Results generated by a probe module can be filtered before being passed to the output module. Filters are defined over the output fields of a probe module. Filters are written in a simple filtering language, similar to SQL, and are passed to ZMap using the **--output-filter** option. Output filters are commonly used to filter out duplicate results, or to pass only successful responses to the output module.

Filter expressions are of the form `<fieldname> <operation> <value>`. The type of `<value>` must be either a string or unsigned integer literal, and match the type of `<fieldname>`. The valid operations for integer comparisons are `=`, `!=`, `<`, `>`, `<=`, and `>=`. The operations for string comparisons are `=` and `!=`. The `--list-output-fields` flag will print what fields and types are available for the selected probe module, and then exit.

Compound filter expressions may be constructed by combining filter expressions with the `&&` (logical AND) and `||` (logical OR) operators, using parentheses to specify order of operations.

**Examples**

Write a filter for only successful, non-duplicate responses:

    --output-filter="success = 1 && repeat = 0"

Filter for packets that have classification RST and a TTL greater than 10, or for packets with classification SYNACK:

    --output-filter="(classification = rst && ttl > 10) || classification = synack"
#### CSV ####

The csv module will produce a comma-separated value file of the output fields requested. For example, the following command produces the following CSV in a file called `output.csv`.

    $ zmap -p 80 -f "classification,saddr,daddr,sport,dport,seqnum,acknum,cooldown,repeat,timestamp-str" -o output.csv

----------

    classification, saddr, daddr, sport, dport, seqnum, acknum, cooldown, repeat, timestamp-str
    synack, 159.174.153.144, 10.0.0.9, 80, 40555, 3050964427, 3515084203, 0, 0, 2013-08-15 18:55:47.681
    rst, 141.209.175.1, 10.0.0.9, 80, 40136, 0, 3272553764, 0, 0, 2013-08-15 18:55:47.683
    rst, 72.36.213.231, 10.0.0.9, 80, 56642, 0, 2037447916, 0, 0, 2013-08-15 18:55:47.691
    rst, 148.8.49.150, 10.0.0.9, 80, 41672, 0, 1135824975, 0, 0, 2013-08-15 18:55:47.692
    rst, 50.165.166.206, 10.0.0.9, 80, 38858, 0, 535206863, 0, 0, 2013-08-15 18:55:47.694
    rst, 65.55.203.135, 10.0.0.9, 80, 50008, 0, 4071709905, 0, 0, 2013-08-15 18:55:47.700
    synack, 50.57.166.186, 10.0.0.9, 80, 60650, 2813653162, 993314545, 0, 0, 2013-08-15 18:55:47.704
    synack, 152.75.208.114, 10.0.0.9, 80, 52498, 460383682, 4040786862, 0, 0, 2013-08-15 18:55:47.707
    synack, 23.72.138.74, 10.0.0.9, 80, 33480, 810393698, 486476355, 0, 0, 2013-08-15 18:55:47.710
#### Redis ####

The redis output module allows addresses to be added to a Redis queue instead of being saved to a file, which ultimately allows ZMap to be incorporated with post-processing tools.

**Heads Up!** ZMap does not build with Redis support by default. If you are building ZMap from source, you can build with Redis support by running CMake with `-DWITH_REDIS=ON`.

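A minimal sketch of such a build, assuming an in-source CMake build from the ZMap source tree:

    $ cmake -DWITH_REDIS=ON .
    $ make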
### Blacklisting and Whitelisting ###

ZMap supports both blacklisting and whitelisting network prefixes. If ZMap is not provided with blacklist or whitelist parameters, ZMap will scan all IPv4 addresses (including local, reserved, and multicast addresses). If a blacklist file is specified, network prefixes in the blacklisted segments will not be scanned; if a whitelist file is provided, only network prefixes in the whitelist file will be scanned. A whitelist and blacklist file can be used in coordination; the blacklist has priority over the whitelist (e.g. if you have whitelisted 10.0.0.0/8 and blacklisted 10.1.0.0/16, then 10.1.0.0/16 will not be scanned). Whitelist and blacklist files can be specified on the command-line as follows:

**-b, --blacklist-file=path**

File of subnets to blacklist in CIDR notation, e.g. 192.168.0.0/16

**-w, --whitelist-file=path**

File of subnets to limit scan to in CIDR notation, e.g. 192.168.0.0/16

Blacklist files should be formatted with a single network prefix in CIDR notation per line. Comments are allowed using the `#` character. Example:

    # From IANA IPv4 Special-Purpose Address Registry
    # http://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
    # Updated 2013-05-22

    0.0.0.0/8           # RFC1122: "This host on this network"
    10.0.0.0/8          # RFC1918: Private-Use
    100.64.0.0/10       # RFC6598: Shared Address Space
    127.0.0.0/8         # RFC1122: Loopback
    169.254.0.0/16      # RFC3927: Link Local
    172.16.0.0/12       # RFC1918: Private-Use
    192.0.0.0/24        # RFC6890: IETF Protocol Assignments
    192.0.2.0/24        # RFC5737: Documentation (TEST-NET-1)
    192.88.99.0/24      # RFC3068: 6to4 Relay Anycast
    192.168.0.0/16      # RFC1918: Private-Use
    198.18.0.0/15       # RFC2544: Benchmarking
    198.51.100.0/24     # RFC5737: Documentation (TEST-NET-2)
    203.0.113.0/24      # RFC5737: Documentation (TEST-NET-3)
    240.0.0.0/4         # RFC1112: Reserved
    255.255.255.255/32  # RFC0919: Limited Broadcast

    # From IANA Multicast Address Space Registry
    # http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
    # Updated 2013-06-25

    224.0.0.0/4         # RFC5771: Multicast/Reserved
If you are looking to scan only a random portion of the Internet, check out Sampling instead of using whitelisting and blacklisting.

**Heads Up!** The default ZMap configuration uses the blacklist file at `/etc/zmap/blacklist.conf`, which contains locally scoped address space and reserved IP ranges. The default configuration can be changed by editing `/etc/zmap/zmap.conf`.
### Rate Limiting and Sampling ###

By default, ZMap will scan at the fastest rate that your network adaptor supports. In our experience on commodity hardware, this is generally around 95-98% of the theoretical speed of gigabit Ethernet, which may be faster than your upstream provider can handle. ZMap will not automatically adjust its send rate based on your upstream provider. You may need to manually adjust your send rate to reduce packet drops and incorrect results.

**-r, --rate=pps**

Set maximum send rate in packets/sec

**-B, --bandwidth=bps**

Set send rate in bits/sec (supports suffixes G, M, and K). This overrides the --rate flag.

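For example, a sketch that caps a port 80 scan at roughly 10 Mbps (the output filename is illustrative):

    $ zmap -p 80 -B 10M -o results.csv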
ZMap also allows random sampling of the IPv4 address space by specifying max-targets and/or max-runtime. Because hosts are scanned in a random permutation generated per scan instantiation, limiting a scan to n hosts will perform a random sampling of n hosts. Command-line options:

**-n, --max-targets=n**

Cap number of targets to probe

**-N, --max-results=n**

Cap number of results (exit after receiving this many positive results)

**-t, --max-runtime=s**

Cap length of time for sending packets (in seconds)

**-s, --seed=n**

Seed used to select address permutation. Specify the same seed in order to scan addresses in the same order for different ZMap runs.

For example, if you wanted to scan the same one million hosts on the Internet for multiple scans, you could set a predetermined seed and cap the number of scanned hosts, similar to the following:

    zmap -p 443 -s 3 -n 1000000 -o results

In order to determine which one million hosts were going to be scanned, you could run the scan in dry-run mode, which will print out the packets that would be sent instead of performing the actual scan.

    zmap -p 443 -s 3 -n 1000000 --dryrun | grep daddr \
        | awk -F'daddr: ' '{print $2}' | sed 's/ |.*//;'
### Sending Multiple Packets ###

ZMap supports sending multiple probes to each host. Increasing this number both increases scan time and hosts reached. However, we find that the increase in scan time (~100% per additional probe) greatly outweighs the increase in hosts reached (~1% per additional probe).

**-P, --probes=n**

The number of unique probes to send to each IP (default=1)

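For example, a sketch that sends two probes to each host on port 443 (the output filename is illustrative):

    $ zmap -p 443 -P 2 -o results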
----------

### Sample Applications ###

ZMap is designed for initiating contact with a large number of hosts and finding ones that respond positively. However, we realize that many users will want to perform follow-up processing, such as performing an application-level handshake. For example, users who perform a TCP SYN scan on port 80 might want to perform a simple GET request, and users who scan port 443 may be interested in completing a TLS handshake.

#### Banner Grab ####

We have included a sample application, banner-grab, with ZMap that enables users to receive messages from listening TCP servers. Banner-grab connects to the provided servers, optionally sends a message, and prints out the first message received from the server. This tool can be used to fetch banners such as HTTP server responses to specific commands, telnet login prompts, or SSH server strings.

This example finds 1000 servers listening on port 80, sends a simple GET request to each, and stores their base64-encoded responses in http-banners.out:

    $ zmap -p 80 -N 1000 -B 10M -o - | ./banner-grab-tcp -p 80 -c 500 -d ./http-req > http-banners.out

For more details on using `banner-grab`, see the README file in `examples/banner-grab`.

**Heads Up!** ZMap and banner-grab can have significant performance and accuracy impact on one another if run simultaneously (as in the example). Make sure not to let ZMap saturate banner-grab-tcp's concurrent connections, otherwise banner-grab will fall behind reading stdin, causing ZMap to block on writing stdout. We recommend using a slower scanning rate with ZMap, and increasing the concurrency of banner-grab-tcp to no more than 3000. (Note that more than 1000 concurrent connections requires raising the maximum file descriptors per process, e.g. with `ulimit -SHn 100000`, which raises both the soft and hard limits.) These parameters will of course be dependent on your server performance and hit-rate; we encourage developers to experiment with small samples before running a large scan.

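Putting that advice together, a hedged sketch of a gentler pipeline; the bandwidth cap and concurrency here are illustrative values, not recommendations from the ZMap authors:

    $ ulimit -SHn 100000
    $ zmap -p 80 -N 1000 -B 1M -o - | ./banner-grab-tcp -p 80 -c 3000 -d ./http-req > http-banners.out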
#### Forge Socket ####

We have also included a form of banner-grab, called forge-socket, that reuses the SYN-ACK sent from the server for the connection that ultimately fetches the banner. In `banner-grab-tcp`, ZMap sends a SYN to each server, and listening servers respond with a SYN+ACK. The ZMap host's kernel receives this and sends a RST, as no active connection is associated with that packet. The banner-grab program must then create a new TCP connection to the same server to fetch data from it.

In forge-socket, we utilize a kernel module by the same name that allows us to create a connection with arbitrary TCP parameters. This enables us to suppress the kernel's RST packet and instead create a socket that will reuse the SYN+ACK's parameters, and to send and receive data through this socket as we would through any normally connected socket.

To use forge-socket, you will need the forge-socket kernel module, available from [GitHub][1]. Clone `git@github.com:ewust/forge_socket.git` into the ZMap root source directory, cd into the forge_socket directory, and run make. Then install the kernel module with `insmod forge_socket.ko` as root, as in the sketch below.

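A sketch of those steps, assuming sudo is used for the root-only module load:

    $ git clone git@github.com:ewust/forge_socket.git
    $ cd forge_socket
    $ make
    $ sudo insmod forge_socket.ko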
You must also tell the kernel not to send RST packets. An easy way to disable RST packets system-wide is to use **iptables**. Running `iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root will do this, though you may also add an optional --dport X to limit this to the port (X) you are scanning. To remove this rule after your scan completes, you can run `iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST -j DROP` as root.

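The same add/remove pair as a sketch, limited to a hypothetical target port 80 via the optional --dport match:

    # Before the scan (as root): suppress outgoing RSTs for port 80
    iptables -A OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST --dport 80 -j DROP

    # After the scan (as root): remove the rule again
    iptables -D OUTPUT -p tcp -m tcp --tcp-flags RST,RST RST,RST --dport 80 -j DROP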
Now you should be able to build the forge-socket ZMap example program. To run it, you must use the **extended_file** ZMap output module:

    $ zmap -p 80 -N 1000 -B 10M -O extended_file -o - | \
        ./forge-socket -c 500 -d ./http-req > ./http-banners.out

See the README in `examples/forge-socket` for more details.

----------
### Writing Probe and Output Modules ###

ZMap can be extended to support different types of scanning through **probe modules** and additional types of results output through **output modules**. Registered probe and output modules can be listed through the command-line interface:

**--list-probe-modules**

Lists installed probe modules

**--list-output-modules**

Lists installed output modules
#### Output Modules ####

ZMap output and post-processing can be extended by implementing and registering **output modules** with the scanner. Output modules receive a callback for every received response packet. While the default provided modules provide simple output, these modules are also capable of performing additional post-processing (e.g. tracking duplicates or outputting numbers in terms of AS instead of IP address).

Output modules are created by defining a new output_module struct and registering it in [output_modules.c][2]:

    typedef struct output_module {
        const char       *name;            // how the output module is referenced in the CLI
        unsigned         update_interval;  // how often update is called, in seconds

        output_init_cb   init;             // called at scanner initialization
        output_update_cb start;            // called at the beginning of the scan
        output_update_cb update;           // called every update_interval seconds
        output_update_cb close;            // called at scanner termination

        output_packet_cb process_ip;       // called when a response is received

        const char       *helptext;        // printed when --list-output-modules is called

    } output_module_t;
Output modules must have a name, which is how they are referenced on the command-line, and must implement the `process_ip` callback. This callback is called for every response packet that is received and passed through the output filter by the current **probe module**. The response may or may not be considered a success (e.g. it could be a TCP RST). These callbacks must define functions that match the `output_packet_cb` definition:

    int (*output_packet_cb) (
        ipaddr_n_t    saddr,          // IP address of scanned host in network order
        ipaddr_n_t    daddr,          // destination IP address in network order

        const char   *response_type,  // send-module classification of packet

        int           is_repeat,      // {0: first response from host, 1: subsequent responses}
        int           in_cooldown,    // {0: not in cooldown state, 1: scanner in cooldown state}

        const u_char *packet,         // pointer to struct iphdr of IP packet
        size_t        packet_len      // length of packet in bytes
    );
An output module can also register callbacks to be executed at scanner initialization (tasks such as opening an output file), at the start of the scan (tasks such as documenting blacklisted addresses), at regular intervals during the scan (tasks such as progress updates), and at close (tasks such as closing any open file descriptors). These callbacks are provided with complete access to the scan configuration and current state:

    int (*output_update_cb)(struct state_conf*, struct state_send*, struct state_recv*);

These callbacks are defined in [output_modules.h][3]. An example is available at [src/output_modules/module_csv.c][4].

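To make the registration step concrete, here is a minimal hedged sketch of a do-nothing output module. The struct and callback types come from output_modules.h as shown above; the module and function names are hypothetical, and the registration mechanics are simplified (see module_csv.c for the real thing):

    #include <stdio.h>
    #include "output_modules.h"

    // Toy process_ip callback matching the output_packet_cb signature above:
    // print the raw (network-order) source address and the classification.
    static int mymod_process_ip(ipaddr_n_t saddr, ipaddr_n_t daddr,
                                const char *response_type,
                                int is_repeat, int in_cooldown,
                                const u_char *packet, size_t packet_len)
    {
        (void) daddr; (void) packet; (void) packet_len; // unused in this sketch
        if (!is_repeat && !in_cooldown) {
            printf("%u,%s\n", saddr, response_type);
        }
        return 0; // 0 reports success back to the scanner
    }

    output_module_t module_mymod = {
        .name            = "mymod",
        .update_interval = 0,                 // no periodic update callback needed
        .process_ip      = mymod_process_ip,
        .helptext        = "Toy module that prints raw source address and classification",
    };
    // Registration: add &module_mymod to the module list in output_modules.c.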
#### Probe Modules ####

Packets are constructed using probe modules, which allow abstracted packet creation and response classification. ZMap comes with two scan modules by default: `tcp_synscan` and `icmp_echoscan`. By default, ZMap uses `tcp_synscan`, which sends TCP SYN packets and classifies responses from each host as open (received SYN+ACK) or closed (received RST). ZMap also allows developers to write their own probe modules for use with ZMap, using the following API.

Each type of scan is implemented by developing and registering the necessary callbacks in a `probe_module_t` struct:

    typedef struct probe_module {
        const char               *name;          // how the scan is invoked on the command-line
        size_t                   packet_length;  // how long the probe packet is (must be static size)

        const char               *pcap_filter;   // PCAP filter for collecting responses
        size_t                   pcap_snaplen;   // maximum number of bytes for libpcap to capture

        uint8_t                  port_args;      // set to 1 if ZMap requires a --target-port
                                                 // to be specified by the user

        probe_global_init_cb     global_initialize;  // called once at scanner initialization
        probe_thread_init_cb     thread_initialize;  // called once for each thread packet buffer
        probe_make_packet_cb     make_packet;        // called once per host to update packet
        probe_validate_packet_cb validate_packet;    // called once per received packet;
                                                     // return 0 if packet is invalid,
                                                     // non-zero otherwise

        probe_print_packet_cb    print_packet;       // called per packet if in dry-run mode
        probe_classify_packet_cb process_packet;     // called by receiver to classify response
        probe_close_cb           close;              // called at scanner termination

        fielddef_t               *fields;            // definitions of the fields specific to this module
        int                      numfields;          // number of fields

    } probe_module_t;
At scanner initialization, `global_initialize` is called once and can be utilized to perform any necessary global configuration or initialization. However, `global_initialize` does not have access to the packet buffer, which is thread-specific. Instead, `thread_initialize` is called at the initialization of each sender thread and is provided with access to the buffer that will be used for constructing probe packets, along with global source and destination values. This callback should be used to construct the host-agnostic packet structure such that only specific values (e.g. destination host and checksum) need to be updated for each host. For example, the Ethernet header will not change between packets (minus the checksum, which is calculated in hardware by the NIC) and therefore can be defined ahead of time in order to reduce overhead at scan time.

The `make_packet` callback is called for each host that is scanned to allow the **probe module** to update host-specific values, and is provided with IP address values, an opaque validation string, and the probe number (shown below). The probe module is responsible for placing as much of the validation string into the probe as possible, in such a way that when a valid response is returned by a server, the probe module can verify that it is present. For example, for a TCP SYN scan, the tcp_synscan probe module can use the TCP source port and sequence number to store the validation string. Response packets (SYN+ACKs) will contain the expected values in the destination port and acknowledgement number.

    int make_packet(
        void       *packetbuf,  // packet buffer
        ipaddr_n_t src_ip,      // source IP in network order
        ipaddr_n_t dst_ip,      // destination IP in network order
        uint32_t   *validation, // validation string to place in probe
        int        probe_num    // if sending multiple probes per host, this is
                                // the number of the probe currently being sent
    );
Scan modules must also define `pcap_filter`, `validate_packet`, and `process_packet`. Only packets that match the PCAP filter will be considered by the scanner. For example, in the case of a TCP SYN scan, we only want to investigate TCP SYN/ACK or TCP RST packets and would utilize a filter similar to `tcp && tcp[13] & 4 != 0 || tcp[13] == 18`. The `validate_packet` function will be called for every packet that fulfills this PCAP filter. If the validation returns non-zero, the `process_packet` function will be called, and will populate a fieldset using fields defined in `fields` with data from the packet. For example, the following code processes a packet for the TCP synscan probe module.

    void synscan_process_packet(const u_char *packet, uint32_t len, fieldset_t *fs)
    {
        struct iphdr *ip_hdr = (struct iphdr *)&packet[sizeof(struct ethhdr)];
        struct tcphdr *tcp = (struct tcphdr *)((char *)ip_hdr + sizeof(struct iphdr));

        fs_add_uint64(fs, "sport", (uint64_t) ntohs(tcp->source));
        fs_add_uint64(fs, "dport", (uint64_t) ntohs(tcp->dest));
        fs_add_uint64(fs, "seqnum", (uint64_t) ntohl(tcp->seq));
        fs_add_uint64(fs, "acknum", (uint64_t) ntohl(tcp->ack_seq));
        fs_add_uint64(fs, "window", (uint64_t) ntohs(tcp->window));

        if (tcp->rst) { // RST packet
            fs_add_string(fs, "classification", (char *) "rst", 0);
            fs_add_uint64(fs, "success", 0);
        } else { // SYNACK packet
            fs_add_string(fs, "classification", (char *) "synack", 0);
            fs_add_uint64(fs, "success", 1);
        }
    }
--------------------------------------------------------------------------------

via: https://zmap.io/documentation.html

Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)

[1]:https://github.com/ewust/forge_socket/
[2]:https://github.com/zmap/zmap/blob/v1.0.0/src/output_modules/output_modules.c
[3]:https://github.com/zmap/zmap/blob/master/src/output_modules/output_modules.h
[4]:https://github.com/zmap/zmap/blob/master/src/output_modules/module_csv.c

@ -1,3 +1,5 @@

Translating by dingdongnigetou

Install Plex Media Server On Ubuntu / CentOS 7.1 / Fedora 22
================================================================================
In this article we will show you how easily you can set up Plex Home Media Server on major Linux distributions with their latest releases. After successfully installing Plex, you will be able to use your centralized home media playback system, which streams its media to many Plex player apps; Plex Home allows you to set up your environment by adding your devices and a group of users that can all use Plex together. So let's start with its installation, first on Ubuntu 15.04.
@ -188,4 +190,4 @@ via: http://linoxide.com/tools/install-plex-media-server-ubuntu-centos-7-1-fedor

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://linoxide.com/author/kashifs/

@ -1,3 +1,4 @@

Translating by GOLinux!
How to Configure Swarm Native Clustering for Docker
================================================================================
Hi everyone, today we'll learn about Swarm and how we can create native clusters using Docker with Swarm. [Docker Swarm][1] is a native clustering program for Docker which turns a pool of Docker hosts into a single virtual host. Swarm serves the standard Docker API, so any tool which can communicate with a Docker daemon can use Swarm to transparently scale to multiple hosts. Swarm follows the "batteries included but removable" principle, as do other Docker projects. It ships with a simple scheduling backend out of the box, and as initial development settles, an API will develop to enable pluggable backends. The goal is to provide a smooth out-of-the-box experience for simple use cases, and to allow swapping in more powerful backends, like Mesos, for large-scale production deployments. Swarm is extremely easy to set up and get started with.
@ -92,4 +93,4 @@ via: http://linoxide.com/linux-how-to/configure-swarm-clustering-docker/

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://linoxide.com/author/arunp/
[1]:https://docs.docker.com/swarm/

@ -1,3 +1,5 @@

Translating by DongShuaike

How to Provision Swarm Clusters using Docker Machine
================================================================================
Hi all, today we'll learn how we can deploy Swarm clusters using Docker Machine. Swarm serves the standard Docker API, so any tool which can communicate with a Docker daemon can use Swarm to transparently scale to multiple hosts. Docker Machine is an application that helps to create Docker hosts on our computer, on cloud providers, and inside our own data center. It provides an easy solution for creating servers, installing Docker on them, and then configuring the Docker client according to the user's configuration and requirements. We can provision Swarm clusters with any driver we need, and they are highly secured with TLS encryption.
@ -122,4 +124,4 @@ via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-mach

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://linoxide.com/author/arunp/

@ -1,359 +0,0 @@

PHP Security
================================================================================


### Introduction ###

When offering an Internet service, you must always keep security in mind as you develop your code. It may appear that most PHP scripts aren't sensitive to security concerns; this is mainly due to the large number of inexperienced programmers working in the language. However, there is no reason for you to have an inconsistent security policy based on a rough guess at your code's significance. The moment you put anything financially interesting on your server, it becomes likely that someone will try to casually hack it. Create a forum program or any sort of shopping cart, and the probability of attack rises to a dead certainty.
### Background ###

Here are a few general security guidelines for securing your web content:

#### Don't trust forms ####

Hacking forms is trivial. Yes, by using a silly JavaScript trick, you may be able to limit your form to allow only the numbers 1 through 5 in a rating field. The moment someone turns JavaScript off in their browser or posts custom form data, your client-side validation flies out the window.

Users interact with your scripts primarily through form parameters, and therefore they're the biggest security risk. What's the lesson? Always validate the data that gets passed to any PHP script, in the PHP script. In this article, we show you how to analyze and protect against cross-site scripting (XSS) attacks, which can hijack your user's credentials (and worse). You'll also see how to prevent the MySQL injection attacks that can taint or destroy your data.
#### Don't trust users ####

Assume that every piece of data your website gathers is laden with harmful code. Sanitize every piece, even if you're positive that nobody would ever try to attack your site.

#### Turn off global variables ####

The biggest security hole you can have is having the register_globals configuration parameter enabled. Mercifully, it's turned off by default in PHP 4.2 and later. If **register_globals** is on, you can disable it by setting the register_globals variable to Off in your server's php.ini file:

    register_globals = Off
Novice programmers view registered globals as a convenience, but they don't realize how dangerous this setting is. A server with global variables enabled automatically assigns global variables to any form parameters. For an idea of how this works and why this is dangerous, let's look at an example.

Let's say that you have a script named process.php that enters form data into your user database. The original form looked like this:

    <input name="username" type="text" size="15" maxlength="64">
When running process.php, PHP with registered globals enabled places the value of this parameter into the $username variable. This saves some typing over accessing it through **$_POST['username']** or **$_GET['username']**. Unfortunately, this also leaves you open to security problems, because PHP sets a variable for any value sent to the script via a GET or POST parameter, and that is a big problem if you didn't explicitly initialize the variable and you don't want someone to manipulate it.

Take the script below, for example: if the $authorized variable is true, it shows confidential data to the user. Under normal circumstances, the $authorized variable is set to true only if the user has been properly authenticated via the hypothetical authenticated_user() function. But if you have **register_globals** active, anyone could send a GET parameter such as authorized=1 to override this:

    <?php
    // Define $authorized = true only if user is authenticated
    if (authenticated_user()) {
        $authorized = true;
    }

    // Because $authorized was never initialized to false, a request such as
    // process.php?authorized=1 makes it true under register_globals.
    if ($authorized) {
        show_confidential_data();
    }
    ?>
The moral of the story is that you should pull form data from predefined server variables. All data passed on to your web page via a posted form is automatically stored in a large array called **$_POST**, and all GET data is stored in a large array called **$_GET**. File upload information is stored in a special array called $_FILES. In addition, there is a combined variable called $_REQUEST.

To access the username field from a POST method form, use **$_POST['username']**. Use **$_GET['username']** if the username is in the URL. If you don't care where the value came from, use **$_REQUEST['username']**.

    <?php
    $post_value = $_POST['post_value'];
    $get_value = $_GET['get_value'];
    $some_variable = $_REQUEST['some_value'];
    ?>
$_REQUEST is a union of the $_GET, $_POST, and $_COOKIE arrays. If you have two or more values of the same parameter name, be careful of which one PHP uses. The default order is cookie, POST, then GET.
#### Recommended Security Configuration Options ####

There are several PHP configuration settings that affect security features. Here are the ones that should obviously be used for production servers:

- **register_globals** set to off
- **safe_mode** set to off
- **display_errors** set to off. This setting controls visible error reporting, which sends a message to the user's browser if something goes wrong. For production servers, use error logging instead. Development servers can enable displayed errors as long as they're behind a firewall.
- Disable these functions: system(), exec(), passthru(), shell_exec(), proc_open(), and popen().
- **open_basedir** set for both the /tmp directory (so that session information can be stored) and the web root, so that scripts cannot access files outside a selected area.
- **expose_php** set to off. This feature adds a PHP signature that includes the version number to the Apache headers.
- **allow_url_fopen** set to off. This isn't strictly necessary if you're careful about how you access files in your code, that is, if you validate all input parameters.
- **allow_url_include** set to off. There's really no sane reason for anyone to want to access include files via HTTP.

In general, if you find code that wants to use these features, you shouldn't trust it. Be especially careful of anything that wants to use a function such as system(); it's almost certainly flawed.

With these settings now behind us, let's look at some specific attacks and the methods that will help you protect your server.
### SQL Injection Attacks ###

Because the queries that PHP passes to MySQL databases are written in the powerful SQL programming language, you run the risk of someone attempting an SQL injection attack by embedding SQL in web query parameters. By inserting malicious SQL code fragments into form parameters, an attacker attempts to break into (or disable) your server.

Let's say that you have a form parameter that you eventually place into a variable named $product, and you create some SQL like this:

    $sql = "select * from pinfo where product = '$product'";

If that parameter came straight from the form, use database-specific escapes with PHP's native functions, like this:

    $sql = "select * from pinfo where product = '" .
           mysql_real_escape_string($product) . "'";
If you don't, someone might just decide to throw this fragment into the form parameter:

    39'; DROP pinfo; SELECT 'FOO

Then the result of $sql is:

    select * from pinfo where product = '39'; DROP pinfo; SELECT 'FOO'

Because the semicolon is MySQL's statement delimiter, the database processes these three statements:

    select * from pinfo where product = '39'
    DROP pinfo
    SELECT 'FOO'

Well, there goes your table.

Note that this particular syntax won't actually work with PHP and MySQL, because the **mysql_query()** function allows just one statement to be processed per request. However, a subquery will still work.

To prevent SQL injection attacks, do two things (see the sketch after the note below):

- Always validate all parameters. For example, if something needs to be a number, make sure that it's a number.
- Always use the mysql_real_escape_string() function on data to escape any single or double quotes in your data.

**Note: To automatically escape any form data, you can turn on Magic Quotes.**

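A minimal sketch combining the two steps above; the form fields, table, and column names are illustrative, and the mysql_* calls assume the legacy MySQL extension used throughout this article:

    <?php
    // 1. Validate: the quantity parameter must be a number.
    if (!is_numeric($_POST['qty'])) {
        die('Invalid quantity.');
    }
    $qty = (int) $_POST['qty'];

    // 2. Escape: neutralize quotes in the free-form product name.
    $product = mysql_real_escape_string($_POST['product']);

    $sql = "select * from pinfo where product = '$product' and qty = $qty";
    $result = mysql_query($sql);
    ?>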
Some MySQL damage can be avoided by restricting your MySQL user privileges. Any MySQL account can be restricted to only do certain kinds of queries on selected tables. For example, you could create a MySQL user who can select rows but nothing else. However, this is not terribly useful for dynamic data, and, furthermore, if you have sensitive customer information, it might be possible for someone to have access to some data that you didn't intend to make available. For example, a user accessing account data could try to inject some code that accesses another account number instead of the one assigned to the current session.
### Preventing Basic XSS Attacks ###

XSS stands for cross-site scripting. Unlike most attacks, this exploit works on the client side. The most basic form of XSS is to put some JavaScript in user-submitted content to steal the data in a user's cookie. Since most sites use cookies and sessions to identify visitors, the stolen data can then be used to impersonate that user, which is deeply troublesome when it's a typical user account, and downright disastrous if it's the administrative account. If you don't use cookies or session IDs on your site, your users aren't vulnerable, but you should still be aware of how this attack works.

Unlike MySQL injection attacks, XSS attacks are difficult to prevent. Yahoo!, eBay, Apple, and Microsoft have all been affected by XSS. Although the attack doesn't involve PHP, you can use PHP to strip user data in order to prevent attacks. To stop an XSS attack, you have to restrict and filter the data a user submits to your site. It is for this precise reason that most online bulletin boards don't allow the use of HTML tags in posts and instead replace them with custom tag formats such as **[b]** and **[linkto]**.

Let's look at a simple script that illustrates how to prevent some of these attacks. For a more complete solution, use SafeHTML, discussed later in this article.

    function transform_HTML($string, $length = null) {
        // Helps prevent XSS attacks
        // Remove dead space.
        $string = trim($string);
        // Prevent potential Unicode codec problems.
        $string = utf8_decode($string);
        // HTMLize HTML-specific characters.
        $string = htmlentities($string, ENT_NOQUOTES);
        $string = str_replace("#", "&#35;", $string);
        $string = str_replace("%", "&#37;", $string);
        $length = intval($length);
        if ($length > 0) {
            $string = substr($string, 0, $length);
        }
        return $string;
    }
This function transforms HTML-specific characters into HTML literals. A browser renders any HTML run through this script as text with no markup. For example, consider this HTML string:

    <STRONG>Bold Text</STRONG>

Normally, this HTML would render as follows:

**Bold Text**

However, when run through **transform_HTML()**, it renders as the original input. The reason is that the tag characters are HTML entities in the processed string. The resulting string from **transform_HTML()** in plaintext looks like this:

    &lt;STRONG&gt;Bold Text&lt;/STRONG&gt;
The essential piece of this function is the htmlentities() function call that transforms <, >, and & into their entity equivalents of **&lt;**, **&gt;**, and **&amp;**. Although this takes care of the most common attacks, experienced XSS hackers have another sneaky trick up their sleeve: encoding their malicious scripts in hexadecimal or UTF-8 instead of normal ASCII text, hoping to circumvent your filters. They can send the code along as a GET variable in the URL, saying, "Hey, this is hexadecimal code, but could you run it for me anyway?" A hexadecimal example looks something like this:

    <a href="http://host/a.php?variable=%22%3e %3c%53%43%52%49%50%54%3e%44%6f%73%6f%6d%65%74%68%69%6e%67%6d%61%6c%69%63%69%6f%75%73%3c%2f%53%43%52%49%50%54%3e">

But when the browser renders that information, it turns out to be:

    <a href="http://host/a.php?variable="> <SCRIPT>Dosomethingmalicious</SCRIPT>

To prevent this, transform_HTML() takes the additional steps of converting # and % signs into their entity equivalents, shutting down hex attacks, and converting UTF-8 encoded data.

Finally, just in case someone tries to overload a string with a very long input, hoping to crash something, you can add an optional $length parameter to trim the string to the maximum length you specify.
### Using SafeHTML ###

The problem with the previous script is that it is simple, and it does not allow for any kind of user markup. Unfortunately, there are hundreds of ways to try to sneak JavaScript past someone's filters, and short of stripping all HTML from someone's input, there's no way of stopping it.

Currently, there's no single script that's guaranteed to be unbreakable, though there are some that are better than most. There are two approaches to security, whitelisting and blacklisting, and whitelisting tends to be less complicated and more effective.

One whitelisting solution is the SafeHTML anti-XSS parser from PixelApes.

SafeHTML is smart enough to recognize valid HTML, so it can hunt down and strip any dangerous tags. It does its parsing with another package called HTMLSax.

To install and use SafeHTML, do the following:

1. Go to [http://pixel-apes.com/safehtml/?page=safehtml][1] and download the latest version of SafeHTML.
1. Put the files in the classes directory on your server. This directory contains everything that SafeHTML and HTMLSax need to function.
1. Include the SafeHTML class file (safehtml.php) in your script.
1. Create a new SafeHTML object called $safehtml.
1. Sanitize your data with the $safehtml->parse() method.

Here's a complete example:

    <?php
    /* If you're storing the HTMLSax3.php in the /classes directory, along
       with the safehtml.php script, define XML_HTMLSAX3 as a null string. */
    define('XML_HTMLSAX3', '');
    // Include the class file.
    require_once('classes/safehtml.php');
    // Define some sample bad code.
    $data = "This data would raise an alert <script>alert('XSS Attack')</script>";
    // Create a safehtml object.
    $safehtml = new safehtml();
    // Parse and sanitize the data.
    $safe_data = $safehtml->parse($data);
    // Display result.
    echo 'The sanitized data is <br />' . $safe_data;
    ?>
If you want to sanitize any other data in your script, you don't have to create a new object; just use the $safehtml->parse() method throughout your script.
#### What Can Go Wrong? ####

The biggest mistake you can make is assuming that this class completely shuts down XSS attacks. SafeHTML is a fairly complex script that checks for almost everything, but nothing is guaranteed. You still want to do the parameter validation that applies to your site. For example, this class doesn't check the length of a given variable to ensure that it fits into a database field. It doesn't check for buffer overflow problems.

XSS hackers are creative and use a variety of approaches to try to accomplish their objectives. Just look at RSnake's XSS tutorial at [http://ha.ckers.org/xss.html][2] to see how many ways there are to try to sneak code past someone's filters. The SafeHTML project has good programmers working overtime to try to stop XSS attacks, and it has a solid approach, but there's no guarantee that someone won't come up with some weird and fresh approach that could short-circuit its filters.

**Note: For an example of the powerful effects of XSS attacks, check out [http://namb.la/popular/tech.html][3], which shows a step-by-step approach to creating the JavaScript XSS worm that overloaded the MySpace servers.**
### Protecting Data with a One-Way Hash ###

This script performs a one-way transformation on data: in other words, it can make a hash signature of someone's password, but you can't ever decrypt it and go back to the original password. Why would you want to do that? The application is in storing passwords. An administrator doesn't need to know users' passwords; in fact, it's a good idea that only the user knows his or her password. The system (and the system alone) should be able to identify a correct password; this has been the Unix password security model for years. One-way password security works as follows:

1. When a user or administrator creates or changes an account password, the system hashes the password and stores the result. The host system discards the plaintext password.
1. When the user logs in to a system via any means, the entered password is again hashed.
1. The host system throws away the plaintext password entered.
1. This newly hashed password is compared against the stored hash.
1. If the hashed passwords match, then the system grants access.

The host system does this without ever knowing the original password; in fact, the original value is completely irrelevant. As a side effect, should someone break into your system and steal your password database, the intruder will have a bunch of hashed passwords without any way of reversing them to find the originals. Of course, given enough time, computer power, and poorly chosen user passwords, an attacker could probably use a dictionary attack to figure out the passwords. Therefore, don't make it easy for people to get their hands on your password database, and if someone does, have everyone change their passwords.
#### Encryption Vs Hashing ####

Technically speaking, this process is not encryption. It is a hash, which is different from encryption for two reasons:

Unlike in encryption, data cannot be decrypted.

It's possible (but extremely unlikely) that two different strings will produce the same hash. There's no guarantee that a hash is unique, so don't try to use a hash as something like a unique key in a database.

    function hash_ish($string) {
        return md5($string);
    }
The md5() function returns a 32-character hexadecimal string, based on the RSA Data Security Inc. Message-Digest Algorithm (also known, conveniently enough, as MD5). You can then insert that 32-character string into your database, compare it against other md5'd strings, or just adore its 32-character perfection.

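As a sketch of the login flow described above, where $stored_hash stands in for the hash previously saved in your user database:

    <?php
    // Hash the submitted password and compare it against the stored hash.
    $submitted_hash = hash_ish($_POST['password']);
    if ($submitted_hash === $stored_hash) {
        echo 'Access granted.';
    } else {
        echo 'Access denied.';
    }
    ?>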
#### Hacking the Script ####

It is virtually impossible to decrypt MD5 data. That is, it's very hard. However, you still need good passwords, because it's still easy to make a database of hashes for the entire dictionary. There are online MD5 dictionaries where you can enter **06d80eb0c50b49a509b49f2424e8c805** and get a result of "dog." Thus, even though MD5s can't technically be decrypted, they're still vulnerable; if someone gets your password database, you can be sure that they'll be consulting an MD5 dictionary. Thus, it's in your best interests when creating password-based systems that the passwords are long (a minimum of six characters and preferably eight) and contain both letters and numbers. And make sure that the password isn't in the dictionary.
### Encrypting Data with Mcrypt ###

MD5 hashes work just fine if you never need to see your data in readable form. Unfortunately, that's not always an option; if you offer to store someone's credit card information in encrypted format, you need to decrypt it at some later point.

One of the easiest solutions is the Mcrypt module, an add-in for PHP that allows high-grade encryption. The Mcrypt library offers more than 30 ciphers to use in encryption and the possibility of a passphrase that ensures that only you (or, optionally, your users) can decrypt data.

Let's see some hands-on use. The following script contains functions that use Mcrypt to encrypt and decrypt data:

    <?php
    $data = "Stuff you want encrypted";
    $key = "Secret passphrase used to encrypt your data";
    $cipher = MCRYPT_SERPENT_256;
    $mode = MCRYPT_MODE_CBC;

    function encrypt($data, $key, $cipher, $mode) {
        // Encrypt data
        return (string) base64_encode(
            mcrypt_encrypt(
                $cipher,
                substr(md5($key), 0, mcrypt_get_key_size($cipher, $mode)),
                $data,
                $mode,
                substr(md5($key), 0, mcrypt_get_block_size($cipher, $mode))
            )
        );
    }

    function decrypt($data, $key, $cipher, $mode) {
        // Decrypt data
        return (string) mcrypt_decrypt(
            $cipher,
            substr(md5($key), 0, mcrypt_get_key_size($cipher, $mode)),
            base64_decode($data),
            $mode,
            substr(md5($key), 0, mcrypt_get_block_size($cipher, $mode))
        );
    }
    ?>
The **mcrypt()** function requires several pieces of information:
|
||||
|
||||
- The data to encrypted.
|
||||
- The passphrase used to encrypt and unlock your data, also known as the key.
|
||||
- The cipher used to encrypt the data, which is the specific algorithm used to encrypt the data. This script uses **MCRYPT_SERPENT_256**, but you can choose from an array of fancy-sounding ciphers, including **MCRYPT_TWOFISH192**, **MCRYPT_RC2**, **MCRYPT_DES**, and **MCRYPT_LOKI97**.
|
||||
- The mode used to encrypt the data. There are several modes you can use, including Electronic Codebook and Cipher Feedback. This script uses **MCRYPT_MODE_CBC**, Cipher Block Chaining.
|
||||
- An **initialization vector**—also known as an IV, or a seed—an additional bit of binary data used to seed the encryption algorithm. That is, it's something extra thrown in to make the algorithm harder to crack.
|
||||
- The length of the string needed for the key and IV, which vary by cipher and block. Use the **mcrypt_get_key_size()** and **mcrypt_get_block_size()** functions to find the appropriate length; then trim the key value to the appropriate length with a handy **substr()** function. (If the key is shorter than the required value, don't worry—Mcrypt pads it with zeros.)
|
||||
|
||||
If someone steals both your data and your passphrase, they can just cycle through the ciphers until finding the one that works. Thus, we apply the additional security of using the **md5()** function on the key before we use it, so even having both data and passphrase won't get the intruder what she wants.
|
||||
|
||||
An intruder would need the function, the data, and the passphrase all at once—and if that is the case, they probably have complete access to your server, and you're hosed anyway.
|
||||
|
||||
There's a small data storage format problem here. Mcrypt returns its encrypted data in an ugly binary format that causes horrific errors when you try to store it in certain MySQL fields. Therefore, we use the **base64encode()** and **base64decode()** functions to transform the data into a SQL-compatible alphabetical format and retrieve rows.
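
As a quick illustration of that storage step, here is a hypothetical sketch; the table and column names are made up, and in production the values should go through your database layer's escaping or parameter binding:

    <?php
    // Hypothetical storage sketch: the ciphertext is Base64 text
    // (letters, digits, '+', '/' and '='), so it fits an ordinary
    // TEXT or VARCHAR column with no binary-escaping trouble.
    $ciphertext = encrypt($data, $key, $cipher, $mode);
    $sql = sprintf("INSERT INTO cards (user_id, card_enc) VALUES (%d, '%s')",
                   42, $ciphertext);
    // Later: SELECT card_enc ... then
    // decrypt($row['card_enc'], $key, $cipher, $mode)
    ?>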

#### Hacking the Script ####

In addition to experimenting with various encryption methods, you can add some convenience to this script. For example, rather than providing the key and mode every time, you could declare them as global constants in an included file, as sketched below.
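
That included file might look something like the following minimal sketch; the file name and constant names are hypothetical:

    <?php
    // config.php (hypothetical): define the encryption settings once,
    // then include this file from every script that needs them.
    define('CRYPT_KEY',    'Secret passphrase used to encrypt your data');
    define('CRYPT_CIPHER', MCRYPT_SERPENT_256);
    define('CRYPT_MODE',   MCRYPT_MODE_CBC);

    // Usage elsewhere:
    //   require 'config.php';
    //   $enc = encrypt($data, CRYPT_KEY, CRYPT_CIPHER, CRYPT_MODE);
    ?>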

### Generating Random Passwords ###

Random (but difficult-to-guess) strings are important in user security. For example, if someone loses a password and you're using MD5 hashes, you won't be able to, nor should you want to, look it up. Instead, you should generate a secure random password and send that to the user. Another application for random number generation is creating activation links in order to access your site's services. Here is a function that creates a password:

    <?php
    function make_password($num_chars) {
        if (is_numeric($num_chars) && ($num_chars > 0)) {
            $password = '';
            $accepted_chars = 'abcdefghijklmnopqrstuvwxyz1234567890';
            // Seed the generator if necessary
            // (not needed on PHP 4.2 and later).
            srand((int)((double)microtime() * 1000003));
            // Append one random character per iteration. Note: the
            // original loop used $i <= $num_chars, which produced
            // one character too many.
            for ($i = 0; $i < $num_chars; $i++) {
                $random_number = rand(0, strlen($accepted_chars) - 1);
                $password .= $accepted_chars[$random_number];
            }
            return $password;
        }
    }
    ?>

#### Using the Script ####

The **make_password()** function returns a string, so all you need to do is supply the length of the string as an argument:

    <?php
    $fifteen_character_password = make_password(15);
    ?>

The function works as follows:

- The function makes sure that **$num_chars** is a positive nonzero integer.
- The function initializes the **$password** variable to an empty string.
- The function initializes the **$accepted_chars** variable to the list of characters the password may contain. This script uses all lowercase letters and the numbers 0 through 9, but you can choose any set of characters you like.
- The random number generator needs a seed, so it gets a bunch of random-like values. (This isn't strictly necessary on PHP 4.2 and later, where the generator is seeded automatically.)
- The function loops **$num_chars** times, one iteration for each character in the password to generate.
- For each new character, the script looks at the length of **$accepted_chars**, chooses a number between 0 and the length minus one, and adds the character at that index in **$accepted_chars** to **$password**.
- After the loop completes, the function returns **$password**.
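
Tying this back to the activation links mentioned above, a longer make_password() string works as a hard-to-guess token. A minimal sketch, with a hypothetical URL and flow:

    <?php
    // Hypothetical activation-link sketch: store the token in the
    // user's row, then email the link below to the user.
    $token = make_password(32);
    $link = 'http://www.example.com/activate.php?token=' . $token;
    // activate.php would look the token up and enable the account.
    ?>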

### License ###

This article, along with any associated source code and files, is licensed under [The Code Project Open License (CPOL)][4]

--------------------------------------------------------------------------------

via: http://www.codeproject.com/Articles/363897/PHP-Security

作者:[SamarRizvi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.codeproject.com/script/Membership/View.aspx?mid=7483622
[1]:http://pixel-apes.com/safehtml/?page=safehtml
[2]:http://ha.ckers.org/xss.html
[3]:http://namb.la/popular/tech.html
[4]:http://www.codeproject.com/info/cpol10.aspx
@ -1,69 +0,0 @@

FSSlc translating

Install Google Hangouts Desktop Client In Linux
================================================================================


Earlier, we saw how to [install Facebook Messenger in Linux][1] and [WhatsApp desktop client in Linux][2]. Both of these were unofficial apps. I have one more unofficial app for today, and it is [Google Hangouts][3].

Of course, you can use Google Hangouts in the web browser, but it is more fun to use the desktop client than the browser version. Curious? Let's see how to **install the Google Hangouts desktop client in Linux** and how to use it.

### Install Google Hangouts in Linux ###

We are going to use an open source project called [yakyak][4], which is an unofficial Google Hangouts client for Linux, Windows and OS X. I'll show you how to use yakyak in Ubuntu, but I believe that you can use the same method in other Linux distributions. Before we see how to use it, let's first take a look at the main features of yakyak:

- Send and receive chat messages
- Create and change conversations (rename, add people)
- Leave and/or delete conversations
- Desktop notifications
- Toggle notifications on/off
- Drag-drop, copy-paste or attach-button for image upload
- Hangupsbot sync room aware (actual user pics)
- Shows inline images
- History scrollback

Sounds good enough? Download the installation files from the link below:

- [Download Google Hangouts client yakyak][5]

The downloaded file is compressed. Extract it and you will see a directory like linux-x64 or linux-x32, depending on your system. Go into this directory and you should see a file named yakyak. Double-click on it to run it.



You'll have to enter your Google account credentials, of course.



Once you are through, you'll see a screen like the one below where you can chat with your Google contacts.



If you want to show profile pictures of the contacts, you can select View->Show conversation thumbnails.



You'll also get desktop notifications for new messages.



### Worth a try? ###

I'll let you give it a try and decide whether **installing the Google Hangouts client in Linux** is worth it. If you want official apps, take a look at these [instant messaging applications with native Linux clients][6]. Don't forget to share your experience with Google Hangouts in Linux.

--------------------------------------------------------------------------------

via: http://itsfoss.com/install-google-hangouts-linux/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/facebook-messenger-linux/
[2]:http://itsfoss.com/whatsapp-linux-desktop/
[3]:http://www.google.com/+/learnmore/hangouts/
[4]:https://github.com/yakyak/yakyak
[5]:https://github.com/yakyak/yakyak
[6]:http://itsfoss.com/best-messaging-apps-linux/
@ -1,109 +0,0 @@

Linux FAQs with Answers--How to install a Brother printer on Linux
================================================================================
> **Question**: I have a Brother HL-2270DW laser printer, and want to print documents from my Linux box using this printer. How can I install an appropriate Brother printer driver on my Linux computer, and use it?

Brother is well known for its affordable [compact laser printer lineup][1]. You can get a high-quality WiFi/duplex-capable laser printer for less than 200 USD, and the price keeps going down. On top of that, Brother provides reasonably good Linux support, so you can download and install its printer drivers on your Linux computer. I bought the [HL-2270DW][2] model more than a year ago, and I have been more than happy with its performance and reliability.

Here is how to install and configure a Brother printer driver on Linux. In this tutorial, I am demonstrating the installation of a USB driver for the Brother HL-2270DW laser printer. So first connect your printer to a Linux computer via a USB cable.

### Preparation ###

In this preparation step, go to the official [Brother support website][3], and search for the driver of your Brother printer by typing its model name (e.g., HL-2270DW).



Once you get to the download page for your Brother printer, choose your Linux platform. For Debian, Ubuntu or their derivatives, choose "Linux (deb)". For Fedora, CentOS or RHEL, choose "Linux (rpm)".



On the next page, you will find an LPR driver as well as a CUPS wrapper driver for your printer. The former is a command-line driver, while the latter allows you to configure and manage your printer via a web-based administration interface. The CUPS-based GUI in particular is quite useful for (local or remote) printer maintenance. It is recommended that you install both drivers, so click on "Driver Install Tool" and download the installer file.



Before proceeding to run the installer file, you need to do one additional step if you are using a 64-bit Linux system.

Since Brother printer drivers are developed for 32-bit Linux, you need to install the necessary 32-bit libraries on 64-bit Linux as follows.

On older Debian (6.0 or earlier) or Ubuntu (11.04 or earlier), install the following package.

    $ sudo apt-get install ia32-libs

On newer Debian or Ubuntu releases, which have introduced multiarch, you can install the following packages instead:

    $ sudo apt-get install lib32z1 lib32ncurses5

which replace the ia32-libs package. Or, you can install just:

    $ sudo apt-get install lib32stdc++6

If you are using a Red Hat based Linux, you can install:

    $ sudo yum install glibc.i686

### Driver Installation ###

Now go ahead and extract the downloaded driver installer file.

    $ gunzip linux-brprinter-installer-2.0.0-1.gz

Next, run the driver installer file as follows.

    $ sudo sh ./linux-brprinter-installer-2.0.0-1

You will be prompted to type a printer model name. Type the model name of your printer, for example "HL-2270DW".



After agreeing to the GPL license agreement, accept the default answers to any subsequent questions.



Now the LPR/CUPS printer drivers are installed. Proceed to configure your printer next.

### Printer Configuration ###

We are going to configure and manage the Brother printer via the CUPS-based web management interface.

First, verify that the CUPS daemon is running.

    $ sudo netstat -nap | grep 631

Open a web browser window, and go to http://localhost:631. You will see the following CUPS printer management interface.



Go to the "Administration" tab, and click on "Manage Printers" under the Printers section.



You should see your printer (HL-2270DW) listed on the next page. Click on the printer name.


In the dropdown menu titled "Administration", choose the "Set As Server Default" option. This will make your printer the system-wide default.



When asked to authenticate yourself, type in your Linux login information.



Now the basic configuration is mostly done. To test printing, open any document viewer application (e.g., a PDF viewer) and print a document. You will see "HL-2270DW" listed and chosen by default in the printer settings.



Printing should work now. You can see the printer status and manage print jobs via the same CUPS web interface.

--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/install-brother-printer-linux.html

作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/brother_printers
[2]:http://xmodulo.com/go/hl_2270dw
[3]:http://support.brother.com/
@ -1,115 +0,0 @@

wyangsun 翻译中
Why is the ibdata1 file continuously growing in MySQL?
================================================================================


We receive this question about the ibdata1 file in MySQL very often in [Percona Support][1].

The panic starts when the monitoring server sends an alert about the storage of the MySQL server, saying that the disk is about to get filled.

After some research you realize that most of the disk space is used by InnoDB's shared tablespace ibdata1. You have [innodb_file_per_table][2] enabled, so the question is:

### What is stored in ibdata1? ###

When you have innodb_file_per_table enabled, the tables are stored in their own tablespaces, but the shared tablespace is still used to store other internal InnoDB data:

- the data dictionary, aka the metadata of InnoDB tables
- the change buffer
- the doublewrite buffer
- undo logs

Some of them can be configured on [Percona Server][3] to avoid becoming too large. For example, you can set a maximum size for the change buffer with [innodb_ibuf_max_size][4], or store the doublewrite buffer in a separate file with [innodb_doublewrite_file][5].

In MySQL 5.6 you can also create external UNDO tablespaces so they will be in their own files instead of stored inside ibdata1. See the following [documentation link][6].

### What is causing ibdata1 to grow that fast? ###

Usually the first command we need to run when there is a MySQL problem is:

    SHOW ENGINE INNODB STATUS\G

That will show us very valuable information. We start by checking the **TRANSACTIONS** section, and we find this:

    ---TRANSACTION 36E, ACTIVE 1256288 sec
    MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
    show engine innodb status
    Trx read view will not see trx with id >= 36F, sees < 36F

This is the most common reason: a pretty old transaction, created 14 days ago. The status is **ACTIVE**, which means InnoDB has created a snapshot of the data, so it needs to keep old pages in **undo** to be able to provide a consistent view of the database from the moment the transaction was started. If your database is heavily write-loaded, that means lots of undo pages are being stored.

If you don't find any long-running transaction, you can also monitor another variable from the INNODB STATUS, the "**History list length**." It shows the number of pending purge operations. In this case the problem is usually caused by the purge thread (or the master thread in older versions) not being able to process undo records as fast as they come in.

### How can I check what is being stored in ibdata1? ###

Unfortunately MySQL doesn't provide information about what is being stored in that ibdata1 shared tablespace, but two tools will be very helpful. The first is a modified version of innochecksum made by Mark Callaghan and published in [this bug report][7].

It is pretty easy to use:

    # ./innochecksum /var/lib/mysql/ibdata1
    0 bad checksum
    13 FIL_PAGE_INDEX
    19272 FIL_PAGE_UNDO_LOG
    230 FIL_PAGE_INODE
    1 FIL_PAGE_IBUF_FREE_LIST
    892 FIL_PAGE_TYPE_ALLOCATED
    2 FIL_PAGE_IBUF_BITMAP
    195 FIL_PAGE_TYPE_SYS
    1 FIL_PAGE_TYPE_TRX_SYS
    1 FIL_PAGE_TYPE_FSP_HDR
    1 FIL_PAGE_TYPE_XDES
    0 FIL_PAGE_TYPE_BLOB
    0 FIL_PAGE_TYPE_ZBLOB
    0 other
    3 max index_id

It has 19272 UNDO_LOG pages out of a total of 20608. **That's 93% of the tablespace**.

The second way to check the contents of a tablespace is the [InnoDB Ruby Tools][8] made by Jeremy Cole. It is a more advanced tool for examining the internals of InnoDB. For example, we can use the space-summary parameter to get a list of every page and its data type, and then use standard Unix tools to count the **UNDO_LOG** pages:

    # innodb_space -f /var/lib/mysql/ibdata1 space-summary | grep UNDO_LOG | wc -l
    19272

Although in this particular case innochecksum is faster and easier to use, I recommend playing with Jeremy's tools to learn more about the data distribution inside InnoDB and its internals.

OK, now we know where the problem is. The next question:

### How can I solve the problem? ###

The answer to this question is easy. If you can still commit that transaction, do it. If not, you'll have to kill the thread to start the rollback process. That will just stop ibdata1 from growing, but it is clear that your software has a bug or someone made a mistake. Now that you know how to identify where the problem is, you need to find who or what is causing it, using your own debugging tools or the general query log.

If the problem is caused by the purge thread, then the solution is usually to upgrade to a newer version where you can use a dedicated purge thread instead of the master thread. More information at the following [documentation link][9].

### Is there any way to recover the used space? ###

No, it is not possible, at least not in an easy and fast way. InnoDB tablespaces never shrink... see the following [10-year-old bug report][10], recently updated by James Day (thanks):

When you delete some rows, the pages are marked as deleted for later reuse, but the space is never recovered. The only way is to start the database with a fresh ibdata1. To do that, you would need to take a full logical backup with mysqldump. Then stop MySQL and remove all the databases, the ib_logfile* files, and the ibdata* files. When you start MySQL again, it will create a new, fresh shared tablespace. Then, restore the logical dump.

### Summary ###

When the ibdata1 file is growing too fast within MySQL, it is usually caused by a long-running transaction that we have forgotten about. Try to solve the problem as fast as possible (by committing or killing the transaction), because you won't be able to recover the wasted disk space without the painfully slow mysqldump process.

Monitoring the database to avoid this kind of problem is also highly recommended. Our [MySQL Monitoring Plugins][11] include a Nagios script that can alert you if it finds a too-old running transaction.

--------------------------------------------------------------------------------

via: https://www.percona.com/blog/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/

作者:[Miguel Angel Nieto][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.percona.com/blog/author/miguelangelnieto/
[1]:https://www.percona.com/products/mysql-support
[2]:http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
[3]:https://www.percona.com/software/percona-server
[4]:https://www.percona.com/doc/percona-server/5.5/scalability/innodb_insert_buffer.html#innodb_ibuf_max_size
[5]:https://www.percona.com/doc/percona-server/5.5/performance/innodb_doublewrite_path.html?id=percona-server:features:percona_innodb_doublewrite_path#innodb_doublewrite_file
[6]:http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html#innodb-undo-tablespace
[7]:http://bugs.mysql.com/bug.php?id=57611
[8]:https://github.com/jeremycole/innodb_ruby
[9]:http://dev.mysql.com/doc/innodb/1.1/en/innodb-improved-purge-scheduling.html
[10]:http://bugs.mysql.com/bug.php?id=1341
[11]:https://www.percona.com/software/percona-monitoring-plugins
@ -0,0 +1,55 @@

Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 1 - Introduction
================================================================================
*Author's Note: If by some miracle you managed to click this article without reading the title then I want to re-iterate something... This is an editorial. These are my opinions. They are not representative of Phoronix, or Michael, these are my own thoughts.*

Additionally, yes... This is quite possibly a flame-bait article. I hope the community is better than that, because I do want to start a discussion and give feedback to both the KDE and Gnome communities. For that reason when I point out what I see as a flaw, I will try to be specific and direct so that any discussion can be equally specific and direct. For the record: The alternative title for this article was "Death By A Thousand [Paper Cuts][1]".

Now, with that out of the way... Onto the article.



When I sent the [Fedora 22 KDE Review][2] off to Michael I did it with a bit of a bad taste in my mouth. It wasn't because I didn't like KDE, or hadn't been enjoying Fedora, far from it. In fact, I started to transition my T450s over to Arch Linux but quickly decided against that, as I enjoyed the level of convenience that Fedora brings to me for many things.

The reason I had a bad taste in my mouth was because the Fedora developers put a lot of time and effort into their "Workstation" product and I wasn't seeing any of it. I wasn't using Fedora the way the main developers had intended it to be used and therefore wasn't getting the "Fedora Experience." It felt like someone reviewing Ubuntu by using Kubuntu, using a Hackintosh to review OS X, or reviewing Gentoo by using Sabayon. A lot of readers in the forums bash on Michael for reviewing distributions in their default configurations-- myself included. While I still do believe that reviews should be done under 'real-world' configurations, I do see the value in reviewing something in the condition it was given to you-- for better or worse.

It was with that attitude in mind that I decided to take a dip in the Gnome pool.

I do, however, need to add one more disclaimer... I am looking at KDE and Gnome as they are packaged in Fedora. OpenSUSE, Kubuntu, Arch, etc, might all have different implementations of each desktop that will change whether my specific 'pain points' are relevant to your distribution. Furthermore, despite the title, this is going to be a VERY KDE-heavy article. I called the article what I did because it was actually USING Gnome that made me realize how many "paper cuts" KDE actually has.

### Login Screen ###



I normally don't mind distributions shipping distro-specific themes, because most of them make the desktop look nicer. I finally found my exception.

First impressions count for a lot, right? Well, GDM definitely gets this one right. The login screen is incredibly clean, with consistent design language through every single part of it. The use of common-language icons instead of text boxes helps in that regard.



That is not to say that the Fedora 22 KDE login screen-- now SDDM rather than KDM-- looks 'bad' per se, but it's definitely more jarring.

Where's the fault? The top bar. Look at the Gnome screenshot-- you select a user and you get a tiny little gear symbol for selecting what session you want to log into. The design is clean, it gets out of your way, you could honestly miss it completely if you weren't paying attention. Now look at the blue KDE screenshot. The bar doesn't look like it was even rendered using the same widgets, and its entire placement feels like an afterthought of "Well shit, we need to throw this option somewhere..."

The same can be said for the Reboot and Shutdown options in the top right. Why not just a power button that opens a drop-down menu with Reboot, Shutdown, and Suspend options? Having the buttons be different colors than the background certainly makes them stick out and be noticeable... but I don't think in a good way. Again, they feel like an afterthought.

GDM is also far more useful from a practical standpoint. Look again along the top row: the time is listed, there's a volume control so that if you are trying to be quiet you can mute all sounds before you even log in, and there's an accessibility button for things like high contrast, zooming, text to speech, etc, all available via simple toggle buttons.



Swap it to upstream's Breeze theme and... suddenly most of my complaints are fixed. Common-language icons, everything is in the center of the screen, but the less important stuff is off to the sides. This creates a nice harmony between the top and bottom of the screen since they are equally empty. You still have a text box for the session switcher, but I can forgive that since the power buttons are now common-language icons. The current time is shown, which is a nice touch, as is a battery life indicator. Sure, Gnome still has a few nice additions, such as the volume applet and the accessibility buttons, but Breeze is a step up from Fedora's KDE theme.

Go to Windows (pre-Windows 8 & 10...) or OS X and you will see similar things: very clean, get-out-of-your-way lock screens and login screens that are devoid of text boxes or other widgets that distract the eye. It's a design that works and that is non-distracting. Fedora... Ship Breeze by default. The VDG got the design of the Breeze theme right. Don't mess it up.

--------------------------------------------------------------------------------

via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1

作者:Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts
[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1
@ -0,0 +1,32 @@

Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 2 - The GNOME Desktop
================================================================================
### The Desktop ###



I spent the first five days of my week logging into Gnome manually-- not turning on automatic login. On the night of the fifth day I got annoyed with having to log in by hand, so I went into the User Manager and turned on automatic login. The next time I logged in I got a prompt: "Your keychain was not unlocked. Please enter your password to unlock your keychain." That was when I realized something... Gnome had been automatically unlocking my keychain-- my wallet in KDE speak-- every time I logged in via GDM. It was only when I bypassed GDM's login that Gnome had to step in and make me do it manually.

Now, I am of the personal belief that if you enable automatic login then your keychain should be unlocked automatically as well-- otherwise what's the point? Either way you still have to type in your password, and at least if you hit the GDM login screen you have a chance to change your session if you want to.

But, regardless of that, it was at that moment that I realized it was such a simple thing that made the desktop feel so much more like it was working WITH ME. When I log into KDE via SDDM? Before the splash screen is even finished loading, there is a window popping up over top of the splash animation-- thereby disrupting the splash screen-- prompting me to unlock my KDE wallet or GPG keyring.

If a wallet doesn't exist already, you get prompted to create one-- why couldn't one have been created for me at user creation?-- and then get asked to pick between two encryption methods, one of which is even implied to be insecure (Blowfish). Why are you letting me pick something that's insecure for my security? Author's Note: If you install the actual KDE spin, and don't just install KDE after-the-fact, then a wallet is created for you at user creation. Unfortunately it's not unlocked for you automatically, and it seems to use the older Blowfish method rather than the newer, and more secure, GPG method.



If you DO pick the secure one (GPG) then it tries to load a GPG key... which I hope you had created already, because if you don't you get yelled at. How do you create one? Well, it doesn't offer to make one for you... nor does it tell you how... and if you do manage TO figure out that you are supposed to use KGpg to create the key, then you get taken through several menus and prompts that are nothing but confusing to new users. Why are you asking me where the GPG binary is located? How on earth am I supposed to know? Can't you just use the most recent one if there's more than one? And if there IS only one then, I ask again, why are you prompting me?

Why are you asking me what key size and encryption algorithm to use? You select 2048 and RSA/RSA by default, so why not just use those? If you want to have those options available then throw them under the "Expert mode" button that is right there. This isn't just about having configuration options available, it's about needless things that get thrown in the user's face by default. This is going to be a theme for the rest of the article... KDE needs better sane defaults. Configuration is great, I love the configuration I get with KDE, but it needs to learn when to and when not to prompt. It also needs to learn that "Well, it's configurable" is no excuse for bad defaults. Defaults are what users see initially, and bad defaults will lose users.

Let's move on from the keychain issue though, because I think I made my point.

--------------------------------------------------------------------------------

via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2

作者:Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,62 @@

Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 3 - GNOME Applications
================================================================================
### Applications ###



This is the one area where things are basically a wash. Each environment has a few applications that are really nice, and a few that are not so great. Once again though, Gnome gets the little things right in a way that KDE completely misses. None of KDE's applications are bad or broken, that's not what I'm saying. They function. But that's about it. To use an analogy: they passed the test, but they sure didn't get anywhere close to 100% on it.

Gnome on the left, KDE on the right. Dragon performs perfectly fine; it has clearly marked buttons for playing a file, URL, or a disc, just as you can do under Gnome Videos... but Gnome takes it one extra little step further in the name of convenience and user friendliness: it shows all the videos detected on your system by default, without you having to do anything. KDE has Baloo-- just as it had Nepomuk before that-- so why not use them? They've got a list of video files that's freely accessible... but KDE doesn't make use of the feature.

Moving on... Music Players.





Both of these applications, Rhythmbox on the left and Amarok on the right, were opened up and then a screenshot was immediately taken; nothing was clicked or altered. See the difference? Rhythmbox looks like a music player. It's direct, there are obvious ways to sort the results, and it knows what it's trying to be and what its job is: to play music.

Amarok feels like one of those tech demos, or library demos, where someone puts every option and extension they possibly can inside one application in order to show them off-- it's never something that gets shipped as production, it's just there to show off bits and pieces. And that's exactly what Amarok feels like: someone trying to show off every single cool thing they could shove into a media player without ever stopping to think "Wait, what were we trying to write again? An app to play music?"

Just look at the default layout. What is front and center for the user? A visualizer and Wikipedia integration-- the largest and most prominent column on the page. What's the second largest? The playlist. Third largest, aka smallest? The actual music listing. How on earth are these sane defaults for a core application?

Software Managers! Something that has seen a lot of push in recent years and will likely only see a bigger push in the months to come. Unfortunately, it's another area where KDE was so close... and then fell on its face right at the finish line.







Gnome Software is probably my new favorite software center, minus one gripe which I will get to in a bit. Muon, I wanted to like you. I really did. But you are a design nightmare. When the VDG was drawing up plans for you (mockup below), you looked pretty slick. Good use of white space, clean design, nice category listing, your whole not-being-split-into-two-applications.



Then someone got around to coding you and doing your actual UI, and I can only guess they were drunk while they did it.

Let's look at Gnome Software. What's smack dab in the middle? The application, its screenshots, its description, etc. What's smack dab in the middle of Muon? A gigantic waste of white space. Gnome Software also includes the lovely convenience feature of putting a "Launch" button right there in case you already have an application installed. Convenience and ease of use are important, people. Honestly, JUST having things in Muon be center-aligned would probably make things look better already.

What's along the top edge of Gnome Software, like a tab listing? All Software, Installed, Updates. Clean language, direct, to the point. Muon? Well, we have "Discover", which works okay as far as language goes, and then we have Installed, and then nothing. Where's Updates?

Well... the developers decided to split updates off into its own application, thus requiring you to open two applications to handle your software-- one to install it, and one to update it-- going against every software center paradigm that has ever existed since the Synaptic graphical package manager.

I'm not going to show it in a screenshot, just because I don't want to have to clean up my system afterwards, but if you go into Muon and start installing something, the way it shows progress is by adding a little tab to the bottom of your screen with the application's name. That tab doesn't go away when the application is done installing either, so if you're installing a lot of applications at once then you'll just slowly accumulate tabs along the bottom that you then have to go through and clean up manually, because if you don't then they grow off the screen and you have to swipe through them all to get to the most recent ones. Think: opening 50 tabs in Firefox. Major annoyance, major inconvenience.

I did say I would bash on Gnome a bit, and I meant it. Muon does get one thing very right that Gnome Software doesn't. Under the settings bar Muon has an option for "Show Technical Packages", aka compilers, software libraries, non-graphical applications, applications without AppData, etc. Gnome doesn't. If you want to install any of those you have to drop down to the terminal. I think that's wrong. I certainly understand wanting to push AppData, but I think they pushed it too soon. What made me realize Gnome didn't have this setting was when I went to install PowerTop and couldn't get Gnome to display it-- no AppData, no "Show Technical Packages" setting.

Doubly unfortunate is the fact that you can't "just use Apper" if you're under KDE since...



Apper's support for installing local packages has been broken since Fedora 19 or so, almost two years. I love the attention to detail and quality.

--------------------------------------------------------------------------------

via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3

作者:Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,52 @@

Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 4 - GNOME Settings
================================================================================
### Settings ###

There are a few specific KDE Control modules that I am going to pick at, mostly because they are so laughably horrible compared to their Gnome counterparts that it's honestly pathetic.

First one up? Printers.



Gnome is on the left, KDE is on the right. You know what the difference is between the printer applet on the left and the one on the right? When I opened up Gnome Control Center and hit "Printers", the applet popped up and nothing happened. When I opened up KDE System Settings and hit "Printers", I got a password prompt. Before I was even allowed to LOOK at the printers I had to give up ROOT'S password.

Let me just re-iterate that. In this, the days of PolicyKit and Logind, I am still being asked for root's password for what should be a sudo operation. I didn't even SET UP root's password when I installed the system. I had to drop down to Konsole and run 'sudo passwd root' so that I could GIVE root a password, so that I could go back into System Settings' printer applet and then give up root's password to even LOOK at what printers were available. Once I did that, I got prompted for root's password AGAIN when I hit "Add Printer", then I got prompted for root's password AGAIN after I went through and selected a printer and driver. Three times I got asked for ROOT'S password just to add a printer to the system.

When I added a printer under Gnome I didn't get prompted for my SUDO password until I hit "Unlock" in the printer applet. I got asked once, and then I never got asked again. KDE, I am begging you... Adopt Gnome's "Unlock" methodology. Do not prompt for a password until you really need one. Furthermore, whatever library is out there that allows KDE applications to bypass PolicyKit / Logind (if it's available) and prompt directly for root... Bin that code. If this were a multi-user system, I would either have to give up root's password, or be there every second of every day in order to type it in any time a user might have to update, change, or add a new printer. Both options are completely unacceptable.

One more thing...





Question to the forums: What looks cleaner to you? I had this realization when I was writing this article: Gnome's applet makes it very clear where any additional printers are going to go; it sets aside a column on the left to list them. Before I added a second printer to KDE, and it suddenly grew a left side column, I had this nightmare image in my head of the applet just shoving another icon onto the screen, with the printers listed out like preview images in a folder of pictures. I was pleasantly surprised to see that I was wrong, but the fact that the applet just 'grew' another column that didn't exist before and drastically altered its presentation is not really 'good' either. It's a design that's confusing, shocking, and non-intuitive.

Enough about printers though... The next KDE System Setting that is up for my public stoning? Multimedia, aka Phonon.



As always, Gnome's on the left, KDE is on the right. Let's just run through the Gnome settings first... The eyes go left to right, top to bottom, right? So let's do the same. First up: the volume control slider. The blue hint against the empty bar, with 100% clearly marked, removes all confusion about which way is "volume up." Immediately after the slider is an easy On/Off toggle that functions as a mute switch. Points to Gnome for remembering what the volume was set to BEFORE I muted sound, and returning to that same level AFTER I press volume-up to un-mute. Kmixer, you amnesiac piece of crap, I wish I could say as much about you.

Moving on! Tabbed options for Output, Input and Applications? With per-application volume controls within easy reach? Gnome, I love you more and more with every passing second. Balance options, sound profiles, and a clearly marked "Test Speakers" option.

I'm not sure how this could have been implemented in a cleaner, more concise way. Yes, it's just a Gnome-ized Pavucontrol, but I think that's the point. Pavucontrol got it mostly right to begin with; the Sound applet in Gnome Control Center just refines it slightly to make it even closer to perfect.

Phonon, you're up. And let me start by saying: What the fsck am I looking at? -I- get that I am looking at the priority list for the audio devices on the system, but the way it is presented is a bit of a nightmare. Also, where are the things the user probably cares about? A priority list is a great thing to have, it SHOULD be available, but it's something the user messes with once or twice and then never touches again. It's not important, or common, enough to warrant being front and center. Where's the volume slider? Where are the per-application controls? The things that users will be using more frequently? Well... those are under Kmix, a separate program, with its own settings and configuration... not under the System Settings... which kind of makes System Settings a bit of a misnomer. And in that same vein, let's hop over to network settings.



Presented above are the Gnome Network Settings. KDE's aren't included, for the reason I'm about to hit on. If you go to KDE's System Settings and hit any of the three options under the "Network" section you get tons of options: Bluetooth settings, default username and password for Samba shares (seriously, "Connectivity" only has 2 options: username and password for SMB shares. How the fsck does THAT deserve the all-inclusive title "Connectivity"?), controls for browser identification (which only work for Konqueror... a dead project), proxy settings, etc... Where are my wifi settings? They aren't there. Where are they? Well, they are in the network applet's private settings... not under Network Settings...

KDE, you're killing me. You have "System Settings". USE IT!

--------------------------------------------------------------------------------

via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4

作者:Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,40 @@

Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 5 - Conclusion
================================================================================
### User Experience and Closing Thoughts ###

When Gnome 2.x and KDE 4.x were going head to head... I jumped between the two quite happily. Some things I loved, some things I hated, but overall they were both a pleasure to use. Then Gnome 3.x came around, with all of the drama around Gnome Shell. I swore off Gnome and avoided it every chance I could. It wasn't user friendly, it was non-intuitive, and it broke an established paradigm in preparation for tablets taking over the world... a future that, judging from the dropping sales of tablets, will never come.

Eight releases of Gnome 3 later and the unimaginable happened. Gnome got user friendly. Gnome got intuitive. Is it perfect? Of course not. I still hate the paradigm it tries to push, I hate how it tries to force a workflow onto me, but both of those things can be gotten used to with time and patience. Once you have managed to look past Gnome Shell's alien appearance and you start interacting with it and the other parts of Gnome (Control Center especially), you see what Gnome has definitely gotten right: the little things. The attention to detail.

People can adapt to new paradigms, people can adapt to new workflows-- the iPhone and iPad proved that-- but what will always bother them are the paper cuts.

Which brings up an important distinction between KDE and Gnome. Gnome feels like a product. It feels like a singular experience. When you use it, it feels like it is complete and that everything you need is at your fingertips. It feels like THE Linux desktop in the same way that Windows or OS X have THE desktop experience: what you need is there and it was all written by the same guys working on the same team towards the same goal. Hell, even an application prompting for sudo access feels like an intentional part of the desktop under Gnome, much the way that it is under Windows. In KDE it's just some random-looking window popup that any application could have created. It doesn't feel like a part of the system stopping and going "Hey! Something has requested administrative rights! Do you want to let it through?" in an official capacity.

KDE doesn't feel like a cohesive experience. KDE doesn't feel like it has a direction it's moving in, it doesn't feel like a full experience. KDE feels like a bunch of pieces that are moving in a bunch of different directions, that just happen to have a shared toolkit beneath them. If that's what the developers are happy with, then fine, good for them, but if the developers still have the hope of offering the best experience possible then the little stuff needs to matter. The user experience and being intuitive need to be at the forefront of every single application, and there needs to be a vision of what KDE wants to offer -and- how it should look.

Is there anything stopping me from using Gnome Disks under KDE? Rhythmbox? Evolution? Nope. Nope. Nope. But that misses the point. Gnome and KDE both market themselves as "Desktop Environments." They are supposed to be full -environments-, meaning all the pieces come together and fit, and that you use that environment's tools because they are saying "We support everything you need to have a full desktop." Honestly? Only Gnome seems to fit the bill of being complete. KDE feels half-finished when it comes to the "coming together" part, let alone offering everything you need for a "full experience". There's no counterpart to Gnome Disks-- kpartitionmanager prompts for root. There's no "First Time User" run-through, and it just now got a user manager in Kubuntu. Hell, Gnome even provides Maps, Notes, Calendar and Clock applications. Do all of these applications matter 100%? No, of course not. But the fact that Gnome has them helps to push the idea that Gnome is a full and complete experience.

My complaints about KDE are not impossible to fix, not by a long shot. But it requires people to care. It requires developers to take pride in their work beyond just function-- form counts for a whole hell of a lot. Don't take away the user's ability to configure things-- the lack of configuration is one of my biggest gripes with GNOME 3.x-- but don't use "Well, you can configure it however you want" as an excuse for not providing sane defaults. The defaults are what users are going to see, they are what the users are going to judge from the first moment they open your application. Make it a good impression.

I know the KDE developers know design matters, that is WHY the Visual Design Group exists, but it feels like they aren't using the VDG to its fullest. And therein lies KDE's hamartia. It's not that KDE can't be complete, it's not that it can't come together and fix the downfalls, it's just that they haven't. They aimed for the bullseye... but they missed.

And before anyone says it... Don't say "Patches are welcome." Because while I can happily submit patches for the individual annoyances, more will just keep coming as developers keep on their merry way of doing things in non-intuitive ways. This isn't about Muon not being center-aligned. This isn't about Amarok having an ugly UI. This isn't about the volume and brightness pop-up notifiers taking up a large chunk of my screen real estate every time I hit my hotkeys (seriously, someone shrink those things).

This is about a mentality of apathy, this is about developers apparently not thinking things through when they make the UI for their applications. Everything the KDE Community does works fine. Amarok plays music. Dragon Player plays videos. Kwin / Qt & kdelibs are seemingly more power efficient than Mutter / gtk (according to my battery life times; non-scientific testing). Those things are all well and good, and important... but the presentation matters too. Arguably, the presentation matters the most, because that is what users see and interact with.

To KDE application developers... Get the VDG involved. Make every single 'core' application get its design vetted and approved by the VDG, have a UI/UX expert from the VDG go through the usage patterns and usage flow of your application to make sure it's intuitive. Hell, even just posting a mockup to the VDG forums and asking for feedback would probably get you some nice pointers and feedback for whatever application you're working on. You have this great resource there; now actually use it.

I am not trying to sound ungrateful. I love KDE, I love the work and effort that volunteers put into giving Linux users a viable desktop and an alternative to Gnome. And it is because I care that I write this article. Because I want to see KDE excel, I want to see it go further and farther than it has before. But doing that requires work on everyone's part, and it requires that people don't hold back criticism. It requires that people are honest about their interaction with the system and where it falls apart. If we can't give direct criticism, if we can't say "This sucks!", then it will never get better.

Will I still use Gnome after this week? Probably not, no. Gnome is still trying to force a workflow on me that I don't want to follow or abide by; I feel less productive when I'm using it because it doesn't follow my paradigm. For my friends though, when they ask me "What desktop environment should I use?" I'm probably going to recommend Gnome, especially if they are less technical users who want things to "just work." And that is probably the most damning assessment I could make in regards to the current state of KDE.

--------------------------------------------------------------------------------

via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5

作者:Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
239
sources/tech/20150717 How to collect NGINX metrics - Part 2.md
Normal file
@ -0,0 +1,239 @@
|
||||
translation by strugglingyouth
|
||||
|
||||
How to collect NGINX metrics - Part 2
|
||||
================================================================================
|
||||

|
||||
|
||||
### How to get the NGINX metrics you need ###
|
||||
|
||||
How you go about capturing metrics depends on which version of NGINX you are using, as well as which metrics you wish to access. (See [the companion article][1] for an in-depth exploration of NGINX metrics.) Free, open-source NGINX and the commercial product NGINX Plus both have status modules that report metrics, and NGINX can also be configured to report certain metrics in its logs:
|
||||
|
||||
注:表格
|
||||

| Metric | [NGINX (open-source)](https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source) | [NGINX Plus](https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus) | [NGINX logs](https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#logs) |
|:--|:-:|:-:|:-:|
| accepts / accepted | x | x | |
| handled | x | x | |
| dropped | x | x | |
| active | x | x | |
| requests / total | x | x | |
| 4xx codes | | x | x |
| 5xx codes | | x | x |
| request time | | | x |

#### Metrics collection: NGINX (open-source) ####

Open-source NGINX exposes several basic metrics about server activity on a simple status page, provided that you have the HTTP [stub status module][2] enabled. To check if the module is already enabled, run:

    nginx -V 2>&1 | grep -o with-http_stub_status_module

The status module is enabled if you see with-http_stub_status_module as output in the terminal.

If that command returns no output, you will need to enable the status module. You can use the --with-http_stub_status_module configuration parameter when [building NGINX from source][3]:

    ./configure \
    … \
    --with-http_stub_status_module
    make
    sudo make install

After verifying the module is enabled or enabling it yourself, you will also need to modify your NGINX configuration to set up a locally accessible URL (e.g., /nginx_status) for the status page:

    server {
        location /nginx_status {
            stub_status on;

            access_log off;
            allow 127.0.0.1;
            deny all;
        }
    }

Note: The server blocks of the NGINX config are usually found not in the master configuration file (e.g., /etc/nginx/nginx.conf) but in supplemental configuration files that are referenced by the master config. To find the relevant configuration files, first locate the master config by running:

    nginx -t

Open the master configuration file listed, and look for lines beginning with include near the end of the http block, such as:

    include /etc/nginx/conf.d/*.conf;

In one of the referenced config files you should find the main server block, which you can modify as above to configure NGINX metrics reporting. After changing any configurations, reload the configs by executing:

    nginx -s reload

Now you can view the status page to see your metrics:

    Active connections: 24
    server accepts handled requests
    1156958 1156958 4491319
    Reading: 0 Writing: 18 Waiting: 6

Note that if you are trying to access the status page from a remote machine, you will need to whitelist the remote machine’s IP address in your status configuration, just as 127.0.0.1 is whitelisted in the configuration snippet above.

The NGINX status page is an easy way to get a quick snapshot of your metrics, but for continuous monitoring you will need to automatically record that data at regular intervals. Parsers for the NGINX status page already exist for monitoring tools such as [Nagios][4] and [Datadog][5], as well as for the statistics collection daemon [collectD][6].
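
If you want to roll your own collector first, the status page is simple enough to parse with a short script. Below is a minimal sketch (Python 3, assuming the /nginx_status location configured above; adjust the URL for your setup) that fetches the page and extracts its counters:

    # Minimal sketch: poll the NGINX stub status page and parse its counters.
    import re
    import urllib.request

    STATUS_URL = "http://127.0.0.1/nginx_status"

    def fetch_stub_status(url=STATUS_URL):
        text = urllib.request.urlopen(url).read().decode("utf-8")
        metrics = {"active": int(re.search(r"Active connections:\s+(\d+)", text).group(1))}
        # The three counters on the line after "server accepts handled requests"
        accepts, handled, requests = map(int, re.search(r"\s(\d+)\s+(\d+)\s+(\d+)", text).groups())
        reading, writing, waiting = map(int, re.search(
            r"Reading:\s*(\d+)\s*Writing:\s*(\d+)\s*Waiting\s*:?\s*(\d+)", text).groups())
        metrics.update(accepts=accepts, handled=handled, requests=requests,
                       reading=reading, writing=writing, waiting=waiting)
        return metrics

    if __name__ == "__main__":
        print(fetch_stub_status())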

#### Metrics collection: NGINX Plus ####

The commercial NGINX Plus provides [many more metrics][7] through its ngx_http_status_module than are available in open-source NGINX. Among the additional metrics exposed by NGINX Plus are bytes streamed, as well as information about upstream systems and caches. NGINX Plus also reports counts of all HTTP status code types (1xx, 2xx, 3xx, 4xx, 5xx). A sample NGINX Plus status board is available [here][8].



*Note: the “Active” connections on the NGINX Plus status dashboard are defined slightly differently than the Active state connections in the metrics collected via the open-source NGINX stub status module. In NGINX Plus metrics, Active connections do not include connections in the Waiting state (aka Idle connections).*

NGINX Plus also reports [metrics in JSON format][9] for easy integration with other monitoring systems. With NGINX Plus, you can see the metrics and health status [for a given upstream grouping of servers][10], or drill down to get a count of just the response codes [from a single server][11] in that upstream:

    {"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055}

To enable the NGINX Plus metrics dashboard, you can add a status server block inside the http block of your NGINX configuration. ([See the section above][12] on collecting metrics from open-source NGINX for instructions on locating the relevant config files.) For example, to set up a status dashboard at http://your.ip.address:8080/status.html and a JSON interface at http://your.ip.address:8080/status, you would add the following server block:

    server {
        listen 8080;
        root /usr/share/nginx/html;

        location /status {
            status;
        }

        location = /status.html {
        }
    }

The status pages should be live once you reload your NGINX configuration:

    nginx -s reload

The official NGINX Plus docs have [more details][13] on how to configure the expanded status module.

#### Metrics collection: NGINX logs ####

NGINX’s [log module][14] writes configurable access logs to a destination of your choosing. You can customize the format of your logs and the data they contain by [adding or subtracting variables][15]. The simplest way to capture detailed logs is to add the following line in the server block of your config file (see [the section][16] on collecting metrics from open-source NGINX for instructions on locating your config files):

    access_log logs/host.access.log combined;

After changing any NGINX configurations, reload the configs by executing:

    nginx -s reload

The “combined” log format, included by default, captures [a number of key data points][17], such as the actual HTTP request and the corresponding response code. In the example logs below, NGINX logged a 200 (success) status code for a request for /index.html and a 404 (not found) error for the nonexistent /fail.

    127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"

    127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"

You can log request processing time as well by adding a new log format to the http block of your NGINX config file:

    log_format nginx '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent $request_time '
                     '"$http_referer" "$http_user_agent"';

And by adding or modifying the access_log line in the server block of your config file:

    access_log logs/host.access.log nginx;

After reloading the updated configs (by running nginx -s reload), your access logs will include response times, as seen below. The units are seconds, with millisecond resolution. In this instance, the server received a request for /big.pdf, returning a 206 (partial content) status code after sending 33973115 bytes. Processing the request took 0.202 seconds (202 milliseconds):

    127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"

You can use a variety of tools and services to parse and analyze NGINX logs. For instance, [rsyslog][18] can monitor your logs and pass them to any number of log-analytics services; you can use a free, open-source tool such as [logstash][19] to collect and analyze logs; or you can use a unified logging layer such as [Fluentd][20] to collect and parse your NGINX logs.
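
If you just want a quick look before wiring up one of those tools, the custom format above is also easy to parse yourself. Here is a minimal sketch (Python 3, assuming logs written with the 'nginx' log_format defined above) that tallies the 5xx rate and the average request time:

    # Minimal sketch: parse access logs in the custom 'nginx' log_format above,
    # then report the 5xx error rate and the average request time.
    import re

    LINE = re.compile(
        r'(?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) (?P<rtime>[\d.]+)')

    statuses, times = [], []
    with open("logs/host.access.log") as f:
        for line in f:
            m = LINE.match(line)
            if m:
                statuses.append(m.group("status"))
                times.append(float(m.group("rtime")))

    if statuses:
        rate = sum(s.startswith("5") for s in statuses) / len(statuses)
        print("requests: {}, 5xx rate: {:.2%}, avg request time: {:.3f}s".format(
            len(statuses), rate, sum(times) / len(times)))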

### Conclusion ###

Which NGINX metrics you monitor will depend on the tools available to you, and whether the insight provided by a given metric justifies the overhead of monitoring that metric. For instance, is measuring error rates important enough to your organization to justify investing in NGINX Plus or implementing a system to capture and analyze logs?

At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][21], and get started right away with a [free trial of Datadog][22].

----------

Source Markdown for this post is available [on GitHub][23]. Questions, corrections, additions, etc.? Please [let us know][24].

--------------------------------------------------------------------------------

via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/

作者:K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
[3]:http://wiki.nginx.org/InstallOptions
[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx
[5]:http://docs.datadoghq.com/integrations/nginx/
[6]:https://collectd.org/wiki/index.php/Plugin:nginx
[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data
[8]:http://demo.nginx.com/status.html
[9]:http://demo.nginx.com/status
[10]:http://demo.nginx.com/status/upstreams/demoupstreams
[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses
[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example
[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html
[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
[18]:http://www.rsyslog.com/
[19]:https://www.elastic.co/products/logstash
[20]:http://www.fluentd.org/
[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up
[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md
[24]:https://github.com/DataDog/the-monitor/issues

@@ -0,0 +1,151 @@

translating wi-cuckoo

How to monitor NGINX with Datadog - Part 3
================================================================================



If you’ve already read [our post on monitoring NGINX][1], you know how much information you can gain about your web environment from just a handful of metrics. And you’ve also seen just how easy it is to start collecting metrics from NGINX on an ad hoc basis. But to implement comprehensive, ongoing NGINX monitoring, you will need a robust monitoring system to store and visualize your metrics, and to alert you when anomalies happen. In this post, we’ll show you how to set up NGINX monitoring in Datadog so that you can view your metrics on customizable dashboards like this:



Datadog allows you to build graphs and alerts around individual hosts, services, processes, metrics—or virtually any combination thereof. For instance, you can monitor all of your NGINX hosts, or all hosts in a certain availability zone, or you can monitor a single key metric being reported by all hosts with a certain tag. This post will show you how to:

- Monitor NGINX metrics on Datadog dashboards, alongside all your other systems
- Set up automated alerts to notify you when a key metric changes dramatically

### Configuring NGINX ###

To collect metrics from NGINX, you first need to ensure that NGINX has an enabled status module and a URL for reporting its status metrics. Step-by-step instructions [for configuring open-source NGINX][2] and [NGINX Plus][3] are available in our companion post on metric collection.

### Integrating Datadog and NGINX ###

#### Install the Datadog Agent ####

The Datadog Agent is [the open-source software][4] that collects and reports metrics from your hosts so that you can view and monitor them in Datadog. Installing the agent usually takes [just a single command][5].

As soon as your Agent is up and running, you should see your host reporting metrics [in your Datadog account][6].



#### Configure the Agent ####

Next you’ll need to create a simple NGINX configuration file for the Agent. The location of the Agent’s configuration directory for your OS can be found [here][7].

Inside that directory, at conf.d/nginx.yaml.example, you will find [a sample NGINX config file][8] that you can edit to provide the status URL and optional tags for each of your NGINX instances:

    init_config:

    instances:

        - nginx_status_url: http://localhost/nginx_status/
          tags:
              - instance:foo

Once you have supplied the status URLs and any tags, save the config file as conf.d/nginx.yaml.

#### Restart the Agent ####

You must restart the Agent to load your new configuration file. The restart command varies somewhat by platform—see the specific commands for your platform [here][9].

#### Verify the configuration settings ####

To check that Datadog and NGINX are properly integrated, run the Datadog info command. The command for each platform is available [here][10].

If the configuration is correct, you will see a section like this in the output:

    Checks
    ======

    [...]

    nginx
    -----
      - instance #0 [OK]
      - Collected 8 metrics & 0 events

#### Install the integration ####

Finally, switch on the NGINX integration inside your Datadog account. It’s as simple as clicking the “Install Integration” button under the Configuration tab in the [NGINX integration settings][11].



### Metrics! ###

Once the Agent begins reporting NGINX metrics, you will see [an NGINX dashboard][12] among your list of available dashboards in Datadog.

The basic NGINX dashboard displays a handful of graphs encapsulating most of the key metrics highlighted [in our introduction to NGINX monitoring][13]. (Some metrics, notably request processing time, require log analysis and are not available in Datadog.)

You can easily create a comprehensive dashboard for monitoring your entire web stack by adding additional graphs with important metrics from outside NGINX. For example, you might want to monitor host-level metrics on your NGINX hosts, such as system load. To start building a custom dashboard, simply clone the default NGINX dashboard by clicking on the gear near the upper right of the dashboard and selecting “Clone Dash”.



You can also monitor your NGINX instances at a higher level using Datadog’s [Host Maps][14]—for instance, color-coding all your NGINX hosts by CPU usage to identify potential hotspots.



### Alerting on NGINX metrics ###

Once Datadog is capturing and visualizing your metrics, you will likely want to set up some monitors to automatically keep tabs on your metrics—and to alert you when there are problems. Below we’ll walk through a representative example: a metric monitor that alerts on sudden drops in NGINX throughput.

#### Monitor your NGINX throughput ####

Datadog metric alerts can be threshold-based (alert when the metric exceeds a set value) or change-based (alert when the metric changes by a certain amount). In this case we’ll take the latter approach, alerting when our incoming requests per second drop precipitously. Such drops are often indicative of problems.
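
The evaluation behind such a change alert is easy to express directly. As a rough sketch (plain Python, not Datadog's implementation; the 30 percent threshold and the one-minute/10-minute windows mirror the monitor defined below):

    # Rough sketch of change-alert logic: compare the average of the last
    # minute of request-rate samples to the value ~10 minutes earlier.
    def change_alert(samples, window=60, lookback=600, drop_threshold=-0.30):
        """samples: per-second request-rate samples, oldest first."""
        if len(samples) < lookback + window:
            return False  # not enough history yet
        now = sum(samples[-window:]) / window    # metric's value "now"
        then = samples[-(lookback + window)]     # value 10 minutes prior
        if then == 0:
            return False
        return (now - then) / then <= drop_threshold  # alert on a >=30% drop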

1. **Create a new metric monitor**. Select “New Monitor” from the “Monitors” dropdown in Datadog. Select “Metric” as the monitor type.



2. **Define your metric monitor**. We want to know when our total NGINX requests per second drop by a certain amount. So we define the metric of interest to be the sum of nginx.net.request_per_s across our infrastructure.



3. **Set metric alert conditions**. Since we want to alert on a change, rather than on a fixed threshold, we select “Change Alert.” We’ll set the monitor to alert us whenever the request volume drops by 30 percent or more. Here we use a one-minute window of data to represent the metric’s value “now” and alert on the average change across that interval, as compared to the metric’s value 10 minutes prior.



4. **Customize the notification**. If our NGINX request volume drops, we want to notify our team. In this case we will post a notification in the ops team’s chat room and page the engineer on call. In “Say what’s happening”, we name the monitor and add a short message that will accompany the notification to suggest a first step for investigation. We @mention the Slack channel that we use for ops and use @pagerduty to [route the alert to PagerDuty][15].



5. **Save the integration monitor**. Click the “Save” button at the bottom of the page. You’re now monitoring a key NGINX [work metric][16], and your on-call engineer will be paged anytime it drops rapidly.

### Conclusion ###

In this post we’ve walked you through integrating NGINX with Datadog to visualize your key metrics and notify your team when your web infrastructure shows signs of trouble.

If you’ve followed along using your own Datadog account, you should now have greatly improved visibility into what’s happening in your web environment, as well as the ability to create automated monitors tailored to your environment, your usage patterns, and the metrics that are most valuable to your organization.

If you don’t yet have a Datadog account, you can sign up for [a free trial][17] and start monitoring your infrastructure, your applications, and your services today.

----------

Source Markdown for this post is available [on GitHub][18]. Questions, corrections, additions, etc.? Please [let us know][19].

--------------------------------------------------------------------------------

via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/

作者:K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus
[4]:https://github.com/DataDog/dd-agent
[5]:https://app.datadoghq.com/account/settings#agent
[6]:https://app.datadoghq.com/infrastructure
[7]:http://docs.datadoghq.com/guides/basic_agent_usage/
[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example
[9]:http://docs.datadoghq.com/guides/basic_agent_usage/
[10]:http://docs.datadoghq.com/guides/basic_agent_usage/
[11]:https://app.datadoghq.com/account/settings#integrations/nginx
[12]:https://app.datadoghq.com/dash/integration/nginx
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/
[15]:https://www.datadoghq.com/blog/pagerduty/
[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
[19]:https://github.com/DataDog/the-monitor/issues

sources/tech/20150717 How to monitor NGINX- Part 1.md (new file)
@@ -0,0 +1,409 @@

KevinSJ Translating

How to monitor NGINX - Part 1
================================================================================



### What is NGINX? ###

[NGINX][1] (pronounced “engine X”) is a popular HTTP server and reverse proxy server. As an HTTP server, NGINX serves static content very efficiently and reliably, using relatively little memory. As a [reverse proxy][2], it can be used as a single, controlled point of access for multiple back-end servers or for additional applications such as caching and load balancing. NGINX is available as a free, open-source product or in a more full-featured, commercially distributed version called NGINX Plus.

NGINX can also be used as a mail proxy and a generic TCP proxy, but this article does not directly address NGINX monitoring for these use cases.

### Key NGINX metrics ###

By monitoring NGINX you can catch two categories of issues: resource issues within NGINX itself, and also problems developing elsewhere in your web infrastructure. Some of the metrics most NGINX users will benefit from monitoring include **requests per second**, which provides a high-level view of combined end-user activity; **server error rate**, which indicates how often your servers are failing to process seemingly valid requests; and **request processing time**, which describes how long your servers are taking to process client requests (and which can point to slowdowns or other problems in your environment).

More generally, there are at least three key categories of metrics to watch:

- Basic activity metrics
- Error metrics
- Performance metrics

Below we’ll break down a few of the most important NGINX metrics in each category, as well as metrics for a fairly common use case that deserves special mention: using NGINX Plus for reverse proxying. We will also describe how you can monitor all of these metrics with your graphing or monitoring tools of choice.

This article references metric terminology [introduced in our Monitoring 101 series][3], which provides a framework for metric collection and alerting.

#### Basic activity metrics ####

Whatever your NGINX use case, you will no doubt want to monitor how many client requests your servers are receiving and how those requests are being processed.

NGINX Plus can report basic activity metrics exactly like open-source NGINX, but it also provides a secondary module that reports metrics slightly differently. We discuss open-source NGINX first, then the additional reporting capabilities provided by NGINX Plus.

**NGINX**

The diagram below shows the lifecycle of a client connection and how the open-source version of NGINX collects metrics during a connection.



Accepts, handled, and requests are ever-increasing counters. Active, waiting, reading, and writing grow and shrink with request volume.

注:表格

| **Name** | **Description** | **[Metric type](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)** |
|:--|:--|:--|
| accepts | Count of client connections attempted by NGINX | Resource: Utilization |
| handled | Count of successful client connections | Resource: Utilization |
| active | Currently active client connections | Resource: Utilization |
| dropped (calculated) | Count of dropped connections (accepts – handled) | Work: Errors* |
| requests | Count of client requests | Work: Throughput |

*Strictly speaking, dropped connections is [a metric of resource saturation](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics), but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as [a work metric](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics).*

The **accepts** counter is incremented when an NGINX worker picks up a request for a connection from the OS, whereas **handled** is incremented when the worker actually gets a connection for the request (by establishing a new connection or reusing an open one). These two counts are usually the same—any divergence indicates that connections are being **dropped**, often because a resource limit, such as NGINX’s [worker_connections][4] limit, has been reached.

Once NGINX successfully handles a connection, the connection moves to an **active** state, where it remains as client requests are processed:

Active state

- **Waiting**: An active connection may also be in a Waiting substate if there is no active request at the moment. New connections can bypass this state and move directly to Reading, most commonly when using “accept filter” or “deferred accept”, in which case NGINX does not receive notice of work until it has enough data to begin working on the response. Connections will also be in the Waiting state after sending a response if the connection is set to keep-alive.
- **Reading**: When a request is received, the connection moves out of the waiting state, and the request itself is counted as Reading. In this state NGINX is reading a client request header. Request headers are lightweight, so this is usually a fast operation.
- **Writing**: After the request is read, it is counted as Writing, and remains in that state until a response is returned to the client. That means that the request is Writing while NGINX is waiting for results from upstream systems (systems “behind” NGINX), and while NGINX is operating on the response. Requests will often spend the majority of their time in the Writing state.

Often a connection will only support one request at a time. In this case, the number of Active connections == Waiting connections + Reading requests + Writing requests. However, the newer SPDY and HTTP/2 protocols allow multiple concurrent requests/responses to be multiplexed over a connection, so Active may be less than the sum of Waiting, Reading, and Writing. (As of this writing, NGINX does not support HTTP/2, but expects to add support during 2015.)

**NGINX Plus**

As mentioned above, all of open-source NGINX’s metrics are available within NGINX Plus, but Plus can also report additional metrics. This section covers the metrics that are only available from NGINX Plus.



Accepted, dropped, and total are ever-increasing counters. Active, idle, and current track the current number of connections or requests in each of those states, so they grow and shrink with request volume.

注:表格

| **Name** | **Description** | **[Metric type](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)** |
|:--|:--|:--|
| accepted | Count of client connections attempted by NGINX | Resource: Utilization |
| dropped | Count of dropped connections | Work: Errors* |
| active | Currently active client connections | Resource: Utilization |
| idle | Client connections with zero current requests | Resource: Utilization |
| total | Count of client requests | Work: Throughput |

*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.*

The **accepted** counter is incremented when an NGINX Plus worker picks up a request for a connection from the OS. If the worker fails to get a connection for the request (by establishing a new connection or reusing an open one), then the connection is dropped and **dropped** is incremented. Ordinarily connections are dropped because a resource limit, such as NGINX Plus’s [worker_connections][4] limit, has been reached.

**Active** and **idle** are the same as “active” and “waiting” states in open-source NGINX as described [above][5], with one key exception: in open-source NGINX, “waiting” falls under the “active” umbrella, whereas in NGINX Plus “idle” connections are excluded from the “active” count. **Current** is the same as the combined “reading + writing” states in open-source NGINX.

**Total** is a cumulative count of client requests. Note that a single client connection can involve multiple requests, so this number may be significantly larger than the cumulative number of connections. In fact, (total / accepted) yields the average number of requests per connection.

**Metric differences between Open-Source and Plus**

注:表格

| NGINX (open-source) | NGINX Plus |
|:--|:--|
| accepts | accepted |
| dropped must be calculated | dropped is reported directly |
| reading + writing | current |
| waiting | idle |
| active (includes “waiting” states) | active (excludes “idle” states) |
| requests | total |

**Metric to alert on: Dropped connections**

The number of connections that have been dropped is equal to the difference between accepts and handled (NGINX) or is exposed directly as a standard metric (NGINX Plus). Under normal circumstances, dropped connections should be zero. If your rate of dropped connections per unit time starts to rise, look for possible resource saturation.



**Metric to alert on: Requests per second**

Sampling your request data (**requests** in open-source, or **total** in Plus) with a fixed time interval provides you with the number of requests you’re receiving per unit of time—often minutes or seconds. Monitoring this metric can alert you to spikes in incoming web traffic, whether legitimate or nefarious, or sudden drops, which are usually indicative of problems. A drastic change in requests per second can alert you to problems brewing somewhere in your environment, even if it cannot tell you exactly where those problems lie. Note that all requests are counted the same, regardless of their URLs.



**Collecting activity metrics**

Open-source NGINX exposes these basic server metrics on a simple status page. Because the status information is displayed in a standardized form, virtually any graphing or monitoring tool can be configured to parse the relevant data for analysis, visualization, or alerting. NGINX Plus provides a JSON feed with much richer data. Read the companion post on [NGINX metrics collection][6] for instructions on enabling metrics collection.

#### Error metrics ####

注:表格

| **Name** | **Description** | **[Metric type](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)** | **Availability** |
|:--|:--|:--|:--|
| 4xx codes | Count of client errors | Work: Errors | NGINX logs, NGINX Plus |
| 5xx codes | Count of server errors | Work: Errors | NGINX logs, NGINX Plus |

NGINX error metrics tell you how often your servers are returning errors instead of producing useful work. Client errors are represented by 4xx status codes, and server errors by 5xx status codes.

**Metric to alert on: Server error rate**

Your server error rate is equal to the number of 5xx errors divided by the total number of [status codes][7] (1xx, 2xx, 3xx, 4xx, 5xx), per unit of time (often one to five minutes). If your error rate starts to climb over time, investigation may be in order. If it spikes suddenly, urgent action may be required, as clients are likely to report errors to the end user.



A note on client errors: while it is tempting to monitor 4xx, there is limited information you can derive from that metric, since it measures client behavior without offering any insight into particular URLs. In other words, a change in 4xx could be noise, e.g. web scanners blindly looking for vulnerabilities.

**Collecting error metrics**

Although open-source NGINX does not make error rates immediately available for monitoring, there are at least two ways to capture that information:

- Use the expanded status module available with commercially supported NGINX Plus
- Configure NGINX’s log module to write response codes in access logs

Read the companion post on NGINX metrics collection for detailed instructions on both approaches.

#### Performance metrics ####

注:表格

| **Name** | **Description** | **[Metric type](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)** | **Availability** |
|:--|:--|:--|:--|
| request time | Time to process each request, in seconds | Work: Performance | NGINX logs |

**Metric to alert on: Request processing time**

The request time metric logged by NGINX records the processing time for each request, from the reading of the first client bytes to fulfilling the request. Long response times can point to problems upstream.

**Collecting processing time metrics**

NGINX and NGINX Plus users can capture data on processing time by adding the $request_time variable to the access log format. More details on configuring logs for monitoring are available in our companion post on [NGINX metrics collection][8].

#### Reverse proxy metrics ####

注:表格

| **Name** | **Description** | **[Metric type](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)** | **Availability** |
|:--|:--|:--|:--|
| Active connections by upstream server | Currently active client connections | Resource: Utilization | NGINX Plus |
| 5xx codes by upstream server | Server errors | Work: Errors | NGINX Plus |
| Available servers per upstream group | Servers passing health checks | Resource: Availability | NGINX Plus |

One of the most common ways to use NGINX is as a [reverse proxy][9]. The commercially supported NGINX Plus exposes a large number of metrics about backend (or “upstream”) servers, which are relevant to a reverse proxy setup. This section highlights a few of the key upstream metrics that are available to users of NGINX Plus.

NGINX Plus segments its upstream metrics first by group, and then by individual server. So if, for example, your reverse proxy is distributing requests to five upstream web servers, you can see at a glance whether any of those individual servers is overburdened, and also whether you have enough healthy servers in the upstream group to ensure good response times.

**Activity metrics**

The number of **active connections per upstream server** can help you verify that your reverse proxy is properly distributing work across your server group. If you are using NGINX as a load balancer, significant deviations in the number of connections handled by any one server can indicate that the server is struggling to process requests in a timely manner or that the load-balancing method (e.g., [round-robin or IP hashing][10]) you have configured is not optimal for your traffic patterns.

**Error metrics**

Recall from the error metric section above that 5xx (server error) codes are a valuable metric to monitor, particularly as a share of total response codes. NGINX Plus allows you to easily extract the number of **5xx codes per upstream server**, as well as the total number of responses, to determine that particular server’s error rate.

**Availability metrics**

For another view of the health of your web servers, NGINX also makes it simple to monitor the health of your upstream groups via the total number of **servers currently available within each group**. In a large reverse proxy setup, you may not care very much about the current state of any one server, just as long as your pool of available servers is capable of handling the load. But monitoring the total number of servers that are up within each upstream group can provide a very high-level view of the aggregate health of your web servers.

**Collecting upstream metrics**

NGINX Plus upstream metrics are exposed on the internal NGINX Plus monitoring dashboard, and are also available via a JSON interface that can serve up metrics into virtually any external monitoring platform. See examples in our companion post on [collecting NGINX metrics][11].

### Conclusion ###

In this post we’ve touched on some of the most useful metrics you can monitor to keep tabs on your NGINX servers. If you are just getting started with NGINX, monitoring most or all of the metrics in the list below will provide good visibility into the health and activity levels of your web infrastructure:

- [Dropped connections][12]
- [Requests per second][13]
- [Server error rate][14]
- [Request processing time][15]

Eventually you will recognize additional, more specialized metrics that are particularly relevant to your own infrastructure and use cases. Of course, what you monitor will depend on the tools you have and the metrics available to you. See the companion post for [step-by-step instructions on metric collection][16], whether you use NGINX or NGINX Plus.

At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][17], and get started right away with [a free trial of Datadog][18].

### Acknowledgments ###

Many thanks to the NGINX team for reviewing this article prior to publication and providing important feedback and clarifications.

----------

Source Markdown for this post is available [on GitHub][19]. Questions, corrections, additions, etc.? Please [let us know][20].

--------------------------------------------------------------------------------

via: https://www.datadoghq.com/blog/how-to-monitor-nginx/

作者:K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://nginx.org/en/
[2]:http://nginx.com/resources/glossary/reverse-proxy-server/
[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/
[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections
[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state
[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[9]:https://en.wikipedia.org/wiki/Reverse_proxy
[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/
[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second
[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate
[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
[20]:https://github.com/DataDog/the-monitor/issues

@@ -0,0 +1,144 @@

struggling 翻译中

Introduction to RAID, Concepts of RAID and RAID Levels – Part 1
================================================================================

RAID originally stood for Redundant Array of Inexpensive Disks, though nowadays it is read as Redundant Array of Independent Disks. It used to be very costly to buy even a small disk, but nowadays we can buy a much larger disk for the same amount of money. A RAID is simply a collection of disks pooled together to form a logical volume.



Understanding RAID Setups in Linux

RAID deals in groups, also called sets or arrays. A set of drives combined together forms a RAID array or RAID set. A minimum of two disks connected to a RAID controller is needed to make a logical volume, and more drives can be added to a group. Only one RAID level can be applied to a given group of disks. RAID is used when we need excellent performance and, depending on the level we select, fault tolerance and high availability to protect our data. Performance will differ according to the selected RAID level.

This series, titled "Preparation for setting up RAID", runs through Parts 1-9 and covers the following topics:

- Part 1: Introduction to RAID, Concepts of RAID and RAID Levels
- Part 2: How to setup RAID0 (Stripe) in Linux
- Part 3: How to setup RAID1 (Mirror) in Linux
- Part 4: How to setup RAID5 (Striping with Distributed Parity) in Linux
- Part 5: How to setup RAID6 (Striping with Double Distributed Parity) in Linux
- Part 6: Setting Up RAID 10 or 1+0 (Nested) in Linux
- Part 7: Growing an Existing RAID Array and Removing Failed Disks in Raid
- Part 8: Recovering (Rebuilding) failed drives in RAID
- Part 9: Managing RAID in Linux

This is Part 1 of a 9-tutorial series; here we will cover the introduction to RAID, the concepts of RAID, and the RAID levels that are required for setting up RAID in Linux.

### Software RAID and Hardware RAID ###

Software RAID has lower performance, because it consumes resources from the host. The RAID software must be loaded before data can be read from software RAID volumes, and the OS must boot before the RAID software can load. Software RAID needs no physical hardware, so it requires zero investment.

Hardware RAID has high performance. It uses a dedicated RAID controller, physically built as a PCI Express card, and it does not consume host resources. The controller has NVRAM for read and write caching, and it preserves the cache during a rebuild even on power failure, keeping the cache alive with battery backup. It requires a very costly investment at large scale.

A hardware RAID card looks like the one below:



Hardware RAID

#### Featured Concepts of RAID ####

- Parity regenerates lost content from the saved parity information. RAID 5 and RAID 6 are based on parity.
- Striping shares data across multiple disks by dividing the content, so no single disk holds the full data. If we use two disks, half of our data will be on each disk.
- Mirroring is used in RAID 1 and RAID 10. Mirroring makes a copy of the same data; in RAID 1, the same content is saved to the other disk as well.
- A hot spare is a standby drive in the server that can automatically replace a failed drive. If any drive in the array fails, the hot spare is used and the array rebuilds automatically.
- A chunk is the unit of data striped to each disk, with a minimum size of 4KB. Defining the chunk size appropriately can increase I/O performance.

RAID comes in various levels. Here we will look only at the RAID levels that are used most in real environments:

- RAID0 = Striping
- RAID1 = Mirroring
- RAID5 = Single Disk Distributed Parity
- RAID6 = Double Disk Distributed Parity
- RAID10 = Combination of Mirror & Stripe (Nested RAID)

RAID is managed using the mdadm package in most Linux distributions. Let us take a brief look at each RAID level.

#### RAID 0 (or) Striping ####

Striping has excellent performance. In RAID 0 (striping), data is written across the disks in a shared fashion: half of the content goes to one disk and the other half to the other.

Let us assume we have two disk drives. If we write the data “TECMINT” to the logical volume, ‘T’ is saved on the first disk, ‘E’ on the second, ‘C’ on the first again, ‘M’ on the second, and so on in a round-robin process.
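
To make the round-robin idea concrete, here is a toy sketch (Python; real RAID stripes fixed-size chunks of bytes rather than single characters, with the chunk size configurable as noted above) that distributes a payload across two "disks":

    # Toy sketch of RAID 0 striping: round-robin chunks across two "disks".
    # Real implementations stripe fixed-size chunks (4KB+), not characters.
    def stripe(data, ndisks=2, chunk=1):
        disks = [[] for _ in range(ndisks)]
        pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        for i, piece in enumerate(pieces):
            disks[i % ndisks].append(piece)  # round-robin placement
        return ["".join(d) for d in disks]

    print(stripe("TECMINT"))  # ['TCIT', 'EMN'] -- no single disk holds all the data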

In this situation, if any one of the drives fails, we lose our data, because the half of the data left on the surviving disk cannot be used to rebuild the array. In terms of write speed and performance, though, RAID 0 is excellent. We need a minimum of two disks to create a RAID 0 (striping) array. If your data is valuable, don't use this RAID level.

- High performance.
- Zero capacity loss in RAID 0; no space is wasted.
- Zero fault tolerance.
- Good write and read performance.

#### RAID 1 (or) Mirroring ####

Mirroring has good performance. Mirroring makes a copy of the same data that we write. Assume we have two 2TB hard drives, 4TB in total; with mirroring, once the drives sit behind the RAID controller and form a logical drive, we see only a 2TB logical drive.

Whenever we save data, it is written to both 2TB drives. A minimum of two drives is needed to create a RAID 1 (mirror) array. If a disk failure occurs, we can restore the RAID set by swapping in a new disk. If any one disk fails in RAID 1, we can get the data from the other one, since it holds a copy of the same content, so there is zero data loss.

- Good performance.
- Half of the total capacity is lost.
- Full fault tolerance.
- Rebuilds are fast.
- Write performance is slow.
- Read performance is good.
- Can be used for operating systems and small-scale databases.

#### RAID 5 (or) Distributed Parity ####

RAID 5 is mostly used at the enterprise level. RAID 5 works by the distributed parity method. The parity information is used to rebuild the data: the array rebuilds from the information left on the remaining good drives. This protects our data from drive failure.

Assume we have four drives: if one drive fails, we can replace it and rebuild the replacement from the parity information, which is stored across all four drives. With four 1TB hard drives, 256GB of each drive holds parity information and the other 768GB of each drive is available for user data. RAID 5 can survive a single drive failure; if more than one drive fails, data will be lost. (A short sketch of how parity recovery works follows the list below.)

- Excellent performance.
- Read speed is extremely good.
- Write speed is average, and slow without a hardware RAID controller.
- Rebuilds use the parity information from all drives.
- Full fault tolerance for a single drive failure.
- One disk's worth of space goes to parity.
- Can be used for file servers, web servers, and important backups.
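
The parity idea is easy to see with XOR, which is what RAID 5 uses under the hood. A toy sketch (Python; real arrays rotate which disk holds the parity block per stripe, which is why it is called "distributed" parity):

    # Toy sketch of parity-based recovery: the parity block is the XOR of the
    # data blocks, so any single lost block can be regenerated from the rest.
    d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"  # data blocks on three disks
    parity = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))  # on a fourth disk

    # Disk 2 dies: rebuild its block from the remaining disks plus parity.
    rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d1, d3, parity))
    assert rebuilt == d2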

#### RAID 6 (or) Two Parity Distributed Disk ####

RAID 6 is the same as RAID 5 but with two sets of distributed parity. It is mostly used in large arrays. We need a minimum of four drives; even if two drives fail, we can rebuild the data after replacing them with new drives.

It is slower than RAID 5, because it writes two parity blocks across the drives at the same time. Speed will be average when using a hardware RAID controller. If we have six 1TB hard drives, four drives' worth of capacity will be used for data and two drives' worth for parity.

- Poor write performance overall.
- Read performance is good.
- Write performance is poor without a hardware RAID controller.
- Rebuilds from the two sets of parity.
- Full fault tolerance for up to two drive failures.
- Two disks' worth of space goes to parity.
- Can be used in large arrays.
- Can be used for backups, video streaming, and other large-scale workloads.
#### RAID 10 (or) Mirror & Stripe ####
|
||||
|
||||
RAID 10 can be called as 1+0 or 0+1. This will do both works of Mirror & Striping. Mirror will be first and stripe will be the second in RAID 10. Stripe will be the first and mirror will be the second in RAID 01. RAID 10 is better comparing to 01.
|
||||
|
||||
Assume, we have 4 Number of drives. While I’m writing some data to my logical volume it will be saved under All 4 drives using mirror and stripe methods.
|
||||
|
||||
If I’m writing a data “TECMINT” in RAID 10 it will save the data as follow. First “T” will write to both disks and second “E” will write to both disk, this step will be used for all data write. It will make a copy of every data to other disk too.
|
||||
|
||||
Same time it will use the RAID 0 method and write data as follow “T” will write to first disk and “E” will write to second disk. Again “C” will write to first Disk and “M” to second disk.
|
||||
|
||||
- Good read and write performance.
|
||||
- Here Half of the Space will be lost in total capacity.
|
||||
- Fault Tolerance.
|
||||
- Fast rebuild from copying data.
|
||||
- Can be used in Database storage for high performance and availability.
|
||||
|
||||
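As a concrete sketch of how this level is assembled with Linux software RAID (the device names here are hypothetical; the full mdadm workflow is covered later in this series):

    # Sketch: a four-disk RAID 10 array from partitions /dev/sdb1../dev/sde1.
    # mdadm mirrors and stripes in a single step at level 10.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
    cat /proc/mdstat    # watch the initial sync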
### Conclusion ###

In this article we have seen what RAID is and which RAID levels are most used in real environments. Hopefully this write-up has given you the basic understanding of RAID that is needed before attempting a RAID setup.

In the upcoming articles I'm going to cover how to set up and create RAID arrays at the various levels, how to grow a RAID group (array), how to troubleshoot failed drives, and much more.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/understanding-raid-setup-in-linux/

作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/babinlonston/

@ -0,0 +1,219 @@
struggling 翻译中
Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2
================================================================================
RAID is Redundant Array of Inexpensive disks, used for high availability and reliability in large-scale environments, where data needs better protection than in normal use. RAID is a pool of disks combined into a logical volume; the combined drives make up an array, also called a set or group.

A RAID set can be created with a minimum of 2 disks connected to a raid controller to form a logical volume, and more drives can be added to the array according to the chosen RAID level. RAID implemented without physical hardware is called software RAID, sometimes nicknamed "poor man's RAID".

![Setup RAID0 in Linux]()

Setup RAID0 in Linux

The main reason for using RAID is to protect data from a single point of failure: if we store data on a single disk and that disk fails, there is no chance of getting our data back. To stop such data loss we need a fault-tolerance method, so we use a collection of disks to form a RAID set.

#### What is Stripe in RAID 0? ####

Striping means writing data across multiple disks at the same time by dividing the content. Assume we have two disks; if we save content to the logical volume, it is split and saved across both physical disks. Striping gives better performance, but we can't recover the data if one of the drives fails, so RAID 0 is not a good practice for data you cannot afford to lose. If you do install an operating system on RAID 0 logical volumes, keep separate backups of your important files.

- RAID 0 has high performance.
- Zero capacity loss in RAID 0; no space is wasted.
- Zero fault tolerance (the data cannot be recovered if any one disk fails).
- Write and read performance are excellent.

#### Requirements ####

A minimum of 2 disks is needed to create RAID 0; more disks can be added. If you have a physical RAID card with enough ports, you can add many more.

Here we are not using a hardware raid; this setup depends only on software RAID. If we have a physical hardware raid card, we can access it from its utility UI. Some motherboards have a built-in RAID feature, whose UI can be accessed using the Ctrl+I keys.

If you’re new to RAID setups, please read our earlier article, where we’ve covered a basic introduction to RAID.

- [Introduction to RAID and RAID Concepts][1]

**My Server Setup**

    Operating System :	CentOS 6.5 Final
    IP Address	 :	192.168.0.225
    Two Disks	 :	20 GB each

This article is Part 2 of a 9-tutorial RAID series; in this part we are going to see how to create and set up software RAID 0 (striping) on Linux systems or servers, using two 20GB disks named sdb and sdc.

### Step 1: Updating System and Installing mdadm for Managing RAID ###

1. Before setting up RAID 0 in Linux, let’s do a system update and then install the ‘mdadm‘ package. mdadm is a small program which allows us to configure and manage RAID devices in Linux.

    # yum clean all && yum update
    # yum install mdadm -y

![Install mdadm Tool]()

Install mdadm Tool

### Step 2: Verify Attached Two 20GB Drives ###

2. Before creating RAID 0, verify that the two attached hard drives are detected, using the following command.

    # ls -l /dev | grep sd

![Check Hard Drives]()

Check Hard Drives

3. Once the new hard drives are detected, check whether the attached drives are already part of an existing raid, with the help of the following ‘mdadm’ command.

    # mdadm --examine /dev/sd[b-c]

![Check RAID Devices]()

Check RAID Devices

From the above output, we come to know that no RAID has been applied to the sdb and sdc drives.

### Step 3: Creating Partitions for RAID ###

4. Now create sdb and sdc partitions for raid, with the help of the following fdisk command. Here, I will show how to create a partition on the sdb drive.

    # fdisk /dev/sdb

Follow the instructions below to create the partition.

- Press ‘n‘ to create a new partition.
- Then choose ‘P‘ for a primary partition.
- Next select the partition number as 1.
- Accept the default values by just pressing the Enter key two times.
- Next press ‘p‘ to print the defined partition.

![Create Partitions]()

Create Partitions

Follow the instructions below to set the partition type to Linux raid auto (see the non-interactive sketch after this list).

- Type ‘t‘ to change the partition type.
- Press ‘L‘ to list all available types.
- Choose ‘fd‘ for Linux raid auto and press Enter to apply.
- Then use ‘p‘ again to print the changes we have made.
- Use ‘w‘ to write the changes.

![Create RAID Partitions in Linux]()

Create RAID Partitions in Linux

**Note**: Please follow the same instructions to create a partition on the sdc drive now.
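For scripted or repeated setups, the same layout can be produced without the interactive fdisk dialogue. A minimal sketch using parted, assuming the same two disks /dev/sdb and /dev/sdc as above (this destroys any existing data on them):

    # Create one full-size primary partition per disk and flag it for RAID.
    for disk in /dev/sdb /dev/sdc; do
        parted -s "$disk" mklabel msdos             # new MBR partition table
        parted -s "$disk" mkpart primary 1MiB 100%  # one partition, whole disk
        parted -s "$disk" set 1 raid on             # same effect as fdisk type 'fd'
    done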
5. After creating the partitions, verify that both drives are correctly defined for RAID using the following commands.

    # mdadm --examine /dev/sd[b-c]
    # mdadm --examine /dev/sd[b-c]1

![Verify RAID Partitions]()

Verify RAID Partitions

### Step 4: Creating RAID md Devices ###

6. Now create an md device (i.e. /dev/md0) and apply the raid level using one of the commands below.

    # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1
    # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1

- -C – create
- -l – level
- -n – number of raid devices
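mdadm also lets you set the chunk size (how much data is written to one member before moving to the next) at creation time. The default is sensible for general use; a hedged sketch with an explicit 128KiB chunk, which can suit large sequential workloads:

    # Sketch: the same RAID 0 array with an explicit 128KiB chunk size.
    mdadm --create /dev/md0 --level=stripe --raid-devices=2 --chunk=128 /dev/sd[b-c]1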
7. Once the md device has been created, verify the RAID level, devices and array used, with the help of the following series of commands.

    # cat /proc/mdstat

![Verify RAID Level]()

Verify RAID Level

    # mdadm -E /dev/sd[b-c]1

![Verify RAID Device]()

Verify RAID Device

    # mdadm --detail /dev/md0

![Verify RAID Array]()

Verify RAID Array

### Step 5: Assigning RAID Devices to Filesystem ###

8. Create an ext4 filesystem on the RAID device /dev/md0 and mount it under /mnt/raid0.

    # mkfs.ext4 /dev/md0

![Create ext4 Filesystem]()

Create ext4 Filesystem

9. Once the ext4 filesystem has been created on the raid device, create a mount point directory (i.e. /mnt/raid0) and mount the device /dev/md0 under it.

    # mkdir /mnt/raid0
    # mount /dev/md0 /mnt/raid0/

10. Next, verify that the device /dev/md0 is mounted under the /mnt/raid0 directory using the df command.

    # df -h

11. Next, create a file called ‘tecmint.txt‘ under the mount point /mnt/raid0, add some content to the created file and view the content of the file and directory.

    # touch /mnt/raid0/tecmint.txt
    # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt
    # cat /mnt/raid0/tecmint.txt
    # ls -l /mnt/raid0/

![Verify Mount Device]()

Verify Mount Device

12. Once you’ve verified the mount point, it’s time to create an fstab entry in the /etc/fstab file.

    # vim /etc/fstab

Add the following entry as described; it may vary according to your mount location and the filesystem you are using.

    /dev/md0                /mnt/raid0              ext4    defaults        0 0

![Add Device to Fstab]()

Add Device to Fstab

13. Run ‘mount -a‘ to check whether there is any error in the fstab entry.

    # mount -av

![Check Errors in Fstab]()

Check Errors in Fstab

### Step 6: Saving RAID Configurations ###

14. Finally, save the raid configuration to a file to keep the configuration for future use. Again we use the ‘mdadm’ command with the ‘-s‘ (scan) and ‘-v‘ (verbose) options as shown.

    # mdadm -E -s -v >> /etc/mdadm.conf
    # mdadm --detail --scan --verbose >> /etc/mdadm.conf
    # cat /etc/mdadm.conf

![Save RAID Configurations]()

Save RAID Configurations
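With the configuration saved, mdadm can re-assemble the array by name later, for example from a rescue environment. A hedged sketch (only do this with the filesystem unmounted):

    # Stop the array and re-assemble it from the saved /etc/mdadm.conf.
    umount /mnt/raid0
    mdadm --stop /dev/md0
    mdadm --assemble --scan     # reads the ARRAY lines from /etc/mdadm.conf
    mount /dev/md0 /mnt/raid0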
That’s it, we have seen here how to configure RAID 0 striping using two hard disks. In the next article, we will see how to set up RAID 5.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/create-raid0-in-linux/

作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/

@ -0,0 +1,213 @@

struggling 翻译中
Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3
================================================================================
RAID mirroring means an exact clone (or mirror) of the same data written to two drives. A minimum of two disks is required in an array to create RAID 1, and it is useful mainly when read performance and reliability matter more than storage capacity.

![Setup Raid1 in Linux]()

Setup Raid1 in Linux

Mirrors are created to protect against data loss due to disk failure. Each disk in a mirror holds an exact copy of the data. When one disk fails, the same data can be retrieved from the other functioning disk, and the failed drive can be replaced in the running computer without interrupting users.

### Features of RAID 1 ###

- A mirror has good read performance.
- 50% of the space is lost. That is, with two 500GB disks, 1TB in total, mirroring will only show us 500GB.
- No data loss in mirroring if one disk fails, because we have the same content on both disks.
- Reading is faster than writing to the drive.

#### Requirements ####

A minimum of two disks is needed to create RAID 1; more disks can be added. To attach many disks, your system should have a physical RAID adapter (hardware card).

Here we’re using software raid, not a hardware raid; if your system has an inbuilt physical hardware raid card, you can access it from its utility UI or using the Ctrl+I keys.

Read Also: [Basic Concepts of RAID in Linux][1]

#### My Server Setup ####

    Operating System :	CentOS 6.5 Final
    IP Address	 :	192.168.0.226
    Hostname	 :	rd1.tecmintlocal.com
    Disk 1 [20GB]	 :	/dev/sdb
    Disk 2 [20GB]	 :	/dev/sdc

This article will guide you through step-by-step instructions on how to set up software RAID 1 (mirror) using mdadm (which creates and manages raid) on a Linux platform. The same instructions also work on other Linux distributions such as RedHat, CentOS, Fedora, etc.

### Step 1: Installing Prerequisites and Examining Drives ###

1. As I said above, we’re using the mdadm utility for creating and managing RAID in Linux. So, let’s install the mdadm software package on Linux using the yum or apt-get package manager.

    # yum install mdadm		[on RedHat systems]
    # apt-get install mdadm	[on Debian systems]

2. Once the ‘mdadm‘ package has been installed, we need to examine our disk drives to see whether any raid is already configured on them, using the following command.

    # mdadm -E /dev/sd[b-c]

![Check RAID on Disks]()

Check RAID on Disks

As you see from the above screen, no super-block is detected yet, which means no RAID is defined.
### Step 2: Drive Partitioning for RAID ###

3. As mentioned above, we’re using a minimum of two partitions, /dev/sdb and /dev/sdc, for creating RAID 1. Let’s create partitions on these two drives using the ‘fdisk‘ command and change the type to raid during partition creation.

    # fdisk /dev/sdb

Follow the instructions below.

- Press ‘n‘ to create a new partition.
- Then choose ‘P‘ for a primary partition.
- Next select the partition number as 1.
- Accept the default full size by just pressing the Enter key two times.
- Next press ‘p‘ to print the defined partition.
- Type ‘t‘ to change the partition type.
- Press ‘L‘ to list all available types.
- Choose ‘fd‘ for Linux raid auto and press Enter to apply.
- Then use ‘p‘ again to print the changes we have made.
- Use ‘w‘ to write the changes.

![Create Disk Partitions]()

Create Disk Partitions

After the ‘/dev/sdb‘ partition has been created, follow the same instructions to create a new partition on the /dev/sdc drive.

    # fdisk /dev/sdc

![Create Second Partitions]()

Create Second Partitions

4. Once both partitions are created successfully, verify the changes on both the sdb and sdc drives using the same ‘mdadm‘ command, and also confirm the RAID type as shown in the following screen grabs.

    # mdadm -E /dev/sd[b-c]

![Verify Partitions Changes]()

Verify Partitions Changes

![Check RAID Type]()

Check RAID Type

**Note**: As you see in the above picture, there is no RAID defined on the sdb1 and sdc1 partitions so far; that’s the reason we get "no super-blocks detected".

### Step 3: Creating RAID 1 Devices ###

5. Next create the RAID 1 device called ‘/dev/md0‘ using the following command and verify it.

    # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1
    # cat /proc/mdstat

![Create RAID Device]()

Create RAID Device

6. Next check the raid device type and raid array using the following commands.

    # mdadm -E /dev/sd[b-c]1
    # mdadm --detail /dev/md0

![Check RAID Device type]()

Check RAID Device type

![Check RAID Device Array]()

Check RAID Device Array

From the above pictures, one can easily understand that raid 1 has been created using the /dev/sdb1 and /dev/sdc1 partitions, and you can also see the status as resyncing.
### Step 4: Creating File System on RAID Device ###

7. Create an ext4 file system on md0 and mount it under /mnt/raid1.

    # mkfs.ext4 /dev/md0

![Create RAID Device Filesystem]()

Create RAID Device Filesystem

8. Next, mount the newly created filesystem under ‘/mnt/raid1‘, create some files and verify the contents under the mount point.

    # mkdir /mnt/raid1
    # mount /dev/md0 /mnt/raid1/
    # touch /mnt/raid1/tecmint.txt
    # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt

![Mount Raid Device]()

Mount Raid Device

9. To auto-mount RAID 1 on system reboot, you need to make an entry in the fstab file. Open the ‘/etc/fstab‘ file and add the following line at the bottom of the file.

    /dev/md0                /mnt/raid1              ext4    defaults        0 0

![Raid Automount Device]()

Raid Automount Device

10. Run ‘mount -a‘ to check whether there are any errors in the fstab entry.

    # mount -av

![Check Errors in fstab]()

Check Errors in fstab

11. Next, save the raid configuration manually to the ‘mdadm.conf‘ file using the command below.

    # mdadm --detail --scan --verbose >> /etc/mdadm.conf

![Save Raid Configuration]()

Save Raid Configuration

The above configuration file is read by the system at reboot and used to load the RAID devices.
### Step 5: Verify Data After Disk Failure ###

12. Our main purpose is that, even after a hard disk fails or crashes, our data must remain available. Let’s see what happens when one of the disks in the array becomes unavailable.

    # mdadm --detail /dev/md0

![Raid Device Verify]()

Raid Device Verify

In the above image, we can see there are 2 devices available in our RAID and that 2 devices are active. Now let us see what happens when a disk is plugged out (I removed the sdc disk) or fails.

    # ls -l /dev | grep sd
    # mdadm --detail /dev/md0

![Test RAID Devices]()

Test RAID Devices

Now in the above image, you can see that one of our drives is lost. I unplugged one of the drives from my virtual machine. Now let us check our precious data.

    # cd /mnt/raid1/
    # cat tecmint.txt

![Verify RAID Data]()

Verify RAID Data
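The test stops at verifying the data; to actually restore redundancy you would put a replacement disk in. A hedged sketch, assuming the replacement shows up as /dev/sdc again and has been partitioned with type ‘fd‘ as before:

    # Sketch: drop the dead member (if it is still listed) and add the replacement.
    mdadm --manage /dev/md0 --remove /dev/sdc1   # remove the failed member
    mdadm --manage /dev/md0 --add /dev/sdc1      # add the re-partitioned new disk
    watch -n1 cat /proc/mdstat                   # watch the mirror rebuild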
Did you see that our data is still available? From this we come to know the advantage of RAID 1 (mirror). In the next article, we will see how to set up RAID 5 striping with distributed parity. Hope this helps you to understand how RAID 1 (mirror) works.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/create-raid1-in-linux/

作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/

@ -0,0 +1,286 @@

struggling 翻译中
Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4
================================================================================
In RAID 5, data is striped across multiple drives with distributed parity. Striping with distributed parity means that both the parity information and the data are spread over the multiple disks, which gives good data redundancy.

![Setup Raid 5 in Linux]()

Setup Raid 5 in Linux

RAID 5 requires at least three hard drives. RAID 5 is widely used in large-scale production environments because it is cost effective and provides both performance and redundancy.

#### What is Parity? ####

Parity is the simplest common method of detecting errors in data storage. Parity information is stored on every disk. Say we have 4 disks: one disk's worth of space, spread over all 4 disks, holds the parity information. If any one of the disks fails, we can still get the data back by rebuilding from the parity information after replacing the failed disk.
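Under the hood the parity block is typically just the bitwise XOR of the corresponding data blocks, which is why any single lost block can be recomputed from the survivors. A toy shell sketch with made-up byte values (an illustration only, not real disk I/O):

    # Three "data blocks" represented as single bytes.
    d1=0xA5; d2=0x3C; d3=0x0F
    parity=$(( d1 ^ d2 ^ d3 ))                  # parity = d1 XOR d2 XOR d3
    printf 'parity     = 0x%02X\n' "$parity"
    # Pretend d2 is lost: XOR-ing the parity with the survivors rebuilds it.
    printf 'rebuilt d2 = 0x%02X\n' $(( d1 ^ parity ^ d3 ))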
#### Pros and Cons of RAID 5 ####

- Gives better performance.
- Supports redundancy and fault tolerance.
- Supports hot spare options.
- Loses a single disk's worth of capacity to the parity information.
- No data loss if a single disk fails; we can rebuild from parity after replacing the failed disk.
- Suits transaction-oriented environments, as reading is faster.
- Due to the parity overhead, writing is slower.
- Rebuilds take a long time.

#### Requirements ####

A minimum of 3 hard drives is required to create Raid 5; you can attach more disks if you have a dedicated hardware raid controller with multiple ports. Here, we are using software RAID and the ‘mdadm‘ package to create the raid.

mdadm is a package which allows us to configure and manage RAID devices in Linux. By default there is no configuration file for the RAID; we must save the configuration after creating and configuring the RAID setup, in a separate file called mdadm.conf.

Before moving further, I suggest you go through the following articles for understanding the basics of RAID in Linux.

- [Basic Concepts of RAID in Linux – Part 1][1]
- [Creating RAID 0 (Stripe) in Linux – Part 2][2]
- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3]

#### My Server Setup ####

    Operating System :	CentOS 6.5 Final
    IP Address	 :	192.168.0.227
    Hostname	 :	rd5.tecmintlocal.com
    Disk 1 [20GB]	 :	/dev/sdb
    Disk 2 [20GB]	 :	/dev/sdc
    Disk 3 [20GB]	 :	/dev/sdd

This article is Part 4 of a 9-tutorial RAID series; here we are going to set up software RAID 5 with distributed parity on Linux systems or servers, using three 20GB disks named /dev/sdb, /dev/sdc and /dev/sdd.
### Step 1: Installing mdadm and Verify Drives ###

1. As we said earlier, we’re using the CentOS 6.5 Final release for this raid setup, but the same steps can be followed on any Linux-based distribution.

    # lsb_release -a
    # ifconfig | grep inet

![CentOS 6.5 Summary]()

CentOS 6.5 Summary

2. If you’re following our raid series, we assume that you’ve already installed the ‘mdadm‘ package; if not, use the following command according to your Linux distribution to install it.

    # yum install mdadm		[on RedHat systems]
    # apt-get install mdadm	[on Debian systems]

3. After the ‘mdadm‘ package installation, let’s list the three 20GB disks which we have added to our system, using the ‘fdisk‘ command.

    # fdisk -l | grep sd

![Install mdadm Tool]()

Install mdadm Tool

4. Now it’s time to examine the attached three drives for any existing RAID blocks, using the following command.

    # mdadm -E /dev/sd[b-d]
    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd

![Examine Drives For Raid]()

Examine Drives For Raid

**Note**: The above image shows that no super-block is detected yet, so there is no RAID defined on any of the three drives. Let us start to create one now.

### Step 2: Partitioning the Disks for RAID ###

5. First and foremost, we have to partition the disks (/dev/sdb, /dev/sdc and /dev/sdd) before adding them to the RAID, so let us define the partitions using the ‘fdisk’ command before moving to the next steps.

    # fdisk /dev/sdb
    # fdisk /dev/sdc
    # fdisk /dev/sdd

#### Create /dev/sdb Partition ####

Please follow the instructions below to create a partition on the /dev/sdb drive.

- Press ‘n‘ to create a new partition.
- Then choose ‘P‘ for a primary partition. We choose primary here because no partitions are defined yet.
- Then choose ‘1‘ to be the first partition. By default it will be 1.
- For the cylinder size we don’t have to choose a specific value, because we need the whole disk for the RAID, so just press the Enter key two times to accept the default full size.
- Next press ‘p‘ to print the created partition.
- Now change the type: press ‘t‘, and if we need to see every available type press ‘L‘.
- Here, we select ‘fd‘, since the type is RAID.
- Then use ‘p‘ again to print the changes we have made.
- Use ‘w‘ to write the changes.

![Create sdb Partition]()

Create sdb Partition

**Note**: We have to follow the steps mentioned above to create partitions on the sdc & sdd drives too.
#### Create /dev/sdc Partition ####

Now partition the sdc and sdd drives by following the steps given in the screenshot, or follow the steps above.

    # fdisk /dev/sdc

![Create sdc Partition]()

Create sdc Partition

#### Create /dev/sdd Partition ####

    # fdisk /dev/sdd

![Create sdd Partition]()

Create sdd Partition

6. After creating the partitions, check for the changes on all three drives sdb, sdc, & sdd.

    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd

or

    # mdadm -E /dev/sd[b-d]

![Check Partition Changes]()

Check Partition Changes

**Note**: The above pic. depicts that the partition type is fd, i.e. Linux raid auto.

7. Now check for RAID blocks on the newly created partitions. If no super-blocks are detected, then we can move forward to create a new RAID 5 setup on these drives.

![Check Raid on Partition]()

Check Raid on Partition
### Step 3: Creating md device md0 ###

8. Now create a raid device ‘md0‘ (i.e. /dev/md0) with the raid level applied to all the newly created partitions (sdb1, sdc1 and sdd1), using the command below.

    # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

or

    # mdadm -C /dev/md0 -l 5 -n 3 /dev/sd[b-d]1

9. After creating the raid device, check and verify the RAID level, devices and array used, from the mdstat output.

    # cat /proc/mdstat

![Verify Raid Device]()

Verify Raid Device

If you want to monitor the current building process, you can use the ‘watch‘ command: pass ‘cat /proc/mdstat‘ to watch and the screen will refresh every second.

    # watch -n1 cat /proc/mdstat

![Monitor Raid 5 Process]()

Monitor Raid 5 Process

![Raid 5 Process Summary]()

Raid 5 Process Summary

10. After creation of the raid, verify the raid devices using the following command.

    # mdadm -E /dev/sd[b-d]1

![Verify Raid Level]()

Verify Raid Level

**Note**: The output of the above command will be a little long, as it prints the information for all three drives.

11. Next, verify the RAID array, to confirm that the devices we’ve included in the RAID level are running and have started re-syncing.

    # mdadm --detail /dev/md0

![Verify Raid Array]()

Verify Raid Array
### Step 4: Creating file system for md0 ###

12. Create a file system on the ‘md0‘ device using ext4 before mounting.

    # mkfs.ext4 /dev/md0

![Create md0 Filesystem]()

Create md0 Filesystem

13. Now create a directory under ‘/mnt‘, mount the created filesystem under /mnt/raid5 and check the files under the mount point; you will see the lost+found directory.

    # mkdir /mnt/raid5
    # mount /dev/md0 /mnt/raid5/
    # ls -l /mnt/raid5/

14. Create a few files under the mount point /mnt/raid5, append some text to one of the files and verify the content.

    # touch /mnt/raid5/raid5_tecmint_{1..5}
    # ls -l /mnt/raid5/
    # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1
    # cat /mnt/raid5/raid5_tecmint_1
    # cat /proc/mdstat

![Mount Raid Device]()

Mount Raid Device

15. We need to add an entry in fstab, or the mount point will not come back after a system reboot. To add an entry, edit the fstab file and append the following line as shown below. The mount point may differ according to your environment.

    # vim /etc/fstab

    /dev/md0                /mnt/raid5              ext4    defaults        0 0

![Raid 5 Automount]()

Raid 5 Automount

16. Next, run the ‘mount -av‘ command to check whether there are any errors in the fstab entry.

    # mount -av

![Check Fstab Errors]()

Check Fstab Errors
### Step 5: Save Raid 5 Configuration ###

17. As mentioned earlier in the requirements section, by default RAID has no config file; we have to save it manually. If this step is not followed, the RAID device will not come back as md0 after a reboot: it will get some other random number.

So, we must save the configuration before the system reboots. If the configuration is saved, it will be loaded by the kernel during the system reboot and the RAID will also be loaded.

    # mdadm --detail --scan --verbose >> /etc/mdadm.conf

![Save Raid 5 Configuration]()

Save Raid 5 Configuration

Note: Saving the configuration will keep the RAID level stable on the md0 device.

### Step 6: Adding Spare Drives ###

18. What is the use of adding a spare drive? It is very useful: if any one of the disks in our array fails, the spare drive becomes active, the rebuild process starts, and the data is synced from the other disks, so we get redundancy here (a creation-time sketch follows the link below).

For more instructions on how to add a spare drive and check Raid 5 fault tolerance, read Step 6 and Step 7 in the following article.

- [Add Spare Drive to Raid 5 Setup][4]
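A spare can also be declared when the array is first created, with the ‘-x‘/‘--spare-devices‘ option. A hedged sketch, assuming a fourth partitioned disk /dev/sde1 (hypothetical here) were available:

    # Sketch: the same three-disk RAID 5, plus one hot spare (hypothetical /dev/sde1).
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1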
### Conclusion ###

Here, in this article, we have seen how to set up RAID 5 using three disks. Later in my upcoming articles, we will see how to troubleshoot a disk failure in RAID 5 and how to replace the disk for recovery.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/create-raid-5-in-linux/

作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
[2]:http://www.tecmint.com/create-raid0-in-linux/
[3]:http://www.tecmint.com/create-raid1-in-linux/
[4]:http://www.tecmint.com/create-raid-6-in-linux/

@ -0,0 +1,321 @@

struggling 翻译中
Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5
================================================================================
RAID 6 is an upgraded version of RAID 5 with two sets of distributed parity, which provides fault tolerance even after two drives fail. A mission-critical system can stay operational through two concurrent disk failures. It is similar to RAID 5, but more robust, because it uses one more disk for parity.

In our earlier article, we saw distributed parity in RAID 5; in this article we are going to see RAID 6 with double distributed parity. Don’t expect extra performance over the other RAID levels; for that we would also have to install a dedicated RAID controller. In RAID 6, even if we lose 2 disks we can get the data back, by replacing them and rebuilding from parity.

![Setup RAID 6 in Linux]()

Setup RAID 6 in Linux

To set up RAID 6, a minimum of 4 disks in a set is required; larger sets are possible. While reading, RAID 6 reads from all the drives, so reading is fast, whereas writing is poor because it has to stripe data and compute parity over multiple disks.

Now, many of us may wonder why we would use RAID 6 when it doesn’t outperform the other RAID levels. Those who raise this question need to know that whoever needs high fault tolerance should choose RAID 6. In high-availability environments for databases, RAID 6 is used because the database is the most important asset and needs to be safe at any cost. It can also be useful for video streaming environments.

#### Pros and Cons of RAID 6 ####

- Performance is good.
- RAID 6 is expensive, as two independent drives are used for parity functions.
- Loses two disks' worth of capacity to the parity information (double parity).
- No data loss, even after two disks fail; we can rebuild from parity after replacing the failed disks.
- Reading is better than in RAID 5, because it reads from multiple disks; but write performance is very poor without a dedicated RAID controller.

#### Requirements ####

A minimum of 4 disks is required to create a RAID 6. If you want to add more disks, you can, but you must have a dedicated raid controller; with software RAID we won’t get better performance in RAID 6, so a physical RAID controller is needed for that.

Those who are new to RAID setups, we recommend going through the RAID articles below.

- [Basic Concepts of RAID in Linux – Part 1][1]
- [Creating Software RAID 0 (Stripe) in Linux – Part 2][2]
- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3]
#### My Server Setup ####

    Operating System :	CentOS 6.5 Final
    IP Address	 :	192.168.0.228
    Hostname	 :	rd6.tecmintlocal.com
    Disk 1 [20GB]	 :	/dev/sdb
    Disk 2 [20GB]	 :	/dev/sdc
    Disk 3 [20GB]	 :	/dev/sdd
    Disk 4 [20GB]	 :	/dev/sde

This article is Part 5 of a 9-tutorial RAID series; here we are going to see how to create and set up software RAID 6 (striping with double distributed parity) on Linux systems or servers, using four 20GB disks named /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.

### Step 1: Installing mdadm Tool and Examine Drives ###

1. If you’re following our last two raid articles (Part 2 and Part 3), we’ve already shown how to install the ‘mdadm‘ tool. If you’re new to this series, ‘mdadm‘ is a tool to create and manage raid on Linux systems; let’s install it using the following command according to your Linux distribution.

    # yum install mdadm		[on RedHat systems]
    # apt-get install mdadm	[on Debian systems]

2. After installing the tool, it’s time to verify the four attached drives that we are going to use for the raid creation, using the following ‘fdisk‘ command.

    # fdisk -l | grep sd

![Check Disks in Linux]()

Check Disks in Linux

3. Before creating the RAID drives, always examine the disk drives to see whether any RAID has already been created on them.

    # mdadm -E /dev/sd[b-e]
    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde

![Check Raid on Disk]()

Check Raid on Disk

**Note**: The above image shows that no super-block is detected, i.e. no RAID is defined on the four disk drives. We may move further and start creating RAID 6.
### Step 2: Drive Partitioning for RAID 6 ###

4. Now create partitions for raid on ‘/dev/sdb‘, ‘/dev/sdc‘, ‘/dev/sdd‘ and ‘/dev/sde‘ with the help of the following fdisk command. Here, we will show how to create a partition on the sdb drive; the same steps are to be followed for the rest of the drives.

**Create /dev/sdb Partition**

    # fdisk /dev/sdb

Please follow the instructions as shown below for creating the partition.

- Press ‘n‘ to create a new partition.
- Then choose ‘P‘ for a primary partition.
- Next choose the partition number as 1.
- Accept the default values by just pressing the Enter key two times.
- Next press ‘p‘ to print the defined partition.
- Type ‘t‘ to change the partition type.
- Press ‘L‘ to list all available types.
- Choose ‘fd‘ for Linux raid auto and press Enter to apply.
- Then use ‘p‘ again to print the changes we have made.
- Use ‘w‘ to write the changes.

![Create /dev/sdb Partition]()

Create /dev/sdb Partition

**Create /dev/sdc Partition**

    # fdisk /dev/sdc

![Create /dev/sdc Partition]()

Create /dev/sdc Partition

**Create /dev/sdd Partition**

    # fdisk /dev/sdd

![Create /dev/sdd Partition]()

Create /dev/sdd Partition

**Create /dev/sde Partition**

    # fdisk /dev/sde

![Create /dev/sde Partition]()

Create /dev/sde Partition
5. After creating the partitions, it’s always a good habit to examine the drives for super-blocks. If no super-blocks exist, we can go ahead and create a new RAID setup.

    # mdadm -E /dev/sd[b-e]1

or

    # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

![Check Raid on New Partitions]()

Check Raid on New Partitions

### Step 3: Creating md device (RAID) ###

6. Now it’s time to create the raid device ‘md0‘ (i.e. /dev/md0), apply the raid level to all the newly created partitions and confirm the raid, using the following commands.

    # mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # cat /proc/mdstat

![Create Raid 6 Device]()

Create Raid 6 Device

7. You can also check the current progress of the raid build using the watch command, as shown in the screen grab below.

    # watch -n1 cat /proc/mdstat

![Check Raid 6 Process]()

Check Raid 6 Process
8. Verify the raid devices using the following command.

    # mdadm -E /dev/sd[b-e]1

**Note**: The above command displays the information for all four disks, which is quite long, so it is not possible to post the output or a screen grab here.

9. Next, verify the RAID array, to confirm that the re-syncing has started.

    # mdadm --detail /dev/md0

![Check Raid 6 Array]()

Check Raid 6 Array

### Step 4: Creating FileSystem on Raid Device ###

10. Create a filesystem on ‘/dev/md0‘ using ext4 before mounting it. Here we’ve used ext4, but you can use any type of filesystem as per your choice.

    # mkfs.ext4 /dev/md0

![Create File System on Raid 6]()

Create File System on Raid 6

11. Mount the created filesystem under /mnt/raid6 and verify the files under the mount point; we can see the lost+found directory.

    # mkdir /mnt/raid6
    # mount /dev/md0 /mnt/raid6/
    # ls -l /mnt/raid6/

12. Create some files under the mount point and append some text to one of the files to verify the content.

    # touch /mnt/raid6/raid6_test.txt
    # ls -l /mnt/raid6/
    # echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt
    # cat /mnt/raid6/raid6_test.txt

![Verify Raid Content]()

Verify Raid Content
13. Add an entry in /etc/fstab to auto-mount the device at system startup; append the entry below (the mount point may differ according to your environment).

    # vim /etc/fstab

    /dev/md0                /mnt/raid6              ext4    defaults        0 0

![Automount Raid 6 Device]()

Automount Raid 6 Device

14. Next, execute the ‘mount -a‘ command to verify whether there is any error in the fstab entry.

    # mount -av

![Verify Raid Automount]()

Verify Raid Automount

### Step 5: Save RAID 6 Configuration ###

15. Please note that by default RAID has no config file. We have to save it manually using the command below, and then verify the status of the device ‘/dev/md0‘.

    # mdadm --detail --scan --verbose >> /etc/mdadm.conf
    # mdadm --detail /dev/md0

![Save Raid 6 Configuration]()

Save Raid 6 Configuration

![Check Raid 6 Status]()

Check Raid 6 Status
### Step 6: Adding a Spare Drive ###

16. Now the array has 4 disks, with two sets of parity information available. In many cases, even if one or two of the disks fail we can still get the data, because there is double parity in RAID 6.

If a second disk fails, we can add a new one before losing a third disk. It is possible to add a spare drive while creating the RAID set, but I did not define a spare drive while creating ours. We can add a spare drive after a drive failure, or while creating the RAID set. Now that we have already created the RAID set, let me add a spare drive for demonstration.

For the purpose of the demonstration, I’ve hot-plugged a new HDD (i.e. /dev/sdf); let’s verify the attached disk.

    # ls -l /dev/ | grep sd

![Check New Disk]()

Check New Disk

17. Now again confirm whether any raid is already configured on the newly attached disk, using the same mdadm command.

    # mdadm --examine /dev/sdf

![Check Raid on New Disk]()

Check Raid on New Disk

**Note**: As usual, just as we created partitions for the four disks earlier, we have to create a new partition on the newly plugged disk using the fdisk command.

    # fdisk /dev/sdf

![Create /dev/sdf Partition]()

Create /dev/sdf Partition

18. Again, after creating the new partition on /dev/sdf, confirm that there is no raid on the partition, include the spare drive in the /dev/md0 raid device, and verify the added device.

    # mdadm --examine /dev/sdf
    # mdadm --examine /dev/sdf1
    # mdadm --add /dev/md0 /dev/sdf1
    # mdadm --detail /dev/md0

![Verify Raid on sdf Partition]()

Verify Raid on sdf Partition

![Add sdf Partition to Raid]()

Add sdf Partition to Raid

![Verify sdf Partition Details]()

Verify sdf Partition Details
### Step 7: Check Raid 6 Fault Tolerance ###

19. Now, let us check whether the spare drive works automatically if one of the disks in our array fails. For testing, I will manually mark one of the drives as failed.

Here, we’re going to mark /dev/sdd1 as the failed drive.

    # mdadm --manage --fail /dev/md0 /dev/sdd1

![Check Raid 6 Fault Tolerance]()

Check Raid 6 Fault Tolerance

20. Let me get the details of the RAID set now and check whether our spare has started to sync.

    # mdadm --detail /dev/md0

![Check Auto Raid Syncing]()

Check Auto Raid Syncing

**Hurray!** Here, we can see that the spare got activated and started the rebuilding process. At the bottom we can see the faulty drive /dev/sdd1 listed as faulty. We can monitor the build process using the following command.

    # cat /proc/mdstat

![Raid 6 Auto Syncing]()

Raid 6 Auto Syncing
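Once the rebuild has finished, the faulty member can be dropped from the array and the dead disk physically replaced. A hedged sketch:

    # Sketch: after the spare has fully synced, remove the failed partition.
    mdadm --manage /dev/md0 --remove /dev/sdd1
    mdadm --detail /dev/md0     # confirm 4 active devices and no faulty entry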
### Conclusion: ###

Here, we have seen how to set up RAID 6 using four disks. This RAID level is one of the more expensive setups, with high redundancy. We will see how to set up a nested RAID 10 and much more in the next articles. Till then, stay connected with TECMINT.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/create-raid-6-in-linux/

作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
[2]:http://www.tecmint.com/create-raid0-in-linux/
[3]:http://www.tecmint.com/create-raid1-in-linux/

@ -0,0 +1,276 @@

struggling 翻译中
Setting Up RAID 10 or 1+0 (Nested) in Linux – Part 6
================================================================================
RAID 10 is a combination of RAID 0 and RAID 1. To set up RAID 10, we need at least 4 disks. In our earlier articles, we’ve seen how to set up RAID 0 and RAID 1, each with a minimum of 2 disks.

Here we will use both RAID 0 and RAID 1 to perform a RAID 10 setup with a minimum of 4 drives. Assume we have some data saved to a logical volume which was created with RAID 10. For example, if we are saving the data “apple”, it will be saved across all 4 disks by the following method.

![Create Raid 10 in Linux]()

Create Raid 10 in Linux

On the RAID 0 side, “A” is saved to the first disk and “p” to the second disk, then again “p” to the first disk and “l” to the second disk, then “e” to the first disk, and so on in a round-robin fashion. From this we come to know that RAID 0 writes half of the data to the first disk and the other half to the second disk.

On the RAID 1 side, the same data is also written to the other 2 disks: “A” is mirrored to both disks of its pair, “p” is mirrored to both disks of its pair, and so on. Thus, using RAID 1, every write goes to two disks. This also continues in the round-robin process.

Now you know how RAID 10 works, by combining both RAID 0 and RAID 1. If we have 4 disks of 20 GB, 80 GB in total, we get only 40 GB of storage capacity; half of the total capacity is lost when building RAID 10.

#### Pros and Cons of RAID 10 ####

- Gives better performance.
- We lose half of the disk capacity (two disks out of four) in RAID 10.
- Reading and writing are both very good, because it writes to and reads from all 4 disks at the same time.
- It can be used for database solutions which need high disk write I/O.

#### Requirements ####

In RAID 10, we need a minimum of 4 disks: 2 disks for the RAID 0 side and the other 2 disks for the RAID 1 side. As I said before, RAID 10 is just a combination of RAID 0 & 1. If we need to extend the RAID group, we must add a minimum of 4 more disks.
**My Server Setup**

    Operating System :	CentOS 6.5 Final
    IP Address	 :	192.168.0.229
    Hostname	 :	rd10.tecmintlocal.com
    Disk 1 [20GB]	 :	/dev/sdb
    Disk 2 [20GB]	 :	/dev/sdc
    Disk 3 [20GB]	 :	/dev/sdd
    Disk 4 [20GB]	 :	/dev/sde

There are two ways to set up RAID 10. I’m going to show you both methods, but I suggest you follow the first method, which makes the work of setting up a RAID 10 a lot easier.

### Method 1: Setting Up Raid 10 ###

1. First, verify that all 4 added disks are detected, using the following command.

    # ls -l /dev | grep sd

2. Once the four disks are detected, it’s time to check whether any raid already exists on the drives, before creating a new one.

    # mdadm -E /dev/sd[b-e]
    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde

![Verify 4 Added Disks]()

Verify 4 Added Disks

**Note**: In the above output, you see there isn’t any super-block detected yet, which means no RAID is defined on the 4 drives.
#### Step 1: Drive Partitioning for RAID ####

3. Now create a new partition on all 4 disks (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) using the ‘fdisk’ tool.

    # fdisk /dev/sdb
    # fdisk /dev/sdc
    # fdisk /dev/sdd
    # fdisk /dev/sde

**Create /dev/sdb Partition**

Let me show you how to partition one of the disks (/dev/sdb) using fdisk; these steps are the same for all the other disks too.

    # fdisk /dev/sdb

Please use the steps below for creating a new partition on the /dev/sdb drive.

- Press ‘n‘ to create a new partition.
- Then choose ‘P‘ for a primary partition.
- Then choose ‘1‘ to be the first partition.
- Next press ‘p‘ to print the created partition.
- Now change the type: press ‘t‘, and if we need to see every available type press ‘L‘.
- Here, we select ‘fd‘, since the type is RAID.
- Then use ‘p‘ again to print the changes we have made.
- Use ‘w‘ to write the changes.

![Disk sdb Partition]()

Disk sdb Partition

**Note**: Please use the same instructions above for creating partitions on the other disks (sdc, sdd and sde).

4. After creating all 4 partitions, again you need to examine the drives for any already existing raid, using the following commands.

    # mdadm -E /dev/sd[b-e]
    # mdadm -E /dev/sd[b-e]1

OR

    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

![Check All Disks for Raid]()

Check All Disks for Raid

**Note**: The above outputs show that there isn’t any super-block detected on the four newly created partitions, which means we can move forward to create RAID 10 on these drives.
#### Step 2: Creating ‘md’ RAID Device ####

5. Now it’s time to create an ‘md’ (i.e. /dev/md0) device using the ‘mdadm’ raid management tool. Before creating the device, your system must have the ‘mdadm’ tool installed; if not, install it first.

    # yum install mdadm		[on RedHat systems]
    # apt-get install mdadm	[on Debian systems]

Once the ‘mdadm’ tool is installed, you can create an ‘md’ raid device using the following command.

    # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
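mdadm's single-level raid10 also supports alternative data layouts via the ‘--layout‘ option; the default is ‘near 2‘ (n2). A hedged sketch of the ‘far 2‘ layout, which can improve sequential read speed at some cost to writes:

    # Sketch: the same four partitions, but with the 'far 2' raid10 layout.
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sd[b-e]1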
6. Next verify the newly created raid device using the ‘cat’ command.

    # cat /proc/mdstat

![Create md raid Device]()

Create md raid Device

7. Next, examine all 4 drives using the command below. The output of this command will be long, as it displays the information of all 4 disks.

    # mdadm --examine /dev/sd[b-e]1

8. Next, check the details of the Raid Array with the help of the following command.

    # mdadm --detail /dev/md0

![Check Raid Array Details]()

Check Raid Array Details

**Note**: You can see in the above results that the status of the raid is active and re-syncing.
#### Step 3: Creating Filesystem ####

9. Create a file system using ext4 on ‘md0′ and mount it under ‘/mnt/raid10‘. Here, I’ve used ext4, but you can use any filesystem type you want.

    # mkfs.ext4 /dev/md0

![Create md Filesystem]()

Create md Filesystem

10. After creating the filesystem, mount it under ‘/mnt/raid10‘ and list the contents of the mount point using the ‘ls -l’ command.

    # mkdir /mnt/raid10
    # mount /dev/md0 /mnt/raid10/
    # ls -l /mnt/raid10/

Next, add some files under the mount point, append some text to one of the files and check the content.

    # touch /mnt/raid10/raid10_files.txt
    # ls -l /mnt/raid10/
    # echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt
    # cat /mnt/raid10/raid10_files.txt

![Mount md Device]()

Mount md Device

11. For automounting, open the ‘/etc/fstab‘ file and append the entry below (the mount point may differ according to your environment). Save and quit using :wq!.

    # vim /etc/fstab

    /dev/md0                /mnt/raid10              ext4    defaults        0 0

![AutoMount md Device]()

AutoMount md Device

12. Next, verify the ‘/etc/fstab‘ file for any errors before restarting the system, using the ‘mount -a‘ command.

    # mount -av

![Check Errors in Fstab]()

Check Errors in Fstab
#### Step 4: Save RAID Configuration ####

13. By default RAID has no config file, so we need to save it manually after completing all the above steps, to preserve these settings across system boots.

    # mdadm --detail --scan --verbose >> /etc/mdadm.conf

![Save Raid10 Configuration]()

Save Raid10 Configuration

That’s it, we have created RAID 10 using method 1, which is the easier one. Now let’s move forward and set up RAID 10 using method 2.

### Method 2: Creating RAID 10 ###

1. In method 2, we have to define 2 sets of RAID 1 and then define a RAID 0 using those created RAID 1 sets. Here, what we will do is first create 2 mirrors (RAID 1) and then stripe over them (RAID 0).

First, list the disks which are available for creating RAID 10.

    # ls -l /dev | grep sd

![List 4 Devices]()

List 4 Devices

2. Partition all 4 disks using the ‘fdisk’ command. For partitioning, you can follow step 3 above.

    # fdisk /dev/sdb
    # fdisk /dev/sdc
    # fdisk /dev/sdd
    # fdisk /dev/sde

3. After partitioning all 4 disks, examine the disks for any existing raid blocks.

    # mdadm --examine /dev/sd[b-e]
    # mdadm --examine /dev/sd[b-e]1

![Examine 4 Disks]()

Examine 4 Disks
#### Step 1: Creating RAID 1 ####

4. First let me create 2 sets of RAID 1: one set using the two partitions ‘sdb1′ and ‘sdc1′, and the other set using ‘sdd1′ & ‘sde1′.

    # mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
    # mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
    # cat /proc/mdstat

![Creating Raid 1]()

Creating Raid 1

![Check Details of Raid 1]()

Check Details of Raid 1

#### Step 2: Creating RAID 0 ####

5. Next, create the RAID 0 using the md1 and md2 devices.

    # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
    # cat /proc/mdstat

![Creating Raid 0]()

Creating Raid 0
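It is worth confirming that the stripe really sits on top of the two mirrors rather than on raw partitions; a hedged check:

    # Sketch: the member devices of md0 should be md1 and md2.
    mdadm --detail /dev/md0 | grep -E '/dev/md[12]'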
#### Step 3: Save RAID Configuration ####

6. We need to save the configuration in ‘/etc/mdadm.conf‘ so that all the raid devices are loaded on every reboot.

    # mdadm --detail --scan --verbose >> /etc/mdadm.conf

After this, we need to follow step 3 (creating a filesystem) of method 1.

That’s it! We have created RAID 1+0 using method 2. We lose half of the disk space here, but the performance is excellent compared to any other raid setup.

### Conclusion ###

Here we have created RAID 10 using two methods. RAID 10 has good performance and redundancy too. Hope this helps you to understand the RAID 10 nested raid level. Let us see how to grow an existing raid array and much more in my upcoming articles.
--------------------------------------------------------------------------------

via: http://www.tecmint.com/create-raid-10-in-linux/

作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/babinlonston/

@ -0,0 +1,180 @@

struggling 翻译中
Growing an Existing RAID Array and Removing Failed Disks in Raid – Part 7
|
||||
================================================================================
|
||||
Every newbies will get confuse of the word array. Array is just a collection of disks. In other words, we can call array as a set or group. Just like a set of eggs containing 6 numbers. Likewise RAID Array contains number of disks, it may be 2, 4, 6, 8, 12, 16 etc. Hope now you know what Array is.
|
||||
|
||||
Here we will see how to grow (extend) an existing array or raid group. For example, if we are using 2 disks in an array to form a raid 1 set, and in some situation if we need more space in that group, we can extend the size of an array using mdadm –grow command, just by adding one of the disk to the existing array. After growing (adding disk to an existing array), we will see how to remove one of the failed disk from array.
|
||||
|
||||

|
||||
|
||||
Growing Raid Array and Removing Failed Disks
|
||||
|
||||
Assume that one of the disk is little weak and need to remove that disk, till it fails let it under use, but we need to add one of the spare drive and grow the mirror before it fails, because we need to save our data. While the weak disk fails we can remove it from array this is the concept we are going to see in this topic.
|
||||
|
||||
#### Features of RAID Growth ####
|
||||
|
||||
- We can grow (extend) the size of any raid set.
|
||||
- We can remove the faulty disk after growing raid array with new disk.
|
||||
- We can grow raid array without any downtime.
|
||||
|
||||
#### Requirements ####
|
||||
|
||||
- To grow a RAID array, we need an existing RAID set (array).
|
||||
- We need extra disks to grow the Array.
|
||||
- Here I’m using 1 disk to grow the existing array.
|
||||
|
||||
Before we learn about growing and recovering an array, we have to know the basics of RAID levels and setups. Follow the links below to learn about those setups.
|
||||
|
||||
- [Understanding Basic RAID Concepts – Part 1][1]
|
||||
- [Creating a Software Raid 0 in Linux – Part 2][2]
|
||||
|
||||
#### My Server Setup ####
|
||||
|
||||
Operating System : CentOS 6.5 Final
|
||||
IP Address : 192.168.0.230
|
||||
Hostname : grow.tecmintlocal.com
|
||||
2 Existing Disks : 1 GB
|
||||
1 Additional Disk : 1 GB
|
||||
|
||||
Here, my existing RAID array has 2 disks of 1GB each, and we are now adding one more 1GB disk to it.
|
||||
|
||||
### Growing an Existing RAID Array ###
|
||||
|
||||
1. Before growing an array, first list the existing Raid array using the following command.
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||

|
||||
|
||||
Check Existing Raid Array
|
||||
|
||||
**Note**: The above output shows that I already have two disks in a RAID 1 array. Now we are adding one more disk to this existing array.
|
||||
|
||||
2. Now let’s add the new disk “sdd” and create a partition using ‘fdisk‘ command.
|
||||
|
||||
# fdisk /dev/sdd
|
||||
|
||||
Please use the instructions below to create a partition on the /dev/sdd drive (a scripted equivalent is sketched right after the list).
|
||||
|
||||
- Press ‘n‘ to create a new partition.
- Then choose ‘P‘ for a primary partition.
- Then choose ‘1‘ to make it the first partition.
- Next press ‘p‘ to print the created partition.
- Press ‘t‘ to change the partition type, then choose ‘fd‘, since this partition is for RAID.
- Then use ‘p‘ again to print the changes we have made.
- Use ‘w‘ to write the changes.
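If you prefer not to answer the prompts interactively, the same key sequence can be piped into fdisk. This is only a rough sketch, assuming one primary partition spanning the whole disk; review it carefully before letting it write changes:

    # printf "n\np\n1\n\n\nt\nfd\np\nw\n" | fdisk /dev/sdd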
|
||||
|
||||

|
||||
|
||||
Create New sdd Partition
|
||||
|
||||
3. Once the new sdd partition is created, you can verify it using the command below.
|
||||
|
||||
# ls -l /dev/ | grep sd
|
||||
|
||||

|
||||
|
||||
Confirm sdd Partition
|
||||
|
||||
4. Next, examine the newly created disk for any existing raid, before adding to the array.
|
||||
|
||||
# mdadm --examine /dev/sdd1
|
||||
|
||||

|
||||
|
||||
Check Raid on sdd Partition
|
||||
|
||||
**Note**: The above output shows that no super-blocks were detected on the disk, which means we can move forward and add the new disk to the existing array.
|
||||
|
||||
5. To add the new partition /dev/sdd1 to the existing array md0, use the following command.
|
||||
|
||||
# mdadm --manage /dev/md0 --add /dev/sdd1
|
||||
|
||||

|
||||
|
||||
Add Disk To Raid-Array
|
||||
|
||||
6. Once the new disk has been added, check for it in our array using:
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||

|
||||
|
||||
Confirm Disk Added to Raid
|
||||
|
||||
**Note**: In the above output, you can see the drive has been added as a spare. We already have 2 active disks in the array, but we are expecting 3 active devices, so we need to grow the array.
|
||||
|
||||
7. To grow the array, we have to use the below command.
|
||||
|
||||
# mdadm --grow --raid-devices=3 /dev/md0
|
||||
|
||||

|
||||
|
||||
Grow Raid Array
|
||||
|
||||
Now we can see that the third disk (sdd1) has been added to the array; after it is added, the array syncs the data from the other two disks onto it.
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||

|
||||
|
||||
Confirm Raid Array
|
||||
|
||||
**Note**: For large disks it can take hours to sync the contents. Here I used a 1GB virtual disk, so it completed within seconds.
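If you want to follow the rebuild while it happens, a common trick (not part of the original steps) is to refresh /proc/mdstat every second:

    # watch -n1 cat /proc/mdstat      # press Ctrl+C to stop watching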
|
||||
|
||||
### Removing Disks from Array ###
|
||||
|
||||
8. After the data has been synced from the other two disks to the new disk ‘sdd1‘, all three disks now have the same contents.
|
||||
|
||||
As I said earlier, let’s assume that one of the disks is weak and needs to be removed before it fails. So, now assume disk ‘sdc1‘ is weak and needs to be removed from the existing array.
|
||||
|
||||
Before removing a disk, we have to mark it as failed; only then can we remove it.
|
||||
|
||||
# mdadm --fail /dev/md0 /dev/sdc1
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||

|
||||
|
||||
Disk Fail in Raid Array
|
||||
|
||||
From the above output, we can clearly see at the bottom that the disk has been marked as faulty. Even with one faulty disk, the output shows 3 raid devices, 1 failed, and the state as degraded.
|
||||
|
||||
Now we have to remove the faulty drive from the array and shrink the array back to 2 devices, so that the raid device count is set to 2 as before.
|
||||
|
||||
# mdadm --remove /dev/md0 /dev/sdc1
|
||||
|
||||

|
||||
|
||||
Remove Disk in Raid Array
|
||||
|
||||
9. Once the faulty drive is removed, we have to resize the raid array back to 2 devices using the grow command.
|
||||
|
||||
# mdadm --grow --raid-devices=2 /dev/md0
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||

|
||||
|
||||
Grow Disks in Raid Array
|
||||
|
||||
From the above output, you can see that our array now has only 2 devices. If you need to grow the array again, follow the same steps as described above. If you need to add a drive as a spare, mark it as a spare, so that if a disk fails the spare automatically becomes active and rebuilds; see the sketch below.
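As a sketch of the spare case (here /dev/sdf1 is a hypothetical, already-partitioned RAID member), adding a device to an array that already has its full count of active members leaves it as a hot spare:

    # mdadm --manage /dev/md0 --add /dev/sdf1    # joins as a spare, since the array is already full
    # mdadm --detail /dev/md0                    # the new device should be listed as a spare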
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this article, we’ve seen how to grow an existing raid set and how to remove a faulty disk from an array after re-syncing the existing contents. All these steps can be done without any downtime; during data syncing, system users, files, and applications are not affected.
|
||||
|
||||
In the next article I will show you how to manage the RAID; till then, stay tuned for updates and don’t forget to add your comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/grow-raid-array-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
||||
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
|
||||
[2]:http://www.tecmint.com/create-raid0-in-linux/
|
@ -1,315 +0,0 @@
|
||||
[translating by xiqingongzi]
|
||||
|
||||
RHCSA Series: Reviewing Essential Commands & System Documentation – Part 1
|
||||
================================================================================
|
||||
RHCSA (Red Hat Certified System Administrator) is a certification exam from Red Hat, the company that provides an open source operating system and software to the enterprise community. Red Hat also provides support, training, and consulting services for organizations.
|
||||
|
||||

|
||||
|
||||
RHCSA Exam Preparation Guide
|
||||
|
||||
The RHCSA certification is obtained from Red Hat Inc. after passing the exam (codename EX200). It is an upgrade of the RHCT (Red Hat Certified Technician) certification, and this upgrade became compulsory as Red Hat Enterprise Linux was upgraded. The main difference between RHCT and RHCSA is that the RHCT exam is based on RHEL 5, whereas the RHCSA certification is based on RHEL 6 and 7; the courseware of the two certifications also varies to a certain degree.
|
||||
|
||||
A Red Hat Certified System Administrator (RHCSA) is expected to be able to perform the following core system administration tasks in Red Hat Enterprise Linux environments:
|
||||
|
||||
- Understand and use the necessary tools for handling files, directories, command-line environments, and system-wide / package documentation.
|
||||
- Operate running systems, even in different run levels, identify and control processes, start and stop virtual machines.
|
||||
- Set up local storage using partitions and logical volumes.
|
||||
- Create and configure local and network file systems and their attributes (permissions, encryption, and ACLs).
|
||||
- Setup, configure, and control systems, including installing, updating and removing software.
|
||||
- Manage system users and groups, along with use of a centralized LDAP directory for authentication.
|
||||
- Ensure system security, including basic firewall and SELinux configuration.
|
||||
|
||||
To view fees and register for an exam in your country, check the [RHCSA Certification page][1].
|
||||
|
||||
|
||||
|
||||
In this 15-article RHCSA series, titled Preparation for the RHCSA (Red Hat Certified System Administrator) exam, we will cover the following topics on the latest release of Red Hat Enterprise Linux 7.
|
||||
|
||||
- Part 1: Reviewing Essential Commands & System Documentation
|
||||
- Part 2: How to Perform File and Directory Management in RHEL 7
|
||||
- Part 3: How to Manage Users and Groups in RHEL 7
|
||||
- Part 4: Editing Text Files with Nano and Vim / Analyzing text with grep and regexps
|
||||
- Part 5: Process Management in RHEL 7: boot, shutdown, and everything in between
|
||||
- Part 6: Using ‘Parted’ and ‘SSM’ to Configure and Encrypt System Storage
|
||||
- Part 7: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares
|
||||
- Part 8: Securing SSH, Setting Hostname and Enabling Network Services
|
||||
- Part 9: Installing, Configuring and Securing a Web and FTP Server
|
||||
- Part 10: Yum Package Management, Automating Tasks with Cron and Monitoring System Logs
|
||||
- Part 11: Firewall Essentials and Control Network Traffic Using FirewallD and Iptables
|
||||
- Part 12: Automate RHEL 7 Installations Using ‘Kickstart’
|
||||
- Part 13: RHEL 7: What is SELinux and how it works?
|
||||
- Part 14: Use LDAP-based authentication in RHEL 7
|
||||
- Part 15: Virtualization in RHEL 7: KVM and Virtual machine management
|
||||
|
||||
In this Part 1 of the RHCSA series, we will explain how to enter and execute commands with the correct syntax in a shell prompt or terminal, and how to find, inspect, and use system documentation.
|
||||
|
||||

|
||||
|
||||
RHCSA: Reviewing Essential Linux Commands – Part 1
|
||||
|
||||
#### Prerequisites: ####
|
||||
|
||||
At least a slight degree of familiarity with basic Linux commands such as:
|
||||
|
||||
- [cd command][2] (change directory)
|
||||
- [ls command][3] (list directory)
|
||||
- [cp command][4] (copy files)
|
||||
- [mv command][5] (move or rename files)
|
||||
- [touch command][6] (create empty files or update the timestamp of existing ones)
|
||||
- rm command (delete files)
|
||||
- mkdir command (make directory)
|
||||
|
||||
The correct usage of some of them is exemplified in this article anyway, and you can find further information about each of them using the methods suggested here.
|
||||
|
||||
Though not strictly required to start, as we will be discussing general commands and methods for information search in a Linux system, you should try to install RHEL 7 as explained in the following article. It will make things easier down the road.
|
||||
|
||||
- [Red Hat Enterprise Linux (RHEL) 7 Installation Guide][7]
|
||||
|
||||
### Interacting with the Linux Shell ###
|
||||
|
||||
If we log into a Linux box using a text-mode login screen, chances are we will be dropped directly into our default shell. On the other hand, if we login using a graphical user interface (GUI), we will have to open a shell manually by starting a terminal. Either way, we will be presented with the user prompt and we can start typing and executing commands (a command is executed by pressing the Enter key after we have typed it).
|
||||
|
||||
Commands are composed of two parts:
|
||||
|
||||
- the name of the command itself, and
|
||||
- arguments
|
||||
|
||||
Certain arguments, called options (usually preceded by a hyphen), alter the behavior of the command in a particular way while other arguments specify the objects upon which the command operates.
|
||||
|
||||
The type command can help us identify whether a given command is built into the shell or provided by a separate package. The need to make this distinction lies in where we will find more information about the command: for shell built-ins we need to look in the shell’s man page, whereas for other binaries we can refer to their own man pages.
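For instance, the following illustrates both cases (matching the screenshot below):

    # type cd     # prints: cd is a shell builtin
    # type top    # prints the path of the external binary, e.g. /usr/bin/top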
|
||||
|
||||

|
||||
|
||||
Check Shell built in Commands
|
||||
|
||||
In the examples above, cd and type are shell built-ins, while top and less are binaries external to the shell itself (in this case, the location of the command executable is returned by type).
|
||||
|
||||
Other well-known shell built-ins include:
|
||||
|
||||
- [echo command][8]: Displays strings of text.
|
||||
- [pwd command][9]: Prints the current working directory.
|
||||
|
||||

|
||||
|
||||
More Built in Shell Commands
|
||||
|
||||
**exec command**
|
||||
|
||||
Runs an external program that we specify. Note that in most cases, this is better accomplished by just typing the name of the program we want to run, but the exec command has one special feature: rather than creating a new process that runs alongside the shell, the new process replaces the shell, as can be verified with the subsequent command.
|
||||
|
||||
# ps -ef | grep [original PID of the shell process]
|
||||
|
||||
When the new process terminates, the shell terminates with it. Run exec top and then hit the q key to quit top. You will notice that the shell session ends when you do, as shown in the following screencast:
|
||||
|
||||
注:youtube视频
|
||||
<iframe width="640" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/f02w4WT73LE"></iframe>
|
||||
|
||||
**export command**
|
||||
|
||||
Exports variables to the environment of subsequently executed commands.
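For example (the variable name and value are purely illustrative):

    # export EDITOR=/usr/bin/vim    # any command started afterwards will see EDITOR in its environment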
|
||||
|
||||
**history Command**
|
||||
|
||||
Displays the command history list with line numbers. A command in the history list can be repeated by typing the command number preceded by an exclamation mark. If we need to edit a command from the history list before executing it, we can press Ctrl + r and start typing the first letters associated with the command. When we see the command completed automatically, we can edit it as per our current need:
|
||||
|
||||
注:youtube视频
|
||||
<iframe width="640" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/69vafdSMfU4"></iframe>
|
||||
|
||||
This list of commands is kept in our home directory in a file called .bash_history. The history facility is a useful resource for reducing the amount of typing, especially when combined with command line editing. By default, bash stores the last 500 commands you have entered, but this limit can be extended by using the HISTSIZE environment variable:
|
||||
|
||||

|
||||
|
||||
Linux history Command
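For the current session, that change amounts to something like the following:

    # export HISTSIZE=1000    # keep the last 1000 commands for this session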
|
||||
|
||||
But this change, performed as above, will not persist beyond the current session. In order to preserve the change to the HISTSIZE variable, we need to edit the .bashrc file by hand:
|
||||
|
||||
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
|
||||
HISTSIZE=1000
|
||||
|
||||
**Important**: Keep in mind that these changes will not take effect until we restart our shell session.
|
||||
|
||||
**alias command**
|
||||
|
||||
With no arguments, or with the -p option, alias prints the list of aliases in the form alias name=value on standard output. When arguments are provided, an alias is defined for each name whose value is given.
|
||||
|
||||
With alias, we can make up our own commands or modify existing ones by including desired options. For example, suppose we want to alias ls to ls --color=auto so that the output will display regular files, directories, symlinks, and so on, in different colors:
|
||||
|
||||
# alias ls='ls --color=auto'
|
||||
|
||||

|
||||
|
||||
Linux alias Command
|
||||
|
||||
**Note**: You can assign any name to your “new command” and enclose as many commands as desired between single quotes, but in that case you need to separate them with semicolons, as follows:
|
||||
|
||||
# alias myNewCommand='cd /usr/bin; ls; cd; clear'
|
||||
|
||||
**exit command**
|
||||
|
||||
The exit and logout commands both terminate the shell. The exit command terminates any shell, but the logout command terminates only login shells—that is, those that are launched automatically when you initiate a text-mode login.
|
||||
|
||||
If we are ever in doubt as to what a program does, we can refer to its man page, which can be invoked using the man command. In addition, there are also man pages for important files (inittab, fstab, hosts, to name a few), library functions, shells, devices, and other features.
|
||||
|
||||
#### Examples: ####
|
||||
|
||||
- man uname (print system information, such as kernel name, processor, operating system type, architecture, and so on).
|
||||
- man inittab (init daemon configuration).
|
||||
|
||||
Another important source of information is provided by the info command, which is used to read info documents. These documents often provide more information than the man page. It is invoked by using the info keyword followed by a command name, such as:
|
||||
|
||||
# info ls
|
||||
# info cut
|
||||
|
||||
In addition, the /usr/share/doc directory contains several subdirectories where further documentation can be found. They either contain plain-text files or other friendly formats.
|
||||
|
||||
Make sure you make it a habit to use these three methods to look up information for commands. Pay special and careful attention to the syntax of each of them, which is explained in detail in the documentation.
|
||||
|
||||
**Converting Tabs into Spaces with expand Command**
|
||||
|
||||
Sometimes text files contain tabs but programs that need to process the files don’t cope well with tabs. Or maybe we just want to convert tabs into spaces. That’s where the expand tool (provided by the GNU coreutils package) comes in handy.
|
||||
|
||||
For example, given the file NumbersList.txt, let’s run expand against it, changing tabs to one space, and display the result on standard output.
|
||||
|
||||
# expand --tabs=1 NumbersList.txt
|
||||
|
||||

|
||||
|
||||
Linux expand Command
|
||||
|
||||
The unexpand command performs the reverse operation (converts spaces into tabs).
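As a sketch (not an exact inverse of the expand example above; the -a flag converts all runs of blanks, not only leading ones):

    # unexpand -a NumbersList.txt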
|
||||
|
||||
**Display the first lines of a file with head and the last lines with tail**
|
||||
|
||||
By default, the head command followed by a filename, will display the first 10 lines of the said file. This behavior can be changed using the -n option and specifying a certain number of lines.
|
||||
|
||||
# head -n3 /etc/passwd
|
||||
# tail -n3 /etc/passwd
|
||||
|
||||

|
||||
|
||||
Linux head and tail Command
|
||||
|
||||
One of the most interesting features of tail is the possibility of displaying data (last lines) as the input file grows (tail -f my.log, where my.log is the file under observation). This is particularly useful when monitoring a log to which data is being continually added.
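For instance (the log path is just an example; pick any file that is being appended to on your system):

    # tail -f /var/log/messages    # press Ctrl+C to stop following the file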
|
||||
|
||||
Read More: [Manage Files Effectively using head and tail Commands][10]
|
||||
|
||||
**Merging Lines with paste**
|
||||
|
||||
The paste command merges files line by line, separating the lines from each file with tabs (by default), or another delimiter that can be specified (in the following example the fields in the output are separated by an equal sign).
|
||||
|
||||
# paste -d= file1 file2
|
||||
|
||||

|
||||
|
||||
Merge Files in Linux
|
||||
|
||||
**Breaking a file into pieces using split command**
|
||||
|
||||
The split command is used to split a file into two (or more) separate files, which are named according to a prefix of our choosing. The splitting can be defined by size, chunks, or number of lines, and the resulting files can have numeric or alphabetic suffixes. In the following example, we will split bash.pdf into files of size 50 KB (-b 50KB), using numeric suffixes (-d):
|
||||
|
||||
# split -b 50KB -d bash.pdf bash_
|
||||
|
||||

|
||||
|
||||
Split Files in Linux
|
||||
|
||||
You can merge the files to recreate the original file with the following command:
|
||||
|
||||
# cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf
|
||||
|
||||
**Translating characters with tr command**
|
||||
|
||||
The tr command can be used to translate (change) characters on a one-by-one basis or using character ranges. In the following example we will use the same file2 as previously, and we will change:
|
||||
|
||||
- lowercase o’s to uppercase,
|
||||
- and all lowercase to uppercase
|
||||
|
||||
# cat file2 | tr o O
|
||||
# cat file2 | tr '[a-z]' '[A-Z]'
|
||||
|
||||

|
||||
|
||||
Translate Characters in Linux
|
||||
|
||||
**Reporting or deleting duplicate lines with uniq and sort command**
|
||||
|
||||
The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text files).
|
||||
|
||||
By default, sort takes the first field (separated by spaces) as the key field. To specify a different key field, we need to use the -k option. Please note how the output returned by sort and uniq changes as we change the key field in the following example:
|
||||
|
||||
# cat file3
|
||||
# sort file3 | uniq
|
||||
# sort -k2 file3 | uniq
|
||||
# sort -k3 file3 | uniq
|
||||
|
||||

|
||||
|
||||
Remove Duplicate Lines in Linux
|
||||
|
||||
**Extracting text with cut command**
|
||||
|
||||
The cut command extracts portions of input lines (from stdin or files) and displays the result on standard output, based on number of bytes (-b), characters (-c), or fields (-f).
|
||||
|
||||
When using cut based on fields, the default field separator is a tab, but a different separator can be specified by using the -d option.
|
||||
|
||||
# cut -d: -f1,3 /etc/passwd # Extract specific fields: 1 and 3 in this case
|
||||
# cut -d: -f2-4 /etc/passwd # Extract range of fields: 2 through 4 in this example
|
||||
|
||||

|
||||
|
||||
Extract Text From a File in Linux
|
||||
|
||||
Note that the output of the two examples above was truncated for brevity.
|
||||
|
||||
**Reformatting files with fmt command**
|
||||
|
||||
fmt is used to “clean up” files with a great amount of content or lines, or with varying degrees of indentation. The new paragraph formatting defaults to no more than 75 characters wide. You can change this with the -w (width) option, which sets the line length to the specified number of characters.
|
||||
|
||||
For example, let’s see what happens when we use fmt to display the /etc/passwd file setting the width of each line to 100 characters. Once again, output has been truncated for brevity.
|
||||
|
||||
# fmt -w100 /etc/passwd
|
||||
|
||||

|
||||
|
||||
File Reformatting in Linux
|
||||
|
||||
**Formatting content for printing with pr command**
|
||||
|
||||
pr paginates and displays in columns one or more files for printing. In other words, pr formats a file to make it look better when printed. For example, the following command:
|
||||
|
||||
# ls -a /etc | pr -n --columns=3 -h "Files in /etc"
|
||||
|
||||
Shows a listing of all the files found in /etc in a printer-friendly format (3 columns) with a custom header (indicated by the -h option), and numbered lines (-n).
|
||||
|
||||

|
||||
|
||||
File Formatting in Linux
|
||||
|
||||
### Summary ###
|
||||
|
||||
In this article we have discussed how to enter and execute commands with the correct syntax in a shell prompt or terminal, and explained how to find, inspect, and use system documentation. As simple as it seems, it’s a big first step on your way to becoming an RHCSA.
|
||||
|
||||
If you would like to add other commands that you use on a periodic basis and that have proven useful to fulfill your daily responsibilities, feel free to share them with the world by using the comment form below. Questions are also welcome. We look forward to hearing from you!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:https://www.redhat.com/en/services/certification/rhcsa
|
||||
[2]:http://www.tecmint.com/cd-command-in-linux/
|
||||
[3]:http://www.tecmint.com/ls-command-interview-questions/
|
||||
[4]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/
|
||||
[5]:http://www.tecmint.com/rename-multiple-files-in-linux/
|
||||
[6]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/
|
||||
[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/
|
||||
[8]:http://www.tecmint.com/echo-command-in-linux/
|
||||
[9]:http://www.tecmint.com/pwd-command-examples/
|
||||
[10]:http://www.tecmint.com/view-contents-of-file-in-linux/
|
@ -1,3 +1,4 @@
|
||||
[translating by xiqingongzi]
|
||||
RHCSA Series: How to Perform File and Directory Management – Part 2
|
||||
================================================================================
|
||||
In this article, RHCSA Part 2: File and directory management, we will review some essential skills that are required in the day-to-day tasks of a system administrator.
|
||||
@ -319,4 +320,4 @@ via: http://www.tecmint.com/file-and-directory-management-in-linux/
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/
|
||||
[2]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
|
||||
[3]:http://www.tecmint.com/18-tar-command-examples-in-linux/
|
||||
|
||||
|
@ -1,65 +0,0 @@
|
||||
如何打造自己的Linux发行版
|
||||
================================================================================
|
||||
您是否想过打造您自己的Linux发行版?每个Linux用户在他们使用Linux的过程中都想过做一个他们自己的发行版,至少一次。我也不例外,作为一个Linux菜鸟,我也考虑过开发一个自己的Linux发行版。开发一个Linux发行版被叫做Linux From Scratch (LFS)。
|
||||
|
||||
在开始之前,我总结了一些LFS的内容,如下:
|
||||
|
||||
### 1. 那些想要打造他们自己的Linux发行版的人应该了解打造一个Linux发行版(打造意味着从头开始)与配置一个已有的Linux发行版的不同 ###
|
||||
|
||||
如果您只是想调整下屏幕显示、定制登录以及拥有更好的外表和使用体验。您可以选择任何一个Linux发行版并且按照您的喜好进行个性化配置。此外,有许多配置工具可以帮助您。
|
||||
|
||||
如果您想打包所有必须的文件、boot-loaders和内核,并选择什么该被包括进来,然后依靠自己编译这一切东西。那么您需要Linux From Scratch (LFS)。
|
||||
|
||||
**注意**:如果您只想要定制Linux系统的外表和体验,这个指南不适合您。但如果您真的想打造一个Linux发行版,并且向了解怎么开始以及一些其他的信息,那么这个指南正是为您而写。
|
||||
|
||||
### 2. 打造一个Linux发行版(LFS)的好处 ###
|
||||
|
||||
- 您将了解Linux系统的内部工作机制
|
||||
- 您将开发一个灵活的适应您需求的系统
|
||||
- 您开发的系统(LFS)将会非常紧凑,因为您对该包含/不该包含什么拥有绝对的掌控
|
||||
- 您开发的系统(LFS)在安全性上会更好
|
||||
|
||||
### 3. 打造一个Linux发行版(LFS)的坏处 ###
|
||||
|
||||
打造一个Linux系统意味着将所有需要的东西放在一起并且编译之。这需要许多查阅、耐心和时间。而且您需要一个可用的Linux系统和足够的磁盘空间来打造Linux系统。
|
||||
|
||||
### 4. 有趣的是,Gentoo/GNU Linux在某种意义上最接近于LFS。Gentoo和LFS都是完全从源码编译的定制的Linux系统 ###
|
||||
|
||||
### 5. 您应该是一个有经验的Linux用户,对编译包、解决依赖有相当的了解,并且是个shell脚本的专家。了解一门编程语言(C最好)将会使事情变得容易些。但哪怕您是一个新手,只要您是一个优秀的学习者,可以很快的掌握知识,您也可以开始。最重要的是不要在LFS过程中丢失您的热情。 ###
|
||||
|
||||
如果您不够坚定,恐怕会在LFS进行到一半时放弃。
|
||||
|
||||
### 6. 现在您需要一步一步的指导来打造一个Linux。LFS是打造Linux的官方指南。我们的搭档的站点tradepub也为我们的读者制作了LFS的指南,这同样是免费的。 ###
|
||||
|
||||
您可以从下面的链接下载Linux From Scratch的书籍:
|
||||
|
||||
[][1]
|
||||
|
||||
下载: [Linux From Scratch][1]
|
||||
|
||||
### 关于:Linux From Scratch ###
|
||||
|
||||
这本书是由LFS的项目领头人Gerard Beekmans创作的,由Matthew Burgess和Bruse Dubbs做编辑,两人都是LFS项目的联合领导人。这本书内容很广泛,有338页长。
|
||||
|
||||
书中内容包括:介绍LFS、准备构建、构建Linux(LFS)、建立启动脚本、使LFS可以引导和附录。其中涵盖了您想知道的LFS项目的所有东西。
|
||||
|
||||
这本书还给出了编译一个包的预估时间。预估的时间以编译第一个包的时间作为参考。所有的东西都以易于理解的方式呈现,甚至对于新手来说。
|
||||
|
||||
如果您有充裕的时间并且真正对构建自己的Linux发行版感兴趣,那么您绝对不会错过下载这个电子书(免费下载)的机会。您需要做的,便是照着这本书在一个工作的Linux系统(任何Linux发行版,足够的磁盘空间即可)中开始构建您自己的Linux系统,时间和热情。
|
||||
|
||||
如果Linux使您着迷,如果您想自己动手构建一个自己的Linux发行版,这便是现阶段您应该知道的全部了,其他的信息您可以参考上面链接的书中的内容。
|
||||
|
||||
请让我了解您阅读/使用这本书的经历,这本详尽的LFS指南的使用是否足够简单?如果您已经构建了一个LFS并且想给我们的读者一些建议,欢迎留言和反馈。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/create-custom-linux-distribution-from-scratch/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[wwy-hust](https://github.com/wwy-hust)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://tecmint.tradepub.com/free/w_linu01/prgm.cgi
|
@ -1,57 +0,0 @@
|
||||
Linux比Mac OS X更好吗?历史中的GNU,开源和Apple
|
||||
==============================================================================
|
||||
> 自由软件/开源社区与Apple之间的争论可以回溯到上世纪80年代,当时Linux的创始人曾称Mac OS X的内核是“一堆废物”,此外还有其他一些软件历史上的轶事。
|
||||
|
||||

|
||||
|
||||
开源拥护者们与微软之间有着很长,而且摇摆的关系。每个人都知道这个。但是,在许多方面,自由或者开源软件的支持者们与Apple之间的紧张关系则更加突出——尽管这很少受到媒体的关注。
|
||||
|
||||
需要说明的是,并不是所有的开源拥护者都厌恶苹果。就我的见闻而言,我已经见过很多Linux黑客把玩iPhone和iPad。实际上,许多Linux用户十分喜欢Apple的OS X系统,以至于他们[创造了很多Linux的发行版][1],都设计得看起来像OS X。(顺便说下,[北朝鲜政府][2]就这样做了。)
|
||||
|
||||
但是Mac的信徒与企鹅的信徒,也就是Linux社区(更不用说自由与开源软件世界中更小众的群体)之间的关系,并不总是完全和谐的。而且这绝不是一个新现象,这是我在研究Linux和自由软件基金会的历史时发现的。
|
||||
|
||||
### GNU vs. Apple ###
|
||||
|
||||
这场战争至少可以回溯到上世纪80年代后期。1988年6月,Richard Stallman创立的[GNU][3]项目(该项目旨在建立一个完全自由的类Unix操作系统,其源代码将会免费共享)[强烈谴责][4]了Apple对[Hewlett-Packard][5](HPQ)和[Microsoft][6](MSFT)的诉讼。Apple在诉讼中声称,二者抄袭了Macintosh操作系统的界面和体验。GNU警告说,如果Apple得逞,这家公司“将会借助这种控制公众的新力量,终结掉本可以成为商业软件替代品的自由软件”。
|
||||
|
||||
那个时候,GNU通过发布[“让你的律师远离我的电脑”按钮][7]来反对Apple的诉讼(十分讽刺的是,这意味着GNU站在了Microsoft一边,尽管当时的情形与今天不同),同时呼吁GNU的支持者们抵制Apple,并警告说:虽然Macintosh看起来是不错的计算机,但Apple一旦赢得诉讼就会给市场带来垄断,这会极大地提高计算机的售价。
|
||||
|
||||
Apple最终[输掉了诉讼][8],但是直到1994年,GNU才[撤销对Apple的抵制][9]。这期间,GNU一直不断指责Apple。上世纪90年代早期,GNU开始将GNU软件项目移植到包括MS-DOS在内的其他个人电脑平台上,但[GNU 宣称][10],除非Apple放弃在计算机领域的垄断野心,允许用户界面模仿Macintosh的一些东西,否则“我们不会提供任何对Apple机器的支持”。(因此讽刺的是,Apple在90年代后期推出的OS X是一个类Unix系统,其中包含了大量来自GNU的软件。但那是另外的故事了。)
|
||||
|
||||
### Torvalds on Jobs ###
|
||||
|
||||
除了他对大多数发行版放任自流的态度之外,Linux内核的创造者Linus Torvalds对Apple的态度,并不比Stallman和GNU当年的态度仁慈多少。在他2001年出版的《Just For Fun: The Story of an Accidental Revolutionary》一书中,Torvalds描述了与Steve Jobs的一次会面:大约在1997年,他受后者之邀去讨论Apple当时正在开发、但尚未公开发布的Mac OS X。
|
||||
|
||||
"基本上,Jobs一开始就试图告诉我在桌面上的玩家就两个,Microsoft和Apple,而且他认为我能为Linux做的最好的事,就是从了Apple,努力让开源用户站到Mac OS X后面去"Trovalds写道。
|
||||
|
||||
这次会面显然让Torvalds很不爽。争论的一个焦点是Torvalds对Mach技术的藐视。对于Apple用来构建新OS X操作系统的这个内核,Torvalds称其为“一堆废物。它包含了所有你可能犯的设计错误,甚至还自己发明了一些新的。”
|
||||
|
||||
但显然更令人不快的是,Jobs在开发OS X时对待开源的方式(OS X的核心中有很多开源程序):“他对这种架构的缺陷不以为然:只要最上面还有一层不开源的Mac层,谁会在乎底层操作系统,那些真正底层的东西,是不是开源的呢?”
|
||||
|
||||
总而言之,Torvalds总结道,Jobs“并没有花太多力气来说服我。他只是简单地说着,理所当然地认为我会对与Apple合作感兴趣”。“他完全没有意识到,居然会有人不关心Mac的市场份额。我认为,当我表现出对Mac市场有多大、或者Microsoft市场有多大毫不在意时,他真的感到惊讶了。”
|
||||
|
||||
当然,Torvalds并不能代表所有的Linux用户,他对OS X和Apple的看法从2001年开始也逐渐软化。但实际上,早在2000年代初,Linux社区的领军人物就表现出对Apple及其高层的傲慢的深深鄙视,这说明了一些重要的东西:Apple与开源/自由软件世界之间的矛盾是多么根深蒂固。
|
||||
|
||||
从以上两则历史轶事中,可以看到关于Apple产品价值的重大争议:该公司究竟是致力于打造更好的软硬件,还是仅仅依靠聪明的市场营销,让人们为并没有那么出色的产品掏更多的钱。但不管怎样,我会暂时置身于这场讨论之外。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://thevarguy.com/open-source-application-software-companies/051815/linux-better-os-x-gnu-open-source-and-apple-
|
||||
|
||||
作者:[Christopher Tozzi][a]
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://thevarguy.com/author/christopher-tozzi
|
||||
[1]:https://www.linux.com/news/software/applications/773516-the-mac-ifying-of-the-linux-desktop/
|
||||
[2]:http://thevarguy.com/open-source-application-software-companies/010615/north-koreas-red-star-linux-os-made-apples-image
|
||||
[3]:http://gnu.org/
|
||||
[4]:https://www.gnu.org/bulletins/bull5.html
|
||||
[5]:http://www.hp.com/
|
||||
[6]:http://www.microsoft.com/
|
||||
[7]:http://www.duntemann.com/AppleSnakeButton.jpg
|
||||
[8]:http://www.freibrun.com/articles/articl12.htm
|
||||
[9]:https://www.gnu.org/bulletins/bull18.html#SEC6
|
||||
[10]:https://www.gnu.org/bulletins/bull12.html
|
@ -0,0 +1,86 @@
|
||||
安卓编年史
|
||||
================================================================================
|
||||

|
||||
姜饼的新键盘,文本选择,边界回弹效果以及新复选框。
|
||||
Ron Amadeo 供图
|
||||
|
||||
安卓2.3最重要的新增功能就是系统全局文本选择界面,你可以在左侧截图的谷歌搜索栏看到它。长按一个词能使其变为橙色高亮,并且出现可拖拽的小标签,长按高亮部分会弹出剪切,复制和粘贴选项。之前的方法使用的是依赖于十字方向键的控制,但现在有了触摸文本选择,Nexus S 不再需要额外的硬件控件。
|
||||
|
||||
左侧截图右半边展示的是新的复选框设计和边界回弹效果。冻酸奶(2.2)的复选框像个灯泡——选中时显示一个绿色的勾,未选中的时候显示灰色的勾。姜饼在选项关闭的时候显示一个空的选框——这显得更有意义。姜饼是第一个拥有滚动到底发光效果的版本。当到达列表底部的时候会有一道橙色的光晕,你越往上拉光晕越明显。列表上拉滚动反弹也许最直观,但那是苹果的专利。
|
||||
|
||||

|
||||
新拨号界面和对话框设计。
|
||||
Ron Amadeo 供图
|
||||
|
||||
姜饼里的拨号受到了稍微多点的照顾。它变得更暗了,并且谷歌终于解决了原本的直角,圆角以及圆形的结合问题。现在所有的边角都是直角了。所有的拨号按键被替换成了带有奇怪下划线的样式,像是用边角料拼凑的。你永远无法确定是否看到了一个按钮——我们的大脑得想象出按钮形状的剩余部分。
|
||||
|
||||
图中的无线网络对话框可以看作是剩下的系统全局改动的样本。所有的对话框标题从灰色变为黑色,对话框,下拉框以及按钮边缘都变成了直角,各部分色调都变暗了一点。所有的这些全局变化使得姜饼看起来不像原来那样活泼,而是更加地成熟。“到处都是黑色”的外观必然不是最受欢迎的,但它无疑看起来比安卓之前的灰色和米色的配色方案好多了。
|
||||
|
||||

|
||||
新市场,添加了大块的绿色页面顶栏。
|
||||
Ron Amadeo 供图
|
||||
|
||||
新版系统带来了“安卓市场 2.0”,虽然它不是姜饼独占的。主要的列表设计和原来一致,但谷歌将屏幕上部三分之一覆盖上了大块的绿色横幅,用来展示热门应用以及导航。这里主要的设计灵感也许是绿色的安卓吉祥物——它们的颜色完美匹配。在系统设计偏向暗色系的时候,霓虹灯般的绿色横幅和白色列表让市场明快得多。
|
||||
|
||||
但是,相同的绿色背景图片被用在了不同的手机上,这意味着在低分辨率设备上,绿色横幅看起来更加的大。不少用户抱怨这浪费了屏幕空间,于是随后的更新使得绿色横幅跟随内容向上滚动。在那时,横屏模式更加糟糕——绿色横幅会填满剩下的半个屏幕。
|
||||
|
||||

|
||||
市场的一个带有可折叠描述的应用详情页面,“我的应用”界面,以及 Google Books 界面截图。
|
||||
Ron Amadeo供图
|
||||
|
||||
应用详情页面经过重新设计有了可折叠部分。文本描述只截取前几行展示,向下滑动页面不用再穿过数千行的描述。简短的描述后有一个“更多”按钮可供点击来显示完整的描述。这让用户可以轻松地滑动过列表找到像是截图和“联系开发者”部分,这些部分通常在页面偏下部分。
|
||||
|
||||
安卓主屏的其它部分明智地淡化了绿色机器人元素。市场应用的剩余部分绝大多数仅仅只是旧版市场加上新的绿色导航元素。旧有的标签界面升级成了可滑动切换标签。在姜饼右侧截图中,从右向左滑动将会从“热门付费”切换至“热门免费”,这使得导航变得更加方便。
|
||||
|
||||
姜饼带来了将会成为 Google Play 内容商店第一位成员的应用:Google Books。这个应用是个基础的电子书阅读器,会将书籍以简单的预览图平铺展示。屏幕顶部的“获取 eBooks”链接会打开浏览器,然后加载一个你可以在上面购买电子书的移动网站。
|
||||
|
||||
Google Books 以及市场的“我的应用”页面都是 Action Bar 的原型。就像现在的指南中写的,页面有一个带应用图标的固定置顶栏,应用内页面的名称,以及一些控件。这两个应用的布局实际上看起来十分现代,和现在的界面相似。
|
||||
|
||||

|
||||
新版谷歌地图。
|
||||
Ron Amadeo供图
|
||||
|
||||
谷歌地图(再重复一次,这时候的谷歌地图是在安卓市场中的,并且不是这个安卓版本独占的)拥有了另一个操作栏原型,是一个顶部对齐的控件栏。这个早期版本的操作栏拥有许多试验性功能。功能栏主要被一个搜索框所占据,但是你永远无法向其输入内容。点击搜索框会打开安卓 1.x 版本以来的旧搜索界面,它带有完全不同的操作栏设计和活泼的按钮。2.3 版本的顶栏仅仅只是个大号的搜索按钮而已。
|
||||
|
||||

|
||||
从黑变白的新 business 页面。
|
||||
Ron Amadeo 供图
|
||||
|
||||
应用抽屉里和地点一起到来的热门商家重新设计了界面。不像姜饼的其它部分,它从黑色转换成了白色。谷歌还给它保留了圆角的旧按钮。这个新版本的地图能显示商家的营业时间,并且提供高级搜索选项,比如正在营业或是通过评分或价格限定搜索范围。点评被调整到了商家详情页面,用户可以更容易地对当前商家有个直观感受。而且现在还可以从搜索结果中给某个地点加星,保存起来以后使用。
|
||||
|
||||

|
||||
新 YouTube 设计,神奇的是有点像旧版地图的商家页面的设计。
|
||||
Ron Amadeo供图
|
||||
|
||||
YouTube 应用似乎完全与安卓的其它部分分离开来,就像是设计它的人完全不知道姜饼最终会是什么样子一样。高亮是红色和灰色方案,而不是绿色和橙色,而且不像扁平黑色风格的姜饼,Youtube 有着气泡状的,带有圆角并且大幅使用渐变效果的按钮,标签以及操作栏。尽管如此,新应用还是有一些正确的地方。所有的标签可以水平滑动切换,而且应用终于提供了竖屏观看视频模式。安卓在那个阶段似乎工作不是很一致。就像是有人告诉 Youtube 团队“把它做成黑色的”,然后这就是全部的指导方向一样。唯一一个与其相似的安卓实体就是旧版谷歌地图的商家页面的设计。
|
||||
|
||||
尽管有些奇怪的设计,Youtube 应用有着最接近操作栏的顶栏设计。除了顶部操作栏的应用图标和一些按钮,最右侧还有个标着“更多”字样的按钮,点击它可以打开因为过多而无法装进操作栏的选项。在今天,这被称作“更多操作”按钮,它是个标准界面控件。
|
||||
|
||||

|
||||
新 Google Talk,支持语音和视频通话,以及新语音命令界面。
|
||||
Ron Amadeo供图
|
||||
|
||||
姜饼的最后一个更新是安卓 2.3.4,它带来了新版 Google Talk。不像 Nexus One,Nexus S 带有前置摄像头——重新设计的 Google Talk 拥有语音和视频通话功能。好友列表右侧的彩色指示不仅指明在线状态,还显示了语音和视频的可用性。一个点表示仅文本信息,一个麦克风表示文本信息或语音,一个摄像机表示支持文本信息,语音以及视频。如果可用的话,点击语音或视频图标会立即向好友发起通话。
|
||||
|
||||
姜饼是谷歌仍然提供支持的最老的安卓版本。激活一部姜饼设备并放置一会儿会收到大量更新。姜饼会拉取 Google Play 服务,它会带来许多新的 API 支持,并且会升级到最新版本的 Play 商店。打开 Play 商店并点击更新按钮,几乎每个独立谷歌应用都会被替换为更加现代的版本。我们尝试着保持这篇文章讲述的是姜饼发布时的样子,但时至今日还停留在姜饼的用户会被认为有点跟不上时代了。
|
||||
|
||||
姜饼如今仍然能够得到支持,因为有数量可观的用户仍然在使用这个有点过时的系统。姜饼仍然存在的能量来自于它极低的系统要求,使得它成为了低端廉价设备的最佳选择。下个版本的安卓对硬件的要求变得更高了。举个例子,安卓 3.0 蜂巢不是开源的,这意味着它只能在谷歌的协助之下移植到一个设备上。同时它还是只为平板设计的,这让姜饼作为最新的手机安卓版本存在了很长一段时间。4.0 冰淇淋三明治是下一个手机版本,但它显著地提高了安卓系统要求,抛弃了低端市场。谷歌现在希望借 4.4 KitKat(奇巧巧克力)重回廉价手机市场,它的系统要求降回了 512MB 内存。时间的推移同样有所帮助——如今,就算是廉价的系统级芯片都能满足安卓 4.0 时代的系统要求。
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||
[Ron Amadeo][a] / Ron是Ars Technica的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
|
||||
|
||||
[@RonAmadeo][t]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/15/
|
||||
|
||||
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
@ -0,0 +1,66 @@
|
||||
安卓编年史
|
||||
================================================================================
|
||||
### 安卓 3.0 蜂巢—平板和设计复兴 ###
|
||||
|
||||
尽管姜饼中做了许多改变,安卓仍然是移动世界里的丑小鸭。相比于 iPhone,它的优雅程度和设计完全抬不起头。另一方面来说,为数不多的能与 iOS 的美学智慧相当的操作系统之一是 Palm 的 WebOS。WebOS 有着优秀的整体设计,创新的功能,而且被寄予期望能够从和 iPhone 的长期竞争中拯救公司。
|
||||
|
||||
尽管如此,一年之后,Palm 资金链断裂。Palm 公司从未看到 iPhone 的到来,到 WebOS 就绪的时候已经太晚了。2010年4月,惠普花费10亿美元收购了 Palm。尽管惠普收购了一个拥有优秀用户界面的产品,界面的首席设计师,Matias Duarte,并没有加入惠普公司。2010年5月,就在惠普接手 Palm 之前,Duarte 加入了谷歌。惠普买下了面包,但谷歌雇佣了它的烘培师。
|
||||
|
||||

|
||||
第一部蜂巢设备,摩托罗拉 Xoom 10英寸平板。
|
||||
|
||||
在谷歌,Duarte 被任命为安卓用户体验主管。这是第一次有人公开掌管安卓的外观。尽管 Matias 在安卓 2.2 发布时就来到了谷歌,第一个真正受他影响的安卓版本是 3.0 蜂巢,它在2011年2月发布。
|
||||
|
||||
按谷歌自己的说法,蜂巢是匆忙问世的。10个月前,苹果发布了 iPad,让平板变得更加现代,谷歌希望能够尽快做出回应。蜂巢就是那个回应,一个运行在10英寸触摸屏上的安卓版本。悲伤的是,将这个系统推向市场是如此优先的事项,以至于边边角角都被砍去了以节省时间。
|
||||
|
||||
新系统只用于平板——手机不能升级到蜂巢,这加大了谷歌让系统运行在差异巨大的不同尺寸屏幕上的难度。但是,仅支持平板而不支持手机使得蜂巢源码没有泄露。之前的安卓版本是开源的,这使得黑客社区能够将其最新版本移植到所有的不同设备之上。谷歌不希望应用开发者在支持不完美的蜂巢手机移植版本时感到压力,所以谷歌将源码留在自己手中,并且严格控制能够拥有蜂巢的设备。匆忙的开发还导致了软件问题。在发布时,蜂巢不是特别稳定,SD卡不能工作,Adobe Flash——安卓最大的特色之一——还不被支持。
|
||||
|
||||
[摩托罗拉 Xoom][1]是为数不多的拥有蜂巢的设备之一,它是这个新系统的旗舰产品。Xoom 是一个10英寸,16:9 的平板,拥有 1GB 内存和 1GHz Tegra 2 双核处理器。尽管是由谷歌直接控制更新的新版安卓发布设备,它并没有被叫做“Nexus”。对此最可能的原因是谷歌对它没有足够的信心称其为旗舰。
|
||||
|
||||
尽管如此,蜂巢是安卓的一个里程碑。在一个体验设计师的主管之下,整个安卓用户界面被重构,绝大多数奇怪的应用设计都得到改进。安卓的默认应用终于看起来像整体的一部分,不同的界面有着相似的布局和主题。然而重新设计安卓会是一个跨版本的项目——蜂巢只是将安卓塑造成型的开始。这第一份草稿为安卓未来版本的样子做了基础设计,但它也用了过多的科幻主题,谷歌将花费接下来的数个版本来淡化它。
|
||||
|
||||

|
||||
蜂巢和姜饼的主屏幕。
|
||||
Ron Amadeo供图
|
||||
|
||||
姜饼只是在它的量子壁纸上试验了科幻外观,蜂巢整个系统的以电子为灵感的主题让它充满科幻意味。所有东西都是黑色的,如果你需要对比色,你可以从一些不同色调的蓝色中挑选。所有蓝色的东西还有“光晕”效果,让整个系统看起来像是外星科技创造的。默认背景是个六边形的全息方阵(一个蜂巢!明白了吗?),看起来像是一艘飞船上的传送阵的地板。
|
||||
|
||||
蜂巢最重要的变化是增加了系统栏。摩托罗拉 Xoom 除了电源和音量键之外没有配备实体按键,所以蜂巢添加了一个大黑色底栏到屏幕底部,用于放置导航按键。这意味着默认安卓界面不再需要特别的实体按键。在这之前,安卓没有实体的返回,菜单和 Home 键就不能正常工作。现在,软件提供了所有必需的按钮,任何带有触摸屏的设备都能够运行安卓。
|
||||
|
||||
新软件按键带来的最大的好处是灵活性。新的应用指南表明应用应不再要求实体菜单按键,需要用到的时候,蜂巢会自动检测并添加四个按钮到系统栏让应用正常工作。另一个软件按键的灵活属性是它们可以改变设备的屏幕方向。除了电源和音量键之外,Xoom 的方向实际上不是那么重要。从用户的角度来看,系统栏始终处于设备的“底部”。代价是系统栏明显占据了一些屏幕空间。为了在10英寸平板上节省空间,状态栏被合并到了系统栏中。所有的常用状态指示放在了右侧——有电源,连接状态,时间还有通知图标。
|
||||
|
||||
主屏幕的整个布局都改变了,用户界面部件放在了设备的四个角落。屏幕底部左侧放置着之前讨论过的导航按键,右侧用于状态指示和通知,顶部左侧显示的是文本搜索和语音搜索,右侧有应用抽屉和添加小部件的按钮。
|
||||
|
||||

|
||||
新锁屏界面和最近应用界面。
|
||||
Ron Amadeo供图
|
||||
|
||||
(因为 Xoom 是一部 [较重] 的10英寸,16:9平板设备,这意味着它主要是横屏使用。虽然大部分应用还支持竖屏模式,但是到目前为止,由于我们的版式限制,我们大部分使用的是竖屏模式的截图。请记住蜂巢的截图来自于10英寸的平板,而姜饼的截图来自3.7英寸的手机。二者所展现的信息密度是不能直接比较的。)
|
||||
|
||||
解锁界面——从菜单按钮到旋转式拨号盘再到滑动解锁——移除了解锁步骤的任何精度要求,它采用了一个环状解锁盘。从中间向任意方向向外滑动就能解锁设备。就像旋转式解锁,这种解锁方式更加符合人体工程学,而不用强迫你的手指完美地遵循一条笔直的解锁路径。
|
||||
|
||||
第二张图中略缩图条带是由新增的“最近应用”按钮打开的界面,现在处在返回和 Home 键旁边。不像姜饼中长按 Home 键显示一组最近应用的图标,蜂巢在屏幕上显示应用图标和略缩图,使得在任务间切换变得更加方便。最近应用的灵感明显来自于 Duarte 在 WebOS 中的“卡片式”多任务管理,其使用全屏略缩图来切换任务。这个设计提供和 WebOS 的任务切换一样的易识别体验,但更小的略缩图允许更多的应用一次性显示在屏幕上。
|
||||
|
||||
尽管最近应用的实现看起来和你现在的设备很像,这个版本实际上是非常早期的。这个列表不能滚动,这意味着竖屏下只能显示七个应用,横屏下只能显示五个。任何超出范围的应用会从列表中去除。而且你也不能通过滑动略缩图来关闭应用——这只是个静态的列表。
|
||||
|
||||
这里我们看到电子灵感影响的完整主题效果:略缩图的周围有蓝色的轮廓以及神秘的光晕。这张截图还展示软件按键的好处——上下文。返回按钮可以关闭略缩图列表,所以这里的箭头指向下方,而不是通常的样子。
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||
[Ron Amadeo][a] / Ron是Ars Technica的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
|
||||
|
||||
[@RonAmadeo][t]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/16/
|
||||
|
||||
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://arstechnica.com/gadgets/2011/03/ars-reviews-the-motorola-xoom/
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
@ -0,0 +1,86 @@
|
||||
安卓编年史
|
||||
================================================================================
|
||||

|
||||
蜂巢的应用列表少了很多应用。上图还展示了通知中心和新的快速设置。
|
||||
Ron Amadeo 供图
|
||||
|
||||
默认的应用图标从32个减少到了25个,其中还有两个是第三方的游戏。因为蜂巢不是为手机设计的,而且谷歌希望默认应用都是为平板优化的,很多应用因此没有成为默认应用。被去掉的应用有亚马逊 MP3 商店,Car Home,Facebook,Google Goggles,信息,新闻与天气,电话,Twitter,谷歌语音,以及语音拨号。谷歌正在悄悄打造的音乐服务将于不久后面世,所以亚马逊 MP3 商店需要为它让路。Car Home,信息以及电话对一部不是手机的设备来说没有多大意义,Facebook 和 Twitter还没有平板版应用,Goggles,新闻与天气以及语音拨号几乎没什么人注意,就算移除了大多数人也不会想念它们的。
|
||||
|
||||
几乎每个应用图标都是全新设计的。就像是从 G1 切换到摩托罗拉 Droid,变化的最大动力是分辨率的提高。Nexus S 有一块800×480分辨率的显示屏,姜饼重新设计了图标等资源来适应它。Xoom 巨大的1280×800 10英寸显示屏意味着几乎所有设计都要重做。但是再说一次,这次是有真正的设计师在负责,所有东西看起来更有整体性了。蜂巢的应用列表从纵向滚动变为了横向分页式。这个变化对横屏设备有意义,而对手机来说,查找一个应用还是纵向滚动列表比较快。
|
||||
|
||||
第二张蜂巢截图展示的是新通知中心。姜饼中的灰色和黑色设计已经被抛弃了,现在是黑色面板带蓝色光晕。上面一块显示着日期时间,连接状态,电量和打开快速设置的按钮,下面是实际的通知。非持续性通知现在可以通过通知右侧的“X”来关闭。蜂巢是第一个支持通知内控制的版本。第一个(也是蜂巢发布时唯一一个)利用了此特性的应用是新的谷歌音乐,在它的通知上有上一曲,播放/暂停,下一曲按钮。这些控制可以在任何应用中访问到,这让控制音乐播放变成了一件轻而易举的事情。
|
||||
|
||||

|
||||
“添加到主屏幕”的缩小视图更易于组织布局。搜索界面将自动搜索建议和通用搜索分为两个面板显示。
|
||||
Ron Amadeo 供图
|
||||
|
||||
点击主屏幕右上角的加号或长按背景空白处就会打开新的主屏幕设置界面。蜂巢会在屏幕上半部分显示所有主屏的缩小视图,下半部分分页显示的是小部件和快捷方式。小部件或快捷方式可以从下半部分的抽屉中拖动到五个主屏幕中的任意一个上。姜饼只会显示一个文本列表,而蜂巢会显示小部件完整的略缩图预览。这让你更清楚一个小部件是什么样子的,而不是像原来的“日历”一样只是一个只有应用名称的描述。
|
||||
|
||||
摩托罗拉 Xoom 更大的屏幕让键盘的布局更加接近 PC 风格,退格,回车,shift 以及 tab 都在传统的位置上。键盘带有浅蓝色,并且键与键之间的空间更大了。谷歌还添加了一个专门的笑脸按钮。 :-)
|
||||
|
||||

|
||||
打开菜单的 Gmail 在蜂巢和姜饼上的效果。按钮布置在首屏更容易被发现。
|
||||
Ron Amadeo 供图
|
||||
|
||||
Gmail 示范了蜂巢所有的用户界面概念。安卓 3.0不再把所有控制都隐藏在菜单按钮之后。屏幕的顶部现在有一条带有图标的条带,叫做 Action Bar(操作栏),它将许多常用的控制选项提升到了主屏幕上,用户直接就能看到它们。Gmail 的操作栏显示着搜索,新邮件,刷新按钮,不常用的选项比如设置,帮助,以及反馈放在了“更多”按钮中。点击复选框或选中文本的时候时整个操作栏的图标会变成和操作相关的——举个例子,选择文本会出现复制,粘贴和全选按钮。
|
||||
|
||||
应用左上角显示的图标同时也作为称作“上一级”的导航按钮。“后退”的作用类似浏览器的后退按钮,导航到之前访问的页面,“上一级”则会导航至应用的上一层次。举例来说,如果你在安卓市场,点击“给开发者发邮件”,会打开 Gmail,“后退”会让你返回安卓市场,但是“上一级”会带你到 Gmail 的收件箱。“后退”可能会关闭当前应用,而“上一级”永远不会。应用可以控制“后退”按钮,它们往往重新定义它为“上一级”的功能。事实上,这两个按钮之间几乎没什么不同。
|
||||
|
||||
蜂巢还引入了 “Fragments” API,允许开发者开发同时适用于平板和手机的应用。一个 “Fragments”(格子) 是一个用户界面的面板。在上图的 Gmail 中,左边的文件夹列表是一个格子,收件箱是另一个格子。手机每屏显示一个格子,而平板则可以并列显示两个。开发者可以自行定义单独每个格子的外观,安卓会根据当前的设备决定如何显示它们。
|
||||
|
||||

|
||||
计算器使用了常规的安卓按钮,但日历看起来像是被谁打翻了蓝墨水。
|
||||
Ron Amadeo 供图
|
||||
|
||||
这是安卓历史上第一次计算器换上了没有特别定制的按钮,所以它看起来确实是系统的一部分。更大的屏幕有了更多空间容纳按钮,足够将计算器基本功能容纳在一个屏幕上。日历极大地受益于额外的显示空间,有了更多的空间显示事件文本和控制选项。顶部的操作栏有切换视图的按钮,显示当前时间跨度,以及常规按钮。事件块变成了白色背景,日历标识只在左上角显示。在底部(或横屏模式的侧边)显示的是月历和显示的日历列表。
|
||||
|
||||
日历的比例同样可以调整。通过两指缩放手势,纵向的周和日视图能够在一屏内显示五到十九小时的事件。日历的背景由不均匀的蓝色斑点组成,看起来不是特别棒,在随后的版本里就被抛弃了。
|
||||
|
||||

|
||||
新相机界面,取景器显示的是“负片”效果。
|
||||
Ron Amadeo 供图
|
||||
|
||||
巨大的10英寸 Xoom 平板有个摄像头,这意味着它同样有个相机应用。电子风格的重新设计终于甩掉了谷歌从安卓 1.6 以来使用的仿皮革外观。控制选项以环形排布在快门键周围,让人想起真正的相机上的圆形控制转盘。Cooliris 衍生的弹出对话气泡变成了带光晕的半透明黑色选框。蜂巢的截图显示的是新的“颜色效果”功能,它能给取景器实时加上滤镜效果。不像姜饼的相机应用,它不支持竖屏模式——它被限制在横屏状态。用10英寸的平板拍摄纵向照片没多大意义,但拍摄横向照片也没多大意义。
|
||||
|
||||

|
||||
时钟应用相比其它地方没受到多少关照。谷歌把它扔进一个小盒子里然后就收工了。
|
||||
Ron Amadeo 供图
|
||||
|
||||
无数功能已经成形了,现在是时候来重制一下时钟了。整个“桌面时钟”概念被踢出门外,取而代之的是在纯黑背景上显示的简单又巨大的时间数字。打开其它应用查看天气的功能不见了,随之而去的还有显示你的壁纸的功能。当要设计平板尺寸的界面时,有时候谷歌就放弃了,就像这里,就只是把时钟界面扔到了一个小小的,居中的对话框里。
|
||||
|
||||

|
||||
音乐应用终于得到了一直以来都需要的完全重新设计。
|
||||
Ron Amadeo 供图
|
||||
|
||||
尽管音乐应用之前有得到一些小的加强,但这是自安卓 0.9 以来它第一次受到正视。重新设计的亮点是一个“别叫它封面流滚动 3D 专辑封面视图”,称作“最新和最近”。导航由操作栏的下拉框解决,取代了安卓 2.1 引入的标签页导航。尽管“最新和最近”有个 3D 滚动专辑封面,“专辑”使用的是专辑略缩图的平面方阵。另一个部分也有个完全不同的设计。“歌曲”使用了垂直滚动的文本列表,“播放列表”,“年代”和“艺术家”用的是堆砌专辑显示。
|
||||
|
||||
在几乎每个视图中,每个单独的项目有它自己单独的菜单,通常在每项的右下角有个小箭头。眼下这里只会显示“播放”和“添加到播放列表”,但这个版本的谷歌音乐是为未来搭建的。谷歌不久后就要发布音乐服务,这些独立菜单在像是在音乐商店里浏览该艺术家的其它内容,或是管理云存储和本地存储时将会是不可或缺的。
|
||||
|
||||
正如安卓 2.1 中的 Cooliris 风格的相册,谷歌音乐会将略缩图放大作为背景图片。底部的“正在播放”栏现在显示着专辑封面,播放控制,以及播放进度条。
|
||||
|
||||

|
||||
新谷歌地图的一些地方真的很棒,一些却是从安卓 1.5 来的。
|
||||
Ron Amadeo 供图
|
||||
|
||||
谷歌地图也为大屏幕进行了重新设计。这个设计将会持续一段时间,它对所有的控制选项用了一个半透明的黑色操作栏。搜索再次成为主要功能,占据了操作栏显要位置,但这回可是真的搜索栏,你可以在里面输入关键字,不像以前那个搜索栏形状的按钮会打开完全不同的界面。谷歌最终还是放弃了给缩放控件留屏幕空间,仅仅依靠手势来控制地图显示。尽管 3D 建筑轮廓这个特性已经被移植到了旧版本的地图中,蜂巢依然是拥有这个特性的第一个版本。双指在地图上向下拖放会“倾斜”地图的视角,展示建筑的侧面。你可以随意旋转,建筑同样会跟着进行调整。
|
||||
|
||||
并不是所有部分都进行了重新设计。导航自姜饼以来就没动过,还有些界面的核心部分,像是路线,直接从安卓 1.6 的设计拿出来,放到一个小盒子里居中放置,仅此而已。
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||
[Ron Amadeo][a] / Ron是Ars Technica的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
|
||||
|
||||
[@RonAmadeo][t]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/17/
|
||||
|
||||
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
@ -0,0 +1,86 @@
|
||||
修复 Linux 中 “minimal BASH like line editing is supported” 的 GRUB 错误
|
||||
================================================================================
|
||||
这两天我[安装了Elementary OS和Windows双系统][1],在启动的时候遇到了一个Grub错误。命令行中呈现如下信息:
|
||||
|
||||
**系统仅提供最小化的类 BASH 命令行编辑。对于第一个词,按 TAB 键可以列出可用的命令补全;在其他位置,按 TAB 键则会列出可用的设备或文件补全。**
|
||||
|
||||

|
||||
|
||||
事实上这并不是Elementary OS独有的错误。这是常见的[Grub][2]错误,会在Ubuntu,Fedora,Linux Mint等Linux操作系统上发生。
|
||||
|
||||
通过这篇文章,我们可以学到**如何在基于 Ubuntu 的 Linux 系统中修复 “minimal BASH like line editing is supported” Grub 错误**。
|
||||
|
||||
> 你可以参阅这篇教程来修复类似的高频问题,[错误:分区未找到Linux grub救援模式][3]。
|
||||
|
||||
### 先决条件 ###
|
||||
|
||||
要修复这个问题,你需要达成以下的条件:
|
||||
|
||||
- 一个包含相同版本、相同OS的LiveUSB或磁盘
|
||||
- 当前会话的Internet连接正常工作
|
||||
|
||||
在确认了你拥有先决条件了之后,让我们看看如何修复Linux的死亡黑屏(如果我可以这样的称呼它的话;))。
|
||||
|
||||
### 如何在基于Ubuntu的Linux中修复“minimal BASH like line editing is supported” Grub错误 ###
|
||||
|
||||
我知道你一定疑问这种Grub错误并不局限于在基于Ubuntu的Linux发行版上发生,那为什么我要强调在基于Ubuntu的发行版上呢?原因是,在这里我们将采用一个简单的方法并叫作**Boot Repair**的工具来修复我们的问题。我并不确定在其他的诸如Fedora的发行版中是否有这个工具可用。不再浪费时间,我们来看如何修复minimal BASH like line editing is supported Grub错误。
|
||||
|
||||
### 步骤 1: 引导进入 live 会话 ###
|
||||
|
||||
插入live USB,引导进入live会话。
|
||||
|
||||
### 步骤 2: 安装 Boot Repair ###
|
||||
|
||||
等你进入了 live 会话后,打开终端,使用以下命令来安装 Boot Repair:
|
||||
|
||||
sudo add-apt-repository ppa:yannubuntu/boot-repair
|
||||
sudo apt-get update
|
||||
sudo apt-get install boot-repair
|
||||
|
||||
注意:推荐这篇教程[如何修复 apt-get update 无法添加新的 CD-ROM 的错误][4],如果你在运行以上命令是遭遇同样的问题。
|
||||
|
||||
### 步骤 3: 使用Boot Repair修复引导 ###
|
||||
|
||||
装完Boot Repair后,在命令行运行如下命令启动:
|
||||
|
||||
boot-repair &
|
||||
|
||||
其实操作非常简单直接,你仅需按照Boot Repair工具提供的说明操作即可。首先,点击Boot Repair中的**Recommended repair**选项。
|
||||
|
||||

|
||||
|
||||
Boot Repair需要花费一些时间来分析引导和Grub中存在的问题。然后,它会提供一些可在命令行中直接运行的命令。将这些命令一个个在终端中执行。我这边屏幕上显示的是:
|
||||
|
||||

|
||||
|
||||
在输入了这些命令之后,它会执行一段时间:
|
||||
|
||||

|
||||
|
||||
在这一过程结束后,它会提供一个由boot repair的日志组成的网页网址。如果你的引导问题这样都没有修复,你就可以去社区或是发邮件给开发团队并提交该网址作为参考。很酷!不是吗?
|
||||
|
||||

|
||||
|
||||
在boot repair成功完成后,关闭你的电脑,移除USB并再次引导。我这就能成功的引导了,但是在Grub画面上会多出额外的两行。相比于看到系统能够再次正常引导的喜悦这些对我来说并不重要。
|
||||
|
||||

|
||||
|
||||
### 对你有效吗? ###
|
||||
|
||||
这就是我修复**Elementary OS Freya中的minimal BASH like line editing is supported Grub 错误**的方法。怎么样?是否对你也有效呢?请自由的在下方的评论区提出你的问题和建议。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[martin2011qi](https://github.com/martin2011qi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/guide-install-elementary-os-luna/
|
||||
[2]:http://www.gnu.org/software/grub/
|
||||
[3]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/
|
||||
[4]:http://itsfoss.com/fix-failed-fetch-cdrom-aptget-update-add-cdroms/
|
@ -0,0 +1,67 @@
|
||||
在 Linux 中安装 Google 环聊桌面客户端
|
||||
================================================================================
|
||||

|
||||
|
||||
先前,我们已经介绍了如何[在 Linux 中安装 Facebook Messenger][1] 和 [WhatsApp 桌面客户端][2]。这些应用都是非官方的应用。今天,我将为你推荐另一款非官方的应用,它就是 [Google 环聊][3]。
|
||||
|
||||
当然,你可以在 Web 浏览器中使用 Google 环聊,但相比于此,使用桌面客户端会更加有趣。好奇吗?那就跟着我看看如何**在 Linux 中安装 Google 环聊**以及如何使用它吧。
|
||||
|
||||
### 在 Linux 中安装 Google 环聊 ###
|
||||
|
||||
我们将使用一个名为 [yakyak][4] 的开源项目,它是一个针对 Linux,Windows 和 OS X 平台的非官方 Google 环聊客户端。我将向你展示如何在 Ubuntu 中使用 yakyak,但我相信在其他的 Linux 发行版本中,你可以使用同样的方法来使用它。在了解如何使用它之前,让我们先看看 yakyak 的主要特点:
|
||||
|
||||
- 发送和接受聊天信息
|
||||
- 创建和更改对话 (重命名, 添加人物)
|
||||
- 离开或删除对话
|
||||
- 桌面提醒通知
|
||||
- 打开或关闭通知
|
||||
- 针对图片上传,支持拖放,复制粘贴或使用上传按钮
|
||||
- Hangupsbot 房间同步(显示真实的用户头像)
|
||||
- 展示行内图片
|
||||
- 历史回放
|
||||
|
||||
听起来不错吧,你可以从下面的链接下载到该软件的安装文件:
|
||||
|
||||
- [下载 Google 环聊客户端 yakyak][5]
|
||||
|
||||
下载的文件是压缩的。解压后,你将看到一个名称类似于 linux-x64 或 linux-x32 的目录,其名称取决于你的系统。进入这个目录,你应该可以看到一个名为 yakyak 的文件。双击这个文件来启动它。
|
||||
|
||||

|
||||
|
||||
当然,你需要键入你的 Google 账号来认证。
|
||||
|
||||

|
||||
|
||||
一旦你通过认证后,你将看到如下的画面,在这里你可以和你的 Google 联系人进行聊天。
|
||||
|
||||

|
||||
|
||||
假如你想查看对话中联系人的头像缩略图,你可以选择 `查看 -> 展示对话缩略图`。
|
||||
|
||||

|
||||
|
||||
当有新的信息时,你将得到桌面提醒。
|
||||
|
||||

|
||||
|
||||
### 值得一试吗? ###
|
||||
|
||||
我让你尝试一下,并决定 **在 Linux 中安装 Google 环聊客户端** 是否值得。若你想要官方的应用,你可以看看这些 [拥有原生 Linux 客户端的即时消息应用程序][6]。不要忘记分享你在 Linux 中使用 Google 环聊的体验。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/install-google-hangouts-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/facebook-messenger-linux/
|
||||
[2]:http://itsfoss.com/whatsapp-linux-desktop/
|
||||
[3]:http://www.google.com/+/learnmore/hangouts/
|
||||
[4]:https://github.com/yakyak/yakyak
|
||||
[5]:https://github.com/yakyak/yakyak
|
||||
[6]:http://itsfoss.com/best-messaging-apps-linux/
|
@ -0,0 +1,80 @@
|
||||
|
||||
如何修复 Ubuntu 14.04 中“检测到系统程序错误”的问题
|
||||
================================================================================
|
||||

|
||||
|
||||
|
||||
在过去的几个星期,(几乎)每次启动都有 **Ubuntu 15.04在启动时检测到系统程序错误(system program problem detected on startup in Ubuntu 15.04)** 的消息跳出来“欢迎”我。起初我直接忽略掉了它,但这种事发生多了,就让人觉得非常烦人!
|
||||
|
||||
> 检测到系统程序错误(System program problem detected)
|
||||
>
|
||||
> 你想立即报告这个问题吗?
|
||||
>
|
||||
> 
|
||||
|
||||
|
||||
我肯定地知道如果你是一个Ubuntu用户,你可能曾经也遇到过这个恼人的弹窗。在本文中,我们将探讨在Ubuntu 14.04和15.04中遇到"检测到系统程序错误(system program problem detected)"时 应该怎么办。
|
||||
### 怎么解决Ubuntu中"检测到系统程序错误"的错误 ###
|
||||
|
||||
#### 那么这个通知到底是关于什么的? ####
|
||||
|
||||
大体上讲,它是在告知你,你的系统的一部分崩溃了。可别因为“崩溃”这个词而恐慌。这不是一个严重的问题,你的系统还是完完全全可用的。只是在以前的某个时刻某个程序崩溃了,而Ubuntu想让你决定要不要把这个问题报告给开发者,这样他们就能够修复这个问题。
|
||||
|
||||
#### 那么,我们点了“报告错误”的按钮后,它以后就不再显示了?####
|
||||
|
||||
|
||||
不,不是的!即使你点了“报告错误”按钮,最后你还是会被一个如下的弹窗再次“欢迎”:
|
||||

|
||||
|
||||
[对不起,Ubuntu发生了一个内部错误(Sorry, Ubuntu has experienced an internal error)][1]的弹窗来自 Apport(LCTT 译注:Apport 是 Ubuntu 中错误信息的收集报告系统,详见 Ubuntu Wiki 中的 Apport 篇)。它会进一步打开网页浏览器,然后你可以通过登录或创建 [Launchpad][2] 帐户来填写一份漏洞(Bug)报告文件。你看,这是一个复杂的过程,它要花整整四步才能完成。
|
||||
#### 但是我想帮助开发者,让他们知道这个漏洞啊 !####
|
||||
|
||||
你这样想的确非常地周到体贴,而且这样做也是正确的。但是这样做的话,存在两个问题。第一,存在非常高的概率,这个漏洞已经被报告过了;第二,即使你报告了个这次崩溃,也无法保证你不会再看到它。
|
||||
|
||||
#### 那么,你的意思就是说别报告这次崩溃了?####
|
||||
|
||||
对,也不对。如果你想的话,在你第一次看到它的时候报告它。你可以在上面图片显示的“显示细节(Show Details)”中,查看崩溃的程序。但是如果你总是看到它,或者你不想报告漏洞(Bug),那么我建议你还是一次性摆脱这个问题吧。
|
||||
### 修复Ubuntu中“检测到系统程序错误”的错误 ###
|
||||
|
||||
这些错误报告被存放在Ubuntu中目录/var/crash中。如果你翻看这个目录的话,应该可以看到有一些以crash结尾的文件。
|
||||

|
||||
|
||||
我的建议是删除这些错误报告。打开一个终端,执行下面的命令:
|
||||
|
||||
sudo rm /var/crash/*
|
||||
|
||||
这个操作会删除所有在/var/crash目录下的所有内容。这样你就不会再被这些报告以前程序错误的弹窗所扰。但是如果有一个程序又崩溃了,你就会再次看到“检测到系统程序错误”的错误。你可以再次删除这些报告文件,或者你可以禁用Apport来彻底地摆脱这个错误弹窗。
|
||||
#### 彻底地摆脱Ubuntu中的系统错误弹窗 ####
|
||||
|
||||
如果你这样做,系统中任何程序崩溃时,系统都不会再通知你。如果你想问问我的看法的话,我会说,这不是一件坏事,除非你愿意填写错误报告。如果你不想填写错误报告,那么这些错误通知存不存在都不会有什么区别。
|
||||
|
||||
要禁止Apport,并且彻底地摆脱Ubuntu系统中的程序崩溃报告,打开一个终端,输入以下命令:
|
||||
gksu gedit /etc/default/apport
|
||||
|
||||
这个文件的内容是:
|
||||
|
||||
# set this to 0 to disable apport, or to 1 to enable it
|
||||
# 设置0表示禁用Apportw,或者1开启它。译者注,下同。
|
||||
# you can temporarily override this with
|
||||
# 你可以用下面的命令暂时关闭它:
|
||||
# sudo service apport start force_start=1
|
||||
enabled=1
|
||||
|
||||
把**enabled=1**改为**enabled=0**。保存并关闭文件。完成之后你就再也不会看到错误报告的弹窗了。很显然,如果想重新开启错误报告功能,只需再打开这个文件,把 enabled 设置为 1 即可。
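如果你更喜欢用一条命令完成这个修改(这只是一个等价的示意,假设文件中确实存在 enabled=1 这一行):

    sudo sed -i 's/enabled=1/enabled=0/' /etc/default/apport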
|
||||
|
||||
#### 对你有效吗? ####
|
||||
我希望这篇教程能够帮助你修复Ubuntu 14.04和Ubuntu 15.04中检测到系统程序错误的问题。如果这个小窍门帮你摆脱了这个烦人的问题,请让我知道。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[XLCYun](https://github.com/XLCYun)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/how-to-solve-sorry-ubuntu-12-04-has-experienced-an-internal-error/
|
||||
[2]:https://launchpad.net/
|
@ -0,0 +1,315 @@
|
||||
如何在Ubuntu 14.04/15.04上配置Chef(服务端/客户端)
|
||||
================================================================================
|
||||
Chef 是一款面向 IT 专业人员的配置管理和自动化工具,无论设备位于本地还是云端,它都可以进行配置和管理。它可以用于加速应用部署,并协调多个系统管理员和开发人员的工作,覆盖成百上千台服务器和程序,支撑庞大的客户群。Chef 最有用之处在于让基础设施变成代码。一旦你掌握了 Chef,就可以获得一流的网络 IT 支持,自动化管理你的云端设备或终端用户设备。
|
||||
|
||||
下面是我们将要在本篇中要设置和配置Chef的主要组件。
|
||||
|
||||

|
||||
|
||||
### 安装Chef的要求和版本 ###
|
||||
|
||||
我们将在下面的基础环境下设置Chef配置管理系统。
|
||||
|
||||
注:表格
|
||||
<table width="701" style="height: 284px;">
|
||||
<tbody>
|
||||
<tr>
|
||||
<td width="660" colspan="2"><strong>管理和配置工具:Chef</strong></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="220"><strong>基础操作系统</strong></td>
|
||||
<td width="492">Ubuntu 14.04.1 LTS (x86_64)</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="220"><strong>Chef Server</strong></td>
|
||||
<td width="492">Version 12.1.0</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="220"><strong>Chef Manage</strong></td>
|
||||
<td width="492">Version 1.17.0</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="220"><strong>Chef Development Kit</strong></td>
|
||||
<td width="492">Version 0.6.2</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="220"><strong>内存和CPU</strong></td>
|
||||
<td width="492">4 GB , 2.0+2.0 GHZ</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
### Chef服务端的安装和配置 ###
|
||||
|
||||
Chef服务端是核心组件,它存储配置以及其他和工作站交互的配置数据。让我们在他们的官网下载最新的安装文件。
|
||||
|
||||
我使用下面的命令来下载和安装它。
|
||||
|
||||
**1) 下载Chef服务端**
|
||||
|
||||
root@ubuntu-14-chef:/tmp# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.1.0-1_amd64.deb
|
||||
|
||||
**2) 安装Chef服务端**
|
||||
|
||||
root@ubuntu-14-chef:/tmp# dpkg -i chef-server-core_12.1.0-1_amd64.deb
|
||||
|
||||
**3) 重新配置Chef服务端**
|
||||
|
||||
现在运行下面的命令来启动所有的 chef 服务端服务。这一步也许会花费一些时间,因为系统由许多协同工作的不同服务组成,它们需要逐一启动才能构成一个正常运作的整体。
|
||||
|
||||
root@ubuntu-14-chef:/tmp# chef-server-ctl reconfigure
|
||||
|
||||
chef服务端启动命令'chef-server-ctl reconfigure'需要运行两次,这样就会在安装后看到这样的输出。
|
||||
|
||||
Chef Client finished, 342/350 resources updated in 113.71139964 seconds
|
||||
opscode Reconfigured!
|
||||
|
||||
**4) 重启系统 **
|
||||
|
||||
安装完成后重启系统,以使系统达到最佳工作状态;否则我们可能会在创建用户时看到下面的 SSL 连接错误。
|
||||
|
||||
ERROR: Errno::ECONNRESET: Connection reset by peer - SSL_connect
|
||||
|
||||
**5) 创建新的管理员**
|
||||
|
||||
运行下面的命令来创建一个新的用它自己的配置的管理员账户。创建过程中,用户的RSA私钥会自动生成并需要被保存到一个安全的地方。--file选项会保存RSA私钥到指定的路径下。
|
||||
|
||||
root@ubuntu-14-chef:/tmp# chef-server-ctl user-create kashi kashi kashi kashif.fareedi@gmail.com kashi123 --filename /root/kashi.pem
|
||||
|
||||
### Chef服务端的管理设置 ###
|
||||
|
||||
Chef Manage 是一个针对企业级 Chef 用户的管理控制台,它提供了可视化的 Web 用户界面,可以管理节点、数据包(data bag)、角色、环境、cookbook 以及基于角色的访问控制(RBAC)。
|
||||
|
||||
**1) 下载Chef Manage**
|
||||
|
||||
从官网复制链接并下载 chef manage 的安装包。
|
||||
|
||||
root@ubuntu-14-chef:~# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/opscode-manage_1.17.0-1_amd64.deb
|
||||
|
||||
**2) 安装Chef Manage**
|
||||
|
||||
使用下面的命令在root的家目录下安装它。
|
||||
|
||||
root@ubuntu-14-chef:~# chef-server-ctl install opscode-manage --path /root
|
||||
|
||||
**3) 重启Chef Manage和服务端**
|
||||
|
||||
安装完成后我们需要运行下面的命令来重启chef manage和服务端。
|
||||
|
||||
root@ubuntu-14-chef:~# opscode-manage-ctl reconfigure
|
||||
root@ubuntu-14-chef:~# chef-server-ctl reconfigure
|
||||
|
||||
### Chef Manage网页控制台 ###
|
||||
|
||||
我们可以通过 localhost 或者 FQDN 访问网页控制台,并用之前创建的管理员账户登录。
|
||||
|
||||

|
||||
|
||||
**1) 用 Chef Manage 创建新的组织**
|
||||
|
||||
你或许会被要求创建新的组织,或者接受其他组织的邀请。如下所示,使用简称和全名来创建一个新的组织。
|
||||
|
||||

|
||||
|
||||
**2) 用命令行创建新的组织**
|
||||
|
||||
我们同样也可以运行下面的命令来创建新的组织。
|
||||
|
||||
root@ubuntu-14-chef:~# chef-server-ctl org-create linux Linoxide Linux Org. --association_user kashi --filename linux.pem
|
||||
|
||||
### 设置工作站 ###
|
||||
|
||||
我们已经完成了 chef 服务端的安装,现在可以开始创建 recipe、cookbook、属性,以及任何我们想对 Chef 做的其他修改了。
|
||||
|
||||
**1) 在 Chef 服务端上创建新的用户和组织**
|
||||
|
||||
为了设置工作站,我们用命令行创建一个新的用户和组织。
|
||||
|
||||
root@ubuntu-14-chef:~# chef-server-ctl user-create bloger Bloger Kashif bloger.kashif@gmail.com bloger123 --filename bloger.pem
|
||||
|
||||
root@ubuntu-14-chef:~# chef-server-ctl org-create blogs Linoxide Blogs Inc. --association_user bloger --filename blogs.pem
|
||||
|
||||
**2) 下载工作站入门套件**
|
||||
|
||||
在工作站上,从 chef manage 的网页控制台下载并保存入门套件(starter kit),它将用于与 Chef 服务端协同工作。
|
||||
|
||||

|
||||
|
||||
**3) 点击“Proceed”下载套件**
|
||||
|
||||

|
||||
|
||||
### 对于工作站的Chef开发套件设置 ###
|
||||
|
||||
Chef 开发套件是一款包含开发 chef 所需全部工具的软件包。它把由 Chef 社区开发的各种工具和 Chef 客户端捆绑在了一起。
|
||||
|
||||
**1) 下载 Chef DK**
|
||||
|
||||
我们可以从官网链接下载 chef 开发套件,选择对应的操作系统即可。
|
||||
|
||||

|
||||
|
||||
复制链接并用wget下载
|
||||
|
||||
root@ubuntu-15-WKS:~# wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.6.2-1_amd64.deb
|
||||
|
||||
**2) 安装 Chef 开发套件**
|
||||
|
||||
使用dpkg命令安装开发套件
|
||||
|
||||
root@ubuntu-15-WKS:~# dpkg -i chefdk_0.6.2-1_amd64.deb
|
||||
|
||||
**3) Chef DK 验证**
|
||||
|
||||
使用下面的命令验证客户端是否已经正确安装。
|
||||
|
||||
root@ubuntu-15-WKS:~# chef verify
|
||||
|
||||
----------
|
||||
|
||||
Running verification for component 'berkshelf'
|
||||
Running verification for component 'test-kitchen'
|
||||
Running verification for component 'chef-client'
|
||||
Running verification for component 'chef-dk'
|
||||
Running verification for component 'chefspec'
|
||||
Running verification for component 'rubocop'
|
||||
Running verification for component 'fauxhai'
|
||||
Running verification for component 'knife-spork'
|
||||
Running verification for component 'kitchen-vagrant'
|
||||
Running verification for component 'package installation'
|
||||
Running verification for component 'openssl'
|
||||
..............
|
||||
---------------------------------------------
|
||||
Verification of component 'rubocop' succeeded.
|
||||
Verification of component 'knife-spork' succeeded.
|
||||
Verification of component 'openssl' succeeded.
|
||||
Verification of component 'berkshelf' succeeded.
|
||||
Verification of component 'chef-dk' succeeded.
|
||||
Verification of component 'fauxhai' succeeded.
|
||||
Verification of component 'test-kitchen' succeeded.
|
||||
Verification of component 'kitchen-vagrant' succeeded.
|
||||
Verification of component 'chef-client' succeeded.
|
||||
Verification of component 'chefspec' succeeded.
|
||||
Verification of component 'package installation' succeeded.
|
||||
|
||||
**连接Chef服务端**
|
||||
|
||||
我们将创建 ~/.chef 目录,并把 chef 服务端上用户和组织的 pem 文件复制到这个目录下。
|
||||
|
||||
root@ubuntu-14-chef:~# scp bloger.pem blogs.pem kashi.pem linux.pem root@172.25.10.172:/.chef/
|
||||
|
||||
----------
|
||||
|
||||
root@172.25.10.172's password:
|
||||
bloger.pem 100% 1674 1.6KB/s 00:00
|
||||
blogs.pem 100% 1674 1.6KB/s 00:00
|
||||
kashi.pem 100% 1678 1.6KB/s 00:00
|
||||
linux.pem 100% 1678 1.6KB/s 00:00
|
||||
|
||||
**编辑配置来管理 chef 环境**
|
||||
|
||||
现在使用下面的内容创建"~/.chef/knife.rb"。
|
||||
|
||||
root@ubuntu-15-WKS:/.chef# vim knife.rb
|
||||
current_dir = File.dirname(__FILE__)
|
||||
|
||||
log_level :info
|
||||
log_location STDOUT
|
||||
node_name "kashi"
|
||||
client_key "#{current_dir}/kashi.pem"
|
||||
validation_client_name "kashi-linux"
|
||||
validation_key "#{current_dir}/linux.pem"
|
||||
chef_server_url "https://172.25.10.173/organizations/linux"
|
||||
cache_type 'BasicFile'
|
||||
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
|
||||
cookbook_path ["#{current_dir}/../cookbooks"]
|
||||
|
||||
创建knife.rb中指定的“~/cookbooks”文件夹。
|
||||
|
||||
root@ubuntu-15-WKS:/# mkdir cookbooks
|
||||
|
||||
**测试Knife配置**
|
||||
|
||||
运行“knife user list”和“knife client list”来验证knife是否在工作。
|
||||
|
||||
root@ubuntu-15-WKS:/.chef# knife user list
|
||||
|
||||
第一次运行的时候可能会得到下面的错误,这是因为工作站上还没有chef服务端的SSL证书。
|
||||
|
||||
ERROR: SSL Validation failure connecting to host: 172.25.10.173 - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
|
||||
ERROR: Could not establish a secure connection to the server.
|
||||
Use `knife ssl check` to troubleshoot your SSL configuration.
|
||||
If your Chef Server uses a self-signed certificate, you can use
|
||||
`knife ssl fetch` to make knife trust the server's certificates.
|
||||
|
||||
要从上面的错误中恢复,运行下面的命令来获取 SSL 证书,然后重新运行 knife user list 和 knife client list,这时候应该就可以了。
|
||||
|
||||
root@ubuntu-15-WKS:/.chef# knife ssl fetch
|
||||
WARNING: Certificates from 172.25.10.173 will be fetched and placed in your trusted_cert
|
||||
directory (/.chef/trusted_certs).
|
||||
|
||||
knife没有办法验证这些是有效的证书。你应该在下载时候验证这些证书的真实性。
|
||||
|
||||
ubuntu-14-chef.test.com 的证书已经添加到 /.chef/trusted_certs/ubuntu-14-chef_test_com.crt 下面。
|
||||
|
||||
在上面的命令取得ssl证书后,接着运行下面的命令。
|
||||
|
||||
root@ubuntu-15-WKS:/.chef# knife client list
|
||||
kashi-linux
|
||||
|
||||
### 与chef服务端交互的新的节点 ###
|
||||
|
||||
节点是执行所有基础设施自动化的 chef 客户端。在配置完 chef-server 和 knife 工作站之后,是时候添加新的节点,让它与 chef-server 交互了。
|
||||
|
||||
我们使用下面的命令来添加新的节点与chef服务端工作。
|
||||
|
||||
root@ubuntu-15-WKS:~# knife bootstrap 172.25.10.170 --ssh-user root --ssh-password kashi123 --node-name mydns
|
||||
|
||||
----------
|
||||
|
||||
Doing old-style registration with the validation key at /.chef/linux.pem...
|
||||
Delete your validation key in order to use your user credentials instead
|
||||
|
||||
Connecting to 172.25.10.170
|
||||
172.25.10.170 Installing Chef Client...
|
||||
172.25.10.170 --2015-07-04 22:21:16-- https://www.opscode.com/chef/install.sh
|
||||
172.25.10.170 Resolving www.opscode.com (www.opscode.com)... 184.106.28.91
|
||||
172.25.10.170 Connecting to www.opscode.com (www.opscode.com)|184.106.28.91|:443... connected.
|
||||
172.25.10.170 HTTP request sent, awaiting response... 200 OK
|
||||
172.25.10.170 Length: 18736 (18K) [application/x-sh]
|
||||
172.25.10.170 Saving to: ‘STDOUT’
|
||||
172.25.10.170
|
||||
100%[======================================>] 18,736 --.-K/s in 0s
|
||||
172.25.10.170
|
||||
172.25.10.170 2015-07-04 22:21:17 (200 MB/s) - written to stdout [18736/18736]
|
||||
172.25.10.170
|
||||
172.25.10.170 Downloading Chef 12 for ubuntu...
|
||||
172.25.10.170 downloading https://www.opscode.com/chef/metadata?v=12&prerelease=false&nightlies=false&p=ubuntu&pv=14.04&m=x86_64
|
||||
172.25.10.170 to file /tmp/install.sh.26024/metadata.txt
|
||||
172.25.10.170 trying wget...
|
||||
|
||||
之后我们可以在 knife 的节点列表中看到新创建的节点,同时也会为新节点创建一个新的客户端。
|
||||
|
||||
root@ubuntu-15-WKS:~# knife node list
|
||||
mydns
|
||||
|
||||
类似地,只要提供相应的 SSH 凭证,我们就可以用上面的 knife 命令把多个节点添加到 chef 基础设施中。
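
作为补充,还可以用 knife 的 node show 子命令查看新节点的详细信息,以确认注册成功(节点名 mydns 来自上文,输出内容因环境而异):

    root@ubuntu-15-WKS:~# knife node show mydns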
|
||||
|
||||
### 总结 ###
|
||||
|
||||
本篇我们学习了 chef 配置管理工具,并通过安装和配置逐一了解了它的各个组件。希望你在安装和配置 Chef 服务端及其工作站和客户端节点的过程中获得乐趣。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/ubuntu-how-to/install-configure-chef-ubuntu-14-04-15-04/
|
||||
|
||||
作者:[Kashif Siddique][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/kashifs/
|
@ -0,0 +1,187 @@
|
||||
如何在 Fedora 22 上配置 Proftpd 服务器
|
||||
================================================================================
|
||||
在本文中,我们将了解如何在运行 Fedora 22 的电脑或服务器上使用 Proftpd 架设 FTP 服务器。[ProFTPD][1] 是一款免费的、基于 GPL 授权开源的 FTP 服务器软件,是 Linux 上的主流 FTP 服务器。它的主要设计目标是具备许多高级功能,并为用户提供丰富的配置选项,以便轻松地定制。它的许多配置选项至今还没有被其他一些 FTP 服务器软件集成。它最初是作为 wu-ftpd 服务器的一个更安全、更容易配置的替代品而开发的。FTP 服务器是这样一个软件,用户可以通过 FTP 客户端从安装了它的远端服务器上传或下载文件和目录。下面是一些 ProFTPD 服务器的主要功能,更详细的资料可以访问 [http://www.proftpd.org/features.html][2]。
|
||||
|
||||
- 每个目录都包含 ".ftpaccess" 文件用于访问控制,类似 Apache 的 ".htaccess"
|
||||
- 支持多个虚拟 FTP 服务器以及多用户登录和匿名 FTP 服务。
|
||||
- 可以作为独立进程启动服务或者通过 inetd/xinetd 启动
|
||||
- 它的文件/目录属性、属主和权限采用类 UNIX 方式。
|
||||
- 它可以独立运行,保护系统避免 root 访问可能带来的损坏。
|
||||
- 模块化的设计让它可以轻松扩展其他模块,比如 LDAP 服务器,SSL/TLS 加密,RADIUS 支持,等等。
|
||||
- ProFTPD 服务器还支持 IPv6.
|
||||
|
||||
下面是如何在运行 Fedora 22 操作系统的计算机上使用 ProFTPD 架设 FTP 服务器的一些简单步骤。
|
||||
|
||||
### 1. 安装 ProFTPD ###
|
||||
|
||||
首先,我们将在运行 Fedora 22 的机器上安装 Proftpd 软件。因为 yum 包管理器已经被抛弃了,我们将使用最新最好的包管理器 dnf。DNF 很容易使用,是 Fedora 22 上采用的非常人性化的包管理器。我们将用它来安装 proftpd 软件。这需要在终端或控制台里用 sudo 模式运行下面的命令。
|
||||
|
||||
$ sudo dnf -y install proftpd proftpd-utils
|
||||
|
||||
### 2. 配置 ProFTPD ###
|
||||
|
||||
现在,我们将修改软件的一些配置。要配置它,我们需要用文本编辑器编辑 /etc/proftpd.conf 文件。**/etc/proftpd.conf** 文件是 ProFTPD 软件的主要配置文件,所以,这个文件的任何改动都会影响到 FTP 服务器。下面是我们在初始步骤里做出的改动。
|
||||
|
||||
$ sudo vi /etc/proftpd.conf
|
||||
|
||||
之后,在用文本编辑器打开这个文件后,我们会想改下 ServerName 以及 ServerAdmin,分别填入自己的域名和 email 地址。下面是我们改的。
|
||||
|
||||
ServerName "ftp.linoxide.com"
|
||||
ServerAdmin arun@linoxide.com
|
||||
|
||||
在这之后,我们将把下面的设定加到配置文件里,这样可以让服务器将访问和授权记录到相应的日志文件里。
|
||||
|
||||
ExtendedLog /var/log/proftpd/access.log WRITE,READ default
|
||||
ExtendedLog /var/log/proftpd/auth.log AUTH auth
|
||||
|
||||

|
||||
|
||||
### 3. 添加 FTP 用户 ###
|
||||
|
||||
在设定好了基本的配置文件后,我们很自然地希望为指定目录添加 FTP 用户。目前用来登录的用户是 FTP 服务自动生成的,可以用来登录到 FTP 服务器。但是,在这篇教程里,我们将创建一个以 ftp 服务器上指定目录为主目录的新用户。
|
||||
|
||||
下面,我们将建立一个名字是 ftpgroup 的新用户组。
|
||||
|
||||
$ sudo groupadd ftpgroup
|
||||
|
||||
然后,我们将以目录 /ftp-dir/ 作为主目录增加一个新用户 arunftp 并加入这个组中。
|
||||
|
||||
$ sudo useradd -G ftpgroup arunftp -s /sbin/nologin -d /ftp-dir/
|
||||
|
||||
在创建好用户并加入用户组后,我们将为用户 arunftp 设置一个密码。
|
||||
|
||||
$ sudo passwd arunftp
|
||||
|
||||
Changing password for user arunftp.
|
||||
New password:
|
||||
Retype new password:
|
||||
passwd: all authentication tokens updated successfully.
|
||||
|
||||
现在,我们将通过设置下面的 SELinux 布尔值,允许这个 ftp 用户对主目录进行读写访问。
|
||||
|
||||
$ sudo setsebool -P allow_ftpd_full_access=1
|
||||
$ sudo setsebool -P ftp_home_dir=1
|
||||
|
||||
然后,我们会设定不允许其他用户移动或重命名这个目录以及里面的内容。
|
||||
|
||||
$ sudo chmod -R 1777 /ftp-dir/
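
顺带一提,权限 1777 开头的 1 是粘滞位(sticky bit),它保证只有文件属主才能删除或重命名目录里的内容。可以用下面的命令确认权限是否生效(输出末尾的 t 即粘滞位,输出内容仅为示意):

    $ ls -ld /ftp-dir/
    drwxrwxrwt. 3 root root 4096 Jul 15 10:30 /ftp-dir/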
|
||||
|
||||
### 4. 打开 TLS 支持 ###
|
||||
|
||||
目前 FTP 默认并不加密传输,任何人都可以通过监听网卡来读取 FTP 传输的数据。所以,我们将为自己的服务器打开 TLS 加密支持。这样的话,需要编辑 /etc/proftpd.conf 配置文件。在这之前,我们先备份一下当前的配置文件,以保证改出问题后还可以恢复。
|
||||
|
||||
$ sudo cp /etc/proftpd.conf /etc/proftpd.conf.bak
|
||||
|
||||
然后,我们可以用自己喜欢的文本编辑器修改配置文件。
|
||||
|
||||
$ sudo vi /etc/proftpd.conf
|
||||
|
||||
然后,把下面几行附加到我们在第 2 步中所增加内容的后面。
|
||||
|
||||
TLSEngine on
|
||||
TLSRequired on
|
||||
TLSProtocol SSLv23
|
||||
TLSLog /var/log/proftpd/tls.log
|
||||
TLSRSACertificateFile /etc/pki/tls/certs/proftpd.pem
|
||||
TLSRSACertificateKeyFile /etc/pki/tls/certs/proftpd.pem
|
||||
|
||||

|
||||
|
||||
完成上面的设定后,保存退出。
|
||||
|
||||
然后,我们需要生成 SSL 凭证 proftpd.pem 并放到 **/etc/pki/tls/certs/** 目录里。这样的话,首先需要在 Fedora 22 上安装 openssl。
|
||||
|
||||
$ sudo dnf install openssl
|
||||
|
||||
然后,可以通过执行下面的命令生成 SSL 凭证。
|
||||
|
||||
$ sudo openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/pki/tls/certs/proftpd.pem -out /etc/pki/tls/certs/proftpd.pem
|
||||
|
||||
系统会询问一些将写入凭证里的基本信息。在填完资料后,就会生成一个 2048 位的 RSA 私钥。
|
||||
|
||||
Generating a 2048 bit RSA private key
|
||||
...................+++
|
||||
...................+++
|
||||
writing new private key to '/etc/pki/tls/certs/proftpd.pem'
|
||||
-----
|
||||
You are about to be asked to enter information that will be incorporated
|
||||
into your certificate request.
|
||||
What you are about to enter is what is called a Distinguished Name or a DN.
|
||||
There are quite a few fields but you can leave some blank
|
||||
For some fields there will be a default value,
|
||||
If you enter '.', the field will be left blank.
|
||||
-----
|
||||
Country Name (2 letter code) [XX]:NP
|
||||
State or Province Name (full name) []:Narayani
|
||||
Locality Name (eg, city) [Default City]:Bharatpur
|
||||
Organization Name (eg, company) [Default Company Ltd]:Linoxide
|
||||
Organizational Unit Name (eg, section) []:Linux Freedom
|
||||
Common Name (eg, your name or your server's hostname) []:ftp.linoxide.com
|
||||
Email Address []:arun@linoxide.com
|
||||
|
||||
在这之后,我们要改变所生成凭证文件的权限以增加安全性。
|
||||
|
||||
$ sudo chmod 600 /etc/pki/tls/certs/proftpd.pem
|
||||
|
||||
### 5. 允许 FTP 通过 Firewall ###
|
||||
|
||||
现在,需要允许 ftp 端口,一般默认被防火墙阻止了。就是说,需要允许 ftp 端口能通过防火墙访问。
|
||||
|
||||
如果 **打开了 TLS/SSL 加密**,执行下面的命令。
|
||||
|
||||
$ sudo firewall-cmd --add-port=1024-65534/tcp
|
||||
$ sudo firewall-cmd --add-port=1024-65534/tcp --permanent
|
||||
|
||||
如果 **没有打开 TLS/SSL 加密**,执行下面的命令。
|
||||
|
||||
$ sudo firewall-cmd --permanent --zone=public --add-service=ftp
|
||||
|
||||
success
|
||||
|
||||
然后,重新加载防火墙设定。
|
||||
|
||||
$ sudo firewall-cmd --reload
|
||||
|
||||
success
|
||||
|
||||
### 6. 启动并激活 ProFTPD ###
|
||||
|
||||
全部设定好后,最后就是启动 ProFTPD 并试一下。可以运行下面的命令来启动 proftpd ftp 守护程序。
|
||||
|
||||
$ sudo systemctl start proftpd.service
|
||||
|
||||
然后,我们可以设定开机启动。
|
||||
|
||||
$ sudo systemctl enable proftpd.service
|
||||
|
||||
Created symlink from /etc/systemd/system/multi-user.target.wants/proftpd.service to /usr/lib/systemd/system/proftpd.service.
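
随后,可以用 systemctl 检查服务是否已经正常运行(示意):

    $ sudo systemctl status proftpd.service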
|
||||
|
||||
### 7. 登录到 FTP 服务器 ###
|
||||
|
||||
现在,如果都是按照本教程设置好的,我们一定可以连接到 ftp 服务器并使用以上设置的信息登录上去。在这里,我们将配置一下 FTP 客户端 filezilla,使用 **服务器的 IP 或 URL **作为主机名,协议选择 **FTP**,用户名填入 **arunftp**,密码是在上面第 3 步中设定的密码。如果你按照第 4 步中的方式打开了 TLS 支持,还需要在加密类型中选择 **显式要求基于 TLS 的 FTP**,如果没有打开,也不想使用 TLS 加密,那么加密类型选择 **简单 FTP**。
|
||||
|
||||

|
||||
|
||||
要做上述设定,需要打开菜单里的文件,点击站点管理器,然后点击新建站点,再按上面的方式设置。
|
||||
|
||||

|
||||
|
||||
随后系统会要求允许 SSL 凭证,点确定。之后,就可以从我们的 FTP 服务器上传下载文件和文件夹了。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
最后,我们成功地在 Fedora 22 机器上安装并配置好了 Proftpd FTP 服务器。Proftpd 是一个超级强大,能高度配置和扩展的 FTP 守护软件。上面的教程展示了如何配置一个采用 TLS 加密的安全 FTP 服务器。强烈建议设置 FTP 服务器支持 TLS 加密,因为它允许使用 SSL 凭证加密数据传输和登录。本文中,我们也没有配置 FTP 的匿名访问,因为一般受保护的 FTP 系统不建议这样做。 FTP 访问让人们的上传和下载变得非常简单也更高效。我们还可以改变用户端口增加安全性。好吧,如果你有任何疑问,建议,反馈,请在下面评论区留言,这样我们就能够改善并更新文章内容。谢谢!玩的开心 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[zpl1025](https://github.com/zpl1025)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:http://www.proftpd.org/
|
||||
[2]:http://www.proftpd.org/features.html
|
@ -0,0 +1,193 @@
|
||||
每个Linux人应知应会的12个有用的PHP命令行用法
|
||||
================================================================================
|
||||
在我上一篇文章“[在Linux命令行中使用并执行PHP代码][1]”中,我同时着重讨论了直接在Linux命令行中运行PHP代码以及在Linux终端中执行PHP脚本文件。
|
||||
|
||||

|
||||
|
||||
在Linux命令行运行PHP代码——第二部分
|
||||
|
||||
本文旨在让你了解一些相当不错的Linux终端中的PHP用法特性。
|
||||
|
||||
让我们先在PHP交互shell中来对`php.ini`设置进行一些配置吧。
|
||||
|
||||
**6. 设置PHP命令行提示符**
|
||||
|
||||
要设置PHP命令行提示,你需要在Linux终端中使用下面的php -a(启用PHP交互模式)命令开启一个PHP交互shell。
|
||||
|
||||
$ php -a
|
||||
|
||||
然后,设置任何东西(比如说Hi Tecmint ::)作为PHP交互shell的命令提示符,操作如下:
|
||||
|
||||
php > #cli.prompt=Hi Tecmint ::
|
||||
|
||||

|
||||
启用PHP交互Shell
|
||||
|
||||
同时,你也可以设置当前时间作为你的命令行提示符,操作如下:
|
||||
|
||||
php > #cli.prompt=`echo date('H:m:s');` >
|
||||
|
||||
22:15:43 >
|
||||
|
||||
**7. 每次输出一屏**
|
||||
|
||||
在上一篇文章中,我们已经在很多地方通过管道把原始命令的输出传给 ‘less‘ 命令,以便在内容无法一屏显示时,每次只输出一屏。但其实我们可以设置 cli.pager 的值为 less,让输出自动分屏,操作如下:
|
||||
|
||||
$ php -a
|
||||
php > #cli.pager=less
|
||||
|
||||

|
||||
固定PHP屏幕输出
|
||||
|
||||
这样,当你下次运行一个输出内容太过庞大而无法在一屏内显示的命令(比如调试器 `phpinfo();`)时,它就会自动产生适合你当前屏幕的分屏输出。
|
||||
|
||||
php > phpinfo();
|
||||
|
||||

|
||||
PHP信息输出
|
||||
|
||||
**8. 建议和TAB补全**
|
||||
|
||||
PHP shell足够智能,它可以显示给你建议和进行TAB补全,你可以通过TAB键来使用该功能。如果对于你想要用TAB补全的字符串而言有多个选项,那么你需要使用两次TAB键来完成,其它情况则使用一次即可。
|
||||
|
||||
如果有超过一个的可能性,请使用两次TAB键。
|
||||
|
||||
php > ZIP [TAB] [TAB]
|
||||
|
||||
如果只有一个可能性,只要使用一次TAB键。
|
||||
|
||||
php > #cli.pager [TAB]
|
||||
|
||||
你可以一直按 TAB 键来获得选项,直到选项值满足要求。所有的行为都将记录到 `~/.php_history` 文件。
|
||||
|
||||
要检查你的PHP交互shell活动日志,你可以执行:
|
||||
|
||||
$ cat ~/.php_history | less
|
||||
|
||||

|
||||
检查PHP交互Shell日志
|
||||
|
||||
**9. 你可以在PHP交互shell中使用颜色,你所需要知道的仅仅是颜色代码。**
|
||||
|
||||
使用echo来打印各种颜色的输出结果,看我信手拈来:
|
||||
|
||||
php > echo "color_code1 TEXT second_color_code";
|
||||
|
||||
一个更能说明问题的例子是:
|
||||
|
||||
php > echo "\033[0;31m Hi Tecmint \x1B[0m";
|
||||
|
||||

|
||||
在PHP Shell中启用彩色
|
||||
|
||||
到目前为止,我们已经看到,按回车键意味着执行命令,然而PHP Shell中各个命令结尾的分号是必须的。
|
||||
|
||||
**10. 在 PHP shell 中打印路径的最后一部分(基本名称)**
|
||||
|
||||
PHP shell 中的 basename() 函数会从给定的文件或目录路径中返回路径的最后一部分(基本名称)。
|
||||
|
||||
basename()样例#1和#2。
|
||||
|
||||
php > echo basename("/var/www/html/wp/wp-content/plugins");
|
||||
php > echo basename("www.tecmint.com/contact-us.html");
|
||||
|
||||
上述两个样例都将输出:
|
||||
|
||||
plugins
|
||||
contact-us.html
|
||||
|
||||

|
||||
在PHP中打印基本名称
|
||||
|
||||
**11. 你可以使用PHP交互shell在你的桌面创建文件(比如说test1.txt),就像下面这么简单**
|
||||
|
||||
php > touch("/home/avi/Desktop/test1.txt");
|
||||
|
||||
我们已经见识了PHP交互shell在数学运算中有多优秀,这里还有更多一些例子会令你吃惊。
|
||||
|
||||
**12. 使用PHP交互shell打印比如像tecmint.com这样的字符串的长度**
|
||||
|
||||
strlen函数用于获取指定字符串的长度。
|
||||
|
||||
php > echo strlen("tecmint.com");
|
||||
|
||||

|
||||
在PHP中打印字符串长度
|
||||
|
||||
**13. PHP交互shell可以对数组排序,是的,你没听错**
|
||||
|
||||
声明变量a,并将其值设置为array(7,9,2,5,10)。
|
||||
|
||||
php > $a=array(7,9,2,5,10);
|
||||
|
||||
对数组中的数字进行排序。
|
||||
|
||||
php > sort($a);
|
||||
|
||||
以排序后的顺序打印数组中的数字,同时打印序号,第一个为[0]。
|
||||
|
||||
php > print_r($a);
|
||||
Array
|
||||
(
|
||||
[0] => 2
|
||||
[1] => 5
|
||||
[2] => 7
|
||||
[3] => 9
|
||||
[4] => 10
|
||||
)
|
||||
|
||||

|
||||
在PHP中对数组排序
|
||||
|
||||
**14. 在PHP交互Shell中获取Pi的值**
|
||||
|
||||
php > echo pi();
|
||||
|
||||
3.1415926535898
|
||||
|
||||
**15. 打印某个数(比如 150)的平方根**
|
||||
|
||||
php > echo sqrt(150);
|
||||
|
||||
12.247448713916
|
||||
|
||||
**16. 从0-10的范围内回显一个随机数**
|
||||
|
||||
php > echo rand(0, 10);
|
||||
|
||||

|
||||
在PHP中获取随机数
|
||||
|
||||
**17. 获取某个指定字符串的 md5sum 和 sha1sum。例如,我们在 PHP Shell 中计算某个字符串(比如说 avi)的 md5sum 和 sha1sum,并与 bash shell 生成的 md5sum 和 sha1sum 结果进行交叉核对。**
|
||||
|
||||
php > echo md5('avi');
|
||||
3fca379b3f0e322b7b7967bfcfb948ad
|
||||
|
||||
php > echo sha1('avi');
|
||||
8f920f22884d6fea9df883843c4a8095a2e5ac6f
|
||||
|
||||
----------
|
||||
|
||||
$ echo -n avi | md5sum
|
||||
3fca379b3f0e322b7b7967bfcfb948ad -
|
||||
|
||||
$ echo -n avi | sha1sum
|
||||
8f920f22884d6fea9df883843c4a8095a2e5ac6f -
|
||||
|
||||

|
||||
在PHP中检查md5sum和sha1sum
|
||||
|
||||
这里只是PHP Shell中所能获取的功能和PHP Shell的交互特性的惊鸿一瞥,这些就是到现在为止我所讨论的一切。保持和tecmint的连线,在评论中为我们提供你有价值的反馈吧。为我们点赞并分享,帮助我们扩散哦。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/execute-php-codes-functions-in-linux-commandline/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/run-php-codes-from-linux-commandline/
|
@ -0,0 +1,59 @@
|
||||
Ubuntu 14.04中修复“update information is outdated”错误
|
||||
================================================================================
|
||||

|
||||
|
||||
看到Ubuntu 14.04的顶部面板上那个显示下面这个错误的红色三角形了吗?
|
||||
|
||||
> 更新信息过时。该错误可能是由网络问题,或者某个仓库不再可用而造成的。请通过从指示器菜单中选择‘显示更新’来手动更新,然后查看是否存在有失败的仓库。
|
||||
>
|
||||
|
||||
它看起来像这样:
|
||||
|
||||

|
||||
|
||||
这里的粉红色感叹号标记就是原来的红色三角形,因为我使用了[最佳的Ubuntu图标主题][1]之一,Numix。让我们回到该错误中,这是一个常见的更新问题,你也许时不时地会碰到它。现在,你或许想知道的是,到底是什么导致了这个更新错误的出现。
|
||||
|
||||
### 引起‘update information is outdated’错误的原因 ###
|
||||
|
||||
导致该错误的原因在其自身的错误描述中就讲得相当明白,它告诉你“这可能是由网络问题或者某个不再可用的仓库导致的”。所以,要么是你系统中的某个仓库或 PPA 不再受到支持,要么就是你正碰到类似的网络问题。
|
||||
|
||||
虽然错误本身就讲得很明白,但它给出的建议操作“请通过从指示器菜单选择‘显示更新’来手动更新以查看失败的仓库”却并不能很好地解决问题。如果你点击显示更新,你所能看见的仅仅是系统已经更新。
|
||||
|
||||

|
||||
|
||||
很奇怪,不是吗?我们怎样才能找出是什么出错了,哪里出错了,以及为什么出错呢?
|
||||
|
||||
### 修复‘update information is outdated’错误 ###
|
||||
|
||||
这里讨论的‘解决方案’可能对 Ubuntu 的这些版本有用:Ubuntu 14.04、12.04 或 15.04。你所要做的仅仅是打开终端(Ctrl+Alt+T),然后使用下面的命令:
|
||||
|
||||
sudo apt-get update
|
||||
|
||||
等待命令结束,然后查看其结果。这里插个快速提示,你可以[在终端中添加通知][2],这样当一个耗时很长的命令结束执行时就会通知你。在该命令的最后几行中,可以看到你的系统正面临什么样的错误。是的,你肯定会看到一个错误。
|
||||
|
||||
在我这里,我看到了有名的[GPG error: The following could not be verified][3]错误。很明显,[在Ubuntu 15.04中安装声破天][4]有点问题。
|
||||
|
||||

|
||||
|
||||
很可能你看到的不是像我一样的GPG错误,那样的话,我建议你读一读我写的这篇文章[修复Ubuntu中的各种常见更新错误][5]。
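
排查这类仓库错误时,先列出系统当前启用的软件源往往也很有帮助(通用示意,具体文件名因系统而异):

    $ grep -r "^deb" /etc/apt/sources.list /etc/apt/sources.list.d/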
|
||||
|
||||
我知道有不少人,尤其是初学者,很是讨厌命令行,但是如果你正在使用Linux,你就无可避免会使用到终端。此外,那东西并没你想象的那么可怕。试试吧,你会很快上手的。
|
||||
|
||||
我希望这个快速提示对于你修复Ubuntu中的“update information is outdated”错误有帮助。如果你有任何问题或建议,请不吝提出,我们将无任欢迎。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/fix-update-information-outdated-ubuntu/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/best-icon-themes-ubuntu-1404/
|
||||
[2]:http://itsfoss.com/notification-terminal-command-completion-ubuntu/
|
||||
[3]:http://itsfoss.com/solve-gpg-error-signatures-verified-ubuntu/
|
||||
[4]:http://itsfoss.com/install-spotify-ubuntu-1504/
|
||||
[5]:http://itsfoss.com/fix-update-errors-ubuntu-1404/
|
@ -0,0 +1,112 @@
|
||||
如何在 Ubuntu 中管理开机启动应用
|
||||
================================================================================
|
||||

|
||||
|
||||
你曾经考虑过 **在 Ubuntu 中管理开机启动应用** 吗?如果在开机时,你的 Ubuntu 系统启动得非常缓慢,那么你就需要考虑这个问题了。
|
||||
|
||||
每当你开机进入一个操作系统,一系列的应用将会自动启动。这些应用被称为‘开机启动应用’ 或‘开机启动程序’。随着时间的推移,当你在系统中安装了足够多的应用时,你将发现有太多的‘开机启动应用’在开机时自动地启动了,它们吃掉了很多的系统资源,并将你的系统拖慢。这可能会让你感觉卡顿,我想这种情况并不是你想要的。
|
||||
|
||||
让 Ubuntu 变得更快的方法之一是对这些开机启动应用进行控制。 Ubuntu 为你提供了一个 GUI 工具来让你发现这些开机启动应用,然后完全禁止或延迟它们的启动,这样就可以不让每个应用在开机时同时运行。
|
||||
|
||||
在这篇文章中,我们将看到 **在 Ubuntu 中,如何控制开机启动应用,如何让一个应用在开机时启动以及如何发现隐藏的开机启动应用。**这里提供的指导对所有的 Ubuntu 版本均适用,例如 Ubuntu 12.04, Ubuntu 14.04 和 Ubuntu 15.04。
|
||||
|
||||
### 在 Ubuntu 中管理开机启动应用 ###
|
||||
|
||||
默认情况下, Ubuntu 提供了一个`开机启动应用工具`来供你使用,你不必再进行安装。只需到 Unity 面板中就可以查找到该工具。
|
||||
|
||||

|
||||
|
||||
点击它来启动。下面是我的`开机启动应用`的样子:
|
||||
|
||||

|
||||
|
||||
### 在 Ubuntu 中移除开机启动应用 ###
|
||||
|
||||
现在由你来发现哪个程序对你用处不大,对我来说,是 [Caribou][1] 这个软件,它是一个屏幕键盘程序,在开机时它并没有什么用处,所以我想将它移除出开机启动程序的列表中。
|
||||
|
||||
你可以选择阻止某个程序在开机时启动,而在开机启动程序列表中保留该选项以便以后再进行激活。点击 `关闭`按钮来保留你的偏好设置。
|
||||
|
||||

|
||||
|
||||
要将一个程序从开机启动程序列表中移除,选中对应的条目,然后点击窗口右边面板中的`移除`按钮即可。
|
||||
|
||||

|
||||
|
||||
需要提醒的是,这并不会将该程序卸载掉,只是让该程序不再在每次开机时自动启动。你可以对所有你不喜欢的程序做类似的处理。
|
||||
|
||||
### 让开机启动程序延迟启动 ###
|
||||
|
||||
若你并不想在开机启动列表中移除掉程序,但同时又忧虑着系统性能的问题,那么你所需要做的是给程序添加一个延迟启动命令,这样所有的程序就不会在开机时同时启动。
|
||||
|
||||
选择一个程序然后点击 `编辑` 按钮。
|
||||
|
||||

|
||||
|
||||
这将展示出运行这个特定的程序所需的命令。
|
||||
|
||||

|
||||
|
||||
所有你需要做的就是在程序运行命令前添加一句 `sleep XX;` 。这样就为实际运行该命令来启动的对应程序添加了 `XX` 秒的延迟。例如,假如我想让 Variety [壁纸管理应用][2] 延迟启动 2 分钟,我就需要像下面那样在命令前添加 `sleep 120;`
|
||||
|
||||

|
||||
|
||||
保存并关闭设置。你将在下一次启动时看到效果。
|
||||
|
||||
### 增添一个程序到开机启动应用列表中 ###
|
||||
|
||||
这对于新手来说需要一点技巧。我们知道,在 Linux 的底层都是一些命令,在上一节我们看到这些开机启动程序只是在每次开机时运行一些命令。假如你想在开机启动列表中添加一个新的程序,你需要知道运行该应用所需的命令。
|
||||
|
||||
#### 第 1 步:如何查找运行一个程序所需的命令? ####
|
||||
|
||||
首先来到 Unity Dash 面板然后搜索 `Main Menu`:
|
||||
|
||||

|
||||
|
||||
这将展示出在各种类别下你安装的所有程序。在 Ubuntu 的低版本中,你将看到一个相似的菜单,通过它来选择并运行应用。
|
||||
|
||||

|
||||
|
||||
在各种类别下找到你找寻的应用,然后点击 `属性` 按钮来查看运行该应用所需的命令。例如,我想在开机时运行 `Transmission Torrent 客户端`。
|
||||
|
||||

|
||||
|
||||
这就会向我给出运行 `Transmission` 应用的命令:
|
||||
|
||||

|
||||
|
||||
接着,我将用相同的信息来将 `Transmission` 应用添加到开机启动列表中。
|
||||
|
||||
#### 第 2 步: 添加一个程序到开机启动列表中 ####
|
||||
|
||||
再次来到开机启动应用工具中并点击 `添加` 按钮。这将让你输入一个应用的名称,对应的命令和相关的描述。其中命令最为重要,你可以使用任何你想用的名称和描述。使用上一步得到的命令然后点击 `添加` 按钮。
|
||||
|
||||

|
||||
|
||||
就这样,你将在下一次开机时看到这个程序会自动运行。这就是在 Ubuntu 中你能做的关于开机启动应用的所有事情。
|
||||
|
||||
到现在为止,我们已经讨论在开机时可见的应用,但仍有更多的服务,守护进程和程序并不在`开机启动应用工具`中可见。下一节中,我们将看到如何在 Ubuntu 中查看这些隐藏的开机启动程序。
|
||||
|
||||
### 在 Ubuntu 中查看隐藏的开机启动程序 ###
|
||||
|
||||
要查看在开机时哪些服务在运行,可以打开一个终端并使用下面的命令:
|
||||
|
||||
sudo sed -i 's/NoDisplay=true/NoDisplay=false/g' /etc/xdg/autostart/*.desktop
|
||||
|
||||
上面的命令是一个快速的查找替换命令,它将所有自动启动程序里的 `NoDisplay=true` 改为 `NoDisplay=false`。一旦执行了这个命令后,再次打开`开机启动应用工具`,现在你应该可以看到更多的程序:
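
如果之后想把这些条目重新隐藏,把替换方向反过来执行即可(注意:这会把所有条目都设为隐藏,包括原本就是 NoDisplay=false 的条目,仅为示意):

    sudo sed -i 's/NoDisplay=false/NoDisplay=true/g' /etc/xdg/autostart/*.desktop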
|
||||
|
||||

|
||||
|
||||
你可以像先前我们讨论的那样管理这些开机启动应用。我希望这篇教程可以帮助你在 Ubuntu 中控制开机启动程序。任何的问题或建议总是欢迎的。
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/manage-startup-applications-ubuntu/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:https://wiki.gnome.org/action/show/Projects/Caribou?action=show&redirect=Caribou
|
||||
[2]:http://itsfoss.com/applications-manage-wallpapers-ubuntu/
|
@ -0,0 +1,174 @@
|
||||
Linux命令行中使用和执行PHP代码——第一部分
|
||||
================================================================================
|
||||
PHP 是一个开源的服务器端脚本语言,最初这三个字母代表的是“Personal Home Page”,而现在则代表的是“PHP:Hypertext Preprocessor”,它是个递归首字母缩写。它是一个跨平台脚本语言,深受 C、C++ 和 Java 的影响。
|
||||

|
||||
Linux命令行中运行PHP代码——第一部分
|
||||
|
||||
PHP 的语法和 C、Java 以及 Perl 等编程语言十分相似,同时又有一些自己的特性。它眼下大约正被 2.6 亿个网站所使用,当前最新的稳定版本是 PHP 5.6.10。
|
||||
|
||||
PHP 是可以嵌入 HTML 的脚本语言,它便于开发人员快速写出动态生成的页面。PHP 主要用于服务器端(而 Javascript 则用于客户端),通过 HTTP 生成动态网页。然而,当你知道可以在 Linux 终端中不需要网页浏览器就能执行 PHP 时,你或许会大为惊讶。
|
||||
|
||||
本文将阐述PHP脚本语言的命令行方面。
|
||||
|
||||
**1. 在安装完PHP和Apache2后,我们需要安装PHP命令行解释器。**
|
||||
|
||||
# apt-get install php5-cli [Debian 及类似系统]
|
||||
# yum install php-cli [CentOS 及类似系统]
|
||||
|
||||
接下来我们通常要做的是,在‘/var/www/html‘(这是 Apache2 在大多数发行版中的工作目录)这个位置创建一个内容为 ‘<?php phpinfo(); ?>‘,名为 ‘infophp.php‘ 的文件来测试(是否安装正确),执行以下命令即可。
|
||||
|
||||
# echo '<?php phpinfo(); ?>' > /var/www/html/infophp.php
|
||||
|
||||
然后,将浏览器指向http://127.0.0.1/infophp.php, 这将会在网络浏览器中打开该文件。
|
||||
|
||||

|
||||
检查PHP信息
|
||||
|
||||
不需要任何浏览器,在Linux终端中也可以获得相同的结果。在Linux命令行中执行‘/var/www/html/infophp.php‘,如:
|
||||
|
||||
# php -f /var/www/html/infophp.php
|
||||
|
||||

|
||||
从命令行检查PHP信息
|
||||
|
||||
由于输出结果太大,我们可以通过管道将上述输出结果输送给 ‘less‘ 命令,这样就可以一次输出一屏了,命令如下:
|
||||
|
||||
# php -f /var/www/html/infophp.php | less
|
||||
|
||||

|
||||
检查所有PHP信息
|
||||
|
||||
这里,‘-f‘ 选项会解析并执行其后所跟的文件。
|
||||
|
||||
**2. 我们可以直接在Linux命令行使用`phpinfo()`这个十分有价值的调试工具而不需要从文件来调用,只需执行以下命令:**
|
||||
|
||||
# php -r 'phpinfo();'
|
||||
|
||||

|
||||
PHP调试工具
|
||||
|
||||
这里,‘-r‘ 选项会让 PHP 代码在 Linux 终端中直接执行,而不需要 `<?php ... ?>` 标记。
|
||||
|
||||
**3. 以交互模式运行PHP并做一些数学运算。这里,‘-a‘ 选项用于以交互模式运行PHP。**
|
||||
|
||||
# php -a
|
||||
|
||||
Interactive shell
|
||||
|
||||
php > echo 2+3;
|
||||
5
|
||||
php > echo 9-6;
|
||||
3
|
||||
php > echo 5*4;
|
||||
20
|
||||
php > echo 12/3;
|
||||
4
|
||||
php > echo 12/5;
|
||||
2.4
|
||||
php > echo 2+3-1;
|
||||
4
|
||||
php > echo 2+3-1*3;
|
||||
2
|
||||
php > exit
|
||||
|
||||
输入 ‘exit‘ 或者按下 ‘ctrl+c‘ 来关闭PHP交互模式。
|
||||
|
||||

|
||||
启用PHP交互模式
|
||||
|
||||
**4. 你可以仅仅将PHP脚本作为shell脚本来运行。首先,在你当前的工作目录中创建一个PHP样例脚本。**
|
||||
|
||||
# echo -e '#!/usr/bin/php\n<?php phpinfo(); ?>' > phpscript.php
|
||||
|
||||
注意,我们在该PHP脚本的第一行使用#!/usr/bin/php,就像在shell脚本中那样(/bin/bash)。第一行的#!/usr/bin/php告诉Linux命令行将该脚本文件解析到PHP解释器中。
|
||||
|
||||
其次,让该脚本可执行:
|
||||
|
||||
# chmod 755 phpscript.php
|
||||
|
||||
接着来运行它,
|
||||
|
||||
# ./phpscript.php
|
||||
|
||||
**5. 你可以完全靠自己通过交互shell来创建简单函数,这你一定会被惊到了。下面是循序渐进的指南。**
|
||||
|
||||
开启PHP交互模式。
|
||||
|
||||
# php -a
|
||||
|
||||
创建一个函数,将它命名为 addition。同时,声明两个变量 $a 和 $b。
|
||||
|
||||
php > function addition ($a, $b)
|
||||
|
||||
使用花括号来在其间为该函数定义规则。
|
||||
|
||||
php > {
|
||||
|
||||
定义规则。这里,该规则讲的是将这两个变量相加。
|
||||
|
||||
php { echo $a + $b;
|
||||
|
||||
所有规则定义完毕,通过闭合花括号来封装规则。
|
||||
|
||||
php {}
|
||||
|
||||
测试函数,添加数字4和3,命令如下:
|
||||
|
||||
php > var_dump (addition(4,3));
|
||||
|
||||
#### 样例输出 ####
|
||||
|
||||
7NULL
|
||||
|
||||
你可以运行以下代码来执行该函数,你可以测试不同的值,你想来多少次都行。将里头的 a 和 b 替换成你自己的值。
|
||||
|
||||
php > var_dump (addition(a,b));
|
||||
|
||||
----------
|
||||
|
||||
php > var_dump (addition(9,3.3));
|
||||
|
||||
#### 样例输出 ####
|
||||
|
||||
12.3NULL
|
||||
|
||||

|
||||
创建PHP函数
|
||||
|
||||
你可以一直运行该函数,直至退出交互模式(ctrl+z)。同时,你也应该注意到了,上面输出结果中返回的数据类型为 NULL。这个问题可以通过要求 php 交互 shell用 return 代替 echo 返回结果来修复。
|
||||
|
||||
只需要将上面的函数中的 ‘echo‘ 语句用 ‘return‘ 来替换
|
||||
|
||||
替换
|
||||
|
||||
php { echo $a + $b;
|
||||
|
||||
为
|
||||
|
||||
php { return $a + $b;
|
||||
|
||||
剩下的东西和原理仍然一样。
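
把前后的片段连起来,一个使用 return 的完整交互会话大致如下(int(7) 为预期输出,仅作示意):

    php > function addition ($a, $b)
    php > {
    php { return $a + $b;
    php { }
    php > var_dump (addition(4,3));
    int(7)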
|
||||
|
||||
这里是一个样例,在该样例的输出结果中返回了正确的数据类型。
|
||||
|
||||

|
||||
PHP函数
|
||||
|
||||
永远都记住,用户定义的函数不会从一个shell会话保留到下一个shell会话,因此,一旦你退出交互shell,它就会丢失了。
|
||||
|
||||
希望你喜欢此次会话。保持连线,你会获得更多此类文章。保持关注,保持健康。请在下面的评论中为我们提供有价值的反馈。点赞并分享,帮助我们扩散。
|
||||
|
||||
还请阅读: [12个Linux终端中有用的的PHP命令行用法——第二部分][1]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/run-php-codes-from-linux-commandline/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/execute-php-codes-functions-in-linux-commandline/
|
@ -0,0 +1,66 @@
|
||||
如何在 Docker 中通过 Kitematic 交互式执行任务
|
||||
================================================================================
|
||||
在本篇文章中,我们会学习如何在 Windows 操作系统上安装 Kitematic 以及部署一个 Hello World Nginx Web 服务器。Kitematic 是一个自由开源软件,它拥有现代化的界面设计,允许我们在 Docker 中交互式地执行任务。Kitematic 设计非常漂亮、界面也很不错。我们可以简单快速地开箱搭建我们的容器而不需要输入命令,可以在图形用户界面中通过简单的点击从而在容器上部署我们的应用。Kitematic 集成了 Docker Hub,允许我们搜索、拉取任何需要的镜像,并在上面部署应用。它同时也能很好地切换到命令行用户接口模式。目前,它包括了自动映射端口、可视化更改环境变量、配置卷、精简日志以及其它功能。
|
||||
|
||||
下面是在 Windows 上安装 Kitematic 并部署 Hello World Nginx Web 服务器的 3 个简单步骤。
|
||||
|
||||
### 1. 下载 Kitematic ###
|
||||
|
||||
首先,我们需要从 github 仓库 [https://github.com/kitematic/kitematic/releases][1] 中下载 Windows 操作系统可用的最新的 Kitematic 发行版。我们用下载器或者 web 浏览器下载了它的可执行 EXE 文件。下载完成后,我们需要双击可执行应用文件。
|
||||
|
||||

|
||||
|
||||
双击应用文件之后,会问我们一个安全问题,我们只需要点击 OK 按钮,如下图所示。
|
||||
|
||||

|
||||
|
||||
### 2. 安装 Kitematic ###
|
||||
|
||||
下载好可执行安装程序之后,我们现在打算在我们的 Windows 操作系统上安装 Kitematic。安装程序现在会开始下载并安装运行 Kitematic 需要的依赖,包括 Virtual Box 和 Docker。如果已经在系统上安装了 Virtual Box,它会把它升级到最新版本。安装程序会在几分钟内完成,但取决于你网络和系统的速度。如果你还没有安装 Virtual Box,它会问你是否安装 Virtual Box 网络驱动。建议安装它,因为它有助于 Virtual Box 的网络。
|
||||
|
||||

|
||||
|
||||
需要的依赖 Docker 和 Virtual Box 安装完成并运行后,会让我们登录到 Docker Hub。如果我们还没有账户或者还不想登录,可以点击 **SKIP FOR NOW** 继续后面的步骤。
|
||||
|
||||

|
||||
|
||||
如果你还没有账户,你可以在应用程序上点击注册链接并在 Docker Hub 上创建账户。
|
||||
|
||||
完成之后,就会出现 Kitematic 应用程序的第一个界面。正如下面看到的这样。我们可以搜索可用的 docker 镜像。
|
||||
|
||||

|
||||
|
||||
### 3. 部署 Nginx Hello World 容器 ###
|
||||
|
||||
现在,成功安装完 Kitematic 之后,我们打算部署容器。要运行一个容器,我们只需要在搜索区域中搜索镜像。然后点击 Create 按钮部署容器。在这篇教程中,我们会部署一个小的包含了 Hello World 主页的 Nginx Web 服务器。为此,我们在搜索区域中搜索 Hello World Nginx。看到了容器信息之后,我们点击 Create 来部署容器。
|
||||
|
||||

|
||||
|
||||
镜像下载完成之后,它会自动部署。我们可以查看 Kitematic 部署容器的命令日志。我们也可以在 Kitematic 界面上预览 web 页面。现在,我们通过点击预览在 web 浏览器中查看我们的 Hello World 页面。
|
||||
|
||||

|
||||
|
||||
如果我们想切换到命令行接口并用它管理 docker,这里有个称为 Docker CLI 的按钮,它会打开一个 PowerShell,在里面我们可以执行 docker 命令。
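
打开 Docker CLI 之后,就可以在 PowerShell 里运行常规的 docker 命令了,比如(示意):

    docker ps        # 列出正在运行的容器
    docker images    # 列出本地已有的镜像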
|
||||
|
||||

|
||||
|
||||
现在,如果我们想配置我们的容器并执行类似更改容器名称、设置环境变量、指定端口、配置容器存储以及其它高级功能的任务,我们可以在容器设置页面做到这些。
|
||||
|
||||

|
||||
|
||||
### 总结 ###
|
||||
|
||||
我们终于成功在 Windows 操作系统上安装了 Kitematic 并部署了一个 Hello World Nginx 服务器。推荐总是下载安装 Kitematic 最新的发行版,因为会增加很多新的高级功能。由于 Docker 运行在 64 位平台,当前 Kitematic 也是为 64 位操作系统构建,它只能在 Windows 7 以及更高版本上运行。在这篇教程中,我们部署了一个 Nginx Web 服务器,类似地,只需在 Kitematic 中简单点击,就能通过镜像部署任何 docker 容器。Kitematic 已经有可用的 Mac OS X 和 Windows 版本,Linux 版本也在开发中,很快就会发布。如果你有任何疑问、建议或者反馈,请在下面的评论框中写下来,以便我们更好地改进或更新我们的内容。非常感谢!Enjoy :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/interactively-docker-kitematic/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://github.com/kitematic/kitematic/releases
|
@ -0,0 +1,320 @@
|
||||
|
||||
RHCSA系列: 复习基础命令及系统文档 – 第一部分
|
||||
================================================================================
|
||||
RHCSA(红帽认证系统管理员)是由 RedHat 公司举行的认证考试,该公司给商业公司提供开源操作系统和软件,除此之外,还为这些企业和机构提供支持、训练以及咨询服务。
|
||||
|
||||

|
||||
|
||||
RHCSA 考试准备指南
|
||||
|
||||
RHCSA 考试(考试编号 EX200)通过后可以获取由 Red Hat 公司颁发的证书。RHCSA 考试是 RHCT(红帽认证技师)的升级版,而且 RHCSA 必须在新版本的 Red Hat Enterprise Linux(红帽企业版 Linux)下完成。RHCT 和 RHCSA 的主要变化就是 RHCT 基于 RHEL5,而 RHCSA 基于 RHEL6 或者 7,这两个认证的等级也有所不同。
|
||||
|
||||
红帽认证管理员最基础的能力,是能够在红帽企业版 Linux 环境下执行如下系统管理任务:
|
||||
|
||||
- 理解并会使用命令管理文件、目录、命令行以及系统/软件包的文档
|
||||
- 使用不同的启动等级启动系统,认证和控制进程,启动或停止虚拟机
|
||||
- 使用分区和逻辑卷管理本地存储
|
||||
- 创建并且配置本地文件系统和网络文件系统,设置他们的属性(许可、加密、访问控制表)
|
||||
- 部署、配置、并且控制系统,包括安装、升级和卸载软件
|
||||
- 管理系统用户和组,独立使用集中制的LDAP目录权限控制
|
||||
- 确保系统安全,包括基础的防火墙规则和SELinux配置
|
||||
|
||||
|
||||
关于你所在国家的考试注册费用,请参考 [RHCSA 认证页面][1]。
|
||||
|
||||
|
||||
在这个共 15 章的 RHCSA(红帽认证管理员)备考系列中,我们将覆盖以下关于红帽企业版 Linux 第七版的最新内容:
|
||||
|
||||
- Part 1: 回顾必会的命令和系统文档
|
||||
- Part 2: 在RHEL7如何展示文件和管理目录
|
||||
- Part 3: 在RHEL7中如何管理用户和组
|
||||
- Part 4: 使用nano和vim管理命令/ 使用grep和正则表达式分析文本
|
||||
- Part 5: RHEL7的进程管理:启动,关机,以及其他介于二者之间的.
|
||||
- Part 6: 使用 'Parted'和'SSM'来管理和加密系统存储
|
||||
- Part 7: 使用ACLs(访问控制表)并挂载 Samba /NFS 文件分享
|
||||
- Part 8: 加固SSH,设置主机名并开启网络服务
|
||||
- Part 9: 安装、配置和加固一个Web,FTP服务器
|
||||
- Part 10: Yum 包管理方式,使用Cron进行自动任务管理以及监控系统日志
|
||||
- Part 11: 使用FirewallD和Iptables设置防火墙,控制网络流量
|
||||
- Part 12: 使用Kickstart 自动安装RHEL 7
|
||||
- Part 13: RHEL7:什么是SeLinux?他的原理是什么?
|
||||
- Part 14: 在RHEL7 中使用基于LDAP的权限控制
|
||||
- Part 15: RHEL7的虚拟化:KVM 和虚拟机管理
|
||||
|
||||
在第一章,我们将讲解如何在终端或者 Shell 窗口中输入和运行正确的命令,以及如何查找、查阅并使用系统文档。
|
||||
|
||||

|
||||
|
||||
RHCSA:回顾必会的Linux命令 - 第一部分
|
||||
|
||||
#### 前提: ####
|
||||
|
||||
至少你要熟悉如下命令
|
||||
|
||||
- [cd command][2] (改变目录)
|
||||
- [ls command][3] (列举文件)
|
||||
- [cp command][4] (复制文件)
|
||||
- [mv command][5] (移动或重命名文件)
|
||||
- [touch command][6] (创建一个新的文件或更新已存在文件的时间表)
|
||||
- rm command (删除文件)
|
||||
- mkdir command (创建目录)
|
||||
|
||||
在这篇文章中,你将学到这些命令更多的正确用法和特殊用法。
|
||||
|
||||
虽然没有严格的要求,但为了配合文中对常用 Linux 命令和方法的讨论,你最好安装一套 RHEL7 来实际尝试文章中提到的命令,这会让你学习起来更省力。
|
||||
|
||||
- [红帽企业版Linux(RHEL)7 安装指南][7]
|
||||
|
||||
### 使用Shell进行交互 ###
|
||||
如果我们以文本模式登录 Linux,就无法在默认的 shell 中使用鼠标;另一方面,如果我们以图形化界面登录,则要通过启动一个终端来开启 shell。无论哪种方式,我们都会看到命令提示符,然后就可以开始输入并执行命令了(按下 Enter 时,命令就会被执行)。
|
||||
|
||||
|
||||
命令由两个部分组成:
|
||||
|
||||
- 命令本身
|
||||
- 参数
|
||||
|
||||
某些参数称为选项(通常以一个连字符开头),它们会改变命令作用于其他参数的方式。
|
||||
|
||||
type 命令可以帮助我们识别某个特定的命令是 shell 内建的,还是由单独的软件包提供的。这一区别决定了我们应该到哪里查找该命令的更多信息:对 shell 内建的命令,要看 shell 的 man 手册页;对其他命令,则要看它自己软件包的 man 手册页。
|
||||
|
||||

|
||||
|
||||
检查Shell的内建命令
|
||||
|
||||
在上面的例子中, cd 和 type 是shell内建的命令,top和 less 是由其他的二进制文件提供的(在这种情况下,type将返回命令的位置)
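
图中演示的检查方式大致等价于下面的命令(输出因系统而异,仅为示意):

    # type cd
    cd is a shell builtin
    # type top
    top is /usr/bin/top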
|
||||
其他的内建命令
|
||||
|
||||
- [echo command][8]: 展示字符串
|
||||
- [pwd command][9]: 输出当前的工作目录
|
||||
|
||||

|
||||
|
||||
更多内建命令
|
||||
|
||||
**exec 命令**
|
||||
|
||||
exec 会运行我们指定的外部程序。请注意,通常只需输入我们想要运行的程序名即可,不过 exec 命令有一个特性:它不会创建新的进程,而是用新程序替换当前的 shell 进程,这一点可以通过下面的命令来验证:
|
||||
|
||||
# ps -ef | grep [shell 进程的PID]
|
||||
|
||||
当新的进程退出时,Shell 也随之退出。运行 exec top,然后按下 q 键退出 top,你会注意到 shell 会话也随之结束,如下面的屏幕录像所展示的那样:
|
||||
|
||||
注:youtube视频
|
||||
<iframe width="640" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/f02w4WT73LE"></iframe>
|
||||
|
||||
**export 命令**
|
||||
|
||||
将环境变量导出给之后执行的命令。
|
||||
|
||||
**history 命令**
|
||||
|
||||
展示最近执行过的历史命令。在感叹号后面输入命令编号,可以再次执行这个命令。如果我们需要搜索历史列表中的命令,可以按下 Ctrl + r 并输入与该命令相关的前几个字符。
|
||||
当我们看到命令自动补全后,可以根据当前的需要来编辑它:
|
||||
|
||||
注:youtube视频
|
||||
<iframe width="640" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/69vafdSMfU4"></iframe>
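
举个例子,下面的用法可以再次执行历史列表中的某条命令(编号 125 仅为示意):

    # history | grep yum     # 查找包含 yum 的历史命令
    # !125                   # 再次执行编号为 125 的那条命令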
|
||||
|
||||
命令列表会保存在一个叫 .bash_history 的文件里。history 命令是一个非常有用的用于减少输入次数的工具,特别是进行命令行编辑的时候。默认情况下,bash 保留最后输入的 500 个命令,不过可以通过修改 HISTSIZE 环境变量来增加:
|
||||
|
||||
|
||||

|
||||
|
||||
Linux history 命令
|
||||
|
||||
但上述变化在我们下一次启动后不会保留。为了让 HISTSIZE 变量的修改持久生效,我们需要手工编辑 ~/.bashrc 文件:
|
||||
|
||||
# 关于 history 长度的设置,请参看 bash(1) 文档中的 HISTSIZE 和 HISTFILESIZE
|
||||
HISTSIZE=1000
|
||||
|
||||
**重要**: 我们的更改不会生效,除非我们重新启动 shell 会话。
|
||||
|
||||
**alias 命令**
|
||||
不带参数或使用 -p 参数时,将会以“名称=值”的形式输出别名列表;当提供了参数时,就会用给定的值为命令定义一个别名。
|
||||
|
||||
使用 alias,我们可以创建自己的命令,或者修改现有的命令,并带上需要的参数。举个例子,假设我们想把 ls 别名为 ls --color=auto,这样就可以用不同颜色输出文件、目录、链接等:
|
||||
|
||||
|
||||
# alias ls='ls --color=auto'
|
||||
|
||||

|
||||
|
||||
Linux 别名命令
|
||||
|
||||
**Note**: 你可以给你的新命令起任何名字,并且在单引号内包含任意多条命令,但要用分号把它们分隔开,如下所示:
|
||||
|
||||
# alias myNewCommand='cd /usr/bin; ls; cd; clear'
|
||||
|
||||
**exit 命令**
|
||||
|
||||
exit 和 logout 命令都可以退出 shell。exit 命令可以退出任何 shell,而 logout 只能注销登录 shell,即以文本模式登录时自动启动的那种 shell。
|
||||
|
||||
如果我们对某个程序有疑问,可以查看它的 man 手册页,使用 man 命令即可调出。此外,还有一些重要文件(inittab、fstab、hosts 等等)、库函数、shell、设备及其他功能的手册页。
|
||||
|
||||
#### 举例: ####
|
||||
|
||||
- man uname (输出系统信息,如内核名称、处理器、操作系统类型、架构等).
|
||||
- man inittab (初始化守护设置).
|
||||
|
||||
另一个重要的信息来源是 info 命令,它常被用来读取 info 文件,这些文件往往比 man 手册页提供更多的信息。可以通过“info 关键词”的形式调出某个命令的信息:
|
||||
|
||||
# info ls
|
||||
# info cut
|
||||
|
||||
|
||||
另外,/usr/share/doc 文件夹下包含大量的子目录,里面可以找到大量文档,它们是文本文件或其他易读的格式。
|
||||
确保你会使用这三种方法去查找命令的信息,并重点关注每个命令文档中介绍的详细语法。
|
||||
|
||||
**使用expand命令把tabs转换为空格**
|
||||
|
||||
有时候文本文档包含 tab,但有些程序无法很好地处理;或者我们只是单纯希望把 tab 转换成空格。这就是 expand 工具(由 GNU coreutils 提供)出现的原因。
|
||||
|
||||
举个例子,给定一个文件 NumbersList.txt,让我们使用 expand 处理它,将每个 tab 转换为一个空格,并输出到标准输出。
|
||||
|
||||
# expand --tabs=1 NumbersList.txt
|
||||
|
||||

|
||||
|
||||
Linux expand 命令
|
||||
|
||||
unexpand命令可以实现相反的功能(将空格转为tab)
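
例如,下面的命令可以把行首的空格还原为 tab(加上 -a 选项则会转换所有的空格串;仅为示意):

    # unexpand --tabs=1 NumbersList.txt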
|
||||
|
||||
**使用head输出文件首行及使用tail输出文件尾行**
|
||||
|
||||
通常情况下,head命令后跟着文件名时,将会输出该文件的前十行,我们可以通过 -n 参数来自定义具体的行数。
|
||||
|
||||
# head -n3 /etc/passwd
|
||||
# tail -n3 /etc/passwd
|
||||
|
||||

|
||||
|
||||
Linux 的 head 和 tail 命令
|
||||
|
||||
tail 最有意思的一个特性,是能够持续显示文件中新增的内容(tail -f my.log 会一行一行地输出,就像我们在实时观察文件一样),这在监控一个持续增长的日志文件时非常有用。
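
例如,下面的命令可以实时监控系统日志的新增内容(日志路径因发行版而异,仅为示意):

    # tail -f /var/log/messages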
|
||||
|
||||
更多: [Manage Files Effectively using head and tail Commands][10]
|
||||
|
||||
**使用paste合并文本文件**
|
||||
paste 命令将多个文件逐行合并,默认以 tab 分隔各文件的对应行,也可以用 -d 自定义分隔符(下面的例子就使用等号作为分隔符):
|
||||
# paste -d= file1 file2
|
||||
|
||||

|
||||
|
||||
在 Linux 中合并文件
|
||||
|
||||
**使用split命令将文件分块**
|
||||
|
||||
split 命令常用于把一个文件切割成两个或多个以我们自定义的前缀命名的文件。切割依据可以是文件大小、区块数或行数,生成的文件会带有数字或字母的后缀。在下面的例子中,我们将把 bash.pdf 切割成大小为 50KB 的块(-b 50KB),并使用数字后缀(-d):
|
||||
|
||||
# split -b 50KB -d bash.pdf bash_
|
||||
|
||||

|
||||
|
||||
在Linux下划分文件
|
||||
|
||||
你可以使用如下命令来合并这些文件,生成源文件:
|
||||
|
||||
# cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf
|
||||
|
||||
**使用tr命令改变字符**
|
||||
|
||||
tr 命令多用于逐个转换(改变)字符,或者转换某个字符范围。和之前一样,下面的实例将使用同样的文件 file2,我们将实现:
|
||||
|
||||
- 小写字母 o 变成大写
|
||||
- 所有的小写字母都变成大写字母
|
||||
|
||||
# cat file2 | tr o O
|
||||
# cat file2 | tr [a-z] [A-Z]
|
||||
|
||||

|
||||
|
||||
在Linux中替换文字
|
||||
|
||||
**使用uniq和sort检查或删除重复的文字**
|
||||
|
||||
uniq 命令可以帮我们找出或删除文件中重复的行,默认输出到 stdout。我们应当注意,uniq 只能找出相邻的相同行,所以 uniq 往往和 sort 一起使用(sort 一般用于对文本文件的内容进行排序)。
|
||||
|
||||
|
||||
默认情况下,sort 以第一个字段(使用空格区分)为排序关键字。想要指定其他字段作为关键字,我们需要使用 -k 参数。请注意下面的例子中 sort 和 uniq 是如何配合输出我们想要的字段的:
|
||||
|
||||
# cat file3
|
||||
# sort file3 | uniq
|
||||
# sort -k2 file3 | uniq
|
||||
# sort -k3 file3 | uniq
|
||||
|
||||

|
||||
|
||||
删除文件中重复的行
|
||||
|
||||
**从文件中提取文本的命令**
|
||||
|
||||
cut 命令可以基于字节数(-b)、字符数(-c)或字段(-f)从 stdin 或文件中提取部分内容,并将结果输出到标准输出。
|
||||
|
||||
当我们基于字段提取时,默认的分隔符是 tab,不过你可以通过 -d 参数来自定义分隔符。
|
||||
|
||||
# cut -d: -f1,3 /etc/passwd # 这个例子提取了第一个和第三个字段
|
||||
# cut -d: -f2-4 /etc/passwd # 这个例子提取了第二到第四个字段
|
||||
|
||||

|
||||
|
||||
从文件中提取文本
|
||||
|
||||
|
||||
注意,上方的两个输出的结果是十分简洁的。
|
||||
|
||||
**使用fmt命令重新格式化文件**
|
||||
|
||||
fmt 被用于“清理”内容杂乱或行过长的文件,以及缩进混乱的文件。格式化后每行默认不超过 75 个字符宽,你可以通过 -w(width,宽度)参数把行宽设置为特定的数值。
|
||||
|
||||
举个例子,让我们看看用 fmt 把 /etc/passwd 格式化为 100 个字符宽时会发生什么。再来一次,输出变得更加整洁了。
|
||||
|
||||
# fmt -w100 /etc/passwd
|
||||
|
||||

|
||||
|
||||
Linux文件重新格式化
|
||||
|
||||
**使用pr命令格式化打印内容**
|
||||
|
||||
pr 命令会将一个或多个文件分页并按列展示,以便打印。换句话说,使用 pr 格式化一个文件,能让它打印出来时更好看。举个例子,下面这个命令:
|
||||
|
||||
# ls -a /etc | pr -n --columns=3 -h "Files in /etc"
|
||||
|
||||
以一个友好的排版方式(3列)输出/etc下的文件,自定义了页眉(通过 -h 选项实现),行号(-n)
|
||||
|
||||

|
||||
|
||||
Linux的文件格式
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在这篇文章中,我们讨论了如何在 Shell 或终端中以正确的语法输入和执行命令,并解释了如何查找、查阅和使用系统文档。就像你看到的这样简单,这就是你成为 RHCSA 的第一大步。
|
||||
|
||||
如果你经常使用其他一些能有效帮你完成日常工作的基础命令,并乐于分享它们,请在下方留言。也欢迎提出问题。我们期待您的回复。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[xiqingongzi](https://github.com/xiqingongzi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:https://www.redhat.com/en/services/certification/rhcsa
|
||||
[2]:http://www.tecmint.com/cd-command-in-linux/
|
||||
[3]:http://www.tecmint.com/ls-command-interview-questions/
|
||||
[4]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/
|
||||
[5]:http://www.tecmint.com/rename-multiple-files-in-linux/
|
||||
[6]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/
|
||||
[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/
|
||||
[8]:http://www.tecmint.com/echo-command-in-linux/
|
||||
[9]:http://www.tecmint.com/pwd-command-examples/
|
||||
[10]:http://www.tecmint.com/view-contents-of-file-in-linux/
|