Merge pull request #38 from LCTT/master

update 2015/12/22
This commit is contained in:
Yu Ye 2015-12-22 17:58:19 +08:00
commit a9b2d9d582
63 changed files with 5140 additions and 2859 deletions


@ -0,0 +1,509 @@
来自 Linux 基金会内部的《Linux 工作站安全检查清单》
================================================================================
### 目标受众
这是一套 Linux 基金会为其系统管理员提供的推荐规范。
这个文档用于帮助那些使用 Linux 工作站来访问和管理项目的 IT 设施的系统管理员团队。
如果你的系统管理员是远程员工,你也许可以使用这套指导方针,确保系统管理员的系统满足核心安全需求,以降低你的 IT 平台成为攻击目标的风险。
即使你的系统管理员不是远程员工,很多人也会在工作环境中通过便携笔记本完成工作,或者在家中设置系统以便在业余时间或紧急时刻访问工作平台。不论发生何种情况,你都能调整这个推荐规范来适应你的环境。
### 限制
但是,这并不是一个详细的“工作站加固”文档;可以说,这是一组力求避免大多数明显安全错误、又不会导致太多不便的推荐基线baseline。你也许阅读这个文档后会认为它的方法太偏执而另一些人也许会认为这仅仅是一些肤浅的研究。安全就像在高速公路上开车 -- 任何比你开得慢的都是傻瓜,而任何比你开得快的都是疯子。这个指南仅仅是一系列核心安全规则,既不详细,也不能替代经验、警惕和常识。
我们分享这篇文档是为了[将开源协作的优势带到 IT 策略文献资料中][18]。如果你发现它有用,我们希望你可以将它用到你自己团体中,并分享你的改进,对它的完善做出你的贡献。
### 结构
每一节都分为两个部分:
- 核对适合你项目的需求
- 形式不定的提示内容,解释了为什么这么做
#### 严重级别
在清单的每一个项目都包括严重级别,我们希望这些能帮助指导你的决定:
- **关键ESSENTIAL** 该项应该明确地列入优先考虑的清单。如果不采取措施,你的平台安全将面临高风险。
- **中等NICE** 该项将改善你的安全形势,但是会影响到你的工作环境的流程,可能会要求养成新的习惯,改掉旧的习惯。
- **低等PARANOID** 这类项目留给那些能明显完善平台安全、但可能需要大量调整你与操作系统交互方式的措施。
记住,这些只是参考。如果你觉得这些严重级别不能反映你的工程对安全的承诺,你应该按照适合自己的方式调整它们。
## 选择正确的硬件
我们并不会要求管理员使用某个特定供应商或者某个特定型号的产品,所以这一节提供的是选择工作系统时的核心注意事项。
### 检查清单
- [ ] 系统支持安全启动SecureBoot _(关键)_
- [ ] 系统没有火线Firewire雷电thunderbolt或者扩展卡ExpressCard接口 _(中等)_
- [ ] 系统有 TPM 芯片 _(中等)_
### 注意事项
#### 安全启动SecureBoot
尽管它还有争议但是安全引导能够预防很多针对工作站的攻击Rootkits、“Evil Maid”等等而没有太多额外的麻烦。它并不能阻止真正专门的攻击者加上在很大程度上国家安全机构有办法应对它可能是通过设计),但是有安全引导总比什么都没有强。
作为选择,你也许可以部署 [Anti Evil Maid][1] 提供更多健全的保护,以对抗安全引导所需要阻止的攻击类型,但是它需要更多部署和维护的工作。
#### 系统没有火线Firewire雷电thunderbolt或者扩展卡ExpressCard接口
火线是一个标准,其设计上允许任何连接的设备完全地直接访问你的系统内存(参见[维基百科][2])。雷电接口和扩展卡同样有问题,虽然一些后来部署的雷电接口试图限制内存访问的范围。如果你的系统没有这些端口,那是最好的,但这并不是关键问题,它们通常可以通过 UEFI 关闭或在内核中禁用。
#### TPM 芯片
可信平台模块Trusted Platform Module TPM是主板上的一个与核心处理器单独分开的加密芯片它可以用来增加平台的安全性比如存储全盘加密的密钥不过通常不会用于日常的平台操作。充其量这个是一个有则更好的东西除非你有特殊需求需要使用 TPM 增加你的工作站安全性。
## 预引导环境
这是你开始安装操作系统前的一系列推荐规范。
### 检查清单
- [ ] 使用 UEFI 引导模式(不是传统 BIOS _(关键)_
- [ ] 进入 UEFI 配置需要使用密码 _(关键)_
- [ ] 使用安全引导 _(关键)_
- [ ] 启动系统需要 UEFI 级别密码 _(中等)_
### 注意事项
#### UEFI 和安全引导
UEFI 尽管有缺点,还是提供了很多传统 BIOS 没有的好功能,比如安全引导。大多数现代的系统都默认使用 UEFI 模式。
确保进入 UEFI 配置模式要使用高强度密码。注意,很多厂商默默地限制了你能使用的密码长度,所以相比长口令,你也许应该选择高熵值的短密码(关于密码短语请参考下面内容)。
基于你选择的 Linux 发行版,你也许需要、也许不需要按照 UEFI 的要求,来导入你的发行版的安全引导密钥,从而允许你启动该发行版。很多发行版已经与微软合作,用大多数厂商所支持的密钥给它们已发布的内核签名,因此避免了你必须处理密钥导入的麻烦。
作为一个额外的措施在允许某人访问引导分区然后尝试做一些不好的事之前让他们输入密码。为了防止肩窥shoulder-surfing这个密码应该跟你的 UEFI 管理密码不同。如果你经常关闭和启动,你也许不想这么麻烦,因为你已经必须输入 LUKS 密码了LUKS 参见下面内容),这样可以让你减少一些额外的键盘输入。
## 发行版选择注意事项
很有可能你会坚持使用一个广泛使用的发行版,如 Fedora、Ubuntu、Arch、Debian或它们的一个衍生发行版。无论如何以下是你选择发行版时应该考虑的。
### 检查清单
- [ ] 拥有一个强健的 MAC/RBAC 系统SELinux/AppArmor/Grsecurity _(关键)_
- [ ] 发布安全公告 _(关键)_
- [ ] 提供及时的安全补丁 _(关键)_
- [ ] 提供软件包的加密验证 _(关键)_
- [ ] 完全支持 UEFI 和安全引导 _(关键)_
- [ ] 拥有健壮的原生全磁盘加密支持 _(关键)_
### 注意事项
#### SELinuxAppArmor和 GrSecurity/PaX
强制访问控制Mandatory Access ControlsMAC或者基于角色的访问控制Role-Based Access ControlsRBAC是一个用在老式 POSIX 系统的基于用户或组的安全机制扩展。现在大多数发行版已经捆绑了 MAC/RBAC 系统FedoraUbuntu或通过提供一种机制一个可选的安装后步骤来添加它GentooArchDebian。显然强烈建议您选择一个预装 MAC/RBAC 系统的发行版,但是如果你对某个没有默认启用它的发行版情有独钟,装完系统后应计划配置安装它。
应该坚决避免使用不带任何 MAC/RBAC 机制的发行版,传统的 POSIX 基于用户和组的安全机制在当今时代应该算是考虑不足。如果你想建立一个 MAC/RBAC 工作站,通常认为 AppArmor 和 PaX 比 SELinux 更容易掌握。此外,在工作站上很少有或者根本没有对外监听的守护进程,最高风险来自用户运行的应用,因此 GrSecurity/PaX _可能_ 会比 SELinux 提供更多的安全益处。
#### 发行版安全公告
大多数广泛使用的发行版都有给用户发送安全公告的机制,但是如果你钟爱某个小众的发行版,去看看它是否有将安全漏洞和补丁通知用户的文档化机制。缺乏这样的机制是一个重要的警告信号,说明这个发行版不够成熟,不能用作主要管理员的工作站。
#### 及时和可靠的安全更新
多数常用的发行版提供定期安全更新,但还是应该检查一下,以确保关键包的更新能及时提供。因此应避免使用衍生发行版spin-offs和“社区重构”版因为它们必须等待上游发行版先发布所以经常延迟发布安全更新。
现在很难找到一个不对软件包使用加密签名、不验证更新元数据或二者都不做的发行版。话虽如此,一些常用的发行版是在面世很多年后才引入这个基本安全机制的Arch说你呢所以这一点也值得检查。
#### 发行版支持 UEFI 和安全引导
检查发行版是否支持 UEFI 和安全引导。查明它是否需要导入额外的密钥或是否要求启动内核有一个已经被系统厂商信任的密钥签名(例如跟微软达成合作)。一些发行版不支持 UEFI 或安全启动但是提供了替代品来确保防篡改tamper-proof或防破坏tamper-evident引导环境[Qubes-OS][3] 使用 Anti Evil Maid前面提到的。如果一个发行版不支持安全引导也没有防止引导级别攻击的机制还是看看别的吧。
#### 全磁盘加密
全磁盘加密是保护静止数据的要求,大多数发行版都支持。作为一个选择方案,带有自加密硬盘的系统也可以用(通常通过主板 TPM 芯片实现),并提供了类似安全级别而且操作更快,但是花费也更高。
## 发行版安装指南
所有发行版都是不同的,但是也有一些一般原则:
### 检查清单
- [ ] 使用健壮的密码进行全磁盘加密LUKS _(关键)_
- [ ] 确保交换分区也加密了 _(关键)_
- [ ] 确保引导程序设置了密码(可以和 LUKS 的一样) _(关键)_
- [ ] 设置健壮的 root 密码(可以和 LUKS 的一样) _(关键)_
- [ ] 使用无特权账户登录,作为管理员组的一部分 _(关键)_
- [ ] 设置健壮的用户登录密码,不同于 root 密码 _(关键)_
### 注意事项
#### 全磁盘加密
除非你正在使用自加密硬盘,否则配置你的安装程序完整地加密所有存储你的数据与系统文件的磁盘很重要。简单地通过自动挂载的 cryptfs 回环loop文件加密用户目录还不够说你呢旧版 Ubuntu这并没有给系统二进制文件或交换分区提供保护而交换分区可能包含大量的敏感数据。推荐的加密策略是加密 LVM 设备,以便在启动过程中只需要输入一次密码。
`/boot`分区将一直保持非加密,因为引导程序需要在调用 LUKS/dm-crypt 前能引导内核自身。一些发行版支持加密的`/boot`分区,比如 [Arch][16],可能别的发行版也支持,但是似乎这样增加了系统更新的复杂度。如果你的发行版并没有原生支持加密`/boot`也不用太在意,内核镜像本身并没有什么隐私数据,它会通过安全引导的加密签名检查来防止被篡改。
#### 选择一个好密码
现代的 Linux 系统没有限制密码口令长度,所以唯一的限制是你的偏执和倔强。如果你要启动你的系统,你大概至少要输入两个不同的密码:一个解锁 LUKS另一个用来登录所以长密码会让你老得更快。最好从丰富或混合的词汇中选择 2-3 个单词长、容易输入的密码短语。
优秀密码例子(是的,你可以使用空格):
- nature abhors roombas
- 12 in-flight Jebediahs
- perdon, tengo flatulence
如果你喜欢输入可以在公开场合和你生活中能见到的句子,比如:
- Mary had a little lamb
- you're a wizard, Harry
- to infinity and beyond
如果你更愿意使用非词汇的密码,那么它最少要有 10-12 个字符的长度。
除非你担心物理安全,否则你可以写下你的密码,并保存在一个远离你办公桌的安全的地方。
#### Root用户密码和管理组
我们建议,你的 root 密码和你的 LUKS 加密使用同样的密码(除非你把你的笔记本共享给信任的人,他应该能解锁设备,但是不应该能成为 root 用户)。如果你是笔记本电脑的唯一用户,那么你的 root 密码与你的 LUKS 密码不同并没有安全上的优势。通常,你可以在 UEFI 管理、磁盘加密和 root 登录中使用同样的密码 -- 知道其中任何一个都会让攻击者完全控制你的系统,在单用户工作站上让这些密码不同,没有任何安全益处。
你应该有一个不同的,但同样强健的常规用户帐户密码用来日常工作。这个用户应该是管理组用户(例如`wheel`或者类似,根据发行版不同),允许你执行`sudo`来提升权限。
换句话说如果在你的工作站只有你一个用户你应该有两个独特的、强健robust而强壮strong的密码需要记住
**管理级别**,用在以下方面:
- UEFI 管理
- 引导程序GRUB
- 磁盘加密LUKS
- 工作站管理root 用户)
**用户级别**,用在以下:
- 用户登录和 sudo
- 密码管理器的主密码
很明显,如果有一个令人信服的理由的话,它们全都可以不同。
## 安装后的加固
安装后的安全加固在很大程度上取决于你选择的发行版,所以在一个像这样的通用文档中提供详细说明是徒劳的。然而,这里有一些你应该采取的步骤:
### 检查清单
- [ ] 在全局范围内禁用火线和雷电模块 _(关键)_
- [ ] 检查你的防火墙,确保过滤所有传入端口 _(关键)_
- [ ] 确保 root 邮件转发到一个你可以收到的账户 _(关键)_
- [ ] 建立一个系统自动更新任务,或更新提醒 _(中等)_
- [ ] 检查以确保 sshd 服务默认情况下是禁用的 _(中等)_
- [ ] 配置屏幕保护程序在一段时间的不活动后自动锁定 _(中等)_
- [ ] 设置 logwatch _(中等)_
- [ ] 安装使用 rkhunter _(中等)_
- [ ] 安装一个入侵检测系统Intrusion Detection System _(中等)_
### 注意事项
#### 将模块列入黑名单
将火线和雷电模块列入黑名单,增加一行到`/etc/modprobe.d/blacklist-dma.conf`文件:
blacklist firewire-core
blacklist thunderbolt
重启后,这些模块将被列入黑名单。即使你没有这些端口,这样做也是无害的(只是不起作用而已)。
#### Root 邮件
默认情况下root 的邮件只是保存在系统上,基本上没人会去读。确保你设置了`/etc/aliases`,将 root 邮件转发到你确实会读取的邮箱,否则你也许会错过重要的系统通知和报告:
# Person who should get root's mail
root: bob@example.com
编辑后运行`newaliases`,然后测试它,确保邮件确实能投递到,因为一些邮件供应商会拒绝来自不存在的域名或者不可达的域名的邮件。如果是这样,你需要调整邮件转发配置,直到确实可用。
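可以做一个简单的投递测试(示意;假设系统装有提供 `mail` 命令的 mailx 之类的软件包,具体包名因发行版而异):
    # 给 root 发送一封测试邮件,验证 /etc/aliases 里的转发是否生效
    echo "aliases test from $(hostname)" | mail -s "root alias test" root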
#### 防火墙sshd和监听进程
默认的防火墙设置将取决于您的发行版,但是大多数都允许`sshd`端口连入。除非你有一个令人信服的合理理由允许连入 ssh你应该过滤掉它并禁用 sshd 守护进程。
systemctl disable sshd.service
systemctl stop sshd.service
如果你需要使用它,你也可以临时启动它。
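例如(在基于 systemd 的系统上):
    # 临时启动 sshd用完之后再停止开机自启仍保持禁用
    systemctl start sshd.service
    systemctl stop sshd.service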
通常,你的系统不应该有任何侦听端口,除了响应 ping 之外。这将有助于你对抗网络级的零日漏洞利用。
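你可以用下面的命令检查当前有哪些端口在监听(示意;较旧的系统上可以用 `netstat -tulnp` 代替):
    # 列出所有处于监听状态的 TCP/UDP 端口及对应的进程
    ss -tulnp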
#### 自动更新或通知
建议打开自动更新,除非你有一个非常好的理由不这么做,比如担心自动更新会使你的系统无法使用(这在以前发生过,所以这种担心并非杞人忧天)。至少,你应该启用可用更新的自动通知。大多数发行版已经有这个服务自动运行了,所以你很可能不需要做任何事。查阅你的发行版文档了解更多。
你应该尽快应用所有明显的勘误,即使它们没有被特别标记为“安全更新”或者没有关联的 CVE 编号。所有的错误都是潜在的安全漏洞,比起停留在旧的、已知的错误上,冒一点出现新的、未知错误的风险通常是更安全的策略。
#### 监控日志
你应该对你的系统上发生的事情保持关注。出于这个原因,你应该安装`logwatch`,然后配置它每晚发送你的系统上所发生活动的报告。这不能防住一个专业的攻击者,但却是一个不错的安全保障功能。
注意,许多 systemd 发行版不再自动安装 logwatch 所需要的 syslog 服务(因为 systemd 使用它自己的日志),所以你需要安装并启用 rsyslog确保在使用 logwatch 之前你的 /var/log 不是空的。
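下面是一个安装示意(以使用 yum 的发行版为例,包名和命令可能因发行版而异):
    yum install rsyslog logwatch
    systemctl enable rsyslog.service
    systemctl start rsyslog.service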
#### Rkhunter 和 IDS
安装`rkhunter`和一个类似`aide`或者`tripwire`这样的入侵检测系统IDS并不是那么有用除非你确实理解它们的工作原理并采取必要的步骤来正确设置它们例如将数据库保存在外部介质上、从可信的环境运行检测、记住在执行系统更新和配置更改后刷新散列数据库等等。如果你不愿意在你的工作站上执行这些步骤、并相应调整自己的工作方式这些工具只会带来麻烦而没有任何实在的安全益处。
我们建议你安装`rkhunter`并每晚运行它。它相当易于学习和使用,虽然它不会阻止一个复杂的攻击者,它也能帮助你捕获你自己的错误。
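一个基本的用法示意(选项请以你所安装版本的手册页为准;记住在每次系统更新后重新运行 `--propupd`
    # 更新检测特征,并为当前系统文件建立基线
    rkhunter --update
    rkhunter --propupd
    # 适合放进夜间 cron 任务的检查命令,只报告警告
    rkhunter --check --cronjob --report-warnings-only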
## 个人工作站备份
工作站备份往往被忽视,或偶尔才做一次,这常常是不安全的方式。
### 检查清单
- [ ] 设置加密备份工作站到外部存储 _(关键)_
- [ ] 使用零认知zero-knowledge备份工具备份到站外或云上 _(中等)_
### 注意事项
#### 全加密的备份存到外部存储
把全部备份放到一个移动磁盘中比较方便,不用担心带宽和上行网速(在这个时代,大多数供应商仍然提供显著的不对称的上传/下载速度)。不用说,这个移动硬盘本身需要加密(再说一次,通过 LUKS或者你应该使用一个备份工具建立加密备份例如`duplicity`或者它的 GUI 版本 `deja-dup`。我建议使用后者并使用随机生成的密码,保存到离线的安全地方。如果你带上笔记本去旅行,把这个磁盘留在家,以防你的笔记本丢失或被窃时可以找回备份。
除了你的家目录外,你还应该备份`/etc`目录和出于取证目的的`/var/log`目录。
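一个最简单的 duplicity 用法示意(目录路径均为假设;不指定密钥时 duplicity 会提示输入用于对称加密的密码):
    # 把家目录、/etc 和 /var/log 加密备份到已接入的外部磁盘
    duplicity /home/user file:///mnt/backup/home
    duplicity /etc file:///mnt/backup/etc
    duplicity /var/log file:///mnt/backup/log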
尤其重要的是,避免拷贝你的家目录到任何非加密存储上,即使是需要快速的在两个系统上移动文件时,一旦完成你肯定会忘了清除它,从而暴露个人隐私或者安全信息到监听者手中 -- 尤其是把这个存储介质跟你的笔记本放到同一个包里。
#### 有选择的零认知站外备份
站外备份Off-site backup也是相当重要的。要做到这一点你可以让你的老板提供空间或者找一家云服务商。你可以建一个单独的 duplicity/deja-dup 配置,只包括重要的文件,以免传输大量你不想备份的数据(网络缓存、音乐、下载等等)。
作为选择你可以使用零认知zero-knowledge备份工具例如 [SpiderOak][5],它提供一个卓越的 Linux GUI工具还有更多的实用特性例如在多个系统或平台间同步内容。
## 最佳实践
下面是我们认为你应该采用的最佳实践列表。它当然不是非常详细的,而是试图提供实用的建议,来做到可行的整体安全性和可用性之间的平衡。
### 浏览
毫无疑问web 浏览器将是你的系统上最大、攻击面最大的软件。它是一个专门用来下载和执行不可信、甚至是恶意代码的工具。它试图采用沙箱和代码净化code sanitization等多种机制保护你免受这种危险但是这些机制在过去都被多次攻破。你应该知道在任何时候浏览网站都是你所做的最不安全的活动。
有几种方法可以减少浏览器被攻破后的影响,但是真正有效的方法需要你明显改变操作你的工作站的方式。
#### 1: 使用两个不同的浏览器 _(关键)_
这很容易做到,但是只有很少的安全效益。并不是所有的浏览器攻破都能让攻击者完全自由地访问你的系统 -- 有时攻击者只能读取本地浏览器存储、窃取其它标签页的活动会话、捕获浏览器的输入等。使用两个不同的浏览器,一个用在工作/高安全站点,另一个用在其它方面,有助于防止一些小的失误让攻击者拿到你的全部 cookie 存储。主要的不便是两个不同的浏览器会消耗较多的内存。
我们建议:
##### 火狐用来访问工作和高安全站点
使用火狐登录与工作有关的站点,应该额外关心的是确保像 cookies、会话、登录信息、击键等等数据不会落入攻击者手中。除了少数的几个网站你不应该用这个浏览器访问其它网站。
你应该安装下面的火狐扩展:
- [ ] NoScript _(关键)_
- NoScript 阻止活动内容加载,除非是在用户白名单里的域名。如果用在你的默认浏览器上它会很麻烦(可是提供了真正好的安全效益),所以我们建议只在用来访问与工作相关网站的浏览器上开启它。
- [ ] Privacy Badger _(关键)_
- EFF 的 Privacy Badger 将在页面加载时阻止大多数外部追踪器和广告平台,有助于在这些追踪站点被攻破时保护你的浏览器(追踪器和广告站点通常会成为攻击者的目标,因为它们能迅速影响世界各地成千上万的系统)。
- [ ] HTTPS Everywhere _(关键)_
- 这个 EFF 开发的扩展将确保你访问的大多数站点都使用安全连接,即使你点击的链接使用的是 http://(可以有效地避免大多数的攻击,例如 [SSL-strip][7])。
- [ ] Certificate Patrol _(中等)_
- 如果你正在访问的站点最近改变了它们的 TLS 证书,这个工具将会警告你 -- 特别是如果不是接近失效期或者现在使用不同的证书颁发机构。它有助于警告你是否有人正尝试中间人攻击你的连接,不过它会产生很多误报。
你应该让火狐成为你打开链接时的默认浏览器,因为 NoScript 将在加载或者执行时阻止大多数活动内容。
##### 其它一切都用 Chrome/Chromium
Chromium 开发者在增加很多很好的安全特性方面走在了火狐前面(至少[在 Linux 上][6]),例如 seccomp 沙箱、内核用户命名空间等等这些会成为你访问的网站与你系统其余部分之间的额外隔离层。Chromium 是上游开源项目Chrome 是 Google 基于它构建的专有二进制包(加一句偏执的提醒,如果你有任何不想让谷歌知道的事情都不要使用它)。
推荐你在 Chrome 上也安装**Privacy Badger** 和 **HTTPS Everywhere** 扩展,然后给它一个与火狐不同的主题,以让它告诉你这是你的“不可信站点”浏览器。
#### 2: 使用两个不同浏览器,一个在专用的虚拟机里 _(中等)_
这有点像上面建议的做法,不同的是增加了一个额外步骤:在专用虚拟机里运行 Chrome并通过一个可以共享剪贴板和转发声音事件的快速访问协议Spice 或 RDP来使用它。这将在不可信浏览器和你其它的工作环境之间添加一个优秀的隔离层攻击者即使完全攻破了你的浏览器也必须另外突破 VM 的隔离层,才能触及系统的其余部分。
这是一个鲜为人知的可行方式,但是需要大量的 RAM 和高速的处理器来处理多增加的负载。这要求作为管理员的你需要相应地调整自己的工作实践而付出辛苦。
#### 3: 通过虚拟化完全隔离你的工作和娱乐环境 _(低等)_
了解下 [Qubes-OS 项目][3],它致力于通过划分你的应用到完全隔离的 VM 中来提供高度安全的工作环境。
### 密码管理器
#### 检查清单
- [ ] 使用密码管理器 _(关键)_
- [ ] 不相关的站点使用不同的密码 _(关键)_
- [ ] 使用支持团队共享的密码管理器 _(中等)_
- [ ] 给非网站类账户使用一个单独的密码管理器 _(低等)_
#### 注意事项
使用好的、唯一的密码对你的团队成员来说应该是非常关键的需求。凭证credential盗取一直在发生 — 通过被攻破的计算机、盗取数据库备份、远程站点利用、以及任何其它的方式。凭证绝不应该跨站点重用,尤其是关键的应用。
##### 浏览器中的密码管理器
每个浏览器都有一个比较安全的保存密码的机制,可以同步到供应商维护的云存储中,并使用用户的密码加密保存的数据。然而,这个机制有严重的劣势:
1. 不能跨浏览器工作
2. 不提供任何与团队成员共享凭证的方法
也有一些支持良好、免费或便宜的密码管理器,可以很好的融合到多个浏览器,跨平台工作,提供小组共享(通常是付费服务)。可以很容易地通过搜索引擎找到解决方案。
##### 独立的密码管理器
任何与浏览器结合的密码管理器都有一个主要的缺点,它实际上是应用的一部分,这样最有可能被入侵者攻击。如果这让你不放心(应该这样),你应该选择两个不同的密码管理器 -- 一个集成在浏览器中用来保存网站密码,一个作为独立运行的应用。后者可用于存储高风险凭证如 root 密码、数据库密码、其它 shell 账户凭证等。
这样的工具在团队成员间共享超级用户的凭据方面特别有用(服务器 root 密码、ILO密码、数据库管理密码、引导程序密码等等
这几个工具可以帮助你:
- [KeePassX][8]在第2版中改进了团队共享
- [Pass][9],它使用了文本文件和 PGP并与 git 结合
- [Django-Pstore][10],它使用 GPG 在管理员之间共享凭据
- [Hiera-Eyaml][11],如果你已经在你的平台中使用了 Puppet在你的 Hiera 加密数据的一部分里面,可以便捷的追踪你的服务器/服务凭证。
### 加固 SSH 与 PGP 的私钥
个人加密密钥,包括 SSH 和 PGP 私钥,都是你工作站中最重要的物品 -- 这是攻击者最想得到的东西,这可以让他们进一步攻击你的平台或在其它管理员面前冒充你。你应该采取额外的步骤,确保你的私钥免遭盗窃。
#### 检查清单
- [ ] 用来保护私钥的强壮密码 _(关键)_
- [ ] PGP 的主密钥保存在移动存储中 _(中等)_
- [ ] 用于身份验证、签名和加密的子密钥存储在智能卡设备中 _(中等)_
- [ ] SSH 配置为以 PGP 认证密钥作为 ssh 私钥 _(中等)_
#### 注意事项
防止私钥被偷的最好方式是使用一个智能卡存储你的加密私钥,绝不要拷贝到工作站上。有几个厂商提供支持 OpenPGP 的设备:
- [Kernel Concepts][12],在这里可以采购支持 OpenPGP 的智能卡和 USB 读取器,你应该会需要一个。
- [Yubikey NEO][13]它除了提供支持 OpenPGP 的智能卡功能外还提供很多很酷的特性U2F、PIV、HOTP 等等)。
确保 PGP 主密钥没有存储在工作站上也很重要,只使用子密钥。主密钥只有在签名其它密钥和创建新的子密钥时才会用到,这些操作并不经常发生。你可以照着 [Debian 的子密钥][14]向导来学习如何将你的主密钥移动到移动存储以及如何创建子密钥。
你应该配置你的 gnupg 代理作为 ssh 代理,然后使用基于智能卡 PGP 认证密钥作为你的 ssh 私钥。我们发布了一个[详尽的指导][15]如何使用智能卡读取器或 Yubikey NEO。
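作为示意,在 GnuPG 2.1 及以上版本中,大致的配置方式如下(细节请以上面链接的详尽指导为准):
    # 在 gpg-agent 配置中启用 ssh 支持
    echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
    # 在 shell 启动文件中让 ssh 使用 gpg-agent 的套接字
    export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)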
如果你不想那么麻烦,最少要确保你的 PGP 私钥和你的 SSH 私钥有个强健的密码,这将让攻击者很难盗取使用它们。
### 休眠或关机,不要挂起
当系统挂起时内存中的内容仍然保留在内存芯片中可能会被攻击者读取到这叫做冷启动攻击cold boot attack。如果你离开你的系统的时间较长比如每天下班结束最好关机或者休眠而不是挂起它或者就那么开着。
### 工作站上的 SELinux
如果你使用捆绑了 SELinux 的发行版(如 Fedora这里有一些建议告诉你如何更好地使用它让你的工作站达到最大限度的安全。
#### 检查清单
- [ ] 确保你的工作站强制enforcing使用 SELinux _(关键)_
- [ ] 不要盲目的执行`audit2allow -M`,应该经常检查 _(关键)_
- [ ] 绝不要 `setenforce 0` _(中等)_
- [ ] 切换你的用户到 SELinux 用户`staff_u` _(中等)_
#### 注意事项
SELinux 是强制访问控制Mandatory Access ControlsMAC是对 POSIX 许可核心功能的扩展。它成熟、强健,自从它推出以来已经走过了很长的路。不管怎样,许多系统管理员现在仍旧重复着过时的口头禅“关掉它就行”。
话虽如此,在工作站上 SELinux 带来的安全效益有限,因为大多数你想运行的应用都是不受约束地运行的。开启它仍然有益,它能为面向网络的服务提供足够的保护,也有可能有助于防止攻击者通过脆弱的后台服务把权限提升到 root 级别。
我们的建议是开启它并强制使用enforcing
##### 绝不`setenforce 0`
使用`setenforce 0`临时把 SELinux 设置为许可permissive模式很有诱惑力但是你应该避免这样做。当你只是想排查某个特定应用或者程序的问题时这样做实际上是关闭了整个系统的 SELinux。
你应该使用`semanage permissive -a [somedomain_t]`替换`setenforce 0`,只把这个程序放入许可模式。首先运行`ausearch`查看哪个程序发生问题:
ausearch -ts recent -m avc
然后看下`scontext=`(来源 SELinux 上下文)行,像这样:
scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023
^^^^^^^^^^^^^^
这告诉你被拒绝的是`gpg_pinentry_t`域,所以如果你想排查这个应用的故障,应该把它加入许可域:
semanage permissive -a gpg_pinentry_t
这将允许你使用该应用,并继续收集其它的 AVC 数据,你可以结合`audit2allow`来编写一个本地策略。一旦完成,你就不会再看到新的 AVC 拒绝消息,这时你就可以运行以下命令把该程序从许可模式中删除:
semanage permissive -d gpg_pinentry_t
##### 以 SELinux 的角色 staff_r 使用你的工作站
SELinux 带有角色role的原生实现基于用户帐户相关角色来禁止或授予某些特权。作为一个管理员你应该使用`staff_r`角色,这可以限制访问很多配置和其它安全敏感文件,除非你先执行`sudo`。
默认情况下,用户创建为`unconfined_u`你可以自由运行大多数应用没有任何或只有一点SELinux 约束。要把你的用户转换到`staff_r`角色,运行下面的命令:
usermod -Z staff_u [username]
你应该退出然后登录新的角色,届时如果你运行`id -Z`,你将会看到:
staff_u:staff_r:staff_t:s0-s0:c0.c1023
在执行`sudo`时,你应该记住增加一个额外标志告诉 SELinux 转换到“sysadm_r”角色。你需要用的命令是
sudo -i -r sysadm_r
然后`id -Z`将会显示:
staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023
**警告**:在进行这个切换前你应该能很顺畅的使用`ausearch`和`audit2allow`,当你以`staff_r`角色运行时你的应用有可能不再工作了。在写作本文时,已知以下流行的应用在`staff_r`下没有做策略调整就不会工作:
- Chrome/Chromium
- Skype
- VirtualBox
切换回`unconfined_r`,运行下面的命令:
usermod -Z unconfined_u [username]
然后注销再重新回到舒适区。
## 延伸阅读
IT 安全的世界是一个没有底的兔子洞。如果你想深入,或者找到你的具体发行版更多的安全特性,请查看下面这些链接:
- [Fedora 安全指南](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html)
- [CESG Ubuntu 安全指南](https://www.gov.uk/government/publications/end-user-devices-security-guidance-ubuntu-1404-lts)
- [Debian 安全手册](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html)
- [Arch Linux 安全维基](https://wiki.archlinux.org/index.php/Security)
- [Mac OSX 安全](https://www.apple.com/support/security/guides/)
## 许可
本文档以[创作共用 署名-相同方式共享 4.0 国际许可证][0]授权发布。
--------------------------------------------------------------------------------
via: https://github.com/lfit/itpol/blob/bbc17d8c69cb8eee07ec41f8fbf8ba32fdb4301b/linux-workstation-security.md
作者:[mricon][a]
译者:[wyangsun](https://github.com/wyangsun)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://github.com/mricon
[0]: http://creativecommons.org/licenses/by-sa/4.0/
[1]: https://github.com/QubesOS/qubes-antievilmaid
[2]: https://en.wikipedia.org/wiki/IEEE_1394#Security_issues
[3]: https://qubes-os.org/
[4]: https://xkcd.com/936/
[5]: https://spideroak.com/
[6]: https://code.google.com/p/chromium/wiki/LinuxSandboxing
[7]: http://www.thoughtcrime.org/software/sslstrip/
[8]: https://keepassx.org/
[9]: http://www.passwordstore.org/
[10]: https://pypi.python.org/pypi/django-pstore
[11]: https://github.com/TomPoulton/hiera-eyaml
[12]: http://shop.kernelconcepts.de/
[13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/
[14]: https://wiki.debian.org/Subkeys
[15]: https://github.com/lfit/ssh-gpg-smartcard-config
[16]: http://www.pavelkogan.com/2014/05/23/luks-full-disk-encryption/
[17]: https://en.wikipedia.org/wiki/Cold_boot_attack
[18]: http://www.linux.com/news/featured-blogs/167-amanda-mcpherson/850607-linux-foundation-sysadmins-open-source-their-it-policies


@ -1,19 +1,18 @@
提高 WordPress 性能的9个技巧
深入浅出讲述提升 WordPress 性能的九大秘笈
================================================================================
关于建站和 web 应用程序交付WordPress 是全球最大的一个平台。全球大约 [四分之一][1] 的站点现在正在使用开源 WordPress 软件,包括 eBay, Mozilla, RackSpace, TechCrunch, CNN, MTV,纽约时报,华尔街日报
在建站和 web 应用程序交付方面WordPress 是全球最大的一个平台。全球大约[四分之一][1] 的站点现在正在使用开源 WordPress 软件,包括 eBay、 Mozilla、 RackSpace、 TechCrunch、 CNN、 MTV、纽约时报、华尔街日报 等等
WordPress.com对于用户创建博客平台是最流行的其也运行在WordPress 开源软件上。[NGINX powers WordPress.com][2]。许多 WordPress 用户刚开始在 WordPress.com 上建站,然后移动到搭载着 WordPress 开源软件的托管主机上;其中大多数站点都使用 NGINX 软件。
最流行的个人博客平台 WordPress.com其也运行在 WordPress 开源软件上。[而 NGINX 则为 WordPress.com 提供了动力][2]。在 WordPress.com 的用户当中,许多站点起步于 WordPress.com然后换成了自己运行 WordPress 开源软件;它们中越来越多的站点也使用了 NGINX 软件。
WordPress 的吸引力是它的简单性,无论是安装启动或者对于终端用户的使用。然而当使用量不断增长时WordPress 站点的体系结构也存在一定的问题 - 这里几个方法,包括使用缓存以及组合 WordPress 和 NGINX,可以解决这些问题。
WordPress 的吸引力源于其简单性,无论是对于最终用户还是安装架设。然而当使用量不断增长时WordPress 站点的体系结构也存在一定的问题 - 这里有几个方法,包括使用缓存,以及将 WordPress 和 NGINX 组合起来,可以解决这些问题。
在这篇博客中,我们提供了9个技巧来进行优化帮助你解决 WordPress 中一些常见的性能问题:
在这篇博客中,我们提供了九个提速技巧来帮助你解决 WordPress 中一些常见的性能问题:
- [缓存静态资源][3]
- [缓存动态文件][4]
- [使用 NGINX][5]
- [添加支持 NGINX 的链接][6]
- [迁移到 NGINX][5]
- [添加 NGINX 静态链接支持][6]
- [为 NGINX 配置 FastCGI][7]
- [为 NGINX 配置 W3_Total_Cache][8]
- [为 NGINX 配置 WP-Super-Cache][9]
@ -22,39 +21,39 @@ WordPress 的吸引力是它的简单性,无论是安装启动或者对于终
### 在 LAMP 架构下 WordPress 的性能 ###
大多数 WordPress 站点都运行在传统的 LAMP 架构下Linux 操作系统Apache Web 服务器软件MySQL 数据库软件 - 通常是一个单独的数据库服务器 - 和 PHP 编程语言。这些都是非常著名的,广泛应用的开源工具。大多数人都将 WordPress “称为” LAMP并且很容易寻求帮助和支持。
大多数 WordPress 站点都运行在传统的 LAMP 架构下Linux 操作系统Apache Web 服务器软件MySQL 数据库软件(通常是一个单独的数据库服务器)和 PHP 编程语言。这些都是非常著名的,广泛应用的开源工具。在 WordPress 世界里,很多人都用的是 LAMP所以很容易寻求帮助和支持。
当用户访问 WordPress 站点时,浏览器为每个用户创建六到八个连接来运行 Linux/Apache 的组合。当用户请求连接时,每个页面的 PHP 文件开始飞速的从 MySQL 数据库争夺资源来响应请求。
当用户访问 WordPress 站点时,浏览器为每个用户创建六到八个连接来连接到 Linux/Apache 上。当用户请求连接时PHP 即时生成每个页面,从 MySQL 数据库获取资源来响应请求。
LAMP 对于数百个并发用户依然能照常工作。然而,流量突然增加是常见的并且 - 通常是 - 一件好事。
LAMP 或许对于数百个并发用户依然能照常工作。然而,流量突然增加是常见的并且通常这应该算是一件好事。
但是,当 LAMP 站点变得繁忙时,当同时在线的用户达到数千个时,它的瓶颈就会被暴露出来。瓶颈存在主要是两个原因:
1. Apache Web 服务器 - Apache 为每一个连接需要消耗大量资源。如果 Apache 接受了太多的并发连接,内存可能会耗尽,性能急剧降低,因为数据必须使用磁盘进行交换。如果以限制连接数来提高响应时间,新的连接必须等待,这也导致了用户体验变得很差。
1. Apache Web 服务器 - Apache 的每个/每次连接需要消耗大量资源。如果 Apache 接受了太多的并发连接,内存可能会耗尽,从而导致性能急剧降低,因为数据必须交换到磁盘了。如果以限制连接数来提高响应时间,新的连接必须等待,这也导致了用户体验变得很差。
1. PHP/MySQL 的交互 - 总之,一个运行 PHP 和 MySQL 数据库服务器的应用服务器上每秒的请求量不能超过最大限制。当请求的数量超过最大连接数时,用户必须等待。超过最大连接数时也会增加所有用户的响应时间。超过其两倍以上时会出现明显的性能问题。
1. PHP/MySQL 的交互 - 一个运行 PHP 和 MySQL 数据库服务器的应用服务器上每秒的请求量有一个最大限制。当请求的数量超过这个最大限制时,用户必须等待。超过这个最大限制时也会增加所有用户的响应时间。超过其两倍以上时会出现明显的性能问题。
LAMP 架构的网站一般都会出现性能瓶颈,这时就需要升级硬件了 - 加 CPU扩大磁盘空间等等。当 Apache 和 PHP/MySQL 的架构负载运行后,在硬件上不断的提升无法保证对系统资源指数增长的需求。
LAMP 架构的网站出现性能瓶颈是常见的情况,这时就需要升级硬件了 - 加 CPU扩大磁盘空间等等。当 Apache 和 PHP/MySQL 的架构超载后,在硬件上不断的提升却跟不上系统资源指数增长的需求。
最先取代 LAMP 架构的是 LEMP 架构 Linux, NGINX, MySQL, 和 PHP。 (这是 LEMP 的缩写E 代表着 “engine-x.” 的发音。) 我们在 [技巧 3][12] 中会描述 LEMP 架构。
首选替代 LAMP 架构的是 LEMP 架构 Linux, NGINX, MySQL, 和 PHP。 (这是 LEMP 的缩写E 代表着 “engine-x.” 的发音。) 我们在 [技巧 3][12] 中会描述 LEMP 架构。
### 技巧 1. 缓存静态资源 ###
静态资源是指不变的文件,像 CSSJavaScript 和图片。这些文件往往在网页的数据中占半数以上。页面的其余部分是动态生成的,像在论坛中评论,仪表盘的性能或个性化的内容可以看看Amazon.com 产品)。
静态资源是指不变的文件,像 CSSJavaScript 和图片。这些文件往往在网页的数据中占半数以上。页面的其余部分是动态生成的,像在论坛中评论,性能仪表盘,或个性化的内容(可以看看 Amazon.com 产品)。
缓存静态资源有两大好处:
- 更快的交付给用户 - 用户从他们浏览器的缓存或者从互联网上离他们最近的缓存服务器获取静态文件。有时候文件较大,因此减少等待时间对他们来说帮助很大。
- 更快的交付给用户 - 用户可以从他们浏览器的缓存或者从互联网上离他们最近的缓存服务器获取静态文件。有时候文件较大,因此减少等待时间对他们来说帮助很大。
- 减少应用服务器的负载 - 从缓存中检索到的每个文件会让 web 服务器少处理一个请求。你的缓存越多,用户等待的时间越短。
要让浏览器缓存文件,需要早在静态文件中设置正确的 HTTP 首部。当看到 HTTP Cache-Control 首部时,特别设置了 max-ageExpires 首部,以及 Entity 标记。[这里][13] 有详细的介绍。
要让浏览器缓存文件,需要在静态文件中设置正确的 HTTP 首部。看看 HTTP Cache-Control 首部,特别是设置了 max-age 参数Expires 首部,以及 Entity 标记。[这里][13] 有详细的介绍。
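作为示意,下面这个 NGINX 片段为常见的静态资源类型设置了 30 天的过期时间30d 只是一个假设值,应根据你的资源更新频率调整):
    location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
        expires 30d;
        add_header Cache-Control "public";
    }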
当启用本地缓存然后用户请求以前访问过的文件时,浏览器首先检查该文件是否在缓存中。如果在,它会询问 Web 服务器该文件是否改变过。如果该文件没有改变Web 服务器将立即响应一个304状态码未改变这意味着该文件没有改变而不是返回状态码200 OK,然后继续检索并发送已改变的文件。
当启用本地缓存然后用户请求以前访问过的文件时,浏览器首先检查该文件是否在缓存中。如果在,它会询问 Web 服务器该文件是否改变过。如果该文件没有改变Web 服务器将立即响应一个304状态码未改变这意味着该文件没有改变而不是返回状态码200 OK 并检索和发送已改变的文件。
为了支持浏览器以外的缓存,可以考虑下面的方法,内容分发网络CDN。CDN 是一​​种流行且​​强大的缓存工具,但我们在这里不详细描述它。可以想一下 CDN 背后的支撑技术的实现。此外,当你的站点从 HTTP/1.x 过渡到 HTTP/2 协议时CDN 的用处可能不太大;根据需要调查和测试,找到你网站需要的正确方法。
要在浏览器之外支持缓存,可以考虑下面讲到的技巧,以及考虑使用内容分发网络CDN。CDN 是一​​种流行且​​强大的缓存工具,但我们在这里不详细描述它。在你实现了这里讲到的其它技术之后可以考虑 CDN。此外当你的站点从 HTTP/1.x 过渡到 HTTP/2 协议时CDN 的用处可能不太大;根据需要调查和测试,找到你网站需要的正确方法。
如果你转向 NGINX Plus 或开源的 NGINX 软件作为架构的一部分,建议你考虑 [技巧 3][14],然后配置 NGINX 缓存静态资源。使用下面的配置,用你 Web 服务器的 URL 替换 www.example.com。
如果你转向 NGINX Plus 或开源的 NGINX 软件作为架构的一部分,建议你考虑 [技巧 3][14],然后配置 NGINX 缓存静态资源。使用下面的配置,用你 Web 服务器的 URL 替换 www.example.com。
server {
# substitute your web server's URL for www.example.com
@ -86,63 +85,63 @@ LAMP 对于数百个并发用户依然能照常工作。然而,流量突然增
### 技巧 2. 缓存动态文件 ###
WordPress 动态生成网页,这意味着每次请求时它都要生成一个给定的网页(即使和前一次的结果相同)。这意味着用户随时获得的是最新内容。
WordPress 动态生成网页,这意味着每次请求时它都要生成一个给定的网页(即使和前一次的结果相同)。这意味着用户随时获得的是最新内容。
想一下,当用户访问一个帖子时,并在文章底部有用户的评论时。你希望用户能够看到所有的评论 - 即使评论刚刚发布。动态内容就是处理这种情况的。
但现在,当帖子每秒出现十几二十几个请求时。应用服务器可能每秒需要频繁生成页面导致其压力过大,造成延误。为了给用户提供最新的内容,每个访问理论上都是新的请求,因此他们也不得不在首页等待
但是,当这个帖子每秒收到十几二十几个请求时,应用服务器就可能因为需要频繁生成页面而压力过大,造成延误。为了给用户提供最新的内容,每个访问理论上都是新的请求,因此他们不得不在原始出处等待很长时间。
为了防止页面由于负载过大变得缓慢,需要缓存动态文件。这需要减少文件的动态内容来提高整个系统的响应速度。
为了防止页面由于不断提升的负载而变得缓慢,需要缓存动态文件。这需要减少文件的动态内容来提高整个系统的响应速度。
要在 WordPress 中启用缓存中,需要使用一些流行的插件 - 如下所述。WordPress 的缓存插件需要刷新页面,然后将其缓存短暂时间 - 也许只有几秒钟。因此,如果该网站每秒中只有几个请求,那大多数用户获得的页面都是缓存的副本。这也有助于提高所有用户的检索时间:
要在 WordPress 中启用缓存中,需要使用一些流行的插件 - 如下所述。WordPress 的缓存插件会请求最新的页面,然后将其缓存短暂时间 - 也许只有几秒钟。因此,如果该网站每秒中会有几个请求,那大多数用户获得的页面都是缓存的副本。这也有助于提高所有用户的检索时间:
- 大多数用户获得页面的缓存副本。应用服务器没有做任何工作。
- 用户很快会得到一个新的副本。应用服务器只需每隔一段时间刷新页面。当服务器产生一个新的页面(对于第一个用户访问后,缓存页过期),它这样做要快得多,因为它的请求不会超载。
- 用户很快就会得到一个崭新的副本。应用服务器只需每隔一段时间生成一个崭新页面。当服务器生成一个崭新页面时(即缓存过期后的第一个用户访问),它这样做也要快得多,因为它的请求并没有超载。
你可以缓存运行在 LAMP 架构或者 [LEMP 架构][15] 上 WordPress 的动态文件(在 [技巧 3][16] 中说明了)。有几个缓存插件,你可以在 WordPress 中使用。这里有最流行的缓存插件和缓存技术,从最简单到最强大的:
你可以缓存运行在 LAMP 架构或者 [LEMP 架构][15] 上的 WordPress 的动态文件(在 [技巧 3][16] 中说明了)。有几个缓存插件,你可以在 WordPress 中使用。下面列出了最流行的缓存插件和缓存技术,从最简单到最强大的:
- [Hyper-Cache][17] 和 [Quick-Cache][18] 这两个插件为每个 WordPress 页面创建单个 PHP 文件。它支持的一些动态函数会绕过多个 WordPress 与数据库的连接核心处理,创建一个更快的用户体验。他们不会绕过所有的 PHP 处理,所以使用以下选项他们不能给出相同的性能提升。他们也不需要修改 NGINX 的配置。
- [Hyper-Cache][17] 和 [Quick-Cache][18] 这两个插件为每个 WordPress 页面创建单个 PHP 文件。它们支持绕过大量 WordPress 核心处理和数据库连接的一些动态功能,创建更快的用户体验。它们不会绕过所有的 PHP 处理,所以并不会像下面那些一样取得同样的性能提升。它们也不需要修改 NGINX 的配置。
- [WP Super Cache][19] 最流行的 WordPress 缓存插件。它有许多功能,它的界面非常简洁,如下图所示。我们展示了 NGINX 一个简单的配置实例在 [技巧 7][20] 中
- [WP Super Cache][19] 最流行的 WordPress 缓存插件。在它易用的界面易用上提供了许多功能,如下所示。我们在 [技巧 7][20] 中展示了一个简单的 NGINX 配置实例
- [W3 Total Cache][21] 这是第二大最受欢迎的 WordPress 缓存插件。它比 WP Super Cache 的功能更强大,但它有些配置选项比较复杂。一个 NGINX 的简单配置,请看 [技巧 6][22]。
- [W3 Total Cache][21] 这是第二流行的 WordPress 缓存插件。它比 WP Super Cache 的功能更强大,但它有些配置选项比较复杂。样例 NGINX 配置,请看 [技巧 6][22]。
- [FastCGI][23] CGI 代表通用网关接口,在因特网上发送请求和接收文件。它不是一个插件只是一种能直接使用缓存的方法。FastCGI 可以被用在 Apache 和 Nginx 上,它也是最流行的动态缓存方法;我们在 [技巧 5][24] 中描述了如何配置 NGINX 来使用它。
- [FastCGI][23] CGI 的意思是通用网关接口Common Gateway Interface是在因特网上发送请求和接收文件的一种通用方式。它不是一个插件而是一种与缓存交互的方法FastCGI 可以被用在 Apache 和 Nginx 上,它也是最流行的动态缓存方法;我们在 [技巧 5][24] 中描述了如何配置 NGINX 来使用它。
这些插件的技术文档解释了如何在 LAMP 架构中配置它们。配置选项包括数据库和对象缓存;也包括使用 HTMLCSS 和 JavaScript 来构建 CDN 集成环境。对于 NGINX 的配置,请看列表中的提示技巧。
这些插件和技术的文档解释了如何在典型的 LAMP 架构中配置它们。配置方式包括数据库和对象缓存;最小化 HTML、CSS 和 JavaScript集成流行的 CDN 集成环境。对于 NGINX 的配置,请看列表中的提示技巧。
**注意**WordPress 不能缓存用户的登录信息,因为它们的 WordPress 页面都是不同的。(对于大多数网站来说,只有一小部分用户可能会登录),大多数缓存不会对刚刚评论过的用户显示缓存页面,只有当用户刷新页面时才会看到他们的评论。若要缓存页面的非个性化内容,如果它对整体性能来说很重要,可以使用一种称为 [fragment caching][25] 的技术。
**注意**缓存不会用于已经登录的 WordPress 用户,因为他们的 WordPress 页面都是不同的。(对于大多数网站来说,只有一小部分用户可能会登录)此外,大多数缓存不会对刚刚评论过的用户显示缓存页面,因为当用户刷新页面时希望看到他们的评论。若要缓存页面的非个性化内容,如果它对整体性能来说很重要,可以使用一种称为 [碎片缓存(fragment caching][25] 的技术。
### 技巧 3. 使用 NGINX ###
如上所述,当并发用户数超过某一值时 Apache 会导致性能问题 可能数百个用户同时使用。Apache 对于每一个连接会消耗大量的资源因而容易耗尽内存。Apache 可以配置连接数的值来避免耗尽内存,但是这意味着,超过限制时,新的连接请求必须等待。
如上所述,当并发用户数超过某一数量时 Apache 会导致性能问题 可能是数百个用户同时使用。Apache 对于每一个连接会消耗大量的资源因而容易耗尽内存。Apache 可以配置连接数的值来避免耗尽内存,但是这意味着,超过限制时,新的连接请求必须等待。
此外Apache 使用 mod_php 模块将每一个连接加载到内存中,即使只有静态文件图片CSSJavaScript 等)。这使得每个连接消耗更多的资源,从而限制了服务器的性能。
此外Apache 为每个连接加载一个 mod_php 模块副本到内存中,即使只有服务于静态文件图片CSSJavaScript 等)。这使得每个连接消耗更多的资源,从而限制了服务器的性能。
开始解决这些问题吧,从 LAMP 架构迁到 LEMP 架构 使用 NGINX 取代 Apache 。NGINX 仅消耗很少量的内存就能处理成千上万的并发连接数,所以你不必经历颠簸,也不必限制并发连接数。
要解决这些问题,从 LAMP 架构迁到 LEMP 架构 使用 NGINX 取代 Apache 。NGINX 只需消耗少量内存就能处理成千上万的并发连接数,所以你不必经历系统颠簸,也不必把并发连接数限制到很小的数量。
NGINX 处理静态文件的性能也较好,它有内置的,简单的 [缓存][26] 控制策略。减少应用服务器的负载,你的网站的访问速度会更快,用户体验更好。
NGINX 处理静态文件的性能也较好,它有内置的,容易调整的 [缓存][26] 控制策略。减少应用服务器的负载,你的网站的访问速度会更快,用户体验更好。
你可以在部署的所有 Web 服务器上使用 NGINX或者你可以把一个 NGINX 服务器作为 Apache 的“前端”来进行反向代理 - NGINX 服务器接收客户端请求,将请求的静态文件直接返回,将 PHP 请求转发到 Apache 上进行处理。
你可以在部署环境的所有 Web 服务器上使用 NGINX或者你可以把一个 NGINX 服务器作为 Apache 的“前端”来进行反向代理 - NGINX 服务器接收客户端请求,将请求的静态文件直接返回,将 PHP 请求转发到 Apache 上进行处理。
对于动态页面的生成 - WordPress 核心体验 - 选择一个缓存工具,如 [技巧 2][27] 中描述的。在下面的技巧中,你可以看到 FastCGIW3_Total_Cache 和 WP-Super-Cache 在 NGINX 上的配置示例。 Hyper-Cache 和 Quick-Cache 不需要改变 NGINX 的配置。)
对于动态页面的生成,这是 WordPress 核心体验,可以选择一个缓存工具,如 [技巧 2][27] 中描述的。在下面的技巧中,你可以看到 FastCGIW3\_Total\_Cache 和 WP-Super-Cache 在 NGINX 上的配置示例。 Hyper-Cache 和 Quick-Cache 不需要改变 NGINX 的配置。)
**技巧** 缓存通常会被保存到磁盘上,但你可以用 [tmpfs][28] 将缓存放在内存中来提高性能。
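例如(示意;挂载点和大小均为假设值):
    # 将缓存目录挂载为内存文件系统
    mount -t tmpfs -o size=512m tmpfs /var/run/nginx-cache
    # 若要开机自动挂载,可在 /etc/fstab 中添加:
    # tmpfs  /var/run/nginx-cache  tmpfs  defaults,size=512m  0  0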
为 WordPress 配置 NGINX 很容易。按照这四个步骤,其详细的描述在指定的技巧中:
为 WordPress 配置 NGINX 很容易。仅需四步,其详细的描述在指定的技巧中:
1.添加永久的支持 - 添加对 NGINX 的永久支持。此步消除了对 **.htaccess** 配置文件的依赖,这是 Apache 特有的。参见 [技巧 4][29]
2.配置缓存 - 选择一个缓存工具并安装好它。可选择的有 FastCGI cacheW3 Total Cache, WP Super Cache, Hyper Cache, 和 Quick Cache。请看技巧 [5][30], [6][31], 和 [7][32].
3.落实安全防范措施 - 在 NGINX 上采用对 WordPress 最佳安全的做法。参见 [技巧 8][33]。
4.配置 WordPress 多站点 - 如果你使用 WordPress 多站点,在 NGINX 下配置子目录,子域,或多个域的结构。见 [技巧9][34]。
1. 添加永久链接的支持 - 让 NGINX 支持永久链接。此步消除了对 **.htaccess** 配置文件的依赖,这是 Apache 特有的。参见 [技巧 4][29]
2. 配置缓存 - 选择一个缓存工具并安装好它。可选择的有 FastCGI cacheW3 Total Cache, WP Super Cache, Hyper Cache, 和 Quick Cache。请看技巧 [5][30]、 [6][31] 和 [7][32]。
3. 落实安全防范措施 - 在 NGINX 上采用对 WordPress 最佳安全的做法。参见 [技巧 8][33]。
4. 配置 WordPress 多站点 - 如果你使用 WordPress 多站点,在 NGINX 下配置子目录,子域,或多域名架构。见 [技巧9][34]。
### 技巧 4. 添加支持 NGINX 的链接 ###
### 技巧 4. 让 NGINX 支持永久链接 ###
许多 WordPress 网站依**.htaccess** 文件,此文件依赖 WordPress 的多个功能,包括永久支持,插件和文件缓存。NGINX 不支持 **.htaccess** 文件。幸运的是,你可以使用 NGINX 的简单而全面的配置文件来实现大部分相同的功能。
许多 WordPress 网站依赖于 **.htaccess** 文件,此文件为 WordPress 的多个功能所需要,包括永久链接支持、插件和文件缓存。NGINX 不支持 **.htaccess** 文件。幸运的是,你可以使用 NGINX 的简单而全面的配置文件来实现大部分相同的功能。
你可以在使用 NGINX 的 WordPress 中通过在主 [server][36] 块下添加下面的 location 块中启用 [永久链接][35]。(此 location 块在其代码示例中也会被包括)。
你可以在你的主 [server][36] 块下添加下面的 location 块,为使用 NGINX 的 WordPress 启用 [永久链接][35]。(此 location 块在其它代码示例中也会被包括)。
**try_files** 指令告诉 NGINX 检查请求的 URL 在根目录下是作为文件(**$uri**)还是目录(**$uri/**)**/var/www/example.com/htdocs**。如果都不是NGINX 将重定向到 **/index.php**,通过查询字符串参数判断是否作为参数。
**try_files** 指令告诉 NGINX 检查请求的 URL 在文档根目录(**/var/www/example.com/htdocs**)下是作为文件(**$uri**)还是目录(**$uri/**) 存在的。如果都不是NGINX 将重定向到 **/index.php**,并传递查询字符串参数作为参数。
server {
server_name example.com www.example.com;
@ -159,17 +158,17 @@ NGINX 处理静态文件的性能也较好,它有内置的,简单的 [缓存
### 技巧 5. 在 NGINX 中配置 FastCGI ###
NGINX 可以从 FastCGI 应用程序中缓存响应,如 PHP 响应。此方法可提供最佳的性能。
NGINX 可以缓存来自 FastCGI 应用程序的响应,如 PHP 响应。此方法可提供最佳的性能。
对于开源的 NGINX第三方模块 [ngx_cache_purge][37] 提供缓存清除能力,需要手动编译,配置代码如下所示。NGINX Plus 已经包含了此代码的实现
对于开源的 NGINX编译加入第三方模块 [ngx\_cache\_purge][37] 可以提供缓存清除能力配置代码如下所示。NGINX Plus 已经包含了该功能的自有实现。
当使用 FastCGI 时,我们建议你安装 [NGINX 辅助插件][38] 并使用下面的配置文件,尤其是要使用 **fastcgi_cache_key** 并且 location 块下要包括 **fastcgi_cache_purge**。当页面被发布或有改变时,甚至有新评论被发布时,该插件会自动清除你的缓存,你也可以从 WordPress 管理控制台手动清除。
当使用 FastCGI 时,我们建议你安装 [NGINX 辅助插件][38] 并使用下面的配置文件,尤其是要注意 **fastcgi\_cache\_key** 的使用和包括 **fastcgi\_cache\_purge** 的 location 块。当页面发布或有改变时,有新评论被发布时,该插件会自动清除你的缓存,你也可以从 WordPress 管理控制台手动清除。
NGINX 的辅助插件还可以添加一个简短的 HTML 代码到你网页的底部,确认缓存是否正常并显示一些统计工作。(你也可以使用 [$upstream_cache_status][39] 确认缓存功能是否正常。)
NGINX 的辅助插件还可以在你网页的底部添加一个简短的 HTML 代码,以确认缓存是否正常并显示一些统计数据。(你也可以使用 [$upstream\_cache\_status][39] 确认缓存功能是否正常。)
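例如,可以在 server 块中加入下面一行把缓存状态HIT、MISS、BYPASS 等)放进响应头以便调试(响应头名称 X-Cache 是自定义的):
    add_header X-Cache $upstream_cache_status;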
fastcgi_cache_path /var/run/nginx-cache levels=1:2
fastcgi_cache_path /var/run/nginx-cache levels=1:2
keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
server_name example.com www.example.com;
@ -181,7 +180,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
set $skip_cache 0;
# POST 请求和查询网址的字符串应该交给 PHP
# POST 请求和带有查询参数的网址应该交给 PHP
if ($request_method = POST) {
set $skip_cache 1;
}
@ -196,7 +195,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
set $skip_cache 1;
}
#用户不能使用缓存登录或缓存最近的评论
#不要为登录用户或最近的评论者进行缓存
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass
|wordpress_no_cache|wordpress_logged_in") {
set $skip_cache 1;
@ -240,13 +239,13 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
}
}
### 技巧 6. 为 NGINX 配置 W3_Total_Cache ###
### 技巧 6. 为 NGINX 配置 W3\_Total\_Cache ###
[W3 Total Cache][40], 是 Frederick Townes 的 [W3-Edge][41] 下的, 是一个支持 NGINX 的 WordPress 缓存框架。其有众多选项配置,可以替代 FastCGI 缓存。
[W3 Total Cache][40], 是 [W3-Edge][41] 的 Frederick Townes 出品的, 是一个支持 NGINX 的 WordPress 缓存框架。其有众多选项配置,可以替代 FastCGI 缓存。
缓存插件提供了各种缓存配置,还包括数据库和对象的缓存,对 HTMLCSS 和 JavaScript可选择性的与流行的 CDN 整合。
这个缓存插件提供了各种缓存配置,还包括数据库和对象的缓存,最小化 HTML、CSS 和 JavaScript并可选与流行的 CDN 整合。
使用插件时,需要将其配置信息写入位于你的域的根目录的 NGINX 配置文件中
这个插件会通过写入一个位于你的域的根目录的 NGINX 配置文件来控制 NGINX
server {
server_name example.com www.example.com;
@ -271,11 +270,11 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
### 技巧 7. 为 NGINX 配置 WP Super Cache ###
[WP Super Cache][42] 是由 Donncha O Caoimh 完成的, [Automattic][43] 上的一个 WordPress 开发者, 这是一个 WordPress 缓存引擎,它可以将 WordPress 的动态页面转变成静态 HTML 文件,以使 NGINX 可以很快的提供服务。它是第一个 WordPress 缓存插件,和其的相比,它更专注于某一特定的领域。
[WP Super Cache][42] 是由 Donncha O Caoimh 开发的,他是 [Automattic][43] 的一位 WordPress 开发者。这是一个 WordPress 缓存引擎,它可以将 WordPress 的动态页面转变成静态 HTML 文件,以使 NGINX 可以很快地提供服务。它是第一个 WordPress 缓存插件,和其它插件相比,它更专注于某一特定的领域。
配置 NGINX 使用 WP Super Cache 可以根据你的喜好而进行不同的配置。以下是一个示例配置。
在下面的配置中,location 块中使用了名为 WP Super Cache 的超级缓存中部分配置来工作。代码的其余部分是根据 WordPress 的规则不缓存用户登录信息,不缓存 POST 请求,并对静态资源设置过期首部,再加上标准的 PHP 实现;这部分可以进行定制,来满足你的需求
在下面的配置中,带有名为 supercache 的 location 块是 WP Super Cache 特有的部分。 WordPress 规则的其余代码用于不缓存已登录用户的信息,不缓存 POST 请求,并对静态资源设置过期首部,再加上标准的 PHP 处理;这部分可以根据你的需求进行定制
server {
@ -288,7 +287,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
set $cache_uri $request_uri;
# POST 请求和查询网址的字符串应该交给 PHP
# POST 请求和带有查询字符串的网址应该交给 PHP
if ($request_method = POST) {
set $cache_uri 'null cache';
}
@ -305,13 +304,13 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
set $cache_uri 'null cache';
}
#用户不能使用缓存登录或缓存最近的评论
#不对已登录用户和最近的评论者使用缓存
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+
|wp-postpass|wordpress_logged_in") {
set $cache_uri 'null cache';
}
#当请求的文件存在时使用缓存否则将请求转发给WordPress
#当请求的文件存在时使用缓存,否则将请求转发给 WordPress
location / {
try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
$uri $uri/ /index.php;
@ -346,7 +345,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
### 技巧 8. 为 NGINX 配置安全防范措施 ###
为了防止攻击,可以控制对关键资源的访问以及当机器超载时进行登录限制
为了防止攻击,可以控制对关键资源的访问并限制机器人对登录功能的过量攻击
只允许特定的 IP 地址访问 WordPress 的仪表盘。
@ -365,14 +364,14 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
deny all;
}
拒绝其他人访问 WordPress 的配置文件 **wp-config.php**。拒绝其他人访问的另一种方法是将该文件的一个目录移到域的根目录下
拒绝其它人访问 WordPress 的配置文件 **wp-config.php**。拒绝其它人访问的另一种方法是将该文件的一个目录移到域的根目录之上的目录
# 拒绝其人访问 wp-config.php
# 拒绝其人访问 wp-config.php
location ~* wp-config.php {
deny all;
}
**wp-login.php** 进行限速来防止暴力攻击
**wp-login.php** 进行限速来防止暴力破解
# 对 wp-login.php 的访问限速
location = /wp-login.php {
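一个完整的限速配置大致如下示意zone 名称、速率和 PHP 套接字路径均为假设值,应换成你自己环境中的配置):
    # 在 http 块中定义限速区域:每个 IP 每秒最多 1 个请求
    limit_req_zone $binary_remote_addr zone=wplogin:10m rate=1r/s;
    # 在 server 块中对 wp-login.php 应用限速
    location = /wp-login.php {
        limit_req zone=wplogin burst=2 nodelay;
        # 其余 fastcgi 参数与你现有的 PHP location 保持一致(套接字路径为假设值)
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }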
@ -383,27 +382,27 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
### 技巧 9. 配置 NGINX 支持 WordPress 多站点 ###
WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单个实例中允许你管理两个或多个网站。[WordPress.com][44] 运行的就是 WordPress 多站点,其主机为成千上万的用户提供博客服务。
WordPress 多站点WordPress Multisite顾名思义这个版本的 WordPress 可以让你以单个实例管理两个或多个网站。[WordPress.com][44] 运行的就是 WordPress 多站点,其主机为成千上万的用户提供博客服务。
你可以从单个域的任何子目录或从不同的子域来运行独立的网站。
使用此代码块添加对子目录的支持。
# 在 WordPress 中添加支持子目录结构的多站点
# 在 WordPress 多站点中添加对子目录结构的支持
if (!-e $request_filename) {
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
rewrite ^(/[^/]+)?(/wp-.*) $2 last;
rewrite ^(/[^/]+)?(/.*\.php) $2 last;
}
使用此代码块来替换上面的代码块以添加对子目录结构的支持,子目录名自定义
使用此代码块来替换上面的代码块,以添加对子域名结构的支持,并替换为你自己的子域名。
# 添加支持子域名
server_name example.com *.example.com;
旧版本3.4以前)的 WordPress 多站点使用 readfile() 来提供静态内容。然而readfile() 是 PHP 代码,它在执行时会导致性能显著降低。我们可以用 NGINX 来绕过这个非必要的 PHP 处理。下面的代码片段用(==============)线分隔开了。
# 避免 PHP readfile() 在 /blogs.dir/structure 子目录中
# 避免对子目录中 /blogs.dir/ 结构执行 PHP readfile()
location ^~ /blogs.dir {
internal;
alias /var/www/example.com/htdocs/wp-content/blogs.dir;
@ -414,8 +413,8 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单
============================================================
# 避免 PHP readfile() 在 /files/structure 子目录中
location ~ ^(/[^/]+/)?files/(?<rt_file>.+) {
# 避免对子目录中 /files/ 结构执行 PHP readfile()
location ~ ^(/[^/]+/)?files/(?<rt_file>.+) {
try_files /wp-content/blogs.dir/$blogid/files/$rt_file /wp-includes/ms-files.php?file=$rt_file;
access_log off;
log_not_found off;
@ -424,7 +423,7 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单
============================================================
# WPMU 文件结构的子域路径
# 子域路径的 WPMU 文件结构
location ~ ^/files/(.*)$ {
try_files /wp-includes/ms-files.php?file=$1 =404;
access_log off;
@ -434,7 +433,7 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单
============================================================
# 地图博客 ID 在特定的目录下
# 映射博客 ID 到特定的目录
map $http_host $blogid {
default 0;
example.com 1;
@ -444,15 +443,15 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单
### 结论 ###
可扩展性对许多站点的开发者来说是一项挑战,因为这会让他们在 WordPress 站点中取得成功。(对于那些想要跨越 WordPress 性能问题的新站点。)为 WordPress 添加缓存,并将 WordPress 和 NGINX 结合,是不错的答案。
对许多想让自己的 WordPress 站点取得成功的开发者来说,可扩展性是一项挑战(对于那些想要跨越 WordPress 性能门槛的新站点来说也是如此)。为 WordPress 添加缓存,并将 WordPress 和 NGINX 结合,是不错的答案。
NGINX 不仅对 WordPress 网站是有用的。世界上排名前 100010,000和100,000网站中 NGINX 也是作为 [领先的 web 服务器][45] 被使用
NGINX 不仅用于 WordPress 网站。世界上排名前 1000、10000 和 100000 网站中 NGINX 也是 [遥遥领先的 web 服务器][45]
欲了解更多有关 NGINX 的性能,请看我们最近的博客,[关于 10x 应用程序的 10 个技巧][46]。
欲了解更多有关 NGINX 的性能,请看我们最近的博客,[让应用性能提升 10 倍的 10 个技巧][46]。
NGINX 软件有两个版本:
- NGINX 开源软件 - 像 WordPress 一样,此软件你可以自行下载,配置和编译。
- NGINX 开源软件 - 像 WordPress 一样,此软件你可以自行下载,配置和编译。
- NGINX Plus - NGINX Plus 包括一个预构建的参考版本的软件,以及服务和技术支持。
想要开始,先到 [nginx.org][47] 下载开源软件并了解下 [NGINX Plus][48]。
@ -463,7 +462,7 @@ via: https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-
作者:[Floyd Smith][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,80 @@
如何使用 pv 命令监控 linux 命令的执行进度
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/11/pv-featured-1.jpg)
如果你是一个 Linux 系统管理员,那么毫无疑问你必须花费大量的工作时间在命令行上:安装和卸载软件,监视系统状态,复制、移动、删除文件,查错,等等。很多时候都是你输入一个命令,然后等待很长时间直到执行完成。也有的时候你执行的命令挂起了,而你只能猜测命令执行的实际情况。
通常 Linux 命令不提供和进度相关的信息而这些信息特别重要尤其当你只有有限的时间时。然而这并不意味着你是无助的现在有一个命令pv它会显示当前在命令行执行的命令的进度信息。在本文我们会讨论它并用几个简单的例子说明其特性。
### PV 命令 ###
[PV][1] 由Andrew Wood 开发,是 Pipe Viewer 的简称,意思是通过管道显示数据处理进度的信息。这些信息包括已经耗费的时间,完成的百分比(通过进度条显示),当前的速度,全部传输的数据,以及估计剩余的时间。
> "要使用 PV需要配合合适的选项把它放置在两个进程之间的管道。命令的标准输入将会通过标准输出传进来的而进度会被输出到标准错误输出。”
上述解释来自该命令的帮助页。
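例如,把 pv 插到管道中间就能观察数据的流动(示意,假设当前目录有一个 backup.tar.gz
    # 解压时显示流过管道的数据量、速率和进度
    pv backup.tar.gz | tar xzf -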
### 下载和安装 ###
Debian 系的操作系统,如 Ubuntu可以简单的使用下面的命令安装 PV
sudo apt-get install pv
如果你使用了其他发行版本,你可以使用各自的包管理软件在你的系统上安装 PV。一旦 PV 安装好了你就可以在各种场合使用它(详见下文)。需要注意的是下面所有例子都使用的是 pv 1.2.0。
### 特性和用法 ###
我们(在 Linux 上使用命令行的用户大多数都遇到过的一个使用场景是从 USB 驱动器拷贝电影文件到电脑。如果你使用 cp 来完成上面的任务,你就会什么情况都不清楚,直到整个复制过程结束或者出错。
然而pv 命令在这种情景下很有帮助。比如:
pv /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv
输出如下:
![pv-copy](https://www.maketecheasier.com/assets/uploads/2015/10/pv-copy.png)
所以,如你所见,这个命令显示了很多和操作有关的有用信息,包括已经传输了的数据量,花费的时间,传输速率,进度条,进度的百分比,以及剩余的时间。
`pv` 命令提供了多种显示选项开关。比如,你可以使用`-p` 来显示百分比,`-t` 来显示时间,`-r` 表示传输速率,`-e` 代表 etaLCTT 译注:估计剩余的时间)。好消息是你不必记住每一个选项,因为默认这几个选项都是启用的。但是,如果你只需要其中某一项信息,那么可以通过控制这几个选项来实现。
这里还有一个`-n` 选项来允许 pv 命令显示整数百分比,在标准错误输出上每行显示一个数字,用来替代通常的可视进度条。下面是一个例子:
pv -n /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv
![pv-numeric](https://www.maketecheasier.com/assets/uploads/2015/10/pv-numeric.png)
这个选项非常适合某些情境下的需求,比如你想用管道把输出传给 [dialog][2] 命令。
接下来还有一个命令行选项,`-L` 可以让你修改 pv 命令的传输速率。举个例子,使用 -L 选项来限制传输速率为2MB/s。
pv -L 2m /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv
![pv-ratelimit](https://www.maketecheasier.com/assets/uploads/2015/10/pv-ratelimit.png)
如上图所见,数据传输速度按照我们的要求被限制了。
另一个 pv 可以帮上忙的情景是压缩文件。下面这个例子演示了如何与压缩软件 Gzip 一起工作。
pv /media/himanshu/1AC2-A8E3/fnf.mkv | gzip > ./Desktop/fnf.log.gz
![pv-gzip](https://www.maketecheasier.com/assets/uploads/2015/10/pv-gzip.png)
### 结论 ###
如上所述pv 是一个非常有用的小工具,它可以在命令没有按照预期执行的情况下帮你节省你宝贵的时间。而且这些显示的信息还可以用在 shell 脚本里。我强烈的推荐你使用这个命令,它值得你一试。
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/monitor-progress-linux-command-line-operation/
作者:[Himanshu Arora][a]
译者:[ezio](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/himanshu/
[1]:http://linux.die.net/man/1/pv
[2]:http://linux.die.net/man/1/dialog


@ -1,14 +1,14 @@
在 Ubuntu 15.10 上安装 PostgreSQL 9.4 和 phpPgAdmin
在 Ubuntu 上安装世界上最先进的开源数据库 PostgreSQL 9.4 和 phpPgAdmin
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/05/postgresql.png)
### 简介 ###
[PostgreSQL][1] 是一款强大的,开源对象关系型数据库系统。它支持所有的主流操作系统,包括 Linux、UnixAIX、BSD、HP-UXSGI IRIX、Mac OS、Solaris、Tru64 以及 Windows 操作系统。
[PostgreSQL][1] 是一款强大的,开源的,对象关系型数据库系统。它支持所有的主流操作系统,包括 Linux、UnixAIX、BSD、HP-UXSGI IRIX、Mac OS、Solaris、Tru64 以及 Windows 操作系统。
下面是 **Ubuntu** 发起者 **Mark Shuttleworth** 对 PostgreSQL 的一段评价。
> PostgreSQL 真的是一款很好的数据库系统。刚开始我们使用它的时候,并不确定它能否胜任工作。但我错的太离谱了。它很强壮、快速,在各个方面都很专业。
> PostgreSQL 是一款极赞的数据库系统。刚开始我们在 Launchpad 上使用它的时候,并不确定它能否胜任工作。但我错了。它很强壮、快速,在各个方面都很专业。
>
> — Mark Shuttleworth.
@ -22,7 +22,7 @@
如果你需要其它的版本,按照下面那样先添加 PostgreSQL 仓库然后再安装。
**PostgreSQL apt 仓库** 支持 amd64 和 i386 架构的 Ubuntu 长期支持版10.04、12.04 和 14.04以及非长期支持版14.04)。对于其它非长期支持版,该软件包虽然不能完全支持,但使用和 LTS 版本近似的也能正常工作。
**PostgreSQL apt 仓库** 支持 amd64 和 i386 架构的 Ubuntu 长期支持版10.04、12.04 和 14.04以及非长期支持版14.04)。对于其它非长期支持版,该软件包虽然没有完全支持,但使用和 LTS 版本近似的也能正常工作。
#### Ubuntu 14.10 系统: ####
@ -36,11 +36,11 @@
**注意** 上面的库只能用于 Ubuntu 14.10。还没有升级到 Ubuntu 15.04 和 15.10。
**Ubuntu 14.04**,添加下面一行:
对于 **Ubuntu 14.04**,添加下面一行:
deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main
**Ubuntu 12.04**,添加下面一行:
对于 **Ubuntu 12.04**,添加下面一行:
deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main
@ -48,8 +48,6 @@
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc
----------
sudo apt-key add -
更新软件包列表:
@ -66,7 +64,7 @@
sudo -u postgres psql postgres
#### 例输出: ####
#### 例输出: ####
psql (9.4.5)
Type "help" for help.
@ -87,7 +85,7 @@
Enter it again:
postgres=# \q
要安装 PostgreSQL Adminpack在 postgresql 窗口输入下面的命令:
要安装 PostgreSQL Adminpack 扩展,在 postgresql 窗口输入下面的命令:
sudo -u postgres psql postgres
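进入 psql 后,安装该扩展的语句大致是(示意,适用于 PostgreSQL 9.x
    postgres=# CREATE EXTENSION adminpack;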
@ -165,7 +163,7 @@
#port = 5432
[...]
取消行的注释,然后设置你 postgresql 服务器的 IP 地址,或者设置为 * 监听所有用户。你应该谨慎设置所有远程用户都可以访问 PostgreSQL。
取消该行的注释,然后设置你的 postgresql 服务器的 IP 地址,或者设置为 * 来监听所有客户端。允许所有远程用户访问 PostgreSQL 时,你应该谨慎。
[...]
listen_addresses = '*'
@ -272,8 +270,6 @@
sudo systemctl restart postgresql
----------
sudo systemctl restart apache2
或者,
@ -284,19 +280,19 @@
现在打开你的浏览器并导航到 **http://ip-address/phppgadmin**。你会看到以下截图。
![phpPgAdmin Google Chrome_001](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_001.jpg)
![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_001.jpg)
用你之前创建的用户登录。我之前已经创建了一个名为 “**senthil**” 的用户,密码是 “**ubuntu**”,因此我以 “senthil” 用户登录。
![phpPgAdmin Google Chrome_002](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_002.jpg)
![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_002.jpg)
然后你就可以访问 phppgadmin 面板了。
![phpPgAdmin Google Chrome_003](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_003.jpg)
![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_003.jpg)
用 postgres 用户登录:
![phpPgAdmin Google Chrome_004](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_004.jpg)
![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_004.jpg)
就是这样。现在你可以用 phppgadmin 可视化创建、删除或者更改数据库了。
@ -308,7 +304,7 @@ via: http://www.unixmen.com/install-postgresql-9-4-and-phppgadmin-on-ubuntu-15-1
作者:[SK][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,25 +1,21 @@
使用 netcat [nc] 命令对 Linux 和 Unix 进行端口扫描
================================================================================
我如何在自己的服务器上找出哪些端口是开放的?如何使用 nc 命令进行端口扫描来替换 [Linux 或 类 Unix 中的 nmap 命令][1]
我如何在自己的服务器上找出哪些端口是开放的?如何使用 nc 命令进行端口扫描来替换 [Linux 或类 Unix 中的 nmap 命令][1]
nmap (“Network Mapper”)是一个开源工具用于网络探测和安全审核。如果 nmap 没有安装或者你不希望使用 nmap那你可以用 netcat/nc 命令进行端口扫描。它对于查看目标计算机上哪些端口是开放的或者运行着服务是非常有用的。你也可以使用 [nmap 命令进行端口扫描][2] 。
nmap (“Network Mapper”)是一个用于网络探测和安全审核的开源工具。如果 nmap 没有安装或者你不希望使用 nmap那你可以用 netcat/nc 命令进行端口扫描。它对于查看目标计算机上哪些端口是开放的或者运行着服务是非常有用的。你也可以使用 [nmap 命令进行端口扫描][2] 。
### 如何使用 nc 来扫描 LinuxUNIX 和 Windows 服务器的端口呢? ###
If nmap is not installed try nc / netcat command as follow. The -z flag can be used to tell nc to report open ports, rather than initiate a connection. Run nc command with -z flag. You need to specify host name / ip along with the port range to limit and speedup operation:
如果未安装 nmap试试 nc/netcat 命令,如下所示。-z 参数用来告诉 nc 报告开放的端口,而不是启动连接。在 nc 命令中使用 -z 参数时,你需要在主机名/ip 后面限定端口的范围和加速其运行:
如果未安装 nmap如下所示试试 nc/netcat 命令。-z 参数用来告诉 nc 报告开放的端口,而不是启动连接。在 nc 命令中使用 -z 参数时,你需要在主机名/ip 后面指定端口的范围来限制和加速其运行:
## 语法 ##
nc -z -v {host-name-here} {port-range-here}
### 语法 ###
nc -z -v {host-name-here} {port-range-here}
nc -z -v host-name-here ssh
nc -z -v host-name-here 22
nc -w 1 -z -v server-name-here port-number-here
## 扫描 1 to 1023 端口 ##
### 扫描 1 到 1023 端口 ###
nc -zv vip-1.vsnl.nixcraft.in 1-1023
输出示例:
@ -42,16 +38,16 @@ If nmap is not installed try nc / netcat command as follow. The -z flag can be u
nc -zv v.txvip1 smtp
nc -zvn v.txvip1 ftp
## really fast scanner with 1 timeout value ##
### 使用1秒的超时值来更快的扫描 ###
netcat -v -z -n -w 1 v.txvip1 1-1023
输出示例:
![Fig.01: Linux/Unix: Use Netcat to Establish and Test TCP and UDP Connections on a Server](http://s0.cyberciti.org/uploads/faq/2007/07/scan-with-nc.jpg)
图01Linux/Unix使用 Netcat 来测试 TCP 和 UDP 与服务器建立连接
*图01Linux/Unix使用 Netcat 来测试 TCP 和 UDP 与服务器建立连接*
1. -z : 端口扫描模式即 I/O 模式。
1. -z : 端口扫描模式即 I/O 模式。
1. -v : 显示详细信息 [使用 -vv 来输出更详细的信息]。
1. -n : 使用纯数字 IP 地址,即不用 DNS 来解析 IP 地址。
1. -w 1 : 设置超时值设置为1。
@ -88,12 +84,12 @@ via: http://www.cyberciti.biz/faq/linux-port-scanning/
作者Vivek Gite
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.cyberciti.biz/networking/nmap-command-examples-tutorials/
[2]:http://www.cyberciti.biz/tips/linux-scanning-network-for-open-ports.html
[3]:http://www.cyberciti.biz/networking/nmap-command-examples-tutorials/
[1]:https://linux.cn/article-2561-1.html
[2]:https://linux.cn/article-2561-1.html
[3]:https://linux.cn/article-2561-1.html
[4]:http://www.manpager.com/linux/man1/nc.1.html
[5]:http://www.manpager.com/linux/man1/nmap.1.html


@ -1,11 +1,10 @@
如何在命令行中使用ftp命令上传和下载文件
如何在命令行中使用 ftp 命令上传和下载文件
================================================================================
本文中介绍在Linux shell中如何使用ftp命令。包括如何连接FTP服务器上传或下载文件以及创建文件夹。尽管现在有许多不错的FTP桌面应用但是在服务器、ssh、远程回话中命令行ftp命令还是有很多应用的。比如。需要服务器从ftp仓库拉取备份。
本文介绍在 Linux shell 中如何使用 ftp 命令,包括如何连接 FTP 服务器、上传或下载文件以及创建文件夹。尽管现在有许多不错的 FTP 桌面应用,但是在服务器、SSH、远程会话中命令行 ftp 命令还是很有用的,比如需要服务器从 ftp 仓库拉取备份时。
### 步骤 1: 建立FTP连接 ###
### 步骤 1: 建立 FTP 连接 ###
想要连接FTP服务器在命令上中先输入'**ftp**'然后空格跟上FTP服务器的域名'domain.com'或者IP地址
想要连接 FTP 服务器,在命令行中先输入`ftp`,然后空格跟上 FTP 服务器的域名 'domain.com' 或者 IP 地址。
#### 例如: ####
@ -15,17 +14,17 @@
ftp user@ftpdomain.com
**注意: 本次例子使用匿名服务器.**
**注意: 本例中使用匿名服务器。**
替换下面例子中IP或域名为你的服务器地址。
替换下面例子中 IP 或域名为你的服务器地址。
![FTP登录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/ftpanonymous.png)
![FTP 登录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/ftpanonymous.png)
### 步骤 2: 使用用户名密码登录 ###
绝大多数的FTP服务器是使用密码保护的因此这些FTP服务器会询问'**用户名**'和'**密码**'.
绝大多数的 FTP 服务器是使用密码保护的,因此这些 FTP 服务器会询问'**username**'和'**password**'.
如果你连接到被动匿名FTP服务器可以尝试"anonymous"作为用户名以及空密码:
如果你连接到被称作匿名 FTP 服务器LCTT 译注:即,并不需要你有真实的用户信息即可使用的 FTP 服务器称之为匿名 FTP 服务器),可以尝试`anonymous`作为用户名以及使用空密码:
Name: anonymous
@ -40,15 +39,14 @@
登录成功。
![FTP登录成功](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/login.png)
![FTP 登录成功](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/login.png)
### 步骤 3: 目录操作 ###
FTP命令可以列出、移动和创建文件夹如同我们在本地使用我们的电脑一样。ls可以打印目录列表cd可以改变目录mkdir可以创建文件夹。
FTP 命令可以列出、移动和创建文件夹,如同我们在本地使用我们的电脑一样。`ls`可以打印目录列表,`cd`可以改变目录,`mkdir`可以创建文件夹。
#### 使用安全设置列出目录 ####
ftp> ls
服务器将返回:
@ -74,15 +72,15 @@ FTP命令可以列出、移动和创建文件夹如同我们在本地使用
![FTP中改变目录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/directory.png)
### 步骤 4: 使用FTP下载文件 ###
### 步骤 4: 使用 FTP 下载文件 ###
在下载一个文件之前我们首先需要使用lcd命令设定本地接受目录位置。
在下载一个文件之前,我们首先需要使用`lcd`命令设定本地接收目录位置。
lcd /home/user/yourdirectoryname
如果你不指定下载目录文件将会下载到你登录FTP时候的工作目录。
如果你不指定下载目录,文件将会下载到你登录 FTP 时候的工作目录。
现在我们可以使用命令get来下载文件比如
现在,我们可以使用命令 get 来下载文件,比如:
get file
@ -98,15 +96,15 @@ FTP命令可以列出、移动和创建文件夹如同我们在本地使用
![使用FTP下载文件](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/gettingfile.png)
下载多个文件可以使用通配符。例如,下面这个例子我打算下载所有以.xls结尾的文件。
下载多个文件可以使用通配符`mget` 命令。例如,下面这个例子我打算下载所有以 .xls 结尾的文件。
mget *.xls
### 步骤 5: 使用FTP上传文件 ###
### 步骤 5: 使用 FTP 上传文件 ###
完成FTP连接后FTP同样可以上传文件
完成 FTP 连接后FTP 同样可以上传文件
使用put命令上传文件
使用 `put`命令上传文件:
put file
@ -118,7 +116,7 @@ FTP命令可以列出、移动和创建文件夹如同我们在本地使用
mput *.xls
### 步骤 6: 关闭FTP连接 ###
### 步骤 6: 关闭 FTP 连接 ###
完成FTP工作后为了安全起见需要关闭连接。有三个命令可以关闭连接
@ -134,7 +132,7 @@ FTP命令可以列出、移动和创建文件夹如同我们在本地使用
![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/goodbye.png)
需要更多帮助,在使用ftp命令连接到服务器后可以使用“help”获得更多帮助。
需要更多帮助,在使用 ftp 命令连接到服务器后,可以使用`help`获得更多帮助。
![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/helpwindow.png)
@ -143,6 +141,6 @@ FTP命令可以列出、移动和创建文件夹如同我们在本地使用
via: https://www.howtoforge.com/tutorial/how-to-use-ftp-on-the-linux-shell/
译者:[VicYu](http://vicyu.net)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,11 +1,16 @@
在 Centos/RHEL 6.X 上安装 Wetty
================================================================================
![](http://www.unixmen.com/wp-content/uploads/2015/11/Terminal.png)
Wetty 是什么?
**Wetty 是什么?**
作为系统管理员,如果你是在 Linux 桌面下,你可能会使用一个软件来连接远程服务器,像 GNOME 终端(或类似的),如果你是在 Windows 下,你可能会使用像 Putty 这样的 SSH 客户端来连接,并同时可以在浏览器中查收邮件等做其他事情。
Wetty = Web + tty
作为系统管理员,如果你是在 Linux 桌面下,你可以用它像一个 GNOME 终端(或类似的)一样来连接远程服务器;如果你是在 Windows 下,你可以用它像使用 Putty 这样的 SSH 客户端一样来连接远程,然后同时可以在浏览器中上网并查收邮件等其它事情。
LCTT 译注:简而言之,这是一个基于 Web 浏览器的远程终端)
![](https://github.com/krishnasrinivas/wetty/raw/master/terminal.png)
### 第1步: 安装 epel 源 ###
@ -15,6 +20,8 @@ Wetty 是什么?
### 第2步安装依赖 ###
# yum install epel-release git nodejs npm -y
LCTT 译注:对,没错,是用 node.js 编写的)
### 第3步在安装完依赖后克隆 GitHub 仓库 ###
@ -31,13 +38,15 @@ Wetty 是什么?
### 第6步为 Wetty 安装 HTTPS 证书 ###
# openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes (complete this)
# openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes
(根据提示填写相关信息即可)
### Step 7: 通过 HTTPS 来使用 Wetty ###
### 第7步通过 HTTPS 来使用 Wetty ###
# nohup node app.js --sslkey key.pem --sslcert cert.pem -p 8080 &
### Step 8: 为 wetty 添加一个用户 ###
### 第8步为 wetty 添加一个用户 ###
# useradd <username>
# passwd <username>
@ -45,7 +54,8 @@ Wetty 是什么?
### 第9步访问 wetty ###
http://Your_IP-Address:8080
give the credential have created before for wetty and access
输入你之前为 wetty 创建的用户凭证(用户名和密码),即可访问。
到此结束!
@ -55,7 +65,7 @@ via: http://www.unixmen.com/install-wetty-centosrhel-6-x/
作者:[Debojyoti Das][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -19,4 +19,4 @@ via来源链接
[6]:
[7]:
[8]:
[9]:
[9]:


@ -1,43 +0,0 @@
Flowsnow translating
Apple Swift Programming Language Comes To Linux
================================================================================
![](http://itsfoss.com/wp-content/uploads/2015/12/Apple-Swift-Open-Source.jpg)
Apple and Open Source toogether? Yes! Apples Swift programming language is now open source. This should not come as surprise because [Apple had already announced it six months back][1].
Apple announced the launch of open source Swift community came this week. A [new website][2] dedicated to the open source Swift community has been put in place with the following message:
> We are excited by this new chapter in the story of Swift. After Apple unveiled the Swift programming language, it quickly became one of the fastest growing languages in history. Swift makes it easy to write software that is incredibly fast and safe by design. Now that Swift is open source, you can help make the best general purpose programming language available everywhere.
[swift.org][2] will work as the one stop shop providing downloads for various platforms, community guidelines, news, getting started tutorials, instructions for contribution to open source Swift, documentation and other guidelines. If you are looking forward to learn Swift, this website must be bookmarked.
In this announcement, a new package manager for easy sharing and building code has been made available as well.
Most important of all for Linux users, the source code is now available at [Github][3]. You can check it out from the link below:
- [Apple Swift Source Code][3]
In addition to that, there are prebuilt binaries for Ubuntu 14.04 and 15.10.
- [Swift binaries for Ubuntu][4]
Dont rush to use them because these are development branches and will not be suitable for production machine. So avoid it for now. Once stable version of Swift for Linux is released, I hope that Ubuntu will include it in [umake][5] on the line of [Visual Studio][6].
--------------------------------------------------------------------------------
via: http://itsfoss.com/swift-open-source-linux/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/apple-open-sources-swift-programming-language-linux/
[2]:https://swift.org/
[3]:https://github.com/apple
[4]:https://swift.org/download/#latest-development-snapshots
[5]:https://wiki.ubuntu.com/ubuntu-make
[6]:http://itsfoss.com/install-visual-studio-code-ubuntu/


@ -1,70 +0,0 @@
Translating by ZTinoZ
7 ways hackers can use Wi-Fi against you
================================================================================
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/intro_title-100626673-orig.jpg)
### 7 ways hackers can use Wi-Fi against you ###
Wi-Fi — oh so convenient, yet oh so dangerous. Here are seven ways you could be giving away your identity through a Wi-Fi connection and what to do instead.
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/1_free-hotspots-100626674-orig.jpg)
### Using free hotspots ###
They seem to be everywhere, and their numbers are expected to [quadruple over the next four years][1]. But many of them are untrustworthy, created just so your login credentials, to email or even more sensitive accounts, can be picked up by hackers using “sniffers” — software that captures any information you submit over the connection. The best defense against sniffing hackers is to use a VPN (virtual private network). A VPN keeps your private data protected because it encrypts what you input.
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/2_online-banking-100626675-orig.jpg)
### Banking online ###
You might think that no one needs to be warned against banking online using free Wi-Fi, but cybersecurity firm Kaspersky Lab says that [more than 100 banks worldwide have lost $900 million][2] from cyberhacking, so it would seem that a lot of people are doing it. If you want to use the free Wi-Fi in a coffee shop because youre confident it will be legitimate, confirm the exact network name with the barista. Its pretty easy for [someone else in the shop with a router to set up an open connection][3] with a name that seems like it would be the name of the shops Wi-Fi.
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/3_keeping-wifi-on-100626676-orig.jpg)
### Keeping Wi-Fi on all the time ###
When your phones Wi-Fi is automatically enabled, you can be connected to an unsecure network without even realizing it. Use your phones [location-based Wi-Fi feature][4], if its available. It will turn off your Wi-Fi when youre away from your saved networks and will turn back on when youre within range.
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/4_not-using-firewall-100626677-orig.jpg)
### Not using a firewall ###
A firewall is your first line of defense against malicious intruders. Its meant to let good traffic through your computer on a network and keep hackers and malware out. You should turn it off only when your antivirus software has its own firewall.
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/5_browsing-unencrypted-sites-100626678-orig.jpg)
### Browsing unencrypted websites ###
Sad to say, [55% of the Webs top 1 million sites dont offer encryption][5]. An unencrypted website allows all data transmissions to be viewed by the prying eyes of hackers. Your browser will indicate when a site is secure (youll see a gray padlock with Mozilla Firefox, for example, and a green lock icon with Chrome). But even a secure website cant protect you from sidejackers, who can steal the cookies from a website you visited, whether its a valid site or not, through a public network.
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/6_updating-security-software-100626679-orig.jpg)
### Not updating your security software ###
If you want to ensure that your own network is well protected, upgrade the firmware of your router. All you have to do is go to your routers administration page to check. Normally, you can download the newest firmware right from the manufacturers site.
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/7_securing-home-wifi-100626680-orig.jpg)
### Not securing your home Wi-Fi ###
Needless to say, it is important to set up a password that is not too easy to guess, and change your connections default name. You can also filter your MAC address so your router will recognize only certain devices.
**Josh Althuser** is an open software advocate, Web architect and tech entrepreneur. Over the past 12 years, he has spent most of his time advocating for open-source software and managing teams and projects, as well as providing enterprise-level consultancy for Web applications and helping bring their products to the market. You may connect with him on [Twitter][6].
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/3003170/mobile-security/7-ways-hackers-can-use-wi-fi-against-you.html
作者:[Josh Althuser][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/JoshAlthuser
[1]:http://www.pcworld.com/article/243464/number_of_wifi_hotspots_to_quadruple_by_2015_says_study.html
[2]:http://www.nytimes.com/2015/02/15/world/bank-hackers-steal-millions-via-malware.html?hp&amp;action=click&amp;pgtype=Homepage&amp;module=first-column-region%C2%AEion=top-news&amp;WT.nav=top-news&amp;_r=3
[3]:http://news.yahoo.com/blogs/upgrade-your-life/banking-online-not-hacked-182159934.html
[4]:http://pocketnow.com/2014/10/15/should-you-leave-your-smartphones-wifi-on-or-turn-it-off
[5]:http://www.cnet.com/news/chrome-becoming-tool-in-googles-push-for-encrypted-web/
[6]:https://twitter.com/JoshAlthuser


@ -1,220 +0,0 @@
19 Years of KDE History: Step by Step
================================================================================
youtube 视频
<iframe width="660" height="371" src="https://www.youtube.com/embed/1UG4lQOMBC4?feature=oembed" frameborder="0" allowfullscreen></iframe>
### Introduction ###
KDE one of most functional desktop environment ever. Its open source and free for use. 19 years ago, 14 october 1996 german programmer Matthias Ettrich has started a development of this beautiful environment. KDE provides the shell and many applications for everyday using. Today KDE uses the hundred thousand peoples over the world on Unix and Windows operating system. 19 years serious age for software projects. Time to return and see how it begin.
K Desktop Environment brought several new aspects: a new design, good look & feel, consistency, ease of use, and powerful applications for typical desktop work and special use cases. The name "KDE" is a wordplay on "Common Desktop Environment", with the "K" standing for "Kool". The first KDE version used the proprietary Qt framework from Trolltech (the parent company of Qt) with dual licensing: the open source QPL (Q Public License) and a proprietary commercial license. In 2000, Trolltech released some Qt libraries under the GPL; Qt 4.5 was released under LGPL 2.1. Since 2009, KDE has been compiled from three products: Plasma Workspaces (the shell), KDE Applications, and KDE Platform, together known as the KDE Software Compilation.
### Releases ###
#### Pre-Release 14 October 1996 ####
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/0b3.png)
Kool Desktop Environment. The word "Kool" would be dropped later. In the beginning, all components were released to the developer community separately, without any coordinated timeframe for the overall project. The first KDE communication channel was a mailing list, kde@fiwi02.wiwi.uni-Tubingen.de.
#### KDE 1.0 July 12, 1998 ####
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/10.png)
This version received mixed reception. Many criticized the use of the Qt software framework back then under the FreeQt license which was claimed to not be compatible with free software and advised the use of Motif or LessTif instead. Despite that criticism, KDE was well received by many users and made its way into the first Linux distributions.
![28 January 1999](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/11.png)
28 January 1999
An update, **K Desktop Environment 1.1**, was faster, more stable and included many small improvements. It also included a new set of icons, backgrounds and textures. Among this overhauled artwork was a new KDE logo by Torsten Rahn consisting of the letter K in front of a gear which is used in revised form to this day.
#### KDE 2.0 October 23, 2000 ####
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/20.png)
Major updates:

- DCOP (Desktop COmmunication Protocol), a client-to-client communications protocol
- KIO, an application I/O library
- KParts, a component object model
- KHTML, an HTML 4.0 compliant rendering and drawing engine
![26 February 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/21.png)
26 February 2001
**K Desktop Environment 2.1** release inaugurated the media player noatun, which used a modular, plugin design. For development, K Desktop Environment 2.1 was bundled with KDevelop.
![15 August 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/22.png)
15 August 2001
The **KDE 2.2** release featured up to a 50% improvement in application startup time on GNU/Linux systems and increased stability and capabilities for HTML rendering and JavaScript; some new features in KMail.
#### KDE 3.0 April 3, 2002 ####
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/30.png)
K Desktop Environment 3.0 introduced better support for restricted usage, a feature demanded by certain environments such as kiosks, Internet cafes and enterprise deployments, which disallows the user from having full access to all capabilities of a piece of software.
![28 January 2003](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/31.png)
28 January 2003
**K Desktop Environment 3.1** introduced new default window (Keramik) and icon (Crystal) styles as well as several feature enhancements.
![3 February 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/32.png)
3 February 2004
**K Desktop Environment 3.2** included new features, such as inline spell checking for web forms and emails, improved e-mail and calendaring support, tabs in Konqueror and support for Microsoft Windows desktop sharing protocol (RDP).
![19 August 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/33.png)
19 August 2004
**K Desktop Environment 3.3** focused on integrating different desktop components. Kontact was integrated with Kolab, a groupware application, and Kpilot. Konqueror was given better support for instant messaging contacts, with the capability to send files to IM contacts and support for IM protocols (e.g., IRC).
![16 March 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/34.png)
16 March 2005
**K Desktop Environment 3.4** focused on improving accessibility. The update added a text-to-speech system with support for Konqueror, Kate, KPDF, the standalone application KSayIt and text-to-speech synthesis on the desktop.
![29 November 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/35.png)
29 November 2005
**The K Desktop Environment 3.5** release added SuperKaramba, which provides integrated and simple-to-install widgets to the desktop. Konqueror was given an ad-block feature and became the second web browser to pass the Acid2 CSS test.
#### KDE SC 4.0 January 11, 2008 ####
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/400.png)
The majority of development went into implementing most of the new technologies and frameworks of KDE 4. Plasma and the Oxygen style were two of the biggest user-facing changes. Dolphin replaced Konqueror as the file manager, and Okular became the default document viewer.
![29 July 2008](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/401.png)
29 July 2008
**KDE 4.1** includes a shared emoticon theming system, which is used in PIM and Kopete, and DXS, a service that lets applications download and install data from the Internet with one click. Also introduced are GStreamer, QuickTime 7, and DirectShow 9 Phonon backends. New applications:

- Dragon Player
- Kontact
- Skanlite, software for scanners
- Step, a physics simulator
- New games: Kdiamond, Kollision, KBreakout and others
![27 January 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/402.png)
27 January 2009
**KDE 4.2** is considered a significant improvement beyond KDE 4.1 in nearly all aspects, and a suitable replacement for KDE 3.5 for most users.
![4 August 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/403.png)
4 August 2009
**KDE 4.3** fixed over 10,000 bugs and implemented almost 2,000 feature requests. Integration with other technologies, such as PolicyKit, NetworkManager & Geolocation services, was another focus of this release.
![9 February 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/404.png)
9 February 2010
**KDE SC 4.4** is based on version 4.6 of the Qt 4 toolkit, and introduced the new KAddressBook application and the first release of Kopete.
![10 August 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/405.png)
10 August 2010
**KDE SC 4.5** has some new features: integration of the WebKit library, an open-source web browser engine, which is used in major browsers such as Apple Safari and Google Chrome. KPackageKit replaced Kpackage.
![26 January 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/406.png)
26 January 2011
**KDE SC 4.6** has better OpenGL compositing along with the usual myriad of fixes and features.
![27 July 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/407.png)
27 July 2011
**KDE SC 4.7** updated KWin with OpenGL ES 2.0 compatibility and Qt Quick, enhanced Plasma Desktop in many ways, and added a lot of new functions to the general applications. 12,000 bugs were fixed.
![25 January 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/408.png)
25 January 2012
**KDE SC 4.8**: better KWin performance and Wayland support, and a new design for Dolphin.
![1 August 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/409.png)
1 August 2012
**KDE SC 4.9**: several improvements to the Dolphin file manager, including the reintroduction of in-line file renaming, back and forward mouse buttons, improvement of the places panel and better usage of file metadata.
![6 February 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/410.png)
6 February 2013
**KDE SC 4.10**: many of the default Plasma widgets were rewritten in QML, and Nepomuk, Kontact and Okular received significant speed improvements.
![14 August 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/411.png)
14 August 2013
**KDE SC 4.11**: Kontact and Nepomuk received many optimizations. The first generation Plasma Workspaces entered maintenance-only development mode.
![18 December 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/412.png)
18 December 2013
**KDE SC 4.12**: Kontact received substantial improvements, along with many small improvements across the release.
![16 April 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/413.png)
16 April 2014
**KDE SC 4.13**: Nepomuk semantic desktop search was replaced with KDEs in house Baloo. KDE SC 4.13 was released in 53 different translations.
![20 August 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/414.png)
20 August 2014
**KDE SC 4.14**: the release primarily focused on stability, with numerous bugs fixed and few new features added. This was the final KDE SC 4 release.
#### KDE Plasma 5.0 July 15, 2014 ####
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/500.png)
KDE Plasma 5, the fifth generation of KDE, brought massive improvements in design and under the hood: a new default theme, Breeze; a complete migration to QML; better performance with OpenGL; and better support for HiDPI displays.
![11 November 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/501.png)
11 November 2014
**KDE Plasma 5.1**: Ported missing features from Plasma 4.
![27 January 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/502.png)
27 January 2015
**KDE Plasma 5.2**: New components: BlueDevil, KSSHAskPass, Muon, SDDM theme configuration, KScreen, GTK+ style configuration and KDecoration.
![28 April 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/503.png)
28 April 2015
**KDE Plasma 5.3**: Tech preview of Plasma Media Center. New Bluetooth and touchpad applets. Enhanced power management.
![25 August 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/504.png)
25 August 2015
**KDE Plasma 5.4**: Initial Wayland session, new QML-based audio volume applet, and alternative full-screen application launcher.
Big thanks to the [KDE][1] developers and community, to Wikipedia for the [descriptions][2], and to all my readers. Be free, and use open source software like KDE.
--------------------------------------------------------------------------------
via: https://tlhp.cf/kde-history/
作者:[Pavlo Rudyi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://tlhp.cf/author/paul/
[1]:https://www.kde.org/
[2]:https://en.wikipedia.org/wiki/KDE_Plasma_5

View File

@ -1,285 +0,0 @@
translating。。。
Review: 5 memory debuggers for Linux coding
================================================================================
![](http://images.techhive.com/images/article/2015/11/penguinadmin-2400px-100627186-primary.idge.jpg)
Credit: [Moini][1]
As a programmer, I'm aware that I tend to make mistakes -- and why not? Even programmers are human. Some errors are detected during code compilation, while others get caught during software testing. However, a category of error exists that usually does not get detected at either of these stages and that may cause the software to behave unexpectedly -- or worse, terminate prematurely.
If you haven't already guessed it, I am talking about memory-related errors. Debugging these errors manually can be not only time-consuming, but the errors themselves can be difficult to find and correct. Also, it's worth mentioning that these errors are surprisingly common, especially in software written in programming languages like C and C++, which were designed for use with [manual memory management][2].
Thankfully, several programming tools exist that can help you find memory errors in your software programs. In this roundup, I assess five popular, free and open-source memory debuggers that are available for Linux: Dmalloc, Electric Fence, Memcheck, Memwatch and Mtrace. I've used all five in my day-to-day programming, and so these reviews are based on practical experience.
### [Dmalloc][3] ###
**Developer**: Gray Watson
**Reviewed version**: 5.5.2
**Linux support**: All flavors
**License**: Creative Commons Attribution-Share Alike 3.0 License
Dmalloc is a memory-debugging tool developed by Gray Watson. It is implemented as a library that provides wrappers around standard memory management functions like **malloc(), calloc(), free()** and more, enabling programmers to detect problematic code.
![cw dmalloc output](http://images.techhive.com/images/article/2015/11/cw_dmalloc-output-100627040-large.idge.png)
Dmalloc
As listed on the tool's Web page, the debugging features it provides include memory-leak tracking, [double free][4] error tracking and [fence-post write detection][5]. Other features include file/line number reporting and general logging of statistics.
#### What's new ####
Version 5.5.2 is primarily a [bug-fix release][6] containing corrections for a couple of build and install problems.
#### What's good about it ####
The best part about Dmalloc is that it's extremely configurable. For example, you can configure it to include support for C++ programs as well as threaded applications. A useful functionality it provides is runtime configurability, which means that you can easily enable/disable the features the tool provides while it is being executed.
You can also use Dmalloc with the [GNU Project Debugger (GDB)][7] -- just add the contents of the dmalloc.gdb file (located in the contrib subdirectory in Dmalloc's source package) to the .gdbinit file in your home directory.
Another thing that I really like about Dmalloc is its extensive documentation. Just head to the [documentation section][8] on its official website, and you'll get everything from how to download, install, run and use the library to detailed descriptions of the features it provides and an explanation of the output file it produces. There's also a section containing solutions to some common problems.
#### Other considerations ####
Like Mtrace, Dmalloc requires programmers to make changes to their program's source code. In this case you may, at the very least, want to add the **dmalloc.h** header, because it allows the tool to report the file/line numbers of calls that generate problems, something that is very useful as it saves time while debugging.
In addition, the Dmalloc library, which is produced after the package is compiled, needs to be linked with your program while the program is being compiled.
However, complicating things somewhat is the fact that you also need to set an environment variable, dubbed **DMALLOC_OPTIONS**, that the debugging tool uses to configure the memory debugging features -- as well as the location of the output file -- at runtime. While you can manually assign a value to the environment variable, beginners may find that process a bit tough, given that the Dmalloc features you want to enable are listed as part of that value, and are actually represented as a sum of their respective hexadecimal values -- you can read more about it [here][9].
An easier way to set the environment variable is to use the [Dmalloc Utility Program][10], which was designed for just that purpose.
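To make this concrete, here is a minimal sketch of the typical Dmalloc workflow. The file name, token values and shell syntax below are illustrative assumptions, not the one true invocation; check the Dmalloc documentation for the exact flags your installation supports:

    /* leaky.c -- a hypothetical example instrumented for Dmalloc.
     *
     * Build and run (names and flags are assumptions; adjust to your setup):
     *   gcc -g -DDMALLOC leaky.c -o leaky -ldmalloc
     *   eval `dmalloc -l dmalloc.log -i 100 low`   # utility sets DMALLOC_OPTIONS
     *   ./leaky
     * Any problems are then reported in dmalloc.log.
     */
    #include <stdlib.h>
    #include <string.h>
    #ifdef DMALLOC
    #include "dmalloc.h"   /* should be the last include, for file/line reporting */
    #endif

    int main(void)
    {
        char *buf = malloc(32);   /* deliberately never freed */
        strcpy(buf, "hello");
        return 0;                 /* Dmalloc logs the 32-byte leak on exit */
    }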
#### Bottom line ####
Dmalloc's real strength lies in the configurability options it provides. It is also highly portable, having been successfully ported to many OSes, including AIX, BSD/OS, DG/UX, Free/Net/OpenBSD, GNU/Hurd, HPUX, Irix, Linux, MS-DOS, NeXT, OSF, SCO, Solaris, SunOS, Ultrix, Unixware and even Unicos (on a Cray T3E). Although the tool has a bit of a learning curve associated with it, the features it provides are worth it.
### [Electric Fence][15] ###
**Developer**: Bruce Perens
**Reviewed version**: 2.2.3
**Linux support**: All flavors
**License**: GNU GPL (version 2)
Electric Fence is a memory-debugging tool developed by Bruce Perens. It is implemented in the form of a library that your program needs to link to, and is capable of detecting overruns of memory allocated on the [heap][11], as well as accesses to memory that has already been released.
![cw electric fence output](http://images.techhive.com/images/article/2015/11/cw_electric-fence-output-100627041-large.idge.png)
Electric Fence
As the name suggests, Electric Fence creates a virtual fence around each allocated buffer in a way that any illegal memory access results in a [segmentation fault][12]. The tool supports both C and C++ programs.
#### What's new ####
Version 2.2.3 contains a fix for the tool's build system, allowing it to actually pass the -fno-builtin-malloc option to the [GNU Compiler Collection (GCC)][13].
#### What's good about it ####
The first thing that I liked about Electric Fence is that -- unlike Memwatch, Dmalloc and Mtrace -- it doesn't require you to make any changes in the source code of your program. You just need to link your program with the tool's library during compilation.
Secondly, the way the debugging tool is implemented makes sure that a segmentation fault is generated on the very first instruction that causes a bounds violation, which is always better than having the problem detected at a later stage.
Electric Fence always produces a copyright message in output irrespective of whether an error was detected or not. This behavior is quite useful, as it also acts as a confirmation that you are actually running an Electric Fence-enabled version of your program.
#### Other considerations ####
On the other hand, what I really miss in Electric Fence is the ability to detect memory leaks, as it is one of the most common and potentially serious problems that software written in C/C++ has. In addition, the tool cannot detect overruns of memory allocated on the stack, and is not thread-safe.
Given that the tool allocates an inaccessible virtual memory page both before and after a user-allocated memory buffer, it ends up consuming a lot of extra memory if your program makes too many dynamic memory allocations.
Another limitation of the tool is that it cannot explicitly tell exactly where the problem lies in your program's code -- all it does is produce a segmentation fault whenever it detects a memory-related error. To find out the exact line number, you'll have to debug your Electric Fence-enabled program with a tool like [the GNU Project Debugger (GDB)][14], which in turn depends on the -g compiler option to produce line numbers in output.
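As a quick illustration of that workflow, here is a sketch under some assumptions: -lefence is the usual library name on Linux distributions, but it may differ on yours, and the file name is made up:

    /* overrun.c -- a one-byte heap overrun for Electric Fence to catch.
     *
     * Hypothetical build/run (the library name may vary by distribution):
     *   gcc -g overrun.c -o overrun -lefence
     *   gdb ./overrun        # 'run', then 'backtrace' at the SIGSEGV
     */
    #include <stdlib.h>

    int main(void)
    {
        char *buf = malloc(16);   /* 16 is a multiple of the word size */
        buf[16] = 'x';            /* first byte past the buffer: Electric Fence
                                     put an inaccessible page right after it,
                                     so this write segfaults immediately */
        free(buf);
        return 0;
    }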
Finally, although Electric Fence is capable of detecting most buffer overruns, an exception is the scenario where the allocated buffer size is not a multiple of the word size of the system -- in that case, an overrun (even if it's only a few bytes) won't be detected.
#### Bottom line ####
Despite all its limitations, where Electric Fence scores is the ease of use -- just link your program with the tool once, and it'll alert you every time it detects a memory issue it's capable of detecting. However, as already mentioned, the tool requires you to use a source-code debugger like GDB.
### [Memcheck][16] ###
**Developer**: [Valgrind Developers][17]
**Reviewed version**: 3.10.1
**Linux support**: All flavors
**License**: GPL
[Valgrind][18] is a suite that provides several tools for debugging and profiling Linux programs. Although it works with programs written in many different languages -- such as Java, Perl, Python, Assembly code, Fortran, Ada and more -- the tools it provides are largely aimed at programs written in C and C++.
The most popular Valgrind tool is Memcheck, a memory-error detector that can detect issues such as memory leaks, invalid memory access, uses of undefined values and problems related to allocation and deallocation of heap memory.
#### What's new ####
This [release][19] of the suite (3.10.1) is a minor one that primarily contains fixes to bugs reported in version 3.10.0. In addition, it also "backports fixes for all reported missing AArch64 ARMv8 instructions and syscalls from the trunk."
#### What's good about it ####
Memcheck, like all other Valgrind tools, is basically a command line utility. It's very easy to use: If you normally run your program on the command line in a form such as prog arg1 arg2, you just need to add a few values, like this: valgrind --leak-check=full prog arg1 arg2.
![cw memcheck output](http://images.techhive.com/images/article/2015/11/cw_memcheck-output-100627037-large.idge.png)
Memcheck
(Note: You don't need to mention Memcheck anywhere in the command line because it's the default Valgrind tool. However, you do need to initially compile your program with the -g option -- which adds debugging information -- so that Memcheck's error messages include exact line numbers.)
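For illustration, here is a minimal sketch of a leak that Memcheck will flag, together with the invocation described above (the file and program names are just placeholders):

    /* leak.c -- a simple leak for Memcheck to report.
     *
     * Build and run (no source changes are required):
     *   gcc -g leak.c -o leak
     *   valgrind --leak-check=full ./leak
     */
    #include <stdlib.h>

    static void make_leak(void)
    {
        int *data = malloc(100 * sizeof(int));   /* pointer lost on return */
        data[0] = 42;
    }

    int main(void)
    {
        make_leak();
        return 0;   /* Memcheck reports the bytes as "definitely lost",
                       with the file and line of the offending malloc() */
    }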
What I really like about Memcheck is that it provides a lot of command line options (such as the --leak-check option mentioned above), allowing you to not only control how the tool works but also how it produces the output.
For example, you can enable the --track-origins option to see information on the sources of uninitialized data in your program. Enabling the --show-mismatched-frees option will have Memcheck check that memory allocation and deallocation techniques match. For code written in the C language, Memcheck will make sure that only the free() function is used to deallocate memory allocated by malloc(), while for code written in C++, the tool will check whether or not the delete and delete[] operators are used to deallocate memory allocated by new and new[], respectively. If a mismatch is detected, an error is reported.
But the best part, especially for beginners, is that the tool even produces suggestions about which command line option the user should use to make the output more meaningful. For example, if you do not use the basic --leak-check option, it will produce an output suggesting: "Rerun with --leak-check=full to see details of leaked memory." And if there are uninitialized variables in the program, the tool will generate a message that says, "Use --track-origins=yes to see where uninitialized values come from."
Another useful feature of Memcheck is that it lets you [create suppression files][20], allowing you to suppress certain errors that you can't fix at the moment -- this way you won't be reminded of them every time the tool is run. It's worth mentioning that there already exists a default suppression file that Memcheck reads to suppress errors in the system libraries, such as the C library, that come pre-installed with your OS. You can either create a new suppression file for your use, or edit the existing one (usually /usr/lib/valgrind/default.supp).
For those seeking advanced functionality, it's worth knowing that Memcheck can also [detect memory errors][21] in programs that use [custom memory allocators][22]. In addition, it also provides [monitor commands][23] that can be used while working with Valgrind's built-in gdbserver, as well as a [client request mechanism][24] that allows you not only to tell the tool facts about the behavior of your program, but make queries as well.
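As a taste of the client request mechanism, the sketch below uses two of the macros from valgrind/memcheck.h. It assumes the Valgrind development headers are installed; the macros compile to no-ops when the program runs outside Valgrind:

    /* requests.c -- a sketch of Memcheck client requests.
     *
     *   gcc -g requests.c -o requests
     *   valgrind ./requests
     */
    #include <stdlib.h>
    #include <valgrind/memcheck.h>

    int main(void)
    {
        int *buf = malloc(4 * sizeof(int));
        buf[0] = 1;

        /* Ask Memcheck whether this range is fully initialized; it complains
           here because buf[1] through buf[3] are still undefined. */
        VALGRIND_CHECK_MEM_IS_DEFINED(buf, 4 * sizeof(int));

        /* Request an immediate leak scan at this point in the program. */
        VALGRIND_DO_LEAK_CHECK;

        free(buf);
        return 0;
    }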
#### Other considerations ####
While there's no denying that Memcheck can save you a lot of debugging time and frustration, the tool uses a lot of memory, and so can make your program execution significantly slower (around 20 to 30 times, [according to the documentation][25]).
Aside from this, there are some other limitations, too. According to some user comments, Memcheck apparently isn't [thread-safe][26]; it doesn't detect [static buffer overruns][27]. Also, there are some Linux programs, like [GNU Emacs][28], that currently do not work with Memcheck.
If you're interested in taking a look, an exhaustive list of Valgrind's limitations can be found [here][29].
#### Bottom line ####
Memcheck is a handy memory-debugging tool for both beginners as well as those looking for advanced features. While it's very easy to use if all you need is basic debugging and error checking, there's a bit of a learning curve if you want to use features like suppression files or monitor commands.
Although it has a long list of limitations, Valgrind (and hence Memcheck) claims on its site that it is used by [thousands of programmers][30] across the world -- the team behind the tool says it's received feedback from users in over 30 countries, with some of them working on projects with up to a whopping 25 million lines of code.
### [Memwatch][31] ###
**Developer**: Johan Lindh
**Reviewed version**: 2.71
**Linux support**: All flavors
**License**: GNU GPL
Memwatch is a memory-debugging tool developed by Johan Lindh. Although it's primarily a memory-leak detector, it is also capable (according to its Web page) of detecting other memory-related issues like [double-free error tracking and erroneous frees][32], buffer overflow and underflow, [wild pointer][33] writes, and more.
The tool works with programs written in C. Although you can also use it with C++ programs, it's not recommended (according to the Q&A file that comes with the tool's source package).
#### What's new ####
This version adds ULONG_LONG_MAX to detect whether a program is 32-bit or 64-bit.
#### What's good about it ####
Like Dmalloc, Memwatch comes with good documentation. You can refer to the USING file if you want to learn things like how the tool works; how it performs initialization, cleanup and I/O operations; and more. Then there is a FAQ file that is aimed at helping users in case they face any common error while using Memwatch. Finally, there is a test.c file that contains a working example of the tool for your reference.
![cw memwatch output](http://images.techhive.com/images/article/2015/11/cw_memwatch_output-100627038-large.idge.png)
Memwatch
Unlike Mtrace, the log file to which Memwatch writes the output (usually memwatch.log) is in human-readable form. Also, instead of truncating, Memwatch appends the memory-debugging output to the file each time the tool is run, allowing you to easily refer to the previous outputs should the need arise.
It's also worth mentioning that when you execute your program with Memwatch enabled, the tool produces a one-line output on [stdout][34] informing you that some errors were found -- you can then head to the log file for details. If no such error message is produced, you can rest assured that the log file won't contain any errors -- this actually saves time if you're running the tool several times.
Another thing that I liked about Memwatch is that it also provides a way through which you can capture the tool's output from within the code, and handle it the way you like (refer to the mwSetOutFunc() function in the Memwatch source code for more on this).
#### Other considerations ####
Like Mtrace and Dmalloc, Memwatch requires you to add extra code to your source file -- you have to include the memwatch.h header file in your code. Also, while compiling your program, you need to either compile memwatch.c along with your program's source files or link in the object module produced by compiling that file, as well as define the MEMWATCH and MW_STDIO variables on the command line. Needless to say, the -g compiler option is also required for your program if you want exact line numbers in the output.
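Putting those steps together, here is a hedged sketch of a Memwatch build. The memwatch.c and memwatch.h files come from the tool's source package; the demo file name is made up:

    /* mwdemo.c -- a sketch of a program built with Memwatch.
     *
     * Hypothetical build (memwatch.c/.h ship in the Memwatch source package):
     *   gcc -g -DMEMWATCH -DMW_STDIO mwdemo.c memwatch.c -o mwdemo
     * Results are appended to memwatch.log after each run.
     */
    #include <stdlib.h>
    #include "memwatch.h"

    int main(void)
    {
        char *lost = malloc(20);   /* never freed: logged as unfreed memory */
        char *ok = malloc(10);
        free(ok);                  /* balanced allocation: nothing to report */
        (void)lost;
        return 0;
    }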
There are some features that it lacks, though. For example, the tool cannot detect attempts to write to an address that has already been freed or to read data from outside the allocated memory. Also, it's not thread-safe. Finally, as I've already pointed out in the beginning, there is no guarantee on how the tool will behave if you use it with programs written in C++.
#### Bottom line ####
Memwatch can detect many memory-related problems, making it a handy debugging tool when dealing with projects written in C. Given that it has a very small source code, you can learn how the tool works, debug it if the need arises, and even extend or update its functionality as per your requirements.
### [Mtrace][35] ###
**Developers**: Roland McGrath and Ulrich Drepper
**Reviewed version**: 2.21
**Linux support**: All flavors
**License**: GNU LGPL
Mtrace is a memory-debugging tool included in [the GNU C library][36]. It works with both C and C++ programs on Linux, and detects memory leaks caused by unbalanced calls to the malloc() and free() functions.
![cw mtrace output](http://images.techhive.com/images/article/2015/11/cw_mtrace-output-100627039-large.idge.png)
Mtrace
The tool is implemented in the form of a function called mtrace(), which traces all malloc/free calls made by a program and logs the information in a user-specified file. Because the file contains data in computer-readable format, a Perl script -- also named mtrace -- is used to convert and display it in human-readable form.
#### What's new ####
[The Mtrace source][37] and [the Perl file][38] that now come with the GNU C library (version 2.21) add nothing new to the tool aside from an update to the copyright dates.
#### What's good about it ####
The best part about Mtrace is that the learning curve for it isn't steep; all you need to understand is how and where to add the mtrace() -- and the corresponding muntrace() -- function in your code, and how to use the Mtrace Perl script. The latter is very straightforward -- all you have to do is run the mtrace <program-executable> <log-file-generated-upon-program-execution> command. (For an example, see the last command in the screenshot above.)
Another thing that I like about Mtrace is that it's scalable -- which means that you can not only use it to debug a complete program, but can also use it to detect memory leaks in individual modules of the program. Just call the mtrace() and muntrace() functions within each module.
Finally, since the tool is triggered when the mtrace() function -- which you add in your program's source code -- is executed, you have the flexibility to enable the tool dynamically (during program execution) [using signals][39].
#### Other considerations ####
Because the calls to the mtrace() and muntrace() functions -- which are declared in the mcheck.h file that you need to include in your program's source -- are fundamental to Mtrace's operation (the muntrace() function is not [always required][40]), the tool requires programmers to make changes in their code at least once.
Be aware that you need to compile your program with the -g option (provided by both the [GCC][41] and [G++][42] compilers), which enables the debugging tool to display exact line numbers in the output. In addition, some programs (depending on how big their source code is) can take a long time to compile. Finally, compiling with -g increases the size of the executable (because it produces extra information for debugging), so you have to remember that the program needs to be recompiled without -g after the testing has been completed.
To use Mtrace, you need to have some basic knowledge of environment variables in Linux, given that the path to the user-specified file -- which the mtrace() function uses to log all the information -- has to be set as a value for the MALLOC_TRACE environment variable before the program is executed.
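Here is a minimal sketch of the whole Mtrace workflow described above (the file names are placeholders):

    /* trace.c -- a sketch of Mtrace instrumentation.
     *
     *   gcc -g trace.c -o trace
     *   MALLOC_TRACE=trace.log ./trace
     *   mtrace ./trace trace.log     # the Perl script prints a readable report
     */
    #include <mcheck.h>
    #include <stdlib.h>

    int main(void)
    {
        mtrace();                  /* start logging malloc/free calls */
        char *lost = malloc(64);   /* never freed: flagged as a memory leak */
        char *ok = malloc(16);
        free(ok);                  /* balanced pair: not reported */
        (void)lost;
        muntrace();                /* stop logging */
        return 0;
    }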
Feature-wise, Mtrace is limited to detecting memory leaks and attempts to free up memory that was never allocated. It can't detect other memory-related issues such as illegal memory access or use of uninitialized memory. Also, [there have been complaints][43] that it's not [thread-safe][44].
### Conclusions ###
Needless to say, each memory debugger that I've discussed here has its own qualities and limitations. So, which one is best suited for you mostly depends on what features you require, although ease of setup and use might also be a deciding factor in some cases.
Mtrace is best suited for cases where you just want to catch memory leaks in your software program. It can save you some time, too, since the tool comes pre-installed on your Linux system, something which is also helpful in situations where the development machines aren't connected to the Internet or you aren't allowed to download a third party tool for any kind of debugging.
Dmalloc, on the other hand, can not only detect more error types compared to Mtrace, but also provides more features, such as runtime configurability and GDB integration. Also, unlike any other tool discussed here, Dmalloc is thread-safe. Not to mention that it comes with detailed documentation, making it ideal for beginners.
Although Memwatch comes with even more comprehensive documentation than Dmalloc, and can detect even more error types, you can only use it with software written in the C programming language. One of its features that stands out is that it lets you handle its output from within the code of your program, something that is helpful in case you want to customize the format of the output.
If making changes to your program's source code is not what you want, you can use Electric Fence. However, keep in mind that it can only detect a couple of error types, and that doesn't include memory leaks. Plus, you also need to know GDB basics to make the most out of this memory-debugging tool.
Memcheck is probably the most comprehensive of them all. It detects more error types and provides more features than any other tool discussed here -- and it doesn't require you to make any changes in your program's source code. But be aware that, while the learning curve is not very high for basic usage, if you want to use its advanced features, a level of expertise is definitely required.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/article/3003957/linux/review-5-memory-debuggers-for-linux-coding.html
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.computerworld.com/author/Himanshu-Arora/
[1]:https://openclipart.org/detail/132427/penguin-admin
[2]:https://en.wikipedia.org/wiki/Manual_memory_management
[3]:http://dmalloc.com/
[4]:https://www.owasp.org/index.php/Double_Free
[5]:https://stuff.mit.edu/afs/sipb/project/gnucash-test/src/dmalloc-4.8.2/dmalloc.html#Fence-Post%20Overruns
[6]:http://dmalloc.com/releases/notes/dmalloc-5.5.2.html
[7]:http://www.gnu.org/software/gdb/
[8]:http://dmalloc.com/docs/
[9]:http://dmalloc.com/docs/latest/online/dmalloc_26.html#SEC32
[10]:http://dmalloc.com/docs/latest/online/dmalloc_23.html#SEC29
[11]:https://en.wikipedia.org/wiki/Memory_management#Dynamic_memory_allocation
[12]:https://en.wikipedia.org/wiki/Segmentation_fault
[13]:https://en.wikipedia.org/wiki/GNU_Compiler_Collection
[14]:http://www.gnu.org/software/gdb/
[15]:https://launchpad.net/ubuntu/+source/electric-fence/2.2.3
[16]:http://valgrind.org/docs/manual/mc-manual.html
[17]:http://valgrind.org/info/developers.html
[18]:http://valgrind.org/
[19]:http://valgrind.org/docs/manual/dist.news.html
[20]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles
[21]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.mempools
[22]:http://stackoverflow.com/questions/4642671/c-memory-allocators
[23]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
[24]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs
[25]:http://valgrind.org/docs/manual/valgrind_manual.pdf
[26]:http://sourceforge.net/p/valgrind/mailman/message/30292453/
[27]:https://msdn.microsoft.com/en-us/library/ee798431%28v=cs.20%29.aspx
[28]:http://www.computerworld.com/article/2484425/linux/5-free-linux-text-editors-for-programming-and-word-processing.html?nsdr=true&page=2
[29]:http://valgrind.org/docs/manual/manual-core.html#manual-core.limits
[30]:http://valgrind.org/info/
[31]:http://www.linkdata.se/sourcecode/memwatch/
[32]:http://www.cecalc.ula.ve/documentacion/tutoriales/WorkshopDebugger/007-2579-007/sgi_html/ch09.html
[33]:http://c2.com/cgi/wiki?WildPointer
[34]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29
[35]:http://www.gnu.org/software/libc/manual/html_node/Tracing-malloc.html
[36]:https://www.gnu.org/software/libc/
[37]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.c;h=df10128b872b4adc4086cf74e5d965c1c11d35d2;hb=HEAD
[38]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.pl;h=0737890510e9837f26ebee2ba36c9058affb0bf1;hb=HEAD
[39]:http://webcache.googleusercontent.com/search?q=cache:s6ywlLtkSqQJ:www.gnu.org/s/libc/manual/html_node/Tips-for-the-Memory-Debugger.html+&cd=1&hl=en&ct=clnk&gl=in&client=Ubuntu
[40]:http://www.gnu.org/software/libc/manual/html_node/Using-the-Memory-Debugger.html#Using-the-Memory-Debugger
[41]:http://linux.die.net/man/1/gcc
[42]:http://linux.die.net/man/1/g++
[43]:https://sourceware.org/ml/libc-help/2014-05/msg00008.html
[44]:https://en.wikipedia.org/wiki/Thread_safety

View File

@ -1,95 +0,0 @@
alim0x translating
The history of Android
================================================================================
![Another Market design that was nothing like the old one. This lineup shows the categories page, featured, a top apps list, and an app page.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/market-pages.png)
Another Market design that was nothing like the old one. This lineup shows the categories page, featured, a top apps list, and an app page.
Photo by Ron Amadeo
These screenshots give us our first look at the refined version of the Action Bar in Ice Cream Sandwich. Almost every app got a bar at the top of the screen that housed the app icon, title of the screen, several function buttons, and a menu button on the right. The right-aligned menu button was called the "overflow" button, because it housed items that didn't fit on the main action bar. The overflow menu wasn't static, though: when the action bar had more screen real estate (in horizontal mode or on a tablet, for instance), more of the overflow menu items were shown on the action bar as actual buttons.
New in Ice Cream Sandwich was this design style of "swipe tabs," which replaced the 2×3 interstitial navigation screen Google was previously pushing. A tab bar sat just under the Action Bar, with the center title showing the current tab and the left and right having labels for the pages to the left and right of this screen. A swipe in either direction would change tabs, or you could tap on a title to go to that tab.
One really cool design touch on the individual app screen was that, after the pictures, it would dynamically rearrange the page based on your history with that app. If you never installed the app before, the description would be the first box. If you used the app before, the first section would be the reviews bar, which would either invite you to review the app or remind you what you thought of the app last time you installed it. The second section for a previously used app was "What's New," since an existing user would most likely be interested in changes.
![Recent apps and the browser were just like Honeycomb, but smaller.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/recentbrowser.png)
Recent apps and the browser were just like Honeycomb, but smaller.
Photo by Ron Amadeo
Recent apps toned the Tron look way down. The blue outline around the thumbnails was removed, along with the eerie, uneven blue glow in the background. It now looked like a neutral UI piece that would be at home in any time period.
The Browser did its best to bring a tabbed experience to phones. Multi-tab browsing was placed front and center, but instead of wasting precious screen space on a tab strip, a tab button would open a Recent Apps-like interface that would show you your open tabs. Functionally, there wasn't much difference between this and the "window" view that was present in past versions of the Browser. The best addition to the Browser was a "Request desktop site" menu item, which would switch from the default mobile view to the normal site. The Browser showed off the flexibility of Google's Action Bar design, which, despite not having a top-left app icon, still functioned like any other top bar design.
![Gmail and Google Talk—they're like Honeycomb, but smaller!](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gmail2.png)
Gmail and Google Talk—they're like Honeycomb, but smaller!
Photo by Ron Amadeo
Gmail and Google Talk both looked like smaller versions of their Honeycomb designs, but with a few tweaks to work better on smaller screens. Gmail featured a dual Action Bar—one on the top of the screen and one on the bottom. The top of the bar showed your current folder, account, and number of unread messages, and tapping on the bar opened a navigation menu. The bottom featured all the normal buttons you would expect along with the overflow button. This dual layout was used in order to display more buttons on the surface level, but in landscape mode, where vertical space was at a premium, the dual bars merged into a single top bar.
In the message view, the blue bar was "sticky" when you scrolled down. It stuck to the top of the screen, so you could always see who wrote the current message, reply, or star it. Once in a message, the thin, dark gray bar at the bottom showed your current spot in the inbox (or whatever list brought you here), and you could swipe left and right to get to other messages.
Google Talk would let you swipe left and right to change chat windows, just like Gmail, but there the bar was at the top.
![The new dialer and the incoming call screen, both of which we haven't seen since Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/inc-calls.png)
The new dialer and the incoming call screen, both of which we haven't seen since Gingerbread.
Photo by Ron Amadeo
Since Honeycomb was only for tablets, some UI pieces were directly preceded by Gingerbread instead. The new Ice Cream Sandwich dialer was, of course, black and blue, and it used smaller tabs that could be swiped through. While Ice Cream Sandwich finally did the sensible thing and separated the main phone and contacts interfaces, the phone app still had its own contacts tab. There were now two spots to view your contact list—one with a dark theme and one with a light theme. With a hardware search button no longer being a requirement, the bottom row of buttons had the voicemail shortcut swapped out for a search icon.
Google liked to have the incoming call interface mirror the lock screen, which meant Ice Cream Sandwich got a circle-unlock design. Besides the usual decline or accept options, a new button was added to the top of the circle, which would let you decline a call by sending a pre-defined text message to the caller. Swiping up and picking a message like "Can't talk now, call you later" was (and still is) much more informative than an endlessly ringing phone.
![Honeycomb didn't have folders or a texting app, so here's Ice Cream Sandwich versus Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/thenonmessedupversion.png)
Honeycomb didn't have folders or a texting app, so here's Ice Cream Sandwich versus Gingerbread.
Photo by Ron Amadeo
Folders were now much easier to make. In Gingerbread, you had to long press on the screen, pick "folders," and then pick "new folder." In Ice Cream Sandwich, just drag one icon on top of another, and a folder is created containing those two icons. It was dead simple and much easier than finding the hidden long-press command.
The design was much improved, too. Gingerbread used a generic beige folder icon, but Ice Cream Sandwich actually showed you what was in the folder by stacking the first three icons on top of each other, drawing a circle around them, and using that as the folder icon. Open folder containers resized to fit the amount of icons in the folder rather than being a full-screen, mostly empty box. It looked way, way better.
![YouTube switched to a more modern white theme and used a list view instead of the crazy 3D scrolling](http://cdn.arstechnica.net/wp-content/uploads/2014/03/youtubes.png)
YouTube switched to a more modern white theme and used a list view instead of the crazy 3D scrolling
Photo by Ron Amadeo
YouTube was completely redesigned and looked less like something from The Matrix and more like, well, YouTube. It was a simple white list of vertically scrolling videos, just like the website. Making videos on your phone was given prime real estate, with the first button on the action bar dedicated to recording a video. Strangely, different screens used different YouTube logos in the top left, switching between a horizontal YouTube logo and a square one.
YouTube used swipe tabs just about everywhere. They were placed on the main page to browse and view your account and on the video pages to switch between comments, info, and related videos. The 4.0 app showed the first signs of Google+ YouTube integration, placing a "+1" icon next to the traditional rating buttons. Eventually Google+ would completely take over YouTube, turning the comments and author pages into Google+ activity.
![Ice Cream Sandwich tried to make things easier on everyone. Here is a screen for tracking data usage, the new developer options with tons of analytics enabled, and the intro tutorial.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/data.png)
Ice Cream Sandwich tried to make things easier on everyone. Here is a screen for tracking data usage, the new developer options with tons of analytics enabled, and the intro tutorial.
Photo by Ron Amadeo
Data Usage allowed users to easily keep track of and control their data usage. The main page showed a graph of this month's data usage, and users could set thresholds to be warned about data consumption or even set a hard usage limit to avoid overage charges. All of this was done easily by dragging the horizontal orange and red threshold lines higher or lower on the chart. The vertical white bars allowed users to select a slice of time in the graph. At the bottom of the page, the data usage for the selected time was broken down by app, so users could select a spike and easily see what app was sucking up all their data. When times got really tough, in the overflow button was an option to restrict all background data. Then, only apps running in the foreground could have access to the Internet connection.
The Developer Options typically only housed a tiny handful of settings, but in Ice Cream Sandwich the section received a huge expansion. Google added all sorts of on-screen diagnostic overlays to help app developers understand what was happening inside their app. You could view CPU usage, pointer location, and view screen updates. There were also options to change the way the system functioned, like control over animation speed, background processing, and GPU rendering.
One of the biggest differences between Android and iOS is Android's app drawer interface. In Ice Cream Sandwich's quest to be more user-friendly, the initial startup launched a small tutorial showing users where the app drawer was and how to drag icons out of the drawer and onto the homescreen. With the removal of the off-screen menu button and changes like this, Android 4.0 made a big push to be more inviting to new smartphone users and switchers.
![The "touch to beam" NFC support, Google Earth, and App Info, which would let you disable crapware.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-06-03.57.png)
The "touch to beam" NFC support, Google Earth, and App Info, which would let you disable crapware.
Built into Ice Cream Sandwich was full support for [NFC][1]. While previous devices like the Nexus S had NFC, support was limited and the OS couldn't do much with the chip. 4.0 added a feature called Android Beam, which would let two NFC-equipped Android 4.0 devices transfer data back and forth. NFC would transmit data related to whatever was on the screen at the time, so tapping when a phone displayed a webpage would send that page to the other phone. You could also send contact information, directions, and YouTube links. When the two phones were put together, the screen zoomed out, and tapping on the zoomed-out display would send the information.
In Android, users are not allowed to uninstall system apps, which are often integral to the function of the device. Carriers and OEMs took advantage of this and started putting crapware in the system partition, sticking users with software they didn't want. Android 4.0 allowed users to disable any app that couldn't be uninstalled, meaning the app remained on the system but didn't show up in the app drawer and couldn't be run. If users were willing to dig through the settings, this gave them an easy way to take control of their phone.
Android 4.0 can be thought of as the start of the modern Android era. Most of the Google apps released around this time only worked on Android 4.0 and above. There were so many new APIs that Google wanted to take advantage of that—initially at least—support for versions below 4.0 was limited. After Ice Cream Sandwich and Honeycomb, Google was really starting to get serious about software design. In January 2012, the company [finally launched][2] *Android Design*, a design guideline site that taught Android app developers how to create apps to match the look and feel of Android. This was something iOS not only had from the start of third-party app support, but Apple enforced design so seriously that apps that did not meet the guidelines were blocked from the App Store. The fact that Android went three years without any kind of public design documents from Google shows just how bad things used to be. But with Duarte in charge of Android's design revolution, the company was finally addressing basic design needs.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/20/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2011/02/near-field-communications-a-technology-primer/
[2]:http://arstechnica.com/business/2012/01/google-launches-style-guide-for-android-developers/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -1,103 +0,0 @@
The history of Android
================================================================================
![](http://cdn.arstechnica.net/wp-content/uploads/2014/03/playicons2.png)
Photo by Ron Amadeo
### Google Play and the return of direct-to-consumer device sales ###
On March 6, 2012, Google unified all of its content offerings under the banner of "Google Play." The Android Market became the Google Play Store, Google Books became Google Play Books, Google Music became Google Play Music, and Android Market Movies became Google Play Movies & TV. While the app interfaces didn't change much, all four content apps got new names and icons. Content purchased in the Play Store would be downloaded to the appropriate app, and the Play Store and Play content apps all worked together to provide a fairly organized content experience.
The Google Play update was Google's first big out-of-cycle update. Four packed-in apps were all changed without having to issue a system update—they were all updated through the Android Market/Play Store. Enabling out-of-cycle updates to individual apps was a big focus for Google, and being able to do an update like this was the culmination of an engineering effort that started in the Gingerbread era. Google had been working on "decoupling" the apps from the operating system and making everything portable enough to be distributed through the Android Market/Play Store.
While one or two apps (mostly Maps and Gmail) had previously lived on the Android Market, from here on you'll see a lot more significant updates that have nothing to do with an operating system release. System updates require the cooperation of OEMs and carriers, so they are difficult to push out to every user. Play Store updates are completely controlled by Google, though, providing the company a direct line to users' devices. For the launch of Google Play, the Android Market updated itself to the Google Play Store, and from there, Books, Music, and Movies were all issued Google Play-flavored updates.
The design of the Google Play apps was still all over the place. Each app looked and functioned differently, but for now, a cohesive brand was a good start. And removing "Android" from the branding was necessary because many services were available in the browser and could be used without touching an Android device at all.
In April 2012, Google started [selling devices through the Play Store again][1], reviving the direct-to-customer model it had experimented with for the launch of the Nexus One. While it was only two years after ending the Nexus One sales, Internet shopping was now more commonplace, and buying something before you could hold it didn't seem as crazy as it did in 2010.
Google also saw how price-conscious consumers became when faced with the Nexus One's $530 price tag. The first device for sale was an unlocked, GSM version of the Galaxy Nexus for $399. From there, price would go even lower. $350 has been the entry-level price for the last two Nexus smartphones, and 7-inch Nexus tablets would come in at only $200 to $220.
Today, the Play Store sells eight different Android devices, four Chromebooks, a thermostat, and tons of accessories, and the device store is the de-facto location for a new Google product launch. New phone launches are so popular, the site usually breaks under the load, and new Nexus phones sell out in a few hours.
### Android 4.1, Jelly Bean—Google Now points toward the future ###
![The Asus-made Nexus 7, Android 4.1's launch device.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/ASUS_Google_Nexus_7_4_11.jpg)
The Asus-made Nexus 7, Android 4.1's launch device.
With the release of Android 4.1, Jelly Bean in July 2012, Google settled into an Android release cadence of about every six months. The platform matured to the point where a release every three months was unnecessary, and the slower release cycle gave OEMs a chance to catch their breath. Unlike Honeycomb, point releases were now fairly major updates, with 4.1 bringing major UI and framework changes.
One of the biggest changes in Jelly Bean that you won't be able to see in screenshots is "Project Butter," the name for a concerted effort by Google's engineers to make Android animations run smoothly at 30FPS. Core changes were made, like Vsync and triple buffering, and individual animations were optimized so they could be drawn smoothly. Animation and scrolling smoothness had always been a weak point of Android when compared to iOS. After some work on both the core animation framework and on individual apps, Jelly Bean brought Android a lot closer to iOS' smoothness.
Along with Jelly Bean came the [Nexus][2] 7, a 7-inch tablet manufactured by Asus. Unlike the primarily horizontal Xoom, the Nexus 7 was meant to be used in portrait mode, like a large phone. The Nexus 7 showed that, after almost a year-and-a-half of ecosystem building, Google was ready to commit to the tablet market with a flagship device. Like the Nexus One and GSM Galaxy Nexus, the Nexus 7 was sold online directly by Google. While those earlier devices had shockingly high prices for consumers that were used to carrier subsidies, the Nexus 7 hit a mass market price point of only $200. The price bought you a device with a 7-inch, 1280x800 display, a quad core, 1.2 GHz Tegra 3 processor, 1GB of RAM, and 8GB of storage. The Nexus 7 was such a good value that many wondered if Google was making any money at all on its flagship tablet.
This smaller, lighter, 7-inch form factor would be a huge success for Google, and it put the company in the rare position of being an industry trendsetter. Apple, which started with a 10-inch iPad, was eventually forced to answer the Nexus 7 and tablets like it with the iPad Mini.
![4.1's new lock screen design, wallpaper, and the new on-press highlight on the system buttons.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/picture.png)
4.1's new lock screen design, wallpaper, and the new on-press highlight on the system buttons.
Photo by Ron Amadeo
The Tron look introduced in Honeycomb was toned down a little in Ice Cream Sandwich, and Jelly Bean took things a step further, removing blue from large chunks of the operating system. The first hint was in the on-press highlights on the system buttons, which changed from blue to gray.
![A composite image of the new app lineup and the new notification panel with expandable notifications.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/jb-apps-and-notications.png)
A composite image of the new app lineup and the new notification panel with expandable notifications.
Photo by Ron Amadeo
The notification panel was completely revamped, and we've finally arrived at the design used today in KitKat. The new panel extended to the top of the screen and covered the usual status icons, meaning the status bar was no longer visible when the panel was open. The time was prominently displayed in the top left corner, along with the date and a settings shortcut. The clear-all-notifications button, which was represented by an "X" in Ice Cream Sandwich, changed to a stairstep icon, symbolizing the staggered sliding animation that cleared the notification panel. The bottom handle changed from a circle to a single line that ran the length of the notification panel. All the typography was changed; the notification panel now used bigger, thinner fonts for everything. This was another screen where the blue introduced in Ice Cream Sandwich and Honeycomb was removed. The notification panel was entirely gray now except for on-touch highlights.
There was new functionality in the panel, too. Notifications were now expandable and could show much more information than the previous two-line design. It now showed up to eight lines of text and could even show buttons at the bottom of the notification. The screenshot notification had a share button at the bottom, and you could call directly from a missed call notification, or you could snooze a ringing alarm all from the notification panel. New notifications were expanded by default, but as they piled up they would collapse back to the traditional size. Dragging down on a notification with two fingers would expand it.
![The new Google Search app, with Google Now cards, voice search, and text search.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googlenow.png)
The new Google Search app, with Google Now cards, voice search, and text search.
Photo by Ron Amadeo
The biggest feature addition to Jelly Bean, not only for Android but for Google as a whole, was the new version of the Google Search application. This introduced "Google Now," a predictive search feature. Google Now was displayed as several cards that sat below the search box, and it would offer results for searches Google thought you cared about. These were things like Google Maps searches for places you'd recently looked at on your desktop computer, calendar appointment locations, the weather, and the time at home while traveling.
The new Google Search app could, of course, be launched with the Google icon, but it could also be accessed from any screen with a swipe up from the system bar. Long pressing on the system bar brought up a ring that worked similarly to the lock screen ring. The card section scrolled vertically, and cards could be swiped away if you didn't want to see them. Voice Search was a big part of the updates. Questions weren't just blindly entered into Google; if Google knew the answer, it would also talk back using a text-to-speech engine. And old-school text searches were, of course, still supported. Just tap on the bar and start typing.
Google frequently called Google Now "the future of Google Search." Telling Google what you wanted wasn't good enough. Google wanted to know what you wanted before you did. Google Now put all of Google's data mining knowledge about you to work for you, and it was the company's biggest advantage against rival search services like Bing. Smartphones knew more about you than any other device you own, so the service debuted on Android. But Google slowly worked Google Now into Chrome, and eventually it will likely end up on Google.com.
While the functionality was important, it became clear that Google Now was the most important design work to ever come out of the company, too. The white card aesthetic that this app introduced would become the foundation for Google's design of just about everything. Today, this card style is used in the Google Play Store and in all of the Play content apps, YouTube, Google Maps, Drive, Keep, Gmail, Google+, and many others. It's not just Android apps, either. Many of Google's desktop sites and iOS apps are inspired by this design. Design was historically one of Google's weak areas, but Google Now was the point where the company finally got its act together with a cohesive, company-wide design language.
![Yet another YouTube redesign. Information density went way down.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/yotuube.png)
Yet another YouTube redesign. Information density went way down.
Photo by Ron Amadeo
Another version, another YouTube redesign. This time the list view was primarily thumbnail-based, with giant images taking up most of the screen real estate. Information density tanked with the new list design. Before, YouTube would display around six items per screen; now it could only display three.
YouTube was one of the first apps to add a sliding drawer to the left side of the app, a feature which would become a standard design style across Google's apps. The drawer had links for your account and channel subscriptions, which allowed Google to kill the tabs-on-top design.
![Google Play Services' responsibilities versus the rest of Android.](http://cdn.arstechnica.net/wp-content/uploads/2013/08/playservicesdiagram2.png)
Google Play Services' responsibilities versus the rest of Android.
Photo by Ron Amadeo
### Google Play Services—fragmentation and making OS versions (nearly) obsolete ###
It didn't seem like a big deal at the time, but in September 2012, Google Play Services 1.0 was automatically pushed out to every Android phone running 2.2 and up. It added a few Google+ APIs and support for OAuth 2.0.
While this update might sound boring, Google Play Services would eventually grow to become an integral part of Android. Google Play Services acts as a shim between the normal apps and the installed Android OS, allowing Google to update or replace some core components and add APIs without having to ship out a new Android version.
With Play Services, Google had a direct line to the core of an Android phone without having to go through OEM updates and carrier approval processes. Google used Play Services to add an entirely new location system, a malware scanner, remote wipe capabilities, and new Google Maps APIs, all without shipping an OS update. Like we mentioned at the end of the Gingerbread section, thanks to all the "portable" APIs implemented in Play Services, Gingerbread can still download a modern version of the Play Store and many other Google Apps.
The other big benefit was compatibility with Android's user base. The newest release of an Android OS can take a very long time to reach the majority of users, which means APIs tied to the latest version of the OS won't be any good to developers until the majority of the user base upgrades. Google Play Services is compatible with Froyo and above, which is 99 percent of active devices, and updates are pushed directly to phones through the Play Store. By including APIs in Google Play Services instead of Android, Google can push a new API out to almost all users in about a week. It's [a great solution][3] to many of the problems caused by version fragmentation.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/21/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2012/04/unlocked-samsung-galaxy-nexus-can-now-be-purchased-from-google/
[2]:http://arstechnica.com/gadgets/2012/07/divine-intervention-googles-nexus-7-is-a-fantastic-200-tablet/
[3]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -1,203 +0,0 @@
translating wi-cuckoo
A Repository with 44 Years of Unix Evolution
================================================================================
### Abstract ###
The evolution of the Unix operating system is made available as a version-control repository, covering the period from its inception in 1972 as a five thousand line kernel, to 2015 as a widely-used 26 million line system. The repository contains 659 thousand commits and 2306 merges. The repository employs the commonly used Git system for its storage, and is hosted on the popular GitHub archive. It has been created by synthesizing, with custom software, 24 snapshots of systems developed at Bell Labs, the University of California at Berkeley, and the 386BSD team, two legacy repositories, and the modern repository of the open source FreeBSD system. In total, 850 individual contributors are identified, the early ones through primary research. The data set can be used for empirical research in software engineering, information systems, and software archaeology.
### 1 Introduction ###
The Unix operating system stands out as a major engineering breakthrough due to its exemplary design, its numerous technical contributions, its development model, and its widespread use. The design of the Unix programming environment has been characterized as one offering unusual simplicity, power, and elegance [[1][1]]. On the technical side, features that can be directly attributed to Unix or were popularized by it include [[2][2]]: the portable implementation of the kernel in a high level language; a hierarchical file system; compatible file, device, networking, and inter-process I/O; the pipes and filters architecture; virtual file systems; and the shell as a user-selectable regular process. A large community contributed software to Unix from its early days [[3][3]], [[4][4],pp. 65-72]. This community grew immensely over time and worked using what are now termed open source software development methods [[5][5],pp. 440-442]. Unix and its intellectual descendants have also helped the spread of the C and C++ programming languages, parser and lexical analyzer generators (*yacc, lex*), document preparation tools (*troff, eqn, tbl*), scripting languages (*awk, sed, Perl*), TCP/IP networking, and configuration management systems (*SCCS, RCS, Subversion, Git*), while also forming a large part of the modern internet infrastructure and the web.
Luckily, Unix material of great historical importance has survived and is nowadays openly available. Although Unix was initially distributed with relatively restrictive licenses, the most significant parts of its early development have been released by one of its rights holders (Caldera International) under a liberal license. Combining these parts with software that was developed or released as open source software by the University of California, Berkeley and the FreeBSD Project provides coverage of the system's development over a period ranging from June 20th 1972 until today.
Curating and processing available snapshots as well as old and modern configuration management repositories allows the reconstruction of a new synthetic Git repository that combines under a single roof most of the available data. This repository documents in a digital form the detailed evolution of an important digital artefact over a period of 44 years. The following sections describe the repository's structure and contents (Section [II][6]), the way it was created (Section [III][7]), and how it can be used (Section [IV][8]).
### 2 Data Overview ###
The 1GB Unix history Git repository is made available for cloning on [GitHub][9].[1][10] Currently[2][11] the repository contains 659 thousand commits and 2306 merges from about 850 contributors. The contributors include 23 from the Bell Labs staff, 158 from Berkeley's Computer Systems Research Group (CSRG), and 660 from the FreeBSD Project.
The repository starts its life at a tag identified as *Epoch*, which contains only licensing information and its modern README file. Various tag and branch names identify points of significance.
- *Research-VX* tags correspond to six research editions that came out of Bell Labs. These start with *Research-V1* (4768 lines of PDP-11 assembly) and end with *Research-V7* (1820 mostly C files, 324kLOC).
- *Bell-32V* is the port of the 7th Edition Unix to the DEC/VAX architecture.
- *BSD-X* tags correspond to 15 snapshots released from Berkeley.
- *386BSD-X* tags correspond to two open source versions of the system, with the Intel 386 architecture kernel code mainly written by Lynne and William Jolitz.
- *FreeBSD-release/X* tags and branches mark 116 releases coming from the FreeBSD project.
In addition, branches with a *-Snapshot-Development* suffix denote commits that have been synthesized from a time-ordered sequence of a snapshot's files, while tags with a *-VCS-Development* suffix mark the point along an imported version control history branch where a particular release occurred.
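As an illustration (a sketch only; the clone URL is the one given in footnote 1, and the tag names are those listed above), the tagged releases can be inspected with standard Git operations:

    git clone https://github.com/dspinellis/unix-history-repo
    cd unix-history-repo
    git tag -l 'Research-V*'    # list the Bell Labs research editions
    git checkout Research-V7    # detached checkout of the Seventh Edition snapshot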
The repository's history includes commits from the earliest days of the system's development, such as the following.
    commit c9f643f59434f14f774d61ee3856972b8c3905b1
    Author: Dennis Ritchie <research!dmr>
    Date:   Mon Dec 2 18:18:02 1974 -0500

        Research V5 development
        Work on file usr/sys/dmr/kl.c
Merges between releases that happened along the system's evolution, such as the development of BSD 3 from BSD 2 and Unix/32V, are also correctly represented in the Git repository as graph nodes with two parents.
More importantly, the repository is constructed in a way that allows *git blame*, which annotates source code lines with the version, date, and author associated with their first appearance, to produce the expected code provenance results. For example, checking out the *BSD-4* tag, and running git blame on the kernel's *pipe.c* file will show lines written by Ken Thompson in 1974, 1975, and 1979, and by Bill Joy in 1980. This allows the automatic (though computationally expensive) detection of the code's provenance at any point of time.
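A hedged sketch of such a provenance check, continuing from the clone above (the exact path of *pipe.c* inside the *BSD-4* tree is an assumption here; `git ls-files` can locate the real one):

    git checkout BSD-4
    git ls-files | grep 'pipe\.c'      # locate the kernel pipe implementation
    git blame usr/src/sys/sys/pipe.c   # assumed path; adjust to the ls-files result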
![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/provenance.png)
Figure 1: Code provenance across significant Unix releases.
As can be seen in Figure [1][12], a modern version of Unix (FreeBSD 9) still contains visible chunks of code from BSD 4.3, BSD 4.3 Net/2, and FreeBSD 2.0. Interestingly, the Figure shows that code developed during the frantic dash to create an open source operating system out of the code released by Berkeley (386BSD and FreeBSD 1.0) does not seem to have survived. The oldest code in FreeBSD 9 appears to be an 18-line sequence in the C library file timezone.c, which can also be found in the 7th Edition Unix file with the same name and a time stamp of January 10th, 1979 - 36 years ago.
### 3 Data Collection and Processing ###
The goal of the project is to consolidate data concerning the evolution of Unix in a form that helps the study of the system's evolution, by entering them into a modern revision repository. This involves collecting the data, curating them, and synthesizing them into a single Git repository.
![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/branches.png)
Figure 2: Imported Unix snapshots, repositories, and their mergers.
The project is based on three types of data (see Figure [2][13]). First, snapshots of early released versions, which were obtained from the [Unix Heritage Society archive][14],[3][15] the [CD-ROM images][16] containing the full source archives of CSRG,[4][17] the [OldLinux site][18],[5][19] and the [FreeBSD archive][20].[6][21] Second, past and current repositories, namely the CSRG SCCS [[6][22]] repository, the FreeBSD 1 CVS repository, and the [Git mirror of modern FreeBSD development][23].[7][24] The first two were obtained from the same sources as the corresponding snapshots.
The last, and most labour intensive, source of data was **primary research**. The release snapshots do not provide information regarding their ancestors and the contributors of each file. Therefore, these pieces of information had to be determined through primary research. The authorship information was mainly obtained by reading author biographies, research papers, internal memos, and old documentation scans; by reading and automatically processing source code and manual page markup; by communicating via email with people who were there at the time; by posting a query on the Unix *StackExchange* site; by looking at the location of files (in early editions the kernel source code was split into `/usr/sys/dmr` and `/usr/sys/ken`); and by propagating authorship from research papers and manual pages to source code and from one release to others. (Interestingly, the 1st and 2nd Research Edition manual pages have an "owner" section, listing the person (e.g. *ken*) associated with the corresponding system command, file, system call, or library function. This section was not there in the 4th Edition, and resurfaced as the "Author" section in BSD releases.) Precise details regarding the source of the authorship information are documented in the project's files that are used for mapping Unix source code files to their authors and the corresponding commit messages. Finally, information regarding merges between source code bases was obtained from a [BSD family tree maintained by the NetBSD project][25].[8][26]
The software and data files that were developed as part of this project are [available online][27],[9][28] and, with appropriate network, CPU, and disk resources, they can be used to recreate the repository from scratch. The authorship information for major releases is stored in files under the project's `author-path` directory. These contain lines with a regular expression for a file path followed by the identifier of the corresponding author. Multiple authors can also be specified. The regular expressions are processed sequentially, so that a catch-all expression at the end of the file can specify a release's default authors. To avoid repetition, a separate file with a `.au` suffix is used to map author identifiers into their names and emails. One such file has been created for every community associated with the system's evolution: Bell Labs, Berkeley, 386BSD, and FreeBSD. For the sake of authenticity, emails for the early Bell Labs releases are listed in UUCP notation (e.g. `research!ken`). The FreeBSD author identifier map, required for importing the early CVS repository, was constructed by extracting the corresponding data from the project's modern Git repository. In total the commented authorship files (828 rules) comprise 1107 lines, and there are another 640 lines mapping author identifiers to names.
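As an illustration only (the exact syntax is defined by the project's own files, so the following is a guessed sketch of the shape of the two formats, not their specification):

    # author-path style entries (hypothetical): a path regex, then author id(s)
    usr/sys/dmr/.*   dmr
    usr/sys/ken/.*   ken
    .*               ken dmr    # catch-all: the release's default authors

    # .au style entries (hypothetical): identifier, full name, UUCP-notation email
    ken Ken Thompson research!ken
    dmr Dennis Ritchie research!dmr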
The curation of the project's data sources has been codified into a 168-line `Makefile`. It involves the following steps.
**Fetching** Copying and cloning about 11GB of images, archives, and repositories from remote sites.
**Tooling** Obtaining an archiver for old PDP-11 archives from 2.9 BSD, and adjusting it to compile under modern versions of Unix; compiling the 4.3 BSD *compress* program, which is no longer part of modern Unix systems, in order to decompress the 386BSD distributions.
**Organizing** Unpacking archives using tar and *cpio*; combining three 6th Research Edition directories; unpacking all 1 BSD archives using the old PDP-11 archiver; mounting CD-ROM images so that they can be processed as file systems; combining the 8 and 62 386BSD floppy disk images into two separate files.
**Cleaning** Restoring the 1st Research Edition kernel source code files, which were obtained from printouts through optical character recognition, into a format close to their original state; patching some 7th Research Edition source code files; removing metadata files and other files that were added after a release, to avoid obtaining erroneous time stamp information; patching corrupted SCCS files; processing the early FreeBSD CVS repository by removing CVS symbols assigned to multiple revisions with a custom Perl script, deleting CVS *Attic* files clashing with live ones, and converting the CVS repository into a Git one using *cvs2svn*.
An interesting part of the repository representation is how snapshots are imported and linked together in a way that allows *git blame* to perform its magic. Snapshots are imported into the repository as sequential commits based on the time stamp of each file. When all files have been imported the repository is tagged with the name of the corresponding release. At that point one could delete those files, and begin the import of the next snapshot. Note that the *git blame* command works by traversing a repository's history backwards, and using heuristics to detect code moving and being copied within or across files. Consequently, deleting a snapshot's files before importing the next would create a discontinuity between the two releases, and prevent code from being traced across them.
Instead, before the next snapshot is imported, all the files of the preceding snapshot are moved into a hidden look-aside directory named `.ref` (reference). They remain there, until all files of the next snapshot have been imported, at which point they are deleted. Because every file in the `.ref` directory matches exactly an original file, *git blame* can determine how source code moves from one version to the next via the `.ref` file, without ever displaying the `.ref` file. To further help the detection of code provenance, and to increase the representation's realism, each release is represented as a merge between the branch with the incremental file additions (*-Development*) and the preceding release.
For a period in the 1980s, only a subset of the files developed at Berkeley were under SCCS version control. During that period our unified repository contains imports of both the SCCS commits, and the snapshots' incremental additions. At the point of each release, the SCCS commit with the nearest time stamp is found and is marked as a merge with the release's incremental import branch. These merges can be seen in the middle of Figure [2][29].
The synthesis of the various data sources into a single repository is mainly performed by two scripts. A 780-line Perl script (`import-dir.pl`) can export the (real or synthesized) commit history from a single data source (snapshot directory, SCCS repository, or Git repository) in the *Git fast export* format. The output is a simple text format that Git tools use to import and export commits. Among other things, the script takes as arguments the mapping of files to contributors, the mapping between contributor login names and their full names, the commit(s) from which the import will be merged, which files to process and which to ignore, and the handling of "reference" files. A 450-line shell script creates the Git repository and calls the Perl script with appropriate arguments to import each one of the 27 available historical data sources. The shell script also runs 30 tests that compare the repository at specific tags against the corresponding data sources, verify the appearance and disappearance of look-aside directories, and look for regressions in the count of tree branches and merges and the output of *git blame* and *git log*. Finally, *git* is called to garbage-collect and compress the repository from its initial 6GB size down to the distributed 1GB.
### 4 Data Uses ###
The data set can be used for empirical research in software engineering, information systems, and software archaeology. Through its unique uninterrupted coverage of a period of more than 40 years, it can inform work on software evolution and handovers across generations. With thousand-fold increases in processing speed and million-fold increases in storage capacity during that time, the data set can also be used to study the co-evolution of software and hardware technology. The move of the software's development from research labs, to academia, and to the open source community can be used to study the effects of organizational culture on software development. The repository can also be used to study how notable individuals, such as Turing Award winners (Dennis Ritchie and Ken Thompson) and captains of the IT industry (Bill Joy and Eric Schmidt), actually programmed. Another phenomenon worthy of study concerns the longevity of code, either at the level of individual lines, or as complete systems that were at times distributed with Unix (Ingres, Lisp, Pascal, Ratfor, Snobol, TMG), as well as the factors that lead to code's survival or demise. Finally, because the data set stresses Git, the underlying software repository storage technology, to its limits, it can be used to drive engineering progress in the field of revision management systems.
![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/metrics.png)
Figure 3: Code style evolution along Unix releases.
Figure [3][30], which depicts trend lines (obtained with R's local polynomial regression fitting function) of some interesting code metrics along 36 major releases of Unix, demonstrates the evolution of code style and programming language use over very long timescales. This evolution can be driven by software and hardware technology affordances and requirements, software construction theory, and even social forces. The dates in the Figure have been calculated as the average date of all files appearing in a given release. As can be seen in it, over the past 40 years the mean length of identifiers and file names has steadily increased from 4 and 6 characters to 7 and 11 characters, respectively. We can also see less steady increases in the number of comments and decreases in the use of the *goto* statement, as well as the virtual disappearance of the *register* type modifier.
### 5 Further Work ###
Many things can be done to increase the repository's faithfulness and usefulness. Given that the build process is shared as open source code, it is easy to contribute additions and fixes through GitHub pull requests. The most useful community contribution would be to increase the coverage of imported snapshot files that are attributed to a specific author. Currently, about 90 thousand files (out of a total of 160 thousand) are getting assigned an author through a default rule. Similarly, there are about 250 authors (primarily early FreeBSD ones) for which only the identifier is known. Both are listed in the build repository's unmatched directory, and contributions are welcomed. Furthermore, the BSD SCCS and the FreeBSD CVS commits that share the same author and time-stamp can be coalesced into a single Git commit. Support can be added for importing the SCCS file comment fields, in order to bring into the repository the corresponding metadata. Finally, and most importantly, more branches of open source systems can be added, such as NetBSD, OpenBSD, DragonFlyBSD, and *illumos*. Ideally, current right holders of other important historical Unix releases, such as System III, System V, NeXTSTEP, and SunOS, will release their systems under a license that would allow their incorporation into this repository for study.
#### Acknowledgements ####
The author thanks the many individuals who contributed to the effort. Brian W. Kernighan, Doug McIlroy, and Arnold D. Robbins helped with Bell Labs login identifiers. Clem Cole, Era Eriksson, Mary Ann Horton, Kirk McKusick, Jeremy C. Reed, Ingo Schwarze, and Anatole Shaw helped with BSD login identifiers. The BSD SCCS import code is based on work by H. Merijn Brand and Jonathan Gray.
This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thalis - Athens University of Economics and Business - Software Engineering Research Platform.
### References ###
[[1]][31]
M. D. McIlroy, E. N. Pinson, and B. A. Tague, "UNIX time-sharing system: Foreword," *The Bell System Technical Journal*, vol. 57, no. 6, pp. 1899-1904, July-August 1978.
[[2]][32]
D. M. Ritchie and K. Thompson, "The UNIX time-sharing system," *Bell System Technical Journal*, vol. 57, no. 6, pp. 1905-1929, July-August 1978.
[[3]][33]
D. M. Ritchie, "The evolution of the UNIX time-sharing system," *AT&T Bell Laboratories Technical Journal*, vol. 63, no. 8, pp. 1577-1593, Oct. 1984.
[[4]][34]
P. H. Salus, *A Quarter Century of UNIX*. Boston, MA: Addison-Wesley, 1994.
[[5]][35]
E. S. Raymond, *The Art of Unix Programming*. Addison-Wesley, 2003.
[[6]][36]
M. J. Rochkind, "The source code control system," *IEEE Transactions on Software Engineering*, vol. SE-1, no. 4, pp. 255-265, 1975.
----------
#### Footnotes: ####
[1][37] - [https://github.com/dspinellis/unix-history-repo][38]
[2][39] - Updates may add or modify material. To ensure replicability the repository's users are encouraged to fork it or archive it.
[3][40] - [http://www.tuhs.org/archive_sites.html][41]
[4][42] - [https://www.mckusick.com/csrg/][43]
[5][44] - [http://www.oldlinux.org/Linux.old/distributions/386BSD][45]
[6][46] - [http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/][47]
[7][48] - [https://github.com/freebsd/freebsd][49]
[8][50] - [http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree][51]
[9][52] - [https://github.com/dspinellis/unix-history-make][53]
--------------------------------------------------------------------------------
via: http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#MPT78
[2]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#RT78
[3]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Rit84
[4]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Sal94
[5]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Ray03
[6]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:data
[7]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:dev
[8]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:use
[9]:https://github.com/dspinellis/unix-history-repo
[10]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAB
[11]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAC
[12]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:provenance
[13]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches
[14]:http://www.tuhs.org/archive_sites.html
[15]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAD
[16]:https://www.mckusick.com/csrg/
[17]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAE
[18]:http://www.oldlinux.org/Linux.old/distributions/386BSD
[19]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAF
[20]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/
[21]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAG
[22]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#SCCS
[23]:https://github.com/freebsd/freebsd
[24]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAH
[25]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree
[26]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAI
[27]:https://github.com/dspinellis/unix-history-make
[28]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAJ
[29]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches
[30]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:metrics
[31]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITEMPT78
[32]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERT78
[33]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERit84
[34]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESal94
[35]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERay03
[36]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESCCS
[37]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAB
[38]:https://github.com/dspinellis/unix-history-repo
[39]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAC
[40]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAD
[41]:http://www.tuhs.org/archive_sites.html
[42]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAE
[43]:https://www.mckusick.com/csrg/
[44]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAF
[45]:http://www.oldlinux.org/Linux.old/distributions/386BSD
[46]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAG
[47]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/
[48]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAH
[49]:https://github.com/freebsd/freebsd
[50]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAI
[51]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree
[52]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAJ
[53]:https://github.com/dspinellis/unix-history-make

View File

@ -1,3 +1,4 @@
translation by bestony
DFileManager: Cover Flow File Manager
================================================================================
A real gem of a file manager, absent from the standard Ubuntu repositories but sporting a unique feature. That's DFileManager in a twitterish statement.

View File

@ -1,3 +1,4 @@
Translating by KnightJoker
How to send email notifications using Gmail SMTP server on Linux
================================================================================
Suppose you want to configure a Linux app to send out email messages from your server or desktop. The email messages can be part of email newsletters, status updates (e.g., [Cachet][1]), monitoring alerts (e.g., [Monit][2]), disk events (e.g., [RAID mdadm][3]), and so on. While you can set up your [own outgoing mail server][4] to deliver messages, you can alternatively rely on a freely available public SMTP server as a maintenance-free option.

View File

@ -1,3 +1,5 @@
Translating by taichirain
How to Configure Apache Solr on Ubuntu 14 / 15
================================================================================
Hello and welcome to today's article on Apache Solr. In brief, Apache Solr is a famous open source search platform, with Apache Lucene at the back end, that enables you to easily create search engines for websites, databases, and files. It can index and search multiple sites and return recommendations for related content based on the searched text.
@ -130,4 +132,4 @@ via: http://linoxide.com/ubuntu-how-to/configure-apache-solr-ubuntu-14-15/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/kashifs/
[1]:http://lucene.apache.org/solr/

View File

@ -1,42 +0,0 @@
Translating by DongShuaike
Backup (System Restore Point) your Ubuntu/Linux Mint with SystemBack
================================================================================
System Restore is a must-have feature for any OS. It allows the user to revert the computer's state (including system files, installed applications, and system settings) to that of a previous point in time, and can be used to recover from system malfunctions or other problems.
Sometimes installing a program or driver can leave your OS at a blank screen. System Restore can return your PC's system files and programs to a time when everything was working fine, potentially preventing hours of troubleshooting headaches. It won't affect your documents, pictures, or other data.
[Systemback][1] is a simple system backup and restore application with extra features. It makes it easy to create backups of the system and of users' configuration files. In case of problems you can easily restore the previous state of the system. There are extra features like system copying, system installation, and live system creation.
Screenshots
![systemback](http://2.bp.blogspot.com/-2UPS3yl3LHw/VlilgtGAlvI/AAAAAAAAGts/ueRaAghXNvc/s1600/systemback-1.jpg)
![systemback](http://2.bp.blogspot.com/-7djBLbGenxE/Vlilgk-FZHI/AAAAAAAAGtk/2PVNKlaPO-c/s1600/systemback-2.jpg)
![](http://3.bp.blogspot.com/-beZYwKrsT4o/VlilgpThziI/AAAAAAAAGto/cwsghXFNGRA/s1600/systemback-3.jpg)
![](http://1.bp.blogspot.com/-t_gmcoQZrvM/VlilhLP--TI/AAAAAAAAGt0/GWBg6bGeeaI/s1600/systemback-5.jpg)
**Note**: Using System Restore will not restore documents, music, emails, or personal files of any kind. Depending on your perspective, this is both a positive and negative feature. The bad news is that it won't restore that accidentally deleted file you wish you could get back, though a file recovery program might solve that problem.
If no restore point exists on your computer, System Restore has nothing to revert to so the tool won't work for you. If you're trying to recover from a major problem, you'll need to move on to another troubleshooting step.
>>> Available for Ubuntu 15.10 Wily/16.04/15.04 Vivid/14.04 Trusty/Linux Mint 17.x/other Ubuntu derivatives
To install SystemBack Application in Ubuntu/Linux Mint open Terminal (Press Ctrl+Alt+T) and copy the following commands in the Terminal:
Terminal Commands:
sudo add-apt-repository ppa:nemh/systemback
sudo apt-get update
sudo apt-get install systemback
That's it
--------------------------------------------------------------------------------
via: http://www.noobslab.com/2015/11/backup-system-restore-point-your.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://launchpad.net/systemback

View File

@ -1,3 +1,4 @@
translating by NearTan
How to Install Laravel PHP Framework on CentOS 7 / Ubuntu 15.04
================================================================================
Hi all! In this article we are going to set up Laravel on CentOS 7 and Ubuntu 15.04. If you are a PHP web developer, you don't need to worry: of all modern PHP frameworks, Laravel is the easiest to get up and running, saving you time and effort and making web development a joy. Laravel embraces a general development philosophy that sets a high priority on creating maintainable code: by following some simple guidelines, you should be able to keep a rapid pace of development and be free to change your code with little fear of breaking existing functionality.
@ -172,4 +173,4 @@ via: http://linoxide.com/linux-how-to/install-laravel-php-centos-7-ubuntu-15-04/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/kashifs/

View File

@ -1,196 +0,0 @@
translation by strugglingyouth
Linux / Unix: jobs Command Examples
================================================================================
I am a new Linux and Unix user. How do I show the active jobs on Linux or Unix-like systems using BASH/KSH/TCSH or a POSIX-based shell? How can I display the status of jobs in the current session on Unix/Linux?
Job control is nothing but the ability to stop/suspend the execution of processes (commands) and continue/resume their execution as per your requirements. This is done using your operating system and a shell such as bash/ksh or a POSIX shell.
Your shell keeps a table of currently executing jobs, which can be displayed with the jobs command.
### Purpose ###
> Displays status of jobs in the current shell session.
### Syntax ###
The basic syntax is as follows:
jobs
OR
jobs jobID
OR
jobs [options] jobID
### Starting a few jobs for demonstration purposes ###
Before you start using the jobs command, you need to start a couple of jobs on your system. Type the following commands to start jobs:
## Start xeyes, calculator, and gedit text editor ###
xeyes &
gnome-calculator &
gedit fetch-stock-prices.py &
Finally, run ping command in foreground:
ping www.cyberciti.biz
To suspend the ping command job, hit the **Ctrl-Z** key sequence.
### jobs command examples ###
To display the status of jobs in the current shell, enter:
$ jobs
Sample outputs:
[1] 7895 Running gpass &
[2] 7906 Running gnome-calculator &
[3]- 7910 Running gedit fetch-stock-prices.py &
[4]+ 7946 Stopped ping cyberciti.biz
To display the process ID or jobs for the job whose name begins with "p," enter:
$ jobs -p %p
OR
$ jobs %p
Sample outputs:
[4]- Stopped ping cyberciti.biz
The character % introduces a job specification. In this example, the string "p" matches a job whose command name begins with it, i.e. the suspended ping command (%ping would match it as well).
### How do I show process IDs in addition to the normal information? ###
Pass the -l (lowercase L) option to the jobs command for more information about each job listed:
$ jobs -l
Sample outputs:
![Fig.01: Displaying the status of jobs in the shell](http://s0.cyberciti.org/uploads/faq/2013/02/jobs-command-output.jpg)
Fig.01: Displaying the status of jobs in the shell
### How do I list only processes that have changed status since the last notification? ###
First, start a new job as follows:
$ sleep 100 &
Now, to show only jobs that have stopped or exited since the last notification, type:
$ jobs -n
Sample outputs:
[5]- Running sleep 100 &
### Display process IDs (PIDs) only ###
Pass the -p option to the jobs command to display PIDs only:
$ jobs -p
Sample outputs:
7895
7906
7910
7946
7949
### How do I display only running jobs? ###
Pass the -r option to the jobs command to display running jobs only:
$ jobs -r
Sample outputs:
[1] Running gpass &
[2] Running gnome-calculator &
[3]- Running gedit fetch-stock-prices.py &
### How do I display only jobs that have stopped? ###
Pass the -s option to the jobs command to display stopped jobs only:
$ jobs -s
Sample outputs:
[4]+ Stopped ping cyberciti.biz
To resume the ping cyberciti.biz job, enter the following bg command:
$ bg %4
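Similarly, you could bring the suspended job back into the foreground instead, using the fg builtin:
$ fg %4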
### jobs command options ###
From the [bash(1)][1] command man page:
Note: table below
<table border="1">
<tbody>
<tr>
<td>Option</td>
<td>Description</td>
</tr>
<tr>
<td><kbd><strong>-l</strong></kbd></td>
<td>Show process IDs in addition to the normal information.</td>
</tr>
<tr>
<td><kbd><strong>-p</strong></kbd></td>
<td>Show process IDs only.</td>
</tr>
<tr>
<td><kbd><strong>-n</strong></kbd></td>
<td>Show only processes that have changed status since the last notification.</td>
</tr>
<tr>
<td><kbd><strong>-r</strong></kbd></td>
<td>Restrict output to running jobs only.</td>
</tr>
<tr>
<td><kbd><strong>-s</strong></kbd></td>
<td>Restrict output to stopped jobs only.</td>
</tr>
<tr>
<td><kbd><strong>-x</strong></kbd></td>
<td>COMMAND is run after all job specifications that appear in ARGS have been replaced with the process ID of that job's process group leader.</td>
</tr>
</tbody>
</table>
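The -x option from the table deserves a quick example: it replaces each job specification among its arguments with the PID of that job's process group leader and then runs the given command. A small illustration (the PID shown is made up):
$ sleep 100 &
$ jobs -x echo %sleep
Sample outputs:
7949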
### A note about /usr/bin/jobs and shell builtin ###
Type the following type command to find out whether jobs is a shell builtin, an external command, or both:
$ type -a jobs
Sample outputs:
jobs is a shell builtin
jobs is /usr/bin/jobs
In almost all cases you need to use the jobs command that is implemented as a BASH/KSH/POSIX shell builtin. The /usr/bin/jobs command cannot be used in the current shell: it operates in a different environment and does not share the parent bash/ksh shell's understanding of jobs.
--------------------------------------------------------------------------------
via:
作者Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.manpager.com/linux/man1/bash.1.html

View File

@ -1,3 +1,4 @@
Translating by itsang
NetworkManager and privacy in the IPv6 internet
======================

View File

@ -1,65 +0,0 @@
How to Customize Time & Date Format in Ubuntu Panel
================================================================================
![Time & Date format](http://ubuntuhandbook.org/wp-content/uploads/2015/08/ubuntu_tips1.png)
This quick tutorial is going to show you how to customize your Time & Date indicator in Ubuntu panel, though there are already a few options available in the settings page.
![custom-timedate](http://ubuntuhandbook.org/wp-content/uploads/2015/12/custom-timedate.jpg)
To get started, search for and install **dconf Editor** in Ubuntu Software Center. Then launch the software and follow the steps below:
**1.** When dconf Editor launches, navigate to **com -> canonical -> indicator -> datetime**. Set the value of **time-format** to **custom**.
![custom time format](http://ubuntuhandbook.org/wp-content/uploads/2015/12/time-format.jpg)
You can also do this via a command in terminal:
gsettings set com.canonical.indicator.datetime time-format 'custom'
**2.** Now you can customize the Time & Date format by editing the value of **custom-time-format**.
![customize-timeformat](http://ubuntuhandbook.org/wp-content/uploads/2015/12/customize-timeformat.jpg)
You can also do this via command:
gsettings set com.canonical.indicator.datetime custom-time-format 'FORMAT_VALUE_HERE'
Interpreted sequences are:
- %a = abbreviated weekday name
- %A = full weekday name
- %b = abbreviated month name
- %B = full month name
- %d = day of month
- %l = hour ( 1..12), %I = hour (01..12)
- %k = hour ( 0..23), %H = hour (00..23)
- %M = minute (00..59)
- %p = AM or PM, %P = am or pm.
- %S = second (00..59)
- open a terminal and run `man date` for more details.
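Since these are the same sequences the date command understands, you can preview a format string in a terminal before applying it; the format and output here are just an example:
    date +'%a %H:%M %m/%d/%Y'
which prints something like `Tue 14:02 12/22/2015`.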
Some examples:
custom time format value: **%a %H:%M %m/%d/%Y**
![exam-1](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-1.jpg)
**%a %r %b %d or %a %I:%M:%S %p %b %d**
![exam-2](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-2.jpg)
**%a %-d %b %l:%M %P %z**
![exam-3](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-3.jpg)
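If an experiment leaves the panel in a state you don't like, both keys can be restored to their defaults with gsettings reset:
    gsettings reset com.canonical.indicator.datetime time-format
    gsettings reset com.canonical.indicator.datetime custom-time-format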
--------------------------------------------------------------------------------
via: http://ubuntuhandbook.org/index.php/2015/12/time-date-format-ubuntu-panel/
作者:[Ji m][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ubuntuhandbook.org/index.php/about/

View File

@ -1,3 +1,4 @@
Translating by ZTinoZ
How to Install Bugzilla with Apache and SSL on FreeBSD 10.2
================================================================================
Bugzilla is an open source, web-based bug-tracking and testing tool developed by the Mozilla project and licensed under the Mozilla Public License. It is used by high-tech organizations like Mozilla, Red Hat, and GNOME. Bugzilla was originally created by Terry Weissman in 1998. It is written in Perl and uses MySQL as its database back-end. It is server software designed to help you manage software development. Bugzilla has a lot of features: an optimized database, excellent security, an advanced search tool, integrated email capabilities, and more.
@ -264,4 +265,4 @@ via: http://linoxide.com/tools/install-bugzilla-apache-ssl-freebsd-10-2/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arulm/

View File

@ -1,450 +0,0 @@
Getting started with Docker by Dockerizing this Blog
======================
>This article covers the basic concepts of Docker and how to Dockerize an application by creating a custom Dockerfile
>Written by Benjamin Cane on 2015-12-01 10:00:00
Docker is an interesting technology that over the past two years has gone from an idea to being used by organizations all over the world to deploy applications. In today's article I am going to cover how to get started with Docker by "Dockerizing" an existing application. The application in question is actually this very blog!
## What is Docker
Before we dive into learning the basics of Docker, let's first understand what Docker is and why it is so popular. Docker is an operating system container management tool that allows you to easily manage and deploy applications by making it easy to package them within operating system containers.
### Containers vs. Virtual Machines
Containers may not be as familiar as virtual machines but they are another method to provide **Operating System Virtualization**. However, they differ quite a bit from standard virtual machines.
Standard virtual machines generally include a full Operating System, OS Packages and eventually an Application or two. This is made possible by a Hypervisor which provides hardware virtualization to the virtual machine. This allows for a single server to run many standalone operating systems as virtual guests.
Containers are similar to virtual machines in that they allow a single server to run multiple operating environments; these environments, however, are not full operating systems. Containers generally only include the necessary OS packages and applications. They do not generally contain a full operating system or hardware virtualization. This also means that containers have a smaller overhead than traditional virtual machines.
Containers and Virtual Machines are often seen as conflicting technology, however, this is often a misunderstanding. Virtual Machines are a way to take a physical server and provide a fully functional operating environment that shares those physical resources with other virtual machines. A Container is generally used to isolate a running process within a single host to ensure that the isolated processes cannot interact with other processes within that same system. In fact containers are closer to **BSD Jails** and `chroot`'ed processes than full virtual machines.
### What Docker provides on top of containers
Docker itself is not a container runtime environment; in fact Docker is actually container technology agnostic with efforts planned for Docker to support [Solaris Zones](https://blog.docker.com/2015/08/docker-oracle-solaris-zones/) and [BSD Jails](https://wiki.freebsd.org/Docker). What Docker provides is a method of managing, packaging, and deploying containers. While these types of functions may exist to some degree for virtual machines they traditionally have not existed for most container solutions and the ones that existed, were not as easy to use or fully featured as Docker.
Now that we know what Docker is, let's start learning how Docker works by first installing Docker and deploying a public pre-built container.
## Starting with Installation
As Docker is not installed by default, step 1 will be to install the Docker package; since our example system is running Ubuntu 14.04, we will do this using the Apt package manager.
```
# apt-get install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
aufs-tools cgroup-lite git git-man liberror-perl
Suggested packages:
btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc
git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki
git-svn
The following NEW packages will be installed:
aufs-tools cgroup-lite docker.io git git-man liberror-perl
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 7,553 kB of archives.
After this operation, 46.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
```
To check if any containers are running we can execute the `docker` command using the `ps` option.
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
The `ps` function of the `docker` command works similarly to the Linux `ps` command. It will show available Docker containers and their current status. Since we have not started any Docker containers yet, the command shows no running containers.
## Deploying a pre-built nginx Docker container
One of my favorite features of Docker is the ability to deploy a pre-built container in the same way you would deploy a package with `yum` or `apt-get`. To explain this better let's deploy a pre-built container running the nginx web server. We can do this by executing the `docker` command again, however, this time with the `run` option.
```
# docker run -d nginx
Unable to find image 'nginx' locally
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete
```
The `run` function of the `docker` command tells Docker to find a specified Docker image and start a container running that image. By default, Docker containers run in the foreground, meaning when you execute `docker run` your shell will be bound to the container's console and the process running within the container. In order to launch this Docker container in the background I included the `-d` (**detach**) flag.
By executing `docker ps` again we can see the nginx container running.
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d31ab01fc9 nginx:latest nginx -g 'daemon off 4 seconds ago Up 3 seconds 443/tcp, 80/tcp desperate_lalande
```
In the above output we can see the running container `desperate_lalande` and that this container has been built from the `nginx:latest` image.
### Docker Images
Images are one of Docker's key features and are similar to virtual machine images. Like a virtual machine image, a Docker image is a container that has been saved and packaged. Docker, however, doesn't just stop with the ability to create images. Docker also includes the ability to distribute those images via Docker repositories, which are a similar concept to package repositories. This is what gives Docker the ability to deploy an image like you would deploy a package with `yum`. To get a better understanding of how this works, let's look back at the output of the `docker run` execution.
```
# docker run -d nginx
Unable to find image 'nginx' locally
```
The first message we see is that `docker` could not find an image named nginx locally. The reason we see this message is that when we executed `docker run` we told Docker to start up a container based on an image named **nginx**. Since Docker is starting a container based on a specified image, it needs to first find that image. Before checking any remote repository, Docker first checks locally to see if there is a local image with the specified name.
Since this system is brand new there is no Docker image with the name nginx, which means Docker will need to download it from a Docker repository.
```
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete
```
This is exactly what the second part of the output is showing us. By default, Docker uses the [Docker Hub](https://hub.docker.com/) repository, which is a repository service that Docker (the company) runs.
Like GitHub, Docker Hub is free for public repositories but requires a subscription for private repositories. It is possible, however, to deploy your own Docker repository; in fact, it is as easy as `docker run registry`. For this article we will not be deploying a custom registry service.
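For the curious, a minimal sketch of what that would look like; the official `registry` image conventionally listens on port 5000, so we publish that port (this is an aside, and nothing later in this article depends on it):
```
# docker run -d -p 5000:5000 --name=registry registry
```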
### Stopping and Removing the Container
Before moving on to building a custom Docker container let's first clean up our Docker environment. We will do this by stopping the container from earlier and removing it.
To start a container we executed `docker` with the `run` option; in order to stop this same container we simply need to execute `docker` with the `kill` option, specifying the container name.
```
# docker kill desperate_lalande
desperate_lalande
```
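As an aside, `kill` terminates the container's main process immediately. If a graceful shutdown is preferable, Docker also provides a `stop` option, which asks the process to exit cleanly before forcing it; a quick sketch using our running container's name:
```
# docker stop desperate_lalande
```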
If we execute `docker ps` again we will see that the container is no longer running.
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
However, at this point we have only stopped the container; while it may no longer be running, it still exists. By default, `docker ps` will only show running containers; if we add the `-a` (all) flag it will show all containers, running or not.
```
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d31ab01fc9 5c82215b03d1 nginx -g 'daemon off 4 weeks ago Exited (-1) About a minute ago desperate_lalande
```
In order to fully remove the container we can use the `docker` command with the `rm` option.
```
# docker rm desperate_lalande
desperate_lalande
```
While the container has been removed, we still have the **nginx** image available. If we were to run `docker run -d nginx` again, the container would be started without having to fetch the nginx image again, because Docker already has a saved copy on our local system.
To see a full list of local images we can simply run the `docker` command with the `images` option.
```
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
nginx latest 9fab4090484a 5 days ago 132.8 MB
```
## Building our own custom image
At this point we have used a few basic Docker commands to start, stop, and remove containers based on a common pre-built image. In order to "Dockerize" this blog, however, we are going to have to build our own Docker image, and that means creating a **Dockerfile**.
With most virtual machine environments, if you wish to create an image of a machine, you need to first create a new virtual machine, install the OS, install the application, and then finally convert it to a template or image. With Docker, however, these steps are automated via a Dockerfile. A Dockerfile is a way of providing build instructions to Docker for the creation of a custom image. In this section we are going to build a custom Dockerfile that can be used to deploy this blog.
### Understanding the Application
Before we can jump into creating a Dockerfile we first need to understand what is required to deploy this blog.
The blog itself is actually static HTML pages generated by a custom static site generator that I wrote named **hamerkop**. The generator is very simple and mostly about getting the job done for this blog specifically. All the code and source files for this blog are available via a public [GitHub](https://github.com/madflojo/blog) repository. In order to deploy this blog we simply need to grab the contents of the GitHub repository, install **Python** along with some **Python** modules, and execute the `hamerkop` application. To serve the generated content we will use **nginx**, which means we will also need **nginx** to be installed.
So far this should be a pretty simple Dockerfile, but it will show us quite a bit of the [Dockerfile Syntax](https://docs.docker.com/v1.8/reference/builder/). To get started we can clone the GitHub repository and create a Dockerfile with our favorite editor; `vi` in my case.
```
# git clone https://github.com/madflojo/blog.git
Cloning into 'blog'...
remote: Counting objects: 622, done.
remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622
Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (242/242), done.
Checking connectivity... done.
# cd blog/
# vi Dockerfile
```
### FROM - Inheriting a Docker image
The first instruction of a Dockerfile is the `FROM` instruction. This is used to specify an existing Docker image to use as our base image. This basically provides us with a way to inherit another Docker image. In this case we will be starting with the same **nginx** image we were using before, if we wanted to start with a blank slate we could use the **Ubuntu** Docker image by specifying `ubuntu:latest`.
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
```
In addition to the `FROM` instruction, I also included a `MAINTAINER` instruction, which is used to show the author of the Dockerfile.
As Docker supports using `#` as a comment marker, I will be using this syntax quite a bit to explain the sections of this Dockerfile.
### Running a test build
Since we inherited the **nginx** Docker image our current Dockerfile also inherited all the instructions within the [Dockerfile](https://github.com/nginxinc/docker-nginx/blob/08eeb0e3f0a5ee40cbc2bc01f0004c2aa5b78c15/Dockerfile) used to build that **nginx** image. What this means is even at this point we are able to build a Docker image from this Dockerfile and run a container from that image. The resulting image will essentially be the same as the **nginx** image but we will run through a build of this Dockerfile now and a few more times as we go to help explain the Docker build process.
In order to start the build from a Dockerfile we can simply execute the `docker` command with the **build** option.
```
# docker build -t blog /root/blog
Sending build context to Docker daemon 23.6 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <ben@bencane.com>
---> Running in c97f36450343
---> 60a44f78d194
Removing intermediate container c97f36450343
Successfully built 60a44f78d194
```
In the above example I used the `-t` (**tag**) flag to "tag" the image as "blog". This essentially allows us to name the image; without specifying a tag, the image would only be callable via an **Image ID** that Docker assigns. In this case the **Image ID** is `60a44f78d194`, which we can see from the `docker` command's build success message.
In addition to the `-t` flag, I also specified the directory `/root/blog`. This directory is the "build directory", which is the directory that contains the Dockerfile and any other files necessary to build this container.
Now that we have run through a successful build, let's start customizing this image.
### Using RUN to execute apt-get
The static site generator used to generate the HTML pages is written in **Python** and because of this the first custom task we should perform within this `Dockerfile` is to install Python. To install the Python package we will use the Apt package manager. This means we will need to specify within the Dockerfile that `apt-get update` and `apt-get install python-dev` are executed; we can do this with the `RUN` instruction.
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
```
In the above we are simply using the `RUN` instruction to tell Docker that when it builds this image it will need to execute the specified `apt-get` commands. The interesting part of this is that these commands are only executed within the context of this container. What this means is that even though `python-dev` and `python-pip` are being installed within the container, they are not being installed for the host itself. Or to put it more simply: within the container the `pip` command will execute; outside the container, the `pip` command does not exist.
It is also important to note that the Docker build process does not accept user input during the build. This means that any commands being executed by the `RUN` instruction must complete without user input. This adds a bit of complexity to the build process as many applications require user input during installation. For our example, none of the commands executed by `RUN` require user input.
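As an illustration of how to keep installs prompt-free, here is a hedged sketch: the `-y` flag we already pass to `apt-get` suppresses its confirmation prompt, and the standard Debian `DEBIAN_FRONTEND` variable silences package configuration dialogs. The `tzdata` package is only a hypothetical example of a prompt-happy package, not part of this blog's build:
```
## Hypothetical example: forcing a fully non-interactive install during a build
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata
```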
### Installing Python modules
With **Python** installed we now need to install some Python modules. To do this outside of Docker, we would generally use the `pip` command and reference a file within the blog's Git repository named `requirements.txt`. In an earlier step we used the `git` command to "clone" the blog's GitHub repository to the `/root/blog` directory; this directory also happens to be the directory in which we created the `Dockerfile`. This is important, as it means the contents of the Git repository are accessible to Docker during the build process.
When executing a build, Docker will set the context of the build to the specified "build directory". This means that any files within that directory and below can be used during the build process, files outside of that directory (outside of the build context), are inaccessible.
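To illustrate the build context rule with a hypothetical example (the second `COPY` is shown commented out because it would abort the build):
```
## works: the file lives inside the /root/blog build context
COPY requirements.txt /build/
## fails: the path escapes the build context
# COPY ../secrets.txt /build/
```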
In order to install the required Python modules we will need to copy the `requirements.txt` file from the build directory into the container. We can do this using the `COPY` instruction within the `Dockerfile`.
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
## Create a directory for required files
RUN mkdir -p /build/
## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt
```
Within the `Dockerfile` we added 3 instructions. The first instruction uses `RUN` to create a `/build/` directory within the container. This directory will be used to copy any application files needed to generate the static HTML pages. The second instruction is the `COPY` instruction which copies the `requirements.txt` file from the "build directory" (`/root/blog`) into the `/build` directory within the container. The third is using the `RUN` instruction to execute the `pip` command; installing all the modules specified within the `requirements.txt` file.
`COPY` is an important instruction to understand when building custom images. Without the explicit `COPY` within the Dockerfile, this Docker image would not contain the requirements.txt file. With Docker containers everything is isolated; unless files are explicitly added within a Dockerfile, a container will not include required dependencies.
### Re-running a build
Now that we have a few customization tasks for Docker to perform let's try another build of the blog image again.
```
# docker build -t blog /root/blog
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <ben@bencane.com>
---> Using cache
---> 8e0f1899d1eb
Step 2 : RUN apt-get update
---> Using cache
---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
---> Using cache
---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
---> Running in bde05cf1e8fe
---> f4b66e09fa61
Removing intermediate container bde05cf1e8fe
Step 5 : COPY requirements.txt /build/
---> cef11c3fb97c
Removing intermediate container 9aa8ff43f4b0
Step 6 : RUN pip install -r /build/requirements.txt
---> Running in c50b15ddd8b1
Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1))
Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2))
<truncated to reduce noise>
Successfully installed jinja2 PyYaml mistune markdown MarkupSafe
Cleaning up...
---> abab55c20962
Removing intermediate container c50b15ddd8b1
Successfully built abab55c20962
```
From the above build output we can see the build was successful, but we can also see another interesting message: `---> Using cache`. What this message is telling us is that Docker was able to use its build cache during the build of this image.
#### Docker build cache
When Docker is building an image, it doesn't just build a single image; it actually builds multiple images throughout the build process. In fact, we can see from the above output that after each "Step" Docker is creating a new image.
```
Step 5 : COPY requirements.txt /build/
---> cef11c3fb97c
```
The last line from the above snippet is actually Docker informing us of the creation of a new image, which it does by printing the **Image ID**: `cef11c3fb97c`. The useful thing about this approach is that Docker is able to use these images as a cache during subsequent builds of the **blog** image. This is useful because it allows Docker to speed up the build process for new builds of the same container. If we look at the example above, we can actually see that rather than installing the `python-dev` and `python-pip` packages again, Docker was able to use a cached image. However, since Docker was unable to find a cached build that had executed the `mkdir` command, each subsequent step was executed fresh.
The Docker build cache is a bit of a gift and a curse. The reason for this is that the decision to use the cache or to rerun the instruction is made within a very narrow scope. For example, if there was a change to the `requirements.txt` file, Docker would detect this change during the build and start fresh from that point forward. It does this because it can view the contents of the `requirements.txt` file. The execution of the `apt-get` commands, however, is another story. If the **Apt** repository that provides the Python packages were to contain a newer version of the python-pip package, Docker would not be able to detect the change and would simply use the build cache. This means that an older package may be installed. While this may not be a major issue for the `python-pip` package, it could be a problem if the cached image included a package with a known vulnerability.
For this reason it is useful to periodically rebuild the image without using Docker's cache. To do this you can simply specify `--no-cache=True` when executing a Docker build.
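As a quick sketch, a cache-free rebuild simply reuses the build command from earlier with that flag added:
```
# docker build --no-cache=True -t blog /root/blog
```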
## Deploying the rest of the blog
With the Python packages and modules installed, we are left with copying the required application files and running the `hamerkop` application. To do this we will simply use more `COPY` and `RUN` instructions.
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
## Create a directory for required files
RUN mkdir -p /build/
## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt
## Add blog code and required files
COPY static /build/static
COPY templates /build/templates
COPY hamerkop /build/
COPY config.yml /build/
COPY articles /build/articles
## Run Generator
RUN /build/hamerkop -c /build/config.yml
```
Now that we have the rest of the build instructions, let's run through another build and verify that the image builds successfully.
```
# docker build -t blog /root/blog/
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <ben@bencane.com>
---> Using cache
---> 8e0f1899d1eb
Step 2 : RUN apt-get update
---> Using cache
---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
---> Using cache
---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
---> Using cache
---> f4b66e09fa61
Step 5 : COPY requirements.txt /build/
---> Using cache
---> cef11c3fb97c
Step 6 : RUN pip install -r /build/requirements.txt
---> Using cache
---> abab55c20962
Step 7 : COPY static /build/static
---> 15cb91531038
Removing intermediate container d478b42b7906
Step 8 : COPY templates /build/templates
---> ecded5d1a52e
Removing intermediate container ac2390607e9f
Step 9 : COPY hamerkop /build/
---> 59efd1ca1771
Removing intermediate container b5fbf7e817b7
Step 10 : COPY config.yml /build/
---> bfa3db6c05b7
Removing intermediate container 1aebef300933
Step 11 : COPY articles /build/articles
---> 6b61cc9dde27
Removing intermediate container be78d0eb1213
Step 12 : RUN /build/hamerkop -c /build/config.yml
---> Running in fbc0b5e574c5
Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux
Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux
<truncated to reduce noise>
Successfully created file /usr/share/nginx/html//archive.html
Successfully created file /usr/share/nginx/html//sitemap.xml
---> 3b25263113e1
Removing intermediate container fbc0b5e574c5
Successfully built 3b25263113e1
```
### Running a custom container
With a successful build we can now start our custom container by running the `docker` command with the `run` option, similar to how we started the nginx container earlier.
```
# docker run -d -p 80:80 --name=blog blog
5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1
```
Once again the `-d` (**detach**) flag was used to tell Docker to run the container in the background. However, there are also two new flags. The first new flag is `--name`, which is used to give the container a user-specified name. In the earlier example we did not specify a name, and because of that Docker randomly generated one. The second new flag is `-p`; this flag allows users to map a port from the host machine to a port within the container.
The base **nginx** image we used exposes port 80 for the HTTP service. By default, ports bound within a Docker container are not bound on the host system as a whole. In order for external systems to access ports exposed within a container the ports must be mapped from a host port to a container port using the `-p` flag. The command above maps port 80 from the host, to port 80 within the container. If we wished to map port 8080 from the host, to port 80 within the container we could do so by specifying the ports in the following syntax `-p 8080:80`.
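For example, that hypothetical 8080 variant of our run command would look like this (you would need to remove the first container, or pick another name, before actually running it):
```
# docker run -d -p 8080:80 --name=blog blog
```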
The original `docker run` command appears to have started our container successfully; we can verify this by executing `docker ps`.
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d264c7ef92bd blog:latest nginx -g 'daemon off 3 seconds ago Up 3 seconds 443/tcp, 0.0.0.0:80->80/tcp blog
```
## Wrapping up
At this point we now have a running custom Docker container. While we touched on a few Dockerfile instructions within this article, we have yet to discuss all of them. For a full list of Dockerfile instructions you can check out [Docker's reference page](https://docs.docker.com/v1.8/reference/builder/), which explains the instructions very well.
Another good resource is their [Dockerfile Best Practices page](https://docs.docker.com/engine/articles/dockerfile_best-practices/), which contains quite a few best practices for building custom Dockerfiles. Some of these tips are very useful, such as strategically ordering the commands within the Dockerfile. In the above examples our Dockerfile has the `COPY` instruction for the `articles` directory as the last `COPY` instruction. The reason for this is that the `articles` directory will change quite often. It's best to put instructions that will change often at the lowest point possible within the Dockerfile to optimize the steps that can be cached.
In this article we covered how to start a pre-built container and how to build, then deploy, a custom container. While there is quite a bit to learn about Docker, this article should give you a good idea of how to get started. Of course, as always, if you think there is anything that should be added, drop it in the comments below.
--------------------------------------
via: http://bencane.com/2015/12/01/getting-started-with-docker-by-dockerizing-this-blog/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+bencane%2FSAUo+%28Benjamin+Cane%29
Author: Benjamin Cane
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)

View File

@@ -0,0 +1,104 @@
How to Install Light Table 0.8 in Ubuntu 14.04, 15.10
================================================================================
![](http://ubuntuhandbook.org/wp-content/uploads/2014/11/LightTable-IDE-logo-icon.png)
The Light Table IDE has just reached a new stable release after more than one year of development. It now provides a 64-bit-only binary for Linux.
Changes in LightTable 0.8.0:
- CHANGED: We have switched to Electron from NW.js
- CHANGED: LT's releases and self-updating processes are completely in the open on GitHub
- ADDED: LT can be built from source with provided scripts across supported platforms
- ADDED: Most of LT's node libraries are installed as npm dependencies instead of as forked libraries
- ADDED: Significant documentation. See more below
- FIX: Major usability issues on >= OSX 10.10
- CHANGED: 32-bit linux is no longer an official download. Building from source will still be supported
- FIX: ClojureScript eval for modern versions of ClojureScript
- More details at [github.com/LightTable/LightTable/releases][1]
![LightTable 0.8.0](http://ubuntuhandbook.org/wp-content/uploads/2015/12/lighttable-08.jpg)
### How to Install Light Table 0.8.0 in Ubuntu: ###
The steps below show you how to install the official binary in Ubuntu. They work on all current Ubuntu releases (**64-bit only**).
Before getting started, please make a backup if you have a previous release installed.
**1.** Download the Linux binary from link below:
- [lighttable-0.8.0-linux.tar.gz][2]
**2.** Open a terminal from the Unity Dash, App Launcher, or via the Ctrl+Alt+T keys. When it opens, paste the command below and hit enter:
gksudo file-roller ~/Downloads/lighttable-0.8.0-linux.tar.gz
![open-via-fileroller](http://ubuntuhandbook.org/wp-content/uploads/2015/12/open-via-fileroller.jpg)
Install `gksu` from Ubuntu Software Center if the command does not work.
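Assuming the standard Ubuntu repositories, the terminal equivalent would be:

    sudo apt-get install gksu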
**3.** The previous command opens the downloaded archive in Archive Manager with root user privileges.
When it opens, do:
- right-click and rename the folder name to **LightTable**
- extract it to **Computer -> /opt/** directory.
![extract-lighttable](http://ubuntuhandbook.org/wp-content/uploads/2015/12/extract-lighttable.jpg)
Finally you should have LightTable installed in the /opt/ directory:
![lighttable-in-opt](http://ubuntuhandbook.org/wp-content/uploads/2015/12/lighttable-in-opt.jpg)
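If you prefer to skip the GUI steps above, a terminal-only sketch could look like the following; the folder name inside the tarball is an assumption on my part, so check it with the first command and adjust as needed:

    tar -tzf ~/Downloads/lighttable-0.8.0-linux.tar.gz | head -n1
    sudo tar -xzf ~/Downloads/lighttable-0.8.0-linux.tar.gz -C /opt/
    sudo mv /opt/lighttable-0.8.0-linux /opt/LightTable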
**4.** Create a launcher so you can start LightTable from Unity Dash or App Launcher.
Open a terminal and run the command below to create & edit a launcher file for LightTable:
gksudo gedit /usr/share/applications/lighttable.desktop
When the file opens in the Gedit text editor, paste the following and save the file:
[Desktop Entry]
Version=1.0
Type=Application
Name=Light Table
GenericName=Text Editor
Comment=Open source IDE that can modify running programs and embed anything from websites to games
Exec=/opt/LightTable/LightTable %F
Terminal=false
MimeType=text/plain;
Icon=/opt/LightTable/resources/app/core/img/lticon.png
Categories=TextEditor;Development;Utility;
StartupNotify=true
Actions=Window;Document;
Name[en_US]=Light Table
[Desktop Action Window]
Name=New Window
Exec=/opt/LightTable/LightTable -n
OnlyShowIn=Unity;
[Desktop Action Document]
Name=New File
Exec=/opt/LightTable/LightTable --command new_file
OnlyShowIn=Unity;
So it looks like:
![lighttable-launcher](http://ubuntuhandbook.org/wp-content/uploads/2015/12/lighttable-launcher.jpg)
Finally launch the IDE from Unity Dash or Application Launcher and enjoy!
--------------------------------------------------------------------------------
via: http://ubuntuhandbook.org/index.php/2015/12/install-light-table-0-8-ubuntu-14-04/
Author: [Ji m][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://ubuntuhandbook.org/index.php/about/
[1]:https://github.com/LightTable/LightTable/releases
[2]:https://github.com/LightTable/LightTable/releases/download/0.8.0/lighttable-0.8.0-linux.tar.gz

View File

@@ -0,0 +1,110 @@
How to block network traffic by country on Linux
================================================================================
As a system admin who maintains production Linux servers, there are circumstances where you need to **selectively block or allow network traffic based on geographic locations**. For example, you are experiencing denial-of-service attacks mostly originating from IP addresses registered with a particular country. You want to block SSH logins from unknown foreign countries for security reasons. Your company has a distribution right to online videos, which requires it to legally stream to particular countries only. You need to prevent any local host from uploading documents to any non-US remote cloud storage due to geo-restriction company policies.
All these scenarios require an ability to set up a firewall which does **country-based traffic filtering**. There are a couple of ways to do that. For one, you can use TCP wrappers to set up conditional blocking for individual applications (e.g., SSH, NFS, httpd). The downside is that the application you want to protect must be built with TCP wrappers support. Besides, TCP wrappers are not universally available across different platforms (e.g., Arch Linux [dropped][1] its support). An alternative approach is to set up [ipset][2] with country-based GeoIP information and apply it to iptables rules. The latter approach is more promising, as the iptables-based filtering is application-agnostic and easy to set up.
In this tutorial, I am going to present **another iptables-based GeoIP filtering method, implemented with xtables-addons**. For those unfamiliar with it, xtables-addons is a suite of extensions for netfilter/iptables. Included in xtables-addons is a module called xt_geoip which extends netfilter/iptables to filter, NAT or mangle packets based on source/destination countries. To use xt_geoip, you don't need to recompile the kernel or iptables; you only need to build xtables-addons as modules, using the current kernel build environment (/lib/modules/`uname -r`/build). A reboot is not required either. As soon as you build and install xtables-addons, xt_geoip is immediately usable with iptables.
As for the comparison between xt_geoip and ipset, the [official source][3] mentions that xt_geoip is superior to ipset in terms of memory footprint. But in terms of matching speed, hash-based ipset might have an edge.
In the rest of the tutorial, I am going to show **how to use iptables/xt_geoip to block network traffic based on its source/destination countries**.
### Install Xtables-addons on Linux ###
Here is how you can compile and install xtables-addons on various Linux platforms.
To build xtables-addons, you need to install a couple of dependent packages first.
#### Install Dependencies on Debian, Ubuntu or Linux Mint ####
$ sudo apt-get install iptables-dev xtables-addons-common libtext-csv-xs-perl pkg-config
#### Install Dependencies on CentOS, RHEL or Fedora ####
CentOS/RHEL 6 requires the EPEL repository to be set up first (for perl-Text-CSV_XS).
$ sudo yum install gcc-c++ make automake kernel-devel-`uname -r` wget unzip iptables-devel perl-Text-CSV_XS
#### Compile and Install Xtables-addons ####
Download the latest `xtables-addons` source code from the [official site][4], and build/install it as follows.
$ wget http://downloads.sourceforge.net/project/xtables-addons/Xtables-addons/xtables-addons-2.10.tar.xz
$ tar xf xtables-addons-2.10.tar.xz
$ cd xtables-addons-2.10
$ ./configure
$ make
$ sudo make install
Note that for Red Hat based systems (CentOS, RHEL, Fedora) which have SELinux enabled by default, it is necessary to adjust SELinux policy as follows. Otherwise, SELinux will prevent iptables from loading xt_geoip module.
$ sudo chcon -vR --user=system_u /lib/modules/$(uname -r)/extra/*.ko
$ sudo chcon -vR --type=lib_t /lib64/xtables/*.so
### Install GeoIP Database for Xtables-addons ###
The next step is to install the GeoIP database which will be used by xt_geoip for IP-to-country mapping. Conveniently, the xtables-addons source package comes with two helper scripts for downloading the GeoIP database from MaxMind and converting it into a binary form recognized by xt_geoip. These scripts are found in the geoip folder inside the source package. Follow the instructions below to build and install the GeoIP database on your system.
$ cd geoip
$ ./xt_geoip_dl
$ ./xt_geoip_build GeoIPCountryWhois.csv
$ sudo mkdir -p /usr/share/xt_geoip
$ sudo cp -r {BE,LE} /usr/share/xt_geoip
According to [MaxMind][5], their GeoIP database is 99.8% accurate on a country level, and the database is updated every month. To keep the locally installed GeoIP database up to date, you want to set up a monthly [cron job][6] that refreshes the local GeoIP database just as often.
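A minimal sketch of such a cron job, saved for example as /etc/cron.monthly/xt_geoip_update and made executable; the source path is an assumption based on where the xtables-addons tarball was unpacked earlier, so adjust it for your system:

    #!/bin/sh
    # Refresh the xt_geoip database from MaxMind once a month
    cd /root/xtables-addons-2.10/geoip || exit 1
    ./xt_geoip_dl
    ./xt_geoip_build GeoIPCountryWhois.csv
    cp -r BE LE /usr/share/xt_geoip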
### Block Network Traffic Originating from or Destined to a Country ###
Once xt_geoip module and GeoIP database are installed, you can immediately use the geoip match options in iptables command.
$ sudo iptables -m geoip --src-cc country[,country...] --dst-cc country[,country...]
Countries you want to block are specified using [two-letter ISO3166 code][7] (e.g., US (United States), CN (China), IN (India), FR (France)).
For example, if you want to block incoming traffic from Yemen (YE) and Zambia (ZM), the following iptables command will do.
$ sudo iptables -I INPUT -m geoip --src-cc YE,ZM -j DROP
If you want to block outgoing traffic destined to China (CN), run the following command.
$ sudo iptables -A OUTPUT -m geoip --dst-cc CN -j DROP
The matching condition can also be "negated" by prepending "!" to "--src-cc" or "--dst-cc". For example:
If you want to block all incoming non-US traffic on your server, run this:
$ sudo iptables -I INPUT -m geoip ! --src-cc US -j DROP
![](https://c2.staticflickr.com/6/5654/23665427845_050241b03f_c.jpg)
#### For Firewall-cmd Users ####
Some distros such as CentOS/RHEL 7 or Fedora have replaced iptables with firewalld as the default firewall service. On such systems, you can use firewall-cmd to block traffic using xt_geoip similarly. The above three examples can be rewritten with firewall-cmd as follows.
$ sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -m geoip --src-cc YE,ZM -j DROP
$ sudo firewall-cmd --direct --add-rule ipv4 filter OUTPUT 0 -m geoip --dst-cc CN -j DROP
$ sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -m geoip ! --src-cc US -j DROP
### Conclusion ###
In this tutorial, I presented iptables/xt_geoip which is an easy way to filter network packets based on their source/destination countries. This can be a useful arsenal to deploy in your firewall system if needed. As a final word of caution, I should mention that GeoIP-based traffic filtering is not a foolproof way to ban certain countries on your server. GeoIP database is by nature inaccurate/incomplete, and source/destination geography can easily be spoofed using VPN, Tor or any compromised relay hosts. Geography-based filtering can even block legitimate traffic that should not be banned. Understand this limitation before you decide to deploy it in your production environment.
--------------------------------------------------------------------------------
via: http://xmodulo.com/block-network-traffic-by-country-linux.html
Author: [Dan Nanni][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://xmodulo.com/author/nanni
[1]:https://www.archlinux.org/news/dropping-tcp_wrappers-support/
[2]:http://xmodulo.com/block-unwanted-ip-addresses-linux.html
[3]:http://xtables-addons.sourceforge.net/geoip.php
[4]:http://xtables-addons.sourceforge.net/
[5]:https://support.maxmind.com/geoip-faq/geoip2-and-geoip-legacy-databases/how-accurate-are-your-geoip2-and-geoip-legacy-databases/
[6]:http://ask.xmodulo.com/add-cron-job-linux.html
[7]:https://en.wikipedia.org/wiki/ISO_3166-1

View File

@ -0,0 +1,101 @@
translation by strugglingyouth
Linux Desktop Fun: Summon Swarms Of Penguins To Waddle About The Desktop
================================================================================
XPenguins is a program for animating cute cartoon animals in your root window. By default they will be penguins: they drop in from the top of the screen, walk along the tops of your windows, up the sides of your windows, levitate, skateboard, and do other similarly exciting things. Now you can send an army of cute little penguins to invade the screen of someone else on your network.
### Install XPenguins ###
Open a command-line terminal (select Applications > Accessories > Terminal), and then type the following commands to install the XPenguins program. First, type the command apt-get update to tell apt to refresh its package information by querying the configured repositories, and then install the required program:
$ sudo apt-get update
$ sudo apt-get install xpenguins
### How do I Start XPenguins Locally? ###
Type the following command:
$ xpenguins
Sample outputs:
![An army of cute little penguins invading the screen](http://files.cyberciti.biz/uploads/tips/2011/07/Workspace-1_002_12_07_2011.png)
An army of cute little penguins invading the screen
![Linux: Cute little penguins walking along the tops of your windows](http://files.cyberciti.biz/uploads/tips/2011/07/Workspace-1_001_12_07_2011.png)
Linux: Cute little penguins walking along the tops of your windows
![Xpenguins Screenshot](http://files.cyberciti.biz/uploads/tips/2011/07/xpenguins-screenshot.jpg)
Xpenguins Screenshot
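To invade someone else's screen, as teased above, one minimal approach is to point the standard X DISPLAY variable at the other machine; this sketch assumes the remote X server accepts connections from your host (e.g., permitted via xhost, which most modern setups disable):

    $ DISPLAY=remotehost:0 xpenguins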
Be careful when you move windows, as the little guys squash easily. If you send the program an interrupt signal (Ctrl-C) they will burst.
### Themes ###
To list themes, enter:
$ xpenguins -l
Sample outputs:
Big Penguins
Bill
Classic Penguins
Penguins
Turtles
You can use alternative themes as follows:
$ xpenguins --theme "Big Penguins" --theme "Turtles"
You can install additional themes as follows:
$ cd /tmp
$ wget http://xpenguins.seul.org/xpenguins_themes-1.0.tar.gz
$ tar -zxvf xpenguins_themes-1.0.tar.gz
$ mkdir ~/.xpenguins
$ mv -v themes ~/.xpenguins/
$ xpenguins -l
Sample outputs:
Lemmings
Sonic the Hedgehog
The Simpsons
Winnie the Pooh
Worms
Big Penguins
Bill
Classic Penguins
Penguins
Turtles
To start with a random theme, enter:
$ xpenguins --random-theme
To load all available themes and run them simultaneously, enter:
$ xpenguins --all
More links and information:
- [XPenguins][1] home page.
- man xpenguins
- More Linux / UNIX desktop fun with [Steam Locomotive][2] and [Terminal ASCII Aquarium][3].
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/tips/linux-cute-little-xpenguins-walk-along-tops-ofyour-windows.html
Author: Vivek Gite
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[1]:http://xpenguins.seul.org/
[2]:http://www.cyberciti.biz/tips/displays-animations-when-accidentally-you-type-sl-instead-of-ls.html
[3]:http://www.cyberciti.biz/tips/linux-unix-apple-osx-terminal-ascii-aquarium.html

View File

@ -0,0 +1,201 @@
Linux / Unix Desktop Fun: Text Mode ASCII-art Box and Comment Drawing
================================================================================
The boxes command is a text filter and a little-known tool that can draw any kind of ASCII-art box around its input text or code, for fun and profit. You can quickly create email signatures, or create regional comments in any programming language. This command was intended to be used with the vim text editor, but it can be tied to any text editor which supports filters, as well as used from the command line as a standalone tool.
### Task: Install boxes ###
Use the [apt-get command][1] to install boxes under Debian / Ubuntu Linux:
$ sudo apt-get install boxes
Sample outputs:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
boxes
0 upgraded, 1 newly installed, 0 to remove and 6 not upgraded.
Need to get 0 B/59.8 kB of archives.
After this operation, 205 kB of additional disk space will be used.
Selecting previously deselected package boxes.
(Reading database ... 224284 files and directories currently installed.)
Unpacking boxes (from .../boxes_1.0.1a-2.3_amd64.deb) ...
Processing triggers for man-db ...
Setting up boxes (1.0.1a-2.3) ...
RHEL / CentOS / Fedora Linux users, use the [yum command to install boxes][2] (first [enable EPEL repo as described here][3]):
# yum install boxes
Sample outputs:
Loaded plugins: rhnplugin
Setting up Install Process
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.
--> Running transaction check
---> Package boxes.x86_64 0:1.1-8.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==========================================================================
Package Arch Version Repository Size
==========================================================================
Installing:
boxes x86_64 1.1-8.el6 epel 64 k
Transaction Summary
==========================================================================
Install 1 Package(s)
Total download size: 64 k
Installed size: 151 k
Is this ok [y/N]: y
Downloading Packages:
boxes-1.1-8.el6.x86_64.rpm | 64 kB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : boxes-1.1-8.el6.x86_64 1/1
Installed:
boxes.x86_64 0:1.1-8.el6
Complete!
FreeBSD user can use the port as follows:
cd /usr/ports/misc/boxes/ && make install clean
Or, add the package using the pkg_add command:
# pkg_add -r boxes
### Draw any kind of box around some given text ###
Type the following command:
echo "This is a test" | boxes
Or specify the name of the design to use:
echo -e "\n\tVivek Gite\n\tvivek@nixcraft.com\n\twww.cyberciti.biz" | boxes -d dog
Sample outputs:
![Unix / Linux: Boxes Command To Draw Various Designs](http://s0.cyberciti.org/uploads/l/tips/2012/06/unix-linux-boxes-draw-dog-design.png)
Fig.01: Unix / Linux: Boxes Command To Draw Various Designs
#### How do I list all designs? ####
The syntax is:
boxes option
pipe | boxes options
echo "text" | boxes -d foo
boxes -l
The -d design option sets the name of the design to use. The syntax is:
echo "Text" | boxes -d design
pipe | boxes -d design
The -l option lists designs. It produces a listing of all available box designs in the config file, along with a sample box and information about its creator:
boxes -l
boxes -l | more
boxes -l | less
Sample outputs:
43 Available Styles in "/etc/boxes/boxes-config":
-------------------------------------------------
ada-box (Neil Bird ):
---------------
-- --
-- --
---------------
ada-cmt (Neil Bird ):
--
-- regular Ada
-- comments
--
boy (Joan G. Stark ):
.-"""-.
/ .===. \
\/ 6 6 \/
( \___/ )
_________ooo__\_____/______________
/ \
| joan stark spunk1111@juno.com |
| VISIT MY ASCII ART GALLERY: |
| http://www.geocities.com/SoHo/7373/ |
\_______________________ooo_________/ jgs
| | |
|_ | _|
| | |
|__|__|
/-'Y'-\
(__/ \__)
....
...
output truncated
..
### How do I filter text via boxes while using vi/vim text editor? ###
You can use any external command with vi or vim. In this example, to [insert the current date and time][4], enter:
!!date
OR
:r !date
You need to type the above command in Vim to read the output from the date command. This will insert the date and time after the current line:
Tue Jun 12 00:05:38 IST 2012
You can do the same with the boxes command. Create a sample shell script or a C program as follows:
#!/bin/bash
Purpose: Backup mysql database to remote server.
Author: Vivek Gite
Last updated on: Tue Jun, 12 2012
Now type the following (move the cursor to the second line, i.e. the line which starts with "Purpose: ..."):
3!!boxes
And voilà, you will get output as follows:
#!/bin/bash
/****************************************************/
/* Purpose: Backup mysql database to remote server. */
/* Author: Vivek Gite */
/* Last updated on: Tue Jun, 12 2012 */
/****************************************************/
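The same filtering works straight from the command line too. As a sketch (assuming the default box design on your system is the C-comment style shown above), you can pipe the same lines through boxes directly:

    printf '%s\n' "Purpose: Backup mysql database to remote server." "Author: Vivek Gite" | boxes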
This video will give you an introduction to boxes command:
YouTube video:
<iframe width="595" height="446" frameborder="0" src="http://www.youtube.com/embed/glzXjNvrYOc?rel=0"></iframe>
(Video:01: boxes command in action. BTW, this is my first video so go easy on me and let me know what you think.)
See also
- boxes man page
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/tips/unix-linux-draw-any-kind-of-boxes-around-text-editor.html
Author: Vivek Gite
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[1]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
[2]:http://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
[3]:http://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/
[4]:http://www.cyberciti.biz/faq/vim-inserting-current-date-time-under-linux-unix-osx/

View File

@ -0,0 +1,501 @@
translating by ezio
Securi-Pi: Using the Raspberry Pi as a Secure Landing Point
================================================================================
Like many LJ readers these days, I've been leading a bit of a techno-nomadic lifestyle for the past few years—jumping from network to network, access point to access point, as I bounce around the real world while maintaining my connection to the Internet and other networks I use on a daily basis. As of late, I've found that more and more networks are starting to block outbound ports like SMTP (port 25), SSH (port 22) and others. It becomes really frustrating when you drop into a local coffee house expecting to be able to fire up your SSH client and get a few things done, and you can't, because the network's blocking you.
However, I have yet to run across a network that blocks HTTPS outbound (port 443). After a bit of fiddling with a Raspberry Pi 2 I have at home, I was able to get a nice clean solution that lets me hit various services on the Raspberry Pi via port 443—allowing me to walk around blocked ports and hobbled networks so I can do the things I need to do. In a nutshell, I have set up this Raspberry Pi to act as an OpenVPN endpoint, SSH endpoint and Apache server—with all these services listening on port 443 so networks with restrictive policies aren't an issue.
### Notes
This solution will work on most networks, but firewalls that do deep packet inspection on outbound traffic still can block traffic that's tunneled using this method. However, I haven't been on a network that does that...yet. Also, while I use a lot of cryptography-based solutions here (OpenVPN, HTTPS, SSH), I haven't done a strict security audit of this setup. DNS may leak information, for example, and there may be other things I haven't thought of. I'm not recommending this as a way to hide all your traffic—I just use this so that I can connect to the Internet in an unfettered way when I'm out and about.
### Getting Started
Let's start off with what you need to put this solution together. I'm using this on a Raspberry Pi 2 at home, running the latest Raspbian, but this should work just fine on a Raspberry Pi Model B, as well. It fits within the 512MB of RAM footprint quite easily, although performance may be a bit slower, because the Raspberry Pi Model B has a single-core CPU as opposed to the Pi 2's quad-core. My Raspberry Pi 2 is behind my home's router/firewall, so I get the added benefit of being able to access my machines at home. This also means that any traffic I send to the Internet appears to come from my home router's IP address, so this isn't a solution designed to protect anonymity. If you don't have a Raspberry Pi, or don't want this running out of your home, it's entirely possible to run this out of a small cloud server too. Just make sure that the server's running Debian or Ubuntu, as these instructions are targeted at Debian-based distributions.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11913f1.jpg)
Figure 1. The Raspberry Pi, about to become an encrypted network endpoint.
### Installing and Configuring BIND
Once you have your platform up and running—whether it's a Raspberry Pi or otherwise—next you're going to install BIND, the nameserver that powers a lot of the Internet. You're going to install BIND as a caching nameserver only, and not have it service incoming requests from the Internet. Installing BIND will give you a DNS server to point your OpenVPN clients at, once you get to the OpenVPN step. Installing BIND is easy; it's just a simple `apt-get` command to install it:
```
root@test:~# apt-get install bind9
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
bind9utils
Suggested packages:
bind9-doc resolvconf ufw
The following NEW packages will be installed:
bind9 bind9utils
0 upgraded, 2 newly installed, 0 to remove and
↪0 not upgraded.
Need to get 490 kB of archives.
After this operation, 1,128 kB of additional disk
↪space will be used.
Do you want to continue [Y/n]? y
```
There are a couple of minor configuration changes that need to be made to one of the config files of BIND before it can operate as a caching nameserver. Both changes are in `/etc/bind/named.conf.options`. First, you're going to uncomment the "forwarders" section of this file, and you're going to add a nameserver on the Internet to which to forward requests. In this case, I'm going to add Google's DNS (8.8.8.8). The "forwarders" section of the file should look like this:
```
forwarders {
8.8.8.8;
};
```
The second change you're going to make allows queries from your internal network and localhost. Simply add this line to the bottom of the configuration file, right before the `};` that ends the file:
```
allow-query { 192.168.1.0/24; 127.0.0.0/16; };
```
That line above allows this DNS server to be queried from the network it's on (in this case, my network behind my firewall) and localhost. Next, you just need to restart BIND:
```
root@test:~# /etc/init.d/bind9 restart
[....] Stopping domain name service...: bind9waiting
↪for pid 13209 to die
. ok
[ ok ] Starting domain name service...: bind9.
```
Now you can test `nslookup` to make sure your server works:
```
root@test:~# nslookup
> server localhost
Default server: localhost
Address: 127.0.0.1#53
> www.google.com
Server: localhost
Address: 127.0.0.1#53
Non-authoritative answer:
Name: www.google.com
Address: 173.194.33.176
Name: www.google.com
Address: 173.194.33.177
Name: www.google.com
Address: 173.194.33.178
Name: www.google.com
Address: 173.194.33.179
Name: www.google.com
Address: 173.194.33.180
```
That's it! You've got a working nameserver on this machine. Next, let's move on to OpenVPN.
### Installing and Configuring OpenVPN
OpenVPN is an open-source VPN solution that relies on SSL/TLS for its key exchange. It's also easy to install and get working under Linux. Configuration of OpenVPN can be a bit daunting, but you're not going to deviate from the default configuration by much. To start, you're going to run an apt-get command and install OpenVPN:
```
root@test:~# apt-get install openvpn
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
liblzo2-2 libpkcs11-helper1
Suggested packages:
resolvconf
The following NEW packages will be installed:
liblzo2-2 libpkcs11-helper1 openvpn
0 upgraded, 3 newly installed, 0 to remove and
↪0 not upgraded.
Need to get 621 kB of archives.
After this operation, 1,489 kB of additional disk
↪space will be used.
Do you want to continue [Y/n]? y
```
Now that OpenVPN is installed, you're going to configure it. OpenVPN is SSL-based, and it relies on both server and client certificates to work. To generate these certificates, you need to configure a Certificate Authority (CA) on the machine. Luckily, OpenVPN ships with some wrapper scripts known as "easy-rsa" that help to bootstrap this process. You'll start by making a directory on the filesystem for the easy-rsa scripts to reside in and by copying the scripts from the template directory there:
```
root@test:~# mkdir /etc/openvpn/easy-rsa
root@test:~# cp -rpv
↪/usr/share/doc/openvpn/examples/easy-rsa/2.0/*
↪/etc/openvpn/easy-rsa/
```
Next, make a backup copy of the vars file:
```
root@test:/etc/openvpn/easy-rsa# cp vars vars.bak
```
Now, edit vars so it's got information pertinent to your installation. I'm going to specify only the lines that need to be edited, with sample data, below:
```
KEY_SIZE=4096
KEY_COUNTRY="US"
KEY_PROVINCE="CA"
KEY_CITY="Silicon Valley"
KEY_ORG="Linux Journal"
KEY_EMAIL="bill.childers@linuxjournal.com"
```
The next step is to source the vars file, so that the environment variables in the file are in your current environment:
```
root@test:/etc/openvpn/easy-rsa# source ./vars
NOTE: If you run ./clean-all, I will be doing a
↪rm -rf on /etc/openvpn/easy-rsa/keys
```
### Building the Certificate Authority
You're now going to run clean-all to ensure a clean working environment, and then you're going to build the CA. Note that I'm changing the changeme prompts to something that's appropriate for this installation:
```
root@test:/etc/openvpn/easy-rsa# ./clean-all
root@test:/etc/openvpn/easy-rsa# ./build-ca
Generating a 4096 bit RSA private key
...................................................++
...................................................++
writing new private key to 'ca.key'
-----
You are about to be asked to enter information that
will be incorporated into your certificate request.
What you are about to enter is what is called a
Distinguished Name or a DN.
There are quite a few fields but you can leave some
blank. For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:
State or Province Name (full name) [CA]:
Locality Name (eg, city) [Silicon Valley]:
Organization Name (eg, company) [Linux Journal]:
Organizational Unit Name (eg, section)
↪[changeme]:SecTeam
Common Name (eg, your name or your server's hostname)
↪[changeme]:test.linuxjournal.com
Name [changeme]:test.linuxjournal.com
Email Address [bill.childers@linuxjournal.com]:
```
### Building the Server Certificate
Once the CA is created, you need to build the OpenVPN server certificate:
```
root@test:/etc/openvpn/easy-rsa#
↪./build-key-server test.linuxjournal.com
Generating a 4096 bit RSA private key
...................................................++
writing new private key to 'test.linuxjournal.com.key'
-----
You are about to be asked to enter information that
will be incorporated into your certificate request.
What you are about to enter is what is called a
Distinguished Name or a DN.
There are quite a few fields but you can leave some
blank. For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:
State or Province Name (full name) [CA]:
Locality Name (eg, city) [Silicon Valley]:
Organization Name (eg, company) [Linux Journal]:
Organizational Unit Name (eg, section)
↪[changeme]:SecTeam
Common Name (eg, your name or your server's hostname)
↪[test.linuxjournal.com]:
Name [changeme]:test.linuxjournal.com
Email Address [bill.childers@linuxjournal.com]:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from
↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :PRINTABLE:'CA'
localityName :PRINTABLE:'Silicon Valley'
organizationName :PRINTABLE:'Linux Journal'
organizationalUnitName:PRINTABLE:'SecTeam'
commonName :PRINTABLE:'test.linuxjournal.com'
name :PRINTABLE:'test.linuxjournal.com'
emailAddress
↪:IA5STRING:'bill.childers@linuxjournal.com'
Certificate is to be certified until Sep 1
↪06:23:59 2025 GMT (3650 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
```
The next step may take a while—building the Diffie-Hellman key for the OpenVPN server. This takes several minutes on a conventional desktop-grade CPU, but on the ARM processor of the Raspberry Pi, it can take much, much longer. Have patience; as long as the dots in the terminal are proceeding, the system is building its Diffie-Hellman key (note that many dots are snipped in these examples):
```
root@test:/etc/openvpn/easy-rsa# ./build-dh
Generating DH parameters, 4096 bit long safe prime,
↪generator 2
This is going to take a long time
....................................................+
<snipped out many more dots>
```
### Building the Client Certificate
Now you're going to generate a client key for your client to use when logging in to the OpenVPN server. OpenVPN is typically configured for certificate-based auth, where the client presents a certificate that was issued by an approved Certificate Authority:
```
root@test:/etc/openvpn/easy-rsa# ./build-key
↪bills-computer
Generating a 4096 bit RSA private key
...................................................++
...................................................++
writing new private key to 'bills-computer.key'
-----
You are about to be asked to enter information that
will be incorporated into your certificate request.
What you are about to enter is what is called a
Distinguished Name or a DN. There are quite a few
fields but you can leave some blank.
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:
State or Province Name (full name) [CA]:
Locality Name (eg, city) [Silicon Valley]:
Organization Name (eg, company) [Linux Journal]:
Organizational Unit Name (eg, section)
↪[changeme]:SecTeam
Common Name (eg, your name or your server's hostname)
↪[bills-computer]:
Name [changeme]:bills-computer
Email Address [bill.childers@linuxjournal.com]:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from
↪/etc/openvpn/easy-rsa/openssl-1.0.0.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :PRINTABLE:'CA'
localityName :PRINTABLE:'Silicon Valley'
organizationName :PRINTABLE:'Linux Journal'
organizationalUnitName:PRINTABLE:'SecTeam'
commonName :PRINTABLE:'bills-computer'
name :PRINTABLE:'bills-computer'
emailAddress
↪:IA5STRING:'bill.childers@linuxjournal.com'
Certificate is to be certified until
↪Sep 1 07:35:07 2025 GMT (3650 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified,
↪commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
root@test:/etc/openvpn/easy-rsa#
```
Now you're going to generate an HMAC code as a shared key to increase the security of the system further:
```
root@test:~# openvpn --genkey --secret
↪/etc/openvpn/easy-rsa/keys/ta.key
```
### Configuration of the Server
Finally, you're going to get to the meat of configuring the OpenVPN server. You're going to create a new file, /etc/openvpn/server.conf, and you're going to stick to a default configuration for the most part. The main change you're going to make is to set up OpenVPN to use TCP rather than UDP. This is needed for the next major step to work—without OpenVPN using TCP for its network communication, you can't get things working on port 443. So, create a new file called /etc/openvpn/server.conf, and put the following configuration in it:
```
port 1194
proto tcp
dev tun
ca easy-rsa/keys/ca.crt
cert easy-rsa/keys/test.linuxjournal.com.crt ## or whatever
↪your hostname was
key easy-rsa/keys/test.linuxjournal.com.key ## Hostname key
↪- This file should be kept secret
management localhost 7505
dh easy-rsa/keys/dh4096.pem
tls-auth /etc/openvpn/easy-rsa/keys/ta.key 0 # HMAC key generated above
server 10.8.0.0 255.255.255.0 # The server will use this
↪subnet for clients connecting to it
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp" # Forces clients
↪to redirect all traffic through the VPN
push "dhcp-option DNS 192.168.1.1" # Tells the client to
↪use the DNS server at 192.168.1.1 for DNS -
↪replace with the IP address of the OpenVPN
↪machine and clients will use the BIND
↪server setup earlier
keepalive 30 240
comp-lzo # Enable compression
persist-key
persist-tun
status openvpn-status.log
verb 3
```
And last, you're going to enable IP forwarding on the server, configure OpenVPN to start on boot and start the OpenVPN service:
```
root@test:/etc/openvpn/easy-rsa/keys# echo
↪"net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
root@test:/etc/openvpn/easy-rsa/keys# sysctl -p
↪/etc/sysctl.conf
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.ipv4.ip_forward = 0
net.ipv4.ip_forward = 1
root@test:/etc/openvpn/easy-rsa/keys# update-rc.d
↪openvpn defaults
update-rc.d: using dependency based boot sequencing
root@test:/etc/openvpn/easy-rsa/keys#
↪/etc/init.d/openvpn start
[ ok ] Starting virtual private network daemon:.
```
### Setting Up OpenVPN Clients
Your client installation depends on the host OS of your client, but you'll need to copy the client certs and keys created above to your client, import those certificates, and create a configuration for that client. Each client and client OS does this slightly differently, and documenting each one is beyond the scope of this article, so you'll need to refer to the documentation for that client to get it running. Refer to the Resources section for OpenVPN clients for each major OS.
### Installing SSLH—the "Magic" Protocol Multiplexer
The really interesting piece of this solution is SSLH. SSLH is a protocol multiplexer: it listens on port 443 for traffic, analyzes whether the incoming packet is SSH, HTTPS or OpenVPN, and forwards that packet on to the proper service. This is what enables this solution to bypass most port blocks: you use the HTTPS port for all of this traffic, since HTTPS is rarely blocked.
To start, install SSLH with `apt-get`:
```
root@test:/etc/openvpn/easy-rsa/keys# apt-get
↪install sslh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
apache2 apache2-mpm-worker apache2-utils
↪apache2.2-bin apache2.2-common
libapr1 libaprutil1 libaprutil1-dbd-sqlite3
↪libaprutil1-ldap libconfig9
Suggested packages:
apache2-doc apache2-suexec apache2-suexec-custom
↪openbsd-inetd inet-superserver
The following NEW packages will be installed:
apache2 apache2-mpm-worker apache2-utils
↪apache2.2-bin apache2.2-common
libapr1 libaprutil1 libaprutil1-dbd-sqlite3
↪libaprutil1-ldap libconfig9 sslh
0 upgraded, 11 newly installed, 0 to remove
↪and 0 not upgraded.
Need to get 1,568 kB of archives.
After this operation, 5,822 kB of additional
↪disk space will be used.
Do you want to continue [Y/n]? y
```
After SSLH is installed, the package installer will ask you if you want to run it in inetd or standalone mode. Select standalone mode, because you want SSLH to run as its own process. If you don't have Apache installed, the Debian/Raspbian package of SSLH will pull it in automatically, although it's not strictly required. If you already have Apache running and configured, you'll want to make sure it only listens on localhost's interface and not all interfaces (otherwise, SSLH can't start because it can't bind to port 443). After installation, you'll receive an error that looks like this:
```
[....] Starting ssl/ssh multiplexer: sslhsslh disabled,
↪please adjust the configuration to your needs
[FAIL] and then set RUN to 'yes' in /etc/default/sslh
↪to enable it. ... failed!
failed!
```
This isn't an error, exactly—it's just SSLH telling you that it's not configured and can't start. Configuring SSLH is pretty simple. Its configuration is stored in `/etc/default/sslh`, and you just need to configure the `RUN` and `DAEMON_OPTS` variables. My SSLH configuration looks like this:
```
# Default options for sslh initscript
# sourced by /etc/init.d/sslh
# Disabled by default, to force yourself
# to read the configuration:
# - /usr/share/doc/sslh/README.Debian (quick start)
# - /usr/share/doc/sslh/README, at "Configuration" section
# - sslh(8) via "man sslh" for more configuration details.
# Once configuration ready, you *must* set RUN to yes here
# and try to start sslh (standalone mode only)
RUN=yes
# binary to use: forked (sslh) or single-thread
↪(sslh-select) version
DAEMON=/usr/sbin/sslh
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh
↪127.0.0.1:22 --ssl 127.0.0.1:443 --openvpn
↪127.0.0.1:1194 --pidfile /var/run/sslh/sslh.pid"
```
Save the file and start SSLH:
```
root@test:/etc/openvpn/easy-rsa/keys#
↪/etc/init.d/sslh start
[ ok ] Starting ssl/ssh multiplexer: sslh.
```
Now, you should be able to SSH to port 443 on your Raspberry Pi and have the connection forwarded via SSLH:
```
$ ssh -p 443 root@test.linuxjournal.com
root@test:~#
```
SSLH is now listening on port 443 and can direct traffic to SSH, Apache or OpenVPN based on the type of packet that hits it. You should be ready to go!
### Conclusion
Now you can fire up OpenVPN and set your OpenVPN client configuration to port 443, and SSLH will route it to the OpenVPN server on port 1194. But because you're talking to your server on port 443, your VPN traffic won't get blocked. Now you can land at a strange coffee shop, in a strange town, and know that your Internet will just work when you fire up your OpenVPN and point it at your Raspberry Pi. You'll also gain some encryption on your link, which will improve the privacy of your connection. Enjoy surfing the Net via your new landing point!
### Resources
Installing and Configuring OpenVPN: [https://wiki.debian.org/OpenVPN](https://wiki.debian.org/OpenVPN) and [http://cryptotap.com/articles/openvpn](http://cryptotap.com/articles/openvpn)
OpenVPN client downloads: [https://openvpn.net/index.php/open-source/downloads.html](https://openvpn.net/index.php/open-source/downloads.html)
OpenVPN Client for iOS: [https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8](https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8)
OpenVPN Client for Android: [https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en](https://play.google.com/store/apps/details?id=net.openvpn.openvpn&hl=en)
Tunnelblick for Mac OS X (OpenVPN client): [https://tunnelblick.net](https://tunnelblick.net)
SSLH—Protocol Multiplexer: [http://www.rutschle.net/tech/sslh.shtml](http://www.rutschle.net/tech/sslh.shtml) and [https://github.com/yrutschle/sslh](https://github.com/yrutschle/sslh)
----------
via: http://www.linuxjournal.com/content/securi-pi-using-raspberry-pi-secure-landing-point?page=0,0
Author: [Bill Childers][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxjournal.com/users/bill-childers

View File

@ -0,0 +1,631 @@
translate by zky001
* * *
# GCC-Inline-Assembly-HOWTO
v0.1, 01 March 2003.
* * *
_This HOWTO explains the use and usage of the inline assembly feature provided by GCC. There are only two prerequisites for reading this article, and that's obviously a basic knowledge of x86 assembly language and C._
* * *
## 1. Introduction.
## 1.1 Copyright and License.
Copyright (C)2003 Sandeep S.
This document is free; you can redistribute and/or modify this under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
## 1.2 Feedback and Corrections.
Kindly forward feedback and criticism to [Sandeep.S](mailto:busybox@sancharnet.in). I will be indebted to anybody who points out errors and inaccuracies in this document; I shall rectify them as soon as I am informed.
## 1.3 Acknowledgments.
I express my sincere appreciation to the GNU people for providing such a great feature. Thanks to Mr. Pramode C E for all the help he provided. Thanks to friends at the Govt Engineering College, Trichur for their moral support and cooperation, especially to Nisha Kurur and Sakeeb S. Thanks to my dear teachers at Govt Engineering College, Trichur for their cooperation.
Additionally, thanks to Phillip, Brennan Underwood and colin@nyx.net; Many things here are shamelessly stolen from their works.
* * *
## 2. Overview of the whole thing.
We are here to learn about GCC inline assembly. What does this "inline" stand for?
We can instruct the compiler to insert the code of a function into the code of its callers, at the point where the call is actually made. Such functions are inline functions. Sounds similar to a macro? Indeed there are similarities.
What is the benefit of inline functions?
This method of inlining reduces the function-call overhead. And if any of the actual argument values are constant, their known values may permit simplifications at compile time, so that not all of the inline function's code needs to be included. The effect on code size is less predictable; it depends on the particular case. To declare an inline function, we have to use the keyword `inline` in its declaration.
Now we are in a position to guess what inline assembly is: it's just some assembly routines written as inline functions. They are handy, speedy and very useful in system programming. Our main focus is to study the basic format and usage of (GCC) inline assembly functions. To declare inline assembly functions, we use the keyword `asm`.
Inline assembly is important primarily because of its ability to operate and make its output visible on C variables. Because of this capability, "asm" works as an interface between the assembly instructions and the "C" program that contains it.
* * *
## 3. GCC Assembler Syntax.
GCC, the GNU C Compiler for Linux, uses **AT&T**/**UNIX** assembly syntax. Here we'll be using AT&T syntax for assembly coding. Don't worry if you are not familiar with AT&T syntax; I will teach you. It is quite different from Intel syntax, and I shall give the major differences.
1. Source-Destination Ordering.
The direction of the operands in AT&T syntax is opposite to that of Intel. In Intel syntax the first operand is the destination and the second operand is the source, whereas in AT&T syntax the first operand is the source and the second operand is the destination. That is,
"Op-code dst src" in Intel syntax changes to
"Op-code src dst" in AT&T syntax.
2. Register Naming.
Register names are prefixed by % ie, if eax is to be used, write %eax.
3. Immediate Operand.
AT&T immediate operands are preceded by $. For static "C" variables, also prefix a $. In Intel syntax, hexadecimal constants are suffixed with an h; here, instead, we prefix the constant with 0x. So, for hexadecimals, we first see a $, then 0x and finally the constant.
4. Operand Size.
In AT&T syntax the size of memory operands is determined from the last character of the op-code name. Op-code suffixes of b, w, and l specify byte(8-bit), word(16-bit), and long(32-bit) memory references. Intel syntax accomplishes this by prefixing memory operands (not the op-codes) with byte ptr, word ptr, and dword ptr.
Thus, Intel "mov al, byte ptr foo" is "movb foo, %al" in AT&T syntax.
5. Memory Operands.
In Intel syntax the base register is enclosed in [ and ], whereas in AT&T it changes to ( and ). Additionally, in Intel syntax an indirect memory reference is like
section:[base + index*scale + disp], which changes to
section:disp(base, index, scale) in AT&T.
One point to bear in mind is that, when a constant is used for disp/scale, $ shouldn't be prefixed.
Now we have seen some of the major differences between Intel syntax and AT&T syntax. I've written only a few of them. For complete information, refer to the GNU Assembler documentation. Now we'll look at some examples for better understanding.
```
+------------------------------+------------------------------------+
|         Intel Code           |            AT&T Code               |
+------------------------------+------------------------------------+
| mov eax,1                    | movl $1,%eax                       |
| mov ebx,0ffh                 | movl $0xff,%ebx                    |
| int 80h                      | int $0x80                          |
| mov ebx, eax                 | movl %eax, %ebx                    |
| mov eax,[ecx]                | movl (%ecx),%eax                   |
| mov eax,[ebx+3]              | movl 3(%ebx),%eax                  |
| mov eax,[ebx+20h]            | movl 0x20(%ebx),%eax               |
| add eax,[ebx+ecx*2h]         | addl (%ebx,%ecx,0x2),%eax          |
| lea eax,[ebx+ecx]            | leal (%ebx,%ecx),%eax              |
| sub eax,[ebx+ecx*4h-20h]     | subl -0x20(%ebx,%ecx,0x4),%eax     |
+------------------------------+------------------------------------+
```
* * *
## 4. Basic Inline.
The format of basic inline assembly is very straightforward. Its basic form is
`asm("assembly code");`
Example.
```
asm("movl %ecx, %eax"); /* moves the contents of ecx to eax */
__asm__("movb %bh, (%eax)"); /* moves the byte from bh to the memory pointed by eax */
```
You might have noticed that here I've used `asm` and `__asm__`. Both are valid. We can use `__asm__` if the keyword `asm` conflicts with something in our program. If we have more than one instruction, we write one per line in double quotes, and also suffix a \n and \t to each instruction. This is because gcc sends each instruction as a string to **as** (GAS), and by using the newline/tab we send correctly formatted lines to the assembler.
Example.
```
__asm__ ("movl %eax, %ebx\n\t"
         "movl $56, %esi\n\t"
         "movl %ecx, label(%edx,%ebx,4)\n\t"
         "movb %ah, (%ebx)");
```
If in our code we touch (i.e., change the contents of) some registers and return from asm without fixing those changes, something bad is going to happen. This is because GCC has no idea about the changes in the register contents, and this leads us to trouble, especially when the compiler makes some optimizations. It will suppose that some register contains the value of some variable that we might have changed without informing GCC, and it continues like nothing happened. What we can do is either use instructions that have no side effects, or fix things when we quit, or wait for something to crash. This is where we want some extended functionality. Extended asm provides us with that functionality.
* * *
## 5. Extended Asm.
In basic inline assembly, we had only instructions. In extended assembly, we can also specify the operands. It allows us to specify the input registers, output registers and a list of clobbered registers. It is not mandatory to specify the registers to use; we can leave that headache to GCC, and that probably fits into GCC's optimization scheme better. Anyway, the basic format is:
```
asm ( assembler template
    : output operands                /* optional */
    : input operands                 /* optional */
    : list of clobbered registers    /* optional */
    );
```
The assembler template consists of assembly instructions. Each operand is described by an operand-constraint string followed by the C expression in parentheses. A colon separates the assembler template from the first output operand and another separates the last output operand from the first input, if any. Commas separate the operands within each group. The total number of operands is limited to ten or to the maximum number of operands in any instruction pattern in the machine description, whichever is greater.
If there are no output operands but there are input operands, you must place two consecutive colons surrounding the place where the output operands would go.
Example:
```
asm ("cld\n\t"
     "rep\n\t"
     "stosl"
     : /* no output registers */
     : "c" (count), "a" (fill_value), "D" (dest)
     : "%ecx", "%edi"
     );
```
Now, what does this code do? The above inline fills the `fill_value` `count` times to the location pointed to by the register `edi`. It also tells gcc that the contents of registers `ecx` and `edi` are no longer valid. Let us see one more example to make things clearer.
```
int a=10, b;
asm ("movl %1, %%eax;"
     "movl %%eax, %0;"
     :"=r"(b)      /* output */
     :"r"(a)       /* input */
     :"%eax"       /* clobbered register */
     );
```
What we did here is make the value of b equal to that of a using assembly instructions. Some points of interest are:
* "b" is the output operand, referred to by %0 and "a" is the input operand, referred to by %1.
* "r" is a constraint on the operands. Well see constraints in detail later. For the time being, "r" says to GCC to use any register for storing the operands. output operand constraint should have a constraint modifier "=". And this modifier says that it is the output operand and is write-only.
* There are two %s prefixed to the register name. This helps GCC to distinguish between the operands and registers. operands have a single % as prefix.
* The clobbered register %eax after the third colon tells GCC that the value of %eax is to be modified inside "asm", so GCC wont use this register to store any other value.
When the execution of "asm" is complete, "b" will reflect the updated value, as it is specified as an output operand. In other words, the change made to "b" inside "asm" is supposed to be reflected outside the "asm".
Now we may look at each field in detail.
## 5.1 Assembler Template.
The assembler template contains the set of assembly instructions that get inserted inside the C program. The format is: either each instruction should be enclosed within double quotes, or the entire group of instructions should be within double quotes. Each instruction should also end with a delimiter. The valid delimiters are newline (\n) and semicolon (;). \n may be followed by a tab (\t). We know the reason for the newline/tab, right? Operands corresponding to the C expressions are represented by %0, %1 ... etc.
## 5.2 Operands.
C expressions serve as operands for the assembly instructions inside "asm". Each operand is written as an operand constraint in double quotes, followed by the C expression in parentheses; for output operands, there'll also be a constraint modifier within the quotes. That is,
"constraint" (C expression) is the general form; for output operands an additional modifier will be there. Constraints are primarily used to decide the addressing modes for operands. They are also used in specifying the registers to be used.
If we use more than one operand, they are separated by commas.
In the assembler template, each operand is referenced by numbers. Numbering is done as follows. If there are a total of n operands (both input and output inclusive), then the first output operand is numbered 0, continuing in increasing order, and the last input operand is numbered n-1. The maximum number of operands is as we saw in the previous section.
Output operand expressions must be lvalues. The input operands are not restricted like this; they may be expressions. The extended asm feature is most often used for machine instructions the compiler itself does not know exist ;-). If the output expression cannot be directly addressed (for example, it is a bit-field), our constraint must allow a register. In that case, GCC will use the register as the output of the asm and then store that register's contents into the output.
As stated above, ordinary output operands must be write-only; GCC will assume that the values in these operands before the instruction are dead and need not be generated. Extended asm also supports input-output or read-write operands.
So now let us concentrate on some examples. We want to multiply a number by 5. For that we use the instruction `lea`.
```
asm ("leal (%1,%1,4), %0"
     : "=r" (five_times_x)
     : "r" (x)
     );
```
Here our input is in x. We didn't specify the register to be used. GCC will choose some register for input, one for output, and do what we desired. If we want the input and output to reside in the same register, we can instruct GCC to do so. Here we use those types of read-write operands. By specifying proper constraints, here we do it.
```
asm ("leal (%0,%0,4), %0"
     : "=r" (five_times_x)
     : "0" (x)
     );
```
Now the input and output operands are in the same register. But we don't know which register. Now if we want to specify that also, there is a way.
```
asm ("leal (%%ecx,%%ecx,4), %%ecx"
     : "=c" (x)
     : "c" (x)
     );
```
In all the three examples above, we didn't put any register in the clobber list. Why? In the first two examples, GCC decides the registers and it knows what changes happen. In the last one, we don't have to put `ecx` in the clobber list; gcc knows it goes into x. Therefore, since it can know the value of `ecx`, it isn't considered clobbered.
## 5.3 Clobber List.
Some instructions clobber some hardware registers. We have to list those registers in the clobber list, i.e., the field after the third **:** in the asm function. This is to inform gcc that we will use and modify them ourselves, so gcc will not assume that the values it loads into these registers will be valid. We shouldn't list the input and output registers in this list, because gcc knows that "asm" uses them (they are specified explicitly as constraints). If the instructions use any other registers, implicitly or explicitly (and the registers are not present either in the input or in the output constraint list), then those registers have to be specified in the clobber list.
If our instruction can alter the condition code register, we have to add "cc" to the list of clobbered registers.
If our instruction modifies memory in an unpredictable fashion, add "memory" to the list of clobbered registers. This will cause GCC to not keep memory values cached in registers across the assembler instruction. We also have to add the **volatile** keyword if the memory affected is not listed in the inputs or outputs of the asm.
We can read and write the clobbered registers as many times as we like. Consider the example of multiple instructions in a template; it assumes the subroutine _foo accepts arguments in registers `eax` and `ecx`.
```
asm ("movl %0, %%eax;"
     "movl %1, %%ecx;"
     "call _foo"
     : /* no outputs */
     : "g" (from), "g" (to)
     : "eax", "ecx"
     );
```
## 5.4 Volatile ...?
If you are familiar with kernel sources or some beautiful code like that, you must have seen many asm statements with `volatile` or `__volatile__` following the `asm` or `__asm__` keyword. I mentioned earlier the keywords `asm` and `__asm__`. So what is this `volatile`?
If our assembly statement must execute where we put it (i.e., must not be moved out of a loop as an optimization), put the keyword `volatile` after asm and before the ()'s. So to keep it from being moved or deleted, we declare it as
`asm volatile ( ... : ... : ... : ...);`
Use `__volatile__` when we have to be very much careful.
If our assembly is just for doing some calculations and doesn't have any side effects, it's better not to use the keyword `volatile`. Avoiding it helps gcc in optimizing the code and making it more beautiful.
In the section `Some Useful Recipes`, I have provided many examples for inline asm functions. There we can see the clobber-list in detail.
* * *
## 6. More about constraints.
By this time, you might have understood that constraints have got a lot to do with inline assembly. But we've said little about constraints. Constraints can say whether an operand may be in a register, and which kinds of register; whether the operand can be a memory reference, and which kinds of address; whether the operand may be an immediate constant, and which possible values (i.e., range of values) it may have; etc.
## 6.1 Commonly used constraints.
There are a number of constraints, of which only a few are used frequently. We'll have a look at those constraints.
1. **Register operand constraint(r)**
When operands are specified using this constraint, they get stored in General Purpose Registers(GPR). Take the following example:
`asm ("movl %%eax, %0\n" :"=r"(myval));`
Here the variable myval is kept in a register, the value in register `eax` is copied onto that register, and the value of `myval` is updated into the memory from this register. When the "r" constraint is specified, gcc may keep the variable in any of the available GPRs. To specify the register, you must directly specify the register names by using specific register constraints. They are:
```
+---+--------------------+
| r |    Register(s)     |
+---+--------------------+
| a |   %eax, %ax, %al   |
| b |   %ebx, %bx, %bl   |
| c |   %ecx, %cx, %cl   |
| d |   %edx, %dx, %dl   |
| S |   %esi, %si        |
| D |   %edi, %di        |
+---+--------------------+
```
2. **Memory operand constraint(m)**
When the operands are in the memory, any operations performed on them will occur directly in the memory location, as opposed to register constraints, which first store the value in a register to be modified and then write it back to the memory location. But register constraints are usually used only when they are absolutely necessary for an instruction or they significantly speed up the process. Memory constraints can be used most efficiently in cases where a C variable needs to be updated inside "asm" and you really don't want to use a register to hold its value. For example, the value of idtr is stored in the memory location loc:
`asm("sidt %0\n" : :"m"(loc));`
3. **Matching(Digit) constraints**
In some cases, a single variable may serve as both the input and the output operand. Such cases may be specified in "asm" by using matching constraints.
`asm ("incl %0" :"=a"(var):"0"(var));`
We saw similar examples in the operands subsection also. In this example for matching constraints, the register %eax is used as both the input and the output variable. The input var is read into %eax, and the updated %eax is stored back in var after the increment. "0" here specifies the same constraint as the 0th output variable. That is, it specifies that the output instance of var should be stored in %eax only. This constraint can be used:
* In cases where input is read from a variable or the variable is modified and modification is written back to the same variable.
* In cases where separate instances of input and output operands are not necessary.
The most important effect of using matching constraints is that they lead to the efficient use of available registers.
Some other constraints used are:
1. "m" : A memory operand is allowed, with any kind of address that the machine supports in general.
2. "o" : A memory operand is allowed, but only if the address is offsettable. ie, adding a small offset to the address gives a valid address.
3. "V" : A memory operand that is not offsettable. In other words, anything that would fit the `m constraint but not the `oconstraint.
4. "i" : An immediate integer operand (one with constant value) is allowed. This includes symbolic constants whose values will be known only at assembly time.
5. "n" : An immediate integer operand with a known numeric value is allowed. Many systems cannot support assembly-time constants for operands less than a word wide. Constraints for these operands should use n rather than i.
6. "g" : Any register, memory or immediate integer operand is allowed, except for registers that are not general registers.
The following constraints are x86-specific.
1. "r" : Register operand constraint, look table given above.
2. "q" : Registers a, b, c or d.
3. "I" : Constant in range 0 to 31 (for 32-bit shifts).
4. "J" : Constant in range 0 to 63 (for 64-bit shifts).
5. "K" : 0xff.
6. "L" : 0xffff.
7. "M" : 0, 1, 2, or 3 (shifts for lea instruction).
8. "N" : Constant in range 0 to 255 (for out instruction).
9. "f" : Floating point register
10. "t" : First (top of stack) floating point register
11. "u" : Second floating point register
12. "A" : Specifies the `a or `d registers. This is primarily useful for 64-bit integer values intended to be returned with the `d register holding the most significant bits and the `a register holding the least significant bits.
## 6.2 Constraint Modifiers.
While using constraints, for more precise control over the effects of constraints, GCC provides us with constraint modifiers. The most commonly used constraint modifiers are
1. "=" : Means that this operand is write-only for this instruction; the previous value is discarded and replaced by output data.
2. "&" : Means that this operand is an earlyclobber operand, which is modified before the instruction is finished using the input operands. Therefore, this operand may not lie in a register that is used as an input operand or as part of any memory address. An input operand can be tied to an earlyclobber operand if its only use as an input occurs before the early result is written.
The list and explanation of constraints is by no means complete. Examples can give a better understanding of the use and usage of inline asm. In the next section we'll see some examples; there we'll find more about clobber lists and constraints.
* * *
## 7. Some Useful Recipes.
Now that we have covered the basic theory about GCC inline assembly, we shall concentrate on some simple examples. It is always handy to write inline asm functions as macros. We can see many asm functions in the kernel code (/usr/src/linux/include/asm/*.h).
1. First we start with a simple example. We'll write a program to add two numbers.
```
#include <stdio.h>

int main(void)
{
    int foo = 10, bar = 15;
    __asm__ __volatile__("addl %%ebx,%%eax"
                         :"=a"(foo)
                         :"a"(foo), "b"(bar)
                         );
    printf("foo+bar=%d\n", foo);
    return 0;
}
```
Here we instruct GCC to store foo in %eax and bar in %ebx, and we also want the result in %eax. The '=' sign shows that it is an output register. Now we can add an integer to a variable in some other way.
```
__asm__ __volatile__(
    "   lock       ;\n"
    "   addl %1,%0 ;\n"
    : "=m" (my_var)
    : "ir" (my_int), "m" (my_var)
    : /* no clobber-list */
    );
```
This is an atomic addition. We can remove the instruction lock to remove the atomicity. In the output field, "=m" says that my_var is an output and it is in memory. Similarly, "ir" says that, my_int is an integer and should reside in some register (recall the table we saw above). No registers are in the clobber list.
2. Now we'll perform some action on some registers/variables and compare the value.
```
__asm__ __volatile__( "decl %0; sete %1"
                      : "=m" (my_var), "=q" (cond)
                      : "m" (my_var)
                      : "memory"
                      );
```
Here, the value of my_var is decremented by one, and if the resulting value is `0`, then the variable cond is set. We can add atomicity by adding the instruction "lock;\n\t" as the first instruction in the assembler template.
In a similar way we can use "incl %0" instead of "decl %0", so as to increment my_var.
Points to note here are that (i) my_var is a variable residing in memory, (ii) cond is in any of the registers eax, ebx, ecx and edx (the constraint "=q" guarantees it), and (iii) memory is in the clobber list, i.e., the code is changing the contents of memory.
3. How do we set/clear a bit in a register? As the next recipe, we are going to see it.
```
__asm__ __volatile__( "btsl %1,%0"
                      : "=m" (ADDR)
                      : "Ir" (pos)
                      : "cc"
                      );
```
Here, the bit at position pos of the variable at ADDR (a memory variable) is set to `1`. We can use btrl instead of btsl to clear the bit. The constraint "Ir" of pos says that pos may be in a register or be a constant whose value ranges from 0-31 (an x86-dependent constraint), i.e., we can set/clear any bit from the 0th to the 31st of the variable at ADDR. As the condition codes will be changed, we add "cc" to the clobber list.
4. Now we look at a more complicated but useful function: string copy.
```
static inline char * strcpy(char * dest,const char *src)
{
    int d0, d1, d2;
    __asm__ __volatile__( "1:\tlodsb\n\t"
                          "stosb\n\t"
                          "testb %%al,%%al\n\t"
                          "jne 1b"
                          : "=&S" (d0), "=&D" (d1), "=&a" (d2)
                          : "0" (src),"1" (dest)
                          : "memory");
    return dest;
}
```
The source address is stored in esi and the destination in edi; then the copy starts, and when we reach **0**, copying is complete. The constraints "&S", "&D", "&a" say that the registers esi, edi and eax are earlyclobber registers, i.e., their contents will change before the completion of the function. Here also it's clear why memory is in the clobber list.
We can see a similar function which moves a block of double words. Notice that the function is declared as a macro.
```
#define mov_blk(src, dest, numwords) \
    __asm__ __volatile__ (                          \
        "cld\n\t"                                   \
        "rep\n\t"                                   \
        "movsl"                                     \
        :                                           \
        : "S" (src), "D" (dest), "c" (numwords)     \
        : "%ecx", "%esi", "%edi"                    \
        )
```
Here we have no outputs, so the changes that happen to the contents of the registers ecx, esi and edi are side effects of the block movement. So we have to add them to the clobber list.
5. In Linux, system calls are implemented using GCC inline assembly. Let us look at how a system call is implemented. All the system calls are written as macros (linux/unistd.h). For example, a system call with three arguments is defined as a macro as shown below.
```
#define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \
type name(type1 arg1,type2 arg2,type3 arg3) \
{ \
    long __res; \
    __asm__ volatile ( "int $0x80" \
                       : "=a" (__res) \
                       : "0" (__NR_##name),"b" ((long)(arg1)),"c" ((long)(arg2)), \
                         "d" ((long)(arg3))); \
    __syscall_return(type,__res); \
}
```
Whenever a system call with three arguments is made, the macro shown above is used to make the call. The syscall number is placed in eax, then the parameters in ebx, ecx and edx. And finally, "int $0x80" is the instruction which makes the system call work. The return value can be collected from eax.
Every system call is implemented in a similar way. exit is a single-parameter syscall; let's see what its code looks like. It is as shown below.
```
{
    asm("movl $1, %eax\n\t"    /* SYS_exit is 1 */
        "xorl %ebx, %ebx\n\t"  /* Argument is in ebx, it is 0 */
        "int $0x80"            /* Enter kernel mode */
        );
}
```
The number of exit is "1" and here its parameter is 0. So we arrange for eax to contain 1 and ebx to contain 0, and by `int $0x80`, `exit(0)` is executed. This is how exit works.
* * *
## 8. Concluding Remarks.
This document has gone through the basics of GCC inline assembly. Once you have understood the basic concept, it is not difficult to take steps on your own. We saw some examples which are helpful in understanding the frequently used features of GCC inline assembly.
GCC inlining is a vast subject, and this article is by no means complete. More details about the syntax we discussed are available in the official documentation for the GNU Assembler. Similarly, for a complete list of the constraints, refer to the official documentation of GCC.
And of course, the Linux kernel uses GCC inline assembly on a large scale, so we can find many examples of various kinds in the kernel sources. They can help us a lot.
If you have found any glaring typos, or outdated info in this document, please let us know.
* * *
## 9. References.
1. [Brennans Guide to Inline Assembly](http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html)
2. [Using Assembly Language in Linux](http://linuxassembly.org/articles/linasm.html)
3. [Using as, The GNU Assembler](http://www.gnu.org/manual/gas-2.9.1/html_mono/as.html)
4. [Using and Porting the GNU Compiler Collection (GCC)](http://gcc.gnu.org/onlinedocs/gcc_toc.html)
5. [Linux Kernel Source](http://ftp.kernel.org/)
* * *
via: http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html
Author: [Sandeep.S](mailto:busybox@sancharnet.in) Translator: [zky001](https://github.com/zky001) Proofreader: []()
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)

View File

@ -0,0 +1,86 @@
Turn Tor SOCKS into HTTP
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/12/tor-593x445.jpg)
To use the Tor service you can use different tools like the Tor browser, Foxyproxy and other things. Some download managers such as Wget or Aria2 can't use the Tor SOCKS proxy directly to download anonymously, so we need some tools to turn the Tor SOCKS proxy into an HTTP proxy and then download through that.
**Note**: This tutorial is for Debian-based distributions; on other distributions things may be different. So if your distro is Debian-based and you have configured Tor correctly, go ahead!
**Polipo**: This service uses port 8123 and IP 127.0.0.1. Use the following command to install Polipo on your computer:
sudo apt install polipo
Now use this command to open the Polipo config file:
sudo nano /etc/polipo/config
Add the following lines to the end of the file:
proxyAddress = "::0"
allowedClients = 192.168.1.0/24
socksParentProxy = "localhost:9050"
socksProxyType = socks5
Restart the Polipo service with this command:
sudo service polipo restart
Now Polipo is ready! Do whatever you like in the anonymous world! As an example of how to use it:
pdmt -l "link" -i 127.0.0.1 -p 8123
With the command above, PDMT (Persian Download Manager Terminal) will download your file anonymously.
**Proxychains**: With this tool you can also point Tor (or a Lantern proxy) at your programs, but its usage is a little different from Polipo and Privoxy because you don't need to use any port! To install it, use the following command:
sudo apt install proxychains
Open the config file with this command:
sudo nano /etc/proxychains.conf
Now add the following line to the end of the file; it specifies the Tor IP and port:
socks5 127.0.0.1 9050
If you put the word "proxychains" before a command in the terminal and run it, the command will run through the Tor proxy:
proxychains firefox
proxychains aria2c
proxychains wget
**Privoxy**: Privoxy uses port 8118 and it's easy to run. First install the privoxy package:
sudo apt install privoxy
We should change the config file now:
sudo nano /etc/privoxy/config
Add the following lines to the end of the file:
forward-socks5 / 127.0.0.1:9050 .
forward-socks4a / 127.0.0.1:9050 .
forward-socks5t / 127.0.0.1:9050 .
forward 192.168.*.*/ .
forward 10.*.*.*/ .
forward 127.*.*.*/ .
forward localhost/ .
Restart the service:
sudo service privoxy restart
The service is ready! The port is 8118 and the IP is 127.0.0.1. Use it and enjoy!
--------------------------------------------------------------------------------
via: http://www.unixmen.com/turn-tor-socks-http/
Author: [Hossein heydari][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.unixmen.com/author/hossein/

View File

@ -1,16 +1,17 @@
bazz2
Learn with Linux: Learning Music
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-featured.png)
This article is part of the [Learn with Linux][1] series:
All articles in the [Learn with Linux][1] series:
- [Learn with Linux: Learning to Type][2]
- [Learn with Linux: Physics Simulation][3]
- [Learn with Linux: Learning Music][4]
- [Learn with Linux: Two Geography Apps][5]
- [Learn with Linux: Master Your Math with These Linux Apps][6]
- [Learn with Linux: Learning to Type][2]
- [Learn with Linux: Physics Simulation][3]
- [Learn with Linux: Learning Music][4]
- [Learn with Linux: Two Geography Apps][5]
- [Learn with Linux: Master Your Math with These Linux Apps][6]
Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
引言Linux 提供大量的教学软件和工具面向各个年级段以及年龄段提供大量学科的练习实践其中大多数是可以与用户进行交互的。本“Linux 教学”系列就来介绍一些教学软件。
Learning music is a great pastime. Training your ears to identify scales and chords and mastering an instrument or your own voice requires lots of practice and could become difficult. Music theory is extensive. There is much to memorize, and to turn it into a “skill” you will need diligence. Linux offers exceptional software to help you along your musical journey. They will not help you become a professional musician instantly but could ease the process of learning, being a great aide and reference point.
@ -152,4 +153,4 @@ via: https://www.maketecheasier.com/linux-learning-music/
[10]:http://sourceforge.net/projects/tete/files/latest/download
[11]:http://sourceforge.net/projects/jalmus/files/Jalmus-2.3/
[12]:http://tuxguitar.herac.com.ar/
[13]:http://www.linuxlinks.com/article/20090517041840856/PianoBooster.html
[13]:http://www.linuxlinks.com/article/20090517041840856/PianoBooster.html

View File

@ -1,121 +0,0 @@
Learn with Linux: Learning to Type
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-featured.png)
This article is part of the [Learn with Linux][1] series:
- [Learn with Linux: Learning to Type][2]
- [Learn with Linux: Physics Simulation][3]
- [Learn with Linux: Learning Music][4]
- [Learn with Linux: Two Geography Apps][5]
- [Learn with Linux: Master Your Math with These Linux Apps][6]
Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
Typing is taken for granted by many people; today being keyboard-savvy often comes as second nature. Yet how many of us still type with two fingers, even if ever so fast? Typing was once taught in schools, but slowly the art of ten-finger typing is giving way to two thumbs.
The following two applications can help you master the keyboard so that your next thought does not get lost while your fingers catch up. They were chosen for their simplicity and ease of use. While there are some more flashy or better looking typing apps out there, the following two will get the basics covered and offer the easiest way to start out.
### TuxType (or TuxTyping) ###
TuxType is for children. Young students can learn how to type with ten fingers with simple lessons and practice their newly-acquired skills in fun games.
Debian and derivatives (therefore all Ubuntu derivatives) should have TuxType in their standard repositories. To install simply type
sudo apt-get install tuxtype
The application starts with a simple menu screen featuring Tux and some really bad MIDI music. (Fortunately, the sound can be turned off easily with the icon in the lower left corner.)
![learntotype-tuxtyping-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-main.jpg)
The top two choices, “Fish Cascade” and “Comet Zap,” represent typing games, but to start learning you need to head over to the lessons.
There are forty simple built-in lessons to choose from. Each one of these will take a letter from the keyboard and make the student practice while giving visual hints, such as which finger to use.
![learntotype-tuxtyping-exd1](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd1.jpg)
![learntotype-tuxtyping-exd2](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd2.jpg)
For more advanced practice, phrase typing is also available, although for some reason this is hidden under the options menu.
![learntotype-tuxtyping-phrase](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-phrase.jpg)
The games are good for speed and accuracy as the player helps Tux catch falling fish
![learntotype-tuxtyping-fish](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-fish.jpg)
or zap incoming asteroids by typing the words written over them.
![learntotype-tuxtyping-zap](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-zap.jpg)
Besides being a fun way to practice, these games teach spelling, speed, and eye-to-hand coordination, as you must type while also watching the screen, building a foundation for touch typing, if taken seriously.
### GNU typist (gtype) ###
For adults and more experienced typists, there is GNU Typist, a console-based application developed by the GNU project.
GNU Typist will also be carried by most Debian derivatives main repos. Installing it is as easy as typing
sudo apt-get install gtype
You will probably not find it in the Applications menu; instead you should start it from a terminal window.
gtype
The main menu is simple, no-nonsense and frill-free, yet it is evident how much the software has to offer. Typing lessons of all levels are immediately accessible.
![learntotype-gtype-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-main.png)
The lessons are straightforward and detailed.
![learntotype-gtype-lesson](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-lesson.png)
The interactive practice sessions offer little more than highlighting your mistakes. Instead of flashy visuals, you have the chance to focus on practising. At the end of each lesson you get some simple statistics on how you've been doing. If you make too many mistakes, you cannot proceed until you pass the level.
![learntotype-gtype-mistake](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-mistake.png)
While the basic lessons only require you to repeat some characters, more advanced drills will have the practitioner type either whole sentences,
![learntotype-gtype-warmup](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmup.png)
where of course the three percent error margin means you are allowed even fewer mistakes,
![learntotype-gtype-warmupfail](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmupfail.png)
or some drills aiming to achieve certain goals, as in the “Balanced keyboard drill.”
![learntotype-gtype-balanceddrill](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-balanceddrill.png)
Simple speed drills have you type quotes,
![learntotype-gtype-speed-simple](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-simple.png)
while more advanced ones will make you write longer texts taken from classics.
![learntotype-gtype-speed-advanced](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-advanced.png)
If you'd prefer a different language, more lessons can also be loaded as command-line arguments.
![learntotype-gtype-more-lessons](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-more-lessons.png)
### Conclusion ###
If you care to hone your typing skills, Linux has great software to offer. The two basic, yet feature-rich, applications discussed above will cater to most aspiring typists' needs. If you use or know of another great typing application, please don't hesitate to let us know below in the comments.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/learn-to-type-in-linux/
Author: [Attila Orosz][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.maketecheasier.com/author/attilaorosz/
[1]:https://www.maketecheasier.com/series/learn-with-linux/
[2]:https://www.maketecheasier.com/learn-to-type-in-linux/
[3]:https://www.maketecheasier.com/linux-physics-simulation/
[4]:https://www.maketecheasier.com/linux-learning-music/
[5]:https://www.maketecheasier.com/linux-geography-apps/
[6]:https://www.maketecheasier.com/learn-linux-maths/

View File

@ -1,108 +0,0 @@
[bazz222222]
Learn with Linux: Physics Simulation
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/07/physics-fetured.jpg)
All articles in the [Learn with Linux][1] series:
- [Learn with Linux: Learning to Type][2]
- [Learn with Linux: Physics Simulation][3]
- [Learn with Linux: Learning Music][4]
- [Learn with Linux: Two Geography Apps][5]
- [Learn with Linux: Master Your Math with These Linux Apps][6]
Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
Physics is an interesting subject, and arguably the most enjoyable part of any physics class/lecture is the demonstrations. It is really nice to see physics in action, yet the experiments need not be restricted to the classroom. While Linux offers many great tools for scientists to support or conduct experiments, this article will concern a few that make learning physics easier or more fun.
### 1. Step ###
[Step][7] is an interactive physics simulator, part of [KDEEdu, the KDE Education Project][8]. Nobody could better describe what Step does than the people who made it. According to the project webpage, “[Step] works like this: you place some bodies on the scene, add some forces such as gravity or springs, then click “Simulate” and Step shows you how your scene will evolve according to the laws of physics. You can change every property of bodies/forces in your experiment (even during simulation) and see how this will change the outcome of the experiment. With Step, you can not only learn but feel how physics works!”
While of course it requires Qt and loads of KDE-specific dependencies to work, projects like this (and KDEEdu itself) are part of the reason why KDE is such an awesome environment (if you don't mind running a heavier desktop, of course).
Step is in the Debian repositories; to install it on derivatives, simply type
sudo apt-get install step
into a terminal. On a KDE system it should have minimal dependencies and install in seconds.
Step has a simple interface, and it lets you jump right into simulations.
![physics-step-main](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-main.png)
You will find all available objects on the left-hand side. You can have different particles, gas, shaped objects, springs, and different forces in action. (1) If you select an object, a short description of it will appear on the right-hand side (2). On the right you will also see an overview of the “world” you have created (the objects it contains) (3), the properties of the currently selected object (4), and the steps you have taken so far (5).
![physics-step-parts](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-parts.png)
Once you have placed all you wanted on the canvas, just press “Simulate,” and watch the events unfold as the objects interact with each other.
![physics-step-simulate1](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate1.png)
![physics-step-simulate2](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate2.png)
![physics-step-simulate3](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate3.png)
To get to know Step better you only need to press F1. The KDE Help Center offers a great and detailed Step handbook.
### 2. Lightspeed ###
Lightspeed is a simple GTK+ and OpenGL based simulator that is meant to demonstrate how one might observe a fast-moving object. Lightspeed simulates these effects based on Einstein's special relativity. According to [their SourceForge page][9], “When an object accelerates to more than a few million meters per second, it begins to appear warped and discolored in strange and unusual ways, and as it approaches the speed of light (299,792,458 m/s) the effects become more and more bizarre. In addition, the manner in which the object is distorted varies drastically with the viewpoint from which it is observed.”
These effects which come into play at relative velocities are:
- **The Lorentz contraction** causes the object to appear shorter
- **The Doppler red/blue shift** alters the hues of color observed
- **The headlight effect** brightens or darkens the object
- **Optical aberration** deforms the object in unusual ways
Lightspeed is in the Debian repositories; to install it, simply type:
sudo apt-get install lightspeed
The user interface is very simple. You get a shape (more can be downloaded from SourceForge) which moves along the x-axis (the animation can be started by pressing "A" or by selecting it from the Object menu).
![physics-lightspeed](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed.png)
You control the speed of its movement with the right-hand side slider and watch how it deforms.
![physics-lightspeed-deform](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-deform.png)
Some simple controls will allow you to add more visual elements
![physics-lightspeed-visual](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-visual.png)
The viewing angles can be adjusted by pressing the left, middle or right button and dragging the mouse, or from the Camera menu, which also offers some other adjustments like background colour or graphics mode.
### Notable mention: Physion ###
Physion looks like an interesting project and a great looking software to simulate physics in a much more colorful and fun way than the above examples would allow. Unfortunately, at the time of writing, the [official website][10] was experiencing problems, and the download page was unavailable.
Judging from their YouTube videos, Physion must be worth installing once a download link becomes available. Until then, we can just enjoy this video demo.
YouTube video
<iframe frameborder="0" src="//www.youtube.com/embed/P32UHa-3BfU?autoplay=1&amp;autohide=2&amp;border=0&amp;wmode=opaque&amp;enablejsapi=1&amp;controls=0&amp;showinfo=0" id="youtube-iframe"></iframe>
Do you have another favorite physics simulation/demonstration/learning applications for Linux? Please share with us in the comments below.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/linux-physics-simulation/
Author: [Attila Orosz][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.maketecheasier.com/author/attilaorosz/
[1]:https://www.maketecheasier.com/series/learn-with-linux/
[2]:https://www.maketecheasier.com/learn-to-type-in-linux/
[3]:https://www.maketecheasier.com/linux-physics-simulation/
[4]:https://www.maketecheasier.com/linux-learning-music/
[5]:https://www.maketecheasier.com/linux-geography-apps/
[6]:https://www.maketecheasier.com/learn-linux-maths/
[7]:https://edu.kde.org/applications/all/step
[8]:https://edu.kde.org/
[9]:http://lightspeed.sourceforge.net/
[10]:http://www.physion.net/

View File

@ -1,103 +0,0 @@
Learn with Linux: Two Geography Apps
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-featured.png)
This article is part of the [Learn with Linux][1] series:
- [Learn with Linux: Learning to Type][2]
- [Learn with Linux: Physics Simulation][3]
- [Learn with Linux: Learning Music][4]
- [Learn with Linux: Two Geography Apps][5]
- [Learn with Linux: Master Your Math with These Linux Apps][6]
Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
Geography is an interesting subject, used by many of us day to day, often without realizing. But when you fire up GPS, SatNav, or just Google Maps, you are using the geographical data provided by this software, with the maps drawn by cartographers. When you hear about a certain country in the news or hear financial data being recited, these all fall under the umbrella of geography. And you have some great Linux software to study and practice these, whether it is for school or your own improvement.
### Kgeography ###
There are only two geography-related applications readily available in most Linux repositories, and both of these are KDE applications, in fact part of the KDE Educational project. Kgeography uses simple color-coded maps of any selected country.
To install kgeography just type
sudo apt-get install kgeography
into a terminal window of any Ubuntu-based distribution.
The interface is very basic. You are first presented with a picker menu that lets you choose an area map.
![learn-geography-kgeo-pick](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-pick.png)
On the map you can display the name and capital of any given territory by clicking on it,
![learn-geography-kgeo-brit](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-brit.png)
and test your knowledge in different quizzes.
![learn-geography-kgeo-test](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-test.png)
It is an interactive way to test your basic geographical knowledge and could be an excellent tool to help you prepare for exams.
### Marble ###
Marble is a somewhat more advanced application, offering a global view of the world without the need of 3D acceleration.
![learn-geography-marble-main](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-main.png)
To get Marble, type
sudo apt-get install marble
into a terminal window of any Ubuntu-based distribution.
Marble focuses on cartography, its main view being that of an atlas.
![learn-geography-marble-atlas](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-atlas.jpg)
You can have different projections, like Globe or Mercator displayed as defaults, with flat and other exotic views available from a drop-down menu. The surfaces include the basic Atlas view, a full-fledged offline map powered by OpenStreetMap,
![learn-geography-marble-map](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-map.jpg)
satellite view (by NASA),
![learn-geography-marble-satellite](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-satellite.jpg)
and political and even historical maps of the world, among others.
![learn-geography-marble-history](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-history.jpg)
Besides providing great offline maps with different skins and varying amount of data, Marble offers other types of information as well. You can switch on and off various offline info-boxes
![learn-geography-marble-offline](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-offline.png)
and online services from the menu.
![learn-geography-marble-online](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-online.png)
An interesting online service is Wikipedia integration. Clicking on the little Wiki logos will bring up a pop-up featuring detailed information about the selected places.
![learn-geography-marble-wiki](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-wiki.png)
The software also includes options for location tracking, route planning, and searching for locations, among other great and useful features. If you enjoy cartography, Marble offers hours of fun exploring and learning.
### Conclusion ###
Linux offers many great educational applications, and the subject of geography is no exception. With the above two programs you can learn a lot about our globe and test your knowledge in a fun and interactive manner.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/linux-geography-apps/
作者:[Attila Orosz][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/attilaorosz/
[1]:https://www.maketecheasier.com/series/learn-with-linux/
[2]:https://www.maketecheasier.com/learn-to-type-in-linux/
[3]:https://www.maketecheasier.com/linux-physics-simulation/
[4]:https://www.maketecheasier.com/linux-learning-music/
[5]:https://www.maketecheasier.com/linux-geography-apps/
[6]:https://www.maketecheasier.com/learn-linux-maths/

View File

@ -1,3 +1,4 @@
(translating by runningwater)
Grep Count Lines If a String / Word Matches
================================================================================
How do I count lines if given word or string matches for each input file under Linux or UNIX operating systems?
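For example, a quick sketch (assuming a file named /etc/passwd and the word root; adjust to your own file and pattern) using grep's -c option, which prints a count of matching lines for each input file:
    grep -c 'root' /etc/passwd
    # Case-insensitive variant:
    grep -ic 'root' /etc/passwd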
@ -27,7 +28,7 @@ Sample outputs:
via: http://www.cyberciti.biz/faq/grep-count-lines-if-a-string-word-matches/
作者Vivek Gite
译者:[译者ID](https://github.com/译者ID)
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,3 +1,4 @@
(translating by runningwater)
Grep From Files and Display the File Name
================================================================================
How do I grep from a number of files and display the file name only?
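For example, a minimal sketch (assuming C source files in the current directory and the search string main; adjust to your own files and pattern) using grep's -l option, which prints only the names of files containing a match:
    grep -l 'main' *.c
    # Recurse into subdirectories:
    grep -rl 'main' .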
@ -61,7 +62,7 @@ Sample outputs:
via: http://www.cyberciti.biz/faq/grep-from-files-and-display-the-file-name/
作者Vivek Gite
译者:[译者ID](https://github.com/译者ID)
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,43 @@
苹果编程语言Swift开始支持Linux
================================================================================
![](http://itsfoss.com/wp-content/uploads/2015/12/Apple-Swift-Open-Source.jpg)
苹果也开源了是的苹果编程语言Swift已经开源了。其实我们并不应该感到意外因为[在六个月以前苹果就已经宣布了这个消息][1]。
苹果宣布这周将推出开源Swift社区。一个专用于开源Swift社区的[新网站][2]已经就位,网站首页显示以下信息:
> 我们对Swift开源感到兴奋。在苹果推出了编程语言Swift之后它很快成为历史上增长最快的语言之一。Swift可以编写出难以置信的又快又安全的软件。目前Swift是开源的你能帮助做出随处可用的最好的通用编程语言。
[swift.org][2]这个网站将会作为一站式网站它会提供各种资料的下载包括各种平台社区指南最新消息入门教程贡献开源Swift的说明文件和一些其他的指南。 如果你正期待着学习Swift那么必须收藏这个网站。
在苹果的这次宣布中,一个用于方便分享和构建代码的包管理器已经可用了。
对于所有的 Linux 用户来说最重要的是源代码已经可以从 [Github][3] 获得了。你可以从以下链接检出checkout示例见下
- [苹果Swift源代码][3]
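下面是一个简单的示意(假设你已经安装了 git仓库地址以苹果在 GitHub 上公开的 swift 仓库为例,请以上面的链接为准),展示如何把源代码检出到本地:
    git clone https://github.com/apple/swift.git
    cd swift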
除此之外,对于 Ubuntu 14.04 和 15.10 版本,还有预编译的二进制文件。
- [ubuntu系统的Swift二进制文件][4]
不要急着使用它们,因为这些都是开发分支的快照,并不适合在生产机器上使用,因此现在还是先避开为好。一旦 Linux 下的 Swift 稳定版本发布,我希望 Ubuntu 能把它像 [Visual Studio][6] 一样包含在 [umake][5] 中。
--------------------------------------------------------------------------------
via: http://itsfoss.com/swift-open-source-linux/
作者:[Abhishek][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/apple-open-sources-swift-programming-language-linux/
[2]:https://swift.org/
[3]:https://github.com/apple
[4]:https://swift.org/download/#latest-development-snapshots
[5]:https://wiki.ubuntu.com/ubuntu-make
[6]:http://itsfoss.com/install-visual-studio-code-ubuntu/

View File

@ -0,0 +1,69 @@
黑客利用Wi-Fi侵犯你隐私的七种方法
================================================================================
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/intro_title-100626673-orig.jpg)
### 黑客利用Wi-Fi侵犯你隐私的七种方法 ###
Wi-Fi既方便又危险的东西。这里给大家介绍一下通过 Wi-Fi 连接泄露身份信息的七种方式,以及相应的预防措施。
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/1_free-hotspots-100626674-orig.jpg)
### 利用免费热点 ###
它们似乎无处不在,而且它们的数量会在[下一个四年里增加四倍][1]。但是它们当中很多都是不值得信任的从你的登录凭证、email甚至更加敏感的账户都能被黑客用一款名叫“嗅探器sniffer”的软件截获这类软件能截获你通过该连接提交的任何信息。防止被黑客盯上的最好办法就是使用 VPNvirtual private network虚拟专用网它会加密你所输入的信息从而保护你的数据隐私。
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/2_online-banking-100626675-orig.jpg)
### 网上银行 ###
你可能认为没有人需要被提醒不要使用免费 Wi-Fi 来操作网上银行,但网络安全厂商卡巴斯基实验室表示,[全球超过100家银行因为网络黑客而损失9亿美元][2],由此可见还是有很多人因此受害。如果你真的想要在一家咖啡店里使用免费 Wi-Fi那么你应该向服务员确认网络名称。[在店里用路由器设置一个开放的无线连接][3],并将它的网络名称设置成店名,是一件相当简单的事。
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/3_keeping-wifi-on-100626676-orig.jpg)
### 始终开着Wi-Fi开关 ###
如果你手机的 Wi-Fi 开关一直开着,你会在不知不觉中被自动连接到不安全的网络中去。你可以利用手机的[基于位置的 Wi-Fi 功能][4](如果可用),它会在你离开已保存的网络范围后自动关闭 Wi-Fi 开关,并在你回去之后再次开启。
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/4_not-using-firewall-100626677-orig.jpg)
### 不使用防火墙 ###
防火墙是你的第一道抵御恶意入侵的防线,它能有效地让你的电脑网络通畅并阻挡黑客和恶意软件。你应该时刻开启它除非你的杀毒软件有它自己的防火墙。
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/5_browsing-unencrypted-sites-100626678-orig.jpg)
### 浏览非加密网页 ###
很遗憾,[世界上排名前 100 万的网站中 55% 是不加密的][5],一个未加密的网站会让传输的数据暴露在黑客的眼下。如果一个网页是安全的,你的浏览器会有标明(比如说火狐浏览器是一把绿色的挂锁、Chrome 浏览器则是个绿色的图标)。但是即使是安全的网站,也不能让你免于会话劫持的风险:劫持者能通过公共网络窃取你访问网站时产生的 cookies不管那个网站正规与否。
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/6_updating-security-software-100626679-orig.jpg)
### 不更新你的安全防护软件 ###
如果你想要确保你的网络是受保护的,就要及时更新你的路由器固件。你要做的就是进入路由器管理页面去检查,通常你能在厂商的官方网页上下载到最新的固件版本。
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/7_securing-home-wifi-100626680-orig.jpg)
### 不保护你的家用Wi-Fi ###
不用说设置一个复杂的密码和更改无线连接的默认名都是非常重要的。你还可以过滤你的MAC地址来让你的路由器只承认那些确认过的设备。
**Josh Althuser**是一个开源支持者、网络架构师和科技企业家。在过去 12 年里,他花了很多时间去倡导使用开源软件来管理团队和项目,同时为网络应用程序提供企业级咨询,并帮助它们走向市场。你可以通过[他的推特][6]联系他。
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/3003170/mobile-security/7-ways-hackers-can-use-wi-fi-against-you.html
作者:[Josh Althuser][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/JoshAlthuser
[1]:http://www.pcworld.com/article/243464/number_of_wifi_hotspots_to_quadruple_by_2015_says_study.html
[2]:http://www.nytimes.com/2015/02/15/world/bank-hackers-steal-millions-via-malware.html?hp&amp;action=click&amp;pgtype=Homepage&amp;module=first-column-region%C2%AEion=top-news&amp;WT.nav=top-news&amp;_r=3
[3]:http://news.yahoo.com/blogs/upgrade-your-life/banking-online-not-hacked-182159934.html
[4]:http://pocketnow.com/2014/10/15/should-you-leave-your-smartphones-wifi-on-or-turn-it-off
[5]:http://www.cnet.com/news/chrome-becoming-tool-in-googles-push-for-encrypted-web/
[6]:https://twitter.com/JoshAlthuser

View File

@ -0,0 +1,209 @@
# 19年KDE进化历程
youtube 视频
<iframe width="660" height="371" src="https://www.youtube.com/embed/1UG4lQOMBC4?feature=oembed" frameborder="0" allowfullscreen></iframe>
## 概述
KDE 是史上功能最强大的桌面环境之一,开源且免费。19 年前1996 年 10 月 14 日,德国程序员 Matthias Ettrich 开始编写这个美观的桌面环境。KDE 提供了 shell 以及很多日常使用的程序。今日KDE 被成千上万人在 Unix 和 Windows 上使用。19 年,对一个软件项目而言是极为漫长的岁月。现在,是时候让我们回到最初,看看这一切是从哪里开始的。
K Desktop EnvironmentKDE有很多创新之处新设计美观连贯性易于使用对普通用户和专业用户都足够强大的应用库。“KDE”这个名字是对“通用桌面环境”Common Desktop Environment的一个简单的谐音文字游戏“K”代表“Cool”。第一代 KDE 在双许可证授权下使用了 Trolltech 专有的 Qt 框架(现今 Qt 的前身这两个许可证分别是开源的QPLQ Public License和商业专有许可证。2000 年Trolltech 将一部分 Qt 软件库以 GPL 许可证发布Qt 4.5 则发布在了 LGPL 2.1 许可证下。自 2009 年起KDE 桌面环境由三部分构成:作为 shell 的 Plasma WorkspacesKDE 应用,以及作为 KDE 软件集KDE Software Compilation基础的 KDE Platform。
## 各发布版本
### Pre-Release 1996年10月14日
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/0b3.png)
当时名称为 Kool Desktop Environment“Kool”这个单词很快就被弃用了。最初所有 KDE 的组件都是被单独发布在开发社区里的,它们之间没有任何围绕大项目的组装配合。开发组邮件列表中的第一封通信是发往 kde@fiwi02.wiwi.uni-Tubingen.de 的邮件。
### KDE 1.0 1998年7月12日
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/10.png)
这个版本受到了颇有争议的反馈。很多人反对使用 Qt 框架(当时的 FreeQt 许可证和自由软件许可证并不兼容),并建议开发组使用 Motif 或者 LessTif 替代。尽管有着这些反对声KDE 仍然被很多用户所青睐,并且成功地被集成进了最初的一些 Linux 发行版中。
![28 January 1999](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/11.png)
1999年1月28日
一次升级,**K Desktop Environment 1.1**,更快、更稳定,同时加入了很多小升级。这个版本还加入了很多新的图标、背景和外观纹理。和这些全面翻新同时出现的,还有 Torsten Rahn 绘制的全新 KDE 图标(齿轮前的三个字母 K这个图标的修改版也一直沿用至今。
### KDE 2.0 2000年10月23日
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/20.png)
重大更新:
- DCOPDesktop COmmunication Protocol一个端到端的通信协议
- KIO一个应用程序 I/O 库
- KParts组件对象模板
- KHTML一个符合 HTML 4.0 标准的图像绘制引擎
![26 February 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/21.png)
2001年2月26日
**K Desktop Environment 2.1** 首次发布了媒体播放器 noatun它使用了先进的模块化插件设计。为了便利开发者K Desktop Environment 2.1 还打包了 KDevelop。
![15 August 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/22.png)
2001年8月15日
**KDE 2.2**版本在GNU/Linux上加快了50%的应用启动速度,同时提高了稳定性和 HTML、JavaScript的解析性能同时还增加了一些 KMail 的功能。
### KDE 3.0 2002年4月3日
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/30.png)
K Desktop Environment 3.0 加入了更好的限制使用功能,这个功能在网吧、企业的公用电脑上有广泛的需求。
![28 January 2003](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/31.png)
2003年1月28日
**K Desktop Environment 3.1** 加入了新的默认窗口样式Keramik和图标样式Crystal以及其他一些改进。
![3 February 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/32.png)
2004年2月3日
**K Desktop Environment 3.2** 加入了诸如在网页表单和邮件中进行内嵌拼写检查的新功能,补强了邮件和日历功能,完善了 Konqueror 中的标签页机制,以及对 Microsoft Windows 桌面共享协议的支持。
![19 August 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/33.png)
2004年8月19日
**K Desktop Environment 3.3** 侧重于组合不同的桌面组件。Kontact 被集成进了群件应用 Kolab并与 Kpilot 结合。Konqueror 加入了对即时通讯IM更好的支持比如支持向 IM 联系人发送文件,以及对 IRC 等 IM 协议的支持。
![16 March 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/34.png)
2005年3月16日
**K Desktop Environment 3.4** 侧重于提高易用性。这次更新为KonquerorKateKPDF加入了文字-语音转换功能;也在桌面系统中加入了独立的 KSayIt 文字-语音转换软件。
![29 November 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/35.png)
2005年11月29日
**K Desktop Environment 3.5** 发布加入了 SuperKaramba为桌面环境提供了易于安装的插件机制。Konqueror 加入了广告屏蔽功能,并成为了有史以来第二个通过 Acid2 CSS 测试的浏览器。
### KDE SC 4.0 2008年1月11日
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/400.png)
大部分开发组投身于把最新的技术和开发框架整合进 KDE 4 当中。Plasma 和 Oxygen 是两次最大的用户界面风格变更。同时Dolphin 替代 Konqueror 成为默认文件管理器Okular 成为了默认文档浏览器。
![29 July 2008](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/401.png)
2008年7月29日
**KDE 4.1** 引入了一个在 PIM 和 Kopete 中使用的表情主题系统;引入了可以让用户便利地从互联网上一键下载数据的 DXS同时引入了 GStreamer、QuickTime 和 DirectShow 9 的 Phonon 后端。加入的新应用有:
- Dragon Player
- Kontact
- Skanlite扫描仪软件
- Step物理模拟软件
- 新游戏Kdiamond、Kollision、KBreakout 等等
![27 January 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/402.png)
2009年1月27日
**KDE 4.2** 被认为是在已经极佳的 KDE 4.1 基础上的又一次全面超越,同时也成为了大多数用户替换旧 3.5 版本的完美选择。
![4 August 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/403.png)
2009年8月4日
**KDE 4.3** 修复了超过 10,000 个 bug同时加入了近 2,000 个用户要求的功能。整合一些新的技术例如PolicyKit、NetworkManager 和 Geolocation 服务等,也是这个版本的一大重点。
![9 February 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/404.png)
2010年2月9日
**KDE SC 4.4** 基于 Qt 4 工具包的 4.6 版本,加入了新应用 KAddressBook并首次发布了新版 Kopete。
![10 August 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/405.png)
2010年8月10日
**KDE SC 4.5** 增加了一些新特性:整合了 WebKit 库,这是一个开源的浏览器引擎,如今被广泛用于 Apple Safari 和 Google Chrome 中。KPackageKit 替换了 Kpackage。
![26 January 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/406.png)
2011年1月26日
**KDE SC 4.6** 加强了 OpenGL 的性能,同时照常修复了无数 bug 并带来许多小改进。
![27 July 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/407.png)
2011年7月27日
**KDE SC 4.7** 升级了 KWin 以兼容 OpenGL ES 2.0,更新了 Qt QuickPlasma Desktop 中的应用现在可以普遍使用新特性;修复了 1.2 万个 bug。
![25 January 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/408.png)
2012年1月25日
**KDE SC 4.8**: 更好的 KWin 性能与 Wayland 支持,更新了 Dolphin 的外观设计。
![1 August 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/409.png)
2012年8月1日
**KDE SC 4.9**: 向 Dolphin 文件管理器增加了一些更新,比如加入了实时文件重命名,鼠标辅助按钮支持,更好的位置标签和更多文件分类管理功能。
![6 February 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/410.png)
2013年2月6日
**KDE SC 4.10**: 很多 Plasma 插件使用 QML 重写; NepomukKontact 和 Okular 得到了很大程度的性能和功能提升。
![14 August 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/411.png)
2013年8月14日
**KDE SC 4.11**: Kontact 和 Nepomuk 有了很大的优化。 第一代 Plasma Workspaces 进入了仅有维护而没有新生开发的软件周期。
![18 December 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/412.png)
2013年12月18日
**KDE SC 4.12**: Kontact 得到了极大的提升。
![16 April 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/413.png)
2014年4月16日
**KDE SC 4.13**: Baloo 语义搜索功能替代了桌面上原有的 Nepomuk 搜索。KDE SC 4.13 发布了 53 个语言版本。
![20 August 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/414.png)
2014年8月20日
**KDE SC 4.14**: 这个发布版本侧重于稳定性提升大量的bug修复和小更新。这是最后一个 KDE SC 4 发布版本。
### KDE Plasma 5.0 2014年7月15日
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/500.png)
KDE Plasma 5第五代 KDE。大幅改进了设计带来了新的默认主题 Breeze完全迁移到了 QML更好的 OpenGL 性能,更完美的 HiDPI高分辨率显示支持。
![11 November 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/501.png)
2014年11月11日
**KDE Plasma 5.1**:补完了 Plasma 4 里原先缺失的功能。
![27 January 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/502.png)
2015年1月27日
**KDE Plasma 5.2**新组件BlueDevil、KSSHAskPass、Muon、SDDM 主题设置、KScreen、GTK+ 样式设置,以及 KDecoration。
![28 April 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/503.png)
2015年4月28日
**KDE Plasma 5.3**Plasma Media Center 技术预览。新的蓝牙和触摸板小程序;改良了电源管理。
![25 August 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/504.png)
2015年8月25日
**KDE Plasma 5.4**Wayland 登场,新的基于 QML 的音频管理程序,交替式全屏程序显示。
万分感谢 [KDE][1] 的开发者和社区,以及 Wikipedia 的[概述][2]为本文写作带来的帮助同时感谢所有读者。希望大家保持自由精神be free并继续支持像 KDE 一样开源且自由的软件的发展。
--------------------------------------------------------------------------------
via: [https://tlhp.cf/kde-history/](https://tlhp.cf/kde-history/)
作者:[Pavlo Rudyi][a] 译者:[jerryling315](https://github.com/jerryling315) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]: https://www.kde.org/
[2]: https://en.wikipedia.org/wiki/KDE_Plasma_5
[a]: https://tlhp.cf/author/paul/

View File

@ -0,0 +1,299 @@
点评Linux编程中五款内存调试器
================================================================================
![](http://images.techhive.com/images/article/2015/11/penguinadmin-2400px-100627186-primary.idge.jpg)
Credit: [Moini][1]
作为一个程序员,我知道我总在犯错误——事实是,怎么可能会不犯错的!程序员也是人啊。有的错误能在编码过程中及时发现,而有些却得等到软件测试才显露出来。然而,有一类错误并不能在这两个时期被排除,从而导致软件不能正常运行,甚至是提前中止。
想到了吗我说的就是内存相关的错误。手动调试这些错误不仅耗时而且很难发现并纠正。值得一提的是这种错误非常地常见特别是在一些软件里这些软件是用C/C++这类允许[手动管理内存][2]的语言编写的。
幸运的是,现在有一些编程工具能够帮你找到软件程序中这些内存相关的错误。在这些工具集中,我评测了五款 Linux 上可用的流行、免费并且开源的内存调试器Dmalloc、Electric Fence、Memcheck、Memwatch 以及 Mtrace。日常编码过程中我已经把这五个调试器用了个遍所以这些点评是建立在我的实际体验之上的。
### [Dmalloc][3] ###
**开发者**Gray Watson
**点评版本**5.5.2
**Linux支持**:所有种类
**许可**:知识共享署名-相同方式共享许可证3.0
Dmalloc是Gray Watson开发的一款内存调试工具。它实现成库封装了标准内存管理函数如**malloc(), calloc(), free()**等,使得程序员得以检测出有问题的代码。
![cw dmalloc output](http://images.techhive.com/images/article/2015/11/cw_dmalloc-output-100627040-large.idge.png)
Dmalloc
如同工具的网页所列,这个调试器提供的特性包括内存泄漏跟踪、[重复释放(double free)][4]错误跟踪、以及[越界写入(fence-post write)][5]检测。其它特性包括文件/行号报告、普通统计记录。
#### 更新内容 ####
5.5.2版本是一个[bug修复发行版][6],同时修复了构建和安装的问题。
#### 有何优点 ####
Dmalloc 最大的优点是可以任意配置。比如说,你可以配置它以支持 C++ 程序和多线程应用。Dmalloc 还提供一个有用的功能:运行时可配置,这表示在 Dmalloc 执行时,可以轻易地启用或者禁用它提供的特性。
你还可以配合[GNU Project Debugger (GDB)][7]来使用Dmalloc只需要将dmalloc.gdb文件位于Dmalloc源码包中的contrib子目录里的内容添加到你的主目录中的.gdbinit文件里即可。
另外一个优点让我对Dmalloc爱不释手的是它有大量的资料文献。前往官网的[Documentation标签][8]可以获取任何内容有关于如何下载、安装、运行怎样使用库和Dmalloc所提供特性的细节描述及其输入文件的解释。里面还有一个章节介绍了一般问题的解决方法。
#### 注意事项 ####
跟 Mtrace 一样Dmalloc 需要程序员改动他们的源代码。比如说,你必须添加头文件 **dmalloc.h**,这样工具才能汇报产生问题的调用所在的文件名或行号。这个功能非常有用,因为它节省了调试的时间。
除此之外,还需要在编译你的程序时,把 Dmalloc 库(编译其源码包时产生的)链接进去。
然而,还有点更麻烦的事:需要设置一个名为 **DMALLOC_OPTIONS** 的环境变量,以供工具在运行时配置内存调试特性,以及输出文件的路径。可以手动为该环境变量赋值,不过初学者可能会觉得这个过程有点困难,因为这个值是由你想启用的各个 Dmalloc 特性所对应的十六进制值累加而成的。[这里][9]有详细介绍。
设置这个环境变量的一个比较简单的方法,是使用 [Dmalloc 实用指令][10],它就是为这个目的设计的。
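下面是一个假设性的示意(基于 Dmalloc 文档描述的用法,日志文件名和检查级别均为示例值,请按你的实际需求调整):先在 shell 中定义一个包装函数,然后用它启用 low 级别的调试特性:
    # 定义包装函数bash 示例),让 dmalloc 输出的设置命令在当前 shell 中生效
    function dmalloc { eval `command dmalloc -b $*`; }
    # 启用 low 级别检查,每 100 次内存调用做一次堆检查,日志写入 dmalloc.log
    dmalloc -l dmalloc.log -i 100 low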
#### 总结 ####
Dmalloc 真正的优势在于它的可配置选项。而且它高度可移植曾经成功移植到多种操作系统如AIX、BSD/OS、DG/UX、Free/Net/OpenBSD、GNU/Hurd、HPUX、Irix、Linux、MS-DOS、NeXT、OSF、SCO、Solaris、SunOS、Ultrix、Unixware甚至 Unicos运行在 Cray T3E 主机上)。虽然 Dmalloc 有很多东西需要学习,但是它所提供的特性值得为之付出。
### [Electric Fence][15] ###
**开发者**Bruce Perens
**点评版本**2.2.3
**Linux支持**:所有种类
**许可**GNU 通用公共许可证 (第二版)
Electric Fence是Bruce Perens开发的一款内存调试工具它以库的形式实现你的程序需要链接它。Electric Fence能检测出[栈][11]内存溢出和访问已经释放的内存。
![cw electric fence output](http://images.techhive.com/images/article/2015/11/cw_electric-fence-output-100627041-large.idge.png)
Electric Fence
顾名思义Electric Fence在每个申请的缓存边界建立了fence防护任何非法内存访问都会导致[段错误][12]。这个调试工具同时支持C和C++编程。
#### 更新内容 ####
2.2.3版本修复了工具的构建系统,使得-fno-builtin-malloc选项能真正传给[GNU Compiler Collection (GCC)][13]。
#### 有何优点 ####
我喜欢Electric Fence首要的一点是Memwatch、Dmalloc和Mtrace所不具有的这个调试工具不需要你的源码做任何的改动你只需要在编译的时候把它的库链接进你的程序即可。
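举个简单的例子(仅作示意,假设源文件名为 prog.c编译时链接 efence 库,然后照常运行程序即可:
    gcc -g prog.c -o prog -lefence
    ./prog    # 一旦发生越界访问,程序会立即产生段错误,可配合 GDB 定位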
其次Electric Fence 实现了一种机制可以确保导致越界访问bounds violation的第一条指令立即引发段错误。这比事后才发现问题要好多了。
不管是否有检测出错误Electric Fence经常会在输出产生版权信息。这一点非常有用由此可以确定你所运行的程序已经启用了Electric Fence。
#### 注意事项 ####
另一方面Electric Fence 真正让我遗憾的,是它没有检测内存泄漏的能力,而内存泄漏是 C/C++ 软件最常见也是最隐秘难查的问题之一。除此之外Electric Fence 也不能检测堆内存溢出,而且它不是线程安全的。
由于 Electric Fence 会在用户分配的内存区前后分配禁止访问的虚拟内存页,如果你进行过多的动态内存分配,你的程序将会消耗大量额外内存。
Electric Fence还有一个局限是不能明确指出错误代码所在的行号。它所能做只是在监测到内存相关错误时产生段错误。想要定位行号需要借助[The Gnu Project Debugger (GDB)][14]这样的调试工具来调试你启用了Electric Fence的程序。
最后一点Electric Fence虽然能检测出大部分的缓冲区溢出有一个例外是如果所申请的缓冲区大小不是系统字长的倍数这时候溢出即使只有几个字节就不能被检测出来。
#### 总结 ####
尽管有那么多的局限但是Electric Fence的优点却在于它的易用性。程序只要链接工具一次Electric Fence就可以在监测出内存相关问题的时候报警。不过如同前面所说Electric Fence需要配合像GDB这样的源码调试器使用。
### [Memcheck][16] ###
**开发者**[Valgrind开发团队][17]
**点评版本**3.10.1
**Linux支持**:所有种类
**许可**:通用公共许可证
[Valgrind][18] 是一个工具套件,提供了好几款用于调试和分析 Linux 程序性能的工具。虽然 Valgrind 能和用 Java、Perl、Python、汇编、Fortran、Ada 等各种语言编写的程序配合工作,但它所提供的工具大部分都意在支持 C/C++ 编写的程序。
Memcheck 是一款内存错误检测器,也是最受欢迎的 Valgrind 工具。它能够检测出诸多问题,诸如内存泄漏、无效的内存访问、未定义变量的使用,以及栈内存分配和释放相关的问题等。
#### 更新内容 ####
工具套件3.10.1)的这个[发行版][19]是一个小版本,主要修复了 3.10.0 版本中发现的 bug。除此之外还从主干版本向后移植backport了一些补丁修复了缺失的 AArch64 ARMv8 指令和系统调用。
#### 有何优点 ####
同其它所有 Valgrind 工具一样Memcheck 也是一个基本的命令行实用程序。它的操作非常简单:通常我们会使用诸如 prog arg1 arg2 格式的命令来运行程序,而 Memcheck 只要求你在前面多加几个参数即可,就像 valgrind --leak-check=full prog arg1 arg2。
![cw memcheck output](http://images.techhive.com/images/article/2015/11/cw_memcheck-output-100627037-large.idge.png)
Memcheck
注意:因为 Memcheck 是 Valgrind 的默认工具,所以在命令行中无需特意指明 Memcheck。但是需要在编译程序之初带上 -g 参数选项,这一步会添加调试信息,使得 Memcheck 的错误信息中包含正确的行号。
我真正倾心于Memcheck的是它提供了很多命令行选项如上所述的--leak-check选项如此不仅能控制工具运转还可以控制它的输出。
举个例子,可以开启 --track-origins 选项,以查看程序源码中未初始化数据的来源;可以开启 --show-mismatched-frees 选项,让 Memcheck 检查内存的分配和释放函数是否匹配。对于 C 语言所写的代码Memcheck 会确保只能使用 free() 函数来释放内存malloc() 函数来申请内存;而对 C++ 所写的源码Memcheck 会检查是否使用了 delete 或 delete[] 操作符来释放内存,以及 new 或者 new[] 来申请内存。
Memcheck 最好的特点(尤其是对于初学者来说),是它会建议用户使用哪个命令行选项能让输出更加有意义。比如说,如果你不使用基本的 --leak-check 选项Memcheck 会在输出时建议“使用 --leak-check=full 重新运行,查看更多泄漏内存细节”。如果程序有未初始化的变量Memcheck 会产生信息“使用 --track-origins=yes 查看未初始化变量的来源”。
Memcheck另外一个有用的特性是它可以[创建抑制文件(suppression files)][20]由此可以忽略特定不能修正的错误这样Memcheck运行时就不会每次都报警了。值得一提的是Memcheck会去读取默认抑制文件来忽略系统库比如C库中的报错这些错误在系统创建之前就已经存在了。可以选择创建一个新的抑制文件或是编辑现有的(通常是/usr/lib/valgrind/default.supp)。
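下面是一个抑制文件的简单示意(其中的抑制项名称和函数名都是假设的,实际内容应当根据 Memcheck 在 --gen-suppressions=all 选项下给出的输出来填写):
    # 生成一个示例抑制文件,然后在运行时加载它
    cat > myapp.supp <<'EOF'
    {
       ignore_thirdparty_init_leak
       Memcheck:Leak
       fun:malloc
       fun:thirdparty_init
    }
    EOF
    valgrind --leak-check=full --suppressions=myapp.supp ./prog arg1 arg2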
Memcheck还有高级功能比如可以使用[定制内存分配器][22]来[检测内存错误][21]。除此之外Memcheck提供[监控命令][23]当用到Valgrind的内置gdbserver以及[客户端请求][24]机制不仅能把程序的行为告知Memcheck还可以进行查询时可以使用。
#### 注意事项 ####
毫无疑问Memcheck可以节省很多调试时间以及省去很多麻烦。但是它使用了很多内存导致程序执行变慢[由资料可知][25]大概花上20至30倍时间
除此之外Memcheck还有其它局限。根据用户评论Memcheck明显不是[线程安全][26]的;它不能检测出 [静态缓冲区溢出][27]还有就是一些Linux程序如[GNU Emacs][28]目前还不能使用Memcheck。
如果有兴趣,可以在[这里][29]查看Valgrind详尽的局限性说明。
#### 总结 ####
无论是对于初学者还是那些需要高级特性的人来说Memcheck都是一款便捷的内存调试工具。如果你仅需要基本调试和错误核查Memcheck会非常容易上手。而当你想要使用像抑制文件或者监控指令这样的特性就需要花一些功夫学习了。
虽然罗列了大量的局限性但是Valgrind包括Memcheck在它的网站上声称全球有[成千上万程序员][30]使用了此工具。开发团队称收到来自超过30个国家的用户反馈而这些用户的工程代码有的高达2.5千万行。
### [Memwatch][31] ###
**开发者**Johan Lindh
**点评版本**2.71
**Linux支持**:所有种类
**许可**GNU通用公共许可证
Memwatch是由Johan Lindh开发的内存调试工具虽然它主要扮演内存泄漏检测器的角色但是它也具有检测其它如[重复释放跟踪和内存错误释放][32]、缓冲区溢出和下溢、[野指针][33]写入等等内存相关问题的能力(根据网页介绍所知)。
Memwatch支持用C语言所编写的程序。可以在C++程序中使用它但是这种做法并不提倡由Memwatch源码包随附的Q&A文件中可知
#### 更新内容 ####
这个版本添加了ULONG_LONG_MAX以区分32位和64位程序。
#### 有何优点 ####
跟Dmalloc一样Memwatch也有优秀的文献资料。参考USING文件可以学习如何使用Memwatch可以了解Memwatch是如何初始化、如何清理以及如何进行I/O操作的等等不一而足。还有一个FAQ文件旨在帮助用户解决使用过程遇到的一般问题。最后还有一个test.c文件提供工作案例参考。
![cw memwatch output](http://images.techhive.com/images/article/2015/11/cw_memwatch_output-100627038-large.idge.png)
Memwatch
不同于 MtraceMemwatch 的输出产生的日志文件(通常是 memwatch.log是人类可读的格式。而且Memwatch 每次运行时总会把内存调试输出追加到此文件末尾,而不是覆盖它。如此,便可在需要之时轻松查看之前的输出信息。
同样值得一提的是当你执行了启用Memwatch的程序Memwatch会在[标准输出][34]中产生一个单行输出,告知发现了错误,然后你可以在日志文件中查看输出细节。如果没有产生错误信息,就可以确保日志文件不会写入任何错误,多次运行的话能实际节省时间。
另一个我喜欢的优点是Memwatch同样在源码中提供一个方法你可以据此获取Memwatch的输出信息然后任由你进行处理参考Memwatch源码中的mwSetOutFunc()函数获取更多有关的信息)。
#### 注意事项 ####
跟Mtrace和Dmalloc一样Memwatch也需要你往你的源文件里增加代码你需要把memwatch.h这个头文件包含进你的代码。而且编译程序的时候你需要连同memwatch.c一块编译或者你可以把已经编译好的目标模块包含起来然后在命令行定义MEMWATCH和MW_STDIO变量。不用说想要在输出中定位行号-g编译器选项也少不了。
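一个典型的编译命令大致如下(仅作示意,假设你的源文件为 prog.c并且已把 memwatch.c 和 memwatch.h 复制到同一目录):
    gcc -g -DMEMWATCH -DMW_STDIO prog.c memwatch.c -o prog
    ./prog    # 运行结束后查看生成的 memwatch.log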
还有一些没有具备的特性。比如Memwatch不能检测出往一块已经被释放的内存写入操作或是在分配的内存块之外的读取操作。而且Memwatch也不是线程安全的。还有一点正如我在开始时指出在C++程序上运行Memwatch的结果是不能预料的。
#### 总结 ####
Memwatch 可以检测很多内存相关的问题,在处理 C 程序时是非常便捷的调试工具。因为源码小巧,所以可以从中了解 Memwatch 如何运转,有需要的话可以调试它,甚至可以根据自身需求扩展升级它的功能。
### [Mtrace][35] ###
**开发者**: Roland McGrath and Ulrich Drepper
**点评版本**: 2.21
**Linux支持**:所有种类
**许可**GNU通用公共许可证
Mtrace是[GNU C库][36]中的一款内存调试工具同时支持Linux C和C++程序检测由malloc()和free()函数的不对等调用所引起的内存泄漏问题。
![cw mtrace output](http://images.techhive.com/images/article/2015/11/cw_mtrace-output-100627039-large.idge.png)
Mtrace
Mtrace实现为对mtrace()函数的调用跟踪程序中所有malloc/free调用在用户指定的文件中记录相关信息。文件以一种机器可读的格式记录数据所以有一个Perl脚本同样命名为mtrace用来把文件转换并展示为人类可读格式。
#### 更新内容 ####
[Mtrace源码][37]和[Perl文件][38]同GNU C库(2.21版本)一起释出,除了更新版权日期,其它别无改动。
#### 有何优点 ####
Mtrace最优秀的特点是非常简单易学。你只需要了解在你的源码中如何以及何处添加mtrace()及其对立的muntrace()函数还有如何使用Mtrace的Perl脚本。后者非常简单只需要运行指令mtrace <program-executable> <log-file-generated-upon-program-execution>(例子见开头截图最后一条指令)。
Mtrace另外一个优点是它的可收缩性体现在不仅可以使用它来调试完整的程序还可以使用它来检测程序中独立模块的内存泄漏。只需在每个模块里调用mtrace()和muntrace()即可。
最后一点:因为 Mtrace 会在 mtrace()(在源码中添加的函数)执行时被触发,因此可以很灵活地[使用信号][39],动态地在程序执行周期内启用 Mtrace。
#### 注意事项 ####
因为 mtrace() 和 muntrace() 函数在 mcheck.h 文件中声明,所以必须在源码中包含此头文件。另外,由于 muntrace() 的调用并非[总是必要][40]因此Mtrace 至少要求程序员对源码做一次改动。
要知道,编译程序的时候需要带上 -g 选项([GCC][41] 和 [G++][42] 编译器均有提供),调试工具才能在输出中展示正确的行号。除此之外,有些程序(取决于源码体积有多大)可能会花很长时间进行编译。最后,带 -g 选项编译会增大可执行文件的体积(因为产生了额外的调试信息),因此记得在测试结束后,不带 -g 选项重新编译程序。
使用 Mtrace你需要掌握 Linux 环境变量的基本知识:在程序执行之前,需要把用户指定的文件路径mtrace() 函数用它记录全部信息)设置为环境变量 MALLOC_TRACE 的值。
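整个流程大致如下(仅作示意,假设程序名为 prog且源码中已按上述方式调用了 mtrace(),日志路径为示例值):
    gcc -g prog.c -o prog
    export MALLOC_TRACE=$HOME/mtrace.log   # mtrace() 会把记录写到这个文件
    ./prog
    mtrace ./prog $MALLOC_TRACE            # 用同名 Perl 脚本把日志转换为可读格式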
Mtrace在检测内存泄漏和尝试释放未经过分配的内存方面存在局限。它不能检测其它内存相关问题如非法内存访问、使用未初始化内存。而且[有人抱怨][43]Mtrace不是[线程安全][44]的。
### 总结 ###
不言自明,我在此讨论的每款内存调试器都有其优点和局限。所以,哪一款适合你取决于你所需要的特性,虽然有时候容易安装和使用也是一个决定因素。
要想捕获软件程序中的内存泄漏Mtrace 最适合不过了,它还可以节省时间。由于 Linux 系统通常已经预装了此工具,对于不能联网或者无法下载第三方调试工具的情况Mtrace 也是极有助益的。
另一方面,相比 MtraceDmalloc 不仅能检测更多错误类型,还能提供更多特性,比如运行时可配置、GDB 集成。而且Dmalloc 不像这里所说的其它工具,它是线程安全的。更不用说它详尽的文档资料了,这让 Dmalloc 成为初学者的理想选择。
虽然Memwatch的资料比Dmalloc的更加丰富而且还能检测更多的错误种类但是你只能在C语言写就的软件程序上使用它。一个让Memwatch脱颖而出的特性是它允许在你的程序源码中处理它的输出这对于想要定制输出格式来说是非常有用的。
如果改动程序源码非你所愿,那么使用 Electric Fence 吧。不过请记住Electric Fence 只能检测两种错误类型,而此二者均非内存泄漏。还有就是,需要了解 GDB 基础,以最大程度发挥这款内存调试工具的作用。
Memcheck可能是这当中综合性最好的了。相比这里所说其它工具它检测更多的错误类型提供更多的特性而且不需要你的源码做任何改动。但请注意基本功能并不难上手但是想要使用它的高级特性就必须学习相关的专业知识了。
--------------------------------------------------------------------------------
via: http://www.computerworld.com/article/3003957/linux/review-5-memory-debuggers-for-linux-coding.html
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/soooogreen)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.computerworld.com/author/Himanshu-Arora/
[1]:https://openclipart.org/detail/132427/penguin-admin
[2]:https://en.wikipedia.org/wiki/Manual_memory_management
[3]:http://dmalloc.com/
[4]:https://www.owasp.org/index.php/Double_Free
[5]:https://stuff.mit.edu/afs/sipb/project/gnucash-test/src/dmalloc-4.8.2/dmalloc.html#Fence-Post%20Overruns
[6]:http://dmalloc.com/releases/notes/dmalloc-5.5.2.html
[7]:http://www.gnu.org/software/gdb/
[8]:http://dmalloc.com/docs/
[9]:http://dmalloc.com/docs/latest/online/dmalloc_26.html#SEC32
[10]:http://dmalloc.com/docs/latest/online/dmalloc_23.html#SEC29
[11]:https://en.wikipedia.org/wiki/Memory_management#Dynamic_memory_allocation
[12]:https://en.wikipedia.org/wiki/Segmentation_fault
[13]:https://en.wikipedia.org/wiki/GNU_Compiler_Collection
[14]:http://www.gnu.org/software/gdb/
[15]:https://launchpad.net/ubuntu/+source/electric-fence/2.2.3
[16]:http://valgrind.org/docs/manual/mc-manual.html
[17]:http://valgrind.org/info/developers.html
[18]:http://valgrind.org/
[19]:http://valgrind.org/docs/manual/dist.news.html
[20]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles
[21]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.mempools
[22]:http://stackoverflow.com/questions/4642671/c-memory-allocators
[23]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
[24]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs
[25]:http://valgrind.org/docs/manual/valgrind_manual.pdf
[26]:http://sourceforge.net/p/valgrind/mailman/message/30292453/
[27]:https://msdn.microsoft.com/en-us/library/ee798431%28v=cs.20%29.aspx
[28]:http://www.computerworld.com/article/2484425/linux/5-free-linux-text-editors-for-programming-and-word-processing.html?nsdr=true&page=2
[29]:http://valgrind.org/docs/manual/manual-core.html#manual-core.limits
[30]:http://valgrind.org/info/
[31]:http://www.linkdata.se/sourcecode/memwatch/
[32]:http://www.cecalc.ula.ve/documentacion/tutoriales/WorkshopDebugger/007-2579-007/sgi_html/ch09.html
[33]:http://c2.com/cgi/wiki?WildPointer
[34]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29
[35]:http://www.gnu.org/software/libc/manual/html_node/Tracing-malloc.html
[36]:https://www.gnu.org/software/libc/
[37]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.c;h=df10128b872b4adc4086cf74e5d965c1c11d35d2;hb=HEAD
[38]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.pl;h=0737890510e9837f26ebee2ba36c9058affb0bf1;hb=HEAD
[39]:http://webcache.googleusercontent.com/search?q=cache:s6ywlLtkSqQJ:www.gnu.org/s/libc/manual/html_node/Tips-for-the-Memory-Debugger.html+&cd=1&hl=en&ct=clnk&gl=in&client=Ubuntu
[40]:http://www.gnu.org/software/libc/manual/html_node/Using-the-Memory-Debugger.html#Using-the-Memory-Debugger
[41]:http://linux.die.net/man/1/gcc
[42]:http://linux.die.net/man/1/g++
[43]:https://sourceware.org/ml/libc-help/2014-05/msg00008.html
[44]:https://en.wikipedia.org/wiki/Thread_safety

View File

@ -0,0 +1,93 @@
安卓编年史
================================================================================
![和之前完全不同的市场设计。以上是分类,特色,热门应用以及应用详情页面。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/market-pages.png)
和之前完全不同的市场设计。以上是分类,特色,热门应用以及应用详情页面。
Ron Amadeo 供图
这些截图给了我们冰淇淋三明治中新版操作栏的第一印象。几乎所有的应用顶部都有一条栏,带有应用图标、当前界面标题、一些功能按钮,右边还有一个菜单按钮。这个右对齐的菜单按钮被称为“更多操作”,因为里面存放着无法放置到主操作栏的项目。不过更多操作菜单并不是固定不变的,它让操作栏能节省更多的屏幕空间:比如在横屏模式或在平板上时,更多操作菜单的项目会像通常的按钮一样显示在操作栏上。
冰淇淋三明治中新增了“滑动标签页”设计,替换掉了谷歌之前推行的 2×3 方阵导航屏幕。一个标签页栏放置在了操作栏下方,位于中间的标签显示的是当前页面,左右侧的两个标签显示的是对应的当前页面的左右侧页面。向左右滑动可以切换标签页,或者你可以点击指定页面的标签跳转过去。
应用详情页面有个很赞的设计,在应用截图后,会根据你关于那个应用的历史动态地重新布局页面。如果你从来没有安装过该应用,应用描述会优先显示。如果你曾安装过这个应用,第一部分将会是评价栏,它会邀请你评价该应用或者提醒你上次你安装该应用时的评价是什么。之前使用过的应用页面第二部分是“新特性”,因为一个老用户最关心的应该是应用有什么变化。
![最近应用和浏览器和蜂巢中的类似,但是是小号的](http://cdn.arstechnica.net/wp-content/uploads/2014/03/recentbrowser.png)
最近应用和浏览器和蜂巢中的类似,但是是小号的。
Ron Amadeo 供图
最近应用的电子风格外观被移除了。略缩图周围的蓝色的轮廓线被去除了,同时去除的还有背景怪异的,不均匀的蓝色光晕。它现在看起来是个中立型的界面,在任何时候看起来都很舒适。
浏览器尽了最大的努力把标签页体验带到手机上来。多标签浏览受到了关注,操作栏上引入的一个标签页按钮会打开一个类似最近应用的界面,显示你打开的标签页,而不是浪费宝贵的屏幕空间引入一个标签条。从功能上来说,这个和之前的浏览器中的“窗口”视图没什么差别。浏览器最佳的改进是菜单中的“请求桌面版站点”选项,这让你可以从默认的移动站点视图切换到正常站点。浏览器展示了谷歌的操作栏设计的灵活性,尽管这里没有左上角的应用图标,功能上来说和其他的顶栏设计相似。
![Gmail 和 Google Talk —— 它们和蜂巢中的相似,但是更小!](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gmail2.png)
Gmail 和 Google Talk —— 它们和蜂巢中的相似,但是更小!
Ron Amadeo 供图
Gmail 和 Google Talk 看起来都像是之前蜂巢中的设计的缩小版但是有些小调整让它们在小屏幕上表现更佳。Gmail 以双操作栏为特色——一个在屏幕顶部,一个在底部。顶部操作栏显示当前文件夹,账户,以及未读消息数目,点击顶栏可以打开一个导航菜单。底部操作栏有你期望出现在更多操作中的选项。使用双操作栏布局是为了在界面显示更多的按钮,但是在横屏模式下纵向空间有限,双操作栏就是合并成一个顶部操作栏。
在邮件视图下,往下滚动屏幕时蓝色栏有“粘性”。它会固定在屏幕顶部,所以你一直可以看到该邮件是谁写的,回复它,或者给它加星标。一旦处于邮件消息界面,底部细长的,深灰色栏会显示你当前在收件箱(或你所在的某个列表)的位置,并且你可以向左或向右滑动来切换到其他邮件。
Google Talk 允许你像在 Gmail 中那样左右滑动来切换聊天窗口,但是这里显示栏是在顶部。
![新的拨号和来电界面,都是姜饼以来我们还没见过的。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/inc-calls.png)
新的拨号和来电界面,都是姜饼以来我们还没见过的。
Ron Amadeo 供图
因为蜂巢只给平板使用,所以一些界面设计直接超前于姜饼。冰淇淋三明治的新拨号界面就是如此,黑色和蓝色相间,并且使用了可滑动切换的小标签。尽管冰淇淋三明治终于做了对的事情并将电话主体和联系人独立开来,但电话应用还是有它自己的联系人标签。现在有两个地方可以看到你的联系人列表——一个有着暗色主题,另一个有着亮色主题。由于实体搜索按钮不再是硬性要求,底部的按钮栏的语音信息快捷方式被替换为了搜索图标。
谷歌几乎就是把来电界面做成了锁屏界面的镜像,这意味着冰淇淋三明治有着一个环状解锁设计。除了通常的接受和挂断选项,圆环的顶部还添加了一个按钮,让你可以挂断来电并给对方发送一条预先定义好的信息。向上滑动并选择一条信息如“现在无法接听,一会回电”,相比于一直响个不停的手机而言这样做的信息交流更加丰富。
![蜂巢没有文件夹和信息应用,所以这里是冰淇淋三明治和姜饼的对比。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/thenonmessedupversion.png)
蜂巢没有文件夹和信息应用,所以这里是冰淇淋三明治和姜饼的对比。
Ron Amadeo 供图
现在创建文件夹更加方便了。在姜饼中,你得长按屏幕,选择“文件夹”选项,再点击“新文件夹”。在冰淇淋三明治中,你只要将一个图标拖拽到另一个图标上面,就会自动创建一个文件夹,并包含这两个图标。这简直不能更简单了,比寻找隐藏的长按命令容易多了。
设计上也有很大的改进。姜饼使用了一个通用的米黄色文件夹图标,但冰淇淋三明治直接显示出了文件夹中的头三个应用,把它们的图标叠在一起,在外侧画一个圆圈,并将其设置为文件夹图标。打开文件夹容器将自动调整大小以适应文件夹中的应用图标数目,而不是显示一个全屏的,大部分都是空的对话框。这看起来好得多得多。
![Youtube 转换到一个更加现代的白色主题,使用了列表视图替换疯狂的 3D 滚动视图。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/youtubes.png)
Youtube 转换到一个更加现代的白色主题,使用了列表视图替换疯狂的 3D 滚动视图。
Ron Amadeo 供图
Youtube 经过了完全的重新设计看起来没那么像是来自黑客帝国的产物更像是Youtube。它现在就是一个简单的垂直滚动的白色视频列表就像网站的那样。在你手机上制作视频受到了重视操作栏的第一个按钮专用于拍摄视频。奇怪的是不同的界面左上角使用了不同的 Youtube 标志,在水平的 Youtube 标志和方形标志之间切换。
Youtube 几乎在所有地方都使用了滑动标签页。它们被放置在主页面以在浏览和账户间切换放置在视频页面以在评论介绍和相关视频之间切换。4.0 版本的应用显示出 Google+ Youtube 集成的第一个信号,通常的评分按钮旁边放置了 “+1” 图标。最终 Google+ 会完全占据 Youtube将评论和作者页面变成 Google+ 活动。
![冰淇淋三明治试着让事情对所有人都更加简单。这里是数据使用量追踪,打开许多数据的新开发者选项,以及使用向导。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/data.png)
冰淇淋三明治试着让事情对所有人都更加简单。这里是数据使用量追踪,打开许多数据的新开发者选项,以及使用向导。
Ron Amadeo 供图
数据使用量允许用户更轻松地追踪和控制他们的数据使用。主页面显示一个月度使用量图表,用户可以设置数据使用警告值或者硬性使用限制以避免超量使用产生费用。所有的这些只需简单地拖动橙色和红色水平限制线在图表上的位置即可。纵向的白色把手允许用户选择图表上的一段指定时间段。在页面底部,选定时间段内的数据使用量又细分到每个应用,所以用户可以选择一个数据使用高峰并轻松地查看哪个应用在消耗大量流量。当流量紧张的时候,更多操作按钮中有个限制所有后台流量的选项。设置之后只用在前台运行的程序有权连接互联网。
开发者选项通常只有一点点设置选项,但是在冰淇淋三明治中,这部分有非常多选项。谷歌添加了所有类型的屏幕诊断显示浮层来帮助开发者理解他们的应用中发生了什么。你可以看到 CPU 使用率,触摸点位置,还有视图界面更新。还有些选项可以更改系统功能,比如控制动画速度,后台处理,以及 GPU 渲染。
安卓和 iOS 之间最大的区别之一就是应用抽屉界面。在冰淇淋三明治对更加用户友好的追求下,设备第一次初始化启动会启动一个小教程,向用户展示应用抽屉的位置以及如何将应用图标从应用抽屉拖拽到主屏幕。随着实体菜单按键的移除和像这样的改变,安卓 4.0 做了很大的努力变得对新智能手机用户和转换过来的用户更有吸引力。
![“触摸分享”NFC 支持Google Earth以及应用信息让你可以禁用垃圾软件。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-06-03.57.png)
“触摸分享”NFC 支持Google Earth以及应用信息让你可以禁用垃圾软件。
冰淇淋三明治内置对 [NFC][1] 的完整支持。尽管之前的设备,比如 Nexus S 也拥有 NFC但得到的支持是有限的并且系统并不能利用芯片做太多事情。4.0 添加了一个“Android Beam”功能两台拥有 NFC 的安卓 4.0 设备可以借此在设备间来回传输数据。NFC 会传输当时屏幕上显示的相关数据,因此在手机显示一个网页的时候使用该功能,会将该页面传送给另一部手机。你还可以发送联系人信息、方向导航,以及 Youtube 链接。当两台手机放在一起时,屏幕显示会缩小,点击缩小的界面会发送相关信息。
在安卓中,用户不允许删除系统应用,以保证系统完整性。运营商和 OEM 利用该特性,开始将垃圾软件放入系统分区,导致经常有一些没用的应用存在系统中。安卓 4.0 允许用户禁用任何不能被卸载的应用,意味着该应用还存在于系统中,但是不显示在应用抽屉里,并且不能运行。如果用户愿意深究设置项,这给了他们一个简单的途径来拿回手机的控制权。
安卓 4.0 可以看做是现代安卓时代的开始。大部分这时发布的谷歌应用只能在安卓 4.0 及以上版本运行。4.0 还有许多谷歌想要好好利用的新 API——至少最初想要——对 4.0 以下的版本的支持就有限了。在冰淇淋三明治和蜂巢之后谷歌真的开始认真对待软件设计。在2012年1月谷歌[最终发布了][2] *Android Design*,一个教安卓开发者如何创建符合安卓外观和感觉的应用的设计指南站点。这是 iOS 在有第三方应用支持开始就在做的事情,苹果还严肃地对待应用的设计,不符合指南的应用都被 App Store 拒之门外。安卓三年以来谷歌没有给出任何公共设计规范文档的事实,足以说明事情有多糟糕。但随着在 Duarte 掌控下的安卓设计革命,谷歌终于发布了基本设计需求。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron 是 Ars Technica 的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/20/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2011/02/near-field-communications-a-technology-primer/
[2]:http://arstechnica.com/business/2012/01/google-launches-style-guide-for-android-developers/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -0,0 +1,104 @@
安卓编年史
================================================================================
![](http://cdn.arstechnica.net/wp-content/uploads/2014/03/playicons2.png)
Ron Amadeo 供图
### Google Play 和直接面向消费者出售设备的回归 ###
2012年3月6日谷歌将旗下提供的所有内容统一到 “Google Play”。安卓市场变为了 Google Play 商店Google Books 变为 Google Play BooksGoogle Music 变为 Google Play Music还有 Android Market Movies 变为 Google Play Movies & TV。尽管应用界面的变化不是很大这四个内容应用都获得了新的名称和图标。在 Play 商店购买的内容会下载到对应的应用中Play 商店和 Play 内容应用一道给用户提供了易管理的内容体验。
Google Play 更新是谷歌第一个大的更新周期外更新。四个自带应用都没有通过系统更新获得升级,它们都是直接通过安卓市场/ Play商店更新的。对单独的应用启用周期外更新是谷歌的重大关注点之一而能够实现这样的更新是自姜饼时代开始的工程努力的顶峰。谷歌一直致力于对应用从系统“解耦”从而让它们能够通过安卓市场/ Play 商店进行分发。
尽管一两个应用(主要是地图和 Gmail之前就在安卓市场上从这里开始你会看到许多更重大的更新而其和系统发布无关。系统更新需要 OEM 厂商和运营商的合作,所以很难保证推送到每个用户手上。而 Play 商店更新则完全掌握在谷歌手上,给了谷歌一条直接到达用户设备的途径。因为 Google Play 的发布,安卓市场对自身升级到了 Google Play Store在那之后图书音乐以及电影应用都下发了 Google Play 式的更新。
Google Play 系列应用的设计仍然不尽相同。每个应用的外观和功能各有差异,但暂且来说,一个统一的品牌标识是个好的开始。从品牌标识中去除“安卓”字样是很有必要的,因为很多服务是在浏览器中提供的,不需要安卓设备也能使用。
2012年4月谷歌[再次开始通过 Play 商店销售设备][1],恢复了在 Nexus One 发布时尝试的直接面向消费者销售的方式。尽管距 Nexus One 停售仅有两年但网上购物现在更加寻常在接触到实物之前就购买它并不像在2010年时听起来那么疯狂。
谷歌也看到了价格敏感的用户在面对 Nexus One 530美元的价格时的反应。第一部销售的设备是无锁的GSM 版本的 Galaxy Nexus价格399美元。在那之后价格变得更低。350美元成为了最近两台 Nexus 设备的入门价7英寸 Nexus 平板的价格更是只有200美元到220美元。
今天Play 商店销售八款不同的安卓设备,四款 Chromebook一款自动调温器以及许多配件设备商店已经是谷歌新产品发布的实际地点了。新产品发布总是如此受欢迎站点往往无法承载如此大的流量新 Nexus 手机也在几小时内售空。
### 安卓 4.1果冻豆——Google Now指明未来
![华硕制造的 Nexus 7安卓 4.1 的首发设备。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/ASUS_Google_Nexus_7_4_11.jpg)
华硕制造的 Nexus 7安卓 4.1 的首发设备。
随着2012年7月安卓 4.1 果冻豆的发布,谷歌的安卓发布节奏进入每六个月一发布的轨道。平台已经成熟,三个月的发布周期就没那么必要了,更长的发布周期也给了 OEM 厂商足够的时间跟上谷歌的节奏。和蜂巢不同小数点后的更新发布现在是主要更新4.1 带来了主要的界面更新和框架变化。
果冻豆最大的变化之一,并且你在截图中看不到的是“黄油计划”,谷歌工程师齐心努力让安卓的动画顺畅地跑在 30FPS 上。还有一些核心变化,像垂直同步和三重缓冲,每个动画都经过优化以流畅地绘制。动画和顺滑滚动一直是安卓和 iOS 相比之下的弱点。经过在核心动画框架和单独的应用上的努力,果冻豆让安卓的流畅度大幅接近 iOS。
和果冻豆一起到来的还有 [Nexus 7][2],由华硕生产的 7 英寸平板。不像之前主要是横屏模式的 XoomNexus 7 主要以竖屏模式使用像个大一号的手机。Nexus 7 展现了经过一年半的生态建设,谷歌已经准备好了给平板市场带来一部旗舰设备。和 Nexus One 和 GSM Galaxy Nexus 一样Nexus 7 直接由谷歌在线销售。尽管那些早先的设备对习惯于运营商补贴的消费者来说拥有惊人的高价,但 Nexus 7 以仅仅 200 美元的价格推向大众市场。这个价格给你带来一部 7 英寸1280x800 分辨率显示屏,四核 1.2GHz Tegra 3 处理器1GB 内存8GB 内置存储的设备。Nexus 7 的性价比如此之高,许多人都想知道谷歌到底有没有在其旗舰平板上赚到钱。
更小更轻7英寸这些因素促成了谷歌巨大的成功并且将谷歌带向了引领行业潮流的位置。一开始制造10英寸 iPad 的苹果,最终也不得不推出和 Nexus 7 相似的 iPad Mini 来应对。
![4.1 的新锁屏设计,壁纸,以及系统按钮新的点击高亮。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/picture.png)
4.1 的新锁屏设计,壁纸,以及系统按钮新的点击高亮。
Ron Amadeo 供图
蜂巢引入的电子风格在冰淇淋三明治中有所减少,果冻豆在此之上走得更远。它开始从系统中大范围地移除蓝色。迹象就是系统按钮的点击高亮从蓝色变为了灰色。
![新应用阵容合成图以及新的消息可展开通知面板。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/jb-apps-and-notications.png)
新应用阵容合成图以及新的消息可展开通知面板。
Ron Amadeo 供图
通知中心面板完全重制了这个设计一直沿用到今天的奇巧巧克力KitKat。新面板扩展到了屏幕顶部并且覆盖了状态栏图标这意味着通知面板打开的时候不再能看到状态栏。时间突出显示在左上角旁边是日期和设置按钮。清除所有通知按钮冰淇淋三明治中显示为一个“X”按钮现在变为阶梯状的按钮象征着清除所有通知的时候消息交错滑动的动画效果。底部的面板把手从一个小圆换成了一条直线和面板等宽。所有的排版都发生了变化——通知面板的所有项现在都使用了更大更细的字体。通知面板是另一个从冰淇淋三明治和蜂巢中引入的蓝色元素被移除的屏幕。除了触摸高亮之外整个通知面板都是灰色的。
通知面板也引入了新功能。相较于之前的两行设计现在的通知消息可以展开以显示更多信息。通知消息可以显示最多8行文本甚至还能在消息底部显示按钮。屏幕截图通知消息底部有个分享按钮你也可以直接从未接来电通知拨号或者将一个正在响铃的闹钟小睡这些都可以在通知面板完成。新通知消息默认展开但当它们堆叠到一起时会恢复原来的尺寸。在通知消息上双指向下滑动可以展开消息。
![新谷歌搜索应用,带有 Google Now 卡片,语音搜索,以及文字搜索。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googlenow.png)
新谷歌搜索应用,带有 Google Now 卡片,语音搜索,以及文字搜索。
Ron Amadeo 供图
果冻豆中不止对安卓而言对谷歌来说也是最大的特性是新版谷歌搜索应用。它带来了“Google Now”一个预测性搜索功能。Google Now 在搜索框下面显示为几张卡片,它会提供谷歌认为你所关心的事物的搜索结果,比如你最近在桌面电脑上查找过的谷歌地图地点、日历中的约会地点、天气,以及旅行时回家的时间。
新版谷歌搜索应用自然可以从谷歌图标启动,但它还可以在任意屏幕从系统栏上滑访问。长按系统栏会唤出一个类似锁屏解锁的环。卡片部分纵向滚动,如果你不想看到它们,可以滑动消除它们。语音搜索是更新的一个大部分。提问不是无脑地输入进谷歌,如果谷歌知道答案,它还会用文本语音转换引擎回答你。传统的文字搜索当然也受支持。只需点击搜索栏然后开始输入即可。
谷歌经常将 Google Now 称作“谷歌搜索的未来”。告诉谷歌你想要什么这还不够好。谷歌想要在你之前知道你想要什么。Google Now 用谷歌所有的数据挖掘关于你的知识为你服务,这也是谷歌对抗搜索引擎竞争对手,比如必应,最大的优势所在。智能手机比你拥有的其它设备更了解你,所以该服务在安卓上首次亮相。但谷歌慢慢也将 Google Now 加入 Chrome最终似乎会到达 Google.com。
尽管功能很重要,但同时 Google Now 是谷歌产品有史以来最重要的设计工作也是毋庸置疑的。谷歌搜索应用引入的白色卡片审美将会成为几乎所有谷歌产品设计的基础。今天,卡片风格被用在 Google Play 商店以及所有的 Play 内容应用Youtube谷歌地图DriveKeepGmailGoogle+以及其它产品。同时也不限于安卓应用。不少谷歌的桌面站点和 iOS 应用也以此设计为灵感。设计是谷歌历史中的弱项之一,但 Google Now 开始谷歌最终在设计上采取了行动,带来一个统一的,全公司范围的设计语言。
![又一个 Youtube 重新设计,信息密度有所下降。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/yotuube.png)
又一个 Youtube 的重新设计,信息密度有所下降。
Ron Amadeo 供图
又一个版本,又一个 Youtube 的重新设计。这次列表视图主要基于略缩图,大大的图片占据了屏幕的大部分。信息密度在新列表设计中有所下降。之前 Youtube 每屏大约能显示6个项目现在只能显示3个。
Youtube 是首批在应用左侧加入滑动抽屉的应用之一,该特性会成为谷歌应用的标准设计风格。抽屉中有你的账户的链接和订阅频道,这让谷歌可以去除页面顶部标签页设计。
![Google Play 服务的职责以及安卓的剩余部分职责。](http://cdn.arstechnica.net/wp-content/uploads/2013/08/playservicesdiagram2.png)
Google Play 服务的职责以及安卓的剩余部分职责。
Ron Amadeo 供图
### Google Play 服务——碎片化和让系统版本(几乎)过时 ###
碎片化在那时看起来并不是个大问题。2012 年 12 月Google Play 服务 1.0 面向所有安卓 2.2 及以上版本的手机推出,它添加了一些 Google+ API 和对 OAuth 2.0 的支持。
尽管这个升级听起来很无聊,但 Google Play 服务最终会成长为安卓整体的一部分。Google Play 服务扮演着正常应用和安卓系统的中间角色,使得谷歌可以升级或替换一些核心组件,并在不发布新安卓版本的前提下添加 API。
有了 Play 服务,谷歌有了直接接触安卓手机核心部分的能力,而不用经过 OEM 更新以及运营商批准的过程。谷歌使用 Play 服务添加了全新的位置系统、恶意软件扫描、远程擦除功能,以及新的谷歌地图 API所有的这一切都不用通过发布一个系统更新实现。正如我们在姜饼部分的结尾提到的感谢 Play 服务里这些“可移植的”API 实现,姜饼仍然能够下载现代版本的 Play 商店和许多其他的谷歌应用。
另一个巨大的益处是安卓用户基础的兼容性。最新版本的安卓系统要经过很长时间到达大多数用户手中,这意味着最新版本系统绑定的 API 在大多数用户升级之前对开发者来说没有任何意义。Google Play 服务兼容冻酸奶及以上版本换句话说就是99%的活跃设备,并且更新可以直接通过 Play 商店直接推送到手机上。通过将 API 包含在 Google Play 服务中而不是安卓中,谷歌可以在一周内将新 API 推送到几乎所有用户手中。这对许多版本碎片化引起的问题来说是个[伟大的解决方案][3]。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron 是 Ars Technica 的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/21/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2012/04/unlocked-samsung-galaxy-nexus-can-now-be-purchased-from-google/
[2]:http://arstechnica.com/gadgets/2012/07/divine-intervention-googles-nexus-7-is-a-fantastic-200-tablet/
[3]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -1,487 +0,0 @@
Linux平台安全备忘录
================================================================================
这是一组Linux基金会自己系统管理员的推荐规范。所有Linux基金会的雇员都是远程工作我们使用这套指导方针确保系统管理员的系统通过核心安全需求降低我们平台成为攻击目标的风险。
即使你的系统管理员不用远程工作,很有可能的是,很多人的工作是在一个便携的笔记本上完成的,或者在业余时间或紧急时刻他们在工作平台中部署自己的家用系统。不论发生何种情况,你都能对应这个规范匹配到你的环境中。
这绝不是一个详细的“工作站加固”文档,可以说这是一个努力避免大多数明显安全错误导致太多不便的一组规范的底线。你可能阅读这个文档会认为它的方法太偏执,同时另一些人也许会认为这仅仅是一些肤浅的研究。安全就像在高速公路上开车 -- 任何比你开的慢的都是一个傻瓜,然而任何比你开的快的人都是疯子。这个指南仅仅是一些列核心安全规则,既不详细又不是替代经验,警惕,和常识。
每一节都分为两个部分:
- 核对适合你项目的需求
- 随意列出关心的项目,解释为什么这么决定
## 严重级别
在清单的每一个项目都包括严重级别,这些是我们希望能帮助指导你的决定:
- _(关键)_ 项目应该在考虑列表上被明确的重视。如果不采取措施,将会导致你的平台安全出现高风险。
- _(中等)_ 项目将改善你的安全形态,但不是很重要,尤其是如果他们太多的干涉你的工作流程。
- _(低等)_ 项目也许会改善整体安全性,但是在便利权衡下也许并不值得。
- _(可疑)_ 留作感觉会明显完善我们平台安全的项目,但是可能会需要大量的调整与操作系统交互的方式。
记住,这些只是参考。如果你觉得这些严重级别不能表达你的工程对安全承诺,正如你所见你应该调整他们为你合适的。
## 选择正确的硬件
我们禁止管理员使用一个特殊供应商或者一个特殊的型号,所以在选择工作系统时这部分是核心注意事项。
### 清单
- [ ] 系统支持安全启动 _(关键)_
- [ ] 系统没有火线,雷电或者扩展卡接口 _(中等)_
- [ ] 系统有TPM芯片 _(低)_
### 注意事项
#### 安全引导
尽管它是有争议的性质安全引导提供了对抗很多针对平台的攻击Rootkits, "Evil Maid,"等等),没有介绍太多额外的麻烦。它将不会停止真正专用的攻击者,加上有很大程度上,站点安全机构有办法应对它(可能通过设计),但是拥有安全引导总比什么都没有强。
作为选择,你也许部署了[Anti Evil Maid][1]提供更多健全的保护,对抗安全引导支持的攻击类型,但是它需要更多部署和维护的工作。
#### 系统没有火线,雷电或者扩展卡接口
火线是一个标准,故意的,允许任何连接设备完全直接内存访问你的系统([查看维基百科][2]。雷电接口和扩展卡同样有问题虽然一些后来部署的雷电接口试图限制内存访问的范围。如果你没有这些系统端口那是最好的但是它并不严重他们通常可以通过UEFI或内核本身禁用。
#### TPM芯片
可信平台模块TPM是主板上的一个与核心处理器单独分开的加密芯片他可以用来增加平台的安全性比如存储完整磁盘加密密钥不过通常不用在日常平台操作。最多这是个很好的存在除非你有特殊需要使用TPM增加你平台安全性。
## 预引导环境
这是你开始安装系统前的一系列推荐规范。
### 清单
- [ ] 使用UEFI引导模式不是传统BIOS_(关键)_
- [ ] 进入UEFI配置需要使用密码 _(关键)_
- [ ] 使用安全引导 _(关键)_
- [ ] 启动系统需要UEFI级别密码 _(低)_
### 注意事项
#### UEFI和安全引导
UEFI尽管有缺点还是提供很多传统BIOS没有的好功能比如安全引导。大多数现代的系统都默认使用UEFI模式。
UEFI配置模式密码要确保密码强度。注意很多厂商默默地限制了你使用密码长度所以对比长口令你也许应该选择高熵短密码更多地密码短语看下面
基于你选择的Linux分支你也许会也许不会跳过额外的圈子以导入你的发行版的安全引导键才允许你启动发行版。很多分支已经与微软合作大多数厂商给他们已发布的内核签订密钥这已经是大多数厂商公认的了因此为了避免问题你必须处理密钥导入。
作为一个额外的措施在允许某人得到引导分区然后尝试做一些不好的事之前让他们输入密码。为了防止肩窥这个密码应该跟你的UEFI管理密码不同。如果你关闭启动太多你也许该选择别把心思费在这上面当你已经进入LUKS密码这将为您节省一些额外的按键。
## 发行版选择注意事项
很有可能你会坚持一个广泛使用的发行版如FedoraUbuntuArchDebian或他们的一个类似分支。无论如何这是你选择使用发行版应该考虑的。
### 清单
- [ ] 拥有一个强健的MAC/RBAC系统SELinux/AppArmor/Grsecurity _(关键)_
- [ ] 公开的安全公告 _(关键)_
- [ ] 提供及时的安全补丁 _(关键)_
- [ ] 提供密码验证的包 _(关键)_
- [ ] 完全支持UEFI和安全引导 _(关键)_
- [ ] 拥有健壮的原生全磁盘加密支持 _(关键)_
### 注意事项
#### SELinuxAppArmor和GrSecurity/PaX
强制访问控制MAC或者基于角色的访问控制RBAC是一个POSIX系统遗留的基于用户或组的安全机制延伸。这些天大多数发行版已经绑定MAC/RBAC系统FedoraUbuntu或通过提供一种机制一个可选的安装后的步骤来添加它GentooArchDebian。很明显强烈建议您选择一个预装MAC/RBAC系统的分支但是如果你对一个分支情有独钟没有默认启用它装完系统后应计划配置安装它。
应该坚决避免使用不带任何MAC/RBAC机制的分支像传统的POSIX基于用户和组的安全在当今时代应该算是考虑不足。如果你想建立一个MAC/RBAC工作站通常会考虑AppArmor和PaX他们比SELinux更容易学习。此外在一个工作站上有很少或者没有额外的监听用户运行的应用造成的最高风险GrSecurity/PaX_可能_会比SELinux提供更多的安全效益。
#### 发行版安全公告
大多数广泛使用的分支都有一个机制发送安全公告到他们的用户,但是如果你对一些机密感兴趣,查看开发人员是否有记录机制提醒用户安全漏洞和补丁。缺乏这样的机制是一个重要的警告信号,这个分支不够成熟,不能被视为主要管理工作站。
#### 及时和可靠的安全更新
多数常用的发行版提供的定期安全更新,但为确保关键包更新及时提供是值得检查的。避免使用分支和"社区重建"的原因是,由于不得不等待上游分支先发布它,他们经常延迟安全更新。
你如果找到一个在安装包更新元数据或两者上不使用加密签名的发行版将会处于困境。这么说常用的发行版多年前就已经知道这个基本安全的意义Arch我正在看你所以这也是值得检查的。
#### 发行版支持UEFI和安全引导
检查发行版支持UEFI和安全引导。查明它是否需要导入额外的密钥或是否要求启动内核有一个已经被系统厂商信任的密钥签名例如跟微软达成合作。一些发行版不支持UEFI或安全启动但是提供了替代品来确保防篡改或防破坏引导环境[Qubes-OS][3]使用Anti Evil Maid前面提到的。如果一个发行版不支持安全引导和没有机制防止引导级别攻击还是看看别的吧。
#### 全磁盘加密
全磁盘加密是保护静止数据要求大多数发行版都支持。作为一个选择方案系统自加密硬件驱动也许用来通常通过主板TPM芯片实现和提供类似安全级别加更快的选项但是花费也更高。
## 发行版安装指南
所有发行版都是不同的,但是也有一些一般原则:
### 清单
- [ ] 使用健壮的密码全磁盘加密LUKS _(关键)_
- [ ] 确保交换分区也加密了 _(关键)_
- [ ] 确保引导程序设置了密码可以和LUKS一样 _(关键)_
- [ ] 设置健壮的root密码可以和LUKS一样 _(关键)_
- [ ] 使用无特权账户登录,管理员组的一部分 _(关键)_
- [ ] 设置强壮的用户登录密码不同于root密码 _(关键)_
### 注意事项
#### 全磁盘加密
除非你正在使用自加密硬件设备,配置你的安装程序,给用来存储你的数据与系统文件的磁盘做完整加密是很重要的。通过自动挂载的 cryptfs 环回文件只加密用户目录是不够的我正在看你老版Ubuntu这并没有给系统二进制文件或交换分区提供保护它们可能包含大量的敏感数据。推荐的加密策略是加密 LVM 设备,这样在启动过程中只需要一个密码。
`/boot`分区将一直保持非加密当引导程序需要引导内核前调用LUKS/dm-crypt。内核映像本身应该用安全引导加密签名检查防止被篡改。
换句话说,`/boot`应该是你系统上唯一没有加密的分区。
#### 选择好密码
现代的Linux系统没有限制密码口令长度所以唯一的限制是你的偏执和倔强。如果你要启动你的系统你将大概至少要输入两个不同的密码一个解锁LUKS另一个登陆所以长密码将会使你老的很快。最好从丰富或混合的词汇中选择2-3个单词长度容易输入的密码。
优秀密码例子(是的,你可以使用空格):
- nature abhors roombas
- 12 in-flight Jebediahs
- perdon, tengo flatulence
如果你更喜欢输入口令句你也可以坚持使用无词汇密码但最少要10-12个字符长度。
除非你有人身安全的担忧,写下你的密码,并保存在一个远离你办公桌的安全的地方才合适。
#### Root用户密码和管理组
我们建议你的root密码和你的LUKS加密使用同样的密码除非你共享你的笔记本给可信的人他应该能解锁设备但是不应该能成为root用户。如果你是笔记本电脑的唯一用户,那么你的root密码与你的LUKS密码不同是没有意义的安全优势。通常你可以使用同样的密码在你的UEFI管理磁盘加密和root登陆 -- 知道这些任意一个都会让攻击者完全控制您的系统,在单用户工作站上使这些密码不同,没有任何安全益处。
你应该有一个不同的,但同样强健的常规用户帐户密码用来每天工作。这个用户应该是管理组用户(例如`wheel`或者类似,根据分支),允许你执行`sudo`来提升权限。
换句话说,如果在你的工作站只有你一个用户,你应该有两个独特的,强健的,同样的强壮的密码需要记住:
**管理级别**,用在以下区域:
- UEFI管理
- 引导程序GRUB
- 磁盘加密LUKS
- 工作站管理root用户
**用户级别**,用在以下:
- 用户登陆和sudo
- 密码管理器的主密码
很明显,如果有一个令人信服的理由他们所有可以不同。
## 安装后的加强
安装后的安全性加强在很大程度上取决于你选择的分支,所以在一个通用的文档中提供详细说明是徒劳的,例如这一个。然而,这里有一些你应该采取的步骤:
### 清单
- [ ] 在全体范围内禁用火线和雷电模块 _(关键)_
- [ ] 检查你的防火墙,确保过滤所有传入端口 _(关键)_
- [ ] 确保root邮件转发到一个你可以查看到的账户 _(关键)_
- [ ] 检查以确保sshd服务默认情况下是禁用的 _(中等)_
- [ ] 建立一个系统自动更新任务,或更新提醒 _(中等)_
- [ ] 配置屏幕保护程序在一段时间的不活动后自动锁定 _(中等)_
- [ ] 建立日志监控 _(中等)_
- [ ] 安装使用rkhunter _(低等)_
- [ ] 安装一个入侵检测系统 _(偏执)_
### 注意事项
#### 黑名单模块
将火线和雷电模块列入黑名单,增加一行到`/etc/modprobe.d/blacklist-dma.conf`文件:
blacklist firewire-core
blacklist thunderbolt
重启后的模块将被列入黑名单。这样做是无害的,即使你没有这些端口(但也不做任何事)。
#### Root邮件
默认的root邮件只是存储在系统基本上没人读过。确保你设置了你的`/etc/aliases`来转发root邮件到你确实能读取的邮箱否则你也许错过了重要的系统通知和报告
# Person who should get root's mail
root: bob@example.com
编辑后这些后运行`newaliases`,然后测试它确保已投递,像一些邮件供应商将拒绝从没有或者不可达的域名的邮件。如果是这个原因,你需要配置邮件转发直到确实可用。
#### 防火墙sshd和监听进程
默认的防火墙设置将取决于您的发行版,但是大多数都允许`sshd`端口连入。除非你有一个令人信服的合理理由允许连入ssh你应该过滤出来,禁用sshd守护进程。
systemctl disable sshd.service
systemctl stop sshd.service
如果你需要使用它,你也可以临时启动它。
通常你的系统不应该有任何侦听端口除了响应ping。这将有助于你对抗网络级别的零日漏洞利用。
#### 自动更新或通知
建议打开自动更新,除非你有一个非常好的理由不这么做,如担心自动更新将使您的系统无法使用(这是发生在过去,所以这种恐惧并非杞人忧天)。至少,你应该启用自动通知可用的更新。大多数发行版已经有这个服务自动运行,所以你不需要做任何事。查阅你的发行版文档查看更多。
你应该尽快应用所有明显的勘误即使这些不是特别贴上“安全更新”或有关联的CVE代码。所有错误都潜在的安全漏洞和新的错误比起坚持旧的已知的错误未知错误通常是更安全的策略。
#### 监控日志
你应该对你的系统上发生了什么很感兴趣。出于这个原因,你应该安装`logwatch`然后配置它每夜发送在你的系统上发生的任何事情的活动报告。这不会预防一个专业的攻击者,但是一个好安全网功能。
注意,许多systemd发行版将不再自动安装一个“logwatch”需要的syslog服务由于systemd依靠自己的分类所以你需要安装和启用“rsyslog”来确保使用logwatch之前你的/var/log不是空。
#### Rkhunter和IDS
安装`rkhunter`和一个入侵检测系统IDS像`aide`或者`tripwire`将不会有用,除非你确实理解他们如何工作采取必要的步骤来设置正确(例如,保证数据库在额外的媒介,从可信的环境运行检测,记住执行系统更新和配置更改后要刷新数据库散列,等等)。如果你不愿在你的工作站执行这些步骤调整你如何工作,这些工具将带来麻烦没有任何实在的安全益处。
我们强烈建议你安装`rkhunter`并每晚运行它。它相当易于学习和使用,虽然它不会阻止一个复杂的攻击者,它也能帮助你捕获你自己的错误。
## 个人工作站备份
工作站备份往往被忽视,或无计划的做,常常是不安全的方式。
### 清单
- [ ] 设置加密备份工作站到外部存储 _(关键)_
- [ ] 使用零认知云备份的备份工具 _(中等)_
### 注意事项
#### 全加密备份存到外部存储
把全部备份放到一个移动磁盘中比较方便,不用担心带宽和流速(在这个时代,大多数供应商仍然提供显著不对称的上传/下载速度。不用说这个移动硬盘本身需要加密又一次通过LUKS或者你应该使用一个备份工具建立加密备份例如`duplicity`或者它的GUI版本`deja-dup`。我建议使用后者并使用随机生成的密码,保存到你的密码管理器中。如果你带上笔记本去旅行,把这个磁盘留在家,以防你的笔记本丢失或被窃时可以找回备份。
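一个简单的示意(假设外部磁盘挂载在 /media/backup下面的 GPG 密钥 ID 为示例值):
    # 用指定的 GPG 密钥加密备份家目录到外部磁盘
    duplicity --encrypt-key 0xABCD1234 $HOME file:///media/backup/home
    # 校验备份与本地内容是否一致
    duplicity verify --encrypt-key 0xABCD1234 file:///media/backup/home $HOME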
除了你的家目录外,你还应该备份`/etc`目录和处于鉴定目的的`/var/log`目录。
首先,要避免把你的家目录拷贝到任何非加密存储上,即使只是想在两个系统间快速移动文件也不行,因为一旦完成你肯定会忘了清除它,从而把个人隐私或者安全信息暴露给监听者,尤其是当这个存储介质和你的笔记本放在同一个包里的时候。
#### 零认知站外备份选择性
站外备份也是相当重要的是否可以做到要么需要你的老板提供空间要么找一家云服务商。你可以建一个单独的duplicity/deja-dup配置只包括重要的文件以免传输大量你不想备份的数据网络缓存音乐下载等等
作为选择,你可以使用零认知备份工具,例如[SpiderOak][5]它提供一个卓越的Linux GUI工具还有实用的特性例如在多个系统或平台间同步内容。
## 最佳实践
下面是我们认为你应该采用的最佳实践列表。它当然不是非常详细的,而是试图提供实用的建议,一个可行的整体安全性和可用性之间的平衡
### 浏览
毫无疑问在你的系统上web浏览器将是最大、最容易暴露的攻击层面的软件。它是专门下载和执行不可信恶意代码的一个工具。它试图采用沙箱和代码卫生处理等多种机制保护你免受这种危险但是在之前多个场合他们都被击败了。你应该学到浏览网站是最不安全的活动在你参与的任何一天。
有几种方法可以减少浏览器的影响,但真正有效的方法需要你操作您的工作站将发生显著的变化。
#### 1: 实用两个不同的浏览器
这很容易做到,但是只有很少的安全效益。并不是所有浏览器被攻破都会让攻击者完全自由地访问你的系统,有时攻击者只能读取本地浏览器存储、窃取其他标签页的活动会话、捕获浏览器中的输入等等。使用两个不同的浏览器,一个用在工作和高安全站点,另一个用在其他用途,有助于防止攻击者通过一个小的漏洞拿到你的整个“饼干罐”。主要的不便是两个不同的浏览器会消耗大量内存。
我们建议:
##### 火狐用来工作和高安全站点
使用火狐登陆工作有关的站点应该额外关心的是确保数据如cookies会话登陆信息打键次数等等明显不应该落入攻击者手中。除了少数的几个网站你不应该用这个浏览器访问其他网站。
你应该安装下面的火狐扩展:
- [ ] NoScript _(关键)_
- NoScript阻止活动内容加载除非在用户白名单里的域名。跟你默认浏览器比它使用起来很麻烦可是提供了真正好的安全效益所以我们建议只在开启了它的浏览器上访问与工作相关的网站。
- [ ] Privacy Badger _(关键)_
- EFF的Privacy Badger将在加载时预防大多数外部追踪器和广告平台在这些追踪站点影响你的浏览器时将有助于避免妥协追踪着和广告站点通常会成为攻击者的目标因为他们会迅速影响世界各地成千上万的系统
- [ ] HTTPS Everywhere _(关键)_
- 这个EFF开发的扩展将确保你访问的大多数站点都在安全连接上甚至你点击的连接使用的是http://(有效的避免大多数的攻击,例如[SSL-strip][7])。
- [ ] Certificate Patrol _(中等)_
- 如果你正在访问的站点最近改变了他们的TLS证书 -- 特别是如果不是接近失效期或者现在使用不同的证书颁发机构,这个工具将会警告你。它有助于警告你是否有人正尝试中间人攻击你的连接,但是产生很多无害的假的类似情况。
你应该让火狐成为你的默认打开连接的浏览器因为NoScript将在加载或者执行时阻止大多数活动内容。
##### 其他一切都用Chrome/Chromium
Chromium开发者在增加很多很好的安全特性方面比火狐强至少[在Linux上][6])例如seccomp沙箱内核用户名空间等等这担当一个你访问网站和你其他系统间额外的隔离层。Chromium是流开源项目Chrome是Google所有的基于它构建的包使用它输入时要非常谨慎任何你不想让谷歌知道的事情都不要使用它
有人推荐你在Chrome上也安装**Privacy Badger**和**HTTPS Everywhere**扩展,然后给他一个不同的主题,从火狐指出这是你浏览器“不信任的站点”。
#### 2: 使用两个不同浏览器,一个在专用的虚拟机里
这有点像上面建议的做法,除了你将添加一个额外的步骤:在一个专用虚拟机里运行 Chrome并通过快速访问协议Spice 或 RDP允许共享剪贴板和转发声音事件。这将在不可信的浏览器和你其他的工作环境之间添加一个优秀的隔离层确保攻击者即使完全攻破了你的浏览器也不得不另外打破 VM 隔离层,才能达到系统的其余部分。
这是一个出奇可行的结构,但是需要大量的RAM和高速处理器可以处理增加的负载。这还需要一个重要的奉献的管理员需要相应地调整自己的工作实践。
#### 3: 通过虚拟化完全隔离你的工作和娱乐环境
看[Qubes-OS项目][3]它致力于通过划分你的应用到完全独立分开的VM中提供高安全工作环境。
### 密码管理器
#### 清单
- [ ] 使用密码管理器 _(关键)_
- [ ] 不相关的站点使用不同的密码 _(关键)_
- [ ] 使用支持团队共享的密码管理器 _(中等)_
- [ ] 给非网站用户使用一个单独的密码管理器 _(偏执)_
#### 注意事项
使用好的,唯一的密码对你的团队成员来说应该是非常关键的需求。证书盗取一直在发生 — 要么通过中间计算机,盗取数据库备份,远程站点利用,要么任何其他的打算。证书从不应该通过站点被重用,尤其是关键的应用。
##### 浏览器中的密码管理器
每个浏览器都有一个比较安全的保存密码的机制,可以通过供应商的机制同步到云存储,同时用用户提供的密码保证数据加密。无论如何,这个机制有严重的劣势:
1. 不能跨浏览器工作
2. 不提供任何与团队成员共享凭证的方法
也有一些良好的支持,免费或便宜的密码管理器,很好的融合到多个浏览器,跨平台工作,提供小组共享(通常是支付服务)。可以很容易地通过搜索引擎找到解决方案。
##### 独立的密码管理器
任何密码管理器都有一个主要的缺点,与浏览器结合,事实上是应用的一部分,这样最有可能被入侵者攻击。如果这让你不舒服(应该这样),你应该选择两个不同的密码管理器 -- 一个集成在浏览器中用来保存网站密码一个作为独立运行的应用。后者可用于存储高风险凭证如root密码数据库密码其他shell账户凭证等。
有这样的工具,可以特别有效地在团队成员间共享超级用户的凭据服务器root 密码ILO 密码,数据库管理密码,引导装载程序密码,等等)。
这几个工具可以帮助你:
- [KeePassX][8]2版中改善了团队共享
- [Pass][9]它使用了文本文件和PGP并与git结合
- [Django-Pstore][10]他是用GPG在管理员之间共享凭据
- [Hiera-Eyaml][11]如果你已经在你的平台中使用了Puppet可以便捷的追踪你的服务器/服务凭证像你的Hiera加密数据的一部分。
### 加固SSH和PGP私钥
个人加密密钥包括SSH和PGP私钥都是你工作站中最重要的物品 -- 攻击将在获取到感兴趣的东西,这将允许他们进一步攻击你的平台或冒充你为其他管理员。你应该采取额外的步骤,确保你的私钥免遭盗窃。
#### 清单
- [ ] 强壮的密码用来保护私钥 _(关键)_
- [ ] PGP的主密码保存在移动存储中 _(中等)_
- [ ] 身份验证、签名和加密注册表子项存储在智能卡设备 _(中等)_
- [ ] SSH配置为使用PGP认证密钥作为ssh私钥 _(中等)_
#### 注意事项
防止私钥被偷的最好方式是使用一个智能卡存储你的加密私钥不要拷贝到工作平台上。有几个厂商提供支持OpenPGP的设备
- [Kernel Concepts][12]在这里可以采购支持OpenPGP的智能卡和USB读取器你应该需要一个。
- [Yubikey NEO][13]这里提供OpenPGP功能的智能卡还提供很多很酷的特性U2F, PIV, HOTP等等
确保PGP主密码没有存储在工作平台也很重要只有子密码在使用。主密钥只有在登陆其他的密钥和创建子密钥时使用 — 不经常发生这种操作。你可以照着[Debian的子密钥][14]向导来学习如何移动你的主密钥到移动存储和创建子密钥。
你应该配置你的 gnupg 代理作为 ssh 代理,然后使用智能卡上的 PGP 认证密钥作为你的 ssh 私钥。我们公布了一篇详细指南,介绍如何使用智能卡读取器或 Yubikey NEO 来做到这一点。
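作为参考,在较新的 GnuPG2.1 及以上)上,大致的配置如下(旧版本的做法可能不同,请以你的发行版文档和上面提到的指南为准):
    echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
    # 让 ssh 使用 gpg-agent 提供的套接字(可写入 ~/.bashrc
    export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
    ssh-add -L    # 应当能列出来自智能卡的认证密钥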
如果你不想那么麻烦最少要确保你的PGP私钥和你的SSH私钥有个强健的密码这将让攻击者很难盗取使用它们。
### 工作站上的SELinux
如果你使用的发行版绑定了SELinux如Fedora这有些如何使用它的建议让你的工作站达到最大限度的安全。
#### 清单
- [ ] 确保你的工作站强制使用SELinux _(关键)_
- [ ] 不要盲目的执行`audit2allow -M`,经常检查 _(关键)_
- [ ] 从不 `setenforce 0` _(中等)_
- [ ] 切换你的用户到SELinux用户`staff_u` _(中等)_
#### 注意事项
SELinux是一个强制访问控制MAC为POSIX许可核心功能扩展。它是成熟强健自从它推出以来已经有很长的路了。不管怎样许多系统管理员现在重复过时的口头禅“关掉它就行。”
话虽如此在工作站上SELinux还是限制了安全效益像很多应用都要作为一个用户自由的运行。开启它有益于给网络提供足够的保护有可能有助于防止攻击者通过脆弱的后台服务提升到root级别的权限用户。
我们的建议是开启它并强制使用。
##### 从不`setenforce 0`
使用`setenforce 0`短时间内把SELinux设置为许可模式但是你应该避免这样做。其实你是想查找一个特定应用或者程序的问题实际上这样是把全部系统的SELinux关闭了。
你应该使用`semanage permissive -a [somedomain_t]`替换`setenforce 0`,只把这个程序放入许可模式。首先运行`ausearch`查看是哪个程序发生了问题:
ausearch -ts recent -m avc
然后看下`scontext=`SELinux的上下文像这样
scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023
^^^^^^^^^^^^^^
这告诉你程序`gpg_pinentry_t`被拒绝了,所以你想查看应用的故障,应该增加它到许可模式:
semanage permissive -a gpg_pinentry_t
这将允许你使用应用然后收集AVC的其他部分你可以连同`audit2allow`写一个本地策略。一旦完成你就不会看到新的AVC的拒绝你可以从许可中删除程序运行
semanage permissive -d gpg_pinentry_t
##### 用SELinux的用户staff_r使用你的工作站
SELinux附带的本地角色实现基于角色的用户帐户禁止或授予某些特权。作为一个管理员你应该使用`staff_r`角色,这可以限制访问很多配置和其他安全敏感文件,除非你先执行`sudo`。
默认,用户作为`unconfined_r`被创建你可以运行大多数应用没有任何或只有一点SELinux约束。转换你的用户到`staff_r`角色,运行下面的命令:
usermod -Z staff_u [username]
你应该退出然后登陆激活新角色,届时如果你运行`id -Z`,你将会看到:
staff_u:staff_r:staff_t:s0-s0:c0.c1023
在执行`sudo`时你应该记住增加一个额外的标准告诉SELinux转换到"sysadmin"角色。你想要的命令是:
sudo -i -r sysadm_r
届时`id -Z`将会显示:
staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023
**警告**:在进行这个切换前你应该舒服的使用`ausearch`和`audit2allow`,当你作为`staff_r`角色运行时你的应用有可能不再工作了。写到这里时,以下流行的应用已知在`staff_r`下没有做策略调整就不会工作:
- Chrome/Chromium
- Skype
- VirtualBox
切换回`unconfined_r`,运行下面的命令:
usermod -Z unconfined_u [username]
然后注销再重新回到舒服的区域。
## 延伸阅读
IT安全的世界是一个没有底的兔子洞。如果你想深入或者找到你的具体发行版更多的安全特性请查看下面这些链接
- [Fedora Security Guide](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html)
- [CESG Ubuntu Security Guide](https://www.gov.uk/government/publications/end-user-devices-security-guidance-ubuntu-1404-lts)
- [Debian Security Manual](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html)
- [Arch Linux Security Wiki](https://wiki.archlinux.org/index.php/Security)
- [Mac OSX Security](https://www.apple.com/support/security/guides/)
## 许可
这项工作在[创作共用授权4.0国际许可证][0]许可下。
--------------------------------------------------------------------------------
via: https://github.com/lfit/itpol/blob/master/linux-workstation-security.md#linux-workstation-security-list
作者:[mricon][a]
译者:[wyangsun](https://github.com/wyangsun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://github.com/mricon
[0]: http://creativecommons.org/licenses/by-sa/4.0/
[1]: https://github.com/QubesOS/qubes-antievilmaid
[2]: https://en.wikipedia.org/wiki/IEEE_1394#Security_issues
[3]: https://qubes-os.org/
[4]: https://xkcd.com/936/
[5]: https://spideroak.com/
[6]: https://code.google.com/p/chromium/wiki/LinuxSandboxing
[7]: http://www.thoughtcrime.org/software/sslstrip/
[8]: https://keepassx.org/
[9]: http://www.passwordstore.org/
[10]: https://pypi.python.org/pypi/django-pstore
[11]: https://github.com/TomPoulton/hiera-eyaml
[12]: http://shop.kernelconcepts.de/
[13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/
[14]: https://wiki.debian.org/Subkeys
[15]: https://github.com/lfit/ssh-gpg-smartcard-config

View File

@ -0,0 +1,202 @@
一个涵盖 Unix 44 年进化史的仓库
=============================================================================
### 摘要 ###
Unix 操作系统的进化历史,可以从一个版本控制仓库中窥见:时间从 1972 年仅有 5000 行内核代码的初版,一直跨越到 2015 年这个含有 26,000,000 行代码、被广泛使用的系统。该仓库包含 659,000 条提交和 2306 次合并。仓库使用被普遍采用的 Git 版本控制系统储存其代码,并且在时下流行的 GitHub 上建立了档案。它综合了 24 个经过整理的系统快照(分别开发自贝尔实验室、伯克利大学和 386BSD 团队)、两个传统仓库,以及开源 FreeBSD 系统的现代仓库。总的来说,已经确认了 850 位个人贡献者,其中更早期的一批人主要从事基础研究。这些数据可以用于软件工程、信息系统和软件考古学领域的经验性研究。
### 1 介绍 ###
Unix 操作系统作为一个重要的工程突破而脱颖而出,这得益于其典范性的设计、大量的技术贡献、它的开发模型和广泛的使用。Unix 编程环境的设计已经被标榜为一种简洁、强大而优雅的设计[[1][1]]。在技术方面,许多对 Unix 有直接贡献、或者因 Unix 而流行的特性包括[[2][2]]用高级语言编写的可移植的内核分层式设计的文件系统兼容的文件、设备、网络和进程间 I/O管道和过滤器架构虚拟文件系统以及用户可选的 shell。很早的时候就有一个庞大的社区为 Unix 贡献软件[[3][3]][[4][4], pp. 65-72]。随着时间流逝,这个社区不断壮大,并且以现在称为开源软件开发的方式工作着[[5][5], pp. 440-442]。Unix 和它的衍生系统也将 C 和 C++ 编程语言、语法分析器和词法分析器生成器(*yacc**lex*)、文档编制工具(*troff**eqn**tbl*)、脚本语言(*awk**sed**Perl*TCP/IP 网络,以及配置管理系统(*SCCS**RCS**Subversion**Git*)发扬光大,同时也构成了大部分现代互联网基础设施和网络。
幸运的是,一些重要的具有历史意义的 Unix 材料已经保存下来,并且现在对外开放。尽管 Unix 最初是以相对严格的许可证发行的,但其早期开发中的很多重要部分,已经由其版权拥有者之一以自由的许可证发布。将这些部分与加州大学伯克利分校和 FreeBSD 项目组以开源方式开发或发布的软件结合起来,就可以覆盖从 1972 年 6 月 20 日至今的整个系统开发历程。
对现有快照以及新旧配置管理仓库进行整理和加工,可以重建一个新的综合 Git 仓库,把大部分可用数据汇集到一起。这个仓库以数字形式记录了一件重要数字制品在 44 年间的详细进化过程。后面的章节描述了仓库的结构和内容(第 [2][6] 节)、它的创建方式(第 [3][7] 节以及如何使用它第 [4][8] 节)。
### 2 数据概览 ###
这 1GB 的 Unix 仓库可以从 [GitHub][9] 克隆。[1][10]如今[2][11],这个仓库包含来自 850 个贡献者的 659,000 个提交和 2306 个合并。贡献者有来自 Bell 实验室的 23 个员工Berkeley 计算机系统研究组CSRG的 158 个人,和 FreeBSD 项目的 660 个成员。
这个仓库的生命始于一个 *Epoch* 标签,这里面只包含了许可证信息和现在的 README 文件。其后各种各样的标签和分支记录了很多重要的时刻。
- *Research-VX* 标签对应来自 Bell 实验室的六个研究版本。从 *Research-V1*PDP-11 上的 4768 行汇编代码)开始,到以 *Research-V7*(大约 324,000 行代码1820 个 C 文件)结束。
- *Bell-32V* 对应第七版 Unix 向 DEC/VAX 架构的移植。
- *BSD-X* 标签对应 Berkeley 释出的 15 个快照。
- *386BSD-X* 标签对应系统的两个开源版本,主要是 Lynne 和 William Jolitz 写的 适用于 Intel 386 架构的内核代码。
- *FreeBSD-release/X* 标签和分支标记了来自 FreeBSD 项目的 116 个发行版。
另外,以 *-Snapshot-Development* 为后缀的分支,表示从按时间排序的快照文件序列综合而成的提交;而以 *-VCS-Development* 为后缀的标签,则标记了在导入的版本控制历史分支上,某个特定发行版出现的时间点。
仓库的历史包含从系统开发早期的一些提交,比如下面这些。
commit c9f643f59434f14f774d61ee3856972b8c3905b1
Author: Dennis Ritchie <research!dmr>
Date: Mon Dec 2 18:18:02 1974 -0500
Research V5 development
Work on file usr/sys/dmr/kl.c
在系统进化过程中发生的发行版之间的合并,比如 BSD 3 是由 BSD 2 和 Unix 32/V 共同发展而来,也在 Git 仓库里被正确地表示为带有两个父节点的图节点。
更为重要的是,该仓库的构造方式允许 **git blame**,也就是可以给源代码行加上注释,标注它们第一次出现时关联的版本、日期和作者,这样就可以知道任何代码的起源。比如说,检出 **BSD-4** 这个标签,并在内核的 *pipe.c* 文件上运行一下 git blame就会显示出这些代码行分别由 Ken Thompson 写于 1974、1975 和 1979 年,由 Bill Joy 写于 1980 年。这就可以自动(尽管计算上比较费事)检测出任何时刻任何代码的起源。
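例如,可以用类似下面的命令在本地复现这种代码溯源(标签名和文件路径请以仓库中的实际布局为准,此处仅作示意):

git clone https://github.com/dspinellis/unix-history-repo
cd unix-history-repo
git checkout BSD-4
git blame usr/sys/sys/pipe.c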
![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/provenance.png)
图 1: 重要 Unix 发行版之间的代码起源。
如上图[12]所示现代版本的 UnixFreeBSD 9依然有来自 BSD 4.3、BSD 4.3 Net/2 和 BSD 2.0 的代码块。有趣的是,图中显示,当年为了从 Berkeley 释出的代码386BSD 和 FreeBSD 1.0)中狂热地打造一个开源操作系统而开发的那部分代码,似乎并没有保留下来。而 FreeBSD 9 中最古老的代码,是 C 库里 timezone.c 文件中一段 18 行的代码序列,这段代码也可以在第七版 Unix 的同名文件中找到,时间戳是 1979 年 1 月 10 日,也就是 36 年前。
### 3 数据收集和处理 ###
这个项目的目的,是把能说明 Unix 进化的数据整合起来,将其并入一个现代的版本仓库,以便利对系统进化的研究。项目工作包括收集数据、对其进行整理,并把它们综合到一个单独的 Git 仓库里。
![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/branches.png)
图 2: 导入的 Unix 快照、仓库及它们之间的合并。
项目以三种数据类型为基础(见图 [2][13])。首先,是早期发布的版本快照,它们获取自 [Unix 遗产协会的档案][14][3][15]、包含完整 CSRG 源码发布的 [CD-ROM 镜像][16][4][17]、[Oldlinux 站点][18][5][19],以及 [FreeBSD 档案][20][6][21]。其次,是过去和现在的仓库,即 CSRG 的 SCCS [[6][22]] 仓库、FreeBSD 1 的 CVS 仓库,以及[现代 FreeBSD 开发的 Git 镜像][23][7][24]。前两者是以对应仓库快照的形式获得的。
最后,也是最费力的数据来源是**原始研究primary research**。释出的快照并没有提供关于它们的源头和每个文件贡献者的信息。因此,这些信息需要通过原始研究来确认。至于作者信息,主要通过作者的自传、研究论文、内部备忘录和旧文档扫描件来获取;通过阅读并自动处理源代码和帮助页面来补充;通过与那个年代的人用电子邮件交流,以及在 *StackExchange* 网站上发帖提问来确认;还可以查看文件的位置(在早期的内核源代码版本中,分为 `usr/sys/dmr` 和 `/usr/sys/ken`);并将作者信息从研究论文和帮助手册传递到源代码,再从一个发行版传递到另一个发行版。(有趣的是,第一版和第二版研究版的帮助页面都有一个 “owner” 部分,列出了系统命令、文件、系统调用或功能库对应的作者(比如,*Ken*)。在第四版中这个部分消失了,而在 BSD 发行版中又以 “Author” 部分的形式重新出现。)关于作者信息,更为详细的内容写在了项目的文档中,这些文档被用于匹配源代码文件与它们的作者以及对应的提交信息。最后,关于源代码库之间合并的信息,是从 [NetBSD 项目维护的一份 BSD 家族树][25]中获得的。[8][26]
作为该项目的一部分而开发的软件和数据文件,现在可以[在线获取][27][9][28]并且如果有合适的网络环境、CPU 和磁盘资源,可以用来从头重建这样一个仓库。关于主要发行版的作者信息,都存储在该项目 `author-path` 目录下的文件里。这些文件中的每一行,都包含一个匹配文件路径的正则表达式,后面跟着对应作者的标识符;也可以指定多个作者。正则表达式是按顺序处理的,所以文件末尾的一个匹配一切的表达式,可以指定一个发行版的默认作者。为避免重复,一个以 `.au` 为后缀的独立文件专门用于把作者标识符映射到他们的名字和 email。为参与系统进化的每个社区Bell 实验室、Berkeley、386BSD 和 FreeBSD都建立了这样一个文件。为了真实性起见早期 Bell 实验室发行版的 email 都以 UUCP 记法列出(例如 `research!ken`)。FreeBSD 作者的标识符映射,是在导入早期 CVS 仓库时需要的,它通过从如今项目的 Git 仓库里提取对应数据构建而成。总的来说,注明作者信息的文件包含 1107 行828 条规则),另外还有 640 行把作者标识符映射到名字。
现在项目的数据源被编码成了一个 168 行的 `Makefile`。它包括下面的步骤。
**Fetching** 从远程站点复制和克隆大约 11GB 的镜像,档案和仓库。
**Tooling** 从 2.9 BSD 中为旧的 PDP-11 档案获取一个归档器,并作出调整来在现代的 Unix 版本下编译;编译 4.3 BSD *compress* 程序来解压 386BSD 发行版,这个程序不再是现代 Unix 系统的组成部分了。
**Organizing** 用 tar 和 *cpio* 解压缩包;结合第六版的三个目录;用旧的 PDP-11 归档器解压所有的 1 BSD 档案;挂载 CD-ROM 镜像,以便将其作为文件系统处理;把 386BSD 的 8 张和 62 张散装软盘镜像分别组合成两个独立的文件。
**Cleaning** 恢复第一版的内核源代码文件,它们是通过光学字符识别从打印输出中获取的;给第七版的源代码文件打补丁;移除发行之后才被添加进来的元数据和其他文件,以避免得到错误的时间戳信息;修复损坏的 SCCS 文件;处理早期的 FreeBSD CVS 仓库:用一个定制的 Perl 脚本移除被赋给多个版本的 CVS 符号、删除 CVS *Attr* 文件,并用 *cvs2svn* 将 CVS 仓库转换为 Git 仓库。
在仓库的表述中有一个很有意思的部分:如何导入那些快照,并以一种方式把它们联系起来,使得 *git blame* 可以发挥它的魔力。快照导入仓库时,是按照每个文件的时间戳,作为一系列提交来实现的。当所有文件导入后,就用对应发行版的名字打上标签。在这个时候,本可以删除那些文件,再开始导入下一个快照。但要注意,*git blame* 命令是通过回溯仓库的历史来工作的,并使用启发法来检测文件之间或文件内部的代码移动和复制。因此,直接删除文件会在快照之间造成断裂,使它们之间的代码无法被追踪。
相反,在下一个快照导入之前,之前快照的所有文件都被移动到了一个隐藏的后备目录里,叫做 `.ref`(引用)。它们保存在那,直到下个快照的所有文件都被导入了,这时候它们就会被删掉。因为 `.ref` 目录下的每个文件都完全配对一个原始文件,*git blame* 可以知道多少源代码通过 `.ref` 文件从一个版本移到了下一个,而不用显示出 `.ref` 文件。为了更进一步帮助检测代码起源,同时增加表述的真实性,每个发行版都被表述成了一个合并,介于有增加文件的分支(*-Development*)与之前发行版之间的合并。
在上世纪 80 年代,只有 Berkeley 开发文件的一个子集是用 SCCS 进行版本控制的。在这段时期内,我们统一的仓库里同时包含了来自 SCCS 的提交和从快照导入的文件。在每个发行版的时间点上,与 SCCS 最近提交对应的提交,会被标记成与该发行版导入分支的一次合并。这些合并可以在图 [2][29] 的中部看到。
将各种数据资源综合到一个仓库的工作,主要是用两个脚本来完成的。一个 780 行的 Perl 脚本(`import-dir.pl`可以从一个单独的数据源快照目录、SCCS 仓库或者 Git 仓库)中,以 *Git fast export* 格式导出真实的或者综合的提交历史。输出是一种简单的文本格式Git 工具用这个格式来导入和导出提交。除此之外,这个脚本还接受一些参数,用于指定文件到贡献者的映射、贡献者登录名和他们全名之间的映射、要与哪些导入的提交合并、哪些文件要处理、哪些文件要忽略,以及“引用”文件的处理方式。一个 450 行的 Shell 脚本负责创建 Git 仓库,并以适当的参数调用这个 Perl 脚本,导入 27 个可用的历史数据源。Shell 脚本还会运行 30 项测试,把特定标签处的仓库与对应的数据源做比较,核实 `.ref` 后备目录的出现与消失,并检查分支、树和合并的数量,以及 *git blame* 和 *git log* 输出中是否出现退化。最后,调用 *git* 作垃圾收集并压缩仓库,使其从最初的 6GB 降到发布时的 1GB。
### 4 数据使用 ###
这些数据可以用于软件工程、信息系统和软件考古学领域的经验性研究。鉴于它不间断而独一无二地跨越了超过 40 年,可以供软件进化和代际更迭的研究参考。伴随那个时代以来处理速度千倍的增长和存储容量百万倍的扩大,这些数据同样可以用于研究软件和硬件技术的交叉进化。软件开发从研究中心到大学、再到开源社区的转移,可以用来研究组织文化对软件开发的影响。这个仓库还可以用于研究著名开发者的编程影响力,比如 Turing 奖获得者Dennis Ritchie 和 Ken Thompson和 IT 产业的大佬Bill Joy 和 Eric Schmidt。另一个值得研究的现象是代码的寿命无论是在单行的层面还是作为随 Unix 发行的完整系统Ingres、Lisp、Pascal、Ratfor、Snobol、TMG以及导致代码存活或消亡的因素。最后因为这些数据把 Git 底层的仓库存储技术推到了极限,它们还能推动版本管理系统领域的工程进展。
![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/metrics.png)
图 3: 代码风格随 Unix 发行版的演变。
图 [3][30] 根据 36 个主要 Unix 发行版,描绘了一些有趣的代码统计,展示了代码风格和编程语言的使用在很长的时间尺度上的进化。这种进化由软硬件技术的需求和支持、软件构筑理论,甚至社会力量所驱动。图中每个数据点的日期,按照对应发行版中出现的文件的平均时间计算。正如图中所示,在过去的 40 年中,标识符和文件名的长度已经从 4 至 6 个字符稳定增长到 7 至 11 个字符。我们也可以看到注释数量少量而稳定的增加,以及 *goto* 语句使用量的减少,同时 *register* 这个类型修饰符也消失了。
### 5 未来的工作 ###
可以做很多事情来提高仓库的正确性和有效性。由于创建过程以源代码形式开源了,通过 GitHub 的拉取请求,可以很容易地贡献更多代码和修复。最有用的社区贡献,是扩大导入快照中被归属到具体作者的文件的覆盖面。现在,大约 90,000 个文件(总量约 160,000 个)是根据默认规则指定作者的。类似地,大约有 250 个作者(主要是早期 FreeBSD 的)只以标识符形式列出。两者都列在了构建仓库的 unmatched 目录里欢迎贡献数据。进一步地BSD SCCS 和 FreeBSD CVS 中共享相同作者和时间戳的提交,可以结合成一个单独的 Git 提交。还会添加对导入 SCCS 文件注释的支持,以便引入对应的提交元数据。最后,也是最重要的,还会添加更多开源系统的分支,比如 NetBSD、OpenBSD、DragonFlyBSD 和 *illumos*。理想情况下其他重要的历史性 Unix 发行版,比如 System III、System V、NeXTSTEP 和 SunOS它们的版权拥有者也能以允许合作研究的许可证释出他们的系统。
### 鸣谢 ###
本人感谢很多付出努力的人们。Brian W. Kernighan、Doug McIlroy 和 Arnold D. Robbins 帮助确认了 Bell 实验室的登录标识符。Clem Cole、Era Erikson、Mary Ann Horton、Kirk McKusick、Jeremy C. Reed、Ingo Schwarze 和 Anatole Shaw 帮助确认了 BSD 的登录标识符。BSD SCCS 的导入基于 H. Merijn Brand 和 Jonathan Gray 的开发工作。
这次研究通过 National Strategic Reference Framework (NSRF) 的 Operational Program " Education and Lifelong Learning" - Research Funding Program: Thalis - Athens University of Economics and Business - Software Engineering Research Platform由 European Union ( European Social Fund - ESF) 和 Greek national funds 出资赞助。
### 引用 ###
[[1]][31]
M. D. McIlroy, E. N. Pinson, and B. A. Tague, "UNIX time-sharing system: Foreword," *The Bell System Technical Journal*, vol. 57, no. 6, pp. 1899-1904, July-August 1978.
[[2]][32]
D. M. Ritchie and K. Thompson, "The UNIX time-sharing system," *Bell System Technical Journal*, vol. 57, no. 6, pp. 1905-1929, July-August 1978.
[[3]][33]
D. M. Ritchie, "The evolution of the UNIX time-sharing system," *AT&T Bell Laboratories Technical Journal*, vol. 63, no. 8, pp. 1577-1593, Oct. 1984.
[[4]][34]
P. H. Salus, *A Quarter Century of UNIX*. Boston, MA: Addison-Wesley, 1994.
[[5]][35]
E. S. Raymond, *The Art of Unix Programming*. Addison-Wesley, 2003.
[[6]][36]
M. J. Rochkind, "The source code control system," *IEEE Transactions on Software Engineering*, vol. SE-1, no. 4, pp. 255-265, 1975.
----------
#### Footnotes: ####
[1][37] - [https://github.com/dspinellis/unix-history-repo][38]
[2][39] - Updates may add or modify material. To ensure replicability the repository's users are encouraged to fork it or archive it.
[3][40] - [http://www.tuhs.org/archive_sites.html][41]
[4][42] - [https://www.mckusick.com/csrg/][43]
[5][44] - [http://www.oldlinux.org/Linux.old/distributions/386BSD][45]
[6][46] - [http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/][47]
[7][48] - [https://github.com/freebsd/freebsd][49]
[8][50] - [http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree][51]
[9][52] - [https://github.com/dspinellis/unix-history-make][53]
--------------------------------------------------------------------------------
via: http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#MPT78
[2]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#RT78
[3]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Rit84
[4]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Sal94
[5]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Ray03
[6]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:data
[7]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:dev
[8]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:use
[9]:https://github.com/dspinellis/unix-history-repo
[10]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAB
[11]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAC
[12]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:provenance
[13]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches
[14]:http://www.tuhs.org/archive_sites.html
[15]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAD
[16]:https://www.mckusick.com/csrg/
[17]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAE
[18]:http://www.oldlinux.org/Linux.old/distributions/386BSD
[19]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAF
[20]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/
[21]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAG
[22]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#SCCS
[23]:https://github.com/freebsd/freebsd
[24]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAH
[25]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree
[26]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAI
[27]:https://github.com/dspinellis/unix-history-make
[28]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAJ
[29]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches
[30]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:metrics
[31]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITEMPT78
[32]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERT78
[33]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERit84
[34]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESal94
[35]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERay03
[36]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESCCS
[37]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAB
[38]:https://github.com/dspinellis/unix-history-repo
[39]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAC
[40]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAD
[41]:http://www.tuhs.org/archive_sites.html
[42]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAE
[43]:https://www.mckusick.com/csrg/
[44]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAF
[45]:http://www.oldlinux.org/Linux.old/distributions/386BSD
[46]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAG
[47]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/
[48]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAH
[49]:https://github.com/freebsd/freebsd
[50]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAI
[51]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree
[52]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAJ
[53]:https://github.com/dspinellis/unix-history-make

View File

@ -1,80 +0,0 @@
如何在 Linux 命令行下监控命令的执行进度
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/11/pv-featured-1.jpg)
如果你是一个 Linux 系统管理员,那么毫无疑问你必须花费大量的工作时间在命令行上:安装和卸载软件、监视系统状态、复制、移动、删除文件、查错,等等。很多时候,你输入一个命令,然后等待很长时间直到它执行完成。也有的时候,你执行的命令挂起了,而你只能猜测命令执行的实际情况。
通常Linux 命令不提供和进度相关的信息,而这些信息特别重要,尤其是当你时间有限的时候。然而,这并不意味着你就束手无策了。现在有一个命令 pv它会显示当前在命令行执行的命令的进度信息。在本文中我们会讨论它并用几个简单的例子说明它的特性。
### PV 命令 ###
[PV][1] 由 Andrew Wood 开发,是 Pipe Viewer 的简称,意思是通过管道显示数据处理进度的信息。这些信息包括已经耗费的时间、完成的百分比(通过进度条显示)、当前的速度、已传输的全部数据,以及估计的剩余时间。
>"要使用PV需要配合合适的选项把它放置在两个进程之间的管道。命令的标准输入将会通过标准输出传进来的而进度会被输出到标准错误输出。”
上面解释了命令的主页(?)
### 下载和安装 ###
Debian 系的操作系统如Ubuntu可以简单的使用下面的命令安装PV
sudo apt-get install pv
如果你使用了其他发行版本,你可以使用各自的包管理软件在你的系统上安装 PV。PV 安装好后,你就可以在各种场合使用它(详见下文)。需要注意的是,下面所有例子都在 pv 1.2.0 下正常工作。
### 特性和用法 ###
对我们这些在 Linux 上使用命令行的用户来说,一个典型的使用场景是从 USB 驱动器拷贝电影文件到电脑。如果你使用 cp 来完成这个任务,在整个复制过程结束或者出错之前,你对其中的情况一无所知。
然而pv 命令在这种情景下很有帮助。比如:
pv /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv
输出如下:
![pv-copy](https://www.maketecheasier.com/assets/uploads/2015/10/pv-copy.png)
所以,如你所见,这个命令显示了很多和操作有关的有用信息,包括已经传输了的数据量,花费的时间,传输速率,进度条,进度的百分比,已经剩余的时间。
`pv` 命令提供了多种显示选项开关。比如,你可以使用 `-p` 来显示百分比、`-t` 来显示时间、`-r` 表示传输速率、`-e` 代表 eta译注估计的剩余时间。好在你不必记住某一个选项因为默认这几个选项都是启用的。但是如果你只需要其中某一项信息那么可以通过指定对应的选项来实现。
这里还有一个 `-n` 选项,它可以让 pv 命令显示整数百分比,在标准错误输出上每行显示一个数字,用来替代通常的视觉进度条。下面是一个例子:
pv -n /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv
![pv-numeric](https://www.maketecheasier.com/assets/uploads/2015/10/pv-numeric.png)
这个选项特别适合某些场景下的需求,比如你想用管道把输出传给 [dialog][2] 命令。
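比如,下面是一个把 pv 的数字输出接到 dialog 进度条上的简单示意(文件路径沿用上文的例子,提示文字可随意更改):

(pv -n /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv) 2>&1 | dialog --gauge "正在复制..." 10 70 0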
接下来还有一个命令行选项 `-L`,可以让你限制 pv 命令的传输速率。举个例子,使用 `-L` 选项来限制传输速率为 2MB/s
pv -L 2m /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv
![pv-ratelimit](https://www.maketecheasier.com/assets/uploads/2015/10/pv-ratelimit.png)
如上图所见,数据传输速度按照我们的要求被限制了。
另一个pv 可以帮上忙的情景是压缩文件。这里有一个例子可以向你解释如何与压缩软件Gzip 一起工作。
pv /media/himanshu/1AC2-A8E3/fnf.mkv | gzip > ./Desktop/fnf.log.gz
![pv-gzip](https://www.maketecheasier.com/assets/uploads/2015/10/pv-gzip.png)
### 结论 ###
如上所述pv 是一个非常有用的小工具,它可以在命令没有按照预期执行的情况下,帮你节省宝贵的时间。而且这些显示的信息还可以用在 shell 脚本里。我强烈推荐你使用这个命令,它值得一试。
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/monitor-progress-linux-command-line-operation/
作者:[Himanshu Arora][a]
译者:[ezio](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/himanshu/
[1]:http://linux.die.net/man/1/pv
[2]:http://linux.die.net/man/1/dialog

View File

@ -0,0 +1,39 @@
# 使用 Systemback 为你的 Ubuntu/Linux Mint 创建备份和系统还原点
对于任何一款操作系统来说,系统还原功能都是必备的:它允许用户将电脑还原到之前的状态(包括文件系统、安装的应用以及系统设置),以处理系统故障及其他问题。有的时候,安装一个程序或者驱动可能让你的系统黑屏。系统还原则能让你电脑里的系统文件(译者注:是系统文件,并非普通文件,详情请看**注意**部分)和程序恢复到之前工作正常时的状态,让你远离那让人头痛的排障过程,而且它也不会影响你的文件、照片或者其他数据。简单的系统备份还原工具 [Systemback](https://launchpad.net/systemback) 让你可以很容易地创建系统备份以及保存用户配置文件。如果遇到问题,你可以傻瓜式地还原。它还有一些额外的特性,包括系统复制、系统安装以及创建 Live 系统。
截图
![systemback](http://2.bp.blogspot.com/-2UPS3yl3LHw/VlilgtGAlvI/AAAAAAAAGts/ueRaAghXNvc/s1600/systemback-1.jpg)
![systemback](http://2.bp.blogspot.com/-7djBLbGenxE/Vlilgk-FZHI/AAAAAAAAGtk/2PVNKlaPO-c/s1600/systemback-2.jpg)
![](http://3.bp.blogspot.com/-beZYwKrsT4o/VlilgpThziI/AAAAAAAAGto/cwsghXFNGRA/s1600/systemback-3.jpg)
![](http://1.bp.blogspot.com/-t_gmcoQZrvM/VlilhLP--TI/AAAAAAAAGt0/GWBg6bGeeaI/s1600/systemback-5.jpg)
**注意**:使用系统还原不会还原你的文件、音乐、电子邮件或者其他任何类型的私人文件。对不同用户来讲,这既是优点又是缺点。坏消息是它不会还原你意外删除的文件,不过你可以通过一个文件恢复程序来解决这个问题。如果你的计算机上没有创建过系统还原点,那么系统还原工具也就无从还原了。
适用于 Ubuntu 15.10 Wily/16.04/15.04 Vivid/14.04 Trusty/Linux Mint 14.x 及其他 Ubuntu 衍生版,要安装 Systemback打开终端将下面这些命令复制过去
终端命令:
```
sudo add-apt-repository ppa:nemh/systemback
sudo apt-get update
sudo apt-get install systemback
```
大功告成。
--------------------------------------------------------------------------------
via: http://www.noobslab.com/2015/11/backup-system-restore-point-your.html
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://launchpad.net/systemback

View File

@ -0,0 +1,197 @@
Linux / Unix: jobs 命令示例
================================================================================
我是个新的 Linux 或 Unix 用户。如何在 Linux 或类 Unix 系统中使用 BASH/KSH/TCSH 或者基于 POSIX 的 shell 来查看当前正在进行的作业?在 Unix/Linux 上怎样显示当前作业的状态?
什么是作业控制?作业控制就是可以停止/暂停进程(命令)的执行,并按你的要求继续/恢复它们的执行。这是通过你的操作系统和 bash/ksh 或 POSIX shell 等 shell 来实现的。
shell 会将当前所执行的作业保存在一个表中,可以用 jobs 命令来显示。
### 目的 ###
> 在当前 shell 会话中显示作业的状态。
### 语法 ###
其基本语法如下:
jobs
jobs jobID
或者
jobs [options] jobID
### 启动一些作业来进行示范 ###
在开始使用 jobs 命令前,你需要在系统上先启动多个作业。执行以下命令来启动作业:
## 启动 xeyes, calculator, 和 gedit 文本编辑器 ###
xeyes &
gnome-calculator &
gedit fetch-stock-prices.py &
最后,在前台运行 ping 命令:
ping www.cyberciti.biz
按 **Ctrl-Z** 键来暂停 ping 命令的作业。
### jobs 命令示例 ###
要在当前 shell 显示作业的状态,请输入:
$ jobs
输出示例:
[1] 7895 Running gpass &
[2] 7906 Running gnome-calculator &
[3]- 7910 Running gedit fetch-stock-prices.py &
[4]+ 7946 Stopped ping cyberciti.biz
要按名称显示某个作业的状态,请使用 “%” 字符加上作业名的开头,输入:
$ jobs -p %p
或者
$ jobs %p
输出示例:
[4]- Stopped ping cyberciti.biz
即 “%” 字符后面跟着作业名(的开头)。在这个例子中p 匹配到了之前被暂停的 ping 作业,因此显示了它的状态。
### 如何在正常信息之外再显示进程 ID ###
通过 jobs 命令的 -l小写的 L选项列出每个作业的详细信息运行
$ jobs -l
示例输出:
![Fig.01: Displaying the status of jobs in the shell](http://s0.cyberciti.org/uploads/faq/2013/02/jobs-command-output.jpg)
Fig.01: 在 shell 中显示 jobs 的状态
### 如何只列出最近一次状态改变的进程? ###
首先,如下启动一个新的作业:
$ sleep 100 &
现在,要只显示自上次通知以来状态发生了改变(停止或退出)的作业,输入:
$ jobs -n
示例输出:
[5]- Running sleep 100 &
### 仅显示进程 IDPID ###
通过 jobs 命令的 -p 选项仅显示 PID
$ jobs -p
示例输出:
7895
7906
7910
7946
7949
### 怎样只显示正在运行的作业呢? ###
通过 jobs 命令的 -r 选项只显示正在运行的作业,输入:
$ jobs -r
示例输出:
[1] Running gpass &
[2] Running gnome-calculator &
[3]- Running gedit fetch-stock-prices.py &
### 怎样只显示已经停止工作的作业? ###
通过 jobs 命令的 -s 选项只显示停止工作的作业,输入:
$ jobs -s
示例输出:
[4]+ Stopped ping cyberciti.biz
要继续执行 ping cyberciti.biz 作业,输入以下 bg 命令:
$ bg %4
### jobs 命令选项 ###
摘自 [bash(1)][1] 命令 man 手册页:
<table border="1">
<tbody>
<tr>
<td>Option</td>
<td>Description</td>
</tr>
<tr>
<td><kbd><strong>-l</strong></kbd></td>
<td>Show process id's in addition to the normal information.</td>
</tr>
<tr>
<td><kbd><strong>-p</strong></kbd></td>
<td>Show process id's only.</td>
</tr>
<tr>
<td><kbd><strong>-n</strong></kbd></td>
<td>Show only processes that have changed status since the last notification are printed.</td>
</tr>
<tr>
<td><kbd><strong>-r</strong></kbd></td>
<td>Restrict output to running jobs only.</td>
</tr>
<tr>
<td><kbd><strong>-s</strong></kbd></td>
<td>Restrict output to stopped jobs only.</td>
</tr>
<tr>
<td><kbd><strong>-x</strong></kbd></td>
<td>COMMAND is run after all job specifications that appear in ARGS have been replaced with the process ID of that job's process group leader.</td>
</tr>
</tbody>
</table>
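表格中 `-x` 选项的说明比较抽象,这里给出一个简单示例(作业编号请以你实际的 jobs 输出为准bash 会先把 %1 替换成作业 1 对应的进程组 ID再执行后面的命令因此下面的命令会把这个 ID 打印出来:

$ jobs -x echo %1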
### 关于 /usr/bin/jobs 和 shell 内建的说明 ###
输入以下 type 命令找出是否 jobs 命令是 shell 的内建命令或是外部命令:
$ type -a jobs
输出示例:
jobs is a shell builtin
jobs is /usr/bin/jobs
在几乎所有情况下jobs 命令都是作为 BASH/KSH/POSIX shell 的内建命令实现的。/usr/bin/jobs 命令不能用于当前 shell它运行在一个不同的环境中不与父 bash/ksh shell 共享作业信息,因此无法显示当前 shell 的作业。
--------------------------------------------------------------------------------
via:
作者Vivek Gite
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.manpager.com/linux/man1/bash.1.html

View File

@ -0,0 +1,67 @@
自定义Ubuntu面板时间日期显示格式
================================================================================
![时间日期格式](http://ubuntuhandbook.org/wp-content/uploads/2015/08/ubuntu_tips1.png)
尽管设置里已经有一些选项可以用了,这个快速教程会向你展示如何更加深入地自定义 Ubuntu 面板上的时间和日期指示器。
![自定义时间日期](http://ubuntuhandbook.org/wp-content/uploads/2015/12/custom-timedate.jpg)
在开始之前,在 Ubuntu 软件中心搜索并安装 **dconf Editor**。然后启动该软件并按以下步骤执行:
**1.** 当 dconf Editor 启动后,导航至 **com -> canonical -> indicator -> datetime**。将 **time-format** 的值设置为 **custom**
![自定义时间格式](http://ubuntuhandbook.org/wp-content/uploads/2015/12/time-format.jpg)
你也可以通过终端里的命令完成以上操作:
gsettings set com.canonical.indicator.datetime time-format 'custom'
**2.** 现在你可以通过编辑 **custom-time-format** 的值来自定义时间和日期的格式。
![自定义-时间格式](http://ubuntuhandbook.org/wp-content/uploads/2015/12/customize-timeformat.jpg)
你也可以通过命令完成:(译注:将 FORMAT_VALUE_HERE 替换为所需要的格式值)
gsettings set com.canonical.indicator.datetime custom-time-format 'FORMAT_VALUE_HERE'
以下是参数含义:
- %a = 星期名缩写
- %A = 星期名完整拼写
- %b = 月份名缩写
- %B = 月份名完整拼写
- %d = 按月计日期
- %l = 小时 ( 1..12) %I = 小时 (01..12)
- %k = 小时 ( 0..23) %H = 小时 (00..23)
- %M = 分钟 (00..59)
- %p = 午别AM 或 PM %P = am 或 pm.
- %S = 秒 (00..59)
- 打开终端键入命令 `man date` 并执行以了解更多细节。
一些例子:
自定义时间日期显示格式值:
**%a %H:%M %m/%d/%Y**
![exam-1](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-1.jpg)
**%a %r %b %d or %a %I:%M:%S %p %b %d**
![exam-2](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-2.jpg)
**%a %-d %b %l:%M %P %z**
![exam-3](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-3.jpg)
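如果想撤销自定义、恢复默认的时间日期显示,可以用 gsettings 的 reset 子命令把上面改过的两个键复位(这是 gsettings 的标准用法):

gsettings reset com.canonical.indicator.datetime custom-time-format
gsettings reset com.canonical.indicator.datetime time-format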
--------------------------------------------------------------------------------
via: http://ubuntuhandbook.org/index.php/2015/12/time-date-format-ubuntu-panel/
作者:[Ji m][a]
译者:[alim0x](https://github.com/alim0x)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ubuntuhandbook.org/index.php/about/

View File

@ -1,20 +1,20 @@
How to renew the ISPConfig 3 SSL Certificate
如何更新ISPConfig 3 SSL证书
================================================================================
This tutorial describes the steps to renew the SSL Certificate of the ISPConfig 3 control panel. There are two alternative ways to achieve that:
本教程描述了如何在 ISPConfig 3 控制面板中更新 SSL 证书。有两个可选的方法:
- Create a new OpenSSL Certificate and CSR on the command line with OpenSSL.
- Renew the SSL Certificate with the ISPConfig updater
- 用OpenSSL创建一个新的OpenSSL证书和CSR。
- 用ISPConfig updater更新SSL证书
I'll start with the manual way to renew the ssl cert.
我将从手动更新 ssl 证书的方法开始。
### 1) Create a new ISPConfig 3 SSL Certificate with OpenSSL ###
### 1用OpenSSL创建一个新的ISPConfig 3 SSL 证书 ###
Login to your server on the shell as root user. Before we create a new SSL Cert, backup the current ones. SSL Certs are security sensitive so I'll store the backup in the /root/ folder.
以 root 用户在 shell 中登录你的服务器。在创建新的 SSL 证书之前,先备份现有的证书。SSL 证书是安全敏感的,因此我会把备份存储在 /root/ 目录下。
tar pcfz /root/ispconfig_ssl_backup.tar.gz /usr/local/ispconfig/interface/ssl
chmod 600 /root/ispconfig_ssl_backup.tar.gz
> Now create a new SSL Certificate key, Certificate Request (csr) and a self signed Certificate.
> 现在创建一个新的SSL证书密钥证书请求csr和自签发证书。
cd /usr/local/ispconfig/interface/ssl
openssl genrsa -des3 -out ispserver.key 4096
@ -25,14 +25,13 @@ Login to your server on the shell as root user. Before we create a new SSL Cert,
mv ispserver.key ispserver.key.secure
mv ispserver.key.insecure ispserver.key
Restart Apache to load the new SSL Certificate.
重启apache来加载新的SSL证书
service apache2 restart
### 2) Renew the SSL Certificate with the ISPConfig installer ###
### 2用ISPConfig安装器来更新SSL证书 ###
The alternative way to get a new SSL Certificate is to use the ISPConfig update script.
Download ISPConfig to the /tmp folder, unpack the archive and start the update script.
另一个获取新的SSL证书的替代方案是使用ISPConfig更新脚本。下载ISPConfig到/tmp目录下解压包并运行脚本。
cd /tmp
wget http://www.ispconfig.org/downloads/ISPConfig-3-stable.tar.gz
@ -40,20 +39,20 @@ Download ISPConfig to the /tmp folder, unpack the archive and start the update s
cd ispconfig3_install/install
php -q update.php
The update script will ask the following question during update:
更新脚本会在更新时询问下面的问题:
Create new ISPConfig SSL certificate (yes,no) [no]:
Answer "yes" here and the SSL Certificate creation dialog will start.
这里回答“yes”SSL证书创建对话框就会启动。
--------------------------------------------------------------------------------
via: http://www.faqforge.com/linux/how-to-renew-the-ispconfig-3-ssl-certificate/
作者:[Till][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.faqforge.com/author/till/
[a]:http://www.faqforge.com/author/till/

View File

@ -0,0 +1,464 @@
通过Dockerize这篇博客来开启我们的Docker之旅
===
>这篇文章将包含Docker的基本概念以及如何通过创建一个定制的Dockerfile来Dockerize一个应用
>作者Benjamin Cane2015-12-01 10:00:00
Docker 是 2 年前从某个想法中孕育而生的有趣技术,世界各地的公司组织都积极使用它来部署应用。在今天的文章中,我将教你如何通过 “Dockerize” 一个现有的应用,来开始我们的 Docker 之旅。这里所说的应用,就是这篇博客!
## 什么是Docker
当我们开始学习Docker基本概念时让我们先去搞清楚什么是Docker以及它为什么这么流行。Docker是一个操作系统容器管理工具它通过将应用打包在操作系统容器中来方便我们管理和部署应用。
### 容器 vs. 虚拟机
容器虽和虚拟机并不完全相似,但它也是一种提供**操作系统虚拟化**的方式。但是,它和标准的虚拟机还是有不同之处的。
标准虚拟机一般会包括一个完整的操作系统,操作系统包,最后还有一至两个应用。这都得益于为虚拟机提供硬件虚拟化的管理程序。这样一来,一个单一的服务器就可以将许多独立的操作系统作为虚拟客户机运行了。
容器和虚拟机很相似,它们都支持在单一的服务器上运行多个操作环境,只是,在容器中,这些环境并不是一个个完整的操作系统。容器一般只包含必要的操作系统包和一些应用。它们通常不会包含一个完整的操作系统或者硬件虚拟化程序。这也意味着容器比传统的虚拟机开销更少。
容器和虚拟机常被误认为是两种抵触的技术。虚拟机采用同一个物理服务器,来提供全功能的操作环境,该环境会和其余虚拟机一起共享这些物理资源。容器一般用来隔离运行中的应用进程,运行进程将在单独的主机中运行,以保证隔离后的进程之间不能相互影响。事实上,容器和**BSD Jails**以及`chroot`进程的相似度,超过了和完整虚拟机的相似度。
### Docker在容器的上层提供了什么
Docker 本身并不是一个容器运行时环境;事实上 Docker 并不依赖某一种特定的容器技术,甚至已有计划让 Docker 支持 [Solaris Zones](https://blog.docker.com/2015/08/docker-oracle-solaris-zones/) 和 [BSD Jails](https://wiki.freebsd.org/Docker)。Docker 提供的是管理、打包和部署容器的方式。虽然一定程度上虚拟机多多少少拥有这些类似的功能,但虚拟机并没有完整拥有绝大多数的容器功能,即使拥有,这些功能用起来也都没有 Docker 来得方便。
现在我们应该知道Docker是什么了然后我们将从安装Docker并部署一个公共的预构建好的容器开始学习Docker是如何工作的。
## 从安装开始
默认情况下Docker 并不会自动被安装在您的计算机中,所以,第一步就是安装 Docker 包;我们的教学机器系统是 Ubuntu 14.04,所以我们将使用 Apt 包管理器来执行安装操作:
```
# apt-get install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
aufs-tools cgroup-lite git git-man liberror-perl
Suggested packages:
btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc
git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki
git-svn
The following NEW packages will be installed:
aufs-tools cgroup-lite docker.io git git-man liberror-perl
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 7,553 kB of archives.
After this operation, 46.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
```
为了检查当前是否有容器运行,我们可以执行`docker`命令,加上`ps`选项
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
`docker`命令中的`ps`功能类似于Linux的`ps`命令。它将显示可找到的Docker容器以及各自的状态。由于我们并没有开启任何Docker容器所以命令没有显示任何正在运行的容器。
## 部署一个预构建好的nginx Docker容器
我比较喜欢的Docker特性之一就是Docker部署预先构建好的容器的方式就像`yum`和`apt-get`部署包一样。为了更好地解释我们来部署一个运行着nginx web服务器的预构建容器。我们可以继续使用`docker`命令,这次选择`run`选项。
```
# docker run -d nginx
Unable to find image 'nginx' locally
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete
```
`docker`命令的`run`选项用来通知Docker去寻找一个指定的Docker镜像然后开启运行着该镜像的容器。默认情况下Docker容器在前台运行这意味着当你运行`docker run`命令的时候你的shell会被绑定到容器的控制台以及运行在容器中的进程。为了能在后台运行该Docker容器我们可以使用`-d` (**detach**)标志。
再次运行`docker ps`命令可以看到nginx容器正在运行。
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d31ab01fc9 nginx:latest nginx -g 'daemon off 4 seconds ago Up 3 seconds 443/tcp, 80/tcp desperate_lalande
```
从上面的打印信息中,我们可以看到正在运行的名为 `desperate_lalande` 的容器,它是由 `nginx:latest image`译者注nginx 最新版本的镜像)构建而来的。
### Docker镜像
镜像是Docker的核心特征之一类似于虚拟机镜像。和虚拟机镜像一样Docker镜像是一个被保存并打包的容器。当然Docker不只是创建镜像它还可以通过Docker仓库发布这些镜像Docker仓库和包仓库的概念差不多它让Docker能够模仿`yum`部署包的方式来部署镜像。为了更好地理解这是怎么工作的,我们来回顾`docker run`执行后的输出。
```
# docker run -d nginx
Unable to find image 'nginx' locally
```
我们可以看到第一条信息是Docker不能在本地找到名叫nginx的镜像。这是因为当我们执行`docker run`命令时告诉Docker运行一个基于nginx镜像的容器。既然Docker要启动一个基于特定镜像的容器那么Docker首先需要知道那个指定镜像。在检查远程仓库之前Docker首先检查本地是否存在指定名称的本地镜像。
因为系统是崭新的不存在nginx镜像Docker将选择从Docker仓库下载之。
```
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete
```
这就是第二部分打印信息显示给我们的内容。默认Docker会使用[Docker Hub](https://hub.docker.com/)仓库该仓库由Docker公司维护。
和 Github 一样,在 Docker Hub 创建公共仓库是免费的,私人仓库就需要缴纳费用了。当然,部署你自己的 Docker 镜像仓库也是可以实现的,事实上,只需要简单地运行 `docker run registry` 命令就行了。但在这篇文章中我们的重点将不是讲解如何部署一个定制的镜像仓库registry服务。
### 关闭并移除容器
在我们继续构建定制容器之前我们先清理Docker环境我们将关闭先前的容器并移除它。
我们利用`docker`命令和`run`选项运行一个容器,所以,为了停止该相同的容器,我们简单地在执行`docker`命令时,使用`kill`选项,并指定容器名。
```
# docker kill desperate_lalande
desperate_lalande
```
当我们再次执行`docker ps`,就不再有容器运行了
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
但是,此时,我们只是停止了容器;虽然它不再运行,但容器本身仍然存在。默认情况下,`docker ps` 只会显示正在运行的容器,如果我们附加 `-a` (all) 标识,它会显示所有运行和未运行的容器。
```
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d31ab01fc9 5c82215b03d1 nginx -g 'daemon off 4 weeks ago Exited (-1) About a minute ago desperate_lalande
```
为了能完整地移除容器,我们在用`docker`命令时,附加`rm`选项。
```
# docker rm desperate_lalande
desperate_lalande
```
虽然容器被移除了;但是我们仍拥有可用的**nginx**镜像(译者注:镜像缓存)。如果我们重新运行`docker run -d nginx`Docker就无需再次拉取nginx镜像即可启动容器。这是因为我们本地系统中已经保存了一个副本。
为了列出系统中所有的本地镜像,我们运行`docker`命令,附加`images`选项。
```
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
nginx latest 9fab4090484a 5 days ago 132.8 MB
```
## 构建我们自己的镜像
截至目前我们已经使用了一些基础的Docker命令来开启停止和移除一个预构建好的普通镜像。为了"Dockerize"这篇博客,我们需要构建我们自己的镜像,也就是创建一个**Dockerfile**。
在大多数虚拟机环境中如果你想创建一个机器镜像首先你需要建立一个新的虚拟机安装操作系统安装应用最后将其转换为一个模板或者镜像。但在Docker中所有这些步骤都可以通过Dockerfile实现全自动。Dockerfile是向Docker提供构建指令去构建定制镜像的方式。在这一章节我们将编写能用来部署这篇博客的定制Dockerfile。
### 理解应用
我们开始构建Dockerfile之前第一步要搞明白我们需要哪些东西来部署这篇博客。
博客本质上是由静态站点生成器生成的静态 HTML 页面,这个生成器是我编写的,名为 **hamerkop**。这个生成器很简单,它所做的就是生成该博客站点。所有的博客源码都被我放在了一个公共的 [Github 仓库](https://github.com/madflojo/blog)。为了部署这篇博客,我们要先从 Github 仓库把博客内容拉取下来,然后安装 **Python** 和一些 **Python** 模块,最后执行 `hamerkop` 应用。我们还需要安装 **nginx**,来提供生成后的内容。
到目前为止,这还只是一个简单的 Dockerfile但它会用到相当多的 [Dockerfile 语法](https://docs.docker.com/v1.8/reference/builder/)。我们先克隆 Github 仓库,然后使用你最喜欢的编辑器编写 Dockerfile我选择 `vi`
```
# git clone https://github.com/madflojo/blog.git
Cloning into 'blog'...
remote: Counting objects: 622, done.
remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622
Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (242/242), done.
Checking connectivity... done.
# cd blog/
# vi Dockerfile
```
### FROM - 继承一个Docker镜像
第一条 Dockerfile 指令是 `FROM` 指令。它用来指定一个现存的镜像作为我们的基础镜像,这也从根本上给我们提供了继承其他 Docker 镜像的途径。在本例中,我们将从刚刚使用过的 **nginx** 镜像开始;如果我们想从零开始,可以通过指定 `ubuntu:latest` 来使用 **Ubuntu** 的 Docker 镜像。
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
```
除了`FROM`指令,我还使用了`MAINTAINER`它用来显示Dockerfile的作者。
Docker支持使用`#`作为注释我将经常使用该语法来解释Dockerfile的部分内容。
### 运行一次测试构建
因为我们继承了**nginx** Docker镜像我们现在的Dockerfile也就包括了用来构建**nginx**镜像的[Dockerfile](https://github.com/nginxinc/docker-nginx/blob/08eeb0e3f0a5ee40cbc2bc01f0004c2aa5b78c15/Dockerfile)中所有指令。这意味着此时我们可以从该Dockerfile中构建出一个Docker镜像然后从该镜像中运行一个容器。虽然最终的镜像和**nginx**镜像本质上是一样的但是我们这次是通过构建Dockerfile的形式然后我们将讲解Docker构建镜像的过程。
想要从Dockerfile构建镜像我们只需要在运行`docker`命令的时候,加上**build**选项。
```
# docker build -t blog /root/blog
Sending build context to Docker daemon 23.6 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <ben@bencane.com>
---> Running in c97f36450343
---> 60a44f78d194
Removing intermediate container c97f36450343
Successfully built 60a44f78d194
```
上面的例子,我们使用了`-t` (**tag**)标识给镜像添加"blog"的标签。本质上我们只是在给镜像命名如果我们不指定标签就只能通过Docker分配的**Image ID**来访问镜像了。本例中从Docker构建成功的信息可以看出**Image ID**值为`60a44f78d194`。
除了`-t`标识外,我还指定了目录`/root/blog`。该目录被称作"构建目录"它将包含Dockerfile以及其他需要构建该容器的文件。
现在我们构建成功,下面我们开始定制该镜像。
### 使用RUN来执行apt-get
用来生成HTML页面的静态站点生成器是用**Python**语言编写的所以在Dockerfile中需要做的第一件定制任务是安装Python。我们将使用Apt包管理器来安装Python包这意味着在Dockerfile中我们要指定运行`apt-get update`和`apt-get install python-dev`;为了完成这一点,我们可以使用`RUN`指令。
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
```
如上所示我们只是简单地告知Docker构建镜像的时候要去执行指定的`apt-get`命令。比较有趣的是,这些命令只会在该容器的上下文中执行。这意味着,即使容器中安装了`python-dev`和`python-pip`,但主机本身并没有安装这些。说的更简单点,`pip`命令将只在容器中执行,出了容器,`pip`命令不存在。
还有一点比较重要的是Docker 构建过程中不接受用户输入。这意味着任何被 `RUN` 指令执行的命令必须在没有用户输入的情况下完成。由于很多应用在安装过程中需要用户输入信息,这增加了一点难度。好在我们例子中 `RUN` 执行的命令都不需要用户输入。
### 安装Python模块
**Python** 安装完毕后,我们现在需要安装 Python 模块。如果在 Docker 外做这些事,我们通常使用 `pip` 命令,并参考博客 Git 仓库中名叫 `requirements.txt` 的文件。在之前的步骤中,我们已经使用 `git` 命令成功地将 Github 仓库“克隆”到了 `/root/blog` 目录;这个目录碰巧也是我们创建 `Dockerfile` 的目录。这很重要,因为这意味着 Docker 在构建过程中可以访问 Git 仓库中的内容。
当我们执行构建后Docker将构建的上下文环境设置为指定的"构建目录"。这意味着目录中的所有文件都可以在构建过程中被使用,目录之外的文件(构建环境之外)是不能访问的。
为了能安装需要的Python模块我们需要将`requirements.txt`从构建目录拷贝到容器中。我们可以在`Dockerfile`中使用`COPY`指令完成这一需求。
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
## Create a directory for required files
RUN mkdir -p /build/
## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt
```
在`Dockerfile`中我们增加了3条指令。第一条指令使用`RUN`在容器中创建了`/build/`目录。该目录用来拷贝生成静态HTML页面需要的一切应用文件。第二条指令是`COPY`指令,它将`requirements.txt`从"构建目录"(`/root/blog`)拷贝到容器中的`/build/`目录。第三条使用`RUN`指令来执行`pip`命令;安装`requirements.txt`文件中指定的所有模块。
当构建定制镜像时,`COPY`是条重要的指令。如果在Dockerfile中不指定拷贝文件Docker镜像将不会包含requirements.txt文件。在Docker容器中所有东西都是隔离的除非在Dockerfile中指定执行否则容器中不会包括需要的依赖。
### 重新运行构建
既然我们已经让 Docker 执行了一些定制任务,现在我们尝试再次构建 blog 镜像。
```
# docker build -t blog /root/blog
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <ben@bencane.com>
---> Using cache
---> 8e0f1899d1eb
Step 2 : RUN apt-get update
---> Using cache
---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
---> Using cache
---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
---> Running in bde05cf1e8fe
---> f4b66e09fa61
Removing intermediate container bde05cf1e8fe
Step 5 : COPY requirements.txt /build/
---> cef11c3fb97c
Removing intermediate container 9aa8ff43f4b0
Step 6 : RUN pip install -r /build/requirements.txt
---> Running in c50b15ddd8b1
Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1))
Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2))
<truncated to reduce noise>
Successfully installed jinja2 PyYaml mistune markdown MarkupSafe
Cleaning up...
---> abab55c20962
Removing intermediate container c50b15ddd8b1
Successfully built abab55c20962
```
如上述输出所示,我们可以看到构建成功了,还可以看到另外一个有趣的信息 ` ---> Using cache`。这条信息告诉我们Docker 在构建该镜像时使用了它的构建缓存。
### Docker构建缓存
当Docker构建镜像时它不仅仅构建一个单独的镜像事实上在构建过程中它会构建许多镜像。从上面的输出信息可以看出在每一"步"执行后Docker都在创建新的镜像。
```
Step 5 : COPY requirements.txt /build/
---> cef11c3fb97c
```
上面片段的最后一行可以看出Docker在告诉我们它在创建一个新镜像因为它打印了**Image ID**;`cef11c3fb97c`。这种方式有用之处在于Docker能在随后构建**blog**镜像时将这些镜像作为缓存使用。这很有用处因为这样Docker就能加速同一个容器中新构建任务的构建流程。从上面的例子中我们可以看出Docker没有重新安装`python-dev`和`python-pip`包Docker则使用了缓存镜像。但是由于Docker并没有找到执行`mkdir`命令的构建缓存,随后的步骤就被一一执行了。
Docker 构建缓存一定程度上是福音,但有时也是噩梦。这是因为使用缓存还是重新运行指令的决定,是在一个很狭窄的范围内做出的。比如,如果 `requirements.txt` 文件发生了修改Docker 会在构建时检测到该变化,然后重新执行从那个点往后的所有指令。这得益于 Docker 能查看 `requirements.txt` 的文件内容。但是,`apt-get` 命令的执行就是另一回事了。如果提供 Python 包的 **Apt** 仓库包含了更新的 python-pip 包Docker 不会检测到这个变化,而会继续使用构建缓存。这会导致安装的是旧版本的包。虽然对 `python-pip` 来说这不一定是大问题,但如果缓存所含的包里有某个严重的安全漏洞,这就是个大问题了。
出于这个原因,定期地抛弃 Docker 缓存、重新构建镜像是有好处的。要做到这一点,在执行 Docker 构建时,我只需简单地指定 `--no-cache=True` 即可,见下面的示例。
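下面是一个完整的命令示意(构建目录沿用前文的 /root/blog/

```
# docker build --no-cache=True -t blog /root/blog/
```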
## 部署博客的剩余部分
Python 包和模块安装后,接下来我们将拷贝需要用到的应用文件,然后运行 `hamerkop` 应用。我们只需要使用更多的 `COPY` 和 `RUN` 指令就可以完成。
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
## Create a directory for required files
RUN mkdir -p /build/
## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt
## Add blog code nd required files
COPY static /build/static
COPY templates /build/templates
COPY hamerkop /build/
COPY config.yml /build/
COPY articles /build/articles
## Run Generator
RUN /build/hamerkop -c /build/config.yml
```
现在我们已经写出了剩余的构建指令,我们再次运行另一次构建,并确保镜像构建成功。
```
# docker build -t blog /root/blog/
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <ben@bencane.com>
---> Using cache
---> 8e0f1899d1eb
Step 2 : RUN apt-get update
---> Using cache
---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
---> Using cache
---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
---> Using cache
---> f4b66e09fa61
Step 5 : COPY requirements.txt /build/
---> Using cache
---> cef11c3fb97c
Step 6 : RUN pip install -r /build/requirements.txt
---> Using cache
---> abab55c20962
Step 7 : COPY static /build/static
---> 15cb91531038
Removing intermediate container d478b42b7906
Step 8 : COPY templates /build/templates
---> ecded5d1a52e
Removing intermediate container ac2390607e9f
Step 9 : COPY hamerkop /build/
---> 59efd1ca1771
Removing intermediate container b5fbf7e817b7
Step 10 : COPY config.yml /build/
---> bfa3db6c05b7
Removing intermediate container 1aebef300933
Step 11 : COPY articles /build/articles
---> 6b61cc9dde27
Removing intermediate container be78d0eb1213
Step 12 : RUN /build/hamerkop -c /build/config.yml
---> Running in fbc0b5e574c5
Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux
Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux
<truncated to reduce noise>
Successfully created file /usr/share/nginx/html//archive.html
Successfully created file /usr/share/nginx/html//sitemap.xml
---> 3b25263113e1
Removing intermediate container fbc0b5e574c5
Successfully built 3b25263113e1
```
### 运行定制的容器
构建成功后,我们现在就可以像之前启动 nginx 容器那样,通过运行带 `run` 选项的 `docker` 命令,来运行我们定制的容器。
```
# docker run -d -p 80:80 --name=blog blog
5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1
```
我们这次又使用了`-d` (**detach**)标识来让Docker在后台运行。但是我们也可以看到两个新标识。第一个新标识是`--name`这用来给容器指定一个用户名称。之前的例子我们没有指定名称因为Docker随机帮我们生成了一个。第二个新标识是`-p`,这个标识允许用户从主机映射一个端口到容器中的一个端口。
之前我们使用的基础 **nginx** 镜像分配了 80 端口给 HTTP 服务。默认情况下,容器内的端口并没有绑定到主机系统。为了让外部系统能访问容器内部端口,我们必须使用 `-p` 标识将主机端口映射到容器内部端口。上面的命令中,我们通过 `-p 80:80` 语法将主机的 80 端口映射到容器内部的 80 端口。
经过上面的命令,我们的容器似乎成功启动了,我们可以通过执行`docker ps`核实。
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d264c7ef92bd blog:latest nginx -g 'daemon off 3 seconds ago Up 3 seconds 443/tcp, 0.0.0.0:80->80/tcp blog
```
## 总结
截止目前,我们拥有了正在运行的定制 Docker 容器。虽然在这篇文章中我们只接触了一部分 Dockerfile 指令的用法,但这远非全部。我们可以查看 [Docker's reference page](https://docs.docker.com/v1.8/reference/builder/) 来获取所有的 Dockerfile 指令用法,那里对指令的用法说明得很详细。
另一个比较好的资源是 [Dockerfile Best Practices page](https://docs.docker.com/engine/articles/dockerfile_best-practices/),它包含许多构建定制 Dockerfile 的最佳实践。有些技巧非常有用,比如战略性地组织好 Dockerfile 中的命令。上面的例子中,我们将 `articles` 目录的 `COPY` 指令作为 Dockerfile 中最后的 `COPY` 指令。这是因为 `articles` 目录会经常变动。所以,将那些经常变化的指令尽可能地放在最后面的位置,来最大化利用那些可以被缓存的步骤。
通过这篇文章我们涉及了如何运行一个预构建的容器以及如何构建然后部署定制容器。虽然关于Docker你还有许多需要继续学习的地方但我想这篇文章给了你如何继续开始的好建议。当然如果你认为还有一些需要继续补充的内容在下面评论即可。
--------------------------------------
via:http://bencane.com/2015/12/01/getting-started-with-docker-by-dockerizing-this-blog/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+bencane%2FSAUo+%28Benjamin+Cane%29
作者Benjamin Cane
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,84 @@
Linux / Unix桌面之趣终端上的圣诞树
================================================================================
给你的Linux或Unix控制台创造一棵圣诞树玩玩吧。在此之前需要先安装一个Perl模块命名为Acme::POE::Tree。这是一棵很喜庆的圣诞树我已经在Linux、OSX和类Unix系统上验证过了。
### 安装 Acme::POE::Tree ###
安装perl模块最简单的办法就是使用cpanPerl综合典藏网。打开终端把下面的指令敲进去便可安装Acme::POE::Tree。
## 以root身份运行 ##
perl -MCPAN -e 'install Acme::POE::Tree'
**示例输出:**
Installing /home/vivek/perl5/man/man3/POE::NFA.3pm
Installing /home/vivek/perl5/man/man3/POE::Kernel.3pm
Installing /home/vivek/perl5/man/man3/POE::Loop.3pm
Installing /home/vivek/perl5/man/man3/POE::Resource.3pm
Installing /home/vivek/perl5/man/man3/POE::Filter::Map.3pm
Installing /home/vivek/perl5/man/man3/POE::Resource::SIDs.3pm
Installing /home/vivek/perl5/man/man3/POE::Loop::IO_Poll.3pm
Installing /home/vivek/perl5/man/man3/POE::Pipe::TwoWay.3pm
Appending installation info to /home/vivek/perl5/lib/perl5/x86_64-linux-gnu-thread-multi/perllocal.pod
RCAPUTO/POE-1.367.tar.gz
/usr/bin/make install -- OK
RCAPUTO/Acme-POE-Tree-1.022.tar.gz
Has already been unwrapped into directory /home/vivek/.cpan/build/Acme-POE-Tree-1.022-uhlZUz
RCAPUTO/Acme-POE-Tree-1.022.tar.gz
Has already been prepared
Running make for R/RC/RCAPUTO/Acme-POE-Tree-1.022.tar.gz
cp lib/Acme/POE/Tree.pm blib/lib/Acme/POE/Tree.pm
Manifying 1 pod document
RCAPUTO/Acme-POE-Tree-1.022.tar.gz
/usr/bin/make -- OK
Running make test
PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/01_basic.t .. ok
All tests successful.
Files=1, Tests=2, 6 wallclock secs ( 0.09 usr 0.03 sys + 0.53 cusr 0.06 csys = 0.71 CPU)
Result: PASS
RCAPUTO/Acme-POE-Tree-1.022.tar.gz
Tests succeeded but one dependency not OK (Curses)
RCAPUTO/Acme-POE-Tree-1.022.tar.gz
[dependencies] -- NA
### 在Shell中显示圣诞树 ###
只需要在终端上运行以下命令:
perl -MAcme::POE::Tree -e 'Acme::POE::Tree->new()->run()'
**示例输出**
![Gif 01: An animated christmas tree in Perl](http://s0.cyberciti.org/uploads/cms/2015/12/perl-tree.gif)
Gif 01: 一棵用Perl写的喜庆圣诞树
### 树的定制 ###
以下是我的脚本文件tree.pl的内容
#!/usr/bin/perl
use Acme::POE::Tree;
my $tree = Acme::POE::Tree->new(
{
star_delay => 1.5, # 每 1.5 秒闪烁一次星星
light_delay => 2, # 每 2 秒闪烁一次彩灯
run_for => 10, # 10 秒后自动退出
}
);
$tree->run();
这样就可以通过修改star_delay、run_for和light_delay参数的值来自定义你的树了。一棵提供消遣的终端圣诞树就此诞生。
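保存之后,直接用 perl 运行这个脚本,就能看到自定义后的效果:

perl tree.pl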
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/open-source/command-line-hacks/linux-unix-desktop-fun-christmas-tree-for-your-terminal/
作者Vivek Gite
译者:[soooogreen](https://github.com/soooogreen)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,49 @@
修复无法与SFTP服务器建立FTP连接
================================================================================
### 问题 ###
有一天我要连接到我的 web 服务器。我使用 [FileZilla][1] 连接到 FTP 服务器。当我输入主机名和密码来连接服务器后,我得到了下面的错误:
> Error: Cannot establish FTP connection to an SFTP server. Please select proper protocol.
>
> Error: Critical error: Could not connect to server
![FileZilla Cannot establish FTP connection to an SFTP server](http://itsfoss.com/wp-content/uploads/2015/12/FileZilla_FTP_SFTP_Problem_1.jpeg)
### 原因 ###
看见错误信息后我意识到了我的错误:我在尝试与一台 SFTP 服务器建立 [FTP][2] 连接。很明显我没有使用正确的协议,应该用 [SFTP][3] 而不是 FTP
如你在上图所见FileZilla默认使用的是FTP协议。
### 解决“Cannot establish FTP connection to an SFTP server”的方案 ###
解决方案很简单:使用 SFTP 协议而不是 FTP。你唯一可能遇到的问题是不知道在哪里把协议改成 SFTP。这正是我要帮助你的。
在 FileZilla 菜单中,进入 **文件->站点管理器**。
![FileZilla Site Manager](http://itsfoss.com/wp-content/uploads/2015/12/FileZilla_FTP_SFTP_Problem_2.jpeg)
在站点管理器中,进入“常规”选项卡,并选择 SFTP 协议。同样填上主机、端口号、用户名和密码等。
![Cannot establish FTP connection to an SFTP server](http://itsfoss.com/wp-content/uploads/2015/12/FileZilla_FTP_SFTP_Problem_3.png)
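另外,如果你想先在命令行下确认 SFTP 连接本身是正常的,也可以用 OpenSSH 自带的 sftp 客户端测试一下(下面的用户名和主机名只是占位示例):

sftp user@your-server.example.com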
我希望接下来的操作你可以自行完成。
我希望本篇教程帮助你修复了 “Cannot establish FTP connection to an SFTP server. Please select proper protocol.” 这个问题。作为相关阅读,你可以读一读[如何在 Linux 中架设 FTP 服务][4]这篇文章。
--------------------------------------------------------------------------------
via: http://itsfoss.com/fix-establish-ftp-connection-sftp-server/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:https://filezilla-project.org/
[2]:https://en.wikipedia.org/wiki/File_Transfer_Protocol
[3]:https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol
[4]:http://itsfoss.com/set-ftp-server-linux/

View File

@ -0,0 +1,106 @@
如何在CentOS上启用Software Collections(SCL)
================================================================================
红帽企业版 LinuxRHEL和它的社区版分支 CentOS 提供 10 年的生命周期,这意味着 RHEL/CentOS 的每个版本会提供长达 10 年的安全更新。虽然这么长的生命周期为企业用户提供了急需的系统兼容性和可靠性,但也存在一个缺点:随着底层的 RHEL/CentOS 版本接近生命周期的结束,核心应用和运行时环境会变得陈旧过时。例如 CentOS 6.5(它的生命周期结束时间是 2020 年 11 月 30 日)携带的 Python 2.6.6 和 MySQL 5.1.73,以今天的标准来看已经非常古老了。
另一方面在RHEL/CentOS上试图手动升级开发工具链和运行时环境存在潜在的可能使系统崩溃除非所有依赖都被正确解决。通常情况下手动升级都是不推荐的除非你知道你在干什么。
[Software Collections][1](SCL)源出现了以帮助解决RHEL/CentOS下的这种问题。SCL的创建就是为了给RHEL/CentOS用户提供一种方式以方便、安全地安装和使用应用程序和运行时环境的多个而且可能更新的版本同时避免把系统搞乱。与之相对的是第三方源它们可能会在已安装的包之间引起冲突。
最新的SCL提供
- Python 3.3 和 2.7
- PHP 5.4
- Node.js 0.10
- Ruby 1.9.3
- Perl 5.16.3
- MariaDB 和 MySQL 5.5
- Apache httpd 2.4.6
在这篇教程的剩余部分我会展示一下如何配置SCL源以及如何安装和启用SCL中的包。
### 配置SCL源
SCL可用于CentOS 6.5及更新的版本。要配置SCL源只需执行
$ sudo yum install centos-release-SCL
要启用和运行SCL中的应用你还需要安装下列包
$ sudo yum install scl-utils-build
执行下面的命令可以查看SCL中可用包的完整列表
$ yum --disablerepo="*" --enablerepo="scl" list available
![](https://c2.staticflickr.com/6/5730/23304424250_f5c8a09584_c.jpg)
### 从SCL中安装和启用包
既然你已配置好了SCL你可以继续并从SCL中安装包了。
你可以搜索SCL中的包
$ yum --disablerepo="*" --enablerepo="scl" search <keyword>
我们假设你要安装Python 3.3。
继续就像通常安装包那样使用yum安装
$ sudo yum install python33
任何时候你都可以查看从SCL中安装的包的列表只需执行
$ scl --list
----------
python33
SCL的优点之一是安装其中的包不会覆盖任何系统文件并且保证了不会引起系统中其它库和应用的冲突。
例如若果在安装python33包后检查默认的python版本你会发现默认的版本并没有改变
$ python --version
----------
Python 2.6.6
如果想使用一个已经安装的SCL包你需要在每个命令中使用scl命令显式启用它译注即想在哪条命令中使用SCL中的包就得通过scl命令执行该命令
$ scl enable <scl-package-name> <command>
例如要针对python命令启用python33包
$ scl enable python33 'python --version'
----------
Python 3.3.2
如果想在启用python33包时执行多条命令你可以像下面那样创建一个启用SCL的bash会话
$ scl enable python33 bash
在这个bash会话中默认的python会被切换为3.3版本直到你输入exit退出会话。
![](https://c2.staticflickr.com/6/5642/23491549632_1d08e163cc_c.jpg)
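另外,每个 SCL 包都会在 /opt/rh/<集合名>/ 下安装一个 enable 脚本(下面以 python33 为例,具体路径请以你的系统为准)。如果想在当前 shell 中持久启用某个集合,也可以直接 source 它:

$ source /opt/rh/python33/enable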
简而言之SCL 有几分像 Python 的虚拟环境,但更通用,因为你可以为多得多的应用启用/禁用 SCL 会话,而不仅仅是 Python。
想了解更详细的 SCL 指南,请参考官方的[快速入门指南][2]。
--------------------------------------------------------------------------------
via: http://xmodulo.com/enable-software-collections-centos.html
作者:[Dan Nanni][a]
译者:[bianjp](https://github.com/bianjp)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://www.softwarecollections.org/
[2]:https://www.softwarecollections.org/docs/

View File

@ -0,0 +1,75 @@
Linux/Unix桌面趣事让桌面下雪
================================================================================
在这个节日里感到孤独么试一下Xsnow吧。它是一个可以在Unix/Linux桌面下下雪的app。圣诞老人和他的驯鹿会在屏幕中奔跑伴随着雪片让你感受到节日的感觉。
我第一次安装它是在十三四年前。它最初是 1984 年在 Macintosh 系统上创造的。你可以用下面的方法来安装:
### 安装 xsnow ###
Debian/Ubuntu/Mint用户用下面的命令
$ sudo apt-get install xsnow
Freebsd用户输入下面的命令
# cd /usr/ports/x11/xsnow/
# make install clean
或者尝试添加包:
# pkg_add -r xsnow
#### 其他发行版的方法 ####
1. Fedora/RHEL/CentOS在[rpmfusion][1]仓库中找找。
2. Gentoo用户试下Gentoo portage也就是[emerge -p xsnow][2]
3. Opensuse用户使用yast搜索xsnow
### 我该如何使用xsnow ###
打开终端(程序 > 附件 > 终端),输入下面的命令启动 xsnow
$ xsnow
示例输出:
![Fig.01: Snow for your Linux and Unix desktop systems](http://files.cyberciti.biz/uploads/tips/2011/12/application-to-bring-snow-to-desktop_small.png)
图01: 在Linux和Unix桌面中显示雪花
你可以设置背景位蓝色,并让它下白雪,输入:
$ xsnow -bg blue -sc snow
设置最大的雪片数量,并让它尽可能快地运行,输入:
$ xsnow -snowflakes 10000 -delay 0
不要显示圣诞树和圣诞老人满屏幕地跑,输入:
$ xsnow -notrees -nosanta
关于xsnow更多的信息和选项在命令行下输入man xsnow查看手册
$ man xsnow
建议阅读
- 官网[下载 Xsnow][1]
- 注意[MS-Windows][2]和[Mac OS X version][3]有一次性的共享软件费用。
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/tips/linux-unix-xsnow.html
作者Vivek Gite
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://rpmfusion.org/Configuration
[2]:http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?part=2&chap=1
[3]:http://dropmix.xs4all.nl/rick/Xsnow/
[4]:http://dropmix.xs4all.nl/rick/WinSnow/
[5]:http://dropmix.xs4all.nl/rick/MacOSXSnow/

View File

@ -0,0 +1,40 @@
Linux/Unix 桌面趣事:蒸汽火车
================================================================================
一个[最常见的错误][1]是把ls输入成了sl。我已经设置了[一个alias][2]也就是alias sl=ls。但是你也许就错过了带汽笛的蒸汽小火车了。
sl 是一个玩笑软件,或者说是一个 Unix 游戏。当你错误地把 “ls” 输入成 “sl”Steam Locomotive蒸汽火车就会有一辆蒸汽火车穿过你的屏幕。
### 安装 sl ###
在Debian/Ubuntu下输入下面的命令
# apt-get install sl
它同样也在Freebsd和其他类Unix的操作系统上存在。下面把ls输错成sl
$ sl
![Fig.01: Run steam locomotive across the screen if you type "sl" instead of "ls"](http://files.cyberciti.biz/uploads/tips/2011/05/sl_command_steam_locomotive.png)
图01: 如果你把“ls”输入成“sl”蒸汽火车会穿过你的屏幕。
它同样支持下面的选项:
- **-a** : 似乎发生了意外,你会为那些哭喊求助的人们感到难过。
- **-l** : 显示小一点的火车
- **-F** : 火车飞起来
- **-e** : 允许被 Ctrl+C 中断
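顺便分享一个小技巧(仅供参考):如果你不想每次都被火车堵住,可以在 ~/.bashrc 里给 sl 默认加上 -e 选项,这样随时可以用 Ctrl+C 中断它:

alias sl='sl -e'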
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/tips/displays-animations-when-accidentally-you-type-sl-instead-of-ls.html
作者Vivek Gite
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.cyberciti.biz/tips/my-10-unix-command-line-mistakes.html
[2]:http://bash.cyberciti.biz/guide/Create_and_use_aliases

View File

@ -0,0 +1,65 @@
Linux/Unix桌面趣事终端ASCII水族箱
================================================================================
你可以在你的终端中使用ASCIIQuarium安全地欣赏海洋的神秘了。它是一个用perl写的ASCII艺术水族箱/海洋动画。
### 安装 Term::Animation ###
首先你需要安装名为 Term-Animation 的 perl 模块。打开终端(选择 程序 > 附件 > 终端),并输入:
$ sudo apt-get install libcurses-perl
$ cd /tmp
$ wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz
$ tar -zxvf Term-Animation-2.4.tar.gz
$ cd Term-Animation-2.4/
$ perl Makefile.PL && make && make test
$ sudo make install
### 下载安装ASCIIQuarium ###
接着再终端中输入:
$ cd /tmp
$ wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz
$ tar -zxvf asciiquarium.tar.gz
$ cd asciiquarium_1.0/
$ sudo cp asciiquarium /usr/local/bin
$ sudo chmod 0755 /usr/local/bin/asciiquarium
### 我怎么浏览ASCII水族箱? ###
输入下面的命令:
$ /usr/local/bin/asciiquarium
或者
$ perl /usr/local/bin/asciiquarium
![Fig.01: ASCII Aquarium](http://s0.cyberciti.org/uploads/tips/2011/01/screenshot-ASCIIQuarium.png)
### 相关媒体 ###
youtube 视频
<iframe width="596" height="335" frameborder="0" allowfullscreen="" src="//www.youtube.com/embed/MzatWgu67ok"></iframe>
[视频01 ASCIIQuarium - Linux/Unix桌面上的海洋动画][1]
### 下载ASCII Aquarium的KDE和Mac OS X版本 ###
[下载asciiquarium][2]。如果你运行的是Mac OS X试下一个可以直接使用已经打包好的[版本][3]。对于KDE用户试试基于Asciiquarium的[KDE屏幕保护程序][4]
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/tips/linux-unix-apple-osx-terminal-ascii-aquarium.html
作者Vivek Gite
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://youtu.be/MzatWgu67ok
[2]:http://www.robobunny.com/projects/asciiquarium/html/
[3]:http://habilis.net/macasciiquarium/
[4]:http://kde-look.org/content/show.php?content=29207

View File

@ -0,0 +1,91 @@
Linux/Unix桌面趣事猫和老鼠在屏幕中追逐
================================================================================
Oneko 是一个有趣的应用。它会把你的光标变成一只老鼠,并创建一只可爱的小猫,这只小猫会一直追着你的鼠标光标跑。“neko” 这个单词在日语中的意思是“猫”。它最初是由一位日本人开发的 Macintosh 桌面附件。
### 安装 oneko ###
试下下面的命令:
$ sudo apt-get install oneko
示例输出:
[sudo] password for vivek:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
oneko
0 upgraded, 1 newly installed, 0 to remove and 10 not upgraded.
Need to get 38.6 kB of archives.
After this operation, 168 kB of additional disk space will be used.
Get:1 http://debian.osuosl.org/debian/ squeeze/main oneko amd64 1.2.sakura.6-7 [38.6 kB]
Fetched 38.6 kB in 1s (25.9 kB/s)
Selecting previously deselected package oneko.
(Reading database ... 274152 files and directories currently installed.)
Unpacking oneko (from .../oneko_1.2.sakura.6-7_amd64.deb) ...
Processing triggers for menu ...
Processing triggers for man-db ...
Setting up oneko (1.2.sakura.6-7) ...
Processing triggers for menu ...
FreeBSD用户输入下面的命令安装oneko
# cd /usr/ports/games/oneko
# make install clean
### 我该如何使用oneko ###
输入下面的命令:
$ oneko
你可以把猫变成 “tora-neko”一只带有像白虎一样条纹的猫
$ oneko -tora
### 不喜欢猫? ###
你可以用狗代替猫:
$ oneko -dog
下面可以用樱花代替猫:
$ oneko -sakura
用大道寺代替猫:
$ oneko -tomoyo
### 查看相关媒体 ###
这个教程同样也有视频格式:
youtube 视频
<iframe width="596" height="335" frameborder="0" allowfullscreen="" src="http://www.youtube.com/embed/Nm3SkXThL0s"></iframe>
(Video.01: 示例 - 在Linux下安装和使用oneko)
### 其他选项 ###
你可以传入下面的选项:
1. **-tofocus**:让猫在获得焦点的窗口顶部奔跑。当焦点窗口不在视野中时,猫像平常那样追逐老鼠。
2. **-position 坐标** 指定X和Y来调整猫相对老鼠的位置
3. **-rv**:将前景色和背景色对调
4. **-fg 颜色** : 前景色 (比如 oneko -dog -fg red)。
5. **-bg 颜色** : 背景色 (比如 oneko -dog -bg green)。
6. 查看oneko的手册获取更多信息。
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/open-source/oneko-app-creates-cute-cat-chasing-around-your-mouse/
作者Vivek Gite
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,119 @@
Linux 教学之教你练打字
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-featured.png)
All articles in the [Learn with Linux][1] series:
- [Learn with Linux: Learning to Type][2]
- [Learn with Linux: Physics Simulation][3]
- [Learn with Linux: Learning Music][4]
- [Learn with Linux: Two Geography Apps][5]
- [Learn with Linux: Master Your Math][6]
Preface: Linux offers a wealth of educational software and tools covering many subjects for every grade and age, most of it interactive. This "Learn with Linux" series introduces a selection of these applications.
Typing is second nature to many people who use a keyboard every day. Yet how many of them still hammer away quickly with just two fingers? Even when school teaches us proper keyboard technique (LCTT translator's note: er...), we tend to slowly abandon correct typing posture and settle into the habit of poking at the keyboard with a couple of digits.
The two applications below can help you take control of your keyboard, so that your fingers can keep up with your thoughts and your train of thought is no longer interrupted. Plenty of flashier alternatives exist, but these two are the simplest and easiest to get started with.
### TuxType (or TuxTyping) ###
TuxType is aimed at children. Through a set of fun games, schoolchildren can complete simple exercises and pick up the new skill of ten-finger touch typing.
TuxType is in the standard repositories of Debian and its derivatives (including all Ubuntu derivatives); install it with the following command:
sudo apt-get install tuxtype
The application opens with a plain Tux interface and some rather grating MIDI music; fortunately, the speaker button in the bottom-right corner lets you turn the sound down. (LCTT translator's note: Tux is the Linux mascot; Linus said its expression was designed to capture the contentment of having just finished a beer, see "Just for Fun".)
![learntotype-tuxtyping-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-main.jpg)
The first two options, "Fish Cascade" and "Comet Zap", are typing games that take you straight into the action.
The third option, "Lessons", offers more than 40 simple lessons, each adding one more letter to practice; along the way you get hints about which finger to use for each key.
![learntotype-tuxtyping-exd1](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd1.jpg)
![learntotype-tuxtyping-exd2](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd2.jpg)
At a more advanced level you can practice typing whole sentences; for some reason, sentence practice is tucked away under "Options". (LCTT translator's note: the first practice sentence is "The quick brown fox jumps over the lazy dog", which contains all 26 letters of the alphabet; it is handy both for spotting dead keys and as a staple of English typing practice.)
![learntotype-tuxtyping-phrase](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-phrase.jpg)
The games have you type words to help Tux catch fish or zap falling comets, training both speed and accuracy.
![learntotype-tuxtyping-fish](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-fish.jpg)
![learntotype-tuxtyping-zap](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-zap.jpg)
Besides being fun, these games also train spelling, speed, and hand-eye coordination: if you play them seriously, you have to keep your eyes on the screen and type without looking at the keyboard.
### GNU Typist (gtypist) ###
For adults and experienced typists, GNU Typist may be the better fit. It is a GNU project and runs in the console.
GNU Typist is also in the repositories of most Debian derivatives; run the following command to install it (the package and binary are named gtypist):
    sudo apt-get install gtypist
You probably won't find it in your application menus; start it from a terminal instead:
    gtypist
The interface is simple and free of frills: it presents the lesson material and lets you pick what to work on.
![learntotype-gtype-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-main.png)
The lessons are straightforward and detailed.
![learntotype-gtype-lesson](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-lesson.png)
During the interactive exercises, any mistyped character is highlighted where it occurs. There is no fancy interface to distract you, so you can concentrate on practicing. A block of statistics in the bottom-right corner of each lesson shows how you performed; make too many mistakes and you may fail the level.
![learntotype-gtype-mistake](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-mistake.png)
Simple exercises just have you repeat strings of characters, while advanced ones ask you to type entire sentences.
![learntotype-gtype-warmup](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmup.png)
Below, the error rate has exceeded the 3% limit; that is too high, and you will need to bring it down.
![learntotype-gtype-warmupfail](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmupfail.png)
Some drills serve special goals, such as the "balanced keyboard drill" (LCTT translator's note: apparently meant to build finger feel).
![learntotype-gtype-balanceddrill](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-balanceddrill.png)
Below is a speed drill.
![learntotype-gtype-speed-simple](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-simple.png)
And here you are asked to type a passage from a classic text.
![learntotype-gtype-speed-advanced](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-advanced.png)
If you want to practice another language, it only takes a command-line argument.
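For example, gtypist takes a lesson script as an argument, and distributions typically install the bundled non-English lessons under /usr/share/gtypist. The sketch below assumes that layout and those file names, so check your own installation:
    $ ls /usr/share/gtypist        # list the installed .typ lesson scripts
    $ gtypist esp.typ              # start the Spanish lessons, if present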
![learntotype-gtype-more-lessons](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-more-lessons.png)
### Conclusion ###
If you want to polish your typing, Linux offers plenty of software. The two applications introduced here are simple in interface but rich in content, and they will satisfy most typing enthusiasts. If you use, or know of, another excellent typing tutor, please post it in the comments so the rest of us can learn something too.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/learn-to-type-in-linux/
Author: [Attila Orosz][a]
Translator: [bazz2](https://github.com/bazz2)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]:https://www.maketecheasier.com/author/attilaorosz/
[1]:https://www.maketecheasier.com/series/learn-with-linux/
[2]:https://www.maketecheasier.com/learn-to-type-in-linux/
[3]:https://www.maketecheasier.com/linux-physics-simulation/
[4]:https://www.maketecheasier.com/linux-learning-music/
[5]:https://www.maketecheasier.com/linux-geography-apps/
[6]:https://www.maketecheasier.com/learn-linux-maths/

View File

@ -0,0 +1,107 @@
Learn with Linux: Physics Simulation
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/07/physics-fetured.jpg)
All articles in the [Learn with Linux][1] series:
- [Learn with Linux: Learning to Type][2]
- [Learn with Linux: Physics Simulation][3]
- [Learn with Linux: Learning Music][4]
- [Learn with Linux: Two Geography Apps][5]
- [Learn with Linux: Master Your Math][6]
Preface: Linux offers a wealth of educational software and tools covering many subjects for every grade and age, most of it interactive. This "Learn with Linux" series introduces a selection of these applications.
Physics is an interesting subject, and the proof is that almost any physics lesson can be demonstrated with concrete visuals. Watching physics in action is a wonderful experience, especially when you do not need a classroom for it. Linux offers many fine scientific applications that provide exactly that experience; this article focuses on a few of them.
### 1. Step ###
[Step][7] is an interactive physics simulator and part of [KDEEdu][8] (the KDE Education project). Nobody knows what it does better than its authors. The project homepage says: "[Step] works like this: you place some bodies on the scene, add some forces such as gravity or springs, then click the Simulate button, and the software shows you how your objects move under real-world physical laws. You can change the properties of bodies and forces (even during the simulation) and observe what happens with different settings. Step lets you learn physics from experience!"
Step depends on Qt and a number of other packages that KDE relies on. It is projects like KDEEdu that make KDE so powerful, though you may have to put up with the large desktop stack this pulls in.
Debian's repositories include Step; install it from a terminal:
sudo apt-get install step
Under KDE it needs only a few extra dependencies and installs in seconds.
Step has a simple interface, and it drops you straight into simulation.
![physics-step-main](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-main.png)
You will find all available objects on the left-hand side: different point masses, air, differently shaped bodies, springs, and various forces (area 1). If you select an object, a short description of it appears on the right (area 2), together with a description of the world you have created (mainly the objects it contains, area 3), the properties of the currently selected object (area 4), and your edit history (area 5).
![physics-step-parts](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-parts.png)
Once you have placed everything, just press the "Simulate" button to watch the objects interact.
![physics-step-simulate1](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate1.png)
![physics-step-simulate2](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate2.png)
![physics-step-simulate3](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate3.png)
To learn more about Step, press F1 and the KDE Help Center will bring up the detailed manual.
### 2. Lightspeed ###
Lightspeed is a simple GTK+ and OpenGL based simulator that shows how a fast-moving object would be observed. Its theoretical basis is Einstein's special relativity; Lightspeed's [sourceforge page][9] describes it like this: when an object accelerates to more than a few thousand kilometers per second, it begins to appear warped and discolored, and as it is accelerated further toward the speed of light (299,792,458 m/s) these effects become ever more pronounced; viewing the object from different directions yields completely different distortions.
The effects of relativistic speed are the following (LCTT translator's note: all of them can be derived from the constancy of the speed of light), with a quick worked example after the list:
- **The Lorentz contraction**: the object appears shorter
- **The Doppler red/blue shift**: the object's color changes
- **The headlight effect**: the object's brightness changes (LCTT translator's note: an object moving near light speed radiates photons strongly in its direction of travel, so it looks very bright from the front and dim when viewed from behind)
- **Optical aberration**: the object appears warped and deformed
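To put a number on the first effect: lengths shrink by the factor sqrt(1 - v^2/c^2). A quick sanity check with bc in the shell shows that at 90% of the speed of light an object already appears compressed to roughly 44% of its rest length:
    $ echo 'scale=5; sqrt(1 - 0.9^2)' | bc -l
    .43588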
Lightspeed is in the Debian repositories; run the following command to install it:
sudo apt-get install lightspeed
The user interface is very simple: an object (you can download more shapes from sourceforge) travels along the x axis (start the motion by pressing A, or via the Animation entry in the Object menu).
![physics-lightspeed](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed.png)
You control the object's speed with the slider on the right.
![physics-lightspeed-deform](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-deform.png)
A few other simple controls let you add further visual effects.
![physics-lightspeed-visual](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-visual.png)
Clicking and dragging in the window changes the viewing angle, and the Camera menu lets you change the background color, the object's rendering mode, and other effects.
### Special mention: Physion ###
Physion is a very entertaining and good-looking physics simulator, more fun and prettier than either of the two applications above. Unfortunately, at the time of writing its [website][10] was having problems and the download page was unusable.
Judging from the videos they have posted on YouTube, Physion is well worth downloading and playing with. Until the website recovers, we can only watch the demo videos.
YouTube video:
<iframe frameborder="0" src="//www.youtube.com/embed/P32UHa-3BfU?autoplay=1&amp;autohide=2&amp;border=0&amp;wmode=opaque&amp;enablejsapi=1&amp;controls=0&amp;showinfo=0" id="youtube-iframe"></iframe>
Do you have another fun physics simulation, demonstration, or teaching application for Linux? If so, please share it in the comments.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/linux-physics-simulation/
Author: [Attila Orosz][a]
Translator: [bazz2](https://github.com/bazz2)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]:https://www.maketecheasier.com/author/attilaorosz/
[1]:https://www.maketecheasier.com/series/learn-with-linux/
[2]:https://www.maketecheasier.com/learn-to-type-in-linux/
[3]:https://www.maketecheasier.com/linux-physics-simulation/
[4]:https://www.maketecheasier.com/linux-learning-music/
[5]:https://www.maketecheasier.com/linux-geography-apps/
[6]:https://www.maketecheasier.com/learn-linux-maths/
[7]:https://edu.kde.org/applications/all/step
[8]:https://edu.kde.org/
[9]:http://lightspeed.sourceforge.net/
[10]:http://www.physion.net/

View File

@ -0,0 +1,99 @@
Learn with Linux: Two Geography Apps
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-featured.png)
All articles in the [Learn with Linux][1] series:
- [Learn with Linux: Learning to Type][2]
- [Learn with Linux: Physics Simulation][3]
- [Learn with Linux: Learning Music][4]
- [Learn with Linux: Two Geography Apps][5]
- [Learn with Linux: Master Your Math][6]
Preface: Linux offers a wealth of educational software and tools covering many subjects for every grade and age, most of it interactive. This "Learn with Linux" series introduces a selection of these applications.
Geography is an interesting subject that we encounter every day, often without noticing. When you fire up a GPS, a SatNav, or Google Maps, you are already using the geographic data these services provide; when you hear news about a country or some financial figures, much of that information belongs to geography as well. Linux offers many applications for studying geography, useful both for teaching and for self-study.
### Kgeography ###
Most Linux distributions carry only two geography-related applications in their repositories, and both belong to KDE, or more precisely to the KDE Education project. Kgeography uses simple color-coded maps of whichever country you select.
On Ubuntu and its derivatives, install it from a terminal with:
sudo apt-get install kgeography
The interface is minimal: a selection screen lets you choose among different countries.
![learn-geography-kgeo-pick](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-pick.png)
Clicking an area of the map displays the country it belongs to and its capital.
![learn-geography-kgeo-brit](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-brit.png)
Various quizzes are available to test your knowledge.
![learn-geography-kgeo-test](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-test.png)
The application tests your geography knowledge interactively and can help you prepare well for exams.
### Marble ###
Marble is a somewhat more advanced application that offers a view of the whole globe without requiring 3D acceleration.
![learn-geography-marble-main](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-main.png)
On Ubuntu and its derivatives, install Marble from a terminal with:
sudo apt-get install marble
Marble focuses on cartography; its main interface is a map.
![learn-geography-marble-atlas](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-atlas.jpg)
You can choose among different projections, such as Globe or Mercator (LCTT translator's note: ways of drawing the earth's surface on a plane). Drop-down menus let you pick a flat or an external view, including the Atlas view, the full-featured offline map provided by OpenStreetMap,
![learn-geography-marble-map](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-map.jpg)
a satellite view (courtesy of NASA),
![learn-geography-marble-satellite](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-satellite.jpg)
and political or even historical world maps.
![learn-geography-marble-history](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-history.jpg)
Besides offline maps with different views and plenty of data, Marble provides other kinds of information as well. You can toggle various offline info-boxes from the menu
![learn-geography-marble-offline](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-offline.png)
as well as online services.
![learn-geography-marble-online](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-online.png)
One interesting online service is Wikipedia integration: click one of the Wiki icons and a pop-up window shows detailed information about the selected place.
![learn-geography-marble-wiki](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-wiki.png)
The application also provides location tracking, route planning, place search, and other useful features. If you enjoy cartography, Marble will give you long hours of exploring and learning.
### Conclusion ###
Linux offers plenty of excellent educational applications, and geography is no exception. The two applications above can teach you a great deal of geography and let you test your knowledge in a playful, interactive way.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/linux-geography-apps/
Author: [Attila Orosz][a]
Translator: [bazz2](https://github.com/bazz2)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]:https://www.maketecheasier.com/author/attilaorosz/
[1]:https://www.maketecheasier.com/series/learn-with-linux/
[2]:https://www.maketecheasier.com/learn-to-type-in-linux/
[3]:https://www.maketecheasier.com/linux-physics-simulation/
[4]:https://www.maketecheasier.com/linux-learning-music/
[5]:https://www.maketecheasier.com/linux-geography-apps/
[6]:https://www.maketecheasier.com/learn-linux-maths/

View File

@ -1,32 +1,32 @@
Search Multiple Words / String Patterns Using the grep Command
================================================================================
How do I search for multiple strings or words using the grep command? For example, I'd like to search for word1, word2, word3 and so on within /path/to/file. How do I force grep to search for multiple words?
The [grep command supports regular expression][1] patterns. To search for multiple words, use the following syntax:
    grep 'word1\|word2\|word3' /path/to/file
In this example, to search for the words warning, error, and critical in a text log file called /var/log/messages, enter:
    $ grep 'warning\|error\|critical' /var/log/messages
To match whole words only, add the -w switch:
    $ grep -w 'warning\|error\|critical' /var/log/messages
The egrep command lets you skip the backslashes and use the following syntax instead:
    $ egrep -w 'warning|error|critical' /var/log/messages
I recommend that you also pass the -i (ignore case) and --color options, as follows:
    $ egrep -wi --color 'warning|error|critical' /var/log/messages
Sample outputs:
![Fig.01: Linux / Unix egrep Command Search Multiple Words Demo Output](http://s0.cyberciti.org/uploads/faq/2008/04/egrep-words-output.png)
Fig.01: Linux / Unix egrep command searching multiple words, demo output
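As a further variant, which the examples above do not cover but which is standard GNU grep behavior, you can pass one pattern per -e option and skip the | escaping altogether:
    $ grep -wi --color -e warning -e error -e critical /var/log/messages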
--------------------------------------------------------------------------------