diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 0000000000..64ae20885b
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "comic"]
+ path = comic
+ url = https://wxy@github.com/LCTT/comic.git
diff --git a/comic b/comic
new file mode 160000
index 0000000000..e5db5b880d
--- /dev/null
+++ b/comic
@@ -0,0 +1 @@
+Subproject commit e5db5b880dac1302ee0571ecaaa1f8ea7cf61901
diff --git a/translated/tech/20170210 How to Make Vim Editor as Bash-IDE Using bash-support Plugin in Linux.md b/published/20170210 How to Make Vim Editor as Bash-IDE Using bash-support Plugin in Linux.md
similarity index 59%
rename from translated/tech/20170210 How to Make Vim Editor as Bash-IDE Using bash-support Plugin in Linux.md
rename to published/20170210 How to Make Vim Editor as Bash-IDE Using bash-support Plugin in Linux.md
index 05461e5ce8..d449819ad2 100644
--- a/translated/tech/20170210 How to Make Vim Editor as Bash-IDE Using bash-support Plugin in Linux.md
+++ b/published/20170210 How to Make Vim Editor as Bash-IDE Using bash-support Plugin in Linux.md
@@ -1,17 +1,17 @@
-在 Linux 如何用 ‘bash-support’ 插件将 Vim 编辑器打造成一个 Bash-IDE
+如何在 Linux 中用 bash-support 插件将 Vim 编辑器打造成一个 Bash-IDE
============================================================
-IDE([集成开发环境][1])就是一个软件,它为了最大化程序员生产效率,提供了很多编程所需的设施和组件。 IDE 将所有开发集中到一个程序中,使得程序员可以编写、修改、编译、部署以及调试程序。
+IDE([集成开发环境][1])就是这样一个软件,它为了最大化程序员生产效率,提供了很多编程所需的设施和组件。 IDE 将所有开发工作集中到一个程序中,使得程序员可以编写、修改、编译、部署以及调试程序。
-在这篇文章中,我们会介绍如何通过使用 bash-support vim 插件将[Vim 编辑器安装和配置][2] 为一个 Bash-IDE。
+在这篇文章中,我们会介绍如何通过使用 bash-support vim 插件将 [Vim 编辑器安装和配置][2] 为一个 Bash-IDE。
#### 什么是 bash-support.vim 插件?
-bash-support 是一个高度定制化的 vim 插件,它允许你插入:文件头、补全语句、注释、函数、以及代码块。它也使你可以进行语法检查、使脚本可执行、通过一次按键启动调试器;完成所有的这些而不需要关闭编辑器。
+bash-support 是一个高度定制化的 vim 插件,它允许你插入:文件头、补全语句、注释、函数、以及代码块。它也使你可以进行语法检查、使脚本可执行、一键启动调试器;完成所有这些都不需要关闭编辑器。
-它使用快捷键(映射),通过有组织、一致的文件内容编写/插入,使得 bash 脚本变得有趣和愉快。
+它使用快捷键(映射),通过有组织地、一致地编写/插入文件内容,使得 bash 脚本编写变得有趣和愉快。
-插件当前版本是 4.3,版本 4.0 重写了版本 3.12.1,4.0 及之后的版本基于一个全新的、更强大的、和之前版本模板语法不同的模板系统。
+插件当前版本是 4.3,4.0 版本重写了之前的 3.12.1 版本,4.0 及之后的版本基于一个全新的、更强大的、和之前版本模板语法不同的模板系统。
### 如何在 Linux 中安装 Bash-support 插件
@@ -36,64 +36,65 @@ $ unzip ~/Downloads/bash-support.zip
$ vi ~/.vimrc
```
-通过插入下面一行:
+并插入下面一行:
```
filetype plugin on
-set number #optionally add this to show line numbers in vim
+set number # 可选,增加这行以在 vim 中显示行号
```
### 如何在 Vim 编辑器中使用 Bash-support 插件
-为了简化使用,通常使用的结构和特定操作可以分别通过键映射插入/执行。 ~/.vim/doc/bashsupport.txt 和 ~/.vim/bash-support/doc/bash-hotkeys.pdf 或者 ~/.vim/bash-support/doc/bash-hotkeys.tex 文件中介绍了映射。
+为了简化使用,通常使用的结构和特定操作可以分别通过键映射来插入/执行。 `~/.vim/doc/bashsupport.txt` 和 `~/.vim/bash-support/doc/bash-hotkeys.pdf` 或者 `~/.vim/bash-support/doc/bash-hotkeys.tex` 文件中介绍了映射。
-##### 重要:
+**重要:**
-1. 所有映射(`(\)+charater(s)` 组合)都是针对特定文件类型的:为了避免和其它插件的映射冲突,它们只适用于 ‘sh’ 文件。
+1. 所有映射(`(\)+charater(s)` 组合)都是针对特定文件类型的:为了避免和其它插件的映射冲突,它们只适用于 `sh` 文件。
2. 使用键映射的时候打字速度也有关系,引导符 `('\')` 和后面字符的组合要在特定短时间内才能识别出来(很可能少于 3 秒 - 基于假设)。
下面我们会介绍和学习使用这个插件一些显著的功能:
#### 如何为新脚本自动生成文件头
-看下面的事例文件头,为了要在你所有的新脚本中自动创建该文件头,请按照以下步骤操作。
+看下面的示例文件头,为了要在你所有的新脚本中自动创建该文件头,请按照以下步骤操作。
[
- ![脚本事例文件头选项](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![脚本示例文件头选项](http://www.tecmint.com/wp-content/uploads/2017/02/Script-Header-Options.png)
][3]
-脚本事例文件头选项
+*脚本示例文件头选项*
首先设置你的个人信息(作者名称、作者参考、组织、公司等)。在一个 Bash 缓冲区(像下面这样打开一个测试脚本)中使用映射 `\ntw` 启动模板设置向导。
-选中选项(1)设置个性化文件,然后按回车键。
+选中选项 1 设置个性化文件,然后按回车键。
```
$ vi test.sh
```
+
[
- ![在脚本文件中设置个性化信息](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![在脚本文件中设置个性化信息](http://www.tecmint.com/wp-content/uploads/2017/02/Set-Personalization-in-Scripts.png)
][4]
-在脚本文件中设置个性化信息
+*在脚本文件中设置个性化信息*
-之后,再次输入回车键。然后再一次选中选项(1)设置个性化文件的路径并输入回车。
+之后,再次输入回车键。然后再一次选中选项 1 设置个性化文件的路径并输入回车。
[
- ![设置个性化文件路径](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![设置个性化文件路径](http://www.tecmint.com/wp-content/uploads/2017/02/Set-Personalization-File-Location.png)
][5]
-设置个性化文件路径
+*设置个性化文件路径*
-设置向导会把目标文件 .vim/bash-support/rc/personal.templates 拷贝到 .vim/templates/personal.templates,打开并编辑它,在这里你可以输入你的信息。
+设置向导会把目标文件 `.vim/bash-support/rc/personal.templates` 拷贝到 `.vim/templates/personal.templates`,打开并编辑它,在这里你可以输入你的信息。
-按 `i` 键像截图那样在一个单引号中插入合适的值。
+按 `i` 键像截图那样在单引号中插入合适的值。
[
- ![在脚本文件头添加信息](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![在脚本文件头添加信息](http://www.tecmint.com/wp-content/uploads/2017/02/Add-Info-in-Script-Header.png)
][6]
-在脚本文件头添加信息
+*在脚本文件头添加信息*
一旦你设置了正确的值,输入 `:wq` 保存并退出文件。关闭 Bash 测试脚本,打开另一个脚本来测试新的配置。现在文件头中应该有和下面截图类似的你的个人信息:
@@ -101,108 +102,109 @@ $ vi test.sh
$ test2.sh
```
[
- ![自动添加文件头到脚本](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![自动添加文件头到脚本](http://www.tecmint.com/wp-content/uploads/2017/02/Auto-Adds-Header-to-Script.png)
][7]
-自动添加文件头到脚本
+*自动添加文件头到脚本*
-#### 使 Bash-support 插件帮助信息可访问
+#### 添加 Bash-support 插件帮助信息
-为此,在 Vim 命令行输入下面的命令并按回车键,它会创建 .vim/doc/tags 文件:
+为此,在 Vim 命令行输入下面的命令并按回车键,它会创建 `.vim/doc/tags` 文件:
```
:helptags $HOME/.vim/doc/
```
+
[
- ![在 Vi 编辑器添加插件帮助](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![在 Vi 编辑器添加插件帮助](http://www.tecmint.com/wp-content/uploads/2017/02/Add-Plugin-Help-in-Vi-Editor.png)
][8]
-在 Vi 编辑器添加插件帮助
+*在 Vi 编辑器添加插件帮助*
#### 如何在 Shell 脚本中插入注释
要插入一个块注释,在普通模式下输入 `\cfr`:
[
- ![添加注释到脚本](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![添加注释到脚本](http://www.tecmint.com/wp-content/uploads/2017/02/Add-Comments-to-Scripts.png)
][9]
-添加注释到脚本
+*添加注释到脚本*
#### 如何在 Shell 脚本中插入语句
-下面是一些用于插入语句的键映射(`n` – 普通模式, `i` – 插入模式):
+下面是一些用于插入语句的键映射(`n` – 普通模式、`i` – 插入模式、`v` – 可视模式):
-1. `\sc` – case in … esac (n, I)
-2. `\sei` – elif then (n, I)
-3. `\sf` – for in do done (n, i, v)
-4. `\sfo` – for ((…)) do done (n, i, v)
-5. `\si` – if then fi (n, i, v)
-6. `\sie` – if then else fi (n, i, v)
-7. `\ss` – select in do done (n, i, v)
-8. `\su` – until do done (n, i, v)
-9. `\sw` – while do done (n, i, v)
-10. `\sfu` – function (n, i, v)
-11. `\se` – echo -e “…” (n, i, v)
-12. `\sp` – printf “…” (n, i, v)
-13. `\sa` – 数组元素, ${.[.]} (n, i, v) 和其它更多的数组功能。
+1. `\sc` – `case in … esac` (n, i)
+2. `\sei` – `elif then` (n, i)
+3. `\sf` – `for in do done` (n, i, v)
+4. `\sfo` – `for ((…)) do done` (n, i, v)
+5. `\si` – `if then fi` (n, i, v)
+6. `\sie` – `if then else fi` (n, i, v)
+7. `\ss` – `select in do done` (n, i, v)
+8. `\su` – `until do done` (n, i, v)
+9. `\sw` – `while do done` (n, i, v)
+10. `\sfu` – `function` (n, i, v)
+11. `\se` – `echo -e "…"` (n, i, v)
+12. `\sp` – `printf "…"` (n, i, v)
+13. `\sa` – 数组元素, `${.[.]}` (n, i, v) 和其它更多的数组功能。
#### 插入一个函数和函数头
输入 `\sfu` 添加一个新的空函数,然后添加函数名并按回车键创建它。之后,添加你的函数代码。
[
- ![在脚本中插入新函数](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![在脚本中插入新函数](http://www.tecmint.com/wp-content/uploads/2017/02/Insert-New-Function-in-Script.png)
][10]
-在脚本中插入新函数
+*在脚本中插入新函数*
为了给上面的函数创建函数头,输入 `\cfu`,输入函数名称,按回车键并填入合适的值(名称、介绍、参数、返回值):
[
- ![在脚本中创建函数头](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![在脚本中创建函数头](http://www.tecmint.com/wp-content/uploads/2017/02/Insert-New-Function-in-Script.png)
][11]
-在脚本中创建函数头
+*在脚本中创建函数头*
#### 更多关于添加 Bash 语句的例子
-下面是一个使用 `\si` 插入一条 if 语句的例子:
+下面是一个使用 `\si` 插入一条 `if` 语句的例子:
[
- ![在脚本中插入语句](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![在脚本中插入语句](http://www.tecmint.com/wp-content/uploads/2017/02/Add-Insert-Statement-to-Script.png)
][12]
-在脚本中插入语句
+*在脚本中插入语句*
-下面的例子显示使用 `\se` 添加一条 echo 语句:
+下面的例子显示使用 `\se` 添加一条 `echo` 语句:
[
- ![在脚本中添加 echo 语句](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![在脚本中添加 echo 语句](http://www.tecmint.com/wp-content/uploads/2017/02/Add-echo-Statement-to-Script.png)
][13]
-在脚本中添加 echo 语句
+*在脚本中添加 echo 语句*
#### 如何在 Vi 编辑器中使用运行操作
下面是一些运行操作键映射的列表:
-1. `\rr` – 更新文件,运行脚本 (n, I)
-2. `\ra` – 设置脚本命令行参数 (n, I)
-3. `\rc` – 更新文件,检查语法 (n, I)
-4. `\rco` – 语法检查选项 (n, I)
-5. `\rd` – 启动调试器 (n, I)
-6. `\re` – 使脚本可/不可执行(*) (in)
+1. `\rr` – 更新文件,运行脚本 (n, i)
+2. `\ra` – 设置脚本命令行参数 (n, i)
+3. `\rc` – 更新文件,检查语法 (n, i)
+4. `\rco` – 语法检查选项 (n, i)
+5. `\rd` – 启动调试器 (n, i)
+6. `\re` – 使脚本可/不可执行(*) (n, i)
#### 使脚本可执行
编写完脚本后,保存它然后输入 `\re` 和回车键使它可执行。
[
- ![使脚本可执行](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![使脚本可执行](http://www.tecmint.com/wp-content/uploads/2017/02/make-script-executable.png)
][14]
-使脚本可执行
+*使脚本可执行*
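
在插件之外,`\re` 的效果大致等价于手动修改脚本的执行权限。下面是一个等价做法的示意(脚本名沿用上文的 test.sh):

```
$ chmod +x test.sh    # 使脚本可执行
$ chmod -x test.sh    # 取消可执行权限
```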
#### 如何在 Bash 脚本中使用预定义代码片段
@@ -211,23 +213,24 @@ $ test2.sh
```
$ .vim/bash-support/codesnippets/
```
+
[
- ![代码段列表](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![代码段列表](http://www.tecmint.com/wp-content/uploads/2017/02/list-of-code-snippets.png)
][15]
-代码段列表
+*代码段列表*
为了使用代码段,例如 `free-software-comment`,输入 `\nr` 并使用自动补全功能选择它的名称,然后输入回车键:
[
- ![添加代码段到脚本](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
+ ![添加代码段到脚本](http://www.tecmint.com/wp-content/uploads/2017/02/Add-Code-Snippet-to-Script.png)
][16]
-添加代码段到脚本
+*添加代码段到脚本*
#### 创建自定义预定义代码段
-可以在 ~/.vim/bash-support/codesnippets/ 目录下编写你自己的代码段。另外,你还可以从你正常的脚本代码中创建你自己的代码段:
+可以在 `~/.vim/bash-support/codesnippets/` 目录下编写你自己的代码段。另外,你还可以从你正常的脚本代码中创建你自己的代码段:
1. 选择你想作为代码段的部分代码,然后输入 `\nw` 并给它一个相近的文件名。
2. 要读入它,只需要输入 `\nr` 然后使用文件名就可以添加你自定义的代码段。
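
作为示意,下面是一个可以自己动手保存的代码段 —— 文件名 `strict-mode` 是我假设的,内容是常见的 bash “严格模式”模板:

```
$ cat ~/.vim/bash-support/codesnippets/strict-mode
# bash “严格模式”:遇错即停、使用未定义变量时报错、管道中的错误向后传播
set -euo pipefail
```

之后在脚本中输入 `\nr` 并补全 `strict-mode`,即可插入这段内容。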
@@ -243,17 +246,17 @@ $ .vim/bash-support/codesnippets/
![查看内建命令帮助](http://www.tecmint.com/wp-content/uploads/2017/02/View-Built-in-Command-Help.png)
][17]
-查看内建命令帮助
+*查看内建命令帮助*
更多参考资料,可以查看文件:
```
-~/.vim/doc/bashsupport.txt #copy of online documentation
+~/.vim/doc/bashsupport.txt #在线文档的副本
~/.vim/doc/tags
```
-访问 Bash-support 插件 GitHub 仓库:[https://github.com/WolfgangMehner/bash-support][18]
-在 Vim 网站访问 Bash-support 插件:[http://www.vim.org/scripts/script.php?script_id=365][19]
+- 访问 Bash-support 插件 GitHub 仓库:[https://github.com/WolfgangMehner/bash-support][18]
+- 在 Vim 网站访问 Bash-support 插件:[http://www.vim.org/scripts/script.php?script_id=365][19]
就是这些啦,在这篇文章中,我们介绍了在 Linux 中使用 Bash-support 插件安装和配置 Vim 为一个 Bash-IDE 的步骤。快去发现这个插件其它令人兴奋的功能吧,一定要在评论中和我们分享哦。
@@ -269,7 +272,7 @@ via: http://www.tecmint.com/use-vim-as-bash-ide-using-bash-support-in-linux/
作者:[Aaron Kili][a]
译者:[ictlyh](https://github.com/ictlyh)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20110123 How debuggers work Part 1 - Basics.md b/published/201704/20110123 How debuggers work Part 1 - Basics.md
similarity index 58%
rename from translated/tech/20110123 How debuggers work Part 1 - Basics.md
rename to published/201704/20110123 How debuggers work Part 1 - Basics.md
index 9288fea970..8d1bebae0c 100644
--- a/translated/tech/20110123 How debuggers work Part 1 - Basics.md
+++ b/published/201704/20110123 How debuggers work Part 1 - Basics.md
@@ -1,21 +1,21 @@
-[调试器的工作原理:第一篇-基础][21]
+调试器的工作原理(一):基础篇
============================================================
这是调试器工作原理系列文章的第一篇,我不确定这个系列会有多少篇文章,会涉及多少话题,但我仍会从这篇基础开始。
### 这一篇会讲什么
-我将为大家展示 Linux 中调试器的主要构成模块 - ptrace 系统调用。这篇文章所有代码都是基于 32 位 Ubuntu 操作系统.值得注意的是,尽管这些代码是平台相关的,将他们移植到其他平台应该并不困难。
+我将为大家展示 Linux 中调试器的主要构成模块 - `ptrace` 系统调用。这篇文章所有代码都是基于 32 位 Ubuntu 操作系统。值得注意的是,尽管这些代码是平台相关的,将它们移植到其它平台应该并不困难。
### 缘由
-为了理解我们要做什么,让我们先考虑下调试器为了完成调试都需要什么资源。调试器可以开始一个进程并调试这个进程,又或者将自己同某个已经存在的进程关联起来。调试器能够单步执行代码,设定断点并且将程序执行到断点,检查变量的值并追踪堆栈。许多调试器有着更高级的特性,例如在调试器的地址空间内执行表达式或者调用函数,甚至可以在进程执行过程中改变代码并观察效果。
+为了理解我们要做什么,让我们先考虑下调试器为了完成调试都需要什么资源。调试器可以开始一个进程并调试这个进程,又或者将自己同某个已经存在的进程关联起来。调试器能够单步执行代码,设定断点并且将程序执行到断点,检查变量的值并追踪堆栈。许多调试器有着更高级的特性,例如在调试器的地址空间内执行表达式或者调用函数,甚至可以在进程执行过程中改变代码并观察效果。
-尽管现代的调试器都十分的复杂 [[1]][13],但他们的工作的原理却是十分的简单。调试器的基础是操作系统与编译器 / 链接器提供的一些基础服务,其余的部分只是[简单的编程][14]。
+尽管现代的调试器都十分的复杂(我没有检查,但我确信 gdb 的代码行数至少有六位数),但它们的工作原理却是十分简单的。调试器的基础是操作系统与编译器 / 链接器提供的一些基础服务,其余的部分只是[简单的编程][14]而已。
### Linux 的调试 - ptrace
-Linux 调试器中的瑞士军刀便是 ptrace 系统调用 [[2]][15]。这是一种复杂却强大的工具,可以允许一个进程控制另外一个进程并从内部替换被控制进程的内核镜像的值[[3]][16].。
+Linux 调试器中的瑞士军刀便是 `ptrace` 系统调用(使用 `man 2 ptrace` 命令可以了解更多)。这是一种复杂却强大的工具,可以允许一个进程控制另外一个进程,并从内部探察和修改被控制进程的内存镜像(peek 和 poke 在系统编程中是很知名的叫法,指的是直接读写内存内容)。
接下来会深入分析。
@@ -49,7 +49,7 @@ int main(int argc, char** argv)
}
```
-看起来相当的简单:我们用 fork 命令创建了一个新的子进程。if 语句的分支执行子进程(这里称之为“target”),else if 的分支执行父进程(这里称之为“debugger”)。
+看起来相当的简单:我们用 `fork` 创建了一个新的子进程(这篇文章假定读者有一定的 Unix/Linux 编程经验。我假定你知道或至少了解 fork、exec 族函数与 Unix 信号)。if 语句的分支执行子进程(这里称之为 “target”),`else if` 的分支执行父进程(这里称之为 “debugger”)。
下面是 target 进程的代码:
@@ -69,18 +69,18 @@ void run_target(const char* programname)
}
```
-这段代码中最值得注意的是 ptrace 调用。在 "sys/ptrace.h" 中,ptrace 是如下定义的:
+这段代码中最值得注意的是 `ptrace` 调用。在 `sys/ptrace.h` 中,`ptrace` 是如下定义的:
```
long ptrace(enum __ptrace_request request, pid_t pid,
void *addr, void *data);
```
-第一个参数是 _request_,这是许多预定义的 PTRACE_* 常量中的一个。第二个参数为请求分配进程 ID。第三个与第四个参数是地址与数据指针,用于操作内存。上面代码段中的ptrace调用发起了 PTRACE_TRACEME 请求,这意味着该子进程请求系统内核让其父进程跟踪自己。帮助页面上对于 request 的描述很清楚:
+第一个参数是 `request`,这是许多预定义的 `PTRACE_*` 常量中的一个。第二个参数是请求所作用的进程 ID。第三个与第四个参数是地址与数据指针,用于操作内存。上面代码段中的 `ptrace` 调用发起了 `PTRACE_TRACEME` 请求,这意味着该子进程请求系统内核让其父进程跟踪自己。帮助页面上对于 request 的描述很清楚:
-> 意味着该进程被其父进程跟踪。任何传递给该进程的信号(除了 SIGKILL)都将通过 wait() 方法阻塞该进程并通知其父进程。**此外,该进程的之后所有调用 exec() 动作都将导致 SIGTRAP 信号发送到此进程上,使得父进程在新的程序执行前得到取得控制权的机会**。如果一个进程并不需要它的的父进程跟踪它,那么这个进程不应该发送这个请求。(pid,addr 与 data 暂且不提)
+> 意味着该进程被其父进程跟踪。任何传递给该进程的信号(除了 `SIGKILL`)都将通过 `wait()` 方法阻塞该进程并通知其父进程。**此外,该进程之后所有调用 `exec()` 的动作都将导致 `SIGTRAP` 信号发送到此进程上,使得父进程在新的程序执行前得到取得控制权的机会**。如果一个进程并不需要它的父进程跟踪它,那么这个进程不应该发送这个请求。(pid、addr 与 data 暂且不提)
-我高亮了这个例子中我们需要注意的部分。在 ptrace 调用后,run_target 接下来要做的就是通过 execl 传参并调用。如同高亮部分所说明,这将导致系统内核在 execl 创建进程前暂时停止,并向父进程发送信号。
+我高亮了这个例子中我们需要注意的部分。在 `ptrace` 调用后,`run_target` 接下来要做的就是通过 `execl` 传参并调用。如同高亮部分所说明,这将导致系统内核在新程序开始执行前暂时停止子进程,并向父进程发送信号。
是时候看看父进程做什么了。
@@ -110,11 +110,11 @@ void run_debugger(pid_t child_pid)
}
```
-如前文所述,一旦子进程调用了 exec,子进程会停止并被发送 SIGTRAP 信号。父进程会等待该过程的发生并在第一个 wait() 处等待。一旦上述事件发生了,wait() 便会返回,由于子进程停止了父进程便会收到信号(如果子进程由于信号的发送停止了,WIFSTOPPED 就会返回 true)。
+如前文所述,一旦子进程调用了 `exec`,子进程会停止并被发送 `SIGTRAP` 信号。父进程会等待该过程的发生并在第一个 `wait()` 处等待。一旦上述事件发生了,`wait()` 便会返回,由于子进程停止了父进程便会收到信号(如果子进程由于信号的发送停止了,`WIFSTOPPED` 就会返回 `true`)。
-父进程接下来的动作就是整篇文章最需要关注的部分了。父进程会将 PTRACE_SINGLESTEP 与子进程ID作为参数调用 ptrace 方法。这就会告诉操作系统,“请恢复子进程,但在它执行下一条指令前阻塞”。周而复始地,父进程等待子进程阻塞,循环继续。当 wait() 中传出的信号不再是子进程的停止信号时,循环终止。在跟踪器(父进程)运行期间,这将会是被跟踪进程(子进程)传递给跟踪器的终止信号(如果子进程终止 WIFEXITED 将返回 true)。
+父进程接下来的动作就是整篇文章最需要关注的部分了。父进程会将 `PTRACE_SINGLESTEP` 与子进程 ID 作为参数调用 `ptrace` 方法。这就会告诉操作系统,“请恢复子进程,但在它执行下一条指令前阻塞”。周而复始地,父进程等待子进程阻塞,循环继续。当 `wait()` 中传出的信号不再是子进程的停止信号时,循环终止。在跟踪器(父进程)运行期间,这将会是被跟踪进程(子进程)传递给跟踪器的终止信号(如果子进程终止 `WIFEXITED` 将返回 `true`)。
-icounter 存储了子进程执行指令的次数。这么看来我们小小的例子也完成了些有用的事情 - 在命令行中指定程序,它将执行该程序并记录它从开始到结束所需要的 cpu 指令数量。接下来就让我们这么做吧。
+`icounter` 存储了子进程执行指令的次数。这么看来我们小小的例子也完成了些有用的事情 - 在命令行中指定程序,它将执行该程序并记录它从开始到结束所需要的 CPU 指令数量。接下来就让我们这么做吧。
### 测试
@@ -131,11 +131,11 @@ int main()
```
-令我惊讶的是,跟踪器花了相当长的时间,并报告整个执行过程共有超过 100,000 条指令执行。仅仅是一条输出语句?什么造成了这种情况?答案很有趣[[5]][18]。Linux 的 gcc 默认会动态的将程序与 c 的运行时库动态地链接。这就意味着任何程序运行前的第一件事是需要动态库加载器去查找程序运行所需要的共享库。这些代码的数量很大 - 别忘了我们的跟踪器要跟踪每一条指令,不仅仅是主函数的,而是“整个过程中的指令”。
+令我惊讶的是,跟踪器花了相当长的时间,并报告整个执行过程共有超过 100,000 条指令执行。仅仅是一条输出语句?什么造成了这种情况?答案很有趣(至少如果你同我一样痴迷于机器/汇编语言)。Linux 的 gcc 默认会将程序与 C 运行时库动态地链接。这就意味着任何程序运行前的第一件事是需要动态库加载器去查找程序运行所需要的共享库。这些代码的数量很大 - 别忘了我们的跟踪器要跟踪每一条指令,不仅仅是主函数的,而是“整个进程中的指令”。
-所以当我将测试程序使用静态编译时(通过比较,可执行文件会多出 500 KB 左右的大小,这部分是 C 运行时库的静态链接),跟踪器提示只有大概 7000 条指令被执行。这个数目仍然不小,但是考虑到在主函数执行前 libc 的初始化以及主函数执行后的清除代码,这个数目已经是相当不错了。此外,printf 也是一个复杂的函数。
+所以当我将测试程序使用静态编译时(作为比较,可执行文件会多出 500 KB 左右的大小,这部分是静态链接的 C 运行时库),跟踪器提示只有大概 7000 条指令被执行。这个数目仍然不小,但是考虑到在主函数执行前 libc 的初始化以及主函数执行后的清除代码,这个数目已经是相当不错了。此外,`printf` 也是一个复杂的函数。
-仍然不满意的话,我需要的是“可以测试”的东西 - 例如可以完整记录每一个指令运行的程序执行过程。这当然可以通过汇编代码完成。所以我找到了这个版本的“Hello, world!”并编译了它。
+仍然不满意的话,我需要的是“可以测试”的东西 - 例如可以完整记录每一个指令运行的程序执行过程。这当然可以通过汇编代码完成。所以我找到了这个版本的 “Hello, world!” 并编译了它。
```
@@ -168,13 +168,11 @@ len equ $ - msg
```
-当然,现在跟踪器提示 7 条指令被执行了,这样一来很容易区分他们。
-
+当然,现在跟踪器提示 7 条指令被执行了,这样一来很容易区分它们。
### 深入指令流
-
-上面那个汇编语言编写的程序使得我可以向你介绍 ptrace 的另外一个强大的用途 - 详细显示被跟踪进程的状态。下面是 run_debugger 函数的另一个版本:
+上面那个汇编语言编写的程序使得我可以向你介绍 `ptrace` 的另外一个强大的用途 - 详细显示被跟踪进程的状态。下面是 `run_debugger` 函数的另一个版本:
```
void run_debugger(pid_t child_pid)
@@ -209,24 +207,16 @@ void run_debugger(pid_t child_pid)
}
```
-
-不同仅仅存在于 while 循环的开始几行。这个版本里增加了两个新的 ptrace 调用。第一条将进程的寄存器值读取进了一个结构体中。 sys/user.h 定义有 user_regs_struct。如果你查看头文件,头部的注释这么写到:
+不同仅仅存在于 `while` 循环的开始几行。这个版本里增加了两个新的 `ptrace` 调用。第一条将进程的寄存器值读取进了一个结构体中。`sys/user.h` 中定义有 `user_regs_struct`。如果你查看头文件,头部的注释这么写道:
```
-/* The whole purpose of this file is for GDB and GDB only.
- Don't read too much into it. Don't use it for
- anything other than GDB unless know what you are
- doing. */
-```
-
-```
-/* 这个文件只为了GDB而创建
+/* 这个文件只为了 GDB 而创建
 不用详细的阅读。如果你不知道你在干嘛,
不要在除了 GDB 以外的任何地方使用此文件 */
```
-不知道你做何感想,但这让我觉得我们找对地方了。回到例子中,一旦我们在 regs 变量中取得了寄存器的值,我们就可以通过将 PTRACE_PEEKTEXT 作为参数、 regs.eip(x86 上的扩展指令指针)作为地址,调用 ptrace ,读取当前进程的当前指令。下面是新跟踪器所展示出的调试效果:
+不知道你做何感想,但这让我觉得我们找对地方了。回到例子中,一旦我们在 `regs` 变量中取得了寄存器的值,我们就可以通过将 `PTRACE_PEEKTEXT` 作为参数、 `regs.eip`(x86 上的扩展指令指针)作为地址,调用 `ptrace` ,读取当前进程的当前指令(警告:如同我上面所说,文章很大程度上是平台相关的。我简化了一些设定 - 例如,x86 指令并不需要是 4 字节长(在我的 32 位 Ubuntu 上 unsigned int 是 4 字节),事实上许多指令都不是。要正确地从内存中读取指令,需要一个完整的反汇编器,我们这里没有,但实际的调试器是有的)。下面是新跟踪器所展示出的调试效果:
```
$ simple_tracer traced_helloworld
@@ -244,7 +234,7 @@ Hello, world!
```
-现在,除了 icounter,我们也可以观察到指令指针与它每一步所指向的指令。怎么来判断这个结果对不对呢?使用 objdump -d 处理可执行文件:
+现在,除了 `icounter`,我们也可以观察到指令指针与它每一步所指向的指令。怎么来判断这个结果对不对呢?使用 `objdump -d` 处理可执行文件:
```
$ objdump -d traced_helloworld
@@ -263,62 +253,36 @@ Disassembly of section .text:
804809b: cd 80 int $0x80
```
-
这个结果和我们跟踪器的结果就很容易比较了。
-
### 将跟踪器关联到正在运行的进程
-
-如你所知,调试器也能关联到已经运行的进程。现在你应该不会惊讶,ptrace 通过 以PTRACE_ATTACH 为参数调用也可以完成这个过程。这里我不会展示示例代码,通过上文的示例代码应该很容易实现这个过程。出于学习目的,这里使用的方法更简便(因为我们在子进程刚开始就可以让它停止)。
-
+如你所知,调试器也能关联到已经运行的进程。现在你应该不会惊讶,`ptrace` 通过以 `PTRACE_ATTACH` 为参数调用也可以完成这个过程。这里我不会展示示例代码,通过上文的示例代码应该很容易实现这个过程。出于学习目的,这里使用的方法更简便(因为我们在子进程刚开始就可以让它停止)。
### 代码
-
-上文中的简单的跟踪器(更高级的,可以打印指令的版本)的完整c源代码可以在[这里][20]找到。它是通过 4.4 版本的 gcc 以 -Wall -pedantic --std=c99 编译的。
-
+上文中的简单的跟踪器(更高级的、可以打印指令的版本)的完整 C 源代码可以在[这里][20]找到。它是通过 4.4 版本的 gcc 以 `-Wall -pedantic --std=c99` 编译的。
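
如果你想在本地复现,大致的编译、运行步骤如下(源文件名是我假设的,请以下载到的代码为准):

```
$ gcc -Wall -pedantic --std=c99 -o simple_tracer simple_tracer.c
$ ./simple_tracer traced_helloworld
```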
### 结论与计划
+诚然,这篇文章并没有涉及很多内容 - 我们距离亲手完成一个实际的调试器还有很长的路要走。但我希望这篇文章至少可以使得调试这件事少一些神秘感。`ptrace` 是功能多样的系统调用,我们目前只展示了其中的一小部分。
-诚然,这篇文章并没有涉及很多内容 - 我们距离亲手完成一个实际的调试器还有很长的路要走。但我希望这篇文章至少可以使得调试这件事少一些神秘感。ptrace 是功能多样的系统调用,我们目前只展示了其中的一小部分。
-
-
-单步调试代码很有用,但也只是在一定程度上有用。上面我通过c的“Hello World!”做了示例。为了执行主函数,可能需要上万行代码来初始化c的运行环境。这并不是很方便。最理想的是在main函数入口处放置断点并从断点处开始分步执行。为此,在这个系列的下一篇,我打算展示怎么实现断点。
-
-
+单步调试代码很有用,但也只是在一定程度上有用。上面我通过 C 的 “Hello World!” 做了示例。为了执行主函数,可能需要上万行代码来初始化 C 的运行环境。这并不是很方便。最理想的是在 `main` 函数入口处放置断点并从断点处开始分步执行。为此,在这个系列的下一篇,我打算展示怎么实现断点。
### 参考
-
撰写此文时参考了如下文章
* [Playing with ptrace, Part I][11]
* [How debugger works][12]
-
-
-[1] 我没有检查,但我确信 gdb 的代码行数至少有六位数。
-
-[2] 使用 man 2 ptrace 命令可以了解更多。
-
-[3] Peek and poke 在系统编程中是很知名的叫法,指的是直接读写内存内容。
-
-[4] 这篇文章假定读者有一定的 Unix/Linux 编程经验。我假定你知道(至少了解概念)fork,exec 族函数与 Unix 信号。
-
-[5] 至少你同我一样痴迷与机器/汇编语言。
-
-[6] 警告:如同我上面所说,文章很大程度上是平台相关的。我简化了一些设定 - 例如,x86指令集不需要调整到 4 字节(我的32位 Ubuntu unsigned int 是 4 字节)。事实上,许多平台都不需要。从内存中读取指令需要预先安装完整的反汇编器。我们这里没有,但实际的调试器是有的。
-
-
--------------------------------------------------------------------------------
via: http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1
-作者:[Eli Bendersky ][a]
-译者:[译者ID](https://github.com/YYforymj)
-校对:[校对者ID](https://github.com/校对者ID)
+作者:[Eli Bendersky][a]
+译者:[YYforymj](https://github.com/YYforymj)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/201704/20150112 Data-Oriented Hash Table.md b/published/201704/20150112 Data-Oriented Hash Table.md
new file mode 100644
index 0000000000..9446cd38ff
--- /dev/null
+++ b/published/201704/20150112 Data-Oriented Hash Table.md
@@ -0,0 +1,165 @@
+深入解析面向数据的哈希表性能
+============================================================
+
+最近几年中,面向数据的设计已经受到了很多的关注 —— 一种强调内存中数据布局的编程风格,包括如何访问以及将会引发多少的 cache 缺失。由于内存读取缺失的开销要比命中高出几个数量级,所以缺失的数量通常是优化的关键指标。这不仅仅关乎性能敏感的代码 —— 缺乏对内存效率重视的数据结构设计,往往是软件运行缓慢、臃肿的一个重要因素。
+
+
+高效缓存数据结构的中心原则是将事情变得平滑和线性。比如,在大部分情况下,存储一个序列的元素更倾向于使用普通数组而不是链表 —— 每一次通过指针来查找数据都会为 cache 缺失增加一份风险;而普通数组则可以预先获取,并使得内存系统以最大的效率运行。
+
+如果你知道一点内存层级如何运作的知识,下面的内容会是想当然的结果——但是有时候即便它们相当明显,测试一下仍不失为一个好主意。几年前 [Baptiste Wicht 测试过了 `std::vector` vs `std::list` vs `std::deque`][4](后者通常使用分块数组来实现,比如:一个数组的数组)。结果大部分会和你预期的保持一致,但是会存在一些违反直觉的东西。作为实例:在序列链表的中间位置做插入或者移除操作被认为会比数组快,但如果该元素是一个 POD 类型,并且不大于 64 字节或者在 64 字节左右(在一个缓存行(cache line)内),通过对要操作的元素周围的数组元素进行移位操作要比从头遍历链表来得快。这是由于在遍历链表以及通过指针插入/删除元素的时候可能会导致不少的 cache 缺失,相对而言,数组移位则很少会发生。(对于更大的元素,非 POD 类型,或者你已经有了指向链表元素的指针,此时和预期的一样,链表胜出)
+
+
+多亏了类似 Baptiste 这样的数据,我们知道了内存布局如何影响序列容器。但是关联容器,比如 hash 表会怎么样呢?已经有了些权威推荐:[Chandler Carruth 推荐的带局部探测的开放寻址][5](此时,我们没必要追踪指针),以及 [Mike Acton 推荐的在内存中将 value 和 key 隔离][6](这种情况下,我们可以在每一个缓存行中得到更多的 key),这可以在我们必须查找多个 key 时提高局部性。这些想法很有意义,但再一次的说明:测试永远是好习惯,但由于我找不到任何数据,所以只好自己收集了。
+
+### 测试
+
+我测试了四个不同的 quick-and-dirty 哈希表实现,另外还包括 `std::unordered_map` 。这五个哈希表都使用了同一个哈希函数 —— Bob Jenkins 的 [SpookyHash][8](64 位哈希值)。(由于哈希函数在这里不是重点,所以我没有测试不同的哈希函数;我同样也没有检测我的分析中的总内存消耗。)每种实现会通过简短的代号在测试结果表中标注出来。
+
+* **UM**: `std::unordered_map` 。在 VS2012 和 libstdc++-v3 (libstdc++-v3: gcc 和 clang 都会用到这东西)中,UM 是以链表的形式实现,所有的元素都在链表中,bucket 数组中存储了链表的迭代器。VS2012 中,则是一个双链表,每一个 bucket 存储了起始迭代器和结束迭代器;libstdc++ 中,是一个单链表,每一个 bucket 只存储了一个起始迭代器。这两种情况里,链表节点是独立申请和释放的。最大负载因子是 1 。
+* **Ch**:分离链接(separate chaining)—— bucket 指向元素节点的单链表。为了避免单独申请每一个节点,元素节点存储在普通数组池中。未使用的节点保存在一个空闲链表中。最大负载因子是 1。
+* **OL**:开放寻址线性探测 —— 每一个 bucket 存储一个 62 bit 的 hash 值,一个 2 bit 的状态值(包括 empty、filled、removed 三个状态),key 和 value。最大负载因子是 2/3。
+* **DO1**:“data-oriented 1” —— 和 OL 相似,但是哈希值/状态和 key/value 分离在两个隔离的平坦数组中。
+* **DO2**:“data-oriented 2” —— 与 OL 类似,但是哈希/状态、keys 和 values 被分离在 3 个相隔离的平坦数组中。
+
+
+在我的所有实现中,包括 VS2012 的 UM 实现,默认使用尺寸为 2 的 n 次方。如果超出了最大负载因子,则扩展两倍。在 libstdc++ 中,UM 默认尺寸是一个素数。如果超出了最大负载因子,则扩展为下一个素数大小。但是我不认为这些细节对性能很重要。素数是一种对低 bit 位上没有足够熵的低劣 hash 函数的挽救手段,但是我们正在用的是一个很好的 hash 函数。
+
+OL,DO1 和 DO2 的实现将共同的被称为 OA(open addressing)——稍后我们将发现它们在性能特性上非常相似。在每一个实现中,单元数从 100 K 到 1 M,有效负载(比如:总的 key + value 大小)从 8 到 4 K 字节,我为几个不同的操作记了时间。keys 和 values 永远是 POD 类型,keys 永远是 8 个字节(除了 8 字节的有效负载,此时 key 和 value 都是 4 字节)。因为我的目的是为了测试内存影响而不是哈希函数性能,所以我将 key 取自连续的整数范围。每一个测试都会重复 5 遍,然后记录最小的耗时。
+
+测试的操作在这里:
+
+* **Fill**:将一个随机的 key 序列插入到表中(key 在序列中是唯一的)。
+* **Presized fill**:和 Fill 相似,但是在插入之间我们先为所有的 key 保留足够的内存空间,以防止在 fill 过程中 rehash 或者重申请。
+* **Lookup**:执行 100 k 次随机 key 查找,所有的 key 都在 table 中。
+* **Failed lookup**: 执行 100 k 次随机 key 查找,所有的 key 都不在 table 中。
+* **Remove**:从 table 中移除随机选择的半数元素。
+* **Destruct**:销毁 table 并释放内存。
+
+你可以[在这里下载我的测试代码][9]。这些代码只能在 64 位机器上编译(包括 Windows 和 Linux)。在 `main()` 函数顶部附近有一些开关,你可把它们打开或者关掉——如果全开,可能会需要一两个小时才能结束运行。我收集的结果也放在了那个打包文件里的 Excel 表中。(注意: Windows 和 Linux 在不同的 CPU 上跑的,所以时间不具备可比较性。)代码也跑了一些单元测试,用来验证所有的 hash 表实现都能运行正确。
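+
+构建步骤文中没有给出;下面是一个假设性的 Linux 构建示意(源文件名是我虚构的,请以压缩包中的实际内容为准):
+
+```
+$ unzip hash-table-tests.zip -d hash-table-tests && cd hash-table-tests
+$ g++ -O2 -std=c++11 -o tests main.cpp   # 文件名为假设
+$ ./tests
+```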
+
+我还顺带尝试了另外两个实现:一种是 Ch 的变体,其第一个节点存放在 bucket 中而不是池里;另一种是使用二次探测的开放寻址。这两个都不够好,因此没有放进最终的数据,但它们的代码仍放在了打包文件里面。
+
+### 结果
+
+这里有成吨的数据!这一节我将详细地讨论一下结果,但是如果你对此不感兴趣,可以直接跳到下一节的总结。
+
+#### Windows
+
+这是所有的测试的图表结果,使用 Visual Studio 2012 编译,运行于 Windows 8.1 和 Core i7-4710HQ 机器上。(点击可以放大。)
+
+[
+ ![Results for VS 2012, Windows 8.1, Core i7-4710HQ](http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png "Results for VS 2012, Windows 8.1, Core i7-4710HQ")
+][12]
+
+从左至右是不同的有效负载大小,从上往下是不同的操作(注意:不是所有的 Y 轴都是相同的比例!)。我将为每一个操作总结一下主要趋向。
+
+**Fill**:
+
+在我的 hash 表中,Ch 稍比任何的 OA 变种要好。随着哈希表大小和有效负载的加大,差距也随之变大。我猜测这是由于 Ch 只需要从一个空闲链表中拉取一个元素,然后把它放在 bucket 前面,而 OA 不得不搜索一部分 bucket 来找到一个空位置。所有的 OA 变种的性能表现基本都很相似,当然 DO1 稍微有点优势。
+
+在小负载的情况下,UM 几乎是所有 hash 表中表现最差的 —— 因为 UM 为每一次的插入申请(内存)付出了沉重的代价。但是在 128 字节的时候,这些 hash 表基本相当,大负载的时候 UM 还赢了点。因为,我所有的实现都需要重新调整元素池的大小,并需要移动大量的元素到新池里面,这一点我几乎无能为力;而 UM 一旦为元素申请了内存后便不需要移动了。注意大负载中图表上夸张的跳步!这更确认了重新调整大小带来的问题。相反,UM 只是线性上升 —— 只需要重新调整 bucket 数组大小。由于没有太多隆起的地方,所以相对有效率。
+
+**Presized fill**:
+
+大致和 Fill 相似,但是图示结果更加的线性光滑,没有太大的跳步(因为不需要 rehash ),所有的实现差距在这一测试中要缩小了些。大负载时 UM 依然稍快于 Ch,问题还是在于重新调整大小上。Ch 仍稳定地稍快于 OA 变种,但是 DO1 比其它的 OA 稍有优势。
+
+**Lookup**:
+
+所有的实现都相当的集中。除了最小负载时 DO1 和 OL 稍快,其余情况下 UM 和 DO2 都跑在了前面。(LCTT 译注: 你确定?)真的,我无法描述 UM 在这一步做得多么好。尽管需要遍历链表,但是 UM 还是坚守了面向数据的本性。
+
+顺带一提,查找时间和 hash 表的大小有着很弱的关联,这真的很有意思。哈希表查找的期望时间是常量,所以从渐进的角度看,性能不应该依赖于表的大小。但是那是在忽视了 cache 影响的情况下!作为具体的例子,当我们在具有 10 k 条目的表中做 100 k 次查找时,速度会变快,因为在第一次 10 k - 20 k 次查找后,大部分的表会驻留在 L3 缓存中。
+
+**Failed lookup**:
+
+相对于成功查找,这里就有点更分散一些。DO1 和 DO2 跑在了前面,但 UM 并没有落下,OL 则是捉襟见肘啊。我猜测,这可能是因为 OL 整体上具有更长的搜索路径,尤其是在失败查询时;内存中,hash 值在 key 和 value 之间飘来荡去的找不着出路,我也很受伤啊。DO1 和 DO2 具有相同的搜索长度,但是它们将所有的 hash 值打包在内存中,这使得问题有所缓解。
+
+**Remove**:
+
+DO2 很显然是赢家,但 DO1 也未落下。Ch 则落后,UM 则是差得不是一丁半点(主要是因为每次移除都要释放内存);差距随着负载的增加而拉大。移除操作是唯一不需要接触数据的操作,只需要 hash 值和 key 的帮助,这也是为什么 DO1 和 DO2 在移除操作中的表现有所不同,而在其它测试中却保持一致。(如果你的值不是 POD 类型的,并需要析构,这种差异应该是会消失的。)
+
+**Destruct**:
+
+Ch 除了最小负载,其它的情况都是最快的(最小负载时,约等于 OA 变种)。所有的 OA 变种基本都是相等的。注意,在我的 hash 表中所做的所有析构操作都是释放少量的内存 buffer 。但是 [在Windows中,释放内存的消耗和大小成比例关系][13]。(而且,这是一个很显著的开支 —— 申请 ~1 GB 的内存需要 ~100 ms 的时间去释放!)
+
+UM 在析构时是最慢的一个(小负载时,慢的程度可以用数量级来衡量),大负载时依旧是稍慢些。对于 UM 来讲,释放每一个元素而不是释放一组数组真的是一个硬伤。
+
+#### Linux
+
+我还在装有 Linux Mint 17.1 的 Core i5-4570S 机器上使用 gcc 4.8 和 clang 3.5 来运行了测试。gcc 和 clang 的结果很相像,因此我只展示了 gcc 的;完整的结果集合包含在了代码下载打包文件中,链接在上面。(点击图来缩放)
+
+[
+ ![Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S](http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png "Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S")
+][15]
+
+大部分结果和 Windows 很相似,因此我只高亮了一些有趣的不同点。
+
+**Lookup**:
+
+这里 DO1 跑在前头,而在 Windows 中 DO2 更快些。(LCTT 译注: 这里原文写错了吧?)同样,UM 和 Ch 落后于其它所有的实现——过多的指针追踪,然而 OA 只需要在内存中线性地移动即可。至于 Windows 和 Linux 结果为何不同,则不是很清楚。UM 同样比 Ch 慢了不少,特别是大负载时,这很奇怪;我期望的是它们可以基本相同。
+
+**Failed lookup**:
+
+UM 再一次落后于其它实现,甚至比 OL 还要慢。我再一次无法理解为何 UM 比 Ch 慢这么多,Linux 和 Windows 的结果为何有着如此大的差距。
+
+
+**Destruct**:
+
+在我的实现中,小负载的时候,析构的消耗太少了,以至于无法测量;在大负载中,线性增加的比例和创建的虚拟内存页数量相关,而不是申请到的数量?同样,要比 Windows 中的析构快上几个数量级。但是并不是所有的都和 hash 表有关;我们在这里可以看出不同系统和运行时内存系统的表现。貌似,Linux 释放大内存块是要比 Windows 快上不少(或者 Linux 很好的隐藏了开支,或许将释放工作推迟到了进程退出,又或者将工作推给了其它线程或者进程)。
+
+UM 由于要释放每一个元素,所以在所有的负载中都比其它慢上几个数量级。事实上,我将图片做了剪裁,因为 UM 太慢了,以至于破坏了 Y 轴的比例。
+
+### 总结
+
+好,当我们凝视各种情况下的数据和矛盾的结果时,我们可以得出什么结果呢?我想直截了当地告诉你这些 hash 表变种中有一个打败了其它所有的 hash 表,但是这显然不那么简单。不过我们仍然可以学到一些东西。
+
+首先,在大多数情况下我们“很容易”做得比 `std::unordered_map` 还要好。我为这些测试所写的所有实现(它们并不复杂;我只花了一两个小时就写完了)要么与 `unordered_map` 相当,要么在其基础上有所提高,除了大负载(超过 128 字节)中的插入性能, `unordered_map` 为每一个节点独立申请存储占了优势。(尽管我没有测试,我同样期望 `unordered_map` 能在非 POD 类型的负载上取得胜利。)具有指导意义的是,如果你非常关心性能,不要假设你的标准库中的数据结构是高度优化的。它们可能只是在 C++ 标准的一致性上做了优化,但不是性能。:P
+
+其次,不管在小负载还是大负载中,若都只用 DO1(开放寻址、线性探测,hashes/states 和 key/values 分别处于隔离的平坦数组中),那基本不会出什么问题。这不是最快的插入,但也不坏(还比 `unordered_map` 快),并且在查找、移除、析构中也很快。你所知道的 —— “面向数据设计”完成了!
+
+注意,我为这些哈希表做的测试代码远未能用于生产环境——它们只支持 POD 类型,没有拷贝构造函数以及类似的东西,也未检测重复的 key,等等。我将可能尽快地构建一些实际的 hash 表,用于我的实用库中。为了覆盖基础部分,我想我将有两个变种:一个基于 DO1,用于小的、移动时不需要太大开支的负载;另一个采用分离链接并且避免重新申请和移动元素(就像 `unordered_map` ),用于大负载或者移动起来需要大开支的负载情况。这应该能让我两全其美。
+
+与此同时,我希望你们会有所启迪。最后记住,如果 Chandler Carruth 和 Mike Acton 在数据结构上给你提出些建议,你一定要听。
+
+--------------------------------------------------------------------------------
+
+
+作者简介:
+
+我是一名图形程序员,目前在西雅图做自由职业者。之前我在 NVIDIA 的 DevTech 软件团队中工作,并在 Sucker Punch Productions 工作室为 PS3 和 PS4 的 Infamous 系列游戏开发渲染技术。
+
+自 2002 年起,我对图形非常感兴趣,并且已经完成了一系列的工作,包括:雾、大气雾霾、体积照明、水、视觉效果、粒子系统、皮肤和头发阴影、后处理、镜面模型、线性空间渲染、和 GPU 性能测量和优化。
+
+你可以在我的博客了解更多和我有关的事;除了图形,我还对理论物理和程序设计感兴趣。
+
+你可以在 nathaniel.reed@gmail.com 或者在 Twitter(@Reedbeta)/Google+ 上关注我。我也会经常在 StackExchange 上回答计算机图形的问题。
+
+--------------
+
+via: http://reedbeta.com/blog/data-oriented-hash-table/
+
+作者:[Nathan Reed][a]
+译者:[sanfusu](https://github.com/sanfusu)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://reedbeta.com/about/
+[1]:http://reedbeta.com/blog/data-oriented-hash-table/
+[2]:http://reedbeta.com/blog/category/coding/
+[3]:http://reedbeta.com/blog/data-oriented-hash-table/#comments
+[4]:http://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vector-list-deque.html
+[5]:https://www.youtube.com/watch?v=fHNmRkzxHWs
+[6]:https://www.youtube.com/watch?v=rX0ItVEVjHc
+[7]:http://reedbeta.com/blog/data-oriented-hash-table/#the-tests
+[8]:http://burtleburtle.net/bob/hash/spooky.html
+[9]:http://reedbeta.com/blog/data-oriented-hash-table/hash-table-tests.zip
+[10]:http://reedbeta.com/blog/data-oriented-hash-table/#the-results
+[11]:http://reedbeta.com/blog/data-oriented-hash-table/#windows
+[12]:http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png
+[13]:https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
+[14]:http://reedbeta.com/blog/data-oriented-hash-table/#linux
+[15]:http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png
+[16]:http://reedbeta.com/blog/data-oriented-hash-table/#conclusions
diff --git a/published/20160926 First 5 Commands When I Connect on a Linux Server.md b/published/201704/20160926 First 5 Commands When I Connect on a Linux Server.md
similarity index 100%
rename from published/20160926 First 5 Commands When I Connect on a Linux Server.md
rename to published/201704/20160926 First 5 Commands When I Connect on a Linux Server.md
diff --git a/published/20161020 Useful Vim editor plugins for software developers - part 3 a.vim.md b/published/201704/20161020 Useful Vim editor plugins for software developers - part 3 a.vim.md
similarity index 100%
rename from published/20161020 Useful Vim editor plugins for software developers - part 3 a.vim.md
rename to published/201704/20161020 Useful Vim editor plugins for software developers - part 3 a.vim.md
diff --git a/published/20161028 Configuring WINE with Winetricks.md b/published/201704/20161028 Configuring WINE with Winetricks.md
similarity index 100%
rename from published/20161028 Configuring WINE with Winetricks.md
rename to published/201704/20161028 Configuring WINE with Winetricks.md
diff --git a/published/20161104 Build Strong Real-Time Streaming Apps with Apache Calcite.md b/published/201704/20161104 Build Strong Real-Time Streaming Apps with Apache Calcite.md
similarity index 100%
rename from published/20161104 Build Strong Real-Time Streaming Apps with Apache Calcite.md
rename to published/201704/20161104 Build Strong Real-Time Streaming Apps with Apache Calcite.md
diff --git a/published/20161115 Build Deploy and Manage Custom Apps with IBM Bluemix.md b/published/201704/20161115 Build Deploy and Manage Custom Apps with IBM Bluemix.md
similarity index 100%
rename from published/20161115 Build Deploy and Manage Custom Apps with IBM Bluemix.md
rename to published/201704/20161115 Build Deploy and Manage Custom Apps with IBM Bluemix.md
diff --git a/published/20161128 JavaScript frameworks and libraries.md b/published/201704/20161128 JavaScript frameworks and libraries.md
similarity index 100%
rename from published/20161128 JavaScript frameworks and libraries.md
rename to published/201704/20161128 JavaScript frameworks and libraries.md
diff --git a/published/201704/20161222 Top open source creative tools in 2016.md b/published/201704/20161222 Top open source creative tools in 2016.md
new file mode 100644
index 0000000000..ce3eacb8a2
--- /dev/null
+++ b/published/201704/20161222 Top open source creative tools in 2016.md
@@ -0,0 +1,314 @@
+2016 年度顶级开源创作工具
+============================================================
+
+> 无论你是想修改图片、编译音频,还是制作动画,这里的自由而开源的工具都能帮你做到。
+
+![2016 年度 36 个开源创作工具](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/art-yearbook-paint-draw-create-creative.png?itok=KgEF_IN_ "Top 34 open source creative tools in 2016 ")
+
+> 图片来源:opensource.com
+
+几年前,我在 Red Hat 总结会上做了一个简单的演讲,给与会者展示了 [2012 年度开源创作工具][12]。开源软件在过去几年里发展迅速,现在我们来看看 2016 年的相关领域的软件。
+
+### 核心应用
+
+这六款应用是开源的设计软件中的最强王者。它们做得很棒,拥有完善的功能特征集、稳定的发行版以及活跃的开发者社区,是很成熟的项目。这六款应用都是跨平台的,每一个都能在 Linux、OS X 和 Windows 上使用,不过大多数情况下 Linux 版本一般都是最先更新的。这些应用广为人知,我已经把最新特性的重要部分写进来了,如果你不是非常了解它们的开发情况,你有可能会忽视这些特性。
+
+如果你想要对这些软件做更深层次的了解,或许你可以帮助测试这四个软件 —— GIMP、Inkscape、Scribus,以及 MyPaint 的最新版本,在 Linux 机器上你可以用 [Flatpak][13] 软件轻松地安装它们。这些应用的每日构建版本可以[按照指令][14] 通过 Flatpak 的“每日构建的绘图应用(Nightly Graphics Apps)”得到。有一件事要注意:如果你要给每个应用的 Flatpak 版本安装笔刷或者其它扩展,用于放置这些扩展的目录将会位于相应应用的目录 **~/.var/app** 下。
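+
+作为示意,如今从 Flathub 安装这类应用的 Flatpak 版本大致如下(以 GIMP 为例;远程仓库地址与应用 ID 请以实际为准):
+
+```
+$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+$ flatpak install flathub org.gimp.GIMP
+$ flatpak run org.gimp.GIMP
+```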
+
+#### GIMP
+
+[GIMP][15] [在 2015 年迎来了它的 20 周岁][16],使得它成为这里资历最久的开源创造型应用之一。GIMP 是一款强大的应用,可以处理图片,创作简单的绘画,以及插图。你可以通过简单的任务来尝试 GIMP,比如裁剪、缩放图片,然后循序渐进使用它的其它功能。GIMP 可以在 Linux、Mac OS X 以及 Windows 上使用,是一款跨平台的应用,而且能够打开、导出一系列格式的文件,包括在与之相似的软件 Photoshop 上广为应用的那些格式。
+
+GIMP 开发团队正在忙着 2.10 发行版的工作;[2.8.18][17] 是最新的稳定版本。更振奋人心的是非稳定版,[2.9.4][18],拥有全新的用户界面,旨在节省空间的符号式图标和黑色主题,改进了颜色管理,更多的基于 GEGL 的支持分离预览的过滤器,支持 MyPaint 笔刷(如下图所示),对称绘图,以及命令行批次处理。想了解更多信息,请关注 [完整的发行版注记][19]。
+
+![GIMP 截图](https://opensource.com/sites/default/files/gimp_520.png "GIMP 截图")
+
+#### Inkscape
+
+[Inkscape][20] 是一款富有特色的矢量绘图设计软件。可以用它来创作简单的图形、图表、布局或者图标。
+
+最新的稳定版是 [0.91][21] 版本;与 GIMP 相似,能在预发布版 0.92pre3 版本中找到更多有趣的东西,其发布于 2016 年 11 月。最新推出的预发布版的突出特点是 [网格渐变特性(gradient mesh feature)][22](如下图所示);0.91 发行版里介绍的新特性包括:[强力笔触(power stroke)][23] 用于完全可配置的书法笔画(下图的 “opensource.com” 中的 “open” 用的就是强力笔触技术),画布测量工具,以及 [全新的符号对话框][24](如下图右侧所示)。(很多符号库可以从 GitHub 上获得;[Xaviju's inkscape-open-symbols set][25] 就很不错。)_对象_ 对话框是在改进版或每日构建中可用的新特性,整合了一个文档中的所有对象,提供工具来管理这些对象。
+
+![Inkscape 截图](https://opensource.com/sites/default/files/inkscape_520.png "Inkscape 截图")
+
+#### Scribus
+
+[Scribus][26] 是一款强大的桌面出版和页面布局工具。Scribus 让你能够创造精致美丽的物品,包括信封、书籍、杂志以及其它印刷品。Scribus 的颜色管理工具可以处理和输出 CMYK 格式,还能给文件配色,可靠地用于印刷车间的重印。
+
+[1.4.6][27] 是 Scribus 的最新稳定版本;[1.5.x][28] 系列的发行版更令人期待,因为它们是即将到来的 1.6.0 发行版的预览。1.5.3 版本包含了 Krita 文件(*.KRA)导入工具; 1.5.x 系列中其它的改进包括了表格工具、文本框对齐、脚注、导出可选 PDF 格式、改进的字典、可驻留边框的调色盘、符号工具,和丰富的文件格式支持。
+
+![Scribus 截图](https://opensource.com/sites/default/files/scribus_520.png "Scribus 截图")
+
+#### MyPaint
+
+[MyPaint][29] 是一款用于数位屏的快速绘图和插画工具。它很轻巧,界面虽小,但快捷键丰富,因此你能够不用放下数位笔而专心于绘图。
+
+[MyPaint 1.2.0][30] 是其最新的稳定版本,包含了一些新特性,诸如 [直观上墨工具][31] 用来跟踪铅笔绘图的轨迹,新的填充工具,层分组,笔刷和颜色的历史面板,用户界面的改进包括暗色主题和小型符号图标,以及可编辑的矢量层。想要尝试 MyPaint 里的最新改进,我建议安装每日构建版的 Flatpak 构建,尽管自从 1.2.0 版本没有添加重要的特性。
+
+ ![MyPaint 截图](https://opensource.com/sites/default/files/mypaint_520.png "MyPaint 截图")
+
+#### Blender
+
+[Blender][32] 最初发布于 1995 年 1 月,像 GIMP 一样,已经有 20 多年的历史了。Blender 是一款功能强大的开源 3D 制作套件,包含建模、雕刻、渲染、真实材质、套索、动画、影像合成、视频编辑、游戏创作以及模拟。
+
+Blender 最新的稳定版是 [2.78a][33]。2.78 版本很庞大,包含的特性有:改进的 2D 蜡笔(Grease Pencil) 动画工具;针对球面立体图片的 VR 渲染支持;以及新的手绘曲线的绘图工具。
+
+![Blender 截图](https://opensource.com/sites/default/files/blender_520.png "Blender 截图")
+
+要尝试最新的 Blender 开发工具,有很多种选择,包括:
+
+* Blender 基金会在官方网址提供 [非稳定版的每日构建版][2]。
+* 如果你在寻找特殊的开发中特性,[graphicall.org][3] 是一个适合社区的网站,能够提供特殊版本的 Blender(偶尔还有其它的创造型开源应用),让艺术家能够尝试体验最新的代码。
+* Mathieu Bridon 通过 Flatpak 做了 Blender 的一个开发版本。查看它的博客以了解详情:[Flatpak 上每日构建版的 Blender][4]
+
+#### Krita
+
+[Krita][34] 是一款拥有强大功能的数字绘图应用。这款应用贴合插画师、概念艺术家以及漫画家的需求,有很多附件,比如笔刷、调色板、图案以及模版。
+
+最新的稳定版是 [Krita 3.0.1][35],于 2016 年 9 月发布。3.0.x 系列的新特性包括 2D 逐帧动画;改进的层管理器和功能;丰富的常用快捷键;改进了网格、向导和图形捕捉;还有软打样。
+
+ ![Krita 截图](https://opensource.com/sites/default/files/krita_520.png "Krita 截图")
+
+### 视频处理工具
+
+关于开源的视频编辑工具则有很多很多。在这些工具之中,[Flowblade][36] 是新推出的,而 Kdenlive 则是构建完善、对新手友好、功能最全的竞争者。对你排除某些备选品有所帮助的主要标准是它们所支持的平台,其中一些只支持 Linux 平台。它们的软件上游都很活跃,最新的稳定版都于近期发布,发布时间相差不到一周。
+
+#### Kdenlive
+
+[Kdenlive][37],最初于 2002 年发布,是一款强大的非线性视频编辑器,有 Linux 和 OS X 版本(但是 OS X 版本已经过时了)。Kdenlive 有用户友好的、基于拖拽的用户界面,适合初学者,又有专业人员需要的深层次功能。
+
+可以看看 Seth Kenlon 写的 [Kdenlive 系列教程][38],了解如何使用 Kdenlive。
+
+* 最新稳定版: 16.08.2 (2016 年 10 月)
+
+![](https://opensource.com/sites/default/files/images/life-uploads/kdenlive_6_leader.png)
+
+#### Flowblade
+
+发布于 2012 年的 [Flowblade][39] 是一款只有 Linux 版本的视频编辑器,是个相当不错的后起之秀。
+
+* 最新稳定版: 1.8 (2016 年 9 月)
+
+#### Pitivi
+
+[Pitivi][40] 是用户友好型的自由开源视频编辑器。Pitivi 是用 [Python][41] 编写的(“Pitivi” 中的 “Pi”来源于此),使用了 [GStreamer][42] 多媒体框架,社区活跃。
+
+* 最新稳定版: 0.97 (2016 年 8 月)
+* 通过 Flatpak 获取 [最新版本][5]
+
+#### Shotcut
+
+[Shotcut][43] 是一款自由开源的跨平台视频编辑器,[早在 2004 年][44] 就发布了,之后由现在的主要开发者 [Dan Dennedy][45] 重写。
+
+* 最新稳定版: 16.11 (2016 年 11 月)
+* 支持 4K 分辨率
+* 仅以 tar 包方式发布
+
+#### OpenShot Video Editor
+
+始于 2008 年,[OpenShot Video Editor][46] 是一款自由、开源、易于使用、跨平台的视频编辑器。
+
+* 最新稳定版: [2.1][6] (2016 年 8 月)
+
+
+### 其它工具
+
+#### SwatchBooker
+
+[SwatchBooker][47] 是一款很方便的工具,尽管它近几年都没有更新了,但它还是很有用。SwatchBooker 能帮助用户从各大制造商那里合法地获取色卡,你可以用其它自由开源的工具处理它导出的格式,包括 Scribus。
+
+#### GNOME Color Manager
+
+[GNOME Color Manager][48] 是 GNOME 桌面环境内建的颜色管理器,而 GNOME 是某些 Linux 发行版的默认桌面。这个工具让你能够用色度计为自己的显示设备创建属性文件,还可以为这些设备加载/管理 ICC 颜色属性文件。
+
+#### GNOME Wacom Control
+
+[The GNOME Wacom controls][49] 允许你在 GNOME 桌面环境中配置自己的 Wacom 手写板;你可以修改手写板交互的很多选项,包括自定义手写板灵敏度,以及手写板映射到哪块屏幕上。
+
+#### Xournal
+
+[Xournal][50] 是一款简单但可靠的应用,可以让你通过手写板手写或者在笔记上涂鸦。Xournal 是一款有用的工具,可以让你签名或注解 PDF 文档。
+
+#### PDF Mod
+
+[PDF Mod][51] 是一款编辑 PDF 文件很方便的工具。PDF Mod 让用户可以移除页面、添加页面,将多个 PDF 文档合并成一个单独的 PDF 文件,重新排列页面,旋转页面等。
+
+#### SparkleShare
+
+[SparkleShare][52] 是一款基于 git 的文件分享工具,艺术家用来协作和分享资源。它会挂载在 GitLab 仓库上,你能够采用一个精妙的开源架构来进行资源管理。SparkleShare 的前端通过在顶部提供一个类似下拉框界面,避免了使用 git 的复杂性。
+
+### 摄影
+
+#### Darktable
+
+[Darktable][53] 是一款能让你开发数位 RAW 文件的应用,有一系列工具,可以管理工作流、无损编辑图片。Darktable 支持许多流行的相机和镜头。
+
+![改变颜色平衡度的图片](https://opensource.com/sites/default/files/dt_colour.jpg "改变颜色平衡度的图片")
+
+#### Entangle
+
+[Entangle][54] 允许你将数字相机连接到电脑上,让你能从电脑上完全控制相机。
+
+#### Hugin
+
+[Hugin][55] 是一款工具,让你可以拼接照片,从而制作全景照片。
+
+### 2D 动画
+
+#### Synfig Studio
+
+[Synfig Studio][56] 是基于矢量的二维动画套件,支持位图原图,在平板上用起来方便。
+
+#### Blender Grease Pencil
+
+我在前面讲过了 Blender,但值得注意的是,最近的发行版里[重构的蜡笔特性][57],添加了创作二维动画的功能。
+
+#### Krita
+
+[Krita][58] 现在同样提供了二维动画功能。
+
+### 音频编辑
+
+#### Audacity
+
+[Audacity][59] 在编辑音频文件、记录声音方面很有名,是用户友好型的工具。
+
+#### Ardour
+
+[Ardour][60] 是一款数字音频工作站软件,其界面以录音、编辑和混音工作流为中心。使用上它比 Audacity 要稍微难一点,但它允许自动操作,并且更高端。(有 Linux、Mac OS X 和 Windows 版本)
+
+#### Hydrogen
+
+[Hydrogen][61] 是一款开源的电子鼓,界面直观。它可以用合成的乐器创作、整理各种乐谱。
+
+#### Mixxx
+
+[Mixxx][62] 是四仓 DJ 套件,让你能够以强大操控来 DJ 和混音歌曲,包含节拍循环、时间延长、音高变化,还可以用 DJ 硬件控制器直播混音和衔接。
+
+#### Rosegarden
+
+[Rosegarden][63] 是一款作曲软件,有乐谱编写和音乐作曲或编辑的功能,提供音频和 MIDI 音序器。(LCTT 译注:MIDI 即 Musical Instrument Digital Interface 乐器数字接口)
+
+#### MuseScore
+
+[MuseScore][64] 是乐谱创作、记谱和编辑的软件,它还有个乐谱贡献者社区。
+
+### 其它具有创造力的工具
+
+#### MakeHuman
+
+[MakeHuman][65] 是一款三维绘图工具,可以创造人型的真实模型。
+
+#### Natron
+
+[Natron][66] 是基于节点的合成工具,用于视频后期制作、动态图象和设计特效。
+
+#### FontForge
+
+[FontForge][67] 是创作和编辑字体的工具。允许你编辑某个字体中的字形,也能够使用这些字形生成字体。
+
+#### Valentina
+
+[Valentina][68] 是用来设计缝纫图案的应用。
+
+#### Calligra Flow
+
+[Calligra Flow][69] 是一款图表工具,类似 Visio(有 Linux、Mac OS X 和 Windows 版本)。
+
+### 资源
+
+这里有很多小玩意和彩蛋值得尝试。需要一点灵感来探索?这些网站和论坛有很多教程和精美的成品能够激发你开始创作:
+
+1、 [pixls.us][7]: 摄影师 Pat David 管理的博客,他专注于专业摄影师使用的自由开源的软件和工作流。
+2、 [David Revoy 's Blog][8]: David Revoy 的博客,热爱自由开源,非常有天赋的插画师,概念派画师和开源倡议者,对 Blender 基金会电影有很大贡献。
+3、 [The Open Source Creative Podcast][9]: 由 Opensource.com 社区版主和专栏作家 [Jason van Gumster][10] 管理,他是 Blender 和 GIMP 的专家, [《Blender for Dummies》][1] 的作者,这个播客正好是面向我们这些热爱开源创作工具和这些工具周边文化的人。
+4、 [Libre Graphics Meeting][11]: 自由开源创作软件的开发者和使用这些软件的创作者的年度会议。这是个好地方,你可以通过它找到你喜爱的开源创作软件将会推出哪些有意思的特性,还可以了解到这些软件的用户用它们在做什么。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-343-8e0fb148b105b450634e30acd8f5b22b.png?itok=oxzTm70z)
+
+Máirín Duffy - Máirín 是 Red Hat 的首席交互设计师。她热衷于自由软件和开源工具,尤其是在创作领域:她最喜欢的应用是 [Inkscape](http://inkscape.org)。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/16/12/yearbook-top-open-source-creative-tools-2016
+
+作者:[Máirín Duffy][a]
+译者:[GitFuture](https://github.com/GitFuture)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/mairin
+[1]:http://www.blenderbasics.com/
+[2]:https://builder.blender.org/download/
+[3]:http://graphicall.org/
+[4]:https://mathieu.daitauha.fr/blog/2016/09/23/blender-nightly-in-flatpak/
+[5]:https://pitivi.wordpress.com/2016/07/18/get-pitivi-directly-from-us-with-flatpak/
+[6]:http://www.openshotvideo.com/2016/08/openshot-21-released.html
+[7]:http://pixls.us/
+[8]:http://davidrevoy.com/
+[9]:http://monsterjavaguns.com/podcast/
+[10]:https://opensource.com/users/jason-van-gumster
+[11]:http://libregraphicsmeeting.org/2016/
+[12]:https://opensource.com/life/12/9/tour-through-open-source-creative-tools
+[13]:https://opensource.com/business/16/8/flatpak
+[14]:http://flatpak.org/apps.html
+[15]:https://opensource.com/tags/gimp
+[16]:https://linux.cn/article-7131-1.html
+[17]:https://www.gimp.org/news/2016/07/14/gimp-2-8-18-released/
+[18]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
+[19]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
+[20]:https://opensource.com/tags/inkscape
+[21]:http://wiki.inkscape.org/wiki/index.php/Release_notes/0.91
+[22]:http://wiki.inkscape.org/wiki/index.php/Mesh_Gradients
+[23]:https://www.youtube.com/watch?v=IztyV-Dy4CE
+[24]:https://inkscape.org/cs/~doctormo/%E2%98%85symbols-dialog
+[25]:https://github.com/Xaviju/inkscape-open-symbols
+[26]:https://opensource.com/tags/scribus
+[27]:https://www.scribus.net/scribus-1-4-6-released/
+[28]:https://www.scribus.net/scribus-1-5-2-released/
+[29]:http://mypaint.org/
+[30]:http://mypaint.org/blog/2016/01/15/mypaint-1.2.0-released/
+[31]:https://github.com/mypaint/mypaint/wiki/v1.2-Inking-Tool
+[32]:https://opensource.com/tags/blender
+[33]:http://www.blender.org/features/2-78/
+[34]:https://opensource.com/tags/krita
+[35]:https://krita.org/en/item/krita-3-0-1-update-brings-numerous-fixes/
+[36]:https://opensource.com/life/16/9/10-reasons-flowblade-linux-video-editor
+[37]:https://opensource.com/tags/kdenlive
+[38]:https://opensource.com/life/11/11/introduction-kdenlive
+[39]:http://jliljebl.github.io/flowblade/
+[40]:http://pitivi.org/
+[41]:http://wiki.pitivi.org/wiki/Why_Python%3F
+[42]:https://gstreamer.freedesktop.org/
+[43]:http://shotcut.org/
+[44]:http://permalink.gmane.org/gmane.comp.lib.fltk.general/2397
+[45]:http://www.dennedy.org/
+[46]:http://openshot.org/
+[47]:http://www.selapa.net/swatchbooker/
+[48]:https://help.gnome.org/users/gnome-help/stable/color.html.en
+[49]:https://help.gnome.org/users/gnome-help/stable/wacom.html.en
+[50]:http://xournal.sourceforge.net/
+[51]:https://wiki.gnome.org/Apps/PdfMod
+[52]:https://www.sparkleshare.org/
+[53]:https://opensource.com/life/16/4/how-use-darktable-digital-darkroom
+[54]:https://entangle-photo.org/
+[55]:http://hugin.sourceforge.net/
+[56]:https://opensource.com/article/16/12/synfig-studio-animation-software-tutorial
+[57]:https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.78/GPencil
+[58]:https://opensource.com/tags/krita
+[59]:https://opensource.com/tags/audacity
+[60]:https://ardour.org/
+[61]:http://www.hydrogen-music.org/
+[62]:http://mixxx.org/
+[63]:http://www.rosegardenmusic.com/
+[64]:https://opensource.com/life/16/03/musescore-tutorial
+[65]:http://makehuman.org/
+[66]:https://natron.fr/
+[67]:http://fontforge.github.io/en-US/
+[68]:http://valentina-project.org/
+[69]:https://www.calligra.org/flow/
diff --git a/published/20170107 Min Browser Muffles the Web Noise.md b/published/201704/20170107 Min Browser Muffles the Web Noise.md
similarity index 100%
rename from published/20170107 Min Browser Muffles the Web Noise.md
rename to published/201704/20170107 Min Browser Muffles the Web Noise.md
diff --git a/published/20170110 Improve your programming skills with Exercism.md b/published/201704/20170110 Improve your programming skills with Exercism.md
similarity index 100%
rename from published/20170110 Improve your programming skills with Exercism.md
rename to published/201704/20170110 Improve your programming skills with Exercism.md
diff --git a/published/20170110 What engineers and marketers can learn from each other.md b/published/201704/20170110 What engineers and marketers can learn from each other.md
similarity index 100%
rename from published/20170110 What engineers and marketers can learn from each other.md
rename to published/201704/20170110 What engineers and marketers can learn from each other.md
diff --git a/translated/tech/20170111 Git in 2016.md b/published/201704/20170111 Git in 2016.md
similarity index 59%
rename from translated/tech/20170111 Git in 2016.md
rename to published/201704/20170111 Git in 2016.md
index d1bf2acabd..251277e22e 100644
--- a/translated/tech/20170111 Git in 2016.md
+++ b/published/201704/20170111 Git in 2016.md
@@ -1,74 +1,59 @@
-
2016 Git 新视界
============================================================
- ![](https://cdn-images-1.medium.com/max/2000/1*1SiSsLMsNSyAk6khb63W9g.png)
+![](https://cdn-images-1.medium.com/max/2000/1*1SiSsLMsNSyAk6khb63W9g.png)
2016 年 Git 发生了 _惊天动地_ 的变化,发布了五大新特性[¹][57] (从 _v2.7_ 到 _v2.11_ )和十六个补丁[²][58]。189 位作者[³][59]贡献了 3676 个提交[⁴][60]到 `master` 分支,比 2015 年多了 15%[⁵][61]!总计有 1545 个文件被修改,其中增加了 276799 行并移除了 100973 行。
但是,通过统计提交的数量和代码行数来衡量生产力是一种十分愚蠢的方法。除了深度研究过的开发者可以做到凭直觉来判断代码质量的地步,我们普通人来作仲裁难免会因我们常人的判断有失偏颇。
-谨记这一条于心,我决定整理这一年里六个我最喜爱的 Git 特性涵盖的改进,来做一次分类回顾 。这篇文章作为一篇中篇推文有点太过长了,所以我不介意你们直接跳到你们特别感兴趣的特性去。
+谨记这一条于心,我决定整理这一年里六个我最喜爱的 Git 特性涵盖的改进,来做一次分类回顾。这篇文章作为一篇中篇推文有点太过长了,所以我不介意你们直接跳到你们特别感兴趣的特性去。
-* [完成][41]`[git worktree][25]`[命令][42]
-* [更多方便的][43]`[git rebase][26]`[选项][44]
-* `[git lfs][27]`[梦幻的性能加速][45]
-* `[git diff][28]`[实验性的算法和更好的默认结果][46]
-* `[git submodules][29]`[差强人意][47]
-* `[git stash][30]`的[90 个增强][48]
+* [完成][41] [`git worktree`][25] [命令][42]
+* [更多方便的][43] [`git rebase`][26] [选项][44]
+* [`git lfs`][27] [梦幻的性能加速][45]
+* [`git diff`][28] [实验性的算法和更好的默认结果][46]
+* [`git submodules`][29] [差强人意][47]
+* [`git stash`][30] 的[90 个增强][48]
在我们开始之前,请注意在大多数操作系统上都自带有 Git 的旧版本,所以你需要检查你是否在使用最新并且最棒的版本。如果在终端运行 `git --version` 返回的结果小于 Git `v2.11.0`,请立刻跳转到 Atlassian 的快速指南 [更新或安装 Git][63] 并根据你的平台做出选择。
+### [所需的“引用”]
+在我们进入高质量内容之前还需要做一个短暂的停顿:我觉得我需要为你展示我是如何从公开文档生成统计信息(以及开篇的封面图片)的。你也可以使用下面的命令来对你自己的仓库做一个快速的 *年度回顾*!
-###[`引用` 是需要的]
-
-在我们进入高质量内容之前还需要做一个短暂的停顿:我觉得我需要为你展示我是如何从公开文档(以及开篇的封面图片)生成统计信息的。你也可以使用下面的命令来对你自己的仓库做一个快速的 *年度回顾*!
-
-```
-¹ Tags from 2016 matching the form vX.Y.0
-```
+> ¹ Tags from 2016 matching the form vX.Y.0
```
$ git for-each-ref --sort=-taggerdate --format \
'%(refname) %(taggerdate)' refs/tags | grep "v\d\.\d*\.0 .* 2016"
```
-```
-² Tags from 2016 matching the form vX.Y.Z
-```
+> ² Tags from 2016 matching the form vX.Y.Z
```
$ git for-each-ref --sort=-taggerdate --format '%(refname) %(taggerdate)' refs/tags | grep "v\d\.\d*\.[^0] .* 2016"
```
-```
-³ Commits by author in 2016
-```
+> ³ Commits by author in 2016
```
$ git shortlog -s -n --since=2016-01-01 --until=2017-01-01
```
-```
-⁴ Count commits in 2016
-```
+> ⁴ Count commits in 2016
```
$ git log --oneline --since=2016-01-01 --until=2017-01-01 | wc -l
```
-```
-⁵ ... and in 2015
-```
+> ⁵ ... and in 2015
```
$ git log --oneline --since=2015-01-01 --until=2016-01-01 | wc -l
```
-```
-⁶ Net LOC added/removed in 2016
-```
+> ⁶ Net LOC added/removed in 2016
```
$ git diff --shortstat `git rev-list -1 --until=2016-01-01 master` \
@@ -79,39 +64,36 @@ $ git diff --shortstat `git rev-list -1 --until=2016-01-01 master` \
现在,让我们开始说好的回顾……
-### 完成 Git worktress
-`git worktree` 命令首次出现于 Git v2.5 但是在 2016 年有了一些显著的增强。两个有价值的新特性在 v2.7 被引入—— `list` 子命令,和为二分搜索增加了命令空间的 refs——而 `lock`/`unlock` 子命令则是在 v2.10被引入。
+### 完成 Git 工作树(worktree)
+
+`git worktree` 命令首次出现于 Git v2.5,但是在 2016 年有了一些显著的增强。两个有价值的新特性在 v2.7 被引入:`list` 子命令,和为二分搜索增加了命名空间的 refs。而 `lock`/`unlock` 子命令则是在 v2.10 被引入。
+
+#### 什么是工作树呢?
+
+[`git worktree`][49] 命令允许你同时检出和操作处于不同路径下的同一仓库的多个分支。例如,假如你需要做一次快速的修复工作但又不想扰乱你当前的工作区,你可以使用以下命令在一个新路径下检出一个新分支:
-#### 什么是 worktree 呢?
-`[git worktree][49]` 命令允许你同步地检出和操作处于不同路径下的同一仓库的多个分支。例如,假如你需要做一次快速的修复工作但又不想扰乱你当前的工作区,你可以使用以下命令在一个新路径下检出一个新分支
```
$ git worktree add -b hotfix/BB-1234 ../hotfix/BB-1234
Preparing ../hotfix/BB-1234 (identifier BB-1234)
HEAD is now at 886e0ba Merged in bedwards/BB-13430-api-merge-pr (pull request #7822)
```
-Worktree 不仅仅是为分支工作。你可以检出多个里程碑(tags)作为不同的工作树来并行构建或测试它们。例如,我从 Git v2.6 和 v2.7 的里程碑中创建工作树来检验不同版本 Git 的行为特征。
+工作树不仅仅是为分支工作。你可以检出多个里程碑(tags)作为不同的工作树来并行构建或测试它们。例如,我从 Git v2.6 和 v2.7 的里程碑中创建工作树来检验不同版本 Git 的行为特征。
```
$ git worktree add ../git-v2.6.0 v2.6.0
Preparing ../git-v2.6.0 (identifier git-v2.6.0)
HEAD is now at be08dee Git 2.6
-```
-```
$ git worktree add ../git-v2.7.0 v2.7.0
Preparing ../git-v2.7.0 (identifier git-v2.7.0)
HEAD is now at 7548842 Git 2.7
-```
-```
$ git worktree list
/Users/kannonboy/src/git 7548842 [master]
/Users/kannonboy/src/git-v2.6.0 be08dee (detached HEAD)
/Users/kannonboy/src/git-v2.7.0 7548842 (detached HEAD)
-```
-```
$ cd ../git-v2.7.0 && make
```
@@ -119,7 +101,7 @@ $ cd ../git-v2.7.0 && make
#### 列出工作树
-`git worktree list` 子命令(于 Git v2.7引入)显示所有与当前仓库有关的工作树。
+`git worktree list` 子命令(于 Git v2.7 引入)显示所有与当前仓库有关的工作树。
```
$ git worktree list
@@ -130,43 +112,41 @@ $ git worktree list
#### 二分查找工作树
-`[gitbisect][50]` 是一个简洁的 Git 命令,可以让我们对提交记录执行一次二分搜索。通常用来找到哪一次提交引入了一个指定的退化。例如,如果在我的 `master` 分支最后的提交上有一个测试没有通过,我可以使用 `git bisect` 来贯穿仓库的历史来找寻第一次造成这个错误的提交。
+[`git bisect`][50] 是一个简洁的 Git 命令,可以让我们对提交记录执行一次二分搜索。通常用来找到哪一次提交引入了一个指定的退化。例如,如果在我的 `master` 分支最后的提交上有一个测试没有通过,我可以使用 `git bisect` 来贯穿仓库的历史来找寻第一次造成这个错误的提交。
```
$ git bisect start
-```
-```
-# indicate the last commit known to be passing the tests
-# (e.g. the latest release tag)
+# 找到已知通过测试的最后提交
+# (例如最新的发布里程碑)
$ git bisect good v2.0.0
-```
-```
-# indicate a known broken commit (e.g. the tip of master)
+# 找到已知的出问题的提交
+# (例如 `master` 分支的顶端)
$ git bisect bad master
-```
-```
-# tell git bisect a script/command to run; git bisect will
-# find the oldest commit between "good" and "bad" that causes
-# this script to exit with a non-zero status
+# 告诉 git bisect 要运行的脚本/命令;
+# git bisect 会在 “good” 和 “bad” 的范围内
+# 找到导致脚本以非 0 状态退出的最旧的提交
$ git bisect run npm test
```
-在后台,bisect 使用 refs 来跟踪 好 与 坏 的提交来作为二分搜索范围的上下界限。不幸的是,对工作树的粉丝来说,这些 refs 都存储在寻常的 `.git/refs/bisect` 命名空间,意味着 `git bisect` 操作如果运行在不同的工作树下可能会互相干扰。
+在后台,bisect 使用 refs 来跟踪 “good” 与 “bad” 的提交来作为二分搜索范围的上下界限。不幸的是,对工作树的粉丝来说,这些 refs 都存储在寻常的 `.git/refs/bisect` 命名空间,意味着 `git bisect` 操作如果运行在不同的工作树下可能会互相干扰。
+
到了 v2.7 版本,bisect 的 refs 移到了 `.git/worktrees/$worktree_name/refs/bisect`, 所以你可以并行运行 bisect 操作于多个工作树中。
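
也就是说,你现在可以像下面这样,在一棵专门的工作树里对一个问题做二分查找,而不影响主工作区(示意,路径和里程碑名是假设的):

```
$ git worktree add ../bisect-regression master
$ cd ../bisect-regression
$ git bisect start
$ git bisect good v2.0.0
$ git bisect bad master
```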
#### 锁定工作树
+
当你完成了一棵工作树的工作,你可以直接删除它,然后通过运行 `git worktree prune` 等它被当做垃圾自动回收。但是,如果你在网络共享或者可移除媒介上存储了一棵工作树,如果工作树目录在删除期间不可访问,工作树会被完全清除——不管你喜不喜欢!Git v2.10 引入了 `git worktree lock` 和 `unlock` 子命令来防止这种情况发生。
+
```
-# to lock the git-v2.7 worktree on my USB drive
+# 在我的 USB 盘上锁定 git-v2.7 工作树
$ git worktree lock /Volumes/Flash_Gordon/git-v2.7 --reason \
"In case I remove my removable media"
```
```
-# to unlock (and delete) the worktree when I'm finished with it
+# 当我完成时,解锁(并删除)该工作树
$ git worktree unlock /Volumes/Flash_Gordon/git-v2.7
$ rm -rf /Volumes/Flash_Gordon/git-v2.7
$ git worktree prune
@@ -175,32 +155,33 @@ $ git worktree prune
`--reason` 标签允许为未来的你留一个记号,描述为什么当初工作树被锁定。`git worktree unlock` 和 `lock` 都要求你指定工作树的路径。或者,你可以 `cd` 到工作树目录然后运行 `git worktree lock .` 来达到同样的效果。
+### 更多 Git 变基(rebase)选项
-### 更多 Git `reabse` 选项
-2016 年三月,Git v2.8 增加了在拉取过程中交互进行 rebase 的命令 `git pull --rebase=interactive` 。对应地,六月份 Git v2.9 发布了通过 `git rebase -x` 命令对执行变基操作而不需要进入交互模式的支持。
+2016 年三月,Git v2.8 增加了在拉取过程中交互进行变基的命令 `git pull --rebase=interactive` 。对应地,六月份发布的 Git v2.9 则支持了通过 `git rebase -x` 执行带命令的变基操作而不需要进入交互模式。
#### Re-啥?
-在我们继续深入前,我假设读者中有些并不是很熟悉或者没有完全习惯变基命令或者交互式变基。从概念上说,它很简单,但是与很多 Git 的强大特性一样,变基散发着听起来很复杂的专业术语的气息。所以,在我们深入前,先来快速的复习一下什么是 rebase。
+在我们继续深入前,我假设读者中有些并不是很熟悉或者没有完全习惯变基命令或者交互式变基。从概念上说,它很简单,但是与很多 Git 的强大特性一样,变基散发着听起来很复杂的专业术语的气息。所以,在我们深入前,先来快速地复习一下什么是变基(rebase)。
-变基操作意味着将一个或多个提交在一个指定分支上重写。`git rebase` 命令是被深度重载了,但是 rebase 名字的来源事实上还是它经常被用来改变一个分支的基准提交(你基于此提交创建了这个分支)。
+变基操作意味着将一个或多个提交在一个指定分支上重写。`git rebase` 命令是被深度重载了,但是 rebase 这个名字的来源事实上还是它经常被用来改变一个分支的基准提交(你基于此提交创建了这个分支)。
-从概念上说,rebase 通过将你的分支上的提交存储为一系列补丁包临时释放了它们,接着将这些补丁包按顺序依次打在目标提交之上。
+从概念上说,rebase 通过将你的分支上的提交临时存储为一系列补丁包,接着将这些补丁包按顺序依次打在目标提交之上。
- ![](https://cdn-images-1.medium.com/max/800/1*mgyl38slmqmcE4STS56nXA.gif)
+![](https://cdn-images-1.medium.com/max/800/1*mgyl38slmqmcE4STS56nXA.gif)
对 master 分支的一个功能分支执行变基操作 (`git rebase master`)是一种通过将 master 分支上最新的改变合并到功能分支的“保鲜法”。对于长期存在的功能分支,规律的变基操作能够最大程度的减少开发过程中出现冲突的可能性和严重性。
有些团队会选择在合并他们的改动到 master 前立即执行变基操作以实现一次快速合并 (`git merge --ff`)。对 master 分支快速合并你的提交是通过简单的将 master ref 指向你的重写分支的顶点而不需要创建一个合并提交。
- ![](https://cdn-images-1.medium.com/max/800/1*QXa3znQiuNWDjxroX628VA.gif)
+![](https://cdn-images-1.medium.com/max/800/1*QXa3znQiuNWDjxroX628VA.gif)
-变基是如此方便和功能强大以致于它已经被嵌入其他常见的 Git 命令中,例如 `git pull`。如果你在本地 master 分支有未推送的提交,运行 `git pull` 命令从 origin 拉取你队友的改动会造成不必要的合并提交。
+变基是如此方便和功能强大以致于它已经被嵌入其他常见的 Git 命令中,例如拉取操作 `git pull` 。如果你在本地 master 分支有未推送的提交,运行 `git pull` 命令从 origin 拉取你队友的改动会造成不必要的合并提交。
- ![](https://cdn-images-1.medium.com/max/800/1*IxDdJ5CygvSWdD8MCNpZNg.gif)
+![](https://cdn-images-1.medium.com/max/800/1*IxDdJ5CygvSWdD8MCNpZNg.gif)
这有点混乱,而且在繁忙的团队,你会获得成堆的不必要的合并提交。`git pull --rebase` 将你本地的提交在你队友的提交上执行变基而不产生一个合并提交。
- ![](https://cdn-images-1.medium.com/max/800/1*HcroDMwBE9m21-hOeIwRmw.gif)
+
+![](https://cdn-images-1.medium.com/max/800/1*HcroDMwBE9m21-hOeIwRmw.gif)
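
如果你想把这种行为设为默认,可以使用真实存在的 `pull.rebase` 配置项(下面以全局配置为例,只是一种做法):

```
$ git config --global pull.rebase true
```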
这很整洁吧!甚至更酷,Git v2.8 引入了一个新特性,允许你在拉取时 _交互地_ 变基。
@@ -209,18 +190,15 @@ $ git worktree prune
交互式变基是变基操作的一种更强大的形态。和标准变基操作相似,它可以重写提交,但它也可以向你提供一个机会让你能够交互式地修改这些将被重新运用在新基准上的提交。
当你运行 `git rebase --interactive` (或 `git pull --rebase=interactive`)时,你会在你的文本编辑器中得到一个可供选择的提交列表视图。
-```
-$ git rebase master --interactive
-```
```
+$ git rebase master --interactive
+
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
pick ed93626 ACE-1294: removed pull request service from test
pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
pick e68f710 ACE-1294: added testing data to batch email file
-```
-```
# Rebase f32fa9d..0ddde5f onto f32fa9d (4 commands)
#
# Commands:
@@ -238,27 +216,30 @@ pick e68f710 ACE-1294: added testing data to batch email file
# If you remove a line here THAT COMMIT WILL BE LOST.
```
-注意到每一条提交旁都有一个 `pick`。这是对 rebase 而言,"照原样留下这个提交"。如果你现在就退出文本编辑器,它会执行一次如上文所述的普通变基操作。但是,如果你将 `pick` 改为 `edit` 或者其他 rebase 命令中的一个,变基操作会允许你在它被重新运用前改变它。有效的变基命令有如下几种:
-* `reword`: 编辑提交信息。
-* `edit`: 编辑提交了的文件。
-* `squash`: 将提交与之前的提交(同在文件中)合并,并将提交信息拼接。
-* `fixup`: 将本提交与上一条提交合并,并且逐字使用上一条提交的提交信息(这很方便,如果你为一个很小的改动创建了第二个提交,而它本身就应该属于上一条提交,例如,你忘记暂存了一个文件)。
-* `exec`: 运行一条任意的 shell 命令(我们将会在下一节看到本例一次简洁的使用场景)。
+注意到每一条提交旁都有一个 `pick`。这是对 rebase 而言,“照原样留下这个提交”。如果你现在就退出文本编辑器,它会执行一次如上文所述的普通变基操作。但是,如果你将 `pick` 改为 `edit` 或者其他 rebase 命令中的一个,变基操作会允许你在它被重新运用前改变它。有效的变基命令有如下几种:
+
+* `reword`:编辑提交信息。
+* `edit`:编辑提交了的文件。
+* `squash`:将提交与之前的提交(同在文件中)合并,并将提交信息拼接。
+* `fixup`:将本提交与上一条提交合并,并且逐字使用上一条提交的提交信息(这很方便,如果你为一个很小的改动创建了第二个提交,而它本身就应该属于上一条提交,例如,你忘记暂存了一个文件)。
+* `exec`:运行一条任意的 shell 命令(我们将会在下一节看到本例一次简洁的使用场景)。
+* `drop`:这将丢弃这条提交。
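
作为示意,下面把上文的例子改写成一个假想的变基计划,演示这些命令如何组合使用(提交沿用前面列出的那几条):

```
reword 2fde787 ACE-1294: replaced miniamalCommit with string in test
pick ed93626 ACE-1294: removed pull request service from test
fixup b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
exec npm test
drop e68f710 ACE-1294: added testing data to batch email file
```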
-你也可以在文件内重新整理提交,这样会改变他们被重新运用的顺序。这会很顺手当你对不同的主题创建了交错的提交时,你可以使用 `squash` 或者 `fixup` 来将其合并成符合逻辑的原子提交。
+你也可以在文件内重新整理提交,这样会改变它们被重新应用的顺序。当你对不同的主题创建了交错的提交时这会很顺手,你可以使用 `squash` 或者 `fixup` 来将其合并成符合逻辑的原子提交。
-当你设置完命令并且保存这个文件后,Git 将递归每一条提交,在每个 `reword` 和 `edit` 命令处为你暂停来执行你设计好的改变并且自动运行 `squash`, `fixup`,`exec` 和 `drop`命令。
+当你设置完命令并且保存这个文件后,Git 将递归每一条提交,在每个 `reword` 和 `edit` 命令处为你暂停来执行你设计好的改变,并且自动运行 `squash`、`fixup`、`exec` 和 `drop` 命令。
+#### 非交互式执行
+
当你执行变基操作时,本质上你是在通过将你每一条新提交应用于指定基址的头部来重写历史。`git pull --rebase` 可能会有一点危险,因为根据上游分支改动的事实,你的新建历史可能会由于特定的提交遭遇测试失败甚至编译问题。如果这些改动引起了合并冲突,变基过程将会暂停并且允许你来解决它们。但是,整洁的合并改动仍然有可能打断编译或测试过程,留下破败的提交弄乱你的提交历史。
但是,你可以指导 Git 为每一个重写的提交来运行你的项目测试套件。在 Git v2.9 之前,你可以通过绑定 `git rebase --interactive` 和 `exec` 命令来实现。例如这样:
+
```
$ git rebase master −−interactive −−exec=”npm test”
```
-...会生成在重写每条提交后执行 `npm test` 这样的一个交互式变基计划,保证你的测试仍然会通过:
+……这会生成一个交互式变基计划,在重写每条提交后执行 `npm test` ,保证你的测试仍然会通过:
```
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
@@ -269,20 +250,17 @@ pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
exec npm test
pick e68f710 ACE-1294: added testing data to batch email file
exec npm test
-```
-```
# Rebase f32fa9d..0ddde5f onto f32fa9d (4 command(s))
```
如果出现了测试失败的情况,变基会暂停并让你修复这些测试(并且将你的修改应用于相应提交):
+
```
291 passing
1 failing
-```
-```
-1) Host request “after all” hook:
+1) Host request "after all" hook:
Uncaught Error: connect ECONNRESET 127.0.0.1:3001
…
npm ERR! Test failed.
@@ -292,91 +270,96 @@ You can fix the problem, and then run
```
这很方便,但是使用交互式变基有一点臃肿。到了 Git v2.9,你可以这样来实现非交互式变基:
+
```
-$ git rebase master -x “npm test”
+$ git rebase master -x "npm test"
```
-简单替换 `npm test` 为 `make`,`rake`,`mvn clean install`,或者任何你用来构建或测试你的项目的命令。
+可以简单替换 `npm test` 为 `make`,`rake`,`mvn clean install`,或者任何你用来构建或测试你的项目的命令。
+
+#### 小小警告
-####小小警告
+就像电影里一样,重写历史可是一个危险的行当。任何作为变基操作一部分被重写的提交都将改变它的 SHA-1 ID,这意味着 Git 会把它当作一个全新的提交对待。如果重写的历史和原来的历史混杂,你将获得重复的提交,而这可能在你的团队中引起不少的疑惑。
为了避免这个问题,你仅仅需要遵照一条简单的规则:
+
> _永远不要变基一条你已经推送的提交!_
坚持这一点你会没事的。
+### Git LFS 的性能提升
+[Git 是一个分布式版本控制系统][64],意味着整个仓库的历史会在克隆阶段被传送到客户端。对包含大文件的项目——尤其是大文件经常被修改——初始克隆会非常耗时,因为每一个版本的每一个文件都必须下载到客户端。[Git LFS(Large File Storage 大文件存储)][65] 是一个 Git 拓展包,由 Atlassian、GitHub 和其他一些开源贡献者开发,通过需要时才下载大文件的相关版本来减少仓库中大文件的影响。更明确地说,大文件是在检出过程中按需下载的,而不是在克隆或抓取过程中。
-### `Git LFS` 的性能提升
-[Git 是一个分布式版本控制系统][64],意味着整个仓库的历史会在克隆阶段被传送到客户端。对包含大文件的项目——尤其是大文件经常被修改——初始克隆会非常耗时,因为每一个版本的每一个文件都必须下载到客户端。[Git LFS(Large File Storage 大文件存储)][65] 是一个 Git 拓展包,由 Atlassian,GitHub 和其他一些开源贡献者开发,通过消极地下载大文件的相对版本来减少仓库中大文件的影响。更明确地说,大文件是在检出过程中按需下载的而不是在克隆或抓取过程中。
-
-在 Git 2016 年的五大发布中,Git LFS 自身有四个功能丰富的发布:v1.2 到 v1.5。你可以凭 Git LFS 自身来写一系列回顾文章,但是就这篇文章而言,我将专注于 2016 年解决的一项最重要的主题:速度。一系列针对 Git 和 Git LFS 的改进极大程度地优化了将文件传入/传出服务器的性能。
+在 Git 2016 年的五大发布中,Git LFS 自身就有四个功能版本的发布:v1.2 到 v1.5。你可以仅对 Git LFS 这一项来写一系列回顾文章,但是就这篇文章而言,我将专注于 2016 年解决的一项最重要的主题:速度。一系列针对 Git 和 Git LFS 的改进极大程度地优化了将文件传入/传出服务器的性能。
#### 长期过滤进程
-当你 `git add` 一个文件时,Git 的净化过滤系统会被用来在文件被写入 Git 目标存储前转化文件的内容。Git LFS 通过使用净化过滤器将大文件内容存储到 LFS 缓存中以缩减仓库的大小,并且增加一个小“指针”文件到 Git 目标存储中作为替代。
+当你 `git add` 一个文件时,Git 的净化过滤系统会被用来在文件被写入 Git 对象存储之前转化文件的内容。Git LFS 通过使用净化过滤器(clean filter)将大文件内容存储到 LFS 缓存中以缩减仓库的大小,并且增加一个小“指针”文件到 Git 对象存储中作为替代。
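+
+在一个启用了 Git LFS 的仓库里,这种过滤是通过 `.gitattributes` 文件挂接的。例如,运行 `git lfs track "*.psd"` 会生成类似下面的条目(这里的 `*.psd` 只是举例的文件类型):
+
+```
+*.psd filter=lfs diff=lfs merge=lfs -text
+```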
+![](https://cdn-images-1.medium.com/max/800/0*Ku328eca7GLOo7sS.png)
- ![](https://cdn-images-1.medium.com/max/800/0*Ku328eca7GLOo7sS.png)
-
-污化过滤器是净化过滤器的对立面——正如其名。在 `git checkout` 过程中从一个 Git 目标仓库读取文件内容时,污化过滤系统有机会在文件被写入用户的工作区前将其改写。Git LFS 污化过滤器通过将指针文件替代为对应的大文件将其转化,可以是从 LFS 缓存中获得或者通过读取存储在 Bitbucket 的 Git LFS。
+污化过滤器(smudge filter)是净化过滤器的对立面——正如其名。在 `git checkout` 过程中从 Git 对象仓库读取文件内容时,污化过滤系统有机会在文件被写入用户的工作区之前将其改写。Git LFS 污化过滤器会把指针文件替换为对应的大文件,这个大文件可以从本地 LFS 缓存中获得,或者从存储在 Bitbucket 的 Git LFS 中读取。
![](https://cdn-images-1.medium.com/max/800/0*CU60meE1lbCuivn7.png)
传统上,污化和净化过滤进程在每个文件被添加和检出时只能被唤起一次。所以,一个项目如果有 1000 个文件被 Git LFS 追踪,做一次全新的检出就需要唤起 `git-lfs-smudge` 命令 1000 次。尽管单次操作相对很迅速,但是启动 1000 个独立的污化进程的总耗费惊人。
-针对 Git v2.11(和 Git LFS v1.5),污化和净化过滤器可以被定义为长期进程,为第一个需要过滤的文件调用一次,然后为之后的文件持续提供污化或净化过滤直到父 Git 操作结束。[Lars Schneider][66],Git 的长期过滤系统的贡献者,简洁地总结了对 Git LFS 性能改变带来的影响。
-> 使用 12k 个文件的测试仓库的过滤进程在 macOS 上快了80 倍,在 Windows 上 快了 58 倍。在 Windows 上,这意味着测试运行了 57 秒而不是 55 分钟。
-> 这真是一个让人印象深刻的性能增强!
+到了 Git v2.11(和 Git LFS v1.5),污化和净化过滤器可以被定义为长期进程,为第一个需要过滤的文件调用一次,然后为之后的文件持续提供污化或净化过滤,直到父 Git 操作结束。Git 的长期过滤系统的贡献者 [Lars Schneider][66] 简洁地总结了这个改变对 Git LFS 性能的影响:
+
+> 使用 12k 个文件的测试仓库的过滤进程在 macOS 上快了 80 倍,在 Windows 上快了 58 倍。在 Windows 上,这意味着测试运行了 57 秒而不是 55 分钟。
+
+这真是一个让人印象深刻的性能增强!
#### LFS 专有克隆
-长期运行的污化和净化过滤器在对向本地缓存读写的加速做了很多贡献,但是对大目标传入/传出 Git LFS 服务器的速度提升贡献很少。 每次 Git LFS 污化过滤器在本地 LFS 缓存中无法找到一个文件时,它不得不使用两个 HTTP 请求来获得该文件:一个用来定位文件,另外一个用来下载它。在一次 `git clone` 过程中,你的本地 LFS 缓存是空的,所以 Git LFS 会天真地为你的仓库中每个 LFS 所追踪的文件创建两个 HTTP 请求:
+长期运行的污化和净化过滤器为加速对本地缓存的读写做了很多贡献,但是对大对象传入/传出 Git LFS 服务器的速度提升贡献很少。每次 Git LFS 污化过滤器在本地 LFS 缓存中无法找到一个文件时,它不得不使用两次 HTTP 请求来获得该文件:一个用来定位文件,另外一个用来下载它。在一次 `git clone` 过程中,你的本地 LFS 缓存是空的,所以 Git LFS 会天真地为你的仓库中每个 LFS 所追踪的文件创建两个 HTTP 请求:
- ![](https://cdn-images-1.medium.com/max/800/0*ViL7r3ZhkGvF0z3-.png)
+![](https://cdn-images-1.medium.com/max/800/0*ViL7r3ZhkGvF0z3-.png)
-幸运的是,Git LFS v1.2 提供了专门的 `[git lfs clone][51]` 命令。不再是一次下载一个文件; `git lfs clone` 禁止 Git LFS 污化过滤器,等待检出结束,然后从 Git LFS 存储中按批下载任何需要的文件。这允许了并行下载并且将需要的 HTTP 请求数量减半。
+幸运的是,Git LFS v1.2 提供了专门的 [`git lfs clone`][51] 命令。它不再是一次下载一个文件,`git lfs clone` 会禁用 Git LFS 污化过滤器,等待检出结束,然后从 Git LFS 存储中按批下载所有需要的文件。这实现了并行下载,并且将所需的 HTTP 请求数量减半。
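+
+它的用法和普通的 `git clone` 一致,例如(仓库地址是假设的):
+
+```
+$ git lfs clone https://bitbucket.org/user/repo.git
+```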
- ![](https://cdn-images-1.medium.com/max/800/0*T43VA0DYTujDNgkH.png)
+![](https://cdn-images-1.medium.com/max/800/0*T43VA0DYTujDNgkH.png)
-###自定义传输路由器
+### 自定义传输路由器(Transfer Adapter)
-正如之前讨论过的,Git LFS 在 v1.5 中 发起了对长期过滤进程的支持。不过,对另外一种可插入进程的支持早在今年年初就发布了。 Git LFS 1.3 包含了对可插拔传输路由器的支持,因此不同的 Git LFS 托管服务可以定义属于它们自己的协议来向或从 LFS 存储中传输文件。
+正如之前讨论过的,Git LFS 在 v1.5 中提供对长期过滤进程的支持。不过,对另外一种类型的可插入进程的支持早在今年年初就发布了。 Git LFS 1.3 包含了对可插拔传输路由器(pluggable transfer adapter)的支持,因此不同的 Git LFS 托管服务可以定义属于它们自己的协议来向或从 LFS 存储中传输文件。
-直到 2016 年底,Bitbucket 是唯一一个执行专属 Git LFS 传输协议 [Bitbucket LFS Media Adapter][67] 的托管服务商。这是为了从 Bitbucket 的一个独特的被称为 chunking 的 LFS 存储 API 特性中获利。Chunking 意味着在上传或下载过程中,大文件被分解成 4MB 的文件块(chunk)。
- ![](https://cdn-images-1.medium.com/max/800/1*N3SpjQZQ1Ge8OwvWrtS1og.gif)
+直到 2016 年底,Bitbucket 是唯一一个实现了专属 Git LFS 传输协议 [Bitbucket LFS Media Adapter][67] 的托管服务商。这是为了利用 Bitbucket 的 LFS 存储 API 的一个被称为分块(chunking)的独特特性。分块意味着在上传或下载过程中,大文件会被分解成 4MB 的文件块(chunk)。
+
+![](https://cdn-images-1.medium.com/max/800/1*N3SpjQZQ1Ge8OwvWrtS1og.gif)
分块给予了 Bitbucket 支持的 Git LFS 三大优势:
-1. 并行下载与上传。默认地,Git LFS 最多并行传输三个文件。但是,如果只有一个文件被单独传输(这也是 Git LFS 污化过滤器的默认行为),它会在一个单独的流中被传输。Bitbucket 的分块允许同一文件的多个文件块同时被上传或下载,经常能够梦幻地提升传输速度。
-2. 可恢复文件块传输。文件块都在本地缓存,所以如果你的下载或上传被打断,Bitbucket 的自定义 LFS 流媒体路由器会在下一次你推送或拉取时仅为丢失的文件块恢复传输。
-3. 免重复。Git LFS,正如 Git 本身,是内容索位;每一个 LFS 文件都由它的内容生成的 SHA-256 哈希值认证。所以,哪怕你稍微修改了一位数据,整个文件的 SHA-256 就会修改而你不得不重新上传整个文件。分块允许你仅仅重新上传文件真正被修改的部分。举个例子,想想一下Git LFS 在追踪一个 41M 的电子游戏精灵表。如果我们增加在此精灵表上增加 2MB 的新层并且提交它,传统上我们需要推送整个新的 43M 文件到服务器端。但是,使用 Bitbucket 的自定义传输路由,我们仅仅需要推送 ~7MB:先是 4MB 文件块(因为文件的信息头会改变)和我们刚刚添加的包含新层的 3MB 文件块!其余未改变的文件块在上传过程中被自动跳过,节省了巨大的带宽和时间消耗。
-可自定义的传输路由器是 Git LFS 一个伟大的特性,它们使得不同服务商在不过载核心项目的前提下体验适合其服务器的优化后的传输协议。
+1. 并行下载与上传。默认地,Git LFS 最多并行传输三个文件。但是,如果只有一个文件被单独传输(这也是 Git LFS 污化过滤器的默认行为),它会在一个单独的流中被传输。Bitbucket 的分块允许同一文件的多个文件块同时被上传或下载,经常能够神奇地提升传输速度。
+2. 可续传的文件块传输。文件块都在本地缓存,所以如果你的下载或上传被打断,Bitbucket 的自定义 LFS 流媒体路由器会在下一次你推送或拉取时仅为丢失的文件块恢复传输。
+3. 免重复。Git LFS,正如 Git 本身,是内容寻址(content addressable)的;每一个 LFS 文件都由它的内容生成的 SHA-256 哈希值标识。所以,哪怕你只修改了一位数据,整个文件的 SHA-256 就会改变,你不得不重新上传整个文件。分块允许你仅仅重新上传文件中真正被修改的部分。举个例子,想象一下 Git LFS 在追踪一个 41MB 的精灵表格(spritesheet)。如果我们在此精灵表格上增加 2MB 的新的部分并且提交它,传统上我们需要推送整个新的 43MB 文件到服务器端。但是,使用 Bitbucket 的自定义传输路由器,我们仅仅需要推送大约 7MB:先是第一个 4MB 文件块(因为文件的信息头会改变),还有我们刚刚添加的包含新的部分的最后 3MB 文件块!其余未改变的文件块在上传过程中被自动跳过,节省了巨大的带宽和时间消耗。
-### 更佳的 `git diff` 算法与默认值
+可自定义的传输路由器是 Git LFS 的一个伟大的特性,它使得不同的服务商可以在不给核心项目增加负担的前提下,试验适合其服务器的优化后的传输协议。
+
+### 更佳的 git diff 算法与默认值
不像其他的版本控制系统,Git 不会明确地存储文件被重命名了的事实。例如,如果我编辑了一个简单的 Node.js 应用并且将 `index.js` 重命名为 `app.js`,然后运行 `git diff`,我会得到一个看起来像一个文件被删除另一个文件被新建的结果。
- ![](https://cdn-images-1.medium.com/max/800/1*ohMUBpSh_jqz2ffScJ7ApQ.png)
+![](https://cdn-images-1.medium.com/max/800/1*ohMUBpSh_jqz2ffScJ7ApQ.png)
-我猜测移动或重命名一个文件从技术上来讲是一次删除后跟一次新建,但这不是对人类最友好的方式来诉说它。其实,你可以使用 `-M` 标志来指示 Git 在计算差异时抽空尝试检测重命名文件。对之前的例子,`git diff -M` 给我们如下结果:
- ![](https://cdn-images-1.medium.com/max/800/1*ywYjxBc1wii5O8EhHbpCTA.png)
+我猜测移动或重命名一个文件从技术上来讲是一次删除后跟着一次新建,但这不是对人类最友好的描述方式。其实,你可以使用 `-M` 标志来指示 Git 在计算差异时同时尝试检测是否是文件重命名。对之前的例子,`git diff -M` 给我们如下结果:
-第二行显示的 similarity index 告诉我们文件内容经过比较后的相似程度。默认地,`-M` 会考虑任意两个文件都有超过 50% 相似度。这意味着,你需要编辑少于 50% 的行数来确保它们被识别成一个重命名后的文件。你可以通过加上一个百分比来选择你自己的 similarity index,如,`-M80%`。
+![](https://cdn-images-1.medium.com/max/800/1*ywYjxBc1wii5O8EhHbpCTA.png)
-到 Git v2.9 版本,如果你使用了 `-M` 标志 `git diff` 和 `git log` 命令都会默认检测重命名。如果不喜欢这种行为(或者,更现实的情况,你在通过一个脚本来解析 diff 输出),那么你可以通过显示的传递 `--no-renames` 标志来禁用它。
+第二行显示的相似性指数(similarity index)告诉我们文件内容经过比较后的相似程度。默认情况下,`-M` 会把相似度超过 50% 的两个文件当作一次重命名处理。这意味着,你编辑的行数需要少于 50%,它们才能被识别成一个重命名后的文件。你可以通过加上一个百分比来选择你自己的相似性指数,如 `-M80%`。
+
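+上图输出的文本形式大致如下(文件名沿用上文的例子,相似度数值仅为演示):
+
+```
+$ git diff -M
+diff --git a/index.js b/app.js
+similarity index 86%
+rename from index.js
+rename to app.js
+```
+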
+从 Git v2.9 版本起,无论你是否使用了 `-M` 标志,`git diff` 和 `git log` 命令都会默认检测重命名。如果不喜欢这种行为(或者,更现实的情况,你在通过一个脚本来解析 diff 输出),那么你可以通过显式地传递 `--no-renames` 标志来禁用这种行为。
#### 详细的提交
-你经历过调用 `git commit` 然后盯着空白的 shell 试图想起你刚刚做过的所有改动吗?verbose 标志就为此而来!
+你经历过调用 `git commit` 然后盯着空白的 shell 试图想起你刚刚做过的所有改动吗?`--verbose` 标志就为此而来!
不像这样:
-```
-Ah crap, which dependency did I just rev?
-```
```
+Ah crap, which dependency did I just rev?
+
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch master
@@ -387,15 +370,16 @@ Ah crap, which dependency did I just rev?
#
```
-...你可以调用 `git commit --verbose` 来查看你改动造成的内联差异。不用担心,这不会包含在你的提交信息中:
+……你可以调用 `git commit --verbose` 来查看你改动造成的行内差异。不用担心,这不会包含在你的提交信息中:
- ![](https://cdn-images-1.medium.com/max/800/1*1vOYE2ow3ZDS8BP_QfssQw.png)
+![](https://cdn-images-1.medium.com/max/800/1*1vOYE2ow3ZDS8BP_QfssQw.png)
-`--verbose` 标志不是最新的,但是直到 Git v2.9 你可以通过 `git config --global commit.verbose true` 永久的启用它。
+`--verbose` 标志并不是新出现的,但是直到 Git v2.9 你才可以通过 `git config --global commit.verbose true` 永久地启用它。
#### 实验性的 Diff 改进
当一个被修改部分的前后几行相同时,`git diff` 可能产生一些稍微令人迷惑的输出。当一个文件中有两个或者更多结构相似的函数时,就可能发生这种情况。来看一个有些刻意人为的例子,想象我们有一个 JS 文件,它只包含一个单独的函数:
+
```
/* @return {string} "Bitbucket" */
function productName() {
@@ -403,15 +387,14 @@ function productName() {
}
```
-现在想象一下我们刚提交的改动包含一个预谋的 _另一个_可以做相似事情的函数:
+现在想象一下,我们刚提交的改动里专门添加了 _另一个_ 可以做相似事情的函数:
+
```
/* @return {string} "Bitbucket" */
function productId() {
return "Bitbucket";
}
-```
-```
/* @return {string} "Bitbucket" */
function productName() {
return "Bitbucket";
@@ -420,32 +403,34 @@ function productName() {
我们希望 `git diff` 显示开头五行被新增,但是实际上它不恰当地将最初提交的第一行也包含进来。
- ![](https://cdn-images-1.medium.com/max/800/1*9C7DWMObGHMEqD-QFGHmew.png)
+![](https://cdn-images-1.medium.com/max/800/1*9C7DWMObGHMEqD-QFGHmew.png)
错误的注释被包含在了 diff 中!这虽不是世界末日,但每次发生这种事情,总免不了让人花上几秒钟疑惑:_啊?_
在十二月,Git v2.11 引入了一个新的实验性的 diff 选项 `--indent-heuristic`,它尝试生成从美学角度来看更赏心悦目的 diff。
- ![](https://cdn-images-1.medium.com/max/800/1*UyWZ6JjC-izDquyWCA4bow.png)
+![](https://cdn-images-1.medium.com/max/800/1*UyWZ6JjC-izDquyWCA4bow.png)
-在后台,`--indent-heuristic` 在每一次改动造成的所有可能的 diff 中循环,并为它们分别打上一个 "不良" 分数。这是基于试探性的如差异文件块是否以不同等级的缩进开始和结束(从美学角度讲不良)以及差异文件块前后是否有空白行(从美学角度讲令人愉悦)。最后,有着最低不良分数的块就是最终输出。
+在后台,`--indent-heuristic` 在每一次改动造成的所有可能的 diff 中循环,并为它们分别打上一个 “不良” 分数。这是基于启发式的,如差异文件块是否以不同等级的缩进开始和结束(从美学角度讲“不良”),以及差异文件块前后是否有空白行(从美学角度讲令人愉悦)。最后,有着最低不良分数的块就是最终输出。
+
+这个特性还是实验性的,但是你可以把 `--indent-heuristic` 选项应用到任何 `git diff` 命令上来随时测试它。如果你喜欢尝鲜,你可以这样将其在你的整个系统内启用:
-这个特性还是实验性的,但是你可以通过应用 `--indent-heuristic` 选项到任何 `git diff` 命令来专门测试它。如果,如果你喜欢在刀口上讨生活,你可以这样将其在你的整个系统内使能:
```
$ git config --global diff.indentHeuristic true
```
-### Submodules 差强人意
+### 子模块(Submodule)差强人意
子模块允许你从 Git 仓库内部引用和包含其他 Git 仓库。它通常用于这样的场景:项目所管理的一些源代码依赖也在被 Git 跟踪,或者被某些公司用来作为包含一系列相关项目的 [monorepo][68] 的替代品。
由于某些用法的复杂性,以及使用错误的命令相当容易破坏它们的事实,子模块得到了一些坏名声。
- ![](https://cdn-images-1.medium.com/max/800/1*xNffiElY7BZNMDM0jm0JNQ.gif)
+![](https://cdn-images-1.medium.com/max/800/1*xNffiElY7BZNMDM0jm0JNQ.gif)
-但是,它们还是有着它们的用处,而且,我想,仍然对其他方案有依赖时的最好的选择。 幸运的是,2016 对 submodule 用户来说是伟大的一年,在几次发布中落地了许多意义重大的性能和特性提升。
+但是,它们还是有着它们的用处,而且,我想这仍然是用于需要厂商依赖项的最好选择。 幸运的是,2016 对子模块的用户来说是伟大的一年,在几次发布中落地了许多意义重大的性能和特性提升。
#### 并行抓取
-当克隆或则抓取一个仓库时,加上 `--recurse-submodules` 选项意味着任何引用的 submodule 也将被克隆或更新。传统上,这会被串行执行,每次只抓取一个 submodule。直到 Git v2.8,你可以附加 `--jobs=n` 选项来使用 _n_ 个并行线程来抓取 submodules。
+
+当克隆或者抓取一个仓库时,加上 `--recurse-submodules` 选项意味着任何引用的子模块也将被克隆或更新。传统上,这会被串行执行,每次只抓取一个子模块。直到 Git v2.8,你可以附加 `--jobs=n` 选项来使用 _n_ 个并行线程来抓取子模块。
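+
+例如(假设当前仓库已经引用了若干子模块):
+
+```
+$ git fetch --recurse-submodules --jobs=4
+```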
我推荐永久地配置这个选项:
@@ -453,30 +438,31 @@ $ git config --global diff.indentHeuristic true
$ git config --global submodule.fetchJobs 4
```
-...或者你可以选择使用任意程度的平行化。
+……或者你可以选择使用任意程度的平行化。
-#### 浅层子模块
-Git v2.9 介绍了 `git clone —shallow-submodules` 标志。它允许你抓取你仓库的完整克隆,然后递归的浅层克隆所有引用的子模块的一个提交。如果你不需要项目的依赖的完整记录时会很有用。
+#### 浅层化子模块
-例如,一个仓库有着一些混合了的子模块,其中包含有其他方案商提供的依赖和你自己其它的项目。你可能希望初始化时执行浅层子模块克隆然后深度选择几个你想要与之工作的项目。
+Git v2.9 引入了 `git clone --shallow-submodules` 标志。它允许你获取主仓库的完整克隆,然后递归地以一个提交的深度浅层化克隆所有引用的子模块。如果你不需要项目依赖的完整历史,这会很有用。
-另一种情况可能是配置一次持续性的集成或调度工作。Git 需要超级仓库以及每个子模块最新的提交以便能够真正执行构建。但是,你可能并不需要每个子模块全部的历史记录,所以仅仅检索最新的提交可以为你省下时间和带宽。
+例如,一个仓库混合有一些子模块,其中既包含其他厂商提供的依赖,也包含你自己的其它项目。你可能希望初始化时执行浅层化子模块克隆,然后再有选择地深度克隆几个你想要参与的项目。
+
+另一种情况可能是配置持续集成或部署工作。Git 需要一个包含了子模块的超级仓库以及每个子模块最新的提交以便能够真正执行构建。但是,你可能并不需要每个子模块全部的历史记录,所以仅仅检索最新的提交可以为你省下时间和带宽。
#### 子模块的替代品
-`--reference` 选项可以和 `git clone` 配合使用来指定另一个本地仓库作为一个目标存储来保存你本地已经存在的又通过网络传输的重复制目标。语法为:
+`--reference` 选项可以和 `git clone` 配合使用来指定另一个本地仓库作为一个替代的对象存储,来避免跨网络重新复制你本地已经存在的对象。语法为:
```
$ git clone --reference
```
-直到 Git v2.11,你可以使用 `—reference` 选项与 `—recurse-submodules` 结合来设置子模块替代品从另一个本地仓库指向子模块。其语法为:
+从 Git v2.11 起,你可以将 `--reference` 选项与 `--recurse-submodules` 结合使用,把子模块指向另一个本地仓库中的子模块。其语法为:
```
$ git clone --recurse-submodules --reference
```
-这潜在的可以省下很大数量的带宽和本地磁盘空间,但是如果引用的本地仓库不包含你所克隆自的远程仓库所必需的所有子模块时,它可能会失败。。
+这有可能省下大量的带宽和本地磁盘空间,但是,如果引用的本地仓库不包含你所克隆的远程仓库必需的所有子模块,它就会失败。
幸运的是,方便的 `--reference-if-able` 选项会让它优雅地降级:对于在引用的本地仓库中找不到的任何子模块,回退为一次普通的克隆。
@@ -487,11 +473,11 @@ $ git clone --recurse-submodules --reference-if-able \
#### 子模块的 diff
-在 Git v2.11 之前,Git 有两种模式来显示对更新了仓库子模块的提交之间的差异。
+在 Git v2.11 之前,对于更新了仓库子模块的提交,Git 有两种模式来显示其差异。
-`git diff —-submodule=short` 显示你的项目引用的子模块中的旧提交和新提交( 这也是如果你整体忽略 `--submodule` 选项的默认结果):
+`git diff --submodule=short` 显示你的项目引用的子模块中的旧提交和新提交(这也是完全省略 `--submodule` 选项时的默认行为):
- ![](https://cdn-images-1.medium.com/max/800/1*K71cJ30NokO5B69-a470NA.png)
+![](https://cdn-images-1.medium.com/max/800/1*K71cJ30NokO5B69-a470NA.png)
`git diff --submodule=log` 稍微详细一些,会显示更新了的子模块中任何新增或移除的提交的单行摘要信息。
@@ -499,15 +485,16 @@ $ git clone --recurse-submodules --reference-if-able \
Git v2.11 引入了第三个更有用的选项:`--submodule=diff`。这会显示更新后的子模块中所有改动的完整 diff。
- ![](https://cdn-images-1.medium.com/max/800/1*nPhJTjP8tcJ0cD8s3YOmjw.png)
+![](https://cdn-images-1.medium.com/max/800/1*nPhJTjP8tcJ0cD8s3YOmjw.png)
-### `git stash` 的 90 个增强
+### git stash 的 90 个增强
-不像 submodules,几乎没有 Git 用户不钟爱 `[git stash][52]`。 `git stash` 临时搁置(或者 _藏匿_)你对工作区所做的改动使你能够先处理其他事情,结束后重新将搁置的改动恢复到先前状态。
+不像子模块,几乎没有 Git 用户不钟爱 [`git stash`][52]。 `git stash` 临时搁置(或者 _藏匿_)你对工作区所做的改动使你能够先处理其他事情,结束后重新将搁置的改动恢复到先前状态。
#### 自动搁置
如果你是 `git rebase` 的粉丝,你可能很熟悉 `--autostash` 选项。它会在变基之前自动搁置工作区所有本地修改,然后等变基结束再将其重新应用。
+
```
$ git rebase master --autostash
Created autostash: 54f212a
@@ -516,12 +503,14 @@ First, rewinding head to replay your work on top of it...
Applied autostash.
```
-这很方便,因为它使得你可以在一个不洁的工作区执行变基。有一个方便的配置标志叫做 `rebase.autostash` 可以将这个特性设为默认,你可以这样来全局使能它:
+这很方便,因为它使得你可以在一个不干净的工作区上执行变基。还有一个方便的配置标志叫做 `rebase.autostash`,可以将这个特性设为默认,你可以这样来全局启用它:
+
```
$ git config --global rebase.autostash true
```
`rebase.autostash` 实际上自从 [Git v1.8.4][69] 就可用了,但是 v2.7 引入了通过 `--no-autostash` 选项来取消这个标志的功能。如果你对未暂存的改动使用这个选项,变基会被一条工作区不干净的警告所阻止:
+
```
$ git rebase master --no-autostash
Cannot rebase: You have unstaged changes.
@@ -531,31 +520,37 @@ Please commit or stash them.
#### 补丁式搁置
说到配置标志,Git v2.7 也引入了 `stash.showPatch`。`git stash show` 的默认行为是显示你搁置文件的汇总。
+
```
$ git stash show
package.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
```
-将 `-p` 标志传入会将 `git stash show` 变为 "补丁模式",这将会显示完整的 diff:
- ![](https://cdn-images-1.medium.com/max/800/1*HpcT3quuKKQj9CneqPuufw.png)
+将 `-p` 标志传入会将 `git stash show` 变为 “补丁模式”,这将会显示完整的 diff:
+
+![](https://cdn-images-1.medium.com/max/800/1*HpcT3quuKKQj9CneqPuufw.png)
+
+`stash.showPatch` 将这个行为定为默认。你可以将其全局启用:
-`stash.showPatch` 将这个行为定为默认。你可以将其全局使能:
```
$ git config --global stash.showPatch true
```
如果你启用了 `stash.showPatch`,但之后又决定只想查看文件汇总,你可以通过传入 `--stat` 选项来重新获得之前的行为。
+
```
$ git stash show --stat
package.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
```
-顺便一提:`--no-patch` 是一个有效选项但它不会如你所希望的改写 `stash.showPatch` 的结果。不仅如此,它会传递给用来生成补丁时潜在调用的 `git diff` 命令,然后你会发现完全没有任何输出。
+顺便一提:`--no-patch` 是一个有效选项,但它并不会如你所希望的那样取消 `stash.showPatch`。相反,它会被传递给背后用来生成补丁的 `git diff` 命令,结果你会发现完全没有任何输出。
#### 简单的搁置标识
-如果你是 `git stash` 的粉丝,你可能知道你可以搁置多次改动然后通过 `git stash list` 来查看它们:
+
+如果你惯用 `git stash`,你可能知道你可以搁置多次改动,然后通过 `git stash list` 来查看它们:
+
```
$ git stash list
stash@{0}: On master: crazy idea that might work one day
@@ -564,11 +559,11 @@ stash@{2}: On master: perf improvement that I forgot I stashed
stash@{3}: On master: pop this when we use Docker in production
```
-但是,你可能不知道为什么 Git 的搁置有着这么难以理解的标识(`stash@{1}`, `stash@{2}`, 等)也可能将它们勾勒成 "仅仅是 Git 的一个特性吧"。实际上就像很多 Git 特性一样,这些奇怪的标志实际上是 Git 数据模型一个非常巧妙使用(或者说是滥用了的)的特性。
+但是,你可能不知道为什么 Git 的搁置有着这么难以理解的标识(`stash@{1}`、`stash@{2}` 等),或许你只是把它们当作 “Git 的癖好” 罢了。实际上就像很多 Git 特性一样,这些奇怪的标识是对 Git 数据模型的一种非常巧妙的使用(或者说滥用)的结果。
-在后台,`git stash` 命令实际创建了一系列特别的提交目标,这些目标对你搁置的改动做了编码并且维护一个 [reglog][70] 来保存对这些特殊提交的参考。 这也是为什么 `git stash list` 的输出看起来很像 `git reflog` 的输出。当你运行 `git stash apply stash@{1}` 时,你实际上在说,"从stash reflog 的位置 1 上应用这条提交 "
+在后台,`git stash` 命令实际上创建了一系列特殊的提交对象,这些对象编码了你搁置的改动,并且维护一个 [reflog][70] 来保存对这些特殊提交的引用。这也是为什么 `git stash list` 的输出看起来很像 `git reflog` 的输出。当你运行 `git stash apply stash@{1}` 时,你实际上在说,“从 stash reflog 的位置 1 上应用这条提交。”
-直到 Git v2.11,你不再需要使用完整的 `stash@{n}` 语句。相反,你可以通过一个简单的整数指出搁置在 stash reflog 中的位置来引用它们。
+到了 Git v2.11,你不再需要使用完整的 `stash@{n}` 语法。相反,你可以通过一个简单的整数指出该搁置在 stash reflog 中的位置来引用它们。
```
$ git stash show 1
@@ -577,25 +572,26 @@ $ git stash pop 1
```
讲了很多了。如果你还想要多学一些搁置是怎么保存的,我在 [这篇教程][71] 中写了一点这方面的内容。
-### <2016> <2017>
-好了,结束了。感谢您的阅读!我希望您享受阅读这份长篇大论,正如我享受在 Git 的源码,发布文档,和 `man` 手册中探险一番来撰写它。如果你认为我忘记了一些重要的事,请留下一条评论或者在 [Twitter][72] 上让我知道,我会努力写一份后续篇章。
-至于 Git 接下来会发生什么,这要靠广大维护者和贡献者了(其中有可能就是你!)。随着日益增长的采用,我猜测简化,改进后的用户体验,和更好的默认结果将会是 2017 年 Git 主要的主题。随着 Git 仓库变得又大又旧,我猜我们也可以看到继续持续关注性能和对大文件、深度树和长历史的改进处理。
+### </2016> <2017>
-如果你关注 Git 并且很期待能够和一些项目背后的开发者会面,请考虑来 Brussels 花几周时间来参加 [Git Merge][74] 。我会在[那里发言][75]!但是更重要的是,很多维护 Git 的开发者将会出席这次会议而且一年一度的 Git 贡献者峰会很可能会指定来年发展的方向。
+好了,结束了。感谢您的阅读!我希望您喜欢阅读这份长篇大论,正如我乐于在 Git 的源码、发布文档和 `man` 手册中探险一番来撰写它。如果你认为我忘记了一些重要的事,请留下一条评论或者在 [Twitter][72] 上让我知道,我会努力写一份后续篇章。
+
+至于 Git 接下来会发生什么,这要靠广大维护者和贡献者了(其中有可能就是你!)。随着 Git 的采用日益增长,我猜测简化、改进的用户体验和更好的默认设置将会是 2017 年 Git 的主要主题。随着 Git 仓库变得越来越大、越来越旧,我猜我们也会看到人们持续关注性能,以及对大文件、深目录树和长历史的更好处理。
+
+如果你关注 Git 并且很期待能够和一些项目背后的开发者会面,请考虑来 Brussels 待上几天,参加 [Git Merge][74]。我会在[那里发言][75]!但是更重要的是,很多维护 Git 的开发者将会出席这次会议,而且一年一度的 Git 贡献者峰会很可能会决定来年的发展方向。
或者如果你实在等不及,想要获得更多的技巧和指南来改进你的工作流,请参看这份 Atlassian 的优秀作品:[Git 教程][76]。
-
-*如果你翻到最下方来找第一节的脚注,请跳转到 [ [引用是需要的] ][77]一节去找生成统计信息的命令。免费的封面图片是由 [ instaco.de ][78] 生成的 ❤️。*
+封面图片是由 [instaco.de][78] 生成的。
--------------------------------------------------------------------------------
-via: https://hackernoon.com/git-in-2016-fad96ae22a15#.t5c5cm48f
+via: https://medium.com/hacker-daily/git-in-2016-fad96ae22a15
作者:[Tim Pettersen][a]
译者:[xiaow6](https://github.com/xiaow6)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20170111 How to choose your first programming language.md b/published/201704/20170111 How to choose your first programming language.md
similarity index 100%
rename from published/20170111 How to choose your first programming language.md
rename to published/201704/20170111 How to choose your first programming language.md
diff --git a/published/20170116 Setup SysVol Replication Across Two Samba4 AD DC with Rsync - Part 6.md b/published/201704/20170116 Setup SysVol Replication Across Two Samba4 AD DC with Rsync - Part 6.md
similarity index 100%
rename from published/20170116 Setup SysVol Replication Across Two Samba4 AD DC with Rsync - Part 6.md
rename to published/201704/20170116 Setup SysVol Replication Across Two Samba4 AD DC with Rsync - Part 6.md
diff --git a/published/20170116 Using the AWS SDK for Gos Regions and Endpoints Metadata.md b/published/201704/20170116 Using the AWS SDK for Gos Regions and Endpoints Metadata.md
similarity index 100%
rename from published/20170116 Using the AWS SDK for Gos Regions and Endpoints Metadata.md
rename to published/201704/20170116 Using the AWS SDK for Gos Regions and Endpoints Metadata.md
diff --git a/published/20170117 Arch Linux on a Lenovo Yoga 900.md b/published/201704/20170117 Arch Linux on a Lenovo Yoga 900.md
similarity index 100%
rename from published/20170117 Arch Linux on a Lenovo Yoga 900.md
rename to published/201704/20170117 Arch Linux on a Lenovo Yoga 900.md
diff --git a/published/201704/20170118 Arrive On Time With NTP -- Part 1- Usage Overview.md b/published/201704/20170118 Arrive On Time With NTP -- Part 1- Usage Overview.md
new file mode 100644
index 0000000000..0474590efb
--- /dev/null
+++ b/published/201704/20170118 Arrive On Time With NTP -- Part 1- Usage Overview.md
@@ -0,0 +1,62 @@
+用 NTP 把控时间(一):使用概览
+============================================================
+
+ ![NTP](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ntp-time.jpg?itok=zu8dqpki "NTP")
+
+这个系列共三部分,本文是第一篇,Chris Binnie 探讨了在一个合理的架构中 NTP 服务的重要性。
+
+鲜有互联网上的服务能如时间服务一样重要。影响你系统计时的小问题可能需要一两天才能被发现,而这些不期而遇的问题所带来的连锁反应几乎总是让人伤脑筋的。
+
+设想你的备份服务器与网络时间协议(NTP)服务器断开连接,过了几天,引起了几小时的时间偏差。你的同事照常九点上班,发现需要大量带宽的备份服务器消耗了所有网络资源,这也就意味着他们在备份完成之前几乎不能登录工作台开始他们的日常工作。
+
+这个系列共三部分,本文是第一篇,我将简要介绍 NTP,以防止这种困境的发生。从邮件的时间戳到记录你工作的进展,NTP 服务对于一个合理的架构是如此的重要。
+
+可以把如此重要的 NTP 服务器(其他的服务器从它获取时钟数据)看做是倒置金字塔的底部,它被称之为一层服务器(也被称为“主”服务器)。这些服务器与国家级时间服务(称为零层,通常这些设备是原子钟和 GPS 钟之类的装置)直接交互。与之安全通讯的方法很多,例如通过卫星或者无线电。
+
+令人惊讶的是,几乎所有的大型企业都会连接二层服务器(或“次级”服务器)而不是主服务器。如你所料,二层服务器和一层服务器直接同步。如果你觉得大公司可能有自己的本地 NTP 服务器(至少两个,通常三个,为了灾难恢复之用),这些就是三层服务器。这样,三层服务器将连接上层预定义的次级服务器,负责任地传递时间给客户端和服务器,精确地反馈当前时间。
+
+NTP 的设计很简单,它可以工作的前提是:考虑到数据在互联网上要跨越大范围的地理距离,在完全信任一个时间之前,来回时间(数据包什么时候发出、多少秒后被收到)都会被清楚地记录。设置电脑时间的背后要比你想象的复杂得多,如果你不相信,那[这个神奇的网页][3]值得一看。
+
+再次强调,NTP 对于确保你的架构如预期般工作是如此关键,你与 NTP 上游层级服务器之间的连接必须完全可信赖,并且要提供额外的冗余,才能保持你的内部时钟同步。在 [NTP 主站][4]有一个有用的一层服务器列表。
+
+正如你在列表所见,一些 NTP 一层服务器以 “ClosedAccount” 状态运行,这些服务器需要事先获得同意才可以使用;而只要你完全按照他们的使用引导做,“OpenAccess” 服务器是可以轮询使用的;“RestrictedAccess” 服务器则有时候会因为大量客户端访问或者轮询间隔太小而受限。另外,有时候也有一些专供某种类型的组织使用,例如学术界。
+
+### 尊重我的权威
+
+在公共 NTP 服务器上,你可能发现遵从某些规则的使用规范。现在让我们看看其中一些。
+
+“iburst” 选项的作用是:如果在一个标准的轮询间隔内没有应答,客户端会发送一定数量的包(八个包而不是通常的一个)给 NTP 服务器。如果在短时间内呼叫 NTP 服务器几次都没有出现可辨识的应答,那么本地时间将不会变化。
+
+不像 “iburst”,按照 NTP 服务器的通用规则,“burst” 选项一般不允许使用(所以不要用它!)。这个选项不仅会在轮询间隔发送大量的包(显然又是八个),而且也会在服务器能正常应答时这样做。如果你持续向高层服务器发送包,甚至是在它们正常应答时,你可能会因为使用 “burst” 选项而被拉黑。
+
+显然,你连接服务器的频率造成了它的负载差异(和少量的带宽占用)。使用 “minpoll” 和 “maxpoll” 选项可以本地设置频率。然而,根据连接 NTP 服务器的规则,你不应该分别修改其默认的 64 秒和 1024 秒。
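+
+落实到配置上,这些选项都写在 NTP 客户端 `/etc/ntp.conf` 文件的 `server` 行里。下面是一个仅作演示的假设片段(服务器域名请按你的实际情况替换):
+
+```
+# iburst:初次同步时,若标准轮询间隔内无应答,就连发 8 个包加快对时
+server 0.pool.ntp.org iburst
+# minpoll 6 即 2^6 = 64 秒,maxpoll 10 即 2^10 = 1024 秒,即上文所说的默认值
+server 1.pool.ntp.org iburst minpoll 6 maxpoll 10
+```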
+
+此外,需要提出的是,客户端应该重视它们所请求时间的服务器发出的“死亡之吻”(Kiss-Of-Death,KOD)消息。如果 NTP 服务器不想响应某个特定的请求,就像路由和防火墙技术那样,它最有可能的做法就是简单地遗弃或吞没任何相关的包。
+
+换句话说,接收到这些数据的服务器并不需要特别处理这些包,简单地丢弃这些它认为不值得回应的包即可。你可以想象,这并不是特别好的做法,有时候礼貌地请客户端中止或停止请求,比忽略请求更为有效。因此,有一种专门的包类型叫做 KOD 包。如果一个客户端收到了这种不受欢迎的 KOD 包,它应该记住这个发回了拒绝访问标志的服务器。
+
+如果从该服务器收到不止一个 KOD 包,客户端会猜想服务器上发生了流量限速的情况(或类似的)。这种情况下,客户端一般会写入本地日志,提示与该服务器的交流不太顺利,以备将来排错之用。
+
+牢记,出于显而易见的原因,关键在于 NTP 服务器的架构应该是动态的。因此,不要在你的 NTP 配置中硬编码 IP 地址,这一点非常重要。通过使用 DNS 域名,个别服务器断开网络时服务仍能继续进行,IP 地址空间也能重新分配,并且可以引入简单的负载均衡(具有一定程度的弹性)。
+
+请别忘了,我们也需要考虑呈指数增长的物联网(IoT),它最终将包括数以亿万计的新设备,这意味着这些设备都需要保持正确的时间。硬件卖家无意(或有意)地把设备设置为只能连接某一个提供者的(甚至只是某一台)NTP 服务器,这种麻烦的做法终将成为过去。
+
+你可能会想象,随着更多的硬件单元被买入和上线,NTP 基础设施的拥有者大概不会为相关的费用感到高兴,因为他们并没有因此获得实际收入。这种情形并非杞人忧天。头疼的问题多着呢:由于 NTP 流量导致基础架构不堪重负的事情,在过去几年里已发生过多次。
+
+在接下来的两篇文章里,我将着重介绍一些重要的 NTP 安全配置选项,并描述服务器的搭建。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/arrive-time-ntp-part-1-usage-overview
+
+作者:[CHRIS BINNIE][a]
+译者:[XYenChi](https://github.com/XYenChi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/chrisbinnie
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/files/images/ntp-timejpg
+[3]:http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm
+[4]:http://support.ntp.org/bin/view/Servers/StratumOneTimeServers
diff --git a/published/20170118 Linux command line navigation tips- the basics of pushd and popd commands.md b/published/201704/20170118 Linux command line navigation tips- the basics of pushd and popd commands.md
similarity index 100%
rename from published/20170118 Linux command line navigation tips- the basics of pushd and popd commands.md
rename to published/201704/20170118 Linux command line navigation tips- the basics of pushd and popd commands.md
diff --git a/published/20170119 A beginners guide to comparing files using visual diffmerge tool Meld on Linux.md b/published/201704/20170119 A beginners guide to comparing files using visual diffmerge tool Meld on Linux.md
similarity index 100%
rename from published/20170119 A beginners guide to comparing files using visual diffmerge tool Meld on Linux.md
rename to published/201704/20170119 A beginners guide to comparing files using visual diffmerge tool Meld on Linux.md
diff --git a/published/201704/20170120 How to Install Elastic Stack on CentOS 7.md b/published/201704/20170120 How to Install Elastic Stack on CentOS 7.md
new file mode 100644
index 0000000000..402c5693ee
--- /dev/null
+++ b/published/201704/20170120 How to Install Elastic Stack on CentOS 7.md
@@ -0,0 +1,632 @@
+如何在 CentOS 7 上安装 Elastic Stack
+============================================================
+
+**Elasticsearch** 是基于 Lucene 由 Java 开发的开源搜索引擎。它提供了一个分布式、多租户的全文搜索引擎(LCTT 译注:多租户是指多租户技术,是一种软件架构技术,用来探讨与实现如何在多用户的环境下共用相同的系统或程序组件,并且仍可确保各用户间数据的隔离性。),并带有 HTTP 仪表盘的 Web 界面(Kibana)。数据会被 Elasticsearch 查询、检索,并且使用 JSON 文档方案存储。Elasticsearch 是一个可扩展的搜索引擎,可用于搜索所有类型的文本文档,包括日志文件。Elasticsearch 是 Elastic Stack 的核心,Elastic Stack 也被称为 ELK Stack。
+
+**Logstash** 是用于管理事件和日志的开源工具。它为数据收集提供实时传递途径。 Logstash 将收集您的日志数据,将数据转换为 JSON 文档,并将其存储在 Elasticsearch 中。
+
+**Kibana** 是 Elasticsearch 的开源数据可视化工具。Kibana 提供了一个漂亮的仪表盘 Web 界面。 你可以用它来管理和可视化来自 Elasticsearch 的数据。 它不仅美丽,而且强大。
+
+在本教程中,我将向您展示如何在 CentOS 7 服务器上安装和配置 Elastic Stack 以监视服务器日志。 然后,我将向您展示如何在操作系统为 CentOS 7 和 Ubuntu 16 的客户端上安装 “Elastic beats”。
+
+**前提条件**
+
+* 64 位的 CentOS 7,4 GB 内存 - elk 主控机
+* 64 位的 CentOS 7 ,1 GB 内存 - 客户端 1
+* 64 位的 Ubuntu 16 ,1 GB 内存 - 客户端 2
+
+### 步骤 1 - 准备操作系统
+
+在本教程中,我们将禁用 CentOS 7 服务器上的 SELinux。 编辑 SELinux 配置文件。
+
+```
+vim /etc/sysconfig/selinux
+```
+
+将 `SELINUX` 的值从 `enforcing` 改成 `disabled` 。
+
+```
+SELINUX=disabled
+```
+
+然后重启服务器:
+
+```
+reboot
+```
+
+再次登录服务器并检查 SELinux 状态。
+
+```
+getenforce
+```
+
+确保结果是 `disabled`。
+
+### 步骤 2 - 安装 Java
+
+部署 Elastic Stack 依赖于 Java。Elasticsearch 需要 Java 8 版本,推荐使用 Oracle JDK 1.8。我将从官方的 Oracle rpm 包安装 Java 8。
+
+使用 `wget` 命令下载 Java 8 的 JDK。
+
+```
+wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"
+```
+
+然后使用 `rpm` 命令安装:
+
+```
+rpm -ivh jdk-8u77-linux-x64.rpm
+```
+
+最后,检查 java JDK 版本,确保它正常工作。
+
+```
+java -version
+```
+
+您将看到服务器的 Java 版本。
+
+### 步骤 3 - 安装和配置 Elasticsearch
+
+在此步骤中,我们将安装和配置 Elasticsearch。 从 elastic.co 网站提供的 rpm 包安装 Elasticsearch,并将其配置运行在 localhost 上(以确保该程序安全,而且不能从外部访问)。
+
+在安装 Elasticsearch 之前,将 elastic.co 的密钥添加到服务器。
+
+```
+rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
+```
+
+接下来,使用 `wget` 下载 Elasticsearch 5.1,然后安装它。
+
+```
+wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
+rpm -ivh elasticsearch-5.1.1.rpm
+```
+
+Elasticsearch 已经安装好了。 现在进入配置目录编辑 `elasticsaerch.yml` 配置文件。
+
+```
+cd /etc/elasticsearch/
+vim elasticsearch.yml
+```
+
+去掉第 40 行的注释,启用 Elasticsearch 的内存锁。这将禁用 Elasticsearch 的内存交换。
+
+```
+bootstrap.memory_lock: true
+```
+
+在 `Network` 块中,取消注释 `network.host` 和 `http.port` 行。
+
+```
+network.host: localhost
+http.port: 9200
+```
+
+保存文件并退出编辑器。
+
+现在编辑 `elasticsearch.service` 文件的内存锁配置。
+
+```
+vim /usr/lib/systemd/system/elasticsearch.service
+```
+
+去掉第 60 行的注释,确保该值为 `unlimited`。
+
+```
+MAX_LOCKED_MEMORY=unlimited
+```
+
+保存并退出。
+
+Elasticsearch 配置到此结束。Elasticsearch 将在本机的 9200 端口运行,我们通过在 CentOS 服务器上启用 `mlockall` 来禁用内存交换。重新加载 systemd,将 Elasticsearch 置为开机启动,然后启动服务。
+
+```
+sudo systemctl daemon-reload
+sudo systemctl enable elasticsearch
+sudo systemctl start elasticsearch
+```
+
+等待 Elasticsearch 启动成功,然后检查服务器上打开的端口,确保 9200 端口的状态是 `LISTEN`。
+
+```
+netstat -plntu
+```
+
+![Check elasticsearch running on port 9200] [10]
+
+然后检查内存锁以确保启用 `mlockall`,并使用以下命令检查 Elasticsearch 是否正在运行。
+
+```
+curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
+curl -XGET 'localhost:9200/?pretty'
+```
+
+会看到如下结果。
+
+ ![Check memory lock elasticsearch and check status] [11]
+
+### 步骤 4 - 安装和配置 Kibana 和 Nginx
+
+在这一步,我们将在 Nginx Web 服务器上安装并配置 Kibana。 Kibana 监听在 localhost 上,而 Nginx 作为 Kibana 的反向代理。
+
+用 `wget` 下载 Kibana 5.1,然后使用 `rpm` 命令安装:
+
+```
+wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
+rpm -ivh kibana-5.1.1-x86_64.rpm
+```
+
+编辑 Kibana 配置文件。
+
+```
+vim /etc/kibana/kibana.yml
+```
+
+去掉配置文件中 `server.port`、`server.host` 和 `elasticsearch.url` 这三行的注释。
+
+```
+server.port: 5601
+server.host: "localhost"
+elasticsearch.url: "http://localhost:9200"
+```
+
+保存并退出。
+
+将 Kibana 设为开机启动,并且启动 Kibana 。
+
+```
+sudo systemctl enable kibana
+sudo systemctl start kibana
+```
+
+Kibana 将作为 node 应用程序运行在端口 5601 上。
+
+```
+netstat -plntu
+```
+
+![Kibana running as node application on port 5601] [12]
+
+Kibana 安装到此结束。 现在我们需要安装 Nginx 并将其配置为反向代理,以便能够从公共 IP 地址访问 Kibana。
+
+Nginx 在 Epel 资源库中可以找到,用 `yum` 安装 epel-release。
+
+```
+yum -y install epel-release
+```
+
+然后安装 Nginx 和 httpd-tools 这两个包。
+
+```
+yum -y install nginx httpd-tools
+```
+
+httpd-tools 软件包包含 Web 服务器的工具,可以为 Kibana 添加 htpasswd 基础认证。
+
+编辑 Nginx 配置文件并删除 `server {}` 块,这样我们可以添加一个新的虚拟主机配置。
+
+```
+cd /etc/nginx/
+vim nginx.conf
+```
+
+删除 `server { }` 块。
+
+ ![Remove Server Block on Nginx configuration] [13]
+
+保存并退出。
+
+现在我们需要在 `conf.d` 目录中创建一个新的虚拟主机配置文件。 用 `vim` 创建新文件 `kibana.conf`。
+
+```
+vim /etc/nginx/conf.d/kibana.conf
+```
+
+复制下面的配置。
+
+```
+server {
+ listen 80;
+
+ server_name elk-stack.co;
+
+ auth_basic "Restricted Access";
+ auth_basic_user_file /etc/nginx/.kibana-user;
+
+ location / {
+ proxy_pass http://localhost:5601;
+ proxy_http_version 1.1;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection 'upgrade';
+ proxy_set_header Host $host;
+ proxy_cache_bypass $http_upgrade;
+ }
+}
+```
+
+保存并退出。
+
+然后使用 `htpasswd` 命令创建一个新的基本认证文件。
+
+```
+sudo htpasswd -c /etc/nginx/.kibana-user admin
+# 按提示输入并确认密码
+```
+
+测试 Nginx 配置,确保没有错误。 然后设定 Nginx 开机启动并启动 Nginx。
+
+```
+nginx -t
+systemctl enable nginx
+systemctl start nginx
+```
+
+![Add nginx virtual host configuration for Kibana Application] [14]
+
+### 步骤 5 - 安装和配置 Logstash
+
+在此步骤中,我们将安装 Logstash,并将其配置为:把来自配置了 filebeat 的客户端的服务器日志集中起来,然后过滤和转换 syslog 数据,并将其移动到存储中心(Elasticsearch)中。
+
+下载 Logstash 并使用 rpm 进行安装。
+
+```
+wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
+rpm -ivh logstash-5.1.1.rpm
+```
+
+生成新的 SSL 证书文件,以便客户端可以识别 elastic 服务端。
+
+进入 `tls` 目录并编辑 `openssl.cnf` 文件。
+
+```
+cd /etc/pki/tls
+vim openssl.cnf
+```
+
+在 `[v3_ca]` 部分添加服务器标识。
+
+```
+[ v3_ca ]
+
+# Server IP Address
+subjectAltName = IP: 10.0.15.10
+```
+
+保存并退出。
+
+使用 `openssl` 命令生成证书文件。
+
+```
+openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
+```
+
+证书文件可以在 `/etc/pki/tls/certs/` 和 `/etc/pki/tls/private/` 目录中找到。
+
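+可以用下面的命令快速确认这两个文件都已经生成:
+
+```
+ls -l /etc/pki/tls/certs/logstash-forwarder.crt /etc/pki/tls/private/logstash-forwarder.key
+```
+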
+接下来,我们会为 Logstash 创建新的配置文件。创建一个新的 `filebeat-input.conf` 文件来为 filebeat 配置日志源,然后创建一个 `syslog-filter.conf` 配置文件来处理 syslog,再创建一个 `output-elasticsearch.conf` 文件来定义输出日志数据到 Elasticsearch。
+
+转到 logstash 配置目录,并在 `conf.d` 子目录中创建新的配置文件。
+
+```
+cd /etc/logstash/
+vim conf.d/filebeat-input.conf
+```
+
+输入配置,粘贴以下配置:
+
+```
+input {
+ beats {
+ port => 5443
+ ssl => true
+ ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
+ ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
+ }
+}
+```
+
+保存并退出。
+
+创建 `syslog-filter.conf` 文件。
+
+```
+vim conf.d/syslog-filter.conf
+```
+
+粘贴以下配置:
+
+```
+filter {
+ if [type] == "syslog" {
+ grok {
+ match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
+ add_field => [ "received_at", "%{@timestamp}" ]
+ add_field => [ "received_from", "%{host}" ]
+ }
+ date {
+ match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
+ }
+ }
+}
+```
+
+我们使用名为 `grok` 的过滤器插件来解析 syslog 文件。
+
+保存并退出。
+
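+例如,对于下面这样一条 syslog 日志(日志行是假设的示例),上述 grok 模式会提取出这些字段:
+
+```
+# 原始日志行:
+Feb  1 10:23:45 elk-client1 sshd[2233]: Failed password for invalid user admin from 10.0.15.21 port 52244 ssh2
+
+# 解析得到的主要字段:
+syslog_timestamp => "Feb  1 10:23:45"
+syslog_hostname  => "elk-client1"
+syslog_program   => "sshd"
+syslog_pid       => "2233"
+syslog_message   => "Failed password for invalid user admin from 10.0.15.21 port 52244 ssh2"
+```
+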
+创建输出配置文件 `output-elasticsearch.conf`。
+
+```
+vim conf.d/output-elasticsearch.conf
+```
+
+粘贴以下配置:
+
+```
+output {
+  elasticsearch {
+    hosts => ["localhost:9200"]
+    manage_template => false
+    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
+    document_type => "%{[@metadata][type]}"
+  }
+}
+```
+
+保存并退出。
+
+最后,将 logstash 设定为开机启动并且启动服务。
+
+```
+sudo systemctl enable logstash
+sudo systemctl start logstash
+```
+
+![Logstash started on port 5443 with SSL Connection] [15]
+
+### 步骤 6 - 在 CentOS 客户端上安装并配置 Filebeat
+
+Beat 扮演着数据托运者(data shipper)的角色,它是一种可以安装在客户端节点上的轻量级代理,将大量数据从客户机发送到 Logstash 或 Elasticsearch 服务器。有 4 种 beat:`Filebeat` 用于发送“日志文件”,`Metricbeat` 用于发送“指标”,`Packetbeat` 用于发送“网络数据”,`Winlogbeat` 用于发送 Windows 客户端的“事件日志”。
+
+在本教程中,我将向您展示如何安装和配置 `Filebeat`,通过 SSL 连接将数据日志文件传输到 Logstash 服务器。
+
+登录到客户端 1 的服务器上,然后将证书文件从 elastic 服务器复制到客户端 1 的服务器上。
+
+```
+ssh root@client1IP
+```
+
+使用 `scp` 命令拷贝证书文件。
+
+```
+scp root@elk-serverIP:~/logstash-forwarder.crt .
+# 输入 elk-server 的密码
+```
+
+创建一个新的目录,将证书移动到这个目录中。
+
+```
+sudo mkdir -p /etc/pki/tls/certs/
+mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
+```
+
+接下来,在客户端 1 服务器上导入 elastic 密钥。
+
+```
+rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
+```
+
+下载 Filebeat 并且用 `rpm` 命令安装。
+
+```
+wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
+rpm -ivh filebeat-5.1.1-x86_64.rpm
+```
+
+Filebeat 已经安装好了,请转到配置目录并编辑 `filebeat.yml` 文件。
+
+```
+cd /etc/filebeat/
+vim filebeat.yml
+```
+
+在第 21 行的路径部分,添加新的日志文件。我们将添加两个文件:记录 ssh 活动的 `/var/log/secure`,以及服务器日志 `/var/log/messages`。
+
+```
+ paths:
+ - /var/log/secure
+ - /var/log/messages
+```
+
+在第 26 行添加一个新配置来定义 syslog 类型的文件。
+
+```
+ document_type: syslog
+```
+
+Filebeat 默认使用 Elasticsearch 作为输出目标。在本教程中,我们将其更改为 Logstash。在 83 行和 85 行添加注释来禁用 Elasticsearch 输出。
+
+禁用 Elasticsearch 输出:
+
+```
+#-------------------------- Elasticsearch output ------------------------------
+#output.elasticsearch:
+ # Array of hosts to connect to.
+# hosts: ["localhost:9200"]
+```
+
+现在添加新的 logstash 输出配置。 去掉 logstash 输出配置的注释,并将所有值更改为下面配置中的值。
+
+```
+output.logstash:
+ # The Logstash hosts
+ hosts: ["10.0.15.10:5443"]
+ bulk_max_size: 1024
+ ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
+ template.name: "filebeat"
+ template.path: "filebeat.template.json"
+ template.overwrite: false
+```
+
+保存文件并退出 vim。
+
+将 Filebeat 设定为开机启动并启动。
+
+```
+sudo systemctl enable filebeat
+sudo systemctl start filebeat
+```
+
+### 步骤 7 - 在 Ubuntu 客户端上安装并配置 Filebeat
+
+使用 `ssh` 连接到服务器。
+
+```
+ssh root@ubuntu-clientIP
+```
+
+使用 `scp` 命令拷贝证书文件。
+
+```
+scp root@elk-serverIP:~/logstash-forwarder.crt .
+```
+
+创建一个新的目录,将证书移动到这个目录中。
+
+```
+sudo mkdir -p /etc/pki/tls/certs/
+mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
+```
+
+在服务器上导入 elastic 密钥。
+
+```
+wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
+```
+
+下载 Filebeat .deb 包并且使用 `dpkg` 命令进行安装。
+
+```
+wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
+dpkg -i filebeat-5.1.1-amd64.deb
+```
+
+转到配置目录并编辑 `filebeat.yml` 文件。
+
+```
+cd /etc/filebeat/
+vim filebeat.yml
+```
+
+在路径配置部分添加新的日志文件路径。
+
+```
+ paths:
+ - /var/log/auth.log
+ - /var/log/syslog
+```
+
+设定文档类型为 `syslog`。
+
+```
+ document_type: syslog
+```
+
+将下列几行注释掉,禁用输出到 Elasticsearch。
+
+```
+#-------------------------- Elasticsearch output ------------------------------
+#output.elasticsearch:
+ # Array of hosts to connect to.
+# hosts: ["localhost:9200"]
+```
+
+启用 logstash 输出,去掉以下配置的注释并且按照如下所示更改值。
+
+```
+output.logstash:
+ # The Logstash hosts
+ hosts: ["10.0.15.10:5443"]
+ bulk_max_size: 1024
+ ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
+ template.name: "filebeat"
+ template.path: "filebeat.template.json"
+ template.overwrite: false
+```
+
+保存并退出 vim。
+
+将 Filebeat 设定为开机启动并启动。
+
+```
+sudo systemctl enable filebeat
+sudo systemctl start filebeat
+```
+
+检查服务状态:
+
+```
+systemctl status filebeat
+```
+
+![Filebeat is running on the client Ubuntu] [16]
+
+### 步骤 8 - 测试
+
+打开您的网络浏览器,并访问您在 Nginx 中配置的 elastic stack 域名,我的是“elk-stack.co”。 使用管理员密码登录,然后按 Enter 键登录 Kibana 仪表盘。
+
+![Login to the Kibana Dashboard with Basic Auth] [17]
+
+创建一个新的默认索引 `filebeat-*`,然后点击“创建”按钮。
+
+![Create First index filebeat for Kibana] [18]
+
+默认索引已创建。如果 elastic stack 上有多个 beat,您只需点击“星形”按钮,即可将某个 beat 配置为默认。
+
+![Filebeat index as default index on Kibana Dashboard] [19]
+
+转到 “发现” 菜单,您就可以看到 elk-client1 和 elk-client2 服务器上的所有日志文件。
+
+![Discover all Log Files from the Servers] [20]
+
+来自 elk-client1 服务器日志中的无效 ssh 登录的 JSON 输出示例。
+
+![JSON output for Failed SSH Login] [21]
+
+使用其他的选项,你可以使用 Kibana 仪表盘做更多的事情。
+
+Elastic Stack 已安装在 CentOS 7 服务器上。 Filebeat 已安装在 CentOS 7 和 Ubuntu 客户端上。
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
+
+作者:[Muhammad Arul][a]
+译者:[Flowsnow](https://github.com/Flowsnow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
+[1]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-nbspprepare-the-operating-system
+[2]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-java
+[3]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-elasticsearch
+[4]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-kibana-with-nginx
+[5]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-logstash
+[6]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-centos-client
+[7]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-ubuntu-client
+[8]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-testing
+[9]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#reference
+[10]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/1.png
+[11]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/2.png
+[12]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/3.png
+[13]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/4.png
+[14]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/5.png
+[15]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/6.png
+[16]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/12.png
+[17]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/7.png
+[18]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/8.png
+[19]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/9.png
+[20]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/10.png
+[21]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/11.png
\ No newline at end of file
diff --git a/published/20170123 Linux command line navigation tipstricks 3 - the CDPATH environment variable.md b/published/201704/20170123 Linux command line navigation tipstricks 3 - the CDPATH environment variable.md
similarity index 100%
rename from published/20170123 Linux command line navigation tipstricks 3 - the CDPATH environment variable.md
rename to published/201704/20170123 Linux command line navigation tipstricks 3 - the CDPATH environment variable.md
diff --git a/published/201704/20170124 How to Keep Hackers out of Your Linux Machine Part 3- Your Questions Answered.md b/published/201704/20170124 How to Keep Hackers out of Your Linux Machine Part 3- Your Questions Answered.md
new file mode 100644
index 0000000000..d31be551e8
--- /dev/null
+++ b/published/201704/20170124 How to Keep Hackers out of Your Linux Machine Part 3- Your Questions Answered.md
@@ -0,0 +1,82 @@
+让你的 Linux 远离黑客(三):问题回答
+============================================================
+
+ ![Computer security](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/keep-hackers-out.jpg?itok=lqgHDxDu "computer security")
+
+Mike Guthrie 最近在 Linux 基金会的网络研讨会上回答了一些安全相关的问题。随时观看免费的研讨会。[Creative Commons Zero][1]
+
+这个系列的[第一篇][6]和[第二篇][7]文章覆盖了 5 个让你的 Linux 远离黑客的最简单方法,以及如何知道他们是否已经进入。这一次,我将回答一些我最近在 Linux 基金会网络研讨会上收到的很好的安全性问题。[随时观看免费网络研讨会][8]。
+
+### 如果系统自动使用私钥认证,如何存储密钥密码?
+
+这个很难。这是我们一直在斗争的事情,特别是我们在做 “Red Team” 的时候,因为我们有些需要自动调用的东西。我使用 Expect,但我倾向于在这上面使用老方法。你需要编写脚本,是的,将密码存储在系统上不是那么简单的一件事,当你这么做时你需要加密它。
+
+我的 Expect 脚本加密了存储的密码,然后解密,发送密码,并在完成后重新加密。我知道到这有一些缺陷,但它比使用无密码的密钥更好。
+
+如果你有一个无密码的密钥,并且你确实需要使用它。我建议你尽量限制需要用它的用户。例如,如果你正在进行一些自动日志传输或自动化软件安装,则只给那些需要执行这些功能的程序权限。
+
+你可以通过 SSH 运行命令,所以不要给它们一个完整的 shell,而是限制它们只能运行那条命令,这样就能防止有人窃取了这个密钥之后去做其他事情。
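+
+举个例子(脚本路径是假设的),在 `~/.ssh/authorized_keys` 里可以用 `command=` 选项把某个密钥限制为只能执行一条命令,并顺带关掉它用不到的能力:
+
+```
+command="/usr/local/bin/sync-logs.sh",no-pty,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... log-sync-key
+```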
+
+### 你对密码管理器如 KeePass2 怎么看?
+
+对我而言,密码管理器是一个非常好的目标。随着 GPU 破解的出现和 EC2 的一些破解能力,这些东西很容易就变成过去时。我一直在窃取这些密码库。
+
+现在,我们破解这些库的成功率是另外一回事。我们差不多有 10% 左右的破解成功率。如果人们不能为他们的密码库设置一个安全的密码,那么我们就会攻进去并获得丰硕成果。用比不用要强,但是你仍需要保护好这些资产,像保护其他密码一样保护好密码库。
+
+### 你认为从安全的角度来看,除了创建具有更高密钥长度的主机密钥之外,创建一个新的 “Diffie-Hellman” 模数并限制 2048 位或更高值得么?
+
+值得的。以前在 SSH 产品中存在弱点,你可以做到解密数据包流。有了它,你可以传递各种数据。作为一种加密机制,人们不假思索使用这种方式来传输文件和密码。使用健壮的加密并且改变你的密钥是很重要的。 我会轮换我的 SSH 密钥 - 这不像我的密码那么频繁,但是我每年会轮换一次。是的,这是一个麻烦,但它让我安心。我建议尽可能地使你的加密技术健壮。
+
+### 使用完全随机的英语单词(大概 10 万个)作为密码合适么?
+
+当然。我的密码实际上是一个完整的短语。它是带标点符号和大小写一句话。除此以外,我不再使用其他任何东西。
+
+我是一个“你可以记住而不用写下来或者放在密码库的密码”的大大的支持者。一个你可以记住不必写下来的密码比你需要写下来的密码更安全。
+
+使用短语或使用你可以记住的四个随机单词比那些需要经过几次转换的一串数字和字符的字符串更安全。我目前的密码长度大约是 200 个字符。这是我可以快速打出来并且记住的。
+
+### 在物联网情景下对保护基于 Linux 的嵌入式系统有什么建议么?
+
+物联网是一个新的领域,它是系统和安全的前沿,日新月异。现在,我尽量都保持离线。我不喜欢人们把我的灯光和冰箱搞乱。我故意不去购买支持联网的冰箱,因为我有朋友是黑客,我可不想我每天早上醒来都会看到那些不雅图片。封住它,锁住它,隔离它。
+
+目前物联网设备的恶意软件取决于默认密码和后门,所以只需要对你所使用的设备进行一些研究,并确保没有其他人可以默认访问。然后确保这些设备的管理接口受到防火墙或其他此类设备的良好保护。
+
+### 你可以提一个可以在 SMB 和大型环境中使用的防火墙/UTM(OS 或应用程序)么?
+
+我使用 pfSense,它是 BSD 的衍生产品。我很喜欢它。它有很多模块,实际上现在它有商业支持,这对于小企业来说这是非常棒的。对于更大的设备、更大的环境,这取决于你有哪些管理员。
+
+我一直都是 CheckPoint 管理员,但是 Palo Alto 也越来越受欢迎了。这些设备与小型企业或家庭使用很不同。我在各种小型网络中都使用 pfSense。
+
+### 云服务有什么内在问题么?
+
+并没有云,那只不过是其他人的电脑而已。云服务存在内在的问题。只知道谁访问了你的数据,你在上面放了什么。要知道当你向 Amazon 或 Google 或 Microsoft 上传某些东西时,你将不再完全控制它,并且该数据的隐私是有问题的。
+
+### 要获得 OSCP 你建议需要准备些什么?
+
+我现在准备通过这个认证。我的整个团队是这样。阅读他们的材料。记住, OSCP 将成为令人反感的安全基准。你一切都要使用 Kali。如果不这样做 - 如果你决定不使用 Kali,请确保仿照 Kali 实例安装所有的工具。
+
+这将是一个基于工具的重要认证。这是一个很好的方式。看看一些名为“渗透测试框架”的内容,因为这将为你提供一个很好的测试流程,他们的实验室似乎是很棒的。这与我家里的实验室非常相似。
+
+_[随时免费观看完整的网络研讨会][3]。查看这个系列的[第一篇][4]和[第二篇][5]文章获得 5 个简单的贴士来让你的 Linux 机器安全。_
+
+_Mike Guthrie 为能源部工作,负责 “Red Team” 的工作和渗透测试。_
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-3-your-questions-answered
+
+作者:[MIKE GUTHRIE][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/anch
+[1]:https://www.linux.com/licenses/category/creative-commons-zero
+[2]:https://www.linux.com/files/images/keep-hackers-outjpg
+[3]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
+[4]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
+[5]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
+[6]:https://linux.cn/article-8189-1.html
+[7]:https://linux.cn/article-8338-1.html
+[8]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
diff --git a/published/20170125 An executive's guide to containers.md b/published/201704/20170125 An executive's guide to containers.md
similarity index 100%
rename from published/20170125 An executive's guide to containers.md
rename to published/201704/20170125 An executive's guide to containers.md
diff --git a/published/20170125 Command line aliases in the Linux Shell.md b/published/201704/20170125 Command line aliases in the Linux Shell.md
similarity index 100%
rename from published/20170125 Command line aliases in the Linux Shell.md
rename to published/201704/20170125 Command line aliases in the Linux Shell.md
diff --git a/published/20170125 NMAP Common Scans – Part Two.md b/published/201704/20170125 NMAP Common Scans – Part Two.md
similarity index 100%
rename from published/20170125 NMAP Common Scans – Part Two.md
rename to published/201704/20170125 NMAP Common Scans – Part Two.md
diff --git a/published/201704/20170127 How to compare directories with Meld on Linux.md b/published/201704/20170127 How to compare directories with Meld on Linux.md
new file mode 100644
index 0000000000..fb784c2011
--- /dev/null
+++ b/published/201704/20170127 How to compare directories with Meld on Linux.md
@@ -0,0 +1,140 @@
+在 Linux 上使用 Meld 比较文件夹
+============================================================
+
+我们已经从一个新手的角度[了解][15]了 Meld(包括 Meld 的安装),我们也提及了一些 Meld 中级用户常用的小技巧。如果你有印象,在新手教程中,我们说过 Meld 可以比较文件和文件夹。我们已经讨论过怎么比较文件,今天,我们来看看 Meld 怎么比较文件夹。
+
+**需要指出的是,本教程中的所有命令和例子都是在 Ubuntu 14.04 上测试的,使用的 Meld 版本为 3.14.2。**
+
+### 用 Meld 比较文件夹
+
+打开 Meld 工具,然后选择 比较文件夹 选项来比较两个文件夹。
+
+[
+ ![Compare directories using Meld](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-dir-comp-1.png)
+][5]
+
+选择你要比较的文件夹:
+
+[
+ ![select the directories](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-sel-dir-2.png)
+][6]
+
+然后单击比较按钮,你会看到 Meld 像图中这样分成两栏比较目录,就像文件比较一样。
+
+[
+ ![Compare directories visually](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-dircomp-begins-3.png)
+][7]
+
+分栏会树形显示这些文件夹。你可以在上图中看到 —— 区别之处,不论是新建的还是被修改过的文件 —— 都会以不同的颜色高亮显示。
+
+根据 Meld 的官方文档可以知道,在窗口中看到的每个不同的文件或文件夹都会被突出显示。这样就很容易看出这个文件/文件夹与另一个分栏中对应位置的文件/文件夹的区别。
+
+下表是 Meld 网站上列出的在比较文件夹时突出显示的不同字体大小/颜色/背景等代表的含义。
+
+|**状态** | **表现** | **含义** |
+| --- | --- | --- |
+| 相同 | 正常字体 | 比较的文件夹中所有文件/文件夹相同。|
+| 过滤后相同 | 斜体 | 文件夹中文件不同,但使用文本过滤器的话,文件是相同的。|
+| 修改过 | 蓝色粗体 | 比较的文件夹中这些文件不同。 |
+| 新建 | 绿色粗体 | 该文件/文件夹在这个目录中存在,但其它目录中没有。|
+| 缺失 | 置灰文本,删除线 | 该文件/文件夹在这个目录中不存在,但在其它某个目录中存在。 |
+| 错误 | 黄色背景的红色粗体 | 比较文件时发生错误,最常见错误原因是文件权限(例如,Meld 无法打开该文件)和文件名编码错误。 |
+
+Meld 默认会列出比较文件夹中的所有内容,即使这些内容没有任何不同。当然,你也可以在工具栏中单击相同按钮设置 Meld 不显示这些相同的文件/文件夹 —— 单击这个按钮使其不可用。
+
+[
+ ![same button](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-same-button.png)
+][3]
+
+[
+ ![Meld compare buttons](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-same-disabled.png)
+][8]
+
+下面是单击 相同 按钮使其不可用的截图:
+
+[
+ ![Directory Comparison without same files](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-only-diff.png)
+][9]
+
+这样你会看到只显示了两个文件夹中不同的文件(新建的和修改过的)。同样,如果你单击 新建 按钮使其不可用,那么 Meld 就只会列出修改过的文件。所以,在比较文件夹时可以通过这些按钮自定义要显示的内容。
+
+你可以使用工具窗口显示区的上下箭头来切换选择是显示新建的文件还是修改过的文件。要打开两个文件进行分栏比较,可以双击文件,或者单击箭头旁边的 比较按钮。
+
+[
+ ![meld compare arrow keys](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-compare-arrows.png)
+][10]
+
+**提示 1**:如果你仔细观察,就会看到 Meld 窗口的左边和右边有一些小条。这些条的目的是提供“简单的用颜色区分的比较结果”。对每个不同的文件/文件夹,条上就有一个小的颜色块。你可以单击每一个这样的小块跳到它对应的文件/文件夹。
+
+**提示 2**:你随时可以分栏比较文件,然后以你的方式合并不同的文件。假如你想要合并所有不同的文件/文件夹(就是说你想要一个特定的文件/文件夹与另一个完全相同),那么你可以用 复制到左边 和 复制到右边 按钮:
+
+[
+ ![meld copy right part](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-copy-right-left.png)
+][11]
+
+比如,你可以在左边的分栏中选择一个文件或文件夹,然后单击 复制到右边 按钮,使右边对应条目完全一样。
+
+现在,在窗口的下拉菜单中找到 过滤 按钮,它就在 相同、新建 和 修改的 这三个按钮下面。这里你可以选择或取消文件的类型,告知 Meld 在比较文件夹时是否显示这种类型的文件/文件夹。官方文档解释说菜单中的这个条目表示“执行文件夹比较时该类文件名不会被查看。”
+
+该列表中条目包括备份文件,操作系统元数据,版本控制文件、二进制文件和多媒体文件。
+
+[
+ ![Meld filters](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-filters.png)
+][12]
+
+前面提到的条目也可以通过这样的方式找到:_浏览->文件过滤_。你可以通过 _编辑->首选项->文件过滤_ 为这个条目增加新元素(也可以删除已经存在的元素)。
+
+[
+ ![Meld preferences](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-edit-filters-menu.png)
+][13]
+
+要新建一个过滤条件,你需要使用一组 shell 符号,下表列出了 Meld 支持的 shell 符号:
+
+| **通配符** | **匹配** |
+| --- | --- |
+| * | 任意字符(即零个或多个字符) |
+| ? | 一个字符 |
+| [abc] | 所列字符中的任何一个 |
+| [!abc] | 不在所列字符中的任何一个 |
+| {cat,dog} | “cat” 或 “dog” 中的一个 |
+
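+举几个假设的例子,一个过滤条件可以包含多个用空格分隔的模式:
+
+```
+*.pyc build* *.{jpg,png,gif}
+```
+
+第一个模式匹配所有 Python 字节码文件,第二个匹配以 build 开头的文件/文件夹,第三个匹配这三种扩展名的图片文件;加入过滤条件后,它们在文件夹比较时都会被忽略。
+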
+最重要的一点是 Meld 的文件名默认大小写敏感。也就是说,Meld 认为 readme 和 ReadMe 与 README 是不一样的文件。
+
+幸运的是,你可以关掉 Meld 的大小写敏感。只需要打开 _浏览_ 菜单然后选择 忽略文件名大小写 选项。
+[
+ ![Meld ignore filename case](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-ignore-case.png)
+][14]
+
+### 结论
+
+你是否觉得使用 Meld 比较文件夹很容易呢 —— 事实上,我认为它相当容易。只有新建一个文件过滤器会花点时间,但是这不意味着你没必要学习创建过滤器。显然,这取决于你的需求。
+
+另外,你甚至可以用 Meld 比较三个文件夹。想要比较三个文件夹时,你可以单击 三向比较 复选框来实现。今天,我们不介绍怎么比较三个文件夹,但它肯定会出现在后续的教程中。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/
+
+作者:[Ansh][a]
+译者:[vim-kakali](https://github.com/vim-kakali)
+校对:[jasminepeng](https://github.com/jasminepeng)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/
+[1]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/#compare-directories-using-meld
+[2]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/#conclusion
+[3]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-same-button.png
+[4]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/
+[5]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-dir-comp-1.png
+[6]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-sel-dir-2.png
+[7]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-dircomp-begins-3.png
+[8]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-same-disabled.png
+[9]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-only-diff.png
+[10]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-compare-arrows.png
+[11]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-copy-right-left.png
+[12]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-filters.png
+[13]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-edit-filters-menu.png
+[14]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-ignore-case.png
+[15]:https://linux.cn/article-8402-1.html
\ No newline at end of file
diff --git a/published/20170128 How communities in India support privacy and software freedom.md b/published/201704/20170128 How communities in India support privacy and software freedom.md
similarity index 100%
rename from published/20170128 How communities in India support privacy and software freedom.md
rename to published/201704/20170128 How communities in India support privacy and software freedom.md
diff --git a/translated/tech/20170201 Building your own personal cloud with Cozy.md b/published/201704/20170201 Building your own personal cloud with Cozy.md
similarity index 62%
rename from translated/tech/20170201 Building your own personal cloud with Cozy.md
rename to published/201704/20170201 Building your own personal cloud with Cozy.md
index 9be9d839fe..58500b8424 100644
--- a/translated/tech/20170201 Building your own personal cloud with Cozy.md
+++ b/published/201704/20170201 Building your own personal cloud with Cozy.md
@@ -2,17 +2,18 @@
============================================================
![使用 Cozy 搭建个人云](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_tree_clouds.png?itok=dSV0oTDS "Building your own personal cloud with Cozy")
+
>Image by : [Pixabay][2]. Modified by Opensource.com. [CC BY-SA 4.0][3]
我认识的大部分人为了他们的日历、电子邮件、文件存储等,都会使用一些基于 Web 的应用。但是,如果像我这样,对隐私感到担忧、或者只是希望将你自己的数字生活简单化为一个你所控制的地方呢? [Cozy][4] 就是一个朝着健壮的自主云平台方向发展的项目。你可以从 [GitHub][5] 上获取 Cozy 的源代码,它采用 AGPL 3.0 协议。
### 安装
-安装 Cozy 非常快捷简单,这里有多种平台的 [简单易懂安装指令][6]。在我的测试中,我使用 64 位的 Debian 8 系统。安装需要几分钟时间,然后你只需要到服务器的 IP 地址注册一个账号,就会加载并准备好默认的应用程序集。
+安装 Cozy 非常快捷简单,这里有多种平台的 [简单易懂的安装指令][6]。在我的测试中,我使用 64 位的 Debian 8 系统。安装需要几分钟时间,然后你只需要到服务器的 IP 地址注册一个账号,就会加载并准备好默认的应用程序集。
要注意的一点 - 安装假设没有正在运行任何其它 Web 服务,而且它会尝试安装 [Nginx web 服务器][7]。如果你的服务器已经有网站正在运行,配置可能就比较麻烦。我是在一个全新的 VPS(Virtual Private Server,虚拟个人服务器)上安装,因此比较简单。运行安装程序、启动 Nginx,然后你就可以访问云了。
-Cozy 还有一个 [应用商店][8],你可以从中下载额外的应用程序。有一些看起来非常有趣,例如 [Ghost 博客平台][9] 以及开源维基 [TiddlyWiki][10]。其中的目标,显然是允许把其它很多好的应用程序集成到这个平台。我认为你要看到很多其它流行的开源应用程序提供集成功能只是时间问题。此刻,已经支持 [Node.js][11],但是如何也支持其它应用层,你就可以看到很多其它很好的应用程序。
+Cozy 还有一个 [应用商店][8],你可以从中下载额外的应用程序。有一些看起来非常有趣,例如 [Ghost 博客平台][9] 以及开源维基 [TiddlyWiki][10]。其目的,显然是允许把其它很多好的应用程序集成到这个平台。我认为你要看到很多其它流行的开源应用程序提供集成功能只是时间问题。此刻,已经支持 [Node.js][11],但是如果也支持其它应用层,你就会看到更多很好的应用程序出现在这个平台上。
其中可能的一个功能是从安卓设备中使用免费的安卓应用程序访问你的信息。当前还没有 iOS 应用,但有计划要解决这个问题。
@@ -20,27 +21,27 @@ Cozy 还有一个 [应用商店][8],你可以从中下载额外的应用程序
![主要 Cozy 界面](https://opensource.com/sites/default/files/main_cozy_interface.jpg "Main Cozy Interface")
-主要 Cozy 界面
+*主要 Cozy 界面*
### 文件
和很多分支一样,我使用 [Dropbox][12] 进行文件存储。事实上,由于我有很多东西需要存储,我需要花钱买 DropBox Pro。对我来说,如果它有我想要的功能,那么把我的文件移动到 Cozy 能为我节省很多开销。
-我希望我能说这是真的,我确实做到了。我被 Cozy 应用程序内建的基于 web 的文件上传和文件管理工具所惊讶。拖拽功能正如你期望的那样,界面也很干净整洁。我在上传事例文件和目录、随处跳转、移动、删除以及重命名文件时都没有遇到问题。
+我希望如此,而它真的可以。我被 Cozy 应用程序内建的基于 web 的文件上传和文件管理工具所惊讶。拖拽功能正如你期望的那样,界面也很干净整洁。我在上传事例文件和目录、随处跳转、移动、删除以及重命名文件时都没有遇到问题。
-如果你想要的就是基于 web 的云文件存储,那么你做到了。对我来说,它缺失的是 DropBox 具有的选择性文件目录同步功能。在 DropBox 中,如果你拖拽一个文件到目录中,它就会被拷贝到云,几分钟后该文件在你所有同步设备中都可以看到。实际上,[Cozy 正在研发该功能][13],但此时它还处于 beta 版,而且只支持 Linux 客户端。另外,我有一个称为 [Download to Dropbox][15] 的 [Chrome][14] 扩展,我时不时用它抓取图片和其它内容,但当前 Cozy 中还没有类似的工具。
+如果你想要的就是基于 web 的云文件存储,那么你已经有了。对我来说,它缺失的是 DropBox 具有的选择性的文件目录同步功能。在 DropBox 中,如果你拖拽一个文件到目录中,它就会被拷贝到云,几分钟后该文件在你所有同步设备中都可以看到。实际上,[Cozy 正在研发该功能][13],但此时它还处于 beta 版,而且只支持 Linux 客户端。另外,我有一个称为 [Download to Dropbox][15] 的 [Chrome][14] 扩展,我时不时用它抓取图片和其它内容,但当前 Cozy 中还没有类似的工具。
![文件管理界面](https://opensource.com/sites/default/files/cozy_2.jpg "文件管理界面")
-文件管理界面
+*文件管理界面*
### 从 Google 导入数据
-如果你正在使用 Google 日历和联系人,使用 Cozy 安装的应用程序很轻易的就可以导入它们。当你授权访问 Google 时,会给你一个 API 密钥,把它粘贴到 Cozy,它就会迅速高效地进行复制。两种情况下,内容都会被打上“从 Google 导入”的标签。对于我混乱的联系人,这可能是件好事情,因为它使得我有机会重新整理,把它们重新标记为更有意义的类别。“Google Calendar” 中所有的事件都导入了,但是我注意到其中一些事件的时间不对,可能是由于两端时区设置的影响。
+如果你正在使用 Google 日历和联系人,使用 Cozy 安装的应用程序很轻易的就可以导入它们。当你授权对 Google 的访问时,会给你一个 API 密钥,把它粘贴到 Cozy,它就会迅速高效地进行复制。两种情况下,内容都会被打上“从 Google 导入”的标签。对于我混乱的联系人,这可能是件好事情,因为它使得我有机会重新整理,把它们重新标记为更有意义的类别。“Google Calendar” 中所有的事件都导入了,但是我注意到其中一些事件的时间不对,可能是由于两端时区设置的影响。
### 联系人
-联系人正如你期望的那样,界面也很像 Google 联系人。尽管如此,还是有一些不好的地方。和你(例如)智能手机的同步通过 [CardDAV][16] 完成,这是用于共享联系人数据的标准协议,但安卓手机并不原生支持该技术。为了把你的联系人同步到安卓设备,你需要在你的手机上安装一个应用。这对我来说是个很大的打击,因为我已经有很多类似这样的旧应用程序了(例如 work mail、Gmail以及其它 mail,我的天),我并不想安装一个不能和我智能手机原生联系人应用程序同步的软件。如果你正在使用 iPhone,你直接就能使用 CradDAV。
+联系人功能正如你期望的那样,界面也很像 Google 联系人。尽管如此,还是有一些不好的地方。例如,和你的智能手机的同步是通过 [CardDAV][16] 完成的,这是用于共享联系人数据的标准协议,但安卓手机并不原生支持该技术。为了把你的联系人同步到安卓设备,你需要在你的手机上安装一个应用。这对我来说是个很大的打击,因为我已经有很多类似这样的古怪应用程序了(例如工作的邮件、Gmail 以及其它邮件,我的天),我并不想安装一个不能和我智能手机原生联系人应用程序同步的软件。如果你正在使用 iPhone,你直接就能使用 CardDAV。
### 日历
@@ -48,15 +49,15 @@ Cozy 还有一个 [应用商店][8],你可以从中下载额外的应用程序
### 照片
-照片应用让我印象深刻,它从文件应用程序借鉴了很多东西。你甚至可以把一个其它应用程序的照片文件添加到相册,或者直接通过拖拽上传。不幸的是,一旦上传后,我没有找到任何重拍和编辑照片的方法。你只能把它们从相册中删除。应用有一个通过令牌链接进行分享的工具,而且你可以指定一个或多个联系人。系统会给这些联系人发送邀请他们查看相册的电子邮件。当然还有很多比这个有更丰富功能的相册应用,但在 Cozy 平台中这算是一个好的起点。
+照片应用让我印象深刻,它从文件应用程序借鉴了很多东西。你甚至可以把一个其它应用程序的照片文件添加到相册,或者直接通过拖拽上传。不幸的是,一旦上传后,我没有找到任何重新排序和编辑照片的方法。你只能把它们从相册中删除。应用有一个通过令牌链接进行分享的工具,而且你可以指定一个或多个联系人。系统会给这些联系人发送邀请他们查看相册的电子邮件。当然还有很多比这个有更丰富功能的相册应用,但在 Cozy 平台中这算是一个好的起点。
![Photos 界面](https://opensource.com/sites/default/files/cozy_3_0.jpg "Photos Interface")
-Photos 界面
+*Photos 界面*
### 总结
-Cozy 目标远大。他们尝试搭建你能部署任意你想要的基于云的服务的平台。它已经到了黄金时段吗?我并不认为。对于一些重度用户来说我之前提到的一些问题很严重,而且还没有 iOS 应用,这可能阻碍用户使用它。不管怎样,继续关注吧 - 随着研发的继续,Cozy 有一家代替很多应用程序的潜能。
+Cozy 目标远大。他们尝试搭建一个平台,让你能在上面部署任意你想要的基于云的服务。它已经到了成熟期吗?我并不认为。对于一些重度用户来说,我之前提到的一些问题很严重,而且还没有 iOS 应用,这可能阻碍用户使用它。不管怎样,继续关注吧 - 随着研发的继续,Cozy 有取代很多应用程序的潜力。
--------------------------------------------------------------------------------
@@ -70,7 +71,7 @@ via: https://opensource.com/article/17/2/cozy-personal-cloud
作者:[D Ruth Bavousett][a]
译者:[ictlyh](https://github.com/ictlyh)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/201704/20170201 OpenContrail An Essential Tool in the OpenStack Ecosystem.md b/published/201704/20170201 OpenContrail An Essential Tool in the OpenStack Ecosystem.md
new file mode 100644
index 0000000000..5cc23c47b4
--- /dev/null
+++ b/published/201704/20170201 OpenContrail An Essential Tool in the OpenStack Ecosystem.md
@@ -0,0 +1,57 @@
+OpenContrail:一个 OpenStack 生态中的重要工具
+============================================================
+
+
+![OpenContrail](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/contrails-cloud.jpg?itok=aoNIH-ar "OpenContrail")
+
+*OpenContrail 是用于 OpenStack 云计算平台的 SDN 平台,它正在成为管理员需要掌握的一个重要工具。*
+
+[Creative Commons Zero] [1] Pixabay
+
+整个 2016 年,软件定义网络(SDN)迅速发展,开源和云计算领域的众多参与者正帮助其增长。结合这一趋势,用在 OpenStack 云计算平台上的流行的 SDN 平台 [OpenContrail][3] 正成为许多管理员需要掌握的重要工具。
+
+正如管理员和开发人员在 OpenStack 生态系统中围绕着诸如 Ceph 等重要工具提升技能一样,他们将需要拥抱 OpenContrail,它是由 Apache 软件基金会全面开源并管理的软件。
+
+考虑到这些,OpenStack 领域中最活跃的公司之一 Mirantis 已经[宣布][4]对 OpenContrail 提供商业支持和贡献。该公司提到:“添加了 OpenContrail 后,Mirantis 将会为与 OpenStack 一起使用的开源技术,包括用于存储的 Ceph、用于计算的 OpenStack/KVM、用于 SDN 的 OpenContrail 或 Neutron 提供一站式的支持。”
+
+根据 Mirantis 公告,“OpenContrail 是一个使用基于标准协议构建的 Apache 2.0 许可项目,为网络虚拟化提供了所有必要的组件 - SDN 控制器、虚拟路由器、分析引擎和已发布的上层 API,它有一个可扩展 REST API 用于配置以及从系统收集操作和分析数据。作为规模化构建,OpenContrail 可以作为云基础设施的基础网络平台。”
+
+有消息称 Mirantis [收购了 TCP Cloud][5],这是一家专门从事 OpenStack、OpenContrail 和 Kubernetes 管理服务的公司。Mirantis 将使用 TCP Cloud 的云架构持续交付技术来管理将在 Docker 容器中运行的 OpenContrail 控制面板。作为这项工作的一部分,Mirantis 也会一直致力于 OpenContrail。
+
+OpenContrail 的许多贡献者正在与 Mirantis 紧密合作,他们特别注意了 Mirantis 将提供的支持计划。
+
+“OpenContrail 是 OpenStack 社区中一个重要的项目,而 Mirantis 很好地将它容器化并提供商业支持。我们团队正在做的工作使 OpenContrail 能轻松地扩展和更新,并与 Mirantis OpenStack 的其余部分进行无缝滚动升级。” Mirantis 的工程总监和 OpenContrail 咨询委员会主任 Jakub Pavlik 说:“商业支持也将使 Mirantis 能够使该项目与各种交换机兼容,从而为客户提供更多的硬件和软件选择。”
+
+除了 OpenContrail 的商业支持外,我们很可能还会看到 Mirantis 为那些想要学习如何利用它的云管理员和开发人员提供的教育服务。Mirantis 已经以其 [OpenStack 培训][6]课程而闻名,并将 Ceph 纳入了培训课程中。
+
+在 2016 年,SDN 领域快速演变,并且对许多部署 OpenStack 的组织来说也很有意义。IDC 最近发布了 SDN 市场的[一项研究][7],预计从 2014 年到 2020 年 SDN 市场的年均复合增长率为 53.9%,届时市场价值将达到 125 亿美元。此外,“Technology Trends 2016” 报告将 SDN 列为组织最佳的技术投资之一。
+
+IDC 网络基础设施副总裁 [Rohit Mehra][8] 说:“云计算和第三平台推动了 SDN 的需求,到 2020 年,SDN 将成为一个价值超过 125 亿美元的市场。丝毫不令人奇怪的是,SDN 的价值将越来越多地渗透到网络虚拟化软件和 SDN 应用中,包括虚拟化网络和安全服务。大型企业现在正在数据中心中实现 SDN 的价值,但它们最终将会认识到其在横跨分支机构和校园网络的广域网中的广泛应用。”
+
+同时,Linux 基金会最近[宣布][9]发布了其 2016 年度报告[“开放云指导:当前趋势和开源项目”][10]。第三份年度报告全面介绍了开放云计算,并包含一个关于 SDN 的部分。
+
+Linux 基金会还提供了[软件定义网络基础知识][11](LFS265)课程,这是一个自定进度的 SDN 在线课程。此外,Linux 基金会也是 [OpenDaylight][12] 项目的领导者,OpenDaylight 是另一个正在迅速成长的重要开源 SDN 平台。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/event/open-networking-summit/2017/2/opencontrail-essential-tool-openstack-ecosystem
+
+作者:[SAM DEAN][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/sam-dean
+[1]:https://www.linux.com/licenses/category/creative-commons-zero
+[2]:https://www.linux.com/files/images/contrails-cloudjpg
+[3]:https://www.globenewswire.com/Tracker?data=brZ3aJVRyVHeFOyzJ1Dl4DMY3CsSV7XcYkwRyOcrw4rDHplSItUqHxXtWfs18mLsa8_bPzeN2EgZXWcQU8vchg==
+[4]:http://www.econotimes.com/Mirantis-Becomes-First-Vendor-to-Offer-Support-and-Managed-Services-for-OpenContrail-SDN-486228
+[5]:https://www.globenewswire.com/Tracker?data=Lv6LkvREFzGWgujrf1n6r_qmjSdu67-zdRAYt2itKQ6Fytomhfphuk5EbDNjNYtfgAsbnqI8H1dn_5kB5uOSmmSYY9XP2ibkrPw_wKi5JtnAyV43AjuR_epMmOUkZZ8QtFdkR33lTGDmN6O5B4xkwv7fENcDpm30nI2Og_YrYf0b4th8Yy4S47lKgITa7dz2bJpwpbCIzd7muk0BZ17vsEp0S3j4kQJnmYYYk5udOMA=
+[6]:https://training.mirantis.com/
+[7]:https://www.idc.com/getdoc.jsp?containerId=prUS41005016
+[8]:http://www.idc.com/getdoc.jsp?containerId=PRF003513
+[9]:https://www.linux.com/blog/linux-foundation-issues-2016-guide-open-source-cloud-projects
+[10]:http://ctt.marketwire.com/?release=11G120876-001&id=10172077&type=0&url=http%3A%2F%2Fgo.linuxfoundation.org%2Frd-open-cloud-report-2016-pr
+[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/software-defined-networking-fundamentals
+[12]:https://www.opendaylight.org/
diff --git a/published/201704/20170201 lnav – An Advanced Console Based Log File Viewer for Linux.md b/published/201704/20170201 lnav – An Advanced Console Based Log File Viewer for Linux.md
new file mode 100644
index 0000000000..b795d62d3c
--- /dev/null
+++ b/published/201704/20170201 lnav – An Advanced Console Based Log File Viewer for Linux.md
@@ -0,0 +1,198 @@
+lnav:Linux 下一个基于控制台的高级日志文件查看器
+============================================================
+
+[LNAV][3](Log file Navigator)是 Linux 下一个基于控制台的高级日志文件查看器。它和其它文件查看器,例如 cat、more、tail 等,完成相同的任务,但有很多普通文件查看器没有的增强功能(尤其是它自带多种颜色和易于阅读的格式)。
+
+它能在解压多个压缩日志文件(zip、gzip、bzip)的同时把它们合并到一起进行导航。基于消息的时间戳,`lnav` 能把多个日志文件合并到一个视图(Single Log Review),从而避免打开多个窗口。左边的颜色栏帮助显示消息所属的文件。
+
+警告和错误的数量以(黄色和红色)高亮显示,因此我们能够很轻易地看到问题出现在哪里。它会自动加载新的日志行。
+
+它按照消息时间戳排序显示所有文件的日志消息。顶部和底部的状态栏会告诉你位于哪个日志文件。如果你想按特定的模式查找,只需要在搜索弹窗中输入就会即时显示。
+
+内建的日志消息解析器会自动从每一行中发现和提取详细信息。
+
+服务器日志是一个由服务器创建并经常更新、用于记录特定服务和应用的所有活动信息的日志文件。当你的应用或者服务出现问题时这个文件就会非常有用。从日志文件中你可以获取所有关于该问题的信息,例如基于警告或者错误信息判断它什么时候开始表现不正常。
+
+当你用一个普通文件查看器打开一个日志文件时,它会用纯文本格式显示所有信息(说得更直白些:黑底白字的纯文本),这样很难发现和理解哪里有警告或错误信息。为了克服这种情况、快速找到警告和错误信息来解决问题,lnav 是一个上手即用的更好的解决方案。
+
+大部分常见的 Linux 日志文件都放在 `/var/log/`。
+
+**lnav 自动检测以下日志格式**
+
+* Common Web Access Log format(普通 web 访问日志格式)
+* CUPS page_log
+* Syslog
+* Glog
+* VMware ESXi/vCenter 日志
+* dpkg.log
+* uwsgi
+* “Generic” – 以时间戳开始的任何消息
+* Strace
+* sudo
+* gzip & bzip
+
+**lnav 高级功能**
+
+* 单一日志视图 - 基于消息时间戳,所有日志文件内容都会被合并到一个单一视图
+* 自动日志格式检测 - `lnav` 支持大部分日志格式
+* 过滤器 - 能进行基于正则表达式的过滤
+* 时间线视图
+* 适宜打印视图(Pretty-Print)
+* 使用 SQL 查询日志(见列表后的示例)
+* 自动数据抽取
+* 实时操作
+* 语法高亮
+* Tab 补全
+* 当你查看相同文件集时可以自动保存和恢复会话信息。
+* Headless 模式
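+
+作为一点补充说明(以下示例并非原文内容,只是把上述“使用 SQL 查询日志”和“Headless 模式”两个功能结合起来的一个假设性示意;日志路径为任取,表名 `access_log` 与列名 `c_ip` 以 lnav 对“普通 web 访问日志格式”的内置定义为前提):
+
+```
+# 以 headless 模式(-n)执行一条 SQL 命令(-c),统计每个客户端 IP 的请求数
+lnav -n -c ';SELECT c_ip, count(*) AS hits FROM access_log GROUP BY c_ip ORDER BY hits DESC;' /var/log/apache2/access.log
+```
+
+在交互界面中,按 `;` 键也可以进入 SQL 提示符来执行同样的查询。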
+
+### 如何在 Linux 中安装 lnav
+
+大部分发行版(Debian、Ubuntu、Mint、Fedora、suse、openSUSE、Arch Linux、Manjaro、Mageia 等等)默认都有 `lnav` 软件包,在软件包管理器的帮助下,我们可以很轻易地从发行版官方仓库中安装它。对于 CentOS/RHEL 我们需要启用 **[EPEL 仓库][1]**。
+
+```
+[在 Debian/Ubuntu/LinuxMint 上安装 lnav]
+$ sudo apt-get install lnav
+
+[在 RHEL/CentOS 上安装 lnav]
+$ sudo yum install lnav
+
+[在 Fedora 上安装 lnav]
+$ sudo dnf install lnav
+
+[在 openSUSE 上安装 lnav]
+$ sudo zypper install lnav
+
+[在 Mageia 上安装 lnav]
+$ sudo urpmi lnav
+
+[在基于 Arch Linux 的系统上安装 lnav]
+$ yaourt -S lnav
+```
+
+如果你的发行版没有 `lnav` 软件包,别担心,开发者提供了 `.rpm` 和 `.deb` 安装包,因此我们可以轻易安装。确保你从 [开发者 github 页面][4] 下载最新版本的安装包。
+
+```
+[在 Debian/Ubuntu/LinuxMint 上安装 lnav]
+$ sudo wget https://github.com/tstack/lnav/releases/download/v0.8.1/lnav_0.8.1_amd64.deb
+$ sudo dpkg -i lnav_0.8.1_amd64.deb
+
+[在 RHEL/CentOS 上安装 lnav]
+$ sudo yum install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
+
+[在 Fedora 上安装 lnav]
+$ sudo dnf install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
+
+[在 openSUSE 上安装 lnav]
+$ sudo zypper install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
+
+[在 Mageia 上安装 lnav]
+$ sudo rpm -ivh https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
+```
+
+### 不带参数运行 lnav
+
+默认情况下你不带参数运行 `lnav` 时它会打开 `syslog` 文件。
+
+```
+# lnav
+```
+
+[
+ ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png)
+][5]
+
+### 使用 lnav 查看特定日志文件
+
+要用 `lnav` 查看特定的日志文件,在 `lnav` 命令后面添加日志文件路径。例如我们想看 `/var/log/dpkg.log` 日志文件。
+
+```
+# lnav /var/log/dpkg.log
+```
+
+[
+ ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png)
+][6]
+
+### 用 lnav 查看多个日志文件
+
+要用 `lnav` 查看多个日志文件,在 lnav 命令后面逐个添加日志文件路径,用一个空格隔开。例如我们想查看 `/var/log/dpkg.log` 和 `/var/log/kern.log` 日志文件。
+
+左边的颜色栏帮助显示消息所属的文件。另外顶部状态栏还会显示当前日志文件的名称。为了显示多个日志文件,大部分应用经常会打开多个窗口、或者在窗口中水平或竖直切分,但 `lnav` 使用不同的方式(它基于日期组合在同一个窗口显示多个日志文件)。
+
+```
+# lnav /var/log/dpkg.log /var/log/kern.log
+```
+
+[
+ ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png)
+][7]
+
+### 使用 lnav 查看压缩的日志文件
+
+要查看并同时解压被压缩的日志文件(zip、gzip、bzip),在 `lnav` 命令后面添加 `-r` 选项。
+
+```
+# lnav -r /var/log/Xorg.0.log.old.gz
+```
+
+[
+ ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png)
+][8]
+
+### 直方图视图
+
+首先运行 `lnav`,然后按 `i` 键即可进入或退出直方图视图。
+
+[
+ ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png)
+][9]
+
+### 查看日志解析器结果
+
+首先运行 `lnav`,然后按 `p` 键即可显示日志解析器的结果。
+
+[
+ ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png)
+][10]
+
+### 语法高亮
+
+你可以搜索任何给定的字符串,它会在屏幕上高亮显示。首先运行 `lnav` 然后按 `/` 键并输入你想查找的字符串。为了测试,我搜索字符串 `Default`,看下面的截图。
+
+[
+ ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png)
+][11]
+
+### Tab 补全
+
+命令窗口支持大部分操作的 tab 补全。例如,在进行搜索时,你可以使用 tab 补全屏幕上显示的单词,而不需要复制粘贴。为了测试,我搜索字符串 `/var/log/Xorg`,看下面的截图。
+
+[
+ ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png)
+][12]
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.2daygeek.com/install-and-use-advanced-log-file-viewer-navigator-lnav-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.2daygeek.com/author/magesh/
+[1]:http://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
+[2]:http://www.2daygeek.com/author/magesh/
+[3]:http://lnav.org/
+[4]:https://github.com/tstack/lnav/releases
+[5]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png
+[6]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png
+[7]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png
+[8]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png
+[9]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png
+[10]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png
+[11]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png
+[12]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png
diff --git a/published/201704/20170203 A comprehensive guide to taking screenshots in Linux using gnome-screenshot.md b/published/201704/20170203 A comprehensive guide to taking screenshots in Linux using gnome-screenshot.md
new file mode 100644
index 0000000000..ee5d7853f0
--- /dev/null
+++ b/published/201704/20170203 A comprehensive guide to taking screenshots in Linux using gnome-screenshot.md
@@ -0,0 +1,292 @@
+史上最全的使用 gnome-screenshot 获取屏幕快照指南
+============================================================
+
+在应用市场中有好几种屏幕截图工具,但其中大多数都是基于 GUI 的。如果你花时间在 Linux 命令行上工作,而且正在寻找一款优秀的、功能丰富的基于命令行的屏幕截图工具,你可能会想尝试 [gnome-screenshot][17]。在本教程中,我将使用易于理解的例子来解释这个实用程序。
+
+请注意,本教程中提到的所有例子已经在 Ubuntu 16.04 LTS 上测试过,测试所使用的 gnome-screenshot 版本是 3.18.0。
+
+### 关于 Gnome-screenshot
+
+Gnome-screenshot 是一款 GNOME 工具,顾名思义,它是一款用来对整个屏幕、某个特定的窗口或者用户所定义的一些其他区域进行捕获的工具。该工具提供了几个其他的功能,包括对所捕获的截图的边界进行美化的功能。
+
+### Gnome-screenshot 安装
+
+Ubuntu 系统上已经预安装了 gnome-screenshot 工具,但是如果你出于某些原因需要重新安装这款软件程序,你可以使用下面的命令来进行安装:
+
+```
+sudo apt-get install gnome-screenshot
+```
+
+一旦软件安装完成后,你可以使用下面的命令来启动它:
+
+```
+gnome-screenshot
+```
+
+### Gnome-screenshot 用法/特点
+
+在这部分,我们将讨论如何使用 gnome-screenshot ,以及它提供的所有功能。
+
+默认情况下,使用该工具且不带任何命令行选项时,就会抓取整个屏幕。
+
+[
+ ![Starting Gnome Screenshot](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/gnome-default.png)
+][18]
+
+#### 捕获当前活动窗口
+
+如果你需要的话,你可以使用 `-w` 选项限制为只对当前活动窗口截图。
+
+```
+gnome-screenshot -w
+```
+
+[
+ ![Capturing current active window](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/activewindow.png)
+][19]
+
+#### 窗口边框
+
+默认情况下,这个程序会将它所捕获的窗口的边框包含在内,不过也有一个明确的命令行选项 `-b` 可以启用此功能(以防你想在某处明确使用它)。以下是这个选项的用法:
+
+```
+gnome-screenshot -wb
+```
+
+当然,你需要同时使用 `-w` 选项和 `-b` 选项,以便捕获的是当前活动的窗口(否则,`-b` 将没有作用)。
+
+更重要的是,如果你需要的话,你也可以移除窗口的边框。可以使用 `-B` 选项来完成。下面是你可以如何使用这个选项的一个例子:
+
+```
+gnome-screenshot -wB
+```
+
+下面是例子的截图:
+
+[
+ ![Window border](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/removeborder.png)
+][20]
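+
+作为补充(以下组合只是一个假设性的示意,文件名 window.png 为任取),这些边框选项也可以和本文后面介绍的延时、保存等选项组合使用:
+
+```
+# 3 秒延迟后截取当前活动窗口、去掉边框,并保存为 window.png
+gnome-screenshot -w -B --delay=3 --file=window.png
+```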
+
+#### 添加效果到窗口边框
+
+在 gnome-screenshot 工具的帮助下,您还可以向窗口边框添加各种效果。这可以使用 `--border-effect` 选项来做到。
+
+你可以添加这款程序所提供的任何效果,比如 `shadow` 效果(在窗口添加阴影)、`border` 效果(在屏幕截图周围添加矩形区域)和 `vintage` 效果(使截图略微淡化,着色并在其周围添加矩形区域)。
+
+```
+gnome-screenshot --border-effect=[EFFECT]
+```
+
+例如,运行下面的命令添加 shadow 效果:
+
+```
+gnome-screenshot --border-effect=shadow
+```
+
+以下是 shadow 效果的示例快照:
+
+[
+ ![Adding effects to window borders](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/shadoweffect-new.png)
+][21]
+
+请注意,上述屏幕截图主要集中在终端的一个角落,以便您清楚地看到阴影效果。
+
+#### 对特定区域的截图
+
+如果你需要,你还可以使用 gnome-screenshot 程序对你电脑屏幕的某一特定区域进行截图。这可以通过使用 `-a` 选项来完成。
+
+```
+gnome-screenshot -a
+```
+
+当上面的命令被运行后,你的鼠标指针将会变成 '+' 这个符号。在这种模式下,你可以按住鼠标左键移动鼠标来对某个特定区域截图。
+
+这是一个示例截图,裁剪了我的终端窗口的一小部分。
+
+[
+ ![example screenshot wherein I cropped a small area of my terminal window](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/area.png)
+][22]
+
+#### 在截图中包含鼠标指针
+
+默认情况下,每当你使用这个工具截图的时候,截的图中并不会包含鼠标指针。然而,这个程序是可以让你把指针包括进去的,你可以使用 `-p` 命令行选项做到。
+
+```
+gnome-screenshot -p
+```
+
+这是一个示例截图:
+
+[
+ ![Include mouse pointer in snapshot](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/includecursor.png)
+][23]
+
+#### 延时截图
+
+截图时你还可以引入时间延迟。要做到这一点,你需要给 `--delay` 选项赋予一个以秒为单位的值。
+
+```
+gnome-screenshot --delay=[SECONDS]
+```
+
+例如:
+
+```
+gnome-screenshot --delay=5
+```
+
+示例截图如下:
+
+[
+ ![Delay in taking screenshots](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/delay.png)
+][24]
+
+#### 以交互模式运行这个工具
+
+这个工具还允许你使用一个单独的 `-i` 选项来访问其所有功能。使用这个命令行选项,用户可以在运行这个命令时使用这个工具的一个或多个功能。
+
+```
+gnome-screenshot -i
+```
+
+示例截图如下:
+
+[
+ ![Run the tool in interactive mode](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/interactive.png)
+][25]
+
+你可以从上面的截图中看到,`-i` 选项提供了对很多功能的访问,比如截取整个屏幕、截取当前窗口、选择一个区域进行截图、延时选项和特效选项等都在交互模式里。
+
+#### 直接保存你的截图
+
+如果你需要的话,你可以直接将你截的图片从终端中保存到你当前的工作目录,这意味着,在这个程序运行后,它并不要求你为截取的图片输入一个文件名。这个功能可以使用 `--file` 命令行选项来获取,很明显,需要给它传递一个文件名。
+
+```
+gnome-screenshot --file=[FILENAME]
+```
+
+例如:
+
+```
+gnome-screenshot --file=ashish
+```
+
+示例截图如下:
+
+[
+ ![Directly save your screenshot](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/ashish.png)
+][26]
+
+#### 复制到剪切板
+
+gnome-screenshot 也允许你把你截的图复制到剪切板。这可以通过使用 `-c` 命令行选项做到。
+
+```
+gnome-screenshot -c
+```
+
+[
+ ![Copy to clipboard](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/copy.png)
+][27]
+
+在这个模式下,例如,你可以把复制的图直接粘贴到你的任何一个图片编辑器中(比如 GIMP)。
+
+#### 多显示器情形下的截图
+
+如果有多个显示器连接到你的系统,而你想对某一个进行截图,那么你可以使用 `--display` 命令行选项。需要给这个选项一个显示器设备 ID 的值(需要被截图的显示器的 ID)。
+
+```
+gnome-screenshot --display=[DISPLAY]
+```
+
+例如:
+
+```
+gnome-screenshot --display=VGA-0
+```
+
+在上面的例子中,VGA-0 是我正试图对其进行截图的显示器的 ID。为了找到你想对其进行截图的显示器的 ID,你可以使用下面的命令:
+
+```
+xrandr --query
+```
+
+为了让你更清楚一些,下面是这个命令在我的例子中产生的输出:
+
+```
+$ xrandr --query
+Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
+VGA-0 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 194mm
+1366x768 59.8*+
+1024x768 75.1 75.0 60.0
+832x624 74.6
+800x600 75.0 60.3 56.2
+640x480 75.0 60.0
+720x400 70.1
+HDMI-0 disconnected (normal left inverted right x axis y axis)
+```
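+
+顺带一提(这只是一个补充的小技巧,并非原文内容),如果只想列出已连接的显示器 ID,可以用 `awk` 过滤上面的输出:
+
+```
+# 只打印状态为 connected 的显示器名称(例如 VGA-0)
+xrandr --query | awk '/ connected/ {print $1}'
+```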
+
+#### 自动化屏幕截图过程
+
+正如我们之前讨论的,`-a` 命令行选项可以帮助我们对屏幕的某一个特定区域进行截图。然而,我们需要用鼠标手动选取这个区域。如果你想的话,你可以使用 gnome-screenshot 来自动化完成这个过程,但是在那种情形下,你将需要使用一个名为 `xdotool` 的工具,它可以模仿敲打键盘甚至是点击鼠标这些事件。
+
+例如:
+
+```
+(gnome-screenshot -a &); sleep 0.1 && xdotool mousemove 100 100 mousedown 1 mousemove 400 400 mouseup 1
+```
+
+`mousemove` 子命令自动把鼠标指针定位到明确的 `X` 坐标和 `Y` 坐标的位置(上面例子中是 100 和 100)。`mousedown` 子命令触发一个与点击执行相同操作的事件(因为我们想左击,所以我们使用了参数 1),然而 `mouseup` 子命令触发一个执行用户释放鼠标按钮的任务的事件。
+
+所以总而言之,上面所示的 `xdotool` 命令做了一项本来需要使用鼠标手动执行对同一区域进行截图的工作。特别说明,该命令把鼠标指针定位到屏幕上坐标为 `100,100` 的位置并选择封闭区域,直到指针到达屏幕上坐标为 `400,400` 的位置。所选择的区域随之被 gnome-screenshot 捕获。
+
+这是上述命令的截图:
+
+[
+ ![screenshot of the above command](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/automatedcommand.png)
+][28]
+
+这是输出的结果:
+
+[
+ ![Screenshot output](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/outputxdo.png)
+][29]
+
+想获取更多关于 `xdotool` 的信息,[请到这来][30]。
+
+#### 获取帮助
+
+如果你有疑问或者你正面临一个与该命令行的其中某个选项有关的问题,那么你可以使用 `--help`、`-?` 或者 `-h` 选项来获取相关信息。
+
+```
+gnome-screenshot -h
+```
+
+### 总结
+
+我推荐你至少使用一次这个程序,因为它不仅对初学者来说比较简单,而且还提供功能丰富的高级用法体验。动起手来,尝试一下吧。
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/
+
+作者:[Himanshu Arora][a]
+译者:[zhousiyu325](https://github.com/zhousiyu325)
+校对:[jasminepeng](https://github.com/jasminepeng)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/
+[17]:https://linux.die.net/man/1/gnome-screenshot
+[18]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/gnome-default.png
+[19]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/activewindow.png
+[20]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/removeborder.png
+[21]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/shadoweffect-new.png
+[22]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/area.png
+[23]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/includecursor.png
+[24]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/delay.png
+[25]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/interactive.png
+[26]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/ashish.png
+[27]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/copy.png
+[28]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/automatedcommand.png
+[29]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/outputxdo.png
+[30]:http://manpages.ubuntu.com/manpages/trusty/man1/xdotool.1.html
diff --git a/published/20170203 bmon – A Powerful Network Bandwidth Monitoring and Debugging Tool for Linux.md b/published/201704/20170203 bmon – A Powerful Network Bandwidth Monitoring and Debugging Tool for Linux.md
similarity index 100%
rename from published/20170203 bmon – A Powerful Network Bandwidth Monitoring and Debugging Tool for Linux.md
rename to published/201704/20170203 bmon – A Powerful Network Bandwidth Monitoring and Debugging Tool for Linux.md
diff --git a/published/20170205 Hosting Django With Nginx and Gunicorn on Linux.md b/published/201704/20170205 Hosting Django With Nginx and Gunicorn on Linux.md
similarity index 100%
rename from published/20170205 Hosting Django With Nginx and Gunicorn on Linux.md
rename to published/201704/20170205 Hosting Django With Nginx and Gunicorn on Linux.md
diff --git a/published/20170206 OpenSUSE Leap 42.2 Gnome - Better but not really.md b/published/201704/20170206 OpenSUSE Leap 42.2 Gnome - Better but not really.md
similarity index 100%
rename from published/20170206 OpenSUSE Leap 42.2 Gnome - Better but not really.md
rename to published/201704/20170206 OpenSUSE Leap 42.2 Gnome - Better but not really.md
diff --git a/sources/tech/20170206 OpenVAS - Vulnerability Assessment install on Kali Linux.md b/published/201704/20170206 OpenVAS - Vulnerability Assessment install on Kali Linux.md
similarity index 54%
rename from sources/tech/20170206 OpenVAS - Vulnerability Assessment install on Kali Linux.md
rename to published/201704/20170206 OpenVAS - Vulnerability Assessment install on Kali Linux.md
index a43ad62398..8529528f31 100644
--- a/sources/tech/20170206 OpenVAS - Vulnerability Assessment install on Kali Linux.md
+++ b/published/201704/20170206 OpenVAS - Vulnerability Assessment install on Kali Linux.md
@@ -1,118 +1,119 @@
-OpenVAS - Vulnerability Assessment install on Kali Linux
+OpenVAS:Kali Linux 中的漏洞评估工具
============================================================
-### On this page
+本教程将介绍在 Kali Linux 中安装 OpenVAS 8.0 的过程。 OpenVAS 是一个可以自动执行网络安全审核和漏洞评估的开源[漏洞评估][6]程序。请注意,漏洞评估(Vulnerability Assessment,也称为 VA)并不是渗透测试(penetration test),渗透测试会更进一步,验证所发现的漏洞是否真实存在,请参阅[什么是渗透测试][7]来对渗透测试的构成以及不同类型的安全测试有一个了解。
-1. [What is Kali Linux?][1]
-2. [Updating Kali Linux][2]
-3. [Installing OpenVAS 8][3]
-4. [Start OpenVAS on Kali][4]
+### 什么是 Kali Linux?
-This tutorial documents the process of installing OpenVAS 8.0 on Kali Linux rolling. OpenVAS is open source [vulnerability assessment][6] application that automates the process of performing network security audits and vulnerability assessments. Note, a vulnerability assessment also known as VA is not a penetration test, a penetration test goes a step further and validates the existence of a discovered vulnerability, see [what is penetration testing][7] for an overview of what pen testing consists of and the different types of security testing.
+Kali Linux 是一个 Linux 渗透测试发行版。它基于 Debian,并且预安装了许多常用的渗透测试工具,例如 Metasploit Framework(MSF),以及其他通常在安全评估期间由渗透测试人员使用的命令行工具。
-### What is Kali Linux?
+在大多数使用情况下,Kali 运行在虚拟机中,你可以在这里获取最新的 VMWare 或 Vbox 镜像:[https://www.offensive-security.com/kali-linux-vmware-virtualbox-image-download/][8] 。
-Kali Linux is a Linux penetration testing distribution. It's Debian based and comes pre-installed with many commonly used penetration testing tools such as Metasploit Framework and other command line tools typically used by penetration testers during a security assessment.
+除非你有特殊的原因想要一个更小的虚拟机占用空间,否则请下载完整版本而不是 Kali light。 下载完成后,你需要解压文件并打开 vbox 或者 VMWare .vmx 文件,虚拟机启动后,默认帐号是 `root`/`toor`。请将 root 密码更改为安全的密码。
-For most use cases Kali runs in a VM, you can grab the latest VMWare or Vbox image of Kali from here: [https://www.offensive-security.com/kali-linux-vmware-virtualbox-image-download/][8]
+或者,你可以下载 ISO 版本,并在裸机上执行 Kali 的安装。
-Download the full version not Kali light, unless you have a specific reason for wanting a smaller virtual machine footprint. After the download finishes you will need to extract the contents and open the vbox or VMWare .vmx file, when the machine boots the default credentials are root / toor. Change the root password to a secure password.
+### 升级 Kali Linux
-Alternatively, you can download the ISO version and perform an installation of Kali on the bare metal.
+完成安装后,为 Kali Linux 执行一次完整的升级。
-### Updating Kali Linux
-
-After installation, perform a full update of Kali Linux.
-
-Updating Kali:
+升级 Kali:
+```
apt-get update && apt-get dist-upgrade -y
+```
[
![Updating Kali Linux](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/kali-apt-get-update-dist-upgrade.png)
][9]
-The update process might take some time to complete. Kali is now a rolling release meaning you can update to the current version from any version of Kali rolling. However, there are release numbers but these are point in time versions of Kali rolling for VMWare snapshots. You can update to the current stable release from any of the VMWare images.
+更新过程可能需要一些时间才能完成。Kali 目前是滚动更新,这意味着你可以从任何版本的 Kali 滚动更新到当前版本。然而它仍有发布号,但这些是针对特定 Kali 时间点版本的 VMWare 快照。你可以从任何 VMWare 镜像更新到当前的稳定版本。
-After updating perform a reboot.
+更新完成后重新启动。
-### Installing OpenVAS 8
+### 安装 OpenVAS 8
[
![Installing OpenVAS 8](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/kali-install-openvas-vulnerability-assessment.png)
][10]
+```
apt-get install openvas
openvas-setup
+```
-During installation you'll be prompted about redis, select the default option to run as a UNIX socket.
+在安装中,你会被询问关于 redis 的问题,选择默认选项来以 UNIX 套接字运行。
[
![Configure OpenVAS Scanner](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/openvas-vulnerability-scanner-enable-redis.png)
][11]
-Even on a fast connection openvas-setup takes a long time to download and update all the required CVE, SCAP definitions.
+即使网络连接速度很快,openvas-setup 仍需要很长时间来下载和更新所有所需的 CVE、SCAP 定义。
[
![Update all the required CVE, SCAP definitions](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/openvas-vulnerability-scanner-install-2.png)
][12]
-Pay attention to the command output during openvas-setup, the password is generated during installation and printed to console near the end of the setup.
+请注意 openvas-setup 的命令输出,密码会在安装过程中生成,并在安装的最后在控制台中打印出来。
[
![Command output during install](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/openvas-vulnerability-scanner-install-complete.png)
][13]
-Verify openvas is running:
+验证 openvas 正在运行:
+```
netstat -tulpn
+```
[
![Check OpenVAS Status](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/openvas-running-netstat.png)
][14]
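+
+补充一点(这并非原文中的步骤):如果你安装的 openvas 软件包提供了 `openvas-check-setup` 脚本,也可以用它来检查安装是否完整,它会指出缺失的组件和相应的修复建议:
+
+```
+openvas-check-setup
+```
+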
-### Start OpenVAS on Kali
+### 在 Kali 中运行 OpenVAS
-To start the OpenVAS service on Kali run:
+要在 Kali 中启动 OpenVAS:
+```
openvas-start
+```
-After installation, you should be able to access the OpenVAS web application at **https://127.0.0.1:9392**
+安装后,你应该可以通过 `https://127.0.0.1:9392` 访问 OpenVAS 的 web 程序了。
-**[
+[
![OpenVAS started](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/openvas-self-signed-certificate.png)
-][5]**
+][5]
-Accept the self-signed certificate and login to the application using the credentials admin and the password displayed during openvas-setup.
+接受自签名证书,并使用用户名 `admin` 和 openvas-setup 过程中显示的密码登录程序。
[
![Accept the self-signed SSL cert](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/accept-openvas-self-signed-certificate.png)
][15]
-After accepting the self-signed certificate, you should be presented with the login screen:
+接受自签名证书后,你应该可以看到登录界面了。
[
![OpenVAS Login](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/openvas-login-screen.png)
][16]
-After logging in you should be presented with the following screen:
+登录后,你应该可以看到下面的页面:
[
![OpenVAS Dashboard](https://www.howtoforge.com/images/openvas_vulnerability_assessment_install_on_kali_linux/openvas-menu.png)
][17]
-From this point you should be able to configure your own vulnerability scans using the wizard.
+从此,你应该可以使用向导配置自己的漏洞扫描了。
-It's recommended to read the documentation. Be aware of what a vulnerability assessment conductions (depending on configuration OpenVAS could attempt exploitation) and the traffic it will generate on a network as well as the DOS effect it can have on services / servers and hosts / devices on a network.
+我建议阅读文档。请了解漏洞评估会执行哪些操作(取决于配置,OpenVAS 可能会尝试进行漏洞利用)、它会在网络上产生的流量,以及它可能对网络上的服务/服务器和主机/设备产生的 DOS 影响。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/openvas-vulnerability-assessment-install-on-kali-linux/
-作者:[KJS ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
+作者:[KJS][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/201704/20170206 Try Raspberry Pis PIXEL OS on your PC.md b/published/201704/20170206 Try Raspberry Pis PIXEL OS on your PC.md
new file mode 100644
index 0000000000..95dc2e82c2
--- /dev/null
+++ b/published/201704/20170206 Try Raspberry Pis PIXEL OS on your PC.md
@@ -0,0 +1,114 @@
+在 PC 上尝试树莓派的 PIXEL OS
+============================================================
+
+ ![Try Raspberry Pi's PIXEL OS on your PC](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/virtualbox_pixel_raspberrypi.jpg?itok=bEdS8qpi "Try Raspberry Pi's PIXEL OS on your PC")
+
+图片版权:树莓派基金会, CC BY-SA
+
+在过去四年中,树莓派基金会非常努力地针对树莓派的硬件优化了 Debian 的移植版 Raspbian,包括创建新的教育软件、编程工具和更美观的桌面。
+
+在(去年) 9 月份,我们发布了一个更新,介绍了树莓派新的桌面环境 PIXEL(Pi Improved Xwindows Environment,轻量级)。在圣诞节之前,我们[发布](https://linux.cn/article-8064-1.html)了一个在 x86 PC 上运行的操作系统版本,所以现在可以将它安装在 PC、Mac 或笔记本电脑上。
+
+![Installing PIXEL](https://opensource.com/sites/default/files/pixel_0.jpg "Installing PIXEL")
+
+当然,像许多支持良好的 Linux 发行版一样,操作系统在旧的硬件上也能正常运行。 Raspbian 是让你几年前就丢弃的旧式 Windows 机器焕发新生的好方法。
+
+[PIXEL ISO][13] 可从树莓派网站上下载,在 “[MagPi][14]” 杂志封面上也有赠送可启动的 Live DVD 。
+
+ ![Welcome to PIXEL](https://opensource.com/sites/default/files/welcome-to-pixel.jpg "Welcome to PIXEL")
+
+为了消除想要学习计算机的人们的入门障碍,我们发布了树莓派的个人电脑操作系统。它比购买一块树莓派更便宜,因为它是免费的,你可以在现有的计算机上使用它。PIXEL 是我们一直想要的 Linux 桌面,我们希望它可供所有人使用。
+
+### 由 Debian 提供支持
+
+如果不是构建在 Debian 之上,Raspbian 和 x86 PIXEL 发行版都不会存在。Debian 拥有庞大的可以从一个 apt 仓库中获得的免费开源软件、程序、游戏和其他工具。在树莓派中,你只能运行为 [ARM][15] 芯片编译的软件包。然而,在 PC 镜像中,你可以在机器上运行的软件包的范围更广,因为 PC 中的 Intel 芯片得到了更广泛的支持。
+
+ ![Debian Advanced Packaging Tool (APT) repository](https://opensource.com/sites/default/files/apt.png "Debian Advanced Packaging Tool (APT) repository")
+
+### PIXEL 包含什么
+
+带有 PIXEL 的 Raspbian 和带有 PIXEL 的 Debian 都捆绑了大量的软件。Raspbian 自带:
+
+* Python、Java、Scratch、Sonic Pi、Mathematica*、Node-RED 和 Sense HAT 仿真器的编程环境
+* LibreOffice 办公套件
+* Chromium(包含 Flash)和 Epiphany 网络浏览器
+* Minecraft:树莓派版(包括 Python API)*
+* 各种工具和实用程序
+
+*由于许可证限制,本列表中唯一没有包含在 x86 版本中的程序是 Mathematica 和 Minecraft。
+
+ ![PIXEL menu](https://opensource.com/sites/default/files/pixel-menu.png "PIXEL menu")
+
+### 创建一个 PIXEL Live 盘
+
+你可以下载 PIXEL ISO 并将其写入空白 DVD 或 USB 记忆棒中。 然后,你就可以从盘中启动你的电脑,这样你可以立刻看到 PIXEL 桌面。你可以浏览网页、打开编程环境或使用办公套件,而无需在计算机上安装任何内容。完成后,只需拿出 DVD 或 USB 驱动器,关闭计算机,再次重新启动计算机时,将会像以前一样重新启动到你平常的操作系统。
+
+### 在虚拟机中运行 PIXEL
+
+另外一种尝试 PIXEL 的方法是在像 VirtualBox 这样的虚拟机中安装它。
+
+ ![PIXEL Virtualbox](https://opensource.com/sites/default/files/pixel-virtualbox.png "PIXEL Virtualbox")
+
+这允许你体验镜像而不用安装它,也可以在主操作系统里面的窗口中运行它,并访问 PIXEL 中的软件和工具。这也意味着你的会话会一直存在,而不是每次重新启动时从头开始,就像使用 Live 盘一样。
+
+### 在 PC 中安装 PIXEL
+
+如果你真的准备开始,你可以擦除旧的操作系统并将 PIXEL 安装在硬盘上。如果你想使用旧的闲置的笔记本电脑,这可能是个好主意。
+
+### 用于教育的 PIXEL
+
+许多学校在所有电脑上使用 Windows,并且对它们可以安装的软件进行严格的控制。这使得教师难以使用必要的软件工具和 IDE(集成开发环境)来教授编程技能。即使在线编程计划(如 Scratch 2)也可能被过于谨慎的网络过滤器阻止。在某些情况下,安装像 Python 这样的东西根本是不可能的。树莓派硬件通过提供包含教育软件的 SD 卡引导的小型廉价计算机来解决这个问题,学生可以连接到现有 PC 的显示器、鼠标和键盘上。
+
+然而,PIXEL Live 光盘允许教师引导到装有能立即使用的编程语言和工具的系统中,所有这些都不需要安装权限。在课程结束时,他们可以安全关闭,使计算机恢复原状。这也是 Code Clubs、CoderDojos、青年俱乐部、Raspberry Jams 等等的一个方便的解决方案。
+
+### 远程 GPIO
+
+树莓派与传统台式 PC 的区别之一是 GPIO(通用输入/输出)引脚的存在,它允许你将现实世界中的电子元件和扩展板连接到设备上,这将打开一个新的世界,如业余项目、家庭自动化、联网设备和物联网。
+
+[GPIO Zero][16] Python 库的一个很棒的功能是通过在 PC 上写入一些简单的代码,然后在网络上控制树莓派的 GPIO 引脚。
+
+远程 GPIO 可以从一台树莓派连接到另一台树莓派,或者从运行任何操作系统的 PC 连接到树莓派上;而使用 PIXEL x86 的话,所有需要的软件都是开箱即用的。参见 Josh 的[博文][17],并参考我的 [gist][18] 了解更多信息。
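+
+下面是一个简化的示意(并非原文内容;IP 地址 192.168.1.3 与脚本名 blink.py 均为假设):先在树莓派上启动 pigpio 守护进程,然后在 PC 上通过 GPIO Zero 的环境变量把引脚操作指向那台树莓派:
+
+```
+# 在树莓派上(假设已安装 pigpio):启动守护进程
+sudo pigpiod
+
+# 在 PIXEL x86 的 PC 上:让 GPIO Zero 使用远程的 pigpio 引脚工厂
+GPIOZERO_PIN_FACTORY=pigpio PIGPIO_ADDR=192.168.1.3 python3 blink.py
+```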
+
+### 更多指南
+
+[MagPi 的第 53 期][19]提供了一些试用和安装 PIXEL 的指南,包括使用带持久驱动的 Live 光盘来维护你的文件和应用程序。你可以购买一份,或免费下载 PDF 来了解更多。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Ben Nuttall - Ben Nuttall 是一名树莓派社区管理员。除了为树莓派基金会工作外,他还对自由软件、数学、皮划艇、GitHub、Adventure Time 和 Futurama 等感兴趣。在 Twitter @ben_nuttall 上关注 Ben。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc
+
+作者:[Ben Nuttall][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/bennuttall
+[1]:https://twitter.com/ben_nuttall
+[2]:https://twitter.com/intent/tweet?in_reply_to=811511740907261952
+[3]:https://twitter.com/intent/retweet?tweet_id=811511740907261952
+[4]:https://twitter.com/intent/like?tweet_id=811511740907261952
+[5]:https://twitter.com/ben_nuttall
+[6]:https://twitter.com/ben_nuttall/status/811511740907261952
+[7]:https://twitter.com/search?q=place%3A3bc1b6cfd27ef7f6
+[8]:https://twitter.com/ben_nuttall/status/811511740907261952/photo/1
+[9]:https://twitter.com/ben_nuttall/status/811511740907261952/photo/1
+[10]:https://twitter.com/ben_nuttall/status/811511740907261952/photo/1
+[11]:https://twitter.com/ben_nuttall/status/811511740907261952/photo/1
+[12]:https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc?rate=iqVrGV3EhwRuqh68sf6Zye6Y7VSpXRCUQoZV3sg-QJM
+[13]:http://downloads.raspberrypi.org/pixel_x86/images/pixel_x86-2016-12-13/
+[14]:https://www.raspberrypi.org/magpi/issues/53/
+[15]:https://en.wikipedia.org/wiki/ARM_Holdings
+[16]:http://gpiozero.readthedocs.io/
+[17]:http://www.allaboutcode.co.uk/single-post/2016/12/21/GPIOZero-Remote-GPIO-with-PIXEL-x86
+[18]:https://gist.github.com/bennuttall/572789b0aa5fc2e7c05c7ada1bdc813e
+[19]:https://www.raspberrypi.org/magpi/issues/53/
+[20]:https://opensource.com/user/26767/feed
+[21]:https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc#comments
+[22]:https://opensource.com/users/bennuttall
diff --git a/published/20170208 4 open source tools for conducting online surveys.md b/published/201704/20170208 4 open source tools for conducting online surveys.md
similarity index 100%
rename from published/20170208 4 open source tools for conducting online surveys.md
rename to published/201704/20170208 4 open source tools for conducting online surveys.md
diff --git a/published/201704/20170209 How to protect your server with badIPs.com and report IPs with Fail2ban on Debian.md b/published/201704/20170209 How to protect your server with badIPs.com and report IPs with Fail2ban on Debian.md
new file mode 100644
index 0000000000..a2342cc69a
--- /dev/null
+++ b/published/201704/20170209 How to protect your server with badIPs.com and report IPs with Fail2ban on Debian.md
@@ -0,0 +1,233 @@
+使用 badIPs.com 保护你的服务器,并通过 Fail2ban 报告恶意 IP
+============================================================
+
+这篇指南向你介绍使用 badips 滥用追踪器(abuse tracker) 和 Fail2ban 保护你的服务器或计算机的步骤。我已经在 Debian 8 Jessie 和 Debian 7 Wheezy 系统上进行了测试。
+
+**什么是 badIPs?**
+
+BadIps 是通过 [fail2ban][8] 报告为不良 IP 的列表。
+
+这个指南包括两个部分,第一部分介绍列表的使用,第二部分介绍数据的上报。
+
+### 使用 badIPs 列表
+
+#### 定义安全等级和类别
+
+你可以通过使用 REST API 获取 IP 地址列表。
+
+* 当你使用 GET 请求获取 URL:[https://www.badips.com/get/categories][9] 后,你就可以看到服务中现有的所有不同类别。
+* 第二步,决定适合你的等级。参考 badips 网站上的说明应该会有所帮助(我个人使用 `scope = 3`):
+* 如果你想要编译一个统计信息模块或者将数据用于实验目的,那么你应该用等级 0 开始。
+* 如果你想用防火墙保护你的服务器或者网站,使用等级 2。或许可以再结合你自己的报告结果,即使它们的等级没有超过 0 或 1。
+* 如果你想保护一个网络商店、或高流量、赚钱的电子商务服务器,我推荐你使用值 3 或 4。当然还是要和你的结果相结合。
+* 如果你是偏执狂,那就使用 5。
+
+现在你已经有了两个变量,通过把它们两者连接起来获取你的链接。
+
+```
+http://www.badips.com/get/list/{{SERVICE}}/{{LEVEL}}
+```
+
+注意:像我一样,你可以获取所有服务。在这种情况下把服务的名称改为 `any`。
+
+最终的 URL 就是:
+
+```
+https://www.badips.com/get/list/any/3
+```
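+
+在把列表交给防火墙之前,你可以先手动取回它看看内容(以下命令只是一个示意,文件名为任取):
+
+```
+wget -qO badips.txt "https://www.badips.com/get/list/any/3"
+wc -l badips.txt      # 列表中 IP 的数量
+head -n 5 badips.txt  # 预览前 5 个地址
+```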
+
+### 创建脚本
+
+所有都完成了之后,我们就会创建一个简单的脚本。
+
+1、 把你的列表放到一个临时文件。
+
+2、 在 iptables 中创建一个链(chain)(只需要创建一次)。(LCTT 译注:iptables 可能包括多个表(tables),表可能包括多个链(chains),链可能包括多个规则(rules))
+
+3、 把所有链接到该链的数据(旧条目)刷掉。
+
+4、 把每个 IP 链接到这个新的链。
+
+5、 完成后,阻塞所有链接到该链的 INPUT / OUTPUT /FORWARD 请求。
+
+6、 删除我们的临时文件。
+
+为此,我们创建脚本:
+
+```
+cd /home/<username>/
+vi myBlacklist.sh
+```
+
+把以下内容输入到文件。
+
+```
+#!/bin/sh
+### based on this version http://www.timokorthals.de/?p=334
+### adapted by Stéphane T.
+
+_ipt=/sbin/iptables ### iptables 路径(应该是这个)
+_input=badips.db ### 数据库的名称(会用这个名称下载)
+_pub_if=eth0 ### 连接到互联网的设备(执行 $ifconfig 获取)
+_droplist=droplist ### iptables 中链的名称(如果你已经有这么一个名称的链,你就换另外一个)
+_level=3 ### Blog(LCTT 译注:Bad log)等级:不怎么坏(0)、确认坏(3)、相当坏(5)(从 www.badips.com 获取详情)
+_service=any ### 记录日志的服务(从 www.badips.com 获取详情)
+
+# 获取不良 IPs
+wget -qO- http://www.badips.com/get/list/${_service}/$_level > $_input || { echo "$0: Unable to download ip list."; exit 1; }
+
+### 设置我们的黑名单 ###
+### 首先清除该链
+$_ipt --flush $_droplist
+
+### 创建新的链
+### 首次运行时取消下面一行的注释
+# $_ipt -N $_droplist
+
+### 过滤掉注释和空行
+### 保存每个 ip 到 $ip
+for ip in `cat $_input`
+do
+ ### 添加到 $_droplist
+ $_ipt -A $_droplist -i ${_pub_if} -s $ip -j LOG --log-prefix "Drop Bad IP List "
+ $_ipt -A $_droplist -i ${_pub_if} -s $ip -j DROP
+done
+
+### 最后,插入或者追加到我们的黑名单列表
+$_ipt -I INPUT -j $_droplist
+$_ipt -I OUTPUT -j $_droplist
+$_ipt -I FORWARD -j $_droplist
+
+### 删除你的临时文件
+rm $_input
+exit 0
+```
+
+完成这些后,你应该创建一个定时任务定期更新我们的黑名单。
+
+为此,我使用 crontab 在每天晚上 11:30(在我的延迟备份之前) 运行脚本。
+
+```
+crontab -e
+```
+```
+30 23 * * * /home/<username>/myBlacklist.sh #Block BAD IPS
+```
+
+别忘了更改脚本的权限:
+
+````
+chmod + x myBlacklist.sh
+```
+
+现在终于完成了,你的服务器/计算机应该更安全了。
+
+你也可以像下面这样手动运行脚本:
+
+```
+cd /home/<username>/
+./myBlacklist.sh
+```
+
+它可能要花费一些时间,因此期间别中断脚本。事实上,耗时取决于该脚本的最后一行。
+
+### 使用 Fail2ban 向 badIPs 报告 IP 地址
+
+在本篇指南的第二部分,我会向你展示如何通过使用 Fail2ban 向 badips.com 网站报告不良 IP 地址。
+
+#### Fail2ban >= 0.8.12
+
+通过 Fail2ban 完成报告。取决于你 Fail2ban 的版本,你要使用本章的第一或第二节。
+
+如果你的 fail2ban 版本是 0.8.12 或更新版本,可以用下面的命令确认:
+
+```
+fail2ban-server --version
+```
+
+在每个你要报告的类别中,添加一个 action。
+
+```
+[ssh]
+enabled = true
+action = iptables-multiport
+ badips[category=ssh]
+port = ssh
+filter = sshd
+logpath = /var/log/auth.log
+maxretry= 6
+```
+
+正如你看到的,类别是 SSH,从 ([https://www.badips.com/get/categories][11]) 查找正确类别。
+
+#### Fail2ban < 0.8.12
+
+如果版本是 0.8.12 之前,你需要新建一个 action。你可以从 [https://www.badips.com/asset/fail2ban/badips.conf][12] 下载。
+
+```
+wget https://www.badips.com/asset/fail2ban/badips.conf -O /etc/fail2ban/action.d/badips.conf
+```
+
+在上面的 badips.conf 中,你可以像前面那样激活每个类别,也可以全局启用它:
+
+```
+cd /etc/fail2ban/
+vi jail.conf
+```
+
+```
+[DEFAULT]
+...
+
+banaction = iptables-multiport
+ badips
+```
+
+现在重启 fail2ban - 从现在开始它就应该开始报告了。
+
+```
+service fail2ban restart
+```
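+
+重启后,你可以用 `fail2ban-client` 确认各个监狱(jail)是否在正常工作(以下输出会因你的配置而异):
+
+```
+fail2ban-client status        # 列出所有启用的 jail
+fail2ban-client status ssh    # 查看 ssh jail 的详细状态
+```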
+
+### 你的 IP 报告统计信息
+
+最后一步并不是必需的,其作用是创建一个密钥。如果你想查看你所报告的数据,这一步就很有帮助。
+
+复制/粘贴下面的命令,你的控制台中就会出现一个 JSON 响应。
+
+```
+wget https://www.badips.com/get/key -qO -
+
+{
+ "err":"",
+ "suc":"new key 5f72253b673eb49fc64dd34439531b5cca05327f has been set.",
+ "key":"5f72253b673eb49fc64dd34439531b5cca05327f"
+}
+```
+
+到 [badips][13] 网站,输入你的 “key” 并点击 “statistics”。
+
+现在你就可以看到不同类别的统计信息。
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/
+
+作者:[Stephane T][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/
+[1]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#define-your-security-level-and-category
+[2]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#failban-gt-
+[3]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#failban-ltnbsp
+[4]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#use-the-badips-list
+[5]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#lets-create-the-script
+[6]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#report-ip-addresses-to-badips-with-failban
+[7]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#statistics-of-your-ip-reporting
+[8]:http://www.fail2ban.org/
+[9]:https://www.badips.com/get/categories
+[10]:http://www.timokorthals.de/?p=334
+[11]:https://www.badips.com/get/categories
+[12]:https://www.badips.com/asset/fail2ban/badips.conf
+[13]:https://www.badips.com/
diff --git a/sources/tech/20170209 Windows Trojan hacks into embedded devices to install Mirai.md b/published/201704/20170209 Windows Trojan hacks into embedded devices to install Mirai.md
similarity index 55%
rename from sources/tech/20170209 Windows Trojan hacks into embedded devices to install Mirai.md
rename to published/201704/20170209 Windows Trojan hacks into embedded devices to install Mirai.md
index 7a3013609b..15ec4385c5 100644
--- a/sources/tech/20170209 Windows Trojan hacks into embedded devices to install Mirai.md
+++ b/published/201704/20170209 Windows Trojan hacks into embedded devices to install Mirai.md
@@ -1,42 +1,39 @@
-Windows Trojan hacks into embedded devices to install Mirai
+Windows 木马攻破嵌入式设备来安装 Mirai 恶意软件
============================================================
-> The Trojan tries to authenticate over different protocols with factory default credentials and, if successful, deploys the Mirai bot
-
+> 木马尝试使用出厂默认凭证对不同协议进行身份验证,如果成功则会部署 Mirai 僵尸程序。
![Windows Trojan uses brute-force attacks against IoT devices.](http://images.techhive.com/images/idgnsImport/2015/08/id-2956907-matrix-434036-100606417-large.jpg)
+*图片来源: Gerd Altmann / Pixabay*
-Attackers have started to use Windows and Android malware to hack into embedded devices, dispelling the widely held belief that if such devices are not directly exposed to the Internet they're less vulnerable.
+攻击者已经开始使用 Windows 和 Android 恶意软件入侵嵌入式设备,这打破了人们普遍持有的观念:只要设备不直接暴露在互联网上,它们就不那么容易受到攻击。
-Researchers from Russian antivirus vendor Doctor Web have recently [come across a Windows Trojan program][21] that was designed to gain access to embedded devices using brute-force methods and to install the Mirai malware on them.
+来自俄罗斯防病毒供应商 Doctor Web 的研究人员最近[遇到了一个 Windows 木马程序][21],它使用暴力方法访问嵌入式设备,并在其上安装 Mirai 恶意软件。
-Mirai is a malware program for Linux-based internet-of-things devices, such as routers, IP cameras, digital video recorders and others. It's used primarily to launch distributed denial-of-service (DDoS) attacks and spreads over Telnet by using factory device credentials.
+Mirai 是一种针对基于 Linux 的物联网设备的恶意程序,例如路由器、IP 摄像机、数字录像机等。它主要用于发动分布式拒绝服务(DDoS)攻击,并利用出厂默认凭据通过 Telnet 传播。
-The Mirai botnet has been used to launch some of the largest DDoS attacks over the past six months. After [its source code was leaked][22], the malware was used to infect more than 500,000 devices.
+Mirai 的僵尸网络在过去六个月里一直被用来发起最大型的 DDoS 攻击。[它的源代码泄漏][22]之后,恶意软件被用来感染超过 50 万台设备。
-Once installed on a Windows computer, the new Trojan discovered by Doctor Web downloads a configuration file from a command-and-control server. That file contains a range of IP addresses to attempt authentication over several ports including 22 (SSH) and 23 (Telnet).
+Doctor Web 发现,一旦在一台 Windows 上安装之后,该新木马会从命令控制服务器下载配置文件。该文件包含一系列 IP 地址,通过多个端口,包括 22(SSH)和 23(Telnet),尝试进行身份验证。
-#### [■ GET YOUR DAILY SECURITY NEWS: Sign up for CSO's security newsletters][11]
+如果身份验证成功,恶意软件将会根据受害系统的类型,执行配置文件中指定的某些命令。在通过 Telnet 访问的 Linux 系统中,木马会下载并执行一个二进制包,然后安装 Mirai 僵尸程序。
+如果受影响的设备在设计或配置上不能从互联网直接访问,许多物联网供应商就会淡化这类漏洞的严重性。这种思维方式假定局域网是可信任的、安全的环境。
-If authentication is successful, the malware executes certain commands specified in the configuration file, depending on the type of compromised system. In the case of Linux systems accessed via Telnet, the Trojan downloads and executes a binary package that then installs the Mirai bot.
+然而事实并非如此,其他威胁如跨站点请求伪造已经出现了多年。但 Doctor Web 发现的新木马似乎是第一个专门设计用于劫持嵌入式或物联网设备的 Windows 恶意软件。
-Many IoT vendors downplay the severity of vulnerabilities if the affected devices are not intended or configured for direct access from the Internet. This way of thinking assumes that LANs are trusted and secure environments.
+Doctor Web 发现的这个新木马被称为 [Trojan.Mirai.1][23],它表明攻击者还可以利用受感染的计算机来攻击那些不能从互联网直接访问的物联网设备。
-This was never really the case, with other threats like cross-site request forgery attacks going around for years. But the new Trojan that Doctor Web discovered appears to be the first Windows malware specifically designed to hijack embedded or IoT devices.
-
-This new Trojan found by Doctor Web, dubbed [Trojan.Mirai.1][23], shows that attackers can also use compromised computers to target IoT devices that are not directly accessible from the internet.
-
-Infected smartphones can be used in a similar way. Researchers from Kaspersky Lab have already [found an Android app][24] designed to perform brute-force password guessing attacks against routers over the local network.
+受感染的智能手机可以以类似的方式使用。卡巴斯基实验室的研究人员已经[发现了一个 Android 程序][24],通过本地网络对路由器执行暴力密码猜测攻击。
--------------------------------------------------------------------------------
via: http://www.csoonline.com/article/3168357/security/windows-trojan-hacks-into-embedded-devices-to-install-mirai.html
-作者:[ Lucian Constantin][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
+作者:[Lucian Constantin][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -51,7 +48,6 @@ via: http://www.csoonline.com/article/3168357/security/windows-trojan-hacks-into
[8]:http://www.csoonline.com/article/3144197/security/upgraded-mirai-botnet-disrupts-deutsche-telekom-by-infecting-routers.html
[9]:http://www.csoonline.com/video/73795/security-sessions-the-csos-role-in-active-shooter-planning
[10]:http://www.csoonline.com/video/73795/security-sessions-the-csos-role-in-active-shooter-planning
-[11]:http://csoonline.com/newsletters/signup.html#tk.cso-infsb
[12]:http://www.csoonline.com/author/Lucian-Constantin/
[13]:https://twitter.com/intent/tweet?url=http%3A%2F%2Fwww.csoonline.com%2Farticle%2F3168357%2Fsecurity%2Fwindows-trojan-hacks-into-embedded-devices-to-install-mirai.html&via=csoonline&text=Windows+Trojan+hacks+into+embedded+devices+to+install+Mirai
[14]:https://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.csoonline.com%2Farticle%2F3168357%2Fsecurity%2Fwindows-trojan-hacks-into-embedded-devices-to-install-mirai.html
diff --git a/published/201704/20170210 Use tmux for a more powerful terminal.md b/published/201704/20170210 Use tmux for a more powerful terminal.md
new file mode 100644
index 0000000000..f2856ed029
--- /dev/null
+++ b/published/201704/20170210 Use tmux for a more powerful terminal.md
@@ -0,0 +1,129 @@
+使用 tmux 打造更强大的终端
+============================
+
+ ![](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/tmux-945x400.jpg)
+
+一些 Fedora 用户把大部分甚至是所有时间花费在了[命令行][4]终端上。 终端可让您访问整个系统,以及数以千计的强大的实用程序。 但是,它默认情况下一次只显示一个命令行会话。 即使有一个大的终端窗口,整个窗口也只会显示一个会话。 这浪费了空间,特别是在大型显示器和高分辨率的笔记本电脑屏幕上。 但是,如果你可以将终端分成多个会话呢? 这正是 tmux 最方便的地方,或者说不可或缺的。
+
+### 安装并启动 tmux
+
+tmux 应用程序的名称来源于终端(terminal)复用器(muxer)或多路复用器(multiplexer)。 换句话说,它可以将您的单终端会话分成多个会话。 它管理窗口和窗格:
+
+- 窗口(window)是一个单一的视图 - 也就是终端中显示的各种东西。
+- 窗格(pane) 是该视图的一部分,通常是一个终端会话。
+
+开始前,请在系统上安装 `tmux` 应用程序。 你需要为您的用户帐户设置 `sudo` 权限(如果需要,请[查看本文][5]获取相关说明)。
+
+```
+sudo dnf -y install tmux
+```
+
+运行 `tmux`程序:
+
+```
+tmux
+```
+
+### 状态栏
+
+首先,似乎什么也没有发生,除了出现在终端的底部的状态栏:
+
+ ![Start of tmux session](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-41.png)
+
+底部栏显示:
+
+* `[0]` – 这是 `tmux` 服务器创建的第一个会话。编号从 0 开始。`tmux` 服务器会跟踪所有的会话确认其是否存活。
+* `0:testuser@scarlett:~` – 有关该会话的第一个窗口的信息。编号从 0 开始。这表示窗口的活动窗格中的终端归主机名 `scarlett` 中 `testuser` 用户所有。当前目录是 `~` (家目录)。
+* `*` – 显示你目前在此窗口中。
+* `“scarlett.internal.fri”` – 你正在使用的 `tmux` 服务器的主机名。
+* 此外,还会显示该特定主机上的日期和时间。
+
+当你向会话中添加更多窗口和窗格时,信息栏将随之改变。
+
+### tmux 基础知识
+
+把你的终端窗口拉伸到最大。现在让我们尝试一些简单的命令来创建更多的窗格。默认情况下,所有的命令都以 `Ctrl+b` 开头。
+
+* 敲 `Ctrl+b, "` 水平分割当前单个窗格。 现在窗口中有两个命令行窗格,一个在顶部,一个在底部。请注意,底部的新窗格是活动窗格。
+* 敲 `Ctrl+b, %` 垂直分割当前单个窗格。 现在你的窗口中有三个命令行窗格,右下角的窗格是活动窗格。
+
+![tmux window with three panes](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-59.png)
+
+注意当前窗格周围高亮显示的边框。要浏览所有的窗格,请做以下操作:
+
+* 敲 `Ctrl+b`,然后点箭头键
+* 敲 `Ctrl+b, q`,数字会短暂地出现在窗格上。在这期间,你可以按下你想要浏览的窗格所对应的数字。
+
+现在,尝试使用不同的窗格运行不同的命令。例如以下这样的:
+
+* 在顶部窗格中使用 `ls` 命令显示目录内容。
+* 在左下角的窗格中使用 `vi` 命令,编辑一个文本文件。
+* 在右下角的窗格中运行 `top` 命令监控系统进程。
+
+屏幕将会如下显示:
+
+![tmux session with three panes running different commands](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-57-51.png)
+
+到目前为止,这个示例中只是用了一个带多个窗格的窗口。你也可以在会话中运行多个窗口。
+
+* 为了创建一个新的窗口,请敲 `Ctrl+b, c`。请注意,状态栏显示当前有两个窗口正在运行。(敏锐的读者从上面的截图中就能看出来。)
+* 要移动到上一个窗口,请敲 `Ctrl+b, p` 。
+* 要移动到下一个窗口,请敲 `Ctrl+b, n` 。
+* 要立即移动到特定的窗口,请敲 `Ctrl+b` 然后跟上窗口编号。
+
+如果你想知道如何关闭窗格,只需要使用 `exit` 、`logout`,或者 `Ctrl+d` 来退出特定的命令行 shell。一旦你关闭了窗口中的所有窗格,那么该窗口也会消失。
+
+### 脱离和附加
+
+`tmux` 最强大的功能之一是能够脱离和重新附加到会话。 当你脱离的时候,你可以离开你的窗口和窗格独立运行。 此外,您甚至可以完全注销系统。 然后,您可以登录到同一个系统,重新附加到 `tmux` 会话,查看您离开时的所有窗口和窗格。 脱离的时候你运行的命令一直保持运行状态。
+
+为了脱离一个会话,请敲 `Ctrl+b, d`。然后会话消失,你重新返回到一个标准的单一 shell。如果要重新附加到会话中,使用以下命令:
+
+```
+tmux attach-session
+```
+
+当你连接到主机的网络不稳定时,这个功能就像救生员一样有用。如果连接失败,会话中的所有的进程都会继续运行。只要连接恢复了,你就可以恢复正常,就好像什么事情也没有发生一样。
+
+如果这些功能还不够,除了每个会话中的多个窗口和窗格之外,你还可以同时运行多个会话。你可以列出所有会话,然后通过编号或者名称附加到正确的会话:
+
+```
+tmux list-sessions
+```
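+
+例如(会话名 `work` 只是一个假设的名字):
+
+```
+tmux new-session -s work     # 创建一个名为 work 的新会话
+tmux list-sessions           # 列出所有会话
+tmux attach-session -t work  # 通过名称重新附加
+tmux attach-session -t 0     # 或者通过编号重新附加
+```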
+
+### 延伸阅读
+
+本文只触及的 `tmux` 的表面功能。你可以通过其他方式操作会话:
+
+* 将一个窗格和另一个窗格交换
+* 将窗格移动到另一个窗口中(可以在同一个会话中也可以在不同的会话中)
+* 设定快捷键自动执行你喜欢的命令
+* 在 `~/.tmux.conf` 文件中配置你最喜欢的配置项,这样每一个会话都会按照你喜欢的方式呈现(见下面的示例)
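+
+下面是一个最小的 `~/.tmux.conf` 示意(这些设置只是常见的个人偏好示例,并非本文原有内容):
+
+```
+# 启用鼠标选择窗格和调整大小(tmux 2.1 及以上)
+set -g mouse on
+# 增大回滚缓冲区
+set -g history-limit 10000
+# 敲 Ctrl+b, r 重新加载配置文件
+bind r source-file ~/.tmux.conf
+```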
+
+有关所有命令的完整说明,请查看以下参考:
+
+* 官方[手册页][1]
+* `tmux` [电子书][2]
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Paul W. Frields 自 1997 年以来一直是 Linux 用户和爱好者,并于 2003 年加入 Fedora 项目,这个项目刚推出不久。他是 Fedora 项目委员会的创始成员,在文档,网站发布,宣传,工具链开发和维护软件方面都有贡献。他于2008 年 2 月至 2010 年 7 月加入 Red Hat,担任 Fedora 项目负责人,并担任 Red Hat 的工程经理。目前他和妻子以及两个孩子居住在弗吉尼亚。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
+
+作者:[Paul W. Frields][a]
+译者:[Flowsnow](https://github.com/Flowsnow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/pfrields/
+[1]: http://man.openbsd.org/OpenBSD-current/man1/tmux.1
+[2]: https://pragprog.com/book/bhtmux2/tmux-2
+[3]: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
+[4]: http://www.cryptonomicon.com/beginning.html
+[5]: https://fedoramagazine.org/howto-use-sudo/
\ No newline at end of file
diff --git a/published/20170214 10 Best Linux Terminal Emulators For Ubuntu And Fedora.md b/published/201704/20170214 10 Best Linux Terminal Emulators For Ubuntu And Fedora.md
similarity index 100%
rename from published/20170214 10 Best Linux Terminal Emulators For Ubuntu And Fedora.md
rename to published/201704/20170214 10 Best Linux Terminal Emulators For Ubuntu And Fedora.md
diff --git a/published/201704/20170214 Basics of network protocol analyzer Wireshark On Linux.md b/published/201704/20170214 Basics of network protocol analyzer Wireshark On Linux.md
new file mode 100644
index 0000000000..e6211dabc4
--- /dev/null
+++ b/published/201704/20170214 Basics of network protocol analyzer Wireshark On Linux.md
@@ -0,0 +1,110 @@
+Linux 下网络协议分析器 Wireshark 使用基础
+=================
+
+Wireshark 是 Kali 中预置的众多有价值工具中的一种。与其它工具一样,它可以被用于正面用途,同样也可以被用于不良目的。当然,本文将会介绍如何追踪你自己的网络流量来发现潜在的非正常活动。
+
+Wireshark 相当的强大,当你第一次见到它的时候可能会被它吓到,但是它的目的始终就只有一个,那就是追踪网络流量,并且它所实现的所有选项都只为了加强它追踪流量的能力。
+
+### 安装
+
+Kali 中预置了 Wireshark 。不过,`wireshark-gtk` 包提供了一个更好的界面使你在使用 Wireshark 的时候会有更友好的体验。因此,在使用 Wireshark 前的第一步是安装 `wireshark-gtk` 这个包。
+
+```
+# apt install wireshark-gtk
+```
+
+如果你的 Kali 是从 live 介质上运行的也不需要担心,依然有效。
+
+
+### 基础配置
+
+在你使用 Wireshark 之前,将它设置成你使用起来最舒适的状态可能是最好的。Wireshark 提供了许多不同的布局方案和选项来配置程序的行为。尽管数量很多,但是使用起来是相当直接明确的。
+
+从启动 Wireshark-gtk 开始。需要确定启动的是 GTK 版本的。在 Kali 中它们是被分别列出的。
+
+![Wireshark running on Kali](https://linuxconfig.org/images/wireshark-start.jpg?58a2b879)
+
+### 布局
+
+默认情况下,Wireshark 的信息展示分为三块内容,每一块都叠在另一块上方。(LCTT 译注:这里的三部分指的是展示抓包信息的时候的那三块内容,本段配图没有展示,配图 4、5、6 的设置不是默认设置,与这里的描述不符)最上方的一块是所抓包的列表。中间的一块是包的详细信息。最下面那块中包含的是包的原始字节信息。通常来说,上面的两块中的信息比最下面的那块有用的多,但是对于资深用户来说这块内容仍然是重要信息。
+
+每一块都是可以缩放的,可并不是每一个人都必须使用这样叠起来的布局方式。你可以在 Wireshark 的“选项(Preferences)”菜单中进行更改。点击“编辑(Edit)”菜单,最下方就是的“选项”菜单。这个选项会打开一个有更多选项的新窗口。单击侧边菜单中“用户界面(User Interface)”下的“布局(Layout)”选项。
+
+![Wireshark's layout configuration](https://linuxconfig.org/images/wireshark-layouts.jpg?58a2b879)
+
+你将会看到一些不同的布局方案。上方的图示可以让你选择不同的面板位置布局方案,下面的单选框可以让你选择不同面板中的数据内容。
+
+下面那个标记为“列(Columns)”的标签可以让你选择展示所抓取包的哪些信息。选择那些你需要的数据信息,或者全部展示。
+
+### 工具条
+
+对于 Wireshark 的工具条,能做的设置不是太多,但是如果你想设置的话,你依然可以在前文提到的“布局”菜单中的窗口管理工具下方找到一些有用的设置选项。那些能让你配置工具条和工具条中条目的选项就在窗口选项下方。
+
+你还可以在“视图(View)”菜单下勾选来配置工具条的显示内容。
+
+### 功能
+
+主要的用来控制 Wireshark 抓包的控制选项基本都集中在“捕捉(Capture)”菜单下的“选项(Options)”选项中。
+
+在开启的窗口中最上方的“捕捉(Capture)”部分可以让你选择 Wireshark 要监控的某个具体的网络接口。这部分可能会由于你系统的配置不同而会有相当大的不同。要记得勾选正确的选择框才能获得正确的数据。虚拟机和伴随它们一起的网络接口也同样会在这个列表里显示。同样也会有多种不同的选项对应这多种不同的网络接口。
+
+![Wireshark's capture configuration](https://linuxconfig.org/images/wireshark-capture-config.jpg?58a2b879)
+
+在网络接口列表的下方是两个选项。其中一个选项是全选所有的接口。另一个选项用来选择是否开启混杂模式。这个选项可以使你的计算机监控到所选网络上的所有的计算机。(LCTT 译注:混杂模式可以在 HUB 中或监听模式的交换机接口上捕获那些由于 MAC 地址非本机而会被自动丢弃的数据包)如果你想监控你所在的整个网络,这个选项是你所需要的。
+
+**注意:** 在一个不属于你或者不拥有权限的网络上使用混杂模式来监控是非法的!
+
+在窗口下方的右侧是“显示选项(Display Options)”和“名称解析(Name Resolution)”选项块。对于“显示选项(Display Options)”来说,三个选项全选可能就是一个很好的选择了。当然你也可以取消选择,但是最好还是保留选择“实时更新抓包列表”。
+
+在“名称解析(Name Resolution)”中你也可以设置你的偏好。这里的选项会产生附加的请求,因此选得越多,产生的请求也越多,会使你抓取的包列表显得杂乱。把 MAC 解析选项选上是个好主意,那样就可以知道所使用的网络硬件的品牌了。这可以帮助你来确定你是在与哪台设备上的哪个接口进行交互。
+
+### 抓包
+
+抓包是 Wireshark 的核心功能。监控和记录特定网络上的流量就是它最初产生的目的。使用它最基本的方式来作这个抓包的工作是相当简单方便的。当然,越多的配置和选项就越可以充分利用 Wireshark 的力量。这里的介绍的关注点依然还是它最基本的记录方式。
+
+按下那个看起来像蓝色鲨鱼鳍的新建实时抓包按钮就可以开始抓包了。(LCTT 译注:在我的 Debian 上它是绿色的)
+
+
+![Wireshark listing packet information](https://linuxconfig.org/images/wireshark-packet-list.jpg?58a2b879)
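+
+顺带一提(这只是一个补充示意,并非原文内容,接口名 `eth0` 为假设),你也可以在启动 Wireshark 的同时直接从命令行开始抓包:
+
+```
+# -i 指定接口,-k 立即开始抓包,-f 设置抓包过滤器(可选)
+wireshark-gtk -i eth0 -k -f "tcp port 80"
+```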
+
+在抓包的过程中,Wireshark 会收集所有它能收集到的包的数据并且记录下来。如果没有更改过相关设置的话,在抓包的过程中你会看见不断有新的包进入到“包列表”面板中。你可以实时地查看你认为有趣的包,或者就让 Wireshark 运行着,同时你可以做一些其它的事情。
+
+当你完成了,按下红色的正方形“停止”按钮就可以了。现在,你可以选择是否要保存这些所抓取的数据了。要保存的话,你可以使用“文件”菜单下的“保存”或者是“另存为”选项。
+
+### 读取数据
+
+Wireshark 的目标是向你提供你所需要的所有数据。这样做时,它会在它监控的网络上收集大量与网络包相关的数据。它使用可折叠的标签来展示这些数据,使得这些数据看起来没有那么吓人。每一个标签都对应于网络包中一部分的请求数据。
+
+这些标签是按照从最底层到最高层一层层堆起来的。顶部标签总是包含数据包中包含的字节数据。最下方的标签可能会是多种多样的。在下图的例子中是一个 HTTP 请求,它会包含 HTTP 的信息。您遇到的大多数数据包将是 TCP 数据,它将展示在底层的标签中。
+
+![Wireshark listing HTTP packet info](https://linuxconfig.org/images/wireshark-packet-info-http.jpg?58a2b879)
+
+每一个标签页都包含了抓取包中对应部分的相关数据。一个 HTTP 包可能会包含与请求类型相关的信息,如所使用的网络浏览器,服务器的 IP 地址,语言,编码方式等的数据。一个 TCP 包会包含服务器与客户端使用的端口信息和 TCP 三次握手过程中的标志位信息。
+
+![Wireshark listing TCP packet info](https://linuxconfig.org/images/wireshark-packet-info-tcp.jpg?58a2b879)
+
+在上方的其它标签中包含了一些大多数用户都感兴趣的少量信息。其中一个标签中包含了数据包是否是通过 IPv4 或者 IPv6 传输的,以及客户端和服务器端的 IP 地址。另一个标签中包含了客户端和接入因特网的路由器或网关的设备的 MAC 地址信息。
+
+### 结语
+
+即使只使用这些基础选项与配置,你依然可以发现 Wireshark 会是一个多么强大的工具。监控你的网络流量可以帮助你识别、终止网络攻击或者提升连接速度。它也可以帮你找到问题应用。下一篇 Wireshark 指南我们将会一起探索 Wireshark 的包过滤选项。
+
+--------------------------------------------------------------------------------
+
+via: https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux
+
+作者:[Nick Congleton][a]
+译者:[wcnnbdk1](https://github.com/wcnnbdk1)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux
+[1]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h2-1-layout
+[2]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h2-2-toolbars
+[3]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h2-3-functionality
+[4]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h1-installation
+[5]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h2-basic-configuration
+[6]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h3-capture
+[7]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h4-reading-data
+[8]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h5-closing-thoughts
diff --git a/published/20170217 Understanding the difference between sudo and su.md b/published/201704/20170217 Understanding the difference between sudo and su.md
similarity index 100%
rename from published/20170217 Understanding the difference between sudo and su.md
rename to published/201704/20170217 Understanding the difference between sudo and su.md
diff --git a/translated/tech/20170218 Install Drupal 8 in RHEL CentOS Fedora.md b/published/201704/20170218 Install Drupal 8 in RHEL CentOS Fedora.md
similarity index 68%
rename from translated/tech/20170218 Install Drupal 8 in RHEL CentOS Fedora.md
rename to published/201704/20170218 Install Drupal 8 in RHEL CentOS Fedora.md
index a747fad043..5069df80e5 100644
--- a/translated/tech/20170218 Install Drupal 8 in RHEL CentOS Fedora.md
+++ b/published/201704/20170218 Install Drupal 8 in RHEL CentOS Fedora.md
@@ -1,30 +1,31 @@
-在 RHEL,CentOS 及 Fedora 上安装 Drupal 8
+在 RHEL、CentOS 及 Fedora 上安装 Drupal 8
============================================================
**Drupal** 是一个开源,灵活,高度可拓展和安全的内容管理系统(CMS),使用户轻松的创建网站。
+
它可以使用模块拓展,使用户将内容管理转换为强大的数字解决方案。
-**Drupal** 运行在诸如 **Apache,IIS,Lighttpd,Cherokee,Nginx** 的 Web 服务器上,后端数据库可以使用 **Mysql,MongoDB,MariaDB,PostgreSQL,MSSQL Server**。
+**Drupal** 运行在诸如 Apache、IIS、Lighttpd、Cherokee、Nginx 的 Web 服务器上,后端数据库可以使用 MySQL、MongoDB、MariaDB、PostgreSQL、MSSQL Server。
-在这篇文章中, 我们会展示在 RHEL 7/6,CentOS 7/6 和 Fedora 20-25 发行版本使用 LAMP,如何手动安装和配置 Drupal 8。
+在这篇文章中, 我们会展示在 RHEL 7/6、CentOS 7/6 和 Fedora 20-25 发行版上使用 LAMP 架构,如何手动安装和配置 Drupal 8。
-#### Drupal 需求:
+#### Drupal 需求:
-1. **Apache 2.x** (推荐)
-2. **PHP 5.5.9** 或 更高 (推荐 PHP 5.5)
-3. **MYSQL 5.5.3** 或 **MariaDB 5.5.20** 与 PHP 数据对象(PDO)
+1. **Apache 2.x** (推荐)
+2. **PHP 5.5.9** 或 更高 (推荐 PHP 5.5)
+3. **MySQL 5.5.3** 或 **MariaDB 5.5.20** 与 PHP 数据对象(PDO) 支持
安装过程中,我使用 `drupal.tecmint.com` 作为网站主机名,IP 地址为 `192.168.0.104`。你的环境也许与这些设置不同,因此请适当做出更改。
-### 步骤 1: 安装 Apache Web 服务器
+### 步骤 1:安装 Apache Web 服务器
-1. 首先我们从官方仓库开始安装 Apache Web 服务器。
+1、 首先我们从官方仓库开始安装 Apache Web 服务器。
```
# yum install httpd
```
-2. 安装完成后,服务将会被被禁用,因此我们需要手动启动它,同时让它下次系统启动时自动启动,如下:
+2、 安装完成后,服务开始是被禁用的,因此我们需要手动启动它,同时让它下次系统启动时自动启动,如下:
```
------------- 通过 SystemD - CentOS/RHEL 7 和 Fedora 22+ -------------------
@@ -36,7 +37,7 @@
# chkconfig --level 35 httpd on
```
-3. 接下来,为了允许通过 **HTTP** 和 **HTTPS** 访问 Apache 服务,我们必须打开 **HTTPD** 守护进程正在监听的 **80** 和 **443** 端口,如下所示:
+3、 接下来,为了允许通过 **HTTP** 和 **HTTPS** 访问 Apache 服务,我们必须打开 **HTTPD** 守护进程正在监听的 **80** 和 **443** 端口,如下所示:
```
------------ 通过 Firewalld - CentOS/RHEL 7 and Fedora 22+ -------------
@@ -51,7 +52,7 @@
# service iptables restart
```
-4. 现在验证 Apache 是否正常工作, 打开浏览器在地址栏中输入 http://server_IP, 输入你的服务器 IP 地址, 默认 Apache2 页面应出现,如下面截图所示:
+4、 现在验证 Apache 是否正常工作, 打开浏览器在地址栏中输入 `http://server_IP`, 输入你的服务器 IP 地址, 默认 Apache2 页面应出现,如下面截图所示:
[
![Apache 默认页面](https://dn-coding-net-production-pp.qbox.me/a93436ad-59ee-404d-9a28-ebde4446cd6d.png)
@@ -59,15 +60,15 @@
*Apache 默认页面*
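+
+除了浏览器,也可以直接在命令行里快速检查 Apache 是否响应(示例;IP 地址请换成你自己的):
+
+```
+$ curl -I http://192.168.0.104
+```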
-### 步骤 2: 安装 Apache PHP 支持
+### 步骤 2: 安装 Apache PHP 支持
-5. 接下来,安装 PHP 和 PHP 所需模块.
+5、 接下来,安装 PHP 和 PHP 所需模块。
```
# yum install php php-mbstring php-gd php-xml php-pear php-fpm php-mysql php-pdo php-opcache
```
-**重要**: 假如你想要安装 **PHP7**, 你需要增加以下仓库:**EPEL** 和 **Webtactic** 才可以使用 yum 安装 PHP7.0:
+**重要**: 假如你想要安装 **PHP7**, 你需要增加以下仓库:**EPEL** 和 **Webtatic** 才可以使用 yum 安装 PHP7.0:
```
------------- Install PHP 7 in CentOS/RHEL and Fedora -------------
@@ -76,7 +77,7 @@
# yum install php70w php70w-opcache php70w-mbstring php70w-gd php70w-xml php70w-pear php70w-fpm php70w-mysql php70w-pdo
```
-6. 接下来,要从浏览器得到关于 PHP 安装和配置完整信息,使用下面命令在 Apache 文档根目录 (/var/www/html) 创建一个 `info.php` 文件。
+6、 接下来,要从浏览器得到关于 PHP 安装和配置完整信息,使用下面命令在 Apache 文档根目录 (`/var/www/html`) 创建一个 `info.php` 文件。
```
# echo "<?php phpinfo(); ?>" > /var/www/html/info.php
@@ -98,7 +99,7 @@
### 步骤 3: 安装和配置 MariaDB 数据库
-7. 请了解, **Red Hat Enterprise Linux/CentOS 7.0** 从支持 **MYSQL** 转为了 **MariaDB** 作为默认数据库管理系统。
+7、 请知晓, **Red Hat Enterprise Linux/CentOS 7.0** 从支持 **MySQL** 转为了 **MariaDB** 作为默认数据库管理系统。
要安装 **MariaDB** 数据库, 你需要添加 [官方 MariaDB 库][3] 到 `/etc/yum.repos.d/MariaDB.repo` 中,如下所示。
@@ -110,13 +111,13 @@ gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
```
-当仓库文件准备好后,你可以像这样安装 MariaDB :
+当仓库文件准备好后,你可以像这样安装 MariaDB:
```
# yum install mariadb-server mariadb
```
-8. 当 MariaDB 数据库安装完成,启动数据库的守护进程,同时使它能够在下次启动后自动启动。
+8、 当 MariaDB 数据库安装完成,启动数据库的守护进程,同时使它能够在下次启动后自动启动。
```
------------- 通过 SystemD - CentOS/RHEL 7 and Fedora 22+ -------------
@@ -127,7 +128,7 @@ gpgcheck=1
# chkconfig --level 35 mysqld on
```
-9. 然后运行 `mysql_secure_installation` 脚本去保护数据库(设置 root 密码, 禁用远程登录,移除测试数据库并移除匿名用户),如下所示:
+9、 然后运行 `mysql_secure_installation` 脚本去保护数据库(设置 root 密码, 禁用远程登录,移除测试数据库并移除匿名用户),如下所示:
```
# mysql_secure_installation
@@ -136,25 +137,25 @@ gpgcheck=1
![Mysql安全安装](https://dn-coding-net-production-pp.qbox.me/15a20560-ea9f-499b-b155-a310e9aa6a88.png)
][4]
-*Mysql 安全安装*
+*MySQL 安全安装*
-### 步骤 4: 在 CentOS 中安装和配置 Drupal 8
+### 步骤 4: 在 CentOS 中安装和配置 Drupal 8
-10. 这里我们使用 [wget 命令][6] [下载最新版本 Drupal][5](例如 8.2.6),如果你没有安装 wget 和 gzip 包 ,请使用下面命令安装它们:
+10、 这里我们使用 [wget 命令][6] [下载最新版本 Drupal][5](例如 8.2.6),如果你没有安装 wget 和 gzip 包 ,请使用下面命令安装它们:
```
# yum install wget gzip
# wget -c https://ftp.drupal.org/files/projects/drupal-8.2.6.tar.gz
```
-11. 之后,[解压 tar 文件][7] 并移动 Drupal 目录到 Apache 文档根目录(`/var/www/html`).
+11、 之后,[解压 tar 文件][7] 并移动 Drupal 目录到 Apache 文档根目录(`/var/www/html`)。
```
# tar -zxvf drupal-8.2.6.tar.gz
# mv drupal-8.2.6 /var/www/html/drupal
```
-12. 然后,依据 `/var/www/html/drupal/sites/default` 目录下的示例设置文件 default.settings.php,创建设置文件 `settings.php`,然后给 Drupal 站点目录设置适当权限,包括子目录和文件,如下所示:
+12、 然后,依据 `/var/www/html/drupal/sites/default` 目录下的示例设置文件 `default.settings.php`,创建设置文件 `settings.php`,然后给 Drupal 站点目录设置适当权限,包括子目录和文件,如下所示:
```
# cd /var/www/html/drupal/sites/default/
@@ -162,13 +163,13 @@ gpgcheck=1
# chown -R apache:apache /var/www/html/drupal/
```
-13. 更重要的是在 `/var/www/html/drupal/sites/` 目录设置 **SElinux** 规则,如下:
+13、 更重要的是在 `/var/www/html/drupal/sites/` 目录设置 **SElinux** 规则,如下:
```
# chcon -R -t httpd_sys_content_rw_t /var/www/html/drupal/sites/
```
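
+注意:`chcon` 设置的上下文在文件系统重新打标(relabel)后可能丢失。如果希望规则持久生效,可以考虑先用 `semanage` 登记规则,再用 `restorecon` 应用(示意;`semanage` 由 policycoreutils 相关软件包提供):
+
+```
+# semanage fcontext -a -t httpd_sys_content_rw_t "/var/www/html/drupal/sites(/.*)?"
+# restorecon -Rv /var/www/html/drupal/sites/
+```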
-14. 现在我们必须为 Drupal 站点去创建一个数据库和用户来管理。
+14、 现在我们必须为 Drupal 站点去创建一个用于管理的数据库和用户。
```
# mysql -u root -p
@@ -191,7 +192,7 @@ Query OK, 0 rows affected (0.00 sec)
Bye
```
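+
+这一步中创建数据库和授权的语句大致如下(示意;数据库名 `drupaldb`、用户名 `drupaluser` 和密码都是举例,请按需替换):
+
+```
+CREATE DATABASE drupaldb;
+GRANT ALL PRIVILEGES ON drupaldb.* TO 'drupaluser'@'localhost' IDENTIFIED BY 'password';
+FLUSH PRIVILEGES;
+EXIT;
+```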
-15. 最后,打开地址: `http://server_IP/drupal/` 开始网站的安装,选择你首选的安装语言然后点击保存以继续。
+15、 最后,打开地址: `http://server_IP/drupal/` 开始网站的安装,选择你首选的安装语言然后点击保存以继续。
[
![Drupal 安装语言](http://www.tecmint.com/wp-content/uploads/2013/07/Drupal-Installation-Language.png)
@@ -199,7 +200,7 @@ Bye
*Drupal 安装语言*
-16. 下一步,选择安装配置文件,选择 Standard(标准),点击保存继续。
+16、 下一步,选择安装配置文件,选择 Standard(标准),点击保存继续。
[
![Drupal 安装配置文件](http://www.tecmint.com/wp-content/uploads/2013/07/Drupal-Installation-Profile.png)
@@ -207,7 +208,7 @@ Bye
*Drupal 安装配置文件*
-17. 在进行下一步之前查看并通过需求审查并启用 `Clean URL`。
+17、 在进行下一步之前查看并通过需求审查并启用 `Clean URL`。
[
![验证 Drupal 需求](http://www.tecmint.com/wp-content/uploads/2013/07/Verify-Drupal-Requirements.png)
@@ -215,13 +216,13 @@ Bye
*验证 Drupal 需求*
-现在在你的 Apache 配置下启用 Clean URL Drupal。
+现在在你的 Apache 配置下启用 Clean URL 的 Drupal。
```
# vi /etc/httpd/conf/httpd.conf
```
-确保为默认根文档目录 **/var/www/html** 设置 **AllowOverride All**,如下图所示:
+确保为默认根文档目录 `/var/www/html` 设置 `AllowOverride All`,如下图所示:
[
![在 Drupal 中启用 Clean URL](http://www.tecmint.com/wp-content/uploads/2013/07/Enable-Clean-URL-in-Drupal.png)
@@ -229,7 +230,7 @@ Bye
*在 Drupal 中启用 Clean URL*
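+
+对应的配置片段大致如下(节选示意):
+
+```
+<Directory "/var/www/html">
+    Options Indexes FollowSymLinks
+    AllowOverride All
+    Require all granted
+</Directory>
+```
+
+修改后需要重启 Apache(例如 `systemctl restart httpd`)使配置生效。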
-18. 当你为 Drupal 启用 `Clean URL`,刷新页面从下面界面执行数据库配置,输入 Drupal 站点数据库名,数据库用户和数据库密码。
+18、 当你为 Drupal 启用 Clean URL,刷新页面从下面界面执行数据库配置,输入 Drupal 站点数据库名,数据库用户和数据库密码。
当填写完所有信息点击**保存并继续**。
@@ -247,15 +248,15 @@ Bye
*Drupal 安装*
-19. 接下来配置站点为下面的设置(使用适用你的情况的值):
+19、 接下来配置站点为下面的设置(使用适用你的情况的值):
- - **站点名称** – TecMint Drupal Site
- - **站点邮箱地址** – admin@tecmint.com
- - **用户名** – admin
- - **密码** – ##########
- - **用户的邮箱地址** – admin@tecmint.com
- - **默认国家** – India
- - **默认时区** – UTC
+- **站点名称** – TecMint Drupal Site
+- **站点邮箱地址** – admin@tecmint.com
+- **用户名** – admin
+- **密码** – ##########
+- **用户的邮箱地址** – admin@tecmint.com
+- **默认国家** – India
+- **默认时区** – UTC
设置适当的值后,点击**保存并继续**完成站点安装过程。
@@ -275,7 +276,7 @@ Bye
现在你可以点击**增加内容**,创建示例网页内容。
-选项: 有些人[使用 MYSQL 命令行管理数据库][16]不舒服,可以从浏览器界面 [安装 PHPMYAdmin 管理数据库][17]
+选项: 有些人[使用 MySQL 命令行管理数据库][16]不舒服,可以从浏览器界面 [安装 PHPMYAdmin 管理数据库][17]
浏览 Drupal 文档 : [https://www.drupal.org/docs/8][18]
diff --git a/translated/tech/20170220 Inxi – A Powerful Feature-Rich Commandline System Information Tool for Linux.md b/published/201704/20170220 Inxi – A Powerful Feature-Rich Commandline System Information Tool for Linux.md
similarity index 91%
rename from translated/tech/20170220 Inxi – A Powerful Feature-Rich Commandline System Information Tool for Linux.md
rename to published/201704/20170220 Inxi – A Powerful Feature-Rich Commandline System Information Tool for Linux.md
index 75d02d05f7..e666cb0886 100644
--- a/translated/tech/20170220 Inxi – A Powerful Feature-Rich Commandline System Information Tool for Linux.md
+++ b/published/201704/20170220 Inxi – A Powerful Feature-Rich Commandline System Information Tool for Linux.md
@@ -1,33 +1,31 @@
-
-Inxi —— 一个功能强大的获取 Linux 系统信息的命令行工具
+inxi:一个功能强大的获取 Linux 系统信息的命令行工具
============================================================
-Inxi 最初是为控制台和 IRC(网络中继聊天)开发的一个强大且优秀的系统命令行工具。现在可以使用它获取用户的硬件和系统信息,它也能作为一个调试器使用或者一个社区技术支持工具。
+Inxi 最初是为控制台和 IRC(网络中继聊天)开发的一个强大且优秀的命令行系统信息脚本。可以使用它获取用户的硬件和系统信息,它也可用作调试工具或社区技术支持工具。
使用 Inxi 可以很容易的获取所有的硬件信息:硬盘、声卡、显卡、网卡、CPU 和 RAM 等。同时也能够获取大量的操作系统信息,比如硬件驱动、Xorg 、桌面环境、内核、GCC 版本,进程,开机时间和内存等信息。
-
-命令行和 IRC 上的 Inxi 输出略有不同,IRC 上会有一些可供用户使用的过滤器和颜色选项。支持 IRC 的客户端有:BitchX、Gaim/Pidgin、ircII、Irssi、 Konversation、 Kopete、 KSirc、 KVIrc、 Weechat 和 Xchat ;其他的一些客户端都会有一些过滤器和颜色选项,或者用 Inxi 的输出体现出这种区别。
-
-
+运行在命令行和 IRC 上的 Inxi 输出略有不同,IRC 上会有一些可供用户使用的默认过滤器和颜色选项。支持的 IRC 客户端有:BitchX、Gaim/Pidgin、ircII、Irssi、 Konversation、 Kopete、 KSirc、 KVIrc、 Weechat 和 Xchat 以及其它的一些客户端,它们具有展示内置或外部 Inxi 输出的能力。
### 在 Linux 系统上安装 Inxi
大多数主流 Linux 发行版的仓库中都有 Inxi ,包括大多数 BSD 系统。
-
```
$ sudo apt-get install inxi [On Debian/Ubuntu/Linux Mint]
$ sudo yum install inxi [On CentOs/RHEL/Fedora]
$ sudo dnf install inxi [On Fedora 22+]
```
-在使用 Inxi 之前,用下面的命令查看 Inxi 的介绍信息,包括各种各样的文件夹和需要安装的包。
+
+在使用 Inxi 之前,用下面的命令查看 Inxi 所有依赖和推荐的应用,以及各种目录,并显示需要安装哪些包来支持给定的功能。
```
$ inxi --recommends
```
+
Inxi 的输出:
+
```
inxi will now begin checking for the programs it needs to operate. First a check of the main languages and tools
inxi uses. Python is only for debugging data collection.
@@ -122,12 +120,13 @@ File: /var/run/dmesg.boot
All tests completed.
```
+### Inxi 工具的基本用法
-用下面的操作获取系统和硬件的详细信息。
+用下面的基本用法获取系统和硬件的详细信息。
-#### 获取系统信息
-Inix 不加任何选项就能输出下面的信息:CPU 、内核、开机时长、内存大小、硬盘大小、进程数、登陆终端以及 Inxi 版本。
+#### 获取 Linux 系统信息
+Inxi 不加任何选项就能输出下面的信息:CPU、内核、开机时长、内存大小、硬盘大小、进程数、登录终端以及 Inxi 版本。
```
$ inxi
@@ -136,8 +135,7 @@ CPU~Dual core Intel Core i5-4210U (-HT-MCP-) speed/max~2164/2700 MHz Kernel~4.4.
#### 获取内核和发行版本信息
-使用 Inxi 的 `-S` 选项查看本机系统信息:
-
+使用 Inxi 的 `-S` 选项查看本机系统信息(主机名、内核信息、桌面环境和发行版):
```
$ inxi -S
@@ -145,8 +143,8 @@ System: Host: TecMint Kernel: 4.4.0-21-generic x86_64 (64 bit) Desktop: Cinnamon
Distro: Linux Mint 18 Sarah
```
-
### 获取电脑机型
+
使用 `-M` 选项查看机型(笔记本/台式机)、产品 ID 、机器版本、主板、制造商和 BIOS 等信息:
@@ -157,8 +155,8 @@ Mobo: LENOVO model: Lancer 5A5 v: 31900059WIN Bios: LENOVO v: 9BCN26WW date: 07/
```
### 获取 CPU 及主频信息
-使用 `-C` 选项查看完整的 CPU 信息,包括每核 CPU 的频率及可用的最大主频。
+使用 `-C` 选项查看完整的 CPU 信息,包括每核 CPU 的频率及可用的最大主频。
```
$ inxi -C
@@ -166,10 +164,9 @@ CPU: Dual core Intel Core i5-4210U (-HT-MCP-) cache: 3072 KB
clock speeds: max: 2700 MHz 1: 1942 MHz 2: 1968 MHz 3: 1734 MHz 4: 1710 MHz
```
-
#### 获取显卡信息
-使用 `-G` 选项查看显卡信息,包括显卡类型、图形服务器、系统分辨率、GLX 渲染器(译者注: GLX 是一个 X 窗口系统的 OpenGL 扩展)和 GLX 版本。
+使用 `-G` 选项查看显卡信息,包括显卡类型、显示服务器、系统分辨率、GLX 渲染器和 GLX 版本等等(LCTT 译注: GLX 是一个 X 窗口系统的 OpenGL 扩展)。
```
$ inxi -G
@@ -179,10 +176,9 @@ Display Server: X.Org 1.18.4 drivers: intel (unloaded: fbdev,vesa) Resolution: 1
GLX Renderer: Mesa DRI Intel Haswell Mobile GLX Version: 3.0 Mesa 11.2.0
```
-
#### 获取声卡信息
-使用 `-A` 选项查看声卡信息:
+使用 `-A` 选项查看声卡信息:
```
$ inxi -A
@@ -190,8 +186,8 @@ Audio: Card-1 Intel 8 Series HD Audio Controller driver: snd_hda_intel Sound
Card-2 Intel Haswell-ULT HD Audio Controller driver: snd_hda_intel
```
-
#### 获取网卡信息
+
使用 `-N` 选项查看网卡信息:
```
@@ -201,18 +197,17 @@ Card-2: Realtek RTL8723BE PCIe Wireless Network Adapter driver: rtl8723be
```
#### 获取硬盘信息
-使用 `-D` 选项查看硬盘信息(大小、ID、型号):
+使用 `-D` 选项查看硬盘信息(大小、ID、型号):
```
$ inxi -D
Drives: HDD Total Size: 1000.2GB (20.0% used) ID-1: /dev/sda model: ST1000LM024_HN size: 1000.2GB
```
-
-
#### 获取简要的系统信息
-使用 `-b` 选项显示简要系统信息:
+使用 `-b` 选项显示上述各项信息的简要汇总:
+
```
$ inxi -b
System: Host: TecMint Kernel: 4.4.0-21-generic x86_64 (64 bit) Desktop: Cinnamon 3.0.7
@@ -231,18 +226,17 @@ Info: Processes: 233 Uptime: 3:23 Memory: 3137.5/7879.9MB Client: Shell (ba
```
#### 获取硬盘分区信息
-使用 `-p` 选项输出完整的硬盘分区信息,包括每个分区的分区大小、已用空间、可用空间、文件系统以及文件系统类型。
+使用 `-p` 选项输出完整的硬盘分区信息,包括每个分区的分区大小、已用空间、可用空间、文件系统以及文件系统类型。
```
$ inxi -p
Partition: ID-1: / size: 324G used: 183G (60%) fs: ext4 dev: /dev/sda10
ID-2: swap-1 size: 4.00GB used: 0.00GB (0%) fs: swap dev: /dev/sda9
```
-
-
#### 获取完整的 Linux 系统信息
-使用 `-F` 选项查看可以完整的 Inxi 输出(安全起见比如网络 IP 地址信息不会显示,下面的示例只显示部分输出信息):
+
+使用 `-F` 选项可以查看完整的 Inxi 输出(出于安全考虑,网络 IP 地址信息没有显示;下面的示例只显示部分输出信息):
```
$ inxi -F
@@ -275,16 +269,17 @@ Info: Processes: 234 Uptime: 3:26 Memory: 3188.9/7879.9MB Client: Shell (ba
下面是监控 Linux 系统进程、开机时间和内存的几个选项的使用方法。
-
#### 监控 Linux 进程的内存使用
使用下面的命令查看进程数、开机时间和内存使用情况:
+
```
$ inxi -I
Info: Processes: 232 Uptime: 3:35 Memory: 3256.3/7879.9MB Client: Shell (bash) inxi: 2.2.35
```
#### 监控进程占用的 CPU 和内存资源
+
Inxi 默认显示 [前 5 个最消耗 CPU 和内存的进程][1]。 `-t` 选项和 `c` 选项一起使用查看前 5 个最消耗 CPU 资源的进程,查看最消耗内存的进程使用 `-t` 选项和 `m` 选项; `-t`选项 和 `cm` 选项一起使用显示前 5 个最消耗 CPU 和内存资源的进程。
```
@@ -325,8 +320,8 @@ Memory: MB / % used - Used/Total: 3223.6/7879.9MB - top 5 active
4: mem: 244.45MB (3.1%) command: chrome pid: 7405
5: mem: 211.68MB (2.6%) command: chrome pid: 6146
```
+
可以在选项 `cm` 后跟一个整数(在 1-20 之间)设置显示多少个进程,下面的命令显示了前 10 个最消耗 CPU 和内存的进程:
-We can use `cm` number (number can be 1-20) to specify a number other than 5, the command below will show us the [top 10 most active processes][2] eating up CPU and memory.
```
$ inxi -t cm10
@@ -355,8 +350,8 @@ Memory: MB / % used - Used/Total: 3163.1/7879.9MB - top 10 active
```
#### 监控网络设备
-下面的命令会列出网卡信息,包括接口信息、网络频率、mac 地址、网卡状态和网络 IP 等信息。
+下面的命令会列出网卡信息,包括接口信息、网络频率、mac 地址、网卡状态和网络 IP 等信息。
```
$ inxi -Nni
@@ -369,8 +364,8 @@ IF: enp1s0 ip-v4: 192.168.0.103
```
#### 监控 CPU 温度和电脑风扇转速
-可以使用 `-s` 选项监控 [配置了传感器的机器][2] 获取温度和风扇转速:
+可以使用 `-s` 选项监控 [配置了传感器的机器][2] 获取温度和风扇转速:
```
$ inxi -s
@@ -379,8 +374,8 @@ Fan Speeds (in rpm): cpu: N/A
```
#### 用 Linux 查看天气预报
-使用 `-w` 选项查看本地区的天气情况(虽然使用的 API 可能不是很可靠),使用 `-w` `` 设置所在的地区。
+使用 `-w` 选项查看本地区的天气情况(虽然使用的 API 可能不是很可靠),使用 `-W <location>` 设置另外的地区。
```
$ inxi -w
@@ -391,9 +386,9 @@ $ inxi -W Nairobi,Kenya
Weather: Conditions: 70 F (21 C) - Mostly Cloudy Time: February 20, 11:08 AM EAT
```
-#### 查看所有的 Linux 仓库信息。
-另外,可以使用 `-r` 选项查看一个 Linux 发行版的仓库信息:
+#### 查看所有的 Linux 仓库信息
+另外,可以使用 `-r` 选项查看一个 Linux 发行版的仓库信息:
```
$ inxi -r
@@ -426,16 +421,16 @@ Active apt sources in file: /etc/apt/sources.list.d/ubuntu-mozilla-security-ppa-
deb http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu xenial main
deb-src http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu xenial main
```
-下面是查看 Inxi 的安装版本、快速帮助和打开 man 主页的方法,以及更多的 Inxi 使用细节。
+下面是查看 Inxi 的安装版本、快速帮助和打开 man 主页的方法,以及更多的 Inxi 使用细节。
```
$ inxi -v #显示版本信息
$ inxi -h #快速帮助
$ man inxi #打开 man 主页
```
+
浏览 Inxi 的官方 GitHub 主页 [https://github.com/smxi/inxi][4] 查看更多的信息。
-For more information, visit official GitHub Repository: [https://github.com/smxi/inxi][4]
Inxi 是一个功能强大的获取硬件和系统信息的命令行工具。这也是我使用过的最好的 [获取硬件和系统信息的命令行工具][5] 之一。
@@ -446,7 +441,8 @@ Inxi 是一个功能强大的获取硬件和系统信息的命令行工具。这
作者简介:
-Aaron Kili 是一个 Linux 和 F.O.S.S(译者注:一个 Linux 开源门户网站)的狂热爱好者,即任的 Linux 系统管理员,web 开发者,TecMint 网站的专栏作者,他喜欢使用计算机工作,并且乐于分享计算机技术。
+
+Aaron Kili 是一个 Linux 和 F.O.S.S 的狂热爱好者,即任的 Linux 系统管理员,web 开发者,TecMint 网站的专栏作者,他喜欢使用计算机工作,并且乐于分享计算机技术。
--------------------------------------------------------------------------------
@@ -455,7 +451,7 @@ via: http://www.tecmint.com/inxi-command-to-find-linux-system-information/
作者:[Aaron Kili][a]
译者:[vim-kakali](https://github.com/vim-kakali)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20170222 Create a Shared Directory on Samba AD DC and Map to WindowsLinux Clients – Part 7.md b/published/201704/20170222 Create a Shared Directory on Samba AD DC and Map to WindowsLinux Clients – Part 7.md
similarity index 100%
rename from published/20170222 Create a Shared Directory on Samba AD DC and Map to WindowsLinux Clients – Part 7.md
rename to published/201704/20170222 Create a Shared Directory on Samba AD DC and Map to WindowsLinux Clients – Part 7.md
diff --git a/published/20170225 Microsoft Office Online gets better - on Linux too.md b/published/201704/20170225 Microsoft Office Online gets better - on Linux too.md
similarity index 100%
rename from published/20170225 Microsoft Office Online gets better - on Linux too.md
rename to published/201704/20170225 Microsoft Office Online gets better - on Linux too.md
diff --git a/published/20170302 How to use markers and perform text selection in Vim.md b/published/201704/20170302 How to use markers and perform text selection in Vim.md
similarity index 100%
rename from published/20170302 How to use markers and perform text selection in Vim.md
rename to published/201704/20170302 How to use markers and perform text selection in Vim.md
diff --git a/published/20170302 Installation of Devuan Linux Fork of Debian.md b/published/201704/20170302 Installation of Devuan Linux Fork of Debian.md
similarity index 100%
rename from published/20170302 Installation of Devuan Linux Fork of Debian.md
rename to published/201704/20170302 Installation of Devuan Linux Fork of Debian.md
diff --git a/published/201704/20170309 8 reasons to use LXDE.md b/published/201704/20170309 8 reasons to use LXDE.md
new file mode 100644
index 0000000000..a8ae04fb3c
--- /dev/null
+++ b/published/201704/20170309 8 reasons to use LXDE.md
@@ -0,0 +1,99 @@
+使用 LXDE 的 8 个理由
+============================================================
+
+> 考虑使用轻量级桌面环境 LXDE 作为你 Linux 桌面的理由
+
+![使用 LXDE 的 8 个理由](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/rh_003499_01_linux31x_cc.png?itok=1HXbvw2E "8 reasons to use LXDE")
+
+> 图片提供:opensource.com
+
+去年年底,升级到 Fedora 25 所装的新版本 [KDE][7] Plasma 给我带来了严重问题,让我难以完成任何工作。出于两个原因我决定尝试其它 Linux 桌面环境。第一,我需要完成我的工作。第二,一心使用 KDE 已经有很多年,我认为是时候尝试一些不同的桌面了。
+
+我第一个尝试了几周的替代桌面是 [Cinnamon][8],我在 1 月份介绍过它。这次我已经使用了 LXDE(轻量级 X11 桌面环境(Lightweight X11 Desktop Environment))大概 6 周,我发现它有很多我喜欢的东西。这是我使用 LXDE 的 8 个理由。
+
+更多 Linux 相关资源
+
+* [Linux 是什么?][1]
+* [Linux 容器是什么?][2]
+* [在 Linux 中管理设备][3]
+* [马上下载:Linux 命令速查表][4]
+* [我们最新的 Linux 文章][5]
+
+### 1、 LXDE 支持多个面板
+
+和 KDE 以及 Cinnamon 一样,LXDE 支持包括系统菜单、应用启动器的面板,以及显示正在运行应用图标的任务栏。我第一次登录到 LXDE 时,面板的配置看起来异常熟悉。LDXE 看起来已经根据我的 KDE 配置情况为我准备好了喜欢的顶部和底部面板,并包括了系统托盘设置。顶部面板上的应用程序启动器看似来自 Cinnamon 。面板上的东西使得启动和管理程序变得容易。默认情况下,只在桌面底部有一个面板。
+
+ ![打开了 Openbox Configuration Manager 的 LXDE 桌面。](https://opensource.com/sites/default/files/lxde-openboxconfigurationmanager.png "打开了 Openbox Configuration Manager 的 LXDE 桌面。")
+
+*打开了 Openbox 配置管理器的 LXDE 桌面。这个桌面还没有更改过,因此它使用了默认的颜色和图标主题。*
+
+### 2、 Openbox 配置管理器提供了一个用于管理和体验桌面外观的简单工具。
+
+它为主题、窗口修饰、多个显示器的窗口行为、移动和调整窗口大小、鼠标控制、多桌面等提供了选项。虽然这看起来似乎很多,但它远不如配置 KDE 桌面那么复杂,尽管如此 Openbox 仍然提供了绝佳的效果。
+
+### 3、 LXDE 有一个强大的菜单工具
+
+在桌面偏好(Desktop Preference)菜单的高级(Advanced)标签页有个有趣的选项。这个选项的名称是 “点击桌面时显示窗口管理器提供的菜单(Show menus provided by window managers when desktop is clicked)”。选中这个复选框,当你右击桌面时,会显示 Openbox 桌面菜单,而不是标准的 LXDE 桌面菜单。
+
+Openbox 桌面菜单包括了几乎每个你可能想要的菜单选项,所有都可从桌面便捷访问。它包括了所有的应用程序菜单、系统管理、以及首选项。它甚至有一个菜单包括了所有已安装的终端模拟器应用程序的列表,因此系统管理员可以轻易地启动他们喜欢的终端。
+
+### 4、 LXDE 桌面的设计干净简单
+
+它没有任何会妨碍你完成工作的东西。尽管你可以添加一些文件、目录、应用程序的链接到桌面,但是没有可以添加到桌面的小部件。在我的 KDE 和 Cinnamon 桌面上我确实喜欢一些小部件,但它们很容易被覆盖住,然后我就需要移动或者最小化窗口,或者使用 “显示桌面(Show Desktop)” 按钮清空整个桌面才能看到它们。 LXDE 确实有一个 “图标化所有窗口(Iconify all windows)” 按钮,但我很少需要使用它,除非我想看我的壁纸。
+
+### 5、 LXDE 有一个强大的文件管理器
+
+LXDE 默认的文件管理器是 PCManFM,因此在我使用 LXDE 的时候它成为了我的文件管理器。PCManFM 非常灵活、可以配置为适用于大部分人和场景。它看起来没有我常用的文件管理器 Krusader 那么可配置,但我确实喜欢 Krusader 所没有的 PCManFM 侧边栏。
+
+PCManFM 允许打开多个标签页,可以通过右击侧边栏的任何条目或者单击图标栏的新标签图标打开。PCManFM 窗口左边的位置(Places)面板显示了应用程序菜单,你可以从 PCManFM 启动应用程序。位置(Places)面板上面也显示了一个设备图标,可以用于查看你的物理存储设备,一系列带按钮的可移除设备允许你挂载和卸载它们,还有可以便捷访问的主目录、桌面、回收站。位置(Places)面板的底部包括一些默认目录的快捷方式,例如 Documents、Music、Pictures、Videos 以及 Downloads。你也可以拖拽其它目录到位置(Places)面板的快捷方式部分。位置(Places) 面板可以换为正常的目录树。
+
+### 6、 如果在现有窗口后面打开,新窗口的标题栏会闪烁
+
+这是一个在大量现有窗口中定位新窗口的好方法。
+
+### 7、 大部分现代桌面环境允许多个桌面,LXDE 也不例外
+
+我喜欢使用一个桌面用于我的开发、测试以及编辑工作,另一个桌面用于普通任务,例如电子邮件和网页浏览。LXDE 默认提供两个桌面,但你可以配置为只有一个或者多个。右击桌面切换器(Desktop Pager)配置它。
+
+通过一些粗暴但没有破坏性的测试,我发现允许的最大桌面数目是 100。我还发现,当我把桌面数目减少到低于我实际正在使用的 3 个时,不活动桌面上的窗口会被移动到桌面 1。多么有趣的发现!
+
+### 8、 Xfce 电源管理器是一个小巧但强大的应用程序,它允许你配置电源管理如何工作
+
+它提供了一个标签页用于通用配置,以及用于系统、显示和设备的标签页。设备标签页显示了我系统上已有设备的表格,例如电池供电的鼠标、键盘,甚至我的 UPS(不间断电源)。它显示了每个设备的详细信息,包括厂商和系列号,如果可用的话,还有电池充电状态。当我写这篇博客的时候,我 UPS 的电量是 100%,而我罗技鼠标的电量是 75%。 Xfce 电源管理器还在系统托盘显示了一个图标,因此你可以从那里快速了解你设备的电池状态。
+
+关于 LXDE 桌面还有很多我喜欢的东西,但这些是最吸引我注意力的,它们也是我使用现代图形用户界面工作时非常重要、不可或缺的东西。
+
+我注意到奇怪的一点是,我一直没有弄明白桌面(Openbox)菜单的 “重新配置(Reconfigure)” 选项是干什么的。我点击了几次,从没有注意到任何活动迹象表明该选项实际起了作用。
+
+我发现 LXDE 是一个简单但强大的桌面。我享受使用它写这篇文章的几周时间。通过允许我访问我想要的应用程序和文件,同时在其余时间保持不会让我分神,LXDE 使我得以高效地工作。我也没有遇到任何妨碍我完成工作的问题——当然,除了我用于探索这个好桌面所花的时间。我非常推荐 LXDE 桌面。
+
+我现在正在试用 GNOME 3 和 GNOME Shell,并将在下一期中报告。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+David Both 是一个 Linux 和开源倡导者,他居住在北卡罗莱纳州的 Raleigh。他在 IT 行业已经超过 40 年,在他工作的 IBM 公司教授 OS/2 超过 20 年,他在 1981 年为最早的 IBM PC 写了第一个培训课程。他教过 Red Hat 的 RHCE 课程,曾在 MCI Worldcom、Cisco 和北卡罗莱纳州工作过。他一直在使用 Linux 和开源软件近 20 年。
+
+--------------------------------------
+
+via: https://opensource.com/article/17/3/8-reasons-use-lxde
+
+作者:[David Both][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/dboth
+[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
+[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
+[3]:https://opensource.com/article/16/11/managing-devices-linux?src=linux_resource_menu
+[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
+[5]:https://opensource.com/tags/linux?src=linux_resource_menu
+[6]:https://opensource.com/article/17/3/8-reasons-use-lxde?rate=QigvkBy_9zLvktdsL-QaIWedjIqjtlwwJIVFQDQzsSY
+[7]:https://opensource.com/life/15/4/9-reasons-to-use-kde
+[8]:https://opensource.com/article/17/1/cinnamon-desktop-environment
+[9]:https://opensource.com/user/14106/feed
+[10]:https://opensource.com/article/17/3/8-reasons-use-lxde#comments
+[11]:https://opensource.com/users/dboth
diff --git a/translated/tech/20170309 How to open port on AWS EC2 Linux server.md b/published/201704/20170309 How to open port on AWS EC2 Linux server.md
similarity index 85%
rename from translated/tech/20170309 How to open port on AWS EC2 Linux server.md
rename to published/201704/20170309 How to open port on AWS EC2 Linux server.md
index 8758e96f8b..3cc74f456e 100644
--- a/translated/tech/20170309 How to open port on AWS EC2 Linux server.md
+++ b/published/201704/20170309 How to open port on AWS EC2 Linux server.md
@@ -1,4 +1,4 @@
-如何在 AWS EC2 的 Linux 服务器上打开端口
+如何在 AWS EC2 的 Linux 服务器上开放一个端口
============================================================
_这是一篇用屏幕截图解释如何在 AWS EC2 Linux 服务器上打开端口的教程。它能帮助你管理 EC2 服务器上特定端口的服务。_
@@ -9,13 +9,13 @@ AWS(即 Amazon Web Services)不是 IT 世界中的新术语了。它是亚
AWS 提供服务器计算作为他们的服务之一,他们称之为 EC(弹性计算)。使用它可以构建我们的 Linux 服务器。我们已经看到了[如何在 AWS 上设置免费的 Linux 服务器][11]了。
-默认情况下,所有基于 EC2 的 Linux 服务器都只打开 22 端口,即 SSH 服务端口(所有 IP 的入站)。因此,如果你托管了任何特定端口的服务,则要为你的服务器在 AWS 防火墙上打开相应端口。
+默认情况下,所有基于 EC2 的 Linux 服务器都只打开 22 端口,即 SSH 服务端口(允许所有 IP 的入站连接)。因此,如果你托管了任何特定端口的服务,则要为你的服务器在 AWS 防火墙上打开相应端口。
-同样它的 1 到 65535 的端口是打开的(所有出站流量)。如果你想改变这个,你可以使用下面的方法编辑出站规则。
+同样它的 1 到 65535 的端口是打开的(对于所有出站流量)。如果你想改变这个,你可以使用下面的方法编辑出站规则。
在 AWS 上为你的服务器设置防火墙规则很容易。你能够在几秒钟内为你的服务器打开端口。我将用截图指导你如何打开 EC2 服务器的端口。
- _步骤 1 :_
+### 步骤 1 :
登录 AWS 帐户并进入 **EC2 管理控制台**。进入“网络及安全”菜单下的**安全组**,如下高亮显示:
@@ -23,9 +23,7 @@ AWS 提供服务器计算作为他们的服务之一,他们称之为 EC(弹
*AWS EC2 管理控制台*
-* * *
-
- _步骤 2 :_
+### 步骤 2 :
在安全组中选择你的 EC2 服务器,并在 **行动** 菜单下选择 **编辑入站规则**。
@@ -33,7 +31,7 @@ AWS 提供服务器计算作为他们的服务之一,他们称之为 EC(弹
*AWS 入站规则菜单*
- _步骤 3:_
+### 步骤 3:
现在你会看到入站规则窗口。你可以在此处添加/编辑/删除入站规则。这有几个如 http、nfs 等列在下拉菜单中,它们可以为你自动填充端口。如果你有自定义服务和端口,你也可以定义它。
@@ -46,15 +44,13 @@ AWS 提供服务器计算作为他们的服务之一,他们称之为 EC(弹
* 类型:http
* 协议:TCP
* 端口范围:80
-* 源:任何来源(打开 80 端口接受来自任何IP(0.0.0.0/0)的请求),我的 IP:那么它会自动填充你当前的公共互联网 IP
+* 源:任何来源:打开 80 端口接受来自“任何IP(0.0.0.0/0)”的请求;我的 IP:那么它会自动填充你当前的公共互联网 IP
-* * *
- _步骤 4:_
+### 步骤 4:
就是这样了。保存完毕后,你的服务器入站 80 端口将会打开!你可以通过 telnet 到 EC2 服务器公共域名的 80 端口来检验(可以在 EC2 服务器详细信息中找到)。
-
你也可以在 [ping.eu][12] 等网站上检验。
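+
+如果你更喜欢命令行,同样的入站规则也可以用 AWS CLI 添加(示例;安全组 ID `sg-0123456789abcdef0` 是假设的占位值,请换成你自己的):
+
+```
+$ aws ec2 authorize-security-group-ingress \
+    --group-id sg-0123456789abcdef0 \
+    --protocol tcp --port 80 --cidr 0.0.0.0/0
+```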
* * *
@@ -65,7 +61,7 @@ AWS 提供服务器计算作为他们的服务之一,他们称之为 EC(弹
via: http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/
-作者:[Shrikant Lavhate ][a]
+作者:[Shrikant Lavhate][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
diff --git a/published/201704/20170310 How to install Fedora 25 on your Raspberry Pi.md b/published/201704/20170310 How to install Fedora 25 on your Raspberry Pi.md
new file mode 100644
index 0000000000..69d8d43612
--- /dev/null
+++ b/published/201704/20170310 How to install Fedora 25 on your Raspberry Pi.md
@@ -0,0 +1,128 @@
+如何在树莓派上安装 Fedora 25
+============================================================
+
+> 了解 Fedora 第一个官方支持树莓派的版本
+
+ ![How to install Fedora 25 on your Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/gnome_raspberry_pi_fedora.jpg?itok=Efm6IKxP "How to install Fedora 25 on your Raspberry Pi")
+
+>图片提供 opensource.com
+
+2016 年 10 月,Fedora 25 Beta 发布了,随之而来的还有对 [树莓派 2 和 3 的初步支持][6]。Fedora 25 的最终“通用”版在一个月后发布,从那时起,我一直在树莓派上尝试不同的 Fedora spins。
+
+这篇文章不仅是一篇树莓派 3 上的 Fedora 25 的点评,还集合了技巧、截图以及我对 Fedora 第一个官方支持 Pi 的这个版本的一些个人看法。
+
+在我开始之前,需要说一下的是,为写这篇文章所做的所有工作都是在我的运行 Fedora 25 的个人笔记本电脑上完成的。我使用一张 microSD 插到 SD 适配器中,复制和编辑所有的 Fedora 镜像到 32GB 的 microSD 卡中,然后用它在一台三星电视上启动了树莓派 3。 因为 Fedora 25 尚不支持内置 Wi-Fi,所以树莓派 3 使用了以太网线缆进行网络连接。最后,我使用了 Logitech K410 无线键盘和触摸板进行输入。
+
+如果你没有条件使用以太网线连接在你的树莓派上玩 Fedora 25,我曾经用过一个 Edimax Wi-Fi USB 适配器,它也可以在 Fedora 25 上工作,但在本文中,我只使用了以太网连接。
+
+### 在树莓派上安装 Fedora 25 之前
+
+阅读 Fedora 项目 wiki 上的[树莓派支持文档][7]。你可以从 wiki 下载 Fedora 25 安装所需的镜像,那里还列出了所有支持和不支持的内容。
+
+此外,请注意,这是初始支持版本,还有许多新的工作和支持将随着 Fedora 26 的发布而出现,所以请随时报告 bug,并通过 [Bugzilla][8]、Fedora 的 [ARM 邮件列表][9]、或者 Freenode IRC 频道#fedora-arm,分享你在树莓派上使用 Fedora 25 的体验反馈。
+
+### 安装
+
+我下载并安装了五个不同的 Fedora 25 spin:GNOME(默认工作站)、KDE、Minimal、LXDE 和 Xfce。在多数情况下,它们都有一致且易于遵循的安装步骤,能确保在我的树莓派 3 上正常启动。有的 spin 存在仍在解决中的已知 bug,有的则按照 Fedora wiki 上的标准操作步骤即可。
+
+![GNOME on Raspberry Pi](https://opensource.com/sites/default/files/gnome_on_rpi.png "GNOME on Raspberry Pi")
+
+*树莓派 3 上的 Fedora 25 workstation、 GNOME 版本*
+
+### 安装步骤
+
+1、 在你的笔记本上,从支持文档页面的链接下载一个树莓派的 Fedora 25 镜像。
+
+2、 在笔记本上,使用 `fedora-arm-installer` 或下述命令行将镜像复制到 microSD:
+
+```
+xzcat Fedora-Workstation-armhfp-25-1.3-sda.raw.xz | dd bs=4M status=progress of=/dev/mmcblk0
+```
+
+注意:`/dev/mmcblk0` 是我的 microSD 插到 SD 适配器后,在我的笔记本电脑上挂载的设备名。虽然我在笔记本上使用 Fedora,可以使用 `fedora-arm-installer`,但我还是喜欢命令行。
+
+3、 复制完镜像后,_先不要启动你的系统_。我知道你很想这么做,但你仍然需要进行几个调整。
+
+4、 为了使镜像文件尽可能小以便下载,镜像上的根文件系统是很小的,因此你必须增加根文件系统的大小。如果你不这么做,你仍然可以启动你的派,但如果你一旦运行 `dnf update` 来升级你的系统,它就会填满文件系统,导致糟糕的事情发生,所以趁着 microSD 还在你的笔记本上进行分区:
+
+```
+growpart /dev/mmcblk0 4
+resize2fs /dev/mmcblk0p4
+```
+
+注意:在 Fedora 中,`growpart` 命令由 `cloud-utils-growpart.noarch` 这个 RPM 提供的。
+
+5、 文件系统更新后,你需要将 `vc4` 模块列入黑名单。[更多有关此 bug 的信息在此。][10]
+
+我建议在启动树莓派之前这样做,因为不同的 spin 有不同的表现方式。例如,(至少对我来说)在没有将 `vc4` 加入黑名单的情况下,GNOME 在我第一次启动后可以正常显示,但在系统更新后就不再出现了;KDE spin 则在第一次启动时就根本不会显示。因此我们可能需要在第一次启动之前就将 `vc4` 加入黑名单,直到以后这个 bug 被解决为止。
+
+黑名单应该出现在两个不同的地方。首先,在你的 microSD 根分区上,在 `etc/modprobe.d/` 下创建一个 `vc4.conf`,内容是:`blacklist vc4`。第二,在你的 microSD 启动分区,将 `rd.driver.blacklist=vc4` 添加到 `extlinux/extlinux.conf` 中内核参数(`append`)行的末尾。
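+
+如果 microSD 的两个分区已经挂载在笔记本上,这两处修改大致可以这样完成(示意;挂载点 `/mnt/root` 和 `/mnt/boot` 是假设值):
+
+```
+$ echo "blacklist vc4" | sudo tee /mnt/root/etc/modprobe.d/vc4.conf
+$ sudo sed -i '/^\s*append/ s/$/ rd.driver.blacklist=vc4/' /mnt/boot/extlinux/extlinux.conf
+```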
+
+6、 现在,你可以启动你的树莓派了。
+
+### 启动
+
+你要有耐心,特别是对于 GNOME 和 KDE 发行版来说。在 SSD(固态驱动器)几乎即时启动的时代,你很容易就对派的启动速度感到不耐烦,特别是第一次启动时。在第一次启动 Window Manager 之前,会先弹出一个初始配置页面,可以配置 root 密码、常规用户、时区和网络。配置完毕后,你就应该能够 SSH 到你的树莓派上,方便地调试显示问题了。
+
+### 系统更新
+
+在树莓派上运行 Fedora 25 后,你最终(或立即)会想要更新系统。
+
+首先,进行内核升级时,先熟悉你的 `/boot/extlinux/extlinux.conf` 文件。如果升级内核,下次启动时,除非手动选择正确的内核,否则很可能会启动进入救援( Rescue )模式。避免这种情况发生最好的方法是,在你的 `extlinux.conf` 中将定义 Rescue 镜像的那五行移动到文件的底部,这样最新的内核将在下次自动启动。你可以直接在派上或通过在笔记本挂载来编辑 `/boot/extlinux/extlinux.conf`:
+```
+label Fedora 25 Rescue fdcb76d0032447209f782a184f35eebc (4.9.9-200.fc25.armv7hl)
+ kernel /vmlinuz-0-rescue-fdcb76d0032447209f782a184f35eebc
+ append ro root=UUID=c19816a7-cbb8-4cbb-8608-7fec6d4994d0 rd.driver.blacklist=vc4
+ fdtdir /dtb-4.9.9-200.fc25.armv7hl/
+ initrd /initramfs-0-rescue-fdcb76d0032447209f782a184f35eebc.img
+```
+
+第二点,无论出于什么原因,如果你的显示器在升级后再次变黑,并且你确定已经将 `vc4` 加入黑名单,请运行 `lsmod | grep vc4` 检查模块是否真的没有加载。你也可以先启动到多用户模式而不是图形模式,再从命令行运行 `startx`。请阅读 `/etc/inittab` 中的内容,了解如何切换 target 的说明。
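+
+在使用 systemd 的 Fedora 25 上,切换默认启动 target 大致如下(示意):
+
+```
+# systemctl set-default multi-user.target   # 下次启动进入多用户文本模式
+# systemctl set-default graphical.target    # 排查完毕后恢复图形模式
+```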
+
+ ![KDE on Raspberry Pi 3](https://opensource.com/sites/default/files/kde_on_rpi.png "KDE on Raspberry Pi 3")
+
+*树莓派 3 上的 Fedora 25 workstation、 KDE 版本*
+
+### Fedora Spin
+
+在我尝试过的所有 Fedora Spin 中,唯一有问题的是 XFCE spin,我相信这是由于这个[已知的 bug][11] 导致的。
+
+按照我在这里分享的步骤操作,GNOME、KDE、LXDE 和 minimal 都运行得很好。考虑到 KDE 和 GNOME 会占用更多资源,我会推荐想要在树莓派上使用 Fedora 25 的人使用 LXDE 和 Minimal。如果你是一位系统管理员,想要一台支持 SELinux 的廉价服务器来满足安全需求,只是想把树莓派用作服务器,有开放的 22 端口和可用的 vi 就够了,那就用 Minimal 版本。对于开发人员或刚开始学习 Linux 的人来说,LXDE 可能是更好的方式,因为它可以快速方便地访问所有基于 GUI 的工具,如浏览器、IDE 和你可能需要的客户端。
+
+ ![LXDE on Raspberry Pi 3](https://opensource.com/sites/default/files/lxde_on_rpi.png "LXDE on Raspberry Pi 3")
+
+*树莓派 3 上的 Fedora 25 workstation、LXDE。*
+
+看到越来越多的 Linux 发行版在基于 ARM 的树莓派上可用,那真是太棒了。对于其第一个支持的版本,Fedora 团队为日常 Linux 用户提供了更好的体验。我很期待 Fedora 26 的改进和 bug 修复。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Anderson Silva - Anderson 于 1996 年开始使用 Linux。更精确地说是 Red Hat Linux。 2007 年,他作为 IT 部门的发布工程师时加入红帽,他的职业梦想成为了现实。此后,他在红帽担任过多个不同角色,从发布工程师到系统管理员、高级经理和信息系统工程师。他是一名 RHCE 和 RHCA 以及一名活跃的 Fedora 包维护者。
+
+----------------
+
+via: https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi
+
+作者:[Anderson Silva][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[jasminepeng](https://github.com/jasminepeng)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/ansilva
+[1]:https://opensource.com/tags/raspberry-pi?src=raspberry_pi_resource_menu
+[2]:https://opensource.com/resources/what-raspberry-pi?src=raspberry_pi_resource_menu
+[3]:https://opensource.com/article/16/12/getting-started-raspberry-pi?src=raspberry_pi_resource_menu
+[4]:https://opensource.com/article/17/2/raspberry-pi-submit-your-article?src=raspberry_pi_resource_menu
+[5]:https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi?rate=gIIRltTrnOlwo4h81uDvdAjAE3V2rnwoqH0s_Dx44mE
+[6]:https://fedoramagazine.org/raspberry-pi-support-fedora-25-beta/
+[7]:https://fedoraproject.org/wiki/Raspberry_Pi
+[8]:https://bugzilla.redhat.com/show_bug.cgi?id=245418
+[9]:https://lists.fedoraproject.org/admin/lists/arm%40lists.fedoraproject.org/
+[10]:https://bugzilla.redhat.com/show_bug.cgi?id=1387733
+[11]:https://bugzilla.redhat.com/show_bug.cgi?id=1389163
+[12]:https://opensource.com/user/26502/feed
+[13]:https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi#comments
+[14]:https://opensource.com/users/ansilva
diff --git a/published/20170316 What is Linux VPS Hosting.md b/published/201704/20170316 What is Linux VPS Hosting.md
similarity index 100%
rename from published/20170316 What is Linux VPS Hosting.md
rename to published/201704/20170316 What is Linux VPS Hosting.md
diff --git a/translated/tech/20170319 How to Add a New Disk to an Existing Linux Server.md b/published/201704/20170319 How to Add a New Disk to an Existing Linux Server.md
similarity index 83%
rename from translated/tech/20170319 How to Add a New Disk to an Existing Linux Server.md
rename to published/201704/20170319 How to Add a New Disk to an Existing Linux Server.md
index 69c023deac..fe06f012b4 100644
--- a/translated/tech/20170319 How to Add a New Disk to an Existing Linux Server.md
+++ b/published/201704/20170319 How to Add a New Disk to an Existing Linux Server.md
@@ -1,20 +1,19 @@
如何在现有的 Linux 系统上添加新的磁盘
============================================================
+作为一个系统管理员,我们会有这样的一些需求:作为升级服务器容量的一部分,或者有时出现磁盘故障时更换磁盘,我们需要将新的硬盘配置到现有服务器。
-作为一个系统管理员,我们会有这样的一些需求:作为升级服务器容量的一部分、或者有时出现磁盘故障时更换磁盘,我们需要将新的硬盘配置到现有服务器。
+在这篇文章中,我会向你逐步介绍添加新硬盘到现有 **RHEL/CentOS** 或者 **Debian/Ubuntu Linux** 系统的步骤。
-在这篇文章中,我会向你逐步介绍添加新硬盘到现有 RHEL/CentOS 或者 Debian/Ubuntu Linux 系统的步骤。
-
-**推荐阅读:** [如何将超过 2TB 的新硬盘添加到现有 Linux][1]
+**推荐阅读:** [如何将超过 2TB 的新硬盘添加到现有 Linux][1]。
重要:请注意这篇文章的目的只是告诉你如何创建新的分区,而不包括分区扩展或者其它选项。
我使用 [fdisk 工具][2] 完成这些配置。
-我已经添加了一块 20GB 容量的硬盘,挂载到了 `/data` 分区。
+我已经添加了一块 **20GB** 容量的硬盘,挂载到了 `/data` 分区。
-fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行工具。
+`fdisk` 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行工具。
```
# fdisk -l
@@ -26,7 +25,7 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
![查看 Linux 分区详情](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Partition-Details.png)
][3]
-查看 Linux 分区详情
+*查看 Linux 分区详情*
添加了 20GB 容量的硬盘后,`fdisk -l` 的输出像下面这样。
@@ -37,9 +36,9 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
![查看新分区详情](http://www.tecmint.com/wp-content/uploads/2017/03/Find-New-Partition-Details.png)
][4]
-查看新分区详情
+*查看新分区详情*
-新添加的磁盘显示为 `/dev/xvdc`。如果我们添加的是物理磁盘,基于磁盘类型它会显示类似 `/dev/sda`。这里我使用的是虚拟磁盘。
+新添加的磁盘显示为 `/dev/xvdc`。如果我们添加的是物理磁盘,基于磁盘类型它会显示为类似 `/dev/sda`。这里我使用的是虚拟磁盘。
要在特定硬盘上分区,例如 `/dev/xvdc`。
@@ -47,7 +46,7 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
# fdisk /dev/xvdc
```
-常用 fdisk 命令。
+常用的 fdisk 命令。
* `n` - 创建分区
* `p` - 打印分区表
@@ -61,7 +60,7 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
![在 Linux 上创建新分区](http://www.tecmint.com/wp-content/uploads/2017/03/Create-New-Partition-in-Linux.png)
][5]
-在 Linux上创建新分区
+*在 Linux 上创建新分区*
创建主分区或者扩展分区。默认情况下我们最多可以有 4 个主分区。
@@ -69,7 +68,7 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
![创建主分区](http://www.tecmint.com/wp-content/uploads/2017/03/Create-Primary-Partition.png)
][6]
-创建主分区
+*创建主分区*
按需求输入分区编号。推荐使用默认的值 `1`。
@@ -77,7 +76,7 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
![分配分区编号](http://www.tecmint.com/wp-content/uploads/2017/03/Assign-a-Partition-Number.png)
][7]
-分配分区编号
+*分配分区编号*
输入第一个扇区的大小。如果是一个新的磁盘,通常选择默认值。如果你是在同一个磁盘上创建第二个分区,我们需要在前一个分区的最后一个扇区的基础上加 `1`。
@@ -85,7 +84,7 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
![为分区分配扇区](http://www.tecmint.com/wp-content/uploads/2017/03/Assign-Sector-to-Partition.png)
][8]
-为分区分配扇区
+*为分区分配扇区*
输入最后一个扇区或者分区大小的值。通常推荐输入分区的大小。总是添加前缀 `+` 以防止值超出范围错误。
@@ -93,7 +92,7 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
![分配分区大小](http://www.tecmint.com/wp-content/uploads/2017/03/Assign-Partition-Size.png)
][9]
-分配分区大小
+*分配分区大小*
保存更改并退出。
@@ -101,9 +100,9 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
![保存分区更改](http://www.tecmint.com/wp-content/uploads/2017/03/Save-Partition-Changes.png)
][10]
-保存分区更改
+*保存分区更改*
-现在使用 mkfs 命令格式化磁盘。
+现在使用 **mkfs** 命令格式化磁盘。
```
# mkfs.ext4 /dev/xvdc1
@@ -112,7 +111,7 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
![格式化新分区](http://www.tecmint.com/wp-content/uploads/2017/03/Format-New-Partition.png)
][11]
-格式化新分区
+*格式化新分区*
格式化完成后,按照下面的命令挂载分区。
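+
+一种可供参考的做法如下(示意;挂载点沿用文中约定的 `/data`,并写入 `/etc/fstab` 以便开机自动挂载):
+
+```
+# mkdir -p /data
+# mount /dev/xvdc1 /data
+# echo "/dev/xvdc1 /data ext4 defaults 0 0" >> /etc/fstab
+```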
@@ -130,7 +129,7 @@ fdisk 是一个在 Linux 系统上用于显示和管理硬盘和分区命令行
现在你知道如何使用 [fdisk 命令][12] 在新磁盘上创建分区并挂载了。
-当处理分区、尤其是编辑已配置磁盘的时候我们需要格外的小心。请分享你的反馈和建议吧。
+当处理分区、尤其是编辑已配置磁盘的时候,我们需要格外的小心。请分享你的反馈和建议吧。
--------------------------------------------------------------------------------
@@ -144,12 +143,12 @@ via: http://www.tecmint.com/add-new-disk-to-an-existing-linux/
作者:[Lakshmi Dhandapani][a]
译者:[ictlyh](https://github.com/ictlyh)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/lakshmi/
-[1]:http://www.tecmint.com/add-disk-larger-than-2tb-to-an-existing-linux/
+[1]:https://linux.cn/article-8398-1.html
[2]:http://www.tecmint.com/fdisk-commands-to-manage-linux-disk-partitions/
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Partition-Details.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-New-Partition-Details.png
diff --git a/published/20170320 Why Go.md b/published/201704/20170320 Why Go.md
similarity index 100%
rename from published/20170320 Why Go.md
rename to published/201704/20170320 Why Go.md
diff --git a/published/20170321 How to Install a DHCP Server in CentOS RHEL and Fedora.md b/published/201704/20170321 How to Install a DHCP Server in CentOS RHEL and Fedora.md
similarity index 100%
rename from published/20170321 How to Install a DHCP Server in CentOS RHEL and Fedora.md
rename to published/201704/20170321 How to Install a DHCP Server in CentOS RHEL and Fedora.md
diff --git a/published/20170324 A formal spec for GitHub Flavored Markdown.md b/published/201704/20170324 A formal spec for GitHub Flavored Markdown.md
similarity index 100%
rename from published/20170324 A formal spec for GitHub Flavored Markdown.md
rename to published/201704/20170324 A formal spec for GitHub Flavored Markdown.md
diff --git a/published/20170327 Using vi-mode in your shell.md b/published/201704/20170327 Using vi-mode in your shell.md
similarity index 100%
rename from published/20170327 Using vi-mode in your shell.md
rename to published/201704/20170327 Using vi-mode in your shell.md
diff --git a/published/20170330 5 open source RSS feed readers.md b/published/201704/20170330 5 open source RSS feed readers.md
similarity index 100%
rename from published/20170330 5 open source RSS feed readers.md
rename to published/201704/20170330 5 open source RSS feed readers.md
diff --git a/published/20170330 How to List Files Installed From a RPM or DEB Package in Linux.md b/published/201704/20170330 How to List Files Installed From a RPM or DEB Package in Linux.md
similarity index 100%
rename from published/20170330 How to List Files Installed From a RPM or DEB Package in Linux.md
rename to published/201704/20170330 How to List Files Installed From a RPM or DEB Package in Linux.md
diff --git a/published/201704/20170331 All You Need To Know About Processes in Linux Comprehensive Guide.md b/published/201704/20170331 All You Need To Know About Processes in Linux Comprehensive Guide.md
new file mode 100644
index 0000000000..3b04ce3334
--- /dev/null
+++ b/published/201704/20170331 All You Need To Know About Processes in Linux Comprehensive Guide.md
@@ -0,0 +1,366 @@
+关于 Linux 进程你所需要知道的一切
+============================================================
+
+在这篇指南中,我们会逐步对进程做基本的了解,然后简要看看如何用特定命令[管理 Linux 进程][9]。
+
+进程(process)是指正在执行的程序,是程序正在运行的一个实例。它由程序指令、从文件或其它程序中读取的数据,以及系统用户的输入组成。
+
+### 进程的类型
+
+在 Linux 中主要有两种类型的进程:
+
+* 前台进程(也称为交互式进程) - 这些进程由终端会话初始化和控制。换句话说,需要有一个连接到系统中的用户来启动这样的进程;它们不是作为系统功能/服务的一部分自动启动。
+* 后台进程(也称为非交互式/自动进程) - 这些进程没有连接到终端;它们不需要任何用户输入。
+
+#### 什么是守护进程(daemon)
+
+这是后台进程的特殊类型,它们在系统启动时启动,并作为服务一直运行;它们不会死亡。它们自发地作为系统任务启动(作为服务运行)。但是,它们能被用户通过 init 进程控制。
+
+[
+ ![Linux 进程状态](http://www.tecmint.com/wp-content/uploads/2017/03/ProcessState.png)
+][10]
+
+*Linux 进程状态*
+
+### 在 Linux 中创建进程
+
+(LCTT 译注:此节原文不确,根据译者理解重新提供)
+
+在 Linux 中创建进程有三种方式:
+
+#### fork() 方式
+
+使用 fork() 函数以父进程为蓝本复制一个进程,其 PID 号与父进程 PID 号不同。在 Linux 环境下,fork() 是以写时复制(copy-on-write)实现的,新的子进程的环境和父进程一样,只有内存与父进程不同,其他与父进程共享,只有在父进程或者子进程进行了修改后,才重新生成一份。
+
+#### system() 方式
+
+system() 函数会调用 `/bin/sh -c command` 来执行特定的命令,并且阻塞当前进程的执行,直到 command 命令执行完毕。新的子进程会有新的 PID。
+
+#### exec() 方式
+
+exec() 方式有若干种不同的函数,与之前的 fork() 和 system() 函数不同,exec() 方式会用新进程代替原有的进程,系统会从新的进程运行,新的进程的 PID 值会与原来的进程的 PID 值相同。
+
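+这三种方式的语义也可以在 bash 中粗略体会一下(仅为示意,并非与 C 函数严格等价):
+
+```
+# 子 shell 相当于一次 fork:子进程的 PID($BASHPID)与父 shell($$)不同
+( echo "child: $BASHPID, parent: $$" )
+
+# system() 相当于调用 /bin/sh -c:当前 shell 阻塞直到命令结束
+bash -c 'echo "run in a new shell"'
+
+# exec 用新程序替换当前进程,PID 保持不变
+bash -c 'echo "PID before exec: $$"; exec sleep 1'
+```
+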
+### Linux 如何识别进程?
+
+由于 Linux 是一个多用户系统,意味着不同的用户可以在系统上运行各种各样的程序,内核必须唯一标识程序运行的每个实例。
+
+程序由它的进程 ID(PID)和它父进程的进程 ID(PPID)识别,因此进程可以被分类为:
+
+* 父进程 - 这些是在运行时创建其它进程的进程。
+* 子进程 - 这些是在运行时由其它进程创建的进程。
+
+#### init 进程
+
+init 进程是系统中所有进程的父进程,它是[启动 Linux 系统][11]后第一个运行的程序;它管理着系统上的所有其它进程。它由内核自身启动,因此理论上说它没有父进程。
+
+init 进程的进程 ID 总是为 1。它是所有孤儿进程的收养父母。(它会收养所有孤儿进程)。
+
+#### 查找进程 ID
+
+你可以用 pidof 命令查找某个进程的进程 ID:
+
+```
+# pidof systemd
+# pidof top
+# pidof httpd
+```
+
+[
+ ![查找 Linux 进程 ID](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Process-ID.png)
+][12]
+
+*查找 Linux 进程 ID*
+
+要查找当前 shell 的进程 ID 以及它父进程的进程 ID,可以运行:
+
+```
+$ echo $$
+$ echo $PPID
+```
+
+[
+ ![查找 Linux 父进程 ID](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Parent-Process-ID.png)
+][13]
+
+*查找 Linux 父进程 ID*
+
+### 在 Linux 中启动进程
+
+每次你运行一个命令或程序(例如 cloudcmd - CloudCommander),它就会在系统中启动一个进程。你可以按照下面的方式启动一个前台(交互式)进程,它会被连接到终端,用户可以发送输入给它:
+
+```
+# cloudcmd
+```
+
+[
+ ![启动 Linux 交互进程](http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Interactive-Process.png)
+][14]
+
+*启动 Linux 交互进程*
+
+#### Linux 后台任务
+
+要在后台(非交互式)启动一个进程,使用 `&` 符号;此时,该进程不会从用户处读取输入,直到它被移到前台。
+
+```
+# cloudcmd &
+# jobs
+```
+
+[
+ ![在后台启动 Linux 进程](http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Process-in-Background.png)
+][15]
+
+*在后台启动 Linux 进程*
+
+你也可以使用 `[Ctrl + Z]` 暂停执行一个程序并把它发送到后台,它会给进程发送 SIGTSTP 信号,从而暂停它的执行;它就会变为空闲:
+
+```
+# tar -cf backup.tar /backups/* #press Ctrl+Z
+# jobs
+```
+
+要在后台继续运行上面被暂停的命令,使用 `bg` 命令:
+
+```
+# bg
+```
+
+要把后台进程发送到前台,使用 `fg` 命令以及任务的 ID,类似:
+
+```
+# jobs
+# fg %1
+```
+
+[
+ ![Linux 后台进程任务](http://www.tecmint.com/wp-content/uploads/2017/03/Linux-Background-Process-Jobs.png)
+][16]
+
+*Linux 后台进程任务*
+
+你也可能想要阅读:[如何在后台启动 Linux 命令以及在终端分离(Detach)进程][17]
+
+### Linux 中进程的状态
+
+在执行过程中,一个进程会依据其所处的环境从一个状态转变到另一个状态。在 Linux 中,一个进程有下面的可能状态:
+
+* Running - 此时它正在运行(它是系统中的当前进程)或准备运行(它正在等待分配 CPU 单元)。
+* Waiting - 在这个状态,进程正在等待某个事件的发生或者系统资源。另外,内核也会区分两种不同类型的等待进程;可中断等待进程(interruptible waiting processes) - 可以被信号中断,以及不可中断等待进程(uninterruptible waiting processes)- 正在等待硬件条件,不能被任何事件/信号中断。
+* Stopped - 在这个状态,进程已经被停止了,通常是由于收到了一个信号。例如,正在被调试的进程。
+* Zombie - 该进程已经死亡,它已经停止了但是进程表(process table)中仍然有它的条目。
+
+### 如何在 Linux 中查看活跃进程
+
+有很多 Linux 工具可以用于查看/列出系统中正在运行的进程,两个传统众所周知的是 [ps][18] 和 [top][19] 命令:
+
+#### 1. ps 命令
+
+它显示被选中的系统中活跃进程的信息,如下图所示:
+
+```
+# ps
+# ps -e | head
+```
+
+[
+ ![列出 Linux 活跃进程](http://www.tecmint.com/wp-content/uploads/2017/03/ps-command.png)
+][20]
+
+*列出 Linux 活跃进程*
+
+#### 2. top - 系统监控工具
+
+[top 是一个强大的工具][21],它能给你提供 [运行系统的动态实时视图][22],如下面截图所示:
+
+```
+# top
+```
+
+[
+ ![列出 Linux 正在运行的程序](http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png)
+][23]
+
+*列出 Linux 正在运行的程序*
+
+阅读这篇文章获取更多 top 使用事例:[Linux 中 12 个 top 命令事例][24]
+
+#### 3. glances - 系统监控工具
+
+glances 是一个相对比较新的系统监控工具,它有一些比较高级的功能:
+
+```
+# glances
+```
+
+[
+ ![Glances - Linux 进程监控](http://www.tecmint.com/wp-content/uploads/2017/03/glances.png)
+][25]
+
+*Glances – Linux 进程监控*
+
+要获取完整使用指南,请阅读:[Glances - Linux 的一个高级实时系统监控工具][26]
+
+还有很多你可以用来列出活跃进程的其它有用的 Linux 系统监视工具,打开下面的链接了解更多关于它们的信息:
+
+1. [监控 Linux 性能的 20 个命令行工具][1]
+2. [13 个有用的 Linux 监控工具][2]
+
+### 如何在 Linux 中控制进程
+
+Linux 也有一些命令用于控制进程,例如 `kill`、`pkill`、`pgrep` 和 `killall`,下面是一些如何使用它们的基本事例:
+
+```
+$ pgrep -u tecmint top
+$ kill 2308
+$ pgrep -u tecmint top
+$ pgrep -u tecmint glances
+$ pkill glances
+$ pgrep -u tecmint glances
+```
+
+[
+ ![控制 Linux 进程](http://www.tecmint.com/wp-content/uploads/2017/03/Control-Linux-Processes.png)
+][27]
+
+*控制 Linux 进程*
+
+想要深入了解如何使用这些命令,在 Linux 中杀死/终止活跃进程,可以点击下面的链接:
+
+1. [终止 Linux 进程的 Kill、Pkill 和 Killall 命令指南][3]
+2. [如何在 Linux 中查找并杀死进程][4]
+
+注意当你系统僵死(freeze)时你可以使用它们杀死 [Linux 中的不响应程序][28]。
+
+#### 给进程发送信号
+
+Linux 中控制进程的基本方法是给它们发送信号。你可以发送很多信号给一个进程,运行下面的命令可以查看所有信号:
+
+```
+$ kill -l
+```
+[
+ ![列出所有 Linux 信号](http://www.tecmint.com/wp-content/uploads/2017/03/list-all-signals.png)
+][29]
+
+*列出所有 Linux 信号*
+
+要给一个进程发送信号,可以使用我们之前提到的 `kill`、`pkill` 或 `pgrep` 命令。但只有被编程为能识别这些信号时程序才能响应这些信号。
+
+大部分信号都是系统内部使用,或者给程序员编写代码时使用。下面是一些对系统用户非常有用的信号:
+
+* SIGHUP 1 - 当控制它的终端被关闭时给进程发送该信号。
+* SIGINT 2 - 当用户使用 `[Ctrl+C]` 中断进程时控制它的终端给进程发送这个信号。
+* SIGQUIT 3 - 当用户发送退出信号 `[Ctrl+D]` 时给进程发送该信号。
+* SIGKILL 9 - 这个信号会马上中断(杀死)进程,进程不会进行清理操作。
+* SIGTERM 15 - 这是一个程序终止信号(kill 默认发送这个信号)。
+* SIGTSTP 20 - 它的控制终端发送这个信号给进程要求它停止(终端停止);通过用户按 `[Ctrl+Z]` 触发。
+
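+顺带一提,shell 脚本可以用 `trap` 捕获其中一部分信号并做清理(示例脚本):
+
+```
+#!/bin/bash
+# 捕获 SIGINT(Ctrl+C)与 SIGTERM,打印提示后退出
+trap 'echo "caught SIGINT"; exit 1' INT
+trap 'echo "caught SIGTERM"; exit 1' TERM
+while true; do sleep 1; done
+```
+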
+下面是当 Firefox 应用程序僵死时通过它的 PID 杀死它的 kill 命令事例:
+
+```
+$ pidof firefox
+$ kill -9 2687
+或
+$ kill -KILL 2687
+或
+$ kill -SIGKILL 2687
+```
+
+使用它的名称杀死应用,可以像下面这样使用 pkill 或 killall:
+
+```
+$ pkill firefox
+$ killall firefox
+```
+
+#### 更改 Linux 进程优先级
+
+在 Linux 系统中,所有活跃进程都有一个优先级以及 nice 值。优先级较高的进程一般会获得更多的 CPU 时间。
+
+但是,有 root 权限的系统用户可以使用 `nice` 和 `renice` 命令影响(更改)优先级。
+
+在 top 命令的输出中, NI 显示了进程的 nice 值:
+
+```
+$ top
+```
+
+[
+ ![列出 Linux 正在运行的进程](http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png)
+][30]
+
+*列出 Linux 正在运行的进程*
+
+使用 `nice` 命令为一个进程设置 nice 值。记住一个普通用户只能给他拥有的进程设置 0 到 19 的(正)nice 值。
+
+只有 root 用户可以使用负的 nice 值。
+
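+例如,可以像下面这样用 `nice` 以指定的 nice 值启动命令(命令本身只是示例):
+
+```
+$ nice -n 10 tar -czf /tmp/backup.tar.gz /home    # 普通用户:以较低优先级运行
+# nice -n -5 top                                   # 负的 nice 值需要 root 权限
+```
+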
+要重新设置一个进程的优先级,像下面这样使用 `renice` 命令:
+
+```
+$ renice +8 2687
+$ renice +8 2103
+```
+
+阅读我们其它如何管理和控制 Linux 进程的有用文章。
+
+1. [Linux 进程管理:启动、停止以及中间过程][5]
+2. [使用 ‘top’ 命令 Batch 模式查找内存使用最高的 15 个进程][6]
+3. [在 Linux 中查找内存和 CPU 使用率最高的进程][7]
+4. [在 Linux 中如何使用进程 ID 查找进程名称][8]
+
+就是这些!如果你有任何问题或者想法,通过下面的反馈框和我们分享吧。
+
+--------------------------------------------------------------------------------
+
+
+作者简介:
+
+Aaron Kili 是一个 Linux 和 F.O.S.S(Free and Open-Source Software) 爱好者,一个 Linux 系统管理员、web 开发员,现在也是 TecMint 的内容创建者,他喜欢和电脑一起工作,他相信知识共享。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/linux-process-management/
+
+作者:[Aaron Kili][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/aaronkili/
+
+[1]:http://www.tecmint.com/command-line-tools-to-monitor-linux-performance/
+[2]:http://www.tecmint.com/linux-performance-monitoring-tools/
+[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/
+[4]:http://www.tecmint.com/find-and-kill-running-processes-pid-in-linux/
+[5]:http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/
+[6]:http://www.tecmint.com/find-processes-by-memory-usage-top-batch-mode/
+[7]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
+[8]:http://www.tecmint.com/find-process-name-pid-number-linux/
+[9]:http://www.tecmint.com/dstat-monitor-linux-server-performance-process-memory-network/
+[10]:http://www.tecmint.com/wp-content/uploads/2017/03/ProcessState.png
+[11]:http://www.tecmint.com/linux-boot-process/
+[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Process-ID.png
+[13]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Parent-Process-ID.png
+[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Interactive-Process.png
+[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Process-in-Background.png
+[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Linux-Background-Process-Jobs.png
+[17]:http://www.tecmint.com/run-linux-command-process-in-background-detach-process/
+[18]:http://www.tecmint.com/linux-boot-process-and-manage-services/
+[19]:http://www.tecmint.com/12-top-command-examples-in-linux/
+[20]:http://www.tecmint.com/wp-content/uploads/2017/03/ps-command.png
+[21]:http://www.tecmint.com/12-top-command-examples-in-linux/
+[22]:http://www.tecmint.com/bcc-best-linux-performance-monitoring-tools/
+[23]:http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png
+[24]:http://www.tecmint.com/12-top-command-examples-in-linux/
+[25]:http://www.tecmint.com/wp-content/uploads/2017/03/glances.png
+[26]:http://www.tecmint.com/glances-an-advanced-real-time-system-monitoring-tool-for-linux/
+[27]:http://www.tecmint.com/wp-content/uploads/2017/03/Control-Linux-Processes.png
+[28]:http://www.tecmint.com/kill-processes-unresponsive-programs-in-ubuntu/
+[29]:http://www.tecmint.com/wp-content/uploads/2017/03/list-all-signals.png
+[30]:http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png
+[31]:http://www.tecmint.com/author/aaronkili/
+[32]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
+[33]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/published/201704/20170403 Yes Python is Slow and I Dont Care.md b/published/201704/20170403 Yes Python is Slow and I Dont Care.md
new file mode 100644
index 0000000000..2eda47b8bd
--- /dev/null
+++ b/published/201704/20170403 Yes Python is Slow and I Dont Care.md
@@ -0,0 +1,172 @@
+Python 是慢,但我无所谓
+=====================================
+
+> 为牺牲性能追求生产率而呐喊
+
+![](https://cdn-images-1.medium.com/max/800/0*pWAgROZ2JbYzlDgj.jpg)
+
+让我从关于 Python 中的 asyncio 这个标准库的讨论中休息一会,谈谈我最近正在思考的一些东西:Python 的速度。对不了解我的人说明一下,我是一个 Python 的粉丝,而且我在我能想到的所有地方都积极地使用 Python。人们对 Python 最大的抱怨之一就是它的速度比较慢,有些人甚至拒绝尝试使用 Python,因为它比其他语言速度慢。这里说说为什么我认为应该尝试使用 Python,尽管它是有点慢。
+
+### 速度不再重要
+
+过去的情形是,程序需要花费很长的时间来运行,CPU 比较贵,内存也很贵。程序的运行时间是一个很重要的指标。计算机非常的昂贵,计算机运行所需要的电也是相当贵的。对这些资源进行优化是因为一个永恒的商业法则:
+
+> 优化你最贵的资源。
+
+在过去,最贵的资源是计算机的运行时间。这就是导致计算机科学致力于研究不同算法的效率的原因。然而,这已经不再是正确的,因为现在硅芯片很便宜,确实很便宜。运行时间不再是你最贵的资源。公司最贵的资源现在是它的员工时间。或者换句话说,就是你。把事情做完比把它变快更加重要。实际上,这是相当的重要,我将把它再次放在这里,仿佛它是一个引文一样(给那些只是粗略浏览的人):
+
+> 把事情做完比快速地做事更加重要。
+
+你可能会说:“我的公司在意速度,我开发一个 web 应用程序,那么所有的响应时间必须少于 x 毫秒。”或者,“我们失去了客户,因为他们认为我们的 app 运行太慢了。”我并不是想说速度一点也不重要,我只是想说速度不再是最重要的东西;它不再是你最贵的资源。
+
+![](https://cdn-images-1.medium.com/max/800/0*Z6j9zMua_w-T25TC.jpg)
+
+### 速度是唯一重要的东西
+
+当你在编程的背景下说 _速度_ 时,你通常是说性能,也就是 CPU 周期。当你的 CEO 在编程的背景下说 _速度_ 时,他指的是业务速度,最重要的指标是产品上市的时间。基本上,你的产品/web 程序是多么的快并不重要。它是用什么语言写的也不重要。甚至它需要花费多少钱也不重要。在一天结束时,让你的公司存活下来或者死去的唯一事物就是产品上市时间。我不只是说创业公司的想法 -- 你开始赚钱需要花费多久,更多的是“从想法到客户手中”的时间期限。企业能够存活下来的唯一方法就是比你的竞争对手更快地创新。如果在你的产品上市之前,你的竞争对手已经提前上市了,那么你想出了多少好的主意也将不再重要。你必须第一个上市,或者至少能跟上。一但你放慢了脚步,你就输了。
+
+> 企业能够存活下来的唯一方法就是比你的竞争对手更快地创新。
+
+#### 一个微服务的案例
+
+像 Amazon、Google 和 Netflix 这样的公司明白快速前进的重要性。他们创建了一个业务系统,可以使用这个系统迅速地前进和快速的创新。微服务是针对他们的问题的解决方案。这篇文章不谈你是否应该使用微服务,但是至少要理解为什么 Amazon 和 Google 认为他们应该使用微服务。
+
+![](https://cdn-images-1.medium.com/max/600/0*MBM9zatYv_Lzr3QN.jpg)
+
+微服务本来就很慢。微服务的主要概念是用网络调用来打破边界。这意味着你正在把使用的函数调用(几个 cpu 周期)转变为一个网络调用。没有什么比这更影响性能了。和 CPU 相比较,网络调用真的很慢。但是这些大公司仍然选择使用微服务。我所知道的架构里面没有比微服务还要慢的了。微服务最大的弊端就是它的性能,但是最大的长处就是上市的时间。通过在较小的项目和代码库上建立团队,一个公司能够以更快的速度进行迭代和创新。这恰恰表明了,非常大的公司也很在意上市时间,而不仅仅只是只有创业公司。
+
+#### CPU 不是你的瓶颈
+
+![](https://cdn-images-1.medium.com/max/800/0*s1RKhkRIBMEYji_w.jpg)
+
+如果你在写一个网络应用程序,如 web 服务器,很有可能的情况会是,CPU 时间并不是你的程序的瓶颈。当你的 web 服务器处理一个请求时,可能会进行几次网络调用,例如到数据库,或者像 Redis 这样的缓存服务器。虽然这些服务本身可能比较快速,但是对它们的网络调用却很慢。[这里有一篇很好的关于特定操作的速度差异的博客文章][1]。在这篇文章里,作者把 CPU 周期时间缩放到更容易理解的人类时间。如果一个单独的 CPU 周期等同于 **1 秒**,那么一个从 California 到 New York 的网络调用将相当于 **4 年**。那就说明了网络调用是多少的慢。按一些粗略估计,我们可以假设在同一数据中心内的普通网络调用大约需要 3 毫秒。这相当于我们“人类比例” **3 个月**。现在假设你的程序是高 CPU 密集型,这需要 100000 个 CPU 周期来对单一调用进行响应。这相当于刚刚超过 **1 天**。现在让我们假设你使用的是一种要慢 5 倍的语言,这将需要大约 **5 天**。很好,将那与我们 3 个月的网络调用时间相比,4 天的差异就显得并不是很重要了。如果有人为了一个包裹不得不至少等待 3 个月,我不认为额外的 4 天对他们来说真的很重要。
+
+上面所说的终极意思是,尽管 Python 速度慢,但是这并不重要。语言的速度(或者 CPU 时间)几乎从来不是问题。实际上谷歌曾经就这一概念做过一个研究,[并且他们就此发表过一篇论文][2]。那篇论文论述了设计高吞吐量的系统。在结论里,他们说到:
+
+> 在高吞吐量的环境中使用解释性语言似乎是矛盾的,但是我们已经发现 CPU 时间几乎不是限制因素;语言的表达性是指,大多数程序是源程序,同时它们的大多数时间花费在 I/O 读写和本机的运行时代码上。而且,解释性语言无论是在语言层面的轻松实验上,还是在允许我们在很多机器上探索分布计算的方法上,都是很有帮助的。
+
+再次强调:
+
+> CPU 时间几乎不是限制因素。
+
+### 如果 CPU 时间是一个问题怎么办?
+
+你可能会说,“前面说的情况真是太好了,但是我们确实有过一些问题,这些问题中 CPU 成为了我们的瓶颈,并造成了我们的 web 应用的速度十分缓慢”,或者“在服务器上 X 语言比 Y 语言需要更少的硬件资源来运行。”这些都可能是对的。关于 web 服务器有这样的美妙的事情:你可以几乎无限地负载均衡它们。换句话说,可以在 web 服务器上投入更多的硬件。当然,Python 可能会比其他语言要求更好的硬件资源,比如 c 语言。只是把硬件投入在 CPU 问题上。相比于你的时间,硬件就显得非常的便宜了。如果你在一年内节省了两周的生产力时间,那将远远多于所增加的硬件开销的回报。
+
+![](https://cdn-images-1.medium.com/max/1000/0*mJFOcWsdEQq98gkF.jpg)
+
+### 那么,Python 更快一些吗?
+
+这一篇文章里面,我一直在谈论最重要的是开发时间。所以问题依然存在:当就开发时间而言,Python 要比其他语言更快吗?按常规惯例来看,我、[google][3] [还有][4][其他][5][几个人][6]可以告诉你 Python 是多么的[高效][7]。它为你抽象出很多东西,帮助你关注那些你真正应该编写代码的地方,而不会被困在琐碎事情的杂草里,比如你是否应该使用一个向量或者一个数组。但你可能不喜欢只是听别人说的这些话,所以让我们来看一些更多的经验数据。
+
+在大多数情况下,关于 python 是否是更高效语言的争论可以归结为脚本语言(或动态语言)与静态类型语言两者的争论。我认为人们普遍接受的是静态类型语言的生产力较低,但是,[这有一篇优秀的论文][8]解释了为什么不是这样。就 Python 而言,这里有一项[研究][9],它调查了不同语言编写字符串处理的代码所需要花费的时间,供参考。
+
+![](https://cdn-images-1.medium.com/max/800/1*cw7Oq54ZflGZhlFglDka4Q.png)
+
+在上述研究中,Python 的效率比 Java 高出 2 倍。有一些其他研究也显示相似的东西。 Rosetta Code 对编程语言的差异进行了[深入的研究][10]。在论文中,他们把 python 与其他脚本语言/解释性语言相比较,得出结论:
+
+> Python 更简洁,即使与函数式语言相比较(平均要短 1.2 到 1.6 倍)
+
+普遍的趋势似乎是 Python 中的代码行总是更少。代码行听起来可能像一个可怕的指标,但是包括上面已经提到的两项研究在内的[多项研究][11]表明,每种语言中每行代码所需要花费的时间大约是一样的。因此,限制代码行数就可以提高生产效率。甚至 codinghorror(一名 C# 程序员)本人[写了一篇关于 Python 是如何更有效率的文章][12]。
+
+我认为说 Python 比其他的很多语言更加的有效率是公正的。这主要是由于 Python 有大量的自带以及第三方库。[这里是一篇讨论 Python 和其他语言间的差异的简单的文章][13]。如果你不知道为何 Python 是如此的小巧和高效,我邀请你借此机会学习一点 python,自己多实践。这儿是你的第一个程序:
+
+```
+import __hello__
+```
+
+### 但是如果速度真的重要呢?
+
+ ![](https://cdn-images-1.medium.com/max/600/0*bg31_URKZ7xzWy5I.jpg)
+
+上述论点的语气可能会让人觉得优化与速度一点也不重要。但事实是,很多时候运行时性能真的很重要。一个例子是,你有一个 web 应用程序,其中有一个特定的端点需要用很长的时间来响应。你知道这个程序需要多快,并且知道程序需要改进多少。
+
+在我们的例子中,发生了两件事:
+
+1. 我们注意到有一个端点执行缓慢。
+2. 我们承认它是缓慢,因为我们有一个可以衡量是否足够快的标准,而它没达到那个标准。
+
+我们不必在应用程序中微调优化所有内容,只需要让其中每一个都“足够快”。如果一个端点花费了几秒钟来响应,你的用户可能会注意到,但是,他们并不会注意到你将响应时间由 35 毫秒降低到 25 毫秒。“足够好”就是你需要做到的所有事情。_免责声明: 我应该说有**一些**应用程序,如实时投标程序,**确实**需要细微优化,每一毫秒都相当重要。但那只是例外,而不是规则。_
+
+为了明白如何对端点进行优化,你的第一步将是配置代码,并尝试找出瓶颈在哪。毕竟:
+
+> 任何除了瓶颈之外的改进都是错觉。 -- Gene Kim
+
+如果你的优化没有触及到瓶颈,你只是浪费你的时间,并没有解决实际问题。在你优化瓶颈之前,你不会得到任何重要的改进。如果你在不知道瓶颈是什么前就尝试优化,那么你最终只会在部分代码中玩耍。在测量和确定瓶颈之前优化代码被称为“过早优化”。人们常提及 Donald Knuth 说的话,但他声称这句话实际上是他从别人那里听来的:
+
+> 过早优化是万恶之源。
+
+在谈到维护代码库时,来自 Donald Knuth 的更完整的引文是:
+
+> 在 97% 的时间里,我们应该忘记微不足道的效率:**过早的优化是万恶之源**。然而在关键的 3%,我们不应该错过优化的机会。 —— Donald Knuth
+
+换句话说,他所说的是,在大多数时间你应该忘记对你的代码进行优化。它几乎总是足够好。在不是足够好的情况下,我们通常只需要触及 3% 的代码路径。比如因为你使用了 if 语句而不是函数,你的端点快了几纳秒,但这并不会使你赢得任何奖项。
+
+过早的优化包括调用某些更快的函数,或者甚至使用特定的数据结构,因为它通常更快。计算机科学认为,如果一个方法或者算法与另一个具有相同的渐近增长(或称为 Big-O),那么它们是等价的,即使在实践中要慢两倍。计算机是如此之快,算法随着数据/使用增加而造成的计算增长远远超过实际速度本身。换句话说,如果你有两个 O(log n) 的函数,但是一个要慢两倍,这实际上并不重要。随着数据规模的增大,它们都以同样的速度“慢下来”。这就是过早优化是万恶之源的原因;它浪费了我们的时间,几乎从来没有真正有助于我们的性能改进。
+
+就 Big-O 而言,你可以认为对你的程序而言,所有的语言都是 O(n),其中 n 是代码或者指令的行数。对于同样的指令,它们以同样的速率增长。对于渐进增长,一种语言的速度快慢并不重要,所有语言都是相同的。在这个逻辑下,你可以说,为你的应用程序选择一种语言仅仅是因为它的“快速”是过早优化的最终形式。你选择某些预期快速的东西,却没有测量,也不理解瓶颈将在哪里。
+
+> 为您的应用选择语言只是因为它的“快速”,是过早优化的最终形式。
+
+
+![](https://cdn-images-1.medium.com/max/1000/0*6WaZOtaXLIo1Vy5H.png)
+
+### 优化 Python
+
+我最喜欢 Python 的一点是,它可以让你一次优化一点点代码。假设你有一个 Python 的方法,你发现它是你的瓶颈。你对它优化过几次,可能遵循[这里][14]和[那里][15]的一些指导,现在,你很肯定 Python 本身就是你的瓶颈。Python 有调用 C 代码的能力,这意味着,你可以用 C 重写这个方法来减少性能问题。你可以一次重写一个这样的方法。这个过程允许你用任何可以编译为 C 兼容汇编程序的语言,编写良好优化后的瓶颈方法。这让你能够在大多数时间使用 Python 编写,只在必要的时候都才用较低级的语言来写代码。
+
+有一种叫做 Cython 的编程语言,它是 Python 的超集。它几乎是 Python 和 C 的合并,是一种渐进类型的语言。任何 Python 代码都是有效的 Cython 代码,Cython 代码可以编译成 C 代码。使用 Cython,你可以编写一个模块或者一个方法,并逐渐进步到越来越多的 C 类型和性能。你可以将 C 类型和 Python 的鸭子类型混在一起。使用 Cython,你可以获得混合后的完美组合,只在瓶颈处进行优化,同时在其他所有地方不失去 Python 的美丽。
+
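+一个典型的 Cython 工作流大致如下(示意;`mymodule.pyx` 是假设的文件名,且需要本机装有 C 编译器):
+
+```
+$ pip install cython
+$ cythonize -i mymodule.pyx       # 将 .pyx 编译成 C 并就地构建扩展模块
+$ python -c "import mymodule"     # 之后像普通 Python 模块一样导入使用
+```
+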
+![](https://cdn-images-1.medium.com/max/600/0*LStEb38q3d2sOffq.jpg)
+
+*星战前夜的一幅截图:这是用 Python 编写的 space MMO 游戏。*
+
+当您最终遇到 Python 的性能问题阻碍时,你不需要把你的整个代码库用另一种不同的语言来编写。你只需要用 Cython 重写几个函数,几乎就能得到你所需要的性能。这就是[星战前夜][16]采取的策略。这是一个大型多玩家的电脑游戏,在整个架构中使用 Python 和 Cython。它们通过优化 C/Cython 中的瓶颈来实现游戏级别的性能。如果这个策略对他们有用,那么它应该对任何人都有帮助。或者,还有其他方法来优化你的 Python。例如,[PyPy][17] 是一个 Python 的 JIT 实现,它通过使用 PyPy 替掉 CPython(这是 Python 的默认实现),为长时间运行的应用程序提供重要的运行时改进(如 web 服务器)。
+
+![](https://cdn-images-1.medium.com/max/1000/0*mPc5j1btWBFz6YK7.jpg)
+
+让我们回顾一下要点:
+
+ * 优化你最贵的资源。那就是你,而不是计算机。
+ * 选择一种语言/框架/架构来帮助你快速开发(比如 Python)。不要仅仅因为某些技术的快而选择它们。
+ * 当你遇到性能问题时,请找到瓶颈所在。
+ * 你的瓶颈很可能不是 CPU 或者 Python 本身。
+ * 如果 Python 成为你的瓶颈(你已经优化过你的算法),那么可以转向热门的 Cython 或者 C。
+ * 尽情享受可以快速做完事情的乐趣。
+
+我希望你喜欢阅读这篇文章,就像我喜欢写这篇文章一样。如果你想说谢谢,请为我点下赞。另外,如果某个时候你想和我讨论 Python,你可以在 twitter 上艾特我(@nhumrich),或者你可以在 [Python slack channel][18] 找到我。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Nick Humrich -- 坚持采用持续交付的方法,并为之写了很多工具。同时还是一名 Python 黑客与技术狂热者,目前是一名 DevOps 工程师。
+
+via: https://medium.com/hacker-daily/yes-python-is-slow-and-i-dont-care-13763980b5a1
+
+作者:[Nick Humrich][a]
+译者:[zhousiyu325](https://github.com/zhousiyu325)
+校对:[jasminepeng](https://github.com/jasminepeng)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://hackernoon.com/@nhumrich
+[1]:https://blog.codinghorror.com/the-infinite-space-between-words/
+[2]:https://static.googleusercontent.com/media/research.google.com/en//archive/sawzall-sciprog.pdf
+[3]:https://www.codefellows.org/blog/5-reasons-why-python-is-powerful-enough-for-google/
+[4]:https://www.lynda.com/Python-tutorials/Python-Programming-Efficiently/534425-2.html
+[5]:https://www.linuxjournal.com/article/3882
+[6]:https://www.codeschool.com/blog/2016/01/27/why-python/
+[7]:http://pythoncard.sourceforge.net/what_is_python.html
+[8]:http://www.tcl.tk/doc/scripting.html
+[9]:http://www.connellybarnes.com/documents/language_productivity.pdf
+[10]:https://arxiv.org/pdf/1409.0252.pdf
+[11]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.1831&rep=rep1&type=pdf
+[12]:https://blog.codinghorror.com/are-all-programming-languages-the-same/
+[13]:https://www.python.org/doc/essays/comparisons/
+[14]:https://wiki.python.org/moin/PythonSpeed
+[15]:https://wiki.python.org/moin/PythonSpeed/PerformanceTips
+[16]:https://www.eveonline.com/
+[17]:http://pypy.org/
+[18]:http://pythondevelopers.herokuapp.com/
diff --git a/translated/tech/20170403 pyDash – A Web Based Linux Performance Monitoring Tool.md b/published/201704/20170403 pyDash – A Web Based Linux Performance Monitoring Tool.md
similarity index 60%
rename from translated/tech/20170403 pyDash – A Web Based Linux Performance Monitoring Tool.md
rename to published/201704/20170403 pyDash – A Web Based Linux Performance Monitoring Tool.md
index 9e32f65b97..d37d4a792c 100644
--- a/translated/tech/20170403 pyDash – A Web Based Linux Performance Monitoring Tool.md
+++ b/published/201704/20170403 pyDash – A Web Based Linux Performance Monitoring Tool.md
@@ -1,16 +1,15 @@
-pyDash — 一个基于 web 的 Linux 性能监测工具
+pyDash:一个基于 web 的 Linux 性能监测工具
============================================================
-pyDash 是一个轻量且[基于 web 的 Linux 性能监测工具][1],它是用 Python 和 [Django][2] 加上 Chart.js 来写的。经测试,在下面这些主流 Linux 发行版上可运行:CentOS、Fedora、Ubuntu、Debian、Raspbian 以及 Pidora 。
+`pyDash` 是一个轻量且[基于 web 的 Linux 性能监测工具][1],它是用 Python 和 [Django][2] 加上 Chart.js 来写的。经测试,在下面这些主流 Linux 发行版上可运行:CentOS、Fedora、Ubuntu、Debian、Raspbian 以及 Pidora 。
-你可以使用这个工具来监视你的 Linux 个人电脑/服务器资源,比如 CPU、内存
-、网络统计,包括在线用户以及更多的进程。仪表盘是完全使用主要的 Python 版本提供的 Python 库开发的,因此它的依赖关系很少,你不需要安装许多包或库来运行它。
+你可以使用这个工具来监视你的 Linux 个人电脑/服务器资源,比如 CPU、内存、网络统计、在线用户的进程等等。仪表盘完全由主要的 Python 发行版本所提供的 Python 库开发,因此它的依赖关系很少,你不需要安装许多包或库来运行它。
-在这篇文章中,我将展示如果安装 pyDash 来监测 Linux 服务器性能。
+在这篇文章中,我将展示如何安装 `pyDash` 来监测 Linux 服务器性能。
-#### 如何在 Linux 系统下安装 pyDash
+### 如何在 Linux 系统下安装 pyDash
-1、首先,像下面这样安装需要的软件包 git 和 Python pip:
+1、首先,像下面这样安装需要的软件包 `git` 和 `Python pip`:
```
-------------- 在 Debian/Ubuntu 上 --------------
@@ -22,7 +21,7 @@ $ sudo apt-get install git python-pip
# dnf install git python-pip
```
-2、如果安装好了 git 和 Python pip,那么接下来,像下面这样安装 virtualenv,它有助于处理针对 Python 项目的依赖关系:
+2、如果安装好了 git 和 Python pip,那么接下来,像下面这样安装 `virtualenv`,它有助于处理针对 Python 项目的依赖关系:
```
# pip install virtualenv
@@ -30,14 +29,14 @@ $ sudo apt-get install git python-pip
$ sudo pip install virtualenv
```
-3、现在,像下面这样使用 git 命令,把 pyDash 仓库克隆到 home 目录中:
+3、现在,像下面这样使用 `git` 命令,把 pyDash 仓库克隆到 home 目录中:
```
# git clone https://github.com/k3oni/pydash.git
# cd pydash
```
-4、下一步,使用下面的 virtualenv 命令为项目创建一个叫做 pydashtest 虚拟环境:
+4、下一步,使用下面的 `virtualenv` 命令为项目创建一个叫做 `pydashtest` 虚拟环境:
```
$ virtualenv pydashtest #give a name for your virtual environment like pydashtest
@@ -48,9 +47,9 @@ $ virtualenv pydashtest #give a name for your virtual environment like pydashtes
*创建虚拟环境*
-重点:请注意,上面的屏幕截图中,虚拟环境的 bin 目录被高亮显示,你的可能和这不一样,取决于你把 pyDash 目录克隆到什么位置。
+重要:请注意,上面的屏幕截图中,虚拟环境的 `bin` 目录被高亮显示,你的可能和这不一样,取决于你把 pyDash 目录克隆到什么位置。
-5、创建好虚拟环境(pydashtest)以后,你需要在使用前像下面这样激活它:
+5、创建好虚拟环境(`pydashtest`)以后,你需要在使用前像下面这样激活它:
```
$ source /home/aaronkilik/pydash/pydashtest/bin/activate
@@ -61,16 +60,16 @@ $ source /home/aaronkilik/pydash/pydashtest/bin/activate
*激活虚拟环境*
-从上面的屏幕截图中,你可以注意到,提示字符串 1(PS1)已经发生改变,这表明虚拟环境已经被激活,而且可以开始使用。
+从上面的屏幕截图中,你可以注意到,提示字符串 1(`PS1`)已经发生改变,这表明虚拟环境已经被激活,而且可以开始使用。
-6、现在,安装 pydash 项目 requirements;如何你是一个细心的人,那么可以使用 [cat 命令][5]查看 requirements.txt 的内容,然后像下面展示这样进行安装:
+6、现在,安装 pydash 项目所需的依赖(requirements);如果你好奇的话,可以使用 [cat 命令][5]查看 `requirements.txt` 的内容,然后像下面所示那样进行安装:
```
$ cat requirements.txt
$ pip install -r requirements.txt
```
-7、现在,进入 `pydash` 目录,里面包含一个名为 `settings.py` 的文件,也可直接运行下面的命令打开这个文件,然后把 `SECRET_KEY` 改为一个特定值:
+7、现在,进入 `pydash` 目录,里面包含一个名为 `settings.py` 的文件,也可直接运行下面的命令打开这个文件,然后把 `SECRET_KEY` 改为一个特定值:
```
$ vi pydash/settings.py
@@ -83,7 +82,7 @@ $ vi pydash/settings.py
保存文件然后退出。
-8、之后,运行下面的命令来创建一个项目数据库和安装 Django 的身份验证系统,并创建一个项目的超级用户:
+8、之后,运行下面的命令来创建一个项目数据库和安装 Django 的身份验证系统,并创建一个项目的超级用户:
```
$ python manage.py syncdb
@@ -104,13 +103,13 @@ Password (again): ############
*创建项目数据库*
-9、这个时候,一切都设置好了,然后,运行下面的命令来启用 Django 开发服务器:
+9、这个时候,一切都设置好了,然后,运行下面的命令来启用 Django 开发服务器:
```
$ python manage.py runserver
```
-10、接下来,打开你的 web 浏览器,输入网址:http://127.0.0.1:8000/ 进入 web 控制台登录界面,输入你在第 8 步中创建数据库和安装 Django 身份验证系统时创建的超级用户名和密码,然后点击登录。
+10、接下来,打开你的 web 浏览器,输入网址:`http://127.0.0.1:8000/` 进入 web 控制台登录界面,输入你在第 8 步中创建数据库和安装 Django 身份验证系统时创建的超级用户名和密码,然后点击登录。
[
![pyDash Login Interface](http://www.tecmint.com/wp-content/uploads/2017/03/pyDash-web-login-interface.png)
@@ -118,7 +117,7 @@ $ python manage.py runserver
*pyDash 登录界面*
-11、登录到 pydash 主页面以后,你将会得到一段监测系统的基本信息,包括 CPU、内存和硬盘使用量以及系统平均负载。
+11、登录到 pydash 主页面以后,你将可以看到监测系统的基本信息,包括 CPU、内存和硬盘使用量以及系统平均负载。
向下滚动便可查看更多部分的信息。
@@ -128,7 +127,7 @@ $ python manage.py runserver
*pydash 服务器性能概述*
-12、下一个屏幕截图显示的是一段 pydash 的跟踪界面,包括 IP 地址、互联网流量、硬盘读/写、在线用户以及 netstats 。
+12、下一个屏幕截图显示的是一段 pydash 的跟踪界面,包括 IP 地址、互联网流量、硬盘读/写、在线用户以及 netstats 。
[
![pyDash Network Overview](http://www.tecmint.com/wp-content/uploads/2017/03/pyDash-Network-Overview.png)
@@ -136,7 +135,7 @@ $ python manage.py runserver
*pyDash 网络概述*
-13、下一个 pydash 主页面的截图显示了一部分系统中被监视的活跃进程。
+13、下一个 pydash 主页面的截图显示了一部分系统中被监视的活跃进程。
[
@@ -154,16 +153,16 @@ $ python manage.py runserver
作者简介:
-我叫 Ravi Saive,是 TecMint 的创建者,是一个喜欢在网上分享技巧和知识的计算机极客和 Linux Guru 。我的大多数服务器都运行在叫做 Linux 的开源平台上。请关注我:[Twitter][10]、[Facebook][01] 以及 [Google+][02] 。
+我叫 Ravi Saive,是 TecMint 的创建者,是一个喜欢在网上分享技巧和知识的计算机极客和 Linux Guru。我的大多数服务器都运行在 Linux 开源平台上。请关注我:[Twitter][10]、[Facebook][01] 以及 [Google+][02]。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/pydash-a-web-based-linux-performance-monitoring-tool/
-作者:[Ravi Saive ][a]
+作者:[Ravi Saive][a]
译者:[ucasFL](https://github.com/ucasFL)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20170404 How To Enable Desktop Sharing In Ubuntu and Linux Mint.md b/published/201704/20170404 How To Enable Desktop Sharing In Ubuntu and Linux Mint.md
similarity index 100%
rename from published/20170404 How To Enable Desktop Sharing In Ubuntu and Linux Mint.md
rename to published/201704/20170404 How To Enable Desktop Sharing In Ubuntu and Linux Mint.md
diff --git a/published/20170404 How to Add a New Disk Larger Than 2TB to An Existing Linux.md b/published/201704/20170404 How to Add a New Disk Larger Than 2TB to An Existing Linux.md
similarity index 100%
rename from published/20170404 How to Add a New Disk Larger Than 2TB to An Existing Linux.md
rename to published/201704/20170404 How to Add a New Disk Larger Than 2TB to An Existing Linux.md
diff --git a/translated/tech/20170406 Anbox - Android in a Box.md b/published/201704/20170406 Anbox - Android in a Box.md
similarity index 77%
rename from translated/tech/20170406 Anbox - Android in a Box.md
rename to published/201704/20170406 Anbox - Android in a Box.md
index fac422a9ac..5fca50f6de 100644
--- a/translated/tech/20170406 Anbox - Android in a Box.md
+++ b/published/201704/20170406 Anbox - Android in a Box.md
@@ -1,10 +1,11 @@
-# Anbox
+Anbox:容器中的 Android
+===============
-Anbox 是一个基于容器的方式,在像 Ubuntu 这样的常规的 GNU Linux 系统上启动一个完整的 Android 系统。
+Anbox 以基于容器的方式,在像 Ubuntu 这样的常规的 GNU Linux 系统上启动一个完整的 Android 系统。
-## 概述
+### 概述
-Anbox 使用 Linux 命名空间(user、pid、uts、net、mount、ipc)来在容器中运行完整的 Android 系统,并提供任何基于 GNU Linux 平台的 Android 程序。
+Anbox 使用 Linux 命名空间(user、pid、uts、net、mount、ipc)在容器中运行完整的 Android 系统,并在任何基于 GNU Linux 的平台上提供 Android 应用。
容器内的 Android 无法直接访问任何硬件。所有硬件访问都通过主机上的 anbox 守护进程进行。我们重用基于 QEMU 的模拟器实现的 Android 中的 GL、ES 加速渲染。容器内的 Android 系统使用不同的管道与主机系统通信,并通过它发送所有硬件访问命令。
@@ -15,19 +16,19 @@ Anbox 使用 Linux 命名空间(user、pid、uts、net、mount、ipc)来在
* [Android 的 “qemud” 复用守护进程](https://android.googlesource.com/platform/external/qemu/+/emu-master-dev/android/docs/ANDROID-QEMUD.TXT)
* [Android qemud 服务](https://android.googlesource.com/platform/external/qemu/+/emu-master-dev/android/docs/ANDROID-QEMUD-SERVICES.TXT)
-Anbox 目前适合桌面使用,但也可使用移动操作系统,如 Ubuntu Touch、Sailfish OS 或 Lune OS。然而,由于 Android 程序映射目前只针对桌面环境,因此还需要额外的工作来支持其他的用户界面。
+Anbox 目前适合桌面使用,但也可用在移动操作系统上,如 Ubuntu Touch、Sailfish OS 或 Lune OS。然而,由于 Android 程序的映射目前只针对桌面环境,因此还需要额外的工作来支持其他的用户界面。
-Android 运行时环境带有一个基于[ Android 开源项目](https://source.android.com/)镜像的最小自定义 Android 系统。所使用的镜像目前基于 Android 7.1.1。
+Android 运行时环境带有一个基于 [Android 开源项目](https://source.android.com/)镜像的最小自定义 Android 系统。所使用的镜像目前基于 Android 7.1.1。
-## 安装
+### 安装
目前,安装过程包括一些添加额外组件到系统的步骤。包括:
- * 没有分发版内核同时启用的 binder 和 ashmen 原始内核模块。
- * 使用 udev 规则为 /dev/binder 和 /dev/ashmem 设置正确权限。
- * 能够启动 Anbox 会话管理器作为用户会话的一个启动任务。
+* 启用 binder 和 ashmem 所需的树外(out-of-tree)内核模块,因为没有发行版内核同时启用这两者。
+* 使用 udev 规则为 /dev/binder 和 /dev/ashmem 设置正确权限。
+* 能够启动 Anbox 会话管理器作为用户会话的一个启动任务。
-为了使这个过程尽可能简单,我们将必要的步骤绑定在一个 snap(见 https://snapcraft.io) 中,称为“anbox-installer”。这个安装程序会执行所有必要的步骤。你可以在所有支持 snap 的系统运行下面的命令安装它。
+为了使这个过程尽可能简单,我们将必要的步骤绑定在一个 snap(见 https://snapcraft.io) 中,称之为 “anbox-installer”。这个安装程序会执行所有必要的步骤。你可以在所有支持 snap 的系统运行下面的命令安装它。
```
$ snap install --classic anbox-installer
@@ -49,11 +50,11 @@ $ anbox-installer
它会引导你完成安装过程。
-**注意:** Anbox 目前处于** pre-alpha 开发状态**。不要指望它具有生产环境你需要的所有功能。你肯定会遇到错误和崩溃。如果你遇到了,请不要犹豫并报告它们!
+**注意:** Anbox 目前处于 **pre-alpha 开发状态**。不要指望它具有生产环境你需要的所有功能。你肯定会遇到错误和崩溃。如果你遇到了,请不要犹豫并报告它们!
**注意:** Anbox snap 目前 **完全没有约束**,因此它只能从边缘渠道获取。正确的约束是我们想要在未来实现的,但由于 Anbox 的性质和复杂性,这不是一个简单的任务。
-## 已支持的 Linux 发行版
+### 已支持的 Linux 发行版
目前我们官方支持下面的 Linux 发行版:
@@ -65,9 +66,9 @@ $ anbox-installer
* Ubuntu 16.10 (yakkety)
* Ubuntu 17.04 (zesty)
-## 安装并运行 Android 程序
+### 安装并运行 Android 程序
-## 从源码构建
+#### 从源码构建
要构建 Anbox 运行时不需要特别了解什么,我们使用 cmake 作为构建系统。你的主机系统中应已有下面这些构建依赖:
@@ -132,11 +133,11 @@ $ snapcraft
$ snap install --dangerous --devmode anbox_1_amd64.snap
```
-## 运行 Anbox
+#### 运行 Anbox
要从本地构建运行 Anbox ,你需要了解更多一点。请参考[“运行时步骤”](docs/runtime-setup.md)文档。
-## 文档
+### 文档
在项目源代码的子目录下,你可以找到额外的关于 Anbox 的文档。
@@ -145,15 +146,15 @@ $ snap install --dangerous --devmode anbox_1_amd64.snap
* [运行时步骤](docs/runtime-setup.md)
* [构建 Android 镜像](docs/build-android.md)
-## 报告 bug
+### 报告 bug
-如果你发现了一个 Anbox 问题,请[提交一个 bug](https://github.com/anbox/anbox/issues/new)。
+如果你发现了一个 Anbox 问题,请[提交 bug](https://github.com/anbox/anbox/issues/new)。
-## 取得联系
+### 取得联系
如果你想要与开发者联系,你可以在 [FreeNode](https://freenode.net/) 中加入 *#anbox* 的 IRC 频道。
-## 版权与许可
+### 版权与许可
Anbox 重用了像 Android QEMU 模拟器这样的其他项目的代码。这些项目可在外部/带有许可声明的子目录中得到。
@@ -163,7 +164,7 @@ anbox 源码本身,如果没有在相关源码中声明其他的许可,默
via: https://github.com/anbox/anbox/blob/master/README.md
-作者:[ Anbox][a]
+作者:[Anbox][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
diff --git a/published/20170407 Pyinotify – Monitor Filesystem Changes in Real-Time in Linux.md b/published/201704/20170407 Pyinotify – Monitor Filesystem Changes in Real-Time in Linux.md
similarity index 100%
rename from published/20170407 Pyinotify – Monitor Filesystem Changes in Real-Time in Linux.md
rename to published/201704/20170407 Pyinotify – Monitor Filesystem Changes in Real-Time in Linux.md
diff --git a/published/20170413 Ubuntu 17.04 (Zesty Zapus) Officially Released, Available to Download Now.md b/published/201704/20170413 Ubuntu 17.04 (Zesty Zapus) Officially Released, Available to Download Now.md
similarity index 100%
rename from published/20170413 Ubuntu 17.04 (Zesty Zapus) Officially Released, Available to Download Now.md
rename to published/201704/20170413 Ubuntu 17.04 (Zesty Zapus) Officially Released, Available to Download Now.md
diff --git a/published/201704/20170417 GoTTY – Share Your Linux Terminal TTY as a Web Application.md b/published/201704/20170417 GoTTY – Share Your Linux Terminal TTY as a Web Application.md
new file mode 100644
index 0000000000..5c13756f37
--- /dev/null
+++ b/published/201704/20170417 GoTTY – Share Your Linux Terminal TTY as a Web Application.md
@@ -0,0 +1,238 @@
+GoTTY:把你的 Linux 终端放到浏览器里面
+============================================================
+
+GoTTY 是一个简单的基于 Go 语言的命令行工具,它可以将你的终端(TTY)作为 web 程序共享。它会将命令行工具转换为 web 程序。
+
+它使用 Chrome OS 的终端仿真器(hterm),在 Web 浏览器中运行基于 JavaScript 的终端。重要的是,GoTTY 运行了一个 WebSocket 服务器,它会将 TTY 的输出传输给客户端,并接收来自客户端的输入(如果允许客户端输入的话),再将其转发给 TTY。
+
+它的架构(hterm + web socket 的想法)灵感来自 [Wetty 项目][1],它使终端能够通过 HTTP 和 HTTPS 使用。
+
+### 先决条件
+
+你需要在 Linux 中安装 [GoLang (Go 编程语言)][2] 环境来运行 GoTTY。
+
+### 如何在 Linux 中安装 GoTTY
+
+如果你已经有一个[可以工作的 Go 语言环境][3],运行下面的 `go get` 命令来安装它:
+
+```
+# go get github.com/yudai/gotty
+```
+
+上面的命令会将 GoTTY 的二进制文件安装到 `GOBIN` 环境变量所指的目录中,可以像下面这样检查一下:
+
+```
+# $GOPATH/bin/
+```
+
+[
+ ![Check GOBIN Environment](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Go-Environment.png)
+][4]
+
+*检查 GOBIN 环境*
+
+### 如何在 Linux 中使用 GoTTY
+
+要运行它,你可以借助 `GOBIN` 环境变量补全出完整的命令路径:
+
+```
+# $GOBIN/gotty
+```
+
+另外,要不带完整命令路径运行 GoTTY 或其他 Go 程序,使用 `export` 命令将 `GOBIN` 变量添加到 `~/.profile` 文件中的 `PATH` 环境变量中。
+
+```
+export PATH="$PATH:$GOBIN"
+```
+
+保存文件并关闭。接着运行 `source` 来使更改生效:
+
+```
+# source ~/.profile
+```
+
+运行 GoTTY 命令的常规语法是:
+
+```
+Usage: gotty [options] <command> [<arguments...>]
+```
+
+现在用 GoTTY 运行任意命令,如 [df][5] 来从 Web 浏览器中查看系统分区空间及使用率。
+
+```
+# gotty df -h
+```
+
+GoTTY 默认会在 8080 端口上启动一个 Web 服务器。在浏览器中打开 URL:`http://127.0.0.1:8080/`,你会看到运行的命令仿佛运行在终端中一样:
+
+[
+ ![Gotty Linux Disk Usage](http://www.tecmint.com/wp-content/uploads/2017/03/Gotty-Linux-Disk-Usage.png)
+][6]
+
+*Gotty 查看 Linux 磁盘使用率*
+
+### 如何在 Linux 中自定义 GoTTY
+
+你可以在 `~/.gotty` 配置文件中自定义默认选项和终端;如果该文件存在,GoTTY 每次启动时都会加载它。
+
+这是 `gotty` 命令读取的主要自定义文件,因此,请像下面这样创建它:
+
+```
+# touch ~/.gotty
+```
+
+并为配置选项设置你自己的有效值(在此处查找所有配置选项)以自定义 GoTTY,例如:
+
+```
+// Listen at port 9000 by default
+port = "9000"
+
+// Enable TLS/SSL by default
+enable_tls = true
+
+// hterm preferences
+// Smaller font and a little bit bluer background color
+preferences {
+    font_size = 5,
+    background_color = "rgb(16, 16, 32)"
+}
+```
+
+你可以使用命令行中的 `--index` 选项指定你自己的 `index.html` 文件:
+
+```
+# gotty --index /path/to/index.html uptime
+```
+
+### 如何在 GoTTY 中使用安全功能
+
+由于 GoTTY 默认不提供可靠的安全保障,你需要手动使用下面说明的某些安全功能。
+
+#### 允许客户端在终端中运行命令
+
+请注意,默认情况下,GoTTY 不允许客户端输入到 TTY 中,它只支持窗口缩放。
+
+不过,你可以使用 `-w` 或 `--permit-write` 选项来允许客户端写入 TTY,但出于安全考虑,并不推荐这么做。
+
+以下命令会使用 [vi 命令行编辑器][7]在 Web 浏览器中打开文件 `fossmint.txt` 进行编辑:
+
+```
+# gotty -w vi fossmint.txt
+```
+
+以下是从 Web 浏览器看到的 vi 界面(像平常一样使用 vi 命令):
+
+[
+ ![Gotty Web Vi Editor](http://www.tecmint.com/wp-content/uploads/2017/03/Gotty-Web-Vi-Editor.png)
+][8]
+
+*Gotty Web Vi 编辑器*
+
+#### 使用基本(用户名和密码)验证运行 GoTTY
+
+尝试激活基本身份验证机制,这样客户端将需要输入指定的用户名和密码才能连接到 GoTTY 服务器。
+
+以下命令使用 `-c` 选项限制客户端访问,以向用户询问指定的凭据(用户名:`test` 密码:`@67890`):
+
+```
+# gotty -w -p "9000" -c "test@67890" glances
+```
+
+[
+ ![Gotty with Basic Authentication](http://www.tecmint.com/wp-content/uploads/2017/03/Gotty-use-basic-authentication.png)
+][9]
+
+*使用基本验证运行 GoTTY*
+
+#### Gotty 生成随机 URL
+
+限制访问服务器的另一种方法是使用 `-r` 选项。GoTTY 会生成一个随机 URL,这样只有知道该 URL 的用户才可以访问该服务器。
+
+还可以使用 `--title-format "GoTTY - {{ .Command }} ({{ .Hostname }})"` 选项来定义浏览器标题。[glances][10] 用于显示系统监控统计信息:
+
+```
+# gotty -r --title-format "GoTTY - {{ .Command }} ({{ .Hostname }})" glances
+```
+
+以下是从浏览器中看到的上面的命令的结果:
+
+[
+ ![Gotty Random URL for Glances Linux Monitoring](http://www.tecmint.com/wp-content/uploads/2017/03/Gotty-Random-URL-for-Glances-Linux-Monitoring.png)
+][11]
+
+*使用 Gotty 随机 URL 用于 Glances 系统监控*
+
+#### 带有 SSL/TLS 使用 GoTTY
+
+因为默认情况下服务器和客户端之间的所有连接都不加密,当你通过 GoTTY 发送秘密信息(如用户凭据或任何其他信息)时,需要使用 `-t` 或 `--tls` 选项在会话中启用 TLS/SSL。
+
+默认情况下,GoTTY 会读取证书文件 `~/.gotty.crt` 和密钥文件 `~/.gotty.key`,因此,首先使用下面的 `openssl` 命令创建一个自签名的证书以及密钥(回答提示的问题以生成证书和密钥文件):
+
+```
+# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ~/.gotty.key -out ~/.gotty.crt
+```
+
+按如下所示,通过启用 SSL/TLS,以安全方式使用 GoTTY:
+
+```
+# gotty -tr --title-format "GoTTY - {{ .Command }} ({{ .Hostname }})" glances
+```
+
+#### 与多个客户端分享你的终端
+
+你可以使用[终端复用程序][12]来与多个客户端共享一个进程,以下命令会启动一个名为 gotty 的新 [tmux 会话][13]来运行 [glances][14](确保你安装了 tmux):
+
+```
+# gotty tmux new -A -s gotty glances
+```
+
+要读取不同的配置文件,像下面那样使用 `--config "/path/to/file"` 选项:
+
+```
+# gotty -tr --config "~/gotty_new_config" --title-format "GoTTY - {{ .Command }} ({{ .Hostname }})" glances
+```
+
+要显示 GoTTY 版本,运行命令:
+
+```
+# gotty -v
+```
+
+访问 GoTTY GitHub 仓库以查找更多使用示例:[https://github.com/yudai/gotty][15] 。
+
+就这样了!你尝试过了吗?觉得 GoTTY 怎么样?通过下面的反馈栏与我们分享你的想法吧。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Aaron Kili 是 Linux 和 F.O.S.S 爱好者,即将成为 Linux SysAdmin 和网络开发人员,目前是 TecMint 的内容创作者,他喜欢在电脑上工作,并坚信分享知识。
+
+
+----------
+
+
+via: http://www.tecmint.com/gotty-share-linux-terminal-in-web-browser/
+
+作者:[Aaron Kili][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/aaronkili/
+[1]:http://www.tecmint.com/access-linux-server-terminal-in-web-browser-using-wetty/
+[2]:http://www.tecmint.com/install-go-in-linux/
+[3]:http://www.tecmint.com/install-go-in-linux/
+[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Go-Environment.png
+[5]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
+[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Gotty-Linux-Disk-Usage.png
+[7]:http://www.tecmint.com/vi-editor-usage/
+[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Gotty-Web-Vi-Editor.png
+[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Gotty-use-basic-authentication.png
+[10]:http://www.tecmint.com/glances-an-advanced-real-time-system-monitoring-tool-for-linux/
+[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Gotty-Random-URL-for-Glances-Linux-Monitoring.png
+[12]:http://www.tecmint.com/tmux-to-access-multiple-linux-terminals-inside-a-single-console/
+[13]:http://www.tecmint.com/tmux-to-access-multiple-linux-terminals-inside-a-single-console/
+[14]:http://www.tecmint.com/glances-an-advanced-real-time-system-monitoring-tool-for-linux/
+[15]:https://github.com/yudai/gotty
+[16]:http://www.tecmint.com/author/aaronkili/
+[17]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
+[18]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/published/20170410 Cpustat – Monitors CPU Utilization by Running Processes in Linux.md b/published/20170410 Cpustat – Monitors CPU Utilization by Running Processes in Linux.md
new file mode 100644
index 0000000000..d3239821d0
--- /dev/null
+++ b/published/20170410 Cpustat – Monitors CPU Utilization by Running Processes in Linux.md
@@ -0,0 +1,169 @@
+cpustat:在 Linux 下根据运行的进程监控 CPU 使用率
+============================================================
+
+cpustat 是 Linux 下一个强大的系统性能测量程序,它用 [Go 编程语言][3] 编写。它通过使用 “[用于分析任意系统的性能的方法(USE)](http://www.brendangregg.com/usemethod.html)”,以有效的方式显示 CPU 利用率和饱和度。
+
+它以较高的频率对系统中运行的每个进程进行取样,然后以较低的频率汇总这些样本。例如,它能够每 200ms 测量一次每个进程,然后每 5 秒汇总一次这些样本,包括某些度量的最小/平均/最大值(min/avg/max)。
+
+**推荐阅读:** [监控 Linux 性能的 20 个命令行工具][4]
+
+cpustat 能用两种方式输出数据:定时汇总的纯文本列表和每个取样的彩色滚动面板。
+
+### 如何在 Linux 中安装 cpustat
+
+为了使用 cpustat,你的 Linux 系统中必须安装有 Go 语言(GoLang),如果你还没有安装它,点击下面的链接逐步安装 GoLang:
+
+- [在 Linux 下安装 GoLang(Go 编程语言)][1]
+
+安装完 Go 以后,输入下面的 `go get` 命令安装 cpustat,这个命令会将 cpustat 二进制文件安装到你的 `GOBIN` 变量所指的路径:
+
+```
+# go get github.com/uber-common/cpustat
+```
+
+### 如何在 Linux 中使用 cpustat
+
+安装过程完成后,如果你不是以 root 用户操作系统,就需要像下面这样使用 sudo 命令以 root 权限运行 cpustat,否则会出现如下错误信息:
+
+```
+$ $GOBIN/cpustat
+This program uses the netlink taskstats interface, so it must be run as root.
+```
+
+注意:想要像你系统中已经安装的其它 Go 程序那样运行 cpustat,你需要把 `GOBIN` 变量添加到 `PATH` 环境变量。打开下面的链接学习如何在 Linux 中设置 `PATH` 变量。
+
+- [学习如何在 Linux 中永久设置你的 $PATH 变量][2]
+
+cpustat 是这样工作的:在每个时间间隔查询 `/proc` 目录获取当前[进程 ID 列表][5],然后进行以下操作(列表之后附有一段简化的示意代码):
+
+* 对于每个 PID,读取 `/proc/pid/stat`,然后计算和前一个样本的差别。
+* 如果是一个新的 PID,读取 `/proc/pid/cmdline`。
+* 对于每个 PID,发送 `netlink` 消息获取 `taskstat`,计算和前一个样本的差别。
+* 读取 `/proc/stat` 获取总的系统统计信息。
+
+cpustat 会根据获取所有这些统计信息所花费的时间,调整每次取样之间的休眠间隔。另外,每个样本还会依据取样之间实际经过的时间,记录它测量所用的时长,这可用于计算 cpustat 自身的延迟。
+
+当不带任何参数运行时,cpustat 默认会显示以下信息:样本间隔:200ms;汇总间隔:2s(10 个样本);[显示前 10 个进程][6];用户过滤器:all;pid 过滤器:all。正如下面截图所示:
+
+```
+$ sudo $GOBIN/cpustat
+```
+
+[
+ ![cpustat - 监控 Linux CPU 使用](http://www.tecmint.com/wp-content/uploads/2017/03/Cpustat-Monitor-Linux-CPU-Usage.png)
+][7]
+
+*cpustat – 监控 Linux CPU 使用*
+
+在上面的输出中,之前显示的系统范围的度量字段意义如下:
+
+* usr - 用户模式运行时间占 CPU 百分比的 min/avg/max 值。
+* sys - 系统模式运行时间占 CPU 百分比的 min/avg/max 值。
+* nice - 用户模式低优先级运行时间占 CPU 百分比的 min/avg/max 值。
+* idle - 用户模式空闲时间占 CPU 百分比的 min/avg/max 值。
+* iowait - 等待磁盘 IO 的 min/avg/max 延迟时间。
+* prun - 处于可运行状态的 min/avg/max 进程数量(同“平均负载”一样)。
+* pblock - 被磁盘 IO 阻塞的 min/avg/max 进程数量。
+* pstat - 在本次汇总间隔里启动的进程/线程数目。
+
+同样还是上面的输出,对于一个进程,不同列的意思分别是:
+
+* name - 从 `/proc/pid/stat` 或 `/proc/pid/cmdline` 获取的进程名称。
+* pid - 进程 ID,也被用作 “tgid” (线程组 ID)。
+* min - 该 pid 的用户模式+系统模式时间的最小样本,取自 `/proc/pid/stat`。比率是 CPU 的百分比。
+* max - 该 pid 的用户模式+系统模式时间的最大样本,取自 `/proc/pid/stat`。
+* usr - 在汇总期间该 pid 的平均用户模式运行时间,取自 `/proc/pid/stat`。
+* sys - 在汇总期间该 pid 的平均系统模式运行时间,取自 `/proc/pid/stat`。
+* nice - 表示该进程的当前 “nice” 值,取自 `/proc/pid/stat`。值越高表示越好(nicer)。
+* runq - 进程和它所有线程可运行但等待运行的时间,通过 netlink 取自 taskstats。比率是 CPU 的百分比。
+* iow - 进程和它所有线程被磁盘 IO 阻塞的时间,通过 netlink 取自 taskstats。比率是 CPU 的百分比,对整个汇总间隔平均。
+* swap - 进程和它所有线程等待被换入(swap in)的时间,通过 netlink 取自 taskstats。比率是 CPU 的百分比,对整个汇总间隔平均。
+* vcx 和 icx - 在汇总间隔期间,进程及其所有线程的自愿(vcx)和非自愿(icx)上下文切换的总次数,通过 netlink 取自 taskstats。
+* rss - 从 `/proc/pid/stat` 获取的当前 RSS 值。它是指该进程正在使用的内存数量。
+* ctime - 在汇总间隔期间等待子进程退出的用户模式+系统模式 CPU 时间总和,取自 `/proc/pid/stat`。
+ 注意长时间运行的子进程可能导致混淆这个值,因为只有在子进程退出后才会报告时间。但是,这对于计算高频 cron 任务以及 CPU 时间经常被多个子进程使用的健康检查非常有帮助。
+* thrd - 汇总间隔最后线程的数目,取自 `/proc/pid/stat`。
+* sam - 在这个汇总间隔期间该进程的样本数目。最近启动或退出的进程可能看起来比汇总间隔的样本数目少。
+
+
+下面的命令显示了系统中运行的前 10 个 root 用户进程:
+
+```
+$ sudo $GOBIN/cpustat -u root
+```
+
+[
+ ![查找 root 用户正在运行的进程](http://www.tecmint.com/wp-content/uploads/2017/03/show-root-user-processes.png)
+][8]
+
+*查找 root 用户正在运行的进程*
+
+要想用更好看的终端模式显示输出,像下面这样用 `-t` 选项:
+
+```
+$ sudo $GOBIN/cpustat -u root -t
+```
+
+[
+ ![root 用户正在运行的进程](http://www.tecmint.com/wp-content/uploads/2017/03/Root-User-Runnng-Processes.png)
+][9]
+
+*root 用户正在运行的进程*
+
+要查看前 [x 个进程][10](默认是 10),你可以使用 `-n` 选项,下面的命令显示了系统中 [正在运行的前 20 个进程][11]:
+
+```
+$ sudo $GOBIN/cpustat -n 20
+```
+
+你也可以像下面这样使用 `-cpuprofile` 选项将 CPU 信息写到文件,然后用 [cat 命令][12]查看文件:
+
+```
+$ sudo $GOBIN/cpustat -cpuprofile cpuprof.txt
+$ cat cpuprof.txt
+```
+
+要显示帮助信息,像下面这样使用 `-h` 选项:
+
+```
+$ sudo $GOBIN/cpustat -h
+```
+
+可以从 cpustat 的 Github 仓库:[https://github.com/uber-common/cpustat][13] 查阅其它资料。
+
+就是这些!在这篇文章中,我们向你展示了如何安装和使用 cpustat,Linux 下的一个有用的系统性能测量工具。通过下面的评论框和我们分享你的想法吧。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Aaron Kili 是一个 Linux 和 F.O.S.S(Free and Open-Source Software) 爱好者,一个 Linux 系统管理员、web 开发员,现在也是 TecMint 的内容创建者,他喜欢和电脑一起工作,他相信知识共享。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/cpustat-monitors-cpu-utilization-by-processes-in-linux/
+
+作者:[Aaron Kili][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/aaronkili/
+
+[1]:http://www.tecmint.com/install-go-in-linux/
+[2]:http://www.tecmint.com/set-path-variable-linux-permanently/
+[3]:http://www.tecmint.com/install-go-in-linux/
+[4]:http://www.tecmint.com/command-line-tools-to-monitor-linux-performance/
+[5]:http://www.tecmint.com/find-process-name-pid-number-linux/
+[6]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
+[7]:http://www.tecmint.com/wp-content/uploads/2017/03/cpustat-Monitor-Linux-CPU-Usage.png
+[8]:http://www.tecmint.com/wp-content/uploads/2017/03/show-root-user-processes.png
+[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Root-User-Runnng-Processes.png
+[10]:http://www.tecmint.com/find-processes-by-memory-usage-top-batch-mode/
+[11]:http://www.tecmint.com/install-htop-linux-process-monitoring-for-rhel-centos-fedora/
+[12]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
+[13]:https://github.com/uber-common/cpustat
+[14]:http://www.tecmint.com/author/aaronkili/
+[15]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
+[16]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/published/20170422 The root of all eval.md b/published/20170422 The root of all eval.md
new file mode 100644
index 0000000000..df09de3648
--- /dev/null
+++ b/published/20170422 The root of all eval.md
@@ -0,0 +1,69 @@
+eval 之源
+============================================================
+
+(LCTT 译注:本文标题 “The root of all eval” 影射著名歌曲“The root of all evil”(万恶之源))
+
+唉,`eval` 这个函数让我爱恨交织,而且多半是后者。
+
+```
+$ perl -E'my $program = q[say "OH HAI"]; eval $program'
+OH HAI
+```
+
+当 `eval` 函数在 Perl 6 中被重命名为 `EVAL` 时,我感到有点震惊(这要追溯到 2013 年,[在这里][2]讨论规范之后)。我一直没有从内心接受这样的做法。虽然我认为这个意见不错,但在这一点上我似乎或多或少是孤独的。
+
+理由是“这个函数真的很奇怪,所以我们应该用大写来标记”,就像我们对 `BEGIN` 和其他 phaser 所做的那样。对 `BEGIN` 和其他 phaser 鼓励使用大写,这点我是同意的:phaser 能将程序“脱离正常控制流”,但是 `eval` 函数并不能。(LCTT 译注:在 Perl 6 当中,[phaser](https://docs.perl6.org/language/phasers) 是在一个特定的执行阶段中调用的代码块。)
+
+其他大写的地方像是 `.WHAT` 这样的东西,它看起来像属性,但是会在编译时将代码变成完全不同的东西。因为这发生在常规情况之外,因此大写甚至是被鼓励的。
+
+`eval` 归根到底是另一个函数。是的,这是一个潜在存在大量副作用的函数。但是那么多的标准函数都有大量的副作用。(举几个例子:`shell`、 `die`、 `exit`)你没看到有人呼吁将它们大写。
+
+我猜有人会争论说 `eval` 是非常特别的,因为它以正常函数所没有的方式钩到编译器和运行时里面。(这也是 TimToady 在重命名该函数的[提交消息][3]中所解释的。)但这是一个基于实现细节的论点,并不令人满意。它同样适用于刚才提到的那些小写函数。
+
+雪上加霜的是,更名后 `EVAL` 也更难于使用:
+
+```
+$ perl6 -e'my $program = q[say "OH HAI"]; EVAL $program'
+===SORRY!=== Error while compiling -e
+EVAL is a very dangerous function!!! (use the MONKEY-SEE-NO-EVAL pragma to override this error,
+but only if you're VERY sure your data contains no injection attacks)
+at -e:1
+------> program = q[say "OH HAI"]; EVAL $program⏏
+
+$ perl6 -e'use MONKEY-SEE-NO-EVAL; my $program = q[say "OH HAI"]; EVAL $program'
+OH HAI
+```
+
+首先,注入攻击是一个真实的问题,并不是一个笑话。我们应该互相教育对方和新手。
+
+其次,这个错误消息(`"EVAL is a very dangerous function!!!"`)完全是恐吓多于帮助。我觉得当我们向人们解释代码注入的危险时,我们需要冷静并且切合实际,而不是用三个感叹号。这个错误信息对[已经知道什么是注入攻击的人][4]来说是有意义的,对于那些不了解这种风险的人员,它没有提供任何提示或线索。
+
+(Perl 6 社区并不是唯一对 `eval` 歇斯底里的,昨天我偶然发现了一个 StackOverflow 主题,关于如何将一个有类型名称的字符串转换为 JavaScript 中的相应构造函数,一些人不幸地提出了用 `eval`,而其他人立即集结起来指出这是多么不负责任,就像膝跳反射那样——“因为 eval 是坏的”)。
+
+第三,“MONKEY-SEE-NO-EVAL”。拜托,我们能不能不要这样……汗,要启用一个核弹级的函数,却得靠这种猴子般随意、轻率的咒语;而且我奇怪地发现,*启用* `EVAL` 函数的竟然是一个称为 `NO-EVAL` 的东西。这并不符合“最少惊喜”原则。
+
+不管怎样,有一天,我意识到我可以同时解决全大写名字问题和该指令的必要问题:
+
+```
+$ perl6 -e'my &eval = &EVAL; my $program = q[say "OH HAI"]; eval $program'
+OH HAI
+```
+
+我很高兴我能想到这点子并记录下来。显然,只要把它绑回旧名字,这个非常危险的函数(`!!!`)就又突然没问题了。耶!
+
+
+--------------------------------------------------------------------------------
+
+via: http://strangelyconsistent.org/blog/the-root-of-all-eval
+
+作者:[Carl Mäsak][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://strangelyconsistent.org/about
+[1]:http://strangelyconsistent.org/blog/the-root-of-all-eval
+[2]:https://github.com/perl6/specs/issues/50
+[3]:https://github.com/perl6/specs/commit/0b7df09ecc096eed5dc30f3dbdf568bbfd9de8f6
+[4]:http://bobby-tables.com/
diff --git a/sources/talk/20170119 The Many the Humble the Ubuntu Users.md b/sources/talk/20170119 The Many the Humble the Ubuntu Users.md
deleted file mode 100644
index 8ff6cf40f7..0000000000
--- a/sources/talk/20170119 The Many the Humble the Ubuntu Users.md
+++ /dev/null
@@ -1,54 +0,0 @@
-The Many, the Humble, the Ubuntu Users
-============================================================
-
-#### The proverbial “better mousetrap” isn’t one that takes a certified biologist to use. Like Ubuntu, it just needs to do its job extremely well and with little fuss.
-
-### Roblimo’s Hideaway
-
- ![Ubuntu Unity](https://i0.wp.com/fossforce.com/wp-content/uploads/2017/01/UbuntuDesktop.png?resize=524%2C295)
-
-I have never been much of a leading-edge computing person. In fact, I first got mildly famous online writing a weekly column titled “This Old PC” for Time/Life about making do with used gear — often by installing Linux on it — and after that an essentially identical column for Andover.net titled “Cheap Computing,” which was also about saving money in a world where most online computing columns seemed to be about getting you to spend until you had no money left to spend on food.
-
-Most of the early Linux adopters I knew were infatuated with their computers and the software that made them useful. They loved poring over source code and making minor changes. They were, for the most part, computer science students or worked as IT people. Their computers and computer networks fascinated them, as they should have.
-
-I was (and still am) a writer, not a computer science guy. For me, computers have always been tools. I want them to sit quietly until I tell them to do something, then follow my orders with the minimum possible fuss and bother. I like a GUI, since I don’t administer my PC or network often enough to memorize long command strings. Sure, I can look them up and type them in, but I’d really rather be at the beach.
-
-There was a time when, in Linux circles, mere _users_ were rare. “What do you mean, you just want to use your computer to type articles and maybe add a little HTML to them?” the developer and admin types seemed to ask, as if all fields of endeavor other than coding were inferior to what they did.
-
-But despite the sneers, I kept hammering a theme in speech after speech and conversation after conversation that went sort of like this: “Instead of scratching only your own itches, why not scratch your girlfriend’s itch? How about your coworkers? And people who work at your favorite restaurant? And what about your doctor? Don’t you want him to spend his time doctoring, not worrying about apt get this and grep that?”
-
-So yes, since I wanted easy-to-use Linux, I was an [early Mandrake user][1]. And today, I am a happy Ubuntu user.
-
-Why Ubuntu? Hey! Why not?! It’s the Toyota Camry (or maybe Honda Civic) of Linux distros. Plain-jane. So popular that support is easy to find on IRC, Linux Questions, and Ubuntu’s own extensive forums, and many other places.
-
-Sure, it’s cooler to use Debian or Fedora, and Mint looks jazzier out of the box, but I’m _still_ mostly interested in writing stories and adding a little HTML to them, along with reading this and that in my browser, editing work in Google Docs for a corporate client or two, keeping up with my email, doing this or that with a picture now and then…. all basic computer user stuff.
-
-And with all this going on, the appearance of my desktop is meaningless. I can’t see it! It’s covered with application windows! And I’m talking two monitors, not just one. I have, let’s see…. 17 Chrome tabs open in two windows. And GIMP running. And [Bluefish][2], which I’m using right now, to type this essay.
-
-So for me Ubuntu is the path of least resistance. Mint may be a little cuter, but when you come right down to it, and strip away the trim, isn’t it really Ubuntu? So if I use the same few programs over and over, which I do, and can’t see the desktop anyway, who cares if it’s brown?
-
-Some studies say Mint is more popular. Others say Debian. But they all show Ubuntu in the top few, year after year.
-
-So call me mass-average. Call me boring. Call me one of the many, the humble, the Ubuntu users — at least for now…
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-![](http://0.gravatar.com/avatar/f861a631676e6d4d2f4e4de2454f230e?s=80&d=blank&r=pg)
-
-Robin "Roblimo" Miller is a freelance writer and former editor-in-chief at Open Source Technology Group, the company that owned SourceForge, freshmeat, Linux.com, NewsForge, ThinkGeek and Slashdot, and until recently served as a video editor at Slashdot. He also publishes the blog Robin ‘Roblimo’ Miller’s Personal Site. @robinAKAroblimo
-
---------------------------------------------------------------------------------
-
-via: http://fossforce.com/2017/01/many-humble-ubuntu-users/
-
-作者:[Robin "Roblimo" Miller][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.roblimo.com/
-[1]:https://linux.slashdot.org/story/00/11/02/2324224/mandrake-72-in-wal-mart-a-good-idea
-[2]:http://bluefish.openoffice.nl/index.html
diff --git a/sources/talk/20170402 Why do developers who could work anywhere flock to the worlds most expensive cities.md b/sources/talk/20170402 Why do developers who could work anywhere flock to the worlds most expensive cities.md
deleted file mode 100644
index d715a85d0b..0000000000
--- a/sources/talk/20170402 Why do developers who could work anywhere flock to the worlds most expensive cities.md
+++ /dev/null
@@ -1,53 +0,0 @@
-Why do developers who could work anywhere flock to the world’s most expensive cities?
-============================================================
- ![](https://tctechcrunch2011.files.wordpress.com/2017/04/img_20170401_1835042.jpg?w=977)
-
-Politicians and economists [lament][10] that certain alpha regions — SF, LA, NYC, Boston, Toronto, London, Paris — attract all the best jobs while becoming repellently expensive, reducing economic mobility and contributing to further bifurcation between haves and have-nots. But why don’t the best jobs move elsewhere?
-
-Of course, many of them can’t. The average financier in NYC or London (until Brexit annihilates London’s banking industry, of course…) would be laughed out of the office, and not invited back, if they told their boss they wanted to henceforth work from Chiang Mai.
-
-But this isn’t true of (much of) the software field. The average web/app developer might have such a request declined; but they would not be laughed at, or fired. The demand for good developers greatly outstrips supply, and in this era of Skype and Slack, there’s nothing about software development that requires meatspace interactions.
-
-(This is even more true of writers, of course; I did in fact post this piece from Pohnpei. But writers don’t have anything like the leverage of software developers.)
-
-Some people will tell you that remote teams are inherently less effective and productive than localized ones, or that “serendipitous collisions” are so important that every employee must be forced to the same physical location every day so that these collisions can be manufactured. These people are wrong, as long as the team in question is small — on the order of handfuls, dozens or scores, rather than hundreds or thousands — and flexible.
-
-I should know: at [HappyFunCorp][11], we work extensively with remote teams, and actively recruit remote developers, and it works out fantastically well. A day in which I interact and collaborate with developers in Stockholm, São Paulo, Shanghai, Brooklyn and New Delhi, from my own home base in San Francisco, is not at all unusual.
-
-At this point, whether it’s a good idea is almost irrelevant, though. Supply and demand is such that any sufficiently skilled developer could become a so-called digital nomad if they really wanted to. But many who could, do not. I recently spent some time in Reykjavik at a house Airbnb-ed for the month by an ever-shifting crew of temporary remote workers, keeping East Coast time to keep up with their jobs, while spending mornings and weekends exploring Iceland — but almost all of us then returned to live in the Bay Area.
-
-Economically, of course, this is insane. Moving to and working from Southeast Asia would save us thousands of dollars a month in rent alone. So why do people who could live in Costa Rica on a San Francisco salary, or in Berlin while charging NYC rates, choose not to do so? Why are allegedly hardheaded engineers so financially irrational?
-
-Of course there are social and cultural reasons. Chiang Mai is very nice, but doesn’t have the Met, or steampunk masquerade parties or 50 foodie restaurants within a 15-minute walk. Berlin is lovely, but doesn’t offer kite surfing, or Sierra hiking or California weather. Neither promises an effectively limitless population of people with whom you share values and a first language.
-
-And yet I think there’s much more to it than this. I believe there’s a more fundamental economic divide opening than the one between haves and have-nots. I think we are witnessing a growing rift between the world’s Extremistan cities, in which truly extraordinary things can be achieved, and its Mediocristan towns, in which you can work and make money and be happy but never achieve greatness. (Labels stolen from the great Nassim Taleb.)
-
-The arts have long had Extremistan cities. That’s why aspiring writers move to New York City, and even directors and actors who found international success are still drawn to L.A. like moths to a klieg light. Now it is true of tech, too. Even if you don’t even want to try to (help) build something extraordinary — and the startup myth is so powerful today that it’s a very rare engineer indeed who hasn’t at least dreamed about it — the prospect of being _where great things happen_ is intoxicatingly enticing.
-
-But the interesting thing about this is that it could, in theory, change; because — as of quite recently — distributed, decentralized teams can, in fact, achieve extraordinary things. The cards are arguably stacked against them, because VCs tend to be quite myopic. But no law dictates that unicorns may only be born in California and a handful of other secondary territories; and it seems likely that, for better or worse, Extremistan is spreading. It would be pleasantly paradoxical if that expansion ultimately leads to _lower_ rents in the Mission.
-
---------------------------------------------------------------------------------
-
-via: https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
-
-作者:[ Jon Evans ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://techcrunch.com/author/jon-evans/
-[1]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#comments
-[2]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#
-[3]:http://twitter.com/share?via=techcrunch&url=http://tcrn.ch/2owXJ0C&text=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&hashtags=
-[4]:https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Ftechcrunch.com%2F2017%2F04%2F02%2Fwhy-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities%2F&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
-[5]:https://plus.google.com/share?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
-[6]:http://www.reddit.com/submit?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
-[7]:http://www.stumbleupon.com/badge/?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
-[8]:mailto:?subject=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities?&body=Article:%20https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
-[9]:https://share.flipboard.com/bookmarklet/popout?v=2&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
-[10]:https://mobile.twitter.com/Noahpinion/status/846054187288866
-[11]:http://happyfuncorp.com/
-[12]:https://twitter.com/rezendi
-[13]:https://techcrunch.com/author/jon-evans/
-[14]:https://techcrunch.com/2017/04/01/discussing-the-limits-of-artificial-intelligence/
diff --git a/sources/tech/ 20170422 FEWER MALLOCS IN CURL.md b/sources/tech/ 20170422 FEWER MALLOCS IN CURL.md
new file mode 100644
index 0000000000..e1b65b94f8
--- /dev/null
+++ b/sources/tech/ 20170422 FEWER MALLOCS IN CURL.md
@@ -0,0 +1,152 @@
+FEWER MALLOCS IN CURL
+===========================================================
+
+![](https://daniel.haxx.se/blog/wp-content/uploads/2016/09/IMG_20160916_122707-1038x576.jpg)
+
+Today I landed yet [another small change][4] to libcurl internals that further reduces the number of small mallocs we do. This time the generic linked list functions got converted to become malloc-less (the way linked list functions should behave, really).
+
+### Instrument mallocs
+
+I started out my quest a few weeks ago by instrumenting our memory allocations. This is easy since curl has had its own memory debug and logging system for many years. Using a debug build of curl I run this script in my build dir:
+
+```
+#!/bin/sh
+export CURL_MEMDEBUG=$HOME/tmp/curlmem.log
+./src/curl http://localhost
+./tests/memanalyze.pl -v $HOME/tmp/curlmem.log
+```
+
+For curl 7.53.1, this counted about 115 memory allocations. Is that many or a few?
+
+The memory log is very basic. To give you an idea what it looks like, here’s an example snippet:
+
+```
+MEM getinfo.c:70 free((nil))
+MEM getinfo.c:73 free((nil))
+MEM url.c:294 free((nil))
+MEM url.c:297 strdup(0x559e7150d616) (24) = 0x559e73760f98
+MEM url.c:294 free((nil))
+MEM url.c:297 strdup(0x559e7150d62e) (22) = 0x559e73760fc8
+MEM multi.c:302 calloc(1,480) = 0x559e73760ff8
+MEM hash.c:75 malloc(224) = 0x559e737611f8
+MEM hash.c:75 malloc(29152) = 0x559e737a2bc8
+MEM hash.c:75 malloc(3104) = 0x559e737a9dc8
+```
+
+### Check the log
+
+I then studied the log closer and I realized that there were many small memory allocations done from the same code lines. We clearly had some rather silly code patterns where we would allocate a struct and then add that struct to a linked list or a hash and that code would then subsequently add yet another small struct and similar – and then often do that in a loop. (I say _we_ here to avoid blaming anyone, but of course I myself am to blame for most of this…)
+
+Those two allocations would always happen in pairs and they would be freed at the same time. I decided to address those. Doing very small (less than say 32 bytes) allocations is also wasteful just due to the very large amount of data in proportion that will be used just to keep track of that tiny little memory area (within the malloc system). Not to mention fragmentation of the heap.
+
+So, fixing the hash code and the linked list code to not use mallocs were immediate and easy ways to remove over 20% of the mallocs for a plain and simple ‘curl http://localhost’ transfer.
+
+At this point I sorted all allocations based on size and checked all the smallest ones. One that stood out was one we made in _curl_multi_wait()_, a function that is called over and over in a typical curl transfer main loop. I converted it over to [use the stack][5] for most typical use cases. Avoiding mallocs in very repeatedly called functions is a good thing.
+
+### Recount
+
+Today, the script from above shows that the same “curl localhost” command is down to 80 allocations from the 115 curl 7.53.1 used. Without sacrificing anything really. An easy 26% improvement. Not bad at all!
+
+But okay, since I modified curl_multi_wait() I wanted to also see how it actually improves things for a slightly more advanced transfer. I took the [multi-double.c][6] example code, added the call to initiate the memory logging, made it use curl_multi_wait() and had it download these two URLs in parallel:
+
+```
+http://www.example.com/
+http://localhost/512M
+```
+
+The second one being just 512 megabytes of zeroes and the first being a 600 bytes something public html page. Here’s the [count-malloc.c code][7].
+
+First, I brought out 7.53.1 and built the example against that and had the memanalyze script check it:
+
+```
+Mallocs: 33901
+Reallocs: 5
+Callocs: 24
+Strdups: 31
+Wcsdups: 0
+Frees: 33956
+Allocations: 33961
+Maximum allocated: 160385
+```
+
+Okay, so it used 160KB of memory totally and it did over 33,900 allocations. But ok, it downloaded over 512 megabytes of data so it makes one malloc per 15KB of data. Good or bad?
+
+Back to git master, the version we call 7.54.1-DEV right now – since we’re not quite sure which version number it’ll become when we release the next release. It can become 7.54.1 or 7.55.0, it has not been determined yet. But I digress, I ran the same modified multi-double.c example again, ran memanalyze on the memory log again and it now reported…
+
+```
+Mallocs: 69
+Reallocs: 5
+Callocs: 24
+Strdups: 31
+Wcsdups: 0
+Frees: 124
+Allocations: 129
+Maximum allocated: 153247
+```
+
+I had to look twice. Did I do something wrong? I better run it again just to double-check. The results are the same no matter how many times I run it…
+
+### 33,961 vs 129
+
+curl_multi_wait() is called a lot of times in a typical transfer, and it had at least one of the memory allocations we normally did during a transfer so removing that single tiny allocation had a pretty dramatic impact on the counter. A normal transfer also moves things in and out of linked lists and hashes a bit, but they too are mostly malloc-less now. Simply put: the remaining allocations are not done in the transfer loop so they’re way less important.
+
+The old curl did 263 times the number of allocations the current does for this example. Or the other way around: the new one does 0.37% the number of allocations the old one did…
+
+As an added bonus, the new one also allocates less memory in total as it decreased that amount by 7KB (4.3%).
+
+### Are mallocs important?
+
+In the day and age with many gigabytes of RAM and all, does a few mallocs in a transfer really make a notable difference for mere mortals? What is the impact of 33,832 extra mallocs done for 512MB of data?
+
+To measure what impact these changes have, I decided to compare HTTP transfers from localhost and see if we can see any speed difference. localhost is fine for this test since there’s no network speed limit, but the faster curl is the faster the download will be. The server side will be equally fast/slow since I’ll use the same set for both tests.
+
+I built curl 7.53.1 and curl 7.54.1-DEV identically and ran this command line:
+
+```
+curl http://localhost/80GB -o /dev/null
+```
+
+80 gigabytes downloaded as fast as possible written into the void.
+
+The exact numbers I got for this may not be totally interesting, as it will depend on CPU in the machine, which HTTP server that serves the file and optimization level when I build curl etc. But the relative numbers should still be highly relevant. The old code vs the new.
+
+7.54.1-DEV repeatedly performed 30% faster! The 2200MB/sec in my build of the earlier release increased to over 2900 MB/sec with the current version.
+
+The point here is of course not that it easily can transfer HTTP over 20GB/sec using a single core on my machine – since there are very few users who actually do that speedy transfers with curl. The point is rather that curl now uses less CPU per byte transferred, which leaves more CPU over to the rest of the system to perform whatever it needs to do. Or to save battery if the device is a portable one.
+
+On the cost of malloc: The 512MB test I did resulted in 33832 more allocations using the old code. The old code transferred HTTP at a rate of about 2200MB/sec. That equals 145,827 mallocs/second – that are now removed! A 600 MB/sec improvement means that curl managed to transfer 4300 bytes extra for each malloc it didn’t do, each second.
+
+### Was removing these mallocs hard?
+
+Not at all, it was all straightforward. It is however interesting that there’s still room for changes like this in a project this old. I’ve had this idea for some years and I’m glad I finally took the time to make it happen. Thanks to our test suite I could do this level of “drastic” internal change with a fairly high degree of confidence that I don’t introduce too terrible regressions. Thanks to our APIs being good at hiding internals, this change could be done completely without changing anything for old or new applications.
+
+(Yeah I haven’t shipped the entire change in a release yet so there’s of course a risk that I’ll have to regret my “this was easy” statement…)
+
+### Caveats on the numbers
+
+There have been 213 commits in the curl git repo from 7.53.1 till today. There’s a chance one or more other commits than just the pure alloc changes have made a performance impact, even if I can’t think of any.
+
+### More?
+
+Are there more “low hanging fruits” to pick here in the similar vein?
+
+Perhaps. We don’t do a lot of performance measurements or comparisons so who knows, we might do more silly things that we could stop doing and do even better. One thing I’ve always wanted to do, but never got around to, was to add daily “monitoring” of memory/mallocs used and how fast curl performs in order to better track when we unknowingly regress in these areas.
+
+--------------------------------------------------------------------------------
+
+via: https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/
+
+作者:[DANIEL STENBERG][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://daniel.haxx.se/blog/author/daniel/
+[1]:https://daniel.haxx.se/blog/author/daniel/
+[2]:https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/
+[3]:https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/#comments
+[4]:https://github.com/curl/curl/commit/cbae73e1dd95946597ea74ccb580c30f78e3fa73
+[5]:https://github.com/curl/curl/commit/5f1163517e1597339d
+[6]:https://github.com/curl/curl/commit/5f1163517e1597339d
+[7]:https://gist.github.com/bagder/dc4a42cb561e791e470362da7ef731d3
diff --git a/sources/tech/20110127 How debuggers work Part 2 - Breakpoints.md b/sources/tech/20110127 How debuggers work Part 2 - Breakpoints.md
index 69fd87a90e..59bad4ee0f 100644
--- a/sources/tech/20110127 How debuggers work Part 2 - Breakpoints.md
+++ b/sources/tech/20110127 How debuggers work Part 2 - Breakpoints.md
@@ -1,5 +1,3 @@
-Firstadream translating
-
[How debuggers work: Part 2 - Breakpoints][26]
============================================================
diff --git a/sources/tech/20110207 How debuggers work Part 3 - Debugging information.md b/sources/tech/20110207 How debuggers work Part 3 - Debugging information.md
deleted file mode 100644
index a4a525a829..0000000000
--- a/sources/tech/20110207 How debuggers work Part 3 - Debugging information.md
+++ /dev/null
@@ -1,349 +0,0 @@
-[How debuggers work: Part 3 - Debugging information][25]
-============================================================
-
-
-This is the third part in a series of articles on how debuggers work. Make sure you read [the first][26] and [the second][27] parts before this one.
-
-### In this part
-
-I'm going to explain how the debugger figures out where to find the C functions and variables in the machine code it wades through, and the data it uses to map between C source code lines and machine language words.
-
-### Debugging information
-
-Modern compilers do a pretty good job converting your high-level code, with its nicely indented and nested control structures and arbitrarily typed variables into a big pile of bits called machine code, the sole purpose of which is to run as fast as possible on the target CPU. Most lines of C get converted into several machine code instructions. Variables are shoved all over the place - into the stack, into registers, or completely optimized away. Structures and objects don't even _exist_ in the resulting code - they're merely an abstraction that gets translated to hard-coded offsets into memory buffers.
-
-So how does a debugger know where to stop when you ask it to break at the entry to some function? How does it manage to find what to show you when you ask it for the value of a variable? The answer is - debugging information.
-
-Debugging information is generated by the compiler together with the machine code. It is a representation of the relationship between the executable program and the original source code. This information is encoded into a pre-defined format and stored alongside the machine code. Many such formats were invented over the years for different platforms and executable files. Since the aim of this article isn't to survey the history of these formats, but rather to show how they work, we'll have to settle on something. This something is going to be DWARF, which is almost ubiquitously used today as the debugging information format for ELF executables on Linux and other Unix-y platforms.
-
-### The DWARF in the ELF
-
- ![](http://eli.thegreenplace.net/images/2011/02/dwarf_logo.gif)
-
-According to [its Wikipedia page][17], DWARF was designed alongside ELF, although it can in theory be embedded in other object file formats as well [[1]][18].
-
-DWARF is a complex format, building on many years of experience with previous formats for various architectures and operating systems. It has to be complex, since it solves a very tricky problem - presenting debugging information from any high-level language to debuggers, providing support for arbitrary platforms and ABIs. It would take much more than this humble article to explain it fully, and to be honest I don't understand all its dark corners well enough to engage in such an endeavor anyway [[2]][19]. In this article I will take a more hands-on approach, showing just enough of DWARF to explain how debugging information works in practical terms.
-
-### Debug sections in ELF files
-
-First let's take a glimpse of where the DWARF info is placed inside ELF files. ELF defines arbitrary sections that may exist in each object file. A _section header table_ defines which sections exist and their names. Different tools treat various sections in special ways - for example the linker is looking for some sections, the debugger for others.
-
-We'll be using an executable built from this C source for our experiments in this article, compiled into tracedprog2:
-
-```
-#include
-
-void do_stuff(int my_arg)
-{
- int my_local = my_arg + 2;
- int i;
-
- for (i = 0; i < my_local; ++i)
- printf("i = %d\n", i);
-}
-
-int main()
-{
- do_stuff(2);
- return 0;
-}
-```
-
-Dumping the section headers from the ELF executable using objdump -h we'll notice several sections with names beginning with .debug_ - these are the DWARF debugging sections:
-
-```
-26 .debug_aranges 00000020 00000000 00000000 00001037
- CONTENTS, READONLY, DEBUGGING
-27 .debug_pubnames 00000028 00000000 00000000 00001057
- CONTENTS, READONLY, DEBUGGING
-28 .debug_info 000000cc 00000000 00000000 0000107f
- CONTENTS, READONLY, DEBUGGING
-29 .debug_abbrev 0000008a 00000000 00000000 0000114b
- CONTENTS, READONLY, DEBUGGING
-30 .debug_line 0000006b 00000000 00000000 000011d5
- CONTENTS, READONLY, DEBUGGING
-31 .debug_frame 00000044 00000000 00000000 00001240
- CONTENTS, READONLY, DEBUGGING
-32 .debug_str 000000ae 00000000 00000000 00001284
- CONTENTS, READONLY, DEBUGGING
-33 .debug_loc 00000058 00000000 00000000 00001332
- CONTENTS, READONLY, DEBUGGING
-```
-
-The first number seen for each section here is its size, and the last is the offset where it begins in the ELF file. The debugger uses this information to read the section from the executable.
-
-Now let's see a few practical examples of finding useful debug information in DWARF.
-
-### Finding functions
-
-One of the most basic things we want to do when debugging is placing breakpoints at some function, expecting the debugger to break right at its entrance. To be able to perform this feat, the debugger must have some mapping between a function name in the high-level code and the address in the machine code where the instructions for this function begin.
-
-This information can be obtained from DWARF by looking at the .debug_info section. Before we go further, a bit of background. The basic descriptive entity in DWARF is called the Debugging Information Entry (DIE). Each DIE has a tag - its type, and a set of attributes. DIEs are interlinked via sibling and child links, and values of attributes can point at other DIEs.
-
-Let's run:
-
-```
-objdump --dwarf=info tracedprog2
-```
-
-The output is quite long, and for this example we'll just focus on these lines [[3]][20]:
-
-```
-<1><71>: Abbrev Number: 5 (DW_TAG_subprogram)
- <72> DW_AT_external : 1
- <73> DW_AT_name : (...): do_stuff
- <77> DW_AT_decl_file : 1
- <78> DW_AT_decl_line : 4
- <79> DW_AT_prototyped : 1
- <7a> DW_AT_low_pc : 0x8048604
- <7e> DW_AT_high_pc : 0x804863e
- <82> DW_AT_frame_base : 0x0 (location list)
- <86> DW_AT_sibling : <0xb3>
-
-<1>: Abbrev Number: 9 (DW_TAG_subprogram)
- DW_AT_external : 1
- DW_AT_name : (...): main
- DW_AT_decl_file : 1
- DW_AT_decl_line : 14
- DW_AT_type : <0x4b>
- DW_AT_low_pc : 0x804863e
- DW_AT_high_pc : 0x804865a
- DW_AT_frame_base : 0x2c (location list)
-```
-
-There are two entries (DIEs) tagged DW_TAG_subprogram, which is a function in DWARF's jargon. Note that there's an entry for do_stuff and an entry for main. There are several interesting attributes, but the one that interests us here is DW_AT_low_pc. This is the program-counter (EIP in x86) value for the beginning of the function. Note that it's 0x8048604 for do_stuff. Now let's see what this address is in the disassembly of the executable by running objdump -d:
-
-```
-08048604 :
- 8048604: 55 push ebp
- 8048605: 89 e5 mov ebp,esp
- 8048607: 83 ec 28 sub esp,0x28
- 804860a: 8b 45 08 mov eax,DWORD PTR [ebp+0x8]
- 804860d: 83 c0 02 add eax,0x2
- 8048610: 89 45 f4 mov DWORD PTR [ebp-0xc],eax
- 8048613: c7 45 (...) mov DWORD PTR [ebp-0x10],0x0
- 804861a: eb 18 jmp 8048634
- 804861c: b8 20 (...) mov eax,0x8048720
- 8048621: 8b 55 f0 mov edx,DWORD PTR [ebp-0x10]
- 8048624: 89 54 24 04 mov DWORD PTR [esp+0x4],edx
- 8048628: 89 04 24 mov DWORD PTR [esp],eax
- 804862b: e8 04 (...) call 8048534
- 8048630: 83 45 f0 01 add DWORD PTR [ebp-0x10],0x1
- 8048634: 8b 45 f0 mov eax,DWORD PTR [ebp-0x10]
- 8048637: 3b 45 f4 cmp eax,DWORD PTR [ebp-0xc]
- 804863a: 7c e0 jl 804861c
- 804863c: c9 leave
- 804863d: c3 ret
-```
-
-Indeed, 0x8048604 is the beginning of do_stuff, so the debugger can have a mapping between functions and their locations in the executable.
-
-### Finding variables
-
-Suppose that we've indeed stopped at a breakpoint inside do_stuff. We want to ask the debugger to show us the value of the my_local variable. How does it know where to find it? Turns out this is much trickier than finding functions. Variables can be located in global storage, on the stack, and even in registers. Additionally, variables with the same name can have different values in different lexical scopes. The debugging information has to be able to reflect all these variations, and indeed DWARF does.
-
-I won't cover all the possibilities, but as an example I'll demonstrate how the debugger can find my_local in do_stuff. Let's start at .debug_info and look at the entry for do_stuff again, this time also looking at a couple of its sub-entries:
-
-```
-<1><71>: Abbrev Number: 5 (DW_TAG_subprogram)
- <72> DW_AT_external : 1
- <73> DW_AT_name : (...): do_stuff
- <77> DW_AT_decl_file : 1
- <78> DW_AT_decl_line : 4
- <79> DW_AT_prototyped : 1
- <7a> DW_AT_low_pc : 0x8048604
- <7e> DW_AT_high_pc : 0x804863e
- <82> DW_AT_frame_base : 0x0 (location list)
- <86> DW_AT_sibling : <0xb3>
- <2><8a>: Abbrev Number: 6 (DW_TAG_formal_parameter)
- <8b> DW_AT_name : (...): my_arg
- <8f> DW_AT_decl_file : 1
- <90> DW_AT_decl_line : 4
- <91> DW_AT_type : <0x4b>
- <95> DW_AT_location : (...) (DW_OP_fbreg: 0)
- <2><98>: Abbrev Number: 7 (DW_TAG_variable)
- <99> DW_AT_name : (...): my_local
- <9d> DW_AT_decl_file : 1
- <9e> DW_AT_decl_line : 6
- <9f> DW_AT_type : <0x4b>
- DW_AT_location : (...) (DW_OP_fbreg: -20)
-<2>: Abbrev Number: 8 (DW_TAG_variable)
- DW_AT_name : i
- DW_AT_decl_file : 1
- DW_AT_decl_line : 7
- DW_AT_type : <0x4b>
- DW_AT_location : (...) (DW_OP_fbreg: -24)
-```
-
-Note the first number inside the angle brackets in each entry. This is the nesting level - in this example entries with <2> are children of the entry with <1>. So we know that the variable my_local (marked by the DW_TAG_variable tag) is a child of the do_stuff function. The debugger is also interested in a variable's type to be able to display it correctly. In the case of my_local the type points to another DIE - <0x4b>. If we look it up in the output of objdump we'll see it's a signed 4-byte integer.
-
-To actually locate the variable in the memory image of the executing process, the debugger will look at the DW_AT_location attribute. For my_local it says DW_OP_fbreg: -20. This means that the variable is stored at offset -20 from the DW_AT_frame_base attribute of its containing function - which is the base of the frame for the function.
-
-The DW_AT_frame_base attribute of do_stuff has the value 0x0 (location list), which means that this value actually has to be looked up in the location list section. Let's look at it:
-
-```
-$ objdump --dwarf=loc tracedprog2
-
-tracedprog2: file format elf32-i386
-
-Contents of the .debug_loc section:
-
- Offset Begin End Expression
- 00000000 08048604 08048605 (DW_OP_breg4: 4 )
- 00000000 08048605 08048607 (DW_OP_breg4: 8 )
- 00000000 08048607 0804863e (DW_OP_breg5: 8 )
- 00000000
- 0000002c 0804863e 0804863f (DW_OP_breg4: 4 )
- 0000002c 0804863f 08048641 (DW_OP_breg4: 8 )
- 0000002c 08048641 0804865a (DW_OP_breg5: 8 )
- 0000002c
-```
-
-The location information we're interested in is the first one [[4]][21]. For each range of addresses where the debugger may be stopped, it specifies how to compute the current frame base (from which offsets to variables are taken) as an offset from a register. For x86, breg4 refers to esp and breg5 refers to ebp.
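-
-In code, picking the right frame-base rule for the current program counter is just a range lookup. Here's a small sketch (again my own illustration; the three entries are do_stuff's list transcribed from the dump, with DWARF register numbers 4 and 5 standing for esp and ebp):
-
-```
-#include <stdint.h>
-
-struct loc_entry {
-    uint32_t begin, end;   /* half-open address range */
-    int      reg;          /* DWARF register number: 4 = esp, 5 = ebp */
-    int      offset;       /* offset from that register */
-};
-
-/* do_stuff's frame-base location list, from .debug_loc above. */
-static const struct loc_entry frame_base_locs[] = {
-    { 0x8048604, 0x8048605, 4, 4 },   /* DW_OP_breg4: 4 */
-    { 0x8048605, 0x8048607, 4, 8 },   /* DW_OP_breg4: 8 */
-    { 0x8048607, 0x804863e, 5, 8 },   /* DW_OP_breg5: 8 */
-};
-
-/* Compute the frame base for the given pc and register values. */
-static uint32_t frame_base(uint32_t pc, uint32_t esp, uint32_t ebp)
-{
-    for (int i = 0; i < 3; ++i)
-        if (pc >= frame_base_locs[i].begin && pc < frame_base_locs[i].end)
-            return (frame_base_locs[i].reg == 4 ? esp : ebp)
-                   + frame_base_locs[i].offset;
-    return 0;   /* pc is outside do_stuff */
-}
-```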
-
-It's educational to look at the first several instructions of do_stuff again:
-
-```
-08048604 <do_stuff>:
- 8048604: 55 push ebp
- 8048605: 89 e5 mov ebp,esp
- 8048607: 83 ec 28 sub esp,0x28
- 804860a: 8b 45 08 mov eax,DWORD PTR [ebp+0x8]
- 804860d: 83 c0 02 add eax,0x2
- 8048610: 89 45 f4 mov DWORD PTR [ebp-0xc],eax
-```
-
-Note that ebp becomes relevant only after the second instruction is executed, and indeed for the first two addresses the base is computed from esp in the location information listed above. Once ebp is valid, it's convenient to compute offsets relative to it because it stays constant while esp keeps moving with data being pushed and popped from the stack.
-
-So where does that leave us with my_local? We're only really interested in its value after the instruction at 0x8048610 (where its value is placed in memory after being computed in eax), so the debugger will be using the DW_OP_breg5: 8 frame base to find it. Now it's time to rewind a little and recall that the DW_AT_location attribute for my_local says DW_OP_fbreg: -20. Let's do the math: -20 from the frame base, which is ebp + 8. We get ebp - 12. Now look at the disassembly again and note where the data is moved from eax - indeed, ebp - 12 is where my_local is stored.
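-
-Put together with the frame_base() sketch above, the whole computation fits in a few lines. This is a sketch, not the article's code: it assumes the ptrace-based tracer from the earlier parts of this series and a 32-bit target, with child_pid, pc, esp and ebp as hypothetical variables holding the tracee's state (e.g. obtained via PTRACE_GETREGS):
-
-```
-#include <stdint.h>
-#include <sys/ptrace.h>
-#include <sys/types.h>
-
-/* Read my_local from the traced child: DW_OP_fbreg: -20 off the frame
-   base, which past 0x8048607 is ebp + 8 - i.e. ebp - 12 overall. */
-static long read_my_local(pid_t child_pid, uint32_t pc,
-                          uint32_t esp, uint32_t ebp)
-{
-    uint32_t addr = frame_base(pc, esp, ebp) - 20;
-    return ptrace(PTRACE_PEEKDATA, child_pid, (void*)(long)addr, 0);
-}
-```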
-
-### Looking up line numbers
-
-When we talked about finding functions in the debugging information, I was cheating a little. When we debug C source code and put a breakpoint in a function, we're usually not interested in the first _machine code_ instruction [[5]][22]. What we're _really_ interested in is the first _C code_ line of the function.
-
-This is why DWARF encodes a full mapping between lines in the C source code and machine code addresses in the executable. This information is contained in the .debug_line section and can be extracted in a readable form as follows:
-
-```
-$ objdump --dwarf=decodedline tracedprog2
-
-tracedprog2: file format elf32-i386
-
-Decoded dump of debug contents of section .debug_line:
-
-CU: /home/eliben/eli/eliben-code/debugger/tracedprog2.c:
-File name Line number Starting address
-tracedprog2.c 5 0x8048604
-tracedprog2.c 6 0x804860a
-tracedprog2.c 9 0x8048613
-tracedprog2.c 10 0x804861c
-tracedprog2.c 9 0x8048630
-tracedprog2.c 11 0x804863c
-tracedprog2.c 15 0x804863e
-tracedprog2.c 16 0x8048647
-tracedprog2.c 17 0x8048653
-tracedprog2.c 18 0x8048658
-```
-
-It shouldn't be hard to see the correspondence between this information, the C source code, and the disassembly dump. Line number 5 points at the entry point to do_stuff - 0x8048604. The next line, 6, is where the debugger should really stop when asked to break in do_stuff, and it points at 0x804860a, which is just past the prologue of the function. This line information easily allows bi-directional mapping between lines and addresses (both directions are sketched in code after the following list):
-
-* When asked to place a breakpoint at a certain line, the debugger will use it to find which address it should put its trap on (remember our friend int 3 from the previous article?)
-* When an instruction causes a segmentation fault, the debugger will use it to find the source code line on which it happened.
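-
-Here's a minimal sketch of that two-way lookup (my own illustration; the entries are a slice of the decoded table above):
-
-```
-#include <stddef.h>
-#include <stdint.h>
-
-struct line_entry {
-    int      line;    /* source line number */
-    uint32_t addr;    /* first instruction generated for that line */
-};
-
-/* A slice of the decoded .debug_line table for do_stuff. */
-static const struct line_entry line_table[] = {
-    { 5,  0x8048604 }, { 6, 0x804860a }, { 9,  0x8048613 },
-    { 10, 0x804861c }, { 9, 0x8048630 }, { 11, 0x804863c },
-};
-#define NLINES (sizeof(line_table) / sizeof(line_table[0]))
-
-/* Line -> address: where to plant the breakpoint trap. */
-static uint32_t addr_for_line(int line)
-{
-    for (size_t i = 0; i < NLINES; ++i)
-        if (line_table[i].line == line)
-            return line_table[i].addr;
-    return 0;
-}
-
-/* Address -> line: which source line a faulting pc falls on.
-   Entries are sorted by address, so remember the last one <= addr. */
-static int line_for_addr(uint32_t addr)
-{
-    int best = 0;
-    for (size_t i = 0; i < NLINES; ++i)
-        if (line_table[i].addr <= addr)
-            best = line_table[i].line;
-    return best;
-}
-```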
-
-### libdwarf - Working with DWARF programmatically
-
-Employing command-line tools to access DWARF information, while useful, isn't fully satisfying. As programmers, we'd like to know how to write actual code that can read the format and extract what we need from it.
-
-Naturally, one approach is to grab the DWARF specification and start hacking away. Now, remember how everyone keeps saying that you should never, ever parse HTML manually but rather use a library? Well, with DWARF it's even worse. DWARF is _much_ more complex than HTML. What I've shown here is just the tip of the iceberg, and to make things even harder, most of this information is encoded in a very compact and compressed way in the actual object file [[6]][23].
-
-So we'll take another road and use a library to work with DWARF. There are two major libraries I'm aware of (plus a few less complete ones):
-
-1. BFD (libbfd) is used by the [GNU binutils][11], including objdump (which played a star role in this article), ld (the GNU linker) and as (the GNU assembler).
-2. libdwarf - which together with its big brother libelf is used for the tools on the Solaris and FreeBSD operating systems.
-
-I'm picking libdwarf over BFD because it appears less arcane to me and its license is more liberal (LGPL vs. GPL).
-
-Since libdwarf is itself quite complex, it requires a fair amount of code to operate. I'm not going to show all this code here, but [you can download][24] and run it yourself. To compile this file you'll need to have libelf and libdwarf installed, and pass the -lelf and -ldwarf flags to the linker.
-
-The demonstrated program takes an executable and prints the names of functions in it, along with their entry points. Here's what it produces for the C program we've been playing with in this article:
-
-```
-$ dwarf_get_func_addr tracedprog2
-DW_TAG_subprogram: 'do_stuff'
-low pc : 0x08048604
-high pc : 0x0804863e
-DW_TAG_subprogram: 'main'
-low pc : 0x0804863e
-high pc : 0x0804865a
-```
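-
-The heart of that program is a loop over each compilation unit's DIEs, looking for DW_TAG_subprogram entries. Heavily condensed, it looks roughly like the sketch below. This is based on the classic libdwarf API with all error handling stripped, so treat the details as approximate; obtaining cu_die itself takes a dwarf_next_cu_header()/dwarf_siblingof() dance that I'm omitting here:
-
-```
-#include <stdio.h>
-#include <dwarf.h>
-#include <libdwarf.h>
-
-/* Walk the children of a CU DIE; print each function's name and low pc. */
-static void list_funcs(Dwarf_Debug dbg, Dwarf_Die cu_die)
-{
-    Dwarf_Die die;
-    Dwarf_Error err;
-
-    if (dwarf_child(cu_die, &die, &err) != DW_DLV_OK)
-        return;
-    do {
-        Dwarf_Half tag;
-        dwarf_tag(die, &tag, &err);
-        if (tag == DW_TAG_subprogram) {
-            char* name = 0;
-            Dwarf_Addr lowpc = 0;
-            dwarf_diename(die, &name, &err);
-            dwarf_lowpc(die, &lowpc, &err);
-            printf("DW_TAG_subprogram: '%s'\nlow pc  : 0x%08llx\n",
-                   name, (unsigned long long)lowpc);
-        }
-    } while (dwarf_siblingof(dbg, die, &die, &err) == DW_DLV_OK);
-}
-```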
-
-The documentation of libdwarf (linked in the References section of this article) is quite good, and with some effort you should have no problem pulling any other information demonstrated in this article from the DWARF sections using it.
-
-### Conclusion and next steps
-
-Debugging information is a simple concept in principle. The implementation details may be intricate, but at the end of the day what matters is that we now know how the debugger finds the information it needs about the original source code from which the executable it's tracing was compiled. With this information in hand, the debugger bridges the world of the user, who thinks in terms of lines of code and data structures, and the world of the executable, which is just a bunch of machine code instructions and data in registers and memory.
-
-This article, with its two predecessors, concludes an introductory series that explains the inner workings of a debugger. Using the information presented here and some programming effort, it should be possible to create a basic but functional debugger for Linux.
-
-As for the next steps, I'm not sure yet. Maybe I'll end the series here, maybe I'll present some advanced topics such as backtraces, and perhaps debugging on Windows. Readers can also suggest ideas for future articles in this series or related material. Feel free to use the comments or send me an email.
-
-### References
-
-* objdump man page
-* Wikipedia pages for [ELF][12] and [DWARF][13].
-* [Dwarf Debugging Standard home page][14] - from here you can obtain the excellent DWARF tutorial by Michael Eager, as well as the DWARF standard itself. You'll probably want version 2 since it's what gcc produces.
-* [libdwarf home page][15] - the download package includes a comprehensive reference document for the library
-* [BFD documentation][16]
-
-
-[1] DWARF is an open standard, published here by the DWARF standards committee. The DWARF logo displayed above is taken from that website.
-
-[2] At the end of the article I've collected some useful resources that will help you get more familiar with DWARF, if you're interested. Particularly, start with the DWARF tutorial.
-
-[3] Here and in subsequent examples, I'm placing (...) instead of some longer and uninteresting information for the sake of more convenient formatting.
-
-[4] Because the DW_AT_frame_base attribute of do_stuff contains offset 0x0 into the location list. Note that the same attribute for main contains the offset 0x2c which is the offset for the second set of location expressions.
-
-[5] Where the function prologue is usually executed and the local variables aren't even valid yet.
-
-[6] Some parts of the information (such as location data and line number data) are encoded as instructions for a specialized virtual machine. Yes, really.
-
-* * *
-
---------------------------------------------------------------------------------
-
-via: http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information
-
-作者:[Eli Bendersky][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://eli.thegreenplace.net/
-[1]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id1
-[2]:http://dwarfstd.org/
-[3]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id2
-[4]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id3
-[5]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id4
-[6]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id5
-[7]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id6
-[8]:http://eli.thegreenplace.net/tag/articles
-[9]:http://eli.thegreenplace.net/tag/debuggers
-[10]:http://eli.thegreenplace.net/tag/programming
-[11]:http://www.gnu.org/software/binutils/
-[12]:http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
-[13]:http://en.wikipedia.org/wiki/DWARF
-[14]:http://dwarfstd.org/
-[15]:http://reality.sgiweb.org/davea/dwarf.html
-[16]:http://sourceware.org/binutils/docs-2.21/bfd/index.html
-[17]:http://en.wikipedia.org/wiki/DWARF
-[18]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id7
-[19]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id8
-[20]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id9
-[21]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id10
-[22]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id11
-[23]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id12
-[24]:https://github.com/eliben/code-for-blog/blob/master/2011/dwarf_get_func_addr.c
-[25]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information
-[26]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1/
-[27]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints/
diff --git a/sources/tech/20150316 Linux on UEFI A Quick Installation Guide.md b/sources/tech/20150316 Linux on UEFI A Quick Installation Guide.md
deleted file mode 100644
index 73db92f8a4..0000000000
--- a/sources/tech/20150316 Linux on UEFI A Quick Installation Guide.md
+++ /dev/null
@@ -1,220 +0,0 @@
-fuowang 翻译中
-
-Linux on UEFI: A Quick Installation Guide
-============================================================
-
-
-This Web page is provided free of charge and with no annoying outside ads; however, I did take time to prepare it, and Web hosting does cost money. If you find this Web page useful, please consider making a small donation to help keep this site up and running. Thanks!
-
-### Introduction
-
-For several years, a new firmware technology has been lurking in the wings, unknown to most ordinary users. Known as the [Extensible Firmware Interface (EFI),][29] or more recently as the Unified EFI (UEFI, which is essentially EFI 2._x_), this technology has begun replacing the older [Basic Input/Output System (BIOS)][30] firmware with which most experienced computer users are at least somewhat familiar.
-
-This page is a quick introduction to EFI for Linux users, including advice on getting started installing Linux to such a computer. Unfortunately, EFI is a dense topic; the EFI software itself is complex, and many implementations have system-specific quirks and even bugs. Thus, I cannot describe everything you'll need to know to install and use Linux on an EFI computer on this one page. It's my hope that you'll find this page a useful starting point, though, and links within each section and in the [References][31] section at the end will point you toward additional documentation.
-
-#### Contents
-
-* [Introduction][18]
-* [Does Your Computer Use EFI?][19]
-* [Does Your Distribution Support EFI?][20]
-* [Preparing to Install Linux][21]
-* [Installing Linux][22]
-* [Fixing Post-Installation Problems][23]
-* [Oops: Converting a Legacy-Mode Install to Boot in EFI Mode][24]
-* [References][25]
-
-### Does Your Computer Use EFI?
-
-EFI is a type of _firmware,_ meaning that it's software built into the computer to handle low-level tasks. Most importantly, the firmware controls the computer's boot process, which in turn means that EFI-based computers boot differently than do BIOS-based computers. (A partial exception to this rule is described shortly.) This difference can greatly complicate the design of OS installation media, but it has little effect on the day-to-day operation of the computer, once everything is set up and running. Note that most manufacturers use the term "BIOS" to refer to their EFIs. I consider this usage confusing, so I avoid it; in my view, EFIs and BIOSes are two different types of firmware.
-
-**Note:** The EFI that Apple uses on Macs is unusual in many respects. Although much of this page applies to Macs, some details differ, particularly when it comes to setting up EFI boot loaders. This task is best handled from OS X by using the Mac's [bless utility,][49] which I don't describe here.
-
-EFI has been used on Intel-based Macs since they were first introduced in 2006. Beginning in late 2012, most computers that ship with Windows 8 or later boot using UEFI by default, and in fact most PCs released since mid-2011 use UEFI, although they may not boot in EFI mode by default. A few PCs sold prior to 2011 also support EFI, although most such computers boot in BIOS mode by default.
-
-If you're uncertain about your computer's EFI support status, you should check your firmware setup utility and your user manual for references to _EFI_, _UEFI_, or _legacy booting_. (Searching a PDF of your user manual can be a quick way to do this.) If you find no such references, your computer probably uses an old-style ("legacy") BIOS; but if you find references to these terms, it almost certainly uses EFI. You can also try booting a medium that contains _only_ an EFI-mode boot loader. The USB flash drive or CD-R image of [rEFInd][50] is a good choice for this test.
-
-Before proceeding further, you should understand that most EFIs on _x_86 and _x_86-64 computers include a component known as the _Compatibility Support Module (CSM),_ which enables the EFI to boot OSes using the older BIOS-style boot mechanisms. This can be a great convenience because it provides backwards compatibility; but it also creates complications because there's no standardization in the rules and user interfaces for controlling when a computer boots in EFI mode vs. when it boots in BIOS (aka CSM or legacy) mode. In particular, it's far too easy to accidentally boot your Linux installation medium in BIOS/CSM/legacy mode, which will result in a BIOS/CSM/legacy-mode installation of Linux. This can work fine if Linux is your only OS, but it complicates the boot process if you're dual-booting with Windows in EFI mode. (The opposite problem can also occur.) The following sections should help you boot your installer in the right mode. If you're reading this page after you've installed Linux in BIOS mode and want to switch boot modes, read the upcoming section, [Oops: Converting a Legacy-Mode Install to Boot in EFI Mode.][51]
-
-One optional feature of UEFI deserves mention: _Secure Boot._ This feature is designed to minimize the risk of a computer becoming infected with a _boot kit,_ which is a type of malware that infects the computer's boot loader. Boot kits can be particularly difficult to detect and remove, which makes blocking them a priority. Microsoft requires that all desktop and laptop computers that bear a Windows 8 logo ship with Secure Boot enabled. This type of configuration complicates Linux installation, although some distributions handle this problem better than do others. Do not confuse Secure Boot with EFI or UEFI, though; it's possible for an EFI computer to not support Secure Boot, and it's possible to disable Secure Boot even on _x_86-64 EFI computers that support it. Microsoft requires that users can disable Secure Boot for Windows 8 certification on _x_86 and _x_86-64 computers; however, this requirement is reversed for ARM computers—such computers that ship with Windows 8 must _not_ permit the user to disable Secure Boot. Fortunately, ARM-based Windows 8 computers are currently rare. I recommend avoiding them.
-
-### Does Your Distribution Support EFI?
-
-Most Linux distributions have supported EFI for years. The quality of that support varies from one distribution to another, though. Most of the major distributions (Fedora, OpenSUSE, Ubuntu, and so on) provide good EFI support, including support for Secure Boot. Some more "do-it-yourself" distributions, such as Gentoo, have weaker EFI support, but their nature makes it easy to add EFI support to them. In fact, it's possible to add EFI support to _any_ Linux distribution: You need to install it (even in BIOS mode) and then install an EFI boot loader on the computer. See the [Oops: Converting a Legacy-Mode Install to Boot in EFI Mode][52] section for information on how to do this.
-
-You should check your distribution's feature list to determine if it supports EFI. You should also pay attention to your distribution's support for Secure Boot, particularly if you intend to dual-boot with Windows 8. Note that even distributions that officially support Secure Boot may require that this feature be disabled, since Linux Secure Boot support is often poor or creates complications.
-
-### Preparing to Install Linux
-
-A few preparatory steps will help make your Linux installation on an EFI-based computer go more smoothly:
-
-1. **Upgrade your firmware**—Some EFIs are badly broken, but hardware manufacturers occasionally release updates to their firmware. Thus, I recommend upgrading your firmware to the latest version available. If you know from forum posts or the like that your EFI is problematic, you should do this before installing Linux, because some problems will require extra steps to correct if the firmware is upgraded after the installation. On the other hand, upgrading firmware is always a bit risky, so holding off on such an upgrade may be best if you've heard good things about your manufacturer's EFI support.
-2. **Learn how to use your firmware**—You can usually enter a firmware setup utility by hitting the Del key or a function key early in the boot process. Check for prompts soon after you power on the computer or just try each function key. Similarly, the Esc key or a function key usually enters the firmware's built-in boot manager, which enables you to select which OS or external device to boot. Some manufacturers are making it hard to reach such settings. In some cases, you can do so from inside Windows 8, as described on [this page.][32]
-3. **Adjust the following firmware settings:**
- * **Fast boot**—This feature can speed up the boot process by taking shortcuts in hardware initialization. Sometimes this is fine, but sometimes it can leave USB hardware uninitialized, which can make it impossible to boot from a USB flash drive or similar device. Thus, disabling fast boot _may_ be helpful, or even required; but you can safely leave it active and deactivate it only if you have trouble getting the Linux installer to boot. Note that this feature sometimes goes by another name. In some cases, you must _enable_ USB support rather than _disable_ a fast boot feature.
- * **Secure Boot**—Fedora, OpenSUSE, Ubuntu, and some other distributions officially support Secure Boot; but if you have problems getting a boot loader or kernel to start, you might want to disable this feature. Unfortunately, fully describing how to do so is impossible because the settings vary from one computer to another. See [my Secure Boot page][1] for more on this topic.
-
- **Note:** Some guides say to enable BIOS/CSM/legacy support to install Linux. As a general rule, they're wrong to do so. Enabling this support can overcome hurdles involved in booting the installer, but doing so creates new problems down the line. Guides to install in this way often overcome these later problems by running Boot Repair, but it's better to do it correctly from the start. This page provides tips to help you get your Linux installer to boot in EFI mode, thus bypassing the later problems.
-
- * **CSM/legacy options**—If you want to install in EFI mode, set such options _off._ Some guides recommend enabling these options, and in some cases they may be required—for instance, they may be needed to enable the BIOS-mode firmware in some add-on video cards. In most cases, though, enabling CSM/legacy support simply increases the risk of inadvertently booting your Linux installer in BIOS mode, which you do _not_ want to do. Note that Secure Boot and CSM/legacy options are sometimes intertwined, so be sure to check each one after changing the other.
-4. **Disable the Windows Fast Startup feature**—[This page][33] describes how to disable this feature, which is almost certain to cause filesystem corruption if left enabled. Note that this feature is distinct from the firmware's fast boot feature.
-5. **Check your partition table**—Using [GPT fdisk,][34] parted, or any other partitioning tool, check your disk's partitions. Ideally, you should create a hardcopy that includes the exact start and end points (in sectors) of each partition. This will be a useful reference, particularly if you use a manual partitioning option in the installer. If Windows is already installed, be sure to identify your [EFI System Partition (ESP),][35] which is a FAT partition with its "boot flag" set (in parted or GParted) or that has a type code of EF00 in gdisk.
-
-### Installing Linux
-
-Most Linux distributions provide adequate installation instructions; however, I've observed a few common stumbling blocks on EFI-mode installations:
-
-* **Ensure that you're using a distribution that's the right bit depth**—EFI runs boot loaders that are the same bit depth as the EFI itself. This is normally 64-bit for modern computers, although the first couple generations of Intel-based Macs, some modern tablets and convertibles, and a handful of obscure computers use 32-bit EFIs. I have yet to encounter a 32-bit Linux distribution that officially supports EFI, although it is possible to add a 32-bit EFI boot loader to 32-bit distributions. (My [Managing EFI Boot Loaders for Linux][36] page covers boot loaders generally, and understanding those principles may enable you to modify a 32-bit distribution's installer, although that's not a task for a beginner.) Installing a 32-bit Linux distribution on a computer with a 64-bit EFI is difficult at best, and I don't describe the process here; you should use a 64-bit distribution on a computer with a 64-bit EFI.
-* **Properly prepare your boot medium**—Third-party tools for moving .iso images onto USB flash drives, such as unetbootin, often fail to create the proper EFI-mode boot entries. I recommend you follow whatever procedure your distribution maintainer suggests for creating USB flash drives. If no such recommendation is made, use the Linux dd utility, as in dd if=image.iso of=/dev/sdc to create an image on the USB flash drive on /dev/sdc. Ports of dd to Windows, such as [WinDD][37] and [dd for Windows,][38] exist, but I've never tested them. Note that using tools that don't understand EFI to create your installation medium is one of the mistakes that leads people into the bigger mistake of installing in BIOS mode and then having to correct the ensuing problems, so don't ignore this point!
-* **Back up the ESP**—If you're installing to a computer that already boots Windows or some other OS, I recommend backing up your ESP before installing Linux. Although Linux _shouldn't_ damage files that are already on the ESP, this does seem to happen from time to time. Having a backup will help in such cases. A simple file-level backup (using cp, tar, or zip, for example) should work fine.
-* **Booting in EFI mode**—It's too easy to accidentally boot your Linux installer in BIOS/CSM/legacy mode, particularly if you leave the CSM/legacy options enabled in your firmware. A few tips can help you to avoid this problem:
-
-* You should verify an EFI-mode boot by dropping to a Linux shell and typing ls /sys/firmware/efi. If you see a list of files and directories, you've booted in EFI mode and you can ignore the following additional tips; if not, you've probably booted in BIOS mode and should review your settings.
-* Use your firmware's built-in boot manager (which you should have located earlier; see [Learn how to use your firmware][26]) to boot in EFI mode. Typically, you'll see two options for a CD-R or USB flash drive, one of which includes the string _EFI_ or _UEFI_ in its description, and one of which does not. Use the EFI/UEFI option to boot your medium.
-* Disable Secure Boot—Even if you're using a distribution that officially supports Secure Boot, sometimes this doesn't work. In this case, the computer will most likely silently move on to the next boot loader, which could be your medium's BIOS-mode boot loader, resulting in a BIOS-mode boot. See [my page on Secure Boot][27] for some tips on how to disable Secure Boot.
-* If you can't seem to get the Linux installer to boot in EFI mode, try using a USB flash drive or CD-R version of my [rEFInd boot manager.][28] If rEFInd boots, it's guaranteed to be running in EFI mode, and on a UEFI-based PC, it will show only EFI-mode boot options, so if you can then boot to the Linux installer, it should be in EFI mode. (On Macs, though, rEFInd shows BIOS-mode boot options in addition to EFI-mode options.)
-
-* **Preparing your ESP**—Except on Macs, EFIs use the ESP to hold boot loaders. If your computer came with Windows pre-installed, an ESP should already exist, and you can use it in Linux. If not, I recommend creating an ESP that's 550MiB in size. (If your existing ESP is smaller than this, go ahead and use it.) Create a FAT32 filesystem on it. If you use GParted or parted to prepare your disk, give the ESP a "boot flag." If you use GPT fdisk (gdisk, cgdisk, or sgdisk) to prepare the disk, give it a type code of EF00. Some installers create a smallish ESP and put a FAT16 filesystem on it. This usually works fine, although if you subsequently need to re-install Windows, its installer will become confused by the FAT16 ESP, so you may need to back it up and convert it to FAT32 form.
-* **Using the ESP**—Different distributions' installers have different ways of identifying the ESP. For instance, some versions of Debian and Ubuntu call the ESP the "EFI boot partition" and do not show you an explicit mount point (although it will mount it behind the scenes); but a distribution like Arch or Gentoo will require you to mount it. The closest thing to a standard ESP mount point in Linux is /boot/efi, although /boot works well with some configurations—particularly if you want to use gummiboot or ELILO. Some distributions won't let you use a FAT partition as /boot, though. Thus, if you're asked to set a mount point for the ESP, make it /boot/efi. Do _not_ create a fresh filesystem on the ESP unless it doesn't already have one—if Windows or some other OS is already installed, its boot loader lives on the ESP, and creating a new filesystem will destroy that boot loader!
-* **Setting the boot loader location**—Some distributions may ask about the boot loader's (GRUB's) location. If you've properly flagged the ESP as such, this question should be unnecessary, but some distributions' installers still ask. Try telling it to use the ESP.
-* **Other partitions**—Other than the ESP, no other special partitions are required; you can set up root (/), swap, /home, or whatever else you like in the same way you would for a BIOS-mode installation. Note that you do _not_ need a [BIOS Boot Partition][39] for an EFI-mode installation, so if your installer is telling you that you need one, this may be a sign that you've accidentally booted in BIOS mode. On the other hand, if you create a BIOS Boot Partition, that will give you some extra flexibility, since you'll be able to install a BIOS version of GRUB to boot in either mode (EFI or BIOS).
-* **Fixing blank displays**—A problem that many people had through much of 2013 (but with decreasing frequency since then) was blank displays when booted in EFI mode. Sometimes this problem can be fixed by adding nomodeset to the kernel's command line. You can do this by typing e to open a simple text editor in GRUB. In many cases, though, you'll need to research this problem in more detail, because it often has more hardware-specific causes.
-
-In some cases, you may be forced to install Linux in BIOS mode. You can sometimes then manually install an EFI-mode boot loader for Linux to begin booting in EFI mode. See my [Managing EFI Boot Loaders for Linux][53] page for information on available boot loaders and how to install them.
-
-### Fixing Post-Installation Problems
-
-If you can't seem to get an EFI-mode boot of Linux working but a BIOS-mode boot works, you can abandon EFI mode entirely. This is easiest on a Linux-only computer; just install a BIOS-mode boot loader (which the installer should have done if you installed in BIOS mode). If you're dual-booting with an EFI-mode Windows, though, the easiest solution is to install my [rEFInd boot manager.][54] Install it from Windows and edit the refind.conf file: Uncomment the scanfor line and ensure that hdbios is among the options. This will enable rEFInd to redirect the boot process to a BIOS-mode boot loader. This solution works for many systems, but sometimes it fails for one reason or another.
-
-If you reboot the computer and it boots straight into Windows, it's likely that your Linux boot loader or boot manager was not properly installed. (You should try disabling Secure Boot first, though; as I've said, it often causes problems.) There are several possible solutions to this problem:
-
-* **Use efibootmgr**—You can boot a Linux emergency disc _in EFI mode_ and use the efibootmgr utility to re-register your Linux boot loader, as described [here.][40]
-* **Use bcdedit in Windows**—In a Windows Administrator Command Prompt window, typing bcdedit /set {bootmgr} path \EFI\fedora\grubx64.efi will set the EFI/fedora/grubx64.efi file on the ESP as the default boot loader. Change that path as necessary to point to your desired boot loader. If you're booting with Secure Boot enabled, you should set shim.efi, shimx64.efi, or PreLoader.efi (whichever is present) as the boot program, rather than grubx64.efi.
-* **Install rEFInd**—Sometimes rEFInd can overcome this problem. I recommend testing by using the [CD-R or USB flash drive image.][41] If it can boot Linux, install the Debian package, RPM, or .zip file package. (Note that you may need to edit your boot options by highlighting a Linux vmlinuz* option and hitting F2 or Insert twice. This is most likely to be required if you've got a separate /boot partition, since in this situation rEFInd can't locate the root (/) partition to pass to the kernel.)
-* **Use Boot Repair**—Ubuntu's [Boot Repair utility][42] can auto-repair some boot problems; however, I recommend using it only on Ubuntu and closely-related distributions, such as Mint. In some cases, it may be necessary to click the Advanced option and check the box to back up and replace the Windows boot loader.
-* **Hijack the Windows boot loader**—Some buggy EFIs boot only the Windows boot loader, which is called EFI/Microsoft/Boot/bootmgfw.efi on the ESP. Thus, you may need to rename this boot loader to something else (I recommend moving it down one level, to EFI/Microsoft/bootmgfw.efi) and putting a copy of your preferred boot loader in its place. (Most distributions put a copy of GRUB in a subdirectory of EFI named after themselves, such as EFI/ubuntu for Ubuntu or EFI/fedora for Fedora.) Note that this solution is an ugly hack, and some users have reported that Windows will replace its boot loader, so it may not even work 100% of the time. It is, however, the only solution that works on some badly broken EFIs. Before attempting this solution, I recommend upgrading your firmware and re-registering your own boot loader with efibootmgr in Linux or bcdedit in Windows.
-
-Another class of problems relates to boot loader troubles: if you see GRUB (or whatever boot loader or boot manager your distribution uses by default) but it doesn't boot an OS, you must fix that problem. Windows often fails to boot because GRUB 2 is very finicky about booting Windows. This problem can be exacerbated by Secure Boot in some cases. See [my page on GRUB 2][55] for a sample GRUB 2 entry for booting Windows. Linux boot problems, once GRUB appears, can have a number of causes, and are likely to be similar to BIOS-mode Linux boot problems, so I don't cover them here.
-
-Despite the fact that it's very common, my opinion of GRUB 2 is rather low: it's an immensely complex program that's difficult to configure and use. Thus, if you run into problems with GRUB, my initial response is to replace it with something else. [My Web page on EFI boot loaders for Linux][56] describes the options that are available, including my own [rEFInd boot manager,][57] which is much easier to install and maintain. To be fair, many distributions do manage to get GRUB 2 working; but if you're considering replacing GRUB 2 because of its problems, that's obviously not worked out for you!
-
-Beyond these issues, EFI booting problems can be quite idiosyncratic, so you may need to post to a Web forum for help. Be sure to describe the problem as thoroughly as you can. The [Boot Info Script][58] can provide useful information—run it and it should produce a file called RESULTS.txt that you can paste into your forum post. Be sure to precede this pasted text with the string [code] and follow it with [/code], though; otherwise people will complain. Alternatively, upload RESULTS.txt to a pastebin site, such as [pastebin.com,][59] and post the URL that the site gives you.
-
-### Oops: Converting a Legacy-Mode Install to Boot in EFI Mode
-
-**Warning:** These instructions are written primarily for UEFI-based PCs. If you've installed Linux in BIOS mode on a Mac but want to boot Linux in EFI mode, you can install your boot program _in OS X._ rEFInd (or the older rEFIt) is the usual choice on Macs, but GRUB can be made to work with some extra effort.
-
-As of early 2015, one very common problem I see in online forums is that people follow bad instructions and install Linux in BIOS mode to dual-boot with an existing EFI-mode Windows installation. This configuration works poorly because most EFIs make it difficult to switch between boot modes, and GRUB can't handle the task, either. You might also find yourself in this situation if you've got a very flaky EFI that simply won't boot an external medium in EFI mode, or if you have video or other problems with Linux when it's booted in EFI mode.
-
-As noted earlier, in [Fixing Post-Installation Problems,][60] one possible solution to such problems is to install rEFInd _in Windows_ and configure it to support BIOS-mode boots. You can then boot rEFInd and chainload to your BIOS-mode GRUB. I recommend this fix mainly when you have EFI-specific problems in Linux, such as a failure to use your video card. If you don't have such EFI-specific problems, installing rEFInd and a suitable EFI filesystem driver in Windows will enable you to boot Linux directly in EFI mode. This can be a perfectly good solution, and it will be equivalent to what I describe next.
-
-In most cases, it's best to configure Linux to boot in EFI mode. There are many ways to do this, but the best way requires using an EFI-mode boot of Linux (or conceivably Windows or an EFI shell) to register an EFI-mode version of your preferred boot manager. One way to accomplish this goal is as follows:
-
-1. Download a USB flash drive or CD-R version of my [rEFInd boot manager.][43]
-2. Prepare a medium from the image file you've downloaded. You can do this from any computer, booted in either EFI or BIOS mode (or in other ways on other platforms).
-3. If you've not already done so, [disable Secure Boot.][44] This is necessary because the rEFInd CD-R and USB images don't support Secure Boot. If you want to keep Secure Boot, you can re-enable it later.
-4. Boot rEFInd on your target computer. As described earlier, you may need to adjust your firmware settings and use the built-in boot manager to select your boot medium. The option you select may need to include the string _UEFI_ in its description.
-5. In rEFInd, examine the boot options. You should see at least one option for booting a Linux kernel (with a name that includes the string vmlinuz). Boot it in one of two ways:
- * If you do _not_ have a separate /boot partition, simply highlight the kernel and press Enter. Linux should boot.
- * If you _do_ have a separate /boot partition, press Insert or F2 twice. This action will open a line editor in which you can edit your kernel options. Add a root= specification to those options to identify your root (/) filesystem, as in root=/dev/sda5 if root (/) is on /dev/sda5. If you don't know what your root filesystem is, you should reboot in any way possible to figure it out. In some rare cases, you may need to add other kernel options instead of or in addition to a root= option. Gentoo with an LVM configuration requires dolvm, for example.
-6. Once Linux is booted, install your desired boot program. rEFInd is usually pretty easy to install via the RPM, Debian package, PPA, or binary .zip file referenced on the [rEFInd downloads page.][45] On Ubuntu and similar distributions, Boot Repair can fix your GRUB setup relatively simply, but it will be a bit of a leap of faith that it will work correctly. (It usually works fine, but in some cases it will make a hash of things.) Other options are described on my [Managing EFI Boot Loaders for Linux][46] page.
-7. If you want to boot with Secure Boot active, reboot and enable it. Note, however, that you may need to take extra installation steps to set up your boot program to use Secure Boot. Consult [my page on the topic][47] or your boot program's Secure Boot documentation for details.
-
-When you reboot, you should see the boot program you just installed. If the computer instead boots into a BIOS-mode version of GRUB, you should enter your firmware and disable the BIOS/CSM/legacy support, or perhaps adjust your boot order options. If the computer boots straight to Windows, then you should read the preceding section, [Fixing Post-Installation Problems.][61]
-
-You may want or need to tweak your configuration at this point. It's common to see extra boot options, or to find that an option you want isn't visible. Consult your boot program's documentation to learn how to make such changes.
-
-### References and Additional Information
-
-
-* **Informational Web pages**
- * My [Managing EFI Boot Loaders for Linux][2] page covers the available EFI boot loaders and boot managers.
- * The [man page for OS X's bless tool][3] may be helpful in setting up a boot loader or boot manager on that platform.
- * [The EFI Boot Process][4] describes, in broad strokes, how EFI systems boot.
- * The [Arch Linux UEFI wiki page][5] has a great deal of information on UEFI and Linux.
- * Adam Williamson has written a good [summary of what EFI is and how it works.][6]
- * [This page][7] describes how to adjust EFI firmware settings from within Windows 8.
- * Matthew J. Garrett, the developer of the Shim boot loader to manage Secure Boot, maintains [a blog][8] in which he often writes about EFI issues.
- * If you're interested in developing EFI software yourself, my [Programming for EFI][9] can help you get started.
-* **Additional programs**
- * [The official rEFInd Web page][10]
- * [The official gummiboot Web page][11]
- * [The official ELILO Web page][12]
- * [The official GRUB Web page][13]
- * [The official GPT fdisk partitioning software Web page][14]
- * Ubuntu's [Boot Repair utility][15] can help fix some boot problems
-* **Communications**
- * The [rEFInd discussion forum on Sourceforge][16] provides a way to discuss rEFInd with other users or with me.
- * Pastebin sites, such as [http://pastebin.com,][17] provide a convenient way to exchange largeish text files with users on Web forums.
-
---------------------------------------------------------------------------------
-
-via: http://www.rodsbooks.com/linux-uefi/
-
-作者:[Roderick W. Smith][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:rodsmith@rodsbooks.com
-[1]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html#disable
-[2]:http://www.rodsbooks.com/efi-bootloaders/
-[3]:http://ss64.com/osx/bless.html
-[4]:http://homepage.ntlworld.com/jonathan.deboynepollard/FGA/efi-boot-process.html
-[5]:https://wiki.archlinux.org/index.php/Unified_Extensible_Firmware_Interface
-[6]:https://www.happyassassin.net/2014/01/25/uefi-boot-how-does-that-actually-work-then/
-[7]:http://www.eightforums.com/tutorials/20256-uefi-firmware-settings-boot-inside-windows-8-a.html
-[8]:http://mjg59.dreamwidth.org/
-[9]:http://www.rodsbooks.com/efi-programming/
-[10]:http://www.rodsbooks.com/refind/
-[11]:http://freedesktop.org/wiki/Software/gummiboot
-[12]:http://elilo.sourceforge.net/
-[13]:http://www.gnu.org/software/grub/
-[14]:http://www.rodsbooks.com/gdisk/
-[15]:https://help.ubuntu.com/community/Boot-Repair
-[16]:https://sourceforge.net/p/refind/discussion/
-[17]:http://pastebin.com/
-[18]:http://www.rodsbooks.com/linux-uefi/#intro
-[19]:http://www.rodsbooks.com/linux-uefi/#isitefi
-[20]:http://www.rodsbooks.com/linux-uefi/#distributions
-[21]:http://www.rodsbooks.com/linux-uefi/#preparing
-[22]:http://www.rodsbooks.com/linux-uefi/#installing
-[23]:http://www.rodsbooks.com/linux-uefi/#troubleshooting
-[24]:http://www.rodsbooks.com/linux-uefi/#oops
-[25]:http://www.rodsbooks.com/linux-uefi/#references
-[26]:http://www.rodsbooks.com/linux-uefi/#using_firmware
-[27]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html#disable
-[28]:http://www.rodsbooks.com/refind/getting.html
-[29]:https://en.wikipedia.org/wiki/Uefi
-[30]:https://en.wikipedia.org/wiki/BIOS
-[31]:http://www.rodsbooks.com/linux-uefi/#references
-[32]:http://www.eightforums.com/tutorials/20256-uefi-firmware-settings-boot-inside-windows-8-a.html
-[33]:http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html
-[34]:http://www.rodsbooks.com/gdisk/
-[35]:http://en.wikipedia.org/wiki/EFI_System_partition
-[36]:http://www.rodsbooks.com/efi-bootloaders
-[37]:https://sourceforge.net/projects/windd/
-[38]:http://www.chrysocome.net/dd
-[39]:https://en.wikipedia.org/wiki/BIOS_Boot_partition
-[40]:http://www.rodsbooks.com/efi-bootloaders/installation.html
-[41]:http://www.rodsbooks.com/refind/getting.html
-[42]:https://help.ubuntu.com/community/Boot-Repair
-[43]:http://www.rodsbooks.com/refind/getting.html
-[44]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html#disable
-[45]:http://www.rodsbooks.com/refind/getting.html
-[46]:http://www.rodsbooks.com/efi-bootloaders/
-[47]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html
-[48]:mailto:rodsmith@rodsbooks.com
-[49]:http://ss64.com/osx/bless.html
-[50]:http://www.rodsbooks.com/refind/getting.html
-[51]:http://www.rodsbooks.com/linux-uefi/#oops
-[52]:http://www.rodsbooks.com/linux-uefi/#oops
-[53]:http://www.rodsbooks.com/efi-bootloaders/
-[54]:http://www.rodsbooks.com/refind/
-[55]:http://www.rodsbooks.com/efi-bootloaders/grub2.html
-[56]:http://www.rodsbooks.com/efi-bootloaders
-[57]:http://www.rodsbooks.com/refind/
-[58]:http://sourceforge.net/projects/bootinfoscript/
-[59]:http://pastebin.com/
-[60]:http://www.rodsbooks.com/linux-uefi/#troubleshooting
-[61]:http://www.rodsbooks.com/linux-uefi/#troubleshooting
diff --git a/sources/tech/20161025 GitLab Workflow An Overview.md b/sources/tech/20161025 GitLab Workflow An Overview.md
deleted file mode 100644
index 66c545127f..0000000000
--- a/sources/tech/20161025 GitLab Workflow An Overview.md
+++ /dev/null
@@ -1,505 +0,0 @@
-svtter 翻译中
-
-GitLab Workflow: An Overview
-======
-
-GitLab is a Git-based repository manager and a powerful, complete application for software development.
-
-With a _"user-and-newbie-friendly" interface_, GitLab allows you to work effectively, both from the command line and from the UI itself. It's not only useful for developers, but can also be integrated across your entire team to bring everyone into a single and unique platform.
-
-The GitLab Workflow logic is intuitive and predictable, making the entire platform easy to use and easier to adopt. Once you do, you won't want anything else!
-
-* * *
-
-### In this post
-
-* [GitLab Workflow][53]
- * [Stages of Software Development][22]
-* [GitLab Issue Tracker][52]
- * [Confidential Issues][21]
- * [Due dates][20]
- * [Assignee][19]
- * [Labels][18]
- * [Issue Weight][17]
- * [GitLab Issue Board][16]
-* [Code Review with GitLab][51]
- * [First Commit][15]
- * [Merge Request][14]
- * [WIP MR][13]
- * [Review][12]
-* [Build, Test, and Deploy][50]
- * [Koding][11]
- * [Use-Cases][10]
-* [Feedback: Cycle Analytics][49]
-* [Enhance][48]
- * [Issue and MR Templates][9]
- * [Milestones][8]
-* [Pro Tips][47]
- * [For both Issues and MRs][7]
- * [Subscribe][3]
- * [Add TO-DO][2]
- * [Search for your Issues and MRs][1]
- * [Moving Issues][6]
- * [Code Snippets][5]
-* [GitLab WorkFlow Use-Case Scenario][46]
-* [Conclusions][45]
-
-* * *
-
-### GitLab Workflow
-
-The **GitLab Workflow** is a logical sequence of possible actions to be taken during the entire lifecycle of the software development process, using GitLab as the platform that hosts your code.
-
-The GitLab Workflow takes into account the [GitLab Flow][97], which consists of **Git-based** methods and tactics for version management, such as **branching strategy**, **Git best practices**, and so on.
-
-With the GitLab Workflow, the [goal][96] is to help teams work cohesively and effectively from the first stage of implementing something new (ideation) to the last stage—deploying implementation to production. That's what we call "going faster from idea to production in 10 steps."
-
-![FROM IDEA TO PRODUCTION IN 10 STEPS](https://about.gitlab.com/images/blogimages/idea-to-production-10-steps.png)
-
-### Stages of Software Development
-
-The natural course of the software development process passes through 10 major steps; GitLab has built solutions for all of them:
-
-1. **IDEA:** Every new proposal starts with an idea, which usually comes up in a chat. For this stage, GitLab integrates with [Mattermost][44].
-2. **ISSUE:** The most effective way to discuss an idea is creating an issue for it. Your team and your collaborators can help you to polish and improve it in the [issue tracker][43].
-3. **PLAN:** Once the discussion comes to an agreement, it's time to code. But wait! First, we need to prioritize and organize our workflow. For this, we can use the [Issue Board][42].
-4. **CODE:** Now we're ready to write our code, once we have everything organized.
-5. **COMMIT:** Once we're happy with our draft, we can commit our code to a feature-branch with version control.
-6. **TEST:** With [GitLab CI][41], we can run our scripts to build and test our application.
-7. **REVIEW:** Once our script works and our tests and builds succeed, we are ready to get our [code reviewed][40] and approved.
-8. **STAGING:** Now it's time to [deploy our code to a staging environment][39] to check if everything worked as we were expecting or if we still need adjustments.
-9. **PRODUCTION:** When we have everything working as it should, it's time to [deploy to our production environment][38]!
-10. **FEEDBACK:** Now it's time to look back and check what stage of our work needs improvement. We use [Cycle Analytics][37] for feedback on the time we spent on key stages of our process.
-
-To walk through these stages smoothly, it's important to have powerful tools to support this workflow. In the following sections, you'll find an overview of the toolset available from GitLab.
-
-### GitLab Issue Tracker
-
-GitLab has a powerful issue tracker that allows you, your team, and your collaborators to share and discuss ideas, before and while implementing them.
-
-![issue tracker - view list](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-tracker-list-view.png)
-
-Issues are the first essential feature of the GitLab Workflow. [Always start a discussion with an issue][95]; it's the best way to track the evolution of a new idea.
-
-It's most useful for:
-
-* Discussing ideas
-* Submitting feature proposals
-* Asking questions
-* Reporting bugs and malfunction
-* Obtaining support
-* Elaborating new code implementations
-
-Each project hosted by GitLab has an issue tracker. To create a new issue, navigate to your project's **Issues** > **New issue**, give it a title that summarizes the subject to be treated, and describe it using [Markdown][94]. Check the [pro tips][93] below to enhance your issue description.
-
-The GitLab Issue Tracker presents extra functionalities to make it easier to organize and prioritize your actions, described in the following sections.
-
-![new issue - additional settings](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-features-view.png)
-
-### Confidential Issues
-
-Whenever you want to keep the discussion presented in an issue within your team only, you can make that [issue confidential][92]. Even if your project is public, that issue will be preserved. The browser will respond with a 404 error whenever someone who is not a project member with at least [Reporter level][91] tries to access that issue's URL.
-
-### Due dates
-
-Every issue enables you to attribute a [due date][90] to it. Some teams work on tight schedules, and it's important to have a way to set up a deadline for implementations and for solving problems. This can be facilitated by due dates.
-
-When you have due dates for multi-task projects—for example, a new release, product launch, or for tracking tasks by quarter—you can use [milestones][89].
-
-### Assignee
-
-Whenever someone starts to work on an issue, it can be assigned to that person. You can change the assignee as often as you need. The idea is that the assignee is responsible for that issue until he/she reassigns it to someone else to take it from there.
-
-It also helps with filtering issues per assignee.
-
-### Labels
-
-GitLab labels are also an important part of the GitLab flow. You can use them to categorize your issues, to localize them in your workflow, and to organize them by priority with [Priority Labels][88].
-
-Labels will enable you to work with the [GitLab Issue Board][87], facilitating your plan stage and organizing your workflow.
-
-**New!** You can also create [Group Labels][86], which give you the ability to use the same labels per group of projects.
-
-### Issue Weight
-
-You can attribute an [Issue Weight][85] to make it clear how difficult the implementation of that idea is. Less difficult issues would receive weights of 01-03; more difficult ones, 07-09; and the ones in the middle, 04-06. Still, you can come to an agreement with your team to standardize the weights according to your needs.
-
-### GitLab Issue Board
-
-The [GitLab Issue Board][84] is a tool ideal for planning and organizing your issues according to your project's workflow.
-
-It consists of a board with lists corresponding to their respective labels. Each list contains its labeled issues, displayed as cards.
-
-The cards can be moved between lists, which will cause the label to be updated according to the list you moved the card into.
-
-![GitLab Issue Board](https://about.gitlab.com/images/blogimages/designing-issue-boards/issue-board.gif)
-
-**New!** You can also create issues right from the Board, by clicking the button on the top of a list. When you do so, that issue will be automatically created with the label corresponding to that list.
-
-**New!** We've [recently introduced][83] **Multiple Issue Boards** per project ([GitLab Enterprise Edition][82] only); it is the best way to organize your issues for different workflows.
-
-![Multiple Issue Boards](https://about.gitlab.com/images/8_13/m_ib.gif)
-
-### Code Review with GitLab
-
-After discussing a new proposal or implementation in the issue tracker, it's time to work on the code. You write your code locally and, once you're done with your first iteration, you commit your code and push to your GitLab repository. Your Git-based management strategy can be improved with the [GitLab Flow][81].
-
-### First Commit
-
-In your first commit message, you can add the number of the issue related to that commit. By doing so, you create a link between the two stages of the development workflow: the issue itself and the first commit related to that issue.
-
-To do so, if the issue and the code you're committing are both in the same project, you simply add `#xxx` to the commit message, where `xxx` is the issue number. If they are not in the same project, you can add the full URL to the issue (`https://gitlab.com///issues/`).
-
-```
-git commit -m "this is my commit message. Ref #xxx"
-```
-
-or
-
-```
-git commit -m "this is my commit message. Related to https://gitlab.com///issues/"
-```
-
-Of course, you can replace `gitlab.com` with the URL of your own GitLab instance.
-
-**Note:** Linking your first commit to your issue is going to be relevant for tracking your process far ahead with [GitLab Cycle Analytics][80]. It will measure the time taken for planning the implementation of that issue, which is the time between creating an issue and making the first commit.
-
-### Merge Request
-
-Once you push your changes to a feature-branch, GitLab will identify this change and will prompt you to create a Merge Request (MR).
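-
-For example, pushing a first iteration might look like this (a minimal sketch; the branch name `feature-branch` and the commit message are illustrative):
-
-```
-git checkout -b feature-branch
-git add .
-git commit -m "first iteration of the new feature. Ref #xxx"
-git push origin feature-branch
-```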
-
-Every MR will have a title (something that summarizes that implementation) and a description supported by [Markdown][79]. In the description, you can shortly describe what that MR is doing, mention any related issues and MRs (creating a link between them), and you can also add the [issue closing pattern][78], which will close that issue(s) once the MR is **merged**.
-
-For example:
-
-```
-## Add new page
-
-This MR adds a `readme.md` to this project, with an overview of this app.
-
-Closes #xxx and https://gitlab.com///issues/
-
-Preview:
-
-![preview the new page](#image-url)
-
-cc/ @Mary @Jane @John
-```
-
-When you create an MR with a description like the one above, it will:
-
-* Close both issues `#xxx` and `https://gitlab.com///issues/` when merged
-* Display an image
-* Notify the users `@Mary`, `@Jane`, and `@John` by e-mail
-
-You can assign the MR to yourself until you finish your work, then assign it to someone else to conduct a review. It can be reassigned as many times as necessary, to cover all the reviews you need.
-
-It can also be labeled and added to a [milestone][77] to facilitate organization and prioritization.
-
-When you add or edit a file and commit to a new branch from the UI instead of from the command line, it's also easy to create a new merge request. Just mark the checkbox "start a new merge request with these changes" and GitLab will automatically create a new MR once you commit your changes.
-
-![commit to a feature branch and add a new MR from the UI](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/start-new-mr-edit-from-ui.png)
-
-**Note:** It's important to add the [issue closing pattern][76] to your MR in order to be able to track the process with [GitLab Cycle Analytics][75]. It will track the "code" stage, which measures the time between pushing a first commit and creating a merge request related to that commit.
-
-**New!** We're currently developing [Review Apps][74], a new feature that gives you the ability to deploy your app to a dynamic environment, from which you can preview the changes based on the branch name, per merge request. See a [working example][73] here.
-
-### WIP MR
-
-A WIP MR, which stands for **Work in Progress Merge Request**, is a technique we use at GitLab to prevent an MR from getting merged before it's ready. Just add `WIP:` to the beginning of the title of an MR, and it will not be merged unless you remove it from there.
-
-When your changes are ready to get merged, remove the `WIP:` pattern either by editing the MR title and deleting it manually, or use the shortcut available for you just below the MR description.
-
-![WIP MR click to remove WIP from the title](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-wip-mr.png)
-
-**New!** The `WIP` pattern can also be [quickly added to the merge request][72] with the [slash command][71] `/wip`. Simply type it and submit the comment or the MR description.
-
-### Review
-
-Once you've created a merge request, it's time to get feedback from your team or collaborators. Using the diffs available on the UI, you can add inline comments, reply to them and resolve them.
-
-You can also grab the link for each line of code by clicking on the line number.
-
-The commit history is available from the UI, from which you can track the changes between the different versions of that file. You can view them inline or side-by-side.
-
-![code review in MRs at GitLab](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-review.png)
-
-**New!** If you run into merge conflicts, you can quickly [solve them right from the UI][70], or even edit the file to fix them as you need:
-
-![mr conflict resolution](https://about.gitlab.com/images/8_13/inlinemergeconflictresolution.gif)
-
-### Build, Test, and Deploy
-
-[GitLab CI][69] is a powerful built-in tool for [Continuous Integration, Continuous Deployment, and Continuous Delivery][68], which can be used to run scripts however you wish. The possibilities are endless: think of it as your own command line running the jobs for you.
-
-It's all set up by a YAML file called `.gitlab-ci.yml`, placed in your project's repository. Enjoy the CI templates by simply adding a new file through the web interface and typing the file name as `.gitlab-ci.yml` to trigger a dropdown menu with dozens of possible templates for different applications.
-
-![GitLab CI templates - dropdown menu](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-ci-template.png)
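-
-To give you an idea of the format, here is a minimal sketch of a `.gitlab-ci.yml` (the stage names, job names, and scripts are illustrative, not taken from one of the templates):
-
-```
-# Jobs are grouped into stages; jobs in one stage run in parallel,
-# and each stage runs after the previous one succeeds.
-stages:
-  - test
-  - deploy
-
-run_tests:
-  stage: test
-  script:
-    - make test
-
-deploy_production:
-  stage: deploy
-  script:
-    - ./deploy.sh production   # illustrative deploy script
-  only:
-    - master
-```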
-
-### Koding
-
-Use GitLab's [Koding integration][67] to run your entire development environment in the cloud. This means that you can check out a project or just a merge request in a full-fledged IDE with the press of a button.
-
-### Use-Cases
-
-Examples of GitLab CI use-cases:
-
-* Use it to [build][36] any [Static Site Generator][35], and deploy your website with [GitLab Pages][34]
-* Use it to [deploy your website][33] to `staging` and `production` [environments][32]
-* Use it to [build an iOS application][31]
-* Use it to [build and deploy your Docker Image][30] with [GitLab Container Registry][29]
-
-We have prepared a dozen [GitLab CI Example Projects][66] to offer you guidance. Check them out!
-
-### Feedback: Cycle Analytics
-
-When you follow the GitLab Workflow, you'll be able to gather feedback with [GitLab Cycle Analytics][65] on the time your team took to go from idea to production, for [each key stage of the process][64]:
-
-* **Issue:** the time from creating an issue to assigning the issue to a milestone or adding the issue to a list on your Issue Board
-* **Plan:** the time from giving an issue a milestone or adding it to an Issue Board list, to pushing the first commit
-* **Code:** the time from the first commit to creating the merge request
-* **Test:** the time CI takes to run the entire pipeline for the related merge request
-* **Review:** the time from creating the merge request to merging it
-* **Staging:** the time from merging until deploy to production
-* **Production** (Total): the time taken from creating an issue to deploying the code to [production][28]
-
-### Enhance
-
-### Issue and MR Templates
-
-[Issue and MR templates][63] allow you to define context-specific templates for issue and merge request description fields for your project.
-
-You write them in [Markdown][62] and add them to the default branch of your repository. They can be accessed by the dropdown menu whenever an issue or MR is created.
-
-They save time when describing issues and MRs and standardize the information necessary to follow along, making sure everything you need to proceed is there.
-
-As you can create multiple templates, they can serve different purposes. For example, you can have one for feature proposals and a different one for bug reports. Check the ones in the [GitLab CE project][61] for real examples.
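-
-A bug report template might look something like this (a minimal sketch; the headings and checklist are illustrative, and the [docs][63] linked above describe where the template files live in your repository):
-
-```
-## Summary
-
-(What is happening, and what did you expect to happen?)
-
-## Steps to reproduce
-
-- [ ] Step 1
-- [ ] Step 2
-```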
-
-![issues and MR templates - dropdown menu screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issues-choose-template.png)
-
-### Milestones
-
-[Milestones][60] are the best tool you have in GitLab to track the work of your team based on a common target and a specific date.
-
-The goal can be different for each situation, but the panorama is the same: you have a collection of issues and merge requests being worked on to achieve that particular objective.
-
-This goal can be basically anything that gathers the team's work and effort toward a deadline: for example, publishing a new release, launching a new product, getting things done by a set date, or assembling projects to be done by year quarters.
-
-For instance, you can create a milestone for Q1 2017 and assign to it every issue and MR that should be finished by the end of March 2017. You can also create a milestone for an event that your company is organizing. Then you can access that milestone and view an entire panorama of your team's progress toward getting things done.
-
-![milestone dashboard](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-milestone.png)
-
-### Pro Tips
-
-### For both Issues and MRs
-
-* In issues and MRs descriptions:
- * Type `#` to trigger a dropdown list of existing issues
- * Type `!` to trigger a dropdown list of existing MRs
- * Type `/` to trigger [slash commands][4]
- * Type `:` to trigger emojis (also supported for inline comments)
-* Add images (jpg, png, gif) and videos to inline comments with the button **Attach a file**
-* [Apply labels automatically][27] with [GitLab Webhooks][26]
-* [Fenced blockquote][24]: use the syntax `>>>` to start and finish a blockquote
-
-    ```
-    >>>
-    Quoted text
-
-    Another paragraph
-    >>>
-    ```
-* Create [task lists][23]:
-
-    ```
-    - [ ] Task 1
-    - [ ] Task 2
-    - [ ] Task 3
-    ```
-
-#### Subscribe
-
-Have you found an issue or an MR that you want to follow up on? Expand the navigation on your right, click [Subscribe][59], and you'll be updated whenever a new comment comes up. What if you want to subscribe to multiple issues and MRs at once? Use [bulk subscriptions][58]. 😃
-
-#### Add TO-DO
-
-Besides keeping an eye on an issue or MR, whenever you want to take a future action on it, or want it in your GitLab TO-DO list, expand the navigation tab on your right and [click on **Add todo**][57].
-
-#### Search for your Issues and MRs
-
-When you're looking for an issue or MR you opened long ago, in a project with dozens, hundreds, or even thousands of them, it can be hard to find. Expand the navigation on your left and click on **Issues** or **Merge Requests**, and you'll see the ones assigned to you. From there, or from any issue tracker, you can filter issues or MRs by author, assignee, milestone, label, and weight, and also filter by state: opened, merged, closed, or all of them.
-
-### Moving Issues
-
-Did an issue end up in the wrong project? Don't worry. Click on **Edit**, and [move the issue][56] to the correct project.
-
-### Code Snippets
-
-Do you sometimes use exactly the same code snippet or template in different projects or files? Create a code snippet and keep it available whenever you want. Expand the navigation on your left and click **[Snippets][25]**. All of your snippets will be there. You can set them to public, internal (only for logged-in GitLab users), or private.
-
-![Snippets - screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-snippet.png)
-
-### GitLab WorkFlow Use-Case Scenario
-
-To wrap-up, let's put everything together. It's easy!
-
-Let's suppose you work at a company focused on software development. You created a new issue for developing a new feature to be implemented in one of your applications.
-
-### Labels Strategy
-
-For this application, you have already created labels for "discussion", "backend", "frontend", "working on", "staging", "ready", "docs", "marketing", and "production." All of them already have their own lists in the Issue Board. Your issue currently has the label "discussion."
-
-Once the discussion in the issue tracker reached an agreement, your backend team started to work on that issue, so their lead moved it from the list "discussion" to the list "backend." The first developer to start writing the code assigned the issue to himself and added the label "working on."
-
-### Code & Commit
-
-In his first commit message, he referenced the issue number. After some work, he pushed his commits to a feature-branch and created a new merge request, including the issue closing pattern in the MR description. His team reviewed his code and made sure all the tests and builds were passing.
-
-### Using the Issue Board
-
-Once the backend team finished their work, they removed the label "working on" and moved the issue from the list "backend" to "frontend" in the Issue Board. So, the frontend team knew that issue was ready for them.
-
-### Deploying to Staging
-
-When a frontend developer started working on that issue, he or she added back the label "working on" and reassigned the issue to him/herself. When ready, the implementation was deployed to a **staging** environment. The label "working on" was removed and the issue card was moved to the "staging" list in the Issue Board.
-
-### Teamwork
-
-Finally, when the implementation succeeded, your team moved it to the list "ready."
-
-Then, the time came for your technical writing team to create the documentation for the new feature, and once someone got started, he/she added the label "docs." At the same time, your marketing team started to work on the campaign to launch and promote that feature, so someone added the label "marketing." When the tech writer finished the documentation, he/she removed the label "docs." Once the marketing team finished their work, they moved the issue from the list "marketing" to "production."
-
-### Deploying to Production
-
-At last, you, being the person responsible for new releases, merged the MR and deployed the new feature to the **production** environment, and the issue was **closed**.
-
-### Feedback
-
-With [Cycle Analytics][55], you studied the time taken to go from idea to production with your team, and opened another issue to discuss the improvement of the process.
-
-### Conclusions
-
-GitLab Workflow helps your team to get faster from idea to production using a single platform:
-
-* It's **effective**, because you get your desired results.
-* It's **efficient**, because you achieve maximum productivity with minimum effort and expense.
-* It's **productive**, because you are able to plan effectively and act efficiently.
-* It's **easy**, because you don't need to set up different tools; you accomplish everything you need with just one, GitLab.
-* It's **fast**, because you don't need to jump across multiple platforms to get your job done.
-
-A new GitLab version is released every single month (on the 22nd), making it an ever better integrated solution for software development, and bringing teams together to work in one single and unique interface.
-
-At GitLab, everyone can contribute! Thanks to our amazing community, we've gotten where we are. And thanks to them, we keep moving forward to provide you with an even better product.
-
-Questions? Feedback? Please leave a comment or tweet at us [@GitLab][54]! 🙌
-
---------------------------------------------------------------------------------
-
-via: https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/
-
-Author: [Marcia Ramos][a]
-
-Translator: [译者ID](https://github.com/译者ID)
-
-Proofreader: [校对者ID](https://github.com/校对者ID)
-
-This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
-
-[a]: https://twitter.com/XMDRamos
-[1]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#search-for-your-issues-and-mrs
-[2]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#add-to-do
-[3]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#subscribe
-[4]:https://docs.gitlab.com/ce/user/project/slash_commands.html
-[5]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-snippets
-[6]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#moving-issues
-[7]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#for-both-issues-and-mrs
-[8]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
-[9]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-and-mr-templates
-[10]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#use-cases
-[11]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#koding
-[12]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#review
-[13]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#wip-mr
-[14]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#merge-request
-[15]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#first-commit
-[16]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
-[17]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-weight
-[18]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#labels
-[19]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#assignee
-[20]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#due-dates
-[21]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#confidential-issues
-[22]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#stages-of-software-development
-[23]:https://docs.gitlab.com/ee/user/markdown.html#task-lists
-[24]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#blockquote-fence-syntax
-[25]:https://gitlab.com/dashboard/snippets
-[26]:https://docs.gitlab.com/ce/web_hooks/web_hooks.html
-[27]:https://about.gitlab.com/2016/08/19/applying-gitlab-labels-automatically/
-[28]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
-[29]:https://about.gitlab.com/2016/05/23/gitlab-container-registry/
-[30]:https://about.gitlab.com/2016/08/11/building-an-elixir-release-into-docker-image-using-gitlab-ci-part-1/
-[31]:https://about.gitlab.com/2016/03/10/setting-up-gitlab-ci-for-ios-projects/
-[32]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
-[33]:https://about.gitlab.com/2016/08/26/ci-deployment-and-environments/
-[34]:https://pages.gitlab.io/
-[35]:https://about.gitlab.com/2016/06/17/ssg-overview-gitlab-pages-part-3-examples-ci/
-[36]:https://about.gitlab.com/2016/04/07/gitlab-pages-setup/
-[37]:https://about.gitlab.com/solutions/cycle-analytics/
-[38]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
-[39]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
-[40]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-code-review
-[41]:https://about.gitlab.com/gitlab-ci/
-[42]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
-[43]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
-[44]:https://about.gitlab.com/2015/08/18/gitlab-loves-mattermost/
-[45]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#conclusions
-[46]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow-use-case-scenario
-[47]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
-[48]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#enhance
-[49]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
-[50]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#build-test-and-deploy
-[51]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-review-with-gitlab
-[52]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
-[53]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow
-[54]:https://twitter.com/gitlab
-[55]:https://about.gitlab.com/solutions/cycle-analytics/
-[56]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#move-issues-to-other-projects
-[57]:https://about.gitlab.com/2016/06/22/gitlab-8-9-released/#manually-add-todos
-[58]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#bulk-subscribe-to-issues
-[59]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#subscribe-to-a-label
-[60]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#milestones
-[61]:https://gitlab.com/gitlab-org/gitlab-ce/issues/new
-[62]:https://docs.gitlab.com/ee/user/markdown.html
-[63]:https://docs.gitlab.com/ce/user/project/description_templates.html
-[64]:https://about.gitlab.com/2016/09/21/cycle-analytics-feature-highlight/
-[65]:https://about.gitlab.com/solutions/cycle-analytics/
-[66]:https://docs.gitlab.com/ee/ci/examples/README.html
-[67]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#koding-integration
-[68]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
-[69]:https://about.gitlab.com/gitlab-ci/
-[70]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#merge-conflict-resolution
-[71]:https://docs.gitlab.com/ce/user/project/slash_commands.html
-[72]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#wip-slash-command
-[73]:https://gitlab.com/gitlab-examples/review-apps-nginx/
-[74]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#ability-to-stop-review-apps
-[75]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
-[76]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
-[77]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
-[78]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
-[79]:https://docs.gitlab.com/ee/user/markdown.html
-[80]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
-[81]:https://about.gitlab.com/2014/09/29/gitlab-flow/
-[82]:https://about.gitlab.com/free-trial/
-[83]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#multiple-issue-boards-ee
-[84]:https://about.gitlab.com/solutions/issueboard
-[85]:https://docs.gitlab.com/ee/workflow/issue_weight.html
-[86]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#group-labels
-[87]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
-[88]:https://docs.gitlab.com/ee/user/project/labels.html#prioritize-labels
-[89]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
-[90]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#due-dates-for-issues
-[91]:https://docs.gitlab.com/ce/user/permissions.html
-[92]:https://about.gitlab.com/2016/03/31/feature-highlihght-confidential-issues/
-[93]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
-[94]:https://docs.gitlab.com/ee/user/markdown.html
-[95]:https://about.gitlab.com/2016/03/03/start-with-an-issue/
-[96]:https://about.gitlab.com/2016/09/13/gitlab-master-plan/
-[97]:https://about.gitlab.com/2014/09/29/gitlab-flow/
diff --git a/sources/tech/20161222 Top open source creative tools in 2016.md b/sources/tech/20161222 Top open source creative tools in 2016.md
deleted file mode 100644
index 1b3e4cdd37..0000000000
--- a/sources/tech/20161222 Top open source creative tools in 2016.md
+++ /dev/null
@@ -1,322 +0,0 @@
-GitFuture is translating.
-
-Top open source creative tools in 2016
-============================================================
-
-### Whether you want to manipulate images, edit audio, or animate stories, there's a free and open source tool to do the trick.
-
- ![Top 34 open source creative tools in 2016 ](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/art-yearbook-paint-draw-create-creative.png?itok=KgEF_IN_ "Top 34 open source creative tools in 2016 ")
-
->Image by: opensource.com
-
-A few years ago, I gave a lightning talk at Red Hat Summit that took attendees on a tour of the [2012 open source creative tools][12] landscape. Open source tools have evolved a lot in the past few years, so let's take a tour of the 2016 landscape.
-
-### Core applications
-
-These six applications are the juggernauts of open source design tools. They are well-established, mature projects with full feature sets, stable releases, and active development communities. All six applications are cross-platform; each is available on Linux, OS X, and Windows, although in some cases the Linux versions are the most quickly updated. These applications are so widely known, I've also included highlights of the latest features available that you may have missed if you don't closely follow their development.
-
-If you'd like to follow new developments more closely, and perhaps even help out by testing the latest development versions of the first four of these applications—GIMP, Inkscape, Scribus, and MyPaint—you can install them easily on Linux using [Flatpak][13]. Nightly builds of each of these applications are available via Flatpak by [following the instructions][14] for _Nightly Graphics Apps_. One thing to note: If you'd like to install brushes or other extensions to each Flatpak version of the app, the directory to drop the extensions in will be under the directory corresponding to the application inside the **~/.var/app** directory.
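-
-Installing one of these nightlies generally takes two commands along these lines (a sketch only; the remote name, repo URL, and application ID are illustrative, so follow the linked instructions for the real values):
-
-```
-$ flatpak remote-add --if-not-exists gnome-nightly https://nightly.gnome.org/gnome-nightly.flatpakrepo
-$ flatpak install gnome-nightly org.gimp.GIMP
-```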
-
-### GIMP
-
-[GIMP][15] [celebrated its 20th anniversary in 2015][16], making it one of the oldest open source creative applications out there. GIMP is a solid program for photo manipulation, basic graphic creation, and illustration. You can start using GIMP by trying simple tasks, such as cropping and resizing images, and over time work into a deep set of functionality. Available for Linux, Mac OS X, and Windows, GIMP is cross-platform and can open and export to a wide breadth of file formats, including those popularized by its proprietary analogue, Photoshop.
-
-The GIMP team is currently working toward the 2.10 release; [2.8.18][17] is the latest stable version. More exciting is the unstable version, [2.9.4][18], with a revamped user interface featuring space-saving symbolic icons and dark themes, improved color management, more GEGL-based filters with split-preview, MyPaint brush support (shown in screenshot below), symmetrical drawing, and command-line batch processing. For more details, check out [the full release notes][19].
-
- ![GIMP screenshot](https://opensource.com/sites/default/files/gimp_520.png "GIMP screenshot")
-
-### Inkscape
-
-[Inkscape][20] is a richly featured vector-based graphic design workhorse. Use it to create simple graphics, diagrams, layouts, or icon art.
-
-The latest stable version is [0.91][21]; similarly to GIMP, more excitement can be found in a pre-release version, 0.92pre3, which was released in November 2016. The premiere feature of the latest pre-release is the [gradient mesh feature][22] (demonstrated in screenshot below); new features introduced in the 0.91 release include [power stroke][23] for fully configurable calligraphic strokes (the "open" in "opensource.com" in the screenshot below uses powerstroke), the on-canvas measure tool, and [the new symbols dialog][24] (shown in the right side of the screenshot below). (Many symbol libraries for Inkscape are available on GitHub; [Xaviju's inkscape-open-symbols set][25] is fantastic.) A new feature available in development/nightly builds is the _Objects_ dialog that catalogs all objects in a document and provides tools to manage them.
-
- ![Inkscape screenshot](https://opensource.com/sites/default/files/inkscape_520.png "Inkscape screenshot")
-
-### Scribus
-
-[Scribus][26] is a powerful desktop publishing and page layout tool. Scribus enables you to create sophisticated and beautiful items, including newsletters, books, and magazines, as well as other print pieces. Scribus has color management tools that can handle and output CMYK and spot colors for files that are ready for reliable reproduction at print shops.
-
-[1.4.6][27] is the latest stable release of Scribus; the [1.5.x][28] series of releases is the most exciting, as it serves as a preview of the upcoming 1.6.0 release. Version 1.5.3 features a Krita file (*.kra) import tool; other developments in the 1.5.x series include the _Table_ tool, text frame welding, footnotes, additional PDF formats for export, improved dictionary support, dockable palettes, a symbols tool, and expanded file format support.
-
- ![Scribus screenshot](https://opensource.com/sites/default/files/scribus_520.png "Scribus screenshot")
-
-### MyPaint
-
-[MyPaint][29] is a drawing tablet-centric expressive drawing and illustration tool. It's lightweight and has a minimal interface with a rich set of keyboard shortcuts so that you can focus on your drawing without having to drop your pen.
-
-[MyPaint 1.2.0][30] is the latest stable release and includes new features, such as the [intuitive inking tool][31] for tracing over pencil drawings, new flood fill tool, layer groups, brush and color history panel, user interface revamp including a dark theme and small symbolic icons, and editable vector layers. To try out the latest developments in MyPaint, I recommend installing the nightly Flatpak build, although there have not been significant feature additions since the 1.2.0 release.
-
- ![MyPaint screenshot](https://opensource.com/sites/default/files/mypaint_520.png "MyPaint screenshot")
-
-### Blender
-
-Initially released in January 1995, [Blender][32], like GIMP, has been around for more than 20 years. Blender is a powerful open source 3D creation suite that includes tools for modeling, sculpting, rendering, realistic materials, rigging, animation, compositing, video editing, game creation, and simulation.
-
-The latest stable Blender release is [2.78a][33]. The 2.78 release was a large one and includes features such as the revamped _Grease Pencil_ 2D animation tool; VR rendering support for spherical stereo images; and a new drawing tool for freehand curves.
-
- ![Blender screenshot](https://opensource.com/sites/default/files/blender_520.png "Blender screenshot")
-
-To try out the latest exciting Blender developments, you have many options, including:
-
-* The Blender Foundation makes [unstable daily builds][2] available on the official Blender website.
-* If you're looking for builds that include particular in-development features, [graphicall.org][3] is a community-moderated site that provides special versions of Blender (and occasionally other open source creative apps) to enable artists to try out the latest available code and experiments.
-* Mathieu Bridon has made development versions of Blender available via Flatpak. See his blog post for details: [Blender nightly in Flatpak][4].
-
-### Krita
-
-[Krita][34] is a digital drawing application with a deep set of capabilities. The application is geared toward illustrators, concept artists, and comic artists and is fully loaded with extras, such as brushes, palettes, patterns, and templates.
-
-The latest stable version is [Krita 3.0.1][35], released in September 2016. Features new to the 3.0.x series include 2D frame-by-frame animation; improved layer management and functionality; expanded and more usable shortcuts; improvements to grids, guides, and snapping; and soft-proofing.
-
- ![Krita screenshot](https://opensource.com/sites/default/files/krita_520.png "Krita screenshot")
-
-### Video tools
-
-There are many, many options for open source video editing tools. Of the pack, [Flowblade][36] is a newcomer and Kdenlive is the established, newbie-friendly, and most fully featured contender. The main criterion that may help you narrow down this array of options is supported platforms: some of these only support Linux. They all have active upstreams, and the latest stable versions of each were released recently, within weeks of each other.
-
-### Kdenlive
-
-[Kdenlive][37], which was initially released back in 2002, is a powerful non-linear video editor available for Linux and OS X (although the OS X version is out-of-date). Kdenlive has a user-friendly drag-and-drop-based user interface that accommodates beginners while offering the depth experts need.
-
-Learn how to use Kdenlive with a [multi-part Kdenlive tutorial series][38] by Seth Kenlon.
-
-* Latest Stable: 16.08.2 (October 2016)
-
- ![](https://opensource.com/sites/default/files/images/life-uploads/kdenlive_6_leader.png)
-
-### Flowblade
-
-Released in 2012, [Flowblade][39], a Linux-only video editor, is a relative newcomer.
-
-* Latest Stable: 1.8 (September 2016)
-
-### Pitivi
-
-[Pitivi][40] is a user-friendly free and open source video editor. Pitivi is written in [Python][41] (the "Pi" in Pitivi), uses the [GStreamer][42] multimedia framework, and has an active community.
-
-* Latest stable: 0.97 (August 2016)
-* Get the [latest version with Flatpak][5]
-
-### Shotcut
-
-[Shotcut][43] is a free, open source, cross-platform video editor that started [back in 2004][44] and was later rewritten by current lead developer [Dan Dennedy][45].
-
-* Latest stable: 16.11 (November 2016)
-* 4K resolution support
-* Ships as a tarballed binary
-
-
-
-### OpenShot Video Editor
-
-Started in 2008, [OpenShot Video Editor][46] is a free, open source, easy-to-use, cross-platform video editor.
-
-* Latest stable: [2.1][6] (August 2016)
-
-
-### Utilities
-
-### SwatchBooker
-
-[SwatchBooker][47] is a handy utility, and although it hasn't been updated in a few years, it's still useful. SwatchBooker helps users legally obtain color swatches from various manufacturers in a format that you can use with other free and open source tools, including Scribus.
-
-### GNOME Color Manager
-
-[GNOME Color Manager][48] is the built-in color management system for the GNOME desktop environment, the default desktop for a bunch of Linux distros. The tool allows you to create profiles for your display devices using a colorimeter, and also allows you to load/manage ICC color profiles for those devices.
-
-### GNOME Wacom Control
-
-[The GNOME Wacom controls][49] allow you to configure your Wacom tablet in the GNOME desktop environment; you can modify various options for interacting with the tablet, including customizing the sensitivity of the tablet and which monitors the tablet maps to.
-
-### Xournal
-
-[Xournal][50] is a humble but solid app that allows you to hand write/doodle notes using a tablet. Xournal is a useful tool for signing or otherwise annotating PDF documents.
-
-### PDF Mod
-
-[PDF Mod][51] is a handy utility for editing PDFs. PDF Mod lets users remove pages, add pages, bind multiple single PDFs together into a single PDF, reorder the pages, and rotate the pages.
-
-### SparkleShare
-
-[SparkleShare][52] is a git-backed file-sharing tool artists use to collaborate and share assets. Hook it up to a GitLab repo and you've got a nice open source infrastructure for asset management. The SparkleShare front end nullifies the inscrutability of git by providing a Dropbox-like interface on top of it.
-
-### Photography
-
-### Darktable
-
-[Darktable][53] is an application that allows you to develop digital RAW files and has a rich set of tools for the workflow management and non-destructive editing of photographic images. Darktable includes support for an extensive range of popular cameras and lenses.
-
- ![Changing color balance screenshot](https://opensource.com/sites/default/files/dt_colour.jpg "Changing color balance screenshot")
-
-### Entangle
-
-[Entangle][54] lets you tether your digital camera to your computer and control the camera completely from the computer.
-
-### Hugin
-
-[Hugin][55] is a tool that allows you to stitch together photos in order to create panoramic photos.
-
-### 2D animation
-
-### Synfig Studio
-
-[Synfig Studio][56] is a vector-based 2D animation suite that also supports bitmap artwork and is tablet-friendly.
-
-### Blender Grease Pencil
-
-I covered Blender above, but particularly notable from a recent release is [a refactored grease pencil feature][57], which adds the ability to create 2D animations.
-
-
-### Krita
-
-[Krita][58] also now provides 2D animation functionality.
-
-
-### Music and audio editing
-
-### Audacity
-
-[Audacity][59] is a popular, user-friendly tool for editing audio files and recording sound.
-
-### Ardour
-
-[Ardour][60] is a digital audio workstation with an interface centered around a record, edit, and mix workflow. It's a little more complicated than Audacity to use but allows for automation and is generally more sophisticated. (Available for Linux, Mac OS X, and Windows.)
-
-### Hydrogen
-
-[Hydrogen][61] is an open source drum machine with an intuitive interface. It provides the ability to create and arrange various patterns using synthesized instruments.
-
-### Mixxx
-
-[Mixxx][62] is a four-deck DJ suite that allows you to DJ and mix songs together with powerful controls, including beat looping, time stretching, and pitch bending, as well as live broadcast your mixes and interface with DJ hardware controllers.
-
-### Rosegarden
-
-[Rosegarden][63] is a music composition suite that includes tools for score writing and music composition/editing and provides an audio and MIDI sequencer.
-
-### MuseScore
-
-[MuseScore][64] is a music score creation, notation, and editing tool with a community of musical score contributors.
-
-### Additional creative tools
-
-### MakeHuman
-
-[MakeHuman][65] is a 3D graphical tool for creating photorealistic models of humanoid forms.
-
-
-
-### Natron
-
-[Natron][66] is a node-based compositor tool used for video post-production and motion graphic and special effect design.
-
-### FontForge
-
-[FontForge][67] is a typeface creation and editing tool. It allows you to edit letter forms in a typeface as well as generate font files from those typeface designs.
-
-### Valentina
-
-[Valentina][68] is an application for drafting sewing patterns.
-
-### Calligra Flow
-
-[Calligra Flow][69] is a Visio-like diagramming tool. (Available for Linux, Mac OS X, and Windows.)
-
-### Resources
-
-There are a lot of toys and goodies to try out there. Need some inspiration to start your exploration? These websites and conferences are chock-full of tutorials and beautiful creative works to inspire you and get you going:
-
-1. [pixls.us][7]: Blog hosted by photographer Pat David that focuses on free and open source tools and workflow for professional photographers.
-2. [David Revoy's Blog][8]: The blog of David Revoy, an immensely talented free and open source illustrator, concept artist, and advocate, with credits on several of the Blender Foundation films.
-3. [The Open Source Creative Podcast][9]: Hosted by Opensource.com community moderator and columnist [Jason van Gumster][10], who is a Blender and GIMP expert, and author of _[Blender for Dummies][1]_, this podcast is directed squarely at those of us who enjoy open source creative tools and the culture around them.
-4. [Libre Graphics Meeting][11]: Annual conference for free and open source creative software developers and the creatives who use the software. This is the place to find out about what cool features are coming down the pipeline in your favorite open source creative tools, and to enjoy what their users are creating with them.
-
---------------------------------------------------------------------------------
-
-About the author:
-
-![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-343-8e0fb148b105b450634e30acd8f5b22b.png?itok=oxzTm70z)
-
-Máirín Duffy - Máirín is a principal interaction designer at Red Hat. She is passionate about software freedom and free & open source tools, particularly in the creative domain: her favorite application is Inkscape (http://inkscape.org).
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/16/12/yearbook-top-open-source-creative-tools-2016
-
-Author: [Máirín Duffy][a]
-Translator: [译者ID](https://github.com/译者ID)
-Proofreader: [校对者ID](https://github.com/校对者ID)
-
-This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
-
-[a]:https://opensource.com/users/mairin
-[1]:http://www.blenderbasics.com/
-[2]:https://builder.blender.org/download/
-[3]:http://graphicall.org/
-[4]:https://mathieu.daitauha.fr/blog/2016/09/23/blender-nightly-in-flatpak/
-[5]:https://pitivi.wordpress.com/2016/07/18/get-pitivi-directly-from-us-with-flatpak/
-[6]:http://www.openshotvideo.com/2016/08/openshot-21-released.html
-[7]:http://pixls.us/
-[8]:http://davidrevoy.com/
-[9]:http://monsterjavaguns.com/podcast/
-[10]:https://opensource.com/users/jason-van-gumster
-[11]:http://libregraphicsmeeting.org/2016/
-[12]:https://opensource.com/life/12/9/tour-through-open-source-creative-tools
-[13]:https://opensource.com/business/16/8/flatpak
-[14]:http://flatpak.org/apps.html
-[15]:https://opensource.com/tags/gimp
-[16]:https://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/
-[17]:https://www.gimp.org/news/2016/07/14/gimp-2-8-18-released/
-[18]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
-[19]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
-[20]:https://opensource.com/tags/inkscape
-[21]:http://wiki.inkscape.org/wiki/index.php/Release_notes/0.91
-[22]:http://wiki.inkscape.org/wiki/index.php/Mesh_Gradients
-[23]:https://www.youtube.com/watch?v=IztyV-Dy4CE
-[24]:https://inkscape.org/cs/~doctormo/%E2%98%85symbols-dialog
-[25]:https://github.com/Xaviju/inkscape-open-symbols
-[26]:https://opensource.com/tags/scribus
-[27]:https://www.scribus.net/scribus-1-4-6-released/
-[28]:https://www.scribus.net/scribus-1-5-2-released/
-[29]:http://mypaint.org/
-[30]:http://mypaint.org/blog/2016/01/15/mypaint-1.2.0-released/
-[31]:https://github.com/mypaint/mypaint/wiki/v1.2-Inking-Tool
-[32]:https://opensource.com/tags/blender
-[33]:http://www.blender.org/features/2-78/
-[34]:https://opensource.com/tags/krita
-[35]:https://krita.org/en/item/krita-3-0-1-update-brings-numerous-fixes/
-[36]:https://opensource.com/life/16/9/10-reasons-flowblade-linux-video-editor
-[37]:https://opensource.com/tags/kdenlive
-[38]:https://opensource.com/life/11/11/introduction-kdenlive
-[39]:http://jliljebl.github.io/flowblade/
-[40]:http://pitivi.org/
-[41]:http://wiki.pitivi.org/wiki/Why_Python%3F
-[42]:https://gstreamer.freedesktop.org/
-[43]:http://shotcut.org/
-[44]:http://permalink.gmane.org/gmane.comp.lib.fltk.general/2397
-[45]:http://www.dennedy.org/
-[46]:http://openshot.org/
-[47]:http://www.selapa.net/swatchbooker/
-[48]:https://help.gnome.org/users/gnome-help/stable/color.html.en
-[49]:https://help.gnome.org/users/gnome-help/stable/wacom.html.en
-[50]:http://xournal.sourceforge.net/
-[51]:https://wiki.gnome.org/Apps/PdfMod
-[52]:https://www.sparkleshare.org/
-[53]:https://opensource.com/life/16/4/how-use-darktable-digital-darkroom
-[54]:https://entangle-photo.org/
-[55]:http://hugin.sourceforge.net/
-[56]:https://opensource.com/article/16/12/synfig-studio-animation-software-tutorial
-[57]:https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.78/GPencil
-[58]:https://opensource.com/tags/krita
-[59]:https://opensource.com/tags/audacity
-[60]:https://ardour.org/
-[61]:http://www.hydrogen-music.org/
-[62]:http://mixxx.org/
-[63]:http://www.rosegardenmusic.com/
-[64]:https://opensource.com/life/16/03/musescore-tutorial
-[65]:http://makehuman.org/
-[66]:https://natron.fr/
-[67]:http://fontforge.github.io/en-US/
-[68]:http://valentina-project.org/
-[69]:https://www.calligra.org/flow/
diff --git a/sources/tech/20161228 How to set up a Continuous Integration server for Android development.md b/sources/tech/20161228 How to set up a Continuous Integration server for Android development.md
index 62097b8fe5..388f908415 100644
--- a/sources/tech/20161228 How to set up a Continuous Integration server for Android development.md
+++ b/sources/tech/20161228 How to set up a Continuous Integration server for Android development.md
@@ -1,3 +1,4 @@
+++++Translating++++++
How to set up a Continuous Integration server for Android development (Ubuntu + Jenkins + SonarQube)
============================================================
diff --git a/sources/tech/201701 GraphQL In Use Building a Blogging Engine API with Golang and PostgreSQL.md b/sources/tech/201701 GraphQL In Use Building a Blogging Engine API with Golang and PostgreSQL.md
index 07c779138e..bff6d03773 100644
--- a/sources/tech/201701 GraphQL In Use Building a Blogging Engine API with Golang and PostgreSQL.md
+++ b/sources/tech/201701 GraphQL In Use Building a Blogging Engine API with Golang and PostgreSQL.md
@@ -1,3 +1,4 @@
+ictlyh Translating
GraphQL In Use: Building a Blogging Engine API with Golang and PostgreSQL
============================================================
diff --git a/sources/tech/20170112 The 6 unwritten rules of open source development.md b/sources/tech/20170112 The 6 unwritten rules of open source development.md
deleted file mode 100644
index 86b8281019..0000000000
--- a/sources/tech/20170112 The 6 unwritten rules of open source development.md
+++ /dev/null
@@ -1,69 +0,0 @@
-+++
-++Translating
-+++++
-The 6 unwritten rules of open source development
-============================================================
-
-> Do you want to be a successful and valued member of an open source project? Follow these unwritten rules
-
-
- ![The 6 unwritten rules of open source development](http://images.techhive.com/images/article/2016/12/09_opensource-100698477-large.jpg)
-
->_Matt Hicks is vice president of software engineering at Red Hat and one of the founding members of the Red Hat OpenShift team. He has spent 15 years in software engineering with a variety of roles in development, operations, architecture, and management._
-
-The sports world is rife with unwritten rules. These are the behaviors and rituals that are observed but rarely documented in an official capacity. For example, in baseball, unwritten rules range from not stealing bases when well ahead to never giving up an intentional walk when there’s a runner on first. To outsiders, these are esoteric, perhaps even nonsensical guidelines, but they are followed by every player who wants to be a valued teammate and respected opponent.
-
-Software development, particularly _open source_ software development, also has an invisible rulebook. As in other team sports, these rules can have a significant impact on how an open source community treats a developer, especially newcomers.
-
-
-### Walk before you run
-
-Before interacting with any community, open source or otherwise, you need to do your homework. For prospective open source contributors, this means understanding the community’s mission and learning where you can help from the outset. Everyone wants to contribute code, but far fewer developers are ready, willing, and able to do the grunt work: testing patches, reviewing code, sifting through documentation and correcting errors, and all of those other generally undesirable tasks that are required for a healthy community.
-
-Why do this when you can start cranking out beautiful lines of code? It’s about trust and, more important, showing that you care about the community as a whole and not developing only the features that you care about.
-
-
-### Make an impact, not a crater
-
-As you build up your reputation with a given open source community, it’s important to develop a broad understanding of the project and the code base. Don’t stop at the mission statement; dive into the project itself and understand what makes it tick outside of your own area of expertise. Beyond broadening your own understanding as a developer, this helps you gain insight into how your code contributions could impact the larger project, not only your little piece of the pie.
-
-For example, maybe you want to create a revision to a networking module. You build it and test it, and it looks good, so you send it off to the community for more testing. As it turns out, this module breaks a security setting or causes a major storage incident when deployed in a certain manner -- issues that could have been remedied had you looked at the code base as a whole rather than your piece alone. Showing that you have a broad understanding of how various parts of the project interact with others -- and developing your patches to make an impact, not a crater -- will go a long way toward making your contributions appreciated.
-
-### Patch bombing is not OK
-
-Your work is not over when your code is submitted. There will be discussion about the change and a lot of QA and testing yet to be done if accepted. You want to make sure you are putting in the time and effort to understand how you can move your code and patches forward without them becoming a burden on other members.
-
-### Help others before helping yourself
-
-Open source communities aren’t a dog-eat-dog world; they’re about putting the value of the project before individual contributions and individual success. If you want to increase your odds of being seen as a valued member of the community (and get your code accepted), help others with their efforts. If you know about networking, review networking modules -- apply your expertise to make the whole code base better. It’s no surprise that top reviewers often correlate to top contributors. The more you help, the more valued you are.
-
-### Address the edge
-
-As a developer, you’re likely looking to contribute to an open source project to address a particular pain point. Maybe your preferred operating system isn’t supported or you desperately want to modernize the security technology used by the community. The best way to introduce changes, especially more aggressive ones, is to make them impossible to refuse. Know enough about the code base to think through every edge case. Add capabilities without breaking existing functionality. Pour your energy into the completeness of your feature, not only the submission.
-
-### Don’t give up
-
-Open source communities have plenty of fly-by-night members, but with commitment comes credibility. Don’t merely walk away when a patch is rejected. Find out why it was rejected, make those fixes, and try again. As you work on your patch, keep up with changes to the code base and make sure your patch remains mergeable as the project evolves. Don’t leave it to others to patch up your patch. As the author, take the burden on yourself and keep other community members free to do the same with their work.
-
-These unwritten rules might seem simple, but too many open source contributors don’t follow them. Developers who do so will not only succeed in advancing a project for themselves; they will help to advance open source in general.
-
---------------------------------------------------------------------------------
-
-via: http://www.infoworld.com/article/3156776/open-source-tools/the-6-unwritten-rules-of-open-source-development.html
-
-Author: [Matt Hicks][a]
-Translator: [译者ID](https://github.com/译者ID)
-Proofreader: [校对者ID](https://github.com/校对者ID)
-
-This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
-
-[a]:http://www.infoworld.com/blog/new-tech-forum/
-[1]:https://twitter.com/intent/tweet?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html&via=infoworld&text=The+6+unwritten+rules+of+open+source+development
-[2]:https://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html
-[3]:http://www.linkedin.com/shareArticle?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html&title=The+6+unwritten+rules+of+open+source+development
-[4]:https://plus.google.com/share?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html
-[5]:http://reddit.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html&title=The+6+unwritten+rules+of+open+source+development
-[6]:http://www.stumbleupon.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html
-[7]:http://www.infoworld.com/article/3156776/open-source-tools/the-6-unwritten-rules-of-open-source-development.html#email
-[8]:http://www.infoworld.com/article/3152565/linux/5-rock-solid-linux-distros-for-developers.html#tk.ifw-infsb
-[9]:http://www.infoworld.com/newsletters/signup.html#tk.ifw-infsb
diff --git a/sources/tech/20170118 Arrive On Time With NTP -- Part 1- Usage Overview.md b/sources/tech/20170118 Arrive On Time With NTP -- Part 1- Usage Overview.md
deleted file mode 100644
index c5ad01dc35..0000000000
--- a/sources/tech/20170118 Arrive On Time With NTP -- Part 1- Usage Overview.md
+++ /dev/null
@@ -1,61 +0,0 @@
-Arrive On Time With NTP -- Part 1: Usage Overview
-============================================================
-
- ![NTP](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ntp-time.jpg?itok=zu8dqpki "NTP")
-In this first of a three-part series, Chris Binnie looks at why NTP services are essential to a happy infrastructure. [Used with permission][1]
-
-Few services on the Internet can claim to be so critical in nature as time. Subtle issues which affect the timekeeping of your systems can sometimes take a day or two to be realized, and they are almost always unwelcome because of the knock-on effects they cause.
-
-Consider as an example that your backup server loses connectivity to your Network Time Protocol (NTP) server and, over a period of a few days, introduces several hours of clock skew. Your colleagues arrive at work at 9am as usual, only to find the bandwidth-intensive backups consuming all the network resources, meaning that they can barely even log into their workstations to start their day’s work until the backup has finished.
-
-In this first of a three-part series, I’ll provide a brief overview of NTP to help prevent such disasters. From the timestamps on your emails to remembering when you started your shift at work, NTP services are essential to a happy infrastructure.
-
-You might consider that the really important NTP servers (from which other servers pick up their clock data) are at the bottom of an inverted pyramid and referred to as Stratum 1 servers (also known as “primary” servers). These servers speak directly to national time services (known as Stratum 0, which might be devices such as atomic clocks or GPS clocks, for example). There are a number of ways of communicating with them securely, via satellite or radio, for example.
-
-Somewhat surprisingly, it’s reasonably common for even large enterprises to connect to Stratum 2 servers (or “secondary” servers) as opposed to primary servers. Stratum 2 servers, as you’d expect, synchronize directly with Stratum 1 servers. If you consider that a corporation might have its own onsite NTP servers (at least two, usually three, for resilience) then these would be Stratum 3 servers. As a result, such a corporation’s Stratum 3 servers would then connect upstream to predefined secondary servers and dutifully pass the time on to its many client and server machines as an accurate reflection of the current time.
-
-A simple design component of NTP is that it works on the premise -- thanks to the large geographical distances travelled by Internet traffic -- that round-trip times (between when a packet was sent and how many seconds later it was received) are sensibly taken into account before trusting a time as being entirely accurate. There’s a lot more to setting a computer’s clock than you might at first think; if you don’t believe me, then [this fascinating web page][3] is well worth looking at.
-
-At the risk of revisiting the point, NTP is so key to making sure your infrastructure functions as expected that the Stratum servers to which your NTP servers connect to fuel your internal timekeeping must be absolutely trusted and additionally offer redundancy. There’s an informative list of the Stratum 1 servers available at the [main NTP site][4].
-
-As you can see from that list, some NTP Stratum 1 servers run in a “ClosedAccount” state; these servers can’t be used without prior consent. However, as long as you adhere to their usage guidelines, “OpenAccess” servers are indeed open for polling. Any “RestrictedAccess” servers can sometimes be limited due to a maximum number of clients or a minimum poll interval. Additionally, these are sometimes only available to certain types of organizations, such as academia.
-
-### Respect My Authority
-
-On a public NTP server, you are likely to find that the usage guidelines follow several rules. Let’s have a look at some of them now.
-
-The “iburst” option involves a client sending a number of packets (eight packets rather than the usual single packet) to an NTP server should it not respond at a standard polling interval. If, after shouting loudly at the NTP server a few times within a short period of time, a recognized response isn’t forthcoming, then the local time is not changed.
-
-Unlike “iburst”, the “burst” option is not commonly allowed (so don’t use it!) as per the general rules for NTP servers. That option instead sends numerous packets (eight again, apparently) at each polling interval and also when the server is available. If you are continually throwing packets at higher-up Stratum servers even when they are responding normally, you may get blacklisted for using the “burst” option.
-
-Clearly, how often you connect to a server makes a difference to its load (and the negligible amount of bandwidth used). These settings can be configured locally using the “minpoll” and “maxpoll” options. However, to follow the connection rules of an NTP server, you shouldn’t generally alter the defaults of 64 seconds and 1024 seconds, respectively.
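-
-Pulling those options together, a minimal client-side configuration might look like the sketch below. The pool.ntp.org hostnames are the public NTP pool, and note that minpoll and maxpoll take powers of two, so the defaults of 64 and 1024 seconds are written as 6 and 10:
-
-```
-# /etc/ntp.conf -- client sketch
-# iburst speeds up the initial synchronization; minpoll/maxpoll are
-# shown purely to illustrate the syntax and sit at their defaults.
-server 0.pool.ntp.org iburst minpoll 6 maxpoll 10
-server 1.pool.ntp.org iburst
-server 2.pool.ntp.org iburst
-```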
-
-Another, far from tacit, rule is that clients should always respect Kiss-Of-Death (KOD) messages generated by those servers from which they request time. If an NTP server doesn’t want to respond to a particular request, similar to certain routing and firewalling techniques, then it’s perfectly possible for it to simply discard or blackhole any associated packets.
-
-In other words, the recipient server of these unwanted packets takes on no extra load to speak of and simply drops the traffic that it doesn’t think it should serve a response to. As you can imagine, however, this isn’t always entirely helpful, and sometimes it’s better to politely ask the client to cease and desist, rather than ignoring the requests. For this reason, there’s a specific packet type called the KOD packet. Should a client be sent an unwelcome KOD packet, it should then remember that particular server as having responded with an access-denied style marker.
-
-If it’s not the first KOD packet received back from the server, then the client assumes that there is a rate-limiting condition (or something similar) present on the server. It’s common at this stage for the client to write to its local logs, noting the less-than-satisfactory outcome of the transaction with that particular server, which is useful if you ever need to troubleshoot such a scenario.
-
-Bear in mind that, for obvious reasons, it’s key that your NTP infrastructure be dynamic. Thus, it’s important not to hard-code IP addresses into your NTP config. By using DNS names, individual servers can fall off the network while the service is still maintained, IP address space can be reallocated, and simple load balancing (with a degree of resilience) can be introduced.
-
-Let’s not forget that we also need to consider that the exponential growth of the Internet of Things (IoT), eventually involving billions of new devices, will mean a whole host of equipment will need to keep its wristwatches set to the correct time. Should a hardware vendor inadvertently (or purposely) configure their devices to only communicate with one provider’s NTP servers (or even a single server) then there can be -- and have been in the past -- very unwelcome issues.
-
-As you might imagine, as more units of hardware are purchased and brought online, the owner of the NTP infrastructure is likely to be less than grateful for the associated fees that they are incurring without any clear gain. This scenario is far from being unique to the realms of fantasy. Ongoing headaches -- thanks to NTP traffic forcing a provider’s infrastructure to creak -- have been seen several times over the last few years.
-
-In the next two articles, I’ll look at some important NTP configuration options and examine server setup.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/arrive-time-ntp-part-1-usage-overview
-
-作者:[CHRIS BINNIE][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/chrisbinnie
-[1]:https://www.linux.com/licenses/category/used-permission
-[2]:https://www.linux.com/files/images/ntp-timejpg
-[3]:http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm
-[4]:http://support.ntp.org/bin/view/Servers/StratumOneTimeServers
diff --git a/sources/tech/20170120 How to Install Elastic Stack on CentOS 7.md b/sources/tech/20170120 How to Install Elastic Stack on CentOS 7.md
deleted file mode 100644
index 89704e593d..0000000000
--- a/sources/tech/20170120 How to Install Elastic Stack on CentOS 7.md
+++ /dev/null
@@ -1,562 +0,0 @@
-How to Install Elastic Stack on CentOS 7
-============================================================
-
-### On this page
-
-1. [Step 1 - Prepare the Operating System][1]
-2. [Step 2 - Install Java][2]
-3. [Step 3 - Install and Configure Elasticsearch][3]
-4. [Step 4 - Install and Configure Kibana with Nginx][4]
-5. [Step 5 - Install and Configure Logstash][5]
-6. [Step 6 - Install and Configure Filebeat on the CentOS Client][6]
-7. [Step 7 - Install and Configure Filebeat on the Ubuntu Client][7]
-8. [Step 8 - Testing][8]
-9. [Reference][9]
-
-**Elasticsearch** is an open source search engine based on Lucene, developed in Java. It provides a distributed and multitenant full-text search engine with an HTTP dashboard web interface (Kibana). The data is queried, retrieved and stored with a JSON document scheme. Elasticsearch is a scalable search engine that can be used to search for all kinds of text documents, including log files. Elasticsearch is the heart of the 'Elastic Stack' or ELK Stack.
-
-**Logstash** is an open source tool for managing events and logs. It provides real-time pipelining for data collection. Logstash will collect your log data, convert the data into JSON documents, and store them in Elasticsearch.
-
-**Kibana** is an open source data visualization tool for Elasticsearch. Kibana provides a pretty dashboard web interface. It allows you to manage and visualize data from Elasticsearch. It's not just beautiful, but also powerful.
-
-In this tutorial, I will show you how to install and configure Elastic Stack on a CentOS 7 server for monitoring server logs. Then I'll show you how to install 'Elastic Beats' on a CentOS 7 and an Ubuntu 16 client operating system.
-
-**Prerequisite**
-
-* CentOS 7 64 bit with 4GB of RAM - elk-master
-* CentOS 7 64 bit with 1 GB of RAM - client1
-* Ubuntu 16 64 bit with 1GB of RAM - client2
-
-### Step 1 - Prepare the Operating System
-
-In this tutorial, we will disable SELinux on the CentOS 7 server. Edit the SELinux configuration file.
-
-vim /etc/sysconfig/selinux
-
-Change SELINUX value from enforcing to disabled.
-
-SELINUX=disabled
-
-Then reboot the server.
-
-reboot
-
-Login to the server again and check the SELinux state.
-
-getenforce
-
-Make sure the result is disabled.
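-
-As an aside, the same edit can be made non-interactively. This is just a convenience sketch assuming the stock CentOS 7 file; --follow-symlinks matters because /etc/sysconfig/selinux is a symlink to /etc/selinux/config.
-
-sed -i --follow-symlinks 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux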
-
-### Step 2 - Install Java
-
-Java is required for the Elastic Stack deployment. Elasticsearch requires Java 8, and it is recommended to use the Oracle JDK 1.8. I will install Java 8 from the official Oracle rpm package.
-
-Download Java 8 JDK with the wget command.
-
-wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"
-
-Then install it with this rpm command:
-
-rpm -ivh jdk-8u77-linux-x64.rpm
-
-Finally, check java JDK version to ensure that it is working properly.
-
-java -version
-
-You will see Java version of the server.
-
-### Step 3 - Install and Configure Elasticsearch
-
-In this step, we will install and configure Elasticsearch. I will install Elasticsearch from an rpm package provided by elastic.co and configure it to run on localhost (to make the setup secure and ensure that it is not reachable from the outside).
-
-Before installing Elasticsearch, add the elastic.co key to the server.
-
-rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
-
-Next, download Elasticsearch 5.1 with wget and then install it.
-
-wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
-rpm -ivh elasticsearch-5.1.1.rpm
-
-Elasticsearch is installed. Now go to the configuration directory and edit the elasticsearch.yml configuration file.
-
-cd /etc/elasticsearch/
-vim elasticsearch.yml
-
-Enable the memory lock for Elasticsearch by uncommenting line 40. This disables memory swapping for Elasticsearch.
-
-bootstrap.memory_lock: true
-
-In the 'Network' block, uncomment the network.host and http.port lines.
-
-network.host: localhost
-http.port: 9200
-
-Save the file and exit the editor.
-
-Now edit the elasticsearch.service file for the memory lock configuration.
-
-vim /usr/lib/systemd/system/elasticsearch.service
-
-Uncomment LimitMEMLOCK line.
-
-LimitMEMLOCK=infinity
-
-Save and exit.
-
-Edit the sysconfig configuration file for Elasticsearch.
-
-vim /etc/sysconfig/elasticsearch
-
-Uncomment line 60 and make sure the value is 'unlimited'.
-
-MAX_LOCKED_MEMORY=unlimited
-
-Save and exit.
-
-The Elasticsearch configuration is finished. Elasticsearch will run on the localhost IP address on port 9200, and we have disabled memory swapping for it by enabling mlockall on the CentOS server.
-
-Reload systemd, enable Elasticsearch to start at boot time, then start the service.
-
-sudo systemctl daemon-reload
-sudo systemctl enable elasticsearch
-sudo systemctl start elasticsearch
-
-Wait a moment for Elasticsearch to start, then check the open ports on the server and make sure the 'state' for port 9200 is 'LISTEN'.
-
-netstat -plntu
-
-[
- ![Check elasticsearch running on port 9200](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/1.png)
-][10]
-
-Then check the memory lock to ensure that mlockall is enabled, and check that Elasticsearch is running with the commands below.
-
-curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
-curl -XGET 'localhost:9200/?pretty'
-
-You will see the results below.
-
-[
- ![Check memory lock elasticsearch and check status](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/2.png)
-][11]
-
-### Step 4 - Install and Configure Kibana with Nginx
-
-In this step, we will install and configure Kibana with a Nginx web server. Kibana will listen on the localhost IP address and Nginx acts as a reverse proxy for the Kibana application.
-
-Download Kibana 5.1 with wget, then install it with the rpm command:
-
-wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
-rpm -ivh kibana-5.1.1-x86_64.rpm
-
-Now edit the Kibana configuration file.
-
-vim /etc/kibana/kibana.yml
-
-Uncomment the configuration lines for server.port, server.host and elasticsearch.url.
-
-server.port: 5601
-server.host: "localhost"
-elasticsearch.url: "http://localhost:9200"
-
-Save and exit.
-
-Add Kibana to run at boot and start it.
-
-sudo systemctl enable kibana
-sudo systemctl start kibana
-
-Kibana will run on port 5601 as a node application.
-
-netstat -plntu
-
-[
- ![Kibana running as node application on port 5601](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/3.png)
-][12]
-
-The Kibana installation is finished. Now we need to install Nginx and configure it as a reverse proxy to be able to access Kibana from the public IP address.
-
-Nginx is available in the EPEL repository; install epel-release with yum.
-
-yum -y install epel-release
-
-Next, install the Nginx and httpd-tools package.
-
-yum -y install nginx httpd-tools
-
-The httpd-tools package contains tools for the web server; we will use htpasswd basic authentication for Kibana.
-
-Edit the Nginx configuration file and remove the **'server { }'** block, so we can add a new virtual host configuration.
-
-cd /etc/nginx/
-vim nginx.conf
-
-Remove the server { } block.
-
-[
- ![Remove Server Block on Nginx configuration](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/4.png)
-][13]
-
-Save and exit.
-
-Now we need to create a new virtual host configuration file in the conf.d directory. Create the new file 'kibana.conf' with vim.
-
-vim /etc/nginx/conf.d/kibana.conf
-
-Paste the configuration below.
-
-```
-server {
- listen 80;
-
- server_name elk-stack.co;
-
- auth_basic "Restricted Access";
- auth_basic_user_file /etc/nginx/.kibana-user;
-
- location / {
- proxy_pass http://localhost:5601;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection 'upgrade';
- proxy_set_header Host $host;
- proxy_cache_bypass $http_upgrade;
- }
-}
-```
-
-Save and exit.
-
-Then create a new basic authentication file with the htpasswd command.
-
-sudo htpasswd -c /etc/nginx/.kibana-user admin
-TYPE YOUR PASSWORD
-
-Test the Nginx configuration and make sure there is no error. Then add Nginx to run at the boot time and start Nginx.
-
-nginx -t
-systemctl enable nginx
-systemctl start nginx
-
-[
- ![Add nginx virtual host configuration for Kibana Application](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/5.png)
-][14]
-
-### Step 5 - Install and Configure Logstash
-
-In this step, we will install Logstash and configure it to centralize server logs from clients with Filebeat, then filter and transform the syslog data and move it into the stash (Elasticsearch).
-
-Download Logstash and install it with rpm.
-
-wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
-rpm -ivh logstash-5.1.1.rpm
-
-Generate a new SSL certificate file so that the client can identify the elastic server.
-
-Go to the tls directory and edit the openssl.cnf file.
-
-cd /etc/pki/tls
-vim openssl.cnf
-
-Add a new line in the '[ v3_ca ]' section for the server identification.
-
-[ v3_ca ]
-
-# Server IP Address
-subjectAltName = IP: 10.0.15.10
-
-Save and exit.
-
-Generate the certificate file with the openssl command.
-
-openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
-
-The certificate files can be found in the '/etc/pki/tls/certs/' and '/etc/pki/tls/private/' directories.
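-
-If you want to sanity-check what was generated, openssl can print the certificate's subject, validity window, and the subjectAltName added above. This is purely an optional verification step.
-
-openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates
-openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'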
-
-Next, we will create new configuration files for Logstash. We will create a new 'filebeat-input.conf' file to configure the log sources for filebeat, then a 'syslog-filter.conf' file for syslog processing and the 'output-elasticsearch.conf' file to define the Elasticsearch output.
-
-Go to the logstash configuration directory and create the new configuration files in the 'conf.d' subdirectory.
-
-cd /etc/logstash/
-vim conf.d/filebeat-input.conf
-
-Input configuration: paste the configuration below.
-
-```
-input {
- beats {
- port => 5443
- ssl => true
- ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
- ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
- }
-}
-```
-
-Save and exit.
-
-Create the syslog-filter.conf file.
-
-vim conf.d/syslog-filter.conf
-
-Paste the configuration below.
-
-```
-filter {
- if [type] == "syslog" {
- grok {
- match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
- add_field => [ "received_at", "%{@timestamp}" ]
- add_field => [ "received_from", "%{host}" ]
- }
- date {
- match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
- }
- }
-}
-```
-
-We use a filter plugin named '**grok**' to parse the syslog files.
-
-Save and exit.
-
-Create the output configuration file 'output-elasticsearch.conf'.
-
-vim conf.d/output-elasticsearch.conf
-
-Paste the configuration below.
-
-```
-output {
-  elasticsearch {
-    hosts => ["localhost:9200"]
- manage_template => false
- index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
- document_type => "%{[@metadata][type]}"
- }
-}
-```
-
-Save and exit.
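-
-Optionally, ask Logstash to validate the pipeline files before starting the service. This is a sketch rather than part of the original setup: the binary path below is where the rpm package normally installs Logstash, so adjust it if your layout differs.
-
-/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d --config.test_and_exit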
-
-Finally, add Logstash to start at boot time and start the service.
-
-sudo systemctl enable logstash
-sudo systemctl start logstash
-
-[
- ![Logstash started on port 5443 with SSL Connection](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/6.png)
-][15]
-
-### Step 6 - Install and Configure Filebeat on the CentOS Client
-
-Beats are data shippers, lightweight agents that can be installed on the client nodes to send huge amounts of data from the client machine to the Logstash or Elasticsearch server. There are 4 beats available, 'Filebeat' for 'Log Files', 'Metricbeat' for 'Metrics', 'Packetbeat' for 'Network Data' and 'Winlogbeat' for the Windows client 'Event Log'.
-
-In this tutorial, I will show you how to install and configure 'Filebeat' to transfer data log files to the Logstash server over an SSL connection.
-
-Login to the client1 server. Then copy the certificate file from the elastic server to the client1 server.
-
-ssh root@client1IP
-
-Copy the certificate file with the scp command.
-
-scp root@elk-serverIP:~/logstash-forwarder.crt .
-TYPE elk-server password
-
-Create a new directory and move the certificate file into it.
-
-sudo mkdir -p /etc/pki/tls/certs/
-mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
-
-Next, import the elastic key on the client1 server.
-
-rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
-
-Download Filebeat and install it with rpm.
-
-wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
-rpm -ivh filebeat-5.1.1-x86_64.rpm
-
-Filebeat has been installed; now go to the configuration directory and edit the file 'filebeat.yml'.
-
-cd /etc/filebeat/
-vim filebeat.yml
-
-In the paths section on line 21, add the new log files. We will add two files '/var/log/secure' for ssh activity and '/var/log/messages' for the server log.
-
- paths:
- - /var/log/secure
- - /var/log/messages
-
-Add a new configuration on line 26 to define the document type of these files as syslog.
-
- document_type: syslog
-
-Filebeat uses Elasticsearch as the output target by default. In this tutorial, we will change it to Logstash. Disable the Elasticsearch output by commenting out lines 83 and 85.
-
-Disable elasticsearch output.
-
-#-------------------------- Elasticsearch output ------------------------------
-#output.elasticsearch:
- # Array of hosts to connect to.
-# hosts: ["localhost:9200"]
-
-Now add the new logstash output configuration. Uncomment the logstash output configuration and change the values to match the configuration shown below.
-
-output.logstash:
- # The Logstash hosts
- hosts: ["10.0.15.10:5443"]
- bulk_max_size: 1024
- ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
- template.name: "filebeat"
- template.path: "filebeat.template.json"
- template.overwrite: false
-
-Save the file and exit vim.
-
-Add Filebeat to start at boot time and start it.
-
-sudo systemctl enable filebeat
-sudo systemctl start filebeat
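-
-If events do not show up later, a quick way to confirm that this client can reach the Logstash TLS endpoint and trusts its certificate is openssl's s_client (a diagnostic sketch; look for "Verify return code: 0 (ok)" in the output).
-
-openssl s_client -connect 10.0.15.10:5443 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt </dev/null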
-
-### Step 7 - Install and Configure Filebeat on the Ubuntu Client
-
-Connect to the server by ssh.
-
-ssh root@ubuntu-clientIP
-
-Copy the certificate file to the client with the scp command.
-
-scp root@elk-serverIP:~/logstash-forwarder.crt .
-
-Create a new directory for the certificate file and move the file to that directory.
-
-sudo mkdir -p /etc/pki/tls/certs/
-mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
-
-Add the elastic key to the server.
-
-wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
-
-Download the Filebeat .deb package and install it with the dpkg command.
-
-wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
-dpkg -i filebeat-5.1.1-amd64.deb
-
-Go to the filebeat configuration directory and edit the file 'filebeat.yml' with vim.
-
-cd /etc/filebeat/
-vim filebeat.yml
-
-Add the new log file paths in the paths configuration section.
-
- paths:
- - /var/log/auth.log
- - /var/log/syslog
-
-Set the document type to syslog.
-
- document_type: syslog
-
-Disable elasticsearch output by adding comments to the lines shown below.
-
-#-------------------------- Elasticsearch output ------------------------------
-#output.elasticsearch:
- # Array of hosts to connect to.
-# hosts: ["localhost:9200"]
-
-Enable logstash output, uncomment the configuration and change the values as shown below.
-
-output.logstash:
- # The Logstash hosts
- hosts: ["10.0.15.10:5443"]
- bulk_max_size: 1024
- ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
- template.name: "filebeat"
- template.path: "filebeat.template.json"
- template.overwrite: false
-
-Save the file and exit vim.
-
-Add Filebeat to start at boot time and start it.
-
-sudo systemctl enable filebeat
-sudo systemctl start filebeat
-
-Check the service status.
-
-systemctl status filebeat
-
-[
- ![Filebeat is running on the client Ubuntu](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/12.png)
-][16]
-
-### Step 8 - Testing
-
-Open your web browser and visit the elastic stack domain that you used in the Nginx configuration; mine is 'elk-stack.co'. Log in as the admin user with your password and press Enter to access the Kibana dashboard.
-
-[
- ![Login to the Kibana Dashboard with Basic Auth](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/7.png)
-][17]
-
-Create a new default index 'filebeat-*' and click on the 'Create' button.
-
-[
- ![Create First index filebeat for Kibana](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/8.png)
-][18]
-
-The default index has been created. If you have multiple beats on the elastic stack, you can configure the default beat with just one click on the 'star' button.
-
-[
- ![Filebeat index as default index on Kibana Dashboard](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/9.png)
-][19]
-
-Go to the '**Discover**' menu and you will see all the log files from the elk-client1 and elk-client2 servers.
-
-[
- ![Discover all Log Files from the Servers](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/10.png)
-][20]
-
-An example of JSON output from the elk-client1 server log for an invalid ssh login.
-
-[
- ![JSON output for Failed SSH Login](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/11.png)
-][21]
-
-And there is much more that you can do with the Kibana dashboard; just play around with the available options.
-
-Elastic Stack has been installed on a CentOS 7 server. Filebeat has been installed on a CentOS 7 and an Ubuntu client.
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
-
-作者:[Muhammad Arul][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
-[1]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-nbspprepare-the-operating-system
-[2]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-java
-[3]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-elasticsearch
-[4]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-kibana-with-nginx
-[5]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-logstash
-[6]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-centos-client
-[7]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-ubuntu-client
-[8]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-testing
-[9]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#reference
-[10]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/1.png
-[11]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/2.png
-[12]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/3.png
-[13]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/4.png
-[14]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/5.png
-[15]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/6.png
-[16]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/12.png
-[17]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/7.png
-[18]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/8.png
-[19]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/9.png
-[20]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/10.png
-[21]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/11.png
diff --git a/sources/tech/20170124 How to Keep Hackers out of Your Linux Machine Part 3- Your Questions Answered.md b/sources/tech/20170124 How to Keep Hackers out of Your Linux Machine Part 3- Your Questions Answered.md
deleted file mode 100644
index c854001b0b..0000000000
--- a/sources/tech/20170124 How to Keep Hackers out of Your Linux Machine Part 3- Your Questions Answered.md
+++ /dev/null
@@ -1,81 +0,0 @@
-How to Keep Hackers out of Your Linux Machine Part 3: Your Questions Answered
-============================================================
-
- ![Computer security](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/keep-hackers-out.jpg?itok=lqgHDxDu "computer security")
-Mike Guthrie answers some of the security-related questions received during his recent Linux Foundation webinar. Watch the free webinar on-demand.[Creative Commons Zero][1]
-
-Articles [one][6] and [two][7] in this series covered the five easiest ways to keep hackers out of your Linux machine, and know if they have made it in. This time, I’ll answer some of the excellent security questions I received during my recent Linux Foundation webinar. [Watch the free webinar on-demand.][8]
-
-**How can I store a passphrase for a private key if private key authentication is used by automated systems?**
-
-This is tough. This is something that we struggle with on our end, especially when we are doing Red Teams because we have stuff that calls back automatically. I use Expect but I tend to be old-school on that. You are going to have to script it and, yes, storing that passphrase on the system is going to be tough; you are going to have to encrypt it when you store it.
-
-My Expect script encrypts the passphrase stored and then decrypts, sends the passphrase, and re-encrypts it when it's done. I do realize there are some flaws in that, but it's better than having a no-passphrase key.
-
-If you do have a no-passphrase key and you do need to use it, then I would suggest limiting the user that requires it to almost nothing. For instance, if you are doing some automated log transfers or automated software installs, limit the access to only what it requires to perform those functions.
-
-You can run commands by SSH, so don't give them a shell; make it so they can just run that one command, which will actually prevent somebody from stealing that key and doing something other than just that one command.
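-
-In OpenSSH terms, that restriction lives in the key's authorized_keys entry on the target host. A minimal sketch (the script path is a hypothetical example and the key material is elided):
-
-```
-# ~/.ssh/authorized_keys -- this key may only run the named script,
-# with no PTY and no forwarding of any kind
-command="/usr/local/bin/pull-logs.sh",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... automation@backup
-```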
-
-**What do you think of password managers such as KeePass2?**
-
-Password managers, for me, are a very juicy target. With the advent of GPU cracking and some of the cracking capabilities in EC2, they become pretty easy to get past. I steal password vaults all the time.
-
-Now, our success rate at cracking those, that's a different story. We are still in about the 10 percent range of crack versus no crack. If a person doesn't do a good job at keeping a secure passphrase on their password vault, then we tend to get into it and we have a large amount of success. It's better than nothing but still you need to protect those assets. Protect the password vault as you would protect any other passwords.
-
-**Do you think it is worthwhile from a security perspective to create a new Diffie-Hellman moduli and limit them to 2048 bit or higher in addition to creating host keys with higher key lengths?**
-
-Yeah. There have been weaknesses in SSH products in the past where you could actually decrypt the packet stream. With that, you can pull all kinds of data across. People use SSH as a safe way to transfer files and passwords, trusting it thoughtlessly as an encryption mechanism. Doing what you can to use strong encryption and changing your keys and whatnot is important. I rotate my SSH keys -- not as often as I do my passwords -- but I rotate them about once a year. And, yeah, it's a pain, but it gives me peace of mind. I would recommend doing everything you can to make your encryption technology as strong as you possibly can.
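-
-On the Diffie-Hellman point specifically, OpenSSH ships ssh-keygen modes for generating and screening your own moduli file. A hedged sketch (both steps can take a long time, and sshd needs a restart afterwards):
-
-```
-# generate 2048-bit candidate primes, then screen them for safety
-ssh-keygen -G /tmp/moduli-2048.candidates -b 2048
-ssh-keygen -T /etc/ssh/moduli -f /tmp/moduli-2048.candidates
-```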
-
-**Is using four completely random English words (around 100k words) for a passphrase okay?**
-
-Sure. My passphrase is actually a full phrase. It's a sentence. With punctuation and capitalization. I don't use anything longer than that.
-
-I am a big proponent of having passwords that you can remember that you don’t have to write down or store in a password vault. A password that you can remember that you don't have to write down is more secure than one that you have to write down because it's funky.
-
-Using a phrase or using four random words that you will remember is much more secure than having a string of numbers and characters and having to hit shift a bunch of times. My current passphrase is roughly 200 characters long. It's something that I can type quickly and that I remember.
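-
-If you would rather generate such a passphrase than invent one, a quick sketch using standard tools (the dictionary path varies by distribution):
-
-```
-# pick four random words from the system dictionary
-shuf -n4 /usr/share/dict/words | tr '\n' ' '; echo
-```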
-
-**Any advice for protecting Linux-based embedded systems in an IoT scenario?**
-
-IoT is a new space; this is the frontier of systems and security, and it is starting to be different every single day. Right now, I try to keep as much offline as I possibly can. I don't like people messing with my lights and my refrigerator. I purposely did not buy a connected refrigerator because I have friends that are hackers, and I know that I would wake up to inappropriate pictures every morning. Keep them locked down. Keep them locked up. Keep them isolated.
-
-The current malware for IoT devices is dependent on default passwords and backdoors, so just do some research into what devices you have and make sure that there's nothing there that somebody could particularly access by default. Then make sure that the management interfaces for those devices are well protected by a firewall or another such device.
-
-**Can you name a firewall/UTM (OS or application) to use in SMB and large environments?**
-
-I use pfSense; it’s a BSD derivative. I like it a lot. There are a lot of modules, and there's actually commercial support for it now, which is pretty fantastic for small business. For larger devices, larger environments, it depends on what admins you can get a hold of.
-
-I have been a CheckPoint admin for most of my life, but Palo Alto is getting really popular, too. Those types of installations are going to be much different from a small business or home use. I use pfSense for any small networks.
-
-**Is there an inherent problem with cloud services?**
-
-There is no cloud; there are only other people's computers. There are inherent issues with cloud services. Just know who has access to your data and know what you are putting out there. Realize that when you give something to Amazon or Google or Microsoft, then you no longer have full control over it and the privacy of that data is in question.
-
-**What preparation would you suggest to get an OSCP?**
-
-I am actually going through that certification right now. My whole team is. Read their materials. Keep in mind that OSCP is going to be the offensive security baseline. You are going to use Kali for everything. If you don't -- if you decide not to use Kali -- make sure that you have all the tools installed to emulate a Kali instance.
-
-It's going to be a heavily tools-based certification. It's a good look into methodologies. Take a look at something called the Penetration Testing Framework because that would give you a good flow of how to do your test and their lab seems to be great. It's very similar to the lab that I have here at the house.
-
- _[Watch the full webinar on demand][3], for free. And see [parts one][4] and [two][5] of this series for five easy tips to keep your Linux machine secure._
-
- _Mike Guthrie works for the Department of Energy doing Red Team engagements and penetration testing._
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-3-your-questions-answered
-
-作者:[MIKE GUTHRIE][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/anch
-[1]:https://www.linux.com/licenses/category/creative-commons-zero
-[2]:https://www.linux.com/files/images/keep-hackers-outjpg
-[3]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
-[4]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
-[5]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
-[6]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
-[7]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
-[8]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
diff --git a/sources/tech/20170201 OpenContrail An Essential Tool in the OpenStack Ecosystem.md b/sources/tech/20170201 OpenContrail An Essential Tool in the OpenStack Ecosystem.md
deleted file mode 100644
index b1fd409f9f..0000000000
--- a/sources/tech/20170201 OpenContrail An Essential Tool in the OpenStack Ecosystem.md
+++ /dev/null
@@ -1,54 +0,0 @@
-OpenContrail: An Essential Tool in the OpenStack Ecosystem
-============================================================
-
-
- ![OpenContrail](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/contrails-cloud.jpg?itok=aoNIH-ar "OpenContrail")
-OpenContrail, an SDN platform used with the OpenStack cloud computing platform, is emerging as an essential tool around which administrators will need to develop skillsets.[Creative Commons Zero][1]Pixabay
-
-Throughout 2016, software-defined networking (SDN) rapidly evolved, and numerous players in the open source and cloud computing arenas are now helping it gain momentum. In conjunction with that trend, [OpenContrail][3], a popular SDN platform used with the OpenStack cloud computing platform, is emerging as an essential tool around which many administrators will have to develop skillsets.
-
-Just as administrators and developers have ramped up their skillsets surrounding essential tools like Ceph in the OpenStack ecosystem, they will need to embrace OpenContrail, which is fully open source and released under the Apache 2.0 license.
-
-With all of this in mind, Mirantis, one of the most active companies on the OpenStack scene, has [announced][4] commercial support for and contributions to OpenContrail. "With the addition of OpenContrail, Mirantis becomes a one-stop support shop for the entire stack of popular open source technologies used in conjunction with OpenStack, including Ceph for storage, OpenStack/KVM for compute and OpenContrail or Neutron for SDN," the company noted.
-
-According to a Mirantis announcement, "OpenContrail is an Apache 2.0-licensed project that is built using standards-based protocols and provides all the necessary components for network virtualization: SDN controller, virtual router, analytics engine, and published northbound APIs. It has an extensive REST API to configure and gather operational and analytics data from the system. Built for scale, OpenContrail can act as a fundamental network platform for cloud infrastructure."
-
-The news follows Mirantis’ [acquisition of TCP Cloud][5], a company specializing in managed services for OpenStack, OpenContrail, and Kubernetes. Mirantis will use TCP Cloud’s technology for continuous delivery of cloud infrastructure to manage the OpenContrail control plane, which will run in Docker containers. As a part of the effort, Mirantis has also been contributing to OpenContrail.
-
-Many contributors behind OpenContrail are working closely with Mirantis, and they have especially taken note of the support programs that Mirantis will offer.
-
-“OpenContrail is an essential project within the OpenStack community, and Mirantis is smart to containerize and commercially support it. The work our team is doing will make it easy to scale and update OpenContrail and perform seamless rolling upgrades alongside the rest of Mirantis OpenStack,” said Jakub Pavlik, Mirantis’ director of engineering and OpenContrail Advisory Board member. “Commercial support will also enable Mirantis to make the project compatible with a variety of switches, giving customers more choice in their hardware and software,” he said.
-
-In addition to commercial support for OpenContrail, we are very likely to see Mirantis serve up educational offerings for cloud administrators and developers who want to learn how to leverage it. Mirantis is already well-known for its [OpenStack training][6] curriculum and has wrapped Ceph into its training.
-
-In 2016, the SDN category rapidly evolved, and it also became meaningful to many organizations with OpenStack deployments. IDC published [a study][7] of the SDN market recently and predicted a 53.9 percent CAGR from 2014 through 2020, at which point the market will be valued at $12.5 billion. In addition, the Technology Trends 2016 report ranked SDN as one of the best technology investments that organizations can make.
-
-"Cloud computing and the 3rd Platform have driven the need for SDN, which will represent a market worth more than $12.5 billion in 2020\. Not surprisingly, the value of SDN will accrue increasingly to network-virtualization software and to SDN applications, including virtualized network and security services. Large enterprises are now realizing the value of SDN in the datacenter, but ultimately, they will also recognize its applicability across the WAN to branch offices and to the campus network," said[ Rohit Mehra][8], Vice President of Network Infrastructure at IDC.
-
-Meanwhile, The Linux Foundation recently [announced][9] the release of its 2016 report ["Guide to the Open Cloud: Current Trends and Open Source Projects."][10] This third annual report provides a comprehensive look at the state of open cloud computing, and includes a section on SDN.
-
-The Linux Foundation also offers [Software Defined Networking Fundamentals][11] (LFS265), a self-paced, online course on SDN, and functions as the steward of the [OpenDaylight][12] project, another important open source SDN platform that is quickly gaining momentum.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/news/event/open-networking-summit/2017/2/opencontrail-essential-tool-openstack-ecosystem
-
-作者:[SAM DEAN][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/sam-dean
-[1]:https://www.linux.com/licenses/category/creative-commons-zero
-[2]:https://www.linux.com/files/images/contrails-cloudjpg
-[3]:https://www.globenewswire.com/Tracker?data=brZ3aJVRyVHeFOyzJ1Dl4DMY3CsSV7XcYkwRyOcrw4rDHplSItUqHxXtWfs18mLsa8_bPzeN2EgZXWcQU8vchg==
-[4]:http://www.econotimes.com/Mirantis-Becomes-First-Vendor-to-Offer-Support-and-Managed-Services-for-OpenContrail-SDN-486228
-[5]:https://www.globenewswire.com/Tracker?data=Lv6LkvREFzGWgujrf1n6r_qmjSdu67-zdRAYt2itKQ6Fytomhfphuk5EbDNjNYtfgAsbnqI8H1dn_5kB5uOSmmSYY9XP2ibkrPw_wKi5JtnAyV43AjuR_epMmOUkZZ8QtFdkR33lTGDmN6O5B4xkwv7fENcDpm30nI2Og_YrYf0b4th8Yy4S47lKgITa7dz2bJpwpbCIzd7muk0BZ17vsEp0S3j4kQJnmYYYk5udOMA=
-[6]:https://training.mirantis.com/
-[7]:https://www.idc.com/getdoc.jsp?containerId=prUS41005016
-[8]:http://www.idc.com/getdoc.jsp?containerId=PRF003513
-[9]:https://www.linux.com/blog/linux-foundation-issues-2016-guide-open-source-cloud-projects
-[10]:http://ctt.marketwire.com/?release=11G120876-001&id=10172077&type=0&url=http%3A%2F%2Fgo.linuxfoundation.org%2Frd-open-cloud-report-2016-pr
-[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/software-defined-networking-fundamentals
-[12]:https://www.opendaylight.org/
diff --git a/sources/tech/20170201 lnav – An Advanced Console Based Log File Viewer for Linux.md b/sources/tech/20170201 lnav – An Advanced Console Based Log File Viewer for Linux.md
deleted file mode 100644
index a4476260c8..0000000000
--- a/sources/tech/20170201 lnav – An Advanced Console Based Log File Viewer for Linux.md
+++ /dev/null
@@ -1,195 +0,0 @@
-ictlyh Translating
-lnav – An Advanced Console Based Log File Viewer for Linux
-============================================================
-
-[LNAV][3], which stands for Log file Navigator, is an advanced console-based log file viewer for Linux. It does the same job as other file viewers such as cat, more, and tail, but offers enhanced features that normal file viewers lack (in particular, it presents logs in a colored, easy-to-read format).
-
-It can decompress compressed log files (zip, gzip, bzip) on the fly and merge them together for easy navigation. lnav merges more than one log file into a single view (Single Log View) based on message timestamps, which saves you from keeping multiple windows open. The color bars on the left-hand side help to show which file a message belongs to.
-
-Warnings and errors are highlighted in the display (yellow and red, respectively), so that we can easily see where problems have occurred. New log lines are loaded automatically.
-
-It displays the log messages from all files sorted by message timestamp. The top and bottom status bars tell you where you are in the logs. If you want to grep for a particular pattern, just type it at the search prompt and matches will be highlighted instantly.
-
-The built-in log message parser can automatically discover and extract detailed information from each line.
-
-A server log is a log file that is created and frequently updated by a server to capture all the activity of a particular service or application. This can be very useful when you have an issue with an application or service: log files hold all the information about the issue, such as when it started behaving abnormally, in the form of warning or error messages.
-
-When you open a log file in a normal file viewer, everything is displayed in one plain format (to put it bluntly: plain white text), which makes it very difficult to pick out the warning and error messages. To overcome this and quickly find the warnings and errors while troubleshooting, lnav comes in handy.
-
-Most of the common Linux log files are located at `/var/log/`.
-
-**lnav automatically detects the following log formats**
-
-* Common Web Access Log format
-* CUPS page_log
-* Syslog
-* Glog
-* VMware ESXi/vCenter Logs
-* dpkg.log
-* uwsgi
-* “Generic” – Any message that starts with a timestamp
-* Strace
-* sudo
-* gzip & bzip
-
-**Awesome lnav features**
-
-* Single Log View – All log file contents are merged into a single view based on message timestamps.
-* Automatic Log Format Detection – most common log formats are supported by lnav
-* Filters – regular-expression-based filters can be applied.
-* Timeline View
-* Pretty-Print View
-* Query Logs Using SQL (see the example after this list)
-* Automatic Data Extraction
-* “Live” Operation
-* Syntax Highlighting
-* Tab-completion
-* Session information is saved automatically and restored when you are viewing the same set of files.
-* Headless Mode
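-
-To illustrate the SQL feature mentioned above: lnav exposes each detected log format as a virtual SQLite table, and hitting `;` opens the SQL prompt. The query below is a sketch that assumes syslog files are loaded (so the `syslog_log` table exists; column names can vary between lnav versions):
-
-```
-;SELECT log_hostname, count(*) AS messages FROM syslog_log GROUP BY log_hostname ORDER BY messages DESC
-```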
-
-#### How to install lnav on Linux
-
-Most distributions (Debian, Ubuntu, Mint, Fedora, SUSE, openSUSE, Arch Linux, Manjaro, Mageia, etc.) carry the lnav package by default, so we can easily install it from the distribution's official repository with the package manager. For CentOS/RHEL we need to enable the **[EPEL Repository][1]**.
-
-```
-[Install lnav on Debian/Ubuntu/LinuxMint]
-$ sudo apt-get install lnav
-
-[Install lnav on RHEL/CentOS]
-$ sudo yum install lnav
-
-[Install lnav on Fedora]
-$ sudo dnf install lnav
-
-[Install lnav on openSUSE]
-$ sudo zypper install lnav
-
-[Install lnav on Mageia]
-$ sudo urpmi lnav
-
-[Install lnav on Arch Linux based system]
-$ yaourt -S lnav
-```
-
-If your distribution doesn’t have the lnav package, don’t worry: the developer offers `.rpm` and `.deb` packages, so we can easily install it without any issues. Make sure to download the latest release from the [developer’s GitHub page][4].
-
-```
-[Install lnav on Debian/Ubuntu/LinuxMint]
-$ sudo wget https://github.com/tstack/lnav/releases/download/v0.8.1/lnav_0.8.1_amd64.deb
-$ sudo dpkg -i lnav_0.8.1_amd64.deb
-
-[Install lnav on RHEL/CentOS]
-$ sudo yum install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
-
-[Install lnav on Fedora]
-$ sudo dnf install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
-
-[Install lnav on openSUSE]
-$ sudo zypper install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
-
-[Install lnav on Mageia]
-$ sudo rpm -ivh https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
-```
-
-#### Run lnav without any argument
-
-By default, lnav opens the `syslog` file when run without any arguments.
-
-```
-# lnav
-```
-
-[
- ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png)
-][5]
-
-#### To view specific logs with lnav
-
-To view a specific log with lnav, add the log file path after the lnav command. For example, we are going to view the `/var/log/dpkg.log` log.
-
-```
-# lnav /var/log/dpkg.log
-```
-
-[
- ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png)
-][6]
-
-#### To view multiple log files with lnav
-
-To view multiple log files with lnav, add the log file paths one by one, separated by a single space, after the lnav command. For example, we are going to view the `/var/log/dpkg.log` & `/var/log/kern.log` logs.
-
-The color bars on the left-hand side show which file each message belongs to, and the top bar also shows the current log file name. Most applications open multiple windows, or split a window horizontally or vertically, to display more than one log, but lnav does it differently: it displays multiple logs in the same window, interleaved by timestamp.
-
-```
-# lnav /var/log/dpkg.log /var/log/kern.log
-```
-
-[
- ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png)
-][7]
-
-#### To view older/compressed logs with lnav
-
-To view older or compressed logs (lnav decompresses zip, gzip, and bzip files on the fly), add the `-r` option to the lnav command.
-
-```
-# lnav -r /var/log/Xorg.0.log.old.gz
-```
-
-[
- ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png)
-][8]
-
-#### Histogram view
-
-First run `lnav`, then hit `i` to switch to/from the histogram view.
-[
- ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png)
-][9]
-
-#### View log parser results
-
-First run `lnav`, then hit `p` to toggle the display of the log parser results.
-[
- ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png)
-][10]
-
-#### Syntax Highlighting
-
-You can search for any given string, and it will be highlighted on screen. First run `lnav`, then hit `/` and type the string you want to find. For testing purposes, I’m searching for the string `Default`; see the screenshot below.
-[
- ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png)
-][11]
-
-#### Tab-completion
-
-The command prompt supports tab-completion for almost all operations. For example, when doing a search, you can tab-complete words that are displayed on screen rather than having to copy & paste. For testing purposes, I’m completing the string `/var/log/Xorg`; see the screenshot below.
-[
- ![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png)
-][12]
-
-
---------------------------------------------------------------------------------
-
-via: http://www.2daygeek.com/install-and-use-advanced-log-file-viewer-navigator-lnav-in-linux/
-
-作者:[Magesh Maruthamuthu][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.2daygeek.com/author/magesh/
-[1]:http://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
-[2]:http://www.2daygeek.com/author/magesh/
-[3]:http://lnav.org/
-[4]:https://github.com/tstack/lnav/releases
-[5]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png
-[6]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png
-[7]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png
-[8]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png
-[9]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png
-[10]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png
-[11]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png
-[12]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png
diff --git a/sources/tech/20170202 A look at 6 iconic open source brands.md b/sources/tech/20170202 A look at 6 iconic open source brands.md
index b692824636..e0ba1e7200 100644
--- a/sources/tech/20170202 A look at 6 iconic open source brands.md
+++ b/sources/tech/20170202 A look at 6 iconic open source brands.md
@@ -1,3 +1,4 @@
+geekrainy translating
A look at 6 iconic open source brands
============================================================
diff --git a/sources/tech/20170203 A comprehensive guide to taking screenshots in Linux using gnome-screenshot.md b/sources/tech/20170203 A comprehensive guide to taking screenshots in Linux using gnome-screenshot.md
deleted file mode 100644
index 6ef77ca290..0000000000
--- a/sources/tech/20170203 A comprehensive guide to taking screenshots in Linux using gnome-screenshot.md
+++ /dev/null
@@ -1,292 +0,0 @@
-A comprehensive guide to taking screenshots in Linux using gnome-screenshot
-============================================================
-
-### On this page
-
-1. [About Gnome-screenshot][13]
-2. [Gnome-screenshot Installation][14]
-3. [Gnome-screenshot Usage/Features][15]
- 1. [Capturing current active window][1]
- 2. [Window border][2]
- 3. [Adding effects to window borders][3]
- 4. [Screenshot of a particular area][4]
- 5. [Include mouse pointer in snapshot][5]
- 6. [Delay in taking screenshots][6]
- 7. [Run the tool in interactive mode][7]
- 8. [Directly save your screenshot][8]
- 9. [Copy to clipboard][9]
- 10. [Screenshot in case of multiple displays][10]
- 11. [Automate the screen grabbing process][11]
- 12. [Getting help][12]
-4. [Conclusion][16]
-
-There are several screenshot-taking tools available, but most of them are GUI based. If you spend time working on the Linux command line, and are looking for a good, feature-rich command line-based screen grabbing tool, you may want to try out [gnome-screenshot][17]. In this tutorial, I will explain this utility using easy-to-understand examples.
-
-Please note that all the examples mentioned in this tutorial have been tested on Ubuntu 16.04 LTS, and the gnome-screenshot version we have used is 3.18.0.
-
-### About Gnome-screenshot
-
-Gnome-screenshot is a GNOME tool which - as the name suggests - is used for capturing the entire screen, a particular application window, or any other user defined area. The tool provides several other features, including the ability to apply beautifying effects to borders of captured screenshots.
-
-### Gnome-screenshot Installation
-
-The gnome-screenshot tool is pre-installed on Ubuntu systems, but if for some reason you need to install the utility, you can do that using the following command:
-
-sudo apt-get install gnome-screenshot
-
-Once the tool is installed, you can launch it by using following command:
-
-gnome-screenshot
-
-### Gnome-screenshot Usage/Features
-
-In this section, we will discuss how the gnome-screenshot tool can be used and what features it provides.
-
-By default, when the tool is run without any command line options, it captures the complete screen.
-
-[
- ![Starting Gnome Screenshot](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/gnome-default.png)
-][18]
-
-### Capturing current active window
-
-If you want, you can limit the screenshot to the current active window by using the -w option.
-
-gnome-screenshot -w
-
-[
- ![Capturing current active window](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/activewindow.png)
-][19]
-
-### Window border
-
-By default, the utility includes the border of the window it captures, although there's also a specific command line option -b that enables this feature (in case you want to use it somewhere). Here's how it can be used:
-
-gnome-screenshot -wb
-
-Of course, you need to use the -w option with -b so that the captured area is the current active window (otherwise, -b will have no effect).
-
-Moving on and more importantly, you can also remove the border of the window if you want. This can be done using the -B command line option. Following is an example of how you can use this option:
-
-gnome-screenshot -wB
-
-Here is an example snapshot:
-
-[
- ![Window border](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/removeborder.png)
-][20]
-
-### Adding effects to window borders
-
-With the help of the gnome-screenshot tool, you can also add various effects to window borders. This can be done using the --border-effect option.
-
-You can add any of the effects provided by the utility such as 'shadow' effect (which adds drop shadow to the window), 'border' effect (adds rectangular space around the screenshot), and 'vintage' effect (desaturating the screenshot slightly, tinting it and adding rectangular space around it).
-
-gnome-screenshot --border-effect=[EFFECT]
-
-For example, to add the shadow effect, run the following command
-
-gnome-screenshot --border-effect=shadow
-
-Here is an example snapshot of the shadow effect:
-
-[
- ![Adding effects to window borders](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/shadoweffect-new.png)
-][21]
-
-Please note that the above screenshot focuses on a corner of the terminal to give you a clear view of the shadow effect.
-
-### Screenshot of a particular area
-
-If you want, you can also capture a particular area of your computer screen using the gnome-screenshot utility. This can be done by using the -a command line option.
-
-gnome-screenshot -a
-
-When the above command is run, your mouse pointer will change into a ‘+’ sign. In this mode, you can grab a particular area of your screen by moving the mouse with left-click pressed.
-
-Here is an example screenshot wherein I cropped a small area of my terminal window.
-
-[
- ![example screenshot wherein I cropped a small area of my terminal window](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/area.png)
-][22]
-
-### Include mouse pointer in snapshot
-
-By default, whenever you take a screenshot using this tool, it doesn’t include the mouse pointer. However, the utility allows you to include the pointer, something which you can do using the -p command line option.
-
-gnome-screenshot -p
-
-Here is an example snapshot:
-
-[
- ![Include mouse pointer in snapshot](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/includecursor.png)
-][23]
-
-### Delay in taking screenshots
-
-You can also introduce a time delay while taking screenshots. For this, you have to assign a value in seconds to the --delay option.
-
-gnome-screenshot --delay=[SECONDS]
-
-For example:
-
-gnome-screenshot --delay=5
-
-Here is an example screenshot:
-
-[
- ![Delay in taking screenshots](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/delay.png)
-][24]
-
-### Run the tool in interactive mode
-
-The tool also allows you to access all its features using a single option, which is -i. Using this command line option, the user can select one or more of the tool’s features at run time.
-
-$ gnome-screenshot -i
-
-Here is an example screenshot:
-
-[
- ![Run the tool in interactive mode](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/interactive.png)
-][25]
-
-As you can see in the snapshot above, the -i option provides access to many features - such as grabbing the whole screen, grabbing the current window, selecting an area to grab, delay option, effects options - all in an interactive mode.
-
-### Directly save your screenshot
-
-If you want, you can save your screenshot directly from the terminal to your present working directory, meaning you won't be asked to enter a file name after the tool is run. This feature is accessed using the --file command line option which, obviously, requires a filename to be passed to it.
-
-gnome-screenshot --file=[FILENAME]
-
-For example:
-
-gnome-screenshot --file=ashish
-
-Here is an example snapshot:
-
-[
- ![Directly save your screenshot](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/ashish.png)
-][26]
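-
-Most of these options can be combined. As an illustrative example (the filename window.png here is arbitrary), the following should grab the current active window without its border after a 3-second delay and save it straight to disk:
-
-gnome-screenshot -w -B --delay=3 --file=window.png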
-
-### Copy to clipboard
-
-The gnome-screenshot tool also allows you to copy your screenshot to the clipboard. This can be done using the -c command line option.
-
-gnome-screenshot -c
-
-[
- ![Copy to clipboard](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/copy.png)
-][27]
-
-In this mode, you can, for example, directly paste the copied screenshot in any of your image editors (such as GIMP).
-
-### Screenshot in case of multiple displays
-
-If there are multiple displays attached to your system and you want to take a snapshot of a particular one, you can use the --display command line option. This option requires the display device ID (the ID of the screen being grabbed) as its value.
-
-gnome-screenshot --display=[DISPLAY]
-
-For example:
-
-gnome-screenshot --display=VGA-0
-
-In the above example, VGA-0 is the ID of the display that I am trying to capture. To find the ID of the display that you want to capture, you can use the following command:
-
-xrandr --query
-
-To give you an idea, this command produced the following output in my case:
-
-**$ xrandr --query**
-Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
-**VGA-0** connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 194mm
-1366x768 59.8*+
-1024x768 75.1 75.0 60.0
-832x624 74.6
-800x600 75.0 60.3 56.2
-640x480 75.0 60.0
-720x400 70.1
-**HDMI-0** disconnected (normal left inverted right x axis y axis)
-
-### Automate the screen grabbing process
-
-As we have discussed earlier, the -a command line option helps us to grab a particular area of the screen. However, we have to select the area manually using the mouse. If you want, you can automate this process using gnome-screenshot, but in that case, you will have to use an external tool known as xdotool, which is capable of simulating key presses and even mouse events.
-
-For example:
-
-(gnome-screenshot -a &); sleep 0.1 && xdotool mousemove 100 100 mousedown 1 mousemove 400 400 mouseup 1
-
-The mousemove sub-command automatically positions the mouse pointer at the specified X and Y coordinates on screen (100 and 100 in the example above). The mousedown subcommand fires an event which performs the same operation as a click (since we wanted a left-click, we used the argument 1), whereas the mouseup subcommand fires an event which performs the task of a user releasing the mouse button.
-
-So, all in all, the xdotool command shown above does the same area-grabbing work that you would otherwise have to do manually with the mouse: it positions the pointer at the 100,100 coordinates on the screen and selects the enclosed area until the pointer reaches the 400,400 coordinates. The selected area is then captured by gnome-screenshot.
-
-Here is a screenshot of the above command:
-
-[
- ![screenshot of the above command](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/automatedcommand.png)
-][28]
-
-And this is the output:
-
-[
- ![Screenshot output](https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/outputxdo.png)
-][29]
-
-For more information on xdotool, head [here][30].
-
-### Getting help
-
-If you have a query, or are facing a problem related to any of the command line options, you can use the --help, -? or -h options to get related information.
-
-gnome-screenshot -h
-
-For more information on gnome-screenshot, you can go through the command’s manual page or man page.
-
-man gnome-screenshot
-
-### Conclusion
-
-I recommend that you use this utility at least once, as it's not only easy for beginners to use, but also offers a feature-rich experience for advanced usage. Go ahead and give it a try.
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/
-
-作者:[Himanshu Arora][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/
-[1]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#capturing-current-active-window
-[2]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#window-border
-[3]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#adding-effects-to-window-borders
-[4]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#screenshot-of-a-particular-area
-[5]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#include-mouse-pointer-in-snapshot
-[6]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#delay-in-taking-screenshots
-[7]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#run-the-tool-in-interactive-mode
-[8]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#directly-save-your-screenshot
-[9]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#copy-to-clipboard
-[10]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#screenshot-in-case-of-multiple-displays
-[11]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#automate-the-screen-grabbing-process
-[12]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#getting-help
-[13]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#about-gnomescreenshot
-[14]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#gnomescreenshot-installation
-[15]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#gnomescreenshot-usagefeatures
-[16]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/#conclusion
-[17]:https://linux.die.net/man/1/gnome-screenshot
-[18]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/gnome-default.png
-[19]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/activewindow.png
-[20]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/removeborder.png
-[21]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/shadoweffect-new.png
-[22]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/area.png
-[23]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/includecursor.png
-[24]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/delay.png
-[25]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/interactive.png
-[26]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/ashish.png
-[27]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/copy.png
-[28]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/automatedcommand.png
-[29]:https://www.howtoforge.com/images/taking-screenshots-in-linux-using-gnome-screenshot/big/outputxdo.png
-[30]:http://manpages.ubuntu.com/manpages/trusty/man1/xdotool.1.html
diff --git a/sources/tech/20170203 Record and Replay Terminal Session with Asciinema on Linux.md b/sources/tech/20170203 Record and Replay Terminal Session with Asciinema on Linux.md
deleted file mode 100644
index 7942aac629..0000000000
--- a/sources/tech/20170203 Record and Replay Terminal Session with Asciinema on Linux.md
+++ /dev/null
@@ -1,281 +0,0 @@
-### Record and Replay Terminal Session with Asciinema on Linux
-
-![](https://linuxconfig.org/images/asciimena-video-example.jpg?58942057)
-
-Contents
-
-* [1. Introduction][11]
- * [2. Difficulty][12]
- * [3. Conventions][13]
- * [4. Standard Repository Installation][14]
- * [4.1. Arch Linux][1]
- * [4.2. Debian][2]
- * [4.3. Ubuntu][3]
- * [4.4. Fedora][4]
- * [5. Installation From Source][15]
- * [6. Prerequisites][16]
- * [6.1. Arch Linux][5]
- * [6.2. Debian][6]
- * [6.3. Ubuntu][7]
- * [6.4. Fedora][8]
- * [6.5. CentOS][9]
- * [7. Linuxbrew Installation][17]
- * [8. Asciinema Installation][18]
- * [9. Recording Terminal Session][19]
- * [10. Replay Recorded Terminal Session][20]
- * [11. Embedding Video as HTML][21]
- * [12. Conclusion][22]
- * [13. Troubleshooting][23]
- * [13.1. asciinema needs a UTF-8][10]
-
-### Introduction
-
-Asciinema is a lightweight and very efficient alternative to the `script` terminal session recorder. It allows you to record, replay and share your JSON formatted terminal session recordings. The main advantage in comparison to desktop recorders such as Recordmydesktop, Simplescreenrecorder, Vokoscreen or Kazam is that Asciinema records all standard terminal input, output and error as plain ASCII text with ANSI escape codes.
-
-As a result, the JSON format file is minuscule in size even for a longer terminal session. Furthermore, the JSON format gives the user the ability to share the Asciinema output file via simple file transfer, on a public website as part of embedded HTML code, or on Asciinema.org using an asciinema account. Lastly, in case you have made some mistake during your terminal session, your recorded session can be retrospectively edited using any text editor, provided you know your way around ANSI escape code syntax.
-
-### Difficulty
-
-EASY
-
-### Conventions
-
-* **#** - requires the given command to be executed with root privileges, either directly as the root user or by use of the `sudo` command
-* **$** - the given command is to be executed as a regular non-privileged user
-
-### Standard Repository Installation
-
-It is very likely that asciinema is installable as part of your distribution's repository. However, if Asciinema is not available on your system, or you wish to install the latest version, you can use the Linuxbrew package manager to perform the Asciinema installation as described below in the "Installation From Source" section.
-
-### Arch Linux
-
-```
-# pacman -S asciinema
-```
-
-### Debian
-
-```
-# apt install asciinema
-```
-
-### Ubuntu
-
-```
-$ sudo apt install asciinema
-```
-
-### Fedora
-
-```
-$ sudo dnf install asciinema
-```
-
-### Installation From Source
-
-The easiest and recommended way to install the latest Asciinema version from source is by using the Linuxbrew package manager.
-
-### Prerequisites
-
-The following list of prerequisites fulfils the dependency requirements for both Linuxbrew and Asciinema.
-
-* git
-* gcc
-* make
-* ruby
-
-Before you proceed with the Linuxbrew installation, make sure that the above packages are installed on your Linux system.
-
-### Arch Linux
-
-```
-# pacman -S git gcc make ruby
-```
-
-### Debian
-
-```
-# apt install git gcc make ruby
-```
-
-### Ubuntu
-
-```
-$ sudo apt install git gcc make ruby
-```
-
-### Fedora
-
-```
-$ sudo dnf install git gcc make ruby
-```
-
-### CentOS
-
-```
-# yum install git gcc make ruby
-```
-
-### Linuxbrew Installation
-
-The Linuxbrew package manager is a fork of the popular Homebrew package manager used on Apple's MacOS operating system. Homebrew is known for its ease of use, as we will see shortly when we use Linuxbrew to install Asciinema. Run the command below to install Linuxbrew on your Linux distribution:
-```
-$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install)"
-```
-Linuxbrew is now installed under your `$HOME/.linuxbrew/`. What remains is to make it part of your executable `PATH` environment variable.
-```
-$ echo 'export PATH="$HOME/.linuxbrew/bin:$PATH"' >>~/.bash_profile
-$ . ~/.bash_profile
-```
-To confirm the Linuxbrew installation you can use `brew` command to query its version:
-```
-$ brew --version
-Homebrew 1.1.7
-Homebrew/homebrew-core (git revision 5229; last commit 2017-02-02)
-```
-
-### Asciinema Installation
-
-With Linuxbrew installed, the installation of Asciinema should be as easy as a single one-liner:
-```
-$ brew install asciinema
-```
-Check that asciinema was installed correctly:
-```
-$ asciinema --version
-asciinema 1.3.0
-```
-
-### Recording Terminal Session
-
-After all that hard work with the installation, it is finally time to have some fun. Asciinema is extremely easy to use. In fact, the current version 1.3 has only a few command line options available, and one of them is `--help`.
-
-Let's start by recording a terminal session using the `rec` option. The following command will start recording your terminal session, after which you will have an option to either discard your recording or upload it to the asciinema.org website for future reference.
-```
-$ asciinema rec
-```
-Once you run the above command, you will be notified that your asciinema recording session has started, and that the recording can be stopped by pressing the `CTRL+D` key sequence or executing the `exit` command. If you are on Debian/Ubuntu/Mint Linux, you can try this as your first asciinema recording:
-```
-$ su
-Password:
-# apt install sl
-# exit
-$ sl
-```
-Once you enter the last exit command you will be asked:
-```
-$ exit
-~ Asciicast recording finished.
-~ Press <Enter> to upload, <Ctrl-C> to cancel.
-
-https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4
-```
-If you do not feel like uploading your super secret kung-fu command line skills to asciinema.org, you have the option to store the Asciinema recording as a local file in JSON format. For example, the following asciinema recording will be stored as `/tmp/my_rec.json`:
-```
-$ asciinema rec /tmp/my_rec.json
-```
-Another extremely useful asciinema feature is time trimming. If you happen to be a slow typist, or perhaps you are multitasking, the time between entering and executing your commands can stretch greatly. Asciinema records your keystrokes in real time, meaning every pause you make will be reflected in the length of your resulting video. Use the `-w` option to shorten the time between your keystrokes. For example, the following command trims the time between your keystrokes to 0.2 seconds:
-```
-$ asciinema rec -w 0.2
-```
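-
-If you initially record to a local file and later decide to share it, the recording can still be pushed to asciinema.org afterwards, assuming your asciinema version ships the upload subcommand (version 1.3 does):
-```
-$ asciinema upload /tmp/my_rec.json
-```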
-
-### Replay Recorded Terminal Session
-
-There are two options to replay your recorded terminal sessions. First, play your terminal session directly from asciinema.org. This works provided that you have previously uploaded your recording to asciinema.org and have a valid URL:
-```
-$ asciinema play https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4
-```
-Alternatively, use your locally stored JSON file:
-```
-$ asciinema play /tmp/my_rec.json
-```
-Use the `wget` command to download your previously uploaded recording. Simply add `.json` to your existing URL:
-```
-$ wget -q -O steam_locomotive.json https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4.json
-$ asciinema play steam_locomotive.json
-```
-
-### Embedding Video as HTML
-
-Lastly, Asciinema also comes with a stand-alone JavaScript player, which means that it is easy to share your terminal session recordings on your website. The lines below illustrate this idea with a simple `index.html` page. First, download all the necessary parts:
-```
-$ cd /tmp/
-$ mkdir steam_locomotive
-$ cd steam_locomotive/
-$ wget -q -O steam_locomotive.json https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4.json
-$ wget -q https://github.com/asciinema/asciinema-player/releases/download/v2.4.0/asciinema-player.css
-$ wget -q https://github.com/asciinema/asciinema-player/releases/download/v2.4.0/asciinema-player.js
-```
-Next, create a new `/tmp/steam_locomotive/index.html` file with a minimal page like the following, which wires together the three files downloaded above (this follows the asciinema-player documentation for the standalone player):
-```
-<html>
-<head>
-  <link rel="stylesheet" type="text/css" href="asciinema-player.css" />
-</head>
-<body>
-  <!-- stand-alone player pointed at the downloaded recording -->
-  <asciinema-player src="steam_locomotive.json"></asciinema-player>
-  <script src="asciinema-player.js"></script>
-</body>
-</html>
-```
-Once ready, open up your web browser, hit CTRL+O and open your newly created `/tmp/steam_locomotive/index.html` file.
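-
-Note that some browsers refuse to fetch the JSON file over the file:// protocol. If the player stays blank, a quick workaround is to serve the directory over HTTP, for example with Python's built-in web server, and browse to http://localhost:8000 instead:
-```
-$ cd /tmp/steam_locomotive/
-$ python3 -m http.server 8000
-```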
-
-### Conclusion
-
-As mentioned before, the main advantage of recording your terminal sessions with the Asciinema recorder is the minuscule output file, which makes your videos extremely easy to share. The example above produced a file containing 58,472 characters, that is, 58 KB for a 22-second video session. When reviewing the output JSON file, even this number is greatly inflated, mostly because we had a Steam Locomotive rushing across our terminal. A normal terminal session of this length should produce a much smaller output file.
-
-Next time you are about to ask a question on a forum about a Linux configuration issue and are having a hard time explaining how to reproduce your problem, simply run:
-```
-$ asciinema rec
-```
-and paste the resulting URL into your forum post.
-
-### Troubleshooting
-
-### asciinema needs a UTF-8
-
-Error message:
-```
-asciinema needs a UTF-8 native locale to run. Check the output of `locale` command.
-```
-Solution:
-Generate and export a UTF-8 locale. For example:
-```
-$ localedef -c -f UTF-8 -i en_US en_US.UTF-8
-$ export LC_ALL=en_US.UTF-8
-```
-
---------------------------------------------------------------------------------
-
-via: https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux
-
-作者:[Lubos Rendek][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux
-[1]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-1-arch-linux
-[2]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-2-debian
-[3]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-3-ubuntu
-[4]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-4-fedora
-[5]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-1-arch-linux
-[6]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-2-debian
-[7]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-3-ubuntu
-[8]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-4-fedora
-[9]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-5-centos
-[10]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h13-1-asciinema-needs-a-utf-8
-[11]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h1-introduction
-[12]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h2-difficulty
-[13]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h3-conventions
-[14]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-standard-repository-installation
-[15]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h5-installation-from-source
-[16]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-prerequisites
-[17]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h7-linuxbrew-installation
-[18]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h8-asciinema-installation
-[19]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h9-recording-terminal-session
-[20]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h10-replay-recorded-terminal-session
-[21]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h11-embedding-video-as-html
-[22]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h12-conclusion
-[23]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h13-troubleshooting
diff --git a/sources/tech/20170206 Try Raspberry Pis PIXEL OS on your PC.md b/sources/tech/20170206 Try Raspberry Pis PIXEL OS on your PC.md
deleted file mode 100644
index 5f3cc73b2b..0000000000
--- a/sources/tech/20170206 Try Raspberry Pis PIXEL OS on your PC.md
+++ /dev/null
@@ -1,145 +0,0 @@
-Try Raspberry Pi's PIXEL OS on your PC
-============================================================
-
-
- ![Try Raspberry Pi's PIXEL OS on your PC](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/virtualbox_pixel_raspberrypi.jpg?itok=bEdS8qpi "Try Raspberry Pi's PIXEL OS on your PC")
-Image credits: Raspberry Pi Foundation, CC BY-SA
-
-Over the last four years, the Raspberry Pi Foundation has put a great deal of effort into optimizing Raspbian, its port of Debian, for Pi hardware, including creating new educational software, programming tools, and a nicer looking desktop.
-
-In September, we released an update that introduced PIXEL (Pi Improved Xwindows Environment, Lightweight), the Pi's new desktop environment. Just before Christmas, we released a version of the OS that runs on x86 PCs, so now you can install it on your PC, Mac, or laptop.
-
- ![Installing PIXEL](https://opensource.com/sites/default/files/pixel_0.jpg "Installing PIXEL")
-
-Of course, like many well-supported Linux distros, the OS runs really well on old hardware. Raspbian is a great way to breathe new life into that old Windows machine that you gave up on years ago.
-
-The [PIXEL ISO][13] is available for download from the Raspberry Pi website, and a bootable live DVD was given away on the front of "[The MagPi][14]" magazine.
-
- ![Welcome to PIXEL](https://opensource.com/sites/default/files/welcome-to-pixel.jpg "Welcome to PIXEL")
-
-We released Raspberry Pi's OS for PCs to remove the barrier to entry for people looking to learn computing. This release is even cheaper than buying a Raspberry Pi, because it is free and you can use it on your existing computer. PIXEL is the Linux desktop we've always wanted, and we want it to be available to everyone.
-
-### Powered by Debian
-
-Raspbian, or the x86 PIXEL distro, wouldn't be possible without its construction on top of Debian. Debian has a huge bank of amazing free and open source software, programs, games, and other tools from an apt repository. On the Raspberry Pi, you're limited to packages that are compiled to run on [ARM][15] chips. However, on the PC image, you have a much wider scope for which packages will run on your machine, because Intel chips found in PCs have much greater support.
-
- ![Debian Advanced Packaging Tool (APT) repository](https://opensource.com/sites/default/files/apt.png "Debian Advanced Packaging Tool (APT) repository")
-
-### What PIXEL contains
-
-Both Raspbian with PIXEL and Debian with PIXEL come bundled with a whole host of software. Raspbian comes with:
-
-* Programming environments for Python, Java, Scratch, Sonic Pi, Mathematica*, Node-RED, and the Sense HAT emulator
-* The LibreOffice office suite
-* Chromium (including Flash) and Epiphany web browsers
-* Minecraft: Pi edition (including a Python API)*
-* Various tools and utilities
-
-*The only programs from this list not included in the x86 version are Mathematica and Minecraft, due to licensing limitations.
-
- ![PIXEL menu](https://opensource.com/sites/default/files/pixel-menu.png "PIXEL menu")
-
-### Create a PIXEL live disk
-
-You can download the PIXEL ISO and write it to a blank DVD or a USB stick. Then you can boot your PC from the disk, and you'll see the PIXEL desktop in no time. You can browse the web, open a programming environment, or use the office suite, all without installing anything on your computer. When you're done, just take out the DVD or USB drive, shut down your computer, and when you power up your computer again, it'll boot back up into your usual OS as before.
-
-### Run PIXEL in a virtual machine
-
-One way of trying out PIXEL is to install it in a virtual machine using a tool like VirtualBox.
-
- ![PIXEL Virtualbox](https://opensource.com/sites/default/files/pixel-virtualbox.png "PIXEL Virtualbox")
-
-This allows you to try out the image without installing it, or you can just run it in a window alongside your main OS, and get access to the software and tools in PIXEL. It also means your session will persist, rather than starting from scratch every time you reboot, as you would with a live disk.
-
-### Install PIXEL on your PC
-
-If you're really ready to commit, you can wipe your old operating system and install PIXEL on your hard drive. This might be a good idea if you want to make use of an old unused laptop.
-
-### PIXEL in education
-
-Many schools use Windows on all their PCs, and have strict controls over what software can be installed on them. This makes it difficult for teachers to use the software tools and IDE (integrated development environment) necessary to teach programming skills. Even online-based programming initiatives like Scratch 2 can be blocked by overcautious network filters. In some cases, installing something like Python is simply not possible. The Raspberry Pi hardware addresses this by providing a small, cheap computer that boots from an SD card packed with educational software, which students can connect up to the monitor, mouse, and keyboard of an existing PC.
-
-However, a PIXEL live disc allows teachers to boot into a system loaded with ready-to-use programming languages and tools, all of which do not require installation permissions. At the end of the lesson, they can shut down safely, bringing the computers back to their original state. This is also a handy solution for Code Clubs, CoderDojos, youth clubs, Raspberry Jams, and more.
-
-### Remote GPIO
-
-One of the features that sets the Raspberry Pi apart from traditional desktop PCs is the presence of GPIO (General Purpose Input/Output) pins, which allow you to connect electronic components and add-on boards to devices in the real world, opening up new worlds such as hobby projects, home automation, connected devices, and the Internet of Things.
-
-One wonderful feature of the [GPIO Zero][16] Python library is the ability to control the GPIO pins of a Raspberry Pi over the network with some simple code written on your PC.
-
-
-> PC running x86 PIXEL controlling Pi's GPIO over the network using gpiozero ([Ben Nuttall, @ben_nuttall, on Twitter][6])
-
-Remote GPIO is possible from one Raspberry Pi to another or from any PC running any OS, but, of course, with PIXEL x86 you have everything you need pre-installed and it works out of the box. See Josh's [blog post][17] and refer to my [gist][18] for more information.
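-
-To give a flavor of what that looks like, here is a rough sketch in shell, assuming GPIO Zero and the pigpio client library are installed on the PC (python3-gpiozero and python3-pigpio in Debian-based repositories), the pigpio daemon is running on the Pi, and 192.168.1.3 is a placeholder for your Pi's address:
-
-```
-# Blink an LED wired to the remote Pi's GPIO17, driven entirely from the PC
-$ GPIOZERO_PIN_FACTORY=pigpio PIGPIO_ADDR=192.168.1.3 python3 -c \
-  "from gpiozero import LED; from signal import pause; led = LED(17); led.blink(); pause()"
-```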
-
-### Further guidance
-
-[Issue #53 of The MagPi][19] features some great guides for trying out and installing PIXEL, including using the live disc with a persistence drive to maintain your files and applications. You can buy a copy, or download the PDF for free. Check it out to read more.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Ben Nuttall - Ben Nuttall is the Raspberry Pi Community Manager. In addition to his work for the Raspberry Pi Foundation, he's into free software, maths, kayaking, GitHub, Adventure Time, and Futurama. Follow Ben on Twitter @ben_nuttall.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc
-
-作者:[Ben Nuttall][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/bennuttall
-[1]:https://twitter.com/ben_nuttall
-[2]:https://twitter.com/intent/tweet?in_reply_to=811511740907261952
-[3]:https://twitter.com/intent/retweet?tweet_id=811511740907261952
-[4]:https://twitter.com/intent/like?tweet_id=811511740907261952
-[5]:https://twitter.com/ben_nuttall
-[6]:https://twitter.com/ben_nuttall/status/811511740907261952
-[7]:https://twitter.com/search?q=place%3A3bc1b6cfd27ef7f6
-[8]:https://twitter.com/ben_nuttall/status/811511740907261952/photo/1
-[9]:https://twitter.com/ben_nuttall/status/811511740907261952/photo/1
-[10]:https://twitter.com/ben_nuttall/status/811511740907261952/photo/1
-[11]:https://twitter.com/ben_nuttall/status/811511740907261952/photo/1
-[12]:https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc?rate=iqVrGV3EhwRuqh68sf6Zye6Y7VSpXRCUQoZV3sg-QJM
-[13]:http://downloads.raspberrypi.org/pixel_x86/images/pixel_x86-2016-12-13/
-[14]:https://www.raspberrypi.org/magpi/issues/53/
-[15]:https://en.wikipedia.org/wiki/ARM_Holdings
-[16]:http://gpiozero.readthedocs.io/
-[17]:http://www.allaboutcode.co.uk/single-post/2016/12/21/GPIOZero-Remote-GPIO-with-PIXEL-x86
-[18]:https://gist.github.com/bennuttall/572789b0aa5fc2e7c05c7ada1bdc813e
-[19]:https://www.raspberrypi.org/magpi/issues/53/
-[20]:https://opensource.com/user/26767/feed
-[21]:https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc#comments
-[22]:https://opensource.com/users/bennuttall
diff --git a/sources/tech/20170207 5 Open Source Software Defined Networking Projects to Know.md b/sources/tech/20170207 5 Open Source Software Defined Networking Projects to Know.md
deleted file mode 100644
index c7178c612c..0000000000
--- a/sources/tech/20170207 5 Open Source Software Defined Networking Projects to Know.md
+++ /dev/null
@@ -1,67 +0,0 @@
-5 Open Source Software Defined Networking Projects to Know
-============================================================
-
-
- ![SDN](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/software-defined-networking_0.jpg?itok=FeWzZo8n "SDN")
-SDN is beginning to redefine corporate networking; here are five open source projects you should know. [Creative Commons Zero][1], Pixabay
-
-Throughout 2016, Software Defined Networking (SDN) continued to rapidly evolve and gain maturity. We are now beyond the conceptual phase of open source networking, and the companies that were assessing the potential of these projects two years ago have begun enterprise deployments. As has been predicted for several years, SDN is beginning to redefine corporate networking.
-
-Market researchers are essentially unanimous on the topic. IDC published [a study][3] of the SDN market earlier this year and predicted a 53.9 percent CAGR from 2014 through 2020, at which point the market will be valued at $12.5 billion. In addition, the Technology Trends 2016 report ranked SDN as the best technology investment for 2016.
-
-"Cloud computing and the 3rd Platform have driven the need for SDN, which will represent a market worth more than $12.5 billion in 2020\. Not surprisingly, the value of SDN will accrue increasingly to network-virtualization software and to SDN applications, including virtualized network and security services. Large enterprises are now realizing the value of SDN in the datacenter, but ultimately, they will also recognize its applicability across the WAN to branch offices and to the campus network," said[ Rohit Mehra][4], Vice President of Network Infrastructure at IDC.
-
-The Linux Foundation recently [announced][5] the release of its 2016 report, ["Guide to the Open Cloud: Current Trends and Open Source Projects."][6] This third annual report provides a comprehensive look at the state of open cloud computing, and includes a section on unikernels. You can [download the report][7] now, and one of the first things to notice is that it aggregates and analyzes research, illustrating how trends in containers, unikernels, and more are reshaping cloud computing. The report provides descriptions and links to categorized projects central to today’s open cloud environment.
-
-In this series, we are looking at various categories and providing extra insight on how the areas are evolving. Below, you’ll find several important SDN projects and the impact that they are having, along with links to their GitHub repositories, all gathered from the Guide to the Open Cloud:
-
-### Software-Defined Networking
-
-[ONOS][8]
-
-Open Network Operating System (ONOS), a Linux Foundation project, is a software-defined networking OS for service providers that has scalability, high availability, high performance and abstractions to create apps and services. [ONOS on GitHub][9]
-
-[OpenContrail][10]
-
-OpenContrail is Juniper Networks’ open source network virtualization platform for the cloud. It provides all the necessary components for network virtualization: SDN controller, virtual router, analytics engine, and published northbound APIs. Its REST API configures and gathers operational and analytics data from the system. [OpenContrail on GitHub][11]
-
-[OpenDaylight][12]
-
-OpenDaylight, an OpenDaylight Foundation project at The Linux Foundation, is a programmable, software-defined networking platform for service providers and enterprises. Based on a microservices architecture, it enables network services across a spectrum of hardware in multivendor environments. [OpenDaylight on GitHub][13]
-
-[Open vSwitch][14]
-
-Open vSwitch, a Linux Foundation project, is a production-quality, multilayer virtual switch. It’s designed for massive network automation through programmatic extension, while still supporting standard management interfaces and protocols including NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, and 802.1ag. It supports distribution across multiple physical servers similar to VMware’s vNetwork distributed vswitch or Cisco’s Nexus 1000V. [OVS on GitHub][15]
-
-[OPNFV][16]
-
-Open Platform for Network Functions Virtualization (OPNFV), a Linux Foundation project, is a reference NFV platform for enterprise and service provider networks. It brings together upstream components across compute, storage and network virtualization in order to create an end-to-end platform for NFV applications. [OPNFV on Bitergia][17]
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/news/open-cloud-report/2016/5-open-source-software-defined-networking-projects-know
-
-作者:[SAM DEAN][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/sam-dean
-[1]:https://www.linux.com/licenses/category/creative-commons-zero
-[2]:https://www.linux.com/files/images/software-defined-networkingjpg-0
-[3]:https://www.idc.com/getdoc.jsp?containerId=prUS41005016
-[4]:http://www.idc.com/getdoc.jsp?containerId=PRF003513
-[5]:https://www.linux.com/blog/linux-foundation-issues-2016-guide-open-source-cloud-projects
-[6]:http://ctt.marketwire.com/?release=11G120876-001&id=10172077&type=0&url=http%3A%2F%2Fgo.linuxfoundation.org%2Frd-open-cloud-report-2016-pr
-[7]:http://go.linuxfoundation.org/l/6342/2016-10-31/3krbjr
-[8]:http://onosproject.org/
-[9]:https://github.com/opennetworkinglab/onos
-[10]:http://www.opencontrail.org/
-[11]:https://github.com/Juniper/contrail-controller
-[12]:https://www.opendaylight.org/
-[13]:https://github.com/opendaylight
-[14]:http://openvswitch.org/
-[15]:https://github.com/openvswitch/ovs
-[16]:https://www.opnfv.org/
-[17]:http://projects.bitergia.com/opnfv/browser/
diff --git a/sources/tech/20170209 How to protect your server with badIPs.com and report IPs with Fail2ban on Debian.md b/sources/tech/20170209 How to protect your server with badIPs.com and report IPs with Fail2ban on Debian.md
deleted file mode 100644
index 11ebcf135e..0000000000
--- a/sources/tech/20170209 How to protect your server with badIPs.com and report IPs with Fail2ban on Debian.md
+++ /dev/null
@@ -1,226 +0,0 @@
-How to protect your server with badIPs.com and report IPs with Fail2ban on Debian
-============================================================
-
-### On this page
-
-1. [Use the badIPs list][4]
- 1. [Define your security level and category][1]
-2. [Let's create the script][5]
-3. [Report IP addresses to badIPs with Fail2ban][6]
- 1. [Fail2ban >= 0.8.12][2]
- 2. [Fail2ban < 0.8.12][3]
-4. [Statistics of your IP reporting][7]
-
-This tutorial documents the process of using the badips abuse tracker in conjunction with Fail2ban to protect your server or computer. I've tested it on a Debian 8 Jessie and Debian 7 Wheezy system.
-
-**What is badIPs?**
-
-BadIPs is a listing of IP addresses that are reported as bad, in combination with [fail2ban][8].
-
-This tutorial contains two parts, the first one will deal with the use of the list and the second will deal with the injection of data.
-
-### Use the badIPs list
-
-### Define your security level and category
-
-You can get the IP address list by simply using the REST API.
-
-When you GET this URL: [https://www.badips.com/get/categories][9], you’ll see all the different categories that are present on the service.
-
-* Second step: determine which score is right for you. Here is a quote from badIPs that should help (personally, I took score = 3):
-* If you'd like to compile statistics or use the data for some experiment, you may start with score 0.
-* If you'd like to firewall your private server or website, go with scores from 2. Maybe combine them with your own results, even if they do not have a score above 0 or 1.
-* If you're about to protect a webshop or a high-traffic, money-earning e-commerce server, we recommend using values from 3 or 4. Maybe as well combined with your own results (key / sync).
-* If you're paranoid, take 5.
-
-So now that you have your two variables, build your link by concatenating them:
-
-http://www.badips.com/get/list/{{SERVICE}}/{{LEVEL}}
-
-Note: Like me, you can take all the services; in that case, change the name of the service to "any".
-
-The resulting URL is:
-
-https://www.badips.com/get/list/any/3
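-
-Before wiring this into a script, you can preview the list straight from the shell; the output is plain IP addresses, one per line:
-
-wget -qO- https://www.badips.com/get/list/any/3 | head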
-
-### Let's create the script
-
-Alright, when that’s done, we’ll create a simple script.
-
-1. Put our list in a temporary file.
-2. (only once) Create a chain in iptables.
-3. Flush all the data linked to our chain (old entries).
-4. Link each IP to our new chain.
-5. When that’s done, hook the chain into INPUT / OUTPUT / FORWARD so that everything matching it is blocked.
-6. Remove our temp file.
-
-Now we'll create the script:
-
-cd /home//
-vi myBlacklist.sh
-
-Enter the following content into that file.
-
-```
-#!/bin/sh
-# based on this version http://www.timokorthals.de/?p=334
-# adapted by Stéphane T.
-
-_ipt=/sbin/iptables # Location of iptables (adjust if it differs on your system)
-_input=badips.db # Name of database (will be downloaded with this name)
-_pub_if=eth0 # Device which is connected to the internet (run $ ifconfig to check)
-_droplist=droplist # Name of chain in iptables (only change this if you already have a chain with this name)
-_level=3 # Block level: not so bad/false report (0) over confirmed bad (3) to quite aggressive (5) (see www.badips.com for that)
-_service=any # Logged service (see www.badips.com for that)
-
-# Get the bad IPs
-wget -qO- http://www.badips.com/get/list/${_service}/$_level > $_input || { echo "$0: Unable to download ip list."; exit 1; }
-
-### Setup our black list ###
-# First flush it
-$_ipt --flush $_droplist
-
-# Create a new chain
-# Uncomment the next line on the first run
-# $_ipt -N $_droplist
-
-# Filter out comments and blank lines
-# store each ip in $ip
-for ip in `cat $_input`
-do
-# Append everything to $_droplist
-$_ipt -A $_droplist -i ${_pub_if} -s $ip -j LOG --log-prefix "Drop Bad IP List "
-$_ipt -A $_droplist -i ${_pub_if} -s $ip -j DROP
-done
-
-# Finally, insert or append our black list
-$_ipt -I INPUT -j $_droplist
-$_ipt -I OUTPUT -j $_droplist
-$_ipt -I FORWARD -j $_droplist
-
-# Delete your temp file
-rm $_input
-exit 0
-```
-
-When that’s done, you should create a cronjob that will update our blacklist.
-
-For this, I used crontab, and I run the script every day at 11:30 PM (just before my delayed backup).
-
-crontab -e
-
-```
-30 23 * * * /home//myBlacklist.sh #Block BAD IPS
-```
-
-Don’t forget to chmod your script:
-
-chmod +x myBlacklist.sh
-
-Now that’s done, your server/computer should be a little bit safer.
-
-You can also run the script manually like this:
-
-cd /home//
-./myBlacklist.sh
-
-It may take some time, so don’t interrupt the script. In fact, the actual blocking only takes effect in its last lines, when the chain is hooked into INPUT / OUTPUT / FORWARD.
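-
-To confirm that the rules were actually loaded, you can list the chain and count its entries (droplist being the chain name the script uses by default):
-
-iptables -nL droplist | head
-iptables -S droplist | wc -l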
-
-### Report IP addresses to badIPs with Fail2ban
-
-In the second part of this tutorial, I will show you how to report bad IP addresses back to the badips.com website by using Fail2ban.
-
-### Fail2ban >= 0.8.12
-
-The reporting is made with Fail2ban. Depending on your Fail2ban version, you must use the first or the second section of this chapter. This section applies if you have Fail2ban version 0.8.12 or later; check your version with:
-
-fail2ban-server --version
-
-In each jail whose category you want to report, simply add the badips action:
-
-```
-[ssh]
- enabled = true
- action = iptables-multiport
- badips[category=ssh]
- port = ssh
- filter = sshd
- logpath = /var/log/auth.log
- maxretry= 6
-```
-
-As you can see, the category is SSH; take a look here ([https://www.badips.com/get/categories][11]) to find the correct category.
-
-### Fail2ban < 0.8.12
-
-If the version is less recent than 0.8.12, you’ll have to create the action yourself. It can be downloaded here: [https://www.badips.com/asset/fail2ban/badips.conf][12].
-
-wget https://www.badips.com/asset/fail2ban/badips.conf -O /etc/fail2ban/action.d/badips.conf
-
-With the badips.conf from above, you can either activate per category as above or you can enable it globally:
-
-cd /etc/fail2ban/
-vi jail.conf
-
-```
-[DEFAULT]
-
-...
-
-banaction = iptables-multiport
- badips
-```
-
-Now restart fail2ban - it should start reporting from now on.
-
-service fail2ban restart
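-
-To verify that the jail is up and the badips action is attached, you can query Fail2ban with its client (the ssh jail name matches the example above):
-
-fail2ban-client status
-fail2ban-client status ssh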
-
-### Statistics of your IP reporting
-
-As a last, optional step, you can create a key. This is useful if you want to see statistics about the data you have reported.
-Just copy/paste the command below, and a JSON response will appear on your console:
-
-wget https://www.badips.com/get/key -qO -
-
-```
-{
- "err":"",
- "suc":"new key 5f72253b673eb49fc64dd34439531b5cca05327f has been set.",
- "key":"5f72253b673eb49fc64dd34439531b5cca05327f"
-}
-```
-
-Then go to the [badips][13] website, enter your key, and click “statistics”.
-
-Here we go… all your stats by category.
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/
-
-作者:[Stephane T][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/
-[1]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#define-your-security-level-and-category
-[2]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#failban-gt-
-[3]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#failban-ltnbsp
-[4]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#use-the-badips-list
-[5]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#lets-create-the-script
-[6]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#report-ip-addresses-to-badips-with-failban
-[7]:https://www.howtoforge.com/tutorial/protect-your-server-computer-with-badips-and-fail2ban/#statistics-of-your-ip-reporting
-[8]:http://www.fail2ban.org/
-[9]:https://www.badips.com/get/categories
-[10]:http://www.timokorthals.de/?p=334
-[11]:https://www.badips.com/get/categories
-[12]:https://www.badips.com/asset/fail2ban/badips.conf
-[13]:https://www.badips.com/
diff --git a/sources/tech/20170210 Use tmux for a more powerful terminal.md b/sources/tech/20170210 Use tmux for a more powerful terminal.md
deleted file mode 100644
index 062dc022f3..0000000000
--- a/sources/tech/20170210 Use tmux for a more powerful terminal.md
+++ /dev/null
@@ -1,129 +0,0 @@
-translating by Flowsnow
-
-# [Use tmux for a more powerful terminal][3]
-
-
- ![](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/tmux-945x400.jpg)
-
-Some Fedora users spend most or all their time at a [command line][4] terminal. The terminal gives you access to your whole system, as well as thousands of powerful utilities. However, it only shows you one command line session at a time by default. Even with a large terminal window, the entire window only shows one session. This wastes space, especially on large monitors and high resolution laptop screens. But what if you could break up that terminal into multiple sessions? This is precisely where _tmux_ is handy — some say indispensable.
-
-### Install and start _tmux_
-
-The _tmux_ utility gets its name from being a terminal muxer, or multiplexer. In other words, it can break your single terminal session into multiple sessions. It manages both _windows_ and _panes_ :
-
-* A _window_ is a single view — that is, an assortment of things shown in your terminal.
-* A _pane_ is one part of that view, often a terminal session.
-
-To get started, install the _tmux_ utility on your system. You’ll need to have _sudo_ setup for your user account ([check out this article][5] for instructions if needed).
-
-```
-sudo dnf -y install tmux
-```
-
-Run the utility to get started:
-
-tmux
-
-### The status bar
-
-At first, it might seem like nothing happens, other than a status bar that appears at the bottom of the terminal:
-
- ![Start of tmux session](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-41.png)
-
-The bottom bar shows you:
-
-* _[0]_ – You’re in the first session that was created by the _tmux_ server. Numbering starts with 0. The server tracks all sessions whether they’re still alive or not.
-* _0:username@host:~_ – Information about the first window of that session. Numbering starts with 0. The terminal in the active pane of the window is owned by _username_ at hostname _host_. The current directory is _~_ (the home directory).
-* _*_ – Shows that you’re currently in this window.
-* _“hostname”_ – The hostname of the _tmux_ server you’re using.
-* Also, the date and time on that particular host is shown.
-
-The information bar will change as you add more windows and panes to the session.
-
-### Basics of tmux
-
-Stretch your terminal window to make it much larger. Now let’s experiment with a few simple commands to create additional panes. All commands by default start with _Ctrl+b_ .
-
-* Hit _Ctrl+b, “_ to split the current single pane horizontally. Now you have two command line panes in the window, one on top and one on bottom. Notice that the new bottom pane is your active pane.
-* Hit _Ctrl+b, %_ to split the current pane vertically. Now you have three command line panes in the window. The new bottom right pane is your active pane.
-
- ![tmux window with three panes](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-59.png)
-
-Notice the highlighted border around your current pane. To navigate around panes, do any of the following:
-
-* Hit _Ctrl+b _ and then an arrow key.
-* Hit _Ctrl+b, q_ . Numbers appear on the panes briefly. During this time, you can hit the number for the pane you want.
-
-Now, try using the panes to run different commands. For instance, try this:
-
-* Use _ls_ to show directory contents in the top pane.
-* Start _vi_ in the bottom left pane to edit a text file.
-* Run _top_ in the bottom right pane to monitor processes on your system.
-
-The display will look something like this:
-
- ![tmux session with three panes running different commands](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-57-51.png)
-
-So far, this example has only used one window with multiple panes. You can also run multiple windows in your session.
-
-* To create a new window, hit _Ctrl+b, c._ Notice that the status bar now shows two windows running. (Keen readers will see this in the screenshot above.)
-* To move to the previous window, hit _Ctrl+b, p._
-* If you want to move to the next window, hit _Ctrl+b, n_ .
-* To immediately move to a specific window (0-9), hit _Ctrl+b_ followed by the window number.
-
-If you’re wondering how to close a pane, simply quit that specific command line shell using _exit_ , _logout_ , or _Ctrl+d._ Once you close all panes in a window, that window disappears as well.
-
-### Detaching and attaching
-
-One of the most powerful features of _tmux_ is the ability to detach and reattach to a session. You can leave your windows and panes running when you detach. Moreover, you can even logout of the system entirely. Then later you can login to the same system, reattach to the _tmux_ session, and see all your windows and panes where you left them. The commands you were running stay running while you’re detached.
-
-To detach from a session, hit _Ctrl+b, d._ The session disappears and you’ll be back at the standard single shell. To reattach to the session, use this command:
-
-```
-tmux attach-session
-```
-
-This function is also a lifesaver when your network connection to a host is shaky. If your connection fails, all the processes in the session will stay running. Once your connection is back up, you can resume your work as if nothing happened.
-
-And if that weren’t enough, on top of multiple windows and panes per session, you can also run multiple sessions. You can list these and then attach to the correct one by number or name:
-
-```
-tmux list-sessions
-```
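-
-Sessions can also be given names when they are created, which makes reattaching less error-prone than juggling numbers. For example:
-
-```
-tmux new-session -s work
-tmux attach-session -t work
-```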
-
-### Further reading
-
-This article only scratches the surface of _tmux_’s capabilities. You can manipulate your sessions in many other ways:
-
-* Swap one pane with another
-* Move a pane to another window (in the same or a different session!)
-* Set keybindings that perform your favorite commands automatically
-* Configure a _~/.tmux.conf_ file with your favorite settings by default so each new session looks the way you like (see the example sketch right after this list)
-
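-As a starting point, here is a minimal _~/.tmux.conf_ sketch; exact option names vary between tmux versions (the single _mouse_ option, for instance, requires tmux 2.1 or later):
-
-```
-# Number windows and panes from 1 instead of 0
-set -g base-index 1
-# Let the mouse select and resize panes (tmux >= 2.1)
-set -g mouse on
-# Split panes with | (side by side) and - (stacked)
-bind | split-window -h
-bind - split-window -v
-```
-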
-For a full explanation of all commands, check out these references:
-
-* The official [manual page][1]
-* This [eBook][2] all about _tmux_
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Paul W. Frields has been a Linux user and enthusiast since 1997, and joined the Fedora Project in 2003, shortly after launch. He was a founding member of the Fedora Project Board, and has worked on documentation, website publishing, advocacy, toolchain development, and maintaining software. He joined Red Hat as Fedora Project Leader from February 2008 to July 2010, and remains with Red Hat as an engineering manager. He currently lives with his wife and two children in Virginia.
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
-
-作者:[Paul W. Frields][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/author/pfrields/
-[1]: http://man.openbsd.org/OpenBSD-current/man1/tmux.1
-[2]: https://pragprog.com/book/bhtmux2/tmux-2
-[3]: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
-[4]: http://www.cryptonomicon.com/beginning.html
-[5]: https://fedoramagazine.org/howto-use-sudo/
\ No newline at end of file
diff --git a/sources/tech/20170211 Docker swarm mode - Adding worker nodes tutorial.md b/sources/tech/20170211 Docker swarm mode - Adding worker nodes tutorial.md
deleted file mode 100644
index a229be2738..0000000000
--- a/sources/tech/20170211 Docker swarm mode - Adding worker nodes tutorial.md
+++ /dev/null
@@ -1,149 +0,0 @@
-# Docker swarm mode - Adding worker nodes tutorial
-
-Let us expand on what we started with CentOS 7.2 several weeks ago. In this [guide][1], we learned how to initiate and start the native clustering and orchestration functionality built into Docker 1.12. But we only had our manager node and no other workers. Today, we will expand this.
-
-I will show you how to add non-symmetrical nodes into the swarm, i.e. a [Fedora 24][2] that will sit alongside our CentOS box, and they will both participate in the cluster, with all the associated fancy loadbalancing and whatnot. Of course, this will not be trivial, and we will encounter some snags, and so it ought to be quite interesting. After me.
-
- ![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-teaser-more.png)
-
-### Prerequisites
-
-There are several things we need to do before we can successfully join additional nodes into the swarm. One, ideally, all nodes should be running the same version of Docker, and it should be at least 1.12 in order to support native orchestration. Like CentOS, Fedora does not have the latest build in its repo, so you will need to [add and install][3] the right software version, either manually or using the Docker repository, and then fix a few dependency conflicts. I have shown you how to do this with CentOS, and the exercise is identical.
-
-Moreover, all your nodes will need to be able to communicate with one another. There will have to be routing and firewall rules in place so that the managers and workers can talk among themselves. Otherwise, you will not be able to join nodes into the swarm. The easiest way to work around problems is to temporarily flush the firewall rules (iptables -F), but this may impair your security. Make sure you fully understand what you're doing, and that you create the right rules for your nodes and ports. A typical timeout error looks like this:
-
-```
-Error response from daemon: Timeout was reached before node was joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
-```
-
-You need to have the same Docker images available on your hosts. In our previous tutorial, we created an Apache image, and you will need to do the same on your worker nodes, or distribute the created images. If you do not do that, you will encounter errors. If you need help setting up Docker, please read my [intro guide][4] and the [networking tutorial][5].
-
-```
-7vwdxioopmmfp3amlm0ulimcu \_ websky.11 my-apache2:latest
-localhost.localdomain Shutdown Rejected 7 minutes ago
-"No such image: my-apache2:lat&"
-```
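-
-If you built the image locally, one quick way to copy it to another node is to pipe docker save into docker load over SSH; pushing the image to a registry works just as well. A sketch only, with a placeholder hostname:
-
-```
-# docker save my-apache2:latest | ssh root@worker-node docker load   # worker-node is a placeholder
-```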
-
-### Let's get started
-
-So we have our CentOS box up and running, and it's spawning containers successfully. You are able to connect to the services using host ports, and everything looks peachy. At the moment, your swarm only has the manager.
-
- ![Manager](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-manager.png)
-
-### Join workers
-
-To add new nodes, you will need to use the join command. But first you need to discover the token, IP address and port that the worker nodes must provide to authenticate correctly against the swarm manager. On the manager node, execute:
-
-```
-[root@localhost ~]# docker swarm join-token worker
-To add a worker to this swarm, run the following command:
-
-docker swarm join \
---token SWMTKN-1-0xvojvlza90nrbihu6gfu3qm34ari7lwnza ... \
-192.168.2.100:2377
-```
-
-If you do not fix the firewall and routing rules, you will get timeout errors. If you've already joined the swarm, repeating the join command will create its own noise:
-
-```
-Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
-```
-
-If ever in doubt, you can leave the swarm and then try again:
-
-```
-[root@localhost ~]# docker swarm leave
-Node left the swarm.
-
-docker swarm join --token
-SWMTKN-1-0xvojvlza90nrbihu6gfu3qnza4 ... 192.168.2.100:2377
-This node joined a swarm as a worker.
-```
-
-On the worker node, you can use docker info to check the status:
-
-```
-Swarm: active
-NodeID: 2i27v3ce9qs2aq33nofaon20k
-Is Manager: false
-Node Address: 192.168.2.103
-```
-
-Likewise, on the manager:
-
-```
-Swarm: active
-NodeID: cneayene32jsb0t2inwfg5t5q
-Is Manager: true
-ClusterID: 8degfhtsi7xxucvi6dxvlx1n4
-Managers: 1
-Nodes: 3
-Orchestration:
-Task History Retention Limit: 5
-Raft:
-Snapshot Interval: 10000
-Heartbeat Tick: 1
-Election Tick: 3
-Dispatcher:
-Heartbeat Period: 5 seconds
-CA Configuration:
-Expiry Duration: 3 months
-Node Address: 192.168.2.100
-```
-
-### Create or scale services
-
-Now, we need to see if and how Docker distributes the containers between the nodes. My testing shows a rather simplistic balancing algorithm under very light load. Once or twice, Docker did not try to re-distribute running services to new workers, even after I tried to scale and update them. Likewise, on one occasion, it created a new service entirely on the worker node. Maybe it was the best choice.
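-
-For reference, the scaling and inspection steps mentioned above boil down to a few commands; the replica count here is arbitrary, and websky is the example service name from the task listing earlier:
-
-```
-# docker service scale websky=6   # change the number of replicas
-# docker service ls               # check replica counts per service
-# docker service ps websky        # see which node each task landed on
-```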
-
- ![Scale service](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-scale-service.png)
-
- ![Service ls](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-service-list.png)
-
- ![Services ls, more](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-service-list-more.png)
-
- ![New service](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-new-service.png)
-
-New service created entirely on the worker node.
-
-After a while, there was some re-distribution of containers for existing services between the two, but it took some time. New services worked fine. This is an early observation only, so I cannot say much more at this point. For now, this is a good starting point to begin exploring and tweaking.
-
- ![Service distributed](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-distributed.png)
-
-Load balancing kicks in after a while.
-
-### Conclusion
-
-Docker is a neat little beast, and it will only continue to grow bigger, more complex, more powerful, and of course, more elegant. It is only a matter of time before it gets eaten by a big, juicy enterprise. When it comes to its native orchestration, the swarm mode works quite well, but it takes more than just a few containers to fully tap into the power of its algorithms and scalability.
-
-My tutorial shows how to add a Fedora node to a cluster run by a CentOS box, and the two worked fine side by side. There are some questions around the load balancing, but this is something I will explore in future articles. All in all, I hope this was a worthwhile lesson. We've tackled some prerequisites and common problems that you might encounter when trying to set up a swarm, we fired up a bunch of containers, and we even briefly touched on how to scale and distribute the services. And remember, 'tis just the beginning.
-
-Cheers.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
-
-From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
-
-I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
-
-Please see my full list of open-source projects, publications and patents, just scroll down.
-
-For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-
-
--------------
-
-
-via: http://www.dedoimedo.com/computers/docker-swarm-adding-worker-nodes.html
-
-作者:[Igor Ljubuncic][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.dedoimedo.com/faq.html
-[1]:http://www.dedoimedo.com/computers/docker-swarm-intro.html
-[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
-[3]:http://www.dedoimedo.com/computers/docker-centos-upgrade-latest.html
-[4]:http://www.dedoimedo.com/computers/docker-guide.html
-[5]:http://www.dedoimedo.com/computers/docker-networking.html
diff --git a/sources/tech/20170213 Recover from a badly corrupt Linux EFI installation.md b/sources/tech/20170213 Recover from a badly corrupt Linux EFI installation.md
deleted file mode 100644
index db61d3f87b..0000000000
--- a/sources/tech/20170213 Recover from a badly corrupt Linux EFI installation.md
+++ /dev/null
@@ -1,112 +0,0 @@
-ictlyh Translating
-# Recover from a badly corrupt Linux EFI installation
-
-In the past decade or so, Linux distributions would occasionally fail before, during and after the installation, but I was always able to somehow recover the system and continue working normally. Well, [Solus][1] broke my laptop. Literally.
-
-GRUB rescue. No luck. Reinstall. No luck still! Ubuntu refused to install, complaining about the target device not being this or that. Wow. Something like this has never happened to me before. Effectively my test machine had become a useless brick. Should we despair? No, absolutely not. Let me show you how you can fix it.
-
-### Problem in more detail
-
-It all started with Solus trying to install its own bootloader - goofiboot. No idea what, who or why, but it failed to complete successfully, and I was left with a system that would not boot. After BIOS, I would get a GRUB rescue shell.
-
- ![Installation failed](http://www.dedoimedo.com/images/computers-years/2016-2/solus-installation-failed.png)
-
-I tried manually working in the rescue shell, using this and that command, very similar to what I have outlined in my extensive [GRUB2 tutorial][2]. This did not really work. My next attempt was to recover from a live CD, again following my own advice, as I have outlined in my [GRUB2 & EFI tutorial][3]. I set up a new entry, and made sure to mark it active with the efibootmgr utility. Just as we did in the guide, and this has served us well before. Alas, this recovery method did not work, either.
-
-I tried to perform a complete Ubuntu installation, into the same partition used by Solus, expecting the installer to sort out some of the fine details. But Ubuntu was not able to finish the install. It complained about: failed to install into /target. This was a first. What now?
-
-### Manually clean up EFI partition
-
-Obviously, something is very wrong with our EFI partition. Just to briefly recap, if you are using UEFI, then you must have a separate FAT32-formatted partition. This partition is used to store EFI boot images. For instance, when you install Fedora, the Fedora boot image will be copied into the EFI subdirectory. Every operating system is stored in a folder of its own, e.g. /boot/efi/EFI/<distro>/.
-
- ![EFI partition contents](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-efi-partition-contents.png)
-
-On my [G50][4] machine, there were multiple entries, from a variety of my distro tests, including: centos, debian, fedora, mx-15, suse, ubuntu, zorin, and many others. There was also a goofiboot folder. However, the efibootmgr was not showing a goofiboot entry in its menu. There was obviously something wrong with the whole thing.
-
-```
-sudo efibootmgr -d /dev/sda
-BootCurrent: 0001
-Timeout: 0 seconds
-BootOrder: 0001,0005,2003,0000,2001,2002
-Boot0000* Lenovo Recovery System
-Boot0001* ubuntu
-Boot0003* EFI Network 0 for IPv4 (68-F7-28-4D-D1-A1)
-Boot0004* EFI Network 0 for IPv6 (68-F7-28-4D-D1-A1)
-Boot0005* Windows Boot Manager
-Boot0006* fedora
-Boot0007* suse
-Boot0008* debian
-Boot0009* mx-15
-Boot2001* EFI USB Device
-Boot2002* EFI DVD/CDROM
-Boot2003* EFI Network
-...
-```
-
-P.S. The output above was generated by running the command in a LIVE session!
-
-I decided to clean up all the non-default and non-Microsoft entries and start fresh. Obviously, something was corrupt, and preventing new distros from setting up their own bootloader. So I deleted all the folders in the /boot/efi/EFI partition except Boot and Windows. And then, I also updated the boot manager by removing all the extras.
-
-```
-efibootmgr -b <entry number> -B
-```
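-
-For example, based on the listing above, removing the stale fedora entry (Boot0006) would look like this; double-check the entry number on your own system before deleting anything:
-
-```
-efibootmgr -b 0006 -B
-```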
-
-Lastly, I reinstalled Ubuntu and closely monitored the progress of the GRUB installation and setup. This time, things completed fine. There were some errors with several invalid entries, as can be expected, but the whole sequence completed just fine.
-
- ![Install errors](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-errors.jpg)
-
- ![Install successful](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-successful.jpg)
-
-### More reading
-
-If you don't fancy this manual fix, you may want to read:
-
-* [Boot-Info][5] page, with automated tools to help you recover your system
-* [Boot-repair-cd][6] automatic repair tool download page
-
-### Conclusion
-
-If you ever encounter a situation where your system is badly botched due to an EFI partition clobbering, then you may want to follow the advice in this guide. Delete all non-default entries. Make sure you do not touch anything Microsoft, if you're multi-booting with Windows. Then update the boot menu accordingly so the baddies are removed. Rerun the installation setup for your desired distro, or try to fix with a less stringent method as explained before.
-
-I hope this little article saves you some bacon. I was quite annoyed by what Solus did to my system. This is not something that should happen, and the recovery ought to be simpler. However, while things may seem dreadful, the fix is not difficult. You just need to delete the corrupt files and start again. Your data should not be affected, and you will be able to promptly boot into a running system and continue working. There you go.
-
-Cheers.
-
---------------------------------------------------------------------------------
-
-
-作者简介:
-
-My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
-
-From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
-
-I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
-
-Please see my full list of open-source projects, publications and patents, just scroll down.
-
-For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-
-
--------------
-
-
-via: http://www.dedoimedo.com/computers/grub2-efi-corrupt-part-recovery.html
-
-作者:[Igor Ljubuncic][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.dedoimedo.com/faq.html
-
-[1]:http://www.dedoimedo.com/computers/solus-1-2-review.html
-[2]:http://www.dedoimedo.com/computers/grub-2.html
-[3]:http://www.dedoimedo.com/computers/grub2-efi-recovery.html
-[4]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
-[5]:https://help.ubuntu.com/community/Boot-Info
-[6]:https://sourceforge.net/projects/boot-repair-cd/
diff --git a/sources/tech/20170214 Basics of network protocol analyzer Wireshark On Linux.md b/sources/tech/20170214 Basics of network protocol analyzer Wireshark On Linux.md
deleted file mode 100644
index 1cb7ab9528..0000000000
--- a/sources/tech/20170214 Basics of network protocol analyzer Wireshark On Linux.md
+++ /dev/null
@@ -1,130 +0,0 @@
-wcnnbdk1 translating
-### Basics of network protocol analyzer Wireshark On Linux
-
-
-Contents
-
-* [1. Installation][4]
-* [2. Basic Configuration][5]
-    * [2.1. Layout][1]
-    * [2.2. Toolbars][2]
-    * [2.3. Functionality][3]
-* [3. Capture][6]
-* [4. Reading Data][7]
-* [5. Closing Thoughts][8]
-
-Wireshark is just one of the valuable tools provided by Kali Linux. Like the others, it can be used for either positive or negative purposes. Of course, this guide will cover monitoring _your own_ network traffic to detect any potentially unwanted activity.
-
-Wireshark is incredibly powerful, and it can appear daunting at first, but it serves the single purpose of monitoring network traffic, and all of those many options that it makes available only serve to enhance its monitoring ability.
-
-### Installation
-
-Kali ships with Wireshark. However, the `wireshark-gtk` package provides a nicer interface that makes working with Wireshark a much friendlier experience. So, the first step in using Wireshark is installing the `wireshark-gtk` package.
-```
-# apt install wireshark-gtk
-```
-Don't worry if you're running Kali on a live medium. It'll still work.
-
-### Basic Configuration
-
-Before you do anything else, it's probably best to set Wireshark up the way you will be most comfortable using it. Wireshark offers a number of different layouts as well as options that configure the program's behavior. Despite their numbers, using them is fairly straightforward.
-
-Start out by opening Wireshark-gtk. Make sure it is the GTK version. They are listed separately by Kali.
-
-
- ![Wireshark running on Kali](https://linuxconfig.org/images/wireshark-start.jpg?58a2b879)
-
-
-### Layout
-
-By default, Wireshark has three sections stacked on top of one another. The top section is the list of packets. The middle section is the packet details. The bottom section contains the raw packet bytes. For most uses, the top two are much more useful than the last, but the raw bytes can still be valuable information for more advanced users.
-
-The sections can be expanded and contracted, but that stacked layout isn't for everyone. You can alter it in Wireshark's "Preferences" menu. To get there, click on "Edit" then "Preferences..." at the bottom of the drop down. That will open up a new window with more options. Click on "Layout" under "User Interface" on the side menu.
-
-
- ![Wireshark's layout configuration](https://linuxconfig.org/images/wireshark-layouts.jpg?58a2b879)
-
-
-You will now see different available layout options. The illustrations across the top allow you to select the positioning of the different panes, and the radio button selectors allow you to select the data that will go in each pane.
-
-The tab below, labelled "Columns," allows you to select which columns will be displayed by Wireshark in the list of packets. Select only the ones with the data you need, or leave them all checked.
-
-### Toolbars
-
-There isn't too much that you can do with the toolbars in Wireshark, but if you want to customize them, you can find some useful settings on the same "Layout" menu as the pane arrangement tools in the last section. There are toolbar options directly below the pane options that allow you to change how the toolbars and toolbar items are displayed.
-
-You can also customize which toolbars are displayed under the "View" menu by checking and unchecking them.
-
-### Functionality
-
-The majority of the controls for altering how Wireshark captures packets can be found under "Capture" in "Options."
-
-The top "Capture" section of the window allows you to select which networking interfaces Wireshark should monitor. This could differ greatly depending on your system and how it's configured. Just be sure to check the right boxes to get the right data. Virtual machines and their accompanying networks will show up in this list. There will also be multiple options for multiple network interface cards.
-
-
- ![Wireshark's capture configuration](https://linuxconfig.org/images/wireshark-capture-config.jpg?58a2b879)
-
-
-Directly below the listing of network interfaces are two options. One allows you to select all interfaces. The other allows you to enable or disable promiscuous mode. This allows your computer to monitor the traffic of all other computers on the selected network. If you are trying to monitor your whole network, this is the option you want.
-
-**WARNING:** using promiscuous mode on a network that you do not own or have permission to monitor is illegal!
-
-On the bottom left of the screen are the "Display Options" and "Name Resolution" sections. For "Display Options," it's probably a good idea to leave all three checked. If you want to uncheck them, it's okay, but "Update list of packets in real time" should probably remain checked at all times.
-
-Under "Name Resolution" you can pick your preference. Having more options checked will create more requests and clutter up your packet list. Checking for MAC resolutions is a good idea to see the brand of the networking hardware being used. It helps you identify which machines and interfaces are interacting.
-
-### Capture
-
-Capture is at the core of Wireshark. Its primary purpose is to monitor and record traffic on a specified network. It does this, in its most basic form, very simply. Of course, more configuration and options can be used to utilize more of Wireshark's power. This intro section, though, will be sticking to the most basic recording.
-
-To start a new capture, press the new live capture button. It should look like a blue shark fin.
-
-
- ![Wireshark listing packet information](https://linuxconfig.org/images/wireshark-packet-list.jpg?58a2b879)
-
-
-While capturing, Wireshark will gather all of the packet data that it can and record it. Depending on your settings, you should see new packets coming in on the "Packet Listing" pane. You can click on each one you find interesting and investigate in real time, or you can simply walk away and let Wireshark run.
-
-When you're done, press the red square "Stop" button. Now, you can choose to either save or discard your capture. To save, you can click on "File" then "Save" or "Save as."
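-
-As an aside, the same basic capture workflow can be reproduced from the command line with tshark, Wireshark's terminal counterpart (packaged separately as tshark on Debian-based systems). A minimal sketch, assuming your interface is eth0:
-
-```
-# tshark -i eth0 -w capture.pcapng   # capture from eth0 and write to a file
-# tshark -r capture.pcapng           # read the saved capture back
-```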
-
-### Reading Data
-
-Wireshark aims to provide you with all of the data that you will need. In doing so, it collects a large amount of data related to the network packets that it is monitoring. It tries to make this data less daunting by breaking it down in collapsible tabs. Each tab corresponds to a piece of the request data tied to the packet.
-
-The tabs are stacked in order from lowest level to highest level. The top tab will always contain data on the bytes contained in the packet. The lowest tab will vary. In the case of an HTTP request, it will contain the HTTP information. The majority of packets that you encounter will be TCP data, and that will be the bottom tab.
-
-
- ![Wireshark listing HTTP packet info](https://linuxconfig.org/images/wireshark-packet-info-http.jpg?58a2b879)
-
-
-Each tab contains the data relevant to that part of the packet. An HTTP packet will contain information pertaining to the type of request, the web browser used, the IP address of the server, language, and encoding data. A TCP packet will contain information on which ports are being used on both the client and server, as well as the flags being used for the TCP handshake process.
-
-
- ![Wireshark listing TCP packet info](https://linuxconfig.org/images/wireshark-packet-info-tcp.jpg?58a2b879)
-
-
-The other upper fields will contain less information that will interest most users. There is a tab containing information on whether or not the packet was transferred via IPv4 or IPv6 as well as the IP addresses of the client and the server. Another tab provides the MAC address information for both the client machine and the router or gateway used to access the internet.
-
-### Closing Thoughts
-
-Even with just these basics, you can see how powerful of a tool Wireshark can be. Monitoring your network traffic can help to stop cyber attacks or just improve connection speeds. It can also help you chase down problem applications. The next Wireshark guide will explore the options available for filtering packets with Wireshark.
-
---------------------------------------------------------------------------------
-
-via: https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux
-
-作者:[Nick Congleton][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux
-[1]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h2-1-layout
-[2]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h2-2-toolbars
-[3]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h2-3-functionality
-[4]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h1-installation
-[5]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h2-basic-configuration
-[6]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h3-capture
-[7]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h4-reading-data
-[8]:https://linuxconfig.org/basic-of-network-protocol-analyzer-wireshark-on-linux#h5-closing-thoughts
diff --git a/sources/tech/20170214 How to Install Configure and Secure FTP Server in CentOS 7 Comprehensive Guide.md b/sources/tech/20170214 How to Install Configure and Secure FTP Server in CentOS 7 Comprehensive Guide.md
deleted file mode 100644
index 96d28f223a..0000000000
--- a/sources/tech/20170214 How to Install Configure and Secure FTP Server in CentOS 7 Comprehensive Guide.md
+++ /dev/null
@@ -1,277 +0,0 @@
-How to Install, Configure and Secure FTP Server in CentOS 7 – [Comprehensive Guide]
-============================================================
-
-FTP (File Transfer Protocol) is a traditional and widely used standard tool for [transferring files between a server and clients][1] over a network, especially where no authentication is necessary (it permits anonymous users to connect to a server). We must understand that FTP is insecure by default, because it transmits user credentials and data without encryption.
-
-In this guide, we will describe the steps to install, configure and secure an FTP server (VSFTPD stands for "Very Secure FTP Daemon") in CentOS/RHEL 7 and Fedora distributions.
-
-Note that all the commands in this guide will be run as root; in case you are not operating the server with the root account, use the [sudo command][2] to gain root privileges.
-
-### Step 1: Installing FTP Server
-
-1. Installing the vsftpd server is straightforward; just run the following command in the terminal.
-
-```
-# yum install vsftpd
-```
-
-2. After the installation completes, the service will be disabled at first, so we need to start it manually for the time being and enable it to start automatically from the next system boot as well:
-
-```
-# systemctl start vsftpd
-# systemctl enable vsftpd
-```
-
-3. Next, in order to allow access to FTP services from external systems, we have to open port 21, where the FTP daemon listens, as follows:
-
-```
-# firewall-cmd --zone=public --permanent --add-port=21/tcp
-# firewall-cmd --zone=public --permanent --add-service=ftp
-# firewall-cmd --reload
-```
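-
-Optionally, you can verify that the port and service were added successfully:
-
-```
-# firewall-cmd --zone=public --list-ports
-# firewall-cmd --zone=public --list-services
-```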
-
-### Step 2: Configuring FTP Server
-
-4. Now we will perform a few configuration changes to set up and secure our FTP server. Let us start by making a backup of the original config file /etc/vsftpd/vsftpd.conf:
-
-```
-# cp /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd.conf.orig
-```
-
-Next, open the config file above and set the following options with these corresponding values:
-
-```
-anonymous_enable=NO # disable anonymous login
-local_enable=YES # permit local logins
-write_enable=YES # enable FTP commands which change the filesystem
-local_umask=022 # value of umask for file creation for local users
-dirmessage_enable=YES # enable showing of messages when users first enter a new directory
-xferlog_enable=YES # a log file will be maintained detailing uploads and downloads
-connect_from_port_20=YES # use port 20 (ftp-data) on the server machine for PORT style connections
-xferlog_std_format=YES # keep standard log file format
-listen=NO # prevent vsftpd from running in standalone mode
-listen_ipv6=YES # vsftpd will listen on an IPv6 socket instead of an IPv4 one
-pam_service_name=vsftpd # name of the PAM service vsftpd will use
-userlist_enable=YES # enable vsftpd to load a list of usernames
-tcp_wrappers=YES # turn on tcp wrappers
-```
-
-5. Now configure FTP to allow/deny FTP access to users based on the user list file `/etc/vsftpd.userlist`.
-
-By default, when userlist_enable=YES, users listed in userlist_file=/etc/vsftpd.userlist are denied login access if the userlist_deny option is set to YES.
-
-However, userlist_deny=NO alters the setting, meaning that only users explicitly listed in userlist_file=/etc/vsftpd.userlist will be permitted to login.
-
-```
-userlist_enable=YES # vsftpd will load a list of usernames, from the filename given by userlist_file
-userlist_file=/etc/vsftpd.userlist # stores usernames.
-userlist_deny=NO
-```
-
-That’s not all: when users log in to the FTP server, they are placed in a chroot jail, a local root directory which acts as their home directory for the FTP session only.
-
-Next, we will look at two possible ways of chrooting FTP users to their home (local root) directory, as explained below.
-
-6. Now add these two following options to restrict FTP users to their Home directories.
-
-```
-chroot_local_user=YES
-allow_writeable_chroot=YES
-```
-
-chroot_local_user=YES means local users will be placed in a chroot jail, which under the default settings is their home directory after login.
-
-Also, by default vsftpd does not allow the chroot jail directory to be writable, for security reasons; however, we can use the option allow_writeable_chroot=YES to override this setting.
-
-Save the file and close it.
-
-### Step 3: Securing FTP Server with SELinux
-
-7. Now, let’s set the SELinux boolean below to allow FTP to read files in a user’s home directory. Note that this was initially done using the command:
-
-```
-# setsebool -P ftp_home_dir on
-```
-
-However, the `ftp_home_dir` directive has been disabled by default as explained in this bug report: [https://bugzilla.redhat.com/show_bug.cgi?id=1097775][3].
-
-Now we will use the semanage command to set an SELinux rule that allows FTP to read/write users’ home directories.
-
-```
-# semanage boolean -m ftpd_full_access --on
-```
-
-At this point, we have to restart vsftpd to apply all the changes we have made so far:
-
-```
-# systemctl restart vsftpd
-```
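-
-Optionally, confirm that the daemon came back up and is listening on port 21:
-
-```
-# systemctl status vsftpd
-# ss -tlnp | grep :21
-```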
-
-### Step 4: Testing FTP Server
-
-8. Now we will test the FTP server by creating an FTP user with the [useradd command][4].
-
-```
-# useradd -m -c "Ravi Saive, CEO" -s /bin/bash ravi
-# passwd ravi
-```
-
-Afterwards, we have to add the user ravi to the file /etc/vsftpd.userlist using the [echo command][5] as follows:
-
-```
-# echo "ravi" | tee -a /etc/vsftpd.userlist
-# cat /etc/vsftpd.userlist
-```
-
-9. Now it’s time to test if our settings above are working correctly. Let’s start by testing anonymous logins; we can see from the screenshot below that anonymous logins are not permitted:
-
-```
-# ftp 192.168.56.10
-Connected to 192.168.56.10 (192.168.56.10).
-220 Welcome to TecMint.com FTP service.
-Name (192.168.56.10:root) : anonymous
-530 Permission denied.
-Login failed.
-ftp>
-```
-[
- ![Test Anonymous FTP Login](http://www.tecmint.com/wp-content/uploads/2017/02/Test-Anonymous-FTP-Login.png)
-][6]
-
-Test Anonymous FTP Login
-
-10. Let’s also test whether a user not listed in the file /etc/vsftpd.userlist is granted permission to log in; as the screenshot below shows, this is not the case:
-
-```
-# ftp 192.168.56.10
-Connected to 192.168.56.10 (192.168.56.10).
-220 Welcome to TecMint.com FTP service.
-Name (192.168.56.10:root) : aaronkilik
-530 Permission denied.
-Login failed.
-ftp>
-```
-[
- ![FTP User Login Failed](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login-Failed.png)
-][7]
-
-FTP User Login Failed
-
-11. Now do a final check to confirm that a user listed in the file /etc/vsftpd.userlist is actually placed in his/her home directory after login:
-
-```
-# ftp 192.168.56.10
-Connected to 192.168.56.10 (192.168.56.10).
-220 Welcome to TecMint.com FTP service.
-Name (192.168.56.10:root) : ravi
-331 Please specify the password.
-Password:
-230 Login successful.
-Remote system type is UNIX.
-Using binary mode to transfer files.
-ftp> ls
-```
-[
- ![FTP User Login Successful](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login.png)
-][8]
-
-FTP User Login Successful
-
-Warning: Using `allow_writeable_chroot=YES` has certain security implications, especially if the users have upload permission or shell access.
-
-Only activate this option if you know exactly what you are doing. It’s important to note that these security implications are not vsftpd specific; they also apply to all other FTP daemons which offer to put local users in chroot jails.
-
-Therefore, we will look at a more secure way of setting a different non-writable local root directory in the next section.
-
-### Step 5: Configure Different FTP User Home Directories
-
-12. Open the vsftpd configuration file again and start by commenting out the insecure option below:
-
-```
-#allow_writeable_chroot=YES
-```
-
-Then create the alternative local root directory for the user (`ravi`, yours is probably different) and remove write permissions on this directory from all users:
-
-```
-# mkdir /home/ravi/ftp
-# chown nobody:nobody /home/ravi/ftp
-# chmod a-w /home/ravi/ftp
-```
-
-13. Next, create a directory under the local root where the user will store his/her files:
-
-```
-# mkdir /home/ravi/ftp/files
-# chown ravi:ravi /home/ravi/ftp/files
-# chmod 0700 /home/ravi/ftp/files/
-```
-
-Then add/modify the following options in the vsftpd config file with these values:
-
-```
-user_sub_token=$USER # inserts the username in the local root directory
-local_root=/home/$USER/ftp # defines any users local root directory
-```
-
-Save the file and close it. Once again, let’s restart the service with the new settings:
-
-```
-# systemctl restart vsftpd
-```
-
-14. Now do a final test and confirm that the user’s local root directory is the FTP directory we created in his home directory.
-
-```
-# ftp 192.168.56.10
-Connected to 192.168.56.10 (192.168.56.10).
-220 Welcome to TecMint.com FTP service.
-Name (192.168.56.10:root) : ravi
-331 Please specify the password.
-Password:
-230 Login successful.
-Remote system type is UNIX.
-Using binary mode to transfer files.
-ftp> ls
-```
-[
- ![FTP User Home Directory Login Successful](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Home-Directory-Login-Successful.png)
-][9]
-
-FTP User Home Directory Login Successful
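-
-As a final sanity check, you can confirm that uploads work inside the writable files directory (test.txt here is just any local file you have at hand):
-
-```
-ftp> cd files
-ftp> put test.txt
-ftp> ls
-```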
-
-That’s it! In this article, we described how to install, configure and secure an FTP server in CentOS 7. Use the comment section below to write back to us concerning this guide, or to share any useful information about this topic.
-
-**Suggested Read:** [Install ProFTPD Server on RHEL/CentOS 7][10]
-
-In the next article, we will also show you how to [secure an FTP server using SSL/TLS][11] connections in CentOS 7, until then, stay connected to TecMint.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/install-ftp-server-in-centos-7/
-
-作者:[Aaron Kili][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/aaronkili/
-
-[1]:http://www.tecmint.com/scp-commands-examples/
-[2]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
-[3]:https://bugzilla.redhat.com/show_bug.cgi?id=1097775
-[4]:http://www.tecmint.com/add-users-in-linux/
-[5]:http://www.tecmint.com/echo-command-in-linux/
-[6]:http://www.tecmint.com/wp-content/uploads/2017/02/Test-Anonymous-FTP-Login.png
-[7]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login-Failed.png
-[8]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login.png
-[9]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Home-Directory-Login-Successful.png
-[10]:http://www.tecmint.com/install-proftpd-in-centos-7/
-[11]:http://www.tecmint.com/secure-vsftpd-using-ssl-tls-on-centos/
diff --git a/sources/tech/20170307 Assign Read-Write Access to a User on Specific Directory in Linux.md b/sources/tech/20170307 Assign Read-Write Access to a User on Specific Directory in Linux.md
deleted file mode 100644
index 0b779bcb5a..0000000000
--- a/sources/tech/20170307 Assign Read-Write Access to a User on Specific Directory in Linux.md
+++ /dev/null
@@ -1,154 +0,0 @@
-Translating [ChrisLeeGit](https://github.com/chrisleegit)
-
-Assign Read/Write Access to a User on Specific Directory in Linux
-============================================================
-
-
-In a previous article, we showed you how to [create a shared directory in Linux][3]. Here, we will describe how to give read/write access to a user on a specific directory in Linux.
-
-There are two possible methods of doing this: the first is [using ACLs (Access Control Lists)][4] and the second is [creating user groups to manage file permissions][5], as explained below.
-
-For the purpose of this tutorial, we will use the following setup.
-
-```
-Operating system: CentOS 7
-Test directory: /shares/project1/reports
-Test user: tecmint
-Filesystem type: Ext4
-```
-
-Make sure all commands are executed as the root user, or use the [sudo command][6] with equivalent privileges.
-
-Let’s start by creating the directory called `reports` using the mkdir command:
-
-```
-# mkdir -p /shares/project1/reports
-```
-
-### Using ACL to Give Read/Write Access to User on Directory
-
-Important: To use this method, ensure that your Linux filesystem type (such as Ext3, Ext4, NTFS or Btrfs) supports ACLs.
-
-1. First, [check the current file system type][7] on your system, and also whether the kernel supports ACL as follows:
-
-```
-# df -T | awk '{print $1,$2,$NF}' | grep "^/dev"
-# grep -i acl /boot/config*
-```
-
-From the screenshot below, the filesystem type is Ext4 and the kernel supports POSIX ACLs as indicated by the CONFIG_EXT4_FS_POSIX_ACL=y option.
-
-[
- ![Check Filesystem Type and Kernel ACL Support](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Filesystem-Type-and-Kernel-ACL-Support.png)
-][8]
-
-Check Filesystem Type and Kernel ACL Support
-
-2. Next, check if the file system (partition) is mounted with ACL option or not:
-
-```
-# tune2fs -l /dev/sda1 | grep acl
-```
-[
- ![Check Partition ACL Support](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Partition-ACL-Support.png)
-][9]
-
-Check Partition ACL Support
-
-From the above output, we can see that the default mount options already include ACL support. If it’s not enabled, you can enable it for the particular partition (/dev/sda3 in this case):
-
-```
-# mount -o remount,acl /
-# tune2fs -o acl /dev/sda3
-```
-
-3. Now, it’s time to assign read/write access to the user `tecmint` on the specific directory called `reports` by running the following commands.
-
-```
-# getfacl /shares/project1/reports # Check the default ACL settings for the directory
-# setfacl -m user:tecmint:rw /shares/project1/reports # Give rw access to user tecmint
-# getfacl /shares/project1/reports # Check new ACL settings for the directory
-```
-[
- ![Give Read/Write Access to Directory Using ACL](http://www.tecmint.com/wp-content/uploads/2017/03/Give-Read-Write-Access-to-Directory-Using-ACL.png)
-][10]
-
-Give Read/Write Access to Directory Using ACL
-
-In the screenshot above, the user `tecmint` now has read/write (rw) permissions on directory /shares/project1/reports as seen from the output of the second getfacl command.
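-
-A quick way to confirm the change is to run a couple of commands as the user tecmint (test.txt is just a throwaway file name). One caveat: to enter a directory and create files in it, a user also needs the execute bit, so if the write test fails with rw alone, extend the ACL to rwx:
-
-```
-# su - tecmint -c 'touch /shares/project1/reports/test.txt'   # write test
-# su - tecmint -c 'ls -l /shares/project1/reports'            # read test
-# setfacl -m user:tecmint:rwx /shares/project1/reports        # only if the tests above fail
-```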
-
-For more information about ACLs, do check out our following guides.
-
-1. [How to Use ACLs (Access Control Lists) to Setup Disk Quotas for Users/Groups][1]
-2. [How to Use ACLs (Access Control Lists) to Mount Network Shares][2]
-
-Now let’s see the second method of assigning read/write access to a directory.
-
-### Using Groups to Give Read/Write Access to User on Directory
-
-1. If the user already has a default user group (normally with the same name as the username), simply change the group owner of the directory.
-
-```
-# chgrp tecmint /shares/project1/reports
-```
-
-Alternatively, create a new group for multiple users (who will be given read/write permissions on a specific directory), as follows. However, this will [create a shared directory][11]:
-
-```
-# groupadd projects
-```
-
-2. Then add the user `tecmint` to the group `projects` as follows:
-
-```
-# usermod -aG projects tecmint # add user to projects
-# groups tecmint # check users groups
-```
-
-3. Change the group owner of the directory to projects:
-
-```
-# chgrp projects /shares/project1/reports
-```
-
-4. Now set read/write access for the group members:
-
-```
-# chmod -R 0770 /shares/project1/reports # rwx for owner and group; directories need the execute bit so group members can enter them
-# ls -l /shares/project1/ # check new permissions
-```
-
-That’s it! In this tutorial, we showed you how to give read/write access to a user on a specific directory in Linux. If any issues, do ask via the comment section below.
-
---------------------------------------------------------------------------------
-
-
-作者简介:
-
-Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/give-read-write-access-to-directory-in-linux/
-
-作者:[Aaron Kili][a]
-译者:[ChrisLeeGit](https://github.com/chrisleegit)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/aaronkili/
-[1]:http://www.tecmint.com/set-access-control-lists-acls-and-disk-quotas-for-users-groups/
-[2]:http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/
-[3]:http://www.tecmint.com/create-a-shared-directory-in-linux/
-[4]:http://www.tecmint.com/secure-files-using-acls-in-linux/
-[5]:http://www.tecmint.com/manage-users-and-groups-in-linux/
-[6]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
-[7]:http://www.tecmint.com/find-linux-filesystem-type/
-[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Filesystem-Type-and-Kernel-ACL-Support.png
-[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Partition-ACL-Support.png
-[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Give-Read-Write-Access-to-Directory-Using-ACL.png
-[11]:http://www.tecmint.com/create-a-shared-directory-in-linux/
-[12]:http://www.tecmint.com/author/aaronkili/
-[13]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
-[14]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/sources/tech/20170308 Many SQL Performance Problems Stem from Unnecessary, Mandatory Work.md b/sources/tech/20170308 Many SQL Performance Problems Stem from Unnecessary, Mandatory Work.md
index b48182f037..f17e7d2fd3 100644
--- a/sources/tech/20170308 Many SQL Performance Problems Stem from Unnecessary, Mandatory Work.md
+++ b/sources/tech/20170308 Many SQL Performance Problems Stem from Unnecessary, Mandatory Work.md
@@ -1,3 +1,5 @@
+translating by Flowsnow!
+
Many SQL Performance Problems Stem from “Unnecessary, Mandatory Work”
============================================================
diff --git a/sources/tech/20170309 8 reasons to use LXDE.md b/sources/tech/20170309 8 reasons to use LXDE.md
deleted file mode 100644
index f936b7833b..0000000000
--- a/sources/tech/20170309 8 reasons to use LXDE.md
+++ /dev/null
@@ -1,83 +0,0 @@
-ictlyh Translating
-8 reasons to use LXDE
-============================================================
-
-### Learn reasons to consider using the lightweight LXDE desktop environment as your Linux desktop.
-
-![8 reasons to use LXDE](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/rh_003499_01_linux31x_cc.png?itok=1HXbvw2E "8 reasons to use LXDE")
->Image by: opensource.com
-
-Late last year, an upgrade to Fedora 25 brought issues with the new version of [KDE][7] Plasma that were so bad it was difficult to get any work done. I decided to try other Linux desktop environments for two reasons. First, I needed to get my work done. Second, having used KDE exclusively for many years, I thought it was time to try some different desktops.
-
-The first alternate desktop I tried for several weeks was [Cinnamon][8], which I wrote about in January. This time I have been using LXDE (Lightweight X11 Desktop Environment) for about six weeks, and I have found many things about it that I like. Here is my list of eight reasons to use LXDE.
-
-More Linux resources
-
-* [What is Linux?][1]
-* [What are Linux containers?][2]
-* [Managing devices in Linux][3]
-* [Download Now: Linux commands cheat sheet][4]
-* [Our latest Linux articles][5]
-
-**1. LXDE supports multiple panels.** As with KDE and Cinnamon, LXDE sports panels that contain the system menu, application launchers, and a taskbar that displays buttons for the running applications. The first time I logged in to LXDE the panel configuration looked surprisingly familiar. LXDE appears to have picked up the KDE configuration for my favored top and bottom panels, including system tray settings. The application launchers on the top panel appear to have been from the Cinnamon configuration. The contents of the panels make it easy to launch and manage programs. By default, there is only one panel at the bottom of the desktop.
-
- ![The LXDE desktop with the Openbox Configuration Manager open.](https://opensource.com/sites/default/files/lxde-openboxconfigurationmanager.png "The LXDE desktop with the Openbox Configuration Manager open.")
-
-The LXDE desktop with the Openbox Configuration Manager open. This desktop has not been modified, so it uses the default color and icon schemes.
-
-**2. The Openbox configuration manager provides a single, simple tool for managing the look and feel of the desktop.** It provides options for themes, window decorations, window behavior with multiple monitors, moving and resizing windows, mouse control, multiple desktops, and more. Although that seems like a lot, it is far less complex than configuring the KDE desktop, yet Openbox provides a surprisingly great amount of control.
-
-**3. LXDE has a powerful menu tool.** There is an interesting option that you can access on the Advanced tab of the Desktop Preferences menu. The long name for this option is, “Show menus provided by window managers when desktop is clicked.” When this checkbox is selected, the Openbox desktop menu is displayed instead of the standard LXDE desktop menu when you right-click the desktop.
-
-The Openbox desktop menu contains nearly every menu selection you would ever want, with all easily accessible from the desktop. It includes all of the application menus, system administration, and preferences. It even has a menu containing a list of all the terminal emulator applications installed so that sysadmins can easily launch their favorite.
-
-**4. By design, the LXDE desktop is clean and simple.** It has nothing to get in the way of getting your work done. Although you can add some clutter to the desktop in the form of files, directory folders, and links to applications, there are no widgets that can be added to the desktop. I do like some widgets on my KDE and Cinnamon desktops, but they are easy to cover and then I need to move or minimize windows, or just use the “Show desktop” button to clear off the entire desktop. LXDE does have an “Iconify all windows” button, but I seldom need to use it unless I want to look at my wallpaper.
-
-**5. LXDE comes with a strong file manager.** The default file manager for LXDE is PCManFM, so that became my file manager for the duration of my time with LXDE. PCManFM is very flexible and can be configured to make it work well for most people and situations. It seems to be somewhat less configurable than Krusader, which is usually my go-to file manager, but I really like the sidebar on PCManFM that Krusader does not have.
-
-PCManFM allows multiple tabs, which can be opened with a right-click on any item in the sidebar or by a left-click on the new tab icon in the icon bar. The Places pane at the left of the PCManFM window shows the applications menu, and you can launch applications from PCManFM. The upper part of the Places pane also shows a devices icon, which can be used to view your physical storage devices, a list of removable devices along with buttons to allow you to mount or unmount them, and the Home, Desktop, and trashcan folders to make them easy to access. The bottom part of the Places panel contains shortcuts to some default directories, Documents, Music, Pictures, Videos, and Downloads. You can also drag additional directories to the shortcut part of the Places pane. The Places pane can be swapped for a regular directory tree.
-
-**6. The title bar of a new window flashes if it is opened behind existing windows.** This is a nice way to locate new windows among a large number of existing ones.
-
-**7. Most modern desktop environments allow for multiple desktops, and LXDE is no exception.** I like to use one desktop for my development, testing, and writing activities, and another for mundane tasks like email and web browsing. LXDE provides two desktops by default, but you can configure just one or more. Right-click on the Desktop Pager to configure it.
-
-Through some disruptive but not destructive testing, I was able to determine that the maximum number of desktops allowed is 100. I also discovered that when I reduced the number of desktops to fewer than the three I actually had in use, the windows on the defunct desktops were moved to desktop 1. What fun I have had with this!
-
-**8. The Xfce power manager is a powerful little application that allows you to configure how power management works.** It provides a tab for General configuration as well as tabs for System, Display, and Devices. The Devices tab displays a table of attached devices on my system, such as battery-powered mice, keyboards, and even my UPS. It displays information about each, including the vendor and serial number, if available, and the state of the battery charge. As I write this, my UPS is 100% charged and my Logitech mouse is 75% charged. The Xfce power manager also displays an icon in the system tray so you can get a quick read on your devices' battery status from there.
-
-There are more things to like about the LXDE desktop, but these are the ones that either grabbed my attention or are so important to my way of working in a modern GUI interface that they are indispensable to me.
-
-One quirk I noticed with LXDE is that I never did figure out what the “Reconfigure” option does on the desktop (Openbox) menu. I clicked on that several times and never noticed any activity of any kind to indicate that that selection actually did anything.
-
-I have found LXDE to be an easy-to-use, yet powerful, desktop. I have enjoyed my weeks using it for this article. LXDE has enabled me to work effectively mostly by allowing me access to the applications and files that I want, while remaining unobtrusive the rest of the time. I also never encountered anything that prevented me from doing my work. Well, except perhaps for the time I spent exploring this fine desktop. I can highly recommend the LXDE desktop.
-
-I am now using GNOME 3 and the GNOME Shell and will report on that in my next installment.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years.
-
---------------------------------------
-
-via: https://opensource.com/article/17/3/8-reasons-use-lxde
-
-作者:[David Both ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/dboth
-[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
-[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
-[3]:https://opensource.com/article/16/11/managing-devices-linux?src=linux_resource_menu
-[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
-[5]:https://opensource.com/tags/linux?src=linux_resource_menu
-[6]:https://opensource.com/article/17/3/8-reasons-use-lxde?rate=QigvkBy_9zLvktdsL-QaIWedjIqjtlwwJIVFQDQzsSY
-[7]:https://opensource.com/life/15/4/9-reasons-to-use-kde
-[8]:https://opensource.com/article/17/1/cinnamon-desktop-environment
-[9]:https://opensource.com/user/14106/feed
-[10]:https://opensource.com/article/17/3/8-reasons-use-lxde#comments
-[11]:https://opensource.com/users/dboth
diff --git a/sources/tech/20170314 Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part 8.md b/sources/tech/20170314 Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part 8.md
deleted file mode 100644
index dfa6d2a138..0000000000
--- a/sources/tech/20170314 Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part 8.md
+++ /dev/null
@@ -1,339 +0,0 @@
-Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part 8
-============================================================
-
-
-
-This tutorial describes how to join an Ubuntu machine into a Samba4 Active Directory domain in order to authenticate AD accounts with local ACLs for files and directories, or to create and map volume shares for domain users (acting as a file server).
-
-#### Requirements:
-
-1. [Create an Active Directory Infrastructure with Samba4 on Ubuntu][1]
-
-### Step 1: Initial Configurations to Join Ubuntu to Samba4 AD
-
-1. Before starting to join an Ubuntu host into an Active Directory DC, you need to ensure that some services are configured properly on the local machine.
-
-An important aspect of your machine is its hostname. Set up a proper machine name before joining the domain, either with the help of the hostnamectl command or by manually editing the /etc/hostname file.
-
-```
-# hostnamectl set-hostname your_machine_short_name
-# cat /etc/hostname
-# hostnamectl
-```
-[
- ![Set System Hostname](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Ubuntu-System-Hostname.png)
-][2]
-
-Set System Hostname
-
-2. In the next step, open and manually edit your machine network settings with the proper IP configuration. The most important settings here are the DNS IP addresses, which point back to your domain controller.
-
-Edit the /etc/network/interfaces file and add a dns-nameservers statement with your proper AD IP addresses and domain name, as illustrated in the screenshot below.
-
-Also, make sure that the same DNS IP addresses and the domain name are added to the /etc/resolv.conf file.
-
-[
- ![Configure Network Settings for AD](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network-Settings-for-AD.png)
-][3]
-
-Configure Network Settings for AD
-
-In the above screenshot, 192.168.1.254 and 192.168.1.253 are the IP addresses of the Samba4 AD DC, and Tecmint.lan represents the name of the AD domain, which will be queried by all machines integrated into the realm.
-
-3. Restart the network services or reboot the machine in order to apply the new network configurations. Issue a ping command against your domain name in order to test if DNS resolution is working as expected.
-
-The AD DC should reply with its FQDN. In case you have configured a DHCP server in your network to automatically assign IP settings for your LAN hosts, make sure you add the AD DC IP addresses to the DHCP server DNS configuration.
-
-```
-# systemctl restart networking.service
-# ping -c2 your_domain_name
-```
-
-4. The last important configuration required is time synchronization. Install the ntpdate package, then query and sync time with the AD DC by issuing the below commands.
-
-```
-$ sudo apt-get install ntpdate
-$ sudo ntpdate -q your_domain_name
-$ sudo ntpdate your_domain_name
-```
-[
- ![Time Synchronization with AD](http://www.tecmint.com/wp-content/uploads/2017/03/Time-Synchronization-with-AD.png)
-][4]
-
-Time Synchronization with AD
-
-5. In the next step, install the software required for the Ubuntu machine to be fully integrated into the domain by running the below command.
-
-```
-$ sudo apt-get install samba krb5-config krb5-user winbind libpam-winbind libnss-winbind
-```
-[
- ![Install Samba4 in Ubuntu Client](http://www.tecmint.com/wp-content/uploads/2017/03/Install-Samba4-in-Ubuntu-Client.png)
-][5]
-
-Install Samba4 in Ubuntu Client
-
-While the Kerberos packages are installing, you should be asked to enter the name of your default realm. Use the name of your domain in uppercase and press the Enter key to continue the installation.
-
-[
- ![Add AD Domain Name](http://www.tecmint.com/wp-content/uploads/2017/03/Add-AD-Domain-Name.png)
-][6]
-
-Add AD Domain Name
-
-6. After all packages finish installing, test Kerberos authentication against an AD administrative account and list the ticket by issuing the below commands.
-
-```
-# kinit ad_admin_user
-# klist
-```
-[
- ![Check Kerberos Authentication with AD](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kerberos-Authentication-with-AD.png)
-][7]
-
-Check Kerberos Authentication with AD
-
-### Step 2: Join Ubuntu to Samba4 AD DC
-
-7. The first step in integrating the Ubuntu machine into the Samba4 Active Directory domain is to edit the Samba configuration file.
-
-Backup the default configuration file of Samba, provided by the package manager, in order to start with a clean configuration by running the following commands.
-
-```
-# mv /etc/samba/smb.conf /etc/samba/smb.conf.initial
-# nano /etc/samba/smb.conf
-```
-
-In the new Samba configuration file, add the below lines:
-
-```
-[global]
-workgroup = TECMINT
-realm = TECMINT.LAN
-netbios name = ubuntu
-security = ADS
-dns forwarder = 192.168.1.1
-idmap config * : backend = tdb
-idmap config *:range = 50000-1000000
-template homedir = /home/%D/%U
-template shell = /bin/bash
-winbind use default domain = true
-winbind offline logon = false
-winbind nss info = rfc2307
-winbind enum users = yes
-winbind enum groups = yes
-vfs objects = acl_xattr
-map acl inherit = Yes
-store dos attributes = Yes
-```
-[
- ![Configure Samba for AD](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Samba.png)
-][8]
-
-Configure Samba for AD
-
-Replace workgroup, realm, netbios name and dns forwarder variables with your own custom settings.
-
-The winbind use default domain parameter causes the winbind service to treat any unqualified usernames as AD users. You should omit this parameter if you have local system account names which overlap with AD accounts.
-
-8. Now restart all the Samba daemons, stop the unnecessary samba-ad-dc service and enable the Samba services system-wide by issuing the below commands.
-
-```
-$ sudo systemctl restart smbd nmbd winbind
-$ sudo systemctl stop samba-ad-dc
-$ sudo systemctl enable smbd nmbd winbind
-```
-
-9. Join the Ubuntu machine to the Samba4 AD DC by issuing the following command. Use the name of an AD account with administrator privileges in order for the binding to the realm to work as expected.
-
-```
-$ sudo net ads join -U ad_admin_user
-```
-[
- ![Join Ubuntu to Samba4 AD DC](http://www.tecmint.com/wp-content/uploads/2017/03/Join-Ubuntu-to-Samba4-AD-DC.png)
-][9]
-
-Join Ubuntu to Samba4 AD DC
-
-10. From a [Windows machine with RSAT tools installed][10] you can open AD UC and navigate to the Computers container. Here, your joined Ubuntu machine should be listed.
-
-[
- ![Confirm Ubuntu Client in Windows AD DC](http://www.tecmint.com/wp-content/uploads/2017/03/Confirm-Ubuntu-Client-in-RSAT-.png)
-][11]
-
-Confirm Ubuntu Client in Windows AD DC
-
-### Step 3: Configure AD Accounts Authentication
-
-11. In order to perform authentication for AD accounts on the local machine, you need to modify some services and files.
-
-First, open and edit the Name Service Switch (NSS) configuration file.
-
-```
-$ sudo nano /etc/nsswitch.conf
-```
-
-Next, append the winbind value to the passwd and group lines as illustrated in the below excerpt.
-
-```
-passwd: compat winbind
-group: compat winbind
-```
-[
- ![Configure AD Accounts Authentication](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-AD-Accounts-Authentication.png)
-][12]
-
-Configure AD Accounts Authentication
-
-12. In order to test if the Ubuntu machine was successfully integrated into the realm, run the wbinfo command to list domain accounts and groups.
-
-```
-$ wbinfo -u
-$ wbinfo -g
-```
-[
- ![List AD Domain Accounts and Groups](http://www.tecmint.com/wp-content/uploads/2017/03/List-AD-Domain-Accounts-and-Groups.png)
-][13]
-
-List AD Domain Accounts and Groups
-
-13. Also, check the Winbind nsswitch module by issuing the getent command, piping the results through a filter such as grep to narrow the output to specific domain users or groups.
-
-```
-$ sudo getent passwd| grep your_domain_user
-$ sudo getent group|grep 'domain admins'
-```
-[
- ![Check AD Domain Users and Groups](http://www.tecmint.com/wp-content/uploads/2017/03/Check-AD-Domain-Users-and-Groups.png)
-][14]
-
-Check AD Domain Users and Groups
-
-14. In order to authenticate on the Ubuntu machine with domain accounts, you need to run the pam-auth-update command with root privileges, add all the entries required for the winbind service, and enable the automatic creation of home directories for each domain account at first login.
-
-Check all entries by pressing the `[space]` key and hit OK to apply the configuration.
-
-```
-$ sudo pam-auth-update
-```
-[
- ![Authenticate Ubuntu with Domain Accounts](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Ubuntu-with-Domain-Accounts.png)
-][15]
-
-Authenticate Ubuntu with Domain Accounts
-
-15. On Debian systems you need to manually edit the /etc/pam.d/common-account file and add the following line in order to automatically create home directories for authenticated domain users.
-
-```
-session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
-```
-[
- ![Authenticate Debian with Domain Accounts](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Debian-with-Domain-Accounts.png)
-][16]
-
-Authenticate Debian with Domain Accounts
-
-16. In order for Active Directory users to be able to change their password from the command line in Linux, open the /etc/pam.d/common-password file and remove the use_authtok statement from the password line, so that it finally looks like the below excerpt.
-
-```
-password [success=1 default=ignore] pam_winbind.so try_first_pass
-```
-[
- ![Users Allowed to Change Password](http://www.tecmint.com/wp-content/uploads/2017/03/AD-Domain-Users-Change-Password.png)
-][17]
-
-Users Allowed to Change Password
-
-17. To authenticate on the Ubuntu host with a Samba4 AD account, use the domain username as parameter for the su - command. Run the id command to get extra info about the AD account.
-
-```
-$ su - your_ad_user
-```
-[
- ![Find AD User Information](http://www.tecmint.com/wp-content/uploads/2017/03/Find-AD-User-Information.png)
-][18]
-
-Find AD User Information
-
-Use the [pwd command][19] to see your domain user's current directory and the passwd command if you want to change the password.
-
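-For instance, once logged in as the AD user, a quick session might look like this (a minimal sketch; the output depends on your domain):
-
-```
-$ id
-$ pwd
-$ passwd
-```
-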
-18. To use a domain account with root privileges on your Ubuntu machine, you need to add the AD username to the sudo system group by issuing the below command:
-
-```
-$ sudo usermod -aG sudo your_domain_user
-```
-
-Log in to Ubuntu with the domain account and update your system by running the apt-get update command to check whether the domain user has root privileges.
-
-[
- ![Add Sudo User Root Group](http://www.tecmint.com/wp-content/uploads/2017/03/Add-Sudo-User-Root-Group.png)
-][20]
-
-Add Sudo User Root Group
-
-19. To add root privileges for a domain group, open and edit the /etc/sudoers file using the visudo command and add the following line as illustrated in the below screenshot.
-
-```
-%YOUR_DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL
-```
-[
- ![Add Root Privileges to Domain Group](http://www.tecmint.com/wp-content/uploads/2017/03/Add-Root-Privileges-to-Domain-Group.jpg)
-][21]
-
-Add Root Privileges to Domain Group
-
-Use backslashes to escape spaces contained in your domain group name and to escape the first backslash. In the above example the domain group for the TECMINT realm is named “domain admins”.
-
-The leading percent sign `(%)` indicates that we are referring to a group, not a username.
-
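-Putting it together for the example in this article, the concrete line for the “domain admins” group of the TECMINT realm would be:
-
-```
-%TECMINT\\domain\ admins ALL=(ALL:ALL) ALL
-```
-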
-20. In case you are running the graphical version of Ubuntu and you want to log in to the system with a domain user, you need to modify the LightDM display manager by editing the /usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf file, adding the following lines and rebooting the machine to apply the changes.
-
-```
-greeter-show-manual-login=true
-greeter-hide-users=true
-```
-
-You should now be able to log in to the Ubuntu Desktop with a domain account using either the your_domain_username, your_domain_username@your_domain.tld or your_domain\your_domain_username format.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-I'm a computer addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions, desktops, servers and bash scripting.
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/join-ubuntu-to-active-directory-domain-member-samba-winbind/
-
-作者:[Matei Cezar][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/cezarmatei/
-
-[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
-[2]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Ubuntu-System-Hostname.png
-[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network-Settings-for-AD.png
-[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Time-Synchronization-with-AD.png
-[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Install-Samba4-in-Ubuntu-Client.png
-[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-AD-Domain-Name.png
-[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kerberos-Authentication-with-AD.png
-[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Samba.png
-[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Join-Ubuntu-to-Samba4-AD-DC.png
-[10]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
-[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Confirm-Ubuntu-Client-in-RSAT-.png
-[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-AD-Accounts-Authentication.png
-[13]:http://www.tecmint.com/wp-content/uploads/2017/03/List-AD-Domain-Accounts-and-Groups.png
-[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-AD-Domain-Users-and-Groups.png
-[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Ubuntu-with-Domain-Accounts.png
-[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Debian-with-Domain-Accounts.png
-[17]:http://www.tecmint.com/wp-content/uploads/2017/03/AD-Domain-Users-Change-Password.png
-[18]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-AD-User-Information.png
-[19]:http://www.tecmint.com/pwd-command-examples/
-[20]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-Sudo-User-Root-Group.png
-[21]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-Root-Privileges-to-Domain-Group.jpg
-[22]:http://www.tecmint.com/author/cezarmatei/
-[23]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
-[24]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/sources/tech/20170317 How to Build Your Own Media Center with OpenELEC.md b/sources/tech/20170317 How to Build Your Own Media Center with OpenELEC.md
deleted file mode 100644
index d4dcd4842f..0000000000
--- a/sources/tech/20170317 How to Build Your Own Media Center with OpenELEC.md
+++ /dev/null
@@ -1,125 +0,0 @@
-How to Build Your Own Media Center with OpenELEC
-============================================================
-
-![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-media-center.jpg "How to Build Your Own Media Center with OpenELEC")
-
-
-Have you ever wanted to make your own home theater system? If so, this is the guide for you! In this article we’ll go over how to set up a home entertainment system powered by OpenELEC and Kodi: how to make the installation medium, which devices can run the software, how to install it and everything else there is to know!
-
-
-### Choosing a device
-
-Before setting up the software in the media center, you’ll need to choose a device. OpenELEC supports a multitude of devices, from regular desktops and laptops to the Raspberry Pi 2/3. With a device chosen, think about how you’ll access the media on the OpenELEC system and get it ready to use.
-
-**Note:** as OpenELEC is based on Kodi, there are many ways to load playable media (Samba network shares, external devices, etc.).
-
-### Making the installation disk
-
-The OpenELEC installation disk requires a USB flash drive of at least 1 GB. This is the only way to install the software, as the developers do not currently distribute an ISO file. A raw IMG file needs to be created instead. Choose the link that corresponds with your device and [download][10] the raw disk image. With the image downloaded, open a terminal and use the following command to extract the data from the archive.
-
-**On Linux/macOS**
-
-```
-cd ~/Downloads
-gunzip -d OpenELEC*.img.gz
-```
-
-**On Windows**
-
-Download [7zip][11], install it, and then extract the archive.
-
-With the raw .IMG file extracted, download the [Etcher USB creation tool][12] and follow the instructions on the page to install it and create the USB disk.
-
-**Note:** for Raspberry Pi users, Etcher supports burning to SD cards as well.
-
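-Alternatively, on Linux you can write the extracted image with dd instead of Etcher (a sketch only; the image filename below is an assumption, and /dev/sdX must be replaced with your actual USB device, which dd will overwrite without asking):
-
-```
-sudo dd if=OpenELEC-Generic.x86_64-7.0.1.img of=/dev/sdX bs=4M status=progress
-sync
-```
-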
-### Installing OpenELEC
-
-OpenELEC is probably one of the easiest operating systems to install. To start, plug in the USB device and configure your machine to boot from the USB drive. For some machines, this can be accomplished by pressing the DEL or F2 key. However, as all BIOSes are different, it is best to look into the manual and find out.
-
- ![openelec-installer-selection](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installer-selection.png "openelec-installer-selection")
-
-Once in the BIOS, configure it to load the USB stick directly. This will allow the computer to boot the drive, which will bring you to the Syslinux boot screen. Enter “installer” in the prompt, then press the Enter key.
-
- ![openelec-installation-selection-menu](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installation-selection-menu.png "openelec-installation-selection-menu")
-
-By default, the quick installation option is selected. Press Enter to start the install. This will move the installer onto the drive selection page. Select the hard drive where OpenELEC should be installed, then press the Enter key to start the installation process.
-
- ![openelec-installation-in-progress](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installation-in-progress.png "openelec-installation-in-progress")
-
-Once done, reboot the system and load OpenELEC.
-
-### Configuring OpenELEC
-
- ![openelec-wireless-network-setup](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-wireless-network-setup.jpg "openelec-wireless-network-setup")
-
-On first boot, you must configure a few things. If your media center device has a wireless network card, OpenELEC will prompt you to connect to a wireless access point. Select a network from the list and enter the access code.
-
- ![openelec-sharing-setup](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-sharing-setup.jpg "openelec-sharing-setup")
-
-On the next “Welcome to OpenELEC” screen, the user must configure various sharing settings (SSH and Samba). It is advised that you turn these settings on, as this will make it easier to remotely transfer media files as well as gain command-line access.
-
-### Adding Media
-
-To add media to OpenElec (Kodi), first select the section that you want to add media to. Adding media for Photos, Music, etc., is the same process. In this guide we’ll focus on adding videos.
-
- ![openelec-add-files-to-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-add-files-to-kodi.jpg "openelec-add-files-to-kodi")
-
-Click the “Video” option on the home screen to go to the videos area. Select the “Files” option. On the next page click “Add videos…” This will take the user to the Kodi add-media screen. From here it is possible to add new media sources (both internal and external).
-
- ![openelec-add-media-source-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-add-media-source-kodi.jpg "openelec-add-media-source-kodi")
-
-OpenELEC automatically mounts external devices (like USB drives, DVD data discs, etc.), and they can be added by browsing for the folder’s mount point. Usually these devices are placed in “/run.” Alternatively, go back to the page where you clicked on “Add videos…” and click on the device there. Any external device, including DVDs/CDs, will show up there and can be accessed directly. This is a good option for those who don’t understand how to find mount points.
-
- ![openelec-name-video-source-folder](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-name-video-source-folder.jpg "openelec-name-video-source-folder")
-
-Now that the device is selected within Kodi, the interface will ask the user to browse for the individual directory on the device with the media files using the media center’s file browser tool. Once the directory that holds the files is found, add it, give the directory a name and press the OK button to save it.
-
- ![openelec-show-added-media-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-show-added-media-kodi.jpg "openelec-show-added-media-kodi")
-
-When a user browses “Videos,” they’ll see a clickable folder which brings up the media added from an external device. These folders can easily be played on the system.
-
-### Using OpenElec
-
-When the user logs in they’ll see a “home screen.” This home screen has several sections the user is able to click on and go to: Pictures, Videos, Music, Programs, etc. When hovering over any of these sections, subsections appear. For example, when hovering over “Pictures,” the subsections “files” and “Add-ons” appear.
-
- ![openelec-navigation-bar](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-navigation-bar.jpg "openelec-navigation-bar")
-
-If a user clicks on one of the subsections under a section, like “add-ons,” the Kodi add-on chooser appears. This installer will allow users to either browse for new add-ons to install in relation to this subsection (like Picture-related add-ons, etc.) or to launch existing picture-related ones that are already on the system.
-
-Additionally, clicking the files subsection of any section (e.g. Videos) takes the user directly to any available files in that section.
-
-### System Settings
-
- ![openelec-system-settings](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-system-settings.jpg "openelec-system-settings")
-
-Kodi has an extensive settings area. To get to the Settings, hover the mouse to the right, and the menu selector will scroll right and reveal “System.” Click on it to open the global system settings area.
-
-Any setting can be modified and changed by the user, from installing add-ons from the Kodi-repository, to activating various services, to changing the theme, and even the weather. To exit the settings area and return to the home screen, press the “home” icon in the bottom-right corner.
-
-### Conclusion
-
-With OpenELEC installed and configured, you are now free to go and use your very own Linux-powered home-theater system. Out of all of the home-theater-based Linux distributions, this one is the most user-friendly. Do keep in mind that although this operating system is known as “OpenELEC,” it runs Kodi and is compatible with all of the different Kodi add-ons, tools, and programs.
-
---------------------------------------------------------------------------------
-
-via: https://www.maketecheasier.com/build-media-center-with-openelec/
-
-作者:[Derrik Diener][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.maketecheasier.com/author/derrikdiener/
-[1]:https://www.maketecheasier.com/author/derrikdiener/
-[2]:https://www.maketecheasier.com/build-media-center-with-openelec/#comments
-[3]:https://www.maketecheasier.com/category/linux-tips/
-[4]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F
-[5]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F&text=How+to+Build+Your+Own+Media+Center+with+OpenELEC
-[6]:mailto:?subject=How%20to%20Build%20Your%20Own%20Media%20Center%20with%20OpenELEC&body=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F
-[7]:https://www.maketecheasier.com/permanently-disable-windows-defender-windows-10/
-[8]:https://www.maketecheasier.com/repair-mac-hard-disk-with-fsck/
-[9]:https://support.google.com/adsense/troubleshooter/1631343
-[10]:http://openelec.tv/get-openelec/category/1-openelec-stable-releases
-[11]:http://www.7-zip.org/
-[12]:https://etcher.io/
diff --git a/sources/tech/20170317 Join CentOS 7 Desktop to Samba4 AD as a Domain Member – Part 9.md b/sources/tech/20170317 Join CentOS 7 Desktop to Samba4 AD as a Domain Member – Part 9.md
deleted file mode 100644
index 12d543f778..0000000000
--- a/sources/tech/20170317 Join CentOS 7 Desktop to Samba4 AD as a Domain Member – Part 9.md
+++ /dev/null
@@ -1,316 +0,0 @@
-#rusking translating
-Join CentOS 7 Desktop to Samba4 AD as a Domain Member – Part 9
-============================================================
-
-This guide will describe how you can integrate CentOS 7 Desktop to Samba4 Active Directory Domain Controller with Authconfig-gtk in order to authenticate users across your network infrastructure from a single centralized account database held by Samba.
-
-#### Requirements
-
-1. [Create an Active Directory Infrastructure with Samba4 on Ubuntu][1]
-2. [CentOS 7.3 Installation Guide][2]
-
-### Step 1: Configure CentOS Network for Samba4 AD DC
-
-1. Before starting to join the CentOS 7 Desktop to a Samba4 domain, you need to ensure that the network is properly set up to query the domain via the DNS service.
-
-Open Network Settings and turn off the Wired network interface if enabled. Hit the lower Settings button as illustrated in the below screenshots and manually edit your network settings, especially the DNS IPs that point to your Samba4 AD DC.
-
-When you finish, apply the configuration and turn your wired network card back on.
-
-[
- ![Network Settings](http://www.tecmint.com/wp-content/uploads/2017/03/Network-Settings.jpg)
-][3]
-
-Network Settings
-
-[
- ![Configure Network](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network.jpg)
-][4]
-
-Configure Network
-
-2. Next, open your network interface configuration file and add a line at the end of the file with the name of your domain. This line ensures that the domain part is automatically appended by DNS resolution (FQDN) when you use only a short name for a domain DNS record.
-
-```
-$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
-```
-
-Add the following line:
-
-```
-SEARCH="your_domain_name"
-```
-[
- ![Network Interface Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Network-Interface-Configuration.jpg)
-][5]
-
-Network Interface Configuration
-
-3. Finally, restart the network services to apply the changes, verify that the resolver configuration file is correctly configured, and issue a series of ping commands against your DCs' short names and against your domain name in order to verify that DNS resolution is working.
-
-```
-$ sudo systemctl restart network
-$ cat /etc/resolv.conf
-$ ping -c1 adc1
-$ ping -c1 adc2
-$ ping tecmint.lan
-```
-[
- ![Verify Network Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Verify-Network-Configuration.jpg)
-][6]
-
-Verify Network Configuration
-
-4. Also, configure your machine hostname and reboot the machine to properly apply the settings by issuing the following commands:
-
-```
-$ sudo hostnamectl set-hostname your_hostname
-$ sudo init 6
-```
-
-Verify that the hostname was correctly applied with the below commands:
-
-```
-$ cat /etc/hostname
-$ hostname
-```
-
-5. As a last setting, ensure that your system time is in sync with the Samba4 AD DC by issuing the below commands:
-
-```
-$ sudo yum install ntpdate
-$ sudo ntpdate -ud domain.tld
-```
-
-### Step 2: Install Required Software to Join Samba4 AD DC
-
-6. In order to integrate CentOS 7 into an Active Directory domain, install the following packages from the command line:
-
-```
-$ sudo yum install samba samba-winbind krb5-workstation
-```
-
-7. Finally, install the graphical interface software used for domain integration provided by CentOS repos: Authconfig-gtk.
-
-```
-$ sudo yum install authconfig-gtk
-```
-
-### Step 3: Join CentOS 7 Desktop to Samba4 AD DC
-
-8. The process of joining CentOS to a domain controller is very straightforward. From the command line, open the Authconfig-gtk program with root privileges and make the changes described below:
-
-```
-$ sudo authconfig-gtk
-```
-
-On the Identity & Authentication tab:
-
-* User Account Database = select Winbind
-* Winbind Domain = YOUR_DOMAIN
-* Security Model = ADS
-* Winbind ADS Realm = YOUR_DOMAIN.TLD
-* Domain Controllers = domain machines FQDN
-* Template Shell = /bin/bash
-* Allow offline login = checked
-
-[
- ![Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Configuration.jpg)
-][7]
-
-Authentication Configuration
-
-On the Advanced Options tab:
-
-* Local Authentication Options = check Enable fingerprint reader support
-* Other Authentication Options = check Create home directories on the first login
-
-[
- ![Authentication Advance Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Advance-Configuration.jpg)
-][8]
-
-Authentication Advance Configuration
-
-9. After you’ve added all required values, return to the Identity & Authentication tab, hit the Join Domain button and then the Save button in the alert window to save the settings.
-
-[
- ![Identity and Authentication](http://www.tecmint.com/wp-content/uploads/2017/03/Identity-and-Authentication.jpg)
-][9]
-
-Identity and Authentication
-
-[
- ![Save Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Save-Authentication-Configuration.jpg)
-][10]
-
-Save Authentication Configuration
-
-10. After the configuration has been saved, you will be asked to provide a domain administrator account in order to join the domain. Supply the credentials for a domain administrator user and hit the OK button to finally join the domain.
-
-[
- ![Joining Winbind Domain](http://www.tecmint.com/wp-content/uploads/2017/03/Joining-Winbind-Domain.jpg)
-][11]
-
-Joining Winbind Domain
-
-11. After your machine has been integrated into the realm, hit the Apply button to apply the changes, close all windows and reboot the machine.
-
-[
- ![Apply Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Apply-Authentication-Configuration.jpg)
-][12]
-
-Apply Authentication Configuration
-
-12. In order to verify that the system has been joined to the Samba4 AD DC, open AD Users and Computers from a Windows machine with [RSAT tools installed][13] and navigate to your domain's Computers container.
-
-The name of your CentOS machine should be listed in the right pane.
-
-[
- ![Active Directory Users and Computers](http://www.tecmint.com/wp-content/uploads/2017/03/Active-Directory-Users-and-Computers.jpg)
-][14]
-
-Active Directory Users and Computers
-
-### Step 4: Login to CentOS Desktop with a Samba4 AD DC Account
-
-13. In order to log in to the CentOS Desktop, hit the Not listed? link and enter the username of a domain account preceded by the domain part, as illustrated below.
-
-```
-Domain\domain_account
-or
-Domain_user@domain.tld
-```
-[
- ![Not listed Users](http://www.tecmint.com/wp-content/uploads/2017/03/Not-listed-Users.jpg)
-][15]
-
-Not listed Users
-
-[
- ![Enter Domain Username](http://www.tecmint.com/wp-content/uploads/2017/03/Enter-Domain-Username.jpg)
-][16]
-
-Enter Domain Username
-
-14. To authenticate with a domain account from the command line in CentOS, use one of the following syntaxes:
-
-```
-$ su - domain\domain_user
-$ su - domain_user@domain.tld
-```
-[
- ![Authenticate Domain Username](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User.jpg)
-][17]
-
-Authenticate Domain Username
-
-[
- ![Authenticate Domain User Email](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User-Email.jpg)
-][18]
-
-Authenticate Domain User Email
-
-15. To add root privileges for a domain user or group, edit the sudoers file using the visudo command with root privileges and add the following lines as illustrated in the below excerpt:
-
-```
-YOUR_DOMAIN\\domain_username ALL=(ALL:ALL) ALL #For domain users
-%YOUR_DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL #For domain groups
-```
-[
- ![Assign Permission to User and Group](http://www.tecmint.com/wp-content/uploads/2017/03/Assign-Permission-to-User-and-Group.jpg)
-][19]
-
-Assign Permission to User and Group
-
-16. To display a summary about the domain controller use the following command:
-
-```
-$ sudo net ads info
-```
-[
- ![Check Domain Controller Info](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Domain-Controller-Info.jpg)
-][20]
-
-Check Domain Controller Info
-
-17. In order to verify that the machine trust account created when CentOS was added to the Samba4 AD DC is functional, and to list domain accounts from the command line, install the Winbind client by issuing the below command:
-
-```
-$ sudo yum install samba-winbind-clients
-```
-
-Then issue a series of checks against Samba4 AD DC by executing the following commands:
-
-```
-$ wbinfo -p #Ping domain
-$ wbinfo -t #Check trust relationship
-$ wbinfo -u #List domain users
-$ wbinfo -g #List domain groups
-$ wbinfo -n domain_account #Get the SID of a domain account
-```
-[
- ![Get Samba4 AD DC Details](http://www.tecmint.com/wp-content/uploads/2017/03/Get-Samba4-AD-DC-Details.jpg)
-][21]
-
-Get Samba4 AD DC Details
-
-18. In case you want to leave the domain, issue the following command against your domain name using a domain account with administrator privileges:
-
-```
-$ sudo net ads leave your_domain -U domain_admin_username
-```
-[
- ![Leave Domain from Samba4 AD](http://www.tecmint.com/wp-content/uploads/2017/03/Leave-Domain-from-Samba4-AD.jpg)
-][22]
-
-Leave Domain from Samba4 AD
-
-That’s all! Although this procedure is focused on joining CentOS 7 to a Samba4 AD DC, the same steps described in this documentation are also valid for integrating a CentOS 7 Desktop machine to a Microsoft Windows Server 2008 or 2012 domain.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-I'm a computer addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions, desktops, servers and bash scripting.
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/join-centos-7-to-samba4-active-directory/
-
-作者:[Matei Cezar][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/cezarmatei/
-
-[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
-[2]:http://www.tecmint.com/centos-7-3-installation-guide/
-[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Network-Settings.jpg
-[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network.jpg
-[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Network-Interface-Configuration.jpg
-[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Verify-Network-Configuration.jpg
-[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Configuration.jpg
-[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Advance-Configuration.jpg
-[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Identity-and-Authentication.jpg
-[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Save-Authentication-Configuration.jpg
-[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Joining-Winbind-Domain.jpg
-[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Apply-Authentication-Configuration.jpg
-[13]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
-[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Active-Directory-Users-and-Computers.jpg
-[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Not-listed-Users.jpg
-[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Enter-Domain-Username.jpg
-[17]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User.jpg
-[18]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User-Email.jpg
-[19]:http://www.tecmint.com/wp-content/uploads/2017/03/Assign-Permission-to-User-and-Group.jpg
-[20]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Domain-Controller-Info.jpg
-[21]:http://www.tecmint.com/wp-content/uploads/2017/03/Get-Samba4-AD-DC-Details.jpg
-[22]:http://www.tecmint.com/wp-content/uploads/2017/03/Leave-Domain-from-Samba4-AD.jpg
-[23]:http://www.tecmint.com/author/cezarmatei/
-[24]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
-[25]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/sources/tech/20170320 How to deploy Node.js Applications with pm2 and Nginx on Ubuntu.md b/sources/tech/20170320 How to deploy Node.js Applications with pm2 and Nginx on Ubuntu.md
deleted file mode 100644
index db2f20688d..0000000000
--- a/sources/tech/20170320 How to deploy Node.js Applications with pm2 and Nginx on Ubuntu.md
+++ /dev/null
@@ -1,281 +0,0 @@
-ictlyh Translating
-How to deploy Node.js Applications with pm2 and Nginx on Ubuntu
-============================================================
-
-### On this page
-
-1. [Step 1 - Install Node.js LTS][1]
-2. [Step 2 - Generate Express Sample App][2]
-3. [Step 3 - Install pm2][3]
-4. [Step 4 - Install and Configure Nginx as a Reverse proxy][4]
-5. [Step 5 - Testing][5]
-6. [Links][6]
-
-pm2 is a process manager for Node.js applications. It allows you to keep your apps alive and has a built-in load balancer. It's simple and powerful: you can always restart or reload your node application with zero downtime, and it allows you to create a cluster of your node app.
-
-In this tutorial, I will show you how to install and configure pm2 for the simple 'Express' application and then configure Nginx as a reverse proxy for the node application that is running under pm2.
-
-**Prerequisites**
-
-* Ubuntu 16.04 - 64bit
-* Root Privileges
-
-### Step 1 - Install Node.js LTS
-
-In this tutorial, we will start our project from scratch. First, we need Nodejs installed on the server. I will use the Nodejs LTS version 6.x which can be installed from the nodesource repository.
-
-Install the package '**python-software-properties**' from the Ubuntu repository and then add the 'nodesource' Nodejs repository.
-
-```
-sudo apt-get install -y python-software-properties
-curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
-```
-
-Install the latest Nodejs LTS version.
-
-```
-sudo apt-get install -y nodejs
-```
-
-When the installation has succeeded, check the node and npm versions.
-
-```
-node -v
-npm -v
-```
-
-[
- ![Check the node.js version](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/1.png)
-][10]
-
-### Step 2 - Generate Express Sample App
-
-I will use a simple web application skeleton generated with a package named '**express-generator**' for this example installation. Express-generator can be installed with the npm command.
-
-Install '**express-generator**' with npm:
-
-```
-npm install express-generator -g
-```
-
-**-g:** install the package system-wide (globally).
-
-We will run the application as a normal user, not a root or super user. So we need to create a new user first.
-
-Create a new user; I will name mine '**yume**':
-
-```
-useradd -m -s /bin/bash yume
-passwd yume
-```
-
-Log in as the new user using su:
-
-```
-su - yume
-```
-
-Next, generate a new simple web application with the express command:
-
-```
-express hakase-app
-```
-
-The command will create a new project directory, '**hakase-app**'.
-
-[
- ![Generate app skeleton with express-generator](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/2.png)
-][11]
-
-Go to the project directory and install all dependencies needed by the app.
-
-```
-cd hakase-app
-npm install
-```
-
-Then test and start a new simple application with the command below:
-
-```
-DEBUG=myapp:* npm start
-```
-
-By default, our express application will run on port **3000**. Now visit the server IP address: 192.168.33.10:3000
-
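-You can also check it from the shell with curl (using the example IP from this tutorial):
-
-```
-curl -I http://192.168.33.10:3000
-```
-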
-[
- ![express nodejs running on port 3000](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/3.png)
-][13]
-
-The simple web application skeleton is running on port 3000, under user 'yume'.
-
-### Step 3 - Install pm2
-
-pm2 is a node package and can be installed with the npm command. So let's install it with npm (with root privileges; if you are still logged in as user yume, run the command "exit" to become root again):
-
-```
-npm install pm2 -g
-```
-
-Now we can use pm2 for our web application.
-
-Go to the app directory '**hakase-app**':
-
-```
-su - yume
-cd ~/hakase-app/
-```
-
-There you can find a file named '**package.json**'; display its content with the cat command.
-
-```
-cat package.json
-```
-
-[
- ![express nodejs services configuration](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/4.png)
-][14]
-
-You can see that the '**start**' line contains the command that Nodejs uses to start the express application. We will use this command with the pm2 process manager.
-
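-For a freshly generated express app, the scripts section typically looks like the excerpt below (it may differ slightly between express-generator versions):
-
-```
-"scripts": {
-  "start": "node ./bin/www"
-}
-```
-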
-Run the express application with the pm2 command below:
-
-```
-pm2 start ./bin/www
-```
-
-You can see the result below:
-
-[
- ![Running nodejs app with pm2](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/5.png)
-][15]
-
-Our express application is running under pm2 with the name '**www**' and id '**0**'. You can get more details about an application running under pm2 with the show option, '**show nodeid|name**'.
-
-```
-pm2 show www
-```
-
-[
- ![pm2 service status](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/6.png)
-][16]
-
-If you would like to see our application's log, you can use the logs option. It shows the access and error log, and you can see the HTTP status of the application.
-
-```
-pm2 logs www
-```
-
-[
- ![pm2 services logs](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/7.png)
-][17]
-
-You can see that our process is running. Now, let's enable it to start at boot time.
-
-```
-pm2 startup systemd
-```
-
-**systemd**: Ubuntu 16 is using systemd.
-
-You will get a message asking you to run a command as root. Go back to root privileges with "exit" and then run that command.
-
-```
-sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u yume --hp /home/yume
-```
-
-It will generate the systemd configuration file for application startup. When you reboot your server, the application will automatically run on startup.
-
-[
- ![pm2 add service to the boot time startup](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/8.png)
-][18]
-
-### Step 4 - Install and Configure Nginx as a Reverse proxy
-
-In this tutorial, we will use Nginx as a reverse proxy for the node application. Nginx is available in the Ubuntu repository; install it with the apt command:
-
-```
-sudo apt-get install -y nginx
-```
-
-Next, go to the '**sites-available**' directory and create a new virtual host configuration file.
-
-```
-cd /etc/nginx/sites-available/
-vim hakase-app
-```
-
-Paste the configuration below:
-
-```
-upstream hakase-app {
- # Nodejs app upstream
- server 127.0.0.1:3000;
- keepalive 64;
-}
-
-# Server on port 80
-server {
- listen 80;
- server_name hakase-node.co;
- root /home/yume/hakase-app;
-
- location / {
- # Proxy_pass configuration
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header Host $http_host;
- proxy_set_header X-NginX-Proxy true;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection "upgrade";
- proxy_max_temp_file_size 0;
- proxy_pass http://hakase-app/;
- proxy_redirect off;
- proxy_read_timeout 240s;
- }
-}
-```
-
-Save the file and exit vim.
-
-In this configuration:
-
-* The node app is running with domain name '**hakase-node.co**'.
-* All traffic from nginx will be forwarded to the node app that is running on port **3000**.
-
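-On Ubuntu, Nginx only loads virtual hosts that are linked into the '**sites-enabled**' directory, so link the new configuration there before testing (assuming the default Nginx layout):
-
-```
-ln -s /etc/nginx/sites-available/hakase-app /etc/nginx/sites-enabled/
-```
-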
-Test the Nginx configuration and make sure there are no errors.
-
-```
-nginx -t
-```
-
-Start Nginx and enable it to start at boot time:
-
-```
-systemctl start nginx
-systemctl enable nginx
-```
-
-### Step 5 - Testing
-
-Open your web browser and visit the domain name (mine is):
-
-[http://hakase-node.co][19]
-
-You will see the express application is running under the nginx web server.
-
-[
- ![Nodejs ap running with pm2 and nginx](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/9.png)
-][20]
-
-Next, reboot your server and make sure the node app starts at boot time:
-
-```
-pm2 save
-sudo reboot
-```
-
-Once you have logged in to your server again, check the node app process. Run the commands below as the '**yume**' user.
-
-```
-su - yume
-pm2 status www
-```
-
-[
- ![nodejs running at boot time with pm2](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/10.png)
-][21]
-
-The Node application is running under pm2 with Nginx as a reverse proxy.
-
-### Links
-
-* [Ubuntu][7]
-* [Node.js][8]
-* [Nginx][9]
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/
-
-作者:[Muhammad Arul ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/
-[1]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-install-nodejs-lts
-[2]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-generate-express-sample-app
-[3]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-install-pm
-[4]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-install-and-configure-nginx-as-a-reverse-proxy
-[5]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-testing
-[6]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#links
-[7]:https://www.ubuntu.com/
-[8]:https://nodejs.org/en/
-[9]:https://www.nginx.com/
-[10]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/1.png
-[11]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/2.png
-[13]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/3.png
-[14]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/4.png
-[15]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/5.png
-[16]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/6.png
-[17]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/7.png
-[18]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/8.png
-[19]:http://hakase-node.co/
-[20]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/9.png
-[21]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/10.png
diff --git a/sources/tech/20170322 5 big ways AI is rapidly invading our lives.md b/sources/tech/20170322 5 big ways AI is rapidly invading our lives.md
index ea06ccc5fa..6c217cf342 100644
--- a/sources/tech/20170322 5 big ways AI is rapidly invading our lives.md
+++ b/sources/tech/20170322 5 big ways AI is rapidly invading our lives.md
@@ -1,3 +1,5 @@
+translated by zhousiyu325
+
5 big ways AI is rapidly invading our lives
============================================================
diff --git a/sources/tech/20170330 Study Ruby Programming with Open-Source Books.md b/sources/tech/20170330 Study Ruby Programming with Open-Source Books.md
index 20ac5cf826..6a92d20461 100644
--- a/sources/tech/20170330 Study Ruby Programming with Open-Source Books.md
+++ b/sources/tech/20170330 Study Ruby Programming with Open-Source Books.md
@@ -1,3 +1,7 @@
+svtter translating...
+
+---
+
STUDY RUBY PROGRAMMING WITH OPEN-SOURCE BOOKS
============================================================
diff --git a/sources/tech/20170331 All You Need To Know About Processes in Linux Comprehensive Guide.md b/sources/tech/20170331 All You Need To Know About Processes in Linux Comprehensive Guide.md
deleted file mode 100644
index 28b6a363e5..0000000000
--- a/sources/tech/20170331 All You Need To Know About Processes in Linux Comprehensive Guide.md
+++ /dev/null
@@ -1,345 +0,0 @@
-ictlyh Translating
-All You Need To Know About Processes in Linux [Comprehensive Guide]
-============================================================
-
-In this article, we will walk through a basic understanding of processes and briefly look at [how to manage processes in Linux][9] using certain commands.
-
-A process refers to a program in execution; it’s a running instance of a program. It is made up of the program instructions, data read from files or other programs, and input from a system user.
-
-#### Types of Processes
-
-There are fundamentally two types of processes in Linux:
-
-* Foreground processes (also referred to as interactive processes) – these are initialized and controlled through a terminal session. In other words, there has to be a user connected to the system to start such processes; they are not started automatically as part of the system functions/services.
-* Background processes (also referred to as non-interactive/automatic processes) – these are processes not connected to a terminal; they don’t expect any user input.
-
-#### What are Daemons?
-
-These are special types of background processes that start at system startup and keep running forever as services; they don’t die. They are started spontaneously as system tasks (run as services). However, they can be controlled by a user via the init process.
-
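-For example, on a systemd-based distribution you can inspect such a daemon with systemctl (sshd here is only an example; the service name may differ on your system):
-
-```
-# systemctl status sshd
-```
-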
-[
- ![Linux Process State](http://www.tecmint.com/wp-content/uploads/2017/03/ProcessState.png)
-][10]
-
-Linux Process State
-
-### Creation of a Process in Linux
-
-A new process is normally created when an existing process makes an exact copy of itself in memory. The child process will have the same environment as its parent, but only the process ID number is different.
-
-There are two conventional ways used for creating a new process in Linux:
-
-* Using the system() function – this method is relatively simple; however, it’s inefficient and has certain significant security risks.
-* Using the fork() and exec() functions – this technique is a little advanced but offers greater flexibility, speed and security.
-
-### How Does Linux Identify Processes?
-
-Because Linux is a multi-user system, meaning different users can be running various programs on the system, each running instance of a program must be identified uniquely by the kernel.
-
-A program is identified by its process ID (PID) as well as its parent process ID (PPID); therefore processes can further be categorized into the following (see the quick ps illustration after this list):
-
-* Parent processes – these are processes that create other processes during run-time.
-* Child processes – these processes are created by other processes during run-time.
-
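-A quick way to see both IDs for every process is the ps command (a minimal illustration):
-
-```
-# ps -eo pid,ppid,comm | head
-```
-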
-#### The Init Process
-
-The init process is the mother (parent) of all processes on the system; it’s the first program that is executed when the [Linux system boots up][11], and it manages all other processes on the system. It is started by the kernel itself, so in principle it does not have a parent process.
-
-The init process always has a process ID of 1. It functions as an adoptive parent for all orphaned processes.
-
-You can use the pidof command to find the ID of a process:
-
-```
-# pidof systemd
-# pidof top
-# pidof httpd
-```
-[
- ![Find Linux Process ID](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Process-ID.png)
-][12]
-
-Find Linux Process ID
-
-To find the process ID and parent process ID of the current shell, run:
-
-```
-$ echo $$
-$ echo $PPID
-```
-[
- ![Find Linux Parent Process ID](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Parent-Process-ID.png)
-][13]
-
-Find Linux Parent Process ID
-
-#### Starting a Process in Linux
-
-Once you run a command or program (for example cloudcmd – CloudCommander), it will start a process in the system. You can start a foreground (interactive) process as follows; it will be connected to the terminal and a user can send input to it:
-
-```
-# cloudcmd
-```
-[
- ![Start Linux Interactive Process](http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Interactive-Process.png)
-][14]
-
-Start Linux Interactive Process
-
-#### Linux Background Jobs
-
-To start a process in the background (non-interactive), use the `&` symbol; in this case, the process doesn’t read input from a user until it’s moved to the foreground.
-
-```
-# cloudcmd &
-# jobs
-```
-[
- ![Start Linux Process in Background](http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Process-in-Background.png)
-][15]
-
-Start Linux Process in Background
-
-You can also send a process to the background by suspending it using `[Ctrl + Z]`; this will send the SIGTSTP signal to the process, thus stopping its operations; it becomes idle:
-
-```
-# tar -cf backup.tar /backups/* #press Ctrl+Z
-# jobs
-```
-
-To continue running the above-suspended command in the background, use the bg command:
-
-```
-# bg
-```
-
-To send a background process to the foreground, use the fg command together with the job ID like so:
-
-```
-# jobs
-# fg %1
-```
-[
- ![Linux Background Process Jobs](http://www.tecmint.com/wp-content/uploads/2017/03/Linux-Background-Process-Jobs.png)
-][16]
-
-Linux Background Process Jobs
-
-You may also like: [How to Start Linux Command in Background and Detach Process in Terminal][17]
-
-#### States of a Process in Linux
-
-During execution, a process changes from one state to another depending on its environment/circumstances. In Linux, a process has the following possible states:
-
-* Running – here it’s either running (it is the current process in the system) or it’s ready to run (it’s waiting to be assigned to one of the CPUs).
-* Waiting – in this state, a process is waiting for an event to occur or for a system resource. Additionally, the kernel also differentiates between two types of waiting processes; interruptible waiting processes – can be interrupted by signals and uninterruptible waiting processes – are waiting directly on hardware conditions and cannot be interrupted by any event/signal.
-* Stopped – in this state, a process has been stopped, usually by receiving a signal. For instance, a process that is being debugged.
-* Zombie – here, a process is dead; it has been halted but it still has an entry in the process table.
-
-#### How to View Active Processes in Linux
-
-There are several Linux tools for viewing/listing running processes on the system; the two traditional and well-known ones are the [ps][18] and [top][19] commands:
-
-#### 1\. ps Command
-
-It displays information about a selection of the active processes on the system as shown below:
-
-```
-# ps
-# ps -e | head
-```
-[
- ![List Linux Active Processes](http://www.tecmint.com/wp-content/uploads/2017/03/ps-command.png)
-][20]
-
-List Linux Active Processes
-
-#### 2\. top – System Monitoring Tool
-
-[top is a powerful tool][21] that offers you a [dynamic real-time view of a running system][22] as shown in the screenshot below:
-
-```
-# top
-```
-[
- ![List Linux Running Processes](http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png)
-][23]
-
-List Linux Running Processes
-
-Read this for more top usage examples: [12 TOP Command Examples in Linux][24]
-
-#### 3\. glances – System Monitoring Tool
-
-glances is a relatively new system monitoring tool with advanced features:
-
-```
-# glances
-```
-[
- ![Glances - Linux Process Monitoring](http://www.tecmint.com/wp-content/uploads/2017/03/glances.png)
-][25]
-
-Glances – Linux Process Monitoring
-
-For a comprehensive usage guide, read through: [Glances – An Advanced Real Time System Monitoring Tool for Linux][26]
-
-There are several other useful Linux system monitoring tools you can use to list active processes, open the link below to read more about them:
-
-1. [20 Command Line Tools to Monitor Linux Performance][1]
-2. [13 More Useful Linux Monitoring Tools][2]
-
-### How to Control Processes in Linux
-
-Linux also has some commands for controlling processes, such as kill, pkill, pgrep and killall; below are a few basic examples of how to use them:
-
-```
-$ pgrep -u tecmint top
-$ kill 2308
-$ pgrep -u tecmint top
-$ pgrep -u tecmint glances
-$ pkill glances
-$ pgrep -u tecmint glances
-```
-[
- ![Control Linux Processes](http://www.tecmint.com/wp-content/uploads/2017/03/Control-Linux-Processes.png)
-][27]
-
-Control Linux Processes
-
-To learn how to use these commands in-depth, to kill/terminate active processes in Linux, open the links below:
-
-1. [A Guide to Kill, Pkill and Killall Commands to Terminate Linux Processess][3]
-2. [How to Find and Kill Running Processes in Linux][4]
-
-Note that you can use them to kill [unresponsive applications in Linux][28] when your system freezes.
-
-#### Sending Signals To Processes
-
-The fundamental way of controlling processes in Linux is by sending signals to them. There are multiple signals that you can send to a process; to view all the signals, run:
-
-```
-$ kill -l
-```
-[
- ![List All Linux Signals](http://www.tecmint.com/wp-content/uploads/2017/03/list-all-signals.png)
-][29]
-
-List All Linux Signals
-
-To send a signal to a process, use the kill, pkill or killall commands we mentioned earlier. But programs can only respond to signals if they are programmed to recognize those signals.
-
-And most signals are for internal use by the system, or for programmers when they write code. The following are signals which are useful to a system user:
-
-* SIGHUP 1 – sent to a process when its controlling terminal is closed.
-* SIGINT 2 – sent to a process by its controlling terminal when a user interrupts the process by pressing `[Ctrl+C]`.
-* SIGQUIT 3 – sent to a process if the user sends a quit signal `[Ctrl+D]`.
-* SIGKILL 9 – this signal immediately terminates (kills) a process and the process will not perform any clean-up operations.
-* SIGTERM 15 – this a program termination signal (kill will send this by default).
-* SIGTSTP 20 – sent to a process by its controlling terminal to request it to stop (terminal stop); initiated by the user pressing `[Ctrl+Z]`.
-
-The following are kill command examples to kill the Firefox application using its PID once it freezes:
-
-```
-$ pidof firefox
-$ kill -9 2687
-OR
-$ kill -KILL 2687
-OR
-$ kill -SIGKILL 2687
-```
-
-To kill an application using its name, use pkill or killall like so:
-
-```
-$ pkill firefox
-$ killall firefox
-```
-
-#### Changing Linux Process Priority
-
-On the Linux system, all active processes have a priority and certain nice value. Processes with higher priority will normally get more CPU time than lower priority processes.
-
-However, a system user with root privileges can influence this with the nice and renice commands.
-
-In the output of the top command, the NI column shows the process nice value:
-
-```
-$ top
-```
-[
- ![List Linux Running Processes](http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png)
-][30]
-
-List Linux Running Processes
-
-Use the nice command to set a nice value for a process when starting it. Keep in mind that normal users can assign a nice value from 0 to 19 to processes they own.
-Only the root user can use negative nice values.
-
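-For example, to start a job at a lower priority so that it does not compete with interactive processes, you can launch it through nice. This is a minimal sketch; the archive name and path are purely illustrative:
-
-```
-$ nice -n 10 tar -czf backup.tar.gz ./Documents
-```
-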
-To renice the priority of a process, use the renice command as follows:
-
-```
-$ renice +8 2687
-$ renice +8 2103
-```
-
-Check out some of our useful articles on how to manage and control Linux processes.
-
-1. [Linux Process Management: Boot, Shutdown, and Everything in Between][5]
-2. [Find Top 15 Processes by Memory Usage with ‘top’ in Batch Mode][6]
-3. [Find Top Running Processes by Highest Memory and CPU Usage in Linux][7]
-4. [How to Find a Process Name Using PID Number in Linux][8]
-
-That’s all for now! Do you have any questions or additional ideas, share them with us via the feedback form below.
-
---------------------------------------------------------------------------------
-
-
-作者简介:
-
-Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/linux-process-management/
-
-作者:[Aaron Kili][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/aaronkili/
-
-[1]:http://www.tecmint.com/command-line-tools-to-monitor-linux-performance/
-[2]:http://www.tecmint.com/linux-performance-monitoring-tools/
-[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/
-[4]:http://www.tecmint.com/find-and-kill-running-processes-pid-in-linux/
-[5]:http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/
-[6]:http://www.tecmint.com/find-processes-by-memory-usage-top-batch-mode/
-[7]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
-[8]:http://www.tecmint.com/find-process-name-pid-number-linux/
-[9]:http://www.tecmint.com/dstat-monitor-linux-server-performance-process-memory-network/
-[10]:http://www.tecmint.com/wp-content/uploads/2017/03/ProcessState.png
-[11]:http://www.tecmint.com/linux-boot-process/
-[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Process-ID.png
-[13]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Parent-Process-ID.png
-[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Interactive-Process.png
-[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Process-in-Background.png
-[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Linux-Background-Process-Jobs.png
-[17]:http://www.tecmint.com/run-linux-command-process-in-background-detach-process/
-[18]:http://www.tecmint.com/linux-boot-process-and-manage-services/
-[19]:http://www.tecmint.com/12-top-command-examples-in-linux/
-[20]:http://www.tecmint.com/wp-content/uploads/2017/03/ps-command.png
-[21]:http://www.tecmint.com/12-top-command-examples-in-linux/
-[22]:http://www.tecmint.com/bcc-best-linux-performance-monitoring-tools/
-[23]:http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png
-[24]:http://www.tecmint.com/12-top-command-examples-in-linux/
-[25]:http://www.tecmint.com/wp-content/uploads/2017/03/glances.png
-[26]:http://www.tecmint.com/glances-an-advanced-real-time-system-monitoring-tool-for-linux/
-[27]:http://www.tecmint.com/wp-content/uploads/2017/03/Control-Linux-Processes.png
-[28]:http://www.tecmint.com/kill-processes-unresponsive-programs-in-ubuntu/
-[29]:http://www.tecmint.com/wp-content/uploads/2017/03/list-all-signals.png
-[30]:http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png
-[31]:http://www.tecmint.com/author/aaronkili/
-[32]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
-[33]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/sources/tech/20170410 Cpustat – Monitors CPU Utilization by Running Processes in Linux.md b/sources/tech/20170410 Cpustat – Monitors CPU Utilization by Running Processes in Linux.md
deleted file mode 100644
index 05858511a3..0000000000
--- a/sources/tech/20170410 Cpustat – Monitors CPU Utilization by Running Processes in Linux.md
+++ /dev/null
@@ -1,168 +0,0 @@
-ictlyh Translating
-Cpustat – Monitors CPU Utilization by Running Processes in Linux
-============================================================
-
-Cpustat is a powerful system performance measurement program for Linux, written in the [Go programming language][3]. It attempts to reveal CPU utilization and saturation in an effective way, using the Utilization, Saturation and Errors (USE) method (a methodology for analyzing the performance of any system).
-
-It extracts higher frequency samples of every process being executed on the system and then summarizes these samples at a lower frequency. For instance, it can measure every process every 200ms and summarize these samples every 5 seconds, including min/average/max values for certain metrics.
-
-**Suggested Read:** [20 Command Line Tools to Monitor Linux Performance][4]
-
-Cpustat outputs data in two possible ways: a pure text list of the summary interval and a colorful scrolling dashboard of each sample.
-
-### How to Install Cpustat in Linux
-
-You must have Go (GoLang) installed on your Linux system in order to use cpustat. If you do not have it installed yet, click on the link below to follow the GoLang installation steps:
-
-1. [Install GoLang (Go Programming Language) in Linux][1]
-
-Once you have installed Go, type the go get command below to install cpustat. This command will install the cpustat binary in the directory pointed to by your GOBIN variable:
-
-```
-# go get github.com/uber-common/cpustat
-```
-
-### How to Use Cpustat in Linux
-
-When the installation process completes, run cpustat with root privileges, using the sudo command if you are operating the system as a non-root user; otherwise you'll get the error shown below:
-
-```
-$ $GOBIN/cpustat
-This program uses the netlink taskstats interface, so it must be run as root.
-```
-
-Note: To run cpustat, as well as all other Go programs you have installed on your system, like any other command, include the GOBIN variable in your PATH environment variable. Open the link below to learn how to set the PATH variable in Linux; a quick sketch also follows the link.
-
-1. [Learn How to Set Your $PATH Variables Permanently in Linux][2]
-
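-For instance, assuming the bash shell, you could append something like the following to your shell startup file (the GOBIN location shown here is an illustrative assumption):
-
-```
-$ echo 'export GOBIN=$HOME/go/bin' >> ~/.bashrc
-$ echo 'export PATH=$PATH:$GOBIN' >> ~/.bashrc
-$ source ~/.bashrc
-```
-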
-This is how cpustat works; the `/proc` directory is queried to get the current [list of process IDs][5] for every interval, and:
-
-* for each PID, read /proc/pid/stat, then compute difference from previous sample.
-* in case it’s a new PID, read /proc/pid/cmdline.
-* for each PID, send a netlink message to fetch the taskstats, compute difference from previous sample.
-* fetch /proc/stat to get the overall system stats.
-
-Again, each sleep interval is adjusted to account for the amount of time consumed fetching all of these stats. Furthermore, each sample also records the time it took to be taken, so that each measurement can be scaled by the actual elapsed time between samples. This attempts to account for delays in cpustat itself.
-
-When run without any arguments, cpustat will display the following by default: sampling interval: 200ms, summary interval: 2s (10 samples), [showing top 10 procs][6], user filter: all, pid filter: all as shown in the screenshot below:
-
-```
-$ sudo $GOBIN/cpustat
-```
-[
- ![Cpustat - Monitor Linux CPU Usage](http://www.tecmint.com/wp-content/uploads/2017/03/Cpustat-Monitor-Linux-CPU-Usage.png)
-][7]
-
-Cpustat – Monitor Linux CPU Usage
-
-From the output above, the following are the meanings of the system-wide summary metrics displayed before the fields:
-
-* usr – min/avg/max user mode run time as a percentage of a CPU.
-* sys – min/avg/max system mode run time as a percentage of a CPU.
-* nice – min/avg/max user mode low priority run time as a percentage of a CPU.
-* idle – min/avg/max idle time as a percentage of a CPU.
-* iowait – min/avg/max delay time waiting for disk IO.
-* prun – min/avg/max count of processes in a runnable state (same as load average).
-* pblock – min/avg/max count of processes blocked on disk IO.
-* pstart – number of processes/threads started in this summary interval.
-
-Still from the output above, for a given process, the different columns mean:
-
-* name – common process name from /proc/pid/stat or /proc/pid/cmdline.
-* pid – process id, also referred to as “tgid”.
-* min – lowest sample of user+system time for the pid, measured from /proc/pid/stat. Scale is a percentage of a CPU.
-* max – highest sample of user+system time for this pid, also measured from /proc/pid/stat.
-* usr – average user time for the pid over the summary period, measured from /proc/pid/stat.
-* sys – average system time for the pid over the summary period, measured from /proc/pid/stat.
-* nice – indicates current “nice” value for the process, measured from /proc/pid/stat. Higher means “nicer”.
-* runq – time the process and all of its threads spent runnable but waiting to run, measured from taskstats via netlink. Scale is a percentage of a CPU.
-* iow – time the process and all of its threads spent blocked by disk IO, measured from taskstats via netlink. Scale is a percentage of a CPU, averaged over the summary interval.
-* swap – time the process and all of its threads spent waiting to be swapped in, measured from taskstats via netlink. Scale is a percentage of a CPU, averaged over the summary interval.
-* vcx and icx – total number of voluntary and involuntary context switches, respectively, by the process and all of its threads over the summary interval, measured from taskstats via netlink.
-* rss – current RSS value fetched from /proc/pid/stat. It is the amount of memory this process is using.
-* ctime – sum of user+sys CPU time consumed by waited for children that exited during this summary interval, measured from /proc/pid/stat.
-
-Note that long running child processes can often confuse this measurement, because the time is reported only when the child process exits. However, this is useful for measuring the impact of frequent cron jobs and health checks where the CPU time is often consumed by many child processes.
-
-* thrd – number of threads at the end of the summary interval, measured from /proc/pid/stat.
-* sam – number of samples for this process included in the summary interval. Processes that have recently started or exited may have been visible for fewer samples than the summary interval.
-
-The following command displays the top 10 root user processes running on the system:
-
-```
-$ sudo $GOBIN/cpustat -u root
-```
-[
- ![Find Root User Running Processes](http://www.tecmint.com/wp-content/uploads/2017/03/show-root-user-processes.png)
-][8]
-
-Find Root User Running Processes
-
-To display output in a fancy terminal mode, use the `-t` flag as follows:
-
-```
-$ sudo $GOBIN/cpustat -u root -t
-```
-[
- ![Running Process Usage of Root User](http://www.tecmint.com/wp-content/uploads/2017/03/Root-User-Runnng-Processes.png)
-][9]
-
-Running Process Usage of Root User
-
-To view the [top x number of processes][10] (the default is 10), you can use the `-n` flag; the following command shows the [top 20 Linux processes running][11] on the system:
-
-```
-$ sudo $GOBIN/cpustat -n 20
-```
-
-You can also write a CPU profile to a file using the `-cpuprofile` option as follows, and then use the [cat command][12] to view the file:
-
-```
-$ sudo $GOBIN/cpustat -cpuprofile cpuprof.txt
-$ cat cpuprof.txt
-```
-
-To display help info, use the `-h` flag as follows:
-
-```
-$ sudo $GOBIN/cpustat -h
-```
-
-Find additional info from the cpustat Github Repository: [https://github.com/uber-common/cpustat][13]
-
-That’s all! In this article, we showed you how to install and use cpustat, a useful system performance measurement tool for Linux. Share your thoughts with us via the comment section below.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/cpustat-monitors-cpu-utilization-by-processes-in-linux/
-
-作者:[Aaron Kili][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/aaronkili/
-
-[1]:http://www.tecmint.com/install-go-in-linux/
-[2]:http://www.tecmint.com/set-path-variable-linux-permanently/
-[3]:http://www.tecmint.com/install-go-in-linux/
-[4]:http://www.tecmint.com/command-line-tools-to-monitor-linux-performance/
-[5]:http://www.tecmint.com/find-process-name-pid-number-linux/
-[6]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
-[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Cpustat-Monitor-Linux-CPU-Usage.png
-[8]:http://www.tecmint.com/wp-content/uploads/2017/03/show-root-user-processes.png
-[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Root-User-Runnng-Processes.png
-[10]:http://www.tecmint.com/find-processes-by-memory-usage-top-batch-mode/
-[11]:http://www.tecmint.com/install-htop-linux-process-monitoring-for-rhel-centos-fedora/
-[12]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
-[13]:https://github.com/uber-common/cpustat
-[14]:http://www.tecmint.com/author/aaronkili/
-[15]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
-[16]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/sources/tech/20170410 Writing a Time Series Database from Scratch.md b/sources/tech/20170410 Writing a Time Series Database from Scratch.md
new file mode 100644
index 0000000000..a7f8289b63
--- /dev/null
+++ b/sources/tech/20170410 Writing a Time Series Database from Scratch.md
@@ -0,0 +1,438 @@
+Writing a Time Series Database from Scratch
+============================================================
+
+
+I work on monitoring. In particular on [Prometheus][2], a monitoring system that includes a custom time series database, and its integration with [Kubernetes][3].
+
+In many ways Kubernetes represents all the things Prometheus was designed for. It makes continuous deployments, auto scaling, and other features of highly dynamic environments easily accessible. The query language and operational model, among many other conceptual decisions, make Prometheus particularly well-suited for such environments. Yet, if monitored workloads become significantly more dynamic, this also puts new strains on the monitoring system itself. With this in mind, rather than doubling back on problems Prometheus already solves well, we specifically aim to increase its performance in environments with highly dynamic, or transient services.
+
+Prometheus's storage layer has historically shown outstanding performance, where a single server is able to ingest up to one million samples per second as several million time series, all while occupying a surprisingly small amount of disk space. While the current storage has served us well, I propose a newly designed storage subsystem that corrects for shortcomings of the existing solution and is equipped to handle the next order of scale.
+
+> Note: I've no background in databases. What I say might be wrong and misleading. You can channel your criticism towards me (fabxc) in #prometheus on Freenode.
+
+### Problems, Problems, Problem Space
+
+First, a quick outline of what we are trying to accomplish and what key problems it raises. For each, we take a look at Prometheus' current approach, what it does well, and which problems we aim to address with the new design.
+
+### Time series data
+
+We have a system that collects data points over time.
+
+```
+identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), ....
+```
+
+Each data point is a tuple of a timestamp and a value. For the purpose of monitoring, the timestamp is an integer and the value any number. A 64 bit float turns out to be a good representation for counter as well as gauge values, so we go with that. A sequence of data points with strictly monotonically increasing timestamps is a series, which is addressed by an identifier. Our identifier is a metric name with a dictionary of _label dimensions_ . Label dimensions partition the measurement space of a single metric. Each metric name plus a unique set of labels is its own _time series_ that has a value stream associated with it.
+
+This is a typical set of series identifiers that are part of metric counting requests:
+
+```
+requests_total{path="/status", method="GET", instance="10.0.0.1:80"}
+requests_total{path="/status", method="POST", instance="10.0.0.3:80"}
+requests_total{path="/", method="GET", instance="10.0.0.2:80"}
+```
+
+Let's simplify this representation right away: A metric name can be treated as just another label dimension — `__name__` in our case. At the query level, it might be treated specially but that doesn't concern our way of storing it, as we will see later.
+
+```
+{__name__="requests_total", path="/status", method="GET", instance="10.0.0.1:80"}
+{__name__="requests_total", path="/status", method="POST", instance="10.0.0.3:80"}
+{__name__="requests_total", path="/", method="GET", instance="10.0.0.2:80"}
+```
+
+When querying time series data, we want to do so by selecting series by their labels. In the simplest case `{__name__="requests_total"}` selects all series belonging to the `requests_total` metric. For all selected series, we retrieve data points within a specified time window.
+In more complex queries, we may wish to select series satisfying several label selectors at once and also represent more complex conditions than equality. For example, negative (`method!="GET"`) or regular expression matching (`method=~"PUT|POST"`).
+
+This largely defines the stored data and how it is recalled.
+
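+To make the model concrete, here is a minimal Go sketch of the concepts described above: labels, samples, and equality selectors. All type and function names are illustrative, not the actual Prometheus code:
+
+```
+package main
+
+import "fmt"
+
+// Labels identify a series; the metric name is just the "__name__" label.
+type Labels map[string]string
+
+// Sample is a single (timestamp, value) data point.
+type Sample struct {
+	T int64   // timestamp
+	V float64 // a 64 bit float covers counters and gauges alike
+}
+
+// Series is an identifier plus its value stream, ordered by timestamp.
+type Series struct {
+	Labels  Labels
+	Samples []Sample
+}
+
+// Matches reports whether the series satisfies all equality selectors.
+func (s Series) Matches(selector Labels) bool {
+	for name, value := range selector {
+		if s.Labels[name] != value {
+			return false
+		}
+	}
+	return true
+}
+
+func main() {
+	s := Series{Labels: Labels{"__name__": "requests_total", "method": "GET", "instance": "10.0.0.1:80"}}
+	fmt.Println(s.Matches(Labels{"__name__": "requests_total"})) // true
+	fmt.Println(s.Matches(Labels{"method": "POST"}))             // false
+}
+```
+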
+### Vertical and Horizontal
+
+In a simplified view, all data points can be laid out on a two-dimensional plane. The _horizontal_ dimension represents the time and the series identifier space spreads across the _vertical_ dimension.
+
+```
+series
+ ^
+ │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"}
+ │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"}
+ │ . . . . . . .
+ │ . . . . . . . . . . . . . . . . . . . ...
+ │ . . . . . . . . . . . . . . . . . . . . .
+ │ . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"}
+ │ . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"}
+ │ . . . . . . . . . . . . . .
+ │ . . . . . . . . . . . . . . . . . . . ...
+ │ . . . . . . . . . . . . . . . . . . . .
+ v
+ <-------------------- time --------------------->
+```
+
+Prometheus retrieves data points by periodically scraping the current values for a set of time series. The entity from which we retrieve such a batch is called a _target_ . Thereby, the write pattern is completely vertical and highly concurrent as samples from each target are ingested independently.
+To provide a sense of scale: A single Prometheus instance collects data points from tens of thousands of _targets_ , which expose hundreds to thousands of different time series each.
+
+At the scale of collecting millions of data points per second, batching writes is a non-negotiable performance requirement. Writing single data points scattered across our disk would be painfully slow. Thus, we want to write larger chunks of data in sequence.
+This is an unsurprising fact for spinning disks, as their head would have to physically move to different sections all the time. While SSDs are known for fast random writes, they actually can't modify individual bytes but only write in _pages_ of 4KiB or more. This means writing a 16 byte sample is equivalent to writing a full 4KiB page. This behavior is part of what is known as [ _write amplification_ ][4], which as a bonus causes your SSD to wear out – so it wouldn't just be slow, but literally destroy your hardware within a few days or weeks.
+For more in-depth information on the problem, the ["Coding for SSDs" series][5] of blog posts is an excellent resource. Let's just consider the main takeaway: sequential and batched writes are the ideal write pattern for spinning disks and SSDs alike. A simple rule to stick to.
+
+The querying pattern is significantly more differentiated than the write pattern. We can query a single datapoint for a single series, a single datapoint for 10000 series, weeks of data points for a single series, weeks of data points for 10000 series, etc. So on our two-dimensional plane, queries are neither fully vertical nor horizontal, but a rectangular combination of the two.
+[Recording rules][6] mitigate the problem for known queries but are not a general solution for ad-hoc queries, which still have to perform reasonably well.
+
+We know that we want to write in batches, but the only batches we get are vertical sets of data points across series. When querying data points for a series over a time window, not only would it be hard to figure out where the individual points can be found, we'd also have to read from a lot of random places on disk. With possibly millions of touched samples per query, this is slow even on the fastest SSDs. Reads will also retrieve more data from our disk than the requested 16 byte sample. SSDs will load a full page, HDDs will at least read an entire sector. Either way, we are wasting precious read throughput.
+So ideally, samples for the same series would be stored sequentially so we can just scan through them with as few reads as possible. On top, we only need to know where this sequence starts to access all data points.
+
+There's obviously a strong tension between the ideal pattern for writing collected data to disk and the layout that would be significantly more efficient for serving queries. It is _the_ fundamental problem our TSDB has to solve.
+
+#### Current solution
+
+Time to take a look at how Prometheus's current storage, let's call it "V2", addresses this problem.
+We create one file per time series that contains all of its samples in sequential order. As appending single samples to all those files every few seconds is expensive, we batch up 1KiB chunks of samples for a series in memory and append those chunks to the individual files, once they are full. This approach solves a large part of the problem. Writes are now batched, samples are stored sequentially. It also enables incredibly efficient compression formats, based on the property that a given sample changes only very little with respect to the previous sample in the same series. Facebook's paper on their Gorilla TSDB describes a similar chunk-based approach and [introduces a compression format][7] that reduces 16 byte samples to an average of 1.37 bytes. The V2 storage uses various compression formats including a variation of Gorilla’s.
+
+```
+ ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series A
+ └──────────┴─────────┴─────────┴─────────┴─────────┘
+ ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series B
+ └──────────┴─────────┴─────────┴─────────┴─────────┘
+ . . .
+ ┌──────────┬─────────┬─────────┬─────────┬─────────┬─────────┐ series XYZ
+ └──────────┴─────────┴─────────┴─────────┴─────────┴─────────┘
+ chunk 1 chunk 2 chunk 3 ...
+```
+
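+As a rough illustration of this write path, here is a toy Go sketch of V2-style chunked appends: samples accumulate in a fixed-size in-memory buffer and are flushed to the series' own file once the chunk is full. The encoding and the real chunk format are elided; all names are illustrative:
+
+```
+package main
+
+import (
+	"fmt"
+	"os"
+)
+
+const chunkSize = 1024 // V2 batches roughly 1KiB of samples per series
+
+// chunkedSeries buffers encoded samples in memory and appends the full
+// chunk to the series' own file in one sequential write.
+type chunkedSeries struct {
+	file *os.File
+	buf  []byte
+}
+
+func (s *chunkedSeries) append(encodedSample []byte) error {
+	s.buf = append(s.buf, encodedSample...)
+	if len(s.buf) < chunkSize {
+		return nil // keep accumulating in memory
+	}
+	_, err := s.file.Write(s.buf) // one batched write instead of many tiny ones
+	s.buf = s.buf[:0]
+	return err
+}
+
+func main() {
+	f, err := os.OpenFile("series_A.chunks", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+	if err != nil {
+		panic(err)
+	}
+	defer f.Close()
+
+	s := &chunkedSeries{file: f}
+	for i := 0; i < 200; i++ {
+		if err := s.append(make([]byte, 16)); err != nil { // a raw 16 byte sample
+			panic(err)
+		}
+	}
+	fmt.Println("bytes still buffered in memory:", len(s.buf))
+}
+```
+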
+While the chunk-based approach is great, keeping a separate file for each series troubles the V2 storage for various reasons:
+
+* We actually need a lot more files than the number of time series we are currently collecting data for. More on that in the section on "Series Churn". With several million files, sooner or later we may run out of [inodes][1] on our filesystem. This is a condition we can only recover from by reformatting our disks, which is as invasive and disruptive as it could be. We generally want to avoid formatting disks specifically to fit a single application.
+* Even when chunked, several thousands of chunks per second are completed and ready to be persisted. This still requires thousands of individual disk writes every second. While it is alleviated by also batching up several completed chunks for a series, this in turn increases the total memory footprint of data which is waiting to be persisted.
+* It's infeasible to keep all files open for reads and writes. In particular because ~99% of data is never queried again after 24 hours. If it is queried though, we have to open up to thousands of files, find and read relevant data points into memory, and close them again. As this would result in high query latencies, data chunks are cached rather aggressively, leading to problems outlined further in the section on "Resource Consumption".
+* Eventually, old data has to be deleted and data needs to be removed from the front of millions of files. This means that deletions are actually write intensive operations. Additionally, cycling through millions of files and analyzing them makes this a process that often takes hours. By the time it completes, it might have to start over again. Oh yea, and deleting the old files will cause further write amplification for your SSD!
+* Chunks that are currently accumulating are only held in memory. If the application crashes, data will be lost. To avoid this, the memory state is periodically checkpointed to disk, which may take significantly longer than the window of data loss we are willing to accept. Restoring the checkpoint may also take several minutes, causing painfully long restart cycles.
+
+The key takeaway from the existing design is the concept of chunks, which we most certainly want to keep. The most recent chunks always being held in memory is also generally good. After all, the most recent data is queried the most by a large margin.
+Having one file per time series is a concept we would like to find an alternative to.
+
+### Series Churn
+
+In the Prometheus context, we use the term _series churn_ to describe that a set of time series becomes inactive, i.e. receives no more data points, and a new set of active series appears instead.
+For example, all series exposed by a given microservice instance have a respective "instance" label attached that identifies its origin. If we perform a rolling update of our microservice and swap out every instance with a newer version, series churn occurs. In more dynamic environments those events may happen on an hourly basis. Cluster orchestration systems like Kubernetes allow continuous auto-scaling and frequent rolling updates of applications, potentially creating tens of thousands of new application instances, and with them completely new sets of time series, every day.
+
+```
+series
+ ^
+ │ . . . . . .
+ │ . . . . . .
+ │ . . . . . .
+ │ . . . . . . .
+ │ . . . . . . .
+ │ . . . . . . .
+ │ . . . . . .
+ │ . . . . . .
+ │ . . . . .
+ │ . . . . .
+ │ . . . . .
+ v
+ <-------------------- time --------------------->
+```
+
+So even if the entire infrastructure roughly remains constant in size, over time there's a linear growth of time series in our database. While a Prometheus server will happily collect data for 10 million time series, query performance is significantly impacted if data has to be found among a billion series.
+
+#### Current solution
+
+The current V2 storage of Prometheus has an index based on LevelDB for all series that are currently stored. It allows querying series containing a given label pair, but lacks a scalable way to combine results from different label selections.
+For example, selecting all series with label `__name__="requests_total"` works efficiently, but selecting all series with `instance="A" AND __name__="requests_total"` has scalability problems. We will later revisit what causes this and which tweaks are necessary to improve lookup latencies.
+
+This problem is in fact what spawned the initial hunt for a better storage system. Prometheus needed an improved indexing approach for quickly searching hundreds of millions of time series.
+
+### Resource consumption
+
+Resource consumption is one of the consistent topics when trying to scale Prometheus (or anything, really). But it's not actually the absolute resource hunger that is troubling users. In fact, Prometheus manages an incredible throughput given its requirements. The problem is rather its relative unpredictability and instability in the face of changes. By its architecture the V2 storage slowly builds up chunks of sample data, which causes the memory consumption to ramp up over time. As chunks get completed, they are written to disk and can be evicted from memory. Eventually, Prometheus's memory usage reaches a steady state. That is until the monitored environment changes — _series churn_ increases the usage of memory, CPU, and disk IO every time we scale an application or do a rolling update.
+If the change is ongoing, it will yet again reach a steady state eventually but it will be significantly higher than in a more static environment. Transition periods are often multiple hours long and it is hard to determine what the maximum resource usage will be.
+
+The approach of having a single file per time series also makes it way too easy for a single query to knock out the Prometheus process. When querying data that is not cached in memory, the files for queried series are opened and the chunks containing relevant data points are read into memory. If the amount of data exceeds the memory available, Prometheus quits rather ungracefully by getting OOM-killed.
+After the query is completed the loaded data can be released again but it is generally cached much longer to serve subsequent queries on the same data faster. The latter is a good thing obviously.
+
+Lastly, we looked at write amplification in the context of SSDs and how Prometheus addresses it by batching up writes to mitigate it. Nonetheless, in several places it still causes write amplification by having too small batches and not aligning data precisely on page boundaries. For larger Prometheus servers, a reduced hardware lifetime was observed in the real world. Chances are that this is still rather normal for database applications with high write throughput, but we should keep an eye on whether we can mitigate it.
+
+### Starting Over
+
+By now we have a good idea of our problem domain, how the V2 storage solves it, and where its design has issues. We also saw some great concepts that we want to adapt more or less seamlessly. A fair amount of V2's problems can be addressed with improvements and partial redesigns, but to keep things fun (and after carefully evaluating my options, of course), I decided to take a stab at writing an entire time series database — from scratch, i.e. writing bytes to the file system.
+
+The critical concerns of performance and resource usage are a direct consequence of the chosen storage format. We have to find the right set of algorithms and disk layout for our data to implement a well-performing storage layer.
+
+This is where I take the shortcut and drive straight to the solution — skip the headache, failed ideas, endless sketching, tears, and despair.
+
+### V3 — Macro Design
+
+What's the macro layout of our storage? In short, everything that is revealed when running `tree` on our data directory. Just looking at that gives us a surprisingly good picture of what is going on.
+
+```
+$ tree ./data
+./data
+├── b-000001
+│ ├── chunks
+│ │ ├── 000001
+│ │ ├── 000002
+│ │ └── 000003
+│ ├── index
+│ └── meta.json
+├── b-000004
+│ ├── chunks
+│ │ └── 000001
+│ ├── index
+│ └── meta.json
+├── b-000005
+│ ├── chunks
+│ │ └── 000001
+│ ├── index
+│ └── meta.json
+└── b-000006
+ ├── meta.json
+ └── wal
+ ├── 000001
+ ├── 000002
+ └── 000003
+```
+
+At the top level, we have a sequence of numbered blocks, prefixed with `b-`. Each block obviously holds a file containing an index and a "chunks" directory holding more numbered files. The "chunks" directory contains nothing but raw chunks of data points for various series. Just as for V2, this makes reading series data over a time window very cheap and allows us to apply the same efficient compression algorithms. The concept has proven to work well and we stick with it. Obviously, there is no longer a single file per series but instead a handful of files holds chunks for many of them.
+The existence of an “index” file should not be surprising. Let's just assume it contains a lot of black magic allowing us to find labels, their possible values, entire time series and the chunks holding their data points.
+
+But why are there several directories containing the layout of index and chunk files? And why does the last one contain a "wal" directory instead? Understanding those two questions solves about 90% of our problems.
+
+#### Many Little Databases
+
+We partition our _horizontal_ dimension, i.e. the time space, into non-overlapping blocks. Each block acts as a fully independent database containing all time series data for its time window. Hence, it has its own index and set of chunk files.
+
+```
+
+t0 t1 t2 t3 now
+ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
+ │ │ │ │ │ │ │ │ ┌────────────┐
+ │ │ │ │ │ │ │ mutable │ <─── write ──── ┤ Prometheus │
+ │ │ │ │ │ │ │ │ └────────────┘
+ └───────────┘ └───────────┘ └───────────┘ └───────────┘ ^
+ └──────────────┴───────┬──────┴──────────────┘ │
+ │ query
+ │ │
+ merge ─────────────────────────────────────────────────┘
+```
+
+Every block of data is immutable. Of course, we must be able to add new series and samples to the most recent block as we collect new data. For this block, all new data is written to an in-memory database that provides the same lookup properties as our persistent blocks. The in-memory data structures can be updated efficiently. To prevent data loss, all incoming data is also written to a temporary _write ahead log_ , which is the set of files in our “wal” directory, from which we can re-populate the in-memory database on restart.
+All these files come with their own serialization format, which comes with all the things one would expect: lots of flags, offsets, varints, and CRC32 checksums. Good fun to come up with, rather boring to read about.
+
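+To make the write path concrete, here is a simplified Go sketch of appending to the mutable block: every incoming sample is first made durable by appending a record to the write ahead log and is then applied to the in-memory structures that serve queries. The record encoding and all names are illustrative, not the real tsdb format:
+
+```
+package main
+
+import (
+	"encoding/binary"
+	"math"
+	"os"
+)
+
+type memSeries struct {
+	samples [][2]float64 // (timestamp, value) pairs; a stand-in for real chunks
+}
+
+type headBlock struct {
+	wal    *os.File
+	series map[string]*memSeries // keyed by the full label set
+}
+
+func (h *headBlock) append(seriesKey string, t int64, v float64) error {
+	// 1. Make the write durable by appending a record to the WAL.
+	var rec [16]byte
+	binary.BigEndian.PutUint64(rec[:8], uint64(t))
+	binary.BigEndian.PutUint64(rec[8:], math.Float64bits(v))
+	if _, err := h.wal.Write(rec[:]); err != nil {
+		return err
+	}
+	// 2. Update the in-memory database that serves queries.
+	s, ok := h.series[seriesKey]
+	if !ok {
+		s = &memSeries{}
+		h.series[seriesKey] = s
+	}
+	s.samples = append(s.samples, [2]float64{float64(t), v})
+	return nil
+}
+
+func main() {
+	f, err := os.OpenFile("wal-000001", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+	if err != nil {
+		panic(err)
+	}
+	defer f.Close()
+
+	h := &headBlock{wal: f, series: map[string]*memSeries{}}
+	if err := h.append(`{__name__="requests_total", method="GET"}`, 1491825600000, 42); err != nil {
+		panic(err)
+	}
+}
+```
+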
+This layout allows us to fan out queries to all blocks relevant to the queried time range. The partial results from each block are merged back together to form the overall result.
+
+This horizontal partitioning adds a few great capabilities:
+
+* When querying a time range, we can easily ignore all data blocks outside of this range. It trivially addresses the problem of _series churn_ by reducing the set of inspected data to begin with.
+* When completing a block, we can persist the data from our in-memory database by sequentially writing just a handful of larger files. We avoid any write-amplification and serve SSDs and HDDs equally well.
+* We keep the good property of V2 that recent chunks, which are queried most, are always hot in memory.
+* Nicely enough, we are also no longer bound to the fixed 1KiB chunk size to better align data on disk. We can pick any size that makes the most sense for the individual data points and chosen compression format.
+* Deleting old data becomes extremely cheap and instantaneous. We merely have to delete a single directory. Remember, in the old storage we had to analyze and re-write up to hundreds of millions of files, which could take hours to converge.
+
+Each block also contains a `meta.json` file. It simply holds human-readable information about the block to easily understand the state of our storage and the data it contains.
+
+##### mmap
+
+Moving from millions of small files to a handful of larger ones allows us to keep all files open with little overhead. This unblocks the usage of [`mmap(2)`][8], a system call that allows us to transparently back a virtual memory region by file contents. For simplicity, you might want to think of it like swap space, just that all our data is on disk already and no writes occur when swapping data out of memory.
+
+This means we can treat all contents of our database as if they were in memory without occupying any physical RAM. Only when we access certain byte ranges in our database files does the operating system lazily load pages from disk. This puts the operating system in charge of all memory management related to our persisted data. Generally, it is more qualified to make such decisions, as it has the full view on the entire machine and all its processes. Queried data can be rather aggressively cached in memory, yet under memory pressure the pages will be evicted. If the machine has unused memory, Prometheus will now happily cache the entire database, yet will immediately return it once another application needs it.
+Therefore, queries can no longer easily OOM our process by querying more persisted data than fits into RAM. The memory cache size becomes fully adaptive and data is only loaded once the query actually needs it.
+
+From my understanding, this is how a lot of databases work today and an ideal way to do it if the disk format allows — unless one is confident to outsmart the OS from within the process. We certainly get a lot of capabilities with little work from our side.
+
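+As a minimal sketch of this idea, the following Go program maps a chunk file read-only via `mmap(2)` and reads it through an ordinary byte slice. The file path is an illustrative assumption, and error handling is kept minimal:
+
+```
+package main
+
+import (
+	"fmt"
+	"os"
+	"syscall"
+)
+
+func main() {
+	// Open one of the chunk files from the tree above (illustrative path).
+	f, err := os.Open("data/b-000001/chunks/000001")
+	if err != nil {
+		panic(err)
+	}
+	defer f.Close()
+
+	fi, err := f.Stat()
+	if err != nil {
+		panic(err)
+	}
+
+	// Map the whole file; the OS faults pages in lazily on first access.
+	data, err := syscall.Mmap(int(f.Fd()), 0, int(fi.Size()), syscall.PROT_READ, syscall.MAP_SHARED)
+	if err != nil {
+		panic(err)
+	}
+	defer syscall.Munmap(data)
+
+	// data behaves like an ordinary byte slice, yet physical RAM is only
+	// used for the pages the page cache decides to keep around.
+	if len(data) > 0 {
+		fmt.Println("mapped", len(data), "bytes; first byte:", data[0])
+	}
+}
+```
+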
+#### Compaction
+
+The storage has to periodically "cut" a new block and write the previous one, which is now completed, onto disk. Only after the block has been successfully persisted are the write ahead log files, which are used to restore in-memory blocks, deleted.
+We are interested in keeping each block reasonably short (about two hours for a typical setup) to avoid accumulating too much data in memory. When querying multiple blocks, we have to merge their results into an overall result. This merge procedure obviously comes with a cost and a week-long query should not have to merge 80+ partial results.
+
+To achieve both, we introduce _compaction_ . Compaction describes the process of taking one or more blocks of data and writing them into a, potentially larger, block. It can also modify existing data along the way, e.g. dropping deleted data, or restructuring our sample chunks for improved query performance.
+
+```
+
+t0 t1 t2 t3 t4 now
+ ┌────────────┐ ┌──────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
+ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 mutable │ before
+ └────────────┘ └──────────┘ └───────────┘ └───────────┘ └───────────┘
+ ┌─────────────────────────────────────────┐ ┌───────────┐ ┌───────────┐
+ │ 1 compacted │ │ 4 │ │ 5 mutable │ after (option A)
+ └─────────────────────────────────────────┘ └───────────┘ └───────────┘
+ ┌──────────────────────────┐ ┌──────────────────────────┐ ┌───────────┐
+ │ 1 compacted │ │ 3 compacted │ │ 5 mutable │ after (option B)
+ └──────────────────────────┘ └──────────────────────────┘ └───────────┘
+```
+
+In this example we have the sequential blocks `[1, 2, 3, 4]`. Blocks 1, 2, and 3 can be compacted together and the new layout is `[1, 4]`. Alternatively, we could compact them in pairs of two into `[1, 3]`. All time series data still exists, but now in fewer blocks overall. This significantly reduces the merging cost at query time as fewer partial query results have to be merged.
+
+#### Retention
+
+We saw that deleting old data was a slow process in the V2 storage and put a toll on CPU, memory, and disk alike. How can we drop old data in our block based design? Quite simply, by just deleting the directory of a block that has no data within our configured retention window. In the example below, block 1 can safely be deleted, whereas 2 has to stick around until it falls fully behind the boundary.
+
+```
+ |
+ ┌────────────┐ ┌────┼─────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
+ │ 1 │ │ 2 | │ │ 3 │ │ 4 │ │ 5 │ . . .
+ └────────────┘ └────┼─────┘ └───────────┘ └───────────┘ └───────────┘
+ |
+ |
+ retention boundary
+```
+
+The older data gets, the larger the blocks may become as we keep compacting previously compacted blocks. An upper limit has to be applied so blocks don’t grow to span the entire database and thus diminish the original benefits of our design.
+Conveniently, this also limits the total disk overhead of blocks that are partially inside and partially outside of the retention window, i.e. block 2 in the example above. When setting the maximum block size at 10% of the total retention window, our total overhead of keeping block 2 around is also bound by 10%.
+
+Summed up, retention deletion goes from very expensive to practically free.
+
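+The mechanics are simple enough to sketch in a few lines of Go. The block type and directory names are illustrative; a real implementation would read each block's time range from its metadata:
+
+```
+package main
+
+import (
+	"fmt"
+	"os"
+)
+
+type block struct {
+	dir        string
+	minT, maxT int64 // time range covered by the block
+}
+
+// dropBeyondRetention deletes every block that lies entirely outside the
+// retention window; a partially covered block (like block 2 above) stays.
+func dropBeyondRetention(blocks []block, boundary int64) error {
+	for _, b := range blocks {
+		if b.maxT < boundary {
+			fmt.Println("dropping", b.dir)
+			if err := os.RemoveAll(b.dir); err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
+func main() {
+	blocks := []block{
+		{dir: "data/b-000001", minT: 0, maxT: 100},
+		{dir: "data/b-000004", minT: 100, maxT: 200},
+	}
+	if err := dropBeyondRetention(blocks, 150); err != nil {
+		panic(err)
+	}
+}
+```
+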
+> _If you've come this far and have some background in databases, you might be asking one thing by now: Is any of this new? — Not really; and probably for the better._
+>
+> _The pattern of batching data up in memory, tracked in a write ahead log, and periodically flushed to disk is ubiquitous today._
+> _The benefits we have seen apply almost universally regardless of the data's domain specifics. Prominent open source examples following this approach are LevelDB, Cassandra, InfluxDB, or HBase. The key takeaway is to avoid reinventing an inferior wheel, to research proven methods, and to apply them with the right twist._
+> _Running out of places to add your own magic dust later is an unlikely scenario._
+
+### The Index
+
+The initial motivation to investigate storage improvements were the problems brought by _series churn_ . The block-based layout reduces the total number of series that have to be considered for serving a query. So assuming our index lookup was of complexity _O(n^2)_ , we managed to reduce the _n_ a fair amount and now have an improved complexity of _O(n^2)_ — uhm, wait... damnit.
+A quick flashback to "Algorithms 101" reminds us that this, in theory, did not buy us anything. If things were bad before, they are just as bad now. Theory can be depressing.
+
+In practice, most of our queries will already be answered significantly faster. Yet, queries spanning the full time range remain slow even if they just need to find a handful of series. My original idea, dating back way before all this work was started, was a solution to exactly this problem: we need a more capable [ _inverted index_ ][9].
+An inverted index provides a fast lookup of data items based on a subset of their contents. Simply put, I can look up all series that have a label `app="nginx"` without having to walk through every single series and check whether it contains that label.
+
+For that, each series is assigned a unique ID by which it can be retrieved in constant time, i.e. O(1). In this case the ID is our _forward index_ .
+
+> Example: If the series with IDs 10, 29, and 9 contain the label `app="nginx"`, the inverted index for the label `app="nginx"` is the simple list `[10, 29, 9]`, which can be used to quickly retrieve all series containing the label. Even if there were 20 billion further series, it would not affect the speed of this lookup.
+
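+A toy in-memory version of the two indices from the example above might look as follows in Go; all names and IDs are illustrative:
+
+```
+package main
+
+import "fmt"
+
+func main() {
+	// Forward index: series ID -> its label set, retrievable in O(1).
+	forward := map[uint64]map[string]string{
+		9:  {"app": "nginx", "instance": "10.0.0.1:80"},
+		10: {"app": "nginx", "instance": "10.0.0.2:80"},
+		29: {"app": "nginx", "instance": "10.0.0.3:80"},
+	}
+	// Inverted index: label pair -> IDs of all series containing it.
+	inverted := map[string][]uint64{
+		`app="nginx"`: {10, 29, 9},
+	}
+
+	// Looking up all "nginx" series touches only the matching IDs,
+	// no matter how many other series exist.
+	for _, id := range inverted[`app="nginx"`] {
+		fmt.Println(id, forward[id])
+	}
+}
+```
+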
+In short, if _n_ is our total number of series, and _m_ is the result size for a given query, the complexity of our query using the index is now _O(m)_ . Queries scaling along the amount of data they retrieve ( _m_ ) instead of the data body being searched ( _n_ ) is a great property as _m_ is generally significantly smaller.
+For brevity, let’s assume we can retrieve the inverted index list itself in constant time.
+
+Actually, this is almost exactly the kind of inverted index V2 has and a minimum requirement to serve performant queries across millions of series. The keen observer will have noticed that, in the worst case, a label exists in all series and thus _m_ is, again, in _O(n)_ . This is expected and perfectly fine. If you query all data, it naturally takes longer. Things become problematic once we get involved with more complex queries.
+
+#### Combining Labels
+
+Labels associated with millions of series are common. Suppose a horizontally scaling "foo" microservice with hundreds of instances and thousands of series each. Every single series will have the label `app="foo"`. Of course, one generally won't query all series but restrict the query by further labels, e.g. I want to know how many requests my service instances received and query `__name__="requests_total" AND app="foo"`.
+
+To find all series satisfying both label selectors, we take the inverted index list for each and intersect them. The resulting set will typically be orders of magnitude smaller than each input list individually. As each input list has the worst case size O(n), the brute force solution of nested iteration over both lists has a runtime of O(n^2). The same cost applies for other set operations, such as the union (`app="foo" OR app="bar"`). When adding further label selectors to the query, the exponent increases for each to O(n^3), O(n^4), O(n^5), ... O(n^k). A lot of tricks can be played to minimize the effective runtime by changing the execution order. The more sophisticated, the more knowledge about the shape of the data and the relationships between labels is needed. This introduces a lot of complexity, yet does not decrease our algorithmic worst case runtime.
+
+This is essentially the approach in the V2 storage and luckily a seemingly slight modification is enough to gain significant improvements. What happens if we assume that the IDs in our inverted indices are sorted?
+
+Suppose this example of lists for our initial query:
+
+```
+__name__="requests_total" -> [ 1000, 1001, 9999, 2000000, 2000001, 2000002, 2000003 ]
+ app="foo" -> [ 1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002 ]
+
+ intersection => [ 1000, 1001 ]
+```
+
+The intersection is fairly small. We can find it by setting a cursor at the beginning of each list and always advancing the one at the smaller number. When both numbers are equal, we add the number to our result and advance both cursors. Overall, we scan both lists in this zig-zag pattern and thus have a total cost of _O(2n) = O(n)_ as we only ever move forward in either list.
+
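+The zig-zag merge translates almost line for line into Go. A small sketch, using the sorted lists from the example above; both inputs must be sorted for this to work:
+
+```
+package main
+
+import "fmt"
+
+// intersect computes the intersection of two sorted ID lists in O(n),
+// since each cursor only ever moves forward.
+func intersect(a, b []uint64) []uint64 {
+	var out []uint64
+	i, j := 0, 0
+	for i < len(a) && j < len(b) {
+		switch {
+		case a[i] < b[j]:
+			i++ // advance the cursor sitting at the smaller number
+		case a[i] > b[j]:
+			j++
+		default:
+			out = append(out, a[i]) // equal: part of the intersection
+			i++
+			j++
+		}
+	}
+	return out
+}
+
+func main() {
+	requestsTotal := []uint64{1000, 1001, 9999, 2000000, 2000001, 2000002, 2000003}
+	appFoo := []uint64{1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002}
+	fmt.Println(intersect(requestsTotal, appFoo)) // [1000 1001]
+}
+```
+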
+The procedure for more than two lists and for different set operations works similarly. So the number _k_ of set operations merely modifies the factor ( _O(k*n)_ ) instead of the exponent ( _O(n^k)_ ) of our worst-case lookup runtime. A great improvement.
+What I described here is a simplified version of the canonical search index used by practically any [full text search engine][10] out there. Every series descriptor is treated as a short "document", and every label (name + fixed value) as a "word" inside of it. We can ignore a lot of additional data typically encountered in search engine indices, such as word position and frequency data.
+Seemingly endless research exists on approaches improving the practical runtime, often making some assumptions about the input data. Unsurprisingly, there are also plenty of techniques to compress inverted indices that come with their own benefits and drawbacks. As our "documents" are tiny and the "words" are hugely repetitive across all series, compression becomes almost irrelevant. For example, a real-world dataset of ~4.4 million series with about 12 labels each has less than 5,000 unique labels. For our initial storage version, we stick to the basic approach without compression, with just a few simple tweaks added to skip over large ranges of non-intersecting IDs.
+
+While keeping the IDs sorted may sound simple, it is not always a trivial invariant to keep up. For instance, the V2 storage assigns hashes as IDs to new series and we cannot efficiently build up sorted inverted indices.
+Another daunting task is modifying the indices on disk as data gets deleted or updated. Typically, the easiest approach is to simply recompute and rewrite them, but the hard part is doing so while keeping the database queryable and consistent. The V3 storage does exactly this by having a separate immutable index per block that is only modified via rewrite on compaction. Only the indices for the mutable blocks, which are held entirely in memory, need to be updated.
+
+### Benchmarking
+
+I started initial development of the storage with a benchmark based on ~4.4 million series descriptors extracted from a real world data set and generated synthetic data points to feed into those series. This iteration just tested the stand-alone storage and was crucial to quickly identify performance bottlenecks and trigger deadlocks only experienced under highly concurrent load.
+
+After the conceptual implementation was done, the benchmark could sustain a write throughput of 20 million data points per second on my Macbook Pro — all while a dozen Chrome tabs and Slack were running. So while this all sounded great, it also indicated that there's no further point in pushing this benchmark (or running it in a less random environment for that matter). After all, it is synthetic and thus not worth much beyond a good first impression. Starting out about 20x above the initial design target, it was time to embed this into an actual Prometheus server, adding all the practical overhead and flakes only experienced in more realistic environments.
+
+We actually had no reproducible benchmarking setup for Prometheus, in particular none that allowed A/B testing of different versions. Concerning in hindsight, but [now we have one][11]!
+
+Our tool allows us to declaratively define a benchmarking scenario, which is then deployed to a Kubernetes cluster on AWS. While this is not the best environment for all-out benchmarking, it certainly reflects our user base better than dedicated bare metal servers with 64 cores and 128GB of memory.
+We deploy two Prometheus 1.5.2 servers (V2 storage) and two Prometheus servers from the 2.0 development branch (V3 storage). Each Prometheus server runs on a dedicated machine with an SSD. A horizontally scaled application exposing typical microservice metrics is deployed to worker nodes. Additionally, the Kubernetes cluster itself and the nodes are being monitored. The whole setup is supervised by yet another Meta-Prometheus, monitoring each Prometheus server for health and performance.
+To simulate series churn, the microservice is periodically scaled up and down to remove old pods and spawn new pods, exposing new series. Query load is simulated by a selection of "typical" queries, run against one server of each Prometheus version.
+
+Overall the scaling and querying load as well as the sampling frequency significantly exceed today's production deployments of Prometheus. For instance, we swap out 60% of our microservice instances every 15 minutes to produce series churn. This would likely only happen 1-5 times a day in a modern infrastructure. This ensures that our V3 design is capable of handling the workloads of the years ahead. As a result, the performance differences between Prometheus 1.5.2 and 2.0 are larger than in a more moderate environment.
+In total, we are collecting about 110,000 samples per second from 850 targets exposing half a million series at a time.
+
+After leaving this setup running for a while, we can take a look at the numbers. We evaluate several metrics over the first 12 hours, within which both versions reached a steady state.
+
+> Be aware of the slightly truncated Y axis in screenshots from the Prometheus graph UI.
+
+ ![Heap usage GB](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/heap_usage.png)
+> _Heap memory usage in GB_
+
+Memory usage is the most troubling resource for users today as it is relatively unpredictable and it may cause the process to crash.
+Obviously, the queried servers are consuming more memory, which can largely be attributed to the overhead of the query engine, which will be subject to future optimizations. Overall, Prometheus 2.0's memory consumption is reduced by 3-4x. After about six hours, there is a clear spike in Prometheus 1.5, which aligns with our retention boundary at six hours. As deletions are quite costly, resource consumption ramps up. This will become visible throughout various other graphs below.
+
+ ![CPU usage cores](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/cpu_usage.png)
+> _CPU usage in cores/second_
+
+A similar pattern shows for CPU usage, but the delta between queried and non-queried servers is more significant. Averaging at about 0.5 cores/sec while ingesting about 110,000 samples/second, our new storage becomes almost negligible compared to the cycles spent on query evaluation. In total the new storage needs 3-10 times fewer CPU resources.
+
+ ![Disk writes](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_writes.png)
+> _Disk writes in MB/second_
+
+By far the most dramatic and unexpected improvement shows in the write utilization of our disk. It clearly shows why Prometheus 1.5 is prone to wear out SSDs. We see an initial ramp-up as soon as the first chunks are persisted into the series files and a second ramp-up once deletion starts rewriting them. Surprisingly, the queried and non-queried servers show very different utilization.
+Prometheus 2.0 on the other hand, merely writes about a single megabyte per second to its write ahead log. Writes periodically spike when blocks are compacted to disk. Overall savings: a staggering 97-99%.
+
+ ![Disk usage](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_usage.png)
+> _Disk size in GB_
+
+Closely related to disk writes is the total amount of occupied disk space. As we are using almost the same compression algorithm for samples, which is the bulk of our data, they should be about the same. In a more stable setup that would largely be true, but as we are dealing with high _series churn_ , there's also the per-series overhead to consider.
+As we can see, Prometheus 1.5 ramps up storage space a lot faster before both versions reach a steady state as the retention kicks in. Prometheus 2.0 seems to have a significantly lower overhead per individual series. We can nicely see how space is linearly filled up by the write ahead log and instantaneously drops as it gets compacted. The fact that the lines for both Prometheus 2.0 servers do not exactly match needs further investigation.
+
+This all looks quite promising. The important piece left is query latency. The new index should have improved our lookup complexity. What has not substantially changed is processing of this data, e.g. in `rate()` functions or aggregations. Those aspects are part of the query engine.
+
+ ![Query latency](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/query_latency.png)
+> _99th percentile query latency in seconds_
+
+Expectations are completely met by the data. In Prometheus 1.5 the query latency increases over time as more series are stored. It only levels off once retention starts and old series are deleted. In contrast, Prometheus 2.0 stays in place right from the beginning.
+Some caution must be taken with how this data was collected. The queries fired against the servers were chosen by estimating a good mix of range and instant queries, doing heavier and more lightweight computations, and touching few or many series. It does not necessarily represent a real-world distribution of queries. It is also not representative of queries hitting cold data, and we can assume that all sample data is practically always hot in memory in either storage.
+Nonetheless, we can say with good confidence, that the overall query performance became very resilient to series churn and improved by up to 4x in our straining benchmarking scenario. In a more static environment, we can assume query time to be mostly spent in the query engine itself and the improvement to be notably lower.
+
+ ![Ingestion rate](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/ingestion_rate.png)
+> _Ingested samples/second_
+
+Lastly, a quick look into our ingestion rates of the different Prometheus servers. We can see that both servers with the V3 storage have the same ingestion rate. After a few hours it becomes unstable, which is caused by various nodes of the benchmarking cluster becoming unresponsive due to high load rather than the Prometheus instances. (The fact that both 2.0 lines exactly match is hopefully convincing enough.)
+Both Prometheus 1.5.2 servers start suffering from significant drops in ingestion rate even though more CPU and memory resources are available. The high stress of series churn causes a larger amount of data to not be collected.
+
+But what's the _absolute maximum_ number of samples per second you could ingest now?
+
+I don't know — and deliberately don't care.
+
+There are a lot of factors that shape the data flowing into Prometheus and there is no single number capable of capturing quality. Maximum ingestion rate has historically been a metric leading to skewed benchmarks and neglect of more important aspects such as query performance and resilience to series churn. The rough assumption that resource usage increases linearly was confirmed by some basic testing. It is easy to extrapolate what could be possible.
+
+Our benchmarking setup simulates a highly dynamic environment stressing Prometheus more than most real-world setups today. The results show we went way above our initial design goal, while running on non-optimal cloud servers. Ultimately, success will be determined by user feedback rather than benchmarking numbers.
+
+> Note: _At time of writing this, Prometheus 1.6 is in development, which will allow configuring the maximum memory usage more reliably and may notably reduce overall consumption in favor of slightly increased CPU utilization. I did not repeat the tests against this as the overall results still hold, especially when facing high series churn._
+
+### Conclusion
+
+Prometheus sets out to handle high cardinality of series and throughput of individual samples. It remains a challenging task, but the new storage seems to position us well for the hyper-scale, hyper-convergent, GIFEE infrastructure of the futu... well, it seems to work pretty well.
+
+A [first alpha release of Prometheus 2.0][12] with the new V3 storage is available for testing. Expect crashes, deadlocks, and other bugs at this early stage.
+
+The code for the storage itself can be found [in a separate project][13]. It's surprisingly agnostic to Prometheus itself and could be useful for a wider range of applications looking for an efficient local storage time series database.
+
+> _There's a long list of people to thank for their contributions to this work. Here they go in no particular order:_
+>
+> _The groundwork laid by Bjoern Rabenstein and Julius Volz on the V2 storage engine and their feedback on V3 was fundamental to everything seen in this new generation._
+>
+> _Wilhelm Bierbaum's ongoing advice and insight contributed significantly to the new design. Brian Brazil's continuous feedback ensured that we ended up with a semantically sound approach. Insightful discussions with Peter Bourgon validated the design and shaped this write-up._
+>
+> _Not to forget my entire team at CoreOS and the company itself for supporting and sponsoring this work. Thanks to everyone who listened to my ramblings about SSDs, floats, and serialization formats again and again._
+
+
+--------------------------------------------------------------------------------
+
+via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/
+
+作者:[Fabian Reinartz ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://twitter.com/fabxc
+[1]:https://en.wikipedia.org/wiki/Inode
+[2]:https://prometheus.io/
+[3]:https://kubernetes.io/
+[4]:https://en.wikipedia.org/wiki/Write_amplification
+[5]:http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/
+[6]:https://prometheus.io/docs/practices/rules/
+[7]:http://www.vldb.org/pvldb/vol8/p1816-teller.pdf
+[8]:https://en.wikipedia.org/wiki/Mmap
+[9]:https://en.wikipedia.org/wiki/Inverted_index
+[10]:https://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices
+[11]:https://github.com/prometheus/prombench
+[12]:https://prometheus.io/blog/2017/04/10/promehteus-20-sneak-peak/
+[13]:https://github.com/prometheus/tsdb
diff --git a/sources/tech/20170423 THE STORY OF GETTING SSH PORT 22.md b/sources/tech/20170423 THE STORY OF GETTING SSH PORT 22.md
new file mode 100644
index 0000000000..a7644c0a6d
--- /dev/null
+++ b/sources/tech/20170423 THE STORY OF GETTING SSH PORT 22.md
@@ -0,0 +1,157 @@
+SSH PORT
+============================================================
+
+The [SSH][4] (Secure Shell) port is 22. It is not a coincidence. This is a story I (Tatu Ylonen) haven't told before.
+
+### THE STORY OF GETTING SSH PORT 22
+
+I wrote the initial version of SSH in spring 1995. It was a time when [telnet][5] and [FTP][6] were widely used.
+
+Anyway, I designed SSH to replace both `telnet` (port 23) and `ftp` (port 21). Port 22 was free. It was conveniently between the ports for `telnet` and `ftp`. I figured having that port number might be one of those small things that would give some aura of credibility. But how could I get that port number? I had never allocated one, but I knew somebody who had allocated a port.
+
+The basic process for port allocation was fairly simple at that time. The Internet was smaller and we were in the very early stages of the Internet boom. Port numbers were allocated by IANA (Internet Assigned Numbers Authority). At the time, that meant the esteemed Internet pioneer [Jon Postel][7] and [Joyce K. Reynolds][8]. Among other things, Jon had been the editor of such minor protocol standards as IP (RFC 791), ICMP (RFC 792), and TCP (RFC 793). Some of you may have heard of them.
+
+To me Jon felt outright scary, having authored all the main Internet RFCs!
+
+Anyway, just before announcing `ssh-1.0` in July 1995, I sent this e-mail to IANA:
+
+```
+From ylo Mon Jul 10 11:45:48 +0300 1995
+From: Tatu Ylonen
+To: Internet Assigned Numbers Authority
+Subject: request for port number
+Organization: Helsinki University of Technology, Finland
+
+Dear Sir,
+
+I have written a program to securely log from one machine into another
+over an insecure network. It provides major improvements in security
+and functionality over existing telnet and rlogin protocols and
+implementations. In particular, it prevents IP, DNS and routing
+spoofing. My plan is to distribute the software freely on the
+Internet and to get it into as wide use as possible.
+
+I would like to get a registered privileged port number for the
+software. The number should preferably be in the range 1-255 so that
+it can be used in the WKS field in name servers.
+
+I'll enclose the draft RFC for the protocol below. The software has
+been in local use for several months, and is ready for publication
+except for the port number. If the port number assignment can be
+arranged in time, I'd like to publish the software already this week.
+I am currently using port number 22 in the beta test. It would be
+great if this number could be used (it is currently shown as
+Unassigned in the lists).
+
+The service name for the software is "ssh" (for Secure Shell).
+
+Yours sincerely,
+
+Tatu Ylonen
+
+... followed by protocol specification for ssh-1.0
+```
+
+The next day, I had an e-mail from Joyce waiting in my mailbox:
+
+```
+Date: Mon, 10 Jul 1995 15:35:33 -0700
+From: jkrey@ISI.EDU
+To: ylo@cs.hut.fi
+Subject: Re: request for port number
+Cc: iana@ISI.EDU
+
+Tatu,
+
+We have assigned port number 22 to ssh, with you as the point of
+contact.
+
+Joyce
+```
+
+There we were! SSH port was 22!!!
+
+On July 12, 1995, at 2:32am, I announced a final beta version to my beta testers at Helsinki University of Technology. At 5:23pm I announced ssh-1.0.0 packages to my beta testers. At 5:51pm on July 12, 1995, I sent an announcement about SSH (Secure Shell) to the `cypherpunks@toad.com` mailing list. I also posted it in a few newsgroups, mailing lists, and directly to selected people who had discussed related topics on the Internet.
+
+### CHANGING THE SSH PORT IN THE SERVER
+
+By default, the SSH server still runs on port 22. However, there are occasions when it is run on a different port. Testing use is one reason. Running multiple configurations on the same host is another. Rarely, it may also be run without root privileges, in which case it must be run on a non-privileged port (i.e., port number >= 1024).
+
+The port number can be configured by changing the `Port 22` directive in [/etc/ssh/sshd_config][9]. It can also be specified using the `-p` option to [sshd][10]. The SSH client and [sftp][11] programs also support the `-p` option.
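+
+As an illustration only (the alternate port 2222 and the commands below are examples, not recommendations for any particular system), changing the port and connecting to it might look like this:
+
+```
+# Edit the Port directive in the server configuration (run as root), then restart sshd
+sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
+systemctl restart sshd            # or: service sshd restart on older systems
+
+# Alternatively, start a one-off server instance on an alternate port
+/usr/sbin/sshd -p 2222
+
+# Connect with the client; note that OpenSSH's sftp uses an uppercase -P
+ssh -p 2222 user@example.com
+sftp -P 2222 user@example.com
+```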
+
+### CONFIGURING SSH THROUGH FIREWALLS
+
+SSH is one of the few protocols that are frequently permitted through firewalls. Unrestricted outbound SSH is very common, especially in smaller and more technical organizations. Inbound SSH is usually restricted to one or very few servers.
+
+### OUTBOUND SSH
+
+Configuring outbound SSH in a firewall is very easy. If there are restrictions on outgoing traffic at all, just create a rule that allows TCP port 22 to go out. That is all. If you want to restrict the destination addresses, you can also limit the rule to only permit access to your organization's external servers in the cloud, or to a [jump server][12] that guards cloud access.
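+
+As a rough sketch, such rules could look like this with iptables on a Linux-based firewall (the jump-server address 203.0.113.10 is a placeholder):
+
+```
+# Allow all outbound SSH (TCP port 22) through the firewall
+iptables -A FORWARD -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
+iptables -A FORWARD -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
+
+# Or restrict the destination to a single jump server (placeholder address)
+iptables -A FORWARD -p tcp -d 203.0.113.10 --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
+```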
+
+### BACK-TUNNELING IS A RISK
+
+Unrestricted outbound SSH can, however, be risky. The SSH protocol supports [tunneling][13]. The basic idea is that it is possible to have the SSH server on an external server listen to connections from anywhere, forward those back into the organization, and then make a connection to some internal server.
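+
+For illustration, such a back-tunnel is typically set up with the SSH client's -R (remote forwarding) option; the host names below are hypothetical:
+
+```
+# Run from inside the organization: the external server listens on port 8022
+# and forwards those connections back to port 22 of an internal host.
+# (Accepting non-local connections requires GatewayPorts yes in the external
+# server's sshd_config.)
+ssh -R 8022:intranet-host:22 user@external-server.example.com
+
+# An outside party can then reach the internal host through the tunnel
+ssh -p 8022 user@external-server.example.com
+```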
+
+This can be very convenient in some environments. Developers and system administrators frequently use it to open a tunnel that they can use to gain remote access from their home or from their laptop when they are travelling.
+
+However, it generally takes control away from firewall administrators and the security team, and it violates security policy. It can, for example, violate [PCI][14], [HIPAA][15], or [NIST SP 800-53][16]. It can be used by hackers and foreign intelligence agencies to leave backdoors into organizations.
+
+[CryptoAuditor][17] is a product that can control tunneling at a firewall or at the entry point to a group of cloud servers. It works together with [Universal SSH Key Manager][18] to gain access to [host keys][19] and is able to use them to decrypt the SSH sessions at a firewall and block unauthorized forwarding.
+
+### INBOUND SSH ACCESS
+
+For inbound access, there are a few practical alternatives:
+
+* Configure firewall to forward all connections to port 22 to a particular IP address on the internal network or [DMZ][1]. Run [CryptoAuditor][2] or a jump server at that IP address to control and audit further access into the organization. (A sketch of such forwarding rules follows after this list.)
+* Use different ports on the firewall to access different servers.
+* Only allow SSH access after you have logged in using a VPN (Virtual Private Network), typically using the [IPsec][3] protocol.
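+
+As a sketch of the first two alternatives, NAT rules on a Linux-based firewall might look like this (all addresses are placeholders):
+
+```
+# Forward all connections arriving at port 22 to a jump server in the DMZ
+iptables -t nat -A PREROUTING -p tcp --dport 22 -j DNAT --to-destination 10.0.0.5:22
+
+# Use different external ports to reach different internal servers
+iptables -t nat -A PREROUTING -p tcp --dport 2201 -j DNAT --to-destination 10.0.0.11:22
+iptables -t nat -A PREROUTING -p tcp --dport 2202 -j DNAT --to-destination 10.0.0.12:22
+```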
+
+### ENABLING SSH ACCESS VIA IPTABLES
+
+[Iptables][20] is a host firewall built into the Linux kernel. It is typically configured to protect the server by preventing access to any ports that have not been expressly opened.
+
+If `iptables` is enabled on the server, the following commands can be used to permit incoming SSH access. They must be run as root.
+
+```
+iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
+iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
+```
+
+If you want to save the rules permanently, on some systems that can be done with the command:
+
+```
+service iptables save
+```
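+
+On systemd-based distributions that lack the `iptables` service script, persistence is handled differently; one common approach (package names and paths vary by distribution) is:
+
+```
+# Debian/Ubuntu with the iptables-persistent package installed
+netfilter-persistent save
+
+# Or dump the rules manually and restore them at boot
+iptables-save > /etc/iptables/rules.v4
+```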
+
+ ![SSH port at firewall can permit tunneling to banks](https://www.ssh.com/s/ssh-port-firewall-access-banks-950x333-s+ZpRviP.png)
+
+--------------------------------------------------------------------------------
+
+via: https://www.ssh.com/ssh/port
+
+作者:[Tatu Ylonen][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ssh.com/ssh/port
+[1]:https://en.wikipedia.org/wiki/DMZ_(computing)
+[2]:https://www.ssh.com/products/cryptoauditor/
+[3]:https://www.ssh.com/network/ipsec/
+[4]:https://www.ssh.com/ssh/
+[5]:https://www.ssh.com/ssh/telnet
+[6]:https://www.ssh.com/ssh/ftp/
+[7]:https://en.wikipedia.org/wiki/Jon_Postel
+[8]:https://en.wikipedia.org/wiki/Joyce_K._Reynolds
+[9]:https://www.ssh.com/ssh/sshd_config/
+[10]:https://www.ssh.com/ssh/sshd/
+[11]:https://www.ssh.com/ssh/sftp/
+[12]:https://www.ssh.com/iam/jump-server
+[13]:https://www.ssh.com/ssh/tunneling/
+[14]:https://www.ssh.com/compliance/pci/
+[15]:https://www.ssh.com/compliance/hipaa/security-rule
+[16]:https://www.ssh.com/compliance/nist-800-53/
+[17]:https://www.ssh.com/products/cryptoauditor/
+[18]:https://www.ssh.com/products/universal-ssh-key-manager/
+[19]:https://www.ssh.com/ssh/host-key
+[20]:https://en.wikipedia.org/wiki/Iptables
diff --git a/translated/talk/20170119 The Many the Humble the Ubuntu Users.md b/translated/talk/20170119 The Many the Humble the Ubuntu Users.md
new file mode 100644
index 0000000000..c53b1c7d34
--- /dev/null
+++ b/translated/talk/20170119 The Many the Humble the Ubuntu Users.md
@@ -0,0 +1,51 @@
+# 众多而谦逊的 Ubuntu 用户
+
+#### “更好的捕鼠器”并不是生物学家的专用谚语。就像 Ubuntu,尽管有一些小问题,它只需要把自己的工作做得足够好。
+
+### Roblimo的藏身处
+
+我已经很久不是一个擅长于计算机的人了。事实上,我在网上第一次小有名气,是因为写了一个每周专栏,名字叫做“这台老电脑”,介绍一台古老的设备还能做什么——通常是在上面安装 Linux。在那之后,我又在 Andover.net 上开设了一个类似的专栏,名字叫做“平价计算”,是关于如何在这个大部分在线计算机专栏都让你花钱花到吃不上饭的世界里省钱的(译者注:作者的意思是“我算是一股清流了”)。
+
+据我所知,大部分 Linux 的早期使用者都痴迷于他们的电脑,以及那些让电脑变得有用的软件。他们乐于仔细研读源代码并做出一点小小的修改。最关键的是,他们大多是计算机科学专业的学生,或者从事 IT 行业的人。计算机和计算机网络让他们着迷,这也是理所当然的。
+
+我过去是(现在也是)一个写作者,不是一个搞计算机科学的家伙。对于我而言,计算机一直都只是工具。我希望它们老老实实待着,我让它们做什么,它们就按我的意思去做,最多只出一点点不至于让我烦躁的小问题。我喜欢图形化界面,因为我记不住管理电脑和网络所需的长长的命令。当然,我是有能力使用命令行的,但我更愿意站在海滩上,而不是沉浸在命令行的海洋里。
+
+有一段时间,在 Linux 圈子里,像我这样的普通用户得不到多少善意。“你是什么意思?你只是想用你的电脑写写文章,顶多往里面加一点 HTML?”开发者和管理员这样质问道,好像除编写代码之外的所有贡献都不如他们的贡献。
+
+尽管面临这些讥笑和嘲讽,我还是在一次又一次的宣讲和谈话中提出这样的主题:“与其只解决你自己的痛点,为什么不把你朋友的痛点也一起解决了呢?比如与你共事的朋友、在你最喜欢的饭店工作的朋友,还有你的医生?难道你不希望你的医生专注于医治病人,而不是为 `apt-get` 和 `grep` 心烦意乱?”
+
+所以,是的,正因为我希望更简单地使用 Linux,我才成为[早期的 Mandrake 用户][1],而今天,我是一个快乐的 Ubuntu 使用者。
+
+为什么是 Ubuntu?你一定是在逗我,为什么不是?它是 Linux 发行版中的丰田凯美瑞(或者本田思域)!平凡而卓越。它流行到你在 IRC、LinuxQuestions、Ubuntu 自己的论坛以及许许多多的地方都能找到它的答案。
+
+当然,使用 Debian 或 Fedora 也很酷,Mint 也是个与众不同的小帅哥。但我感兴趣的仍然是写故事、往里面加一点 HTML、在浏览器里阅读、在 Google Docs 里与客户协作修改、随时收发邮件、偶尔处理一下新旧图片……这些都是普通电脑用户的那点事儿。
+
+当这一切运行起来的时候,我的桌面长什么样子根本无关紧要。我都看不见它!它被应用窗口完全盖住了!而且我用的是两台显示器,不止一台。让我数数……两个浏览器窗口里开着 17 个标签页,GIMP 正在运行,还有 [Bluefish][2]——我此刻正用它来写这篇文章。
+
+所以对我而言,Ubuntu 是一条阻力最小的路。Mint 可能更可爱一点,但去掉那些修饰(译者注:比如底部的边栏)之后,它不就是 Ubuntu 吗?如果我使用的是一套相同的程序,而且根本看不见桌面环境,那谁还在意它长什么样子?
+
+有些研究表明 Mint 正在变得更流行,也有人说 Debian 更受欢迎。但在他们的研究中,年复一年,Ubuntu 始终居于首位。
+
+所以称我为大多数人吧!说我无聊吧!称我为众多而谦逊的 Ubuntu 用户中的一员吧——至少目前如此……
+
+------
+
+作者简介:
+
+![](http://0.gravatar.com/avatar/f861a631676e6d4d2f4e4de2454f230e?s=80&d=blank&r=pg)
+
+Robin "Roblimo" Miller 是一位自由作者,曾任 Open Source Technology Group(该公司拥有 SourceForge、freshmeat、Linux.com、NewsForge、ThinkGeek 以及 Slashdot)的主编,并且直到最近还在 Slashdot 担任视频编辑。他的博客在 Robin 'Roblimo' Miller 的个人站点上,Twitter 账号是 @robinAKAroblimo。
+
+------
+
+via: http://fossforce.com/2017/01/many-humble-ubuntu-users/
+
+作者:[Robin "Roblimo" Miller][a]
+译者:[svtter](https://github.com/svtter)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.roblimo.com/
+[1]: https://linux.slashdot.org/story/00/11/02/2324224/mandrake-72-in-wal-mart-a-good-idea
+[2]: http://bluefish.openoffice.nl/index.html
diff --git a/translated/talk/20170402 Why do developers who could work anywhere flock to the worlds most expensive cities.md b/translated/talk/20170402 Why do developers who could work anywhere flock to the worlds most expensive cities.md
new file mode 100644
index 0000000000..7ee343da04
--- /dev/null
+++ b/translated/talk/20170402 Why do developers who could work anywhere flock to the worlds most expensive cities.md
@@ -0,0 +1,63 @@
+为什么可以在任何地方工作的开发者要聚集到世界上最昂贵的城市?
+============================================================
+
+
+ ![](https://tctechcrunch2011.files.wordpress.com/2017/04/img_20170401_1835042.jpg?w=977)
+
+政治家和经济学家都在[哀叹][10],某几个“阿尔法”城市——旧金山、洛杉矶、纽约、波士顿、多伦多、伦敦、巴黎——在吸走了所有最好的工作的同时,变得昂贵得令人却步,降低了经济流动性,扩大了贫富差距。但是,为什么那些最好的工作不能搬到别处去呢?
+
+当然,很多工作搬不了。一个在纽约或伦敦工作的普通金融从业者(当然,是在英国脱欧毁掉伦敦的银行业之前),如果告诉老板自己今后想在清迈工作,只会在办公室里沦为笑柄,不再受待见。
+
+但这对(大部分)软件领域不适用。大部分 web/app 开发者提出这样的要求也许会被拒绝,但至少不会被嘲笑或者被炒。优秀的开发者往往供不应求,而且在 Skype 和 Slack 的时代,软件开发完全可以不依赖面对面的交流。
+
+(这一切对作家来说更是如此;事实上,我就是在波纳佩写下这篇文章的。但是作家并不像软件开发者那样具有足够的议价能力。)
+
+有些人会告诉你,远程团队天生就比本地团队效率和生产力低;或者说“不经意的灵感碰撞”如此重要,以至于每位员工每天都必须到同一个地方,人为地制造这种碰撞。这些人错了——只要团队规模不大(是一把、一打这样的数量级,而不是成百上千)并且足够专注。
+
+我对此深有体会:在 [HappyFunCorp][11],我们大量与远程团队合作,并且长期招聘远程开发者,而结果好得难以置信。在旧金山的家中,我与斯德哥尔摩、圣保罗、上海、布鲁克林、新德里的开发者交流和合作,这样的一天完全没有任何不寻常。
+
+不管这是不是个好主意,我都有点跑题了。供求关系表明,技能足够好的开发者,只要愿意,就可以成为所谓的“数字游民”。但许多做得到的人却不愿意。最近,我在雷克雅未克一栋通过 Airbnb 租来的房子里,和一支不断轮换的临时远程工作团队待了一段时间。我按东海岸时间工作,以便跟上进度,把清晨和周末的时光都花在探索冰岛上——但最后,几乎所有人还是回到了湾区生活。
+
+从经济层面来说,这当然太疯狂了。搬到东南亚工作,光房租一项每月就能省下几千美元。为什么那些可以在哥斯达黎加挣旧金山工资、或者在柏林拿纽约薪水的人们,选择不这样做?为什么这些据说头脑冷静的工程师,在财务上如此不理性?
+
+当然,这里有社交和文化上的原因。清迈很不错,但没有大都会博物馆,没有蒸汽朋克化装舞会,也没有步行 15 分钟就能到的 50 家美食餐厅。柏林也很可爱,但提供不了风筝冲浪、山间远足和加州的气候,更无法保证你身边有无数与你共享同样价值观和母语的人。
+
+但我觉得原因远不止这些。我相信,除了贫富差距,还存在一条更基础的经济分水岭。我认为,我们正在目睹一条巨大裂缝的形成:一边是可以成就超凡事业的“极端斯坦”城市,另一边是可以工作和赚钱、却难以成就伟大的“平均斯坦”城市。(这两个名词是从伟大的纳西姆·塔勒布那里借来的。)
+(译者注:[平均斯坦与极端斯坦的概念是美国学者纳西姆·塔勒布首先提出来的。他发现在我们所处的世界上,有些事物表现出相当的平均性,大部分个体都靠近均值,离均值越远则个体数量越稀少,与均值的偏离达到一定程度的个体数量将趋近于零。有些事物则表现出相当的极端性,均值这个概念在这个领域没有太多的意义,剧烈偏离均值的个体大量存在,而且偏离程度大得惊人。他把前者称为平均斯坦,把后者称为极端斯坦。][15])
+
+艺术行业的极端斯坦城市由来已久。这也解释了为什么有抱负的作家纷纷搬到纽约,而已经功成名就的导演和演员仍然像飞蛾扑火一样被洛杉矶吸引。如今,这一点同样适用于技术行业。即使你没打算(参与)构造什么非凡的事物——如今创业神话如此恢弘,很难想象有工程师完全没有梦想过——_伟大事物发生的地方_本身,就如梦如幻地吸引着人们。
+
+但有趣的是,理论上讲,这一切是会改变的,因为——就在最近——分布式、分散管理的团队也确实能够成就非凡。风投目光的短浅对这些团队很不利,但没有哪条法律规定独角兽公司只能诞生在加州和屈指可数的几块次级领地;而且不管结果好坏,极端斯坦似乎正在扩散。如果这种扩散最终吊诡地让米申区的房租变 _便宜_,那就再好不过了!
+
+--------------------------------------------------------------------------------
+
+via: https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
+
+作者:[ Jon Evans ][a]
+译者:[xiaow6](https://github.com/xiaow6)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://techcrunch.com/author/jon-evans/
+[1]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#comments
+[2]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#
+[3]:http://twitter.com/share?via=techcrunch&url=http://tcrn.ch/2owXJ0C&text=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&hashtags=
+[4]:https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Ftechcrunch.com%2F2017%2F04%2F02%2Fwhy-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities%2F&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
+[5]:https://plus.google.com/share?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
+[6]:http://www.reddit.com/submit?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
+[7]:http://www.stumbleupon.com/badge/?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
+[8]:mailto:?subject=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities?&body=Article:%20https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
+[9]:https://share.flipboard.com/bookmarklet/popout?v=2&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
+[10]:https://mobile.twitter.com/Noahpinion/status/846054187288866
+[11]:http://happyfuncorp.com/
+[12]:https://twitter.com/rezendi
+[13]:https://techcrunch.com/author/jon-evans/
+[14]:https://techcrunch.com/2017/04/01/discussing-the-limits-of-artificial-intelligence/
+[15]:http://blog.sina.com.cn/s/blog_5ba3d8610100q3b1.html
diff --git a/translated/tech/20110207 How debuggers work Part 3 - Debugging information.md b/translated/tech/20110207 How debuggers work Part 3 - Debugging information.md
new file mode 100644
index 0000000000..ff5d53b9a1
--- /dev/null
+++ b/translated/tech/20110207 How debuggers work Part 3 - Debugging information.md
@@ -0,0 +1,339 @@
+[调试器的工作原理: 第3篇 - 调试信息][25]
+============================================================
+
+
+这是调试器的工作原理系列文章的第三篇。阅读这篇文章之前应当先阅读[第一篇][26]与[第二篇][27]。
+
+### 这篇文章的主要内容
+
+本文将解释调试器如何得知机器码中 C 函数、变量与数据的位置——也就是编译器把 C 源代码翻译成机器码之后,两者之间的对应关系。
+
+### 调试信息
+
+
+现代编译器能够把用高级语言写成的代码——无论其排版、嵌套的程序流程和各种数据类型如何——转换为一大堆称之为机器码的 0/1 数据,这么做的唯一目的是尽可能快地在目标 CPU 上运行程序。通常来说,一行 C 代码会转换为若干条机器码。变量被分散在机器码的各个部分,有的在堆栈中,有的在寄存器中,或者干脆被优化掉了。数据结构与对象在机器码中甚至不“存在”,它们只是对内存中数据如何编码存放的一种抽象描述。
+
+那么调试器怎么知道,当你需要在某个函数入口处暂停时,程序要在哪停下来呢?它怎么知道当你查看某个变量值时,它怎么找到这个值?答案是,调试信息。
+
+编译器在生成机器码的同时会生成相应的调试信息。调试信息描述了可执行程序与源代码之间的关系,并以一种预先定义好的格式同机器码存放在一起。过去的数年里,人们针对不同的平台与可执行文件格式发明了很多种存储这些信息的格式。我们这篇文章不讲这些格式的历史,而是阐述调试信息是如何工作的,所以我们将专注于其中一种——`DWARF`。如今,DWARF 作为类 Unix 平台上可执行文件的调试信息格式,应用十分广泛。
+
+### ELF 中的 DWARF
+
+ ![](http://eli.thegreenplace.net/images/2011/02/dwarf_logo.gif)
+
+根据[它的维基百科][17]所描述,虽然 DWARF 是同 ELF 一同设计的,但理论上它也可以嵌入到其他的可执行文件格式中。(DWARF 是由 DWARF 标准委员会推出的开放标准,上文展示的图标就来自该委员会的网站。)
+
+`DWARF` 是一种复杂的格式,它吸收了过去多年中许多体系结构与操作系统的经验。正因为它要解决“为任意语言、在任何平台与操作系统上表示调试信息”这样棘手的难题,它也必须很复杂。单凭这一篇文章想要透彻讲解 DWARF 是远远不够的,说实话,我也没有充分了解 DWARF 的每一个微小细节,所以也做不到透彻讲解(如果你感兴趣,文末列出了一些资源,建议从 DWARF 教程开始上手)。这篇文章中,我将以浅显易懂的方式展示 DWARF 在实际中如何承载调试信息。
+
+### ELF文件中的调试部分
+
+首先让我们看看 DWARF 处在 ELF 文件中的什么位置。ELF 把每一个目标文件划分为若干节(section),_section header table_ 声明并定义了每一个节。不同的工具以不同的方式处理不同的节:例如连接器只寻找连接器需要的节,调试器只查找调试器需要的节。
+
+本文的实验使用下面这个 C 源文件,把它编译为可执行文件 tracedprog2:
+
+
+```
+#include <stdio.h>
+
+void do_stuff(int my_arg)
+{
+ int my_local = my_arg + 2;
+ int i;
+
+ for (i = 0; i < my_local; ++i)
+ printf("i = %d\n", i);
+}
+
+int main()
+{
+ do_stuff(2);
+ return 0;
+}
+```
+
+
+使用 `objdump -h` 命令检查这个 ELF 可执行文件中的节头,我们会看到几个名字以 .debug_ 开头的节,这些就是 DWARF 的调试信息节。
+
+```
+26 .debug_aranges 00000020 00000000 00000000 00001037
+ CONTENTS, READONLY, DEBUGGING
+27 .debug_pubnames 00000028 00000000 00000000 00001057
+ CONTENTS, READONLY, DEBUGGING
+28 .debug_info 000000cc 00000000 00000000 0000107f
+ CONTENTS, READONLY, DEBUGGING
+29 .debug_abbrev 0000008a 00000000 00000000 0000114b
+ CONTENTS, READONLY, DEBUGGING
+30 .debug_line 0000006b 00000000 00000000 000011d5
+ CONTENTS, READONLY, DEBUGGING
+31 .debug_frame 00000044 00000000 00000000 00001240
+ CONTENTS, READONLY, DEBUGGING
+32 .debug_str 000000ae 00000000 00000000 00001284
+ CONTENTS, READONLY, DEBUGGING
+33 .debug_loc 00000058 00000000 00000000 00001332
+ CONTENTS, READONLY, DEBUGGING
+```
+
+每一行的第一个数字代表了这个节的大小,最后一个数字代表了这个节距离文件起始处的偏移量。调试器利用这些信息从可执行文件中读取相应的节。
+
+现在让我们看看一些在 `DWARF` 中查找有用的调试信息的实际例子。
+
+### 查找函数
+
+调试器的基础任务之一,就是当我们在某个函数上设置断点时,能够在函数入口处暂停。为此,必须在高级语言的函数名与该函数机器码的起始地址之间建立某种映射关系。
+
+为了获取这种映射关系,我们可以查找 DWARF 中的 .debug_info 节。在深入之前,需要一点基础知识:DWARF 中基本的描述单元称为调试信息条目(DIE)。每个 DIE 都有一个表明其类型的标签,以及一组属性。DIE 之间通过兄弟结点和孩子结点相互连接,属性的值也可以指向其他的 DIE。
+
+运行以下命令
+
+```
+objdump --dwarf=info tracedprog2
+```
+
+输出相当长,为了方便举例,我们只关注下面这些行(从这里开始,无关的冗长信息我会以 (...) 代替,方便排版):
+
+```
+<1><71>: Abbrev Number: 5 (DW_TAG_subprogram)
+ <72> DW_AT_external : 1
+ <73> DW_AT_name : (...): do_stuff
+ <77> DW_AT_decl_file : 1
+ <78> DW_AT_decl_line : 4
+ <79> DW_AT_prototyped : 1
+ <7a> DW_AT_low_pc : 0x8048604
+ <7e> DW_AT_high_pc : 0x804863e
+ <82> DW_AT_frame_base : 0x0 (location list)
+ <86> DW_AT_sibling : <0xb3>
+
+<1>: Abbrev Number: 9 (DW_TAG_subprogram)
+ DW_AT_external : 1
+ DW_AT_name : (...): main
+ DW_AT_decl_file : 1
+ DW_AT_decl_line : 14
+ DW_AT_type : <0x4b>
+ DW_AT_low_pc : 0x804863e
+ DW_AT_high_pc : 0x804865a
+ DW_AT_frame_base : 0x2c (location list)
+```
+
+上面的代码中有两个带有 DW_TAG_subprogram 标签的条目,在 DWARF 中,这个标签指代函数。注意,这里有两个条目:一个是 do_stuff 函数的,另一个是 main 函数的。这些信息中有很多值得关注的属性,但其中最值得注意的是 DW_AT_low_pc。它代表了函数开始处的程序计数器的值(在 x86 平台上是 EIP)。此处 0x8048604 就是 do_stuff 函数开始处的程序计数器。下面我们用 `objdump -d` 命令对可执行文件进行反汇编,来看看这块地址中都有什么:
+
+```
+08048604 :
+ 8048604: 55 push ebp
+ 8048605: 89 e5 mov ebp,esp
+ 8048607: 83 ec 28 sub esp,0x28
+ 804860a: 8b 45 08 mov eax,DWORD PTR [ebp+0x8]
+ 804860d: 83 c0 02 add eax,0x2
+ 8048610: 89 45 f4 mov DWORD PTR [ebp-0xc],eax
+ 8048613: c7 45 (...) mov DWORD PTR [ebp-0x10],0x0
+ 804861a: eb 18 jmp 8048634
+ 804861c: b8 20 (...) mov eax,0x8048720
+ 8048621: 8b 55 f0 mov edx,DWORD PTR [ebp-0x10]
+ 8048624: 89 54 24 04 mov DWORD PTR [esp+0x4],edx
+ 8048628: 89 04 24 mov DWORD PTR [esp],eax
+ 804862b: e8 04 (...) call 8048534
+ 8048630: 83 45 f0 01 add DWORD PTR [ebp-0x10],0x1
+ 8048634: 8b 45 f0 mov eax,DWORD PTR [ebp-0x10]
+ 8048637: 3b 45 f4 cmp eax,DWORD PTR [ebp-0xc]
+ 804863a: 7c e0 jl 804861c
+ 804863c: c9 leave
+ 804863d: c3 ret
+```
+
+显然,0x8048604 是 do_stuff 的开始地址,这样一来,调试器就可以建立起函数与其在可执行文件中位置间的映射关系。
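+
+如果想交叉验证这一映射,也可以借助符号表来查看函数地址(下面的命令与输出基于本文示例,仅作演示):
+
+```
+$ nm tracedprog2 | grep do_stuff
+08048604 T do_stuff
+```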
+
+### 查找变量
+
+假设我们在 do_stuff 函数中的某个断点处停了下来,想通过调试器取得 my_local 这个变量的值。调试器怎么知道去哪里找这个值呢?这要比查找函数困难得多。变量可能存储在全局存储区、堆栈、甚至是寄存器中。此外,同名变量在不同的作用域中可能有着不同的值。调试信息必须能够反映所有这些变化,DWARF 当然做到了。
+
+我不会逐一去讲每一种可能的状况,但我会以调试器在 do_stuff 函数中查找 my_local 变量的过程来举个例子。下面我们再看一遍 .debug_info 中 do_stuff 的条目,这次连它的子条目也一起看。
+
+```
+<1><71>: Abbrev Number: 5 (DW_TAG_subprogram)
+ <72> DW_AT_external : 1
+ <73> DW_AT_name : (...): do_stuff
+ <77> DW_AT_decl_file : 1
+ <78> DW_AT_decl_line : 4
+ <79> DW_AT_prototyped : 1
+ <7a> DW_AT_low_pc : 0x8048604
+ <7e> DW_AT_high_pc : 0x804863e
+ <82> DW_AT_frame_base : 0x0 (location list)
+ <86> DW_AT_sibling : <0xb3>
+ <2><8a>: Abbrev Number: 6 (DW_TAG_formal_parameter)
+ <8b> DW_AT_name : (...): my_arg
+ <8f> DW_AT_decl_file : 1
+ <90> DW_AT_decl_line : 4
+ <91> DW_AT_type : <0x4b>
+ <95> DW_AT_location : (...) (DW_OP_fbreg: 0)
+ <2><98>: Abbrev Number: 7 (DW_TAG_variable)
+ <99> DW_AT_name : (...): my_local
+ <9d> DW_AT_decl_file : 1
+ <9e> DW_AT_decl_line : 6
+ <9f> DW_AT_type : <0x4b>
+ DW_AT_location : (...) (DW_OP_fbreg: -20)
+<2>: Abbrev Number: 8 (DW_TAG_variable)
+ DW_AT_name : i
+ DW_AT_decl_file : 1
+ DW_AT_decl_line : 7
+ DW_AT_type : <0x4b>
+ DW_AT_location : (...) (DW_OP_fbreg: -24)
+```
+
+看到每个条目处第一对尖括号中的数字了吗?这是嵌套的层级。在上面的例子中,以 <2> 开头的条目是以 <1> 开头条目的子条目。因此我们得知 my_local 变量(以 DW_TAG_variable 标签标记)是 do_stuff 函数的局部变量。除此之外,调试器还需要知道变量的数据类型,这样才能正确地使用与显示它。上面的例子中,my_local 的类型指向另一个 DIE——<0x4b>。如果用 objdump 命令查看那个 DIE,会发现它代表的是有符号 4 字节整型。
+
+而为了在实际运行的程序内存中找到变量的值,调试器需要使用 DW_AT_location 属性。对 my_local 而言,该属性的值是 DW_OP_fbreg: -20,意思是说 my_local 存储在距离其所在函数的栈帧基址(frame base)偏移 -20 的地方。
+
+do_stuff 函数的 DW_AT_frame_base 属性值为 0x0 (location list),这意味着帧基址需要到 location list 中偏移量为 0x0 处查找。下面我们来一起看看。
+
+```
+$ objdump --dwarf=loc tracedprog2
+
+tracedprog2: file format elf32-i386
+
+Contents of the .debug_loc section:
+
+ Offset Begin End Expression
+ 00000000 08048604 08048605 (DW_OP_breg4: 4 )
+ 00000000 08048605 08048607 (DW_OP_breg4: 8 )
+ 00000000 08048607 0804863e (DW_OP_breg5: 8 )
+ 00000000
+ 0000002c 0804863e 0804863f (DW_OP_breg4: 4 )
+ 0000002c 0804863f 08048641 (DW_OP_breg4: 8 )
+ 0000002c 08048641 0804865a (DW_OP_breg5: 8 )
+ 0000002c
+```
+
+我们需要关注的是第一列(do_stuff 函数的 DW_AT_frame_base 属性指向 location list 中偏移量为 0x0 的条目,而 main 函数的同一属性指向 0x2c,即第二组地址列表)。对于调试器执行时可能处于的每一个地址区间,表中都给出了如何计算栈帧基址——以某个寄存器的值加上偏移量。对 x86 平台而言,breg4 对应 esp,breg5 对应 ebp。
+
+让我们再看看 do_stuff 函数的头几条指令。
+
+```
+08048604 :
+ 8048604: 55 push ebp
+ 8048605: 89 e5 mov ebp,esp
+ 8048607: 83 ec 28 sub esp,0x28
+ 804860a: 8b 45 08 mov eax,DWORD PTR [ebp+0x8]
+ 804860d: 83 c0 02 add eax,0x2
+ 8048610: 89 45 f4 mov DWORD PTR [ebp-0xc],eax
+```
+
+只有当第二条指令执行后,ebp 寄存器才真正存储了有用的值,而前两条指令处的帧基址则按上面地址表中相应区间的规则计算。一旦 ebp 确定了,计算偏移量就十分方便,因为尽管 esp 在操作堆栈时会不断移动,但 ebp 作为栈底是保持不变的。
+
+那么究竟该去哪里找 my_local 的值呢?在 0x8048610 这个地址之后,my_local 的值才经由 eax 计算并存入内存,所以从这里开始调试器才需要关注它。此处帧基址的计算规则是 DW_OP_breg5: 8,即 ebp + 8;再回想一下,my_local 的 DW_AT_location 属性值为 DW_OP_fbreg: -20,即帧基址 - 20。所以 my_local 的地址是 ebp + 8 - 20 = ebp - 12。现在再查看反汇编代码,看看数据从 eax 被移动到了哪里——正是 ebp - 0xc,也就是 ebp - 12,与计算吻合。
+
+### 查看行号
+
+当我们谈论调试信息的时候,前面其实耍了点小把戏。调试 C 源代码、在某个函数处设置断点时,我们并不关注函数的第一条“机器码”指令(此时函数的序言还没有执行完,局部变量也还没有初始化),我们真正关注的是函数的第一行“C 代码”。
+
+这就是 DWARF 提供 C 源代码行号与可执行文件中机器码地址之间完整映射的原因。下面是 .debug_line 节中所包含的内容,已转换为可读的格式:
+
+```
+$ objdump --dwarf=decodedline tracedprog2
+
+tracedprog2: file format elf32-i386
+
+Decoded dump of debug contents of section .debug_line:
+
+CU: /home/eliben/eli/eliben-code/debugger/tracedprog2.c:
+File name Line number Starting address
+tracedprog2.c 5 0x8048604
+tracedprog2.c 6 0x804860a
+tracedprog2.c 9 0x8048613
+tracedprog2.c 10 0x804861c
+tracedprog2.c 9 0x8048630
+tracedprog2.c 11 0x804863c
+tracedprog2.c 15 0x804863e
+tracedprog2.c 16 0x8048647
+tracedprog2.c 17 0x8048653
+tracedprog2.c 18 0x8048658
+```
+
+很容易看出其中 C 源代码与反汇编代码之间的对应关系。第 5 行对应 do_stuff 函数的入口 0x8048604;第 6 行对应 0x804860a,这正是调试器在 do_stuff 函数上设断点时应该停下来的地方——此时函数序言已经执行完毕。这些信息形成了行号与地址间的双向映射关系。
+
+* 当在某一行设置断点的时候,调试器会利用这些信息去查找相应的地址来做断点工作(还记得上篇文章中的 int 3 指令吗?)
+* 当某条指令引起段错误时,调试器会利用这些信息查出它对应源代码中的哪一行。
+
+### libdwarf - 用 DWARF 编程
+
+尽管使用命令行工具查看 DWARF 很有用,但这仍然不够易用。作为程序员,我们应当知道,当需要这些调试信息时,如何通过编程来获取。
+
+自然,我们想到的第一种方法就是阅读 DWARF 规范并照着实现。有句话说得好:分析 HTML 应当使用库,永远不要手工分析。对 DWARF 来说更是如此——DWARF 比 HTML 复杂得多,上面展示的只是冰山一角。更糟糕的是,在实际的目标文件中,大部分信息是以非常紧凑的压缩格式存储的,分析起来更加复杂(其中某些部分,例如位置信息与行号信息,是编码为一种专用虚拟机的指令的)。
+
+所以我们要使用库函数来处理 `DWARF`。下面是两种我熟悉的主流库(还有些不完整的库这里没有写)
+
+1. `BFD`(libbfd),被 [GNU binutils][11] 使用,其中包含 `objdump`(对,就是这篇文章中我们一直在用的这货)、`ld`(GNU 连接器)与 `as`(GNU 汇编器)。
+2. `libdwarf` ,同它的哥哥 `libelf` 一同用于 `Solaris` 与 `FreeBSD` 中的调试信息分析。
+
+相比较而言,我更倾向于使用 `libdwarf`,因为我对它了解得更多,并且它的开源协议更开放。
+
+因为 `libdwarf` 本身相当复杂,使用它需要相当多的代码,所以这里不会展示全部代码。你可以在[这里][24]下载示例程序并自己运行试试。编译它需要提前安装 libelf 与 libdwarf,并且在链接时使用参数 `-lelf` 与 `-ldwarf`。
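+
+构建命令大致如下(假设 libelf 与 libdwarf 的头文件和库已经安装在标准路径下):
+
+```
+gcc dwarf_get_func_addr.c -o dwarf_get_func_addr -lelf -ldwarf
+```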
+
+这个示例程序接受一个可执行文件,并打印其中的函数名称与函数入口地址。下面是它处理本文中使用的 C 程序后的输出:
+
+```
+$ dwarf_get_func_addr tracedprog2
+DW_TAG_subprogram: 'do_stuff'
+low pc : 0x08048604
+high pc : 0x0804863e
+DW_TAG_subprogram: 'main'
+low pc : 0x0804863e
+high pc : 0x0804865a
+```
+
+`libdwarf` 的文档很棒,如果你花些功夫,利用 libdwarf 获取这篇文章中涉及的其他 DWARF 信息应该并不困难。
+
+### 结论与计划
+
+原理上讲,调试信息是个很简单的概念。尽管实现细节比较复杂,但经过上面的学习,我想你应该了解了调试器是如何从可执行文件中获取它需要的源代码信息的了。对程序员而言,程序是代码与数据结构;对可执行文件而言,程序只是一系列存储在内存或寄存器中的指令与数据。利用调试信息,调试器就可以把这两者连接起来,从而完成调试工作。
+
+此文与本系列的前两篇一起,介绍了调试器的内部工作过程。利用这里讲到的知识,再敲些代码,应该就可以完成一个 Linux 下最简单、基础,但也具备一定功能的调试器。
+
+下一步我并不确定要做什么,这个系列文章可能就此结束,也有可能我要讲些堆栈调用的事情,又或者讲 `Windows` 下的调试。你们有什么好的点子或者相关材料,可以直接评论或者发邮件给我。
+
+### 参考
+
+* objdump 参考手册
+* [ELF][12] 与 [DWARF][13]的维基百科
+* [Dwarf Debugging Standard home page][14],这里有很棒的 DWARF 教程与 DWARF 标准,作者是 Michael Eager。第二版基于 GCC 也许更能吸引你。
+* [libdwarf home page][15],这里可以下载到 libwarf 的完整库与参考手册
+* [BFD documentation][16]
+
+* * *
+
+--------------------------------------------------------------------------------
+
+via: http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information
+
+作者:[Eli Bendersky][a]
+译者:[YYforymj](https://github.com/YYforymj)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://eli.thegreenplace.net/
+[1]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id1
+[2]:http://dwarfstd.org/
+[3]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id2
+[4]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id3
+[5]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id4
+[6]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id5
+[7]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id6
+[8]:http://eli.thegreenplace.net/tag/articles
+[9]:http://eli.thegreenplace.net/tag/debuggers
+[10]:http://eli.thegreenplace.net/tag/programming
+[11]:http://www.gnu.org/software/binutils/
+[12]:http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
+[13]:http://en.wikipedia.org/wiki/DWARF
+[14]:http://dwarfstd.org/
+[15]:http://reality.sgiweb.org/davea/dwarf.html
+[16]:http://sourceware.org/binutils/docs-2.21/bfd/index.html
+[17]:http://en.wikipedia.org/wiki/DWARF
+[18]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id7
+[19]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id8
+[20]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id9
+[21]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id10
+[22]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id11
+[23]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information#id12
+[24]:https://github.com/eliben/code-for-blog/blob/master/2011/dwarf_get_func_addr.c
+[25]:http://eli.thegreenplace.net/2011/02/07/how-debuggers-work-part-3-debugging-information
+[26]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1/
+[27]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints/
diff --git a/translated/tech/20150316 Linux on UEFI A Quick Installation Guide.md b/translated/tech/20150316 Linux on UEFI A Quick Installation Guide.md
new file mode 100644
index 0000000000..6ec82d8308
--- /dev/null
+++ b/translated/tech/20150316 Linux on UEFI A Quick Installation Guide.md
@@ -0,0 +1,238 @@
+在 UEFI 模式下安装 Linux:快速安装指南
+============================================================
+
+
+本页面可以免费浏览,也没有烦人的外部广告;不过,我的确为此付出了时间,网站托管也要花钱。如果您觉得本页面对您有帮助,请考虑小额捐款,帮助网站继续运营。谢谢!
+
+### 引言
+
+几年来,一种新的固件技术一直潜伏在大多数普通用户中。该技术被称为 [可扩展固件接口(EFI)][29](译者注:Extensible Firmware Interface), 或更新的统一可扩展固件接口(UEFI, 本质上是 EFI 2. _x_ )(译者注:Unified EFI),它已经开始替代古老的[基本输入/输出系统(BIOS)][30](译者注:Basic Input/Output System)固件技术,有经验的计算机用户或多或少都有些熟悉。
+
+本页面面向 Linux 用户,是使用 EFI 技术的一个快速介绍,其中包括有关开始将 Linux 安装到此类计算机上的建议。不幸的是,EFI 是一个密集的话题;EFI 软件本身是复杂的,许多实现有系统特定的怪异甚至是缺陷。因此,我无法在一个页面上描述在 EFI 计算机上安装和使用 Linux 的一切。我希望你能将本页面作为一个有用的起点,尽管如此,每个部分以及末尾[参考文献][31]部分的链接都会指向其他文档。
+
+#### 目录
+
+* [引言][18]
+* [你的计算机是否使用 EFI 技术?][19]
+* [你的发行版是否支持 EFI 技术?][20]
+* [准备安装 Linux][21]
+* [安装 Linux][22]
+* [修复安装后的问题][23]
+* [哎呀:将传统模式下安装的引导转为 EFI 模式下的引导][24]
+* [参考文献][25]
+
+### 你的计算机是否使用 EFI 技术?
+
+EFI 是一种_固件_,意味着它是内置于计算机中、处理低级任务的软件。最重要的是,固件控制着计算机的引导过程,因此基于 EFI 的计算机与基于 BIOS 的计算机的引导过程不同。(有关此规则的例外之处稍后再说。)这种差异会使操作系统安装介质的设计变得非常复杂,但是一旦系统安装完毕运行起来,它对计算机的日常使用几乎没有影响。请注意,大多数制造商用术语 “BIOS” 来称呼他们的 EFI。我认为这种用法很容易混淆,所以我避免这样用;在我看来,EFI 和 BIOS 是两种不同类型的固件。
+
+**注意:**苹果公司的 Mac 使用的 EFI 在许多方面是不寻常的。尽管本页面的大部分内容同样适用于 Mac,但有些细节上的出入,特别是在设置 EFI 引导加载程序的时候。这个任务最好在 OS X 上进行,使用 Mac 的 [bless utility][49]工具,我不在此做过多描述。
+
+自从 2006 年第一次推出以来,EFI 就已被用于基于英特尔的 Mac 上。从 2012 年底开始,大多数预装 Windows 8 或更高版本系统的计算机默认以 UEFI 模式启动;实际上大多数 PC 从 2011 年中期就开始使用 UEFI,尽管默认情况下它们可能不以 EFI 模式启动。2011 年前销售的 PC 大都默认以 BIOS 模式启动,但也有一些支持 EFI。
+
+如果你不确定你的计算机是否支持 EFI,则应查看固件设置实用程序,并参考用户手册中关于 _EFI_、_UEFI_ 以及 _legacy booting_ 的内容。(快速的办法是在用户手册的 PDF 文件中搜索这些词。)如果你没有找到类似的说明,你的计算机可能使用老式的(“legacy”)BIOS 引导;但如果找到了这些术语,几乎可以肯定它使用了 EFI 技术。你还可以尝试启动_只_带有 EFI 模式引导加载器的安装介质,用 [rEFInd][50] 制作的 USB 闪存驱动器或 CD-R 镜像就是不错的测试选择。
+
+在继续之前,你应当了解:大多数 x86 和 x86-64 架构计算机上的 EFI 都包含一个叫做_兼容支持模块(CSM)_(译者注:Compatibility Support Module)的组件,这使得 EFI 能够使用旧的 BIOS 风格的引导机制来引导操作系统。这种向后兼容非常方便,但也会导致一些意外情况,因为计算机以 EFI 模式引导还是以 BIOS 模式(也称为 CSM 或 legacy 模式)引导,既没有标准的控制规范,也没有统一的用户界面。特别地,你的 Linux 安装介质非常容易意外地以 BIOS/CSM/legacy 模式启动,从而导致 Linux 以 BIOS/CSM/legacy 模式被安装。如果 Linux 是唯一的操作系统,这也能正常工作;但如果要与 EFI 模式下的 Windows 组成双启动,就会非常麻烦。(反过来的问题也可能发生。)以下部分将帮助你以正确的模式引导安装程序。如果你在阅读这篇文章之前就已经以 BIOS 模式安装了 Linux,并且希望切换引导模式,请阅读后续章节[哎呀:将传统模式下安装的引导转为 EFI 模式下的引导][51]。
+
+UEFI 还有一个附加功能值得一提:_Secure Boot_(译者注:直译为安全启动)。此特性旨在最大限度地降低计算机受到 _boot kit_ 病毒感染的风险,这是一类感染计算机引导加载程序的恶意软件。Boot kit 很难检测和删除,因此阻止它们运行刻不容缓。微软公司要求所有带有 Windows 8 认证标志的台式机和笔记本电脑启用 Secure Boot。这一配置使 Linux 的安装变得复杂,尽管有些发行版可以较好地处理这个问题。不要将 Secure Boot 与 EFI 或 UEFI 混淆:支持 EFI 的计算机不一定支持 Secure Boot,而且支持 EFI 的 x86-64 计算机也可以禁用 Secure Boot。微软允许用户在通过 Windows 8 认证的 x86 和 x86-64 计算机上禁用 Secure Boot;然而装有 Windows 8 的 ARM 计算机则相反,它们必须_不允许_禁用 Secure Boot。幸运的是,基于 ARM 的 Windows 8 计算机目前很少见。我建议避免使用它们。
+
+### 你的发行版是否支持 EFI 技术?
+
+大多数 Linux 发行版多年来一直支持 EFI。然而,不同的发行版对 EFI 的支持程度不同。大多数主流发行版(Fedora,OpenSUSE,Ubuntu 等)都能很好的支持 EFI,包括对 Secure Boot 的支持。另外一些“自定义”的发行版,比如 Gentoo,对 EFI 的支持较弱,但他们的性质使其很容易添加 EFI 支持。事实上,可以向_任意_ Linux 发行版添加 EFI 支持:你需要安装它(即使在 BIOS 模式下),然后在计算机上安装 EFI 引导加载程序。有关如何执行此操作的信息,请参阅[哎呀:将传统模式下安装的引导转为 EFI 模式下的引导][52]部分。
+
+你应当查看发行版的功能列表,来确定它是否支持 EFI。你还应当注意你的发行版对 Secure Boot 的支持情况,特别是如果你打算和 Windows 8 组成双启动。请注意,即使正式支持 Secure Boot 的发行版也可能要求禁用此功能,因为 Linux 对 Secure Boot 的支持通常很差劲,或者导致意外情况的发生。
+
+### 准备安装 Linux
+
+下面几个准备步骤有助于在 EFI 计算机上 Linux 的安装,使其更加顺利:
+
+1. **升级固件** — 有些 EFI 并不完整,但硬件制造商偶尔会发布其固件的更新。因此我建议你将固件升级到最新可用的版本。如果你从论坛的帖子知道自己计算机的 EFI 有问题,你应当在安装 Linux 之前更新它,因为如果安装 Linux 之后更新固件,会有些问题需要额外的操作才能解决。另一方面,升级固件是有一定风险的,所以如果制造商提供了 EFI 支持,最好的办法就是按他们提供的方式进行升级。
+
+2. **了解如何使用固件** - 通常你可以通过在引导之初按 Del 键或某个功能键进入固件设置实用程序。按下开机键后尽快查看相关提示信息,或者依次尝试每个功能键。类似地,按 ESC 键或功能键通常可以进入固件内置的引导管理器,选择要启动的操作系统或外部设备。一些制造商把这些设置隐藏得越来越深。在某些情况下,如[此页面][32]所述,你可以在 Windows 8 内进入这些设置。
+
+3. **调整以下固件设置:**
+
+    * **快速启动** — 此功能通过在硬件初始化时抄近路来加快引导过程。这通常没什么问题,但有时会导致 USB 设备不被初始化,使计算机无法从 USB 闪存驱动器或类似设备启动。因此你_可以_先保持其启用,只在 Linux 安装程序启动遇到问题时再禁用它。请注意,此功能有时会以其他名字出现。在某些情况下,你要做的不是_禁用_快速启动,而是_启用_ USB 支持。
+
+ * **安全启动** — Fedora,OpenSUSE,Ubuntu 以及其他的发行版官方就支持 Secure Boot;但是如果在启动引导加载程序或内核时遇到问题,可能需要禁用此功能。不幸的是,没办法具体描述怎么禁用,因为不同计算机的设置方法也不同。请参阅[我的 Secure Boot 页面][1]获取更多关于此话题的信息。
+
+ **注意:** 一些教程说安装 Linux 时需要启用 BIOS/CSM/legacy 支持。通常情况下,这样做是错的。启用这些支持可以解决启动安装程序涉及的问题,但也会带来新的问题。以这种方式安装的教程通常通过引导修复来解决这些问题,但最好从一开始就做对。本页面提供了帮助你以 EFI 模式启动 Linux 安装程序的提示,从而避免以后的问题。
+
+ * **CSM/legacy 选项** — 如果你想以 EFI 模式安装,请_关闭_这些选项。一些教程推荐启用这些选项,有时这是必须的 —— 比如,有些附加视频卡需要在固件中启用 BIOS 模式。尽管如此,大多数情况下启用 CSM/legacy 支持只会无意中增加以 BIOS 模式启动 Linux 的风险,但你并_不想_这样。请注意,Secure Boot 和 CSM/legacy 选项有时会交织在一起,因此更改任一选项之后务必检查另一个。
+
+4. **禁用 Windows 的快速启动功能** — [这个页面][33]描述了如何禁用此功能,不禁用的话会导致文件系统损坏。请注意此功能与固件的快速启动不同。
+
+5. **检查分区表** — 使用 [GPT fdisk][34],parted 或其他任意分区工具检查磁盘分区。理想情况下,你应该创建一个包含每个分区确切起点和终点(以扇区为单位)的硬拷贝。这会是很有用的参考,特别是在安装时进行手动分区的时候。如果已经安装了 Windows,确定可以识别你的 [EFI 系统分区(ESP)][35],它是一个 FAT 分区,设置了“启动标记”(parted 或 Gparted)或在 gdisk 中有名为 EF00 的类别码。
+
+### 安装 Linux
+
+大部分 Linux 发行版都提供了足够的安装说明;然而我注意到了在 EFI 模式安装中的几个常见的绊脚石:
+
+* **确保使用正确位深的发行版** — EFI 引导加载器和 EFI 自身的位深必须相同。现代计算机通常是 64 位的,不过最初几代基于 Intel 的 Mac、一些现代的平板电脑和变形本,以及少数鲜为人知的电脑使用 32 位 EFI。虽然可以把 32 位 EFI 引导加载程序添加到 32 位发行版中,但我还没有遇到过正式支持 32 位 EFI 的 Linux 发行版。(我的 [Managing EFI Boot Loaders for Linux][36] 页面介绍了这些引导加载程序;理解了这些原则,你就可以修改 32 位发行版的安装程序,尽管这不是初学者该做的事。)在 64 位 EFI 的计算机上安装 32 位发行版最让人头疼,我不在这里描述这一过程;在具有 64 位 EFI 的计算机上,你应当使用 64 位的发行版。
+
+* **正确准备引导介质** — 把 .iso 镜像写入 USB 闪存驱动器的第三方工具(比如 unetbootin)经常无法创建正确的 EFI 模式引导项。我建议按照发行版维护者的建议创建 USB 闪存驱动器;如果没有类似的建议,可以使用 Linux 的 dd 工具,例如执行 dd if=image.iso of=/dev/sdc 把镜像写入识别为 /dev/sdc 的 USB 闪存驱动器。至于 Windows,有 [WinDD][37] 和 [dd for windows][38],但我从没测试过它们。请注意,用不兼容 EFI 的工具创建安装介质,是导致误以 BIOS 模式安装、事后必须费力纠正的最大原因之一,所以不要忽视这一点!
+
+* **备份 ESP 分区** — 如果计算机已经存在 Windows 或者其他的操作系统,我建议在安装 Linux 之前备份你的 ESP 分区。尽管 Linux _不应_ 损坏 ESP 分区已有的文件,但似乎这时不时发生。发生这种事情时备份会有很大用处。只需简单的文件级的备份(使用 cp,tar,或者 zip 类似的工具)就足够了。
+
+* **以 EFI 模式启动** — 以 BIOS/CSM/legacy 模式引导 Linux 安装程序的意外非常容易发生,特别是当固件启用 CSM/legacy 选项时。下面一些提示可以帮助你避免此问题:
+
+* 进入 Linux shell 环境,执行 ls /sys/firmware/efi 验证当前是否处于 EFI 模式。如果你看到一系列文件和目录,表明你已经以 EFI 模式启动,可以忽略以下多余的提示;如果没有,表明你是以 BIOS 模式启动的,应当重新检查你的设置。(本列表之后给出了一个示例命令。)
+
+* 使用固件内置的引导管理器(你应该已经知道在哪;请参阅[了解如何使用固件][26])使之以 EFI 模式启动。一般你会看到 CD-R 或 USB 闪存驱动器两个选项,其中一个选项包括 _EFI_ 或 _UEFI_ 字样的描述,另一个不包括。使用 EFI/UEFI 选项来启动介质。
+
+* 禁用安全启动 - 即使你使用的发行版官方支持 Secure Boot,有时他们不能生效。在这种情况下,计算机会静默的转到下一个启动项,它可能是启动介质的 BIOS 模式,导致你以 BIOS 模式启动。请参阅[我的 Secure Boot 页面][27]以得到禁用 Secure Boot 的相关提示。
+
+* 如果 Linux 安装程序总是无法以 EFI 模式启动,试试用我的 [rEFInd boot manager][28] 制作的 USB 闪存驱动器或 CD-R。如果 rEFInd 启动成功,它保证是以 EFI 模式运行的,而且在基于 UEFI 的 PC 上,它只显示 EFI 模式的引导项,因此若您启动到 Linux 安装程序,则应处于 EFI 模式。(但是在 Mac 上,除了 EFI 模式选项之外,rEFInd 还显示 BIOS 模式的引导项。)
+
+* **准备 ESP 分区** — 除了 Mac 以外,EFI 都使用 ESP 分区来保存引导加载程序。如果你的计算机已经安装了 Windows,那么 ESP 分区就已存在,在 Linux 中直接使用即可。如果不是这样,我建议创建一个大小为 550 MB 的 ESP 分区。(如果已有的 ESP 分区比这小,别担心,直接用就行。)在此分区上创建一个 FAT32 文件系统。如果你使用 GParted 或 parted 准备 ESP 分区,记得给它一个“启动标记”;如果你使用 GPT fdisk(gdisk、cgdisk 或 sgdisk),记得给它一个 EF00 类型码。有些安装程序会创建一个较小的 ESP 分区,并将其设置为 FAT16 文件系统。这样也能正常工作,但如果之后需要重装 Windows,其安装程序无法识别 FAT16 的 ESP 分区,你就得将其备份后转为 FAT32。(本列表之后的示例命令演示了 ESP 的创建。)
+
+* **使用 ESP 分区** — 不同发行版的安装程序以不同的方式辨识 ESP 分区。比如,Debian 和 Ubuntu 的某些版本把 ESP 分区称为“EFI boot partition”,而且不会明确显示它的挂载点(尽管它会在后台挂载);但是有些发行版,像 Arch 或 Gentoo,需要你去手动挂载。尽管将 ESP 分区挂载到 /boot 进行相应配置后可以正常工作,特别是当你想使用 gummiboot 或 ELILO(译者注:gummiboot 和 ELILO 都是 EFI 引导工具)时,但是在 Linux 中最标准的 ESP 分区挂载点是 /boot/efi。某些发行版的 /boot 不能用 FAT 分区。因此,当你设置 ESP 分区挂载点时,请将其设置为 /boot/efi。除非 ESP 分区没有,否则_不要_为其新建文件系统 — 如果已经安装 Windows 或其他操作系统,它们的引导文件都在 ESP 分区里,新建文件系统会销毁这些文件。
+
+* **设置引导程序的位置** — 某些发行版会询问将引导程序(GRUB)装到何处。如果 ESP 分区按上述内容被正确标记,不必理会此问题,但有些发行版仍会询问。请尝试使用 ESP 分区。
+
+* **其他分区** — 除了 ESP 分区,不再需要其他的特殊分区;你可以设置 根(/)分区,swap 分区,/home 分区,或者其他你想在 BIOS 模式下安装时使用的分区。请注意 EFI 模式下_不需要设置_[BIOS 启动分区][39],所以如果安装程序提示你需要它,意味着你可能意外的进入了 BIOS 模式。另一方面,如果你创建了 BIOS 启动分区,会更灵活,因为你可以安装 BIOS 模式下的 GRUB,然后以任意模式(EFI模式 或 BIOS模式)引导。
+
+* **解决无显示问题** — 2013 年,许多人在 EFI 模式下经常遇到(之后出现的频率逐渐降低)无显示的问题。有时可以在命令行下通过给内核添加 nomodeset 参数解决这一问题。在 GRUB 界面按 e 键会打开一个简易文本编辑器。大多数情况下你需要搜索有关此问题的更多信息,因为此问题更多是由特定硬件引起的。
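+
+下面的示例命令把上文列表中的两个要点串了起来(仅为演示草稿:/dev/sdX 为占位符,分区命令会改动磁盘,请先确认目标设备):
+
+```
+# 验证当前是否以 EFI 模式启动(目录存在即为 EFI 模式)
+ls /sys/firmware/efi >/dev/null 2>&1 && echo "EFI mode" || echo "BIOS mode"
+
+# 用 parted 在 GPT 磁盘上创建并标记一个 550 MB 的 ESP 分区(示例)
+parted /dev/sdX mkpart ESP fat32 1MiB 551MiB
+parted /dev/sdX set 1 esp on      # 旧版 parted 不识别 esp 标志时,可改用:set 1 boot on
+mkfs.vfat -F 32 /dev/sdX1         # 为 ESP 创建 FAT32 文件系统
+```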
+
+在某些情况下,你可能不得不以 BIOS 模式安装 Linux。但你可以手动安装 EFI 引导程序让 Linux 以 EFI 模式启动。请参阅 [Managing EFI Boot Loaders for Linux][53] 页面获取更多有关它们以及如何安装的可用信息。
+
+### 解决安装后的问题
+
+如果 Linux 无法在 EFI 模式下工作,但在 BIOS 模式下成功了,那么你可以完全放弃 EFI 模式。在只有 Linux 的计算机上这非常简单;安装 BIOS 引导程序即可(如果你是在 BIOS 模式下安装的,引导程序也应随之装好)。如果是和 EFI 下的 Windows 组成双系统,最简单的方法是安装我的[rEFInd boot manager][54]。在 Windows 上安装它,然后编辑 refind.conf 文件:取消注释 scanfor 一行,并确保拥有 hdbios 选项。这样 rEFInd 在引导时会重定向到 BIOS 模式的引导项。
+
+如果重启后计算机直接进入了 Windows,很可能是 Linux 的引导程序或管理器安装不正确。(但是应当首先尝试禁用 Secure Boot;之前提到过,它经常引发各种问题。)下面是关于此问题的几种可能的解决方案:
+
+* **使用 efibootmgr** — 你可以以 _EFI 模式_引导一张 Linux 急救盘,使用 efibootmgr 实用工具重新注册你的 Linux 引导程序,如[这里][40]所述。(本列表之后给出了一组示例命令。)
+
+* **使用 Windows 上的 bcdedit** — 在 Windows 管理员命令提示符窗口中,输入 bcdedit /set {bootmgr}path \EFI\fedora\grubx64.efi 会用 ESP 分区的 EFI/fedora/grubx64.efi 文件作为默认的引导。根据需要更改此路径,指向你想设置的引导文件。如果你启用了 Secure Boot,需要设置 shim.efi,shimx64.efi 或者 PreLoader.efi(不管有哪个)为引导而不是 grubx64.efi。
+
+* **安装 rEFInd** — 有时候 rEFInd 可以解决这个问题。我推荐先使用 [CD-R 或者 USB 闪存驱动器][41]进行测试;如果 Linux 能够启动,再安装 Debian 软件包、RPM 包或 .zip 包。(请注意,你可能需要在高亮的 Linux vmlinuz* 选项上按两次 F2 或 Insert 来编辑启动项。如果你的 /boot 是单独分区,这就更有必要了,因为这种情况下 rEFInd 无法自动找到根(/)分区,也就无法把它作为参数传给内核。)
+
+* **修复引导** — Ubuntu 的[引导修复实用工具][42]可以自动修复一些问题;然而,我建议只在 Ubuntu 和 相关的发行版上使用,比如 Mint。有时候,有必要通过高级选项备份并替换 Windows 的引导。
+
+* **劫持 Windows 引导程序** — 有些不完整的 EFI 只会引导 ESP 分区上的 EFI/Microsoft/Boot/bootmgfw.efi 这个 Windows 引导文件。这种情况下,你可能需要把这个引导文件改名(我建议将其移动到上级目录 EFI/Microsoft/bootmgfw.efi),然后把你首选的引导程序复制到原来的位置。(大多数发行版会在 ESP 的子目录放置 GRUB 的副本,例如 Ubuntu 的 EFI/ubuntu、Fedora 的 EFI/fedora。)请注意,此方法是不光彩的破解行为,有用户反映 Windows 会替换回自己的引导程序,所以它不是 100% 有效。然而,这是在不完整的 EFI 上唯一可行的办法。在尝试之前,我建议你先升级固件,并重新注册自己的引导程序:Linux 上用 efibootmgr,Windows 上用 bcdedit。
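+
+作为上面第一种办法(efibootmgr)的示意(磁盘、分区与路径均为示例,请按你的发行版与实际分区调整):
+
+```
+# 为 ESP 上已有的引导程序注册一个 EFI 引导项
+efibootmgr -c -d /dev/sda -p 1 -L "Fedora" -l '\EFI\fedora\grubx64.efi'
+
+# 查看现有引导项,并调整引导顺序
+efibootmgr -v
+efibootmgr -o 0000,0001
+```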
+
+有关引导程序的其他类型的问题 - 如果 GRUB(或者你的发行版默认的其他引导程序或引导管理器)没有引导操作系统,你必须修复这个问题。因为 GRUB 2 引导 Windows 时非常挑剔,所以 Windows 经常启动失败。在某些情况下,Secure Boot 会加剧这个问题。请参阅[我的 GRUB 2 页面][55]获取一个引导 Windows 的 GRUB 2 示例。还会有很多原因导致 Linux 引导出现问题,类似于 BIOS 模式下的情况,所以我没有全部写出来。
+
+尽管 GRUB 2 使用很普遍,但我对它的评价却不高 - 它很复杂,而且难以配置和使用。因此,如果你在使用 GRUB 的时候遇到了问题,我的第一反应就是用别的东西代替。[我的 EFI 引导程序页面][56]有其他的选择。其中包括我的 [rEFInd boot manager][57],它除了能够让许多发行版上的 GRUB 2 工作,也更容易安装和维护 - 但是它还不能完全代替 GRUB 2。
+
+除此之外,EFI 引导的问题可能很奇怪,所以你需要去论坛发帖求助。尽量将问题描述完整。[Boot Info Script][58] 可帮助你提供有用的信息 - 运行此脚本,将生成的名为 RESULTS.txt 的文件粘贴到论坛的帖子上。一定要将文本粘贴到 [code] 和 [/code] 之间;不然会遭人埋怨。或者将 RESULTS.txt 文件上传到 pastebin 网站上,比如 [pastebin.com][59],然后将网站给你的 URL 地址发布到论坛。
+
+### 哎呀:将传统模式下安装的引导转为 EFI 模式下的引导
+
+**警告:**这些指南主要用于基于 UEFI 的 PC。如果你的 Mac 已经安装了 BIOS 模式下的 Linux,但想以 EFI 模式启动 Linux,可以_在 OS X_ 中安装引导程序。rEFInd(或者旧式的 rEFIt)是 Mac 上的常用选择,但 GRUB 可以做的更多。
+
+论坛上有很多人看了错误的教程,在已经存在 EFI 模式的 Windows 的情况下,安装了 BIOS 引导的 Linux,这一问题在 2015 年初很普遍。这样配置效果很不好,因为大多数 EFI 很难在两种模式之间切换,而且 GRUB 也无法胜任这项工作。你可能会遇到不完整的 EFI 无法启动外部介质的情况,也可能遇到 EFI 模式下的显示问题,或者其他问题。
+
+如前所述,在[解决安装后的问题][60]部分,解决办法之一就是_在 Windows_ 上安装 rEFInd,将其配置为支持 BIOS 模式引导。然后可以通过 rEFInd 和 chainload 引导 BIOS 模式下的 GRUB。在 Linux 上遇到 EFI 特定的问题时,例如无法使用显卡,我建议你使用这个办法修复。如果你没有这样的 EFI 特定的问题,在 Windows 中安装 rEFInd 和合适的 EFI 文件系统驱动可以让 Linux 直接以 EFI 模式启动。这个解决方案很完美,它和我下面描述的内容等同。
+
+大多数情况下,最好将 Linux 配置为以 EFI 模式启动。有很多办法可以做到,但最好的是以 EFI 模式引导 Linux(或者,可以想到,Windows,或者一个 EFI shell)来注册首选引导管理器。实现这一目标的方法如下:
+
+1. 下载适用于 USB 闪存驱动器或 CD-R 的 [rEFInd boot manager][43]。
+2. 从下载的镜像文件准备安装介质。可以在计算机上准备,不管是 EFI 还是 BIOS 的计算机都可以(或者在其他平台上使用其他方法)。
+3. 如果你还没有这样做,[请禁用 Secure Boot][44]。因为 rEFInd CD-R 和 USB 镜像不支持 Secure Boot,所以这很必要,你可以以后重新启用它。
+4. 在目标计算机上启动 rEFInd。如前所述,你可能需要调整固件设置,并使用内置引导管理器选择要引导的介质。你选择的那一项需要包含 _UEFI_ 这样描述的字符串。
+
+5. 在 rEFInd 上测试引导项。你应该至少看到一个启动 Linux 内核的选项(名字含有 vmlinuz 这样的字符串)。有两种方法可以启动它:
+ * 如果你_没有_独立的 /boot 分区,只需简单的在高亮选项上按回车键。Linux 就会启动。
+ * 如果你_确定有_一个单独的 /boot 分区,按两次 Insert 或 F2 键。这会打开一个编辑器,你可以用它编辑内核选项。以 root= 的形式添加选项来标识根(/)文件系统,比如根分区在 /dev/sda5 上,就添加 root=/dev/sda5。如果你不知道根文件系统在哪,就需要重启,先想办法弄清楚它。在少数情况下,你可能还需要添加其他内核选项,例如配置了 LVM(译者注:Logical Volume Manager,逻辑卷管理)的 Gentoo 就需要 dolvm 选项。
+6. Linux 一旦启动,就安装你想要的引导程序。rEFInd 的安装很简单,可以通过 RPM、Debian 软件包、PPA,或从 [rEFInd 下载页面][45]下载的二进制 .zip 文件进行安装。在 Ubuntu 及相关发行版上,Boot Repair 可以相对简单地修复你的 GRUB 设置,但这多少要碰点运气。(它通常工作良好,但有时也会把事情搞得一团糟。)其他选择都列在我的 [Managing EFI Boot Loaders for Linux][46] 页面上。
+7. 如果你想在 Secure Boot 激活的情况下引导,只需重启并启用它。但是,请注意,可能需要额外的安装步骤才能将引导程序设置为使用 Secure Boot。有关详细信息,请参阅[我的主题页面][47]或引导程序有关 Secure Boot 的文档资料。
+
+重启时,你可以看到刚才安装的引导程序。如果计算机进入了 BIOS 模式下的 GRUB,你应当进入固件禁用 BIOS/CSM/legacy 支持,或调整引导顺序。如果计算机直接进入了 Windows,那么你应当阅读前一部分,[解决安装后的问题][61]。
+
+你可能想或需要调整你的配置。通常是为了看到额外的引导选项,或者隐藏某些选项。请参阅引导程序的文档资料,以了解如何进行这些更改。
+
+### 参考和附加信息
+
+
+* **信息网页**
+ * 我的 [Managing EFI Boot Loaders for Linux][2] 页面含有可用的 EFI 引导程序和引导管理器。
+ * [OS X's bless tool 的手册页][3] 页面在设置 OS X 平台上的引导程序或引导管理器时可能会很有用。
+ * [EFI 启动过程][4] 描述了 EFI 是启动时的大致框架。
+ * [Arch Linux UEFI wiki page][5] 有大量关于 UEFI 和 Linux 的详细信息。
+ * 亚当·威廉姆森写的 [什么是 EFI,它是怎么工作的][6]。
+ * [这个页面][7] 描述了如何从 Windows 8 调整 EFI 的固件设置。
+ * 马修·J·加勒特是 Shim 引导程序的开发者,此程序支持 Secure Boot,他维护的[博客][8]经常更新有关 EFI 的问题。
+ * 如果你对 EFI 软件的开发感兴趣,我的 [Programming for EFI][9] 页面可以为你起步助力。
+* **附加程序**
+ * [rEFInd 官网][10]
+ * [gummiboot 官网][11]
+ * [ELILO 官网][12]
+ * [GRUB 官网][13]
+ * [GPT fdisk 分区软件官网][14]
+ * Ubuntu 的 [Boot Repair 实用工具][15]可帮助解决一些引导问题
+* **交流**
+ * [Sourceforge 上的 rEFInd 交流论坛][16]是 rEFInd 用户互相交流或与我联系的一种方法。
+ * Pastebin 网站,比如 [http://pastebin.com][17], 是在 Web 论坛上与其他用户交换大量文本的一种便捷的方法。
+--------------------------------------------------------------------------------
+
+via: http://www.rodsbooks.com/linux-uefi/
+
+作者:[Roderick W. Smith][a]
+译者:[fuowang](https://github.com/fuowang)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:rodsmith@rodsbooks.com
+[1]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html#disable
+[2]:http://www.rodsbooks.com/efi-bootloaders/
+[3]:http://ss64.com/osx/bless.html
+[4]:http://homepage.ntlworld.com/jonathan.deboynepollard/FGA/efi-boot-process.html
+[5]:https://wiki.archlinux.org/index.php/Unified_Extensible_Firmware_Interface
+[6]:https://www.happyassassin.net/2014/01/25/uefi-boot-how-does-that-actually-work-then/
+[7]:http://www.eightforums.com/tutorials/20256-uefi-firmware-settings-boot-inside-windows-8-a.html
+[8]:http://mjg59.dreamwidth.org/
+[9]:http://www.rodsbooks.com/efi-programming/
+[10]:http://www.rodsbooks.com/refind/
+[11]:http://freedesktop.org/wiki/Software/gummiboot
+[12]:http://elilo.sourceforge.net/
+[13]:http://www.gnu.org/software/grub/
+[14]:http://www.rodsbooks.com/gdisk/
+[15]:https://help.ubuntu.com/community/Boot-Repair
+[16]:https://sourceforge.net/p/refind/discussion/
+[17]:http://pastebin.com/
+[18]:http://www.rodsbooks.com/linux-uefi/#intro
+[19]:http://www.rodsbooks.com/linux-uefi/#isitefi
+[20]:http://www.rodsbooks.com/linux-uefi/#distributions
+[21]:http://www.rodsbooks.com/linux-uefi/#preparing
+[22]:http://www.rodsbooks.com/linux-uefi/#installing
+[23]:http://www.rodsbooks.com/linux-uefi/#troubleshooting
+[24]:http://www.rodsbooks.com/linux-uefi/#oops
+[25]:http://www.rodsbooks.com/linux-uefi/#references
+[26]:http://www.rodsbooks.com/linux-uefi/#using_firmware
+[27]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html#disable
+[28]:http://www.rodsbooks.com/refind/getting.html
+[29]:https://en.wikipedia.org/wiki/Uefi
+[30]:https://en.wikipedia.org/wiki/BIOS
+[31]:http://www.rodsbooks.com/linux-uefi/#references
+[32]:http://www.eightforums.com/tutorials/20256-uefi-firmware-settings-boot-inside-windows-8-a.html
+[33]:http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html
+[34]:http://www.rodsbooks.com/gdisk/
+[35]:http://en.wikipedia.org/wiki/EFI_System_partition
+[36]:http://www.rodsbooks.com/efi-bootloaders
+[37]:https://sourceforge.net/projects/windd/
+[38]:http://www.chrysocome.net/dd
+[39]:https://en.wikipedia.org/wiki/BIOS_Boot_partition
+[40]:http://www.rodsbooks.com/efi-bootloaders/installation.html
+[41]:http://www.rodsbooks.com/refind/getting.html
+[42]:https://help.ubuntu.com/community/Boot-Repair
+[43]:http://www.rodsbooks.com/refind/getting.html
+[44]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html#disable
+[45]:http://www.rodsbooks.com/refind/getting.html
+[46]:http://www.rodsbooks.com/efi-bootloaders/
+[47]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html
+[48]:mailto:rodsmith@rodsbooks.com
+[49]:http://ss64.com/osx/bless.html
+[50]:http://www.rodsbooks.com/refind/getting.html
+[51]:http://www.rodsbooks.com/linux-uefi/#oops
+[52]:http://www.rodsbooks.com/linux-uefi/#oops
+[53]:http://www.rodsbooks.com/efi-bootloaders/
+[54]:http://www.rodsbooks.com/refind/
+[55]:http://www.rodsbooks.com/efi-bootloaders/grub2.html
+[56]:http://www.rodsbooks.com/efi-bootloaders
+[57]:http://www.rodsbooks.com/refind/
+[58]:http://sourceforge.net/projects/bootinfoscript/
+[59]:http://pastebin.com/
+[60]:http://www.rodsbooks.com/linux-uefi/#troubleshooting
+[61]:http://www.rodsbooks.com/linux-uefi/#troubleshooting
diff --git a/translated/tech/20161025 GitLab Workflow An Overview.md b/translated/tech/20161025 GitLab Workflow An Overview.md
new file mode 100644
index 0000000000..70abe479f0
--- /dev/null
+++ b/translated/tech/20161025 GitLab Workflow An Overview.md
@@ -0,0 +1,501 @@
+GitLab工作流:概览
+======
+
+GitLab 是一个基于 git 的仓库管理程序,也是一个便于软件开发的强大而完整的应用。
+
+GitLab 拥有一个对新用户友好的界面,图形界面与命令行兼备,能让你的工作更有效率。GitLab 不仅是对开发者有用的工具,它还可以集成到你的整个团队,让每个人都汇聚在同一个平台上。
+
+GitLab 工作流的逻辑符合使用者的思维,使整个平台更加易用。相信我,用过一次,你就离不开它了!
+
+* * *
+
+### 在这篇文章中
+
+* [GitLab工作流][53]
+ * [软件开发阶段][22]
+* [GitLab工单跟踪][52]
+ * [秘密工单][21]
+ * [截止日期][20]
+    * [受托者][19]
+ * [标签][18]
+    * [工单权重][17]
+ * [GitLab工单看板][16]
+* [GitLab中的代码审查][51]
+ * [第一次提交][15]
+ * [合并请求][14]
+ * [WIP MR][13]
+ * [审查][12]
+* [建立,测试以及部署][50]
+ * [Koding][11]
+ * [用户案例][10]
+* [反馈: 循环分析][49]
+* [增强][48]
+ * [工单 & MR模版][9]
+ * [里程碑][8]
+* [高级技巧][47]
+ * [对于工单 & MRs][7]
+ * [订阅][3]
+ * [添加 TO-DO][2]
+ * [搜索你的工单 & MRs][1]
+ * [转移工单][6]
+ * [代码片段][5]
+* [GitLab 工作流 用户案例 梗概][46]
+* [尾声][45]
+
+* * *
+
+### GitLab 工作流
+
+**GitLab 工作流**是使用 GitLab 作为平台管理代码时,一套依照软件开发生命周期安排的、合乎逻辑的流程。
+
+GitLab 工作流把 [GitLab Flow][97] 纳入考量。GitLab Flow 是一系列**基于 Git** 的方法与策略,为版本管理(例如**分支策略**、**Git 最佳实践**等)提供保障。
+
+通过 GitLab 工作流,可以很方便地提升团队的工作效率与凝聚力。这种提升,从引入一个新项目开始,一直到发布这个项目、成为一个产品,都有所体现。这就是我们所说的“如何在 10 步之内,以最快的速度把一个点子变成一个产品”。
+
+![FROM IDEA TO PRODUCTION IN 10 STEPS](https://about.gitlab.com/images/blogimages/idea-to-production-10-steps.png)
+
+### 软件开发阶段
+
+一般情况下,软件开发经过10个主要阶段;GitLab为这10个阶段依次提供了解决方案:
+
+1. **IDEA:** 每一个项目都从一个点子开始,而点子通常产生于一次闲聊。在这个阶段,GitLab 集成了 [Mattermost][44]。
+2. **ISSUE:** 讨论一个点子最有效的方法,就是为它创建一个工单。你的团队和合作伙伴可以在[工单追踪器][43]中帮助你完善这个点子。
+3. **PLAN:** 一旦讨论达成一致,就该开始编码了。但是等等!首先,我们需要排定优先级、组织好工作流。对此,我们可以使用[工单看板][42]。
+4. **CODE:** 现在,一切准备就绪,我们可以开始写代码了。
+5. **COMMIT:** 当我们对草稿感到满意时,就可以把代码提交到功能分支,纳入版本控制。
+6. **TEST:** 通过 [GitLab CI][41],我们可以运行脚本来构建和测试我们的应用。
+7. **REVIEW:** 一旦脚本成功运行,构建和测试通过,我们就可以进行[代码复审][40]并批准合并。
+8. **STAGING:** 现在是时候[把代码部署到演示环境][39],检查一下是否一切如我们预期的那样顺畅——或者是否还需要修改。
+9. **PRODUCTION:** 当一切运行通畅,就可以[部署到生产环境][38]了!
+10. **FEEDBACK**: 现在该回过头来看看项目中有哪些部分值得改进了。我们使用[循环分析][37]来获取当前项目关键阶段的反馈。
+
+简单浏览这些步骤,我们可以发现,为这些步骤提供强大的工具支持是十分重要的。在接下来的部分,我们对 GitLab 提供的工具做一个简单的概览。
+
+### GitLab 工单追踪
+
+GitLab 有一个强大的工单追踪系统,允许你、你的团队以及你的合作者分享和讨论建议。
+
+![issue tracker - view list](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-tracker-list-view.png)
+
+工单是 GitLab 工作流的第一个重要特性。[以工单讨论为开始][95],是追踪一个点子如何演变的最好方式。
+
+这十分有利于:
+* 讨论点子
+* 提交功能建议
+* 提问题
+* 提交bug
+* 获取支持
+* 精细化新代码的引入
+
+每一个托管在 GitLab 上的项目都有一个工单追踪器。在你的项目中找到 **Issues** > **New issue** 来创建一个新的工单。写一个标题来概括要讨论的主题,并用 [Markdown][94] 来描述它。参考一下[高级技巧][93],把工单描述写得更好。
+
+GitLab 工单追踪器还提供了一些额外的实用功能,使工单更易于管理和排定优先级。下面逐一介绍。
+
+![new issue - additional settings](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-features-view.png)
+
+### 秘密工单
+
+当你只想在团队内部讨论某个工单时,可以使用[秘密工单][92]。即使你的项目是公开的,这个工单也会保持私密。当一个不是本项目成员的人(即使其具有 [Reporter 级别][91]的权限)想要访问该工单的地址时,浏览器会返回一个 404 错误。
+
+### 截止日期
+
+每一个工单都允许你填写一个[截止日期][90]。有些团队的时间表很紧凑,需要一种为工单设置截止日期的方式,这正是截止日期功能的用途。
+
+当你有一个涉及多个工单的截止日期——比如说,一个新版本的发布、产品的启动,或者追踪团队任务——你可以使用[里程碑][89]。
+
+### 受托者
+
+任何时候,只要有人要负责完成工单中的工作,就可以把这个工单分配给那个人。你可以按需随时更换受托者。这个功能的设计思路是:受托者对这个工单负责,直到其将工单重新指派给其他人。
+
+这对于筛选每个受托者的工单也有帮助。
+
+### 标签
+
+GitLab 标签也是 GitLab 工作流的重要组成部分。你可以用它们来给工单分类、在工作流中标记所处阶段,并用[优先级标签][88]安排它们的优先次序。
+
+标签与 [GitLab 工单看板][87]配合使用,可以加快工程进度、组织好你的工作流。
+
+**New!** 你可以创建[组标签][86],从而在同一个组的所有项目中使用相同的标签。
+
+### 工单权重
+
+你可以为工单添加一个[权重][85],使其重要性更加清晰。例如 01-03 表示不太重要,07-09 表示非常重要,04-06 表示程度适中。你也可以和团队约定自己的权重标准。
+
+### GitLab工单看板
+
+在项目中,[GitLab 工单看板][84]是计划和组织工单的理想工具。
+
+看板由若干列表组成,每个列表对应一个标签;带有相应标签的工单以卡片的形式展示在对应的列表中。
+
+卡片可以在列表之间移动,被移动的卡片,其标签会依据其所进入的列表自动更新。
+
+![GitLab Issue Board](https://about.gitlab.com/images/blogimages/designing-issue-boards/issue-board.gif)
+
+**New!** 你也可以直接在看板中创建工单:点击列表顶部的按钮即可。这样创建的工单会自动带上所在列表对应的标签。
+**New!** 我们[最近宣布][83],每一个 GitLab 项目都可以拥有**多个工单看板**(仅 [GitLab Enterprise Edition][82] 支持);这是为不同工作流组织工单的最佳方式。
+
+![Multiple Issue Boards](https://about.gitlab.com/images/8_13/m_ib.gif)
+
+### 通过GitLab进行代码复审
+
+在工单中讨论过新的提议之后,就该动手写代码了。你在本地编写代码,完成第一个版本后,提交并推送到你的 GitLab 仓库。你基于 Git 的管理策略可以参考 [GitLab Flow][81] 加以改进。
+
+### 第一次提交
+
+在第一次提交的信息中,你可以加上相关的工单号。这样就把开发工作流的两个阶段联系了起来:工单本身,以及针对这个工单的第一次提交。
+
+如果你提交的代码和工单属于同一个项目,只需在提交信息(译者注:git commit message)中加上 `#xxx`,`xxx` 是工单号。如果它们不在同一个项目中,则可以使用工单的完整 URL(`https://gitlab.com///issues/`)。
+
+```
+git commit -m "this is my commit message. Ref #xxx"
+```
+
+或者
+
+```
+git commit -m "this is my commit message. Related to https://gitlab.com///issues/"
+```
+
+当然,你也可以把 `gitlab.com` 替换为你自己的 GitLab 实例的地址。
+
+**Note:** 把工单与第一次提交关联起来,可以让 [GitLab Cycle Analytics][80] 追踪你的进度:它衡量的是计划(**Plan**)阶段所花的时间,即从创建工单到第一次提交之间的间隔。
+
+### 合并请求
+
+一旦你把改动提交到功能分支,GitLab 就会识别到这次改动,并建议你创建一个合并请求(MR)。
+
+每一个 MR 都有一个标题(概括这次改动),以及一段用 [Markdown][79] 书写的描述。在描述中,你可以简要说明这个 MR 做了什么,通过链接提及相关的工单与 MR,还可以使用[关闭工单的模式][78],这样当 MR 被**合并**后,相关联的工单就会自动关闭。
+
+例如:
+
+```
+## 增加一个新页面
+
+这个 MR 会为这个项目创建一个 `readme.md`,其中包含这个应用的概览。
+
+Closes #xxx and https://gitlab.com///issues/
+
+预览:
+
+![preview the new page](#image-url)
+
+cc/ @Mary @Jane @John
+```
+
+当你创建了一个像上面那样带有描述的 MR,它将会:
+
+* 在合并时,关闭工单 `#xxx` 以及 `https://gitlab.com///issues/`
+* 展示一张图片
+* 提醒用户 `@Mary`、`@Jane` 和 `@John`
+
+你可以先把这个 MR 分配给自己,完成工作后,再把它分配给其他人来做代码复审。如有必要,它可以被重新分配多次,直到完成所有必要的复审。
+
+你也可以给 MR 加上标签,并为其指定一个[里程碑][77],以便管理。
+
+从 UI(而不是命令行)添加或修改文件并提交到新分支时,创建合并请求同样简单:只需勾选“以这些改动开始一个新的合并请求”复选框,提交改动后,GitLab 就会自动创建一个新的 MR。
+
+![commit to a feature branch and add a new MR from the UI](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/start-new-mr-edit-from-ui.png)
+
+**Note:** 在 MR 中添加[关闭工单的模式][76]十分重要,这样 [GitLab Cycle Analytics][75] 才能追踪你的项目进展:它衡量的是“代码”(**Code**)阶段的时间,即从第一次提交到创建合并请求之间的间隔。
+
+**New!** 我们开发了[审查应用(Review Apps)][74]:这个新功能可以把你的改动部署到一个动态环境中供你预览,每个分支、每个合并请求都会依据分支名得到自己的环境。看看[可用的示例][73]吧。
+
+### WIP MR
+
+WIP MR 的含义是 **工作过程中的合并请求**(Work In Progress),是 GitLab 中避免 MR 在准备就绪前被合并的一种技巧。只需在 MR 标题开头加上 `WIP:`,它就不会被合并,除非你把 `WIP:` 去掉。
+
+当改动准备好合并时,既可以编辑 MR 标题手动删除 `WIP:`,也可以使用 MR 描述下方提供的快捷方式一键移除。
+
+![WIP MR click to remove WIP from the title](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-wip-mr.png)
+
+**New!** 通过 [slash 命令][71] `/wip`,可以[快速给合并请求加上 WIP 标记][72]:只需在评论或 MR 描述中输入并提交即可。
+
+### 复审
+
+一旦你创建了合并请求,就该开始收集团队与合作方的反馈了。借助 UI 中的差异(diff)功能,你可以方便地添加行内评论,回复它们,或将其标记为已解决。
+
+点击行号,还可以获取指向某一行代码的链接。
+
+提交历史在 UI 中是可见的,你可以借此追踪文件的每一次改动,并逐行查看。
+
+![code review in MRs at GitLab](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-review.png)
+
+**New!** 遇到合并冲突时，你可以快速地[通过 UI 界面来解决][70]，也可以按需修改文件来修复冲突。
+
+![mr conflict resolution](https://about.gitlab.com/images/8_13/inlinemergeconflictresolution.gif)
+
+### 构建、测试以及部署
+
+[GitLab CI][69] 是一个强大的内建工具，用于[持续集成、持续交付和持续部署][68]：它可以按照你的期望运行一系列脚本。它的可能性是无穷的：就像是你自己的命令行在为你工作。
+
+它完全通过一个放在项目仓库中的 YAML 文件 `.gitlab-ci.yml` 来配置。在 Web 界面上，只要新建一个名为 `.gitlab-ci.yml` 的文件，就会出现一个下拉菜单，可以为不同的应用选择各种 CI 模版。
+
+![GitLab CI templates - dropdown menu](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-ci-template.png)
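+
+下面是一个最小化的 `.gitlab-ci.yml` 示意（任务名和脚本内容均为假设，仅用来说明文件格式）：
+
+```
+# 定义流水线的各个阶段
+stages:
+  - test
+  - deploy
+
+# 一个假设的测试任务，在 test 阶段运行
+run-tests:
+  stage: test
+  script:
+    - ./run_tests.sh
+
+# 一个假设的部署任务，仅在 master 分支上运行
+deploy-site:
+  stage: deploy
+  script:
+    - ./deploy.sh
+  only:
+    - master
+```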
+
+### Koding
+
+使用 GitLab 的 [Koding 集成][67]，你可以把整个开发环境运行在云端。这意味着只需按一下按钮，你就可以在一个功能完备的 IDE 中检出一个项目，甚至只检出一个合并请求。
+
+### 使用案例
+
+GitLab CI的使用案例:
+
+* 用它来[构建][36]任何[静态网站生成器][35]生成的站点，并通过 [GitLab Pages][34] 发布你的网站
+* 用它来[把你的网站发布][33]到 `staging`（预演）和 `production`（生产）[环境][32]
+* 用它来[构建一个 iOS 应用][31]
+* 用它来[构建并发布你的 Docker 镜像][30]到 [GitLab 容器注册表][29]
+
+我们已经准备好了一大堆 [GitLab CI 样例工程][66]供你参考，看看它们吧！
+
+### 反馈:循环分析
+
+当你按照 GitLab 工作流开展工作时，你的团队可以通过 [GitLab 循环分析][65]，即时获得从点子到产品[整个过程中每个关键阶段][64]所花时间的反馈：
+
+* **Issue:** 从创建一个工单，到把它分配给一个里程碑或添加到工单看板的时间
+* **Plan:** 从把工单分配给里程碑或添加到工单看板，到推送第一次提交的时间
+* **Code:** 从第一次提交到提出合并请求的时间
+* **Test:** CI 为相关合并请求运行整个流水线的时间
+* **Review:** 从创建合并请求到合并它的时间
+* **Staging:** 从合并到部署至生产环境的时间
+* **Production**（总计）：从创建一个工单到把代码部署为[产品][28]的时间
+
+### 加强
+
+### 工单以及合并请求模版
+
+[工单以及合并请求模版][63]允许你为项目中工单和合并请求的描述部分定义特定场景的模版。
+
+你可以用 [Markdown][62] 书写这些模版，并把它们加入仓库的默认分支。之后，每当创建工单或者 MR 时，都可以通过一个下拉菜单选用它们。
+
+模版可以节省你描述工单和 MR 的时间，并把需要持续跟踪的重要信息标准化，确保你需要的一切都尽在掌握。
+
+你可以创建许多模版，分别服务于不同的目的。例如，你可以准备一个用于功能建议的工单模版，和一个用于 bug 汇报的工单模版（下面附有一个简单的示意）。到 [GitLab CE 项目][61]中寻找真实的例子吧！
+
+![issues and MR templates - dropdown menu screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issues-choose-template.png)
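+
+下面是一个假设的 bug 汇报模版的样子（内容纯属示意；模版文件在仓库中的存放路径，请以上面的[模版功能文档][63]为准）：
+
+```
+## 问题描述
+
+（简要描述你遇到的 bug）
+
+## 复现步骤
+
+- [ ] 步骤 1
+- [ ] 步骤 2
+
+## 期望行为与实际行为
+
+（分别描述期望发生什么、实际发生了什么）
+```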
+
+### 里程碑
+
+[里程碑][60]是 GitLab 中基于共同目标和确定日期来追踪团队工作的最佳工具。
+
+每种场景下的目标各不相同，但思路是一致的：为了达成某个特定目标，你有一组工单，以及正在编写代码的合并请求。
+
+这个目标基本上可以是任何东西——把团队的工作聚合在一起，在一个截止日期之前完成。例如，发布一个新版本、推出一个新产品，或者在某个日期前把事情做完，抑或把一些项目集中到一个季度里完成。
+
+![milestone dashboard](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-milestone.png)
+
+### 高级要点
+
+### 工单和MR
+
+* 工单和MR的描述中:
+ * 输入`#`来触发一个关于现存工单的下拉列表
+ * 输入`!` 来触发一个关于现存MR的下拉列表
+ * 输入 `/` 来触发[slash 命令][4]
+ * 输入 `:` 来触发 emoji 表情（也支持行内评论）
+* 通过 **Attach a file** 按钮，把图片（jpg、png、gif）和视频添加到行内评论中
+* 通过 [GitLab Webhooks][26] [自动应用标签][27]
+* [引用围栏语法][24]：使用 `>>>` 来开始或结束一段引用：
+
+ ```
+ >>>
+ Quoted text
+
+ Another paragraph
+ >>>
+ ```
+* 创建[任务列表][23]：
+
+ ```
+ - [ ] Task 1
+ - [ ] Task 2
+ - [ ] Task 3
+ ```
+
+#### 订阅
+
+发现了一个想要持续追踪的工单或者 MR？在右侧展开的导航栏中点击[订阅][59]，每当有新评论时你就会收到通知。想要一次订阅多个工单和 MR？使用[批量订阅][58]吧。😃
+
+#### 添加待办
+
+除了随时关注工单和 MR，如果你想对某件事预先采取行动，或者随时把它加入你的 GitLab 待办列表，可以在右侧导航栏中[点击 **添加待办**][57]。
+
+#### 寻找你的工单和MR
+
+当你要找一个很久以前打开的工单或 MR 时——它们可能数以千计——你会发现它们很难被找到。打开左侧导航栏，点击**工单**或者**合并请求**，你就会看到分配给你的那些。在那里，或者在任何工单追踪器中，你都可以按作者、受托者、里程碑、标签以及权重来过滤，也可以搜索处于不同状态（开启的、已合并的、已关闭的等等）的工单。
+
+#### 移动工单
+
+一个工单被开在了错误的项目里？不用担心，点击 **Edit**，然后[把工单移动][56]到正确的项目中即可。
+
+#### 代码片段
+
+你是否经常在不同的项目和文件中用到一些相同的代码段和模版？创建一个代码片段，在需要的时候随取随用。打开左侧导航栏，点击 **[Snippets][25]**，你所有的片段都会在那里。你可以把它们设置成公开的、内部的（仅对 GitLab 注册用户可见），或者私有的。
+
+![Snippets - screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-snippet.png)
+
+### GitLab 工作流用户案例设想
+
+最后，让我们把前面的所有内容放到一起理顺一下。不必担心，这十分简单。
+
+让我们假设：你在一家专注于软件开发的公司工作。你创建了一个新的工单，用于在你们的某个应用中开发一个新功能。
+
+### 标签策略
+
+为了这个应用，你已经创建了几个标签：“讨论”、“后端”、“前端”、“正在进行”、“演示”、“就绪”、“文档”、“市场化”以及“生产”，它们在工单看板中都有各自的列表。你的这个工单目前带有标签“讨论”。
+
+在工单追踪器中的讨论达成一致后，你的后端团队开始处理这个工单，于是把它的标签从“讨论”换成“后端”。第一位开发者开始写代码，把这个工单分配给自己，并加上标签“正在进行”。
+
+### 编码 & 提交
+
+在他的第一次提交信息中，他提及了工单编号。工作完成后，他把提交推送到一个功能分支，并创建一个新的合并请求，MR 描述中包含工单关闭模式。他的团队复审了他的代码，并确保所有的测试和构建都已通过。
+
+### 使用工单看板
+
+后端团队一完成工作，就删除“正在进行”标签，并把工单卡片从看板的“后端”列表移动到“前端”列表。这样，前端团队就会收到通知：这个工单已经准备好交给他们了。
+
+### 发布到演示
+
+当一位前端开发者开始处理这个工单时，他（她）会加上“正在进行”标签，并把工单重新分配给自己。工作完成后，这个实现会被部署到**演示**（staging）环境。“正在进行”标签随之被删除，工单卡片则在看板中被移动到“演示”列表。
+
+### 团队合作
+
+最后，当新功能成功实现后，你的团队把它移动到“就绪”列表。
+
+然后，就轮到技术文档团队为新功能书写文档了；有人开始动笔，就加上标签“文档”。同时，你的市场团队开始推广这个功能，于是有人加上标签“市场化”。技术文档写完后，书写者删除标签“文档”；市场团队完成工作后，他们把工单从“市场化”列表移动到“生产”列表。
+
+### 部署到生产环境
+
+最后，由负责新版本发布的人合并这个合并请求，并把新功能部署到**生产**环境，工单随之被**关闭**。
+
+### 反馈
+
+通过[循环分析][55]，你和你的团队可以回顾从点子到产品所花费的时间，并开启另一个工单，来讨论如何进一步改进这个过程。
+
+### 总结
+
+GitLab 工作流只用一个平台，就能帮助你的团队加速从点子到生产的转变：
+
+* 它是**有效的**：因为你可以达成你想要的结果
+* 它是**高效的**：因为你可以用最小的努力和花费获得最大的生产力
+* 它是**高产的**：因为你可以非常有效地计划和行动
+* 它是**简单的**：因为你不需要安装不同的工具，只需要 GitLab 就能达成目标
+* 它是**快速的**：因为你不需要在多个平台之间来回切换来完成工作
+
+每个月的 22 号都会有一个新的 GitLab 版本发布，让它在集成软件开发的各个阶段上做得更好，让团队可以在一个单一、统一的界面中协同工作。
+
+在 GitLab，每个人都可以做出贡献！正是有了我们强大的社区，才有了今天的 GitLab；也正是因为他们，我们才能不断为你提供更好的产品。
+
+还有什么问题和反馈吗?请留言,或者在推特上@我们[@GitLab][54]!🙌
+
+--------------------------------------------------------------------------------
+
+via: https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/
+
+作者:[Marcia Ramos][a]
+
+译者:[svtter](https://github.com/svtter)
+
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twitter.com/XMDRamos
+[1]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#search-for-your-issues-and-mrs
+[2]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#add-to-do
+[3]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#subscribe
+[4]:https://docs.gitlab.com/ce/user/project/slash_commands.html
+[5]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-snippets
+[6]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#moving-issues
+[7]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#for-both-issues-and-mrs
+[8]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
+[9]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-and-mr-templates
+[10]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#use-cases
+[11]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#koding
+[12]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#review
+[13]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#wip-mr
+[14]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#merge-request
+[15]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#first-commit
+[16]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
+[17]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-weight
+[18]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#labels
+[19]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#assignee
+[20]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#due-dates
+[21]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#confidential-issues
+[22]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#stages-of-software-development
+[23]:https://docs.gitlab.com/ee/user/markdown.html#task-lists
+[24]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#blockquote-fence-syntax
+[25]:https://gitlab.com/dashboard/snippets
+[26]:https://docs.gitlab.com/ce/web_hooks/web_hooks.html
+[27]:https://about.gitlab.com/2016/08/19/applying-gitlab-labels-automatically/
+[28]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
+[29]:https://about.gitlab.com/2016/05/23/gitlab-container-registry/
+[30]:https://about.gitlab.com/2016/08/11/building-an-elixir-release-into-docker-image-using-gitlab-ci-part-1/
+[31]:https://about.gitlab.com/2016/03/10/setting-up-gitlab-ci-for-ios-projects/
+[32]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
+[33]:https://about.gitlab.com/2016/08/26/ci-deployment-and-environments/
+[34]:https://pages.gitlab.io/
+[35]:https://about.gitlab.com/2016/06/17/ssg-overview-gitlab-pages-part-3-examples-ci/
+[36]:https://about.gitlab.com/2016/04/07/gitlab-pages-setup/
+[37]:https://about.gitlab.com/solutions/cycle-analytics/
+[38]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
+[39]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
+[40]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-code-review
+[41]:https://about.gitlab.com/gitlab-ci/
+[42]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
+[43]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
+[44]:https://about.gitlab.com/2015/08/18/gitlab-loves-mattermost/
+[45]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#conclusions
+[46]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow-use-case-scenario
+[47]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
+[48]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#enhance
+[49]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
+[50]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#build-test-and-deploy
+[51]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-review-with-gitlab
+[52]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
+[53]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow
+[54]:https://twitter.com/gitlab
+[55]:https://about.gitlab.com/solutions/cycle-analytics/
+[56]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#move-issues-to-other-projects
+[57]:https://about.gitlab.com/2016/06/22/gitlab-8-9-released/#manually-add-todos
+[58]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#bulk-subscribe-to-issues
+[59]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#subscribe-to-a-label
+[60]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#milestones
+[61]:https://gitlab.com/gitlab-org/gitlab-ce/issues/new
+[62]:https://docs.gitlab.com/ee/user/markdown.html
+[63]:https://docs.gitlab.com/ce/user/project/description_templates.html
+[64]:https://about.gitlab.com/2016/09/21/cycle-analytics-feature-highlight/
+[65]:https://about.gitlab.com/solutions/cycle-analytics/
+[66]:https://docs.gitlab.com/ee/ci/examples/README.html
+[67]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#koding-integration
+[68]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
+[69]:https://about.gitlab.com/gitlab-ci/
+[70]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#merge-conflict-resolution
+[71]:https://docs.gitlab.com/ce/user/project/slash_commands.html
+[72]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#wip-slash-command
+[73]:https://gitlab.com/gitlab-examples/review-apps-nginx/
+[74]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#ability-to-stop-review-apps
+[75]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
+[76]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
+[77]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
+[78]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
+[79]:https://docs.gitlab.com/ee/user/markdown.html
+[80]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
+[81]:https://about.gitlab.com/2014/09/29/gitlab-flow/
+[82]:https://about.gitlab.com/free-trial/
+[83]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#multiple-issue-boards-ee
+[84]:https://about.gitlab.com/solutions/issueboard
+[85]:https://docs.gitlab.com/ee/workflow/issue_weight.html
+[86]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#group-labels
+[87]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
+[88]:https://docs.gitlab.com/ee/user/project/labels.html#prioritize-labels
+[89]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
+[90]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#due-dates-for-issues
+[91]:https://docs.gitlab.com/ce/user/permissions.html
+[92]:https://about.gitlab.com/2016/03/31/feature-highlihght-confidential-issues/
+[93]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
+[94]:https://docs.gitlab.com/ee/user/markdown.html
+[95]:https://about.gitlab.com/2016/03/03/start-with-an-issue/
+[96]:https://about.gitlab.com/2016/09/13/gitlab-master-plan/
+[97]:https://about.gitlab.com/2014/09/29/gitlab-flow/
diff --git a/translated/tech/20170112 Partition Backup.md b/translated/tech/20170112 Partition Backup.md
index 607273b6b1..31d42086f1 100644
--- a/translated/tech/20170112 Partition Backup.md
+++ b/translated/tech/20170112 Partition Backup.md
@@ -139,7 +139,7 @@ via: https://www.linuxforum.com/threads/partition-backup.3638/
作者:[Jarret][a]
译者:[ictlyh](https://github.com/ictlyh)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20170112 The 6 unwritten rules of open source development.md b/translated/tech/20170112 The 6 unwritten rules of open source development.md
new file mode 100644
index 0000000000..572fdffa91
--- /dev/null
+++ b/translated/tech/20170112 The 6 unwritten rules of open source development.md
@@ -0,0 +1,61 @@
+六个开源开发的“潜规则”
+============================================================
+
+想要在开源项目中成为一名得心应手、功成名就的贡献者吗？那就要遵守下面这些“潜规则”。
+
+![The 6 unwritten rules of open source development](http://images.techhive.com/images/article/2016/12/09_opensource-100698477-large.jpg)
+
+
+Matt Hicks 是 Red Hat 软件工程副总裁，也是 Red Hat 开源合作团队的创始成员之一。十五年来，他在软件工程领域担任过多种职务：开发、运维、架构与管理。
+
+正如体育界的不成文规定一样，这些规则几乎不会被正式记录在任何官方文档中。比如说，在棒球运动中，大比分领先时不要盗垒，跑向一垒时任何时候都不要放弃全力冲刺。对于圈外人来讲，这些东西很难懂，甚至觉得没什么意义；但是对于那些想成为 MVP 的队员来说，这些都是理所当然的。
+
+软件开发，特别是开源软件开发中，也有一套不成文的规定。和其他团队运动一样，这些规定在很大程度上决定了开源社区如何看待一名开发者，尤其是新人。
+
+### 运行之前先调试
+
+在与社区（开源社区或其他任何社区）互动之前，你需要先做一些基本功课。对于有抱负的开源贡献者，这意味着你需要理解社区的目标，并从基础学起。人人都想贡献源代码，但是只有少数人做好了准备，并且乐意、有能力去完成那些艰苦却必要的工作：测试补丁、审查代码、撰写文档、修正错误。所有这些不受待见的任务，在一个健康的社区中都需要有人去完成。
+
+为什么要在痛快地写代码之前做这些呢？因为这能建立信任；更重要的是，它表明你关注的是整个社区的方向，而不仅仅是自己开发的功能。
+
+### 填坑而不是挖坑
+
+当你在某个社区中建立起自己的声望后，很有必要深入理解项目及其代码库。不要停留在任务的表面，要钻研项目本身，理解那些超出你擅长范围之外的知识，不要只从开发者的角度去理解项目。这样你会获得一种洞察力，让你的代码产生超出你那一亩三分地的更大影响。
+
+打个比方，你已经完成了一个网络模块的测试版本，自己测试了一下，觉得不错，然后把它开放给社区，想要更多的人测试。结果发现，当它运行在特定的管理器中时，有可能破坏安全设置，还可能导致内存泄露。如果你对项目各个部分如何协作交互有比较深的理解，这个问题本可以在代码层面就被发现，而不必等到单独测试时才暴露。让你的补丁填坑而不是挖坑，这样你就朝着成为社区大牛的目标又前进了一大步。
+
+### 不要投放代码炸弹
+
+代码提交完毕，你的工作还没有结束。后续的变更、常见问题的解答以及测试，都还需要有人跟进。你要确保自己能够及时跟进，努力理解如何在不影响社区其他成员的情况下运行和修复代码。
+
+### 助己前先助人
+
+开源社区不是自相残杀的世界，我们更看重项目的价值而非个体的贡献和成功。如果你想给自己加分，让自己成为更有分量的社区成员，那就努力帮助别人。如果你熟悉网络部分，那就去审查网络模块，用你的专业技能让整个代码库更加优雅。很简单的道理：顶级的审查者经常和顶级的贡献者打交道，你帮助得越多，你就越有价值。
+
+作为一个开发者，你很可能想借开源项目解决某个让你十分头痛的技术点：可能你想让它运行在一个目前还不支持的系统上，或者非常想革新社区目前使用的安全技术。想要引进新技术，特别是比较有争议的技术，最好的办法就是让人无法拒绝它。你需要透彻地了解底层代码，论证每一个细微的优势，在不影响已有功能的前提下增加新功能。不仅要在提案上下功夫，还要在特性的完善上下功夫。
+
+### 不要放弃
+
+开源社区里也有许多半途而废的人，所以坚持跟进的提交才更容易被采纳。不要只因为提交被上游拒绝就离开社区；找出原因，修正问题，然后再试一次。开发的时候，要紧跟代码库的变化，确保即使项目演进，你的代码仍然可用。不要把你的代码丢给别人去修，要自己维护它，这样才能在社区里形成良好的风气。
+
+这些“潜规则”看上去很简单，但仍有许多开源项目的贡献者没有遵守。遵守它们的开发者，不仅能成就自己的项目，也能帮助整个开源社区取得成功。
+
+--------------------------------------------------------------------------------
+
+via: http://www.infoworld.com/article/3156776/open-source-tools/the-6-unwritten-rules-of-open-source-development.html
+
+作者:[Matt Hicks][a]
+译者:[Taylor1024](https://github.com/Taylor1024)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.infoworld.com/blog/new-tech-forum/
+[1]:https://twitter.com/intent/tweet?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html&via=infoworld&text=The+6+unwritten+rules+of+open+source+development
+[2]:https://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html
+[3]:http://www.linkedin.com/shareArticle?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html&title=The+6+unwritten+rules+of+open+source+development
+[4]:https://plus.google.com/share?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html
+[5]:http://reddit.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html&title=The+6+unwritten+rules+of+open+source+development
+[6]:http://www.stumbleupon.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html
+[7]:http://www.infoworld.com/article/3156776/open-source-tools/the-6-unwritten-rules-of-open-source-development.html#email
+[8]:http://www.infoworld.com/article/3152565/linux/5-rock-solid-linux-distros-for-developers.html#tk.ifw-infsb
+[9]:http://www.infoworld.com/newsletters/signup.html#tk.ifw-infsb
diff --git a/translated/tech/20170127 How to compare directories with Meld on Linux.md b/translated/tech/20170127 How to compare directories with Meld on Linux.md
deleted file mode 100644
index f71c99f567..0000000000
--- a/translated/tech/20170127 How to compare directories with Meld on Linux.md
+++ /dev/null
@@ -1,131 +0,0 @@
-在 Linux 上使用 Meld 比较文件夹
-============================================================
-
-### 本文导航
-1. [用 Meld 比较文件夹][1]
-2. [总结][2]
-
-我们已经从一个新手的角度了解了 Meld (包括 Meld 的安装),我们也提及了一些 Meld 中级用户常用的小技巧。如果你有印象,在新手教程中,我们说过 Meld 可以比较文件和文件夹。已经讨论过怎么讨论文件,今天,我们来看看 Meld 怎么比较文件夹。
-
-本教程中的所有命令和例子都是在 Ubuntu 14.04 上测试的,使用的 Meld 版本基于 3.14.2 版。
-
-
-### 用 Meld 比较文件夹
-打开 Meld 工具,然后选择_比较文件夹_选项来比较两个文件夹。
-[
- ![Compare directories using Meld](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-dir-comp-1.png)
-][5]
-
-选择你要比较的文件夹:
-[
- ![select the directories](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-sel-dir-2.png)
-][6]
-
-然后单击_比较_按钮,你会看到 Meld 像图中这样分成两栏显示。
-[
- ![Compare directories visually](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-dircomp-begins-3.png)
-][7]
-分栏会树形显示这些文件/文件夹。你可以在上图中看到明显的区别——不论文件是新建的还是被修改过的——都会以不同的颜色高亮显示。
-
-根据 Meld 的官方文档可以知道在窗口中看到的每个不同的文件或文件夹都会被突出显示。这样就很容易看出这个文件/文件夹与另一个分栏中对应位置的文件/文件夹的区别。
-
-下表是 Meld 网站上列出的在比较文件夹时突出显示的不同字体大小/颜色/背景等代表的含义。
-
-
-|**State** | **Appearance** | **Meaning** |
-| --- | --- | --- |
-| Same | Normal font | The file/folder is the same across all compared folders. |
-| Same when filtered | Italics | These files are different across folders, but once text filters are applied, these files become identical. |
-| Modified | Blue and bold | These files differ between the folders being compared. |
-| New | Green and bold | This file/folder exists in this folder, but not in the others. |
-| Missing | Greyed out text with a line through the middle | This file/folder doesn't exist in this folder, but does in one of the others. |
-| Error | Bright red with a yellow background and bold | When comparing this file, an error occurred. The most common error causes are file permissions (i.e., Meld was not allowed to open the file) and filename encoding errors. |
-Meld 默认会列出文件夹中的所有内容,即使这些内容没有任何不同。当然,你也可以在工具栏中单击_同样的_按钮设置 Meld 不显示这些相同的文件/文件夹——单击这个按钮使其不可用。
-[
- ![same button](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-same-button.png)
-][3]
-
-[
- ![Meld compare buttons](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-same-disabled.png)
-][8]
-下面是单击_同样的_按钮使其不可用的截图:
-[
- ![Directory Comparison without same files](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-only-diff.png)
-][9]
-这样你会看到只显示了两个文件夹中不同的文件(新建的和修改过的)。同样,如果你单击_新建的_按钮使其不可用,那么 Meld 就只会列出修改过的文件。所以,在比较文件夹时可以通过这些按钮自定义要显示的内容。
-
-你可以使用上下箭头来切换选择是显示新建的文件还是修改过的文件,然后打开两个文件进行分栏比较。双击文件或者单击箭头旁边的_比较_按钮都可以进行分栏比较。。
-[
- ![meld compare arrow keys](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-compare-arrows.png)
-][10]
-
-**提示 1**:如果你仔细观察,就会看到 Meld 窗口的左边和右边有一些小进度块。这些进度块就像是“用颜色区分的包含不同文件/文件夹的数个区段”。每个区段都由很多的小进度块组成,而一个个小小的有颜色的进度块就表示此处有不同的文件/文件夹。你可以单击每一个这样的小小进度块跳到它对应的文件/文件夹。
-
-
-**提示 2**:尽管你经常分栏比较文件然后以你的方式合并不同的文件,假如你想要合并所有不同的文件/文件夹(就是说你想要把一个文件夹中特有的文件/文件夹添加到另一个文件夹中),那么你可以用_复制到左边_和_复制到右边_按钮:
-[
- ![meld copy right part](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-copy-right-left.png)
-][11]
-比如,你可以在左边的分栏中选择一个文件或文件夹,然后单击_复制到右边_按钮在右边的文件夹中对应的位置新建完全一样的文件或文件夹。
-
-现在,在窗口的下栏菜单中找到_过滤_按钮,它就在_同样的_、_新建的_和_修改过的_ 这三个按钮下面。这里你可以选择或取消文件的类型来让 Meld 在比较文件夹时决定是否显示这种类型的文件/文件夹。官方文档解释说菜单中的这个条目表示“被匹配到的文件不会显示。”
-
-这个条目包括备份文件,操作系统元数据,版本控制文件、二进制文件和多媒体文件。
-
-
-[
- ![Meld filters](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-filters.png)
-][12]
-前面提到的条目也可以通过这样的方式找到:_浏览->文件过滤_。你可以同过 _编辑->首选项->文件过滤_ 为这个条目增加新元素(也可以删除已经存在的元素)。
-
-[
- ![Meld preferences](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-edit-filters-menu.png)
-][13]
-要新建一个过滤条件,你需要使用一组 shell 符号,下表列出了 Meld 支持的 shell 符号:
-
-
-| **Wildcard** | **Matches** |
-| --- | --- |
-| * | anything (i.e., zero or more characters) |
-| ? | exactly one character |
-| [abc] | any one of the listed characters |
-| [!abc] | anything except one of the listed characters |
-| {cat,dog} | either "cat" or "dog" |
-最重要的一点是 Meld 的文件名默认大小写敏感。也就是说,Meld 认为 readme 和 ReadMe 与 README 是不一样的文件。
-
-幸运的是,你可以关掉 Meld 的大小写敏感。只需要打开_浏览_菜单然后选择_忽略文件名大小写_选项。
-[
- ![Meld ignore filename case](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-ignore-case.png)
-][14]
-
-### 结论
-你是否觉得使用 Meld 比较文件夹很容易呢——事实上,我认为它相当容易。只有新建一个过滤器会花点时间,但是这不意味着你没必要学习创建过滤器。显然,这取决于你要过滤的内容。
-
-真的很棒,你甚至可以用 Meld 比较三个文件夹。想要比较三个文件夹时你可以通过_单击 3 个比较_ 复选框。今天,我们不介绍怎么比较三个文件夹,但它肯定会出现在后续的教程中。
-
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/
-
-作者:[Ansh][a]
-译者:[vim-kakali](https://github.com/vim-kakali)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/
-[1]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/#compare-directories-using-meld
-[2]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/#conclusion
-[3]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-same-button.png
-[4]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/
-[5]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-dir-comp-1.png
-[6]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-sel-dir-2.png
-[7]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-dircomp-begins-3.png
-[8]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-same-disabled.png
-[9]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-only-diff.png
-[10]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-compare-arrows.png
-[11]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-copy-right-left.png
-[12]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-filters.png
-[13]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-edit-filters-menu.png
-[14]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-ignore-case.png
diff --git a/translated/tech/20170203 Record and Replay Terminal Session with Asciinema on Linux.md b/translated/tech/20170203 Record and Replay Terminal Session with Asciinema on Linux.md
new file mode 100644
index 0000000000..9713441cd7
--- /dev/null
+++ b/translated/tech/20170203 Record and Replay Terminal Session with Asciinema on Linux.md
@@ -0,0 +1,282 @@
+### 如何在 Linux 终端会话中使用 Asciinema 进行录制和回放
+
+![](https://linuxconfig.org/images/asciimena-video-example.jpg?58942057)
+
+内容
+
+ * [1、简介][11]
+ * [2、困难][12]
+ * [3、惯例][13]
 * [4、标准仓库安装][14]
+ * [4.1、在 Arch Linux 上安装][1]
+ * [4.2、在 Debian 上安装][2]
+ * [4.3、在 Ubuntu 上安装][3]
+ * [4.4、在 Fedora 上安装][4]
+ * [5、从源代码安装][15]
+ * [6、前提条件][16]
+ * [6.1、在 Arch Linux 上安装 ruby][5]
+ * [6.2、在 Debian 上安装 ruby][6]
+ * [6.3、在 Ubuntu 安装 ruby][7]
+ * [6.4、在 Fedora 安装 ruby][8]
+ * [6.5、在 CentOS 安装 ruby][9]
+ * [7、 安装 Linuxbrew][17]
+ * [8、 安装 Asciinema][18]
+ * [9、录制终端会话][19]
+ * [10、回放已录制终端会话][20]
+ * [11、将视频嵌入 HTML][21]
+ * [12、结论][22]
+ * [13、 故障排除][23]
+ * [13.1、在 UTF-8 环境下运行 asciinema][10]
+
+### 简介
+
+Asciinema 是一个轻量并且非常高效的脚本终端会话录制器的替代品。使用它可以录制、回放和分享 JSON 格式的终端会话记录。和一些桌面录制器,比如 Recordmydesktop、Simplescreenrecorder、Vokoscreen 或 Kazam 相比,Asciinema 最主要的优点是,它能够以通过 ANSI 转义码编码的 ASCII 文本录制所有的标准终端输入、输出和错误。
+
+事实上，即使是很长的终端会话，录制出的 JSON 格式文件也非常小。另外，JSON 格式使得用户可以利用简单的文件转化器，将输出的 JSON 文件嵌入到 HTML 代码中，然后分享到公共网站，或者使用 asciinema 账户分享到 Asciinema.org。最后，如果你的终端会话中出现了一些错误，而你又懂一些 ANSI 转义码语法，那么你可以使用任何编辑器来修改已录制的终端会话。
+
+### 困难
+
+很简单!
+
+### 惯例
+
+* **#** - 给定命令需要以 root 用户权限运行或者使用 `sudo` 命令
+* **$** - 给定命令以常规权限用户运行
+
+### 标准仓库安装
+
+asciinema 很可能可以直接从你发行版的软件仓库安装。但是，如果仓库中没有它，或者你想安装最新版本，那么可以像下面“从源代码安装”部分所描述的那样，使用 Linuxbrew 包管理器来安装 Asciinema。
+
+### 在 Arch Linux 上安装
+
+```
+# pacman -S asciinema
+```
+
+### 在 Debian 上安装
+
+```
+# apt install asciinema
+```
+
+### 在 Ubuntu 上安装
+
+```
+$ sudo apt install asciinema
+```
+
+### 在 Fedora 上安装
+
+```
+$ sudo dnf install asciinema
+```
+
+### 从源代码安装
+
+最简单并且值得推荐的方式是使用 Linuxbrew 包管理器,从源代码安装最新版本的 Asciinema 。
+
+### 前提条件
+
+下面列出的前提条件是安装 Linuxbrew 和 Asciinema 需要满足的依赖关系:
+
+* git
+* gcc
+* make
+* ruby
+
+在安装 Linuxbrew 之前,请确保上面的这些包都已经安装在了你的 Linux 系统中。
+
+### 在 Arch Linux 上安装 ruby
+
+```
+# pacman -S git gcc make ruby
+```
+
+### 在 Debian 上安装 ruby
+
+```
+# apt install git gcc make ruby
+```
+
+### 在 Ubuntu 上安装 ruby
+
+```
+$ sudo apt install git gcc make ruby
+```
+
+### 在 Fedora 上安装 ruby
+
+```
+$ sudo dnf install git gcc make ruby
+```
+
+### 在 CentOS 上安装 ruby
+
+```
+# yum install git gcc make ruby
+```
+
+### 安装 Linuxbrew
+
+Linuxbrew 包管理器是苹果 macOS 操作系统上广受欢迎的 Homebrew 包管理器的一个复刻版本，而 Homebrew 素以易用著称。如果你想使用 Linuxbrew 来安装 Asciinema，请运行下面的命令在你的 Linux 发行版上安装 Linuxbrew：
+```
+$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install)"
+```
+现在，Linuxbrew 已经安装到了 `$HOME/.linuxbrew/` 目录下。剩下要做的，就是把它的可执行文件目录加入 `PATH` 环境变量：
+```
+$ echo 'export PATH="$HOME/.linuxbrew/bin:$PATH"' >>~/.bash_profile
+$ . ~/.bash_profile
+```
+为了确认 Linuxbrew 是否已经安装好,你可以使用 `brew` 命令来查看它的版本:
+```
+$ brew --version
+Homebrew 1.1.7
+Homebrew/homebrew-core (git revision 5229; last commit 2017-02-02)
+```
+
+### 安装 Asciinema
+
+安装好 Linuxbrew 以后,安装 Asciinema 就变得无比容易了:
+```
+$ brew install asciinema
+```
+检查 Asciinema 是否安装正确:
+```
+$ asciinema --version
+asciinema 1.3.0
+```
+
+### 录制终端会话
+
+经过一番辛苦的安装工作以后,是时候来干一些有趣的事情了。Asciinema 是一个非常容易使用的软件。事实上,目前的 1.3 版本只有很少的几个可用命令行选项,其中一个是 `--help` 。
+
+我们首先使用 `rec` 选项来录制终端会话。下面的命令将会开始录制终端会话,之后,你将会有一个选项来丢弃已录制记录或者把它上传到 asciinema.org 网站以便将来参考。
+```
+$ asciinema rec
+```
+运行上面的命令以后,你会注意到, Asciinema 已经开始录制终端会话了,你可以按下 `CTRL+D` 快捷键或执行 `exit` 命令来停止录制。如果你使用的是 Debian/Ubuntu/Mint Linux 系统,你可以像下面这样尝试进行第一次 asciinema 录制:
+```
+$ su
+Password:
+# apt install sl
+# exit
+$ sl
+```
+一旦输入最后一个 `exit` 命令以后,将会询问你:
+```
+$ exit
+~ Asciicast recording finished.
+~ Press <Enter> to upload, <Ctrl-C> to cancel.
+
+https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4
+```
+如果你不想上传你的私密命令行技巧到 asciinema.org 网站,那么有一个选项可以把 Asciinema 记录以 JSON 格式保存为本地文件。比如,下面的 asciinema 记录将被存为 `/tmp/my_rec.json`:
+```
+$ asciinema rec /tmp/my_rec.json
+```
+另一个非常有用的 asciinema 特性是时间微调。如果你的打字速度很慢，或者你在一心多用，输入命令和执行命令之间的间隔就会拉长。asciinema 会记录你的真实按键时间，这意味着每一次停顿都会反映在最终录像的时长上。可以使用 `-w` 选项为相邻按键之间记录的间隔设置上限。比如，下面的命令把这个上限设置为 0.2 秒：
+```
+$ asciinema rec -w 0.2
+```
+
+### 回放已录制终端会话
+
+有两种方式可以来回放已录制会话。第一种方式是直接从 asciinema.org 网站上播放终端会话。这意味着,你之前已经把录制会话上传到了 asciinema.org 网站,并且需要提供有效链接:
+```
+$ asciinema play https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4
+```
+另外，你也可以使用本地存储的 JSON 文件：
+```
+$ asciinema play /tmp/my_rec.json
+```
+如果要使用 `wget` 命令来下载之前的上传记录,只需在链接的后面加上 `.json`:
+```
+$ wget -q -O steam_locomotive.json https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4.json
+$ asciinema play steam_locomotive.json
+```
+
+### 将视频嵌入 HTML
+
+最后，Asciinema 还带有一个独立的 JavaScript 播放器，这意味着你可以很容易地在自己的网站上分享终端会话录像。下面用一段简单的 `index.html` 代码来说明这个方法。首先，下载所有必要的文件：
+```
+$ cd /tmp/
+$ mkdir steam_locomotive
+$ cd steam_locomotive/
+$ wget -q -O steam_locomotive.json https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4.json
+$ wget -q https://github.com/asciinema/asciinema-player/releases/download/v2.4.0/asciinema-player.css
+$ wget -q https://github.com/asciinema/asciinema-player/releases/download/v2.4.0/asciinema-player.js
+```
+之后,创建一个新的包含下面这些内容的 `/tmp/steam_locomotive/index.html` 文件:
+```
+<html>
+<head>
+  <link rel="stylesheet" type="text/css" href="./asciinema-player.css" />
+</head>
+<body>
+  <asciinema-player src="./steam_locomotive.json"></asciinema-player>
+  <script src="./asciinema-player.js"></script>
+</body>
+</html>
+```
+完成以后,打开你的网页浏览器,按下 `CTRL+O` 来打开新创建的 `/tmp/steam_locomotive/index.html` 文件。
+
+### 结论
+
+正如前面所说，使用 Asciinema 录制器录制终端会话最主要的优点是它的输出文件非常小，这使得你的录像很容易分享出去。上面的例子生成了一个 58472 个字符、大小仅 58 KB 的文件，对应一段 22 秒的终端会话录像。就算这个数字也已经偏大了，这主要是因为有一辆“蒸汽机车”从终端上跑过；同样长度的普通终端会话会产生更小的输出文件。
+
+下次,当你想要在一个论坛上询问关于 Linux 配置的问题,并且很难描述你的问题的时候,只需运行下面的命令:
+```
+$ asciinema rec
+```
+然后把最后的链接贴到论坛的帖子里。
+
+### 故障排除
+
+### 在 UTF-8 环境下运行 asciinema
+
+错误信息:
+```
+asciinema 需要在 UTF-8 环境下运行。请检查 `locale` 命令的输出。
+```
+解决方法:
+生成并导出 UTF-8 语言环境。例如：
+```
+$ localedef -c -f UTF-8 -i en_US en_US.UTF-8
+$ export LC_ALL=en_US.UTF-8
+```
+
+--------------------------------------------------------------------------------
+
+via: https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux
+
+作者:[Lubos Rendek][a]
+译者:[ucasFL](https://github.com/ucasFL)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux
+[1]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-1-arch-linux
+[2]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-2-debian
+[3]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-3-ubuntu
+[4]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-4-fedora
+[5]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-1-arch-linux
+[6]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-2-debian
+[7]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-3-ubuntu
+[8]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-4-fedora
+[9]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-5-centos
+[10]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h13-1-asciinema-needs-a-utf-8
+[11]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h1-introduction
+[12]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h2-difficulty
+[13]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h3-conventions
+[14]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h4-standard-repository-installation
+[15]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h5-installation-from-source
+[16]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h6-prerequisites
+[17]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h7-linuxbrew-installation
+[18]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h8-asciinema-installation
+[19]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h9-recording-terminal-session
+[20]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h10-replay-recorded-terminal-session
+[21]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h11-embedding-video-as-html
+[22]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h12-conclusion
+[23]:https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-on-linux#h13-troubleshooting
diff --git a/translated/tech/20170207 5 Open Source Software Defined Networking Projects to Know.md b/translated/tech/20170207 5 Open Source Software Defined Networking Projects to Know.md
new file mode 100644
index 0000000000..f0ccff46ef
--- /dev/null
+++ b/translated/tech/20170207 5 Open Source Software Defined Networking Projects to Know.md
@@ -0,0 +1,72 @@
+5 个要了解的开源软件定义网络项目
+============================================================
+
+
+ ![SDN](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/software-defined-networking_0.jpg?itok=FeWzZo8n "SDN")
+
+SDN 开始重新定义企业网络。这里有五个应该知道的开源项目。
+ [Creative Commons Zero][1] Pixabay
+
+纵观整个 2016 年,软件定义网络(SDN)持续快速发展并变得成熟。我们现在已经超出了开源网络的概念阶段,两年前评估这些项目潜力的公司已经开始了企业部署。如几年来所预测的,SDN 正在开始重新定义企业网络。
+
+这与市场研究人员的观点基本上是一致的。IDC 在今年早些时候公布了 SDN 市场的[一份研究][3],它预计从 2014 年到 2020 年 SDN 的年均复合增长率为 53.9%,届时市场价值将达到 125 亿美元。此外,“2016 技术趋势” 报告中将 SDN 列为 2016 年最佳技术投资。
+
+IDC 网络基础设施副总裁,[Rohit Mehra][4] 说:“云计算和第三方平台推动了 SDN 的需求,这将在 2020 年代表一个价值超过 125 亿美元的市场。毫无疑问的是 SDN 的价值将越来越多地渗透到网络虚拟化软件和 SDN 应用中,包括虚拟化网络和安全服务,大型企业在数据中心实现 SDN 的价值,但它们最终会认识到其在分支机构和校园网络中的广泛应用。“
+
+Linux 基金会最近[发布][5]了其 2016 年度报告[“开放云指南：当前趋势和开源项目”][6]。这份第三年度的报告全面介绍了开放云计算的现状，并包含一个关于 unikernel 的章节。你现在就可以[下载报告][7]，它汇总并分析了相关研究，说明了容器、unikernel 等趋势正在如何重塑云计算。该报告还提供了当今开放云环境中核心项目的分类描述和链接。
+
+在本系列中,我们会研究各种类别,并提供关于这些领域如何发展的更多见解。下面,你会看到几个重要的 SDN 项目及其所带来的影响,以及 GitHub 仓库的链接,这些都是从“开放云指南”中收集的:
+
+### 软件定义网络
+
+[ONOS][8]
+
+开放网络操作系统(ONOS)是一个 Linux 基金会项目,它是一个给服务提供商的软件定义网络操作系统,它具有可扩展性、高可用性、高性能和抽象功能来创建应用程序和服务。[ONOS 的 GitHub 地址][9]。
+
+[OpenContrail][10]
+
+OpenContrail 是 Juniper Networks 的云开源网络虚拟化平台。它提供网络虚拟化的所有必要组件:SDN 控制器、虚拟路由器、分析引擎和已发布的上层 API。其 REST API 配置并收集来自系统的操作和分析数据。[OpenContrail 的 GitHub 地址][11]。
+
+[OpenDaylight][12]
+
+OpenDaylight 是 Linux 基金会的一个 OpenDaylight Foundation 项目,它是一个可编程的、提供给服务提供商和企业的软件定义网络平台。它基于微服务架构,可以在多供应商环境中的一系列硬件上实现网络服务。[OpenDaylight 的 GitHub 地址][13]。
+
+[Open vSwitch][14]
+
+Open vSwitch 是一个 Linux 基金会项目，是具有生产级质量的多层虚拟交换机。它的设计目标是通过编程扩展实现大规模网络自动化，同时仍支持标准的管理接口和协议，包括 NetFlow、sFlow、IPFIX、RSPAN、CLI、LACP 和 802.1ag。它支持像 VMware 的分布式 vNetwork 或 Cisco Nexus 1000V 那样跨多个物理服务器的分布式部署。[OVS 在 GitHub 的地址][15]。
+
+[OPNFV][16]
+
+网络功能虚拟化开放平台(OPNFV) 是 Linux 基金会项目,它用于企业和服务提供商网络的 NFV 平台。它汇集了计算、存储和网络虚拟化方面的上游组件以创建 NFV 程序的端到端平台。[OPNFV 在 Bitergia 上的地址][17]。
+
+_要了解更多关于开源云计算趋势和查看顶级开源云计算项目完整列表,[请下载 Linux 基金会的 “开放云指南”。][18]_
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/open-cloud-report/2016/5-open-source-software-defined-networking-projects-know
+
+作者:[SAM DEAN][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[jasminepeng](https://github.com/jasminepeng)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/sam-dean
+[1]:https://www.linux.com/licenses/category/creative-commons-zero
+[2]:https://www.linux.com/files/images/software-defined-networkingjpg-0
+[3]:https://www.idc.com/getdoc.jsp?containerId=prUS41005016
+[4]:http://www.idc.com/getdoc.jsp?containerId=PRF003513
+[5]:https://www.linux.com/blog/linux-foundation-issues-2016-guide-open-source-cloud-projects
+[6]:http://ctt.marketwire.com/?release=11G120876-001&id=10172077&type=0&url=http%3A%2F%2Fgo.linuxfoundation.org%2Frd-open-cloud-report-2016-pr
+[7]:http://go.linuxfoundation.org/l/6342/2016-10-31/3krbjr
+[8]:http://onosproject.org/
+[9]:https://github.com/opennetworkinglab/onos
+[10]:http://www.opencontrail.org/
+[11]:https://github.com/Juniper/contrail-controller
+[12]:https://www.opendaylight.org/
+[13]:https://github.com/opendaylight
+[14]:http://openvswitch.org/
+[15]:https://github.com/openvswitch/ovs
+[16]:https://www.opnfv.org/
+[17]:http://projects.bitergia.com/opnfv/browser/
+[18]:http://bit.ly/2eHQOwy
diff --git a/translated/tech/20170211 Docker swarm mode - Adding worker nodes tutorial.md b/translated/tech/20170211 Docker swarm mode - Adding worker nodes tutorial.md
new file mode 100644
index 0000000000..224390f63e
--- /dev/null
+++ b/translated/tech/20170211 Docker swarm mode - Adding worker nodes tutorial.md
@@ -0,0 +1,149 @@
+# Docker Swarm 模式 - 添加 worker 节点教程
+
+让我们继续几周前在 CentOS 7.2 上开始的工作。在那篇[指南][1]中，我们学习了如何初始化并启动 Docker 1.12 内置的原生集群与编排功能。但当时我们只有一个管理节点，还没有其他 worker 节点。今天我们就来扩展这个集群。
+
+我将向你展示如何将异构节点添加到 Swarm 中，也就是说，[Fedora 24][2] 将与 CentOS 并肩工作，它们都会加入到集群中，此外还有很棒的负载均衡等等。当然，这并非轻而易举，我们会遇到一些障碍，所以这应该会非常有趣。
+
+ ![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-teaser-more.png)
+
+### 先决条件
+
+在将其他节点成功加入 Swarm 之前，我们需要做几件事情。理想情况下，所有节点都应该运行相同版本的 Docker；为了支持原生编排，版本至少应该是 1.12。像 CentOS 一样，Fedora 内置的仓库中没有最新的构建，所以你需要手动或者借助 Docker 官方仓库[添加并安装][3]正确的版本，并解决一些依赖冲突。我已经演示过如何在 CentOS 上操作，步骤是一样的。
+
+此外，所有节点都需要能够相互通信。这就需要有正确的路由和防火墙规则，这样管理节点和 worker 节点才能互相通信，否则你无法把节点加入 Swarm 中。最简单的做法是临时清空防火墙规则（`iptables -F`），但这可能会损害你的安全。请确保你完全清楚自己在做什么，并为你的节点和端口创建正确的规则。
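+
+例如，在使用 firewalld 的 CentOS/Fedora 节点上，与其粗暴地清空规则，不如像下面这样放行 Swarm 模式所需的端口（2377/tcp 用于集群管理，7946/tcp 与 7946/udp 用于节点间通信，4789/udp 用于 overlay 网络）：
+
+```
+# firewall-cmd --permanent --add-port=2377/tcp
+# firewall-cmd --permanent --add-port=7946/tcp
+# firewall-cmd --permanent --add-port=7946/udp
+# firewall-cmd --permanent --add-port=4789/udp
+# firewall-cmd --reload
+```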
+
+如果防火墙或路由配置不当，你会看到类似这样的报错：“守护进程的错误响应：节点加入之前已超时。尝试加入 Swarm 的请求将在后台继续进行。使用 `docker info` 命令查看节点的当前 Swarm 状态。”
+
+各台主机上还需要有相同的 Docker 镜像。在上一个教程中我们创建了一个 Apache 镜像，你需要在你的 worker 节点上执行相同操作，或者把已创建的镜像分发过去。如果不这样做，你会遇到如下错误。如果你在设置 Docker 上需要帮助，请阅读我的[介绍指南][4]和[网络教程][5]。
+
+```
+7vwdxioopmmfp3amlm0ulimcu \_ websky.11 my-apache2:latest
+localhost.localdomain Shutdown Rejected 7 minutes ago
+"No such image: my-apache2:lat&"
+```
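+
+作为参考，把本地构建的镜像分发到 worker 节点的一种简单方式如下（仅为示意，假设可以通过 SSH 访问 worker 节点，IP 以实际为准）：
+
+```
+# docker save my-apache2 | ssh root@192.168.2.103 'docker load'
+```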
+
+### 现在开始
+
+现在，我们的 CentOS 机器已经启动并运行，且成功创建了容器。你可以通过主机端口连接到服务，一切看起来都很好。目前，你的 Swarm 中只有管理节点。
+
+ ![Manager](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-manager.png)
+
+### 加入 workers
+
+要添加新的节点，你需要使用 `join` 命令。但你首先必须取得令牌、IP 地址和端口，以便 worker 节点能正确地向 Swarm 管理器验证身份。在管理节点上执行：
+
+```
+[root@localhost ~]# docker swarm join-token worker
+要将 worker 添加到这个 Swarm 中，运行下面的命令：
+
+docker swarm join \
+--token SWMTKN-1-0xvojvlza90nrbihu6gfu3qm34ari7lwnza ... \
+192.168.2.100:2377
+```
+
+如果你不修复防火墙和路由规则,你会得到超时错误。如果你已经加入了 Swarm,重复 join 命令会收到错误:
+
+```
+Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
+```
+
+如果有疑问,你可以离开 Swarm,然后重试:
+
+```
+[root@localhost ~]# docker swarm leave
+Node left the swarm.
+
+docker swarm join --token
+SWMTKN-1-0xvojvlza90nrbihu6gfu3qnza4 ... 192.168.2.100:2377
+This node joined a swarm as a worker.
+```
+
+在 worker 节点中,你可以使用 “docker info” 来检查状态:
+
+```
+Swarm: active
+NodeID: 2i27v3ce9qs2aq33nofaon20k
+Is Manager: false
+Node Address: 192.168.2.103
+
+同样，在管理节点上：
+
+Swarm: active
+NodeID: cneayene32jsb0t2inwfg5t5q
+Is Manager: true
+ClusterID: 8degfhtsi7xxucvi6dxvlx1n4
+Managers: 1
+Nodes: 3
+Orchestration:
+Task History Retention Limit: 5
+Raft:
+Snapshot Interval: 10000
+Heartbeat Tick: 1
+Election Tick: 3
+Dispatcher:
+Heartbeat Period: 5 seconds
+CA Configuration:
+Expiry Duration: 3 months
+Node Address: 192.168.2.100
+```
+
+### 创建或缩放服务
+
+现在，我们需要看看 Docker 是否会在节点间分发容器，以及如何分发。我的测试显示，在负载非常轻的情况下，它采用了一个相当简单的均衡算法。试了一两次之后，即使在我尝试缩放和更新之后，Docker 也没有把正在运行的服务重新分配到新的 worker 上。不过，有一次，它把一个新的服务完整地创建在了 worker 节点上。也许这是最好的选择。
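+
+作为参考，下面是在管理节点上创建和缩放服务时常用的几个命令（示意；服务名 `websky` 和镜像名 `my-apache2` 沿用上一篇教程中的例子）：
+
+```
+# docker service create --name websky -p 80:80 --replicas 2 my-apache2
+# docker service ls                  # 列出集群中的服务
+# docker service ps websky           # 查看服务的各个任务分布在哪些节点上
+# docker service scale websky=6      # 把服务缩放到 6 个副本
+```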
+
+ ![Scale service](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-scale-service.png)
+
+ ![Service ls](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-service-list.png)
+
+ ![Services ls, more](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-service-list-more.png)
+
+ ![New service](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-new-service.png)
+
+在新的 worker 节点上完整地创建了新的服务。
+
+过了一会儿，现有服务的容器在两个节点之间出现了一些重新分配，但这需要一些时间。新服务则工作正常。这只是初步观察，所以我现在还不能下更多结论。接下来就是开始探索和调整的新起点。
+
+ ![Service distributed](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-distributed.png)
+
+负载均衡过了一会儿就正常工作了。
+
+### 总结
+
+Docker 是一只灵巧的小野兽，它会继续成长，变得更复杂、更强大，当然也更优雅。它被一个大企业吃掉只是时间问题。就原生编排而言，Swarm 模式运行得很好，但要充分发挥其算法和可扩展性，仅仅几个容器是不够的。
+
+我的教程展示了如何将 Fedora 节点添加到由 CentOS 运行的群集中,并且两者能并行工作。关于负载平衡还有一些问题,但这是我将在以后的文章中探讨的。总而言之,我希望这是一个值得记住的教训。我们已经解决了在尝试设置 Swarm 时可能遇到的一些先决条件和常见问题,同时我们启动了一堆容器,我们甚至简要介绍了如何缩放和分发服务。要记住,这只是一个开始。
+
+干杯。
+
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+我是 Igor Ljubuncic。现在大约 38 岁,已婚但还没有孩子。我现在在一个大胆创新的云科技公司做首席工程师。直到大约 2015 年初,我还在一个全世界最大的 IT 公司之一中做系统架构工程师,和一个工程计算团队开发新的基于 Linux 的解决方案,优化内核以及攻克 Linux 的问题。在那之前,我是一个为高性能计算环境设计创新解决方案的团队的技术领导。还有一些其他花哨的头衔,包括系统专家、系统程序员等等。所有这些都曾是我的爱好,但从 2008 年开始成为了我的付费工作。还有什么比这更令人满意的呢?
+
+从 2004 年到 2008 年间,我曾通过作为医学影像行业的物理学家来糊口。我的工作专长集中在解决问题和算法开发。为此,我广泛地使用了 Matlab,主要用于信号和图像处理。另外,我得到了几个主要的工程方法学的认证,包括 MEDIC 六西格玛绿带、试验设计以及统计工程学。
+
+我也开始写书,包括奇幻类和 Linux 上的技术性工作。彼此交融。
+
+要查看我开源项目、出版物和专利的完整列表,请滚动到下面。
+
+有关我的奖项,提名和 IT 相关认证的完整列表,请稍等一下。
+
+-------------
+
+
+via: http://www.dedoimedo.com/computers/docker-swarm-adding-worker-nodes.html
+
+作者:[Igor Ljubuncic][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.dedoimedo.com/faq.html
+[1]:http://www.dedoimedo.com/computers/docker-swarm-intro.html
+[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
+[3]:http://www.dedoimedo.com/computers/docker-centos-upgrade-latest.html
+[4]:http://www.dedoimedo.com/computers/docker-guide.html
+[5]:http://www.dedoimedo.com/computers/docker-networking.html
diff --git a/translated/tech/20170213 Recover from a badly corrupt Linux EFI installation.md b/translated/tech/20170213 Recover from a badly corrupt Linux EFI installation.md
new file mode 100644
index 0000000000..d045105961
--- /dev/null
+++ b/translated/tech/20170213 Recover from a badly corrupt Linux EFI installation.md
@@ -0,0 +1,112 @@
+# 从损坏的 Linux EFI 安装中恢复
+
+在过去的十多年里,Linux 发行版在安装前、安装过程中、以及安装后偶尔会失败,但我总是有办法恢复系统并继续正常工作。然而,[Solus][1] 损坏了我的笔记本。
+
+GRUB 恢复？不行。重装？还是不行！Ubuntu 拒绝安装，报错说目标设备不是这个就是那个。哇，我以前从没遇到过像这样的事情。我的测试机变成了一块无用的砖头。我们该绝望吗？不，绝对不。让我来告诉你怎样修复它吧。
+
+### 问题详情
+
+一切都要从 Solus 尝试安装它自己的启动引导器 goofiboot 说起。不知道什么原因，它没有成功完成安装，留给我的是一个无法启动的系统。过了 BIOS 之后，只剩下一个 GRUB 恢复终端。
+
+ ![安装失败](http://www.dedoimedo.com/images/computers-years/2016-2/solus-installation-failed.png)
+
+我尝试在终端中手动修复，使用了类似我在扩展的 [GRUB2 指南][2]中介绍的各种命令，但还是不行。接着我按照 [GRUB2 和 EFI 指南][3]中的建议，尝试从 Live CD（译者注：Live CD 是一种完整的可引导安装介质，操作系统在内存中运行，而不是从硬盘加载；CD 本身是只读的。它允许用户直接运行操作系统，而无需安装或改动计算机配置）中恢复：我用 efibootmgr 工具创建了一个条目，并确保它被标记为有效。我们之前在指南中这样做都能正常工作，哎，可现在这个方法也不起作用了。
+
+我尝试进行一次完整的 Ubuntu 安装，把它装到 Solus 所在的分区，希望安装程序能给我一些有用的信息。但是 Ubuntu 无法完成安装，报错：failed to install into /target。又回到了起点。怎么办？
+
+### 手动清除 EFI 分区
+
+显然，我们的 EFI 分区出现了严重问题。简单回顾一下，如果你使用的是 UEFI，那么你需要一个单独的 FAT-32 格式分区，用于存储 EFI 引导镜像。例如，当你安装 Fedora 时，Fedora 的引导镜像会被拷贝到 EFI 子目录中。每个操作系统都存储在自己的目录下，一般是 /boot/efi/EFI/<操作系统版本>/。
+
+ ![EFI 分区内容](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-efi-partition-contents.png)
+
+在我的 [G50][4] 机器上,这里有很多各种发行版测试条目,包括:centos、debian、fedora、mx-15、suse、Ubuntu、zorin 以及其它。这里也有一个 goofiboot 目录。但是,efibootmgr 并没有在它的菜单中显示 goofiboot 条目。显然这里出现了一些问题。
+
+```
+sudo efibootmgr -d /dev/sda
+BootCurrent: 0001
+Timeout: 0 seconds
+BootOrder: 0001,0005,2003,0000,2001,2002
+Boot0000* Lenovo Recovery System
+Boot0001* ubuntu
+Boot0003* EFI Network 0 for IPv4 (68-F7-28-4D-D1-A1)
+Boot0004* EFI Network 0 for IPv6 (68-F7-28-4D-D1-A1)
+Boot0005* Windows Boot Manager
+Boot0006* fedora
+Boot0007* suse
+Boot0008* debian
+Boot0009* mx-15
+Boot2001* EFI USB Device
+Boot2002* EFI DVD/CDROM
+Boot2003* EFI Network
+...
+```
+
+P.S. 上面的输出是在 LIVE 会话中运行命令生成的!
+
+
+我决定清除所有非默认的以及非微软的条目，然后重新开始。显然，有些东西损坏了，妨碍了新发行版设置自己的启动引导程序。因此，我删除了 /boot/efi/EFI 分区下除 Boot 和 Windows 外的所有目录。同时，我也逐一删除了启动管理器中所有多余的条目（用下面的命令）。
+
+```
+efibootmgr -b <条目编号> -B
+```
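+
+其中目录清理的部分大致如下（仅为示意，目录名以你机器上的实际内容为准，务必保留 Boot 和微软相关的目录）：
+
+```
+# cd /boot/efi/EFI
+# rm -rf goofiboot centos debian fedora mx-15 suse ubuntu zorin
+```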
+
+最后，我重新安装了 Ubuntu，并仔细观察了 GRUB 安装和配置的过程。这次成功完成了。正如预期的那样，过程中出现了几个与无效条目相关的错误，但整个安装顺利结束。
+
+ ![安装错误](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-errors.jpg)
+
+ ![安装成功](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-successful.jpg)
+
+### 额外阅读
+
+如果你不喜欢这种手动修复,你可以阅读:
+
+* [Boot-Info][5] 手册，里面有帮助你恢复系统的自动化工具
+* [Boot-repair-cd][6] 自动恢复工具下载页面
+
+### 总结
+
+如果你遇到因 EFI 分区损坏导致系统严重瘫痪的情况，那么可以遵循本指南中的建议：删除所有非默认条目；如果你和 Windows 多重引导，确保不修改任何与微软相关的东西；然后相应地更新引导菜单，删除损坏的条目；最后重新运行所需发行版的安装程序，或者尝试用之前介绍过的不那么激进的方法修复。
+
+我希望这篇小文章能帮你节省一些时间。Solus 对我系统的更改使我很懊恼。这些事情本不应该发生,恢复过程也应该更简单。不管怎样,虽然事情似乎很可怕,修复并不是很难。你只需要删除损害的文件然后重新开始。你的数据应该不会受到影响,你也应该能够顺利进入到运行中的系统并继续工作。开始吧。
+
+加油。
+
+--------------------------------------------------------------------------------
+
+
+作者简介:
+
+我叫 Igor Ljubuncic。38 岁,已婚,但还没有小孩。我现在是一个云技术公司的首席工程师,前端新手。在 2015 年年初之前,我在世界上最大的 IT 公司之一的工程计算团队担任操作系统架构师,开发新的基于 Linux 的解决方案、优化内核、在 Linux 上实现一些好的想法。在这之前,我是一个为高性能计算环境设计创新解决方案团队的技术主管。其它一些头衔包括系统专家、系统开发员或者类似的。所有这些都是我的爱好,但从 2008 年开始,就是有报酬的工作。还有什么比这更令人满意的呢?
+
+从 2004 到 2008 年,我通过在医疗图像行业担任物理专家养活自己。我的工作主要关注解决问题和开发算法。为此,我广泛使用 Matlab,主要用于信号和图像处理。另外,我已通过几个主要工程方法的认证,包括 MEDIC Six Sigma Green Belt、实验设计以及统计工程。
+
+有时候我也会写书,包括 Linux 创新及技术工作。
+
+往下滚动你可以查看我开源项目的完整列表、发表文章以及专利。
+
+有关我奖项、提名以及 IT 相关认证的完整列表,稍后也会有。
+
+
+-------------
+
+
+via: http://www.dedoimedo.com/computers/grub2-efi-corrupt-part-recovery.html
+
+作者:[Igor Ljubuncic][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.dedoimedo.com/faq.html
+
+[1]:http://www.dedoimedo.com/computers/solus-1-2-review.html
+[2]:http://www.dedoimedo.com/computers/grub-2.html
+[3]:http://www.dedoimedo.com/computers/grub2-efi-recovery.html
+[4]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
+[5]:https://help.ubuntu.com/community/Boot-Info
+[6]:https://sourceforge.net/projects/boot-repair-cd/
diff --git a/translated/tech/20170214 How to Install Configure and Secure FTP Server in CentOS 7 Comprehensive Guide.md b/translated/tech/20170214 How to Install Configure and Secure FTP Server in CentOS 7 Comprehensive Guide.md
new file mode 100644
index 0000000000..c132e630a5
--- /dev/null
+++ b/translated/tech/20170214 How to Install Configure and Secure FTP Server in CentOS 7 Comprehensive Guide.md
@@ -0,0 +1,277 @@
+如何在 CentOS 7 中安装、配置并加固 FTP 服务器 - [全面指南]
+============================================================
+
+FTP(文件传输协议)是一种用于通过网络[在服务器和客户端之间传输文件][1]的传统并广泛使用的标准工具,特别是在不需要身份验证的情况下(允许匿名用户连接到服务器)。我们必须明白,默认情况下 FTP 是不安全的,因为它不加密传输用户凭据和数据。
+
+在本指南中,我们将介绍在 CentOS/RHEL7 和 Fedora 发行版中安装、配置和保护 FTP 服务器( VSFTPD 代表 “Very Secure FTP Daemon”)的步骤。
+
+请注意，本指南中的所有命令都将以 root 身份运行；如果你不是用 root 帐户操作服务器，请使用 [sudo 命令][2]获取 root 权限。
+
+### 步骤 1:安装 FTP 服务器
+
+1. 安装 vsftpd 服务器很直接,只要在终端运行下面的命令。
+
+```
+# yum install vsftpd
+```
+
+2. 安装完成后,服务会先被禁用,因此我们需要手动启动,并设置在下次启动时自动启用:
+
+```
+# systemctl start vsftpd
+# systemctl enable vsftpd
+```
+
+3. 接下来，为了允许从外部系统访问 FTP 服务，我们需要放开 FTP 守护进程监听的 21 端口：
+
+```
+# firewall-cmd --zone=public --permanent --add-port=21/tcp
+# firewall-cmd --zone=public --permanent --add-service=ftp
+# firewall-cmd --reload
+```
+
+### 步骤 2: 配置 FTP 服务器
+
+4. 现在，我们会进行一些配置来设置并加固我们的 FTP 服务器。让我们先备份一下原始配置文件 /etc/vsftpd/vsftpd.conf：
+
+```
+# cp /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd.conf.orig
+```
+
+接下来，打开上面的文件，并将下面的选项设置为相应的值：
+
+```
+anonymous_enable=NO # disable anonymous login
+local_enable=YES # permit local logins
+write_enable=YES # enable FTP commands which change the filesystem
+local_umask=022 # value of umask for file creation for local users
+dirmessage_enable=YES # enable showing of messages when users first enter a new directory
+xferlog_enable=YES # a log file will be maintained detailing uploads and downloads
+connect_from_port_20=YES # use port 20 (ftp-data) on the server machine for PORT style connections
+xferlog_std_format=YES # keep standard log file format
+listen=NO # prevent vsftpd from running in standalone mode
+listen_ipv6=YES # vsftpd will listen on an IPv6 socket instead of an IPv4 one
+pam_service_name=vsftpd # name of the PAM service vsftpd will use
+userlist_enable=YES # enable vsftpd to load a list of usernames
+tcp_wrappers=YES # turn on tcp wrappers
+```
+
+5. 现在基于用户列表文件 `/etc/vsftpd.userlist` 来配置 FTP 允许/拒绝用户访问。
+
+默认情况下，如果设置了 `userlist_enable=YES`，那么当 `userlist_deny` 选项设置为 `YES` 时，`userlist_file=/etc/vsftpd.userlist` 文件中列出的用户将被拒绝登录。
+
+然而，设置 `userlist_deny=NO` 会反转默认行为，这意味着只有在 `userlist_file=/etc/vsftpd.userlist` 中显式列出的用户才被允许登录。
+
+```
+userlist_enable=YES # vsftpd will load a list of usernames, from the filename given by userlist_file
+userlist_file=/etc/vsftpd.userlist # stores usernames.
+userlist_deny=NO
+```
+
+此外，当用户登录到 FTP 服务器时，他们会被置于 chroot jail 中，即只能把自己的家目录作为 FTP 会话的本地根目录。
+
+接下来，我们将介绍把 FTP 用户限制（chroot）在其家目录（本地根目录）中的两种可能方式，如下所述。
+
+6. 接下来添加下面的选项来限制 FTP 用户到它们自己的家目录。
+
+```
+chroot_local_user=YES
+allow_writeable_chroot=YES
+```
+
+`chroot_local_user=YES` 意味着本地用户将被置于 chroot jail 中，默认是他们登录后的家目录。
+
+同样默认的是,出于安全原因,vsftpd 不会允许 chroot jail 目录可写,然而,我们可以添加 allow_writeable_chroot=YES 来覆盖这个设置。
+
+保存并关闭文件。
+
+### 步骤 3：用 SELinux 加固 FTP 服务器
+
+7. 现在,让我们设置下面的 SELinux 布尔值来允许 FTP 能读取用户家目录下的文件。请注意,这最初是使用以下命令完成的:
+
+```
+# setsebool -P ftp_home_dir on
+```
+
+然而，如这份 bug 报告（[https://bugzilla.redhat.com/show_bug.cgi?id=1097775][3]）所述，`ftp_home_dir` 指令默认是禁用的。
+
+现在,我们会使用 semanage 命令来设置 SELinux 规则来允许 FTP 读取/写入用户的家目录。
+
+```
+# semanage boolean -m ftpd_full_access --on
+```
+
+这时,我们需要重启 vsftpd 来使目前的设置生效:
+
+```
+# systemctl restart vsftpd
+```
+
+### 步骤 4: 测试 FTP 服务器
+
+8. 现在我们会用[ useradd 命令][4]创建一个 FTP 用户来测试 FTP 服务器。
+
+```
+# useradd -m -c “Ravi Saive, CEO” -s /bin/bash ravi
+# passwd ravi
+```
+
+之后,我们如下使用[ echo 命令][5]添加用户 ravi 到文件 /etc/vsftpd.userlist 中:
+
+```
+# echo "ravi" | tee -a /etc/vsftpd.userlist
+# cat /etc/vsftpd.userlist
+```
+
+9. 现在是时候测试我们上面的设置是否可以工作了。让我们使用匿名登录测试,我们可以从下面的截图看到匿名登录不被允许。
+
+```
+# ftp 192.168.56.10
+Connected to 192.168.56.10 (192.168.56.10).
+220 Welcome to TecMint.com FTP service.
+Name (192.168.56.10:root) : anonymous
+530 Permission denied.
+Login failed.
+ftp>
+```
+[
+ ![Test Anonymous FTP Login](http://www.tecmint.com/wp-content/uploads/2017/02/Test-Anonymous-FTP-Login.png)
+][6]
+
+*测试 FTP 匿名登录*
+
+10. 我们再测试一下没有列在 /etc/vsftpd.userlist 中的用户能否登录，如下面截图所示，答案是不能：
+
+```
+# ftp 192.168.56.10
+Connected to 192.168.56.10 (192.168.56.10).
+220 Welcome to TecMint.com FTP service.
+Name (192.168.56.10:root) : aaronkilik
+530 Permission denied.
+Login failed.
+ftp>
+```
+[
+ ![FTP User Login Failed](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login-Failed.png)
+][7]
+
+*FTP 用户登录失败*
+
+11. 现在最后测试一下，列在 /etc/vsftpd.userlist 中的用户登录后是否真的进入了他/她的家目录：
+
+```
+# ftp 192.168.56.10
+Connected to 192.168.56.10 (192.168.56.10).
+220 Welcome to TecMint.com FTP service.
+Name (192.168.56.10:root) : ravi
+331 Please specify the password.
+Password:
+230 Login successful.
+Remote system type is UNIX.
+Using binary mode to transfer files.
+ftp> ls
+```
+[
 ![FTP User Login Successful](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login.png)
+][8]
+
+*用户成功登录*
+
+**警告**：使用 `allow_writeable_chroot=YES` 有一定的安全隐患，特别是当用户具有上传权限或 shell 访问权限时。
+
+只有当你完全清楚自己在做什么时才应启用此选项。需要注意的是，这些安全隐患并非 vsftpd 特有，它们适用于所有提供把本地用户置于 chroot jail 功能的 FTP 守护进程。
+
+因此,我们将在下一节中看到一种更安全的方法来设置不同的不可写本地根目录。
+
+### 步骤 5: 配置不同的 FTP 家目录
+
+12. 再次打开 vsftpd 配置文件,并将下面不安全的选项注释掉:
+
+```
+#allow_writeable_chroot=YES
+```
+
+接着为用户(`ravi`,你的可能不同)创建另外一个替代根目录,并将所有用户对该目录的可写权限移除:
+
+```
+# mkdir /home/ravi/ftp
+# chown nobody:nobody /home/ravi/ftp
+# chmod a-w /home/ravi/ftp
+```
+
+13. 接下来,在用户存储他/她的文件的本地根目录下创建一个文件夹:
+
+```
+# mkdir /home/ravi/ftp/files
+# chown ravi:ravi /home/ravi/ftp/files
+# chmod 0700 /home/ravi/ftp/files/
+```
+
+接着在 vsftpd 配置文件中添加/修改这些选项：
+
+```
+user_sub_token=$USER # 在本地根目录下插入用户名
+local_root=/home/$USER/ftp # 定义任何用户的本地根目录
+```
+
+保存并关闭文件。和之前一样，修改设置后需要重启服务才能生效：
+
+```
+# systemctl restart vsftpd
+```
+
+14. 最后再测试一次，确认用户的本地根目录就是我们在其家目录中创建的那个 FTP 目录：
+
+```
+# ftp 192.168.56.10
+Connected to 192.168.56.10 (192.168.56.10).
+220 Welcome to TecMint.com FTP service.
+Name (192.168.56.10:root) : ravi
+331 Please specify the password.
+Password:
+230 Login successful.
+Remote system type is UNIX.
+Using binary mode to transfer files.
+ftp> ls
+```
+[
+ ![FTP User Home Directory Login Successful](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Home-Directory-Login-Successful.png)
+][9]
+
+*FTP 用户家目录登录成功*
+
+就是这样了！本文介绍了如何在 CentOS 7 中安装、配置以及加固 FTP 服务器。请使用下面的评论栏给我们反馈，或者分享关于这个主题的任何有用信息。
+
+**建议阅读:** [在 RHEL/CentOS 7 上安装 ProFTPD 服务器] [10]
+
+在下一篇文章中，我们还将介绍如何在 CentOS 7 中[使用 SSL/TLS 保护][11] FTP 服务器的连接。在此之前，请继续关注 TecMint。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Aaron Kili 是一名 Linux 和 F.O.S.S 爱好者，未来的 Linux 系统管理员和网页开发者，目前是 TecMint 技术网站的原创作者，他非常喜欢使用电脑工作，并坚信分享知识是一种美德。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/install-ftp-server-in-centos-7/
+
+作者:[Aaron Kili][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/aaronkili/
+
+[1]:http://www.tecmint.com/scp-commands-examples/
+[2]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
+[3]:https://bugzilla.redhat.com/show_bug.cgi?id=1097775
+[4]:http://www.tecmint.com/add-users-in-linux/
+[5]:http://www.tecmint.com/echo-command-in-linux/
+[6]:http://www.tecmint.com/wp-content/uploads/2017/02/Test-Anonymous-FTP-Login.png
+[7]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login-Failed.png
+[8]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login.png
+[9]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Home-Directory-Login-Successful.png
+[10]:http://www.tecmint.com/install-proftpd-in-centos-7/
+[11]:http://www.tecmint.com/secure-vsftpd-using-ssl-tls-on-centos/
diff --git a/translated/tech/20170307 Assign Read-Write Access to a User on Specific Directory in Linux.md b/translated/tech/20170307 Assign Read-Write Access to a User on Specific Directory in Linux.md
new file mode 100644
index 0000000000..ecde430a6f
--- /dev/null
+++ b/translated/tech/20170307 Assign Read-Write Access to a User on Specific Directory in Linux.md
@@ -0,0 +1,155 @@
+在 Linux 上给用户赋予指定目录的读写权限
+============================================================
+
+
+在上篇文章中我们向您展示了如何在 Linux 上[创建一个共享目录][3]。这次,我们会为您介绍如何将 Linux 上指定目录的读写权限赋予用户。
+
+
+有两种方法可以实现这个目标:第一种是 [使用 ACL (访问控制列表)][4] ,第二种是[创建用户组来管理文件权限][5],下面会一一介绍。
+
+
+为了完成这个教程,我们将使用以下设置。
+
+```
+Operating system: CentOS 7
+Test directory: /shares/project1/reports
+Test user: tecmint
+Filesystem type: Ext4
+```
+
+请确认所有的命令都是使用 root 用户执行的,或者使用 [sudo 命令][6] 来享受与之同样的权限。
+
+让我们开始吧!下面,先使用 mkdir 命令来创建一个名为 `reports` 的目录。
+
+```
+# mkdir -p /shares/project1/reports
+```
+
+### 使用 ACL 来为用户赋予目录的读写权限
+
+**重要提示**：打算使用此方法的话，你需要确认你的 Linux 文件系统类型（如 Ext3 和 Ext4、NTFS、BTRFS）支持 ACL。
+
+1. 首先, 依照以下命令在您的系统中[检查当前文件系统类型][7],并且查看内核是否支持 ACL:
+
+```
+# df -T | awk '{print $1,$2,$NF}' | grep "^/dev"
+# grep -i acl /boot/config*
+```
+
+从下方的截屏可以看到,文件系统类型是 **Ext4**,并且从 **CONFIG_EXT4_FS_POSIX_ACL=y** 选项可以发现内核是支持 **POSIX ACLs** 的。
+
+[
+ ![Check Filesystem Type and Kernel ACL Support](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Filesystem-Type-and-Kernel-ACL-Support.png)
+][8]
+
+*查看文件系统类型和内核的 ACL 支持。*
+
+2. 接下来,查看文件系统(分区)挂载时是否使用了 ACL 选项。
+
+```
+# tune2fs -l /dev/sda1 | grep acl
+```
+[
+ ![Check Partition ACL Support](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Partition-ACL-Support.png)
+][9]
+
+*查看分区是否支持 ACL*
+
+从上面的输出可以发现，默认的挂载选项中已经带有对 **ACL** 的支持。如果发现结果不如所愿，你可以通过以下命令对指定分区（此例中使用 **/dev/sda3**）开启 ACL 支持。
+
+```
+# mount -o remount,acl /
+# tune2fs -o acl /dev/sda3
+```
+
+3. 现在，是时候把目录 `reports` 的读写权限分配给名为 `tecmint` 的用户了，执行以下命令即可。
+
+```
+# getfacl /shares/project1/reports # Check the default ACL settings for the directory
+# setfacl -m user:tecmint:rw /shares/project1/reports # Give rw access to user tecmint
+# getfacl /shares/project1/reports # Check new ACL settings for the directory
+```
+[
+ ![Give Read/Write Access to Directory Using ACL](http://www.tecmint.com/wp-content/uploads/2017/03/Give-Read-Write-Access-to-Directory-Using-ACL.png)
+][10]
+
+*通过 ACL 对指定目录赋予读写权限*
+
+在上方的截屏中,通过第二个 **getfacl** 命令的输出结果可以发现,用户 `tecmint` 已经成功地被赋予了 **/shares/project1/reports** 目录的读写权限。
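+
+在此基础上,`setfacl` 还有几个常用的用法,以下命令仅作演示(路径沿用本文的测试目录):
+
+```
+# setfacl -d -m user:tecmint:rw /shares/project1/reports   # 设置默认 ACL,之后新建的文件会自动继承
+# setfacl -x user:tecmint /shares/project1/reports         # 撤销用户 tecmint 的 ACL 条目
+# setfacl -b /shares/project1/reports                      # 清除该目录上的全部 ACL 条目
+```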
+
+如果想要获取 ACL 的更多信息,可以查看下方我们的其他指南。
+
+1. [如何使用访问控制列表(ACL)为用户/组设置磁盘配额][1]
+2. [如何使用访问控制列表(ACL)挂载网络共享][2]
+
+现在我们来看看如何使用第二种方法来为目录赋予读写权限。
+
+### 使用用户组来为用户赋予指定目录的读写权限
+
+1. 如果用户已经拥有了默认的用户组(通常组名与用户名相同),就可以简单地通过变更文件夹的所属用户组来完成授权。
+
+```
+# chgrp tecmint /shares/project1/reports
+```
+
+另外,我们也可以通过以下方法为多个用户(需要赋予指定目录读写权限的)新建一个用户组。如此一来,也就[创建了一个共享目录][11]。
+
+```
+# groupadd projects
+```
+
+2. 接下来将用户 `tecmint` 添加到 `projects` 组中:
+
+```
+# usermod -aG projects tecmint # add user to projects
+# groups tecmint # check users groups
+```
+
+3. 将目录的所属用户组变更为 projects:
+
+```
+# chgrp projects /shares/project1/reports
+```
+
+4. 现在,给组成员设置读写权限。
+
+```
+# chmod -R 0760 /shares/project1/reports
+# ls -l /shares/project1/ #check new permissions
+```
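+
+设置完成后,可以切换到该用户实际验证一下权限。值得注意的是,对目录而言,用户需要执行权限(x)才能进入目录并在其中创建文件,因此若下面的验证失败,可以把上面的 0760 换成 0770 再试:
+
+```
+# su - tecmint -c "touch /shares/project1/reports/test.txt"   # 验证能否在目录中创建文件
+# su - tecmint -c "ls -l /shares/project1/reports"            # 验证能否读取目录内容
+```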
+
+
+好了!这篇教程中,我们向您展示了如何在 Linux 中将指定目录的读写权限赋予用户。若有疑问,请在留言区中提问。
+
+--------------------------------------------------------------------------------
+
+
+作者简介:
+
+Aaron Kili 是 Linux 和 F.O.S.S 爱好者,未来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识是一种美德。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/give-read-write-access-to-directory-in-linux/
+
+作者:[Aaron Kili][a]
+译者:[Mr-Ping](http://www.mr-ping.com)
+校对:[jasminepeng](https://github.com/jasminepeng)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/aaronkili/
+[1]:http://www.tecmint.com/set-access-control-lists-acls-and-disk-quotas-for-users-groups/
+[2]:http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/
+[3]:http://www.tecmint.com/create-a-shared-directory-in-linux/
+[4]:http://www.tecmint.com/secure-files-using-acls-in-linux/
+[5]:http://www.tecmint.com/manage-users-and-groups-in-linux/
+[6]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
+[7]:http://www.tecmint.com/find-linux-filesystem-type/
+[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Filesystem-Type-and-Kernel-ACL-Support.png
+[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Partition-ACL-Support.png
+[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Give-Read-Write-Access-to-Directory-Using-ACL.png
+[11]:http://www.tecmint.com/create-a-shared-directory-in-linux/
+[12]:http://www.tecmint.com/author/aaronkili/
+[13]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
+[14]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/translated/tech/20170307 How to set up a personal web server with a Raspberry Pi.md b/translated/tech/20170307 How to set up a personal web server with a Raspberry Pi.md
index d3e6d61ee6..24d0aea995 100644
--- a/translated/tech/20170307 How to set up a personal web server with a Raspberry Pi.md
+++ b/translated/tech/20170307 How to set up a personal web server with a Raspberry Pi.md
@@ -4,10 +4,9 @@
![How to set up a personal web server with a Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/lightbulb_computer_person_general_.png?itok=ZY3UuQQa "How to set up a personal web server with a Raspberry Pi")
>图片来源 : opensource.com
-个人网络服务器即 “云”,只是是你去拥有和控制它,而不是托管在一个大型的公司上。
+个人网络服务器即 “云”,只不过是你拥有和控制它,而不是一个大型公司。
-
-拥有一个自己的云有很多好处,包括定制,免费存储,免费的互联网服务,开源软件的路径,高品质的安全性,完全控制您的内容,快速更改的能力,一个实验的地方 代码等等。 这些好处大部分是无法估量的,但在财务上,这些好处可以节省您每个月超过 100 美元。
+拥有一个自己的云有很多好处,包括定制、免费存储、免费的互联网服务、开源软件的体验、高品质的安全性、完全控制您的内容、快速更改的能力、实验代码的地方等等。 这些好处大部分是无法估量的,但在财务上,这些好处可以为您每个月节省超过 100 美元。
![Building your own web server with Raspberry Pi](https://opensource.com/sites/default/files/1-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Building your own web server with Raspberry Pi")
@@ -15,18 +14,18 @@
我本可以选择 AWS ,但我更喜欢完全自由且安全性可控,并且我可以学一下这些东西是如何搭建的。
-* 私有主机: 不使用 BlueHost 或 DreamHost
-* 云存储:不使用 Dropbox, Box, Google Drive, Microsoft Azure, iCloud, 或是 AWS
-* 确保内部安全
-* HTTPS:Let’s Encrypt
-* 分析: Google
-* OpenVPN:Do not need private Internet access (预计每个月花费 $7)
+* 私有主机:不使用 BlueHost 或 DreamHost
+* 云存储:不使用 Dropbox、Box、Google Drive、Microsoft Azure、iCloud 或是 AWS
+* 内部部署安全
+* HTTPS:Let’s Encrypt
+* 分析:Google
+* OpenVPN:不需要专有互联网连接(预计每个月花费 $7)
我所使用的物品清单:
* 树莓派 3 代 Model B
-* MicroSD 卡 (推荐使用 32GB, [兼容树莓派的 SD 卡][1])
-* USB microSD 卡读卡器
+* MicroSD 卡(推荐使用 32 GB, [兼容树莓派的 SD 卡][a1])
+* USB microSD 卡读卡器
* 以太网络线
* 连接上 Wi-Fi 的路由器
* 树莓派盒子
@@ -40,32 +39,32 @@
### 步骤 1: 启动树莓派
-下载最新发布的 Raspbian (树莓派的操作系统). [Raspbian Jessie][6] 的 ZIP 包就可以用。解压缩或提取下载的文件然后把它拷贝到 SD 卡里。使用 [Pi Filler][7] 可以让这些过程变得更简单。[下载 Pi Filer 1.3][8] 或最新的版本。解压或提取下载文件之后打开它,你应该会看到这样的提示:
+下载最新发布的 Raspbian (树莓派的操作系统)。 [Raspbian Jessie][a6] 的 ZIP 包就可以用 [1]。解压缩或提取下载的文件然后把它拷贝到 SD 卡里。使用 [Pi Filler][a7] 可以让这些过程变得更简单。[下载 Pi Filer 1.3][8] 或最新的版本。解压或提取下载文件之后打开它,你应该会看到这样的提示:
![Pi Filler prompt](https://opensource.com/sites/default/files/2-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Pi Filler prompt")
-确保 USB 读卡器这时还没有插上。如果已经插上了那就先推出。点 Continue 继续下一步。你会看到一个让你选择文件的界面,选择你之前解压缩后的树莓派系统文件。然后你会看到另一个提示如图所示:
+确保 USB 读卡器这时还没有插上。如果已经插上了那就先弹出。点 Continue 继续下一步。你会看到一个让你选择文件的界面,选择你之前解压缩后的树莓派系统文件。然后你会看到另一个提示,如图所示:
![USB card reader prompt](https://opensource.com/sites/default/files/3-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "USB card reader")
-把 MicroSD 卡 (推荐 32GB ,至少 16GB) 插入到 USB MicroSD 卡读卡器里。然后把 USB 读卡器接入到你的电脑里。你可以把你的 SD 卡重命名为 "Raspberry" 以区别其他设备。然后点 continue。请先确保你的 SD 卡是空的,因为 Pi Filler 也会在运行时 _擦除_ 所有事先存在 SD 卡里的内容。如果你要备份卡里的内容,那你最好就马上备份。当你点 continue 的时候,Raspbian OS 就会被写入到 SD 卡里。这个过程大概会花费一到三分钟左右。当写入完成后,推出 USB 读卡器,把 SD 卡拔出来插入到树莓派的 SD 卡槽里。把电源线接上,给树莓派提供电源。这时树莓派就会自己启动。树莓派的默认登录账户信息是:
+把 MicroSD 卡(推荐 32GB,至少 16GB)插入到 USB MicroSD 卡读卡器里。然后把 USB 读卡器接入到你的电脑里。你可以把你的 SD 卡重命名为 “Raspberry” 以区别其他设备。然后点击 continue。请先确保你的 SD 卡是空的,因为 Pi Filler 会在运行时 _擦除_ 所有事先存在 SD 卡里的内容。如果你要备份卡里的内容,那你最好就马上备份。当你点 continue 的时候,Raspbian OS 就会被写入到 SD 卡里。这个过程大概会花费一到三分钟左右。当写入完成后,弹出 USB 读卡器,把 SD 卡拔出来插入到树莓派的 SD 卡槽里。把电源线接上,给树莓派提供电源。这时树莓派就会自己启动。树莓派的默认登录账户信息是:
**用户名: pi
密码: raspberry**
-当树莓派首次启动完成时,会跳出一个标题为 "Setup Options" 的配置界面,就像下面的图片一样 [2]:
+当树莓派首次启动完成时,会跳出一个标题为 “Setup Options”(设置选项)的配置界面,就像下面的图片一样 [2]:
![Raspberry Pi software configuration setup](https://opensource.com/sites/default/files/4-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Raspberry Pi software configuration setup")
-选择 "Expand Filesystem" 这一选项并回车 [3]. 同时,我还推荐选择第二个选项 "Change User Password" 。这对保证安全性来说尤为重要。它还能个性化你的树莓派.
+选择 “Expand Filesystem” 这一选项并回车 [3]。 同时,我还推荐选择第二个选项 “Change User Password”。这对保证安全性来说尤为重要。它还能个性化你的树莓派。
-在选项列表中选择第三项 "Enable Boot To Desktop/Scratch" 并回车。这时会跳到另一个标题为 "Choose boot option" 的界面,就像下面这张图这样。
+在选项列表中选择第三项 “Enable Boot To Desktop/Scratch” 并回车。这时会跳到另一个标题为 “Choose boot option” 的界面,就像下面这张图这样。
![Choose boot option](https://opensource.com/sites/default/files/5-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Choose boot option")
-在 "Choose boot option" 这个界面选择第二个选项 "Desktop log in as user 'pi' at the graphical desktop" 并回车 [4]。完成这个操作之后会回到之前的 "Setup Options" 界面。如果没有回到之前的界面的话就选择当前界面底部的 "OK" 按钮并回车。
+在 “Choose boot option” 这个界面选择第二个选项 “Desktop log in as user 'pi' at the graphical desktop” 并回车 [4]。完成这个操作之后会回到之前的 “Setup Options” 界面。如果没有回到之前的界面的话就选择当前界面底部的 “OK” 按钮并回车。
-当这些操作都完成之后,选择当前界面底部的 "Finish" 按钮并回车,这时它就会自动重启。如果没有自动重启的话,就在终端里使用如下命令来重启。
+当这些操作都完成之后,选择当前界面底部的 “Finish” 按钮并回车,这时它就会自动重启。如果没有自动重启的话,就在终端里使用如下命令来重启。
**$ sudo reboot**
@@ -85,7 +84,7 @@ $ sudo apt-get dist-upgrade -y
$ sudo rpi-update
```
-这些操作可能会花费几分钟时间。完成之后,现在运行着的树莓派就时最新的了。
+这些操作可能会花费几分钟时间。完成之后,现在运行着的树莓派就是最新的了。
### 步骤 2: 配置树莓派
@@ -97,9 +96,9 @@ SSH 指的是 Secure Shell,是一种加密网络协议,可让你在计算机
$ sudo ifconfig
```
-如果你在使用以太网,看 "eth0" 这一块。如果你在使用 Wi-Fi, 看 "wlan0" 这一块。
+如果你在使用以太网,看 “eth0” 部分。如果你在使用 Wi-Fi, 看 “wlan0” 部分。
-查找“inet addr”,后跟一个IP地址,如192.168.1.115,这是本篇文章中使用的默认IP
+查找 “inet addr”,后跟一个 IP 地址,如 192.168.1.115,这是本篇文章中使用的默认 IP。
有了这个地址,在终端中输入 :
@@ -107,15 +106,15 @@ $ sudo ifconfig
$ ssh pi@192.168.1.115
```
-对于PC上的SSH,请参见脚注[5]。
+对于 PC 上的 SSH,请参见脚注 [5]。
-出现提示时输入默认密码“raspberry”,除非你之前更改过密码。
+出现提示时输入默认密码 “raspberry”,除非你之前更改过密码。
现在你已经通过 SSH 登录成功。
### 远程桌面
-使用GUI(图形用户界面)有时比命令行更容易。 在树莓派的命令行(使用SSH)上键入:
+使用 GUI(图形用户界面)有时比命令行更容易。 在树莓派的命令行(使用 SSH)上键入:
```
$ sudo apt-get install xrdp
@@ -123,19 +122,19 @@ $ sudo apt-get install xrdp
Xrdp 支持 Mac 和 PC 的 Microsoft Remote Desktop 客户端。
-在 Mac 上,在 App store 中搜索 “Microsoft Remote Desktop”。 下载它。 (对于PC,请参见脚注[6]。)
+在 Mac 上,在 App store 中搜索 “Microsoft Remote Desktop”。 下载它。 (对于 PC,请参见脚注 [6]。)
-安装完成之后,在你的 Mac 中搜索一个叫 "Microsoft Remote Desktop" 的应用并打开它,你会看到 :
+安装完成之后,在你的 Mac 中搜索一个叫 “Microsoft Remote Desktop” 的应用并打开它,你会看到 :
![Microsoft Remote Desktop](https://opensource.com/sites/default/files/7-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Microsoft Remote Desktop")
-图片来自 Mitchell McLaughlin, CC BY-SA 4.0
+*图片来自 Mitchell McLaughlin, CC BY-SA 4.0*
-点击 "New" 新建一个远程连接,在空白处填写如下配置。
+点击 “New” 新建一个远程连接,在空白处填写如下配置。
![Setting up a remote connection](https://opensource.com/sites/default/files/8-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Setting up a remote connection")
-图片来自 Mitchell McLaughlin, CC BY-SA 4.0
+*图片来自 Mitchell McLaughlin, CC BY-SA 4.0*
关闭 “New” 窗口就会自动保存。
@@ -147,7 +146,7 @@ Xrdp 支持 Mac 和 PC 的 Microsoft Remote Desktop 客户端。
好了,现在你不需要额外的鼠标、键盘或显示器就能控制你的树莓派。这是一个更为轻量级的配置。
-### 静态本地 ip 地址
+### 静态本地 IP 地址
有时候你的本地 IP 地址 192.168.1.115 会发生改变。我们需要让这个 IP 地址静态化。输入:
@@ -155,17 +154,17 @@ Xrdp 支持 Mac 和 PC 的 Microsoft Remote Desktop 客户端。
$ sudo ifconfig
```
-从 “eth0” 部分或 “wlan0” 部分,“inet addr”(树莓派当前 IP),“bcast”(广播 IP 范围)和 “mask”(子网掩码地址))中删除。 然后输入:
+从 “eth0” 部分或 “wlan0” 部分记下 “inet addr”(树莓派当前 IP)、“bcast”(广播 IP 范围)和 “mask”(子网掩码地址)。 然后输入:
```
$ netstat -nr
```
-记下 "destination" 和 "gateway/network."
+记下 “destination” 和 “gateway/network”。
![Setting up a local IP address](https://opensource.com/sites/default/files/setting_up_local_ip_address.png "Setting up a local IP address")
-cumulative records 应该大概是这样子的:
+应该大概是这样子的:
```
net address 192.168.1.115
@@ -182,7 +181,7 @@ destination 192.168.1.0
$ sudo nano /etc/dhcpcd.conf
```
-不要设置 **/etc/network/interfaces**
+不要使用 **/etc/network/interfaces**。
剩下要做的就是把这些内容追加到这个文件的底部,把 IP 换成你想要的 IP 地址。
@@ -209,21 +208,21 @@ $ sudo ifconfig
### 静态全局 IP address
-如果您的 ISP(互联网服务提供商)已经给您一个静态外部 IP 地址,您可以跳过端口转发部分。 如果没有,请继续阅读。
+如果您的 ISP(互联网服务提供商)已经给您一个静态外部 IP 地址,您可以跳到端口转发部分。 如果没有,请继续阅读。
-你已经设置了SSH,远程桌面和静态内部 IP 地址,因此现在本地网络中的计算机将会知道在哪里可以找到你的树莓派。 但是你仍然无法从本地 Wi-Fi 网络外部访问你的树莓派。 你需要树莓派可以从互联网上的任何地方公开访问。 这需要静态外部IP地址[7]。
+你已经设置了 SSH,远程桌面和静态内部 IP 地址,因此现在本地网络中的计算机将会知道在哪里可以找到你的树莓派。 但是你仍然无法从本地 Wi-Fi 网络外部访问你的树莓派。 你需要树莓派可以从互联网上的任何地方公开访问。这需要静态外部 IP 地址 [7]。
-调用您的 ISP 并请求静态外部(有时称为静态全局)IP 地址可能会是一个非常敏感的过程。 ISP 拥有决策权,所以我会非常小心处理。 他们可能拒绝你的的静态外部 IP 地址请求。 如果他们拒绝了你的请求,你不要怪罪于他们,因为这种类型的请求有法律和操作风险。 他们特别不希望客户运行中型或大型互联网服务。 他们可能会明确地询问为什么需要一个静态的外部 IP 地址。 最好说实话,告诉他们你打算主办一个低流量的个人网站或类似的小型非营利互联网服务。 如果一切顺利,他们应该打开一张票,并在一两个月内给你打电话。
+致电您的 ISP 并请求静态外部(有时称为静态全局)IP 地址可能会是一个非常敏感的过程。 ISP 拥有决策权,所以我会非常小心处理。 他们可能会拒绝你的静态外部 IP 地址请求。 如果他们拒绝了你的请求,你不要怪罪于他们,因为这种类型的请求有法律和操作风险。 他们特别不希望客户运行中型或大型互联网服务。 他们可能会明确地询问为什么需要一个静态的外部 IP 地址。 最好说实话,告诉他们你打算搭建一个低流量的个人网站或类似的小型非营利互联网服务。 如果一切顺利,他们应该会建立一个工单,并在一两个星期内给你打电话。
### 端口转发
这个新获得的 ISP 分配的静态全局 IP 地址是用于访问路由器。 树莓派现在仍然无法访问。 你需要设置端口转发才能访问树莓派。
-端口是信息在互联网上传播的虚拟途径。 你有时需要转发端口,以使计算机像树莓派一样可以访问 Internet,因为它位于网络路由器后面。 VollmilchTV 专栏在 YouTube 上的一个视频 [什么是TCP / IP,端口,路由,Intranet,防火墙,互联网] [9]帮助我更好地了解端口。
+端口是信息在互联网上传播的虚拟途径。 你有时需要转发端口,以使计算机像树莓派一样可以访问 Internet,因为它位于网络路由器后面。 VollmilchTV 专栏在 YouTube 上的一个视频,名字是[什么是 TCP/IP,端口,路由,Intranet,防火墙,互联网][9],帮助我更好地了解端口。
-端口转发可用于像 树莓派 Web服务器或 VoIP 或点对点下载的应用程序。 有[65,000+个端口] [10]可供选择,因此你可以为你构建的每个 Internet 应用程序分配一个不同的端口。
+端口转发可用于树莓派 Web 服务器、VoIP 或点对点下载等应用程序。 有 [65,000 多个端口][10] 可供选择,因此你可以为你构建的每个 Internet 应用程序分配一个不同的端口。
-设置端口转发的方式取决于你的路由器。 如果你有 Linksys 的话,Gabriel Ramirez 在 YouTbue 上有一个标题叫 [How to go online with your Apache Ubuntu server] [2] 的视频解释了如何设置。 如果您没有 Linksys,请阅读路由器附带的文档,以便自定义和定义要转发的端口。
+设置端口转发的方式取决于你的路由器。 如果你有 Linksys 的话,Gabriel Ramirez 在 YouTube 上有一个标题叫 [How to go online with your Apache Ubuntu server][a2] 的视频解释了如何设置。 如果您没有 Linksys,请阅读路由器附带的文档,以便自定义和定义要转发的端口。
你将需要转发 SSH 以及远程桌面端口。
@@ -235,7 +234,7 @@ $ ssh pi@your_global_ip_address
它应该会提示你输入密码。
-检查端口转发是否适用于远程桌面。 打开 Microsoft Remote Desktop。 你之前的的远程连接设置应该已经保存了,但需要使用静态外部IP地址(例如195.198.227.116)来更新“PC名称”字段,而不是静态内部地址(例如192.168.1.115)。
+检查端口转发是否也适用于远程桌面。 打开 Microsoft Remote Desktop。 你之前的远程连接设置应该已经保存了,但需要使用静态外部 IP 地址(例如 195.198.227.116)来更新 “PC 名称” 字段,而不是静态内部地址(例如 192.168.1.115)。
现在,尝试通过远程桌面连接。 它应该简单地加载并到达树莓派的桌面。
@@ -243,9 +242,9 @@ $ ssh pi@your_global_ip_address
好了, 树莓派现在可以从互联网上访问了,并且已经准备好进行高级项目了。
-作为一个奖励选项,您可以保持两个远程连接到您的Pi。 一个通过互联网,另一个通过LAN(局域网)。 很容易设置。 在 Microsoft Remote Desktop 中,保留一个称为 “Pi Internet” 的远程连接,另一个称为 “Pi Local”。 将 Pi Internet的 “PC name” 配置为静态外部IP地址,例如195.198.227.116 \。 将 Pi Local 的 “PC name” 配置为静态内部IP地址,例如192.168.1.115 \。 现在,您可以选择在全球或本地连接。
+作为一个奖励选项,您可以保持两个到您的 Pi 的远程连接。 一个通过互联网,另一个通过 LAN(局域网)。很容易设置。在 Microsoft Remote Desktop 中,保留一个称为 “Pi Internet” 的远程连接,另一个称为 “Pi Local”。 将 Pi Internet 的 “PC name” 配置为静态外部 IP 地址,例如 195.198.227.116。 将 Pi Local 的 “PC name” 配置为静态内部 IP 地址,例如 192.168.1.115。 现在,您可以选择在全球或本地连接。
-如果你还没有看过由 Gabriel Ramirez 发布的 [如何使用您的Apache Ubuntu服务器上线] [3],那么你可以去看一下作为过渡到第二个项目的教程。 它将向您展示项目背后的技术架构。 在我们的例子中,你使用的是树莓派而不是 Ubuntu 服务器。 动态DNS位于域公司和您的路由器之间,这是 Ramirez 省略的部分。 除了这个微妙之处外,视频是在整体上解释系统的工作原理。 您可能会注意到本教程涵盖了树莓派设置和端口转发,这是服务器端或后端。 查看原始来源,涵盖域名,动态DNS,Jekyll(静态HTML生成器)和Apache(网络托管)的更高级项目,这是客户端或前端。
+如果你还没有看过由 Gabriel Ramirez 发布的 [如何使用您的 Apache Ubuntu 服务器上线][a3],那么你可以去看一下,作为过渡到第二个项目的教程。 它将向您展示项目背后的技术架构。 在我们的例子中,你使用的是树莓派而不是 Ubuntu 服务器。 动态 DNS 位于域名注册商和您的路由器之间,这是 Ramirez 省略的部分。 除了这个细微差别外,该视频从整体上解释了系统的工作原理。 您可能会注意到,本教程涵盖了树莓派设置和端口转发,这是服务器端或后端。 查看原始来源以了解涵盖域名、动态 DNS、Jekyll(静态 HTML 生成器)和 Apache(网络托管)的更高级项目,这是客户端或前端。
### 脚注
@@ -265,19 +264,19 @@ $ sudo-rasps-config
![PuTTY configuration](https://opensource.com/sites/default/files/putty_configuration.png "PuTTY configuration")
-[下载并运行 PuTTY] [11] 或 Windows 的另一个 SSH 客户端。 在该字段中输入你的IP地址,如上图所示。 将默认端口保留在22 \。 回车,PuTTY 将打开一个终端窗口,提示你输入用户名和密码。 填写然后开始在树莓派上进行你的远程工作。
+[下载并运行 PuTTY][11] 或 Windows 的另一个 SSH 客户端。 在该字段中输入你的 IP 地址,如上图所示。 将默认端口保留在 22。 回车,PuTTY 将打开一个终端窗口,提示你输入用户名和密码。 填写然后开始在树莓派上进行你的远程工作。
-[6]如果尚未安装,请下载 [Microsoft Remote Desktop] [12]。 搜索您的计算机上的的 Microsoft Remote Desktop。 运行。 提示时输入IP地址。 接下来,会弹出一个xrdp窗口,提示你输入用户名和密码。
+[6] 如果尚未安装,请下载 [Microsoft Remote Desktop][12]。 在您的计算机上搜索 Microsoft Remote Desktop 并运行。 提示时输入 IP 地址。 接下来,会弹出一个 xrdp 窗口,提示你输入用户名和密码。
-[7]路由器具有动态分配的外部 IP 地址,所以在理论上,它可以从互联网上暂时访问,但是您需要ISP的帮助才能使其永久访问。 如果不是这样,你需要在每次使用时重新配置远程连接。
+[7] 路由器具有动态分配的外部 IP 地址,所以在理论上,它可以从互联网上暂时访问,但是您需要 ISP 的帮助才能使其永久访问。 如果不是这样,你需要在每次使用时重新配置远程连接。
- _原文出自 [Mitchell McLaughlin's Full-Stack Computer Projects][4]._
+ _原文出自 [Mitchell McLaughlin's Full-Stack Computer Projects][a4]。_
--------------------------------------------------------------------------------
作者简介:
-Mitchell McLaughlin - 我是一名开放网络的贡献者和开发者。 我感兴趣的领域很广泛,但我特别喜欢开源软件/硬件,比特币和编程。 我住在旧金山 我有过一些简短的 GoPro 和 Oracle 工作经验。
+Mitchell McLaughlin - 我是一名开放网络的贡献者和开发者。我感兴趣的领域很广泛,但我特别喜欢开源软件/硬件、比特币和编程。我住在旧金山。我有过一些简短的 GoPro 和 Oracle 工作经验。
-------------
@@ -287,18 +286,18 @@ via: https://opensource.com/article/17/3/building-personal-web-server-raspberry-
作者:[Mitchell McLaughlin ][a]
译者:[chenxinlong](https://github.com/chenxinlong)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mitchm
-[1]:http://elinux.org/RPi_SD_cards
-[2]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
-[3]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
-[4]:https://mitchellmclaughlin.com/server.html
-[5]:https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3?rate=Zdmkgx8mzy9tFYdVcQZSWDMSy4uDugnbCKG4mFsVyaI
-[6]:https://www.raspberrypi.org/downloads/raspbian/
-[7]:http://ivanx.com/raspberrypi/
+[a1]:http://elinux.org/RPi_SD_cards
+[a2]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
+[a3]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
+[a4]:https://mitchellmclaughlin.com/server.html
+[a5]:https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3?rate=Zdmkgx8mzy9tFYdVcQZSWDMSy4uDugnbCKG4mFsVyaI
+[a6]:https://www.raspberrypi.org/downloads/raspbian/
+[a7]:http://ivanx.com/raspberrypi/
[8]:http://ivanx.com/raspberrypi/files/PiFiller.zip
[9]:https://www.youtube.com/watch?v=iskxw6T1Wb8
[10]:https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
diff --git a/translated/tech/20170310 How to install Fedora 25 on your Raspberry Pi.md b/translated/tech/20170310 How to install Fedora 25 on your Raspberry Pi.md
deleted file mode 100644
index c7600ee77e..0000000000
--- a/translated/tech/20170310 How to install Fedora 25 on your Raspberry Pi.md
+++ /dev/null
@@ -1,122 +0,0 @@
-如何在树莓派上安装 Fedora 25
-============================================================
-
-### 继续阅读,了解 Fedora 第一个官方支持 Pi 的版本。
-
- ![How to install Fedora 25 on your Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/gnome_raspberry_pi_fedora.jpg?itok=Efm6IKxP "How to install Fedora 25 on your Raspberry Pi")
->图片提供 opensource.com
-
-2016 年 10 月,Fedora 25 Beta 发布了,随之而来的还有对[ Raspberry Pi 2 和 3 的初步支持][6]。Fedora 25 的最终“通用”版在一个月后发布,从那时起,我一直在树莓派上尝试不同的 Fedora spins。
-
-这篇文章不仅是一篇 Raspberry Pi 3 上的 Fedora 25 的评论,还集合了提示、截图以及我对 Fedora 第一个官方支持 Pi 的这个版本的一些个人想法。
-
-在我开始之前,需要说一下的是,为写这篇文章所做的所有工作都是在我的运行 Fedora 25 的个人笔记本电脑上完成的。我使用一张 microSD 插到 SD 适配器中,复制和编辑所有的 Fedora 镜像到 32GB 的 microSD 卡中,然后用它在一台三星电视上启动了 Raspberry Pi 3。 因为 Fedora 25 尚不支持内置 Wi-Fi,所以 Raspberry Pi 3 还使用以太网线缆进行网络连接。最后,我使用了 Logitech K410 无线键盘和触摸板进行输入。
-
-如果你没有机会使用以太网线连接,在你的树莓派上玩 Fedora 25,我曾经有一个 Edimax Wi-Fi USB 适配器,它也可以在 Fedora 25 上工作,但在本文中,我只使用了以太网连接。
-
-## 在树莓派上安装 Fedora 25 之前
-
-阅读 Fedora 项目 wiki 上 的[树莓派支持文档][7]。你可以从 wiki 下载 Fedora 25 安装所需的镜像,那里还列出了所有支持和不支持的内容。
-
-此外,请注意,这是初始支持版本,还有许多新的工作和支持将随着 Fedora 26 的发布而出现,所以请随时报告 bug,并通过 [Bugzilla][8]、Fedora 的[ ARM 邮件列表][9]、或者 Freenode IRC 频道#fedora-arm,分享你在树莓派上使用 Fedora 25 的体验反馈。
-
-### 安装
-
-我下载并安装了五个不同的 Fedora 25 spin:GNOME(工作站默认)、KDE、Minimal、LXDE 和 Xfce。在多数情况下,它们都有一致和易于遵循的步骤,以确保我的 Raspberry Pi 3 上启动正常。有的 spin 有人们正在解决的已知 bug,有的通过 Fedora wik 遵循标准操作程序。
-
- ![GNOME on Raspberry Pi](https://opensource.com/sites/default/files/gnome_on_rpi.png "GNOME on Raspberry Pi")
-
-*Raspberry Pi 3 上的 Fedora 25 workstation、 GNOME 版本*
-
-### 安装步骤
-
-1\. 在你的笔记本上,从支持文档页面的链接下载树莓派的 Fedora 25 镜像。
-
-2\. 在笔记本上,使用 **fedora-arm-installer** 或命令行将镜像复制到 microSD:
-
-**xzcat Fedora-Workstation-armhfp-25-1.3-sda.raw.xz | dd bs=4M status=progress of=/dev/mmcblk0**
-
-注意:**/dev/mmclk0** 是我的 microSD 插到 SD 适配器后,在我的笔记本电脑上挂载的设备。虽然我在笔记本上使用 Fedora,可以使用 **fedora-arm-installer**,但我还是喜欢命令行。
-
-3\. 复制完镜像后,_先不要启动你的系统_。我知道你很想这么做,但你仍然需要进行几个调整。
-
-4\. 为了使镜像文件尽可能小以便下载,镜像上的根文件系统是很小的,因此你必须增加根文件系统的大小。如果你不这么做,你仍然可以启动你的派,但如果你一旦运行 **dnf update** 来升级你的系统,它就会填满文件系统,导致糟糕的事情发生,所以趁着 microSD 还在你的笔记本上进行分区:
-
-**growpart /dev/mmcblk0 4
-resize2fs /dev/mmcblk0p4**
-
-注意:在 Fedora 中,**growpart** 命令由 **cloud-utils-growpart.noarch** 这个 RPM 提供。
-
-5\.文件系统更新后,您需要将 **vc4** 模块列入黑名单。[更多有关此 bug 的信息。][10]
-
-我建议在启动树莓派之前这样做,因为不同的 spin 将以不同的方式表现。例如,(至少对我来说)在没有黑名单 **vc4** 的情况下,GNOME 在我启动后首先出现,但在系统更新后,它不再出现。 KDE spin 在第一次启动时根本不会出现。因此我们可能需要在我们的第一次启动之前将 **vc4** 加入黑名单,直到错误解决。
-
-黑名单应该出现在两个不同的地方。首先,在你的 microSD 根分区上,在 **etc/modprode.d/** 下创建一个 **vc4.conf**,内容是:**blacklist vc4**。第二,在你的 microSD 启动分区,添加 **rd.driver.blacklist=vc4** 到 **extlinux/extlinux.conf** 的末尾。
-
-6\. 现在,你可以启动你的树莓派了。
-
-### 启动
-
-你要有耐心,特别是对于 GNOME 和 KDE 发行版来说。在 SSD(固态驱动器)和几乎即时启动的时代,你很容易就对派的启动速度感到不耐烦,特别是第一次启动时。在第一次启动 Window Manager 之前,会先弹出一个初始配置页面,可以配置 root 密码、常规用户、时区和网络。配置完毕后,你就应该能够 SSH 到你的树莓派上,方便地调试显示问题了。
-
-### 系统更新
-
-在树莓派上运行 Fedora 25 后,你最终(或立即)会想要更新系统。
-
-首先,进行内核升级时,先熟悉你的 **/boot/extlinux/extlinux.conf** 文件。如果升级内核,下次启动时,除非手动选择正确的内核,否则很可能会启动进入 Rescue 模式。避免这种情况发生最好的方法是,在你的 **extlinux.conf** 中将定义 Rescue 镜像的那五行移动到文件的底部,这样最新的内核将在下次自动启动。你可以直接在派上或通过在笔记本挂载来编辑 **/boot/extlinux/extlinux.conf**:
-
-**label Fedora 25 Rescue fdcb76d0032447209f782a184f35eebc (4.9.9-200.fc25.armv7hl)
- kernel /vmlinuz-0-rescue-fdcb76d0032447209f782a184f35eebc
- append ro root=UUID=c19816a7-cbb8-4cbb-8608-7fec6d4994d0 rd.driver.blacklist=vc4
- fdtdir /dtb-4.9.9-200.fc25.armv7hl/
- initrd /initramfs-0-rescue-fdcb76d0032447209f782a184f35eebc.img**
-
-第二点,如果无论什么原因,如果你的显示器在升级后再次变暗,并且你确定已经将 **vc4** 加入黑名单,请运行 **lsmod | grep vc4**。你可以先启动到多用户模式而不是图形模式,并从命令行中运行 **startx**。 请阅读 **/etc/inittab** 中的内容,了解如何切换目标的说明。
-
- ![KDE on Raspberry Pi 3](https://opensource.com/sites/default/files/kde_on_rpi.png "KDE on Raspberry Pi 3")
-
-*Raspberry Pi 3 上的 Fedora 25 workstation、 KDE 版本*
-
-### Fedora Spins
-
-在我尝试过的所有 Fedora Spin 中,唯一有问题的是 XFCE spin,我相信这是由于这个[已知的 bug][11]。
-
-按照我在这里分享的步骤操作,GNOME、KDE、LXDE 和 minimal 都运行得很好。考虑到 KDE 和 GNOME 会占用更多资源,我会推荐想要在树莓派上使用 Fedora 25 的人 使用 LXDE 和 Minimal。如果你是一位系统管理员,想要一台廉价的 SELinux 支持的服务器来满足你的安全考虑,而且只是想要使用树莓派作为你的服务器,开放 22 端口以及 vi 可用,那就用 Minimal 版本。对于开发人员或刚开始学习 Linux 的人来说,LXDE 可能是更好的方式,因为它可以快速方便地访问所有基于 GUI 的工具,如浏览器、IDE 和你可能需要的客户端。
-
- ![LXES on Raspberry Pi ](https://opensource.com/sites/default/files/lxde_on_rpi.png "LXDE on Raspberry Pi 3")
-
-*Raspberry Pi 3 上的 Fedora 25 workstation、LXDE。*
-
-看到越来越多的 Linux 发行版在基于 ARM 的树莓派上可用,那真是太棒了。对于其第一个支持的版本,Fedora 团队为日常 Linux 用户提供了更好的体验。我很期待 Fedora 26 的改进和 bug 修复。
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Anderson Silva - Anderson 于 1996 年开始使用 Linux。更精确地说是 Red Hat Linux。 2007 年,他作为 IT 部门的发布工程师时加入红帽,他的职业梦想成为了现实。此后,他在红帽担任过多个不同角色,从发布工程师到系统管理员、高级经理和信息系统工程师。他是一名 RHCE 和 RHCA 以及一名活跃的 Fedora 包维护者。
-
-----------------
-
-via: https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi
-
-作者:[Anderson Silva][a]
-译者:[geekpi](https://github.com/geekpi)
-校对:[jasminepeng](https://github.com/jasminepeng)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/ansilva
-[1]:https://opensource.com/tags/raspberry-pi?src=raspberry_pi_resource_menu
-[2]:https://opensource.com/resources/what-raspberry-pi?src=raspberry_pi_resource_menu
-[3]:https://opensource.com/article/16/12/getting-started-raspberry-pi?src=raspberry_pi_resource_menu
-[4]:https://opensource.com/article/17/2/raspberry-pi-submit-your-article?src=raspberry_pi_resource_menu
-[5]:https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi?rate=gIIRltTrnOlwo4h81uDvdAjAE3V2rnwoqH0s_Dx44mE
-[6]:https://fedoramagazine.org/raspberry-pi-support-fedora-25-beta/
-[7]:https://fedoraproject.org/wiki/Raspberry_Pi
-[8]:https://bugzilla.redhat.com/show_bug.cgi?id=245418
-[9]:https://lists.fedoraproject.org/admin/lists/arm%40lists.fedoraproject.org/
-[10]:https://bugzilla.redhat.com/show_bug.cgi?id=1387733
-[11]:https://bugzilla.redhat.com/show_bug.cgi?id=1389163
-[12]:https://opensource.com/user/26502/feed
-[13]:https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi#comments
-[14]:https://opensource.com/users/ansilva
diff --git a/translated/tech/20170314 Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part 8.md b/translated/tech/20170314 Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part 8.md
new file mode 100644
index 0000000000..4d2cfcd16f
--- /dev/null
+++ b/translated/tech/20170314 Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part 8.md
@@ -0,0 +1,339 @@
+使用 Samba 和 Winbind 将 Ubuntu 16.04 添加到 AD 域 ——(八)
+============================================================
+
+这篇文章讲述了如何将 Ubuntu 主机加入到 Samba4 AD 域,并实现使用域帐号登录 Ubuntu 系统。
+
+#### 要求:
+
+1. [在 Ubuntu 系统上使用 Samba4 软件来创建活动目录架构][1]
+
+### 第一步: Ubuntu 系统加入到 Samba4 AD 之前的基本配置
+
+1、在将 Ubuntu 主机加入到 AD DC 之前,你得先确保 Ubuntu 系统中的一些服务配置正常。
+
+主机名是你的机器的一个重要标识。因此,在加入域前,使用 `hostnamectl` 命令或者通过手动编辑 `/etc/hostname` 文件来为 Ubuntu 主机设置一个合适的主机名。
+
+```
+# hostnamectl set-hostname your_machine_short_name
+# cat /etc/hostname
+# hostnamectl
+```
+[
+ ![Set System Hostname](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Ubuntu-System-Hostname.png)
+][2]
+
+*设置系统主机名*
+
+2、在这一步中,打开并编辑网卡配置文件,为你的主机设置一个合适的 IP 地址。注意把 DNS 地址设置为你的域控制器的地址。
+
+编辑 `/etc/network/interfaces` 文件,添加 `dns-nameservers` 参数,并设置为 AD 服务器的 IP 地址;添加 `dns-search` 参数,其值为域控制器的主机名,如下图所示。
+
+并且,把上面设置的 DNS IP 地址和域名添加到 `/etc/resolv.conf` 配置文件中,如下图所示:
+
+[
+ ![Configure Network Settings for AD](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network-Settings-for-AD.png)
+][3]
+
+*为 AD 配置网络设置*
+
+在上面的截图中,192.168.1.254 和 192.168.1.253 是 Samba4 AD DC 服务器的 IP 地址,Tecmint.lan 是 AD 域名,已加入到这个域中的所有机器都可以查询到该域名。
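+
+根据截图中的信息,一个示意性的配置大致如下(网卡名、本机 IP 地址均为假设值,DNS 地址与域名沿用本文示例,请按实际环境替换):
+
+```
+# /etc/network/interfaces 片段(示例)
+auto eth0
+iface eth0 inet static
+    address 192.168.1.100
+    netmask 255.255.255.0
+    gateway 192.168.1.1
+    dns-nameservers 192.168.1.254 192.168.1.253
+    dns-search tecmint.lan
+
+# /etc/resolv.conf(示例)
+search tecmint.lan
+nameserver 192.168.1.254
+nameserver 192.168.1.253
+```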
+
+3、重启网卡服务或者重启计算机以使网卡配置生效。使用 ping 命令加上域名来检测 DNS 解析是否正常。
+
+AD DC 应该会返回其 FQDN。如果你的网络中配置了 DHCP 服务器来为局域网中的计算机自动分配 IP 地址,请确保你已经把 AD DC 服务器的 IP 地址添加到 DHCP 服务器的 DNS 配置中。
+
+```
+# systemctl restart networking.service
+# ping -c2 your_domain_name
+```
+
+4、最后一步是配置服务器时间同步。安装 ntpdate 包,使用下面的命令来查询并同步 AD DC 服务器的时间。
+
+```
+$ sudo apt-get install ntpdate
+$ sudo ntpdate -q your_domain_name
+$ sudo ntpdate your_domain_name
+```
+[
+ ![Time Synchronization with AD](http://www.tecmint.com/wp-content/uploads/2017/03/Time-Synchronization-with-AD.png)
+][4]
+
+*AD 服务器时间同步*
+
+5、下一步,在 Ubuntu 机器上执行下面的命令来安装加入域环境所必需的软件。
+
+```
+$ sudo apt-get install samba krb5-config krb5-user winbind libpam-winbind libnss-winbind
+```
+[
+ ![Install Samba4 in Ubuntu Client](http://www.tecmint.com/wp-content/uploads/2017/03/Install-Samba4-in-Ubuntu-Client.png)
+][5]
+
+*在 Ubuntu 机器上安装 Samba4 软件*
+
+在 Kerberos 软件包安装的过程中,你会被要求输入默认的域名。输入大写的域名,并按 Enter 键继续安装。
+
+[
+ ![Add AD Domain Name](http://www.tecmint.com/wp-content/uploads/2017/03/Add-AD-Domain-Name.png)
+][6]
+
+*添加 AD 域名*
+
+6、当所有的软件包安装完成之后,使用域管理员帐号来测试 Kerberos 认证,然后执行下面的命令来列出票据信息。
+
+```
+# kinit ad_admin_user
+# klist
+```
+[
+ ![Check Kerberos Authentication with AD](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kerberos-Authentication-with-AD.png)
+][7]
+
+*使用 AD 来检查 Kerberos 认证是否正常*
+
+### 第二步:将 Ubuntu 主机添加到 Samba4 AD DC
+
+7、将 Ubuntu 主机添加到 Samba4 活动目录域环境中的第一步是编辑 Samba 配置文件。
+
+备份 Samba 的默认配置文件,这个配置文件是安装 Samba 软件的过程中自动生成的,使用下面的命令来重新初始化配置文件。
+
+```
+# mv /etc/samba/smb.conf /etc/samba/smb.conf.initial
+# nano /etc/samba/smb.conf
+```
+
+在新的 Samba 配置文件中添加以下内容:
+
+```
+[global]
+workgroup = TECMINT
+realm = TECMINT.LAN
+netbios name = ubuntu
+security = ADS
+dns forwarder = 192.168.1.1
+idmap config * : backend = tdb
+idmap config *:range = 50000-1000000
+template homedir = /home/%D/%U
+template shell = /bin/bash
+winbind use default domain = true
+winbind offline logon = false
+winbind nss info = rfc2307
+winbind enum users = yes
+winbind enum groups = yes
+vfs objects = acl_xattr
+map acl inherit = Yes
+store dos attributes = Yes
+```
+[
+ ![Configure Samba for AD](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Samba.png)
+][8]
+
+*Samba 服务的 AD 环境配置*
+
+根据你本地的实际情况来替换 `workgroup`、`realm`、`netbios name` 和 `dns forwarder` 的参数值。
+
+`winbind use default domain` 这个参数会让 winbind 服务把任何登录系统的帐号都当作 AD 帐号。因此,如果存在本地帐号与域帐号同名的情况,请不要启用该参数。
+
+8、现在,你应该重启 Samba 后台服务,停止并禁用不需要的服务,然后将 samba 相关服务设置为开机自启动,使用下面的命令来完成。
+
+
+```
+$ sudo systemctl restart smbd nmbd winbind
+$ sudo systemctl stop samba-ad-dc
+$ sudo systemctl enable smbd nmbd winbind
+```
+
+9、通过下面的命令,使用域管理员帐号来把 Ubuntu 主机加入到 Samba4 AD DC 中。
+
+```
+$ sudo net ads join -U ad_admin_user
+```
+[
+ ![Join Ubuntu to Samba4 AD DC](http://www.tecmint.com/wp-content/uploads/2017/03/Join-Ubuntu-to-Samba4-AD-DC.png)
+][9]
+
+*把 Ubuntu 主机加入到 Samba4 AD DC*
+
+10、在 [安装了 RSAT 工具的 Windows 机器上][10] 打开 AD UC,展开到 Computers(计算机)容器,就可以看到已加入域的 Ubuntu 计算机。
+
+[
+ ![Confirm Ubuntu Client in Windows AD DC](http://www.tecmint.com/wp-content/uploads/2017/03/Confirm-Ubuntu-Client-in-RSAT-.png)
+][11]
+
+*确认 Ubuntu 计算机已加入到 Windows AD DC*
+
+### 第三步:配置 AD 帐号认证
+
+11、为了在本地完成 AD 帐号认证,你需要修改本地机器上的一些服务和配置文件。
+
+首先,打开并编辑名字服务切换 (NSS) 配置文件。
+
+```
+$ sudo nano /etc/nsswitch.conf
+```
+
+然后在 passwd 和 group 行添加 winbind 值,如下图所示:
+
+```
+passwd: compat winbind
+group: compat winbind
+```
+[
+ ![Configure AD Accounts Authentication](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-AD-Accounts-Authentication.png)
+][12]
+
+*配置 AD 帐号认证*
+
+12、为了测试 Ubuntu 机器是否已加入到域中,你可以使用 wbinfo 命令来列出域帐号和组。
+
+```
+$ wbinfo -u
+$ wbinfo -g
+```
+[
+ ![List AD Domain Accounts and Groups](http://www.tecmint.com/wp-content/uploads/2017/03/List-AD-Domain-Accounts-and-Groups.png)
+][13]
+
+*列出域帐号和组*
+
+13、同时,使用 getent 命令配合管道符及 grep 命令来过滤出指定的域用户或组,以测试 Winbind nsswitch 模块是否运行正常。
+
+```
+$ sudo getent passwd| grep your_domain_user
+$ sudo getent group|grep 'domain admins'
+```
+[
+ ![Check AD Domain Users and Groups](http://www.tecmint.com/wp-content/uploads/2017/03/Check-AD-Domain-Users-and-Groups.png)
+][14]
+
+*检查 AD 域用户和组*
+
+14、为了让域帐号在 Ubuntu 机器上完成认证登录,你需要使用 root 帐号运行 pam-auth-update 命令,然后勾选 winbind 服务所需的选项,以让每个域帐号首次登录时自动创建 home 目录。
+
+查看所有的选项,按 [空格] 键选中需要的项,单击 OK 以应用更改。
+
+```
+$ sudo pam-auth-update
+```
+[
+ ![Authenticate Ubuntu with Domain Accounts](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Ubuntu-with-Domain-Accounts.png)
+][15]
+
+*使用域帐号登录 Ubuntu 主机*
+
+15、在 Debian 系统中,如果想让系统自动为登录的域帐号创建家目录,你需要手动编辑 /etc/pam.d/common-account 配置文件,并添加下面的内容。
+
+```
+session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
+```
+[
+ ![Authenticate Debian with Domain Accounts](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Debian-with-Domain-Accounts.png)
+][16]
+
+*使用域帐号登录 Debian 系统*
+
+16、为了让 AD 用户能够在 Linux 的命令行下修改密码,你需要打开 /etc/pam.d/common-password 配置文件,在 password 那一行删除 use_authtok 参数,如下图所示:
+
+```
+password [success=1 default=ignore] pam_winbind.so try_first_pass
+```
+[
+ ![Users Allowed to Change Password](http://www.tecmint.com/wp-content/uploads/2017/03/AD-Domain-Users-Change-Password.png)
+][17]
+
+*允许域帐号在 Linux 命令行下修改密码*
+
+17、要使用 Samba4 AD 帐号来登录 Ubuntu 主机,在 su - 后面加上域用户名即可。你还可以通过运行 id 命令来查看 AD 帐号的其它信息。
+
+```
+$ su - your_ad_user
+```
+[
+ ![Find AD User Information](http://www.tecmint.com/wp-content/uploads/2017/03/Find-AD-User-Information.png)
+][18]
+
+*查看 AD 用户信息*
+
+使用 [pwd 命令][19]来查看域帐号的当前目录,如果你想修改域帐号的密码,你可以使用 passwd 命令来完成。
+
+18、如果想让域帐号在 ubuntu 机器上拥有 root 权限,你可以使用下面的命令来把 AD 帐号添加到 sudo 系统组中:
+
+```
+$ sudo usermod -aG sudo your_domain_user
+```
+
+使用域帐号登录到 Ubuntu 主机,然后运行 `apt-get update` 命令来更新系统,以验证域帐号是否拥有 root 权限。
+
+[
+ ![Add Sudo User Root Group](http://www.tecmint.com/wp-content/uploads/2017/03/Add-Sudo-User-Root-Group.png)
+][20]
+
+*给域帐号添加 root 权限*
+
+19、要为整个域用户组添加 root 权限,使用 vi 命令打开并编辑 /etc/sudoers 配置文件,如下图所示,添加如下内容:
+
+```
+%YOUR_DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL
+```
+[
+ ![Add Root Privileges to Domain Group](http://www.tecmint.com/wp-content/uploads/2017/03/Add-Root-Privileges-to-Domain-Group.jpg)
+][21]
+
+*为域帐号组添加 root 权限*
+
+使用反斜杠来转义域用户组名称中包含的空格,而第一个反斜杠则用来转义第二个反斜杠。在上面的例子中,TECMINT 域的域用户组的名字是 “domain admins”。
+
+前边的 `(%)` 表明我们指定的是用户组而不是用户名。
+
+20、如果你使用的是图形界面的 Ubuntu 系统,并且你想使用域帐号来登录系统,你需要修改 LightDM 显示管理器,编辑 `/usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf` 配置文件,添加下面的内容,然后重启系统才能生效。
+
+```
+greeter-show-manual-login=true
+greeter-hide-users=true
+```
+
+现在你就可以使用域帐号来登录 Ubuntu 桌面系统了。登录时可以使用 “域用户名”、“域用户名@域名.tld” 或者 “域名\域用户名” 的格式。
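+
+例如,假设域名为 TECMINT.LAN(本文示例中的域),用户名为假设的 aaron,那么下面几种写法在登录界面中都可以使用:
+
+```
+aaron
+aaron@tecmint.lan
+TECMINT\aaron
+```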
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+我是一个电脑迷,开源 Linux 系统和软件爱好者,有 4 年多的 Linux 桌面、服务器系统使用和 Bash 编程经验。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/join-ubuntu-to-active-directory-domain-member-samba-winbind/
+
+作者:[Matei Cezar][a]
+译者:[rusking](https://github.com/rusking)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/cezarmatei/
+
+[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
+[2]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Ubuntu-System-Hostname.png
+[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network-Settings-for-AD.png
+[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Time-Synchronization-with-AD.png
+[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Install-Samba4-in-Ubuntu-Client.png
+[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-AD-Domain-Name.png
+[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kerberos-Authentication-with-AD.png
+[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Samba.png
+[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Join-Ubuntu-to-Samba4-AD-DC.png
+[10]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
+[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Confirm-Ubuntu-Client-in-RSAT-.png
+[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-AD-Accounts-Authentication.png
+[13]:http://www.tecmint.com/wp-content/uploads/2017/03/List-AD-Domain-Accounts-and-Groups.png
+[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-AD-Domain-Users-and-Groups.png
+[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Ubuntu-with-Domain-Accounts.png
+[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Debian-with-Domain-Accounts.png
+[17]:http://www.tecmint.com/wp-content/uploads/2017/03/AD-Domain-Users-Change-Password.png
+[18]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-AD-User-Information.png
+[19]:http://www.tecmint.com/pwd-command-examples/
+[20]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-Sudo-User-Root-Group.png
+[21]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-Root-Privileges-to-Domain-Group.jpg
+[22]:http://www.tecmint.com/author/cezarmatei/
+[23]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
+[24]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/translated/tech/20170317 How to Build Your Own Media Center with OpenELEC.md b/translated/tech/20170317 How to Build Your Own Media Center with OpenELEC.md
new file mode 100644
index 0000000000..7196c70c7c
--- /dev/null
+++ b/translated/tech/20170317 How to Build Your Own Media Center with OpenELEC.md
@@ -0,0 +1,123 @@
+# 如何通过 OpenELEC 创建你自己的媒体中心
+
+![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-media-center.jpg "How to Build Your Own Media Center with OpenELECs")
+
+你是否曾经想要创建你自己的家庭影院系统?如果是的话,这里有一个为你准备的指南!在本篇文章中,我们将会介绍如何设置一个由 OpenELEC 以及 Kodi 驱动的家庭娱乐系统。我们将会介绍需要怎样的安装配置、哪些设备可以运行该软件、如何安装它,以及其它一切需要知道的事情。
+
+### 选择一个设备
+
+在开始设置媒体中心的软件前,你需要选择一个设备。OpenELEC 支持一系列设备,从一般的桌面设备到树莓派 2/3 等等。选择好设备以后,考虑一下你将如何访问 OpenELEC 系统中的媒体,并提前做好准备。
+
+**注意:** OpenELEC 基于 Kodi,有许多方式可以加载可播放的媒体(像是 Samba 网络共享、外接设备等等)。
+
+### 制作安装磁盘
+
+制作 OpenELEC 安装盘需要一个至少 1GB 容量的 USB 存储器。这是安装该软件的唯一方式,因为开发者没有发布 ISO 文件,而是需要自行创建一个 IMG 原始镜像。选择与你设备对应的链接并[下载][10]原始磁盘镜像。当磁盘镜像下载完毕,打开一个终端,使用命令将数据从压缩包中解压出来。
+
+**在 Linux/macOS 上**
+
+```
+cd ~/Downloads
+gunzip -d OpenELEC*.img.gz
+```
+
+**在 Windows 上**
+
+下载 [7zip][11],安装它,然后解压压缩文件。
+
+当原始的 .IMG 文件解压出来后,下载 [Etcher USB creation tool][12],依据界面上的指引安装它,并用它创建 USB 安装盘。
+
+**注意:** 对于树莓派用户,Etcher 也支持将文件写入到 SD 卡中。
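+
+**注意:** 如果你习惯使用命令行,也可以不借助 Etcher,直接用 dd 把镜像写入 USB 磁盘。下面的命令仅为示意,其中的镜像文件名和 /dev/sdX 均为假设值,写错设备会毁掉上面的数据,请务必先用 lsblk 确认:
+
+```
+lsblk                                                       # 确认 U 盘对应的设备名
+sudo dd if=OpenELEC-Generic.x86_64-7.0.1.img of=/dev/sdX bs=4M status=progress
+sync                                                        # 确保缓存全部写入磁盘
+```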
+
+### 安装 OpenELEC
+
+OpenELEC 的安装过程可能是所有操作系统中最简单的之一。首先,插入 USB 设备,然后配置设备使其从 USB 启动。通常可以在开机时按 DEL 或者 F2 进入 BIOS 来完成设置。然而并不是所有的 BIOS 都是一样的,所以最好还是查阅一下设备手册。
+
+ ![openelec-installer-selection](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installer-selection.png "openelec-installer-selection")
+
+一旦进入 BIOS,就修改设置使其从 USB 磁盘中直接启动。这样电脑就会从 USB 磁盘启动,进入 Syslinux 引导屏幕。在提示符中,输入 “installer”,然后按下 Enter 键。
+
+ ![openelec-installation-selection-menu](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installation-selection-menu.png "openelec-installation-selection-menu")
+
+通常,默认选中的就是快速安装选项。按 Enter 键开始安装,安装器会跳转到磁盘选择界面。选择要安装 OpenELEC 的那块硬盘,然后按下 Enter 键开始安装过程。
+
+ ![openelec-installation-in-progress](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installation-in-progress.png "openelec-installation-in-progress")
+
+安装完成后,重启系统,并加载 OpenELEC。
+
+### 配置 OpenELEC
+
+ ![openelec-wireless-network-setup](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-wireless-network-setup.jpg "openelec-wireless-network-setup")
+
+在第一次启动时,用户必须配置一些东西。如果你的媒体中心拥有一个无线网卡,OpenELEC 将会提示用户将其连接到一个无线网络上。选择列表中的一个网络并输入密码。
+
+ ![openelec-sharing-setup](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-sharing-setup.jpg "openelec-sharing-setup")
+
+在下一步 “欢迎来到 OpenELEC” 屏幕上,用户必须配置不同的共享设置(SSH 以及 Samba)。建议你把这些设置开启,因为拥有命令行访问权限会使远程传输媒体文件变得很简单。
+
+### 增加媒体
+
+要在 OpenELEC(Kodi)中增加媒体,首先选择你希望添加媒体的部分。添加照片、音乐等媒体的过程是一样的。在这个指南中,我们将着重讲解添加视频。
+
+ ![openelec-add-files-to-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-add-files-to-kodi.jpg "openelec-add-files-to-kodi")
+
+点击主页的 “视频” 选项进入视频页面。选择 “文件” 选项,在下一个页面点击 “添加视频...”,用户就会进入 Kodi 的添加媒体页面。在这个页面,你可以随意添加媒体源了(包括内部和外部的)。
+
+ ![openelec-add-media-source-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-add-media-source-kodi.jpg "openelec-add-media-source-kodi")
+
+OpenELEC 会自动挂载外部设备(像是 USB、DVD 数据光盘等等),可以通过浏览文件挂载点来添加它们。一般情况下,这些设备都会被放在 “/run” 下;或者,也可以返回点击 “添加视频...” 的那个页面,在那里直接选择设备。任何外部设备,包括 DVD/CD,都会直接显示在那里,可供浏览。对于那些不懂如何查找挂载点的用户来说,这是一个很好的选择。
+
+ ![openelec-name-video-source-folder](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-name-video-source-folder.jpg "openelec-name-video-source-folder")
+
+选中设备后,界面会让用户浏览设备上存放媒体文件的文件夹,这一切都是在媒体中心的文件浏览器工具中进行的。一旦找到存有文件的文件夹,添加它,给文件夹起一个名字,然后按下 OK 按钮保存。
+
+ ![openelec-show-added-media-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-show-added-media-kodi.jpg "openelec-show-added-media-kodi")
+
+当用户浏览 “视频” 时,将会看到一个可以点击的文件夹,里面是从外部设备添加的媒体。这些文件夹中的内容可以很方便地在系统上播放。
+
+### 使用 OpenELEC
+
+用户登录后将会看见一个 “主界面”,上面有许多可以点击进入的部分,包括:图片、视频、音乐、程序等等。当鼠标悬停在这些部分上的时候,子部分就会出现。例如,当悬停在 “图片” 上时,就会出现 “文件” 以及 “插件” 两个子部分。
+
+ ![openelec-navigation-bar](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-navigation-bar.jpg "openelec-navigation-bar")
+
+如果用户点击了某个部分中的子部分,例如 “插件”,就会出现 Kodi 的插件选择界面。用户可以在其中浏览并安装新的插件内容(像是图片相关的插件等等),或者启动一个已经安装在系统上的现有插件。
+
+此外,点击任何部分的 “文件” 子部分(例如视频),将会直接带用户浏览该部分所有可用的文件。
+
+### 系统设置
+
+ ![openelec-system-settings](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-system-settings.jpg "openelec-system-settings")
+
+Kodi 有一个内容丰富的设置区域。要进入这些设置,将鼠标悬停在右侧,菜单选择器将会向右滚动并显示 “系统”。点击它来打开全局系统设置区。
+
+用户可以在这里修改任何设置:从 Kodi 仓库安装插件、激活额外的服务、更换皮肤,甚至设置天气。如果想要退出设置区域并返回主页面,点击右下角的 “home” 图标。
+
+### 结论
+
+现在 OpenELEC 已经安装并配置完毕,你可以随意使用你自己的、由 Linux 驱动的家庭影院系统了。在所有的家庭影院 Linux 发行版中,这个是对用户最友好的。请记住,尽管这个系统是以 “OpenELEC” 为人所知的,但它运行的是 Kodi,并且兼容任何 Kodi 的插件、工具以及程序。
+
+------
+
+via: https://www.maketecheasier.com/build-media-center-with-openelec/
+
+作者:[Derrik Diener][a]
+译者:[svtter](https://github.com/svtter)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.maketecheasier.com/author/derrikdiener/
+[1]: https://www.maketecheasier.com/author/derrikdiener/
+[2]: https://www.maketecheasier.com/build-media-center-with-openelec/#comments
+[3]: https://www.maketecheasier.com/category/linux-tips/
+[4]: http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F
+[5]: http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F&text=How+to+Build+Your+Own+Media+Center+with+OpenELEC
+[6]: mailto:?subject=How%20to%20Build%20Your%20Own%20Media%20Center%20with%20OpenELEC&body=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F
+[7]: https://www.maketecheasier.com/permanently-disable-windows-defender-windows-10/
+[8]: https://www.maketecheasier.com/repair-mac-hard-disk-with-fsck/
+[9]: https://support.google.com/adsense/troubleshooter/1631343
+[10]: http://openelec.tv/get-openelec/category/1-openelec-stable-releases
+[11]: http://www.7-zip.org/
+[12]: https://etcher.io/
diff --git a/translated/tech/20170317 Join CentOS 7 Desktop to Samba4 AD as a Domain Member – Part 9.md b/translated/tech/20170317 Join CentOS 7 Desktop to Samba4 AD as a Domain Member – Part 9.md
new file mode 100644
index 0000000000..73a6241a21
--- /dev/null
+++ b/translated/tech/20170317 Join CentOS 7 Desktop to Samba4 AD as a Domain Member – Part 9.md
@@ -0,0 +1,312 @@
+将 CentOS 7 桌面系统加入到 Samba4 AD 域环境中——(九)
+============================================================
+
+这篇文章讲述了如何使用 Authconfig-gtk 工具将 CentOS 7 桌面系统加入到 Samba4 AD 域环境中,并使用域帐号登录到 CentOS 系统。
+
+#### 要求
+
+1、[在 Ubuntu 系统中使用 Samba4 创建活动目录架构][1]
+2、[CentOS 7.3 安装指南][2]
+
+### 第一步:为 Samba4 AD DC 配置 CentOS 系统的网络环境
+
+1、在将 CentOS 7 加入到 Samba4 域环境之前,你得先配置 CentOS 系统的网络环境,确保在 CentOS 系统中通过 DNS 可以解析到域名。
+
+打开网络设置,关闭有线网卡。然后点击下方的设置按钮,手动编辑网络设置,将 DNS 的 IP 地址指定为 Samba4 AD DC 服务器的 IP 地址。
+
+设置完成之后,应用配置,并打开有线网卡。
+
+[
+ ![Network Settings](http://www.tecmint.com/wp-content/uploads/2017/03/Network-Settings.jpg)
+][3]
+
+*网络设置*
+
+[
+ ![Configure Network](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network.jpg)
+][4]
+
+*配置网络*
+
+2、下一步,打开网络配置文件,在文件末尾添加一行域名信息。这样能确保当你仅使用主机名来查询域中的 DNS 记录时, DNS 解析器会自动把域名添加进来。
+
+```
+$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
+```
+
+添加下面一行:
+
+```
+SEARCH="your_domain_name"
+```
+[
+ ![Network Interface Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Network-Interface-Configuration.jpg)
+][5]
+
+*网卡配置*
+
+3、最后,重启网卡服务以应用更改,并验证解析器的配置文件是否正确配置。我们通过使用 ping 命令加上 DC 服务器的主机名或域名以验证 DNS 解析能否正常运行。
+
+```
+$ sudo systemctl restart network
+$ cat /etc/resolv.conf
+$ ping -c1 adc1
+$ ping -c1 adc2
+$ ping tecmint.lan
+```
+[
+ ![Verify Network Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Verify-Network-Configuration.jpg)
+][6]
+
+*验证网络配置是否正常*
+
+4、同时,使用下面的命令来配置你的主机名,然后重启计算机以应用更改:
+
+```
+$ sudo hostnamectl set-hostname your_hostname
+$ sudo init 6
+```
+
+使用下面的命令来验证主机名是否正确配置:
+
+```
+$ cat /etc/hostname
+$ hostname
+```
+
+5、最后一步配置是使用下面的命令来保证系统时间跟 Samba4 AD DC 服务器的时间同步:
+
+```
+$ sudo yum install ntpdate
+$ sudo ntpdate -ud domain.tld
+```
+
+### 第二步:安装要加入 Samba4 AD DC 所必需的软件包
+
+6、为了将 CentOS 7 加入到活动目录域中,你需要使用下面的命令来安装相关的软件包:
+
+```
+$ sudo yum install samba samba-winbind krb5-workstation
+```
+
+7、最后,安装 CentOS 软件库中提供的图形化界面软件包:Authconfig-gtk。该软件用于将 CentOS 系统集成到域环境中。
+
+```
+$ sudo yum install authconfig-gtk
+```
+
+### 第三步:将 CentOS 7 桌面系统集成到 Samba4 AD DC 域环境中
+
+8、将 CentOS 加入到域的过程非常简单。使用有 root 权限的帐号在命令行下打开 Authconfig-gtk 程序,然后按下图所示修改相关配置即可:
+
+```
+$ sudo authconfig-gtk
+```
+
+打开身份或认证配置页面:
+
+* 用户帐号数据库 = 选择 Winbind
+* Winbind 域 = 你的域名
+* 安全模式 = ADS
+* Winbind ADS 域 = 你的域名.TLD
+* 域控制器 = 域控服务器的全域名
+* 默认Shell = /bin/bash
+* 勾选允许离线登录
+
+[
+ ![Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Configuration.jpg)
+][7]
+
+*域认证配置*
+
+打开高级选项配置页面:
+
+* 本地认证选项 = 支持指纹识别
+* 其它认证选项 = 用户首次登录创建家目录
+
+[
+ ![Authentication Advance Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Advance-Configuration.jpg)
+][8]
+
+*高级认证配置*
+
+9、修改完上面的配置之后,返回到身份或认证配置页面,点击加入域按钮,在弹出的提示框点保存即可。
+
+[
+ ![Identity and Authentication](http://www.tecmint.com/wp-content/uploads/2017/03/Identity-and-Authentication.jpg)
+][9]
+
+*身份和认证*
+
+[
+ ![Save Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Save-Authentication-Configuration.jpg)
+][10]
+
+*保存认证配置*
+
+10、保存配置之后,系统将会提示你提供域管理员信息以将 CentOS 系统加入到域中。输入域管理员帐号及密码后点击 OK 按钮,加入域完成。
+
+[
+ ![Joining Winbind Domain](http://www.tecmint.com/wp-content/uploads/2017/03/Joining-Winbind-Domain.jpg)
+][11]
+
+*加入 Winbind 域环境*
+
+11、加入域后,点击应用按钮以让配置生效,然后关闭所有的窗口并重启机器。
+
+[
+ ![Apply Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Apply-Authentication-Configuration.jpg)
+][12]
+
+*应用认证配置*
+
+12、要验证 CentOS 是否已成功加入到 Samba4 AD DC 中,你可以在安装了 [RSAT 工具][13] 的 Windows 机器上,打开 AD 用户和计算机工具,点击域中的 Computers(计算机)容器。
+
+你将会在右侧看到 CentOS 主机信息。
+
+[
+ ![Active Directory Users and Computers](http://www.tecmint.com/wp-content/uploads/2017/03/Active-Directory-Users-and-Computers.jpg)
+][14]
+
+*活动目录用户和计算机*
+
+### 第四步:使用 Samba4 AD DC 帐号登录 CentOS 桌面系统
+
+13、选择使用其它账户,然后输入域帐号和密码进行登录,如下图所示:
+
+```
+Domain\domain_account
+or
+Domain_user@domain.tld
+```
+[
+ ![Not listed Users](http://www.tecmint.com/wp-content/uploads/2017/03/Not-listed-Users.jpg)
+][15]
+
+*使用其它账户*
+
+[
+ ![Enter Domain Username](http://www.tecmint.com/wp-content/uploads/2017/03/Enter-Domain-Username.jpg)
+][16]
+
+*输入域用户名*
+
+14、在 CentOS 系统的命令行中,你也可以使用下面的任一方式来切换到域帐号进行登录:
+
+```
+$ su - domain\domain_user
+$ su - domain_user@domain.tld
+```
+[
+ ![Authenticate Domain Username](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User.jpg)
+][17]
+
+*使用域帐号登录*
+
+[
+ ![Authenticate Domain User Email](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User-Email.jpg)
+][18]
+
+*使用域帐号邮箱登录*
+
+15、要为域用户或组添加 root 权限,在命令行下使用 root 权限帐号打开 sudoers 配置文件,添加下面几行内容:
+
+```
+YOUR_DOMAIN\\domain_username ALL=(ALL:ALL) ALL #For domain users
+%YOUR_DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL #For domain groups
+```
+[
+ ![Assign Permission to User and Group](http://www.tecmint.com/wp-content/uploads/2017/03/Assign-Permission-to-User-and-Group.jpg)
+][19]
+
+*指定用户和用户组权限*
+
+16、使用下面的命令来查看域控制器信息:
+
+```
+$ sudo net ads info
+```
+[
+ ![Check Domain Controller Info](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Domain-Controller-Info.jpg)
+][20]
+
+*查看域控制器信息*
+
+17、你可以先在机器上安装 Winbind 客户端软件包,然后使用相关命令来验证 CentOS 加入到 Samba4 AD DC 后的信任关系是否正常:
+
+```
+$ sudo yum install samba-winbind-clients
+```
+
+然后,执行下面的一些命令来查看 Samba4 AD DC 的相关信息:
+
+```
+$ wbinfo -p #Ping 域名
+$ wbinfo -t #检查信任关系
+$ wbinfo -u #列出域用户帐号
+$ wbinfo -g #列出域用户组
+$ wbinfo -n domain_account #查看域帐号的 SID 信息
+```
+[
+ ![Get Samba4 AD DC Details](http://www.tecmint.com/wp-content/uploads/2017/03/Get-Samba4-AD-DC-Details.jpg)
+][21]
+
+*查看 Samba4 AD DC 信息*
+
+18、如果你想让 CentOS 系统退出域环境,使用具有管理员权限的帐号执行下面的命令,后面加上域名及域管理员帐号,如下图所示:
+
+```
+$ sudo net ads leave your_domain -U domain_admin_username
+```
+[
+ ![Leave Domain from Samba4 AD](http://www.tecmint.com/wp-content/uploads/2017/03/Leave-Domain-from-Samba4-AD.jpg)
+][22]
+
+*退出 Samba4 AD 域*
+
+这篇文章就写到这里吧!尽管上面的这些操作步骤是将 CentOS 7 系统加入到 Samba4 AD DC 域中,其实这些步骤也同样适用于将 CentOS 7 桌面系统加入到 Microsoft Windows Server 2008 或 2012 的域中。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+我是一个电脑迷,开源 Linux 系统和软件爱好者,有 4 年多的 Linux 桌面、服务器系统使用和 Bash 编程经验。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/join-centos-7-to-samba4-active-directory/
+
+作者:[Matei Cezar][a]
+译者:[rusking](https://github.com/rusking)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/cezarmatei/
+
+[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
+[2]:http://www.tecmint.com/centos-7-3-installation-guide/
+[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Network-Settings.jpg
+[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network.jpg
+[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Network-Interface-Configuration.jpg
+[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Verify-Network-Configuration.jpg
+[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Configuration.jpg
+[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Advance-Configuration.jpg
+[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Identity-and-Authentication.jpg
+[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Save-Authentication-Configuration.jpg
+[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Joining-Winbind-Domain.jpg
+[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Apply-Authentication-Configuration.jpg
+[13]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
+[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Active-Directory-Users-and-Computers.jpg
+[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Not-listed-Users.jpg
+[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Enter-Domain-Username.jpg
+[17]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User.jpg
+[18]:http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User-Email.jpg
+[19]:http://www.tecmint.com/wp-content/uploads/2017/03/Assign-Permission-to-User-and-Group.jpg
+[20]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Domain-Controller-Info.jpg
+[21]:http://www.tecmint.com/wp-content/uploads/2017/03/Get-Samba4-AD-DC-Details.jpg
+[22]:http://www.tecmint.com/wp-content/uploads/2017/03/Leave-Domain-from-Samba4-AD.jpg
+[23]:http://www.tecmint.com/author/cezarmatei/
+[24]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
+[25]:http://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/translated/tech/20170317 Make Container Management Easy With Cockpit.md b/translated/tech/20170317 Make Container Management Easy With Cockpit.md
index b52538061a..98f104eaa2 100644
--- a/translated/tech/20170317 Make Container Management Easy With Cockpit.md
+++ b/translated/tech/20170317 Make Container Management Easy With Cockpit.md
@@ -3,7 +3,8 @@
![cockpit](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit-containers.jpg?itok=D3MMNlkg "cockpit")
-如果你正在寻找一种简单的方法来管理包含容器的 Linux 服务器,那么你应该看看 Cockpit。[Creative Commons Zero][6]
+如果你正在寻找一种简单的方法来管理包含容器的 Linux 服务器,那么你应该看看 Cockpit。
+ [Creative Commons Zero][6]
如果你管理着一台 Linux 服务器,那么你可能正在寻找一个可靠的管理工具。为了这个你可能已经看了 [Webmin][14] 和 [cPanel][15] 这类软件。但是,如果你正在寻找一种简单的方法来管理还包括 Docker 的 Linux 服务器,那么有一个工具可以用于这个需求:[Cockpit][16]。
@@ -25,7 +26,7 @@
* 查看系统服务和日志文件
-Cockpit 可以安装在 [Debian][17]、[Red Hat][18]、[CentOS] [19]、[Arch Linux][20] 和 [Ubuntu][21] 之上。在这里,我将使用一台已经安装了 Docker 的 Ubuntu 16.04 服务器来安装系统。
+Cockpit 可以安装在 [Debian][17]、[Red Hat][18]、[CentOS][19]、[Arch Linux][20] 和 [Ubuntu][21] 之上。在这里,我将使用一台已经安装了 Docker 的 Ubuntu 16.04 服务器来安装系统。
在上面的功能列表中,其中最突出的是容器管理。为什么?因为它使安装和管理容器变得非常简单。事实上,你可能很难找到更好的容器管理解决方案。
因此,让我们来安装这个方案并看看它的使用是多么简单。
@@ -65,7 +66,7 @@ sudo systemctl enable cockpit
![login](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_a.jpg?itok=RViOst2V "login")
-图 1:Cockpit 登录页面。[授权使用][1]
+*图 1:Cockpit 登录页面。[授权使用][1]*
在 Ubuntu 中使用 Cockpit 有个警告。Cockpit 中的很多任务需要管理员权限。如果你使用标准用户登录,则无法使用 Docker 等一些工具。 要解决这个问题,你可以在 Ubuntu 上启用 root 用户。但这并不总是一个好主意。通过启用 root 帐户,你将绕过多年来一直存在的安全系统。但是,为了本文,我将使用以下两个命令启用 root 用户:
@@ -75,7 +76,7 @@ sudo passwd root
sudo passwd -u root
```
-注意,请确保你给 root 帐户一个强壮的密码。
+注意,请确保给 root 帐户一个强壮的密码。
你想恢复这个修改的话,你只需输入下面的命令:
@@ -85,13 +86,13 @@ sudo passwd -l root
在其他发行版(如 CentOS 和 Red Hat)中,你可以使用用户名 _root_ 和 root 密码登录 Cockpit,而无需像上面那样需要额外的步骤。
-如果你还在犹豫启用 root 用户,则可以始终在服务终端拉取镜像(使用命令 _docker pull IMAGE_NAME _, 这里的 _IMAGE_NAME_ 是你要拉取的镜像)。这会将镜像添加到你的 docker 服务器中,然后可以通过普通用户进行管理。唯一需要注意的是,普通用户必须使用以下命令将自己添加到 Docker 组:
+如果你对启用 root 用户感到担心,则可以在服务器终端拉取镜像(使用命令 _docker pull IMAGE_NAME_,这里的 _IMAGE_NAME_ 是你要拉取的镜像)。这会将镜像添加到你的 docker 服务器中,然后可以通过普通用户进行管理。唯一需要注意的是,普通用户必须使用以下命令将自己添加到 Docker 组:
```
sudo usermod -aG docker USER
```
- USER 是实际添加到组的用户名。在你完成后,重新登出并登入,接着使用下面的命令重启 Docker:
+ 其中,USER 是实际添加到组的用户名。在你完成后,重新登出并登入,接着使用下面的命令重启 Docker:
```
sudo service docker restart
@@ -99,27 +100,26 @@ sudo service docker restart
现在常规用户可以启动并停止 Docker 镜像/容器而无需启用 root 用户了。唯一一点是用户不能通过 Cockpit 界面添加新的镜像。
-使用 Cockpit
+### 使用 Cockpit
一旦你登录后,你可以看到 Cockpit 的主界面(图 2)。
![main window](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_b.jpg?itok=tZCHcq-Y "main window")
-图 2:Cockpit 主界面。[授权使用][2]
+*图 2:Cockpit 主界面。[授权使用][2]*
你可以通过每个栏目来检查服务器的状态,与用户合作等,但是我们想要直接进入容器。单击 “Containers” 那栏以显示当前运行的以及可用的镜像(图3)。
![Cockpit](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_c.jpg?itok=OOYJt2yv "Cockpit")
-图 3:使用 Cockpit 管理容器难以置信地简单。[授权使用][3]
+*图 3:使用 Cockpit 管理容器难以置信地简单。[授权使用][3]*
要启动一个镜像,只要找到镜像并点击关联的启动按钮。在弹出的窗口中(图 4),你可以在点击运行之前查看所有镜像的信息(并根据需要调整)。
-
![Running Docker image](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_d.jpg?itok=8uldEq_r "Running Docker image")
-图 4: 使用 Cockpit 运行 Docker 镜像。[授权使用][4]
+*图 4: 使用 Cockpit 运行 Docker 镜像。[授权使用][4]*
镜像运行后,你可以点击它查看状态,并可以停止、重启、删除示例。你也可以点击修改资源限制并接着调整内存限制还有(或者)CPU 优先级。
@@ -129,13 +129,13 @@ sudo service docker restart
![Adding image](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cockpit_f.jpg?itok=_S5g8Da2 "Adding image")
-图 5:使用 Cockpit 添加最新的官方构建 CentOS 镜像到 Docker 中。[Used with permission][5]
+*图 5:使用 Cockpit 添加最新的官方构建 CentOS 镜像到 Docker 中。[授权使用][5]*
镜像下载完后,那它就在 Docker 中可用了,并可以通过 Cockpit 运行。
-### 如它获取那样简单
+### 如获取它那样简单
-管理 Docker 并不容易。是的,在 Ubuntu 上运行 Cockpit 会有一个警告,但如果这是你唯一的选择,那么还有办法让它工作。在 Cockpit 的帮助下,你不仅可以轻松管理 Docker 镜像,也可以在任何可以访问 Linux 服务器的 web 浏览器上进行。请享受这个新发现的容易使用 Docker 的方法。
+管理 Docker 并不容易。是的,在 Ubuntu 上运行 Cockpit 会有一个警告,但如果这是你唯一的选择,那么还有办法让它工作。在 Cockpit 的帮助下,你不仅可以轻松管理 Docker 镜像,也可以在任何可以访问 Linux 服务器的 web 浏览器上这样做。请享受这个新发现的容易使用 Docker 的方法。
_在 Linux Foundation 以及 edX 中通过免费的 ["Introduction to Linux"][13] 课程学习 Linux。_
@@ -145,7 +145,7 @@ via: https://www.linux.com/learn/intro-to-linux/2017/3/make-container-management
作者:[JACK WALLEN][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20170320 How to deploy Node.js Applications with pm2 and Nginx on Ubuntu.md b/translated/tech/20170320 How to deploy Node.js Applications with pm2 and Nginx on Ubuntu.md
new file mode 100644
index 0000000000..cf9ccafdb7
--- /dev/null
+++ b/translated/tech/20170320 How to deploy Node.js Applications with pm2 and Nginx on Ubuntu.md
@@ -0,0 +1,280 @@
+如何在 Ubuntu 上使用 pm2 和 Nginx 部署 Node.js 应用
+============================================================
+
+### 导航
+
+1. [第一步 - 安装 Node.js][1]
+2. [第二步 - 生成 Express 示例 App][2]
+3. [第三步 - 安装 pm2][3]
+4. [第四步 - 安装配置 Nginx 作为反向代理][4]
+5. [第五步 - 测试][5]
+6. [链接][6]
+
+pm2 是一个 Node.js 应用的进程管理器,它可以让你的应用程序保持运行,还自带一个内建的负载均衡器。它非常简单而且强大,你可以零停机地重启或重新加载你的 node 应用,它也允许你为你的 node 应用创建集群。
+
+在这篇博文中,我会向你展示如何安装和配置 pm2 来运行一个简单的 Express 示例应用,然后配置 Nginx 作为运行在 pm2 下的 node 应用的反向代理。
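+
+举个例子,下面两条命令演示了上面提到的集群与零停机重载能力(其中 app.js 是一个假设的入口文件,仅作演示;本文正文使用的是单实例方式):
+
+`pm2 start app.js -i max`
+`pm2 reload all`
+
+`-i max` 会按 CPU 核心数启动相应数量的实例,`pm2 reload all` 则逐个重载实例,从而避免服务中断。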
+
+**前提**
+
+* Ubuntu 16.04 - 64bit
+* Root 权限
+
+### 第一步 - 安装 Node.js LTS
+
+在这篇指南中,我们会从零开始我们的实验。首先,我们需要在服务器上安装 Node.js。我会使用 Nodejs LTS 6.x 版本,它能从 nodesource 仓库中安装。
+
+从 Ubuntu 仓库安装 '**python-software-properties**' 软件包并添加 'nodesource' Nodejs 仓库。
+
+`sudo apt-get install -y python-software-properties`
+`curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -`
+
+安装最新版本的 Nodejs LTS
+
+`sudo apt-get install -y nodejs`
+
+安装完成后,查看 node 和 npm 版本。
+
+`node -v`
+`npm -v`
+
+[
+ ![检查 node.js 版本](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/1.png)
+][10]
+
+### 第二步 - 生成 Express 示例 App
+
+我会使用 **express-generator** 软件包生成的简单 web 应用框架进行示例安装。express-generator 可以使用 npm 命令安装。
+
+用 npm 安装 '**express-generator**':
+
+`npm install express-generator -g`
+
+**-g:** 将软件包安装到系统全局范围内
+
+我会以普通用户运行应用程序,而不是 root 或者超级用户。我们首先需要创建一个新的用户。
+
+创建一个名为 '**yume**' 的用户:
+
+`useradd -m -s /bin/bash yume`
+`passwd yume`
+
+使用 su 命令登录到新用户:
+
+`su - yume`
+
+下一步,用 express 命令生成一个新的简单 web 应用程序:
+
+`express hakase-app`
+
+命令会创建新项目目录 '**hakase-app**'。
+
+[
+ ![用 express-generator 生成应用框架](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/2.png)
+][11]
+
+进入到项目目录并安装应用需要的所有依赖。
+
+`cd hakase-app`
+`npm install`
+
+然后用下面的命令测试并启动一个新的简单应用程序:
+
+`DEBUG=myapp:* npm start`
+
+默认情况下,我们的 express 应用会运行在 **3000** 端口。现在访问服务器的 IP 地址:[192.168.33.10:3000][12]
+
+[
+ ![express nodejs 运行在 3000 端口](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/3.png)
+][13]
+
+简单 web 应用框架以 'yume' 用户运行在 3000 端口。
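+
+你也可以在另一台机器的终端里用 curl 做个简单验证(IP 地址沿用上面的示例值):
+
+`curl -I http://192.168.33.10:3000`
+
+如果返回 HTTP/1.1 200 OK,就说明应用已经正常对外服务了。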
+
+### 第三步 - 安装 pm2
+
+pm2 是一个 node 软件包,可以使用 npm 命令安装。让我们用 npm 命令安装吧(用 root 权限,如果你仍然以 yume 用户登录,那么运行命令 "exit" 再次成为 root 用户):
+
+`npm install pm2 -g`
+
+现在我们可以为我们的 web 应用使用 pm2 了。
+
+进入应用目录 '**hakase-app**':
+
+`su - yume`
+`cd ~/hakase-app/`
+
+这里你可以看到一个名为 '**package.json**' 的文件,用 cat 命令显示它的内容。
+
+`cat package.json`
+
+[
+ ![配置 express nodejs 服务](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/4.png)
+][14]
+
+你可以看到 '**start**' 行有一个用于启动 express 应用的 nodejs 命令。我们会和 pm2 进程管理器一起使用这个命令。
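+
+作为参考,express-generator 生成的 package.json 中的 scripts 部分大致如下(以你实际生成的文件内容为准):
+
+    "scripts": {
+      "start": "node ./bin/www"
+    },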
+
+像下面这样使用 pm2 命令运行 express 应用:
+
+`pm2 start ./bin/www`
+
+现在你可以看到像下面这样的结果:
+
+[
+ ![使用 pm2 运行 nodejs app](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/5.png)
+][15]
+
+我们的 express 应用正在 pm2 中运行,名称为 **www**,id 为 **0**。你可以用 show 选项(`pm2 show <id|name>`)获取更多运行在 pm2 下的应用的信息。
+
+`pm2 show www`
+
+[
+ ![pm2 服务状态](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/6.png)
+][16]
+
+如果你想查看应用的日志,可以使用 logs 选项。日志包括访问日志和错误日志,你还能看到应用程序的 HTTP 状态。
+
+`pm2 logs www`
+
+[
+ ![pm2 服务日志](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/7.png)
+][17]
+
+你可以看到我们的程序正在运行。现在,让我们来让它开机自启动。
+
+`pm2 startup systemd`
+
+**systemd**:Ubuntu 16.04 使用的是 systemd。
+
+你会看到一条提示,要求你以 root 用户运行某条命令。使用 `exit` 命令回到 root 用户,然后运行它:
+
+`sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u yume --hp /home/yume`
+
+它会生成用于启动应用程序的 systemd 配置文件。这样,当你重启服务器的时候,应用程序就会自动运行。
+
+[
+ ![pm2 添加服务到开机自启动](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/8.png)
+][18]
+
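+顺带一提,除了上面用到的命令,pm2 还提供了一组常用的进程管理命令,列在下面备查(其中 www 是本文应用在 pm2 中的名称):
+
+    pm2 list            # 列出所有由 pm2 管理的进程
+    pm2 restart www     # 重启应用(会有短暂中断)
+    pm2 reload www      # 零停机地重新加载应用
+    pm2 stop www        # 停止应用,但保留在进程列表中
+    pm2 delete www      # 停止并从进程列表中移除应用
+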
+### 第四步 - 安装和配置 Nginx 作为反向代理
+
+在这篇指南中,我们会使用 Nginx 作为 node 应用的反向代理。Ubuntu 仓库中有 Nginx,用 apt 命令安装它:
+
+`sudo apt-get install -y nginx`
+
+下一步,进入 **sites-available** 目录并创建一个新的虚拟主机配置文件。
+
+`cd /etc/nginx/sites-available/`
+`vim hakase-app`
+
+粘贴下面的配置:
+
+ upstream hakase-app {
+ # Nodejs app upstream
+ server 127.0.0.1:3000;
+ keepalive 64;
+ }
+
+ # Server on port 80
+ server {
+ listen 80;
+ server_name hakase-node.co;
+ root /home/yume/hakase-app;
+
+ location / {
+ # Proxy_pass configuration
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header Host $http_host;
+ proxy_set_header X-NginX-Proxy true;
+ proxy_http_version 1.1;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "upgrade";
+ proxy_max_temp_file_size 0;
+ proxy_pass http://hakase-app/;
+ proxy_redirect off;
+ proxy_read_timeout 240s;
+ }
+ }
+
+
+保存文件并退出 vim。
+
+在配置中:
+
+* node 应用使用域名 **hakase-node.co** 运行。
+* 所有到达 nginx 的流量都会被转发到运行在 **3000** 端口的 node 应用。
+
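+注意:在 Ubuntu 的默认 Nginx 布局中,通常只有 **sites-enabled** 目录下的配置才会被加载,因此多半还需要建立一个符号链接来启用这个虚拟主机(如果你的发行版布局不同,请以实际情况为准):
+
+    # 将虚拟主机配置链接到 sites-enabled 目录以启用它
+    sudo ln -s /etc/nginx/sites-available/hakase-app /etc/nginx/sites-enabled/hakase-app
+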
+测试 Nginx 配置,确保没有错误:
+
+`nginx -t`
+
+启动 Nginx 并将其设置为开机自启动:
+
+`systemctl start nginx`
+`systemctl enable nginx`
+
+### 第五步 - 测试
+
+打开你的 web 浏览器并访问配置好的域名(我配置的是):
+
+[http://hakase-node.co][19]
+
+你可以看到 express 应用正在 Nginx web 服务器中运行。
+
+[
+ ![Nodejs app 在 pm2 和 Nginx 中运行](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/9.png)
+][20]
+
+下一步,重启你的服务器,确保你的 node 应用能开机自启动。先用 `pm2 save` 保存当前的进程列表,然后重启:
+
+`pm2 save`
+`sudo reboot`
+
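+等服务器重新启动后,也可以先在另一台机器上用 curl 粗略确认服务是否已经恢复(假设域名 hakase-node.co 已解析到你的服务器,或已写入本机的 /etc/hosts):
+
+    # 通过 Nginx 反向代理访问,应能拿到 express 应用的响应头
+    curl -I http://hakase-node.co
+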
+重新登录到你的服务器后,检查一下 node 应用的进程。以 **yume** 用户运行下面的命令:
+
+`su - yume`
+`pm2 status www`
+
+[
+ ![nodejs 在 pm2 下开机自启动](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/10.png)
+][21]
+
+Node 应用在 pm2 中运行并使用 Nginx 作为反向代理。
+
+### 链接
+
+* [Ubuntu][7]
+* [Node.js][8]
+* [Nginx][9]
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/
+
+作者:[Muhammad Arul][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/
+[1]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-install-nodejs-lts
+[2]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-generate-express-sample-app
+[3]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-install-pm
+[4]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-install-and-configure-nginx-as-a-reverse-proxy
+[5]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-testing
+[6]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#links
+[7]:https://www.ubuntu.com/
+[8]:https://nodejs.org/en/
+[9]:https://www.nginx.com/
+[10]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/1.png
+[11]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/2.png
+[12]:http://192.168.33.10:3000
+[13]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/3.png
+[14]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/4.png
+[15]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/5.png
+[16]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/6.png
+[17]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/7.png
+[18]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/8.png
+[19]:http://hakase-node.co/
+[20]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/9.png
+[21]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/10.png
diff --git a/translated/tech/20170321 Writing a Linux Debugger Part 1 Setup.md b/translated/tech/20170321 Writing a Linux Debugger Part 1 Setup.md
index a6e23452ae..23b9e2070f 100644
--- a/translated/tech/20170321 Writing a Linux Debugger Part 1 Setup.md
+++ b/translated/tech/20170321 Writing a Linux Debugger Part 1 Setup.md
@@ -40,12 +40,13 @@
3. 寄存器和内存
4. Elves 和 dwarves
5. 逐步、源码和信号
-6. Stepping on dwarves
+6. 使用 DWARF 调试信息逐步执行
7. 源码层断点
8. 调用栈
9. 读取变量
10. 下一步
+译者注:ELF([Executable and Linkable Format](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format "Executable and Linkable Format") 可执行文件格式),DWARF(一种广泛使用的调试数据格式,参考 [WIKI](https://en.wikipedia.org/wiki/DWARF "DWARF WIKI"))
* * *
### 准备环境
diff --git a/translated/tech/20170324 Writing a Linux Debugger Part 2 Breakpoints.md b/translated/tech/20170324 Writing a Linux Debugger Part 2 Breakpoints.md
index 0912f7e512..1adcc247a8 100644
--- a/translated/tech/20170324 Writing a Linux Debugger Part 2 Breakpoints.md
+++ b/translated/tech/20170324 Writing a Linux Debugger Part 2 Breakpoints.md
@@ -14,12 +14,13 @@
3. 寄存器和内存
4. Elves 和 dwarves
5. 逐步、源码和信号
-6. Stepping on dwarves
+6. 使用 DWARF 调试信息逐步执行
7. 源码层断点
8. 调用栈
9. 读取变量
10. 下一步
+译者注:ELF([Executable and Linkable Format](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format "Executable and Linkable Format") 可执行文件格式),DWARF(一种广泛使用的调试数据格式,参考 [WIKI](https://en.wikipedia.org/wiki/DWARF "DWARF WIKI"))
* * *
### 断点如何形成?
diff --git a/translated/tech/20170403 Yes Python is Slow and I Dont Care.md b/translated/tech/20170403 Yes Python is Slow and I Dont Care.md
deleted file mode 100644
index 95f83385c3..0000000000
--- a/translated/tech/20170403 Yes Python is Slow and I Dont Care.md
+++ /dev/null
@@ -1,113 +0,0 @@
-python 是慢,但是爷就喜欢它
-=====================================
-
-### 对追求生产率而牺牲性能的怒吼
-
- ![](https://cdn-images-1.medium.com/max/800/0*pWAgROZ2JbYzlDgj.jpg)
-
-我从关于 Python 中的 asyncio 这个标准库的讨论中休息一会,谈谈我最近正在思考的一些东西:Python 的速度。对不了解我的人说明一下,我是一个 Python 的粉丝,而且我在我能想到的所有地方都积极地使用 Python。人们对 Python 最大的抱怨之一就是它的速度比较慢,有些人甚至拒绝尝试使用 Python,因为它比其他语言速度慢。这里说说为什么我认为应该尝试使用 Python,尽管它是有点慢。
-
-### 速度不再重要
-
-过去的情形是,程序需要花费很长的时间来运行,CPU 比较贵,内存也很贵。程序的运行时间是一个很重要的指标。计算机非常的昂贵,计算机运行所需要的电也是相当贵的。对这些资源进行优化是因为一个永恒的商业法则:
-
-> 优化你最贵的资源。
-
-在过去,最贵的资源是计算机的运行时间。这就是导致计算机科学致力于研究不同算法的效率的原因。然而,这已经不再是正确的,因为现在硅芯片很便宜,确实很便宜。运行时间不再是你最贵的资源。公司最贵的资源现在是它的员工的时间。或者换句话说,就是你。把事情做完比快速地做事更加重要。实际上,这是相当的重要,我将把它再次放在这里,仿佛它是一个引用一样(对于那些只是粗略浏览的人):
-
-> 把事情做完比快速地做事更加重要。
-
-你可能会说:“我的公司在意速度,我开发一个 web 应用程序,那么所有的响应时间必须少于 x 毫秒。”或者,“我们失去了客户,因为他们认为我们的 app 运行太慢了。”我并不是想说速度一点也不重要,我只是想说速度不再是最重要的东西;它不再是你最贵的资源。
-
-![](https://cdn-images-1.medium.com/max/800/0*Z6j9zMua_w-T25TC.jpg)
-
-### 速度是唯一重要的东西
-
-当你在编程的背景下说 _速度_ 时,你通常意味着性能,也就是 CPU 周期。当你的 CEO 在编程的背景下说 _速度_ 时,他指的是业务速度,最重要的指标是产品上市的时间。基本上,你的产品/web 程序是多么的快并不重要。它是用什么语言写的也不重要。甚至它需要花费多少钱也不重要。在一天结束时,让你的公司存活下来或者死去的唯一事物就是产品上市时间。我不只是说创业公司的想法 -- 你开始赚钱需要花费多久,更多的是“从想法到客户手中”的时间期限。企业能够存活下来的唯一方法就是比你的竞争对手更快地创新。如果你的竞争对手抢在你的产品上市之前发布了,那么你想出了多少好的主意也将不再重要。你必须第一个上市,或者至少能跟上。一旦你放慢了脚步,你就输了。
-
-> 企业能够存活下来的唯一方法就是比你的竞争对手更快地创新。
-
-#### 一个微服务的案例
-
-像 Amazon、Google 和 Netflix 这样的公司明白快速前进的重要性。他们创建了一个业务系统,可以使用这个系统迅速地前进和快速的创新。微服务是针对他们的问题的解决方案。这篇文章不谈你是否应该使用微服务,但是至少要接受 Amazon 和 Google 认为他们应该使用微服务。
-
- ![](https://cdn-images-1.medium.com/max/600/0*MBM9zatYv_Lzr3QN.jpg)
-
-微服务本来就很慢。微服务的主要概念是用网络调用来打破边界。这意味着你正在使用一个函数调用(几个 cpu 周期)并将它转变为一个网络调用。没有什么比这更影响性能了。和 CPU 相比较,网络调用真的很慢。但是这些大公司仍然选择使用微服务。我所知道的架构里面没有比微服务还要慢的了。微服务最大的弊端就是它的性能,但是最大的长处就是上市的时间。通过在较小的项目和代码库上建立团队,一个公司能够以更快的速度进行迭代和创新。这恰恰表明了,非常大的公司也很在意上市时间,而不仅仅只是只有创业公司。
-
-#### CPU不是你的瓶颈
-
- ![](https://cdn-images-1.medium.com/max/800/0*s1RKhkRIBMEYji_w.jpg)
-
- 如果你在写一个网络应用程序,如 web 服务器,很有可能的情况会是,CPU 时间并不是你的程序的瓶颈。当你的 web 服务器处理一个请求时,可能会进行几次网络调用,例如到数据库,或者像 Redis 这样的缓存服务器。虽然这些服务本身可能比较快速,但是对它们的网络调用却很慢。[有一篇很好的关于特定操作的速度差异的博客文章][1]。在这篇文章里,作者把 CPU 周期时间缩放到更容易理解的人类时间。如果一个单独的周期等同于 1 秒,那么一个从 California 到 New York 的网络调用将相当于 4 年。那就说明了网络调用是多少的慢。按一些粗略估计,我们可以假设在同一数据中心内的普通网络调用大约需要 3 ms。这相当于我们“人类比例” 3 个月。现在假设你的程序是高 CPU 密集型,这需要 100000 个 CPU 周期来对单一调用进行响应。这相当于刚刚超过 1 天。现在让我们假设你使用的是一种要慢 5 倍的语言,这将需要大约 5 天。很好,将那与我们 3 个月的网络调用时间相比,4 天的差异就显得并不是很重要了。如果有人为了一个包裹不得不至少等待 3 个月,我不认为额外的 4 天对他们来说真的很重要。
-
-上面所说的终极意思是,尽管 Python 速度慢,但是这并不重要。语言的速度(或者 CPU 时间)几乎从来不是问题。实际上谷歌曾经就这一概念做过一个研究,[并且他们就此发表过一篇论文][2]。那篇论文论述了设计高吞吐量的系统。在结论里,他们说到:
-
-> 在高吞吐量的环境中使用解释性语言似乎是矛盾的,但是我们已经发现 CPU 时间几乎不是限制因素;语言的表达能力意味着大多数程序都很小,并且把大部分时间花在 I/O 和原生运行时代码上。而且,解释性语言无论是在语言层面的轻松实验,还是在允许我们探索在很多机器上分布计算的方法上,都很有帮助。
-
-再次强调:
-> CPU 时间几乎不是限制因素。
-
-### 如果 CPU 时间是一个问题怎么办?
-
-你可能会说,“前面说的情况真是太好了,但是我们确实有过一些问题,这些问题中 CPU 成为了我们的瓶颈,并造成了我们的 web 应用的速度十分缓慢”,或者“在服务器上 X 语言比 Y 语言需要更少的硬件资源来运行。”这些都可能是对的。关于 web 服务器有这样的美妙的事情:你可以几乎无限地负载均衡它们。换句话说,可以在 web 服务器上投入更多的硬件。当然,Python 可能会比其他语言要求更好的硬件资源,比如 c 语言。只是把硬件投入在 CPU 问题上。相比于你的时间,硬件就显得非常的便宜了。如果你在一年内节省了两周的生产力时间,那将远远多于所增加的硬件开销的回报。
-
-
-* * *
-
-![](https://cdn-images-1.medium.com/max/1000/0*mJFOcWsdEQq98gkF.jpg)
-
-### 那么,Python 是更快吗?
-
-这一次我一直在谈论最重要的是开发时间。所以问题依然存在:当就开发时间而言,Python 要比其他语言更快吗?按常规惯例来看,我、[google][3] [还有][4][其他][5][几个人][6]可以告诉你 Python 是多么的[高效][7]。它为你抽象出很多东西,帮助你关注那些你真正应该编写代码的地方,而不会被困在琐碎事情的杂草里,比如你是否应该使用一个向量或者一个数组。但你可能不喜欢只是听别人说的这些话,所以让我们来看一些更多的经验数据。
-
-在大多数情况下,关于 python 是否是更高效语言的争论可以归结为脚本语言(或动态语言)与静态类型语言两者的争论。我认为人们普遍接受的是静态类型语言的生产力较低,但是,[这有一篇优秀的论文][8]解释了为什么不是这样。就 Python 而言,这里有一项[研究][9],它调查了不同语言编写字符串处理的代码所需要花费的时间,供参考。
-
- ![](https://cdn-images-1.medium.com/max/800/1*cw7Oq54ZflGZhlFglDka4Q.png)
-
-
-在上述研究中,Python 的效率比 Java 高出 2 倍。有一些其他研究也显示相似的东西。 Rosetta Code 对编程语言的差异进行了[深入的研究][10]。在论文中,他们把 python 与其他脚本语言/解释性语言相比较,得出结论:
-
-> Python 更简洁,即使与函数式语言相比较(平均要短 1.2 到 1.6 倍)
->
-
-普遍的趋势似乎是 Python 中的代码行总是更少。代码行听起来可能像一个可怕的指标,但是包括上面已经提到的两项研究在内的[多项研究][11]表明,每种语言中每行代码所需要花费的时间大约是一样的。因此,限制代码行数就可以提高生产效率。甚至 codinghorror(一名 C# 程序员)本人[写了一篇关于 Python 是如何更有效率的文章][12]。
-
-我认为说 Python 比其他的很多语言更加的有效率是公正的。这主要是由于 Python 有大量的自带以及第三方库。[这里是一篇讨论 Python 和其他语言间的差异的简单的文章][13]。如果你不知道为何 Python 是如此的小巧和高效,我邀请你借此机会学习一点 python,自己多实践。这儿是你的第一个程序:
-
- _import __hello___
-
- * * *
-
-### 但是如果速度真的重要怎么办呢?
---------------------------------------------------------------------------------
-
-
-
-via: https://hackernoon.com/yes-python-is-slow-and-i-dont-care-13763980b5a1
-
-作者:[Nick Humrich ][a]
-译者:[zhousiyu325](https://github.com/zhousiyu325)
-校对:[jasminepeng](https://github.com/jasminepeng)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://hackernoon.com/@nhumrich
-[1]:https://blog.codinghorror.com/the-infinite-space-between-words/
-[2]:https://static.googleusercontent.com/media/research.google.com/en//archive/sawzall-sciprog.pdf
-[3]:https://www.codefellows.org/blog/5-reasons-why-python-is-powerful-enough-for-google/
-[4]:https://www.lynda.com/Python-tutorials/Python-Programming-Efficiently/534425-2.html
-[5]:https://www.linuxjournal.com/article/3882
-[6]:https://www.codeschool.com/blog/2016/01/27/why-python/
-[7]:http://pythoncard.sourceforge.net/what_is_python.html
-[8]:http://www.tcl.tk/doc/scripting.html
-[9]:http://www.connellybarnes.com/documents/language_productivity.pdf
-[10]:https://arxiv.org/pdf/1409.0252.pdf
-[11]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.1831&rep=rep1&type=pdf
-[12]:https://blog.codinghorror.com/are-all-programming-languages-the-same/
-[13]:https://www.python.org/doc/essays/comparisons/
-[14]:https://wiki.python.org/moin/PythonSpeed
-[15]:https://wiki.python.org/moin/PythonSpeed/PerformanceTips
-[16]:https://www.eveonline.com/
-[17]:http://pypy.org/
-[18]:http://pythondevelopers.herokuapp.com/
diff --git a/translated/tech/20170410 How to install Asterisk on the Raspberry Pi.md b/translated/tech/20170410 How to install Asterisk on the Raspberry Pi.md
index 0637191329..bf51a7dd85 100644
--- a/translated/tech/20170410 How to install Asterisk on the Raspberry Pi.md
+++ b/translated/tech/20170410 How to install Asterisk on the Raspberry Pi.md
@@ -7,22 +7,22 @@
![How to install Asterisk on the Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life-raspberrypi_0.png?itok=wxVxQ0Z4 "How to install Asterisk on the Raspberry Pi")
>图片版权: Dwight Sipler 的 [Flickr][8]
-你是否在为小型企业或家庭办公室寻找电话系统?我一直对可扩展 VoIP(Voice over IP)解决方案感兴趣,我在树莓派上找到 [Asterisk][9] 的一个实现。
+你是否在为小型企业或家庭办公室寻找电话系统?我一直对可扩展 VoIP(Voice over IP)解决方案感兴趣,后来我在树莓派上找到 [Asterisk][9] 的一个实现。
我的好奇心被激起了,我决心尝试一下,所以我从 [Asterisk][11] 官网[下载][10]了它,然后使用我的树莓派 3 构建服务器。
### 准备开始
-首先,我将下载的镜像刻录到 MicroSD 卡上。建议的最小值是 4GB。将镜像传输到 MicroSD 卡并插到树莓派上的相应插槽中后,我将网线连接到树莓派和家庭路由器上的以太网端口中。
+首先,我将下载的镜像刻录到 MicroSD 卡上。建议的最小值是 4 GB。将镜像传输到 MicroSD 卡并插到树莓派上的相应插槽中后,我将网线连接到树莓派和家庭路由器上的以太网端口中。
-更多关于树莓派的
+更多关于树莓派的内容:
* [我们最新的树莓派教程][1]
* [什么是树莓派?][2]
* [开始使用树莓派][3]
* [给我们发送你的树莓派项目和教程][4]
-接下来,我在 Linux 上打开一个终端,并输入 **ssh root@192.168.1.8**,这是我的服务器的IP地址。我被提示以 root 用户身份登录到 **raspbx** 上。默认密码是 “raspberry”。 (出于安全考虑,如果你打算再多试试,请务必更改默认密码。)
+接下来,我在 Linux 上打开一个终端,并输入 **ssh root@192.168.1.8**,这是我的服务器的 IP 地址。我被提示以 root 用户身份登录到 **raspbx** 上。默认密码是 “raspberry”。 (出于安全考虑,如果你打算再多试试,请务必更改默认密码。)
当我登录到了 raspbx 上的 shell 后,接下来我需要准备配置了。根据网站上提供的[文档][12],我在 shell 下输入 **regen-hostkeys** 来创建新的主机密钥。然后输入 **configure-timezone** 来配置服务器的时区。我通过在提示符下输入 **dpkg-reconfigure locales** 来配置区域设置。我也安装了 [Fail2Ban][13] 来提供服务器的安全性。
@@ -36,21 +36,21 @@
![FreePBX_Login_Screen](https://opensource.com/sites/default/files/freepbx_login_screen.png "FreePBX_Login_Screen")
-登录之后,我进入位于显示屏左上方的 “Application Menu”。点击菜单链接并选择了第二个选项,即 “Applications”,接着选择了第四个选项,其中标有 “Extensions”。从那里我选择创建一个 **New Chan_Sip** 分机。
+登录之后,我进入位于显示屏左上方的应用菜单。点击菜单链接并选择了第二个选项,即 “应用”,接着选择了第四个选项,“分机”。从那里我选择创建一个 **New Chan_Sip** 分机。
![](https://opensource.com/sites/default/files/add_a_new_chan_sip_extension.png)
我使用密码配置了一个 **sip** 分机用户。密码是自动生成的,也可以选择创建自己的密码。
-现在我有了一个完整的分机,我急于尝试我的新的 VoIP 服务器。我下载并安装了[ Yate 客户端][16],这是在构建服务器的过程中发现的。安装 [Yate][17] 之后,我想测试与服务器的连接。我发现我可以使用 Yate 连接到服务器并输入 **43** 进行回声测试。当我听到客户端指示时,我感到很激动。
+现在我有了一个完整的分机,我急于尝试这个新的 VoIP 服务器。我下载并安装了 [Yate 客户端][16],这是我在构建服务器的过程中发现的。安装 [Yate][17] 之后,我想测试与服务器的连接。我发现我可以使用 Yate 连接到服务器并输入 \*43 进行回声测试。当我听到客户端指示时,我感到很激动。
![](https://opensource.com/sites/default/files/echotest.png)
-我决定创建另外一个 **sip** 分机,这样我就可以测试系统的语音信箱功能。 在完成后,我使用 Yate 客户端来呼叫这个分机,并留下了简短的语音留言。然后再次使用 Yate 呼叫该分机并输入 **97** 来检索语音留言。然后我想看看我是否可以使用我的新服务器来呼叫外线。返回到菜单,选择 “Connectivity” 选项,并添加了 Google Voice 号码。
+我决定创建另外一个 **sip** 分机,这样我就可以测试系统的语音信箱功能。 在完成后,我使用 Yate 客户端来呼叫这个分机,并留下了简短的语音留言。然后再次使用 Yate 呼叫该分机并输入 \*97 来检索语音留言。然后我想看看我是否可以使用我的新服务器来呼叫外线。返回到菜单,选择 “连接” 选项,并添加了 Google Voice 号码。
![Google_Voice_Connectivity](https://opensource.com/sites/default/files/google_voice_connectivity.png "Google_Voice_Connectivity")
-接着我返回到 “Connectivity” 菜单将 Google Voice 添加到出站路由中。
+接着我返回到 “连接” 菜单,并将 Google Voice 添加到出站路由中。
![Google_Voice_outbound_route](https://opensource.com/sites/default/files/google_voice_outbound_route.png "Google_Voice_outbound_route")
@@ -77,7 +77,7 @@ via: https://opensource.com/article/17/4/asterisk-raspberry-pi-3
作者:[Don Watkins][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/[Translating]20150112 Data-Oriented Hash Table.md b/translated/tech/[Translating]20150112 Data-Oriented Hash Table.md
deleted file mode 100644
index c7f1485bab..0000000000
--- a/translated/tech/[Translating]20150112 Data-Oriented Hash Table.md
+++ /dev/null
@@ -1,380 +0,0 @@
-Translating by sanfusu
-
-[面向数据的哈希表][1]
-============================================================
-
-
-最近几年中,面向数据的设计已经受到了很多的关注 —— 一种强调内存中数据布局的编程风格,包括如何访问以及将会引发多少的 cache 缺失。
-
-由于在内存读取操作中缺失所占的数量级要大于命中的数量级,所以缺失的数量通常是优化的关键标准。
-
- 这不仅仅关乎那些对性能有要求的 code-data 结构设计的软件,由于缺乏对内存效益的重视而成为软件运行缓慢、膨胀的一个很大因素。
-
-
-
-高效缓存数据结构的中心原则是将事情变得平滑和线性。
-比如,在大部分情况下,存储一个序列元素更倾向于使用平滑的数组而不是链表 —— 每一次通过指针来查找数据都会为 cache 缺失增加一份风险;而平滑的数组可以预先获取,并使得内存系统以最大的效率运行
-
-
-
-如果你知道一点内存层级如何运作的知识,下面的内容就是想当然的结果——但是即便它们相当明显,有时候测试一下仍不失为一个好主意。
-[Baptiste Wicht 测试过了 `std::vector` vs `std::list` vs `std::deque`][4]
-(几年前,后者通常使用分块数组来实现,比如:一个数组的数组)
-结果大部分会和你预期的保持一致,但是会存在一些违反直觉的东西。
-作为实例:在序列链表的中间位置做插入或者移除操作被认为会比数组快
-但如果该元素是一个 POD 类型,并且不大于 64 字节或者在 64 字节左右(在一个 cache 流水线内),
-通过对要操作的元素周围的数组元素进行移位操作要比从头遍历链表来的快。
-这是由于在遍历链表以及通过指针插入/删除元素的时候可能会导致不少的 cache 缺失,相对而言,数组移位则很少会发生。
-(对于更大的元素,非 POD 类型,或者你已经有了指向链表元素的指针,此时和预期的一样,链表胜出)
-
-
-
-
-多亏了 Baptiste 的数据,我们知道了内存布局如何影响序列容器。
-
-但是关联容器,比如 hash 表会怎么样呢?
-
-已经有了些权威推荐:
-
-[Chandler Carruth 推荐的带局部探测的开放寻址][5]
-
-(此时,我们没必要追踪指针)
-
-以及[Mike Acton 推荐的在内存中将 value 和 key 隔离][6](这种情况下,我们可以在每一个 cache 流水线中得到更多的 key), 这可以在我们不得查找多个 key 时提高局部性能。
-
-
-这些想法很有意义,但再一次的说明:测试永远是好习惯,但由于我找不到任何数据,所以只好自己收集了。
-
-
-### [][7]测试
-
-
-我测试了四个不同的 quick-and-dirty 哈希表实现,另外还包括 `std::unordered_map` 。
-这五个哈希表都使用了同一个哈希函数 —— Bob Jenkins' [SpookyHash][8](64 位哈希值)。
-(由于哈希函数在这里不是重点,所以我没有测试不同的哈希函数;我同样也没有检测我的分析中的总内存消耗。)
-实现会通过短代码在测试结果表中标注出来。
-
-* **UM**: `std::unordered_map` 。
-在 VS2012 和 libstdc++-v3 (libstdc++-v3: gcc 和 clang 都会用到这东西)中,
-UM 是以链接表的形式实现,所有的元素都在链表中,buckets 数组中存储了链表的迭代器。
-VS2012 中,则是一个双链表,每一个 bucket 存储了起始迭代器和结束迭代器;
-libstdc++ 中,是一个单链表,每一个 bucket 只存储了一个起始迭代器。
-这两种情况里,链表节点是独立申请和释放的。最大负载因子是 1 。
-
-* **Ch**:
-分离的、链状 buket 指向一个元素节点的单链表。
-为了避免分开申请每一个节点,元素节点存储在平滑的数组池中。
-未使用的节点保存在一个空闲链表中。
-最大负载因子是 1。
-
-* **OL**:
-开放寻址线性探测 —— 每一个 bucket 存储一个 62 bit 的 hash 值,一个 2 bit 的状态值(包括 empty、filled、removed 三个状态),key 和 value。最大负载因子是 2/3。
-* **DO1**:
-data-oriented 1 —— 和 OL 相似,但是哈希值、状态值和 key、values 分离在两个隔离的平滑数组中。
-
-* **DO2**:
-"data-oriented 2" —— 与 OL 类似,但是哈希/状态,keys 和 values 被分离在 3 个相隔离的平滑数组中。
-
-
-在我的所有实现中,包括 VS2012 的 UM 实现,默认使用尺寸为 2 的 n 次方。如果超出了最大负载因子,则扩展两倍。
-
-在 libstdc++ 中,UM 默认尺寸是一个素数。如果超出了最大负载因子,则扩展为下一个素数大小。
-
-但是我不认为这些细节对性能很重要。
-
-素数是一种对低 bit 位上没有足够熵的低劣 hash 函数的挽救手段,但是我们正在用的是一个很好的 hash 函数。
-
-
-OL,DO1 和 DO2 的实现将共同的被称为 OA(open addressing)——稍后我们将发现他们在性能特性上非常相似。
-
-在每一个实现中,单元数从 100 K 到 1 M,有效负载(比如:总的 key + value 大小)从 8 到 4 k 字节
-我为几个不同的操作记了时间。
-
- keys 和 values 永远是 POD 类型,keys 永远是 8 个字节(除了 8 字节的有效负载,此时 key 和 value 都是 4 字节)
-
-因为我的目的是为了测试内存影响而不是哈希函数性能,所以我将 key 放在连续的尺寸空间中。
-每一个测试都会重复 5 遍,然后记录最小的耗时。
-
-
-测试的操作在这里:
-
-* **Fill**:
-将一个随机的 key 序列插入到表中(key 在序列中是唯一的)。
-* **Presized fill**:
-和 Fill 相似,但是在插入之间我们先为所有的 key 保留足够的内存空间,以防止在 fill 过程中 rehash 或者重申请。
-* **Lookup**:
-执行 100 k 随机 key 查找,所有的 key 都在 table 中。
-* **Failed lookup**:
-执行 100 k 随机 key 查找,所有的 key 都不在 table 中。
-* **Remove**:
-从 table 中移除随机选择的半数元素。
-* **Destruct**:
-销毁 table 并释放内存.
-
-
-你可以[在这里下载我的测试代码][9]。
-
-这些代码只能在 64 位机器上编译(包括 Windows 和 Linux)。
-
-在 `main()` 函数附件有一些标记,你可把他们打开或者关掉——如果全开,可能会需要一两个小时才能结束运行。
-
-我搜集的结果也放在了那个打包文件里的 Excel 表中。
-
-(注意: Windows 和 Linux 在不同的 CPU 上跑的,所以时间不具备可比较性)
-
-代码也跑了一些单元测试,用来验证所有的 hash 表实现都能运行正确。
-
-
-我还顺带尝试了附加的两个实现:
-
-Ch 中第一个节点存放在 bucket 中而不是 pool 里,二次探测的开放寻址。
-
-这两个都不足以好到可以放在最终的数据里,但是他们的代码仍放在了打包文件里面。
-
-### [][10]结果
-
-
-这里有成吨的数据!!
-
-这一节我将详细的讨论一下结果,但是如果你对此不感兴趣,可以直接跳到下一节的总结。
-
-### [][11]Windows
-
-
-这是所有的测试的图表结果,使用 Visual Studio 2012 编译,运行于 Windows8.1 和 Core i7-4710HQ 机器上。(点击可以放大。)
-
-[
- ![Results for VS 2012, Windows 8.1, Core i7-4710HQ](http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png "Results for VS 2012, Windows 8.1, Core i7-4710HQ")
-][12]
-
-
-从左至右是不同的有效负载大小,从上往下是不同的操作
-
-(注意:不是所有的Y轴都是相同的比例!)我将为每一个操作总结一下主要趋向。
-
-**Fill**:
-
-在我的 hash 表中,Ch 稍比任何的 OA 变种要好。随着哈希表大小和有效负载的加大,差距也随之变大。
-
-我猜测这是由于 Ch 只需要从一个空闲链表中拉取一个元素,然后把他放在 bucket 前面,而 OA 不得不搜索一部分 buckets 来找到一个空位置。
-
-所有的 OA 变种的性能表现基本都很相似,当然 DO1 稍微有点优势。
-
-
-在小负载的情况,UM 几乎是所有 hash 表中表现最差的 —— 因为 UM 为每一次的插入申请(内存)付出了沉重的代价。
-
-但是在 128 字节的时候,这些 hash 表基本相当,大负载的时候 UM 还赢了点。
-
-因为,我所有的实现都需要重新调整元素池的大小,并需要移动大量的元素到新池里面,这一点我几乎无能为力;而 UM 一旦为元素申请了内存后便不需要移动了。
-
-注意大负载中图表上夸张的跳步!
-
-这更确认了重新调整大小带来的问题。
-
-相反,UM 只是线性上升 —— 只需要重新调整 bucket 数组大小。由于没有太多隆起的地方,所以相对有效率。
-
-**Presized fill**:
-
-大致和 Fill 相似,但是图示结果更加的线性光滑,没有太大的跳步(因为不需要 rehash ),所有的实现差距在这一测试中要缩小了些。
-
-大负载时 UM 依然稍快于 Ch,问题还是在于重新调整大小上。
-
-Ch 仍是稳步少快于 OA 变种,但是 DO1 比其他的 OA 稍有优势。
-
-**Lookup**:
-
-所有的实现都相当的集中。除了最小负载时,DO1 和 OL 稍快,其余情况下 UM 和 DO2 都跑在了前面。
-
-真的,我无法描述 UM 在这一步做的多么好。
-
-尽管需要遍历链表,但是 UM 还是坚守了面向数据的本性。
-
-
-顺带一提,查找时间和 hash 表的大小有着很弱的关联,这真的很有意思。
-
-哈希表查找时间期望上是一个常量时间,所以在的渐进视图中,性能不应该依赖于表的大小。但是那是在忽视了 cache 影响的情况下!
-
-作为具体的例子,当我们在具有 10 k 条目的表中做 100 k 查找时,速度会便变快,因为在第一次 10 k - 20 k 查找后,大部分的表会处在 L3 中。
-
-**Failed lookup**:
-
-相对于成功查找,这里就有点更分散一些。
-
-DO1 和 DO2 跑在了前面,但 UM 并没有落下,OL 则是捉襟见肘啊。
-
-我猜测,这可能是因为 OL 整体上具有更长的搜索路径,尤其是在失败查询时;
-
-内存中,hash 值在 key 和 value 之间飘来荡去,找不着出路,我也很受伤啊。
-
-DO1 和 DO2 具有相同的搜索长度,但是他们将所有的 hash 值打包在内存中,这使得问题有所缓解。
-
-**Remove**:
-
-DO2 很显然是赢家,但 DO1 也未落下。Ch 则落后,UM 则是差的不是一丁半点(主要是因为每次移除都要释放内存);
-
-差距随着负载的增加而拉大。
-
-移除操作是唯一不需要接触数据的操作,只需要 hash 值和 key 的帮助,这也是为什么 DO1 和 DO2 在移除操作中的表现大相径庭,而其他测试中却保持一致。
-
-(如果你的值不是 POD 类型的,并需要析构,这种差异应该是会消失的。)
-
-**Destruct**:
-
-Ch 除了最小负载,其他的情况都是最快的(最小负载时,约等于 OA 变种)。
-
-所有的 OA 变种基本都是相等的。
-
-注意,在我的 hash 表中所做的所有析构操作都是释放少量的内存 buffer 。
-
-但是 [在Windows中,释放内存的消耗和大小成比例关系][13]。
-
-(而且,这是一个很显著的开支 —— 申请 ~1 GB 的内存需要 ~100 ms 的时间去释放!)
-
-
-UM 在析构时是最慢的一个(小负载时,慢的程度可以用数量级来衡量),大负载时依旧是稍慢些。
-
-对于 UM 来讲,释放每一个元素而不是释放一组数组真的是一个硬伤。
-
-### [][14]Linux
-
-
-我还在装有 Linux Mint 17.1 的 Core i5-4570S 机器上使用 gcc 4.8 和 clang 3.5 来运行了测试。gcc 和 clang 的结果很相像,因此我只展示了 gcc 的;完整的结果集合包含在了代码下载打包文件中,链接在上面。(点击来缩放)
-[
- ![Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S](http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png "Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S")
-][15]
-
-
-大部分结果和 Windows 很相似,因此我只高亮了一些有趣的不同点。
-
-**Lookup**:
-
-这里 DO1 跑在前头,而在 Windows 中 DO2 更快些。
-
-同样,UM 和 Ch 落后于其他所有的实现——过多的指针追踪,然而 OA 只需要在内存中线性的移动即可。
-
-至于 Windows 和 Linux 结果为何不同,则不是很清楚。UM 同样比 Ch 慢了不少,特别是大负载时,这很奇怪;我期望的是他们可以基本相同。
-
-**Failed lookup**:
-
-UM 再一次落后于其他实现,甚至比 OL 还要慢。
-
-我再一次无法理解为何 UM 比 Ch 慢这么多,Linux 和 Windows 的结果为何有着如此大的差距。
-
-
-**Destruct**:
-
-在我的实现中,小负载的时候,析构的消耗太少了,以至于无法测量;
-
-在大负载中,线性增加的比例和创建的虚拟内存页数量相关,而不是申请到的数量?
-
-同样,要比 Windows 中的析构快上几个数量级。
-
-但是并不是所有的都和 hash 表有关;
-
-我们在这里可以看出不同系统和运行时内存系统的表现。
-
-貌似,Linux 释放大内存块是要比 Windows 快上不少(或者 Linux 很好的隐藏了开支,或许将释放工作推迟到了进程退出,又或者将工作推给了其他线程或者进程)。
-
-
-UM 由于要释放每一个元素,所以在所有的负载中都比其他慢上几个数量级。
-
-事实上,我将图片做了剪裁,因为 UM 太慢了,以至于破坏了 Y 轴的比例。
-
-
-### [][16]总结
-
-
-好,当我们凝视各种情况下的数据和矛盾的结果时,我们可以得出什么结果呢?
-
-我想直接了当的告诉你这些 hash 表变种中有一个打败了其他所有的 hash 表,但是这显然不那么简单。
-
-不过我们仍然可以学到一些东西。
-
-
-首先,在大多数情况下我们“很容易”做的比 `std::unordered_map` 还要好。
-
-我为这些测试所写的所有实现(他们并不复杂;我只花了一两个小时就写完了)要么是符合 `unordered_map` 要么是在其基础上做的提高,
-
-除了大负载(超过128字节)中的插入性能, `unordered_map` 为每一个节点独立申请存储占了优势。
-
-(尽管我没有测试,我同样期望 `unordered_map` 能在非 POD 类型的负载上取得胜利。)
-
-具有指导意义的是,如果你非常关心性能,不要假设你的标准库中的数据结构是高度优化的。
-
-他们可能只是在 C++ 标准的一致性上做了优化,但不是性能。:P
-
-
-其次,不管是小负载还是大负载,如果只打算用一种哈希表实现,那么 DO1(开放寻址、线性探测、hashes/states 和 keys/values 分别存放在隔离的平滑数组中)都是一个不错的选择。
-
-这不是最快的插入,但也不坏(还比 `unordered_map` 快),并且在查找,移除,析构中也很快。
-
-你所知道的 —— “面向数据设计”完成了!
-
-
-注意,我的为这些哈希表做的测试代码远未能用于生产环境——他们只支持 POD 类型,
-
-没有拷贝构造函数以及类似的东西,也未检测重复的 key,等等。
-
-我将可能尽快的构建一些实际的 hash 表,用于我的实用库中。
-
-为了覆盖基础部分,我想我将有两个变种:一个基于 DO1,用于小的,移动时不需要太大开支的负载;另一个用于链接并且避免重新申请和移动元素(就像 `unordered_map` ),用于大负载或者移动起来需要大开支的负载情况。
-
-这应该会给我带来最好的两个世界。
-
-
-与此同时,我希望你们会有所启迪。
-
-最后记住,如果 Chandler Carruth 和 Mike Acton 在数据结构上给你提出些建议,你一定要听 😉
-
---------------------------------------------------------------------------------
-
-
-作者简介:
-
-
-我是一名图形程序员,目前在西雅图做自由职业者。之前我在 NVIDIA 的 DevTech 软件团队中工作,并在美少女特工队工作室中为 PS3 和 PS4 的 Infamous 系列游戏开发渲染技术。
-
-
-自 2002 年起,我对图形非常感兴趣,并且已经完成了一系列的工作,包括:雾、大气雾霾、体积照明、水、视觉效果、粒子系统、皮肤和头发阴影、后处理、镜面模型、线性空间渲染、和 GPU 性能测量和优化。
-
-
-你可以在我的博客了解更多和我有关的事,处理图形,我还对理论物理和程序设计感兴趣。
-
-
-你可以在 nathaniel.reed@gmail.com 或者在 Twitter(@Reedbeta)/Google+ 上关注我。我也会经常在 StackExchange 上回答计算机图形的问题。
-
---------------
-
-via: http://reedbeta.com/blog/data-oriented-hash-table/
-
-作者:[Nathan Reed][a]
-译者:[sanfusu](https://github.com/sanfusu)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://reedbeta.com/about/
-[1]:http://reedbeta.com/blog/data-oriented-hash-table/
-[2]:http://reedbeta.com/blog/category/coding/
-[3]:http://reedbeta.com/blog/data-oriented-hash-table/#comments
-[4]:http://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vector-list-deque.html
-[5]:https://www.youtube.com/watch?v=fHNmRkzxHWs
-[6]:https://www.youtube.com/watch?v=rX0ItVEVjHc
-[7]:http://reedbeta.com/blog/data-oriented-hash-table/#the-tests
-[8]:http://burtleburtle.net/bob/hash/spooky.html
-[9]:http://reedbeta.com/blog/data-oriented-hash-table/hash-table-tests.zip
-[10]:http://reedbeta.com/blog/data-oriented-hash-table/#the-results
-[11]:http://reedbeta.com/blog/data-oriented-hash-table/#windows
-[12]:http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png
-[13]:https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
-[14]:http://reedbeta.com/blog/data-oriented-hash-table/#linux
-[15]:http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png
-[16]:http://reedbeta.com/blog/data-oriented-hash-table/#conclusions