diff --git a/published/20161216 Kprobes Event Tracing on ARMv8.md b/published/20161216 Kprobes Event Tracing on ARMv8.md
new file mode 100644
index 0000000000..3985f064dc
--- /dev/null
+++ b/published/20161216 Kprobes Event Tracing on ARMv8.md
@@ -0,0 +1,331 @@
+ARMv8 上的 kprobes 事件跟踪
+==============
+
+![core-dump](http://www.linaro.org/wp-content/uploads/2016/02/core-dump.png)
+
+### 介绍
+
+kprobes 是一种内核功能,它允许在内核中设置任意断点来进行检测,在断点指令被执行(或模拟)的前后调用开发者提供的例程。可参见 kprobes 文档^注1 获取更多信息。基本的 kprobes 功能可使用 `CONFIG_KPROBES` 来选择。在 arm64 的 v4.8 内核发行版中,kprobes 支持被添加到了主线。
+
+在这篇文章中,我们将介绍 kprobes 在 arm64 上的使用,通过在命令行中使用 debugfs 事件追踪接口来收集动态追踪事件。这个功能在一些架构(包括 arm32)上可用已经有段时间,现在在 arm64 上也能使用了。借助这个功能,无需编写任何代码就能使用 kprobes。
+
+### 探针类型
+
+kprobes 子系统提供了三种不同类型的动态探针,如下所述。
+
+#### kprobes
+
+基本探针是 kprobes 插入的一个软件断点,用以替代你正在探测的指令,当探测点被命中时,它为最终的单步执行(或模拟)保存下原始指令。
+
+#### kretprobes
+
+kretprobes 是 kprobes 的一部分,它可以拦截函数的返回,而不必在函数的返回点(可能不止一处)设置探针。对于支持的架构(包括 ARMv8),只要选择了 kprobes,就可以选择此功能。
+
+#### jprobes
+
+jprobes 允许通过提供一个具有相同调用签名的中间函数来拦截对一个函数的调用,这里中间函数将被首先调用。jprobes 只是一个编程接口,它不能通过 debugfs 事件追踪子系统来使用。因此,我们将不会在这里进一步讨论 jprobes。如果你想使用 jprobes,请参考 kprobes 文档。
+
+### 调用 kprobes
+
+kprobes 提供了一系列能从内核代码中调用的 API,用来设置探测点,并注册当探测点被命中时要调用的函数。在不往内核中添加代码的情况下,kprobes 也是可用的,这是通过写入特定的事件追踪 debugfs 文件来实现的:在文件中设置探针地址,以及探针被命中时要记录到追踪日志中的信息。后者是本文将要讨论的重点。最后,kprobes 还可以通过 perf 命令来使用。
+
+#### kprobes API
+
+内核开发人员可以在内核中编写函数(通常在专用的调试模块中完成)来设置探测点,并且在探测指令执行前和执行后立即执行任何所需操作。这在 kprobes.txt 中有很好的解释。
+
+#### 事件追踪
+
+事件追踪子系统有自己的文档^注2 ,对于了解一般追踪事件的背景可能值得一读。事件追踪子系统是追踪点和 kprobes 事件追踪的基础。事件追踪文档重点关注追踪点,所以请在查阅文档时记住这一点。kprobes 与追踪点不同的是没有预定义的追踪点列表,而是采用动态创建的、用于触发追踪事件信息收集的任意探测点。事件追踪子系统通过一系列 debugfs 文件来控制和监视。当 kprobes 事件追踪等子系统需要时,事件追踪(`CONFIG_EVENT_TRACING`)将被自动选择。
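+
+作为示意,下面的示例展示了这些 debugfs 控制文件所在的位置以及查看方式(具体的文件列表因内核版本和配置而异,这里只列出几个常见文件):
+
+```
+$ mount -t debugfs none /sys/kernel/debug    # 若 debugfs 尚未挂载
+$ ls /sys/kernel/debug/tracing
+available_events  current_tracer  events  kprobe_events  kprobe_profile  trace  trace_pipe  ...
+```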
+
+##### kprobes 事件
+
+使用 kprobes 事件追踪子系统,用户可以指定在内核任意断点处要报告的信息,只需要指定任意现有可探测指令的地址以及格式化信息即可。在执行过程中遇到断点时,kprobes 将所请求的信息传递给事件追踪子系统的公共部分,这些部分将数据格式化并追加到追踪日志中,就像追踪点的工作方式一样。kprobes 使用一组类似的但大部分是独立的 debugfs 文件来控制和显示追踪事件信息。该功能可使用 `CONFIG_KPROBE_EVENT` 来选择。Kprobetrace 文档^注3 提供了如何使用 kprobes 事件追踪的基本信息,可以参考它来了解下面介绍的示例的详细信息。
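+
+作为一个简单的示例,下面按照 kprobetrace 文档演示了基本的操作流程:向 `kprobe_events` 文件追加探针定义、启用对应事件、查看追踪输出、最后删除探针。其中事件名 `my_open` 和被探测的 `do_sys_open` 函数只是示意,可以换成任何可探测的内核符号:
+
+```
+$ cd /sys/kernel/debug/tracing
+$ echo 'p:my_open do_sys_open %x0' >> kprobe_events   # 在函数入口设置探针,记录第一个参数
+$ echo 1 > events/kprobes/my_open/enable              # 启用该事件
+$ cat trace_pipe                                      # 实时查看追踪输出
+$ echo 0 > events/kprobes/my_open/enable              # 停用该事件
+$ echo '-:my_open' >> kprobe_events                   # 删除探针
+```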
+
+#### kprobes 和 perf
+
+perf 工具为 kprobes 提供了另一个命令行接口。特别地,`perf probe` 允许探测点除了由函数名加偏移量和地址指定外,还可由源文件和行号指定。perf 接口实际上是使用 kprobes 的 debugfs 接口的封装器。
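+
+下面是一个假设性的示例,大致演示 `perf probe` 的用法。这里的 `do_sys_open` 函数及其 `filename` 参数只是示意,按参数名取值需要内核带有调试信息,具体请以 perf-probe 手册页为准:
+
+```
+$ perf probe --add 'do_sys_open filename:string'   # 创建动态探测点
+$ perf record -e probe:do_sys_open -aR sleep 5     # 在全系统范围记录 5 秒该事件
+$ perf script                                      # 查看记录到的事件
+$ perf probe --del do_sys_open                     # 删除探测点
+```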
+
+### Arm64 kprobes
+
+上述所有 kprobes 的方面现在都在 arm64 上得到实现,然而实际上与其它架构上的有一些不同:
+
+* 寄存器名称参数当然是依架构而定的,可以在 ARM ARM 中找到。
+* 目前不是所有的指令类型都可被探测。当前不可探测的指令包括 mrs/msr(除了 DAIF 读取)、异常生成指令、eret 和 hint(除了 nop 变体)。在这些情况下,只探测一个附近的指令来代替是最简单的。这些指令在探测的黑名单里是因为在 kprobes 单步执行或者指令模拟时它们对处理器状态造成的改变是不安全的,这是由于 kprobes 构造的单步执行上下文和指令所需要的不一致,或者是由于指令不能容忍在 kprobes 中额外的处理时间和异常处理(ldx/stx)。
+* kprobes 会试图识别处于 ldx/stx 序列中的指令并阻止对其探测,但理论上这种检查可能会失败,导致被探测的原子序列永远无法成功。当探测原子代码序列附近时应该小心。
+* 注意,由于 Linux ARM64 调用约定的具体细节,不可能为被探测的函数可靠地复制栈帧,因此不要试图用 jprobes 这样做,这一点与支持 jprobes 的大多数其它架构不同。原因是被调用者没有足够的信息来确定需要复制多少栈空间。
+* 注意当探针被命中时,探针所记录的栈指针信息反映的是当时正在使用的那个栈指针,可能是内核栈指针,也可能是中断栈指针。
+* 有一组内核函数是不能被探测的,通常因为它们作为 kprobes 处理的一部分被调用。这组函数的一部分是依架构特定的,并且也包含如异常入口代码等。
+
+### 使用 kprobes 事件追踪
+
+kprobes 的一个常用例子是检测函数入口和/或出口。安装这种探针特别简单,因为可以直接使用函数名来作为探针地址,kprobes 事件追踪将查找符号名称并确定其地址。ARMv8 调用标准定义了函数参数和返回值的位置,并且这些可以作为 kprobes 事件处理的一部分被打印出来。
+
+#### 例子: 函数入口探测
+
+检测 USB 以太网驱动程序复位功能:
+
+```
+$ pwd
+/sys/kernel/debug/tracing
+$ cat > kprobe_events <<EOF
+p ax88772_reset %x0
+EOF
+$ echo 1 > events/kprobes/enable
+```
+
+此时每次该驱动的 `ax88772_reset()` 函数被调用,追踪事件都将会被记录。这个事件将显示一个指向 `usbnet` 结构的指针,它作为此函数的唯一参数通过 `X0` 传入(按照 ARMv8 调用标准)。插入需要该以太网驱动程序的 USB 加密狗后,我们看见以下追踪信息:
+
+```
+$ cat trace
+# tracer: nop
+#
+# entries-in-buffer/entries-written: 1/1 #P:8
+#
+# _-----=> irqs-off
+# / _----=> need-resched
+# | / _---=> hardirq/softirq
+# || / _--=> preempt-depth
+# ||| / delay
+# TASK-PID CPU# |||| TIMESTAMP FUNCTION
+# | | | |||| | |
+kworker/0:0-4 [000] d... 10972.102939: p_ax88772_reset_0: (ax88772_reset+0x0/0x230) arg1=0xffff800064824c80
+```
+
+这里我们可以看见传入到我们的探测函数的指针参数的值。由于我们没有使用 kprobes 事件追踪的可选标签功能,我们请求的信息被自动标注为 `arg1`。注意它指的是我们要求 kprobes 为这个探针记录的一组值中的第一个,而不是指函数参数的实际位置。在这个例子中,它碰巧也是我们所探测函数的第一个参数。
+
+#### 例子: 函数入口和返回探测
+
+kretprobe 功能专门用于探测函数返回。在函数入口,kprobes 子系统将会被调用,并建立一个钩子以便在函数返回时被调用,钩子将记录所需的事件信息。对最常见的情况,返回值会在 `X0` 寄存器中,这是非常有用的。`%x0` 中的返回值也可以用 `$retval` 来引用。以下例子也演示了如何提供一个可读的标签来展示有趣的信息。
+
+使用 kprobes 和 kretprobe 检测内核 `_do_fork()` 函数来记录参数和结果的例子:
+
+```
+$ cd /sys/kernel/debug/tracing
+$ cat > kprobe_events <<EOF
+p _do_fork %x0 %x1 %x2 %x3 %x4 %x5
+r _do_fork pid=%x0
+EOF
+$ echo 1 > events/kprobes/enable
+```
+
+此时每次对 `_do_fork()` 的调用都会产生两个记录到 trace 文件的 kprobe 事件,一个报告调用参数值,另一个报告返回值。返回值在 trace 文件中将被标记为 `pid`。这里是三次 fork 系统调用执行后的 trace 文件的内容:
+
+```
+$ cat trace
+# tracer: nop
+#
+# entries-in-buffer/entries-written: 6/6 #P:8
+#
+# _-----=> irqs-off
+# / _----=> need-resched
+# | / _---=> hardirq/softirq
+# || / _--=> preempt-depth
+# ||| / delay
+# TASK-PID CPU# |||| TIMESTAMP FUNCTION
+# | | | |||| | |
+ bash-1671 [001] d... 204.946007: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
+ bash-1671 [001] d..1 204.946391: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x724
+ bash-1671 [001] d... 208.845749: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
+ bash-1671 [001] d..1 208.846127: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x725
+ bash-1671 [001] d... 214.401604: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
+ bash-1671 [001] d..1 214.401975: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x726
+```
+
+#### 例子: 解引用指针参数
+
+对于指针值,kprobes 事件处理子系统还允许解引用并打印所需的内存内容,这适用于各种基本数据类型。为了显示所需的字段,需要手动计算结构内的偏移量。
+
+检测 `do_wait()` 函数:
+
+```
+$ cat > kprobe_events <<EOF
+p:wait_p do_wait wo_type=+0(%x0):u32 wo_flags=+4(%x0):u32
+r:wait_r do_wait \$retval
+EOF
+$ echo 1 > events/kprobes/enable
+```
+
+注意在第一个探针中使用的参数标签是可选的,可用于更清晰地识别记录在追踪日志中的信息。带符号的偏移量和括号表明该寄存器参数是一个指针,指向要记录到追踪日志中的内存内容。`:u32` 表明该内存位置包含一个无符号的 4 字节宽的数据(在这个例子中指局部定义的结构中的一个 enum 和一个 int)。
+
+探针标签(冒号后)是可选的,并且将用来识别日志中的探针。对每个探针来说标签必须是独一无二的。如果没有指定,将从附近的符号名称自动生成一个有用的标签,如前面的例子所示。
+
+也要注意,`$retval` 参数也可以简单地写作 `%x0`。
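+
+作为示意,下面这个假设性的写法演示了同样的返回探针,只是给返回值加上了标签和显式的带符号类型标注(事件名 `wait_r2` 为虚构):
+
+```
+$ echo 'r:wait_r2 do_wait ret=$retval:s64' >> kprobe_events
+```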
+
+这里是两次 fork 系统调用执行后的 trace 文件的内容:
+
+```
+$ cat trace
+# tracer: nop
+#
+# entries-in-buffer/entries-written: 4/4 #P:8
+#
+# _-----=> irqs-off
+# / _----=> need-resched
+# | / _---=> hardirq/softirq
+# || / _--=> preempt-depth
+# ||| / delay
+# TASK-PID CPU# |||| TIMESTAMP FUNCTION
+# | | | |||| | |
+ bash-1702 [001] d... 175.342074: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xe
+ bash-1702 [002] d..1 175.347236: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0x757
+ bash-1702 [002] d... 175.347337: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xf
+ bash-1702 [002] d..1 175.347349: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0xfffffffffffffff6
+```
+
+#### 例子: 探测任意指令地址
+
+在前面的例子中,我们已经为函数的入口和出口插入了探针,然而探测任意指令(除少数例外)也是可能的。如果我们要在一个 C 函数的中间某处放置探针,第一步是查看代码的汇编版本以确定我们要放置探针的位置。一种方法是在 vmlinux 文件上使用 gdb,并显示想要放置探针的函数中的指令。下面是一个对 `arch/arm64/kernel/module.c` 中 `module_alloc` 函数这样做的示例。在这个例子中,因为 gdb 似乎更倾向于使用弱符号定义(它是与这个函数关联的存根代码),所以我们改从 System.map 中来获取符号值:
+
+```
+$ grep module_alloc System.map
+ffff2000080951c4 T module_alloc
+ffff200008297770 T kasan_module_alloc
+```
+
+在这个例子中我们使用了交叉开发工具,在我们的主机系统上调用 gdb 来检查构成我们感兴趣的函数的指令。
+
+```
+$ ${CROSS_COMPILE}gdb vmlinux
+(gdb) x/30i 0xffff2000080951c4
+ 0xffff2000080951c4 : sub sp, sp, #0x30
+ 0xffff2000080951c8 : adrp x3, 0xffff200008d70000
+ 0xffff2000080951cc : add x3, x3, #0x0
+ 0xffff2000080951d0 : mov x5, #0x713 // #1811
+ 0xffff2000080951d4 : mov w4, #0xc0 // #192
+ 0xffff2000080951d8 : mov x2, #0xfffffffff8000000 // #-134217728
+ 0xffff2000080951dc : stp x29, x30, [sp,#16]
+ 0xffff2000080951e0 : add x29, sp, #0x10
+ 0xffff2000080951e4 : movk x5, #0xc8, lsl #48
+ 0xffff2000080951e8 : movk w4, #0x240, lsl #16
+ 0xffff2000080951ec : str x30, [sp]
+ 0xffff2000080951f0 : mov w7, #0xffffffff // #-1
+ 0xffff2000080951f4 : mov x6, #0x0 // #0
+ 0xffff2000080951f8 : add x2, x3, x2
+ 0xffff2000080951fc : mov x1, #0x8000 // #32768
+ 0xffff200008095200 : stp x19, x20, [sp,#32]
+ 0xffff200008095204 : mov x20, x0
+ 0xffff200008095208 : bl 0xffff2000082737a8 <__vmalloc_node_range>
+ 0xffff20000809520c : mov x19, x0
+ 0xffff200008095210 : cbz x0, 0xffff200008095234
+ 0xffff200008095214 : mov x1, x20
+ 0xffff200008095218 : bl 0xffff200008297770
+ 0xffff20000809521c : tbnz w0, #31, 0xffff20000809524c
+ 0xffff200008095220 : mov sp, x29
+ 0xffff200008095224 : mov x0, x19
+ 0xffff200008095228 : ldp x19, x20, [sp,#16]
+ 0xffff20000809522c : ldp x29, x30, [sp],#32
+ 0xffff200008095230 : ret
+ 0xffff200008095234 : mov sp, x29
+ 0xffff200008095238 : mov x19, #0x0 // #0
+```
+
+在这个例子中,我们将显示此函数中以下这行源代码的执行结果:
+
+```
+p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
+VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+NUMA_NO_NODE, __builtin_return_address(0));
+```
+
+……以及下面这行源代码中函数调用的返回值:
+
+```
+if (p && (kasan_module_alloc(p, size) < 0)) {
+```
+
+我们可以根据对这两个外部函数的调用,在汇编代码中识别出这些位置。为了显示这些值,我们将在目标系统上的 `0xffff20000809520c` 和 `0xffff20000809521c` 处放置探针。
+
+```
+$ cat > kprobe_events <<EOF
+p 0xffff20000809520c %x0
+p 0xffff20000809521c %x0
+EOF
+$ echo 1 > events/kprobes/enable
+```
+
+现在将一个以太网适配器加密狗插入到 USB 端口后,我们看到以下写入追踪日志的内容:
+
+```
+$ cat trace
+# tracer: nop
+#
+# entries-in-buffer/entries-written: 12/12 #P:8
+#
+# _-----=> irqs-off
+# / _----=> need-resched
+# | / _---=> hardirq/softirq
+# || / _--=> preempt-depth
+# ||| / delay
+# TASK-PID CPU# |||| TIMESTAMP FUNCTION
+# | | | |||| | |
+ systemd-udevd-2082 [000] d... 77.200991: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001188000
+ systemd-udevd-2082 [000] d... 77.201059: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
+ systemd-udevd-2082 [000] d... 77.201115: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001198000
+ systemd-udevd-2082 [000] d... 77.201157: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
+ systemd-udevd-2082 [000] d... 77.227456: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011a0000
+ systemd-udevd-2082 [000] d... 77.227522: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
+ systemd-udevd-2082 [000] d... 77.227579: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b0000
+ systemd-udevd-2082 [000] d... 77.227635: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
+ modprobe-2097 [002] d... 78.030643: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b8000
+ modprobe-2097 [002] d... 78.030761: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
+ modprobe-2097 [002] d... 78.031132: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001270000
+ modprobe-2097 [002] d... 78.031187: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
+```
+
+kprobes 事件系统的另一个功能是记录统计信息,这可以在 `kprobe_profile` 文件中找到。在以上追踪完成后,该文件的内容为:
+
+```
+$ cat kprobe_profile
+ p_0xffff20000809520c 6 0
+ p_0xffff20000809521c 6 0
+```
+
+这表明我们设置的两个断点各被命中了 6 次,这当然与追踪日志数据是一致的。在 kprobetrace 文档中有更多 kprobe_profile 的功能描述。
+
+也可以进一步过滤 kprobes 事件。用来控制这一点的 debugfs 文件在 kprobetrace 文档中列出,而这些文件内容的详细信息大多在 trace events 文档中描述。
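+
+例如,下面这个示例(假设仍使用前面定义的 `wait_p` 探针)演示了如何只记录某个特定进程触发的事件;具体的过滤表达式语法请以 trace events 文档为准:
+
+```
+$ echo 'common_pid == 1671' > events/kprobes/wait_p/filter   # 只保留 pid 为 1671 的进程产生的事件
+$ echo 0 > events/kprobes/wait_p/filter                      # 清除过滤器
+```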
+
+### 总结
+
+现在,Linux ARMv8 对 kprobes 功能的支持已经与其它架构相当。添加 uprobes 和 systemtap 支持的工作也正在进行中。这些功能/工具和其它已经完成的功能(如 perf、coresight)使 Linux ARMv8 用户可以像在其它更老的架构上一样进行调试和性能测试。
+
+* * *
+
+参考文献
+
+- 注1: Jim Keniston, Prasanna S. Panchamukhi, Masami Hiramatsu. “Kernel Probes (kprobes).” _GitHub_. GitHub, Inc., 15 Aug. 2016\. Web. 13 Dec. 2016.
+- 注2: Ts’o, Theodore, Li Zefan, and Tom Zanussi. “Event Tracing.” _GitHub_. GitHub, Inc., 3 Mar. 2016\. Web. 13 Dec. 2016.
+- 注3: Hiramatsu, Masami. “Kprobe-based Event Tracing.” _GitHub_. GitHub, Inc., 18 Aug. 2016\. Web. 13 Dec. 2016.
+
+
+----------------
+
+作者简介 : [David Long][8] 在 Linaro Kernel - Core Development 团队中担任工程师。在加入 Linaro 之前,他在商业和国防行业工作了数年,既做嵌入式实时系统,也为 Unix 提供软件开发工具。之后,他在 Digital(又名 Compaq)公司工作了十几年,负责 Unix 标准、C 编译器和运行时库的工作。再之后,David 去了一系列初创公司,从事嵌入式 Linux 和安卓系统、嵌入式定制操作系统和 Xen 虚拟化方面的工作。他拥有 MIPS、Alpha 和 ARM 等平台的经验。他使用过从 1979 年贝尔实验室 V6 开始的大部分 Unix 操作系统,并且长期以来一直是 Linux 用户和倡导者。他偶尔也因使用烙铁和数字示波器调试设备驱动而闻名。
+
+--------------------------------------------------------------------------------
+
+via: http://www.linaro.org/blog/kprobes-event-tracing-armv8/
+
+作者:[David Long][a]
+译者:[kimii](https://github.com/kimii)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linaro.org/author/david-long/
+[1]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
+[2]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
+[3]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
+[4]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
+[5]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
+[6]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
+[7]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
+[8]:http://www.linaro.org/author/david-long/
+[9]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#comments
+[10]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
+[11]:http://www.linaro.org/tag/arm64/
+[12]:http://www.linaro.org/tag/armv8/
+[13]:http://www.linaro.org/tag/jprobes/
+[14]:http://www.linaro.org/tag/kernel/
+[15]:http://www.linaro.org/tag/kprobes/
+[16]:http://www.linaro.org/tag/kretprobes/
+[17]:http://www.linaro.org/tag/perf/
+[18]:http://www.linaro.org/tag/tracing/
+
diff --git a/sources/tech/20171009 Examining network connections on Linux systems.md b/published/20171009 Examining network connections on Linux systems.md
similarity index 57%
rename from sources/tech/20171009 Examining network connections on Linux systems.md
rename to published/20171009 Examining network connections on Linux systems.md
index 665e1e546d..1676525e21 100644
--- a/sources/tech/20171009 Examining network connections on Linux systems.md
+++ b/published/20171009 Examining network connections on Linux systems.md
@@ -1,23 +1,20 @@
-translating by firmianay
-
-Examining network connections on Linux systems
+检查 Linux 系统上的网络连接
============================================================
-### Linux systems provide a lot of useful commands for reviewing network configuration and connections. Here's a look at a few, including ifquery, ifup, ifdown and ifconfig.
+> Linux 系统提供了许多有用的命令来检查网络配置和连接。下面来看几个,包括 `ifquery`、`ifup`、`ifdown` 和 `ifconfig`。
+Linux 上有许多可用于查看网络设置和连接的命令。在今天的文章中,我们将会通过一些非常方便的命令来看看它们是如何工作的。
-There are a lot of commands available on Linux for looking at network settings and connections. In today's post, we're going to run through some very handy commands and see how they work.
+### ifquery 命令
-### ifquery command
-
-One very useful command is the **ifquery** command. This command should give you a quick list of network interfaces. However, you might only see something like this —showing only the loopback interface:
+一个非常有用的命令是 `ifquery`。这个命令应该会显示一个网络接口列表。但是,你可能只会看到类似这样的内容 - 仅显示回环接口:
```
$ ifquery --list
lo
```
-If this is the case, your **/etc/network/interfaces** file doesn't include information on network interfaces except for the loopback interface. You can add lines like the last two in the example below — assuming DHCP is used to assign addresses — if you'd like it to be more useful.
+如果是这种情况,那说明你的 `/etc/network/interfaces` 文件中除了回环接口之外没有包括其它网络接口的信息。如果你希望它更有用,可以添加类似下面例子中最后两行的内容(这里假设使用 DHCP 来分配地址)。
```
# interfaces(5) file used by ifup(8) and ifdown(8)
@@ -27,15 +24,13 @@ auto eth0
iface eth0 inet dhcp
```
-### ifup and ifdown commands
+### ifup 和 ifdown 命令
-The related **ifup** and **ifdown** commands can be used to bring network connections up and shut them down as needed provided this file has the required descriptive data. Just keep in mind that "if" means "interface" in these commands just as it does in the **ifconfig** command, not "if" as in "if I only had a brain".
+只要 `/etc/network/interfaces` 文件里有所需的描述性数据,就可以使用相关的 `ifup` 和 `ifdown` 命令来按需打开和关闭网络连接。请记住,这些命令中的 “if” 和 `ifconfig` 命令中的一样,意思是“接口”(interface),而不是“如果我只有一个大脑”(if I only had a brain)中的 “if”。
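+
+例如(假设 `eth0` 已经在 `/etc/network/interfaces` 中配置好):
+
+```
+$ sudo ifdown eth0    # 关闭 eth0 接口
+$ sudo ifup eth0      # 根据配置文件重新启用 eth0
+```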
-
+### ifconfig 命令
-### ifconfig command
-
-The **ifconfig** command, on the other hand, doesn't read the /etc/network/interfaces file at all and still provides quite a bit of useful information on network interfaces -- configuration data along with packet counts that tell you how busy each interface has been. The ifconfig command can also be used to shut down and restart network interfaces (e.g., ifconfig eth0 down).
+另外,`ifconfig` 命令完全不读取 `/etc/network/interfaces`,但是仍然提供了网络接口相当多的有用信息 —— 配置数据以及可以告诉你每个接口有多忙的数据包计数。`ifconfig` 命令也可用于关闭和重新启动网络接口(例如:`ifconfig eth0 down`)。
```
$ ifconfig eth0
@@ -50,15 +45,13 @@ eth0 Link encap:Ethernet HWaddr 00:1e:4f:c8:43:fc
Interrupt:21 Memory:fe9e0000-fea00000
```
-The RX and TX packet counts in this output are extremely low. In addition, no errors or packet collisions have been reported. The **uptime** command will likely confirm that this system has only recently been rebooted.
+输出中的 RX 和 TX 数据包计数很低。此外,没有报告错误或数据包冲突。或许可以用 `uptime` 命令确认此系统最近才重新启动。
-The broadcast (Bcast) and network mask (Mask) addresses shown above indicate that the system is operating on a Class C equivalent network (the default) so local addresses will range from 192.168.0.1 to 192.168.0.254.
+上面显示的广播 (Bcast) 和网络掩码 (Mask) 地址表明系统运行在 C 类等效网络(默认)上,所以本地地址范围从 `192.168.0.1` 到 `192.168.0.254`。
-### netstat command
+### netstat 命令
-The **netstat** command provides information on routing and network connections. The **netstat -rn** command displays the system's routing table.
-
-
+`netstat` 命令提供有关路由和网络连接的信息。`netstat -rn` 命令显示系统的路由表。192.168.0.1 是本地网关 (Flags=UG)。
```
$ netstat -rn
@@ -69,7 +62,7 @@ Destination Gateway Genmask Flags MSS Window irtt Iface
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
-That **169.254.0.0** entry in the above output is only necessary if you are using or planning to use link-local communications. You can comment out the related lines in the **/etc/network/if-up.d/avahi-autoipd** file like this if this is not the case:
+上面输出中的 `169.254.0.0` 条目仅在你正在使用或计划使用本地链路通信时才有必要。如果不是这样的话,你可以在 `/etc/network/if-up.d/avahi-autoipd` 中注释掉相关的行:
```
$ tail -12 /etc/network/if-up.d/avahi-autoipd
@@ -86,9 +79,9 @@ $ tail -12 /etc/network/if-up.d/avahi-autoipd
#fi
```
-### netstat -a command
+### netstat -a 命令
-The **netstat -a** command will display **_all_** network connections. To limit this to listening and established connections (generally much more useful), use the **netstat -at** command instead.
+`netstat -a` 命令将显示“所有”网络连接。为了将其限制为显示正在监听和已建立的连接(通常更有用),请改用 `netstat -at` 命令。
```
$ netstat -at
@@ -104,21 +97,9 @@ tcp6 0 0 ip6-localhost:ipp [::]:* LISTEN
tcp6 0 0 ip6-localhost:smtp [::]:* LISTEN
```
-### netstat -rn command
+### host 命令
-The **netstat -rn** command displays the system's routing table. The 192.168.0.1 address is the local gateway (Flags=UG).
-
-```
-$ netstat -rn
-Kernel IP routing table
-Destination Gateway Genmask Flags MSS Window irtt Iface
-0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0
-192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
-```
-
-### host command
-
-The **host** command works a lot like **nslookup** by looking up the remote system's IP address, but also provides the system's mail handler.
+`host` 命令的工作方式很像 `nslookup`,可以查询远程系统的 IP 地址,但它还会给出该系统的邮件处理主机。
```
$ host world.std.com
@@ -126,9 +107,9 @@ world.std.com has address 192.74.137.5
world.std.com mail is handled by 10 smtp.theworld.com.
```
-### nslookup command
+### nslookup 命令
-The **nslookup** also provides information on the system (in this case, the local system) that is providing DNS lookup services.
+`nslookup` 命令还会提供关于提供 DNS 查询服务的系统(本例中是本地系统)的信息。
```
$ nslookup world.std.com
@@ -140,9 +121,9 @@ Name: world.std.com
Address: 192.74.137.5
```
-### dig command
+### dig 命令
-The **dig** command provides quitea lot of information on connecting to a remote system -- including the name server we are communicating with and how long the query takes to respond and is often used for troubleshooting.
+`dig` 命令提供了很多有关连接到远程系统的信息,包括与我们通信的名称服务器,以及查询响应所需的时间,它经常用于故障排除。
```
$ dig world.std.com
@@ -167,9 +148,9 @@ world.std.com. 78146 IN A 192.74.137.5
;; MSG SIZE rcvd: 58
```
-### nmap command
+### nmap 命令
-The **nmap** command is most frequently used to probe remote systems, but can also be used to report on the services being offered by the local system. In the output below, we can see that ssh is available for logins, that smtp is servicing email, that a web site is active, and that an ipp print service is running.
+`nmap` 经常用于探查远程系统,但同样也可用于报告本地系统所提供的服务。在下面的输出中,我们可以看到可以使用 ssh 登录、smtp 正在为电子邮件提供服务、web 站点处于活动状态,并且 ipp 打印服务正在运行。
```
$ nmap localhost
@@ -187,15 +168,15 @@ PORT STATE SERVICE
Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds
```
-Linux systems provide a lot of useful commands for reviewing their network configuration and connections. If you run out of commands to explore, keep in mind that **apropos network** might point you toward even more.
+Linux 系统提供了很多有用的命令用于查看网络配置和连接。如果你都探索完了,请记住 `apropos network` 或许会让你了解更多。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3230519/linux/examining-network-connections-on-linux-systems.html
作者:[Sandra Henry-Stocker][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20171029 A block layer introduction part 1 the bio layer.md b/published/20171029 A block layer introduction part 1 the bio layer.md
new file mode 100644
index 0000000000..96374c2302
--- /dev/null
+++ b/published/20171029 A block layer introduction part 1 the bio layer.md
@@ -0,0 +1,43 @@
+回复:块层介绍第一部分 - 块 I/O 层
+============================================================
+
+### 块层介绍第一部分:块 I/O 层
+
+回复:amarao 在[块层介绍第一部分:块 I/O 层][1] 中提的问题
+先前的文章:[块层介绍第一部分:块 I/O 层][2]
+
+![](https://static.lwn.net/images/2017/neil-blocklayer.png)
+
+嗨,
+
+你在这里描述的问题与块层不直接相关。这可能是一个驱动错误、可能是一个 SCSI 层错误,但绝对不是一个块层的问题。
+
+不幸的是,报告针对 Linux 的错误是一件难事。有些开发者拒绝去看 bugzilla,有些开发者喜欢它,有些(像我这样)只能勉强地使用它。
+
+另一种方法是发送电子邮件。为此,你需要选择正确的邮件列表,也许还要找到正确的开发人员,并且要在他们心情愉快、不太忙、不在假期的时候找到他们。有些人会努力回复所有邮件,有些人则完全无法预料;就我而言,随错误报告一起发送一个补丁通常更容易得到回应。如果你只有一个自己几乎都不了解的 bug,那么你的预期响应率可能会更低。很遗憾,但这是真的。
+
+许多 bug 都会得到回应和处理,但很多 bug 都没有。
+
+我不认为说没有人关心是公平的,但很可能没有人认为它像你想的那样重要。如果你想要一个解决方案,那么你需要去推动它。一个推动它的方法是花钱请顾问或者与经销商签订支持合同,我怀疑这两种方式都不太适合你的情况。另一种方法是了解代码如何工作,并自己找到解决方案。很多人都这么做,但是这对你来说可能不是一种选择。另一种方法是在不同的相关论坛上不断提出问题,直到得到回复。坚持可以见效。你需要做好准备去执行任何别人要求你做的测试,可能包括构建一个新的内核来测试。
+
+如果你能在最近的内核(4.12 或者更新)上复现这个 bug,我建议你邮件报告给 linux-kernel@vger.kernel.org、linux-scsi@vger.kernel.org 和我(neilb@suse.com)(注意你不必订阅这些列表来发送邮件,只需要发送就行)。描述你的硬件以及如何触发问题的。
+
+同时请包含所有处于 “D” 状态的进程的栈追踪。你可以用 “cat /proc/$PID/stack” 来得到它,这里的 “$PID” 是进程的 pid。
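+
+比如,下面这个小脚本可以把所有处于 “D” 状态的进程的内核栈一次性收集下来(需要 root 权限,仅作示意):
+
+```
+for pid in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}'); do
+    echo "== PID $pid =="
+    cat /proc/$pid/stack
+done
+```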
+
+请确保避免抱怨,或者说这个问题已经存在好几年了、这是多么严重的缺陷之类的话。没有人关心这些。我们关心的是 bug 以及如何修复它。因此只要报告相关的事实就行。
+
+尽量把所有事实写在邮件中,而不是放在指向其它地方的链接里。有时链接是必要的,但就你的脚本而言,它只有 8 行,所以直接包含在邮件中就行(并且避免使用像 “fuckup” 之类的描述,只需称它为“坏的”(broken)或者类似的说法)。同样,请确保你的邮件不是以 HTML 格式发送的。我们喜欢纯文本。HTML 会被所有的 @vger.kernel.org 邮件列表拒绝。你或许需要配置你的邮件程序不发送 HTML。
+
+--------------------------------------------------------------------------------
+
+via: https://lwn.net/Articles/737655/
+
+作者:[neilbrown][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://lwn.net/Articles/737655/
+[1]:https://lwn.net/Articles/737588/
+[2]:https://lwn.net/Articles/736534/
diff --git a/published/201711/20141028 When Does Your OS Run.md b/published/201711/20141028 When Does Your OS Run.md
new file mode 100644
index 0000000000..80ec1340c8
--- /dev/null
+++ b/published/201711/20141028 When Does Your OS Run.md
@@ -0,0 +1,57 @@
+操作系统何时运行?
+============================================================
+
+请各位思考以下问题:在你阅读本文的这段时间内,计算机中的操作系统在**运行**吗?又或者仅仅是 Web 浏览器在运行?又或者它们也许均处于空闲状态,等待着你的指示?
+
+这些问题并不复杂,但它们深入涉及到系统软件工作的本质。为了准确回答这些问题,我们需要透彻理解操作系统的行为模型,包括性能、安全和除错等方面。在该系列文章中,我们将以 Linux 为主举例来帮助你建立操作系统的行为模型,OS X 和 Windows 在必要的时候也会有所涉及。对那些深度探索者,我会在适当的时候给出 Linux 内核源码的链接。
+
+这里有一个基本认知,就是,在任意给定时刻,某个 CPU 上仅有一个任务处于活动状态。大多数情形下这个任务是某个用户程序,例如你的 Web 浏览器或音乐播放器,但它也可能是一个操作系统线程。可以确信的是,它是**一个任务**,不是两个或更多,也不是零个,对,**永远**是一个。
+
+这听上去可能会有些问题。比如,你的音乐播放器是否会独占 CPU 而阻止其它任务运行?从而使你不能打开任务管理工具去杀死音乐播放器,甚至让鼠标点击也失效,因为操作系统没有机会去处理这些事件。你可能会愤而喊出,“它究竟在搞什么鬼?”,并引发骚乱。
+
+此时便轮到**中断**大显身手了。中断就好比,一声巨响或一次拍肩后,神经系统通知大脑去感知外部刺激一般。计算机主板上的[芯片组][1]同样会中断 CPU 运行以传递新的外部事件,例如键盘上的某个键被按下、网络数据包的到达、一次硬盘读取的完成,等等。硬件外设、主板上的中断控制器和 CPU 本身,它们共同协作实现了中断机制。
+
+中断对于记录我们最珍视的资源——时间——也至关重要。计算机[启动过程][2]中,操作系统内核会设置一个硬件计时器以让其产生周期性**计时中断**,例如每隔 10 毫秒触发一次。每当计时中断到来,内核便会收到通知以更新系统统计信息和盘点如下事项:当前用户程序是否已运行了足够长时间?是否有某个 TCP 定时器超时了?中断给予了内核一个处理这些问题并采取合适措施的机会。这就好像你给自己设置了整天的周期闹铃并把它们用作检查点:我是否应该去做我正在进行的工作?是否存在更紧急的事项?直到你发现 10 年时间已逝去……
+
+这些内核对 CPU 周期性的劫持被称为滴答,也就是说,是中断让你的操作系统滴答了一下。不止如此,中断也被用作处理一些软件事件,如整数溢出和页错误,其中未涉及外部硬件。**中断是进入操作系统内核最频繁也是最重要的入口**。对于学习电子工程的人而言,这些并无古怪,它们是操作系统赖以运行的机制。
+
+说到这里,让我们再来看一些实际情形。下图示意了 Intel Core i5 系统中的一个网卡中断。图片中的部分元素设置了超链,你可以点击它们以获取更为详细的信息,例如每个设备均被链接到了对应的 Linux 驱动源码。
+
+![](http://duartes.org/gustavo/blog/img/os/hardware-interrupt.png)
+
+链接如下:
+
+- network card : https://github.com/torvalds/linux/blob/v3.17/drivers/net/ethernet/intel/e1000e/netdev.c
+- USB keyboard : https://github.com/torvalds/linux/blob/v3.16/drivers/hid/usbhid/usbkbd.c
+- I/O APIC : https://github.com/torvalds/linux/blob/v3.16/arch/x86/kernel/apic/io_apic.c
+- HPET : https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/hpet.c
+
+让我们来仔细研究下。首先,由于系统中存在众多中断源,如果硬件只是通知 CPU “嘿,这里发生了一些事情”然后什么也不做,则不太行得通。这会带来难以忍受的冗长等待。因此,计算机上电时,每个设备都被授予了一根**中断线**,或者称为 IRQ。这些 IRQ 然后被系统中的中断控制器映射成值介于 0 到 255 之间的**中断向量**。等到中断到达 CPU,它便具备了一个完好定义的数值,异于硬件的某些其它诡异行为。
+
+相应地,CPU 中还存有一个由内核维护的指针,指向一个包含 255 个函数指针的数组,其中每个函数被用来处理某个特定的中断向量。后文中,我们将继续深入探讨这个数组,它也被称作**中断描述符表**(IDT)。
+
+每当中断到来,CPU 会用中断向量的值去索引中断描述符表,并执行相应处理函数。这相当于,在当前正在执行任务的上下文中,发生了一个特殊函数调用,从而允许操作系统以较小开销快速对外部事件作出反应。考虑下述场景,Web 服务器在发送数据时,CPU 却间接调用了操作系统函数,这听上去要么很炫酷要么令人惊恐。下图展示了 Vim 编辑器运行过程中一个中断到来的情形。
+
+![](http://duartes.org/gustavo/blog/img/os/vim-interrupted.png)
+
+此处请留意,中断的到来是如何触发 CPU 到 [Ring 0][3] 内核模式的切换而未有改变当前活跃的任务。这看上去就像,Vim 编辑器直接面向操作系统内核产生了一次神奇的函数调用,但 Vim 还在那里,它的[地址空间][4]原封未动,等待着执行流返回。
+
+这很令人振奋,不是么?不过让我们暂且告一段落吧,我需要合理控制篇幅。我知道还没有回答完这个开放式问题,甚至还实质上翻开了新的问题,但你至少知道了在你读这个句子的同时**滴答**正在发生。我们将在充实了对操作系统动态行为模型的理解之后再回来寻求问题的答案,对 Web 浏览器情形的理解也会变得清晰。如果你仍有问题,尤其是在这篇文章公诸于众后,请尽管提出。我将会在文章或后续评论中回答它们。下篇文章将于明天在 RSS 和 Twitter 上发布。
+
+--------------------------------------------------------------------------------
+
+via: http://duartes.org/gustavo/blog/post/when-does-your-os-run/
+
+作者:[gustavo][a]
+译者:[Cwndmiao](https://github.com/Cwndmiao)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:http://duartes.org/gustavo/blog/post/motherboard-chipsets-memory-map
+[2]:http://duartes.org/gustavo/blog/post/kernel-boot-process
+[3]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
+[4]:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
+[5]:http://feeds.feedburner.com/GustavoDuarte
+[6]:http://twitter.com/food4hackers
diff --git a/published/20170202 Understanding Firewalld in Multi-Zone Configurations.md b/published/201711/20170202 Understanding Firewalld in Multi-Zone Configurations.md
similarity index 100%
rename from published/20170202 Understanding Firewalld in Multi-Zone Configurations.md
rename to published/201711/20170202 Understanding Firewalld in Multi-Zone Configurations.md
diff --git a/published/20170227 Ubuntu Core in LXD containers.md b/published/201711/20170227 Ubuntu Core in LXD containers.md
similarity index 100%
rename from published/20170227 Ubuntu Core in LXD containers.md
rename to published/201711/20170227 Ubuntu Core in LXD containers.md
diff --git a/translated/tech/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md b/published/201711/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md
similarity index 73%
rename from translated/tech/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md
rename to published/201711/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md
index cd2ce76527..9069b01ed9 100644
--- a/translated/tech/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md
+++ b/published/201711/20170418 INTRODUCING MOBY PROJECT A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT.md
@@ -1,27 +1,27 @@
-介绍 MOBY 项目:推进软件容器化运动的一个新的开源项目
+介绍 Moby 项目:推进软件容器化运动的一个新的开源项目
============================================================
![Moby Project](https://i0.wp.com/blog.docker.com/wp-content/uploads/1-2.png?resize=763%2C275&ssl=1)
-自从 Docker 四年前将软件容器推向民主化以来,整个生态系统都围绕着容器化而发展,在这段压缩的时期,它经历了两个不同的增长阶段。在这每一个阶段,生产容器系统的模式已经演变成适应用户群体以及项目的规模和需求和不断增长的贡献者生态系统。
+自从 Docker 四年前将软件容器推向大众化以来,整个生态系统都围绕着容器化而发展,在这段这么短的时期内,它经历了两个不同的增长阶段。在这每一个阶段,生产容器系统的模式已经随着项目和不断增长的容器生态系统而演变适应用户群体的规模和需求。
-Moby 是一个新的开源项目,旨在推进软件容器化运动,帮助生态系统将容器作为主流。它提供了一个组件库,一个将它们组装到定制的基于容器的系统的框架,以及所有容器爱好者进行实验和交换想法的地方。
+Moby 是一个新的开源项目,旨在推进软件容器化运动,帮助生态系统将容器作为主流。它提供了一个组件库,一个将它们组装到定制的基于容器的系统的框架,也是所有容器爱好者进行实验和交换想法的地方。
让我们来回顾一下我们如何走到今天。在 2013-2014 年,开拓者开始使用容器,并在一个单一的开源代码库,Docker 和其他一些项目中进行协作,以帮助工具成熟。
![Docker Open Source](https://i0.wp.com/blog.docker.com/wp-content/uploads/2-2.png?resize=975%2C548&ssl=1)
-然后在 2015-2016 年,云原生应用中大量采用容器用于生产环境。在这个阶段,用户社区已经发展到支持成千上万个部署,由数百个生态系统项目和成千上万的贡献者支持。正是在这个阶段,Docker 将其生产模式演变为基于开放式组件的方法。这样,它使我们能够增加创新和合作的方面。
+然后在 2015-2016 年,云原生应用中大量采用容器用于生产环境。在这个阶段,用户社区已经发展到支持成千上万个部署,由数百个生态系统项目和成千上万的贡献者支持。正是在这个阶段,Docker 将其产品模式演变为基于开放式组件的方法。这样,它使我们能够增加创新和合作的方面。
-涌现出来的新独立的 Docker 组件项目帮助刺激了合作伙伴生态系统和用户社区的发展。在此期间,我们从 Docker 代码库中提取并快速创新组件,以便系统制造商可以在构建自己的容器系统时独立重用它们:[runc][7]、[HyperKit][8]、[VPNKit][9]、[SwarmKit][10]、[InfraKit][11]、[containerd][12] 等。
+涌现出来的新独立的 Docker 组件项目帮助促进了合作伙伴生态系统和用户社区的发展。在此期间,我们从 Docker 代码库中提取并快速创新组件,以便系统制造商可以在构建自己的容器系统时独立重用它们:[runc][7]、[HyperKit][8]、[VPNKit][9]、[SwarmKit][10]、[InfraKit][11]、[containerd][12] 等。
![Docker Open Components](https://i1.wp.com/blog.docker.com/wp-content/uploads/3-2.png?resize=975%2C548&ssl=1)
-站在容器浪潮的最前沿,我们看到 2017 年出现的一个趋势是容器将成为主流,传播到计算、服务器、数据中心、云、桌面、物联网和移动的各个领域。每个行业和垂直市场、金融、医疗、政府、旅游、制造。以及每一个使用案例,现代网络应用、传统服务器应用、机器学习、工业控制系统、机器人技术。容器生态系统中许多新进入者的共同点是,它们建立专门的系统,针对特定的基础设施、行业或使用案例。
+站在容器浪潮的最前沿,我们看到 2017 年出现的一个趋势是容器将成为主流,传播到计算、服务器、数据中心、云、桌面、物联网和移动的各个领域。每个行业和垂直市场,金融、医疗、政府、旅游、制造。以及每一个使用案例,现代网络应用、传统服务器应用、机器学习、工业控制系统、机器人技术。容器生态系统中许多新进入者的共同点是,它们建立专门的系统,针对特定的基础设施、行业或使用案例。
-作为一家公司,Docker 使用开源作为我们的创新实验室,而与整个生态系统合作。Docker 的成功取决于容器生态系统的成功:如果生态系统成功,我们就成功了。因此,我们一直在计划下一阶段的容器生态系统增长:什么样的生产模式将帮助我们扩大集容器生态系统,实现容器成为主流的承诺?
+作为一家公司,Docker 使用开源作为我们的创新实验室,而与整个生态系统合作。Docker 的成功取决于容器生态系统的成功:如果生态系统成功,我们就成功了。因此,我们一直在计划下一阶段的容器生态系统增长:什么样的产品模式将帮助我们扩大容器生态系统,以实现容器成为主流的承诺?
-去年,我们的客户开始在 Linux 以外的许多平台上要求有 Docker:Mac 和 Windows 桌面、Windows Server、云平台(如亚马逊网络服务(AWS)、Microsoft Azure 或 Google 云平台),并且我们专门为这些平台创建了[许多 Docker 版本][13]。为了在一个相对较短的时间与更小的团队,以可扩展的方式构建和发布这些专业版本,而不必重新发明轮子,很明显,我们需要一个新的方法。我们需要我们的团队不仅在组件上进行协作,而且还在组件组合上进行协作,这借用[来自汽车行业的想法][14],其中组件被重用于构建完全不同的汽车。
+去年,我们的客户开始在 Linux 以外的许多平台上要求有 Docker:Mac 和 Windows 桌面、Windows Server、云平台(如亚马逊网络服务(AWS)、Microsoft Azure 或 Google 云平台),并且我们专门为这些平台创建了[许多 Docker 版本][13]。为了在一个相对较短的时间和更小的团队中,以可扩展的方式构建和发布这些专业版本,而不必重新发明轮子,很明显,我们需要一个新的方式。我们需要我们的团队不仅在组件上进行协作,而且还在组件组合上进行协作,这借用[来自汽车行业的想法][14],其中组件被重用于构建完全不同的汽车。
![Docker production model](https://i1.wp.com/blog.docker.com/wp-content/uploads/4-2.png?resize=975%2C548&ssl=1)
@@ -29,15 +29,13 @@ Moby 是一个新的开源项目,旨在推进软件容器化运动,帮助生
![Moby Project](https://i0.wp.com/blog.docker.com/wp-content/uploads/5-2.png?resize=975%2C548&ssl=1)
-为了实现这种新的合作高度,今天我们宣布推出软件容器化运动的新开源项目 Moby。它是提供了数十个组件的“乐高集”,一个将它们组合成定制容器系统的框架,以及所有容器爱好者进行试验和交换意见的场所。可以把 Moby 认为是容器系统的“乐高俱乐部”。
+为了实现这种新的合作高度,今天(2017 年 4 月 18 日)我们宣布推出软件容器化运动的新开源项目 Moby。它是提供了数十个组件的“乐高组件”,一个将它们组合成定制容器系统的框架,以及所有容器爱好者进行试验和交换意见的场所。可以把 Moby 认为是容器系统的“乐高俱乐部”。
-Moby包括:
-
-1. 容器化后端组件**库**(例如,底层构建器、日志记录设备、卷管理、网络、镜像管理、containerd、SwarmKit 等)
+Moby 包括:
+1. 容器化后端组件**库**(例如,低层构建器、日志记录设备、卷管理、网络、镜像管理、containerd、SwarmKit 等)
2. 将组件组合到独立容器平台中的**框架**,以及为这些组件构建、测试和部署构件的工具。
-
-3. 一个名为 **Moby Origin** 的引用组件,它是 Docker 容器平台的开放基础,以及使用 Moby 库或其他项目的各种组件的容器系统示例。
+3. 一个名为 “Moby Origin” 的引用组件,它是 Docker 容器平台的开放基础,以及使用 Moby 库或其他项目的各种组件的容器系统示例。
Moby 专为系统构建者而设计,他们想要构建自己的基于容器的系统,而不是可以使用 Docker 或其他容器平台的应用程序开发人员。Moby 的参与者可以从源自 Docker 的组件库中进行选择,或者可以选择将“自己的组件”(BYOC)打包为容器,以便在所有组件之间进行混合和匹配以创建定制的容器系统。
@@ -49,9 +47,9 @@ Docker 将 Moby 作为一个开放的研发实验室来试验、开发新的组
via: https://blog.docker.com/2017/04/introducing-the-moby-project/
-作者:[Solomon Hykes ][a]
+作者:[Solomon Hykes][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/201711/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md b/published/201711/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md
new file mode 100644
index 0000000000..66a57da3e9
--- /dev/null
+++ b/published/201711/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md
@@ -0,0 +1,67 @@
+了解用于 Linux 和 Windows 容器的 Docker “容器主机”与“容器操作系统”
+=================
+
+让我们来探讨一下“容器主机”和“容器操作系统”之间的关系,以及它们在 Linux 和 Windows 容器之间的区别。
+
+### 一些定义
+
+* 容器主机:也称为主机操作系统。主机操作系统是 Docker 客户端和 Docker 守护程序在其上运行的操作系统。在 Linux 和非 Hyper-V 容器的情况下,主机操作系统与运行中的 Docker 容器共享内核。对于 Hyper-V,每个容器都有自己的 Hyper-V 内核。
+* 容器操作系统:也被称为基础操作系统。基础操作系统是指包含操作系统如 Ubuntu、CentOS 或 windowsservercore 的镜像。通常情况下,你将在基础操作系统镜像之上构建自己的镜像,以便可以利用该操作系统的部分功能。请注意,Windows 容器需要一个基础操作系统,而 Linux 容器不需要。
+* 操作系统内核:内核管理诸如内存、文件系统、网络和进程调度等底层功能。
+
+### 如下的一些图
+
+![Linux Containers](http://floydhilton.com/images/2017/03/2017-03-31_14_50_13-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
+
+在上面的例子中:
+
+* 主机操作系统是 Ubuntu。
+* Docker 客户端和 Docker 守护进程(一起被称为 Docker 引擎)正在主机操作系统上运行。
+* 每个容器共享主机操作系统内核。
+* CentOS 和 BusyBox 是 Linux 基础操作系统镜像。
+* “No OS” 容器表明你不需要基础操作系统就可以在 Linux 中运行一个容器。你可以创建一个使用 [scratch][1] 基础镜像的 Dockerfile,然后运行直接使用内核的二进制文件(参见这个列表后面的示例)。
+* 查看[这篇][2]文章来比较基础 OS 的大小。
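+
+下面是一个假设性的示例,演示如何基于 scratch 基础镜像构建并运行一个容器(假设当前目录下已有一个静态链接的可执行文件 `hello`):
+
+```
+$ cat > Dockerfile <<EOF
+FROM scratch
+COPY hello /
+CMD ["/hello"]
+EOF
+$ docker build -t hello-scratch .
+$ docker run --rm hello-scratch
+```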
+
+![Windows Containers - Non Hyper-V](http://floydhilton.com/images/2017/03/2017-03-31_15_04_03-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
+
+在上面的例子中:
+
+* 主机操作系统是 Windows 10 或 Windows Server。
+* 每个容器共享主机操作系统内核。
+* 所有 Windows 容器都需要 [nanoserver][3] 或 [windowsservercore][4] 的基础操作系统。
+
+![Windows Containers - Hyper-V](http://floydhilton.com/images/2017/03/2017-03-31_15_41_31-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
+
+在上面的例子中:
+
+* 主机操作系统是 Windows 10 或 Windows Server。
+* 每个容器都托管在自己的轻量级 Hyper-V 虚拟机中。
+* 每个容器使用 Hyper-V 虚拟机内的内核,它在容器之间提供额外的分离层。
+* 所有 Windows 容器都需要 [nanoserver][5] 或 [windowsservercore][6] 的基础操作系统。
+
+### 几个好的链接
+
+* [关于 Windows 容器][7]
+* [深入实现 Windows 容器,包括多用户模式和“写时复制”来节省资源][8]
+* [Linux 容器如何通过使用“写时复制”来节省资源][9]
+
+--------------------------------------------------------------------------------
+
+via: http://floydhilton.com/docker/2017/03/31/Docker-ContainerHost-vs-ContainerOS-Linux-Windows.html
+
+作者:[Floyd Hilton][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://floydhilton.com/about/
+[1]:https://hub.docker.com/_/scratch/
+[2]:https://www.brianchristner.io/docker-image-base-os-size-comparison/
+[3]:https://hub.docker.com/r/microsoft/nanoserver/
+[4]:https://hub.docker.com/r/microsoft/windowsservercore/
+[5]:https://hub.docker.com/r/microsoft/nanoserver/
+[6]:https://hub.docker.com/r/microsoft/windowsservercore/
+[7]:https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/
+[8]:http://blog.xebia.com/deep-dive-into-windows-server-containers-and-docker-part-2-underlying-implementation-of-windows-server-containers/
+[9]:https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/#the-copy-on-write-strategy
diff --git a/published/20170608 The Life-Changing Magic of Tidying Up Code.md b/published/201711/20170608 The Life-Changing Magic of Tidying Up Code.md
similarity index 100%
rename from published/20170608 The Life-Changing Magic of Tidying Up Code.md
rename to published/201711/20170608 The Life-Changing Magic of Tidying Up Code.md
diff --git a/published/20170706 Wildcard Certificates Coming January 2018.md b/published/201711/20170706 Wildcard Certificates Coming January 2018.md
similarity index 100%
rename from published/20170706 Wildcard Certificates Coming January 2018.md
rename to published/201711/20170706 Wildcard Certificates Coming January 2018.md
diff --git a/published/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md b/published/201711/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md
similarity index 100%
rename from published/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md
rename to published/201711/20170825 Guide to Linux App Is a Handy Tool for Every Level of Linux User.md
diff --git a/published/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md b/published/201711/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md
similarity index 100%
rename from published/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md
rename to published/201711/20170905 GIVE AWAY YOUR CODE BUT NEVER YOUR TIME.md
diff --git a/published/20170928 3 Python web scrapers and crawlers.md b/published/201711/20170928 3 Python web scrapers and crawlers.md
similarity index 100%
rename from published/20170928 3 Python web scrapers and crawlers.md
rename to published/201711/20170928 3 Python web scrapers and crawlers.md
diff --git a/published/20171002 Scaling the GitLab database.md b/published/201711/20171002 Scaling the GitLab database.md
similarity index 100%
rename from published/20171002 Scaling the GitLab database.md
rename to published/201711/20171002 Scaling the GitLab database.md
diff --git a/published/20171003 PostgreSQL Hash Indexes Are Now Cool.md b/published/201711/20171003 PostgreSQL Hash Indexes Are Now Cool.md
similarity index 100%
rename from published/20171003 PostgreSQL Hash Indexes Are Now Cool.md
rename to published/201711/20171003 PostgreSQL Hash Indexes Are Now Cool.md
diff --git a/published/20171004 No the Linux desktop hasnt jumped in popularity.md b/published/201711/20171004 No the Linux desktop hasnt jumped in popularity.md
similarity index 100%
rename from published/20171004 No the Linux desktop hasnt jumped in popularity.md
rename to published/201711/20171004 No the Linux desktop hasnt jumped in popularity.md
diff --git a/published/20171007 Instant 100 command line productivity boost.md b/published/201711/20171007 Instant 100 command line productivity boost.md
similarity index 100%
rename from published/20171007 Instant 100 command line productivity boost.md
rename to published/201711/20171007 Instant 100 command line productivity boost.md
diff --git a/published/20171008 8 best languages to blog about.md b/published/201711/20171008 8 best languages to blog about.md
similarity index 100%
rename from published/20171008 8 best languages to blog about.md
rename to published/201711/20171008 8 best languages to blog about.md
diff --git a/published/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md b/published/201711/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md
similarity index 100%
rename from published/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md
rename to published/201711/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md
diff --git a/published/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md b/published/201711/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md
similarity index 100%
rename from published/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md
rename to published/201711/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md
diff --git a/published/20171011 How to set up a Postgres database on a Raspberry Pi.md b/published/201711/20171011 How to set up a Postgres database on a Raspberry Pi.md
similarity index 100%
rename from published/20171011 How to set up a Postgres database on a Raspberry Pi.md
rename to published/201711/20171011 How to set up a Postgres database on a Raspberry Pi.md
diff --git a/published/20171011 Why Linux Works.md b/published/201711/20171011 Why Linux Works.md
similarity index 100%
rename from published/20171011 Why Linux Works.md
rename to published/201711/20171011 Why Linux Works.md
diff --git a/published/20171013 6 reasons open source is good for business.md b/published/201711/20171013 6 reasons open source is good for business.md
similarity index 100%
rename from published/20171013 6 reasons open source is good for business.md
rename to published/201711/20171013 6 reasons open source is good for business.md
diff --git a/published/20171013 Best of PostgreSQL 10 for the DBA.md b/published/201711/20171013 Best of PostgreSQL 10 for the DBA.md
similarity index 100%
rename from published/20171013 Best of PostgreSQL 10 for the DBA.md
rename to published/201711/20171013 Best of PostgreSQL 10 for the DBA.md
diff --git a/published/20171015 How to implement cloud-native computing with Kubernetes.md b/published/201711/20171015 How to implement cloud-native computing with Kubernetes.md
similarity index 100%
rename from published/20171015 How to implement cloud-native computing with Kubernetes.md
rename to published/201711/20171015 How to implement cloud-native computing with Kubernetes.md
diff --git a/published/20171015 Monitoring Slow SQL Queries via Slack.md b/published/201711/20171015 Monitoring Slow SQL Queries via Slack.md
similarity index 100%
rename from published/20171015 Monitoring Slow SQL Queries via Slack.md
rename to published/201711/20171015 Monitoring Slow SQL Queries via Slack.md
diff --git a/published/20171015 Why Use Docker with R A DevOps Perspective.md b/published/201711/20171015 Why Use Docker with R A DevOps Perspective.md
similarity index 100%
rename from published/20171015 Why Use Docker with R A DevOps Perspective.md
rename to published/201711/20171015 Why Use Docker with R A DevOps Perspective.md
diff --git a/published/20171016 Introducing CRI-O 1.0.md b/published/201711/20171016 Introducing CRI-O 1.0.md
similarity index 100%
rename from published/20171016 Introducing CRI-O 1.0.md
rename to published/201711/20171016 Introducing CRI-O 1.0.md
diff --git a/published/20171017 A tour of Postgres Index Types.md b/published/201711/20171017 A tour of Postgres Index Types.md
similarity index 100%
rename from published/20171017 A tour of Postgres Index Types.md
rename to published/201711/20171017 A tour of Postgres Index Types.md
diff --git a/published/201711/20171017 Image Processing on Linux.md b/published/201711/20171017 Image Processing on Linux.md
new file mode 100644
index 0000000000..32ef1a2acd
--- /dev/null
+++ b/published/201711/20171017 Image Processing on Linux.md
@@ -0,0 +1,96 @@
+Linux 上的科学图像处理
+============================================================
+
+在展示数据和工作成果方面,我了解到几款科学软件,但这里不会涉及太多方面。因此在这篇文章中,我将谈到一款叫 ImageJ 的热门图像处理软件。特别的,我会介绍 [Fiji][4],这是一款绑定了一系列用于科学图像处理插件的 ImageJ 软件。
+
+Fiji 这个名字是一个循环缩略词,很像 GNU 。代表着 “Fiji Is Just ImageJ”。 ImageJ 是科学研究领域进行图像分析的实用工具 —— 例如你可以用它来辨认航拍风景图中树的种类。 ImageJ 能划分物品种类。它以插件架构制成,海量插件可供选择以提升使用灵活度。
+
+首先是安装 ImageJ(或 Fiji)。大多数 Linux 发行版都有 ImageJ 的软件包。你愿意的话,可以用这种方式安装它,然后根据你的研究需要安装所需的独立插件。另一种选择是安装 Fiji,同时获取最常用的插件。不幸的是,大多数 Linux 发行版的软件中心里没有可用的 Fiji 安装包。幸而,官网上提供了简单的安装文件。这是一个 zip 文件,包含了运行 Fiji 需要的所有文件目录。第一次启动时,你只会看到一个列出了菜单项的工具栏(图 1)。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif1.png)
+
+*图 1. 第一次打开 Fiji 有一个最小化的界面。*
+
+如果你没有备好图片来练习使用 ImageJ ,Fiji 安装包包含了一些示例图片。点击“File”->“Open Samples”的下拉菜单选项(图 2)。这些示例包含了许多你可能有兴趣做的任务。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif2.jpg)
+
+*图 2. 案例图片可供学习使用 ImageJ。*
+
+如果你安装了 Fiji,而不是单纯的 ImageJ ,那么大量插件也会被安装。首先要注意的是自动更新器插件。每次打开 ImageJ ,该插件将联网检验 ImageJ 和已安装插件的更新。
+
+所有已安装的插件都在“插件”菜单项中可选。一旦你安装了很多插件,列表会变得冗杂,所以需要精简你选择的插件。你想手动更新的话,点击“Help”->“Update Fiji” 菜单项强制检测并获取可用更新的列表(图 3)。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif3.png)
+
+*图 3. 强制手动检测可用更新。*
+
+那么,现在,用 Fiji/ImageJ 可以做什么呢?举一例,统计图片中的物品数。你可以通过点击“File”->“Open Samples”->“Embryos”来载入示例。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif4.jpg)
+
+*图 4. 用 ImageJ 算出图中的物品数。*
+
+第一步是给图片设定比例,这样你可以告诉 ImageJ 如何判别物品。首先,在工具栏上选择线条按钮,然后选择“Analyze”->“Set Scale”,设置比例尺所包含的像素点个数(图 5)。你可以将“known distance”设置为 100,单位设为“um”。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif5.png)
+
+*图 5. 很多图片分析任务需要对图片设定一个范围。*
+
+接下来的步骤是简化图片内的信息。点击“Image”->“Type”->“8-bit”,将图片减少为 8 位灰度图片。要分隔各个独立物体,点击“Process”->“Binary”->“Make Binary”以自动设置图片的门限(图 6)。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif6.png)
+
+*图 6. 有些工具可以自动完成像门限一样的任务。*
+
+在对图片内的物品计数前,你需要先移除像比例尺文字之类的人为痕迹。可以用矩形选择工具选中它,并点击“Edit”->“Clear”来完成这项操作。现在你可以分析图片,看看这里有哪些物体。
+
+确保图中没有区域被选中,点击“Analyze”->“Analyze Particles”来弹出窗口来选择最小尺寸,这决定了最后的图片会展示什么(图 7)。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif7.png)
+
+*图 7. 你可以通过确定最小尺寸生成一个缩减过的图片。*
+
+图 8 在总结窗口展示了一个概览。每个最小点也有独立的细节窗口。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif8.png)
+
+*图 8. 包含了已知最小点总览清单的输出结果。*
+
+当你有了一个可以处理给定图片类型的分析流程后,通常需要将相同的步骤应用到一系列图片当中。这可能数以千计,你当然不会想对每张图片手动重复操作。这时候,你可以把必要的步骤集中到一个宏中,这样它们就可以被重复应用。点击“Plugins”->“Macros”->“Record”,会弹出一个新的窗口,记录你随后的所有命令。所有步骤完成后,你可以将之保存为一个宏文件,并且通过点击“Plugins”->“Macros”->“Run”在其它图片上重复运行。
+
+如果你有非常特定的工作步骤,你可以简单地打开宏文件并手动编辑它,因为它是一个简单的文本文件。事实上有一套完整的宏语言可供你更加充分地控制图片处理过程。
+
+然而,如果你有真的有非常多的系列图片需要处理,这也将是冗长乏味的工作。这种情况下,前往“Process”->“Batch”->“Macro”,会弹出一个你可以设置批量处理工作的新窗口(图 9)。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif9.png)
+
+*图 9. 对批量输入的图片用单一命令运行宏。*
+
+这个窗口中,你能选择应用哪个宏文件、输入图片所在的源目录和你想写入输出图片的输出目录。也可以设置输出文件格式,及通过文件名筛选输入图片中需要使用的。万事具备之后,点击窗口下方的的“Process”按钮开始批量操作。
+
+若这是会重复多次的工作,你可以点击窗口底部的“Save”按钮,将这个批量处理任务保存到一个文本文件中。之后点击同样位于窗口底部的“Open”按钮即可重新加载相同的任务。这个功能可以使得研究中最枯燥重复的部分自动化,这样你就可以把重点放在实际的科学研究上。
+
+考虑到单单是 ImageJ 主页就有超过 500 个插件和超过 300 种宏可供使用,简短起见,我只能在这篇短文中提出最基本的话题。幸运的是,还有很多专业领域的教程可供使用,项目主页上还有关于 ImageJ 核心的非常棒的文档。如果你觉得这个工具对研究有用,你研究的专业领域也会有很多信息指引你。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Joey Bernard 有物理学和计算机科学的相关背景。这对他在新不伦瑞克大学当计算研究顾问的日常工作大有裨益。他也教计算物理和并行程序规划。
+
+--------------------------------
+
+via: https://www.linuxjournal.com/content/image-processing-linux
+
+作者:[Joey Bernard][a]
+译者:[XYenChi](https://github.com/XYenChi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxjournal.com/users/joey-bernard
+[1]:https://www.linuxjournal.com/tag/science
+[2]:https://www.linuxjournal.com/tag/statistics
+[3]:https://www.linuxjournal.com/users/joey-bernard
+[4]:https://imagej.net/Fiji
diff --git a/published/20171018 How containers and microservices change security.md b/published/201711/20171018 How containers and microservices change security.md
similarity index 100%
rename from published/20171018 How containers and microservices change security.md
rename to published/201711/20171018 How containers and microservices change security.md
diff --git a/published/20171018 Learn how to program in Python by building a simple dice game.md b/published/201711/20171018 Learn how to program in Python by building a simple dice game.md
similarity index 100%
rename from published/20171018 Learn how to program in Python by building a simple dice game.md
rename to published/201711/20171018 Learn how to program in Python by building a simple dice game.md
diff --git a/published/20171018 Tips to Secure Your Network in the Wake of KRACK.md b/published/201711/20171018 Tips to Secure Your Network in the Wake of KRACK.md
similarity index 100%
rename from published/20171018 Tips to Secure Your Network in the Wake of KRACK.md
rename to published/201711/20171018 Tips to Secure Your Network in the Wake of KRACK.md
diff --git a/published/20171019 3 Simple Excellent Linux Network Monitors.md b/published/201711/20171019 3 Simple Excellent Linux Network Monitors.md
similarity index 100%
rename from published/20171019 3 Simple Excellent Linux Network Monitors.md
rename to published/201711/20171019 3 Simple Excellent Linux Network Monitors.md
diff --git a/published/20171019 How to manage Docker containers in Kubernetes with Java.md b/published/201711/20171019 How to manage Docker containers in Kubernetes with Java.md
similarity index 100%
rename from published/20171019 How to manage Docker containers in Kubernetes with Java.md
rename to published/201711/20171019 How to manage Docker containers in Kubernetes with Java.md
diff --git a/published/20171020 3 Tools to Help You Remember Linux Commands.md b/published/201711/20171020 3 Tools to Help You Remember Linux Commands.md
similarity index 100%
rename from published/20171020 3 Tools to Help You Remember Linux Commands.md
rename to published/201711/20171020 3 Tools to Help You Remember Linux Commands.md
diff --git a/published/20171020 Running Android on Top of a Linux Graphics Stack.md b/published/201711/20171020 Running Android on Top of a Linux Graphics Stack.md
similarity index 100%
rename from published/20171020 Running Android on Top of a Linux Graphics Stack.md
rename to published/201711/20171020 Running Android on Top of a Linux Graphics Stack.md
diff --git a/published/20171024 Top 5 Linux pain points in 2017.md b/published/201711/20171024 Top 5 Linux pain points in 2017.md
similarity index 100%
rename from published/20171024 Top 5 Linux pain points in 2017.md
rename to published/201711/20171024 Top 5 Linux pain points in 2017.md
diff --git a/published/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md b/published/201711/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md
similarity index 100%
rename from published/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md
rename to published/201711/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md
diff --git a/published/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md b/published/201711/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md
similarity index 100%
rename from published/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md
rename to published/201711/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md
diff --git a/published/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md b/published/201711/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md
similarity index 100%
rename from published/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md
rename to published/201711/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md
diff --git a/published/20171026 But I dont know what a container is .md b/published/201711/20171026 But I dont know what a container is .md
similarity index 100%
rename from published/20171026 But I dont know what a container is .md
rename to published/201711/20171026 But I dont know what a container is .md
diff --git a/published/20171026 Why is Kubernetes so popular.md b/published/201711/20171026 Why is Kubernetes so popular.md
similarity index 100%
rename from published/20171026 Why is Kubernetes so popular.md
rename to published/201711/20171026 Why is Kubernetes so popular.md
diff --git a/published/20171101 How to use cron in Linux.md b/published/201711/20171101 How to use cron in Linux.md
similarity index 100%
rename from published/20171101 How to use cron in Linux.md
rename to published/201711/20171101 How to use cron in Linux.md
diff --git a/published/20171101 We re switching to a DCO for source code contributions.md b/published/201711/20171101 We re switching to a DCO for source code contributions.md
similarity index 100%
rename from published/20171101 We re switching to a DCO for source code contributions.md
rename to published/201711/20171101 We re switching to a DCO for source code contributions.md
diff --git a/published/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md b/published/201711/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md
similarity index 100%
rename from published/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md
rename to published/201711/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md
diff --git a/published/20171106 Finding Files with mlocate.md b/published/201711/20171106 Finding Files with mlocate.md
similarity index 100%
rename from published/20171106 Finding Files with mlocate.md
rename to published/201711/20171106 Finding Files with mlocate.md
diff --git a/published/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md b/published/201711/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md
similarity index 100%
rename from published/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md
rename to published/201711/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md
diff --git a/published/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md b/published/201711/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md
similarity index 100%
rename from published/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md
rename to published/201711/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md
diff --git a/published/20171107 AWS adopts home-brewed KVM as new hypervisor.md b/published/201711/20171107 AWS adopts home-brewed KVM as new hypervisor.md
similarity index 100%
rename from published/20171107 AWS adopts home-brewed KVM as new hypervisor.md
rename to published/201711/20171107 AWS adopts home-brewed KVM as new hypervisor.md
diff --git a/published/20171107 How I created my first RPM package in Fedora.md b/published/201711/20171107 How I created my first RPM package in Fedora.md
similarity index 100%
rename from published/20171107 How I created my first RPM package in Fedora.md
rename to published/201711/20171107 How I created my first RPM package in Fedora.md
diff --git a/published/20171108 Build and test applications with Ansible Container.md b/published/201711/20171108 Build and test applications with Ansible Container.md
similarity index 100%
rename from published/20171108 Build and test applications with Ansible Container.md
rename to published/201711/20171108 Build and test applications with Ansible Container.md
diff --git a/published/201711/20171110 File better bugs with coredumpctl.md b/published/201711/20171110 File better bugs with coredumpctl.md
new file mode 100644
index 0000000000..e06604ef3f
--- /dev/null
+++ b/published/201711/20171110 File better bugs with coredumpctl.md
@@ -0,0 +1,98 @@
+用 coredumpctl 更好地记录 bug
+===========
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/11/coredump.png-945x400.jpg)
+
+一个不幸的事实是,所有的软件都有 bug,一些 bug 会导致系统崩溃。当崩溃发生的时候,它经常会在磁盘上留下一个被称为“核心转储”的数据文件。该文件包含系统崩溃时的相关数据,可能有助于确定发生崩溃的原因。通常开发者会要求提供“回溯”形式的数据,以显示导致崩溃的指令流。开发人员可以使用它来修复 bug,改进系统。如果系统发生了崩溃,以下是如何轻松生成回溯的方法。
+
+### 从使用 coredumpctl 开始
+
+大多数 Fedora 系统使用[自动错误报告工具(ABRT)][2]来自动捕获崩溃文件并记录 bug。但是,如果你禁用了此服务或删除了该软件包,则此方法可能会有所帮助。
+
+如果你遇到系统崩溃,请首先确保你运行的是最新的软件。更新通常包含对已经发现的、会导致严重错误和崩溃的 bug 的修复。当你更新后,请尝试重现导致错误的情况。
+
+如果崩溃仍然发生,或者你已经在运行最新的软件,那么可以使用有用的 `coredumpctl` 工具。此程序可帮助查找和处理崩溃。要查看系统上所有核心转储列表,请运行以下命令:
+
+```
+coredumpctl list
+```
+
+如果你看到比预期长的列表,请不要感到惊讶。有时系统组件会在后台默默地崩溃,并自行恢复。快速查找今天的转储的简单方法是使用 `--since` 选项:
+
+```
+coredumpctl list --since=today
+```
+
+“PID” 列包含用于标识转储的进程 ID。请记下这个数字,因为之后你还会用到它。或者,如果你不想记住它,可以使用下面的命令将它赋值给一个变量:
+
+```
+MYPID=<PID>
+```
+
+要查看关于核心转储的信息,请使用此命令(使用 `$MYPID` 变量或替换 PID 编号):
+
+```
+coredumpctl info $MYPID
+```
+
+### 安装 debuginfo 包
+
+调试符号负责在核心转储中的数据与原始源代码中的指令之间进行转换。这些符号数据可能相当大。与大多数用户在 Fedora 系统上运行的软件包不同,符号以 “debuginfo” 软件包的形式安装。要确定你必须安装哪些 debuginfo 包,请先运行以下命令:
+
+```
+coredumpctl gdb $MYPID
+```
+
+这可能会在屏幕上显示大量信息。最后一行可能会告诉你使用 `dnf` 安装更多的 debuginfo 软件包。[用 sudo ][3]运行该命令以安装:
+
+```
+sudo dnf debuginfo-install
+```
+
+然后再次尝试 `coredumpctl gdb $MYPID` 命令。**你可能需要重复执行此操作**,因为其他符号会在回溯中展开。
+
+### 捕获回溯
+
+在调试器中运行以下命令以记录信息:
+
+```
+set logging file mybacktrace.txt
+set logging on
+```
+
+你可能会发现关闭分页有帮助。对于长的回溯,这可以节省时间。
+
+```
+set pagination off
+```
+
+现在运行回溯:
+
+```
+thread apply all bt full
+```
+
+现在你可以输入 `quit` 来退出调试器。`mybacktrace.txt` 包含可附加到 bug 或问题的追踪信息。或者,如果你正在与某人实时合作,则可以将文本上传到 pastebin。无论哪种方式,你现在可以向开发人员提供更多的帮助来解决问题。
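+
+作为参考,上面在调试器里输入的命令合在一起就是下面这个样子(在 `coredumpctl gdb $MYPID` 打开的 gdb 会话中依次输入即可):
+
+```
+set logging file mybacktrace.txt
+set logging on
+set pagination off
+thread apply all bt full
+quit
+```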
+
+---------------------------------
+
+作者简介:
+
+Paul W. Frields
+
+Paul W. Frields 自 1997 年以来一直是 Linux 用户和爱好者,并于 2003 年在 Fedora 发布不久后加入 Fedora。他是 Fedora 项目委员会的创始成员之一,从事文档、网站发布、宣传、工具链开发和维护软件。他于 2008 年 2 月至 2010 年 7 月加入 Red Hat,担任 Fedora 项目负责人,现任红帽公司工程部经理。他目前和妻子和两个孩子住在弗吉尼亚州。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/file-better-bugs-coredumpctl/
+
+作者:[Paul W. Frields][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://fedoramagazine.org/author/pfrields/
+[1]:https://fedoramagazine.org/file-better-bugs-coredumpctl/
+[2]:https://github.com/abrt/abrt
+[3]:https://fedoramagazine.org/howto-use-sudo/
diff --git a/published/20171114 Linux totally dominates supercomputers.md b/published/201711/20171114 Linux totally dominates supercomputers.md
similarity index 100%
rename from published/20171114 Linux totally dominates supercomputers.md
rename to published/201711/20171114 Linux totally dominates supercomputers.md
diff --git a/published/201711/20171116 5 Coolest Linux Terminal Emulators.md b/published/201711/20171116 5 Coolest Linux Terminal Emulators.md
new file mode 100644
index 0000000000..da334bba40
--- /dev/null
+++ b/published/201711/20171116 5 Coolest Linux Terminal Emulators.md
@@ -0,0 +1,102 @@
+5 款最酷的 Linux 终端模拟器
+============================================================
+
+
+![Cool retro term](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner2.png)
+
+> Carla Schroder 正在看着那些她喜欢的终端模拟器, 包括展示在这儿的 Cool Retro Term。
+
+虽然我们可以继续使用老旧的 GNOME 终端、Konsole,以及好笑而孱弱的旧式 xterm,不过,让我们带着尝试某种新东西的心境,来看看 5 款酷炫并且实用的 Linux 终端模拟器。
+
+### Xiki
+
+首先我要推荐的第一个终端是 [Xiki][10]。 Xiki 是 Craig Muth 的智慧结晶,他是一个天才程序员,也是一个有趣的人(有趣在此处的意思是幽默,可能还有其它的意思)。 很久以前我在 [遇见 Xiki,Linux 和 Mac OS X 下革命性命令行 Shell][11] 一文中介绍过 Xiki。 Xiki 不仅仅是又一款终端模拟器;它也是一个扩展命令行用途、加快命令行速度的交互式环境。
+
+视频: https://youtu.be/bUR_eUVcABg
+
+Xiki 支持鼠标,并且在绝大多数命令行 Shell 上都支持。 它有大量的屏显帮助,而且可以使用鼠标和键盘快速导航。 它体现在速度上的一个简单例子就是增强了 `ls` 命令。 Xiki 可以快速穿过文件系统上的多层目录,而不用持续的重复输入 `ls` 或者 `cd`, 或者利用那些巧妙的正则表达式。
+
+Xiki 可以与许多文本编辑器相集成, 提供了一个永久的便签, 有一个快速搜索引擎, 同时像他们所说的,还有许许多多的功能。 Xiki 是如此的有特色、如此的不同, 所以学习和了解它的最快的方式可以看 [Craig 的有趣和实用的视频][12]。
+
+### Cool Retro Term
+
+我推荐 [Cool Retro Term][13](如题图所示),主要是因为它的外观,以及它的实用性。它将我们带回了阴极射线管显示器的时代,这不算很久以前,而我也没有怀旧的意思,我死也不会放弃我的 LCD 屏幕。它基于 [Konsole][14],因此有着 Konsole 的优秀功能。可以通过 Cool Retro Term 的配置文件菜单来改变它的外观。配置文件包括 Amber、Green、Pixelated、Apple 和 Transparent Green 等等,而且全都带有一个像真的一样的扫描线效果。并不是所有的配置文件都很实用,例如 Vintage 配置文件看起来就像一个闪烁着的老旧球面屏。
+
+Cool Retro Term 的 GitHub 仓库有着详细的安装指南,且 Ubuntu 用户有 [PPA][15]。
+
+### Sakura
+
+你要是想要一个优秀的轻量级、易配置的终端,可以尝试下 [Sakura][16](图 1)。 它依赖少,不像 GNOME 终端 和 Konsole,在 GNOME 和 KDE 中牵扯了很多组件。其大多数选项是可以通过右键菜单配置的,例如选项卡的标签、 颜色、大小、选项卡的默认数量、字体、铃声,以及光标类型。 你可以在你个人的配置文件 `~/.config/sakura/sakura.conf` 里面设置更多的选项,例如绑定快捷键。
+
+![sakura](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_9.png)
+
+*图 1: Sakura 是一个优秀的、轻量级的、可配置的终端。*
+
+命令行选项详见 `man sakura`。可以使用这些来从命令行启动 sakura,或者在你的图形启动器上使用它们。 例如,打开 4 个选项卡并设置窗口标题为 “MyWindowTitle”:
+
+```
+$ sakura -t MyWindowTitle -n 4
+```
+
+### Terminology
+
+[Terminology][17] 来自 Enlightenment 图形环境的郁葱可爱的世界,它能够被美化成任何你所想要的样子 (图 2)。 它有许多有用的功能:独立的拆分窗口、打开文件和 URL、文件图标、选项卡,林林总总。 它甚至能运行在没有图形界面的 Linux 控制台上。
+
+![Terminology](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-2_6.png)
+
+*图 2: Terminology 也能够运行在没有图形界面的 Linux 控制台上。*
+
+当你打开多个拆分窗口时,每个窗口都能设置不同的背景,并且背景文件可以是任意媒体文件:图像文件、视频或者音乐文件。它带有一堆便于清晰阅读的暗色主题和透明主题,它甚至有一个 Nyan 猫主题。它没有滚动条,因此需要使用组合键 `Shift+PageUp` 和 `Shift+PageDown` 进行上下导航。
+
+它有多个控件:一个右键单击菜单、上下文对话框,以及命令行选项。右键单击菜单里用的是世界上最小的字体,且 Miniview 可显示一个微观的文件树,但我没有找到可以让它们更易辨读的选项。当你打开多个标签时,可以点击小标签浏览器来打开一个可以上下滚动的选择器。任何东西都是可配置的;通过 `man terminology` 可以查看一系列的命令和选项,包括一批不错的快捷键。奇怪的是,帮助里面没有包括以下命令,这些是我偶然发现的:
+
+* tyalpha
+* tybg
+* tycat
+* tyls
+* typop
+* tyq
+
+使用 `tybg [filename]` 命令来设置背景,不带参数的 `tybg` 命令来移除背景。 运行 `typop [filename]` 来打开文件。 `tyls` 命令以图标视图列出文件。 加上 `-h` 选项运行这些命令可以了解它们是干什么的。 即使有可读性的怪癖,Terminology 依然是快速、漂亮和实用的。
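+
+下面把这几个命令放在一起做个直观的示意(其中的文件名只是随手举的例子,请换成你自己的文件):
+
+```
+$ tybg wallpaper.jpg    # 将 wallpaper.jpg 设为当前分屏的背景
+$ tybg                  # 不带参数运行则移除背景
+$ typop notes.txt       # 在 Terminology 中打开文件
+$ tyls                  # 以图标视图列出当前目录中的文件
+```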
+
+### Tilda
+
+已经有几个优秀的下拉式终端模拟器,包括 Guake 和 Yakuake。[Tilda][18](图 3)是其中最简单和轻量级的一个。打开 Tilda 后它会保持打开状态,你可以通过快捷键来显示和隐藏它。Tilda 有一个默认的快捷键,你也可以设置成自己喜欢的。它一直打开着,随时准备工作,但直到你需要它的时候才会出现。
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-3_3.png)
+
+*图 3: Tilda 是最简单和轻量级的一个终端模拟器。*
+
+Tilda 提供了一组不错的选项,包括默认的大小、位置、外观、按键绑定、搜索条、鼠标动作,以及标签条,这些都可以通过右键单击菜单来控制。
+
+_想学习更多关于 Linux 的知识,可以参加 Linux 基金会和 edX 的免费课程[“Linux 介绍”][9]。_
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/11/5-coolest-linux-terminal-emulators
+
+作者:[CARLA SCHRODER][a]
+译者:[cnobelw](https://github.com/cnobelw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/used-permission
+[3]:https://www.linux.com/licenses/category/used-permission
+[4]:https://www.linux.com/licenses/category/used-permission
+[5]:https://www.linux.com/files/images/fig-1png-9
+[6]:https://www.linux.com/files/images/fig-2png-6
+[7]:https://www.linux.com/files/images/fig-3png-3
+[8]:https://www.linux.com/files/images/banner2png
+[9]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[10]:http://xiki.org/
+[11]:https://www.linux.com/learn/meet-xiki-revolutionary-command-shell-linux-and-mac-os-x
+[12]:http://xiki.org/screencasts/
+[13]:https://github.com/Swordfish90/cool-retro-term
+[14]:https://www.linux.com/learn/expert-tips-and-tricks-kate-and-konsole
+[15]:https://launchpad.net/~bugs-launchpad-net-falkensweb/+archive/ubuntu/cool-retro-term
+[16]:https://bugs.launchpad.net/sakura
+[17]:https://www.enlightenment.org/about-terminology
+[18]:https://github.com/lanoxx/tilda
diff --git a/published/201711/20171117 How to Easily Remember Linux Commands.md b/published/201711/20171117 How to Easily Remember Linux Commands.md
new file mode 100644
index 0000000000..c408beb6e9
--- /dev/null
+++ b/published/201711/20171117 How to Easily Remember Linux Commands.md
@@ -0,0 +1,92 @@
+如何轻松记住 Linux 命令
+=================
+
+![](https://www.maketecheasier.com/assets/uploads/2017/10/rc-feat.jpg)
+
+Linux 新手往往对命令行心存畏惧。部分原因是因为需要记忆大量的命令,毕竟掌握命令是高效使用命令行的前提。
+
+不幸的是,学习这些命令并无捷径,然而在你开始学习命令之初,有些工具还是可以帮到你的。
+
+### history
+
+![Linux Bash History 命令](https://www.maketecheasier.com/assets/uploads/2017/10/rc-bash-history.jpg)
+
+首先要介绍的是命令行工具 `history`,它能帮你记住那些你曾经用过的命令。包括应用最广泛的 Bash 在内的大多数 [Linux shell][1],都会创建一个历史文件来保存那些你输入过的命令。如果你用的是 Bash,这个历史文件就是 `/home/<用户名>/.bash_history`。
+
+这个历史文件是纯文本格式的,你可以用任意的文本编辑器打开来浏览和搜索。
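+
+例如,想找回之前用过的某条命令,可以直接在历史记录里搜索(下面的关键字只是示例):
+
+```shell
+history | grep ssh             # 在当前 shell 的历史记录中查找包含 ssh 的命令
+grep 'tar ' ~/.bash_history    # 也可以直接搜索 Bash 的历史文件
+```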
+
+### apropos
+
+确实存在一个可以帮你找到其他命令的命令。这个命令就是 `apropos`,它能帮你找出合适的命令来完成你的搜索。比如,假设你需要知道哪个命令可以列出目录的内容,你可以运行下面命令:
+
+```shell
+apropos "list directory"
+```
+
+![Linux Apropos](https://www.maketecheasier.com/assets/uploads/2017/10/rc-apropos.jpg)
+
+这就搜索出结果了,非常直接。给 “directory” 加上复数后再试一下。
+
+```shell
+apropos "list directories"
+```
+
+这次没用了。`apropos` 所作的其实就是搜索一系列命令的描述。描述不匹配的命令不会纳入结果中。
+
+还有其他的用法。通过 `-a` 标志,你可以以更灵活的方式来增加搜索关键字。试试这条命令:
+
+```shell
+apropos "match pattern"
+```
+
+![Linux Apropos -a Flag](https://www.maketecheasier.com/assets/uploads/2017/10/rc-apropos-a.jpg)
+
+你会觉得应该会有一些匹配的内容出现,比如 [grep][2] 对吗? 然而,实际上并没有匹配出任何结果。再说一次,apropos 只会根据字面内容进行搜索。
+
+现在让我们试着用 `-a` 标志来把单词分割开来。(LCTT 译注:该选项的意思是“and”,即多个关键字都存在,但是不需要正好是连在一起的字符串。)
+
+```shell
+apropos "match" -a "pattern"
+```
+
+这一下,你可以看到很多期望的结果了。
+
+`apropos` 是一个很棒的工具,不过你需要留意它的缺陷。
+
+### ZSH
+
+![Linux ZSH Autocomplete](https://www.maketecheasier.com/assets/uploads/2017/10/rc-zsh.jpg)
+
+ZSH 其实并不是用于记忆命令的工具,它是一种 shell。你可以用 [ZSH][3] 来替代 Bash 作为你的命令行 shell。ZSH 包含了自动纠错机制,能在你输入命令的时候给予提示。开启该功能后,它会提示你相近的选择。在 ZSH 中你可以像往常一样使用命令行,同时你还能享受到极度安全的网络以及其他一些非常好用的特性。充分利用 ZSH 的最简单方法就是使用 [Oh-My-ZSH][4]。
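+
+如果想亲自试试 ZSH,可以先安装它并把它设为默认 shell。下面是一个简单的示意(以 Debian/Ubuntu 的包管理器为例,其他发行版请换用对应的安装命令):
+
+```shell
+sudo apt install zsh       # 安装 zsh(Debian/Ubuntu;其他发行版请用各自的包管理器)
+chsh -s "$(which zsh)"     # 将 zsh 设为当前用户的默认 shell,重新登录后生效
+```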
+
+### 速记表
+
+最后,也可能是最简单的方法,就是使用[速记表][5]。
+
+有很多在线的速记表,比如[这个][6] 可以帮助你快速查询命令。
+
+![linux-commandline-cheatsheet](https://www.maketecheasier.com/assets/uploads/2013/10/linux-commandline-cheatsheet.gif)
+
+为了快速查询,你可以寻找图片格式的速记表,然后将它设置为你的桌面墙纸。
+
+这并不是记忆命令的最好方法,但是这么做可以帮你节省在线搜索遗忘命令的时间。
+
+在学习时依赖这些方法,最终你会发现你会越来越少地使用这些工具。没有人能够记住所有的事情,因此偶尔遗忘掉某些东西或者遇到某些没有见过的东西也很正常。这也是这些工具以及因特网存在的意义。
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/remember-linux-commands/
+
+作者:[Nick Congleton][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.maketecheasier.com/author/nickcongleton/
+[1]: https://www.maketecheasier.com/alternative-linux-shells/
+[2]: https://www.maketecheasier.com/what-is-grep-and-uses/
+[3]: https://www.maketecheasier.com/understanding-the-different-shell-in-linux-zsh-shell/
+[4]: https://github.com/robbyrussell/oh-my-zsh
+[5]: https://www.maketecheasier.com/premium/cheatsheet/linux-command-line/
+[6]: https://www.cheatography.com/davechild/cheat-sheets/linux-command-line/
diff --git a/published/20171118 Getting started with OpenFaaS on minikube.md b/published/201711/20171118 Getting started with OpenFaaS on minikube.md
similarity index 100%
rename from published/20171118 Getting started with OpenFaaS on minikube.md
rename to published/201711/20171118 Getting started with OpenFaaS on minikube.md
diff --git a/published/201711/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md b/published/201711/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md
new file mode 100644
index 0000000000..c44add76f4
--- /dev/null
+++ b/published/201711/20171128 tmate – Instantly Share Your Terminal Session To Anyone In Seconds.md
@@ -0,0 +1,157 @@
+tmate:秒级分享你的终端会话
+=================
+
+不久前,我们写过一篇关于 [teleconsole](https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/) 的介绍,该工具可用于快速分享终端给任何人(任何你信任的人)。今天我们要聊一聊另一款类似的应用,名叫 `tmate`。
+
+`tmate` 有什么用?它可以让你在需要帮助时向你的朋友们求助。
+
+### 什么是 tmate?
+
+[tmate](https://tmate.io/) 的意思是 `teammates`,它是 tmux 的一个分支,并且使用相同的配置信息(例如快捷键配置,配色方案等)。它是一个终端多路复用器,同时具有即时分享终端的能力。它允许在单个屏幕中创建并操控多个终端,同时这些终端还能与其他同事分享。
+
+你可以分离会话,让作业在后台运行,然后在想要查看状态时重新连接会话。`tmate` 提供了一个即时配对的方案,让你可以与一个或多个队友共享一个终端。
+
+在屏幕的底部有一个状态栏,显示了当前会话的一些共享信息,例如 ssh 连接命令。
+
+### tmate 是怎么工作的?
+
+- 运行 `tmate` 时,会通过 `libssh` 在后台创建一个连接到 tmate.io (由 tmate 开发者维护的后台服务器)的 ssh 连接。
+- tmate.io 服务器的 ssh 密钥通过 DH 交换进行校验。
+- 客户端通过本地 ssh 密钥进行认证。
+- 连接创建后,本地 tmux 服务器会生成一个 150 位(不可猜测的随机字符)会话令牌。
+- 队友能通过用户提供的 SSH 会话 ID 连接到 tmate.io。
+
+### 使用 tmate 的必备条件
+
+由于 `tmate.io` 服务器需要通过本地 ssh 密钥来认证客户机,因此其中一个必备条件就是生成 SSH 密钥。记住,每个系统都要有自己的 SSH 密钥。
+
+```shell
+$ ssh-keygen -t rsa
+Generating public/private rsa key pair.
+Enter file in which to save the key (/home/magi/.ssh/id_rsa):
+Enter passphrase (empty for no passphrase):
+Enter same passphrase again:
+Your identification has been saved in /home/magi/.ssh/id_rsa.
+Your public key has been saved in /home/magi/.ssh/id_rsa.pub.
+The key fingerprint is:
+SHA256:3ima5FuwKbWyyyNrlR/DeBucoyRfdOtlUmb5D214NC8 magi@magi-VirtualBox
+The key's randomart image is:
++---[RSA 2048]----+
+| |
+| |
+| . |
+| . . = o |
+| *ooS= . + o |
+| . =.@*o.o.+ E .|
+| =o==B++o = . |
+| o.+*o+.. . |
+| ..o+o=. |
++----[SHA256]-----+
+```
+
+### 如何安装 tmate
+
+`tmate` 已经包含在某些发行版的官方仓库中,可以通过包管理器来安装。
+
+对于 Debian/Ubuntu,可以使用 [APT-GET 命令](https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/)或者 [APT 命令](https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/) 来安装。
+
+```shell
+$ sudo apt-get install software-properties-common
+$ sudo add-apt-repository ppa:tmate.io/archive
+$ sudo apt-get update
+$ sudo apt-get install tmate
+```
+
+你也可以从官方仓库中安装 tmate。
+
+```shell
+$ sudo apt-get install tmate
+```
+
+对于 Fedora,使用 [DNF 命令](https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/) 来安装。
+
+```shell
+$ sudo dnf install tmate
+```
+
+对于基于 Arch Linux 的系统,使用 [Yaourt 命令](https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/)或 [Packer 命令](https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/) 来从 AUR 仓库中安装。
+
+```shell
+$ yaourt -S tmate
+```
+或
+
+```shell
+$ packer -S tmate
+```
+
+对于 openSUSE,使用 [Zypper 命令](https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/) 来安装。
+
+```shell
+$ sudo zypper in tmate
+```
+
+### 如何使用 tmate
+
+成功安装后,打开终端然后输入下面命令,就会打开一个新的会话,在屏幕底部,你能看到 SSH 会话的 ID。
+
+```shell
+$ tmate
+```
+
+![](https://www.2daygeek.com/wp-content/uploads/2017/11/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds-1.png)
+
+要注意的是,SSH 会话 ID 会在几秒后消失,不过不要紧,你可以通过下面命令获取到这些详细信息。
+
+```shell
+$ tmate show-messages
+```
+
+`tmate` 的 `show-messages` 命令会显示 tmate 的日志信息,其中包含了该 ssh 连接内容。
+
+![](https://www.2daygeek.com/wp-content/uploads/2017/11/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds-2.png)
+
+现在,把你的 SSH 会话 ID 分享给你的朋友或同事,就可以让他们观看你的终端会话了。除了 SSH 会话 ID 以外,你也可以分享 web URL。
+
+另外你还可以选择分享的是只读会话还是可读写会话。
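+
+如果想在脚本里直接取出这两个连接串(可读写与只读),较新版本的 tmate 提供了相应的格式变量;下面是一个示意,具体变量名请以你所用版本的文档为准:
+
+```shell
+$ tmate display -p '#{tmate_ssh}'       # 可读写会话的 ssh 连接命令
+$ tmate display -p '#{tmate_ssh_ro}'    # 只读会话的 ssh 连接命令
+```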
+
+### 如何通过 SSH 连接会话
+
+只需要在终端上运行你从朋友那里得到的 SSH 会话命令就行了。类似下面这样。
+
+```shell
+$ ssh session: ssh 3KuRj95sEZRHkpPtc2y6jcokP@sg2.tmate.io
+```
+
+![](https://www.2daygeek.com/wp-content/uploads/2017/11/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds-4.png)
+
+### 如何通过 Web URL 连接会话
+
+打开浏览器然后访问朋友给你的 URL 就行了。像下面这样。
+
+![](https://www.2daygeek.com/wp-content/uploads/2017/11/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds-3.png)
+
+
+只需要输入 `exit` 就能退出会话了。
+
+```
+[Source System Output]
+[exited]
+
+[Remote System Output]
+[server exited]
+Connection to sg2.tmate.io closed by remote host.
+Connection to sg2.tmate.io closed.
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/
+
+作者:[Magesh Maruthamuthu][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
diff --git a/published/20171124 How to Install Android File Transfer for Linux.md b/published/20171124 How to Install Android File Transfer for Linux.md
new file mode 100644
index 0000000000..3cdb372c93
--- /dev/null
+++ b/published/20171124 How to Install Android File Transfer for Linux.md
@@ -0,0 +1,75 @@
+如何在 Linux 下安装安卓文件传输助手
+===============
+
+如果你尝试在 Ubuntu 下连接你的安卓手机,你也许可以试试 Linux 下的安卓文件传输助手。
+
+本质上来说,这个应用是谷歌 macOS 版本的一个克隆。它是用 Qt 编写的,用户界面非常简洁,使得你能轻松在 Ubuntu 和安卓手机之间传输文件和文件夹。
+
+现在,可能有人想知道:这个应用能做哪些 Nautilus(Ubuntu 默认的文件资源管理器)做不到的事?答案是:没有。
+
+当我将我的 Nexus 5X(记得选择 [媒体传输协议 MTP][7] 选项)连接在 Ubuntu 上时,在 [GVfs][8](LCTT 译注: GNOME 桌面下的虚拟文件系统)的帮助下,我可以打开、浏览和管理我的手机,就像它是一个普通的 U 盘一样。
+
+[![Nautilus MTP integration with a Nexus 5X](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg)][9]
+
+但是*一些*用户在使用默认的文件管理器时,在 MTP 的某些功能上会出现问题:比如文件夹没有正确加载,创建新文件夹后此文件夹不存在,或者无法在媒体播放器中使用自己的手机。
+
+这就是要为 Linux 系统用户设计一个安卓文件传输助手应用的原因,将这个应用当做将 MTP 设备安装在 Linux 下的另一种选择。如果你使用 Linux 下的默认应用时一切正常,你也许并不需要尝试使用它 (除非你真的很想尝试新鲜事物)。
+
+
+![Android File Transfer Linux App](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/android-file-transfer-for-linux-750x662.jpg)
+
+该 app 特点:
+
+* 简洁直观的用户界面
+* 支持文件拖放功能(从 Linux 系统到手机)
+* 支持批量下载(从手机到 Linux 系统)
+* 显示传输进程对话框
+* FUSE 模块支持
+* 没有文件大小限制
+* 可选命令行工具
+
+### Ubuntu 下安装安卓手机文件助手的步骤
+
+以上就是对这个应用的介绍,下面是如何安装它的具体步骤。
+
+有一个 [PPA][10](个人软件包集)为 Ubuntu 14.04 LTS、16.04 LTS 和 Ubuntu 17.10 提供了可用的软件包。
+
+为了将这一 PPA 加入你的软件资源列表中,执行这条命令:
+
+```
+sudo add-apt-repository ppa:samoilov-lex/aftl-stable
+```
+
+接着,为了在 Ubuntu 下安装 Linux 版本的安卓文件传输助手,执行:
+
+```
+sudo apt-get update && sudo apt install android-file-transfer
+```
+
+这样就行了。
+
+你会在你的应用列表中发现这一应用的启动图标。
+
+在你启动这一应用之前,要确保没有其他应用(比如 Nautilus)已经挂载了你的手机。如果其它应用正在使用你的手机,就会显示“无法找到 MTP 设备”。要解决这一问题,将你的手机从 Nautilus(或者任何正在使用你的手机的应用)上移除,然后再重新启动安卓文件传输助手。
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux
+
+作者:[JOEY SNEDDON][a]
+译者:[wenwensnow](https://github.com/wenwensnow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:https://plus.google.com/117485690627814051450/?rel=author
+[2]:http://www.omgubuntu.co.uk/category/app
+[3]:http://www.omgubuntu.co.uk/category/download
+[4]:https://github.com/whoozle/android-file-transfer-linux
+[5]:http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux
+[6]:http://android.com/filetransfer?linkid=14270770
+[7]:https://en.wikipedia.org/wiki/Media_Transfer_Protocol
+[8]:https://en.wikipedia.org/wiki/GVfs
+[9]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg
+[10]:https://launchpad.net/~samoilov-lex/+archive/ubuntu/aftl-stable
diff --git a/published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md b/published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md
new file mode 100644
index 0000000000..9b6a4f242c
--- /dev/null
+++ b/published/20171124 Open Source Cloud Skills and Certification Are Key for SysAdmins.md
@@ -0,0 +1,72 @@
+开源云技能认证:系统管理员的核心竞争力
+=========
+
+![os jobs](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-house-sysadmin.jpg?itok=i5FHc3lu "os jobs")
+
+> [2017年开源工作报告][1](以下简称“报告”)显示,具有开源云技术认证的系统管理员往往能获得更高的薪酬。
+
+
+报告调查的受访者中,53% 认为系统管理员是雇主们最期望填补的职位空缺之一,因此,技术娴熟的系统管理员更受青睐,能收获高薪职位;但这一职位并没有想象中那么容易填补。
+
+系统管理员主要负责服务器和其他电脑操作系统的安装、服务支持和维护,及时处理服务中断和预防其他问题的出现。
+
+总的来说,今年的报告指出开源领域人才需求最大的有开源云(47%),应用开发(44%),大数据(43%),开发运营和安全(42%)。
+
+此外,报告对人事经理的调查显示,58% 的人期望招揽更多的开源人才,67% 的人认为开源人才的需求增长会比业内其他领域更甚。有些单位将开源人才视为招聘的最优选择,它们招聘的开源人才较上年增长了 2 个百分点。
+
+同时,89% 的人事经理认为很难找到颇具天赋的开源人才。
+
+### 为什么要获取认证
+
+报告显示,对系统管理员的需求刺激着人事经理为 53% 的组织/机构提供正规的培训和专业技术认证,而这一比例去年为 47%。
+
+对系统管理方面感兴趣的 IT 人才考虑获取 Linux 认证已成为行业规律。随便查看几个知名的招聘网站,你就能发现:[CompTIA Linux+][3] 认证是入门级 Linux 系统管理员的最高认证;如果想胜任高级别的系统管理员职位,获取[红帽认证工程师(RHCE)][4]和[红帽认证系统管理员(RHCSA)][5]则是不可或缺的。
+
+戴士(Dice)[2017 技术行业薪资调查][6]显示,2016 年系统管理员的薪水为 79,538 美元,较上年下降了 0.8%;系统架构师的薪水为 125,946 美元,同比下降 4.7%。尽管如此,该调查发现“高水平专业人才仍最受欢迎,特别是那些精通支持产业转型发展所需技术的人才”。
+
+在开源技术方面,HBase(一个开源的分布式数据库)技术人才的薪水在戴士 2017 技术行业薪资调查中排第一。在计算机网络和数据库领域,掌握 OpenVMS 操作系统技术也能获得高薪。
+
+### 成为出色的系统管理员
+
+出色的系统管理员须在问题出现时马上处理,这意味着你必须时刻准备应对可能出现的状况。这个职位追求“零责备的、精益的、在流程或技术上迭代改进的”思维方式和善于自我完善的人格。成为一个系统管理员意味着“你必将与开源软件如 Linux、BSD 甚至开源 Solaris 等结下不解之缘”,Paul English ^译注1 在 [opensource.com][7] 上发文指出。
+
+Paul English 认为,现在的系统管理员较以前而言,要更多地与软件打交道,而且要能够编写脚本来协助系统管理。
+
+>译注1:Paul English,计算机科学学士,UNIX/Linux 系统管理员,PreOS Security Inc. 公司 CEO,2015-2017 年于为推动系统管理员发展实践的非盈利组织——专业系统管理员联盟担任董事会成员。
+
+### 展望 2018
+
+[Robert Half 2018 年技术人才薪资导览][8]预测 2018 年北美地区许多单位将聘用大量系统管理方面的专业人才,同时个人软实力和领导力水平作为优秀人才的考量因素,越来越受到重视。
+
+该报告指出:“良好的聆听能力和批判性思维能力对于理解和解决用户的问题和担忧至关重要,也是 IT 从业者必须具备的重要技能,特别是从事服务台和桌面支持工作相关的技术人员。”
+
+这与 [Linux 基金会][9]^译注2 提出的不同阶段的系统管理员必备技能相一致,都强调了强大的分析能力和快速处理问题的能力。
+
+>译注2:Linux 基金会,成立于 2000 年,致力于围绕开源项目构建可持续发展的生态系统,以加速开源项目的技术开发和商业应用;它是世界上最大的开源非盈利组织,在推广、保护和推进 Linux 发展,协同开发,维护“历史上最大的共享资源”上功勋卓越。
+
+如果想逐渐爬上系统管理员职位的金字塔上层,还应该对系统配置的结构化方法充满兴趣;且拥有解决系统安全问题的经验;用户身份验证管理的经验;与非技术人员进行非技术交流的能力;以及优化系统以满足最新的安全需求的能力。
+
+- [下载][10]2017年开源工作报告全文,以获取更多信息。
+
+
+-----------------------
+
+via: https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins
+
+作者:[linux.com][a]
+译者:[wangy325](https://github.com/wangy325)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins
+[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/
+[2]:https://www.linux.com/licenses/category/creative-commons-zero
+[3]:https://certification.comptia.org/certifications/linux?tracking=getCertified/certifications/linux.aspx
+[4]:https://www.redhat.com/en/services/certification/rhce
+[5]:https://www.redhat.com/en/services/certification/rhcsa
+[6]:http://marketing.dice.com/pdf/Dice_TechSalarySurvey_2017.pdf?aliId=105832232
+[7]:https://opensource.com/article/17/7/truth-about-sysadmins
+[8]:https://www.roberthalf.com/salary-guide/technology
+[9]:https://www.linux.com/learn/10-essential-skills-novice-junior-and-senior-sysadmins%20%20
+[10]:http://bit.ly/2017OSSjobsreport
\ No newline at end of file
diff --git a/published/20171130 Search DuckDuckGo from the Command Line.md b/published/20171130 Search DuckDuckGo from the Command Line.md
new file mode 100644
index 0000000000..48b6fdd830
--- /dev/null
+++ b/published/20171130 Search DuckDuckGo from the Command Line.md
@@ -0,0 +1,97 @@
+在命令行中使用 DuckDuckGo 搜索
+=============
+
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/duckduckgo.png)
+
+此前我们介绍了[如何在命令行中使用 Google 搜索][3]。许多读者反馈说他们平时使用 [Duck Duck Go][4],这是一个功能强大而且保密性很强的搜索引擎。
+
+正巧,最近出现了一款能够从命令行搜索 DuckDuckGo 的工具。它叫做 ddgr(我把它读作 “dodger”),非常好用。
+
+像 [Googler][7] 一样,ddgr 是一个完全开源而且非官方的工具。没错,它并不属于 DuckDuckGo。所以,如果你发现它返回的结果有些奇怪,请先询问这个工具的开发者,而不是搜索引擎的开发者。
+
+### DuckDuckGo 命令行应用
+
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/ddgr-gif.gif)
+
+[DuckDuckGo Bangs(DuckDuckGo 快捷搜索)][8] 可以帮助你轻易地在 DuckDuckGo 上找到想要的信息(甚至 _本网站 omgubuntu_ 都有快捷搜索)。ddgr 非常忠实地呈现了这个功能。
+
+和网页版不同的是,你可以更改每页返回多少结果。这比起每次查询都要看三十多条结果要方便一些。默认界面经过了精心设计,在不影响可读性的情况下尽量减少了占用空间。
+
+`ddgr` 有许多功能和亮点,包括:
+
+* 更改搜索结果数
+* 支持 Bash 自动补全
+* 使用 DuckDuckGo Bangs
+* 在浏览器中打开链接
+* “手气不错”选项
+* 基于时间、地区、文件类型等的筛选功能
+* 极少的依赖项
+
+你可以从 Github 的项目页面上下载支持各种系统的 `ddgr`:
+
+- [从 Github 下载 “ddgr”][9]
+
+另外,在 Ubuntu 16.04 LTS 或更新版本中,你可以使用 PPA 安装 ddgr。这个仓库由 ddgr 的开发者维护。如果你想要保持在最新版本的话,推荐使用这种方式安装。
+
+需要提醒的是,在本文创作时,这个 PPA 中的 ddgr _并不是_ 最新版本,而是一个稍旧的版本(缺少 `--num` 选项)。
+
+使用以下命令添加 PPA:
+
+```
+sudo add-apt-repository ppa:twodopeshaggy/jarun
+sudo apt-get update
+```
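+
+添加 PPA 并更新之后,再安装软件包本身。这里假设包名就是 `ddgr`(请以 PPA 页面上的实际包名为准):
+
+```
+sudo apt-get install ddgr
+```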
+
+### 如何使用 ddgr 在命令行中搜索 DuckDuckGo
+
+安装完毕后,你只需打开你的终端模拟器,并运行:
+
+```
+ddgr
+```
+
+然后输入查询内容:
+
+```
+search-term
+```
+
+你可以限制搜索结果数:
+
+```
+ddgr --num 5 search-term
+```
+
+或者自动在浏览器中打开第一条搜索结果:
+
+
+```
+ddgr -j search-term
+```
+
+你可以使用参数和选项来提高搜索精确度。使用以下命令来查看所有的参数:
+
+```
+ddgr -h
+```
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app
+
+作者:[JOEY SNEDDON][a]
+译者:[yixunx](https://github.com/yixunx)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:https://plus.google.com/117485690627814051450/?rel=author
+[2]:http://www.omgubuntu.co.uk/category/download
+[3]:http://www.omgubuntu.co.uk/2017/08/search-google-from-the-command-line
+[4]:http://duckduckgo.com/
+[5]:http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app
+[6]:https://github.com/jarun/ddgr
+[7]:https://github.com/jarun/googler
+[8]:https://duckduckgo.com/bang
+[9]:https://github.com/jarun/ddgr/releases/tag/v1.1
diff --git a/sources/talk/20170119 Be a force for good in your community.md b/sources/talk/20170119 Be a force for good in your community.md
deleted file mode 100644
index 22c43d8470..0000000000
--- a/sources/talk/20170119 Be a force for good in your community.md
+++ /dev/null
@@ -1,130 +0,0 @@
-Translating by chao-zhi
-
-Be a force for good in your community
-============================================================
-
->Find out how to give the gift of an out, learn about the power of positive intent, and more.
-
- ![Be a force for good in your community](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/people_remote_teams_world.png?itok=wI-GW8zX "Be a force for good in your community")
-
->Image by : opensource.com
-
-Passionate debate is among the hallmark traits of open source communities and open organizations. On our best days, these debates are energetic and constructive. They are heated, yet moderated with humor and goodwill. All parties remain focused on facts, on the shared purpose of collaborative problem-solving, and driving continuous improvement. And for many of us, they're just plain fun.
-
-On our worst days, these debates devolve into rehashing the same old arguments on the same old topics. Or we turn on one another, delivering insults—passive-aggressive or outright nasty, depending on our style—and eroding the passion, trust, and productivity of our communities.
-
-We've all been there, watching and feeling helpless, as a community conversation begins to turn toxic. Yet, as [DeLisa Alexander recently shared][1], there are so many ways that each and every one of us can be a force for good in our communities.
-
-In the first article of this "open culture" series, I will share a few strategies for how you can intervene, in that crucial moment, and steer everyone to a more positive and productive place.
-
-### Don't call people out. Call them up.
-
-Recently, I had lunch with my friend and colleague, [Mark Rumbles][2]. Over the years, we've collaborated on a number of projects that support open culture and leadership at Red Hat. On this day, Mark asked me how I was holding up, as he saw I'd recently intervened in a mailing list conversation when I saw the debate was getting ugly.
-
-Fortunately, the dust had long since settled, and in fact I'd almost forgotten about the conversation. Nevertheless, it led us to talk about the challenges of open and frank debate in a community that has thousands of members.
-
->One of the biggest ways we can be a force for good in our communities is to respond to conflict in a way that compels everyone to elevate their behavior, rather than escalate it.
-
-Mark said something that struck me as rather insightful. He said, "You know, as a community, we are really good at calling each other out. But what I'd like to see us do more of is calling each other _up_."
-
-Mark is absolutely right. One of the biggest ways we can be a force for good in our communities is to respond to conflict in a way that compels everyone to elevate their behavior, rather than escalate it.
-
-### Assume positive intent
-
-We can start by making a simple assumption when we observe poor behavior in a heated conversation: It's entirely possible that there are positive intentions somewhere in the mix.
-
-This is admittedly not an easy thing to do. When I see signs that a debate is turning nasty, I pause and ask myself what Steven Covey calls The Humanizing Question:
-
-"Why would a reasonable, rational, and decent person do something like this?"
-
-Now, if this is one of your "usual suspects"—a community member with a propensity toward negative behavior--perhaps your first thought is, "Um, what if this person _isn't_ reasonable, rational, or decent?"
-
-Stay with me, now. I'm not suggesting that you engage in some touchy-feely form of self-delusion. It's called The Humanizing Question not only because asking it humanizes the other person, but also because it humanizes _you_.
-
-And that, in turn, helps you respond or intervene from the most productive possible place.
-
-### Seek to understand the reasons for community dissent
-
-When I ask myself why a reasonable, rational, and decent person might do something like this, time and again, it comes down to the same few reasons:
-
-* They don't feel heard.
-* They don't feel respected.
-* They don't feel understood.
-
-One easy positive intention we can apply to almost any poor behavior, then, is that the person wants to be heard, respected, or understood. That's pretty reasonable, I suppose.
-
-By standing in this more objective and compassionate place, we can see that their behavior is _almost certainly _**_not_**_ going to help them get what they want, _and that the community will suffer as a result . . . without our help.
-
-For me, that inspires a desire to help everyone get "unstuck" from this ugly place we're in.
-
-Before I intervene, though, I ask myself a follow-up question: _What other positive intentions might be driving this behavior?_
-
-Examples that readily jump to mind include:
-
-* They are worried that we're missing something important, or we're making a mistake, and no one else seems to see it.
-* They want to feel valued for their contributions.
-* They are burned out, because of overworking in the community or things happening in their personal life.
-* They are tired of something being broken and frustrated that no one else seems to see the damage or inconvenience that creates.
-* ...and so on and so forth.
-
-With that, I have a rich supply of positive intent that I can ascribe to their behavior. I'm ready to reach out and offer them some help, in the form of an out.
-
-### Give the gift of an out
-
-What is an out? Think of it as an escape hatch. It's a way to exit the conversation, or abandon the poor behavior and resume behaving like a decent person, without losing face. It's calling someone up, rather than calling them out.
-
-You've probably experienced this, as some point in your life, when _you_ were behaving poorly in a conversation, ranting and hollering and generally raising a fuss about something or another, and someone graciously offered _you_ a way out. Perhaps they chose not to "take the bait" by responding to your unkind choice of words, and instead, said something that demonstrated they believed you were a reasonable, rational, and decent human being with positive intentions, such as:
-
-> _So, uh, what I'm hearing is that you're really worried about this, and you're frustrated because it seems like no one is listening. Or maybe you're concerned that we're missing the significance of it. Is that about right?_
-
-And here's the thing: Even if that wasn't entirely true (perhaps you had less-than-noble intentions), in that moment, you probably grabbed ahold of that life preserver they handed you, and gladly accepted the opportunity to reframe your poor behavior. You almost certainly pivoted and moved to a more productive place, likely without even recognizing it.
-
-Perhaps you said something like, "Well, it's not that exactly, but I just worry that we're headed down the wrong path here, and I get what you're saying that as community, we can't solve every problem at the same time, but if we don't solve this one soon, bad things are going to happen…"
-
-In the end, the conversation almost certainly began to move to a more productive place, or you all agreed to disagree.
-
-We all have the opportunity to offer an upset person a safe way out of that destructive place they're operating from. Here's how.
-
-### Bad behavior or bad actor?
-
-If the person is particularly agitated, they may not hear or accept the first out you hand them. That's okay. Most likely, their lizard brain--that prehistoric amygdala that was once critical for human survival—has taken over, and they need a few more moments to recognize you're not a threat. Just keep gently but firmly treating them as if they _were_ a rational, reasonable, decent human being, and watch what happens.
-
-In my experience, these community interventions end in one of three ways:
-
-Most often, the person actually _is_ a reasonable person, and soon enough, they gratefully and graciously accept the out. In the process, everyone breaks out of the black vs. white, "win or lose" mindset. People begin to think up creative alternatives and "win-win" outcomes that benefit everyone.
-
->Why would a reasonable, rational, and decent person do something like this?
-
-Occasionally, the person is not particularly reasonable, rational, or decent by nature, but when treated with such consistent, tireless, patient generosity and kindness (by you), they are shamed into retreating from the conversation. This sounds like, "Well, I think I've said all I have to say. Thanks for hearing me out." Or, for less enlightened types, "Well, I'm tired of this conversation. Let's drop it." (Yes, please. Thank you.)
-
-Less often, the person is what's known as a _bad actor_, or in community management circles, a pot-stirrer. These folks do exist, and they thrive on drama. Guess what? By consistently engaging in a kind, generous, community-calming way, and entirely ignoring all attempts to escalate the situation, you effectively shift the conversation into an area that holds little interest for them. They have no choice but to abandon it. Winners all around.
-
-That's the power of assuming positive intent. By responding to angry and hostile words with grace and dignity, you can diffuse a flamewar, untangle and solve tricky problems, and quite possibly make a new friend or two in the process.
-
-Am I successful every time I apply this principle? Heck, no. But I never regret the choice to assume positive intent. And I can vividly recall a few unfortunate occasions when I assumed negative intent and responded in a way that further contributed to the problem.
-
-Now it's your turn. I'd love to hear about some strategies and principles you apply, to be a force for good when conversations get heated in your community. Share your thoughts in the comments below.
-
-Next time, we'll explore more ways to be a force for good in your community, and I'll share some tips for handling "Mr. Grumpy."
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot-square_0.jpg?itok=FS97b9YD)
-
-Rebecca Fernandez is a Principal Employment Branding + Communications Specialist at Red Hat, a contributor to The Open Organization book, and the maintainer of the Open Decision Framework. She is interested in open source and the intersection of the open source way with business management models. Twitter: @ruhbehka
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/open-organization/17/1/force-for-good-community
-
-作者:[Rebecca Fernandez][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/rebecca
-[1]:https://opensource.com/business/15/5/5-ways-promote-inclusive-environment
-[2]:https://twitter.com/leadership_365
diff --git a/sources/talk/20170201 GOOGLE CHROME–ONE YEAR IN.md b/sources/talk/20170201 GOOGLE CHROME–ONE YEAR IN.md
deleted file mode 100644
index 3a567251da..0000000000
--- a/sources/talk/20170201 GOOGLE CHROME–ONE YEAR IN.md
+++ /dev/null
@@ -1,290 +0,0 @@
-GOOGLE CHROME–ONE YEAR IN
-========================================
-
-
-Four weeks ago, emailed notice of a free massage credit revealed that I’ve been at Google for a year. Time flies when you’re [drinking from a firehose][3].
-
-When I mentioned my anniversary, friends and colleagues from other companies asked what I’ve learned while working on Chrome over the last year. This rambling post is an attempt to answer that question.
-
-### NON-MASKABLE INTERRUPTS
-
-While I _started_ at Google just over a year ago, I haven’t actually _worked_ there for a full year yet. My second son (Nate) was born a few weeks early, arriving ten workdays after my first day of work.
-
-I took full advantage of Google’s very generous twelve weeks of paternity leave, taking a few weeks after we brought Nate home, and the balance as spring turned to summer. In a year, we went from having an enormous infant to an enormous toddler who’s taking his first steps and trying to emulate everything his 3 year-old brother (Noah) does.
-
- ![Baby at the hospital](https://textplain.files.wordpress.com/2017/01/image55.png?w=318&h=468 "New Release")
-
- ![First birthday cake](https://textplain.files.wordpress.com/2017/01/image56.png?w=484&h=466)
-
-I mention this because it’s had a huge impact on my work over the last year—_much_ more than I’d naively expected.
-
-When Noah was born, I’d been at Telerik for [almost a year][4], and I’d been hacking on Fiddler alone for nearly a decade. I took a short paternity leave, and my coding hours shifted somewhat (I started writing code late at night between bottle feeds), but otherwise my work wasn’t significantly impacted.
-
-As I pondered joining Google Chrome’s security team, I expected pretty much the same—a bit less sleep, a bit of scheduling awkwardness, but I figured things would fall into a good routine in a few months.
-
-Things turned out somewhat differently.
-
-Perhaps sensing that my life had become too easy, fate decided that 2016 was the year I’d get sick. _Constantly_. (Our theory is that Noah was bringing home germs from pre-school; he got sick a bunch too, but recovered quickly each time.) I was sick more days in 2016 than I was in the prior decade, including a month-long illness in the spring. _That_ ended with a bout of pneumonia that concluded with a doctor-mandated seven days away from the office. As I coughed my brains out on the sofa at home, I derived some consolation in thinking about Google’s generous life insurance package. But for the most part, my illnesses were minor—enough to keep me awake at night and coughing all day, but otherwise able to work.
-
-Mathematically, you might expect two kids to be twice as much work as one, but in our experience, it hasn’t worked out that way. Instead, it varies between 80% (when the kids happily play together) to 400% (when they’re colliding like atoms in a runaway nuclear reactor). Thanks to my wife’s heroic efforts, we found a workable _daytime _routine. The nights, however, have been unexpectedly difficult. Big brother Noah is at an age where he usually sleeps through the night, but he’s sure to wake me up every morning at 6:30am sharp. Fortunately, Nate has been a pretty good sleeper, but even now, at just over a year old, he usually still wakes up and requires attention twice a night or so.
-
-I can’t_ remember _the last time I had eight hours of sleep in a row. And that’s been _extremely _challenging… because I can’t remember _much else_ either. Learning new things when you don’t remember them the next day is a brutal, frustrating process.
-
-When Noah was a baby, I could simply sleep in after a long night. Even if I didn’t get enough sleep, it wouldn’t really matter—I’d been coding in C# on Fiddler for a decade, and deadlines were few and far between. If all else failed, I’d just avoid working on any especially gnarly code and spend the day handling support requests, updating graphics, or doing other simple and straightforward grunt work from my backlog.
-
-Things are much different on Chrome.
-
-### ROLES
-
-When I first started talking to the Chrome Security team about coming aboard, it was for a role on the Developer Advocacy team. I’d be driving HTTPS adoption across the web and working with big sites to unblock their migrations in any way I could. I’d already been doing the first half of that for fun (delivering [talks][5] at conferences like Codemash and [Velocity][6]), and I’d previously spent eight years as a Security Program Manager for the Internet Explorer team. I had _tons _of relevant experience. Easy peasy.
-
-I interviewed for the Developer Advocate role. The hiring committee kicked back my packet and said I should interview as a Technical Program Manager instead.
-
-I interviewed as a Technical Program Manager. The hiring committee kicked back my packet and said I should interview as a Developer Advocate instead.
-
-The Chrome team resolved the deadlock by hiring me as a Senior Software Engineer (SWE).
-
-I was initially _very _nervous about this, having not written any significant C++ code in over a decade—except for one [in-place replacement][7] of IE9’s caching logic which I’d coded as a PM because I couldn’t find a developer to do the work. But eventually I started believing in my own pep talk: _“I mean, how hard could it be, right? I’ve been troubleshooting code in web browsers for almost two decades now. I’m not a complete dummy. I’ll ramp up. It’ll be rough, but it’ll work out. Hell, I started writing Fiddler not knowing either C# nor HTTP, and _that _turned out pretty good. I’ll buy some books and get caught up. There’s no way that Google would have just hired me as a C++ developer without asking me any C++ coding questions if it wasn’t going to all be okay. Right? Right?!?”_
-
-### THE FIREHOSE
-
-I knew I had a lot to learn, and fast, but it took me a while to realize just how much else I didn’t know.
-
-Google’s primary development platform is Linux, an OS that I would install every few years, play with for a day, then forget about. My new laptop was a Mac, a platform I’d used a bit more, but still one for which I was about a twentieth as proficient as I was on Windows. The Chrome Windows team made a half-hearted attempt to get me to join their merry band, but warned me honestly that some of the tooling wasn’t quite as good as it was on Linux and it’d probably be harder for me to get help. So I tried to avoid Windows for the first few months, ordering a puny Windows machine that took around four times longer to build Chrome than my obscenely powerful Linux box (with its 48 logical cores). After a few months, I gave up on trying to avoid Windows and started using it as my primary platform. I was more productive, but incredibly slow builds remained a problem for a few months. Everyone told me to just order _another_ obscenely powerful box to put next to my Linux one, but it felt wrong to have hardware at my desk that collectively cost more than my first car—especially when, at Microsoft, I bought all my own hardware. I eventually mentioned my cost/productivity dilemma to a manager, who noted I was getting paid a Google engineer’s salary and then politely asked me if I was just really terrible at math. I ordered a beastly Windows machine and now my builds scream. (To the extent that _any_ C++ builds can scream, of course. At Telerik, I was horrified when a full build of Fiddler slowed to a full 5 seconds on my puny Windows machine; my typical Chrome build today still takes about 15 minutes.)
-
-Beyond learning different operating systems, I’d never used Google’s apps before (Docs/Sheets/Slides); luckily, I found these easy to pick up, although I still haven’t fully figured out how Google Drive file organization works. Google Docs, in particular, is so good that I’ve pretty much given up on Microsoft Word (which headed downhill after the 2010 version). Google Keep is a low-powered alternative to OneNote (which is, as far as I can tell, banned because it syncs to Microsoft servers) and I haven’t managed to get it to work well for my needs. Google Plus still hasn’t figured out how to support pasting of images via CTRL+V, a baffling limitation for something meant to compete in the space… hell, even _Microsoft Yammer _supports that, for gods sake. The only real downside to the web apps is that tab/window management on modern browsers is still a very much unsolved problem (but more on that in a bit).
-
-But these speedbumps all pale in comparison to Gmail. Oh, Gmail. As a program manager at Microsoft, pretty much your _entire life _is in your inbox. After twelve years with Outlook and Exchange, switching to Gmail was a train wreck. “_What do you mean, there aren’t folders? How do I mark this message as low priority? Where’s the button to format text with strikethrough? What do you mean, I can’t drag an email to my calendar? What the hell does this Archive thing do? Where’s that message I was just looking at? Hell, where did my Gmail tab even go—it got lost in a pile of sixty other tabs across four top-level Chrome windows. WTH??? How does anyone get anything done?”_
-
-### COMMUNICATION AND REMOTE WORK
-
-While Telerik had an office in Austin, I didn’t interact with other employees very often, and when I did they were usually in other offices. I thought I had a handle on remote work, but I really didn’t. Working with a remote team on a daily basis is just _different_.
-
-With communication happening over mail, IRC, Hangouts, bugs, document markup comments, GVC (video conferencing), G+, and discussion lists, it was often hard to [figure out which mechanisms to use][8], let alone which recipients to target. Undocumented pitfalls abounded (many discussion groups were essentially abandoned while others were unexpectedly broad; turning on chat history was deemed a “no-no” for document retention reasons).
-
-It often it took a bit of research to even understand who various communication participants were and how they related to the projects at hand.
-
-After years of email culture at Microsoft, I grew accustomed to a particular style of email, and Google’s is just _different._ Mail threads were long, with frequent additions of new recipients and many terse remarks. Many times, I’d reply privately to someone on a side thread, with a clarifying question, or suggesting a counterpoint to something they said. The response was often “_Hey, this just went to me. Mind adding on the main thread?_”
-
-I’m working remotely, with peers around the world, so real-time communication with my team is essential. Some Chrome subteams use Hangouts, but the Security team largely uses IRC.
-
-[
- ![XKCD comic on IRC](https://textplain.files.wordpress.com/2017/01/image30.png?w=1320&h=560 "https://xkcd.com/1782/")
-][9]
-
-Now, I’ve been chatting with people online since BBSes were a thing (I’ve got a five digit ICQ number somewhere), but my knowledge of IRC was limited to the fact that it was a common way of taking over suckers’ machines with buffer overflows in the ‘90s. My new teammates tried to explain how to IRC repeatedly: “_Oh, it’s easy, you just get this console IRC client. No, no, you don’t run it on your own workstation, that’d be crazy. You wouldn’t have history! You provision a persistent remote VM on a machine in Google’s cloud, then SSH to that, then you run screens and then you run your IRC client in that. Easy peasy._”
-
-Getting onto IRC remained on my “TODO” list for five months before I finally said “F- it”, installed [HexChat][10] on my Windows box, disabled automatic sleep, and called it done. It’s worked fairly well.
-
-### GOOGLE DEVELOPER TOOLING
-
-When an engineer first joins Google, they start with a week or two of technical training on the Google infrastructure. I’ve worked in software development for nearly two decades, and I’ve never even dreamed of the development environment Google engineers get to use. I felt like Charlie Bucket on his tour of Willa Wonka’s Chocolate Factory—astonished by the amazing and unbelievable goodies available at any turn. The computing infrastructure was something out of Star Trek, the development tools were slick and amazing, the _process_ was jaw-dropping.
-
-While I was doing a “hello world” coding exercise in Google’s environment, a former colleague from the IE team pinged me on Hangouts chat, probably because he’d seen my tweets about feeling like an imposter as a SWE. He sent me a link to click, which I did. Code from Google’s core advertising engine appeared in my browser. Google’s engineers have access to nearly all of the code across the whole company. This alone was astonishing—in contrast, I’d initially joined the IE team so I could get access to the networking code to figure out why the Office Online team’s website wasn’t working. “Neat, I can see everything!” I typed back. “Push the Analyze button” he instructed. I did, and some sort of automated analyzer emitted a report identifying a few dozen performance bugs in the code. “Wow, that’s amazing!” I gushed. “Now, push the Fix button” he instructed. “Uh, this isn’t some sort of security red team exercise, right?” I asked. He assured me that it wasn’t. I pushed the button. The code changed to fix some unnecessary object copies. “Amazing!” I effused. “Click Submit” he instructed. I did, and watched as the system compiled the code in the cloud, determined which tests to run, and ran them. Later that afternoon, an owner of the code in the affected folder typed LGTM (Googlers approve changes by typing the acronym for Looks Good To Me) on the change list I had submitted, and my change was live in production later that day. I was, in a word, gobsmacked. That night, I searched the entire codebase for [misuse][11] of an IE cache control token and proposed fixes for the instances I found. I also narcissistically searched for my own name and found a bunch of references to blog posts I’d written about assorted web development topics.
-
-Unfortunately for Chrome Engineers, the introduction to Google’s infrastructure is followed by a major letdown—because Chromium is open-source, the Chrome team itself doesn’t get to take advantage of most of Google’s internal goodies. Development of Chrome instead resembles C++ development at most major companies, albeit with an automatically deployed toolchain and enhancements like a web-based code review tool and some super-useful scripts. The most amazing of these is called [bisect-builds][12], and it allows a developer to very quickly discover what build of Chrome introduced a particular bug. You just give it a “known good” build number and a “known bad” build number and it automatically downloads and runs the minimal number of builds to perform a binary search for the build that introduced a given bug:
-
- ![Console showing bisect builds running](https://textplain.files.wordpress.com/2017/01/image31.png?w=1320&h=514 "Binary searching for regressions")
-
-Firefox has [a similar system][13], but I’d’ve killed for something like this back when I was reproducing and reducing bugs in IE. While it’s easy to understand how the system functions, it works so well that it feels like magic. Other useful scripts include the presubmit checks that run on each change list before you submit them for code review—they find and flag various style violations and other problems.
-
-Compilation itself typically uses a local compiler; on Windows, we use the MSVC command line compiler from Visual Studio 2015 Update 3, although work is underway to switch over to [Clang][14]. Compilation and linking all of Chrome takes quite some time, although on my new beastly dev boxes it’s not _too_ bad. Googlers do have one special perk—we can use Goma (a distributed compiler system that runs on Google’s amazing internal cloud) but I haven’t taken advantage of that so far.
-
-For bug tracking, Chrome recently moved to [Monorail][15], a straightforward web-based bug tracking system. It works fairly well, although it is somewhat more cumbersome than it needs to be and would be much improved with [a few tweaks][16]. Monorail is open-source, but I haven’t committed to it myself yet.
-
-For code review, Chrome presently uses [Rietveld][17], a web-based system, but this is slated to change in the near(ish) future. Like Monorail, it’s pretty straightforward although it would benefit from some minor usability tweaks; I committed one trivial change myself, but the pending migration to a different system means that it isn’t likely to see further improvements.
-
-As an open-source project, Chromium has quite a bit of public [documentation for developers][18], including [Design Documents][19]. Unfortunately, Chrome moves so fast that many of the design documents are out-of-date, and it’s not always obvious what’s current and what was replaced long ago. The team does _value_ engineers’ investment in the documents, however, and various efforts are underway to update the documents and reduce Chrome’s overall architectural complexity. I expect these will be ongoing battles forever, just like in any significant active project.
-
-### WHAT I’VE DONE
-
-“That’s all well and good,” my reader asks, “but _what have you done_ in the last year?”
-
-### I WROTE SOME CODE
-
-My first check in to Chrome [landed][20] in February; it was a simple adjustment to limit Public-Key-Pins to 60 days. Assorted other checkins trickled in through the spring before I went on paternity leave. The most _fun_ fix I did cleaned up a tiny [UX glitch][21] that sat unnoticed in Chrome for almost a decade; it was mostly interesting because it was a minor thing that I’d tripped over for years, including back in IE. (The root cause was arguably that MSDN documentation about DWM lied; I fixed the bug in Chrome, sent the fix to IE, and asked MSDN to fix their docs).
-
-I fixed a number of [minor][22] [security][23] [bugs][24], and lately I’ve been working on [UX issues][25] related to Chrome’s HTTPS user-experience. Back in 2005, I wrote [a blog post][26] complaining about websites using HTTPS incorrectly, and now, just over a decade later, Chrome and Firefox are launching UI changes to warn users when a site is collecting sensitive information on pages which are Not Secure; I’m delighted to have a small part in those changes.
-
-Having written a handful of Internet Explorer Extensions in the past, I was excited to discover the joy of writing Chrome extensions. Chrome extensions are fun, simple, and powerful, and there’s none of the complexity and crashes of COM.
-
-[
- ![My 3 Chrome Extensions](https://textplain.files.wordpress.com/2017/01/image201.png?w=1288&h=650 "My 3 Chrome Extensions")
-][27]
-
-My first and most significant extension is the moarTLS Analyzer– it’s related to my HTTPS work at Google and it’s proven very useful in discovering sites that could improve their security. I [blogged about it][28] and the process of [developing it][29] last year.
-
-Because I run several different Chrome instances on my PC (and they update daily or weekly), I found myself constantly needing to look up the Chrome version number for bug reports and the like. I wrote a tiny extension that shows the version number in a button on the toolbar (so it’s captured in screenshots too!):
-
- ![Show Chrome Version screenshot](https://textplain.files.wordpress.com/2017/02/image.png?w=886&h=326 "Show Chrome Version")
-
-More than once, I spent an hour or so trying to reproduce and reduce a bug that had been filed against Chrome. When I found out the cause, I’d jubilently add my notes to the issue in the Monorail bug tracker, click “Save changes” and discover that someone more familiar with the space had beaten me to the punch and figured it out while I’d had the bug open on my screen. Adding an “Issue has been updated” alert to the bug tracker itself seemed like the right way to go, but it would require some changes that I wasn’t able to commit on my own. So, instead I built an extension that provides such alerts within the page until the [feature][30] can be added to the tracker itself.
-
-Each of these extensions was a joy to write.
-
-### I FILED SOME BUGS
-
-I’m a diligent self-hoster, and I run Chrome Canary builds on all of my devices. I submit crash reports and [file bugs][31] with as much information as I can. My proudest moment was in helping narrow down a bizarre and intermittent problem users had with Chrome on Windows 10, where Chrome tabs would crash on every startup until you rebooted the OS. My [blog post][32] explains the full story, and encourages others to file bugs as they encounter them.
-
-### I TRIAGED MORE BUGS
-
-I’ve been developing software for Windows for just over two decades, and inevitably I’ve learned quite a bit about it, including the undocumented bits. That’s given me a leg up in understanding bugs in the Windows code. Some of the most fun include issues in Drag and Drop, like this [gem][33] of a bug that means that you can’t drop files from Chrome to most applications in Windows. More meaningful [bugs][34] [relate][35] [to][36] [problems][37] with Windows’ Mark-of-the-Web security feature (about which I’ve [blogged][38] [about][39] [several][40] times).
-
-### I TOOK SHERIFF ROTATIONS
-
-Google teams have the notion of sheriffs—a rotating assignment that ensures that important tasks (like triaging incoming security bugs) always has a defined owner, without overwhelming any single person. Each Sheriff has a term of ~1 week where they take on additional duties beyond their day-to-day coding, designing, testing, etc.
-
-The Sheriff system has some real benefits—perhaps the most important of which is creating a broad swath of people experienced and qualified in making triage decisions around security vulnerabilities. The alternative is to leave such tasks to a single owner, rapidly increasing their [bus factor][41] and thus the risk to the project. (I know this from first-hand experience. After IE8 shipped, I was on my way out the door to join another team. Then IE’s Security PM left, leaving a gaping hole that I felt obliged to stay around to fill. It worked out okay for me and the team, but it was tense all around.)
-
-I’m on two sheriff rotations: [Enamel][42] (my subteam) and the broader Chrome Security Sheriff.
-
-The Enamel rotation’s tasks are akin to what I used to do as a Program Manager at Microsoft—triage incoming bugs, respond to questions in the [Help Forums][43], and generally act as a point of contact for my immediate team.
-
-In contrast, the Security Sheriff rotation is more work, and somewhat more exciting. The Security Sheriff’s [duties][44] include triaging all bugs of type “Security”, assigning priority, severity, and finding an owner for each. Most security bugs are automatically reported by [our fuzzers][45] (a tireless robot army!), but we also get reports from the public and from Chrome team members and [Project Zero][46] too.
-
-At Microsoft, incoming security bug reports were first received and evaluated by the Microsoft Security Response Center (MSRC); valid reports were passed along to the IE team after some level of analysis and reproduction was undertaken. In general, all communication was done through MSRC, and the turnaround cycle on bugs was _typically_ on the order of weeks to months.
-
-In contrast, anyone can [file a security bug][47] against Chrome, and every week lots of people do. One reason for that is that Chrome has a [Vulnerability Rewards program][48] which pays out up to $100K for reports of vulnerabilities in Chrome and Chrome OS. Chrome paid out just under $1M USD in bounties [last year][49]. This is an _awesome_ incentive for researchers to responsibly disclose bugs directly to us, and the bounties are _much_ higher than those of nearly any other project.
-
-In his “[Hacker Quantified Security][50]” talk at the O’Reilly Security conference, HackerOne CTO and Cofounder Alex Rice showed the following chart of bounty payout size for vulnerabilities when explaining why he was using a Chromebook. Apologies for the blurry photo, but the line at the top shows Chrome OS, with the 90th percentile line miles below as severity rises to Critical:
-
-[
- ![Vulnerability rewards by percentile. Chrome is WAY off the chart.](https://textplain.files.wordpress.com/2017/01/image_thumb6.png?w=962&h=622 "Chrome Vulnerability Rewards are Yuuuuge")
-][51]
-
-With a top bounty of $100,000 for an exploit or exploit chain that fully compromises a Chromebook, researchers are much more likely to send their bugs to us than to try to find a buyer on the black market.
-
-Bug bounties are great, except when they’re not. Unfortunately, many filers don’t bother to read the [Chrome Security FAQ][52] which explains what constitutes a security vulnerability and the great many things that do not. Nearly every week, we have at least one person (and often more) file a bug noting “_I can use the Developer Tools to read my own password out of a webpage. Can I have a bounty?_” or “_If I install malware on my PC, I can see what happens inside Chrome_” or variations of these.
-
-Because we take security bug reports very seriously, we often spend a lot of time on what seem like garbage filings to verify that there’s not just some sort of communication problem. This exposes one downside of the sheriff process—the lack of continuity from week to week.
-
-In the fall, we had one bug reporter file a new issue every week that was just a collection of security-related terms (XSS! CSRF! UAF! EoP! Dangling Pointer! Script Injection!) lightly wrapped in prose, including screenshots, snippets from websites, console output from developer tools, and the like. Each week, the sheriff would investigate, ask for more information, and engage in a fruitless back and forth with the filer trying to figure out what claim was being made. Eventually I caught on to what was happening and started monitoring the sheriff’s queue, triaging the new findings directly and sparing the sheriff of the week. But even today we still catch folks who look up old bug reports (usually Won’t Fixed issues), copy/paste the content into new bugs, and file them into the queue. It’s frustrating, but coming from a closed bug database, I’d choose the openness of the Chrome bug database every time.
-
-Getting ready for my first Sheriff rotation, I started watching the incoming queue a few months earlier and felt ready for my first rotation in September. Day One was quiet, with a few small issues found by fuzzers and one or two junk reports from the public which I triaged away with pointers to the “_Why isn’t a vulnerability_” entries in the Security FAQ. I spent the rest of the day writing a fix for a lower-priority security [bug][53] that had been filed a month before. A pretty successful day, I thought.
-
-Day Two was more interesting. Scanning the queue, I saw a few more fuzzer issues and [one external report][54] whose text started with “Here is a Chrome OS exploit chain.” The report was about two pages long, and had a forty-two page PDF attachment explaining the four exploits the finder had used to take over a fully-patched Chromebook.
-
- ![Star Wars trench run photo](https://textplain.files.wordpress.com/2017/02/image1.png?w=478&h=244 "Defenses can't keep up!")
-
-Watching Luke’s X-wing take out the Death Star in Star Wars was no more exciting than reading the PDF’s tale of how a single byte memory overwrite in the DNS resolver code could weave its way through the many-layered security features of the Chromebook and achieve a full compromise. It was like the most amazing magic trick you’ve ever seen.
-
-I hopped over to IRC. “So, do we see full compromises of Chrome OS every week?” I asked innocently.
-
-“No. Why?” came the reply from several corners. I pasted in the bug link and a few moments later the replies started flowing in “OMG. Amazing!” Even guys from Project Zero were impressed, and they’re magicians who build exploits like this (usually for other products) all the time. The researcher had found one small bug and a variety of neglected components that were thought to be unreachable and put together a deadly chain.
-
-The first patches were out for code review that evening, and by the next day, we’d reached out to the open-source owner of the DNS component with the 1-byte overwrite bug so he could release patches for the other projects using his code. Within a few days, fixes to other components landed and had been ported to all of the supported versions of Chrome OS. Two weeks later, the Chrome Vulnerability rewards team added the [reward-100000][55] tag, the only bug so far to be so marked. Four weeks after that, I had to hold my tongue when Alex mentioned that “no one’s ever claimed that $100000 bounty” during his “Hacker Quantified Security” talk. Just under 90 days from filing, the bug was unrestricted and made available for public viewing.
-
-The remainder of my first Sheriff rotation was considerably less exciting, although still interesting. I spent some time looking through the components the researcher had abused in his exploit chain and filed a few bugs. Ultimately, the most risky component he used was removed entirely.
-
-### OUTREACH AND BLOGGING
-
-Beyond working on the Enamel team (focused on Chrome’s security UI surface), I also work on the “MoarTLS” project, designed to help encourage and assist the web as a whole in moving to HTTPS. This takes a number of forms—I help maintain the [HTTPS on Top Sites Report Card][56], I do consultations and HTTPS Audits with major sites as they enable HTTPS on their sites. I discover, reduce, and file bugs on Chrome’s and other browsers’ support of features like Upgrade-Insecure-Requests. I publish a [running list of articles][57] on why and how sites should enable TLS. I hassle teams all over Google (and the web in general) to enable HTTPS on every single hyperlink they emit. I responsibly disclosed security bugs in a number of products and sites, including [a vulnerability][58] in Hillary Clinton’s fundraising emails. I worked to send a notification to many many many thousands of sites collecting user information non-securely, warning them of the [UI changes in Chrome 56][59].
-
-When I applied to Google for the Developer Advocate role, I expected I’d be delivering public talks _constantly_, but as a SWE I’ve only given a few talks, including my [Migrating to HTTPS talk][60] at the first O’Reilly Security Conference. I had a lot of fun at that conference, catching up with old friends from the security community (mostly ex-Microsofties). I also went to my first [Chrome Dev Summit][61], where I didn’t have a public talk (my colleagues did) but I did get to talk to some major companies about deploying HTTPS.
-
-I also blogged [quite a bit][62]. At Microsoft, I started blogging because I got tired of repeating myself, and because our Exchange server and document retention policies had started making it hard or impossible to find old responses—I figured “Well, if I publish everything on the web, Google will find it, and Internet Archive will back it up.”
-
-I’ve kept blogging since leaving Microsoft, and I’m happy that I have even though my reader count numbers are much lower than they were at Microsoft. I’ve managed to mostly avoid trouble, although my posts are not entirely uncontroversial. At Microsoft, they wouldn’t let me publish [this post][63] (because it was too frank); in my first month at Google, I got a phone call at home (during the first portion of my paternity leave) from a Google Director complaining that I’d written [something][64] that was too harsh about a change Microsoft had made. But for the most part, my blogging seems not to ruffle too many feathers.
-
-### TIDBITS
-
-* Food at Google is generally _really_ good; I’m at a satellite office in Austin, so the selection is much smaller than on the main campuses, but the rotating menu is fairly broad and always has at least three major options. And the breakfasts! I gained about 15 pounds in my first few months, but my pneumonia took it off and I’ve restrained my intake since I came back.
-* At Microsoft, I always sneered at companies offering free food (“I’m an adult professional. I can pay for my lunch.”), but it’s definitely convenient to not have to hassle with payments. And until the government closes the loophole, it’s a way to increase employees’ compensation without getting taxed.
-* For the first three months, I was impressed and slightly annoyed that all of the snack options in Google’s micro-kitchens are healthy (e.g. fruit)—probably a good thing, since I sit about twenty feet from one. Then I saw someone open a drawer and pull out some M&Ms, and I learned the secret—all of the junk food is in drawers. The selection is impressive and ranges from the popular to the high end.
-* Google makes heavy use of the “open-office concept.” I think this makes sense for some teams, but it’s not at all awesome for me. I’d gladly take a 10% salary cut for a private office. I doubt I’m alone.
-* Coworkers at Google range from very smart to insanely off-the-scales-smart. Yet, almost all of them are humble, approachable, and kind.
-* Google, like Microsoft, offers gift matching for charities. This is an awesome perk, and one I aim to max out every year. I’m awed by people who go [far][1] beyond that.
-* **Window Management** – I mentioned earlier that one downside of web-based tools is that it’s hard to even _find_ the right tab when I’ve got dozens of open tabs that I’m flipping between. The [Quick Tabs extension][2] is one great mitigation; it shows your tabs in a searchable, most-recently-used list in a convenient dropdown:
-
-[
- ![QuickTabs Extension](https://textplain.files.wordpress.com/2017/01/image59.png?w=526&h=376 "A Searchable MRU of open tabs. Yes please!")
-][65]
-
-Another trick that I learned just this month is that you can instruct Chrome to open a site in “App” mode, where it runs in its own top-level window (with no other tabs), showing the site’s icon as the icon in the Windows taskbar. It’s easy:
-
-On Windows, run chrome.exe --app=https://mail.google.com
-
-While on OS X, run open -n -b com.google.Chrome --args --app='[https://news.google.com][66]'
-
-_Tip: The easy way to create a shortcut to the current page in app mode is to click the Chrome Menu > More Tools > Add to {shelf/desktop} and tick the Open as Window checkbox._
-
-I now have [SlickRun][67] MagicWords set up for **mail**, **calendar**, and my other critical applications.
-
---------------------------------------------------------------------------------
-
-via: https://textslashplain.com/2017/02/01/google-chrome-one-year-in/
-
-作者:[ericlaw][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://textslashplain.com/author/ericlaw1979/
-[1]:https://www.jefftk.com/p/leaving-google-joining-wave
-[2]:https://chrome.google.com/webstore/detail/quick-tabs/jnjfeinjfmenlddahdjdmgpbokiacbbb
-[3]:https://textslashplain.com/2015/12/23/my-next-adventure/
-[4]:http://sdtimes.com/telerik-acquires-fiddler-debugger-along-with-its-creator/
-[5]:https://bayden.com/dl/Codemash2015-ericlaw-https-in-2015.pptx
-[6]:https://conferences.oreilly.com/velocity/devops-web-performance-2015/public/content/2015/04/16-https-stands-for-user-experience
-[7]:https://textslashplain.com/2015/04/09/on-appreciation/
-[8]:https://xkcd.com/1254/
-[9]:http://m.xkcd.com/1782/
-[10]:https://hexchat.github.io/
-[11]:https://blogs.msdn.microsoft.com/ieinternals/2009/07/20/internet-explorers-cache-control-extensions/
-[12]:https://www.chromium.org/developers/bisect-builds-py
-[13]:https://mozilla.github.io/mozregression/
-[14]:https://chromium.googlesource.com/chromium/src/+/lkgr/docs/clang.md
-[15]:https://bugs.chromium.org/p/monorail/adminIntro
-[16]:https://bugs.chromium.org/p/monorail/issues/list?can=2&q=reporter%3Aelawrence
-[17]:https://en.wikipedia.org/wiki/Rietveld_(software)
-[18]:https://www.chromium.org/developers
-[19]:https://www.chromium.org/developers/design-documents
-[20]:https://codereview.chromium.org/1733973004/
-[21]:https://codereview.chromium.org/2244263002/
-[22]:https://codereview.chromium.org/2323273003/
-[23]:https://codereview.chromium.org/2368593002/
-[24]:https://codereview.chromium.org/2347923002/
-[25]:https://codereview.chromium.org/search?closed=1&owner=elawrence&reviewer=&cc=&repo_guid=&base=&project=&private=1&commit=1&created_before=&created_after=&modified_before=&modified_after=&order=&format=html&keys_only=False&with_messages=False&cursor=&limit=30
-[26]:https://blogs.msdn.microsoft.com/ie/2005/04/20/tls-and-ssl-in-the-real-world/
-[27]:https://chrome.google.com/webstore/search/bayden?hl=en-US&_category=extensions
-[28]:https://textslashplain.com/2016/03/17/seek-and-destroy-non-secure-references-using-the-moartls-analyzer/
-[29]:https://textslashplain.com/2016/03/18/building-the-moartls-analyzer/
-[30]:https://bugs.chromium.org/p/monorail/issues/detail?id=1739
-[31]:https://bugs.chromium.org/p/chromium/issues/list?can=1&q=reporter%3Ame&colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&x=m&y=releaseblock&cells=ids
-[32]:https://textslashplain.com/2016/08/18/file-the-bug/
-[33]:https://bugs.chromium.org/p/chromium/issues/detail?id=540547
-[34]:https://bugs.chromium.org/p/chromium/issues/detail?id=601538
-[35]:https://bugs.chromium.org/p/chromium/issues/detail?id=595844#c6
-[36]:https://bugs.chromium.org/p/chromium/issues/detail?id=629637
-[37]:https://bugs.chromium.org/p/chromium/issues/detail?id=591343
-[38]:https://textslashplain.com/2016/04/04/downloads-and-the-mark-of-the-web/
-[39]:https://blogs.msdn.microsoft.com/ieinternals/2011/03/23/understanding-local-machine-zone-lockdown/
-[40]:https://blogs.msdn.microsoft.com/ieinternals/2012/06/19/enhanced-protected-mode-and-local-files/
-[41]:https://en.wikipedia.org/wiki/Bus_factor
-[42]:https://www.chromium.org/Home/chromium-security/enamel
-[43]:https://productforums.google.com/forum/#!forum/chrome
-[44]:https://www.chromium.org/Home/chromium-security/security-sheriff
-[45]:https://blog.chromium.org/2012/04/fuzzing-for-security.html
-[46]:https://en.wikipedia.org/wiki/Project_Zero_(Google)
-[47]:https://bugs.chromium.org/p/chromium/issues/entry?template=Security%20Bug
-[48]:https://www.google.com/about/appsecurity/chrome-rewards/
-[49]:https://security.googleblog.com/2017/01/vulnerability-rewards-program-2016-year.html
-[50]:https://conferences.oreilly.com/security/network-data-security-ny/public/schedule/detail/53296
-[51]:https://textplain.files.wordpress.com/2017/01/image58.png
-[52]:https://dev.chromium.org/Home/chromium-security/security-faq
-[53]:https://bugs.chromium.org/p/chromium/issues/detail?id=639126#c11
-[54]:https://bugs.chromium.org/p/chromium/issues/detail?id=648971
-[55]:https://bugs.chromium.org/p/chromium/issues/list?can=1&q=label%3Areward-100000&colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&x=m&y=releaseblock&cells=ids
-[56]:https://www.google.com/transparencyreport/https/grid/?hl=en
-[57]:https://whytls.com/
-[58]:https://textslashplain.com/2016/09/22/use-https-for-all-inbound-links/
-[59]:https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html
-[60]:https://www.safaribooksonline.com/library/view/the-oreilly-security/9781491960035/video287622.html
-[61]:https://developer.chrome.com/devsummit/
-[62]:https://textslashplain.com/2016/
-[63]:https://blogs.msdn.microsoft.com/ieinternals/2013/10/16/strict-p3p-validation/
-[64]:https://textslashplain.com/2016/01/20/putting-users-first/
-[65]:https://chrome.google.com/webstore/detail/quick-tabs/jnjfeinjfmenlddahdjdmgpbokiacbbb
-[66]:https://news.google.com/
-[67]:https://bayden.com/slickrun/
diff --git a/sources/talk/20170320 Education of a Programmer.md b/sources/talk/20170320 Education of a Programmer.md
deleted file mode 100644
index 7ffeb2fe49..0000000000
--- a/sources/talk/20170320 Education of a Programmer.md
+++ /dev/null
@@ -1,155 +0,0 @@
-Education of a Programmer
-============================================================
-
-_When I left Microsoft in October 2016 after almost 21 years there and almost 35 years in the industry, I took some time to reflect on what I had learned over all those years. This is a lightly edited version of that post. Pardon the length!_
-
-There are an amazing number of things you need to know to be a proficient programmer — details of languages, APIs, algorithms, data structures, systems and tools. These things change all the time — new languages and programming environments spring up and there always seems to be some hot new tool or language that “everyone” is using. It is important to stay current and proficient. A carpenter needs to know how to pick the right hammer and nail for the job and needs to be competent at driving the nail straight and true.
-
-At the same time, I’ve found that there are some concepts and strategies that are applicable over a wide range of scenarios and across decades. We have seen multiple orders of magnitude change in the performance and capability of our underlying devices and yet certain ways of thinking about the design of systems still stay relevant. These are more fundamental than any specific implementation. Understanding these recurring themes is hugely helpful in both the analysis and design of the complex systems we build.
-
-Humility and Ego
-
-This is not limited to programming, but in an area like computing which exhibits so much constant change, one needs a healthy balance of humility and ego. There is always more to learn and there is always someone who can help you learn it — if you are willing and open to that learning. One needs both the humility to recognize and acknowledge what you don’t know and the ego that gives you confidence to master a new area and apply what you already know. The biggest challenges I have seen are when someone works in a single deep area for a long time and “forgets” how good they are at learning new things. The best learning comes from actually getting hands dirty and building something, even if it is just a prototype or hack. The best programmers I know have had both a broad understanding of technology while at the same time have taken the time to go deep into some technology and become the expert. The deepest learning happens when you struggle with truly hard problems.
-
-End to End Argument
-
-Back in 1981, Jerry Saltzer, Dave Reed and Dave Clark were doing early work on the Internet and distributed systems and wrote up their [classic description][4] of the end to end argument. There is much misinformation out there on the Internet so it can be useful to go back and read the original paper. They were humble in not claiming invention — from their perspective this was a common engineering strategy that applies in many areas, not just in communications. They were simply writing it down and gathering examples. A minor paraphrasing is:
-
-When implementing some function in a system, it can be implemented correctly and completely only with the knowledge and participation of the endpoints of the system. In some cases, a partial implementation in some internal component of the system may be important for performance reasons.
-
-The SRC paper calls this an “argument”, although it has been elevated to a “principle” on Wikipedia and in other places. In fact, it is better to think of it as an argument — as they detail, one of the hardest problems for a system designer is to determine how to divide responsibilities between components of a system. This ends up being a discussion that involves weighing the pros and cons as you divide up functionality, isolate complexity and try to design a reliable, performant system that will be flexible to evolving requirements. There is no simple set of rules to follow.
-
-Much of the discussion on the Internet focuses on communications systems, but the end-to-end argument applies in a much wider set of circumstances. One example in distributed systems is the idea of “eventual consistency”. An eventually consistent system can optimize and simplify by letting elements of the system get into a temporarily inconsistent state, knowing that there is a larger end-to-end process that can resolve these inconsistencies. I like the example of a scaled-out ordering system (e.g. as used by Amazon) that doesn’t require every request go through a central inventory control choke point. This lack of a central control point might allow two endpoints to sell the same last book copy, but the overall system needs some type of resolution system in any case, e.g. by notifying the customer that the book has been backordered. That last book might end up getting run over by a forklift in the warehouse before the order is fulfilled anyway. Once you realize an end-to-end resolution system is required and is in place, the internal design of the system can be optimized to take advantage of it.
-
-In fact, it is this design flexibility in the service of either ongoing performance optimization or delivering other system features that makes this end-to-end approach so powerful. End-to-end thinking often allows internal performance flexibility which makes the overall system more robust and adaptable to changes in the characteristics of each of the components. This makes an end-to-end approach “anti-fragile” and resilient to change over time.
-
-An implication of the end-to-end approach is that you want to be extremely careful about adding layers and functionality that eliminates overall performance flexibility. (Or other flexibility, but performance, especially latency, tends to be special.) If you expose the raw performance of the layers you are built on, end-to-end approaches can take advantage of that performance to optimize for their specific requirements. If you chew up that performance, even in the service of providing significant value-add functionality, you eliminate design flexibility.
-
-The end-to-end argument intersects with organizational design when you have a system that is large and complex enough to assign whole teams to internal components. The natural tendency of those teams is to extend the functionality of those components, often in ways that start to eliminate design flexibility for applications trying to deliver end-to-end functionality built on top of them.
-
-One of the challenges in applying the end-to-end approach is determining where the end is. “Little fleas have lesser fleas… and so on ad infinitum.”
-
-Concentrating Complexity
-
-Coding is an incredibly precise art, with each line of execution required for correct operation of the program. But this is misleading. Programs are not uniform in the overall complexity of their components or the complexity of how those components interact. The most robust programs isolate complexity in a way that lets significant parts of the system appear simple and straightforward and interact in simple ways with other components in the system. Complexity hiding can be isomorphic with other design approaches like information hiding and data abstraction but I find there is a different design sensibility if you really focus on identifying where the complexity lies and how you are isolating it.
-
-The example I’ve returned to over and over again in my [writing][5] is the screen repaint algorithm that was used by early character video terminal editors like VI and EMACS. The early video terminals implemented control sequences for the core action of painting characters as well as additional display functions to optimize redisplay like scrolling the current lines up or down or inserting new lines or moving characters within a line. Each of those commands had different costs and those costs varied across different manufacturers’ devices. (See [TERMCAP][6] for links to code and a fuller history.) A full-screen application like a text editor wanted to update the screen as quickly as possible and therefore needed to optimize its use of these control sequences to transition the screen from one state to another.
-
-These applications were designed so this underlying complexity was hidden. The parts of the system that modify the text buffer (where most innovation in functionality happens) completely ignore how these changes are converted into screen update commands. This is possible because the performance cost of computing the optimal set of updates for _any_ change in the content is swamped by the performance cost of actually executing the update commands on the terminal itself. It is a common pattern in systems design that performance analysis plays a key part in determining how and where to hide complexity. The screen update process can be asynchronous to the changes in the underlying text buffer and can be independent of the actual historical sequence of changes to the buffer. It is not important _how_ the buffer changed, but only _what_ changed. This combination of asynchronous coupling, elimination of the combinatorics of historical path dependence in the interaction between components and having a natural way for interactions to efficiently batch together are common characteristics used to hide coupling complexity.
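-
-To make that isolation concrete, here is a minimal sketch in C. It is illustrative only, not code from any of the editors mentioned above: `term_move` and `term_write` are hypothetical stand-ins for termcap-driven output, and a real redisplay layer would also diff within lines and weigh insert/delete-line sequences against plain repainting.
-
-```
-#include <string.h>
-
-#define ROWS 24
-#define COLS 80
-
-/* Hypothetical terminal primitives, standing in for termcap-driven output. */
-void term_move(int row, int col);
-void term_write(const char *text, int len);
-
-/*
- * Bring the physical screen up to date by comparing it with the desired
- * contents. The buffer-editing code never calls this directly; it only
- * produces the desired state, and the redisplay layer decides what output
- * is cheapest. Note that the history of edits is irrelevant here.
- */
-void redisplay(char displayed[ROWS][COLS], const char desired[ROWS][COLS])
-{
-    for (int r = 0; r < ROWS; r++) {
-        if (memcmp(displayed[r], desired[r], COLS) == 0)
-            continue;                     /* line already correct: zero output cost */
-        term_move(r, 0);
-        term_write(desired[r], COLS);     /* worst case: repaint the whole line */
-        memcpy(displayed[r], desired[r], COLS);
-    }
-}
-```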
-
-Success in hiding complexity is determined not by the component doing the hiding but by the consumers of that component. This is one reason why it is often so critical for a component provider to actually be responsible for at least some piece of the end-to-end use of that component. They need to have clear optics into how the rest of the system interacts with their component and how (and whether) complexity leaks out. This often shows up as feedback like “this component is hard to use” — which typically means that it is not effectively hiding the internal complexity or did not pick a functional boundary that was amenable to hiding that complexity.
-
-Layering and Componentization
-
-It is the fundamental role of a system designer to determine how to break down a system into components and layers; to make decisions about what to build and what to pick up from elsewhere. Open Source may keep money from changing hands in this “build vs. buy” decision but the dynamics are the same. An important element in large scale engineering is understanding how these decisions will play out over time. Change fundamentally underlies everything we do as programmers, so these design choices are not only evaluated in the moment, but are evaluated in the years to come as the product continues to evolve.
-
-Here are a few things about system decomposition that end up having a large element of time in them and therefore tend to take longer to learn and appreciate.
-
-* Layers are leaky. Layers (or abstractions) are [fundamentally leaky][1]. These leaks have consequences immediately but also have consequences over time, in two ways. One consequence is that the characteristics of the layer leak through and permeate more of the system than you realize. These might be assumptions about specific performance characteristics or behavior ordering that is not an explicit part of the layer contract. This means that you generally are more _vulnerable_ to changes in the internal behavior of the component than you understood. A second consequence is it also means you are more _dependent_ on that internal behavior than is obvious, so if you consider changing that layer the consequences and challenges are probably larger than you thought.
-* Layers are too functional. It is almost a truism that a component you adopt will have more functionality than you actually require. In some cases, the decision to use it is based on leveraging that functionality for future uses. You adopt specifically because you want to “get on the train” and leverage the ongoing work that will go into that component. There are a few consequences of building on this highly functional layer. 1) The component will often make trade-offs that are biased by functionality that you do not actually require. 2) The component will embed complexity and constraints because of functionality you do not require and those constraints will impede future evolution of that component. 3) There will be more surface area to leak into your application. Some of that leakage will be due to true “leaky abstractions” and some will be explicit (but generally poorly controlled) increased dependence on the full capabilities of the component. Office is big enough that we found that for any layer we built on, we eventually fully explored its functionality in some part of the system. While that might appear to be positive (we are more completely leveraging the component), all uses are not equally valuable. So we end up having a massive cost to move from one layer to another based on this long-tail of often lower value and poorly recognized use cases. 4) The additional functionality creates complexity and opportunities for misuse. An XML validation API we used would optionally dynamically download the schema definition if it was specified as part of the XML tree. This was mistakenly turned on in our basic file parsing code which resulted in both a massive performance degradation as well as an (unintentional) distributed denial of service attack on a w3c.org web server. (These are colloquially known as “land mine” APIs.)
-* Layers get replaced. Requirements evolve, systems evolve, components are abandoned. You eventually need to replace that layer or component. This is true for external component dependencies as well as internal ones. This means that the issues above will end up becoming important.
-* Your build vs. buy decision will change. This is partly a corollary of above. This does not mean the decision to build or buy was wrong at the time. Often there was no appropriate component when you started and it only becomes available later. Or alternatively, you use a component but eventually find that it does not match your evolving requirements and your requirements are narrow enough, well-understood or so core to your value proposition that it makes sense to own it yourself. It does mean that you need to be just as concerned about leaky layers permeating more of the system for layers you build as well as for layers you adopt.
-* Layers get thick. As soon as you have defined a layer, it starts to accrete functionality. The layer is the natural throttle point to optimize for your usage patterns. The difficulty with a thick layer is that it tends to reduce your ability to leverage ongoing innovation in underlying layers. In some sense this is why OS companies hate thick layers built on top of their core evolving functionality — the pace at which innovation can be adopted is inherently slowed. One disciplined approach to avoid this is to disallow any additional state storage in an adaptor layer. Microsoft Foundation Classes took this general approach in building on top of Win32. It is inevitably cheaper in the short term to just accrete functionality on to an existing layer (leading to all the eventual problems above) rather than refactoring and recomponentizing. A system designer who understands this looks for opportunities to break apart and simplify components rather than accrete more and more functionality within them.
-
-Einsteinian Universe
-
-I had been designing asynchronous distributed systems for decades but was struck by this quote from Pat Helland, a SQL architect, at an internal Microsoft talk. “We live in an Einsteinian universe — there is no such thing as simultaneity. “ When building distributed systems — and virtually everything we build is a distributed system — you cannot hide the distributed nature of the system. It’s just physics. This is one of the reasons I’ve always felt Remote Procedure Call, and especially “transparent” RPC that explicitly tries to hide the distributed nature of the interaction, is fundamentally wrong-headed. You need to embrace the distributed nature of the system since the implications almost always need to be plumbed completely through the system design and into the user experience.
-
-Embracing the distributed nature of the system leads to a number of things:
-
-* You think through the implications to the user experience from the start rather than trying to patch on error handling, cancellation and status reporting as an afterthought.
-* You use asynchronous techniques to couple components. Synchronous coupling is _impossible._ If something appears synchronous, it’s because some internal layer has tried to hide the asynchrony and in doing so has obscured (but definitely not hidden) a fundamental characteristic of the runtime behavior of the system.
-* You recognize and explicitly design for interacting state machines and that these states represent robust long-lived internal system states (rather than ad-hoc, ephemeral and undiscoverable state encoded by the value of variables in a deep call stack).
-* You recognize that failure is expected. The only guaranteed way to detect failure in a distributed system is to simply decide you have waited “too long”. This naturally means that [cancellation is first-class][2]. Some layer of the system (perhaps plumbed through to the user) will need to decide it has waited too long and cancel the interaction. Cancelling is only about reestablishing local state and reclaiming local resources — there is no way to reliably propagate that cancellation through the system. It can sometimes be useful to have a low-cost, unreliable way to attempt to propagate cancellation as a performance optimization.
-* You recognize that cancellation is not rollback since it is just reclaiming local resources and state. If rollback is necessary, it needs to be an end-to-end feature.
-* You accept that you can never really know the state of a distributed component. As soon as you discover the state, it may have changed. When you send an operation, it may be lost in transit, it might be processed but the response is lost, or it may take some significant amount of time to process so the remote state ultimately transitions at some arbitrary time in the future. This leads to approaches like idempotent operations and the ability to robustly and efficiently rediscover remote state rather than expecting that distributed components can reliably track state in parallel. The concept of “[eventual consistency][3]” succinctly captures many of these ideas.
-
-I like to say you should “revel in the asynchrony”. Rather than trying to hide it, you accept it and design for it. When you see a technique like idempotency or immutability, you recognize them as ways of embracing the fundamental nature of the universe, not just one more design tool in your toolbox.
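-
-To make the idempotency point concrete, here is a tiny sketch in C of a server-side operation that a client can safely retry after deciding it has waited too long. It is an illustration under assumed names (the request IDs, the flat table), not a pattern lifted from any particular system.
-
-```
-#include <stdbool.h>
-#include <string.h>
-
-#define MAX_SEEN 1024
-
-/* Hypothetical server-side state: a balance plus the request IDs already applied. */
-static long balance;
-static char seen[MAX_SEEN][64];
-static int  seen_count;
-
-static bool already_applied(const char *request_id)
-{
-    for (int i = 0; i < seen_count; i++)
-        if (strcmp(seen[i], request_id) == 0)
-            return true;
-    return false;
-}
-
-/*
- * Idempotent deposit: the caller cannot tell whether a missing reply means
- * the request was lost or merely slow, so it resends the same request_id
- * until it hears back. Replays change nothing.
- */
-long deposit(const char *request_id, long amount)
-{
-    if (!already_applied(request_id) && seen_count < MAX_SEEN) {
-        strncpy(seen[seen_count], request_id, sizeof seen[0] - 1);
-        seen[seen_count][sizeof seen[0] - 1] = '\0';
-        seen_count++;
-        balance += amount;
-    }
-    return balance;   /* the reply itself is safe to retransmit as well */
-}
-```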
-
-Performance
-
-I am sure Don Knuth is horrified by how misunderstood his partial quote “Premature optimization is the root of all evil” has been. In fact, performance, and the incredible exponential improvements in performance that have continued for over 6 decades (or more than 10 decades depending on how willing you are to project these trends through discrete transistors, vacuum tubes and electromechanical relays), underlie all of the amazing innovation we have seen in our industry and all the change rippling through the economy as “software eats the world”.
-
-A key thing to recognize about this exponential change is that while all components of the system are experiencing exponential change, these exponentials are divergent. So the rate of increase in capacity of a hard disk changes at a different rate from the capacity of memory or the speed of the CPU or the latency between memory and CPU. Even when trends are driven by the same underlying technology, exponentials diverge. [Latency improvements fundamentally trail bandwidth improvements][7]. Exponential change tends to look linear when you are close to it or over short periods but the effects over time can be overwhelming. This overwhelming change in the relationship between the performance of components of the system forces reevaluation of design decisions on a regular basis.
-
-A consequence of this is that design decisions that made sense at one point no longer make sense after a few years. Or in some cases an approach that made sense two decades ago starts to look like a good trade-off again. Modern memory mapping has characteristics that look more like process swapping of the early time-sharing days than it does like demand paging. (This does sometimes result in old codgers like myself claiming that “that’s just the same approach we used back in ‘75” — ignoring the fact that it didn’t make sense for 40 years and now does again because some balance between two components — maybe flash and NAND rather than disk and core memory — has come to resemble a previous relationship).
-
-Important transitions happen when these exponentials cross human constraints. So you move from a limit of two to the sixteenth characters (which a single user can type in a few hours) to two to the thirty-second (which is beyond what a single person can type). So you can capture a digital image with higher resolution than the human eye can perceive. Or you can store an entire music collection on a hard disk small enough to fit in your pocket. Or you can store a digitized video recording on a hard disk. And then later the ability to stream that recording in real time makes it possible to “record” it by storing it once centrally rather than repeatedly on thousands of local hard disks.
-
-The things that stay as a fundamental constraint are three dimensions and the speed of light. We’re back to that Einsteinian universe. We will always have memory hierarchies — they are fundamental to the laws of physics. You will always have stable storage and IO, memory, computation and communications. The relative capacity, latency and bandwidth of these elements will change, but the system is always about how these elements fit together and the balance and tradeoffs between them. Jim Gray was the master of this analysis.
-
-Another consequence of the fundamentals of 3D and the speed of light is that much of performance analysis is about three things: locality, locality, locality. Whether it is packing data on disk, managing processor cache hierarchies, or coalescing data into a communications packet, how data is packed together, the patterns for how you touch that data with locality over time and the patterns of how you transfer that data between components is fundamental to performance. Focusing on less code operating on less data with more locality over space and time is a good way to cut through the noise.
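-
-A throwaway example makes the locality point: the two C functions below perform identical arithmetic on identical data and differ only in the order the data is touched. On typical cache-based hardware the row-major walk is dramatically faster, because every cache line fetched from memory is fully consumed before it is evicted.
-
-```
-#include <stddef.h>
-
-#define N 2048
-
-static double grid[N][N];
-
-/* Row-major walk: consecutive accesses fall in the same cache line. */
-double sum_row_major(void)
-{
-    double total = 0.0;
-    for (size_t i = 0; i < N; i++)
-        for (size_t j = 0; j < N; j++)
-            total += grid[i][j];
-    return total;
-}
-
-/* Column-major walk: same additions, same data, but each access lands on a
- * different cache line, so the run time is set by the memory hierarchy
- * rather than by the arithmetic. */
-double sum_col_major(void)
-{
-    double total = 0.0;
-    for (size_t j = 0; j < N; j++)
-        for (size_t i = 0; i < N; i++)
-            total += grid[i][j];
-    return total;
-}
-```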
-
-Jon Devaan used to say “design the data, not the code”. This also generally means when looking at the structure of a system, I’m less interested in seeing how the code interacts — I want to see how the data interacts and flows. If someone tries to explain a system by describing the code structure and does not understand the rate and volume of data flow, they do not understand the system.
-
-A memory hierarchy also implies we will always have caches — even if some system layer is trying to hide it. Caches are fundamental but also dangerous. Caches are trying to leverage the runtime behavior of the code to change the pattern of interaction between different components in the system. They inherently need to model that behavior, even if that model is implicit in how they fill and invalidate the cache and test for a cache hit. If the model is poor _or becomes_ poor as the behavior changes, the cache will not operate as expected. A simple guideline is that caches _must_ be instrumented — their behavior will degrade over time because of changing behavior of the application and the changing nature and balance of the performance characteristics of the components you are modeling. Every long-time programmer has cache horror stories.
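-
-In practice, “instrumented” can be as simple as the counters in the sketch below (illustrative C, not drawn from any of the systems mentioned here): every lookup records a hit, a miss or an eviction, so a drifting hit rate shows up in monitoring long before anyone profiles the code.
-
-```
-#include <stdint.h>
-#include <string.h>
-
-#define CACHE_SLOTS 256
-
-struct cache_entry {
-    uint64_t key;
-    char     value[64];
-    int      valid;
-};
-
-/* The instrumentation: cheap counters that monitoring can read at any time. */
-struct cache_stats {
-    uint64_t hits;
-    uint64_t misses;
-    uint64_t evictions;
-};
-
-static struct cache_entry slots[CACHE_SLOTS];
-static struct cache_stats stats;
-
-/* Direct-mapped lookup; 'load' is the slow path used on a miss. */
-const char *cache_lookup(uint64_t key, const char *(*load)(uint64_t))
-{
-    struct cache_entry *e = &slots[key % CACHE_SLOTS];
-    if (e->valid && e->key == key) {
-        stats.hits++;
-        return e->value;
-    }
-    stats.misses++;
-    if (e->valid)
-        stats.evictions++;
-    const char *fresh = load(key);
-    strncpy(e->value, fresh, sizeof e->value - 1);
-    e->value[sizeof e->value - 1] = '\0';
-    e->key = key;
-    e->valid = 1;
-    return e->value;
-}
-```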
-
-I was lucky that my early career was spent at BBN, one of the birthplaces of the Internet. It was very natural to think about communications between asynchronous components as the natural way systems connect. Flow control and queueing theory are fundamental to communications systems and more generally the way that any asynchronous system operates. Flow control is inherently resource management (managing the capacity of a channel) but resource management is the more fundamental concern. Flow control also is inherently an end-to-end responsibility, so thinking about asynchronous systems in an end-to-end way comes very naturally. The story of [buffer bloat][8] is well worth understanding in this context because it demonstrates how a lack of understanding of the dynamics of end-to-end behavior, coupled with technology “improvements” (larger buffers in routers), resulted in very long-running problems in the overall network infrastructure.
-
-The concept of “light speed” is one that I’ve found useful in analyzing any system. A light speed analysis doesn’t start with the current performance, it asks “what is the best theoretical performance I could achieve with this design?” What is the real information content being transferred and at what rate of change? What is the underlying latency and bandwidth between components? A light speed analysis forces a designer to have a deeper appreciation for whether their approach could ever achieve the performance goals or whether they need to rethink their basic approach. It also forces a deeper understanding of where performance is being consumed and whether this is inherent or potentially due to some misbehavior. From a constructive point of view, it forces a system designer to understand what are the true performance characteristics of their building blocks rather than focusing on the other functional characteristics.
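-
-The arithmetic behind a light speed analysis is rarely more than a few lines. The numbers in the C snippet below are invented purely for illustration; the shape of the calculation is the useful part: round trips times latency, plus bytes over bandwidth, gives a floor that no amount of tuning inside the current design will beat.
-
-```
-#include <stdio.h>
-
-int main(void)
-{
-    /* Illustrative inputs, not measurements. */
-    double payload_bytes = 2.0e6;     /* real information that must move     */
-    double bytes_per_sec = 1.25e8;    /* a 1 Gbit/s link, in bytes/second    */
-    double round_trip_s  = 0.030;     /* 30 ms between the two endpoints     */
-    double round_trips   = 4.0;       /* handshakes the protocol insists on  */
-
-    double floor_s = round_trips * round_trip_s + payload_bytes / bytes_per_sec;
-
-    printf("light-speed floor: %.0f ms\n", floor_s * 1000.0);
-    /* If the observed time is ten times this floor, some layer is consuming
-       the budget; if it is already near the floor, only a different design
-       (fewer round trips, less data) can improve it. */
-    return 0;
-}
-```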
-
-I spent much of my career building graphical applications. A user sitting at one end of the system defines a key constant and constraint in any such system. The human visual and nervous system is not experiencing exponential change. The system is inherently constrained, which means a system designer can leverage (_must_ leverage) those constraints, e.g. by virtualization (limiting how much of the underlying data model needs to be mapped into view data structures) or by limiting the rate of screen update to the perception limits of the human visual system.
-
-The Nature of Complexity
-
-I have struggled with complexity my entire career. Why do systems and apps get complex? Why doesn’t development within an application domain get easier over time as the infrastructure gets more powerful rather than getting harder and more constrained? In fact, one of our key approaches for managing complexity is to “walk away” and start fresh. Often new tools or languages force us to start from scratch which means that developers end up conflating the benefits of the tool with the benefits of the clean start. The clean start is what is fundamental. This is not to say that some new tool, platform or language might not be a great thing, but I can guarantee it will not solve the problem of complexity growth. The simplest way of controlling complexity growth is to build a smaller system with fewer developers.
-
-Of course, in many cases “walking away” is not an alternative — the Office business is built on hugely valuable and complex assets. With OneNote, Office “walked away” from the complexity of Word in order to innovate along a different dimension. Sway is another example where Office decided that we needed to free ourselves from constraints in order to really leverage key environmental changes and the opportunity to take fundamentally different design approaches. With the Word, Excel and PowerPoint web apps, we decided that the linkage with our immensely valuable data formats was too fundamental to walk away from and that has served as a significant and ongoing constraint on development.
-
-I was influenced by Fred Brooks’ “[No Silver Bullet][9]” essay about accident and essence in software development. There is much irreducible complexity embedded in the essence of what the software is trying to model. I just recently re-read that essay and was surprised to find that one of the two trends he imbued with the most power to impact future developer productivity was an increasing emphasis on “buy” in the “build vs. buy” decision — foreshadowing the change that open-source and cloud infrastructure have had. The other trend was the move to more “organic” or “biological” incremental approaches over more purely constructivist approaches. A modern reader sees that as the shift to agile and continuous development processes. This in 1986!
-
-I have been much taken with the work of Stuart Kauffman on the fundamental nature of complexity. Kauffman builds up from a simple model of Boolean networks (“[NK models][10]”) and then explores the application of this fundamentally mathematical construct to things like systems of interacting molecules, genetic networks, ecosystems, economic systems and (in a limited way) computer systems to understand the mathematical underpinning to emergent ordered behavior and its relationship to chaotic behavior. In a highly connected system, you inherently have a system of conflicting constraints that makes it (mathematically) hard to evolve that system forward (viewed as an optimization problem over a rugged landscape). A fundamental way of controlling this complexity is to batch the system into independent elements and limit the interconnections between elements (essentially reducing both “N” and “K” in the NK model). Of course this feels natural to a system designer applying techniques of complexity hiding, information hiding and data abstraction and using loose asynchronous coupling to limit interactions between components.
-
-A challenge we always face is that many of the ways we want to evolve our systems cut across all dimensions. Real-time co-authoring has been a very concrete (and complex) recent example for the Office apps.
-
-Complexity in our data models often equates with “power”. An inherent challenge in designing user experiences is that we need to map a limited set of gestures into a transition in the underlying data model state space. Increasing the dimensions of the state space inevitably creates ambiguity in the user gesture. This is “[just math][11]” which means that often times the most fundamental way to ensure that a system stays “easy to use” is to constrain the underlying data model.
-
-Management
-
-I started taking leadership roles in high school (student council president!) and always found it natural to take on larger responsibilities. At the same time, I was always proud that I continued to be a full-time programmer through every management stage. VP of development for Office finally pushed me over the edge and away from day-to-day programming. I’ve enjoyed returning to programming as I stepped away from that job over the last year — it is an incredibly creative and fulfilling activity (and maybe a little frustrating at times as you chase down that “last” bug).
-
-Despite having been a “manager” for over a decade by the time I arrived at Microsoft, I really learned about management after my arrival in 1996. Microsoft reinforced that “engineering leadership is technical leadership”. This aligned with my perspective and helped me both accept and grow into larger management responsibilities.
-
-The thing that most resonated with me on my arrival was the fundamental culture of transparency in Office. The manager’s job was to design and use transparent processes to drive the project. Transparency is not simple, automatic, or a matter of good intentions — it needs to be designed into the system. The best transparency comes by being able to track progress as the granular output of individual engineers in their day-to-day activity (work items completed, bugs opened and fixed, scenarios complete). Beware subjective red/green/yellow, thumbs-up/thumbs-down dashboards!
-
-I used to say my job was to design feedback loops. Transparent processes provide a way for every participant in the process — from individual engineer to manager to exec — to use the data being tracked to drive the process and result and to understand the role they are playing in the overall project goals. Ultimately transparency ends up being a great tool for empowerment — the manager can invest more and more local control in those closest to the problem because of confidence they have visibility to the progress being made. Coordination emerges naturally.
-
-Key to this is that the goal has actually been properly framed (including key resource constraints like ship schedule). Decision-making that needs to constantly flow up and down the management chain usually reflects poor framing of goals and constraints by management.
-
-I was at Beyond Software when I really internalized the importance of having a singular leader over a project. The engineering manager departed (later to hire me away for FrontPage) and all four of the leads were hesitant to step into the role — not least because we did not know how long we were going to stick around. We were all very technically sharp and got along well so we decided to work as peers to lead the project. It was a mess. The one obvious problem is that we had no strategy for allocating resources between the pre-existing groups — one of the top responsibilities of management! The deep accountability one feels when you know you are personally in charge was missing. We had no leader really accountable for unifying goals and defining constraints.
-
-I have a visceral memory of the first time I fully appreciated the importance of _listening_ for a leader. I had just taken on the role of Group Development Manager for Word, OneNote, Publisher and Text Services. There was a significant controversy about how we were organizing the text services team and I went around to each of the key participants, heard what they had to say and then integrated and wrote up all I had heard. When I showed the write-up to one of the key participants, his reaction was “wow, you really heard what I had to say”! All of the largest issues I drove as a manager (e.g. cross-platform and the shift to continuous engineering) involved carefully listening to all the players. Listening is an active process that involves trying to understand the perspectives and then writing up what I learned and testing it to validate my understanding. When a key hard decision needed to happen, by the time the call was made everyone knew they had been heard and understood (whether they agreed with the decision or not).
-
-It was the previous job, as FrontPage development manager, where I internalized the “operational dilemma” inherent in decision making with partial information. The longer you wait, the more information you will have to make a decision. But the longer you wait, the less flexibility you will have to actually implement it. At some point you just need to make a call.
-
-Designing an organization involves a similar tension. You want to increase the resource domain so that a consistent prioritization framework can be applied across a larger set of resources. But the larger the resource domain, the harder it is to actually have all the information you need to make good decisions. An organizational design is about balancing these two factors. Software complicates this because characteristics of the software can cut across the design in an arbitrary dimensionality. Office has used [shared teams][12] to address both these issues (prioritization and resources) by having cross-cutting teams that can share work (add resources) with the teams they are building for.
-
-One dirty little secret you learn as you move up the management ladder is that you and your new peers aren’t suddenly smarter because you now have more responsibility. This reinforces that the organization as a whole better be smarter than the leader at the top. Empowering every level to own their decisions within a consistent framing is the key approach to making this true. Listening and making yourself accountable to the organization for articulating and explaining the reasoning behind your decisions is another key strategy. Surprisingly, fear of making a dumb decision can be a useful motivator for ensuring you articulate your reasoning clearly and make sure you listen to all inputs.
-
-Conclusion
-
-At the end of my interview round for my first job out of college, the recruiter asked if I was more interested in working on “systems” or “apps”. I didn’t really understand the question. Hard, interesting problems arise at every level of the software stack and I’ve had fun plumbing all of them. Keep learning.
-
---------------------------------------------------------------------------------
-
-via: https://hackernoon.com/education-of-a-programmer-aaecf2d35312
-
-作者:[ Terry Crowley][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://hackernoon.com/@terrycrowley
-[1]:https://medium.com/@terrycrowley/leaky-by-design-7b423142ece0#.x67udeg0a
-[2]:https://medium.com/@terrycrowley/how-to-think-about-cancellation-3516fc342ae#.3pfjc5b54
-[3]:http://queue.acm.org/detail.cfm?id=2462076
-[4]:http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf
-[5]:https://medium.com/@terrycrowley/model-view-controller-and-loose-coupling-6370f76e9cde#.o4gnupqzq
-[6]:https://en.wikipedia.org/wiki/Termcap
-[7]:http://www.ll.mit.edu/HPEC/agendas/proc04/invited/patterson_keynote.pdf
-[8]:https://en.wikipedia.org/wiki/Bufferbloat
-[9]:http://worrydream.com/refs/Brooks-NoSilverBullet.pdf
-[10]:https://en.wikipedia.org/wiki/NK_model
-[11]:https://medium.com/@terrycrowley/the-math-of-easy-to-use-14645f819201#.untmk9eq7
-[12]:https://medium.com/@terrycrowley/breaking-conways-law-a0fdf8500413#.gqaqf1c5k
diff --git a/sources/tech/20090701 The One in Which I Call Out Hacker News.md b/sources/tech/20090701 The One in Which I Call Out Hacker News.md
index 5a43be3bd6..44c751dd5a 100644
--- a/sources/tech/20090701 The One in Which I Call Out Hacker News.md
+++ b/sources/tech/20090701 The One in Which I Call Out Hacker News.md
@@ -1,3 +1,5 @@
+translating by hopefully2333
+
# [The One in Which I Call Out Hacker News][14]
diff --git a/sources/tech/20141028 When Does Your OS Run.md b/sources/tech/20141028 When Does Your OS Run.md
deleted file mode 100644
index 099c0347bf..0000000000
--- a/sources/tech/20141028 When Does Your OS Run.md
+++ /dev/null
@@ -1,53 +0,0 @@
-When Does Your OS Run?
-============================================================
-
-
-Here’s a question: in the time it takes you to read this sentence, has your OS been _running_? Or was it only your browser? Or were they perhaps both idle, just waiting for you to _do something already_?
-
-These questions are simple but they cut through the essence of how software works. To answer them accurately we need a good mental model of OS behavior, which in turn informs performance, security, and troubleshooting decisions. We’ll build such a model in this post series using Linux as the primary OS, with guest appearances by OS X and Windows. I’ll link to the Linux kernel sources for those who want to delve deeper.
-
-The fundamental axiom here is that _at any given moment, exactly one task is active on a CPU_ . The task is normally a program, like your browser or music player, or it could be an operating system thread, but it is one task. Not two or more. Never zero, either. One. Always.
-
-This sounds like trouble. For what if, say, your music player hogs the CPU and doesn’t let any other tasks run? You would not be able to open a tool to kill it, and even mouse clicks would be futile as the OS wouldn’t process them. You could be stuck blaring “What does the fox say?” and incite a workplace riot.
-
-That’s where interrupts come in. Much as the nervous system interrupts the brain to bring in external stimuli – a loud noise, a touch on the shoulder – the [chipset][1] in a computer’s motherboard interrupts the CPU to deliver news of outside events – key presses, the arrival of network packets, the completion of a hard drive read, and so on. Hardware peripherals, the interrupt controller on the motherboard, and the CPU itself all work together to implement these interruptions, called interrupts for short.
-
-Interrupts are also essential in tracking that which we hold dearest: time. During the [boot process][2] the kernel programs a hardware timer to issue timer interrupts at a periodic interval, for example every 10 milliseconds. When the timer goes off, the kernel gets a shot at the CPU to update system statistics and take stock of things: has the current program been running for too long? Has a TCP timeout expired? Interrupts give the kernel a chance to both ponder these questions and take appropriate actions. It’s as if you set periodic alarms throughout the day and used them as checkpoints: should I be doing what I’m doing right now? Is there anything more pressing? One day you find ten years have got behind you.
-
-These periodic hijackings of the CPU by the kernel are called ticks, so interrupts quite literally make your OS tick. But there’s more: interrupts are also used to handle some software events like integer overflows and page faults, which involve no external hardware. Interrupts are the most frequent and crucial entry point into the OS kernel. They’re not some oddity for the EE people to worry about, they’re _the_ mechanism whereby your OS runs.
-
-Enough talk, let’s see some action. Below is a network card interrupt in an Intel Core i5 system. The diagrams now have image maps, so you can click on juicy bits for more information. For example, each device links to its Linux driver.
-
-![](http://duartes.org/gustavo/blog/img/os/hardware-interrupt.png)
-
-
-
-Let’s take a look at this. First off, since there are many sources of interrupts, it wouldn’t be very helpful if the hardware simply told the CPU “hey, something happened!” and left it at that. The suspense would be unbearable. So each device is assigned an interrupt request line, or IRQ, during power up. These IRQs are in turn mapped into interrupt vectors, a number between 0 and 255, by the interrupt controller. By the time an interrupt reaches the CPU it has a nice, well-defined number insulated from the vagaries of hardware.
-
-The CPU in turn has a pointer to what’s essentially an array of 256 functions, supplied by the kernel, where each function is the handler for that particular interrupt vector. We’ll look at this array, the Interrupt Descriptor Table (IDT), in more detail later on.
-
-Whenever an interrupt arrives, the CPU uses its vector as an index into the IDT and runs the appropriate handler. This happens as a special function call that takes place in the context of the currently running task, allowing the OS to respond to external events quickly and with minimal overhead. So web servers out there indirectly _call a function in your CPU_ when they send you data, which is either pretty cool or terrifying. Below we show a situation where a CPU is busy running a Vim command when an interrupt arrives:
-
-![](http://duartes.org/gustavo/blog/img/os/vim-interrupted.png)
-
-Notice how the interrupt’s arrival causes a switch to kernel mode and [ring zero][3] but it _does not change the active task_ . It’s as if Vim made a magic function call straight into the kernel, but Vim is _still there_ , its [address space][4] intact, waiting for that call to return.
-
-Exciting stuff! Alas, I need to keep this post-sized, so let’s finish up for now. I understand we have not answered the opening question and have in fact opened up new questions, but you now suspect ticks were taking place while you read that sentence. We’ll find the answers as we flesh out our model of dynamic OS behavior, and the browser scenario will become clear. If you have questions, especially as the posts come out, fire away and I’ll try to answer them in the posts themselves or as comments. Next installment is tomorrow on [RSS][5] and [Twitter][6].
-
---------------------------------------------------------------------------------
-
-via: http://duartes.org/gustavo/blog/post/when-does-your-os-run/
-
-作者:[gustavo][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://duartes.org/gustavo/blog/about/
-[1]:http://duartes.org/gustavo/blog/post/motherboard-chipsets-memory-map
-[2]:http://duartes.org/gustavo/blog/post/kernel-boot-process
-[3]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
-[4]:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
-[5]:http://feeds.feedburner.com/GustavoDuarte
-[6]:http://twitter.com/food4hackers
diff --git a/sources/tech/20160325 Network automation with Ansible.md b/sources/tech/20160325 Network automation with Ansible.md
deleted file mode 100644
index 6072731a74..0000000000
--- a/sources/tech/20160325 Network automation with Ansible.md
+++ /dev/null
@@ -1,993 +0,0 @@
-Translating by qhwdw
-
-Network automation with Ansible
-================
-
-### Network Automation
-
-As the IT industry transforms with technologies from server virtualization to public and private clouds with self-service capabilities, containerized applications, and Platform as a Service (PaaS) offerings, one of the areas that continues to lag behind is the network.
-
-Over the past 5+ years, the network industry has seen many new trends emerge, many of which are categorized as software-defined networking (SDN).
-
-###### Note
-
-SDN is a new approach to building, managing, operating, and deploying networks. The original definition for SDN was that there needed to be a physical separation of the control plane from the data (packet forwarding) plane, and the decoupled control plane must control several devices.
-
-Nowadays, many more technologies get put under the _SDN umbrella_, including controller-based networks, APIs on network devices, network automation, whitebox switches, policy networking, Network Functions Virtualization (NFV), and the list goes on.
-
-For purposes of this report, we refer to SDN solutions as solutions that include a network controller as part of the solution, and improve manageability of the network but don’t necessarily decouple the control plane from the data plane.
-
-One of these trends is the emergence of application programming interfaces (APIs) on network devices as a way to manage and operate these devices and truly offer machine to machine communication. APIs simplify the development process when it comes to automation and building network applications, providing more structure on how data is modeled. For example, when API-enabled devices return data in JSON/XML, it is structured and easier to work with as compared to CLI-only devices that return raw text that then needs to be manually parsed.
-
-Prior to APIs, the two primary mechanisms used to configure and manage network devices were the command-line interface (CLI) and Simple Network Management Protocol (SNMP). If we look at each of those, the CLI was meant as a human interface to the device, and SNMP wasn’t built to be a real-time programmatic interface for network devices.
-
-Luckily, as many vendors scramble to add APIs to devices, sometimes _just because_ it’s a check in the box on an RFP, there is actually a great byproduct—enabling network automation. Once a true API is exposed, the process for accessing data within the device, as well as managing the configuration, is greatly simplified, but as we’ll review in this report, automation is also possible using more traditional methods, such as CLI/SNMP.
-
-###### Note
-
-As network refreshes happen in the months and years to come, vendor APIs should no doubt be tested and used as key decision-making criteria for purchasing network equipment (virtual and physical). Users should want to know how data is modeled by the equipment, what type of transport is used by the API, if the vendor offers any libraries or integrations to automation tools, and if open standards/protocols are being used.
-
-Generally speaking, network automation, like most types of automation, equates to doing things faster. While doing more faster is nice, reducing the time for deployments and configuration changes isn’t always a problem that needs solving for many IT organizations.
-
-In addition to speed, we’ll now take a look at a few of the reasons that IT organizations of all shapes and sizes should look at gradually adopting network automation. You should note that the same principles apply to other types of automation as well.
-
-
-### Simplified Architectures
-
-Today, every network is a unique snowflake, and network engineers take pride in solving transport and application issues with one-off network changes that ultimately make the network not only harder to maintain and manage, but also harder to automate.
-
-Instead of thinking about network automation and management as a secondary or tertiary project, it needs to be included from the beginning as new architectures and designs are deployed. Which features work across vendors? Which extensions work across platforms? What type of API or automation tooling works when using particular network device platforms? When these questions get answered earlier on in the design process, the resulting architecture becomes simpler, repeatable, and easier to maintain _and_ automate, all with fewer vendor proprietary extensions enabled throughout the network.
-
-### Deterministic Outcomes
-
-In an enterprise organization, change review meetings take place to review upcoming changes on the network, the impact they have on external systems, and rollback plans. In a world where a human is touching the CLI to make those _upcoming changes_, the impact of typing the wrong command is catastrophic. Imagine a team with three, four, five, or 50 engineers. Every engineer may have his own way of making that particular _upcoming change_. And the ability to use a CLI or a GUI does not eliminate or reduce the chance of error during the control window for the change.
-
-Using proven and tested network automation helps achieve more predictable behavior and gives the executive team a better chance at achieving deterministic outcomes, moving one step closer to having the assurance that the task is going to get done right the first time without human error.
-
-
-### Business Agility
-
-It goes without saying that network automation offers speed and agility not only for deploying changes, but also for retrieving data from network devices as fast as the business demands. Since the advent of server virtualization, server and virtualization admins have had the ability to deploy new applications almost instantaneously. And the faster applications are deployed, the more questions are raised as to why it takes so long to configure a VLAN, route, FW ACL, or load-balancing policy.
-
-By understanding the most common workflows within an organization and _why_ network changes are really required, the process to deploy modern automation tooling such as Ansible becomes much simpler.
-
-This chapter introduced some of the high-level points on why you should consider network automation. In the next section, we take a look at what Ansible is and continue to dive into different types of network automation that are relevant to IT organizations of all sizes.
-
-
-### What Is Ansible?
-
-Ansible is one of the newer IT automation and configuration management platforms that exists in the open source world. It’s often compared to other tools such as Puppet, Chef, and SaltStack. Ansible emerged on the scene in 2012 as an open source project created by Michael DeHaan, who also created Cobbler and cocreated Func, both of which are very popular in the open source community. Less than 18 months after the Ansible open source project started, Ansible Inc. was formed and received $6 million in Series A funding. It became and is still the number one contributor to and supporter of the Ansible open source project. In October 2015, Red Hat acquired Ansible Inc.
-
-But, what exactly is Ansible?
-
-_Ansible is a super-simple automation platform that is agentless and extensible._
-
-Let’s dive into this statement in a bit more detail and look at the attributes of Ansible that have helped it gain a significant amount of traction within the industry.
-
-
-### Simple
-
-One of the most attractive attributes of Ansible is that you _DO NOT_ need any special coding skills in order to get started. All instructions, or tasks to be automated, are documented in a standard, human-readable data format that anyone can understand. It is not uncommon to have Ansible installed and automating tasks in under 30 minutes!
-
-For example, the following task from an Ansible playbook is used to ensure a VLAN exists on a Cisco Nexus switch:
-
-```
-- nxos_vlan: vlan_id=100 name=web_vlan
-```
-
-You can tell by looking at this almost exactly what it’s going to do without understanding or writing any code!
-
-###### Note
-
-The second half of this report covers the Ansible terminology (playbooks, plays, tasks, modules, etc.) in great detail. However, we have included a few brief examples in the meantime to convey key concepts when using Ansible for network automation.
-
-### Agentless
-
-If you look at other tools on the market, such as Puppet and Chef, you’ll learn that, by default, they require that each device you are automating have specialized software installed. This is _NOT_ the case with Ansible, and this is the major reason why Ansible is a great choice for networking automation.
-
-It’s well understood that IT automation tools, including Puppet, Chef, CFEngine, SaltStack, and Ansible, were initially built to manage and automate the configuration of Linux hosts to increase the pace at which applications are deployed. Because Linux systems were being automated, getting agents installed was never a technical hurdle to overcome. If anything, it just delayed the setup, since now _N_ number of hosts (the hosts you want to automate) needed to have software deployed on them.
-
-On top of that, when agents are used, there is additional complexity required for DNS and NTP configuration. These are services that most environments do have already, but when you need to get something up fairly quick or simply want to see what it can do from a test perspective, it could significantly delay the overall setup and installation process.
-
-Since this report is meant to cover Ansible for network automation, it’s worth pointing out that having Ansible as an agentless platform is even more compelling to network admins than to sysadmins. Why is this?
-
-It’s more compelling for network admins because as mentioned, Linux operating systems are open, and anything can be installed on them. For networking, this is definitely not the case, although it is gradually changing. If we take the most widely deployed network operating system, Cisco IOS, as just one example and ask the question, _"Can third-party software be installed on IOS based platforms?"_ it shouldn’t come as a surprise that the answer is _NO_.
-
-For the last 20+ years, nearly all network operating systems have been closed and vertically integrated with the underlying network hardware. Because it’s not so easy to load an agent on a network device (router, switch, load balancer, firewall, etc.) without vendor support, having an automation platform like Ansible that was built from the ground up to be agentless and extensible is just what the doctor ordered for the network industry. We can finally start eliminating manual interactions with the network with ease!
-
-### Extensible
-
-Ansible is also extremely extensible. As open source and code start to play a larger role in the network industry, having platforms that are extensible is a must. This means that if the vendor or community doesn’t provide a particular feature or function, the open source community, end user, customer, consultant, or anyone else can _extend_ Ansible to enable a given set of functionality. In the past, the network vendor or tool vendor was on the hook to provide the new plug-ins and integrations. Imagine using an automation platform like Ansible, and your network vendor of choice releases a new feature that you _really_ need automated. While the network vendor or Ansible could in theory release the new plug-in to automate that particular feature, the great thing is, anyone from your internal engineers to your value-added reseller (VARs) or consultant could now provide these integrations.
-
-It is a fact that Ansible is extremely extensible because as stated, Ansible was initially built to automate applications and systems. It is because of Ansible’s extensibility that Ansible integrations have been written for network vendors, including but not limited to Cisco, Arista, Juniper, F5, HP, A10, Cumulus, and Palo Alto Networks.
-
-
-### Why Ansible for Network Automation?
-
-We’ve taken a brief look at what Ansible is and also some of the benefits of network automation, but why should Ansible be used for network automation?
-
-In full transparency, many of the reasons already stated are what make Ansible such a great platform for automating application deployments. However, we’ll take this a step further now, getting even more focused on networking, and continue to outline a few other key points to be aware of.
-
-
-### Agentless
-
-The importance of an agentless architecture cannot be stressed enough when it comes to network automation, especially as it pertains to automating existing devices. If we take a look at all devices currently installed at various parts of the network, from the DMZ and campus, to the branch and data center, the lion’s share of devices do _NOT_ have a modern device API. While having an API makes things so much simpler from an automation perspective, an agentless platform like Ansible makes it possible to automate and manage those _legacy_ _(traditional)_ devices, for example, _CLI-based devices_, making it a tool that can be used in any network environment.
-
-###### Note
-
-If CLI-only devices are integrated with Ansible, the mechanisms as to how the devices are accessed for read-only and read-write operations occur through protocols such as telnet, SSH, and SNMP.
-
-As standalone network devices like routers, switches, and firewalls continue to add support for APIs, SDN solutions are also emerging. The one common theme with SDN solutions is that they all offer a single point of integration and policy management, usually in the form of an SDN controller. This is true for solutions such as Cisco ACI, VMware NSX, Big Switch Big Cloud Fabric, and Juniper Contrail, as well as many of the other SDN offerings from companies such as Nuage, Plexxi, Plumgrid, Midokura, and Viptela. This even includes open source controllers such as OpenDaylight.
-
-These solutions all simplify the management of networks, as they allow an administrator to start to migrate from box-by-box management to network-wide, single-system management. While this is a great step in the right direction, these solutions still don’t eliminate the risks for human error during change windows. For example, rather than configure _N_ switches, you may need to configure a single GUI that could take just as long in order to make the required configuration change—it may even be more complex, because after all, who prefers a GUI _over_ a CLI! Additionally, you may possibly have different types of SDN solutions deployed per application, network, region, or data center.
-
-The need to automate networks, for configuration management, monitoring, and data collection, does not go away as the industry begins migrating to controller-based network architectures.
-
-As most software-defined networks are deployed with a controller, nearly all controllers expose a modern REST API. And because Ansible has an agentless architecture, it makes it extremely simple to automate not only legacy devices that may not have an API, but also software-defined networking solutions via REST APIs, all without requiring any additional software (agents) on the endpoints. The net result is being able to automate any type of device using Ansible with or without an API.
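-
-To make this concrete, here is a minimal sketch of querying a controller’s REST API with the core `uri` module; the controller URL, endpoint path, and credentials are placeholders, not taken from any specific product:
-
-```
----
-
-  - name: QUERY AN SDN CONTROLLER REST API
-    hosts: localhost
-    connection: local
-    gather_facts: no
-
-    tasks:
-      - name: GET THE DEVICE INVENTORY FROM THE CONTROLLER
-        uri:
-          url: https://controller.example.com/api/v1/devices  # placeholder endpoint
-          method: GET
-          user: admin                                         # placeholder credentials
-          password: admin
-          return_content: yes
-          validate_certs: no
-        register: controller_devices
-```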
-
-
-### Free and Open Source Software (FOSS)
-
-Being that Ansible is open source with all code publicly accessible on GitHub, it is absolutely free to get started using Ansible. It can literally be installed and start providing value to network engineers in minutes. Neither the Ansible open source project nor Ansible Inc. requires any meetings with sales reps before handing over the software. That is stating the obvious, since it’s true for all open source projects, but being that the use of open source, community-driven software within the network industry is fairly new and gradually increasing, we wanted to explicitly make this point.
-
-It is also worth stating that Ansible, Inc. is indeed a company and needs to make money somehow, right? While Ansible is open source, it also has an enterprise product called Ansible Tower that adds features such as role-based access control (RBAC), reporting, web UI, REST APIs, multi-tenancy, and much more, which is usually a nice fit for enterprises looking to deploy Ansible. And the best part is that even Ansible Tower is _FREE_ for up to 10 devices—so, at least you can get a taste of Tower to see if it can benefit your organization without spending a dime and sitting in countless sales meetings.
-
-
-### Extensible
-
-We stated earlier that Ansible was primarily built as an automation platform for deploying Linux applications, although it has expanded to Windows since the early days. The point is that the Ansible open source project did not have the goal of automating network infrastructure. The truth is that the more the Ansible community understood how flexible and extensible the underlying Ansible architecture was, the easier it became to _extend_ Ansible for their automation needs, which included networking. Over the past two years, there have been a number of Ansible integrations developed, many by industry independents such as Matt Oswalt, Jason Edelman, Kirk Byers, Elisa Jasinska, David Barroso, Michael Ben-Ami, Patrick Ogenstad, and Gabriele Gerbino, as well as by leading network vendors such as Arista, Juniper, Cumulus, Cisco, F5, and Palo Alto Networks.
-
-
-### Integrating into Existing DevOps Workflows
-
-Ansible is used for application deployments within IT organizations. It’s used by operations teams that need to manage the deployment, monitoring, and management of various types of applications. By integrating Ansible with the network infrastructure, it expands what is possible when new applications are turned up or migrated. Rather than have to wait for a new top of rack (TOR) switch to be turned up, a VLAN to be added, or interface speed/duplex to be checked, all of these network-centric tasks can be automated and integrated into existing workflows that already exist within the IT organization.
-
-
-### Idempotency
-
-The term _idempotency_ (pronounced item-potency) is used often in the world of software development, especially when working with REST APIs, as well as in the world of _DevOps_ automation and configuration management frameworks, including Ansible. One of Ansible’s beliefs is that all Ansible modules (integrations) should be idempotent. Okay, so what does it mean for a module to be idempotent? After all, this is a new term for most network engineers.
-
-The answer is simple. Being idempotent allows the defined task to run one time or a thousand times without having an adverse effect on the target system, only ever making the change once. In other words, if a change is required to get the system into its desired state, the change is made; and if the device is already in its desired state, no change is made. This is unlike most traditional custom scripts and the copy and pasting of CLI commands into a terminal window. When the same command or script is executed repeatedly on the same system, errors are (sometimes) raised. Ever paste a command set into a router and get some type of error that invalidates the rest of your configuration? Was that fun?
-
-Another example is if you have a text file or a script that configures 10 VLANs, the same commands are then entered 10 times _EVERY_ time the script is run. If an idempotent Ansible module is used, the existing configuration is gathered first from the network device, and each new VLAN being configured is checked against the current configuration. Only if the new VLAN needs to be added (or changed—VLAN name, as an example) is a change or command actually pushed to the device.
-
-As the technologies become more complex, the value of idempotency only increases because with idempotency, you shouldn’t care about the _existing_ state of the network device being modified, only the _desired_ state that you are trying to achieve from a network configuration and policy perspective.
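-
-As a minimal sketch of what this looks like in practice (reusing the `nxos_vlan` module shown earlier, with the credentials hardcoded purely for brevity), the very same task can be run over and over:
-
-```
----
-
-  - name: IDEMPOTENT VLAN EXAMPLE
-    hosts: leaf1
-    connection: local
-    gather_facts: no
-
-    tasks:
-      # First run: VLAN 10 is missing, so a command is pushed and the task reports "changed".
-      # Every run after that: the device already matches the desired state, so nothing is sent.
-      - name: ENSURE VLAN 10 EXISTS
-        nxos_vlan:
-          vlan_id=10
-          name=web_vlan
-          host={{ inventory_hostname }}
-          username=admin
-          password=admin
-```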
-
-
-### Network-Wide and Ad Hoc Changes
-
-One of the problems solved with configuration management tools is configuration drift (when a device’s desired configuration gradually drifts, or changes, over time due to manual change and/or having multiple disparate tools being used in an environment)—in fact, this is where tools like Puppet and Chef got started. Agents _phone home_ to the head-end server, validate its configuration, and if a change is required, the change is made. The approach is simple enough. What if an outage occurs and you need to troubleshoot though? You usually bypass the management system, go direct to a device, find the fix, and quickly leave for the day, right? Sure enough, at the next time interval when the agent phones back home, the change made to fix the problem is overwritten (based on how the _master/head-end server_ is configured). One-off changes should always be limited in highly automated environments, but tools that still allow for them are greatly valuable. As you guessed, one of these tools is Ansible.
-
-Because Ansible is agentless, there is not a default push or pull to prevent configuration drift. The tasks to automate are defined in what is called an Ansible playbook. When using Ansible, it is up to the user to run the playbook. If the playbook is to be executed at a given time interval and you’re not using Ansible Tower, you will definitely know how often the tasks are run; if you are just using the native Ansible command line from a terminal prompt, the playbook is run once and only once.
-
-Running a playbook once by default is attractive for network engineers. It is added peace of mind that changes made manually on the device are not going to be automatically overwritten. Additionally, the scope of devices that a playbook is executed against is easily changed when needed such that even if a single change needs to automate only a single device, Ansible can still be used. The _scope_ of devices is determined by what is called an Ansible inventory file; the inventory could have one device or a thousand devices.
-
-The following shows a sample inventory file with two groups defined and a total of six network devices:
-
-```
-[core-switches]
-dc-core-1
-dc-core-2
-
-[leaf-switches]
-leaf1
-leaf2
-leaf3
-leaf4
-```
-
-To automate all hosts, a snippet from your play definition in a playbook looks like this:
-
-```
-hosts: all
-```
-
-And to automate just one leaf switch, it looks like this:
-
-```
-hosts: leaf1
-```
-
-And just the core switches:
-
-```
-hosts: core-switches
-```
-
-###### Note
-
-As stated previously, playbooks, plays, and inventories are covered in more detail later on in this report.
-
-Being able to easily automate one device or _N_ devices makes Ansible a great choice for making those one-off changes when they are required. It’s also great for those changes that are network-wide: possibly for shutting down all interfaces of a given type, configuring interface descriptions, or adding VLANs to wiring closets across an enterprise campus network.
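-
-As a rough sketch of the network-wide case, the following play adds the web VLAN everywhere; the `closet-switches` group is a hypothetical inventory group, the credentials are again placeholders, and narrowing `hosts:` down to a single device name turns the same play into a one-off change:
-
-```
----
-
-  - name: ADD THE WEB VLAN TO EVERY WIRING CLOSET SWITCH
-    hosts: closet-switches
-    connection: local
-    gather_facts: no
-
-    tasks:
-      - name: ENSURE VLAN 10 EXISTS
-        nxos_vlan:
-          vlan_id=10
-          name=web_vlan
-          host={{ inventory_hostname }}
-          username=admin
-          password=admin
-```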
-
-### Network Task Automation with Ansible
-
-This report is gradually getting more technical in two areas. The first area is around the details and architecture of Ansible, and the second area is about exactly what types of tasks can be automated from a network perspective with Ansible. The latter is what we’ll take a look at in this chapter.
-
-Automation is commonly equated with speed, and considering that some network tasks don’t require speed, it’s easy to see why some IT teams don’t see the value in automation. VLAN configuration is a great example because you may be thinking, "How _fast_ does a VLAN really need to get created? Just how many VLANs are being added on a daily basis? Do _I_ really need automation?”
-
-In this section, we are going to focus on several other tasks where automation makes sense such as device provisioning, data collection, reporting, and compliance. But remember, as we stated earlier, automation is much more than speed and agility as it’s offering you, your team, and your business more predictable and more deterministic outcomes.
-
-### Device Provisioning
-
-One of the easiest and fastest ways to get started using Ansible for network automation is creating device configuration files that are used for initial device provisioning and pushing them to network devices.
-
-If we take this process and break it down into two steps, the first step is creating the configuration file, and the second is pushing the configuration onto the device.
-
-First, we need to decouple the _inputs_ from the underlying vendor proprietary syntax (CLI) of the config file. This means we’ll have separate files with values for the configuration parameters such as VLANs, domain information, interfaces, routing, and everything else, and then, of course, a configuration template file(s). For this example, this is our standard golden template that’s used for all devices getting deployed. Ansible helps bridge the gap between rendering the inputs and values with the configuration template. In less than a few seconds, Ansible can generate hundreds of configuration files predictably and reliably.
-
-Let’s take a quick look at an example of taking a current configuration and decomposing it into a template and separate variables (inputs) file.
-
-Here is an example of a configuration file snippet:
-
-```
-hostname leaf1
-ip domain-name ntc.com
-!
-vlan 10
- name web
-!
-vlan 20
- name app
-!
-vlan 30
- name db
-!
-vlan 40
- name test
-!
-vlan 50
- name misc
-```
-
-If we extract the input values, this file is transformed into a template.
-
-###### Note
-
-Ansible uses the Python-based Jinja2 templating language, thus the template called _leaf.j2_ is a Jinja2 template.
-
-Note that in the following example the _double curly braces_ denote a variable.
-
-The resulting template looks like this and is given the filename _leaf.j2_:
-
-```
-!
-hostname {{ inventory_hostname }}
-ip domain-name {{ domain_name }}
-!
-!
-{% for vlan in vlans %}
-vlan {{ vlan.id }}
- name {{ vlan.name }}
-{% endfor %}
-!
-```
-
-Since the double curly braces denote variables, and we see those values are not in the template, they need to be stored somewhere. They get stored in a variables file. A matching variables file for the previously shown template looks like this:
-
-```
----
-hostname: leaf1
-domain_name: ntc.com
-vlans:
- - { id: 10, name: web }
- - { id: 20, name: app }
- - { id: 30, name: db }
- - { id: 40, name: test }
- - { id: 50, name: misc }
-```
-
-This means if the team that controls VLANs wants to add a VLAN to the network devices, no problem. Have them change it in the variables file and regenerate a new config file using the Ansible module called `template`. This whole process is idempotent too; only if there is a change to the template or values being entered will a new configuration file be generated.
-
-Once the configuration is generated, it needs to be _pushed_ to the network device. One such method to push configuration files to network devices is using the open source Ansible module called `napalm_install_config`.
-
-The next example is a sample playbook to _build and push_ a configuration to network devices. Again, this playbook uses the `template` module to build the configuration files and the `napalm_install_config` to push them and activate them as the new running configurations on the devices.
-
-Even though every line isn’t reviewed in the example, you can still make out what is actually happening.
-
-###### Note
-
-The following playbook introduces new concepts such as the built-in variable `inventory_hostname`. These concepts are covered in [Ansible Terminology and Getting Started][1].
-
-```
----
-
- - name: BUILD AND PUSH NETWORK CONFIGURATION FILES
- hosts: leaves
- connection: local
- gather_facts: no
-
- tasks:
- - name: BUILD CONFIGS
- template:
- src=templates/leaf.j2
- dest=configs/{{inventory_hostname }}.conf
-
- - name: PUSH CONFIGS
- napalm_install_config:
- hostname={{ inventory_hostname }}
- username={{ un }}
- password={{ pwd }}
- dev_os={{ os }}
- config_file=configs/{{ inventory_hostname }}.conf
- commit_changes=1
- replace_config=0
-```
-
-This two-step process is the simplest way to get started with network automation using Ansible. You simply template your configs, build config files, and push them to the network device—otherwise known as the _BUILD and PUSH_ method.
-
-###### Note
-
-Another example like this is reviewed in much more detail in [Ansible Network Integrations][2].
-
-
-### Data Collection and Monitoring
-
-Monitoring tools typically use SNMP—these tools poll certain management information bases (MIBs) and return data to the monitoring tool. Based on the data being returned, it may be more or less than you actually need. What if interface stats are being polled? You are likely getting back every counter that is displayed in a _show interface_ command. What if you only need _interface resets_ and wish to see these resets correlated to the interfaces that have CDP/LLDP neighbors on them? Of course, this is possible with current technology; it could be you are running multiple show commands and parsing the output manually, or you’re using an SNMP-based tool but going between tabs in the GUI trying to find the data you actually need. How does Ansible help with this?
-
-Being that Ansible is totally open and extensible, it’s possible to collect and monitor the exact counters or values needed. This may require some up-front custom work but is totally worth it in the end, because the data being gathered is what you need, not what the vendor is providing you. Ansible also provides intuitive ways to perform certain tasks conditionally, which means based on data being returned, you can perform subsequent tasks, which may be to collect more data or to make a configuration change.
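-
-For example, here is a minimal sketch (reusing the `template` and `napalm_install_config` modules shown earlier) in which the result of one task is registered and a follow-up task only runs when something actually changed:
-
-```
-    - name: BUILD CONFIGS
-      template:
-        src=templates/leaf.j2
-        dest=configs/{{ inventory_hostname }}.conf
-      register: build
-
-    # The registered result is just JSON data; here it gates the next task.
-    - name: PUSH CONFIGS ONLY WHEN THE RENDERED CONFIG CHANGED
-      napalm_install_config:
-        hostname={{ inventory_hostname }}
-        username={{ un }}
-        password={{ pwd }}
-        dev_os={{ os }}
-        config_file=configs/{{ inventory_hostname }}.conf
-        commit_changes=1
-        replace_config=0
-      when: build.changed
-```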
-
-Network devices have _A LOT_ of static and ephemeral data buried inside, and Ansible helps extract the bits you need.
-
-You can even use Ansible modules that use SNMP behind the scenes, such as a module called `snmp_device_version`. This is another open source module that exists within the community:
-
-```
- - name: GET SNMP DATA
- snmp_device_version:
- host=spine
- community=public
- version=2c
-```
-
-Running the preceding task returns great information about a device and adds some level of discovery capabilities to Ansible. For example, that task returns the following data:
-
-```
-{"ansible_facts": {"ansible_device_os": "nxos", "ansible_device_vendor": "cisco", "ansible_device_version": "7.0(3)I2(1)"}, "changed": false}
-```
-
-You can now determine what type of device something is without knowing up front. All you need to know is the read-only community string of the device.
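-
-Because those values come back as Ansible facts, later tasks in the same play can branch on them. A small sketch, assuming the discovery task above has already run for the host (credentials are placeholders):
-
-```
-    - name: ENSURE VLAN 10 EXISTS, BUT ONLY ON DEVICES DISCOVERED AS NX-OS
-      nxos_vlan:
-        vlan_id=10
-        name=web_vlan
-        host={{ inventory_hostname }}
-        username=admin
-        password=admin
-      when: ansible_device_os == "nxos"
-```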
-
-
-### Migrations
-
-Migrating from one platform to the next is never an easy task. This may be from the same vendor or from different vendors. Vendors may offer a script or a tool to help with migrations. Ansible can be used to build out configuration templates for all types of network devices and operating systems in such a way that you could generate a configuration file for all vendors given a defined and common set of inputs (common data model). Of course, if there are vendor proprietary extensions, they’ll need to be accounted for, too. Having this type of flexibility helps with not only migrations, but also disaster recovery (DR), as it’s very common to have different switch models in the production and DR data centers, maybe even different vendors.
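-
-A rough sketch of that idea reuses the `template` module, assuming each host has a `vendor` variable defined in the inventory (an approach shown later in this report) and that a per-vendor template exists; the template filenames such as _templates/nxos.j2_ or _templates/eos.j2_ are hypothetical:
-
-```
-    - name: RENDER A VENDOR-SPECIFIC CONFIG FROM ONE COMMON DATA MODEL
-      template:
-        src=templates/{{ vendor }}.j2
-        dest=configs/{{ inventory_hostname }}.conf
-```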
-
-
-### Configuration Management
-
-As stated, configuration management is the most common type of automation. What Ansible allows you to do fairly easily is create _roles_ to streamline the consumption of task-based automation. From a high level, a role is a logical grouping of reusable tasks that are automated against a particular group of devices. Another way to think about roles is to think about workflows. First and foremost, workflows and processes need to be understood before automation is going to start adding value. It’s always important to start small and expand from there.
-
-For example, a set of tasks that automate the configuration of routers and switches is very common and is a great place to start. But where do the IP addresses come from that are configured on network devices? Maybe an IP address management solution? Once the IP addresses are allocated for a given function and deployed, does DNS need to be updated too? Do DHCP scopes need to be created?
-
-Can you see how the workflow can start small and gradually expand across different IT systems? As the workflow continues to expand, so would the role.
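-
-A sketch of what consuming such a workflow could look like in a play follows; the role names are hypothetical, and each role would simply bundle the reusable tasks for that part of the workflow:
-
-```
----
-
-  - name: PROVISION NEW TOP OF RACK SWITCHES
-    hosts: tor
-    connection: local
-    gather_facts: no
-
-    roles:
-      - base_config    # hostname, users, management access (hypothetical role)
-      - vlans          # VLAN and SVI configuration (hypothetical role)
-      - dns_records    # register the new addresses in DNS (hypothetical role)
-```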
-
-
-### Compliance
-
-As with many forms of automation, making configuration changes with any type of automation tool is seen as a risk. While making manual changes could arguably be riskier, as you’ve read and may have experienced firsthand, Ansible has capabilities to automate data collection, monitoring, and configuration building, which are all "read-only" and "low risk" actions. One _low risk_ use case that can use the data being gathered is configuration compliance checks and configuration validation. Does the deployed configuration meet security requirements? Are the required networks configured? Is protocol XYZ disabled? Since each module, or integration, with Ansible returns data, it is quite simple to _assert_ that something is _TRUE_ or _FALSE_. And again, based on _it_ being _TRUE_ or _FALSE_, it’s up to you to determine what happens next—maybe it just gets logged, or maybe a complex operation is performed.
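-
-Here is a minimal sketch of such a check, reusing the `snmp_device_version` module shown earlier together with the core `assert` module; the “approved” version string is only an example value:
-
-```
-    - name: GATHER DEVICE FACTS OVER SNMP
-      snmp_device_version:
-        host={{ inventory_hostname }}
-        community=public
-        version=2c
-
-    - name: VERIFY THE DEVICE IS RUNNING THE APPROVED OS VERSION
-      assert:
-        that:
-          - ansible_device_version == "7.0(3)I2(1)"
-```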
-
-### Reporting
-
-We now understand that Ansible can also be used to collect data and perform compliance checks. The data being returned and collected from the device by way of Ansible is up for grabs in terms of what you want to do with it. Maybe the data being returned becomes inputs to other tasks, or maybe you just want to create reports. Being that reports are generated from templates combined with the actual important data to be inserted into the template, the process to create and use reporting templates is the same process used to create configuration templates.
-
-From a reporting perspective, these templates may be flat text files, markdown files that are viewed on GitHub, HTML files that get dynamically placed on a web server, and the list goes on. The user has the power to create the exact type of report she wishes, inserting the exact data she needs to be part of that report.
-
-It is powerful to create reports not only for executive management, but also for the ops engineers, since there are usually different metrics both teams need.
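-
-Because it is the same mechanism used for building configurations, a report task can be as small as the following sketch; _templates/device_report.j2_ and the _reports/_ directory are hypothetical, and the template would simply reference whatever facts or registered data should appear in the report:
-
-```
-    - name: BUILD A PER-DEVICE MARKDOWN REPORT
-      template:
-        src=templates/device_report.j2
-        dest=reports/{{ inventory_hostname }}.md
-```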
-
-
-### How Ansible Works
-
-After looking at what Ansible can offer from a network automation perspective, we’ll now take a look at how Ansible works. You will learn about the overall communication flow from an Ansible control host to the nodes that are being automated. First, we review how Ansible works _out of the box_, and we then take a look at how Ansible, and more specifically Ansible _modules_, work when network devices are being automated.
-
-### Out of the Box
-
-By now, you should understand that Ansible is an automation platform. In fact, it is a lightweight automation platform that is installed on a single server or on every administrator’s laptop within an organization. You decide. Ansible is easily installed using utilities such as pip, apt, and yum on Linux-based machines.
-
-###### Note
-
-The machine that Ansible is installed on is referred to as the _control host_ through the remainder of this report.
-
-The control host will perform all automation tasks that are defined in an Ansible playbook (don’t worry; we’ll cover playbooks and other Ansible terms soon enough). The important piece for now is to understand that a playbook is simply a set of automation tasks and instructions that gets executed on a given number of hosts.
-
-When a playbook is created, you also need to define which hosts you want to automate. The mapping between the playbook and the hosts to automate happens by using what is known as an Ansible inventory file. This was already shown in an earlier example, but here is another sample inventory file showing two groups: `cisco` and `arista`:
-
-```
-[cisco]
-nyc1.acme.com
-nyc2.acme.com
-
-[arista]
-sfo1.acme.com
-sfo2.acme.com
-```
-
-###### Note
-
-You can also use IP addresses within the inventory file, instead of hostnames. For these examples, the hostnames were resolvable via DNS.
-
-As you can see, the Ansible inventory file is a text file that lists hosts and groups of hosts. You then reference a specific host or a group from within the playbook, thus dictating which hosts get automated for a given play and playbook. This is shown in the following two examples.
-
-The first example shows what it looks like if you wanted to automate all hosts within the `cisco` group, and the second example shows how to automate just the _nyc1.acme.com_ host:
-
-```
----
-
- - name: TEST PLAYBOOK
- hosts: cisco
-
- tasks:
- - TASKS YOU WANT TO AUTOMATE
-```
-
-```
----
-
- - name: TEST PLAYBOOK
- hosts: nyc1.acme.com
-
- tasks:
- - TASKS YOU WANT TO AUTOMATE
-```
-
-Now that the basics of inventory files are understood, we can take a look at how Ansible (the control host) communicates with devices _out of the box_ and how tasks are automated on Linux endpoints. This is an important concept to understand, as this is usually different when network devices are being automated.
-
-There are two main requirements for Ansible to work out of the box to automate Linux-based systems. These requirements are SSH and Python.
-
-First, the endpoints must support SSH for transport, since Ansible uses SSH to connect to each target node. Because Ansible supports a pluggable connection architecture, there are also various plug-ins available for different types of SSH implementations.
-
-The second requirement is how Ansible gets around the need to require an _agent_ to preexist on the target node. While Ansible does not require a software agent, it does require an onboard Python execution engine. This execution engine is used to execute Python code that is transmitted from the Ansible control host to the target node being automated.
-
-If we elaborate on this out of the box workflow, it is broken down as follows:
-
-1. When an Ansible play is executed, the control host connects to the Linux-based target node using SSH.
-
-2. For each task, that is, Ansible module being executed within the play, Python code is transmitted over SSH and executed directly on the remote system.
-
-3. Each Ansible module upon execution on the remote system returns JSON data to the control host. This data includes information such as if the configuration changed, if the task passed/failed, and other module-specific data.
-
-4. The JSON data returned back to Ansible can then be used to generate reports using templates or as inputs to subsequent modules.
-
-5. Repeat steps 2 through 4 for each task that exists within the play.
-
-6. Repeat steps 1 through 5 for each play within the playbook.
-
-Shouldn’t this mean that network devices should work out of the box with Ansible because they also support SSH? It is true that network devices do support SSH, but it is the first requirement combined with the second one that limits the functionality possible for network devices.
-
-To start, most network devices do not support Python, which makes the default Ansible connection mechanism a non-starter. That said, over the past few years, vendors have added Python support on several different device platforms. However, most of these platforms still lack the integration needed to allow Ansible to get direct access to a Linux shell over SSH with the proper permissions to copy over the required code, create temp directories and files, and execute the code on box. While all the parts are there for Ansible to work natively with SSH/Python _and_ Linux-based network devices, it still requires network vendors to open their systems more than they already have.
-
-###### Note
-
-It is worth noting that Arista does offer native integration because it is able to drop SSH users directly into a Linux shell with access to a Python execution engine, which in turn does allow Ansible to use its default connection mechanism. Because we called out Arista, we need to also highlight Cumulus as working with Ansible’s default connection mechanism, too. This is because Cumulus Linux is native Linux, and there isn’t a need to use a vendor API for the automation of the Cumulus Linux OS.
-
-### Ansible Network Integrations
-
-The previous section covered the way Ansible works by default. We looked at how Ansible sets up a connection to a device at the beginning of a _play_, executes tasks by copying Python code to the devices, executes the code, and then returns results back to the Ansible control host.
-
-In this section, we’ll take a look at what this process is when automating network devices with Ansible. As already covered, Ansible has a pluggable connection architecture. For _most_ network integrations, the `connection` parameter is set to `local`. The most common place to make the connection type local is within the playbook, as shown in the following example:
-
-```
----
-
- - name: TEST PLAYBOOK
- hosts: cisco
- connection: local
-
- tasks:
- - TASKS YOU WANT TO AUTOMATE
-```
-
-Notice how within the play definition, this example added the `connection` parameter as compared to the examples in the previous section.
-
-This tells Ansible not to connect to the target device via SSH and to just connect to the local machine running the playbook. Basically, this delegates the connection responsibility to the actual Ansible modules being used within the _tasks_ section of the playbook. Delegating power for each type of module allows the modules to connect to the device in whatever fashion necessary; this could be NETCONF for Juniper and HP Comware7, eAPI for Arista, NX-API for Cisco Nexus, or even SNMP for traditional/legacy-based systems that don’t have a programmatic API.
-
-###### Note
-
-Network integrations in Ansible come in the form of Ansible modules. While we continue to whet your appetite using terminology such as playbooks, plays, tasks, and modules to convey key concepts, each of these terms are finally covered in greater detail in [Ansible Terminology and Getting Started][3] and [Hands-on Look at Using Ansible for Network Automation][4].
-
-Let’s take a look at another sample playbook:
-
-```
----
-
- - name: TEST PLAYBOOK
- hosts: cisco
- connection: local
-
- tasks:
- - nxos_vlan: vlan_id=10 name=WEB_VLAN
-```
-
-If you notice, this playbook now includes a task, and this task uses the `nxos_vlan` module. The `nxos_vlan` module is just a Python file, and it is in this file where the connection to the Cisco NX-OS device is made using NX-API. However, the connection could have been set up using any other device API, and this is how vendors and users like us are able to build our own integrations. Integrations (modules) are typically done on a per-feature basis, although as you’ve already seen with modules like `napalm_install_config`, they can be used to _push_ a full configuration file, too.
-
-One of the major differences is that with the default connection mechanism, Ansible launches a persistent SSH connection to the device, and this connection persists for a given play. With `connection=local`, by contrast, the connection setup and teardown occurs within the module itself, so Ansible ends up logging in/out of the device on _every_ task rather than once per play.
-
-And in traditional Ansible fashion, each network module returns JSON data. The only difference is the massaging of this data is happening locally on the Ansible control host versus on the target node. The data returned back to the playbook varies per vendor and type of module, but as an example, many of the Cisco NX-OS modules return back existing state, proposed state, and end state, as well as the commands (if any) that are being sent to the device.
-
-As you get started using Ansible for network automation, it is important to remember that setting the connection parameter to local is taking Ansible out of the connection setup/teardown process and leaving that up to the module. This is why modules supported for different types of vendor platforms will have different ways of communicating with the devices.
-
-
-### Ansible Terminology and Getting Started
-
-This chapter walks through many of the terms and key concepts that have been gradually introduced already in this report. These are terms such as _inventory file_, _playbook_, _play_, _tasks_, and _modules_. We also review a few other concepts that are helpful to be aware of when getting started with Ansible for network automation.
-
-Please reference the following sample inventory file and playbook throughout this section, as they are continuously used in the examples that follow to convey what each Ansible term means.
-
-_Sample inventory_:
-
-```
-# sample inventory file
-# filename inventory
-
-[all:vars]
-user=admin
-pwd=admin
-
-[tor]
-rack1-tor1 vendor=nxos
-rack1-tor2 vendor=nxos
-rack2-tor1 vendor=arista
-rack2-tor2 vendor=arista
-
-[core]
-core1
-core2
-```
-
-_Sample playbook_:
-
-```
----
-# sample playbook
-# filename site.yml
-
- - name: PLAY 1 - Top of Rack (TOR) Switches
- hosts: tor
- connection: local
-
- tasks:
- - name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
- nxos_vlan:
- vlan_id=10
- name=WEB_VLAN
- host={{ inventory_hostname }}
- username=admin
- password=admin
- when: vendor == "nxos"
-
- - name: ENSURE VLAN 10 EXISTS ON ARISTA TOR SWITCHES
- eos_vlan:
- vlanid=10
- name=WEB_VLAN
- host={{ inventory_hostname }}
- username={{ user }}
- password={{ pwd }}
- when: vendor == "arista"
-
- - name: PLAY 2 - Core (TOR) Switches
- hosts: core
- connection: local
-
- tasks:
- - name: ENSURE VLANS EXIST IN CORE
- nxos_vlan:
- vlan_id={{ item }}
- host={{ inventory_hostname }}
- username={{ user }}
- password={{ pwd }}
- with_items:
- - 10
- - 20
- - 30
- - 40
- - 50
-```
-
-### Inventory File
-
-Using an inventory file, such as the preceding one, enables us to automate tasks for specific hosts and groups of hosts by referencing the proper host/group using the `hosts` parameter that exists at the top section of each play.
-
-It is also possible to store variables within an inventory file. This is shown in the example. If the variable is on the same line as a host, it is a host-specific variable. If the variables are defined within brackets such as `[all:vars]`, it means that the variables are in scope for the group `all`, which is a default group that includes _all_ hosts in the inventory file.
-
-###### Note
-
-Inventory files are the quickest way to get started with Ansible, but should you already have a source of truth for network devices such as a network management tool or CMDB, it is possible to create and use a dynamic inventory script rather than a static inventory file.
-
-### Playbook
-
-The playbook is the top-level object that is executed to automate network devices. In our example, this is the file _site.yml_, as depicted in the preceding example. A playbook uses YAML to define the set of tasks to automate, and each playbook is comprised of one or more plays. This is analogous to a football playbook. Like in football, teams have playbooks made up of plays, and Ansible playbooks are made up of plays, too.
-
-###### Note
-
-YAML is a data format with parser support in virtually every programming language. YAML is itself a superset of JSON, and it’s quite easy to recognize YAML files, as they conventionally start with three dashes (hyphens), `---`.
-
-
-### Play
-
-One or more plays can exist within an Ansible playbook. In the preceding example, there are two plays within the playbook. Each starts with a _header_ section where play-specific parameters are defined.
-
-The two plays from that example have the following parameters defined:
-
-`name`
-
-The text `PLAY 1 - Top of Rack (TOR) Switches` is arbitrary and is displayed when the playbook runs to improve readability during playbook execution and reporting. This is an optional parameter.
-
-`hosts`
-
-As covered previously, this is the host or group of hosts that are automated in this particular play. This is a required parameter.
-
-`connection`
-
-As covered previously, this is the type of connection mechanism used for the play. This is an optional parameter, but is commonly set to `local` for network automation plays.
-
-
-
-Each play is comprised of one or more tasks.
-
-
-
-### Tasks
-
-Tasks represent what is automated in a declarative manner without worrying about the underlying syntax or "how" the operation is performed.
-
-In our example, the first play has two tasks. Each task ensures VLAN 10 exists. The first task does this for Cisco Nexus devices, and the second task does this for Arista devices:
-
-```
-tasks:
- - name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
- nxos_vlan:
- vlan_id=10
- name=WEB_VLAN
- host={{ inventory_hostname }}
- username=admin
- password=admin
- when: vendor == "nxos"
-```
-
-Tasks can also use the `name` parameter just like plays can. As with plays, the text is arbitrary and is displayed when the playbook runs to improve readability during playbook execution and reporting. It is an optional parameter for each task.
-
-The next line in the example task starts with `nxos_vlan`. This tells us that this task will execute the Ansible module called `nxos_vlan`.
-
-We’ll now dig deeper into modules.
-
-
-
-### Modules
-
-It is critical to understand modules within Ansible. While any programming language can be used to write Ansible modules as long as they return JSON key-value pairs, they are almost always written in Python. In our example, we see two modules being executed: `nxos_vlan` and `eos_vlan`. The modules are both Python files; and in fact, while you can’t tell from looking at the playbook, the real filenames are _eos_vlan.py_ and _nxos_vlan.py_, respectively.
-
-Let’s look at the first task in the first play from the preceding example:
-
-```
- - name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
- nxos_vlan:
- vlan_id=10
- name=WEB_VLAN
- host={{ inventory_hostname }}
- username=admin
- password=admin
- when: vendor == "nxos"
-```
-
-This task executes `nxos_vlan`, which is a module that automates VLAN configuration. In order to use modules, including this one, you need to specify the desired state or configuration policy you want the device to have. This example states: VLAN 10 should be configured with the name `WEB_VLAN`, and it should exist on each switch being automated. We can see this easily with the `vlan_id` and `name` parameters. There are three other parameters being passed into the module as well. They are `host`, `username`, and `password`:
-
-`host`
-
-This is the hostname (or IP address) of the device being automated. Since the hosts we want to automate are already defined in the inventory file, we can use the built-in Ansible variable `inventory_hostname`. This variable is equal to what is in the inventory file. For example, on the first iteration, the host in the inventory file is `rack1-tor1`, and on the second iteration, it is `rack1-tor2`. These names are passed into the module and then within the module, a DNS lookup occurs on each name to resolve it to an IP address. Then the communication begins with the device.
-
-`username`
-
-Username used to log in to the switch.
-
-
-`password`
-
-Password used to log in to the switch.
-
-
-The last piece to cover here is the use of the `when` statement. This is how Ansible performs conditional tasks within a play. As we know, there are multiple devices and types of devices that exist within the `tor` group for this play. Using `when` offers an option to be more selective based on any criteria. Here we are only automating Cisco devices because we are using the `nxos_vlan` module in this task, while in the next task, we are automating only the Arista devices because the `eos_vlan` module is used.
-
-###### Note
-
-This isn’t the only way to differentiate between devices. This is being shown to illustrate the use of `when` and that variables can be defined within the inventory file.
-
-Defining variables in an inventory file is great for getting started, but as you continue to use Ansible, you’ll want to use YAML-based variables files to help with scale, versioning, and minimizing change to a given file. This will also simplify and improve readability for the inventory file and each variables file used. An example of a variables file was given earlier when the build/push method of device provisioning was covered.
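-
-As a rough sketch (the file path and name here are illustrative, following the common `group_vars/<group>.yml` convention rather than anything shown in this report), the credentials used earlier could move out of the inventory file into a YAML variables file, with per-host values such as `vendor` moving into `host_vars/<hostname>.yml` in the same way:
-
-```
-# group_vars/all.yml (hypothetical path) - credentials moved out of the inventory file
-username: admin
-password: admin
-```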
-
-Here are a few other points to understand about the tasks in the last example:
-
-* Play 1 task 1 shows the `username` and `password` hardcoded as parameters being passed into the specific module (`nxos_vlan`).
-
-* Play 1 task 2 and play 2 passed variables into the module instead of hardcoding them. This masks the `username` and `password` parameters, but it’s worth noting that these variables are being pulled from the inventory file (for this example).
-
-* Play 1 uses a _horizontal_ key=value syntax for the parameters being passed into the modules, while play 2 uses the vertical key=value syntax. Both work just fine. You can also use vertical YAML syntax with "key: value" pairs, as shown in the sketch after this list.
-
-* The last task also introduces how to use a _loop_ within Ansible. This is done using `with_items` and is analogous to a for loop. That particular task loops through five VLANs to ensure they all exist on the switch. Note: it’s also possible to store these VLANs in an external YAML variables file. Also note that the alternative to using `with_items` would be to have one task per VLAN—and that just wouldn’t scale! The sketch below this list illustrates both the vertical syntax and the loop.
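-
-For reference, here is a rough sketch of what the vertical "key: value" syntax combined with a `with_items` loop might look like; the module and parameter names mirror the `eos_vlan` usage elsewhere in this report, and the specific VLAN IDs are purely illustrative:
-
-```
-    - name: ENSURE VLANS EXIST ON ARISTA TOR SWITCHES
-      eos_vlan:
-        vlanid: "{{ item }}"
-        connection: "{{ inventory_hostname }}"
-      with_items:
-        - 10
-        - 20
-        - 30
-        - 40
-        - 50
-```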
-
-
-### Hands-on Look at Using Ansible for Network Automation
-
-In the previous chapter, a general overview of Ansible terminology was provided. This covered many of the specific Ansible terms, such as playbooks, plays, tasks, modules, and inventory files. This section will continue to provide working examples of using Ansible for network automation, but will provide more detail on working with modules to automate a few different types of devices. Examples will include automating devices from multiple vendors, including Cisco, Arista, Cumulus, and Juniper.
-
-The examples in this section assume the following:
-
-* Ansible is installed.
-
-* The proper APIs are enabled on the devices (NX-API, eAPI, NETCONF).
-
-* Users exist with the proper permissions on the system to make changes via the API.
-
-* All Ansible modules exist on the system and are in the library path.
-
-###### Note
-
-Setting the module and library path can be done within the _ansible.cfg_ file. You can also use the `-M` flag from the command line to change it when executing a playbook.
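-
-As a sketch (the directory path here is made up for illustration), the library path could be set persistently in the `[defaults]` section of _ansible.cfg_, or passed for a single run with `-M`:
-
-```
-$ cat ansible.cfg
-[defaults]
-library = ~/ansible/library
-
-$ ansible-playbook -i inventory -M ~/ansible/library site.yml
-```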
-
-The inventory used for the examples in this section is shown in the following section (with passwords removed and IP addresses changed). In this example, some hostnames are not FQDNs as they were in the previous examples.
-
-
-### Inventory File
-
-```
-[cumulus]
-cvx ansible_ssh_host=1.2.3.4 ansible_ssh_pass=PASSWORD
-
-[arista]
-veos1
-
-[cisco]
-nx1 hostip=5.6.7.8 un=USERNAME pwd=PASSWORD
-
-[juniper]
-vsrx hostip=9.10.11.12 un=USERNAME pwd=PASSWORD
-```
-
-###### Note
-
-Just in case you’re wondering at this point, Ansible does support functionality that allows you to store passwords in encrypted files. If you want to learn more about this feature, check out [Ansible Vault][5] in the docs on the Ansible website.
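-
-As a brief sketch (the file name is just an example), a variables file can be encrypted with `ansible-vault` and then decrypted at runtime by prompting for the vault password:
-
-```
-$ ansible-vault encrypt group_vars/all.yml
-$ ansible-playbook -i inventory site.yml --ask-vault-pass
-```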
-
-This inventory file has four groups defined with a single host in each group. Let’s review each section in a little more detail:
-
-Cumulus
-
-The host `cvx` is a Cumulus Linux (CL) switch, and it is the only device in the `cumulus` group. Remember that CL is native Linux, so this means the default connection mechanism (SSH) is used to connect to and automate the CL switch. Because `cvx` is not defined in DNS or _/etc/hosts_, we’ll let Ansible know not to use the hostname defined in the inventory file, but rather the name/IP defined for `ansible_ssh_host`. The username to log in to the CL switch is defined in the playbook, but you can see that the password is being defined in the inventory file using the `ansible_ssh_pass` variable.
-
-Arista
-
-The host called `veos1` is an Arista switch running EOS. It is the only host that exists within the `arista` group. As you can see for Arista, there are no other parameters defined within the inventory file. This is because Arista uses a special configuration file for their devices. This file is called _.eapi.conf_ and for our example, it is stored in the home directory. Here is the conf file being used for this example to function properly:
-
-```
-[connection:veos1]
-host: 2.4.3.4
-username: unadmin
-password: pwadmin
-```
-
-This file contains all required information for Ansible (and the Arista Python library called _pyeapi_) to connect to the device using just the information as defined in the conf file.
-
-Cisco
-
-Just like with Cumulus and Arista, there is only one host (`nx1`) that exists within the `cisco` group. This is an NX-OS-based Cisco Nexus switch. Notice how there are three variables defined for `nx1`. They include `un` and `pwd`, which are accessed in the playbook and passed into the Cisco modules in order to connect to the device. In addition, there is a parameter called `hostip`. This is required because `nx1` is also not defined in DNS or configured in the _/etc/hosts_ file.
-
-
-###### Note
-
-We could have named this parameter anything. If automating a native Linux device, `ansible_ssh_host` is used just like we saw with the Cumulus example (if the name as defined in the inventory is not resolvable). In this example, we could have still used `ansible_ssh_host`, but it is not a requirement, since we’ll be passing this variable as a parameter into Cisco modules, whereas `ansible_ssh_host` is automatically checked when using the default SSH connection mechanism.
-
-Juniper
-
-As with the previous three groups and hosts, there is a single host `vsrx` located within the `juniper` group. The setup within the inventory file is identical to Cisco’s, as both are used in exactly the same way within the playbook.
-
-
-### Playbook
-
-The next playbook has four different plays. Each play is built to automate a specific group of devices based on vendor type. Note that this is only one way to perform these tasks within a single playbook. There are other ways in which we could have used conditionals (`when` statement) or created Ansible roles (which is not covered in this report).
-
-Here is the example playbook:
-
-```
----
-
- - name: PLAY 1 - CISCO NXOS
- hosts: cisco
- connection: local
-
- tasks:
- - name: ENSURE VLAN 100 exists on Cisco Nexus switches
- nxos_vlan:
- vlan_id=100
- name=web_vlan
- host={{ hostip }}
- username={{ un }}
- password={{ pwd }}
-
- - name: PLAY 2 - ARISTA EOS
- hosts: arista
- connection: local
-
- tasks:
- - name: ENSURE VLAN 100 exists on Arista switches
- eos_vlan:
- vlanid=100
- name=web_vlan
- connection={{ inventory_hostname }}
-
- - name: PLAY 3 - CUMULUS
- remote_user: cumulus
- sudo: true
- hosts: cumulus
-
- tasks:
- - name: ENSURE 100.10.10.1 is configured on swp1
- cl_interface: name=swp1 ipv4=100.10.10.1/24
-
- - name: restart networking without disruption
- shell: ifreload -a
-
- - name: PLAY 4 - JUNIPER SRX changes
- hosts: juniper
- connection: local
-
- tasks:
- - name: INSTALL JUNOS CONFIG
- junos_install_config:
- host={{ hostip }}
- file=srx_demo.conf
- user={{ un }}
- passwd={{ pwd }}
- logfile=deploysite.log
- overwrite=yes
- diffs_file=junpr.diff
-```
-
-You will notice the first two plays are very similar to what we already covered in the original Cisco and Arista example. The only difference is that each group being automated (`cisco` and `arista`) is defined in its own play, and this is in contrast to using the `when` conditional that was used earlier.
-
-There is no right way or wrong way to do this. It all depends on what information is known up front and what fits your environment and use cases best, but our intent is to show a few ways to do the same thing.
-
-The third play automates the configuration of interface `swp1` that exists on the Cumulus Linux switch. The first task within this play ensures that `swp1` is a Layer 3 interface and is configured with the IP address 100.10.10.1. Because Cumulus Linux is native Linux, the networking service needs to be restarted for the changes to take effect. This could have also been done using Ansible handlers (out of the scope of this report; a rough sketch of the idea follows below). There is also an Ansible core module called `service` that could have been used, but that would disrupt networking on the switch; using `ifreload` restarts networking non-disruptively.
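-
-For reference, a rough sketch of that handler-based alternative (not how the playbook in this report is written) might look like the following, with `ifreload` only running when the interface task actually reports a change:
-
-```
-  - name: PLAY 3 - CUMULUS
-    remote_user: cumulus
-    sudo: true
-    hosts: cumulus
-
-    tasks:
-      - name: ENSURE 100.10.10.1 is configured on swp1
-        cl_interface: name=swp1 ipv4=100.10.10.1/24
-        notify: restart networking without disruption
-
-    handlers:
-      - name: restart networking without disruption
-        shell: ifreload -a
-```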
-
-Up until now in this section, we looked at Ansible modules focused on specific tasks such as configuring interfaces and VLANs. The fourth play uses another option. We’ll look at a module that _pushes_ a full configuration file and immediately activates it as the new running configuration. This is what we showed previously using `napalm_install_config`, but this example uses a Juniper-specific module called `junos_install_config`.
-
-This module `junos_install_config` accepts several parameters, as seen in the example. By now, you should understand what `user`, `passwd`, and `host` are used for. The other parameters are defined as follows:
-
-`file`
-
-This is the config file that is copied from the Ansible control host to the Juniper device.
-
-`logfile`
-
-This is optional, but if specified, it is used to store messages generated while executing the module.
-
-`overwrite`
-
-When set to yes/true, the complete configuration is replaced with the file being sent (default is false).
-
-`diffs_file`
-
-This is optional, but if specified, will store the diffs generated when applying the configuration. An example of the diff generated when just changing the hostname but still sending a complete config file is shown next:
-
-```
-# filename: junpr.diff
-[edit system]
-- host-name vsrx;
-+ host-name vsrx-demo;
-```
-
-
-That covers the detailed overview of the playbook. Let’s take a look at what happens when the playbook is executed:
-
-###### Note
-
-The `-i` flag is used to specify the inventory file to use. The `ANSIBLE_HOSTS` environment variable can also be set rather than using the flag each time a playbook is executed; a short sketch follows.
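-
-For example (a sketch; point the variable at whatever your inventory file happens to be):
-
-```
-$ export ANSIBLE_HOSTS=~/ansible/multivendor/inventory
-$ ansible-playbook demo.yml
-```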
-
-```
-ntc@ntc:~/ansible/multivendor$ ansible-playbook -i inventory demo.yml
-
-PLAY [PLAY 1 - CISCO NXOS] *************************************************
-
-TASK: [ENSURE VLAN 100 exists on Cisco Nexus switches] *********************
-changed: [nx1]
-
-PLAY [PLAY 2 - ARISTA EOS] *************************************************
-
-TASK: [ENSURE VLAN 100 exists on Arista switches] **************************
-changed: [veos1]
-
-PLAY [PLAY 3 - CUMULUS] ****************************************************
-
-GATHERING FACTS ************************************************************
-ok: [cvx]
-
-TASK: [ENSURE 100.10.10.1 is configured on swp1] ***************************
-changed: [cvx]
-
-TASK: [restart networking without disruption] ******************************
-changed: [cvx]
-
-PLAY [PLAY 4 - JUNIPER SRX changes] ****************************************
-
-TASK: [INSTALL JUNOS CONFIG] ***********************************************
-changed: [vsrx]
-
-PLAY RECAP ***************************************************************
- to retry, use: --limit @/home/ansible/demo.retry
-
-cvx : ok=3 changed=2 unreachable=0 failed=0
-nx1 : ok=1 changed=1 unreachable=0 failed=0
-veos1 : ok=1 changed=1 unreachable=0 failed=0
-vsrx : ok=1 changed=1 unreachable=0 failed=0
-```
-
-You can see that each task completes successfully; and if you are on the terminal, you’ll see that each changed task was displayed with an amber color.
-
-Let’s run this playbook again. By running it again, we can verify that all of the modules are _idempotent_; and when doing this, we see that NO changes are made to the devices and everything is green:
-
-```
-PLAY [PLAY 1 - CISCO NXOS] ***************************************************
-
-TASK: [ENSURE VLAN 100 exists on Cisco Nexus switches] ***********************
-ok: [nx1]
-
-PLAY [PLAY 2 - ARISTA EOS] ***************************************************
-
-TASK: [ENSURE VLAN 100 exists on Arista switches] ****************************
-ok: [veos1]
-
-PLAY [PLAY 3 - CUMULUS] ******************************************************
-
-GATHERING FACTS **************************************************************
-ok: [cvx]
-
-TASK: [ENSURE 100.10.10.1 is configured on swp1] *****************************
-ok: [cvx]
-
-TASK: [restart networking without disruption] ********************************
-skipping: [cvx]
-
-PLAY [PLAY 4 - JUNIPER SRX changes] ******************************************
-
-TASK: [INSTALL JUNOS CONFIG] *************************************************
-ok: [vsrx]
-
-PLAY RECAP ***************************************************************
-cvx : ok=2 changed=0 unreachable=0 failed=0
-nx1 : ok=1 changed=0 unreachable=0 failed=0
-veos1 : ok=1 changed=0 unreachable=0 failed=0
-vsrx : ok=1 changed=0 unreachable=0 failed=0
-```
-
-Notice how there were 0 changes, but they still returned "ok" for each task. This verifies, as expected, that each of the modules in this playbook is idempotent.
-
-
-### Summary
-
-Ansible is a super-simple automation platform that is agentless and extensible. The network community continues to rally around Ansible as a platform that can be used for network automation tasks that range from configuration management to data collection and reporting. You can push full configuration files with Ansible, configure specific network resources with idempotent modules such as interfaces or VLANs, or simply just automate the collection of information such as neighbors, serial numbers, uptime, and interface stats, and customize reports as you need them.
-
-Because of its architecture, Ansible proves to be a great tool available here and now that helps bridge the gap from _legacy CLI/SNMP_ network device automation to modern _API-driven_ automation.
-
-Ansible’s ease of use and agentless architecture account for the platform’s increasing following within the networking community. Again, this makes it possible to automate devices without APIs (CLI/SNMP); devices that have modern APIs, including standalone switches, routers, and Layer 4-7 service appliances; and even those software-defined networking (SDN) controllers that offer RESTful APIs.
-
-There is no device left behind when using Ansible for network automation.
-
------------
-
-作者简介:
-
- ![](https://d3tdunqjn7n0wj.cloudfront.net/360x360/jason-edelman-crop-5b2672f569f553a3de3a121d0179efcb.jpg)
-
-Jason Edelman, CCIE 15394 & VCDX-NV 167, is a born and bred network engineer from the great state of New Jersey. He was the typical “lover of the CLI” or “router jockey.” At some point several years ago, he made the decision to focus more on software, development practices, and how they are converging with network engineering. Jason currently runs a boutique consulting firm, Network to Code, helping vendors and end users take advantage of new tools and technologies to reduce their operational inefficiencies. Jason has a Bachelor’s...
-
---------------------------------------------------------------------------------
-
-via: https://www.oreilly.com/learning/network-automation-with-ansible
-
-作者:[Jason Edelman][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.oreilly.com/people/ee4fd-jason-edelman
-[1]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_terminology_and_getting_started
-[2]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_network_integrations
-[3]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_terminology_and_getting_started
-[4]:https://www.oreilly.com/learning/network-automation-with-ansible#handson_look_at_using_ansible_for_network_automation
-[5]:http://docs.ansible.com/ansible/playbooks_vault.html
-[6]:https://www.oreilly.com/people/ee4fd-jason-edelman
-[7]:https://www.oreilly.com/people/ee4fd-jason-edelman
diff --git a/sources/tech/20161216 Kprobes Event Tracing on ARMv8.md b/sources/tech/20161216 Kprobes Event Tracing on ARMv8.md
deleted file mode 100644
index b0528ed2b1..0000000000
--- a/sources/tech/20161216 Kprobes Event Tracing on ARMv8.md
+++ /dev/null
@@ -1,333 +0,0 @@
-# Kprobes Event Tracing on ARMv8
-
-![core-dump](http://www.linaro.org/wp-content/uploads/2016/02/core-dump.png)
-
-### Introduction
-
-Kprobes is a kernel feature that allows instrumenting the kernel by setting arbitrary breakpoints that call out to developer-supplied routines before and after the breakpointed instruction is executed (or simulated). See the kprobes documentation[[1]][2] for more information. Basic kprobes functionality is selected with CONFIG_KPROBES. Kprobes support was added to mainline for arm64 in the v4.8 release.
-
-In this article we describe the use of kprobes on arm64 using the debugfs event tracing interfaces from the command line to collect dynamic trace events. This feature has been available for some time on several architectures (including arm32), and is now available on arm64. The feature allows use of kprobes without having to write any code.
-
-### Types of Probes
-
-The kprobes subsystem provides three different types of dynamic probes described below.
-
-### Kprobes
-
-The basic probe is a software breakpoint kprobes inserts in place of the instruction you are probing, saving the original instruction for eventual single-stepping (or simulation) when the probe point is hit.
-
-### Kretprobes
-
-Kretprobes is a part of kprobes that allows intercepting a returning function instead of having to set a probe (or possibly several probes) at the return points. This feature is selected whenever kprobes is selected, for supported architectures (including ARMv8).
-
-### Jprobes
-
-Jprobes allows intercepting a call into a function by supplying an intermediary function with the same calling signature, which will be called first. Jprobes is a programming interface only and cannot be used through the debugfs event tracing subsystem. As such we will not be discussing jprobes further here. Consult the kprobes documentation if you wish to use jprobes.
-
-### Invoking Kprobes
-
-Kprobes provides a set of APIs which can be called from kernel code to set up probe points and register functions to be called when probe points are hit. Kprobes is also accessible without adding code to the kernel, by writing to specific event tracing debugfs files to set the probe address and information to be recorded in the trace log when the probe is hit. The latter is the focus of what this document will be talking about. Lastly kprobes can be accessed through the perf command.
-
-### Kprobes API
-
-The kernel developer can write functions in the kernel (often done in a dedicated debug module) to set probe points and take whatever action is desired right before and right after the probed instruction is executed. This is well documented in kprobes.txt.
-
-### Event Tracing
-
-The event tracing subsystem has its own documentation[[2]][3] which might be worth a read to understand the background of event tracing in general. The event tracing subsystem serves as a foundation for both tracepoints and kprobes event tracing. The event tracing documentation focuses on tracepoints, so bear that in mind when consulting that documentation. Kprobes differs from tracepoints in that there is no predefined list of tracepoints but instead arbitrary dynamically created probe points that trigger the collection of trace event information. The event tracing subsystem is controlled and monitored through a set of debugfs files. Event tracing (CONFIG_EVENT_TRACING) will be selected automatically when needed by something like the kprobe event tracing subsystem.
-
-#### Kprobes Events
-
-With the kprobes event tracing subsystem the user can specify information to be reported at arbitrary breakpoints in the kernel, determined simply by specifying the address of any existing probeable instruction along with formatting information. When that breakpoint is encountered during execution kprobes passes the requested information to the common parts of the event tracing subsystem which formats and appends the data to the trace log, much like how tracepoints works. Kprobes uses a similar but mostly separate collection of debugfs files to control and display trace event information. This feature is selected with CONFIG_KPROBE_EVENT. The kprobetrace documentation[[3]][4] provides the essential information on how to use kprobes event tracing and should be consulted to understand details about the examples presented below.
-
-### Kprobes and Perf
-
-The perf tools provide another command line interface to kprobes. In particular “perf probe” allows probe points to be specified by source file and line number, in addition to function name plus offset, and address. The perf interface is really a wrapper for using the debugfs interface for kprobes.
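-
-As a brief, hedged illustration (the function name is only an example), adding a probe and recording hits with perf might look like this:
-
-```
-$ perf probe --add do_wait
-$ perf record -e probe:do_wait -aR sleep 10
-$ perf report
-```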
-
-### Arm64 Kprobes
-
-All of the above aspects of kprobes are now implemented for arm64; in practice, though, there are some differences from other architectures:
-
-* Register name arguments are, of course, architecture specific and can be found in the ARM ARM.
-
-* Not all instruction types can currently be probed. Currently unprobeable instructions include mrs/msr (except DAIF read), exception generation instructions, eret, and hint (except for the nop variant). In these cases it is simplest to just probe a nearby instruction instead. These instructions are blacklisted from probing because the changes they cause to processor state are unsafe to do during kprobe single-stepping or instruction simulation, because the single-stepping context kprobes constructs is inconsistent with what the instruction needs, or because the instruction can’t tolerate the additional processing time and exception handling in kprobes (ldx/stx).
-* An attempt is made to identify instructions within a ldx/stx sequence and prevent probing, however it is theoretically possible for this check to fail resulting in allowing a probed atomic sequence which can never succeed. Be careful when probing around atomic code sequences.
-* Note that because of the details of Linux ARM64 calling conventions it is not possible to reliably duplicate the stack frame for the probed function and for that reason no attempt is made to do so with jprobes, unlike the majority of other architectures supporting jprobes. The reason for this is that there is insufficient information for the callee to know for certain the amount of the stack that is needed.
-
-* Note that the stack pointer information recorded from a probe will reflect the particular stack pointer in use at the time the probe was hit, be it the kernel stack pointer or the interrupt stack pointer.
-* There is a list of kernel functions which cannot be probed, usually because they are called as part of kprobes processing. Part of this list is architecture-specific and also includes things like exception entry code.
-
-### Using Kprobes Event Tracing
-
-One common use case for kprobes is instrumenting function entry and/or exit. It is particularly easy to install probes for this since one can just use the function name for the probe address. Kprobes event tracing will look up the symbol name and determine the address. The ARMv8 calling standard defines where the function arguments and return values can be found, and these can be printed out as part of the kprobe event processing.
-
-### Example: Function entry probing
-
-Instrumenting a USB ethernet driver reset function:
-
-```
-$ pwd
-/sys/kernel/debug/tracing
-$ cat > kprobe_events <<EOF
-p ax88772_reset %x0
-EOF
-$ echo 1 > events/kprobes/enable
-```
-
-At this point a trace event will be recorded every time the driver’s _ax88772_reset()_ function is called. The event will display the pointer to the _usbnet_ structure passed in via X0 (as per the ARMv8 calling standard) as this function’s only argument. After plugging in a USB dongle requiring this ethernet driver we see the following trace information:
-
-```
-$ cat trace
-# tracer: nop
-#
-# entries-in-buffer/entries-written: 1/1 #P:8
-#
-# _—–=> irqs-off
-# / _—-=> need-resched
-# | / _—=> hardirq/softirq
-# || / _–=> preempt-depth
-# ||| / delay
-# TASK-PID CPU# |||| TIMESTAMP FUNCTION
-# | | | |||| | |
-kworker/0:0-4 [000] d… 10972.102939: p_ax88772_reset_0: (ax88772_reset+0x0/0x230) arg1=0xffff800064824c80
-```
-
-Here we can see the value of the pointer argument passed in to our probed function. Since we did not use the optional labelling features of kprobes event tracing the information we requested is automatically labeled _arg1_. Note that this refers to the first value in the list of values we requested that kprobes log for this probe, not the actual position of the argument to the function. In this case it also just happens to be the first argument to the function we’ve probed.
-
-### Example: Function entry and return probing
-
-The kretprobe feature is used specifically to probe a function return. At function entry the kprobes subsystem will be called and will set up a hook to be called at function return, where it will record the requested event information. For the most common case the return information, typically in the X0 register, is quite useful. The return value in %x0 can also be referred to as _$retval_. The following example also demonstrates how to provide a human-readable label to be displayed with the information of interest.
-
-Example of instrumenting the kernel __do_fork()_ function to record arguments and results using a kprobe and a kretprobe:
-
-```
-$ cd /sys/kernel/debug/tracing
-$ cat > kprobe_events <<EOF
-p _do_fork %x0 %x1 %x2 %x3 %x4 %x5
-r _do_fork pid=%x0
-EOF
-$ echo 1 > events/kprobes/enable
-```
-
-At this point every call to _do_fork() will produce two kprobe events recorded into the “_trace_” file, one reporting the calling argument values and one reporting the return value. The return value shall be labeled “_pid_” in the trace file. Here are the contents of the trace file after three fork syscalls have been made:
-
-```
-$ cat trace
-# tracer: nop
-#
-# entries-in-buffer/entries-written: 6/6 #P:8
-#
-# _—–=> irqs-off
-# / _—-=> need-resched
-# | / _—=> hardirq/softirq
-# || / _–=> preempt-depth
-# ||| / delay
-# TASK-PID CPU# |||| TIMESTAMP FUNCTION
-# | | | |||| | |
- bash-1671 [001] d… 204.946007: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
- bash-1671 [001] d..1 204.946391: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x724
- bash-1671 [001] d… 208.845749: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
- bash-1671 [001] d..1 208.846127: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x725
- bash-1671 [001] d… 214.401604: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
- bash-1671 [001] d..1 214.401975: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x726
-```
-
-### Example: Dereferencing pointer arguments
-
-For pointer values the kprobe event processing subsystem also allows dereferencing and printing of desired memory contents, for various base data types. It is necessary to manually calculate the offset into structures in order to display a desired field.
-
-Instrumenting the `do_wait()` function:
-
-```
-$ cat > kprobe_events <<"EOF"
-p:wait_p do_wait wo_type=+0(%x0):u32 wo_flags=+4(%x0):u32
-r:wait_r do_wait $retval
-EOF
-$ echo 1 > events/kprobes/enable
-```
-
-Note that the argument labels used in the first probe are optional and can be used to more clearly identify the information recorded in the trace log. The signed offset and parentheses indicate that the register argument is a pointer to memory contents to be recorded in the trace log. The “_:u32_” indicates that the memory location contains an unsigned four-byte wide datum (an enum and an int in a locally defined structure in this case).
-
-The probe labels (after the colon) are optional and will be used to identify the probe in the log. The label must be unique for each probe. If unspecified a useful label will be automatically generated from a nearby symbol name, as has been shown in earlier examples.
-
-Also note the “_$retval_” argument could just be specified as “_%x0_“.
-
-Here are the contents of the “_trace_” file after two fork syscalls have been made:
-
-```
-$ cat trace
-# tracer: nop
-#
-# entries-in-buffer/entries-written: 4/4 #P:8
-#
-# _—–=> irqs-off
-# / _—-=> need-resched
-# | / _—=> hardirq/softirq
-# || / _–=> preempt-depth
-# ||| / delay
-# TASK-PID CPU# |||| TIMESTAMP FUNCTION
-# | | | |||| | |
- bash-1702 [001] d… 175.342074: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xe
- bash-1702 [002] d..1 175.347236: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0x757
- bash-1702 [002] d… 175.347337: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xf
- bash-1702 [002] d..1 175.347349: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0xfffffffffffffff6
-```
-
-### Example: Probing arbitrary instruction addresses
-
-In previous examples we have inserted probes for function entry and exit; however, it is possible to probe an arbitrary instruction (with a few exceptions). If we are placing a probe inside a C function the first step is to look at the assembler version of the code to identify where we want to place the probe. One way to do this is to use gdb on the vmlinux file and display the instructions in the function where you wish to place the probe. An example of doing this for the _module_alloc_ function in arch/arm64/kernel/module.c follows. In this case, because gdb seems to prefer using the weak symbol definition and its associated stub code for this function, we get the symbol value from System.map instead:
-
-```
-$ grep module_alloc System.map
-ffff2000080951c4 T module_alloc
-ffff200008297770 T kasan_module_alloc
-```
-
-In this example we’re using cross-development tools and we invoke gdb on our host system to examine the instructions comprising our function of interest:
-
-```
-$ ${CROSS_COMPILE}gdb vmlinux
-(gdb) x/30i 0xffff2000080951c4
- 0xffff2000080951c4 : sub sp, sp, #0x30
- 0xffff2000080951c8 : adrp x3, 0xffff200008d70000
- 0xffff2000080951cc : add x3, x3, #0x0
- 0xffff2000080951d0 : mov x5, #0x713 // #1811
- 0xffff2000080951d4 : mov w4, #0xc0 // #192
- 0xffff2000080951d8 : mov x2, #0xfffffffff8000000 // #-134217728
- 0xffff2000080951dc : stp x29, x30, [sp,#16]
- 0xffff2000080951e0 : add x29, sp, #0x10
- 0xffff2000080951e4 : movk x5, #0xc8, lsl #48
- 0xffff2000080951e8 : movk w4, #0x240, lsl #16
- 0xffff2000080951ec : str x30, [sp]
- 0xffff2000080951f0 : mov w7, #0xffffffff // #-1
- 0xffff2000080951f4 : mov x6, #0x0 // #0
- 0xffff2000080951f8 : add x2, x3, x2
- 0xffff2000080951fc : mov x1, #0x8000 // #32768
- 0xffff200008095200 : stp x19, x20, [sp,#32]
- 0xffff200008095204 : mov x20, x0
- 0xffff200008095208 : bl 0xffff2000082737a8 <__vmalloc_node_range>
- 0xffff20000809520c : mov x19, x0
- 0xffff200008095210 : cbz x0, 0xffff200008095234
- 0xffff200008095214 : mov x1, x20
- 0xffff200008095218 : bl 0xffff200008297770
- 0xffff20000809521c : tbnz w0, #31, 0xffff20000809524c
- 0xffff200008095220 : mov sp, x29
- 0xffff200008095224 : mov x0, x19
- 0xffff200008095228 : ldp x19, x20, [sp,#16]
- 0xffff20000809522c : ldp x29, x30, [sp],#32
- 0xffff200008095230 : ret
- 0xffff200008095234 : mov sp, x29
- 0xffff200008095238 : mov x19, #0x0 // #0
-```
-
-In this case we are going to display the result from the following source line in this function:
-
-```
-p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
-			VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
-			NUMA_NO_NODE, __builtin_return_address(0));
-```
-
-…and also the return value from the function call in this line:
-
-```
-if (p && (kasan_module_alloc(p, size) < 0)) {
-```
-
-We can identify these in the assembler code from the call to the external functions. To display these values we will place probes at 0xffff20000809520c and 0xffff20000809521c on our target system:
-
-```
-$ cat > kprobe_events <<EOF
-p 0xffff20000809520c %x0
-p 0xffff20000809521c %x0
-EOF
-$ echo 1 > events/kprobes/enable
-```
-
-Now after plugging an ethernet adapter dongle into the USB port we see the following written into the trace log:
-
-```
-$ cat trace
-# tracer: nop
-#
-# entries-in-buffer/entries-written: 12/12 #P:8
-#
-# _—–=> irqs-off
-# / _—-=> need-resched
-# | / _—=> hardirq/softirq
-# || / _–=> preempt-depth
-# ||| / delay
-# TASK-PID CPU# |||| TIMESTAMP FUNCTION
-# | | | |||| | |
- systemd-udevd-2082 [000] d… 77.200991: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001188000
- systemd-udevd-2082 [000] d… 77.201059: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
- systemd-udevd-2082 [000] d… 77.201115: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001198000
- systemd-udevd-2082 [000] d… 77.201157: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
- systemd-udevd-2082 [000] d… 77.227456: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011a0000
- systemd-udevd-2082 [000] d… 77.227522: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
- systemd-udevd-2082 [000] d… 77.227579: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b0000
- systemd-udevd-2082 [000] d… 77.227635: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
- modprobe-2097 [002] d… 78.030643: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b8000
- modprobe-2097 [002] d… 78.030761: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
- modprobe-2097 [002] d… 78.031132: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001270000
- modprobe-2097 [002] d… 78.031187: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
-```
-
-One more feature of the kprobes event system is recording of statistics information, which can be found in kprobe_profile. After the above trace the contents of that file are:
-
-```
-$ cat kprobe_profile
-p_0xffff20000809520c 6 0
-p_0xffff20000809521c 6 0
-```
-
-This indicates that there have been a total of 6 hits on each of the two breakpoints we set, which of course is consistent with the trace log data. More kprobe_profile features are described in the kprobetrace documentation.
-
-There is also the ability to further filter kprobes events. The debugfs files used to control this are listed in the kprobetrace documentation while the details of their contents are (mostly) described in the trace events documentation.
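-
-As a rough example of such filtering (reusing the probe and field names from the do_wait example above), a filter expression can be written into the per-event filter file so that only matching hits are logged, and cleared again by writing 0:
-
-```
-$ echo 'wo_flags == 0xe' > events/kprobes/wait_p/filter
-$ echo 0 > events/kprobes/wait_p/filter
-```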
-
-### Conclusion
-
-Linux on ARMv8 is now at parity with other architectures supporting the kprobes feature. Work is being done by others to also add uprobes and systemtap support. These features/tools and other already completed features (e.g.: perf, coresight) allow the Linux ARMv8 user to debug and test performance as they would on other, older architectures.
-
-* * *
-
-Bibliography
-
-[[1]][5] Jim Keniston, Prasanna S. Panchamukhi, Masami Hiramatsu. “Kernel Probes (Kprobes).” _GitHub_. GitHub, Inc., 15 Aug. 2016\. Web. 13 Dec. 2016.
-
-[[2]][6] Ts’o, Theodore, Li Zefan, and Tom Zanussi. “Event Tracing.” _GitHub_. GitHub, Inc., 3 Mar. 2016\. Web. 13 Dec. 2016.
-
-[[3]][7] Hiramatsu, Masami. “Kprobe-based Event Tracing.” _GitHub_. GitHub, Inc., 18 Aug. 2016\. Web. 13 Dec. 2016.
-
-
-----------------
-
-作者简介 : [David Long][8]David works as an engineer in the Linaro Kernel - Core Development team. Before coming to Linaro he spent several years in the commercial and defense industries doing both embedded realtime work, and software development tools for Unix. That was followed by a dozen years at Digital (aka Compaq) doing Unix standards, C compiler, and runtime library work. After that David went to a series of startups doing embedded Linux and Android, embedded custom OS's, and Xen virtualization. He has experience with MIPS, Alpha, and ARM platforms (amongst others). He has used most flavors of Unix starting in 1979 with Bell Labs V6, and has been a long-time Linux user and advocate. He has also occasionally been known to debug a device driver with a soldering iron and digital oscilloscope.
-
---------------------------------------------------------------------------------
-
-via: http://www.linaro.org/blog/kprobes-event-tracing-armv8/
-
-作者:[ David Long][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.linaro.org/author/david-long/
-[1]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
-[2]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
-[3]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
-[4]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
-[5]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
-[6]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
-[7]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
-[8]:http://www.linaro.org/author/david-long/
-[9]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#comments
-[10]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
-[11]:http://www.linaro.org/tag/arm64/
-[12]:http://www.linaro.org/tag/armv8/
-[13]:http://www.linaro.org/tag/jprobes/
-[14]:http://www.linaro.org/tag/kernel/
-[15]:http://www.linaro.org/tag/kprobes/
-[16]:http://www.linaro.org/tag/kretprobes/
-[17]:http://www.linaro.org/tag/perf/
-[18]:http://www.linaro.org/tag/tracing/
diff --git a/sources/tech/20170215 How to take screenshots on Linux using Scrot.md b/sources/tech/20170215 How to take screenshots on Linux using Scrot.md
deleted file mode 100644
index 11d9ac5a95..0000000000
--- a/sources/tech/20170215 How to take screenshots on Linux using Scrot.md
+++ /dev/null
@@ -1,331 +0,0 @@
-zpl1025
-How to take screenshots on Linux using Scrot
-============================================================
-
-### On this page
-
-1. [About Scrot][12]
-2. [Scrot Installation][13]
-3. [Scrot Usage/Features][14]
- 1. [Get the application version][1]
- 2. [Capturing current window][2]
- 3. [Selecting a window][3]
- 4. [Include window border in screenshots][4]
- 5. [Delay in taking screenshots][5]
- 6. [Countdown before screenshot][6]
- 7. [Image quality][7]
- 8. [Generating thumbnails][8]
- 9. [Join multiple displays shots][9]
- 10. [Executing operations on saved images][10]
- 11. [Special strings][11]
-4. [Conclusion][15]
-
-Recently, we discussed the [gnome-screenshot][17] utility, which is a good screen grabbing tool. But if you are looking for an even better command line utility for taking screenshots, then you must give Scrot a try. This tool has some extra features that are currently not available in gnome-screenshot. In this tutorial, we will explain Scrot using easy-to-understand examples.
-
-Please note that all the examples mentioned in this tutorial have been tested on Ubuntu 16.04 LTS, and the scrot version we have used is 0.8.
-
-### About Scrot
-
-[Scrot][18] (**SCR**eensh**OT**) is a screenshot capturing utility that uses the imlib2 library to acquire and save images. Developed by Tom Gilbert, it's written in C programming language and is licensed under the BSD License.
-
-### Scrot Installation
-
-The scrot tool may be pre-installed on your Ubuntu system, but if that's not the case, then you can install it using the following command:
-
-sudo apt-get install scrot
-
-Once the tool is installed, you can launch it by using the following command:
-
-scrot [options] [filename]
-
-**Note**: The parameters in [] are optional.
-
-### Scrot Usage/Features
-
-In this section, we will discuss how the Scrot tool can be used and what all features it provides.
-
-When the tool is run without any command line options, it captures the whole screen.
-
-[
- ![Using Scrot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/scrot.png)
-][19]
-
-By default, the captured file is saved with a date-stamped filename in the current directory, although you can also explicitly specify the name of the captured image when the command is run. For example:
-
-scrot [image-name].png
-
-### Get the application version
-
-If you want, you can check the version of scrot using the -v command line option.
-
-scrot -v
-
-Here is an example:
-
-[
- ![Get scrot version](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/version.png)
-][20]
-
-### Capturing current window
-
-Using the utility, you can limit the screenshot to the currently focused window. This feature can be accessed using the -u command line option.
-
-scrot -u
-
-For example, here's my desktop when I executed the above command on the command line:
-
-[
- ![capture window in scrot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/desktop.png)
-][21]
-
-And here's the screenshot captured by scrot:
-
-[
- ![Screenshot captured by scrot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/active.png)
-][22]
-
-### Selecting a window
-
-The utility allows you to capture any window by clicking on it using the mouse. This feature can be accessed using the -s option.
-
-scrot -s
-
-For example, as you can see in the screenshot below, I have a screen with two terminal windows overlapping each other. On the top window, I run the aforementioned command.
-
-[
- ![select window](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/select1.png)
-][23]
-
-Now suppose, I want to capture the bottom terminal window. For that, I will just click on that window once the command is executed - the command execution won't complete until you click somewhere on the screen.
-
-Here's the screenshot captured after clicking on that terminal:
-
-[
- ![window screenshot captured](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/select2.png)
-][24]
-
-**Note**: As you can see in the above snapshot, whatever area the bottom window is covering has been captured, even if that includes an overlapping portion of the top window.
-
-### Include window border in screenshots
-
-The -u command line option we discussed earlier doesn't include the window border in screenshots. However, you can include the border of the window if you want. This feature can be accessed using the -b option (in conjunction with the -u option of course).
-
-scrot -ub
-
-Here is an example screenshot:
-
-[
- ![include window border in screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/border-new.png)
-][25]
-
-**Note**: Including window border also adds some of the background area to the screenshot.
-
-### Delay in taking screenshots
-
-You can introduce a time delay while taking screenshots. For this, you have to assign a numeric value to the --delay or -d command line option.
-
-scrot --delay [NUM]
-
-scrot --delay 5
-
-Here is an example:
-
-[
- ![delay taking screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/delay.png)
-][26]
-
-In this case, scrot will wait for 5 seconds and then take the screenshot.
-
-### Countdown before screenshot
-
-The tool also allows you to display a countdown while using the delay option. This feature can be accessed using the -c command line option.
-
-scrot --delay [NUM] -c
-
-scrot -d 5 -c
-
-Here is an example screenshot:
-
-[
- ![example delayed screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/countdown.png)
-][27]
-
-### Image quality
-
-Using the tool, you can adjust the quality of the screenshot image on a scale of 1-100. A higher value means a larger file size and less compression. The default value is 75, although the effect differs depending on the file format chosen.
-
-This feature can be accessed using --quality or -q option, but you have to assign a numeric value to this option ranging from 1-100.
-
-scrot --quality [NUM]
-
-scrot --quality 10
-
-Here is an example snapshot:
-
-[
- ![snapshot quality](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/img-quality.jpg)
-][28]
-
-So you can see that the quality of the image degrades a lot as the -q option is assigned a value closer to 1.
-
-### Generating thumbnails
-
-The scrot utility also allows you to generate a thumbnail of the screenshot. This feature can be accessed using the --thumb option. This option requires a NUM value, which is basically the percentage of the original screenshot size.
-
-scrot --thumb NUM
-
-scrot --thumb 50
-
-**Note**: The --thumb option makes sure that the screenshot is captured and saved in original size as well.
-
-For example, here is the original screenshot captured in my case:
-
-[
- ![Original screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/orig.png)
-][29]
-
-And following is the thumbnail saved:
-
-[
- ![thumbnail of the screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/thmb.png)
-][30]
-
-### Join multiple displays shots
-
-In case your machine has multiple displays attached to it, scrot allows you to grab and join screenshots of these displays. This feature can be accessed using the -m command line option.
-
-scrot -m
-
-Here is an example snapshot:
-
-[
- ![Join screenshots](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/multiple.png)
-][31]
-
-### Executing operations on saved images
-
-Using the tool, we can execute various operations on saved images - for example, open the screenshot in an image editor like gThumb. This feature can be accessed using the -e command line option. Here's an example:
-
-scrot abc.png -e 'gthumb abc.png'
-
-Here, gthumb is an image editor which will automatically launch after we run the command.
-
-Following is the snapshot of the command:
-
-[
- ![Execute commands on screenshots](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec1.png)
-][32]
-
-And here is the output of the above command:
-
-[
- ![esample screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec2.png)
-][33]
-
-So you can see that the scrot command grabbed the screenshot and then launched the gThumb image editor with the captured image as argument.
-
-If you don’t specify a filename to your screenshot, then the snapshot will be saved with a date-stamped filename in your current directory - this, as we've already mentioned in the beginning, is the default behaviour of scrot.
-
-Here's an -e command line option example where scrot uses the default name for the screenshot:
-
-scrot -e 'gthumb $n'
-
-[
- ![scrot running gthumb](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec3.png)
-][34]
-
-It's worth mentioning that $n is a special string, which provides access to the screenshot name. For more details on special strings, head to the next section.
-
-### Special strings
-
-The -e (or the --exec) and filename parameters can take format specifiers when used with scrot. There are two types of format specifiers. The first type is characters preceded by '%', which are used for date and time formats, while the second type is internal to scrot and is prefixed by '$'.
-
-Several specifiers which are recognised by the --exec and filename parameters are discussed below.
-
-**$f** – provides access to screenshot path (including filename).
-
-For example,
-
-scrot ashu.jpg -e 'mv $f ~/Pictures/Scrot/ashish/'
-
-Here is an example snapshot:
-
-[
- ![example](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/f.png)
-][35]
-
-If you do not specify a filename, scrot will by default save the snapshot with a date-stamped filename. The default date-stamped file format used by scrot is: %yy-%mm-%dd-%hhmmss_$wx$h_scrot.png.
-
-**$n** – provides snapshot name. Here is an example snapshot:
-
-[
- ![scrot $n variable](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/n.png)
-][36]
-
-**$s** – gives access to the size of screenshot. This feature, for example, can be accessed in the following way.
-
-scrot abc.jpg -e 'echo $s'
-
-Here is an example snapshot
-
-[
- ![scrot $s variable](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/s.png)
-][37]
-
-Similarly, you can use the other special strings **$p**, **$w**, **$h**, **$t**, **$$** and **\n** that provide access to image pixel size, image width, image height, image format, $ symbol, and give access to new line respectively. You can, for example, use these strings in the way similar to the **$s** example we have discussed above.
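-
-As an illustrative combination (the destination directory is arbitrary), the date/time specifiers and scrot's internal specifiers can be mixed in the filename and the -e command:
-
-scrot '%Y-%m-%d_$wx$h.png' -e 'mv $f ~/Pictures/Scrot/'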
-
-### Conclusion
-
-The utility is easy to install on Ubuntu systems, which is good for beginners. Scrot also provides some advanced features such as special strings that can be used in scripting by professionals. Needless to say, there is a slight learning curve associated with them.
-
-
---------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/
-
-作者:[Himanshu Arora][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/
-[1]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#get-the-applicationnbspversion
-[2]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#capturing-current-window
-[3]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#selecting-a-window
-[4]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#includenbspwindow-border-in-screenshots
-[5]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#delay-in-taking-screenshots
-[6]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#countdown-before-screenshot
-[7]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#image-quality
-[8]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#generating-thumbnails
-[9]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#join-multiple-displays-shots
-[10]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#executing-operations-on-saved-images
-[11]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#special-strings
-[12]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#about-scrot
-[13]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#scrot-installation
-[14]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#scrot-usagefeatures
-[15]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#conclusion
-[16]:https://www.howtoforge.com/subscription/
-[17]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/
-[18]:https://en.wikipedia.org/wiki/Scrot
-[19]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/scrot.png
-[20]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/version.png
-[21]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/desktop.png
-[22]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/active.png
-[23]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/select1.png
-[24]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/select2.png
-[25]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/border-new.png
-[26]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/delay.png
-[27]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/countdown.png
-[28]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/img-quality.jpg
-[29]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/orig.png
-[30]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/thmb.png
-[31]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/multiple.png
-[32]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec1.png
-[33]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec2.png
-[34]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec3.png
-[35]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/f.png
-[36]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/n.png
-[37]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/s.png
diff --git a/sources/tech/20170410 Writing a Time Series Database from Scratch.md b/sources/tech/20170410 Writing a Time Series Database from Scratch.md
index a7f8289b63..e73ffa6777 100644
--- a/sources/tech/20170410 Writing a Time Series Database from Scratch.md
+++ b/sources/tech/20170410 Writing a Time Series Database from Scratch.md
@@ -1,3 +1,6 @@
+cielong translating
+----
+
Writing a Time Series Database from Scratch
============================================================
diff --git a/sources/tech/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md b/sources/tech/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md
deleted file mode 100644
index 35ae7b418d..0000000000
--- a/sources/tech/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md
+++ /dev/null
@@ -1,84 +0,0 @@
-translating---geekpi
-
-# Understanding Docker "Container Host" vs. "Container OS" for Linux and Windows Containers
-
-
-
-Let's explore the relationship between the “Container Host” and the “Container OS” and how they differ between Linux and Windows containers.
-
-#### Some Definitions:
-
-* Container Host: Also called the Host OS. The Host OS is the operating system on which the Docker client and Docker daemon run. In the case of Linux and non-Hyper-V containers, the Host OS shares its kernel with running Docker containers. For Hyper-V each container has its own Hyper-V kernel.
-
-* Container OS: Also called the Base OS. The base OS refers to an image that contains an operating system such as Ubuntu, CentOS, or windowsservercore. Typically, you would build your own image on top of a Base OS image so that you can utilize parts of the OS. Note that Windows containers require a Base OS, while Linux containers do not.
-
-* Operating System Kernel: The Kernel manages lower level functions such as memory management, file system, network and process scheduling.
-
-#### Now for some pictures:
-
-![Linux Containers](http://floydhilton.com/images/2017/03/2017-03-31_14_50_13-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
-
-In the above example
-
-* The Host OS is Ubuntu.
-
-* The Docker Client and the Docker Daemon (together called the Docker Engine) are running on the Host OS.
-
-* Each container shares the Host OS kernel.
-
-* CentOS and BusyBox are Linux Base OS images.
-
-* The “No OS” container demonstrates that you do not NEED a base OS to run a container in Linux. You can create a Dockerfile that has a base image of [scratch][1] and then runs a binary that uses the kernel directly.
-
-* Check out [this][2] article for a comparison of Base OS sizes.
-
-![Windows Containers - Non Hyper-V](http://floydhilton.com/images/2017/03/2017-03-31_15_04_03-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
-
-In the above example
-
-* The Host OS is Windows 10 or Windows Server.
-
-* Each container shares the Host OS kernel.
-
-* All Windows containers require a Base OS of either [nanoserver][3] or [windowsservercore][4].
-
-![Windows Containers - Hyper-V](http://floydhilton.com/images/2017/03/2017-03-31_15_41_31-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
-
-In the above example
-
-* The Host OS is Windows 10 or Windows Server.
-
-* Each container is hosted in its own lightweight Hyper-V VM.
-
-* Each container uses the kernel inside the Hyper-V VM, which provides an extra layer of separation between containers.
-
-* All Windows containers require a Base OS of either [nanoserver][5] or [windowsservercore][6].
-
-A couple of good links:
-
-* [About windows containers][7]
-
-* [Deep dive into the implementation Windows Containers including multiple User Modes and “copy-on-write” to save resources][8]
-
-* [How Linux containers save resources by using “copy-on-write”][9]
-
---------------------------------------------------------------------------------
-
-via: http://floydhilton.com/docker/2017/03/31/Docker-ContainerHost-vs-ContainerOS-Linux-Windows.html
-
-作者:[Floyd Hilton ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://floydhilton.com/about/
-[1]:https://hub.docker.com/_/scratch/
-[2]:https://www.brianchristner.io/docker-image-base-os-size-comparison/
-[3]:https://hub.docker.com/r/microsoft/nanoserver/
-[4]:https://hub.docker.com/r/microsoft/windowsservercore/
-[5]:https://hub.docker.com/r/microsoft/nanoserver/
-[6]:https://hub.docker.com/r/microsoft/windowsservercore/
-[7]:https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/
-[8]:http://blog.xebia.com/deep-dive-into-windows-server-containers-and-docker-part-2-underlying-implementation-of-windows-server-containers/
-[9]:https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/#the-copy-on-write-strategy
diff --git a/sources/tech/20170622 A users guide to links in the Linux filesystem.md b/sources/tech/20170622 A users guide to links in the Linux filesystem.md
index 5fc4defaeb..3cb59aaacb 100644
--- a/sources/tech/20170622 A users guide to links in the Linux filesystem.md
+++ b/sources/tech/20170622 A users guide to links in the Linux filesystem.md
@@ -1,4 +1,6 @@
-Translating by Snapcrafter
+Translating by yongshouzhang
+
+
A user's guide to links in the Linux filesystem
============================================================
diff --git a/sources/tech/20170707 Lessons from my first year of live coding on Twitch.md b/sources/tech/20170707 Lessons from my first year of live coding on Twitch.md
index eca587641f..94a58e875e 100644
--- a/sources/tech/20170707 Lessons from my first year of live coding on Twitch.md
+++ b/sources/tech/20170707 Lessons from my first year of live coding on Twitch.md
@@ -1,3 +1,5 @@
+Translating by lonaparte
+
Lessons from my first year of live coding on Twitch
============================================================
diff --git a/sources/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md b/sources/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md
index 2410f2dcad..2193b8078c 100644
--- a/sources/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md
+++ b/sources/tech/20170719 Containing System Services in Red Hat Enterprise Linux – Part 1.md
@@ -1,4 +1,5 @@
-> translating by rieonke
+translating by liuxinyu123
+
Containing System Services in Red Hat Enterprise Linux – Part 1
============================================================
diff --git a/sources/tech/20170809 Designing a Microservices Architecture for Failure.md b/sources/tech/20170809 Designing a Microservices Architecture for Failure.md
deleted file mode 100644
index 3e53413b8b..0000000000
--- a/sources/tech/20170809 Designing a Microservices Architecture for Failure.md
+++ /dev/null
@@ -1,215 +0,0 @@
-translating by penghuster
-
-Designing a Microservices Architecture for Failure
-============================================================
-
-
-A microservices architecture makes it possible to **isolate failures** through well-defined service boundaries. But as in every distributed system, there is a **higher chance** of network, hardware or application-level issues. As a consequence of service dependencies, any component can be temporarily unavailable to its consumers. To minimize the impact of partial outages we need to build fault-tolerant services that can **gracefully** respond to certain types of outages.
-
-This article introduces the most common techniques and architecture patterns to build and operate a **highly available microservices** system based on [RisingStack’s Node.js Consulting & Development experience][3].
-
- _If you are not familiar with the patterns in this article, it doesn’t necessarily mean that you are doing something wrong. Building a reliable system always comes with an extra cost._
-
-### The Risk of the Microservices Architecture
-
-The microservices architecture moves application logic to services and uses a network layer to communicate between them. Communicating over a network instead of in-memory calls brings extra latency and complexity to the system which requires cooperation between multiple physical and logical components. The increased complexity of the distributed system leads to a higher chance of particular **network failures**.
-
-One of the biggest advantages of a microservices architecture over a monolithic one is that teams can independently design, develop and deploy their services. They have full ownership over their service's lifecycle. It also means that teams have no control over their service dependencies, as those are more likely managed by a different team. With a microservices architecture, we need to keep in mind that provider **services can be temporarily unavailable** because of broken releases, configurations, and other changes, as they are controlled by someone else and components move independently from each other.
-
-### Graceful Service Degradation
-
-One of the best advantages of a microservices architecture is that you can isolate failures and achieve graceful service degradation as components fail separately. For example, during an outage customers in a photo sharing application may not be able to upload a new picture, but they can still browse, edit and share their existing photos.
-
-![Microservices fail separately in theory](https://blog-assets.risingstack.com/2017/08/microservices-fail-separately-in-theory.png)
-
- _Microservices fail separately (in theory)_
-
-In most of the cases, it's hard to implement this kind of graceful service degradation as applications in a distributed system depend on each other, and you need to apply several failover logics _(some of them will be covered by this article later)_ to prepare for temporary glitches and outages.
-
-![Microservices Depend on Each Other](https://blog-assets.risingstack.com/2017/08/Microservices-depend-on-each-other.png)
-
- _Services depend on each other and fail together without failover logics._
-
-### Change management
-
-Google’s site reliability team has found that roughly **70% of the outages are caused by changes** in a live system. When you change something in your service - you deploy a new version of your code or change some configuration - there is always a chance for failure or the introduction of a new bug.
-
-In a microservices architecture, services depend on each other. This is why you should minimize failures and limit their negative effect. To deal with issues from changes, you can implement change management strategies and **automatic rollouts**.
-
-For example, when you deploy new code, or you change some configuration, you should apply these changes to a subset of your instances gradually, monitor them and even automatically revert the deployment if you see that it has a negative effect on your key metrics.
-
-![Microservices Change Management](https://blog-assets.risingstack.com/2017/08/microservices-change-management.png)
-
- _Change Management - Rolling Deployment_
-
-Another solution could be that you run two production environments. You always deploy to only one of them, and you only point your load balancer to the new one after you verified that the new version works as it is expected. This is called blue-green, or red-black deployment.
-
-**Reverting code is not a bad thing.** You shouldn’t leave broken code in production and then think about what went wrong. Always revert your changes when it’s necessary. The sooner the better.
-
-### Health-check and Load Balancing
-
-Instances continuously start, restart and stop because of failures, deployments or autoscaling. This makes them temporarily or permanently unavailable. To avoid issues, your load balancer should **skip unhealthy instances** in its routing, as they cannot serve your customers' or sub-systems' needs.
-
-Application instance health can be determined via external observation. You can do it by repeatedly calling a `GET /health` endpoint or via self-reporting. Modern **service discovery** solutions continuously collect health information from instances and configure the load balancer to route traffic only to healthy components.
-
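-As a rough, self-contained sketch (not taken from any particular framework; `checkDatabase()` is a hypothetical probe that you would replace with checks against your own dependencies), such an endpoint can be exposed with nothing more than Node's built-in `http` module, here written in TypeScript:
-
-```
-import * as http from "http";
-
-// Hypothetical probe: replace with a real check against your own dependencies.
-async function checkDatabase(): Promise<boolean> {
-  return true;
-}
-
-const server = http.createServer(async (req, res) => {
-  if (req.method === "GET" && req.url === "/health") {
-    const healthy = await checkDatabase();
-    // 200 tells the load balancer to keep routing traffic here, 503 to skip us.
-    res.writeHead(healthy ? 200 : 503, { "Content-Type": "application/json" });
-    res.end(JSON.stringify({ status: healthy ? "ok" : "degraded" }));
-    return;
-  }
-  res.writeHead(404);
-  res.end();
-});
-
-server.listen(3000);
-```
-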
-### Self-healing
-
-Self-healing can help to recover an application. We can talk about self-healing when an application can **do the necessary steps** to recover from a broken state. In most cases, it is implemented by an external system that watches the instances' health and restarts them when they are in a broken state for a longer period. Self-healing can be very useful in most cases; however, in certain situations it **can cause trouble** by continuously restarting the application. This might happen when your application cannot report a positive health status because it is overloaded or its database connection times out.
-
-Implementing an advanced self-healing solution which is prepared for a delicate situation - like a lost database connection - can be tricky. In this case, you need to add extra logic to your application to handle edge cases and let the external system know that the instance does not need to be restarted immediately.
-
-### Failover Caching
-
-Services usually fail because of network issues and changes in our system. However, most of these outages are temporary thanks to self-healing and advanced load balancing, so we should find a solution to keep our service working during these glitches. This is where **failover caching** can help and provide the necessary data to our application.
-
-Failover caches usually use **two different expiration dates**: a shorter one that tells how long you can use the cache in a normal situation, and a longer one that says how long you can use the cached data during a failure.
-
-![Microservices Failover Caching](https://blog-assets.risingstack.com/2017/08/microservices-failover-caching.png)
-
- _Failover Caching_
-
-It’s important to mention that you should only use failover caching when serving **the outdated data is better than serving nothing**.
-
-To set cache and failover cache, you can use standard response headers in HTTP.
-
-For example, with the `max-age` header you can specify the maximum amount of time a resource will be considered fresh. With the `stale-if-error` header, you can determine how long the resource should be served from a cache in the case of a failure.
-
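-As an illustration (the values are made up, and `stale-if-error` is a `Cache-Control` extension from RFC 5861, so check what your CDN or caching proxy actually honors), a plain Node/TypeScript handler could advertise both lifetimes like this:
-
-```
-import * as http from "http";
-
-http
-  .createServer((req, res) => {
-    res.writeHead(200, {
-      "Content-Type": "application/json",
-      // Fresh for 10 minutes under normal conditions; intermediaries may keep
-      // serving the stale copy for up to a day if the origin starts failing.
-      "Cache-Control": "max-age=600, stale-if-error=86400",
-    });
-    res.end(JSON.stringify({ products: [] }));
-  })
-  .listen(3000);
-```
-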
-Modern CDNs and load balancers provide various caching and failover behaviors, but you can also create a shared library for your company that contains standard reliability solutions.
-
-### Retry Logic
-
-There are certain situations when we cannot cache our data or we want to make changes to it, but our operations eventually fail. In these cases, we can **retry our action**, as we can expect that the resource will recover after some time or that our load balancer will send our request to a healthy instance.
-
-You should be careful with adding retry logic to your applications and clients, as a larger amount of **retries can make things even worse** or even prevent the application from recovering.
-
-In a distributed system, a microservices retry can trigger multiple other requests or retries and start a **cascading effect**. To minimize the impact of retries, you should limit their number and use an exponential backoff algorithm to continually increase the delay between retries until you reach the maximum limit.
-
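-As a rough sketch (the `callService()` function below is just a placeholder for whatever request you are retrying, and real clients usually also add random jitter to the delay), capped exponential backoff can be written in a few lines of TypeScript:
-
-```
-async function retryWithBackoff<T>(
-  callService: () => Promise<T>,   // placeholder for the operation to retry
-  maxAttempts = 5,
-  baseDelayMs = 100,
-  maxDelayMs = 5_000
-): Promise<T> {
-  for (let attempt = 1; ; attempt++) {
-    try {
-      return await callService();
-    } catch (err) {
-      if (attempt >= maxAttempts) throw err; // give up and surface the error
-      // The delay doubles on every attempt but is capped at maxDelayMs.
-      const delay = Math.min(baseDelayMs * 2 ** (attempt - 1), maxDelayMs);
-      await new Promise((resolve) => setTimeout(resolve, delay));
-    }
-  }
-}
-```
-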
-As a retry is initiated by the client _(browser, other microservices, etc.)_ and the client doesn't know whether the operation failed before or after the request was handled, you should prepare your application to handle **idempotency**. For example, when you retry a purchase operation, you shouldn't double charge the customer. Using a unique **idempotency-key** for each of your transactions can help to handle retries.
-
-### Rate Limiters and Load Shedders
-
-Rate limiting is the technique of defining how many requests can be received or processed by a particular customer or application during a timeframe. With rate limiting, for example, you can filter out customers and microservices that are responsible for **traffic peaks**, or you can ensure that your application doesn’t get overloaded until autoscaling comes to the rescue.
-
-You can also hold back lower-priority traffic to give enough resources to critical transactions.
-
-![Microservices Rate Limiter](https://blog-assets.risingstack.com/2017/08/microservices-rate-limiter.png)
-
- _A rate limiter can hold back traffic peaks_
-
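-The idea can be sketched in a few lines as a fixed-window counter keyed by client id (the window size and limit below are arbitrary example values, and production systems usually keep such counters in a shared store such as Redis rather than in process memory):
-
-```
-const WINDOW_MS = 60_000;  // 1-minute window (example value)
-const MAX_REQUESTS = 100;  // allowed requests per client per window (example value)
-
-const counters = new Map<string, { windowStart: number; count: number }>();
-
-function allowRequest(clientId: string, now = Date.now()): boolean {
-  const entry = counters.get(clientId);
-  if (!entry || now - entry.windowStart >= WINDOW_MS) {
-    // First request in a new window: reset the counter.
-    counters.set(clientId, { windowStart: now, count: 1 });
-    return true;
-  }
-  if (entry.count < MAX_REQUESTS) {
-    entry.count++;
-    return true;
-  }
-  return false; // over the limit: reject, queue, or downgrade the request
-}
-```
-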
-A different type of rate limiter is called the _concurrent request limiter_. It can be useful when you have expensive endpoints that shouldn’t be called more than a specified number of times, while you still want to serve traffic.
-
-A _fleet usage load shedder_ can ensure that there are always enough resources available to **serve critical transactions**. It keeps some resources for high-priority requests and doesn’t allow low-priority transactions to use all of them. A load shedder makes its decisions based on the whole state of the system, rather than based on a single user’s request bucket size. Load shedders **help your system to recover**, since they keep the core functionalities working while you have an ongoing incident.
-
-To read more about rate limiters and load shedders, I recommend checking out [Stripe’s article][5].
-
-### Fail Fast and Independently
-
-In a microservices architecture we want to prepare our services **to fail fast and separately**. To isolate issues at the service level, we can use the _bulkhead pattern_. You can read more about bulkheads later in this blog post.
-
-We also want our components to **fail fast**, as we don't want to wait for broken instances until they time out. Nothing is more disappointing than a hanging request and an unresponsive UI. It's not just wasting resources but also ruining the user experience. Our services call each other in a chain, so we should pay extra attention to preventing hanging operations before these delays add up.
-
-The first idea that comes to mind would be applying fine-grained timeouts for each service call. The problem with this approach is that you cannot really know what a good timeout value is, as there are certain situations when network glitches and other issues happen that only affect one or two operations. In this case, you probably don’t want to reject those requests if only a few of them time out.
-
-We can say that achieving the fail fast paradigm in microservices by **using timeouts is an anti-pattern** and you should avoid it. Instead of timeouts, you can apply the _circuit-breaker_ pattern that depends on the success / fail statistics of operations.
-
-### Bulkheads
-
-Bulkheads are used in shipbuilding to **partition** a ship **into sections**, so that sections can be sealed off if there is a hull breach.
-
-The concept of bulkheads can be applied in software development to **segregate resources**.
-
-By applying the bulkheads pattern, we can **protect limited resources** from being exhausted. For example, we can use two connection pools instead of a shared one if we have two kinds of operations that communicate with the same database instance where we have a limited number of connections. As a result of this client-resource separation, the operation that times out or overuses the pool won't bring all of the other operations down.
-
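-At the code level, the same idea can be sketched as a small compartment of concurrent slots per dependency (the pool sizes below are invented for illustration), so that one misbehaving dependency cannot exhaust the capacity reserved for the others:
-
-```
-class Bulkhead {
-  private inFlight = 0;
-  constructor(private readonly maxConcurrent: number) {}
-
-  async run<T>(task: () => Promise<T>): Promise<T> {
-    if (this.inFlight >= this.maxConcurrent) {
-      // Compartment is full: shed the request instead of letting it pile up.
-      throw new Error("bulkhead full");
-    }
-    this.inFlight++;
-    try {
-      return await task();
-    } finally {
-      this.inFlight--;
-    }
-  }
-}
-
-// Separate compartments: slow reporting queries cannot starve checkout queries.
-const checkoutPool = new Bulkhead(20);
-const reportingPool = new Bulkhead(5);
-```
-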
-One of the main reasons why the Titanic sank was that its bulkheads had a design flaw: the water could pour over the top of the bulkheads via the deck above and flood the entire hull.
-
-![Titanic Microservices Bulkheads](https://blog-assets.risingstack.com/2017/08/titanic-bulkhead-microservices.png)
-
- _Bulkheads in Titanic (they didn't work)_
-
-### Circuit Breakers
-
-To limit the duration of operations, we can use timeouts. Timeouts can prevent hanging operations and keep the system responsive. However, using static, fine-tuned timeouts in microservices communication is an **anti-pattern**, as we’re in a highly dynamic environment where it's almost impossible to come up with the right timing limitations that work well in every case.
-
-Instead of using small and transaction-specific static timeouts, we can use circuit breakers to deal with errors. Circuit breakers are named after the real-world electrical component because their behavior is identical. You can **protect resources** and **help them to recover** with circuit breakers. They can be very useful in a distributed system where a repetitive failure can lead to a snowball effect and bring the whole system down.
-
-A circuit breaker opens when a particular type of **error occurs multiple times** in a short period. An open circuit breaker prevents further requests from being made - just as the real one prevents electrons from flowing. Circuit breakers usually close after a certain amount of time, giving enough space for underlying services to recover.
-
-Keep in mind that not all errors should trigger a circuit breaker. For example, you probably want to skip client-side issues like requests with `4xx` response codes, but include `5xx` server-side failures. Some circuit breakers can have a half-open state as well. In this state, the service sends the first request to check system availability, while letting the other requests fail. If this first request succeeds, it restores the circuit breaker to a closed state and lets the traffic flow. Otherwise, it keeps it open.
-
-![Microservices Circuit Breakers](https://blog-assets.risingstack.com/2017/08/microservices-circuit-breakers.png)
-
- _Circuit Breaker_
-
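-A compact sketch of such a breaker in TypeScript (the threshold and timeout values are illustrative, and a production implementation would also distinguish error types, as noted above):
-
-```
-type State = "closed" | "open" | "half-open";
-
-class CircuitBreaker {
-  private state: State = "closed";
-  private failures = 0;
-  private openedAt = 0;
-
-  constructor(
-    private readonly failureThreshold = 5,
-    private readonly resetTimeoutMs = 30_000
-  ) {}
-
-  async call<T>(operation: () => Promise<T>): Promise<T> {
-    if (this.state === "open") {
-      if (Date.now() - this.openedAt < this.resetTimeoutMs) {
-        throw new Error("circuit open - failing fast");
-      }
-      this.state = "half-open"; // let a single trial request through
-    }
-    try {
-      const result = await operation();
-      this.failures = 0;
-      this.state = "closed";
-      return result;
-    } catch (err) {
-      this.failures++;
-      if (this.state === "half-open" || this.failures >= this.failureThreshold) {
-        this.state = "open";
-        this.openedAt = Date.now();
-      }
-      throw err;
-    }
-  }
-}
-```
-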
-### Testing for Failures
-
-You should continually **test your system against common issues** to make sure that your services can **survive various failures**. You should test for failures frequently to keep your team prepared for incidents.
-
-For testing, you can use an external service that identifies groups of instances and randomly terminates one of the instances in this group. With this, you can prepare for a single instance failure, but you can even shut down entire regions to simulate a cloud provider outage.
-
-One of the most popular testing solutions is the [ChaosMonkey][7] resiliency tool by Netflix.
-
-### Outro
-
-Implementing and running a reliable service is not easy. It takes a lot of effort from your side and also costs your company money.
-
-Reliability has many levels and aspects, so it is important to find the best solution for your team. You should make reliability a factor in your business decision processes and allocate enough budget and time for it.
-
-### Key Takeaways
-
-* Dynamic environments and distributed systems - like microservices - lead to a higher chance of failures.
-
-* Services should fail separately and achieve graceful degradation to improve the user experience.
-
-* 70% of outages are caused by changes; reverting code is not a bad thing.
-
-* Fail fast and independently. Teams have no control over their service dependencies.
-
-* Architectural patterns and techniques like caching, bulkheads, circuit breakers and rate-limiters help to build reliable microservices.
-
-To learn more about running a reliable service check out our free [Node.js Monitoring, Alerting & Reliability 101 e-book][8]. In case you need help with implementing a microservices system, reach out to us at [@RisingStack][9] on Twitter, or enroll in our upcoming [Building Microservices with Node.js][10] training.
-
-
--------------
-
-作者简介
-
-[Péter Márton][2]
-
-CTO at RisingStack, microservices and brewing beer with Node.js
-
-[https://twitter.com/slashdotpeter][1]
-
---------------------------------------------------------------------------------
-
-via: https://blog.risingstack.com/designing-microservices-architecture-for-failure/
-
-作者:[ Péter Márton][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://blog.risingstack.com/author/peter-marton/
-[1]:https://twitter.com/slashdotpeter
-[2]:https://blog.risingstack.com/author/peter-marton/
-[3]:https://risingstack.com/
-[4]:https://blog.risingstack.com/training-building-microservices-node-js/?utm_source=rsblog&utm_medium=roadblock-new&utm_content=/designing-microservices-architecture-for-failure/
-[5]:https://stripe.com/blog/rate-limiters
-[6]:https://blog.risingstack.com/training-building-microservices-node-js/?utm_source=rsblog&utm_medium=roadblock-new
-[7]:https://github.com/Netflix/chaosmonkey
-[8]:https://trace.risingstack.com/monitoring-ebook
-[9]:https://twitter.com/RisingStack
-[10]:https://blog.risingstack.com/training-building-microservices-node-js/
-[11]:https://blog.risingstack.com/author/peter-marton/
diff --git a/sources/tech/20170908 Betting on the Web.md b/sources/tech/20170908 Betting on the Web.md
index 5a5746e681..80d0002a80 100644
--- a/sources/tech/20170908 Betting on the Web.md
+++ b/sources/tech/20170908 Betting on the Web.md
@@ -1,4 +1,5 @@
-translate by hwlife
+voidpainter is translating
+---
[Betting on the Web][27]
============================================================
diff --git a/sources/tech/20171006 Create a Clean-Code App with Kotlin Coroutines and Android Architecture Components.md b/sources/tech/20171006 Create a Clean-Code App with Kotlin Coroutines and Android Architecture Components.md
index 0ff40cdd6e..e8c2861f27 100644
--- a/sources/tech/20171006 Create a Clean-Code App with Kotlin Coroutines and Android Architecture Components.md
+++ b/sources/tech/20171006 Create a Clean-Code App with Kotlin Coroutines and Android Architecture Components.md
@@ -1,6 +1,8 @@
Create a Clean-Code App with Kotlin Coroutines and Android Architecture Components
============================================================
+Translating by S9mtAt
+
### Full demo weather app included.
Android development is evolving fast. A lot of developers and companies are trying to address common problems and create some great tools or libraries that can totally change the way we structure our apps.
diff --git a/sources/tech/20171006 How to Install Software from Source Code... and Remove it Afterwards.md b/sources/tech/20171006 How to Install Software from Source Code... and Remove it Afterwards.md
deleted file mode 100644
index eb4a201931..0000000000
--- a/sources/tech/20171006 How to Install Software from Source Code... and Remove it Afterwards.md
+++ /dev/null
@@ -1,517 +0,0 @@
-"Translating by syys96"
-How to Install Software from Source Code… and Remove it Afterwards
-============================================================
-
-![How to install software from source code](https://itsfoss.com/wp-content/uploads/2017/10/install-software-from-source-code-linux-800x450.jpg)
-
- _Brief: This detailed guide explains how to install a program from source code in Linux and how to remove the software installed from the source code._
-
-One of the greatest strengths of your Linux distribution is its package manager and the associated software repository. With them, you have all the necessary tools and resources to download and install new software on your computer in a completely automated manner.
-
-But despite all their efforts, the package maintainers cannot handle each and every use case. Nor can they package all the software available out there. So there are still situations where you will have to compile and install new software by yourself. As for myself, the most common reason, by far, that I have to compile some software is when I need to run a very specific version. Or because I want to modify the source code or use some fancy compilation options.
-
-If your needs belong to that latter category, chances are you already know what you are doing. But for the vast majority of Linux users, compiling and installing software from the sources for the first time might look like an initiation ceremony: somewhat frightening, but with the promise of entering a new world of possibilities and of being part of a privileged community if you overcome it.
-
-### A. Installing software from source code in Linux
-
-And that’s exactly what we will do here. For the purpose of that article, let’s say I need to install [NodeJS][9] 8.1.1 on my system. That version exactly. A version which is not available from the Debian repository:
-
-```
-sh$ apt-cache madison nodejs | grep amd64
- nodejs | 6.11.1~dfsg-1 | http://deb.debian.org/debian experimental/main amd64 Packages
- nodejs | 4.8.2~dfsg-1 | http://ftp.fr.debian.org/debian stretch/main amd64 Packages
- nodejs | 4.8.2~dfsg-1~bpo8+1 | http://ftp.fr.debian.org/debian jessie-backports/main amd64 Packages
- nodejs | 0.10.29~dfsg-2 | http://ftp.fr.debian.org/debian jessie/main amd64 Packages
- nodejs | 0.10.29~dfsg-1~bpo70+1 | http://ftp.fr.debian.org/debian wheezy-backports/main amd64 Packages
-```
-
-### Step 1: Getting the source code from GitHub
-
-Like many open-source projects, the sources of NodeJS can be found on GitHub: [https://github.com/nodejs/node][10]
-
-So, let’s go directly there.
-
-![The NodeJS official GitHub repository](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-account.png)
-
-If you’re not familiar with [GitHub][11], [git][12] or any other [version control system][13], it is worth mentioning that the repository contains the current source for the software, as well as a history of all the modifications made to that software through the years, eventually going back to the very first line written for that project. For the developers, keeping that history has many advantages. For us today, the main one is that we will be able to get the sources for the project as they were at any given point in time. More precisely, I will be able to get the sources as they were when the 8.1.1 version I want was released, even if there were many modifications since then.
-
-![Choose the v8.1.1 tag in the NodeJS GitHub repository](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-choose-revision-tag.png)
-
-On GitHub, you can use the “branch” button to navigate between different versions of the software. [“Branches” and “tags” are somewhat related concepts in Git][14]. Basically, the developers create “branches” and “tags” to keep track of important events in the project history, like when they start working on a new feature or when they publish a release. I will not go into the details here; all you need to know is that I’m looking for the version _tagged_ “v8.1.1”.
-
-![The NodeJS GitHub repository as it was at the time the v8.1.1 tag was created](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-revision-811.png)
-
-After having chosen the “v8.1.1” tag, the page is refreshed; the most obvious change is that the tag now appears as part of the URL. In addition, you will notice the file change dates are different too. The source tree you are now seeing is the one that existed at the time the v8.1.1 tag was created. In some sense, you can think of a version control tool like git as a time travel machine, allowing you to go back and forth in a project's history.
-
-![NodeJS GitHub repository download as a ZIP button](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-revision-download-zip.png)
-
-At this point, we can download the sources of NodeJS 8.1.1. You can’t miss the big blue button suggesting to download the ZIP archive of the project. As for myself, I will download and extract the ZIP from the command line for the sake of the explanation. But if you prefer using a [GUI][15] tool, don’t hesitate to do that instead:
-
-```
-wget https://github.com/nodejs/node/archive/v8.1.1.zip
-unzip v8.1.1.zip
-cd node-8.1.1/
-```
-
-Downloading the ZIP archive works great. But if you want to do it “like a pro”, I would suggest using the `git` tool directly to download the sources. It is not complicated at all, and it will be a nice first contact with a tool you will often encounter:
-
-```
-# first ensure git is installed on your system
-sh$ sudo apt-get install git
-# Make a shallow clone the NodeJS repository at v8.1.1
-sh$ git clone --depth 1 \
- --branch v8.1.1 \
- https://github.com/nodejs/node
-sh$ cd node/
-```
-
-By the way, if you have any issue, just consider that first part of this article as a general introduction. Later I have more detailed explanations for Debian- and Red Hat-based distributions in order to help you troubleshoot common issues.
-
-Anyway, whether you downloaded the source using `git` or as a ZIP archive, you should now have exactly the same source files in the current directory:
-
-```
-sh$ ls
-android-configure BUILDING.md common.gypi doc Makefile src
-AUTHORS CHANGELOG.md configure GOVERNANCE.md node.gyp test
-benchmark CODE_OF_CONDUCT.md CONTRIBUTING.md lib node.gypi tools
-BSDmakefile COLLABORATOR_GUIDE.md deps LICENSE README.md vcbuild.bat
-```
-
-### Step 2: Understanding the Build System of the program
-
-We usually talk about “compiling the sources”, but compilation is only one of the phases required to produce working software from its source. A build system is a set of tools and practices used to automate and articulate those different tasks in order to build the software entirely just by issuing a few commands.
-
-If the concept is simple, the reality is somewhat more complicated, because different projects or programming languages may have different requirements. Or because of the programmer’s tastes. Or the supported platforms. Or for historical reasons. Or… or… there is an almost endless list of reasons to choose or create another build system. All that to say there are many different solutions used out there.
-
-NodeJS uses a [GNU-style build system][16]. This is a popular choice in the open source community. And once again, a good way to start your journey.
-
-Writing and tuning a build system is a pretty complex task. But for the “end user”, GNU-style build systems boil down to using two tools: `configure` and `make`.
-
-The `configure` file is a project-specific script that will check the destination system configuration and available features in order to ensure the project can be built, possibly dealing with the specificities of the current platform.
-
-An important part of a typical `configure` job is to build the `Makefile`. That is the file containing the instructions required to effectively build the project.
-
-The [`make` tool][17], on the other hand, is a POSIX tool available on any Unix-like system. It will read the project-specific `Makefile` and perform the required operations to build and install your program.
-
-But, as always in the Linux world, you still have some latitude to customize the build for your specific needs.
-
-```
-./configure --help
-```
-
-The `configure --help` command will show you all the available configuration options. Once again, this is very project-specific. And to be honest, it is sometimes required to dig into the project before fully understanding the meaning of each and every configure option.
-
-But there is at least one standard GNU Autotools option that you must know: the `--prefix` option. This has to do with the file system hierarchy and the place your software will be installed.
-
-### Step 3: The FHS
-
-The Linux file system hierarchy on a typical distribution mostly complies with the [Filesystem Hierarchy Standard (FHS)][19].
-
-That standard explains the purpose of the various directories of your system: `/usr`, `/tmp`, `/var` and so on.
-
-When using the GNU Autotools (and most other build systems), the default installation location for your new software will be `/usr/local`. This is a good choice as, according to the FHS, _“The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr.”_
-
-The `/usr/local` hierarchy somehow replicates the root directory, and you will find there `/usr/local/bin` for the executable programs, `/usr/local/lib` for the libraries, `/usr/local/share` for architecture independent files and so on.
-
-The only issue when using the `/usr/local` tree for custom software installation is that the files for all your software will be mixed there. Especially after having installed a couple of programs, it will be hard to track exactly which file in `/usr/local/bin` or `/usr/local/lib` belongs to which software. That will not cause any issue to the system, though. After all, `/usr/bin` is just about the same mess. But it will become an issue the day you want to remove manually installed software.
-
-To solve that issue, I usually prefer installing custom software in the `/opt` sub-tree instead. Once again, to quote the FHS:
-
-_”/opt is reserved for the installation of add-on application software packages.
-
-A package to be installed in /opt must locate its static files in a separate /opt/<package> or /opt/<provider> directory tree, where <package> is a name that describes the software package and <provider> is the provider’s LANANA registered name.”_
-
-So we will create a sub-directory of `/opt` specifically for our custom NodeJS installation. And if someday I want to remove that software, I will simply have to remove that directory:
-
-```
-sh$ sudo mkdir /opt/node-v8.1.1
-sh$ sudo ln -sT node-v8.1.1 /opt/node
-# What is the purpose of the symbolic link above?
-# Read the article till the end--then try to answer that
-# question in the comment section!
-
-sh$ ./configure --prefix=/opt/node-v8.1.1
-sh$ make -j9 && echo ok
-# -j9 means run up to 9 parallel tasks to build the software.
-# As a rule of thumb, use -j(N+1) where N is the number of cores
-# of your system. That will maximize the CPU usage (one task per
-# CPU thread/core + a provision of one extra task when a process
-# is blocked by an I/O operation).
-```
-
-Anything but “ok” after the `make` command has completed would mean there was an error during the build process. As we ran a parallel build because of the `-j` option, it is not always easy to retrieve the error message given the large volume of output produced by the build system.
-
-In case of an issue, just restart `make`, but without the `-j` option this time, and the error should appear near the end of the output:
-
-```
-sh$ make
-```
-
-Finally, once the compilation has completed, you can install your software to its location by running the command:
-
-```
-sh$ sudo make install
-```
-
-And test it:
-
-```
-sh$ /opt/node/bin/node --version
-v8.1.1
-```
-
-### B. What if things go wrong while installing from source code?
-
-What I’ve explained above is mostly what you can see on the “build instructions” page of a well-documented project. But given this article’s goal is to let you compile your first software from sources, it might be worth taking the time to investigate some common issues. So, I will do the whole procedure again, but this time from fresh and minimal Debian 9.0 and CentOS 7.0 systems, so you can see the errors I encountered and how I solved them.
-
-### From Debian 9.0 “Stretch”
-
-```
-itsfoss@debian:~$ git clone --depth 1 \
- --branch v8.1.1 \
- https://github.com/nodejs/node
--bash: git: command not found
-```
-
-This problem is quite easy to diagnose and solve. Just install the `git` package:
-
-```
-itsfoss@debian:~$ sudo apt-get install git
-```
-
-```
-itsfoss@debian:~$ git clone --depth 1 \
- --branch v8.1.1 \
- https://github.com/nodejs/node && echo ok
-[...]
-ok
-```
-
-```
-itsfoss@debian:~/node$ sudo mkdir /opt/node-v8.1.1
-itsfoss@debian:~/node$ sudo ln -sT node-v8.1.1 /opt/node
-```
-
-No problem here.
-
-```
-itsfoss@debian:~/node$ ./configure --prefix=/opt/node-v8.1.1/
-WARNING: failed to autodetect C++ compiler version (CXX=g++)
-WARNING: failed to autodetect C compiler version (CC=gcc)
-Node.js configure error: No acceptable C compiler found!
- Please make sure you have a C compiler installed on your system and/or
- consider adjusting the CC environment variable if you installed
- it in a non-standard prefix.
-```
-
-Obviously, to compile a project, you need a compiler. NodeJS being written using the [C++ language][20], we need a C++ [compiler][21]. Here I will install `g++`, the GNU C++ compiler for that purpose:
-
-```
-itsfoss@debian:~/node$ sudo apt-get install g++
-itsfoss@debian:~/node$ ./configure --prefix=/opt/node-v8.1.1/ && echo ok
-[...]
-ok
-```
-
-```
-itsfoss@debian:~/node$ make -j9 && echo ok
--bash: make: command not found
-```
-
-One other missing tool. Same symptoms. Same solution:
-
-```
-itsfoss@debian:~/node$ sudo apt-get install make
-itsfoss@debian:~/node$ make -j9 && echo ok
-[...]
-ok
-```
-
-```
-itsfoss@debian:~/node$ sudo make install
-[...]
-itsfoss@debian:~/node$ /opt/node/bin/node --version
-v8.1.1
-```
-
-Success!
-
-Please notice: I’ve installed the various tools one by one to show how to diagnose compilation issues and to show you the typical solution to solve those issues. But if you search more about that topic or read other tutorials, you will discover that most distributions have “meta-packages” acting as an umbrella to install some or all of the typical tools used for compiling software. On Debian-based systems, you will probably encounter the [build-essential][22] package for that purpose. And on Red Hat-based distributions, that will be the _“Development Tools”_ group.
-
-### From CentOS 7.0
-
-```
-[itsfoss@centos ~]$ git clone --depth 1 \
- --branch v8.1.1 \
- https://github.com/nodejs/node
--bash: git: command not found
-```
-
-Command not found? Just install it using the `yum` package manager:
-
-```
-[itsfoss@centos ~]$ sudo yum install git
-```
-
-```
-[itsfoss@centos ~]$ git clone --depth 1 \
- --branch v8.1.1 \
- https://github.com/nodejs/node && echo ok
-[...]
-ok
-```
-
-```
-[itsfoss@centos ~]$ sudo mkdir /opt/node-v8.1.1
-[itsfoss@centos ~]$ sudo ln -sT node-v8.1.1 /opt/node
-```
-
-```
-[itsfoss@centos ~]$ cd node
-[itsfoss@centos node]$ ./configure --prefix=/opt/node-v8.1.1/
-WARNING: failed to autodetect C++ compiler version (CXX=g++)
-WARNING: failed to autodetect C compiler version (CC=gcc)
-Node.js configure error: No acceptable C compiler found!
-
- Please make sure you have a C compiler installed on your system and/or
- consider adjusting the CC environment variable if you installed
- it in a non-standard prefix.
-```
-
-You guessed it: NodeJS is written using the C++ language, but my system lacks the corresponding compiler. Yum to the rescue. As I’m not a regular CentOS user, I actually had to search the Internet for the exact name of the package containing the g++ compiler, which led me to this page: [https://superuser.com/questions/590808/yum-install-gcc-g-doesnt-work-anymore-in-centos-6-4][23]
-
-```
-[itsfoss@centos node]$ sudo yum install gcc-c++
-[itsfoss@centos node]$ ./configure --prefix=/opt/node-v8.1.1/ && echo ok
-[...]
-ok
-```
-
-```
-[itsfoss@centos node]$ make -j9 && echo ok
-[...]
-ok
-```
-
-```
-[itsfoss@centos node]$ sudo make install && echo ok
-[...]
-ok
-```
-
-```
-[itsfoss@centos node]$ /opt/node/bin/node --version
-v8.1.1
-```
-
-Success. Again.
-
-### C. Making changes to the software installed from source code
-
-You may install software from source because you need a very specific version not available in your distribution repository, or because you want to _modify_ that program, either to fix a bug or add a feature. After all, open source is all about that. So I will take that opportunity to give you a taste of the power you have at hand now that you are able to compile your own software.
-
-Here, we will make a minor change to the sources of NodeJS. And we will see if our change will be incorporated into the compiled version of the software:
-
-Open the file `node/src/node.cc` in your favorite [text editor][24] (vim, nano, gedit, …) and try to locate this fragment of code:
-
-```
- if (debug_options.ParseOption(argv[0], arg)) {
- // Done, consumed by DebugOptions::ParseOption().
- } else if (strcmp(arg, "--version") == 0 || strcmp(arg, "-v") == 0) {
- printf("%s\n", NODE_VERSION);
- exit(0);
- } else if (strcmp(arg, "--help") == 0 || strcmp(arg, "-h") == 0) {
- PrintHelp();
- exit(0);
- }
-```
-
-It is around [line 3830 of the file][25]. Then modify the line containing `printf` to match that one instead:
-
-```
- printf("%s (compiled by myself)\n", NODE_VERSION);
-```
-
-Then head back to your terminal. Before going further (and to give you some more insight into the power behind git), you can check if you’ve modified the right file by running `git diff`:
-
-```
-diff --git a/src/node.cc b/src/node.cc
-index bbce1022..a5618b57 100644
---- a/src/node.cc
-+++ b/src/node.cc
-@@ -3828,7 +3828,7 @@ static void ParseArgs(int* argc,
- if (debug_options.ParseOption(argv[0], arg)) {
- // Done, consumed by DebugOptions::ParseOption().
- } else if (strcmp(arg, "--version") == 0 || strcmp(arg, "-v") == 0) {
-- printf("%s\n", NODE_VERSION);
-+ printf("%s (compiled by myself)\n", NODE_VERSION);
- exit(0);
- } else if (strcmp(arg, "--help") == 0 || strcmp(arg, "-h") == 0) {
- PrintHelp();
-```
-
-You should see a “-” (minus sign) before the line as it was before you changed it. And a “+” (plus sign) before the line after your changes.
-
-It is now time to recompile and re-install your software:
-
-```
-make -j9 && sudo make install && echo ok
-[...]
-ok
-```
-
-This time, the only reason it might fail is that you’ve made a typo while changing the code. If this is the case, re-open the `node/src/node.cc` file in your text editor and fix the mistake.
-
-Once you’ve managed to compile and install that new modified NodeJS version, you will be able to check if your modifications were actually incorporated into the software:
-
-```
-itsfoss@debian:~/node$ /opt/node/bin/node --version
-v8.1.1 (compiled by myself)
-```
-
-Congratulations! You’ve made your first change to an open-source program!
-
-### D. Let the shell locate our custom build software
-
-You may have noticed that, until now, I have always launched my newly compiled NodeJS software by specifying the absolute path to the binary file.
-
-```
-/opt/node/bin/node
-```
-
-It works. But this is annoying, to say the least. There are actually two common ways of fixing that. But to understand them, you must first know that your shell locates executable files by looking for them only in the directories specified by the `PATH` [environment variable][26].
-
-```
-itsfoss@debian:~/node$ echo $PATH
-/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
-```
-
-Here, on that Debian system, if you do not explicitly specify any directory as part of a command name, the shell will first look for that executable program in `/usr/local/bin`, then, if not found, in `/usr/bin`, then `/bin`, then `/usr/local/games`, then `/usr/games`, and if it is still not found, the shell will report a _“command not found”_ error.
-
-Given that, we have two ways to make a command accessible to the shell: by adding it to one of the already configured `PATH` directories, or by adding the directory containing our executable file to the `PATH`.
-
-### Adding a link from /usr/local/bin
-
-Just _copying_ the node binary executable from `/opt/node/bin` to `/usr/local/bin` would be a bad idea, since by doing so, the executable program would no longer be able to locate the other required components belonging to `/opt/node/` (it’s a common practice for software to locate its resource files relative to its own location).
-
-So, the traditional way of doing that is by using a symbolic link:
-
-```
-itsfoss@debian:~/node$ sudo ln -sT /opt/node/bin/node /usr/local/bin/node
-itsfoss@debian:~/node$ which -a node || echo not found
-/usr/local/bin/node
-itsfoss@debian:~/node$ node --version
-v8.1.1 (compiled by myself)
-```
-
-This is a simple and effective solution, especially if a software package is made of just a few well-known executable programs, since you have to create a symbolic link for each and every user-invokable command. For example, if you’re familiar with NodeJS, you know the `npm` companion application that I should symlink from `/usr/local/bin` too. But I leave that to you as an exercise.
-
-### Modifying the PATH
-
-First, if you tried the preceding solution, remove the node symbolic link created previously to start from a clear state:
-
-```
-itsfoss@debian:~/node$ sudo rm /usr/local/bin/node
-itsfoss@debian:~/node$ which -a node || echo not found
-not found
-```
-
-And now, here is the magic command to change your `PATH`:
-
-```
-itsfoss@debian:~/node$ export PATH="/opt/node/bin:${PATH}"
-itsfoss@debian:~/node$ echo $PATH
-/opt/node/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
-```
-
-Simply put, I replaced the content of the `PATH` environment variable with its previous content, prefixed by `/opt/node/bin`. So, as you can imagine, the shell will now look first in the `/opt/node/bin` directory for executable programs. We can confirm that using the `which` command:
-
-```
-itsfoss@debian:~/node$ which -a node || echo not found
-/opt/node/bin/node
-itsfoss@debian:~/node$ node --version
-v8.1.1 (compiled by myself)
-```
-
-Whereas the “link” solution is permanent as soon as you’ve created the symbolic link in `/usr/local/bin`, the `PATH` change is effective only in the current shell. I will let you do some research by yourself to find out how to make changes to the `PATH` permanent. As a hint, it has to do with your “profile”. If you find the solution, don’t hesitate to share it with the other readers by using the comment section below!
-
-### E. How to remove that newly installed software from source code
-
-Since our custom compiled NodeJS software sits completely in the `/opt/node-v8.1.1` directory, removing that software takes no more work than using the `rm` command to remove that directory:
-
-```
-sudo rm -rf /opt/node-v8.1.1
-```
-
-BEWARE: `sudo` and `rm -rf` are a dangerous cocktail! Always check your command twice before pressing the “enter” key. You won’t have any confirmation message and no undelete if you remove the wrong directory…
-
-Then, if you’ve modified your `PATH`, you will have to revert those changes. Which is not complicated at all.
-
-And if you’ve created links from `/usr/local/bin` you will have to remove them all:
-
-```
-itsfoss@debian:~/node$ sudo find /usr/local/bin \
- -type l \
- -ilname "/opt/node/*" \
- -print -delete
-/usr/local/bin/node
-```
-
-### Wait? Where was the Dependency Hell?
-
-As a final comment, if you read about compiling your own custom software, you might have heard about [dependency hell][27]. This is a nickname for that annoying situation where, before being able to successfully compile a piece of software, you must first compile a prerequisite library, which in turn requires another library that might in turn be incompatible with some other software you’ve already installed.
-
-Part of the job of the package maintainers of your distribution is to actually resolve that dependency hell and to ensure that the various pieces of software on your system use compatible libraries and are installed in the right order.
-
-In this article, I chose on purpose to install NodeJS as it has virtually no dependencies. I said “virtually” because, in fact, it _does_ have dependencies. But the source code of those dependencies is present in the source repository of the project (in the `node/deps` subdirectory), so you don’t have to download and install them manually beforehand.
-
-But if you’re interested in understanding more about that problem and learn how to deal with it, let me know that using the comment section below: that would be a great topic for a more advanced article!
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Engineer by Passion, Teacher by Vocation. My goals : to share my enthusiasm for what I teach and prepare my students to develop their skills by themselves. You can find me on my website as well.
-
---------------------
-
-via: https://itsfoss.com/install-software-from-source-code/
-
-作者:[Sylvain Leroux ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://itsfoss.com/author/sylvain/
-[1]:https://itsfoss.com/author/sylvain/
-[2]:https://itsfoss.com/install-software-from-source-code/#comments
-[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Finstall-software-from-source-code%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
-[4]:https://twitter.com/share?original_referer=/&text=How+to+Install+Software+from+Source+Code%E2%80%A6+and+Remove+it+Afterwards&url=https://itsfoss.com/install-software-from-source-code/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=Yes_I_Know_IT
-[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Finstall-software-from-source-code%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
-[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Finstall-software-from-source-code%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
-[7]:https://www.reddit.com/submit?url=https://itsfoss.com/install-software-from-source-code/&title=How+to+Install+Software+from+Source+Code%E2%80%A6+and+Remove+it+Afterwards
-[8]:https://itsfoss.com/remove-install-software-ubuntu/
-[9]:https://nodejs.org/en/
-[10]:https://github.com/nodejs/node
-[11]:https://en.wikipedia.org/wiki/GitHub
-[12]:https://en.wikipedia.org/wiki/Git
-[13]:https://en.wikipedia.org/wiki/Version_control
-[14]:https://stackoverflow.com/questions/1457103/how-is-a-tag-different-from-a-branch-which-should-i-use-here
-[15]:https://en.wikipedia.org/wiki/Graphical_user_interface
-[16]:https://en.wikipedia.org/wiki/GNU_Build_System
-[17]:https://en.wikipedia.org/wiki/Make_(software)
-[18]:https://itsfoss.com/pro-vim-tips/
-[19]:http://www.pathname.com/fhs/
-[20]:https://en.wikipedia.org/wiki/C%2B%2B
-[21]:https://en.wikipedia.org/wiki/Compiler
-[22]:https://packages.debian.org/sid/build-essential
-[23]:https://superuser.com/questions/590808/yum-install-gcc-g-doesnt-work-anymore-in-centos-6-4
-[24]:https://en.wikipedia.org/wiki/List_of_text_editors
-[25]:https://github.com/nodejs/node/blob/v8.1.1/src/node.cc#L3830
-[26]:https://en.wikipedia.org/wiki/Environment_variable
-[27]:https://en.wikipedia.org/wiki/Dependency_hell
diff --git a/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md b/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md
index 22986eaa19..1e02fe5430 100644
--- a/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md
+++ b/sources/tech/20171007 A Large-Scale Study of Programming Languages and Code Quality in GitHub.md
@@ -1,3 +1,4 @@
+fuzheng1998 translating
A Large-Scale Study of Programming Languages and Code Quality in GitHub
============================================================
diff --git a/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md b/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
index 20c14074c6..bc3e800452 100644
--- a/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
+++ b/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
@@ -1,3 +1,5 @@
+HankChow Translating
+
In Device We Trust: Measure Twice, Compute Once with Xen, Linux, TPM 2.0 and TXT
============================================================
diff --git a/sources/tech/20171012 Linux Networking Hardware for Beginners Think Software.md b/sources/tech/20171012 Linux Networking Hardware for Beginners Think Software.md
index 5f1d67a9c8..661f5bc2df 100644
--- a/sources/tech/20171012 Linux Networking Hardware for Beginners Think Software.md
+++ b/sources/tech/20171012 Linux Networking Hardware for Beginners Think Software.md
@@ -1,4 +1,5 @@
-translating by sugarfillet
+Translating by FelixYFZ
+
Linux Networking Hardware for Beginners: Think Software
============================================================
diff --git a/sources/tech/20171014 Proxy Models in Container Environments.md b/sources/tech/20171014 Proxy Models in Container Environments.md
deleted file mode 100644
index 74ab1993e5..0000000000
--- a/sources/tech/20171014 Proxy Models in Container Environments.md
+++ /dev/null
@@ -1,89 +0,0 @@
-translating by 2ephaniah
-
-Proxy Models in Container Environments
-============================================================
-
-### Most of us are familiar with how proxies work, but is it any different in a container-based environment? See what's changed.
-
-Inline, side-arm, reverse, and forward. These used to be the terms we used to describe the architectural placement of proxies in the network.
-
-Today, containers use some of the same terminology, but they are introducing new terms. That’s an opportunity for me to extemporaneously expound on my favorite of all topics: the proxy.
-
-One of the primary drivers of cloud (once we all got past the pipedream of cost containment) has been scalability. Scale has challenged agility (and sometimes won) in various surveys over the past five years as the number one benefit organizations seek by deploying apps in cloud computing environments.
-
-That’s in part because in a digital economy (in which we now operate), apps have become the digital equivalent of brick-and-mortar “open/closed” signs and the manifestation of digital customer assistance. Slow, unresponsive apps have the same effect as turning out the lights or understaffing the store.
-
-Apps need to be available and responsive to meet demand. Scale is the technical response to achieving that business goal. Cloud not only provides the ability to scale, but offers the ability to scale _automatically_ . To do that requires a load balancer. Because that’s how we scale apps – with proxies that load balance traffic/requests.
-
-Containers are no different with respect to expectations around scale. Containers must scale – and scale automatically – and that means the use of load balancers (proxies).
-
-If you’re using native capabilities, you’re doing primitive load balancing based on TCP/UDP. Generally speaking, container-based proxy implementations aren’t fluent in HTTP or other application layer protocols and don’t offer capabilities beyond plain old load balancing ([POLB][1]). That’s often good enough, as container scale operates on a cloned, horizontal premise – to scale an app, add another copy and distribute requests across it. Layer 7 (HTTP) routing capabilities are found at the ingress (in [ingress controllers][2] and API gateways) and are used as much (or more) for app routing as they are to scale applications.
-
-In some cases, however, this is not enough. If you want (or need) more application-centric scale or the ability to insert additional services, you’ll graduate to more robust offerings that can provide programmability or application-centric scalability or both.
-
-To do that means [plugging-in proxies][3]. The container orchestration environment you’re working in largely determines the deployment model of the proxy in terms of whether it’s a reverse proxy or a forward proxy. Just to keep things interesting, there’s also a third model – sidecar – that is the foundation of scalability supported by emerging service mesh implementations.
-
-### Reverse Proxy
-
- [![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/unavailable_is_closed_thumb.png?ver=2017-09-12-082119-957 "Image title")][4]
-
-A reverse proxy is closest to a traditional model in which a virtual server accepts all incoming requests and distributes them across a pool (farm, cluster) of resources.
-
-There is one proxy per ‘application’. Any client that wants to connect to the application is instead connected to the proxy, which then chooses and forwards the request to an appropriate instance. If the green app wants to communicate with the blue app, it sends a request to the blue proxy, which determines which of the two instances of the blue app should respond to the request.
-
-In this model, the proxy is only concerned with the app it is managing. The blue proxy doesn’t care about the instances associated with the orange proxy, and vice-versa.
-
-### Forward Proxy
-
- [![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/per-node_forward_proxy_thumb.jpg?ver=2017-09-14-072422-213)][5]
-
-This mode more closely models that of a traditional outbound firewall.
-
-In this model, each container **node** has an associated proxy. If a client wants to connect to a particular application or service, it is instead connected to the proxy local to the container node where the client is running. The proxy then chooses an appropriate instance of that application and forwards the client's request.
-
-Both the orange and the blue app connect to the same proxy associated with its node. The proxy then determines which instance of the requested app instance should respond.
-
-In this model, every proxy must know about every application to ensure it can forward requests to the appropriate instance.
-
-### Sidecar Proxy
-
- [![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/per-pod_sidecar_proxy_thumb.jpg?ver=2017-09-14-072425-620)][6]
-
-This mode is also referred to as a service mesh router. In this model, each **container **has its own proxy.
-
-If a client wants to connect to an application, it instead connects to the sidecar proxy, which chooses an appropriate instance of that application and forwards the client's request. This behavior is the same as a _forward proxy _ model.
-
-The difference between a sidecar and forward proxy is that sidecar proxies do not need to modify the container orchestration environment. For example, in order to plug-in a forward proxy to k8s, you need both the proxy _and _ a replacement for kube-proxy. Sidecar proxies do not require this modification because it is the app that automatically connects to its “sidecar” proxy instead of being routed through the proxy.
-
-### Summary
-
-Each model has its advantages and disadvantages. All three share a reliance on environmental data (telemetry and changes in configuration) as well as the need to integrate into the ecosystem. Some models are pre-determined by the environment you choose, so careful consideration as to future needs – service insertion, security, networking complexity – need to be evaluated before settling on a model.
-
-We’re still in early days with respect to containers and their growth in the enterprise. As they continue to stretch into production environments it’s important to understand the needs of the applications delivered by containerized environments and how their proxy models differ in implementation.
-
-*It was extemporaneous when I wrote it down. Now, not so much.
-
-
---------------------------------------------------------------------------------
-
-via: https://dzone.com/articles/proxy-models-in-container-environments
-
-作者:[Lori MacVittie ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://dzone.com/users/307701/lmacvittie.html
-[1]:https://f5.com/about-us/blog/articles/go-beyond-polb-plain-old-load-balancing
-[2]:https://f5.com/about-us/blog/articles/ingress-controllers-new-name-familiar-function-27388
-[3]:http://clouddocs.f5.com/products/asp/v1.0/
-[4]:https://devcentral.f5.com/Portals/0/Users/038/38/38/unavailable_is_closed.png?ver=2017-09-12-082118-160
-[5]:https://devcentral.f5.com/Portals/0/Users/038/38/38/per-node_forward_proxy.jpg?ver=2017-09-14-072419-667
-[6]:https://devcentral.f5.com/Portals/0/Users/038/38/38/per-pod_sidecar_proxy.jpg?ver=2017-09-14-072424-073
-[7]:https://dzone.com/users/307701/lmacvittie.html
-[8]:https://dzone.com/users/307701/lmacvittie.html
-[9]:https://dzone.com/articles/proxy-models-in-container-environments#
-[10]:https://dzone.com/cloud-computing-tutorials-tools-news
-[11]:https://dzone.com/articles/proxy-models-in-container-environments#
-[12]:https://dzone.com/go?i=243221&u=https%3A%2F%2Fget.platform9.com%2Fjzlp-kubernetes-deployment-models-the-ultimate-guide%2F
diff --git a/sources/tech/20171017 Image Processing on Linux.md b/sources/tech/20171017 Image Processing on Linux.md
deleted file mode 100644
index 77c1ab2caa..0000000000
--- a/sources/tech/20171017 Image Processing on Linux.md
+++ /dev/null
@@ -1,98 +0,0 @@
-XYenChi is translating
-Image Processing on Linux
-============================================================
-
-
-I've covered several scientific packages in this space that generate nice graphical representations of your data and work, but I've not gone in the other direction much. So in this article, I cover a popular image processing package called ImageJ. Specifically, I am looking at [Fiji][4], an instance of ImageJ bundled with a set of plugins that are useful for scientific image processing.
-
-The name Fiji is a recursive acronym, much like GNU. It stands for "Fiji Is Just ImageJ". ImageJ is a useful tool for analyzing images in scientific research—for example, you may use it for classifying tree types in a landscape from aerial photography. ImageJ can do that type categorization. It's built with a plugin architecture, and a very extensive collection of plugins is available to increase the available functionality.
-
-The first step is to install ImageJ (or Fiji). Most distributions will have a package available for ImageJ. If you wish, you can install it that way and then install the individual plugins you need for your research. The other option is to install Fiji and get the most commonly used plugins at the same time. Unfortunately, most Linux distributions will not have a package available within their package repositories for Fiji. Luckily, however, an easy installation file is available from the main website. It's a simple zip file, containing a directory with all of the files required to run Fiji. When you first start it, you get only a small toolbar with a list of menu items (Figure 1).
-
-![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif1.png)
-
-Figure 1\. You get a very minimal interface when you first start Fiji.
-
-If you don't already have some images to use as you are learning to work with ImageJ, the Fiji installation includes several sample images. Click the File→Open Samples menu item for a dropdown list of sample images (Figure 2). These samples cover many of the potential tasks you might be interested in working on.
-
-![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif2.jpg)
-
-Figure 2\. Several sample images are available that you can use as you learn how to work with ImageJ.
-
-If you installed Fiji, rather than ImageJ alone, a large set of plugins already will be installed. The first one of note is the autoupdater plugin. This plugin checks the internet for updates to ImageJ, as well as the installed plugins, each time ImageJ is started.
-
-All of the installed plugins are available under the Plugins menu item. Once you have installed a number of plugins, this list can become a bit unwieldy, so you may want to be judicious in your plugin selection. If you want to trigger the updates manually, click the Help→Update Fiji menu item to force the check and get a list of available updates (Figure 3).
-
-![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif3.png)
-
-Figure 3\. You can force a manual check of what updates are available.
-
-Now, what kind of work can you do with Fiji/ImageJ? One example is doing counts of objects within an image. You can load a sample by clicking File→Open Samples→Embryos.
-
-![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif4.jpg)
-
-Figure 4\. With ImageJ, you can count objects within an image.
-
-The first step is to set a scale to the image so you can tell ImageJ how to identify objects. First, select the line button on the toolbar and draw a line over the length of the scale legend on the image. You then can select Analyze→Set Scale, and it will set the number of pixels that the scale legend occupies (Figure 5). You can set the known distance to be 100 and the units to be "um".
-
-![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif5.png)
-
-Figure 5\. For many image analysis tasks, you need to set a scale to the image.
-
-The next step is to simplify the information within the image. Click Image→Type→8-bit to reduce the information to an 8-bit gray-scale image. To isolate the individual objects, click Process→Binary→Make Binary to threshold the image automatically (Figure 6).
-
-![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif6.png)
-
-Figure 6\. There are tools to do automatic tasks like thresholding.
-
-Before you can count the objects within the image, you need to remove artifacts like the scale legend. You can do that by using the rectangular selection tool to select it and then click Edit→Clear. Now you can analyze the image and see what objects are there.
-
-Making sure that there are no areas selected in the image, click Analyze→Analyze Particles to pop up a window where you can select the minimum size, what results to display and what to show in the final image (Figure 7).
-
-![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif7.png)
-
-Figure 7\. You can generate a reduced image with identified particles.
-
-Figure 8 shows an overall look at what was discovered in the summary results window. There is also a detailed results window for each individual particle.
-
-![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif8.png)
-
-Figure 8\. One of the output results includes a summary list of the particles identified.
-
-Once you have an analysis worked out for a given image type, you often need to apply the exact same analysis to a series of images. This series may number into the thousands, so it's typically not something you will want to repeat manually for each image. In such cases, you can collect the required steps together into a macro so that they can be reapplied multiple times. Clicking Plugins→Macros→Record pops up a new window where all of your subsequent commands will be recorded. Once all of the steps are finished, you can save them as a macro file and rerun them on other images by clicking Plugins→Macros→Run.
-
-If you have a very specific set of steps for your workflow, you simply can open the macro file and edit it by hand, as it is a simple text file. There is actually a complete macro language available to you to control the process that is being applied to your images more fully.
-
-If you have a really large set of images that needs to be processed, however, this still might be too tedious for your workflow. In that case, go to Process→Batch→Macro to pop up a new window where you can set up your batch processing workflow (Figure 9).
-
-![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif9.png)
-
-Figure 9\. You can run a macro on a batch of input image files with a single command.
-
-From this window, you can select which macro file to apply, the source directory where the input images are located and the output directory where you want the output images to be written. You also can set the output file format and filter the list of images being used as input based on what the filename contains. Once everything is done, start the batch run by clicking the Process button at the bottom of the window.
-
-If this is a workflow that will be repeated over time, you can save the batch process to a text file by clicking the Save button at the bottom of the window. You then can reload the same workflow by clicking the Open button, also at the bottom of the window. All of this functionality allows you to automate the most tedious parts of your research so you can focus on the actual science.
-
-Considering that there are more than 500 plugins and more than 300 macros available from the main ImageJ website alone, it is an understatement that I've been able to touch on only the most basic of topics in this short article. Luckily, many domain-specific tutorials are available, along with the very good documentation for the core of ImageJ from the main project website. If you think this tool could be of use to your research, there is a wealth of information to guide you in your particular area of study.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Joey Bernard has a background in both physics and computer science. This serves him well in his day job as a computational research consultant at the University of New Brunswick. He also teaches computational physics and parallel programming.
-
---------------------------------
-
-via: https://www.linuxjournal.com/content/image-processing-linux
-
-作者:[Joey Bernard][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxjournal.com/users/joey-bernard
-[1]:https://www.linuxjournal.com/tag/science
-[2]:https://www.linuxjournal.com/tag/statistics
-[3]:https://www.linuxjournal.com/users/joey-bernard
-[4]:https://imagej.net/Fiji
diff --git a/sources/tech/20171109 Concurrent Servers- Part 4 - libuv.md b/sources/tech/20171109 Concurrent Servers- Part 4 - libuv.md
new file mode 100644
index 0000000000..94b98cf5c2
--- /dev/null
+++ b/sources/tech/20171109 Concurrent Servers- Part 4 - libuv.md
@@ -0,0 +1,492 @@
+Translating by qhwdw
+
+[Concurrent Servers: Part 4 - libuv][17]
+============================================================
+
+This is part 4 of a series of posts on writing concurrent network servers. In this part we're going to use libuv to rewrite our server once again, and also talk about handling time-consuming tasks in callbacks using a thread pool. Finally, we're going to look under the hood of libuv for a bit to study how it wraps blocking file-system operations with an asynchronous API.
+
+All posts in the series:
+
+* [Part 1 - Introduction][7]
+
+* [Part 2 - Threads][8]
+
+* [Part 3 - Event-driven][9]
+
+* [Part 4 - libuv][10]
+
+### Abstracting away event-driven loops with libuv
+
+In [part 3][11], we've seen how similar select-based and epoll-based servers are, and I mentioned it's very tempting to abstract away the minor differences between them. Numerous libraries are already doing this, however, so in this part I'm going to pick one and use it. The library I'm picking is [libuv][12], which was originally designed to serve as the underlying portable platform layer for Node.js, and has since found use in additional projects. libuv is written in C, which makes it highly portable and very suitable for tying into high-level languages like JavaScript and Python.
+
+While libuv has grown to be a fairly large framework for abstracting low-level platform details, it remains centered on the concept of an _event loop_ . In our event-driven servers in part 3, the event loop was explicit in the main function; when using libuv, the loop is usually hidden inside the library itself, and user code just registers event handlers (as callback functions) and runs the loop. Furthermore, libuv will use the fastest event loop implementation for a given platform: for Linux this is epoll, etc.
+
+![libuv loop](https://eli.thegreenplace.net/images/2017/libuvloop.png)
+
+libuv supports multiple event loops, and thus an event loop is a first-class citizen within the library; it has a handle - uv_loop_t, and functions for creating/destroying/starting/stopping loops. That said, I will only use the "default" loop in this post, which libuv makes available via uv_default_loop(); multiple loops are mostly useful for multi-threaded event-driven servers, a more advanced topic I'll leave for future parts in the series.
+
+### A concurrent server using libuv
+
+To get a better feel for libuv, let's jump to our trusty protocol server that we've been vigorously reimplementing throughout the series. The structure of this server is going to be somewhat similar to the select and epoll-based servers of part 3, since it also relies on callbacks. The full [code sample is here][13]; we start with setting up the server socket bound to a local port:
+
+```
+int portnum = 9090;
+if (argc >= 2) {
+ portnum = atoi(argv[1]);
+}
+printf("Serving on port %d\n", portnum);
+
+int rc;
+uv_tcp_t server_stream;
+if ((rc = uv_tcp_init(uv_default_loop(), &server_stream)) < 0) {
+ die("uv_tcp_init failed: %s", uv_strerror(rc));
+}
+
+struct sockaddr_in server_address;
+if ((rc = uv_ip4_addr("0.0.0.0", portnum, &server_address)) < 0) {
+ die("uv_ip4_addr failed: %s", uv_strerror(rc));
+}
+
+if ((rc = uv_tcp_bind(&server_stream, (const struct sockaddr*)&server_address, 0)) < 0) {
+ die("uv_tcp_bind failed: %s", uv_strerror(rc));
+}
+```
+
+Fairly standard socket fare here, except that it's all wrapped in libuv APIs. In return we get a portable interface that should work on any platform libuv supports.
+
+This code also demonstrates conscientious error handling; most libuv functions return an integer status, with a negative number meaning an error. In our server we treat these errors as fatal, but one may imagine a more graceful recovery.
+
+Now that the socket is bound, it's time to listen on it. Here we run into our first callback registration:
+
+```
+// Listen on the socket for new peers to connect. When a new peer connects,
+// the on_peer_connected callback will be invoked.
+if ((rc = uv_listen((uv_stream_t*)&server_stream, N_BACKLOG, on_peer_connected)) < 0) {
+ die("uv_listen failed: %s", uv_strerror(rc));
+}
+```
+
+uv_listen registers a callback that the event loop will invoke when new peers connect to the socket. Our callback here is called on_peer_connected, and we'll examine it soon.
+
+Finally, main runs the libuv loop until it's stopped (uv_run only returns when the loop has stopped or some error occurred).
+
+```
+// Run the libuv event loop.
+uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+
+// If uv_run returned, close the default loop before exiting.
+return uv_loop_close(uv_default_loop());
+```
+
+Note that only a single callback was registered by main prior to running the event loop; we'll soon see how additional callbacks are added. It's not a problem to add and remove callbacks throughout the runtime of the event loop - in fact, this is how most servers are expected to be written.
+
+This is on_peer_connected, which handles new client connections to the server:
+
+```
+void on_peer_connected(uv_stream_t* server_stream, int status) {
+ if (status < 0) {
+ fprintf(stderr, "Peer connection error: %s\n", uv_strerror(status));
+ return;
+ }
+
+ // client will represent this peer; it's allocated on the heap and only
+ // released when the client disconnects. The client holds a pointer to
+ // peer_state_t in its data field; this peer state tracks the protocol state
+ // with this client throughout interaction.
+ uv_tcp_t* client = (uv_tcp_t*)xmalloc(sizeof(*client));
+ int rc;
+ if ((rc = uv_tcp_init(uv_default_loop(), client)) < 0) {
+ die("uv_tcp_init failed: %s", uv_strerror(rc));
+ }
+ client->data = NULL;
+
+ if (uv_accept(server_stream, (uv_stream_t*)client) == 0) {
+ struct sockaddr_storage peername;
+ int namelen = sizeof(peername);
+ if ((rc = uv_tcp_getpeername(client, (struct sockaddr*)&peername,
+ &namelen)) < 0) {
+ die("uv_tcp_getpeername failed: %s", uv_strerror(rc));
+ }
+ report_peer_connected((const struct sockaddr_in*)&peername, namelen);
+
+ // Initialize the peer state for a new client: we start by sending the peer
+ // the initial '*' ack.
+ peer_state_t* peerstate = (peer_state_t*)xmalloc(sizeof(*peerstate));
+ peerstate->state = INITIAL_ACK;
+ peerstate->sendbuf[0] = '*';
+ peerstate->sendbuf_end = 1;
+ peerstate->client = client;
+ client->data = peerstate;
+
+ // Enqueue the write request to send the ack; when it's done,
+ // on_wrote_init_ack will be called. The peer state is passed to the write
+ // request via the data pointer; the write request does not own this peer
+ // state - it's owned by the client handle.
+ uv_buf_t writebuf = uv_buf_init(peerstate->sendbuf, peerstate->sendbuf_end);
+ uv_write_t* req = (uv_write_t*)xmalloc(sizeof(*req));
+ req->data = peerstate;
+ if ((rc = uv_write(req, (uv_stream_t*)client, &writebuf, 1,
+ on_wrote_init_ack)) < 0) {
+ die("uv_write failed: %s", uv_strerror(rc));
+ }
+ } else {
+ uv_close((uv_handle_t*)client, on_client_closed);
+ }
+}
+```
+
+This code is well commented, but there are a couple of important libuv idioms I'd like to highlight:
+
+* Passing custom data into callbacks: since C has no closures, this can be challenging. libuv has a void* data field in all its handle types; these fields can be used to pass user data. For example, note how client->data is made to point to a peer_state_t structure so that the callbacks registered by uv_write and uv_read_start can know which peer data they're dealing with (a small standalone sketch of this idiom follows this list).
+
+* Memory management: event-driven programming is much easier in languages with garbage collection, because callbacks usually run in a completely different stack frame from where they were registered, making stack-based memory management difficult. It's almost always necessary to pass heap-allocated data to libuv callbacks (except in main, which remains alive on the stack when all callbacks run), and to avoid leaks much care is required about when these data are safe to free(). This is something that comes with a bit of practice [[1]][6].
+
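+A minimal sketch of the first idiom, using the same timer API that appears later in this post; it is not taken from the server code, and the timer_ctx_t type and its fields are invented for illustration:
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <uv.h>
+
+// Hypothetical per-handle context; any heap-allocated struct works the same way.
+typedef struct {
+  const char* name;
+  int ticks;
+} timer_ctx_t;
+
+void on_tick(uv_timer_t* timer) {
+  // Recover the context that was stashed in the handle's data field.
+  timer_ctx_t* ctx = (timer_ctx_t*)timer->data;
+  ctx->ticks++;
+  printf("%s fired %d times\n", ctx->name, ctx->ticks);
+}
+
+int main(void) {
+  uv_timer_t timer;
+  uv_timer_init(uv_default_loop(), &timer);
+
+  // Heap-allocate the context and attach it to the handle's void* data field.
+  timer_ctx_t* ctx = (timer_ctx_t*)malloc(sizeof(*ctx));
+  ctx->name = "demo-timer";
+  ctx->ticks = 0;
+  timer.data = ctx;
+
+  uv_timer_start(&timer, on_tick, 0, 1000);
+  uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+
+  free(ctx);
+  return uv_loop_close(uv_default_loop());
+}
+```
+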
+The peer state for this server is:
+
+```
+typedef struct {
+ ProcessingState state;
+ char sendbuf[SENDBUF_SIZE];
+ int sendbuf_end;
+ uv_tcp_t* client;
+} peer_state_t;
+```
+
+It's fairly similar to the state in part 3; we no longer need sendptr, since uv_write will make sure to send the whole buffer it's given before invoking the "done writing" callback. We also keep a pointer to the client for other callbacks to use. Here's on_wrote_init_ack:
+
+```
+void on_wrote_init_ack(uv_write_t* req, int status) {
+ if (status) {
+ die("Write error: %s\n", uv_strerror(status));
+ }
+ peer_state_t* peerstate = (peer_state_t*)req->data;
+ // Flip the peer state to WAIT_FOR_MSG, and start listening for incoming data
+ // from this peer.
+ peerstate->state = WAIT_FOR_MSG;
+ peerstate->sendbuf_end = 0;
+
+ int rc;
+ if ((rc = uv_read_start((uv_stream_t*)peerstate->client, on_alloc_buffer,
+ on_peer_read)) < 0) {
+ die("uv_read_start failed: %s", uv_strerror(rc));
+ }
+
+ // Note: the write request doesn't own the peer state, hence we only free the
+ // request itself, not the state.
+ free(req);
+}
+```
+
+Now that we know for sure that the initial '*' was sent to the peer, we start listening to incoming data from this peer by calling uv_read_start, which registers a callback (on_peer_read) that will be invoked by the event loop whenever new data is received on the socket from the client:
+
+```
+void on_peer_read(uv_stream_t* client, ssize_t nread, const uv_buf_t* buf) {
+ if (nread < 0) {
+ if (nread != UV_EOF) {
+ fprintf(stderr, "Read error: %s\n", uv_strerror(nread));
+ }
+ uv_close((uv_handle_t*)client, on_client_closed);
+ } else if (nread == 0) {
+ // From the documentation of uv_read_cb: nread might be 0, which does not
+ // indicate an error or EOF. This is equivalent to EAGAIN or EWOULDBLOCK
+ // under read(2).
+ } else {
+ // nread > 0
+ assert(buf->len >= nread);
+
+ peer_state_t* peerstate = (peer_state_t*)client->data;
+ if (peerstate->state == INITIAL_ACK) {
+ // If the initial ACK hasn't been sent for some reason, ignore whatever
+ // the client sends in.
+ free(buf->base);
+ return;
+ }
+
+ // Run the protocol state machine.
+ for (int i = 0; i < nread; ++i) {
+ switch (peerstate->state) {
+ case INITIAL_ACK:
+ assert(0 && "can't reach here");
+ break;
+ case WAIT_FOR_MSG:
+ if (buf->base[i] == '^') {
+ peerstate->state = IN_MSG;
+ }
+ break;
+ case IN_MSG:
+ if (buf->base[i] == '$') {
+ peerstate->state = WAIT_FOR_MSG;
+ } else {
+ assert(peerstate->sendbuf_end < SENDBUF_SIZE);
+ peerstate->sendbuf[peerstate->sendbuf_end++] = buf->base[i] + 1;
+ }
+ break;
+ }
+ }
+
+ if (peerstate->sendbuf_end > 0) {
+ // We have data to send. The write buffer will point to the buffer stored
+ // in the peer state for this client.
+ uv_buf_t writebuf =
+ uv_buf_init(peerstate->sendbuf, peerstate->sendbuf_end);
+ uv_write_t* writereq = (uv_write_t*)xmalloc(sizeof(*writereq));
+ writereq->data = peerstate;
+ int rc;
+ if ((rc = uv_write(writereq, (uv_stream_t*)client, &writebuf, 1,
+ on_wrote_buf)) < 0) {
+ die("uv_write failed: %s", uv_strerror(rc));
+ }
+ }
+ }
+ free(buf->base);
+}
+```
+
+The runtime behavior of this server is very similar to the event-driven servers of part 3: all clients are handled concurrently in a single thread. Also similarly, a certain discipline has to be maintained in the server's code: the server's logic is implemented as an ensemble of callbacks, and long-running operations are a big no-no since they block the event loop. Let's explore this issue a bit further.
+
+### Long-running operations in event-driven loops
+
+The single-threaded nature of event-driven code makes it very susceptible to a common issue: long-running code blocks the entire loop. Consider this program:
+
+```
+void on_timer(uv_timer_t* timer) {
+ uint64_t timestamp = uv_hrtime();
+ printf("on_timer [%" PRIu64 " ms]\n", (timestamp / 1000000) % 100000);
+
+ // "Work"
+ if (random() % 5 == 0) {
+ printf("Sleeping...\n");
+ sleep(3);
+ }
+}
+
+int main(int argc, const char** argv) {
+ uv_timer_t timer;
+ uv_timer_init(uv_default_loop(), &timer);
+ uv_timer_start(&timer, on_timer, 0, 1000);
+ return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+}
+```
+
+It runs a libuv event loop with a single registered callback: on_timer, which is invoked by the loop every second. The callback reports a timestamp, and once in a while simulates some long-running task by sleeping for 3 seconds. Here's a sample run:
+
+```
+$ ./uv-timer-sleep-demo
+on_timer [4840 ms]
+on_timer [5842 ms]
+on_timer [6843 ms]
+on_timer [7844 ms]
+Sleeping...
+on_timer [11845 ms]
+on_timer [12846 ms]
+Sleeping...
+on_timer [16847 ms]
+on_timer [17849 ms]
+on_timer [18850 ms]
+...
+```
+
+on_timer dutifully fires every second, until the random sleep kicks in. At that point, on_timer is not invoked again until the sleep is over; in fact, _no other callbacks_ will be invoked in this time frame. The sleep call blocks the current thread, which is the only thread involved and is also the thread the event loop uses. When this thread is blocked, the event loop is blocked.
+
+This example demonstrates why it's so important for callbacks to never block in event-driven calls, and applies equally to Node.js servers, client-side Javascript, most GUI programming frameworks, and many other asynchronous programming models.
+
+But sometimes running time-consuming tasks is unavoidable. Not all tasks have asynchronous APIs; for example, we may be dealing with some library that only has a synchronous API, or just have to perform a potentially long computation. How can we combine such code with event-driven programming? Threads to the rescue!
+
+### Threads for "converting" blocking calls into asynchronous calls
+
+A thread pool can be used to turn blocking calls into asynchronous calls, by running alongside the event loop and posting events onto it when tasks are completed. Here's how it works, for a given blocking function do_work():
+
+1. Instead of directly calling do_work() in a callback, we package it into a "task" and ask the thread pool to execute the task. We also register a callback for the loop to invoke when the task has finished; let's call it on_work_done().
+
+2. At this point our callback can return and the event loop keeps spinning; at the same time, a thread in the pool is executing the task.
+
+3. Once the task has finished executing, the main thread (the one running the event loop) is notified and on_work_done() is invoked by the event loop.
+
+Let's see how this solves our previous timer/sleep example, using libuv's work scheduling API:
+
+```
+void on_after_work(uv_work_t* req, int status) {
+ free(req);
+}
+
+void on_work(uv_work_t* req) {
+ // "Work"
+ if (random() % 5 == 0) {
+ printf("Sleeping...\n");
+ sleep(3);
+ }
+}
+
+void on_timer(uv_timer_t* timer) {
+ uint64_t timestamp = uv_hrtime();
+ printf("on_timer [%" PRIu64 " ms]\n", (timestamp / 1000000) % 100000);
+
+ uv_work_t* work_req = (uv_work_t*)malloc(sizeof(*work_req));
+ uv_queue_work(uv_default_loop(), work_req, on_work, on_after_work);
+}
+
+int main(int argc, const char** argv) {
+ uv_timer_t timer;
+ uv_timer_init(uv_default_loop(), &timer);
+ uv_timer_start(&timer, on_timer, 0, 1000);
+ return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+}
+```
+
+Instead of calling sleep directly in on_timer, we enqueue a task, represented by a handle of type work_req [[2]][14], the function to run in the task (on_work) and the function to invoke once the task is completed (on_after_work). on_work is where the "work" (the blocking/time-consuming operation) happens. Note a crucial difference between the two callbacks passed into uv_queue_work: on_work runs in the thread pool, while on_after_work runs on the main thread which also runs the event loop - just like any other callback.
+
+Let's see this version run:
+
+```
+$ ./uv-timer-work-demo
+on_timer [89571 ms]
+on_timer [90572 ms]
+on_timer [91573 ms]
+on_timer [92575 ms]
+Sleeping...
+on_timer [93576 ms]
+on_timer [94577 ms]
+Sleeping...
+on_timer [95577 ms]
+on_timer [96578 ms]
+on_timer [97578 ms]
+...
+```
+
+The timer ticks every second, even though the sleeping function is still invoked; sleeping is now done on a separate thread and doesn't block the event loop.
+
+### A primality-testing server, with exercises
+
+Since sleep isn't a very exciting way to simulate work, I've prepared a more comprehensive example - a server that accepts numbers from clients over a socket, checks whether these numbers are prime and sends back either "prime" or "composite". The full [code for this server is here][15] - I won't post it here since it's long, but will rather give readers the opportunity to explore it on their own with a couple of exercises.
+
+The server deliberately uses a naive primality test algorithm, so for large primes it can take quite a while to return an answer. On my machine it takes ~5 seconds to compute the answer for 2305843009213693951, but YMMV.
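+
+To make the offloading pattern concrete, here is a rough, hypothetical sketch of how such a check might be queued onto the libuv thread pool - it is not the server's actual code, and is_prime, prime_req_t and the callback names are made up for the example:
+
+```
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <uv.h>
+
+// Hypothetical request object carrying the number in and the verdict out.
+typedef struct {
+  uint64_t n;
+  bool prime;
+} prime_req_t;
+
+// Deliberately naive trial division; this is the long-running "work".
+bool is_prime(uint64_t n) {
+  if (n < 2) return false;
+  for (uint64_t d = 2; d * d <= n; ++d) {
+    if (n % d == 0) return false;
+  }
+  return true;
+}
+
+// Runs on a thread-pool thread; blocking here does not stall the event loop.
+void on_prime_work(uv_work_t* req) {
+  prime_req_t* pr = (prime_req_t*)req->data;
+  pr->prime = is_prime(pr->n);
+}
+
+// Runs back on the event-loop thread once the work item is done; a real
+// server would write "prime"/"composite" back to the client here.
+void on_prime_done(uv_work_t* req, int status) {
+  prime_req_t* pr = (prime_req_t*)req->data;
+  printf("%llu is %s\n", (unsigned long long)pr->n, pr->prime ? "prime" : "composite");
+  free(pr);
+  free(req);
+}
+
+// Called from a socket callback with a number parsed from a client.
+void check_number_async(uint64_t n) {
+  prime_req_t* pr = (prime_req_t*)malloc(sizeof(*pr));
+  pr->n = n;
+  pr->prime = false;
+  uv_work_t* req = (uv_work_t*)malloc(sizeof(*req));
+  req->data = pr;
+  uv_queue_work(uv_default_loop(), req, on_prime_work, on_prime_done);
+}
+```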
+
+Exercise 1: the server has a setting (via an environment variable named MODE) to either run the primality test in the socket callback (meaning on the main thread) or in the libuv work queue. Play with this setting to observe the server's behavior when multiple clients are connecting simultaneously. In blocking mode, the server will not answer other clients while it's computing a big task; in non-blocking mode it will.
+
+Exercise 2: libuv has a default thread-pool size, and it can be configured via an environment variable. Can you use multiple clients to discover experimentally what the default size is? Having found the default thread-pool size, play with different settings to see how it affects the server's responsiveness under heavy load.
+
+### Non-blocking file-system operations using work queues
+
+Delegating potentially-blocking operations to a thread pool isn't good for just silly demos and CPU-intensive computations; libuv itself makes heavy use of this capability in its file-system APIs. This way, libuv accomplishes the superpower of exposing the file-system with an asynchronous API, in a portable way.
+
+Let's take uv_fs_read(), for example. This function reads from a file (represented by a uv_fs_t handle) into a buffer [[3]][16], and invokes a callback when the reading is completed. That is, uv_fs_read() always returns immediately, even if the file sits on an NFS-like system and it may take a while for the data to get to the buffer. In other words, this API is asynchronous in the way other libuv APIs are. How does this work?
+
+At this point we're going to look under the hood of libuv; the internals are actually fairly straightforward, and it's a good exercise. Being a portable library, libuv has different implementations of many of its functions for Windows and Unix systems. We're going to be looking at src/unix/fs.c in the libuv source tree.
+
+The code for uv_fs_read is:
+
+```
+int uv_fs_read(uv_loop_t* loop, uv_fs_t* req,
+ uv_file file,
+ const uv_buf_t bufs[],
+ unsigned int nbufs,
+ int64_t off,
+ uv_fs_cb cb) {
+ if (bufs == NULL || nbufs == 0)
+ return -EINVAL;
+
+ INIT(READ);
+ req->file = file;
+
+ req->nbufs = nbufs;
+ req->bufs = req->bufsml;
+ if (nbufs > ARRAY_SIZE(req->bufsml))
+ req->bufs = uv__malloc(nbufs * sizeof(*bufs));
+
+ if (req->bufs == NULL) {
+ if (cb != NULL)
+ uv__req_unregister(loop, req);
+ return -ENOMEM;
+ }
+
+ memcpy(req->bufs, bufs, nbufs * sizeof(*bufs));
+
+ req->off = off;
+ POST;
+}
+```
+
+It may seem puzzling at first, because it defers the real work to the INIT and POST macros, with some local variable setup for POST. This is done to avoid too much code duplication within the file.
+
+The INIT macro is:
+
+```
+#define INIT(subtype) \
+ do { \
+ req->type = UV_FS; \
+ if (cb != NULL) \
+ uv__req_init(loop, req, UV_FS); \
+ req->fs_type = UV_FS_ ## subtype; \
+ req->result = 0; \
+ req->ptr = NULL; \
+ req->loop = loop; \
+ req->path = NULL; \
+ req->new_path = NULL; \
+ req->cb = cb; \
+ } \
+ while (0)
+```
+
+It sets up the request, and most importantly sets the req->fs_type field to the actual FS request type. Since uv_fs_read invokes INIT(READ), it means req->fs_type gets assigned the constant UV_FS_READ.
+
+The POST macro is:
+
+```
+#define POST \
+ do { \
+ if (cb != NULL) { \
+ uv__work_submit(loop, &req->work_req, uv__fs_work, uv__fs_done); \
+ return 0; \
+ } \
+ else { \
+ uv__fs_work(&req->work_req); \
+ return req->result; \
+ } \
+ } \
+ while (0)
+```
+
+What it does depends on whether the callback is NULL. In libuv file-system APIs, a NULL callback means we actually want to perform the operation _synchronously_ . In this case POST invokes uv__fs_work directly (we'll get to what this function does in just a bit), whereas for a non-NULL callback, it submits uv__fs_work as a work item to the work queue (which is the thread pool), and registers uv__fs_done as the callback; that function does a bit of book-keeping and invokes the user-provided callback.
+
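+As a rough usage sketch (not taken from the article or the libuv documentation verbatim), the same call can therefore be used either asynchronously or synchronously depending on whether a callback is passed; the path and callback name below are made-up examples:
+
+```
+#include <fcntl.h>
+#include <stdio.h>
+#include <uv.h>
+
+static char slot[1024];
+
+void on_read(uv_fs_t* req) {
+  // req->result is the number of bytes read, or a negative error code.
+  if (req->result >= 0) {
+    printf("async read got %ld bytes\n", (long)req->result);
+  }
+  uv_fs_req_cleanup(req);
+}
+
+int main(void) {
+  uv_fs_t open_req, read_req;
+  uv_buf_t buf = uv_buf_init(slot, sizeof(slot));
+
+  // Synchronous style: a NULL callback makes the call block and return the
+  // result directly (here, the file descriptor), as the POST macro shows.
+  int fd = uv_fs_open(uv_default_loop(), &open_req, "/etc/hostname", O_RDONLY, 0, NULL);
+  uv_fs_req_cleanup(&open_req);
+  if (fd < 0) return 1;
+
+  // Asynchronous style: with a callback, uv_fs_read returns immediately and
+  // on_read is invoked from the event loop once the thread pool finishes.
+  uv_fs_read(uv_default_loop(), &read_req, fd, &buf, 1, 0, on_read);
+
+  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+}
+```
+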
+If we look at the code of uv__fs_work, we'll see it uses more macros to route work to the actual file-system call as needed. In our case, for UV_FS_READ the call will be made to uv__fs_read, which (at last!) does the reading using regular POSIX APIs. This function can be safely implemented in a _blocking_ manner, since it's placed on a thread-pool when called through the asynchronous API.
+
+In Node.js, the fs.readFile function is mapped to uv_fs_read. Thus, reading files can be done in a non-blocking fashion even though the underlying file-system API is blocking.
+
+* * *
+
+
+[[1]][1] To ensure that this server doesn't leak memory, I ran it under Valgrind with the leak checker enabled. Since servers are often designed to run forever, this was a bit challenging; to overcome this issue I've added a "kill switch" to the server - a special sequence received from a client makes it stop the event loop and exit. The code for this is in the on_wrote_buf handler.
+
+
+[[2]][2] Here we don't use work_req for much; the primality testing server discussed next will show how it's used to pass context information into the callback.
+
+
+[[3]][3] uv_fs_read() provides a generalized API similar to the preadv Linux system call: it takes multiple buffers which it fills in order, and supports an offset into the file. We can ignore these features for the sake of our discussion.
+
+
+--------------------------------------------------------------------------------
+
+via: https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
+
+作者:[Eli Bendersky ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://eli.thegreenplace.net/
+[1]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id1
+[2]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id2
+[3]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id3
+[4]:https://eli.thegreenplace.net/tag/concurrency
+[5]:https://eli.thegreenplace.net/tag/c-c
+[6]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id4
+[7]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
+[8]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
+[9]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
+[10]:http://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
+[11]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
+[12]:http://libuv.org/
+[13]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-server.c
+[14]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id5
+[15]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-isprime-server.c
+[16]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id6
+[17]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
diff --git a/sources/tech/20171110 File better bugs with coredumpctl.md b/sources/tech/20171110 File better bugs with coredumpctl.md
deleted file mode 100644
index 95846965bf..0000000000
--- a/sources/tech/20171110 File better bugs with coredumpctl.md
+++ /dev/null
@@ -1,99 +0,0 @@
-translating----geekpi
-
-# [File better bugs with coredumpctl][1]
-
-![](https://fedoramagazine.org/wp-content/uploads/2017/11/coredump.png-945x400.jpg)
-
-An unfortunate fact of life is that all software has bugs, and some bugs can make the system crash. When it does, it often leaves a data file called a _core dump_ on disk. This file contains data about your system when it crashed, and may help determine why the crash occurred. Often developers request data in the form of a _backtrace_ , showing the flow of instructions that led to the crash. The developer can use it to fix the bug and improve the system. Here’s how to easily generate a backtrace if you have a system crash.
-
-### Getting started with coredumpctl
-
-Most Fedora systems use the [Automatic Bug Reporting Tool (ABRT)][2] to automatically capture dumps and file bugs for crashes. However, if you have disabled this service or removed the package, this method may be helpful.
-
-If you experience a system crash, first ensure that you’re running the latest updated software. Updates often contain fixes that have already been found to fix a bug that causes critical errors and crashes. Once you update, try to recreate the situation that led to the bug.
-
-If the crash still happens, or if you’re already running the latest software, it’s time to use the helpful _coredumpctl_ utility. This utility helps locate and process crashes. To see a list of all core dumps on your system, run this command:
-
-```
-coredumpctl list
-```
-
-Don’t be surprised if you see a longer list than expected. Sometimes system components crash silently behind the scenes, and recover on their own. An easy way to quickly find a dump from today is to use the _–since_ option:
-
-```
-coredumpctl list --since=today
-```
-
-The _PID_ column contains the process ID used to identify the dump. Note that number, since you’ll use it again along the way. Or, if you don’t want to remember it, assign it to a variable you can use in the rest of the commands below:
-
-```
-MYPID=
-```
-
-To see information about the core dump, use this command (either use the _$MYPID_ variable, or substitute the PID number):
-
-```
-coredumpctl info $MYPID
-```
-
-### Install debuginfo packages
-
-Debugging symbols translate between data in the core dump and the instructions found in original source code. This symbol data can be quite large. Therefore, symbols are shipped in _debuginfo_ packages separately from the packages most users run on Fedora systems. To determine which debuginfo packages you must install, start by running this command:
-
-```
-coredumpctl gdb $MYPID
-```
-
-This may result in a large amount of information to the screen. The last line may tell you to use _dnf_ to install more debuginfo packages. Run that command [with sudo][3] to continue:
-
-```
-sudo dnf debuginfo-install
-```
-
-Then try the _coredumpctl gdb $MYPID_ command again. **You may need to do this repeatedly,** as other symbols are unwound in the trace.
-
-### Capturing the backtrace
-
-Run the following commands to log information in the debugger:
-
-```
-set logging file mybacktrace.txt
-set logging on
-```
-
-You may find it helpful to turn off the pagination. For long backtraces this saves time.
-
-```
-set pagination off
-```
-
-Now run the backtrace:
-
-```
-thread apply all bt full
-```
-
-Now you can type _quit_ to quit the debugger. The _mybacktrace.txt_ file includes backtrace information you can attach to a bug or issue. Or if you’re working with someone in real time, you can upload the text to a pastebin. Either way, you can now provide more assistance to the developer to fix the problem.
-
----------------------------------
-
-作者简介:
-
-Paul W. Frields
-
-Paul W. Frields has been a Linux user and enthusiast since 1997, and joined the Fedora Project in 2003, shortly after launch. He was a founding member of the Fedora Project Board, and has worked on documentation, website publishing, advocacy, toolchain development, and maintaining software. He joined Red Hat as Fedora Project Leader from February 2008 to July 2010, and remains with Red Hat as an engineering manager. He currently lives with his wife and two children in Virginia.
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/file-better-bugs-coredumpctl/
-
-作者:[Paul W. Frields ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://fedoramagazine.org/author/pfrields/
-[1]:https://fedoramagazine.org/file-better-bugs-coredumpctl/
-[2]:https://github.com/abrt/abrt
-[3]:https://fedoramagazine.org/howto-use-sudo/
diff --git a/sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md b/sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md
new file mode 100644
index 0000000000..c6c50d9b25
--- /dev/null
+++ b/sources/tech/20171116 Unleash Your Creativity – Linux Programs for Drawing and Image Editing.md
@@ -0,0 +1,130 @@
+### Unleash Your Creativity – Linux Programs for Drawing and Image Editing
+
+ By: [chabowski][1]
+
+The following article is part of a series of articles that provide tips and tricks for Linux newbies – or Desktop users that are not yet experienced with regard to certain topics. This series intends to complement the special edition #30 “[Getting Started with Linux][2]” based on [openSUSE Leap][3], recently published by the [Linux Magazine,][4] with valuable additional information.
+
+![](https://www.suse.com/communities/blog/files/2017/11/DougDeMaio-450x450.jpeg)
+
+This article has been contributed by Douglas DeMaio, openSUSE PR Expert at SUSE.
+
+Both macOS and Windows offer several popular programs for graphics editing, vector drawing and creating and manipulating Portable Document Format (PDF) files. The good news: users familiar with the Adobe Suite can transition with ease to free, open-source programs available on Linux.
+
+Programs like [GIMP][5], [InkScape][6] and [Okular][7] are cross platform programs that are available by default in Linux/GNU distributions and are persuasive alternatives to expensive Adobe programs like [Photoshop][8], [Illustrator][9] and [Acrobat][10].
+
+These creativity programs on Linux distributions are just as powerful as those for macOS or Windows. This article will explain some of the differences and how the programs can be used to make your transition to Linux comfortable.
+
+### Krita
+
+The KDE desktop environment comes with tons of cool applications. [Krita][11] is a professional open source painting program. It gives users the freedom to create any artistic image they desire. Krita features tools that are much more extensive than the tool sets of most proprietary programs you might be familiar with. From creating textures to comics, Krita is a must have application for Linux users.
+
+![](https://www.suse.com/communities/blog/files/2017/11/krita-450x267.png)
+
+### GIMP
+
+GNU Image Manipulation Program (GIMP) is a cross-platform image editor. Users of Photoshop will find the User Interface of GIMP to be similar to that of Photoshop. The drop down menu offers colors, layers, filters and tools to help the user with editing graphics. Rulers are located both horizontally and vertically, and guides can be dragged across the screen to give exact measurements. The drop down menu gives tool options for resizing or cropping photos; adjustments can be made to the color balance, color levels, brightness and contrast as well as hue and saturation.
+
+![](https://www.suse.com/communities/blog/files/2017/11/gimp-450x281.png)
+
+There are multiple filters in GIMP to enhance or distort your images. Filters for artistic expression and animation are available and are more powerful tool options than those found in some proprietary applications. Gradients can be applied through additional layers and the Text Tool offers many fonts, which can be altered in shape and size through the Perspective Tool.
+
+The cloning tool works exactly like those in other graphics editors, so manipulating images is simple and accurate given the selection of brush sizes to do the job.
+
+Perhaps one of the best options available with GIMP is that the images can be saved in a variety of formats like .jpg, .png, .pdf, .eps and .svg. These image options provide high-quality images in a small file.
+
+### InkScape
+
+Designing vector imagery with InkScape is simple and free. This cross-platform program allows for the creation of logos and illustrations that are highly scalable. Whether designing cartoons or creating images for branding, InkScape is a powerful application to get the job done. Like GIMP, InkScape lets you save files in various formats and allows for object manipulation like moving, rotating and skewing text and objects. Shape tools are available with InkScape so making stars, hexagons and other elements will meet the needs of your creative mind.
+
+![](https://www.suse.com/communities/blog/files/2017/11/inkscape-450x273.png)
+
+InkScape offers a comprehensive tool set, including a drawing tool, a pen tool and the freehand calligraphy tool that allows for object creation with your own personal style. The color selector gives you the choice of RGB, CMYK and RGBA – using specific colors for branding logos, icons and advertisement is definitely convincing.
+
+Shortcut commands are similar to what users experience in Adobe Illustrator. Making layers and grouping or ungrouping the design elements can turn a blank page into a full-fledged image that can be used for designing technical diagrams for presentations, importing images into a multimedia program or for creating web graphics and software design.
+
+Inkscape can import vector graphics from multiple other programs. It can even import bitmap images. Inkscape is one of those cross platform, open-source programs that allow users to operate across different operating systems, no matter if they work with macOS, Windows or Linux.
+
+### Okular and LibreOffice
+
+LibreOffice, which is a free, open-source office suite, allows users to collaborate and interact with documents and important files on Linux, but also on macOS and Windows. You can also create PDF files via LibreOffice, and LibreOffice Draw lets you view (and edit) PDF files as images.
+
+![](https://www.suse.com/communities/blog/files/2017/11/draw-450x273.png)
+
+However, the default handling of the Portable Document Format (PDF) is quite different on the three operating systems. macOS offers [Preview][12] by default; Windows has [Edge][13]. Of course, Adobe Reader can also be used on both macOS and Windows. On Linux, and especially with the KDE desktop, [Okular][14] is the default program for viewing PDF files.
+
+![](https://www.suse.com/communities/blog/files/2017/11/okular-450x273.png)
+
+Okular supports different types of documents, like PDF, PostScript, [DjVu][15], [CHM][16], [XPS][17], [ePub][18] and others. Yet this universal document viewer also offers some powerful features that set it apart from comparable programs on macOS and Windows. Okular provides selection and search tools that make working with the text in PDFs fluid, and its magnification tool allows for a quick look at small text in a document.
+
+Okular also provides users with the option to configure it to use more memory if the document is too large and would otherwise freeze the operating system. This functionality is convenient for users accessing high-quality print documents, for example for advertising.
+
+For those who want to change locked images and documents, it’s rather easy to do so with LibreOffice Draw. A hypothetical situation would be to take a locked IRS (or tax) form and change it to make the uneditable document editable. Imagine how much fun it could be to transform it to some humorous kind of tax form …
+
+And indeed, the sky’s the limit on how creative a user wants to be when using programs that are available on Linux distributions.
+
+Tags: [drawing][19], [Getting Started with Linux][20], [GIMP][21], [image editing][22], [Images][23], [InkScape][24], [KDE][25], [Krita][26], [Leap 42.3][27], [LibreOffice][28], [Linux Magazine][29], [Okular][30], [openSUSE][31], [PDF][32] Categories: [Desktop][33], [Expert Views][34], [LibreOffice][35], [openSUSE][36]
+
+--------------------------------------------------------------------------------
+
+via: https://www.suse.com/communities/blog/unleash-creativity-linux-programs-drawing-image-editing/
+
+作者:[chabowski ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.suse.com/communities/blog/author/chabowski/
+[2]:http://www.linux-magazine.com/Resources/Special-Editions/30-Getting-Started-with-Linux
+[3]:https://en.opensuse.org/Portal:42.3
+[4]:http://www.linux-magazine.com/
+[5]:https://www.gimp.org/
+[6]:https://inkscape.org/en/
+[7]:https://okular.kde.org/
+[8]:http://www.adobe.com/products/photoshop.html
+[9]:http://www.adobe.com/products/illustrator.html
+[10]:https://acrobat.adobe.com/us/en/acrobat/acrobat-pro-cc.html
+[11]:https://krita.org/en/
+[12]:https://en.wikipedia.org/wiki/Preview_(macOS)
+[13]:https://en.wikipedia.org/wiki/Microsoft_Edge
+[14]:https://okular.kde.org/
+[15]:http://djvu.org/
+[16]:https://fileinfo.com/extension/chm
+[17]:https://fileinfo.com/extension/xps
+[18]:http://idpf.org/epub
+[19]:https://www.suse.com/communities/blog/tag/drawing/
+[20]:https://www.suse.com/communities/blog/tag/getting-started-with-linux/
+[21]:https://www.suse.com/communities/blog/tag/gimp/
+[22]:https://www.suse.com/communities/blog/tag/image-editing/
+[23]:https://www.suse.com/communities/blog/tag/images/
+[24]:https://www.suse.com/communities/blog/tag/inkscape/
+[25]:https://www.suse.com/communities/blog/tag/kde/
+[26]:https://www.suse.com/communities/blog/tag/krita/
+[27]:https://www.suse.com/communities/blog/tag/leap-42-3/
+[28]:https://www.suse.com/communities/blog/tag/libreoffice/
+[29]:https://www.suse.com/communities/blog/tag/linux-magazine/
+[30]:https://www.suse.com/communities/blog/tag/okular/
+[31]:https://www.suse.com/communities/blog/tag/opensuse/
+[32]:https://www.suse.com/communities/blog/tag/pdf/
+[33]:https://www.suse.com/communities/blog/category/desktop/
+[34]:https://www.suse.com/communities/blog/category/expert-views/
+[35]:https://www.suse.com/communities/blog/category/libreoffice/
+[36]:https://www.suse.com/communities/blog/category/opensuse/
diff --git a/sources/tech/20171117 System Logs: Understand Your Linux System.md b/sources/tech/20171117 System Logs: Understand Your Linux System.md
new file mode 100644
index 0000000000..0dcaa57925
--- /dev/null
+++ b/sources/tech/20171117 System Logs: Understand Your Linux System.md
@@ -0,0 +1,59 @@
+### System Logs: Understand Your Linux System
+
+![chabowski](https://www.suse.com/communities/blog/files/2016/03/chabowski_avatar_1457537819-100x100.jpg)
+ By: [chabowski][1]
+
+The following article is part of a series of articles that provide tips and tricks for Linux newbies – or Desktop users that are not yet experienced with regard to certain topics. This series intends to complement the special edition #30 “[Getting Started with Linux][2]” based on [openSUSE Leap][3], recently published by the [Linux Magazine,][4] with valuable additional information.
+
+This article has been contributed by Romeo S. Romeo is a PDX-based enterprise Linux professional specializing in scalable solutions for innovative corporations looking to disrupt the marketplace.
+
+System logs are incredibly important files in Linux. Special programs that run in the background (usually called daemons or servers) handle most of the tasks on your Linux system. Whenever these daemons do anything, they write the details of the task to a log file as a sort of “history” of what they’ve been up to. These daemons perform actions ranging from syncing your clock with an atomic clock to managing your network connection. All of this is written to log files so that if something goes wrong, you can look into the specific log file and see what happened.
+
+![](https://www.suse.com/communities/blog/files/2017/11/markus-spiske-153537-300x450.jpg)
+
+Photo by Markus Spiske on Unsplash
+
+There are many different logs on your Linux computer. Historically, they were mostly stored in the /var/log directory in a plain text format. Quite a few still are, and you can read them easily with the less pager. On your freshly installed openSUSE Leap 42.3 system, and on most modern systems, important logs are stored by the systemd init system. This is the system that handles starting up daemons and getting the computer ready for use on startup. The logs handled by systemd are stored in a binary format, which means that they take up less space and can more easily be viewed or exported in various formats, but the downside is that you need a special tool to view them. Luckily, this tool comes installed on your system: it’s called journalctl and by default, it records all of the logs from every daemon to one location.
+
+To take a look at your systemd log, just run the journalctl command. This will open up the combined logs in the less pager. To get a better idea of what you’re looking at, see a single log entry from journalctl here:
+
+```
+Jul 06 11:53:47 aaathats3as pulseaudio[2216]: [pulseaudio] alsa-util.c: Disabling timer-based scheduling because running inside a VM.
+```
+
+This individual log entry contains (in order) the date and time of the entry, the hostname of the computer, the name of the process that logged the entry, the PID (process ID number) of the process that logged the entry, and then the log entry itself.
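+
+journalctl also accepts filtering options, so you rarely have to scroll through the whole combined log. As a rough sketch (the unit name below is just an example), you might narrow things down like this:
+
+```
+journalctl -b                    # only messages from the current boot
+journalctl -u NetworkManager     # only messages from one systemd unit
+journalctl -p err                # only messages of priority "err" or more severe
+journalctl -f                    # follow new messages as they arrive, like tail -f
+```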
+
+If a program running on your system is misbehaving, look at the log file and search (with the “/” key followed by the search term) for the name of the program. Chances are that if the program is reporting errors that are causing it to malfunction, then the errors will show up in the system log. Sometimes errors are verbose enough for you to be able to fix them yourself. Other times, you have to search for a solution on the Web. Google is usually the most convenient search engine to use for weird Linux problems.
+![](https://www.suse.com/communities/blog/files/2017/09/Sunglasses_Emoji-450x450.png)
+However, be sure that you only enter the actual log entry, because the rest of the information at the beginning of the line (date, host name, PID) is unnecessary and could return false positives.
+
+After you search for the problem, the first few results are usually pages containing various things that you can try for solutions. Of course, you shouldn’t just follow random instructions that you find on the Internet: always be sure to do additional research into what exactly you will be doing and what the effects of it are before following any instructions. With that being said, the results for a specific entry from the system’s log file are usually much more useful than results from searching more generic terms that describe the malfunctioning of the program directly. This is because many different things could cause a program to misbehave, and multiple problems could cause identical misbehaviors.
+
+For example, a lack of audio on the system could be due to a massive amount of different reasons, ranging from speakers not being plugged in, to back end sound systems misbehaving, to a lack of the proper drivers. If you search for a general problem, you’re likely to see a lot of irrelevant solutions and you’ll end up wasting your time on a wild goose chase. With a specific search of an actual line from a log file, you can see other people who have had the same log entry. See Picture 1 and Picture 2 to compare and contrast between the two types of searching.
+
+![](https://www.suse.com/communities/blog/files/2017/11/picture1-450x450.png)
+
+Picture 1 shows generic, unspecific Google results for a general misbehavior of the system. This type of searching generally doesn’t help much.
+
+![](https://www.suse.com/communities/blog/files/2017/11/picture2-450x450.png)
+
+Picture 2 shows more specific, helpful Google results for a particular log file line. This type of searching is generally very helpful.
+
+There are some systems that log their actions outside of journalctl. The most important ones that you may find yourself dealing with on a desktop system are /var/log/zypper.log for openSUSE’s package manager, /var/log/boot.log for those messages that scroll by too fast to be read when you turn your system on, and /var/log/ntp if your Network Time Protocol daemon is having trouble syncing time. One more important place to look for errors if you’re having problems with specific hardware is the Kernel Ring Buffer, which you can read by typing the dmesg -H command (this opens in the less pager as well). The Kernel Ring Buffer is stored in RAM, so you lose it when you reboot your system, but it contains messages from the Linux kernel about important events, such as hardware being added, modules being loaded, or strange network errors.
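+
+For example, the plain-text logs mentioned above can be read with less, and dmesg can page its output the same way:
+
+```
+less /var/log/zypper.log    # openSUSE package manager history
+dmesg -H                    # kernel ring buffer, opened in a pager
+```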
+
+Hopefully you are prepared now to understand your Linux system better! Have a lot of fun!
+
+--------------------------------------------------------------------------------
+
+via: https://www.suse.com/communities/blog/system-logs-understand-linux-system/
+
+作者:[chabowski]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.suse.com/communities/blog/author/chabowski/
+[2]:http://www.linux-magazine.com/Resources/Special-Editions/30-Getting-Started-with-Linux
+[3]:https://en.opensuse.org/Portal:42.3
+[4]:http://www.linux-magazine.com/
diff --git a/sources/tech/20171118 Language engineering for great justice.md b/sources/tech/20171118 Language engineering for great justice.md
index d91c3ab471..35d9bd854f 100644
--- a/sources/tech/20171118 Language engineering for great justice.md
+++ b/sources/tech/20171118 Language engineering for great justice.md
@@ -1,3 +1,4 @@
+Translating by ValoniaKim
Language engineering for great justice
============================================================
diff --git a/sources/tech/20171120 Mark McIntyre How Do You Fedora.md b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md
new file mode 100644
index 0000000000..bfd19e1eda
--- /dev/null
+++ b/sources/tech/20171120 Mark McIntyre How Do You Fedora.md
@@ -0,0 +1,75 @@
+translating by zrszrszrs
+# [Mark McIntyre: How Do You Fedora?][1]
+
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg)
+
+We recently interviewed Mark McIntyre on how he uses Fedora. This is [part of a series][2] on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the [feedback form][3] to express your interest in becoming an interviewee.
+
+### Who is Mark McIntyre?
+
+Mark McIntyre is a geek by birth and Linux by choice. “I started coding at the early age of 13 learning BASIC on my own and finding the excitement of programming which led me down a path of becoming a professional coder,” he says. McIntyre and his niece are big fans of pizza. “My niece and I started a quest last fall to try as many of the pizza joints in Knoxville as possible. You can read about our progress at [https://knox-pizza-quest.blogspot.com/][4]” Mark is also an amateur photographer and [publishes his images][5] on Flickr.
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg)
+
+Mark has a diverse background as a developer. He has worked with Visual Basic for Applications, LotusScript, Oracle’s PL/SQL, Tcl/Tk and Python with Django as the framework. His strongest skill is Python which he uses in his current job as a systems engineer. “I am using Python on a regular basis. As my job is morphing into more of an automation engineer, that became more frequent.”
+
+McIntyre is a self-described nerd and loves sci-fi movies, but his favorite movie falls out of that genre. “As much as I am a nerd and love the Star Trek and Star Wars and related movies, the movie Glory is probably my favorite of all time.” He also mentioned that Serenity was a fantastic follow-up to a great TV series.
+
+Mark values humility, knowledge and graciousness in others. He appreciates people who act based on understanding the situation that other people are in. “If you add a decision to serve another, you have the basis for someone you’d want to be around instead of someone who you have to tolerate.”
+
+McIntyre works for [Scripps Networks Interactive][6], which is the parent company for HGTV, Food Network, Travel Channel, DIY, GAC, and several other cable channels. “Currently, I function as a systems engineer for the non-linear video content, which is all the media purposed for online consumption.” He supports a few development teams who write applications to publish the linear video from cable TV into the online formats such as Amazon and Hulu. The systems include both on-premise and cloud systems. Mark also develops automation tools for deploying these applications primarily to a cloud infrastructure.
+
+### The Fedora community
+
+Mark describes the Fedora community as an active community filled with people who enjoy life as Fedora users. “From designers to packagers, this group is still very active and feels alive.” McIntyre continues, “That gives me a sense of confidence in the operating system.”
+
+He started frequenting the #fedora channel on IRC around 2002: “Back then, Wi-Fi functionality was still done a lot by hand in starting the adapter and configuring the modules.” In order to get his Wi-Fi working he had to recompile the Fedora kernel. Shortly after, he started helping others in the #fedora channel.
+
+McIntyre encourages others to get involved in the Fedora Community. “There are many different areas of opportunity in which to be involved. Front-end design, testing deployments, development, packaging of applications, and new technology implementation.” He recommends picking an area of interest and asking questions of that group. “There are many opportunities available to jump in to contribute.”
+
+He credits a fellow community member with helping him get started: “Ben Williams was very helpful in my first encounters with Fedora, helping me with some of my first installation rough patches in the #fedora support channel.” Ben also encouraged Mark to become an [Ambassador][7].
+
+### What hardware and software?
+
+McIntyre uses Fedora Linux on all his laptops and desktops. On servers he chooses CentOS, due to the longer support lifecycle. His current desktop is self-built and equipped with an Intel Core i5 processor, 32 GB of RAM and 2 TB of disk space. “I have a 4K monitor attached which gives me plenty of room for viewing all my applications at once.” His current work laptop is a Dell Inspiron 2-in-1 13-inch laptop with 16 GB RAM and a 525 GB m.2 SSD.
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png)
+
+Mark currently runs Fedora 26 on any box he has set up in the past few months. When it comes to new versions, he likes to avoid the rush right when the version is officially released. “I usually try to get the latest version as soon as it goes gold, with the exception of one of my workstations running the next version’s beta when it is closer to release.” He usually upgrades in place: “The in-place upgrade using _dnf system-upgrade_ works very well these days.”
+
+To handle his photography, McIntyre uses [GIMP][8] and [Darktable][9], along with a few other photo viewing and quick editing packages. When not using web-based email, he uses [Geary][10] along with [GNOME Calendar][11]. Mark’s IRC client of choice is [HexChat][12] connecting to a [ZNC bouncer][13] running on a Fedora Server instance. His department’s communication is handled via Slack.
+
+“I have never really been a big IDE fan, so I spend time in [vim][14] for most of my editing.” Occasionally, he opens up a simple text editor like [gedit][15] or [xed][16]. Mark uses [GPaste][17] for copying and pasting. “I have become a big fan of [Tilix][18] for my terminal choice.” McIntyre manages the podcasts he likes with [Rhythmbox][19], and uses [Epiphany][20] for quick web lookups.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/mark-mcintyre-fedora/
+
+作者:[Charles Profitt][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://fedoramagazine.org/author/cprofitt/
+[1]:https://fedoramagazine.org/mark-mcintyre-fedora/
+[2]:https://fedoramagazine.org/tag/how-do-you-fedora/
+[3]:https://fedoramagazine.org/submit-an-idea-or-tip/
+[4]:https://knox-pizza-quest.blogspot.com/
+[5]:https://www.flickr.com/photos/mockgeek/
+[6]:http://www.scrippsnetworksinteractive.com/
+[7]:https://fedoraproject.org/wiki/Ambassadors
+[8]:https://www.gimp.org/
+[9]:http://www.darktable.org/
+[10]:https://wiki.gnome.org/Apps/Geary
+[11]:https://wiki.gnome.org/Apps/Calendar
+[12]:https://hexchat.github.io/
+[13]:https://wiki.znc.in/ZNC
+[14]:http://www.vim.org/
+[15]:https://wiki.gnome.org/Apps/Gedit
+[16]:https://github.com/linuxmint/xed
+[17]:https://github.com/Keruspe/GPaste
+[18]:https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/
+[19]:https://wiki.gnome.org/Apps/Rhythmbox
+[20]:https://wiki.gnome.org/Apps/Web
diff --git a/sources/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md b/sources/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md
new file mode 100644
index 0000000000..fe72e37128
--- /dev/null
+++ b/sources/tech/20171121 LibreOffice Is Now Available on Flathub the Flatpak App Store.md
@@ -0,0 +1,73 @@
+translating---geekpi
+
+
+# LibreOffice Is Now Available on Flathub, the Flatpak App Store
+
+![LibreOffice on Flathub](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/libroffice-on-flathub-750x250.jpeg)
+
+LibreOffice is now available to install from [Flathub][3], the centralised Flatpak app store.
+
+Its arrival allows anyone running a modern Linux distribution to install the latest stable release of LibreOffice in a click or two, without having to hunt down a PPA, tussle with tarballs or wait for a distro provider to package it up.
+
+A [LibreOffice Flatpak][5] has been available for users to download and install since August of last year and the [LibreOffice 5.2][6] release.
+
+What’s “new” here is the distribution method. Rather than release updates through their own dedicated server, The Document Foundation has opted to use Flathub.
+
+This is _great_ news for end users as it means there’s one less repo to worry about adding on a fresh install, and it’s good news for Flatpak advocates too: LibreOffice is open-source software’s most popular productivity suite, and its support for both the format and the app store is sure to be warmly welcomed.
+
+At the time of writing you can install LibreOffice 5.4.2 from Flathub. New stable releases will be added as and when they’re released.
+
+### Enable Flathub on Ubuntu
+
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/flathub-750x495.png)
+
+Fedora, Arch, and Linux Mint 18.3 users have Flatpak installed, ready to go, out of the box. Mint even comes with the Flathub remote pre-enabled.
+
+[Install LibreOffice from Flathub][7]
+
+To get Flatpak up and running on Ubuntu you first have to install it:
+
+```
+sudo apt install flatpak gnome-software-plugin-flatpak
+```
+
+To be able to install apps from Flathub you need to add the Flathub remote server:
+
+```
+flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+```
+
+That’s pretty much it. Just log out and back in (so that Ubuntu Software refreshes its cache) and you _should_ be able to find any Flatpak apps available on Flathub through the Ubuntu Software app.
+
+In this instance, search for “LibreOffice” and locate the result that has a line of text underneath mentioning Flathub. (Do bear in mind that Ubuntu has tweaked the Software client to show Snap app results above everything else, so you may need to scroll down the list of results to see it).
+
+There is a [bug with installing Flatpak apps][8] from a flatpakref file, so if the above method doesn’t work you can also install Flatpak apps from Flathub using the command line.
+
+The Flathub website lists the command needed to install each app. Switch to the “Command Line” tab to see them.
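+
+For LibreOffice, the command-line install should look something like the following (the app ID below is the one LibreOffice uses on Flathub; run it after adding the Flathub remote as described above):
+
+```
+flatpak install flathub org.libreoffice.LibreOffice   # install from the Flathub remote
+flatpak run org.libreoffice.LibreOffice               # launch it from the command line
+```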
+
+#### More apps on Flathub
+
+If you read this site regularly enough you’ll know that I _love_ Flathub. It’s home to some of my favourite apps (Corebird, Parlatype, GNOME MPV, Peek, Audacity, GIMP… etc). I get the latest, stable versions of these apps (plus any dependencies they need) without compromise.
+
+And, as I tweeted a week or so back, most Flatpak apps now look great with GTK themes — no more [workarounds][9] required!
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2017/11/libreoffice-now-available-flathub-flatpak-app-store
+
+作者:[ JOEY SNEDDON ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:https://plus.google.com/117485690627814051450/?rel=author
+[2]:http://www.omgubuntu.co.uk/category/news
+[3]:http://www.flathub.org/
+[4]:http://www.omgubuntu.co.uk/2017/11/libreoffice-now-available-flathub-flatpak-app-store
+[5]:http://www.omgubuntu.co.uk/2016/08/libreoffice-5-2-released-whats-new
+[6]:http://www.omgubuntu.co.uk/2016/08/libreoffice-5-2-released-whats-new
+[7]:https://flathub.org/repo/appstream/org.libreoffice.LibreOffice.flatpakref
+[8]:https://bugs.launchpad.net/ubuntu/+source/gnome-software/+bug/1716409
+[9]:http://www.omgubuntu.co.uk/2017/05/flatpak-theme-issue-fix
diff --git a/sources/tech/20171124 How do groups work on Linux.md b/sources/tech/20171124 How do groups work on Linux.md
new file mode 100644
index 0000000000..3e9c386e01
--- /dev/null
+++ b/sources/tech/20171124 How do groups work on Linux.md
@@ -0,0 +1,143 @@
+HankChow Translating
+
+How do groups work on Linux?
+============================================================
+
+Hello! Last week, I thought I knew how users and groups worked on Linux. Here is what I thought:
+
+1. Every process belongs to a user (like `julia`)
+
+2. When a process tries to read a file owned by a group, Linux a) checks if the user `julia` can access the file, and b) checks which groups `julia` belongs to, and whether any of those groups owns & can access that file
+
+3. If either of those is true (or if the ‘any’ bits are set right) then the process can access the file
+
+So, for example, if a process is owned by the `julia` user and `julia` is in the `awesome` group, then the process would be allowed to read this file.
+
+```
+r--r--r-- 1 root awesome 6872 Sep 24 11:09 file.txt
+
+```
+
+I had not thought carefully about this, but if pressed I would have said that it probably checks the `/etc/group` file at runtime to see what groups you’re in.
+
+### that is not how groups work
+
+I found out at work last week that, no, what I describe above is not how groups work. In particular Linux does **not** check which groups a process’s user belongs to every time that process tries to access a file.
+
+Here is how groups actually work! I learned this by reading Chapter 9 (“Process Credentials”) of [The Linux Programming Interface][1] which is an incredible book. As soon as I realized that I did not understand how users and groups worked, I opened up the table of contents with absolute confidence that it would tell me what’s up, and I was right.
+
+### how users and groups checks are done
+
+The key new insight for me was pretty simple! The chapter starts out by saying that user and group IDs are **attributes of the process**:
+
+* real user ID and group ID;
+
+* effective user ID and group ID;
+
+* saved set-user-ID and saved set-group-ID;
+
+* file-system user ID and group ID (Linux-specific); and
+
+* supplementary group IDs.
+
+This means that the way Linux **actually** does group checks to see if a process can read a file is:
+
+* look at the process’s group IDs & supplementary group IDs (from the attributes on the process, **not** by looking them up in `/etc/group`)
+
+* look at the group on the file
+
+* see if they match
+
+Generally when doing access control checks it uses the **effective** user/group ID, not the real user/group ID. Technically when accessing a file it actually uses the **file-system** ids but those are usually the same as the effective uid/gid.
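+
+A quick way to see these attributes on your own shell process is the `id` utility; a short sketch using standard coreutils flags:
+
+```
+id        # effective uid/gid plus supplementary groups of the current shell
+id -u     # effective user ID
+id -ru    # real user ID
+id -G     # all group IDs, including supplementary groups
+```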
+
+### Adding a user to a group doesn’t put existing processes in that group
+
+Here’s another fun example that follows from this: if I create a new `panda` group and add myself (bork) to it, then run `groups` to check my group memberships – I’m not in the panda group!
+
+```
+bork@kiwi~> sudo addgroup panda
+Adding group `panda' (GID 1001) ...
+Done.
+bork@kiwi~> sudo adduser bork panda
+Adding user `bork' to group `panda' ...
+Adding user bork to group panda
+Done.
+bork@kiwi~> groups
+bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd
+
+```
+
+no `panda` in that list! To double check, let’s try making a file owned by the `panda` group and see if I can access it:
+
+```
+$ touch panda-file.txt
+$ sudo chown root:panda panda-file.txt
+$ sudo chmod 660 panda-file.txt
+$ cat panda-file.txt
+cat: panda-file.txt: Permission denied
+
+```
+
+Sure enough, I can’t access `panda-file.txt`. No big surprise there. My shell didn’t have the `panda` group as a supplementary GID before, and running `adduser bork panda` didn’t do anything to change that.
+
+### how do you get your groups in the first place?
+
+So this raises kind of a confusing question, right – if processes have groups baked into them, how do you get assigned your groups in the first place? Obviously you can’t assign yourself more groups (that would defeat the purpose of access control).
+
+It’s relatively clear how processes I **execute** from my shell (bash/fish) get their groups – my shell runs as me, and it has a bunch of group IDs on it. Processes I execute from my shell are forked from the shell so they get the same groups as the shell had.
+
+So there needs to be some “first” process that has your groups set on it, and all the other processes you set inherit their groups from that. That process is called your **login shell** and it’s run by the `login` program (`/bin/login`) on my laptop. `login` runs as root and calls a C function called `initgroups` to set up your groups (by reading `/etc/group`). It’s allowed to set up your groups because it runs as root.
+
+### let’s try logging in again!
+
+So! Let’s say I am running in a shell, and I want to refresh my groups! From what we’ve learned about how groups are initialized, I should be able to run `login` to refresh my groups and start a new login shell!
+
+Let’s try it:
+
+```
+$ sudo login bork
+$ groups
+bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd panda
+$ cat panda-file.txt # it works! I can access the file owned by `panda` now!
+
+```
+
+Sure enough, it works! Now the new shell that `login` spawned is part of the `panda` group! Awesome! This won’t affect any other shells I already have running. If I really want the new `panda` group everywhere, I need to restart my login session completely, which means quitting my window manager and logging in again.
+
+### newgrp
+
+Somebody on Twitter told me that if you want to start a new shell with a new group that you’ve been added to, you can use `newgrp`. Like this:
+
+```
+sudo addgroup panda
+sudo adduser bork panda
+newgrp panda # starts a new shell, and you don't have to be root to run it!
+
+```
+
+You can accomplish the same(ish) thing with `sg panda bash` which will start a `bash` shell that runs with the `panda` group.
+
+### setuid sets the effective user ID
+
+I’ve also always been a little vague about what it means for a process to run as “setuid root”. It turns out that setuid sets the effective user ID! So if I (`julia`) run a setuid root process (like `passwd`), then the **real** user ID will be set to `julia`, and the **effective** user ID will be set to `root`.
+
+`passwd` needs to run as root, but it can look at its real user ID to see that `julia` started the process, and prevent `julia` from editing any passwords except for `julia`’s password.
+
+### that’s all!
+
+There are a bunch more details about all the edge cases and exactly how everything works in The Linux Programming Interface so I will not get into all the details here. That book is amazing. Everything I talked about in this post is from Chapter 9, which is a 17-page chapter inside a 1300-page book.
+
+The thing I love most about that book is that reading 17 pages about how users and groups work is really approachable, self-contained, super useful, and I don’t have to tackle all 1300 pages of it at once to learn helpful things :)
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2017/11/20/groups/
+
+作者:[Julia Evans ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca/
+[1]:http://man7.org/tlpi/
diff --git a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md
new file mode 100644
index 0000000000..d3ba75da14
--- /dev/null
+++ b/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md
@@ -0,0 +1,185 @@
+Translating by filefi
+
+
+How to Install and Use Wireshark on Debian 9 / Ubuntu 16.04 / 17.10
+============================================================
+
+by [Pradeep Kumar][1] · Published November 29, 2017 · Updated November 29, 2017
+
+ [![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2]
+
+Wireshark is a free and open source, cross-platform, GUI-based network packet analyzer that is available for Linux, Windows, macOS, Solaris and other platforms. It captures network packets in real time and presents them in a human-readable format. Wireshark allows us to inspect network packets at a microscopic level. Wireshark also has a command-line utility called ‘tshark‘ that performs the same functions as Wireshark, but through the terminal rather than a GUI.
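+
+As a small illustration of the command-line side (eth0 here is just an example interface name; adjust it to your own), tshark can capture a few packets straight from the terminal:
+
+```
+sudo tshark -i eth0 -c 10    # capture 10 packets from eth0 and print a summary line for each
+```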
+
+Wireshark can be used for network troubleshooting, analysis, software and communication protocol development, and also for educational purposes. Wireshark uses a library called ‘pcap‘ for capturing network packets.
+
+Wireshark comes with a lot of features, some of which are:
+
+* Support for hundreds of protocols for inspection,
+
+* Ability to capture packets in real time and save them for later offline analysis,
+
+* A number of filters for analyzing data,
+
+* Captured data can be compressed and uncompressed on the fly,
+
+* Support for various file formats for data analysis; output can also be saved to XML, CSV and plain text formats,
+
+* Data can be captured from a number of interfaces like Ethernet, Wi-Fi, Bluetooth, USB, Frame Relay, Token Ring etc.
+
+In this article, we will discuss how to install Wireshark on Ubuntu/Debian machines and will also learn to use Wireshark for capturing network packets.
+
+#### Installation of Wireshark on Ubuntu 16.04 / 17.10
+
+Wireshark is available in the default Ubuntu repositories and can simply be installed using the following command. However, there is a chance that you will not get the latest version of Wireshark this way.
+
+```
+linuxtechi@nixworld:~$ sudo apt-get update
+linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
+```
+
+So, to install the latest version of Wireshark, we have to enable or configure the official Wireshark repository.
+
+Use the commands below, one after another, to configure the repository and to install the latest version of the Wireshark utility:
+
+```
+linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable
+linuxtechi@nixworld:~$ sudo apt-get update
+linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
+```
+
+Once Wireshark is installed, execute the following command so that non-root users can capture live packets on the interfaces:
+
+```
+linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
+```
+
+#### Installation of Wireshark on Debian 9
+
+The Wireshark package and its dependencies are already present in the default Debian 9 repositories, so to install the latest stable version of Wireshark on Debian 9, use the following command:
+
+```
+linuxtechi@nixhome:~$ sudo apt-get update
+linuxtechi@nixhome:~$ sudo apt-get install wireshark -y
+```
+
+During the installation, it will prompt us to configure dumpcap for non-superusers.
+
+Select ‘yes’ and then hit enter.
+
+ [![Configure-Wireshark-Debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9-1024x542.jpg)][3]
+
+Once the installation is completed, execute the following command so that non-root users can also capture live packets on the interfaces:
+
+```
+linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap
+```
+
+We can also use the latest source package to install Wireshark on Ubuntu/Debian and many other Linux distributions.
+
+#### Installing Wireshark using source code on Debian / Ubuntu Systems
+
+First, download the latest source package (which is 2.4.2 at the time of writing this article) using the following command:
+
+```
+linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz
+```
+
+Next, extract the package and enter the extracted directory:
+
+```
+linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp
+linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2
+```
+
+Now we will compile the code with the following commands,
+
+```
+linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install
+linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make
+```
+
+Lastly, install the compiled packages to complete the Wireshark installation on the system:
+
+```
+linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install
+linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig
+```
+
+Upon installation, a separate group for Wireshark will also be created. We will now add our user to that group so that it can work with Wireshark; otherwise you might get a ‘permission denied‘ error when starting Wireshark.
+
+To add the user to the wireshark group, execute the following command,
+
+```
+linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi
+```
+
+Now we can start Wireshark either from the GUI menu or from the terminal with this command:
+
+```
+linuxtechi@nixhome:~$ wireshark
+```
+
+#### Access Wireshark on Debian 9 System
+
+ [![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4]
+
+Click on Wireshark icon
+
+ [![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5]
+
+#### Access Wireshark on Ubuntu 16.04 / 17.10
+
+ [![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6]
+
+Click on Wireshark icon
+
+ [![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7]
+
+#### Capturing and Analyzing packets
+
+Once Wireshark has been started, we should be presented with the Wireshark window; examples are shown above for the Ubuntu and Debian systems.
+
+ [![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8]
+
+These are the interfaces from which we can capture network packets. Based on the interfaces you have on your system, this screen might look different for you.
+
+We are selecting ‘enp0s3’ to capture the network traffic on that interface. After selecting the interface, network packets for all the devices on our network start to populate (refer to the screenshot below):
+
+ [![Capturing-Packet-from-enp0s3-Ubuntu-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark-1024x727.jpg)][9]
+
+The first time we see this screen, we might be overwhelmed by the amount of data presented and wonder how to sort it all out. But worry not: one of the best features of Wireshark is its filters.
+
+We can sort/filter the data based on IP address, port number, source and destination, packet size, etc., and can also combine two or more filters to create more comprehensive searches. We can either write our filters in the ‘Apply a Display Filter‘ tab, or we can select one of the already created rules. To select a pre-built filter, click on the ‘flag‘ icon next to the ‘Apply a Display Filter‘ tab:
+
+ [![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10]
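+
+For reference, here are a few typical display filter expressions: the first matches traffic to or from one host, the second matches TCP port 80, and the third combines two conditions with a logical AND (the addresses and ports are only examples):
+
+```
+ip.addr == 192.168.1.10
+tcp.port == 80
+ip.src == 10.0.0.5 && tcp.port == 443
+```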
+
+We can also filter data based on the color coding. By default, light purple is TCP traffic, light blue is UDP traffic, and black identifies packets with errors. To see what these colors mean, click View -> Coloring Rules; we can also change these colors.
+
+ [![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11]
+
+After we have the results that we need, we can click on any of the captured packets to get more details about that packet; this will show all the data about that network packet.
+
+Wireshark is an extremely powerful tool, and it takes some time to get used to it and gain a command over it; this tutorial will help you get started. Please feel free to drop your queries or suggestions in the comment box below.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com
+
+作者:[Pradeep Kumar][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxtechi.com/author/pradeep/
+[1]:https://www.linuxtechi.com/author/pradeep/
+[2]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg
+[3]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg
+[4]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg
+[5]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg
+[6]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg
+[7]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg
+[8]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg
+[9]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg
+[10]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg
+[11]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg
diff --git a/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md b/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md
new file mode 100644
index 0000000000..3469c62569
--- /dev/null
+++ b/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md
@@ -0,0 +1,115 @@
+Excellent Business Software Alternatives For Linux
+-------
+
+Many business owners choose to use Linux as the operating system for their operations for a variety of reasons.
+
+1. Firstly, they don't have to pay anything for the privilege, and that is a massive bonus during the early stages of a company where money is tight.
+
+2. Secondly, Linux is a light alternative compared to Windows and other popular operating systems available today.
+
+Of course, lots of entrepreneurs worry they won't have access to some of the essential software packages if they make that move. However, as you will discover throughout this post, there are plenty of similar tools that will cover all the bases.
+
+ [![](https://4.bp.blogspot.com/-xwLuDRdB6sw/Whxx0Z5pI5I/AAAAAAAADhU/YWHID8GU9AgrXRfeTz4HcDZkG-XWZNbSgCLcBGAs/s400/4444061098_6eeaa7dc1a_z.jpg)][3]
+
+### Alternatives to Microsoft Word
+
+All company bosses will require access to a word processing tool if they want to ensure the smooth running of their operation, according to [the latest article from Fareed Siddiqui][4]. You'll need that software to write business plans, letters, and many other jobs within your firm. Thankfully, there are a variety of alternatives you might like to select if you opt for the Linux operating system. Some of the most popular ones include:
+
+* LibreOffice Writer
+
+* AbiWord
+
+* KWord
+
+* LaTeX
+
+So, you just need to read some online reviews and then download the best word processor based on your findings. Of course, if you're not satisfied with the solution, you should take a look at some of the other ones on that list. In many instances, any of the programs mentioned above should work well.
+
+### Alternatives to Microsoft Excel
+
+ [![](https://4.bp.blogspot.com/-XdS6bSLQbOU/WhxyeWZeeCI/AAAAAAAADhc/C3hGY6rgzX4m2emunot80-4URu9-aQx8wCLcBGAs/s400/28929069495_e85d2626ba_z.jpg)][5]
+
+You need a spreadsheet tool if you want to ensure your business doesn't get into trouble when it comes to bookkeeping and inventory control. There are specialist software packages on the market for both of those tasks, but [open-source alternatives][6] to Microsoft Excel will give you the most freedom when creating and editing your spreadsheets. While there are other packages out there, some of the best ones for Linux users include:
+
+* [LibreOffice Calc][1]
+
+* KSpread
+
+* Gnumeric
+
+Those programs work in much the same way as Microsoft Excel, and so you can use them for issues like accounting and stock control. You might also use that software to monitor employee earnings or punctuality. The possibilities are endless and only limited by your imagination.
+
+### Alternatives to Adobe Photoshop
+
+ [![](https://3.bp.blogspot.com/-Id9Dm3CIXmc/WhxzGIlv3zI/AAAAAAAADho/VfIRCAbJMjMZzG2M97-uqLV9mOhqN7IWACLcBGAs/s400/32206185926_c69accfcef_z.jpg)][7]
+
+Company bosses require access to design programs when developing their marketing materials and creating graphics for their websites. You might also use software of that nature to come up with a new business logo at some point. Lots of entrepreneurs spend a fortune on [Training Connections Photoshop classes][8] and those available from other providers. They do that in the hope of educating their teams and getting the best results. However, people who use Linux can still benefit from that expertise if they select one of the following [alternatives][9]:
+
+* GIMP
+
+* Krita
+
+* Pixel
+
+* LightZone
+
+The last two suggestions on that list require a substantial investment. Still, they function in much the same way as Adobe Photoshop, and so you should manage to achieve the same quality of work.
+
+### Other software solutions that you'll want to consider
+
+Alongside those alternatives to some of the most widely-used software packages around today, business owners should take a look at the full range of products they could use with the Linux operating system. Here are some tools you might like to research and consider:
+
+* Inkscape - similar to Coreldraw
+
+* LibreOffice Base - similar to Microsoft Access
+
+* LibreOffice Impress - similar to Microsoft PowerPoint
+
+* File Roller - similar to WinZip
+
+* Linphone - similar to Skype
+
+There are [lots of other programs][10] you'll also want to research, and so the best solution is to use the internet to learn more. You will find lots of reviews from people who've used the software in the past, and many of them will compare the tool to its Windows or iOS alternative. So, you shouldn't have to work too hard to identify the best ones and sort the wheat from the chaff.
+
+Now that you have all the right information, it's time to weigh the pros and cons of Linux and work out if it's suitable for your operation. In most instances, that operating system does not place any limits on your business activities. It's just that you need to use different software compared to some of your competitors. People who use Linux tend to benefit from improved security, speed, and performance. Also, the solution gets regular updates, and so it's growing every single day. Unlike Windows and other solutions, you can customize Linux to meet your requirements. With that in mind, do not make the mistake of overlooking this fantastic system!
+
+--------------------------------------------------------------------------------
+
+via: http://linuxblog.darkduck.com/2017/11/excellent-business-software.html
+
+作者:[DarkDuck][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxblog.darkduck.com/
+[1]:http://linuxblog.darkduck.com/2015/08/pivot-tables-in-libreoffice-calc.html
+[3]:https://4.bp.blogspot.com/-xwLuDRdB6sw/Whxx0Z5pI5I/AAAAAAAADhU/YWHID8GU9AgrXRfeTz4HcDZkG-XWZNbSgCLcBGAs/s1600/4444061098_6eeaa7dc1a_z.jpg
+[4]:https://www.linkedin.com/pulse/benefits-using-microsoft-word-fareed/
+[5]:https://4.bp.blogspot.com/-XdS6bSLQbOU/WhxyeWZeeCI/AAAAAAAADhc/C3hGY6rgzX4m2emunot80-4URu9-aQx8wCLcBGAs/s1600/28929069495_e85d2626ba_z.jpg
+[6]:http://linuxblog.darkduck.com/2014/03/why-open-software-and-what-are-benefits.html
+[7]:https://3.bp.blogspot.com/-Id9Dm3CIXmc/WhxzGIlv3zI/AAAAAAAADho/VfIRCAbJMjMZzG2M97-uqLV9mOhqN7IWACLcBGAs/s1600/32206185926_c69accfcef_z.jpg
+[8]:https://www.trainingconnection.com/photoshop-training.php
+[9]:http://linuxblog.darkduck.com/2011/10/photoshop-alternatives-for-linux.html
+[10]:http://www.makeuseof.com/tag/best-linux-software/
diff --git a/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md b/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md
new file mode 100644
index 0000000000..46afe9b893
--- /dev/null
+++ b/sources/tech/20171130 Undistract-me : Get Notification When Long Running Terminal Commands Complete.md
@@ -0,0 +1,156 @@
+translating---geekpi
+
+Undistract-me : Get Notification When Long Running Terminal Commands Complete
+============================================================
+
+by [sk][2] · November 30, 2017
+
+![Undistract-me](https://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2-720x340.png)
+
+A while ago, we published how to [get notification when a Terminal activity is done][3]. Today, I found a similar utility called “undistract-me” that notifies you when long running terminal commands complete. Picture this scenario: you run a command that takes a while to finish. In the meantime, you check your Facebook and get absorbed in it. After a while, you remember that you ran a command a few minutes ago. You go back to the Terminal and notice that the command has already finished, but you have no idea when it completed. Have you ever been in this situation? I bet most of you have, many times. This is where “undistract-me” comes in handy. You don’t need to constantly check the terminal to see if a command is completed or not. The undistract-me utility will notify you when a long running command is completed. It will work on Arch Linux, Debian, Ubuntu and other Ubuntu derivatives.
+
+#### Installing Undistract-me
+
+Undistract-me is available in the default repositories of Debian and its variants such as Ubuntu. All you have to do is to run the following command to install it.
+
+```
+sudo apt-get install undistract-me
+```
+
+Arch Linux users can install it from the AUR using any AUR helper program.
+
+Using [Pacaur][4]:
+
+```
+pacaur -S undistract-me-git
+```
+
+Using [Packer][5]:
+
+```
+packer -S undistract-me-git
+```
+
+Using [Yaourt][6]:
+
+```
+yaourt -S undistract-me-git
+```
+
+Then, run the following command to add “undistract-me” to your Bash.
+
+```
+echo 'source /etc/profile.d/undistract-me.sh' >> ~/.bashrc
+```
+
+Alternatively you can run this command to add it to your Bash:
+
+```
+echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .bashrc
+```
+
+If you are in Zsh shell, run this command:
+
+```
+echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .zshrc
+```
+
+Finally update the changes:
+
+For Bash:
+
+```
+source ~/.bashrc
+```
+
+For Zsh:
+
+```
+source ~/.zshrc
+```
+
+#### Configure Undistract-me
+
+By default, Undistract-me considers any command that takes more than 10 seconds to complete to be a long-running command. You can change this time interval by editing the /usr/share/undistract-me/long-running.bash file:
+
+```
+sudo nano /usr/share/undistract-me/long-running.bash
+```
+
+Find the “LONG_RUNNING_COMMAND_TIMEOUT” variable and change the default value (10 seconds) to something else of your choice.
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png)][7]
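+
+For example, to treat anything that runs longer than 30 seconds as long-running, the line in that file would become something like this (30 is just an illustrative value):
+
+```
+LONG_RUNNING_COMMAND_TIMEOUT=30   # notify only for commands that run longer than 30 seconds
+```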
+
+Save and close the file. Do not forget to update the changes:
+
+```
+source ~/.bashrc
+```
+
+Also, you can disable notifications for particular commands. To do so, find the “LONG_RUNNING_IGNORE_LIST” variable and add the commands to it, space-separated, as in the sketch below.
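+
+For instance, a hypothetical ignore list that silences notifications for interactive programs might look like this inside long-running.bash:
+
+```
+LONG_RUNNING_IGNORE_LIST="vim top htop less man"   # commands that should never trigger a notification
+```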
+
+By default, the notification will only show if the active window is not the window the command is running in. That means it will notify you only if the command is running in a background Terminal window. If the command is running in the active Terminal window, you will not be notified. If you want undistract-me to send notifications whether the Terminal window is visible or in the background, you can set IGNORE_WINDOW_CHECK to 1 to skip the window check.
+
+The other cool feature of Undistract-me is that you can get an audio notification along with the visual notification when a command is done. By default, it only sends a visual notification. You can change this behavior by setting the variable UDM_PLAY_SOUND to a non-zero integer on the command line. However, your Ubuntu system should have the pulseaudio-utils and sound-theme-freedesktop utilities installed to enable this functionality.
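+
+A sketch of how that could look on Ubuntu (the package names are the ones mentioned above; the export only affects the current shell session):
+
+```
+sudo apt-get install pulseaudio-utils sound-theme-freedesktop   # packages needed for the sound
+export UDM_PLAY_SOUND=1                                         # enable the audio notification
+```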
+
+Please remember that you need to run the following command to update the changes made.
+
+For Bash:
+
+```
+source ~/.bashrc
+```
+
+For Zsh:
+
+```
+source ~/.zshrc
+```
+
+It is time to verify if this really works.
+
+#### Get Notification When Long Running Terminal Commands Complete
+
+Now, run any command that takes longer than 10 seconds or the time duration you defined in Undistract-me script.
+
+I ran the following command on my Arch Linux desktop.
+
+```
+sudo pacman -Sy
+```
+
+This command took 32 seconds to complete. After the completion of the above command, I got the following notification.
+
+ [![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png)][8]
+
+Please remember that the Undistract-me script notifies you only if the given command took more than 10 seconds to complete. If the command completes in less than 10 seconds, you will not be notified. Of course, you can change this time interval setting as described in the Configuration section above.
+
+I find this tool very useful. It helped me get back to business after getting completely lost in other tasks. I hope this tool will be helpful to you too.
+
+More good stuffs to come. Stay tuned!
+
+Cheers!
+
+Resource:
+
+* [Undistract-me GitHub Repository][1]
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/undistract-get-notification-long-running-terminal-commands-complete/
+
+作者:[sk][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://github.com/jml/undistract-me
+[2]:https://www.ostechnix.com/author/sk/
+[3]:https://www.ostechnix.com/get-notification-terminal-task-done/
+[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
+[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png
+[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png
diff --git a/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md
new file mode 100644
index 0000000000..efb0937695
--- /dev/null
+++ b/sources/tech/20171130 Wake up and Shut Down Linux Automatically.md
@@ -0,0 +1,132 @@
+Wake up and Shut Down Linux Automatically
+============================================================
+
+### [banner.jpg][1]
+
+![time keeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb)
+
+Learn how to configure your Linux computers to watch the time for you, then wake up and shut down automatically.
+
+[Creative Commons Attribution][6][The Observatory at Delhi][7]
+
+Don't be a watt-waster. If your computers don't need to be on then shut them down. For convenience and nerd creds, you can configure your Linux computers to wake up and shut down automatically.
+
+### Precious Uptimes
+
+Some computers need to be on all the time, which is fine as long as it's not about satisfying an uptime compulsion. Some people are very proud of their lengthy uptimes, and now that we have kernel hot-patching that leaves only hardware failures requiring shutdowns. I think it's better to be practical. Save electricity as well as wear on your moving parts, and shut them down when they're not needed. For example, you can wake up a backup server at a scheduled time, run your backups, and then shut it down until it's time for the next backup. Or, you can configure your Internet gateway to be on only at certain times. Anything that doesn't need to be on all the time can be configured to turn on, do a job, and then shut down.
+
+### Sleepies
+
+For computers that don't need to be on all the time, good old cron will shut them down reliably. Use either root's cron, or /etc/crontab. This example creates a root cron job to shut down every night at 11:15 p.m.
+
+```
+# crontab -e -u root
+# m h dom mon dow command
+15 23 * * * /sbin/shutdown -h now
+```
+
+To shut down only on weekdays (Monday through Friday), limit the day-of-week field:
+
+```
+15 23 * * 1-5 /sbin/shutdown -h now
+```
+
+You may also use /etc/crontab, which is fast and easy, and everything is in one file. You have to specify the user:
+
+```
+15 23 * * 1-5 root shutdown -h now
+```
+
+Auto-wakeups are very cool; most of my SUSE colleagues are in Nuremberg, so I am crawling out of bed at 5 a.m. to have a few hours of overlap with their schedules. My work computer turns itself on at 5:30 a.m., and then all I have to do is drag my coffee and myself to my desk to start work. It might not seem like pressing a power button is a big deal, but at that time of day every little thing looms large.
+
+Waking up your Linux PC can be less reliable than shutting it down, so you may want to try different methods. You can use wakeonlan, RTC wakeups, or your PC's BIOS to set scheduled wakeups. These all work because, when you power off your computer, it's not really all the way off; it is in an extremely low-power state and can receive and respond to signals. You need to use the power supply switch to turn it off completely.
+
+### BIOS Wakeup
+
+A BIOS wakeup is the most reliable. My system BIOS has an easy-to-use wakeup scheduler (Figure 1). Chances are yours does, too. Easy peasy.
+
+### [fig-1.png][2]
+
+![wake up](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I)
+
+Figure 1: My system BIOS has an easy-to-use wakeup scheduler.
+
+[Used with permission][8]
+
+### wakeonlan
+
+wakeonlan is the next most reliable method. This requires sending a signal from a second computer to the computer you want to power on. You could use an Arduino or Raspberry Pi to send the wakeup signal, a Linux-based router, or any Linux PC. First, look in your system BIOS to see if wakeonlan is supported -- which it should be -- and then enable it, as it should be disabled by default.
+
+Then, you'll need an Ethernet network adapter that supports wakeonlan; wireless adapters won't work. You'll need to verify that your Ethernet card supports wakeonlan:
+
+```
+# ethtool eth0 | grep -i wake-on
+ Supports Wake-on: pumbg
+ Wake-on: g
+```
+
+* d -- all wake ups disabled
+
+* p -- wake up on physical activity
+
+* u -- wake up on unicast messages
+
+* m -- wake up on multicast messages
+
+* b -- wake up on broadcast messages
+
+* a -- wake up on ARP messages
+
+* g -- wake up on magic packet
+
+* s -- set the Secure On password for the magic packet
+
+man ethtool is not clear on what the p switch does; it suggests that any signal will cause a wake up. In my testing, however, it doesn't do that. The one that must be enabled is g -- wake up on magic packet, and the Wake-on line shows that it is already enabled. If it is not enabled, you can use ethtool to enable it, using your own device name, of course:
+
+```
+# ethtool -s eth0 wol g
+```
+
+This setting may not survive a reboot on some systems, so you can have cron reapply it at boot time with an @reboot job:
+
+```
+@reboot /usr/bin/ethtool -s eth0 wol g
+```
+
+### [fig-2.png][3]
+
+![wakeonlan](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_7.png?itok=XQAwmHoQ)
+
+Figure 2: Enable Wake on LAN.
+
+[Used with permission][9]
+
+Another option: recent Network Manager versions have a nice little checkbox to enable wakeonlan (Figure 2).
+
+There is a field for setting a password, but if your network interface doesn't support the Secure On password, it won't work.
+
+Now you need to configure a second PC to send the wakeup signal. You don't need root privileges, so create a cron job for your user. You need the MAC address of the network interface on the machine you're waking up:
+
+```
+30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B
+```
+
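+If you don't know the MAC address of the target machine, you can read it from that machine before shutting it down; a sketch, assuming the interface is called eth0:
+
+```
+ip link show eth0 | awk '/link\/ether/ {print $2}'   # print the MAC address of eth0
+cat /sys/class/net/eth0/address                      # alternative: read it from sysfs
+```
+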
+Using the real-time clock for wakeups is the least reliable method. Check out [Wake Up Linux With an RTC Alarm Clock][4]; this is a bit outdated as most distros use systemd now. Come back next week to learn more about updated ways to use RTC wakeups.
+
+Learn more about Linux through the free ["Introduction to Linux" ][5]course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically
+
+作者:[Carla Schroder]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.linux.com/files/images/bannerjpg
+[2]:https://www.linux.com/files/images/fig-1png-11
+[3]:https://www.linux.com/files/images/fig-2png-7
+[4]:https://www.linux.com/learn/wake-linux-rtc-alarm-clock
+[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[6]:https://www.linux.com/licenses/category/creative-commons-attribution
+[7]:http://www.columbia.edu/itc/mealac/pritchett/00routesdata/1700_1799/jaipur/delhijantarearly/delhijantarearly.html
+[8]:https://www.linux.com/licenses/category/used-permission
+[9]:https://www.linux.com/licenses/category/used-permission
diff --git a/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md b/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md
new file mode 100644
index 0000000000..a74b196663
--- /dev/null
+++ b/sources/tech/20171201 Fedora Classroom Session: Ansible 101.md
@@ -0,0 +1,71 @@
+### [Fedora Classroom Session: Ansible 101][2]
+
+### By Sachin S Kamath
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg)
+
+Fedora Classroom sessions continue this week with an Ansible session. The general schedule for sessions appears [on the wiki][3]. You can also find [resources and recordings from previous sessions][4] there. Here are details about this week’s session on [Thursday, 30th November at 1600 UTC][5]. That link allows you to convert the time to your timezone.
+
+### Topic: Ansible 101
+
+As the Ansible [documentation][6] explains, Ansible is an IT automation tool. It’s primarily used to configure systems, deploy software, and orchestrate more advanced IT tasks. Examples include continuous deployments or zero downtime rolling updates.
+
+This Classroom session covers the topics listed below:
+
+1. Introduction to SSH
+
+2. Understanding different terminologies
+
+3. Introduction to Ansible
+
+4. Ansible installation and setup
+
+5. Establishing password-less connection
+
+6. Ad-hoc commands
+
+7. Managing inventory
+
+8. Playbooks examples
+
+There will also be a follow-up Ansible 102 session later. That session will cover complex playbooks, roles, dynamic inventory files, control flow and Galaxy.
+
+### Instructors
+
+We have two experienced instructors handling this session.
+
+[Geoffrey Marr][7], also known by his IRC name as “coremodule,” is a Red Hat employee and Fedora contributor with a background in Linux and cloud technologies. While working, he spends his time lurking in the [Fedora QA][8] wiki and test pages. Away from work, he enjoys RaspberryPi projects, especially those focusing on software-defined radio.
+
+[Vipul Siddharth][9] is an intern at Red Hat who also works on Fedora. He loves to contribute to open source and seeks opportunities to spread the word of free and open source software.
+
+### Joining the session
+
+This session takes place on [BlueJeans][10]. The following information will help you join the session:
+
+* URL: [https://bluejeans.com/3466040121][1]
+
+* Meeting ID (for Desktop App): 3466040121
+
+We hope you attend, learn from, and enjoy this session! If you have any feedback about the sessions, have ideas for a new one or want to host a session, please feel free to comment on this post or edit the [Classroom wiki page][11].
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/fedora-classroom-session-ansible-101/
+
+作者:[Sachin S Kamath]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://bluejeans.com/3466040121
+[2]:https://fedoramagazine.org/fedora-classroom-session-ansible-101/
+[3]:https://fedoraproject.org/wiki/Classroom
+[4]:https://fedoraproject.org/wiki/Classroom#Previous_Sessions
+[5]:https://www.timeanddate.com/worldclock/fixedtime.html?msg=Fedora+Classroom+-+Ansible+101&iso=20171130T16&p1=%3A
+[6]:http://docs.ansible.com/ansible/latest/index.html
+[7]:https://fedoraproject.org/wiki/User:Coremodule
+[8]:https://fedoraproject.org/wiki/QA
+[9]:https://fedoraproject.org/wiki/User:Siddharthvipul1
+[10]:https://www.bluejeans.com/downloads
+[11]:https://fedoraproject.org/wiki/Classroom
diff --git a/sources/tech/20171201 How to Manage Users with Groups in Linux.md b/sources/tech/20171201 How to Manage Users with Groups in Linux.md
new file mode 100644
index 0000000000..35350c819f
--- /dev/null
+++ b/sources/tech/20171201 How to Manage Users with Groups in Linux.md
@@ -0,0 +1,168 @@
+translating---imquanquan
+
+How to Manage Users with Groups in Linux
+============================================================
+
+### [group-of-people-1645356_1920.jpg][1]
+
+![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV)
+
+Learn how to work with users, via groups and access control lists in this tutorial.
+
+[Creative Commons Zero][4]
+
+Pixabay
+
+When you administer a Linux machine that houses multiple users, there might be times when you need to take more control over those users than the basic user tools offer. This idea comes to the fore especially when you need to manage permissions for certain users. Say, for example, you have a directory that needs to be accessed with read/write permissions by one group of users and only read permissions for another group. With Linux, this is entirely possible. To make this happen, however, you must first understand how to work with users, via groups and access control lists (ACLs).
+
+We’ll start from the beginning with users and work our way to the more complex ACLs. Everything you need to make this happen will be included in your Linux distribution of choice. We won’t touch on the basics of users, as the focus on this article is about groups.
+
+For the purpose of this piece, I’m going to assume the following:
+
+You need to create two users with usernames:
+
+* olivia
+
+* nathan
+
+You need to create two groups:
+
+* readers
+
+* editors
+
+Olivia needs to be a member of the group editors, while nathan needs to be a member of the group readers. The group readers needs to only have read permission to the directory /DATA, whereas the group editors needs to have both read and write permission to the /DATA directory. This, of course, is very minimal, but it will give you the basic information you need to expand the tasks to fit your much larger needs.
+
+I’ll be demonstrating on the Ubuntu 16.04 Server platform. The commands will be universal—the only difference would be if your distribution of choice doesn’t make use of sudo. If this is the case, you’ll have to first su to the root user to issue the commands that require sudo in the demonstrations.
+
+### Creating the users
+
+The first thing we need to do is create the two users for our experiment. User creation is handled with the useradd command. Instead of simply creating the users, we'll create them both with their own home directories and then give them passwords.
+
+The first thing we do is create the users. To do this, issue the commands:
+
+```
+sudo useradd -m olivia
+
+sudo useradd -m nathan
+```
+
+Next each user must have a password. To add passwords into the mix, you’d issue the following commands:
+
+```
+sudo passwd olivia
+
+sudo passwd nathan
+```
+
+That’s it, your users are created.
+
+### Creating groups and adding users
+
+Now we’re going to create the groups readers and editors and then add users to them. The commands to create our groups are:
+
+```
+sudo addgroup readers
+
+sudo addgroup editors
+```
+
+### [groups_1.jpg][2]
+
+![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/groups_1.jpg?itok=BKwL89BB)
+
+Figure 1: Our new groups ready to be used.
+
+[Used with permission][5]
+
+With our groups created, we need to add our users. We’ll add user nathan to group readers with the command:
+
+```
+sudo usermod -a -G readers nathan
+```
+
+Next, we'll add user olivia to the group editors with the command:
+
+```
+sudo usermod -a -G editors olivia
+```
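+
+If you want to double-check that the memberships took effect (an optional verification step, not part of the original walkthrough), the id command lists each user's groups:
+
+```
+id nathan
+
+id olivia
+```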
+
+### Giving groups permissions to directories
+
+Let’s say you have the directory /READERS and you need to allow all members of the readers group access to that directory. First, change the group of the folder with the command:
+
+```
+sudo chown -R :readers /READERS
+```
+
+Next, remove write permission from the group, since readers only need read access:
+
+```
+sudo chmod -R g-w /READERS
+```
+
+Finally, remove the execute bit for all other users, so that only members of the readers group (and the owner) can enter the directory:
+
+```
+sudo chmod -R o-x /READERS
+```
+
+Let’s say you have the directory /EDITORS and you need to give members of the editors group read and write permission to its contents. To do that, the following commands would be necessary:
+
+```
+sudo chown -R :editors /EDITORS
+
+sudo chmod -R g+w /EDITORS
+
+sudo chmod -R o-x /EDITORS
+```
+
+The problem with using this method is you can only add one group to a directory at a time. This is where access control lists come in handy.
+
+### Using access control lists
+
+Now, let’s get tricky. Say you have a single folder—/DATA—and you want to give members of the readers group read permission and members of the group editors read/write permissions. To do that, you must take advantage of the setfacl command. The setfacl command sets file access control lists for files and folders.
+
+The structure of this command looks like this:
+
+```
+setfacl OPTION X:NAME:Y /DIRECTORY
+```
+
+Where OPTION is the available options, X is either u (for a user) or g (for a group), NAME is the name of that user or group, and Y is the permissions to set. The commands below use the -m option (modify) and -R (recursive). To give members of the readers group read access to the /DATA directory, issue the command:
+
+```
+sudo setfacl -m g:readers:rx -R /DATA
+```
+
+To give members of the editors group read/write permissions (while retaining read permissions for the readers group), we'd issue the command:
+
+```
+sudo setfacl -m g:editors:rwx -R /DATA
+```
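+
+To see how the resulting ACL looks (an optional check), you can read it back with getfacl:
+
+```
+getfacl /DATA
+```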
+
+### All the control you need
+
+And there you have it. You can now add members to groups and control those groups’ access to various directories with all the power and flexibility you need. To read more about the above tools, issue the commands:
+
+* man useradd
+
+* man addgroup
+
+* man usermod
+
+* man setfacl
+
+* man chown
+
+* man chmod
+
+Learn more about Linux through the free ["Introduction to Linux" ][3]course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux
+
+作者:[Jack Wallen ]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.linux.com/files/images/group-people-16453561920jpg
+[2]:https://www.linux.com/files/images/groups1jpg
+[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[4]:https://www.linux.com/licenses/category/creative-commons-zero
+[5]:https://www.linux.com/licenses/category/used-permission
diff --git a/sources/tech/20171202 Scrot Linux command-line screen grabs made simple b/sources/tech/20171202 Scrot Linux command-line screen grabs made simple
new file mode 100644
index 0000000000..979ed86b3c
--- /dev/null
+++ b/sources/tech/20171202 Scrot Linux command-line screen grabs made simple
@@ -0,0 +1,72 @@
+Translating by filefi
+
+# Scrot: Linux command-line screen grabs made simple
+
+by [Scott Nesbitt][a] · November 30, 2017
+
+> Scrot is a basic, flexible tool that offers a number of handy options for taking screen captures from the Linux command line.
+
+[![Original photo by Rikki Endsley. CC BY-SA 4.0](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)][1]
+
+
+
+There are great tools on the Linux desktop for taking screen captures, such as [KSnapshot][2] and [Shutter][3]. Even the simple utility that comes with the GNOME desktop does a pretty good job of capturing screens. But what if you rarely need to take screen captures? Or you use a Linux distribution without a built-in capture tool, or an older computer with limited resources?
+
+Turn to the command line and a little utility called [Scrot][4]. It does a fine job of taking simple screen captures, and it includes a few features that might surprise you.
+
+### Getting started with Scrot
+Many Linux distributions come with Scrot already installed—to check, type `which scrot`. If it isn't there, you can install Scrot using your distro's package manager. If you're willing to compile the code, grab it [from GitHub][4].
+
+To take a screen capture, crack open a terminal window and type `scrot [filename]`, where `[filename]` is the name of the file to which you want to save the image (for example, `desktop.png`). If you don't include a name for the file, Scrot will create one for you, such as `2017-09-24-185009_1687x938_scrot.png`. (That filename isn't as descriptive as it could be, is it? That's why it's better to add one to the command.)
+
+Running Scrot with no options takes a screen capture of your entire desktop. If you don't want to do that, Scrot lets you focus on smaller portions of your screen.
+
+### Taking a screen capture of a single window
+
+Tell Scrot to take a screen capture of a single window by typing `scrot -u [filename]`.
+
+The `-u` option tells Scrot to grab the window currently in focus. That's usually the terminal window you're working in, which might not be the one you want.
+
+To grab another window on your desktop, type `scrot -s [filename]`.
+
+The `-s` option lets you do one of two things:
+
+* select an open window, or
+
+* draw a rectangle around a window or a portion of a window to capture it.
+
+You can also set a delay, which gives you a little more time to select the window you want to capture. To do that, type `scrot -u -d [num] [filename]`.
+
+The `-d` option tells Scrot to wait before grabbing the window, and `[num]` is the number of seconds to wait. Specifying `-d 5` (wait five seconds) should give you enough time to choose a window.
+
+### More useful options
+
+Scrot offers a number of additional features (most of which I never use). The ones I find most useful include:
+
+* `-b` also grabs the window's border
+
+* `-t` grabs a window and creates a thumbnail of it. This can be useful when you're posting screen captures online.
+
+* `-c` creates a countdown in your terminal when you use the `-d` option (see the combined example below).
+
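+Putting a few of those options together (the file names here are only examples): the first command grabs a window, including its border, after a five-second countdown; the second captures the whole desktop and also creates a thumbnail at 25 percent of the original size:
+
+```
+scrot -u -b -c -d 5 window.png
+scrot -t 25 desktop.png
+```
+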
+To learn about Scrot's other options, check out its documentation by typing `man scrot` in a terminal window, or [read it online][5]. Then start snapping images of your screen.
+
+It's basic, but Scrot gets the job done nicely.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot
+
+作者:[Scott Nesbitt][a]
+译者:[filefi](https://github.com/filefi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/scottnesbitt
+[1]:https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A
+[2]:https://www.kde.org/applications/graphics/ksnapshot/
+[3]:https://launchpad.net/shutter
+[4]:https://github.com/dreamer/scrot
+[5]:http://manpages.ubuntu.com/manpages/precise/man1/scrot.1.html
+[6]:https://github.com/dreamer/scrot
diff --git a/translated/talk/20170119 Be a force for good in your community.md b/translated/talk/20170119 Be a force for good in your community.md
new file mode 100644
index 0000000000..035409c4c1
--- /dev/null
+++ b/translated/talk/20170119 Be a force for good in your community.md
@@ -0,0 +1,128 @@
+成为你所在社区的美好力量
+============================================================
+
+>明白如何传递美好,了解积极意愿的力量,以及更多。
+
+ ![Be a force for good in your community](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/people_remote_teams_world.png?itok=wI-GW8zX "Be a force for good in your community")
+
+>图片来自:opensource.com
+
+激烈的争论是开源社区和开放组织的标志特征之一。在我们最好的日子里,这些争论充满活力和建设性。他们面红耳赤的背后其实是幽默和善意。各方实事求是,共同解决问题,推动持续改进。对我们中的许多人来说,他们只是单纯的娱乐而已。
+
+然而在我们最糟糕的日子里,这些争论演变成了对旧话题的反复争吵。或者我们用各种方式来传递伤害和相互攻击,或是使用卑劣的手段,而这些侵蚀着我们社区的激情、信任和生产力。
+
+我们茫然四顾,束手无策,因为社区的对话开始变得有毒。然而,正如 [DeLisa Alexander最近的分享][1],我们每个人都有很多方法可以成为我们社区的一种力量。
+
+在这个“开源文化”系列的第一篇文章中,我将分享一些策略,教你如何在这个关键时刻进行干预,引导每个人走向更积极、更有效率的方向。
+
+### 不要将人推开,而是将人推向前方
+
+最近,我和我的朋友和同事 [Mark Rumbles][2] 一起吃午饭。多年来,我们在许多支持开源文化和引领 Red Hat 的项目中合作。在这一天,马克问我是怎么坚持的,当我看到辩论变得越来越丑陋的时候,他看到我最近介入了一个邮件列表的对话。
+
+幸运的是,这事早已尘埃落定,事实上我几乎忘记了谈话的内容。然而,它让我们开始讨论如何在一个拥有数千名成员的社区里,公开和坦率的辩论。
+
+>在我们的社区里,我们成为一种美好力量的最好的方法之一就是:在回应冲突时,以一种迫使每个人提升他们的行为,而不是使冲突升级的方式。
+
+Mark 说了一些让我印象深刻的话。他说:“你知道,作为一个社区,我们真的很擅长将人推开。但我想看到的是,我们更多的是互相扶持 _向前_ 。”
+
+Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量的最好的方法之一就是:在回应冲突时,以一种迫使每个人提升他们的行为,而不是使冲突升级的方式。
+
+### 积极意愿假想
+
+我们可以从一个简单的假想开始,当我们在一个激烈的对话中观察到不良行为时:完全有可能该不良行为其实有着积极意愿。
+
+诚然,这不是一件容易的事情。当我看到一场辩论正在变得肮脏的迹象时,我停下来问自己,史蒂芬·科维(Steven Covey)所说的人性化问题是什么:
+
+“为什么一个理性、正直的人会做这样的事情?”
+
+现在,如果他是你的一个“普通的观察对象”——一个有消极行为倾向的社区成员——也许你的第一个想法是,“嗯,也许这个人是个不靠谱,不理智的人”
+
+回过头来说。我并不是说你让你自欺欺人。这其实就是人性化的问题,不仅是因为它让你理解别人的立场,它还让你变得人性化。
+
+而这反过来又能帮助你做出反应,或者从最有效率的地方进行干预。
+
+### 寻求了解社区异议的原因
+
+当我再一次问自己为什么一个理性的、正直的人可能会做这样的事情时,归结为几个原因:
+
+* 他认为没人聆听他
+* 他认为没人尊重他
+* 他认为没人理解他
+
+一个简单的积极意愿假想,我们可以适用于几乎所有的不良行为,其实就是那个人想要被聆听,被尊重,或被理解。我想这是相当合理的。
+
+通过站在这个更客观、更有同情心的角度,我们可以看到他们的行为几乎肯定 **_不_** 会帮助他们得到他们想要的东西,而社区也会因此而受到影响。如果没有我们的帮助的话。
+
+对我来说,这激发了一个愿望:帮助每个人从我们所处的这个丑陋的地方“摆脱困境”。
+
+在我介入之前,我问自己一个后续的问题:是否有其他积极的意图可能会驱使这种行为
+
+容易想到的例子包括:
+
+* 他们担心我们错过了一些重要的东西,或者我们犯了一个错误,没有人能看到它。
+* 他们想为自己的贡献感到有价值。
+* 他们精疲力竭,因为在社区里工作过度或者在他们的个人生活中发生了一些事情。
+* 他们讨厌一些东西被破坏,并感到沮丧,因为没有人能看到造成的伤害或不便。
+* ……诸如此类。
+
+有了这些,我就有了丰富的积极的意图假想,我可以为他们的行为找到原因。我准备伸出援助之手,向他们提供一些帮助。
+
+### 传递美好,挣脱泥潭
+
+什么是 an out?(类似于佛家“解脱法门”的意思)把它想象成一个逃跑的门。这是一种退出对话的方式,或者放弃不良的行为,恢复表现得像一个体面的人,而不是丢面子。是叫某人振作向上,而不是叫他走开。
+
+你可能经历过这样的事情,在你的生活中,当 _你_ 在一次谈话中表现不佳时,咆哮着,大喊大叫,对某事大惊小怪,而有人慷慨地给 _你_ 提供了一个台阶下。也许他们选择不去和你“抬杠”,相反,他们说了一些表明他们相信你是一个理性、正直的人,他们采用积极意愿假想,比如:
+
+> _所以,嗯,我听到的是你真的很担心,你很沮丧,因为似乎没有人在听。或者你担心我们忽略了它的重要性。是这样对吧?_
+
+于是乎:即使这不是完全正确的(也许你的意图不那么高尚),在那一刻,你可能抓住了他们提供给你的台阶,并欣然接受了重新定义你的不良行为的机会。你几乎可以肯定地转向一个更富有成效的角度,甚至你自己可能都没有意识到。
+
+也许你这样说,“哦,虽然不完全是这样,但我只是担心,我们这样会走向歧途,我明白你说的,作为社区,我们不能同时解决所有问题,但如果我们不尽快解决这个问题,会有更多不好的事情要发生……”
+
+最后,谈话几乎可以肯定地开始转移到一个更有效率的方向。
+
+我们都有机会让一个沮丧的人挣脱泥潭,而这就是方法。
+
+### 坏行为还是坏人?
+
+如果这个人特别激动,他们可能不会听到或者接受你给出的第一台阶。没关系。最可能的是,他们迟钝的大脑已经被史前曾经对人类生存至关重要的杏仁核接管了,他们需要更多的时间来认识到你并不是一个威胁。只是需要你保持温和的态度,坚定地对待他们,就好像他们 _曾经是_ 一个理性、正直的人,看看会发生什么。
+
+根据我的经验,这些社区干预以三种方式结束:
+
+大多数情况下,这个人实际上 _是_ 一个理性的人,很快,他们就感激地接受了这个事实。在这个过程中,每个人都跳出了“黑与白”,“赢或输”的心态。人们开始思考创造性的选择和“双赢”的结果,每个人都将受益。
+
+> 为什么一个理性、正直的人会做这样的事呢?
+
+有时候,这个人天生不是特别理性或正直的,但当他被你以如此一致的、不知疲倦的、耐心的慷慨和善良的对待的时候,他们就会羞愧地从谈话中撤退。这听起来像是,“嗯,我想我已经说了所有要说的了。谢谢你听我的意见”。或者,对于不那么开明的人来说,“嗯,我厌倦了这种谈话。让我们结束吧。”(好的,谢谢)。
+
+更少的情况是,这个人是一个“_坏人_”,或者在社区管理圈子里,是一个“搅屎棍”。这些人确实存在,而且他们在演戏方面很有发展。你猜怎么着?通过持续地以一种友善、慷慨、以社区为中心的方式,完全无视所有试图使局势升级的尝试,你有效地将谈话变成了一个对他们没有兴趣的领域。他们别无选择,只能放弃它。你成为赢家。
+
+这就是积极意愿假想的力量。通过对愤怒和充满敌意的言辞做出回应,优雅而有尊严地回应,你就能化解一场战争,理清混乱,解决棘手的问题,而且在这个过程中很有可能会交到一个新朋友。
+
+我每次应用这个原则都成功吗?见鬼,不。但我从不后悔选择了积极意愿。但是我能生动的回想起,当我采用消极意愿假想时,将问题变得更糟糕的场景。
+
+现在轮到你了。我很乐意听到你提出的一些策略和原则,当你的社区里的对话变得激烈的时候,要成为一股好力量。在下面的评论中分享你的想法。
+
+下次,我们将探索更多的方法,在你的社区里成为一个美好力量,我将分享一些处理“坏脾气先生”的技巧。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot-square_0.jpg?itok=FS97b9YD)
+
+丽贝卡·费尔南德斯(Rebecca Fernandez)是红帽公司(Red Hat)的首席就业品牌 + 通讯专家,是《开源组织》书籍的贡献者,也是开源决策框架的维护者。她的兴趣是开源和业务管理模型的开源方式。Twitter:@ruhbehka
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/17/1/force-for-good-community
+
+作者:[Rebecca Fernandez][a]
+译者:[chao-zhi](https://github.com/chao-zhi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/rebecca
+[1]:https://opensource.com/business/15/5/5-ways-promote-inclusive-environment
+[2]:https://twitter.com/leadership_365
diff --git a/translated/talk/20170320 Education of a Programmer.md b/translated/talk/20170320 Education of a Programmer.md
new file mode 100644
index 0000000000..84762b8655
--- /dev/null
+++ b/translated/talk/20170320 Education of a Programmer.md
@@ -0,0 +1,167 @@
+程序员的学习之路
+============================================================
+
+*2016 年 10 月,当我从微软离职时,我已经在微软工作了近 21 年,在工业界也快 35 年了。我花了一些时间反思我这些年来学到的东西,这些文字是那篇帖子稍加修改后得到。请见谅,文章有一点长。*
+
+要成为一名专业的程序员,你需要知道的事情多得令人吃惊:语言的细节,API,算法,数据结构,系统和工具。这些东西一直在随着时间变化——新的语言和编程环境不断出现,似乎总有热门的新工具或新语言是“每个人”都在使用的。紧跟潮流,保持专业,这很重要。木匠需要知道如何为工作选择合适的锤子和钉子,并且要有能力笔直精准地钉入钉子。
+
+与此同时,我也发现有一些理论和方法有着广泛的应用场景,它们能使用几十年。底层设备的性能和容量在这几十年来增长了几个数量级,但系统设计的思考方式还是互相有关联的,这些思考方式比具体的实现更根本。理解这些重复出现的主题对分析与设计我们所负责的系统大有帮助。
+
+谦卑和自我
+
+这不仅仅局限于编程,但在编程这个持续发展的领域,一个人需要在谦卑和自我中保持平衡。总有新的东西需要学习,并且总有人能帮助你学习——如果你愿意学习的话。一个人即需要保持谦卑,认识到自己不懂并承认它,也要保持自我,相信自己能掌握一个新的领域,并且能运用你已经掌握的知识。我见过的最大的挑战就是一些人在某个领域深入专研了很长时间,“忘记”了自己擅长学习新的东西。最好的学习来自放手去做,建造一些东西,即便只是一个原型或者 hack。我知道的最好的程序员对技术有广泛的认识,但同时他们对某个技术深入研究,成为了专家。而深入的学习来自努力解决真正困难的问题。
+
+端到端观点
+
+1981 年,Jerry Saltzer, Dave Reed 和 Dave Clark 在做因特网和分布式系统的早期工作,他们提出了端到端观点,并作出了[经典的阐述][4]。网络上的文章有许多误传,所以更应该阅读论文本身。论文的作者很谦虚,没有声称这是他们自己的创造——从他们的角度看,这只是一个常见的工程策略,不只在通讯领域中,在其他领域中也有运用。他们只是将其写下来并收集了一些例子。下面是文章的一个小片段:
+
+> 当我们设计系统的一个功能时,仅依靠端点的知识和端点的参与,就能正确地完整地实现这个功能。在一些情况下,系统的内部模块局部实现这个功能,可能会对性能有重要的提升。
+
+论文称这是一个“观点”,虽然在维基百科和其他地方它已经被上升成“原则”。实际上,还是把它看作一个观点比较好,正如作者们所说,系统设计者面临的最难的问题之一就是如何在系统组件之间划分责任,这会引发不断的讨论:怎样在划分功能时权衡利弊,怎样隔离复杂性,怎样设计一个灵活的高性能系统来满足不断变化的需求。没有简单的原则可以直接遵循。
+
+互联网上的大部分讨论集中在通信系统上,但端到端观点的适用范围其实更广泛。分布式系统中的“最终一致性”就是一个例子。一个满足“最终一致性”的系统,可以让系统中的元素暂时进入不一致的状态,从而简化系统,优化性能,因为有一个更大的端到端过程来解决不一致的状态。我喜欢横向拓展的订购系统的例子(例如亚马逊),它不要求每个请求都通过中央库存的控制点。缺少中央控制点可能允许两个终端出售相同的最后一本书,所以系统需要用某种方法来解决这个问题,如通知客户该书会延期交货。不论怎样设计,想购买的最后一本书在订单完成前都有可能被仓库中的叉车运出厍(译者注:比如被其他人下单购买)。一旦你意识到你需要一个端到端的解决方案,并实现了这个方案,那系统内部的设计就可以被优化,并利用这个解决方案。
+
+事实上,这种设计上的灵活性可以优化系统的性能,或者提供其他的系统功能,从而使得端到端的方法变得如此强大。端到端的思考往往允许内部进行灵活的操作,使整个系统更加健壮,并且能适应每个组件特性的变化。这些都让端到端的方法变得健壮,并能适应变化。
+
+端到端方法意味着,添加会牺牲整体性能灵活性的抽象层和功能时要非常小心(也可能是其他的灵活性,但性能,特别是延迟,往往是特殊的)。如果你展示出底层的原始性能(performance, 也可能指操作),端到端的方法可以根据这个性能(操作)来优化,实现特定的需求。如果你破坏了底层性能(操作),即使你实现了重要的有附加价值的功能,你也牺牲了设计灵活性。
+
+如果系统足够庞大而且足够复杂,需要把整个开发团队分配给系统内部的组件,那么端到端观点可以和团队组织相结合。这些团队自然要扩展这些组件的功能,他们通常从牺牲设计上的灵活性开始,尝试在组件上实现端到端的功能。
+
+应用端到端方法面临的挑战之一是确定端点在哪里。 俗话说,“大跳蚤上有小跳蚤,小跳蚤上有更少的跳蚤……等等”。
+
+关注复杂性
+
+编程是一门精确的艺术,每一行代码都要确保程序的正确执行。但这是带有误导的。编程的复杂性不在于各个部分的整合,也不在于各个部分之间如何相互交互。最健壮的程序将复杂性隔离开,让最重要的部分变的简单直接,通过简单的方式与其他部分交互。虽然隐藏复杂性和信息隐藏、数据抽象等其他设计方法一样,但我仍然觉得,如果你真的要定位出系统的复杂所在,并将其隔离开,那你需要对设计特别敏锐。
+
+在我的[文章][5]中反复提到的例子是早期的终端编辑器 VI 和 Emacs 中使用的屏幕重绘算法。早期的视频终端实现了控制序列,来控制绘制字符核心操作,也实现了附加的显示功能,来优化重新绘制屏幕,如向上向下滚动当前行,或者插入新行,或在当前行中移动字符。这些命令都具有不同的开销,并且这些开销在不同制造商的设备中也是不同的。(参见[TERMCAP][6]以获取代码链接和更完整的历史记录。)像文本编辑器这样的全屏应用程序希望尽快更新屏幕,因此需要优化使用这些控制序列来转换屏幕从一个状态到另一个状态。
+
+这些程序在设计上隐藏了底层的复杂性。系统中修改文本缓冲区的部分(功能上大多数创新都在这里)完全忽略了这些改变如何被转换成屏幕更新命令。这是可以接受的,因为针对*任何*内容的改变计算最佳命令所消耗的性能代价,远不及被终端本身实际执行这些更新命令的性能代价。在确定如何隐藏复杂性,以及隐藏哪些复杂性时,性能分析扮演着重要的角色,这一点在系统设计中非常常见。屏幕的更新与底层文本缓冲区的更改是异步的,并且可以独立于缓冲区的实际历史变化顺序。缓冲区*怎样*改变的并不重要,重要的是改变了*什么*。异步耦合,在组件交互时消除组件对历史路径依赖的组合,以及用自然的交互方式以有效地将组件组合在一起是隐藏耦合复杂度的常见特征。
+
+隐藏复杂性的成功不是由隐藏复杂性的组件决定的,而是由使用该模块的使用者决定的。这就是为什么组件的提供者至少要为组件的某些端到端过程负责。他们需要清晰的知道系统的其他部分如何与组件相互作用,复杂性是如何泄漏出来的(以及是否泄漏出来)。这常常表现为“这个组件很难使用”这样的反馈——这通常意味着它不能有效地隐藏内部复杂性,或者没有选择一个隐藏复杂性的功能边界。
+
+分层与组件化
+
+系统设计人员的一个基本工作是确定如何将系统分解成组件和层;决定自己要开发什么,以及从别的地方获取什么。开源项目在决定自己开发组件还是购买服务时,大多会选择自己开发,但组件之间交互的过程是一样的。在大规模工程中,理解这些决策将如何随着时间的推移而发挥作用是非常重要的。从根本上说,变化是程序员所做的一切的基础,所以这些设计决定不仅在当下被评估,还要随着产品的不断发展而在未来几年得到评估。
+
+以下是关于系统分解的一些事情,它们最终会占用大量的时间,因此往往需要更长的时间来学习和欣赏。
+
+* 层泄漏。层(或抽象)[基本上是泄漏的][1]。这些泄漏会立即产生后果,也会随着时间的推移而产生两方面的后果。其中一方面就是该抽象层的特性渗透到了系统的其他部分,渗透的程度比你意识到得更深入。这些渗透可能是关于具体的性能特征的假设,以及抽象层的文档中没有明确的指出的行为发生的顺序。这意味着假如内部组件的行为发生变化,你的系统会比想象中更加脆弱。第二方面是你比表面上看起来更依赖组件内部的行为,所以如果你考虑改变这个抽象层,后果和挑战可能超出你的想象。
+
+* 层具有太多功能了。您所采用的组件具有比实际需要更多的功能,这几乎是一个真理。在某些情况下,你决定采用这个组件是因为你想在将来使用那些尚未用到的功能。有时,你采用组件是想“上快车”,利用组件完成正在进行的工作。在功能强大的抽象层上开发会带来一些后果。1) 组件往往会根据你并不需要的功能作出取舍。 2) 为了实现那些你并不没有用到的功能,组件引入了复杂性和约束,这些约束将阻碍该组件的未来的演变。3) 层泄漏的范围更大。一些泄漏是由于真正的“抽象泄漏”,另一些是由于明显的,逐渐增加的对组件全部功能的依赖(但这些依赖通常都没有处理好)。Office 太大了,我们发现,对于我们建立的任何抽象层,我们最终都在系统的某个部分完全运用了它的功能。虽然这看起来是积极的(我们完全地利用了这个组件),但并不是所用的使用都有同样的价值。所以,我们最终要付出巨大的代价才能从一个抽象层往另一个抽象层迁移,这种“长尾巴”没什么价值,并且对使用场景认识不足。4) 附加的功能会增加复杂性,并增加功能滥用的可能。如果将验证 XML 的 API 指定为 XML 树的一部分,那这个 API 可以选择动态下载 XML 的模式定义。这在我们的基本文件解析代码中被错误地执行,导致 w3c.org 服务器上的大量性能下降以及(无意)分布式拒绝服务攻击。(这些被通俗地称为“地雷”API)。
+
+* 抽象层被更换。需求发展,系统发展,组件被放弃。您最终需要更换该抽象层或组件。不管是对外部组件的依赖还是对内部组件的依赖都是如此。这意味着上述问题将变得重要起来。
+
+* 自己构建还是购买的决定将会改变。这是上面几方面的必然结果。这并不意味着自己构建还是购买的决定在当时是错误的。一开始时往往没有合适的组件,一段时间之后才有合适的组件出现。或者,也可能你使用了一个组件,但最终发现它不符合您不断变化的要求,而且你的要求非常窄,能被理解,或着对你的价值体系来说是非常重要的,以至于拥有自己的模块是有意义的。这意味着你像关心自己构造的模块一样,关心购买的模块,关心它们是怎样泄漏并深入你的系统中的。
+
+* 抽象层会变臃肿。一旦你定义了一个抽象层,它就开始增加功能。层是对使用模式优化的自然分界点。臃肿的层的困难在于,它往往会降低您利用底层的不断创新的能力。从某种意义上说,这就是操作系统公司憎恨构建在其核心功能之上的臃肿的层的原因——采用创新的速度放缓了。避免这种情况的一种比较规矩的方法是禁止在适配器层中进行任何额外的状态存储。微软基础类在 Win32 上采用这个一般方法。在短期内,将功能集成到现有层(最终会导致上述所有问题)而不是重构和重新推导是不可避免的。理解这一点的系统设计人员寻找分解和简化组件的方法,而不是在其中增加越来越多的功能。
+
+爱因斯坦宇宙
+
+几十年来,我一直在设计异步分布式系统,但是在微软内部的一次演讲中,SQL 架构师 Pat Helland 的一句话震惊了我。 “我们生活在爱因斯坦的宇宙中,没有同时性。”在构建分布式系统时(基本上我们构建的都是分布式系统),你无法隐藏系统的分布式特性。这是物理的。我一直感到远程过程调用在根本上错误的,这是一个原因,尤其是那些“透明的”远程过程调用,它们就是想隐藏分布式的交互本质。你需要拥抱系统的分布式特性,因为这些意义几乎总是需要通过系统设计和用户体验来完成。
+
+拥抱分布式系统的本质则要遵循以下几个方面:
+
+* 一开始就要思考设计对用户体验的影响,而不是试图在处理错误,取消请求和报告状态上打补丁。
+
+* 使用异步技术来耦合组件。同步耦合是*不可能*的。如果某些行为看起来是同步的,是因为某些内部层尝试隐藏异步,这样做会遮蔽(但绝对不隐藏)系统运行时的基本行为特征。
+
+* 认识到并且明确设计了交互状态机,这些状态表示长期的可靠的内部系统状态(而不是由深度调用堆栈中的变量值编码的临时,短暂和不可发现的状态)。
+
+* 认识到失败是在所难免的。要保证能检测出分布式系统中的失败,唯一的办法就是直接看你的等待时间是否“太长”。这自然意味着[取消的等级最高][2]。系统的某一层(可能直接通向用户)需要决定等待时间是否过长,并取消操作。取消只是为了重建局部状态,回收局部的资源——没有办法在系统内广泛使用取消机制。有时用一种低成本,不可靠的方法广泛使用取消机制对优化性能可能有用。
+
+* 认识到取消不是回滚,因为它只是回收本地资源和状态。如果回滚是必要的,它必须实现成一个端到端的功能。
+
+* 承认永远不会真正知道分布式组件的状态。只要你发现一个状态,它可能就已经改变了。当你发送一个操作时,请求可能在传输过程中丢失,也可能被处理了但是返回的响应丢失了,或者请求需要一定的时间来处理,这样远程状态最终会在未来的某个任意的时间转换。这需要像幂等操作这样的方法,并且要能够稳健有效地重新发现远程状态,而不是期望可靠地跟踪分布式组件的状态。“[最终一致性][3]”的概念简洁地捕捉了这其中大多数想法。
+
+我喜欢说你应该“陶醉在异步”。与其试图隐藏异步,不如接受异步,为异步而设计。当你看到像幂等性或不变性这样的技术时,你就认识到它们是拥抱宇宙本质的方法,而不仅仅是工具箱中的一个设计工具。
+
+性能
+
+我确信 Don Knuth 会对人们怎样误解他的名言“过早的优化是一切罪恶的根源”而感到震惊。事实上,性能,及性能持续超过60年的指数增长(或超过10年,取决于您是否愿意将晶体管,真空管和机电继电器的发展算入其中),为所有行业内的惊人创新和影响经济的“软件吃遍世界”的变化打下了基础。
+
+要认识到这种指数变化的一个关键是,虽然系统的所有组件正在经历指数变化,但这些指数是不同的。硬盘容量的增长速度与内存容量的增长速度不同,与 CPU 的增长速度不同,与内存 CPU 之间的延迟的性能改善速度也不用。即使性能发展的趋势是由相同的基础技术驱动的,增长的指数也会有分歧。[延迟的改进从根本上改善了带宽][7]。指数变化在近距离或者短期内看起来是线性的,但随着时间的推移可能是压倒性的。系统不同组件的性能的增长不同,会出现压倒性的变化,并迫使对设计决策定期进行重新评估。
+
+这样做的结果是,几年后,一度有意义的设计决定就不再有意义了。或者在某些情况下,二十年前有意义的方法又开始变成一个好的决定。现代内存映射的特点看起来更像是早期分时的进程切换,而不像分页那样。 (这样做有时会让我这样的老人说“这就是我们在 1975 年时用的方法”——忽略了这种方法在 40 年都没有意义,但现在又重新成为好的方法,因为两个组件之间的关系——可能是闪存和 NAND 而不是磁盘和核心内存——已经变得像以前一样了)。
+
+当这些指数超越人自身的限制时,重要的转变就发生了。你能从 2 的 16 次方个字符(一个人可以在几个小时打这么多字)过渡到 2 的 3 次方个字符(远超出了一个人打字的范围)。你可以捕捉比人眼能感知的分辨率更高的数字图像。或者你可以将整个音乐专辑存在小巧的磁盘上,放在口袋里。或者你可以将数字化视频录制存储在硬盘上。再通过实时流式传输的能力,可以在一个地方集中存储一次,不需要在数千个本地硬盘上重复记录。
+
+但有的东西仍然是根本的限制条件,那就是空间的三维和光速。我们又回到了爱因斯坦的宇宙。内存的分级结构将始终存在——它是物理定律的基础。稳定的存储和 IO,内存,计算和通信也都将一直存在。这些模块的相对容量,延迟和带宽将会改变,但是系统始终要考虑这些元素如何组合在一起,以及它们之间的平衡和折衷。Jim Gary 是这方面的大师。
+
+空间和光速的根本限制造成的另一个后果是,性能分析主要是关于三件事:局部化 (locality),局部化,局部化。无论是将数据打包在磁盘上,管理处理器缓存的层次结构,还是将数据合并到通信数据包中,数据如何打包在一起,如何在一段时间内从局部获取数据,数据如何在组件之间传输数据是性能的基础。把重点放在减少管理数据的代码上,增加空间和时间上的局部性,是消除噪声的好办法。
+
+Jon Devaan 曾经说过:“设计数据,而不是设计代码”。这也通常意味着当查看系统结构时,我不太关心代码如何交互——我想看看数据如何交互和流动。如果有人试图通过描述代码结构来解释一个系统,而不理解数据流的速率和数量,他们就不了解这个系统。
+
+内存的层级结构也意味着我缓存将会一直存在——即使某些系统层正在试图隐藏它。缓存是根本的,但也是危险的。缓存试图利用代码的运行时行为,来改变系统中不同组件之间的交互模式。它们需要对运行时行为进行建模,即使模型填充缓存并使缓存失效,并测试缓存命中。如果模型由于行为改变而变差或变得不佳,缓存将无法按预期运行。一个简单的指导方针是,缓存必须被检测——由于应用程序行为的改变,事物不断变化的性质和组件之间性能的平衡,缓存的行为将随着时间的推移而退化。每一个老程序员都有缓存变糟的经历。
+
+我很幸运,我的早期职业生涯是在互联网的发源地之一 BBN 度过的。 我们很自然地将将异步组件之间的通信视为系统连接的自然方式。流量控制和队列理论是通信系统的基础,更是任何异步系统运行的方式。流量控制本质上是资源管理(管理通道的容量),但资源管理是更根本的关注点。流量控制本质上也应该由端到端的应用负责,所以用端到端的方式思考异步系统是自然的。[缓冲区膨胀][8]的故事在这种情况下值得研究,因为它展示了当对端到端行为的动态性以及技术“改进”(路由器中更大的缓冲区)缺乏理解时,在整个网络基础设施中导致的长久的问题。
+
+ 我发现“光速”的概念在分析任何系统时都非常有用。光速分析并不是从当前的性能开始分析,而是问“这个设计理论上能达到的最佳性能是多少?”真正传递的信息是什么,以什么样的速度变化?组件之间的底层延迟和带宽是多少?光速分析迫使设计师深入思考他们的方法能否达到性能目标,或者否需要重新考虑设计的基本方法。它也迫使人们更深入地了解性能在哪里损耗,以及损耗是由固有的,还是由于一些不当行为产生的。从构建的角度来看,它迫使系统设计人员了解其构建的模块的真实性能特征,而不是关注其他功能特性。
+
+我的职业生涯大多花费在构建图形应用程序上。用户坐在系统的一端,定义关键的常量和约束。人类的视觉和神经系统没有经历过指数性的变化。它们固有地受到限制,这意味着系统设计者可以利用(必须利用)这些限制,例如,通过虚拟化(限制底层数据模型需要映射到视图数据结构中的数量),或者通过将屏幕更新的速率限制到人类视觉系统的感知限制。
+
+复杂性的本质
+
+我的整个职业生涯都在与复杂性做斗争。为什么系统和应用变得复杂呢?为什么在一个应用领域内进行开发并没有随着时间变得简单,而基础设施却没有变得更复杂,反而变得更强大了?事实上,管理复杂性的一个关键方法就是“走开”然后重新开始。通常新的工具或语言迫使我们从头开始,这意味着开发人员将工具的优点与从新开始的优点结合起来。从新开始是重要的。这并不是说新工具,新平台,或新语言可能不好,但我保证它们不能解决复杂性增长的问题。控制复杂性的最简单的方法就是用更少的程序员,建立一个更小的系统。
+
+当然,很多情况下“走开”并不是一个选择——Office 建立在有巨大的价值的复杂的资源上。通过 OneNote, Office 从 Word 的复杂性上“走开”,从而在另一个维度上进行创新。Sway 是另一个例子, Office 决定从限制中跳出来,利用关键的环境变化,抓住机会从底层上采取全新的设计方案。我们有 Word,Excel,PowerPoint 这些应用,它们的数据结构非常有价值,我们并不能完全放弃这些数据结构,它们成为了开发中持续的显著的限制条件。
+
+我受到 Fred Brook 讨论软件开发中的意外和本质的文章[《没有银子弹》][9]的影响,他希望用两个趋势来尽可能地推动程序员的生产力:一是在选择自己开发还是购买时,更多地关注购买——这预示了开源社区和云架构的改变;二是从单纯的构建方法转型到更“有机”或者“生态”的增量开发方法。现代的读者可以认为是向敏捷开发和持续开发的转型。但那篇文章可是写于 1986 年!
+
+我很欣赏 Stuart Kauffman 的在复杂性的基本性上的研究工作。Kauffman 从一个简单的布尔网络模型(“[NK 模型][10]”)开始建立起来,然后探索这个基本的数学结构在相互作用的分子,基因网络,生态系统,经济系统,计算机系统(以有限的方式)等系统中的应用,来理解紧急有序行为的数学基础及其与混沌行为的关系。在一个高度连接的系统中,你固有地有一个相互冲突的约束系统,使得它(在数学上)很难向前发展(这被看作是在崎岖景观上的优化问题)。控制这种复杂性的基本方法是将系统分成独立元素并限制元素之间的相互连接(实质上减少 NK 模型中的“N”和“K”)。当然对那些使用复杂隐藏,信息隐藏和数据抽象,并且使用松散异步耦合来限制组件之间的交互的技术的系统设计者来说,这是很自然的。
+
+
+我们一直面临的一个挑战是,我们想到的许多拓展系统的方法,都跨越了所有的方面。实时共同编辑是 Office 应用程序最近的一个非常具体的(也是最复杂的)例子。
+
+我们的数据模型的复杂性往往等同于“能力”。设计用户体验的固有挑战是我们需要将有限的一组手势,映射到底层数据模型状态空间的转换。增加状态空间的维度不可避免地在用户手势中产生模糊性。这是“[纯数学][11]”,这意味着确保系统保持“易于使用”的最基本的方式常常是约束底层的数据模型。
+
+管理
+
+我从高中开始着手一些领导角色(学生会主席!),对承担更多的责任感到理所当然。同时,我一直为自己在每个管理阶段都坚持担任全职程序员而感到自豪。但 Office 的开发副总裁最终还是让我从事管理,离开了日常的编程工作。当我在去年离开那份工作时,我很享受重返编程——这是一个出奇地充满创造力的充实的活动(当修完“最后”的 bug 时,也许也会有一点令人沮丧)。
+
+尽管在我加入微软前已经做了十多年的“主管”,但是到了 1996 年我加入微软才真正了解到管理。微软强调“工程领导是技术领导”。这与我的观点一致,帮助我接受并承担更大的管理责任。
+
+主管的工作是设计项目并透明地推进项目。透明并不简单,它不是自动的,也不仅仅是有好的意愿就行。透明需要被设计进系统中去。透明工作的最好方式是能够记录每个工程师每天活动的产出,以此来追踪项目进度(完成任务,发现 bug 并修复,完成一个情景)。留意主观上的红/绿/黄,点赞或踩的仪表板。
+
+我过去说我的工作是设计反馈回路。独立工程师,经理,行政人员,每一个项目的参与者都能通过分析记录的项目数据,推进项目,产出结果,了解自己在整个项目中扮演的角色。最终,透明化最终成为增强能力的一个很好的工具——管理者可以将更多的局部控制权给予那些最接近问题的人,因为他们对所取得的进展有信心。这样的话,合作自然就会出现。
+
+关键需要确定目标框架(包括关键资源的约束,如发布的时间表)。如果决策需要在管理链上下不断流动,那说明管理层对目标和约束的框架不好。
+
+当我在 Beyond Software 工作时,我真正理解了一个项目拥有一个唯一领导的重要性。原来的项目经理离职了(后来从 FrontPage 雇佣了我)。我们四个主管在是否接任这个岗位上都有所犹豫,这不仅仅由于我们都不知道要在这家公司坚持多久。我们都技术高超,并且相处融洽,所以我们决定以同级的身份一起来领导这个项目。然而这槽糕透了。有一个显而易见的问题,我们没有相应的战略用来在原有的组织之间分配资源——这应当是管理者的首要职责之一!当你知道你是唯一的负责人时,你会有很深的责任感,但在这个例子中,这种责任感缺失了。我们没有真正的领导来负责统一目标和界定约束。
+
+我有清晰地记得,我第一次充分认识到*倾听*对一个领导者的重要性。那时我刚刚担任了 Word,OneNote,Publisher 和 Text Services 团队的开发经理。关于我们如何组织文本服务团队,我们有一个很大的争议,我走到了每个关键参与者身边,听他们想说的话,然后整合起来,写下了我所听到的一切。当我向其中一位主要参与者展示我写下的东西时,他的反应是“哇,你真的听了我想说的话”!作为一名管理人员,我所经历的所有最大的问题(例如,跨平台和转型持续工程)涉及到仔细倾听所有的参与者。倾听是一个积极的过程,它包括:尝试以别人的角度去理解,然后写出我学到的东西,并对其进行测试,以验证我的理解。当一个关键的艰难决定需要发生的时候,在最终决定前,每个人都知道他们的想法都已经被听到并理解(不论他们是否同意最后的决定)。
+
+在 FrontPage 担任开发经理的工作,让我理解了在只有部分信息的情况下做决定的“操作困境”。你等待的时间越长,你就会有更多的信息做出决定。但是等待的时间越长,实际执行的灵活性就越低。在某个时候,你仅需要做出决定。
+
+设计一个组织涉及类似的两难情形。您希望增加资源领域,以便可以在更大的一组资源上应用一致的优先级划分框架。但资源领域越大,越难获得作出决定所需要的所有信息。组织设计就是要平衡这两个因素。软件复杂化,因为软件的特点可以在任意维度切入设计。Office 已经使用[共享团队][12]来解决这两个问题(优先次序和资源),让跨领域的团队能与需要产品的团队分享工作(增加资源)。
+
+随着管理阶梯的提升,你会懂一个小秘密:你和你的新同事不会因为你现在承担更多的责任,就突然变得更聪明。这强调了整个组织比顶层领导者更聪明。赋予每个级别在一致框架下拥有自己的决定是实现这一目标的关键方法。听取并使自己对组织负责,阐明和解释决策背后的原因是另一个关键策略。令人惊讶的是,害怕做出一个愚蠢的决定可能是一个有用的激励因素,以确保你清楚地阐明你的推理,并确保你听取所有的信息。
+
+结语
+
+我离开大学寻找第一份工作时,面试官在最后一轮面试时问我对做“系统”和做“应用”哪一个更感兴趣。我当时并没有真正理解这个问题。在软件技术栈的每一个层面都会有趣的难题,我很高兴深入研究这些问题。保持学习。
+
+--------------------------------------------------------------------------------
+
+via: https://hackernoon.com/education-of-a-programmer-aaecf2d35312
+
+作者:[ Terry Crowley][a]
+译者:[explosic4](https://github.com/explosic4)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://hackernoon.com/@terrycrowley
+[1]:https://medium.com/@terrycrowley/leaky-by-design-7b423142ece0#.x67udeg0a
+[2]:https://medium.com/@terrycrowley/how-to-think-about-cancellation-3516fc342ae#.3pfjc5b54
+[3]:http://queue.acm.org/detail.cfm?id=2462076
+[4]:http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf
+[5]:https://medium.com/@terrycrowley/model-view-controller-and-loose-coupling-6370f76e9cde#.o4gnupqzq
+[6]:https://en.wikipedia.org/wiki/Termcap
+[7]:http://www.ll.mit.edu/HPEC/agendas/proc04/invited/patterson_keynote.pdf
+[8]:https://en.wikipedia.org/wiki/Bufferbloat
+[9]:http://worrydream.com/refs/Brooks-NoSilverBullet.pdf
+[10]:https://en.wikipedia.org/wiki/NK_model
+[11]:https://medium.com/@terrycrowley/the-math-of-easy-to-use-14645f819201#.untmk9eq7
+[12]:https://medium.com/@terrycrowley/breaking-conways-law-a0fdf8500413#.gqaqf1c5k
diff --git a/translated/tech/20160325 Network automation with Ansible.md b/translated/tech/20160325 Network automation with Ansible.md
new file mode 100644
index 0000000000..a4d184f57f
--- /dev/null
+++ b/translated/tech/20160325 Network automation with Ansible.md
@@ -0,0 +1,992 @@
+用 Ansible 实现网络自动化
+================
+
+### 网络自动化
+
+由于 IT 行业的技术变化,从服务器虚拟化到具有自服务能力的公有和私有云、容器化应用、以及提供的平台即服务(Paas),一直以来落后的一个领域是网络。
+
+在过去的 5 年时间里,网络行业似乎有很多新的趋势出现,它们中的很多被归入到软件定义网络(SDN)。
+
+###### 注意
+
+SDN 是新出现的一种构建、管理、操作、和部署网络的方法。SDN 最初的定义是需要将控制层和数据层(包转发)物理分离,并且,解耦合的控制层必须管理好各自的设备。
+
+如今,许多技术已经被 _包括在 SDN_ 下面,包括基于控制器的网络(controller-based networks)、网络设备上的 APIs、网络自动化、白盒交换机、策略网络化、网络功能虚拟化(NFV)、等等。
+
+由于这篇报告的目的,我们参考 SDN 的解决方案作为我们的解决方案,其中包括一个网络控制器作为解决方案的一部分,并且提升了网络的可管理性,但并不需要从数据层解耦控制层。
+
+这些趋势的其中一个是,网络设备上出现的应用程序编程接口(APIs)作为管理和操作这些设备的一种方法,并真正地提供了机器对机器的通讯。当需要自动化和构建网络应用程序、提供更多数据建模的结构时,APIs 简化了开发过程。例如,当启用 API 的设备在 JSON/XML 中返回数据时,它是结构化的,并且比返回原生文本信息、需要手工去解析的仅命令行的设备更易于使用。
+
+在 APIs 之前,用于配置和管理网络设备的两个主要机制是命令行接口(CLI)和简单网络管理协议(SNMP)。让我们来了解一下它们,CLI 是一个到设备的人机界面,而 SNMP 并不是为设备提供实时编程的接口。
+
+幸运的是,因为很多供应商争相为设备增加 APIs,有时候 _正是因为_ 它,才被保留到需求建议书(RFP)中,它有一个非常好的副作用 —— 支持网络自动化。一旦一个真实的 API 被披露,访问设备内数据的过程,以及管理配置,会极大的被简单化,因此,我们将在本报告中对此进行评估。虽然使用许多传统方法也可以实现自动化,比如,CLI/SNMP。
+
+###### 注意
+
+随着未来几个月或几年的网络设备更新,供应商的 APIs 无疑应该被测试,并且要做为采购网络设备(虚拟和物理)的关键决策标准。用户应该知道数据是如何通过设备建模的,被 API 使用的传输类型是什么,如果供应商提供一些库或集成到自动化工具中,并且,如果被用于一个开放的标准/协议。
+
+总而言之,网络自动化,像大多数的自动化类型一样,是为了更快地工作。工作的更快是好事,降低部署和配置改变的时间并不总是许多 IT 组织需要去解决的问题。
+
+包括速度,我们现在看看这些各种类型的 IT 组织逐渐采用网络自动化的几种原因。你应该注意到,同样的原则也适用于其它类型的自动化。
+
+
+### 简化架构
+
+今天,每个网络都是一个独特的“雪花”型,并且,网络工程师们为能够解决传输和网络应用问题而感到自豪,这些问题最终使网络不仅难以维护和管理,而且也很难去实现自动化。
+
+它需要从一开始就包含到新的架构和设计中去部署,而不是去考虑网络自动化和管理作为一个二级或三级项目。哪个特性可以跨不同的供应商工作?哪个扩展可以跨不同的平台工作?当使用特别的网络设备平台时,API 类型或者自动化工程是什么?当这些问题在设计进程之前得到答案,最终的架构将变成简单的、可重复的、并且易于维护 _和_ 自动化的,在整个网络中将很少启用供应商专用的扩展。
+
+### 确定的结果
+
+在一个企业组织中,改变审查会议(change review meeting)去评估即将到来的网络上的变化、它们对外部系统的影响、以及回滚计划。在这个世界上,人们为这些即 _将到来的变化_ 去接触 CLI,输入错误的命令造成的影响是灾难性的。想像一下,一个有三位、四位、五位、或者 50 位工程师的团队。每位工程师应对 _即将到来的变化_ 都有他们自己的独特的方法。并且,在管理这些变化的期间,使用一个 CLI 或者 GUI 的能力并不会消除和减少出现错误的机率。
+
+使用经过验证和测试过的网络自动化可以帮助实现更多的可预测行为,并且使执行团队有更好的机会实现确实性结果,在保证任务没有人为错误的情况下首次正确完成的道路上更进一步。
+
+
+### 业务灵活性
+
+不用说,网络自动化不仅为部署变化提供速度和灵活性,而且使得根据业务需要去从网络设备中检索数据的速度变得更快。自从服务器虚拟化实现以后,服务器和虚拟化使得管理员有能力在瞬间去部署一个新的应用程序。而且,更多的快速部署应用程序的问题出现在,配置一个 VLAN(虚拟局域网)、路由器、FW ACL(防火墙的访问控制列表)、或者负载均衡策略需要多长时间?
+
+在一个组织内通过去熟悉大多数的通用工作流和 _为什么_ 网络改变是真实的需求?新的部署过程自动化工具,如 Ansible 将使这些变得非常简单。
+
+这一章将介绍一些关于为什么应该去考虑网络自动化的高级知识点。在下一节,我们将带你去了解 Ansible 是什么,并且继续深入了解各种不同规模的 IT 组织的网络自动化的不同类型。
+
+
+### 什么是 Ansible?
+
+Ansible 是存在于开源世界里的一种最新的 IT 自动化和配置管理平台。它经常被拿来与其它工具如 Puppet、Chef、和 SaltStack 去比较。Ansible 作为一个由 Michael DeHaan 创建的开源项目出现于 2012 年,Michael DeHaan 也创建了 Cobbler,并与他人共同创建了 Func,它们在开源社区都非常流行。在 Ansible 开源项目创建之后不足 18 个月时间,Ansible 公司成立,并获得了 600 万美元的 A 轮融资。Ansible 公司成为并一直保持着 Ansible 开源项目的第一大贡献者和支持者。在 2015 年 10 月,Red Hat 收购了 Ansible 公司。
+
+但是,Ansible 到底是什么?
+
+_Ansible 是一个无需代理和可扩展的超级简单的自动化平台。_
+
+让我们更深入地了解它的细节,并且看一看 Ansible 的属性,它帮助 Ansible 在行业内获得大量的吸引力(traction)。
+
+
+### 简单
+
+Ansible 的其中一个吸引人的属性是,去使用它你 _不_ 需要特定的编程技能。所有的指令,或者任务都是自动化的,在一个标准的、任何人都可以理解的人类可读的数据格式的一个文档中。在 30 分钟之内完成安装和自动化任务的情况并不罕见!
+
+例如,下列的一个 Ansible playbook 任务是用于去确保在一个 Cisco Nexus 交换机中存在一个 VLAN:
+
+```
+- nxos_vlan: vlan_id=100 name=web_vlan
+```
+
+你无需熟悉或写任何代码就可以明确地看出它将要做什么!
+
+###### 注意
+
+这个报告的下半部分涉到 Ansible 术语(playbooks、plays、tasks、modules、等等)的细节。但是,在我们为网络自动化使用 Ansible 时,我们也同时有一些详细的示例去解释这些关键概念。
+
+### 无代理
+
+如果你看到市面上的其它工具,比如 Puppet 和 Chef,你学习它们会发现,一般情况下,它们要求每个实现自动化的设备必须安装特定的软件。这种情况在 Ansible 上 _并不_需要,这就是为什么 Ansible 是实现网络自动化的最佳选择的主要原因。
+
+它很好理解,那些 IT 自动化工具,包括 Puppet、Chef、CFEngine、SaltStack、和 Ansible,它们最初构建是为管理和自动化配置 Linux 主机,以跟得上部署的应用程序增长的步伐。因为 Linux 系统是被配置成自动化的,要安装代理并不是一个技术难题。如果有的话,它会担误安装过程,因为,现在有 _N_ 多个(你希望去实现自动化的)主机需要在它们上面部署软件。
+
+再加上,当使用代理时,它们需要的 DNS 和 NTP 配置更加复杂。这些都是大多数环境中已经配置好的服务,但是,当你希望快速地获取一些东西或者只是简单地想去测试一下它能做什么的时候,它将极大地担误整个设置和安装的过程。
+
+由于本报告只是为介绍利用 Ansible 实现网络自动化,它最有价值的是,Ansible 作为一个无代理平台,对于系统管理员来说,它更具有吸引力,这是为什么呢?
+
+正如前面所说的那样,对网络管理员来说,它是非常有吸引力的,Linux 操作系统是开源的,并且,任何东西都可以安装在它上面。对于网络来说,虽然它正在逐渐改变,但事实并非如此。如果我们更广泛地部署网络操作系统,如 Cisco IOS,它就是这样的一个例子,并且问一个问题, _“第三方软件能否部署在基于 IOS (译者注:此处的 IOS,指的是思科的网络操作系统 IOS)的平台上吗?”_它并不会给你惊喜,它的回答是 _NO_。
+
+在过去的二十多年里,几乎所有的网络操作系统都是闭源的,并且,垂直整合到底层的网络硬件中。在一个网络设备中(路由器、交换机、负载均衡、防火墙、等等),不需要供应商的支持,有一个像 Ansible 这样的自动化平台,从头开始去构建一个无代理、可扩展的自动化平台,就像是它专门为网络行业订制的一样。我们最终将开始减少并消除与网络的人工交互。
+
+### 可扩展
+
+Ansible 的可扩展性也非常的好。作为一个开源的,并且从代码开始将在网络行业中发挥重要的作用,有一个可扩展的平台是必需的。这意味着如果供应商或社区不提供一个特定的特性或功能,开源社区,终端用户,消费者,顾问者,或者,任何可能去 _扩展_ Ansible 的人,去启用一个给定的功能集。过去,网络供应商或者工具供应商通过一个 hook 去提供插件和集成。想像一下,使用一个像 Ansible 这样的自动化平台,并且,你选择的网络供应商发布了你 _真正_ 需要的自动化的一个新特性。从理论上说,网络供应商或者 Ansible 可以发行一个新的插件去实现自动化这个独特的特性,这是一件非常好的事情,从你的内部工程师到你的增值分销商(VARs)或者你的顾问中的任何人,都可以去提供这种集成。
+
+正如前面所说的那样,Ansible 实际上是极具扩展性的,Ansible 最初就是为自动化应用程序和系统构建的。这是因为,Ansible 的可扩展性是被网络供应商编写集成的,包括但不限于 Cisco、Arista、Juniper、F5、HP、A10、Cumulus、和 Palo Alto Networks。
+
+
+### 对于网络自动化,为什么要使用 Ansible?
+
+我们已经简单了解除了 Ansible 是什么,以及一些网络自动化的好处,但是,对于网络自动化,我们为什么要使用 Ansible?
+
+在一个完全透明的环境下,已经说的很多的理由是 Ansible 可以做什么,比如,作为一个很大的自动化应用程序部署平台,但是,我们现在要深入一些,更多地关注于网络,并且继续总结一些更需要注意的其它关键点。
+
+
+### 无代理
+
+在实现网络自动化的时候,无代理架构的重要性并不是重点强调的,特别是当它适用于现有的自动化设备时。如果,我们看一下当前网络中已经安装的各种设备时,从 DMZ 和园区,到分支和数据中心,最大份额的设备 _并不_ 具有最新 API 的设备。从自动化的角度来看,API 可以使做一些事情变得很简单,像 Ansible 这样的无代理平台有可能去自动化和管理那些 _传统_ 的设备。例如,_基于CLI 的设备_,它的工具可以被用于任何网络环境中。
+
+###### 注意
+
+如果仅 CLI 的设备已经集成进 Ansible,它的机制就像是,怎么在设备上通过协议如 telnet、SSH、和 SNMP,去进行只读访问和读写操作。
+
+作为一个独立的网络设备,像路由器、交换机、和防火墙持续去增加 APIs 的支持,SDN 解决方案也正在出现。SDN 解决方案的其中一个主题是,它们都提供一个单点集成和策略管理,通常是以一个 SDN 控制器的形式出现。这是真实的解决方案,比如,Cisco ACI、VMware NSX、Big Switch Big Cloud Fabric、和 Juniper Contrail,同时,其它的 SDN 提供者,比如 Nuage、Plexxi、Plumgrid、Midokura、和 Viptela。甚至包含开源的控制器,比如 OpenDaylight。
+
+所有的这些解决方案都简化了网络管理,就像他们允许一个管理员去开始从“box-by-box”管理(译者注:指的是单个设备挨个去操作的意思)迁移到网络范围的管理。这是在正确方向上迈出的很大的一步,这些解决方案并不能消除在改变窗口中人类犯错的机率。例如,比起配置 _N_ 个交换机,你可能需要去配置一个单个的 GUI,它需要很长的时间才能实现所需要的配置改变 — 它甚至可能更复杂,毕竟,相对于一个 CLI,他们更喜欢 GUI!另外,你可能有不同类型的 SDN 解决方案部署在每个应用程序、网络、区域、或者数据中心。
+
+在需要自动化的网络中,对于配置管理、监视、和数据收集,当行业开始向基于控制器的网络架构中迁移时,这些需求并不会消失。
+
+大量的软件定义网络中都部署有控制器,所有最新的控制器都提供(expose)一个最新的 REST API。并且,因为 Ansible 是一个无代理架构,它实现自动化是非常简单的,而不仅仅是没有 API 的传统设备,但也有通过 REST APIs 的软件定义网络解决方案,在所有的终端上不需要有额外的软件(译者注:指的是代理)。最终的结果是,使用 Ansible,无论有或没有 API,可以使任何类型的设备都能够自动化。
+
+
+### 免费和开源软件(FOSS)
+
+Ansible 是一个开源软件,它的全部代码在 GitHub 上都是公开的、可访问的,使用 Ansible 是完全免费的。它可以在几分钟内完成安装并为网络工程师提供有用的价值。Ansible,这个开源项目,或者 Ansible 公司,在它们交付软件之前,你不会遇到任何一个销售代表。那是显而易见的事实,因为它是一个真正的开源项目,但是,开源项目的使用,在网络行业中社区驱动的软件是非常少的,但是,也在逐渐增加,我们想明确指出这一点。
+
+同样需要指出的一点是,Ansible, Inc. 也是一个公司,它也需要去赚钱,对吗?虽然 Ansible 是开源的,它也有一个叫 Ansible Tower 的企业产品,它增加了一些特性,比如,基于规则的访问控制(RBAC)、报告、 web UI、REST APIs、多租户、等等,(相比 Ansible)它更适合于企业去部署。并且,更重要的是,Ansible Tower 甚至可以最多在 10 台设备上 _免费_ 使用,至少,你可以去体验一下,它是否会为你的组织带来好处,而无需花费一分钱,并且,也不需要与无数的销售代表去打交道。
+
+
+### 可扩展性
+
+我们在前面说过,Ansible 主要是为部署 Linux 应用程序而构建的自动化平台,虽然从早期开始已经扩展到 Windows。需要指出的是,Ansible 开源项目并没有自动化网络基础设施的目标。事实上是,Ansible 社区更多地理解了在底层的 Ansible 架构上怎么更具灵活性和可扩展性,对于他们的自动化需要,它变成了 _扩展_ 的 Ansible,它包含了网络。在过去的两年中,部署有许多的 Ansible 集成,许多行业独立人士(industry independents),比如,Matt Oswalt、Jason Edelman、Kirk Byers、Elisa Jasinska、David Barroso、Michael Ben-Ami、Patrick Ogenstad、和 Gabriele Gerbino,以及网络系统供应商的领导者,比如,Arista、Juniper、Cumulus、Cisco、F5、和 Palo Alto Networks。
+
+
+### 集成到已存在的 DevOps 工作流中
+
+Ansible 在 IT 组织中被用于应用程序部署。它被用于需要管理部署、监视、和管理各种类型的应用程序的操作团队中。通过将 Ansible 集成到网络基础设施中,当新应用程序到来或迁移后,它扩展了可能的范围。而不是去等待一个新的顶架交换机(TOR,译者注:一种数据中心设备接入的方式)的到来、去添加一个 VLAN、或者去检查接口的速度/双工,所有的这些以网络为中心的任务都可以被自动化,并且可以集成到 IT 组织内已经存在的工作流中。
+
+
+### 幂等性
+
+术语 _幂等性_ (idempotency) 经常用于软件开发的领域中,尤其是当使用 REST APIs 工作的时候,以及在 _DevOps_ 自动化和配置管理框架的领域中,包括 Ansible。Ansible 的其中一个信念是,所有的 Ansible 模块(集成的)应该是幂等的。那么,对于一个模块来说,幂等是什么意思呢?毕竟,对大多数网络工程师来说,这是一个新的术语。
+
+答案很简单。幂等性的本质是允许定义的任务,运行一次或者上千次都不会在目标系统上产生不利影响,仅仅是一种一次性的改变。换句话说,如果一个请求的改变去使系统进入到它期望的状态,这种改变完成之后,并且,如果这个设备已经达到这种状态,它不会再发生改变。这不像大多数传统的定制脚本,和拷贝(copy),以及过去的那些终端窗口中的 CLI 命令。当相同的命令或者脚本在同一个系统上重复运行,会出现错误(有时候)。以前,粘贴一组命令到一个路由器中,然后得到一些使你的其余的配置失效的错误类型?好玩吧?
+
+另外的例子是,如果你有一个配置 10 个 VLANs 的文件或者脚本,那么 _每次_ 运行这个脚本,相同的命令会被输入 10 次。如果使用一个幂等的 Ansible 模块,首先会从网络设备中采集已存在的配置,并且,每个新的 VLAN 被配置前会再次检查当前配置。只有当这个新的 VLAN 需要被添加(或者,举例来说,需要改变 VLAN 的名字)时,才会产生一个改变,命令才会真正地推送到设备。
+
+当一个技术越来越复杂,幂等性的价值就越高,在你修改的时候,你并不能注意到 _已存在_ 的网络设备的状态,仅仅是从一个网络配置和策略角度去尝试达到 _期望的_ 状态。
+
+
+### 网络范围的和临时(Ad Hoc)的改变
+
+用配置管理工具解决的其中一个问题是,配置“飘移”(当设备的期望配置逐渐漂移,或者改变,随着时间的推移手动改变和/或在一个环境中使用了多个不同的工具),事实上,这也是像 Puppet 和 Chef 得到使用的地方。代理商 _phone home_ 到前端服务器,验证它的配置,并且,如果需要一个改变,则改变它。这个方法是非常简单的。如果有故障了,需要去排除怎么办?你通常需要通过管理系统,直接连到设备,找到并修复它,然后,马上离开,对不对?果然,在下次当代理的电话打到家里,修复问题的改变被覆盖了(基于主/前端服务器是怎么配置的)。在高度自动化的环境中,一次性的改变应该被限制,但是,仍然允许它们(译者注:指的是一次性改变)使用的工具是非常有价值的。正如你想到的,其中一个这样的工具是 Ansible。
+
+因为 Ansible 是无代理的,这里并没有一个默认的推送或者拉取去防止配置漂移。自动化任务被定义在 Ansible playbook 中,当使用 Ansible 时,它推送到用户去运行 playbook。如果 playbook 在一个给定的时间间隔内运行,并且你没有用 Ansible Tower,你肯定知道任务的执行频率;如果你正好在终端提示符下使用一个原生的 Ansible 命令行,playbook 运行一次,并且仅运行一次。
+
+缺省运行的 playbook 对网络工程师是很具有吸引力的,让人欣慰的是,在设备上手动进行的改变不会自动被覆盖。另外,当需要的时候,一个 playbook 运行的设备范围很容易被改变,即使是对一个单个设备进行自动化的单次改变,Ansible 仍然可以用,设备的 _范围_ 由一个被称为 Ansible 清单(inventory)的文件决定;这个清单可以是一台设备或者是一千台设备。
+
+下面展示的一个清单文件示例,它定义了两组共六台设备:
+
+```
+[core-switches]
+dc-core-1
+dc-core-2
+
+[leaf-switches]
+leaf1
+leaf2
+leaf3
+leaf4
+```
+
+为了自动化所有的主机,你的 play 定义的 playbook 的一个片段看起来应该是这样的:
+
+```
+hosts: all
+```
+
+并且,一个自动化的叶子节点交换机,它看起来应该像这样:
+
+```
+hosts: leaf1
+```
+
+这是一个核心交换机:
+
+```
+hosts: core-switches
+```
+
+###### 注意
+
+正如前面所说的那样,这个报告的后面部分将详细介绍 playbooks、plays、和清单(inventories)。
+
+因为能够很容易地对一台设备或者 _N_ 台设备进行自动化,所以在需要对这些设备进行一次性改变时,Ansible 成为了最佳的选择。在网络范围内的改变它也做的很好:可以是关闭给定类型的所有接口、配置接口描述、或者是在一个跨企业园区布线的网络中添加 VLANs。
+
+### 使用 Ansible 实现网络任务自动化
+
+这个报告从两个方面逐渐深入地讲解一些技术。第一个方面是围绕 Ansible 架构和它的细节,第二个方面是,从一个网络的角度,讲解使用 Ansible 可以完成什么类型的自动化。在这一章中我们将带你去详细了解第二方面的内容。
+
+自动化一般被认为是速度快,但是,考虑到一些任务并不要求速度,这就是为什么一些 IT 团队没有认识到自动化的价值所在。VLAN 配置是一个非常好的例子,因为,你可能会想,“创建一个 VLAN 到底有多快?一般情况下每天添加多少个 VLANs?我真的需要自动化吗?”
+
+在这一节中,我们专注于另外几种有意义的自动化任务,比如,设备准备、数据收集、报告、和遵从情况。但是,需要注意的是,正如我们前面所说的,自动化不仅为你和你的团队带来了更快的速度和敏捷性,也带来了更精确、更可预测的结果和更高的确定性。
+
+### 设备准备
+
+为网络自动化开始使用 Ansible 的最容易也是最快的方法是,为设备最初投入使用创建设备配置文件,并且将配置文件推送到网络设备中。
+
+如果我们去完成这个过程,它将分解为两步,第一步是创建一个配置文件,第二步是推送这个配置到设备中。
+
+首先,我们需要去从供应商配置文件的底层专用语法(CLI)中解耦 _输入_。这意味着我们需要一个保存各项配置参数取值(比如,VLANs、域信息、接口、路由、和其它的内容、等等)的文件,当然,还需要一个(或多个)配置模板文件。在这个示例中,这里有一个标准模板,它可以用于所有设备的初始部署。Ansible 将负责把这些输入值填充到配置模板中。几秒钟之内,Ansible 可以生成数百个可靠的、可预测的配置文件。
+
+让我们快速的看一个示例,它使用当前的配置,并且分解它到一个模板和单独的一个(作为一个输入源的)变量文件中。
+
+这是一个配置文件片断的示例:
+
+```
+hostname leaf1
+ip domain-name ntc.com
+!
+vlan 10
+ name web
+!
+vlan 20
+ name app
+!
+vlan 30
+ name db
+!
+vlan 40
+ name test
+!
+vlan 50
+ name misc
+```
+
+如果我们提取输入值,这个文件将被转换成一个模板。
+
+###### 注意
+
+Ansible 使用基于 Python 的 Jinja2 模板化语言,因此,这个被命名为 _leaf.j2_ 的文件是一个 Jinja2 模板。
+
+注意,下列的示例中,_双大括号({{)_ 代表一个变量。
+
+模板看起来像这些,并且给它命名为 _leaf.j2_:
+
+```
+!
+hostname {{ inventory_hostname }}
+ip domain-name {{ domain_name }}
+!
+!
+{% for vlan in vlans %}
+vlan {{ vlan.id }}
+ name {{ vlan.name }}
+{% endfor %}
+!
+```
+
+因为双大括号代表变量,并且,我们看到这些值并不在模板中,所以它们需要将值保存在一个地方。值被保存在一个变量文件中。正如前面所说的,一个相应的变量文件看起来应该是这样的:
+
+```
+---
+hostname: leaf1
+domain_name: ntc.com
+vlans:
+ - { id: 10, name: web }
+ - { id: 20, name: app }
+ - { id: 30, name: db }
+ - { id: 40, name: test }
+ - { id: 50, name: misc }
+```
+
+这意味着,如果管理 VLANs 的团队希望在网络设备中添加一个 VLAN,很简单,他们只需要在变量文件中改变它,然后,使用 Ansible 中一个叫 `template` 的模块,去重新生成一个新的配置文件。这整个过程也是幂等的;仅仅是在模板或者值发生改变时,它才会去生成一个新的配置文件。
+
+一旦配置文件生成,它需要去 _推送_ 到网络设备。推送配置文件到网络设备使用一个叫做 `napalm_install_config`的开源的 Ansible 模块。
+
+接下来的示例是一个简单的 playbook 去 _构建并推送_ 一个配置文件到网络设备。同样地,playbook 使用一个名叫 `template` 的模块去构建配置文件,然后使用一个名叫 `napalm_install_config` 的模块去推送它们,并且激活它作为设备上运行的新的配置文件。
+
+虽然没有详细解释示例中的每一行,但是,你仍然可以看明白它们实际上做了什么。
+
+###### 注意
+
+下面的 playbook 介绍了新的概念,比如,内置变量 `inventory_hostname`。这些概念包含在 [Ansible 术语和入门][1] 中。
+
+```
+---
+
+ - name: BUILD AND PUSH NETWORK CONFIGURATION FILES
+ hosts: leaves
+ connection: local
+ gather_facts: no
+
+ tasks:
+ - name: BUILD CONFIGS
+ template:
+ src=templates/leaf.j2
+ dest=configs/{{inventory_hostname }}.conf
+
+ - name: PUSH CONFIGS
+ napalm_install_config:
+ hostname={{ inventory_hostname }}
+ username={{ un }}
+ password={{ pwd }}
+ dev_os={{ os }}
+ config_file=configs/{{ inventory_hostname }}.conf
+ commit_changes=1
+ replace_config=0
+```
+
+这个两步的过程是一个使用 Ansible 进行网络自动化入门的简单方法。通过模板简化了你的配置,构建配置文件,然后,推送它们到网络设备 — 因此,被称为 _BUILD 和 PUSH_ 方法。
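+
+假设上面的 playbook 被保存为 _build_push.yml_(文件名只是一个示意),运行它的方式和运行其它 playbook 没有任何区别:
+
+```
+$ ansible-playbook -i inventory build_push.yml
+```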
+
+###### 注意
+
+像这样的更详细的例子,请查看 [Ansible 网络集成][2]。
+
+### 数据收集和监视
+
+监视工具一般使用 SNMP — 这些工具拉取某些管理信息库(MIBs),然后把数据返回给监视工具。返回的数据可能多于、也可能少于你真正需要的数据。如果你正在拉取的是接口统计数据,你得到的可能是 _show interface_ 命令中显示的全部计数器。如果你仅需要 _interface resets_,并且希望看到发生重置的接口上的 CDP/LLDP 邻居信息,那该怎么做呢?当然,这也可以使用现有的技术来做到:可以运行多个 show 命令然后手动解析输出信息,或者,使用基于 SNMP 的工具,在 GUI 中切换不同的选项卡(Tab)找到真正你所需要的数据。Ansible 怎么能帮助我们去完成这些工作呢?
+
+由于 Ansible 是完全开放并且是可扩展的,它可以精确地去收集和监视所需要的计数器或者值。这可能需要一些预先的定制工作,但是,最终这些工作是非常有价值的。因为采集的数据是你所需要的,而不是供应商提供给你的。Ansible 也提供直观的方法去执行某些条件任务,这意味着基于正在返回的数据,你可以执行子任务,它可以收集更多的数据或者产生一个配置改变。
+
+网络设备有 _许多_ 统计和隐藏在里面的临时数据,而 Ansible 可以帮你提取它们。
+
+你甚至可以在 Ansible 中使用前面提到的 SNMP 的模块,模块的名字叫 `snmp_device_version`。这是在社区中存在的另一个开源模块:
+
+```
+ - name: GET SNMP DATA
+ snmp_device_version:
+ host=spine
+ community=public
+ version=2c
+```
+
+运行前面的任务返回非常多的关于设备的信息,并且添加一些级别的发现能力到 Ansible中。例如,那个任务返回下列的数据:
+
+```
+{"ansible_facts": {"ansible_device_os": "nxos", "ansible_device_vendor": "cisco", "ansible_device_version": "7.0(3)I2(1)"}, "changed": false}
+```
+
+你现在可以决定某些事情,而不需要事先知道是什么类型的设备。你所需要知道的仅仅是设备的只读通讯字符串。
+
+
+### 迁移
+
+从一个平台迁移到另外一个平台,可能是从同一个供应商或者是从不同的供应商,迁移从来都不是件容易的事。供应商可能提供一个脚本或者一个工具去帮助你迁移。Ansible 可以被用于去为所有类型的网络设备构建配置模板,然后,操作系统用这个方法去为所有的供应商生成一个配置文件,然后作为一个(通用数据模型的)输入设置。当然,如果有供应商专用的扩展,它也是会被用到的。这种灵活性不仅对迁移有帮助,而且也可以用于灾难恢复(DR),它在生产系统中不同的交换机型号之间和灾备数据中心中是经常使用的,即使是在不同的供应商的设备上。
+
+
+### 配置管理
+
+正如前面所说的,配置管理是最常用的自动化类型。Ansible 可以很容易地做到创建 _角色(roles)_ 去简化基于任务的自动化。从更高的层面来看,角色是指针对一个特定设备组的可重用的自动化任务的逻辑分组。关于角色的另一种说法是,认为角色就是相关的工作流(workflows)。首先,在开始自动化添加值之前,需要理解工作流和过程。不论是开始一个小的自动化任务还是扩展它,理解工作流和过程都是非常重要的。
+
+例如,一组自动化配置路由器和交换机的任务是非常常见的,并且它们也是一个很好的起点。但是,配置在哪台网络设备上?配置的 IP 地址是什么?或许需要一个 IP 地址管理方案?一旦用一个给定的功能分配了 IP 地址并且已经部署,DNS 也更新了吗?DHCP 的范围需要创建吗?
+
+你可以看到工作流是怎么从一个小的任务开始,然后逐渐扩展到跨不同的 IT 系统?因为工作流持续扩展,所以,角色也一样(持续扩展)。
+
+
+### 遵从性
+
+和其它形式的自动化工具一样,用任何形式的自动化工具产生配置改变都视为风险。手工去产生改变可能看上去风险更大,正如你看到的和亲身经历过的那样,Ansible 有能力去做自动数据收集、监视、和配置构建,这些都是“只读的”和“低风险”的动作。其中一个 _低风险_ 使用案例是,使用收集的数据进行配置遵从性检查和配置验证。部署的配置是否满足安全要求?是否配置了所需的网络?协议 XYZ 禁用了吗?因为每个模块、或者用 Ansible 返回数据的整合,它只是非常简单地 _声明_ 那些事是 _TRUE_ 还是 _FALSE_。然后接着基于 _它_ 是 _TRUE_ 或者是 _FALSE_, 接着由你决定应该发生什么 —— 或许它只是被记录下来,或者,也可能执行一个复杂操作。
+
+### 报告
+
+我们现在知道,Ansible 也可以用于去收集数据和执行遵从性检查。Ansible 可以根据你想要做的事情去从设备中返回和收集数据。或许返回的数据成为其它的任务的输入,或者你想去用它创建一个报告。从模板中生成报告,并将真实的数据插入到模板中,创建和使用报告模板的过程与创建配置模板的过程是相同的。
+
+从一个报告的角度看,这些模板或许是纯文本文件,就像是在 GitHub 上看到的 markdown 文件、放置在 Web 服务器上的 HTML 文件,等等。用户有权去创建一个她希望的报告类型,插入她所需要的真实数据到报告中。
+
+创建报告的用处很多,不仅是为行政管理,也为了运营工程师,因为它们通常有双方都需要的不同指标。
+
+
+### Ansible 怎么工作
+
+从一个网络自动化的角度理解了 Ansible 能做什么之后,我们现在看一下 Ansible 是怎么工作的。你将学习到从一个 Ansible 管理主机到一个被自动化的节点的全部通讯流。首先,我们回顾一下,Ansible 是怎么 _开箱即用的(out of the box)_,然后,我们看一下 Ansible 怎么去做到的,具体说就是,当网络设备自动化时,Ansible _模块_是怎么去工作的。
+
+### 开箱即用
+
+到目前为止,你已经明白了,Ansible 是一个自动化平台。实际上,它是一个安装在一台单个服务器上或者企业中任何一位管理员的笔记本中的轻量级的自动化平台。当然,(安装在哪里?)这是由你来决定的。在基于 Linux 的机器上,使用一些实用程序(比如 pip、apt、和 yum)安装 Ansible 是非常容易的。
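+
+例如,下面是几种常见的安装方式(仅为示意,具体的包名和可用的版本取决于你的发行版):
+
+```
+$ sudo pip install ansible        # 使用 pip 安装
+$ sudo apt-get install ansible    # Debian/Ubuntu
+$ ansible --version               # 验证安装
+```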
+
+###### 注意
+
+在本报告的其余部分,安装 Ansible 的机器被称为 _控制主机_。
+
+控制主机将执行在 Ansible 的 playbook (不用担心,稍后我们将讲到 playbook 和其它的 Ansible 术语)中定义的所有自动化任务。现在,我们只需要知道,一个 playbook 是简单的一组自动化任务和在给定数量的主机上执行的指令。
+
+当一个 playbook 创建之后,你还需要去定义它要自动化的主机。映射一个 playbook 和要自动化运行的主机,是通过一个被称为 Ansible 清单的文件。这是一个前面展示的示例,但是,这里是同一个清单文件的另外两个组:`cisco` 和 `arista`:
+
+```
+[cisco]
+nyc1.acme.com
+nyc2.acme.com
+
+[arista]
+sfo1.acme.com
+sfo2.acme.com
+```
+
+###### 注意
+
+你也可以在清单文件中使用 IP 地址,而不是主机名。对于这样的示例,主机名将是通过 DNS 可解析的。
+
+正如你所看到的,Ansible 清单文件是一个文本文件,它列出了主机和主机组。然后,你可以在 playbook 中引用一个具体的主机或者组,以此去决定对给定的 play 和 playbook 在哪台主机上进行自动化。下面展示了两个示例。
+
+展示的第一个示例它看上去像是,你想去自动化 `cisco` 组中所有的主机,而展示的第二个示例只对 _nyc1.acme.com_ 主机进行自动化:
+
+```
+---
+
+ - name: TEST PLAYBOOK
+ hosts: cisco
+
+ tasks:
+ - TASKS YOU WANT TO AUTOMATE
+```
+
+```
+---
+
+ - name: TEST PLAYBOOK
+ hosts: nyc1.acme.com
+
+ tasks:
+ - TASKS YOU WANT TO AUTOMATE
+```
+
+现在,我们已经理解了基本的清单文件,我们可以看一下(在控制主机上的)Ansible 是怎么与 _开箱即用_ 的设备通讯的,和在 Linux 终端上自动化的任务。这里需要明白一个重要的观点就是,需要去自动化的网络设备通常是不一样的。(译者注:指的是设备的类型、品牌、型号等等)
+
+Ansible 对基于 Linux 的系统去开箱即用自动化工作有两个要求。它们是 SSH 和 Python。
+
+首先,终端必须支持 SSH 传输,因为 Ansible 使用 SSH 去连接到每个目标节点。因为 Ansible 支持一个可拔插的连接架构,也有各种类型的插件去实现不同类型的 SSH。
+
+第二个要求是,Ansible 并不要求在目标节点上预先存在一个 _代理_,Ansible 并不要求一个软件代理,它仅需要一个内置的 Python 执行引擎。这个执行引擎用于去执行从 Ansible 管理主机发送到被自动化的目标节点的 Python 代码。
+
+如果我们详细解释这个开箱即用工作流,它将分解成如下的步骤:
+
+1. 当一个 Ansible play 被执行,控制主机使用 SSH 连接到基于 Linux 的目标节点。
+
+2. 对于每个任务,也就是说,Ansible 模块将在这个 play 中被执行,通过 SSH 发送 Python 代码并直接在远程系统中执行。
+
+3. 在远程系统上运行的每个 Ansible 模块将返回 JSON 数据到控制主机。这些数据包含有信息,比如,配置改变、任务成功/失败、以及其它模块特定的数据。
+
+4. JSON 数据返回给 Ansible,然后被用于去生成报告,或者被用作接下来模块的输入。
+
+5. 在 play 中为每个任务重复第 3 步。
+
+6. 在 playbook 中为每个 play 重复第 1 步。
+
+是不是意味着每个网络设备都可以被 Ansible 开箱即用?因为它们也都支持 SSH,确实,网络设备都支持 SSH,但是,它是第一个和第二要求的组合限制了网络设备可能的功能。
+
+刚开始时,大多数网络设备并不支持 Python,因此,使用默认的 Ansible 连接机制是无法进行的。换句话说,在过去的几年里,供应商在几个不同的设备平台上增加了 Python 支持。但是,这些平台中的大多数仍然缺乏必要的集成,以允许 Ansible 去直接通过 SSH 访问一个 Linux shell,并以适当的权限去拷贝所需的代码、创建临时目录和文件、以及在设备中执行代码。尽管 Ansible 中所有的这些部分都可以在基于 Linux 的网络设备上使用 SSH/Python 在本地运行,它仍然需要网络设备供应商去更进一步开放他们的系统。
+
+###### 注意
+
+值的注意的是,Arista 确实也提供了原生的集成,因为它可以放弃 SSH 用户,直接进入到一个 Linux shell 中访问 Python 引擎,它可以允许 Ansible 去使用默认连接机制。因为我们调用了 Arista,我们也需要着重强调与 Ansible 默认连接机制一起工作的 Cumulus。这是因为 Cumulus Linux 是原生 Linux,并且它并不需要为 Cumulus Linux 操作系统使用供应商 API。
+
+### Ansible 网络集成
+
+前面的节讲到过 Ansible 默认的工作方式。我们看一下,在开始一个 _play_ 之后,Ansible 是怎么去设置一个到设备的连接、通过执行拷贝 Python 代码到设备去运行任务、运行代码、和返回结果给 Ansible 控制主机。
+
+在这一节中,我们将看一看,当使用 Ansible 进行自动化网络设备时都做了什么。正如前面讲过的,Ansible 是一个可拔插的连接架构。对于 _大多数_ 的网络集成, `connection` 参数设置为 `local`。在 playbook 中大多数的连接类型都设置为 `local`,如下面的示例所展示的:
+
+```
+---
+
+ - name: TEST PLAYBOOK
+ hosts: cisco
+ connection: local
+
+ tasks:
+ - TASKS YOU WANT TO AUTOMATE
+```
+
+注意在 play 中是怎么定义的,这个示例增加 `connection` 参数去和前面节中的示例进行比较。
+
+这告诉 Ansible 不要通过 SSH 去连接到目标设备,而是连接到本地机器运行这个 playbook。基本上,这是把连接职责委托给 playbook 中 _任务_ 节中使用的真实的 Ansible 模块。每个模块类型的委托权利允许这个模块在必要时以各种形式去连接到设备。这可能是 Juniper 和 HP Comware7 的 NETCONF、Arista 的 eAPI、Cisco Nexus 的 NX-API、或者甚至是基于传统系统的 SNMP,它们没有可编程的 API。
+
+###### 注意
+
+网络集成在 Ansible 中是以 Ansible 模块的形式带来的。尽管我们持续使用术语来吊你的胃口,比如,playbooks、plays、任务、和讲到的关键概念 `模块`,这些术语中的每一个都会在 [Ansible 术语和入门][3] 和 [动手实践使用 Ansible 去进行网络自动化][4] 中详细解释。
+
+让我们看一看另外一个 playbook 的示例:
+
+```
+---
+
+ - name: TEST PLAYBOOK
+ hosts: cisco
+ connection: local
+
+ tasks:
+ - nxos_vlan: vlan_id=10 name=WEB_VLAN
+```
+
+你注意到了吗,这个 playbook 现在包含一个任务,并且这个任务使用了 `nxos_vlan` 模块。`nxos_vlan` 模块是一个 Python 文件,并且,在这个文件中它是使用 NX-API 连接到 Cisco 的 NX-OS 设备。可是,这个连接可能是使用其它设备 API 设置的,这就是为什么供应商和用户像我们这样能够去建立自己的集成的原因。集成(模块)通常是以每特性(per-feature)为基础完成的,虽然,你已经看到了像 `napalm_install_config` 这样的模块,它们也可以被用来 _推送_ 一个完整的配置文件。
+
+主要区别之一是使用的默认连接机制,Ansible 启动一个持久的 SSH 连接到设备,并且这个连接为一个给定的 play 持续。当在一个模块中发生连接设置和拆除时,与许多使用 `connection=local` 的网络模块一样,对发生在 play 级别上的 _每个_ 任务,Ansible 将登入/登出设备。
+
+而在传统的 Ansible 形式下,每个网络模块返回 JSON 数据。仅有的区别是相对于目标节点,数据的推取发生在本地的 Ansible 控制主机上。相对于每供应商(per vendor)和模块类型,数据返回到 playbook,但是作为一个示例,许多的 Cisco NX-OS 模块返回已存在的状态、建议状态、和最终状态,以及发送到设备的命令(如果有的话)。
+
+作为使用 Ansible 进行网络自动化的开始,最重要的是,为 Ansible 的连接设备/拆除过程,记着去设置连接参数为 `local`,并且将它留在模块中。这就是为什么模块支持不同类型的供应商平台,它将与设备使用不同的方式进行通讯。
+
+
+### Ansible 术语和入门
+
+这一章我们将介绍许多 Ansible 的术语和报告中前面部分出现过的关键概念。比如, _清单文件_、_playbook_、_play_、_任务_、和 _模块_。我们也会去回顾一些其它的概念,这些术语和概念对我们学习使用 Ansible 去进行网络自动化非常有帮助。
+
+在这一节中,我们将引用如下的一个简单的清单文件和 playbook 的示例,它们将在后面的章节中持续出现。
+
+_清单示例_:
+
+```
+# sample inventory file
+# filename inventory
+
+[all:vars]
+user=admin
+pwd=admin
+
+[tor]
+rack1-tor1 vendor=nxos
+rack1-tor2 vendor=nxos
+rack2-tor1 vendor=arista
+rack2-tor2 vendor=arista
+
+[core]
+core1
+core2
+```
+
+_playbook 示例_:
+
+```
+---
+# sample playbook
+# filename site.yml
+
+ - name: PLAY 1 - Top of Rack (TOR) Switches
+ hosts: tor
+ connection: local
+
+ tasks:
+ - name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
+ nxos_vlan:
+ vlan_id=10
+ name=WEB_VLAN
+ host={{ inventory_hostname }}
+ username=admin
+ password=admin
+ when: vendor == "nxos"
+
+ - name: ENSURE VLAN 10 EXISTS ON ARISTA TOR SWITCHES
+ eos_vlan:
+ vlanid=10
+ name=WEB_VLAN
+ host={{ inventory_hostname }}
+ username={{ user }}
+ password={{ pwd }}
+ when: vendor == "arista"
+
+ - name: PLAY 2 - Core (TOR) Switches
+ hosts: core
+ connection: local
+
+ tasks:
+ - name: ENSURE VLANS EXIST IN CORE
+ nxos_vlan:
+ vlan_id={{ item }}
+ host={{ inventory_hostname }}
+ username={{ user }}
+ password={{ pwd }}
+ with_items:
+ - 10
+ - 20
+ - 30
+ - 40
+ - 50
+```
+
+### 清单文件
+
+使用一个清单文件,比如前面提到的那个,允许我们去为自动化任务指定主机、和使用每个 play 顶部节中(如果存在)的参数 `hosts` 所引用的主机/组指定的主机组。
+
+它也可能在一个清单文件中存储变量。如这个示例中展示的那样。如果变量在同一行视为一台主机,它是一个具体主机变量。如果变量定义在方括号中(“[ ]”),比如,`[all:vars]`,它的意思是指变量在组中的范围 `all`,它是一个默认组,包含了清单文件中的 _所有_ 主机。
+
+###### 注意
+
+清单文件是使用 Ansible 开始自动化的快速方法,但是,你应该已经有一个真实的网络设备源,比如一个网络管理工具或者 CMDB,它可以去创建和使用一个动态的清单脚本,而不是一个静态的清单文件。
+
+### Playbook
+
+playbook 是去运行自动化网络设备的顶级对象。在我们的示例中,它是一个 _site.yml_ 文件,如前面的示例所展示的。一个 playbook 使用 YAML 去定义一组自动化任务,并且,每个 playbook 由一个或多个 plays 组成。这类似于一个橄榄球的剧本。就像在橄榄球赛中,团队有剧集组成的剧本,Ansible 的 playbooks 也是由 play 组成的。
+
+###### 注意
+
+YAML 是一种被所有编程语言支持的数据格式。YAML 本身就是 JSON 的超集,并且,YAML 文件非常易于识别,因为它总是三个破折号(连字符)开始,比如,`---`。
+
+
+### Play
+
+一个 Ansible playbook 可以存在一个或多个 plays。在前面的示例中,它在 playbook 中有两个 plays。每个 play 开始的地方都有一个 _header_ 节,它定义了具体的参数。
+
+示例中两个 plays 都定义了下面的参数:
+
+`name`
+
+文件 `PLAY 1 - Top of Rack (TOR) Switches` 是任意内容的,它在 playbook 运行的时候,去改善 playbook 运行和报告期间的可读性。这是一个可选参数。
+
+`hosts`
+
+正如前面讲过的,这是在特定的 play 中要去进行自动化的主机或主机组。这是一个必需参数。
+
+`connection`
+
+正如前面讲过的,这是 play 连接机制的类型。这是个可选参数,但是,对于网络自动化 plays,一般设置为 `local`。
+
+
+
+每个 play 都是由一个或多个任务组成。
+
+
+
+### 任务
+
+任务是以声明的方式去表示自动化的内容,而不用担心底层的语法或者操作是怎么执行的。
+
+在我们的示例中,第一个 play 有两个任务。每个任务确保存在 10 个 VLAN。第一个任务是为 Cisco Nexus 设备的,而第二个任务是为 Arista 设备的:
+
+```
+tasks:
+ - name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
+ nxos_vlan:
+ vlan_id=10
+ name=WEB_VLAN
+ host={{ inventory_hostname }}
+ username=admin
+ password=admin
+ when: vendor == "nxos"
+```
+
+任务也可以使用 `name` 参数,就像 plays 一样。和 plays 一样,文本内容是任意的,并且当 playbook 运行时显示,去改善 playbook 运行和报告期间的可读性。它对每个任务都是可选参数。
+
+示例任务中的下一行是以 `nxos_vlan` 开始的。它告诉我们这个任务将运行一个叫 `nxos_vlan` 的 Ansible 模块。
+
+现在,我们将进入到模块中。
+
+
+
+### 模块
+
+在 Ansible 中理解模块的概念是至关重要的。虽然任何编辑语言都可以用来写 Ansible 模块,只要它们能够返回 JSON 键 — 值对即可,但是,几乎所有的模块都是用 Python 写的。在我们示例中,我们看到有两个模块被运行: `nxos_vlan` 和 `eos_vlan`。这两个模块都是 Python 文件;而事实上,在你不能看到 playbook 的时候,真实的文件名分别是 _eos_vlan.py_ 和 _nxos_vlan.py_。
+
+让我们看一下前面的示例中第一个 play 中的第一个 任务:
+
+```
+ - name: ENSURE VLAN 10 EXISTS ON CISCO TOR SWITCHES
+ nxos_vlan:
+ vlan_id=10
+ name=WEB_VLAN
+ host={{ inventory_hostname }}
+ username=admin
+ password=admin
+ when: vendor == "nxos"
+```
+
+这个任务运行 `nxos_vlan`,它是一个自动配置 VLAN 的模块。为了使用这个模块,包含它,你需要为设备指定期望的状态或者配置策略。这个示例中的状态是:VLAN 10 将被配置一个名字 `WEB_VLAN`,并且,它将被自动配置到每个交换机上。我们可以看到,使用 `vlan_id` 和 `name` 参数很容易做到。模块中还有三个其它的参数,它们分别是:`host`、`username`、和 `password`:
+
+`host`
+
+这是将要被自动化的主机名(或者 IP 地址)。因为,我们希望去自动化的设备已经被定义在清单文件中,我们可以使用内置的 Ansible 变量 `inventory_hostname`。这个变量等价于清单文件中的内容。例如,在第一个循环中,在清单文件中的主机是 `rack1-tor1`,然后,在第二个循环中,它是 `rack1-tor2`。这些名字是进入到模块的,并且包含在模块中的,在每个名字到 IP 地址的解析中,都发生一个 DNS 查询。然后与这个设备进行通讯。
+
+`username`
+
+用于登入到交换机的用户名。
+
+
+`password`
+
+用于登入到交换机的密码。
+
+
+示例中最后的片断部分使用了一个 `when` 语句。这是在一个 play 中使用的 Ansible 的执行条件任务。正如我们所了解的,在这个 play 的 `tor` 组中有多个设备和设备类型。使用 `when` 基于任意标准去提供更多的选择。这里我们仅自动化 Cisco 设备,因为,我们在这个任务中使用了 `nxos_vlan` 模块,在下一个任务中,我们仅自动化 Arista 设备,因为,我们使用了 `eos_vlan` 模块。
+
+###### 注意
+
+这并不是区分设备的唯一方法。这里仅是演示如何使用 `when`,并且可以在清单文件中定义变量。
+
+在清单文件中定义变量是一个很好的开端,但是,如果你继续使用 Ansible,你将会为了扩展性、版本控制、对给定文件的改变最小化而去使用基于 YAML 的变量。这也将简化和改善清单文件和每个使用的变量的可读性。在设备准备的构建/推送方法中讲过一个变量文件的示例。
+
+在最后的示例中,关于任务有几点需要去搞清楚:
+
+* Play 1 任务 1 展示了硬编码了 `username` 和 `password` 作为参数进入到具体的模块中(`nxos_vlan`)。
+
+* Play 1 任务 1 和 play 2 在模块中使用了变量,而不是硬编码它们。这掩饰了 `username` 和 `password` 参数,但是,需要值得注意的是,(在这个示例中)这些变量是从清单文件中提取出现的。
+
+* Play 1 中为进入到模块中的参数使用了一个 _水平的(horizontal)_ 的 key=value 语法,虽然 play 2 使用了垂直的(vertical) key=value 语法。它们都工作的非常好。你也可以使用垂直的 YAML “key: value” 语法。
+
+* 最后的任务也介绍了在 Ansible 中怎么去使用一个 _loop_ 循环。它通过使用 `with_items` 来完成,并且它类似于一个 for 循环。那个特定的任务是循环进入五个 VLANs 中去确保在交换机中它们都存在。注意:它也可能被保存在一个外部的 YAML 变量文件中。还需要注意的一点是,不使用 `with_items` 的替代方案是,每个 VLAN 都有一个任务 —— 如果这样做,它就失去了弹性!
+
+
+### 动手实践使用 Ansible 去进行网络自动化
+
+在前面的章节中,提供了 Ansible 术语的一个概述。它已经覆盖了大多数具体的 Ansible 术语,比如 playbooks、plays、任务、模块、和清单文件。这一节将继续提供示例去讲解使用 Ansible 实现网络自动化,而且将提供在不同类型的设备中自动化工作的模块的更多细节。示例中的将要进行自动化设备由多个供应商提供,包括 Cisco、Arista、Cumulus、和 Juniper。
+
+在本节中的示例,假设的前提条件如下:
+
+* Ansible 已经安装。
+
+* 在设备中(NX-API、eAPI、NETCONF)适合的 APIs 已经启用。
+
+* 用户在系统上有通过 API 去产生改变的适当权限。
+
+* 所有的 Ansible 模块已经在系统中存在,并且也在库的路径变量中。
+
+###### 注意
+
+可以在 _ansible.cfg_ 文件中设置模块和库路径。在你运行一个 playbook 时,你也可以使用 `-M` 标志从命令行中去改变它。
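+
+例如(其中的模块路径和 playbook 文件名只是假设):
+
+```
+$ ansible-playbook -i inventory -M ./library site.yml
+```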
+
+在本节中示例使用的清单如下。(删除了密码,IP 地址也发生了变化)。在这个示例中,(和前面的示例一样)某些主机名并不是完全合格域名(FQDNs)。
+
+
+### 清单文件
+
+```
+[cumulus]
+cvx ansible_ssh_host=1.2.3.4 ansible_ssh_pass=PASSWORD
+
+[arista]
+veos1
+
+[cisco]
+nx1 hostip=5.6.7.8 un=USERNAME pwd=PASSWORD
+
+[juniper]
+vsrx hostip=9.10.11.12 un=USERNAME pwd=PASSWORD
+```
+
+###### 注意
+
+正如你所知道的,Ansible 支持将密码存储在一个加密文件中的功能。如果你想学习关于这个特性的更多内容,请查看在 Ansible 网站上的文档中的 [Ansible Vault][5] 部分。
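+
+一个简单的示意:使用 `ansible-vault` 命令行工具去创建或加密一个存放敏感变量的文件(文件名只是举例):
+
+```
+$ ansible-vault create group_vars/all/vault.yml    # 新建一个加密的变量文件
+$ ansible-vault encrypt secrets.yml                # 或者加密一个已有的文件
+```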
+
+这个清单文件有四个组,每个组定义了一台单个的主机。让我们详细回顾一下每一节:
+
+Cumulus
+
+主机 `cvx` 是一个 Cumulus Linux (CL) 交换机,并且它是 `cumulus` 组中的唯一设备。记住,CL 是原生 Linux,因此,这意味着它是使用默认连接机制(SSH)连到到需要自动化的 CL 交换机。因为 `cvx` 在 DNS 或者 _/etc/hosts_ 文件中没有定义,我们将让 Ansible 知道不要在清单文件中定义主机名,而是在 `ansible_ssh_host` 中定义的名字/IP。登陆到 CL 交换机的用户名被定义在 playbook 中,但是,你可以看到密码使用变量 `ansible_ssh_pass` 定义在清单文件中。
+
+Arista
+
+被称为 `veos1` 的是一台运行 EOS 的 Arista 交换机。它是在 `arista` 组中唯一的主机。正如你在 Arista 中看到的,在清单文件中并没有其它的参数存在。这是因为 Arista 为它们的设备使用了一个特定的配置文件。在我们的示例中它的名字为 _.eapi.conf_,它存在在 home 目录中。下面是正确使用配置文件的这个功能的示例:
+
+```
+[connection:veos1]
+host: 2.4.3.4
+username: unadmin
+password: pwadmin
+```
+
+这个文件包含了定义在配置文件中的 Ansible 连接到设备(和 Arista 的被称为 _pyeapi_ 的 Python 库)所需要的全部信息。
+
+Cisco
+
+和 Cumulus 和 Arista 一样,这里仅有一台主机(`nx1`)存在于 `cisco` 组中。这是一台 NX-OS-based Cisco Nexus 交换机。注意在这里为 `nx1` 定义了三个变量。它们包括 `un` 和 `pwd`,这是为了在 playbook 中访问和为了进入到 Cisco 模块去连接到设备。另外,这里有一个称为 `hostip` 的参数,它是必需的,因为,`nx1` 没有在 DNS 中或者是 _/etc/hosts_ 配置文件中定义。
+
+
+###### 注意
+
+如果自动化一个原生的 Linux 设备,我们可以将这个参数命名为任何东西。`ansible_ssh_host` 被用于到如我们看到的那个 Cumulus 示例(如果在清单文件中的定义不能被解析)。在这个示例中,我们将一直使用 `ansible_ssh_host`,但是,它并不是必需的,因为我们将这个变量作为一个参数进入到 Cisco 模块,而 `ansible_ssh_host` 是在使用默认的 SSH 连接机制时自动检查的。
+
+Juniper
+
+和前面的三个组和主机一样,在 `juniper` 组中有一个单个的主机 `vsrx`。它在清单文件中的设置与 Cisco 相同,因为两者在 playbook 中使用了相同的方式。
+
+
+### Playbook
+
+接下来的 playbook 有四个不同的 play,每个 play 针对一个特定供应商类型的设备组进行自动化。注意,这并不是在单个 playbook 中执行这些任务的唯一方法;还有其它的方法,比如使用条件(`when` 语句)或者创建 Ansible 角色(后者在这个报告中没有介绍)。
+
+这里有一个 playbook 的示例:
+
+```
+---
+
+ - name: PLAY 1 - CISCO NXOS
+ hosts: cisco
+ connection: local
+
+ tasks:
+ - name: ENSURE VLAN 100 exists on Cisco Nexus switches
+ nxos_vlan:
+ vlan_id=100
+ name=web_vlan
+ host={{ hostip }}
+ username={{ un }}
+ password={{ pwd }}
+
+ - name: PLAY 2 - ARISTA EOS
+ hosts: arista
+ connection: local
+
+ tasks:
+ - name: ENSURE VLAN 100 exists on Arista switches
+ eos_vlan:
+ vlanid=100
+ name=web_vlan
+ connection={{ inventory_hostname }}
+
+ - name: PLAY 3 - CUMULUS
+ remote_user: cumulus
+ sudo: true
+ hosts: cumulus
+
+ tasks:
+ - name: ENSURE 100.10.10.1 is configured on swp1
+ cl_interface: name=swp1 ipv4=100.10.10.1/24
+
+ - name: restart networking without disruption
+ shell: ifreload -a
+
+ - name: PLAY 4 - JUNIPER SRX changes
+ hosts: juniper
+ connection: local
+
+ tasks:
+ - name: INSTALL JUNOS CONFIG
+ junos_install_config:
+ host={{ hostip }}
+ file=srx_demo.conf
+ user={{ un }}
+ passwd={{ pwd }}
+ logfile=deploysite.log
+ overwrite=yes
+ diffs_file=junpr.diff
+```
+
+你会注意到,前面的两个 play 与我们在最初的 Cisco 和 Arista 示例中讲过的非常类似。唯一的区别是这里为每个要自动化的组(`cisco` 和 `arista`)定义了各自的 play,而不是像前面介绍的那样使用 `when` 条件。
+
+做这些事并没有绝对正确或者错误的方式。这取决于你预先掌握哪些信息,以及什么最适合你的环境和使用场景,但我们的目的是展示完成同一件事的几种不同方法。
+
+第三个 play 自动配置 Cumulus Linux 交换机上的 `swp1` 接口。这个 play 中的第一个任务确保 `swp1` 是一个三层接口,并且配置的 IP 地址是 100.10.10.1。因为 Cumulus Linux 是原生的 Linux,网络服务在配置改变后需要重启才能生效。这也可以使用 Ansible 的处理程序(handler)来完成(这超出了本报告讨论的范围)。另外也有一个叫做 `service` 的 Ansible 核心模块可以做这件事,但它会中断交换机上的网络;而使用 `ifreload` 重启网络则不会造成中断。
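+
+下面是一个使用 handler 的极简示意,仅为说明思路而写的假设片段,并非本报告的实际代码;任务名和模块参数沿用了上面 play 3 的写法:
+
+```
+ - name: PLAY 3 - CUMULUS(使用 handler 的写法)
+   remote_user: cumulus
+   sudo: true
+   hosts: cumulus
+
+   tasks:
+     - name: ENSURE 100.10.10.1 is configured on swp1
+       cl_interface: name=swp1 ipv4=100.10.10.1/24
+       notify: reload networking
+
+   handlers:
+     - name: reload networking
+       shell: ifreload -a
+```
+
+这样写的好处是,只有当接口任务真正产生变更时,`ifreload -a` 才会被执行。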
+
+本节到现在为止,我们讲解的都是专注于特定任务的 Ansible 模块,比如配置接口和 VLAN。第四个 play 使用了另外一种选择:我们将看到一个模块,它会 _推送_ 一个完整的配置文件,并立即将其激活为新的运行配置。前面的示例中用 `napalm_install_config` 演示过这种做法,而这个示例使用的是一个 Juniper 专用的模块。
+
+`junos_install_config` 模块接受几个参数,如下面的示例中所展示的。到现在为止,你应该理解了什么是 `user`、`passwd`、和 `host`。其它的参数定义如下:
+
+`file`
+
+这是一个从 Ansible 控制主机拷贝到 Juniper 设备的配置文件。
+
+`logfile`
+
+这是可选的,但是,如果你指定它,它会被用于存储运行这个模块时生成的信息。
+
+`overwrite`
+
+当设置为 yes/true 时,设备上现有的完整配置将被所发送的配置文件覆盖。(默认是 false)
+
+`diffs_file`
+
+这是可选的,但是,如果你指定它,应用配置时生成的差异将被存储在这个文件中。在下面的示例中,仅仅更改了主机名,但发送的仍然是一个完整的配置文件,因此生成了如下的差异:
+
+```
+# filename: junpr.diff
+[edit system]
+- host-name vsrx;
++ host-name vsrx-demo;
+```
+
+
+上面已经介绍了 playbook 概述的细节。现在,让我们看看当 playbook 运行时发生了什么:
+
+###### 注意
+
+`-i` 标志用于指定所使用的清单文件。也可以设置环境变量 `ANSIBLE_HOSTS`,这样就不用在每次运行 playbook 时都带上 `-i` 标志。
+
+```
+ntc@ntc:~/ansible/multivendor$ ansible-playbook -i inventory demo.yml
+
+PLAY [PLAY 1 - CISCO NXOS] *************************************************
+
+TASK: [ENSURE VLAN 100 exists on Cisco Nexus switches] *********************
+changed: [nx1]
+
+PLAY [PLAY 2 - ARISTA EOS] *************************************************
+
+TASK: [ENSURE VLAN 100 exists on Arista switches] **************************
+changed: [veos1]
+
+PLAY [PLAY 3 - CUMULUS] ****************************************************
+
+GATHERING FACTS ************************************************************
+ok: [cvx]
+
+TASK: [ENSURE 100.10.10.1 is configured on swp1] ***************************
+changed: [cvx]
+
+TASK: [restart networking without disruption] ******************************
+changed: [cvx]
+
+PLAY [PLAY 4 - JUNIPER SRX changes] ****************************************
+
+TASK: [INSTALL JUNOS CONFIG] ***********************************************
+changed: [vsrx]
+
+PLAY RECAP ***************************************************************
+ to retry, use: --limit @/home/ansible/demo.retry
+
+cvx : ok=3 changed=2 unreachable=0 failed=0
+nx1 : ok=1 changed=1 unreachable=0 failed=0
+veos1 : ok=1 changed=1 unreachable=0 failed=0
+vsrx : ok=1 changed=1 unreachable=0 failed=0
+```
+
+你可以看到每个任务成功完成;如果你是在终端上运行,你将看到用琥珀色显示的每个改变的任务。
+
+让我们再次运行 playbook。通过再次运行,我们可以校验所有模块的 _幂等性_;当我们这样做的时候,我们看到设备上 _没有_ 产生变化,并且所有的东西都是绿色的:
+
+```
+PLAY [PLAY 1 - CISCO NXOS] ***************************************************
+
+TASK: [ENSURE VLAN 100 exists on Cisco Nexus switches] ***********************
+ok: [nx1]
+
+PLAY [PLAY 2 - ARISTA EOS] ***************************************************
+
+TASK: [ENSURE VLAN 100 exists on Arista switches] ****************************
+ok: [veos1]
+
+PLAY [PLAY 3 - CUMULUS] ******************************************************
+
+GATHERING FACTS **************************************************************
+ok: [cvx]
+
+TASK: [ENSURE 100.10.10.1 is configured on swp1] *****************************
+ok: [cvx]
+
+TASK: [restart networking without disruption] ********************************
+skipping: [cvx]
+
+PLAY [PLAY 4 - JUNIPER SRX changes] ******************************************
+
+TASK: [INSTALL JUNOS CONFIG] *************************************************
+ok: [vsrx]
+
+PLAY RECAP ***************************************************************
+cvx : ok=2 changed=0 unreachable=0 failed=0
+nx1 : ok=1 changed=0 unreachable=0 failed=0
+veos1 : ok=1 changed=0 unreachable=0 failed=0
+vsrx : ok=1 changed=0 unreachable=0 failed=0
+```
+
+注意:这里有 0 个改变,但是,每次运行任务,正如期望的那样,它们都返回 “ok”。说明在这个 playbook 中的每个模块都是幂等的。
+
+
+### 总结
+
+Ansible 是一个超级简单的、无代理且可扩展的自动化平台。网络社区持续不断地聚集到 Ansible 周围,将它作为一个执行网络自动化任务的平台,比如配置管理、数据收集和报告等等。你可以使用 Ansible 推送完整的配置文件,使用幂等的模块配置具体的网络资源(比如接口、VLAN),或者只是自动收集诸如邻居、序列号、启动时间和接口状态之类的信息,并按你的需要定制报告。
+
+得益于它的架构,Ansible 被证明是一个非常好的工具,它既能自动化传统的基于 _CLI/SNMP_ 的网络设备,也能自动化基于 _API 驱动_ 的现代化网络设备。
+
+Ansible 易于使用且采用无代理架构,这使它在网络社区中的接受度持续增加,也使得自动化那些没有 API 的设备(CLI/SNMP)成为可能,这些设备包括独立的交换机、路由器和 4-7 层的服务设备,甚至还包括那些提供了 RESTful API 的软件定义网络控制器。
+
+当使用 Ansible 实现网络自动化时,不会落下任何设备。
+
+-----------
+
+作者简介:
+
+ ![](https://d3tdunqjn7n0wj.cloudfront.net/360x360/jason-edelman-crop-5b2672f569f553a3de3a121d0179efcb.jpg)
+
+Jason Edelman,CCIE 15394 & VCDX-NV 167,出生并成长于新泽西州的一位网络工程师。他是一位典型的 “CLI 爱好者” 和 “路由器小子”。在几年前,他决定更多地关注于软件、开发实践、以及怎么与网络工程融合。Jason 目前经营着一个小的咨询公司,公司名为:Network to Code(http://networktocode.com/),帮助供应商和终端用户利用新的工具和技术来减少他们的低效率操作...
+
+--------------------------------------------------------------------------------
+
+via: https://www.oreilly.com/learning/network-automation-with-ansible
+
+作者:[Jason Edelman][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.oreilly.com/people/ee4fd-jason-edelman
+[1]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_terminology_and_getting_started
+[2]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_network_integrations
+[3]:https://www.oreilly.com/learning/network-automation-with-ansible#ansible_terminology_and_getting_started
+[4]:https://www.oreilly.com/learning/network-automation-with-ansible#handson_look_at_using_ansible_for_network_automation
+[5]:http://docs.ansible.com/ansible/playbooks_vault.html
+[6]:https://www.oreilly.com/people/ee4fd-jason-edelman
+[7]:https://www.oreilly.com/people/ee4fd-jason-edelman
diff --git a/translated/tech/20170215 How to take screenshots on Linux using Scrot.md b/translated/tech/20170215 How to take screenshots on Linux using Scrot.md
new file mode 100644
index 0000000000..1ddefb37eb
--- /dev/null
+++ b/translated/tech/20170215 How to take screenshots on Linux using Scrot.md
@@ -0,0 +1,330 @@
+如何在 Linux 系统里用 Scrot 截屏
+============================================================
+
+### 文章主要内容
+
+1. [关于 Scrot][12]
+2. [安装 Scrot][13]
+3. [Scrot 的使用和特点][14]
+ 1. [获取程序版本][1]
+ 2. [抓取当前窗口][2]
+ 3. [抓取选定窗口][3]
+ 4. [在截屏时包含窗口边框][4]
+ 5. [延时截屏][5]
+ 6. [截屏前倒数][6]
+ 7. [图片质量][7]
+ 8. [生成缩略图][8]
+ 9. [拼接多显示器截屏][9]
+ 10. [在保存截图后执行操作][10]
+ 11. [特殊字符串][11]
+4. [结论][15]
+
+最近,我们介绍过 [gnome-screenshot][17] 工具,这是一个很优秀的屏幕抓取工具。但如果你想找一个在命令行下使用的更好用的截屏工具,你一定要试试 Scrot。这个工具有一些 gnome-screenshot 没有的独特功能。在这篇文章里,我们会通过简单易懂的例子来详细介绍 Scrot。
+
+请注意一下,这篇文章里的所有例子都在 Ubuntu 16.04 LTS 上测试过,我们用的 scrot 版本是 0.8。
+
+### 关于 Scrot
+
+[Scrot][18] (**SCR**eensh**OT**) 是一个屏幕抓取工具,使用 imlib2 库来获取和保存图片。由 Tom Gilbert 用 C 语言开发完成,通过 BSD 协议授权。
+
+### 安装 Scrot
+
+scrot 工具可能已经在你的 Ubuntu 系统里预装了,不过如果没有的话,你可以用下面的命令安装:
+
+sudo apt-get install scrot
+
+安装完成后,你可以通过下面的命令来使用:
+
+scrot [options] [filename]
+
+**注意**:方括号里的参数是可选的。
+
+### Scrot 的使用和特点
+
+在这个小节里,我们会介绍如何使用 Scrot 工具,以及它的所有功能。
+
+如果不带任何选项执行命令,它会抓取整个屏幕。
+
+[
+ ![使用 Scrot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/scrot.png)
+][19]
+
+默认情况下,抓取的截图会用带时间戳的文件名保存到当前目录下,不过你也可以在运行命令时指定截图文件名。比如:
+
+scrot [image-name].png
+
+### 获取程序版本
+
+你想的话,可以用 -v 选项来查看 scrot 的版本。
+
+scrot -v
+
+这是例子:
+
+[
+ ![获取 scrot 版本](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/version.png)
+][20]
+
+### 抓取当前窗口
+
+这个工具可以限制抓取当前的焦点窗口。这个功能可以通过 -u 选项打开。
+
+scrot -u
+
+例如,这是我在命令行执行上边命令时的桌面:
+
+[
+ ![用 scrot 截取窗口](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/desktop.png)
+][21]
+
+这是另一张用 scrot 抓取的截图:
+
+[
+ ![用 scrot 抓取的图片](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/active.png)
+][22]
+
+### 抓取选定窗口
+
+这个工具还可以让你抓取任意用鼠标点击的窗口。这个功能可以用 -s 选项打开。
+
+scrot -s
+
+例如,在下面的截图里你可以看到,我有两个互相重叠的终端窗口。我在上层的窗口里执行上面的命令。
+
+[
+ ![选择窗口](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/select1.png)
+][23]
+
+现在假如我想抓取下层的终端窗口。这样我只要在执行命令后点击窗口就可以了 - 在你用鼠标点击之前,命令的执行不会结束。
+
+这是我点击了下层终端窗口后的截图:
+
+[
+ ![窗口截图](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/select2.png)
+][24]
+
+**注意**:你可以在上面的截图里看到,下层终端窗口的整个显示区域都被抓取下来了,甚至包括了上层窗口的部分叠加内容。
+
+### 在截屏时包含窗口边框
+
+我们之前介绍的 -u 选项在截屏时不会包含窗口边框。不过,需要的话你也可以在截屏时包含窗口边框。这个功能可以通过 -b 选项打开(当然要和 -u 选项一起)。
+
+scrot -ub
+
+下面是示例截图:
+
+[
+ ![截屏时包含窗口边框](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/border-new.png)
+][25]
+
+**注意**:截屏时包含窗口边框同时也会增加一点额外的背景。
+
+### 延时截屏
+
+你可以在开始截屏时增加一点延时。需要在 --delay 或 -d 选项后设定一个时间值参数。
+
+scrot --delay [NUM]
+
+scrot --delay 5
+
+例如:
+
+[
+ ![延时截屏](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/delay.png)
+][26]
+
+在这例子里,scrot 会等待 5 秒再截屏。
+
+### 截屏前倒数
+
+这个工具也可以在你使用延时功能后显示一个倒计时。这个功能可以通过 -c 选项打开。
+
+scrot --delay [NUM] -c
+
+scrot -d 5 -c
+
+下面是示例截图:
+
+[
+ ![延时截屏示例](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/countdown.png)
+][27]
+
+### 图片质量
+
+你可以使用这个工具来调整截图的图片质量,范围是 1-100 之间。较大的值意味着更大的文件大小以及更低的压缩率。默认值是 75,不过最终效果根据选择的文件类型也会有一些差异。
+
+这个功能可以通过 --quality 或 -q 选项打开,但是你必须提供一个 1-100 之间的数值作为参数。
+
+scrot --quality [NUM]
+
+scrot --quality 10
+
+下面是示例截图:
+
+[
+ ![截屏质量](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/img-quality.jpg)
+][28]
+
+你可以看到,-q 选项的参数更靠近 1 让图片质量下降了很多。
+
+### 生成缩略图
+
+scrot 工具还可以生成截图的缩略图。这个功能可以通过 --thumb 选项打开。这个选项需要一个 NUM 数值作为参数,即缩略图相对于原图大小的百分比。
+
+scrot --thumb NUM
+
+scrot --thumb 50
+
+**注意**:加上 --thumb 选项也会同时保存原始截图文件。
+
+例如,下面是我测试的原始截图:
+
+[
+ ![原始截图](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/orig.png)
+][29]
+
+下面是保存的缩略图:
+
+[
+ ![截图缩略图](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/thmb.png)
+][30]
+
+### 拼接多显示器截屏
+
+如果你的电脑接了多个显示设备,你可以用 scrot 抓取并拼接这些显示设备的截图。这个功能可以通过 -m 选项打开。
+
+scrot -m
+
+下面是示例截图:
+
+[
+ ![拼接截屏](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/multiple.png)
+][31]
+
+### 在保存截图后执行操作
+
+使用这个工具,你可以在保存截图后执行各种操作 - 例如,用像 gThumb 这样的图片编辑器打开截图。这个功能可以通过 -e 选项打开。下面是例子:
+
+scrot abc.png -e 'gthumb abc.png'
+
+这个命令里的 gthumb 是一个图片编辑器,它会在上面的命令截屏完成后自动打开刚才保存的截图。
+
+下面是命令的截图:
+
+[
+ ![截屏后执行命令](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec1.png)
+][32]
+
+这个是上面命令执行后的效果:
+
+[
+ ![示例截图](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec2.png)
+][33]
+
+你可以看到 scrot 抓取了屏幕截图,然后再启动了 gThumb 图片编辑器打开刚才保存的截图图片。
+
+如果你截图时没有指定文件名,截图将会用带有时间戳的文件名保存到当前目录 - 这是 scrot 的默认设定,我们前面已经说过。
+
+下面是一个使用默认名字并且加上 -e 选项来截图的例子:
+
+scrot -e 'gthumb $n'
+
+[
+ ![scrot 截屏后运行 gthumb](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec3.png)
+][34]
+
+有个地方要注意的是 $n 是一个特殊字符串,用来获取当前截图的文件名。关于特殊字符串的更多细节,请继续看下个小节。
+
+### 特殊字符串
+
+scrot 的 -e(或 --exec)选项和文件名参数可以使用格式说明符。有两种类型格式。第一种是以 '%' 加字母组成,用来表示日期和时间,第二种以 '$' 开头,scrot 内部使用。
+
+下面介绍几个 --exec 和文件名参数接受的说明符。
+
+**$f** – 让你可以使用截图的全路径(包括文件名)。
+
+例如
+
+scrot ashu.jpg -e 'mv $f ~/Pictures/Scrot/ashish/'
+
+下面是示例截图:
+
+[
+ ![示例](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/f.png)
+][35]
+
+如果你没有指定文件名,scrot 默认会用日期格式的文件名保存截图。这个是 scrot 的默认文件名格式:%yy-%mm-%dd-%hhmmss_$wx$h_scrot.png。
+
+**$n** – 提供截图文件名。下面是示例截图:
+
+[
+ ![scrot $n variable](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/n.png)
+][36]
+
+**$s** – 获取截图的文件大小。这个功能可以像下面这样使用。
+
+scrot abc.jpg -e 'echo $s'
+
+下面是示例截图:
+
+[
+ ![scrot $s 变量](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/s.png)
+][37]
+
+类似的,你也可以使用其他格式字符串 **$p**, **$w**, **$h**, **$t**, **$$** 以及 **\n** 来分别获取图片像素大小,图像宽度,图像高度,图像格式,输入 $ 字符,以及换行。你可以像上面介绍的 **$s** 格式那样使用这些字符串。
+
+### 结论
+
+这个应用能轻松地安装在 Ubuntu 系统上,对初学者比较友好。scrot 也提供了一些高级功能,比如支持格式化字符串,方便专业用户用脚本处理。当然,如果你想用起来的话有一点轻微的学习曲线。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/
+
+作者:[Himanshu Arora][a]
+译者:[zpl1025](https://github.com/zpl1025)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/
+[1]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#get-the-applicationnbspversion
+[2]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#capturing-current-window
+[3]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#selecting-a-window
+[4]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#includenbspwindow-border-in-screenshots
+[5]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#delay-in-taking-screenshots
+[6]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#countdown-before-screenshot
+[7]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#image-quality
+[8]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#generating-thumbnails
+[9]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#join-multiple-displays-shots
+[10]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#executing-operations-on-saved-images
+[11]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#special-strings
+[12]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#about-scrot
+[13]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#scrot-installation
+[14]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#scrot-usagefeatures
+[15]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#conclusion
+[16]:https://www.howtoforge.com/subscription/
+[17]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/
+[18]:https://en.wikipedia.org/wiki/Scrot
+[19]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/scrot.png
+[20]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/version.png
+[21]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/desktop.png
+[22]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/active.png
+[23]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/select1.png
+[24]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/select2.png
+[25]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/border-new.png
+[26]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/delay.png
+[27]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/countdown.png
+[28]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/img-quality.jpg
+[29]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/orig.png
+[30]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/thmb.png
+[31]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/multiple.png
+[32]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec1.png
+[33]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec2.png
+[34]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec3.png
+[35]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/f.png
+[36]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/n.png
+[37]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/s.png
diff --git a/translated/tech/20171003 Streams a new general purpose data structure in Redis.md b/translated/tech/20171003 Streams a new general purpose data structure in Redis.md
index bb21a1bd93..39da54d0cf 100644
--- a/translated/tech/20171003 Streams a new general purpose data structure in Redis.md
+++ b/translated/tech/20171003 Streams a new general purpose data structure in Redis.md
@@ -1,15 +1,13 @@
-[Streams:Redis中新的一个通用数据结构][1]
+streams:一个新的 Redis 通用数据结构
==================================
+直到几个月以前,对于我来说,在消息传递的环境中,streams 只是一个有趣且相对简单的概念。在 Kafka 流行这个概念之后,我主要研究它们在 Disque 实例中的用途。Disque 是一个将会转化为 Redis 4.2 的模块的消息队列。后来我发现 Disque 全都是 AP 消息,它将在不需要客户端过多参与的情况下实现容错和保证送达,因此,我认为 streams 的概念在那种情况下并不适用。
-直到几个月以前,对于我来说,在消息传递的环境中,streams 只是一个有趣且相对简单的概念。在 Kafka 概念普及之后,我主要研究他们在 Disque 实例中的效能。Disque 是一个将被转化到 Redis 4.2 模块中的消息队列。后来我明白 Disque 将是关于 AP 消息的全部,它将在不需要客户端过多参与的情况下实现容错和保证送达,因此,我认为 streams 的概念在那种情况下并不适用。
+但是,在 Redis 中有一个问题,那就是缺省情况下导出数据结构并不轻松。它在 Redis 列表、排序集和发布/订阅(Pub/Sub)能力上有某些缺陷。你可以合适地使用这些工具去模拟一个消息或事件的序列,而有所权衡。排序集是大量耗费内存的,不能自然的模拟一次又一次的相同消息的传递,客户端不能阻塞新消息。因为一个排序集并不是一个序列化的数据结构,它是一个根据它们量的变化而移动的元素集:它不是很像时间系列一样的东西。列表有另外的问题,它在某些特定的用例中产生类似的适用性问题:你无法浏览列表中部是什么,因为在那种情况下,访问时间是线性的。此外,没有任何的指定输出功能,列表上的阻塞操作仅为单个客户端提供单个元素。列表中没有固定的元素标识,也就是说,不能指定从哪个元素开始给我提供内容。对于一到多的工作负载,这里有发布/订阅,它在大多数情况下是非常好的,但是,对于某些不想“即发即弃”的东西:保留一个历史是很重要的,而不是断开之后重新获得消息,也因为某些消息列表,像时间系列,在用范围查询浏览时,是非常重要的:在这 10 秒范围内我的温度读数是多少?
-但是,在 Redis 中有一个问题,那就是从缺省导出数据结构并不轻松。它在 Redis 列表、排序集和发布/订阅(Pub/Sub)能力上有某些缺陷,你可以权衡差异,友好地使用这些工具去模拟一个消息或事件的序列。排序集是大量耗费内存的,不能用相同的消息模型一次又一次的传递,客户端不能阻塞新消息。因为一个排序集并不是一个序列化的数据结构,它是一个根据他们量的变化而变化的元素集:它不是像时间系列一样很适合的东西。列表有另外的问题,它在某些用户案例中产生适用性问题:你无法浏览列表中是什么,因为在那种情况下,访问时间是线性的。此外,没有任何输出,列表上的阻塞操作仅为单个客户端提供单个元素。比如说:从那个元素开始给我提供内容,列表中也没有固定的元素标识。对于一到多的工作负载,这里有发布/订阅,它在大多数情况下是非常好的,但是,对于某些不想“即发即弃”的东西:去保留一个历史是很重要的,而不是断开之后重新获得消息,也因为某些消息列表,像时间系列,在用范围查询浏览时,是非常重要的:在这 10 秒范围内我的温度读数是多少?
+这有一种方法可以尝试处理上面的问题,我计划对排序集进行通用化,并列入一个唯一的、更灵活的数据结构,然而,我的设计尝试最终以生成一个比当前的数据结构更加矫揉造作的结果而结束。一个关于 Redis 数据结构导出的更好的想法是,让它更像天然的计算机科学的数据结构,而不是,“Salvatore 发明的 API”。因此,在最后我停止了我的尝试,并且说,“ok,这是我们目前能提供的”,或许,我将为发布/订阅增加一些历史信息,或者将来对列表访问增加一些更灵活的方式。然而,每次在会议上有用户对我说“你如何在 Redis 中模拟时间系列” 或者类似的问题时,我的脸就绿了。
-这有一种方法可以尝试处理上面的问题,计划对排序集进行通用化,并列入一个唯一的更灵活的数据结构,然而,我的设计尝试最终以生成一个相对当前的人造的数据结构的结果结束,一个关于 Redis 数据结构导出的更好的想法是,让它更像天然的计算机科学的数据结构。而不是, “Salvatore 发明的 API”。因此,在最后我停止了我的尝试,并且说,“ok,这是我们目前能提供的”,或许,我将为发布/订阅增加一些历史,或者将来对列表访问增加一些更灵活的方式。然而,每次在会议上有用户对我说“你如何在 Redis 中模拟时间系列” 或者类似的问题时,我的脸变绿了。
-
-起源
-=======
+### 起源
在将 Redis 4.0 中的模块介绍完之后,用户开始去看他们自己怎么去修复这些问题。他们之一,Timothy Downs,通过 IRC 写信给我:
diff --git a/translated/tech/20171006 How to Install Software from Source Code... and Remove it Afterwards.md b/translated/tech/20171006 How to Install Software from Source Code... and Remove it Afterwards.md
new file mode 100644
index 0000000000..9673771438
--- /dev/null
+++ b/translated/tech/20171006 How to Install Software from Source Code... and Remove it Afterwards.md
@@ -0,0 +1,514 @@
+怎么用源代码安装软件 … 以及如何卸载它
+============================================================
+
+![How to install software from source code](https://itsfoss.com/wp-content/uploads/2017/10/install-software-from-source-code-linux-800x450.jpg)
+
+ _简介:这篇文章详细介绍了在 Linux 中怎么用源代码安装程序,以及怎么去卸载源代码安装的程序。_
+
+你的 Linux 分发版的其中一个最大的优点就是它的包管理器和相关的软件库。正是因为它们,你才可以去下载所需的工具和资源,以及在你的计算机上完全自动化地安装一个新软件。
+
+但是,尽管他们付出了很多的努力,包维护者仍然没法处理好每一个依赖,也不可能将所有的可用软件都打包进去。因此,仍然存在需要你自己去编译和安装新软件的情形。对于我来说,到目前为止,最主要的原因是,当我需要运行某个特定版本的软件时,不得不自己编译它。或者,我想修改源代码,或者使用一些自己想要的编译选项。
+
+如果你也属于后一种情况,那你已经知道你应该做什么了。但是,对于绝大多数的 Linux 用户来说,第一次从源代码中编译和安装一个软件看上去像是一个入门的仪式:它让很多人感到恐惧;但是,如果你能克服困难,你将可能进入一个全新的世界,并且,如果你做到了,那么你将成为社区中享有特权的一部分人。
+
+
+### A. 在 Linux 中从源代码开始安装软件
+
+这正是我们要做的。因为这篇文章的需要,我要在我的系统上安装 [NodeJS][9] 8.1.1。这只是随意选定的一个版本,它在 Debian 仓库中并不存在:
+
+```
+sh$ apt-cache madison nodejs | grep amd64
+ nodejs | 6.11.1~dfsg-1 | http://deb.debian.org/debian experimental/main amd64 Packages
+ nodejs | 4.8.2~dfsg-1 | http://ftp.fr.debian.org/debian stretch/main amd64 Packages
+ nodejs | 4.8.2~dfsg-1~bpo8+1 | http://ftp.fr.debian.org/debian jessie-backports/main amd64 Packages
+ nodejs | 0.10.29~dfsg-2 | http://ftp.fr.debian.org/debian jessie/main amd64 Packages
+ nodejs | 0.10.29~dfsg-1~bpo70+1 | http://ftp.fr.debian.org/debian wheezy-backports/main amd64 Packages
+```
+
+### 第 1 步:从 GitHub 上获取源代码
+
+像大多数开源项目一样,NodeJS 的源代码可以在 GitHub:[https://github.com/nodejs/node][10] 上找到。
+
+所以,我们直接开始吧。
+
+![The NodeJS official GitHub repository](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-account.png)
+
+如果你不熟悉 [GitHub][11]、[git][12] 或者其它值得一提的 [版本管理系统][13]:这个仓库包含了该软件当前的源代码,也包含了多年来对该软件所做的所有修改的历史,甚至可以一直追溯到该软件的最初版本。对于开发者来说,保留这些历史有很多好处。对现在的我们来说,主要的好处是可以得到任何一个给定时间点的项目源代码。更准确地说,我可以得到我想要的 8.1.1 版本发布时的源代码,即便在那之后他们又做了很多修改。
+
+![Choose the v8.1.1 tag in the NodeJS GitHub repository](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-choose-revision-tag.png)
+
+在 GitHub 上,你可以使用 “branch” 按钮在这个软件的不同版本之间切换。[在 Git 中,“branch” 和 “tag” 是两个有些相关的概念][14]。简单来说,开发者创建 “branch” 和 “tag”,是为了跟踪项目历史中的重要事件,比如开始开发一个新特性或者发布一个新版本。这里先不详细介绍,你只需要知道我要找的是打了 “v8.1.1” 这个 _tag_ 的版本。
+
+![The NodeJS GitHub repository as it was at the time the v8.1.1 tag was created](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-revision-811.png)
+
+在选择了 “v8.1.1” 标签后,页面会刷新,最明显的变化是标签现在成为了 URL 的一部分。另外,你可能会注意到文件的修改日期也不同了。你现在看到的源代码树,就是创建 v8.1.1 标签时的那棵源代码树。在某种意义上,你可以把 git 这样的版本管理工具看作是一个时光机,允许你在项目历史中来回穿梭。
+
+![NodeJS GitHub repository download as a ZIP button](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-revision-download-zip.png)
+
+在这个时候,我们可以下载 NodeJS 8.1.1 的源代码了。你不会错过那个建议以 ZIP 压缩包形式下载项目的蓝色大按钮。对于我来说,为了讲解方便,我会从命令行中下载并解压这个 ZIP 压缩包。但是,如果你更喜欢使用 [GUI][15] 工具,也完全可以那样做:
+
+```
+wget https://github.com/nodejs/node/archive/v8.1.1.zip
+unzip v8.1.1.zip
+cd node-8.1.1/
+```
+
+下载 ZIP 包也挺好,但是如果你想像高手(“like a pro”)一样做这件事,我建议你直接使用 `git` 工具去下载源代码。它一点也不复杂,而且如果这是你第一次使用这个工具,它会是一个很好的开端,你以后会经常用到它:
+
+```
+# first ensure git is installed on your system
+sh$ sudo apt-get install git
+# Make a shallow clone the NodeJS repository at v8.1.1
+sh$ git clone --depth 1 \
+ --branch v8.1.1 \
+ https://github.com/nodejs/node
+sh$ cd node/
+```
+
+顺便说一下,如果你遇到了问题,可以把这篇文章的第一部分仅仅当作一个总体介绍。在后面,为了帮你排除常见问题,我们将针对基于 Debian 和基于 RedHat 的发行版做更详细的解释。
+
+不管怎样,无论你是使用 `git` 还是下载 ZIP 压缩包,现在你的当前目录下都应该有了相同的源代码文件:
+
+```
+sh$ ls
+android-configure BUILDING.md common.gypi doc Makefile src
+AUTHORS CHANGELOG.md configure GOVERNANCE.md node.gyp test
+benchmark CODE_OF_CONDUCT.md CONTRIBUTING.md lib node.gypi tools
+BSDmakefile COLLABORATOR_GUIDE.md deps LICENSE README.md vcbuild.bat
+```
+
+### 第 2 步:理解程序的构建系统
+
+我们通常说“编译源代码”,但实际上,编译只是从源代码生成可用软件所需的多个阶段中的一个。构建系统是一套工具和实践,用来自动化并组织这些不同的任务,使得只需要发出几条命令就可以完整地构建整个软件。
+
+虽然概念很简单,但现实要复杂一些。因为不同的项目或编程语言可能有不同的要求,或者因为程序员的个人偏好,或者因为所支持的平台,或者因为历史的原因,或者……或者……选择或创建另一个构建系统的理由几乎无穷无尽。所有这些都说明,实际使用中的解决方案有很多种。
+
+NodeJS 使用 [GNU 风格的构建系统][16],这在开源社区中是一个很流行的选择,也是一个开始学习的好起点。
+
+编写和调优一个构建系统是一个非常复杂的任务。但是对“最终用户”来说,GNU 风格的构建系统归结为两个工具:`configure` 和 `make`。
+
+`configure` 文件是项目专用的脚本,为了确保项目可以被构建,它将检查目标系统配置和可用功能,最后使用当前平台专用的脚本来处理构建工作。
+
+一个典型的 `configure` 作业的重要部分是去构建 `Makefile`。这个文件包含有效构建项目所需的指令。
+
+另一方面,[`make` 工具][17] 是一个可用于任何类 Unix 系统的 POSIX 工具。它会读取项目专用的 `Makefile`,然后执行所需的操作去构建和安装你的程序。
+
+但是,就像在 Linux 世界中常见的那样,你仍然有一定的自由度,可以按自己的特定需求去定制构建过程。
+
+```
+./configure --help
+```
+
+`configure --help` 命令将展示所有可用的配置选项。再强调一下,这些选项是项目专用的。说实话,有时候,在你完全理解每个配置选项的作用之前,需要先深入到项目中去好好研究一下。
+
+但是,这里至少有一个标准的 GNU 自动化工具选项,它就是众所周知的 `--prefix` 选项。它与文件系统的层次结构有关,它是你软件要安装的位置。
+
+
+### 第 3 步:文件系统层次化标准(FHS)
+
+大部分典型的 Linux 分发版的文件系统层次结构都遵从 [文件系统层次化标准(FHS)][19]。
+
+这个标准说明了你的系统中各种目录的用途,比如,`/usr`、`/tmp`、`/var` 等等。
+
+当使用 GNU 自动化工具 _和大多数其它的构建系统_ 时,它的默认安装位置都在你的系统的 `/usr/local` 目录中。依据 FHS 中 _“/usr/local 层级是为系统管理员安装软件的位置使用的,它在系统软件更新时是覆盖安全的。它可以被用于一个主机组中,在 /usr 中找不到的、可共享的程序和数据”_ ,因此,它是一个非常好的选择。
+
+`/usr/local` 层次以某种方式复制了 root 目录,并且你可以在 `/usr/local/bin` 这里找到可执行程序,在 `/usr/local/lib` 中是库,在 `/usr/local/share` 中是架构依赖文件,等等。
+
+使用 `/usr/local` 树作为你定制安装的软件位置的唯一问题是,你的软件将在这里混杂在一起。尤其是你安装了多个软件之后,将很难去准确地跟踪 `/usr/local/bin` 和 `/usr/local/lib` 到底属于哪个软件。它虽然不足以在你的系统上产生问题。毕竟,`/usr/bin` 是很混乱的。但是,它在你想去手工卸载已安装的软件时会将成为一个问题。
+
+去解决这个问题,我通常喜欢安装定制的软件到 `/opt` 子目录下。再次引用 FHS:
+
+ _“`/opt` 是为安装附加的应用程序软件包而保留的。安装在 `/opt` 下的软件包必须把它的静态文件放在单独的 `/opt/<package>` 或者 `/opt/<provider>` 目录中,其中 `<package>` 是描述该软件包的名字,而 `<provider>` 是提供者在 LANANA 注册的名字。”_(译者注:LANANA 是指 The Linux Assigned Names And Numbers Authority,http://www.lanana.org/ )
+
+因此,我们将在 `/opt` 下创建一个子目录,用于我们定制的 NodeJS 的安装。并且,如果有一天我想去卸载它,我只是很简单地去删除那个目录:
+
+```
+sh$ sudo mkdir /opt/node-v8.1.1
+sh$ sudo ln -sT node-v8.1.1 /opt/node
+# What is the purpose of the symbolic link above?
+# Read the article till the end--then try to answer that
+# question in the comment section!
+
+sh$ ./configure --prefix=/opt/node-v8.1.1
+sh$ make -j9 && echo ok
+# -j9 means run up to 9 parallel tasks to build the software.
+# As a rule of thumb, use -j(N+1) where N is the number of cores
+# of your system. That will maximize the CPU usage (one task per
+# CPU thread/core + a provision of one extra task when a process
+# is blocked by an I/O operation.
+```
+
+在你运行完 `make` 命令之后,如果显示的不是 “ok”,那就意味着构建过程中出现了错误。由于我们使用了 `-j` 选项进行并行构建,在构建系统的大量输出中检索错误信息并不是件容易的事。
+
+在这种情况下,只能是重新开始 `make`,并且不要使用 `-j` 选项。这样错误将会出现在输出信息的最后面:
+
+```
+sh$ make
+```
+
+最终,编译结束后,你可以运行这个命令去安装你的软件:
+
+```
+sh$ sudo make install
+```
+
+然后测试它:
+
+```
+sh$ /opt/node/bin/node --version
+v8.1.1
+```
+
+### B. 如果在源代码安装的过程中出现错误怎么办?
+
+我上面介绍的内容,基本上就是你在那些文档完善的项目的“构建指南”页面上能看到的东西。但是,鉴于本文的目标是让你第一次从源代码编译软件,花点时间研究一些常见的问题是值得的。因此,我将把整个过程重新来一遍,不过这次是在最新的、最小化安装的 Debian 9.0 和 CentOS 7.0 系统上。这样你就可以看到我遇到的错误,以及我是怎么解决它们的。
+
+### 在 Debian 9.0 “Stretch” 上
+
+```
+itsfoss@debian:~$ git clone --depth 1 \
+ --branch v8.1.1 \
+ https://github.com/nodejs/node
+-bash: git: command not found
+```
+
+这个问题非常容易去诊断和解决。仅仅是去安装这个 `git` 包:
+
+```
+itsfoss@debian:~$ sudo apt-get install git
+```
+
+```
+itsfoss@debian:~$ git clone --depth 1 \
+ --branch v8.1.1 \
+ https://github.com/nodejs/node && echo ok
+[...]
+ok
+```
+
+```
+itsfoss@debian:~/node$ sudo mkdir /opt/node-v8.1.1
+itsfoss@debian:~/node$ sudo ln -sT node-v8.1.1 /opt/node
+```
+
+现在没有问题了。
+
+```
+itsfoss@debian:~/node$ ./configure --prefix=/opt/node-v8.1.1/
+WARNING: failed to autodetect C++ compiler version (CXX=g++)
+WARNING: failed to autodetect C compiler version (CC=gcc)
+Node.js configure error: No acceptable C compiler found!
+ Please make sure you have a C compiler installed on your system and/or
+ consider adjusting the CC environment variable if you installed
+ it in a non-standard prefix.
+```
+
+很显然,编译一个项目,你需要一个编译器。NodeJS 是使用 [C++ language][20] 写的,我们需要一个 C++ [编译器][21]。在这里我将安装 `g++`,它就是为这个目的写的 GNU C++ 编译器:
+
+```
+itsfoss@debian:~/node$ sudo apt-get install g++
+itsfoss@debian:~/node$ ./configure --prefix=/opt/node-v8.1.1/ && echo ok
+[...]
+ok
+```
+
+```
+itsfoss@debian:~/node$ make -j9 && echo ok
+-bash: make: command not found
+```
+
+还差一个其它工具。同样的症状。同样的解决方案:
+
+```
+itsfoss@debian:~/node$ sudo apt-get install make
+itsfoss@debian:~/node$ make -j9 && echo ok
+[...]
+ok
+```
+
+```
+itsfoss@debian:~/node$ sudo make install
+[...]
+itsfoss@debian:~/node$ /opt/node/bin/node --version
+v8.1.1
+```
+
+成功!
+
+请注意:我是一步一步安装各种工具的,以便展示怎么去诊断和解决编译问题。但是,如果你搜索关于这个主题的更多文档,或者阅读其它教程,你会发现,很多发行版提供了“元包(meta-packages)”,它像一把伞一样,把编译软件所需的一系列乃至全部常用工具一次性装好。在基于 Debian 的系统上,你或许遇到过 [build-essential][22] 包,它就是做这个用的。在基于 Red Hat 的发行版中,对应的是 _“Development Tools”(开发工具)_ 组。
+
+### 在 CentOS 7.0 上
+
+```
+[itsfoss@centos ~]$ git clone --depth 1 \
+ --branch v8.1.1 \
+ https://github.com/nodejs/node
+-bash: git: command not found
+```
+
+命令没有找到?可以用 `yum` 包管理器去安装它:
+
+```
+[itsfoss@centos ~]$ sudo yum install git
+```
+
+```
+[itsfoss@centos ~]$ git clone --depth 1 \
+ --branch v8.1.1 \
+ https://github.com/nodejs/node && echo ok
+[...]
+ok
+```
+
+```
+[itsfoss@centos ~]$ sudo mkdir /opt/node-v8.1.1
+[itsfoss@centos ~]$ sudo ln -sT node-v8.1.1 /opt/node
+```
+
+```
+[itsfoss@centos ~]$ cd node
+[itsfoss@centos node]$ ./configure --prefix=/opt/node-v8.1.1/
+WARNING: failed to autodetect C++ compiler version (CXX=g++)
+WARNING: failed to autodetect C compiler version (CC=gcc)
+Node.js configure error: No acceptable C compiler found!
+
+ Please make sure you have a C compiler installed on your system and/or
+ consider adjusting the CC environment variable if you installed
+ it in a non-standard prefix.
+```
+
+你已经知道了:NodeJS 是使用 C++ 语言写的,但是我的系统上缺少相应的编译器。Yum 可以帮到你。不过,由于我并不经常使用 CentOS,在互联网上查找提供 g++ 编译器的软件包的确切名字时,我还真费了点功夫。这个页面指导了我:[https://superuser.com/questions/590808/yum-install-gcc-g-doesnt-work-anymore-in-centos-6-4][23]
+
+```
+[itsfoss@centos node]$ sudo yum install gcc-c++
+[itsfoss@centos node]$ ./configure --prefix=/opt/node-v8.1.1/ && echo ok
+[...]
+ok
+```
+
+```
+[itsfoss@centos node]$ make -j9 && echo ok
+[...]
+ok
+```
+
+```
+[itsfoss@centos node]$ sudo make install && echo ok
+[...]
+ok
+```
+
+```
+[itsfoss@centos node]$ /opt/node/bin/node --version
+v8.1.1
+```
+
+再次成功!
+
+### C. 从源代码中对要安装的软件做一些改变
+
+你从源代码中安装一个软件,可能是因为你的分发仓库中没有一个可用的特定版本。或者因为你想去 _修改_ 那个程序。也可能是修复一个 bug 或者增加一个特性。毕竟,开源软件这些都可以做到。因此,我将抓住这个机会,让你亲自体验怎么去编译你自己的软件。
+
+在这里,我将对 NodeJS 的源代码做一个小改动。然后,我们将看到这个改动是否会被纳入到编译出来的软件版本中:
+
+用你喜欢的 [文本编辑器][24](如,vim、nano、gedit、 … )打开文件 `node/src/node.cc`。然后,尝试找到如下的代码片段:
+
+```
+ if (debug_options.ParseOption(argv[0], arg)) {
+ // Done, consumed by DebugOptions::ParseOption().
+ } else if (strcmp(arg, "--version") == 0 || strcmp(arg, "-v") == 0) {
+ printf("%s\n", NODE_VERSION);
+ exit(0);
+ } else if (strcmp(arg, "--help") == 0 || strcmp(arg, "-h") == 0) {
+ PrintHelp();
+ exit(0);
+ }
+```
+
+它在 [文件的 3830 行][25] 附近。然后,修改包含 `printf` 的行,将它替换成如下内容:
+
+```
+ printf("%s (compiled by myself)\n", NODE_VERSION);
+```
+
+然后,返回到你的终端。在继续之前,_同时也为了对强大的 git 工具多一点信心_,你可以检查一下自己是否修改了正确的文件:
+
+```
+diff --git a/src/node.cc b/src/node.cc
+index bbce1022..a5618b57 100644
+--- a/src/node.cc
++++ b/src/node.cc
+@@ -3828,7 +3828,7 @@ static void ParseArgs(int* argc,
+ if (debug_options.ParseOption(argv[0], arg)) {
+ // Done, consumed by DebugOptions::ParseOption().
+ } else if (strcmp(arg, "--version") == 0 || strcmp(arg, "-v") == 0) {
+- printf("%s\n", NODE_VERSION);
++ printf("%s (compiled by myself)\n", NODE_VERSION);
+ exit(0);
+ } else if (strcmp(arg, "--help") == 0 || strcmp(arg, "-h") == 0) {
+ PrintHelp();
+```
+
+在你改变的行之前,你将看到一个 “-” (减号标志)。而在改变之后的行前面有一个 “+” (加号标志)。
+
+现在可以去重新编译并重新安装你的软件了:
+
+```
+make -j9 && sudo make install && echo ok
+[...]
+ok
+```
+
+这个时候,可能失败的唯一原因就是你改变代码时的输入错误。如果就是这种情况,在文本编辑器中重新打开 `node/src/node.cc` 文件并修复错误。
+
+一旦你成功编译并安装了这个新修改过的 NodeJS 版本,就可以检查你的修改是否已经包含到软件中:
+
+```
+itsfoss@debian:~/node$ /opt/node/bin/node --version
+v8.1.1 (compiled by myself)
+```
+
+恭喜你!你对一个开源程序做出了你的第一个修改!
+
+### D. 让 shell 定位到定制构建的软件
+
+到目前为止,你可能注意到,我通常启动我新编译的 NodeJS 软件是通过指定一个到二进制文件的绝对路径。
+
+```
+/opt/node/bin/node
+```
+
+这是可以正常工作的,但是太麻烦了。实际上有两种常见的办法可以解决这个问题。但是要理解它们,你必须首先明白:shell 是通过在[环境变量][26] `PATH` 所指定的目录中查找,来定位可执行文件的。
+
+```
+itsfoss@debian:~/node$ echo $PATH
+/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+```
+
+在 Debian 系统上,如果你不指定一个精确的目录做为命令名字的一部分,shell 将首先在 `/usr/local/bin` 中查找可执行程序,如果没有找到,然后进入 `/usr/bin` 中查找,如果没有找到,然后进入 `/bin`查找,如果没有找到,然后进入 `/usr/local/games` 查找,如果没有找到,然后进入 `/usr/games` 查找,如果没有找到,那么,shell 将报告一个错误,_“command not found”_。
+
+由此,我们可以知道有两种方法去确保命令可以被 shell 访问到:通过将它增加到已经配置好的 `PATH` 目录中,或者将包含可执行程序的目录添加到 `PATH` 中。
+
+### 从 /usr/local/bin 中添加一个链接
+
+仅仅把 node 二进制可执行文件从 `/opt/node/bin` _拷贝_ 到 `/usr/local/bin` 是一个错误的做法。因为如果这么做,可执行程序将无法定位到 `/opt/node/` 中它需要的其它组件(软件在自己的安装位置查找所需资源文件是一种常见的做法)。
+
+因此,传统的做法是去使用一个符号链接:
+
+```
+itsfoss@debian:~/node$ sudo ln -sT /opt/node/bin/node /usr/local/bin/node
+itsfoss@debian:~/node$ which -a node || echo not found
+/usr/local/bin/node
+itsfoss@debian:~/node$ node --version
+v8.1.1 (compiled by myself)
+```
+
+这是一个简单而有效的解决办法,尤其是当一个软件包只包含少数几个用户会直接调用的可执行程序时,因为你需要为每个用户可调用的命令创建一个符号链接。例如,如果你熟悉 NodeJS,你就会知道它还有一个配套的 `npm` 程序,我也应该为它在 `/usr/local/bin` 中创建符号链接。不过这个就留给你作为练习了。
+
+### 修改 PATH
+
+首先,如果你尝试过前面的解决方案,请先移除之前创建的 node 符号链接,从一个干净的状态开始:
+
+```
+itsfoss@debian:~/node$ sudo rm /usr/local/bin/node
+itsfoss@debian:~/node$ which -a node || echo not found
+not found
+```
+
+现在,这里有一个不可思议的命令去改变你的 `PATH`:
+
+```
+itsfoss@debian:~/node$ export PATH="/opt/node/bin:${PATH}"
+itsfoss@debian:~/node$ echo $PATH
+/opt/node/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+```
+
+简单说就是,我把 `/opt/node/bin` 作为前缀加到了环境变量 `PATH` 原有内容的前面。因此,你可以想象得到,shell 现在会首先在 `/opt/node/bin` 目录中查找可执行程序。我们也可以使用 `which` 命令来确认一下:
+
+```
+itsfoss@debian:~/node$ which -a node || echo not found
+/opt/node/bin/node
+itsfoss@debian:~/node$ node --version
+v8.1.1 (compiled by myself)
+```
+
+“符号链接”方案是永久性的,只要把符号链接创建到 `/usr/local/bin` 就一直有效;而对 `PATH` 的改变只在当前 shell 中生效。你可以自己研究一下如何让对 `PATH` 的改变永久生效。提示一下:可以把它写到你的 “profile” 文件中。如果你找到了解决方案,不要犹豫,通过下面的评论区分享给其它读者!
+
+### E. 怎么去卸载刚才从源代码中安装的软件
+
+因为我们定制编译的 NodeJS 软件全部在 `/opt/node-v8.1.1` 目录中,卸载它不需要做太多的工作,仅使用 `rm` 命令去删除那个目录即可:
+
+```
+sudo rm -rf /opt/node-v8.1.1
+```
+
+注意:`sudo` 和 `rm -rf` 是 “非常危险的鸡尾酒!”,一定要在按下 “enter” 键之前多检查几次你的命令。你不会得到任何的确认信息,并且如果你删除了错误的目录它是不可恢复的 …
+
+然后,如果你修改了你的 `PATH`,你可以去恢复这些改变。它一点也不复杂。
+
+如果你从 `/usr/local/bin` 创建了一个符号链接,你应该去删除它们:
+
+```
+itsfoss@debian:~/node$ sudo find /usr/local/bin \
+ -type l \
+ -ilname "/opt/node/*" \
+ -print -delete
+/usr/local/bin/node
+```
+
+### 等等? 依赖地狱在哪里?
+
+最后再说一点:如果你读过一些关于编译定制软件的文档,你可能听说过 [依赖地狱][27]。这是对那种烦人情况的一个昵称:在你能够成功编译一个软件之前,必须先编译它所依赖的某个库,而这个库又可能依赖其它的库,并且这些库还有可能与你系统上已经安装的其它软件不兼容。
+
+解决这些依赖关系,确保你系统上的各种软件使用兼容的库,并且按正确的顺序安装它们,正是发行版的包维护者的部分工作。
+
+在这篇文章中,我特意选择了 NodeJS 去安装,是因为它几乎没有依赖。我说 “几乎” 是因为,实际上,它 _有_ 依赖。但是,这些源代码的依赖已经预置到项目的源仓库中(在 `node/deps` 子目录下),因此,在你动手编译之前,你不用手动去下载和安装它们。
+
+如果你有兴趣了解更多关于这个问题的知识,以及学习怎么去处理它,请在下面的评论区告诉我,它会是一篇更高阶文章的好主题!
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+充满激情的工程师,职业是教师,我的目标是:热心分享我所教的内容,并让我的学生自己培养它们的技能。你也可以在我的网站上联系到我。
+
+--------------------
+
+via: https://itsfoss.com/install-software-from-source-code/
+
+作者:[Sylvain Leroux ][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/sylvain/
+[1]:https://itsfoss.com/author/sylvain/
+[2]:https://itsfoss.com/install-software-from-source-code/#comments
+[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Finstall-software-from-source-code%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
+[4]:https://twitter.com/share?original_referer=/&text=How+to+Install+Software+from+Source+Code%E2%80%A6+and+Remove+it+Afterwards&url=https://itsfoss.com/install-software-from-source-code/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=Yes_I_Know_IT
+[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Finstall-software-from-source-code%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
+[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Finstall-software-from-source-code%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
+[7]:https://www.reddit.com/submit?url=https://itsfoss.com/install-software-from-source-code/&title=How+to+Install+Software+from+Source+Code%E2%80%A6+and+Remove+it+Afterwards
+[8]:https://itsfoss.com/remove-install-software-ubuntu/
+[9]:https://nodejs.org/en/
+[10]:https://github.com/nodejs/node
+[11]:https://en.wikipedia.org/wiki/GitHub
+[12]:https://en.wikipedia.org/wiki/Git
+[13]:https://en.wikipedia.org/wiki/Version_control
+[14]:https://stackoverflow.com/questions/1457103/how-is-a-tag-different-from-a-branch-which-should-i-use-here
+[15]:https://en.wikipedia.org/wiki/Graphical_user_interface
+[16]:https://en.wikipedia.org/wiki/GNU_Build_System
+[17]:https://en.wikipedia.org/wiki/Make_%28software
+[18]:https://itsfoss.com/pro-vim-tips/
+[19]:http://www.pathname.com/fhs/
+[20]:https://en.wikipedia.org/wiki/C%2B%2B
+[21]:https://en.wikipedia.org/wiki/Compiler
+[22]:https://packages.debian.org/sid/build-essential
+[23]:https://superuser.com/questions/590808/yum-install-gcc-g-doesnt-work-anymore-in-centos-6-4
+[24]:https://en.wikipedia.org/wiki/List_of_text_editors
+[25]:https://github.com/nodejs/node/blob/v8.1.1/src/node.cc#L3830
+[26]:https://en.wikipedia.org/wiki/Environment_variable
+[27]:https://en.wikipedia.org/wiki/Dependency_hell
diff --git a/translated/tech/20171014 Proxy Models in Container Environments.md b/translated/tech/20171014 Proxy Models in Container Environments.md
new file mode 100644
index 0000000000..4e5b329d68
--- /dev/null
+++ b/translated/tech/20171014 Proxy Models in Container Environments.md
@@ -0,0 +1,86 @@
+容器环境中的代理模型
+============================================================
+
+### 我们大多数人都熟悉代理如何工作,但在基于容器的环境中有什么不同?看看有什么改变。
+
+内联,side-arm,反向和前向。这些曾经是我们用来描述网络代理架构布局的术语。
+
+如今,容器使用一些相同的术语,但它们正在引入新的东西。这对我是个机会来阐述我最爱的所有主题:代理。
+
+云的主要驱动力之一(在我们摆脱了成本控制的白日梦之后)就是可扩展性。在过去五年的各种调查中,可扩展性一直在与敏捷性争夺头把交椅(有时甚至胜出),因为它是机构在云计算环境中部署应用时最想获得的能力。
+
+这在一定程度上是因为,在(我们如今所处的)数字经济中,应用已经成为实体店“营业/歇业”招牌的数字等价物,也成为了数字化客户服务的体现。缓慢、无响应的应用程序,就相当于把灯关掉或者商店人手不足。
+
+应用程序需要保持可用并且及时响应,才能满足需求。扩展是实现这一业务目标的技术手段。云不仅提供了扩展的能力,还提供了 _自动_ 扩展的能力。而要做到这一点,就需要一个负载均衡器,因为这正是我们扩展应用程序的方式:使用代理对流量/请求进行负载均衡。
+
+容器在扩展上与预期没有什么不同。容器必须进行扩展 - 并自动扩展 - 这意味着使用负载均衡器(代理)。
+
+如果你使用的是编排环境原生的能力,那你得到的是基于 TCP/UDP 的基本负载均衡。一般来说,容器内置的代理实现并不理解 HTTP 或其他应用层协议,除了普通的旧式负载均衡([POLB][1])之外不提供其他功能。这通常也足够好了,因为容器的扩展是在水平克隆的前提下进行的:要扩展一个应用程序,就添加一个副本,并把请求分发到它上面。在入口处(在[入口控制器][2]和 API 网关中)可以找到第 7 层(HTTP)路由功能,并且可以使用尽可能多(或更多)的应用路由来扩展应用程序。
+
+然而,在某些情况下,这还不够。如果你希望(或需要)更多以应用程序为中心的扩展或插入其他服务的能力,那么你将获得更健壮的产品,可提供可编程性或以应用程序为中心的可伸缩性,或者两者兼而有之。
+
+这意味着[插入代理][3]。你正在使用的容器编排环境在很大程度上决定了代理的部署模型,无论它是反向代理还是前向代理。为了让事情有趣,还有第三个模型 - sidecar - 这是由新兴的服务网格实现支持的可扩展性的基础。
+
+### 反向代理
+
+ [![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/unavailable_is_closed_thumb.png?ver=2017-09-12-082119-957 "Image title")][4]
+
+反向代理最接近于传统模型,在这种模型中,虚拟服务器接受所有传入请求,并将其分发到资源池(服务器中心,集群)中。
+
+每个“应用程序”有一个代理。任何想要连接到应用程序的客户端连接到代理,代理然后选择并转发请求到适当的实例。如果绿色应用想要与蓝色应用通信,它会向蓝色代理发送请求,蓝色代理会确定蓝色应用的两个实例中的哪一个应该响应该请求。
+
+在这个模型中,代理只关心它正在管理的应用程序。蓝色代理不关心与橙色代理关联的实例,反之亦然。
+
+### 前向代理
+
+ [![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/per-node_forward_proxy_thumb.jpg?ver=2017-09-14-072422-213)][5]
+
+这种模式更接近传统出站防火墙的模式。
+
+在这个模型中,每个容器 **节点** 都有一个关联的代理。如果客户端想要连接到特定的应用程序或服务,它会连接到运行该客户端的容器节点上的本地代理。然后代理选择一个合适的应用实例,并转发客户端的请求。
+
+橙色和蓝色的应用连接到与其节点相关的同一个代理。代理然后确定所请求的应用实例的哪个实例应该响应。
+
+在这个模型中,每个代理必须知道每个应用,以确保它可以将请求转发给适当的实例。
+
+### sidecar 代理
+
+ [![Image title](https://devcentral.f5.com/Portals/0/Users/038/38/38/per-pod_sidecar_proxy_thumb.jpg?ver=2017-09-14-072425-620)][6]
+
+这种模型也被称为服务网格路由。在这个模型中,每个**容器**都有自己的代理。
+
+如果客户想要连接到一个应用,它将连接到 sidecar 代理,它会选择一个合适的应用程序实例并转发客户端的请求。此行为与_前向代理_模型相同。
+
+sidecar 和前向代理之间的区别在于,sidecar 代理不需要修改容器编排环境。例如,为了插入一个前向代理到 k8s,你需要代理_和_一个 kube-proxy 的替代。sidecar 代理不需要此修改,因为应用会自动连接到 “sidecar” 代理而不是通过代理路由。
+
+### 总结
+
+每种模式都有其优点和缺点。这三种模式都依赖环境数据(遥测数据和配置变更),也都需要融入生态系统的能力。有些模型是由你所选择的环境预先决定的,因此,在确定模型之前,需要仔细考虑并评估将来的需求:服务插入、安全性、网络复杂性等。
+
+在容器及其在企业中的发展方面,我们还处于早期阶段。随着它们继续延伸到生产环境中,了解容器化环境发布的应用程序的需求以及它们在代理模型实现上的差异是非常重要的。
+
+这篇文章写得有些仓促,今天就先到这里。
+
+--------------------------------------------------------------------------------
+
+via: https://dzone.com/articles/proxy-models-in-container-environments
+
+作者:[Lori MacVittie ][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://dzone.com/users/307701/lmacvittie.html
+[1]:https://f5.com/about-us/blog/articles/go-beyond-polb-plain-old-load-balancing
+[2]:https://f5.com/about-us/blog/articles/ingress-controllers-new-name-familiar-function-27388
+[3]:http://clouddocs.f5.com/products/asp/v1.0/
+[4]:https://devcentral.f5.com/Portals/0/Users/038/38/38/unavailable_is_closed.png?ver=2017-09-12-082118-160
+[5]:https://devcentral.f5.com/Portals/0/Users/038/38/38/per-node_forward_proxy.jpg?ver=2017-09-14-072419-667
+[6]:https://devcentral.f5.com/Portals/0/Users/038/38/38/per-pod_sidecar_proxy.jpg?ver=2017-09-14-072424-073
+[7]:https://dzone.com/users/307701/lmacvittie.html
+[8]:https://dzone.com/users/307701/lmacvittie.html
+[9]:https://dzone.com/articles/proxy-models-in-container-environments#
+[10]:https://dzone.com/cloud-computing-tutorials-tools-news
+[11]:https://dzone.com/articles/proxy-models-in-container-environments#
+[12]:https://dzone.com/go?i=243221&u=https%3A%2F%2Fget.platform9.com%2Fjzlp-kubernetes-deployment-models-the-ultimate-guide%2F
diff --git a/translated/tech/20171031 How to use SVG as a Placeholder and Other Image Loading Techniques.md b/translated/tech/20171031 How to use SVG as a Placeholder and Other Image Loading Techniques.md
new file mode 100644
index 0000000000..32c24db6dc
--- /dev/null
+++ b/translated/tech/20171031 How to use SVG as a Placeholder and Other Image Loading Techniques.md
@@ -0,0 +1,238 @@
+怎么去使用 SVG 作为一个占位符,以及其它图像加载技术
+============================================================
+
+![](https://cdn-images-1.medium.com/max/1563/0*zJGl1vKLttcJGIL4.jpg)
+从图像生成 SVG,可以用作占位符。继续阅读!
+
+我对怎么去让 web 性能更优化和图像加载的更快充满了热情。对这些感兴趣的领域中的其中一项研究就是占位符:当图像还没有被加载的时候应该去展示些什么?
+
+前些天,我偶然发现了一些使用 SVG 的加载技术,我将在这篇文章中介绍它们。
+
+在这篇文章中我们将涉及如下的主题:
+
+* 不同的占位符类型的概述
+
+* 基于 SVG 的占位符(边缘、形状、和轮廓)
+
+* 自动化处理
+
+### 不同的占位符类型的概述
+
+以前我写过 [一篇关于占位符和图像延迟加载(lazy-loading)的文章][28],也做过 [相关的讨论][29]。当对图像进行延迟加载时,考虑提供一个东西作为占位符是一个很好的主意,因为它可能会在很大程度上影响用户的感知体验。以前我提供过几个选项:
+
+
+![](https://cdn-images-1.medium.com/max/1563/0*jlMM144vAhH-0bEn.png)
+
+在图像被加载之前,有几种办法去填充图像区域。
+
+* 在图像区保持空白:在响应式设计的环境中,这种方式可以防止内容跳动。这种布局变动从用户体验的角度来看是很糟糕的做法,对性能也不利:浏览器每次获取到图像尺寸后,都被迫重新计算布局,为图像留出空间。
+
+* 占位符:假设我们要显示的是用户头像,我们可以先在背景上显示一个人形轮廓。它会一直显示到实际的图像加载完成,也可以在请求失败或者用户根本没有设置头像时使用。这些图像一般都是矢量图,并且因为尺寸非常小,很适合作为内联图片。
+
+* 纯色:从图像中取一个颜色样本,并将其作为占位符的背景颜色。这可以是占主导地位的颜色,也可以是最鲜艳的颜色……其思路是,它基于你正在加载的图像,有助于让“无图像”到“图像加载完成”之间的过渡更平滑。
+
+* 模糊的图像:也被称为模糊化(blur-up)技术。先渲染一个极小版本的图像,然后再过渡到完整的图像。最初的图像无论是像素尺寸还是字节大小都非常小。为了去除伪影,这个图像会被放大并模糊化。我在之前写的 [Medium 是怎么做渐进式图像加载的][1]、[使用 WebP 去创建极小的预览图像][2],以及 [渐进式图像加载的更多示例][3] 中讨论过这方面的内容。
+
+结果是,还有其它的更多的变化,并且许多聪明的人开发了其它的创建占位符的技术。
+
+其中一个就是使用渐变色代替纯色。渐变可以为最终图像创建一个更精确的预览,而代价只是增加了一点点载荷。
+
+
+![](https://cdn-images-1.medium.com/max/1250/0*ecPkBAl69ayvRctn.jpg)
+使用渐变作为背景。来自 Gradify 的截屏,它现在并不在线,代码 [在 GitHub][4]。
+
+另一种技术是使用 SVG,最近的一些实验和尝试让它得到了一些关注。
+
+### 基于 SVG 的占位符
+
+我们知道 SVG 非常适合矢量图像。而在大多数情况下,我们要加载的是位图,所以问题就变成了:怎么把一个图像矢量化。可选的方法有使用边缘、形状和轮廓等。
+
+#### 边缘
+
+在 [前面的文章中][30],我解释了怎么找出一个图像的边缘并创建动画。我最初的目标是尝试绘制区域,把图像矢量化,但是我不知道该怎么做。后来我意识到,使用边缘本身也可以很有创意,于是我决定让它们动起来,创造出一种“绘制”的效果。
+
+[在以前,使用边缘检测绘制图像和 SVG 动画,在 SVG 中基本上不被使用和支持的。一段时间以后,我们开始用它去作为一个有趣的替代 … medium.com][31][][32]
+
+#### 形状
+
+SVG 也可以用于去从图像中绘制区域而不是边缘/边界。用这种方法,我们可以矢量化一个位图去创建一个占位符。
+
+在以前,我尝试去用三角形做类似的事情。你可以在我的 [at CSSConf][33] 和 [Render Conf][34] 的演讲中看到它。
+
+
+上面的 codepen 是一个概念验证:一个由 245 个三角形组成的基于 SVG 的占位符。这些三角形是使用 [Possan's polyserver][36],基于 [Delaunay 三角剖分][35] 生成的。正如预期的那样,使用的三角形越多,文件尺寸就越大。
+
+#### Primitive 和 SQIP,一个基于 SVG 的 LQIP 技术
+
+Tobias Baldauf 正在研究另一种使用 SVG 的低质量图像占位符技术,称为 [SQIP][37]。在深入研究 SQIP 之前,我先简单介绍一下 [Primitive][38],SQIP 正是基于这个库实现的。
+
+Primitive 非常吸引人,我强烈建议你去了解一下。它能把一个位图转换成由相互重叠的形状组成的 SVG。它的尺寸很小,适合直接内联到页面中:既省去了一次往返请求,又能在初始的 HTML 载荷中提供一个有意义的占位符。
+
+Primitive 基于像三角形、长方形、和圆形等形状去生成一个图像。在每一步中它增加一个新形状。很多步之后,图像的结果看起来非常接近原始图像。如果你输出的是 SVG,它意味着输出代码的尺寸将很大。
+
+为了理解 Primitive 是怎么工作的,我通过几个图像来跑一下它。我用 10 个形状和 100 个形状来为这个插画生成 SVGs:
+
+
+![](https://cdn-images-1.medium.com/max/625/1*y4sr9twkh_WyZh6h0yH98Q.png)
+
+
+![](https://cdn-images-1.medium.com/max/625/1*cqyhYnx83LYvhGdmg2dFDw.png)
+
+![](https://cdn-images-1.medium.com/max/625/1*qQP5160gPKQdysh0gFnNfw.jpeg)
+使用 Primitive 处理 [这张图片][5],分别使用 [10 个形状][6] 和 [100 个形状][7]。
+
+
+![](https://cdn-images-1.medium.com/max/625/1*PWZLlC4lrLO4CVv1GwR7qA.png)
+
+
+
+![](https://cdn-images-1.medium.com/max/625/1*khnga22ldJKOZ2z45Srh8A.png)
+
+
+![](https://cdn-images-1.medium.com/max/625/1*N-20rR7YGFXiDSqIeIyOjA.jpeg)
+使用 Primitive 处理 [这张图片][8],分别使用 [10 个形状][9] 和 [100 个形状][10]。
+
+当使用 10 个形状时,我们就已经能大致看出原始图像的样子了。在图像占位符这个场景下,这种 SVG 很有潜力被用作占位符。实际上,使用 10 个形状的 SVG 代码非常小,大约是 1030 字节,用 SVGO 优化后会降到约 640 字节。
+
+```
+
+```
+
+正如我们预期的那样,使用 100 个形状生成的图像更大,经过 SVGO 优化后约为 5kB(优化前是 8kB)。它们的细节已经很好了,但仍然是一个很小的载荷。该使用多少个三角形,主要取决于图像的类型和细腻程度(比如对比度、颜色数量、复杂度)。
+
+还可以创建一个类似于 [cjpeg-dssim][39] 的脚本,自动调整所使用的形状数量,以满足某个 [结构相似性][40] 阈值(或者在最坏情况下限定一个最大数量)。
+
+这些生成的 SVG 也可以用作背景图像。由于尺寸受限且已经矢量化,它们很适合作为大尺寸图像和背景图的占位符。
+
+#### SQIP
+
+用 [Tobias 自己的话说][41]:
+
+> SQIP 尝试在这两个极端之间找到一种平衡:它使用 [Primitive][42] 生成一个由几个简单形状构成、近似于图像可见特征的 SVG,再使用 [SVGO][43] 优化这个 SVG,并为它添加高斯模糊滤镜。最终产生的 SVG 占位符大小只有约 800–1000 字节,在屏幕上看起来更平滑,并能提供图像内容的视觉提示。
+
+这个结果和使用极小的、经过模糊处理的占位符图像类似(这正是 [Medium][44] 和 [其它网站][45] 的做法)。区别在于它们使用的是位图(如 JPG 或 WebP),而这里使用的占位符是 SVG。
+
+如果我们使用 SQIP 而不是原始图像,我们将得到这样的效果:
+
+
+![](https://cdn-images-1.medium.com/max/938/0*yUY1ZFP27vFYgj_o.png)
+
+
+
+![](https://cdn-images-1.medium.com/max/938/0*DKoZP7DXFvUZJ34E.png)
+[第一张图片][11] 和 [第二张][12] 的输出图像使用了 SQIP。
+
+输出的 SVG 是 ~900 字节,并且检查代码,我们可以发现 `feGaussianBlur` 过滤应用到形状组上:
+
+```
+
+```
+
+SQIP 也可以输出一个 Base 64 编码的 SVG 内容的图像标签:
+
+```
+
+```
+
+#### 轮廓
+
+我们刚才看了使用边缘和 primitive 形状的 SVG。另外一种可能的方法是通过“描摹(tracing)”来矢量化图像。几天前,[Mikael Ainalem][47] 分享了 [一个 codepen][48],展示了怎么用两色轮廓作为占位符。结果非常漂亮:
+
+
+![](https://cdn-images-1.medium.com/max/1250/1*r6HbVnBkISCQp_UVKjOJKQ.gif)
+
+SVGs 在这种情况下是手工绘制的,但是,这种技术可以用工具快速生成并自动化处理。
+
+* [Gatsby][13],一个基于 React 的静态网站生成器,支持这种描摹出来的 SVG。它使用 [potrace 算法的一个 JS 移植][14] 来矢量化图像。
+
+* [Craft 3 CMS][15] 也增加了对图像描摹的支持。它使用 [potrace 算法的一个 PHP 移植][16]。
+
+
+* [image-trace-loader][17],一个使用了 Potrace 算法去处理图像的 Webpack 加载器。
+
+
+如果感兴趣,可以去看一下 Emil 的 webpack 加载器 (基于 potrace) 和 Mikael 的手工绘制 SVGs 之间的比较。
+
+
+当然,上面的输出使用的是 potrace 的默认选项。不过,这些选项是可以调整的。可以查看 [image-trace-loader 的选项][49],它们基本上就是 [传递给 potrace 的那些选项][50]。
+
+### 总结
+
+我们看到了从图像生成 SVG 并把它们用作占位符的多种工具和技术。就像 [WebP 是一种很适合缩略图的格式][51] 一样,SVG 也是一种很适合做占位符的有趣格式。我们可以控制细节的级别(以及它们的大小),它具有很高的可压缩性,并且很容易用 CSS 和 JS 进行处理。
+
+#### 额外的资源
+
+这篇文章曾登上 [Hacker News 的头条][52]。我对此非常感激,也感谢大家在那个页面的评论中给出的所有其它资源的链接。下面是其中的一部分。
+
+* [Geometrize][18] 是 Primitive 的一个用 Haxe 写的移植版本。它也有 [一个 JS 实现][19],你可以直接 [在浏览器里][20] 尝试。
+
+* [Primitive.js][21] 是 Primitive 的一个 JS 移植;还有 [primitive.nextgen][22],它是使用 Primitive.js 和 Electron 对 Primitive 桌面应用的移植。
+
+* 这里有两个 Twitter 帐户,里面你可以看到一些用 Primitive 和 Geometrize 生成的图像示例。访问 [@PrimitivePic][23] 和 [@Geometrizer][24]。
+
+* [imagetracerjs][25],它是在 JavaScript 中的光栅图像跟踪和矢量化程序。这里也有为 [Java][26] 和 [Android][27] 提供的端口。
+
+--------------------------------------------------------------------------------
+
+via: https://medium.freecodecamp.org/using-svg-as-placeholders-more-image-loading-techniques-bed1b810ab2c
+
+作者:[ José M. Pérez][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://medium.freecodecamp.org/@jmperezperez?source=post_header_lockup
+[1]:https://medium.com/@jmperezperez/how-medium-does-progressive-image-loading-fd1e4dc1ee3d
+[2]:https://medium.com/@jmperezperez/using-webp-to-create-tiny-preview-images-3e9b924f28d6
+[3]:https://medium.com/@jmperezperez/more-examples-of-progressive-image-loading-f258be9f440b
+[4]:https://github.com/fraser-hemp/gradify
+[5]:https://jmperezperez.com/assets/images/posts/svg-placeholders/pexels-photo-281184-square.jpg
+[6]:https://jmperezperez.com/assets/images/posts/svg-placeholders/pexels-photo-281184-square-10.svg
+[7]:https://jmperezperez.com/assets/images/posts/svg-placeholders/pexels-photo-281184-square-100.svg
+[8]:https://jmperezperez.com/assets/images/posts/svg-placeholders/pexels-photo-618463-square.jpg
+[9]:https://jmperezperez.com/assets/images/posts/svg-placeholders/pexels-photo-618463-square-10.svg
+[10]:https://jmperezperez.com/assets/images/posts/svg-placeholders/pexels-photo-618463-square-100.svg
+[11]:https://jmperezperez.com/assets/images/posts/svg-placeholders/pexels-photo-281184-square-sqip.svg
+[12]:https://jmperezperez.com/svg-placeholders/%28/assets/images/posts/svg-placeholders/pexels-photo-618463-square-sqip.svg
+[13]:https://www.gatsbyjs.org/
+[14]:https://www.npmjs.com/package/potrace
+[15]:https://craftcms.com/
+[16]:https://github.com/nystudio107/craft3-imageoptimize/blob/master/src/lib/Potracio.php
+[17]:https://github.com/EmilTholin/image-trace-loader
+[18]:https://github.com/Tw1ddle/geometrize-haxe
+[19]:https://github.com/Tw1ddle/geometrize-haxe-web
+[20]:http://www.samcodes.co.uk/project/geometrize-haxe-web/
+[21]:https://github.com/ondras/primitive.js
+[22]:https://github.com/cielito-lindo-productions/primitive.nextgen
+[23]:https://twitter.com/PrimitivePic
+[24]:https://twitter.com/Geometrizer
+[25]:https://github.com/jankovicsandras/imagetracerjs
+[26]:https://github.com/jankovicsandras/imagetracerjava
+[27]:https://github.com/jankovicsandras/imagetracerandroid
+[28]:https://medium.com/@jmperezperez/lazy-loading-images-on-the-web-to-improve-loading-time-and-saving-bandwidth-ec988b710290
+[29]:https://www.youtube.com/watch?v=szmVNOnkwoU
+[30]:https://medium.com/@jmperezperez/drawing-images-using-edge-detection-and-svg-animation-16a1a3676d3
+[31]:https://medium.com/@jmperezperez/drawing-images-using-edge-detection-and-svg-animation-16a1a3676d3
+[32]:https://medium.com/@jmperezperez/drawing-images-using-edge-detection-and-svg-animation-16a1a3676d3
+[33]:https://jmperezperez.com/cssconfau16/#/45
+[34]:https://jmperezperez.com/renderconf17/#/46
+[35]:https://en.wikipedia.org/wiki/Delaunay_triangulation
+[36]:https://github.com/possan/polyserver
+[37]:https://github.com/technopagan/sqip
+[38]:https://github.com/fogleman/primitive
+[39]:https://github.com/technopagan/cjpeg-dssim
+[40]:https://en.wikipedia.org/wiki/Structural_similarity
+[41]:https://github.com/technopagan/sqip
+[42]:https://github.com/fogleman/primitive
+[43]:https://github.com/svg/svgo
+[44]:https://medium.com/@jmperezperez/how-medium-does-progressive-image-loading-fd1e4dc1ee3d
+[45]:https://medium.com/@jmperezperez/more-examples-of-progressive-image-loading-f258be9f440b
+[46]:http://www.w3.org/2000/svg
+[47]:https://twitter.com/mikaelainalem
+[48]:https://codepen.io/ainalem/full/aLKxjm/
+[49]:https://github.com/EmilTholin/image-trace-loader#options
+[50]:https://www.npmjs.com/package/potrace#parameters
+[51]:https://medium.com/@jmperezperez/using-webp-to-create-tiny-preview-images-3e9b924f28d6
+[52]:https://news.ycombinator.com/item?id=15696596
diff --git a/translated/tech/20171116 5 Coolest Linux Terminal Emulators.md b/translated/tech/20171116 5 Coolest Linux Terminal Emulators.md
deleted file mode 100644
index 692ba31828..0000000000
--- a/translated/tech/20171116 5 Coolest Linux Terminal Emulators.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# 5 款最酷的linux终端模拟器
-============================================================
-
-
-![Cool retro term](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner2.png)
-Carla Schroder 看着那些她喜欢的终端模拟器, 包括展示在这儿的 Cool Retro Term。[许可才能使用][4]
-
-当然, 我们可以使用老旧的 GNOME 终端, Konsole, 和有趣的、虚弱的、老旧的 xterm。 然而, 当你带着尝试某种新东西的心境, 回过头来看看这 5 款酷炫并且实用的 Linux 终端。
-
-### Xiki
-
-首先我要推荐的第一个终端是 [Xiki][10]。 Xiki 是 Craig Muth 的智慧结晶, 他是一个有才的程序员和有趣的人 (幽默, 可能还有其他意义)。 很久以前我写了 Xiki, 在 [遇见 Xiki, Linux 和 Mac OS X 下革命性的命令行Shell][11]。 Xiki 不仅仅是另一款终端模拟器; 它是一个扩展命令行范围、加快命令行速度的交互式环境。
-
-
-Xiki 支持鼠标,并且能运行在绝大多数命令行Shell上。 它有大量的屏显帮助,而且使用鼠标和键盘能快速导航。 它在速度体现上的一个简单的例子就是增强了 `ls` 命令。 Xiki 在文件系统上可以实现多级缩放, 而不用持续的重复输入 `ls` 或者 `cd`, 或者使用巧妙的正则表达式。
-
-Xiki 集成了许多文本编辑器, 提供了一个永久的便签, 有一个快速搜索引擎, 同时像他们所说的, 也有许许多多。 Xiki 是如此的有特色、如此的不同, 最快的方式来学习和了解它可以看 [Craig 的有趣和实用的视频][12]。
-
-### Cool Retro Term
-
-我推荐 [Cool Retro Term][13] (上面主页上的显示) 主要因为它的外观,以及它的实用性。 它将我们带回了阴极射线管显示器的时代, 这不是很久以前, 而我并没有怀旧, 因我的冷屏死亡而撬液晶显示屏, 它基于 [Konsole][14], 因此有着 Konsole 的优秀功能。通过 Cool Retro Term 的配置文件菜单来改变它的外观。配置文件包括 Amber, Green, Pixelated, Apple ][, 和 Transparent Green, 以及一个现实的扫描线。它们中并不是所有的都是有用的, 例如 Vintage 配置文件就像一个现实的濒死的屏幕那样弯曲和闪烁。
-
-Cool Retro Term 的 GitHub repository 有着详细的安装指南, 且 Ubuntu 用户有 [PPA][15]。
-
-### Sakura
-
-当你想要一个优秀的轻量级、易配置的终端, 尝试 [Sakura][16] (图 1)。 它依赖少, 不像 GNOME 终端 和 Konsole, 它们在 GNOME 和 KDE 中占大块。大多数选项是可以通过右键菜单配置的, 例如选项卡的标签, 颜色, 大小, 选项卡的字体、 铃声, 以及光标类型的默认值。 你可以设置更多的选项, 例如绑定快捷键, 在你个人的配置文件里面, `~/.config/sakura/sakura.conf`。
-
-![sakura](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_9.png)
-图 1: Sakura 是一个优秀的、轻量级的、可配置的终端。[许可才能使用][1]
-
-命令行选项详见 `man sakura`。 使用这些来从命令行启动 sakura, 或者在你的图形启动器上使用它们。 例如, 打开 4 个选项卡并设置窗口标题为 MyWindowTitle :
-
-```
-$ sakura -t MyWindowTitle -n 4
-```
-
-### Terminology
-
-[Terminology][17] 来自 Enlightenment 图形环境的郁葱可爱的世界,它能够被美化成任何你所想要的 (图 2)。 它有许多有用的功能: 独立的拆分窗口, 打开文件和 URLs, 文件图标, 选项卡, 并采取了更多其他功能。 它甚至能运行在没有图形界面的 Linux 控制台上。
-
-
-![Terminology](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-2_6.png)
-图 2: Terminology 也能够运行在没有图形界面的 Linux 控制台上。[许可才能使用][2]
-
-当你打开多个拆分窗口时,每个窗口都能设置不同的背景, 并且背景文件可以是任意媒体文件: 图像文件, 视频或者音乐文件。 它带有一堆黑色主题和透明度, 有的人需要可读的, 甚至一个 Nyan 猫主题。 没有滚动条, 因此需要使用组合键 Shift+PageUp 和 Shift+PageDown 进行上下导航。
-
-他有多个控件: 一个右键单击菜单, 上下文对话框, 以及命令行选项。 右键单击菜单里包含最小字体, 且 Miniview 可显示一个微观的文件树。 如果有选项能使我找不到的东西可读, 当你打开多个标签时, 点击小标签浏览器来打开一个可以上下滚动的选择器。 任何东西都是可配置的; 通过 `man terminology` 查看一系列的命令和选项, 包括一批不错的快捷键快捷方式。 奇怪的是, 这不包括我偶然发现的以下命令:
-
-* tyalpha
-
-* tybg
-
-* tycat
-
-* tyls
-
-* typop
-
-* tyq
-
-使用 `tybg [filename]` 命令来设置背景, 不带参数的 `tybg` 命令来移除背景。 运行 `typop [filename]` 来打开文件。 `tyls` 命令在图标视图中列出文件。 运行这些命令,加上 `-h` 选项来了解他们是干什么的。 即使有可读性的怪癖, Terminology 是快速、漂亮和实用的。
-
-### Tilda
-
-有几个优秀的下拉式终端模拟器, 包括 Guake 和 Yakuake。 [Tilda][18] (图 3) 是其中最简单和轻量级的一个。 打开 Tilda 后它保持打开, 你可以通过快捷键来显示和隐藏它。 Tilda 快捷键是o默认的, 你可以设置自己喜欢的快捷键。 它一直打开着的,随时准备工作, 但是直到你需要它的时候才会出现。
-
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-3_3.png)
-图 3: Tilda 是最简单和轻量级的一个终端模拟器。[许可才能使用][3]
-
-Tilda 有一个漂亮的选项组件, 包括默认的大小、位置、外观、绑定键、搜索条、鼠标图标, 以及标签条。 这些都被右键单击菜单控制。
- _学习更多关于 Linux 的知识可以通过 Linux 基金会 和 edX 的免费课程 ["Linux 介绍" ][9]。_
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2017/11/5-coolest-linux-terminal-emulators
-
-作者:[CARLA SCHRODER ][a]
-译者:[译者ID](https://github.com/cnobelw)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/cschroder
-[1]:https://www.linux.com/licenses/category/used-permission
-[2]:https://www.linux.com/licenses/category/used-permission
-[3]:https://www.linux.com/licenses/category/used-permission
-[4]:https://www.linux.com/licenses/category/used-permission
-[5]:https://www.linux.com/files/images/fig-1png-9
-[6]:https://www.linux.com/files/images/fig-2png-6
-[7]:https://www.linux.com/files/images/fig-3png-3
-[8]:https://www.linux.com/files/images/banner2png
-[9]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
-[10]:http://xiki.org/
-[11]:https://www.linux.com/learn/meet-xiki-revolutionary-command-shell-linux-and-mac-os-x
-[12]:http://xiki.org/screencasts/
-[13]:https://github.com/Swordfish90/cool-retro-term
-[14]:https://www.linux.com/learn/expert-tips-and-tricks-kate-and-konsole
-[15]:https://launchpad.net/~bugs-launchpad-net-falkensweb/+archive/ubuntu/cool-retro-term
-[16]:https://bugs.launchpad.net/sakura
-[17]:https://www.enlightenment.org/about-terminology
-[18]:https://github.com/lanoxx/tilda
diff --git a/translated/tech/20171120 Containers and Kubernetes Whats next.md b/translated/tech/20171120 Containers and Kubernetes Whats next.md
new file mode 100644
index 0000000000..5ed099c170
--- /dev/null
+++ b/translated/tech/20171120 Containers and Kubernetes Whats next.md
@@ -0,0 +1,80 @@
+容器技术和 K8s 的下一站
+============================================================
+### 想知道容器编排管理和 K8s 的最新展望么?来看看专家怎么说。
+
+![CIO_Big Data Decisions_2](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_Big%20Data%20Decisions_2.png?itok=Y5zMHxf8 "CIO_Big Data Decisions_2")
+
+如果你想对容器在未来的发展方向有一个整体把握,那么你一定要跟着钱走,看看钱都投在了哪里。当然了,有很多很多的钱正在投入容器的进一步发展。相关研究预计到 2020 年,容器技术相关的市场规模将达到 [27 亿美元][4]。而在 2016 年,容器相关技术的市场总额为 7.62 亿美元,不到 2020 年预计规模的三分之一。巨额投入的背后是一些显而易见的基本因素,包括容器化本身的迅速增长,以及随之并行发展的容器编排管理。随着容器被大面积推广和使用,容器编排管理也会理所当然地被推广应用起来。
+
+来自 [_The new stack_][5] 的调研数据表明,容器的推广使用是编排管理被推广的主要催化剂。根据调研参与者的反馈数据,在已经将容器技术用于生产环境的使用者里,有六成正在将 Kubernetes(K8s)编排管理广泛地应用在生产环境中,另外百分之十九的人员则表示他们已经处于部署 K8s 的初级阶段。在容器部署初期的使用者当中,虽然只有百分之五的人员表示已经在使用 K8s,但是百分之五十八的人员表示他们正在计划和准备使用 K8s。总而言之,容器和 Kubernetes 的关系就好比鸡和蛋一样,相辅相成、紧密关联。众多专家一致认为编排管理工具对容器的[长周期管理][6]以及其在市场中的发展有至关重要的作用。正如 [Cockroach 实验室][7] 的 Alex Robinson 所说,容器编排管理被更广泛地拓展和应用是一个总体的大趋势。毫无疑问,这是一个正在快速演变的领域,且未来潜力无穷。鉴于此,我们采访了 Robinson 和其他一些容器技术的实际使用者和推广者,从他们作为容器技术践行者的视角来展望一下容器编排以及 K8s 的下一步发展。
+
+### **容器编排将被主流接受**
+
+像任何重要技术的转型一样,我们就像是处在一个高崖之上一般,在经过了初期步履蹒跚的跋涉之后将要来到一望无际的广袤平原。广大的新天地和平实真切的应用需求将会让这种新技术在主流应用中被迅速推广,尤其是在大企业环境中。正如 Alex Robinson 说的那样,容器技术的淘金阶段已经过去,早期的技术革新创新正在减速,随之而来的则是市场对容器技术的稳定性和可用性的强烈需求。这意味着未来我们将不会再见到大量的新的编排管理系统的涌现,而是会看到容器技术方面更多的安全解决方案,更丰富的管理工具,以及基于目前主流容器编排系统的更多的新特性。
+
+### **更好的易用性**
+
+人们将在简化容器的部署方面下大功夫,因为容器部署的初期工作对很多公司和组织来说还是比较复杂的,尤其是容器的[长期管理维护][8]更是需要投入大量的精力。正如 [Codemill AB][9] 公司的 My Karlsson 所说,容器编排技术还是太复杂了,这导致很多使用者难以娴熟驾驭和充分利用容器编排的功能。很多容器技术的新用户都需要花费很多精力,走很多弯路,才能搭建小规模的,单个的,被隔离的容器系统。这种现象在那些没有针对容器技术设计和优化的应用中更为明显。在简化容器编排管理方面有很多优化可以做,这些优化和改造将会使容器技术更加具有可用性。
+
+### **在 hybrid cloud 以及 multi-cloud 技术方面会有更多侧重**
+
+随着容器和容器编排技术被越来越多的使用,更多的组织机构会选择扩展他们现有的容器技术的部署,从之前的把非重要系统部署在单一环境的使用情景逐渐过渡到更加[复杂的使用情景][10]。对很多公司来说,这意味着他们必须开始学会在 [hybrid cloud][11] 和 [multi-cloud][12] 的环境下,全局化地去管理那些容器化的应用和微服务。正如红帽 [Openshift 部门产品战略总监][14] [Brian Gracely][13] 所说,容器和 K8s 技术的使用使得我们成功地实现了混合云以及应用的可移植性。结合 Open Service Broker API 的使用,越来越多的结合私有云和公有云资源的新应用将会涌现出来。
+据 [CloudBees][15] 公司的高级工程师 Carlos Sanchez 分析,联合服务(Federation)将会得到极大推动,使一些诸如多地区部署和多云部署等的备受期待的新特性成为可能。
+
+**[ 想知道 CIO 们对 hybrid cloud 和 multi cloud 的战略构想么? 请参看我们的这条相关资源, **[**Hybrid Cloud: The IT leader's guide**][16]**. ]**
+
+### **平台和工具的持续整合及加强**
+
+对任何一种科技来说,持续的整合和加强从来都是大势所趋; 容器编排管理技术在这方面也不例外。来自 [Sumo Logic][17] 的首席分析师 Ben Newton 表示,随着容器化渐成主流,软件工程师们正在很少数的一些技术上做持续整合加固的工作,来满足他们的一些微应用的需求。容器和 K8s 将会毫无疑问的成为容器编排管理方面的主流平台,并轻松碾压其他的一些小众平台方案。因为 K8s 提供了一个相当清晰的可以摆脱各种特有云生态的途径,K8s 将被大量公司使用,逐渐形成一个不依赖于某个特定云服务的“中立云”(cloud-neutral)。
+
+### **K8s 的下一站**
+
+来自 [Alcide][18] 的 CTO 和联合创始人 Gadi Naor 表示,k8s 将会是一个有长期和远景发展的技术,虽然我们的社区正在大力推广和发展 k8s,k8s 仍有很长的路要走。
+专家们对[日益流行的 k8s 平台][19]也作出了以下一些预测:
+
+**_来自 Alcide 的 Gadi Naor 表示:_** “Kubernetes Operator 会持续演进并趋于成熟,直到在 K8s 上运行的应用可以完全自治。利用 [OpenTracing][20] 和诸如 [istio][21] 这样的 service mesh 技术,在 K8s 上部署和监控微应用将会带来很多新的可能性。”
+
+**_来自 Red Hat 的 Brian Gracely 表示:_** “k8s 所支持的应用的种类越来越多。今后在 k8s 上,你不仅可以运行传统的应用程序,还可以运行原生的云应用,大数据应用以及 HPC 或者基于 GPU 运算的应用程序,这将为灵活的架构设计带来无限可能。”
+
+**_来自 Sumo Logic 的 Ben Newton 表示:_** “随着 k8s 成为一个具有统治地位的平台,我预计更多的操作机制将会被统一化,尤其是 k8s 将和第三方管理和监控平台融合起来。”
+
+**_来自 CloudBees 的 Carlos Sanchez 表示:_** “在不久的将来我们就能看到不依赖于 Docker 而使用其他运行时环境的系统,这将会有助于消除任何可能的锁定(lock-in)情景。”(小编提示:[CRI-O][22] 就是一个可以借鉴的例子。)“而且我期待将来会出现更多的针对企业环境的存储服务新特性,包括数据快照以及在线的磁盘容量的扩展。”
+
+**_来自 Cockroach Labs 的 Alex Robinson 表示:_** “ k8s 社区正在讨论的一个重大发展议题就是加强对[有状态程序][23]的管理。目前在 k8s 平台下,实现状态管理仍然非常困难,除非你所使用的云服务商可以提供远程固定磁盘。现阶段也有很多人在多方面试图改善这个状况,包括在 k8s 平台内部以及在外部服务商一端做出的一些改进。”
+
+-------------------------------------------------------------------------------
+
+via: https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next
+
+作者:[Kevin Casey ][a]
+译者:[yunfengHe](https://github.com/yunfengHe)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://enterprisersproject.com/user/kevin-casey
+[1]:https://enterprisersproject.com/article/2017/11/kubernetes-numbers-10-compelling-stats
+[2]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity
+[3]:https://enterprisersproject.com/article/2017/11/5-kubernetes-success-tips-start-smart?sc_cid=70160000000h0aXAAQ
+[4]:https://451research.com/images/Marketing/press_releases/Application-container-market-will-reach-2-7bn-in-2020_final_graphic.pdf
+[5]:https://thenewstack.io/
+[6]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul
+[7]:https://www.cockroachlabs.com/
+[8]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul
+[9]:https://codemill.se/
+[10]:https://www.redhat.com/en/challenges/integration?intcmp=701f2000000tjyaAAA
+[11]:https://enterprisersproject.com/hybrid-cloud
+[12]:https://enterprisersproject.com/article/2017/7/multi-cloud-vs-hybrid-cloud-whats-difference
+[13]:https://enterprisersproject.com/user/brian-gracely
+[14]:https://www.redhat.com/en
+[15]:https://www.cloudbees.com/
+[16]:https://enterprisersproject.com/hybrid-cloud?sc_cid=70160000000h0aXAAQ
+[17]:https://www.sumologic.com/
+[18]:http://alcide.io/
+[19]:https://enterprisersproject.com/article/2017/10/how-explain-kubernetes-plain-english
+[20]:http://opentracing.io/
+[21]:https://istio.io/
+[22]:http://cri-o.io/
+[23]:https://opensource.com/article/17/2/stateful-applications
+[24]:https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next?rate=PBQHhF4zPRHcq2KybE1bQgMkS2bzmNzcW2RXSVItmw8
+[25]:https://enterprisersproject.com/user/kevin-casey
diff --git a/translated/tech/20171124 An introduction to the Django ORM.md b/translated/tech/20171124 An introduction to the Django ORM.md
new file mode 100644
index 0000000000..789640441b
--- /dev/null
+++ b/translated/tech/20171124 An introduction to the Django ORM.md
@@ -0,0 +1,196 @@
+Django ORM 简介
+============================================================
+
+### 学习如何使用 Python 的 Web 框架 Django 中的对象关系映射(ORM)与你的数据库交互,就像你使用 SQL 一样。
+
+
+![Getting to know the Django ORM](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-spider-frame-framework.png?itok=Rl2AG2Dc "Getting to know the Django ORM")
+图片来源:[Christian Holmér][10],Opensource.com 修改。[CC BY-SA 4.0][11]
+
+你可能听说过 [Django][12],它是一个被称为“为有截止日期的完美主义者而准备的框架”的 Python Web 框架。它是一匹[可爱的小矮马][13]。
+
+Django 的其中一个强大的功能是它的对象关系映射(ORM),它允许你像使用 SQL 一样去和你的数据库交互。事实上,Django 的 ORM 就是一种以 Python 风格的方式来创建 SQL、查询和操作数据库,并以 Python 风格获取结果的途径。我说的 _只是_ 一种途径,但实际上,它是一项非常聪明的工程,它利用了 Python 中一些比较复杂的部分,使开发者的工作更为轻松。
+
+在我们开始了解 ORM 是怎么工作的之前,我们需要一个可以操作的数据库。和任何一个关系型数据库一样,我们需要去定义一堆表和它们的关系(即它们相互之间联系起来的方式)。让我们使用我们熟悉的东西。比如说,我们需要建立一个有博客文章和作者的博客。每个作者有一个名字。一位作者可以有很多的博客文章。一篇博客文章可以有多位作者,并且有标题、内容和发布日期。
+
+在 Django 村(Django-ville)中,文章和作者这两个概念可以被称为博客应用。在这个语境中,一个应用是一套自包含的、描述我们博客行为和功能的模型和视图。只要用正确的方式打包,其它的 Django 项目就可以使用我们的博客应用。在我们的项目中,博客只是其中的一个应用,比如,我们还可以有一个论坛应用。但是,我们仍然只关注博客应用原有的范围。
+
+这是为这个教程事先准备的 `models.py`:
+
+```
+from django.db import models
+
+class Author(models.Model):
+ name = models.CharField(max_length=100)
+
+ def __str__(self):
+ return self.name
+
+class Post(models.Model):
+ title = models.CharField(max_length=100)
+ content = models.TextField()
+ published_date = models.DateTimeField(blank=True, null=True)
+ author = models.ManyToManyField(Author, related_name="posts")
+
+ def __str__(self):
+ return self.title
+```
+
+更多的 Python 资源
+
+* [Python 是什么?][1]
+
+* [最好的 Python IDEs][2]
+
+* [最好的 Python GUI 框架][3]
+
+* [最新的 Python 内容][4]
+
+* [更多的开发者资源][5]
+
+现在,它看上去似乎有点令人生畏,因此,我们把它分解来看。我们有两个模型:作者(`Author`)和文章(`Post`)。它们都有名字(`name`)或者标题(`title`)。文章有一个用于存放内容的大文本字段,以及一个用于记录发布时间和日期的 `DateTimeField`。文章还有一个 `ManyToManyField`,它把文章和作者关联在一起。
+
+大多数的教程都是从零开始(from scratch)的,但是在实践中你遇到的情况并非如此。实际上,你会得到一堆已存在的代码,就像上面的 `models.py` 一样,而你必须去搞清楚它们是做什么的。
+
+因此,现在你的任务是去进入到应用程序中去了解它。做到这一点有几种方法。你可以登入到 [Django admin][14],这是一个基于 Web 的后台,它列出了全部的应用以及操作它们的方式。我们稍后再谈它;现在我们感兴趣的是 ORM。
+
+我们可以在 Django 项目的主目录中运行 `python manage.py shell` 去访问 ORM。
+
+```
+/srv/web/django/ $ python manage.py shell
+
+Python 3.6.3 (default, Nov 9 2017, 15:58:30)
+[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.38)] on darwin
+Type "help", "copyright", "credits" or "license" for more information.
+(InteractiveConsole)
+>>>
+```
+
+这将带我们进入到交互式控制台。[`shell` 命令][15] 为我们做了很多设置,包括导入我们的设置和配置 Django 环境。虽然我们启动了 shell,但是,在我们导入它之前,我们并不能访问我们的博客模型。
+
+```
+>>> from blog.models import *
+```
+
+它导入了全部的博客模型,因此,我们可以玩我们的博客了。
+
+首先,我们列出所有的作者。
+
+```
+>>> Author.objects.all()
+```
+
+我们将从这个命令取得结果,它是一个 `QuerySet`,它列出了所有我们的作者对象。它不会充满我们的整个控制台,因为,如果有很多查询结果,Django 将自动截断输出结果。
+
+```
+>>> Author.objects.all()
+, ,
+ , '...(remaining elements truncated)...']
+```
+
+我们可以使用 `get` 代替 `all` 去检索单个作者。但是,我们需要一些更多的信息才能 `get` 到一条单独的记录。在关系型数据库中,表有一个主键,它唯一地标识了表中的每条记录,但是作者名并不唯一。许多人都 [重名][16],因此,它不是唯一约束的一个好的选择。解决这个问题的一个方法是使用一个序列(1、2、3……)或者一个通用唯一标识符(UUID)作为主键。但是,由于这些对人类来说并不好用,在这个例子中我们用 `name` 来操作我们的作者对象。
+
+```
+>>> Author.objects.get(name="VM (Vicky) Brasseur")
+
+```
+
+到现在为止,我们得到了一个可以交互的对象,而不再是一个 `QuerySet` 列表。我们现在可以与这个 Python 对象进行交互了,可以使用任意一个表列作为属性去查看这个对象。
+
+```
+>>> vmb = Author.objects.get(name="VM (Vicky) Brasseur")
+>>> vmb.name
+u'VM (Vicky) Brasseur'
+```
+
+然后,很酷的事情发生了。通常在关系型数据库中,如果我们希望展示其它表的信息,我们需要去写一个 `LEFT JOIN`,或者其它的表连接函数,并确保它们之间有匹配的外键。而 Django 可以为我们做到这些。
+
+在我们的模型中,由于作者写了很多的文章,因此,我们的作者对象可以检查它自己的文章。
+
+```
+>>> vmb.posts.all()
+QuerySet[,
+ ,
+ ,
+ '...(remaining elements truncated)...']
+```
+
+我们可以使用普通的 Python 式列表操作来处理 `QuerySet`。
+
+```
+>>> for post in vmb.posts.all():
+... print(post.title)
+...
+7 tips for nailing your job interview
+5 tips for getting the biggest bang for your cover letter buck
+Quit making these 10 common resume mistakes
+```
+
+要实现更复杂的查询,我们可以使用过滤器(filter)得到我们想要的内容。这就有点微妙了。在 SQL 中,你有一些选项,比如 `like`、`contains` 和其它的过滤对象。在 ORM 中这些事情也可以做到,但是是通过 _特别的_ 方式实现的:是通过使用一个隐式(而不是显式)定义的函数实现的。
+
+如果在我的 Python 脚本中调用了一个函数 `do_thing()`,我期望在某个地方有一个匹配 `def do_thing`。这是一个显式的函数定义。然而,在 ORM 中,你可以调用一个 _不显式定义的_ 函数。之前,我们使用 `name` 去匹配一个名字。但是,如果我们想做一个子串搜索,我们可以使用 `name__contains`。
+
+```
+>>> Author.objects.filter(name__contains="Vic")
+QuerySet[, ]
+```
+
+现在,关于双下划线(`__`)我有一个小小的提示。这些是 Python _特有的_。在 Python 的世界里,你可以看到如 `__main__` 或者 `__repr__`。这些有时被称为 `dunder methods`,是 “双下划线” 的缩写。这里仅有几个非字母数字字符可以被用于 Python 中的对象名字;下划线是其中的一个。这些在 ORM 中被用于不同的过滤关键字的显式分隔。在底层,字符串被这些下划线分割。并且这个标记是分开处理的。`name__contains` 被替换成 `attribute: name, filter: contains`。在其它编程语言中,你可以使用箭头代替,比如,在 PHP 中是 `name->contains`。不要被双下划线吓着你,正好相反,它们是 Python 的好帮手(并且如果你斜着看,你就会发现它看起来像一条小蛇,想去帮你写代码的小蟒蛇)。
+
+ORM 是非常强大,并且非常 Python 化的。不过,我前面提到的那个 Django 管理站点又是怎么回事呢?
+
+### [django-admin.png][6]
+
+![Django Admin](https://opensource.com/sites/default/files/u128651/django-admin.png "Django Admin")
+
+Django 的其中一个非常精彩的用户可访问特性是它的管理界面,如果你定义你的模型,你将看到一个非常好用的基于 web 的编辑门户,而且它是免费的。
+
+ORM,有多强大?
+
+### [django-admin-author.png][7]
+
+![Authors list in Django Admin](https://opensource.com/sites/default/files/u128651/django-admin-author.png "Authors list in Django Admin")
+
+说真的!仅仅凭借你为创建最初的模型所写的那一点代码,Django 就做出了一个基于 Web 的编辑门户,而且它使用的正是我们前面用过的那些原生函数。默认情况下,这个管理门户只有基本的功能,但你只需在模型中添加更多定义,就可以改变它的外观。例如,还记得前面的那些 `__str__` 方法吗?我们用它们来定义作者对象应该如何显示(在这个例子中,就是作者的名字)。再做一些工作,你就可以创建一个像完整的内容管理系统一样的界面,让你的用户轻松地编辑他们自己的内容(例如,为文章添加“已发布”标记对应的字段和过滤器)。
+
+如果你想了解更多内容,[Django Girls 教程][17] 中关于 [ORM][18] 的章节有详细的介绍。在 [Django 项目官网][19] 上也有丰富的文档。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Katie McLaughlin - Katie 在过去的这几年有许多不同的头衔,她以前是使用多种语言的一位软件开发人员,多种操作系统的系统管理员,和多个不同话题的演讲者。当她不改变 “世界” 的时候,她也去享受烹饪、挂毯艺术,和去研究各种应用程序栈怎么去处理 emoji。
+
+------------------------
+
+via: https://opensource.com/article/17/11/django-orm
+
+作者:[Katie McLaughlin][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/glasnt
+[1]:https://opensource.com/resources/python?intcmp=7016000000127cYAAQ
+[2]:https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ
+[3]:https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ
+[4]:https://opensource.com/tags/python?intcmp=7016000000127cYAAQ
+[5]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ
+[6]:https://opensource.com/file/377811
+[7]:https://opensource.com/file/377816
+[8]:https://opensource.com/article/17/11/django-orm?rate=iwO0q67yiUUPweMIMoyLbbYyhK5RTOOzEtyiNkJ0eBE
+[9]:https://opensource.com/user/41661/feed
+[10]:https://www.flickr.com/people/crsan/
+[11]:https://creativecommons.org/licenses/by-sa/4.0/
+[12]:https://www.djangoproject.com/
+[13]:http://www.djangopony.com/
+[14]:https://docs.djangoproject.com/en/1.11/ref/contrib/admin/
+[15]:https://docs.djangoproject.com/en/1.11/ref/django-admin/#shell
+[16]:https://2016.katieconf.xyz/
+[17]:https://djangogirls.org/
+[18]:https://tutorial.djangogirls.org/en/django_orm/
+[19]:https://docs.djangoproject.com/en/1.11/topics/db/
+[20]:https://opensource.com/users/glasnt
+[21]:https://opensource.com/users/glasnt
+[22]:https://opensource.com/article/17/11/django-orm#comments
diff --git a/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md b/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md
new file mode 100644
index 0000000000..e51c580da9
--- /dev/null
+++ b/translated/tech/20171124 Photon Could Be Your New Favorite Container OS.md
@@ -0,0 +1,147 @@
+Photon 也许能成为你最喜爱的容器操作系统
+============================================================
+
+![Photon OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon-linux.jpg?itok=jUFHPR_c "Photon OS")
+
+Photon OS 专注于容器,是一个非常出色的平台。 —— Jack Wallen
+
+容器在当下的火热,并不是没有原因的。正如[之前][13]讨论的,容器可以使您轻松快捷地将新的服务与应用部署到您的网络上,而且并不耗费太多的系统资源。比起专用硬件和虚拟机,容器都是更加划算的,除此之外,它们更容易更新与重用。
+
+更重要的是,容器喜欢 Linux(反之亦然)。不需要太多时间和麻烦,你就可以启动一台 Linux 服务器,运行 [Docker][14],然后部署容器。但是,哪种 Linux 发行版最适合部署容器呢?我们的选择很多。你可以使用标准的 Ubuntu 服务器平台(更容易安装 Docker 并部署容器),或者使用专门用于部署容器的更轻量级的发行版。
+
+[Photon][15] 就是这样的一个发行版。这个特殊的版本是由 [VMware][16] 于 2015 年创建的,它包含了 Docker 的守护进程,并可与容器框架(如 Mesos 和 Kubernetes)一起使用。Photon 经过优化可与 [VMware vSphere][17] 协同工作,而且可用于裸机、[Microsoft Azure][18]、[Google Compute Engine][19]、[Amazon Elastic Compute Cloud][20] 或者 [VirtualBox][21] 等。
+
+Photon 通过只安装 Docker 守护进程所必需的东西来保持它的轻量。而这样做的结果是,这个发行版的大小大约只有 300MB,而这正好足以运行 Linux。除此之外,Photon 的主要特点还有:
+
+* 内核调整为性能模式。
+
+* 内核根据[内核自防护项目][6](KSPP)进行了加固。
+
+* 所有安装的软件包都根据加固的安全标识来构建。
+
+* 操作系统在信任验证后启动。
+
+* Photon 的管理守护进程负责管理防火墙、网络、软件包,以及远程登录到 Photon 机器上的用户。
+
+* 支持持久卷。
+
+* [Project Lightwave][7] 整合。
+
+* 及时的安全补丁与更新。
+
+Photon可以通过[ISO][22],[OVA][23],[Amazon Machine Image][24],[Google Compute Engine image][25]和[Azure VHD][26]安装使用。现在我将向您展示如何使用ISO镜像在VirtualBox上安装Photon。整个安装过程大概需要五分钟,在最后您将有一台随时可以部署容器的虚拟机。
+
+### 创建虚拟机
+
+在部署第一台容器之前,您必须先创建一台虚拟机并安装Photon。为此,打开VirtualBox并点击“新建”按钮。跟着创建虚拟机向导进行配置(根据您的容器将需要的用途,为Photon提供必要的资源)。在创建好虚拟机后,您所需要做的第一件事就是更改配置。选择新建的虚拟机(在VirtualBox主窗口的左侧面板中),然后单击“设置”。在弹出的窗口中,点击“网络”(在左侧的导航中)。
+
+在“网络”窗口(图1)中,你需要在“连接”的下拉窗口中选择桥接。这可以确保您的Photon服务与您的网络相连。完成更改后,单击确定。
+
+### [photon_0.jpg][8]
+
+![change settings](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_0.jpg?itok=Q0yhOhsZ "change settings")
+图 1: 更改Photon在VirtualBox中的网络设置。[经许可使用][1]
+
+从左侧的导航选择您的 Photon 虚拟机,点击启动。系统会提示您去加载 ISO 镜像。当您完成之后,Photon 安装程序将会启动,并提示您按回车后开始安装。安装过程基于 ncurses(没有 GUI),但它非常简单。
+
+接下来(图2),系统会询问您是要最小化安装,完整安装还是安装OSTree服务器。我选择了完整安装。选择您所需要的任意选项,然后按回车继续。
+
+### [photon_1.jpg][9]
+
+![installation type](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_2.jpg?itok=QL1Rs-PH "Photon")
+图 2: 选择您的安装类型.[经许可使用][2]
+
+在下一个窗口,选择您要安装Photon的磁盘。由于我们将其安装在虚拟机,因此只有一块磁盘会被列出(图3)。选择“自动”按下回车。然后安装程序会让您输入(并验证)管理员密码。在这之后镜像开始安装在您的磁盘上并在不到5分钟的时间内结束。
+
+### [photon_2.jpg][10]
+
+![Photon](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_1.jpg?itok=OdnMVpaA "installation type")
+图 3: 选择安装Photon的硬盘.[经许可使用][3]
+
+安装完成后,重启虚拟机并使用安装时创建的用户root和它的密码登录。一切就绪,你准备好开始工作了。
+
+在开始使用Docker之前,您需要更新一下Photon。Photon使用 _yum_ 软件包管理器,因此在以root用户登录后输入命令 _yum update_。如果有任何可用更新,则会询问您是否确认(图4)。
+
+### [photon_3.jpg][11]
+
+![Updating](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_3.jpg?itok=vjqrspE2 "Updating")
+图 4: 更新 Photon.[经许可使用][4]
+
+### 用法
+
+正如我所说的,Photon提供了部署容器甚至创建Kubernetes集群所需要的所有包。但是,在使用之前还要做一些事情。首先要启动Docker守护进程。为此,执行以下命令:
+
+```
+systemctl start docker
+
+systemctl enable docker
+```
+
+现在我们需要创建一个标准用户,这样我们就可以不用 root 身份去运行 docker 命令。为此,执行以下命令:
+
+```
+useradd -m USERNAME
+
+passwd USERNAME
+```
+
+其中USERNAME是我们新增的用户的名称。
+
+接下来,我们需要将这个新用户添加到 _docker_ 组,执行命令:
+
+```
+usermod -a -G docker USERNAME
+```
+
+其中USERNAME是刚刚创建的用户的名称。
+
+注销 root 用户并切换为新增的用户。现在,您已经可以不必使用 _sudo_ 命令或者切换到 root 用户来使用 _docker_ 命令了。从 Docker Hub 中拉取一个镜像,开始部署容器吧。
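+
+下面是一个简单的示例,演示如何拉取镜像并运行第一个容器(这里以 nginx 镜像为例,镜像和容器名仅作演示):
+
+```
+# 从 Docker Hub 拉取 nginx 镜像(示例镜像,可换成任何你需要的镜像)
+docker pull nginx
+# 以后台方式运行一个容器,并把主机的 80 端口映射到容器的 80 端口
+docker run -d --name my-nginx -p 80:80 nginx
+# 查看正在运行的容器
+docker ps
+```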
+
+### 一个优秀的容器平台
+
+在专注于容器方面,Photon毫无疑问是一个出色的平台。请注意,Photon是一个开源项目,因此没有任何付费支持。如果您对Photon有任何的问题,请移步Photon项目的Github下的[Issues][27],那里可以供您阅读相关问题,或者提交您的问题。如果您对Photon感兴趣,您也可以在项目的官方[Github][28]中找到源码。
+
+尝试一下Photon吧,看看它是否能够使得Docker容器和Kubernetes集群的部署更加容易。
+
+欲了解Linux的更多信息,可以通过学习Linux基金会和edX的免费课程,[“Linux 入门”][29]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/11/photon-could-be-your-new-favorite-container-os
+
+作者:[JACK WALLEN][a]
+译者:[KeyLD](https://github.com/KeyLd)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/jlwallen
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/used-permission
+[3]:https://www.linux.com/licenses/category/used-permission
+[4]:https://www.linux.com/licenses/category/used-permission
+[5]:https://www.linux.com/licenses/category/creative-commons-zero
+[6]:https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project
+[7]:http://vmware.github.io/lightwave/
+[8]:https://www.linux.com/files/images/photon0jpg
+[9]:https://www.linux.com/files/images/photon1jpg
+[10]:https://www.linux.com/files/images/photon2jpg
+[11]:https://www.linux.com/files/images/photon3jpg
+[12]:https://www.linux.com/files/images/photon-linuxjpg
+[13]:https://www.linux.com/learn/intro-to-linux/2017/11/how-install-and-use-docker-linux
+[14]:https://www.docker.com/
+[15]:https://vmware.github.io/photon/
+[16]:https://www.vmware.com/
+[17]:https://www.vmware.com/products/vsphere.html
+[18]:https://azure.microsoft.com/
+[19]:https://cloud.google.com/compute/
+[20]:https://aws.amazon.com/ec2/
+[21]:https://www.virtualbox.org/
+[22]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
+[23]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
+[24]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
+[25]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
+[26]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
+[27]:https://github.com/vmware/photon/issues
+[28]:https://github.com/vmware/photon
+[29]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/translated/tech/20171127 Migrating to Linux Disks Files and Filesystems.md b/translated/tech/20171127 Migrating to Linux Disks Files and Filesystems.md
new file mode 100644
index 0000000000..438b27a222
--- /dev/null
+++ b/translated/tech/20171127 Migrating to Linux Disks Files and Filesystems.md
@@ -0,0 +1,135 @@
+迁移到 Linux:磁盘、文件、和文件系统
+============================================================
+
+![Migrating to LInux ](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/butterflies-807551_1920.jpg?itok=pxTxwvFO "Migrating to LInux ")
+在你的主要桌面上安装和使用 Linux 将帮你快速熟悉你需要的工具和方法。[Creative Commons Zero][1]Pixabay
+
+这是我们的迁移到 Linux 系列文章的第二篇。如果你错过了第一篇,[你可以在这里找到它][4]。就像以前提到过的,想迁移到 Linux 有几个原因。你可能是因为工作中需要在 Linux 上开发和使用代码,或者你只是想去尝试一下新事物。
+
+不论是什么原因,拥有一个 Linux 的主桌面,将帮助你快速熟悉你需要的工具和方法。在这篇文章中,我将介绍 Linux 的文件、文件系统和磁盘。
+
+### 我的 C:\ 在哪里?
+
+如果你是一个 Mac 用户,Linux 对你来说应该非常熟悉,Mac 使用的文件、文件系统和磁盘与 Linux 是非常接近的。另一方面,如果你的使用经验主要来自 Windows,那么在 Linux 下访问磁盘可能看上去会有点困惑。一般来说,Windows 给每个磁盘分配一个盘符(像 C:\)。而 Linux 并不是这样,它将一切都放在一个单一的文件和目录层次结构中。
+
+让我们看一个示例。假设你的计算机使用了一个主硬盘、一个带有 _Books_ 和 _Videos_ 目录的 CD-ROM,和一个带有 _Transfer_ 目录的 U 盘。在 Windows 下,你看到的是下面的样子:
+
+```
+C:\ [Hard drive]
+
+├ System
+
+├ System32
+
+├ Program Files
+
+├ Program Files (x86)
+
+└
+
+D:\ [CD-ROM]
+
+├ Books
+
+└ Videos
+
+E:\ [USB thumb drive]
+
+└ Transfer
+```
+
+而一个典型的 Linux 系统却是这样:
+
+```
+/ (the top most directory, called the root directory) [Hard drive]
+
+├ bin
+
+├ etc
+
+├ lib
+
+├ sbin
+
+├ usr
+
+├
+
+└ media
+
+ └
+
+ ├ cdrom [CD-ROM]
+
+ │ ├ Books
+
+ │ └ Videos
+
+ └ Kingme_USB [USB thumb drive]
+
+ └ Transfer
+```
+
+如果你使用的是图形化环境,通常,Linux 中的文件管理器会把 CD-ROM 和 USB 便携式驱动器显示成看起来像驱动器的图标,因此,你根本就无需知道介质所在的目录。
+
+### 文件系统
+
+Linux 称这些东西为文件系统。一个文件系统是在介质(比如,硬盘)上保持跟踪所有的文件和目录的一组结构。如果没有文件系统,我们存储在硬盘上的信息就会混乱,我们就不知道哪个块属于哪个文件。你可能听到过一些名字,比如,Ext4、XFS、和 Btrfs。这些都是 Linux 文件系统。
+
+每个保存有文件和目录的介质都有一个文件系统在上面。不同的介质类型可能使用了为它优化过的特定的文件系统。比如,CD-ROMs 使用 ISO9660 或者 UDF 文件系统类型。USB 便携式驱动器一般使用 FAT32,以便于它们可以很容易去与其它计算机系统共享。
+
+Windows 也使用文件系统。不过,我们不过多的讨论它。例如,当你插入一个 CD-ROM,Windows 将读取 ISO9660 文件系统结构,分配一个盘符给它,然后,在盘符(比如,D:\)下显示文件和目录。当然,如果你深究细节,从技术角度说,Windows 是分配一个盘符给一个文件系统,而不是整个驱动器。
+
+使用同样的例子,Linux 也读取 ISO9660 文件系统结构,但它不分配盘符,而是把这个文件系统附加到一个目录上(这个过程被称为挂载)。Linux 随后将在所附加的目录(比如是 _/media//cdrom_ )下显示 CD-ROM 上的文件和目录。
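+
+作为补充,下面是一个手动挂载和卸载 CD-ROM 的简单示例(设备名和挂载点因系统而异,这里仅作示意):
+
+```
+# 创建一个用作挂载点的目录(路径仅为示例)
+sudo mkdir -p /mnt/cdrom
+# 将 CD-ROM 上的文件系统挂载到该目录
+sudo mount /dev/cdrom /mnt/cdrom
+# 现在可以像访问普通目录一样查看其中的文件
+ls /mnt/cdrom
+# 用完之后卸载
+sudo umount /mnt/cdrom
+```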
+
+因此,在 Linux 上回答 “我的 C:\ 在哪里?” 这个问题,答案是,这里没有 C:\,它们工作方式不一样。
+
+### 文件
+
+Windows 在它的文件系统中存在文件和目录(也被称为文件夹)。但是,Linux 也让你将其它的东西放到文件系统中。这些其它类型的东西是文件系统的原生的对象,并且,它们和普通文件实际上是不同的。除普通文件和目录之外,Linux 还允许你去创建和使用硬链接、符号链接、命名管道、设备节点、和套接字。在这里,我们不展开讨论所有的文件系统对象的类型,但是,这里有几种经常使用到的。
+
+硬链接用于为一个文件创建一个或者多个别名。每个别名的名字不同,但都指向磁盘上相同的内容。如果你在其中一个文件名下编辑文件,这个改变也会同时出现在其它的文件名上。例如,你有一个 _MyResume_2017.doc_,它还有一个被称为 _JaneDoeResume.doc_ 的硬链接。(注意,硬链接是在命令行下使用 _ln_ 命令创建的。)你可以找到并编辑 _MyResume_2017.doc_,然后再打开 _JaneDoeResume.doc_,你会发现它也是最新的,它包含了你所做的全部更新。
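+
+下面是一个创建硬链接的简单示例,文件名沿用上文的例子:
+
+```
+# 为 MyResume_2017.doc 创建一个名为 JaneDoeResume.doc 的硬链接
+ln MyResume_2017.doc JaneDoeResume.doc
+# 两个名字指向磁盘上同一份数据,ls -l 中的链接计数会显示为 2
+ls -l MyResume_2017.doc JaneDoeResume.doc
+```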
+
+符号链接有点像 Windows 中的快捷方式。这种文件系统条目中保存的是一个指向其它文件或者目录的路径。在很多方面,它们的工作方式和硬链接很相似,都可以创建一个指向其它文件的别名。但是,符号链接既可以给文件也可以给目录创建别名,并且符号链接可以跨越不同介质上的不同文件系统,而硬链接做不到这些。(注意,你可以使用带 _-s_ 选项的 _ln_ 命令去创建一个符号链接。)
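+
+对应地,下面是一个创建符号链接的示例(目录路径仅作演示):
+
+```
+# 为一个目录创建符号链接,这是硬链接做不到的
+ln -s /home/stan/Documents/Projects ~/Projects
+# ls -l 会显示这个链接以及它所指向的目标
+ls -l ~/Projects
+```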
+
+### 权限
+
+另一个很大的区别是 Windows 和 Linux 在文件系统对象(文件、目录及其它)上所使用的权限机制。Windows 在文件和目录上实现了一套非常复杂的权限。例如,用户和用户组可以有读取、写入、运行、修改等权限。用户和用户组可以被授权访问一个目录中除了某些例外之外的所有内容,也可以被禁止访问其中除了某些例外之外的所有内容。
+
+然而,大多数使用 Windows 的人从来不会去设置特定的权限;因此,当他们发现 Linux 上默认就使用并强制执行一套权限时,往往会感到非常惊讶!Linux 还可以通过使用 SELinux 或者 AppArmor 来强制执行一套更复杂的权限。但是,大多数 Linux 发行版安装后都只使用内置的默认权限。
+
+在默认的权限中,文件系统中的每个条目都带有三套权限:分别针对文件的所有者、文件所属的组,以及其他所有人,这些权限控制着能否读取、写入和运行。权限的检查有一个先后层次:首先,它检查这个(登入的)用户是否是该文件的所有者,如果是,就使用所有者的那套权限;如果不是,再检查这个用户是否在文件所属的组中,如果是,就使用组的那套权限;如果还不是,就使用为其他人设置的那套权限。所以虽然设置了三套权限,但对某一个用户来说,实际生效的只有其中的一套。
+
+如果你使用命令行,你输入 `ls -l`,你可以看到如下所表示的权限:
+
+```
+rwxrw-r-- 1 stan dndgrp 25 Oct 33rd 25:01 rolldice.sh
+```
+
+最前面的字母,`rwxrw-r--`,展示了权限。在这个例子中,所有者(stan)可以读取、写入、和运行这个文件(前面的三个字母,rwx);dndgrp 组的成员可以读取和写入这个文件,但是不能运行(第二组的三个字母,rw-);其它人仅可以读取这个文件(最后的三个字母,r--)。
+
+(注意,在 Windows 中,要生成一个可运行的脚本,你需要给文件一个特定的扩展名,比如 .bat;而在 Linux 中,扩展名对操作系统来说没有任何意义,你需要做的是给这个文件设置可运行的权限。)
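+
+下面是一个给脚本加上可运行权限的示例(脚本名沿用上文的 rolldice.sh):
+
+```
+# 给文件所有者加上运行权限
+chmod u+x rolldice.sh
+# 或者给所有者、组和其他人都加上运行权限
+chmod +x rolldice.sh
+# 之后就可以直接运行它了
+./rolldice.sh
+```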
+
+如果你收到一个 _permission denied_ 错误,可能是你尝试去运行了一个需要管理员权限的程序或者命令,或者你尝试去访问一个你的帐户没有访问权限的文件。如果你尝试去做一些需要管理员权限的事情,你必须切换登入到一个被称为 _root_ 的用户帐户,或者使用一个命令行下被称为 _sudo_ 的辅助工具,它可以临时允许你以 _root_ 的权限运行命令。当然,_sudo_ 工具也会要求你输入密码,以确保你真的有权限这么做。
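+
+下面是一个使用 _sudo_ 的简单示例(以读取一个只有 root 能访问的文件为例):
+
+```
+# 普通用户直接读取受保护的文件会得到 permission denied
+cat /etc/shadow
+# 在命令前加上 sudo,以 root 权限临时运行同一条命令
+sudo cat /etc/shadow
+```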
+
+### 硬盘文件系统
+
+Windows 主要使用一个被称为 `NTFS` 的硬盘文件系统。在 Linux 上,你也可以选一个你希望去使用的硬盘文件系统。不同的文件系统类型呈现不同的特性和不同的性能特征。主要的原生 Linux 的文件系统,现在使用的是 Ext4。但是,在安装 Linux 的时候,你可以有丰富的文件系统类型可供选择,比如,Ext3(Ext4 的前任)、XFS、Btrfs、UBIFS(用于嵌入式系统)、等等。如果你不确定要使用哪一个,Ext4 是一个很好的选择。
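+
+如果你想看看当前系统上各个分区使用的是哪种文件系统,可以用下面的命令(输出内容因机器而异,仅作示意):
+
+```
+# 列出块设备及其文件系统类型
+lsblk -f
+# 或者查看已挂载文件系统的类型和使用情况
+df -T
+```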
+
+ _想学习更多的 Linux 知识,可以参加来自 Linux 基金会和 edX 的免费课程 [“Linux 入门”][2]。_
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
+
+作者:[JOHN BONESIO][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/johnbonesio
+[1]:https://www.linux.com/licenses/category/creative-commons-zero
+[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[3]:https://www.linux.com/files/images/butterflies-8075511920jpg
+[4]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
diff --git a/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md b/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md
new file mode 100644
index 0000000000..9f905bd496
--- /dev/null
+++ b/translated/tech/20171130 Translate Shell – A Tool To Use Google Translate From Command Line In Linux.md
@@ -0,0 +1,400 @@
+Translate Shell: 一款在 Linux 命令行中使用 Google Translate 的工具
+============================================================
+
+我对 CLI 应用非常感兴趣,因此热衷于使用并分享 CLI 应用。我之所以更喜欢 CLI,很大程度上是因为我在大多数时候使用的都是字符界面(黑屏终端),已经习惯了使用 CLI 应用而不是 GUI 应用。
+
+我写过很多关于 CLI 应用的文章。最近我发现了一些 Google 的 CLI 工具,像 “Google Translator”、“Google Calendar” 和 “Google Contacts”。这里,我想给大家分享一下其中之一。
+
+今天我们要介绍的是 “Google Translator” 工具。由于我的母语是泰米尔语,我一天之内要用很多次谷歌翻译来理解词语的意思。
+
+`Google translate` 为其他语系的人们所广泛使用。
+
+### 什么是 Translate Shell
+
+[Translate Shell][2](之前叫做 Google Translate CLI)是一款借助 `Google Translate`(默认)、`Bing Translator`、`Yandex.Translate` 以及 `Apertium` 来进行翻译的命令行翻译器。
+它让你可以在终端里访问这些翻译引擎。`Translate Shell` 在大多数 Linux 发行版中都能使用。
+
+### 如何安装 Translate Shell
+
+有三种方法安装 `Translate Shell`。
+
+* 下载自包含的可执行文件
+
+* 手工安装
+
+* 通过包管理器安装
+
+#### 方法-1 : 下载自包含的可执行文件
+
+下载自包含的可执行文件,并将其放到 `/usr/bin` 目录中。
+
+```shell
+$ wget git.io/trans
+$ chmod +x ./trans
+$ sudo mv trans /usr/bin/
+```
+
+#### 方法-2 : 手工安装
+
+克隆 `Translate Shell` 的 GitHub 仓库,然后手工编译安装。
+
+```shell
+$ git clone https://github.com/soimort/translate-shell && cd translate-shell
+$ make
+$ sudo make install
+```
+
+#### 方法-3 : 通过包管理器安装
+
+有些发行版的官方仓库中包含了 `Translate Shell`,可以通过包管理器来安装。
+
+对于 Debian/Ubuntu, 使用 [APT-GET Command][3] 或者 [APT Command][4]来安装。
+
+```shell
+$ sudo apt-get install translate-shell
+```
+
+对于 Fedora, 使用 [DNF Command][5] 来安装。
+
+```shell
+$ sudo dnf install translate-shell
+```
+
+对于基于 Arch Linux 的系统, 使用 [Yaourt Command][6] 或 [Packer Command][7] 来从 AUR 仓库中安装。
+
+```shell
+$ yaourt -S translate-shell
+or
+$ packer -S translate-shell
+```
+
+### 如何使用 Translate Shell
+
+安装好后,打开终端并输入下面的命令。`Google Translate` 会自动探测源文本是哪种语言,并且在默认情况下将之翻译成你的 `locale` 所对应的语言。
+
+```
+$ trans [Words]
+```
+
+下面我将泰米尔语中的单词 “நன்றி” (Nanri) 翻译成英语。 这个单词的意思是感谢别人。
+
+```
+$ trans நன்றி
+நன்றி
+(Naṉṟi)
+
+Thanks
+
+Definitions of நன்றி
+[ தமிழ் -> English ]
+
+noun
+ gratitude
+ நன்றி
+ thanks
+ நன்றி
+
+நன்றி
+ Thanks
+```
+
+使用下面命令也能将英语翻译成泰米尔语。
+
+```
+$ trans :ta thanks
+thanks
+/THaNGks/
+
+நன்றி
+(Naṉṟi)
+
+Definitions of thanks
+[ English -> தமிழ் ]
+
+noun
+ நன்றி
+ gratitude, thanks
+
+thanks
+ நன்றி
+```
+
+要将一个单词翻译到多个语种可以使用下面命令(本例中, 我将单词翻译成泰米尔语以及印地语)。
+
+```
+$ trans :ta+hi thanks
+thanks
+/THaNGks/
+
+நன்றி
+(Naṉṟi)
+
+Definitions of thanks
+[ English -> தமிழ் ]
+
+noun
+ நன்றி
+ gratitude, thanks
+
+thanks
+ நன்றி
+
+thanks
+/THaNGks/
+
+धन्यवाद
+(dhanyavaad)
+
+Definitions of thanks
+[ English -> हिन्दी ]
+
+noun
+ धन्यवाद
+ thanks, thank, gratitude, thankfulness, felicitation
+
+thanks
+ धन्यवाद, शुक्रिया
+```
+
+使用下面的命令可以将多个单词当成一个参数(句子)来进行翻译(只需要把句子用引号括起来作为一个参数就行了)。
+
+```
+$ trans :ta "what is going on your life?"
+what is going on your life?
+
+உங்கள் வாழ்க்கையில் என்ன நடக்கிறது?
+(Uṅkaḷ vāḻkkaiyil eṉṉa naṭakkiṟatu?)
+
+Translations of what is going on your life?
+[ English -> தமிழ் ]
+
+what is going on your life?
+ உங்கள் வாழ்க்கையில் என்ன நடக்கிறது?
+```
+
+下面命令独立地翻译各个单词。
+
+```
+$ trans :ta curios happy
+curios
+
+ஆர்வம்
+(Ārvam)
+
+Translations of curios
+[ Română -> தமிழ் ]
+
+curios
+ ஆர்வம், அறிவாளிகள், ஆர்வமுள்ள, அறிய, ஆர்வமாக
+happy
+/ˈhapē/
+
+சந்தோஷமாக
+(Cantōṣamāka)
+
+Definitions of happy
+[ English -> தமிழ் ]
+
+ மகிழ்ச்சியான
+ happy, convivial, debonair, gay
+ திருப்தி உடைய
+ happy
+
+adjective
+ இன்பமான
+ happy
+
+happy
+ சந்தோஷமாக, மகிழ்ச்சி, இனிய, சந்தோஷமா
+```
+
+简洁模式:默认情况下,`Translate Shell` 会尽可能多地显示翻译信息。如果你希望只显示简要信息,只需要加上 `-b` 选项。
+
+```
+$ trans -b :ta thanks
+நன்றி
+```
+
+字典模式:加上 `-d` 选项可以把 `Translate Shell` 当成字典来用。
+
+```
+$ trans -d :en thanks
+thanks
+/THaNGks/
+
+Synonyms
+ noun
+ - gratitude, appreciation, acknowledgment, recognition, credit
+
+ exclamation
+ - thank you, many thanks, thanks very much, thanks a lot, thank you kindly, much obliged, much appreciated, bless you, thanks a million
+
+Examples
+ - In short, thanks for everything that makes this city great this Thanksgiving.
+
+ - many thanks
+
+ - There were no thanks in the letter from him, just complaints and accusations.
+
+ - It is a joyful celebration in which Bolivians give thanks for their freedom as a nation.
+
+ - festivals were held to give thanks for the harvest
+
+ - The collection, as usual, received a great response and thanks is extended to all who subscribed.
+
+ - It would be easy to dwell on the animals that Tasmania has lost, but I prefer to give thanks for what remains.
+
+ - thanks for being so helpful
+
+ - It came back on about half an hour earlier than predicted, so I suppose I can give thanks for that.
+
+ - Many thanks for the reply but as much as I tried to follow your advice, it's been a bad week.
+
+ - To them and to those who have supported the office I extend my grateful thanks .
+
+ - We can give thanks and words of appreciation to others for their kind deeds done to us.
+
+ - Adam, thanks for taking time out of your very busy schedule to be with us tonight.
+
+ - a letter of thanks
+
+ - Thank you very much for wanting to go on reading, and thanks for your understanding.
+
+ - Gerry has received a letter of thanks from the charity for his part in helping to raise this much needed cash.
+
+ - So thanks for your reply to that guy who seemed to have a chip on his shoulder about it.
+
+ - Suzanne, thanks for being so supportive with your comments on my blog.
+
+ - She has never once acknowledged my thanks , or existence for that matter.
+
+ - My grateful thanks go to the funders who made it possible for me to travel.
+
+ - festivals were held to give thanks for the harvest
+
+ - All you secretaries who made it this far into the article… thanks for your patience.
+
+ - So, even though I don't think the photos are that good, thanks for the compliments!
+
+ - And thanks for warning us that your secret service requires a motorcade of more than 35 cars.
+
+ - Many thanks for your advice, which as you can see, I have passed on to our readers.
+
+ - Tom Ryan was given a bottle of wine as a thanks for his active involvement in the twinning project.
+
+ - Mr Hill insists he has received no recent complaints and has even been sent a letter of thanks from the forum.
+
+ - Hundreds turned out to pay tribute to a beloved former headteacher at a memorial service to give thanks for her life.
+
+ - Again, thanks for a well written and much deserved tribute to our good friend George.
+
+ - I appreciate your doing so, and thanks also for the compliments about the photos!
+
+See also
+ Thanks!, thank, many thanks, thanks to, thanks to you, special thanks, give thanks, thousand thanks, Many thanks!, render thanks, heartfelt thanks, thanks to this
+```
+
+可以按照下面的格式,使用 `Translate Shell` 来翻译文件。
+
+```shell
+$ trans :ta file:///home/magi/gtrans.txt
+உங்கள் வாழ்க்கையில் என்ன நடக்கிறது?
+```
+
+下面的命令可以让 `Translate Shell` 进入交互模式。在进入交互模式之前,你需要明确指定源语言和目标语言。本例中,我将英文单词翻译成泰米尔语。
+
+```
+$ trans -shell en:ta thanks
+Translate Shell
+(:q to quit)
+thanks
+/THaNGks/
+
+நன்றி
+(Naṉṟi)
+
+Definitions of thanks
+[ English -> தமிழ் ]
+
+noun
+ நன்றி
+ gratitude, thanks
+
+thanks
+ நன்றி
+```
+
+想知道语言代码,可以执行下面的命令。
+
+```shell
+$ trans -R
+```
+或者
+```shell
+$ trans -T
+┌───────────────────────┬───────────────────────┬───────────────────────┐
+│ Afrikaans - af │ Hindi - hi │ Punjabi - pa │
+│ Albanian - sq │ Hmong - hmn │ Querétaro Otomi- otq │
+│ Amharic - am │ Hmong Daw - mww │ Romanian - ro │
+│ Arabic - ar │ Hungarian - hu │ Russian - ru │
+│ Armenian - hy │ Icelandic - is │ Samoan - sm │
+│ Azerbaijani - az │ Igbo - ig │ Scots Gaelic - gd │
+│ Basque - eu │ Indonesian - id │ Serbian (Cyr...-sr-Cyrl
+│ Belarusian - be │ Irish - ga │ Serbian (Latin)-sr-Latn
+│ Bengali - bn │ Italian - it │ Sesotho - st │
+│ Bosnian - bs │ Japanese - ja │ Shona - sn │
+│ Bulgarian - bg │ Javanese - jv │ Sindhi - sd │
+│ Cantonese - yue │ Kannada - kn │ Sinhala - si │
+│ Catalan - ca │ Kazakh - kk │ Slovak - sk │
+│ Cebuano - ceb │ Khmer - km │ Slovenian - sl │
+│ Chichewa - ny │ Klingon - tlh │ Somali - so │
+│ Chinese Simp...- zh-CN│ Klingon (pIqaD)tlh-Qaak Spanish - es │
+│ Chinese Trad...- zh-TW│ Korean - ko │ Sundanese - su │
+│ Corsican - co │ Kurdish - ku │ Swahili - sw │
+│ Croatian - hr │ Kyrgyz - ky │ Swedish - sv │
+│ Czech - cs │ Lao - lo │ Tahitian - ty │
+│ Danish - da │ Latin - la │ Tajik - tg │
+│ Dutch - nl │ Latvian - lv │ Tamil - ta │
+│ English - en │ Lithuanian - lt │ Tatar - tt │
+│ Esperanto - eo │ Luxembourgish - lb │ Telugu - te │
+│ Estonian - et │ Macedonian - mk │ Thai - th │
+│ Fijian - fj │ Malagasy - mg │ Tongan - to │
+│ Filipino - tl │ Malay - ms │ Turkish - tr │
+│ Finnish - fi │ Malayalam - ml │ Udmurt - udm │
+│ French - fr │ Maltese - mt │ Ukrainian - uk │
+│ Frisian - fy │ Maori - mi │ Urdu - ur │
+│ Galician - gl │ Marathi - mr │ Uzbek - uz │
+│ Georgian - ka │ Mongolian - mn │ Vietnamese - vi │
+│ German - de │ Myanmar - my │ Welsh - cy │
+│ Greek - el │ Nepali - ne │ Xhosa - xh │
+│ Gujarati - gu │ Norwegian - no │ Yiddish - yi │
+│ Haitian Creole - ht │ Pashto - ps │ Yoruba - yo │
+│ Hausa - ha │ Persian - fa │ Yucatec Maya - yua │
+│ Hawaiian - haw │ Polish - pl │ Zulu - zu │
+│ Hebrew - he │ Portuguese - pt │ │
+└───────────────────────┴───────────────────────┴───────────────────────┘
+```
+
+想了解更多选项的内容,可以查看 `man` 页。
+
+```shell
+$ man trans
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/translate-shell-a-tool-to-use-google-translate-from-command-line-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/magesh/
+[2]:https://github.com/soimort/translate-shell
+[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[5]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[6]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
+[7]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/